diff --git a/.gitattributes b/.gitattributes index c911416bfb2ae98246247a87a689a664de6bb9f2..0ca48512d45a9923ee1696f7852bba8ea0172028 100644 --- a/.gitattributes +++ b/.gitattributes @@ -236,3 +236,4 @@ data_all_eng_slimpj/shuffled/split/split_finalaa/part-16.finalaa filter=lfs diff data_all_eng_slimpj/shuffled/split/split_finalaa/part-12.finalaa filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalaa/part-03.finalaa filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalaa/part-00.finalaa filter=lfs diff=lfs merge=lfs -text +data_all_eng_slimpj/shuffled/split/split_finalaa/part-18.finalaa filter=lfs diff=lfs merge=lfs -text diff --git a/data_all_eng_slimpj/shuffled/split/split_finalaa/part-18.finalaa b/data_all_eng_slimpj/shuffled/split/split_finalaa/part-18.finalaa new file mode 100644 index 0000000000000000000000000000000000000000..56903ea73325ed836b216f74fa2c2e5be8ba6dc2 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split/split_finalaa/part-18.finalaa @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22071a12a5f698989f63281634e24da9b352a237f04b23dd0244eb4866c765cb +size 12576665018 diff --git a/data_all_eng_slimpj/shuffled/split2/finalzxak b/data_all_eng_slimpj/shuffled/split2/finalzxak new file mode 100644 index 0000000000000000000000000000000000000000..86fca5222ee220a81ada78c507a998880592404f --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzxak @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe first release of cosmological data from the \\satellite{Planck}\\\nsatellite \\citep{Planck-R1-I} and the final analysis of the\n\\satellite{Wilkinson Microwave Anisotropy Probe} (\\satellite{WMAP})\n\\citep{WMAP9-results} confirmed that the inflationary Lambda cold dark\nmatter ($\\Lambda$CDM) model provides an excellent fit to the angular temperature power\nspectrum for multipoles ranging from the quadrupole ($\\ell = 2$) up to\n$\\ell = 2500$. 
The effect of gravitational lensing of the cosmic microwave\nbackground (CMB) has been detected with a very high statistical\nsignificance ($25 \\sigma$) \\citep{Planck-R1-XVII} and breaks some parameter\ndegeneracies without reference to non-CMB observations. Most of the\nstatistical power in the \\satellite{Planck}\\ analysis comes from high-$\\ell$\nmultipoles, thus it may not come as a surprise that the best-fitting model\ntraces the high-$\\ell$ data much better than those at low-$\\ell$, where a\nlack of angular power (in the range $\\ell = 2$ to $32$) compared to the\nbest-fitting model is found at the $99$ per cent C.L.\\ \\citep{Planck-R1-XV}.\nNevertheless, it is quite remarkable that none of the models invoking\nadditional, physically well motivated parameters, such as the sum of\nneutrino masses, the number of effective relativistic degrees of freedom,\nor a running of the spectral index, can give rise to a significant\nimprovement of the fit \\citep{Planck-R1-XVI}. These findings indicate that\nsome special attention should be devoted to the largest angular scales,\nespecially as they potentially probe different physics than the small angular\nscales.\n\nSeveral anomalies at large angular scales discussed in the literature have\nbeen the source of some controversy since the first release of the\n\\satellite{WMAP}\\ data (see \\citealt{WMAP7-anomalies,CHSS-review} for reviews). The\nfirst of them was already seen by the \\satellite{Cosmic Background\n Explorer} (\\satellite{COBE}): the temperature two-point angular correlation function\ncomputed as an average over the complete sky\n\\begin{equation}\n \\Ccorr(\\theta) = \\overline{T(\\unitvec e_1) T(\\unitvec e_2)}, \\qquad\n \\unitvec e_1 \\cdot \\unitvec e_2 = \\cos \\theta, \n \\label{eq:Ctheta-full-sky}\n\\end{equation}\nwas found to be smaller than expected at large angular scales\n\\citep{DMR4-Ctheta}. 
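Operationally, the sky average in this definition can be estimated from a pixelized map by binning the products $T(\unitvec e_1)T(\unitvec e_2)$ of all pixel pairs by their separation angle. A minimal pure-Python sketch (the pixel format and the $10\degr$ binning are illustrative choices of ours, not the actual analysis code):

```python
import math
from itertools import combinations

def correlation_function(pixels, nbins=18):
    """Estimate C(theta) as a binned average of T(e1)*T(e2) over all
    distinct pixel pairs.  `pixels` is a list of ((x, y, z), T) entries
    with unit direction vectors; bins are uniform in angle."""
    sums = [0.0] * nbins
    counts = [0] * nbins
    for (e1, t1), (e2, t2) in combinations(pixels, 2):
        # clip the dot product against rounding error before acos
        c = max(-1.0, min(1.0, sum(a * b for a, b in zip(e1, e2))))
        theta = math.degrees(math.acos(c))
        b = min(int(theta / (180.0 / nbins)), nbins - 1)
        sums[b] += t1 * t2
        counts[b] += 1
    return [s / n if n else None for s, n in zip(sums, counts)]
```

For a map with uniform $T$, every populated bin returns that constant squared, as expected for a pure monopole.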
Scant attention was given to this observation, due in\npart to the relatively low signal to noise ratio of the\n\\satellite{COBE}\\ observations, but mostly to the theory-driven shift in attention\naway from the angular correlation function to the angular power spectrum.\nThe lack of correlations on angular scales larger than $60$ degrees was\nrediscovered almost a decade later by \\satellite{WMAP}\\ in their one-year analysis\n\\citep{WMAP1-cosmology} and analysed in greater detail by us for the\n\\satellite{WMAP}\\ three and five-year data releases \\citep{CHSS-WMAP5}. We have\nemphasized its persistence in the data (contrary to some claims),\ndifferentiated it from the lowness of the temperature quadrupole with which\nit is often confused, and demonstrated how it challenges the canonical\ntheory's fundamental prediction of Gaussian random, statistically isotropic\ntemperature fluctuations. For related work on the missing large-angle\ncorrelations, see also \\cite{CHSS-WMAP3,Hajian2007,SHCSS2011,Kim2011,Zhang2012,Gruppuso2013}.\n\nThe \\satellite{Planck}\\ team presented an analysis of the angular two-point correlation\nfunction at a low resolution ($\\textit{Nside} = 64$) for their four component\nseparation methods (\\Planckmap{Commander-Ruler}, \\Planckmap{NILC}, \\Planckmap{SEVEM}, \\Planckmap{SMICA}) after the U73 mask was used to\nsuppress Galactic residuals. Based on comparison with $10^3$ realizations of\nthe best-fitting model, they find that the probability of obtaining a $\\chi^2$\nbetween the expected angular two-point correlation function of the\nbest-fitting model and the observed correlation function that is at least as\nlarge as that measured is $0.883$, $0.859$, $0.884$, and $0.855$ for the \\Planckmap{Commander-Ruler},\n\\Planckmap{NILC}, \\Planckmap{SEVEM}, and \\Planckmap{SMICA}\\ maps respectively \\citep{Planck-R1-XXIII}. 
However,\ntheir statistic fails to capture that what is anomalous about the angular\ntwo-point correlation function is not the extent to which it deviates from the\ntheoretical expected value of the function. Rather, as has been the\n case since the \\satellite{COBE-DMR}\\ observation, the pertinent anomaly is\nthat above about $60$ degrees the angular correlation function is very nearly\nzero. It is this very special way of deviating from our expectation that\n deserves our attention.\n\nIn this work, we analyse the two-point angular correlation function at\nlarge angles as seen in the final data release of \\satellite{WMAP}\\ and the first\ncosmology release of \\satellite{Planck}. The anomalous alignments of low multipole\nmodes with each other and with directions defined by the geometry and\nmotion of the Solar system are discussed in a companion paper\n\\citep{CHSS-Planck-R1-alignments}. Here we demonstrate that on the part of\nthe sky outside the plane of the Galaxy the absence of two-point angular\ncorrelations above about $60$ degrees remains a robust, statistically\nsignificant result, with a $p$-value between about $0.03$ and $0.33$ per\ncent depending on the precise map and Galaxy cut being analysed.\n\n\\section{Physics at large angular scales}\n\nHigh fidelity measurements of the microwave sky reveal the imprints\nof primary temperature, density and metric fluctuations in the early\nUniverse. By observing these fluctuations and analysing their statistical\nproperties, we seek a deeper understanding of cosmological inflation or any\nalternative mechanism that produced the initial fluctuations.\n\nStudying modes with wavelengths too large to enable causal contact across the\nmode during the radiation and most of the matter dominated epochs suggests\nthat we can learn something about the physics of inflation without detailed\nknowledge of the recent content of the Universe and associated astrophysical\ndetails (e.g.\\ reionization). 
This motivates us to pay special attention to\nunderstanding the largest angular scales. Comoving scales that cross into the\nHubble radius at $z \\sim 1$ and below are observed at angles larger than $60$\ndegrees. Thus features observed at those scales are either of primordial\nnature or stem from physics at $z \\la 1$, the epoch in the history of the\nUniverse that we arguably know best.\n\nTo be more precise, at $z = 0.91 (1.5, 7)$ the comoving Hubble length\nequals the length of a comoving arc with an opening angle of $90\\degr\n(60\\degr, 18\\degr)$ for the best-fitting $\\Lambda$CDM\\ model. These angular\nscales correspond roughly to the scales that have been shown to be\nanomalous in previous works (the quadrupole, octopole, and up to modes $\\ell =\n10$). It is possibly noteworthy that $z \\sim 7$ corresponds to the moment\nwhen the Universe is fully reionized.\n\nFor better or worse, however, the large-angle CMB is also sensitive to the\nphysics that affects the microwave photons as they propagate from their\nlast scattering until their collection by our telescopes. The late-time\nintegrated Sachs-Wolfe (ISW) effect could potentially correlate the\nlarge-angle CMB with the local structure of the gravitational\npotential. Indeed, it has been proposed in the literature that some of the\nobserved CMB anomalies could be explained in this way\n\\citep{Rakic2006a,Francis2010,Dupe2011,Rassat2013}. Although\nreconstruction of the local gravitational potential from existing CMB and\nlarge-scale structure data is quite uncertain and subject to biases, such\nan explanation would indeed be an attractive possibility if only there were\nno lack of correlations on large scales.\nIf the observed lack of large-angle correlations is real, then we must\nexplain how the local gravitational potential manages to align with the\nprimordial temperature fluctuations in such a way that the resulting sky\nhas such a deficit. 
In the end this does not change the underlying\nproblem; it merely rephrases it from one about the CMB to one about the\nlocal gravitational potential.\n \nClearly, it is important to understand the lack of correlations at large\nangular scales in greater detail not just for its own sake, but also in\norder to evaluate any proposed explanation for other features of the CMB\ndata, especially other large-angle or low-$\\ell$ anomalies.\n\n \n\\section{Temperature two-point angular correlation function} \n\n\\subsection{Theory}\n\nIn the standard CMB analysis a full-sky map of temperature fluctuations,\n$T(\\unitvec e)$, is expanded in spherical harmonics as\n\\begin{equation}\n T(\\unitvec e) = \\sum_{\\ell m} a_{\\ell m} Y_{\\ell m}(\\unitvec e),\n\\end{equation}\nwhere the coefficients in the expansion are extracted from the full-sky as\n\\begin{equation}\n a_{\\ell m} = \\int T(\\unitvec e) Y_{\\ell m}^*(\\unitvec e)\\, \\rmn{d}\\unitvec e.\n\\end{equation}\nFrom these quantities we define the two-point angular power spectrum as\n\\begin{equation}\n \\Cee_\\ell \\equiv \\frac1{2\\ell+1} \\sum_{m}|a_{\\ell m}|^2.\n \\label{eq:Cl-full-sky}\n\\end{equation}\nNote that the two-point angular power spectrum may \\emph{always} be defined\nin this way. Only in the case of Gaussian random, statistically isotropic\ntemperature fluctuations will it contain \\emph{all} the statistical\ninformation. The full-sky two-point angular correlation\nfunction~(\\ref{eq:Ctheta-full-sky}) is related to the full-sky two-point\nangular power spectrum via a Legendre series\n\\begin{equation}\n \\Ccorr(\\theta) = \\sum_\\ell \\frac{2\\ell+1}{4\\upi} \\Cee_\\ell P_\\ell(\\cos\\theta),\n \\label{eq:Ctheta-full-sky-expansion}\n\\end{equation}\nwhere the $P_\\ell(\\cos\\theta)$ are Legendre polynomials.\n\nUnfortunately the complete sky cannot be observed due to foreground\ncontamination. 
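Before turning to masked skies, we note that the Legendre series above is straightforward to evaluate numerically; a minimal sketch (the Bonnet recurrence stands in for any special-function library, and the function names are ours):

```python
import math

def legendre_p(lmax, x):
    """P_0(x)..P_lmax(x) via the Bonnet recurrence
    (l+1) P_{l+1} = (2l+1) x P_l - l P_{l-1}."""
    p = [1.0, x]
    for ell in range(1, lmax):
        p.append(((2 * ell + 1) * x * p[ell] - ell * p[ell - 1]) / (ell + 1))
    return p[: lmax + 1]

def c_theta(cls, theta_deg):
    """C(theta) = sum_l (2l+1)/(4 pi) C_l P_l(cos theta); `cls[l]` = C_l."""
    x = math.cos(math.radians(theta_deg))
    p = legendre_p(len(cls) - 1, x)
    return sum((2 * ell + 1) / (4 * math.pi) * cls[ell] * p[ell]
               for ell in range(len(cls)))
```

A pure quadrupole spectrum, for instance, gives $C(90\degr) = -\frac{5}{8\upi}\,C_2$, since $P_2(0) = -1/2$.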
If we let $W(\\unitvec e)$ represent a mask on the sky (in\nthe simplest case it is zero for pixels removed and one for those included)\nthen cut-sky quantities can be defined in analogy to the full-sky ones from\nabove. In particular, the cut-sky can be expanded in pseudo-$a_{\\ell m}$ as\n\\begin{equation}\n W(\\unitvec e) T(\\unitvec e) = \\sum_{\\ell m} \\tilde{a}_{\\ell m} Y_{\\ell\n m}(\\unitvec e),\n\\end{equation}\nwhere\n\\begin{equation}\n \\tilde{a}_{\\ell m} = \\int W(\\unitvec e) T(\\unitvec e) Y_{\\ell m}^*(\\unitvec e)\\,\n \\rmn{d}\\unitvec e.\n\\end{equation}\nFrom these the cut-sky two-point angular power spectrum is defined as the\npseudo-$\\Cee_\\ell$ given by\n\\begin{equation}\n \\tilde\\Cl \\equiv \\frac1{2\\ell+1} \\sum_m |\\tilde{a}_{\\ell m}|^2.\n \\label{eq:Cl-cut-sky}\n\\end{equation}\nSimilarly the cut-sky two-point angular correlation function is defined as\na sky average,\n\\begin{equation}\n \\tilde\\Ccorr (\\theta) \\equiv \\overline{W(\\unitvec e_1)T(\\unitvec e_1)\n W(\\unitvec e_2)T(\\unitvec e_2)},\n \\qquad\n \\unitvec e_1 \\cdot \\unitvec e_2 = \\cos \\theta.\n \\label{eq:Ctheta-cut-sky}\n\\end{equation}\nAgain this may be expanded in a Legendre series, now in terms of the\npseudo-$\\Cee_\\ell$, as\n\\begin{equation}\n \\tilde\\Ccorr(\\theta) = \\sum_\\ell \\frac{2\\ell+1}{4\\upi} \\tilde\\Cl\n P_\\ell(\\cos\\theta)\n \\label{eq:Ctheta-cut-sky-expansion}\n\\end{equation}\n(see \\citealt{Pontzen2010} for a proof of this result). The cut-sky\ntwo-point angular correlation function will be the main focus of this work.\n\nNote that it is common to just refer to $\\Ccorr(\\theta)$ as a single\nquantity covering both the full- and cut-sky cases. 
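The cut-sky average can be sketched in the same pair-counting spirit by carrying the window function along with the temperature. Here we average the product $W_1 T_1 W_2 T_2$ over all pixel pairs in each bin; this is one possible convention (normalizations differ in the literature), and the sketch is illustrative rather than the pipeline actually used:

```python
import math
from itertools import combinations

def cut_sky_correlation(pixels, nbins=18):
    """Binned average of W(e1)T(e1) * W(e2)T(e2) over all pixel pairs;
    `pixels` holds ((x, y, z), T, W) with W = 0 (masked) or 1 (kept)."""
    sums = [0.0] * nbins
    counts = [0] * nbins
    for (e1, t1, w1), (e2, t2, w2) in combinations(pixels, 2):
        c = max(-1.0, min(1.0, sum(a * b for a, b in zip(e1, e2))))
        b = min(int(math.degrees(math.acos(c)) / (180.0 / nbins)), nbins - 1)
        sums[b] += (w1 * t1) * (w2 * t2)
        counts[b] += 1
    return [s / n if n else None for s, n in zip(sums, counts)]
```

With $W \equiv 1$ everywhere this reduces to the full-sky estimate; any pair touching a masked pixel contributes zero to its bin.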
It should be\nremembered that whenever a cut-sky $\\Ccorr(\\theta)$ is discussed it is\ndefined as in Eq.~(\\ref{eq:Ctheta-cut-sky}) and it may be expanded in a\nLegendre series using the pseudo-$\\Cee_\\ell$ as in\nEq.~(\\ref{eq:Ctheta-cut-sky-expansion}).\n\nIt should also be noted that the discussion above, including the quantities\ndefined and relationships derived, does \\emph{not} rely on assumptions\nabout the theory. It is only when quantities measured in our Universe are\nrelated to the properties of the ensemble predicted by the theory that\nassumptions such as Gaussianity and statistical isotropy become important\nand must be identified. For example, to construct an estimator of the\ntheoretical two-point angular power spectrum from cut-sky observations --\neither through the pseudo-$\\Cee_\\ell$ or a maximum likelihood technique -- extra\nassumptions that may not be true on large-angles or at low-$\\ell$ are\nrequired.\n\nAt high-$\\ell$, observed deviations from Gaussianity agree with the amount\nof non-Gaussianity expected from the non-linear contributions of\ngravitational lensing \\citep{Planck-R1-XXIV}. However, at low-$\\ell$,\nthere are statistically significant anomalies in the temperature map, such\nas the alignments of multipoles and the hemispherical asymmetry\n\\citep{CHSS-Planck-R1-alignments,Planck-R1-XXIII}, that are evidence of\ncorrelations among the $a_{\\ell m}$ and the $\\Cee_\\ell$, and thus contradict the\nassumption of Gaussian-random statistically isotropic $a_{\\ell m}$. This\nsuggests that the physics underlying the observed sky cannot be\ncharacterized solely by the $\\Cee_\\ell$, the statistical quantities prescribed by\nthe canonical model; unless these anomalies are unfortunate `flukes', other\nstatistical tools are not just interesting but necessary. The question is\nwhich are the right ones. The answer clearly depends on the physics\nunderlying the anomalies. 
At least until that physics is established,\nmultiple approaches will need to be explored.\n\n\\subsection{Analysis of Observations}\n\\begin{figure}\n \\includegraphics[width=\\linewidth]{Ctheta_smica}\n \\caption{Two-point angular correlation function from the inpainted\n \\satellite{Planck}\\ \\Planckmap{SMICA}\\ map. The black, dotted line shows the best-fitting\n $\\Lambda$CDM\\ model from \\satellite{Planck}. The shaded, cyan region is the one-sigma\n cosmic variance interval. Included from the \\Planckmap{SMICA}\\ map are the\n $\\Ccorr(\\theta)$ calculated on the full-sky (black, solid line) and\n from two cut skies using the U74 mask (green, dash-dotted line) and the\n KQ75y9 mask (red, dashed line). See the text for details.}\n \\label{fig:Ctheta-smica}\n\\end{figure}\n\nThe two-point temperature angular correlation function for the CMB,\n$\\Ccorr^{TT}(\\theta)$, has remained mostly unchanged since first measured\nby the \\satellite{COBE-DMR}\\ \\citep{DMR4-Ctheta}. The resulting curves from the\n\\satellite{Planck}\\ \\Planckmap{SMICA}\\ map are shown in Fig.~\\ref{fig:Ctheta-smica}. What is\nmost striking at first glance may be the difference between the\nbest-fitting $\\Lambda$CDM\\ model and the observed $\\Ccorr(\\theta)$ on both the\nfull and cut skies (the details of the masks will be discussed below).\nThis is a source of considerable confusion and great care must be taken to\nnot read too much into this. The values of $\\Ccorr(\\theta)$ at different\nangular separations $\\theta$ (or more precisely in different angular bins)\nare correlated, so the sizeable deviation between the expected $\\Lambda$CDM\\ and\nthe observed curves is not as significant as it may appear. Rather, it is\nthe very small value of the observed $\\Ccorr(\\theta)$ on large angular\nscales that is truly surprising. 
This is particularly true for the cut\nskies where there are essentially no correlations above about 60 degrees,\nexcept for some small anti-correlation near 180 degrees.\n\nTo quantify this lack of correlations on large angular scales we continue to\nuse the statistic first proposed in the \\satellite{WMAP}\\ one-year analysis\n\\citep{WMAP1-cosmology},\n\\begin{equation}\n \\label{eq:Shalf}\n \\ensuremath{S_{1\/2}} \\equiv \\int_{-1}^{1\/2} [\\Ccorr(\\theta)]^2 \\rmn{d}(\\cos\\theta)\n = \\sum_{\\ell,\\ell'=2}^{\\ell_{\\rmn{max}}} \\Cee_\\ell I_{\\ell\\ell'} \\Cee_{\\ell'}.\n\\end{equation}\nAs discussed above, this definition applies to both full-sky and cut-sky\nmaps. For the case of full-sky maps the full-sky $\\Cee_\\ell$ from\nEq.~(\\ref{eq:Cl-full-sky}) are used in the sum on the right-hand side,\nwhereas for the case of cut-sky maps the pseudo-$\\Cee_\\ell$ from\nEq.~(\\ref{eq:Cl-cut-sky}) are used. Throughout we will either refer to the\n$\\ensuremath{S_{1\/2}}$ statistic generically or make clear the context in which it is\ncalculated. This statistic has not been optimized in any way, except\ncrudely by the choice of the limits of integration, particularly the upper\none, which has been chosen to be a convenient value. We consistently resist\nthe temptation to optimize these limits in order to minimize the\noft-repeated criticism that the statistic is \\textit{a posteriori}. 
In\nacknowledgement and partial response to that objection, we note that the\nstatistical significance of the absence of large-angle correlations is not\nparticularly dependent on the precise value of either limit (so long\nas the range of integration focuses on large scales) or on the particular\nchoice of reasonable integrand.\\footnote{In another paper, looking at the\n predictions for the two-point angular correlation function of temperature\n with polarization, specifically the $Q$ Stokes parameter \\citep{CHSS-TQ},\n we optimized the upper and lower limits of integration, and considered\n both $[\\Ccorr^{TQ}(\\theta)]^2$ and $\\Ccorr^{TQ}(\\theta)$ as integrands in\n the equivalent of (\\ref{eq:Shalf}). However, in that case we were\n \\textit{a priori} optimizing a statistic for a specific purpose --\n differentiating between two models. Furthermore, it was found\n that replacing $[\\Ccorr^{TQ}(\\theta)]^2$ in the integrand with\n $\\Ccorr^{TQ}(\\theta)$ makes no qualitative difference in the\n conclusions.}\n \nThe sum in equation~(\\ref{eq:Shalf}) shows how to quickly and easily\ncalculate $\\ensuremath{S_{1\/2}}$ in terms of the $\\Cee_\\ell$ or $\\tilde\\Cl$ from the Legendre\nseries~(\\ref{eq:Ctheta-full-sky-expansion}) or\n(\\ref{eq:Ctheta-cut-sky-expansion}). The ${I}_{\\ell \\ell'}$ are the\ncomponents of a matrix of integrals over products of Legendre polynomials\nand are simply related to the $\\mathcal{I}_{\\ell\\ell'}(1\/2)$ calculated in\nAppendix A of \\cite{CHSS-WMAP5} by ${(4\\upi)^2} I_{\\ell\\ell'} =\n{(2\\ell+1)(2\\ell'+1)} \\mathcal{I}_{\\ell\\ell'}(1\/2)$. The sum in the\n$\\ensuremath{S_{1\/2}}$ expression~(\\ref{eq:Shalf}) ranges from $\\ell=2$ to\n$\\ell=\\ell_{\\rmn{max}}$. The lower limit is due to the monopole and dipole being\nremoved from the map. 
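As a cross-check of the two forms of the definition, $S_{1/2}$ can also be computed by direct numerical integration of the squared Legendre series over $\cos\theta \in [-1, 1/2]$; a sketch (the grid size is an arbitrary choice of ours):

```python
import math

def s_half(cls, npts=4000):
    """S_{1/2} = int_{-1}^{1/2} [C(theta)]^2 d(cos theta), with C(theta)
    built from its Legendre series; `cls[l]` = C_l (set cls[0] = cls[1] = 0
    to mimic monopole and dipole removal)."""
    lmax = len(cls) - 1

    def c_of_x(x):
        p = [1.0, x]
        for ell in range(1, lmax):
            p.append(((2 * ell + 1) * x * p[ell] - ell * p[ell - 1]) / (ell + 1))
        return sum((2 * ell + 1) / (4 * math.pi) * cls[ell] * p[ell]
                   for ell in range(lmax + 1))

    # trapezoidal rule over x = cos(theta) in [-1, 1/2]
    h = 1.5 / npts
    total = 0.5 * (c_of_x(-1.0) ** 2 + c_of_x(0.5) ** 2)
    for i in range(1, npts):
        total += c_of_x(-1.0 + i * h) ** 2
    return total * h
```

For a pure quadrupole, $C_2 = 1$, the integral can be done in closed form, $S_{1/2} = \left(\frac{5}{4\upi}\right)^2 \int_{-1}^{1/2} P_2^2(x)\, \rmn{d}x \approx 0.0438$, which the sketch reproduces.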
We remove the monopole both because its amplitude is\nsignificantly larger than those of the other multipoles and because we are\ninterested in the correlations among fluctuations, not in the background\nvalue. We remove the entire dipole because it is dominated by the Doppler\ndipole -- the (uninteresting) contribution due to our peculiar motion\nthrough the Universe; this is approximately two orders of magnitude larger\nthan the expected underlying dipole in the CMB rest frame. Once it is\npossible to measure the Doppler contribution to better than $1\\%$, it will\nbe far preferable to remove only the Doppler dipole, and set $\\ell=1$ as the\nlower limit of the sum in expression~(\\ref{eq:Shalf}). For the upper limit\nthere is some freedom in the choice of $\\ell_{\\rmn{max}}$. Since $\\Cee_\\ell\\sim\n\\ell^{-2}$ we would expect that the result is independent of our choice\nprovided that $\\ell_{\\rmn{max}}$ is `large enough'. However, since we will find\nsmall values of $\\ensuremath{S_{1\/2}}$, the exact choice does have a slight effect on the\nfinal values. We have consistently chosen $\\ell_{\\rmn{max}}=100$ for all\ncalculations of $\\ensuremath{S_{1\/2}}$ in this work.\n\nThe calculation of $\\ensuremath{S_{1\/2}}$ has therefore been reduced to finding the\ntwo-point angular power spectrum either over the full-sky or cut-sky for\nsome map of the CMB temperature. However, a number of important choices\nmust be made, which we now discuss.\n\nFirst, there are a number of maps available for analysis. In each data\nrelease, the \\satellite{WMAP}\\ team included individual band maps and a full-sky\nInternal Linear Combination (ILC) map designed to be as close to the\nforeground-subtracted CMB as possible. 
The \\satellite{Planck}\\ team released\nindividual band maps and three different foreground-subtracted maps --\n\\Planckmap{NILC}, \\Planckmap{SEVEM}, \\Planckmap{SMICA}\\ -- in their recent release, although they had many\nmore, including one they called the \\Planckmap{Commander-Ruler}\\ map. Here we will analyse the\nseven and nine-year \\satellite{WMAP}\\ $V$ and $W$ band maps and the ILC map, the\n\\satellite{Planck}\\ High Frequency Instrument (\\satellite{HFI}) $100\\unit{GHz}$ and Low Frequency\nInstrument (\\satellite{LFI}) $70\\unit{GHz}$ maps, and its \\Planckmap{NILC}, \\Planckmap{SEVEM}, and\n\\Planckmap{SMICA}\\ maps.\\footnote{All CMB data is available from the Lambda site,\n \\url{http:\/\/lambda.gsfc.nasa.gov\/}, including links to both \\satellite{WMAP}\\ and\n \\satellite{Planck}\\ results. The \\satellite{Planck}\\ results may directly be obtained via the\n \\satellite{Planck}\\ Legacy Archive, \\url{http:\/\/archives.esac.esa.int\/pla\/}.}\n\nOnce we have a map, we must also choose the resolution of the maps to be\nanalysed. A higher resolution will minimize resolution-dependent effects.\nOn the other hand, to reduce the computation time, particularly when\ngenerating statistics from realizations of $\\Lambda$CDM, a low resolution is\npreferred. As a compromise we have chosen the \\textsc{healpix}\\footnote{The\n \\textsc{healpix}\\ source code is freely available from\n \\url{healpix.sourceforge.net}.} resolution $\\textit{Nside}=128$ for all studies\nin this work.\n\nEven with the existence of cleaned, full-sky maps the concern of residual\ncontamination, particularly on the largest angular scales, remains. For\nthis reason it is desirable to remove the most contaminated regions of the\nsky and only analyse the cleanest ones. 
The \\satellite{Planck}\\ analysis used the U73\nmask, which leaves a sky fraction $f_{\\rmn{sky}}=0.73$ \\citep{Planck-R1-XXIII}.\nThis mask is not publicly available but is constructed from the union of\nthe validity masks provided with the full-sky maps \\citep{Planck-R1-XII}.\nFor the \\Planckmap{NILC}, \\Planckmap{SEVEM}, and \\Planckmap{SMICA}\\ maps these masks are available. For the\n\\Planckmap{Commander-Ruler}\\ map only a minimal version of the mask is provided. Taking the union\nof these four masks produces what we call the U74 mask, a close\napproximation of the U73 mask but with $f_{\\rmn{sky}}=0.74$. For \\satellite{WMAP}\\ we use\ntheir extended temperature mask from their nine-year data release, named\nKQ75y9, which has $f_{\\rmn{sky}}=0.69$.\n\nThese masks are provided at high resolution: $\\textit{Nside}=2048$ for the U74 mask\nand $\\textit{Nside}=1024$ for the KQ75y9 mask. To degrade the masks to our working\nresolution of $\\textit{Nside}=128$ we follow the prescription defined in\n\\cite{Planck-R1-XXIII}: first the mask is degraded to $\\textit{Nside}=128$ using\n\\texttt{ud\\_grade} from \\textsc{healpix}, then any pixel with a value less than\n$0.8$ is set to zero, otherwise it is set to one. With this prescription\nthe $\\textit{Nside}=128$ masks have sky fractions of $f_{\\rmn{sky}}=0.72$ for U74 and\n$f_{\\rmn{sky}}=0.67$ for KQ75y9.\n\n\\begin{figure}\n \\includegraphics[width = \\linewidth]{KQ75y9+U74_masks}\n \\caption{Masks used in this work. A pixel may be removed by both masks\n (dark blue), only the KQ75y9 mask (light blue), only the U74 mask\n (yellow), or by neither mask (red).}\n \\label{fig:masks-comparison}\n\\end{figure}\n\nDespite the KQ75y9 mask removing more pixels, the U74 mask is not fully\ncontained within it. A comparison of the two masks is given in\nFig.~\\ref{fig:masks-comparison}. As can be seen, the two masks mostly\ncoincide, though there are many small regions of pixels only contained in\none of the two masks. 
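As an aside, the degrade-and-threshold prescription described above is easy to sketch for a mask stored in the nested \textsc{healpix} ordering, where degrading $\textit{Nside}$ by a factor $f$ averages $f^2$ consecutive child pixels. The code below is a toy stand-in for \texttt{ud\_grade}, not the \textsc{healpix} implementation itself:

```python
def degrade_mask(mask, factor, threshold=0.8):
    """Degrade a binary mask given in NESTED ordering: each low-resolution
    pixel covers factor**2 consecutive high-resolution pixels; keep it
    only if at least `threshold` of its children are kept."""
    step = factor * factor
    return [1 if sum(mask[i:i + step]) / step >= threshold else 0
            for i in range(0, len(mask), step)]
```

The retained sky fraction of the degraded mask is then simply `sum(lowres) / len(lowres)`, which is how the quoted $f_{\rmn{sky}}$ values at $\textit{Nside}=128$ arise.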
In particular there are pixels that are excluded by\nthe U74 mask but included by the KQ75y9 mask and the KQ75y9 mask generally\nremoves more of the region around the Galactic centre than the U74 mask.\nThese small differences have a noticeable effect on the calculated cut-sky\n$\\ensuremath{S_{1\/2}}$\\@.\n\nIt is important that comparisons of data and simulations are made\nconsistently. In addition to the choices discussed above, cut-sky data\nwill always be compared to cut-sky realizations, with the maps in all cases\ntreated as similarly as possible. This is particularly important since, as\nnoted above, for cut skies the pseudo-$\\Cee_\\ell$ are employed in the calculation\nof $\\ensuremath{S_{1\/2}}$\\@. In this work we are \\emph{not} interested in reconstructing\nthe full-sky angular correlations. Instead, we find that angular\ncorrelations on the cut-sky are unusually low. We thus do not make\nstatements about the full-sky CMB, which at any rate cannot be reliably\nobserved, and for which a maximum-likelihood estimator may be more\nappropriate \\citep{Efstathiou2004-MLE, Efstathiou2010, Pontzen2010}. Even so,\nreconstructing the full-sky from a cut-sky requires extra assumptions and\nmay introduce its own biases \\citep{CHSS2011}.\n\nExtracting the $\\Cee_\\ell$ from a map, particularly from a masked map, also\nrequires some care. We use \\textsc{spice}\\ \\citep{polspice} for this purpose. For\ncut skies there is the added issue that, even if the full-sky does not\ninclude a monopole or dipole, these modes will exist in the portion of the\nsky included for evaluation. If we knew that the full-sky map did not\ncontain a residual monopole or dipole, then we could proceed without\nfurther concern. Unfortunately, with real data this is not known,\nparticularly for individual frequency band maps which definitely have\nGalactic contamination. We therefore remove the average monopole and\ndipole from all maps prior to extracting the $\\Cee_\\ell$\\@. 
For the monopole, we\ndo this by subtracting the average value of the temperature over the\nportion of the sky that is being retained; for the dipole we find the\nbest-fitting dipole over the retained sky and subtract that dipole. (In\n\\textsc{spice}\\ this removal is a built-in feature which we employ in our\nanalysis.) When analysing a cut-sky, this procedure generically introduces\na monopole and dipole (and alters the other multipoles) into the equivalent\nfull-sky map. Though this may seem to be a problem, again recall that the\ncut-sky analysis is self-contained and internally consistent since the data\nand realizations are treated identically. The cut-sky statistics are\n\\emph{not} estimators of the full-sky, as again made clear by this monopole\nand dipole removal.\n\nThere is also the question of the effect of our motion with respect to the\nCMB rest frame on the quadrupole. Just as that motion, with velocity\n$\\beta\\equiv v\/c \\sim 10^{-3}$, induces a dipole with amplitude\n$\\mathcal{O}\\left(\\beta\\right)$ times the monopole, it also induces a\nDoppler quadrupole (DQ) with amplitude $\\mathcal{O}\\left(\\beta^2\\right)$\ntimes the monopole. The naive expectation that since $\\beta^2 \\sim\n10^{-6}$ the DQ will be an unimportant contribution to the cosmological\nquadrupole is not obviously true, at least in part because the measured\nquadrupole is much smaller than the theoretical expectation. For each map\nwe analyse both the DQ-uncorrected and the DQ-corrected map to gauge the\nimportance of this effect. The one exception is the\n\\satellite{Planck}\\ \\satellite{LFI}\\ $70\\unit{GHz}$ map, where the DQ has been accounted for in\nthe calibration procedure. 
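A back-of-the-envelope check shows why the DQ is worth tracking: $\beta^2$ times the monopole is a few $\umu$K, which is not obviously negligible next to the observed (small) quadrupole. The numbers below ($v \approx 370 \unit{km\,s^{-1}}$ and $T_0 = 2.7255\unit{K}$, the standard dipole speed and CMB monopole temperature) are inserted by us for illustration:

```python
# Order-of-magnitude size of the Doppler quadrupole (DQ):
# amplitude ~ beta^2 times the CMB monopole.
v_km_s = 370.0            # Solar-system speed w.r.t. the CMB rest frame
c_km_s = 299792.458       # speed of light
t0_uK = 2.7255e6          # CMB monopole temperature in micro-kelvin

beta = v_km_s / c_km_s    # ~ 1.2e-3: the dipole, in units of the monopole
dq_uK = beta ** 2 * t0_uK   # ~ 4 micro-kelvin: the DQ scale
```
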
See\n\\citet{Planck-R1-V,CHSS-Planck-R1-alignments} for a more detailed\ndiscussion of this issue.\n\n\\section{Results}\n\n\\begin{figure}\n \\includegraphics[width = \\linewidth]{S12_LCDM}\n \\caption{Distribution of $\\ensuremath{S_{1\/2}}$ values from $10^6$ realizations of the\n best-fitting $\\Lambda$CDM\\ model for full and masked skies. The shaded regions\n (green, dash-dotted for the U74 mask and red, dashed for the KQ75y9\n mask) represent the spread of the observed values as given in\n Tables~\\ref{tab:S12-results} and \\ref{tab:S12-results-DQ}. Masking\n only slightly affects the expected distributions and the observations\n are in the small $\\ensuremath{S_{1\/2}}$ tail of the distribution for both masks\n considered in this work.}\n \\label{fig:S12-histograms}\n\\end{figure}\n\n\\begin{table}\n \\caption{Smallness of $\\ensuremath{S_{1\/2}}$ for maps without the DQ correction. We\n analyse the cleaned maps from \\satellite{Planck}: \\Planckmap{NILC}, \\Planckmap{SEVEM}, and \\Planckmap{SMICA}, as\n well as from \\satellite{WMAP}: seven and nine-year ILC. We also analyse the\n individual frequency band maps from \\satellite{Planck}: \\satellite{HFI}\\ $100\\unit{GHz}$ and\n \\satellite{LFI}\\ $70\\unit{GHz}$, as well as from \\satellite{WMAP}: seven and nine-year $W$\n and $V$ bands. For each map, we use both the U74 and KQ75y9 masks.\n In all cases residual monopole and dipole contributions have been\n subtracted from the map after masking. 
For each map and mask we\n report the $\\ensuremath{S_{1\/2}}$ value and the associated $p$-value -- the fraction\n of realizations of the \\satellite{Planck}\\ best-fitting $\\Lambda$CDM\\ model\n with an $\\ensuremath{S_{1\/2}}$ no larger than the reported value.}\n \\label{tab:S12-results}\n\\begin{tabular}{ld{4.1}d{3}d{4.1}d{3}} \\hline\n & \\multicolumn{2}{c}{U74}\n & \\multicolumn{2}{c}{KQ75y9}\n \\\\\n \\multicolumn{1}{c}{Map}\n & \\multicolumn{1}{c}{$\\ensuremath{S_{1\/2}}\\unit{(\\rmn{\\umu K})^4}$}\n & \\multicolumn{1}{c}{$p$ (\\%)}\n & \\multicolumn{1}{c}{$\\ensuremath{S_{1\/2}}\\unit{(\\rmn{\\umu K})^4}$}\n & \\multicolumn{1}{c}{$p$ (\\%)}\n \\\\ \\hline\n \\satellite{WMAP}\\ ILC 7yr & 1582.3 & 0.193 & 1225.8 & 0.085\n \\\\\n \\satellite{WMAP}\\ ILC 9yr & 1626.0 & 0.211 & 1278.2 & 0.100\n \\\\\n \\satellite{Planck}\\ \\Planckmap{SMICA} & 1577.7 & 0.191 & 1022.3 & 0.044\n \\\\\n \\satellite{Planck}\\ \\Planckmap{NILC} & 1589.3 & 0.195 & 1038.2 & 0.047\n \\\\\n \\satellite{Planck}\\ \\Planckmap{SEVEM} & 1657.7 & 0.225 & 1153.4 & 0.069\n \\\\\n \\\\\n \\satellite{WMAP}\\ $W$ 7yr & 1863.6 & 0.316 & 1133.9 & 0.065\n \\\\\n \\satellite{WMAP}\\ $W$ 9yr & 1887.1 & 0.329 & 1142.6 & 0.068\n \\\\\n \\satellite{Planck}\\ \\satellite{HFI}\\ $100$ & 1682.1 & 0.235 & 911.6 & 0.027\n \\\\\n \\\\\n \\satellite{WMAP}\\ $V$ 7yr & 1845.0 & 0.307 & 1290.9 & 0.104\n \\\\\n \\satellite{WMAP}\\ $V$ 9yr & 1850.0 & 0.309 & 1281.8 & 0.101\n \\\\\n \\satellite{Planck}\\ \\satellite{LFI}\\ $70^a$ & \\multicolumn{1}{c}{---} &\n \\multicolumn{1}{c}{---} & \\multicolumn{1}{c}{---} &\n \\multicolumn{1}{c}{---}\n \\\\\n \\hline\n \\end{tabular}\n\\\\ ${}^a$The calibration of the \\satellite{Planck}\\ \\satellite{LFI}\\ $70\\unit{GHz}$ channel\nincludes the DQ correction. 
See\n\\citet{Planck-R1-V,CHSS-Planck-R1-alignments} for details.\n\\end{table} \n\n\\begin{table}\n \\caption{Same as Table~\\ref{tab:S12-results} now with the DQ\n corrected maps.}\n \\label{tab:S12-results-DQ}\n\\begin{tabular}{ld{4.1}d{3}d{4.1}d{3}} \\hline\n & \\multicolumn{2}{c}{U74}\n & \\multicolumn{2}{c}{KQ75y9}\n \\\\\n \\multicolumn{1}{c}{Map}\n & \\multicolumn{1}{c}{$\\ensuremath{S_{1\/2}}\\unit{(\\rmn{\\umu K})^4}$}\n & \\multicolumn{1}{c}{$p$ (\\%)}\n & \\multicolumn{1}{c}{$\\ensuremath{S_{1\/2}}\\unit{(\\rmn{\\umu K})^4}$}\n & \\multicolumn{1}{c}{$p$ (\\%)}\n \\\\ \\hline\n \\satellite{WMAP}\\ ILC 7yr\t& 1620.3 & 0.208 \t& 1247.0 & 0.090\n \\\\ \n \\satellite{WMAP}\\ ILC 9yr \t& 1677.5 & 0.232 \t& 1311.8 & 0.109\n \\\\\n \\satellite{Planck}\\ \\Planckmap{SMICA}\t& 1606.3 & 0.202\t& 1075.5 & 0.053\n \\\\\n \\satellite{Planck}\\ \\Planckmap{NILC}\t& 1618.6 & 0.208 \t& 1096.2 & 0.058\n \\\\\n \\satellite{Planck}\\ \\Planckmap{SEVEM}\t& 1692.4 & 0.239 \t& 1210.5 & 0.082\n \\\\\n \\\\\n \\satellite{WMAP}\\ $W$ 7yr \t& 1839.0 & 0.304\t& 1128.5 & 0.064\n \\\\\n \\satellite{WMAP}\\ $W$ 9yr\t& 1864.2 & 0.317\t& 1138.3 & 0.066\n \\\\ \n \\satellite{Planck}\\ \\satellite{HFI}\\ $100$ & 1707.5 & 0.245\t& 916.3 & 0.028\n \\\\\n \\\\\n \\satellite{WMAP}\\ $V$ 7yr \t& 1829.2 & 0.300\t& 1276.2 & 0.099\n \\\\\n \\satellite{WMAP}\\ $V$ 9yr \t& 1840.4 & 0.304\t& 1268.8 & 0.097\n \\\\\n \\satellite{Planck}\\ \\satellite{LFI}\\ $70$\t& 1801.7 & 0.287\t& 1282.1 & 0.101\n \\\\\n \\hline\n \\end{tabular}\n\\end{table} \n\nHistograms of $\\ensuremath{S_{1\/2}}$ values from $10^6$ realizations of the\n\\satellite{Planck}\\ best-fitting $\\Lambda$CDM\\ model (based on their temperature only data)\nare shown in Fig.~\\ref{fig:S12-histograms}. Included in the figure are the\nfull-sky and cut-sky $\\ensuremath{S_{1\/2}}$. 
As seen in the figure, masking has a small\neffect; the peak of the distribution is shifted to slightly smaller values\ndue to masking, but this does not noticeably change the tail of\nthe distribution. Regardless, in comparing cut-sky $\\ensuremath{S_{1\/2}}$ between the\ndata and our realizations, we always compare the cut-sky data to\nrealizations cut with the same mask.\n\nThe $\\ensuremath{S_{1\/2}}$ values for the various map and mask combinations are given in\nTable~\\ref{tab:S12-results} for the case when the maps are not DQ corrected\nand in Table~\\ref{tab:S12-results-DQ} when the DQ correction has been\napplied. As discussed above, the realization maps are treated precisely\nlike the data maps -- they are masked, then monopole and dipole are\nsubtracted before $\\ensuremath{S_{1\/2}}$ is computed. Given that the value of $\\ensuremath{S_{1\/2}}$\non masked skies is extremely low compared to the typical value, having\n$10^6$ realizations is necessary to make quantitatively precise statements. For each\ncomputed value of $\\ensuremath{S_{1\/2}}$ reported, we also report the $p$-value -- the\nfraction of realizations (expressed in per cent) that have an $\\ensuremath{S_{1\/2}}$ at\nleast as low. This we interpret as the probability of obtaining a value of\n$\\ensuremath{S_{1\/2}}$ this low by random chance in the best-fitting model of $\\Lambda$CDM.\n\nAn alternative approach is to allow for variations of the best-fitting\nparameters within their error bars, for example by examining a Monte Carlo\nMarkov chain of the parameters rather than just performing realizations of\nthe best-fitting values. (Such an approach was taken for example in\n\\cite{CHSS-WMAP5}.) 
This will affect the results only weakly, because\nvarying the parameters within their error bars will cause the expected\nlow-$\\ell$ $\\Cee_\\ell$ to vary by much less than their cosmic variance errors.\n\n\\begin{figure}\n \\includegraphics[width = \\linewidth]{Ctheta_bands}\n \\caption{Cut-sky $\\Ccorr(\\theta)$ using the KQ75y9 mask for individual\n frequency band maps. Shown are correlation functions from the\n \\satellite{WMAP}\\ nine-year $W$ (black, solid line) and $V$ (red, dashed line)\n bands along with the \\satellite{Planck}\\ \\satellite{HFI}\\ $100\\unit{GHz}$ (green, dash-dotted\n line) and \\satellite{LFI}\\ $70\\unit{GHz}$ (blue, dotted line) maps. The curves\n for the \\satellite{WMAP}\\ seven-year band maps are nearly identical to those from\n the nine-year maps and are not included for clarity. In all cases the\n correlation functions are in excellent agreement across the data\n releases and frequency bands. (Note the range on $y$-axis has been\n greatly reduced as compared to Fig.~\\ref{fig:Ctheta-smica} to allow for\n \\emph{any} difference to be noticeable by eye.)}\n \\label{fig:Ctheta-bands}\n\\end{figure}\n\nThe cut-sky $\\ensuremath{S_{1\/2}}$ values presented in Tables~\\ref{tab:S12-results} and\n\\ref{tab:S12-results-DQ} show that the region outside the masks is\nconsistently observed and cleaned in all the data releases mostly\nindependent of analysis procedures. In Fig.~\\ref{fig:Ctheta-bands} we plot\n$\\Ccorr(\\theta)$ for the \\satellite{WMAP}\\ nine-year $V$ and $W$ bands with the KQ75y9\nmask and the \\satellite{Planck}\\ \\satellite{HFI}\\ $100\\unit{GHz}$ and \\satellite{LFI}\\ $70\\unit{GHz}$ bands\nalso with the KQ75y9 mask. One can see that the cut-sky angular\ncorrelation functions are remarkably consistent across instruments and\nwavebands. (And also across \\satellite{WMAP}\\ data releases. 
We have chosen not to\nplot the \\satellite{WMAP}\\ seven-year correlation functions because they are nearly\nindistinguishable from the nine-year functions.) We can thus place great\nconfidence in the cut-sky $\\ensuremath{S_{1\/2}}$ results derived from \\satellite{WMAP}\\ and \\satellite{Planck}.\nThese results can be summarized as follows.\n\\begin{itemize}\n\\item Regardless of the map, the cut-sky $\\ensuremath{S_{1\/2}}$ is very low, with\n $p$-values ranging from $0.027$ per cent for the\n \\satellite{Planck}\\ \\satellite{HFI}\\ $100\\unit{GHz}$ map with the KQ75y9 mask to\n $0.329$ per cent for the \\satellite{WMAP}\\ nine-year $W$ band map with the U74\n mask.\n\\item The cleaned maps have a smaller variation in $\\ensuremath{S_{1\/2}}$ values, with a\n $p$-value always less than about $0.239$ per cent for the U74 mask and\n less than about $0.109$ per cent for the KQ75y9 mask.\n\\item The \\satellite{Planck}\\ maps typically have smaller $\\ensuremath{S_{1\/2}}$ values than the\n \\satellite{WMAP}\\ maps. (The one slight exception is the DQ corrected\n \\satellite{Planck}\\ \\satellite{LFI}\\ $70\\unit{GHz}$ band with the KQ75y9 mask.)\n\\item The only clear systematic trend is that the KQ75y9 mask consistently\n yields a lower cut-sky $\\ensuremath{S_{1\/2}}$ than does the U74 mask. Presumably this\n is due to the larger region around the Galactic centre excluded by the KQ75y9\n mask (see Fig.~\\ref{fig:masks-comparison}).\n\\item The DQ correction has little effect, tending to slightly increase\n $\\ensuremath{S_{1\/2}}$ in the \\satellite{Planck}\\ maps and decrease it in the \\satellite{WMAP}\\ ones. This is\n in contrast to the importance of applying the DQ correction for full-sky\n alignment studies \\citep{CHSS-Planck-R1-alignments}.\n\\end{itemize}\n\nOverall, the data very consistently show a lack of correlations on large\nangular scales outside the Galactic region (as defined by the two masks\nemployed). 
The $p$-value for the $\\ensuremath{S_{1\/2}}$ statistic has remained small and\nof comparable size throughout the \\satellite{WMAP}\\ data releases and now with the\nfirst \\satellite{Planck}\\ results. This is remarkable given the improvements in\nstatistics, cleaning, beams, masks, and other systematics. Further, this\nis in contrast to the full-sky $\\ensuremath{S_{1\/2}}$ values, which vary significantly from data\nrelease to data release and from map to map. The behaviour of the full-sky\n$\\ensuremath{S_{1\/2}}$ is discussed in more detail in \\citet{CHSS-Planck-R1-alignments}.\nIt suffices here to note that the full-sky value of $\\ensuremath{S_{1\/2}}$ varies from a\nlow of $3766\\unit{(\\rmn{\\umu K})^4}$, from the \\satellite{Planck}\\ \\Planckmap{SEVEM}\\ map, to a high of\n$8938\\unit{(\\rmn{\\umu K})^4}$, calculated from the seven-year \\satellite{WMAP}\\ reported values\nof the angular power spectrum based on a maximum likelihood estimator.\n\nWe again emphasize that the two-point angular correlation function that we\nhave calculated is monopole- and dipole-subtracted. However, once the\nDoppler dipole is sufficiently well determined, only it should be removed\nand the underlying cosmological contribution to the dipole retained in\n$\\Ccorr(\\theta)$ and thus in $\\ensuremath{S_{1\/2}}$. \n\n\nThe measured lack of angular correlations in the dipole-subtracted sky has\nan important consequence for the primordial dipole. If the missing\ncorrelations are not a very unlikely fluke, nor (as our results indicate)\ndue to systematic errors or map-cleaning procedures, then they are caused\nby some as-yet unidentified physical mechanism. It is difficult to see how\nsuch a mechanism would set $\\Ccorr(\\theta)$ to be nearly zero on angular\nscales greater than $60$ degrees when the primordial dipole is subtracted,\nand yet somehow not also do so if the dipole were included. 
Instead, for a\nphysical mechanism we would expect the total angular correlation function\nincluding the contribution of the cosmological dipole to also be nearly\nzero on these scales. In the best-fitting $\\Lambda$CDM\\ model, the expected\ncontribution from the dipole alone is very large and generically spoils the\nvanishing of $\\Ccorr(\\theta)$ on large angular scales. Hence, if the\nvanishing correlations are of cosmological origin, then the primordial\ndipole is also expected to be very suppressed.\n\nTo be concrete, the expected value of $\\mathcal{C}_1$ in the\nbest-fitting $\\Lambda$CDM\\ model is approximately $3300\\unit{\\rmn{\\umu K}^2}$. With this\nvalue, the $\\mathcal{C}_1^2$ term in Eq.~(\\ref{eq:Shalf}) alone\nwould contribute approximately $2.3\\times10^5\\unit{\\rmn{\\umu K}^4}$ to $\\ensuremath{S_{1\/2}}$. (In\nprinciple this could be compensated by the cross term, $\\mathcal{C}_1\\mathcal{C}_\\ell$ with\n$\\ell\\neq1$, which can be negative; however, in practice this does not occur,\nowing mainly to the smallness of $\\mathcal{C}_2$.) Roughly, for the $\\mathcal{C}_1^2$\ncontribution not to make the $\\ensuremath{S_{1\/2}}$ `too large', the value of $\\mathcal{C}_1$ must\nalso not be `too large'. For example, requiring the contribution to $\\ensuremath{S_{1\/2}}$\nto be comparable to current cut-sky values, that is, a contribution of order\n$1000\\unit{\\rmn{\\umu K}^4}$, places a limit $\\mathcal{C}_1\\la 200\\unit{\\rmn{\\umu K}^2}$. This has a\nprobability of occurring by chance in a realization of the best-fitting model\n(due to cosmic variance) of less than approximately $0.4$ per cent.\nEquivalently, to the extent that $\\mathcal{C}_1$ contributions dominate the value of\n$\\ensuremath{S_{1\/2}}$, in order to maintain a $p$-value for the $\\ensuremath{S_{1\/2}}$ less than $0.4$\nper cent once the cosmological dipole is included requires that\n$\\mathcal{C}_1\\la200\\unit{\\rmn{\\umu K}^2}$. 
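As a rough cross-check of this arithmetic, the dipole-only term can be evaluated directly. This is a sketch only: it assumes the conventional multipole expansion of $\\ensuremath{S_{1\/2}}$, in which the $\\ell = \\ell' = 1$ term is $[3\/(4\\pi)]^2\\,\\mathcal{C}_1^2 \\int_{-1}^{1\/2} x^2\\,\\rmn{d}x$; Eq.~(\\ref{eq:Shalf}) itself is not reproduced in this excerpt.

```python
import math

# Dipole-only (l = l' = 1) term of S_{1/2}, assuming the conventional
# multipole expansion with I_{ll'} = int_{-1}^{1/2} P_l(x) P_{l'}(x) dx.
# For l = l' = 1, P_1(x) = x, so I_11 = int_{-1}^{1/2} x^2 dx = 3/8.
I_11 = (0.5**3 - (-1.0)**3) / 3.0            # = 0.375
coeff = (3.0 / (4.0 * math.pi))**2 * I_11    # prefactor multiplying C_1^2

C1 = 3300.0                                  # expected C_1 in uK^2 (best-fitting LCDM)
S12_dipole = coeff * C1**2                   # dipole contribution in uK^4
print(f"dipole term: {S12_dipole:.3g} uK^4")  # roughly 2.3e5 uK^4

# Largest C_1 whose dipole term stays of order the observed cut-sky
# S_{1/2}, i.e. about 1000 uK^4:
C1_max = math.sqrt(1000.0 / coeff)
print(f"C_1 limit: {C1_max:.0f} uK^2")        # roughly 216 uK^2
```

Under this assumed form, $\\mathcal{C}_1 = 3300\\unit{\\rmn{\\umu K}^2}$ reproduces the quoted $2.3\\times10^5\\unit{\\rmn{\\umu K}^4}$, and inverting for a $1000\\unit{\\rmn{\\umu K}^4}$ contribution gives $\\mathcal{C}_1 \\approx 216\\unit{\\rmn{\\umu K}^2}$, consistent with the $\\la 200\\unit{\\rmn{\\umu K}^2}$ limit above.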
\n\nTo summarize, it seems unlikely that a physical mechanism\nwould predict that the $\\ensuremath{S_{1\/2}}$ calculated from a dipole-subtracted cut-sky\nwould be small but the $\\ensuremath{S_{1\/2}}$ calculated from a non-dipole-subtracted\ncut-sky would not be. This strongly suggests that if the lack of angular\ncorrelations is physical in nature and not a statistical fluke, then a robust\nprediction can be made that there is a very small cosmological dipole. In\nfuture work we will develop this prediction more precisely.\n\n\\section{Conclusions}\n\nThe CMB shows a lack of correlations on large angular scales. This can be\nquantified by the $\\ensuremath{S_{1\/2}}$ statistic proposed by \\citet{WMAP1-cosmology},\nwhich is best calculated on the portion of the sky outside the Galaxy.\nUnlike attempts to infer properties of the full-sky correlation\nfunction, the cut-sky $\\ensuremath{S_{1\/2}}$ appears remarkably robust and trustworthy.\nIn our analysis we find that the $p$-value for the observed cut-sky\n$\\ensuremath{S_{1\/2}}$ in an ensemble of realizations of the best-fitting $\\Lambda$CDM\\ model\nnever exceeds $0.33$ per cent for any of the analysed combinations of maps\nand masks, with and without correcting for the Doppler quadrupole. 
This has\nremained the case since the \\satellite{WMAP}\\ three-year data release,\\footnote{The\n one-year \\satellite{WMAP}\\ release yielded slightly higher $p$-values -- $0.38$ per\n cent for the $V$ band, and $0.64$ per cent for the $W$ band\n \\citep{CHSS-WMAP3, CHSS-WMAP5}.} for both the individual ($V$ and $W$)\nband maps and the synthesized (ILC) map, and for the first \\satellite{Planck}\\ data\nrelease for both the \\satellite{LFI}\\ and \\satellite{HFI}\\ band maps and all the released\nsynthesized maps (\\Planckmap{NILC}, \\Planckmap{SMICA}, \\Planckmap{SEVEM}), when masked by either the\n\\satellite{WMAP}\\ KQ75y9 mask or the less conservative U74 mask (which is very similar\nto the \\satellite{Planck}\\ U73 mask). The \\satellite{HFI}\\ $100\\unit{GHz}$ map -- presumably the\ncleanest CMB band -- with the more conservative mask that has been defined\nby \\satellite{WMAP}\\ gives a $p$-value of only $0.03$ per cent! As general trends we\nnote that a larger mask tends to produce smaller $p$-values, the Doppler\nquadrupole correction does not change the results in a significant way, and\nthe \\satellite{Planck}\\ data yield somewhat smaller $p$-values than the \\satellite{WMAP}\\ data.\n\nThis apparent lack of temperature correlations on large angular scales is\nstriking. It is a robust observation that increases in statistical\nsignificance from \\satellite{COBE}\\ to \\satellite{WMAP}\\ to \\satellite{Planck}. The consistency of the lack\nof angular correlations greatly reduces the likelihood of instrumental\nissues as a cause. Since all three missions observed the same sky, we could\nbe unlucky and live in a very atypical realization of the Universe. A\nmethod of testing this hypothesis has been proposed that would utilize\nthe upcoming \\satellite{Planck}\\ polarization data \\citep{CHSS-TQ}. If it is not a\nstatistical fluke and not an instrumental issue, it still could be caused\nby foregrounds. 
This also appears unlikely, as the lack of correlations is\nconsistently seen in individual bands as well as in foreground-cleaned\nmaps. Thus, to the best of our knowledge, the lack of angular correlations\nis in contradiction with the idea of scale-invariant, isotropic and\nGaussian perturbations seeded by cosmological inflation.\n\nAttempts to explain this lack of correlations should also address the\nvarious other anomalous aspects observed in the microwave sky. It turns out\nthat the lack of angular correlations puts very severe constraints on such\nmodels. For example, a plausible explanation for the alignments of\nmultipole vectors or for the hemispherical asymmetry observed might have\nbeen contamination by unaccounted foregrounds; however, one cannot easily\nunderstand how a hypothetical foreground, which presumably should be\nuncorrelated with the primordial temperature fluctuations, could cause an\nalmost exact cancellation of the primordial fluctuations at angular scales\nabove 60 degrees.\n\nSeveral attempts have been made to explain the absence of large-angle\ncorrelations as being due to an unknown foreground or, more generally, by\naltering the procedure by which one arrives at the cleaned maps. Indeed,\nwhen the cleaned maps are altered in any way the microwave sky can easily\nbe made to appear less anomalous. This\nis not surprising; almost any random modification of the observed maps will\nmake them less anomalous. Though the removal of anomalies may be a\nside effect of improved analysis procedures, using their removal as a\nbasis for judging the effectiveness of such a procedure is misguided. \n\nFinally, we emphasize that in order to be convincing, new theoretical\nmodels to explain the observed large-angle anomalies must be based on the\nstatistics of realizations of that model, not just on having the mean\nvalues of the model agree with observations. 
In other words, CMB map\nrealizations based on the underlying new model should have $p$-values for\nthe measured statistics that are not unusually small.\n\nThe large-angle temperature-temperature correlations in the CMB outside the\nGalaxy have been anomalously low in all relevant maps since the days of the\n\\satellite{COBE-DMR}. The final \\satellite{WMAP}\\ release and the initial \\satellite{Planck}\\ release\nconfirm that anomaly. After twenty years, we still await a satisfactory\nexplanation.\n \n \n\\section*{Acknowledgements}\n\nWe acknowledge valuable communications and discussions with F.~Bouchet,\nC.~Burigana, J.~Dunkley, G.~Efstathiou, K.~Ganga, P.~Naselsky, H.~Peiris, \nC.~R\\\"ath and D.~Scott. \nGDS and CJC are supported by a grant from the US Department of Energy to the\nParticle Astrophysics Theory Group at CWRU\\@. DH has been supported by the\nDOE, NSF, and the Michigan Center for Theoretical Physics\\@. DJS is supported\nby the DFG grant RTG 1620 `Models of gravity'. DH thanks the Kavli Institute\nfor Theoretical Physics and GDS thanks the Theory Unit at CERN for their\nrespective hospitality. This work made extensive use of the \\textsc{healpix}{}\npackage~\\citep{healpix}. The numerical simulations were performed on the\nfacilities provided by the Case ITS High Performance Computing Cluster.\n\n\\bibliographystyle{mn2e_new}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Credits}\n\n\n\\section{Introduction}\n\nNatural language processing is increasingly leveraged in sensitive domains like healthcare. For such critical tasks, the need to prevent discrimination and bias is imperative. 
Indeed, ensuring equality of health outcomes across different groups has long been a guiding principle of modern health care systems \\cite{culyer1993equity}.\nMoreover, medical data presents a unique opportunity to work with different \\textit{modalities}, specifically \\textit{text} (e.g., patient narratives, admission notes, and discharge summaries) and numerical or categorical data (often denoted \\textit{tabular} data, e.g., clinical measurements such as blood pressure, weight, or demographic information like ethnicity). Multi-modal data is not only reflective of many real-world settings, but machine learning models which leverage both structured and unstructured data often achieve greater performance than their individual constituents \\cite{horng2017creating}. While prior work studied fairness in the text and tabular modalities in isolation, there is little work on applying notions of algorithmic fairness in the broader multimodal setting \\citep{10.1145\/3368555.3384448, chen2018my}. \n\n\n\n\n\n\nOur work brings a novel perspective towards studying fairness algorithms for models which operate on \\textit{both} text and tabular data, in this case applied to the MIMIC-III clinical dataset (MIMIC-III) \\cite{MIMIC}. We evaluate two fairness algorithms: equalized-odds through post-processing, which is agnostic to the underlying classifier, and word embedding debiasing which is a text-specific technique. We show that ensembling classifiers trained on structured and unstructured data, along with the aforementioned fairness algorithms, can both improve performance and mitigate unfairness relative to their constituent components. We also achieve strong results on several MIMIC-III clinical benchmark prediction tasks using a dual modality ensemble; these results may be of broader interest in clinical machine learning \\citep{Harutyunyan2019, khadanga2019using}. 
\n\n\n\n\n\n\\section{Background}\n\n\n\n\\subsection{Combining Text and Tabular Data in Clinical Machine Learning}\nPrior work has shown that combining unstructured text with vital sign time series data improves performance on clinical prediction tasks. \\citet{horng2017creating} showed that augmenting an SVM with text information in addition to vital signs data improved retrospective sepsis detection. \\citet{akbilgic2019unstructured} showed that using a text-based risk score improves performance on prediction of death after surgery for a pediatric dataset. Closest to our work, \\citet{khadanga2019using} introduced a joint-modality neural network which outperforms single-modality neural networks on several benchmark prediction tasks for MIMIC-III. \n\n\\subsection{Classical fairness metrics}\nMany algorithmic fairness notions fall into one of two broad categories: individual fairness enforcing fairness across individual samples, and group fairness seeking fairness across protected groups (e.g. race or gender). \nWe focus on a popular group-level fairness metric: {\\em Equalized Odds} (EO) \\citep{NIPS2016_6374}. Instead of arguing that average classification probability should be equal across all groups (also known as {\\em Demographic Parity}) -- which may be unfair if the underlying group-specific base rates are unequal -- EO allows for classification probabilities to differ across groups only through the underlying ground truth. Formally, a binary classifier $\\widehat{Y}$ satisfies EO for a set of groups $\\mathcal{S}$ if, for ground truth $Y$ and group membership $A$:\n\\begin{equation*}\\label{EOeq}\\small\n \\begin{split}\n \\text{Pr} ( \\hat{Y}=1 \\,|\\, Y = y, A=a ) = \\text{Pr} ( \\hat{Y}=1 \\,|\\, Y = y, A=a' )\\\\ \\forall y \\in \\{0,1\\}, \\forall a,a' \\in \\mathcal{S}\n \\end{split}\n\\end{equation*}\n\nIn short, the true positive (TP) and true negative (TN) rates should be equal across groups. 
\n\n\\subsection{Equalized Odds Post Processing}\n\\citet{NIPS2016_6374} proposed a model-agnostic post-processing algorithm that minimizes this group specific error discrepancy while considering performance. Briefly, the post-processing algorithm determines group-specific random thresholds based on the intersection of group-specific ROC curves. The multi-modality of our underlying data and the importance of privacy concerns in the clinical setting make post-processing especially attractive as it allows fairness to be achieved agnostic to the inner workings of the base classifier. \n\n\n\\subsection{Debiasing word embeddings}\n\nPretrained word embeddings encode the societal biases of the underlying text on which they are trained, including gender roles and racial stereotypes\n\\citep{bolukbasi2016man, zhao-etal-2018-learning,manzini-etal-2019-black}. \nRecent work has attempted to mitigate this bias in context-free embeddings while preserving the utility of the embeddings.\n\\citet{bolukbasi2016man} analyzed gender subspaces by comparing distances between word vectors with pairs of gender-specific words to remove bias from gender-neutral words. \\citet{manzini-etal-2019-black} extended this work to the multi-class setting, enabling debiasing in race and religion. Concurrent to their work, \\cite{ravfogel2020null} propose iterative null space projection as a technique to hide information about protected attributes by casting it into the null space of the classifier. Following the recent popularity of BERT and ELMo, \\citet{liang2020towards} consider extending debiasing to sentence-level, contextualized representations. \n\n\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{img\/arc5.png}\n \\caption{Experimental setup and ensemble architecture. Fairness approaches are indicated in dotted boxes. 
\n \n }\n \\label{fig:architecture}\n\\end{figure}\n\n\n\n\n\\section{Experimental Setup}\\label{sec:experimental_setup}\n\n\\subsection{Clinical Prediction Tasks} \\label{clinical_pred_tasks}\nMIMIC-III contains deidentified health data associated with approximately 60,000 intensive care unit (ICU) admissions \\citep{MIMIC}. It contains both unstructured textual data (in the form of clinical notes) and structured data (in the form of clinical time series data and demographic, insurance, and other related meta-data). \nWe focus on two benchmark binary prediction tasks for ICU stays previously proposed by \\citet{Harutyunyan2019}: in-hospital mortality prediction (IHM), which aims to predict mortality based on the first 48 hours of a patient's ICU stay, and phenotyping, which aims to retrospectively predict the acute-care conditions that impacted the patient. Following \\citet{khadanga2019using} we extend the prediction tasks to leverage the clinical text linked to each ICU stay. For both tasks the classes are highly imbalanced: in the IHM task only 13.1\\% of training examples are positive, and the relative imbalance of the labels in the phenotyping task can be seen in Figure \\ref{fig:phenotype_perc_tr}. To account for the label imbalance we evaluate performance using AUC ROC and AUC PRC. More details can be found in Appendix \\ref{appendix:a}. \n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=\\linewidth]{img\/phen_label_train_percentages.png}\n \\caption{Percentage of positive train cases for each of the 25 phenotyping tasks. The critical care conditions corresponding to the task codes can be found in Table \\ref{tab:phen_perc_legend} of the Appendix.}\n \\label{fig:phenotype_perc_tr}\n\\end{figure}\n\n\n\\subsection{Fairness Definition}\nNext, we consider how we can extend a definition of fairness to this multimodal task. 
Following work by \\citet{10.1145\/3368555.3384448} in the single-modality setting, we examine True Positive and True Negative rates on our clinical prediction task between different protected groups. Attempting to equalize these rates corresponds to satisfying \\emph{Equalized Odds}. EO satisfies many desiderata within clinical settings, and has been used in previous clinical fairness work \\citep{10.1145\/3306618.3314278, doi:10.1111\/j.1468-2850.1997.tb00104.x, pmlr-v106-pfohl19a}. While EO does not explicitly incorporate the \\textit{multimodality} of our data, it accurately emphasizes the importance of the \\textit{downstream} clinical prediction task on the protected groups. Nonetheless, we acknowledge that EO alone is insufficient for practical deployment; na\\\"ive application can result in unacceptable performance losses and thus consultations with physicians and stakeholders must be held \\citep{Rajkomar2018}.\n\n\n\n\n\n\n\n\n\n\n\\subsection{Classification Models}\n\nWe provide brief descriptions below with details available in Appendix \\ref{appendix:b}.\n\\begin{itemize}\n\\setlength{\\itemsep}{-0.9pt}\n\\item \\textbf{Structured Data Model: }Following \\citet{Harutyunyan2019}, we use a channel-wise bidirectional Long Short Term Memory network (bi-LSTM). \n\\item \\textbf{Unstructured Textual Data:} We use a CNN encoder to extract the semantic features from clinical notes. Importantly, we experiment with training word embeddings from scratch and utilizing pre-trained BioWordVec embeddings \\citep{zhang2019biowordvec}.\n\n\n\\item \\textbf{Ensemble:} We perform logistic regression on the output binary classification probabilities from the previous models. \n\n\\end{itemize}\n\\section{Fairness Setup}\n\n\n\n\n\n\n\n\\subsection{Sensitive groups}\nRecall that EO explicitly ensures fairness with respect to sensitive groups while debiasing implicitly depends upon it. 
Leveraging the demographic data in MIMIC-III, we consider ethnicity (divided into Asian, Black, Hispanic, White and other), biological sex (divided into male and female), and insurance type (divided into government, medicare, medicaid, self-pay, private, and unknown). With the exception of biological sex, the sensitive groups are highly imbalanced (see Table \\ref{tab:in_hosp_mort_groups}). Note that insurance-type has been shown to be a proxy for socioeconomic status (SES) \\cite{pmid30794127}.\n\n\\begin{table}[]\n \\centering\n \n \\begin{tabular}{|c|c|c|c|}\n \\hline\n {\\begin{tabular}[c]{@{}c@{}}Sensitive \\\\ Group \\end{tabular}} & {\\begin{tabular}[c]{@{}c@{}}Train \\\\ Count\\end{tabular}} & {\\begin{tabular}[c]{@{}c@{}} Test \\\\ Count \\end{tabular}} & \\% of Test \\\\\n \\hline\n F & 7940 & 1415 & 44.0 \\% \\\\\n M & 9708 & 1778 & 56.0 \\% \\\\\n \\hline\n ASIAN & 408 & 60 & 1.9 \\% \\\\\n BLACK & 1658 & 285 & 8.9 \\% \\\\\n HISPANIC & 521 & 107 & 3.3 \\% \\\\\n OTHER & 2655 & 459 & 14.4 \\% \\\\\n WHITE & 12406 & 2282 & 71.5 \\% \\\\\n \\hline\n Government & 356 & 74 & 2.3 \\% \\\\\n Medicaid & 1362 & 205 & 6.4 \\% \\\\\n Medicare & 9857 & 1757 & 55.0 \\% \\\\\n Private & 4946 & 932 & 29.2 \\% \\\\\n Self Pay & 133 & 33 & 1.0 \\% \\\\\n UNKNOWN & 994 & 192 & 6.1 \\% \\\\\n \\hline\n \\end{tabular}\n \n \\caption{Distribution of sensitive-attributes over train and test data for the In-Hospital Mortality task}\n \\label{tab:in_hosp_mort_groups}\n\\end{table}\n\n\\subsection{Equalized Odds Post-Processing}\nWe apply our equalized-odds post processing algorithm on the predictions of the trained single-modality classifiers (physiological signal LSTM model as well as text-only CNN model) as well as the trained ensemble classifier. Note that we apply EO postprocessing only once for each experiment: either on the outputs of the single-modality model, or on the ensemble predictions. 
\n The fairness approaches are mutually exclusive: we do not consider applying EO postprocessing together with debiased word embeddings. We consider using both soft prediction scores (interpretable as probabilities) as well as thresholded hard predictions as input to the post-processing algorithm. These choices impact the fairness performance trade-off as discussed further in Section \\ref{sec:Results}.\n\n\n\n\\subsection{Socially Debiased Clinical Word Embeddings}\n\nWhile clinically pre-trained word embeddings may improve downstream task performance, they are not immune from societal bias \\cite{khattak2019survey}. \nWe socially debias these clinical word embeddings following \\citet{manzini-etal-2019-black}. We manually select sets of social-specific words (see Appendix \\ref{appendix:c}) to identify the fairness-relevant social bias subspace. \nFormally, having identified the basis vectors $\\{b_{1}, b_{2}, ..., b_{n}\\}$ of the social bias subspace $\\mathcal{B}$, we can find the projection $w_\\mathcal{B}$ of a word embedding $w$:\n\\[w_\\mathcal{B} = \\sum_{i=1}^{n} \\langle{w}, b_{i}\\rangle{b_{i}}\\]\n\nNext we apply hard debiasing, which removes bias from existing word embeddings by subtracting $w_\\mathcal{B}$, their component in this fairness subspace. This yields $w'$, our socially debiased word embedding: \n\n\\begin{align*}\n w' = \\frac{w - w_\\mathcal{B}}{\\|w-w_\\mathcal{B}\\|}\n\\end{align*}\n\n\n\n\n\n\n\n\nWe consider debiasing with respect to race and gender. 
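The projection and renormalization above can be sketched in a few lines (a minimal illustration assuming an orthonormal bias basis $b_1, \\ldots, b_n$ is already in hand; identifying that basis from the social-specific word sets, e.g. via PCA, is omitted):

```python
import math

def hard_debias(w, basis):
    """Remove a word vector's component in the bias subspace and renormalize.

    w     : word embedding (list of floats)
    basis : orthonormal basis vectors b_1..b_n of the bias subspace B
    """
    # w_B = sum_i <w, b_i> b_i  (projection onto the bias subspace)
    w_B = [0.0] * len(w)
    for b in basis:
        coef = sum(wi * bi for wi, bi in zip(w, b))
        w_B = [p + coef * bi for p, bi in zip(w_B, b)]
    # w' = (w - w_B) / ||w - w_B||
    diff = [wi - pi for wi, pi in zip(w, w_B)]
    norm = math.sqrt(sum(d * d for d in diff))
    return [d / norm for d in diff]

# Toy example: bias subspace spanned by the first coordinate axis.
w = [3.0, 4.0, 0.0]
w_debiased = hard_debias(w, basis=[[1.0, 0.0, 0.0]])
# The component along the bias direction is removed: [0.0, 1.0, 0.0]
```

The returned vector is orthogonal to every basis direction of $\\mathcal{B}$ and has unit norm, matching the $w'$ above.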
The race debiased embeddings are re-used for insurance tasks as empirical research has indicated that the use of proxy groups in fairness can be effective \citep{DBLP:journals\/corr\/abs-1806-11212} and SES is strongly related to race \citep{Williams2016}.\n\n\n\n\section{Results and Analysis}\label{sec:Results}\n\n\n\n\n\n\begin{table}[h]\n\centering\n\resizebox{\columnwidth}{!}{\n\begin{tabular}{clllll}\n\cline{2-5}\n\multicolumn{1}{l|}{} &\n \multicolumn{2}{c|}{IHM} &\n \multicolumn{2}{c|}{Phenotyping} &\n \multicolumn{1}{l}{} \\\\ \cline{1-5}\n\multicolumn{1}{|l|}{} &\n \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}AUC \\\\ PRC\end{tabular}} &\n \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}AUC \\\\ ROC\end{tabular}} &\n \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Macro \\\\ AUCROC\end{tabular}} &\n \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Overall \\\\ AUCROC\end{tabular}} &\n \\\\ \cline{1-5}\n \multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Harutyunyan et al.\\\\ (2019) -- No Text\end{tabular}}& \multicolumn{1}{l|}{0.515} & \multicolumn{1}{l|}{0.862} & \n\multicolumn{1}{c|}{0.776} &\n\multicolumn{1}{c|}{0.825} & \\\\ \cline{1-5}\n\multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Khadanga et al. \\\\ (2019) -- Ensemble\end{tabular}} & \multicolumn{1}{l|}{0.525} & \multicolumn{1}{l|}{0.865} &\n\multicolumn{1}{c|}{--} & \n\multicolumn{1}{c|}{--} & \\\\ \cline{1-5}\n\multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Ours -- Text Only\end{tabular}} &\n \multicolumn{1}{l|}{0.472} &\n \multicolumn{1}{l|}{0.815} &\n\multicolumn{1}{c|}{0.766} & \n\multicolumn{1}{c|}{0.829} & \\\\ \cline{1-5}\n \multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Ours -- Text Only \\\\ + BioWordVec \end{tabular}} &\n \multicolumn{1}{l|}{0.489} &\n \multicolumn{1}{l|}{0.841} &\n\multicolumn{1}{c|}{0.771} & \n\multicolumn{1}{c|}{0.837} & \\\\ \cline{1-5}\n\multicolumn{1}{|c|}{Ours -- Ensemble} & \multicolumn{1}{l|}{\textbf{0.582}} & \n\multicolumn{1}{l|}{0.880} & \n\multicolumn{1}{c|}{0.822} &\n\multicolumn{1}{c|}{0.861} & \\\\ \cline{1-5}\n\multicolumn{1}{|c|}{\begin{tabular}[c]{@{}c@{}}Ours -- Ensemble \\\\ + BioWordVec\end{tabular}} & \multicolumn{1}{l|}{\textbf{0.582}} & \multicolumn{1}{c|}{\textbf{0.886}} & \n\multicolumn{1}{c|}{\textbf{0.829}} & \n\multicolumn{1}{c|}{\textbf{0.870}} & \\\\ \cline{1-5}\n\end{tabular}}\n\caption{Leveraging clinical pretrained word embeddings improves performance compared to training word embeddings from scratch in the text-only model. Ensembling the text-only model with the clinical time series classifier improves performance further. \n }\n\label{tab:my-table}\n\end{table}\n\n\n\n\n\n\n\n\begin{figure*}[htp]\n \centering\n \includegraphics[width=\linewidth]{img\/phen_lipid_ETHNICITY_t14.png}\n \caption{ \n Plots of TP Rate, TN Rate, and AUC on phenotyping task M for groups defined by the sensitive attribute of race. Each vertical black line represents a classifier (line style indicating modality); the length of the line represents the range of scores over fairness groups. 
In the TP\/TN graphs, a shorter line represents better fairness: there is less discrepancy between the maximum and minimum group-specific TP\/TN rates. In the AUC graph (far right), the higher the vertical position of the line, the better the performance. EO is effective at reducing the spread in TP\/TN rates for the ensemble classifier (first two graphs) at the cost of performance (far-right graph). Meanwhile, debiased word embeddings both improve fairness, shortening the lines in the first two graphs, and achieve superior performance in the AUC graph.}\label{fig:big_plot}\n\end{figure*}\n\n\subsection{Ensembling clinical word embeddings with structured data improves performance} \n \nEmpirically, we observe performance superior to the prior literature on a suite of clinical prediction tasks in Table \ref{tab:my-table}; more tasks are evaluated in Appendix Table \ref{appendix:a}. Full hyperparameter settings and code for reproducibility are available online\footnote{\url{https:\/\/github.com\/johntiger1\/multimodal_fairness\/clinicalnlp}}. The ensemble model\noutperforms both constituent classifiers (AUC plot in Figure \ref{fig:big_plot}). This holds even when fairness\/debiasing techniques are applied, emphasizing the overall effectiveness of leveraging multi-modal data. However, the ensemble's improvements in performance do not directly translate to improvements in fairness; see the True Positive (TP) graph in Figure \ref{fig:big_plot}, where the maximum TP gap persists under the ensemble.\n\n\subsection{Debiased word embeddings and the fairness performance trade-off}\nImproving fairness usually comes at the cost of reduced performance \citep{pmlr-v81-menon18a}. 
Indeed, across all tasks, fairness groups and classifiers, we observe the group-specific disparities of TP and TN rates generally diminish when equalized odds post-processing is used (see Appendix \\ref{sec:appendix:full_result} for additional results). However, this post-processing also leads to a degradation in the AUC. \nNote that we apply EO-post processing on hard (thresholded) predictions of the classifiers. If instead \\textit{soft} prediction scores are used as inputs to the post-processing step, both the performance degradation and the fairness improvement are softened \\citep{NIPS2016_6374}. \n\n\n\n\n\n\nGenerally, word embedding debiasing (WED) also helps reduce TP\/TN discrepancies, although not to the same extent as EO postprocessing. Remarkably, in certain tasks, WED also yields a performance improvement, even compared to the fairness-free, unconstrained ensemble classifier. In particular, for the AUC graph in Figure \\ref{fig:big_plot}, leveraging debiased word embeddings improves the performance of the ensemble; at the same time, the TP and TN group discrepancy ranges are improved. However, we stress that this outcome was not consistently observed and further investigation is warranted.\n\n\n\n\nWe emphasize that EO and WED serve different purposes with different motivations. While EO explicitly seeks to minimize the TP\/TN range between sensitive groups (reflected in its performance on the first two plots in Figure \\ref{fig:big_plot}), WED seeks to neutralize text-specific bias in the word-embeddings. Despite the difference in goals, and despite operating only on the text-modality of the dataset, WED is still able to reduce the group-specific TP\/TN range; recent work on \\textit{proxy fairness} in text has shown that indirect correlation between bias in text and protected attributes may be useful in achieving parity \\citep{romanov2019s}. 
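The group-specific TP\/TN ranges discussed here can be computed directly from hard predictions; a minimal NumPy sketch (the function name and toy labels are illustrative, not from our codebase):

```python
import numpy as np

def tp_tn_ranges(y_true, y_pred, groups):
    """Max-min spread of group-specific true-positive and true-negative rates.

    A smaller range indicates predictions closer to equalized odds.
    """
    tp, tn = [], []
    for g in np.unique(groups):
        m = groups == g
        pos = y_true[m] == 1
        tp.append(float((y_pred[m][pos] == 1).mean()))
        tn.append(float((y_pred[m][~pos] == 0).mean()))
    return max(tp) - min(tp), max(tn) - min(tn)

# Toy example: the classifier is perfect on group A but errs on one
# positive and one negative case in group B
y_true = np.array([1, 1, 0, 0,  1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0,  1, 0, 0, 1])
groups = np.array(["A"] * 4 + ["B"] * 4)
tp_gap, tn_gap = tp_tn_ranges(y_true, y_pred, groups)  # (0.5, 0.5)
```

These two gaps are the quantities the vertical lines in Figure \ref{fig:big_plot} visualize, and the quantities EO post-processing tries to shrink.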
\n\nAlthough WED demonstrates some good properties with respect to both fairness and performance for our specific dataset and task, we caution that it represents only one approach to fairness in NLP \cite{blodgett2020language}. Indeed, WED suffers from shortcomings related to intersectional fairness \citep{gonen2019lipstick}, and we encourage further discussion into concretely defining fair, real-world NLP tasks and developing novel algorithms.\n\nOur results highlight the important role practitioners and stakeholders play in algorithmic fairness in clinical applications. The trade-off between performance and fairness, whether between the soft and hard labels used for EO, or between EO and debiased word embeddings, must be balanced based on numerous real-world factors. \n\n\section{Discussion}\n\n In this paper, we propose a novel multimodal fairness task for the MIMIC-III dataset, based on equalized odds. We provide two baselines: a classifier-agnostic fairness algorithm (equalized odds post-processing) and a text-specific fairness algorithm (debiased word embeddings). We observe that both methods generally follow the fairness performance tradeoff seen in single-modality tasks. EO is more effective at reducing the disparities in group-specific error rates while word-embedding debiasing has better performance. \n Future work can consider more generalized notions of fairness such as preference-based frameworks, or extend text-specific fairness to contextualized word embeddings \citep{hossain2020designing, 10.1145\/3368555.3384448}. \n Further analysis of the fairness performance tradeoff, especially in multimodal settings, will facilitate equitable decision making in the clinical domain.\n \n \section{Acknowledgements}\n We would like to acknowledge Vector Institute for office and compute resources. 
We would also like to thank Matt Gardner for his help in answering questions about AllenNLP \cite{Gardner2017AllenNLP}. John Chen and Safwan Hossain are funded by an Ontario Graduate Scholarship and a Vector Institute Research Grant. Ian Berlot-Attwell is funded by a Canada Graduate Scholarships-Master's, and a Vector Institute Research Grant. Frank Rudzicz is supported by a CIFAR Chair in AI.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\nOptical parametric generation via a quadratic nonlinearity has been extensively studied for its capability of wavelength conversion through elastic photon-photon scattering, constituting the basis of various applications including coherent radiation \cite{Dunn99}, spectroscopy \cite{Rosenman99}, frequency metrology \cite{Cundiff03}, and quantum information processing \cite{Pan12}. With the ability to strongly confine optical modes at the micro-\/nano-scale, a number of integrated photonic platforms have been developed for strong nonlinear optical effects with high efficiencies and low power consumption \cite{Harris06, Vuckovic09, Lipson11, Pavesi12, Solomon14, Tang16Optica, Watts17}.\n\nAmong all the integrated nonlinear photonic platforms, lithium niobate (LN) has recently attracted remarkable attention, owing to its wide transparency window and strong quadratic optical nonlinearity. To date, a variety of nanophotonic systems, including waveguides \cite{Pertsch15, Bowers16, Loncar17OE, Fathpour17APL, Luo18Optica, Ding18, Luo18semi}, microdisks \cite{Loncar14, Cheng16, Luo17OE, Xu17, Cheng18, Xiao18}, microrings \cite{Zappe18, Huang18}, and photonic crystal cavities \cite{Pertsch13, Jiang18, Li18}, have been studied for optical parametric processes in LN. 
Cavity-enhanced nonlinear wavelength conversion has been demonstrated in doubly\/triply resonant LN microresonators through a number of techniques including modal phase matching \\cite{Loncar14, Xu17, Xiao18, Huang18}, cyclic phase matching \\cite{Cheng16, Luo17OE, Cheng18}, and quasi-phase matching \\cite{Zappe18}. However, the potential of the LN integrated platform has not yet been fully explored for efficient nonlinear parametric processes, and current devices demonstrate only moderate efficiencies far from what LN can provide. Here, we report optical parametric generation in a high-$Q$ Z-cut LN microring resonator through exact modal phase matching. The device exhibits optical $Q$'s of $\\sim$$10^5$ for the designed cavity modes in the 1550 and 780 nm bands, and both modes are well coupled to a single bus waveguide, enabling us to conveniently measure a second-harmonic generation (SHG) efficiency of 1,500$\\%~\\rm {W^{-1}}$. In addition, by pumping into the mode in the 780 nm band, we are also able to observe difference-frequency generation (DFG) in the telecom band. Our work shows the great promise of modal-phase-matched LN microresonators for efficient optical parametric generation.\n\n\n\\begin{figure*}[t!]\n\t\\centering\\includegraphics[width=2\\columnwidth]{Fig1-min.pdf}\n\t\\caption{(a) Experimental setup for device characterization and optical parametric generation. VOA: variable optical attenuator; WDM: wavelength-division multiplexer; LPF: long-pass filter; OSA: optical spectrum analyzer. (b) Numerically simulated effective indices of the TM$_{00}$ mode at 1550 nm and the TM$_{20}$ mode at 775 nm, as functions of the top width $w$ of a straight waveguide. Other waveguide parameters are $h_1$=550 nm, $h_2$=50 nm, and $\\theta$=75$^\\circ$. (c) Scanning electron microscopy image of our LN microring. 
(d) Zoom-in of the bus-ring coupling region.} \label{Fig1}\n\end{figure*}\n\n\section{Design and characterization}\n\nIn order to achieve modal phase matching in a microresonator, we performed photonic design with a Z-cut LN thin film, whose optical axis lies vertically, showing no anisotropy of refractive index in the device plane. To utilize the largest nonlinear term $d_{33}$, we designed for phase matching between the fundamental quasi-transverse-magnetic mode (TM$_{00}$) at 1550 nm and a high-order mode TM$_{20}$ at 775 nm. For simplicity, we performed numerical simulation of effective indices in a straight waveguide, as a guideline for microring resonators with a relatively large radius, which is 50 $\mu$m in our study. Fig.~\ref{Fig1}(b) presents the simulation result by the finite element method, which shows that for a waveguide thickness of 600 nm, modal phase matching occurs for TM$_{00}$ at 1550 nm and TM$_{20}$ at 775 nm when the waveguide width is about 690 nm. For a microring resonator with the same cross-section, since the Z-cut LN thin film is isotropic in the device plane, the phase matching condition is consistently satisfied at any azimuthal angle, which is expected to produce strong SHG as the phase-matched fundamental-frequency (FF) light travels around the cavity.\n\nOur device fabrication started from a Z-cut LN-on-insulator wafer by NANOLN, with a 600-nm-thick LN thin film sitting on a 3-$\mu$m-thick buried oxide layer and a silicon substrate, and the process was similar to that of our previous work \cite{Li18}. Fig.~\ref{Fig1}(c) shows a fabricated microring resonator, coupled to a pulley waveguide \cite{Adibi10, Lu18}, and Fig.~\ref{Fig1}(d) gives a closer look at the coupling region. 
Later device characterization shows that a bus waveguide top width of $\\sim$200 nm, a gap (measured at the top surface of the LN thin film) of $\\sim$350 nm, and a coupling length of $\\sim$20 $\\mu$m are able to give good coupling for both the fundamental-frequency (FF) and the second-harmonic (SH) modes.\n\nAfter fabricating the device, we conducted experiments to characterize its linear optical properties and demonstrate nonlinear parametric generation, with the setup shown in Fig.~\\ref{Fig1}(a). We used two continuous-wave tunable lasers, one in the telecom band around 1550 nm, the other in the near-infrared (NIR) around 780 nm. Light from both lasers was combined by a 780\/1550 wavelength-division multiplexer (WDM), and launched into the on-chip bus waveguide via a lensed fiber. The bus waveguide coupled light at both wavebands into and out of the microring resonator, inside which nonlinear optical parametric processes took place. A second lensed fiber was used to collect output light from the chip, and a second 780\/1550 WDM was utilized to separate light at the two wavebands. At the 1550 port of the WDM, a long-pass filter that passes light with a wavelength over 1100 nm was used to eliminate residual NIR light, and the telecom light was further split into two paths, one to an InGaAs detector for characterization, and the other to an optical spectrum analyzer (OSA) for spectral analysis of DFG; at the 780 port, the NIR light was also split into two paths, one to a Si detector for characterization, and the other to an OSA for detection of SHG. Variable optical attenuators were employed to study power-dependent properties, and polarization controllers were used for optimal coupling of the wanted polarization.\n\n\nIn order to obtain the linear optical properties of our microring resonator, we scanned the wavelengths of both lasers and measured the transmission spectra near both 1550 and 780 nm, as shown in Fig.~\\ref{Fig2}(a) and (b). 
Our microring resonator exhibits a single TM mode family near 1550 nm, and the mode at 1547.10 nm, which is the FF mode for modal-phase-matched SHG, is almost critically coupled, with a coupling depth of $\\sim$99\\% and a loaded optical $Q$ of 1.4$\\times10^5$ [see Fig.~\\ref{Fig2}(c)]. On the other hand, the SH mode at 773.55 nm is under-coupled, with a coupling depth of $\\sim$83\\% and a loaded optical $Q$ of 9$\\times10^4$ [see Fig.~\\ref{Fig2}(d)]. The fiber-to-chip coupling losses are about 6.9 and 11.4 dB\/facet for the FF and SH modes, respectively. These high optical $Q$'s, together with the large nonlinearity in the designed type-0 process using $d_{33}=27$ pm\/V, indicate strong and efficient nonlinear optical interactions in phase-matched parametric generation with cavity enhancement.\n\n\n\\begin{figure}[t!]\n\t\\centering\\includegraphics[width=1\\columnwidth]{Fig2.pdf}\n\t\\caption{Transmission spectra of the LN microring near (a) 1550 nm, and (b) 780 nm. TM$_{00}$ modes around 1550 nm and TM$_{20}$ modes around 780 nm are indicated by purple and magenta arrows, respectively, with big arrows showing the phase-matched modes. (c) and (d) Detailed transmission spectra of the two phase-matched modes, with experimental data shown as solid curves and fittings shown as dashed curves. } \\label{Fig2}\n\\end{figure}\n\n\\section{Optical parametric generation}\n\nTo study SHG in the microring resonator, we launched pump power into the FF mode at 1547.10 nm, and saw strong scattering of generated NIR light from the resonator by an optical microscope, with an example shown as the inset of Fig.~\\ref{Fig3}. By varying the pump power, we obtained the power dependence of the SHG, as shown in Fig.~\\ref{Fig3}. The experimental data exhibit a quadratic relation between the generated SH power and the FF pump power, which is the signature of SHG in the low-pump-power regime. The measured conversion efficiency is 1,500$\\%~\\rm{W^{-1}}$. 
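As a quick numeric sanity check of the figures quoted in this section (a sketch only; the 440 $\mu$W pump level is just an example value):

```python
# Loaded Q's and resonance wavelengths of the two phase-matched modes
lam_ff, Q_ff = 1547.10e-9, 1.4e5   # fundamental-frequency (FF) mode
lam_sh, Q_sh = 773.55e-9, 9.0e4    # second-harmonic (SH) mode

# Cavity linewidths implied by the loaded Q's: dlam = lam / Q
dlam_ff_pm = lam_ff / Q_ff * 1e12  # ~11 pm
dlam_sh_pm = lam_sh / Q_sh * 1e12  # ~8.6 pm

# Quadratic SHG in the undepleted-pump regime: P_SH = Gamma * P_FF^2,
# with the measured Gamma = 1,500 %/W = 15 W^-1
Gamma = 15.0                       # W^-1
P_ff = 440e-6                      # example on-chip pump power (W)
P_sh = Gamma * P_ff**2             # ~2.9 uW of generated second harmonic
```

The picometer-scale linewidths implied by these $Q$'s are what make the phase-matching condition so sensitive to fabrication non-uniformity, as discussed in the theoretical analysis.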
This efficiency is more than one order of magnitude higher than those in many other LN microresonators \\cite{Loncar14, Cheng16, Luo17OE, Xiao18, Zappe18, Huang18, Pertsch13, Jiang18, Li18}. It is even comparable with a recent study of cyclic phase matching in an X-cut microdisk exhibiting an ultra-high $Q$ of $\\sim$10$^7$ \\cite{Cheng18}, two orders of magnitude higher than that of our microring resonator, directly showing the advantage of exact modal phase matching. With future optimization of the optical $Q$'s of our device (say, by using a thicker LN film and an oxide cladding to reduce the sidewall scattering loss), we expect a further increase in the conversion efficiency.\n\nThe measured efficient SHG validated phase matching in our microring, and also indicated its capability of other parametric processes. In order to explore this, we launched power in both the SH mode, and one of the modes near the FF mode. Fig.~\\ref{Fig4} presents the recorded spectra in the telecom band. With only 6.6 $\\mu$W of on-chip power at the SH mode, we were able to convert long-wavelength telecom light coherently into shorter wavelengths through DFG. The long-wavelength pump power launched on chip was 105 $\\mu$W, and the generated power at the difference frequencies was about 480 pW, indicating a conversion rate of about -53 dB.\n\n\n\\begin{figure}[t!]\n\t\\centering\\includegraphics[width=0.9\\columnwidth]{Fig3-min.pdf}\n\t\\caption{ Power dependence of SHG, showing a quadratic relation between the SH power and the pump power. The measured conversion efficiency is 1,500$\\%~\\rm {W^{-1}}$. The inset shows an optical image of generated SH light scattered from the microring when the pump power was 440 $\\mu$W. } \\label{Fig3}\n\\end{figure}\n\n\\section{Theoretical analysis}\nIn order to acquire a better understanding of nonlinear parametric processes in our device, we can analyze the system with a model derived from the coupled mode theory \\cite{BoydBook, Johnson07}. 
With no pump depletion, the conversion efficiency of modal-phase-matched SHG in our doubly resonant ($\\omega_2=2\\omega_1$) optical cavity can be calculated by\n\\begin{equation}\n\\Gamma \\equiv \\frac{P_2}{P_1^2 } \\approx \\frac{64 \\gamma^2}{\\hbar \\omega_1^4} \\frac{Q_{1l}^4 Q_{2l}^2}{Q_{1e}^2 Q_{2e}}, \\label{Gamma}\n\\end{equation}\nwhere $P_1$ ($P_2$) is the launched (produced) optical power at the FF (SH); $\\hbar$ is the reduced Planck constant; $\\omega_1$ ($\\omega_2$) is the optical frequency, $Q_{1l}$ ($Q_{2l}$) is the loaded $Q$, and $Q_{1e}$ ($Q_{2e}$) is the coupling $Q$ of the FF (SH) mode; and $\\gamma$ is the single-photon coupling strength written as\n\\begin{equation}\n\\gamma=\\sqrt{\\frac{\\hbar \\omega_1^2 \\omega_2}{2\\epsilon_0 \\tilde{\\epsilon}_1^2 \\tilde{\\epsilon}_2 }}\\frac{d_{eff}\\zeta}{\\sqrt{V_{eff}}}. \\label{gamma}\n\\end{equation}\nIn Eq.~\\ref{gamma}, $\\epsilon_0$ is the vacuum permittivity, $\\tilde{\\epsilon}_1$ ($\\tilde{\\epsilon}_2$) is the relative permittivity of the nonlinear medium at the FF (SH), $d_{eff}$ is the effective nonlinear coefficient, $\\zeta$ is the mode overlap factor represented as \n\\begin{equation}\n\\zeta = \\frac{ \\int_{\\chi^{(2)}} (E_{1z}^*)^2 E_{2z} d^3x}{|\\int_{\\chi^{(2)}} |\\vec{E}_1|^2 \\vec{E}_1 d^3x|^{\\frac{2}{3}} |\\int_{\\chi^{(2)}} |\\vec{E}_2|^2 \\vec{E}_2 d^3x|^{\\frac{1}{3}}}, \\label{zeta}\n\\end{equation}\nand $V_{eff} \\equiv (V_{1}^2 V_{2})^{\\frac{1}{3}}$ is the effective mode volume, with\n\\begin{equation}\nV_{i} = \\frac{ (\\int_{ all} \\epsilon_i |\\vec{E}_i|^2 d^3x)^3 }{|\\int_{\\chi^{(2)}} \\epsilon_i^{\\frac{3}{2}} |\\vec{E}_i|^2 \\vec{E}_i d^3x|^2},(i=1,2),\n\\end{equation}\nwhere $\\int_{\\chi^{(2)}}$ and $\\int_{all}$ denote three-dimensional integration over the nonlinear medium and all space, respectively, $\\epsilon_1(\\vec{r})$ [$\\epsilon_2(\\vec{r})$] is the relative permittivity at the FF (SH), and $E_{1z}$ ($E_{2z}$) is the z-component of 
$\\vec{E}_1(\\vec{r})$ [$\\vec{E}_2(\\vec{r})$], the electric field of the FF (SH) cavity mode.\n\nWith the equations above, the SHG efficiency in our LN microring is calculated to be $\\Gamma \\approx 30,000\\%~\\rm{W^{-1}}$. Thus, there is more than one order of magnitude difference between the theoretical prediction and our experimental result. The main reason for this discrepancy is likely non-uniformity of the microring at different azimuthal angles. By simulation, a change of 1 nm in the waveguide width, for example, will lead to a shift of $\\sim-3$ nm in the phase-matched pump wavelength of SHG. Considering the small linewidths of our cavity modes, which are only 11 pm for the FF mode and 9 pm for the SH mode, the phase-matching window is easily shifted out of the cavity resonances due to fabrication imperfections. Besides, there is also a certain level of non-uniformity in the thickness of the LN thin film, giving more uncertainty to the uniformity of phase matching. We believe these technical issues will be addressed by optimized fabrication techniques in the near future, and the conversion efficiency can be significantly improved.\n\n\\begin{figure}[t!]\n\t\\centering\\includegraphics[width=0.9\\columnwidth]{Fig4.pdf}\n\t\\caption{Recorded DFG spectra, when pumping at the SH mode in the NIR and one of the five nearest modes with longer wavelengths than the FF mode in the telecom band. Pump power at the SH mode was 6.6 $\\mu$W.} \\label{Fig4}\n\\end{figure}\n\n\\section{Conclusion}\n\nIn conclusion, we have demonstrated optical parametric generation in a LN microring resonator with modal phase matching. We have used a single bus waveguide to conveniently couple the FF and SH modes, both showing coupling depths over 80\\% and exhibiting loaded optical $Q$'s around $10^5$, resulting in a measured conversion efficiency of 1,500$\\%~\\rm{W^{-1}}$ for SHG. In addition, we have also observed DFG in the telecom band. 
Our work represents an important step towards ultra-highly efficient optical parametric generation in photonic circuits based on the LN integrated platform.\n\n\n\\section*{Acknowledgments}\nThe authors thank Xiyuan Lu at National Institute of Standards and Technology for helpful discussions. This work was supported in part by the National Science Foundation under Grant No.~ECCS-1641099, ECCS-1509749, and ECCS-1810169, and by the Defense Threat Reduction Agency under the Grant No.~HDTRA1827912. The project or effort depicted was or is sponsored by the Department of the Defense, Defense Threat Reduction Agency. The content of the information does not necessarily reflect the position or the policy of the federal government, and no official endorsement should be inferred. This work was performed in part at the Cornell NanoScale Facility, a member of the National Nanotechnology Coordinated Infrastructure (National Science Foundation, ECCS-1542081), and at the Cornell Center for Materials Research (National Science Foundation, DMR-1719875).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe Markov regime switching model has been a popular framework for empirical work in economics and finance. Following the seminal contribution by \\citet{hamilton89em}, it has been used in numerous empirical studies to model, for example, the business cycle \\citep{hamilton05stlouis, morleypiger12restat}, stock market volatility \\citep{hamiltonsusmel94joe}, international equity markets \\citep{angbekaert02rfs, okimoto08jfqa}, monetary policy \\citep{schorfheide05red, simszha06aer, bianchi13restud}, and economic growth \\citep{kahnrich07jme}. Comprehensive theoretical accounts and surveys of applications are provided by \\citet{hamilton08palgrave, hamilton16hdbk} and \\citet{angtimmermann12annual}.\n\nThe number of regimes is an important parameter in applications of Markov regime switching models. 
Despite its importance, however, testing for the number of regimes in Markov regime switching models has been an unsolved problem because the standard asymptotic analysis of the likelihood ratio test statistic (LRTS) breaks down owing to problems such as unidentifiable parameters, the true parameter being on the boundary of the parameter space, and the degeneracy of the Fisher information matrix. Testing the number of regimes for Markov regime switching models with normal density, which are popular in empirical applications, poses a further difficulty because the normal density has the undesirable mathematical property that the second-order derivative with respect to the mean parameter is linearly dependent on the first derivative with respect to the variance parameter, leading to further singularity.\n\nThis paper proposes the likelihood ratio test of the null hypothesis of $M_0$ regimes against the alternative hypothesis of $M_0 + 1$ regimes for any $M_0\geq 1$ and derives its asymptotic distribution. To the best of our knowledge, the asymptotic distribution of the LRTS has not been derived for testing the null hypothesis of $M_0$ regimes with $M_0 \geq 2$. To test the null hypothesis of no regime switching, namely $M_0=1$, \citet{hansen92jae} derives an upper bound of the asymptotic distribution of the LRTS, and \citet{garcia98ier} also studies this problem. \citet{carrasco14em} propose an information matrix-type test for parameter constancy in general dynamic models including regime switching models. 
\\citet{chowhite07em} derive the asymptotic distribution of the quasi-LRTS for testing the single regime against two regimes by rewriting the model as a two-component mixture model, thereby ignoring the temporal dependence of the regimes.\\footnote{\\citet{cartersteigerwald12em} show that ignoring temporal dependence may render the quasi-maximum likelihood estimator inconsistent.} \\citet{quzhuo17wp} extend the analysis of \\citet{chowhite07em} and derive the asymptotic distribution of the LRTS that properly takes into account the temporal dependence of the regimes under some restrictions on the transition probabilities of latent regimes. \\citet{marmer08empirical} and \\citet{dufour17emreviews} develop tests for the null hypothesis of no regime switching by using different approaches from the LRTS. The studies discussed above focus on testing the single regime against two regimes. To the best of our knowledge, however, the asymptotic distribution of the LRTS for testing the null hypothesis of $M_0$ regimes with $M_0 \\geq 2$ remains unknown.\n\nSeveral papers in the literature consider tests when some parameters are not identified under the null hypothesis. These include \\citet{davies77bm, davies87bm}, \\citet{andrewsploberger94em, andrewsploberger95as}, \\citet{hansen96em}, \\citet{andrews01em}, and \\citet{liushao03as}, among others. Estimation and testing with a degenerate Fisher information matrix are investigated in an iid setting by \\citet{chesher84em}, \\citet{leechesher86joe}, \\citet{rotnitzky00bernoulli}, and \\citet{gukoenkervolgushev17wp}, among others. \\citet{chen14joe} examine uniform inference on the mixing probability in mixture models.\n\nTo facilitate the analysis herein, we develop a version of Le Cam's differentiable in quadratic mean (DQM) expansion that expands the likelihood ratio under the loss of identifiability, while adopting the reparameterization and higher-order expansion of \\citet{kasaharashimotsu15jasa}. 
In an iid setting, \\citet{liushao03as} develop a DQM expansion under the loss of identifiability in terms of the generalized score function. We extend \\citet{liushao03as} to accommodate dependent and heterogeneous data as well as modify them to fit our context of parametric regime switching models. Using a DQM-type expansion has an advantage over the ``classical'' approach based on the Taylor expansion up to the Hessian term because deriving a higher-order expansion becomes tedious as the order of expansion increases in a Markov regime switching model. Furthermore, regime switching models with normal components are not covered by \\citet{liushao03as} because their Theorem 4.1 assumes that the generalized score function is obtained by expanding the likelihood ratio twice, whereas our Section \\ref{subsec:hetero_normal} shows that the score function is a function of the fourth derivative of the likelihood ratio in the normal case.\n\nOur approach follows \\citet{dmr04as} [DMR hereafter], who derive the asymptotic distribution of the maximum likelihood estimator (MLE) of regime switching models. We express the higher-order derivatives of the period density ratios in terms of the conditional expectation of the derivatives of the period \\textit{complete-data} log-density, i.e., the log-density when the state variable is observable, by applying the missing information principle \\citep[][]{woodbury71biometrics,louis82jrssb} and extending the analysis of DMR. We then show that these derivatives of the period density ratios can be approximated by a stationary, ergodic, and square integrable martingale difference sequence by conditioning on the infinite past, and this approximation is shown to satisfy the regularity conditions for our DQM expansion.\n\nWe first derive the asymptotic null distribution of the LRTS for testing $H_0: M=1$ against $H_A: M=2$. 
When the regime-specific density is not normal, the log-likelihood function is locally approximated by a quadratic function of the \\textit{second-order} polynomials of the reparameterized parameters. When the density is normal, the degree of deficiency of the Fisher information matrix and required order of expansion depend on the value of the unidentified parameter; in particular, when the latent regime variables are serially uncorrelated, the model reduces to a finite mixture normal model in which the fourth-order DQM expansion is necessary to derive a quadratic approximation of the log-likelihood function. We expand the log-likelihood with respect to the judiciously chosen polynomials of the reparameterized parameters---which involves the \\textit{fourth-order} polynomials---to obtain a uniform approximation of the log-likelihood function in quadratic form and derive the asymptotic null distribution of the LRTS by maximizing the quadratic form under a set of cone constraints building on the results of \\citet{andrews99em,andrews01em}.\n\nTo derive the asymptotic null distribution of the LRTS for testing $H_0: M=M_0$ against $H_A: M=M_0+1$ for $M_0\\geq 2$, we partition a set of parameters that describes the true null model in the alternative model into $M_0$ subsets, each of which corresponds to a specific way of generating the null model. We show that the asymptotic distribution of the LRTS is characterized by the maximum of the $M_0$ random variables, each of which represents the LRTS for testing each of the $M_0$ subsets.\n\nWe also derive the asymptotic distribution of the LRTS under local alternatives. \\citet{carrasco14em} show that the contiguous local alternatives of their tests are of order $n^{-1\/4}$, where $n$ is the sample size. 
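Because these limit distributions are nonstandard and depend on nuisance parameters, critical values are in practice obtained by the parametric bootstrap, whose validity this paper establishes. A generic sketch of that procedure (the model-fitting callables below are hypothetical placeholders for the MLE routines):

```python
import numpy as np

def bootstrap_lrt_pvalue(lrts_obs, simulate_null, loglik_null, loglik_alt,
                         B=199, seed=0):
    """Parametric bootstrap p-value for an LRT of M_0 vs. M_0 + 1 regimes.

    lrts_obs      : observed LRTS, 2 * (max loglik under H_A - max loglik under H_0)
    simulate_null : rng -> sample drawn from the model fitted under H_0
    loglik_null   : sample -> maximized log-likelihood under H_0 (M_0 regimes)
    loglik_alt    : sample -> maximized log-likelihood under H_A (M_0 + 1 regimes)
    """
    rng = np.random.default_rng(seed)
    exceed = 0
    for _ in range(B):
        y_b = simulate_null(rng)                      # resample under the null fit
        lrts_b = 2.0 * (loglik_alt(y_b) - loglik_null(y_b))
        exceed += lrts_b >= lrts_obs
    # Add-one correction keeps the p-value strictly positive
    return (1 + exceed) / (B + 1)
```

The null hypothesis is rejected when the returned p-value falls below the nominal level.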
In a related problem of testing the number of components in finite mixture normal regression models, \\citet{kasaharashimotsu15jasa} show that the contiguous local alternatives are of order $n^{-1\/8}$ \\citep[see also][]{chenli09as, chenlifu12jasa, honguyen16as}. We show that the value of the unidentified parameter affects the convergence rate of the contiguous local alternatives. When the regime-specific density is normal, some contiguous local alternatives are of the order $n^{-1\/8}$, and the LRT is shown to have non-trivial power against them. The tests of \\citet{carrasco14em} do not have power against such alternatives, whereas the test of \\citet{quzhuo17wp} rules out such alternatives because of their restriction on the parameter space.\n\nThe asymptotic validity of the parametric bootstrap is also established both under the null hypothesis and under local alternatives. The simulations show that our bootstrap LRT has good finite sample properties. Our results also imply that the bootstrap LRT is valid for testing the number of hidden states in hidden Markov models because this paper's model includes the hidden Markov model as a special case. Although several papers have analyzed the asymptotic property of the MLE of the hidden Markov model,\\footnote{See, for example, \\citet{leroux92spa}, \\citet{francq98stat}, \\citet{krishnamurthy98jtsa}, \\citet{brr98as}, \\citet{jp99as}, \\citet{legrandmevel00math}, and \\citet{doucmatias01bernouiil}.} the asymptotic distribution of the LRTS for testing the number of hidden states has been an open question.%\n\\footnote{\\citet{gassiat00esaim} show that the LRTS for testing $H_0: M=1$ against $H_A: M=2$ diverges when state-specific densities have \\emph{known and distinct} parameter values. \\citet{dannemann08cjstat} analyze the modified quasi-LRTS for testing the null of two states against three.}\n\nThe remainder of this paper is organized as follows. 
After introducing the notation and assumptions in Section 2, we discuss the degeneracy of the Fisher information matrix and loss of identifiability in regime switching models in Section 3. Section 4 establishes the DQM-type expansion. Section 5 presents the uniform convergence of the derivatives of the density ratios. Sections 6 and 7 derive the asymptotic null distribution of the LRTS. Section 8 derives the asymptotic distribution under local alternatives. Section 9 establishes the consistency of the parametric bootstrap. Section 10 reports the results from the simulations and an empirical application, using U.S. GDP per capita quarterly growth rate data. Section 11 collects the proofs and auxiliary results.\n\n\\section{Notation and assumptions}\n\nLet $:=$ denote ``equals by definition.'' Let $\\Rightarrow$ denote the weak convergence of a sequence of stochastic processes indexed by $\\pi$ for some space $\\Pi$. For a matrix $B$, let $\\lambda_{\\min}(B) $ and $\\lambda_{\\max}(B)$ be the smallest and largest eigenvalues of $B$, respectively. For a $k$-dimensional vector $x = (x_1,\\ldots,x_k)'$ and a matrix $B$, define $|x| := \\sqrt{x'x}$ and $|B| := \\sqrt{\\lambda_{\\max}(B'B)}$. For a $k\\times 1$ vector $a=(a_1,\\ldots,a_k)'$ and a function $f(a)$, let $\\nabla_{a} f(a):=(\\partial f(a)\/\\partial a_{1},\\ldots,\\partial f(a)\/\\partial a_{k})'$, and let $\\nabla_{a}^j f(a)$ denote a collection of derivatives of the form $(\\partial^j\/\\partial a_{i_1}\\partial a_{i_2} \\ldots \\partial a_{i_j})f(a)$. Let $\\mathbb{I}\\{A\\}$ denote an indicator function that takes the value 1 when $A$ is true and 0 otherwise. $\\mathcal{C}$ denotes a generic non-negative finite constant whose value may change from one expression to another. Let $a \\vee b :=\\max\\{a,b\\}$ and $a \\wedge b :=\\min\\{a,b\\}$. Let $\\lfloor x \\rfloor$ denote the largest integer less than or equal to $x$, and define $(x)_+ := \\max\\{x,0\\}$. 
Given a sequence $\\{f_k\\}_{k=1}^n$, let $\\nu_n(f_k) := n^{-1\/2} \\sum_{k=1}^n [f_k - \\mathbb{E}_{\\vartheta^*}(f_k)]$. For a sequence $X_{n\\varepsilon}$ indexed by $n=1,2,\\ldots$ and $\\varepsilon$, we write $X_{n\\varepsilon} = O_{p\\varepsilon}(a_n)$ if, for any $\\delta>0$, there exist $\\varepsilon>0$ and $M, n_0 <\\infty$ such that $\\mathbb{P}(|X_{n\\varepsilon}\/a_n| \\leq M) \\geq 1- \\delta$ for all $n > n_0$, and we write $X_{n\\varepsilon} = o_{p\\varepsilon}(a_n)$ if, for any $\\delta_1,\\delta_2>0$, there exist $\\varepsilon>0$ and $n_0$ such that $\\mathbb{P}(|X_{n\\varepsilon}\/a_n| \\leq \\delta_1) \\geq 1- \\delta_2$ for all $n > n_0$. Loosely speaking, $X_{n\\varepsilon} = O_{p\\varepsilon}(a_n)$ and $X_{n\\varepsilon} = o_{p\\varepsilon}(a_n)$ mean that $X_{n\\varepsilon} = O_{p}(a_n)$ and $X_{n\\varepsilon} = o_{p}(a_n)$ when $\\varepsilon$ is sufficiently small, respectively. All limits are taken as $n \\to \\infty$ unless stated otherwise. The proofs of all the propositions and lemmas are presented in the appendix.\n \nConsider the Markov regime switching process defined by a discrete-time stochastic process $\\{(X_k,Y_k,W_k)\\}$, where $(X_k,Y_k,W_k)$ takes values in a set $\\mathcal{X}_M\\times \\mathcal{Y}\\times \\mathcal{W}$ with $\\mathcal{Y}\\subset \\mathbb{R}^{q_y}$ and $\\mathcal{W}\\subset \\mathbb{R}^{q_w}$, and let $\\mathcal{B}(\\mathcal{X}_M\\times\\mathcal{Y}\\times\\mathcal{W})$ denote the associated Borel $\\sigma$-field. 
For a stochastic process $\\{Z_k\\}$ and $a \\leq b$, let ${\\bf Z}_a^b := (Z_a, Z_{a+1}, \\ldots, Z_b)$. For $\\varepsilon > 0$, define the neighborhood of $\\Gamma^*$ by\n\\[\n\\mathcal{N}_{\\varepsilon} := \\{ \\vartheta \\in \\Theta: |t_\\vartheta|< \\varepsilon\\}.\n\\]\nWhen the MLE is consistent, the asymptotic distribution of the LRTS is determined by the local properties of the likelihood functions in $\\mathcal{N}_{\\varepsilon}$.\n\nWe establish a general quadratic expansion of the log-likelihood function $\\ell_n(\\psi,\\pi,\\xi):= \\ell_n(\\vartheta,\\xi)$ defined in (\\ref{density}) around $\\ell_n(\\psi^*,\\pi,\\xi)$ that expresses $\\ell_n(\\psi,\\pi,\\xi)-\\ell_n(\\psi^*,\\pi,\\xi)$ as a quadratic function of $t_\\vartheta$. Once we derive a quadratic expansion, the asymptotic distribution of the LRTS can be characterized by taking its supremum with respect to $t_\\vartheta$ under an appropriate constraint and using the results of \\citet{andrews99em,andrews01em}. \n \nDenote the conditional density ratio by\n\\begin{equation} \\label{density_ratio}\nl_{\\vartheta k x_0} := \\frac{ p_{\\psi\\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1},{\\bf W}_{0}^k,x_0)}{ p_{\\psi^*\\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1},{\\bf W}_{0}^k,x_0)},\n\\end{equation} \nso that $\\ell_n(\\psi,\\pi, x_0) - \\ell_n(\\psi^*,\\pi, x_0) = \\sum_{k=1}^n\\log l_{\\vartheta k x_0}$. We assume that $l_{\\vartheta k x_0}$ can be expanded around $l_{\\vartheta^* k x_0}=1$ as follows. With a slight abuse of notation, let $P_n(f_k) := n^{-1} \\sum_{k=1}^n f_k$ and recall $\\nu_n(f_k) := n^{-1\/2} \\sum_{k=1}^n [f_k - \\mathbb{E}_{\\vartheta^*}(f_k)]$. 
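The structure of this expansion can be seen exactly in the simplest regular case: for an i.i.d.\\ $N(\\theta^*,1)$ sample with $t = \\theta-\\theta^*$, score $s_k = Y_k-\\theta^*$, and Fisher information $\\mathcal{I}=1$, the identity $\\ell_n(\\theta)-\\ell_n(\\theta^*) = \\sqrt{n}\\,t\\,\\nu_n(s_k) - nt^2\\mathcal{I}\/2$ holds with no remainder at all; the results below show that the same quadratic form holds uniformly up to $o_p(1)$ in the regime switching setting. A minimal numerical sketch of this identity (an illustration, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta_star, theta = 1000, 0.0, 0.3
y = rng.normal(theta_star, 1.0, size=n)

def loglik(th):
    # log-likelihood of an i.i.d. N(th, 1) sample
    return -0.5 * np.sum((y - th) ** 2) - 0.5 * n * np.log(2 * np.pi)

t = theta - theta_star
nu_n = np.sum(y - theta_star) / np.sqrt(n)    # nu_n(s_k) with s_k = Y_k - theta*
lhs = loglik(theta) - loglik(theta_star)
rhs = np.sqrt(n) * t * nu_n - n * t ** 2 / 2  # quadratic form in t, with I = 1

assert abs(lhs - rhs) < 1e-8                  # identity holds up to rounding
```

Maximizing the quadratic form over unrestricted $t$ gives $\\nu_n(s_k)^2\/2$, which is how the familiar $\\chi^2_1$ limit of the LRTS arises in regular problems.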
\n\\begin{assumption}\\label{assn_a3}\nFor all $k=1,\\ldots,n$, $l_{\\vartheta k x_0} -1$ admits an expansion\n\\begin{equation} \\label{lkx0_expand}\nl_{\\vartheta k x_0} -1 = t_\\vartheta' s_{\\pi k} + r_{\\vartheta k} + u_{\\vartheta k x_0},\n\\end{equation}\nwhere $t_\\vartheta$ satisfies $\\psi \\to \\psi^*$ if $t_\\vartheta \\to 0$ and $(s_{\\pi k}, r_{\\vartheta k}, u_{\\vartheta k x_0})$ satisfy, for some $C \\in (0,\\infty)$, $\\delta >0$, $\\varepsilon >0$, and $\\rho \\in (0,1)$, (a) $\\mathbb{E}_{\\vartheta^*}\\sup_{\\pi \\in \\Theta_\\pi} \\left|s_{\\pi k}\\right|^{2+\\delta} < C$, (b) $\\sup_{\\pi \\in \\Theta_\\pi}| P_n(s_{\\pi k}s_{\\pi k}') - \\mathcal{I}_\\pi| = o_p(1)$ with $0<\\inf_{\\pi\\in \\Theta_{\\pi} }\\lambda_{\\min}(\\mathcal{I}_\\pi) \\leq \\sup_{\\pi\\in \\Theta_{\\pi} }\\lambda_{\\max}(\\mathcal{I}_\\pi)<\\infty$, (c) $\\mathbb{E}_{\\vartheta^*}[\\sup_{\\vartheta \\in \\mathcal{N}_\\varepsilon} |r_{\\vartheta k}\/(|t_\\vartheta||\\psi-\\psi^*|)|^2] < \\infty$, (d) $\\sup_{\\vartheta \\in \\mathcal{N}_\\varepsilon} [ \\nu_n(r_{\\vartheta k})\/(|t_\\vartheta||\\psi-\\psi^*|)] = O_p(1)$, (e) $\\sup_{x_0 \\in \\mathcal{X}}\\sup_{\\vartheta \\in \\mathcal{N}_\\varepsilon} P_n (|u_{\\vartheta k x_0}|\/|\\psi-\\psi^*|)^j = O_p(n^{-1})$ for $j=1,2,3$, (f) $\\sup_{x_0 \\in \\mathcal{X}}\\sup_{\\vartheta \\in \\mathcal{N}_\\varepsilon} P_n (|s_{\\pi k}||u_{\\vartheta k x_0}|\/|\\psi-\\psi^*|) = O_p(n^{-1})$, (g) $\\sup_{\\vartheta \\in \\mathcal{N}_{\\varepsilon}} |\\nu_n(s_{\\pi k})| =O_p(1)$. \n\\end{assumption} \nWe first establish an expansion of $\\ell_n(\\psi,\\pi, x_0)$ in the neighborhood of $\\mathcal{N}_{c\/\\sqrt{n}}$ for any $c>0$. \n\\begin{proposition} \\label{Ln_thm1}\nSuppose that Assumptions \\ref{assn_a3} (a)--(f) hold. 
Then, for any $c>0$,\n\\begin{equation*}\n \\sup_{x_0 \\in \\mathcal{X}} \\sup_{\\vartheta \\in \\mathcal{N}_{c\/\\sqrt{n}}} \\left| \\ell_n(\\psi,\\pi, x_0) - \\ell_n(\\psi^*,\\pi, x_0) - \\sqrt{n}t_\\vartheta' \\nu_n (s_{\\pi k}) + n t_\\vartheta' \\mathcal{I}_\\pi t_\\vartheta\/2 \\right| = o_p(1).\n\\end{equation*}\n\\end{proposition} \nThe following proposition expands $\\ell_n(\\psi,\\pi, x_0)$ in $A_{n\\varepsilon}(x_0,\\eta) := \\{\\vartheta \\in \\mathcal{N}_{\\varepsilon}: \\ell_n(\\psi,\\pi, x_0)-\\ell_n(\\psi^*,\\pi, x_0) \\geq -\\eta \\}$ for some $\\eta \\in [0,\\infty)$. This proposition is useful for deriving the asymptotic distribution of the LRTS because a consistent MLE is in $ A_{n\\varepsilon}(x_0,\\eta)$ by definition. Let $A_{n\\varepsilon c}(x_0,\\eta) := A_{n\\varepsilon }(x_0,\\eta) \\cup \\mathcal{N}_{c\/\\sqrt{n}}$.\n\\begin{proposition} \\label{Ln_thm2}\nSuppose that Assumption \\ref{assn_a3} holds. Then, for any $\\eta>0$, (a) $\\sup_{x_0 \\in \\mathcal{X}} \\sup_{\\vartheta \\in A_{n\\varepsilon}(x_0,\\eta)} |t_\\vartheta| = O_{p \\varepsilon }(n^{-1\/2})$, and (b) for any $c>0$,\n\\begin{equation*} \n \\sup_{x_0 \\in \\mathcal{X}} \\sup_{\\vartheta \\in A_{n\\varepsilon c}(x_0,\\eta) }\\left|\\ell_n(\\psi,\\pi, x_0)-\\ell_n(\\psi^*,\\pi, x_0) - \\sqrt{n} t_\\vartheta' \\nu_n (s_{\\pi k}) + n t_\\vartheta' \\mathcal{I}_\\pi t_\\vartheta\/2 \\right| = o_{p \\varepsilon }(1).\n\\end{equation*}\n\\end{proposition}\n\nThe following corollary of Propositions \\ref{Ln_thm1} and \\ref{Ln_thm2} shows that $\\ell_n(\\vartheta,\\xi)$ defined in (\\ref{density}) admits a similar expansion to $\\ell_n(\\vartheta,x_0)$ for all $\\xi$. Consequently, the asymptotic distribution of the LRTS does not depend on $\\xi$, and $\\ell_n(\\vartheta,\\xi)$ may be maximized in $\\vartheta$ while fixing $\\xi$ or jointly in $\\vartheta$ and $\\xi$. 
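Once such a quadratic expansion is available, the LRTS is obtained by maximizing the quadratic form in $t_\\vartheta$, possibly subject to cone constraints. For a scalar $t$ with $\\mathcal{I}=1$, the unrestricted supremum of $\\sqrt{n}t\\nu - nt^2\/2$ is $\\nu^2\/2$, while restricting $t \\geq 0$ gives $(\\max\\{\\nu,0\\})^2\/2$; this projection-onto-a-cone mechanism is what produces nonstandard limits. A minimal sketch (illustrative; the cones arising in this paper are multivariate):

```python
import numpy as np

def sup_quadratic(nu, n=1.0, nonneg=False):
    # maximize q(t) = sqrt(n)*t*nu - n*t**2/2 over t (optionally over t >= 0)
    t_star = nu / np.sqrt(n)          # unconstrained maximizer
    if nonneg:
        t_star = max(t_star, 0.0)     # project onto the cone t >= 0
    return np.sqrt(n) * t_star * nu - n * t_star ** 2 / 2

for nu in (1.7, -0.8):
    assert np.isclose(sup_quadratic(nu), nu ** 2 / 2)
    assert np.isclose(sup_quadratic(nu, nonneg=True), max(nu, 0.0) ** 2 / 2)
```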
Let $A_{n\\varepsilon } (\\xi,\\eta) := \\{\\vartheta \\in \\mathcal{N}_{\\varepsilon }: \\ell_n(\\psi,\\pi,\\xi) - \\ell_n(\\psi^*,\\pi,\\xi) \\geq -\\eta \\}$ and $A_{n\\varepsilon c} (\\xi,\\eta):= A_{n\\varepsilon} (\\xi,\\eta)\\cup \\mathcal{N}_{c\/\\sqrt{n}}$, which includes a consistent MLE with any $\\xi$.\n\\begin{corollary} \\label{cor_appn}\n(a) Under the assumptions of Proposition \\ref{Ln_thm1}, we have \\\\\n$\\sup_{\\xi \\in \\Xi}\\sup_{\\vartheta \\in \\mathcal{N}_{c\/\\sqrt{n}}} \\left| \\ell_n(\\psi,\\pi,\\xi) - \\ell_n(\\psi^*,\\pi,\\xi) - \\sqrt{n}t_\\vartheta' \\nu_n (s_{\\pi k}) + n t_\\vartheta' \\mathcal{I}_\\pi t_\\vartheta\/2 \\right| = o_p(1)$ for any $c>0$.\n(b) Under the assumptions of Proposition \\ref{Ln_thm2}, for any $\\eta>0$ and $c>0$, $\\sup_{\\xi\\in \\Xi}\\sup_{\\vartheta \\in A_{n\\varepsilon} (\\xi,\\eta) } |t_\\vartheta| = O_{p \\varepsilon }(n^{-1\/2})$ and $\\sup_{\\xi\\in \\Xi}\\sup_{\\vartheta \\in A_{n\\varepsilon c} (\\xi,\\eta) } | \\ell_n(\\psi,\\pi,\\xi) - \\ell_n(\\psi^*,\\pi,\\xi) - \\sqrt{n}t_\\vartheta' \\nu_n (s_{\\pi k}) + n t_\\vartheta' \\mathcal{I}_\\pi t_\\vartheta\/2 | = o_{p \\varepsilon }(1)$.\n\\end{corollary}\n\n\\section{Uniform convergence of the derivatives of the log-density and density ratios} \\label{sec:uniform_convergence}\n\nIn this section, we establish approximations that enable us to apply the results in Section \\ref{sec:expansion} to the log-likelihood function of regime switching models. Because of the presence of singularity, the expansion (\\ref{lkx0_expand}) of the density ratio $l_{\\vartheta k x_0}$ involves the higher-order derivatives of the density ratios $\\nabla_{\\psi}^j l_{\\vartheta k x_0}$ with $j\\geq 2$. Note that $\\nabla_\\psi^j l_{\\vartheta k x_0}$ can be expressed in terms of the derivatives of log-densities, $\\nabla_\\psi^j \\log p_{\\psi\\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1},{\\bf W}_{0}^k,x_0)$. 
We show that these derivatives are approximated by their stationary counterparts, which condition on the infinite past $(\\overline{\\bf{Y}}_{-\\infty}^{k-1},{\\bf W}_{-\\infty}^{k})$ in place of $(\\overline{\\bf{Y}}_{0}^{k-1},{\\bf W}_{0}^{k})$. Consequently, the sequence $\\{\\nabla_\\psi^j l_{\\vartheta k x_0}\\}_{k=0}^{\\infty}$ is approximated by a stationary martingale difference sequence. \n \nFor $1 \\leq k \\leq n$ and $m \\geq 0$, let \n\\begin{equation} \\label{p_bar}\n\\overline p_{\\vartheta}({\\bf Y}_{-m +1 }^k|\\overline{\\bf{Y}}_{-m},{\\bf W}_{-m}^{k}) := \\sum_{{\\bf x}_{-m}^k\\in \\mathcal{X}^{k+m+1}}\\prod_{t=-m+1}^k p_{\\vartheta}(Y_t,x_t|\\overline{\\bf Y}_{t-1},W_t,x_{t-1})\\mathbb{P}_{\\vartheta^*}(x_{-m}|\\overline{\\bf{Y}}_{-m},{\\bf W}_{-m}^{k}),\n\\end{equation}\ndenote the stationary density of ${\\bf Y}_{-m +1 }^k$ associated with $\\vartheta$ conditional on $\\{\\overline{\\bf{Y}}_{-m},{\\bf W}_{-m}^{k}\\}$, where $X_{-m}$ is drawn from its \\emph{true} conditional stationary distribution $\\mathbb{P}_{\\vartheta^*}(x_{-m}|\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^{k})$. Let $\\overline p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^{k}) : = \\overline p_{\\vartheta}({\\bf Y}_{-m+1}^k|\\overline{\\bf{Y}}_{-m},{\\bf W}_{-m}^{k})\/\\overline p_{\\vartheta}({\\bf Y}_{-m+1}^{k-1}|\\overline{\\bf{Y}}_{-m},{\\bf W}_{-m}^{k-1})$ denote the associated conditional density of $Y_k$ given $(\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^{k})$.\\footnote{Note that DMR use the same notation $\\overline p_{\\vartheta}(\\cdot|\\overline{\\bf{Y}}_{-m}^{k-1})$ for a different purpose. 
On p.\\ 2263 and in some other (but not all) places, DMR use $\\overline p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{0}^{k-1})$ to denote an (ordinary) stationary conditional distribution of $Y_k$.\n}\n\nDefine the density ratio as $l_{k,m,x}(\\vartheta) := p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^{k},X_{-m}=x)\/p_{\\vartheta^*}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^{k},X_{-m}=x)$. For $j=1,2,\\ldots,6$, $1 \\leq k \\leq n$, $m \\geq 0$, and $x \\in \\mathcal{X}$, define the derivatives of the log-densities and density ratios by (suppressing the subscript $\\vartheta$ from $\\nabla_\\vartheta^j$ for brevity)\n\\begin{align*} \n& \\nabla^j \\ell_{k,m,x}(\\vartheta) := \\nabla^j \\log p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^{k},X_{-m}=x),\\quad \\nabla^j l_{k,m,x}(\\vartheta) := \\frac{\\nabla^j p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^{k},X_{-m}=x)}{ p_{\\vartheta^*}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^{k},X_{-m}=x)},\\\\\n& \\nabla^j \\overline \\ell_{k,m}(\\vartheta):= \\nabla^j \\log \\overline p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^{k}), \\quad\\text{and}\\quad \\nabla^j \\overline l_{k,m}(\\vartheta):= \\frac{ \\nabla^j \\overline p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^{k})}{\\overline p_{\\vartheta^*}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^{k})}.\n\\end{align*} \nThe following assumption corresponds to (A6)--(A8) in DMR and is tailored to our setting where some elements of $\\vartheta_x^*$ are not identified and $\\mathcal{X}$ is finite. Note that Assumptions (A6) and (A7) in DMR pertaining to $q_{\\vartheta_x}(x,x')$ hold in our case because the $p_{ij}$'s are bounded away from 0 and 1. Let $G_{\\vartheta k} := \\sum_{x_k\\in\\mathcal{X}}g_{\\vartheta_y}(Y_k|\\overline{\\bf{Y}}_{k-1}, x_k, W_k)$. 
$G_{\\vartheta k}$ satisfies Assumption \\ref{assn_a4}(b) in general when $\\mathcal{N}^*$ is sufficiently small.\n\\begin{assumption}\\label{assn_a4}\nThere exists a positive real $\\delta$ such that on $\\mathcal{N}^*:= \\{\\vartheta\\in\\Theta: |\\vartheta_y-\\vartheta_y^*| < \\delta\\}$ the following conditions hold: (a) For all $(\\overline{\\bf{y}},y',x,w)\\in \\mathcal{Y}^s \\times \\mathcal{Y} \\times \\mathcal{X} \\times \\mathcal{W}$, $g_{\\vartheta_y}(y'|\\overline{\\bf{y}},x,w)$ is six times continuously differentiable on $\\mathcal{N}^*$. (b) $\\mathbb{E}_{\\vartheta^*}[\\sup_{\\vartheta\\in \\mathcal{N}^*} \\sup_{x\\in \\mathcal{X}} | \\nabla^j\\log g_{\\vartheta_y}(Y_1|\\overline{\\bf{Y}}_0,x,W) |^{2q_j}]< \\infty$ for $j=1,2,\\ldots,6$ and $\\mathbb{E}_{\\vartheta^*}\\sup_{\\vartheta \\in \\mathcal{N}^*} |G_{\\vartheta k}\/G_{\\vartheta^* k}|^{q_g} < \\infty$ with $q_1=6q_0, q_2=5q_0, \\ldots, q_6=q_0$, where $q_0=(1+\\varepsilon) q_\\vartheta$ and $q_g=(1+\\varepsilon)q_\\vartheta\/\\varepsilon$ for some $\\varepsilon>0$ and $q_\\vartheta >\\max\\{3,\\text{dim}(\\vartheta)\\}$. 
(c) For almost all $(\\overline{\\bf{y}},y',w) \\in \\mathcal{Y}^s \\times \\mathcal{Y}\\times \\mathcal{W}$, $\\sup_{\\vartheta\\in \\mathcal{N}^*} g_{\\vartheta_y}(y'|\\overline{\\bf{y}},x,w) < \\infty$ and, for almost all $(\\overline{\\bf{y}},x,w)\\in \\mathcal{Y}^s\\times \\mathcal{X} \\times \\mathcal{W}$, for $j=1,2,\\ldots,6$, there exist functions $f^j_{\\overline{\\bf{y}},x,w}: \\mathcal{Y} \\rightarrow \\mathbb{R}^+$ in $L^1$ such that $|\\nabla^j g_{\\vartheta_y}(y'|\\overline{\\bf{y}},x,w) |\\leq f^j_{\\overline{\\bf{y}}, x, w}(y')$ for all $\\vartheta\\in \\mathcal{N}^*$.\n\\end{assumption} \n\nThe following proposition shows that $\\{\\nabla^j \\ell_{k,m,x}(\\vartheta)\\}_{m\\geq 0}$ and $\\{\\nabla^j \\overline \\ell_{k,m}(\\vartheta)\\}_{m\\geq 0}$ are $L^{r_j}(\\mathbb{P}_{\\vartheta^*})$-Cauchy sequences that converge to $\\nabla^j \\ell_{k,\\infty}(\\vartheta)$ $\\mathbb{P}_{\\vartheta^*}$-a.s.\\ and in $L^{r_j}(\\mathbb{P}_{\\vartheta^*})$ uniformly in $\\vartheta\\in \\mathcal{N}^*$ and $x \\in \\mathcal{X}$. 
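The geometric factors $(k+m)^7\\rho_*^{k+m-1}$ appearing in the bounds reflect the forgetting property of the conditional hidden chain: conditional densities of $Y_k$ started from different initial states agree exponentially fast in $k$. A minimal sketch of this forgetting for a two-state normal HMM, computed with the forward filter (all parameter values illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.8, 0.2], [0.3, 0.7]])  # transition matrix, rows sum to 1
mu = np.array([-1.0, 1.0])              # state-specific means, unit variance

# simulate one path of (X_k, Y_k)
n, x = 60, 0
ys = []
for _ in range(n):
    x = rng.choice(2, p=P[x])
    ys.append(rng.normal(mu[x], 1.0))

def cond_logdens(x0):
    # log p(Y_k | Y_1^{k-1}, X_0 = x0) for each k, via the forward filter
    f = np.zeros(2)
    f[x0] = 1.0                         # filter P(X_{k-1} | past, x0)
    out = []
    for y in ys:
        g = np.exp(-0.5 * (y - mu) ** 2) / np.sqrt(2 * np.pi)
        joint = (f @ P) * g             # predict, then weight by the likelihood
        out.append(np.log(joint.sum()))
        f = joint / joint.sum()         # update the filter
    return np.array(out)

gap = np.abs(cond_logdens(0) - cond_logdens(1))
print(gap[0], gap[-1])                  # the initial-state effect dies out
```

The gap decays geometrically at a rate governed by the mixing of the transition matrix, which is what makes the approximation by stationary, infinite-past quantities uniform over the initial state.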
\n\\begin{proposition} \\label{proposition-ell}\nUnder Assumptions \\ref{assn_a1}, \\ref{assn_a2}, and \\ref{assn_a4}, for $j=1,\\ldots,6$, there exist random variables $(K_j, \\{M_{j,k} \\}_{k=1}^n )\\in L^{r_j}(\\mathbb{P}_{\\vartheta^*})$ and $\\rho_* \\in (0,1)$ such that, for all $1 \\leq k \\leq n$ and $m' \\geq m \\geq 0$,\n\\begin{align*}\n(a) &\\quad \\sup_{x\\in\\mathcal{X}}\\sup_{\\vartheta\\in \\mathcal{N}^*}|\\nabla^j\\ell_{k,m,x}(\\vartheta) - \\nabla^j\\overline\\ell_{k,m}(\\vartheta) | \\leq K_j (k+m)^7\\rho_*^{k+m-1}\\quad \\text{$\\mathbb{P}_{\\vartheta^*}$-a.s.},\\\\\n(b) &\\quad \\sup_{x\\in\\mathcal{X}}\\sup_{\\vartheta\\in \\mathcal{N}^*}|\\nabla^j\\ell_{k,m,x}(\\vartheta) - \\nabla^j \\ell_{k,m',x}(\\vartheta) | \\leq K_j(k+m)^7\\rho_*^{k+m-1}\\quad \\text{$\\mathbb{P}_{\\vartheta^*}$-a.s.},\\\\\n(c) & \\quad \\sup_{m \\geq 0}\\sup_{x\\in\\mathcal{X}}\\sup_{\\vartheta\\in \\mathcal{N}^*}|\\nabla^j\\ell_{k,m,x}(\\vartheta)| + \\sup_{m \\geq 0}\\sup_{\\vartheta\\in \\mathcal{N}^*}|\\nabla^j\\overline\\ell_{k,m}(\\vartheta)|\\leq M_{j, k} \\quad \\text{$\\mathbb{P}_{\\vartheta^*}$-a.s.},\n\\end{align*}\nwhere $r_1 = 6q_0$, $r_2 = 3q_0$, $r_3 = 2q_0$, $r_4 =3q_0\/2$, $r_5 =6q_0\/5$, and $r_6 =q_0$. 
(d) Uniformly in $\\vartheta\\in \\mathcal{N}^*$ and $x \\in \\mathcal{X}$, $\\nabla^j\\ell_{k,m,x}(\\vartheta)$ and $\\nabla^j\\overline\\ell_{k,m}(\\vartheta)$ converge $\\mathbb{P}_{\\vartheta^*}$-a.s.\\ and in $L^{r_j}(\\mathbb{P}_{\\vartheta^*})$ to $\\nabla^j \\ell_{k,\\infty}(\\vartheta) \\in L^{r_j}(\\mathbb{P}_{\\vartheta^*})$ as $m\\rightarrow \\infty$.\n\\end{proposition} \nAs shown by the following proposition, we may prove the uniform convergence of the derivatives of the density ratios by expressing them as polynomials of the derivatives of the log-density and applying Proposition \\ref{proposition-ell} and H\\\"older's inequality.\n\\begin{proposition}\\label{lemma-omega}\nUnder Assumptions \\ref{assn_a1}, \\ref{assn_a2}, and \\ref{assn_a4}, for $j=1,\\ldots,6$, there exist random variables $\\{K_{j,k}\\}_{k=1}^n \\in L^{q_\\vartheta}(\\mathbb{P}_{\\vartheta^*})$ and $\\rho_* \\in (0,1)$ such that, for all $1 \\leq k \\leq n$ and $m' \\geq m \\geq 0$,\n\\begin{align*}\n(a) &\\quad \\sup_{x\\in\\mathcal{X}}\\sup_{\\vartheta\\in \\mathcal{N}^*}|\\nabla^j l_{k,m,x}(\\vartheta) - \\nabla^j \\overline l_{k,m}(\\vartheta) | \\leq K_{j,k} (k+m)^{7}\\rho_*^{k+m-1} \\quad \\mathbb{P}_{\\vartheta^*}\\text{-a.s.},\\\\\n(b) &\\quad \\sup_{x\\in\\mathcal{X}}\\sup_{\\vartheta\\in \\mathcal{N}^*}|\\nabla^j l_{k,m,x}(\\vartheta) - \\nabla^j l_{k,m',x}(\\vartheta) | \\leq K_{j,k} (k+m)^{7}\\rho_*^{k+m-1} \\quad \\text{$\\mathbb{P}_{\\vartheta^*}$-a.s.}, \\\\ \n(c) &\\quad \\sup_{m\\geq 0}\\sup_{x\\in\\mathcal{X}}\\sup_{\\vartheta\\in \\mathcal{N}^*}|\\nabla^j l_{k,m,x}(\\vartheta)| + \\sup_{m\\geq 0}\\sup_{\\vartheta\\in \\mathcal{N}^*}|\\nabla^j\\overline{l}_{k,m}(\\vartheta)| \\leq K_{j,k} \\quad \\mathbb{P}_{\\vartheta^*}\\text{-a.s.}\n\\end{align*}\n(d) Uniformly in $\\vartheta\\in \\mathcal{N}^*$ and $x \\in \\mathcal{X}$, $\\nabla^j l_{k,m,x}(\\vartheta)$ and $\\nabla^j\\overline l_{k,m}(\\vartheta)$ converge $\\mathbb{P}_{\\vartheta^*}$-a.s.\\ and in 
$L^{q_\\vartheta}(\\mathbb{P}_{\\vartheta^*})$ to $\\nabla^j l_{k,\\infty}(\\vartheta) \\in L^{q_\\vartheta}(\\mathbb{P}_{\\vartheta^*})$ as $m\\rightarrow \\infty$. (e) $\\sup_{\\vartheta\\in \\mathcal{N}^*}|\\nabla^j \\overline l_{k,0}(\\vartheta) - \\nabla^j l_{k,\\infty}(\\vartheta) | \\leq K_{j,k} k^{7}\\rho_*^{k-1}$ $\\mathbb{P}_{\\vartheta^*}$-a.s. \n\\end{proposition}\nWhen we apply the results in Section \\ref{sec:expansion} to regime switching models, $l_{k,0,x}(\\vartheta)$ corresponds to $l_{\\vartheta k x_0}$ on the left-hand side of (\\ref{lkx0_expand}), and $s_{\\pi k}$ in (\\ref{lkx0_expand}) is a function of the $\\nabla^j \\overline l_{k, 0}(\\vartheta)$'s. Proposition \\ref{lemma-omega} and the dominated convergence theorem for conditional expectations \\citep[][Theorem 5.5.9]{durrett10book} imply that $\\mathbb{E}_{\\vartheta^*}[\\nabla^j l_{k,\\infty}(\\vartheta)|\\overline{\\bf{Y}}_{-\\infty}^{k-1}]=0$ for all $\\vartheta\\in \\mathcal{N}^*$. Therefore, $\\{\\nabla^j l_{k,\\infty}(\\vartheta)\\}_{k=-\\infty}^{\\infty}$ is a stationary, ergodic, and square integrable martingale difference sequence, and $\\{\\nabla^j l_{k,\\infty}(\\vartheta)\\}_{j=1}^5$ satisfies Assumption \\ref{assn_a3}(a)(b)(g).\n\n\\section{Testing homogeneity} \\label{sec: testing-1}\n\nBefore developing the LRT of $M_0$ components, we analyze a simpler case of testing the null hypothesis $H_0: M=1$ against $H_A:M=2$ when the data are from $H_0$. We assume that the parameter space for $\\vartheta_{2,x}=(p_{11},p_{22})'$ takes the form $[\\epsilon, 1-\\epsilon]^2$ for a small $\\epsilon \\in (0,1\/2)$. Denote the true parameter in the one-regime model by $\\vartheta_1^*:= ((\\theta^*)',(\\gamma^*)')'$. 
The two-regime model gives rise to the true density $p_{\\vartheta_1^*}({\\bf Y}_1^n| \\overline{{\\bf Y}}_{0}, x_0)$ if the parameter $\\vartheta_2=(\\theta_1,\\theta_2,\\gamma,p_{11},p_{22})'$ lies in a subset of the parameter space\n\\[\n\\Gamma^* := \\left\\{ (\\theta_1,\\theta_2,\\gamma,p_{11},p_{22})\\in \\Theta_{ 2}: \\theta_1=\\theta_2=\\theta^*\\ \\text{and}\\ \\gamma=\\gamma^* \\right\\}.\n\\]\nNote that $(p_{11},p_{22})$ is not identified under $H_0$.\n\nLet $\\ell_n(\\vartheta_2,\\xi_2) := \\log \\left( \\sum_{x_0=1}^2 p_{\\vartheta_2}({\\bf Y}_1^n| \\overline{\\bf Y}_{0},{\\bf W}_{0}^n,x_0) \\xi_2(x_0) \\right)$ denote the two-regime log-likelihood for a given initial distribution $\\xi_2(x_0) \\in \\Xi_2$, and let $\\hat\\vartheta_2:= \\arg\\max_{\\vartheta_2 \\in \\Theta_{2}} \\ell_n(\\vartheta_2,\\xi_2)$ denote the MLE of $\\vartheta_2$ given $\\xi_2$, where $\\xi_2$ is suppressed from $\\hat \\vartheta_2$ because $\\xi_2$ does not matter asymptotically. Let $\\hat \\vartheta_1$ denote the one-regime MLE that maximizes the one-regime log-likelihood function $\\ell_{0,n}(\\vartheta_1) := \\sum_{k=1}^n \\log f (Y_k|\\overline{\\bf Y}_{k-1},W_k;\\gamma,\\theta)$ under the constraint $\\vartheta_1 = (\\theta',\\gamma')' \\in \\Theta_1$. \n\nWe introduce the following assumption for the consistency of $\\hat \\vartheta_1$ and $\\hat \\vartheta_2$. Assumption \\ref{A-consist}(b) corresponds to Assumption (A4) of DMR. Assumption \\ref{A-consist}(c) is a standard identification condition for the one-regime model. Assumption \\ref{A-consist}(d) implies that the Kullback--Leibler divergence between $ p_{\\vartheta_{1}^*}(Y_1 |\\overline{\\bf{Y}}_{-m}^0,{\\bf W}_{-m}^0)$ and $ p_{\\vartheta_{2}}(Y_1 |\\overline{\\bf{Y}}_{-m}^0,{\\bf W}_{-m}^0)$ is 0 if and only if $\\vartheta_2 \\in \\Gamma^*$.\n\\begin{assumption} \\label{A-consist} \n(a) $\\Theta_1$ and $\\Theta_2$ are compact, and $\\vartheta_1^*$ is in the interior of $\\Theta_1$. 
(b) For all $(x,x') \\in \\mathcal{X} \\times \\mathcal{X}$ and all $(\\overline{{\\bf y}},y',w)\\in \\mathcal{Y}^s\\times \\mathcal{Y}\\times \\mathcal{W}$, $f(y'|\\overline{\\bf{y}}_0,w;\\gamma,\\theta)$ is continuous in $(\\gamma,\\theta)$. (c) If $\\vartheta_{1}\\neq \\vartheta_1^*$, then $\\mathbb{P}_{\\vartheta^*_{1}}\\left(f(Y_1|\\overline{\\bf{Y}}_0,W_1;\\gamma,\\theta) \\neq f(Y_1|\\overline{\\bf{Y}}_0,W_1;\\gamma^*,\\theta^*) \\right)>0$. (d) $\\mathbb{E}_{\\vartheta^*_{1}}[ \\log p_{\\vartheta_{2}}(Y_1 |\\overline{\\bf{Y}}_{-m}^0,{\\bf W}_{-m}^1)] = \\mathbb{E}_{\\vartheta^*_{1}}[\\log p_{\\vartheta_{1}^*}(Y_1 |\\overline{\\bf{Y}}_{-m}^0,{\\bf W}_{-m}^1) ]$ for all $m \\geq 0$ if and only if $\\vartheta_{2}\\in \\Gamma^*$. \n\\end{assumption} \nThe following proposition shows the consistency of the MLEs of $\\vartheta_1^*$ and $\\vartheta_{2,y}^*$. \n\\begin{proposition} \\label{P-consist} Suppose that Assumptions \\ref{assn_a1}, \\ref{assn_a2}, and \\ref{A-consist} hold. Then, under the null hypothesis of $M=1$, $\\hat\\vartheta_1 \\overset{p}{\\rightarrow} \\vartheta_1^*$ and $\\inf_{\\vartheta_2\\in\\Gamma^*}|\\hat\\vartheta_2-\\vartheta_2|\\overset{p}{\\rightarrow} 0$.\n\\end{proposition}\n \nLet $LR_n:=2[\\ell_n(\\hat\\vartheta_2,\\xi_2) - \\ell_{0,n}(\\hat\\vartheta_1)]$ denote the LRTS for testing $H_0:M=1$ against $H_A:M=2$. Following the notation of Section \\ref{sec:expansion}, we split $\\vartheta_2$ into $\\vartheta_2=(\\psi,\\pi)$, where $\\pi$ is the part of $\\vartheta$ not identified under the null hypothesis; the elements of $\\psi$ are delineated later. In the current setting, $\\pi$ corresponds to $\\vartheta_{2,x}=(p_{11},p_{22})'$. Define $\\varrho := \\text{corr}_{\\vartheta_{2,x}} (X_k,X_{k+1}) = p_{11}+p_{22}-1$ and $\\alpha := \\mathbb{P}_{\\vartheta_{2,x}}(X_k=1)= (1-p_{22})\/(2-p_{11}-p_{22})$. 
The parameter spaces for $\\varrho$ and $\\alpha$ under the restriction $p_{11},p_{22} \\in [\\epsilon, 1-\\epsilon]$ are given by $\\Theta_{\\varrho}:= [-1+2\\epsilon,1-2\\epsilon]$ and $\\Theta_{\\alpha}:= [\\epsilon, 1-\\epsilon]$, respectively. Because the mapping from $(p_{11},p_{22})$ to $(\\varrho,\\alpha)$ is one-to-one, we reparameterize $\\pi$ as $\\pi := (\\varrho,\\alpha)' \\in \\Theta_{\\pi }:=\\Theta_{\\varrho }\\times \\Theta_{\\alpha }$, and let $p_{\\psi\\pi}(\\cdot|\\cdot) := p_{\\vartheta_2} (\\cdot|\\cdot)$. Henceforth, we suppress ${\\bf W}_{0}^n$ for notational brevity and write, for example, $p_{\\psi\\pi}({\\bf Y}_1^n| \\overline{{\\bf Y}}_{0}, {\\bf W}_{0}^n,x_0)$ as $p_{\\psi\\pi}({\\bf Y}_1^n|\\overline{{\\bf Y}}_{0},x_0)$ and $p_{\\psi\\pi} (y_k,x_k| \\overline{\\bf{y}}_{k-1},x_{k-1},w_k)$ as $p_{\\psi\\pi} (y_k,x_k| \\overline{\\bf{y}}_{k-1},x_{k-1})$ when doing so does not cause confusion. \n\nWe derive the asymptotic distribution of the LRTS by applying Corollary \\ref{cor_appn} to $\\ell_n(\\psi,\\pi,\\xi_2)$ and representing $s_{\\pi k}$ and $t_\\vartheta$ in (\\ref{lkx0_expand}) in terms of $\\vartheta$, $f(Y_k|\\overline{\\bf{Y}}_{k-1};\\gamma,\\theta)$, and its derivatives; $s_{\\pi k}$ involves higher-order derivatives, and $t_\\vartheta$ consists of functions of the polynomials of (reparameterized) $\\vartheta$. Section \\ref{subsec:nonnormal} analyzes the case when the regime-specific distribution of $y_k$ is not normal with unknown variance. Section \\ref{subsec:hetero_normal} analyzes the case when the regime-specific distribution of $y_k$ is normal with regime-specific and unknown variance. Section \\ref{subsec:homo_normal} handles the normal distribution where the variance is unknown and common across regimes. 
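The reparameterization can be checked directly: $\\alpha$ is the stationary probability of state 1, $\\varrho$ is the second eigenvalue of the transition matrix, and the map $(p_{11},p_{22}) \\mapsto (\\varrho,\\alpha)$ inverts as $p_{11} = \\alpha + (1-\\alpha)\\varrho$ and $p_{22} = 1-\\alpha+\\alpha\\varrho$. A short numerical verification (parameter values illustrative):

```python
import numpy as np

p11, p22 = 0.85, 0.6                   # illustrative values in (eps, 1-eps)
rho = p11 + p22 - 1                    # varrho = corr(X_k, X_{k+1})
alpha = (1 - p22) / (2 - p11 - p22)    # alpha = P(X_k = 1) under stationarity

P = np.array([[p11, 1 - p11], [1 - p22, p22]])

# alpha is the stationary probability of state 1
pi = np.array([alpha, 1 - alpha])
assert np.allclose(pi @ P, pi)

# rho is the second eigenvalue of P (the first is always 1)
eig = np.sort(np.linalg.eigvals(P).real)
assert np.allclose(eig, [rho, 1.0])

# the map (p11, p22) -> (rho, alpha) inverts explicitly
assert np.isclose(p11, alpha + (1 - alpha) * rho)
assert np.isclose(p22, 1 - alpha + alpha * rho)
```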
Note that because $\\overline{\\bf Y}_{-\\infty}^\\infty$ and ${\\bf X}_{-\\infty}^\\infty$ are independent when $\\psi = \\psi^*$, we have \n\\begin{equation}\\label{indep}\n\\mathbb{P}_{\\psi^*\\pi}({\\bf X}_{-\\infty}^\\infty|\\overline{\\bf Y}_{-\\infty}^\\infty) = \\mathbb{P}_{\\psi^*\\pi}({\\bf X}_{-\\infty}^\\infty).\n\\end{equation}\nDefine $q_k := \\mathbb{I}\\{X_k=1\\}$, so that $\\alpha = \\mathbb{E}_{\\psi^*\\pi}[q_k]$.\n\n \n\\subsection{Non-normal distribution}\\label{subsec:nonnormal}\n \nWhen we apply Corollary \\ref{cor_appn} to regime switching models, $s_{\\pi k}$ is a function of \\\\$\\nabla^j\\overline p_{\\psi^*\\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^*\\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})$'s with $ \\overline p_{\\psi\\pi} (Y^k_{1}| \\overline{{\\bf Y}}_{0})$ defined in (\\ref{p_bar}). In order to express $\\nabla^j\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) \/ \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})$ in terms of $\\nabla^j f(y|x;\\gamma,\\theta)$ via the Louis information principle (Lemma \\ref{louis} in the appendix), we first derive the derivatives of the \\emph{complete data} conditional density $p_{\\vartheta_2} (y_k,x_k| \\overline{\\bf{y}}_{k-1},x_{k-1}) = g_{\\vartheta_{2,y}}(y_k|\\overline{\\bf{y}}_{k-1}, x_k) q_{\\vartheta_{2,x}}(x_{k-1},x_k)=\\sum_{j =1}^2 \\mathbb{I}\\{x_k=j\\}f(y_k|\\overline{\\bf{y}}_{k-1};\\gamma,\\theta_j)q_{\\vartheta_{2,x}}(x_{k-1},x_k)$.\n\nConsider the following reparameterization. Let\n\\begin{equation}\n\\left(\n\\begin{array}{c}\n\\lambda \\\\\n\\nu \\\\\n\\end{array}\n\\right):=\n\\left(\n\\begin{array}{c}\n\\theta_1 - \\theta_2 \\\\\n\\alpha\\theta_1 + (1-\\alpha)\\theta_2\\\\\n\\end{array}\n\\right),\n\\quad \\text{so that} \\quad\n\\left(\n\\begin{array}{c}\n\\theta_1\\\\\n\\theta_2 \\\\\n\\end{array}\n\\right)=\n\\left(\n\\begin{array}{c}\n\\nu+ (1-\\alpha) \\lambda\\\\\n\\nu- \\alpha\\lambda\\\\\n\\end{array}\n\\right). 
\\label{repara}\n\\end{equation}\nLet $\\eta := (\\gamma',\\nu')'$ and $\\psi_{\\alpha}:=(\\eta',\\lambda')' \\in \\Theta_\\eta \\times \\Theta_\\lambda$. Under the null hypothesis of one regime, the true value of $\\psi_{\\alpha}$ is given by $\\psi_{\\alpha}^*:=(\\gamma^*,\\theta^*,0)'$. Henceforth, we suppress the subscript $\\alpha$ from $\\psi_{\\alpha}$. Using this definition of $\\psi$, let $\\vartheta_2 := (\\psi',\\pi')' \\in \\Theta_\\psi \\times \\Theta_\\pi$. By using reparameterization (\\ref{repara}) and noting that $q_k = \\mathbb{I}\\{x_k=1\\}$, we have $p_{\\psi\\pi} (y_k,x_k| \\overline{\\bf{y}}_{k-1},x_{k-1}) = g_{\\psi}(y_k|\\overline{\\bf{y}}_{k-1}, x_k) q_{\\pi}(x_{k-1},x_k)$ and \n\\begin{equation} \\label{repara_g}\ng_{\\psi}(y_k|\\overline{\\bf{y}}_{k-1},x_k) = f(y_k|\\overline{\\bf{y}}_{k-1};\\gamma,\\nu+(q_k-\\alpha)\\lambda ).\n\\end{equation}\nHenceforth, let $f^*_{k}$, $\\nabla f^*_{k}$, $g^*_{k}$, and $\\nabla g^*_{k}$ denote $f(Y_k|\\overline{\\bf{Y}}_{k-1};\\gamma^*,\\theta^*)$, $\\nabla f(Y_k|\\overline{\\bf{Y}}_{k-1};\\gamma^*,\\theta^*)$, $g_{\\psi^*}(Y_k|\\overline{\\bf{Y}}_{k-1},X_k) $, and $\\nabla g_{\\psi^*}(Y_k|\\overline{\\bf{Y}}_{k-1},X_k)$, respectively, and similarly for $\\log f^*_{k}$ ,$\\nabla \\log f^*_{k}$, $\\log g^*_{k}$, and $\\nabla \\log g^*_{k}$. \nExpanding $g_{\\psi}(Y_k|\\overline{\\bf{Y}}_{k-1},X_k)$ twice with respect to $\\psi=(\\gamma',\\nu',\\lambda')'$ and evaluating at $\\psi^*$ gives\n\\begin{equation} \\label{dg}\n\\begin{aligned}\n\\nabla_{\\eta} g_k^* &= \\nabla_{(\\gamma',\\theta')'} f_k^*, \\quad \\nabla_{\\lambda} g_k^* = (q_k -\\alpha)\\nabla_{\\theta} f_k^*, \\\\\n\\nabla_{\\lambda\\eta'} g_k^* &= (q_k -\\alpha)\\nabla_{\\theta(\\gamma',\\theta')} f_k^*, \\quad \\nabla_{\\lambda\\lambda'} g_k^* = (q_k-\\alpha)^2 \\nabla_{\\theta\\theta'} f_k^*.\n\\end{aligned} \n\\end{equation}\nRecall $\\varrho := \\text{corr}_{\\vartheta_2^*}(q_k,q_{k+1})$. 
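The inverse pair in (\\ref{repara}) and the central moments of a Bernoulli($\\alpha$) variable that enter the moment formulas below can be verified numerically (values illustrative):

```python
import numpy as np

alpha = 0.3
theta1, theta2 = np.array([1.5, -0.2]), np.array([0.4, 0.7])  # illustrative

# forward map: lambda = theta1 - theta2, nu = alpha*theta1 + (1-alpha)*theta2
lam = theta1 - theta2
nu = alpha * theta1 + (1 - alpha) * theta2

# inverse map: theta1 = nu + (1-alpha)*lambda, theta2 = nu - alpha*lambda
assert np.allclose(theta1, nu + (1 - alpha) * lam)
assert np.allclose(theta2, nu - alpha * lam)

# central moments of q ~ Bernoulli(alpha): E(q-alpha)^j
cm = lambda j: alpha * (1 - alpha) ** j + (1 - alpha) * (-alpha) ** j
assert np.isclose(cm(2), alpha * (1 - alpha))
assert np.isclose(cm(3), alpha * (1 - alpha) * (1 - 2 * alpha))
assert np.isclose(cm(4), alpha * (1 - alpha) * (3 * alpha ** 2 - 3 * alpha + 1))
```

Under the null $\\theta_1=\\theta_2=\\theta^*$, the forward map gives $\\lambda=0$ and $\\nu=\\theta^*$, so $\\lambda$ measures the departure from the one-regime model.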
Observe that $q_k$ satisfies\n\\begin{equation} \\label{markov_moments}\n\\begin{aligned}\n\\mathbb{E}_{\\vartheta_2^*}(q_k-\\alpha)^2 &= \\alpha(1-\\alpha), \\quad \\mathbb{E}_{\\vartheta_2^*}(q_k-\\alpha)^3 = \\alpha(1-\\alpha)(1-2\\alpha), \\\\\n\\mathbb{E}_{\\vartheta_2^*}(q_k-\\alpha)^4 &= \\alpha(1-\\alpha)(3\\alpha^2-3\\alpha+1), \\quad \\text{corr}_{\\vartheta_2^*}(q_k,q_{k+\\ell}) = \\varrho^{|\\ell|}, \n\\end{aligned}\n\\end{equation}\nwhere the first three results follow from the property of a Bernoulli random variable, and the last result follows from \\citet[][p. 684]{hamilton94book}. Then, it follows from (\\ref{indep}) and (\\ref{markov_moments}) that\n\\begin{equation} \\label{qk_moments}\n\\begin{aligned}\n&\\mathbb{E}_{\\vartheta^*}[q_k -\\alpha |\\overline{{\\bf Y}}_{-\\infty}^n] = 0,\\quad \\mathbb{E}_{\\vartheta^*}[(q_{t_1}-\\alpha) (q_{t_2}-\\alpha) | \\overline{\\bf Y}_{-\\infty}^n ] = \\alpha(1-\\alpha)\\varrho^{t_2-t_1}, \\quad t_2 \\geq t_1.\n\\end{aligned}\n\\end{equation} \nFrom Lemma \\ref{louis}, $\\log p_{\\psi\\pi}(y_k, x_k|\\overline{{\\bf y}}_{k-1},x_{k-1}) = \\log g_{\\psi}(y_k|\\overline{\\bf{y}}_{k-1},x_k) + \\log q_\\pi(x_{k-1},x_k)$, and the definition of $\\overline p_{\\psi \\pi} ( Y_1^k| \\overline{{\\bf Y}}_{0} )$ in (\\ref{p_bar}), we obtain\n\\[\n\\begin{aligned}\n&\\frac{\\nabla_\\psi \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})}{\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})} =\\nabla_\\psi \\log \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) = \\sum_{t=1}^k \\mathbb{E}_{\\vartheta^*} \\left[\\nabla_\\psi \\log g^*_{t} \\middle|\\overline{{\\bf Y}}_{0}^k \\right] - \\sum_{t=1}^{k-1} \\mathbb{E}_{\\vartheta^*} \\left[\\nabla_\\psi \\log g^*_{t} \\middle|\\overline{{\\bf Y}}_{0}^{k-1} \\right].\n\\end{aligned}\n\\]\nApplying (\\ref{dg}), (\\ref{qk_moments}), and $g_k^* =f_k^*$ to the right-hand side gives\n\\begin{equation} 
\\label{d1p}\n\\begin{aligned}\n\\frac{\\nabla_\\eta \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})}{\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})} =\\nabla_{(\\gamma',\\theta')'} \\log f_k^*, \\quad\n\\frac{\\nabla_\\lambda \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})}{ \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})}=0.\n\\end{aligned}\n\\end{equation}\nSimilarly, it follows from Lemma \\ref{louis}, (\\ref{dg}), (\\ref{qk_moments}), (\\ref{d1p}), and $g_k^* =f_k^*$ that\n\\begin{align}\n& \\nabla_{\\lambda\\eta'} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) =0, \\label{d2p0}\\\\\n& \\nabla_{\\lambda\\lambda'} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) \\nonumber \\\\\n& = \\nabla_{\\lambda\\lambda'} \\log \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) \\nonumber \\\\\n& = \\sum_{t=1}^k \\mathbb{E}_{\\vartheta^*} \\left[\\nabla_{\\lambda\\lambda'} \\log g^*_{t} \\middle|\\overline{{\\bf Y}}_{0}^k \\right] - \\sum_{t=1}^{k-1} \\mathbb{E}_{\\vartheta^*} \\left[\\nabla_{\\lambda\\lambda'} \\log g^*_{t} \\middle|\\overline{{\\bf Y}}_{0}^{k-1} \\right] \\nonumber \\\\\n& \\quad + \\sum_{t_1=1}^k \\sum_{t_2=1}^k \\mathbb{E}_{\\vartheta^*} \\left[ \\frac{\\nabla_{\\lambda} g^*_{t_1}}{g^*_{t_1}} \\frac{\\nabla_{\\lambda'} g^*_{t_2}}{g^*_{t_2}} \\middle| \\overline{{\\bf Y}}_{0}^k \\right] - \\sum_{t_1=1}^{k-1} \\sum_{t_2=1}^{k-1} \\mathbb{E}_{\\vartheta^*} \\left[ \\frac{\\nabla_{\\lambda} g^*_{t_1}}{g^*_{t_1}} \\frac{\\nabla_{\\lambda'} g^*_{t_2}}{g^*_{t_2}} \\middle| \\overline{{\\bf Y}}_{0}^{k-1} \\right] \\nonumber \\\\\n& = \\alpha(1-\\alpha) \\left[ \\frac{ \\nabla_{\\theta\\theta'}f_k^*}{f_k^*} + \\sum_{t=1}^{k-1} \\varrho^{k-t} \\left( \\frac{\\nabla_{\\theta} f_{t}^*}{f_{t}^*} 
\\frac{\\nabla_{\\theta'} f_{k}^*}{f_{k}^*} + \\frac{\\nabla_{\\theta} f_{k}^*}{f_{k}^*} \\frac{\\nabla_{\\theta'} f_{t}^*}{f_{t}^*} \\right)\\right].\\label{d2p}\n\\end{align} \nBecause the first-order derivative with respect to $\\lambda$ is identically equal to zero in (\\ref{d1p}), \nthe unique elements of $\\nabla_{\\eta} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) \/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) $ and $\\nabla_{\\lambda\\lambda'} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})$ constitute the generalized score $s_{\\pi k}$ in Corollary \\ref{cor_appn}. This score is approximated by a stationary martingale difference sequence, where the approximation error satisfies Assumption \\ref{assn_a3}.\n\nWe collect some additional notation. Recall $\\psi = (\\eta',\\lambda')'$ and $\\eta = (\\gamma',\\nu')'$. For a $q \\times 1$ vector $\\lambda$ and a $q \\times q$ matrix $s$, define the $q_\\lambda \\times 1$ vectors $v(\\lambda)$ and $V(s)$, with $q_\\lambda := q(q+1)\/2$, as\n\\begin{equation} \\label{v_lambda}\n\\begin{aligned}\nv(\\lambda) &:= ( \\lambda_1^2,\\ldots,\\lambda_q^2,\\lambda_1\\lambda_2,\\ldots,\\lambda_1\\lambda_q,\\lambda_2 \\lambda_3, \\ldots, \\lambda_2 \\lambda_q,\\ldots,\\lambda_{q-1}\\lambda_q)',\\\\\nV(s) &:= (s_{11}\/2,\\ldots,s_{qq}\/2,s_{12},\\ldots,s_{1q},s_{23},\\ldots,s_{2q},\\ldots,s_{q-1,q})'.\n\\end{aligned}\n\\end{equation}\nNoting that $\\alpha(1-\\alpha)>0$ for $\\alpha\\in\\Theta_{\\alpha}$, define, with $t_{\\lambda}(\\lambda,\\pi):=\\alpha(1-\\alpha)v(\\lambda)$, \n\\begin{equation}\\label{score}\n\\begin{aligned}\nt(\\psi,\\pi) &:= \n\\begin{pmatrix}\n\\eta - \\eta^* \\\\\nt_{\\lambda}(\\lambda,\\pi)\n\\end{pmatrix}, \\ \ns_{ \\varrho k} : = \\begin{pmatrix}\ns_{\\eta k} \\\\\ns_{\\lambda \\varrho k}\n\\end{pmatrix}, \\ \\text{where} \\\ns_{\\eta k} : = \\frac{\\nabla_\\eta \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf 
Y}}_{0}^{k-1})}{\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})} = \\begin{pmatrix}\n\\nabla_{\\gamma} f_k^* \/ f_k^* \\\\\n\\nabla_{\\theta} f_k^* \/ f_k^*\n\\end{pmatrix},\n\\end{aligned}\n\\end{equation}\nand $s_{\\lambda \\varrho k} := V(s_{\\lambda\\lambda \\varrho k})$ with \n\\begin{equation} \\label{score_lambda}\ns_{\\lambda\\lambda \\varrho k} := \\frac{\\nabla_{\\lambda\\lambda'} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})}{\\alpha(1-\\alpha) \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})} = \\frac{\\nabla_{\\theta \\theta'} f_k^*}{f_k^*} + \\sum_{t= 1}^{k-1} \\varrho^{k-t} \\left( \\frac{\\nabla_{\\theta} f_{t}^*}{f_{t}^*} \\frac{\\nabla_{\\theta'} f_{k}^*}{f_{k}^*} + \\frac{\\nabla_{\\theta} f_{k}^*}{f_{k}^*} \\frac{\\nabla_{\\theta'} f_{t}^*}{f_{t}^*} \\right).\n\\end{equation}\nHere, $s_{\\varrho k}$ in (\\ref{score}) depends on $\\varrho$ but not on $\\alpha$ and corresponds to $s_{\\pi k}$ in Corollary \\ref{cor_appn}.\nThe following proposition shows that the log-likelihood function is approximated by a quadratic function of $\\sqrt{n}t(\\psi,\\pi)$. Let $\\mathcal{N}_{\\varepsilon } := \\{ \\vartheta_2 \\in \\Theta_{2 }: |t(\\psi,\\pi)|< \\varepsilon \\}$. Let $A_{n \\varepsilon }(\\xi) := \\{\\vartheta \\in \\mathcal{N}_{\\varepsilon}: \\ell_n(\\psi,\\pi,\\xi) - \\ell_n(\\psi^*,\\pi,\\xi) \\geq 0\\} $ and $A_{n\\varepsilon c}(\\xi) := A_{n\\varepsilon }(\\xi)\\cup \\mathcal{N}_{c\/\\sqrt{n}}$, where we suppress the subscript $2$ from $\\xi_2$.\nWe use this definition of $A_{n\\varepsilon c}(\\xi)$ through Sections \\ref{subsec:nonnormal}--\\ref{subsec:homo_normal}. As shown in Sections \\ref{subsec:hetero_normal} and \\ref{subsec:homo_normal}, Assumption \\ref{A-nonsing1} does not hold for regime switching models with a normal distribution. 
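The moment identities in (\\ref{markov_moments}) drive the score calculations in this section. As a sanity check (a sketch outside the formal argument, assuming \\texttt{sympy} and \\texttt{numpy} are available), the centered Bernoulli moments can be verified symbolically and the geometric decay of the autocorrelation of $q_k$ can be verified numerically from the transition matrix:

```python
# Sanity check of the moment identities in (markov_moments).
# Hypothetical standalone script; sympy and numpy are assumed available.
import sympy as sp
import numpy as np

a = sp.symbols('alpha', positive=True)

def cmom(m):
    # m-th centered moment of q ~ Bernoulli(alpha)
    return sp.expand(a*(1 - a)**m + (1 - a)*(-a)**m)

assert sp.simplify(cmom(2) - a*(1 - a)) == 0
assert sp.simplify(cmom(3) - a*(1 - a)*(1 - 2*a)) == 0
assert sp.simplify(cmom(4) - a*(1 - a)*(3*a**2 - 3*a + 1)) == 0

# Autocorrelation of q_k = 1{x_k = 1} for a stationary two-state Markov chain
# with transition probabilities p11, p22: corr(q_k, q_{k+l}) = rho**l with
# rho = p11 + p22 - 1.
p11, p22 = 0.9, 0.7
P = np.array([[p11, 1 - p11], [1 - p22, p22]])
alpha = (1 - p22) / ((1 - p11) + (1 - p22))   # stationary P(x_k = 1)
rho = p11 + p22 - 1
for ell in range(1, 6):
    joint11 = alpha * np.linalg.matrix_power(P, ell)[0, 0]  # P(q_k=1, q_{k+l}=1)
    corr = (joint11 - alpha**2) / (alpha * (1 - alpha))
    assert abs(corr - rho**ell) < 1e-12
```

The numeric part confirms $\\text{corr}_{\\vartheta_2^*}(q_k,q_{k+\\ell}) = \\varrho^{|\\ell|}$ with $\\varrho = p_{11}+p_{22}-1$ for one illustrative transition matrix.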
\n\\begin{assumption} \\label{A-nonsing1} \n$0< \\inf_{\\varrho\\in \\Theta_{\\varrho}} \\lambda_{\\min}(\\mathcal{I}_{\\varrho}) \\leq \\sup_{\\varrho\\in\\Theta_{\\varrho}} \\lambda_{\\max}(\\mathcal{I}_\\varrho) < \\infty$\n for $\\mathcal{I}_{\\varrho}= \\lim_{k\\rightarrow \\infty} \\mathbb{E}_{\\vartheta^*}(s_{\\varrho k}s_{\\varrho k}')$, where $s_{\\varrho k}$ is given in (\\ref{score}).\n\\end{assumption} \n\\begin{proposition} \\label{P-quadratic} Suppose Assumptions \\ref{assn_a1}, \\ref{assn_a2}, \\ref{assn_a4}, \\ref{A-consist}, and \\ref{A-nonsing1} hold. Then, under the null hypothesis of $M=1$, (a) $\\sup_{\\xi}\\sup_{\\vartheta \\in A_{n \\varepsilon }(\\xi)} |t(\\psi,\\pi)| = O_{p \\varepsilon }(n^{-1\/2})$; and (b) for any $c>0$,\n\\begin{equation} \\label{ln_appn}\n\\sup_{\\xi \\in \\Xi}\\sup_{\\vartheta \\in A_{n \\varepsilon c}(\\xi) } \\left| \\ell_n(\\psi,\\pi,\\xi) - \\ell_n(\\psi^*,\\pi,\\xi) - \\sqrt{n}t(\\psi,\\pi)' \\nu_n (s_{\\varrho k}) + n t(\\psi,\\pi)' \\mathcal{I}_\\varrho t(\\psi,\\pi)\/2 \\right| = o_{p \\varepsilon} (1).\n\\end{equation}\n\\end{proposition}\nWe proceed to derive the asymptotic distribution of the LRTS. 
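To interpret Proposition \\ref{P-quadratic}, the following heuristic calculation (a sketch, not part of the formal proof) shows how the quadratic approximation in (\\ref{ln_appn}) generates a quadratic-form limit for the LRTS:

```latex
% Heuristic: ignoring the restriction t_lambda in v(R^q), the quadratic in
% (\ref{ln_appn}) is maximized at \hat t = n^{-1/2} I_rho^{-1} nu_n(s_{rho k}):
\max_{t}\Bigl\{ \sqrt{n}\, t' \nu_n(s_{\varrho k}) - \tfrac{n}{2}\, t' \mathcal{I}_\varrho t \Bigr\}
   = \tfrac{1}{2}\, \nu_n(s_{\varrho k})' \mathcal{I}_\varrho^{-1} \nu_n(s_{\varrho k}),
% so that 2[ \ell_n(\psi,\pi,\xi) - \ell_n(\psi^*,\pi,\xi) ] is approximately a
% quadratic form in the normalized score. Imposing the cone restriction
% t_lambda in v(R^q) replaces the unconstrained maximizer by a projection onto
% the cone, which is the source of the minimization problem that defines the
% limit distribution of the LRTS.
```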
With $s_{\\varrho k}$ defined in (\\ref{score}), define \n\\begin{equation} \\label{I_lambda}\n\\begin{aligned}\n&\\mathcal{I}_{\\eta} := \\mathbb{E}_{\\vartheta^*}(s_{\\eta k} s_{\\eta k}'), \\quad \\mathcal{I}_{\\lambda\\varrho_1\\varrho_2} := \\lim_{k\\rightarrow \\infty}\\mathbb{E}_{\\vartheta^*}(s_{\\lambda\\varrho_1 k} s_{\\lambda\\varrho_2 k}'), \\quad \\mathcal{I}_{\\lambda\\eta \\varrho } := \\lim_{k\\rightarrow \\infty}\\mathbb{E}_{\\vartheta^*}(s_{\\lambda\\varrho k} s_{\\eta k}'),\\\\\n&\\mathcal{I}_{\\eta \\lambda \\varrho} := \\mathcal{I}_{\\lambda \\eta \\varrho}', \\quad \\mathcal{I}_{\\lambda.\\eta \\varrho_1 \\varrho_2}:=\\mathcal{I}_{\\lambda \\varrho_1 \\varrho_2}-\\mathcal{I}_{\\lambda\\eta \\varrho_1}\\mathcal{I}_{\\eta}^{-1}\\mathcal{I}_{\\eta\\lambda \\varrho_2}, \\quad \\mathcal{I}_{\\lambda.\\eta \\varrho}:=\\mathcal{I}_{\\lambda.\\eta \\varrho\\varrho},\\quad\\ Z_{\\lambda \\varrho}:=(\\mathcal{I}_{\\lambda.\\eta \\varrho})^{-1}G_{\\lambda.\\eta \\varrho}, \n\\end{aligned}\n\\end{equation} \nwhere\n$G_{\\lambda. \\eta \\varrho}$ is a $q_\\lambda$-vector mean zero Gaussian process indexed by $\\varrho$ with $\\text{cov}(G_{\\lambda. \\eta\\varrho_1},G_{\\lambda. \\eta\\varrho_2}) = \\mathcal{I}_{\\lambda.\\eta \\varrho_1 \\varrho_2}$. Define the set of admissible values of $\\sqrt{n}\\alpha(1-\\alpha)v(\\lambda)$ when $n\\rightarrow \\infty$ by $v(\\mathbb{R}^q):= \\{ x \\in \\mathbb{R}^{q_\\lambda}: x = v(\\lambda) \\text{ for some } \\lambda \\in \\mathbb{R}^q\\}$. Define $\\tilde t_{\\lambda \\varrho}$ by\n\\begin{equation} \\label{t-lambda}\nr_{\\lambda \\varrho}(\\tilde t_{\\lambda \\varrho}) = \\inf_{t_\\lambda \\in v(\\mathbb{R}^q)}r_{\\lambda \\varrho}(t_{\\lambda }), \\quad r_{\\lambda \\varrho}(t_{\\lambda}) := (t_{\\lambda} -Z_{\\lambda \\varrho})' \\mathcal{I}_{\\lambda.\\eta \\varrho} (t_{\\lambda} -Z_{\\lambda \\varrho}).\n\\end{equation}\nThe following proposition establishes the asymptotic null distribution of the LRTS. 
\n\\begin{proposition} \\label{P-LR}\nSuppose Assumptions \\ref{assn_a1}, \\ref{assn_a2}, \\ref{assn_a4}, \\ref{A-consist}, and \\ref{A-nonsing1} hold. Then, under the null hypothesis of $M=1$, $LR_n \\overset{d}{\\rightarrow} \\sup_{ \\varrho \\in\\Theta_{\\varrho}} \\left(\\tilde t_{\\lambda \\varrho}' \\mathcal{I}_{\\lambda.\\eta \\varrho} \\tilde t_{\\lambda \\varrho} \\right)$.\n\\end{proposition} \nIn Proposition \\ref{P-LR}, the LRTS and its asymptotic distribution depend on the choice of $\\epsilon$ because $\\Theta_\\varrho=[-1+2\\epsilon,1-2\\epsilon]$. It is possible to develop a version of the EM test \n\\citep[][]{chenli09as, chenlifu12jasa, kasaharashimotsu15jasa} in this context that does not impose an explicit restriction on the parameter space for $p_{11}$ and $p_{22}$; however, we leave such an extension to future research.\n\n\\begin{remark}\nWhen applied to the Markov regime switching model, the tests of \\citet{carrasco14em} use the residuals from projecting $\\nabla_{\\theta \\theta'} f_k \/ f_k + 2 \\sum_{t=1}^{k-1} \\varrho^{k-t} (\\nabla_{\\theta} f_{t} \/ f_{t}) (\\nabla_{\\theta'} f_{k} \/ f_{k})$ on $\\nabla_{\\theta} f_k \/ f_k$, where both are evaluated at the one-regime MLE. Therefore, in the non-normal case, the LRT and tests of \\citet{carrasco14em} are based on the same score function.\n\\end{remark}\n\n\\subsection{Heteroscedastic normal distribution}\\label{subsec:hetero_normal}\n\nSuppose that $Y_k\\in \\mathbb{R}$ in the $j$-th regime follows a normal distribution with regime-specific intercept $\\mu_j$ and variance $\\sigma_j^2$. 
We split $\\theta_j$ into $\\theta_j = (\\zeta_j',\\sigma_j^2)'= (\\mu_j,\\beta_j',\\sigma_j^2)'$, and write the density of the $j$-th regime as\n\\begin{equation}\\label{normal-density}\nf(y_k|\\overline{\\bf{y}}_{k-1};\\gamma,\\theta_j) =f(y_k|\\overline{\\bf{y}}_{k-1};\\gamma,\\zeta_j,\\sigma^2_j) = \\frac{1}{\\sigma_j}\\phi\\left( \\frac{y_k - \\mu_j - \\varpi(\\overline{\\bf{y}}_{k-1};\\gamma,\\beta_j ) }{\\sigma_j}\\right),\n\\end{equation}\nfor some function $\\varpi$. In many applications, $\\varpi$ is a linear function of $\\gamma$ and $\\beta_j$, e.g., $\\varpi(\\overline{\\bf{y}}_{k-1},w_k;\\gamma,\\beta_j)= (\\overline{\\bf{y}}_{k-1})'\\beta_j + w_k'\\gamma$. Consider the following reparameterization introduced in \\citet{kasaharashimotsu15jasa} ($\\theta$ in Kasahara and Shimotsu corresponds to $\\zeta$ here):\n\\begin{equation}\n\\left(\n\\begin{array}{c}\n\\zeta_1\\\\\n\\zeta_2\\\\\n\\sigma_1^2\\\\\n\\sigma_2^2\n\\end{array}\n\\right) =\\left(\n\\begin{array}{c}\n\\nu_{\\zeta} + (1-\\alpha)\\lambda_{\\zeta} \\\\\n\\nu_{\\zeta} -\\alpha\\lambda_{\\zeta}\\\\\n\\nu_\\sigma + (1- \\alpha)(2\\lambda_\\sigma+ C_1 \\lambda_\\mu^2)\\\\\n\\nu_\\sigma - \\alpha(2\\lambda_\\sigma+ C_2 \\lambda_\\mu^2)\n\\end{array}\n\\right), \\label{repara2}\n\\end{equation}\nwhere $\\nu_{\\zeta}=(\\nu_\\mu,\\nu_{\\beta}')'$, $\\lambda_{\\zeta}=(\\lambda_\\mu,\\lambda_{\\beta}')'$, $C_1 := -(1\/3)(1 + \\alpha)$, and $C_2 := (1\/3)(2 - \\alpha)$, so that $C_1=C_2-1$. Collect the reparameterized parameters, except for $\\alpha$, into one vector $\\psi_{\\alpha}$. As in Section \\ref{subsec:nonnormal}, we suppress the subscript $\\alpha$ from $\\psi_{\\alpha}$. 
Let the reparameterized density be \n\\begin{equation} \\label{repara_g_hetero}\ng_{\\psi}(y_k|\\overline{\\bf{y}}_{k-1},x_k) = f\\left(y_k|\\overline{\\bf{y}}_{k-1};\\gamma,\\nu_\\zeta + (q_k-\\alpha)\\lambda_\\zeta, \\nu_\\sigma + (q_k - \\alpha)(2\\lambda_\\sigma + (C_2 -q_k) \\lambda_\\mu^2 )\\right).\n\\end{equation}\nLet $\\psi := (\\eta',\\lambda')' \\in \\Theta_\\psi = \\Theta_\\eta \\times \\Theta_\\lambda$, where $\\eta := (\\gamma',\\nu_\\zeta',\\nu_\\sigma)'$ and $\\lambda := (\\lambda_\\zeta',\\lambda_\\sigma)'$. Because the likelihood function of a normal mixture model is unbounded when $\\sigma_j \\rightarrow 0$ \\citep{hartigan85book}, we impose $\\sigma_j \\geq \\epsilon_\\sigma$ for a small $\\epsilon_\\sigma >0$ in $\\Theta_\\psi$. We proceed to derive the derivatives of $g_{\\psi}(Y_k|\\overline{\\bf{Y}}_{k-1},X_k)$ evaluated at $\\psi^*$. The derivatives $\\nabla_\\psi g_k^*$, $\\nabla_{\\lambda\\eta'}g_k^*$, and $\\nabla_{\\lambda\\lambda'}g_k^*$ are the same as those given in (\\ref{dg}), except that $\\nabla_{\\lambda_\\mu^2}g_k^*$ differs and that derivatives taken $j$ times with respect to $\\lambda_\\sigma$ are multiplied by $2^j$. The higher-order derivatives of $g_{\\psi}(Y_k|\\overline{\\bf{Y}}_{k-1},X_k)$ with respect to $\\lambda_\\mu$ are derived by following \\citet{kasaharashimotsu15jasa}. 
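As a quick consistency check on (\\ref{repara2}) and (\\ref{repara_g_hetero}) (a sketch assuming \\texttt{sympy} is available), substituting $q_k=1$ and $q_k=0$ into the variance argument of $g_{\\psi}$ should recover $\\sigma_1^2$ and $\\sigma_2^2$; this uses the identity $C_1 = C_2 - 1$:

```python
# Check that the reparameterized variance argument in (repara_g_hetero)
# reproduces sigma_1^2 and sigma_2^2 from (repara2) at q_k = 1 and q_k = 0.
# Hypothetical standalone script; sympy is assumed available.
import sympy as sp

a, nu_s, lam_s, lam_m, q = sp.symbols('alpha nu_sigma lambda_sigma lambda_mu q')
C1 = -sp.Rational(1, 3)*(1 + a)
C2 = sp.Rational(1, 3)*(2 - a)

# variance argument of g_psi as a function of q_k
var_arg = nu_s + (q - a)*(2*lam_s + (C2 - q)*lam_m**2)
sigma1_sq = nu_s + (1 - a)*(2*lam_s + C1*lam_m**2)   # from (repara2)
sigma2_sq = nu_s - a*(2*lam_s + C2*lam_m**2)

assert sp.simplify(C1 - (C2 - 1)) == 0                     # C_1 = C_2 - 1
assert sp.simplify(var_arg.subs(q, 1) - sigma1_sq) == 0    # regime 1
assert sp.simplify(var_arg.subs(q, 0) - sigma2_sq) == 0    # regime 2
```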
\nFrom Lemma \\ref{lemma_normal_der} and the fact that the normal density $f(\\mu,\\sigma^2)$ satisfies\n\\begin{equation} \\label{norma_der}\n\\begin{aligned}\n\\nabla_{\\mu^2} f(\\mu,\\sigma^2) &= 2\\nabla_{\\sigma^2} f(\\mu,\\sigma^2), \\quad \\nabla_{\\mu^3} f(\\mu,\\sigma^2) = 2\\nabla_{\\mu\\sigma^2} f(\\mu,\\sigma^2), \\ \\ \\text{and}\\\\\n\\nabla_{\\mu^4} f(\\mu,\\sigma^2) &= 2 \\nabla_{\\mu^2\\sigma^2} f(\\mu,\\sigma^2)= 4 \\nabla_{ \\sigma^2 \\sigma^2} f(\\mu,\\sigma^2),\n\\end{aligned}\n\\end{equation}\nwe have\n\\begin{equation} \\label{d3g}\n\\nabla_{\\lambda_\\mu^i} g_k^* = d_{ik} \\nabla_{\\mu^i} f_k^*, \\quad i = 1, \\ldots, 4,\n\\end{equation}\nwhere \n\\begin{align*} \n d_{0k} & :=1, \\quad d_{1k} := q_k - \\alpha, \\quad d_{2k} := (q_k - \\alpha)(C_2 - \\alpha), \\quad d_{3k} := 2 (q_k - \\alpha)^2(1-\\alpha-q_k), \\\\\nd_{4k} & := -2(q_k - \\alpha)^4 + 3(q_k-\\alpha)^2(\\alpha - C_2)^2. \n\\end{align*} \nIt follows from $\\mathbb{E}_{\\vartheta^*}[ q_k|\\overline{{\\bf Y}}_{-\\infty}^n]=\\alpha$, (\\ref{markov_moments}), and an elementary calculation that\n\\begin{equation} \\label{d3g2}\n\\begin{aligned}\n\\mathbb{E}_{\\vartheta^*}[ d_{ik}|\\overline{{\\bf Y}}_{-\\infty}^n] &= 0, \\quad \\mathbb{E}_{\\vartheta^*}[ \\nabla_{\\lambda_\\mu^i} g_k^*|\\overline{{\\bf Y}}_{-\\infty}^k] = 0, \\quad i = 1,2,3, \\\\\n\\mathbb{E}_{\\vartheta^*}[ d_{4k}|\\overline{{\\bf Y}}_{-\\infty}^n] &= \\alpha(1 - \\alpha) b(\\alpha), \\\\\n\\mathbb{E}_{\\vartheta^*}[ \\nabla_{\\lambda_\\mu^4} g_k^*|\\overline{{\\bf Y}}_{-\\infty}^k] &= \\alpha(1-\\alpha)b(\\alpha)\\nabla_{\\mu^4} f_k^* = \\alpha(1-\\alpha) b(\\alpha) 4\\nabla_{\\sigma^2\\sigma^2}f_k^* = b(\\alpha) \\mathbb{E}_{\\vartheta^*}[ \\nabla_{\\lambda_\\sigma^2} g_k^*|\\overline{{\\bf Y}}_{-\\infty}^k],\n\\end{aligned}\n\\end{equation}\nwith $b(\\alpha) := -(2\/3) (\\alpha^2 - \\alpha + 1) < 0$. 
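The conditional-moment identities in (\\ref{d3g2}) reduce, given (\\ref{indep}), to unconditional moments of a Bernoulli($\\alpha$) draw. A symbolic check (a sketch; \\texttt{sympy} assumed available):

```python
# Symbolic check of (d3g2): expectations of d_{ik} over q ~ Bernoulli(alpha).
# Hypothetical standalone script; sympy is assumed available.
import sympy as sp

a, q = sp.symbols('alpha q')
C2 = sp.Rational(1, 3)*(2 - a)
b = -sp.Rational(2, 3)*(a**2 - a + 1)

def E(expr):
    # expectation over q ~ Bernoulli(alpha)
    return sp.expand(a*expr.subs(q, 1) + (1 - a)*expr.subs(q, 0))

d1 = q - a
d2 = (q - a)*(C2 - a)
d3 = 2*(q - a)**2*(1 - a - q)
d4 = -2*(q - a)**4 + 3*(q - a)**2*(a - C2)**2

assert sp.simplify(E(d1)) == 0
assert sp.simplify(E(d2)) == 0
assert sp.simplify(E(d3)) == 0
assert sp.simplify(E(d4) - a*(1 - a)*b) == 0   # E d_{4k} = alpha(1-alpha) b(alpha)
```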
Hence, $\\mathbb{E}_{\\vartheta^*}[ \\nabla_{\\lambda_\\sigma^2} g_k^*|\\overline{{\\bf Y}}_{-\\infty}^k]$ and $\\mathbb{E}_{\\vartheta^*}[ \\nabla_{\\lambda_\\mu^4} g_k^*|\\overline{{\\bf Y}}_{-\\infty}^k]$ are linearly dependent.\n\nWe proceed to derive a representation of $\\nabla^j \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})$ in terms of $\\nabla^j f_k^*$. Repeating the calculation leading to (\\ref{d1p})--(\\ref{d2p}) and using (\\ref{d3g2}) gives the following. First, (\\ref{d1p}) and (\\ref{d2p0}) still hold; second, the elements of $\\nabla_{\\lambda\\lambda'} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})$ except for the $(1,1)$-th element are given by (\\ref{d2p}) after adjusting for the fact that derivatives with respect to $\\lambda_{\\sigma}$ are multiplied by 2 (e.g., $\\nabla_{\\lambda_\\sigma} g_k^* = 2(q_k-\\alpha) \\nabla_{\\sigma^2}f_k^*$ and $\\nabla_{\\lambda_\\sigma\\lambda_\\mu} g_k^* = 2(q_k-\\alpha)^2 \\nabla_{\\sigma^2\\mu}f_k^*$); third,\n\\begin{equation} \\label{d2p-lambda}\n\\frac{\\nabla_{\\lambda_\\mu^2} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})}{\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})} = \\alpha(1-\\alpha) \\sum_{t=1}^{k-1} \\varrho^{k-t} \\left( 2 \\frac{\\nabla_{\\mu} f_{t}^*}{f_{t}^*} \\frac{\\nabla_{\\mu} f_{k}^*}{f_{k}^*}\\right).\n\\end{equation} \nWhen $\\varrho \\neq 0$, $\\nabla_{\\lambda_\\mu^2} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})$ is a non-degenerate random variable as in the non-normal case. 
When $\\varrho=0$, however, $\\nabla_{\\lambda_\\mu^2} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})$ becomes identically equal to 0, and indeed the first non-zero derivative with respect to $\\lambda_\\mu$ is the fourth derivative.\n\nBecause of this degeneracy, we derive the asymptotic distribution of the LRTS by expanding $\\ell_n(\\psi,\\pi,\\xi)-\\ell_n(\\psi^*,\\pi,\\xi)$ four times. It is not correct, however, to simply approximate $\\ell_n(\\psi,\\pi,\\xi)-\\ell_n(\\psi^*,\\pi,\\xi)$ by a quadratic function of $\\lambda_\\mu^2$ (and other terms) when $\\varrho \\neq 0$ and a quadratic function of $\\lambda_\\mu^4$ when $\\varrho=0$. This results in discontinuity at $\\varrho=0$ and fails to provide a valid uniform approximation. We establish a uniform approximation by expanding $\\ell_n(\\psi,\\pi,\\xi)$ four times but expressing $\\ell_n(\\psi,\\pi,\\xi)$ in terms of $\\varrho\\lambda_\\mu^2$, $\\lambda_\\mu^4$, and other terms.\n \nFor $m \\geq 0$, define $\\zeta_{k,m}(\\varrho):= \\sum_{t=-m+1}^{k-1} \\varrho^{k-t-1} 2 \\nabla_{\\mu} f_{t}^*\\nabla_{\\mu} f_{k}^*\/ f_{t}^* f_{k}^*$. Then, we can write (\\ref{d2p-lambda}) as\n\\begin{equation} \\label{d2p2}\n\\frac{\\nabla_{\\lambda_\\mu^2} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})}{\\alpha(1-\\alpha) \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})} = \\sum_{t=1}^{k-1} \\varrho^{k-t}\\left( 2 \\frac{\\nabla_{\\mu} f_{t}^*}{f_{t}^*} \\frac{\\nabla_{\\mu} f_{k}^*}{f_{k}^*}\\right) = \\varrho \\zeta_{k,0}(\\varrho).\n\\end{equation}\nNote that $\\zeta_{k,m}(\\varrho)$ satisfies $\\mathbb{E}_{\\vartheta^*}[\\zeta_{k,m}(\\varrho)| \\overline{{\\bf Y}}_{-m}^{k-1}]=0$ and is non-degenerate even when $\\varrho=0$. Define $v(\\lambda_\\beta)$ as $v(\\lambda)$ in (\\ref{v_lambda}) but replacing $\\lambda$ with $\\lambda_\\beta$. 
Collect the relevant parameters as \n\\begin{equation} \\label{t-psi2}\nt(\\psi,\\pi) :=\n\\begin{pmatrix}\n\\eta - \\eta^* \\\\\nt_{\\lambda}(\\lambda,\\pi)\n\\end{pmatrix},\n\\end{equation}\nwhere \n\\begin{equation} \\label{t_lambda_rho}\nt_{\\lambda}(\\lambda,\\pi)\n:=\\alpha(1-\\alpha)\n\\begin{pmatrix}\n\\varrho \\lambda_\\mu^2 \\\\\n \\lambda_\\mu\\lambda_\\sigma\\\\\n\\lambda_\\sigma^2 + b(\\alpha)\\lambda_\\mu^4\/12\\\\\n\\lambda_\\beta \\lambda_\\mu\\\\\n\\lambda_\\beta \\lambda_\\sigma\\\\\nv(\\lambda_\\beta)\n\\end{pmatrix}, \n\\end{equation}\nwith $b(\\alpha)= -(2\/3)(\\alpha^2-\\alpha+1)<0$. Recall $\\theta_j = (\\zeta_j',\\sigma_j^2)'= (\\mu_j,\\beta_j',\\sigma_j^2)'$. Similarly to (\\ref{score_lambda}), define the elements of the generalized score by\n\\begin{equation} \\label{score_lambda_beta}\n\\begin{pmatrix}\n* & s_{\\lambda_{\\mu\\beta} \\varrho k} & s_{\\lambda_{\\mu\\sigma} \\varrho k} \\\\\ns_{\\lambda_{\\beta\\mu} \\varrho k} & s_{\\lambda_{\\beta\\beta} \\varrho k} & s_{\\lambda_{\\beta\\sigma} \\varrho k} \\\\\ns_{\\lambda_{\\sigma\\mu} \\varrho k} & s_{\\lambda_{\\sigma\\beta} \\varrho k}& s_{\\lambda_{\\sigma\\sigma} \\varrho k} \n\\end{pmatrix}\n= \\frac{\\nabla_{\\theta \\theta'} f_k^*}{f_k^*} + \\sum_{t= 1 }^{k-1} \\varrho^{k-t} \\left( \\frac{\\nabla_{\\theta} f_{t}^*}{f_{t}^*} \\frac{\\nabla_{\\theta'} f_{k}^*}{f_{k}^*} + \\frac{\\nabla_{\\theta} f_{k}^*}{f_{k}^*} \\frac{\\nabla_{\\theta'} f_{t}^*}{f_{t}^*} \\right).\n\\end{equation}\nDefine the generalized score as \n\\begin{equation}\\label{score_normal}\ns_{\\varrho k} : = \n\\begin{pmatrix}\ns_{\\eta k} \\\\\ns_{\\lambda \\varrho k}\n\\end{pmatrix},\n\\quad \\text{where}\\quad\ns_{\\eta k} : = \\begin{pmatrix}\n\\nabla_{\\gamma} f_k^* \/ f_k^* \\\\\n\\nabla_{\\theta} f_k^* \/ f_k^* \n\\end{pmatrix} \\\n\\text{ and }\\ \ns_{\\lambda \\varrho k}\n:= \n\\begin{pmatrix}\n\\zeta_{k,0}(\\varrho)\/2 \\\\\n 2 s_{\\lambda_{\\mu\\sigma} \\varrho k} \\\\\n 2 s_{\\lambda_{\\sigma\\sigma} 
\\varrho k}\\\\\ns_{\\lambda_{\\beta \\mu} \\varrho k}\\\\\n 2 s_{\\lambda_{\\beta \\sigma} \\varrho k}\\\\\n V(s_{\\lambda_{\\beta\\beta} \\varrho k}) \n\\end{pmatrix}.\n\\end{equation} \nThe following proposition establishes a uniform approximation of the log-likelihood ratio. \n\\begin{assumption} \\label{A-nonsing2}\n(a) $0< \\inf_{\\varrho\\in\\Theta_{\\varrho}} \\lambda_{\\min}(\\mathcal{I}_\\varrho) \\leq \\sup_{\\varrho\\in\\Theta_{\\varrho}}\\lambda_{\\max}(\\mathcal{I}_\\varrho) < \\infty$ for $\\mathcal{I}_{\\varrho}= \\lim_{k\\rightarrow \\infty } \\mathbb{E}_{\\vartheta^*}(s_{\\varrho k}s_{\\varrho k}')$, where $s_{\\varrho k}$ is given in (\\ref{score_normal}). (b) $\\sigma_1^*,\\sigma_2^* >\\epsilon_\\sigma$. \n\\end{assumption} \n\n\\begin{proposition} \\label{P-quadratic-N1} Suppose Assumptions \\ref{assn_a1}, \\ref{assn_a2}, \\ref{assn_a4}, \\ref{A-consist}, and \\ref{A-nonsing2} hold and the density of the $j$-th regime is given by (\\ref{normal-density}). Then, under the null hypothesis of $M=1$, (a) $\\sup_{\\vartheta \\in A_{n \\varepsilon}(\\xi)} |t(\\psi,\\pi)| = O_{p \\varepsilon }(n^{-1\/2})$; and (b) for any $c>0$,\n\\begin{equation} \\label{ln_appn_N1}\n\\sup_{\\xi \\in \\Xi} \\sup_{\\vartheta \\in A_{n\\varepsilon c}(\\xi) } \\left| \\ell_n(\\psi,\\pi,\\xi) - \\ell_n(\\psi^*,\\pi,\\xi) - \\sqrt{n}t(\\psi,\\pi)' \\nu_n (s_{\\varrho k}) + n t(\\psi,\\pi)' \\mathcal{I}_\\varrho t(\\psi,\\pi)\/2 \\right| = o_{p \\varepsilon }(1).\n\\end{equation}\n\\end{proposition}\nThe asymptotic null distribution of the LRTS is characterized by the supremum of $ 2 t_{\\lambda}' G_{\\lambda.\\eta\\varrho}- t_{\\lambda}' \\mathcal{I}_{\\lambda.\\eta \\varrho} t_{\\lambda}$, where $G_{\\lambda. 
\\eta \\varrho}$ and $\\mathcal{I}_{\\lambda.\\eta \\varrho}$ are defined analogously to those in (\\ref{I_lambda}) but with $s_{\\varrho k}$ defined in (\\ref{score_normal}), and the supremum is taken with respect to $t_{\\lambda}$ and $\\varrho\\in\\Theta_{\\varrho}$ under the constraint implied by the limit of the set of possible values of $\\sqrt{n}t_{\\lambda}(\\lambda,\\pi)$ as $n\\rightarrow\\infty$. This constraint is given by the union of $\\Lambda_{\\lambda}^1$ and $\\Lambda_{\\lambda \\varrho}^2$, where $q_\\beta := \\dim(\\beta)$, $q_{\\lambda}:= 3 +2q_\\beta+q_\\beta(q_\\beta+1)\/2$, and\n\\begin{equation}\n\\begin{aligned}\\label{Lambda-lambda}\n&\\Lambda_{\\lambda }^1:=\\{ t_{\\lambda}=( t_{\\varrho\\mu^2}, t_{\\mu\\sigma },t_{\\sigma^2 },t_{\\beta\\mu}',t_{\\beta\\sigma}',t_{v(\\beta)}')' \\in \\mathbb{R}^{q_{\\lambda} } : \\\\\n& \\qquad \\qquad (t_{\\varrho\\mu^2},t_{\\mu\\sigma },t_{\\sigma^2 },t_{\\beta\\mu}')'\\in \\mathbb{R}\\times\\mathbb{R}\\times\\mathbb{R}_{-}\\times\\mathbb{R}^{q_\\beta}, t_{\\beta\\sigma}=0, t_{v(\\beta)}=0\\}, \\quad \\\\\n&\\Lambda_{\\lambda \\varrho}^2:=\\{ t_{\\lambda }= (t_{\\varrho\\mu^2},t_{\\mu\\sigma },t_{\\sigma^2},t_{\\beta\\mu}',t_{\\beta\\sigma}',t_{v(\\beta)}')' \\in \\mathbb{R}^{q_{\\lambda} }: t_{\\varrho\\mu^2} = \\varrho \\lambda_{\\mu}^2, t_{\\mu\\sigma }= \\lambda_\\mu\\lambda_\\sigma, \\\\\n& \\qquad \\qquad \nt_{\\sigma^2}= \\lambda_\\sigma^2, t_{\\beta\\mu }= \\lambda_{\\beta} \\lambda_\\mu, t_{\\beta\\sigma }= \\lambda_{\\beta} \\lambda_\\sigma, t_{v(\\beta)} =v(\\lambda_{\\beta})\\ \\text{for some }\\lambda\\in \\mathbb{R}^{2+q_\\beta} \\}. \n\\end{aligned}\n\\end{equation}\nNote that $\\Lambda_{\\lambda\\varrho}^2$ depends on $\\varrho$, whereas $\\Lambda_{\\lambda}^1$ does not depend on $\\varrho$. 
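The sign restriction $t_{\\sigma^2} \\in \\mathbb{R}_{-}$ in $\\Lambda_{\\lambda}^1$ can be traced to the rates at which the components of $t_{\\lambda}(\\lambda,\\pi)$ vanish; the following back-of-the-envelope calculation (a sketch, assuming $\\lambda_\\mu$ of exact order $n^{-1\/8}$ and $\\lambda_\\sigma, \\lambda_\\beta = O(n^{-3\/8})$) illustrates it:

```latex
% With lambda_mu of order n^{-1/8} and lambda_sigma = O(n^{-3/8}),
\sqrt{n}\,\lambda_\sigma^2 = O(n^{-1/4}) \to 0,
\qquad
\sqrt{n}\,\lambda_\mu^4 = O(1),
% so the limit of the third component of sqrt(n) t_lambda(lambda, pi), namely
% sqrt(n) alpha(1-alpha) ( lambda_sigma^2 + b(alpha) lambda_mu^4 / 12 ),
% inherits the sign of b(alpha) < 0 and hence lies in R_-. Similarly,
% sqrt(n) lambda_beta lambda_sigma = O(n^{-1/4}) and
% sqrt(n) v(lambda_beta) = O(n^{-1/4}), matching the zero restrictions
% t_{beta sigma} = 0 and t_{v(beta)} = 0 in Lambda^1_lambda.
```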
Heuristically, $\\Lambda_{\\lambda}^1$ and $\\Lambda_{\\lambda\\varrho}^2$ correspond to the limits of the set of possible values of $\\sqrt{n}t_{\\lambda}(\\lambda,\\pi)$ when $\\liminf_{n\\to\\infty}n^{1\/8}|\\lambda_{\\mu}| > 0$ and $\\lambda_{\\mu}=o(n^{-1\/8})$, respectively. When $\\liminf_{n\\to\\infty}n^{1\/8}|\\lambda_{\\mu}| > 0$, we have $(\\hat\\lambda_\\sigma, \\hat\\lambda_\\beta) =O_p(n^{-3\/8})$ because $t_{\\lambda}(\\hat\\lambda,\\pi)=O_p(n^{-1\/2})$. Further, the set of possible values of $\\sqrt{n} \\varrho\\lambda_{\\mu}^2$ converges to $\\mathbb{R}$ because $\\varrho$ can be arbitrarily small. Consequently, the limit of $\\sqrt{n}t_{\\lambda}(\\lambda,\\pi)$ is characterized by $\\Lambda_{\\lambda}^1$.\n\nDefine $Z_{\\lambda \\varrho}$ and $\\mathcal{I}_{\\lambda.\\eta \\varrho}$ as in (\\ref{I_lambda}) but with $s_{\\varrho k}$ defined in (\\ref{score_normal}). Let $Z_{\\lambda 0}$ and $\\mathcal{I}_{\\lambda.\\eta 0}$ denote $Z_{\\lambda \\varrho}$ and $\\mathcal{I}_{\\lambda.\\eta \\varrho}$ evaluated at $\\varrho=0$. Define $\\tilde{t}_{\\lambda}^1$ and $\\tilde{t}_{\\lambda\\varrho}^2$ by\n\\begin{equation} \\label{t-lambda-N1}\n\\begin{aligned}\nr_{\\lambda}(\\tilde{t}_{\\lambda }^1) & = \\inf_{t_{\\lambda} \\in \\Lambda_{\\lambda}^1}r_{\\lambda}(t_{\\lambda}), \\quad r_{\\lambda}(t_{\\lambda}) := (t_{\\lambda} -Z_{\\lambda 0})' \\mathcal{I}_{\\lambda.\\eta 0} (t_{\\lambda} -Z_{\\lambda 0}) \\quad \\\\\nr_{\\lambda\\varrho}(\\tilde{t}_{\\lambda\\varrho}^2) & = \\inf_{t_{\\lambda} \\in \\Lambda_{\\lambda\\varrho}^2}r_{\\lambda\\varrho}(t_{\\lambda}), \\quad r_{\\lambda\\varrho}(t_{\\lambda}) := \n(t_{\\lambda} -Z_{\\lambda \\varrho})' \\mathcal{I}_{\\lambda.\\eta \\varrho} (t_{\\lambda} -Z_{\\lambda \\varrho}). \n\\end{aligned}\n\\end{equation}\nThe following proposition establishes the asymptotic null distribution of the LRTS. \n\\begin{proposition} \\label{P-LR-N1}\nSuppose that the assumptions in Proposition \\ref{P-quadratic-N1} hold. 
Then, under the null hypothesis of $M=1$, $LR_n \\overset{d}{\\rightarrow} \\max\\{ \\mathbb{I}\\{\\varrho=0\\} (\\tilde t_{\\lambda }^1)' \\mathcal{I}_{\\lambda.\\eta 0} \\tilde t_{\\lambda }^1, \\sup_{\\varrho\\in\\Theta_{\\varrho}} (\\tilde t_{\\lambda \\varrho}^2)' \\mathcal{I}_{\\lambda.\\eta \\varrho} \\tilde t_{\\lambda \\varrho}^2 \\}$. \n\\end{proposition} \n\\begin{remark}\n\\citet{quzhuo17wp} derive the asymptotic distribution of the LRTS under the restriction that $\\varrho\\geq \\epsilon>0$. \n\\end{remark}\n\n\\begin{remark}\nIt is possible to extend our analysis to the exponential-LR type tests studied by \\citet{andrewsploberger94em} and \\citet{carrasco14em}.\n\\end{remark}\n\n\\subsection{Homoscedastic normal distribution}\\label{subsec:homo_normal}\n\nSuppose that $Y_k\\in \\mathbb{R}$ in the $j$-th regime follows a normal distribution with the regime-specific intercept $\\mu_j$ but with common variance $\\sigma^2$. We split $\\gamma$ and $\\theta_j$ into $\\gamma=(\\tilde \\gamma',\\sigma^2)'$ and $\\theta_j=(\\mu_j,\\beta_j')'$, and write the density of the $j$-th regime as\n\\begin{equation}\\label{normal-density-homo}\nf(y_k|\\overline{\\bf{y}}_{k-1};\\gamma,\\theta_j) =f(y_k|\\overline{\\bf{y}}_{k-1};\\tilde \\gamma,\\theta_j,\\sigma^2) = \\frac{1}{\\sigma}\\phi\\left( \\frac{y_k - \\mu_j- \\varpi(\\overline{\\bf{y}}_{k-1};\\tilde\\gamma,\\beta_j ) }{\\sigma}\\right),\n\\end{equation}\nfor some function $\\varpi$. Consider the following reparameterization:\n\\begin{equation}\n\\left(\n\\begin{array}{c}\n\\theta_1\\\\\n\\theta_2\\\\\n\\sigma^2 \n\\end{array}\n\\right) =\\left(\n\\begin{array}{c}\n\\nu_{\\theta} + (1-\\alpha)\\lambda \\\\\n\\nu_{\\theta} -\\alpha\\lambda \\\\ \n\\nu_\\sigma - \\alpha(1-\\alpha) \\lambda_\\mu^2\n\\end{array}\n\\right), \\label{repara-homo}\n\\end{equation}\nwhere $\\nu_{\\theta}=(\\nu_\\mu,\\nu_\\beta')'$ and $\\lambda=(\\lambda_\\mu,\\lambda_\\beta')'$. 
Collect the reparameterized parameters, except for $\\alpha$, into one vector $\\psi_{\\alpha}$. Suppressing $\\alpha$ from $\\psi_{\\alpha}$, let the reparameterized density be\n\\begin{equation}\\label{repara_g_homo}\ng_{\\psi}(y_k|\\overline{\\bf{y}}_{k-1},x_k) = f\\left(y_k|\\overline{\\bf{y}}_{k-1};\\tilde\\gamma,\\nu_\\theta+(q_k-\\alpha)\\lambda, \\nu_\\sigma -\\alpha(1-\\alpha) \\lambda_\\mu^2 \\right).\n\\end{equation}\nLet $\\eta = (\\tilde\\gamma',\\nu_\\theta',\\nu_\\sigma)'$; then, the first and second derivatives of $g_{\\psi}(y_k|\\overline{\\bf{y}}_{k-1},x_k)$ with respect to $\\eta$ and $\\lambda$ are the same as those given in (\\ref{dg}) except for $\\nabla_{\\lambda_\\mu^2}g_{\\psi}(y_k|\\overline{\\bf{y}}_{k-1},x_k)$. We derive the higher-order derivatives of $g_{\\psi}(y_k|\\overline{\\bf{y}}_{k-1},x_k)$ with respect to $\\lambda_\\mu$. From Lemma \\ref{lemma_normal_der} and (\\ref{norma_der}), we obtain\n\\begin{equation} \\label{d3g-homo}\n\\begin{aligned}\n\\nabla_{\\lambda \\eta^i} g_k^* &= d_{1k} \\nabla_{\\theta \\eta^i} f_k^* \\quad \\text{for } i=0,1,\\ldots,\\\\\n\\nabla_{\\lambda_\\mu^i} g_k^* &= d_{ik} \\nabla_{\\mu^i} f_k^* \\quad \\text{for }i = 0,1, \\ldots, 4,\n\\end{aligned}\n\\end{equation}\nwhere $d_{0k} :=1$, $d_{1k} := q_k - \\alpha$, $d_{2k} := (q_k-\\alpha)^2-\\alpha(1-\\alpha)$, $d_{3k} := (q_k-\\alpha)^3 - 3(q_k-\\alpha)\\alpha(1-\\alpha)$, and\n$d_{4k} := (q_k - \\alpha)^4 -6 (q_k - \\alpha)^2 \\alpha(1-\\alpha) + 3\\alpha^2(1 - \\alpha)^2$.\nIt follows from $\\mathbb{E}_{\\vartheta^*}[ q_k|\\overline{{\\bf Y}}_{-\\infty}^n]=\\alpha$, (\\ref{markov_moments}), and an elementary calculation that \n\\begin{equation} \\label{d3g2-homo}\n\\begin{aligned}\n\\mathbb{E}_{\\vartheta^*}[ \\nabla_{\\lambda_\\mu^i} g_k^*|\\overline{{\\bf Y}}_{0}^k] &= 0, \\quad\\mathbb{E}_{\\vartheta^*}[ d_{ik}|\\overline{{\\bf Y}}_{0}^k] = 0,\\quad i = 1,2, \\\\\n \\mathbb{E}_{\\vartheta^*}[ d_{3k}|\\overline{{\\bf Y}}_{0}^k] &= \\alpha(1 - 
\\alpha) (1-2\\alpha),\n\\quad \\mathbb{E}_{\\vartheta^*}[ d_{4k}|\\overline{{\\bf Y}}_{0}^k] = \\alpha(1 - \\alpha) (1-6\\alpha+6\\alpha^2).\n\\end{aligned}\n\\end{equation}\nRepeating the calculation leading to (\\ref{d1p})--(\\ref{d2p}) and using (\\ref{d3g2-homo}) gives the following. First, (\\ref{d1p}) and (\\ref{d2p0}) still hold; second, the elements of $\\nabla_{\\lambda\\lambda'} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})$ are given by (\\ref{d2p}) except for the $(1,1)$-th element; third, $\\nabla_{\\lambda_\\mu^2} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})$ is given by (\\ref{d2p-lambda}). Further, Lemma \\ref{lemma_d34_homo} in the appendix shows that when $\\varrho=0$, $\\nabla_{\\lambda_\\mu^3} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) = \\alpha(1-\\alpha)(1-2\\alpha) \\nabla_{\\mu^3}f_k^*\/f_k^*$ and $\\nabla_{\\lambda_\\mu^4} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) = \\alpha(1-\\alpha)(1-6\\alpha+6\\alpha^2) \\nabla_{\\mu^4}f_k^*\/f_k^*$. 
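As with (\\ref{d3g2}), the expectations in (\\ref{d3g2-homo}) are Bernoulli($\\alpha$) moment computations; a symbolic check (a sketch; \\texttt{sympy} assumed available):

```python
# Symbolic check of (d3g2-homo): expectations of the homoscedastic d_{ik}
# over q ~ Bernoulli(alpha). Hypothetical standalone script; sympy assumed.
import sympy as sp

a, q = sp.symbols('alpha q')

def E(expr):
    # expectation over q ~ Bernoulli(alpha)
    return sp.expand(a*expr.subs(q, 1) + (1 - a)*expr.subs(q, 0))

d2 = (q - a)**2 - a*(1 - a)
d3 = (q - a)**3 - 3*(q - a)*a*(1 - a)
d4 = (q - a)**4 - 6*(q - a)**2*a*(1 - a) + 3*a**2*(1 - a)**2

assert sp.simplify(E(d2)) == 0
assert sp.simplify(E(d3) - a*(1 - a)*(1 - 2*a)) == 0
assert sp.simplify(E(d4) - a*(1 - a)*(1 - 6*a + 6*a**2)) == 0
```

Note that $\\mathbb{E}_{\\vartheta^*}[d_{3k}|\\overline{{\\bf Y}}_{0}^k]$ vanishes at $\\alpha=1\/2$, which is the source of the additional degeneracy handled below.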
Because $\\nabla_{\\lambda_\\mu^3} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})=0$ when $\\alpha=1\/2$ and $\\varrho=0$, we expand $\\ell_n(\\psi,\\pi,\\xi)$ four times and express it in terms of $\\varrho\\lambda_\\mu^2$, $(1-2\\alpha)\\lambda_\\mu^3$, $\\lambda_\\mu^4$, and other terms to establish a uniform approximation.\n\nCollect the relevant parameters as\n\\begin{equation} \\label{t-psi2-homo}\nt(\\psi,\\pi) :=\n\\begin{pmatrix}\n\\eta - \\eta^* \\\\\nt_{\\lambda}(\\lambda,\\pi)\n\\end{pmatrix} \\quad \\text{and}\\quad\nt_{\\lambda}(\\lambda,\\pi)\n:=\\alpha(1-\\alpha)\n\\begin{pmatrix}\n \\varrho \\lambda_\\mu^2 \\\\\n (1-2\\alpha) \\lambda_\\mu^3 \\\\\n (1-6\\alpha+6\\alpha^2)\\lambda_\\mu^4 \\\\\n \\lambda_\\beta \\lambda_\\mu\\\\\n v(\\lambda_\\beta)\n\\end{pmatrix}.\n\\end{equation}\nDefine the generalized score as\n\\begin{equation}\\label{score_normal_homo}\ns_{\\varrho k} : = \n\\begin{pmatrix}\ns_{\\eta k} \\\\\ns_{\\lambda \\varrho k}\n\\end{pmatrix},\n\\quad \\text{where}\\quad\ns_{\\eta k} : = \\begin{pmatrix}\n\\nabla_{\\gamma} f_k^* \/ f_k^* \\\\\n\\nabla_{\\theta} f_k^* \/ f_k^* \n\\end{pmatrix} \\\n\\text{ and }\\ \ns_{\\lambda \\varrho k}\n:= \n\\begin{pmatrix}\n\\zeta_{k,0}(\\varrho)\/2 \\\\\ns_{\\lambda_{\\mu}^3 k}\/3!\\\\\ns_{\\lambda_{\\mu}^4 k}\/4!\\\\\ns_{\\lambda_{\\beta \\mu} \\varrho k}\\\\\nV(s_{\\lambda_{\\beta\\beta} \\varrho k})\n\\end{pmatrix},\n\\end{equation}\nwhere $\\zeta_{k,m}(\\varrho)$ is defined as in (\\ref{d2p2}), $s_{\\lambda_{\\mu}^i k}:=\\nabla_{\\mu^i}f_k^*\/f_k^*$ for $i=3,4$, and\n$s_{\\lambda_{\\beta \\mu} \\varrho k}$ and $s_{\\lambda_{\\beta\\beta} \\varrho k}$ are defined as in (\\ref{score_lambda_beta}) but using the density (\\ref{normal-density-homo}) in place of (\\ref{normal-density}).\nDefine, with $q_\\beta := \\dim(\\beta)$ and $q_{\\lambda}:=3 + 
q_\\beta+q_\\beta(q_\\beta+1)\/2$,\n\\begin{equation}\n\\begin{aligned}\\label{Lambda-lambda-homo}\n&\\Lambda_{\\lambda }^1:=\\{ t_{\\lambda}= ( t_{\\varrho\\mu^2}, t_{\\mu^3}, t_{\\mu^4}, t_{\\beta\\mu}', t_{v(\\beta)}')' \\in \\mathbb{R}^{q_{\\lambda}} : (t_{\\varrho\\mu^2}, t_{\\mu^3}, t_{\\mu^4}, t_{\\beta\\mu}')'\\in \\mathbb{R}\\times\\mathbb{R}\\times\\mathbb{R}_{-}\\times\\mathbb{R}^{q_\\beta}, t_{v(\\beta)}=0\\}, \\quad \\\\\n&\\Lambda_{\\lambda \\varrho}^2:=\\{ t_{\\lambda}= ( t_{\\varrho\\mu^2}, t_{\\mu^3}, t_{\\mu^4}, t_{\\beta\\mu}', t_{v(\\beta)}')' \\in \\mathbb{R}^{q_{\\lambda}}: t_{\\varrho\\mu^2} = \\varrho \\lambda_{\\mu}^2, t_{\\mu^3}=t_{\\mu^4}=0, t_{\\beta\\mu }= \\lambda_{\\beta} \\lambda_\\mu, \\\\\n& \\qquad \\quad \n t_{v(\\beta)} =v_{\\beta}(\\lambda_{\\beta})\\ \\text{for some }\\lambda\\in \\mathbb{R}^{1+q_\\beta} \\}. \n\\end{aligned}\n\\end{equation}\n\nThe following two propositions correspond to Propositions \\ref{P-quadratic-N1} and \\ref{P-LR-N1}, establishing a uniform approximation of the log-likelihood ratio and asymptotic distribution of the LRTS.\n\\begin{assumption} \\label{A-nonsing2-homo}\n$0< \\inf_{\\varrho \\in \\Theta_{\\varrho}} \\lambda_{\\min}(\\mathcal{I}_\\varrho) \\leq \\sup_{\\varrho \\in \\Theta_{\\varrho}}\\lambda_{\\max}(\\mathcal{I}_\\varrho) < \\infty$ for $\\mathcal{I}_{\\varrho}= \\lim_{k\\rightarrow \\infty} \\mathbb{E}_{\\vartheta^*}(s_{\\varrho k}s_{\\varrho k}')$, where $s_{\\varrho k}$ is given in (\\ref{score_normal_homo}).\n\\end{assumption}\n\\begin{proposition} \\label{P-quadratic-N1-homo}\nSuppose Assumptions \\ref{assn_a1}, \\ref{assn_a2}, \\ref{assn_a4}, \\ref{A-consist}, and \\ref{A-nonsing2-homo} hold and the density of the $j$-th regime is given by (\\ref{normal-density-homo}). Then, statements (a) and (b) of Proposition \\ref{P-quadratic-N1} hold.\n\\end{proposition} \n\\begin{proposition} \\label{P-LR-N1-homo} Suppose that the assumptions in Proposition \\ref{P-quadratic-N1-homo} hold. 
Then, under the null hypothesis of $M=1$, $LR_n \\overset{d}{\\rightarrow} \\max\\{ \\mathbb{I}\\{\\varrho=0\\} (\\tilde t_{\\lambda }^1)' \\mathcal{I}_{\\lambda.\\eta 0} \\tilde t_{\\lambda }^1, \\sup_{\\varrho\\in\\Theta_\\varrho} (\\tilde t_{\\lambda \\varrho}^2)' \\mathcal{I}_{\\lambda.\\eta \\varrho} \\tilde t_{\\lambda \\varrho}^2 \\}$, where $\\tilde{t}_{\\lambda}^1$ and $\\tilde{t}_{\\lambda\\varrho}^2$ are defined as in (\\ref{t-lambda-N1}) but in terms of $(Z_{\\lambda \\varrho}, \\mathcal{I}_{\\lambda.\\eta \\varrho}, Z_{\\lambda 0},\\mathcal{I}_{\\lambda.\\eta 0})$ constructed with $s_{\\varrho k}$ defined in (\\ref{score_normal_homo}) and $\\Lambda_{\\lambda }^1$ and $\\Lambda_{\\lambda \\varrho}^2$ defined in (\\ref{Lambda-lambda-homo}).\n\\end{proposition} \n\n\\section{Testing $H_0:M=M_0$ against $H_A:M=M_0+1$ for $M_0 \\geq 2$}\\label{sec-general}\nIn this section, we derive the asymptotic distribution of the LRTS for testing the null hypothesis of $M_0$ regimes against the alternative of $M_0+1$ regimes for general $M_0 \\geq 2$. We suppress the covariate ${\\bf W}_{a}^b$ unless confusion might arise.\n\nLet $\\vartheta_{M_0}^*=((\\vartheta_{M_0,x}^*)',(\\vartheta_{M_0,y}^*)')'$ denote the parameter of the $M_0$-regime model, where $\\vartheta_{M_0,x}^*$ contains $p_{ij}^*= q_{\\vartheta^*_{M_0,x}}(i,j)>0$ for $i= 1,\\ldots,M_0$ and $j=1,\\ldots,M_0-1$, and $\\vartheta_{M_0,y}^* = ((\\theta_1^*)',\\ldots,(\\theta_{M_0}^*)',(\\gamma^*)')'$. We assume $\\max_{i}\\sum_{j=1}^{M_0-1}p_{ij}^*<1$ and $\\theta_{1}^*<\\ldots< \\theta_{M_0}^*$ for identification. 
\nThe true $M_0$-regime conditional density of ${\\bf Y}_1^n$ given $\\overline{\\bf Y}_{0}$ and $x_0$ is \n\\begin{equation} \\label{true_model}\np_{\\vartheta_{M_0}^*}({\\bf Y}_1^n| \\overline{\\bf Y}_{0},x_0) = \\sum_{{\\bf x}_1^n\\in \\mathcal{X}_{M_0}^n}\\prod_{k=1}^n p_{\\vartheta_{M_0}^*}(Y_k,x_k|\\overline{\\bf Y}_{k-1},x_{k-1}),\n\\end{equation} \nwhere $p_{\\vartheta_{M_0}^*} (y_k,x_k| \\overline{\\bf{y}}_{k-1},x_{k-1}) = g_{\\vartheta_{M_0,y}^*}(y_k|\\overline{\\bf{y}}_{k-1}, x_k) q_{\\vartheta_{M_0,x}^*}(x_{k-1},x_k)$ with $g_{\\vartheta_{M_0,y}^*}(y_k|\\overline{\\bf{y}}_{k-1},x_k) = \\sum_{j=1,\\ldots,M_0} \\mathbb{I}\\{x_k=j\\}f(y_k|\\overline{\\bf{y}}_{k-1};\\gamma^*,\\theta_j^*)$.\n \nLet the conditional density of ${\\bf Y}_{1}^n$ under an $(M_0+1)$-regime model be\n\\begin{equation} \\label{fitted_model}\np_{\\vartheta_{M_0+1}}({\\bf Y}_1^n| \\overline{\\bf Y}_{0},x_0) := \\sum_{{\\bf x}_1^n\\in \\mathcal{X}_{M_0+1}^n}\\prod_{k=1}^n p_{\\vartheta_{M_0+1}}(Y_k,x_k|\\overline{\\bf Y}_{k-1},x_{k-1}),\n\\end{equation}\nwhere $p_{\\vartheta_{M_0+1}}(y_k,x_k|\\overline{\\bf y}_{k-1},x_{k-1})$ is defined similarly to $p_{\\vartheta_{M_0}^*}(y_k,x_k|\\overline{\\bf y}_{k-1},x_{k-1})$ with $\\vartheta_{M_0+1,x}:= \\{p_{ij}\\}_{i=1,\\ldots,M_0+1,j=1,\\ldots,M_0}$ and $\\vartheta_{M_0+1,y} := (\\theta_1',\\ldots,\\theta_{M_0+1}',\\gamma')'$. 
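The sum over $M_0^n$ regime paths in (\\ref{true_model}) is never computed by brute force; it collapses to an $O(nM^2)$ forward recursion. A minimal numerical sketch with illustrative values (normal components with unit variance, and a fixed uniform initial distribution standing in for $\\xi_M$):

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 2, 200
P = np.array([[0.9, 0.1], [0.2, 0.8]])   # illustrative transition matrix
theta = np.array([-1.0, 1.0])            # illustrative regime intercepts

# simulate a regime path and observations (unit-variance normal components)
x = np.zeros(n, dtype=int)
for k in range(1, n):
    x[k] = rng.choice(M, p=P[x[k - 1]])
y = theta[x] + rng.standard_normal(n)

# forward recursion: the path sum in the likelihood becomes an O(n M^2) filter
dens = np.exp(-0.5*(y[:, None] - theta)**2)/np.sqrt(2*np.pi)  # f(y_k | x_k = j)
a = np.full(M, 1.0/M)*dens[0]            # uniform initial distribution
c = a.sum(); loglik = np.log(c); a /= c
for k in range(1, n):
    a = (a @ P)*dens[k]
    c = a.sum(); loglik += np.log(c); a /= c
```

The scaling constants $c$ accumulate the log likelihood while keeping the filter numerically stable.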
We assume that $\\min_{i,j}p_{ij} \\geq \\epsilon$ for some $\\epsilon\\in (0,1\/2)$.\n\nWrite the null hypothesis as $H_0 = \\cup_{m=1}^{M_0} H_{0m}$ with\n\\[\nH_{0m} : \\theta_1 < \\cdots < \\theta_{m} = \\theta_{m + 1} < \\cdots < \\theta_{M_0 + 1}.\n\\] \nDefine the set of values of $\\vartheta_{M_0+1}$ that yields the true density (\\ref{true_model}) under $\\mathbb{P}_{\\vartheta^*_{M_0}}$ as $\\Upsilon^*:=\\{\\vartheta_{M_0 + 1} \\in \\Theta_{M_0+1,\\epsilon}: p_{\\vartheta_{M_0+1}}({\\bf Y}_1^n| \\overline{\\bf Y}_{0}, x_0) = p_{\\vartheta_{M_0}^*}({\\bf Y}_1^n| \\overline{\\bf Y}_{0}, x_0)\\ \\mathbb{P}_{\\vartheta^*_{M_0}}\\text{-a.s.}\\}$. Under $H_{0m}$, the $(M_0 + 1)$-regime model (\\ref{fitted_model}) generates the true $M_0$-regime density (\\ref{true_model}) if $\\theta_m = \\theta_{m + 1} = \\theta_{m}^{*}$ and the transition matrix of $X_k$ reduces to that of the true $M_0$-regime model.\n\nWe reparameterize the transition probability of $X_k$ by writing $\\vartheta_{M_0+1,x}$ as $\\vartheta_{M_0+1,x} = (\\vartheta_{xm}',\\pi_{xm}')'$, where $\\vartheta_{xm}$ is point identified under $H_{0m}$, while $\\pi_{xm}$ is not point identified under $H_{0m}$. The transition probability of $X_k$ under $\\vartheta_{M_0+1,x}$ equals the transition probability of $X_k$ under $\\vartheta_{M_0,x}^*$ if and only if $\\vartheta_{xm} = \\vartheta_{xm}^*$. The detailed derivation including the definition of $\\vartheta_{xm}^*$ is provided in Section \\ref{subsec:p_m_repara} in the appendix. \nDefine the subset of $\\Upsilon^*$ that corresponds to $H_{0m}$ as \n\\begin{align*} \n\\Upsilon_{m}^*& := \\left\\{\\vartheta_{M_0 + 1} \\in \\Theta_{M_0 + 1}: \n\\theta_j=\\theta_{j}^*\\ \\text{for}\\ 1 \\leq j < m;\\ \\theta_m = \\theta_{m + 1} = \\theta_{m}^{*}; \\right.\\\\\n& \\qquad \\left. 
\\theta_{j} = \\theta_{j - 1}^*\\ \\text{for}\\ h+1 < j \\leq M_0+1;\\ \\gamma=\\gamma^*;\\ \\vartheta_{xm}=\\vartheta_{xm}^*\\right\\};\n\\end{align*} \nthen, $\\Upsilon^*= \\Upsilon_{1}^* \\cup \\cdots \\cup \\Upsilon_{M_0}^*$ holds. \n\nFor $M=M_0,M_0+1$, let $\\ell_n(\\vartheta_M,\\xi_M) := \\log \\left( \\sum_{x_0=1}^M p_{\\vartheta_M}({\\bf Y}_1^n| \\overline{\\bf Y}_{0},x_0) \\xi_M(x_0) \\right)$ denote the $M$-regime log-likelihood for a given initial distribution $\\xi_M(x_0) \\in \\Xi_M$. We treat $\\xi_M(x_0)$ as fixed. Let $\\hat\\vartheta_{M_0}:= \\arg\\max_{\\vartheta_{M_0} \\in \\Theta_{{M_0}}} \\ell_n(\\vartheta_{M_0},\\xi_{M_0})$ and $\\hat\\vartheta_{M_0+1}:= \\arg\\max_{\\vartheta_{M_0+1} \\in \\Theta_{M_0+1}} \\ell_n(\\vartheta_{M_0+1},\\xi_{M_0+1})$. The following proposition shows that the MLE is consistent in the sense that the distance between $\\hat{\\vartheta}_{M_0+1}$ and $\\Upsilon^*$ tends to 0 in probability. The proof of Proposition \\ref{P-consist_M} is essentially the same as the proof of Proposition \\ref{P-consist} and hence is omitted. \n\\begin{assumption} \\label{A-consist_M}\n(a) $\\Theta_{M_0}$ and $\\Theta_{M_0+1}$ are compact, and $\\vartheta_{M_0}^*$ is in the interior of $\\Theta_{M_0}$. (b) For all $(x,x') \\in \\mathcal{X}$ and all $(\\overline{{\\bf y}},y',w)\\in \\mathcal{Y}^s\\times \\mathcal{Y}\\times \\mathcal{W}$, $f(y'|\\overline{\\bf{y}}_0,w;\\gamma,\\theta)$ is continuous in $(\\gamma,\\theta)$. (c) $\\mathbb{E}_{\\vartheta^*_{M_0}}[ \\log ( p_{\\vartheta_{M_0}}(Y_1 |\\overline{\\bf{Y}}_{-m}^0,{\\bf W}_{-m}^1)] = \\mathbb{E}_{\\vartheta^*_{M_0}}[ \\log p_{\\vartheta_{M_0}^*}(Y_1 |\\overline{\\bf{Y}}_{-m}^0,{\\bf W}_{-m}^1) ]$ for all $m \\geq 0$ if and only if $\\vartheta_{M_0} = \\vartheta_{M_0}^*$. 
(d) $\\mathbb{E}_{\\vartheta^*_{M_0}}[ \\log ( p_{\\vartheta_{M_0+1}}(Y_1 |\\overline{\\bf{Y}}_{-m}^0,{\\bf W}_{-m}^0) ] = \\mathbb{E}_{\\vartheta^*_{M_0}} [\\log p_{\\vartheta_{M_0}^*}(Y_1 |\\overline{\\bf{Y}}_{-m}^0,{\\bf W}_{-m}^1) ]$ for all $m \\geq 0$ if and only if $\\vartheta_{M_0+1}\\in \\Upsilon^*$. \\end{assumption}\n\n\\begin{proposition} \\label{P-consist_M} \nSuppose Assumptions \\ref{assn_a1}, \\ref{assn_a2}, and \\ref{A-consist_M} hold. Then, under the null hypothesis of $M=M_0$, $\\hat\\vartheta_{M_0} \\overset{p}{\\rightarrow} \\vartheta_{M_0}^*$ and $\\inf_{\\vartheta_{M_0+1} \\in \\Upsilon^*} |\\hat{\\vartheta}_{M_0+1}-\\vartheta_{M_0+1}|\\overset{p}{\\rightarrow} 0$.\n\\end{proposition}\n\nLet $LR_{M_0,n}:=2[\\ell_n(\\hat\\vartheta_{M_0+1},\\xi_{M_0+1})-\\ell_n(\\hat\\vartheta_{M_0},\\xi_{M_0})]$ denote the LRTS for testing $H_0:M=M_0$ against $H_A:M=M_0+1$.\nWe proceed to derive the asymptotic distribution of the LRTS by analyzing the behavior of the LRTS when $\\vartheta_{M_0+1} \\in \\Upsilon^*_m$ for each $m$. Define $J_m :=\\{m,m+1\\}$. Observe that if ${\\bf X}_1^k \\in J_m^k$, then ${\\bf X}_1^k$ follows a two-state Markov chain on $J_m$ whose transition probability is characterized by $\\alpha_m := \\mathbb{P}_{\\vartheta_{M_0+1}}(X_k=m|X_k \\in J_m)$ and $\\varrho_m := \\text{corr}_{\\vartheta_{M_0+1}}(X_{k-1},X_{k}|(X_{k-1},X_{k}) \\in J_m^2)$. Collect reparameterized $\\pi_{xm}$ into $\\pi_{xm} := (\\varrho_m,\\alpha_m, \\phi_{m}')'$, where $\\phi_m$ does not affect the transition probability of ${\\bf X}_1^k$ when ${\\bf X}_1^k \\in J_m^{k}$. See Section \\ref{subsec:p_m_repara} in the appendix for the detailed derivation.\n \nDefine $q_{kj} := \\mathbb{I}\\{X_k=j\\}$; then, we can write $\\alpha_m$ and $\\varrho_m$ as $\\alpha_m = \\mathbb{E}_{\\vartheta_{M_0+1}}(q_{km}|X_k \\in J_m)$ and $\\varrho_m =\\text{corr}_{\\vartheta_{M_0+1}}(q_{k-1,m},q_{km}|(X_{k-1},X_{k}) \\in J_m^2)$. 
Because $\\overline{{\\bf Y}}_{-\\infty}^\\infty$ provides no information for distinguishing between $X_k=m$ and $X_k={m+1}$ if $\\theta_m = \\theta_{m+1}$, we can write $\\alpha_m$ and $\\varrho_m$ as\n\\begin{equation} \\label{alpha_rho_h}\n\\alpha_m = \\mathbb{E}_{\\vartheta_{M_0+1}}(q_{km}|X_k \\in J_m,\\overline{{\\bf Y}}_{-\\infty}^\\infty )\\quad \\text{and}\\quad\n\\varrho_m = \\text{corr}_{\\vartheta_{M_0+1}}(q_{k-1,m},q_{km}|(X_{k-1},X_{k}) \\in J_m^2,\\overline{{\\bf Y}}_{-\\infty}^\\infty). \n\\end{equation}\n\n\\subsection{Non-normal distribution} \\label{sec:M-nonnormal}\n\nFor non-normal component distributions, consider the following reparameterization similar to (\\ref{repara}):\n\\begin{equation*\n\\begin{pmatrix}\n\\theta_m\\\\\n\\theta_{m+1} \\\\\n\\end{pmatrix}\n=\n\\begin{pmatrix}\n\\nu_m + (1-\\alpha_m) \\lambda_m\\\\\n\\nu_m- \\alpha_m\\lambda_m\\\\\n\\end{pmatrix}. \n\\end{equation*} \nCollect the reparameterized identified parameters into one vector $\\psi_m: = (\\eta_m',\\lambda_m')'$, where $\\eta_m=(\\gamma',\\{\\theta_j'\\}_{j=1}^{m-1}, \\nu_m',\\{\\theta_j'\\}_{j=m+2}^{M_0+1},\\vartheta_{xm}')'$, so that the reparameterized $(M_0+1)$-regime log-likelihood function is $\\ell_n(\\psi_m,\\pi_{xm},\\xi_{M_0+1})$. Let $\\psi_m^{*}=(\\eta_m^*,\\lambda_m^*)=((\\vartheta_{M_0}^*)',0')'$ denote the value of $\\psi_{m}$ under $H_{0m}$. Define the reparameterized conditional density of $y_k$ as \n\\begin{align*}\ng_{\\psi_m}(y_k|\\overline{\\bf{y}}_{k-1}, x_k) & : = \\mathbb{I}\\{x_k \\in J_m\\} f(y_k|\\overline{\\bf{y}}_{k-1};\\gamma,\\nu_m + (q_{km}-\\alpha_m) \\lambda_m) + \\sum_{j \\in \\overline J_m} q_{kj} f(y_k|\\overline{\\bf{y}}_{k-1}; \\gamma,\\theta_j),\n\\end{align*}\nwhere $\\overline J_m := \\{1,\\ldots,M_0+1\\}\\setminus J_m$. Let $f_{mk}^{*}$ denote $f(Y_k|\\overline{\\bf{Y}}_{k-1};\\gamma^*,\\theta_m^*)$. 
It follows from (\\ref{alpha_rho_h}) and the law of iterated expectations that\n\\begin{equation} \\label{qk_moments_M}\n\\begin{aligned}\n&\\mathbb{E}_{\\vartheta^*_{M_0}}\\left[\\frac{\\mathbb{I}\\{X_k \\in J_m\\}(q_{km} -\\alpha_m)}{g_{\\psi_m^*}(Y_k|\\overline{\\bf{Y}}_{k-1}, X_k)} \\middle|\\overline{{\\bf Y}}_{-\\infty}^n\\right] \\\\ \n&=\\mathbb{E}_{\\vartheta^*_{M_0}}\\left[\\mathbb{E}_{\\vartheta^*_{M_0}}\\left[\\frac{ q_{km} -\\alpha_m }{f_{mk}^*}\\middle|X_k \\in J_m,\\overline{{\\bf Y}}_{-\\infty}^n\\right] \\mathbb{I}\\{X_k \\in J_m\\} \\middle|\\overline{{\\bf Y}}_{-\\infty}^n\\right] = 0, \\\\\n&\\mathbb{E}_{\\vartheta^*_{M_0}}\\left[\\frac{\\mathbb{I}\\{X_{t_1} \\in J_m\\}\\mathbb{I}\\{X_{t_2} \\in J_m\\}(q_{t_1 m}-\\alpha_m) (q_{t_2 m}-\\alpha_m)}{g_{\\psi_m^*}(Y_{t_1}|\\overline{\\bf{Y}}_{{t_1}-1}, X_{t_1})g_{\\psi_m^*}(Y_{t_2}|\\overline{\\bf{Y}}_{{t_2}-1}, X_{t_2})} \\middle| \\overline{\\bf Y}_{-\\infty}^n \\right] \\\\\n& = \\mathbb{E}_{\\vartheta^*_{M_0}}\\left[\\mathbb{E}_{\\vartheta^*_{M_0}}\\left[\\frac{(q_{t_1 m}-\\alpha_m) (q_{t_2 m}-\\alpha_m)}{f_{m t_1}^* f_{m t_2}^*} \\middle|{\\bf X}_{t_1}^{t_2} \\in J_m^{t_2-t_1+1},\\overline{{\\bf Y}}_{-\\infty}^n\\right] \\mathbb{I}\\{(X_{t_1},X_{t_2}) \\in J_m^2\\} \\middle|\\overline{{\\bf Y}}_{-\\infty}^n \\right] \\\\\n& = \\frac{\\alpha_m(1-\\alpha_m)\\varrho_m^{t_2-t_1}}{f_{m t_1}^* f_{m t_2}^*} \\mathbb{P}_{\\vartheta^*_{M_0}}((X_{t_1},X_{t_2}) \\in J_m^2|\\overline{{\\bf Y}}_{-\\infty}^n), \\quad t_2 \\geq t_1,\n\\end{aligned}\n\\end{equation} \nwhere the second equality holds because $g_{\\psi_m^*}(Y_k|\\overline{\\bf{Y}}_{k-1}, X_k)=f_{mk}^*$ if $X_k \\in J_m$, and the last equality holds because, conditional on $\\{{\\bf X}_{t_1}^{t_2} \\in J_m^{t_2-t_1+1},\\overline{\\bf Y}_{-\\infty}^n\\} $, ${\\bf X}_{t_1}^{t_2}$ is a two-state stationary Markov process with parameter $(\\alpha_m,\\varrho_m)$.\n\nLet $g_{0k}^*$, $q_{0k}^*$, and $\\overline p_{0k}^*$ denote 
$g_{\\vartheta_{M_0,y}^*}(Y_k,X_k| \\overline{{\\bf Y}}_{k-1},X_{k-1})$, $q_{\\vartheta_{M_0,x}^*}(X_{k-1},X_k)$, and $\\overline p_{\\vartheta_{M_0}^*}(Y_k| \\overline{{\\bf Y}}_{0}^{k-1})$. Let $\\nabla g_{0k}^*$ denote the derivative of $g_{\\vartheta_{M_0,y}}(Y_k,X_k| \\overline{{\\bf Y}}_{k-1},X_{k-1})$ evaluated at $\\vartheta_{M_0,y}^*$, and define $\\nabla q_{0k}^*$ and $\\nabla \\overline p_{0k}^*$ similarly. Repeating a derivation similar to (\\ref{dg})--(\\ref{d2p}) but using (\\ref{qk_moments_M}) in place of (\\ref{qk_moments}), we obtain \n\\begin{equation} \\label{d1p_M}\n\\begin{aligned}\n& \\nabla_{\\eta_m} \\overline p_{\\psi^*_m \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^*_m \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) \\\\\n& =\n \\sum_{t=1}^k \\mathbb{E}_{\\vartheta^*} \\left[\\nabla_{\\vartheta_{M_0}} \\log (g_{0t}^*q_{0t}^*) \\middle|\\overline{{\\bf Y}}_{0}^k \\right] - \\sum_{t=1}^{k-1} \\mathbb{E}_{\\vartheta^*} \\left[\\nabla_{\\vartheta_{M_0}} \\log (g_{0t}^* q_{0t}^*)\\middle|\\overline{{\\bf Y}}_{0}^{k-1} \\right] \\\\\n& = \\nabla_{\\vartheta_{M_0}} \\overline p_{\\vartheta_{M_0}^*} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\vartheta_{M_0}^*} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}), \n\\end{aligned}\n\\end{equation} \n\\begin{align}\n& \\nabla_{\\lambda_m} \\overline p_{\\psi^*_m \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) \/ \\overline p_{\\psi^*_m \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) =0, \\quad \\nabla_{{\\lambda_m} \\eta_m'} \\overline p_{\\psi^*_m \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^*_m \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})=0, \\label{d2p_M0} \\\\\n& \\frac{\\nabla_{{\\lambda_m} {\\lambda_m} '} \\overline p_{\\psi^*_m \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})}{\\overline p_{\\psi^*_m \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})}\n = \\alpha_m(1-\\alpha_m) \\frac{ \\nabla_{\\theta\\theta'}f_{mk}^{*}}{f_{mk}^{*}} \\mathbb{P}_{\\vartheta^*_{M_0}}(X_k \\in 
J_m|\\overline{{\\bf Y}}_{0}^k) \\nonumber \\\\\n& \\quad + \\alpha_m(1-\\alpha_m) \\sum_{t=1}^{k-1} \\varrho_{m}^{k-t} \\left( \\frac{\\nabla_{\\theta} f_{mt}^{*}}{f_{mt}^{*}} \\frac{\\nabla_{\\theta'} f_{mk}^{*}}{f_{mk}^{*}} + \\frac{\\nabla_{\\theta} f_{mk}^{*}}{f_{mk}^{*}} \\frac{\\nabla_{\\theta'} f_{mt}^{*}}{f_{mt}^{*}} \\right) \\mathbb{P}_{\\vartheta^*_{M_0}}((X_t,X_k) \\in J_m^2|\\overline{{\\bf Y}}_{0}^k).\\label{d2p_M}\n\\end{align} \n\nDefine $\\tilde \\varrho := (\\varrho_1,\\ldots,\\varrho_{M_0})'$, define $t_{\\lambda}(\\lambda_m,\\pi_m)$ as $t_{\\lambda}(\\lambda,\\pi)$ in (\\ref{score}) by replacing $(\\lambda,\\pi)$ with $(\\lambda_m,\\pi_m)$, and let\n\\begin{equation} \\label{stilde}\nt(\\psi_m,\\pi_m) := \n\\begin{pmatrix}\n\\eta_m - \\eta^* \\\\\n t_{\\lambda}(\\lambda_m,\\pi_m) \n\\end{pmatrix}, \\ \n\\tilde{s}_{\\tilde\\varrho k} :=\n\\begin{pmatrix}\n\\tilde {s}_{\\eta k} \\\\\n\\tilde {s}_{\\lambda \\tilde\\varrho k}\n\\end{pmatrix},\\ \\text{ where }\\\n\\tilde s_{\\eta k} : = \n\\frac{\\nabla_{\\eta_m} \\overline p_{\\psi^*_m \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1}) }{\\overline p_{\\psi^*_m \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})},\n\\ \n\\tilde{s}_{\\lambda \\tilde\\varrho k} :=\n\\begin{pmatrix}\n{s}_{\\lambda \\varrho_1 k}^1 \\\\\n\\vdots \\\\\n{s}_{\\lambda\\varrho_{M_0} k}^{M_0}\n\\end{pmatrix},\n\\end{equation}\nand $s_{\\lambda \\varrho_m k}^m := V(s_{\\lambda\\lambda \\varrho_m k}^m)$, where $s_{\\lambda\\lambda \\varrho_m k}^m $ is defined similarly to (\\ref{score_lambda}) as\n\\begin{equation} \\label{score_lambda_M}\n\\begin{aligned}\ns_{\\lambda\\lambda\\varrho_m k}^m &: = \\frac{\\nabla_{\\theta \\theta'} f_{mk}^{*}}{f_{mk}^{*}} \\mathbb{P}_{\\vartheta^*_{M_0}} (X_k \\in J_m|\\overline{{\\bf Y}}_{0}^k) \\\\\n& + \\sum_{t=1}^{k-1} \\varrho_m ^{k-t} \\left( \\frac{\\nabla_{\\theta} f_{mt}^{*}}{f_{mt}^{*}} \\frac{\\nabla_{\\theta'} f_{mk}^{*}}{f_{mk}^{*}} + \\frac{\\nabla_{\\theta} f_{mk}^{*}}{f_{mk}^{*}} 
\\frac{\\nabla_{\\theta'} f_{mt}^{*}}{f_{mt}^{*}} \\right) \\mathbb{P}_{\\vartheta^*_{M_0}} ((X_t,X_k) \\in J_m^2|\\overline{{\\bf Y}}_{0}^k).\n\\end{aligned}\n\\end{equation}\nSimilarly to (\\ref{I_lambda}), define \n\\begin{equation} \\label{I_tilde_lambda}\n\\begin{aligned}\n\\tilde {\\mathcal{I}}_{\\eta} &:= \\mathbb{E}_{\\vartheta^*_{M_0}}(\\tilde s_{\\eta k} \\tilde s_{\\eta k}'), \\quad \\tilde{\\mathcal{I}}_{\\lambda\\tilde \\varrho_1\\tilde \\varrho_2} := \\lim_{k\\rightarrow\\infty} \\mathbb{E}_{\\vartheta^*_{M_0}}(\\tilde s_{\\lambda\\tilde \\varrho_1 k} \\tilde s_{\\lambda\\tilde \\varrho_2 k}'), \\quad\n\\tilde{\\mathcal{I}}_{\\lambda\\eta \\tilde \\varrho } := \\lim_{k\\rightarrow\\infty} \\mathbb{E}_{\\vartheta^*_{M_0}}(\\tilde s_{\\lambda \\tilde \\varrho k} \\tilde s_{\\eta k}'), \\\\ \\tilde{\\mathcal{I}}_{\\eta \\lambda \\tilde \\varrho} &:= \\tilde{\\mathcal{I}}_{\\lambda \\eta \\tilde \\varrho}', \\quad \\tilde{\\mathcal{I}}_{\\lambda.\\eta \\tilde\\varrho_1\\tilde\\varrho_2}:= \\tilde{\\mathcal{I}}_{\\lambda \\tilde\\varrho_1\\tilde\\varrho_2}-\\tilde{\\mathcal{I}}_{\\lambda\\eta \\tilde\\varrho_1}\\tilde{\\mathcal{I}}_{\\eta}^{-1}\\tilde{\\mathcal{I}}_{\\eta\\lambda \\tilde\\varrho_2},\\quad \\tilde{\\mathcal{I}}_{\\lambda.\\eta \\varrho_m}^m := \\mathbb{E}_{\\vartheta^*_{M_0}}[G_{\\lambda.\\eta \\varrho_m}^m (G_{\\lambda.\\eta \\varrho_m}^m)'],\\\\\nZ^m_{\\lambda \\varrho_m} &:= (\\tilde{\\mathcal{I}}_{\\lambda.\\eta \\varrho_m}^m)^{-1}G_{\\lambda.\\eta \\varrho_m}^m,\n\\end{aligned}\n\\end{equation}\nwhere ${G}_{\\lambda.\\eta \\tilde \\varrho}=((G_{\\lambda.\\eta \\varrho_1}^1)',\\ldots,(G_{\\lambda.\\eta\\varrho_{M_0}}^{M_0})')'$ is an $M_0 q_\\lambda$-vector mean zero Gaussian process with \\\\$\\text{cov}({G}_{\\lambda.\\eta \\tilde\\varrho_1},{G}_{\\lambda.\\eta \\tilde \\varrho_2}) = \\tilde{\\mathcal{I}}_{\\lambda.\\eta \\tilde\\varrho_1\\tilde\\varrho_2}$. 
Note that $G_{\\lambda.\\eta \\tilde \\varrho}$ corresponds to the residuals from projecting $\\tilde s_{\\lambda \\tilde \\varrho k}$ on $\\tilde s_{\\eta k}$. Define $\\tilde t_{\\lambda \\varrho_m}^m$ by\n\\begin{equation*}\ng_{\\lambda \\varrho_m}^m(\\tilde t_{\\lambda \\varrho_m}^m) = \\inf_{t_\\lambda \\in v(\\mathbb{R}^q)}g_{\\lambda \\varrho_m}^m(t_{\\lambda }), \\quad g_{\\lambda \\varrho_m}^m(t_{\\lambda}) := (t_{\\lambda} -Z_{\\lambda \\varrho_m}^m)' \\tilde{\\mathcal{I}}_{\\lambda.\\eta \\varrho_m}^m (t_{\\lambda} -Z_{\\lambda \\varrho_m}^m). \n\\end{equation*}\nThe following proposition gives the asymptotic null distribution of the LRTS. Under the stated assumptions, the log-likelihood function permits a quadratic approximation in the neighborhood of $\\Upsilon_{m}^*$ similar to the one in Proposition \\ref{P-quadratic}. Define $A_{n\\varepsilon c}^m(\\xi) := \\{\\vartheta_{M_0+1} \\in \\Theta_{M_0+1} : \\{ \\ell_n(\\psi_m,\\pi_m,\\xi) - \\ell_n(\\psi_m^*,\\pi_m,\\xi) \\geq 0 \\} \\land |t(\\psi_m,\\pi_m)|< \\varepsilon \\} \\cup \\mathcal{N}_{c\/\\sqrt{n}}$. Under $H_0: M=M_0$, for any $c>0$, for $m=1,\\ldots,M_0$, and uniformly in $\\xi \\in \\Xi$ and $\\vartheta_{M_0+1} \\in A_{n\\varepsilon c}^m(\\xi)$,\n\\[\n\\ell_n(\\psi_m,\\pi_m,\\xi) - \\ell_n(\\psi_m^*,\\pi_m,\\xi) \\ - \\sqrt{n}t(\\psi_m,\\pi_m)' \\nu_n (s_{\\varrho_m k}) + n t(\\psi_m,\\pi_m)' \\mathcal{I}_{\\varrho_m} t(\\psi_m,\\pi_m)\/2 = o_{p \\varepsilon} (1),\n\\]\nwhere $s_{ \\varrho_m k} := (\\tilde{s}_{\\eta k}',({s}_{\\lambda \\varrho_m k}^m)')'$ and $\\mathcal{I}_{\\varrho_m} = \\lim_{k\\rightarrow\\infty} \\mathbb{E}_{\\vartheta^*_{M_0}}(s_{ \\varrho_m k}s_{ \\varrho_m k}')$. Consequently, the LRTS is asymptotically distributed as the maximum of the $M_0$ random variables, each of which represents the asymptotic distribution of the LRTS that tests $H_{0m}$. 
Denote the parameter space for $\\varrho_m$ by $\\Theta_{\\varrho_m}$, and let $\\tilde\\Theta_{\\varrho}:=\\Theta_{\\varrho_1}\\times\\ldots\\times\\Theta_{\\varrho_{M_0}}$.\n\\begin{assumption} \\label{A-nonsing_M}\n$0< \\inf_{\\tilde \\varrho \\in \\tilde\\Theta_{\\varrho}} \\lambda_{\\min}(\\tilde{\\mathcal{I}}_{\\tilde\\varrho}) \\leq \\sup_{\\tilde \\varrho \\in \\tilde\\Theta_{\\varrho}} \\lambda_{\\max}(\\tilde{\\mathcal{I}}_{\\tilde\\varrho}) < \\infty$ for $\\tilde{\\mathcal{I}}_{\\tilde\\varrho}:= \\lim_{k\\rightarrow\\infty}\\mathbb{E}_{\\vartheta^*_{M_0}}(\\tilde{s}_{\\tilde\\varrho k}\\tilde{s}_{\\tilde\\varrho k}')$, where $\\tilde{s}_{\\tilde\\varrho k}$ is given in (\\ref{stilde}).\n\\end{assumption}\n\\begin{proposition} \\label{P-LR_M}\nSuppose Assumptions \\ref{assn_a1}, \\ref{assn_a2}, \\ref{assn_a4}, \\ref{A-consist_M}, and \\ref{A-nonsing_M} hold. Then, under $H_0: M=M_0$, $LR_{M_0,n}\\overset{d}{\\rightarrow} \\max_{m=1,\\ldots,M_0}\\left\\{ \\sup_{\\varrho_m \\in \\Theta_{\\varrho_m}} \\left( (\\tilde t_{\\lambda \\varrho_m}^m)' \\tilde{\\mathcal{I}}_{\\lambda.\\eta \\varrho_m}^m \\tilde t_{\\lambda \\varrho_m}^m \\right) \\right\\} $.\n\\end{proposition}\n\n\\subsection{Heteroscedastic normal distribution}\\label{subsec:hetero_normal_M}\n\nAs in Section \\ref{subsec:hetero_normal}, we assume that $Y_k \\in \\mathbb{R}$ in the $j$-th regime follows a normal distribution with regime-specific intercept and variance whose density is given by (\\ref{normal-density}). 
Consider the following reparameterization similar to (\\ref{repara2}):\n\\begin{equation*}\n\\left(\n\\begin{array}{c}\n\\zeta_m\\\\\n\\zeta_{m+1}\\\\\n\\sigma_m^2\\\\\n\\sigma_{m+1}^2\n\\end{array}\n\\right) =\\left(\n\\begin{array}{c}\n\\nu_{\\zeta m} + (1-\\alpha_m)\\lambda_{\\zeta m} \\\\\n\\nu_{\\zeta m} -\\alpha_m\\lambda_{\\zeta m}\\\\\n\\nu_{\\sigma m} + (1- \\alpha_m)(2\\lambda_{\\sigma m}+ C_1 \\lambda_{\\mu m}^2)\\\\\n\\nu_{\\sigma m} - \\alpha_m(2\\lambda_{\\sigma m}+ C_2 \\lambda_{\\mu m}^2)\n\\end{array}\n\\right),\n\\end{equation*}\nwhere $\\nu_{\\zeta m}=(\\nu_\\mu,\\nu_{\\beta}')'$, $\\lambda_{\\zeta m}=(\\lambda_{\\mu m},\\lambda_{\\beta m}')'$, $C_1 := -(1\/3)(1 + \\alpha_m)$, and $C_2 := (1\/3)(2 - \\alpha_m)$. As in Section \\ref{sec:M-nonnormal}, we collect the reparameterized identified parameters into $\\psi_m: = (\\eta_m',\\lambda_m')'$, where $\\eta_m=(\\gamma',\\{\\theta_j'\\}_{j=1}^{m-1}, \\nu_{\\zeta m}', \\nu_{\\sigma m},\\{\\theta_j'\\}_{j=m+2}^{M_0+1},\\vartheta_{xm}')'$ and $\\lambda_m:=(\\lambda_{\\zeta m}',\\lambda_{\\sigma m})'$. Similar to (\\ref{repara_g_hetero}), define the reparameterized conditional density of $y_k$ as\n\\begin{equation*}\n\\begin{aligned}\n& g_{\\psi_m}(y_k|\\overline{\\bf{y}}_{k-1}, x_k) = \\sum_{j \\in \\overline J_m} q_{kj} f(y_k|\\overline{\\bf{y}}_{k-1}; \\gamma,\\theta_j) \\\\\n& \\quad + \\mathbb{I}\\{x_k \\in J_m\\} f\\left(y_k|\\overline{\\bf{y}}_{k-1};\\gamma,\\nu_{\\zeta m}+(q_{km}-\\alpha_m)\\lambda_{\\zeta m}, \\nu_{\\sigma m} + (q_{km}-\\alpha_m)(2\\lambda_{\\sigma m} + (C_2-q_{km}) \\lambda_{\\mu m}^2 )\\right).\n\\end{aligned} \n\\end{equation*} \nLet $g_{mk}^*$, $f_{mk}^*$, $\\nabla g_{mk}^*$, and $\\nabla f_{mk}^*$ denote $g_{\\psi_m^{*}}(Y_k|\\overline{\\bf{Y}}_{k-1}, X_k)$, $f(Y_k|\\overline{\\bf Y}_{k-1};\\gamma^*,\\theta_m^*)$, $\\nabla g_{\\psi_m^{*}}(Y_k|\\overline{\\bf{Y}}_{k-1}, X_k)$, and $\\nabla f(Y_k|\\overline{\\bf Y}_{k-1};\\gamma^*,\\theta_m^*)$. 
From (\\ref{d3g}) and a derivation similar to (\\ref{qk_moments_M}), we obtain the following result that corresponds to (\\ref{d3g2}) in testing homogeneity:\n\\begin{equation} \\label{d3g2_M}\n\\begin{aligned}\n\\mathbb{E}_{\\vartheta^*_{M_0}} \\left[ \\nabla_{\\lambda_{\\mu m}^i} g_{mk}^{*} \/ g_{mk}^* \\middle|\\overline{{\\bf Y}}_{-\\infty}^k \\right] &= 0,\\quad i = 1,2,3,\\\\\n\\mathbb{E}_{\\vartheta^*_{M_0}} \\left[ \\nabla_{\\lambda_{\\mu m}^4} g_{mk}^{*} \/ g_{mk}^{*} \\middle| \\overline{{\\bf Y}}_{-\\infty}^k \\right] &= \\alpha_m(1 - \\alpha_m) b(\\alpha_m)( \\nabla_{\\mu^4}f_{mk}^* \/ f_{mk}^* )\\mathbb{P}_{\\vartheta^*_{M_0}}(X_k \\in J_m|\\overline{{\\bf Y}}_{-\\infty}^k) \\\\\n&= b(\\alpha_m) \\mathbb{E}_{\\vartheta^*_{M_0}}\\left[ \\nabla_{\\lambda_{\\sigma m}^2} g_{mk}^{*} \/ g_{mk}^{*} \\middle|\\overline{{\\bf Y}}_{-\\infty}^k\\right].\n\\end{aligned}\n\\end{equation}\nRepeating the calculation leading to (\\ref{d1p_M})--(\\ref{d2p_M}) and using (\\ref{d3g2_M}) gives the following. 
First, (\\ref{d1p_M}) and (\\ref{d2p_M0}) still hold; second, the elements of $\\nabla_{\\lambda_m\\lambda_m'} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})\/\\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})$ except for the $(1,1)$-th element are given by (\\ref{d2p_M}) while adjusting the derivative with respect to $\\lambda_{\\sigma m}$ by multiplying by 2; third,\n\\begin{equation*}\n\\frac{\\nabla_{\\lambda_{\\mu m}^2} \\overline p_{\\psi^*_m \\pi}(Y_k| \\overline{{\\bf Y}}_{0}^{k-1})}{\\overline p_{\\psi^*_m \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})} = \\alpha_m(1-\\alpha_m) \\sum_{t=1}^{k-1} \\varrho_m^{k-t} \\left( 2 \\frac{\\nabla_{\\mu} f_{mt}^*}{f_{mt}^*} \\frac{\\nabla_{\\mu} f_{mk}^*}{f_{mk}^*}\\right)\\mathbb{P}_{\\vartheta^*_{M_0}}((X_t,X_k) \\in J_m^2|\\overline{{\\bf Y}}_{0}^k).\n\\end{equation*} \n\nFor $m \\geq 0$, define $\\zeta_{k,m}^m(\\varrho_m):= \\sum_{t=-m+1}^{k-1} \\varrho_m^{k-t-1} 2 (\\nabla_{\\mu} f_{mt}^* \\nabla_{\\mu} f_{mk}^* \/ f_{mt}^* f_{mk}^*) \\mathbb{P}_{\\vartheta^*_{M_0}}((X_t,X_k) \\in J_m^2|\\overline{{\\bf Y}}_{0}^k)$. 
Similarly to (\\ref{score_lambda_beta}), define the elements of the generalized score as \n\\begin{equation}\n\\begin{aligned}\n&\n\\begin{pmatrix}\n * & s_{ \\lambda_{\\mu\\beta}\\varrho_m k}^m & s_{\\lambda_{\\mu\\sigma} \\varrho_m k}^m \\\\\ns_{ \\lambda_{\\beta\\mu}\\varrho_m k}^m & s_{ \\lambda_{\\beta\\beta}\\varrho_m k}^m & s_{ \\lambda_{\\beta\\sigma}\\varrho_m k}^m \\\\\ns_{ \\lambda_{\\sigma\\mu}\\varrho_m k}^m & s_{ \\lambda_{\\beta\\sigma}\\varrho_m k}^m & s_{ \\lambda_{\\sigma\\sigma}\\varrho_m k}^m \n\\end{pmatrix} : = \\frac{\\nabla_{\\theta \\theta'} f_{mk}^{*}}{f_{mk}^{*}} \\mathbb{P}_{\\vartheta^*_{M_0}} (X_k \\in J_m|\\overline{{\\bf Y}}_{0}^k) \\\\\n& \\qquad + \\sum_{t=1}^{k-1} \\varrho_m ^{k-t} \\left( \\frac{\\nabla_{\\theta} f_{mt}^{*}}{f_{mt}^{*}} \\frac{\\nabla_{\\theta'} f_{mk}^{*}}{f_{mk}^{*}} + \\frac{\\nabla_{\\theta} f_{mk}^{*}}{f_{mk}^{*}} \\frac{\\nabla_{\\theta'} f_{mt}^{*}}{f_{mt}^{*}} \\right) \\mathbb{P}_{\\vartheta^*_{M_0}} ((X_t,X_k) \\in J_m^2|\\overline{{\\bf Y}}_{0}^k).\n\\end{aligned}\n\\end{equation}\nSimilarly to (\\ref{score_normal}), define $\\tilde{s}_{\\tilde \\varrho k}$ as in (\\ref{stilde}) by redefining $s_{\\lambda \\varrho_m k}^m$ in (\\ref{stilde}) as\n\\begin{equation} \\label{stilde_normal}\ns_{\\lambda \\varrho_m k}^m\n:= \\begin{pmatrix}\n\\zeta_{k,0}^m(\\varrho_m)\/2 &\n2 s_{\\lambda_{\\mu\\sigma} \\varrho_m k}^m &\n2 s_{\\lambda_{\\sigma\\sigma} \\varrho_m k}^m&\n(s_{\\lambda_{\\beta \\mu} \\varrho_m k}^m)' &\n2 (s_{\\lambda_{\\beta \\sigma} \\varrho_m k}^m)' &\nV(s_{\\lambda_{\\beta\\beta} \\varrho _h k}^m)'\n\\end{pmatrix} '. \n\\end{equation}\nDefine ${\\mathcal{I}}_{\\lambda.\\eta \\varrho_m}^m$ and $Z^m_{\\lambda \\varrho_m}$ as in (\\ref{I_tilde_lambda}) with $s_{\\lambda \\varrho_m k}^m$ defined in (\\ref{stilde_normal}). Let $Z_{\\lambda 0}^m$ and $\\mathcal{I}_{\\lambda.\\eta 0}^m$ denote $Z_{\\lambda \\varrho_m}^m$ and $\\mathcal{I}_{\\lambda.\\eta \\varrho_m}^m$ evaluated at $\\varrho_m=0$. 
Define $\\Lambda_\\lambda^1$ as in (\\ref{Lambda-lambda}), and define $\\Lambda_{\\lambda \\varrho_m}^2$ as in (\\ref{Lambda-lambda}) by replacing $\\varrho$ with $\\varrho_m$. Similar to (\\ref{t-lambda-N1}), define $\\tilde{t}_{\\lambda}^{m1}$ and $\\tilde{t}_{\\lambda \\varrho_m}^{m2}$ by $r_{\\lambda}(\\tilde{t}_{\\lambda}^{m1}) = \\inf_{t_{\\lambda} \\in \\Lambda_{\\lambda}^1}r_{\\lambda}^m(t_{\\lambda})$ and $r_{\\lambda\\varrho_m}(\\tilde{t}_{\\lambda \\varrho_m}^{m2}) = \\inf_{t_{\\lambda} \\in \\Lambda_{\\lambda\\varrho_m}^2}r_{\\lambda\\varrho_m}^m(t_{\\lambda})$, where $r_{\\lambda}^m(t_{\\lambda}) := (t_{\\lambda} -Z_{\\lambda 0}^m)' \\mathcal{I}_{\\lambda.\\eta 0}^m (t_{\\lambda} -Z_{\\lambda 0}^m)$ and $r_{\\lambda\\varrho_m}^m(t_{\\lambda}) := (t_{\\lambda} -Z_{\\lambda \\varrho_m}^m)' \\mathcal{I}_{\\lambda.\\eta \\varrho_m}^m (t_{\\lambda} -Z_{\\lambda \\varrho_m}^m)$.\n\nThe following proposition establishes the asymptotic null distribution of the LRTS. As in the non-normal case, the LRTS is asymptotically distributed as the maximum of the $M_0$ random variables. \n\\begin{assumption} \\label{A-nonsing_M_normal}\nAssumption \\ref{A-nonsing_M} holds when $\\tilde{s}_{\\tilde \\varrho_m k}$ is given in (\\ref{stilde_normal}).\n\\end{assumption}\n\\begin{proposition} \\label{P-LR_M_normal}\nSuppose Assumptions \\ref{assn_a1}, \\ref{assn_a2}, \\ref{assn_a4}, \\ref{A-consist_M}, and \\ref{A-nonsing_M_normal} hold and the component density of the $j$-th regime is given by (\\ref{normal-density}). 
Then, under $H_0: M=M_0$, \n$LR_{M_0,n} \\overset{d}{\\rightarrow} \\max_{m=1,\\ldots,M_0}\\{ \\max\\{ \\mathbb{I}\\{\\varrho_m=0\\} (\\tilde t_{\\lambda }^{m1})' \\mathcal{I}_{\\lambda.\\eta 0}^m \\tilde t_{\\lambda }^{m1}, \\sup_{\\varrho_m \\in \\Theta_{\\varrho}^m} (\\tilde t_{\\lambda \\varrho_m}^{m2})' \\mathcal{I}_{\\lambda.\\eta \\varrho_m}^m \\tilde t_{\\lambda \\varrho_m}^{m2} \\}\\}$.\n\\end{proposition} \n\n\\subsection{Homoscedastic normal distribution}\n\nAs in Section \\ref{subsec:homo_normal}, we assume that $Y_k\\in \\mathbb{R}$ in the $j$-th regime follows a normal distribution with a regime-specific intercept and a common variance whose density is given by (\\ref{normal-density-homo}).\n\nThe asymptotic distribution of the LRTS is derived by using a reparameterization\n\\begin{equation*}\n\\left(\n\\begin{array}{c}\n\\theta_m\\\\\n\\theta_{m+1}\\\\\n\\sigma^2\n\\end{array}\n\\right) =\\left(\n\\begin{array}{c}\n\\nu_{\\theta m} + (1-\\alpha_m)\\lambda_m \\\\\n\\nu_{\\theta m} -\\alpha_m\\lambda_m \\\\ \n\\nu_{\\sigma m} - \\alpha_m(1-\\alpha_m) \\lambda_{\\mu m}^2\n\\end{array}\n\\right),\n\\end{equation*}\nsimilar to (\\ref{repara-homo}) and following the derivation in Sections \\ref{subsec:homo_normal} and \\ref{subsec:hetero_normal_M}. For brevity, we omit the details of the derivation. 
Define $s_{\\lambda\\lambda \\varrho_m k}^m$ as in (\\ref{score_lambda_M}), and denote each element of $s_{\\lambda\\lambda \\varrho_m k}^m$ as\n\\begin{equation*}\ns_{\\lambda\\lambda \\varrho_m k}^m =\n\\begin{pmatrix}\n* & s_{\\lambda_{\\mu\\beta} \\varrho_m k}^m \\\\\ns_{\\lambda_{\\beta\\mu} \\varrho_m k}^m & s_{\\lambda_{\\beta\\beta} \\varrho_m k}^m \\\\\n\\end{pmatrix}.\n\\end{equation*}\nSimilarly to (\\ref{score_normal_homo}), define $\\tilde{s}_{\\tilde \\varrho k}$ as in (\\ref{stilde}) by redefining $s_{\\lambda \\varrho_m k}^m$ in (\\ref{stilde}) as\n\\begin{equation}\\label{stilde_normal_homo}\ns_{\\lambda \\varrho_m k}^m\n:= \n\\begin{pmatrix}\n\\zeta_{k,0}^m(\\varrho_m)\/2 & \ns_{\\lambda_{\\mu}^3 k}^m\/3! & \ns_{\\lambda_{\\mu}^4 k}^m\/4! &\n(s_{\\lambda_{\\beta \\mu} \\varrho_m k}^m)' & \nV(s_{\\lambda_{\\beta\\beta} \\varrho_m k}^m)'\n\\end{pmatrix} ',\n\\end{equation}\nwhere $s_{\\lambda_{\\mu}^i k}^m:= \\mathbb{P}_{\\vartheta^*_{M_0}} (X_k \\in J_m|\\overline{{\\bf Y}}_{0}^k) \\nabla_{\\mu^i}f(Y_k|\\overline{\\bf Y}_{k-1};\\gamma^*,\\theta_m^*)\/f(Y_k|\\overline{\\bf Y}_{k-1};\\gamma^*,\\theta_m^*)$ for $i=3,4$.\n\nThe following proposition establishes the asymptotic null distribution of the LRTS.\n\\begin{assumption} \\label{A-nonsing_M_normal_homo}\nAssumption \\ref{A-nonsing_M} holds when $\\tilde{s}_{\\tilde \\varrho_m k}$ is given in (\\ref{stilde_normal_homo}).\n\\end{assumption}\n\\begin{proposition} \\label{P-LR_M_normal_homo}\nSuppose Assumptions \\ref{assn_a1}, \\ref{assn_a2}, \\ref{assn_a4}, \\ref{A-consist_M}, and \\ref{A-nonsing_M_normal_homo} hold and the component density of the $j$-th regime is given by (\\ref{normal-density-homo}). 
Then, under $H_0: M=M_0$, $ LR_{M_0,n} \\overset{d}{\\rightarrow} \\max_{m=1,\\ldots,M_0}\\{ \\max\\{ \\mathbb{I}\\{\\varrho_m=0\\} (\\tilde t_{\\lambda }^{m1})' \\mathcal{I}_{\\lambda.\\eta 0}^m \\tilde t_{\\lambda }^{m1}, \\sup_{\\varrho_m \\in \\Theta_{\\varrho_m\\epsilon}} (\\tilde t_{\\lambda \\varrho_m}^{m2})' \\mathcal{I}_{\\lambda.\\eta \\varrho_m}^m \\tilde t_{\\lambda \\varrho_m}^{m2} \\}\\}$, where $\\tilde{t}_{\\lambda}^{m1}$ and $\\tilde{t}_{\\lambda \\varrho_m}^{m2}$ are defined as in Proposition \\ref{P-LR_M_normal} but in terms of $(Z_{\\lambda \\varrho_m}^m, \\mathcal{I}_{\\lambda.\\eta \\varrho_m}^m, Z_{\\lambda 0}^m, \\mathcal{I}_{\\lambda.\\eta 0}^m)$ constructed with $s_{\\lambda \\varrho_m k}^m$ given in (\\ref{stilde_normal_homo}) and $\\Lambda_\\lambda^1$ and $\\Lambda_{\\lambda \\varrho_m}^2$ defined as in (\\ref{Lambda-lambda-homo}) but replacing $\\varrho$ with $\\varrho_m$. \n\\end{proposition}\n\n\n\\section{Asymptotic distribution under local alternatives} \nIn this section, we derive the asymptotic distribution of our LRTS under local alternatives. While we focus on the case of testing $H_0: M=1$ against $H_A: M=2$, it is straightforward to extend the analysis to the case of testing $H_0: M=M_0$ against $H_A: M=M_0+1$ for $M_0\\geq 2$.\n\nGiven $\\pi\\in \\Theta_{\\pi}$, we define a local parameter $h:=\\sqrt{n}t(\\psi,\\pi)$, so that\n\\[\nh =\n\\begin{pmatrix}\nh_\\eta\\\\\nh_\\lambda \n\\end{pmatrix}=\n\\begin{pmatrix}\n\\sqrt{n} (\\eta-\\eta^*)\\\\\n\\sqrt{n} t_{\\lambda}(\\lambda,\\pi)\n\\end{pmatrix},\n\\] \nwhere $t_{\\lambda}(\\lambda,\\pi)$ differs across the different models and is given by (\\ref{score_lambda}), (\\ref{t_lambda_rho}), and (\\ref{t-psi2-homo}). 
Given $h=(h_\\eta',h_{\\lambda}')'$ and $\\pi\\in\\Theta_{\\pi}$, we consider the sequence of contiguous local alternatives $\\vartheta_n = (\\psi_n',\\pi_n')' = (\\eta_n',\\lambda_n',\\pi_n')'\\in\\Theta_{\\eta}\\times\\Theta_\\lambda\\times\\Theta_{\\pi}$ such that\n\\begin{equation}\\label{local-alternative}\nh_{\\eta} = \\sqrt{n}(\\eta_n-\\eta^*),\\quad h_\\lambda =\\sqrt{n} t_{\\lambda}(\\lambda_{n},\\pi_n)+o(1),\\quad \\text{and}\\ \\pi_n - \\pi =o(1).\n\\end{equation} \n\nLet $\\mathbb{P}_{\\vartheta,x_0}^n$ be the probability measure on $\\{Y_k\\}_{k=1}^n$ under $\\vartheta$ conditional on the values of $\\overline{\\bf Y}_0$, $X_0=x_0$, and ${\\bf W}_1^n$. Then, the log-likelihood ratio is given by\n\\[\n\\log \\frac{d\\mathbb{P}_{\\vartheta_n,x_0}^n}{d \\mathbb{P}_{\\vartheta^*,x_0}^n} = \\ell_n(\\psi_n,\\pi_n,x_0)-\\ell_n(\\psi^*,\\pi,x_0)= \\log \\left( \\frac{\\sum_{{\\bf x}_1^{n}} \\prod_{k=1}^n f_k(\\eta_n, \\lambda_n) q_{\\pi_n}(x_{k-1},x_k) }{ \\prod_{k=1}^n f_k(\\eta^*,0) } \\right),\n\\] \nwhere $f_k(\\eta,\\lambda)$ is defined by the right-hand side of (\\ref{repara_g}), (\\ref{repara_g_hetero}), and (\\ref{repara_g_homo}) for the models of the non-normal distribution, heteroscedastic normal distribution, and homoscedastic normal distribution, respectively. The following result follows from Le Cam's first and third lemmas and facilitates the derivation of the asymptotic distribution of the LRTS under $\\mathbb{P}_{\\vartheta_n,x_0}^n$.\n\n\\begin{proposition} \\label{P-LAN} Suppose that the assumptions of Propositions \\ref{P-quadratic}, \\ref{P-quadratic-N1}, and \\ref{P-quadratic-N1-homo} hold for the models of the non-normal, heteroscedastic normal, and homoscedastic normal distributions, respectively. 
Then, uniformly in $x_0 \\in \\mathcal{X}$, (a) $\\mathbb{P}_{\\vartheta_n,x_0}^n$ is mutually contiguous with respect to $\\mathbb{P}_{\\vartheta^*,x_0}^n$, and (b) under $\\mathbb{P}_{\\vartheta_n,x_0}^n$, we have $\\log (d\\mathbb{P}_{\\vartheta_n,x_0}^n \/ d \\mathbb{P}_{\\vartheta^*,x_0}^n) = h' \\nu_n(s_{\\varrho_n k}) - \\frac{1}{2}h' \\mathcal{I}_{\\varrho} h + o_{p}(1)$ with $\\nu_n(s_{\\varrho_n k}) \\overset{d}{\\rightarrow} N(\\mathcal{I}_{ \\varrho}h, \\mathcal{I}_{ \\varrho})$. \n\\end{proposition} \n\n\\subsection{Non-normal distribution}\n \nFor the non-normal distribution, the sequence of contiguous local alternatives is given by $\\lambda_n = \\bar \\lambda\/n^{1\/4}$ because then $h_\\lambda = \\sqrt{n}\\alpha(1-\\alpha)v(\\lambda_n)=\\alpha(1-\\alpha)v(\\bar\\lambda)$ holds. The following proposition derives the asymptotic distribution of the LRTS for the non-normal distribution under $H_{1 n}: (\\pi_n,\\eta_n,\\lambda_n) = (\\bar\\pi,\\eta^*,\\bar \\lambda\/n^{1\/4})$. \n \n\\begin{proposition} \\label{P-LAN2} Suppose that the assumptions of Proposition \\ref{P-LR} hold. \nFor $\\bar \\pi\\in\\Theta_{\\pi}$ and $\\bar \\lambda\\neq 0$, define $h_{\\lambda}:= \\bar\\alpha(1-\\bar\\alpha) v(\\bar \\lambda)$. 
Then, under $H_{1 n}: (\\pi_n,\\eta_n,\\lambda_n) = (\\bar\\pi,\\eta^*,\\bar \\lambda\/n^{1\/4})$, we have $LR_n \\overset{d}{\\rightarrow} \\sup_{\\varrho \\in \\Theta_{\\varrho}} (\\tilde t_{\\lambda\\varrho h})' \\mathcal{I}_{\\lambda.\\eta\\varrho} \\tilde t_{\\lambda\\varrho h}$, where $\\tilde t_{\\lambda \\varrho h}$ is defined as in (\\ref{t-lambda}) but replacing $Z_{\\lambda\\varrho}$ in (\\ref{t-lambda}) with $(\\mathcal{I}_{\\lambda.\\eta\\varrho})^{-1} G_{\\lambda.\\eta\\varrho}+ h_{\\lambda}$.\n\\end{proposition}\n\n\n\\subsection{Heteroscedastic normal distribution}\n\nFor the model with the heteroscedastic normal distribution, the sequences of contiguous local alternatives characterized by (\\ref{local-alternative}) include the local alternatives of order $n^{-1\/8}$. \n\\begin{proposition} \\label{P-LAN3} Suppose that the assumptions of Proposition \\ref{P-LR-N1} hold for model (\\ref{normal-density}). For $\\bar \\varrho \\in (-1,1)$, $\\bar \\alpha \\in (0,1)$, and $\\bar \\lambda := (\\bar \\lambda_\\mu,\\bar \\lambda_\\sigma,\\bar \\lambda_\\beta')'\\neq (0,0,0)'$, let\n\\begin{align*}\n& H_{1 n}^a: (\\varrho_n,\\alpha_n,\\eta_n,\\lambda_{\\mu n}, \\lambda_{\\sigma n},\\lambda_{\\beta n}) = (\\bar \\varrho\/n^{1\/4}, \\bar\\alpha,\\eta^*, \\bar \\lambda_\\mu\/ n^{1\/8}, \\bar \\lambda_{\\sigma}\/n^{3\/8},\\bar \\lambda_{\\beta}\/n^{3\/8}), \\\\\n& H_{1 n}^b: (\\varrho_n,\\alpha_n,\\eta_n,\\lambda_{\\mu n}, \\lambda_{\\sigma n},\\lambda_{\\beta n}) = (\\bar \\varrho, \\bar\\alpha,\\eta^*, \\bar \\lambda_\\mu\/ n^{1\/4}, \\bar \\lambda_{\\sigma}\/n^{1\/4},\\bar \\lambda_{\\beta}\/n^{1\/4}),\n\\end{align*}\nand define\n\\begin{align*}\nh_{\\lambda}^a: &=\n\\bar\\alpha(1-\\bar\\alpha) \\times (\\bar \\varrho \\bar \\lambda_\\mu^2,\n \\bar \\lambda_\\mu \\bar \\lambda_\\sigma,\n b(\\bar\\alpha) \\bar \\lambda_\\mu^4\/12,\n\\bar \\lambda_{\\beta}' \\bar \\lambda_\\mu, 0, 0)', \\\\\nh_{\\lambda}^b: &= \n \\bar\\alpha(1-\\bar\\alpha) \\times (\\bar 
\\varrho\\bar \\lambda_\\mu^2,\n \\bar \\lambda_\\mu \\bar \\lambda_\\sigma,\n \\bar \\lambda_\\sigma^2,\n \\bar \\lambda_{\\beta}' \\bar \\lambda_\\mu,\n\\bar \\lambda_\\beta' \\bar \\lambda_{\\sigma},\n v( \\bar \\lambda_\\beta)')'.\n\\end{align*}\nThen, for $j \\in \\{a,b\\}$, under $H_{1 n}^j$, we have $LR_n \\overset{d}{\\rightarrow} \\max\\{ \\mathbb{I}\\{\\varrho=0\\} (\\tilde t_{\\lambda h}^{1j})' \\mathcal{I}_{\\lambda.\\eta 0} \\tilde t_{\\lambda h}^{1j}, \\sup_{\\varrho \\in \\Theta_{\\varrho}} (\\tilde t_{\\lambda \\varrho h}^{2j})' \\mathcal{I}_{\\lambda.\\eta \\varrho} \\tilde t_{\\lambda \\varrho h}^{2j} \\}$, where $\\tilde t_{\\lambda h}^{1j}$ and $\\tilde t_{\\lambda \\varrho h}^{2j}$ are defined as in (\\ref{t-lambda-N1}) but replacing $Z_{\\lambda\\varrho}$ with $(\\mathcal{I}_{\\lambda.\\eta\\varrho})^{-1} G_{\\lambda.\\eta\\varrho}+ h_{\\lambda}^j$.\n\\end{proposition} \n\nIn the local alternative $H_{1 n}^a$, $\\varrho_n$ converges to $0$, and $\\lambda_{\\mu n}$ converges to 0 at a slower rate than $n^{-1\/4}$. Our test has non-trivial power against these local alternatives in the neighborhood of $\\varrho=0$. By contrast, the test of \\citet{carrasco14em} does not have power against the local alternatives in the neighborhood of $\\varrho=0$, as discussed in Section 5 of \\citet{carrasco14em}. The test proposed by \\citet{quzhuo17wp} assumes that $\\varrho$ is bounded away from zero and hence their test rules out $H_{1n}^a$.\n\n \n \n\\subsection{Homoscedastic normal distribution} \n \nThe local alternatives for the model with the homoscedastic distribution also include those of order $n^{-1\/8}$ in the neighborhood of $\\varrho=0$.\n\n\\begin{proposition} \\label{P-LAN4} Suppose that the assumptions of Proposition \\ref{P-quadratic-N1-homo} hold for model (\\ref{normal-density-homo}). 
For $\\bar \\varrho \\in (-1,1)$, $\\bar \\alpha \\in (0,1)$, $\\Delta_\\alpha \\neq 0$, and $\\bar \\lambda := (\\bar \\lambda_\\mu,\\bar \\lambda_\\beta')'\\neq (0,0)'$, let\n\\begin{align*}\n& H_{1 n}^a: (\\varrho_n,\\alpha_n,\\eta_n,\\lambda_{\\mu n}, \\lambda_{\\beta n}) = (\\bar \\varrho\/n^{1\/4}, 1\/2+\\Delta_{\\alpha}\/n^{1\/8}, \\eta^*, \\bar \\lambda_\\mu\/ n^{1\/8}, \\bar \\lambda_{\\beta}\/n^{3\/8}), \\\\\n& H_{1 n}^b: (\\varrho_n,\\alpha_n,\\eta_n,\\lambda_{\\mu n}, \\lambda_{\\beta n}) = (\\bar \\varrho, \\bar\\alpha,\\eta^*, \\bar \\lambda_\\mu\/ n^{1\/4}, \\bar \\lambda_{\\beta}\/n^{1\/4}),\n\\end{align*}\nand define $h_{\\lambda}^a := (1\/4) \\times (\\bar\\varrho \\bar\\lambda_{\\mu}^2,\\Delta_{\\alpha} \\bar\\lambda_{\\mu}^3,-\\bar\\lambda_\\mu^4\/2, \\bar\\lambda_\\beta'\\bar\\lambda_\\mu, 0)'$ and $h_{\\lambda}^b := \\bar\\alpha(1-\\bar\\alpha) \\times (\\bar \\varrho\\bar \\lambda_\\mu^2, 0, 0, \\bar \\lambda_{\\beta}' \\bar \\lambda_\\mu, v( \\bar \\lambda_\\beta)')'$. For $j \\in \\{a,b\\}$, define $\\tilde t_{\\lambda h}^{1j}$ and $\\tilde t_{\\lambda \\varrho h}^{2j}$ as in (\\ref{t-lambda-N1}) but replacing $Z_{\\lambda\\varrho}$ with $(\\mathcal{I}_{\\lambda.\\eta\\varrho})^{-1} G_{\\lambda.\\eta\\varrho}+ h_{\\lambda}^j$, where $\\mathcal{I}_{\\lambda.\\eta\\varrho}$ and $G_{\\lambda.\\eta\\varrho}$ are constructed with $s_{\\varrho k}$ defined in (\\ref{score_normal_homo}), and $\\Lambda_{\\lambda }^1$ and $\\Lambda_{\\lambda \\varrho}^2$ are defined in (\\ref{Lambda-lambda-homo}). 
Then, under $H_{1 n}^j$, we have $LR_n \\overset{d}{\\rightarrow} \\max\\{ \\mathbb{I}\\{\\varrho=0\\} (\\tilde t_{\\lambda h}^{1j})' \\mathcal{I}_{\\lambda.\\eta 0} \\tilde t_{\\lambda h}^{1j}, \\sup_{\\varrho \\in \\Theta_{\\varrho}} (\\tilde t_{\\lambda \\varrho h}^{2j})' \\mathcal{I}_{\\lambda.\\eta \\varrho} \\tilde t_{\\lambda \\varrho h}^{2j} \\}$.\n\\end{proposition} \n\n\\section{Parametric bootstrap}\nWe consider the following parametric bootstrap to obtain the bootstrap critical value $c_{\\alpha,B}$ and bootstrap $p$-value of our LRTS for testing $H_0: M=M_0$ against $H_A: M=M_0+1$.\n\\begin{enumerate}\n\\item Using the observed data, compute $\\hat \\vartheta_{M_0}$, $\\hat\\vartheta_{M_0+1}$, and $LR_{M_0,n}$.\n\\item Given $\\hat \\vartheta_{M_0}$ and $\\xi_{M_0}$, generate $B$ independent samples $\\{Y_1^b,\\ldots,Y_n^b\\}_{b=1}^B$ under $H_0$ with $\\vartheta_{M_0}=\\hat\\vartheta_{M_0}$ conditional on the observed value of $\\overline{\\bf Y}_{0}$ and ${\\bf W}_{1}^n$.\n\\item For each simulated sample $\\{Y_k^b\\}_{k=1}^n$ with $(\\overline{\\bf Y}_0,{\\bf W}_1^n)$, estimate $\\hat \\vartheta_{M_0}^b$ and $\\hat\\vartheta_{M_0+1}^b$ as in Step 1, and let $LR_{M_0,n}^b := 2[\\ell_n(\\hat\\vartheta_{M_0+1}^b ,\\xi_{M_0+1})-\\ell_n(\\hat\\vartheta_{M_0}^b ,\\xi_{M_0})]$ for $b=1,\\ldots,B$.\n\\item Let $c_{\\alpha,B}$ be the $(1-\\alpha)$ quantile of $\\{LR_{M_0,n}^b\\}_{b=1}^B$, and define the bootstrap $p$-value as $B^{-1}\\sum_{b=1}^B \\mathbb{I}\\{ LR_{M_0,n}^b > LR_{M_0, n}\\}$.\n\\end{enumerate} \n\n\nThe following proposition shows the consistency of the bootstrap critical values $c_{\\alpha,B}$ for testing $H_0: M=1$. We omit the result for $M_0\\geq 2$; it is straightforward to extend the analysis to that case at the cost of more tedious notation. 
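As a concrete reference, Steps 1 to 4 above can be sketched as a generic routine. The helpers `fit_null`, `fit_alt`, and `simulate_null` are hypothetical stand-ins for the model-specific estimation and simulation steps; as a deliberate simplification of the regime switching models in the text, the toy fitters below test an i.i.d. normal null against a two-component location mixture fitted by a short EM.

```python
import numpy as np

def norm_logpdf(y, mu, s):
    return -0.5 * np.log(2 * np.pi * s**2) - (y - mu)**2 / (2 * s**2)

def fit_null(y):
    # One-regime MLE: sample mean and (ML) standard deviation.
    mu, s = y.mean(), y.std()
    return (mu, s), norm_logpdf(y, mu, s).sum()

def fit_alt(y, iters=100):
    # Crude EM for a two-component location mixture with common variance.
    mu1, mu2, s, p = y.mean() - y.std(), y.mean() + y.std(), y.std(), 0.5
    for _ in range(iters):
        d1 = p * np.exp(norm_logpdf(y, mu1, s))
        d2 = (1 - p) * np.exp(norm_logpdf(y, mu2, s))
        w = d1 / (d1 + d2)                      # posterior weight of component 1
        p = w.mean()
        mu1, mu2 = (w * y).sum() / w.sum(), ((1 - w) * y).sum() / (1 - w).sum()
        s = np.sqrt((w * (y - mu1)**2 + (1 - w) * (y - mu2)**2).mean())
    ll = np.log(p * np.exp(norm_logpdf(y, mu1, s))
                + (1 - p) * np.exp(norm_logpdf(y, mu2, s))).sum()
    return (mu1, mu2, s, p), ll

def bootstrap_lrt(y, fit_null, fit_alt, simulate_null, B=199, alpha=0.05, seed=None):
    """Steps 1-4: observed LRTS, B parametric-bootstrap replicates under H0,
    the (1 - alpha) bootstrap critical value, and the bootstrap p-value."""
    rng = np.random.default_rng(seed)
    theta0, ll0 = fit_null(y)
    _, ll1 = fit_alt(y)
    lr = 2.0 * (ll1 - ll0)                      # Step 1: observed LRTS
    lr_b = np.empty(B)
    for b in range(B):                          # Steps 2-3: simulate under H0, re-test
        yb = simulate_null(theta0, len(y), rng)
        _, ll0b = fit_null(yb)
        _, ll1b = fit_alt(yb)
        lr_b[b] = 2.0 * (ll1b - ll0b)
    c_alpha = np.quantile(lr_b, 1.0 - alpha)    # Step 4: critical value and p-value
    p_value = (lr_b > lr).mean()
    return lr, c_alpha, p_value
```

In the setting of the text, `fit_null` and `fit_alt` would instead be the (possibly penalized) maximum likelihood routines for the models with $M_0$ and $M_0+1$ regimes, and `simulate_null` would simulate the fitted $M_0$-regime model conditional on the observed presample values.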
\n \n\\begin{proposition} \\label{P-bootstrap} \nSuppose that the assumptions of Propositions \\ref{P-quadratic}, \\ref{P-quadratic-N1}, and \\ref{P-quadratic-N1-homo} hold for the models of the non-normal, heteroscedastic normal, and homoscedastic normal distributions, respectively. Then, the bootstrap critical values $c_{\\alpha,B}$ converge to the asymptotic critical values in probability as $n$ and $B$ go to infinity under $H_0$ and under the local alternatives described in Propositions \\ref{P-LAN2}, \\ref{P-LAN3}, and \\ref{P-LAN4}.\n\\end{proposition}\n\n\n\\section{Simulations and empirical application} \n\n\n\\subsection{Simulations}\n\nWe consider the following two models: \n\\begin{align} \n\\text{Model 1}:&\\quad Y_{k} = \\mu_{X_k} + \\beta Y_{k-1} +\\varepsilon_{k},\\quad \\varepsilon_k\\sim N(0,\\sigma^2),\\label{model1}\\\\\n\\text{Model 2}:&\\quad Y_{k} = \\mu_{X_k} + \\beta Y_{k-1} +\\varepsilon_{k},\\quad \\varepsilon_k\\sim N(0,\\sigma_{X_k}^2),\\label{model2}\n\\end{align}\nwhere $X_k\\in\\{1,\\ldots,M\\}$ with $p_{ij} = p(X_k=i|X_{k-1}=j)$. Model 1 in (\\ref{model1}) is similar to the model used in \\citet{chowhite07em}. This model has a switching intercept, but the variance parameter $\\sigma^2$ does not switch across regimes. In Model 2 in (\\ref{model2}), both the intercept and the variance parameters switch across regimes.\n\nWe compare the size and power properties of our bootstrap LRT with those of the QLR test of \\citet{chowhite07em} with $\\Theta_\\mu = [-2,2]$ and the supTS test of \\citet{carrasco14em} with $\\rho \\in [-0.9, 0.9]$. The critical values are computed by bootstrap with $B=199$ bootstrap samples. Note that this comparison favors the LRT over the supTS test because the supTS test is designed to detect general parameter variation, including Markov chain regime switching.\n\nIn Model 2, we set the lower bound of $\\sigma$ as $\\epsilon_\\sigma = 0.01\\hat \\sigma$, where $\\hat \\sigma$ is the estimate of $\\sigma$ from the one-regime model. 
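For reference, the data-generating processes of Models 1 and 2 can be simulated directly. This is a minimal sketch; the function name and the row-wise transition-matrix convention (`P[i, j]` is the probability of moving from regime `i` to regime `j`) are choices made here, and the parameter values in the usage example are illustrative.

```python
import numpy as np

def simulate_msar(n, mu, sigma, beta, P, y0=0.0, seed=None):
    """Simulate Y_k = mu[X_k] + beta * Y_{k-1} + eps_k, eps_k ~ N(0, sigma[X_k]^2),
    where X_k is a first-order Markov chain with row-stochastic transition matrix P.
    Model 1 is the special case in which all entries of sigma are equal."""
    rng = np.random.default_rng(seed)
    M = len(mu)
    x = np.empty(n, dtype=int)
    y = np.empty(n)
    x_prev, y_prev = rng.integers(M), y0         # arbitrary initial regime
    for k in range(n):
        x[k] = rng.choice(M, p=P[x_prev])        # draw the next regime
        y[k] = mu[x[k]] + beta * y_prev + rng.normal(0.0, sigma[x[k]])
        x_prev, y_prev = x[k], y[k]
    return y, x
```

For example, `simulate_msar(500, np.array([0.2, -0.2]), np.array([1.1, 0.9]), 0.5, np.array([[0.9, 0.1], [0.1, 0.9]]))` draws one sample path of Model 2 with two persistent regimes; setting both entries of `sigma` equal reproduces Model 1.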
We also find that adding a penalty term to the log-likelihood function improves the finite-sample performance of the LRT. The penalty term prevents $\\sigma_j$ from taking an extremely small value and takes the form \n$-a_n \\sum_{j = 1}^{M} \\{\\hat\\sigma^2\/\\sigma_j^2+\\log(\\sigma_j^2\/\\hat\\sigma^2) - 1\\}$. \nWe set $a_n = 20n^{-1\/2}$ and compute the test statistic using the log-likelihood function without the penalty term. Because the penalty term and its derivatives are $o_p(1)$ from the compactness of $\\Theta_M$, adding this penalty term does not affect the consistency of the MLE or the asymptotic distribution of the LRTS.\n\nWe first examine the rejection frequency of $H_0:M=1$ against $H_A:M=2$ when the data are generated by $H_0: M=1$ with $(\\beta,\\mu,\\sigma)=(0.5,0,1)$. The first panel in Table \\ref{table:bootstrap-size} reports the rejection frequency of the bootstrap tests at the nominal 10\\%, 5\\%, and 1\\% levels over $3000$ replications with $n=200$ and $500$. Overall, all the tests have good size. \n \nTable \\ref{table:bootstrap-power-model1} reports the power of the three tests for testing the null hypothesis of $M=1$ at the nominal level of 5\\%. We generate $3000$ data sets for $n= 500$ under the alternative hypothesis of $M=2$ by setting $\\mu_1=0.2, 0.6$, and $1.0$ and $\\mu_2=-\\mu_1$, while $(p_{11},p_{22})=(0.25,0.25)$, $(0.50,0.50)$, $(0.70,0.70)$, and $(0.90,0.90)$. We set $\\sigma=1$ for Model 1 and $(\\sigma_1,\\sigma_2)=(1.1,0.9)$ for Model 2. For Model 1, the power of all the tests increases as $\\mu_1$ increases, except for the supTS test with $(p_{11},p_{22})=(0.9,0.9)$. As $(p_{11},p_{22})$ moves away from $(0.5,0.5)$, the power of the LRT increases, whereas the power of the QLRT decreases. 
The LRT performs better than the supTS and QLR tests except in two cases: with $(p_{11},p_{22})=(0.25,0.25)$, the supTS test performs very well, and with $(p_{11},p_{22})=(0.5,0.5)$, the QLRT outperforms the LRT because the true model is a finite mixture in this case. The last three columns of Table \\ref{table:bootstrap-power-model1} report the power of the LRT and supTS test to detect alternative models with switching variances (i.e., Model 2 with $M=2$). We did not examine the power of the QLRT because this test assumes a non-switching variance. The LRT has stronger power than the supTS test in most cases.\n \nThe second panel in Table \\ref{table:bootstrap-size} reports the rejection frequency of the LRT for testing $H_0:M=2$ against $H_A:M=3$ when the data are generated under the null hypothesis, showing its good size property; neither the QLRT nor the supTS test is applicable for testing $H_0:M=2$ against $H_A:M=3$.\n\nTable \\ref{table:bootstrap-power-model2} reports the power of our LRT for testing the null hypothesis of $M=2$ at the nominal level of 5\\%. We generate $3000$ data sets for $n= 500$ under the alternative hypothesis of $M=3$ across different values of $(\\mu_1,\\mu_2,\\mu_3)$ and $(p_{11},p_{22},p_{33})$ with $p_{ij} = (1-p_{ii})\/2$ for $j\\neq i$, where we set $(\\beta,\\sigma)=(0.5,1.0)$ for Model 1 and $(\\beta,\\sigma_1,\\sigma_2) = (0.5,0.9,1.2)$ for Model 2. Similarly to the case of $H_0:M=1$, the power of the LRT for testing $H_0: M=2$ against $H_A: M=3$ increases when the alternative is further away from $H_0$ or when the latent regimes become more persistent.\n\n\\subsection{Empirical example} \nUsing U.S. 
GDP per capita quarterly growth rate data from 1960Q1 to 2014Q4, we estimate the regime switching models with common variance (i.e., Model 1 in (\\ref{model1})) and with switching variances (i.e., Model 2 in (\\ref{model2})) for $M=1$, $2$, $3$, and $4$ and sequentially test the null hypothesis of $M=M_0$ against the alternative hypothesis $M=M_0+1$ for $M_0=1$, $2$, $3$, and $4$.\\footnote{For both models, we restrict the parameter values for the transition probabilities by setting $\\epsilon=0.05$ to prevent the issue of unbounded likelihood.} \nWe also report the Akaike information criterion (AIC) and Bayesian information criterion (BIC) for reference, although, to the best of our knowledge,\nthe consistency of the AIC and BIC for selecting the number of regimes has not been established in the literature.\n\nTable \\ref{table:select} reports the selected number of regimes by the AIC, BIC, and LRT. For Model (\\ref{model1}) with common variance, our LRT selects $M=4$, while the AIC and BIC select $M=3$ and $M=1$, respectively. For Model (\\ref{model2}) with switching variance, both the LRT and the AIC select $M=3$, while the BIC selects $M=2$.\n\nPanel A of Table \\ref{table:parameter} and Figure \\ref{fig:common} report the parameter estimates and posterior probabilities of being in each regime for the model with common variance for $M=2$, $3$, and $4$. Across the different specifications in $M$, the estimated values of $\\mu_1$, $\\mu_2, \\ldots, \\mu_M$ are well separated in the common variance model, indicating that each regime represents a booming or recession period of a different degree. In Figure \\ref{fig:common}, when the number of regimes is specified as $M=2$, the posterior probability of the ``recession'' regime (Regime 1) against that of the ``booming'' regime (Regime 2) sharply rises during the collapse of Lehman Brothers in 2008 and then declines after 2009. 
When the number of regimes is specified as $M=3$, in addition to the ``recession'' and ``booming'' regimes corresponding to Regimes 1 and 2, respectively, the regime with a rapid change in the growth rates from low to high is captured by Regime 3; for the model with $M=3$ in Figure \\ref{fig:common}, the posterior probability of Regime 3 rises in late 2009 when the U.S. economy started to recover from the Lehman collapse. When the number of regimes is specified as $M=4$, Regime 1 now captures a rapid change in the growth rates from high to low, where the posterior probability of Regime 1 becomes high when the growth rate of the U.S. economy rapidly declined in the middle of the Lehman collapse. The LRT selects the model with four regimes, which capture the rapid changes in the growth rates of U.S. GDP per capita during the Lehman collapse period.\n \nPanel B of Table \\ref{table:parameter} and Figure \\ref{fig:switching} report the parameter estimates and posterior probabilities of being in each regime for the model with switching variance, respectively. When the number of regimes is specified as $M=2$, the variance parameter estimates are very different between the two regimes, while the intercept estimates are similar, indicating that Regime 1 is the ``low volatility'' regime, while Regime 2 is the ``high volatility'' regime. When the number of regimes is specified as $M=3$, different regimes capture different states of the U.S. economy in terms of both growth rates and volatilities.\\footnote{We may test the null hypothesis of $\\sigma_1=\\sigma_2=\\sigma_3$ in the model with switching variance given $M=3$ by the standard LRT with the critical value obtained from the chi-square distribution with two degrees of freedom. 
With $LRTS = 2\\times (-297.01+307.39)=20.76$, the null hypothesis of $\\sigma_1=\\sigma_2=\\sigma_3$ is rejected at the 1\\% significance level, suggesting that the model with switching variance is more appropriate than the model with common variance.} Regime 1 is characterized by the negative value of the intercept with high volatility, capturing a recession period. Regime 2 is characterized by the positive value of the intercept with low volatility, capturing a booming\/stable economy. Regime 3 is characterized by a high value of the intercept and high variance, capturing both a rapid recovery in the growth rates and high volatility in the aftermath of the Lehman collapse in 2009. The LRT selects the model with three regimes when the model is specified with switching variance.\n\n\n\\section{Appendix}\n\nHenceforth, for notational brevity, we suppress ${\\bf W}_{a}^{b}$ from the conditioning variables and conditional densities when doing so does not cause confusion.\n\n\\subsection{Proof of propositions and corollaries}\n \n\\begin{proof}[Proof of Proposition \\ref{uconlike}]\nThe proof is essentially identical to the proof of Lemma 2 in DMR. Therefore, the details are omitted. The only difference from DMR is (i) we do not impose Assumption (A2) of DMR, but this does not affect the proof because Assumption (A2) is not used in the proof of Lemma 2 in DMR, and (ii) we have ${\\bf W}_1^n$, but our Lemma \\ref{x_diff}(a) extends Corollary 1 of DMR to accommodate the $W_k$'s. Consequently, the argument of the proof of DMR goes through.\n\\end{proof}\n \n\\begin{proof}[Proof of Proposition \\ref{Ln_thm1}] \n\nDefine $h_{\\vartheta k x_0} := \\sqrt{l_{\\vartheta k x_0}}-1$. 
By using the Taylor expansion of $2 \\log(1+x) = 2x - x^2(1+o(1))$ for small $x$, we have, uniformly in $x_0 \\in \\mathcal{X}$ and $\\vartheta \\in \\mathcal{N}_{c\/\\sqrt{n}}$,\n\\begin{equation}\\label{ell_appn}\n\\ell_n(\\psi,\\pi,x_0) - \\ell_n(\\psi^*, \\pi, x_0) = 2 \\sum_{k=1}^n \\log(1+h_{\\vartheta k x_0}) = n P_n(2h_{\\vartheta k x_0} - [1+o_p(1)] h_{\\vartheta k x_0}^2).\n\\end{equation}\nThe stated result holds if we show that\n\\begin{align} \n&\\sup_{x_0 \\in \\mathcal{X}} \\sup_{\\vartheta \\in \\mathcal{N}_{c\/\\sqrt{n}}} \\left| nP_n(h_{\\vartheta k x_0}^2) - n t_\\vartheta' \\mathcal{I}_\\pi t_\\vartheta\/4 \\right| = o_p(1)\\quad\\text{and}\\label{hk_appn}\\\\\n&\\sup_{x_0 \\in \\mathcal{X}}\\sup_{\\vartheta \\in \\mathcal{N}_{c\/\\sqrt{n}}} | nP_n(h_{\\vartheta k x_0}) - \\sqrt{n} t_\\vartheta' \\nu_n(s_{\\pi k})\/2 + n t_\\vartheta' \\mathcal{I}_\\pi t_\\vartheta\/8 | = o_p(1),\\label{hk_appn2}\n\\end{align}\nbecause then the right-hand side of (\\ref{ell_appn}) is equal to $\\sqrt{n} t_\\vartheta' \\nu_n (s_{\\pi k}) - n t_\\vartheta' \\mathcal{I}_\\pi t_\\vartheta\/2 + o_p(1)$ uniformly in $x_0 \\in \\mathcal{X}$ and $\\vartheta \\in \\mathcal{N}_{c\/\\sqrt{n}}$.\n\nWe first show (\\ref{hk_appn}). Let $m_{\\vartheta k} := t_\\vartheta' s_{\\pi k} + r_{\\vartheta k}$, so that $l_{\\vartheta k x_0} -1 = m_{\\vartheta k} + u_{\\vartheta k x_0}$. Observe that\n\\begin{equation} \\label{B2}\n\\max_{1\\leq k \\leq n} \\sup_{\\vartheta \\in \\mathcal{N}_{c\/\\sqrt{n}}} |m_{\\vartheta k}| = \\max_{1\\leq k \\leq n} \\sup_{\\vartheta \\in \\mathcal{N}_{c\/\\sqrt{n}}} |t_\\vartheta' s_{\\pi k} + r_{\\vartheta k}| = o_p(1), \n\\end{equation}\nfrom Assumptions \\ref{assn_a3}(a) and (c) and Lemma \\ref{max_bound}. 
Write $4P_n ( h_{\\vartheta k x_0}^2)$ as\n\\begin{equation}\\label{B0}\n4P_n ( h_{\\vartheta k x_0}^2) = P_n \\left(\\frac{4(l_{\\vartheta k x_0}-1)^2}{(\\sqrt{l_{\\vartheta k x_0}} + 1)^2}\\right) = P_n(l_{\\vartheta k x_0}-1)^2 - P_n \\left( (l_{\\vartheta k x_0}-1)^3 \\frac{(\\sqrt{l_{\\vartheta k x_0}}+3)}{(\\sqrt{l_{\\vartheta k x_0}} + 1)^3}\\right).\n\\end{equation}\nIt follows from Assumptions \\ref{assn_a3}(a)(b)(c)(e)(f) and $(E|XY|)^2 \\leq E|X|^2E|Y|^2$ that, uniformly in $\\vartheta \\in \\mathcal{N}_\\varepsilon$, \n\\begin{equation} \\label{lk_lln}\nP_n(l_{\\vartheta k x_0}-1)^2 = t_\\vartheta'P_n(s_{\\pi k}s_{\\pi k}')t_\\vartheta + 2 t_\\vartheta' P_n[s_{\\pi k} (r_{\\vartheta k} + u_{\\vartheta k x_0})] + P_n(r_{\\vartheta k} + u_{\\vartheta k x_0})^2 = t_\\vartheta'P_n(s_{\\pi k}s_{\\pi k}')t_\\vartheta + \\zeta_{\\vartheta n x_0}, \n\\end{equation} \nwhere $\\zeta_{\\vartheta n x_0}$ satisfies $\\sup_{x_0 \\in \\mathcal{X}}|\\zeta_{\\vartheta n x_0}|=O_p(|t_\\vartheta|^2|\\psi-\\psi^*|)+O_p(n^{-1}|t_\\vartheta||\\psi-\\psi^*|) + O_p(n^{-1}|\\psi-\\psi^*|^2)$. 
Then, (\\ref{hk_appn}) holds because $\\sup_{\\pi \\in \\Theta_\\pi}| P_n(s_{\\pi k}s_{\\pi k}') - \\mathcal{I}_\\pi| = o_p(1)$ and because, from (\\ref{B2}), $P_n( m_{\\vartheta k}^2) = t_\\vartheta'\\mathcal{I}_\\pi t_\\vartheta + o_p(|t_\\vartheta|^2)$, and Assumption \\ref{assn_a3}(e), the second term on the right-hand side of (\\ref{B0}) is bounded by\n\\begin{align*}\n& \\mathcal{C} \\sup_{x_0 \\in \\mathcal{X}}\\sup_{\\vartheta \\in \\mathcal{N}_{c\/\\sqrt{n}}} P_n \\left[ |m_{\\vartheta k}|^3 + 3 m_{\\vartheta k}^2 |u_{\\vartheta k x_0}| + 3|m_{\\vartheta k}| u_{\\vartheta k x_0}^2 \\right] + \\mathcal{C} \\sup_{x_0 \\in \\mathcal{X}}\\sup_{\\vartheta \\in \\mathcal{N}_{c\/\\sqrt{n}}} P_n (|u_{\\vartheta k x_0}|^3) \\\\\n& \\leq o_p(1) \\sup_{x_0 \\in \\mathcal{X}}\\sup_{\\vartheta \\in \\mathcal{N}_{c\/\\sqrt{n}}} P_n \\left[ m_{\\vartheta k}^2+u_{\\vartheta k x_0}^2 \\right] + \\mathcal{C} \\sup_{x_0 \\in \\mathcal{X}}\\sup_{\\vartheta \\in \\mathcal{N}_{c\/\\sqrt{n}}} P_n (|u_{\\vartheta k x_0}|^3) = o_p(n^{-1}).\n\\end{align*}\n\nWe proceed to show (\\ref{hk_appn2}). 
Consider the following expansion of $h_{\\vartheta k x_0}$:\n\\begin{equation} \\label{hk_1}\nh_{\\vartheta k x_0} = (l_{\\vartheta k x_0}-1)\/2 - h_{\\vartheta k x_0}^2\/2 = (t_\\vartheta' s_{\\pi k} + r_{\\vartheta k}+ u_{\\vartheta k x_0})\/2 - h_{\\vartheta k x_0}^2\/2.\n\\end{equation}\nThen, (\\ref{hk_appn2}) follows from (\\ref{hk_appn}), (\\ref{hk_1}), and Assumptions \\ref{assn_a3}(d) and (e), and the stated result follows.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{Ln_thm2}] \nFor part (a), it follows from $\\log(1+x) \\leq x$ and $h_{\\vartheta k x_0} = (l_{\\vartheta k x_0}-1)\/2 - h_{\\vartheta k x_0}^2\/2$ (see (\\ref{hk_1})) that\n\\begin{equation} \\label{Ln_ineq}\n\\ell_n(\\psi,\\pi, x_0)-\\ell_n(\\psi^*,\\pi, x_0) = 2 \\sum_{k=1}^n \\log(1+h_{\\vartheta k x_0}) \\leq 2 n P_n(h_{\\vartheta k x_0}) = \\sqrt{n} \\nu_n(l_{\\vartheta k x_0}-1) - n P_n(h_{\\vartheta k x_0}^2).\n\\end{equation} \nObserve that $h_{\\vartheta k x_0}^2 =(l_{\\vartheta k x_0}-1)^2\/(\\sqrt{l_{\\vartheta k x_0}} + 1)^2 \\geq \\mathbb{I}\\{l_{\\vartheta k x_0} \\leq \\kappa \\} (l_{\\vartheta k x_0}-1)^2\/ (\\sqrt{\\kappa}+1)^2$ for any $\\kappa>0$. Therefore, \\begin{equation}\\label{pn_lower}\nP_n (h_{\\vartheta k x_0}^2)\\geq (\\sqrt{\\kappa}+1)^{-2}P_n \\left( \\mathbb{I}\\{l_{\\vartheta k x_0} \\leq \\kappa\\} (l_{\\vartheta k x_0}-1)^2\\right). 
\n\\end{equation} \nSubstituting (\\ref{lk_lln}) into the right-hand side of (\\ref{pn_lower}) gives\n\\begin{equation} \\label{pn_lower_2}\nP_n (h_{\\vartheta k x_0}^2) \\geq (\\sqrt{\\kappa}+1)^{-2} t_\\vartheta' \\left[ P_n(s_{\\pi k}s_{\\pi k}') - P_n(\\mathbb{I}\\{l_{\\vartheta k x_0} > \\kappa\\}s_{\\pi k}s_{\\pi k}') \\right] t_\\vartheta + \\zeta_{\\vartheta n x_0}.\n\\end{equation}\nFrom H\\\"older's inequality, we have $P_n(\\mathbb{I}\\{l_{\\vartheta k x_0} > \\kappa\\}|s_{\\pi k}|^2 ) \\leq [P_n(\\mathbb{I}\\{l_{\\vartheta k x_0} > \\kappa\\} )]^{\\delta\/(2+\\delta)} [ P_n(|s_{\\pi k}|^{2+\\delta}) ]^{2\/(2+\\delta)}$. The right-hand side is no larger than $\\kappa^{-\\delta\/(2+\\delta)} O_p(1)$ uniformly in $x_0 \\in \\mathcal{X}$ and $\\vartheta \\in \\mathcal{N}_{\\varepsilon }$ because (i) it follows from $\\kappa \\mathbb{I}\\{l_{\\vartheta k x_0} > \\kappa\\} \\leq l_{\\vartheta k x_0}$ that $P_n(\\mathbb{I}\\{l_{\\vartheta k x_0} > \\kappa\\} ) \\leq \\kappa^{-1} P_n(l_{\\vartheta k x_0})$ and $\\sup_{x_0 \\in \\mathcal{X}} \\sup_{\\vartheta \\in \\mathcal{N}_{\\varepsilon }}|P_n(l_{\\vartheta k x_0})-1| = o_p(1)$ from Assumptions \\ref{assn_a3}(d)--(g), and (ii) $P_n(\\sup_{\\pi \\in \\Theta_\\pi}|s_{\\pi k}|^{2+\\delta})=O_p(1)$ from Assumption \\ref{assn_a3}(a). Consequently, $\\mathbb{P}( \\sup_{x_0 \\in \\mathcal{X}} \\sup_{\\vartheta \\in \\mathcal{N}_{\\varepsilon}}P_n(\\mathbb{I}\\{l_{\\vartheta k x_0} > \\kappa\\}|s_{\\pi k}|^2 ) \\geq \\lambda_{\\min}\/4 ) \\to 0$ as $\\kappa \\to \\infty$, and hence we can write (\\ref{pn_lower_2}) as $P_n (h_{\\vartheta k x_0}^2) \\geq \\tau (1+o_p(1)) t_\\vartheta' \\mathcal{I}_{\\pi} t_\\vartheta + O_p(|t_\\vartheta|^2|\\psi-\\psi^*|) + O_p(n^{-1})$ for $\\tau:=(\\sqrt{\\kappa}+1)^{-2}\/2>0$ by taking $\\kappa$ sufficiently large. 
Because $\\sqrt{n} \\nu_n(l_{\\vartheta k x_0}-1) = \\sqrt{n} t_\\vartheta'\\nu_n (s_{\\pi k}) + O_p(1)$ from Assumptions \\ref{assn_a3}(d) and (e), it follows from (\\ref{Ln_ineq}) that, uniformly in $x_0 \\in \\mathcal{X}$ and $\\vartheta \\in \\mathcal{N}_{\\varepsilon}$,\n\\begin{equation}\\label{rk_lower2}\n-\\eta \\leq \\ell_n(\\psi,\\pi, x_0)-\\ell_n(\\psi^*,\\pi, x_0) \\leq \\sqrt{n} t_\\vartheta' \\nu_n (s_{\\pi k}) - \\tau (1+o_p(1)) n t_\\vartheta' \\mathcal{I}_\\pi t_\\vartheta + O_p(n |t_\\vartheta|^2 |\\psi-\\psi^*| ) + O_p(1).\n\\end{equation}\nLet $T_{n}:= \\mathcal{I}_{\\pi}^{1\/2}\\sqrt{n} t_\\vartheta$. From (\\ref{rk_lower2}), Assumptions \\ref{assn_a3}(b) and (g), and the fact $\\psi-\\psi^* \\to 0$ if $t_\\vartheta \\to 0$, we obtain the following result: for any $\\delta>0$, there exist $\\varepsilon>0$ and $M,n_0<\\infty$ such that\n\\begin{equation}\n\\mathbb{P}\\left(\\inf_{x_0 \\in \\mathcal{X}} \\inf_{\\vartheta \\in \\mathcal{N}_{\\varepsilon}} \\left( |T_n| M - (\\tau\/2) |T_n|^2 + M \\right) \\geq 0\\right) \\geq 1-\\delta,\\ \\text{ for all }\\ n > n_0.\n\\end{equation}\nRearranging the terms inside $\\mathbb{P}(\\cdot)$ gives $\\sup_{x_0 \\in \\mathcal{X}} \\sup_{\\vartheta \\in \\mathcal{N}_{\\varepsilon}} (|T_{n}|-(M\/\\tau))^2 \\leq 2M\/\\tau+(M\/\\tau)^2$. Taking its square root gives $\\mathbb{P}(\\sup_{x_0 \\in \\mathcal{X}} \\sup_{\\vartheta \\in \\mathcal{N}_{\\varepsilon}}|T_{n}| \\leq M_1) \\geq 1-\\delta$ for a constant $M_1$, and part (a) follows. Part (b) follows from part (a) and Proposition \\ref{Ln_thm1}.\n\\end{proof}\n\n\\begin{proof}[Proof of Corollary \\ref{cor_appn}]\nBecause the logarithm is monotone, we have $\\inf_{x_0 \\in \\mathcal{X}} \\ell_n(\\psi,\\pi,x_0) \\leq \\ell_n(\\psi,\\pi,\\xi) \\leq \\sup_{x_0 \\in \\mathcal{X}} \\ell_n(\\psi,\\pi,x_0)$. Part (a) then follows from Proposition \\ref{Ln_thm1}. 
For part (b), note that we have $\\vartheta \\in A_{n\\varepsilon }(\\xi,\\eta)$ only if $\\vartheta \\in A_{n\\varepsilon }(x_0,\\eta)$ for some $x_0$. Consequently, part (b) follows from Proposition \\ref{Ln_thm2}.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{proposition-ell}]\nFirst, observe that parts (a) and (b) hold when the right-hand side is replaced with $K_j(k+m)^7\\rho^{\\lfloor(k+m-1)\/24\\rfloor}$ and $K_j(k+m)^7\\rho^{\\lfloor (k+m-1)\/1340 \\rfloor}$ by using Lemmas \\ref{ell_lambda} and \\ref{lemma-bound-1} and noting that $q_1=6q_0, q_2=5q_0, q_3=4q_0,\\ldots, q_6=q_0$. For example, when $j=2$, we can bound $\\sup_{x\\in\\mathcal{X}}\\sup_{\\vartheta\\in \\mathcal{N}^*}|\\nabla^2\\ell_{k,m,x}(\\vartheta) - \\nabla^2\\overline\\ell_{k,m}(\\vartheta)|$ by using $\\nabla^2 \\ell_{k,m,x}(\\vartheta)=\\Delta^2_{1,k,m,x}(\\vartheta)+\\Delta^{1,1}_{2,k,m,x}(\\vartheta)$, $\\sup_{x\\in \\mathcal{X}} \\sup_{\\vartheta\\in \\mathcal{N}^*} |\\Delta^{\\mathcal{I}(j)}_{j,k,m,x}(\\vartheta)-\\overline\\Delta^{\\mathcal{I}(j)}_{j,k,m}(\\vartheta)|\\leq {K}_{\\mathcal{I}(j)}(k+m)^7 \\rho^{\\lfloor (k+m-1)\/24 \\rfloor}$, ${K}_{\\mathcal{I}(j)} \\in L^{r_{\\mathcal{I}(j)}}(\\mathbb{P}_{\\vartheta^*})$, $r_{(2)} = q_{2} = 5q_0$, and $r_{(1,1)} = q_{1}\/2 = 3q_0$. Second, letting $\\rho_* = \\rho^{1\/1340}\\mathbb{I}\\{\\rho>0\\}$ and redefining $K_j$ gives parts (a) and (b). Parts (c) and (d) follow from Lemmas \\ref{ell_lambda} and \\ref{lemma-bound-1}.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{lemma-omega}]\nFirst, we prove part (a). The proof of part (b) is essentially the same as that of part (a) and hence omitted. 
Observe that\n\\begin{equation*} \n\\begin{aligned}\n\\nabla l^j_{k,m,x}(\\vartheta)- \\nabla \\overline l^j_{k,m}(\\vartheta)& = \\Psi^j_{k,m,x}(\\vartheta)\\left(\\frac{ p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},X_{-m}=x)}{ p_{\\vartheta^*}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},X_{-m}=x)}-\\frac{ \\overline p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1})}\n{\\overline p_{\\vartheta^*}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1})}\\right) \\\\\n&\\quad +\\frac{\\overline p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1})}\n{\\overline p_{\\vartheta^*}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1})}\\left(\\Psi^j_{k,m,x}(\\vartheta)- \\overline{\\Psi}^j_{k,m}(\\vartheta)\n\\right),\n\\end{aligned}\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\Psi^j_{k,m,x}(\\vartheta) := \\frac{\\nabla^j p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},X_{-m}=x)}{ p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},X_{-m}=x)}, \\quad \\overline\\Psi^j_{k,m}(\\vartheta) := \\frac{\\nabla^j \\overline p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1})}{\\overline p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1})}.\n\\end{equation*}\nIn view of Lemma \\ref{lemma-ratio} and H\\\"older's inequality, part (a) holds if, for $j=1,\\ldots,6$, there exist random variables $(\\{A_{j,k}\\}_{k=1}^n, B_j )\\in L^{q_0}(\\mathbb{P}_{\\vartheta^*})$ and $\\rho_* \\in (0,1)$ such that, for all $1 \\leq k \\leq n$ and $m \\geq 0$, \n\\begin{equation} \\label{Psi_bound}\n(A) \\quad \\sup_{m \\geq 0}\\sup_{x\\in\\mathcal{X}}\\sup_{\\vartheta\\in \\mathcal{N}^*} | \\Psi^j_{k,m,x}(\\vartheta)| \\leq A_{j,k}, \\quad (B) \\quad \\sup_{x\\in\\mathcal{X}}\\sup_{\\vartheta\\in \\mathcal{N}^*} |\\Psi^j_{k,m,x}(\\vartheta) - \\overline\\Psi^j_{k,m}(\\vartheta)| \\leq B_j (k+m)^{ 7 }\\rho_*^{k+m-1}.\n\\end{equation}\nWe show (A) and (B). 
From (\\ref{der_formula}) we have, suppressing $(\\vartheta)$ and superscript $1$ from $\\nabla^1 \\ell_{k,m,x}$,\n\\begin{equation*}\n\\begin{aligned}\n\\Psi^1_{k,m,x}&= \\nabla\\ell_{k,m,x},\\quad \\Psi^2_{k,m,x}= \\nabla^2\\ell_{k,m,x} +(\\nabla\\ell_{k,m,x} )^2,\\\\\n\\Psi^3_{k,m,x}&=\\nabla^3\\ell_{k,m,x} +3\\nabla^2\\ell_{k,m,x}\n\\nabla \\ell_{k,m,x}\n+(\\nabla \\ell_{k,m,x})^3,\\\\\n\\Psi^4_{k,m,x}&=\\nabla^4\\ell_{k,m,x}+4 \\nabla^3 \\ell_{k,m,x} \\nabla \\ell_{k,m,x}+3(\\nabla^2\\ell_{k,m,x})^2 +6 \\nabla^2 \\ell_{k,m,x} (\\nabla\\ell_{k,m,x})^2+ (\\nabla\\ell_{k,m,x})^4,\\\\\n\\Psi^5_{k,m,x} &= \\nabla^5 \\ell_{k,m,x}+5 \\nabla^4 \\ell_{k,m,x} \\nabla \\ell_{k,m,x}+10\\nabla^3 \\ell_{k,m,x} \\nabla^2\\ell_{k,m,x}+10\\nabla^3 \\ell_{k,m,x}(\\nabla\\ell_{k,m,x})^2 \\\\\n& \\quad +15(\\nabla^2 \\ell_{k,m,x})^2 \\nabla \\ell_{k,m,x} +10\\nabla^2\\ell_{k,m,x}(\\nabla \\ell_{k,m,x})^3+ (\\nabla\\ell_{k,m,x})^5,\\\\\n\\Psi^6_{k,m,x} & = \\nabla^6 \\ell_{k,m,x} + 6 \\nabla^5 \\ell_{k,m,x} \\nabla \\ell_{k,m,x} + 15 \\nabla^4 \\ell_{k,m,x} \\nabla^2 \\ell_{k,m,x} + 15\\nabla^4 \\ell_{k,m,x} (\\nabla \\ell_{k,m,x})^2 \\\\\n& \\quad +10( \\nabla^3 \\ell_{k,m,x} )^2 +60 \\nabla^3 \\ell_{k,m,x} \\nabla^2 \\ell_{k,m,x} \\nabla \\ell_{k,m,x} +20\\nabla^3\\ell_{k,m,x} (\\nabla \\ell_{k,m,x})^3 \\\\\n& \\quad +15( \\nabla^2 \\ell_{k,m,x} )^3 +45(\\nabla^2 \\ell_{k,m,x})^2( \\nabla \\ell_{k,m,x})^2 +15\\nabla^2 \\ell_{k,m,x}(\\nabla \\ell_{k,m,x})^4 + (\\nabla\\ell_{k,m,x})^6,\n\\end{aligned}\n\\end{equation*}\nand $\\overline\\Psi^j_{k,m}$ is written analogously with $\\nabla^j \\overline\\ell_{k,m}$ replacing $\\nabla^j \\ell_{k,m,x}$. Therefore, (A) of (\\ref{Psi_bound}) follows from Proposition \\ref{proposition-ell}(c) and H\\\"older's inequality. 
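These expressions follow a common pattern: since $\\Psi^j_{k,m,x} = \\nabla^j p_{\\vartheta}\/p_{\\vartheta}$ (with the conditioning arguments suppressed) and $\\nabla \\ell_{k,m,x} = \\nabla p_{\\vartheta}\/p_{\\vartheta}$, each $\\Psi^j_{k,m,x}$ is the complete exponential Bell polynomial in $(\\nabla \\ell_{k,m,x},\\ldots,\\nabla^j \\ell_{k,m,x})$, and in the scalar case the expressions are generated by the recursion\n\\begin{equation*}\n\\Psi^{j+1}_{k,m,x} = \\nabla \\Psi^{j}_{k,m,x} + \\Psi^{j}_{k,m,x}\\, \\nabla \\ell_{k,m,x}, \\qquad \\Psi^{0}_{k,m,x} = 1,\n\\end{equation*}\nwhich follows from $\\nabla(\\nabla^j p_{\\vartheta}\/p_{\\vartheta}) = \\nabla^{j+1} p_{\\vartheta}\/p_{\\vartheta} - (\\nabla^j p_{\\vartheta}\/p_{\\vartheta})(\\nabla p_{\\vartheta}\/p_{\\vartheta})$; for instance, the recursion yields $\\Psi^2_{k,m,x} = \\nabla^2 \\ell_{k,m,x} + (\\nabla \\ell_{k,m,x})^2$, as displayed above. 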
(B) of (\\ref{Psi_bound}) follows from Proposition \\ref{proposition-ell}(a)(c), the relations $ab-cd = a(b-d)+(a-c)d$ and $a^n - b^n = (a-b)\\sum_{i=0}^{n-1} a^{n-1-i}b^i$, and H\\"older's inequality.\n\nFor part (c), the bound of $\\nabla l^j_{k,m,x}(\\vartheta)$ follows from writing $\\nabla l^j_{k,m,x}(\\vartheta) = [p_{\\vartheta}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},X_{-m}=x)\/ p_{\\vartheta^*}(Y_k|\\overline{\\bf{Y}}_{-m}^{k-1},X_{-m}=x) ] \\Psi^j_{k,m,x}(\\vartheta)$ and using (\\ref{Psi_bound}) and Lemma \\ref{lemma-ratio}. $\\nabla \\overline l^j_{k,m}(\\vartheta)$ is bounded by a similar argument. Part (d) follows from parts (a)--(c), the completeness of $L^{q}(\\mathbb{P}_{\\vartheta^*})$, Markov's inequality, and the Borel--Cantelli lemma. Part (e) follows from combining parts (a) and (b) and letting $m' \\to \\infty$ in part (b).\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{P-consist}]\nThe consistency of $\\hat \\vartheta_1$ follows from Theorem 2.1 of \\citet{neweymcfadden94hdbk} because (i) $\\vartheta_1^*$ uniquely maximizes $\\mathbb{E}_{\\vartheta_1^*} \\log f(Y_1|\\overline{\\bf Y}_0,W_1;\\gamma,\\theta)$ from Assumption \\ref{A-consist}(c), and (ii) $\\sup_{\\vartheta_1 \\in \\Theta_1}|n^{-1}\\ell_{0,n}(\\vartheta_1) - \\mathbb{E}_{\\vartheta_1^*} \\log f(Y_1|\\overline{\\bf Y}_0,W_1;\\gamma,\\theta)| \\overset{p}{\\rightarrow} 0$ and $\\mathbb{E}_{\\vartheta_1^*} \\log f(Y_1|\\overline{\\bf Y}_0,W_1;\\gamma,\\theta)$ is continuous because $(Y_k,W_k)$ is strictly stationary and ergodic from Assumption \\ref{assn_a1}(e) and $\\mathbb{E}_{\\vartheta_1^*} \\sup_{\\vartheta_1 \\in \\Theta_1}|\\log f(Y_1|\\overline{\\bf Y}_0,W_1;\\gamma,\\theta)| <\\infty$ from Assumption \\ref{assn_a2}(c).\n\nWe proceed to prove the consistency of $\\hat\\vartheta_2$. 
Define, similarly to pp.\\ 2265--2266 in DMR, $\\Delta_{k,m,x}(\\vartheta_2):= \\log p_{\\vartheta_{2}}(Y_k |\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^k,X_{-m}=x)$, $\\Delta_{k,m}(\\vartheta_2):= \\log p_{\\vartheta_{2}}(Y_k |\\overline{\\bf{Y}}_{-m}^{k-1},{\\bf W}_{-m}^k)$, $\\Delta_{k,\\infty}(\\vartheta_2):=\\lim_{m\\to \\infty} \\Delta_{k,m}(\\vartheta_2)$, and $\\ell(\\vartheta_2):=\\mathbb{E}_{\\vartheta_1^*}[\\Delta_{0,\\infty}(\\vartheta_2)]$. Observe that Lemmas 3 and 4 as well as Proposition 2 of DMR hold for our $\\{\\Delta_{k,m,x}(\\vartheta_2),\\Delta_{k,m}(\\vartheta_2),\\Delta_{k,\\infty}(\\vartheta_2), \\ell_n(\\vartheta_2,x_0),\\ell(\\vartheta_2)\\}$ under our assumptions because (i) our Assumption \\ref{assn_a1}(e) can replace their Assumption (A2) in the proof of their Lemmas 3 and 4 and Proposition 2, and (ii) our Lemma \\ref{x_diff}(a) extends Corollary 1 of DMR to accommodate the $W_k$'s. It follows that (i) $\\ell(\\vartheta_2)$ is maximized if and only if $\\vartheta_2 \\in \\Gamma^*$ from Assumption \\ref{A-consist}(d) because $\\mathbb{E}_{\\vartheta^*_{1}}[ \\log p_{\\vartheta_{2}}(Y_1 |\\overline{\\bf{Y}}_{-m}^0,{\\bf W}_{-m}^1)]$ converges to $\\ell(\\vartheta_2)$ uniformly in $\\vartheta_2$ as $m \\to \\infty$ from Lemma 3 of DMR and the dominated convergence theorem, (ii) $\\ell(\\vartheta_2)$ is continuous from Lemma 4 of DMR, and (iii) $\\sup_{\\xi_2}\\sup_{\\vartheta_2 \\in \\Theta_2} |n^{-1} \\ell_n(\\vartheta_2,\\xi_2) - \\ell(\\vartheta_2)| \\overset{p}{\\rightarrow} 0$ holds from Proposition 2 of DMR and $\\ell_n(\\vartheta_2,\\xi_2) \\in [\\min_{x_0} \\ell_n(\\vartheta_2,x_0), \\max_{x_0} \\ell_n(\\vartheta_2,x_0)]$. 
Consequently, $\\inf_{\\vartheta_2\\in\\Gamma^*}|\\hat\\vartheta_2-\\vartheta_2|\\overset{p}{\\rightarrow} 0$ follows from Theorem 2.1 of \\citet{neweymcfadden94hdbk} with an adjustment for the fact that the maximizer of $\\ell(\\vartheta_2)$ is a set, not a singleton.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{P-quadratic}]\n\nWe prove the stated result by applying Corollary \\ref{cor_appn} to $l_{\\vartheta k x_0} -1$ with $l_{\\vartheta k x_0}$ defined in (\\ref{density_ratio}). Because the first and second derivatives of $l_{\\vartheta k x_0} -1$ play the role of the score, we expand $l_{\\vartheta k x_0} -1$ with respect to $\\psi$ up to the third order. Let $q = \\dim(\\psi)$. For a vector $a$, define $a^{\\otimes p} := a \\otimes a\\otimes \\cdots \\otimes a$ ($p$ times) and $\\nabla_{a^{\\otimes p}} :=\\nabla_{a} \\otimes \\nabla_{a}\\otimes \\cdots \\otimes \\nabla_{a}$ ($p$ times). Recall that the $(p+1)$-th order Taylor expansion of $f(x)$ with $x \\in \\mathbb{R}^q$ around $x=x^*$ is given by\n\\begin{align*}\nf(x) & = f(x^*) + \\sum_{j=1}^p \\frac{1}{j!} \\nabla_{(x^{\\otimes j})'} f(x^*) (x-x^*)^{\\otimes j} + \\frac{1}{(p+1)!} \\nabla_{(x^{\\otimes (p+1)})'} f(\\overline{x}) (x-x^*)^{\\otimes (p+1)},\n\\end{align*}\nwhere $\\overline{x}$ lies between $x$ and $x^*$ and may differ across the elements of $\\nabla_{x^{\\otimes (p+1)}} f(\\overline{x})$.\n\nChoose $\\epsilon>0$ sufficiently small so that $\\mathcal{N}_{\\epsilon}$ is a subset of $\\mathcal{N}^*$ in Assumption \\ref{assn_a4}. For $m \\geq 0$ and $j =1,2,\\ldots$, let \n\\[\n\\Lambda^j_{k,m,x_{-m}}(\\psi,\\pi):= \\frac{\\nabla_{\\psi^{\\otimes j}} p_{\\psi \\pi} (Y_k| \\overline{{\\bf Y}}_{-m}^{k-1}, x_{-m})}{ j! p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{-m}^{k-1}, x_{-m})}, \\quad \\Lambda^j_{k,m}(\\psi, \\pi):= \\frac{\\nabla_{\\psi^{\\otimes j}} p_{\\psi \\pi} (Y_k| \\overline{{\\bf Y}}_{-m}^{k-1})}{ j! 
p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{-m}^{k-1})},\n\\]\nand $\\Delta \\psi:= \\psi -\\psi^*$. With this notation, expanding $l_{\\vartheta k x_0} -1$ three times around $\\psi^*$ while fixing $\\pi$ gives, with $\\overline \\psi \\in [\\psi, \\psi^*]$, \n\\begin{align} \nl_{\\vartheta k x_0} -1 &= \\Lambda^1_{k,0,x_0}(\\psi^*,\\pi)' \\Delta \\psi + \\Lambda^2_{k,0,x_0}(\\psi^*,\\pi)' (\\Delta \\psi)^{\\otimes 2} + \\Lambda^3_{k,0,x_0}(\\overline \\psi,\\pi)' (\\Delta \\psi)^{\\otimes 3} \\nonumber \\\\\n & = \\Lambda^1_{k,0}(\\psi^*,\\pi)' \\Delta \\psi + \\Lambda^2_{k,0}(\\psi^*,\\pi)' (\\Delta \\psi)^{\\otimes 2} + \\Lambda^3_{k,0}(\\overline \\psi,\\pi)' (\\Delta \\psi)^{\\otimes 3} + u_{kx_0}(\\psi,\\pi), \\label{lk_expansion}\n\\end{align}\nwhere $\\overline \\psi$ may differ across the elements of $\\Lambda^3_{k,0,x_0}(\\overline \\psi,\\pi)$, and $u_{kx_0}(\\psi,\\pi) := \\sum_{j=1}^2 [\\Lambda^j_{k,0,x_0}(\\psi^*,\\pi)- \\Lambda^j_{k,0}(\\psi^*,\\pi)]' (\\Delta \\psi)^{\\otimes j} + [\\Lambda^3_{k,0,x_0}(\\overline \\psi,\\pi) -\\Lambda^3_{k,0}(\\overline \\psi,\\pi)]' (\\Delta \\psi)^{\\otimes 3}$.\n\nNoting that $\\nabla_\\lambda \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})=0$ and $\\nabla_{\\lambda\\eta'} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})=0$ from (\\ref{d1p}), we may rewrite (\\ref{lk_expansion}) as\n\\begin{equation} \\label{lk_2}\nl_{\\vartheta k x_0} -1 = t(\\psi,\\pi)' s_{\\varrho k} + r_{k,0}(\\psi,\\pi) + u_{kx_0}(\\psi,\\pi), \n\\end{equation}\nwhere $s_{\\varrho k}$ is defined in (\\ref{score}), $r_{k,0}(\\psi,\\pi) := \\widetilde \\Lambda_{k,0}(\\pi)' (\\Delta \\eta)^{\\otimes 2} + \\Lambda^3_{k,0}(\\overline \\psi,\\pi)' (\\Delta \\psi)^{\\otimes 3}$, and $\\widetilde \\Lambda_{k,0}(\\pi)$ denotes the part of $\\Lambda^2_{k,0}(\\psi^*,\\pi)$ corresponding to $(\\Delta \\eta)^{\\otimes 2}$.\n\nFor $m \\geq 0$, define $v_{k, m}(\\vartheta):= (\\Lambda^1_{k, m}( \\psi,\\pi)', 
\\Lambda^2_{k, m}( \\psi,\\pi)',\\Lambda^3_{k, m}( \\psi,\\pi)')'$, and define $v_{k,\\infty}(\\vartheta):= \\lim_{m\\rightarrow \\infty} v_{k,m}(\\vartheta)$. In order to apply Corollary \\ref{cor_appn} to $l_{\\vartheta k x_0} -1$, we first show\n\\begin{align} \n&\\sup_{\\vartheta \\in \\mathcal{N}_\\epsilon}\\left|P_n [v_{k,0}(\\vartheta)v_{k,0}(\\vartheta)'] - \\mathbb{E}_{\\vartheta^*}[v_{k,\\infty}(\\vartheta)v_{k,\\infty}(\\vartheta)']\\right| = o_p(1), \\label{uniform_lln} \\\\\n&\\nu_n(v_{k,0}(\\vartheta)) \\Rightarrow W(\\vartheta), \\label{weak_cgce}\n\\end{align}\nwhere $W(\\vartheta)$ is a mean-zero continuous Gaussian process with $\\mathbb{E}_{\\vartheta^*}[W(\\vartheta_1)W(\\vartheta_2)'] = \\mathbb{E}_{\\vartheta^*}[v_{k,\\infty}(\\vartheta_1)v_{k,\\infty}(\\vartheta_2)']$. (\\ref{uniform_lln}) holds because $\\sup_{\\vartheta \\in \\mathcal{N}_\\epsilon}P_n[v_{k,0}(\\vartheta)v_{k,0}(\\vartheta)' - v_{k,\\infty}(\\vartheta)v_{k,\\infty}(\\vartheta)'] = o_p(1)$ from Proposition \\ref{lemma-omega}, and $v_{k,\\infty}(\\vartheta)v_{k,\\infty}(\\vartheta)'$ satisfies a uniform law of large numbers (Lemma 2.4 and footnote 18 of \\citet{neweymcfadden94hdbk}) because $v_{k,\\infty}(\\vartheta)$ is continuous in $\\vartheta$ from the continuity of $\\nabla^j l_{k,m,x}(\\vartheta)$ and Proposition \\ref{lemma-omega}, and $\\mathbb{E}_{\\vartheta^*}\\sup_{\\vartheta \\in \\mathcal{N}_{\\epsilon}}|v_{k,\\infty}(\\vartheta)|^{2} < \\infty$ from Proposition \\ref{lemma-omega}. 
(\\ref{weak_cgce}) holds because $\\sup_{\\vartheta \\in \\mathcal{N}_\\epsilon}\\nu_n(v_{k,0}(\\vartheta) - v_{k,\\infty}(\\vartheta)) = o_p(1)$ from Proposition \\ref{lemma-omega} and $\\nu_n(v_{k,\\infty}(\\vartheta)) \\Rightarrow W(\\vartheta)$ from Theorem 10.2 of \\citet{pollard90book} because (i) the space of $\\vartheta$ is totally bounded, (ii) the finite dimensional distributions of $\\nu_n(v_{k,\\infty}(\\cdot))$ converge to those of $W(\\cdot)$ from a martingale CLT because $v_{k,\\infty}(\\vartheta)$ is a stationary $L^2(\\mathbb{P}_{\\vartheta^*})$ martingale difference sequence for all $\\vartheta\\in\\mathcal{N}_{\\epsilon}$ from Proposition \\ref{lemma-omega}, and (iii) $\\{\\nu_n(v_{k,\\infty}(\\cdot)): n\\geq 1\\}$ is stochastically equicontinuous from Theorem 2 of \\citet{hansen96et} because $v_{k,\\infty}(\\vartheta)$ is Lipschitz continuous in $\\vartheta$ and both $v_{k,\\infty}(\\vartheta)$ and the Lipschitz coefficient are in $L^q(\\mathbb{P}_{\\vartheta^*})$ with $q > \\text{dim}(\\vartheta)$ from Proposition \\ref{lemma-omega}.\n\nWe proceed to show that the terms on the right-hand side of (\\ref{lk_2}) satisfy Assumptions \\ref{assn_a3}(a)--(g). Observe that $t(\\psi,\\pi) = 0$ if and only if $\\psi=\\psi^*$. First, $s_{\\varrho k}$ satisfies Assumptions \\ref{assn_a3}(a), (b), and (g) by Proposition \\ref{lemma-omega}, (\\ref{uniform_lln}), (\\ref{weak_cgce}), and Assumption \\ref{A-nonsing1}. Second, $r_{k,0}(\\psi,\\pi)$ satisfies Assumptions \\ref{assn_a3}(c) and (d) from Proposition \\ref{lemma-omega} and (\\ref{weak_cgce}). Third, $u_{k x_0}(\\psi,\\pi)$ satisfies Assumptions \\ref{assn_a3}(e) and (f) from Proposition \\ref{lemma-omega}(c). Therefore, the stated result follows from Corollary \\ref{cor_appn}(b).\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{P-LR}] \nThe proof is similar to that of Proposition 3 of \\citet{kasaharashimotsu15jasa}. 
Let $t_{\\eta} := \\eta - \\eta^*$ and $t_{\\lambda} := \\alpha(1-\\alpha)v(\\lambda)$, so that $t(\\psi,\\pi) = (t_{\\eta}',t_{\\lambda}')'$. Let $\\hat \\psi_\\pi := \\arg\\max_{\\psi \\in \\Theta_\\psi} \\ell_n(\\psi,\\pi,\\xi)$ denote the MLE of $\\psi$, and split $t(\\hat\\psi_\\pi,\\pi)$ as $t(\\hat\\psi_\\pi,\\pi)= (\\hat t_{\\eta}',\\hat t_{\\lambda}')'$, where we suppress the dependence of $\\hat t_{\\eta}$ and $\\hat t_{\\lambda}$ on $\\pi$. Define $G_{\\varrho n} := \\nu_n (s_{\\varrho k})$. Let\n\\[\nG_{\\varrho n} = \\begin{bmatrix}\nG_{\\eta n} \\\\\nG_{\\lambda \\varrho n}\n\\end{bmatrix}, \\quad\n\\begin{aligned}\nG_{\\lambda. \\eta \\varrho n} &:= G_{\\lambda \\varrho n} - \\mathcal{I}_{\\lambda \\eta\\varrho}\\mathcal{I}_{\\eta }^{-1}G_{\\eta n}, \\quad Z_{\\lambda. \\eta \\varrho n} := \\mathcal{I}_{\\lambda.\\eta \\varrho}^{-1}G_{\\lambda.\\eta \\varrho n},\\\\\nt_{\\eta.\\lambda \\varrho} &:= t_{\\eta} + \\mathcal{I}_{\\eta }^{-1}\\mathcal{I}_{\\eta\\lambda \\varrho} t_{\\lambda}.\n\\end{aligned}\n\\]\nThen, we can write (\\ref{ln_appn}) as\n\\begin{equation} \\label{LR_appn}\n\\sup_{\\xi \\in \\Xi}\\sup_{\\vartheta \\in A_{n\\varepsilon c}(\\xi) } \\left| 2 \\left[\\ell_n(\\psi,\\pi,\\xi) - \\ell_n(\\psi^*,\\pi,\\xi) \\right] - A_n(\\sqrt{n} t_{\\eta.\\lambda \\varrho}) - B_{\\varrho n}(\\sqrt{n} t_{\\lambda}) \\right| =o_p(1),\n\\end{equation}\nwhere \n\\begin{equation} \\label{B_pi}\n\\begin{aligned}\nA_n(t_{\\eta.\\lambda \\varrho}) & = 2t_{\\eta.\\lambda \\varrho}'G_{\\eta n} - t_{\\eta.\\lambda \\varrho}'\\mathcal{I}_{\\eta}t_{\\eta.\\lambda \\varrho}, \\\\\nB_{\\varrho n}(t_{\\lambda}) &= 2t_{\\lambda}' G_{\\lambda.\\eta \\varrho n} - t_{\\lambda}' \\mathcal{I}_{\\lambda.\\eta \\varrho} t_{\\lambda} = Z_{\\lambda.\\eta \\varrho n}' \\mathcal{I}_{\\lambda.\\eta \\varrho} Z_{\\lambda.\\eta \\varrho n}- (t_{\\lambda} - Z_{\\lambda.\\eta \\varrho n})'\\mathcal{I}_{\\lambda.\\eta \\varrho}(t_{\\lambda} - Z_{\\lambda.\\eta \\varrho 
n}).\n\\end{aligned}\n\\end{equation}\nObserve that $2[\\ell_{0n}(\\hat\\vartheta_0) - \\ell_{0n}(\\vartheta_0^*) ] = \\max_{t_\\eta}[2 \\sqrt{n} t_{\\eta}' G_{\\eta n} - n t_{\\eta}' \\mathcal{I}_\\eta t_{\\eta}] + o_p(1) = \\max_{t_{\\eta.\\lambda \\varrho}} A_n(\\sqrt{n} t_{\\eta.\\lambda \\varrho}) + o_p(1)$ from applying Corollary \\ref{cor_appn} to $\\ell_{0n}(\\vartheta_0)$ and noting that the set of possible values of both $\\sqrt{n} t_\\eta$ and $\\sqrt{n}{t}_{\\eta.\\lambda \\varrho}$ approaches $\\mathbb{R}^{\\dim(\\eta)}$. In conjunction with (\\ref{LR_appn}), we obtain, uniformly in $\\pi\\in \\Theta_{\\pi}$,\n\\begin{equation} \\label{LR_appn2}\n2[\\ell_n( \\hat \\psi_\\pi , \\pi,\\xi) - \\ell_{0n}(\\hat\\vartheta_0)] = B_{\\varrho n}(\\sqrt{n} \\hat{t}_{\\lambda}) + o_p(1). \n\\end{equation}\nDefine $\\tilde t_\\lambda$ by $B_{\\varrho n}(\\sqrt{n} \\tilde{t}_{\\lambda}) = \\max_{t_\\lambda \\in \\alpha(1-\\alpha)v(\\Theta_\\lambda)} B_{\\varrho n}(\\sqrt{n} t_{\\lambda})$. Then, we have\n\\[\n2[\\ell_n(\\hat \\psi_\\pi, \\pi,\\xi) - \\ell_{0n}(\\hat\\vartheta_0)] = B_{\\varrho n}(\\sqrt{n} \\tilde{t}_{\\lambda}) + o_p(1),\n\\] \nuniformly in $\\pi \\in \\Theta_{\\pi}$ because (i) $B_{\\varrho n}(\\sqrt{n} \\tilde{t}_{\\lambda}) \\geq 2[\\ell_n(\\hat \\psi_\\pi, \\pi,\\xi) - \\ell_{0n}(\\hat\\vartheta_0)] + o_p(1)$ from the definition of $\\tilde{t}_{\\lambda}$ and (\\ref{LR_appn2}), and (ii) $2[\\ell_n(\\hat \\psi_\\pi, \\pi,\\xi) - \\ell_{0n}(\\hat\\vartheta_0)] \\geq B_{\\varrho n}(\\sqrt{n} \\tilde{t}_{\\lambda}) + o_p(1)$ from the definition of $\\hat \\psi$, (\\ref{LR_appn}), and $\\tilde{t}_{\\lambda}=O_p(n^{-1\/2})$.\n\nFinally, the asymptotic distribution of $\\sup_{\\varrho}B_{\\varrho n}(\\sqrt{n} \\tilde{t}_{\\lambda})$ follows from applying Theorem 1(c) of \\citet{andrews01em} to $B_{\\varrho n}(\\sqrt{n} \\tilde{t}_{\\lambda})$. 
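As a consistency check on (\\ref{B_pi}), the two expressions for $B_{\\varrho n}(t_{\\lambda})$ agree: writing $Z := \\mathcal{I}_{\\lambda.\\eta \\varrho}^{-1}G_{\\lambda.\\eta \\varrho n}$, so that $\\mathcal{I}_{\\lambda.\\eta \\varrho}Z = G_{\\lambda.\\eta \\varrho n}$, expanding the quadratic form gives\n\\begin{equation*}\nZ'\\mathcal{I}_{\\lambda.\\eta \\varrho}Z - (t_{\\lambda}-Z)'\\mathcal{I}_{\\lambda.\\eta \\varrho}(t_{\\lambda}-Z) = 2t_{\\lambda}'\\mathcal{I}_{\\lambda.\\eta \\varrho}Z - t_{\\lambda}'\\mathcal{I}_{\\lambda.\\eta \\varrho}t_{\\lambda} = 2t_{\\lambda}'G_{\\lambda.\\eta \\varrho n} - t_{\\lambda}'\\mathcal{I}_{\\lambda.\\eta \\varrho}t_{\\lambda}.\n\\end{equation*}\n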
First, Assumption 2 of \\citet{andrews01em} holds trivially for $B_{\\varrho n}(\\sqrt{n} \\tilde{t}_{\\lambda})$. Second, Assumption 3 of \\citet{andrews01em} is satisfied by (\\ref{weak_cgce}) and Assumption \\ref{A-nonsing1}. Third, Assumption 4 of \\citet{andrews01em} is satisfied by Proposition \\ref{P-quadratic}. Fourth, Assumption $5^*$ of \\citet{andrews01em} holds with $B_T=n^{1\/2}$ because $\\alpha(1-\\alpha)v(\\Theta_\\lambda)$ is locally equal to the cone $v(\\mathbb{R}^q)$ given that $\\alpha(1-\\alpha)>0$ for all $\\alpha\\in\\Theta_{\\alpha}$. Therefore, $\\sup_{\\varrho\\in \\Theta_{\\varrho}}B_{\\varrho n}(\\sqrt{n}\\tilde{t}_{\\lambda}) \\overset{d}{\\rightarrow} \\sup_{\\varrho\\in \\Theta_{\\varrho}}(\\tilde{t}_{\\lambda \\varrho}'{\\mathcal{I}}_{\\lambda.\\eta \\varrho}\\tilde{t}_{\\lambda \\varrho})$ follows from Theorem 1(c) of \\citet{andrews01em}.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{P-quadratic-N1}]\nThe proof is similar to that of Proposition \\ref{P-quadratic}. Define $\\Lambda^j_{k,m,x_{-m}}(\\psi,\\pi)$ and $\\Lambda^j_{k,m}(\\psi, \\pi)$ as in the proof of Proposition \\ref{P-quadratic}. Expanding $l_{\\vartheta k x_0} -1$ five times around $\\psi^*$ similarly to (\\ref{lk_expansion}) while fixing $\\pi$ gives, with $\\overline \\psi \\in [\\psi, \\psi^*]$, \n\\begin{align} \nl_{\\vartheta k x_0} -1 & = \\sum_{j=1}^4 \\Lambda^j_{k,0}(\\psi^*,\\pi)' (\\Delta \\psi)^{\\otimes j} + \\Lambda^5_{k,0}(\\overline \\psi,\\pi)' (\\Delta \\psi)^{\\otimes 5} + u_{kx_0}(\\psi,\\pi), \\label{lk_expansion_normal}\n\\end{align}\nwhere $u_{kx_0}(\\psi,\\pi) := \\sum_{j=1}^4 [\\Lambda^j_{k,0,x_0}(\\psi^*,\\pi)- \\Lambda^j_{k,0}(\\psi^*,\\pi)]'(\\Delta \\psi)^{\\otimes j}+ [\\Lambda^5_{k,0,x_0}(\\overline \\psi,\\pi) - \\Lambda^5_{k,0}(\\overline \\psi,\\pi)]' (\\Delta \\psi)^{\\otimes 5}$.\n\nDefine $\\overline p_{\\psi \\pi k,0} := \\overline p_{\\psi \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})$. 
Observe that $s_{\\varrho k}$ defined in (\\ref{score_normal}) satisfies\n\\begin{equation*}\ns_{\\varrho k}\n:=\n\\begin{pmatrix}\n\\nabla_{\\eta} \\overline p_{\\psi^* \\pi k, 0} \/ \\overline p_{\\psi^* \\pi k, 0} \\\\\n\\zeta_{k, 0}(\\varrho)\/2 \\\\\n\\nabla_{\\lambda_\\mu\\lambda_\\sigma} \\overline p_{\\psi^* \\pi k, 0} \/ \\alpha(1-\\alpha)\\overline p_{\\psi^* \\pi k, 0} \\\\\n\\nabla_{\\lambda_\\sigma^2} \\overline p_{\\psi^* \\pi k, 0} \/ 2\\alpha(1-\\alpha) \\overline p_{\\psi^* \\pi k, 0} \\\\\n\\nabla_{\\lambda_\\beta\\lambda_\\mu} \\overline p_{\\psi^* \\pi k, 0} \/ \\alpha(1-\\alpha)\\overline p_{\\psi^* \\pi k, 0} \\\\\n\\nabla_{\\lambda_\\beta\\lambda_\\sigma} \\overline p_{\\psi^* \\pi k, 0} \/ \\alpha(1-\\alpha)\\overline p_{\\psi^* \\pi k, 0} \\\\\nV(\\nabla_{\\lambda_\\beta \\lambda_\\beta} \\overline p_{\\psi^* \\pi k, 0})\/ \\alpha(1-\\alpha) \\overline p_{\\psi^* \\pi k, 0} \n\\end{pmatrix}.\n\\end{equation*}\nNoting that $\\nabla_\\lambda \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})=0$ and $\\nabla_{\\lambda\\eta'} \\overline p_{\\psi^* \\pi} (Y_k| \\overline{{\\bf Y}}_{0}^{k-1})=0$ from (\\ref{d1p}) and (\\ref{d2p0}), we may rewrite (\\ref{lk_expansion_normal}) as, with $t(\\psi,\\pi)$ and $s_{\\varrho k}$ defined in (\\ref{t-psi2}) and (\\ref{score_normal}),\n\\begin{equation} \\label{lk_2_normal}\nl_{\\vartheta k x_0} -1 = t(\\psi,\\pi)' s_{\\varrho k} + r_{k,0}(\\pi) + u_{kx_0}(\\psi,\\pi),\n\\end{equation} \nwhere $r_{k,0}(\\pi):= \\widetilde \\Lambda_{k,0}(\\pi) '\\tau(\\psi) + \\Lambda^5_{k,0}(\\overline \\psi,\\pi) ' (\\Delta \\psi)^{\\otimes 5 } + \\lambda_\\mu^4[\\nabla_{\\lambda_\\mu^4} \\overline p_{\\psi^* \\pi k,0}- b(\\alpha)\\nabla_{\\lambda_\\sigma^2} \\overline p_{\\psi^* \\pi k,0}]\/4! 
\\overline p_{\\psi^* \\pi k,0}$, $\\tau(\\psi)$ is the vector that collects the elements of $\\{(\\Delta \\psi)^{\\otimes j} \\}_{j=2}^4$ that are not in $t(\\psi,\\pi)$, and $\\widetilde \\Lambda_{k,0}(\\pi)$ denotes the vector of the corresponding elements of $\\{\\Lambda^j_{k,0}(\\psi^*,\\pi)\\}_{j=2}^4$.\n\nThe stated result follows from Corollary \\ref{cor_appn} if the terms on the right-hand side of (\\ref{lk_2_normal}) satisfy Assumption \\ref{assn_a3}. Similarly to the proof of Proposition \\ref{P-quadratic}, define $v_{k, m}(\\vartheta):=(\\zeta_{k,m}(\\varrho), \\Lambda^1_{k, m}( \\psi,\\pi)', \\ldots, \\Lambda^5_{k, m}( \\psi,\\pi)' )'$. \nNote that $\\zeta_{k,m}(\\varrho)$ satisfies the bounds in Proposition \\ref{lemma-omega} because the mean value theorem and $\\nabla_{\\lambda_\\mu^2} \\overline p_{\\psi^* 0 \\alpha} (Y_k| \\overline{{\\bf Y}}_{-m}^{k-1})=0$ give $\\zeta_{k,m}(\\varrho) = [\\nabla_{\\lambda_\\mu^2}\\overline p_{\\psi^* \\varrho \\alpha} (Y_k| \\overline{{\\bf Y}}_{-m}^{k-1}) - \\nabla_{\\lambda_\\mu^2}\\overline p_{\\psi^* 0 \\alpha} (Y_k| \\overline{{\\bf Y}}_{-m}^{k-1})]\/[\\varrho \\alpha(1-\\alpha) \\overline p_{\\psi^* \\varrho \\alpha} (Y_k| \\overline{{\\bf Y}}_{-m}^{k-1})] =\\nabla_\\varrho \\nabla_{\\lambda_\\mu^2}\\overline p_{\\psi^* \\bar \\varrho \\alpha} (Y_k| \\overline{{\\bf Y}}_{-m}^{k-1}) \/[\\alpha(1-\\alpha) \\overline p_{\\psi^* \\bar\\varrho \\alpha} (Y_k| \\overline{{\\bf Y}}_{-m}^{k-1})]$ for $\\bar \\varrho \\in [0,\\varrho]$.\nTherefore, $v_{k,\\infty}(\\vartheta):=\\lim_{m\\rightarrow \\infty} v_{k,m}(\\vartheta)$ is well-defined, and $v_{k,0}(\\vartheta)$ and $v_{k,\\infty}(\\vartheta)$ satisfy (\\ref{uniform_lln})--(\\ref{weak_cgce}) from repeating the argument in the proof of Proposition \\ref{P-quadratic}.\n\nWe proceed to show that the terms on the right-hand side of (\\ref{lk_2_normal}) satisfy Assumption \\ref{assn_a3}. Observe that $t(\\psi,\\pi) = 0$ if and only if $\\psi=\\psi^*$. 
$s_{\\varrho k}$ and $u_{k x_0}(\\psi,\\pi)$ satisfy Assumption \\ref{assn_a3} by noting that $s_{\\varrho k}$ is a linear function of $v_{k,0}(\\vartheta)$ and using the argument in the proof of Proposition \\ref{P-quadratic} with Assumption \\ref{A-nonsing1} replaced by Assumption \\ref{A-nonsing2}. We show that each component of $r_{k,0}(\\pi)$ satisfies Assumptions \\ref{assn_a3}(c) and (d). First, $\\Lambda^5_{k,0}(\\overline \\psi,\\pi)' (\\Delta \\psi)^{\\otimes 5}$ satisfies Assumptions \\ref{assn_a3}(c) and (d) from Proposition \\ref{lemma-omega}, (\\ref{weak_cgce}), and $\\lambda_\\mu^5 = (12\\lambda_\\mu\/b(\\alpha)) [\\lambda_\\sigma^2 + b(\\alpha)\\lambda_\\mu^4\/12] - 12(\\lambda_\\sigma\/b(\\alpha))\\lambda_\\mu \\lambda_\\sigma = O(|\\psi| |t(\\psi,\\pi)|)$. Second, $\\lambda_\\mu^4[\\nabla_{\\lambda_\\mu^4} \\overline p_{\\psi^* \\pi k,0}- b(\\alpha)\\nabla_{\\lambda_\\sigma^2} \\overline p_{\\psi^* \\pi k,0}]\/ \\overline p_{\\psi^* \\pi k,0}$ satisfies Assumptions \\ref{assn_a3}(c) and (d) from Lemma \\ref{lemma_d34}(b). Third, for $\\widetilde \\Lambda_{k,0}(\\pi)'\\tau(\\psi)$, observe that $\\nabla_{\\lambda \\eta^j}\\overline p_{\\psi^* \\pi k,0}=0$ for any $j \\geq 1$ in view of (\\ref{repara_g_hetero})--(\\ref{d3g2}). 
Therefore, $\\widetilde \\Lambda_{k,0}(\\pi)'\\tau(\\psi)$ is written as, with $\\Delta \\eta:= \\eta- \\eta^*$,\n\\begin{equation} \\label{d4r}\n\\widetilde \\Lambda_{k,0}(\\pi)'\\tau(\\psi) = \\nabla_{(\\eta^{\\otimes 2})'} \\overline p_{\\psi^* \\pi k,0} (\\Delta \\eta)^{\\otimes 2} \/ 2!\\overline p_{\\psi^* \\pi k,0} + R_{3k\\vartheta} + R_{4k\\vartheta}, \n\\end{equation}\nwhere $R_{3k\\vartheta}:= \\nabla_{(\\psi^{\\otimes 3})'} \\overline p_{\\psi^* \\pi k,0} (\\Delta \\psi)^{\\otimes 3} \/3!\\overline p_{\\psi^* \\pi k,0}$ and \n\\begin{equation}\\label{r_jk}\nR_{4k\\vartheta}:= [\\nabla_{(\\psi^{\\otimes 4})'} \\overline p_{\\psi^* \\pi k,0} (\\Delta \\psi)^{\\otimes 4} - \\nabla_{\\lambda_\\mu^4} \\overline p_{\\psi^* \\pi k,0} \\lambda_\\mu^4]\/4!\\overline p_{\\psi^* \\pi k,0}.\n\\end{equation}\nThe first term in (\\ref{d4r}) clearly satisfies Assumptions \\ref{assn_a3}(c) and (d). The terms in $R_{3k\\vartheta}$ belong to one of the following three groups: (i) the term associated with $\\lambda_\\sigma^3$, (ii) the term associated with $\\lambda_\\mu^3$, or (iii) the other terms. These terms satisfy Assumptions \\ref{assn_a3}(c) and (d): term (i) is bounded by $|\\psi||t(\\psi,\\pi)|$ because $\\lambda_\\sigma^3 = \\lambda_\\sigma [\\lambda_\\sigma^2+b(\\alpha)\\lambda_\\mu^4\/12] - (\\lambda_\\mu^3b(\\alpha))\\lambda_\\mu \\lambda_\\sigma\/12$; term (ii) is bounded by $\\varrho \\lambda_\\mu^3$ from Lemma \\ref{lemma_d34}(a); and the terms in (iii) are bounded by $|\\psi||t(\\psi,\\pi)|$ because they either contain $\\Delta \\eta$ or a term of the form $\\lambda_\\mu^i\\lambda_\\sigma^j\\lambda_\\beta^k$ with $i+j+k=3$ and $i,j \\neq 3$. Similarly, the terms in $R_{4k\\vartheta}$ satisfy Assumptions \\ref{assn_a3}(c) and (d) because they either contain $\\Delta \\eta$ or a term of the form $\\lambda_\\mu^i\\lambda_\\sigma^j\\lambda_\\beta^k$ with $i+j+k=4$ and $i \\neq 4$. 
This proves that $r_{k,0}(\\pi)$ satisfies Assumptions \\ref{assn_a3}(c) and (d), and the stated result is proven. \n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{P-LR-N1}]\n\nThe proof is similar to the proof of Proposition 3(c) of \\citet{kasaharashimotsu15jasa}. Let $(\\hat{\\psi}_\\alpha, \\hat \\varrho_\\alpha) : = {\\arg\\max}_{(\\psi, \\varrho) \\in \\Theta_{\\psi}\\times\\Theta_{\\varrho}} \\ell_n(\\psi, \\varrho,\\alpha,\\xi)$ denote the MLE of $(\\psi,\\varrho)$ for a given $\\alpha$. Consider the sets $\\Theta_{\\lambda}^1:=\\{\\lambda \\in \\Theta_{\\lambda}:|\\lambda_\\mu| \\geq n^{-1\/8}(\\log n)^{-1} \\}$ and $\\Theta_{\\lambda}^2:=\\{\\lambda \\in \\Theta_{\\lambda}:|\\lambda_\\mu| \\leq n^{-1\/8}(\\log n)^{-1}\\}$, so that $\\Theta_{\\lambda} = \\Theta_{\\lambda}^1 \\cup \\Theta_{\\lambda}^2$. For $j=1,2$, define $(\\hat{\\psi}^j_\\alpha, \\hat \\varrho^j_\\alpha) : = {\\arg\\max}_{(\\psi,\\varrho) \\in \\Theta_{\\psi}\\times \\Theta_{\\varrho}, \\lambda \\in \\Theta_{\\lambda}^j} \\ell_n(\\psi, \\varrho,\\alpha,\\xi)$. Then, uniformly in $\\alpha$,\n\\[\n\\ell_n(\\hat \\psi_\\alpha, \\hat \\varrho_\\alpha, \\alpha,\\xi) = \\max\\left\\{\\ell_n(\\hat\\psi_\\alpha^1, \\hat\\varrho_\\alpha^1,\\alpha,\\xi),\\ell_n(\\hat\\psi_\\alpha^2,\\hat\\varrho_\\alpha^2,\\alpha,\\xi)\\right\\}.\n\\]\nHenceforth, we suppress the dependence of $\\hat \\psi_\\alpha$, $\\hat \\varrho_\\alpha$, etc. on $\\alpha$.\n\nDefine $B_{\\varrho n}(t_{\\lambda}(\\lambda, \\varrho,\\alpha ))$ as in (\\ref{B_pi}) in the proof of Proposition \\ref{P-LR} but using $t(\\psi,\\pi)$ and $s_{\\varrho k}$ defined in (\\ref{t-psi2}) and (\\ref{score_normal}) and replacing $t_{\\lambda}$ in (\\ref{B_pi}) with $t_{\\lambda}(\\lambda, \\varrho,\\alpha )$. Observe that the proof of Proposition \\ref{P-LR} goes through up to (\\ref{LR_appn2}) with the current notation and that $G_{\\varrho n}$ and $\\mathcal{I}_\\varrho$ are continuous in $\\varrho$. 
Further, $\\hat{\\varrho}^1 = O_p(n^{-1\/4}(\\log n)^2)$ because $\\hat \\varrho^1 (\\hat \\lambda_\\mu^1)^2 = O_p(n^{-1\/2})$ from Proposition \\ref{P-quadratic-N1}(a) and $|\\hat{\\lambda}_\\mu^1| \\geq n^{-1\/8}(\\log n)^{-1}$. Consequently, $B_{\\hat \\varrho^1 n}(\\sqrt{n}t_{\\lambda}(\\hat\\lambda^1, \\hat\\varrho^1,\\alpha)) = B_{0 n}(\\sqrt{n}t_{\\lambda}(\\hat\\lambda^1, \\hat\\varrho^1,\\alpha)) + o_p(1)$, and, uniformly in $\\alpha$,\n\\begin{equation} \\label{ln_max}\n2[\\ell_n(\\hat \\psi,\\hat \\varrho,\\alpha,\\xi) - \\ell_{0n}(\\hat\\vartheta_0)] = \\max \\{ B_ { 0 n}(\\sqrt{n} t_{\\lambda}(\\hat\\lambda^1, \\hat\\varrho^1,\\alpha)), B_{ \\hat \\varrho^2 n}(\\sqrt{n} t_{\\lambda}(\\hat\\lambda^2, \\hat\\varrho^2,\\alpha))\\} + o_p(1).\n\\end{equation}\n\nWe proceed to construct parameter spaces $\\tilde\\Lambda_{\\lambda\\alpha}^1$ and $\\tilde\\Lambda_{\\lambda \\alpha\\varrho}^2$ that are locally equal to the cones $\\Lambda_\\lambda^1$ and $\\Lambda_{\\lambda\\varrho}^2$ defined in (\\ref{Lambda-lambda}). Define $c(\\alpha) := \\alpha(1-\\alpha)$, and denote the elements of $t_{\\lambda}(\\hat\\lambda^j,\\hat\\varrho^j,\\alpha)$ corresponding to (\\ref{t_lambda_rho}) by\n\\begin{equation*}\nt_{\\lambda}(\\hat\\lambda^j,\\hat\\varrho^j,\\alpha)\n=\n\\begin{pmatrix}\n\\hat t_{\\varrho\\mu^2}^j\\\\\n\\hat t_{\\mu\\sigma}^j \\\\\n\\hat t_{\\sigma^2}^j\\\\\n\\hat t_{\\beta \\mu}^j\\\\\n\\hat t_{\\beta \\sigma}^j\\\\\n\\hat t_{v(\\beta)}^j\\\\\n\\end{pmatrix}\n:= \n c(\\alpha) \n\\begin{pmatrix}\n \\hat\\varrho^j (\\hat \\lambda_\\mu^j)^2 \\\\\n \\hat\\lambda_\\mu^j \\hat \\lambda_\\sigma^j\\\\\n (\\hat\\lambda_\\sigma^j)^2 + b(\\alpha)(\\hat \\lambda_\\mu^j)^4\/12 \\\\\n \\hat \\lambda_\\beta^j \\hat \\lambda_\\mu^j\\\\\n \\hat \\lambda_\\beta^j \\hat \\lambda_\\sigma^j\\\\\n v(\\hat \\lambda_\\beta^j)\n\\end{pmatrix}. 
\n\\end{equation*}\nNote that $\\hat{\\lambda}_{\\sigma}^1 = O_p(n^{-3\/8} \\log n)$ and $\\hat{\\lambda}_\\beta^1 = O_p(n^{-3\/8} \\log n)$ because $(\\hat{t} _{\\mu \\sigma}^1, \\hat{t} _{\\beta \\mu}^1) = O_p(n^{-1\/2})$ from Proposition \\ref{P-quadratic-N1}(a) and $|\\hat{\\lambda}_\\mu^1| \\geq n^{-1\/8}(\\log n)^{-1}$. Furthermore, $\\hat t_{\\sigma^2}^2 = c(\\alpha) (\\hat \\lambda_\\sigma^2)^2 + o_p(n^{-1\/2})$ because $|\\hat\\lambda_\\mu^2|\\leq n^{-1\/8} (\\log n)^{-1}$. Consequently,\n\\begin{equation} \\label{t_hat_1}\n\\begin{aligned}\n\\hat{t}_{\\beta \\sigma}^1 &= o_p(n^{-1\/2}), \\quad \\hat{t}_{v(\\beta)}^1 = o_p(n^{-1\/2}), \\quad \\hat t_{\\sigma^2}^1 = c(\\alpha) b(\\alpha)(\\hat{\\lambda}_\\mu^1)^4\/12 + o_p(n^{-1\/2}), \\\\\n\\hat t_{\\sigma^2}^2 &= c(\\alpha) (\\hat \\lambda_\\sigma^2)^2 + o_p(n^{-1\/2}).\n\\end{aligned}\n\\end{equation}\nIn view of this, let $t_{\\lambda}(\\lambda, \\varrho,\\alpha):=( t_{\\varrho\\mu^2}, t_{\\mu\\sigma },t_{\\sigma^2 },t_{\\beta\\mu}',t_{\\beta\\sigma}',t_{v(\\beta)}')' \\in \\mathbb{R}^{q_{\\lambda}}$, and consider the following sets:\n\\begin{equation*}\n\\begin{aligned}\n&\\tilde\\Lambda_{\\lambda \\alpha}^1:=\\{ t_{\\lambda}(\\lambda, \\varrho, \\alpha) : t_{\\varrho\\mu^2} = c(\\alpha) \\varrho \\lambda_\\mu^2, t_{\\mu\\sigma }= c(\\alpha)\\lambda_\\mu \\lambda_\\sigma, t_{\\sigma^2} = c(\\alpha) b(\\alpha)\\lambda_\\mu^4\/12, \\\\\n& \\qquad \\qquad t_{\\beta\\mu} = c(\\alpha)\\lambda_{\\beta}\\lambda_{\\mu}, t_{\\beta \\sigma}=0, t_{v(\\beta)} = 0 \\ \\text{for some } (\\lambda,\\varrho) \\in \\Theta_\\lambda\\times\\Theta_{\\varrho}\\}, \\\\\n&\\tilde\\Lambda_{\\lambda\\alpha\\varrho}^2:=\\{ t_{\\lambda}(\\lambda, \\varrho, \\alpha) : t_{\\varrho\\mu^2} = c(\\alpha)\\varrho \\lambda_{\\mu}^2, t_{\\mu\\sigma } = c(\\alpha)\\lambda_\\mu\\lambda_\\sigma, t_{\\sigma^2}= c(\\alpha)\\lambda_\\sigma^2, \\\\\n& \\qquad \\qquad t_{\\beta\\mu }= c(\\alpha)\\lambda_{\\beta} \\lambda_\\mu, t_{\\beta\\sigma }= 
c(\\alpha)\\lambda_{\\beta} \\lambda_\\sigma, t_{v(\\beta)} =c(\\alpha)v(\\lambda_{\\beta})\\ \\text{for some }\\lambda\\in \\Theta_\\lambda \\}. \n\\end{aligned}\n\\end{equation*}\n$\\tilde\\Lambda_{\\lambda \\alpha}^1$ is indexed by $\\alpha$ but does not depend on $\\varrho$ because $B_ {0n}(\\cdot)$ in (\\ref{ln_max}) does not depend on $\\varrho$, whereas $\\tilde\\Lambda_{\\lambda \\alpha\\varrho}^2$ is indexed by both $\\alpha$ and $\\varrho$ because $B_ {\\hat \\varrho^2 n}(\\cdot)$ in (\\ref{ln_max}) depends on $\\hat \\varrho^2$. Define $(\\tilde \\lambda_{\\alpha}^1, \\tilde \\varrho_{\\alpha}^1)$ and $\\tilde \\lambda_{\\alpha\\varrho}^2$ by $B_{0 n}(\\sqrt{n} t_{\\lambda}(\\tilde\\lambda_{\\alpha}^1,\\tilde\\varrho_{\\alpha}^1,\\alpha)) = {\\max}_{t_{\\lambda}(\\lambda,\\varrho,\\alpha) \\in \\tilde\\Lambda_{\\lambda \\alpha}^1} B_{0 n}(\\sqrt{n} t_{\\lambda}(\\lambda,\\varrho,\\alpha))$ and $B_{\\varrho n}(\\sqrt{n} t_{\\lambda}(\\tilde\\lambda_{\\alpha\\varrho}^2,\\varrho,\\alpha)) = {\\max}_{t_{\\lambda}(\\lambda,\\varrho,\\alpha) \\in \\tilde \\Lambda_{\\lambda \\alpha\\varrho}^2} B_{\\varrho n}(\\sqrt{n} t_{\\lambda}(\\lambda,\\varrho,\\alpha))$.\n\nDefine $W_n(\\alpha):=\\max\\{B_{0 n}(\\sqrt{n} t_{\\lambda}(\\tilde\\lambda_{\\alpha}^1,\\tilde\\varrho_{\\alpha}^1,\\alpha)), \\sup_{\\varrho \\in \\Theta_{\\varrho}}B_{\\varrho n}(\\sqrt{n} t_{\\lambda}(\\tilde\\lambda^2_{\\alpha\\varrho},\\varrho,\\alpha)) \\}$. Then we have \n\\begin{equation} \\label{LR_W}\n2[\\ell_n(\\hat \\psi, \\hat \\varrho, \\alpha,\\xi) - \\ell_{0n}(\\hat\\vartheta_0)] = W_n(\\alpha) + o_p(1), \n\\end{equation}\n uniformly in $\\alpha \\in \\Theta_\\alpha$ because (i) $W_n(\\alpha) \\geq 2[\\ell_n(\\hat \\psi, \\hat \\varrho, \\alpha,\\xi) - \\ell_{0n}(\\hat\\vartheta_0)] + o_p(1)$ in view of the definition of $(\\tilde \\lambda_{\\alpha}^1,\\tilde \\varrho_{\\alpha}^1,\\tilde \\lambda_{\\alpha\\varrho}^2)$, (\\ref{ln_max}), and (\\ref{t_hat_1}), and (ii) $2[\\ell_n(\\hat 
\\psi,\\hat \\varrho, \\alpha, \\xi) - \\ell_{0n}(\\hat\\vartheta_0)] \\geq \\\\ 2\\max\\{ \\max_\\eta\\ell_n(\\eta,\\tilde \\lambda^1_\\alpha, \\tilde \\varrho^1_\\alpha,\\alpha,\\xi),\\sup_{\\varrho \\in \\Theta_\\varrho} \\max_\\eta \\ell_n(\\eta,\\tilde \\lambda^2_{\\alpha\\varrho}, \\varrho,\\alpha,\\xi) \\} - 2\\ell_{0n}(\\hat\\vartheta_0)+ o_p(1) = W_n(\\alpha)+ o_p(1)$\nfrom the definition of $(\\hat \\psi,\\hat\\varrho)$.\n\nThe asymptotic distribution of the LRTS follows from applying Theorem 1(c) of \\citet{andrews01em} to $(B_{0 n}(\\sqrt{n} t_{\\lambda}(\\tilde\\lambda_{\\alpha}^1,\\tilde\\varrho_{\\alpha}^1,\\alpha)), B_{\\varrho n}(\\sqrt{n} t_{\\lambda}(\\tilde\\lambda^2_{\\alpha\\varrho},\\varrho,\\alpha)))$. First, Assumption 2 of \\citet{andrews01em} holds trivially for $B_{\\varrho n}(\\sqrt{n} t(\\lambda,\\varrho,\\alpha))$. Second, Assumption 3 of \\citet{andrews01em} is satisfied by (\\ref{weak_cgce}) and Assumption \\ref{A-nonsing2}. Third, Assumption 4 of \\citet{andrews01em} is satisfied by Proposition \\ref{P-quadratic-N1}. Finally, Assumption $5^*$ of \\citet{andrews01em} holds with $B_T=n^{1\/2}$ because $\\tilde\\Lambda_{\\lambda \\alpha}^1$ is locally (in a neighborhood of $\\varrho=0, \\lambda=0$) equal to the cone $\\Lambda_{\\lambda}^1$ and $\\tilde\\Lambda_{\\lambda \\alpha\\varrho}^2$ is locally equal to the cone $\\Lambda_{\\lambda \\varrho}^2$ uniformly in $\\varrho\\in\\Theta_{\\varrho\\epsilon}$. 
Consequently, $W_n(\\alpha) \\overset{d}{\\rightarrow} \\max\\{ \\mathbb{I}\\{\\varrho=0\\} (\\tilde t_{\\lambda }^1)' \\mathcal{I}_{\\lambda.\\eta 0} \\tilde t_{\\lambda }^1, \\sup_{\\varrho\\in\\Theta_{\\varrho}} (\\tilde t_{\\lambda \\varrho}^2)' \\mathcal{I}_{\\lambda.\\eta \\varrho} \\tilde t_{\\lambda \\varrho}^2 \\}$ uniformly in $\\alpha$ from Theorem 1(c) of \\citet{andrews01em}, and the stated result follows from (\\ref{LR_W}).\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{P-quadratic-N1-homo}]\nThe proof is similar to that of Proposition \\ref{P-quadratic-N1}. Expanding $l_{k\\vartheta x_0} -1$ five times around $\\psi^*$ and proceeding as in the proof of Proposition \\ref{P-quadratic-N1} gives\n\\begin{equation} \\label{lk_2_normal_homo}\nl_{k\\vartheta x_0} -1 = t(\\psi,\\pi)' s_{\\varrho k} + r_{k,0}(\\pi) + u_{kx_0}(\\psi,\\pi), \n\\end{equation} \nwhere $t(\\psi,\\pi)$ is defined in (\\ref{t-psi2-homo}), $s_{\\varrho k}$ is defined in (\\ref{score_normal_homo}) and is given by \n\\begin{equation*}\n s_{\\varrho k} \n=\n\\begin{pmatrix}\n\\nabla_{\\eta} \\overline p_{\\psi^* \\pi k,0} \/ \\overline p_{\\psi^* \\pi k,0} \\\\\n\\zeta_{k,0}(\\varrho)\/2 \\\\\n\\nabla_{\\mu^3} f_k^* \/ 3! f_k^* \\\\\n\\nabla_{\\mu^4} f_k^* \/ 4! 
f_k^* \\\\\n\\nabla_{\\lambda_\\beta\\lambda_\\mu} \\overline p_{\\psi^* \\pi k,0} \/ \\alpha(1-\\alpha)\\overline p_{\\psi^* \\pi k,0} \\\\\n\\widetilde{\\nabla}_{v(\\lambda_\\beta)} \\overline p_{\\psi^* \\pi k,0} \/ \\alpha(1-\\alpha) \\overline p_{\\psi^* \\pi k,0} \n\\end{pmatrix},\n\\end{equation*}\nand\n\\begin{align*}\nr_{k,0}(\\pi):&= \\widetilde \\Lambda_{k,0}(\\pi) '\\tau(\\psi) + \\Lambda^5_{k,0}(\\overline \\psi,\\pi) ' (\\Delta \\psi)^{\\otimes 5} \\\\\n&\\quad + \\lambda_\\mu^3[\\nabla_{\\lambda_\\mu^3} \\overline p_{\\psi^* \\pi k,0}\/\\overline p_{\\psi^* \\pi k,0} - \\alpha(1-\\alpha)(1-2\\alpha)\\nabla_{\\mu^3} f_k^*\/f_k^*]\/3!\\\\\n& \\quad+ \\lambda_\\mu^4[\\nabla_{\\lambda_\\mu^4} \\overline p_{\\psi^* \\pi k,0}\/\\overline p_{\\psi^* \\pi k,0} - \\alpha(1-\\alpha)(1-6\\alpha+6 \\alpha^2)\\nabla_{\\mu^4} f_k^*\/f_k^*]\/4!,\n\\end{align*}\nwhere $u_{kx_0}(\\psi,\\pi)$, $\\overline p_{\\psi \\pi k,m}$, and the terms in the definition of $r_{k,0}(\\pi)$ are defined similarly to those in the proof of Proposition \\ref{P-quadratic-N1}.\n\nThe stated result is proven if the terms on the right-hand side of (\\ref{lk_2_normal_homo}) satisfy Assumption \\ref{assn_a3}. $t(\\psi,\\pi) = 0$ if and only if $\\psi=\\psi^*$. $s_{\\varrho k}$ and $u_{kx_0}(\\psi,\\pi)$ satisfy Assumption \\ref{assn_a3} by the same argument as the proof of Proposition \\ref{P-quadratic-N1}. For $r_{k,0}(\\pi)$, first, $\\Lambda^5_{k,0}(\\overline \\psi,\\pi)' (\\Delta \\psi)^{\\otimes 5}$ satisfies Assumptions \\ref{assn_a3}(c) and (d) from a similar argument to the proof of Proposition \\ref{P-quadratic-N1}; $\\lambda_\\mu^5$ is dominated by $\\lambda_\\mu^3$ or $\\lambda_\\mu^4$ because $\\inf_{0\\leq \\alpha \\leq 1}\\max\\{|1-2\\alpha|,|1-6\\alpha + 6\\alpha^2|\\}>0$. 
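The properties of the coefficient polynomials used in this step can be checked directly (an illustrative sketch, separate from the proof): $1-2\alpha$ and $1-6\alpha+6\alpha^2$ have no common zero on $[0,1]$, which is what makes $\max\{|1-2\alpha|,|1-6\alpha+6\alpha^2|\}$ bounded away from zero. As a side observation (an interpretation, not notation from the paper), $c(\alpha)(1-2\alpha)$ and $c(\alpha)(1-6\alpha+6\alpha^2)$ with $c(\alpha)=\alpha(1-\alpha)$ are the third moment and fourth cumulant of a centered two-point variable:

```python
import numpy as np
import sympy as sp

a = sp.symbols('alpha')

# (1) The two polynomials share no root, so the max of their absolute
# values is strictly positive on [0, 1].
roots1 = set(sp.solve(sp.Eq(1 - 2*a, 0), a))           # {1/2}
roots2 = set(sp.solve(sp.Eq(1 - 6*a + 6*a**2, 0), a))  # {(3 +- sqrt(3))/6}
assert not roots1 & roots2

# Numerical evaluation on a fine grid of [0, 1]; the infimum is 1/3,
# attained near alpha = 1/3 and alpha = 2/3.
grid = np.linspace(0.0, 1.0, 100001)
vals = np.maximum(np.abs(1 - 2*grid), np.abs(1 - 6*grid + 6*grid**2))
assert vals.min() > 0.3

# (2) Moment/cumulant interpretation: X = 1-alpha w.p. alpha and
# X = -alpha w.p. 1-alpha has mean zero, third moment a(1-a)(1-2a),
# and fourth cumulant a(1-a)(1-6a+6a^2).
m = lambda k: a*(1 - a)**k + (1 - a)*(-a)**k
assert sp.simplify(m(1)) == 0
assert sp.simplify(m(3) - a*(1 - a)*(1 - 2*a)) == 0
assert sp.simplify(m(4) - 3*m(2)**2 - a*(1 - a)*(1 - 6*a + 6*a**2)) == 0
```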
Second, similar to (\\ref{d4r}) in the proof of Proposition \\ref{P-quadratic-N1}, write $\\widetilde \\Lambda_{k,0}(\\pi) '\\tau(\\psi)=\\nabla_{(\\eta^{\\otimes 2})'} \\overline p_{\\psi^* \\pi k,0} (\\Delta \\eta)^{\\otimes 2} \/ 2!\\overline p_{\\psi^* \\pi k,0}+ \\tilde R_{3k\\vartheta} + R_{4k\\vartheta}$, where $\\tilde R_{3k\\vartheta}:= [\\nabla_{(\\psi^{\\otimes 3})'} \\overline p_{\\psi^* \\pi k,0} (\\Delta \\psi)^{\\otimes 3}- \\nabla_{\\lambda_\\mu^3} \\overline p_{\\psi^* \\pi k,0} \\lambda_\\mu^3]\/3!\\overline p_{\\psi^* \\pi k,0}$, and $R_{4k\\vartheta}$ is defined in (\\ref{r_jk}). The term $\\nabla_{(\\eta^{\\otimes 2})'} \\overline p_{\\psi^* \\pi k,0} (\\Delta \\eta)^{\\otimes 2} \/ 2!\\overline p_{\\psi^* \\pi k,0}$ clearly satisfies Assumptions \\ref{assn_a3}(c) and (d). The terms in $\\tilde R_{3k\\vartheta}$ satisfy Assumptions \\ref{assn_a3}(c) and (d) because they contain either $\\Delta \\eta$ or $\\lambda_\\mu^2\\lambda_\\beta$ or $\\lambda_\\mu\\lambda_\\beta^2$ or $\\lambda_\\beta^3$. The terms in $R_{4k\\vartheta}$ satisfy Assumptions \\ref{assn_a3}(c) and (d) because they either contain $\\Delta \\eta$ or a term of the form $\\lambda_\\mu^i\\lambda_\\beta^{4-i}$ with $ 1 \\leq i \\leq 3$. The last two terms in $r_{k,0}(\\pi)$ satisfy Assumptions \\ref{assn_a3}(c) and (d) from Lemma \\ref{lemma_d34_homo}. Therefore, $r_{k,0}(\\pi)$ satisfies Assumptions \\ref{assn_a3}(c) and (d), and the stated result is proven. \n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{P-LR-N1-homo}] \nThe proof is similar to the proof of Proposition \\ref{P-LR-N1}. Let $(\\hat{\\psi}, \\hat \\varrho, \\hat \\alpha) : = {\\arg\\max}_{(\\psi, \\varrho,\\alpha) \\in \\Theta_{\\psi}\\times\\Theta_{\\varrho}\\times\\Theta_{\\alpha}} \\ell_n(\\psi,\\varrho,\\alpha,\\xi)$ denote the MLE of $(\\psi,\\varrho,\\alpha)$. 
Consider the sets $\\Theta_{\\lambda}^1:=\\{\\lambda \\in \\Theta_{\\lambda}:|\\lambda_\\mu| \\geq n^{-1\/6}(\\log n)^{-1} \\}$ and $\\Theta_{\\lambda}^2:=\\{\\lambda \\in \\Theta_{\\lambda}:|\\lambda_\\mu| \\leq n^{-1\/6}(\\log n)^{-1} \\}$, so that $\\Theta_{\\lambda} = \\Theta_{\\lambda}^1 \\cup \\Theta_{\\lambda}^2$. For $j=1,2$, define $(\\hat{\\psi}^j, \\hat \\varrho^j,\\hat\\alpha^j) : = {\\arg\\max}_{(\\psi,\\varrho,\\alpha) \\in \\Theta_{\\psi}\\times \\Theta_{\\varrho }\\times \\Theta_{\\alpha }, \\lambda \\in \\Theta_{\\lambda}^j} \\ell_n(\\psi,\\varrho,\\alpha,\\xi)$, so that $\\ell_n(\\hat \\psi, \\hat \\varrho,\\hat\\alpha,\\xi) = \\max_{j \\in \\{1,2\\}} \\ell_n(\\hat\\psi^j, \\hat\\varrho^j,\\hat\\alpha^j,\\xi)$.\n\nDefine $B_{\\varrho n}(t_\\lambda(\\lambda,\\varrho,\\alpha))$ as in (\\ref{B_pi}) in the proof of Proposition \\ref{P-LR} but using $t(\\psi,\\pi)$ and $s_{\\varrho k}$ defined in (\\ref{t-psi2-homo}) and (\\ref{score_normal_homo}) and replacing $t_{\\lambda}$ in (\\ref{B_pi}) with $t_\\lambda(\\lambda,\\varrho,\\alpha)$. Observe that $\\hat{\\varrho}^1 = O_p(n^{-1\/6}(\\log n)^2)$ because $\\hat \\varrho^1 (\\hat \\lambda_\\mu^1)^2 = O_p(n^{-1\/2})$ from Proposition \\ref{P-quadratic-N1-homo}(a) and $|\\hat{\\lambda}_\\mu^1| \\geq n^{-1\/6}(\\log n)^{-1}$. Using the argument of the proof of Proposition \\ref{P-LR-N1} leading to (\\ref{ln_max}), we obtain \n\\begin{equation*}\n2[\\ell_n(\\hat \\psi, \\hat \\varrho, \\hat \\alpha,\\xi) - \\ell_{0n}(\\hat\\vartheta_0)]\n = \\max \\{ B_ { 0 n}(\\sqrt{n} t_\\lambda(\\hat\\lambda^1,\\hat\\varrho^1,\\hat \\alpha^1)), B_{ \\hat \\varrho^2 n}(\\sqrt{n} t_\\lambda(\\hat\\lambda^2, \\hat\\varrho^2,\\hat \\alpha^2))\\} + o_p(1).\n\\end{equation*}\n\nWe proceed to construct parameter spaces that are locally equal to the cones $\\Lambda_\\lambda^1$ and $\\Lambda_{\\lambda\\varrho}^2$ defined in (\\ref{Lambda-lambda-homo}). 
Define $c(\\alpha) := \\alpha(1-\\alpha)$, and denote the elements of $t_\\lambda(\\hat\\lambda^j,\\hat\\varrho^j,\\hat\\alpha^j)$ corresponding to (\\ref{t-psi2-homo}) by\n\\begin{equation*}\nt_{\\lambda}(\\hat\\lambda^j,\\hat\\varrho^j,\\hat\\alpha^j)\n=\n\\begin{pmatrix}\n\\hat t_{\\varrho\\mu^2}^j\\\\\n\\hat t_{\\mu^3}^j \\\\\n\\hat t_{\\mu^4}^j \\\\\n\\hat t_{\\beta \\mu}^j\\\\\n\\hat t_{v(\\beta)}^j\\\\\n\\end{pmatrix}\n:= \nc(\\hat\\alpha^j)\n\\begin{pmatrix}\n \\hat\\varrho^j (\\hat \\lambda_\\mu^j)^2 \\\\\n (1-2\\hat\\alpha^j) (\\hat\\lambda_\\mu^j)^3\\\\\n (1-6\\hat\\alpha^j+6(\\hat\\alpha^j)^2) (\\hat\\lambda_\\mu^j)^4\\\\\n \\hat \\lambda_\\beta^j \\hat \\lambda_\\mu^j\\\\\n v(\\hat \\lambda_\\beta^j)\n\\end{pmatrix}. \n\\end{equation*}\nNote that $\\hat{\\lambda}_\\beta^1 = O_p(n^{-1\/3} \\log n)$ because $\\hat{t} _{\\beta \\mu}^1 = O_p(n^{-1\/2})$ from Proposition \\ref{P-quadratic-N1-homo}(a) and $|\\hat{\\lambda}_\\mu^1| \\geq n^{-1\/6}(\\log n)^{-1}$. Furthermore, $|\\hat\\lambda_\\mu^2| \\leq n^{-1\/6}(\\log n)^{-1}$. 
Therefore,\n\\begin{equation*}\n\\hat{t}_{v(\\beta)}^1 = o_p(n^{-1\/2}), \\quad \\hat t_{\\mu^3}^{2} = o_p(n^{-1\/2}), \\quad \\hat t_{\\mu^4}^{2}=o_p(n^{-1\/2}).\n\\end{equation*}\nIn view of this, let $t_\\lambda(\\lambda,\\varrho,\\alpha):=( t_{\\varrho\\mu^2}, t_{\\mu^3},t_{\\mu^4},t_{\\beta\\mu}',t_{v(\\beta)}')' \\in \\mathbb{R}^{q_{\\lambda}}$, and consider the following sets:\n\\begin{align*}\n&\\tilde\\Lambda_{\\lambda }^{1}:=\\{ t_\\lambda(\\lambda,\\varrho,\\alpha) : t_{\\varrho\\mu^2} = c(\\alpha) \\varrho \\lambda_\\mu^2, t_{\\mu^3}= c(\\alpha)(1-2\\alpha)\\lambda_\\mu^3, t_{\\mu^4} = c(\\alpha)(1-6\\alpha+6\\alpha^2) \\lambda_\\mu^4, \\\\\n& \\qquad \\qquad t_{\\beta\\mu} = c(\\alpha)\\lambda_{\\beta}\\lambda_{\\mu}, t_{v(\\beta)} = 0 \\ \\text{for some }(\\lambda,\\varrho,\\alpha) \\in \\Theta_\\lambda\\times \\Theta_{\\varrho}\\times \\Theta_{\\alpha}\\}, \\\\\n&\\tilde\\Lambda_{\\lambda \\alpha\\varrho}^2:=\\{ t_\\lambda(\\lambda,\\varrho,\\alpha) : t_{\\varrho\\mu^2} = c(\\alpha)\\varrho \\lambda_{\\mu}^2, t_{\\mu^3} = t_{\\mu^4}= 0, \\\\\n& \\qquad \\qquad t_{\\beta\\mu }= c(\\alpha)\\lambda_{\\beta} \\lambda_\\mu, t_{v(\\beta)} =c(\\alpha)v(\\lambda_{\\beta})\\ \\text{for some }\\lambda\\in \\Theta_\\lambda \\}. \n\\end{align*}\nDefine $(\\tilde \\lambda^1, \\tilde \\varrho^1, \\tilde \\alpha^1)$ and $\\tilde \\lambda_{\\alpha\\varrho}^2$ by $B_{0 n}(\\sqrt{n} t_\\lambda(\\tilde\\lambda^1,\\tilde \\varrho^1, \\tilde \\alpha^1)) = {\\max}_{t_\\lambda(\\lambda,\\varrho,\\alpha) \\in \\tilde\\Lambda_{\\lambda}^1} B_{0 n}(\\sqrt{n} t_\\lambda(\\lambda,\\varrho,\\alpha))$ and \\\\ $B_{\\varrho n}(\\sqrt{n} t_\\lambda(\\tilde\\lambda_{\\alpha\\varrho}^2,\\varrho,\\alpha)) = {\\max}_{t_\\lambda(\\lambda,\\varrho,\\alpha) \\in \\tilde \\Lambda_{\\lambda\\alpha\\varrho}^2} B_{\\varrho n}(\\sqrt{n} t_\\lambda(\\lambda,\\varrho,\\alpha))$. 
$\\tilde\\Lambda_{\\lambda}^{1}$ is locally (in the neighborhood of $\\varrho=0$, $\\lambda=0$) equal to the cone $\\Lambda_{\\lambda}^{1}$ because, when $|1-2\\alpha|\\geq \\epsilon>0$ for some positive constant $\\epsilon$, we have $t_{\\mu^4}\/t_{\\mu^3} \\to 0$ as $\\lambda_\\mu \\to 0$, and when $\\alpha$ is in the neighborhood of $1\/2$, we have $1-6\\alpha+6\\alpha^2 <0$. $\\tilde\\Lambda_{\\lambda \\alpha \\varrho}^2$ is locally equal to the cone $\\Lambda_{\\lambda\\varrho}^2$ uniformly in $\\varrho \\in \\Theta_{\\varrho}$.\n\nDefine $W_n:=\\max\\{B_{0 n}(\\sqrt{n} t_\\lambda(\\tilde\\lambda^1,\\tilde\\varrho^1,\\tilde\\alpha^1)), \\sup_{(\\alpha,\\varrho) \\in\\Theta_{\\alpha} \\times \\Theta_{\\varrho} } B_{\\varrho n}(\\sqrt{n} t_\\lambda(\\tilde\\lambda^2_{\\alpha\\varrho},\\varrho,\\alpha)) \\}$. Proceeding as in the proof of Proposition \\ref{P-LR-N1} gives $2[\\ell_n(\\hat \\psi, \\hat \\varrho,\\hat \\alpha,\\xi) - \\ell_{0n}(\\hat\\vartheta_0)] = W_n + o_p(1)$, and the asymptotic distribution of the LRTS follows from applying Theorem 1(c) of \\citet{andrews01em} to $(B_{0 n}(\\sqrt{n} t_\\lambda(\\tilde\\lambda^1, \\tilde\\varrho^1,\\tilde\\alpha^1 )), B_{\\varrho n}(\\sqrt{n} t_\\lambda(\\tilde\\lambda^2_{\\alpha\\varrho},\\varrho,\\alpha)))$. \n\\end{proof}\n\n\\begin{proof}[Proof of Propositions \\ref{P-LR_M}, \\ref{P-LR_M_normal}, and \\ref{P-LR_M_normal_homo}] \nLet $\\mathcal{N}_m^*$ denote an arbitrarily small neighborhood of $\\Upsilon_m^*$, and let $\\hat{\\psi}_m$ denote a local MLE that maximizes $\\ell_n(\\psi_m,\\pi_m,\\xi_{M_0+1})$ subject to $\\psi_m \\in \\mathcal{N}_m^*$. Proposition \\ref{P-consist_M} and $\\Upsilon^*=\\cup_{m=1}^{M_0}\\Upsilon_m^*$ imply that $\\ell_n(\\hat\\vartheta_{M_0+1},\\xi_{M_0+1}) = \\max_{m=1,\\ldots,M_0} \\ell_n(\\hat\\psi_m,\\pi_m,\\xi_{M_0+1})$ with probability approaching one. 
Because $\\psi_\\ell^{*}\\notin \\mathcal{N}_m^*$ for any $\\ell \\neq m$, it follows from Proposition \\ref{P-consist_M} that $\\hat{\\psi}_m-\\psi_m^{*}=o_p(1)$.\n\nNext, $\\ell_n(\\psi_{m},\\pi_m,\\xi_{M_0+1}) - \\ell_n(\\psi_m^{*},\\pi_m,\\xi_{M_0+1})$ admits the same expansion as $\\ell_n(\\psi,\\pi,\\xi) - \\ell_n(\\psi^{*},\\pi,\\xi)$ in (\\ref{ln_appn}) or (\\ref{ln_appn_N1}). \nTherefore, the stated result follows from applying the proof of Propositions \\ref{P-LR}, \\ref{P-LR-N1}, and \\ref{P-LR-N1-homo} to $\\ell_n(\\hat \\psi_m, \\pi_m,\\xi_{M_0+1}) - \\ell_{n}(\\hat\\vartheta_{M_0},\\xi_{M_0})$ for each $m$ and combining the results to derive the joint asymptotic distribution of $\\{\\ell_n(\\hat \\psi_m, \\pi_m,\\xi_{M_0+1}) - \\ell_{n}(\\hat\\vartheta_{M_0},\\xi_{M_0})\\}_{m=1}^{M_0}$.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{P-LAN}] \nObserve that Proposition \\ref{Ln_thm1} holds under $\\mathbb{P}_{\\vartheta^*,x_0}^n$ under the assumptions of Propositions \\ref{P-quadratic}, \\ref{P-quadratic-N1}, and \\ref{P-quadratic-N1-homo}. Because $\\vartheta_n=(\\eta_n',\\lambda_n',\\pi_n')'\\in \\mathcal{N}_{c\/\\sqrt{n}}$ by choosing $c> |h|$, it follows from Proposition \\ref{Ln_thm1} that\n\\begin{equation}\\label{expansion}\n\\sup_{x_0 \\in \\mathcal{X}} \\left|\\log \\frac{d\\mathbb{P}_{\\vartheta_n,x_0}^n}{d \\mathbb{P}_{\\vartheta^*,x_0}^n} - h' \\nu_n(s_{\\varrho_n k}) + \\frac{1}{2}h' \\mathcal{I}_{\\varrho_n} h \\right|=o_{\\mathbb{P}_{\\vartheta^*,x_0}^n}(1),\n\\end{equation}\nwhere $s_{\\varrho k}$ is given by (\\ref{score}), (\\ref{score_normal}), and (\\ref{score_normal_homo}) for the models of the non-normal distribution, heteroscedastic normal distribution, and homoscedastic normal distribution, respectively. 
Furthermore, $\\nu_n(s_{\\varrho_n k}) \\Rightarrow G_\\varrho$ under $\\mathbb{P}_{\\vartheta^*,x_0}^n$, where $G_\\varrho$ is a mean zero Gaussian process with $\\mathrm{cov}(G_{\\varrho_1},G_{\\varrho_2})=\\mathcal{I}_{\\varrho_1\\varrho_2}:= \\lim_{k\\rightarrow \\infty} \\mathbb{E}_{\\vartheta^*} (s_{\\varrho_1 k}s_{\\varrho_2 k}')$. Therefore, $d\\mathbb{P}_{\\vartheta_n,x_0}^n \/ d \\mathbb{P}_{\\vartheta^*,x_0}^n$ converges in distribution under $\\mathbb{P}_{\\vartheta^*,x_0}^n$ to $\\exp\\left( N( \\mu,\\sigma^2) \\right)$ with $\\mu=-(1\/2) h' \\mathcal{I}_{ \\varrho} h$ and $\\sigma^2= h' \\mathcal{I}_{ \\varrho} h$, so that $\\mathbb{E}(\\exp\\left( N( \\mu,\\sigma^2) \\right))=1$. Consequently, part (a) follows from Le Cam's first lemma (see, e.g., Corollary 12.3.1 of \\citet{lehmannromano05book}). Part (b) follows from Le Cam's third lemma (see, e.g., Corollary 12.3.2 of \\citet{lehmannromano05book}) because part (a) and (\\ref{expansion}) imply that\n\\[\n\\begin{pmatrix}\n\\nu_n(s_{\\varrho_n k})\\\\\n\\log\\frac{d\\mathbb{P}_{\\vartheta_n,x_0}^n}{d \\mathbb{P}_{\\vartheta^*,x_0}^n} \n\\end{pmatrix}\n \\overset{d}{\\rightarrow} \nN\\left(\n\\begin{pmatrix}\n0\\\\\n-\\frac{1}{2} h' \\mathcal{I}_{\\varrho} h\n\\end{pmatrix}, \n\\begin{pmatrix}\n\\mathcal{I}_{\\varrho}&\\mathcal{I}_{\\varrho}h\\\\\nh'\\mathcal{I}_{\\varrho}&h'\\mathcal{I}_{\\varrho}h\n\\end{pmatrix}\n\\right)\\quad\\text{under $\\mathbb{P}_{\\vartheta^*,x_0}^n$}.\n\\] \n\\end{proof} \n\n\\begin{proof}[Proof of Proposition \\ref{P-LAN2}] \nThe proof follows the argument in the proof of Proposition \\ref{P-LR}. Observe that $h_\\eta=0$ and $h_{\\lambda} =\\sqrt{n}t_{\\lambda}(\\lambda_n,\\pi_n)$ hold under $H_{1n}$. Therefore, Proposition \\ref{P-LAN} holds under $\\mathbb{P}_{\\vartheta_n,x_0}^n$ implied by $H_{1n}$, and, in conjunction with Theorem 12.3.2(a) of \\citet{lehmannromano05book}, Propositions \\ref{lemma-omega} and \\ref{P-quadratic} hold under $\\mathbb{P}_{\\vartheta_n,x_0}^n$. 
Consequently, the proof of Proposition \\ref{P-LR} goes through if we replace $G_{\\lambda.\\eta\\varrho n}\\Rightarrow G_{\\lambda.\\eta\\varrho}$ with $G_{\\lambda.\\eta\\varrho n} \\Rightarrow G_{\\lambda.\\eta\\varrho} + ( \\mathcal{I}_{\\lambda \\varrho \\varrho} - \\mathcal{I}_{\\lambda\\eta\\varrho} \\mathcal{I}_{\\eta}^{-1} \\mathcal{I}_{\\eta\\lambda\\varrho} ) h_\\lambda = G_{\\lambda.\\eta\\varrho} + \\mathcal{I}_{\\lambda.\\eta\\varrho} h_{\\lambda}$, and the stated result follows. \\end{proof}\n\n\\begin{proof}[Proof of Propositions \\ref{P-LAN3} and \\ref{P-LAN4}] \nThe proof is similar to the proof of Proposition \\ref{P-LAN2}. Observe that, for $j \\in \\{a,b\\}$, $h_\\eta^j=0$ and $h_{\\lambda}^j =\\sqrt{n}t_{\\lambda}(\\lambda_n,\\pi_n)+o(1)$ hold under $H_{1 n}^j$. Therefore, Proposition \\ref{P-LAN} holds under $\\mathbb{P}_{\\vartheta_n,x_0}^n$ implied by $H_{1n}^j$, and the stated result follows from repeating the argument of the proof of Proposition \\ref{P-LAN2}. \n\\end{proof}\n\n\n\\begin{proof}[Proof of Proposition \\ref{P-bootstrap}] \nWe only provide the proof for the models of the non-normal distribution with $M_0=1$ because the proof for the other models is similar. The proof follows the argument in the proof of Theorem 15.4.2 in \\citet{lehmannromano05book}. Define $\\mathbf{C}_\\eta$ as the set of sequences $\\{\\eta_n\\}$ satisfying $\\sqrt{n}(\\eta_n - \\eta^*) \\to h_\\eta$ for some finite $h_\\eta$. \nDenote the MLE of the one-regime model parameter by $\\hat \\eta_n$. For the MLE under $H_0$, $\\sqrt{n}(\\hat \\eta_n - \\eta^*)$ converges in distribution to a $\\mathbb{P}_{\\vartheta^*}$-a.s. finite random variable by the standard argument. 
Then, by the Almost Sure Representation Theorem (e.g., Theorem 11.2.19 of \\citet{lehmannromano05book}), there exist random variables $\\tilde \\eta_n$ and $\\tilde h_\\eta$ defined on a common probability space such that $\\hat \\eta_n$ and $\\tilde \\eta_n$ have the same distribution and $\\sqrt{n}(\\tilde \\eta_n - \\eta^*)\\rightarrow \\tilde h_\\eta$ almost surely. Therefore, $\\{ \\tilde \\eta_n \\}\\in \\mathbf{C}_\\eta$ with probability one, and the stated result under $H_0$ follows from Lemma \\ref{lemma_btsp} because $\\hat \\eta_n$ and $\\tilde \\eta_n$ have the same distribution.\n\nFor the MLE under $H_{1n}$, note that the proof of Proposition \\ref{P-LAN2} goes through when $h_\\eta$ is finite even if $h_\\eta \\neq 0$. Therefore, $\\sqrt{n}(\\hat \\eta_n - \\eta^*)$ converges in distribution to a $\\mathbb{P}_{\\vartheta_n}$-a.s. finite random variable under $H_{1n}$. Hence, the stated result follows from Lemma \\ref{lemma_btsp} and repeating the argument in the case of $H_0$.\n\\end{proof}\n\n \n\\subsection{Auxiliary results}\n\n\n\\subsubsection{Missing information principle}\n\nThe following lemma extends equations (3.1) and (3.2) in Louis (1982), expressing the higher-order derivatives of the log-likelihood function in terms of the conditional expectations of the derivatives of the complete data log-likelihood function. For notational brevity, assume $\\vartheta$ is scalar. Adaptations to vector-valued $\\vartheta$ are straightforward but require more tedious notation. Let $\\nabla^j \\ell(Y):= \\nabla_\\vartheta^j \\log P(Y;\\vartheta)$ and $\\nabla^j \\ell(Y,X):= \\nabla_\\vartheta^j \\log P(Y,X;\\vartheta)$. 
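As a concrete sanity check of the second- and third-order identities in the lemma below, they can be verified symbolically on a toy latent-variable model (a model invented purely for this illustration, not from the paper): latent $X \sim \mathrm{Bernoulli}(1/2)$, $Y \mid X=0 \sim \mathrm{Bernoulli}(\theta)$, $Y \mid X=1 \sim \mathrm{Bernoulli}(\theta^2)$.

```python
import sympy as sp

# Hypothetical toy model for illustration only:
# X ~ Bernoulli(1/2) latent; Y|X=0 ~ Bernoulli(theta); Y|X=1 ~ Bernoulli(theta^2).
th = sp.symbols('theta', positive=True)

def p_joint(y, x):
    p1 = th if x == 0 else th**2               # P(Y=1 | X=x)
    return sp.Rational(1, 2) * (p1 if y == 1 else 1 - p1)

def p_y(y):                                    # observed-data density P(Y=y)
    return p_joint(y, 0) + p_joint(y, 1)

def cond_exp(g, y):                            # E[g(Y,X) | Y=y]
    return sum(g(y, x) * p_joint(y, x) for x in (0, 1)) / p_y(y)

def cmom(gs, y):                               # central moment E^c[g_1...g_r | Y=y]
    means = [cond_exp(g, y) for g in gs]
    centered = lambda yv, xv: sp.Mul(*[g(yv, xv) - m for g, m in zip(gs, means)])
    return cond_exp(centered, y)

def dl(j):                                     # (y,x) -> d^j/dtheta^j log P(y,x)
    return lambda y, x: sp.diff(sp.log(p_joint(y, x)), th, j)

for y in (0, 1):
    # Second-order identity: grad^2 l(Y) = E[grad^2 l(Y,X)|Y] + E^c[(grad l)^2|Y]
    lhs2 = sp.diff(sp.log(p_y(y)), th, 2)
    rhs2 = cond_exp(dl(2), y) + cmom([dl(1), dl(1)], y)
    assert sp.simplify(lhs2 - rhs2) == 0

    # Third-order identity: add 3 E^c[grad^2 l grad l|Y] + E^c[(grad l)^3|Y]
    lhs3 = sp.diff(sp.log(p_y(y)), th, 3)
    rhs3 = (cond_exp(dl(3), y) + 3 * cmom([dl(2), dl(1)], y)
            + cmom([dl(1), dl(1), dl(1)], y))
    assert sp.simplify(lhs3 - rhs3) == 0
```

The same scaffolding extends to the fourth- through sixth-order identities, at growing symbolic cost.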
For random variables $V_1,\\ldots,V_q$ and $Y$, define the central conditional moment of $(V_1^{r_1}\\cdots V_q^{r_q})$ as $\\mathbb{E}^c [V_1^{r_1} \\cdots V_q^{r_q} |Y ] := \\mathbb{E} [ (V_1-\\mathbb{E}[V_1|Y])^{r_1} \\cdots (V_q-\\mathbb{E}[V_q|Y])^{r_q} |Y] $.\n\\begin{lemma} \\label{louis}\nFor any random variables $X$ and $Y$ with densities $P(Y,X;\\vartheta)$ and $P(Y;\\vartheta)$,\n\\begin{align*}\n&\\nabla \\ell(Y) = \\mathbb{E}\\left[ \\nabla \\ell(Y,X) \\middle| Y \\right], \\quad \\nabla^2 \\ell(Y) = \\mathbb{E}\\left[ \\nabla^2 \\ell(Y,X) \\middle| Y \\right] + \\mathbb{E}^c\\left[ (\\nabla \\ell(Y,X))^2\\middle| Y \\right], \\\\\n&\\nabla^3 \\ell(Y) = \\mathbb{E}\\left[ \\nabla^3 \\ell(Y,X) \\middle| Y \\right]+ 3 \\mathbb{E}^c\\left[ \\nabla^2 \\ell(Y,X) \\nabla \\ell(Y,X)\\middle| Y \\right] + \\mathbb{E}^c\\left[(\\nabla \\ell(Y,X))^3\\middle| Y \\right], \\\\\n&\\nabla^4 \\ell(Y)\n= \\mathbb{E}\\left[ \\nabla^4 \\ell(Y,X)\\middle| Y \\right] +4 \\mathbb{E}^c \\left[\\nabla^3 \\ell(Y,X) \\nabla \\ell(Y,X)\\middle| Y \\right] + 3 \\mathbb{E}^c\\left[ (\\nabla^2 \\ell(Y,X))^2 \\middle| Y \\right] \\\\\n& \\quad + 6 \\mathbb{E}^c\\left[\\nabla^2 \\ell(Y,X)(\\nabla \\ell(Y,X))^2\\middle| Y \\right] +\\mathbb{E}^c\\left[(\\nabla \\ell(Y,X))^4 \\middle| Y \\right] -3 \\left\\{ \\mathbb{E}^c\\left[ (\\nabla \\ell(Y,X))^2 \\middle| Y \\right]\\right\\}^2, \\\\\n&\\nabla^5 \\ell(Y) = \\mathbb{E}\\left[ \\nabla^5 \\ell(Y,X)\\middle| Y\\right] +5 \\mathbb{E}^c\\left[ \\nabla^4 \\ell(Y,X) \\nabla \\ell(Y,X) \\middle| Y\\right] +10 \\mathbb{E}^c\\left[ \\nabla^3 \\ell(Y,X) \\nabla^2 \\ell(Y,X) \\middle| Y\\right] \\\\\n&\\quad+10\\mathbb{E}^c\\left[ \\nabla^3 \\ell(Y,X) (\\nabla \\ell(Y,X))^2 \\middle| Y\\right]+15\\mathbb{E}^c\\left[ (\\nabla^2 \\ell(Y,X))^2 \\nabla \\ell(Y,X) \\middle| Y\\right] \\\\\n&\\quad +10\\mathbb{E}^c\\left[ \\nabla^2 \\ell(Y,X) (\\nabla \\ell(Y,X))^3 \\middle| Y\\right] -30 \\mathbb{E}^c\\left[ \\nabla^2 \\ell(Y,X) \\nabla \\ell(Y,X) \\middle| 
Y\\right] \\mathbb{E}^c\\left[( \\nabla \\ell(Y,X))^2 \\middle| Y\\right] \\\\\n&\\quad+\\mathbb{E}^c\\left[ (\\nabla \\ell(Y,X))^5 \\middle| Y\\right] - 10\\mathbb{E}^c\\left[ (\\nabla \\ell(Y,X))^3 \\middle| Y\\right] \\mathbb{E}^c\\left[(\\nabla \\ell(Y,X))^2 \\middle| Y\\right], \\\\\n&\\nabla^6 \\ell(Y) = \\mathbb{E}\\left[ \\nabla^6 \\ell(Y,X)\\middle|Y \\right] \\\\\n& \\quad +6\\mathbb{E}^c\\left[\\nabla^5 \\ell(Y,X)\\nabla \\ell(Y,X) \\middle| Y\\right] +15 \\mathbb{E}^c\\left[\\nabla^4 \\ell(Y,X)\\nabla^2 \\ell(Y,X) \\middle| Y\\right] \\\\\n& \\quad+15 \\mathbb{E}^c\\left[\\nabla^4 \\ell(Y,X) (\\nabla \\ell(Y,X))^2\\middle| Y\\right] + 60\\mathbb{E}^c\\left[\\nabla^3 \\ell(Y,X)\\nabla^2 \\ell(Y,X)\\nabla \\ell(Y,X) \\middle| Y\\right] \\\\\n& \\quad+10\\mathbb{E}^c\\left[(\\nabla^3 \\ell(Y,X))^2\\middle| Y\\right]+ 15 \\mathbb{E}^c\\left[ (\\nabla^2 \\ell(Y,X))^3 \\middle| Y\\right] \\\\\n& \\quad+20\\mathbb{E}^c\\left[\\nabla^3 \\ell(Y,X)(\\nabla \\ell(Y,X))^3\\middle| Y\\right] - 60 \\mathbb{E}^c\\left[\\nabla^3 \\ell(Y,X) \\nabla \\ell(Y,X)\\middle| Y\\right]\\mathbb{E}^c\\left[(\\nabla \\ell(Y,X))^2\\middle| Y\\right]\\\\\n& \\quad+45 \\mathbb{E}^c\\left[ (\\nabla^2 \\ell(Y,X))^2 (\\nabla \\ell(Y,X))^2 \\middle| Y\\right] -90 \\left\\{\\mathbb{E}^c\\left[ \\nabla^2 \\ell(Y,X) \\nabla \\ell(Y,X) \\middle| Y\\right]\\right\\}^2\\\\\n& \\quad-45 \\mathbb{E}^c\\left[ (\\nabla^2 \\ell(Y,X))^2 \\middle| Y\\right] \\mathbb{E}^c\\left[ (\\nabla \\ell(Y,X))^2 \\middle| Y\\right] \\\\\n& \\quad +15 \\mathbb{E}^c\\left[ \\nabla^2 \\ell(Y,X) (\\nabla \\ell(Y,X))^4\\middle| Y\\right] -90 \\mathbb{E}^c\\left[ \\nabla^2 \\ell(Y,X) (\\nabla \\ell(Y,X))^2 \\middle| Y\\right] \\mathbb{E}^c\\left[ (\\nabla \\ell(Y,X))^2 \\middle| Y\\right]\\\\\n& \\quad -60 \\mathbb{E}^c\\left[ \\nabla^2 \\ell(Y,X) \\nabla \\ell(Y,X) \\middle| Y\\right] \\mathbb{E}^c\\left[ (\\nabla \\ell(Y,X))^3 \\middle| Y\\right] \\\\\n& \\quad +\\mathbb{E}^c\\left[ (\\nabla \\ell(Y,X))^6 
\\middle| Y\\right] -15 \\mathbb{E}^c\\left[ (\\nabla \\ell(Y,X))^4 \\middle| Y\\right] \\mathbb{E}^c\\left[ (\\nabla \\ell(Y,X))^2 \\middle| Y\\right] \\\\\n& \\quad - 10\\left\\{\\mathbb{E}^c\\left[ (\\nabla \\ell(Y,X))^3 \\middle| Y\\right]\\right\\}^2 + 30\\left\\{\\mathbb{E}^c\\left[ (\\nabla \\ell(Y,X))^2 \\middle| Y\\right]\\right\\}^3,\n\\end{align*}\nprovided that the conditional expectations on the right-hand side exist. When $P(Y;\\vartheta)$ on the left-hand side is replaced with $P(Y|Z;\\vartheta)$, the stated result holds with $P(Y,X;\\vartheta)$ and $\\mathbb{E}[\\cdot |Y]$ on the right-hand side replaced with $P(Y,X|Z;\\vartheta)$ and $\\mathbb{E}[\\cdot |Y,Z]$. \n\\end{lemma} \n\\begin{proof}[Proof of Lemma \\ref{louis}]\nThe stated result follows from a direct calculation and relations such as\\\\ $\\nabla_\\vartheta^j P(Y;\\vartheta)\/P(Y;\\vartheta) = \\mathbb{E}[ \\nabla_\\vartheta^j P(Y,X;\\vartheta)\/P(Y,X;\\vartheta)|Y]$ and\n\\begin{equation} \\label{der_formula}\n\\begin{aligned}\n\\nabla \\log f &= \\nabla f\/f, \\quad \\nabla^2 \\log f = \\nabla^2 f\/f - (\\nabla \\log f )^2,\\\\\n\\nabla^3 \\log f &= \\nabla^3 f\/f -3 \\nabla^2 f \\nabla f\/f^2 + 2 (\\nabla f\/f)^3, \\\\\n\\nabla^4 \\log f &= \\nabla^4 f\/f -4 \\nabla^3 f \\nabla f\/f^2 - 3 (\\nabla^2 f\/f)^2 + 12 \\nabla^2 f (\\nabla f)^2\/ f^3 -6 (\\nabla f\/f)^4, \\\\\n\\nabla^5 \\log f &= \\nabla^5 f\/f -5 \\nabla^4 f \\nabla f\/f^2 - 10 \\nabla^3 f \\nabla^2 f\/f^2 + 20 \\nabla^3 f (\\nabla f)^2\/f^3 \\\\\n& \\quad + 30 (\\nabla^2 f)^2 \\nabla f \/f^3 - 60 \\nabla^2 f(\\nabla f)^3 \/f^4 + 24 (\\nabla f\/f)^5, \\\\\n\\nabla^6 \\log f & = \\nabla ^6 f\/f -6 \\nabla^5 f \\nabla f\/f^2 - 15 \\nabla ^4 f \\nabla^2 f\/f^2 + 30 \\nabla^4 f (\\nabla f)^2 \/ f^3 -10 (\\nabla^3 f)^2\/f^2 \\\\\n& \\quad +120 \\nabla^3 f \\nabla^2 f \\nabla f\/f^3 -120 \\nabla^3 f (\\nabla f)^3\/f^4 + 30 (\\nabla^2 f)^3\/f^3 \\\\\n& \\quad - 270 (\\nabla^2 f)^2(\\nabla f)^2 \/f^4 +360 \\nabla^2 f (\\nabla f)^4 \/f^5 - 120 
(\\nabla f)^6\/f^6, \\\\\n\\nabla^3 f\/f &= \\nabla^3 \\log f + 3 \\nabla^2 \\log f \\nabla \\log f +\\left(\\nabla \\log f\\right)^3,\\\\\n\\nabla^4 f\/f &= \\nabla^4 \\log f + 4 \\nabla^3 \\log f \\nabla \\log f + 3(\\nabla^2 \\log f )^2 +6 \\nabla^2 \\log f (\\nabla\\log f)^2+ (\\nabla\\log f)^4,\\\\\n\\nabla^5 f\/f & = \\nabla^5 \\log f +5 \\nabla^4 \\log f \\nabla \\log f +10 \\nabla^3 \\log f \\nabla^2 \\log f +10 \\nabla^3 \\log f (\\nabla \\log f)^2 \\\\\n& \\quad +15 (\\nabla^2 \\log f)^2 \\nabla \\log f +10 \\nabla^2 \\log f (\\nabla \\log f)^3 + (\\nabla\\log f)^5, \\\\\n\\nabla^6 f\/f & = \\nabla^6 \\log f + 6 \\nabla^5 \\log f \\nabla \\log f + 15 \\nabla^4 \\log f \\nabla^2 \\log f + 15\\nabla^4 \\log f (\\nabla \\log f)^2 \\\\\n& \\qquad +10( \\nabla^3 \\log f )^2 +60 \\nabla^3 \\log f \\nabla^2 \\log f \\nabla \\log f +20\\nabla^3\\log f (\\nabla \\log f)^3 \\\\\n& \\qquad +15( \\nabla^2 \\log f )^3 +45(\\nabla^2 \\log f)^2( \\nabla \\log f)^2 +15\\nabla^2 \\log f(\\nabla \\log f)^4 + (\\nabla\\log f)^6.\n\\end{aligned}\n\\end{equation}\nFor example, $\\nabla^3 \\ell(Y)$ is derived by writing, with $\\vartheta$ suppressed, \n\\begin{align*}\n& \\nabla^3 \\ell(Y) \\\\\n& = \\frac{\\nabla^3 P(Y)}{P(Y)} - 3 \\frac{\\nabla^2 P(Y)}{P(Y)}\\frac{\\nabla P(Y)}{P(Y)} + 2\\left(\\frac{\\nabla P(Y)}{P(Y)}\\right)^3 \\\\\n& = \\mathbb{E}\\left[ \\frac{\\nabla^3 P(Y,X)}{P(Y,X)} \\middle| Y\\right] -3 \\mathbb{E}\\left[ \\frac{\\nabla^2 P(Y,X)}{P(Y,X)} \\middle| Y\\right] \\mathbb{E}\\left[ \\frac{\\nabla P(Y,X)}{P(Y,X)} \\middle| Y\\right] + 2 \\left\\{ \\mathbb{E}\\left[ \\frac{\\nabla P(Y,X)}{P(Y,X)} \\middle| Y\\right]\\right\\}^3\\\\\n& = \\mathbb{E}\\left[ \\nabla^3 \\ell(Y,X) + 3 \\nabla^2 \\ell(Y,X) \\nabla \\ell(Y,X) + (\\nabla \\ell(Y,X))^3 \\middle| Y\\right]\\\\\n&\\quad -3 \\mathbb{E}\\left[ \\nabla^2 \\ell(Y,X) + (\\nabla \\ell(Y,X))^2 \\middle| Y\\right] \\mathbb{E}\\left[ \\nabla \\ell(Y,X) \\middle| Y\\right]+ 2 \\left\\{ 
\\mathbb{E}\\left[ \\nabla \\ell(Y,X) \\middle| Y\\right]\\right\\}^3,\n\\end{align*}\nand collecting terms. $\\nabla^4 \\ell(Y)$, $\\nabla^5 \\ell(Y)$, and $\\nabla^6 \\ell(Y)$ are derived similarly.\n\\end{proof}\n\n\\subsubsection{Auxiliary lemmas}\n\nWe first collect the notation. Define $\\overline{\\bf Z}_{k-1}^k:=(X_{k-1},\\overline{\\bf Y}_{k-1},W_k,X_k,Y_k)$ and denote the derivative of the complete-data log-density by\n\\begin{equation} \\label{phi_i}\n\\phi^i(\\vartheta,\\overline{\\bf Z}_{k-1}^k) :=\\nabla^i \\log p_{\\vartheta}(Y_k,X_k|\\overline {\\bf Y}_{k-1},X_{k-1},W_k), \\quad i\\geq 1.\n\\end{equation}\nWe use the shorthand notation $\\phi^i_{\\vartheta k}:=\\phi^i(\\vartheta,\\overline{\\bf Z}_{k-1}^k)$. We also suppress the superscript $1$ from $\\phi^1_{\\vartheta k}$, so that $\\phi_{\\vartheta k}=\\phi^1_{\\vartheta k}$. \nFor random variables $V_1,\\ldots,V_q$ and a conditioning set $\\mathcal{F}$, define the central conditional moment of $(V_1,\\ldots,V_q)$ as\n\\begin{align*}\n\\mathbb{E}_\\vartheta^c \\left[V_1,\\ldots,V_q \\middle|\\mathcal{F} \\right] & := \\mathbb{E}_{\\vartheta} \\left[ \\left(V_1-\\mathbb{E}_{\\vartheta}[V_1|\\mathcal{F}]\\right) \\cdots \\left(V_q-\\mathbb{E}_{\\vartheta}[V_q|\\mathcal{F}]\\right) \\middle| \\mathcal{F}\\right].\n\\end{align*}\nFor example, $\\mathbb{E}_{\\vartheta}^c \\left[\\phi_{\\vartheta k_1}\\phi_{\\vartheta k_2} \\middle|\\mathcal{F} \\right] := \\mathbb{E}_{\\vartheta} \\left[ \\left( \\phi_{\\vartheta k_1}-\\mathbb{E}_{\\vartheta}[\\phi_{\\vartheta k_1}|\\mathcal{F}]\\right) \\left( \\phi_{\\vartheta k_2}-\\mathbb{E}_{\\vartheta}[\\phi_{\\vartheta k_2}|\\mathcal{F}]\\right) \\middle| \\mathcal{F}\\right]$.\n \nLet $\\mathcal{I}(j) = (i_1,\\ldots,i_j)$ denote a sequence of positive integers with $j$ elements, let $\\sigma(\\mathcal{I}(j))$ denote the set of all distinct permutations of $(i_1,\\ldots,i_j)$, and let $|\\sigma(\\mathcal{I}(j))|$ denote its cardinality. 
For example, if $\\mathcal{I}(3) = (2,1,1)$, then $\\sigma(\\mathcal{I}(3)) = \\{(2,1,1), (1,2,1), (1,1,2)\\}$ and $|\\sigma(\\mathcal{I}(3))|=3$; if $\\mathcal{I}(3) = (1,1,1)$, then $\\sigma(\\mathcal{I}(3)) = \\{(1,1,1)\\}$ and $|\\sigma(\\mathcal{I}(3))|=1$. Let $\\mathcal{T}(j) = (t_1,\\ldots,t_j)$ for $j=1,\\ldots,6$. For a conditioning set $\\mathcal{F}$, define the symmetrized central conditional moments as \n\\begin{equation} \\label{Phi_defn}\n\\begin{aligned}\n\\Phi^{\\mathcal{I}(1)}_{\\vartheta \\mathcal{T}(1)}[\\mathcal{F}]& :=\n\\mathbb{E}_{\\vartheta}\\left[ \\phi^{i_1}_{\\vartheta t_1} \\middle| \\mathcal{F}\\right] ,\n\\quad\n\\Phi^{\\mathcal{I}(2)}_{\\vartheta \\mathcal{T}(2)}[\\mathcal{F}] :=\n\\frac{1}{|\\sigma(\\mathcal{I}(2))|} \\sum_{(\\ell_1,\\ell_2) \\in \\sigma(\\mathcal{I}(2))} \\mathbb{E}_{\\vartheta}^c\\left[ \\phi^{\\ell_1}_{\\vartheta t_1} \\phi^{\\ell_2}_{\\vartheta t_2} \\middle| \\mathcal{F}\\right],\n\\\\\n\\Phi^{\\mathcal{I}(3)}_{\\vartheta \\mathcal{T}(3)}[\\mathcal{F}] & := \n\\frac{1}{|\\sigma(\\mathcal{I}(3))|} \\sum_{(\\ell_1,\\ell_2,\\ell_3) \\in \\sigma(\\mathcal{I}(3))} \\mathbb{E}_{\\vartheta}^c\\left[ \\phi^{\\ell_1}_{\\vartheta t_1} \\phi^{\\ell_2}_{\\vartheta t_2} \\phi^{\\ell_3}_{\\vartheta t_3} \\middle| \\mathcal{F}\\right], \\\\\n\\Phi^{\\mathcal{I}(4)}_{\\vartheta \\mathcal{T}(4)}[\\mathcal{F}] & := \\frac{1}{|\\sigma(\\mathcal{I}(4))|} \\sum_{(\\ell_1,\\ldots,\\ell_4) \\in \\sigma(\\mathcal{I}(4))} \\tilde \\Phi_{\\vartheta \\mathcal{T}(4)}^{\\ell_1 \\ell_2 \\ell_3 \\ell_4}, \n\\end{aligned}\n\\end{equation}\nwhere $\\tilde \\Phi_{\\vartheta \\mathcal{T}(4)}^{\\ell_1 \\ell_2 \\ell_3 \\ell_4} :=\\mathbb{E}_{\\vartheta}^c[ \\phi^{\\ell_1}_{\\vartheta t_1} \\phi^{\\ell_2}_{\\vartheta t_2} \\phi^{\\ell_3}_{\\vartheta t_3} \\phi^{\\ell_4}_{\\vartheta t_4} | \\mathcal{F}] - \\mathbb{E}_{\\vartheta}^c[\\phi^{\\ell_1}_{\\vartheta t_1} \\phi^{\\ell_2}_{\\vartheta t_2}|\\mathcal{F}] \\mathbb{E}_{\\vartheta}^c[\\phi^{\\ell_3}_{\\vartheta 
t_3} \\phi^{\\ell_4}_{\\vartheta t_4}|\\mathcal{F}] - \\mathbb{E}_{\\vartheta}^c[\\phi^{\\ell_1}_{\\vartheta t_1} \\phi^{\\ell_3}_{\\vartheta t_3}|\\mathcal{F}] \\mathbb{E}_{\\vartheta}^c[\\phi^{\\ell_2}_{\\vartheta t_2} \\phi^{\\ell_4}_{\\vartheta t_4}|\\mathcal{F}] - \\mathbb{E}_{\\vartheta}^c[\\phi^{\\ell_1}_{\\vartheta t_1} \\phi^{\\ell_4}_{\\vartheta t_4}|\\mathcal{F}]\\mathbb{E}_{\\vartheta}^c[\\phi^{\\ell_2}_{\\vartheta t_2} \\phi^{\\ell_3}_{\\vartheta t_3}|\\mathcal{F}] $, and\n\\begin{equation} \\label{Phi_defn_2}\n\\begin{aligned} \n\\Phi^{\\mathcal{I}(5)}_{\\vartheta \\mathcal{T}(5)}[\\mathcal{F}] & := \\frac{1}{|\\sigma(\\mathcal{I}(5))|} \\sum_{(\\ell_1,\\ldots,\\ell_5) \\in \\sigma(\\mathcal{I}(5))} \\left( \\mathbb{E}_\\vartheta^c\\left[ \\phi^{\\ell_1}_{\\vartheta t_1} \\phi^{\\ell_2}_{\\vartheta t_2} \\phi^{\\ell_3}_{\\vartheta t_3} \\phi^{\\ell_4}_{\\vartheta t_4} \\phi^{\\ell_5}_{\\vartheta t_5} \\middle| \\mathcal{F}\\right] \\right. \\\\\n& \\left. \\qquad - \\sum_{(\\{a,b,c\\},\\{d,e\\}) \\in \\sigma_5} \\mathbb{E}_\\vartheta^c\\left[\\phi^{\\ell_{a}}_{\\vartheta t_{a}} \\phi^{\\ell_b}_{\\vartheta t_b}\\phi^{\\ell_c}_{\\vartheta t_c} \\middle|\\mathcal{F}\\right] \\mathbb{E}_\\vartheta^c\\left[ \\phi^{\\ell_d}_{\\vartheta t_d} \\phi^{\\ell_e}_{\\vartheta t_e} \\middle| \\mathcal{F}\\right] \\right), \\\\\n\\Phi^{\\mathcal{I}(6)}_{\\vartheta \\mathcal{T}(6)}[\\mathcal{F}]\n &:= \\mathbb{E}_\\vartheta^c\\left[ \\phi_{\\vartheta t_1} \\phi_{\\vartheta t_2} \\phi_{\\vartheta t_3} \\phi_{\\vartheta t_4} \\phi_{\\vartheta t_5} \\phi_{\\vartheta t_6} \\middle| \\mathcal{F}\\right] - \\sum_{(\\{a,b,c,d\\},\\{e,f\\}) \\in \\sigma_{61}} \\mathbb{E}_\\vartheta^c\\left[ \\phi_{\\vartheta t_a} \\phi_{\\vartheta t_b} \\phi_{\\vartheta t_c} \\phi_{\\vartheta t_d} \\middle| \\mathcal{F}\\right] \\mathbb{E}_\\vartheta^c\\left[ \\phi_{\\vartheta t_e} \\phi_{\\vartheta t_f} \\middle| \\mathcal{F}\\right] \\\\\n&\\qquad - \\sum_{(\\{a,b,c\\},\\{d,e,f\\}) \\in \\sigma_{62}} \\mathbb{E}_\\vartheta^c\\left[ \\phi_{\\vartheta t_a} \\phi_{\\vartheta t_b} \\phi_{\\vartheta t_c} \\middle| \\mathcal{F}\\right] \\mathbb{E}_\\vartheta^c\\left[ \\phi_{\\vartheta t_d} \\phi_{\\vartheta t_e} \\phi_{\\vartheta t_f} \\middle| \\mathcal{F}\\right] \\\\\n&\\qquad + 2 \\sum_{(\\{a,b\\},\\{c,d\\},\\{e,f\\}) \\in \\sigma_{63}} \\mathbb{E}_\\vartheta^c\\left[ \\phi_{\\vartheta t_a} \\phi_{\\vartheta t_b} \\middle| \\mathcal{F}\\right] \\mathbb{E}_\\vartheta^c\\left[ \\phi_{\\vartheta t_c} \\phi_{\\vartheta t_d} \\middle| \\mathcal{F}\\right] \\mathbb{E}_\\vartheta^c\\left[ \\phi_{\\vartheta t_e} \\phi_{\\vartheta t_f} \\middle| \\mathcal{F}\\right],\n\\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\\label{partitions}\n\\begin{aligned}\n\\sigma_5 &:= \\text{the set of } {\\textstyle{5 \\choose 3} =10} \\text{ partitions of } \\{1,2,3,4,5\\} \\text{ of the form } \\{a,b,c\\},\\{d,e\\},\\\\\n\\sigma_{61} &:= \\text{the set of } {\\textstyle{6 \\choose 4} = 15} \\text{ partitions of } \\{1,2,3,4,5,6\\} \\text{ of the form } \\{a,b,c,d\\},\\{e,f\\}, \\\\\n\\sigma_{62} &:= \\text{the set of } {\\textstyle{6 \\choose 3}\/2 = 10} \\text{ partitions of } \\{1,2,3,4,5,6\\} \\text{ of the form } \\{a,b,c\\},\\{d,e,f\\}, \\\\\n\\sigma_{63} &:= \\text{the set of } {\\textstyle{6 \\choose 2}\\textstyle{4 \\choose 2}\/6 = 15} \\text{ partitions of } \\{1,2,3,4,5,6\\} \\text{ of the form } \\{a,b\\},\\{c,d\\},\\{e,f\\}.\n\\end{aligned}\n\\end{equation} \nNote that these moments are symmetric with respect to $(t_1,\\ldots,t_j)$. 
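As an aside (not part of the formal argument), the cardinalities quoted in (\\ref{partitions}) and the permutation counts $|\\sigma(\\mathcal{I}(j))|$ used above are easy to cross-check by enumeration. The following Python sketch (the helper \\texttt{partitions\\_into} is our own illustrative construction) enumerates the unordered partitions of $\\{1,\\ldots,5\\}$ and $\\{1,\\ldots,6\\}$ into blocks of the stated sizes:

```python
from itertools import combinations, permutations

def partitions_into(universe, sizes):
    """All unordered partitions of `universe` into blocks of the given sizes."""
    found = set()

    def rec(remaining, blocks, sizes_left):
        if not sizes_left:
            found.add(frozenset(blocks))  # frozenset of blocks: order-free
            return
        for combo in combinations(sorted(remaining), sizes_left[0]):
            rec(remaining - set(combo), blocks + [frozenset(combo)], sizes_left[1:])

    rec(frozenset(universe), [], list(sizes))
    return found

# |sigma(I(3))| for I(3) = (2,1,1) and (1,1,1): numbers of distinct permutations
assert len(set(permutations((2, 1, 1)))) == 3
assert len(set(permutations((1, 1, 1)))) == 1

# Cardinalities of the partition classes sigma_5, sigma_61, sigma_62, sigma_63
print(len(partitions_into(range(1, 6), (3, 2))))     # 10
print(len(partitions_into(range(1, 7), (4, 2))))     # 15
print(len(partitions_into(range(1, 7), (3, 3))))     # 10
print(len(partitions_into(range(1, 7), (2, 2, 2))))  # 15
```

The four printed counts match ${5 \\choose 3}=10$, ${6 \\choose 4}=15$, ${6 \\choose 3}\/2=10$, and ${6 \\choose 2}{4 \\choose 2}\/6=15$, respectively.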
For $j=1,2,\\ldots,6$, $k \\geq 1$, $m \\geq 0$, and $x \\in \\mathcal{X}$, define the difference between the sums of the $\\Phi^{\\mathcal{I}(j)}_{\\vartheta \\mathcal{T}(j)}$'s over different time indices and conditioning sets as \n\\begin{equation}\\label{delta-tau}\n\\begin{aligned}\n\\Delta^{\\mathcal{I}(j)}_{j,k,m,x}(\\vartheta) &:= \\sum_{\\mathcal{T}(j)\\in \\{-m+1,\\ldots,k\\}^j} {\\Phi}^{\\mathcal{I}(j)}_{\\vartheta \\mathcal{T}(j)}\\left[\\overline{\\bf Y}_{-m}^k,{\\bf W}_{-m}^{k},X_{-m}=x\\right] \\\\\n& \\quad - \\sum_{\\mathcal{T}(j)\\in \\{-m+1,\\ldots,k-1\\}^j} {\\Phi}^{\\mathcal{I}(j)}_{\\vartheta \\mathcal{T}(j)}\\left[\\overline{\\bf Y}_{-m}^{k-1},{\\bf W}_{-m}^{k-1},X_{-m}=x\\right], \n\\end{aligned}\n\\end{equation} \nwhere $\\sum_{\\mathcal{T}(j)\\in \\{-m+1,\\ldots,k\\}^j}$ denotes $\\sum_{t_1=-m+1}^k\\sum_{t_2=-m+1}^k \\cdots \\sum_{t_j=-m+1}^k$, and $\\sum_{\\mathcal{T}(j) \\in \\{-m+1,\\ldots,k-1\\}^j}$ is defined similarly. Define $\\overline{\\Delta}^{\\mathcal{I}(j)}_{j,k,m}(\\vartheta)$ analogously to $\\Delta^{\\mathcal{I}(j)}_{j,k,m,x}(\\vartheta)$ by dropping $X_{-m}=x$ from the conditioning variable.\n\nHenceforth, we suppress the conditioning variable ${\\bf W}_{-m}^n$ from the conditioning sets and conditional densities unless confusion might arise. The following lemma expresses the derivatives of the log-densities, $ \\nabla^j \\ell_{k,m,x}(\\vartheta)$'s, in terms of the $\\Delta^{\\mathcal{I}(j)}_{j,k,m,x}(\\vartheta)$'s. The first two equations are also given in DMR (p. 2272 and pp. 
2276--7).\n\\begin{lemma} \\label{ell_lambda}\nFor all $1 \\leq k \\leq n$, $m \\geq 0$, and $x \\in \\mathcal{X}$,\n\\begin{align*}\n&\\nabla^1 \\ell_{k,m,x}(\\vartheta)=\\Delta^1_{1,k,m,x}(\\vartheta),\\quad \n\\nabla^2 \\ell_{k,m,x}(\\vartheta)=\\Delta^2_{1,k,m,x}(\\vartheta)+\\Delta^{1,1}_{2,k,m,x}(\\vartheta),\\\\ \n&\\nabla^3 \\ell_{k,m,x}(\\vartheta)= \\Delta^{3}_{1,k,m,x}(\\vartheta)+3\\Delta^{2,1}_{2,k,m,x}(\\vartheta)+\\Delta^{1,1,1}_{3,k,m,x}(\\vartheta),\\\\\n&\\nabla^4 \\ell_{k,m,x}(\\vartheta)= \\Delta^4_{1,k,m,x}(\\vartheta) + 4 \\Delta^{3,1}_{2,k,m,x}(\\vartheta) + 3 \\Delta^{2,2}_{2,k,m,x}(\\vartheta) + 6\\Delta^{2,1,1}_{3,k,m,x}(\\vartheta) + \\Delta^{1,1,1,1}_{4,k,m,x}(\\vartheta),\\\\\n&\\nabla^5 \\ell_{k,m,x}(\\vartheta)= \\Delta^5_{1,k,m,x}(\\vartheta)+ 5\\Delta^{4,1}_{2,k,m,x}(\\vartheta) + 10 \\Delta^{3,2}_{2,k,m,x}(\\vartheta) +10\\Delta^{3,1,1}_{3,k,m,x}(\\vartheta)+15 \\Delta^{2,2,1}_{3,k,m,x}(\\vartheta)\\\\\n& \\quad +10\\Delta^{2,1,1,1}_{4,k,m,x}(\\vartheta) +\\Delta^{1,1,1,1,1}_{5,k,m,x}(\\vartheta), \\\\\n&\\nabla^6 \\ell_{k,m,x}(\\vartheta)= \\Delta^6_{1,k,m,x}(\\vartheta) + 6\\Delta^{5,1}_{2,k,m,x}(\\vartheta) + 15 \\Delta^{4,2}_{2,k,m,x}(\\vartheta) + 10 \\Delta^{3,3}_{2,k,m,x}(\\vartheta)+ 15\\Delta^{4,1,1}_{3,k,m,x}(\\vartheta) \\\\\n& \\quad + 60\\Delta^{3,2,1}_{3,k,m,x}(\\vartheta) + 15\\Delta^{2,2,2}_{3,k,m,x}(\\vartheta) + 20 \\Delta^{3,1,1,1}_{4,k,m,x}(\\vartheta) + 45 \\Delta^{2,2,1,1}_{4,k,m,x}(\\vartheta) + 15 \\Delta^{2,1,1,1,1}_{5,k,m,x}(\\vartheta) +\\Delta^{1,1,1,1,1,1}_{6,k,m,x}(\\vartheta). 
\n\\end{align*} \nFurther, the above holds when $\\nabla^j \\ell_{k,m,x}(\\vartheta)$ and $\\Delta^{\\mathcal{I}(j)}_{j,k,m,x}(\\vartheta)$ are replaced with $\\nabla^j \\overline \\ell_{k,m}(\\vartheta)$ and $\\overline\\Delta^{\\mathcal{I}(j)}_{j,k,m}(\\vartheta)$.\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma \\ref{ell_lambda}]\nThe stated result follows from writing $\\nabla^j \\ell_{k,m,x}(\\vartheta) = \\nabla^j \\log p_{\\vartheta}({\\bf Y}_{-m+1}^{k}|\\overline{\\bf{Y}}_{-m},X_{-m}=x) - \\nabla^j \\log p_{\\vartheta}({\\bf{Y}}_{-m+1}^{k-1}|\\overline{\\bf{Y}}_{-m},X_{-m}=x)$, applying Lemma \\ref{louis} to the right-hand side, and noting that $\\nabla^j \\log p_{\\vartheta}({\\bf Y}_{-m+1}^k,{\\bf X}_{-m+1}^k|\\overline{\\bf{Y}}_{-m},X_{-m}) = \\sum_{t=-m+1}^k \\phi^j(\\vartheta,\\overline{\\bf Z}_{t-1}^t)$ (see (\\ref{cond_density}) and (\\ref{phi_i})). The result for $\\nabla^j \\ell_{k,m,x}(\\vartheta)$ with $j=1,2$ is also given in DMR (p. 2272 and pp. 2276--7). For $j=3$, the term $\\Delta^{2,1}_{2,k,m,x}(\\vartheta)$ follows from $\\sum_{t_1=-m+1}^k \\sum_{t_2=-m+1}^k \\mathbb{E}_\\vartheta^c[ \\phi^{2}_{\\vartheta t_1} \\phi^{1}_{\\vartheta t_2} |\\overline{\\bf Y}_{-m}^k,X_{-m}=x] = \\sum_{t_1=-m+1}^k \\sum_{t_2=-m+1}^k\\Phi^{2,1}_{\\vartheta t_1t_2}[\\overline{\\bf Y}_{-m}^k,X_{-m}=x]$. For $j=4$, note that when we apply Lemma \\ref{louis} to $\\nabla^4 \\log p_{\\vartheta}({\\bf Y}_{-m+1}^{k}|\\overline{\\bf{Y}}_{-m},X_{-m}=x)$, the last two terms on the right-hand side of Lemma \\ref{louis} can be written as $\\sum_{\\mathcal{T}(4)\\in\\{-m+1,\\ldots,k\\}^4 } {\\Phi}^{1,1,1,1}_{\\vartheta \\mathcal{T}(4)}[\\overline{\\bf Y}_{-m}^k,X_{-m}=x]$. The result for $j=5$ follows from a similar argument. 
For $j=6$, note that when we apply Lemma \\ref{louis} to $\\nabla^6 \\log p_{\\vartheta}({\\bf Y}_{-m+1}^{k}|\\overline{\\bf{Y}}_{-m},X_{-m}=x)$, the last four terms on the right-hand side of Lemma \\ref{louis} can be written as $\\sum_{\\mathcal{T}(6)\\in\\{-m+1,\\ldots,k\\}^6} {\\Phi}^{\\mathcal{I}(6)}_{\\vartheta \\mathcal{T}(6)}[\\overline{\\bf Y}_{-m}^k,X_{-m}=x]$.\n\\end{proof}\n\nThe following lemma provides bounds on $\\Phi^{\\mathcal{I}(j)}_{\\vartheta \\mathcal{T}(j)}[\\mathcal{F}]$ defined in (\\ref{Phi_defn}) and (\\ref{Phi_defn_2}) and is used in the proof of Lemma \\ref{lemma-bound-1}. For $j =2,\\ldots,6$, define $\\|\\phi^i_t\\|_{\\infty}:=\\sup_{\\vartheta\\in \\mathcal{N}^*} \\sup_{x,x'}|\\phi^i(\\vartheta,Y_t,x,\\overline{\\bf{Y}}_{t-1},x')|$ and $\\|\\phi^{\\mathcal{I}(j)}_{\\mathcal{T}(j)}\\|_{\\infty}:=\\sum_{(\\ell_1,\\ldots,\\ell_j)\\in\\sigma(\\mathcal{I}(j)) }\\|\\phi_{t_1}^{\\ell_1}\\|_{\\infty} \\cdots \\|\\phi_{t_j}^{\\ell_j}\\|_{\\infty}$.\n\\begin{lemma}\\label{lemma_ijl}\nUnder Assumptions \\ref{assn_a1}, \\ref{assn_a2}, and \\ref{assn_a4}, there exists a finite non-stochastic constant $C$ that does not depend on $\\rho$ such that, for all $m'\\geq m\\geq 0$, all $-m0$ and $C \\in (0,\\infty)$. Then, $\\max_{1 \\leq i \\leq n} |X_i|= o_p(n^{1\/q})$.\n\\end{lemma}\n\\begin{proof}[Proof of Lemma \\ref{max_bound}]\nFor any $\\varepsilon>0$, we have $\\mathbb{P}(\\max_{1\\leq i\\leq n}|X_i|>\\varepsilon n^{1\/q}) \\leq \\sum_{1\\leq i\\leq n} \\mathbb{P}( |X_i|>\\varepsilon n^{1\/q}) \\leq \\varepsilon^{-q} n^{-1} \\sum_{1\\leq i\\leq n} \\mathbb{E}(|X_i|^q\\mathbb{I}{\\{|X_i|>\\varepsilon n^{1\/q}\\}})$ by a version of Markov's inequality. As $n\\rightarrow \\infty$, the right-hand side tends to 0 by the dominated convergence theorem. \n\\end{proof} \n \nThe following lemma extends Corollary 1 and (39) of DMR and an equation on p. 2298 of DMR; DMR derive these results when $t_1=t_2$ and $t_3=t_4$ and ${\\bf W}^n_{-m}$ is absent. 
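The tail bound used in the proof of Lemma \\ref{max_bound} can be illustrated numerically. The sketch below is our own illustration, not part of the paper; the i.i.d.\\ Pareto example and the function name \\texttt{max\\_tail\\_bound} are assumptions made for concreteness. For a Pareto($\\alpha$) variable with density $\\alpha x^{-\\alpha-1}$ on $[1,\\infty)$ and $\\alpha > q$, the truncated moment has the closed form $\\mathbb{E}[X^q\\mathbb{I}\\{X>t\\}] = \\alpha t^{q-\\alpha}\/(\\alpha-q)$ for $t \\geq 1$, so the bound $\\varepsilon^{-q} n^{-1}\\sum_{i}\\mathbb{E}(|X_i|^q\\mathbb{I}\\{|X_i|>\\varepsilon n^{1\/q}\\})$ can be evaluated exactly and is seen to vanish as $n$ grows:

```python
# Illustrative check (not from the paper) of the bound in the proof of Lemma max_bound:
#   P(max_{i<=n} |X_i| > eps n^{1/q}) <= eps^{-q} n^{-1} sum_i E[|X_i|^q 1{|X_i| > eps n^{1/q}}].
# For i.i.d. Pareto(alpha) variables with alpha > q, the truncated moment is
#   E[X^q 1{X > t}] = alpha / (alpha - q) * t^{q - alpha}   for t >= 1,
# so the right-hand side has a closed form.

def max_tail_bound(n, q=2.0, alpha=3.0, eps=1.0):
    t = eps * n ** (1.0 / q)                               # threshold eps * n^{1/q} (>= 1 here)
    trunc_moment = alpha / (alpha - q) * t ** (q - alpha)  # E[X^q 1{X > t}]
    return eps ** (-q) * trunc_moment                      # n^{-1} times n identical summands

bounds = [max_tail_bound(n) for n in (10, 100, 1000, 10000)]
print(bounds)  # strictly decreasing toward 0
```

With $q=2$, $\\alpha=3$, and $\\varepsilon=1$ the bound equals $3\/\\sqrt{n}$, so it decays to zero at a polynomial rate, consistent with $\\max_{1\\leq i\\leq n}|X_i| = o_p(n^{1\/q})$.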
For any two probability measures $\\mu_1$ and $\\mu_2$, the total variation distance between $\\mu_1$ and $\\mu_2$ is defined as $\\|\\mu_1-\\mu_2\\|_{TV}:=\\sup_{A}|\\mu_1(A)-\\mu_2(A)|$. $\\|\\cdot\\|_{TV}$ satisfies $\\sup_{f(x):0 \\leq f(x) \\leq 1}|\\int f(x) d\\mu_1(x)- \\int f(x) d\\mu_2(x)| = \\|\\mu_1-\\mu_2\\|_{TV}$. In the following, we define $\\overline{\\bf{V}}_{-m}^n := (\\overline{\\bf{Y}}_{-m}^n,{\\bf W}_{-m}^n)$, and we let $\\overline{\\bf{v}}_{-m}^n$ and $x_{-m}$ denote ``$\\overline{\\bf{V}}_{-m}^n = \\overline{\\bf{v}}_{-m}^n$'' and ``$X_{-m}=x_{-m}$.''\n\\begin{lemma} \\label{x_diff}\nSuppose Assumptions \\ref{assn_a1}--\\ref{assn_a2} hold and $\\vartheta_x \\in \\Theta_x$. Then, we have, for all $\\overline{\\bf{v}}_{-m}^n$,\\\\\n(a) For all $-m \\leq t_1 \\leq t_2$ with $-m-m$ because the stated result holds trivially when $t_1=-m$. Observe that Lemma 1 of DMR still holds when ${\\bf W}_{-m}^n$ is added to the conditioning variable because Assumption \\ref{assn_a1} implies that $\\{(X_k,\\overline{\\bf Y}_{k})\\}_{k=0}^{\\infty}$ is a Markov chain given $\\{W_k\\}_{k=0}^{\\infty}$. Therefore, $\\{X_{t}\\}_{t \\geq - m}$ is a Markov chain when conditioned on $\\{ \\overline{\\bf{Y}}_{-m}^{n},{\\bf W}_{-m}^n \\}$, and hence $\\mathbb{P}_{\\vartheta_x}({\\bf X}_{t_1}^{t_2} \\in A |\\overline{\\bf{v}}_{-m}^n,x_{-m}) = \\sum_{x_{t_1} \\in \\mathcal{X}}\\mathbb{P}_{\\vartheta_x}({\\bf X}_{t_1}^{t_2} \\in A |X_{t_1}=x_{t_1},\\overline{\\bf{v}}_{-m}^n) p_{\\vartheta_x}(x_{t_1}|\\overline{\\bf{v}}_{-m}^n,x_{-m})$ holds. Applying this result and the property of the total variation distance, we can bound the left-hand side of the lemma by $\\| \\sum_{x_{-m}\\in\\mathcal{X}} p_{\\vartheta_x}(X_{t_1} \\in \\cdot |\\overline{\\bf{v}}_{-m}^n,x_{-m}) \\mu_1(x_{-m}) - \\sum_{x_{-m}\\in\\mathcal{X}} p_{\\vartheta_x}(X_{t_1} \\in \\cdot |\\overline{\\bf{v}}_{-m}^n,x_{-m})\\mu_2(x_{-m}) \\|_{TV}$. 
This is bounded by $\\rho^{t_1+m}$ from Corollary 1 of DMR, which holds when ${\\bf W}_{-m}^n$ is added to the conditioning variable. Therefore, part (a) is proven.\n\nWe proceed to prove part (b). Observe that the time-reversed process $\\{Z_{n-k}\\}_{0 \\leq k\\leq n+m}$ is Markov when conditioned on ${\\bf W}_{-m}^n$ and that $W_k$ is independent of $({\\bf X}_{0}^{k-1},\\overline{\\bf Y}_{0}^{k-1})$ given ${\\bf W}_{0}^{k-1}$. Consequently, for $k=n,n-1$, we have $\\mathbb{P}_{\\vartheta_x}({\\bf X}_{t_1}^{t_2} \\in A|\\overline{\\bf{v}}_{-m}^k,x_{-m}) = \\sum_{x_{t_2} \\in \\mathcal{X}} \\mathbb{P}_{\\vartheta_x}({\\bf X}_{t_1}^{t_2} \\in A|X_{t_2}=x_{t_2},\\overline{\\bf{v}}_{-m}^{t_2},x_{-m}) p_{\\vartheta_x}(x_{t_2}|\\overline{\\bf{v}}_{-m}^{k},x_{-m})$. Therefore, from the property of the total variation distance, the left-hand side of the lemma is bounded by $\\|\\mathbb{P}_{\\vartheta_x}(X_{t_2} \\in \\cdot |\\overline{\\bf{v}}_{-m}^{n},x_{-m}) - \\mathbb{P}_{\\vartheta_x}(X_{t_2} \\in \\cdot |\\overline{\\bf{v}}_{-m}^{n-1},x_{-m})\\|_{TV}$. This is bounded by $\\rho^{n-1-t_2}$ because equation (39) of DMR p.\\ 2294 holds when ${\\bf W}_{-m}^n$ is added to the conditioning variables, and the stated result follows. 
When $x_{-m}$ is dropped from the conditioning variables, part (b) follows from a similar argument using Lemma 9 and an analogue of Corollary 1 of DMR in place of equation (39) of DMR.\n\nPart (c) follows immediately from writing the left-hand side of the lemma as $\\sup_{A,B}| \\mathbb{P}_{\\vartheta_x}({\\bf X}_{t_1}^{t_2} \\in A|\\overline{\\bf{v}}_{-m}^n,x_{-m}) [ \\mathbb{P}_{\\vartheta_x}({\\bf X}_{t_3}^{t_4} \\in B |\\overline{\\bf{v}}_{-m}^n, {\\bf X}_{t_1}^{t_2} \\in A) - \\mathbb{P}_{\\vartheta_x}({\\bf X}_{t_3}^{t_4} \\in B|\\overline{\\bf{v}}_{-m}^n,x_{-m})]|$ and applying part (a).\n\\end{proof}\n\n\\subsubsection{The sums of powers of $\\rho$} \n\\begin{lemma}\\label{two_rho_sum}\nFor all $\\rho \\in (0,1)$, $c \\geq 1$, $q \\geq 1$, and $b>a$,\n\\begin{align*}\n& \\sum_{t=-\\infty}^{\\infty} \\left(\\rho^{\\lfloor (t-a)\/cq \\rfloor}\\wedge \\rho^{\\lfloor(b-t)\/q\\rfloor}\\right) \\leq \\frac{q(c+1)\\rho^{\\lfloor (b-a)\/(c+1)q \\rfloor}}{ 1-\\rho}, \\\\\n& \\sum_{t=-\\infty}^{\\infty} \\left(\\rho^{\\lfloor(t-a)\/q \\rfloor}\\wedge \\rho^{\\lfloor (b-t)\/cq \\rfloor}\\right) \\leq \\frac{q(c+1)\\rho^{\\lfloor (b-a)\/(c+1)q \\rfloor}}{ 1-\\rho}.\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma \\ref{two_rho_sum}]\nThe first result holds because the left-hand side is bounded by \\\\$\\sum_{t=-\\infty}^{\\lfloor (a+bc)\/(c+1) \\rfloor }\\rho^{\\lfloor(b-t)\/q\\rfloor} + \\sum_{t=\\lfloor (a+bc)\/(c+1) \\rfloor +1}^{\\infty}\\rho^{\\lfloor (t-a)\/cq \\rfloor} \\leq q \\rho^{\\lfloor \\{ b- \\lfloor (a+bc)\/(c+1) \\rfloor \\}\/q \\rfloor }\/(1-\\rho) + cq\\rho^{\\lfloor \\{ \\lfloor (a+bc)\/(c+1) \\rfloor +1 -a \\} \/ cq \\rfloor}\/(1-\\rho) \\leq q (1+c) \\rho^{\\lfloor (b-a)\/(c+1)q \\rfloor}\/(1-\\rho)$. 
The second result is proven by bounding the left-hand side by $\\sum_{t=-\\infty}^{\\lfloor (ac+b)\/(c+1) \\rfloor }\\rho^{\\lfloor(b-t)\/cq\\rfloor} + \\sum_{t=\\lfloor (ac+b)\/(c+1) \\rfloor +1}^{\\infty}\\rho^{\\lfloor (t-a)\/q \\rfloor}$ and proceeding similarly.\n\\end{proof}\n\nThe following lemma generalizes the result in the last inequality on p. 2299 of DMR. \n\\begin{lemma}\\label{rho_sum}\nFor all $\\rho \\in (0,1)$, $k \\geq 1 $, $q \\geq 1$, and $n \\geq 0$,\n\\[\n\\sum_{0\\leq t_1\\leq t_2 \\leq \\cdots \\leq t_k \\leq n} \\left( \\rho^{\\lfloor t_1 \/q \\rfloor}\\wedge \\rho^{\\lfloor (t_2-t_1)\/q \\rfloor} \\wedge \\cdots \\wedge \\rho^{\\lfloor (t_k-t_{k-1})\/q \\rfloor} \\wedge \\rho^{\\lfloor (n-t_k)\/q \\rfloor} \\right) \\leq C_{kq}(\\rho) \\rho^{\\lfloor n\/2kq \\rfloor},\n\\]\nwhere $C_{kq}(\\rho):= q^{k} k(k+1)! (1-\\rho)^{-k}$.\n\\end{lemma}\n\\begin{proof}[Proof of Lemma \\ref{rho_sum}]\nWhen $k=1$, the stated result follows from Lemma \\ref{two_rho_sum} with $c=1$. We first show that the following holds for $k\\geq 2$:\n\\begin{equation} \\label{rho_k_bound}\n\\sum_{t_1\\leq t_2 \\leq \\cdots \\leq t_k \\leq n} \\left(\\rho^{\\lfloor (t_2-t_1)\/q \\rfloor} \\wedge \\cdots \\wedge \\rho^{\\lfloor (t_k-t_{k-1})\/q \\rfloor} \\wedge \\rho^{\\lfloor (n-t_k)\/q \\rfloor}\\right) \\leq \\frac{q^{k-1} (k+1)! \\rho^{\\lfloor (n-t_1)\/kq \\rfloor }}{(1-\\rho)^{k-1}}.\n\\end{equation}\nWe prove (\\ref{rho_k_bound}) by induction. When $k=2$, it follows from Lemma \\ref{two_rho_sum} with $c=1$ that \\\\$\\sum_{t_2=t_1}^{n}(\\rho^{\\lfloor (t_2-t_1)\/q \\rfloor} \\wedge \\rho^{\\lfloor (n-t_2)\/q \\rfloor}) \\leq 2q \\rho^{\\lfloor (n-t_1)\/2q \\rfloor} \/(1-\\rho)$, giving (\\ref{rho_k_bound}). Suppose (\\ref{rho_k_bound}) holds when $k=\\ell$. 
Then (\\ref{rho_k_bound}) holds when $k=\\ell+1$ because, from Lemma \\ref{two_rho_sum},\n\\begin{align*}\n& \\sum_{t_1\\leq t_2 \\leq \\cdots \\leq t_{\\ell} \\leq t_{\\ell+1} \\leq n}\\left(\\rho^{\\lfloor (t_2-t_1)\/q \\rfloor} \\wedge \\rho^{\\lfloor (t_3-t_2)\/q \\rfloor} \\wedge \\cdots \\wedge \\rho^{\\lfloor (t_{\\ell+1}-t_{\\ell})\/q \\rfloor} \\wedge \\rho^{\\lfloor (n-t_{\\ell+1})\/q \\rfloor}\\right) \\\\\n&\\leq \\sum_{t_2=t_1}^{n} \\left( \\rho^{\\lfloor (t_2-t_1)\/q \\rfloor} \\wedge \\sum_{t_2 \\leq \\cdots \\leq t_{\\ell+1} \\leq n}\\left( \\rho^{\\lfloor (t_3-t_2)\/q \\rfloor} \\wedge \\cdots \\wedge \\rho^{\\lfloor (t_{\\ell+1}-t_{\\ell})\/q \\rfloor} \\wedge \\rho^{\\lfloor (n-t_{\\ell+1})\/q \\rfloor} \\right) \\right)\\\\\n& \\leq \\frac{q^{\\ell-1} \\ell!}{ (1-\\rho)^{\\ell-1}} \\sum_{t_2=t_1}^{n} \\left(\\rho^{\\lfloor (t_2-t_1)\/q \\rfloor}\\wedge \\rho^{\\lfloor (n-t_2)\/\\ell q \\rfloor}\\right) \\\\\n& \\leq \\frac{q^{\\ell}(\\ell+1)!}{(1-\\rho)^{\\ell}} \\rho^{\\lfloor (n-t_1)\/(\\ell+1)q \\rfloor},\n\\end{align*}\nand hence (\\ref{rho_k_bound}) holds for all $k \\geq 2$. We proceed to show the stated result. 
Observe that\n\\begin{align*}\n& \\sum_{0\\leq t_1\\leq t_2 \\leq \\cdots \\leq t_k \\leq n} \\left( \\rho^{\\lfloor t_1 \/q \\rfloor}\\wedge \\rho^{\\lfloor (t_2-t_1)\/q \\rfloor} \\wedge \\cdots \\wedge \\rho^{\\lfloor (t_k-t_{k-1})\/q \\rfloor} \\wedge \\rho^{\\lfloor (n-t_k)\/q \\rfloor} \\right) \\\\\n& \\leq 2 \\sum_{t_1=0}^{n\/2}\\sum_{t_1\\leq t_2 \\leq \\cdots \\leq t_{k-1}\\leq t_k} \\sum_{t_k = t_1}^{n-t_1} \\left( \\rho^{\\lfloor t_1 \/q \\rfloor}\\wedge \\rho^{\\lfloor (t_2-t_1)\/q \\rfloor} \\wedge \\cdots \\wedge \\rho^{\\lfloor (t_k-t_{k-1})\/q \\rfloor} \\wedge \\rho^{\\lfloor (n-t_k)\/q \\rfloor} \\right) \\\\\n&= 2 \\sum_{t_1=0}^{n\/2}\\sum_{t_1\\leq t_2 \\leq \\cdots \\leq t_{k-1}\\leq t_k} \\sum_{t_k = t_1}^{n-t_1} \\left( \\rho^{\\lfloor (t_2-t_1)\/q \\rfloor} \\wedge \\cdots \\wedge \\rho^{\\lfloor (t_k-t_{k-1})\/q \\rfloor} \\wedge \\rho^{\\lfloor (n-t_k)\/q \\rfloor} \\right) \\\\\n& \\leq 2 \\sum_{t_1=0}^{n\/2}\\sum_{t_1\\leq t_2 \\leq \\cdots \\leq t_{k-1}\\leq t_k \\leq n} \\left( \\rho^{\\lfloor (t_2-t_1)\/q \\rfloor} \\wedge \\cdots \\wedge \\rho^{\\lfloor (t_k-t_{k-1})\/q \\rfloor} \\wedge \\rho^{\\lfloor (n-t_k)\/q \\rfloor} \\right), \n\\end{align*}\nwhere the first inequality holds by symmetry, and the subsequent equality follows from $n-t_k \\geq t_1$. From (\\ref{rho_k_bound}), the right-hand side is no larger than $q^{k-1} (k+1)! (1-\\rho)^{(1-k)} \\sum_{t_1=0}^{n\/2}\\rho^{\\lfloor (n-t_1)\/kq \\rfloor } \\leq q^{k} k(k+1)! (1-\\rho)^{-k}\\rho^{\\lfloor n\/2kq \\rfloor}$, giving the stated result.\n\\end{proof}\n\n\nThe next lemma generalizes equation (46) and p. 2294 of DMR, who derive a similar bound when $\\ell=1,2$.\n\\begin{lemma} \\label{hoelder}\nLet $a_j>0$ for all $j$. 
For all integers $\\ell \\geq 1$, $k\\geq 1$, and $m\\geq 0$, we have $\\max_{-m+1\\leq t_1, \\ldots, t_\\ell\\leq k} a_{t_1} \\cdots a_{t_\\ell} \\leq (k+m)^{\\ell+1} A_\\ell$, where $A_\\ell := \\sum_{t=-\\infty}^{\\infty} (|t|\\vee 1)^{-2} a_t^\\ell$.\n\\end{lemma}\n\\begin{proof}[Proof of Lemma \\ref{hoelder}]\nWhen $\\ell=1$, the stated result follows from $\\max_{-m+1\\leq t\\leq k} a_{t} \\leq \\sum_{t=-m+1}^k a_t = \\sum_{t=-m+1}^k (|t|\\vee 1)^2(|t|\\vee 1)^{-2} a_t \\leq (k+m)^2\\sum_{t=-\\infty}^\\infty(|t|\\vee 1)^{-2} a_t $. When $\\ell \\geq 2$, from H\\\"older's inequality, we have $\\max_{-m+1\\leq t_1\\leq \\ldots \\leq t_\\ell\\leq k} a_{t_1} a_{t_2} \\cdots a_{t_\\ell} \\leq (\\sum_{t=-m+1}^k a_t)^\\ell = [ \\sum_{t=-m+1}^k (|t|\\vee 1)^{2\/\\ell} (|t|\\vee 1)^{-2\/\\ell}a_t ] ^\\ell \\leq [ \\sum_{t=-m+1}^k (|t|\\vee 1)^{2\/(\\ell-1)} ]^{(\\ell-1)} \\sum_{t=-m+1}^k (|t|\\vee 1)^{-2} a_t^\\ell \\leq [(k+m)^{1+2\/(\\ell-1)}]^{\\ell-1} A_\\ell = (k+m)^{\\ell+1} A_\\ell$.\n\\end{proof}\n\nThe following lemma generalizes the bound derived on p. 2301 of DMR.\n\\begin{lemma} \\label{rho_m}\nFor $\\alpha>0$, $q >0$, and $c_{jt}\\geq0$, define $c_{jq}^\\infty(\\rho^{\\alpha}) := \\sum_{t=-\\infty}^{\\infty} \\rho^{\\lfloor \\alpha |t|\/q \\rfloor} c_{jt}$. For all $\\rho \\in (0,1)$, $k\\geq 1$, and $0\\leq m\\leq m'$,\n\\begin{equation} \\label{rho_6}\n\\begin{aligned}\n&\\sum_{t_1=-m'+1}^{-m}\\sum_{t_1 \\leq t_2 \\leq t_3 \\leq t_4\\leq t_5 \\leq t_6 \\leq k } \\left(\\rho^{\\lfloor (k-1-t_6)\/q \\rfloor} \\wedge \\rho^{\\lfloor (t_6-t_5)\/q \\rfloor} \\wedge \\rho^{\\lfloor (t_5-t_4)\/q \\rfloor} \\wedge\\rho^{\\lfloor (t_4-t_3)\/q \\rfloor} \\wedge \\right. \\\\\n& \\left. 
\\qquad \\rho^{\\lfloor (t_3 -t_2)\/q \\rfloor} \\wedge \\rho^{\\lfloor (t_2-t_1)\/q \\rfloor}\\right)\\prod_{j=1}^6 c_{jt_j} \\leq \\rho^{\\lfloor (k-1+m)\/2qa_7\\rfloor} c_{1q}^{\\infty}\\left(\\rho^{1\/2a_7}\\right) \\prod_{j=2}^6 c_{jq}^{\\infty}\\left(\\rho^{1\/4a_j}\\right),\n\\end{aligned}\n\\end{equation}\nwhere $(a_j,b_j)$ are defined recursively with $(a_2,b_2)=(1,1)$ and, for $j \\geq 2$,\n$a_{j+1} = 4a_j(a_j+b_j)\/(2a_j-1)$ and $b_{j+1} = a_j(4b_j-1)\/(2a_j-1)$.\n$a_{j}$ and $b_{j}$ satisfy $a_{j}, b_{j} \\geq 3\/2$ for all $j \\geq 3$. Direct calculations using Matlab produce $a_7 \\doteq 334.5406$.\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma \\ref{rho_m}]\nFirst, observe that the following result holds for $a,b>1\/4$, $t_1\\leq 0$, and $t_j, t_{j+1} \\geq t_1$:\n\\begin{equation} \\label{t_ab}\n\\begin{aligned}\n(a) & \\text{ if } t_j \\leq \\frac{at_{j+1}+t_1}{a+b}, \\quad \\text {then } \\frac{|t_j|}{4a} \\leq \\frac{a(4a+1)t_{j+1}+(2a-1)t_1}{4a(a+b)} - t_j, \\\\\n(b) & \\text{ if } t_j \\geq \\frac{at_{j+1}+t_1}{a+b}, \\quad \\text {then } \\frac{|t_j|}{4a} \\leq \\frac{b}{a}t_j - \\frac{a(4b-1)t_{j+1}+(2a+4b+1)t_1}{4a(a+b)}.\n\\end{aligned}\n\\end{equation}\n(a) holds because (i) when $t_j \\leq 0$, we have $t_j \\leq (at_{j+1}+t_1)\/(a+b) \\Rightarrow (4a-1)t_j\/4a \\leq [a(4a-1)t_{j+1}+(4a-1)t_1]\/4a(a+b) \\Rightarrow -t_j\/4a \\leq [a(4a-1)t_{j+1}+(4a-1)t_1]\/4a(a+b)-t_j$ and $a(4a-1)t_{j+1}+(4a-1)t_1 \\leq a(4a-1)t_{j+1}+(4a-1)t_1+2a(t_{j+1}-t_1) = a(4a+1)t_{j+1}+(2a-1)t_1$; (ii) when $t_j \\geq 0$, we have $t_j \\leq (at_{j+1}+t_1)\/(a+b) \\Rightarrow (4a+1)t_j\/4a \\leq [a(4a+1)t_{j+1}+(4a+1)t_1]\/4a(a+b) \\Rightarrow t_j\/4a \\leq [a(4a+1)t_{j+1}+(4a+1)t_1]\/4a(a+b)- t_j$ and $(4a+1)t_1 \\leq (2a-1)t_1$.\n\n(b) holds because (i) when $t_j \\leq 0$, we have $t_j \\geq (at_{j+1}+t_1)\/(a+b) \\Rightarrow (4b+1)t_j\/4a \\geq [a(4b+1)t_{j+1}+(4b+1)t_1]\/4a(a+b) \\Rightarrow -t_j\/4a \\leq bt_j\/a - [a(4b+1)t_{j+1}+(4b+1)t_1]\/4a(a+b)$ and 
$a(4b+1)t_{j+1}+(4b+1)t_1 \\geq a(4b+1)t_{j+1}+(4b+1)t_1 -2a(t_{j+1}-t_1) = a(4b-1)t_{j+1}+(2a+4b+1)t_1$; (ii) when $t_j \\geq 0$, we have $t_j \\geq (at_{j+1}+t_1)\/(a+b) \\Rightarrow (4b-1)t_j\/4a \\geq [a(4b-1)t_{j+1}+(4b-1)t_1]\/4a(a+b) \\Rightarrow t_j\/4a \\leq bt_j\/a - [a(4b-1)t_{j+1}+(4b-1)t_1]\/4a(a+b)$ and $a(4b-1)t_{j+1}+(4b-1)t_1 \\geq a(4b-1)t_{j+1}+(2a+4b+1)t_1$.\n\nWe proceed to derive the stated bound. It follows from (a) and (b) and $\\lfloor x + y \\rfloor \\geq \\lfloor x \\rfloor +\\lfloor y \\rfloor$ that, with $\\overline t_j = (a_jt_{j+1}+t_1)\/(a_j+b_j)$,\n\\begin{align}\n& \\sum_{t_j=-m'+1}^{k}\\left( \\rho^{\\lfloor (t_{j+1} -t_j)\/q \\rfloor} \\wedge \\rho^{\\lfloor (b_j t_j-t_1)\/a_jq \\rfloor}\\right) c_{jt_j} \\nonumber \\\\\n& \\leq \\rho^{\\lfloor \\frac{a_j(4b_j-1)t_{j+1} -(2a_j-1)t_1}{4a_j(a_j+b_j)q} \\rfloor} \\left( \\sum_{t_j\\leq\\overline t_j} \\rho^{\\lfloor\\frac{a_j(4a_j+1)t_{j+1}+(2a_j-1)t_1}{4a_j(a_j+b_j)q} - \\frac{t_j}{q} \\rfloor} + \\sum_{t_j\\geq\\overline t_j} \\rho^{\\lfloor \\frac{b_j}{a_jq}t_j - \\frac{a_j(4b_j-1)t_{j+1}+(2a_j+4b_j+1)t_1}{4a_j(a_j+b_j)q} \\rfloor}\\right) c_{jt_j}\\nonumber \\\\\n& \\leq \\rho^{\\lfloor \\frac{a_j(4b_j-1)t_{j+1} -(2a_j-1)t_1}{4a_j(a_j+b_j)q}\\rfloor} c_{jq}^{\\infty}\\left(\\rho^{1\/4a_j}\\right) \\nonumber \\\\\n& = \\rho^{\\lfloor\\frac{b_{j+1}t_{j+1} -t_1}{a_{j+1}}\\rfloor} c_{jq}^{\\infty} \\left(\\rho^{1\/4a_j}\\right). \\label{rho_j_sum}\n\\end{align}\nObserve that $a_{j+1} \\geq 2a_j \\geq 2$ and $b_{j+1} \\geq 2b_j - (1\/2) \\geq 3\/2$ for all $j \\geq 2$. Therefore, we can apply (\\ref{t_ab}) and (\\ref{rho_j_sum}) to the left-hand side of (\\ref{rho_6}) sequentially for $j=2,3,\\ldots,6$. 
Consequently, the left-hand side of (\\ref{rho_6}) is no larger than\n\\[\n\\sum_{t_1=-m'+1}^{-m}\\rho^{\\lfloor \\frac{b_7(k-1) -t_1}{a_7q} \\rfloor} c_{1t_1} \\prod_{j=2}^6 c_{jq}^{\\infty}\\left(\\rho^{1\/4a_j}\\right).\n\\]\nObserve that $|t_1|\\leq k-1-2t_1-m$ because $t_1 \\leq -m \\Rightarrow -t_1 \\leq -2t_1-m \\leq k-1-2t_1-m$. From $b_7(k-1) \\geq k-1 $ and $|t_1|\\leq k-1-2t_1-m$, the sum is bounded by\n\\[\n\\sum_{t_1=-m'+1}^{-m}\\rho^{\\lfloor \\frac{k-1 -t_1}{a_7 q} \\rfloor} c_{1t_1} = \\rho^{\\lfloor\\frac{k-1+m}{2a_7q}\\rfloor} \\sum_{t_1=-m'+1}^{-m}\\rho^{\\lfloor\\frac{k-1-2t_1-m}{2a_7q}\\rfloor} c_{1t_1} \\leq \\rho^{\\lfloor\\frac{k-1+m}{2a_7q}\\rfloor} c_{1q}^{\\infty}\\left(\\rho^{1\/2a_7}\\right),\n\\]\nand the stated result follows.\n\\end{proof}\n\n\\subsubsection{Derivation of $\\vartheta_{M_0+1,x} = (\\vartheta_{xm}',\\pi_{xm}')'$ and $\\pi_{xm}=(\\varrho_m,\\alpha_m,\\phi_m')'$}\\label{subsec:p_m_repara}\nDefine $\\overline J_{m0} := \\{1,\\ldots,M_0\\}\\setminus J_m$, and let $p_j$ and $p^*_j$ denote $\\mathbb{P}_{\\vartheta_{M_0+1}}(X_k=j)$ and $\\mathbb{P}_{\\vartheta_{M_0}^*}(X_k=j)$, respectively.\n\nWe parameterize the transition probability of $X_k$ in terms of its stationary distribution and the first to the $(m-1)$-th rows and the $(m+1)$-th to the $(M_0+1)$-th rows of its transition matrix.\\footnote{Suppose a Markov process has a transition probability $P$ and stationary distribution $\\pi$ whose elements are strictly positive. If $\\pi$ and all the rows of $P$ except for one are identified, then the remaining row of $P$ is identified from the relation $\\pi P = \\pi$.}\nFor $i \\in \\overline J_m$, we reparameterize $(p_{im},p_{i,m+1})$ to $p_{iJ} = p_{im}+p_{i,m+1}=\\mathbb{P}_{\\vartheta_{M_0+1}}(X_k \\in J_m|X_{k-1}=i)$ and $p_{im|iJ} = p_{im}\/(p_{im}+p_{i,m+1})$. 
Furthermore, we reparameterize $(p_m,p_{m+1})$ in the stationary distribution to $p_J = p_m + p_{m+1}=\\mathbb{P}_{\\vartheta_{M_0+1}}(X_k \\in J_m)$ and $p_{m|J}=p_m\/(p_m + p_{m+1})=\\mathbb{P}_{\\vartheta_{M_0+1}}(X_k=m|X_k \\in J_m)$. Therefore, with $\\land$ and $\\lor$ denoting ``and'' and ``or,'' the transition probability of $X_k$ is summarized by \n$\\vartheta_{M_0+1,x} :=( \\{p_{iJ},p_{im|iJ} \\}_{i \\in \\overline J_m}, \\{p_{ij}\\}_{i \\in \\overline J_m \\land j \\in \\overline J_{m0}}, \\{p_{m+1,j}\\}_{j=1}^{M_0}, \\{p_{j}\\}_{j \\in \\overline J_{m0}}, p_J, p_{m|J})$.\n\nSplit $\\vartheta_{M_0+1,x}$ as $\\vartheta_{M_0+1,x}=(\\vartheta_{xm}',\\pi_{xm}')'$, where $\\vartheta_{xm} := ( \\{p_{ij}\\}_{i \\in \\overline J_m \\land j \\in \\overline J_{m0}}, \\{p_{iJ} \\}_{i \\in \\overline J_m}, \\{p_{j}\\}_{j \\in \\overline J_{m0}}, p_J)$ and $\\pi_{xm} :=( \\{p_{im|iJ} \\}_{i \\in \\overline J_m}, \\{p_{m+1,j}\\}_{j=1}^{M_0}, p_{m|J})$.\nWhen the $m$-th and $(m+1)$-th regimes are combined into one regime, the transition probability of $X_k$ equals the transition probability of $X_k$ under $\\vartheta_{M_0,x}^*$ if and only if $\\vartheta_{xm} = \\vartheta_{xm}^*:= \\{p_{ij} = p_{ij}^{*}\\ \\text{for}\\ i \\in \\overline J_m \\land(1 \\leq j \\leq m-1); p_{ij} = p_{i,j-1}^{*}\\ \\text{for}\\ i \\in \\overline J_m \\land (m+2 \\leq j \\leq M_0); p_{iJ} = p_{im}^{*}\\ \\text{for } i\\in \\bar J_m;p_j = p_{j}^* \\ \\text{for}\\ 1 \\leq j \\leq m-1; p_j = p_{j-1}^* \\ \\text{for}\\ m+2 \\leq j \\leq M_0; p_J = p_m^* \\}$. $\\pi_{xm}$ is the part of $\\vartheta_{M_0+1,x}$ that is not identified under $H_{0m}$.\n\nWe proceed to derive the reparameterization of some elements of $\\pi_{xm}$ in terms of $(\\alpha_m,\\varrho_m)$. 
First, map $p_{m+1,m}$ and $p_{m+1,m+1}$ to $p_{m+1,J}:=p_{m+1,m}+p_{m+1,m+1}=\\mathbb{P}_{\\vartheta_{M_0+1}}(X_k \\in J|X_{k-1}=m+1)$ and $p_{m+1,m|J}:=p_{m+1,m}\/p_{m+1,J}=\\mathbb{P}_{\\vartheta_{M_0+1}}(X_k=m|X_k \\in J,X_{k-1}=m+1)$. Let $P_J$ and $\\pi_J$ denote the transition matrix and stationary distribution of $X_k$ restricted to lie in $J_m$. The second row of $P_J$ is given by $(p_{m+1,m|J},1-p_{m+1,m|J})$, and $\\pi_J$ is given by $(p_{m|J},1-p_{m|J})$. From the relation $\\pi_J= \\pi_J P_J$, we can obtain the first row of $P_J$ as a function of $p_{m+1,m|J}$ and $p_{m|J}$. Finally, the elements of $P_J$ are mapped to $(\\varrho_m,\\alpha_m)$ as in Section \\ref{sec: testing-1}.\n\n\\clearpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\nThe variation of the specific intensity over a star's disk, commonly \ncalled limb darkening or center-to-limb variation, is an important \nfunction of the physical structure of a stellar atmosphere. Because it \nis difficult to observe limb darkening for stars other than the \nSun, it is common to represent limb darkening by analytic expressions \nwhose coefficients are determined by matching the $I_\\lambda(\\mu)$ \nfrom model stellar atmospheres. Here $\\mu = \\cos \\theta$, where \n$\\theta$ is the angle between the vertical direction at that point on \nthe stellar disk and the direction toward the distant observer. 
\nThe most basic form of the law is the linear version, depending on \n$\\mu$ to the first power, such as \n\\begin{equation}\n\\frac{I_\\lambda(\\mu)}{I_\\lambda(\\mu = 1)} = \\mu,\n\\end{equation}\nor, more generally,\n\\begin{equation}\n\\frac{I_\\lambda(\\mu)}{I_\\lambda(\\mu = 1)} = 1 - u (1 - \\mu).\n\\end{equation}\nLimb-darkening laws have subsequently become more complex by including \nterms with $\\mu$, or its alternative $r = \\sin \\theta$, raised to \nhigher integer powers as well as both $\\sqrt{\\mu}$ and $\\log(\\mu)$ \nterms \\citep{2000A&A...363.1081C, Howarth2010}. In addition to minimizing the fit \nto the model's $I_\\lambda(\\mu)$, the laws have also introduced \nvarious constraints, such as enforcing flux conservation at each \nobserved wavelength. The intricacies of limb-darkening laws have \nincreased as model stellar atmospheres have advanced.\n\n\\citet{2000PhDT........11H,2003ApJ...594..464H,2007ApJ...656..483H}, \nmotivated by the potential of gravitational microlensing to provide \nparticularly detailed measurements of stellar limb darkening, \ninvestigated the optimum method of extracting limb darkening from the \ndata. One curious feature that emerged from these studies is the \nexistence of a fixed $\\mu$ location on the stellar disk where the \nfits to the normalized model intensities have nearly the same value \nindependent of wavelength or the model's $T_\\mathrm{eff}$ or \n$\\log g$. This is true for both the principal-component \nanalysis developed by \\citet{2003ApJ...594..464H} and the more \ntraditional linear limb-darkening law used by others.\n\n\\citet{2003ApJ...594..464H} used models computed with the \n\\textsc{Atlas12} code \\citep{1996IAUS..176..523K} to construct his \nfitting procedure. 
These models use detailed opacity sampling to \ninclude many tens of millions of spectral lines in the radiative \ntransfer, but they still assumed plane-parallel geometry, even though \n\\citet{2003ApJ...594..464H} targeted red giants with \n$T_\\mathrm{eff} \\leq 4000$ K and $\\log g \\leq 1.0$. These stellar \nparameters are exactly those where the assumption of plane-parallel \ngeometry should break down. In the study by \n\\citet{2003A&A...412..241C}, which did use the spherical \nmodels of \\citet{1999ApJ...525..871H}, the focus was on models with \n$\\log g \\geq 3.5$, for which spherical extension is minimal. A \nbroader study of limb darkening and the fixed $\\mu$-point using \nspherically extended model atmospheres is clearly needed.\n\n\n\\section{\\textsc{SAtlas} model atmospheres} \\label{sec:satlas_models}\n\n\\citet{2008A&A...491..633L} have developed the \\textsc{SAtlas} code, \na spherically extended version of \\textsc{Atlas}. This code shares \nwith \\textsc{Atlas} the properties of static pressure structure, LTE \npopulations and massive line blanketing represented by either opacity \ndistribution functions or opacity sampling (the faster opacity \ndistribution function version is used here). The spherical program \ndiffers from the plane-parallel \\textsc{Atlas} by allowing the gravity \nto vary with radial position, and by computing the radiative transfer \nalong rays through the atmosphere in the direction of the distant \nobserver using the \\citet{1971JQSRT..11..589R} version of the \n\\citet{1964CR...258..3189F} method, which accounts for the radial \nvariation of the angle between the vertical and the direction of the \nray. The structure of the atmosphere is computed using a total of 81 \nrays whose angular spacing is determined by two factors. The first is \nthe distribution of rays chosen to represent the ``core'' of the \natmosphere, the region where the lower boundary condition for radiative \ntransfer is the diffusion approximation. 
\\citet{2008A&A...491..633L} \nfound that different distributions of the core rays had no effect on \nthe structure of the atmosphere, and so elected to use equal spacings \nin $\\mu$. The remainder of the rays are tangent to the atmospheric \nlevels at the stellar radius perpendicular to the central ray toward \nthe observer. These rays are projections of the radial spacing of the \natmospheric levels, which is logarithmic in the Rosseland mean optical \ndepth. Because this distribution of rays is set by calculating the \nstructure of the atmosphere, it is not necessarily optimal for studying \nlimb darkening. Therefore, after computing the structure of the \natmosphere, the surface intensities are derived from the structure rays \nby cubic spline interpolation for any desired number of rays with any \ndesired distribution over the disk. As \\citet{2007ApJ...656..483H} \nhas demonstrated, cubic spline provides an excellent interpolation \nmethod. We have created our surface intensities at 1000 points equally \nspaced over $0 \\leq \\mu \\leq 1$.\n\n\\section{Fixed point of stellar limb-darkening laws} \n\\label{sec:fixed_pt}\n\n\\citet{2000PhDT........11H} found the fixed point in the limb-darkening \nprofiles of both the Sun and in plane-parallel \\textsc{Atlas} models of \nred giants. \\citet{2003ApJ...596.1305F} found a similar fixed point \nin their analysis of a K3 giant using spherical \\textsc{Phoenix} models,\nalthough they excluded the limb from their intensity analysis because \nthe low intensity near the limb contributed almost nothing to the \nobservations they were analyzing. Their truncation point ranged from \n$r = 0.998$ for $\\log g = 3.5$, corresponding to $\\mu = 0.063$, to \n$r = 0.88$ for $\\log g = 0.0$, corresponding to $\\mu = 0.475$. However,\nthe truncation eliminated that part of the surface brightness that \ndeviates most strongly from the plane-parallel model. 
\n\\citet{2003ApJ...596.1305F} stated that the fixed point is a generic \nfeature of any single-parameter limb-darkening law that conserves flux. \nFor example, if the limb darkening is represented as \n$f(\\mu) \\equiv I(\\mu)\/F = 2[1 + Ax(\\mu)]$, the fixed point is \n$\\mu_\\mathrm{fixed} = f^{-1} \n[2 \\int_0^1 f(\\mu') \\mu' \\mathrm{d} \\mu']$. \nAlthough they also state that the fixed point is not required by the \ntwo-parameter law they employed, they found a fixed point in the \nmicrolensing observations they were modeling using such a law.\n\nTo explore the parametrization more closely, we begin with the same \ntwo-parameter normalized limb-darkening function used by \n\\citet{2003ApJ...596.1305F}, \n\\begin{equation} \\label{eq:fields}\n\\frac{I(\\mu)}{2\\mathcal{H}} \n= 1 - A \\left (1 - \\frac{3}{2}\\mu \\right )\n - B \\left (1 - \\frac{5}{4}\\sqrt{\\mu} \\right ),\n\\end{equation}\nwhere $\\mathcal{H}$ is the Eddington flux, defined as\n\\begin{equation}\n\\mathcal{H} \\equiv \\frac{1}{2} \\int_{-1}^{1} I(\\mu) \\mu \\mathrm{d} \\mu\n = \\int_{0}^{1} I(\\mu) \\mu \\mathrm{d} \\mu.\n\\end{equation}\nA fixed point requires, for any two arbitrary models, that the \nintensity profiles satisfy\n\\begin{equation}\nI_1(\\mu_0) = I_2(\\mu_0),\n\\end{equation}\nor in terms of Eq.~\\ref{eq:fields}\n\\begin{eqnarray} \\label{eq:fields_fix}\n\\nonumber &&1 - A_1 \\left (1 - \\frac{3}{2} \\mu_0 \\right )\n - B_1 \\left (1 - \\frac{5}{4} \\sqrt{\\mu_0} \\right ) = \\\\ \n&& \\mbox{\\hspace{1.5cm}}1 - A_2 \\left (1 - \\frac{3}{2} \\mu_0 \\right )\n - B_2 \\left (1 - \\frac{5}{4} \\sqrt{\\mu_0} \\right ).\n\\end{eqnarray}\nFor this to be true, $A$ and $B$ must be linearly dependent, \n$A = \\alpha B + \\beta$ or $A_1-A_2 = \\alpha (B_1-B_2)$.
Substituting \nthis relation into Eq.~\\ref{eq:fields_fix} yields an equation \nfor the fixed-point \n\\begin{equation} \\label{eq:fp}\n\\frac{3}{2} \\alpha \\mu_0 + \\frac{5}{4} \\sqrt{\\mu_0} -1 - \\alpha = 0.\n\\end{equation}\nWe need to verify that $A = f(B)$ and to understand the properties of \nthe parameter $\\alpha$.\n\nApplying a general least-squares method to Eq.~\\ref{eq:fields},\n\\begin{equation}\n\\chi^2 = \\sum_i^N \\left[\\frac{I(\\mu_i)}{2\\mathcal{H}} - 1 \n + A\\left(1-\\frac{3}{2}\\mu_i\\right) \n + B \\left(1 - \\frac{5}{4} \\sqrt{\\mu_i}\\right) \\right]^2,\n\\end{equation}\nwe determine the coefficients $A$ and $B$ from the constraint equations \n\\begin{eqnarray} \\label{eq:lsa}\n \\nonumber \\frac{\\partial \\chi^2}{\\partial A}& = &\n \\sum_i^N\\left[\\frac{I(\\mu_i)}{2\\mathcal{H}} - 1 \n+ A\\left(1-\\frac{3}{2}\\mu_i\\right) \n+ B \\left(1 - \\frac{5}{4} \\sqrt{\\mu_i}\\right) \\right] \\\\\n&& \\times \\left(1 - \\frac{3}{2}\\mu_i\\right) = 0\n\\end{eqnarray}\nand \n\\begin{eqnarray} \\label{eq:lsb}\n \\nonumber \\frac{\\partial \\chi^2}{\\partial B}& =&\n\\sum_i^N \\left[\\frac{I(\\mu_i)}{2\\mathcal{H}} - 1 \n+ A\\left(1-\\frac{3}{2}\\mu_i\\right) \n+ B \\left(1 - \\frac{5}{4} \\sqrt{\\mu_i}\\right) \\right] \\\\\n&& \\times \\left(1 - \\frac{5}{4} \\sqrt{\\mu_i}\\right) = 0.\n\\end{eqnarray}\nMultiplying Eq.~\\ref{eq:lsa} and Eq.~\\ref{eq:lsb} by \n$\\Delta \\mu$ and then converting the summation to integration, \n$\\int_{0}^{1} \\mathrm{d} \\mu$, we create the equations that determine \n$A$ and $B$,\n\\begin{equation}\n\\label{eq:lsa2}\n\\frac{J}{2\\mathcal{H}} + \\frac{1}{4}A + \\frac{1}{6}B - 1 = 0\n\\end{equation}\nand\n\\begin{equation} \\label{eq:lsb2}\n- \\frac{5}{4} \n\\left[\\frac{1}{2\\mathcal{H}} \\int_0^1 I(\\mu) \\sqrt{\\mu} \\mathrm{d} \\mu \n\\right] + \n\\frac{J}{2\\mathcal{H}} + \\frac{1}{6}A + \\frac{11}{96}B - \\frac{1}{6} \n= 0,\n\\end{equation}\nwhere $J$ is the usual mean intensity. 
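The integrated constraint equations above form a 2x2 linear system in $(A,B)$, so solving that system should agree with a direct least-squares fit of the law to $I(\\mu)\/2\\mathcal{H}$. A minimal Python sketch of this check, assuming a simple linear test profile $I = 1 - u(1-\\mu)$ with $u = 0.6$ (an arbitrary illustration, not model-atmosphere output):

```python
import numpy as np

def trap(y, x):
    # Trapezoidal quadrature over a tabulated grid
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Illustrative linear test profile, I = 1 - u(1 - mu); u = 0.6 is an
# arbitrary choice for this check, not model-atmosphere output.
u = 0.6
mu = np.linspace(0.0, 1.0, 200001)
I = 1.0 - u * (1.0 - mu)

# Half-range moments appearing in the constraint equations
J = trap(I, mu)                  # mean intensity
H = trap(I * mu, mu)             # Eddington flux
P = trap(I * np.sqrt(mu), mu)    # int_0^1 I sqrt(mu) dmu

# The integrated constraint equations as a 2x2 system in (A, B)
M = np.array([[1.0 / 4.0, 1.0 / 6.0],
              [1.0 / 6.0, 11.0 / 96.0]])
rhs = np.array([1.0 - J / (2.0 * H),
                1.0 / 6.0 + (5.0 / 4.0) * P / (2.0 * H) - J / (2.0 * H)])
A_mom, B_mom = np.linalg.solve(M, rhs)

# Cross-check: direct least-squares fit of the law to I/2H
X = np.column_stack([-(1.0 - 1.5 * mu), -(1.0 - 1.25 * np.sqrt(mu))])
(A_lsq, B_lsq), *_ = np.linalg.lstsq(X, I / (2.0 * H) - 1.0, rcond=None)
```

For a purely linear profile the square-root term is unnecessary, so both routes return $B \\approx 0$ and the law reproduces $I\/2\\mathcal{H}$ exactly; for real model intensities both coefficients are nonzero.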
It is also useful to define an \nangular pseudo-moment of the intensity, \n\\begin{equation} \\label{eq:def_P}\n\\mathcal{P} \\equiv \\int_0^1 I(\\mu) \\sqrt{\\mu} \\mathrm{d}\\mu,\n\\end{equation}\nwhich allows Eq.~\\ref{eq:lsb2} to be written as\n\\begin{equation} \\label{eq:lsb3}\n- \\frac{5}{4} \\left[ \\frac{\\mathcal{P}}{2\\mathcal{H}} \\right ] + \n\\frac{J}{2\\mathcal{H}} + \\frac{1}{6}A + \\frac{11}{96}B - \\frac{1}{6} \n= 0.\n\\end{equation}\nBoth $J$ and $\\mathcal{P}$ are \ndetermined by the properties of the model stellar atmosphere or the \nstar whose observations are being fit by a limb-darkening law. \nEquation~\\ref{eq:lsa2} and Eq.~\\ref{eq:lsb2} clearly show that the \ncoefficients $A$ and $B$ are uniquely determined by $J$ and \n$\\mathcal{P}$. However, for $A$ and $B$ to be linearly dependent, \nthat is $A = \\alpha B + \\beta$, $J$ and $\\mathcal{P}$ must also be \nlinearly related.\n\nThe relation of $J$ to $\\mathcal{P}$ can be understood by following \nthe discussion of the diffusion approximation in \n\\citet{1978stat.book.....M}. This begins by representing the source \nfunction by a power series, which leads to the intensity being given by \n$I_\\nu(\\tau_\\nu, \\mu) = \\sum_{n = 0}^{\\infty} \\mu^n \n\\left [\\mathrm{d}^n B_\\nu(T) \/ \\mathrm{d}\\tau_\\nu^n \\right]$\n(Mihalas 1978, Eq. 2-88). Using this expansion in the definitions of \n$J$ and $\\mathcal{P}$, and keeping just the first-order term, gives the \nfamiliar results that $J_{\\nu} \\approx B_{\\nu}$ and \n$K_{\\nu} \\approx B_{\\nu}\/3$ plus the additional result that \n$\\mathcal{P}_{\\nu} \\approx 2 B_{\\nu}\/3$. Eliminating $B_{\\nu}$ between \nthe $J_{\\nu}$ and $\\mathcal{P}_{\\nu}$ expressions gives \n\\begin{equation} \\label{eq:P_J}\n\\mathcal{P}_{\\nu} = 2J_{\\nu}\/3,\n\\end{equation}\nconfirming the desired correlation. 
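This first-order result is easy to confirm numerically. A small sketch, assuming the half-range moment convention used in the text and an intensity of the diffusion form $I(\\mu) = B + \\mu \\, \\mathrm{d}B\/\\mathrm{d}\\tau$ (the values of $B$ and the gradient are arbitrary illustrations):

```python
import numpy as np

def trap(y, x):
    # Trapezoidal quadrature over a tabulated grid
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

mu = np.linspace(0.0, 1.0, 200001)

def moment_ratios(B, g):
    # First-order diffusion expansion of the outgoing intensity,
    # I = B + mu * dB/dtau, with g standing in for dB/dtau
    I = B + g * mu
    J = trap(I, mu)                # mean intensity, int_0^1 I dmu
    P = trap(I * np.sqrt(mu), mu)  # pseudo-moment
    K = trap(I * mu ** 2, mu)      # second angular moment
    return P / J, K / J

# Deep in the atmosphere the anisotropy g/B is negligible
eta_iso, kj_iso = moment_ratios(B=1.0, g=1e-6)

# Toward the surface the radiation field becomes forward-peaked
eta_peak, kj_peak = moment_ratios(B=1.0, g=0.5)
```

As the anisotropy vanishes, the ratios approach $\\mathcal{P}\/J = 2\/3$ and $K\/J = 1\/3$; a forward-peaked profile pushes both ratios above these values, the behavior of the less isotropic radiation field near the surface.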
Of course, as the atmosphere thins \nout toward space the diffusion approximation becomes less accurate, but, as will be seen, \nthe basic correlation of $\\mathcal{P}$ and $J$ is still there.\n\nUsing Eq.~\\ref{eq:P_J} we can combine Eq.~\\ref{eq:lsa2} and \nEq.~\\ref{eq:lsb2} to find that\n\\begin{equation} \\label{eq:a_fn_b}\nA = -0.694 B.\n\\end{equation}\nIn terms of the notation used earlier, $\\alpha = -0.694$ and \n$\\beta = 0$, or equivalently $\\Delta A = -0.694 \\Delta B$. Using this \nvalue for $\\alpha$ in Eq.~\\ref{eq:fp} leads to the equation for the \nfixed point, \n\\begin{equation}\n1.084 \\mu_0^2 - 0.925 \\mu_0 + 0.094 = 0.\n\\end{equation}\nThis quadratic equation yields \\emph{two} fixed points, not one. The \nsolutions are $\\mu_1 = 0.736$, corresponding to \n$\\theta_1 = 42.61^{\\degr}$ and $r_1 = 0.677$, and $\\mu_2 = 0.118$, \ncorresponding to $\\theta_2 = 83.22^{\\degr}$ and $r_2 = 0.993$.\n\nThe results above used the diffusion approximation to establish the \nrelationship between $J$ and $\\mathcal{P}$. To generalize these \nresults, we define a new variable, $\\eta$, as \n\\begin{equation} \\label{eq:eta_def}\n\\eta \\equiv \\frac{\\int I \\sqrt{\\mu} \\mathrm{d} \\mu}{J} \n = \\frac{\\mathcal{P}}{J}.\n\\end{equation}\nNote that $\\eta$ is a generalization of the relation in \nEq.~\\ref{eq:P_J}. Using the definition of $\\eta$, we replace \nthe variable $\\mathcal{P}$ in Eq.~\\ref{eq:lsb3} by $\\eta J$, and \nthen Eq.~\\ref{eq:lsa2} is used to eliminate the $J\/2\\mathcal{H}$ \nterms. The resulting equation is rearranged to the form $A = f(B)$,\nwhich is a linear equation of the same form as the previous equation \nfor $A$, namely $A = \\alpha B + \\beta$. Comparing the two equations \nfor $A$ we identify \n\\begin{equation} \\label{eq:alpha}\n\\alpha = - \\frac{(5 \\eta \/ 4 - 1)\/6 + 11\/96}\n {(5 \\eta \/ 4 - 1)\/4 + 1\/6}.\n\\end{equation}\nEquation~\\ref{eq:alpha} enables us to explore how $\\alpha$ \nchanges with $\\eta$. 
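Both the relation for $\\alpha$ as a function of $\\eta$ and the fixed-point equation are simple enough to evaluate directly, since substituting $x = \\sqrt{\\mu}$ turns the fixed-point equation into a quadratic. A minimal sketch (the helper names are illustrative):

```python
import numpy as np

def alpha_of_eta(eta):
    # Slope of the linear relation A = alpha * B + beta as a function of eta
    num = (5.0 * eta / 4.0 - 1.0) / 6.0 + 11.0 / 96.0
    den = (5.0 * eta / 4.0 - 1.0) / 4.0 + 1.0 / 6.0
    return -num / den

def fixed_points(alpha):
    # Fixed-point equation (3/2) alpha mu + (5/4) sqrt(mu) - 1 - alpha = 0,
    # rewritten as a quadratic in x = sqrt(mu)
    roots = np.roots([1.5 * alpha, 1.25, -1.0 - alpha])
    mus = sorted(float(x.real) ** 2 for x in np.atleast_1d(roots)
                 if abs(x.imag) < 1e-12 and x.real > 0)
    return mus  # increasing order: [mu_2, mu_1]

alpha_da = alpha_of_eta(2.0 / 3.0)   # diffusion-approximation value, -25/36
mu2, mu1 = fixed_points(alpha_da)
```

For $\\eta = 2\/3$ this recovers $\\alpha = -25\/36 \\approx -0.694$ and the two fixed points quoted above.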
In addition, because Eq~\\ref{eq:fp} is a \nfunction of $\\alpha$ only, we are also able to determine the \ndependence of the two fixed points, $\\mu_1$ and $\\mu_2$, on $\\eta$.\nThese dependencies are shown in Fig.~\\ref{fig:alpha_mu_eta}. Because \nthe value of $\\eta$ can be set to values other than 2\/3, the \nassumption of the diffusion approximation is no longer being used.\n\\begin{figure} \n \\resizebox{\\hsize}{!}{\\includegraphics{16623fg1.eps}}\n \\caption{Top panel: dependence of the slope of the function \n $A = \\alpha B + \\beta$ on the variable \n $\\eta = J^{-1} \\int I \\sqrt{\\mu} \\mathrm{d} \\mu$.\n Bottom panel: variation of the fixed points with\n $\\eta$.\n \\label{fig:alpha_mu_eta}\n }\n\\end{figure}\n\nFigure~\\ref{fig:alpha_mu_eta} shows that, except for \n$\\eta \\approx 0.267$, the value of $\\alpha$ is in the narrow range \n$-0.72 \\lesssim \\alpha \\lesssim -0.68$, which leads to the fixed points \nbeing $0.72 \\lesssim \\mu_1 \\lesssim 0.74$ and \n$0.08 \\lesssim \\mu_2 \\lesssim 0.14$. The divergence of $\\alpha$ to \n$\\pm \\infty$ as $\\eta$ approaches $\\approx 0.267$ is due to the \ndenominator of Eq.~\\ref{eq:alpha} approaching zero as \n$\\eta \\rightarrow 4\/15$. However, $\\eta$ can never equal $4\/15$. This \nis a consequence of the following inequalities, which are true because \n$\\mu \\leq 1$:\n\\begin{equation}\nJ = \\int I \\mathrm{d} \\mu \\geq \\int I \\sqrt{\\mu} \\mathrm{d} \\mu \n \\geq \\int I \\mu^2 \\mathrm{d} \\mu = K.\n\\end{equation}\nDividing through by $J$ this becomes\n\\begin{equation} \\label{eq:eta_lims}\n1 \\geq \\eta \\geq \\frac{K}{J}.\n\\end{equation}\nDeep in the atmosphere, where the diffusion approximation holds, \n$K\/J = 1\/3$. Moving toward the surface the atmosphere becomes more \ntransparent and the radiation becomes less isotropic, with the \nconsequence that $K\/J > 1\/3$. 
As a result, \n\\begin{equation} \\label{eq:eta_llim}\n\\eta \\geq 1\/3 > 4\/15 \\approx 0.267,\n\\end{equation}\nand the divergence cannot occur. Equation~\\ref{eq:alpha} can be used \nto set bounds on $\\alpha$ by using the lower bound on $\\eta$ from \nEq.~\\ref{eq:eta_llim} and the upper bound on $\\eta$ from \nEq.~\\ref{eq:eta_lims}. The result is $-5\/6 \\leq \\alpha \\leq -15\/22$.\nUsing this range of $\\alpha$ in Eq.~\\ref{eq:fp}, we find that the \nranges for the fixed points are $0.708 \\leq \\mu_1 \\leq 0.741$ and \n$0.025 \\leq \\mu_2 \\leq 0.130$, which are consistent with the values for \n$\\mu_1$ and $\\mu_2$ found previously. This indicates that the fixed \npoints exist in the limb-darkening relations and are stable to \ndifferences in the intensity profile of the stellar atmosphere. It \nalso suggests that the fixed points are not a result of a particular \ndominant opacity source. The only requirement is that $\\eta > 1\/3$.\n\nWe verify this analytic result by computing a grid of \\textsc{Atlas}\nplane-parallel model stellar atmospheres with \n$3000 \\ \\mathrm{K} \\leq T_\\mathrm{eff} \\leq 8000 \\ \\mathrm{K}$ and \n$-2 \\leq \\log g \\leq 3$. For this grid of models, the mean value of \n$\\eta$ in the $V-$band is \n$\\bar{\\eta}_\\mathrm{pp} = 0.716\\pm 0.004$, which leads to \n$\\alpha_\\mathrm{pp} = -0.691 \\pm 0.001$, $\\mu_1 = 0.7375\\pm 0.0002$ and\n$\\mu_2 = 0.1200\\pm 0.0003$. This is very similar to what is found \nassuming the diffusion approximation. The difference between the model \n$V-$band $\\eta$ and the value of $\\eta$ from the diffusion \napproximation is due to the fact that plane-parallel models do not \nenforce the diffusion approximation at all depths in the atmosphere. 
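These analytic bounds can be confirmed by sweeping $\\eta$ across its admissible interval; the same functions also reproduce the mean plane-parallel $V$-band values quoted above. A self-contained sketch (helper names illustrative):

```python
import numpy as np

def alpha_of_eta(eta):
    # Slope of the linear relation A = alpha * B + beta
    num = (5.0 * eta / 4.0 - 1.0) / 6.0 + 11.0 / 96.0
    den = (5.0 * eta / 4.0 - 1.0) / 4.0 + 1.0 / 6.0
    return -num / den

def fixed_points(alpha):
    # Fixed-point equation as a quadratic in x = sqrt(mu)
    roots = np.roots([1.5 * alpha, 1.25, -1.0 - alpha])
    return sorted(float(x.real) ** 2 for x in np.atleast_1d(roots)
                  if abs(x.imag) < 1e-12 and x.real > 0)

# Sweep eta over its admissible interval 1/3 <= eta <= 1
etas = np.linspace(1.0 / 3.0, 1.0, 1001)
alphas = np.array([alpha_of_eta(e) for e in etas])
mus = np.array([fixed_points(a) for a in alphas])   # columns: mu_2, mu_1

# Mean plane-parallel V-band value quoted in the text
alpha_pp = alpha_of_eta(0.716)
mu2_pp, mu1_pp = fixed_points(alpha_pp)
```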
\nTo determine the dependence on wavelength, we used the plane-parallel \ngrid of atmospheres to determine the mean value and 1-$\\sigma$ standard \ndeviation of $\\eta$, $\\alpha$ and the two fixed points $\\mu_{1}$ and \n$\\mu_{2}$, with the results shown in Fig.~\\ref{fig:wl_pp}. The values \nof the fixed points clearly do not vary significantly as a function of \nwavelength nor as a function of effective temperature and gravity. \nFurthermore, it must be noted that the value of $\\eta$ approaches $2\/3$ \nas $\\lambda \\rightarrow \\infty$.\n\\begin{figure}[t]\n \\begin{center}\n \\resizebox{\\hsize}{!}{\\includegraphics{16623fg2.eps}}\n \\end{center}\n \\caption{Wavelength dependence of the mean values of \n (a) the ratio of the pseudo-moment to the mean intensity,\n $\\eta$,\n (b) the value of $\\alpha$ that defines the fixed points,\n (c) the first and\n (d) second fixed point for the grid of plane-parallel \n \\textsc{Atlas} model atmospheres. The error bars \n represent 1-$\\sigma$ deviation from the mean.\n \\label{fig:wl_pp}\n }\n\\end{figure}\n\nThese results can be generalized to show that the fixed point occurs in \nother limb-darkening laws, such as a quadratic law where\n$I\/2\\mathcal{H} = 1 - A(1 - \\mu) - B(1 -2 \\mu^2)$. Repeating the \nderivation above, the pseudo-moment, $\\mathcal{P}$, is replaced by $K$, \na higher angular moment of the intensity. Because the diffusion \napproximation analysis also yields a linear relation between $J$ and \n$K$, $J = 3K$, we again find that $A = f(B)$. For a more general law, \nsuch as $I\/2\\mathcal{H} = 1 - A (1 - \\mu) - B[1 - \\frac{1}{2}(n+2)\\mu^n]$, we would \nfind a similar connection between the coefficients $A$ and $B$. We \nwould even find fixed points for any law that is a combination of a \nlinear term and any function that can be represented by a power-law \nseries, such as $e^{\\mu}$, similar to the limb-darkening law tested by \\cite{2003A&A...412..241C}. 
Therefore, we conclude that the fixed \npoints are a general property of this family of limb-darkening laws.\n\n\n\\section{Limb darkening and \\textsc{SAtlas} model atmospheres}\n\\label{sec:ld_satlas}\n\nTo conduct a broader survey of limb darkening, we have computed several \nlarger cubes of solar-composition model atmospheres, increasing the \nrange of luminosity, mass and radius as well as using microturbulent \nvelocities of 0, 2 and 4 km\/s. The luminosities, masses and radii of \nthese models cover a significant portion of the Hertzsprung-Russell \ndiagram where stars have extended atmospheres, as shown in \nFig.~\\ref{fig:ld_hr}, which also shows the evolutionary tracks of \n\\citet{2000A&AS..141..371G} for comparison.\n\\begin{figure}\n \\resizebox{\\hsize}{!}{\\includegraphics{16623fg3.eps}}\n \\caption{Models computed with the \\textsc{SAtlas} code plotted over \n the stellar evolution tracks from \n \\citet{2000A&AS..141..371G}.\n \\label{fig:ld_hr}\n }\n\\end{figure}\n\\begin{figure*}[t]\n \\resizebox{\\hsize}{!}{\\includegraphics{16623fg4a.eps}\n \\includegraphics{16623fg4b.eps}}\n\n \\caption{Left panel: Limb darkening, characterized by \n the normalized surface intensity, $I(\\mu)\/2\\mathcal{H}$, \n computed for the cube of 2102 model atmospheres for \n $v_\\mathrm{turb} = 2$ km\/s for the $BVRI$ and $H$-bands. \n Right panel: Fit to these same intensities \n using the limb-darkening law \n \\mbox{$I(\\mu)\/2\\mathcal{H} = 1 - A (1 - \\frac{3}{2} \\mu)\n - B (1 - \\frac{5}{4} \\sqrt{\\mu})$}\n for the same wavebands.\n \\label{fig:ld_cubev2_mu}\n }\n\\end{figure*}\nThe number of models in each cube is large. For example, there were \n2101 models in the cube with $v_\\mathrm{turb} = 2$ km\/s and the cubes \nfor the other values of $v_\\mathrm{turb}$ have similar numbers of \nmodels. 
For each model in each cube we computed the intensity at 1000 \nequally spaced $\\mu$ points at all wavelengths from the far ultraviolet \nto the far infrared; the results are shown in the left column of \nFig.~\\ref{fig:ld_cubev2_mu} for the $B,V,R,I$ and $H$-bands. These intensity curves show behavior similar to the \\textsc{Phoenix} model atmosphere intensity curves that \\cite{Orosz2000} studied.\nWe also fit each of the model limb-darkening curves with the \nlinear-plus-square-root limb darkening law given by \nEq.~\\ref{eq:fields}, and this is shown in the right column of \nFig.~\\ref{fig:ld_cubev2_mu}. Note that the $A$ and $B$ coefficients of \nthis parametrization have been shown in Eq.~\\ref{eq:lsa2} and \nEq.~\\ref{eq:lsb2} to be functions of $J$ and $\\mathcal{P}$, not the \nstructure of the intensity profile. The results for the \n$v_\\mathrm{turb} = 0$ and $4$ km\/s cubes of models are very similar. This is not surprising; in the \\textsc{Atlas} codes, the turbulent pressure is $P_{\\rm{turb}} = \\rho v_{\\rm{turb}}^2$, indicating that the turbulent velocity has a similar effect on the temperature structure of a stellar atmosphere as the gravity; that is, a change of turbulent velocity is equivalent to a change of gravity, similar to the discussion from \\cite{Gustafsson2008} for the MARCS code. In terms of the best-fit relations, a change of turbulent velocity causes a small change in the coefficients in the same manner as a change in gravity for the same effective temperature and mass. 
Therefore, we can explore the grid of model atmospheres for only one value of $v_{\\rm{turb}}$.\nThe cubes of models used in this survey cover a much wider range of \natmospheric parameters than was used in the only previous \ninvestigation using spherical model stellar atmospheres \n\\citep{2003A&A...412..241C}, which also truncated the profiles at the \nlimb to achieve better fits.\n\nThe model intensities in the left column of Fig.~\\ref{fig:ld_cubev2_mu} \nare always positive, but the intensities in some of the parametrized \nfits in the right column become slightly negative toward the limb. \nThis does not happen for plane-parallel model atmospheres because the \nslope of the intensity profile ($\\partial I\/\\partial \\mu$ or \n$\\partial I\/\\partial r$) has a constant sign. For example, using \n$\\mu$ as the independent variable the slope of the intensity profile is \nalways positive, varying from zero at the center of the disk to \n$\\infty$ as $\\mu \\rightarrow 0$. For spherical atmospheres, the \ncenter-to-limb variation changes sign because of a slight inflection, \nwhich is apparent in the curves of the left column of \nFig.~\\ref{fig:ld_cubev2_mu}. The more complex intensity profile and \nthe use of an equal-weighting $\\chi^2$-fit (i.e. the same number of \n$\\mu$-points near the center as near the limb) leads to slightly \nnegative intensities in some best-fit relations. \n\n\n\nFigure~\\ref{fig:ld_cubev2_mu} shows that the actual limb darkening \ncomputed from the spherical models is more complex than the \nparametrized representation, and it is obvious that the analysis of \nspecific observations that contain intensity information should be \ncautious about using the parametrized fits. Although fits to \nplane-parallel models appear to be very good, the models are less \nrealistic physically than the spherical models. 
Therefore, using what \nseems like a well-fitting law may introduce hidden errors that could \ncompromise the conclusions of the analysis. The quantities derived \nfrom observations of binary eclipses and planetary transits may be more \nuncertain than had been thought \n\\citep[see, for example,][]{2007ApJ...655..564K, 2007A&A...467.1215S, \n2008MNRAS.386.1644S}.\n\nUsing the same grid of spherical \\textsc{Atlas} models used in \nFig.~\\ref{fig:ld_cubev2_mu}, we computed the mean values and 1-$\\sigma$ \nstandard deviations of $\\eta$, $\\alpha$ and the fixed points as a \nfunction of wavelength. The results are shown in Fig.~\\ref{fig:wl_sp}, \nwhich can be compared with the plane-parallel results shown in \nFig.~\\ref{fig:wl_pp}. There are two key differences between the \nplane-parallel and spherical models. The first is that the mean values \nof $\\eta$ are larger for the spherical models, which leads to \nthe differences between the mean values of $\\alpha$ and the two fixed \npoints. The second difference is that the standard deviation for \n$\\eta$ as a function of wavelength is much larger for the spherical \nmodels than for the plane-parallel models. This suggests that $\\eta$ \nvaries much more as a function of effective temperature, gravity and \nmass. Clearly, the fixed point is much more constrained in \nplane-parallel model stellar atmospheres. \n\n\\begin{figure}[t]\n\\begin{center}\n \\resizebox{\\hsize}{!}{\\includegraphics{16623fg5.eps}}\n\\end{center}\n\\caption{Wavelength dependence of the mean values of\n (a) the ratio of the pseudo-moment to the mean intensity,\n $\\eta$,\n (b) the value of $\\alpha$ that defines the fixed points,\n (c) the first and \n (d) second fixed point for the grid of spherically symmetric \n \\textsc{Atlas} model atmospheres. 
The error bars represent \n 1-$\\sigma$ deviation from the mean.\n \\label{fig:wl_sp}\n }\n\\end{figure}\n\nA specific example of an application of limb darkening to an \nobservation is the analysis by \\citet{2003ApJ...596.1305F} of \nmicrolensing observations. In Fig.~\\ref{fig:vld}, we compare the \n$V$-band limb-darkening relations from our cube of spherical models \nwith the $V$-band limb-darkening relation determined by \n\\citet{2003ApJ...596.1305F} from observations. It is clear that the \n\\citet{2003ApJ...596.1305F} $V$-band limb-darkening law agrees well \nwith the limb-darkening from the spherical models, but not as well with the plane-parallel models. Furthermore, \nalthough the curves for the model stellar atmospheres narrow to a \nwaist rather than to a point, the location of minimum spread coincides \nwith the location of the observed $\\mu$-position in the $V$-band \nlimb-darkening relation within the uncertainty of the observed \ncoefficients. Even though the uncertainty of limb-darkening relations \nfrom microlensing observations is large, these results suggest that the \n\\citet{2003ApJ...596.1305F} limb-darkening observations are probing the \nextended atmosphere of the lensed red giant star.\n\\begin{figure}\n \\resizebox{\\hsize}{!}{\\includegraphics{16623fg6.eps}}\n \\caption{Comparison of the $V$-band limb-darkening relation \n determined by \\citet{2003ApJ...596.1305F} from \n observations (black dashed curves with errorbars) with the limb-darkening \n relations computed from \\textsc{SAtlas} model stellar \n atmospheres (red solid curves) and relations computed from \\textsc{Atlas} model stellar atmospheres (blue dotted curves), represented using the same \n linear-plus-square-root law. 
The inset shows a magnified \n view of the region of the fixed $\\mu$-point.\n \\label{fig:vld}\n }\n\\end{figure}\n\\begin{figure}\n \\resizebox{\\hsize}{!}{\\includegraphics{16623fg7.eps}}\n \\caption{Red plus symbols represent the limb-darkening coefficients $A$ \n and $B$ computed from \\textsc{SAtlas} model intensity \n profiles for the $V$, $I$, and $H$-bands, and green open squares \n represent the coefficients computed with \n plane-parallel \\textsc{Atlas} models.\n The box in each plot shows the range of the \n coefficients derived from microlensing observations by \n \\citet{2003ApJ...596.1305F}. \n \\label{fig:abbox}\n }\n\\end{figure}\nWe conclude that the parametrized laws, although simpler than \nthe computed limb-darkening curves, are useful for understanding how \nlimb darkening depends on the fundamental properties of the stellar \natmosphere.\n\nTo explore further the information content of the limb darkening, we \nrecall that Eq.~\\ref{eq:a_fn_b} showed that the coefficients $A$ and \n$B$ of the parametrization are linearly correlated. In \nFig.~\\ref{fig:abbox} we plot the limb-darkening coefficients $A$ and \n$B$ for the $V$, $I$, and $H$-bands computed with both the \nplane-parallel \\textsc{Atlas} and the spherical \\textsc{SAtlas} models.\nFor each wavelength the error box shows the range of $A$ and $B$ found \nby \\citet{2003ApJ...596.1305F} from their observations. For the \n$V$-band, the spherical limb-darkening coefficients overlap with the \nobserved fit, but the plane-parallel coefficients do not, while for the \n$H$-band both plane-parallel and spherical models agree with \nobservations. The microlensing observations at the longer wavelength \nmight not be sensitive enough to probe the low intensity limb of the \nstar, making the star appear consistent with limb-darkening of the \nplane-parallel model atmospheres. 
In the $I$-band neither set of \nmodels agrees with the observations, but the $I$-band data provide a \nweak constraint. As \\citet{2003ApJ...596.1305F} noted, the $I$-band \ntime-series microlensing observations were a composite from multiple \nsites, and removing data from any one site changed the results \nsignificantly.\n\\begin{figure*}\n \\resizebox{\\hsize}{!}{\\includegraphics{16623fg8a.eps}\n \\includegraphics{16623fg8b.eps}}\n \\caption{Left: As a function of surface gravity, the top panel \n shows the value of the primary fixed point, $\\mu_1$, for \n the linear-plus-square-root parametrization, for spherical \n atmospheres of varying effective temperature. \n The bottom panel shows the dependence of the normalized \n V-band intensity of the fixed point. \n Red filled circles are $T_\\mathrm{eff} = 3000$ K, green open squares \n 4000 K, blue open circles 5000 K, magenta downward-pointing \n triangles 6000 K, and pale blue upward-pointing triangles \n 7000 K. The black crosses show the behavior of the fixed point from the grid of plane-parallel model atmospheres for comparison.\n Right: As a function of effective temperature, the top \n panel shows the fixed point, $\\mu_1$, and the bottom panel \n shows the dependence of the V-band intensity of the fixed \n point. \n Red triangles represent models with $\\log~g = 0$, green squares \n $\\log~g = 1$, and blue circles $\\log~g = 2$. Again, black crosses represent the fixed point from plane-parallel model atmospheres.\n \\label{fig:gravity_dep}\n }\n\\end{figure*}\n\nIn the $V$-band, the limb-darkening coefficients from 15 model \natmospheres fall within the observational box. These models have \n$\\log~g = 2.25 - 3$ and $T_\\mathrm{eff} = 3400 - 3600$ K. For the \n$H$-band data there are four models within the observational box; \nthese have the same range of gravities but are slightly cooler, \n$T_\\mathrm{eff} = 3000 - 3100$ K. 
It is interesting that the models \nthat agree with the observations are those with gravities that are \nconsistent with the results of \\cite{2003ApJ...596.1305F} and \n\\cite{2002ApJ...572..521A}. This suggests that observations using the \nlimb-darkening parametrization can probe the spherical extension of \na stellar atmosphere via the fixed point, $\\mu_1$, and potentially probe \nthe fundamental parameters of stars via the limb-darkening coefficients.\n\nUsing a cube of models with a given value of $v_\\mathrm{turb}$, we \nexamine the dependence of the primary fixed point, $\\mu_1$, on the \neffective temperature and surface gravity. Note again that there are \nthree basic parameters characterizing the spherical atmospheres: \n$L_\\star$, $M_\\star$ and $R_\\star$. This means that the values of \n$T_\\mathrm{eff}$ and $\\log g$ are degenerate (different combinations of \n$L_\\star$, $M_\\star$ and $R_\\star$ can yield the same pair), but they are easier to \ndisplay on a two-dimensional surface. In the top left panel of \nFig.~\\ref{fig:gravity_dep} we plot the value of the primary fixed \npoint, $\\mu_1$, as a function of $\\log g$.\nAt each value of $\\log g$ there are values of $T_\\mathrm{eff}$ \nranging from 3000 to 7000 K, although there can be multiple values \nbecause of the parameter degeneracy. In the bottom left panel of \nFig.~\\ref{fig:gravity_dep} we plot \n$I_\\mathrm{V}(\\mu_1)\/2\\mathcal{H}_\\mathrm{V}$, the normalized \nintensity of the fixed point in the $V$-band. For each surface \ngravity the values of $\\mu_1$ and the normalized intensity at the \nfixed point both show a steady progression as $T_\\mathrm{eff}$ \nchanges, except for $T_\\mathrm{eff} = 3000$ K. We suspect that the \nbehavior for $T_\\mathrm{eff} = 3000$ K is due to a change in the \ndominant opacity source for our coolest models, possibly water vapor. 
Models with \n$T_\\mathrm{eff} > 3000$ K have H$^-$ as the dominant continuous \nopacity, but in the coolest models there are fewer free electrons \navailable to form H$^-$, and the formation of H$_2$ reduces the pool of \nhydrogen atoms. We also plot the fixed point and intensity at the fixed point from plane-parallel model atmospheres for comparison. It is clear that the fixed point from fits to plane-parallel models varies much less than that from fits to spherical models.\n\nOn the right side of Fig.~\\ref{fig:gravity_dep} we reverse the \nparameters and plot the dependence of the primary fixed point and the \nnormalized intensity of the fixed point as a function of \n$T_\\mathrm{eff}$. At each value of the effective temperature, the \nvalues of $\\log g$ are 0, 1 and 2 in cgs units. In the top right panel \nof Fig.~\\ref{fig:gravity_dep} we see that there is essentially no \nvariation of the value of $\\mu_1$ with $T_\\mathrm{eff}$ for all three \nsurface gravities until the lowest effective temperature is reached. \nThere is an obvious displacement of the value of $\\mu_1$ for each \ngravity and also a spread in $\\mu_1$ because of parameter degeneracy. \nHowever, for $T_\\mathrm{eff} \\leq 3500$ K the value of $\\mu_1$ drops \nfor all surface gravities. The bottom right panel shows that the \nnormalized $V$-band intensity of the fixed point shows a similar \nbehavior. For $T_\\mathrm{eff} > 3500$ K there is little variation \nwith $T_\\mathrm{eff}$, but there is an offset and a spread that \ndepends on the surface gravity. Below 3500 K the value of the \nnormalized intensity drops noticeably. 
This indicates that the best-fit coefficients of the limb-darkening law vary mostly because of the geometry of the model atmosphere. A spherical model atmosphere predicts a smaller intensity near the limb of the stellar disk relative to a plane-parallel model with the same effective temperature and gravity. To predict the same emergent flux, the intensity must be larger at the center of the disk, hence the temperature at the base of the atmosphere must also be larger for the spherical model. Therefore, the temperature structure of the atmosphere also varies. However, this is a secondary effect, and the geometry of the atmosphere is most important in determining the values of $\\mu_1$ and $I(\\mu_1)\/2\\mathcal{H}$. The geometry of spherical models leads to smaller values of the pseudo-moment and the mean intensity because the intensity is more centrally concentrated. This suggests that $\\mu_1$ and $I(\\mu_1)\/2\\mathcal{H}$ depend on the atmospheric extension. We will explore how the fixed point and intensity relate to the extension and fundamental stellar properties in greater detail in a future article.\n\n\\section{Conclusions}\n\nWe have explored limb darkening using large cubes of spherical stellar \natmospheres spanning the parameters $L_\\star$, $M_\\star$ and $R_\\star$ \ncovering the cool, luminous quadrant of the Hertzsprung-Russell \ndiagram (Fig.~\\ref{fig:ld_hr}). These models were also computed for three \ndifferent values of the microturbulent velocity. 
For each model, the \ncenter-to-limb variation of the surface intensity has been calculated \nat 1000 equally spaced $\\mu$ values spanning the range from 1 to 0 for \nevery wavelength used to compute the model structure.\n\nParametrizing the center-to-limb variation with a flux-conserving \nlinear-plus-square-root limb-darkening law, we confirm the findings of \n\\citet{2000PhDT........11H} and \\citet{2003ApJ...596.1305F} that there \nis a fixed $\\mu_1$ point through which all the intensity curves pass.\nHowever, when we plot the surface intensities directly, without using \na fitting law, there is no fixed point, although the distribution of \ncurves does narrow to a waist close to the same value of $\\mu_1$ \n(Fig.~\\ref{fig:ld_cubev2_mu}).\n\nThe apparent fixed point is a result of the least-squares fitting \nprocedure, where the two parameters of the law depend on two \nproperties of the stellar atmosphere, the mean intensity, $J$, and the \npseudo-moment, $\\int I \\sqrt{\\mu}\\, \\mathrm{d}\\mu$. For the temperature \nrange $4000 - 8000$ K, the mean intensity is correlated with the \npseudo-moment, which means that the two coefficients are also \ncorrelated, leading to the existence of the fixed point.\n\nThe lack of a well-defined fixed point in the surface intensity \ndistribution for spherical model atmospheres suggests that the three \nfundamental parameters of the atmospheres affect the limb darkening in \na way that is not encountered in the two-parameter plane-parallel \natmospheres.\n\n\\acknowledgements\n\nThis work has been supported by a research grant from the Natural \nSciences and Engineering Research Council of Canada. HRN has received \nfinancial support from the Walter John Helm OGSST, the Walter C. 
\nSumner Memorial Fellowship, and the Alexander von Humboldt Foundation.\n\n\\bibliographystyle{aa}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzaezf b/data_all_eng_slimpj/shuffled/split2/finalzzaezf new file mode 100644 index 0000000000000000000000000000000000000000..3287e285d61a3101aab67e99757904e3c659e476 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzaezf @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:intro}\n\n\nThe non-trivial zeros of automorphic $L$-functions are of central significance in modern number theory. Problems on individual zeros, such as the Riemann Hypothesis (GRH), are elusive. There is however a beautiful theory about the statistical distribution of zeros in families. The subject has a long and rich history. A unifying modern viewpoint is that of a comparison with a suitably chosen model of random matrices (\\key{Katz--Sarnak heuristics}). There are both theoretical and numerical evidences for this comparison. Comprehensive results in the function field case~\\cite{book:KS} have suggested an analogous picture in the number field case as explained in~\\cite{KS:bams}. In a large number of cases, and with high accuracy, the distribution of non-trivial zeros of automorphic $L$-functions coincide with the distribution of eigenvalues of random matrices (see~\\cites{DFK:vanishingunitary,Rubin:computational} for numerical investigations and conjectures and see~\\cites{DM06,Guloglu:lowlying,HM07,ILS00,KST:Sp4,RR:low-lying,Rubin01} and the references herein for theoretical results).\n\nThe concept of \\key{families} is central to modern investigations in number theory. We want to study in the present paper certain families of automorphic representations over number fields in a very general context. 
The families under consideration are obtained from the discrete spectrum by imposing constraints on the local components at archimedean and non-archimedean places and by applying the Langlands global functoriality principle.\n\nOur main result is a Sato--Tate equidistribution theorem for these families (Theorem~\\ref{t:intro:error-bound}). As an application of this main result we give some evidence towards the Katz--Sarnak heuristics~\\cite{KS:bams} and establish a criterion for the random matrix model attached to families (the Symmetry type).\n\n\\subsection{Sato-Tate theorem for families}\\label{sec:intro:ST}\n\n The original Sato-Tate conjecture is about an elliptic curve $E$, assumed to be defined over ${\\mathbb Q}$\n for simplicity. The number of points in $E(\\mathbb{F}_p)$ at the primes $p$ of good reduction (almost all primes)\n gives rise, via $p+1-\\#E(\\mathbb{F}_p)=2\\sqrt{p}\\cos\\theta_p$, to an angle $\\theta_p$ between $0$ and $\\pi$. The conjecture, proved in~\\cite{BLGHT11}, asserts that\n if $E$ does not admit complex multiplication then $\\{\\theta_p\\}$ are equidistributed according to\n the measure $\\frac{2}{\\pi} \\sin^2\\theta \\, d\\theta$.\n In the context of motives a generalization of the Sato-Tate conjecture was formulated by Serre~\\cite{Ser94}.\n\n To speak of the automorphic version of the Sato-Tate conjecture, let $G$ be a connected split reductive group over ${\\mathbb Q}$\n with trivial center\n and $\\pi$ an automorphic representation of $G({\\mathbb A})$. Here $G$ is assumed to be split for simplicity\n (however we stress that our results are valid\n without even assuming that $G$ is quasi-split, if formulated appropriately). The triviality of the center is not a serious assumption,\n as it essentially amounts to fixing the central character.\n Let $T$ be a maximal split torus of $G$. Denote by $\\widehat{T}$ its dual torus and by $\\Omega$ the Weyl group.\n As $\\pi=\\otimes'_v \\pi_v$ is unramified at almost all places $p$, the Satake isomorphism identifies\n $\\pi_p$ with a point on $\\widehat{T}\/\\Omega$. 
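As a standalone numerical sanity check (not part of the argument here), one can verify that $\\frac{2}{\\pi}\\sin^2\\theta\\,d\\theta$ is a probability measure on $[0,\\pi]$ and that the even moments of $2\\cos\\theta$ against it are Catalan numbers (the second and fourth moments equal $1$ and $2$):

```python
import math

def st_density(theta):
    # Sato-Tate density (2/pi) * sin^2(theta) on [0, pi].
    return (2.0 / math.pi) * math.sin(theta) ** 2

def st_integral(f, n=20000):
    # Midpoint-rule approximation of the integral of f(theta) against the
    # Sato-Tate density over [0, pi].
    h = math.pi / n
    return sum(f((i + 0.5) * h) * st_density((i + 0.5) * h) for i in range(n)) * h

total_mass = st_integral(lambda t: 1.0)                       # expect 1
moment2 = st_integral(lambda t: (2.0 * math.cos(t)) ** 2)     # expect 1 (Catalan C_1)
moment4 = st_integral(lambda t: (2.0 * math.cos(t)) ** 4)     # expect 2 (Catalan C_2)
```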
The automorphic Sato-Tate conjecture should be a prediction\n about the equidistribution of $\\pi_p$ on $\\widehat{T}\/\\Omega$ with respect to a natural measure\n (supported on a compact subset of $\\widehat{T}\/\\Omega$). It seems nontrivial\n to specify this measure in general. The authors do not know how to do it\n without invoking the (conjectural) global $L$-parameter for $\\pi$.\n The automorphic Sato-Tate conjecture is known in the very limited cases of (the restriction of scalars of)\n $\\GL_1$ and $\\GL_2$ (\\cite{BLGHT11}, \\cite{BLGG11}). In an ideal world the conjecture should be closely related to Langlands functoriality, see~\\S\\ref{sub:gen-ST-conj}.\n\n In this paper we consider the Sato-Tate conjecture for a \\emph{family} of automorphic representations, which\n is easier to state and prove but still very illuminating.\n Our working definition of a family $\\{\\mathcal{F}_k\\}_{k\\ge 1}$ is that each $\\mathcal{F}_k$ consists of\n all automorphic representations $\\pi$ of $G({\\mathbb A})$ of level $N_k$ with $\\pi_\\infty$ cohomological of weight $\\xi_k$,\n where $N_k\\in{\\mathbb Z}_{\\ge1}$ and $\\xi_k$ is an irreducible algebraic representation of $G$, such that either\n\\begin{enumerate}\n \\item (level aspect) $\\xi_k$ is fixed, and $N_k\\rightarrow \\infty$ as $k\\rightarrow\\infty$ or\n \\item (weight aspect) $N_k$ is fixed, and $\\dim \\xi_k\\rightarrow\\infty$ as $k\\rightarrow\\infty$.\n\\end{enumerate}\n Note that each $\\mathcal{F}_k$ has finite cardinality and $|\\mathcal{F}_k|\\rightarrow \\infty$\n as $k\\rightarrow\\infty$. (For a technical reason $\\mathcal{F}_k$ is actually allowed to be a multi-set. Namely\n the same representation can appear multiple times, for instance more than its automorphic multiplicity.)\n In principle we could let $\\xi_k$ and $N_k$ vary simultaneously but decided not to do so in the current paper\n in favor of transparency of arguments. 
For instance families of type (i) and (ii) require somewhat different ingredients of proof in establishing the Sato-Tate theorem for families, and the argument would be easier to understand if we separate them.\n It should be possible to treat the mixed case (where both $N_k$ and $\\xi_k$ vary) by combining techniques in the\n two cases (i) and (ii).\n\n\n\n Let $\\widehat{T}_c$ be the maximal compact subtorus of the complex torus $\\widehat{T}$.\n The quotient $\\widehat{T}_c\/\\Omega$ is equipped with a measure $\\widehat{\\mu}^{\\mathrm{ST}}$, to be called\n\tthe \\key{generalized Sato-Tate measure}, coming from the Haar measure on a maximal compact subgroup of $\\widehat{G}$\n (of which $\\widehat{T}_c$ is a maximal torus).\n The following is a rough version of our result on the Sato-Tate conjecture for a family.\n\n\\begin{theorem}\\label{Sato-Tate}\n Suppose that $G({\\mathbb R})$ has discrete series representations.\n Let $\\{\\mathcal{F}_k\\}_{k\\ge 1}$ be a family of the first (resp. second) type as above.\n Let $\\{p_k\\}$ be a sequence of\n primes such that $p_k$ grows to $\\infty$ slowly relative to $N_k$ (resp. 
$\\xi_k$)\n as $k\\rightarrow \\infty$.\n Assume that the members of $\\mathcal{F}_k$ are unramified at $p_k$ for every $k$.\n Then the Satake parameters $\\{\\pi_{p_k}: \\pi\\in \\mathcal{F}_k\\}_{k\\ge 1}$\n are equidistributed with respect to $\\widehat{\\mu}^{\\mathrm{ST}}$.\n\n\\end{theorem}\n\n To put things in perspective, we observe that there are three kinds of statistics about the Satake parameters\n of $\\{\\pi_{p_k}: \\pi\\in \\mathcal{F}_k\\}_{k\\ge 1}$ depending on how the arguments vary.\n\\begin{enumerate}\n \\item Sato-Tate: $\\mathcal{F}_k$ is fixed (and a singleton) and $p_k\\rightarrow \\infty$.\n \\item Sato-Tate for a family: $|\\mathcal{F}_k|\\rightarrow \\infty$ and $p_k\\rightarrow \\infty$.\n \\item Plancherel: $|\\mathcal{F}_k|\\rightarrow \\infty$ and $p_k$ is a fixed prime.\n\\end{enumerate}\nThe Sato-Tate conjecture in its original form is about equidistribution in case (i) whereas\nour Theorem \\ref{Sato-Tate} is concerned with case (ii).\nThe last item is marked as Plancherel since\nthe Satake parameters are expected to be equidistributed with respect to the Plancherel measure\n(again supported on $\\widehat{T}_c\/\\Omega$) in case (iii). This has been shown to be true under the assumption\nthat $G({\\mathbb R})$ admits a discrete series in \\cite{Shi-Plan}.\nWe derive Theorem \\ref{Sato-Tate} from an error estimate (depending on $k$) on the difference between\nthe Plancherel distribution at $p$ and the actual distribution of the Satake parameters at $p_k$\nin $\\mathcal{F}_k$. This estimate (see Theorem \\ref{t:intro:error-bound} below) refines the main result of \\cite{Shi-Plan}\nand is far more difficult to prove in that several nontrivial bounds in\nharmonic analysis on reductive groups need to be justified.\n\n\n\\subsection{Families of $L$-functions}\\label{sec:intro:familyL}\nAn application of Theorem~\\ref{Sato-Tate} is to families of $L$-functions. 
We are able to verify to some extent the heuristics of Katz--Sarnak~\\cite{KS:bams} and determine the Symmetry type, see~\\S\\ref{sec:intro:criterion} below. In this subsection we define the relevant families of $L$-functions and record some of their properties.\n\nLet $r:{}^{L} G \\to \\GL(d,\\mathbb{C})$ be a continuous $L$-homomorphism. We assume the Langlands functoriality principle: for all $\\pi \\in \\mathcal{F}_k$ there exists an isobaric automorphic representation $\\Pi=r_* \\pi$ of $\\GL(d,\\mathbb{A})$ which is the functorial lift of the automorphic representation $\\pi$ of $G(\\mathbb{A})$, see~\\S\\ref{s:langlands} for the precise statement of the hypothesis. This hypothesis is only used in Theorem~\\ref{th:low-lying}, \\S\\ref{sec:zeros} and \\S\\ref{sec:pf}. By the strong multiplicity one theorem, $\\Pi$ is uniquely determined by all but finitely many of its local factors $\\Pi_v=r_* \\pi_v$ (see~\\S\\ref{sec:pp:isobaric} for a review of the concept of isobaric representations).\n\nTo an automorphic representation $\\Pi$ of $\\GL(d,\\mathbb{A})$ we associate its principal $L$-function $L(s,\\Pi)$. By definition $L(s,\\pi,r)=L(s,\\Pi)$. By the theory of Rankin--Selberg integrals or by the integral representations of Godement--Jacquet, $L(s,\\Pi)$ has good analytic properties: analytic continuation, functional equation, growth in vertical strips. In particular we know the existence and some properties of its non-trivial zeros, such as Weyl's law (\\S\\ref{sec:pp:explicit}).\n\nWe denote by $\\mathfrak{F}_k=r_*\\mathcal{F}_k$ the set of all such $\\Pi=r_* \\pi$ for $\\pi \\in \\mathcal{F}_k$.\nSince the strong multiplicity one theorem implies that $\\Pi$ is uniquely determined by its $L$-function $L(s,\\Pi)$, we simply refer to $\\mathfrak{F}=r_*\\mathcal{F}$ as a \\key{family of $L$-functions}.\n\n\nIn general there are many ways to construct interesting families of $L$-functions. 
In a recent letter~\\cite{Sarn:family}, Sarnak attempts to sort out these constructions into a comprehensive framework and proposes a working definition (see also~\\cite{Kowalski:family-survey}). The families of $L$-functions under consideration in the present paper fit well into that framework. Indeed they are \\key{harmonic families} in the sense that their construction involves inputs from local and global harmonic analysis. Other types of families include geometric families constructed as Hasse--Weil $L$-functions of arithmetic varieties and Galois families associated to families of Galois representations.\n\n\n\n\\subsection{Criterion for the Symmetry type}\\label{sec:intro:criterion} Katz--Sarnak~\\cite{KS:bams} predict that one can associate a \\key{Symmetry type} to a family of $L$-functions. By definition the Symmetry type is the random matrix model which governs the distribution of the zeros. There is a long and rich history for the introduction of this concept.\n\nHilbert and P\\'olya suggested that there might be a spectral interpretation of the zeros of the Riemann zeta function. Nowadays a strong evidence for the spectral nature of the zeros of $L$-functions comes from the function field case: zeros are eigenvalues of the Frobenius acting on cohomology. This is exemplified by the equidistribution theorem of Deligne and the results of Katz--Sarnak~\\cite{book:KS} on the distribution of the low-lying eigenvalues in geometric families.\n\nIn the number field case the first major result towards a spectral interpretation is the pair correlation of high zeros of the Riemann zeta function by Montgomery. Developments then include Odlyzko's extensive numerical study and the determination of the $n$-level correlation by Hejhal and Rudnick--Sarnak~\\cite{RS96}. 
The number field analogue of the Frobenius eigenvalue statistics suggested by~\\cite{book:KS} is concerned with the statistics of low-lying zeros.\n\nIndeed~\\cite{KS:bams} predicts that the low-lying zeros of families of $L$-functions are distributed according to a determinantal point process associated to a random matrix Ensemble. This will be explained in more detail in~\\S\\ref{sec:intro:rdm} and~\\S\\ref{sec:intro:lowlying} below. We shall distinguish between the three determinantal point processes associated to the Unitary, Symplectic and Orthogonal Ensembles. Accordingly the Symmetry type associated to a family $\\mathfrak{F}$ is defined to be Unitary, Symplectic or Orthogonal (see~\\S\\ref{sec:intro:lowlying} for typical results). \\footnote{In this paper we do not distinguish in the Orthogonal Ensemble between the $O$, $SO(\\mathrm{odd})$ and $SO(\\mathrm{even})$ Symmetries. We will return to this question in a subsequent work.}\n\n\nBefore entering into the details of this theory in~\\S\\ref{sec:intro:rdm} below, we state here our criterion for the Symmetry type of the families $r_*\\mathcal{F}=\\mathfrak{F}$ defined above. We recall in~\\S\\ref{sec:b:fs} the definition of the Frobenius--Schur indicator $s(r)\\in \\set{-1,0,1}$ associated to an irreducible representation. We shall prove that the Symmetry type is determined by $s(r)$. This is summarized in the following criterion, which may be viewed as a refinement of the Katz--Sarnak heuristics.\n\n\\begin{criterion}\\label{criterion} Let $r:{}^LG\\to \\GL(d,\\mathbb{C})$ be a continuous homomorphism which is irreducible and non-trivial when restricted to $\\widehat G$. Consider the family $r_*\\mathcal{F}$ of automorphic $L$-functions of degree $d$ as above.\n\n(i) If $r$ is not isomorphic to its dual $r^\\vee$ then $s(r)=0$ and the Symmetry type is Unitary.\n\n(ii) Otherwise there is a bilinear form on $\\mathbb{C}^d$ which realizes the isomorphism between $r$ and $r^\\vee$. 
By Schur's lemma it is unique up to scalar and is either symmetric or alternating. If it is symmetric then $r$ is real, $s(r)=1$ and the Symmetry type is Symplectic. If it is alternating then $r$ is quaternionic, $s(r)=-1$ and the Symmetry type is Orthogonal.\n\\end{criterion}\n\nWe note that the conditions that $r$ be irreducible and non-trivial when restricted to $\\widehat G$ are optimal. If $r$ were reducible then the $L$-functions $L(s,\\pi,r)$ would factor as products of $L$-functions from independent families, and their zeros would be distributed according to the superposition of independent point processes. If $r$ were trivial when restricted to $\\widehat G$ then $L(s,\\pi,r)$ would be independent of $\\pi$ and equal to an Artin $L$-function, and the low-lying zeros would correspond to the possible vanishing of this Artin $L$-function at the central point (which is a very different problem). Our criterion says that, conversely, under these conditions on $r$ the Katz--Sarnak heuristics hold to some extent.\n\nIt would be interesting to study families of automorphic representations over a function field $k=\\mathbb{F}_q(X)$ of a curve $X$. To our knowledge the Katz--Sarnak heuristics for such families are not treated in the literature, except in the case of $G=\\GL(1)$, where harmonic families coincide with the geometric families treated by Katz--Sarnak (e.g.\\ Dirichlet $L$-series with quadratic character are the geometric families of hyperelliptic curves in~\\cite{book:KS}*{\\S10}). Over function fields our criterion has the following interpretation. We consider families of automorphic representations $\\pi$ of $G(\\mathbb{A}_k)$; for simplicity we suppose that $\\pi$ is attached to an irreducible $\\ell$-adic representation $\\rho: {\\rm Gal}(k^{sep}\/k) \\to {}^LG$. 
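As an aside, the trichotomy $s(r)\\in\\set{-1,0,1}$ used in the criterion can be made concrete in the toy setting of finite groups, where the Frobenius--Schur indicator is $s(\\rho)=\\frac{1}{|G|}\\sum_{g\\in G}\\chi_\\rho(g^2)$. The sketch below is purely illustrative (finite matrix groups stand in for the $L$-group representations): it recovers $s=1$ for the real two-dimensional representation of $S_3$, $s=-1$ for the quaternionic two-dimensional representation of $Q_8$, and $s=0$ for a non-self-dual character of ${\\mathbb Z}\/3$.

```python
import cmath, math

def mat_mul(X, Y):
    # 2x2 complex matrix product.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def frobenius_schur(matrices):
    # s(rho) = (1/|G|) * sum over g of trace(rho(g)^2); this computes chi(g^2)
    # as trace(rho(g)^2), valid for any matrix representation rho.
    total = sum(sum(mat_mul(M, M)[i][i] for i in range(2)) for M in matrices)
    return total / len(matrices)

# 2-dim irreducible representation of S_3: rotations by 120 degrees and reflections.
c, s = math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3)
r = [[c, -s], [s, c]]           # a 3-cycle
f = [[1, 0], [0, -1]]           # a transposition (reflection)
S3 = [[[1, 0], [0, 1]], r, mat_mul(r, r), f, mat_mul(r, f), mat_mul(mat_mul(r, r), f)]

# 2-dim irreducible representation of the quaternion group Q_8 = {+-1, +-i, +-j, +-k}.
I2 = [[1, 0], [0, 1]]
qi = [[1j, 0], [0, -1j]]
qj = [[0, 1], [-1, 0]]
qk = mat_mul(qi, qj)
def neg(M):
    return [[-M[i][j] for j in range(2)] for i in range(2)]
Q8 = [I2, neg(I2), qi, neg(qi), qj, neg(qj), qk, neg(qk)]

s_S3 = frobenius_schur(S3)   # real representation: indicator +1
s_Q8 = frobenius_schur(Q8)   # quaternionic representation: indicator -1

# A complex, non-self-dual example: the character of Z/3 sending 1 to exp(2*pi*i/3),
# viewed as 1x1 matrices; its indicator is 0.
w = cmath.exp(2j * math.pi / 3)
s_Z3 = sum(z * z for z in (1, w, w * w)) / 3
```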
Then $r_*\\pi$ is attached to the Galois representation $r\\circ \\rho$, and corresponds to a constructible $\\ell$-adic sheaf $F$ of dimension $d$ on $X$.\nThe zeros of the $L$-function $L(s,\\pi,r)$ are the eigenvalues of Frobenius on the first cohomology; more precisely, the numerator of the $L$-function $L(s,\\pi,r)$ is\n\\[\n\\det(1-{\\rm Fr} q^{-s} | H^1(X,F)).\n\\]\nIf $s(r)=-1$ (resp. $s(r)=1$) then there is a symplectic (resp. orthogonal) pairing on $F$. The natural pairing on $H^1(X,F)$ induced by the cup-product is orthogonal (resp. symplectic) and invariant under the action of Frobenius. Thus the zeros of $L(s,\\pi,r)$ are eigenvalues of an orthogonal (resp. symplectic) matrix; this is consistent with assertion (ii) in our criterion.\n\nKnown analogies between $L$-functions and their symmetries over number fields and function fields are discussed in~\\cite{KS:bams}*{\\S4}. Overall we would like to propose our criterion and its potential extension to more general families as a positive answer to the question mark in entry 6-A of Table~2 in~\\cite{KS:bams}.\n\n\\subsection{Automorphic Plancherel density theorem with error bounds}\\label{sec:intro:aut-Plan}\n We explain a more precise version of the Sato-Tate theorem\n for families (\\S\\ref{sec:intro:ST}) and the method of its proof. The key is to bound the error terms when\n we approximate the distribution of local components of automorphic representations in a family\n with the Plancherel measure.\n\n For simplicity of exposition let us assume that $G$ is a split reductive group over ${\\mathbb Q}$ with trivial center\n as in \\S\\ref{sec:intro:ST}.\n A crucial hypothesis is that $G({\\mathbb R})$ admits an ${\\mathbb R}$-anisotropic maximal torus (in which case\n $G({\\mathbb R})$ admits discrete series representations).\n Let $\\mathcal{A}_{\\mathrm{disc}}(G)$ denote the set of isomorphism classes of discrete automorphic\n representations of $G({\\mathbb A})$. 
We say that\n $\\pi\\in \\mathcal{A}_{\\mathrm{disc}}(G)$ has level $N$ and weight $\\xi$ if\n $\\pi$ has a nonzero fixed vector under the adelic version of the full level $N$ congruence subgroup\n $K(N)\\subset G({\\mathbb A}^\\infty)$ and if $\\pi_\\infty\\otimes \\xi$ has nonzero Lie algebra cohomology.\n In this subsection we make a further simplifying hypothesis that every $\\xi$ has regular highest weight, in which case\n $\\pi_\\infty$ as above must be a discrete series. (In the main body of this paper, the latter assumption on $\\xi$ is necessary only for the results in \\S\\S\\ref{sub:Plan-density}-\\ref{sub:general-functions-S_0}, where more general test functions are considered.)\n\n Define $\\mathcal{F}=\\mathcal{F}(N,\\xi)$ to be the finite multi-set consisting of $\\pi\\in \\mathcal{A}_{\\mathrm{disc}}(G)$\n of level $N$ and weight $\\xi$, where each such $\\pi$ appears in $\\mathcal{F}$\n with multiplicity $$a_{\\mathcal{F}}(\\pi):= \\dim (\\pi^\\infty)^{K(N)}\\in {\\mathbb Z}_{\\ge 0}.$$\n This quantity naturally occurs as the dimension of the $\\pi$-isotypical subspace in the cohomology\n of the locally symmetric space for $G$ of level $N$ with coefficient defined by $\\xi$.\n The main motivation for allowing $\\pi$ to appear $a_{\\mathcal{F}}(\\pi)$ times is to enable us\n to compute the counting measure below with the trace formula.\n\n Let $p$ be a prime number. Write $G({\\mathbb Q}_p)^{\\wedge}$ for the unitary dual of\n irreducible smooth representations of $G({\\mathbb Q}_p)$. The unramified (resp. unramified and tempered)\n part of $G({\\mathbb Q}_p)^{\\wedge}$ is denoted $G({\\mathbb Q}_p)^{\\wedge,\\mathrm{ur}}$\n (resp. $G({\\mathbb Q}_p)^{\\wedge,\\mathrm{ur},\\mathrm{temp}}$). 
There is a canonical isomorphism\n\\begin{equation}\n\\label{e:intro:spec-satake} G({\\mathbb Q}_p)^{\\wedge,\\mathrm{ur},\\mathrm{temp}}\\simeq \\widehat{T}_c\/\\Omega.\n\\end{equation}\n The unramified Hecke algebra of $G({\\mathbb Q}_p)$ will be denoted $\\mathcal{H}^{\\mathrm{ur}}(G({\\mathbb Q}_p))$.\n There is a map from $\\mathcal{H}^{\\mathrm{ur}}(G({\\mathbb Q}_p))$ to the space of continuous functions\n on $\\widehat{T}_c\/\\Omega$\n $$\\phi\\mapsto \\widehat{\\phi}\\quad \\mbox{determined by}\\quad \\widehat{\\phi}(\\pi)=\\mathrm{tr}\\, \\pi(\\phi),\n ~\\forall \\pi\\in G({\\mathbb Q}_p)^{\\wedge,\\mathrm{ur},\\mathrm{temp}}.$$\n There are two natural measures supported on\n $ \\widehat{T}_c\/\\Omega$.\n The Plancherel measure $\\widehat{\\mu}^{\\mathrm{pl,\\mathrm{ur}}}_p$, dependent on $p$, is\n defined on $G({\\mathbb Q}_p)^{\\wedge,\\mathrm{ur}}$ and naturally arises\n in local harmonic analysis. The generalized Sato-Tate measure\n $\\widehat{\\mu}^{\\mathrm{ST}}$ on $ \\widehat{T}_c\/\\Omega$ is independent of $p$ and may be\n extended to $G({\\mathbb Q}_p)^{\\wedge,\\mathrm{ur}} $ by zero.\n Both $\\widehat{\\mu}^{\\mathrm{pl,\\mathrm{ur}}}_p$\n and $\\widehat{\\mu}^{\\mathrm{ST}}$ assign volume 1 to $ \\widehat{T}_c\/\\Omega$.\n There is yet another measure $\\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F},p}$\n on $G({\\mathbb Q}_p)^{\\wedge,\\mathrm{ur}}$, which is the averaged counting measure for the $p$-components of members of $\\mathcal{F}$.\n Namely\n \\begin{equation}\n \\label{e:count-meas}\\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F},p}:=\n \\frac{1}{|\\mathcal{F}|} \\sum_{\\pi\\in \\mathcal{F}} \\delta_{\\pi_p}\n\\end{equation}\n where $\\delta_{\\pi_p}$ denotes the Dirac delta measure supported at $\\pi_p$. 
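A toy numerical illustration of an averaged counting measure of the shape \\eqref{e:count-meas} (purely illustrative: the local parameters below are sampled from the $SU(2)$ Sato--Tate measure rather than taken from an actual family):

```python
import math, random

random.seed(1)

def sample_sato_tate():
    # Rejection sampling from the density (2/pi) * sin^2(theta) on [0, pi],
    # with the uniform density as proposal (acceptance probability sin^2(theta)).
    while True:
        theta = random.uniform(0.0, math.pi)
        if random.uniform(0.0, 1.0) <= math.sin(theta) ** 2:
            return theta

# A stand-in "family": a list of sampled local parameters.
family = [sample_sato_tate() for _ in range(200000)]

# The averaged counting measure applied to a test function phi is simply the
# average of phi over the family, mimicking mu^count(phi) = (1/|F|) sum phi(pi_p).
def counting_measure(phi):
    return sum(phi(theta) for theta in family) / len(family)

# Against the limit measure, E[cos(theta)] = 0 and E[cos^2(theta)] = 1/4.
m1 = counting_measure(math.cos)
m2 = counting_measure(lambda t: math.cos(t) ** 2)
```

With the synthetic family replaced by genuine Satake parameters, the same average is exactly the counting measure evaluated on a test function.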
(Each\n $\\pi$ contributes $a_{\\mathcal{F}}(\\pi)$ times to the above sum.)\n Our primary goal is to bound the difference between $\\widehat{\\mu}^{\\mathrm{pl},\\mathrm{ur}}_p$\n and $\\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F},p}$. (Note that our definition\n of $\\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F},p}$ in the main body will be a little different from\n \\eqref{e:count-meas} but asymptotically the same. See Remark \\ref{r:|F|}. We take \\eqref{e:count-meas} here only to ease exposition.)\n\n\n In order to quantify error bounds, we introduce a filtration\n $\\{\\mathcal{H}^{\\mathrm{ur}}(G({\\mathbb Q}_p))^{\\le \\kappa}\\}_{\\kappa\\in{\\mathbb Z}_{\\ge0}}$\n on $\\mathcal{H}^{\\mathrm{ur}}(G({\\mathbb Q}_p))$ as a complex vector space. The filtration is increasing, exhaustive\n and depends on a non-canonical choice.\n Roughly speaking, $\\mathcal{H}^{\\mathrm{ur}}(G({\\mathbb Q}_p))^{\\le \\kappa}$ is like the span of\n all monomials of degree $\\le \\kappa$ when $\\mathcal{H}^{\\mathrm{ur}}(G({\\mathbb Q}_p))$ is identified with\n (a subalgebra of) a polynomial algebra.\n For each $\\xi$, it is possible to assign a positive integer $m(\\xi)$ in terms of the highest weight of $\\xi$.\n When we say that weight is going to infinity, it means that $m(\\xi)$ grows to $\\infty$ in the usual sense.\n\n The main result on error bounds alluded to above is the following.\n (See Theorems \\ref{t:level-varies} and \\ref{t:weight-varies}.) A uniform bound on orbital integrals, cf. 
\\eqref{e:intro-unif-bound} below, enters the proof of (ii) (but not (i)) and requires the assumption $p\\gg 1$ in (ii) to ensure that $G$ is unramified over $\\mathbb{Q}_p$.\n\n\\begin{theorem}\\label{t:intro:error-bound}\n Let $\\mathcal{F}=\\mathcal{F}(N,\\xi)$ be as above.\n Consider a prime $p$, $\\kappa\\ge 1$, and $\\phi_p\\in \\mathcal{H}^{\\mathrm{ur}}(G({\\mathbb Q}_p))^{\\le \\kappa}$\n such that $|\\phi_p|\\le 1$ on $G({\\mathbb Q}_p)$.\n\\begin{enumerate}\n \\item (level aspect) Suppose that $\\xi$ remains fixed. There exist constants\n $A_{\\mathrm{lv}},B_{\\mathrm{lv}},C_{\\mathrm{lv}}>0$ such that for any $p$, $\\kappa$, $\\phi_p$ as above and for any $N$ coprime to $p$\n (with an extra condition that $N$ is bounded below by some inverse power of $p$),\n $$ \\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F},p}(\\widehat{\\phi}_p)-\\widehat{\\mu}^{\\mathrm{pl,\\mathrm{ur}}}_p(\\widehat{\\phi}_p)\n = O(p^{A_{\\mathrm{lv}}+B_{\\mathrm{lv}}\\kappa}N^{-C_{\\mathrm{lv}}}).$$\n \\item (weight aspect) Fix a level $N$. There exist constants\n $A_{\\mathrm{wt}},B_{\\mathrm{wt}},C_{\\mathrm{wt}}>0$ such that for any $p\\gg 1$, $\\kappa$, $\\phi_p$ as above with $(p,N)=1$ and for any $\\xi$,\n $$ \\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F},p}(\\widehat{\\phi}_p)-\\widehat{\\mu}^{\\mathrm{pl,\\mathrm{ur}}}_p(\\widehat{\\phi}_p)\n = O(p^{A_{\\mathrm{wt}}+B_{\\mathrm{wt}}\\kappa}m(\\xi)^{-C_{\\mathrm{wt}}}).$$\n\\end{enumerate}\n\n\\end{theorem}\n\n Let $\\{ \\mathcal{F}_k=\\mathcal{F}(N_k,\\xi_k)\\}_{k\\ge1}$ be\n either kind of family in \\S\\ref{sec:intro:ST}, namely either $N_k\\rightarrow \\infty$\n and $\\xi_k$ is fixed or $N_k$ is fixed and $\\xi_k\\rightarrow \\infty$.\n When applied to $\\{ \\mathcal{F}_k\\}_{k\\ge1}$, Theorem \\ref{t:intro:error-bound} leads to\n the equidistribution results in the following corollary (cf.\n cases (ii) and (iii) in the paragraph below Theorem \\ref{Sato-Tate}).\n Indeed, (i) of the corollary is immediate. 
Part (ii) is easily derived from the fact\n that $\\widehat{\\mu}^{\\mathrm{pl,\\mathrm{ur}}}_p$ weakly converges to $\\widehat{\\mu}^{\\mathrm{ST}}$\n as $p\\rightarrow \\infty$. Although the unramified Hecke algebra at $p$ gives rise to only\n regular functions on the complex variety $\\widehat{T}_c\/\\Omega$, it is not difficult to extend\n the results to continuous functions on $\\widehat{T}_c\/\\Omega$. (See \\S\\S\\ref{sub:Plan-density}-\\ref{sub:general-functions-S_0} for details.)\n\n\\begin{corollary}\\label{c:intro:Plan-ST} Keep the notation of Theorem \\ref{t:intro:error-bound}.\nLet $\\widehat{\\phi}$ be a continuous function on $\\widehat{T}_c\/\\Omega$.\n In view of \\eqref{e:intro:spec-satake} $\\widehat{\\phi}$ can be extended by zero to a function\n $\\widehat{\\phi}_p$ on $G({\\mathbb Q}_p)^{\\wedge,\\mathrm{ur}}$ for each prime $p$.\n\\begin{enumerate}\n \\item (Automorphic Plancherel density theorem~\\cite{Shi-Plan}) $$\\lim_{k\\rightarrow \\infty} \\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F}_k,p}(\\widehat{\\phi}_p)\n = \\widehat{\\mu}^{\\mathrm{pl,\\mathrm{ur}}}_p(\\widehat{\\phi}_p).$$\n \\item (Sato-Tate theorem for families) Let $\\{p_k\\}_{k\\ge1}$ be a sequence of primes tending to $\\infty$.\n\tSuppose that $N_k p_k^{-j} \\rightarrow \\infty$ (resp. $m(\\xi_k)p_k^{-j}\\rightarrow \\infty$)\n as $k\\rightarrow \\infty$ for every $j\\ge 1$ if $\\xi_k$ (resp. $N_k$) remains fixed as $k$ varies.\n Then for every $\\widehat{\\phi}$,\n $$\\lim_{k\\rightarrow \\infty} \\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F}_k,p_k}(\\widehat{\\phi}_{p_k})\n = \\widehat{\\mu}^{\\mathrm{ST}}(\\widehat{\\phi}).$$\n\\end{enumerate}\n\\end{corollary}\n\n\n Theorem \\ref{t:intro:error-bound} and Corollary \\ref{c:intro:Plan-ST}\n remain valid if any finite number of primes are simultaneously considered in place of $p$ or $p_k$. 
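As a toy numerical illustration of the Sato-Tate convergence in part (ii) of the corollary (our own sketch, not part of the paper's arguments), take the special case where the Sato-Tate measure is the classical density (2/pi) sin^2(theta) d(theta) on [0, pi]: sampling angles from this measure and integrating test functions against the averaged counting measure defined above recovers the Sato-Tate moments.

```python
import math
import random

def sample_sato_tate(n, rng):
    # Rejection sampling from the density (2/pi) * sin(theta)^2 on [0, pi]:
    # propose theta uniformly, accept with probability sin(theta)^2.
    out = []
    while len(out) < n:
        theta = rng.uniform(0.0, math.pi)
        if rng.random() < math.sin(theta) ** 2:
            out.append(theta)
    return out

def mu_count(samples, test_fn):
    # Averaged counting measure: (1/|F|) times a sum of Dirac masses,
    # integrated against the test function test_fn.
    return sum(test_fn(t) for t in samples) / len(samples)

rng = random.Random(0)
samples = sample_sato_tate(200_000, rng)

# Exact moments of (2/pi) sin^2(theta) d(theta): the integral of cos(theta)
# vanishes and the integral of cos(2*theta) equals -1/2.
first = mu_count(samples, math.cos)
second = mu_count(samples, lambda t: math.cos(2 * t))
print(first, second)  # empirically close to 0 and -0.5
```

Here the sampled angles play the role of the Satake parameters of the family members at a large prime, and the two tested moments are the first two in the family of power-sum test functions used later.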
Moreover (i) of the corollary holds true for more general (and possibly ramified) test functions $\\widehat \\phi_p$ on $G({\\mathbb Q}_p)^{\\wedge}$ thanks to Sauvageot's density theorem. It would be interesting to quantify the error bounds in this generality.\n\n\\subsection{Random matrices}\\label{sec:intro:rdm} We provide a brief account of the theory of random matrices. The reader will find more details in \\S\\ref{sec:zeros:matrix} and extensive treatments in~\\cites{book:Mehta,book:KS}.\n\nThe Gaussian Unitary Ensemble and Gaussian Orthogonal Ensemble were introduced by the physicist Wigner in the study of resonances of heavy nuclei.\nThe Gaussian Symplectic Ensemble was introduced later by Dyson together with his Circular Ensembles.\nIn this paper we are concerned with the Ensembles attached to compact Lie groups, which were studied by Katz-Sarnak and occur in the statistics of $L$-functions. (See~\\cite{Duen:RMT} for the precise connections between the Ensembles attached to different Riemannian symmetric spaces.)\n\nOne considers eigenvalues of matrices in compact groups $\\mathcal{G}(N)$ of large dimension endowed with the Haar probability measure. We have three Symmetry Types $\\mathcal{G}=SO(even)$ (resp. $\\mathcal{G}=U$, $\\mathcal{G}=USp$); the notation says that for all $N\\ge 1$, we consider the groups $\\mathcal{G}(N)=SO(2N)$ (resp. 
$\\mathcal{G}(N)=U(N)$ and $\\mathcal{G}(N)=USp(2N)$).\n\nFor all matrices $A \\in \\mathcal{G}(N)$ we have an associated sequence of \\key{normalized angles}\n\\begin{equation}\\label{intro:vartheta}\n0\\le \\vartheta_1 \\le \\vartheta_2 \\le \\cdots \\le \\vartheta_N \\le N.\n\\end{equation}\nFor example in the case $\\mathcal{G}=U$, the eigenvalues of $A\\in U(N)$ are given by $e(\\tfrac{\\vartheta_j}{N})=e^{2i\\pi \\vartheta_j\/N}$ for $1\\le j\\le N$.\nThe normalization is such that the mean spacing of the sequence~\\eqref{intro:vartheta} is about one.\n\nFor each $N\\ge 1$ these normalized angles $(\\vartheta_i)_{1\\le i\\le N}$ are correlated random variables. Their joint density is proportional to\n\\begin{equation}\\label{ginibre}\n \\prod_{1\\le i<j\\le N} \\left|e\\Big(\\frac{\\vartheta_i}{N}\\Big)-e\\Big(\\frac{\\vartheta_j}{N}\\Big)\\right|^{2}\n\\end{equation}\nin the case $\\mathcal{G}=U$, by the Weyl integration formula, with analogous expressions for the other Symmetry Types.\n\nOur main result on the $1$-level density of low-lying zeros is the following.\n\n\\begin{theorem}\\label{th:low-lying}\nThere exists $\\delta>0$ depending on the family $\\mathfrak{F}$ such that the following holds.\n Let $\\mathfrak{F}=r_*\\mathcal{F}$ be a family of $L$-functions in the weight aspect as in~\\S\\ref{sec:intro:familyL}, assuming the functoriality conjecture as in Hypothesis~\\ref{hypo:functorial-lift}. Assume Hypothesis~\\ref{hyp:poles} concerning the poles of $\\Lambda(s,\\Pi)$ for $\\Pi\\in\\mathfrak{F}_k$. Then for all Schwartz functions $\\Phi$ whose Fourier transform $\\widehat \\Phi$ has support in $(-\\delta,\\delta)$:\n\n(i) there is a limiting $1$-level density for the low-lying zeros. Namely there is a density $W(x)$ such that\n\\begin{equation*}\n\\lim_{k\\to \\infty} D(\\mathfrak{F}_k;\\Phi) = \\int_{-\\infty}^{\\infty}\n\\Phi(x)W(x)dx;\n\\end{equation*}\n\n(ii) the density $W(x)$ is determined by the Frobenius--Schur indicator of the irreducible representation $r$. Precisely,\n\\begin{equation}\\label{W:th:low-lying}\nW=\\begin{cases}\nW(SO(even)),& \\text{if $s(r)=-1$,}\\\\\\\\\nW(U),& \\text{if $s(r)=0$,}\\\\\\\\\nW(USp),& \\text{if $s(r)=1$.}\n\\end{cases}\n\\end{equation}\n\\end{theorem}\n\nThe constant $\\delta>0$ depends on the family $\\mathfrak{F}$. In other words it depends on the group $G$ and the $L$-morphism $r:{}^LG \\to \\GL(d,\\mathbb{C})$. 
Its numerical value is directly related to the numerical values of the exponents in the error term occurring in Theorem~\\ref{t:intro:error-bound}. Although we do not attempt to do so in the present paper, it would be interesting to produce a value of $\\delta$ that is as large as possible; see~\\cite{ILS00} for the case of $\\GL(2)$. This would require sharp bounds for orbital integrals as can be seen from the outline below. A specific problem in the weight aspect would be to optimize the exponents $A_{\\mathrm{wt}},B_{\\mathrm{wt}},e$ in~\\eqref{e:intro-unif-bound}. (In fact we can achieve $e=1$, see \\S\\ref{sec:intro:outline} below. This is natural from the viewpoint of harmonic analysis.)\n\nOur proofs of Theorems~\\ref{t:intro:error-bound} and~\\ref{th:low-lying} are effective. This means that all constants and all exponents in the statement of the estimates could, in principle, be made entirely explicit.\n\n\\subsection{Outline of proofs}\\label{sec:intro:outline}\n\nA wide range of methods are used in the proof. 
Among them are the Arthur-Selberg trace formula, the analytic theory of\n$L$-functions, representation theory and harmonic analysis on $p$-adic and real groups, and random matrix theory.\n\nThe first main result of our paper is Theorem \\ref{t:intro:error-bound}, proved in Section~\\ref{s:aut-Plan-theorem}.\nWe already pointed out after stating the theorem that the Sato-Tate equidistribution\nfor families (Corollary \\ref{c:intro:Plan-ST})\nis derived from Theorem \\ref{t:intro:error-bound}\nand the fact that the Plancherel measure tends to the Sato-Tate measure as the residue characteristic\nis pushed to $\\infty$.\n\nLet us outline the proof of the theorem.\nIn fact we restrict our attention to part (ii), as (i) is handled by a similar method and is simpler to deal with.\nThus we consider $\\mathcal{F}$ with fixed level and weight $\\xi$, where $\\xi$ is regarded\nas a variable.\nOur starting point is to realize that for $\\phi_p\\in C^\\infty_c(G({\\mathbb Q}_p))$,\nwe may interpret $\\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F},p}(\\widehat{\\phi}_p)$ in terms of the spectral side of the trace formula for $G$\nevaluated against the function $\\phi_p\\phi^{\\infty,p}\\phi_\\infty\\in C^\\infty_c(G({\\mathbb A}))$ for a suitable\n$\\phi^{\\infty,p}$ (depending on $\\mathcal{F}$ and $p$; note that $p$ is allowed to vary) and an Euler-Poincar\\'{e} function $\\phi_\\infty$ at $\\infty$ (depending on $\\xi$).\nApplying the trace formula, which has a simple form thanks to $\\phi_\\infty$, we get a\ngeometric expansion for $\\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F},p}(\\widehat{\\phi}_p)$:\n\\begin{equation}\\label{e:sketch-of-pf-1stThm} \\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F},p}(\\widehat{\\phi}_{p})\n=\\sum_{M\\subset G\\atop \\textrm{cusp. 
Levi}}\n\\sum_{\\gamma\\in M({\\mathbb Q})\/\\sim\\atop {\\mathbb R}\\textrm{-ell}}\na'_{M,\\gamma}\\cdot O^{M({\\mathbb A}^\\infty)}_\\gamma(\\phi^\\infty_{M})\\frac{\\Phi^G_M(\\gamma,\\xi)}{\\dim \\xi},\n\\end{equation}\nwhere $a'_{M,\\gamma}\\in {\\mathbb C}$ is a coefficient encoding a certain volume associated with\nthe connected centralizer of $\\gamma$ in $M$\nand $\\phi^\\infty_{M}$ is the constant term of $\\phi^\\infty$ along (a parabolic subgroup associated with) $M$.\nThe Plancherel formula identifies the term for $M=G$ and $\\gamma=1$ with\n$\\widehat{\\mu}^{\\mathrm{pl}}_p(\\widehat{\\phi}_p)$, which basically dominates the right hand side.\n\nThe proof of Theorem \\ref{t:intro:error-bound}.(ii) boils down to bounding\nthe other terms on the right hand side of \\eqref{e:sketch-of-pf-1stThm}.\nHere is a rough explanation of how to analyze each component there.\nThe first summation is finite and controlled by $G$, so we may as well look at the formula\nfor each $M$. There are finitely many conjugacy classes in the second summation for which the summand is nonzero.\nThe number of such conjugacy classes may be bounded by a power of $p$ where the exponent of $p$ depends only on $\\kappa$\n(measuring the ``complexity'' of $\\phi_p$).\nThe term $a'_{M,\\gamma}$, when unraveled, involves a special value of some Artin $L$-function.\nWe establish a bound on the special value which suffices to deal with $a'_{M,\\gamma}$.\nThe last term $\\frac{\\Phi^G_M(\\gamma,\\xi)}{\\dim \\xi}$ can be estimated by using a character formula\nfor the stable discrete series character $\\Phi^G_M(\\gamma,\\xi)$ as well as the dimension formula\nfor $\\xi$.\nIt remains to take care of $O^{M({\\mathbb A}^\\infty)}_\\gamma(\\phi^\\infty_{M})$.\nThis turns out to be the most difficult task since\nTheorem \\ref{t:intro:error-bound}\nasks for a bound that is \\emph{uniform as the residue characteristic varies}.\n\nWe are led to prove that there exist $a,b,e> 0$, depending only on $G$, 
such that for almost all $q$,\n\\begin{equation}\\label{e:intro-unif-bound}|O^{M({\\mathbb Q}_q)}_\\gamma(\\phi_q)|\\le q^{a+b \\kappa} D^M(\\gamma)^{-e\/2}\\end{equation}\nfor all semisimple $\\gamma$ and all $\\phi_q\\in {\\mathcal{H}}^{\\mathrm{ur}}(M({\\mathbb Q}_q))^{\\le \\kappa}$ with $|\\phi_q|\\le 1$.\nThe justification of \\eqref{e:intro-unif-bound} takes up the whole of Section~\\ref{s:app:unif-bound}.\n The problem already appears to be deep for the unit elements of unramified Hecke algebras,\n in which case one can take $\\kappa=0$. (By a different argument based on arithmetic motivic integration, Cluckers, Gordon, and Halupczok establish a stronger uniform bound with $e=1$. This work is presented in Appendix~\\ref{s:app:B}.)\n At the (fixed) finite set of primes where ramification occurs, the problem comes down to bounding\nthe orbital integral $|O^{M({\\mathbb Q}_q)}_\\gamma(\\phi_q)|$ for fixed $q$ and $\\phi_q$. It is deduced\nfrom the Shalika germ theory that the orbital integral is bounded by a constant, if normalized\nby the Weyl discriminant $D^M(\\gamma)^{1\/2}$, as $\\gamma$ runs over the set of semisimple elements.\nSee Appendix \\ref{s:app:Kottwitz} by Kottwitz for details.\n\nWe continue with Theorem~\\ref{th:low-lying}. The proof relies heavily on Theorem~\\ref{t:intro:error-bound}. The connection between the two statements might not be immediately apparent.\n\nA standard procedure based on the explicit formula (see~Section~\\ref{sec:pp}) expresses the sum~\\eqref{intro:1-level} over zeros of $L$-functions as a sum over prime numbers involving Satake parameters. 
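For orientation, here is the simplest instance of such a test function, for $G=\GL(2)$ and $r$ the standard representation (a standard unramified computation that we include only as an illustration; the normalizations used in the body of the paper may differ):

```latex
% Unramified GL(2) over Q_p: let \pi_p be unramified with Satake
% parameters \alpha_p, \beta_p. The Satake isomorphism sends the
% double coset function 1_{K diag(p,1) K} to p^{1/2}(x_1 + x_2), hence
\operatorname{tr} \pi_p\Big( p^{-1/2}\,\mathbf{1}_{K\,\mathrm{diag}(p,1)\,K} \Big)
  \;=\; \alpha_p + \beta_p .
% Thus \phi_p := p^{-1/2} 1_{K diag(p,1) K} has degree at most 1 in the
% sense of the filtration above, and \widehat{\phi}_p(\pi_p) is the power
% sum of Satake parameters with k = 1; higher power sums arise similarly
% from elements of degree at most k.
```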
The details are to be found in Section~\\ref{sec:pf}, and the result is that $D(\\mathfrak{F}_k;\\Phi)$ can be approximated by\n\\begin{equation} \\label{intro:sumprime}\n \\sum_{\\text{prime } p}\n\\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F}_k,p}(\\widehat{\\phi}_p)\n\\Phi\\Big(\\frac{\\log p}{\\pi \\log C(\\mathfrak{F}_k)}\\Big).\n\\end{equation}\nHere $\\phi_p\\in \\mathcal{H}^{\\mathrm{ur}}(G({\\mathbb Q}_p))^{\\le \\kappa}$ is suitably chosen such that $\\widehat \\phi_p(\\pi_p)$ is a sum of powers of the Satake parameters of $r_*\\pi$ (see Sections~\\ref{s:Satake-trans} and~\\ref{s:Plancherel}). The integer $\\kappa$ may be large but it depends only on $G$ and $r$, so it should be considered fixed. Also the sum is over unramified primes.\nWe have $\\log C(\\mathcal{F}_k) \\asymp \\log m(\\xi_k)$ (see Sections~\\ref{s:langlands} and~\\ref{sec:zeros}). We deduce that the sum is supported on those primes $p\\le m(\\xi_k)^{A\\delta}$ where $A$ is a suitable constant and $\\delta$ is as in Theorem~\\ref{th:low-lying}.\n\nWe apply Theorem~\\ref{t:intro:error-bound}, which has two components: the main term and the error term. We begin with the main term, which amounts to substituting\n$\\widehat{\\mu}^{\\mathrm{pl,\\mathrm{ur}}}_p(\\widehat{\\phi}_p)$ for $\\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F}_k,p}(\\widehat{\\phi}_p)$ in~\\eqref{intro:sumprime}.\nUnlike $\\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F}_k,p}$, this term is purely local, thus simpler. Indeed $\\widehat{\\mu}^{\\mathrm{pl,\\mathrm{ur}}}_p(\\widehat{\\phi}_p)$ can be computed explicitly for low rank groups, see e.g.~\\cite{Gro98} for all the relevant properties of the Plancherel measure. 
However we want to establish Theorem~\\ref{th:low-lying} in general, so we proceed differently.\n\nUsing certain uniform estimates by Kato~\\cite{Kat82}, we can approximate $\\widehat{\\mu}^{\\mathrm{pl,\\mathrm{ur}}}_p(\\widehat{\\phi}_p)$ by a much simpler expression that depends directly on the restriction of $r$ to $\\widehat G \\rtimes W_{\\mathbb{Q}_p}$. Then a pleasant computation using the Cebotarev equidistribution theorem, Weyl's unitary trick and the properties of the Frobenius--Schur indicator shows that the sum over primes of this main term contributes $\\frac{-s(r)}{2}\\Phi(0)$ to~\\eqref{intro:sumprime}. This exactly reflects the identities~\\eqref{W:th:low-lying} in statement (ii) of Theorem~\\ref{th:low-lying}.\n\nWe continue with the error term $O(p^{A_\\mathrm{wt}+B_\\mathrm{wt}\\kappa}m(\\xi_k)^{-C_\\mathrm{wt}})$, which we need to insert in~\\eqref{intro:sumprime}. We can see why the proof of Theorem~\\ref{th:low-lying} requires the full force of Theorem~\\ref{t:intro:error-bound} and its error term: the polynomial control by $p^{A_\\mathrm{wt}+B_\\mathrm{wt}\\kappa}$ implies that the sum over primes is at most $m(\\xi_k)^{D\\delta}$ for some $D>0$; the power saving $m(\\xi_k)^{-C_\\mathrm{wt}}$ is exactly what is needed to beat $m(\\xi_k)^{D\\delta}$ when $\\delta$ is chosen small enough.\n\n\\subsection{Notation}\\label{sub:notation} We distinguish the letter $\\mathcal{F}$ for families of automorphic representations on general reductive groups and $\\mathfrak{F}=r_*\\mathcal{F}$ for the families of automorphic representations on $\\GL(d)$.\n\n\nLet us describe in words the significance of various constants occurring in the main statements. 
We often follow the convention of writing multiplicative constants in lowercase letters and constants in the exponents in uppercase or Greek letters.\n\\begin{itemize}\n \\item The exponent $\\beta$ from Lemma~\\ref{l:bound-degree-Satake} is such that for all $\\phi\\in {\\mathcal{H}}^{\\mathrm{ur}}(\\GL_d)$ of degree at most $\\kappa$, the pullback $r^*\\phi$ is of degree at most $\\beta \\kappa$.\n \\item The exponent $b_G$ from Lemma~\\ref{l:bounding-phi-on-S_0} controls a bound for the constant term $\\abs{\\phi_M(1)}$ for all Levi subgroups $M$ and $\\phi \\in {\\mathcal{H}}^{\\mathrm{ur}}(G)$ of degree at most $\\kappa$.\n\n \\item The exponent $0<\\theta<\\frac12$ is a nontrivial bound towards Ramanujan-Petersson for $\\GL(d,\\mathbb{A})$.\n \\item The integer $i\\ge 1$ in Corollary~\\ref{c:vanishing-ram-subgroup} is an upper bound for the ramification of the Galois group ${\\rm Gal}(E\/F)$.\n \n \\item The constants $B_\\Xi$ and $c_\\Xi$ in Lemma~\\ref{l:forcing-unipotent} and $A_3,B_3$ in Proposition~\\ref{p:bound-number-of-conj} control the number of rational conjugacy classes intersecting a small open compact subgroup.\n \\item The integer $u_G\\ge 1$ in Lemma~\\ref{l:bounding-conj-in-st-conj} is a uniform upper bound for the number of $G(F_v)$-conjugacy classes in a stable conjugacy class.\n \\item The integer $n_G\\ge 0$ is the minimum value for the dimension of the unipotent radical of a proper parabolic subgroup of $G$ over $\\overline{F}$.\n \\item The constant $c>0$ is a bound for the number of connected components $\\pi_0(Z(\\widehat{I}_\\gamma)^{\\Gamma})$. 
Corollary~\\ref{c:bounding-pi0-general}.\n \\item The exponents $A_\\mathrm{lv},B_\\mathrm{lv},C_\\mathrm{lv}>0$ in Theorem~\\ref{t:level-varies} (see also Theorem~\\ref{t:intro:error-bound}) and $A_\\mathrm{wt},B_\\mathrm{wt},C_\\mathrm{wt}>0$ in Theorem~\\ref{t:weight-varies}.\n \\item For families in the weight aspect, the constant $\\eta>0$ which may be chosen arbitrary small enters in the condition~\\eqref{dimxik} that the dominant weights attached to $\\xi_k$ stay away from the walls.\n \\item The exponent $C_{\\text{pole}}>0$ in the Hypothesis~\\ref{hyp:poles} concerning the density of poles of $L$-functions.\n \\item The exponents $00$ in the Theorem~\\ref{th:onelevel} controls the support of the Fourier transform $\\widehat \\Phi$ of the test function $\\Phi$.\n \\item The constant $c(f)>0$ depending on the test function $f$ is a uniform upper bound for normalized orbital integrals $\\abs{D^G(\\gamma)}^{\\frac12} O_\\gamma(f)$ (Appendix~\\ref{s:app:Kottwitz}).\n\\end{itemize}\n\nSeveral constants are attached directly to the group $G$ such as the dimension $d_G=\\dim G$, the rank $r_G={\\rm rk} G$, the order of the Weyl group $w_G=\\abs{\\Omega}$, the degree $s_G$ of the smallest extension of $F$ over which $G$ becomes split. Also in Lemma~\\ref{l:bounding-phi-on-S_0} the constant $b_G$ gives a bound for the constant terms along Levi subgroups. The constants $a_G,b_G, e_G$ in Theorem~\\ref{t:appendeix2} gives a uniform bound for certain orbital integrals.\nIn general we have made effort to keep light and consistent notation throughout the text.\n\n\nIn Section~\\ref{s:bg} we will choose a finite extension $E\/F$ which splits maximal tori of subgroups of $G$. 
The degree $s_G^{\\mathrm{sp}}=[E:F]$ will be controlled by $s_G^{\\mathrm{sp}}\\le s_Gw_G$ (see Lemma~\\ref{l:torus-splitting}), while the ramification of $E\/F$ will vary.\nIn Section~\\ref{s:Sato-Tate} we consider the finite extension $F_1\/F$ such that ${\\rm Gal}(\\overline{F}\/F)$ acts on $\\widehat{G}$ through the faithful action of ${\\rm Gal}(F_1\/F)$.%\nFor example if $G$ is a non-split inner form of a split group then $F_1=F$. In Section~\\ref{sec:pf} we consider a finite extension $F_2\/F_1$ such that the representation $r$ factors through $\\widehat{G}\\rtimes {\\rm Gal}(F_2\/F)$. For a general $G$, there might not be any direct relationship between the extensions $E\/F$ and $F_2\/F_1\/F$.\n\n\\subsection{Structure of the paper}\n\nFor a quick tour of our main results and the structure of our arguments, one could start reading from Section \\ref{s:aut-Plan-theorem} after familiarizing oneself with basic notation, referring to earlier sections for further notation and basic facts as needed.\n\nThe first Sections~\\ref{s:Satake-trans} and~\\ref{s:Plancherel} are concerned with harmonic analysis on reductive groups over local fields, notably the Satake transform, $L$-groups and $L$-morphisms, the properties of the Plancherel measure and the Macdonald formula for the unramified spectrum. We establish bounds for truncated Hecke algebras and for character traces that will play a role in subsequent chapters. 
In Section~\\ref{sec:pp} we recall various analytic properties of automorphic $L$-functions on $\\GL(d)$ and notably isobaric sums, bounds towards Ramanujan--Petersson and the so-called explicit for the sum of the zeros.\nSection~\\ref{s:Sato-Tate} introduces the Sato--Tate measure for general groups and Sato--Tate equidistribution for Satake parameters and for families.\nThe next Section~\\ref{s:bg} gathers various background materials on orbital integral, the Gross motive and Tamagawa measure, discrete series characters and Euler--Poincar\\'e functions, and Frobenius--Schur indicator. We establish bounds for special values of the Gross motive which will enter in the geometric side of the trace formula.\n\nIn Section~\\ref{s:app:unif-bound} we establish a uniform bound for orbital integrals of the type~\\eqref{e:intro-unif-bound}.\nIn Section~\\ref{s:conj} we establish various bounds on conjugacy classes and level subgroups. How these estimates enter in the trace formula has been detailed in the outline above.\n\nThen we are ready in Section~\\ref{s:aut-Plan-theorem} to establish our main result, an automorphic Plancherel theorem for families with error terms and its application to the Sato-Tate theorem for families. The theorem is first proved for test functions on the unitary dual coming from Hecke algebras by orchestrating all the previous results in the trace formula. Then our result is improved to allow more general test functions, either in the input to the Sato-Tate theorem or in the prescribed local condition for the family, by means of Sauvageot's density theorem.\n\nThe last three Sections~\\ref{s:langlands}, \\ref{sec:zeros} and~\\ref{sec:pf} concern the application to low-lying zeros. In complete generality we need to rely on Langlands global functoriality and other hypothesis that we state precisely. 
These unproven assumptions are within reach in the context of endoscopic transfer and we will return to them in subsequent works.\n\nThe Appendix~\\ref{s:app:Kottwitz} by Kottwitz establishes the boundedness of normalized orbital integrals from the theory of Shalika germs. The Appendix~\\ref{s:app:B} by Cluckers--Gordon--Halupczok establishes a strong form of~\\eqref{e:intro-unif-bound} with $e=1$ by using recent results in arithmetic motivic integration.\n\n\\subsection{Acknowledgments} We would like to thank Jim Arthur, Joseph Bernstein, Laurent Clozel, Julia Gordon, Nicholas Katz, Emmanuel Kowalski, Erez Lapid, Philippe Michel, Peter Sarnak, Kannan Soundararajan and Akshay Venkatesh for helpful discussions and comments. We would like to express our gratitude to Robert Kottwitz and Bao Ch\\^{a}u Ng\\^{o} for\n helpful discussions regarding Section~\\ref{s:app:unif-bound}, especially about the possibility of a geometric approach. We thank Brian Conrad for explaining to us the integral models for reductive groups.\n\nMost of this work took place during the AY2010-2011 at the Institute for Advanced Study and some of the results have been presented there in March. We thank the audience for their helpful comments and the IAS for providing excellent working conditions. S.W.S. acknowledges support from the National Science Foundation during his stay at the IAS under agreement No. DMS-0635607 and thanks Massachusetts Institute of Technology and Korea Institute for Advanced Study for providing a very amiable work environment. N.T. 
is supported by a grant \\#209849 from the Simons Foundation.\n\n\\section{Satake Transforms}\\label{s:Satake-trans}\n\n\\subsection{$L$-groups and $L$-morphisms}\\label{sub:L-groups}\n\n We are going to recall some definitions and\n facts from \\cite[\\S1,\\S2]{Bor79} and \\cite[\\S1]{Kot84a}.\n Let $F$ be a local or global field of characteristic 0 with an algebraic closure $\\overline{F}$, which we fix.\n Let $W_F$ denote the Weil group of $F$ and set $\\Gamma:={\\rm Gal}(\\overline{F}\/F)$.\n Let $H$ and $G$ be connected reductive groups over $F$.\n Let $(\\widehat{B},\\widehat{T},\\{X_{\\alpha}\\}_{\\alpha\\in \\Delta^{\\vee}})$ be a\n splitting datum fixed by $\\Gamma$, from which the $L$-group\n $${}^L G=\\widehat{G}\\rtimes W_F$$ is constructed.\n An $L$-morphism $\\eta:{}^L H\\rightarrow {}^L G$ is a continuous map commuting with\n the canonical surjections ${}^L H\\rightarrow W_F$ and ${}^L G\\rightarrow W_F$ such that\n $\\eta|_{\\widehat{H}}$ is a morphism of complex Lie groups.\n A representation of ${}^L G$ is by definition\n a continuous homomorphism $r:{}^L G\\rightarrow \\GL(V)$ for some ${\\mathbb C}$-vector space $V$ with $\\dim V<\\infty$\n such that $r|_{\\widehat{G}}$ is a morphism of complex Lie groups. Clearly giving a representation\n ${}^L G\\rightarrow \\GL(V)$ is equivalent to giving an $L$-morphism ${}^L G\\rightarrow {}^L \\GL(V)$.\n\n Let $f:H\\rightarrow G$ be a normal morphism, which means that $f(H)$ is a normal subgroup of $G$.\n Then it gives rise to an $L$-morphism ${}^L G \\rightarrow {}^L H$\n as explained in \\cite[2.5]{Bor79}. 
In particular, there is a\n $\\Gamma$-equivariant map $Z(\\widehat{G})\\rightarrow Z(\\widehat{H})$, which is\n canonical (independent of the choice of splittings).\n Thus an exact sequence of connected reductive groups over $F$\n $$1\\rightarrow G_1\\rightarrow G_2\\rightarrow G_3\\rightarrow 1$$ gives rise to\n a $\\Gamma$-equivariant exact sequence of ${\\mathbb C}$-diagonalizable groups\n $$1\\rightarrow Z(\\widehat{G}_3)\\rightarrow Z(\\widehat{G}_2)\\rightarrow Z(\\widehat{G}_1)\\rightarrow 1.$$\n\n\n\n\n\n\\subsection{Satake transform}\\label{sub:Satake-trans}\n\n From here throughout this section,\n let $F$ be a finite extension of ${\\mathbb Q}_p$ with integer ring $\\mathcal{O}$ and a uniformizer\n $\\varpi$. Set $q:=|\\mathcal{O}\/\\varpi\\mathcal{O}|$. Let $G$ be an \\emph{unramified} group over $F$ and\n let $B=TU$ be a Borel subgroup with maximal torus $T$ and unipotent radical $U$.\n Let $A$ denote the maximal $F$-split torus in $T$.\n Write $\\Phi_F$ (resp. $\\Phi$) for the set of all $F$-rational roots (resp. all roots over $\\overline{F}$)\n and $\\Phi_F^+$ (resp. $\\Phi^+$) for the subset of positive roots.\n Choose a smooth reductive model of $G$ over $\\mathcal{O}$\n corresponding to a hyperspecial point on the apartment for $A$.\n Set $K:=G(\\mathcal{O})$. Denote by $X_*(A)^+$ the subset of $X_*(A)$ lying in the closed Weyl\n chamber determined by $B$, namely\n $\\lambda\\in X_*(A)^+$ if $\\alpha(\\lambda)\\ge 0$ for all $\\alpha\\in \\Phi_F^+$.\n Denote by $\\Omega_F$ (resp. $\\Omega$) the $F$-rational Weyl group for $(G,A)$\n (resp. the absolute Weyl group for $(G,T)$), and $\\rho_F$ (resp. $\\rho$) the half sum\n of all positive roots in $\\Phi^+_F$ (resp. $\\Phi^+$). A partial order $\\le$ is defined on $X_*(A)$ (resp. $X_*(T)$) such that $\\mu\\le \\lambda$ if $\\lambda-\\mu$ is a linear combination of $F$-rational positive coroots (resp. positive coroots) with nonnegative coefficients. 
The same order extends to a partial order $\\le_{{\\mathbb R}}$ on $X_*(A)\\otimes_{\\mathbb Z} {\\mathbb R}$ and $X_*(T)\\otimes_{\\mathbb Z} {\\mathbb R}$ defined analogously.\n\n\n Let $F^{\\mathrm{ur}}$ denote the maximal unramified extension of $F$. Let ${\\rm Fr}$ denote the\n geometric Frobenius element of ${\\rm Gal}(F^{\\mathrm{ur}}\/F)$.\n Define $W_F^{\\mathrm{ur}}$ to be the unramified Weil group, namely the subgroup\n ${\\rm Fr}^{{\\mathbb Z}}$ of ${\\rm Gal}(F^{\\mathrm{ur}}\/F)$.\n Since ${\\rm Gal}(\\overline{F}\/F)$ acts on $\\widehat{G}$ through a finite quotient\n of ${\\rm Gal}(F^{\\mathrm{ur}}\/F)$, one can make sense of\n ${}^L G^{\\mathrm{ur}}:=\\widehat{G}\\rtimes W_F^{\\mathrm{ur}}$.\n\n Throughout this section we write $G$, $T$, $A$ for $G(F)$, $T(F)$, $A(F)$ if there is no confusion.\n Define ${\\mathcal{H}}^{\\mathrm{ur}}(G):=C^\\infty_c(K\\backslash G \/ K)$ and ${\\mathcal{H}}^{\\mathrm{ur}}(T):=C^\\infty_c(T(F)\/T(F)\\cap K)$.\n The latter is canonically isomorphic to ${\\mathcal{H}}^{\\mathrm{ur}}(A):=C^\\infty_c(A(F)\/A(\\mathcal{O}))$ via the inclusion\n $A\\hookrightarrow T$. We can further identify\n $${\\mathcal{H}}^{\\mathrm{ur}}(T)\\simeq {\\mathcal{H}}^{\\mathrm{ur}}(A)\\simeq {\\mathbb C}[X_*(A)]$$\n where the last ${\\mathbb C}$-algebra isomorphism\n matches $\\lambda\\in X_*(A)$ with ${\\mathbf{1}}_{\\lambda(\\varpi)(A\\cap K)}\\in {\\mathcal{H}}^{\\mathrm{ur}}(A)$.\n Let $\\lambda\\in X_*(A)$. Write $$\\tau^G_\\lambda:={\\mathbf{1}}_{K\\lambda(\\varpi)K}\\in {\\mathcal{H}}^{\\mathrm{ur}}(G),\\quad\n \\tau^A_\\lambda:=\\frac{1}{|\\Omega_F|}\\sum_{w\\in \\Omega_F}\n {\\mathbf{1}}_{w\\lambda(\\varpi)(A\\cap K)}\\in {\\mathcal{H}}^{\\mathrm{ur}}(A)^{\\Omega_F}.$$\n The sets $\\{\\tau^G_\\lambda\\}_{\\lambda\\in X_*(A)^+}$ and\n $\\{\\tau^A_\\lambda\\}_{\\lambda\\in X_*(A)^+}$ are bases for\n ${\\mathcal{H}}^{\\mathrm{ur}}(G)$ and ${\\mathcal{H}}^{\\mathrm{ur}}(A)^{\\Omega_F}$ as ${\\mathbb C}$-vector spaces, respectively. 
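To fix ideas, here is the shape of these objects in the simplest split example $G=\GL_2$ (a standard illustration that we add for the reader; it is not used later):

```latex
% G = GL_2 over F, with A = T the diagonal torus and K = GL_2(\mathcal{O}).
% Identify X_*(A) \simeq \mathbb{Z}^2 via \lambda = (a,b) \mapsto
%   \lambda(\varpi) = \mathrm{diag}(\varpi^a, \varpi^b),
% so that X_*(A)^+ = \{(a,b) : a \ge b\}, and the Cartan decomposition reads
G(F) \;=\; \coprod_{\lambda \in X_*(A)^+} K\,\lambda(\varpi)\,K .
% The basis elements \tau^G_\lambda recover the classical Hecke operators:
% \tau^G_{(1,0)} = \mathbf{1}_{K \mathrm{diag}(\varpi,1) K} is T_\varpi, and
% \tau^G_{(1,1)} = \mathbf{1}_{\varpi K} (a central element) is S_\varpi.
```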
Consider the map\n \\begin{equation}\\label{e:Satake-formula}{\\mathcal{H}}^{\\mathrm{ur}}(G)\\rightarrow {\\mathcal{H}}^{\\mathrm{ur}}(T),\n \\quad f\\mapsto \\left(t\\mapsto \\delta_B(t)^{1\/2} \\int_{U} f(tu)du\\right)\\end{equation}\n composed with ${\\mathcal{H}}^{\\mathrm{ur}}(T)\\simeq {\\mathcal{H}}^{\\mathrm{ur}}(A)$ above.\n The composite map induces a ${\\mathbb C}$-algebra isomorphism \\begin{equation}\\label{e:Satake}{\\mathcal{S}}^G:{\\mathcal{H}}^{\\mathrm{ur}}(G)\\stackrel{\\sim}{\\ra}\n {\\mathcal{H}}^{\\mathrm{ur}}(A)^{\\Omega_F}\\end{equation} called the Satake isomorphism. We often\n write just ${\\mathcal{S}}$ for ${\\mathcal{S}}^G$. We note that in general\n ${\\mathcal{S}}$ does not map $\\tau^G_\\lambda$ to $\\tau^A_{\\lambda}$.\n\n Another useful description of ${\\mathcal{H}}^{\\mathrm{ur}}(G)$ is through representations of ${}^L G^{\\mathrm{ur}}$. (The latter notion is defined as in \\S\\ref{sub:L-groups}.) Write\n $(\\widehat{G}\\rtimes {\\rm Fr})_{{\\rm ss-conj}}$ for the set of $\\widehat{G}$-conjugacy classes of\n semisimple elements in $\\widehat{G}\\rtimes {\\rm Fr}$.\n Consider the set $${\\rm ch}({}^L G^{\\mathrm{ur}}):=\\{{\\rm tr}\\, r : (\\widehat{G}\\rtimes {\\rm Fr})_{{\\rm ss-conj}}\\rightarrow {\\mathbb C}\\,|\\,\n r~\\mathrm{is~a~representation~of~} {}^L G^{\\mathrm{ur}}\\}.$$\n Define ${\\mathbb C}[{\\rm ch}({}^L G^{\\mathrm{ur}})]$ to be the ${\\mathbb C}$-algebra generated by ${\\rm ch}({}^L G^{\\mathrm{ur}})$\n in the space of functions on $(\\widehat{G}\\rtimes {\\rm Fr})_{{\\rm ss-conj}}$.\n For each $\\lambda\\in X_*(A)^+$ define the quotient\n \\begin{equation}\\label{e:chi_lambda}\\chi_\\lambda:=\\frac{\\sum_{w\\in \\Omega_F} {\\rm sgn}(w) w(\\lambda+\\rho_F)}{\\sum_{w\\in \\Omega_F} {\\rm sgn}(w)w\\rho_F},\\end{equation}\n which exists as an element of ${\\mathbb C}[X_*(A)]^{\\Omega_F}$ and is unique.\n(One may view $\\chi_\\lambda$ as\nthe analogue in the disconnected case\nof the irreducible character of highest 
weight $\\lambda$, cf. proof of Lemma \\ref{l:Kostant} below.)\n Then $\\{\\chi_\\lambda\\}_{\\lambda\\in X_*(A)^+}$\n is a basis for ${\\mathbb C}[X_*(A)]^{\\Omega_F}$ as a ${\\mathbb C}$-vector space, cf. \\cite[p.465]{Kat82}.\n (Another basis was given by $\\tau^A_\\lambda$'s above.)\n There is a canonical ${\\mathbb C}$-algebra isomorphism\n \\begin{equation}\\label{e:Satake2} {\\mathcal{T}}:{\\mathbb C}[{\\rm ch}({}^L G^{\\mathrm{ur}})] \\stackrel{\\sim}{\\ra} {\\mathcal{H}}^{\\mathrm{ur}}(A)^{\\Omega_F},\\end{equation}\n determined as follows (see \\cite[Prop 6.7]{Bor79} for detail):\n for each irreducible $r$, ${\\rm tr}\\, r|_{\\widehat{T}}$ is shown to factor through\n $\\widehat{T}\\rightarrow \\widehat{A}$ (induced by $A\\subset T$). Hence ${\\rm tr}\\, r|_{\\widehat{T}}$ can be viewed as an element\n of ${\\mathbb C}[X^*(\\widehat{A})]={\\mathbb C}[X_*(A)]$, which can be seen to be invariant under $\\Omega_F$. Define ${\\mathcal{T}}({\\rm tr}\\, r)$ to be the latter element.\n\n Let $r_0$ be an irreducible representation of $\\widehat{G}$ of highest weight\n $\\lambda_0\\in X^*(\\widehat{T})^+=X_*(T)^+$. The group $W_F^{\\mathrm{ur}}$ acts on $X^*(\\widehat{T})^+$.\n Write $\\mathrm{Stab}(\\lambda_0)\\subset W^{\\mathrm{ur}}_F$ for the stabilizer subgroup for $\\lambda_0$,\nwhich has finite index (since a finite power of ${\\rm Fr}$ acts trivially on $\\widehat{G}$ and thus also on $\\widehat{T}$).\n Put $r:={\\rm Ind}_{\\widehat{G}\\rtimes \\mathrm{Stab}(\\lambda_0)}^{^L G^{\\mathrm{ur}}} r_0$\n and $\\lambda:=\\sum_{\\sigma\\in W^{\\mathrm{ur}}_F\/\\mathrm{Stab}(\\lambda_0)} \\sigma\\lambda_0\n \\in X_*(A)^+$.\n Clearly $r$ and $\\lambda$ depend only on the $W_F^{\\mathrm{ur}}$-orbit of $\\lambda_0$.\n Put $i(\\lambda_0):=[ W^{\\mathrm{ur}}_F:\\mathrm{Stab}(\\lambda_0)]$.\n\n\\begin{lem}\\label{l:Kostant}\n\\begin{enumerate}\n\\item Suppose that $r$ and $\\lambda$ are obtained from $r_0$ and $\\lambda_0$ as above. 
Then\n \\begin{equation}\\label{e:T(tr-r)}{\\mathcal{T}}({\\rm tr}\\, r)=\\chi_\\lambda.\\end{equation}\n\\item In general for any irreducible representation $r':{}^L G^{\\mathrm{ur}}\\rightarrow \\GL_d({\\mathbb C})$\nsuch that $r'(W^{\\mathrm{ur}}_F)$ has relatively compact image,\nlet $r_0$ be any irreducible subrepresentation of $r'|_{\\widehat{G}}$.\n Let $r$ be obtained from $r_0$ as above.\nThen for some $\\zeta\\in {\\mathbb C}^\\times$ with $|\\zeta|=1$,\n$${\\rm tr}\\, r'=\\zeta\\cdot {\\rm tr}\\, r.$$\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\n Let us prove (i).\n For any $i\\ge 1$, let ${}^L G_{i}$ denote the finite $L$-group\n $\\widehat{G}\\rtimes {\\rm Gal}(F_{i}\/F)$ where $F_{i}$ is the degree $i$ unramified\n extension of $F$ in $\\overline{F}$.\nIt is easy to see that $r({\\rm Fr}^{i(\\lambda_0)})$ is trivial and that\n $r={\\rm Ind}^{{}^L G_{i(\\lambda_0)}}_{\\widehat{G}} r_0$.\n Then \\eqref{e:T(tr-r)} amounts to\nKostant's character formula for a disconnected group (\\cite[Thm 7.5]{Kos61}) applied to\n${}^L G_{i(\\lambda_0)}$.\n As for (ii), let $\\lambda_0$ and $\\lambda$ be as in the paragraph\n preceding the lemma. Let $j\\ge 1$ be such that $G$ becomes split over a degree $j$\nunramified extension of $F$. (Recall that $G$ is assumed to be unramified.)\n By twisting $r'$ by a unitary character of $W^{\\mathrm{ur}}_F$ one may assume that\n $r'$ factors through ${}^L G_j$.\n Then both $r$ and $r'$ factor through ${}^L G_j$\n and are irreducible constituents of\n ${\\rm Ind}^{{}^L G_{j}}_{\\widehat{G}} r_0$.\n From this it is easy to deduce that $r'$ is a twist of $r$ by\n a finite character of $W^{\\mathrm{ur}}_F$ of order dividing $j$. 
Assertion (ii) follows.\n\end{proof}\n\n\n\n\n\n Each $\lambda\in X_*(A)^+$ determines $s_{\lambda,\mu}\in {\mathbb C}$ such that\n \begin{equation}\label{e:s-lambda-mu}{\mathcal{S}}^{-1}(\chi_\lambda)=\n \sum_{\mu\in X_*(A)^+} s_{\lambda,\mu} \tau^G_{\mu}\end{equation}\n where only finitely many $ s_{\lambda,\mu}$ are nonzero. In fact Theorem 1.3 of \cite{Kat82} identifies $s_{\lambda,\mu}$ with $K_{\lambda,\mu}(q^{-1})$ defined in (1.2) of that paper, cf. \S4 of \cite{Gro98}. In particular $s_{\lambda,\lambda}\neq 0$, and $s_{\lambda,\mu}= 0$ unless $\mu\le \lambda$. The following information will be useful in \S\ref{sub:test-functions}.\n\n\begin{lem}\label{l:bound-on-s}\n Let $\lambda,\mu\in X_*(A)^+$.\n Suppose that $\lambda\star_w \mu:=w(\lambda+\rho_F)- (\mu+\rho_F)$ is nontrivial for all $w\in \Omega_F$.\n For $\kappa\in X_*(A)$ let $p(\kappa)\in {\mathbb Z}_{\ge 0}$ be\n the number of tuples $(c_{\alpha^\vee})_{\alpha^\vee\in (\Phi^{\vee}_F)^+}$\n with $c_{\alpha^\vee}\in {\mathbb Z}_{\ge 0}$ such that $\sum_{\alpha^\vee} c_{\alpha^\vee}\cdot \alpha^\vee = \kappa$. 
Then\n $$|s_{\\lambda,\\mu}|\\le q^{-1}|\\Omega_F| \\max_{w\\in \\Omega_F} p(\\lambda\\star_w \\mu) .$$\n\\end{lem}\n\n\\begin{proof}\n It is easy to see from the description of $K_{\\lambda,\\mu}(q^{-1})$ in \\cite[(1.2)]{Kat82} that\n $$|K_{\\lambda,\\mu}(q^{-1})|\\le |\\Omega_F| \\max_{w\\in \\Omega_F} \\widehat{{\\mathscr P}}(w(\\lambda+\\rho_F)-(\\mu+\\rho_F);q^{-1}).$$\n The definition of $\\widehat{{\\mathscr P}}$ in \\cite[(1.1)]{Kat82} shows that\n $0\\le \\widehat{{\\mathscr P}}(\\kappa;q^{-1})\\le p(\\kappa) q^{-1}$ if $\\kappa\\neq 0$.\n\\end{proof}\n\n\n\n\\subsection{Truncated unramified Hecke algebras}\\label{sub:trun-unr-Hecke}\n\n\n\n Set $n:=\\dim T$ and $X_*(T)_{{\\mathbb R}}:=X_*(T)\\otimes_{\\mathbb Z} {\\mathbb R}$.\n Choose an ${\\mathbb R}$-basis ${\\mathcal{B}}=\\{e_1,...,e_{n}\\}$ of $X_*(T)_{\\mathbb R}$. For each $\\lambda\\in X_*(T)_{\\mathbb R}$,\n written as $\\lambda=\\sum_{i=1}^{n} a_i(\\lambda) e_i$ for unique $a_i(\\lambda)\\in {\\mathbb R}$,\n define $$|\\lambda|_{{\\mathcal{B}}}:=\\max_{1\\le i\\le n} |a_i( \\lambda)|,\n \\quad \\|\\lambda\\|_{{\\mathcal{B}}}:=\\max_{\\omega\\in \\Omega}( |\\omega \\lambda|_{{\\mathcal{B}}}).$$\n When there is no danger of confusion,\n we will simply write $|\\cdot|_{{\\mathcal{B}}}$ or even $|\\cdot|$ instead of $|\\cdot|_{{\\mathcal{B}}}$,\n and similarly for $\\|\\cdot\\|_{{\\mathcal{B}}}$.\n It is clear that $\\|\\cdot\\|_{{\\mathcal{B}}}$ is $\\Omega$-invariant and\n that $|\\lambda_1+\\lambda_2|_{{\\mathcal{B}}}\\le |\\lambda_1|_{{\\mathcal{B}}}+|\\lambda_2|_{{\\mathcal{B}}}$\n for all $\\lambda_1,\\lambda_2\\in X_*(T)$.\n When $\\kappa\\in {\\mathbb Z}_{\\ge 0}$, define \\begin{equation}\\label{e:trun-Hecke}{\\mathcal{H}}^{\\mathrm{ur}}(G)^{\\le \\kappa,{\\mathcal{B}}}:=\n \\{ {\\mathbb C}\\mbox{-subspace~of~} {\\mathcal{H}}^{\\mathrm{ur}}(G)\n \\mbox{~generated~by~}\\tau^G_\\lambda,~\\lambda\\in X_*(A)^+,~\\|\\lambda\\|_{{\\mathcal{B}}}\\le \\kappa\\}.\\end{equation}\n It is simply 
written as ${\\mathcal{H}}^{\\mathrm{ur}}(G)^{\\le \\kappa}$ when the choice of ${\\mathcal{B}}$ is clear.\n\n\n\\begin{lem}\\label{l:norm-and-bases}\n Let ${\\mathcal{B}}$ and ${\\mathcal{B}}'$ be two ${\\mathbb R}$-bases of $X_*(T)_{\\mathbb R}$. Then\nthere exist constants $c_1,c_2,B_1,B_2,B_3,B_4>0$ such that\nfor all $\\lambda\\in X_*(T)_{\\mathbb R}$,\n\\begin{enumerate}\n\\item $c_1 |\\lambda|_{{\\mathcal{B}}'} \\le |\\lambda|_{{\\mathcal{B}}}\n\\le c_2 |\\lambda|_{{\\mathcal{B}}'}$,\n\\item $B_1 |\\lambda|_{{\\mathcal{B}}} \\le \\|\\lambda\\|_{{\\mathcal{B}}}\n\\le B_2 |\\lambda|_{{\\mathcal{B}}}$ for all $ \\lambda\\in X_*(T)_{\\mathbb R}$,\n\\item $B_3 \\|\\lambda\\|_{{\\mathcal{B}}'} \\le \\|\\lambda\\|_{{\\mathcal{B}}}\n\\le B_4 \\|\\lambda\\|_{{\\mathcal{B}}'}$ for all $ \\lambda\\in X_*(T)_{\\mathbb R}$ and\n\\item ${\\mathcal{H}}^{\\mathrm{ur}}(G)^{\\le B_4^{-1}\\kappa,{\\mathcal{B}}'}\\subset {\\mathcal{H}}^{\\mathrm{ur}}(G)^{\\le \\kappa,{\\mathcal{B}}}\\subset {\\mathcal{H}}^{\\mathrm{ur}}(G)^{\\le B_3^{-1}\\kappa,{\\mathcal{B}}'}$.\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\n Let us verify (i).\n As the roles of ${\\mathcal{B}}$ and ${\\mathcal{B}}'$ can be changed,\n it suffices to prove the existence of $c_2$.\n For this, it suffices to take $c_2=\\sup_{|\\lambda|_{{\\mathcal{B}}}\\le 1} |\\lambda|_{{\\mathcal{B}}'}$.\n The latter is finite since $|\\cdot|_{{\\mathcal{B}}'}$ is a continuous function\n on the set of $\\lambda$ such that $|\\lambda|_{{\\mathcal{B}}}\\le 1$, which\n is compact.\n Part (ii) is obtained by\n applying the lemma to the bases ${\\mathcal{B}}'=\\omega {\\mathcal{B}}$ for all $\\omega\\in \\Omega$.\n Let us check (iii). Let $B_1,B_2>0$ (resp. $B'_1,B'_2>0$)\n be the constants of (ii) for the basis ${\\mathcal{B}}$ (resp. 
${\mathcal{B}}'$).\n Then\n $$ c_1B_1(B'_2)^{-1} \|\lambda\|_{{\mathcal{B}}'} \le c_1B_1 |\lambda|_{{\mathcal{B}}'} \le B_1 |\lambda|_{{\mathcal{B}}} \le \|\lambda\|_{{\mathcal{B}}}$$\n and similarly $\|\lambda\|_{{\mathcal{B}}}\le c_2B_2(B'_1)^{-1}\|\lambda\|_{{\mathcal{B}}'}$.\n Finally (iv) immediately follows from (iii).\n\end{proof}\n\n It is natural to ask whether the definition of truncation in \eqref{e:trun-Hecke} changes if the basis $\{\tau^G_\lambda\}$ is replaced with $\{{\mathcal{S}}^{-1}(\tau^A_\lambda)\}$ or $\{{\mathcal{S}}^{-1}(\chi_\lambda)\}$. We assert that it changes very little, in the sense that the effect on $\kappa$ is bounded by a multiplicative constant independent of $\kappa$. To ease the statement define ${\mathcal{H}}_i^{\mathrm{ur}}(G)^{\le \kappa,{\mathcal{B}}}$ for $i=1$ (resp. $i=2$) to be the ${\mathbb C}$-subspace of ${\mathcal{H}}^{\mathrm{ur}}(G)$ generated by ${\mathcal{S}}^{-1}(\tau^A_\lambda)$ (resp. ${\mathcal{S}}^{-1}(\chi_\lambda)$) for $\lambda\in X_*(A)^+$ with $\|\lambda\|_{{\mathcal{B}}}\le \kappa$.\n\n\n\begin{lem}\label{l:three-truncations}\n There exists a constant $C\ge 1$ such that for every $\kappa\in {\mathbb Z}_{\ge 0}$ and\n for any distinct $i,j\in \{\emptyset, 1,2\}$,\n $${\mathcal{H}}_i^{\mathrm{ur}}(G)^{\le \kappa,{\mathcal{B}}}\subset {\mathcal{H}}_j^{\mathrm{ur}}(G)^{\le C\kappa,{\mathcal{B}}}.$$\n\end{lem}\n\n\begin{proof}\n It is enough to prove the lemma for a particular choice of ${\mathcal{B}}$ by Lemma \ref{l:norm-and-bases}.\n So we may assume that ${\mathcal{B}}$ extends the set of simple coroots in $\Phi^\vee$ by an arbitrary basis of $X_*(Z(G))_{{\mathbb R}}$. 
Again by Lemma \\ref{l:norm-and-bases} the proof will be done if we show that each of the following generates the same ${\\mathbb C}$-subspace:\n\\begin{enumerate}\n\\item the set of $\\tau^G_{\\lambda}$ for $\\lambda\\in X_*(A)^+$ with $|\\lambda|_{{\\mathcal{B}}}\\le \\kappa$,\n\\item the set of ${\\mathcal{S}}^{-1}(\\tau^A_\\lambda)$ for $\\lambda\\in X_*(A)^+$ with $|\\lambda|_{{\\mathcal{B}}}\\le \\kappa$,\n\\item the set of ${\\mathcal{S}}^{-1}(\\chi_\\lambda)$ for $\\lambda\\in X_*(A)^+$ with $|\\lambda|_{{\\mathcal{B}}}\\le \\kappa$.\n\\end{enumerate}\n It suffices to show that the matrices representing the change of bases are ``upper triangular'' in the sense that the $(\\lambda,\\lambda)$ entries are nonzero and $(\\lambda,\\mu)$ entries are zero unless $\\lambda\\ge \\mu$. (Note that $\\lambda\\ge \\mu$ implies $|\\lambda|_{{\\mathcal{B}}}\\ge |\\mu|_{{\\mathcal{B}}}$ by the choice of ${\\mathcal{B}}$.) We have remarked below \\eqref{e:chi_lambda} that $s_{\\lambda,\\mu}$'s have this property, accounting for (i)$\\leftrightarrow$(iii). For (ii)$\\leftrightarrow$(iii) the desired property can be seen directly from \\eqref{e:chi_lambda} by writing $\\chi_\\lambda$ in terms of $\\tau^A_\\mu$'s.\n\n\\end{proof}\n\n\\subsection{The case of $\\GL_d$}\\label{sub:case-of-GL_d}\n\n The case $G=\\GL_d$ is considered in this subsection. Let $A=T$ be the diagonal maximal torus and $B$ the group of upper triangular matrices. 
For $1\\le i\\le d$,\n take $Y_i\\in X_*(A)$ to be $y\\mapsto {\\rm diag}(1,...,1,y,1,...,1)$ with $y$ in the $i$-th place.\n One can naturally identify $X_*(A)\\simeq {\\mathbb Z}^d$ such that the images of\n $Y_i$ form the standard basis of ${\\mathbb Z}^d$.\n Then $\\Omega_F$ is isomorphic to ${\\mathscr S}_d$, the symmetric group in $d$ variables acting on $\\{Y_1,...,Y_d\\}$ via permutation of indices.\n We have the Satake isomorphism\n $${\\mathcal{S}}:{\\mathcal{H}}^{\\mathrm{ur}}(\\GL_d)\\stackrel{\\sim}{\\ra} {\\mathcal{H}}^{\\mathrm{ur}}(T)^{\\Omega_F}\\simeq {\\mathbb C}[Y_1^{\\pm},...,Y_d^{\\pm}]^{{\\mathscr S}_d}.$$\n For an alternative description let us introduce standard symmetric polynomials $X_1,...,X_d$ by the equation in a formal $Z$-variable $(Z-Y_1)\\cdots (Z-Y_d)=Z^d-X_1Z^{d-1}+\\cdots + (-1)^{d} X_d$. Then $${\\mathbb C}[Y_1^{\\pm},...,Y_d^{\\pm}]^{{\\mathscr S}_d}={\\mathbb C}[X_1,...,X_{d-1},X_d^{\\pm}].$$\n Let $\\kappa\\in {\\mathbb Z}_{\\ge 0}$. Define ${\\mathcal{H}}^{\\mathrm{ur}}(\\GL_d)^{\\le \\kappa}$, or simply ${\\mathcal{H}}^{\\le \\kappa}_d$, to be\n the preimage under ${\\mathcal{S}}$ of the ${\\mathbb C}$-vector space generated by $$\\{\\sum_{\\sigma\\in {\\mathscr S}_d} Y_{\\sigma(1)}^{a_1}Y_{\\sigma(2)}^{a_2}\\cdots Y_{\\sigma(d)}^{a_d}: a_1,...,a_d\\in [-\\kappa,\\kappa]\\}.$$ The following is standard (cf. \\cite{Gro98}).\n\n\\begin{lem} Let $r\\in {\\mathbb Z}_{\\ge 1}$. Let $\\lambda_r:=(r,0,0,...,0)\\in X_*(A)^+$. 
Then\n $${\\mathcal{S}}^{-1}(Y_1^r+\\cdots+Y_d^r)=\\sum_{\\mu\\in X_*(A)^+\\atop\\mu\\le \\lambda_r} c_{\\lambda_r,\\mu}\\cdot \\tau^G_\\mu $$\n for $c_{\\lambda_r,\\mu}\\in {\\mathbb C}$ with $c_{\\lambda_r,\\lambda_r}=q^{r(1-d)\/2}$,\n where the sum runs over the set of $\\mu\\in X_*(T)^+$ such that $\\mu\\le \\lambda_r$.\n In particular,\n \\begin{eqnarray}\n {\\mathcal{S}}^{-1}(Y_1+\\cdots+Y_d)&=&q^{(1-d)\/2}\\tau^{G}_{(1,0,...,0)},\\nonumber\\\\\n {\\mathcal{S}}^{-1}(Y_1^2+\\cdots+Y_d^2)&=&q^{1-d}(\\tau^{G}_{(2,0,...,0)}+(1-q)\\tau^G_{(1,1,0,...,0)}).\n \\nonumber\n \\end{eqnarray}\n\n\\end{lem}\n\n\n\n\\subsection{$L$-morphisms and unramified Hecke algebras}\\label{sub:L-mor-unr-Hecke}\n\n\n Assume that $H$ and $G$ are unramified groups over $F$.\n Let $\\eta:{}^L H\\rightarrow {}^L G$ be an unramified $L$-morphism, which means that\n it is inflated from some $L$-morphism ${}^L H^{\\mathrm{ur}}\\rightarrow {}^L G^{\\mathrm{ur}}$\n (the notion of $L$-morphism for the latter is defined as in \\S\\ref{sub:L-groups}).\n There is a canonically induced\n map ${\\rm ch}({}^L G^{\\mathrm{ur}})\\rightarrow {\\rm ch}({}^L H^{\\mathrm{ur}})$.\n Via \\eqref{e:Satake} and \\eqref{e:Satake2}, the latter map gives rise to a ${\\mathbb C}$-algebra map\n $\\eta^*: {\\mathcal{H}}^{\\mathrm{ur}}(G)\\rightarrow {\\mathcal{H}}^{\\mathrm{ur}}(H)$.\n\n We apply the above discussion to an unramified representation\n $$r:{}^L G\\rightarrow \\GL_d({\\mathbb C}).$$ Viewing $r$ as an $L$-morphism ${}^L G\\rightarrow {}^L \\GL_d$,\n we obtain\n $$r^*:{\\mathcal{H}}^{\\mathrm{ur}}(\\GL_d)\\rightarrow {\\mathcal{H}}^{\\mathrm{ur}}(G).$$\n\n\n\\begin{lem}\\label{l:bound-degree-Satake}\n Let ${\\mathcal{B}}$ be an ${\\mathbb R}$-basis of $X_*(T)_{\\mathbb R}$.\n There exists a constant $\\beta>0$ (depending on ${\\mathcal{B}}$, $d$ and $r$) such that for all $\\kappa\\in {\\mathbb Z}_{\\ge 0}$,\n $r^*({\\mathcal{H}}^{\\mathrm{ur}}(\\GL_d)^{\\le \\kappa})\\subset 
{\\mathcal{H}}^{\\mathrm{ur}}(G)^{\\le \\beta\\kappa,{\\mathcal{B}}}$ .\n\\end{lem}\n\n\\begin{proof}\n Thanks to Lemma \\ref{l:norm-and-bases}, it is enough to deal with a particular choice of\n ${\\mathcal{B}}$. Choose ${\\mathcal{B}}$ by extending the set $\\Delta^{\\vee}$ of simple coroots,\n and write ${\\mathcal{B}}=\\Delta^{\\vee}\\coprod {\\mathcal{B}}_0$.\n We begin by proving the following claim: let $\\lambda_1,\\lambda_2\\in X_*(A)^+$ and expand\n the convolution product\n \\begin{equation*}\n \n \\tau^G_{\\lambda_1}*\\tau^G_{\\lambda_2}=\\sum_{\\mu} a^{\\mu}_{\\lambda_1,\\lambda_2}\n \\tau^G_\\mu\n \\end{equation*}\n where only $\\mu\\in X_*(A)^+$ such that $\\mu\\le_{\\mathbb R} \\lambda_1+\\lambda_2$ contribute (cf. \\cite[p.148]{Car79}).\n Only finitely many terms are nonzero.\n Then the claim is that $$|\\mu|_{{\\mathcal{B}}}\\le |\\lambda_1+\\lambda_2|_{{\\mathcal{B}}}, \\quad\\mbox{whenever}~a^{\\mu}_{\\lambda_1,\\lambda_2}\\neq 0.$$\n To check the claim, consider $\\mu=\\sum_{e\\in {\\mathcal{B}}} a_e(\\mu) \\cdot e $ and $\\lambda_1+\\lambda_2\n = \\sum_{e\\in {\\mathcal{B}}} a_e(\\lambda_1+\\lambda_2) \\cdot e$, where the coefficients are in ${\\mathbb R}$.\n The conditions $\\mu\\le_{{\\mathbb R}} \\lambda_1+\\lambda_2$ and $\\mu\\in X_*(T)_{{\\mathbb R},+}$\n imply that $a_e(\\mu)=a_e(\\lambda_1+\\lambda_2)$ if $e\\in {\\mathcal{B}}_0$ and\n $0\\le a_e(\\mu)\\le a_e(\\lambda_1+\\lambda_2)$ if $e\\in \\Delta^{\\vee}$.\n Hence $|\\mu|_{{\\mathcal{B}}}\\le |\\lambda_1+\\lambda_2|_{{\\mathcal{B}}}$.\n\n\n\n We are ready to prove the lemma. 
It is explained in Lemma \ref{l:three-truncations} and the remark below it that there exists a constant $\beta_1>0$ which is independent of $\kappa$ such that every $\phi\in {\mathcal{H}}^{\mathrm{ur}}(\GL_d)^{\le \kappa}$ can be written as a ${\mathbb C}$-linear combination of\n $$\sum_{\sigma\in {\mathscr S}_d} Y_{\sigma(1)}^{a_1}Y_{\sigma(2)}^{a_2}\cdots Y_{\sigma(d)}^{a_d},\quad a_1,...,a_d\in [-\beta_1\kappa,\beta_1\kappa].$$\n Each element above can be rewritten in terms of the symmetric polynomials $X_i$'s of \S\ref{sub:case-of-GL_d}: First, $X_d^{\beta_1\kappa}$ times $\sum_{\sigma\in {\mathscr S}_d} Y_{\sigma(1)}^{a_1}Y_{\sigma(2)}^{a_2}\cdots Y_{\sigma(d)}^{a_d}$ is a symmetric polynomial of degree $\le 2\beta_1\kappa$, which in turn is a polynomial in $X_1,...,X_d$ of degree $\le 2\beta_1\kappa$. We conclude that every $\phi\in {\mathcal{H}}^{\mathrm{ur}}(\GL_d)^{\le \kappa}$ is in the span of monomials\n \begin{equation}\label{e:monomials}X_1^{b_1}X_2^{b_2}\cdots X_d^{b_d},\quad b_1,...,b_d\in [-2\beta_1\kappa,2\beta_1\kappa].\end{equation}\n For each $1\le i\le d$, write $r^*(X_i)$ (resp. $r^*(X_i^{-1})$) as a linear combination of $\tau^G_{\lambda_{i,j}}$ (resp. $\tau^G_{\lambda^-_{i,j}}$) with nonzero coefficients. Define $\beta_0$ to be the maximum among all possible $|\lambda_{i,j}|$ and $|\lambda^-_{i,j}|$. By the above claim, $r^*(X_1^{b_1}X_2^{b_2}\cdots X_d^{b_d})$ as in \eqref{e:monomials} is in the ${\mathbb C}$-span of $\tau^G_\mu$ satisfying\n $$|\mu|_{{\mathcal{B}}}\le (|b_1|+\cdots + |b_d|)\beta_0\le 2d\beta_0\beta_1\kappa.$$\n So the above span contains $r^*(\phi)$ for $\phi\in {\mathcal{H}}^{\mathrm{ur}}(\GL_d)^{\le \kappa}$. By Lemma\n \ref{l:norm-and-bases} there exists a constant $B_2>0$ such that $\|\mu\|_{{\mathcal{B}}}\le B_2|\mu|_{{\mathcal{B}}}$ for every $\mu\in X_*(T)$. 
Hence the lemma holds true with $\\beta:=2B_2d\\beta_0\\beta_1$.\n\n\n\\end{proof}\n\n The map $r$ also induces a functorial transfer for unramified representations\n\\begin{equation}\\label{e:r_*} r_*:{\\rm Irr}^{\\mathrm{ur}}(G(F))\\rightarrow {\\rm Irr}^{\\mathrm{ur}}(\\GL_d(F))\\end{equation}\n uniquely characterized by ${\\rm tr}\\, r_*(\\pi)(\\phi)={\\rm tr}\\, \\pi(r^*\\phi)$ for all $\\pi\\in {\\rm Irr}^{\\mathrm{ur}}(G(F))$ and\n$\\phi\\in {\\mathcal{H}}^{\\mathrm{ur}}(\\GL_d(F))$.\n\n\\subsection{Partial Satake transform}\n\n Keep the assumption that $G$ is unramified over $F$.\n Let $P$ be an $F$-rational parabolic subgroup of $G$ with Levi $M$\n and unipotent radical $N$ such that $B=TU$ is contained in $P$.\n Let $\\Omega_M$ (resp. $\\Omega_{M,F}$) denote the absolute\n (resp. $F$-rational) Weyl group for $(M,T)$.\n A partial Satake transform is defined as (cf. \\eqref{e:Satake-formula})\n \\[\n \n {\\mathcal{S}}^G_M:{\\mathcal{H}}^{\\mathrm{ur}}(G)\\rightarrow {\\mathcal{H}}^{\\mathrm{ur}}(M),\n \\quad f\\mapsto \\left(m\\mapsto \\delta_P(m)^{1\/2} \\int_{N} f(mn)dn\\right)\n \\]\n It is well known that ${\\mathcal{S}}^G={\\mathcal{S}}^M\\circ {\\mathcal{S}}^G_M$. 
More concretely, ${\\mathcal{S}}^G_M$ is the\n canonical inclusion ${\\mathbb C}[X_*(A)]^{\\Omega_{M,F}}\\hookrightarrow {\\mathbb C}[X_*(A)]^{\\Omega_{F}}$ if\n ${\\mathcal{H}}^{\\mathrm{ur}}(M)$ and ${\\mathcal{H}}^{\\mathrm{ur}}(G)$ are identified with the source and the target via ${\\mathcal{S}}^G$ and ${\\mathcal{S}}^M$, respectively.\n Since $T$ is a common maximal torus of $M$ and $G$, an ${\\mathbb R}$-basis ${\\mathcal{B}}$ of $X_*(T)_{\\mathbb R}$\n determines truncations on ${\\mathcal{H}}^{\\mathrm{ur}}(M)$ and ${\\mathcal{H}}^{\\mathrm{ur}}(G)$.\n\n\\begin{lem}\n For any $\\kappa\\in {\\mathbb Z}_{\\ge 0}$,\n ${\\mathcal{S}}^G_M({\\mathcal{H}}^{\\mathrm{ur}}(G)^{\\le \\kappa,{\\mathcal{B}}})\\subset {\\mathcal{H}}^{\\mathrm{ur}}(M)^{\\le \\kappa,{\\mathcal{B}}}$.\n\\end{lem}\n\n\\begin{proof}\n It is enough to note that $\\|\\lambda\\|_{{\\mathcal{B}},M}\\le \\|\\lambda\\|_{{\\mathcal{B}},G}$ for all $\\lambda\\in X_*(A)$, which holds\n since the $\\Omega_{M}$-orbit of $\\lambda$ is contained in the $\\Omega$-orbit of $\\lambda$.\n\\end{proof}\n\n\n\\begin{rem}\n Let $\\eta:{}^L M\\rightarrow {}^L G$ be the embedding of \\cite[\\S3]{Bor79}, well defined up to $\\widehat{G}$-conjugacy.\n Then ${\\mathcal{S}}^G_M$ coincides with $\\eta^*:{\\mathcal{H}}^{\\mathrm{ur}}(G)\\rightarrow {\\mathcal{H}}^{\\mathrm{ur}}(M)$ of \\S\\ref{sub:L-mor-unr-Hecke}\n\\end{rem}\n\n\n\\subsection{Some explicit test functions}\\label{sub:test-functions}\n\n Assume that $r:{}^L G=\\widehat{G}\\rtimes W_F\\rightarrow \\GL_d({\\mathbb C})$ is\n an \\emph{irreducible} representation arising from\n an unramified $L$-morphism ${}^L G^{\\mathrm{ur}} \\rightarrow {}^L \\GL_d^{\\mathrm{ur}}$\n such that $r(W_F)$ is relatively compact.\n For later applications it is useful to study the particular element\n $r^*(Y_1+\\cdots+Y_d)$ in ${\\mathcal{H}}^{\\mathrm{ur}}(G)$.\n\n\n\\begin{lem}\\label{l:bound-1st-moment}\n Let $\\phi=r^*(Y_1+\\cdots+Y_d)$. 
Then\n\\begin{enumerate}\n\\item Suppose that $r:{}^L G^{\\mathrm{ur}}\\rightarrow \\GL_d({\\mathbb C})$ does not factor through\n $W^{\\mathrm{ur}}_F$ (or equivalently that $r|_{\\widehat{G}}$ is not the trivial representation).\n Then $$|\\phi(1)|\\le\n|\\Omega_F| \\max_{w\\in \\Omega_F} p(\\lambda\\star_w 0)\\cdot q^{-1}.$$\n\\item Suppose that $r|_{\\widehat{G}}$ is trivial. Then\n $\\phi(1)=r({\\rm Fr})$.\n\\end{enumerate}\n\\end{lem}\n\n\n\\begin{proof}\n Let us do some preparation. By twisting $r$ by an unramified unitary character of $W_F$ (viewed as a character of $^L G$) we may assume that $r={\\rm Ind}^{^L G_j}_{\\widehat{G}} r_0$ for some irreducible representation $r_0$ of $\\widehat{G}$, cf. the proof of Lemma \\ref{l:Kostant}.(ii). Let $\\lambda_0$ be the highest weight of $r_0$ and define $\\lambda\\in X_*(A)^+$ as in the paragraph preceding Lemma \\ref{l:Kostant}.\n The lemma tells us that ${\\mathcal{S}}(\\phi)=\\zeta \\chi_\\lambda\\in {\\mathbb C}[X_*(A)]^{\\Omega_F}$ with $|\\zeta|=1$.\n\nIn the case of (ii), $r$ is just an unramified unitary character of $W_F$ (with $d=1$), and it is easily seen that $\\chi_\\lambda=\\tau^A_0$, $\\zeta=r({\\rm Fr})$, and so $\\phi(1)=r({\\rm Fr})$.\n Let us put ourselves in the case (i) so that $\\lambda\\neq 0$.\nNote that $\\phi(1)$ is just the coefficient of $\\tau^G_0$\nwhen $\\phi=\\zeta {\\mathcal{S}}^{-1}(\\chi_\\lambda)$\nis written with respect to the basis $\\{\\tau^G_\\mu\\}$.\n Such a coefficient equals $\\zeta \\cdot s_{\\lambda,0}$ according to \\eqref{e:s-lambda-mu}, so\n $|\\phi(1)|= | s_{\\lambda,0}|$.\nNow Lemma \\ref{l:bound-on-s} concludes the proof. 
(Observe\nthat $\lambda\star_w 0\neq 0$ whenever $0\neq \lambda\in X_*(A)^+$.)\n\end{proof}\n\n\subsection{Examples in the split case}\nWhen $G$ is split,\n it is easy to see that ${\mathbb C}[{\rm ch}({}^L G^{\mathrm{ur}})]$ is canonically identified with ${\mathbb C}[{\rm ch}(\widehat{G})]$\n which is generated by finite dimensional characters in the space of functions on $\widehat{G}$.\n So we may use ${\mathbb C}[{\rm ch}(\widehat{G})]$ in place of ${\mathbb C}[{\rm ch}({}^L G^{\mathrm{ur}})]$.\n\n\begin{ex}\label{ex:functions-Sp_2n}(When $G=Sp_{2n}$, $n\ge1$)\n\n Take $r:\widehat{G}=SO_{2n+1}({\mathbb C})\hookrightarrow \GL_{2n+1}({\mathbb C})$ to be the standard representation.\n Then $$Y_1+\cdots+Y_{2n+1}={\rm tr}\,({\rm Std})\in {\mathbb C}[{\rm ch}(\GL_{2n+1})]$$\n is mapped to ${\rm tr}\,(r)\in {\mathbb C}[{\rm ch}(SO_{2n+1})]$ and\n $$Y^2_1+\cdots+Y^2_{2n+1}={\rm tr}\,({\rm Sym}^2({\rm Std})-\wedge^2({\rm Std}))\in {\mathbb C}[{\rm ch}(\GL_{2n+1})]$$\n is mapped to ${\rm tr}\,({\rm Sym}^2(r)-\wedge^2(r))\in {\mathbb C}[{\rm ch}(SO_{2n+1})]$.\n Writing $V$ for the standard representation of $\widehat{G}$, ${\rm Sym}^2(V)$ breaks into ${\mathbb C}$ and an irreducible representation of\n $\widehat{G}$ of highest weight $(2,0,...,0)$ in the standard parametrization.\n When $n>1$, $\wedge^2(V)$ is irreducible of highest weight $(1,1,0,...,0)$.\n When $n=1$, $\wedge^2(V)\simeq V^\vee$, i.e. isomorphic to $({\rm Std})^\vee$. (See\n \cite[\S19.5]{FH91}.) Let us systematically write $\Lambda_\lambda$ for the irreducible representation\n of $SO_{2n+1}$ with highest weight $\lambda$. Then\n \begin{eqnarray}\n r^*(Y_1+\cdots+Y_{2n+1})&=&{\rm tr}\, \Lambda_{(1,0,...,0)},\label{e:Satake-Sp2n}\\\n r^*(Y_1^2+\cdots+Y_{2n+1}^2)\n & =&{\rm tr}\,( {\mathbb C}+\Lambda_{(2,0,...,0)}-\Lambda_{(1,1,0,...,0)}).\nonumber\n \end{eqnarray}\n if $n\ge 2$. 
If $n=1$, the same is true if $\\Lambda_{(1,1,0,...,0)}$ is replaced with\n $\\Lambda_{(-1)}$.\n For $i=1,2$, define\n $$\\phi^{(i)}:={\\mathcal{S}}^{-1}(r^*(Y^i_1+\\cdots+Y^i_{2n+1})).$$\n Then one computes\n \\begin{eqnarray}\n \\phi^{(1)}&=& q^{\\frac{1-2n}{2}} {\\mathbf{1}}_{K \\mu_{(1,0,...,0)}(\\varpi_v)K} ,\\nonumber\\\\\n \\phi^{(2)}\n & =& {\\mathbf{1}}_K+q^{1-2n} {\\mathbf{1}}_{K\\mu_{(2,0,...,0)}(\\varpi_v)K}\n - q^{1-2n}(q-1){\\mathbf{1}}_{K\\mu_{(1,1,0,...,0)}(\\varpi_v)K}.\\nonumber\n \\end{eqnarray}\n where $\\mu_{\\lambda}$ is the cocharacter of a maximal torus given by $\\lambda$ in the standard parametrization.\n In particular, $\\phi^{(1)}(1)=0$ and $\\phi^{(2)}(1)=1$.\n\\end{ex}\n\n\n\\begin{ex}(When $G=SO_{2n}$, $n\\ge 2$)\n\n Take $r:\\widehat{G}=SO_{2n}({\\mathbb C})\\hookrightarrow \\GL_{2n}({\\mathbb C})$ to be the standard representation.\n Similarly as before, ${\\rm Sym}^2(V)$ breaks into ${\\mathbb C}$ and an irreducible representation of\n $\\widehat{G}$ of highest weight $(2,0,...,0)$.\n When $n>1$, $\\wedge^2(V)$ is irreducible of highest weight $(1,1,0,...,0)$.\n When $n=1$, $\\wedge^2(V)\\simeq {\\mathbb C}$. (See \\cite[\\S19.5]{FH91}.) 
The same\n formulas as \eqref{e:Satake-Sp2n} hold in this case.\n Defining \begin{equation}\label{e:Satake-SO2n}\phi^{(i)}:={\mathcal{S}}^{-1}(r^*(Y^i_1+\cdots+Y^i_{2n})),\end{equation} we\n can compute $\phi^{(1)}$, $\phi^{(2)}$ and see that\n $\phi^{(1)}(1)=0$ and $\phi^{(2)}(1)=1$.\n\end{ex}\n\n\begin{ex}(When $G=SO_{2n+1}$)\n\n Take $r:\widehat{G}=Sp_{2n}({\mathbb C})\hookrightarrow \GL_{2n}({\mathbb C})$ to be the standard representation.\n Then $$Y_1+\cdots+Y_{2n}={\rm tr}\,({\rm Std})\in {\mathbb C}[{\rm ch}(\GL_{2n})]$$\n is mapped to ${\rm tr}\,(r)\in {\mathbb C}[{\rm ch}(Sp_{2n})]$ and\n $$Y^2_1+\cdots+Y^2_{2n}={\rm tr}\,({\rm Sym}^2({\rm Std})-\wedge^2({\rm Std}))\in {\mathbb C}[{\rm ch}(\GL_{2n})]$$\n is mapped to ${\rm tr}\,({\rm Sym}^2(r)-\wedge^2(r))\in {\mathbb C}[{\rm ch}(Sp_{2n})]$.\n If $n\ge 2$ then $\wedge^2(V)$ breaks into ${\mathbb C}$ and an irreducible representation of\n $\widehat{G}$ of highest weight $(1,1,0,...,0)$.\n (See \cite[\S17.3]{FH91}.) 
We have\n \begin{eqnarray*}\n r^*(Y_1+\cdots+Y_{2n})&=&{\rm tr}\, \Lambda_{(1,0,...,0)},\label{e:Satake-SO2n+1}\\\n r^*(Y_1^2+\cdots+Y_{2n}^2)\n & =&{\rm tr}\,( \Lambda_{(2,0,...,0)}-\Lambda_{(1,1,0,...,0)}-{\mathbb C} ).\nonumber\n \end{eqnarray*}\n As in Example \ref{ex:functions-Sp_2n}, $\Lambda$ designates a highest weight representation\n (now of $Sp_{2n}$).\n Define $\phi^{(i)}$ as in \eqref{e:Satake-SO2n}.\n By a similar computation as above, $\phi^{(1)}(1)=0$, $\phi^{(2)}(1)=-1$.\n\end{ex}\n\n\n\n\n\n\subsection{Bounds for truncated unramified Hecke algebras}\n\n Let $F$, $G$, $A$, $T$ and $K$ be as in \S\ref{sub:Satake-trans}.\n Throughout this subsection,\n an ${\mathbb R}$-basis ${\mathcal{B}}$ of $X_*(T)_{{\mathbb R}}$ will be fixed once and for all.\n Denote by $\rho\in X^*(T)\otimes_{\mathbb Z} \frac{1}{2}{\mathbb Z}$\n half the sum of all $\alpha\in \Phi^+$.\n\n\begin{lem}\label{l:double-coset-volume}\n For any $\mu\in X_*(A)$,\n $[K\mu(\varpi)K:K]\le q^{d_G+r_G+\langle \rho,\mu\rangle}$.\n\end{lem}\n\n\begin{proof}\n Let $\operatorname{vol}$ denote the volume for the Haar measure on $G(F)$ such that\n $\operatorname{vol}(K)=1$.\n Let $I\subset K$ be an Iwahori subgroup of $G(F)$.\n Then $I=(I\cap U)(I\cap T)(I\cap \overline{U})$.\n We follow the argument of \cite[pp.241-242]{Wal03}, freely using his notation.\n Our $I$, $U$, $\overline{U}$, and $T$ will play the roles of his $H$, $U_0$, $\overline{U}_0$ and $M_0$, respectively.\n For all $m\in \overline{M}_0^+$ (in his notation), it is not hard to verify that\n $c'_{U_0}(m)=c_{\overline{U}_0}(m)=c_{M_0}(m)=1$.\n Then Waldspurger's argument shows\n $$\operatorname{vol}(K\mu(\varpi)K)\le [K:I]^2 \operatorname{vol}(I\mu(\varpi)I)\n \le [K:I]^2 q^{\langle \rho,\mu\rangle} \operatorname{vol}(I) = [K:I] q^{\langle \rho,\mu\rangle} .$$\n Finally observe that $[K:I]\le |G(\mathbb{F}_q)|\le 
q^{d_G}(1+\\frac{1}{q})^{r_G}\\le q^{d_G+r_G}$.\n(The middle inequality is easily derived from Steinberg's formula. cf. \\cite[(3.1)]{Gro97}.)\n\\end{proof}\n\n The following lemma will play a role in studying the level aspect in Section \\ref{s:aut-Plan-theorem}.\n\n\n\\begin{lem}\\label{l:bounding-phi-on-S_0}\n Let $M$ be an $F$-rational Levi subgroup of $G$.\n There exists a $b_G> 0$ (depending only on $G$)\n such that for all $\\kappa\\in {\\mathbb Z}_{>0}$\n and all $\\phi\\in {\\mathcal{H}}^{\\mathrm{ur}}(G)^{\\le \\kappa,{\\mathcal{B}}}$ such that\n $|\\phi|\\le 1$, we have\n $|\\phi_{M}(1)|= O(q^{d_G+r_G+b_G\\kappa})$ (the implicit constant being independent of\n $\\kappa$ and $\\phi$).\n\\end{lem}\n\n\\begin{proof}\n When $M=G$, the lemma is obvious (with $b_G=0$). Henceforth we assume\n that $M\\subsetneq G$. In view of Lemma \\ref{l:norm-and-bases},\n it suffices to treat one ${\\mathbb R}$-basis ${\\mathcal{B}}$. Fix a ${\\mathbb Z}$-basis $\\{e_1,...,e_{\\dim A}\\}$ of $X_*(A)$,\n and choose any ${\\mathcal{B}}$ which extends that ${\\mathbb Z}$-basis.\n It is possible to write\n $$\\phi=\\sum_{\\|\\mu\\|\\le \\kappa} a_\\mu\\cdot {\\mathbf{1}}_{K\\mu(\\varpi)K}$$\n for $|a_\\mu|\\le 1$. 
Thus\n $$|\\phi_{M}(1)|=\\left|\\int_{N(F)} \\phi(n)dn\\right|\n \\le \\sum_{\\|\\mu\\|\\le \\kappa}\n \\left| \\int_{N(F)}{\\mathbf{1}}_{K\\mu(\\varpi)K}(n)dn \\right|.$$\n For each $\\mu$, $K\\mu(\\varpi)K$ is partitioned into left $K$-cosets.\n On each coset $\\gamma K$, $$\\left|\\int_{N(F)} {\\mathbf{1}}_{\\gamma K}(n) dn\\right|\\le \\operatorname{vol}(K\n \\cap N(F))=1.$$ Hence, together with Lemma \\ref{l:double-coset-volume},\n $$|\\phi_{M}(1)|\\le \\sum_{\\|\\mu\\|\\le \\kappa} [K\\mu(\\varpi)K:K]\\le \\sum_{\\|\\mu\\|\\le \\kappa}q^{d_G+r_G+\\langle \\rho,\\mu\\rangle} .$$\n Write $b_{0}$ for the maximum of $|\\langle \\rho,e_i\\rangle|$ for $i=1,...,\\dim A$.\n Take $b_G:=b_{0}\\dim A+2 \\dim A$.\n If $\\|\\mu\\|\\le \\kappa$ then\n $\\mu=\\sum_{i=1}^{\\dim A} a_ie_i$ for $a_i\\in{\\mathbb Z}$ with $-\\kappa\\le a_i\\le \\kappa$.\n Hence the right hand side is bounded by\n $(2\\kappa+1)^{\\dim A} q^{d_G+r_G+b_{0}\\kappa\\dim A}\\le q^{d_G+r_G+b_G\\kappa}$\n since $2\\kappa+1\\le 2^{2\\kappa}\\le q^{2\\kappa}$.\n\n\n\\end{proof}\n\n An elementary matrix computation shows the bound below, which will be used several times.\n\n\\begin{lem}\\label{l:control-eigenvalue}\n Let $s={\\rm diag}(s_1,...,s_m)\\in \\GL_m(\\overline{F}_v)$\n and $u=(u_{ij})_{i,j=1}^{m}\\in \\GL_m(\\overline{F}_v)$.\n Define $v_{\\min}(u):=\\min_{i,j} v(u_{ij})$ and\nsimilarly $v_{\\min}(u^{-1})$.\n Then for any eigenvalue $\\lambda$ of $su\\in \\GL_m(F_v)$,\n $$v(\\lambda)\\in [v_{\\min}(u)+\\min_{i} v(s_i),-v_{\\min}(u^{-1})+\\max_{i} v(s_i)].$$\n\\end{lem}\n\n\\begin{rem}\n The lemma will be typically applied when $u\\in \\GL_m(\\overline{\\mathcal{O}}_v)$ where\n $\\overline{\\mathcal{O}}_v$ is the integer ring of $\\overline{F}_v$. 
In this case\n $v_{\\min}(u)=v_{\\min}(u^{-1})=0$.\n\\end{rem}\n\n\\begin{proof}\n Let $V$ be the underlying $\\overline{F}_v$-vector space with\n standard basis $\\{e_1,...,e_m\\}$.\n Let ${\\mathcal{B}}_j=\\{\\vec{i}=(i_1,...,i_j)|1\\le i_1<\\cdots<i_j\\le m\\}$, so that\n $\\{e_{i_1}\\wedge \\cdots \\wedge e_{i_j}\\}_{\\vec{i}\\in {\\mathcal{B}}_j}$ is a basis of $\\wedge^j V$.\n Write the characteristic polynomial of $su$ as $X^m+c_1X^{m-1}+\\cdots+c_m$, so that, up to sign,\n $c_j$ is the trace of $\\wedge^j(su)$ on $\\wedge^j V$.\n In the basis above, each matrix entry of $\\wedge^j(su)$ is a $j\\times j$ minor of $su$, hence a sum of products of $j$ entries of $su$.\n Since $v((su)_{ij})=v(s_i)+v(u_{ij})\\ge \\min_{i} v(s_i)+v_{\\min}(u)=:c$, we obtain $v(c_j)\\ge jc$ for every $j$.\n If an eigenvalue $\\lambda$ of $su$ satisfied $v(\\lambda)<c$, then\n $v(c_j\\lambda^{m-j})\\ge jc+(m-j)v(\\lambda)>mv(\\lambda)=v(\\lambda^m)$ for $1\\le j\\le m$,\n contradicting $\\lambda^m+c_1\\lambda^{m-1}+\\cdots+c_m=0$.\n Hence $v(\\lambda)\\ge v_{\\min}(u)+\\min_{i} v(s_i)$.\n Applying the same argument to $(su)^{-1}=u^{-1}s^{-1}$, whose eigenvalues are the $\\lambda^{-1}$ and whose entries\n have valuation at least $v_{\\min}(u^{-1})-\\max_{i} v(s_i)$, gives $v(\\lambda)\\le -v_{\\min}(u^{-1})+\\max_{i} v(s_i)$.\n\\end{proof}\n\n\\begin{lem}\n Fix a closed embedding of group schemes $\\Xi:G\\hookrightarrow \\GL_m$ over $\\mathcal{O}$.\n There exists $B_5>0$ such that\n for every $\\kappa\\in {\\mathbb Z}_{\\ge 0}$, every $\\mu\\in X_*(A)$ satisfying $\\|\\mu\\|\\le \\kappa$,\n every semisimple $\\gamma\\in K\\mu(\\varpi)K$ and every $\\alpha\\in \\Phi_\\gamma$ (for any choice of $T_\\gamma$ as above), we have\n $-B_5\\kappa\\le v(\\alpha(\\gamma))\\le B_5\\kappa$. In particular,\n $|1-\\alpha(\\gamma)|\\le q^{B_5 \\kappa}$.\n\\end{lem}\n\n\\begin{rem}\n Later $\\Xi$ will be provided by Proposition \\ref{p:global-integral-model}.\n\\end{rem}\n\n\n\\begin{proof}\n We may assume that $\\Xi(A)$ is contained in the diagonal torus of $\\GL_m$ by Lemma \\ref{l:conj-image-in-diag}.\n Choose any $T_\\gamma$ as above. Let $\\mathbb{T}_\\gamma\\subset \\GL_m$ be a maximal torus over $\\overline{F}$ containing the image of $T_\\gamma$ under $\\Xi$.\n Since $\\Xi|_{T_\\gamma}:T_\\gamma \\hookrightarrow \\mathbb{T}_\\gamma$ induces a surjection $X^*(\\mathbb{T}_\\gamma)\\twoheadrightarrow X^*(T_\\gamma)$,\n each $\\alpha\\in \\Phi_\\gamma$ can be lifted to some $\\widetilde{\\alpha}\\in X^*(\\mathbb{T}_\\gamma)$.\n Fix such a lift $\\widetilde{\\alpha}$ for each $\\alpha$ once and for all, and set\n $c_1:=\\max_{\\alpha\\in \\Phi_\\gamma} \\|\\widetilde{\\alpha}\\|_{\\GL_m}$.\n\n Let $c_2:=\\max_{\\|\\mu\\|\\le 1} \\|\\Xi\\circ \\mu\\|_{\\GL_m}$, the maximum being taken over $\\mu\\in X_*(A)_{{\\mathbb R}}$\n with $\\|\\mu\\|\\le 1$. Then\n for any $\\kappa\\in{\\mathbb Z}_{\\ge 0}$, $\\|\\mu\\|\\le \\kappa$ implies\n $\\|\\Xi\\circ \\mu\\|_{\\GL_m}\\le c_2\\kappa$. Hence $\\Xi(\\mu(\\varpi))$ is a diagonal matrix in which\n each entry $x$ satisfies $-c_2\\kappa \\le v(x)\\le c_2\\kappa$.\n\n We can write $\\gamma=k_1 \\mu(\\varpi)k_2$ for some $k_1,k_2\\in G(\\mathcal{O})$. 
Then\n $\\Xi(\\gamma)=k'_1 \\Xi(\\mu(\\varpi)) k'_2$ for $k'_1,k'_2\\in \\GL_m(\\mathcal{O})$, and\n $\\Xi(\\gamma)$ is conjugate to $\\Xi(\\mu(\\varpi))k'_2(k'_1)^{-1}$. It follows from\n Lemma \\ref{l:control-eigenvalue} that\n for every eigenvalue $\\lambda$ of $\\Xi(\\gamma)$, we have\n $-c_2\\kappa\\le v(\\lambda)\\le c_2\\kappa$.\n\n By conjugating $\\mathbb{T}_\\gamma$ to the diagonal torus of $\\GL_m$,\n we fix an isomorphism $\\mathbb{T}_\\gamma(\\overline{F})\\simeq (\\overline{F}^\\times)^m$ (canonical up to an action by the symmetric group on $m$ letters). Let $(\\lambda_1,...,\\lambda_m)$\n be the image of $\\Xi(\\gamma)$ under the latter isomorphism. We may write $\\widetilde{\\alpha}$ as\n a character $(\\overline{F}^\\times)^m\\rightarrow \\overline{F}^\\times$ given by $(t_1,...,t_m)\\mapsto t_1^{a_1}\\cdots t_m^{a_m}$\n with $a_1,...,a_m\\in {\\mathbb Z}$ such that $-c_1\\le a_i\\le c_1$ for every $1\\le i\\le m$.\n We have $$\\alpha(\\gamma)=\\widetilde{\\alpha}(\\Xi(\\gamma))=\\lambda_1^{a_1}\\cdots \\lambda_m^{a_m},$$\n so $v(\\alpha(\\gamma))=\\sum_{i=1}^m a_i v(\\lambda_i)$. Hence $-m c_1c_2\\kappa\\le v(\\alpha(\\gamma))\\le m c_1c_2\\kappa$,\nproving the first assertion of the lemma. From this the last assertion is obvious.\n\n\\end{proof}\n\n\n\\begin{rem}\\label{r:Lem2.18-indep}\n Suppose that $F$ runs over the completions of a number field ${\\mathbf F}$ at non-archimedean places $v$, that $G$ over $F$ comes from a fixed reductive group $G$ over ${\\mathbf F}$, and that $\\Xi$ comes from an embedding $G\\hookrightarrow GL_m$ over the integer ring of ${\\mathbf F}$ (at least for every $v$ where $G$ is unramified). Then $B_5$ of the lemma can be chosen to be independent of $v$ (and dependent only on the data over ${\\mathbf F}$). 
This is easy to see from the proof.\n\\end{rem}\n\n\n\n\\subsection{An easy lemma}\\label{sub:easy}\n\n As before let $F$ be a finite extension of ${\\mathbb Q}_p$ with multiplicative norm $|\\cdot|:F^\\times\\rightarrow {\\mathbb R}^\\times_{>0}$ normalized such that a uniformizer is sent to the inverse of the residue field cardinality, and let $G$ be an unramified group over $F$ with a smooth reductive model over $\\mathcal{O}$. The notation for $T_\\gamma$ and $\\Phi_\\gamma$ for a semisimple $\\gamma\\in G(F)$ is as in the previous subsection.\n\n\n\\begin{lem}\\label{l:alpha(gamma)-is-integral}\n Suppose that a semisimple $\\gamma\\in G(F)$ is conjugate to an element\n of $G(\\mathcal{O})$ and that $\\alpha\\in \\Phi_\\gamma$ (for any choice of $T_\\gamma$) satisfies\n $\\alpha(\\gamma)\\neq 1$ and $|1-\\alpha(\\gamma)|\\neq 1$.\n Then $|1-\\alpha(\\gamma)|\\le q^{-1}$.\n\\end{lem}\n\n\\begin{proof}\n\n By the assumption we may assume $\\gamma\\in G(\\mathcal{O})$.\n Choose a maximal torus $T$ in the centralizer of $\\gamma$ in $G$ over $\\mathcal{O}_{F^{\\mathrm{ur}}}$, the ring of integers in\n $F^{\\mathrm{ur}}$, so that $\\gamma\\in T(\\mathcal{O}_{F^{\\mathrm{ur}}})$.\n Such a $T$ exists since a reductive group scheme admits a maximal torus \\'etale locally (\\cite[Cor 3.2.7]{Conrad-reductive}).\n Since $T$ splits over a finite \\'etale cover of $\\mathrm{Spec}\\, \\mathcal{O}_{F^{\\mathrm{ur}}}$ and $\\mathcal{O}_{F^{\\mathrm{ur}}}$ is strictly henselian, $T$ splits over $\\mathcal{O}_{F^{\\mathrm{ur}}}$ itself. Then $\\alpha$ defines a character $T(\\mathcal{O}_{F^{\\mathrm{ur}}})\\rightarrow \\mathcal{O}_{F^{\\mathrm{ur}}}^{\\times}$ and\n $1-\\alpha(\\gamma)\\in \\mathcal{O}_{F^{\\mathrm{ur}}}$. As $1-\\alpha(\\gamma)$ is not a $\\varpi$-adic unit,\n $\\varpi$ divides $1-\\alpha(\\gamma)$. 
The lemma follows.\n\\end{proof}\n\n\n\n\n\\section{Plancherel measure on the unramified spectrum}\\label{s:Plancherel}\n\n\n\n\\subsection{Basic setup and notation}\\label{sub:notation-Plancherel}\n Let $F$ be a finite extension of ${\\mathbb Q}_p$.\n Suppose that $G$ is unramified over $F$.\n Fix a hyperspecial maximal compact subgroup $K$ of $G(F)$.\n Recall the notation from the start of \\S\\ref{sub:Satake-trans}.\n In particular $\\Omega$ (resp. $\\Omega_F$) denotes the Weyl group for $(G_{\\overline{F}},T_{\\overline{F}})$\n (resp. $(G,A)$). There is a natural ${\\rm Gal}(\\overline{F}\/F)$-action on $\\Omega$,\n under which $\\Omega^{{\\rm Gal}(\\overline{F}\/F)}=\\Omega_F$. (See \\cite[\\S6.1]{Bor79}.)\n Since $G$ is unramified, the action of ${\\rm Gal}(\\overline{F}\/F)$ on $\\Omega$ factors through a finite unramified quotient.\n Thus there is a well-defined action of ${\\rm Fr}$ on $\\Omega$, and\n $\\Omega^{{\\rm Fr}}=\\Omega_F$.\n\nThe unitary dual $G(F)^{\\wedge}$ of $G(F)$, or simply $G^\\wedge$ if there is no danger of ambiguity, is equipped with the Fell topology.\n (This notation should not be confused with that for the dual group $\\widehat{G}$.)\n Let $G^{\\wedge,\\mathrm{ur}}$ denote the unramified spectrum in\n $G^{\\wedge}$, and\n $G^{\\wedge,\\mathrm{ur},{\\rm temp}}$ its tempered sub-spectrum.\n The Plancherel measure $\\widehat{\\mu}^{\\mathrm{pl}}$ on $G^{\\wedge}$ is supported on the tempered spectrum $G^{\\wedge,{\\rm temp}}$.\n The restriction of\n $\\widehat{\\mu}^{\\mathrm{pl}}$ to $G^{\\wedge,\\mathrm{ur}}$ will be written as $\\widehat{\\mu}^{\\mathrm{pl,ur}}$.\n The latter is supported on $G^{\\wedge,\\mathrm{ur},{\\rm temp}}$.\n Harish-Chandra's Plancherel formula (cf. \\cite{Wal03}) tells us that\n $\\widehat{\\mu}^{\\mathrm{pl}}(\\widehat{\\phi})=\\phi(1)$ for all $\\phi\\in {\\mathcal{H}}(G(F))$. 
In particular,\n $\\widehat{\\mu}^{\\mathrm{pl,ur}}(\\widehat{\\phi})=\\phi(1)$ for all $\\phi\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F))$.\n\n \\subsection{The unramified tempered spectrum}\\label{s:unramified-spectrum}\n\n\n An unramified $L$-parameter $W^{\\mathrm{ur}}_F\\rightarrow {}^L G^{\\mathrm{ur}}$\n is defined to be an $L$-morphism ${}^L H^{\\mathrm{ur}}\\rightarrow {}^L G^{\\mathrm{ur}}$ (\\S\\ref{sub:L-mor-unr-Hecke})\n with $H=\\{1\\}$.\n Two such parameters $\\varphi_1$ and $\\varphi_2$ are considered equivalent\n if $\\varphi_1=g\\varphi_2 g^{-1}$ for some $g\\in \\widehat{G}$.\n Consider the following sets:\n\n\n\\begin{enumerate}\n\\item irreducible unramified representations $\\pi$ of $G(F)$ up to isomorphism.\n\\item group homomorphisms $\\chi:T(F)\/T(F)\\cap K \\rightarrow {\\mathbb C}^\\times$\nup to $\\Omega_{F}$-action.\n\\item unramified $L$-parameters $\\varphi:W^{\\mathrm{ur}}_F\\rightarrow {}^L G^{\\mathrm{ur}}$\nup to equivalence.\n\\item elements of $(\\widehat{G}\\rtimes {\\rm Fr})_{{\\rm ss-conj}}$;\n this set was defined in \\S\\ref{sub:Satake-trans}.\n\\item $\\Omega^{{\\rm Fr}}$-orbits in $\\widehat{T}\/({\\rm Fr}-{\\rm id})\\widehat{T}$.\n\\item $\\Omega_F$-orbits in $\\widehat{A}$.\n\\item ${\\mathbb C}$-algebra morphisms $\\theta:{\\mathcal{H}}^{\\mathrm{ur}}(G)\\rightarrow {\\mathbb C}$.\n\n\\end{enumerate}\n Let us describe canonical maps among them in some directions.\n\\begin{itemize}\n\\item[(i)$\\rightarrow$(vii)] Choose any $0\\neq v\\in \\pi^K$.\nDefine $\\theta(\\phi)$ by $\\theta(\\phi)v=\\int_{G(F)} \\phi(g)\\pi(g)vdg$.\n\\item[(ii)$\\rightarrow$(i)]\n$\\pi$ is the unique unramified subquotient of ${\\rm n\\textrm{-}ind}^{G(F)}_{B(F)} \\chi$.\n\\item[(ii)$\\leftrightarrow$(vi)] Induced by\n$\\Hom(T(F)\/T(F)\\cap K , {\\mathbb C}^\\times)\\simeq\n\\Hom(A(F)\/A(F)\\cap K , {\\mathbb C}^\\times)$\n\\begin{equation}\\label{e:ii-vi}\\simeq \\Hom(X_*(A),{\\mathbb C}^\\times)\\simeq\\Hom(X^*(\\widehat{A}),{\\mathbb 
C}^\\times)\\simeq\n X_*(\\widehat{A})\\otimes_{\\mathbb Z} {\\mathbb C}^\\times\\simeq \\widehat{A}\\end{equation}\n where the second isomorphism is induced by $X_*(A)\\rightarrow A(F)$ sending $\\mu$ to $\\mu(\\varpi)$.\n\\item[(iii)$\\rightarrow$(iv)] Take $\\varphi({\\rm Fr})$.\n\\item[(v)$\\rightarrow$(iv)] Induced by\nthe inclusion $t\\mapsto t\\rtimes {\\rm Fr}$ from $\\widehat{T}$ to $ \\widehat{G}\\rtimes{\\rm Fr}$.\n\n\\item[(v)$\\rightarrow$(vi)] Induced by the surjection $\\widehat{T}\\twoheadrightarrow \\widehat{A}$,\nwhich is the dual of $A\\hookrightarrow T$. (Recall $\\Omega^{{\\rm Fr}}=\\Omega_F$.)\n\\item[(vii)$\\rightarrow$(vi)]\n Via ${\\mathcal{S}}:{\\mathcal{H}}^{\\mathrm{ur}}(G)\\simeq {\\mathbb C}[X^*(\\widehat{A})]^{\\Omega_F}$,\n $\\theta$ determines an element of (cf. \\eqref{e:ii-vi})\n $$\\Omega_F\\backslash\\Hom(X^*(\\widehat{A}),{\\mathbb C}^\\times)\\simeq \\Omega_F\\backslash \\widehat{A}.$$\n\\end{itemize}\n\n\n\\begin{lem}\\label{l:unr-spec}\n Under the above maps, the sets corresponding to (i)-(vii) are in bijection with each other.\n\\end{lem}\n\\begin{proof}\n See \\S6, \\S7 and \\S10.4 of \\cite{Bor79}.\n\\end{proof}\n\n\n\n Let $F'$ be the finite unramified extension of $F$ such that\n ${\\rm Gal}(\\overline{F}\/F)$ acts on $\\widehat{G}$ through the faithful action of ${\\rm Gal}(F'\/F)$.\n Write ${}^L G_{F'\/F}:=\\widehat{G}\\rtimes{\\rm Gal}(F'\/F)$.\n Let $\\widehat{K}$ be a maximal compact subgroup of $\\widehat{G}$ which is ${\\rm Fr}$-invariant.\n Denote by $\\widehat{T}_c$ (resp. $\\widehat{A}_c$) the maximal compact subtorus\n of $\\widehat{T}$ (resp. 
$\\widehat{A}$).\n\n\\begin{lem}\\label{l:unr-temp-spec}\n The above bijections restrict to the bijections among the sets consisting\n of the following objects.\n\\begin{enumerate}\n\\item[(i)$_t$] irreducible unramified tempered representations $\\pi$ of $G(F)$\nup to isomorphism.\n\\item[(ii)$_t$] unitary group homomorphisms $\\chi:T(F)\/T(F)\\cap K \\rightarrow U(1)$ up to $\\Omega_{F}$-action.\n\\item[(iii)$_t$] unramified $L$-parameters $\\varphi:W^{\\mathrm{ur}}_F\\rightarrow {}^L G^{\\mathrm{ur}}$ with bounded image\nup to equivalence.\n\\item[(iv)$_t$] $\\widehat{G}$-conjugacy classes in $\\widehat{K}\\rtimes {\\rm Fr}$ (viewed in ${}^L G_{F'\/F}$).\n\\item[(iv)$'_t$] $\\widehat{K}$-conjugacy classes in $\\widehat{K}\\rtimes {\\rm Fr}$ (viewed in $\\widehat{K}\\rtimes{\\rm Gal}(F'\/F)$).\n\\item[(v)$_t$] $\\Omega^{{\\rm Fr}}$-orbits in $\\widehat{T}_c\/({\\rm Fr}-{\\rm id})\\widehat{T}_c$.\n\\item[(vi)$_t$] $\\Omega_F$-orbits in $\\widehat{A}_c$.\n\\end{enumerate}\n(The boundedness in (iii)$_t$ means that the projection of ${\\rm Im\\,} \\varphi$ into\n${}^L G_{F'\/F}$ is contained in a maximal compact subgroup of ${}^L G_{F'\/F}$.)\n\\end{lem}\n\n\\begin{proof}\n (i)$_t$$\\leftrightarrow$(ii)$_t$ is standard and (iii)$_t$$\\leftrightarrow$(iv)$_t$ is obvious.\n Also straightforward is (ii)$_t$$\\leftrightarrow$(vi)$_t$ in view of \\eqref{e:ii-vi}.\n\n Let us show that (v)$_t$$\\leftrightarrow$(vi)$_t$. Choose a topological isomorphism of complex tori\n $\\widehat{T}\\simeq ({\\mathbb C}^\\times)^{d}$ with $d=\\dim T$.\n Using ${\\mathbb C}^\\times \\simeq U(1)\\times {\\mathbb R}^\\times_{>0}$, we can decompose $\\widehat{T}=\\widehat{T}_c\\times \\widehat{T}_{nc}$\n such that $\\widehat{T}_{nc}$ is carried over to $({\\mathbb R}^\\times_{>0})^d$ under the isomorphism.\n The decomposition of $\\widehat{T}$ is canonical in that it is preserved under\n any automorphism of $\\widehat{T}$. 
By the same reasoning, there is a canonical decomposition\n $\\widehat{A}=\\widehat{A}_c\\times \\widehat{A}_{nc}$ with $\\widehat{A}_{nc}\\simeq ({\\mathbb R}^\\times_{>0})^{\\dim A}$.\n The canonical surjection $\\widehat{T}\\rightarrow \\widehat{A}$ carries $\\widehat{T}_c$ onto $\\widehat{A}_c$\n and $\\widehat{T}_{nc}$ onto $\\widehat{A}_{nc}$. (This reduces to the assertion in the case of ${\\mathbb C}^\\times$,\n namely that any maps $U(1)\\rightarrow {\\mathbb R}^\\times_{>0}$ and ${\\mathbb R}^\\times_{>0}\\rightarrow U(1)$ induced by\n an algebraic map ${\\mathbb C}^\\times\\rightarrow {\\mathbb C}^\\times$ of ${\\mathbb C}$-tori are trivial. This is easy to check.)\n Therefore the isomorphism $\\widehat{T}\/({\\rm Fr}-{\\rm id})\\widehat{T}\\rightarrow \\widehat{A}$ of Lemma \\ref{l:unr-spec}\n induces an isomorphism\n $\\widehat{T}_c\/({\\rm Fr}-{\\rm id})\\widehat{T}_c\\rightarrow \\widehat{A}_c$ (as well as\n $\\widehat{T}_{nc}\/({\\rm Fr}-{\\rm id})\\widehat{T}_{nc}\\rightarrow \\widehat{A}_{nc}$).\n\n\n Next we show that (iv)$_t$$\\leftrightarrow$(v)$_t$.\n It is clear that $t\\mapsto t\\rtimes {\\rm Fr}$ maps (v)$_t$ into (iv)$_t$. Since (v)$_t$ and (iv)$_t$ are\n subsets of (v) and (iv), which are in bijective correspondence,\n we deduce that (v)$_t$$\\rightarrow$(iv)$_t$ is injective. To show surjectivity, pick any\n $k\\in \\widehat{K}$. 
There exists $t\\in \\widehat{T}$ such that the image of $t$ in (iv) corresponds\n under (iv)$\\leftrightarrow$(v) to the $\\widehat{G}$-conjugacy class of $k\\rtimes {\\rm Fr}$.\n It is enough to show that we can choose $t\\in \\widehat{T}_c$.\n Consider the subgroup $\\widehat{T}_c(t)$ of $$\\widehat{T}\/({\\rm Fr}-{\\rm id})\\widehat{T}\n = \\widehat{T}_c\/({\\rm Fr}-{\\rm id})\\widehat{T}_c~ \\times ~\\widehat{T}_{nc}\/({\\rm Fr}-{\\rm id})\\widehat{T}_{nc}$$ generated by $\\widehat{T}_c\/({\\rm Fr}-{\\rm id})\\widehat{T}_c$\n and the image of $t$.\n The isomorphism (iv)$\\leftrightarrow$(v) maps $\\widehat{T}_c(t)$ into (v)$_t$ by\n the assumption on $t$. Since (v)$_t$ forms a compact set, the group\n $\\widehat{T}_c(t)$ must be contained in a compact subset of $\\widehat{T}\/({\\rm Fr}-{\\rm id})\\widehat{T}$.\n This forces the image of $t$ in $\\widehat{T}_{nc}\/({\\rm Fr}-{\\rm id})\\widehat{T}_{nc}$ to be trivial.\n (Indeed, the latter quotient is isomorphic as a topological group\n to a quotient of ${\\mathbb R}^{\\dim T}$ modulo an ${\\mathbb R}$-subspace via the exponential map. So\n the subgroup generated by any nontrivial element is not contained in a compact set.)\n Therefore $t$ can be chosen in $\\widehat{T}_c$.\n\n\n It remains to verify that (iv)$_t$, (iv)$'_t$ and (v)$_t$ are in bijection.\n Clearly (iv)$'_t$$\\rightarrow$(iv)$_t$ is onto. 
As we have just seen that\n (iv)$_t$$\\leftrightarrow$(v)$_t$, it suffices to observe that (v)$_t$$\\rightarrow$(iv)$'_t$ is onto,\n which is a standard fact (for instance in the context of\n the (twisted) Weyl integration formula for $\\widehat{K}\\rtimes {\\rm Fr}$).\n\n\n\\end{proof}\n\n\n\n\\subsection{Plancherel measure on the unramified spectrum}\\label{sub:plan-unramified}\n\n Lemma \\ref{l:unr-temp-spec} provides a bijection $G^{\\wedge,\\mathrm{ur},{\\rm temp}}\\simeq\n \\Omega_F\\backslash \\widehat{A}_c$, which is in fact a topological isomorphism.\n The Plancherel measure $\\widehat{\\mu}^{\\mathrm{pl,ur}}$ on $G^{\\wedge,\\mathrm{ur}}$ is supported on $G^{\\wedge,\\mathrm{ur},{\\rm temp}}$.\n We would like to describe its pullback measure on $\\widehat{A}_c$,\n to be denoted $\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{0}$. Note that $\\widehat{A}_c$ is topologically\n isomorphic to $\\widehat{T}_c\/({\\rm Fr}-{\\rm id})\\widehat{T}_c$. (This is induced by\n the natural surjection $\\widehat{T}_c \\twoheadrightarrow\\widehat{A}_c$.)\n Fix a measure $d\\overline{t}$ on the latter which is a push forward from a Haar measure on $\\widehat{T}_c$.\n\n\\begin{prop}\\label{p:unr-plan-meas}\n\n The measure $\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{0}$ pulled back to $\\widehat{T}_c\/({\\rm Fr}-{\\rm id})\\widehat{T}_c$ is\n $$\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{0}(\\overline{t})= C\\cdot\n \\frac{\\det(1-{\\rm ad}(t\\rtimes {\\rm Fr})|{\\rm Lie}\\,(\\widehat{G})\/{\\rm Lie}\\,(\\widehat{T}^{\\rm Fr}))}\n {\\det(1-q^{-1}{\\rm ad}(t\\rtimes {\\rm Fr})|{\\rm Lie}\\,(\\widehat{G})\/{\\rm Lie}\\,(\\widehat{T}^{\\rm Fr}))}\n d\\overline{t}\n $$ for some constant $C\\in {\\mathbb C}^\\times$,\n depending on the normalization of Haar measures.\n Here $t\\in \\widehat{T}_c$ is any lift of $\\overline{t}$. (The\n right hand side is independent\n of the choice of $t$.)\n\\end{prop}\n\n\\begin{proof}\n The formula is due to Macdonald (\\cite{Mac71}). 
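For instance (purely as an illustration; it will not be used below), take $G={\\rm PGL}_2$, so that $\\widehat{G}={\\rm SL}_2({\\mathbb C})$ with trivial ${\\rm Fr}$-action and $\\widehat{T}^{{\\rm Fr}}=\\widehat{T}$. Writing $t={\\rm diag}(e^{i\\theta},e^{-i\\theta})\\in \\widehat{T}_c$, the adjoint action of $t$ on the two root spaces in ${\\rm Lie}\\,(\\widehat{G})\/{\\rm Lie}\\,(\\widehat{T})$ has eigenvalues $e^{\\pm 2i\\theta}$, so the density of the proposition specializes to\n $$C\\cdot \\frac{|1-e^{2i\\theta}|^2}{|1-q^{-1}e^{2i\\theta}|^2}\\, d\\overline{t},$$\n which is the classical Plancherel density for the unramified principal series of ${\\rm PGL}_2(F)$. 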
For our purpose, it is more convenient to\n follow the formulation as in the conjecture of \\cite[p.281]{Sha90} (which also discusses the general conjectural formula\n of the Plancherel measure due to Langlands). By that conjecture (known in the unramified case),\n $$\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{0}(\\overline{t})= C'\\cdot \\frac{L(1,\\sigma^{-1}(\\overline{t}),r)}{L(0,\\sigma(\\overline{t}),r)}\n \\frac{L(1,\\sigma(\\overline{t}),r)}{L(0,\\sigma^{-1}(\\overline{t}),r)} d\\overline{t}$$\n where $C'\\in {\\mathbb C}^\\times$ is a constant,\n $\\sigma(\\overline{t}):T(F)\\rightarrow {\\mathbb C}^\\times$ is the character corresponding to $\\overline{t}$\n (via (ii)$\\leftrightarrow$(v) of Lemma \\ref{l:unr-spec}), and $r:{}^L T\n \\rightarrow \\GL({\\rm Lie}\\,({}^L U))$ is the adjoint representation. Here ${}^L U$ is the $L$-group of $U$\n (viewed in ${}^L B$). By unraveling the local $L$-factors, we obtain\n \\begin{equation}\\label{e:pf-unr-plan}\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{0}(\\overline{t})= C'\\cdot\n \\frac{\\det(1-{\\rm ad}(t\\rtimes {\\rm Fr})|{\\rm Lie}\\,(\\widehat{G})\/{\\rm Lie}\\,(\\widehat{T}))}\n {\\det(1-q^{-1}{\\rm ad}(t\\rtimes {\\rm Fr})|{\\rm Lie}\\,(\\widehat{G})\/{\\rm Lie}\\,(\\widehat{T}))}\n d\\overline{t}. \\end{equation}\n Finally, observe that $\\det(1-q^{-s}{\\rm ad}(t\\rtimes {\\rm Fr})|{\\rm Lie}\\,(\\widehat{T})\/{\\rm Lie}\\,(\\widehat{T}^{{\\rm Fr}}))$\n is independent of $\\overline{t}$ (and $t$). Therefore the right hand sides of \\eqref{e:pf-unr-plan} and of the proposition\n agree up to a constant.\n\\end{proof}\n\n\\begin{rem}\n If the Haar measure on $G(F)$ assigns volume 1 to $K$ then\n $C$ must be chosen such that $G^{\\wedge,\\mathrm{ur},{\\rm temp}}$ has total volume 1\n with respect to $\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{0}(\\overline{t})$. 
This is easily deduced from\n the Plancherel formula for ${\\mathbf{1}}_K$.\n\\end{rem}\n\n\\section{Automorphic \\texorpdfstring{$L$}{L}-functions}\\label{sec:pp}\n\nAccording to the Langlands conjectures, the most general $L$-functions should be expressible as products of the principal $L$-functions $L(s,\\Pi)$ associated to cuspidal automorphic representations $\\Pi$ of $\\GL(d)$ over number fields (for varying $d$). The analytic properties and functional equation of such $L$-functions were first established by Godement--Jacquet for general $d\\ge 1$, via the Godement--Jacquet integral representation. The other known methods are the Rankin--Selberg integrals, the doubling method and the Langlands--Shahidi method. The purpose of this section is to recall these analytic properties and to set up notation. More detailed discussions may be found in~\\cites{cong:Lfunc04:cogd,Jacq79:principal,cong:park:mich},~\\cite{RS96}*{\\S2} and~\\cite{book:IK04}*{\\S5}.\n\nIn this section and some of the later sections we use the following notation.\n\\begin{itemize}\n\\item $F$ is a number field, i.e. a finite extension of ${\\mathbb Q}$.\n\\item $G$ is a connected reductive group over $F$ (not assumed to be quasi-split).\n\\item $Z=Z(G)$ is the center of $G$.\n\\item ${\\mathcal V}_F$ (resp. ${\\mathcal V}_F^\\infty$) is the set of all (resp. 
all finite) places of $F$.\n\\item $S_\\infty:={\\mathcal V}_F\\backslash {\\mathcal V}_F^\\infty$.\n\\item $A_{G}$ is the maximal $F$-split subtorus in the center of $\\Res_{F\/{\\mathbb Q}} G$, and $A_{G,\\infty}:=A_G({\\mathbb R})^0$.\n\\end{itemize}\n\n\\subsection{Automorphic forms}\\label{sec:pp:autforms}\n Let $\\chi:A_{G,\\infty}\\rightarrow {\\mathbb C}^\\times$ be a continuous homomorphism.\n Denote by $L^2_\\chi(G(F)\\backslash G({\\mathbb A}_F))$\n the space of all functions $f$ on $G({\\mathbb A}_F)$ which are\n square-integrable modulo $A_{G,\\infty}$ and satisfy\n $f(g\\gamma z)=\\chi(z)f(\\gamma)$ for all $g\\in G(F)$, $\\gamma \\in G({\\mathbb A}_F)$ and\n $z\\in A_{G,\\infty}$. There is a spectral decomposition into discrete and continuous parts\n$$L^2_{\\chi}(G(F)\\backslash G({\\mathbb A}_F))=L^2_{{\\rm disc},\\chi}\\oplus L^2_{{\\rm cont},\\chi},\n\\qquad L^2_{{\\rm disc},\\chi}=\\widehat{\\bigoplus_{\\pi}}\\, m_{{\\rm disc},\\chi}(\\pi)\\cdot \\pi$$\nwhere the last sum is a Hilbert direct sum running over the set of\nall irreducible representations\nof $G({\\mathbb A}_F)$ up to isomorphism.\nWrite ${\\mathcal{AR}}_{{\\rm disc},\\chi}(G)$ for the set of isomorphism classes of\nall irreducible representations $\\pi$\nof $G({\\mathbb A}_F)$ such that $m_{{\\rm disc},\\chi}(\\pi)>0$.\nAny $\\pi\\in {\\mathcal{AR}}_{{\\rm disc},\\chi}(G)$ is said to\nbe a discrete automorphic representation of $G({\\mathbb A}_F)$. If $\\chi$ is trivial (in particular if $A_{G,\\infty}=\\{1\\}$) then we write $m_{{\\rm disc}}$ for $m_{{\\rm disc},\\chi}$.\n\nThe above definitions allow a modest generalization. Let $\\mathfrak{X}_G$ be a closed subgroup of $Z({\\mathbb A}_F)$ containing $A_{G,\\infty}$ and $\\omega:Z({\\mathbb A}_F)\\cap \\mathfrak{X}_G\\backslash \\mathfrak{X}_G\\rightarrow {\\mathbb C}^\\times$ be a continuous (quasi-)character. Then $L^2_{\\omega}$, $L^2_{{\\rm disc},\\omega}$, $m_{{\\rm disc},\\omega}$ etc can be defined analogously. 
In fact the Arthur-Selberg trace formula applies to this setting. (See \\cite[Ch 3.1]{Arthur}.)\n\nFor the rest of Section~\\ref{sec:pp} we are concerned with $G=\\GL(d)$. Take $\\mathfrak{X}_G=Z({\\mathbb A}_F)$ so that $\\omega$ is a quasi-character of $Z(F)\\backslash Z(\\mathbb{A}_F)$. Note that $A_{G,\\infty}=Z(F_\\infty)^\\circ$ in this case. We denote by $\\mathcal{A}_\\omega(\\GL(d,F))$ the space consisting of automorphic functions on $\\GL(d,F)\\backslash \\GL(d,\\mathbb{A}_F)$ which satisfy $f(zg)=\\omega(z)f(g)$ for all $z\\in Z(\\mathbb{A}_F)$ and $g\\in \\GL(d,\\mathbb{A}_F)$ (see Borel-Jacquet~\\cite{BJ79} for the exact definition and the growth condition). We denote by $\\mathcal{A}_{{\\rm cusp},\\omega}(\\GL(d,F))$ the subspace of cuspidal functions (i.e. the functions with vanishing period against all nontrivial unipotent subgroups).\n\nAn automorphic representation $\\Pi$ of $\\GL(d,\\mathbb{A}_F)$ is by definition an irreducible admissible representation of $\\GL(d,\\mathbb{A}_F)$ which is a constituent of the regular representation on $\\mathcal{A}_\\omega(\\GL(d,F))$. Then $\\omega$ is the central character of $\\Pi$. The subspace $\\mathcal{A}_{{\\rm cusp},\\omega}(\\GL(d,F))$ decomposes discretely and an irreducible component is a cuspidal automorphic representation. The notion of cuspidal automorphic representations is the same if the space of cuspidal functions in $L^2_{\\omega}(GL(d,F)\\backslash GL(d,{\\mathbb A}_F))$ is used in the definition in place of $\\mathcal{A}_{{\\rm cusp},\\omega}(\\GL(d,F))$, cf. \\cite[\\S4.6]{BJ79}.\n\nWhen $\\omega$ is unitary we can work with the completed space $L_\\omega^2(\\GL(d,F)\\backslash \\GL(d,\\mathbb{A}_F))$ of square-integrable functions modulo $Z(\\mathbb{A}_F)$ and with unitary automorphic representations. Note that a cuspidal automorphic representation is unitary if and only if its central character is unitary. 
We recall the Langlands decomposition of $L_\\omega^2(\\GL(d,F)\\backslash \\GL(d,\\mathbb{A}_F))$ into the cuspidal, residual and continuous spectra. What will be important in the sequel is the notion of isobaric representations, which we review in~\\S\\ref{sec:pp:isobaric}.\n\nIn the context of $L$-functions, the functional equation involves the contragredient representation $\\widetilde \\Pi$. An important fact is that the contragredient of a unitary automorphic representation of $\\GL(d,\\mathbb{A}_F)$ is isomorphic to its complex conjugate.\n\n\\subsection{Principal $L$-functions}\\label{sec:pp:Lfn}\n Let $\\Pi=\\otimes_v \\Pi_v$ be a cuspidal automorphic representation of $\\GL(d,\\mathbb{A}_F)$ with unitary central character. The principal $L$-function associated to $\\Pi$ is denoted\n\\begin{equation*}\nL(s,\\Pi)=\\prod_{v\\in \\mathcal{V}_F^\\infty } L(s,\\Pi_v).\n\\end{equation*}\nThe Euler product is absolutely convergent when $\\MRe s>1$. The completed $L$-function is denoted $\\Lambda(s,\\Pi)$, the product now running over all places $v\\in\\mathcal{V}_F$.\nFor each finite place $v\\in \\mathcal{V}_F^\\infty$, the inverse of the local $L$-function $L(s,\\Pi_v)$ is a Dirichlet polynomial in $q_v^{-s}$ of degree $\\le d$. Write\n\\begin{equation*}\nL(s,\\Pi_v)=\\prod^d_{i=1} (1-\\alpha_i(\\Pi_v)q_v^{-s})^{-1}.\n\\end{equation*}\nNote that when $\\Pi_v$ is unramified, the $\\alpha_i(\\Pi_v)$ are all non-zero and are the eigenvalues of the semisimple conjugacy class in $\\GL_d(\\mathbb{C})$ associated to $\\Pi_v$, but when $\\Pi_v$ is ramified the Langlands parameters are more sophisticated and we allow some (or even all) of the $\\alpha_i(\\Pi_v)$ to be equal to zero. 
In this way we have a convenient notation for all local $L$-factors.\n\nFor each archimedean $v$, the local $L$-function $L(s,\\Pi_v)$ is a product of $d$ Gamma factors\n\\begin{equation}\\label{def:Lv-arch}\nL(s,\\Pi_v)=\\prod^d_{i=1} \\Gamma_v(s-\\mu_i(\\Pi_v)),\n\\end{equation}\nwhere $\\Gamma_\\mathbb{R}(s):=\\pi^{-s\/2}\\Gamma(s\/2)$ and $\\Gamma_\\mathbb{C}(s):=2(2\\pi)^{-s}\\Gamma(s)$. Note that $\\Gamma_{\\mathbb{C}}(s)=\\Gamma_\\mathbb{R}(s)\\Gamma_\\mathbb{R}(s+1)$ by the doubling formula, so when $v$ is complex, $L(s,\\Pi_v)$ may as well be expressed as a product of $2d$ $\\Gamma_\\mathbb{R}$ factors.\n\nThe completed $L$-function $\\Lambda(s,\\Pi):=L(s,\\Pi)\\prod_{v|\\infty}L(s,\\Pi_v)$ has the following analytic properties. It has a meromorphic continuation to the complex plane. It is entire except when $d=1$ and $\\Pi=\\abs{.}^{it}$ for some $t\\in \\mathbb{R}$, in which case $L(s,\\Pi)=\\zeta_F(s+it)$ is (a shift of) the Dedekind zeta function of the ground field $F$ with simple poles at $s=-it$ and $s=1-it$. It is bounded in vertical strips and satisfies the functional equation\n\\begin{equation}\\label{fneq}\n\\Lambda(s,\\Pi)=\\epsilon(s,\\Pi) \\Lambda(1-s,\\widetilde \\Pi),\n\\end{equation}\nwhere $\\epsilon(s,\\Pi)$ is the epsilon factor and $\\widetilde \\Pi$ is the contragredient automorphic representation. The epsilon factor has the form\n\\begin{equation}\n\\epsilon(s,\\Pi)=\\epsilon(\\Pi) q(\\Pi)^{\\frac{1}{2}-s}\n\\end{equation}\nfor some positive integer $q(\\Pi)\\in \\mathbb{Z}_{\\ge 1}$ and root number $\\epsilon(\\Pi)$ of modulus one.\n\nNote that $q(\\Pi)=q(\\widetilde \\Pi)$,\n$\\epsilon(\\widetilde \\Pi)=\\overline{\\epsilon(\\Pi)}$ and for all $v\\in \\mathcal{V}_F$, $L(s,\\widetilde \\Pi_v)=\\overline{L(\\overline{s},\\Pi_v)}$. 
For instance this follows from the fact~\\cite{GK75} that $\\widetilde \\Pi$ is isomorphic to the complex conjugate $\\overline\\Pi$ (obtained by taking the complex conjugate of all forms in the vector space associated to the representation $\\Pi$).\n\nThe conductor $q(\\Pi)$ is the product over all finite places $v\\in\\mathcal{V}^\\infty_F$ of the conductor $q(\\Pi_v)$ of $\\Pi_v$. Recall that $q(\\Pi_v)$ equals one whenever $\\Pi_v$ is unramified. It is convenient to introduce as well the conductor of admissible representations at archimedean places. When $v$ is real we let $C(\\Pi_v)=\\prod\\limits^d_{i=1} (2+\\abs{\\mu_i(\\Pi_v)})$ and when $v$ is complex we let $C(\\Pi_v)=\\prod\\limits^d_{i=1} (2+\\abs{\\mu_i(\\Pi_v)}^2)$. Then we let $C(\\Pi)$ be the analytic conductor which is the product of all the local conductors\n\\begin{equation*}\nC(\\Pi):=\n\\prod_{v\\mid \\infty} C(\\Pi_v)\n\\prod_{v\\in\\mathcal{V}^\\infty_F} q(\\Pi_v) = C(\\Pi_\\infty) q(\\Pi).\n\\end{equation*}\nNote that $C(\\Pi)\\ge 2$ always.\n\nThere is $0\\le \\theta <\\frac{1}{2}$ such that\n\\begin{equation}\\label{towardsR}\n\\MRe \\mu_i(\\Pi_v) \\le \\theta\n,\\qtext{resp.}\\\n\\log_{q_v} \\abs{\\alpha_i(\\Pi_v)}\\le \\theta\n\\end{equation}\nfor all archimedean $v$ (resp. finite $v$) and $1\\le i\\le d$. When $\\Pi_v$ is unramified we ask for\n\\begin{equation}\\label{unrtowardsR}\n\\abs{\\MRe \\mu_i(\\Pi_v)} \\le \\theta\n,\\qtext{resp.}\\\n\\abs{\\log_{q_v} \\abs{\\alpha_i(\\Pi_v)}}\\le \\theta.\n\\end{equation}\nThe value $\\theta=\\frac{1}{2} - \\frac{1}{d^2+1}$ is admissible by an argument of Serre and Luo--Rudnick--Sarnak based on the analytic properties of the Rankin-Selberg convolution $L(s,\\Pi\\times \\widetilde \\Pi)$. 
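As a worked instance of these bounds: for $d=2$ the admissible value above reads $\\theta=\\frac{1}{2}-\\frac{1}{2^2+1}=\\frac{3}{10}$, so \\eqref{unrtowardsR} says that at every finite place $v$ where $\\Pi_v$ is unramified,\n$$q_v^{-3\/10}\\le \\abs{\\alpha_i(\\Pi_v)}\\le q_v^{3\/10}, \\qquad i=1,2.$$ 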
Note that for all $v$, the local $L$-functions $L(s,\\Pi_v)$ are holomorphic and non-vanishing in the half-plane $\\MRe s>\\theta$, which contains the central line $\\MRe s=\\frac{1}{2}$.\n\nThe generalized Ramanujan conjecture asserts that all $\\Pi_v$ are tempered (see~\\cite{Sarnak:GRC} and the references therein). This is equivalent to having $\\theta=0$ in the inequalities~\\eqref{towardsR} and~\\eqref{unrtowardsR}. In particular we expect that when $\\Pi_v$ is unramified, $\\abs{\\alpha_i(\\Pi_v)}=1$.\n\n\\subsection{Isobaric sums}\\label{sec:pp:isobaric} We need to consider slightly more general $L$-functions associated to non-cuspidal automorphic representations on $\\GL(d,\\mathbb{A}_F)$. These $L$-functions are products of the $L$-functions associated to cuspidal representations and studied in the previous \\S\\ref{sec:pp:Lfn}. Closely related to this construction, it is useful to introduce, following Langlands~\\cite{cong:auto77:lang}, the notion of isobaric sums of automorphic representations. The concept of isobaric representations is natural in the context of $L$-functions and the Langlands functoriality conjectures.\n\nLet $\\Pi$ be an irreducible automorphic representation of $\\GL(d,\\mathbb{A}_F)$. Then a theorem of Langlands~\\cite{Langlands:notion} states that there are integers $r\\ge 1$ and $d_1,\\cdots,d_r\\ge 1$ with $d=d_1+\\cdots +d_r$ and cuspidal automorphic representations $\\Pi_1,\\cdots,\\Pi_r$ of $\\GL(d_1,\\mathbb{A}_F),\\cdots,\\GL(d_r,\\mathbb{A}_F)$ such that $\\Pi$ is a constituent of the induced representation of $\\Pi_1\\otimes \\cdots \\otimes \\Pi_r$ (from the Levi subgroup $\\GL(d_1)\\times \\cdots \\times \\GL(d_r)$ of $\\GL(d)$). A cuspidal representation is unitary when its central character is unitary. If all the $\\Pi_j$ are unitary, then $\\Pi$ is unitary. 
But the converse is not true: note that even if $\\Pi$ is unitary, the representation $\\Pi_j$ need not be unitary in general.\n\nWe recall the generalized strong multiplicity one theorem of Jacquet and Shalika~\\cite{JSI-II}. Suppose $\\Pi$ and $\\Pi'$ are irreducible automorphic representations of $\\GL(d,\\mathbb{A}_F)$ such that $\\Pi_v$ is isomorphic to $\\Pi'_v$ for almost all $v\\in\\mathcal{V}_F$ (we say that $\\Pi$ and $\\Pi'$ are weakly equivalent) and suppose that $\\Pi$ (resp. $\\Pi'$) is a constituent of the induced representation of $\\Pi_1\\otimes\\cdots \\otimes \\Pi_r$ (resp. $\\Pi'_1\\otimes \\cdots \\otimes \\Pi'_{r'}$). Then $r=r'$ and up to permutation the sets of cuspidal representations $\\set{\\Pi_j}$ and $\\set{\\Pi'_j}$ coincide. Note that this generalizes the strong multiplicity one theorem of Piatetski-Shapiro which corresponds to the case where $\\Pi$ and $\\Pi'$ are cuspidal.\n\nConversely suppose $\\Pi_1,\\cdots, \\Pi_r$ are cuspidal representations of $\\GL(d_1,\\mathbb{A}_F), \\cdots, \\GL(d_r,\\mathbb{A}_F)$. Then from the theory of Eisenstein series there is a unique constituent of the induced representation of $\\Pi_1\\otimes\\cdots \\otimes \\Pi_r$ whose local components coincide at each place $v\\in \\mathcal{V}_F$ with the Langlands quotient of the local induced representation~\\cite{cong:auto77:lang}*{\\S2}. It is denoted $\\Pi_1\\boxplus \\cdots \\boxplus \\Pi_r$ and called an isobaric representation (automorphic representations which are not isobaric are called anomalous). The above results of Langlands and Jacquet--Shalika may now be summarized by saying that an irreducible automorphic representation of $\\GL(d,\\mathbb{A}_F)$ is weakly equivalent to a unique isobaric representation.\n\nWe now turn to $L$-functions. 
The completed $L$-function associated to an isobaric representation $\\Pi=\\Pi_1\\boxplus \\cdots\\boxplus \\Pi_r$ is by definition\n\\begin{equation*}\n\\Lambda(s,\\Pi) = \\prod^{r}_{j=1} \\Lambda(s,\\Pi_j).\n\\end{equation*}\nAll notation from the previous subsection will carry over to $\\Lambda(s,\\Pi)$. Namely we have the local $L$-factors $L(s,\\Pi_v)$, the local Satake parameters $\\alpha_i(\\Pi_v)$ and $\\mu_i(\\Pi_v)$, the epsilon factor $\\epsilon(s,\\Pi)$, the root number $\\epsilon(\\Pi)$, the local conductors $q(\\Pi_v)$, $C(\\Pi_v)$ and the analytic conductor $C(\\Pi)$. The Euler product converges absolutely for $\\MRe s$ large enough.\n\nOne important difference concerns the bounds for local Satake parameters. Even if we assume that $\\Pi$ has unitary central character the inequalities~\\eqref{towardsR} may not hold. We shall therefore require a stronger condition on $\\Pi$.\n\n\\begin{proposition}\\label{prop:tempered}\nLet $\\Pi$ be an isobaric representation of $\\GL(d,\\mathbb{A}_F)$. Assume that the archimedean component $\\Pi_\\infty$ is tempered. Then the bounds towards Ramanujan are satisfied. Namely there is a positive constant $\\theta<\\frac{1}{2}$ such that for all $1\\le i\\le d$ and all archimedean (resp. non-archimedean) places $v$,\n\\begin{equation}\\label{eq:prop:tempered}\n\\MRe \\mu_i(\\Pi_v) \\le \\theta\n,\\qtext{resp. }\\\n\\log_{q_v} \\abs{\\alpha_i(\\Pi_v)}\\le \\theta.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof} Let $\\Pi=\\Pi_1\\boxplus \\cdots \\boxplus \\Pi_r$ be the isobaric decomposition with $\\Pi_j$ cuspidal. Then we will show that all $\\Pi_j$ have unitary central character, which implies Proposition~\\ref{prop:tempered}.\n\nBy definition we have that $\\Pi_\\infty$ is a Langlands quotient of the induced representation of $\\Pi_{1\\infty} \\otimes \\cdots \\otimes \\Pi_{r\\infty}$. 
Since $\\Pi_\\infty$ is tempered, this implies that all $\\Pi_{j\\infty}$ are tempered, and in particular have unitary central character. Then the (global) central character of $\\Pi_j$ is unitary as well.\n\n\\end{proof}\n\n\\remark In analogy with the local case, an isobaric representation $\\Pi_1 \\boxplus \\cdots \\boxplus \\Pi_r$ where all cuspidal representations $\\Pi_j$ have unitary central character is called \\Lquote{tempered} in~\\cite{cong:auto77:lang}. This terminology is fully justified only under the generalized Ramanujan conjecture for $\\GL(d,\\mathbb{A}_F)$. To avoid confusion we use the adjective \\Lquote{tempered} for $\\Pi=\\otimes_v \\Pi_v$ only in the strong sense that the local representations $\\Pi_v$ are tempered for all $v\\in \\mathcal{V}_F$.\n\n\\remark In the proof of Proposition~\\ref{prop:tempered} we see the importance of the notion of isobaric representations and Langlands quotients. For instance a discrete series representation of $\\GL(2,\\mathbb{R})$ is a constituent (but not a Langlands quotient) of an induced representation of a non-tempered character of $\\GL(1,\\mathbb{R})\\times \\GL(1,\\mathbb{R})$.\n\n\\subsection{An explicit formula}\\label{sec:pp:explicit} Let $\\Pi$ be a unitary cuspidal representation of $\\GL(d,\\mathbb{A}_F)$. Let $\\rho_{j}(\\Pi)$ denote the zeros of $\\Lambda(s,\\Pi)$ counted with multiplicities. These are also the non-trivial zeros of $L(s,\\Pi)$. The method of Hadamard and de la Vall\\'ee Poussin generalizes from the Riemann zeta function to automorphic $L$-functions, and implies that $0<\\MRe \\rho_j(\\Pi)<1$ for all $j$. 
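\n\nAs a classical illustration (recalled here for orientation only): for $d=1$, $F=\\mathbb{Q}$ and $\\Pi$ trivial one has $\\Lambda(s,\\Pi)=\\pi^{-s\/2}\\Gamma(s\/2)\\zeta(s)$, and the method of Hadamard and de la Vall\\'ee Poussin gives the familiar quantitative refinement: there is an absolute constant $c>0$ such that\n\\begin{equation*}\n\\zeta(s)\\neq 0\n\\qtext{for }\n\\MRe s\\ge 1-\\frac{c}{\\log(\\abs{\\Mim s}+2)}.\n\\end{equation*}\n\n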
There is a polynomial $p(s)$ such that $p(s)\\Lambda(s,\\Pi)$ is entire and of order $1$ ($p(s)=1$ except when $d=1$ and $\\Pi=\\abs{.}^{it}$, in which case we choose $p(s)=(s-it)(1-it-s)$).\n\nThe Hadamard factorization shows that there are $a=a(\\Pi)$ and $b=b(\\Pi)$ such that\n\\begin{equation*}\np(s)\\Lambda(s,\\Pi)=e^{a+bs}\n\\prod_j \\bigl(1-\\frac{s}{\\rho_j(\\Pi)} \\bigr)\ne^{s\/\\rho_j(\\Pi)}.\n\\end{equation*}\nThe product is absolutely convergent in compact subsets away from the zeros $\\rho_j(\\Pi)$. The functional equation implies that\n\\begin{equation*}\n\\sum_j \\MRe(\\rho_j(\\Pi)^{-1})= -\\MRe b(\\Pi).\n\\end{equation*}\n\nThe number of zeros of bounded imaginary part is bounded above uniformly:\n\\begin{equation*}\n\\abs{\\set{j,\\ \\abs{\\Mim \\rho_j(\\Pi)}\\le 1}}\n\\ll \\log C(\\Pi).\n\\end{equation*}\nChanging $\\Pi$ into $\\Pi\\otimes \\abs{.}^{it}$ we have an analogous uniform estimate for the number of zeros with $\\abs{\\Mim \\rho_j(\\Pi)-T}\\le 1$ (in particular this is $\\ll_\\Pi \\log T$).\n\nLet $N(T,\\Pi)$ be the number of zeros with $\\abs{\\Mim \\rho_j(\\Pi)}\\le T$. Then the following estimate holds uniformly in $T\\ge 1$ (Weyl's law):\n\\begin{equation*}\n \nN(T,\\Pi) = \\frac{T}{\\pi} \\Bigl(d\n\\log (\\frac{T}{2\\pi e})\n+ \\log C(\\Pi)\n\\Bigr)\n+O_\\Pi(\\log T).\n\\end{equation*}\nThe error term could be made uniform in $\\Pi$, see~\\cite{book:IK04}*{\\S5.3} for more details\\footnote{One should be aware that Theorem~5.8 in~\\cite{book:IK04} does not apply directly to our setting because it is valid under certain further assumptions on $\\Pi$ such as $\\mu_i(\\Pi_v)$ being real for archimedean places $v$.}. The main term can be interpreted as the variation of the argument of $C(\\Pi)^{s\/2}L(s,\\Pi_\\infty)$ along certain vertical segments.\n\nWe are going to discuss an explicit formula (see~\\eqref{eq:prop:explicit} below) expressing a weighted sum over the zeros of $\\Lambda(s,\\Pi)$ as a contour integral. 
It is a direct consequence of the functional equation~\\eqref{fneq} and Cauchy's integral formula. The explicit formula is traditionally stated using the Dirichlet coefficients of the $L$-function $L(s,\\Pi)$. For our purpose it is more convenient to maintain the Euler product factorization.\n\nDefine $\\gamma_j(\\Pi)$ by $\\rho_j(\\Pi)=\\frac{1}{2}+i\\gamma_j(\\Pi)$. We know that $\\abs{\\Mim \\gamma_j(\\Pi)}<\\frac{1}{2}$ and under the GRH, $\\gamma_j(\\Pi)\\in \\mathbb{R}$ for all $j$.\n\nIt is convenient to denote by $\\frac{1}{2} + ir_j(\\Pi)$ the possible poles of $\\Lambda(s,\\Pi)$ counted with multiplicity. We have seen that poles only occur when $\\Pi=\\abs{.}^{it}$ in which case the poles are simple and $\\set{r_j(\\Pi)}=\\set{t+\\frac{i}{2},-t-\\frac{i}{2}}$.\n\nThe above discussion applies with little change to isobaric representations. If we also assume that $\\Pi_\\infty$ is tempered then we have seen in the proof of Proposition~\\ref{prop:tempered} that $\\Pi=\\Pi_1 \\boxplus \\cdots \\boxplus \\Pi_r$ with $\\Pi_i$ unitary cuspidal representations of $\\GL(d_i,\\mathbb{A})$ for all $1\\le i\\le r$. In particular the bounds towards Ramanujan apply and $\\abs{\\Mim \\gamma_j(\\Pi)}<\\frac{1}{2}$ for all $j$.\n\nLet $\\Phi$ be a Schwartz function on $\\mathbb{R}$ whose Fourier transform\n\\begin{equation}\\label{def:fourier}\n\\widehat \\Phi(y):=\\int_{-\\infty}^{+\\infty} \\Phi(x) e^{-2\\pi i xy} dx\n\\end{equation}\nhas compact support. Note that under this assumption, $\\Phi$ may be extended to an entire function on $\\mathbb{C}$.\n\\begin{proposition}\\label{prop:explicit} Let $\\Pi$ be an isobaric representation of $\\GL(d,\\mathbb{A})$ satisfying the bounds towards Ramanujan~\\eqref{towardsR}. 
With notation as above and for $\\sigma>\\frac{1}{2}$, the following identity holds\n\\begin{equation}\\label{eq:prop:explicit}\n\\begin{aligned}\n\\sum_j &\\Phi(\\gamma_j(\\Pi))=\\sum_j \\Phi\n(r_j(\\Pi))\n+ \\frac{\\log q(\\Pi)}{2\\pi} \\widehat \\Phi(0)\n+\\\\\n&+ \\frac{1}{2\\pi}\\int_{-\\infty}^{\\infty}\n\\biggl[\n\\frac{\\Lambda'}{\\Lambda}(\\frac{1}{2}+\\sigma+ir,\\Pi)\n\\Phi(r-i\\sigma)\n+\n\\overline{\n\\frac{\\Lambda'}{\\Lambda}(\\frac{1}{2}+\\sigma+ir,\\Pi)\n}\n\\Phi(r+i\\sigma)\n\\biggr]\ndr.\n\\end{aligned}\n\\end{equation}\n\\end{proposition}\n\nThere is an important remark about the explicit formula that we will use frequently. Therefore we insert it here before going into the proof. The line of integration in~\\eqref{eq:prop:explicit} is away from the zeros and poles because $\\sigma>1\/2$. In particular the line of integration cannot be moved to $\\sigma=0$ directly. But we can do the following, which is a natural way to produce the sum over primes. First we replace $\\Lambda(s,\\Pi)$ by its Euler product, which is absolutely convergent in the given region ($\\MRe s>1$). Then for each term we may move the line of integration to $\\sigma=0$ because we have seen that $\\frac{L'}{L}(s,\\Pi_v)$ has no pole for $\\MRe s > \\theta$. Thus we have\n\\begin{equation}\\label{explicit:switch}\n\\int_{-\\infty}^{\\infty}\n\\frac{\\Lambda'}{\\Lambda}(\\frac{1}{2}+\\sigma+ir,\\Pi)\n\\Phi(r-i\\sigma)\ndr=\n\\sum_{v\\in \\mathcal{V}_F}\n\\int_{-\\infty}^{\\infty}\n\\frac{L'}{L}\n(\\frac{1}{2}+ir,\\Pi_v)\n\\Phi(r)\ndr.\n\\end{equation}\nThe latter expression is convenient to use in practice. The integral in the right-hand side of~\\eqref{explicit:switch} is absolutely convergent because $\\Phi$ is a Schwartz function and the sum over $v\\in\\mathcal{V}_F$ is actually finite since the support of $\\widehat \\Phi$ is compact.\\footnote{Note however that it is never allowed to switch the sum and integration symbols in~\\eqref{explicit:switch}. 
This is because the $L$-function is evaluated at the center of the critical strip in which the Euler product does not converge absolutely.}\n\n\\begin{proof} The first step is to work with the Mellin transform rather than the Fourier transform. Namely we set\n\\begin{equation*}\nH(\\frac{1}{2} + is)=\\Phi\\bigl(s\\bigr),\\quad s\\in \\mathbb{C}.\n\\end{equation*}\nNote that $H$ is an entire function which is rapidly decreasing on vertical strips. This justifies all the contour shifts below.\n\nWe form the integral\n\\begin{equation*}\n\\int_{(2)} \\frac{\\Lambda'}{\\Lambda}(s,\\Pi) H(s) \\frac{ds}{2i\\pi}.\n\\end{equation*}\nWe shift the contour to $\\MRe s=-1$, crossing the poles of $\\dfrac{\\Lambda'}{\\Lambda}$ inside the critical strip, which come from the zeros and possible poles of $\\Lambda(s,\\Pi)$. The sum over the zeros reads\n\\begin{equation*}\n\\sum_j H(\\rho_j(\\Pi)) = \\sum_j \\Phi(\\gamma_j(\\Pi))\n\\end{equation*}\nand the sum over the poles reads\n\\begin{equation*}\n-\\sum_j \\Phi(r_j(\\Pi)).\n\\end{equation*}\n\nNote that since $\\epsilon(s,\\Pi)=\\epsilon(\\Pi) q(\\Pi)^{\\frac{1}{2}-s}$ we have\n\\begin{equation*}\n\\frac{\\epsilon'}{\\epsilon}(s,\\Pi)=-\\log q(\\Pi),\\quad s\\in\\mathbb{C}.\n\\end{equation*}\nWe obtain as a consequence of the functional equation~\\eqref{fneq} that\n\\begin{equation*}\n\\begin{aligned}\n\\int_{(-1)}^{} \\frac{\\Lambda'}{\\Lambda}(s,\\Pi) H(s) \\frac{ds}{2i\\pi}&=\n\\int_{(2)}^{} \\frac{\\Lambda'}{\\Lambda}(1-s,\\Pi) H(1-s) \\frac{ds}{2i\\pi}\\\\\n&=-\\int_{(2)}^{} \\biggl(\\log q(\\Pi) + \\frac{\\Lambda'}{\\Lambda}(s,\\widetilde \\Pi)\n\\biggr) H(1-s) \\frac{ds}{2i\\pi}.\n\\end{aligned}\n\\end{equation*}\n\nNow we observe that\n\\begin{equation*}\n\\int_{(2)} H(s) \\frac{ds}{2i\\pi} = \\frac{1}{2\\pi} \\widehat \\Phi(0)\n\\end{equation*}\nand also\n\\begin{equation*}\n\\int_{(2)}^{} \\frac{\\Lambda'}{\\Lambda}(s,\\Pi) H(s) \\frac{ds}{2i\\pi}\n=\n\\frac{1}{2\\pi}\\int_{-\\infty}^{\\infty}\n\\Phi(r-\\frac{3i}{2})\n\\frac{\\Lambda'}{\\Lambda}(2+ir,\\Pi) 
dr\n\\end{equation*}\nand\n\\begin{equation*}\n\\begin{aligned}\n\\int_{(2)}^{} \\frac{\\Lambda'}{\\Lambda}(s,\\widetilde \\Pi) H(1-s) \\frac{ds}{2i\\pi}\n&=\n\\frac{1}{2\\pi}\\int_{-\\infty}^{\\infty}\n\\Phi(r+\\frac{3i}{2})\n\\frac{\\Lambda'}{\\Lambda}(2-ir,\\widetilde \\Pi) dr\\\\\n&=\n\\frac{1}{2\\pi}\\int_{-\\infty}^{\\infty}\n\\Phi(r+\\frac{3i}{2})\n\\overline{\n\\frac{\\Lambda'}{\\Lambda}(2+ir,\\Pi)\n}\ndr.\n\\end{aligned}\n\\end{equation*}\n\nSince $\\Lambda(s,\\widetilde \\Pi)=\\overline{\\Lambda(\\overline{s},\\Pi)}$, collecting all the terms above concludes the proof of the proposition. More precisely, this yields the formula when $\\sigma=3\/2$, and then we can make $\\sigma>1\/2$ arbitrary by shifting the line of integration.\n\\end{proof}\n\nWe conclude this section with a couple of remarks on symmetries. The first observation is that the functional equation implies that if $\\rho$ is a zero (resp. pole) of $\\Lambda(s,\\Pi)$ then so is $1-\\bar\\rho$ (reflexion across the central line). Thus the set $\\set{\\gamma_j(\\Pi)}$ (resp. $\\set{r_j(\\Pi)}$) is invariant under the reflexion across the real axis (namely $\\gamma$ goes into $\\overline{\\gamma}$). Note that this is compatible with the GRH which predicts that $\\MRe \\rho_j(\\Pi)=\\frac{1}{2}$ and $\\gamma_j(\\Pi)\\in \\mathbb{R}$.\n\nAssuming $\\Phi$ is real-valued the explicit formula is an identity between real numbers. Indeed the Schwartz reflection principle gives $\\Phi(s)=\\overline{\\Phi(\\overline s)}$ for all $s\\in \\mathbb{C}$. Because of the above remark the sum over the zeros (resp. poles) in~\\eqref{eq:prop:explicit} is real; the integrand is real-valued as well for all $r\\in (-\\infty,\\infty)$.\n\nThe situation when $\\Pi$ is self-dual occurs often in practice. The zeros $\\gamma_j(\\Pi)$ satisfy another symmetry which is the reflexion across the origin. Assuming $\\Pi$ is cuspidal and non-trivial there is no pole. 
The explicit formula~\\eqref{eq:prop:explicit} simplifies and may be written\n\\begin{equation*}\n\\sum_j\n\\Phi\\left( \\gamma_j(\\Pi) \\right)\n=\\frac{\\log q(\\Pi)}{2\\pi}\\widehat \\Phi(0)\n+\\frac{1}{\\pi}\n\\sum_{v\\in \\mathcal{V}_F}\n\\int_{-\\infty}^{\\infty}\n\\frac{L'}{L}(\\frac{1}{2}+ir,\\Pi_v)\n\\Phi(r) dr.\n\\end{equation*}\n\n\n\n\n\\section{Sato-Tate equidistribution}\\label{s:Sato-Tate}\n\n Let $G$ be a connected reductive group over a number field $F$ as in the previous section.\n The choice of a ${\\rm Gal}(\\overline{F}\/F)$-invariant\n splitting datum $(\\widehat{B},\\widehat{T},\\{X_{\\alpha}\\}_{\\alpha\\in \\Delta^{\\vee}})$\n as in \\S\\ref{sub:L-groups} induces\n a composite map ${\\rm Gal}(\\overline{F}\/F)\\rightarrow {\\rm Out}(\\widehat{G})\\hookrightarrow {\\rm Aut}(\\widehat{G})$ with open kernel. Let $F_1$ be the unique finite\n extension of $F$ in $\\overline{F}$ such that\n $${\\rm Gal}(\\overline{F}\/F)\\twoheadrightarrow {\\rm Gal}(F_1\/F)\\hookrightarrow {\\rm Aut}(\\widehat{G}).$$\n\n\n\n\\subsection{Definition of the Sato-Tate measure}\\label{sub:ST-meas}\n\n Set $\\Gamma_1:={\\rm Gal}(F_1\/F)$.\n Let $\\widehat{K}$ be a maximal compact subgroup of $\\widehat{G}$ which is $\\Gamma_1$-invariant.\n (It is not hard to see that such a $\\widehat{K}$ exists, cf. \\cite{JZ08}.)\n Set $\\widehat{T}_{c}:=\\widehat{T}\\cap\\widehat{K}$. 
(The subscript $c$ stands for ``compact'' as it was in \\S\\ref{sub:plan-unramified}.)\n Denote by $\\Omega_c$ the Weyl group for $(\\widehat{K},\\widehat{T}_c)$.\n\n Let $\\theta\\in \\Gamma_1$.\n Define $\\Omega_{c,\\theta}$ to be the subset of $\\theta$-invariant elements of $\\Omega_c$.\n Consider the topological quotient $\\widehat{K}^\\natural_\\theta$ of\n $\\widehat{K}\\rtimes \\theta$ by the $\\widehat{K}$-conjugacy equivalence relation.\n Set $\\widehat{T}_{c,\\theta}:=\\widehat{T}_c\/(\\theta-{\\rm id})\\widehat{T}_c$.\n Note that the action of $\\Omega_{c,\\theta}$ on $\\widehat{T}_c$ induces an action\n on $\\widehat{T}_{c,\\theta}$.\n The inclusion $\\widehat{T}_c\\hookrightarrow \\widehat{K}$ induces a canonical topological isomorphism (cf. Lemma \\ref{l:unr-temp-spec})\n \\begin{equation}\\label{e:K-T-isom}\\widehat{K}^\\natural_\\theta\\simeq \\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta}.\\end{equation}\n\n\n\n The Haar measure on $\\widehat{K}$ (resp. on $\\widehat{T}_c$) with total volume 1 is\n written as $\\mu_{\\widehat{K}}$ (resp. 
$\\mu_{\\widehat{T}_c}$).\n Then $\\mu_{\\widehat{K}}$ on $\\widehat{K}\\rtimes\\theta$ induces the quotient measure\n $\\mu_{\\widehat{K}^\\natural_\\theta}$ (so that\n for any continuous function $f^\\natural$ on $\\widehat{K}_\\theta^\\natural$ and its pullback\n $f$ on $\\widehat{K}$,\n $\\int f^\\natural \\mu_{\\widehat{K}^\\natural_\\theta}=\\int f \\mu_{\\widehat{K}}$) thus also a measure $\\mu_{\\widehat{T}_c,\\theta}$ on $\\widehat{T}_{c,\\theta}$.\n\n\n\\begin{defn}\n The $\\theta$-\\textbf{Sato-Tate measure} $\\widehat{\\mu}^{\\mathrm{ST}}_\\theta$\n on $\\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta}$ is the measure\n transported from $\\mu_{\\widehat{K}^\\natural_\\theta}$ via \\eqref{e:K-T-isom}.\n\\end{defn}\n\n\\begin{lem}\\label{l:ST-measure}\n Let $\\widehat{\\mu}^{\\mathrm{ST}}_{\\theta,0}$ denote the measure on $\\widehat{T}_{c,\\theta}$\n pulled back from\n $\\widehat{\\mu}^{\\mathrm{ST}}_\\theta$ on $\\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta}$ (so that $\\int\n f \\widehat{\\mu}^{\\mathrm{ST}}_{\\theta,0}=\n \\int \\overline{f} \\widehat{\\mu}^{\\mathrm{ST}}_\\theta$ for every continuous $\\overline{f}$ on $\\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta}$\n and its pullback $f$). 
Then\n $$\\widehat{\\mu}^{\\mathrm{ST}}_{\\theta,0}=\\frac{1}{|\\Omega_{c,\\theta}|} D_\\theta(t) \\mu_{\\widehat{T}_{c,\\theta}},$$\n where $D_\\theta(t)={\\det(1-{\\rm ad}(t\\rtimes \\theta)|{\\rm Lie}\\,(\\widehat{K})\/{\\rm Lie}\\,(\\widehat{T}^\\theta_c))}$\n and $t$ signifies a parameter on $\\widehat{T}_{c,\\theta}$.\n\\end{lem}\n\n\\begin{proof}\n The twisted Weyl integration formula tells us that\n for a continuous $f:\\widehat{K}\\rightarrow {\\mathbb C}$,\n$$\\int_{\\widehat{K}} f(k)\\mu_{\\widehat{K}}\n= \\frac{1}{|\\Omega_{c,\\theta}|} \\int_{\\widehat{T}^{\\mathrm{reg}}_{c,\\theta}}\nD_\\theta(t) \\int_{\\widehat{K}_{t\\theta}\\backslash \\widehat{K}} f(x^{-1} t x^{\\theta}) \\cdot dx dt.$$\n Notice that $\\widehat{K}_{t\\theta}$ is the twisted centralizer group of $t$ in $\\widehat{K}$\n (or, the centralizer group of $t\\theta$ in $\\widehat{K}$).\n On the right hand side, $ \\mu_{\\widehat{T}_{c,\\theta}}$ is used for integration.\n When $f$ is a pullback from $\\widehat{K}^\\natural_{\\theta}$, the formula simplifies as\n$$\\int_{\\widehat{K}^\\natural_{\\theta}} f(k) \\mu_{\\widehat{K}^\\natural_{\\theta}}\n= \\frac{1}{|\\Omega_{c,\\theta}|} \\int_{\\widehat{T}^{\\mathrm{reg}}_{c,\\theta}}\nD_\\theta(t) f(t)\\cdot \\mu_{\\widehat{T}_{c,\\theta}}$$\nand the left hand side is equal to\n$\\int_{\\widehat{T}_{c,\\theta}} f(t) \\widehat{\\mu}^{\\mathrm{ST}}_{\\theta,0}$ by definition.\n\\end{proof}\n\n\n\\subsection{Limit of the Plancherel measure versus the Sato-Tate measure}\\label{sub:lim-of-Plan}\n\n\n Let $\\theta,\\tau\\in \\Gamma_1$.\n Then clearly $\\Omega_{c,\\theta}=\\Omega_{c,\\tau\\theta\\tau^{-1}}$,\n $\\widehat{K}^\\natural_\\theta \\simeq \\widehat{K}^\\natural_{\\tau\\theta\\tau^{-1}}$\n via $k\\mapsto \\tau(k)$ and $\\widehat{T}_{c,\\theta}\\simeq \\widehat{T}_{c,\\tau\\theta\\tau^{-1}}$ via $t\\mapsto \\tau(t)$.\n Accordingly $\\widehat{\\mu}^{\\mathrm{ST}}_\\theta$ and $\\widehat{\\mu}^{\\mathrm{ST}}_{\\theta,0}$ are identified 
with\n $\\widehat{\\mu}^{\\mathrm{ST}}_{\\tau\\theta\\tau^{-1}}$ and $\\widehat{\\mu}^{\\mathrm{ST}}_{\\tau\\theta\\tau^{-1},0}$, respectively.\n\n Fix once and for all a set of representatives ${\\mathscr C}(\\Gamma_1)$ for conjugacy classes in $\\Gamma_1$.\n For $\\theta\\in {\\mathscr C}(\\Gamma_1)$, denote by $[\\theta]$ its conjugacy class. For each finite place $v$\n such that $G$ is unramified over $F_v$, the geometric Frobenius ${\\rm Fr}_v\\in {\\rm Gal}(F_v^{\\mathrm{ur}}\/F_v)$\n gives a well-defined conjugacy class $[{\\rm Fr}_v]$ in $\\Gamma_1$.\n The set of all finite places $v$ of $F$ where $G$ is unramified is partitioned into\n $$\\{{\\mathcal V}_F(\\theta)\\}_{\\theta\\in {\\mathscr C}(\\Gamma_1)}$$ such that\n $v\\in {\\mathcal V}_F(\\theta)$ if and only if $[{\\rm Fr}_v]=[\\theta]$.\n\n\n For each finite place $v$ of $F$,\n the unitary dual of $G(F_v)$ and its Plancherel measure are written as $G^{\\wedge}_v$\n and $\\widehat{\\mu}^{\\mathrm{pl}}_v$.\n Similarly adapt the notation of \\S\\ref{sub:notation-Plancherel} by\n appending the subscript $v$.\n Now fix $\\theta\\in {\\mathscr C}(\\Gamma_1)$ and suppose that $G$ is unramified at $v$ and that\n $v\\in {\\mathcal V}_F(\\theta)$. We choose $\\overline{F}\\hookrightarrow \\overline{F}_v$ such that ${\\rm Fr}_v$ has image $\\theta$ in $\\Gamma_1$\n (rather than some other conjugate). This rigidifies\n the identification in the second map below. 
(If ${\\rm Fr}_v$ maps to $\\tau\\theta\\tau^{-1}$ then the second map is twisted by $\\tau$.)\n \\begin{equation}\\label{e:local-isom-v}G(F_v)^{\\wedge,\\mathrm{ur},{\\rm temp}}\\stackrel{\\mathrm{canonical}}{\\simeq} \\widehat{T}_{c,{\\rm Fr}_v}\/\\Omega_{c,{\\rm Fr}_v}= \\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta}.\\end{equation}\n By abuse of notation let $\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{v}$ (a measure on $G(F_v)^{\\wedge,\\mathrm{ur},{\\rm temp}}$)\n also denote the transported measure\n on $\\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta}$.\n Let $C_v$ denote the constant of Proposition \\ref{p:unr-plan-meas}, which\n we normalize such that $\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{v,0}$ has total volume 1.\n Note that $\\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}$ also has total volume 1.\n\n\n\\begin{prop}\\label{p:lim-of-Plancherel} Fix any $\\theta\\in {\\mathscr C}(\\Gamma_1)$.\nAs $v$ runs over ${\\mathcal V}_F(\\theta)$,\n we have weak convergence $\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{v}\\rightarrow \\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}$ as $v\\rightarrow \\infty$.\n\\end{prop}\n\n\\begin{proof}\n It is enough to show that $\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{v,0}\\rightarrow \\widehat{\\mu}^{\\mathrm{ST}}_{\\theta,0}$ on $\\widehat{T}_{c,\\theta}$ as $v$ tends to $\\infty$\n in ${\\mathcal V}_F(\\theta)$.\n Consider the measure $\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{v,1}:=C_v^{-1}\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{v,0}$.\n It is clear from the formula of Proposition \\ref{p:unr-plan-meas}\n that $\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{v,1}\\rightarrow \\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}$ as $v\\rightarrow\\infty$ in ${\\mathcal V}_F(\\theta)$.\n In particular, the total volume of $\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{v,1}$ tends to 1, hence\n $C_v\\rightarrow 1$ as $v\\rightarrow \\infty$ in ${\\mathcal V}_F(\\theta)$.\n We conclude that $\\widehat{\\mu}^{\\mathrm{pl,ur,temp}}_{v,0}\\rightarrow 
\\widehat{\\mu}^{\\mathrm{ST}}_{\\theta,0}$ as desired.\n\\end{proof}\n\n\\begin{rem}\n The above proposition was already noticed by Sarnak for $G=SL(n)$ in \\cite[\\S4]{Sarnak87}.\n\\end{rem}\n\n\n\\subsection{The generalized Sato-Tate conjecture}\\label{sub:gen-ST-conj}\n\n Let $\\pi$ be a cuspidal\\footnote{If $\\pi$ is not cuspidal then\n the hypothesis is never supposed to be satisfied.} tempered automorphic representation of $G({\\mathbb A}_F)$\n satisfying\n\n\\medskip\\noindent\n\\textbf{Hypothesis.}\nThe conjectural global $L$-parameter $\\varphi_\\pi$ for $\\pi$ has Zariski dense image in ${}^L G_{F_1\/F}$.\n\\medskip\n\n Of course this hypothesis is more philosophical than practical. The global\n Langlands correspondence between\n ($L$-packets of) automorphic representations and global $L$-parameters of $G({\\mathbb A}_F)$\n is far from established. A fundamental problem here is that global $L$-parameters cannot be defined\n unless the conjectural global Langlands group is defined.\n (Some substitutes have been proposed by Arthur in the case of classical groups.\n The basic idea is that a cuspidal automorphic representation of $\\GL_n$\n can be put in place of an irreducible $n$-dimensional representation of the global Langlands group.)\n Nevertheless, the above hypothesis can often be replaced with another condition,\n which should be equivalent but can be stated without reference to conjectural objects.\n For instance, when $\\pi$ corresponds to a Hilbert modular form of weight $\\ge 2$\n at all infinite places, one can use the hypothesis that\n it is not a CM form (i.e. not an automorphic induction from a Hecke character over a CM field).\n\n Let us state a general form of the Sato-Tate conjecture. Let $q_v$ denote the cardinality of the residue field at a finite place $v$ of $F$. 
Define ${\\mathcal V}_F(\\theta,\\pi)^{\\le N}:=\\{v\\in{\\mathcal V}_F(\\theta,\\pi): q_v\\le N\\}$\n for $N\\in {\\mathbb Z}_{>0}$.\n\n\\begin{conj}\\label{c:gen-ST}\n Assume the above hypothesis. For each $\\theta\\in {\\mathscr C}(\\Gamma_1)$,\n let ${\\mathcal V}_F(\\theta,\\pi)$ be the subset of $v\\in {\\mathcal V}_F(\\theta)$\n such that $\\pi_v$ is unramified. Then $\\{\\pi_v\\}_{v\\in {\\mathcal V}_F(\\theta,\\pi)}$ are equidistributed according to $\\widehat{\\mu}^{\\mathrm{ST}}_\\theta$. More precisely\n $$\\frac{1}{|{\\mathcal V}_F(\\theta,\\pi)^{\\le N}|}\n \\sum_{v\\in {\\mathcal V}_F(\\theta,\\pi)^{\\le N}} \\delta_{\\pi_v}\n \\rightarrow \\widehat{\\mu}^{\\mathrm{ST}}_\\theta\\quad\\mbox{as}\\quad N\\rightarrow\\infty.$$\n\n\\end{conj}\n\n The above conjecture is deemed plausible in that it\n is essentially a consequence of the Langlands functoriality conjecture\n at least when $G$ is (an inner form of) a split group.\n Namely if we knew that the $L$-function $L(s,\\pi,\\rho)$ for any irreducible representation ${}^L G\\rightarrow \\GL_d$\n were a cuspidal automorphic $L$-function for $\\GL_d$ then\n the desired equidistribution is implied by Theorem 1 of \\cite[App A.2]{Ser68l}.\n\n\\begin{rem}\n In general when the above hypothesis is dropped, it is likely that\n $\\pi$ comes from an automorphic representation on a smaller group than $G$.\n (If $\\varphi_\\pi$ factors through an injective $L$-morphism ${}^L H_{F_1\/F}\\rightarrow {}^L G_{F_1\/F}$\n then the Langlands functoriality predicts that $\\pi$ arises from an automorphic\n representation of $H({\\mathbb A}_F)$.)\n Suppose that the Zariski closure of ${\\rm Im\\,}(\\varphi_\\pi)$ in ${}^L G_{F_1\/F}$ is\n isomorphic to ${}^L H_{F_1\/F}$ for some connected reductive group $H$ over $F$.\n (In general the Zariski closure may consist of finitely many copies of ${}^L H_{F_1\/F}$.)\n Then $\\{\\pi_v\\}_{v\\in {\\mathcal V}_F(\\theta,\\pi)}$ should be equidistributed according to\n the Sato-Tate measure belonging to $H$ in order to be consistent with the functoriality 
conjecture.\n\\end{rem}\n\n One can also formulate a version of the conjecture where $v$ runs over the set of \\emph{all} finite places where $\\pi_v$ are unramified by considering conjugacy classes in ${}^L G_{F_1\/F}$ rather than those in $\\widehat{G}\\rtimes \\theta$ for a fixed $\\theta$. For this let $\\widehat{K}^\\natural$ denote the quotient of $\\widehat{K}$ by the equivalence relation coming from the conjugation by $\\widehat{K}\\rtimes \\Gamma_1$. Since $\\widehat{K}^\\natural$ is isomorphic to a suitable quotient of $\\widehat{T}_c$, the Haar measure on $\\widehat{K}$ gives rise to a measure, to be denoted $\\widehat{\\mu}^{\\mathrm{ST}}$, on the quotient of $\\widehat{T}_c$. Let ${\\mathcal V}_F(\\pi)^{\\le N}$ (where $N\\in {\\mathbb Z}_{>0}$) denote the set of finite places of $F$ such that $\\pi_v$ are unramified and $q_v\\le N$. By writing $v\\rightarrow\\infty$ we mean that $q_v$ tends to infinity.\n\n\\begin{conj}\\label{c:gen-ST-2}\n Assume the above hypothesis. Then $\\{\\pi_v\\}$\n are equidistributed according to $\\widehat{\\mu}^{\\mathrm{ST}}$ as $v\\rightarrow\\infty$. Namely\n $$\\frac{1}{|{\\mathcal V}_F(\\pi)^{\\le N}|}\n \\sum_{v\\in {\\mathcal V}_F(\\pi)^{\\le N}} \\delta_{\\pi_v}\n \\rightarrow \\widehat{\\mu}^{\\mathrm{ST}}\\quad\\mbox{as}\\quad N\\rightarrow\\infty.$$\n\\end{conj}\n\n\\begin{rem}\n Unlike Conjecture \\ref{c:gen-ST} it is unnecessary to choose embeddings $\\overline{F}\\hookrightarrow \\overline{F}_v$ to rigidify \\eqref{e:local-isom-v} since the ambiguity in the rigidification is absorbed in the conjugacy classes in ${}^L G_{F_1\/F}$. 
The formulation of Conjecture \\ref{c:gen-ST-2} might be more suitable than the previous one in the motivic setting where we would not want to fix $\\overline{F}\\hookrightarrow \\overline{F}_v$.\n The interested reader may compare Conjecture \\ref{c:gen-ST-2}\n with the motivic Sato-Tate conjecture of \\cite[13.5]{Ser94}.\n\\end{rem}\n\n The next subsection will discuss the analogue of Conjecture \\ref{c:gen-ST} for automorphic families.\n Conjecture \\ref{c:gen-ST-2} will not be considered any more in our paper. It is enough to mention that the analogue of the latter conjecture for families of algebraic varieties makes sense and appears to be interesting.\n\n\\subsection{The Sato-Tate conjecture for families}\\label{sub:ST-families}\n\n The Sato-Tate conjecture has been proved for Hilbert modular forms in\n (\\cite{BLGHT11}, \\cite{BLGG11}). Analogous equidistribution\ntheorems in the function field setting are due to Deligne and Katz. (See \\cite[Thm 9.2.6]{KS-RFM}\nfor instance.)\n Despite these fantastic developments, we have little unconditional theoretical evidence for\n the Sato-Tate conjecture for general reductive groups over number fields.\n On the other hand, it has been noticed\n that the analogue of the\n Sato-Tate conjecture for families of automorphic representations is more\n amenable to attack. Indeed there was some success in the case of holomorphic modular forms and Maass forms\n (\\cite[Thm 2]{CDF97} \\cites{Royer:dimension-rang,ILS00,Serre:pl}).\n The conjecture has the following coarse form, which should be thought of\n as a guiding principle rather than a rigorous conjecture. Compare with some precise results\n in \\S\\ref{sub:ST-theorem}.\n\n\\begin{conj}\\label{c:ST-families}\n Let $\\{\\mathcal{F}_k\\}_{k\\ge1}$ be a ``general'' sequence of\n finite families of automorphic representations of $G({\\mathbb A}_F)$ such that\n $|\\mathcal{F}_k|\\rightarrow \\infty$ as $k\\rightarrow \\infty$. 
Then $\\{\\pi_v : \\pi\\in \\mathcal{F}_k\\}$\n are equidistributed according to $\\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}$ as $k$ and $v$ tend to infinity\n subject to the conditions that $v\\in {\\mathcal V}_F(\\theta)$ and that\n all members of $\\mathcal{F}_k$ are unramified at $v$.\n\n\\end{conj}\n\n We are not going to make precise what ``general'' means, but merely remark\n that it should be the analogue of the condition that\n the hypothesis of \\S\\ref{sub:gen-ST-conj} holds for the ``generic fiber''\n of the family when the family has a geometric meaning (see also~\\cite{Sarn:family}). In practice one would verify\n the conjecture for many interesting families while simply ignoring the word ``general''.\n In some sense any family may be considered general if the conjecture holds for that family. Some relation between $k$ and $v$ must hold when taking the limit: $k$ needs to grow fast enough compared to $v$ (or more precisely $\\abs{\\mathcal{F}_k}$ needs to grow fast enough compared to $q_v$).\n\n It is noteworthy that the unpleasant hypothesis of \\S\\ref{sub:gen-ST-conj}\n can be avoided for families.\n Also note that the temperedness\n assumption is often unnecessary due to the fact that the Plancherel measure is supported on the\n tempered spectrum. This is an indication that most representations in a family are globally tempered, which we will return to in a subsequent work.\n\n Later we will\n verify the conjecture for\n many families in \\S\\ref{sub:ST-theorem}\n as a corollary to the automorphic\n Plancherel theorem proved earlier in \\S\\ref{s:aut-Plan-theorem}.\n Our families arise as the sets of all automorphic representations with increasing level or weight, possibly with prescribed local conditions\n at finitely many fixed places.\n\n\n\n\n\n\n\n\\section{Background materials}\n\\label{s:bg}\n\n This section collects background materials in the local and global contexts. 
Subsections \\ref{sub:orb-int-const-term} and \\ref{sub:lem-ram} are concerned with $p$-adic groups while \\S\\ref{sub:st-disc}, \\S\\ref{sub:EP} and \\S\\ref{sec:b:fs} are concerned with real and complex Lie groups. The rest is about global reductive groups.\n\n\\subsection{Orbital integrals and constant terms}\\label{sub:orb-int-const-term}\n We introduce some notation in the $p$-adic context.\n\\begin{itemize}\n\\item $F$ is a finite extension of ${\\mathbb Q}_p$ with integer ring $\\mathcal{O}$.\n\\item $G$ is a connected reductive group over $F$.\n\\item $A$ is a maximal $F$-split torus of $G$, and put $M_0:=Z_G(A)$.\n\\item $K$ is a maximal compact subgroup of $G$\ncorresponding to a special point in the apartment for $A$.\n\\item $P=MN$ is a parabolic subgroup of $G$ over $F$, with $M$ and $N$ its Levi subgroup and\nunipotent radical, such that $M\\supset M_0$.\n\n\\item $\\gamma\\in G(F)$ is a semisimple element. (The case of a non-semisimple element\n is not needed in this paper.)\n\\item $I_\\gamma$ is the neutral component\n of the centralizer of $\\gamma$ in $G$. Then $I_\\gamma$ is a connected reductive group over $F$.\n\\item $\\mu_G$ (resp. $\\mu_{I_\\gamma}$) is a Haar measure on $G(F)$ (resp. $I_\\gamma(F)$).\n\\item $\\frac{\\mu_G}{\\mu_{I_\\gamma}}$ is\n the quotient measure on $I_\\gamma(F)\\backslash G(F)$ induced by $\\mu_G$ and $\\mu_{I_\\gamma}$.\n\\item $\\phi\\in C^\\infty_c(G(F))$.\n\\end{itemize}\n Define the orbital integral\n $$O^{G(F)}_\\gamma(\\phi,\\mu_G,\\mu_{I_\\gamma}):=\n \\int_{I_\\gamma(F)\\backslash G(F)} \\phi(x^{-1}\\gamma x) \\frac{\\mu_G}{\\mu_{I_\\gamma}}.$$\n When the context is clear, we use $O_\\gamma(\\phi)$ as a shorthand notation.\n\n We recall the theory of constant terms (cf. 
\\cite[p.236]{vDi72}).\n Choose Haar measures $\\mu_K$, $\\mu_M$, $\\mu_N$,\n on $K$, $M(F)$, $N(F)$, respectively, such that $\\mu_G=\\mu_K\\mu_M\\mu_N$ holds\n with respect to $G(F)=KM(F)N(F)$.\n Define the (normalized) constant term $\\phi_M\\in C^\\infty_c(M(F))$ by\n \\begin{equation}\\label{e:const-term-formula}\\phi_{M}(m)=\\delta_P^{1\/2}(m)\\int_{N(F)}\\int_{K} \\phi(kmnk^{-1})\\mu_K \\mu_N.\\end{equation}\n Although the definition of $\\phi_M$ involves not only $M$ but $P$, the following lemma\n shows that the orbital integrals of $\\phi_M$ depend only on $M$ by the density of\n regular semisimple orbital integrals, justifying our notation.\n\\begin{lem}\\label{l:orb-int-const-term}\n For all $(G,M)$-regular semisimple $\\gamma\\in M(F)$,\n $$O_\\gamma(\\phi_{M},\\mu_{M},\\mu_{I_\\gamma})=|D^G_M(\\gamma)|^{1\/2}\n O_\\gamma(\\phi,\\mu_{G},\\mu_{I_\\gamma}).$$\n\\end{lem}\n\\begin{proof}\n \\cite[Lem 9]{vDi72}. (Although the lemma is stated for regular elements $\\gamma\\in G$,\n it suffices to require $\\gamma$ to be $(G,M)$-regular. See Lemma 8 of loc. cit.)\n\\end{proof}\n It is standard that\n the definition and facts we have recollected above extend to the adelic case. (Use \\cite[\\S\\S7-8]{Kot86},\n for instance). 
We will skip rewriting the analogous definition in the adelic setting.\n\n Now we restrict ourselves to the local unramified case.\n Suppose that $G$ is unramified over $F$.\n Let $B\\subset P \\subset G$ be Borel and parabolic subgroups defined over $F$.\n Write $B=TU$ and $P=MN$ where $T$ and $M$ are Levi subgroups such that $T\\subset M$ and $U$ and $N$ are unipotent radicals.\n\n\\begin{lem}\\label{l:const-term-on-unram-Hecke} Let $\\phi\\in {\\mathcal{H}}^{\\mathrm{ur}}(G)$.\nThen ${\\mathcal{S}}^G_M(\\phi)=\\phi_M$, in particular\n${\\mathcal{S}}^G(\\phi)= {\\mathcal{S}}^M({\\mathcal{S}}^G_M \\phi)=\\phi_T$.\n\\end{lem}\n\n\\begin{proof}\n Straightforward from \\eqref{e:Satake-formula} and \\eqref{e:const-term-formula}.\n\\end{proof}\n\n\n\n\n\n\n\n\n\\subsection{Gross's motives}\\label{sub:Gross-motive}\n\n Now let $F$ be a finite extension of ${\\mathbb Q}$ (although Gross's theory applies more generally).\n Let $G$ be a connected reductive group over $F$ and consider its quasi-split inner form $G^*$.\n Let $T^*$ be the centralizer of a maximal $F$-split torus of $G^*$. Denote by $\\Omega$ the Weyl group for $(G^*,T^*)$ over $\\overline{F}$.\n Set $\\Gamma={\\rm Gal}(\\overline{F}\/F)$.\n Gross (\\cite{Gro97}) attaches to $G$ an Artin-Tate motive\n $${\\rm Mot}_G=\\bigoplus_{d\\ge 1} {\\rm Mot}_{G,d}(1-d)$$ with coefficients in ${\\mathbb Q}$.\n Here $(1-d)$ denotes the Tate twist. 
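To fix ideas, here is the standard example of split groups (well known from Gross's article; recorded without proof): for $F$-split $G$ the Galois action on ${\rm Mot}_G$ is trivial, and $\dim {\rm Mot}_{G,d}$ is the number of fundamental degrees of the Weyl group invariants equal to $d$.

```latex
% Example (split G; standard, stated without proof): the fundamental degree of
% SL_2 is 2, and those of Sp_{2n} are 2, 4, ..., 2n, so
{\rm Mot}_{{\rm SL}_2}={\mathbb Q}(-1), \qquad
{\rm Mot}_{{\rm Sp}_{2n}}=\bigoplus_{k=1}^{n} {\mathbb Q}(1-2k).
% Sanity checks for Sp_{2n}: \sum_d \dim Mot_{G,d} = n = rk(G), and
% \sum_d (2d-1)\dim Mot_{G,d} = \sum_{k=1}^{n}(4k-1) = 2n^2+n = \dim Sp_{2n}.
```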
The Artin motive ${\\rm Mot}_{G,d}$\n (denoted $V_d$ by Gross) may be thought of as a $\\Gamma$-representation on a ${\\mathbb Q}$-vector space whose dimension is $\\dim {\\rm Mot}_{G,d}$.\n Define $$L({\\rm Mot}_G):= L(0,{\\rm Mot}_G)$$\n to be the Artin $L$-value of $L(s,{\\rm Mot}_G)$ at $s=0$.\n We recall some properties of ${\\rm Mot}_G$ from Gross's article.\n\n\\begin{prop}\\label{p:Gross-motives}\n \\begin{enumerate}\n\\item ${\\rm Mot}_{G,d}$ is self-dual for each $d\\ge 1$.\n\\item $\\sum_{d\\ge 1} \\dim {\\rm Mot}_{G,d} = r_G={\\rm rk} G$.\n\\item $\\sum_{d\\ge 1} (2d-1)\\dim {\\rm Mot}_{G,d} = \\dim G$.\n\\item $|\\Omega|=\\prod_{d\\ge 1} d^{\\dim {\\rm Mot}_{G,d}}$.\n\\item If $T^*$ splits over a finite extension $E$ of $F$ then\nthe $\\Gamma$-action on ${\\rm Mot}_G$ factors through ${\\rm Gal}(E\/F)$.\n\\end{enumerate}\n\\end{prop}\n\n The Artin conductor ${\\mathfrak f}({\\rm Mot}_{G,d})$ is defined as follows.\n Let $F'$ be the fixed field of the kernel of the Artin representation\n ${\\rm Gal}(\\overline{F}\/F)\\rightarrow \\GL(V_d)$\n associated to ${\\rm Mot}_{G,d}$.\n For each finite place $v$ of $F$, let $w$ be any place of $F'$ above $v$.\n Let $\\Gamma(v)_i:={\\rm Gal}(F'_w\/F_v)_i$ ($i\\ge 0$) denote the $i$-th ramification subgroups. Set\n \\begin{equation}\\label{e:expo-conductor}\n f(G_v,d):=\\sum_{i\\in {\\mathbb Z}_{\\ge 0}} \\frac{|\\Gamma(v)_i|}{|\\Gamma(v)_0|}\n \\dim (V_d\/V_d^{\\Gamma(v)_i}),\\end{equation}\n which is an integer independent of the choice of $w$. Write ${\\mathfrak p}_v$ for the prime ideal of $\\mathcal{O}_F$\n corresponding to $v$. 
If $v$ is unramified in $E$ then $f(G_v,d)=0$.\n Thus the product makes sense in the following definition.\n $${\\mathfrak f}({\\rm Mot}_{G,d}):=\\prod_{v\\nmid \\infty} {\\mathfrak p}_v^{f(G_v,d)}$$\n Let $E$ be the splitting field of $T^*$ (which is an extension of $F$)\nand set $s^{\\mathrm{sp}}_G:=[E:F]$.\n\n\n\\begin{lem}\\label{l:bounding-conductor}\n For every finite place $v$ of $F$,\n $$f(G_v,d)\\le (\\dim {\\rm Mot}_{G,d} )\\cdot (s^{\\mathrm{sp}}_G(1+e_{F_v\/{\\mathbb Q}_p} \\log_p s^{\\mathrm{sp}}_G)-1).$$\n\\end{lem}\n\n\\begin{proof}\n Let $F'$, $w$ and $V_d$ be as in the preceding paragraph. Then $F\\subset F'\\subset E$.\n Set $s_v:=[F'_w:F_v]$ so that $s_v\\le s^{\\mathrm{sp}}_G$.\n The case $s_v=1$ is obvious (in which case $f(G_v,d)=0$), so we may assume $s_v\\ge 2$.\n From \\eqref{e:expo-conductor} and Corollary \\ref{c:vanishing-ram-subgroup} below,\n$$f(G_v,d)\\le \\sum_{i\\ge 0} \\dim (V_d\/V_d^{\\Gamma(v)_i})\\le\n\\sum_{i=0}^{s_v(1+e_{F_v\/{\\mathbb Q}_p}\\log_p s_v)-2} \\dim V_d = (\\dim V_d)(s_v(1+e_{F_v\/{\\mathbb Q}_p}\\log_p s_v)-1).$$\n\\end{proof}\n\n Recall that $w_G=|\\Omega|$ is the cardinality of the absolute Weyl group.\n Let $s_G$ be the degree of the smallest extension of $F$ over which $G$ becomes split.\n The following useful lemma implies in particular that $s^{\\mathrm{sp}}_G\\le w_Gs_G$.\n\n\\begin{lem}(\\cite[Lem 2.2]{JKZ})\\label{l:torus-splitting}\n For any maximal torus $T$ of $G$ defined over $F$, there exists a finite\nGalois extension $E$ of $F$ such that $[E:F]\\le w_Gs_G$ and $T$ splits over $E$.\n\\end{lem}\n\n\\subsection{Lemmas on ramification}\\label{sub:lem-ram}\n\n This subsection is meant to provide an ingredient of proof\n (namely Corollary \\ref{c:vanishing-ram-subgroup}) for Lemma \\ref{l:bounding-conductor}.\n Fix a prime $p$.\n Let $E$ and $F$ be finite extensions of ${\\mathbb Q}_p$ with uniformizers $\\varpi_E$ and\n $\\varpi_F$, respectively. 
Normalize valuations $v_E:E^\\times\\rightarrow {\\mathbb Z}$ and\n $v_F:F^\\times \\rightarrow {\\mathbb Z}$ such that $v_E(\\varpi_E)=v_F(\\varpi_F)=1$.\n Write $e_{E\/F}\\in {\\mathbb Z}_{\\ge1}$ for the ramification index\n and ${\\mathfrak D}_{E\/F}$ for the different.\n For a nonzero principal ideal ${\\mathfrak a}$ of $\\mathcal{O}_E$,\n we define $v_E({\\mathfrak a})$ to be $v_E(a)$ for any generator $a$ of ${\\mathfrak a}$. This is well defined.\n\n\\begin{lem}\\label{l:bound-diff-wild}\n Let $E$ be a totally ramified Galois extension of $F$ with $[E:F]=p^n$ for $n\\ge 0$.\n Then $$v_E({\\mathfrak D}_{E\/F})\\le p^n(1+n\\cdot e_{F\/{\\mathbb Q}_p})-1.$$\n\\end{lem}\n\n\\remark In fact the inequality is sharp: there are totally ramified extensions $E\/F$ for which equality holds, as shown by Ore.\n\n\\begin{proof}\n The lemma is trivial when $n=0$. Next assume $n=1$ but allow $E\/F$ to be a non-Galois extension.\n Let $f(x)=\\sum_{i=0}^p a_i x^i\\in \\mathcal{O}_F[x]$ (with $a_p=1$ and $v_F(a_i)\\ge 1$ for $i<p$).\n\\end{proof}\n\n\\begin{cor}\\label{c:vanishing-ram-subgroup}\n Let $E\/F$ be a Galois extension of finite extensions of ${\\mathbb Q}_p$ with $s:=[E:F]\\ge 2$.\n Then ${\\rm Gal}(E\/F)_i=1$ for $i\\ge s(1+e_{F\/{\\mathbb Q}_p}\\log_p s)-1$.\n\\end{cor}\n\n\\subsection{Stable discrete series characters}\\label{sub:st-disc}\n\n\\begin{itemize}\n\\item $m(\\xi):=\\min_{\\alpha\\in \\Phi^+} \\langle \\lambda_\\xi+\\rho,\\alpha\\rangle>0$.\n\\item $\\Pi_{\\rm disc}(\\xi)$ is the set of irreducible discrete series representations\nof $G({\\mathbb R})$ with the same infinitesimal character and the same\ncentral character as $\\xi$. (This is an $L$-packet for $G({\\mathbb R})$.)\n\\item $D_\\infty^G(\\gamma):=\\prod_{\\alpha} |1-\\alpha(\\gamma)|$ for $\\gamma\\in T({\\mathbb R})$, where\n$\\alpha$ runs over elements of $\\Phi$ such that $\\alpha(\\gamma)\\neq 1$.\n(If $\\gamma$ is in the center of $G({\\mathbb R})$, $D_\\infty^G(\\gamma)=1$.)\n\\end{itemize}\n\n If $M$ is a Levi subgroup of $G$ over ${\\mathbb C}$ containing $T$,\n the following are defined in the obvious manner as above:\n $\\Phi^+_M$, $\\Phi_M$, $\\Omega_M$, $\\rho_M$, $D_\\infty^M$. Define $\\Omega^M:=\\{\\omega\\in \\Omega: \\omega^{-1}\\Phi^+_M\\subset \\Phi^+\\}$, which is a set of representatives for $\\Omega\/\\Omega_M$.\n For each \\emph{regular} $\\gamma\\in T({\\mathbb R})$,\n let us define (cf. 
\\cite[(4.4)]{Art89})\n $$\\Phi^G_M(\\gamma,\\xi):=(-1)^{q(G_\\infty)}\n D_\\infty^G(\\gamma)^{1\/2} D_\\infty^M(\\gamma)^{-1\/2}\\sum_{\\pi\\in \\Pi_{\\rm disc}(\\xi)}\n \\Theta_\\pi(\\gamma)$$\n where $\\Theta_\\pi$ is the character function of $\\pi$.\n It is known that the function $\\Phi^G_M(\\gamma,\\xi)$\n continuously extends to an $\\Omega_M$-invariant function on $T({\\mathbb R})$,\n thus also to a function on $M({\\mathbb R})$ which is invariant under $M({\\mathbb R})$-conjugation and\n supported on elliptic elements\n (\\cite[Lem 4.2]{Art89}, cf. \\cite[Lem 4.1]{GKM97}).\n When $M=G$, we simply have $\\Phi^G_M(\\gamma,\\xi)= {\\rm tr}\\, \\xi(\\gamma)$.\n\n We would like to have an upper bound for $|\\Phi^G_M(\\gamma,\\xi)|$\n that we will need in \\S\\ref{sub:weight-varies}.\n This is a refinement of \\cite[Lem 4.8]{Shi-Plan}.\n\n\\begin{lem}\\label{l:dim-and-trace}\n\\begin{enumerate}[(i)]\n\\item $\\dim \\xi=\\prod_{\\alpha\\in \\Phi^+} \\frac{\\langle \\alpha,\\lambda_\\xi+\\rho\\rangle}{\\langle \\alpha,\\rho\\rangle}$.\n\\item There exists a constant $c>0$ independent of $\\xi$ such that for every elliptic $\\gamma\\in G({\\mathbb R})$\nand every $\\xi$,\n $$\\frac{|{\\rm tr}\\,\\xi(\\gamma)|}{\\dim \\xi}\n \\le c \\frac{D_\\infty^G(\\gamma)^{-1\/2}}{ m(\\xi)^{|\\Phi^+|-|\\Phi^+_{I_\\gamma}|}} .$$\n\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\nPart (i) is the standard Weyl dimension formula. 
Let us prove (ii).\nThe formula right above the corollary 1.12 in \\cite{CC09} implies that\n $$|{\\rm tr}\\,\\xi(\\gamma)|\\le D_\\infty^G(\\gamma)^{-1\/2} \\times\n\\sum_{\\omega\\in \\Omega^{I_\\gamma}} \\left(\\prod_{\\alpha\\in \\Phi^+_{I_\\gamma}}\n\\frac{ \\langle\\omega^{-1}\\alpha,\\lambda_\\xi+\\rho\\rangle}{\\langle \\alpha,\\rho_{I_\\gamma}\\rangle}\\right).$$\nNote that their $M$ is our $I_\\gamma$ and that\n$|\\alpha(\\gamma)|=1$ for all $\\alpha\\in \\Phi$ and all elliptic $\\gamma\\in G({\\mathbb R})$.\nHence by (i),\n$$\\frac{|{\\rm tr}\\,\\xi(\\gamma)|}{\\dim \\xi}\n \\le D_\\infty^G(\\gamma)^{-1\/2}\\sum_{\\omega\\in \\Omega^{I_\\gamma}}\n \\frac {\\prod_{\\alpha\\in \\Phi^+} \\langle \\alpha,\\rho\\rangle} {\\prod_{\\alpha\\in \\Phi^+_{I_\\gamma}} \\langle \\alpha,\\rho_{I_\\gamma}\\rangle}\n \\left(\\prod_{\\alpha\\in \\Phi^+\\backslash \\omega^{-1}\\Phi^+_{I_\\gamma}}\n \\langle \\lambda_\\xi+\\rho,\\alpha\\rangle \\right)^{-1}$$\n$$ \\le D_\\infty^G(\\gamma)^{-1\/2}|\\Omega^{I_\\gamma}| \\frac{\\prod_{\\alpha\\in \\Phi^+} \\langle \\alpha,\\rho\\rangle}{\\prod_{\\alpha\\in \\Phi^+_{I_\\gamma}} \\langle \\alpha,\\rho_{I_\\gamma}\\rangle}\n m(\\xi)^{-(|\\Phi^+|-|\\Phi^+_{I_\\gamma}|)}.$$\n\\end{proof}\n\n\n\\begin{lem}\\label{l:bound-for-st-ds-char}\n Assume that $M$ is a Levi subgroup of $G$ over ${\\mathbb R}$ containing an elliptic maximal torus.\nThere exists a constant $c>0$ independent of $\\xi$ such that for every elliptic\n$\\gamma\\in M({\\mathbb R})$,\n $$\\frac{|\\Phi^G_M(\\gamma,\\xi)|}{\\dim \\xi}\n \\le c \\frac{D_\\infty^M(\\gamma)^{-1\/2}}{ m(\\xi)^{|\\Phi^+|-|\\Phi^+_{I^M_\\gamma}|}} .$$\n\n\n\\end{lem}\n\n\\begin{proof}\n As the case $M=G$ is already proved by Lemma \\ref{l:dim-and-trace}.(ii), we assume that $M\\subsetneq G$.\n Fix an elliptic maximal torus $T\\subset M$. 
Since every elliptic element has a conjugate\n in $T({\\mathbb R})$ and both sides of the inequality\n are conjugate-invariant,\n it is enough to verify the lemma for $\\gamma\\in T({\\mathbb R})$.\n In this proof we borrow some notation and facts\n from \\cite[pp.494-498]{GKM97} as well as \\cite[pp.272-274]{Art89}.\n For the purpose of proving Lemma \\ref{l:bound-for-st-ds-char}, we may restrict to\n $\\gamma\\in \\Gamma^+$, corresponding to a closed chamber for the root system of $T({\\mathbb R})$ in $G({\\mathbb R})$.\n (See page 497 of \\cite{GKM97} for the precise definition.)\n The proof of \\cite[Lem 4.1]{GKM97} shows that\n $$\\Phi^G_M(\\gamma,\\xi)=\\sum_{\\omega\\in \\Omega^{M}} c(\\omega,\\xi)\\cdot {\\rm tr}\\, \\xi^M_\\omega(\\gamma)$$\n where $\\xi^M_\\omega$ is the irreducible representation of $M({\\mathbb R})$ of highest weight\n $\\omega(\\lambda_\\xi+\\rho)-\\rho_M$. We claim that there is a constant $c_1>0$ independent of $\\xi$\n such that $$|c(\\omega,\\xi)|\\le c_1$$ for all $\\omega$ and $\\xi$.\n The coefficients $ c(\\omega,\\xi)$ can be computed by rewriting the right-hand\n side of \\cite[(4.8)]{Art89}\n as a linear combination of ${\\rm tr}\\, \\xi^M_\\omega(\\gamma)$ using the Weyl character formula.\n In order to verify the claim, it suffices to point out that $\\overline{c}(Q^+_{ys\\lambda},R^+_H)$\n in Arthur's (4.8) takes values in a finite set which is independent of $\\xi$ (or\n $\\tau$ in Arthur's notation). 
This is obvious: as $Q^+_{ys\\lambda}\\subset\n \\Phi^\\vee$ and $R^+_H\\subset \\Phi$, there are finitely many possibilities for\n $Q^+_{ys\\lambda}$ and $R^+_H$.\n\n\n Now by Lemma \\ref{l:dim-and-trace}.(i),\n $$\\frac{\\dim \\xi^M_\\omega}{\\dim \\xi}= \\frac{\\prod_{\\alpha\\in \\Phi^+} \\langle \\alpha,\\rho \\rangle}\n {\\prod_{\\alpha\\in \\Phi_M^+} \\langle \\alpha,\\rho_M \\rangle} \\prod_{\\alpha\\in\n \\Phi^+\\backslash \\Phi^+_M} \\langle \\alpha,\\lambda_\\xi+\\rho\\rangle^{-1}\n \\le c_2 m(\\xi)^{-(|\\Phi^+|-|\\Phi^+_M|)}$$\n with $c_2=(\\prod_{\\alpha\\in \\Phi^+} \\langle \\alpha,\\rho \\rangle)\n (\\prod_{\\alpha\\in \\Phi_M^+} \\langle \\alpha,\\rho_M \\rangle)^{-1}>0$.\n According to Lemma \\ref{l:dim-and-trace}.(ii),\n there exists a constant $c_3>0$ such that\n $$\\frac{|{\\rm tr}\\,\\xi^M_\\omega(\\gamma)|}{\\dim \\xi^M_\\omega}\n \\le c_3 \\frac{D_\\infty^M(\\gamma)^{-1\/2}}{ m(\\xi)^{|\\Phi^+_M|-|\\Phi^+_{I^M_\\gamma}|}} .$$\n To conclude the proof, multiply the last two formulas.\n\n\\end{proof}\n\n\\subsection{Euler-Poincar\\'{e} functions}\\label{sub:EP}\n\n We continue to use the notation of \\S\\ref{sub:st-disc}.\n Let $\\overline{\\mu}^{{\\rm EP}}_\\infty$ denote the Euler-Poincar\\'{e} measure on $G({\\mathbb R})\/A_{G,\\infty}$\n (so that its induced measure on the compact inner form has volume 1).\nThere exists a unique Haar measure $\\mu^{{\\rm EP}}_\\infty$ on $G({\\mathbb R})$\nwhich is compatible with $\\overline{\\mu}^{{\\rm EP}}_\\infty$ and the standard Haar measure on $A_{G,\\infty}$.\nWrite $\\omega_{\\xi}$ for the central character of $\\xi$ on $A_{G,\\infty}$.\n Let $\\Pi(\\omega_{\\xi}^{-1})$ denote the set of irreducible admissible representations of $G({\\mathbb R})$\n whose central characters on $A_{G,\\infty}$ are $\\omega_{\\xi}^{-1}$.\n For $\\pi \\in \\Pi(\\omega_{\\xi}^{-1})$,\n define\n $$\\chi_{{\\rm EP}}(\\pi\\otimes \\xi):=\\sum_{i\\ge 0} (-1)^i \\dim H^i({\\rm Lie}\\, G({\\mathbb R}),K'_\\infty,\n \\pi 
\\otimes \\xi).$$\n Clozel and Delorme (\\cite{CD90}) constructed a bi-$K_\\infty$-finite\n function $\\phi_\\xi\\in C^\\infty(G({\\mathbb R}))$ which transforms under $A_{G,\\infty}$ by $\\omega_{\\xi}$ and is compactly supported modulo $A_{G,\\infty}$,\n such that\n $$\\forall \\pi \\in \\Pi(\\omega_{\\xi}^{-1}),\\quad {\\rm tr}\\, \\pi(\\phi_\\xi,\\mu^{{\\rm EP}}_\\infty)=\\chi_{{\\rm EP}}(\\pi\\otimes\\xi).$$\n Let $q(G)=\\frac{1}{2}\\dim_{{\\mathbb R}}G({\\mathbb R})\/K'_{\\infty}\\in {\\mathbb Z}$. The following are well-known:\n\\begin{itemize}\n\\item $\\chi_{{\\rm EP}}(\\pi\\otimes\\xi)=0$ unless $\\pi\\in \\Pi(\\omega_{\\xi}^{-1})$ has the same infinitesimal character as $\\xi^\\vee$.\n\\item If the highest weight of $\\xi$ is regular then $\\chi_{{\\rm EP}}(\\pi\\otimes \\xi)\\neq 0$\nif and only if $\\pi\\in \\Pi_{{\\rm disc}}(\\xi^\\vee)$.\n\\item If $\\pi \\in \\Pi(\\omega_{\\xi}^{-1})$ is a discrete series and $\\chi_{{\\rm EP}}(\\pi\\otimes \\xi)\\neq 0$\nthen $\\pi\\in \\Pi_{{\\rm disc}}(\\xi^\\vee)$ and $\\chi_{{\\rm EP}}(\\pi\\otimes \\xi)=(-1)^{q(G)}$.\nMore precisely, $\\dim H^i({\\rm Lie}\\, G({\\mathbb R}),K'_\\infty,\n \\pi \\otimes \\xi)$ equals 1 if $i=q(G)$ and 0 if not.\n\n\\end{itemize}\n\n\n\n\\subsection{Canonical measures and Tamagawa measures}\\label{sub:can-measure}\n\n We return to the global setting so that $F$ and\n $G$ are as in \\S\\ref{sub:Gross-motive}. Let $G_\\infty:=(\\Res_{F\/{\\mathbb Q}}G)\\times_{\\mathbb Q} {\\mathbb R}$, to which\n the contents of \\S\\ref{sub:st-disc} and \\S\\ref{sub:EP} apply. 
In particular we have a measure $\\mu^{{\\rm EP}}_\\infty$ on $G_\\infty({\\mathbb R})$.\n For each finite place $v$ of $F$,\n define $\\mu^{\\mathrm{can}}_v:=\\Lambda({\\rm Mot}_{G_{v}}^\\vee(1))\\cdot |\\omega_{G_v}|$\n in the notation of \\cite{Gro97} where\n $|\\omega_{G_v}|$ is the ``canonical'' Haar measure on $G(F_v)$ as in \\S11 of that article.\n When $G$ is unramified over $F_v$, the measure\n $\\mu^{\\mathrm{can}}_v$ assigns volume 1 to a hyperspecial subgroup of $G(F_v)$.\n In particular, $$\\mu^{\\mathrm{can},{\\rm EP}}:=\\prod_{v\\nmid \\infty} \\mu^{\\mathrm{can}}_v \\times \\mu^{{\\rm EP}}_\\infty$$ is a well-defined measure\n on $G({\\mathbb A}_F)$.\n\n\n\n Let $\\overline{\\mu}^{{\\rm Tama}}$ denote the Tamagawa measure\n on $G(F)\\backslash G({\\mathbb A}_F)\/A_{G,\\infty}$, so that its volume is the Tamagawa number (cf. \\cite[p.629]{Kot88})\n\\begin{equation} \\label{e:Tamagawa}\\tau(G):=\n \\overline{\\mu}^{{\\rm Tama}}(G(F)\\backslash G({\\mathbb A}_F)\/A_{G,\\infty})=|\\pi_0(Z(\\widehat{G})^{{\\rm Gal}(\\overline{F}\/F)})|\\cdot\n |\\ker^1(F,Z(\\widehat{G}))|^{-1}.\\end{equation}\n The Tamagawa measure $\\mu^{{\\rm Tama}}$ on $G({\\mathbb A}_F)$ of \\cite{Gro97} is compatible with\n $\\overline{\\mu}^{{\\rm Tama}}$ if $G(F)$ and $A_{G,\\infty}$ are equipped with\n the point-counting measure and the Lebesgue measure, respectively.\n The ratio of the two Haar measures on $G({\\mathbb A}_F)$ is computed as:\n\n\\begin{prop}(\\cite[(10.5)]{Gro97})\n $$\\frac{\\mu^{\\mathrm{can},{\\rm EP}}}{\\mu^{{\\rm Tama}}}\n = \\frac{ L({\\rm Mot}_G)\\cdot |\\Omega|\/|\\Omega_c|}{e(G_\\infty) 2^{{\\rm rk}_{{\\mathbb R}} G_\\infty} }.$$\n\\end{prop}\n\n\n\n\n The following notion will be useful in that the Levi subgroups contributing to the trace formula\n in \\S\\ref{s:aut-Plan-theorem}\n turn out to be the cuspidal ones.\n\n\\begin{defn}\n We say that $G$ is cuspidal if $G_0:=\\Res_{F\/{\\mathbb Q}}G$ satisfies the condition that\n $A_{G_0}\\times _{\\mathbb Q} 
{\\mathbb R}$ is the maximal split torus in the center of $G_0\\times _{\\mathbb Q} {\\mathbb R}$.\n\\end{defn}\n\n Assume that $G$ is cuspidal, so that $G({\\mathbb R})\/A_{G,\\infty}$ contains a maximal\n ${\\mathbb R}$-torus which is anisotropic.\n\n\n\n\\begin{cor}\\label{c:canonical-measure} $$\\frac{\\overline{\\mu}^{\\mathrm{can},{\\rm EP}}(G(F)\\backslash G({\\mathbb A}_F)\/A_{G,\\infty})}{\n \\overline{\\mu}^{{\\rm EP}}_\\infty (\\overline{G}(F_\\infty)\/A_{G,\\infty})}=\n \\frac{\\tau(G) \\cdot L({\\rm Mot}_G) \\cdot |\\Omega|\/|\\Omega_c|}{e(G_\\infty) 2^{[F:{\\mathbb Q}] r_G} }.\n $$\n\\end{cor}\n\n\\begin{proof}\n It suffices to remark that\n the Euler-Poincar\\'{e} measure on a compact Lie group has total volume 1,\n hence $\\overline{\\mu}^{{\\rm EP}}_\\infty (\\overline{G}(F_\\infty)\/A_{G,\\infty})=1$.\n\\end{proof}\n\n\n\n\\subsection{Bounds for Artin $L$-functions} For later use we estimate the $L$-value $L({\\rm Mot}_G)$ in Corollary \\ref{c:canonical-measure}.\n\n\n\\begin{prop}\\label{prop:Brauer}\nLet $s \\ge 1$ and $E$ be a Galois extension of $F$ of degree $[E:F]\\le s$.\n\\begin{enumerate}[(i)]\n\\item For all $\\epsilon>0$ there exists a constant $c=c(\\epsilon,s,F)>0$ which depends only on $\\epsilon$, $s$ and $F$ such that the following holds: For all non-trivial irreducible representations $\\rho$ of ${\\rm Gal}(E\/F)$,\n\\begin{equation*}\n c^{-1}d_E^{-\\epsilon}\\le L(1,\\rho) \\le c d_E^\\epsilon.\n\\end{equation*}\n \\item The same inequalities hold for the residue $\\mathrm{Res}_{s=1} \\zeta_E(s)$ of the Dedekind zeta function of $E$.\n \\item There is a constant $A_1=A_1(s,F)>0$ which depends only on $s$ and $F$ such that for all \\textsf{faithful} irreducible representations $\\rho$ of ${\\rm Gal}(E\/F)$,\n \\begin{equation*}\n\td_{E\/F}^{1\/A_1} \\le {\\mathbb N}_{F\/{\\mathbb Q}}(\\mathfrak{f}_\\rho) \\le d_{E\/F}^{1\/\\dim(\\rho)},\n \\end{equation*}\n where $d_{E\/F}={\\mathbb N}_{F\/{\\mathbb Q}}({\\mathfrak D}_{E\/F})$ is the 
relative discriminant of $E\/F$; recall that $d_E= d_F^{[E:F]} d_{E\/F}$.\n\\end{enumerate}\n\\end{prop}\n\n\\begin{proof}\n Assertion (ii) is the Brauer--Siegel theorem~\\cite{Brauer47}*{Theorem~2}. We also note the implication (i) $\\Rightarrow$ (ii) which follows from the formula\n \\begin{equation} \\label{Brauer:product}\n\t\\zeta_E(s)= \\prod_{\\rho} L(s,\\rho)^{\\dim \\rho},\n \\end{equation}\nwhere $\\rho$ ranges over all irreducible representations of ${\\rm Gal}(E\/F)$.\n\n The proof of assertion (i) is reduced to the $1$-dimensional case by Brauer induction as in~\\cite{Brauer47}. In this reduction one uses the fact that if $E'\/F'$ is a subextension of $E\/F$ then the absolute discriminant $d_{E'}$ of $E'$ divides the absolute discriminant $d_{E}$ of $E$. Also we may assume that $E'\/F'$ is cyclic. For a character $\\chi$ of ${\\rm Gal}(E'\/F')$ we have the convexity bound $L(1,\\chi)\\le c d^\\epsilon_{E'}$ (Landau). The lower bound for $L(1,\\chi)$ follows from (ii) and the product formula~\\eqref{Brauer:product}.\n\n In assertion (iii), the right inequality follows from the conductor-discriminant formula which implies that $\\mathfrak{f}^{\\dim (\\rho)}_\\rho\\mid {\\mathfrak D}_{E\/F}$. The left inequality follows from local considerations. Let $v$ be a finite place of $F$ dividing ${\\mathfrak D}_{E\/F}$; since $\\rho$ is faithful, its restriction to the inertia group above $v$ is non-trivial and therefore $v$ divides $\\mathfrak{f}_\\rho$. 
Since $v({\\mathfrak D}_{E\/F})$ is bounded above by a constant $A_1(s,F)$ depending only on $[E:F]\\le s$ and $F$ by Lemma~\\ref{l:bound-diff-gen}, we have $ v({\\mathfrak D}_{E\/F}) \\le A_1 v(\\mathfrak{f}_\\rho)$, which concludes the proof.\n\\end{proof}\n\n\n\\begin{cor}\\label{c:bound-on-L(1)}\n For all integers $R, D,s\\in {\\mathbb Z}_{\\ge 1}$, and $\\epsilon>0$ there is a constant $c_1=c_1(\\epsilon,R,D,s,F)>0$ (depending on $R$, $D$, $s$, $F$ and $\\epsilon$) with the following properties:\n\\begin{enumerate}[(i)]\n\\item For any $G$ such that $r_G\\le R$, $\\dim G\\le D$,\n $Z(G)$ is $F$-anisotropic, and $G$ splits over a Galois extension of $F$ of degree $\\le s$,\n$$|L({\\rm Mot}_G)|\\le c_1 \\prod_{d=1 }^{\\lfloor \\frac{d_G+1}{2}\\rfloor}\n {\\mathbb N}_{F\/{\\mathbb Q}}({\\mathfrak f}({\\rm Mot}_{G,d}))^{d-\\frac{1}{2}+\\epsilon}.$$\n\\item There is a constant $A_{20}=A_{20}(R,D,s,F)$ such that for any $G$ as in (i),\n $$|L({\\rm Mot}_G)|\\le c_1 \\prod_{v\\in {\\rm Ram}(G)}\n q_v^{A_{20}}.$$\n\\end{enumerate}\nThe choice $A_{20}= \\frac{(D+1)Rs}{2} \\max\\limits_{\\text{prime $p$}} (1+e_{F_v\/{\\mathbb Q}_p}\\log_p s)$ is admissible.\n\\end{cor}\n\n\\begin{proof} The functional equation for ${\\rm Mot}_G$ reads\n $$L({\\rm Mot}_G)=L({\\rm Mot}_G^\\vee(1))\\epsilon({\\rm Mot}_G)\\cdot \\frac{L_\\infty({\\rm Mot}_G^\\vee(1))}{L_\\infty({\\rm Mot}_G)}$$\nwhere $ \\epsilon({\\rm Mot}_G)=|\\Delta_F|^{d_G\/2}\\prod_{d\\ge 1} {\\mathbb N}_{F\/{\\mathbb Q}}({\\mathfrak f}({\\rm Mot}_{G,d}))^{d-\\frac{1}{2}}$.\n\nThe (possibly reducible) Artin representation for ${\\rm Mot}_{G,d}$\n factors through ${\\rm Gal}(E\/F)$ with $[E:F]\\le s$ by the assumption.\n Let $A_1=A_1(s,F)$ be as in (iii) of Proposition \\ref{prop:Brauer}.\nFor all $\\epsilon>0$, (i) of Proposition~\\ref{prop:Brauer} implies that there is a constant $c=c(\\epsilon,s,F)>1$\n depending only on $\\epsilon$, $s$ and $F$ such that\n $$|L({\\rm Mot}_G^\\vee(1))|\\le \\prod_{d\\ge 1} \\left( c {\\mathbb 
N}_{F\/{\\mathbb Q}}({\\mathfrak f}({\\rm Mot}_{G,d}))^{A_1\\epsilon}\\right)^{\\dim {\\rm Mot}_{G,d}}\n \\le c^{r_G}\\prod_{d\\ge 1} {\\mathbb N}_{F\/{\\mathbb Q}}({\\mathfrak f}({\\rm Mot}_{G,d}))^{\\epsilon A_1 r_G}.$$\n Formula (7.7) of \\cite{Gro97} gives the first equality below, which leads to the following bound since only\n$1\\le d \\le \\lfloor \\frac{d_G+1}{2}\\rfloor$ can contribute in view of Proposition \\ref{p:Gross-motives}.(iii).\n\\begin{equation*}\n\\left|\\frac{L_\\infty({\\rm Mot}^\\vee_G(1))}{L_\\infty({\\rm Mot}_G)}\\right| = 2^{-[F:{\\mathbb Q}]r_G}\n\\prod_{d\\ge 1} \\left( \\frac{(d-1)!}{(2\\pi)^d}\\right)^{\\dim {\\rm Mot}_{G,d}}\n\\le 2^{-[F:{\\mathbb Q}]r_G} \\left( \\lfloor \\frac{d_G-1}{2}\\rfloor !\\right)^{r_G}.\n\\end{equation*}\n Set $c_1(R,D,s,F,\\epsilon):= |\\Delta_F|^{D\/2} 2^{-[F:{\\mathbb Q}]R} \\left(\n\\lfloor \\frac{D-1}{2}\\rfloor !\\right)^{R}$. Then we see that\n\\begin{equation*}\\begin{aligned}\n |L({\\rm Mot}_G)|&\\le c_1 \\prod_{d=1 }^{\\lfloor \\frac{d_G+1}{2}\\rfloor}\n {\\mathbb N}_{F\/{\\mathbb Q}}({\\mathfrak f}({\\rm Mot}_{G,d}))^{d-\\frac{1}{2}+\\epsilon} \\\\\n & = c_1 \\prod_{v\\in {\\rm Ram}(G)} \\prod_{d=1 }^{\\lfloor \\frac{d_G+1}{2}\\rfloor}\n q_v^{(d-\\frac{1}{2}+\\epsilon) \\cdot f(G_v,d)}.\n\\end{aligned}\\end{equation*}\nThis concludes the proof of (i).\n\nAccording to Lemma \\ref{l:bounding-conductor}, the exponent on the right-hand side is bounded by\n\\begin{equation*}\n d f(G_v,d) \\le \\frac{D+1}{2} \\dim {\\rm Mot}_{G,d} \\cdot (s(1+e_{F_v\/{\\mathbb Q}_p}\\log_p s)-1).\n\\end{equation*}\n (We have chosen $\\epsilon=\\frac{1}{2}$.) The proof of (ii) is concluded by the fact that $$\\sum_{d\\ge 1}\\dim {\\rm Mot}_{G,d}=r_G\\le R,$$ see Proposition \\ref{p:Gross-motives}.(ii).\n\\end{proof}\n\n\n\\begin{cor}\\label{c:bound-on-L(1)-2}\n Let $G$ be a connected cuspidal reductive group over $F$ with anisotropic center. 
Then there exist constants\n $c_2=c_2(G,F)>0$ and $A_2(G,F)>0$ depending only on $G$ and $F$ such that:\n for any cuspidal $F$-Levi subgroup $M$ of $G$ and any semisimple $\\gamma\\in M(F)$\nwhich is elliptic in $M({\\mathbb R})$,\n $$|L({\\rm Mot}_{I^M_\\gamma})|\\le c_2 \\prod_{v\\in {\\rm Ram}(I_\\gamma^M)}\n q_v^{A_2}$$\nwhere $I^M_\\gamma$ denotes the connected centralizer of $\\gamma$ in $M$.\nThe following choice is admissible: $$A_2= \\frac{(d_G+1)r_Gw_G s_G}{2} \\max\\limits_{\\text{prime $p$}} (1+e_{F_v\/{\\mathbb Q}_p}\\log_p w_Gs_G).$$\n\\end{cor}\n\n\n\\begin{proof}\n According to Lemma \\ref{l:torus-splitting},\n $s^{\\mathrm{sp}}_{ I^M_\\gamma}\\le w_Gs_G$.\n Apply Corollary \\ref{c:bound-on-L(1)} for each $I^M_\\gamma$ with $R=r_G$, $D=d_G$ and $s=w_Gs_G$\nto deduce the first assertion, which obviously implies the last assertion. Note that\n${\\rm rk} I^M_\\gamma\\le r_G$ and that $\\dim I^M_\\gamma\\le d_G$.\n\\end{proof}\n\nInstead of using the Brauer--Siegel theorem, which is ineffective, we could use the estimates by Zimmert~\\cite{Zimmert:regulator} for the size of the regulator of number fields. This yields an effective estimate for the constants $c_1$ and $c_2$ above, at the cost of enlarging the value of the exponents $A_1$ and $A_2$.\n\n\\subsection{Frobenius--Schur indicator}\n\\label{sec:b:fs}\n\nThe Frobenius--Schur indicator is an invariant associated to an irreducible representation. It may take the three values $1,0,-1$. This subsection gathers several well-known facts and recalls some familiar constructions.\n\nThe Frobenius--Schur indicator can be constructed in greater generality but the following setting will suffice for our purpose. We will only consider finite dimensional representations on vector spaces over $\\mathbb{C}$ or $\\mathbb{R}$. 
We consider continuous (hence unitarizable) representations of compact Lie groups and algebraic representations of linear algebraic groups; the two settings are in fact closely related by the classical \\Lquote{unitary trick} of Hurwitz and Weyl.\n\n Let $G$ be a compact Lie group and denote by $\\mu$ the Haar probability measure on $G$. Let $(V,r)$ be a continuous irreducible representation of $G$. Denote by $\\chi(g)=\\MTr(r(g))$ its character.\n\n\\begin{definition} The \\key{Frobenius--Schur indicator} of an irreducible representation $(V,r)$ of $G$ is defined by\n\\begin{equation*}\ns(r):= \\int_G \\chi(g^2) d\\mu(g).\n\\end{equation*}\nOne always has $s(r)\\in \\set{-1,0,1}$.\n\\end{definition}\n\n\\begin{rem}\n More generally if $G$ is an arbitrary group but $V$ is still finite dimensional, then $s(r)$ is defined as the multiplicity of the trivial representation in the virtual representation $\\MSym^2 V - \\wedge^2 V$. This is consistent with the above definition.\n\\end{rem}\n\n\\begin{remark}\\begin{enumerate}[(i)]\n \\item Let $(V^\\vee,r^\\vee)$ be the dual representation of $G$ on the dual space $V^\\vee$. It is easily seen that $s(r)=s(r^\\vee)$.\n \\item If $G=G_1\\times G_2$ and $r$ is the irreducible representation of $G$ on $V=V_1\\otimes V_2$ where $(V_1,r_1)$ and $(V_2,r_2)$ are irreducible representations of $G_1$ (resp. $G_2$), then $s(r)=s(r_1)s(r_2)$.\n\\end{enumerate}\n\\end{remark}\n\n\nThe classical theorem of Frobenius and Schur says that $r$ is a real, complex or quaternionic representation if and only if $s(r)=1,0$ or $-1$ respectively. 
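For a finite group, with $\mu$ the normalized counting measure, the defining integral becomes the finite average $s(r)=|G|^{-1}\sum_{g\in G}\chi(g^2)$, which can be computed directly. The following sketch (our illustration only; the explicit matrix models of the groups are choices made here, not taken from the text) brute-forces the indicator for three small matrix groups realizing the three cases:

```python
import cmath

def mul(a, b):
    # multiply square matrices given as tuples of tuples of complex numbers
    n = len(a)
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

def key(m):
    # hashable key for a matrix, rounded to absorb floating-point error
    return tuple(tuple((round(z.real, 9), round(z.imag, 9)) for z in row) for row in m)

def generate(gens):
    # closure of the generators: the finite matrix group they generate
    n = len(gens[0])
    e = tuple(tuple(1 + 0j if i == j else 0j for j in range(n)) for i in range(n))
    elems, frontier = {key(e): e}, [e]
    while frontier:
        g = frontier.pop()
        for h in gens:
            p = mul(g, h)
            if key(p) not in elems:
                elems[key(p)] = p
                frontier.append(p)
    return list(elems.values())

def fs_indicator(gens):
    # s(r) = |G|^{-1} sum_g chi(g^2), with chi(g) = trace of g
    G = generate(gens)
    total = sum(sum(mul(g, g)[i][i] for i in range(len(g))) for g in G)
    return round((total / len(G)).real)

w = cmath.exp(2j * cmath.pi / 3)

# quaternion group Q8 acting on C^2: quaternionic, s = -1
q8 = [((1j, 0j), (0j, -1j)), ((0j, 1 + 0j), (-1 + 0j, 0j))]
# image of the 2-dimensional irreducible representation of S_3
# (dihedral group of order 6, rotation + reflection): real, s = +1
d3 = [((w.real + 0j, -w.imag + 0j), (w.imag + 0j, w.real + 0j)),
      ((1 + 0j, 0j), (0j, -1 + 0j))]
# non-trivial character of Z/3: complex, s = 0
c3 = [((w,),)]

print(fs_indicator(q8), fs_indicator(d3), fs_indicator(c3))  # -1 1 0
```

The three values illustrate the trichotomy just stated: the $2$-dimensional representation of the quaternion group is quaternionic, that of $S_3$ is real, and a non-trivial character of a cyclic group is complex.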
We elaborate on this trichotomy in the following three lemmas.\n\n\\begin{lemma}[Real representation]\\label{lem:real} Let $(V,r)$ be an irreducible representation of $G$.\nThe following assertions are equivalent:\n\\begin{enumerate}[(i)]\n\\item $s(r)=1$;\n\\item $r$ is self-dual and defined over $\\mathbb{R}$ in the sense that $V\\simeq V_0\\otimes_\\mathbb{R} \\mathbb{C}$ for some irreducible representation on a real vector space $V_0$. (Such an $r$ is said to be a \\key{real representation};)\n\\item $r$ has an invariant real structure. Namely there is a $G$-invariant anti-linear map $j:V\\to V$ which satisfies $j^2=1$.\n\\item $r$ is self-dual and any bilinear form on $V$ that realizes the isomorphism $r\\simeq r^\\vee$ is symmetric;\n\\item $\\MSym^2 V$ contains the trivial representation (then the multiplicity is exactly one).\n\\end{enumerate}\n\\end{lemma}\n\nWe don't repeat the proof here (see e.g.~\\cite{book:serre:rep}) and only recall some of the familiar constructions. We have a direct sum decomposition\n\\begin{equation*}\n V\\otimes V = \\MSym^2 V \\oplus \\wedge^2 V.\n\\end{equation*}\nThe character of the representation $V\\otimes V$ is $g\\mapsto \\chi(g)^2$. By Schur's lemma the trivial representation occurs in $V\\otimes V$ with multiplicity at most one. In other words the subspace of invariant vectors of $V^\\vee\\otimes V^\\vee$ is at most one-dimensional. Note that this subspace is identified with $\\MHom_G(V,V^\\vee)$ which is also the subspace of invariant bilinear forms on $V$.\n\nThe character of the representation $\\MSym^2 V$ (resp. $\\wedge^2 V$) is\n\\begin{equation*}\n\\text{%\n$\\frac{1}{2}(\\chi(g)^2+\\chi(g^2))$\\quad resp.\\quad $\\frac{1}{2}(\\chi(g)^2-\\chi(g^2))$.\n}\n\\end{equation*}\nFrom that the equivalence of (i) with (v) follows because the multiplicity of the trivial representation in $\\MSym^2 V$ (resp. $\\wedge^2 V$) is the mean of its character. 
The equivalence of (iv) and (v) is clear because a bilinear form on $V$ is an element of $V^\\vee\\otimes V^\\vee$ and it is symmetric if and only if it belongs to $\\MSym^2 V^\\vee$.\n\nThe equivalence of (ii) and (iii) follows from the fact that $j$ is induced by complex conjugation on $V_0\\otimes_\\mathbb{R} \\mathbb{C}$ and conversely $V_0$ is the subspace of fixed points by $j$. Note that a real representation is isomorphic to its complex conjugate representation because $j$ may be viewed equivalently as a $G$-isomorphism $V\\to \\overline{V}$. Since $V$ is unitary the complex conjugate representation $\\overline{r}$ is isomorphic to the dual representation $r^\\vee$. In assertion (ii) one may note that the endomorphism ring of $V_0$ is isomorphic to $\\mathbb{R}$.\n\n\\begin{lemma}[Complex representation] Let $(V,r)$ be an irreducible representation of $G$.\nThe following assertions are equivalent:\n\\begin{enumerate}[(i)]\n\\item $s(r)=0$;\n\\item $r$ is not self-dual;\n\\item $r$ is not isomorphic to $\\overline{r}$; (Such an $r$ is called a \\key{complex representation};)\n\\item $V\\otimes V$ does not contain the trivial representation.\n\\end{enumerate}\n\\end{lemma}\n\nWe note that for a complex representation, the restriction $\\Res_{\\mathbb{C}\/\\mathbb{R}} V$ (obtained by viewing $V$ as a real vector space) is an irreducible real representation of twice the dimension of $V$. Its endomorphism ring is isomorphic to $\\mathbb{C}$.\n\n\\begin{lemma}[Quaternionic\/symplectic representation]\\label{lem:quaternionic} Let $(V,r)$ be an irreducible representation of $G$.\nThe following assertions are equivalent:\n\\begin{enumerate}[(i)]\n\\item $s(r)=-1$;\n\\item $r$ is self-dual and cannot be defined over ${\\mathbb R}$.\n\\item $r$ has an invariant quaternionic structure. 
Namely there is a $G$-invariant anti-linear map $j:V\to V$ which satisfies $j^2=-1$.\n(Such an $r$ is called a \key{quaternionic representation}.)\n\item $r$ is self-dual and the bilinear form on $V$ that realizes the isomorphism $r\simeq r^\vee$ is antisymmetric.\n (Such an $r$ is said to be a \key{symplectic representation}.)\n\item $\bigwedge^2 V$ contains the trivial representation (the multiplicity is exactly one).\n\end{enumerate}\n\end{lemma}\n\nThe equivalence of (iii) and (iv) again comes from the fact that $V$ is unitarizable (because $G$ is a compact group). In that context the notion of symplectic representation is identical to the notion of quaternionic representation. Note that for a quaternionic representation, the restriction $\Res_{\mathbb{C}\/\mathbb{R}} V$ is an irreducible real representation of twice the dimension of $V$. Furthermore its ring of endomorphisms is isomorphic to the quaternion algebra $\mathbb{H}$. Indeed the endomorphism ring contains the (linear) action of $i$, because $V$ is a representation over the complex numbers, and together with $j$ and $k=ij$ these satisfy the standard presentation of $\mathbb{H}$.\n\nFrom the above discussion we see that the Frobenius--Schur indicator can be used to classify irreducible representations over the reals. 
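As a concrete illustration of the quaternionic case, consider the $2$-dimensional irreducible representation $r$ of the quaternion group $Q_8=\{\pm 1,\pm i,\pm j,\pm k\}$, whose character satisfies $\chi(1)=2$, $\chi(-1)=-2$ and $\chi=0$ on the six elements of order $4$. Since $g^2=-1$ for every element of order $4$,\n\begin{equation*}\n s(r)=\frac{1}{8}\sum_{g\in Q_8}\chi(g^2)=\frac{1}{8}\big(2\,\chi(1)+6\,\chi(-1)\big)=\frac{4-12}{8}=-1,\n\end{equation*}\nin accordance with the realization of this representation on $\mathbb{H}$ by left multiplication.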
The endomorphism ring of an irreducible real representation is isomorphic to either $\mathbb{R},\mathbb{C}$ or $\mathbb{H}$ and we have described a correspondence with associated complex representations.\n\n\n\n\section{A uniform bound on orbital integrals}\label{s:app:unif-bound}\n\n This section is devoted to proving an apparently new uniform bound on orbital integrals\n evaluated at semisimple conjugacy classes and basis elements of unramified Hecke algebras.\n Our bound is uniform in the finite place $v$ of a number field (over which the group is defined),\n the ``size'' of (the support of) the basis element for the unramified Hecke algebra at $v$\n as well as the conjugacy class at $v$.\n\n The main result is Theorem \ref{t:appendeix2}, which\n is invoked in \S\ref{sub:weight-varies}. The main local input for Theorem \ref{t:appendeix2} is Proposition \ref{p:appendix2}. The technical heart of the proof of the proposition is postponed to \S\ref{sub:app:elliptic}, which the reader may want to skip on a first reading.\n\n\n\n\n\subsection{The main local result}\label{sub:local-bound-orb-int}\n\n We begin with a local assertion with a view toward Theorem \ref{t:appendeix2} below.\n Let $G$ be a connected reductive group over a finite extension $F$ of ${\mathbb Q}_p$ with a maximal $F$-split torus $A$.\n As usual $\mathcal{O}$, $\varpi$, $k_F$ denote the integer ring, a uniformizer and the residue field. Let $\mathbf{G}$ be the Chevalley group for $G\times_{F} \overline{F}$, defined over ${\mathbb Z}$.\n Let $\mathbf{B}$ and $\mathbf {T}$ be a Borel subgroup and a maximal torus of $\mathbf{G}$ such that\n $\mathbf{B}\supset \mathbf {T}$. 
We assume that\n \\begin{itemize}\\item $G$ is unramified over $F$,\n \\item ${\\rm char}\\, k_F>w_Gs_G$ and ${\\rm char}\\, k_F$ does not divide the finitely many constants in the Chevalley commutator relations (namely $C_{ij}$ of \\eqref{e:commutator}).\n \\end{itemize}\n (We assume ${\\rm char}\\, k_F>w_Gs_G$ to ensure that any maximal torus of $G$ splits over a finite tame extension, cf. \\S\\ref{sub:app:elliptic} below. The latter assumption on ${\\rm char}\\, k_F$ depends only on $G$.) Fix a smooth reductive model over $\\mathcal{O}$ so that $K:=G(\\mathcal{O})$ is a hyperspecial subgroup of $G(F)$. Denote by $v:F^\\times \\rightarrow {\\mathbb Q}$ the discrete valuation normalized by $v(\\varpi)=1$ and by $D^G$ the Weyl discriminant function, cf. \\eqref{e:D^G} below. Set $q_v:=|k_F|$.\n\n Suppose that there exists a closed embedding of algebraic groups $\\Xi:G\\hookrightarrow \\GL_d$ defined over $\\mathcal{O}$\n such that $\\Xi(\\mathbf {T})$ (resp. $\\Xi(\\mathbf{B})$) lies in the group of diagonal (resp. upper triangular) matrices. This assumption will be satisfied by Lemma \\ref{l:conj-image-in-diag} and Proposition \\ref{p:global-integral-model}, or alternatively as explained at the start of \\S\\ref{sub:lem-split}.\n The assumption may not be strictly necessary but is convenient to have for some later arguments.\n In the setup of \\S\\ref{sub:global-bound-orb-int} such a $\\Xi$ will be chosen globally\n over ${\\mathbb Z}[1\/Q]$ (i.e. 
away from a certain finite set of primes), which gives rise to an embedding over $\mathcal{O}$ if $v$ does not divide $Q$.\n\n\begin{prop}\label{p:appendix2}\n\n There exist $a_{G,v},b_{G,v},e_{G,v}\ge0$ (depending on $F$, $G$ and $\Xi$) such that\n \begin{itemize}\n \item for every semisimple $\gamma\in G(F)$,\n \item for every $\lambda\in X_*(A)$ and $\kappa\in{\mathbb Z}_{\ge0}$ such that $\|\lambda\|\le\kappa$,\n \end{itemize}\n \begin{equation}\label{e:app:prop} 0\le O_\gamma(\tau^G_\lambda,\mu^{\mathrm{can}}_G,\mu^{\mathrm{can}}_{I_\gamma})\n \le q_v^{ a_{G,v}+b_{G,v}\kappa}\cdot D^G(\gamma)^{-e_{G,v}\/2}.\end{equation}\n\end{prop}\n\n\begin{rem}\n We chose the notation $a_{G,v}$ etc.\ rather than $a_{G,F}$ etc.\ in anticipation of the global setup\n of the next subsection\n where $F$ is the completion of a number field at the place $v$.\n\end{rem}\n\n\n\n\n\begin{proof}\n For simplicity we will omit the measures chosen to compute orbital integrals\n when there is no danger of confusion.\n Let us argue by induction on the semisimple rank $r^{\semis}_G$ of $G$.\n In the rank zero case, namely when $G$ is a torus,\n the proposition is true since $O_\gamma(\tau^G_\lambda)$ is equal to 0 or 1.\n Now assume that $r^{\semis}_G\ge 1$ and that the proposition is known for all groups\n whose semisimple ranks are less than $r^{\semis}_G$.\n In the proof we write $a_G$, $b_G$, $e_G$ instead of $a_{G,v}$, $b_{G,v}$, $e_{G,v}$ for simplicity.\n\n\medskip\n\n\underline{Step 1}. Reduce to the case where $Z(G)$ is anisotropic.\n\n Let $A_G$ denote the maximal split torus in $Z(G)$. Set $\overline{G}:=G\/A_G$.\n The goal of Step 1 is to show that if the proposition holds for $\overline{G}$ then it also holds\nfor $G$. 
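To fix ideas, for $G=\GL_n$ this step passes to the quotient by the scalar matrices:\n\begin{equation*}\n A_G=Z(G)\simeq \mathbb{G}_m, \qquad \overline{G}=G\/A_G=\mathrm{PGL}_n,\n\end{equation*}\nand $Z(\overline{G})$ is anisotropic (indeed trivial).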
We have an exact sequence of algebraic groups over $\\mathcal{O}$: $1\\rightarrow A_G\\rightarrow G\\rightarrow \\overline{G}\\rightarrow 1$.\nBy taking $F$-points one obtains an exact sequence of groups\n$$1 \\rightarrow A_G(F) \\rightarrow G(F) \\rightarrow \\overline{G}(F) \\rightarrow 1,$$\nwhere surjectivity is implied by Hilbert 90 for $A_G$.\nIn fact $G(\\mathcal{O})\\rightarrow \\overline{G}(\\mathcal{O})$ is surjective since it is surjective on $k_F$-points and\n$G\\rightarrow \\overline{G}$ is smooth. (A similar argument was used on page 386 of \\cite{Kot86}.)\nFor any semisimple $\\gamma\\in G(F)$, denote its image in $\\overline{G}(F)$ by $\\overline{\\gamma}$.\n The connected centralizer of $\\overline{\\gamma}$ is denoted\n$\\overline{I}_{\\overline{\\gamma}}$. There is an exact sequence\n$$1\\rightarrow A_G(F) \\rightarrow I_\\gamma(F) \\rightarrow \\overline{I}_{\\overline{\\gamma}}(F)\\rightarrow 1.$$\nWe see that $G(F) \\rightarrow \\overline{G}(F)$ induces a bijection\n$I_\\gamma(F)\\backslash G(F) \\simeq \\overline{I}_{\\overline{\\gamma}}(F)\\backslash \\overline{G}(F)$.\nLet $A$ be a maximal $F$-split torus of $G$, and $\\overline{A}$ be its image in $\\overline{G}$.\nFor any $\\lambda\\in X_*(A)$, denote its image in $X_*(\\overline{A})$ by $\\overline{\\lambda}$.\nThen\n$$ O^{G(F)}_\\gamma(\\tau^G_\\lambda,\\mu^{\\mathrm{can}}_G,\\mu^{\\mathrm{can}}_{I_\\gamma})\n=O^{\\overline{G}(F)}_{\\overline{\\gamma}}(\\tau^{\\overline{G}}_{\\overline{\\lambda}},\\mu^{\\mathrm{can}}_{\\overline{G}},\\mu^{\\mathrm{can}}_{\\overline{I}_{\\overline{\\gamma}}}).$$\nIndeed, this follows from the fact that $I_\\gamma(F)\\backslash G(F) \\simeq \\overline{I}_{\\overline{\\gamma}}(F)\\backslash \\overline{G}(F)$\ncarries $\\frac{\\mu^{\\mathrm{can}}_G}{\\mu^{\\mathrm{can}}_{I_\\gamma}}$ to $\\frac{\\mu^{\\mathrm{can}}_{\\overline{G}}}{\\mu^{\\mathrm{can}}_{\\overline{I}_{\\overline{\\gamma}}}}$\nand that $G(\\mathcal{O})\\rightarrow 
\\overline{G}(\\mathcal{O})$ is onto (so that $\\tau^G_{\\lambda}(x^{-1}\\gamma x)=1$\nif and only if $\\tau^{\\overline{G}}_{\\overline{\\lambda}}(\\overline{x}^{-1}\\overline{\\gamma} \\overline{x})=1$, where\n$\\overline{x}$ is the image of $x$).\nAs the proposition is assumed to hold for $\\overline{G}$,\nthe right hand side is bounded by $q_v^{ a_{\\overline{G}}+b_{\\overline{G}}\\kappa}\\cdot D^{\\overline{G}}(\\gamma)^{-e_{\\overline{G}}\/2}=\nq_v^{ a_{\\overline{G}}+b_{\\overline{G}}\\kappa}\\cdot D^{G}(\\gamma)^{-e_{\\overline{G}}\/2}$. Hence the\nproposition holds for $G$ if we set\n$a_G=a_{\\overline{G}}$, $b_G=b_{\\overline{G}}$ and $e_G=e_{\\overline{G}}$. This finishes Step 1.\n\n\\medskip\n\n\\noindent\\underline{Step 2}. When $Z(G)$ is anisotropic.\n\n The problem will be divided into 3 cases depending on $\\gamma$. In each case we find a sufficient condition on $a_G$, $b_G$ and $e_G$ for \\eqref{e:app:prop} to be true.\n\n\\medskip\n\n\\noindent\\underline{Step 2-1}. When $\\gamma\\in Z(G)(F)$.\n\n In this case the proposition holds for any $a_G,b_G,e_G\\ge0$ since $O_\\gamma(\\tau^G_\\lambda)=0$ or $1$ and $D^G(\\gamma)=1$.\n\n\n\\medskip\n\n\\noindent\\underline{Step 2-2}. When $\\gamma$ is non-central and non-elliptic.\n\nThen there exists a nontrivial split torus $S\\subset Z(I_\\gamma)$.\n Set $M:=Z_G(S)$, which is an $F$-rational Levi subgroup of $G$. Then $I_\\gamma\\subset M \\subsetneq G$. Note that $\\gamma$ is $(G,M)$-regular.\nLemma \\ref{l:orb-int-const-term} reads\n\\begin{equation}\\label{e:app:G-M}O^{G(F)}_\\gamma({\\mathbf{1}}_{K\\lambda(\\varpi)K})=\nD_{G\/M}(\\gamma)^{-1\/2} O^{M(F)}_\\gamma(({\\mathbf{1}}_{K\\lambda(\\varpi)K})_M).\\end{equation}\nBy conjugation we may assume without loss of generality that\n$\\lambda(\\varpi)\\in M(F)$. (To justify, find $x\\in G(F)$ such that\n$xMx^{-1}$ contains $A$. 
Then $\lambda(\varpi)\in xM(F)x^{-1}$ and\n$O^M_\gamma= O^{xMx^{-1}}_{x\gamma x^{-1}}$.)\nWe can write \begin{equation}\label{e:App-Step3}({\mathbf{1}}_{K\lambda(\varpi)K})_M=\sum_{\mu\le_{{\mathbb R}}\lambda}\nc_{\lambda,\mu}{\mathbf{1}}_{K_M\mu(\varpi)K_M}.\end{equation}\n For any $m=\mu(\varpi)$, $c_{\lambda,\mu}$ is equal to\n$$({\mathbf{1}}_{K\lambda(\varpi)K})_M(m)=\delta_P(m)^{1\/2}\int_{N(F)} {\mathbf{1}}_{K\lambda(\varpi)K}(mn)dn\n=q_v^{\langle \rho_P,\mu\rangle} \mu_G^{\mathrm{can}}(mN(F)K\cap K\lambda(\varpi)K).$$\nLemma \ref{l:double-coset-volume} and the easy inequality $\langle \rho_P,\mu\rangle \le \langle \rho,\lambda\rangle$\n allow us to deduce that\n$$0\le c_{\lambda,\mu}\le q_v^{\langle \rho_P,\mu\rangle}\n\mu^{\mathrm{can}}_G(K\lambda(\varpi)K)\le q_v^{d_G+r_G+2\langle \rho,\lambda\rangle}.$$\nThe sum in \eqref{e:App-Step3} runs over the set of $$\mu=\lambda-\sum_{\alpha\in \Delta^+}\na_\alpha\cdot \alpha^\vee~\mbox{with}~a_\alpha\in \frac{1}{\delta_G}{\mathbb Z},~a_\alpha\ge 0$$\nand $\mu\in (X_*(T)_{\mathbb R})^+$. Here we need to explain $\delta_G$: If\n$\mu\le_{{\mathbb R}} \lambda$ then $\lambda-\mu$ is a linear combination of positive coroots with nonnegative rational\ncoefficients. The denominators of such coefficients are uniformly bounded, where the bound depends on the\ncoroot datum. 
We write $\\delta_G$ for this bound.\n\nThe above condition on $\\mu$ and $\\|\\lambda\\|\\le \\kappa$ imply that $a_\\alpha\\le \\kappa$.\nWe get, by using the induction hypothesis for $O^M_\\gamma$, $$ 0\\le\nO^{M(F)}_\\gamma(({\\mathbf{1}}_{K\\lambda(\\varpi)K})_M)\\le\n\\sum_{\\mu\\le_{{\\mathbb R}}\\lambda} c_{\\lambda,\\mu} O^M_{\\gamma} ({\\mathbf{1}}_{K_M\\mu(\\varpi)K_M})\n\\le \\sum_{\\mu\\le_{{\\mathbb R}}\\lambda} c_{\\lambda,\\mu}\n q_v^{ a_M+b_M\\kappa}\\cdot D^M(\\gamma)^{-e_M\/2}$$\n$$\\le\n(\\delta_G(\\kappa+1))^{|\\Delta^+|} q_v^{d_G+r_G+2\\langle \\rho,\\mu\\rangle}q_v^{ a_M+b_M\\kappa}\\cdot D^M(\\gamma)^{-e_M\/2} $$\n$$ \\le q_v^{d_G+r_G(\\delta_G\\kappa+\\delta_G+1)+2\\langle \\rho,\\mu\\rangle+a_M+b_M\\kappa} D^M(\\gamma)^{-e_M\/2}. $$\nSet\n\\[\nc_G:=d_G+r_G(\\delta_G+1)+2\\langle \\rho,\\lambda\\rangle \\le d_G+r_G(\\delta_G+1)+|\\Phi^+|\\kappa.\n\\]\nIn view of \\eqref{e:app:G-M} it suffices to find $a_G,b_G,e_G\\ge 0$ such that\n$$D_{G\/M}(\\gamma)^{-1\/2}D^M(\\gamma)^{-e_M\/2} q_v^{a_M+c_G+(b_M+r_G\\delta_G)\\kappa}\n\\le D_G(\\gamma)^{-e_G\/2} q_v^{a_G+b_G\\kappa}$$\nor equivalently\n\\begin{equation}\\label{e:app:suff-cond}\nD_{G\/M}(\\gamma)^{\\frac{e_G-1}{2}}D^M(\\gamma)^{\\frac{e_G-e_M}{2}}\n\\le q_v^{a_G-a_M-c_G+(b_G-b_M-r_G\\delta_G)\\kappa}\\end{equation}\nwhenever a conjugate of $\\gamma$ lies in $K\\lambda(\\varpi)K$.\nFor each $\\alpha\\in \\Phi$, $v(1-\\alpha(\\gamma))\\ge 0$ if $v(\\alpha(\\gamma))\\ge 0$ and\n\\begin{equation}\\label{e:app:alpha(gamma)}v(1-\\alpha(\\gamma))= v(\\alpha(\\gamma))\\ge -m_\\Xi \\kappa\\quad\\mbox{if}~v(\\alpha(\\gamma))<0\\end{equation}\nwhere $m_\\Xi$ is the constant $B_5$ (depending only on $G$ and $\\Xi$ and not on $v$) of Lemma \\ref{l:bounding1-alpha(gamma)}.\nHence $$D_{G\/M}(\\gamma)=\n\\prod_{\\alpha\\in \\Phi\\backslash \\Phi_M\\atop \\alpha(\\gamma)\\neq 1}\n|1-\\alpha(\\gamma)|_v \\le q_v^{|\\Phi\\backslash \\Phi_M| m_\\Xi \\kappa\/2}$$\nand likewise $D^M(\\gamma)\\le q_v^{|\\Phi_M| 
m_\\Xi \\kappa\/2}$.\n(We divide the exponents by 2 because it cannot happen simultaneously that\n $v(\\alpha(\\gamma))<0$ and $v(\\alpha^{-1}(\\gamma))<0$.)\nTherefore condition \\eqref{e:app:suff-cond} on $a_G,b_G,e_G$ is implied by\n\\begin{equation}\\label{e:app:Levi} \\begin{aligned}&~ \\frac{e_G-1}{2}\\frac{|\\Phi\\backslash \\Phi_M| m_\\Xi \\kappa}{2}\n+ \\frac{e_G-e_M}{2}\\frac{|\\Phi_M| m_\\Xi \\kappa}{2}\\\\ \\le & ~ a_G-a_M-(d_G+r_G(\\delta_G+1)+|\\Phi^+|\\kappa)+(b_G-b_M-r_G\\delta_G)\\kappa.\\end{aligned}\\end{equation}\nThere are only finitely many Levi subgroups $M$ (up to conjugation)\ngiving rise to the triples $(a_M,b_M,e_M)$. It is elementary to observe that\n\\eqref{e:app:Levi} holds as long as $a_G$ and $b_G$ are sufficiently large (while $e_G$ is kept relatively small).\nWe will impose another condition on $a_G,b_G,e_G$ in Step 2-3.\n\n\\medskip\n\n\\noindent\\underline{Step 2-3}. When $\\gamma$ is noncentral and elliptic in $G$.\n\nThis case is essentially going to be worked out in \\S\\ref{sub:app:elliptic}. Let $Z_1,Z_2,Z_3$ be as in Lemma \\ref{l:B-4} below.\nBy \\eqref{e:mu-EP\/mu} and Corollary \\ref{c:B-1} below,\n\\eqref{e:app:prop} will hold if\n \\begin{equation}\\label{e:app:Elliptic}q_v^{r_G(d_G+1)}q_v^{Z_1+Z_2\\kappa} D^G(\\gamma)^{Z_3}\n \\le q_v^{a_G+b_G\\kappa} D^G(\\gamma)^{-e_G\/2}.\\end{equation}\n We have $D^G(\\gamma)\\le q_v^{|\\Phi|m_\\Xi\\kappa\/2}$ thanks to \\eqref{e:app:alpha(gamma)} (cf. Step 2-2). So \\eqref{e:app:Elliptic} (is not equivalent to but) is implied by the combination of the following two inequalities:\n \\begin{equation}\\label{e:app:Elliptic1} Z_3+\\frac{e_G}{2} \\ge 0.\\end{equation}\n \\begin{equation}\\label{e:app:Elliptic2} r_G(d_G+1)+(Z_1+Z_2\\kappa)+|\\Phi|m_{\\Xi}\\frac{\\kappa}{2}(Z_3+\\frac{e_G}{2})\n \\le a_G+b_G\\kappa.\\end{equation}\n The latter two will hold true, for instance, if $e_G=0$ and if $a_G$ and $b_G$ are sufficiently large. 
(We will see in \S\ref{sub:app:elliptic} below that $Z_3\ge0$ and that $Z_1,Z_2,Z_3$ are independent of $\lambda$, $\gamma$ and $\kappa$.)\n\n\medskip\n\n Now that we are done with analyzing three different cases, we finish Step 2. For this we use the induction on semisimple rank (to ensure the existence of $a_M$, $b_M$ and $e_M$ in Step 2-2) to find $a_G,b_G,e_G\ge 0$ which satisfy the conditions described at the ends of Step 2-2 and Step 2-3. We are done with the proof of Proposition \ref{p:appendix2}.\n\end{proof}\n\n\n\subsection{A global consequence}\label{sub:global-bound-orb-int}\n\n Here we switch to a global setup. For a finite place $v$ of a number field, let $k(v)$ denote the residue field. Put $q_v:=|k(v)|$.\n\begin{itemize}\n\item $G$ is a connected reductive group over a number field ${\mathbf F}$.\n\item ${\rm Ram}(G)$ is the set of finite places $v$ of ${\mathbf F}$ such that $G$ is ramified at ${\mathbf F}_v$.\n\item $\mathbf{G}$ is the Chevalley group for $G\times_{{\mathbf F}} \overline{{\mathbf F}}$,\nand $\mathbf{B}$, $\mathbf {T}$ are as in \S\ref{sub:local-bound-orb-int}.\n \item $S_{{\rm bad}}$ is the set of finite places $v$ such that either $v\in{\rm Ram}(G)$, ${\rm char}\, k(v)\le w_Gs_G$, or ${\rm char}\, k(v)$ divides at least one of the constants for the Chevalley commutator relations for $G$ (cf. \eqref{e:commutator}).\n\n\end{itemize}\n\nOnce and for all, fix a closed embedding $\Xi^{\mathrm{sp}}:\mathbf{G}\hookrightarrow \GL_d$ defined over ${\mathbb Z}[1\/R]$ for a large enough integer $R$ such that $\Xi^{\mathrm{sp}}(\mathbf {T})$ (resp. $\Xi^{\mathrm{sp}}(\mathbf{B})$) lies in the group of diagonal (resp. upper triangular) matrices of $\GL_d$ and such that Lemma \ref{l:app:split2} holds true. The choice of $R$ depends only on $G$ and $\Xi^{\mathrm{sp}}$. 
(See \S\ref{sub:lem-split} for more details and the explanation that it is always possible to choose such an $R$.)\n\n\n Let $Q$ be an integer such that $p|Q$ if and only if $p|R$ or some place $v$ of ${\mathbf F}$ above $p$ belongs to $S_{{\rm bad}}$.\n Examining the dependence of various constants in Proposition \ref{p:appendix2}\n leads to the following main result of this section. For each finite place $v\notin S_{{\rm bad}}$, denote by $A_v$ a maximal ${\mathbf F}_v$-split torus of $G\times_{{\mathbf F}} {\mathbf F}_v$.\n\n\begin{thm}\label{t:appendeix2}\n There exist $a_G,b_G,e_G\ge 0$ (depending on ${\mathbf F}$, $G$ and $\Xi^{\mathrm{sp}}$) such that\n \begin{itemize}\n \item for every finite $v\notin S_{{\rm bad}}$,\n \item for every semisimple $\gamma\in G({\mathbf F}_v)$,\n \item for every $\lambda\in X_*(A_v)$ and $\kappa\in{\mathbb Z}_{\ge0}$ such that $\|\lambda\|\le\kappa$,\n \end{itemize}\n \[\n \n 0\le O^{G({\mathbf F}_v)}_\gamma(\tau^G_\lambda,\mu^{\mathrm{can}}_{G,v},\mu^{\mathrm{can}}_{I_\gamma,v})\n \le q_v^{ a_G+b_G\kappa}\cdot D_v^G(\gamma)^{-e_G\/2}.\n \]\n\end{thm}\n\n\begin{rem}\n It is worth drawing a comparison between the above theorem and Theorem \ref{t:appendix1} proved by Kottwitz.\n In the latter the test function (in the full Hecke algebra) and the base $p$-adic field are fixed whereas\n the main point of the former is to allow the test function (in the unramified Hecke algebra) and\n the place $v$ to vary. The two theorems are complementary to each other and will\n play a crucial role in the proof of Theorem \ref{t:weight-varies}.\n\end{rem}\n\n\begin{rem} In an informal communication Kottwitz and Ng\^{o} pointed out that there might be yet another approach based on a geometric argument involving affine Springer fibers, as in \cite[\S15]{GKM04}, which might lead to a streamlined and conceptual proof, as well as optimized values of the constants $a_G$ and $b_G$. 
Appendix~\ref{s:app:B} provides an important step in that direction; see Theorem~\ref{thm:transfer-fam}, which implies that the constants are transferable from finite characteristic to characteristic zero.\n\end{rem}\n\n\n\begin{proof}\n Since the case of tori is clear, we may assume that $r^{\semis}_G\ge 1$. Write $S_Q$ for the set of places $v$ dividing $Q$.\n Let $\theta\in {\mathscr C}(\Gamma_1)$. (Recall the definition of $\Gamma_1$ and ${\mathscr C}(\Gamma_1)$ from \S\ref{sub:ST-meas} and \S\ref{sub:lim-of-Plan}.)\n Our strategy is to find $a_{G,\theta},b_{G,\theta},e_{G,\theta}\ge 0$ which satisfy the requirements \eqref{e:app:Levi}, \eqref{e:app:Elliptic1}, and \eqref{e:app:Elliptic2} on $a_{G,v},b_{G,v},e_{G,v}$ at all $v\in {\mathcal V}_{{\mathbf F}}(\theta)\backslash (S_{{\rm bad}}\cup S_Q)$. (As for \eqref{e:app:Levi}, we inductively find $a_{M,\theta},b_{M,\theta},e_{M,\theta}\ge 0$ for all local Levi subgroups $M$ of $G$ as will be explained below.) In fact we will choose $e_{G,\theta}=0$. Once this is done for every $\theta$ in the finite set ${\mathscr C}(\Gamma_1)$, we will similarly treat the places $v\in S_Q$ not contained in $S_{{\rm bad}}$ to finish the proof.\n\n\n We would like to explain an inductive choice of $a_{M,\theta},b_{M,\theta},e_{M,\theta}\ge 0$ for a fixed $\theta$. To do so we ought to clarify what Levi subgroups $M$ of $G$ we consider. Let $\Delta$ denote the set of $\mathbf{B}$-positive simple roots for $(G,\mathbf {T})$. Via an identification $G\times_{{\mathbf F}} \overline{{\mathbf F}}\simeq G\times_{\mathbb Z} \overline{{\mathbf F}}$ we may view $\Delta$ as the set of simple roots for $G$ equipped with an action of $\Gamma_1$, cf. \cite[\S1.3]{Bor79}. Note that ${\rm Frob}_v$ acts as $\theta\in \Gamma_1$ on $\Delta$ for all $v\in {\mathcal V}_{{\mathbf F}}(\theta)\backslash S_{{\rm bad}}$. 
According to \cite[\S3.2]{Bor79}, the $\theta$-stable subsets of $\Delta$ are in bijection with $G({\mathbf F}_v)$-conjugacy classes of ${\mathbf F}_v$-parabolic subgroups of $G$. For each $v\in {\mathcal V}_{{\mathbf F}}(\theta)\backslash S_{{\rm bad}}$, fix a Borel subgroup $B_v$ of $G$ over ${\mathbf F}_v$ containing the centralizer $T_v$ of $A_v$ in $G$ so that the following are in a canonical bijection with one another.\n\begin{itemize}\n\item $\theta$-stable subsets $\Upsilon$ of $\Delta$\n\item parabolic subgroups $P_v$ of $G$ containing $B_v$\n\end{itemize}\n Denote by $P_{\Upsilon,v}$ the parabolic subgroup corresponding to $\Upsilon$ and by $M_{\Upsilon,v}$ its Levi subgroup containing $T_v$. Here is an important observation: the inequalities \eqref{e:app:Levi}, \eqref{e:app:Elliptic1}, and \eqref{e:app:Elliptic2} to be satisfied by $a_{M_{\Upsilon},v},b_{M_{\Upsilon},v},e_{M_{\Upsilon},v}$ depend only on $\theta$ and not on $v\in {\mathcal V}_{{\mathbf F}}(\theta)\backslash S_{{\rm bad}}$. (We consider the case where $G$ and $M$ of those inequalities are $ M_{\Upsilon}$ and an ${\mathbf F}_v$-Levi subgroup of $M_{\Upsilon}$, respectively.) Hence we will write $a_{M_{\Upsilon},\theta},b_{M_{\Upsilon},\theta},e_{M_{\Upsilon},\theta}\ge 0$ for these constants. What we need to do is to define them inductively according to the semisimple rank of $M_{\Upsilon}$ such that \eqref{e:app:Levi}, \eqref{e:app:Elliptic1}, and \eqref{e:app:Elliptic2} hold true. In particular the desired $a_{G,\theta},b_{G,\theta},e_{G,\theta}$ will be obtained and the proof will be finished (by returning to the first paragraph in the current proof).\n\n Now the inductive choice of $a_{M_{\Upsilon},\theta},b_{M_{\Upsilon},\theta},e_{M_{\Upsilon},\theta}$ is easy to make once the choice of $a_{M_{\Omega},\theta},b_{M_{\Omega},\theta},e_{M_{\Omega},\theta}$ has been made for all $\Omega\subsetneq \Upsilon$. 
Indeed, we may choose $e_{M_{\Upsilon},\theta}=0$ to fulfill \eqref{e:app:Elliptic1} (since $Z_3\ge 0$; see Lemma \ref{l:B-4} below) and $a_{M_{\Upsilon},\theta},b_{M_{\Upsilon},\theta}$ to be large enough to verify \eqref{e:app:Levi} and \eqref{e:app:Elliptic2}. Notice that $Z_1,Z_2,Z_3$ of \eqref{e:app:Elliptic2} (which are constructed in Lemma \ref{l:B-4} below) depend only on the group-theoretic information of $M_{\Upsilon}$ (such as the dimension, rank, affine root data, $\delta_{M_{\Upsilon}}$ of $M_{\Upsilon}$ as well as an embedding of the Chevalley form of $M_{\Upsilon}$ into $\GL_d$ coming from $\Xi^{\mathrm{sp}}$) but not on $v$, cf. Remark \ref{r:uniformity}.\n\n It remains to treat $v\in S_Q$ not contained in $S_{{\rm bad}}$. By Lemma \ref{l:conj-image-in-diag} there exists a local embedding $\Xi_v:G\rightarrow \GL_d$ defined over $\mathcal{O}_v$, at each such $v$, such that $\Xi_v$ satisfies the assumption preceding Proposition \ref{p:appendix2}. Let $a_{G,v},b_{G,v},e_{G,v}$ be as in the proposition. As we remarked earlier in the proof, we can and will take $e_{G,v}=0$.\n Set $a_G$ to be the maximum of $\max_{\theta\in {\mathscr C}(\Gamma_1)} a_{G,\theta}$ and $\max_{v\in S_Q\backslash S_{{\rm bad}}} a_{G,v}$. Define $b_G$ similarly and put $e_G=0$. As these choices of $a_G$, $b_G$ and $e_G$ will only improve the inequalities \eqref{e:app:Levi}, \eqref{e:app:Elliptic1}, and \eqref{e:app:Elliptic2} required for $a_{G,\theta}$, $b_{G,\theta}$ and $e_{G,\theta}$, the proof will be finished.\n\n\end{proof}\n\n In view of Theorem \ref{t:appendix1} and other observations in harmonic analysis, a natural question is whether it is possible to achieve $e_{G}=1$. This is a deep and difficult question which would be of independent interest. It was a pleasant surprise to the authors that the theory of arithmetic motivic integration provides a solution. A precise theorem due to Cluckers, Gordon, and Halupczok is stated below. 
It is worth remarking that the method of proof is significantly different from that of this section and also that they make use of Theorem \ref{t:appendix1}, the local boundedness theorem.\n\n\begin{thm}\label{t:optimal-expo}\n Theorem \ref{t:appendeix2} holds true with $e_G=1$.\n\end{thm}\n\n\begin{proof}\n See Appendix~\ref{s:app:B}.\n\end{proof}\n\n\begin{rem}\n It would be interesting to ask about the analogue of the theorem in the case of twisted or weighted orbital integrals. Such a result would be useful in more general situations than the one considered in this paper.\n\end{rem}\n\n\n\subsection{The noncentral elliptic case}\label{sub:app:elliptic}\n\n The objective of this subsection is to establish Corollary \ref{c:B-1}, which was used in Step 2-3\n of the proof of Proposition \ref{p:appendix2} above. Since the proof is quite complicated, let us guide the reader. The basic idea, going back to Langlands, is to interpret the orbital integral $O^{G(F)}_{\gamma}(\tau^G_{\lambda})$ in question as the number of points in the building fixed ``up to $\lambda$'' under the action of $\gamma$. The set of such points, denoted $X_F(\gamma,\lambda)$ below, is finite since $\gamma$ is elliptic. Then it is shown that every point of $X_F(\gamma,\lambda)$ is within a certain distance from a certain apartment, after enlarging the ground field $F$ to a finite extension. We exploit this to cover $X_F(\gamma,\lambda)$ by a ball of explicit radius in the building. 
By counting the number of points in the ball (which is of course much more tractable than counting $|X_F(\gamma,\lambda)|$) we arrive at the desired bound on the orbital integral.\n The proof presented here is inspired by the beautiful exposition of \cite[\S\S3-5]{Kot05} but uses (not so beautiful) brute force and crude bounds at several places.\n\n\n Throughout this subsection the notation of \S\ref{sub:local-bound-orb-int} is adopted and $\gamma$ is assumed to be noncentral and elliptic in $G(F)$.\nThen $I_\gamma(F)$ is a compact group, on which the Euler--Poincar\'e measure $\mu^{{\rm EP}}_{I_\gamma}$\nassigns total volume 1. Our aim is to bound $O^{G(F)}_{\gamma}({\mathbf{1}}_{K\lambda(\varpi)K}, \mu^{\mathrm{can}}_G,\mu^{\mathrm{can}}_{I_\gamma})$.\n It follows from \cite[Thm 5.5]{Gro97} (for the equality) and Proposition \ref{p:Gross-motives} that\n$$\left|\frac{\mu^{{\rm EP}}_{I_\gamma}}{\mu^{\mathrm{can}}_{I_\gamma}}\right|\n= \frac{\prod_{d\ge 1} \det\left(1-{\rm Frob}_v q_v^{d-1}\left|({\rm Mot}_{I_\gamma,d})^{I_v}\right.\right)}\n{|H^1(F,I_\gamma)|}\le \prod_{d\ge 1}(1+q_v^{d-1})^{\dim {\rm Mot}_{I_\gamma,d}}$$ \begin{equation}\le (1+q_v^{(\dim I_\gamma+1)\/2})^{{\rm rk} I_\gamma}\n\le (1+q_v^{d_G})^{r_G}\le q_v^{r_G(d_G+1)}.\label{e:mu-EP\/mu}\end{equation}\nThus we may as well bound $O^{G(F)}_{\gamma}({\mathbf{1}}_{K\lambda(\varpi)K}, \mu^{\mathrm{can}}_G,\mu^{{\rm EP}}_{I_\gamma})$.\n\n Let $T_\gamma$ be a maximal torus of $I_\gamma$ defined over $F$ containing $\gamma$.\n By Lemma \ref{l:torus-splitting}, there exists a Galois extension $F'\/F$ with\n \begin{equation}\label{e:F':F}[F':F]\le w_G s_G\end{equation}\nsuch that $T_\gamma$ is a split torus over $F'$.\n Hence $I_\gamma$ and $G$ are split groups over $F'$.\n Note that $F'$ is a tame extension of $F$ under the assumption that ${\rm char}\, k_F>w_Gs_G$.\nLet $A'$ be a maximal split torus of $G$ over $F'$ such that 
$A\times_F F'\subset A'$. Since maximal $F'$-split tori are conjugate over $F'$, we find\n$$y\in G(F') \quad \mbox{such that}\quad A'=yT_\gamma y^{-1}$$\nand fix such a $y$.\nWrite $\mathcal{O}'$, $\varpi'$ and $v'$ for the integer ring of $F'$, a uniformizer and the valuation on $F'$ such that $v'(\varpi')=1$.\nWith respect to the integral model of $G$ over $\mathcal{O}$ at the beginning of \S\ref{sub:local-bound-orb-int},\nwe put $K':=G(\mathcal{O}')$. A point of $G(F)\/K$ will be denoted $\overline{x}$ and any of its lifts\nin $G(F)$ will be denoted $x$. Let $\overline{x}_0\in G(F)\/K$ (resp. $\overline{x}'_0\in G(F')\/K'$) denote the\nelement represented by the trivial coset of $K$ (resp. $K'$). Then $\overline{x}_0$ (resp. $\overline{x}'_0$)\nmay be thought of as a base point of the building ${\mathcal{B}}(G(F),K)$ (resp. ${\mathcal{B}}(G(F'),K')$) and\nits stabilizer is identified with $K$ (resp. $K'$). There exists an injection\n\begin{equation}\label{e:embed-building}{\mathcal{B}}(G(F),K)\hookrightarrow {\mathcal{B}}(G(F'),K')\end{equation} such that ${\mathcal{B}}(G(F),K)$ is the set of ${\rm Gal}(F'\/F)$-fixed points of ${\mathcal{B}}(G(F'),K')$. (This is the case because $F'$ is tame over $F$.) 
The natural injection $G(F)\/K\hookrightarrow G(F')\/K'$ coincides with the injection induced by \eqref{e:embed-building} on the set of vertices.\n\nDefine $\lambda'\in X_*(A')$ by $\lambda'=e_{F'\/F} \lambda$ (where $e_{F'\/F}$ is the ramification index of $F'$ over $F$) so that $\lambda'(\varpi')=\lambda(\varpi)$ and\n\begin{equation}\label{e:mu'-abs-val}\|\lambda'\|=e_{F'\/F} \|\lambda\|\le e_{F'\/F}\kappa.\end{equation}\nFor (the fixed $\gamma$ and) a semisimple element $\delta\in G(F')$, set\n\begin{eqnarray}\n X_F(\gamma,\lambda)&:=&\{ \overline{x}\in G(F)\/K: \overline{x}^{-1} \gamma \overline{x}\in K\lambda(\varpi) K\}\nonumber\\\nX_{F'}(\delta,\lambda')&:=&\{ \overline{x}'\in G(F')\/K': (\overline{x}')^{-1} \delta \overline{x}'\in K'\lambda'(\varpi') K'\}.\nonumber\n\end{eqnarray}\nBy abuse of notation we write $\overline{x}^{-1} \gamma \overline{x}\in K\lambda(\varpi) K$ for the condition that $x^{-1}\gamma x\in K\lambda(\varpi) K$ for some (thus every) lift $x\in G(F)$ of $\overline{x}$ and similarly for the condition on $\overline{x}'$.\nIt is clear that $X_F(\gamma,\lambda)\subset X_{F'}(\gamma,\lambda')\cap( G(F)\/K)$. By (3.4.2) of \cite{Kot05}\n\begin{equation}\label{e:orb-via-fixed} O^{G(F)}_{\gamma}({\mathbf{1}}_{K\lambda(\varpi)K}, \mu^{\mathrm{can}}_G,\mu^{{\rm EP}}_{I_\gamma})=| X_F(\gamma,\lambda)|.\end{equation}\nOur goal of bounding the orbital integral on the left-hand side thus translates into the problem of bounding $| X_F(\gamma,\lambda)|$.\n\n\nLet ${\rm Apt}(A'(F'))$ denote the apartment for $A'(F')$. Likewise ${\rm Apt}(T_\gamma(F))$ and\n${\rm Apt}(T_\gamma(F'))$ are given the obvious meanings. 
We have $\overline{x}'_0\in {\rm Apt}(A'(F'))$.\nThe metrics on ${\mathcal{B}}(G(F),K)$ and ${\mathcal{B}}(G(F'),K')$ are\nchosen such that \eqref{e:embed-building} is an isometry.\nThe metric on ${\mathcal{B}}(G(F'),K')$ is\ndetermined by its restriction to ${\rm Apt}(A'(F'))$, which is in turn pinned down by\na (non-canonical) choice of a Weyl-group invariant scalar product on $X_*(A')$, cf. \cite[\S2.3]{Tit79}. Henceforth we fix the scalar product once and for all. Scaling the scalar product does not change our results, as the reader can check.\n\begin{rem}\nFor any other tame extension $F''$ of $F$ and a maximal split torus $A''$ of $G$ over $F''$, we can find an isomorphism between $X_*(A')$ and $X_*(A'')$ over the composite field of $F'$ and $F''$, well defined up to the Weyl group action. So the scalar product on $X_*(A'')$ is uniquely determined by that on $X_*(A')$. Hence we need not choose a scalar product again when considering a different $\gamma\in G(F)$.\n\end{rem}\n\n\nFor any $F'$-split maximal torus $A''$ of $G$ (for instance $A''=T_\gamma$ or $A''=A'$) and the associated set of roots $\Phi=\Phi(G,A'')$ and set of coroots $\Phi^\vee=\Phi^\vee(G,A'')$,\nlet $l_{\min}(\Phi)$ (resp. $l_{\max}(\Phi)$) denote the shortest (resp. longest) length of a positive coroot\nin $\Phi^\vee$. Note that these are independent of the choice of $A''$ and completely determined by the previous choice of a Weyl group invariant scalar product on $X_*(A')$. It is harmless to assume that we have chosen the scalar product such that the shortest positive coroot in each irreducible system of $X_*(A')$ has length $l_{\min}(\Phi)$.\n\n Fix a Borel subgroup $B'$ of $G$ (over $F'$) containing $A'$ so that $y^{-1}B'y$ is a Borel subgroup containing $T_\gamma$. Relative to these Borel subgroups we define the subsets of positive roots $\Phi^+(G,A')$ and $\Phi^+(G,T_\gamma)$. Let $m_{\Xi^{\mathrm{sp}}}$ be as in Lemma \ref{l:app:split1}. 
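To orient the reader about the sizes involved: with respect to any Weyl-group invariant scalar product, the possible ratios of coroot lengths are very limited, namely\n\begin{equation*}\n \frac{l_{\max}(\Phi)}{l_{\min}(\Phi)}\in\{1,\sqrt{2},\sqrt{3}\}\n\end{equation*}\nwithin each irreducible system, the three values occurring for the simply-laced types, for types $B_n$, $C_n$ and $F_4$, and for type $G_2$, respectively.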
In order to bound $|X_F(\gamma,\lambda)|$ in \eqref{e:orb-via-fixed}, we control the larger set $X_{F'}(\delta,\lambda')$ by bounding the distance from its points to the apartment for $A'$.\n\n\n\begin{lem}\label{l:dbl-coset-distance}\n There exist constants $C=C(G,\Xi)>0$ and $Y=Y(G)\in {\mathbb Z}_{\ge1}$ such that\nfor every $\delta\in A'(F')$ and $\overline{x}'\in G(F')\/K'$ with $(\overline{x}')^{-1} \delta \overline{x}'\in K'\lambda'(\varpi') K'$,\n$$ d(\overline{x}',{\rm Apt}(A'(F')))\le\nl_{\max}(\Phi)\cdot C|\Delta^+|\cdot Y^{|\Phi^+|}w_G s_G\qquad\qquad$$\n$$\qquad\qquad\times \left(\sum_{\alpha\in \Phi^+(G,A')} |v(1-\alpha^{-1}(\delta))|+ Y m_{\Xi^{\mathrm{sp}}}(m_{\Xi^{\mathrm{sp}}}+1)\kappa\right),$$\nwhere the left hand side denotes the shortest distance from $\overline{x}'$ to ${\rm Apt}(A'(F'))$.\n\end{lem}\n\n\begin{proof}[Proof of Lemma \ref{l:dbl-coset-distance}]\n We may write $\overline{x}'=an\overline{x}_0'$ for some $a\in A'(F')$ and $n\in N(F')$. As both sides of the above inequality\nare invariant under multiplication by $a$, we may assume that $a=1$.\nLet $\lambda_\delta\in X_*(A')$ be such that $\delta\in \lambda_\delta(\varpi')A'(\mathcal{O}')$.\n\n\medskip\n\n\begin{enumerate}\n\n\item[Step 1.] Show that $\delta^{-1} n^{-1}\delta n\in K'\lambda_0(\varpi')K'$\nfor some $\lambda_0\in X_*(A')$ with $\|\lambda_0\|\le e_{F'\/F}(m_{\Xi^{\mathrm{sp}}}+1)\kappa$.\n\n\medskip\n\n\n By the Cartan decomposition there exists a $B'$-dominant $ \lambda_0\in X_*(A')$ such that\n $\delta^{-1} n^{-1}\delta n\in K'\lambda_0(\varpi')K'$.\n The condition of the lemma on $\delta$ is unraveled as\n $(x'_0)^{-1} n^{-1} \delta n x'_0\in K'\lambda'(\varpi') K'$.
So\n$$\\delta^{-1}n^{-1} \\delta n\\in \\delta^{-1} K'\\lambda'(\\varpi') K'\n\\subset (K'\\lambda_\\delta^{-1}(\\varpi') K')(K'\\lambda'(\\varpi') K').$$\nLet $w$ be a Weyl group element for $A'$ in $G$ such that $w\\lambda_\\delta^{-1}$ is $B'$-dominant. The fact that $K'\\lambda_0(\\varpi')K'$ intersects\n$(K'\\lambda_\\delta^{-1}(\\varpi') K')(K'\\lambda'(\\varpi') K')$ implies (\\cite[Prop 4.4.4.(iii)]{BT72}) that\n$$\\|\\lambda_0\\|\\le \\| w\\cdot \\lambda_\\delta^{-1}\\|+\\|\\lambda'\\| = \\| \\lambda_\\delta\\|+\\|\\lambda'\\|.$$\nNote that $ v'(\\alpha(\\delta))\\in\n[-m_{\\Xi^{\\mathrm{sp}}} \\|\\lambda'\\|,m_{\\Xi^{\\mathrm{sp}}} \\|\\lambda'\\|]$ for all $\\alpha\\in \\Phi(G,A')$ by Lemma \\ref{l:app:split1} since a conjugate of $\\delta$ belongs to $K'\\lambda'(\\varpi')K'$. This implies that $\\|\\lambda_{\\delta}\\|\\le m_{\\Xi}\\|\\lambda'\\|$. On the other hand $\\|\\lambda'\\|\\le e_{F'\/F}\\kappa$ according to \\eqref{e:mu'-abs-val}. This completes Step 1.\n\n\\medskip\n\n Before entering Step 2, we notify the reader that we are going to use the convention and notation of the Chevalley basis as recalled in \\S\\ref{sub:lem-split} below. In particular $n\\in N(F')$ can be written as (cf. \\eqref{e:y=product})\n \\begin{equation}\\label{e:n=product}n=x_{\\alpha_1}(X_{\\alpha_1})\\cdots x_{\\alpha_{|\\Phi^+|}}(X_{\\alpha_{|\\Phi^+|}})\\end{equation}\n for unique $X_{\\alpha_1},...,X_{\\alpha_{|\\Phi^+|}}\\in F'$.\n\\medskip\n\n\\item[Step 2.] 
Show that there exists a constant $\mathcal {M}_{|\Phi^+|}\ge 0$ (explicitly defined in \eqref{e:cM_i} below) such that\n$v'(X_{\alpha_i})\ge -\mathcal {M}_{|\Phi^+|}$ for all $1\le i\le |\Phi^+|$.\n\n\n\medskip\n\n\n In our setting we compute\n\begin{eqnarray}\n \delta^{-1}n^{-1}\delta n\n&=& \delta^{-1} \left(\prod_{i=|\Phi^+|}^{1} x_{\alpha_i}(-X_{\alpha_i})\right) \delta\n\prod_{i=1}^{|\Phi^+|} x_{\alpha_i}(X_{\alpha_i})\n\nonumber\\&= &\left(\prod_{i=|\Phi^+|}^{1} x_{\alpha_i}(-\alpha^{-1}_i(\delta) X_{\alpha_i}) \right)\n\prod_{i=1}^{|\Phi^+|} x_{\alpha_i}(X_{\alpha_i})\n\nonumber\\&=&\prod_{i=1}^{|\Phi^+|} x_{\alpha_i}\left((1-\alpha^{-1}_i(\delta))X_{\alpha_i}\n+ P_{\alpha_i} \right)\label{e:delta-n-delta-n}\n\end{eqnarray}\nwhere the last equality follows from the repeated use of \eqref{e:commutator} to rearrange the terms.\nHere $P_{\alpha_i}$ is a polynomial (which could be zero) in $\alpha_{j}^{-1}(\delta)$\nand $X_{\alpha_j}$ with integer coefficients for $j<i$. Comparing \eqref{e:delta-n-delta-n} with the conclusion of Step 1 and arguing by induction on $i$, one obtains the bound $v'(X_{\alpha_i})\ge -\mathcal {M}_i$ of \eqref{e:greater-cM_i}, with $\mathcal {M}_i$ as in \eqref{e:cM_i}. This completes Step 2.\n\n\medskip\n\n\item[Step 3.] Find $a\in A'(F')$ such that $a^{-1}na\in K'$.\n\n\medskip\n\n Choose a constant $C=C(G,\Xi)>0$,\n depending only on the Chevalley group $G$ and $\Xi$, and integers $a^0_{\alpha}\in [-C,0]$ for\n$\alpha\in \Delta^+$ such that\n$$1\le \sum_{\alpha\in \Delta^+} (-a^0_\alpha) \langle \beta,\alpha^\vee\rangle \le C,\quad \forall \beta\in \Delta^+.$$\n(This is possible because the matrix $(\langle \beta,\alpha^\vee\rangle )_{\beta,\alpha\in \Delta^+}$ is nonsingular.
For instance one finds $a^0_{\alpha}\in {\mathbb Q}$ satisfying the above inequalities for $C=1$ and then eliminates denominators in the $a^0_{\alpha}$ by multiplying by a large positive integer.)\nNow put $a_{\alpha}:=\mathcal {M}_{|\Phi^+|} a^0_{\alpha}\in [-C\mathcal {M}_{|\Phi^+|},0]$ and\n $a:=\prod_{\alpha\in \Delta^+} \alpha^\vee(\varpi')^{a_\alpha}\in A'(F')$ so that\n\begin{equation}\label{e:bound-v(a)}\n\mathcal {M}_{|\Phi^+|}\le -v(\beta(a))\le C\cdot\mathcal {M}_{|\Phi^+|},\n\quad \forall \beta\in \Delta^+.\end{equation}\nIn fact \eqref{e:bound-v(a)} implies that the left inequality holds for all $\beta\in \Phi^+$.\nHence\n$$a^{-1}na=\prod_{i=1}^{|\Phi^+|}\nx_{\alpha_i}(\alpha_i(a)^{-1} X_{\alpha_{i}})$$\n$$\in \prod_{i=1}^{|\Phi^+|} U_{\alpha_i,v(X_{\alpha_i})-v(\alpha_i(a))}\subset \prod_{i=1}^{|\Phi^+|} U_{\alpha_i,\mathcal {M}_{|\Phi^+|}\n+v(X_{\alpha_i})}.$$\nIn light of \eqref{e:greater-cM_i},\n$\mathcal {M}_{|\Phi^+|}\n+v(X_{\alpha_i})\ge 0$. Hence $a^{-1}na\in K'$.\n\n\medskip\n\n\item[Step 4.] Conclude the proof.\n\n\medskip\n\n Step 3 shows that $a\overline{x}'_0\in {\rm Apt}(A'(F'))$ is invariant under the left multiplication\naction by $n$ on ${\mathcal{B}}(G(F'),K')$, which acts as an isometry. Hence\n\begin{equation}\label{e:Step4-1}d(\overline{x}', {\rm Apt}(A'(F')))\le d(n\overline{x}'_0, a\overline{x}'_0)= d(n\overline{x}'_0, na\overline{x}'_0)=\nd(\overline{x}'_0, a\overline{x}'_0).\end{equation}\nOn the other hand, for any\n$\overline{x}'\in {\rm Apt}(A'(F'))$ and any positive simple coroot $\alpha^{\vee}$,\n\begin{equation}\label{e:dist-simple-coroot}\nl_{\min}(\Phi)\le d(\overline{x}', \alpha^\vee(\varpi')^{-1} \overline{x}')\le l_{\max}(\Phi)\end{equation}\n(The left inequality holds by definition of $l_{\min}$.
The right inequality comes from the\nstandard fact\nthat the length of the longest coroot is bounded by $6l_{\\min}$ in an irreducible system.)\nSince $a=\\prod_{\\alpha\\in \\Delta^+} (\\alpha^\\vee(\\varpi'))^{a_{\\alpha}}$ with\n$a_\\alpha\\in [-C\\mathcal {M}_{|\\Phi^+|},0]$, a repeated use of \\eqref{e:dist-simple-coroot}, together with\na triangle inequality, shows that\n\\begin{equation}\\label{e:Step4-3} d(\\overline{x}'_0, a\\overline{x}'_0)\\le l_{\\max}(\\Phi)\\cdot C|\\Delta^+|\\cdot \\mathcal {M}_{|\\Phi^+|}.\\end{equation}\nLemma \\ref{l:dbl-coset-distance} follows from\n\\eqref{e:Step4-1}, \\eqref{e:Step4-3}, the formula for $\\mathcal {M}_{|\\Phi^+|}$ (\\eqref{e:cM_i}), the inequality for $\\|\\lambda_0\\|$ in Step 1, and $e_{F'\/F}\\le [F':F]\\le w_G s_G$ as we saw in \\eqref{e:F':F}.\n\n\\end{enumerate}\n\n\\end{proof}\n\n Since $\\gamma$ is elliptic, ${\\rm Apt}(T_\\gamma(F))$ is a singleton. Let $\\overline{x}_1$ denote its only point.\nThen the ${\\rm Gal}(F'\/F)$-action on ${\\rm Apt}(T_\\gamma(F'))$ has $\\overline{x}_1$ as the unique fixed point.\nMotivated by Lemma \\ref{l:dbl-coset-distance} we set $\\mathcal {M}(\\gamma,\\kappa)$ to be\n$$l_{\\max}(\\Phi)\\cdot C|\\Delta^+|\\cdot Y^{|\\Phi^+|}w_G s_G\\left(\n\\sum_{\\alpha\\in \\Phi^+(G,T_\\gamma)} |v(1-\\alpha^{-1}(\\gamma))| + Y m_{\\Xi^{\\mathrm{sp}}} (m_{\\Xi^{\\mathrm{sp}}}+1) \\kappa\\right)$$\nand similarly $ \\mathcal {M}(\\delta,\\kappa)$ using $\\alpha\\in \\Phi^+(G,A')$ in the sum instead.\n(Useful to know: $v(1-\\alpha(\\gamma))+v(1-\\alpha^{-1}(\\gamma))\\le 2|v(1-\\alpha^{-1}(\\gamma))|$\nfor all $\\alpha\\in \\Phi^+$.) 
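Note that $\mathcal {M}(\delta,\kappa)=\mathcal {M}(\gamma,\kappa)$ for $\delta=y\gamma y^{-1}$: conjugation by $y$ carries $T_\gamma$ to $A'$ and induces a bijection $\Phi^+(G,T_\gamma)\to \Phi^+(G,A')$ (compatible with the chosen Borel subgroups), under which\n$$\sum_{\alpha\in \Phi^+(G,A')} |v(1-\alpha^{-1}(\delta))|=\sum_{\alpha\in \Phi^+(G,T_\gamma)} |v(1-\alpha^{-1}(\gamma))|.$$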
Define (the set of vertices in) a closed ball in $G(F)\/K$: for\n$v \\in G(F)\/K$,\n$$\\mathrm{Ball}(v,R):=\\{ \\overline{x}\\in G(F)\/K: d(\\overline{x},v)\\le R\\}.$$\n\n\n\\begin{lem}\\label{l:B-3}\n $X_F(\\gamma,\\lambda)~\\subset~\\mathrm{Ball}(\\overline{x}_1,\\mathcal {M}(\\gamma,\\kappa)).$\n\\end{lem}\n\n\\begin{proof}\n As we noted above, $X_F(\\gamma,\\lambda)\\subset X_{F'}(\\gamma,\\lambda')=X_{F'}(y^{-1}\\delta y,\\lambda')$.\nLemma \\ref{l:dbl-coset-distance} tells us that\n$$\\overline{x}\\in X_F(\\gamma,\\lambda)~\\Rightarrow~d(y\\overline{x},{\\rm Apt}(A'(F')))\\le \\mathcal {M}(\\delta,\\kappa)\n~\\Rightarrow~ d(\\overline{x},{\\rm Apt}(T_\\gamma(F')))\\le \\mathcal {M}(\\delta,\\kappa).$$\nThe last implication uses ${\\rm Apt}(A'(F'))=y {\\rm Apt}(T_\\gamma(F')) y^{-1}$.\n We have viewed $\\overline{x}$ as a point of ${\\mathcal{B}}(G(F'),K')$\nvia the isometric embedding ${\\mathcal{B}}(G(F),K)\\hookrightarrow {\\mathcal{B}}(G(F'),K')$.\nIn order to prove the lemma, it is enough to check that $d(\\overline{x},\\overline{x}_1)\\le d(\\overline{x},\\overline{x}_2)$ for every\n$\\overline{x}_2\\in {\\rm Apt}(T_\\gamma(F'))$. To this end,\nwe suppose that there exists an $\\overline{x}_2$ with\n\\begin{equation}\\label{e:assump-dist}d(\\overline{x},\\overline{x}_1)> d(\\overline{x},\\overline{x}_2)\\end{equation}\nand will draw a contradiction.\n\n As $\\sigma\\in{\\rm Gal}(F'\/F)$ acts on ${\\mathcal{B}}(G(F'),K')$ by isometry,\n$d(\\overline{x},\\sigma \\overline{x}_2)=d(\\overline{x},\\overline{x}_2)$. 
As ${\\rm Apt}(T_\\gamma(F'))$\nis preserved under the Galois action, $\\sigma\\overline{x}_2\\in {\\rm Apt}(T_\\gamma(F'))$.\nAccording to the inequality of \\cite[2.3]{Tit79}, for any $x,y,z\\in {\\mathcal{B}}(G(F'),K')$ and\nfor the unique mid point $m=m(x,y)\\in {\\mathcal{B}}(G(F'),K')$ such that $d(x,m)=d(y,m)=\\frac{1}{2}d(x,y)$,\n\\begin{equation}\\label{e:Tits}d(x,z)^2+d(y,z)^2\n\\ge 2 d(m,z)^2 + \\frac{1}{2}d(x,y)^2.\\end{equation}\nConsider the convex hull ${\\mathscr C}$ of ${\\mathscr C}_0:=\\{\\sigma\\overline{x}_2\\}_{\\sigma\\in{\\rm Gal}(F'\/F)}$.\nSince ${\\mathscr C}_0$ is contained in ${\\rm Apt}(T_\\gamma(F'))$, so is ${\\mathscr C}$.\nMoreover ${\\mathscr C}_0$ is fixed under ${\\rm Gal}(F'\/F)$,\nfrom which it follows that\n${\\mathscr C}$ is also preserved under the same action.\n(One may argue as follows. Inductively define ${\\mathscr C}_{i+1}$ to be the set consisting of the mid points $m(x,y)$\nfor all $x,y\\in {\\mathscr C}_{i}$. Then it is not hard to see that ${\\mathscr C}_i$ must be preserved under\n${\\rm Gal}(F'\/F)$ and that $\\cup_{i\\ge 0} {\\mathscr C}_i$ is a dense subset of ${\\mathscr C}$.)\nAs ${\\mathscr C}$ is a compact set, one may choose $\\overline{x}_3\\in {\\mathscr C}$ which has the minimal distance\nto $\\overline{x}$ among the points of ${\\mathscr C}$. By construction \\begin{equation}\\label{e:contrad-dist} d(\\overline{x}_3,\\overline{x})\\le d(\\overline{x}_2,\\overline{x}).\\end{equation}\nApplying \\eqref{e:Tits} to $(x,y,z)=(\\overline{x}_3,\\sigma\\overline{x}_3,\\overline{x})$,\n$$2 d(\\overline{x}_3,\\overline{x})^2= d(\\overline{x}_3,\\overline{x})^2+d(\\sigma\\overline{x}_3,\\overline{x})^2\n\\ge 2d(m(\\overline{x}_3,\\sigma\\overline{x}_3),\\overline{x})^2+\\frac{1}{2}d(\\overline{x}_3,\\sigma\\overline{x}_3)^2.$$\nAs $\\overline{x}_3,\\sigma\\overline{x}_3\\in {\\mathscr C}$, we also have $m(\\overline{x}_3,\\sigma\\overline{x}_3)\\in {\\mathscr C}$\nby the convexity of ${\\mathscr C}$. 
The choice of $\\overline{x}_3$ ensures that $d(\\overline{x}_3,\\overline{x})\\le d(m(\\overline{x}_3,\\sigma\\overline{x}_3),\\overline{x})$, therefore\n$d(\\overline{x}_3,\\sigma\\overline{x}_3)=0$, i.e. $\\overline{x}_3=\\sigma\\overline{x}_3)$. This holds for every $\\sigma\n\\in {\\rm Gal}(F'\/F)$. Therefore $\\overline{x}_3$ is a ${\\rm Gal}(F'\/F)$-fixed point of ${\\rm Apt}(T_\\gamma(F'))$.\nThis implies that $\\overline{x}_3=\\overline{x}_1$, but then \\eqref{e:contrad-dist} contradicts \\eqref{e:assump-dist}.\n\n\n\n\n\n\n\n\\end{proof}\n\n\\begin{lem}\\label{l:B-4} There exist constants $Z_1,Z_2,Z_3\\ge 0$\nsuch that\n$$| \\mathrm{Ball}(\\overline{x}_1,\\mathcal {M}(\\gamma,\\kappa)) |\\le q^{Z_1+Z_2\\kappa} D^G(\\gamma)^{Z_3}.$$\n\\end{lem}\n\n\\begin{rem}\\label{r:uniformity}\n It should be noted that $Z$ depends only on the affine root data for $G$ over $F$. Hence the formulas defining $Z_1,Z_2,Z_3$ at the end of the proof tell us that $Z_1,Z_2,Z_3$ depend only on the latter affine root data, the group-theoretic constants for $G$ (and its Chevalley form) and $\\Xi$. This is used in the proof of Theorem \\ref{t:appendeix2} to establish a kind of uniformity when traveling between places in ${\\mathcal V}(\\theta)\\backslash S_{{\\rm bad}}$ for a fixed $\\theta\\in {\\mathscr C}(\\Gamma_1)$ in the notation there.\n\\end{rem}\n\n\\begin{proof}\n To ease notation we write $\\mathcal {M}$ for $\\mathcal {M}(\\gamma,\\kappa)$ in the proof.\n Let $e_{\\min},e_{\\max}>0$ be the minimum and maximum lengths of edges of ${\\mathcal{B}}(G(F),K)$, respectively.\n Given a vertex $v$ of ${\\mathcal{B}}(G(F),K)$,\n let ${\\mathscr C}_1(v)$ denote the set of chambers of ${\\mathcal{B}}(G(F),K)$ which contains $v$ as a vertex.\n Then the union of all $C\\in {\\mathscr C}_1(v)$ contains\n $\\mathrm{Ball}(v,e_{\\min}\/R)$ for some $R$ determined by (any) chamber of\n ${\\mathcal{B}}(G(F),K)$ in a way that $R$ does not depend on the choice of metric\n on the building. 
(Fix a chamber and\n measure the distance from each vertex to the opposite face of maximal dimension. Take $R$\n to be the minimum of all such distances divided by $e_{\min}$.)\n For $i=1,2,...$, define ${\mathscr C}_{i+1}(v)$ to be the set of chambers which share a vertex with some chamber\n of ${\mathscr C}_i(v)$. For each $i\ge 1$, define ${\mathscr V}_i(v)$ to be the set of vertices in\n the union of all $C\in {\mathscr C}_i(v)$. Set $\mathcal {M}':=\lceil \frac{2\mathcal {M}+e_{\max}}{e_{\min}} \rceil$.\n Then, writing $\overline{x}_2$ for a vertex of ${\mathcal{B}}(G(F),K)$ nearest to $\overline{x}_1$,\n$$\n \mathrm{Ball}(\overline{x}_1,\mathcal {M})~\subset ~ \mathrm{Ball}(\overline{x}_2,\mathcal {M}+\frac{e_{\max}}{2})\n~\subset ~ \bigcup_{C\in {\mathscr C}_{\mathcal {M}'}(\overline{x}_1)} C.$$\n \begin{equation}\label{e:app:union-chambers}\n \mbox{so}\quad \mathrm{Ball}(\overline{x}_1,\mathcal {M})~\subset ~ {\mathscr V}_{\mathcal {M}'}(\overline{x}_1).\end{equation}\n\n Let us bound $|{\mathscr C}_1(\overline{x}_1)|$ and the number of vertices in ${\mathscr C}_1(\overline{x}_1)$.\n The stabilizer of $\overline{x}_1$ acts transitively on ${\mathscr C}_1(\overline{x}_1)$.\n Let $C\in {\mathscr C}_1(\overline{x}_1)$.
Then\n$$|{\\mathscr C}_1(\\overline{x}_1)|= |\\mathrm{Stab}(\\overline{x}_1)\/\\mathrm{Stab}(C)|\n= |G(\\mathcal{O})\/\\mathrm{Iw}| \\le |G(k_F)|\\le q^{d_G+r_G}$$\nwhere $\\mathrm{Iw}$ denotes an Iwahori subgroup of $G(\\mathcal{O})$.\n(See the proof of Lemma \\ref{l:double-coset-volume} for the last inequality.)\nEach chamber contains $\\dim A+1$ vertices as a $\\dim A$-dimensional simplex.\nHence for each $i\\ge 1$,\n$$ |{\\mathscr V}_i(v)|\\le (\\dim A+1)\\cdot |{\\mathscr C}_i(\\overline{x}_1)|.$$\nOn the other hand, $$ |{\\mathscr C}_{i+1}(\\overline{x}_1)|\\le\n|{\\mathscr V}_i(v)|\\cdot |{\\mathscr C}_{1}(\\overline{x}_1)|.$$\nHence $|{\\mathscr C}_{i}(\\overline{x}_1)|\\le (\\dim A+1)^{2^{i-1}-1} ({\\mathscr C}_1)^{2^{i-1}}\n\\le (\\dim A+1)^{i-1}q^{i(d_G+r_G)}$ and\n\\begin{equation}\\label{e:app:bound-vertices}|{\\mathscr V}_{\\mathcal {M}'}(\\overline{x}_1)|\\le\n(\\dim A+1) |{\\mathscr C}_{\\mathcal {M}'}(\\overline{x}_1)|\\le (r_G+1)^{\\mathcal {M}'}q^{\\mathcal {M}'(d_G+r_G)}.\\end{equation}\nWe can find a constant $Z\\ge 0$, independent of $\\gamma$ such that\n\\[\nZ:=\\max\\left(\\frac{e_{\\max}}{e_{\\min}}, \\frac{l_{\\max}(\\Phi)}{e_{\\min}}\\right).\n\\]\n Note\n\\[\n\\mathcal {M}'\\le \\frac{2\\mathcal {M}}{e_{\\min}} + Z+1\n\\le Z+1+ CZ|\\Delta^+|\\cdot Y^{|\\Phi^+|}w_G s_G \\left(\n\\sum_{\\alpha\\in \\Phi^+} |v(1-\\alpha^{-1}(\\gamma))| + Y m_{\\Xi^{\\mathrm{sp}}}(m_{\\Xi^{\\mathrm{sp}}}+1) \\kappa\\right),\n\\]\nwhich can be rewritten in the form\n\\[\n\\mathcal {M}'\\le Z+1+Z'_2\\kappa+ Z'_3\\sum_{\\alpha\\in \\Phi^+} |v(1-\\alpha^{-1}(\\gamma))|.\n\\]\nSince $|v(1-\\alpha^{-1}(\\gamma))|\\le v(1-\\alpha^{-1}(\\gamma))+v(1-\\alpha(\\gamma))+2m_\\Xi \\kappa$\nin view of \\eqref{e:app:alpha(gamma)},\nwe have $$q^{\\mathcal {M}'}\\le q^{Z+1+(Z'_2+2m_{\\Xi^{\\mathrm{sp}}} Z'_3)\\kappa} D^G(\\gamma)^{-Z'_3}.$$\nReturning to \\eqref{e:app:union-chambers} and \\eqref{e:app:bound-vertices},\n$$ | \\mathrm{Ball}(\\overline{x}_1,\\mathcal 
{M})|\n\\le |{\\mathscr V}_{\\mathcal {M}'}(\\overline{x}_1)|\n\\le q^{(r_G+1)\\mathcal {M}'}q^{\\mathcal {M}'(d_G+r_G)}$$ $$\n\\le (q^{Z+1+(Z'_2+2m_{\\Xi^{\\mathrm{sp}}} Z'_3)\\kappa} D^G(\\gamma)^{-Z'_3})^{d_G+2r_G+1}.$$\nWe finish by redefining\n\\begin{itemize}\n\\item $Z_1:=(Z+1)(d_G+2r_G+1)$,\n\\item $Z_2:=(Z'_2+2m_{\\Xi}Z'_3)(d_G+2r_G+1)=CZ|\\Delta^+|Y^{|\\Phi^+|}m_{\\Xi}((m_{\\Xi^{\\mathrm{sp}}}+1)Y+2)(d_G+2r_G+1)$,\n\\item $Z_3:=Z'_3(d_G+2r_G+1)=CZ|\\Delta^+|Y^{|\\Phi^+|}(d_G+2r_G+1)$.\n\\end{itemize}\n\\end{proof}\n\n\\begin{cor}\\label{c:B-1}\n $|O^{G(F)}_{\\gamma}({\\mathbf{1}}_{K\\lambda(\\varpi)K}, \\mu_G,\\mu^{{\\rm EP}}_{I_\\gamma})|\\le\nq^{r_G(d_G+1)} q^{Z_1+Z_2\\kappa} D^G(\\gamma)^{Z_3}$.\n\\end{cor}\n\n\\begin{proof}\n Follows from \\eqref{e:orb-via-fixed}, Lemma \\ref{l:B-3} and Lemma \\ref{l:B-4}.\n\n\\end{proof}\n\n\n\\subsection{Lemmas in the split case}\\label{sub:lem-split}\n\nThis subsection plays a supporting role for the previous subsections, especially \\S\\ref{sub:app:elliptic}.\n As in \\S\\ref{sub:global-bound-orb-int} let $G$ be a Chevalley group with a Borel subgroup $\\mathbf{B}$ containing a split maximal torus $\\mathbf {T}$, all over ${\\mathbb Z}$. Let $\\Xi^{\\mathrm{sp}}_{{\\mathbb Q}}:G\\hookrightarrow \\GL_d$ be a closed embedding of algebraic groups over ${\\mathbb Q}$. Let $\\mathbb{T}$ denote the diagonal maximal torus of $\\GL_d$, $\\mathbb{B}$ the upper triangular Borel subgroup of $\\GL_d$, and ${\\mathbb N}$ the unipotent radical of $\\mathbb{B}$.\n\n Extend $\\Xi^{\\mathrm{sp}}_{{\\mathbb Q}}$ to a closed embedding $\\Xi^{\\mathrm{sp}}:G\\hookrightarrow \\GL_d$ defined over ${\\mathbb Z}[1\/R]$ for some integer $R$ such that $\\Xi^{\\mathrm{sp}}(\\mathbf {T})$ (resp. $\\Xi^{\\mathrm{sp}}(\\mathbf{B})$) lies in the group of diagonal (resp. 
upper triangular) matrices of $\GL_d$.\n To see that this is possible, find a maximal ${\mathbb Q}$-split torus $\mathbb{T}'$ of $\GL_d$ containing $\Xi^{\mathrm{sp}}_{{\mathbb Q}}(\mathbf {T})$. Choose any Borel subgroup $\mathbb{B}'$ over ${\mathbb Q}$ containing $\mathbb{T}'$. Then there exists $g\in \GL_d({\mathbb Q})$ such that the inner automorphism $\mathrm{Int}(g):\GL_d\rightarrow \GL_d$ by $\gamma\mapsto g \gamma g^{-1}$ carries $(\mathbb{B}',\mathbb{T}')$ to $(\mathbb{B},\mathbb{T})$. Then $\Xi^{\mathrm{sp}}_{{\mathbb Q}}$ and $\mathrm{Int}(g)$ extend from ${\mathbb Q}$ to ${\mathbb Z}[1\/R]$ for some $R\in {\mathbb Z}$, namely at the expense of inverting finitely many primes (basically those in the denominators of the functions defining $\Xi^{\mathrm{sp}}_{{\mathbb Q}}$ and $\mathrm{Int}(g)$).\n\n\n\n Now suppose that $p$ is a prime not dividing $R$. Let $F$ be a finite extension of ${\mathbb Q}_p$ with integer ring $\mathcal{O}$\n and a uniformizer $\varpi$. The field\n $F$ is equipped with a unique discrete valuation $v_F$ such that $v_F(\varpi)=1$. Let $\lambda\in X_*(\mathbf {T})$.\n We are interested in assertions which work for $F$ as the residue characteristic $p$ varies.\n Lemma \ref{l:app:split1} (resp. Lemma \ref{l:app:split2}) below is used in Step 1 (resp. Step 2) of the proof of Lemma \ref{l:dbl-coset-distance}.\n Our convention for $T_\delta$ and $\Phi_\delta$ below is the same as for $T_\gamma$ and $\Phi_\gamma$ in Lemma \ref{l:bounding1-alpha(gamma)}.
The proof of the next lemma is little different from that of Lemma \\ref{l:bounding1-alpha(gamma)}.\n\n\\begin{lem}\\label{l:app:split1}\n There exists $m_{\\Xi^{\\mathrm{sp}}}\\in {\\mathbb Z}_{>0}$ such that\nfor every $p$, $F$ and $\\lambda$ as above and for every semisimple $\\delta\\in G(\\mathcal{O})\\lambda(\\varpi)G(\\mathcal{O})$ (and for any choice of $T_\\delta$ containing $\\delta$),\n$$\\forall \\alpha\\in \\Phi_\\delta,\\quad v_F(\\alpha(\\delta))\\in [-m_{\\Xi^{\\mathrm{sp}}} \\|\\lambda\\|,m_{\\Xi^{\\mathrm{sp}}} \\|\\lambda\\|].$$\n\\end{lem}\n\n\\begin{proof}\n There exists $c_{\\Xi^{\\mathrm{sp}}}>0$ such that $\\|\\Xi^{\\mathrm{sp}}(\\lambda)\\|_{\\GL_d}\\le c_{\\Xi^{\\mathrm{sp}}} \\|\\lambda\\|$\n for all $\\lambda\\in X_*(\\mathbf {T})$.\n Then every entry $a$\n of the diagonal matrix $\\Xi^{\\mathrm{sp}}(\\lambda(\\varpi))$ has the property that\n $v_F(a)\\in [-c_{\\Xi^{\\mathrm{sp}}} \\|\\lambda\\|, c_{\\Xi^{\\mathrm{sp}}} \\|\\lambda\\|]$. Let $\\mathbb{T}_\\delta$ be a maximal torus of $\\GL_d$ (over $\\overline{F}$) containing $\\Xi^{\\mathrm{sp}}(T_\\delta)$. The map $\\Xi^{\\mathrm{sp}}$ induces a surjection\n $X^*(\\mathbb{T}_\\delta)\\rightarrow X^*(T_\\delta)$. We fix a lift $\\widetilde{\\alpha}\\in X^*(\\mathbb{T}_\\delta)$ of each $\\alpha\\in \\Phi_\\delta$\n and choose $a_{\\Xi^{\\mathrm{sp}}}>0$ such that \\begin{equation}\\label{e:tilde-alpha}\n \\|\\widetilde{\\alpha}\\|_{\\GL_d}\\le a_{\\Xi^{\\mathrm{sp}}},\\quad \\forall \\alpha\\in \\Phi_\\delta\\end{equation}\n As $\\delta\\in G(\\mathcal{O})\\lambda(\\varpi)G(\\mathcal{O})$, we have\n $\\Xi^{\\mathrm{sp}}(\\delta)\\in \\GL_d(\\mathcal{O})\\Xi^{\\mathrm{sp}}(\\lambda(\\varpi))\\GL_d(\\mathcal{O})$. 
By Lemma \\ref{l:control-eigenvalue},\n every eigenvalue $\\mathrm{ev}$ of $\\Xi^{\\mathrm{sp}}(\\delta)$ satisfies\n $v_F(\\mathrm{ev})\\in [-c_{\\Xi^{\\mathrm{sp}}} \\|\\lambda\\|, c_{\\Xi^{\\mathrm{sp}}} \\|\\lambda\\|]$.\n This together with \\eqref{e:tilde-alpha} shows that\n $$v_F(\\widetilde{\\alpha}(\\Xi^{\\mathrm{sp}}(\\delta)))\\in [-a_{\\Xi^{\\mathrm{sp}}} c_{\\Xi^{\\mathrm{sp}}} \\|\\lambda\\|,a_{\\Xi^{\\mathrm{sp}}} c_{\\Xi^{\\mathrm{sp}}} \\|\\lambda\\|].$$\n Since $\\alpha(\\delta)=\\widetilde{\\alpha}(\\Xi^{\\mathrm{sp}}(\\delta))$, we finish by setting $m_{\\Xi^{\\mathrm{sp}}}:=a_{\\Xi^{\\mathrm{sp}}} c_{\\Xi^{\\mathrm{sp}}}$.\n\\end{proof}\n\nThe unipotent radical of $\\mathbf{B}$ is denoted $\\mathbf{N}$.\nFor $F$ as above, let $x_0$ be the hyperspecial vertex on the building of $G(F)$\ncorresponding to $G(\\mathcal{O})$.\nAs usual put $\\Phi^+:=\\Phi^+(G,\\mathbf {T})$ be the set of positive roots with respect to $(\\mathbf{B},\\mathbf {T})$.\n\n Let us recall some facts about the Chevalley basis. For each $\\alpha\\in \\Phi^+$, let $U_\\alpha$ denote the corresponding unipotent subgroup equipped with $x_\\alpha:\\mathbb{G}_a \\simeq U_\\alpha$. Order the elements of $\\Phi^+$ as $\\alpha_1,...,\\alpha_{|\\Phi^+|}$ once and for all such that simple roots appear at the beginning.\n The multiplication map $$m:U_{\\alpha_1}\\times \\cdots \\times U_{\\alpha_{|\\Phi^+|}} \\rightarrow \\mathbf{N},\n\\qquad (u_1,...,u_{|\\Phi^+|})\\mapsto u_1\\cdots u_{|\\Phi^+|}$$\n is an isomorphism of schemes (but not as group schemes) over ${\\mathbb Z}$. This can be deduced from \\cite[Exp XXII, 5.5.1]{SGA3-7}, which deals with a Borel subgroup of a Chevalley group. 
In particular (since the ordering on $\Phi^+$ is fixed) any $y\in \mathbf{N}(F)$ can be written uniquely as \begin{equation}\label{e:y=product}y=x_{\alpha_1}(Y_{\alpha_1})\cdots x_{\alpha_{|\Phi^+|}}(Y_{\alpha_{|\Phi^+|}})\end{equation}\n for unique $Y_{\alpha_1},...,Y_{\alpha_{|\Phi^+|}}\in F$. The Chevalley commutation relation (\cite[\S III]{Che55}) has the following form:\nfor all $1\le i<j\le |\Phi^+|$ and $X,Y\in \mathbb{G}_a$,\n\begin{equation}\label{e:commutator}\nx_{\alpha_i}(X)x_{\alpha_j}(Y)=x_{\alpha_j}(Y)x_{\alpha_i}(X)\prod x_{a\alpha_i+b\alpha_j}(c_{ij;ab}X^aY^b),\n\end{equation}\nwhere the product runs over the pairs of integers $a,b>0$ such that $a\alpha_i+b\alpha_j\in \Phi^+$, and the $c_{ij;ab}$ are integers determined by the Chevalley basis.\n\n\begin{lem}\label{l:forcing-unipotent}\n There exist constants $c_\Xi,B_{\Xi}>0$ independent of $S$, $\kappa$ and ${\mathfrak n}$ (but depending on $G$, $\Xi$, $U_{S_0}$ and $U_{\infty}$)\n such that for all ${\mathfrak n}$ satisfying $${\mathbb N}({\mathfrak n})\ge c_\Xi q_{S_1}^{B_{\Xi}m\kappa},$$ the following holds:\n if $\gamma\in G(F)$ and $x^{-1}\gamma x\in U^{S,\infty}({\mathfrak n}) U_{S_0} U_{S_1} U_\infty$\n for some $x\in G({\mathbb A}_F)$ then $\gamma$ is unipotent.\n\end{lem}\n\n\begin{proof}\n Let $\gamma'=x^{-1}\gamma x$. We keep using the embedding $\Xi:\mathfrak{G}\hookrightarrow \GL_m$ over $\mathcal{O}_F$ of Proposition \ref{p:global-integral-model}.\n (For the lemma, an embedding away from the primes in $S_0$ or dividing ${\mathfrak n}$ is enough.)
At each finite place $v\\notin S_0$ and\n$v\\nmid {\\mathfrak n}$, Lemma \\ref{l:conj-image-in-diag} allows us to find $\\Xi'_v:\\mathfrak{G}\\hookrightarrow \\GL_m$ over $\\mathcal{O}_v$ which is $\\GL_m(\\mathcal{O}_v)$-conjugate to $\\Xi\\times_{\\mathcal{O}_F} F_v$ such that $\\Xi'_v$ sends $A_v$ into the diagonal torus of $\\GL_m$.\n\n Write $\\det(\\Xi(\\gamma)-(1-X))=X^m+a_{m-1}(\\gamma)X^{m-1}+\\cdots+ a_0(\\gamma)$, where\n $a_i(\\gamma)\\in F$ for $0\\le i\\le m-1$.\n Our goal is to show that $a_i(\\gamma)=0$ for all $i$.\n To this end, assuming $a_i(\\gamma)\\neq 0$ for some fixed $i$, we will estimate $|a_i(\\gamma)|_v$ at each place $v$ and\n draw a contradiction.\n\n First consider $v\\in S_1$.\n We claim that\n $$v(a_i(\\gamma))\\ge-B_{\\Xi}m\\kappa$$ for every $\\gamma$ that is conjugate to an element of ${\\rm supp}\\, {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{v}))^{\\le \\kappa}$.\n To prove the claim we examine the eigenvalues of $\\Xi'_v(\\gamma')$, which is conjugate to $\\gamma$.\n We know $\\gamma'$ belongs to ${\\rm supp}\\, {\\mathcal{H}}^{\\mathrm{ur}}(G(F_v))^{\\le \\kappa}$, so\n $\\Xi'_v(\\gamma')\\in \\GL_m(\\mathcal{O}_v)\\Xi'_v(\\mu(\\varpi_v))\\GL_m(\\mathcal{O}_v)$ for some $\\mu\\in X_*(A_v)$ with $\\|\\mu\\|\\le \\kappa$.\n Then $\\|\\Xi'_v(\\mu)\\|_{\\GL_m}\\le B_{\\Xi}\\kappa$. 
(A priori this is true for $B_{\Xi'_v}$ defined as in \eqref{e:def-M(Xi)},\nbut $B_{\Xi'_v}=B_{\Xi}$ as $\Xi'_v$ and $\Xi$ are conjugate.)\n Let $k_1,k_2\in \GL_m(\mathcal{O}_v)$ be such that $\Xi'_v(\gamma')=k_1\Xi'_v(\mu(\varpi_v))k_2$.\n Lemma \ref{l:control-eigenvalue} shows that\n every eigenvalue $\lambda$ of $\Xi'_v(\mu(\varpi_v)) k_2k_1$ (equivalently of\n $\Xi'_v(\gamma')$) satisfies\n $v(\lambda)\ge -B_{\Xi}\kappa$.\n If $\lambda\neq 1$, we must have $v(1-\lambda)\ge -B_{\Xi}\kappa$.\n This shows that $v(a_{i}(\gamma))\ge -B_{\Xi}(m-i) \kappa$ for any $i$ such that $a_i(\gamma)\neq 0$.\n Hence the claim is true.\n\n\n At infinity, by the compactness of $U_\infty$, and at $v\in S_0$, by the compactness of $U_{S_0}$, there exists $c_\Xi>0$ such that\n $$|a_i(\gamma)|_\infty \cdot \prod_{v\in S_0}|a_i(\gamma)|_v < c_\Xi.$$\n At each finite place $v\notin S\cup S_\infty$ with $v\nmid {\mathfrak n}$ we have $|a_i(\gamma)|_v\le 1$ since $\Xi(\gamma')\in \GL_m(\mathcal{O}_v)$, and at $v\in S_1$ the claim above gives $|a_i(\gamma)|_v\le q_v^{B_\Xi m\kappa}$. At each $v$ dividing ${\mathfrak n}$, the congruence condition defining $U^{S,\infty}({\mathfrak n})$ forces $|a_i(\gamma)|_v\le |{\mathfrak n}|_v$. If $a_i(\gamma)\neq 0$, multiplying all these bounds and applying the product formula to $a_i(\gamma)\in F^\times$ yields\n $$1=\prod_v |a_i(\gamma)|_v < c_\Xi q_{S_1}^{B_\Xi m\kappa} {\mathbb N}({\mathfrak n})^{-1}\le 1,$$\n a contradiction. Hence $a_i(\gamma)=0$ for all $i$, i.e. all eigenvalues of $\Xi(\gamma)$ equal $1$, and $\gamma$ is unipotent.\n\end{proof}\n\n\begin{lem}\label{l:F-in-A_F}\n Suppose that $\delta=\{\delta_v\in {\mathbb R}_{>0}\}_{v\in {\mathcal V}_F}$ satisfies the following: $\delta_v=1$ for all but finitely many $v$\n and $\prod_v \delta_v<2^{-|S_\infty|}$. Let $\alpha=(\alpha_1,...,\alpha_r)\in {\mathbb A}_F^r$.\n Consider the following compact neighborhood of $\alpha$\n $${\mathcal{B}}(\alpha,\delta):=\{(x_1,...,x_r)\in {\mathbb A}_F^r\,:\, |x_{i,v}-\alpha_{i,v}|_v\le \delta_v,~\forall\n v,~\forall 1\le i\le r\}.$$\n Then ${\mathcal{B}}(\alpha,\delta)\cap F^r$ has at most one element.\n\end{lem}\n\n\begin{proof}\n Suppose $\beta=(\beta_i)_{i=1}^r,\gamma=(\gamma_i)_{i=1}^r\in {\mathcal{B}}(\alpha,\delta)\cap F^r$. By the triangle inequality,\n $$|\beta_{i,v}-\gamma_{i,v}|_v\le \left\{\begin{array}{cc}\n \delta_v, & v\nmid \infty,\\\n 2\delta_v, & v|\infty\n \end{array}\right.$$\n for each $i$.\n Hence $\prod_v |\beta_{i,v}-\gamma_{i,v}|_v<1$. Since $\beta_{i},\gamma_{i}\in F$,\n the product formula forces $\beta_{i}=\gamma_{i}$. Therefore $\beta=\gamma$.\n\end{proof}\n\n The next lemma measures the difference between $G(F)$-conjugacy and $G({\mathbb A}_F)$-conjugacy.\n\n\begin{lem}\label{l:F-conj-vs-A_F-conj}\n Let $X_G$ (resp.
${\mathscr X}_G$) be the set of semisimple $G(F)$-(resp. $G({\mathbb A}_F)$-)conjugacy\nclasses in $G(F)$. For any $[\gamma]\in {\mathscr X}_G$, there exist at most $(w_Gs_G)^{r_G+1}$ elements\nin $X_G$ mapping to $[\gamma]$ under the natural surjection $X_G\rightarrow {\mathscr X}_G$.\n\end{lem}\n\n\n\begin{proof}\n Let $[\gamma]\in {\mathscr X}_G$ be an element defined by a semisimple $\gamma\in G(F)$.\n Denote by $X_\gamma$ the preimage of\n $[\gamma]$ in $X_G$. There is a natural bijection\n $$X_\gamma \quad\leftrightarrow\quad \ker(\ker^1(F,I_\gamma)\rightarrow \ker^1(F,G)).$$\n Since $|\ker^1(F,I_\gamma)|=|\ker^1(F,Z(\widehat{I}_\gamma))|$\n by \cite[\S4.2]{Kot84a}, we have $|X_\gamma|\le |\ker^1(F,Z(\widehat{I}_\gamma))|$.\n\n Lemma \ref{l:torus-splitting} tells us that for every semisimple $\gamma$,\n the group $I_\gamma$ becomes split over a finite extension\n $E\/F$ such that $[E:F]\le w_Gs_G$.\n In particular ${\rm Gal}(\overline{F}\/E)$ acts trivially on\n $Z(\widehat{I}_\gamma)$.\n The group $\ker^1(E,Z(\widehat{I}_\gamma))$ consists of continuous homomorphisms\n ${\rm Gal}(\overline{F}\/E)\rightarrow Z(\widehat{I}_\gamma)$ which are trivial on all local Galois groups.\n Hence $\ker^1(E,Z(\widehat{I}_\gamma))$ is trivial. This and\n the inflation-restriction sequence show that $\ker^1(F,Z(\widehat{I}_\gamma))$\n is the subset of locally trivial elements in $H^1(\Gamma_{E\/F},Z(\widehat{I}_\gamma))$,\n where we have written $\Gamma_{E\/F}$ for ${\rm Gal}(E\/F)$. In particular,\n $$|X_\gamma|\le |H^1(\Gamma_{E\/F},Z(\widehat{I}_\gamma))|.$$\n Let $d:=|{\rm Gal}(E\/F)|$ and denote by $[d]$ the\n $d$-torsion subgroup.
The long exact sequence arising from $0\rightarrow Z(\widehat{I}_\gamma)[d]\n \rightarrow Z(\widehat{I}_\gamma) \stackrel{d}{\rightarrow} d(Z(\widehat{I}_\gamma))\rightarrow 0$ gives rise to\n an exact sequence\n $$\nH^1(\Gamma_{E\/F},Z(\widehat{I}_\gamma)[d]) \rightarrow H^1(\Gamma_{E\/F},Z(\widehat{I}_\gamma))\n= H^1(\Gamma_{E\/F},Z(\widehat{I}_\gamma))[d]\rightarrow 0.$$\n Let $\mbox{\boldmath{$\mu$}}_{d}$ denote the order $d$ cyclic subgroup of ${\mathbb C}^\times$.\n Then $Z(\widehat{I}_\gamma)[d]\hookrightarrow \widehat{T}[d]\simeq (\mbox{\boldmath{$\mu$}}_{d})^{\dim T}$.\n Hence\n $$|X_\gamma|\le |H^1(\Gamma_{E\/F},Z(\widehat{I}_\gamma)[d] )|\n \le |\Gamma_{E\/F}|\cdot |Z(\widehat{I}_\gamma)[d]|\le d\cdot (d)^{\dim T}\le\n (w_Gs_G)^{\dim T+1}.$$\n\end{proof}\n\n\n For the proposition below, we fix a finite subset $S_0\subset {\mathcal V}_F^\infty$.\n Also fix open compact subsets\n $U_{S_0}\subset G(F_{S_0})$\n and $U_\infty \subset G(F_\infty)$.\n As usual we will write $S$ for $S_0\coprod S_1$.\n\n\begin{prop}\label{p:bound-number-of-conj}\n\n Let $\kappa\in{\mathbb Z}_{\ge 0}$.\n Let $S_1\subset{\mathcal V}_F^\infty\backslash S_0$ be a finite subset such that $G$ is unramified\n at all $v\in S_1$.\n Set $U_{S_1}:={\rm supp}\, {\mathcal{H}}^{\mathrm{ur}}(G(F_{S_1}))^{\le \kappa}$,\n $U^{S,\infty}:=\prod_{v\notin S\cup S_\infty} K_v$ and\n ${\mathscr C}:=U_{S_0}U_{S_1} U^{S,\infty}U_\infty$.\n Define ${\mathscr Y}_G$ to be the set of semisimple $G({\mathbb A}_F)$-conjugacy classes of $\gamma\in G(F)$\n which meet ${\mathscr C}$.
Then there exist constants $A_3,B_3>0$ such that for all $S_1$ and $\kappa$ as above,\n $$|{\mathscr Y}_G|=O(q_{S_1}^{A_3+B_3\kappa})$$\n (In other words, the implicit constant for $O(\cdot)$ is independent of\n $S_1$ and $\kappa$.)\n\end{prop}\n\n\begin{rem}\n By combining the Proposition with Lemma~\ref{l:forcing-unipotent} we can deduce the following. Under the same assumption but with ${\mathscr C}:= U^{S,\infty}({\mathfrak n}) U_{S_0} U_{S_1} U_\infty$ we have\n $$|{\mathscr Y}_G|=1+O(q_{S_1}^{A+B\kappa}{\mathbb N}({\mathfrak n})^{-C})$$\n for some constants $A,B,C>0$.\n\end{rem}\n\n\n\begin{proof}\n Our argument will be a refinement of the proof of \cite[Prop 8.2]{Kot86}.\n \medskip\n\n \noindent{STEP I. When $G^{\mathrm{der}}$ is simply connected.}\n\n Choose a smooth reductive integral model $\mathfrak{G}$ over $\mathcal{O}_F[\frac{1}{S_0}]$ for $G$ and an embedding of algebraic groups\n $\Xi_0:\mathfrak{G}\rightarrow \GL_{m}$ defined over $\mathcal{O}_F[\frac{1}{S_0}]$ as in Proposition \ref{p:global-integral-model}.\n Consider \begin{equation} \label{e:char-poly-map} G({\mathbb A}_F)\stackrel{\Xi_0}{\rightarrow} \GL_m({\mathbb A}_F)\rightarrow {\mathbb A}_F^m\end{equation}\n where the latter map assigns the coefficients of the characteristic polynomial,\n and call the composite map $\Xi'$.\n Set $U:={\mathscr C}$ and $U':=\Xi'(U)$. Then $|U'\cap F^m|<\infty$\n since it is discrete and compact. We would like to estimate the cardinality.\n\n Fix $\{\delta_v\}$ which satisfies the assumption of\n Lemma \ref{l:F-in-A_F} and the condition that $\delta_v=1$ for all finite $v$.\n (So $\{\delta_v\}$ depends only on $F$.)\n Since $\Xi_0$ is defined over $\mathcal{O}_F[\frac{1}{S_0}]$, clearly\n $\Xi_0(U^{S,\infty})\subset \GL_m(\widehat{\mathcal{O}}_F^{S,\infty})$.
Thus\n $$\\Xi'(U^{S,\\infty})\\subset (\\widehat{\\mathcal{O}}_F^{S,\\infty})^m=\\prod_{v\\notin {S\\cup S_\\infty}}\n {\\mathcal{B}}_v(0,1).$$\n Set $J^{S,\\infty}:=\\{0\\}\\subset ({\\mathbb A}_F^{S,\\infty})^m$.\n Similarly as above, $\\Xi'(U_{S_1})\\subset (\\mathcal{O}_{F,S_1})^m$.\n By the compactness of $U_{S_0}$ and $U_\\infty$, there exist\n finite subsets $J_{S_0}\\subset F_{S_0}$ and $J_{\\infty}\\subset F_\\infty$ such that\n $$\\Xi'(U_{S_0})\\subset \\bigcup_{\\beta_{S_0}\\in J_{S_0}} \\left( \\prod_{v\\in S_0} {\\mathcal{B}}_v(\\beta_v,1)\\right),\n \\quad \\Xi'(U_\\infty)\\subset \\bigcup_{\\beta_\\infty\\in J_\\infty} \\left( \\prod_{v|\\infty} {\\mathcal{B}}_v(\\beta_v,\\delta_v)\\right).$$\n Now we treat the places contained in $S_1$. Choose a maximal torus $T_{\\overline{F}}$ of $G$ over $\\overline{F}$\n once and for all. Since the image of the composite map\n $T_{\\overline{F}}\\hookrightarrow G_{\\overline{F}} \\stackrel{\\Xi_0}{\\hookrightarrow} (\\GL_m)_{\\overline{F}}$\n is contained in a maximal torus of $\\GL_m$, we can choose $g=(g_{ij})_{i,j=1}^m\\in \\GL_m(\\overline{F})$ such that\n $g\\Xi_0(T_{\\overline{F}})g^{-1}$ sits in the diagonal maximal torus $\\mathbb{T}$ of $\\GL_m$.\n Fix the choice of $T$ and $g$ once and for all (independently of $S_1$ and $\\kappa$).\n Set $v_{\\min}(g):=\\min_{i,j} v(g_{ij})$ and $v_{\\max}(g):=\\max_{i,j} v(g_{ij})$.\n There exists $B_6>0$ such that for any $\\mu\\in X_*(T)$ with $\\|\\mu\\|\\le \\kappa$, the element\n $g\\Xi_0(\\mu)g^{-1}\\in X_*(\\mathbb{T})$ satisfies $\\|g\\Xi_0(\\mu)g^{-1}\\|\\le B_6\\kappa$.\n Let $\\gamma_{S_1}=(\\gamma_v)_{v\\in S_1}\\in U_{S_1}$. Each\n $\\gamma_v$ has the form $\\gamma_v=k_1\\mu(\\varpi_{v})k_2$ for some $\\|\\mu\\|\\le \\kappa$\n and $k_1,k_2\\in G(\\mathcal{O}_v)$.
Since $\\Xi_0(G(\\mathcal{O}_v))\\subset \\GL_m(\\mathcal{O}_v)$,\n we see that $\\Xi_0(\\gamma_v)$ is conjugate to $\\Xi_0(\\mu(\\varpi_{v}))k'$ in $\\GL_m(F_v)$\n for some $k'\\in \\GL_m(\\mathcal{O}_v)$.\n Applying Lemma \\ref{l:control-eigenvalue} to\n $(g\\Xi_0(\\mu(\\varpi_{v}))g^{-1})( gk'g^{-1})$ with $u=gk'g^{-1}$\n and noting that $v_{\\min}(u)\\ge v_{\\min}(g)+v_{\\min}(g^{-1})$\n we conclude that each eigenvalue $\\lambda$ of $\\Xi_0(\\gamma_v)$ satisfies\n $$v(\\lambda)\\ge -B_6\\kappa+v_{\\min}(g)+v_{\\min}(g^{-1}).$$\n Therefore the coefficients of its characteristic polynomial\n lie in $\\varpi_v^{-m(B_6\\kappa+A_4)}\\mathcal{O}_v$,\n where we have set $A_4:=-(v_{\\min}(g)+v_{\\min}(g^{-1}))\\ge 0$. Putting things together, we see that\n $$\\Xi'(U_{S_1})\\subset \\prod_{v\\in S_1} (\\varpi_v^{-m(B_6\\kappa+A_4)}\\mathcal{O}_v)^m.$$\n (In fact the sharper containment $\\Xi'(U_{S_1})\\subset \\prod_{v\\in S_1} \\prod_{i=1}^m \\varpi_v^{-i(B_6\\kappa+A_4)}\\mathcal{O}_v$\n holds as well, the $i$-th coefficient having valuation $\\ge -i(B_6\\kappa+A_4)$.) The right hand side is equal to the union of $\\prod_{v\\in S_1}\n {\\mathcal{B}}_v(\\beta_v,1)$, as $\\{\\beta_v\\}_{v\\in S_1}$ runs over $J_{S_1}=\\prod_{v\\in S_1} J_v$,\n where $J_v$ is a set of representatives for $(\\varpi_v^{-m(B_6\\kappa+A_4)}\\mathcal{O}_v\/\\mathcal{O}_v)^m$.\n Notice that $|J_{S_1}|=q_{S_1}^{m^2(B_6 \\kappa+A_4)}$.\n Finally, we see that\n $$U'=\\Xi'(U)\\subset \\bigcup_{\\beta\\in J} {\\mathcal{B}}(\\beta,\\delta)$$\n where $J=J_{S_0}\\times J_{S_1}\\times J^{S,\\infty}\\times J_\\infty$.
Lemma \\ref{l:F-in-A_F}\n implies that\n $$|U'\\cap F^m| \\le |J| = |J_{S_0}|\\cdot |J_{S_1}|\\cdot |J_\\infty|= O(q_{S_1}^{m^2(B_6 \\kappa+A_4) }),$$\n since $|J_{S_0}|\\cdot |J_\\infty|$ is a constant independent of $\\kappa$ and $S_1$.\n\n For each $\\beta\\in U'\\cap F^m$, we claim that there are at most $m!$ semisimple $G(\\overline{F})$-conjugacy classes\n in $G(\\overline{F})$ which map to $\\beta$ via $G(\\overline{F})\\rightarrow \\GL_m(\\overline{F})\\rightarrow \\overline{F}^m$,\n which is the analogue of \\eqref{e:char-poly-map}. Let us verify the claim.\n Let $T$ and $\\mathbb{T}$ be maximal tori in $G$ and $\\GL_m$ over $\\overline{F}$,\n respectively, such that $\\Xi_0(T)\\subset \\mathbb{T}$. Then the set of semisimple conjugacy classes in\n $G(\\overline{F})$ (resp. $\\GL_m(\\overline{F})$) is in a natural bijection with $T(\\overline{F})\/\\Omega$ (resp. $\\mathbb{T}(\\overline{F})\/\\Omega_{\\GL_m}$), where $\\Omega$ (resp. $\\Omega_{\\GL_m}$) denotes the Weyl group.\n The map $\\Xi_0|_T: T\\rightarrow \\mathbb{T}$ induces a map $T(\\overline{F})\/\\Omega\\rightarrow \\mathbb{T}(\\overline{F})\/\\Omega_{\\GL_m}$. Each fiber of the latter map\n has cardinality at most $|\\Omega_{\\GL_m}|=m!$, hence the claim follows.\n\n Fix $\\beta\\in U'\\cap F^m$.\n We would like to bound the number of $G({\\mathbb A}_F)$-conjugacy classes of $\\gamma\\in G(F)$ which meet $U$ and\n are $G(\\overline{F})$-conjugate to $\\beta$.\n Fix one such $\\gamma$, which we assume to exist. (Otherwise our final bound will only improve.)\n Let $\\Phi_\\gamma$ denote the set of roots of $G$ over $\\overline{F}$ with respect to any choice of maximal torus containing $\\gamma$.\n Define $V'(\\gamma)$ to be the set of places $v$ of $F$ such that $v\\notin S\\cup S_\\infty$ and\n $\\alpha(\\gamma)\\neq 1$ and $|1-\\alpha(\\gamma)|_v<1$ for at least one $\\alpha\\in \\Phi_\\gamma$.\n Put $V(\\gamma):=V'(\\gamma)\\cup S\\cup S_\\infty$.\nClearly $|V(\\gamma)|<\\infty$.
Moreover we claim that $|V(\\gamma)|=O(1)$ (bounded independently of $\\gamma$).\nSet $$C_{S_0}:=\\sup_{\\gamma\\in U_{S_0} U_\\infty}\\left(\\prod_{\\alpha\\in \\Phi_\\gamma}\n|1-\\alpha(\\gamma)|_{S_0}|1-\\alpha(\\gamma)|_{S_\\infty}\\right),$$ which is finite since $U_{S_0} U_\\infty$ is compact.\nThen\n$$1= \\prod_{v} \\prod_{\\alpha\\in \\Phi_\\gamma} |1-\\alpha(\\gamma)|_v\n= \\left( \\prod_{\\alpha\\in \\Phi_\\gamma} |1-\\alpha(\\gamma)|_{V(\\gamma)}\\right)\n\\le C_{S_0}\\cdot q_{V'(\\gamma)}^{-1}\\le C_{S_0} 2^{-|V'(\\gamma)|}.$$\n(Here we use that $\\prod_{\\alpha\\in \\Phi_\\gamma}|1-\\alpha(\\gamma)|_v\\le q_v^{-1}$ for each $v\\in V'(\\gamma)$: every factor is at most $1$ since $\\gamma$ is integral at $v$, and at least one factor is at most $q_v^{-1}$.)\nThus $|V'(\\gamma)|=O(1)$ and also $|V(\\gamma)|=O(1)$.\n\n We are ready to bound the number of $G({\\mathbb A}_F)$-conjugacy classes in $ G(F)$ which meet ${\\mathscr C}$ and\n are $G(\\overline{F})$-conjugate to $\\beta$. For any such conjugacy class of $\\gamma'\\in G(F)$,\n the first paragraph of \\cite[p.391]{Kot86} shows that\n $\\gamma'$ is conjugate to $\\gamma$ in $G(F_v)$ whenever $v\\notin V(\\gamma)$. Hence\n the number of $G({\\mathbb A}_F)$-conjugacy classes of such $\\gamma'$ is at most $u_G^{|V(\\gamma)|}$,\n where $u_G$ is the bound of Lemma \\ref{l:bounding-conj-in-st-conj}.\n\n Putting all this together, we conclude that\n $|{\\mathscr Y}_G|=O(q_{S_1}^{m^2(B_6 \\kappa+A_4)})$ as $S_1$ and $\\kappa$ vary.\n The proposition is proved in this case.\n\n \\medskip\n\n \\noindent{STEP II. General case.}\n\n Now we drop the assumption that $G^{^{\\mathrm{der}}}$ is simply connected.\n By Lemma \\ref{l:z-ext}, choose a $z$-extension\n $$1\\rightarrow Z\\rightarrow H \\stackrel{\\alpha}{\\rightarrow} G\\rightarrow 1.$$\n Our plan is to argue as on page 391 of \\cite{Kot86} with a specific choice of\n ${\\mathscr C}_H$ and ${\\mathscr C}_Z$ below (denoted $C_H$ and $C_Z$ by Kottwitz). In order to explain this choice,\n we need some preparation.\n If $v\\notin S\\cup S_\\infty$, choose $K_{H,v}$ to be a hyperspecial subgroup\n of $H(F_v)$ such that $\\alpha(K_{H,v})=K_v$.
(Such a $K_{H,v}$ exists by\n the argument of \\cite[p.386]{Kot86}.) We can find\n compact sets $U_{H,S_0}\\subset H(F_{S_0})$ and $U_{H,\\infty}\\subset H(F_\\infty)$\n such that $\\alpha(U_{H,S_0})=U_{S_0}$\n and $\\alpha(U_{H,\\infty})=U_\\infty$.\n Moreover we claim that there exists a constant $\\beta>0$ independent of $\\kappa$ and $S_1$\n with the following property: for any $\\kappa\\in {\\mathbb Z}_{\\ge 0}$, we can choose\n an open compact subset $U_{H,S_1}\\subset {\\rm supp}\\, {\\mathcal{H}}^{\\mathrm{ur}}(H)^{\\le \\beta\\kappa}$\n such that $\\alpha(U_{H,S_1})=U_{S_1}$.\n The claim is proved in Lemma \\ref{l:claim} below.\n\nNow choose $U_{Z,S_1}$ to be the kernel of $\\alpha: U_{H,S_1}\n \\rightarrow U_{S_1}$, namely $U_{Z,S_1}:=U_{H,S_1}\\cap Z(F_{S_1})$, which is compact and open in $Z(F_{S_1})$. Then choose a compact set $U_{Z}^{S_1}$ such that\n $U_{Z,S_1}U_{Z}^{S_1}Z(F)=Z({\\mathbb A})^1$. (This is possible since $Z(F)\\backslash Z({\\mathbb A})^1$\n is compact.\\footnote{Choose $U^{S_1}_Z$ to be any open compact subgroup.
Then\n $U_{Z,S_1}U_{Z}^{S_1}Z(F)$ has finite index in $Z({\\mathbb A})^1$ by compactness.\n Then enlarge $U^{S_1}_Z$ without breaking compactness so that equality holds.})\n Set $$U_H:= \\left(\\prod_{v\\notin S\\cup S_\\infty} K_{H,v}\\right)\n U_{H,S_0} U_{H,S_1} U_{H,\\infty},\\qquad\n U_Z:=U_{Z,S_1}U_{Z}^{S_1}$$\n and set ${\\mathscr C}_H:=U_H\\cap H({\\mathbb A}_F)^1$, ${\\mathscr C}_Z:=U_Z\\cap Z({\\mathbb A}_F)^1$.\n Let ${\\mathscr Y}_H$ be defined as in the statement of Proposition \\ref{p:bound-number-of-conj} (with $H$ and ${\\mathscr C}_H$ replacing\n $G$ and ${\\mathscr C}$).\n Then page 391 of \\cite{Kot86} shows that the natural map ${\\mathscr Y}_H\\rightarrow {\\mathscr Y}_G$ is a surjection,\n in particular $|{\\mathscr Y}_G|\\le |{\\mathscr Y}_H|$.\n Since $H^{^{\\mathrm{der}}}$ is simply connected, the earlier proof implies that\n $|{\\mathscr Y}_H|=O(q_{S_1}^{B_7\\beta\\kappa+A_5})$ for some $B_7,A_5>0$.\n (To be precise, apply the earlier proof after enlarging $U_{H,S_1}$ to\n ${\\rm supp}\\, {\\mathcal{H}}^{\\mathrm{ur}}(H)^{\\le \\beta\\kappa}$ in the definition of $U_H$. Such a replacement\n only increases $|{\\mathscr Y}_H|$, so the bound on $|{\\mathscr Y}_H|$ remains valid.)\n The proposition follows.\n\\end{proof}\n\n We have postponed the proof of the following claim, which we justify now. Simple as\n the lemma may seem, we apologize for not having found a simple proof.\n\n\\begin{lem}\\label{l:claim}\n As we have claimed above, there exists a constant $\\beta>0$ independent of $\\kappa$ and $S_1$\n with the desired property.
(Refer to Step II of the above proof for details.)\n\\end{lem}\n\n\\begin{proof}\n As the claim is concerned with places in $S_1$, which (may vary but) are contained\n in the set of places where $G$ is unramified (thus quasi-split),\n we may assume that $H$ and $G$ are quasi-split over $F$\n by replacing $H$ and $G$ with their quasi-split inner forms.\n\n Choose a Borel subgroup $B_H$ of $H$, whose image $B=\\alpha(B_H)$ is a Borel subgroup of $G$.\n The maximal torus $T_H\\subset B_H$ maps to a maximal torus $T\\subset B$ and there is a short exact sequence\n $$1\\rightarrow Z\\rightarrow T_H \\stackrel{\\alpha}{\\rightarrow} T\\rightarrow 1.$$\n The action of ${\\rm Gal}(\\overline{F}\/F)$ on $X_*(T_H)$ factors through a finite quotient.\n Let $\\Sigma$ be the smallest quotient such that $\\Sigma$ acts faithfully on $X_*(T_H)$.\n If $v\\notin S_0$ then $G$ is unramified at $v$, so the geometric Frobenius at $v$\n determines a well-defined conjugacy class, say ${\\mathscr C}_v$, in $\\Sigma$.\n Let $A_{H,v}$ (resp. $A_v$) be the maximal split torus in $T_H$ (resp. $T$) over $F_v$.\n Then $A_{H,v}\\hookrightarrow T_H$ and $A_v\\hookrightarrow T$ induce\n $X_*(A_{H,v})\\simeq X_*(T_H)^{{\\mathscr C}_v}$ and $X_*(A_{v})\\simeq X_*(T)^{{\\mathscr C}_v}$.\n We claim that $X_*(T_H)\\rightarrow X_*(T)$ induces a surjective map\n $X_*(A_{H,v})\\rightarrow X_*(A_{v})$.\n$$\\xymatrix{ X_*(T_H) \\ar[d] & X_*(T_H)^{{\\mathscr C}_v} \\ar[d] \\ar[l] & X_*(A_{H,v}) \\ar[l]_-{\\sim} \\simeq T_H(F_v)\/T_H(\\mathcal{O}_v)\n\\ar@{->>}[d] \\\\\n X_*(T) & X_*(T)^{{\\mathscr C}_v} \\ar[l] & X_*(A_{v}) \\ar[l]_-{\\sim} \\simeq T(F_v)\/T(\\mathcal{O}_v)}$$\n Indeed, we have an isomorphism $ X_*(A_{H,v})\\simeq T_H(F_v)\/T_H(\\mathcal{O}_v)$ via $\\mu\\mapsto \\mu(\\varpi_v)$\n and similarly $ X_*(A_{v})\\simeq T(F_v)\/T(\\mathcal{O}_v)$.
Further, $\\alpha:T_H(F_v)\\rightarrow T(F_v)$ is surjective\n since $H^1({\\rm Gal}(\\overline{F}_v\/F_v),Z(\\overline{F}_v))$ is trivial (as $Z$ is an induced torus).\n\n Denote by $[\\Sigma]$ the finite set of all conjugacy classes in $\\Sigma$. For\n ${\\mathscr C}\\in [\\Sigma]$, choose ${\\mathbb Z}$-bases ${\\mathcal{B}}_{H,{\\mathscr C}}$ and ${\\mathcal{B}}_{{\\mathscr C}}$ for\n $X_*(T_H)^{{\\mathscr C}}$ and $X_*(T)^{{\\mathscr C}}$ respectively.\n (Note that ${\\mathbb Z}$-bases ${\\mathcal{B}}_H$ for $X_*(T_H)$ and ${\\mathcal{B}}$ for $X_*(T)$ are fixed once and for all.)\n An argument as in the proof of Lemma\n \\ref{l:norm-and-bases} shows that there exist constants $c({\\mathcal{B}}_{{\\mathscr C}}),c({\\mathcal{B}}_{H,{\\mathscr C}})>0$\n such that for all $x\\in X_*(T_H)^{{\\mathscr C}}_{{\\mathbb R}}$ and $y\\in X_*(T)^{{\\mathscr C}}_{{\\mathbb R}}$,\n \\begin{equation}\\label{e:for-l:claim}|x|_{{\\mathcal{B}}_{H,{\\mathscr C}}}\\ge c({\\mathcal{B}}_{H,{\\mathscr C}})\\cdot \\|x\\|_{{\\mathcal{B}}_H},\\quad\n |y|_{{\\mathcal{B}}_{{\\mathscr C}}}\\le c({\\mathcal{B}}_{{\\mathscr C}})\\cdot \\|y\\|_{{\\mathcal{B}}}.\\end{equation}\n Set $m_{\\mathscr C}:= \\max_y (\\min_x |x|_{{\\mathcal{B}}_{H,{\\mathscr C}}})$, where\n $y\\in X_*(T)^{{\\mathscr C}}$ varies subject to the condition $|y|_{{\\mathcal{B}}_{{\\mathscr C}}}\\le 1$\n and $x\\in X_*(T_H)^{{\\mathscr C}}$ runs over the preimage of $y$.
(It was shown above that\n the preimage is nonempty.)\n Then by construction, for every $y \\in X_*(T)^{{\\mathscr C}}$, there exists $x$ in the preimage of $y$\n such that $|x|_{{\\mathcal{B}}_{H,{\\mathscr C}}} \\le m_{{\\mathscr C}} |y|_{{\\mathcal{B}}_{{\\mathscr C}}}$.\n\n Recall that $U_{S_1}=\\prod_{v\\in S_1} U_v$ where $U_v=\\cup_{\\mu} K_v \\mu(\\varpi_v) K_v$,\n the union being taken over $\\mu\\in X_*(T)^{{\\mathscr C}_v}$ such that $\\|\\mu\\|_{{\\mathcal{B}}}\\le \\kappa$.\n We have seen that there exists $\\mu_H\\in X_*(T_H)^{{\\mathscr C}_v}$ mapping to $\\mu$\n such that $|\\mu_H|_{{\\mathcal{B}}_{H,{\\mathscr C}_v}}\\le m_{{\\mathscr C}_v} |\\mu|_{{\\mathcal{B}}_{{\\mathscr C}_v}}$. By \\eqref{e:for-l:claim},\n $$\\|\\mu_H\\|_{{\\mathcal{B}}_{H}}\\le m_{{\\mathscr C}_v} c({\\mathcal{B}}_{H,{\\mathscr C}_v})^{-1}c({\\mathcal{B}}_{{\\mathscr C}_v}) \\|\\mu\\|_{{\\mathcal{B}}}.$$\n Take $\\beta:=\\max_{{\\mathscr C}\\in [\\Sigma]} (m_{{\\mathscr C}} c({\\mathcal{B}}_{H,{\\mathscr C}})^{-1}c({\\mathcal{B}}_{{\\mathscr C}}))$.\n Clearly $\\beta$ is independent of $S_1$ and $\\kappa$.\n Notice that $\\|\\mu_H\\|_{{\\mathcal{B}}_{H}}\\le \\beta \\|\\mu\\|_{{\\mathcal{B}}}\\le \\beta \\kappa$.\n\n For each $\\mu\\in X_*(T)^{{\\mathscr C}_v}$ such that $\\|\\mu\\|_{{\\mathcal{B}}}\\le \\kappa$,\n we can choose a preimage $\\mu_H$ of $\\mu$ such that $\\|\\mu_H\\|_{{\\mathcal{B}}_{H}}\\le\\beta \\kappa$.\n Take $U_{H,v}$ to be the union of $K_{H,v} \\mu_H(\\varpi_v) K_{H,v}$ for those\n $\\mu_H$'s. By construction $\\alpha(U_{H,v})=U_v$. Hence $U_{H,S_1}:=\\prod_{v\\in S_1}U_{H,v}$\n is the desired open compact subset in the claim of Lemma \\ref{l:claim}.\n\n\n\\end{proof}
Then there exist\n constants $A_6,B_8>0$ such that $|Y_G|=O(q_{S_1}^{B_8\\kappa+A_6})$ as $S_1$ and $\\kappa$ vary.\n\\end{cor}\n\n\\begin{proof}\n Immediate from Lemma \\ref{l:F-conj-vs-A_F-conj} and Proposition \\ref{p:bound-number-of-conj}.\n\\end{proof}\n\n The following lemma was used in Step I of the proof of Proposition \\ref{p:bound-number-of-conj}\n and will be applied again to obtain Corollary \\ref{c:bounding-pi0-general} below.\n\n\n\\begin{lem}\\label{l:bounding-conj-in-st-conj}\n Assume that $G^{^{\\mathrm{der}}}$ is simply connected.\n For each $v\\in {\\mathcal V}_F$ and each semisimple $\\gamma\\in G(F)$, let $n_{v,\\gamma}$ be the number of $G(F_v)$-conjugacy classes\n in the stable conjugacy class of $\\gamma$ in $G(F_v)$.\n Then there exists a uniform bound $u_G\\ge1$ such that $n_{v,\\gamma}\\le u_G$ for all $v$ and $\\gamma$.\n\\end{lem}\n\n\\begin{proof}\n Put $\\Gamma(v):={\\rm Gal}(\\overline{F}_v\/F_v)$.\n It is a standard fact that $n_{v,\\gamma}$ is the cardinality of\n $\\ker(H^1(F_v,I_\\gamma)\\rightarrow H^1(F_v,G))$. By \\cite{Kot86},\n $H^1(F_v,I_\\gamma)$ is isomorphic to the dual of $\\pi_0(Z(\\widehat{I}_\\gamma)^{\\Gamma(v)})$. Hence\n $n_{v,\\gamma}\\le |\\pi_0(Z(\\widehat{I}_\\gamma)^{\\Gamma(v)})|.$\n It suffices to show that a uniform bound for $|\\pi_0(Z(\\widehat{I}_\\gamma)^{\\Gamma(v)})|$ exists.\n\n By Lemma \\ref{l:torus-splitting},\n there exists a finite Galois extension $E\/F$ with $[E:F]\\le w_Gs_G$ such that\n $I_{\\gamma}$ splits over $E$. Then ${\\rm Gal}(\\overline{F}\/F)$ acts on $Z(\\widehat{I}_\\gamma)$\n through ${\\rm Gal}(E\/F)$. In particular $\\Gamma(v)$ acts on $Z(\\widehat{I}_\\gamma)$\n through a group of order $\\le w_Gs_G$. Denote the latter group by $\\Gamma(v)'$.\n\n By the assumption, all Levi subgroups of $G$ have simply connected derived subgroups.\n As $I_\\gamma$ becomes isomorphic to a Levi subgroup of $G$ over $\\overline{F}$,\n $I_\\gamma^{^{\\mathrm{der}}}$ is also simply connected. 
Hence $Z(\\widehat{I}_\\gamma)$ is connected\n (\\cite[(1.8.3)]{Kot84a}), namely a complex torus. Moreover\n $\\dim Z(\\widehat{I}_\\gamma)\\le r_G$.\n\n Now consider the set of pairs $${\\mathscr T}=\\{(\\Delta,\\widehat{T}):\n |\\Delta|\\le w_Gs_G,~\\dim \\widehat{T}\\le r_G\\}$$\n consisting of a ${\\mathbb C}$-torus $\\widehat{T}$ with an action by a finite group $\\Delta$. Two pairs $(\\Delta,\\widehat{T})$\n and $(\\Delta',\\widehat{T}')$ are equivalent if there are isomorphisms $\\Delta\\simeq \\Delta'$\n and $\\widehat{T}\\simeq \\widehat{T}'$ such that the group actions are compatible.\n Note that $$(\\Gamma(v)',Z(\\widehat{I}_\\gamma))\\in {\\mathscr T}$$ and that\n ${\\mathscr T}$ depends only on $G$ and $F$.\n Clearly $|\\pi_0(\\widehat{T}^\\Delta)|$ depends only on the equivalence class\n of $(\\Delta,\\widehat{T})\\in {\\mathscr T}$.\n Hence the proof will be complete if\n ${\\mathscr T}$ consists of finitely many equivalence classes.\n\n Clearly there are finitely many isomorphism classes for $\\Delta$ appearing in ${\\mathscr T}$.\n So we may fix $\\Delta$ and prove the finiteness of isomorphism classes of\n ${\\mathbb C}$-tori with $\\Delta$-action. 
By dualizing, it is enough to show that\n there are finitely many isomorphism classes of ${\\mathbb Z}[\\Delta]$-modules\n whose underlying ${\\mathbb Z}$-modules are free of rank $\\le r_G$.\n This is a result of \\cite[\\S79]{CR62}.\n\\end{proof}\n\n\n\\begin{cor}\\label{c:bounding-pi0-general}\n There exists a constant $c>0$ (depending only on $G$) such that\n for every semisimple $\\gamma\\in G(F)$,\n $|\\pi_0(Z(\\widehat{I}_\\gamma)^{\\Gamma})|<c$.\n Similarly, for each endoscopic group $H$ of $G$ there exists $c>0$ such that\n $|\\pi_0(Z(\\widehat{I}_{\\gamma_H})^{\\Gamma})|<c$ for every semisimple $\\gamma_H\\in H(F)$.\n\\end{cor}\n\n\\subsection{Sauvageot's density theorem}\\label{sub:density-thm}\n\n\\begin{prop}[Sauvageot]\\label{p:density}\n Let $\\widehat{f}_S\\in {\\mathscr F}(G(F_S)^{\\wedge})$. For any $\\epsilon>0$,\n there exist $\\phi_S,\\psi_S\\in C^\\infty_c(G(F_S))$ such that\n$$ \\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\psi}_S)\\le \\epsilon \\quad\\mbox{and}\\quad\n\\forall\n\\pi_S\\in G(F_S)^{\\wedge},~\n|\\widehat{f}_S(\\pi_S) - \\widehat{\\phi}_S(\\pi_S)|\\le \\widehat{\\psi}_S(\\pi_S) .$$\nConversely, any $\\widehat{f}_S\\in\n {\\mathscr B}_c(\\widehat{G(F_S)})$ with the above property belongs to ${\\mathscr F}(G(F_S)^{\\wedge})$.\n\\end{prop}\n\n\\begin{rem}\n It is crucial that $\\widehat{f}_S\\in {\\mathscr F}(G(F_S)^{\\wedge})$ has its set of discontinuity points contained in a measure zero set.\n Otherwise we could take $\\widehat{f}_S$ to be the characteristic function of the set of points of $G(F_S)^{\\wedge}$\n which arise as the $S$-components of some $\\pi\\in{\\mathcal{AR}}_{{\\rm disc},\\chi}(G)$ with nonzero Lie algebra cohomology. Note that the latter function typically lies outside ${\\mathscr F}(G(F_S)^{\\wedge})$. The conclusions of Theorems \\ref{t:Sato-Tate-level}, \\ref{t:Sato-Tate-weight} and Corollary \\ref{c:Plan-density} are false in general if such an $\\widehat{f}_S$ is placed at $S_0$. Namely in that case $\\widehat{\\mu}_{\\mathcal{F}_k,S_1}(\\widehat{\\phi}_{S_1})$ is often far from zero but $\\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\phi}_S)$ always vanishes.\n\\end{rem}\n\n From here until the end of this subsection let us suppose that $G$ is unramified at $S$.
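\\begin{rem}\n For orientation we record a standard example, stated here only as a sketch and not needed in the sequel. Take $G=\\mathrm{PGL}_2$ over ${\\mathbb Q}$, so that the datum $\\theta$ is trivial and $\\widehat{T}_{c}\/\\Omega_{c}$ is identified with $[0,\\pi]$ by sending $x$ to the Satake parameter ${\\rm diag}(e^{ix},e^{-ix})$. In this case $\\widehat{\\mu}^{\\mathrm{ST}}$ is the Sato--Tate measure\n $$\\widehat{\\mu}^{\\mathrm{ST}}=\\frac{2}{\\pi}\\sin^2 x\\, dx,$$\n which arises as the limit of the unramified tempered Plancherel measures $\\widehat{\\mu}^{\\mathrm{pl,ur}}_v$ as $q_v\\rightarrow \\infty$.\n\\end{rem}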
It will be convenient to introduce $\\mathcal{F}(G(F_S)^{\\wedge,\\mathrm{ur}})$ and its subspace $\\mathcal{F}(G(F_S)^{\\wedge,\\mathrm{ur},{\\rm temp}})$\n in order to state the Sato-Tate theorem in \\S\\ref{sub:ST-theorem}.\n The former (resp. the latter) consists of $\\widehat{f}_S\\in \\mathcal{F}(G(F_S)^{\\wedge})$ such that\n the support of $\\widehat{f}_S$ is contained in $G(F_S)^{\\wedge,\\mathrm{ur}}$ (resp. $G(F_S)^{\\wedge,\\mathrm{ur},{\\rm temp}}$).\n Denote by $\\mathcal{F}(\\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta})$ the space of bounded $\\widehat{\\mu}^{\\mathrm{ST}}_\\theta$-measurable functions on\n $\\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta}$ whose points of discontinuity are contained in a $\\widehat{\\mu}^{\\mathrm{ST}}_\\theta$-measure zero set.\n Define $\\mathcal{F}(\\prod_{v\\in S}\\widehat{T}_{c,\\theta_v}\/\\Omega_{c,\\theta_v})$ in the obvious analogous way.\n By using the topological Satake isomorphism for tempered spectrum (cf. \\eqref{e:local-isom-v}) $$\\prod_{v\\in S}\\widehat{T}_{c,\\theta_v}\/\\Omega_{c,\\theta_v}\n \\simeq G(F_S)^{\\wedge,\\mathrm{ur},{\\rm temp}}$$\n and extending by zero outside the tempered spectrum, one obtains\n \\begin{equation}\\label{e:Sauvageot}\\mathcal{F}\\left(\\prod_{v\\in S}\\widehat{T}_{c,\\theta_v}\/\\Omega_{c,\\theta_v}\\right)\\simeq \\mathcal{F}(G(F_S)^{\\wedge,\\mathrm{ur},{\\rm temp}})\n \\hookrightarrow \\mathcal{F}(G(F_S)^{\\wedge,\\mathrm{ur}}).\\end{equation}\n Although the first two $\\mathcal{F}(\\cdot)$ above are defined with respect to different measures $\\prod_{v\\in S}\\widehat{\\mu}^{\\mathrm{ST}}_{\\theta_v}$\n and $\\widehat{\\mu}^{\\mathrm{pl}}_S$, the isomorphism is justified by the fact that the ratio of the two measures is uniformly bounded above and below by positive constants (depending on $q_S$) in view of Proposition \\ref{p:unr-plan-meas} and Lemma \\ref{l:ST-measure}.\n Note that the space of continuous functions on $\\prod_{v\\in 
S}\\widehat{T}_{c,\\theta_v}\/\\Omega_{c,\\theta_v}$\n (resp. on $G(F_S)^{\\wedge,\\mathrm{ur},{\\rm temp}}$) is contained in the first (resp. second) term of \\eqref{e:Sauvageot},\n and the two subspaces correspond under the isomorphism.\n\n\\begin{cor}\\label{c:density} Let $\\widehat{f}_S\\in \\mathcal{F}(G(F_S)^{\\wedge,\\mathrm{ur}})$. For any $\\epsilon>0$,\n there exist $\\phi_S,\\psi_S\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_S))$ such that (i)\n$ \\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\psi}_S)\\le \\epsilon$ and (ii) $\\forall\n\\pi_S\\in G(F_S)^{\\wedge,\\mathrm{ur}}$, $|\\widehat{f}_S(\\pi_S) - \\widehat{\\phi}_S(\\pi_S)|\\le \\widehat{\\psi}_S(\\pi_S)$.\n\\end{cor}\n\n\\begin{proof}\n Let $\\phi_S,\\psi_S\\in C^\\infty_c(G(F_S))$ be the functions associated to $\\widehat{f}_S$ as in Proposition \\ref{p:density}. Then it is enough to replace $\\phi_S$ and $\\psi_S$ with their convolution products with the characteristic function on $\\prod_{v\\in S} K_v$.\n\\end{proof}\n\n The following proposition will be used later in \\S\\ref{sub:ST-theorem}. For each $v\\in {\\mathcal V}_F(\\theta)$,\n the image of $\\widehat{f}$ in $\\mathcal{F}(G(F_v)^{\\wedge,\\mathrm{ur}})$ via \\eqref{e:Sauvageot}\n will be denoted $\\widehat{f}_v$.\n\n\\begin{prop}\\label{p:unr-density}\n Let $\\widehat{f}\\in \\mathcal{F}(\\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta})$ and $\\epsilon >0$. 
There exists an integer $\\kappa\\ge 1$ and for all places $v\\in {\\mathcal V}_F(\\theta)$, there are bounded functions $\\phi_{v},\\psi_{v}\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{v}))^{\\le \\kappa}$ such that $\\widehat{\\mu}^{\\mathrm{pl}}_v(\\widehat{\\psi}_v)\\le \\epsilon$ and $|\\widehat{f_v}(\\pi) - \\widehat{\\phi_v}(\\pi)| \\le \\widehat{\\psi_v}(\\pi)$ for all $\\pi\\in G(F_v)^{\\wedge,\\mathrm{ur}}$.\n\\end{prop}\n\n\\begin{proof}\n This is no more than Corollary \\ref{c:density} if we required only that $\\phi_{v},\\psi_{v}\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{v}))$, without the superscript $\\le \\kappa$. So we may disregard finitely many $v$ by considering the subset ${\\mathcal V}_F(\\theta)^{\\ge Q}$ of ${\\mathcal V}_F(\\theta)$ consisting of $v$ such that $q_v\\ge Q$ for some $Q>0$. In view of Proposition \\ref{p:lim-of-Plancherel}, we may choose $Q\\in {\\mathbb Z}_{>0}$ such that \\begin{equation}\\label{e:bound-pl-ST}\\forall v\\in {\\mathcal V}_F(\\theta)^{\\ge Q},~\\forall \\widehat{f}\\in \\mathcal{F}(\\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta}),\\quad\n \\frac{1}{2} \\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}(|\\widehat{f}|)\\le \\widehat{\\mu}^{\\mathrm{pl,ur}}_v(|\\widehat{f}_v|)\\le 2\\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}(|\\widehat{f}|).\\end{equation}\n\n Fix any $w\\in {\\mathcal V}_F(\\theta)^{\\ge Q}$. Corollary \\ref{c:density} allows us to find $\\phi_w,\\psi'_w\\in{\\mathcal{H}}^{\\mathrm{ur}}(G(F_{w}))$ such that\n \\begin{equation}\\label{e:phi_wpsi_w}\n \\widehat{\\mu}^{\\mathrm{pl}}_w(\\widehat{\\psi}'_w)\\le \\epsilon\/8 \\quad\\mbox{and}\\quad\n\\forall \\pi_w\\in G(F_w)^{\\wedge,\\mathrm{ur}},~\n|\\widehat{f}_w(\\pi_w) - \\widehat{\\phi}_w(\\pi_w)|\\le \\widehat{\\psi}'_w(\\pi_w). \\end{equation}\n Let $\\kappa_0\\in {\\mathbb Z}_{\\ge 0}$ be such that $\\phi_w,\\psi'_w\\in{\\mathcal{H}}^{\\mathrm{ur}}(G(F_{w}))^{\\le \\kappa_0}$. Now recall that for every $v\\in {\\mathcal V}_F(\\theta)$ there is a canonical isomorphism (cf.
\\eqref{e:Satake}, Lemma \\ref{l:unr-temp-spec})\n between ${\\mathcal{H}}^{\\mathrm{ur}}(G(F_v))$ and the space of regular functions on the complex variety $\\widehat{T}_{\\theta}\/\\Omega_{\\theta}$. Using the latter as a bridge, we may transport $\\phi_w,\\psi'_w$ to $\\phi_v,\\psi'_v\\in{\\mathcal{H}}^{\\mathrm{ur}}(G(F_{v}))$ for every $v\\in {\\mathcal V}_F(\\theta)$. Clearly $\\phi_v,\\psi'_v\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{v}))^{\\le \\kappa_0}$ from the definition in \\S\\ref{sub:trun-unr-Hecke}. Moreover \\eqref{e:bound-pl-ST} and \\eqref{e:phi_wpsi_w} imply that\n for all $v\\in {\\mathcal V}_F(\\theta)^{\\ge Q}$,\n $$ \\widehat{\\mu}^{\\mathrm{pl}}_v(\\widehat{\\psi}'_v)\\le \\epsilon\/2 \\quad\\mbox{and}\\quad\n\\forall \\pi_v\\in G(F_v)^{\\wedge,\\mathrm{ur},{\\rm temp}},~\n|\\widehat{f}_v(\\pi_v) - \\widehat{\\phi}_v(\\pi_v)|\\le \\widehat{\\psi}'_v(\\pi_v).$$\n(Observe that $\\widehat{\\mu}^{\\mathrm{pl}}_v(\\widehat{\\psi}'_v)\\le 2 \\widehat{\\mu}^{\\mathrm{ST}}_\\theta(\\widehat{\\psi}'_v)=2\\widehat{\\mu}^{\\mathrm{ST}}_\\theta(\\widehat{\\psi}'_w)\\le 4\\widehat{\\mu}^{\\mathrm{pl}}_w(\\widehat{\\psi}'_w)\\le \\epsilon\/2$ to justify the first inequality.)\n\n To achieve the latter inequality for non-tempered $\\pi_v\\in G(F_v)^{\\wedge,\\mathrm{ur}}$, we would like to perturb $\\psi'_v$ in a way independent of $v$ while not sacrificing the former inequality. Since $\\widehat{f}_v(\\pi_v)=0$ for such $\\pi_v$,\n what we need to establish is that $|\\widehat{\\phi}_v(\\pi_v)|\\le \\widehat{\\psi}_v(\\pi_v)$ for all non-tempered $\\pi_v\\in G(F_v)^{\\wedge,\\mathrm{ur}}$. To this end, we use the fact that there is a compact subset $\\mathcal{K}$ of $\\widehat{T}_{\\theta}\/\\Omega_{\\theta}$ such that $G(F_v)^{\\wedge,\\mathrm{ur}}$ is contained in $\\mathcal{K}$ for every $v\\in {\\mathcal V}_F(\\theta)$ (cf. \\cite[Thm XI.3.3]{BW00}).
By using the Weierstrass approximation theorem, we find $\\psi''_w\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{w}))$ such that\n $$\\widehat{\\mu}^{\\mathrm{pl}}_w(\\widehat{\\psi}''_w)\\le \\epsilon\/8,$$\n$$\\forall \\pi_w\\in \\mathcal{K}\\backslash G(F_w)^{\\wedge,\\mathrm{ur},{\\rm temp}},~|\\widehat{\\psi}'_w(\\pi_w)| + |\\widehat{\\phi}_w(\\pi_w)|\\le \\widehat{\\psi}''_w(\\pi_w),$$\n$$\\forall \\pi_w\\in G(F_w)^{\\wedge,\\mathrm{ur},{\\rm temp}},~\\widehat{\\psi}''_w(\\pi_w)\\ge 0.$$\n Choose $\\kappa\\ge \\kappa_0$ such that $\\psi''_w\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{w}))^{\\le \\kappa}$ and put $\\psi_w:=\\psi'_w+\\psi''_w$ so that $\\widehat{\\mu}^{\\mathrm{pl}}_w(\\widehat{\\psi}_w)\\le \\epsilon\/4$ and $\\psi_w\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{w}))^{\\le \\kappa}$. For each $v\\in {\\mathcal V}_F(\\theta)^{\\ge Q}$, let $\\psi_v$ denote the transport of $\\psi_w$ just as $\\psi'_v$ was the transport of $\\psi'_w$ in the preceding paragraph. Then $\\widehat{\\mu}^{\\mathrm{pl}}_v(\\widehat{\\psi}_v)\\le \\epsilon$ and $\\psi_v\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{v}))^{\\le \\kappa}$ as before. 
Moreover\n $$\\forall \\pi_v\\in G(F_v)^{\\wedge,\\mathrm{ur},{\\rm temp}},~ |\\widehat{f}_v(\\pi_v) - \\widehat{\\phi}_v(\\pi_v)|\\le \\widehat{\\psi}'_v(\\pi_v)\\le \\widehat{\\psi}_v(\\pi_v)$$\n and for $\\pi_v\\in G(F_v)^{\\wedge,\\mathrm{ur}}\\backslash G(F_v)^{\\wedge,\\mathrm{ur},{\\rm temp}}$,\n $$ |\\widehat{f}_v(\\pi_v) - \\widehat{\\phi}_v(\\pi_v)|=|\\widehat{\\phi}_v(\\pi_v)|\\le \\widehat{\\psi}''_v(\\pi_v)-|\\widehat{\\psi}'_v(\\pi_v)|\\le \\widehat{\\psi}_v(\\pi_v),$$\n the last inequality following from $\\widehat{\\psi}_v=\\widehat{\\psi}'_v+\\widehat{\\psi}''_v$.\n\\end{proof}\n\n\\begin{rem}\n A more direct approach to~\\eqref{e:phi_wpsi_w} that wouldn't involve Corollary~\\ref{c:density} would be to use Weierstrass approximation to find polynomials $\\phi$ and $\\psi$ on $\\widehat T_{c,\\theta}\/\\Omega_{c,\\theta}$ of degree $\\le \\kappa$ such that $|\\widehat{f} - \\widehat{\\phi} | \\le \\widehat{\\psi}$ and then the isomorphism~\\eqref{e:Sauvageot} to transport $\\phi$ and $\\psi$ at the place $v$.\n\\end{rem}\n\nWe note~\\cite{Sau97}*{Lemme 3.5} that for any $\\phi_v\\in C^\\infty_c(G(F_v))$ there exists a $\\phi'_v\\in C^\\infty_c(G(F_v))$ such that\n$|\\widehat{\\phi}_v(\\pi_v)|\\le \\widehat{\\phi'}_v(\\pi_v)$ for all $\\pi_v\\in G(F_v)^{\\wedge}$. This statement is elementary, e.g. it follows from the Dixmier--Malliavin decomposition theorem. 
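One elementary way to produce such a majorizer (a sketch; here $\\alpha^*(g):=\\overline{\\alpha(g^{-1})}$, and we use that $\\pi(\\alpha)$ has finite rank for admissible $\\pi$): write $\\phi_v=\\sum_{i=1}^N \\alpha_i*\\beta_i$ with $\\alpha_i,\\beta_i\\in C^\\infty_c(G(F_v))$ by the Dixmier--Malliavin theorem. For unitary $\\pi$ we have $\\pi(\\alpha^*)=\\pi(\\alpha)^*$, so the Cauchy--Schwarz inequality for the Hilbert--Schmidt inner product gives\n$$|\\widehat{\\phi}_v(\\pi)|\\le \\sum_{i=1}^N |{\\rm tr}\\,(\\pi(\\alpha_i)\\pi(\\beta_i))|\\le \\frac{1}{2}\\sum_{i=1}^N \\left({\\rm tr}\\, \\pi(\\alpha_i*\\alpha_i^*)+{\\rm tr}\\, \\pi(\\beta_i^* * \\beta_i)\\right),$$\nso that $\\phi'_v:=\\frac{1}{2}\\sum_{i=1}^N (\\alpha_i*\\alpha_i^*+\\beta_i^* * \\beta_i)$ is one admissible choice.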
In fact we have the following stronger result due to Bernstein~\\cite{Bernstein:typeI}.\n\\begin{prop}\n [Uniform admissibility theorem]\n \\label{l:bounding-function-by-pos-function}\nFor any $\\phi_v \\in C^\\infty_c(G(F_v))$ there exists $C>0$ such that $|{\\rm tr}\\, \\pi(\\phi_v)|\\le C$ for all $\\pi\\in G(F_v)^{\\wedge}$.\n\\end{prop}\n\n\n\n\n\n\\begin{comment}\n\\subsection{A result on the test functions on $p$-adic groups}\n\n The result of this subsection will be used in \\S\\ref{sub:ST-theorem} (and \\S\\ref{sub:general-functions-S_0}, which relies on \\S\\ref{sub:ST-theorem}).\n\n\\begin{proof}\n We believe the result is due to Bernstein, although we haven't been able to locate the precise proof in the literature.\n\n\n Henceforth we fix a minimal $F_v$-rational Levi subgroup and the finite set ${\\mathcal L}$ of representatives for the $F_v$-conjugacy classes of Levi subgroups containing it. The strategy of proof is roughly as follows: we demonstrate that $\\phi$ has uniformly bounded trace value on the set of full parabolically induced representations which are relevant to unitary representations in a suitable sense. Then by bounding the coefficients in the expression of an irreducible unitary representation as a linear combination of full induced representations, we will verify the claim.\n\n For $M\\in {\\mathcal L}$,\n denote by ${\\mathscr R}_M$ (resp. ${\\mathscr R}_{0,M}$) the set of equivalence classes of irreducible tempered (resp. supercuspidal) representations of $M(F_v)$, where $\\sigma$ and $\\sigma'$ are considered equivalent if there is $g\\in G(F_v)$ such that $g M(F_v)g^{-1}=M(F_v)$ and such that $\\sigma'\\simeq \\sigma^g$. Here $\\sigma^g$ means the twist of $\\sigma$ by $g$-conjugation. Let $\\overline{{\\mathscr R}}$ (resp. 
$\\overline{{\\mathscr R}}_0$) denote the analogous set defined by a different notion of equivalence where $\\sigma'$ and $\\sigma$ are equivalent (often said to be inertially equivalent) if $\\sigma'\\simeq \\sigma\\otimes \\psi$ for some $\\psi\\in \\Psi(M(F_v))$. (See \\S\\ref{sub:density-thm} for the definition of $\\Psi(M(F_v))$.)\n Define ${\\mathscr R}:=\\coprod_{M\\in {\\mathcal L}}\\{(M,\\sigma):\\sigma\\in {\\mathscr R}_M\\}$ and similarly ${\\mathscr R}_0$, $\\overline{{\\mathscr R}}$, and $\\overline{{\\mathscr R}}_0$.\n\n Let $(M,\\sigma)$ be a representative of an equivalence class of ${\\mathscr R}$. By abuse of notation we write $(M,\\sigma)\\in {\\mathscr R}$ if there is no danger of confusion. Denote by $JH({\\rm n\\textrm{-}ind}_M^G(\\sigma))$ the set of Jordan-H\\\"{o}lder quotients of the normalized parabolic induction of $\\sigma$. It is well known that $JH({\\rm n\\textrm{-}ind}_M^G(\\sigma))$ is a finite set and that its definition is independent of the choice of the parabolic subgroup or the representative. When $\\pi\\in {\\rm Irr}(G(F_v))$ let $\\sc(\\pi)=(M,\\sigma)\\in {\\mathscr R}$ denote its supercuspidal support so that $\\pi\\in JH({\\rm n\\textrm{-}ind}_M^G(\\sigma))$. It induces maps ${\\mathscr R}\\rightarrow {\\mathscr R}_0$ (still denoted by $\\text{sc}$) and $\\overline{\\text{sc}}:\\overline{{\\mathscr R}}\\rightarrow \\overline{{\\mathscr R}}_0$.\n\n\n\nFix an open compact subgroup $U\\subset G(F_v)$ such that $\\phi=\\phi_v$ is invariant under the left and right translations by elements of $U$. (For convenience we will often omit the subscript $v$ in the remainder of the proof.) Then ${\\rm tr}\\, \\pi(\\phi)=0$ unless $\\pi^U\\neq 0$.
Define ${\\mathscr R}_{U}$ to be the subset of $(M,\\sigma)\\in {\\mathscr R}$ such that for some $\\psi\\in \\Psi(M(F_v))$ there exists $\\pi\\in JH({\\rm n\\textrm{-}ind}_M^G(\\sigma\\otimes\\psi))$ with $\\pi^U\\neq 0$, and $\\overline{{\\mathscr R}}_U$ to be the image of ${\\mathscr R}_U$ in $\\overline{{\\mathscr R}}$. Let us observe that $\\overline{{\\mathscr R}}_{U}$ is finite. Indeed this follows from the fact that $\\overline{\\text{sc}}$ is a finite-to-one map and the well known fact that $\\overline{{\\mathscr R}}_U$ has finite image in $\\overline{{\\mathscr R}}_0$. (The former is true since the parabolic induction of any irreducible smooth representation has finite length. The latter amounts to saying that the function $(M,\\sigma)\\mapsto {\\rm tr}\\, {\\rm n\\textrm{-}ind}^G_M(\\sigma)({\\mathbf{1}}_U)$ is supported on finitely many Bernstein components.)\n\n\n Choose a set of representatives ${\\mathscr R}^{{\\rm rep}}_U$ for $\\overline{{\\mathscr R}}_U$ in ${\\mathscr R}_U$. For each $(M,\\sigma)\\in {\\mathscr R}_U$, consider the set $\\Psi_{(M,\\sigma)}$ consisting of $\\psi\\in \\Psi(M(F_v))$ such that ${\\rm n\\textrm{-}ind}^G_M(\\sigma\\otimes \\psi)$ has an irreducible unitary subquotient. It was shown in \\cite[Thm 2.5]{Tad88} that $\\Psi_{(M,\\sigma)}$ is a compact subset of the complex manifold $\\Psi(M(F_v))$. Put\n $${\\mathscr U}:=\\{(M,\\sigma\\otimes \\psi): (M,\\sigma)\\in {\\mathscr R}^{{\\rm rep}}_U,~\\psi\\in \\Psi_{(M,\\sigma)}\\}$$\n so that ${\\mathscr U}\\subset {\\mathscr R}$. Define $\\widetilde{{\\mathscr U}}:=\\text{sc}^{-1}(\\text{sc}({\\mathscr U}))$ and $\\widetilde{{\\mathscr R}}_{(M,\\sigma)}:=\\text{sc}^{-1}(\\text{sc}(\\{(M,\\sigma)\\}))$ for each $(M,\\sigma)\\in {\\mathscr R}^{{\\rm rep}}_U$ so that $\\widetilde{{\\mathscr U}}\\supset \\widetilde{{\\mathscr R}}_{(M,\\sigma)}$. 
Then we may write $\\widetilde{{\\mathscr U}}$ in the form\n $$\\widetilde{{\\mathscr U}}=\\bigcup_{(M,\\sigma)\\in \\overline{{\\mathscr R}}^{{\\rm rep}}_U} \\bigcup_{(L,\\tau)\\in \\widetilde{{\\mathscr R}}_{(M,\\sigma)}} \\{(L,\\tau\\otimes \\psi):\\psi\\in \\widetilde{\\Psi}_{(L,\\tau)}\\}$$\n for compact subsets $\\widetilde{\\Psi}_{(L,\\tau)}$ of $\\Psi(L(F_v))$. Note that both $\\overline{{\\mathscr R}}^{{\\rm rep}}_U$ and $\\widetilde{{\\mathscr R}}_{(M,\\sigma)}$ are finite sets. Since $$\\psi\\mapsto {\\rm tr}\\, ({\\rm n\\textrm{-}ind}^G_L(\\tau\\otimes \\psi))(\\phi)$$\n is a continuous function for each $(L,\\tau)$ as above, there exists $C>0$ such that\n \\begin{equation}\\label{e:uniform-trace}\\forall (M,\\sigma)\\in \\overline{{\\mathscr R}}_U,~\\forall (L,\\tau)\\in \\widetilde{{\\mathscr R}}_{(M,\\sigma)},~\\forall \\psi\\in\\widetilde{\\Psi}_{(L,\\tau)}, \\quad |{\\rm tr}\\, ({\\rm n\\textrm{-}ind}^G_L(\\tau\\otimes \\psi))(\\phi)|\\le C.\\end{equation}\n\n Now \\cite[\\S4, Prop 2]{Clo86} allows us to write any $\\pi\\in {\\rm Irr}(G(F_v))$ as\n \\begin{equation}\\label{e:lin-comb-ind}\\pi=\\sum_{i\\in I_{\\pi}} \\epsilon_i\\cdot{\\rm n\\textrm{-}ind}^G_{M_i}(\\sigma_i\\otimes \\psi_i)\\end{equation} in the Grothendieck group for $\\epsilon_i\\in \\{\\pm1\\}$, $(M_i,\\sigma_i)\\in {\\mathscr R}$ and $\\psi_i\\in \\Psi(M_i(F_v))$. We claim that there exists a constant $D>0$ (independent of $\\pi$) such that for every $\\pi$ a linear combination \\eqref{e:lin-comb-ind} can be found such that\n \\begin{enumerate}\n \\item ${\\rm n\\textrm{-}ind}^G_{M_i}(\\sigma_i\\otimes \\psi_i)$ has the same supercuspidal support as $\\pi$,\n \\item $|I_\\pi|\\le D$.\n \\end{enumerate}\n Clozel already ensured (i) in his construction of the linear combination. To justify (ii) we use the inductive procedure in \\textit{loc. 
cit.} Let us begin by recalling from \\cite[Thm 2.8]{BZ77} that there is a universal upper bound $l_G$ (depending only on $G$ and $F_v$) for the size of the set $JH({\\rm n\\textrm{-}ind}_M^G(\\sigma))$ as $(M,\\sigma)$ runs over all $M\\in {\\mathcal L}$ and all $\\sigma\\in {\\rm Irr}(M(F_v))$. Let $\\langle \\pi\\rangle$ be the subset of ${\\rm Irr}(G(F_v))$ having the same supercuspidal support as $\\pi$. Then $\\langle \\pi\\rangle$ has at most $l_G$ elements. As Clozel explained, one can write $\\pi={\\rm n\\textrm{-}ind}_M^G(\\sigma_0)-(\\pi_1+\\cdots+\\pi_t)$, where $t\\le l_G$ and the Langlands quotient data of $\\pi_1,...,\\pi_t$ are strictly smaller than that of $\\pi$ in some natural ordering. We repeat this process to rewrite $\\pi_i={\\rm n\\textrm{-}ind}^G_{M_i}(\\sigma_i)-(\\pi_{i,1}+\\cdots+\\pi_{i,t_i})$ for $i=1,...,t$, and so on. The process terminates thanks to the ordering on $\\langle \\pi\\rangle$, and it is clear that $|I_\\pi|$ admits an upper bound only in terms of $l_G$.\n\n Going back to the problem, assume from now on that $\\pi\\in G(F_v)^{\\wedge}$ and $\\pi^U\\neq 0$.\n The crucial point is that condition (i) above implies that $(M_i,\\sigma_i\\otimes \\psi_i)\\in \\widetilde{{\\mathscr U}}$. 
By \\eqref{e:uniform-trace}\n $$|{\\rm tr}\\, \\pi(\\phi)|\\le \\sum_{i\\in I_\\pi} \\left| {\\rm tr}\\, ({\\rm n\\textrm{-}ind}^G_{M_i}(\\sigma_i\\otimes \\psi_i))(\\phi)\\right| \\le CD.$$\n\n\n\n\n\\end{proof}\n\\end{comment}\n\n\\subsection{Automorphic representations and a counting measure}\\label{sub:aut-rep}\n\n\n\n Now consider a collection of complex numbers\n $$\\mathcal{F}=\\{a_{\\mathcal{F}}(\\pi)\\in {\\mathbb C}\\}_{\\pi\\in{\\mathcal{AR}}_{{\\rm disc},\\chi}(G)}$$\n such that $a_{\\mathcal{F}}(\\pi)=0$ for all but finitely many $\\pi$.\n We think of $\\mathcal{F}$ as a multi-set by viewing\n $a_{\\mathcal{F}}(\\pi)$ as a multiplicity, or more appropriately as a density function with finite support, since $a_{\\mathcal{F}}(\\pi)$ is allowed to be in ${\\mathbb C}$. There are obvious meanings when we write $\\pi\\in \\mathcal{F}$ and $|\\mathcal{F}|$ (we could have written $\\pi\\in \\mathrm{supp}\\, \\mathcal{F}$ for the former):\n$$ \\pi\\in \\mathcal{F} ~\\stackrel{\\mathrm{def}}{\\Leftrightarrow}~ a_{\\mathcal{F}}(\\pi)\\neq 0,\\qquad\n|\\mathcal{F}|:=\\sum_{\\pi\\in \\mathcal{F}} a_{\\mathcal{F}}(\\pi).$$\n\n In order to explain our working hypothesis, we recall a definition.\n\\begin{defn}\\label{d:cuspidal}\n Let $H$ be a connected reductive group over ${\\mathbb Q}$. The maximal ${\\mathbb Q}$-split torus in $Z(H)$\n is denoted $A_H$. We say $H$ is \\emph{cuspidal} if $(H\/A_H)\\times_{\\mathbb Q} {\\mathbb R}$ contains a maximal\n ${\\mathbb R}$-anisotropic torus.\n\\end{defn}\n If $H$ is cuspidal then $H({\\mathbb R})$ has discrete series representations. (We remind the reader that discrete series always mean ``relative discrete series'' for us, i.e. those whose matrix coefficients are square-integrable modulo center.) 
The converse is true when\n $H$ is semisimple but not in general.\n Throughout this section the following will be in effect:\n\\begin{hypo}\\label{hypo:cuspidal}\n $\\Res_{F\/{\\mathbb Q}} G$ is a cuspidal group.\n\\end{hypo}\n Let $S=S_0\\coprod S_1\\subset {\\mathcal V}_F^\\infty$ be a nonempty finite subset\n and $\\widehat{f}_{S_0}\\in {\\mathscr F}(G(F_{S_0})^{\\wedge})$.\n (It is allowed that either $S_0$ or $S_1$ is empty.)\n Let\n\\begin{itemize}\\item (level) $U^{S,\\infty}$ be an open compact subset of $G({\\mathbb A}^{S,\\infty})$,\n\\item (weight) $\\xi=\\otimes_{v|\\infty}\\xi_v$\nbe an irreducible algebraic representation of\n$$G_\\infty\\times_{\\mathbb R} {\\mathbb C}=(\\Res_{F\/{\\mathbb Q}} G)\\times_{\\mathbb Q} {\\mathbb C}=\\prod_{v|\\infty} G\\times_{F,v} {\\mathbb C}. $$ \\end{itemize}\n Denote by $\\chi:A_{G,\\infty}\\rightarrow {\\mathbb C}^\\times$ the restriction of the\n central character for $\\xi^\\vee$.\nDefine $$\\mathcal{F}=\\mathcal{F}(U^{S,\\infty},\\widehat{f}_{S_0},S_1,\\xi)\\quad\\mbox{by}$$\n\\begin{equation}\\label{e:a(pi)-general}\na_{\\mathcal{F}}(\\pi):=(-1)^{q(G)} m_{{\\rm disc},\\chi}(\\pi)\\dim(\\pi^{S,\\infty})^{U^{S,\\infty}}\n\\widehat{f}_{S_0}(\\pi_{S_0})\\widehat{{\\mathbf{1}}}_{K_{S_1}}(\\pi_{S_1})\n \\chi_{{\\rm EP}}( \\pi_\\infty\\otimes \\xi) ~\\in {\\mathbb C}.\\end{equation}\n Note that $\\widehat{{\\mathbf{1}}}_{K_{S_1}}(\\pi_{S_1})$ equals 1 if $\\pi_{S_1}$ is unramified\n and 0 otherwise, and that $\\chi_{{\\rm EP}}( \\pi_\\infty\\otimes \\xi)=0$ unless $\\pi_\\infty$ has\n the same infinitesimal character as $\\xi^\\vee$.\n The set of $\\pi$ such that $a_{\\mathcal{F}}(\\pi)\\neq 0$ is finite by\n Harish-Chandra's finiteness theorem.\n Let us define measures $\\widehat{\\mu}_{\\mathcal{F},S_1}$ and $\\widehat{\\mu}^\\natural_{\\mathcal{F},S_1}$ associated with $\\mathcal{F}$\n on the unramified unitary dual $G(F_{S_1})^{\\wedge,\\mathrm{ur}}$, motivated by the trace formula.\nPut 
$\\tau'(G):=\\overline{\\mu}^{\\mathrm{can},{\\rm EP}}(G(F)A_{G,\\infty}\\backslash G({\\mathbb A}_F))$.\nFor any function $\\widehat{f}_{S_1}$ on $G(F_{S_1})^{\\wedge,\\mathrm{ur}}$ which is continuous\noutside a measure zero set, define\n\\begin{equation}\\label{e:defn-of-mu}\n\\widehat{\\mu}_{\\mathcal{F},S_1}(\\widehat{f}_{S_1})\n := \\frac{\\mu^{\\mathrm{can}}(U^{S,\\infty})}{\\tau'(G)\\dim\\xi}\\sum_{\\pi\\in {\\mathcal{AR}}_{{\\rm disc},\\chi}(G)} a_{\\mathcal{F}}(\\pi) \\widehat{f}_{S_1}(\\pi_{S_1}).\\end{equation}\nThe sum is finite because $a_{\\mathcal{F}}$ is supported on finitely many $\\pi$. Now the key point is that the right hand side can be identified with the spectral side of Arthur's trace formula with the Euler-Poincar\\'{e} function\n at infinity as in \\S\\ref{sub:EP} when $\\widehat{f}_{S_1}=\\widehat{\\phi}_{S_1}$ for some $\\phi_{S_1}\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{S_1}))$ (\\cite[pp.267-268]{Art89}, cf. proof of \\cite[Prop 4.1]{Shi-Plan}). That is to say,\nif we write $\\phi^{\\infty}=\\phi_{S_0}\\phi_{S_1}\\phi^{S,\\infty}$,\n\\begin{equation}\\label{e:apply-Arthur}\\widehat{\\mu}_{\\mathcal{F},S_1}(\\widehat{\\phi}_{S_1})= (-1)^{q(G)}\\frac{ I_{{\\rm spec}}(\\phi^{\\infty}\\phi_\\xi,\\mu^{\\mathrm{can},{\\rm EP}})}{{\\tau'(G)\\dim\\xi}}\n= (-1)^{q(G)}\\frac{ I_{{\\rm geom}}(\\phi^{\\infty}\\phi_\\xi,\\mu^{\\mathrm{can},{\\rm EP}})}{{\\tau'(G)\\dim\\xi}}\\end{equation}\nwhere $I_{{\\rm spec}}$ (resp. $I_{{\\rm geom}}$) denotes the spectral (resp. geometric)\nside of Arthur's invariant trace formula with respect to the measure\n$\\mu^{\\mathrm{can},{\\rm EP}}$. 
Finally if $\\widehat{f}_{S_0}$ has the property that $\\widehat{\\mu}^{\\mathrm{pl}}_{S_0}(\\widehat{f}_{S_0})\\neq 0$ then\nput $$ \\widehat{\\mu}^\\natural_{\\mathcal{F},S_1}:=\\widehat{\\mu}^{\\mathrm{pl}}_{S_0}(\\widehat{f}_{S_0})^{-1} \\widehat{\\mu}_{\\mathcal{F},S_1}.$$\n\n\n\n\\begin{rem}\\label{r:|F|} The measure $ \\widehat{\\mu}^\\natural_{\\mathcal{F},S_1}$\n is asymptotically the same as the counting measure $$\\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F},S_1}(\\widehat{f}_{S_1})\n = \\frac{1}{|\\mathcal{F}|}\\sum_{\\pi\\in {\\mathcal{AR}}_{{\\rm disc},\\chi}(G)} a_{\\mathcal{F}}(\\pi) \\widehat{f}_{S_1}(\\pi_{S_1})$$\nassociated with the $S_1$-components of $\\mathcal{F}$ (assuming $|\\mathcal{F}|\\neq 0$). More precisely if $\\{\\mathcal{F}_k\\}_{k\\ge1}$ is a family as in\n\\S\\ref{sub:aut-families} below, then $\\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F}_k,S_1}\/\n \\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,S_1}$ is a constant tending to 1 as $k\\rightarrow\\infty$ by Corollary \\ref{c:estimate-F_k}.\n\\end{rem}\n\n\\begin{ex}\\label{ex:weight-regular}\n Suppose that the highest weight of $\\xi$ is \\emph{regular} and that $S_0=\\emptyset$. Then\n $\\pi\\in {\\mathcal{AR}}_{{\\rm disc},\\chi}(G)$ belongs to $\\mathcal{F}$ if and only if\n $(\\pi^{S,\\infty})^{U^{S,\\infty}}\\neq 0$, $\\pi$ is unramified at $S$ and\n $\\pi_\\infty\\in \\Pi_{{\\rm disc}}(\\xi^\\vee)$. When $\\pi_\\infty\\in \\Pi_{{\\rm disc}}(\\xi^\\vee)$,\n \\eqref{e:a(pi)-general} simplifies as\n \\[\n \n a_{\\mathcal{F}}(\\pi)=m_{{\\rm disc},\\chi}(\\pi)\\dim(\\pi^{S,\\infty})^{U^{S,\\infty}}.\n \\]\n\\end{ex}\n\n\\begin{ex}\\label{ex:example-for-f_S}\n Let $\\widehat{f}_{S_0}$ be the characteristic function of some relatively compact $\\widehat{\\mu}^{\\mathrm{pl}}_{S_0}$-measurable subset\n $\\widehat{U}_{S_0}\\subset G(F_{S_0})^{\\wedge}$.\n Assume that $S_0$ is large enough that $G$ and all members of\n $\\mathcal{F}$ are unramified outside $S_0$. 
Take $U^{S_0,\\infty}$ to be the product of $K_v$\n over all finite places $v\\notin S_0$. Then for each $\\pi\\in {\\mathcal{AR}}_{{\\rm disc},\\chi}(G)$,\n \\begin{equation}\\label{e:example-for-f_S}\n\t a_{\\mathcal{F}}(\\pi)=(-1)^{q(G)}\\chi_{{\\rm EP}}(\\pi_\\infty\\otimes \\xi)\n m_{{\\rm disc},\\chi}(\\pi)\n \\end{equation}\n if $\\pi^{S_0,\\infty}$ is unramified and $\\pi_{S_0}\\in \\widehat{U}_{S_0}$\n (in which case $a_{\\mathcal{F}}(\\pi)\\neq0$ if moreover $\\chi_{{\\rm EP}}(\\pi_\\infty\\otimes \\xi)\\neq 0$); otherwise $a_{\\mathcal{F}}(\\pi)=0$.\n If the highest weight of $\\xi$ is regular, $\\chi_{{\\rm EP}}(\\pi_\\infty\\otimes \\xi)\\neq 0$ exactly when $\\pi_\\infty\\in \\Pi_{{\\rm disc}}(\\xi^\\vee)$, in which case \\eqref{e:example-for-f_S} simplifies\n as\n \\[\n \n a_{\\mathcal{F}}(\\pi)=m_{{\\rm disc},\\chi}(\\pi).\n \\]\n Compare this with Example \\ref{ex:weight-regular}. (The analogy in the case of modular forms is that\n only newforms are counted in the current example\n whereas old-forms are also counted in Example \\ref{ex:weight-regular}.)\nFinally we observe that since the highest weight of $\\xi$ is regular and $\\pi_\\infty\\in \\Pi_{{\\rm disc}}(\\xi^\\vee)$, the discrete automorphic representation $\\pi$ is automatically cuspidal \\cite[Thm 4.3]{Wall84}. 
In the present example the discrete multiplicity coincides with the cuspidal multiplicity.\n\\end{ex}\n\n\\begin{rem}\\label{r:why-S_0}\n As the last example shows,\n the main reason to include $S_0$ is to prescribe local conditions at finitely many places (namely at $S_0$) on automorphic families.\n For instance one can take $\\widehat{f}_{S_0}=\\widehat{\\phi}_{S_0}$ where\n $\\phi_{S_0}$ is a pseudo-coefficient of a supercuspidal representation\n (or a truncation thereof if the center of $G$ is not anisotropic over $F_{S_0}$).\n Then it allows us to consider a family of $\\pi$\n whose $S_0$-components are a particular supercuspidal representation (or an unramified character twist thereof).\n By using various $\\widehat{f}_{S_0}$\n (which are in general not equal to $\\widehat{\\phi}_{S_0}$ for any\n $\\phi_{S_0}\\in C^\\infty_c(G(F_{S_0}))$) one\n obtains great flexibility in prescribing a local condition\n as well as imposing weighting factors for a family.\n\\end{rem}\n\n\n\\subsection{Families of automorphic representations}\\label{sub:aut-families}\n\n Continuing from the previous subsection (in particular keeping Hypothesis \\ref{hypo:cuspidal})\n let us introduce two kinds of families $\\{\\mathcal{F}_k\\}_{k\\ge 1}$\n which will be studied later on. We will need to measure the\n size of $\\xi$ in the following way. Let $T_\\infty$ be a maximal torus of\n $G_\\infty$ over ${\\mathbb R}$. For a $B$-dominant $\\lambda\\in X^*(T_\\infty)$, set\n $m(\\lambda):=\\min_{\\alpha\\in \\Phi^+} \\langle \\lambda,\\alpha\\rangle$.\n For $\\xi$ with $B$-dominant highest weight $\\lambda_{\\xi}$,\n define $m(\\xi):=m(\\lambda_\\xi)$.\n\n Let $\\phi_{S_0}\\in C^\\infty_c(G(F_{S_0}))$. (More generally we will sometimes prescribe a local condition at $S_0$ by\n $\\widehat{f}_{S_0}\\in {\\mathscr F}(G(F_{S_0})^{\\wedge})$ rather than $\\phi_{S_0}$.) 
In the remainder of Section~\\ref{s:aut-Plan-theorem} we mostly focus on families in the level or weight aspect, described respectively as follows:\n\n\\begin{ex}[Level aspect: varying level, fixed weight]\\label{ex:level-varies}\n Let ${\\mathfrak n}_k\\subset \\mathcal{O}_F$ be a nonzero ideal for $k\\ge 1$ such that ${\\mathbb N}({\\mathfrak n}_k)=[\\mathcal{O}_F:{\\mathfrak n}_k]$ tends to $\\infty$\n as $k\\rightarrow \\infty$.\n Take $$\\mathcal{F}_k:=\\mathcal{F}(K^{S,\\infty}({\\mathfrak n}_k),\\widehat{\\phi}_{S_0},S_1,\\xi).$$\n Then $|\\mathcal{F}_k|\\rightarrow\\infty$ as $k\\rightarrow\\infty$.\n\\end{ex}\n\n\n\n\\begin{ex}[Weight aspect: fixed level, varying weight]\\label{ex:wt-varies}\n\nIn our study of the weight aspect it is always supposed that $Z(G)=1$, so that $A_{G,\\infty}=1$ and $\\chi=1$, in order to eliminate the technical problem with the central character as the weight varies.\\footnote{Without the hypothesis that the center is trivial, one should work with a fixed central character and apply the trace formula in such a setting. Then our results and arguments in the weight aspect should remain valid without change.} Let $\\{\\xi_k\\}_{k\\ge 1}$ be a sequence of irreducible algebraic representations\nof $G_\\infty\\times_{\\mathbb R} {\\mathbb C}$ such that $m(\\xi_k)\\rightarrow \\infty$\n as $k\\rightarrow \\infty$. Take $$\\mathcal{F}_k:=\\mathcal{F}(U^{S,\\infty},\\widehat{\\phi}_{S_0},S_1,\\xi_k).$$\n Then $|\\mathcal{F}_k|\\rightarrow\\infty$ as $k\\rightarrow\\infty$.\n\n\\end{ex}\n\n\\begin{rem}\n Sarnak proposed a definition of families of automorphic representations (or automorphic $L$-functions)\n in \\cite{Sarn:family}. 
The above two examples fit in his definition.\n\n\\end{rem}\n\n\n\n\n\\subsection{Level aspect}\\label{sub:level-varies}\n\n We are in the setting of Example \\ref{ex:level-varies}.\n Recall that $\\Res_{F\/{\\mathbb Q}} G$ is assumed to be cuspidal.\n Fix $\\Xi:G\\hookrightarrow \\GL_m$ as in Proposition \\ref{p:global-integral-model} and let $B_\\Xi$ and $c_\\Xi$ be as in \\eqref{e:def-M(Xi)} and\nLemma \\ref{l:forcing-unipotent}.\nWrite ${\\mathscr L}_c(M_0)$ for the set of $F$-rational cuspidal Levi subgroups\nof $G$ containing the minimal Levi $M_0$.\n\n\\begin{thm}\\label{t:level-varies}\n Fix $\\phi_{S_0}\\in C^\\infty_c(G(F_{S_0}))$ and $\\xi$.\n Let $S_1\\subset {\\mathcal V}_F^\\infty$ be a subset where $G$ is unramified.\n Let $\\phi_{S_1}\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{S_1}))^{\\le \\kappa}$\n be such that $|\\phi_{S_1}|\\le 1$ on $G(F_{S_1})$.\n If ${\\mathscr L}_c(M_0)=\\{G\\}$ (in particular if $G$ is abelian) then $\\widehat{\\mu}_{\\mathcal{F}_k,S_1}(\\widehat{\\phi}_{S_1})=\\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\phi}_S)$.\n Otherwise\n there exist constants $A_{\\mathrm{lv}}, B_{\\mathrm{lv}}>0$ and $C_{\\mathrm{lv}}\\ge 1$ such that\n \\begin{equation} \\label{e:t:level}\\widehat{\\mu}_{\\mathcal{F}_k,S_1}(\\widehat{\\phi}_{S_1})-\\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\phi}_S)\n = O(q_{S_1}^{A_{\\mathrm{lv}}+B_{\\mathrm{lv}} \\kappa} {\\mathbb N}({\\mathfrak n})^{-C_{\\mathrm{lv}}})\\end{equation}\n as ${\\mathfrak n},\\kappa\\in {\\mathbb Z}_{\\ge 1}$, $S_1$ and $\\phi_{S_1}$ vary subject to the following conditions:\n \\begin{itemize} \\item ${\\mathbb N}({\\mathfrak n}) \\ge c_\\Xi q_{S_1}^{B_\\Xi m \\kappa}$,\n \\item no prime divisors of ${\\mathfrak n}$ are contained in $S_1$. 
\\end{itemize}\n (The implicit constant in $O(\\cdot)$ is independent of ${\\mathfrak n}$, $\\kappa$, $S_1$ and $\\phi_{S_1}$.)\n\n\\end{thm}\n\n\\begin{rem}\\label{r:level-varies}\n When $\\widehat{\\mu}^{\\mathrm{pl}}_{S_0}(\\widehat{\\phi}_{S_0})\\neq0$, \\eqref{e:t:level} is equivalent to\n$$\\widehat{\\mu}^{\\natural}_{\\mathcal{F},S_1}(\\widehat{\\phi}_{S_1})-\\widehat{\\mu}^{\\mathrm{pl}}_{S_1}(\\widehat{\\phi}_{S_1})\n = O(q_{S_1}^{A_{\\mathrm{lv}}+B_{\\mathrm{lv}} \\kappa} {\\mathbb N}({\\mathfrak n})^{-C_{\\mathrm{lv}}}).$$\n\\end{rem}\n\n\\begin{rem}\n One can choose $A_{\\mathrm{lv}}, B_{\\mathrm{lv}}, C_{\\mathrm{lv}}$ to be explicit integers. See the proof below.\n For instance $C_{\\mathrm{lv}}\\ge n_G$ for $n_G$ defined in \\S\\ref{sub:notation}.\n\\end{rem}\n\n\\begin{proof}\n Put $\\phi^{S,\\infty}:={\\mathbf{1}}_{K^{S,\\infty}({\\mathfrak n})}$. The right hand side\n of \\eqref{e:apply-Arthur} is expanded by Arthur as in \\cite[Thm 6.1]{Art89}.\n Arguing as at the start of the proof of \\cite[Thm 4.4]{Shi-Plan}, we obtain\n \\begin{equation}\\label{e:TF-level-aspect}\n\\widehat{\\mu}_{\\mathcal{F},S_1}(\\widehat{\\phi}_{S_1})-\\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\phi}_S)\n =\\sum_{M\\in {\\mathscr L}_c(M_0)\\backslash \\{G\\}} a_M \\cdot \\phi_{S_0,M}(1)\\phi_{S_1,M}(1) \\phi^{S,\\infty}_{M}(1) \\frac{\\Phi^G_M(1,\\xi)}{\\dim \\xi}\\end{equation}\n where the sum runs over proper cuspidal Levi subgroups of $G$ containing\n a fixed minimal $F$-rational Levi subgroup (see \\cite[p.539]{GKM97}\n for the reason why only cuspidal Levi subgroups contribute)\n and $a_M\\in {\\mathbb C}$ are explicit constants depending only on $M$ and $G$.\n Some further explanation of \\eqref{e:TF-level-aspect} is in order.\n Since only semisimple conjugacy classes contribute to Arthur's trace formula for each $M$,\n Lemma \\ref{l:forcing-unipotent} tells us that any contribution from non-identity elements\n vanishes. 
Note that $\\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\phi}_S)$ comes from the $M=G$ term on the right hand side.\n\n The first assertion of the theorem follows immediately from\n \\eqref{e:TF-level-aspect}. Henceforth we may assume that ${\\mathscr L}_c(M_0)\\backslash \\{G\\}\\neq \\emptyset$.\n\n Clearly $\\phi_{S_0,M}(1)$ and $\\Phi^G_M(1,\\xi)\/\\dim \\xi$ are constants.\n It was shown in Lemma \\ref{l:bounding-phi-on-S_0} that $|\\phi_{S_1,M}(1)|=O(q_{S_1}^{d_G+r_G+b_G\\kappa})$\n for $b_G> 0$ in that lemma. We take $$A_{\\mathrm{lv}}:=d_G+r_G\\quad\\mbox{and}\\quad B_{\\mathrm{lv}}:=b_G.$$\n We will be done if it is checked that $|\\phi^{S,\\infty}_{M}(1)|= O({\\mathbb N}({\\mathfrak n})^{-C_{\\mathrm{lv}}})$ for some $C_{\\mathrm{lv}}\\ge 1$.\n Let $P=MN$ be a parabolic subgroup with Levi decomposition where $M$ is as above. Then\n $$0\\le \\phi^{S,\\infty}_{M}(1)=\\int_{N({\\mathbb A}_F^{S,\\infty})} \\phi^{S,\\infty}(n)dn\n = \\prod_{v\\notin S\\atop v|{\\mathfrak n}~\\mathrm{or}~v\\in {\\rm Ram}(G)} \\operatorname{vol}(K_v(\\varpi_v^{v({\\mathfrak n})})\\cap N(F_v))$$\n $$ = \\prod_{ v\\notin S \\atop v|{\\mathfrak n}~\\mathrm{or}~v\\in {\\rm Ram}(G)} \\operatorname{vol} (N(F_v)_{x,v({\\mathfrak n})})\n = \\left( \\prod_{v|{\\mathfrak n}\\atop v\\notin S} q_v^{-v({\\mathfrak n})\\dim N}\\right)\n \\prod_{v\\in {\\rm Ram}(G)\\atop v\\notin S} \\operatorname{vol}(K_v\\cap N(F_v)).$$\n The last equality uses the standard fact about the filtration\n that $\\operatorname{vol}(N(F_v)_{x,v({\\mathfrak n})})=|\\varpi_v|^{v({\\mathfrak n})\\dim N} \\operatorname{vol}(N(F_v)_{x,0})$ and the fact \\eqref{e:meas-K-cap-N}\n that $\\operatorname{vol}(N(F_v)_{x,0}) = \\operatorname{vol}(N(F_v)\\cap K_v)=1$ when $G$ is unramified at $v$.\n Take $$C_{\\mathrm{lv}}:=\\min_{M\\in {\\mathscr L}_c(M_0)\\backslash \\{G\\}\\atop P=MN}\n (\\dim N)$$ to be the minimum dimension of the unipotent radical of a proper parabolic subgroup of $G$ with cuspidal Levi\n part. 
Then $|\\phi^{S,\\infty}_{M}(1)|\\le {\\mathbb N}({\\mathfrak n})^{-C_{\\mathrm{lv}}}\\prod_{v\\in {\\rm Ram}(G)} \\operatorname{vol}(K_v\\cap N(F_v))$\n for every $M$ in \\eqref{e:TF-level-aspect}.\n\n\\end{proof}\n\n\n\n\\subsection{Weight aspect}\\label{sub:weight-varies}\n\n We put ourselves in the setting of Example \\ref{ex:wt-varies} and exclude the uninteresting case of $G=\\{1\\}$.\n By the assumption $Z(G)=\\{1\\}$, for every $\\gamma\\neq 1\\in G(F)$\n the connected centralizer $I_\\gamma$ has a strictly smaller set of roots so that $|\\Phi_{I_\\gamma}|<|\\Phi|$.\n Our next task is to prove a similar error bound as in the last subsection.\n\n\n\n\\begin{thm}\\label{t:weight-varies} Fix $\\phi_{S_0}\\in C^\\infty_c(G(F_{S_0}))$\n and $U^{S,\\infty}\\subset G({\\mathbb A}^{S,\\infty})$.\n There exist constants $A_{\\mathrm{wt}},B_{\\mathrm{wt}}>0$ and $C_{\\mathrm{wt}}\\ge 1$ satisfying the following: for\n \\begin{itemize}\n \\item any $\\kappa\\in {\\mathbb Z}_{>0}$, \\item any finite subset $S_1\\subset {\\mathcal V}_F^\\infty$ disjoint from $S_0$ and $S_{{\\rm bad}}$\n (\\S\\ref{sub:global-bound-orb-int}) and \\item any $\\phi_{S_1}\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{S_1}))^{\\le \\kappa}$\n such that $|\\phi_{S_1}|\\le 1$ on $G(F_{S_1})$,\\end{itemize}\n $$\\widehat{\\mu}_{\\mathcal{F},S_1}(\\widehat{\\phi}_{S_1})-\\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\phi}_S)\n = O(q_{S_1}^{A_{\\mathrm{wt}}+B_{\\mathrm{wt}}\\kappa} m(\\xi)^{-C_{\\mathrm{wt}}})$$\nwhere the implicit constant in $O(\\cdot)$ is independent of $\\kappa$, $S_1$ and $\\phi_{S_1}$.\n(Equivalently, $\\widehat{\\mu}^\\natural_{\\mathcal{F},S_1}(\\widehat{\\phi}_{S_1})-\\widehat{\\mu}^{\\mathrm{pl}}_{S_1}(\\widehat{\\phi}_{S_1})\n = O(q_{S_1}^{A_{\\mathrm{wt}}+B_{\\mathrm{wt}}\\kappa} m(\\xi)^{-C_{\\mathrm{wt}}})$ if $\\widehat{\\mu}^{\\mathrm{pl}}_{S_0}(\\phi_{S_0})\\neq 0$.)\n\n\\end{thm}\n\n\\begin{rem}\n We always assume that $S_0$ and $S_1$ are disjoint. 
So the condition on $S_1$ is really that it stays away from the finite set $S_{{\\rm bad}}$. This enters the proof where a uniform bound on orbital integrals from \\S\\ref{sub:global-bound-orb-int} is applied to the places in $S_1$.\n\\end{rem}\n\n\\begin{rem}\n Again $A_{\\mathrm{wt}}, B_{\\mathrm{wt}}, C_{\\mathrm{wt}}$ can be chosen explicitly as can be seen from the proof below. For instance a choice can be made such that $C_{\\mathrm{wt}}\\ge n_G$ for $n_G$ defined in \\S\\ref{sub:notation}.\n\\end{rem}\n\n\\begin{proof}\n We can choose a sufficiently large finite set $S'_0\\supset S_0\\cup {\\rm Ram}(G)$ in the complement of $S_1\\cup S_\\infty$ such that $U^{S,\\infty}$ is a finite disjoint union of\n groups of the form $(\\prod_{v\\notin S'_0\\cup S_1\\cup S_\\infty} K_v)\\times U_{S'_0\\backslash S_0}$ for open compact subgroups\n $U_{S'_0\\backslash S_0}$ of $G({\\mathbb A}_{F,S'_0\\backslash S_0})$. By replacing $S_0$ with $S'_0$\n (and thus $S$ with $ S'_0\\coprod S_1$), we reduce the proof to the case where $U^{S,\\infty}=\\prod_{v\\notin S\\cup S_\\infty} K_v$.\n\n For an $F$-rational Levi subgroup $M$ of $G$, let $Y_M$ be as in Proposition \\ref{p:bound-number-of-conj},\n where $\\kappa$, $S_0$ and $S_1$ are as in the theorem. (So the set $Y_M$ varies as\n $\\kappa$ and $S_1$ vary.) 
Take \\eqref{e:apply-Arthur} as a starting point.\n Arthur's trace formula (\\cite[Thm 6.1]{Art89}) and the argument in the proof of \\cite[Thm 4.11]{Shi-Plan} show\n(note that our $Y_M$ contains $Y_M$ of \\cite{Shi-Plan} but could be strictly bigger):\n $$ \\widehat{\\mu}_{\\mathcal{F},S_1}(\\widehat{\\phi}_{S_1})-\\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\phi}_S)\n=\n\\sum_{\\gamma\\in Y_G\\backslash\\{1\\}} a_{G,\\gamma}\\cdot\n|\\iota^G(\\gamma)|^{-1}O^{G({\\mathbb A}^\\infty_F)}_\\gamma(\\phi^\\infty)\\frac{{\\rm tr}\\, \\xi(\\gamma)}{\\dim \\xi}\n$$\n\\begin{equation}\\label{e:main-formula-wt-varies} +\\sum_{M\\in {\\mathscr L}_c\\backslash \\{G\\}}\n\\sum_{\\gamma\\in Y_M}\na_{M,\\gamma}\\cdot|\\iota^M(\\gamma)|^{-1}O^{M({\\mathbb A}^\\infty_F)}_\\gamma(\\phi^\\infty_M)\\frac{\\Phi^G_M(\\gamma,\\xi)}{\\dim \\xi}\n\\end{equation}\nwhere $a_{M,\\gamma}$ (including $M=G$) is given by\n$$a_{M,\\gamma}=\\tau'(G)^{-1} \\frac{\\overline{\\mu}^{\\mathrm{can},{\\rm EP}}(I^M_\\gamma(F)\\backslash\nI^M_\\gamma({\\mathbb A}_F)\/A_{I^M_\\gamma,\\infty})}\n {\\overline{\\mu}^{{\\rm EP}}(I^M_\\gamma(F_\\infty)\/A_{I^M_\\gamma,\\infty})}$$\n $$\\stackrel{\\mathrm{Cor}~\\ref{c:canonical-measure}}{=}\n\\frac{\\tau(I^M_\\gamma)}{\\tau'(G)} \\frac{|\\Omega_{I^M_\\gamma}|}{|\\Omega_{I^M_\\gamma,c}|}\n\\frac{L({\\rm Mot}_{I^M_\\gamma})}{e(I^M_{\\gamma,\\infty})2^{[F:{\\mathbb Q}] r_G}} .$$\n\n Let us work with one $M$ at a time.\n Observe that clearly\n $|\\Omega_{I^M_\\gamma}|\/|\\Omega_{I^M_\\gamma,c}|\\le |\\Omega|$\n and that $\\tau(I^M_\\gamma)$ is bounded by a constant\n depending only on $G$ in view of \\eqref{e:Tamagawa} and Corollary \\ref{c:bounding-pi0-general}\n or Lemma \\ref{l:bounding-pi0}. 
\nBy Corollary \\ref{c:bound-on-L(1)-2}, there exist constants $c_2,A_2>0$ such that\n \\[\n\n |a_{M,\\gamma}|\\le c_2\n \\prod_{v\\in {\\rm Ram}(I_\\gamma^M)} q_v^{A_2}\n \\]\n\n It is convenient to define the following finite subset of ${\\mathcal V}_F^\\infty$ for each $\\gamma\\in Y_M$. We fix a maximal torus $T^M_\\gamma$ in $M$ over $\\overline{F}$ containing $\\gamma$ and write $\\Phi_{M,\\gamma}$ for the set of roots of $T^M_\\gamma$ in $M$. (A different choice of $T^M_\\gamma$ does not affect the argument.)\n $$S_{M,\\gamma}:=\\{v\\in {\\mathcal V}_F^\\infty\\backslash S: \\exists \\alpha\\in \\Phi_{M,\\gamma},\n ~\\alpha(\\gamma)\\neq 1\n ~\\mbox{and}~|1-\\alpha(\\gamma)|_v\\neq 1\\}.$$\n (If $\\gamma$ is in the center of $M(F)$ then $S_{M,\\gamma}=\\emptyset$ and $q_{S_{M,\\gamma}}=1$.)\n\n We know that $O^{M(F_v)}_\\gamma({\\mathbf{1}}_{K_{M,v}})=1$ for $v\\notin S\\cup S_{M,\\gamma}\\cup S_\\infty$\nand that $S_{M,\\gamma}\\supset {\\rm Ram}(I^M_\\gamma)$\n from \\cite[Cor 7.3]{Kot86}. According to Lemma \\ref{l:const-term-on-unram-Hecke}\n $\\phi_v={\\mathbf{1}}_{K_{v}}$ implies $\\phi_{v,M}={\\mathbf{1}}_{K_{M,v}}$. Hence\n \\begin{eqnarray}\n |a_{M,\\gamma}|&\\le& c_2\\cdot (q_{S_{M,\\gamma}})^{A_2} \\label{e:bound-a_M}\\\\\n O^{M({\\mathbb A}_F^\\infty)}_\\gamma(\\phi^\\infty_M)\n & =& O^{M(F_S)}_\\gamma(\\phi_{S,M})\\prod_{v\\in S_{M,\\gamma}}\n O^{M(F_{v})}_\\gamma({\\mathbf{1}}_{K_{M,v}}). 
\\notag\n\\end{eqnarray}\n By Theorem \\ref{t:appendix1}, there exists a constant $c(\\phi_{S_0,M})>0$\n such that\n $$O^{M(F_{S_0})}_\\gamma(\\phi_{S_0,M})\n \\le c(\\phi_{S_0,M}) \\prod_{v\\in S_0}\n D^M_v(\\gamma)^{-1\/2}, \\quad \\forall \\gamma\\in Y_M.$$\n By Theorem \\ref{t:appendeix2}, there exist $a,b,c,e_G\\in {\\mathbb R}_{\\ge 0}$ (independent of\n $\\gamma$, $S_1$, $\\kappa$ and $k$) such that\n \\begin{eqnarray}\\label{e:9.17}\n O^{M(F_{S_1})}_\\gamma(\\phi_{S_1,M})\n & \\le & q_{S_1}^{a+b\\kappa} \\prod_{v\\in S_1} D^M_v(\\gamma)^{-e_G\/2}, \\\\\n O^{M(F_v)}_\\gamma({\\mathbf{1}}_{K_{M,v}})\n & \\le & q_{v}^{c} D^M_v(\\gamma)^{-e_G\/2}, \\quad\\forall v\\in S_{M,\\gamma}. \\label{e:9.18}\n\\end{eqnarray}\n (To obtain \\eqref{e:9.17} and \\eqref{e:9.18}, apply Theorem \\ref{t:appendeix2} to $v\\in S_1$ and $v\\in S_{M,\\gamma}$.)\n\n Hence\n \\begin{eqnarray}\n O^{M({\\mathbb A}_F^\\infty)}_\\gamma(\\phi^\\infty_M)&\\le&\n c(\\phi_{S_0,M}) q_{S_1}^{a+b\\kappa} q_{S_{M,\\gamma}}^{c}\n \\left( \\prod_{v\\nmid \\infty} D^M_v(\\gamma)^{-1\/2}\\right)\n\\prod_{v\\in S_1\\cup S_{M,\\gamma}} D^M_v(\\gamma)^{(1-e_G)\/2}\\nonumber\\\\\n & = & c(\\phi_{S_0,M}) q_{S_1}^{a+b\\kappa} q_{S_{M,\\gamma}}^{c}\n \\prod_{v|\\infty} D^M_v(\\gamma)^{1\/2}\n\\prod_{v\\in S_1\\cup S_{M,\\gamma}} D^M_v(\\gamma)^{(1-e_G)\/2}\\label{e:bound-on-O^M}\n\\end{eqnarray}\n\n On the other hand there exist $\\delta_{S_0},\\delta_{\\infty}$, $\\delta_{S_1}\\ge1$ such that\n for every $\\gamma\\in Y_M$ with $\\alpha(\\gamma)\\neq 1$,\n \\begin{itemize}\n \\item $|1-\\alpha(\\gamma)|_{S_0}\\le \\delta_{S_0}$. (compactness of ${\\rm supp}\\, \\phi_{S_0}$)\n \\item $|1-\\alpha(\\gamma)|_{\\infty}\\le \\delta_{\\infty}$. (compactness of $U_\\infty$)\n \\item $|1-\\alpha(\\gamma)|_{S_1}\\le \\delta_{S_1}q_{S_1}^{B_5 \\kappa}$. 
(Lemma \\ref{l:bounding1-alpha(gamma)}; Remark \\ref{r:Lem2.18-indep} explains the independence of $B_5$ from $S_1$).\n \\end{itemize}\n (When $\\alpha(\\gamma)=1$, our convention is that $|1-\\alpha(\\gamma)|_v=1$ for every $v$\n to be consistent with the first formula of Appendix \\ref{s:app:Kottwitz}.)\n Hence, together with the product formula for $1-\\alpha(\\gamma)$,\n $$1=\\prod_v |1-\\alpha(\\gamma)|_v \\le\\delta_{S_0} \\delta_{\\infty}\n \\delta_{S_1}q_{S_1}^{B_5 \\kappa}\\prod_{v\\in S_{M,\\gamma}}|1-\\alpha(\\gamma)|_v .$$\n If $\\gamma\\in Z(M)(F)$ then $q_{S_{M,\\gamma}}=1$. Otherwise\n for each $v\\in S_{M,\\gamma}$, we may choose $\\alpha\\in \\Phi_{M,\\gamma}$ such that $|1-\\alpha(\\gamma)|_v\\neq 1$.\nSet $\\delta:=\\delta_{S_0} \\delta_{\\infty}\n \\delta_{S_1}$.\n Then $|1-\\alpha(\\gamma)|_v \\le q_v^{-1}$ for $v\\in S_{M,\\gamma}$ by Lemma \\ref{l:alpha(gamma)-is-integral}\n (which is applicable in view of the first paragraph in the current proof)\n so\n \\begin{equation}\\label{e:prod-q-S_M,gamma}\n q_{S_{M,\\gamma}}\\le \\delta q_{S_1}^{B_5 \\kappa}.\\end{equation}\n Keep assuming that $\\gamma$ is not central in $M$ and that\n $\\alpha(\\gamma)\\neq 1$. 
Again by the product formula\n $\\prod_{v\\in S_1\\cup S_{M,\\gamma}} |1-\\alpha(\\gamma)|_v\n = \\prod_{v\\in S_0\\cup S_\\infty} |1-\\alpha(\\gamma)|_v^{-1} \\ge (\\delta_{S_0}\\delta_\\infty)^{-1}$,\n thus\n \\begin{equation}\\label{e:bound-D^M}\n\\prod_{v\\in S_1\\cup S_{M,\\gamma}} D^M_v(\\gamma)^{-1}\\le \\delta_{S_0}\\delta_\\infty.\\end{equation}\n The above holds also when $\\gamma$ is central in $M$, in which case the left hand side equals 1.\n\n\n Now \\eqref{e:bound-on-O^M}, \\eqref{e:prod-q-S_M,gamma} and \\eqref{e:bound-D^M} imply\n \\begin{equation}\\label{e:bound-on-O^M-2}\n O^{M({\\mathbb A}_F^\\infty)}_\\gamma(\\phi^\\infty_M)\\le\n c(\\phi_{S_0,M}) \\delta^{a}(\\delta_{S_0}\\delta_\\infty)^{(e_G-1)\/2}\n q_{S_1}^{a+b\\kappa+ c B_5 \\kappa} \\prod_{v|\\infty}\n D^M_v(\\gamma)^{1\/2}.\\end{equation}\n\n Lemma \\ref{l:bound-for-st-ds-char} gives a bound on the stable discrete series\n character:\n \\begin{equation}\\label{e:bound-Phi^G_M}\\frac{|\\Phi^G_M(\\gamma,\\xi)|}{\\dim \\xi}\n \\le c \\frac{\\prod_{v|\\infty} D^M_v(\\gamma)^{-1\/2}}{ m(\\xi)^{|\\Phi^+|-|\\Phi^+_{I^M_\\gamma}|}}.\\end{equation}\n\n Multiplying \\eqref{e:bound-a_M}, \\eqref{e:bound-on-O^M-2} and\n \\eqref{e:bound-Phi^G_M} together (and noting $|\\iota^M(\\gamma)|\\le 1$),\n the absolute value of the summand for $\\gamma$ in \\eqref{e:main-formula-wt-varies} (including $M=G$)\n is\n$$O\\left( m(\\xi)^{-(|\\Phi^+|-|\\Phi^+_{I^M_\\gamma}|)}\nq_{S_1}^{a+b\\kappa+c B_5 \\kappa+A_2}\\right).$$\n All in all, $|\\widehat{\\mu}_{\\mathcal{F},S}(\\widehat{\\phi}_S)-\\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\phi}_S)|$ is\n$$ \\left(|Y_G|-1 + \\sum_{M\\in {\\mathscr L}_c\\backslash \\{G\\}} |Y_M|\\right)\n O\\left( m(\\xi)^{-(|\\Phi^+|-|\\Phi^+_{I^M_\\gamma}|)}\n q_{S_1}^{a+b\\kappa+ c B_5 \\kappa+A_2}\\right).$$\n Set (excluding $\\gamma=1$ in the second minimum when $M=G$) $$C_{\\mathrm{wt}}:= \\min_{M\\in {\\mathscr L}_c(M_0)}\n \\min_{\\gamma\\in M(F)\\atop 
\\mathrm{ell.~in~}M(F_\\infty)}\n (|\\Phi^+|-|\\Phi^+_{I^M_\\gamma}|).$$\n Note that\n $C_{\\mathrm{wt}}$ depends only on $G$. It is automatic that\n $|\\Phi^+|-|\\Phi^+_{I^M_\\gamma}|\\ge 1$ on $Y_G\\backslash \\{1\\}$ and $Y_M$ for $M\\in {\\mathscr L}_c(M_0)\\backslash \\{G\\}$.\nThe proof is concluded by invoking Corollary \\ref{c:bound-number-of-conj} (applied to $Y_G$ and $Y_M$)\nwith the choice $$A_{\\mathrm{wt}}:=a+A_2+A_6,\\quad B_{\\mathrm{wt}}:=b+cB_5+B_8.$$\n\n\\end{proof}\n\n\n\n\n\n\n\n\\subsection{Automorphic Plancherel density theorem}\\label{sub:Plan-density}\n\n\n In the situation of either Example \\ref{ex:level-varies} or \\ref{ex:wt-varies},\n let us write $\\mathcal{F}_k(\\phi_{S_0})$ for $\\mathcal{F}_k$ in order to emphasize the dependence on $\\phi_{S_0}$.\nTake $S_1=\\emptyset$ so that $S=S_0$. Then $\\widehat{\\mu}_{\\mathcal{F}_k(\\phi_{S}),\\emptyset}$ may be viewed\nas a complex number (as it is a measure on a point).\nIn fact we can consider $\\mathcal{F}_k(\\widehat{f}_{S})$, a family whose local condition at $S$\nis prescribed by $\\widehat{f}_S\\in {\\mathscr F}(G(F_S)^{\\wedge})$, even if\n$\\widehat{f}_{S}$ does not arise from any $\\phi_{S}$ in $C^\\infty_c(G(F_S))$.\nPut $\\widehat{\\mu}_k(\\widehat{f}_{S}):=\\widehat{\\mu}_{\\mathcal{F}_k(\\widehat{f}_{S}),\\emptyset}\\in {\\mathbb C}$.\nWe recover the automorphic Plancherel density theorem (\\cite[Thm 4.3, Thm 4.7]{Shi-Plan}).\n\n\\begin{cor}\\label{c:Plan-density} Consider families $\\mathcal{F}_k$ in level or weight aspect as above. In level aspect assume that the highest weight of $\\xi$ is regular. 
(No assumption is necessary in weight aspect.)\n For any $\\widehat{f}_S\\in {\\mathscr F}(G(F_S)^{\\wedge})$,\n $$\\lim_{k\\rightarrow\\infty} \\widehat{\\mu}_k(\\widehat{f}_{S})=\\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{f}_S).$$\n\n\\end{cor}\n\n\\begin{proof}\n Theorems \\ref{t:level-varies} and \\ref{t:weight-varies} tell us that\n \\begin{equation}\\label{e:aut-plan-phi}\\lim_{k\\rightarrow\\infty} \\widehat{\\mu}_k(\\widehat{\\phi}_S)=\\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\phi}_S).\\end{equation}\n (Even though there was a condition on $S_1$, note that there was no condition on $S_0$ in either theorem.)\n\n We would like to improve \\eqref{e:aut-plan-phi} to allow more general test functions.\n What needs to be shown (cf. \\eqref{e:extending-at-v_j-0} below) is that for every $\\epsilon>0$, $$\\limsup_{k\\rightarrow \\infty} |\\widehat{\\mu}_{k}(\\widehat{f}_{S})- \\widehat{\\mu}^{\\mathrm{pl}}_{S}(\\widehat{f}_{S})|\\le4\\epsilon.$$\n Thanks to Proposition \\ref{p:density} there exist $\\phi_{S},\\psi_{S}\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{S}))$ such that $|\\widehat{f}_{S}-\\widehat{\\phi}_{S}|\\le \\widehat{\\psi}_{S}$ on $G(F_{S})^{\\wedge}$ and $\\widehat{\\mu}^{\\mathrm{pl}}_{S}(\\widehat{\\psi}_{S})\\le \\epsilon$. Then (cf. 
\\eqref{e:extending-at-v_j} below)\n $$\n\\begin{aligned}\n|\\widehat{\\mu}_{k}(\\widehat{f}_{S})- \\widehat{\\mu}^{\\mathrm{pl}}_{S}(\\widehat{f}_{S})| & \\le |\\widehat{\\mu}_k(\\widehat{f}_{S}-\\widehat{\\phi}_{S})|\\\\ & + |\\widehat{\\mu}_k(\\widehat{\\phi}_{S})- \\widehat{\\mu}^{\\mathrm{pl}}_{S}(\\widehat{\\phi}_{S})| + |\\widehat{\\mu}^{\\mathrm{pl}}_{S}(\\widehat{\\phi}_{S}-\\widehat{f}_{S})|.\n\\end{aligned}\n $$\n Now $|\\widehat{\\mu}^{\\mathrm{pl}}_{S}(\\widehat{f}_{S}-\\widehat{\\phi}_{S})|\\le |\\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\psi}_S)|\\le \\epsilon$, and $|\\widehat{\\mu}_k(\\widehat{\\phi}_{S})- \\widehat{\\mu}^{\\mathrm{pl}}_{S}(\\widehat{\\phi}_{S})|\\le \\epsilon$ for $k\\gg0$ by \\eqref{e:aut-plan-phi}.\n Finally $\\widehat{\\mu}_k$ is a positive measure since the highest weight of $\\xi$ is regular, and we get\n $$\n |\\widehat{\\mu}_k(\\widehat{f}_{S}-\\widehat{\\phi}_{S})|\\le \\widehat{\\mu}_k(|\\widehat{f}_{S}-\\widehat{\\phi}_{S}|)\\le \\widehat{\\mu}_k(\\widehat{\\psi}_S).\n $$\n (To see the positivity of $\\widehat{\\mu}_k$, notice that $\\widehat{\\mu}_k(\\widehat{f}_{S}-\\widehat{\\phi}_{S})$ is unraveled via \\eqref{e:a(pi)-general} and \\eqref{e:defn-of-mu} as a sum of $(\\widehat{f}_{S}-\\widehat{\\phi}_{S})(\\pi)$ with nonnegative coefficients. This is because $\\chi_{{\\rm EP}}(\\pi_\\infty\\otimes \\xi)$ is either 0 or $(-1)^{q(G)}$ when $\\xi$ has regular highest weight, cf. \\S\\ref{sub:EP}.)\n According to \\eqref{e:aut-plan-phi}, $\\lim_{k\\rightarrow \\infty}\\widehat{\\mu}_k(\\widehat{\\psi}_S)=\\widehat{\\mu}^{\\mathrm{pl}}_S(\\widehat{\\psi}_S)\\le \\epsilon$. In particular $|\\widehat{\\mu}_k(\\widehat{f}_{S}-\\widehat{\\phi}_{S})|\\le 2\\epsilon$ for $k\\gg0$. 
The proof is complete.\n\\end{proof}\n\n\n\n\n\\begin{rem}\n If $G$ is anisotropic modulo center over $F$ so that the trace formula for compact quotients is available, or if a further local assumption at finite places is imposed so as to avail the simple trace formula, the regularity condition on $\\xi$ can be removed by an argument of De George-Wallach and Clozel (\\cite{dGW78}, \\cite{Clo86}). The main point is to show that the contribution of ($\\xi$-cohomological) non-tempered representations at $\\infty$ to the trace formula is negligible compared to the contribution of discrete series. Their argument requires some freedom of choice of test functions at $\\infty$, so it breaks down in the general case since one has to deal with new terms in the trace formula which disappear when Euler-Poincar\\'{e} functions are used at $\\infty$. In other words, it seems necessary to prove analytic estimates on more terms (if not all terms) in the trace formula than we did in order to get rid of the assumption on $\\xi$. (This remark also applies to the same condition on $\\xi$ in \\S\\ref{sub:ST-theorem} and \\S\\ref{sub:general-functions-S_0} for level aspect families.) 
We may return to this issue in future work.\n\\end{rem}\n\n\n\\begin{rem}\n In the case of level aspect families, \\cite[Thm 4.3]{Shi-Plan} assumes that\n the level subgroups form a chain of decreasing groups whose intersection is the trivial group.\n The above corollary deals with some new cases as it assumes only that ${\\mathbb N}({\\mathfrak n}_k)\\rightarrow \\infty$.\n\n\\end{rem}\n\n\\begin{cor}\\label{c:estimate-F_k}\n Keep assuming that $S_1=\\emptyset$.\n Let $(U_k^{S,\\infty},\\xi_k)=(K^{S,\\infty}({\\mathfrak n}_k),\\xi)$ or $(U^{S,\\infty},\\xi_k)$\nin Example \\ref{ex:level-varies} or \\ref{ex:wt-varies}, respectively, but prescribe local conditions at $S$\nby $\\widehat{f}_{S}$ rather than $\\phi_{S}$.\nThen $$\\lim_{k\\rightarrow\\infty}\\frac{\\mu^{\\mathrm{can}}(U_k^{S,\\infty})}{\\tau'(G)\\dim \\xi_k} |\\mathcal{F}_k| = \\widehat{\\mu}^{\\mathrm{pl}}_{S}(\\widehat{f}_{S}).$$\n\\end{cor}\n\n\\begin{proof}\n The corollary results from Corollary \\ref{c:Plan-density} since\n$$ \\frac{\\mu^{\\mathrm{can}}(U_k^{S,\\infty})}{\\tau'(G)\\dim \\xi_k} |\\mathcal{F}_k|=\n \\frac{\\mu^{\\mathrm{can}}(U_k^{S,\\infty})}{\\tau'(G)\\dim \\xi_k}\\sum_{\\pi\\in {\\mathcal{AR}}_{{\\rm disc},\\chi_k}(G)} a_{\\mathcal{F}_k}(\\pi) =\n \\widehat{\\mu}_{\\mathcal{F}_k,\\emptyset}(\\widehat{f}_S).$$\n\\end{proof}\n\n\\subsection{Application to the Sato-Tate conjecture for families}\\label{sub:ST-theorem}\n\n As an application of Theorems \\ref{t:level-varies} and \\ref{t:weight-varies},\n we are about to fulfill the promise of \\S\\ref{sub:ST-families}\n by showing that the Satake parameters in the automorphic families\n $\\{\\mathcal{F}_k\\}$ are equidistributed\n according to the Sato-Tate measure in a suitable sense\n (cf. 
Conjecture \\ref{c:ST-families}).\n\n\n The notation and convention of \\S\\ref{s:Sato-Tate} are retained here.\n Let $\\theta\\in {\\mathscr C}(\\Gamma_1)$ and $\\widehat{f}\\in \\mathcal{F}(\\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta})$.\n For each $v\\in {\\mathcal V}_F(\\theta)$,\n the image of $\\widehat{f}$ in $\\mathcal{F}(G(F_v)^{\\wedge,\\mathrm{ur}})$ via \\eqref{e:Sauvageot}\n will be denoted $\\widehat{f}_v$.\n\n\n\\begin{thm}\\label{t:Sato-Tate-level} (level aspect)\n\n Pick any $\\theta\\in {\\mathscr C}(\\Gamma_1)$ and let $\\{v_j\\}_{j\\ge 1}$ be a sequence in ${\\mathcal V}_F(\\theta)$ such that\n $q_{v_j}\\rightarrow \\infty$ as $j\\rightarrow \\infty$. Suppose that \\begin{itemize}\\item $\\widehat{\\mu}^{\\mathrm{pl}}_{S_0}(\\widehat{\\phi}_{S_0})\\neq 0$ and\n \\item $\\xi$ has regular highest weight. \\end{itemize}\n Then for every $\\widehat{f}\\in \\mathcal{F}(\\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta})$,\n $$\\lim_{(j,k)\\rightarrow \\infty} \\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j})\n = \\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}(\\widehat{f})$$\n where the limit is taken over $(j,k)$ subject to the following conditions:\n \\begin{itemize}\n \\item ${\\mathbb N}({\\mathfrak n}_k) q_{v_j}^{-B_\\Xi m \\kappa}\\ge c_\\Xi^{-1}$,\n \\item $v_j\\nmid {\\mathfrak n}_k$,\n \\item $q_{v_j}^{N} {\\mathbb N}({\\mathfrak n}_k)^{-1}\\rightarrow 0$ for all $N>0$.\n \\end{itemize}\n\n\\end{thm}\n\n\\begin{proof}\n Fix $\\widehat{f}$. We are done if $\\limsup_{(j,k)\\rightarrow \\infty} |\\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j})- \\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}(\\widehat{f})|\\le 4\\epsilon$ for every $\\epsilon>0$. By Proposition \\ref{p:lim-of-Plancherel},\n $|\\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{f}_{v_j})-\\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}(\\widehat{f})|\\le\\epsilon$ for $j\\gg0$. 
So it is enough to show that\n \\begin{equation}\\label{e:extending-at-v_j-0}\\limsup_{(j,k)\\rightarrow \\infty} |\\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j})- \\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{f}_{v_j})|\\le3\\epsilon.\\end{equation}\n\n\t For every $j\\ge1$, Proposition \\ref{p:unr-density} allows us to find $\\phi_{v_j},\\psi_{v_j}\\in {\\mathcal{H}}^{\\mathrm{ur}}(G(F_{v_j}))^{\\le \\kappa}$ such that $|\\widehat{f}_{v_j}-\\widehat{\\phi}_{v_j}|\\le \\widehat{\\psi}_{v_j}$ on $G(F_{v_j})^{\\wedge}$ and $\\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{\\psi}_{v_j})\\le \\epsilon$.\n For each $j\\ge 1$,\n \\begin{equation}\\label{e:extending-at-v_j}\n\\begin{aligned}\n|\\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j})- \\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{f}_{v_j})| & \\le |\\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j}-\\widehat{\\phi}_{v_j})|\\\\ & + |\\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,v_j}(\\widehat{\\phi}_{v_j})- \\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{\\phi}_{v_j})| + |\\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{\\phi}_{v_j}-\\widehat{f}_{v_j})|.\n\\end{aligned}\n\\end{equation}\n Since $\\widehat{\\mu}^{\\mathrm{pl}}_{v_j}$ is a positive measure,\n $$|\\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{\\phi}_{v_j}-\\widehat{f}_{v_j})|\\le \\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(|\\widehat{\\phi}_{v_j}-\\widehat{f}_{v_j}|)\n \\le \\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{\\psi}_{v_j})\\le \\epsilon.$$\n Theorem \\ref{t:level-varies} and the assumptions of the theorem imply that for sufficiently large $(j,k)$,\n$|\\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,v_j}(\\widehat{\\phi}_{v_j})- \\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{\\phi}_{v_j})|\\le \\epsilon$. 
So we will be done if for sufficiently large $(j,k)$,\n \\begin{equation}\\label{e:extending-at-v_j-2}\n |\\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j}-\\widehat{\\phi}_{v_j})| \\le \\epsilon.\n\\end{equation}\n Arguing as in the proof of Corollary \\ref{c:Plan-density} we deduce the following: when $\\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j}-\\widehat{\\phi}_{v_j})$ is unraveled as a sum over $\\pi$ (cf. \\eqref{e:a(pi)-general} and \\eqref{e:defn-of-mu}), each summand is $\\widehat{\\phi}_{S_0}(\\pi_{S_0})(\\widehat{f}_{v_j}-\\widehat{\\phi}_{v_j})(\\pi_{v_j})$ times a nonnegative real number. (This uses the regularity assumption on $\\xi$.) Certainly the absolute value of the sum does not get smaller when every summand is replaced with (something greater than or equal to) its absolute value, i.e.\n $$|\\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j}-\\widehat{\\phi}_{v_j})|\n \\le \\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k(|\\widehat{\\phi}_{S_0}|),v_j}(|\\widehat{f}_{v_j}-\\widehat{\\phi}_{v_j}|)\n \\le \\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k(|\\widehat{\\phi}_{S_0}|),v_j}(\\widehat{\\psi}_{v_j}).$$\n Now choose $\\phi'_{S_0}\\in C^\\infty_c(G(F_{S_0}))$ according to Lemma \\ref{l:bounding-function-by-pos-function} so that $|\\widehat{\\phi}_{S_0}(\\pi_{S_0})|\\le \\widehat{\\phi'}_{S_0}(\\pi_{S_0})$ for every $\\pi_{S_0}\\in G(F_{S_0})^{\\wedge}$. 
Then\n$$\\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k(|\\widehat{\\phi}_{S_0}|),v_j}(\\widehat{\\psi}_{v_j})\\le \\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k(\\widehat{\\phi'}_{S_0}),v_j}(\\widehat{\\psi}_{v_j}).$$\nTheorem \\ref{t:level-varies} applied to $\\widehat{\\psi}_{v_j}$ and the inequality $\\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{\\psi}_{v_j})\\le \\epsilon$ imply that $$\\limsup_{(j,k)\\rightarrow\\infty} \\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k(\\widehat{\\phi'}_{S_0}),v_j}(\\widehat{\\psi}_{v_j})\\le \\epsilon.$$ This concludes the proof of \\eqref{e:extending-at-v_j-2}, thus also of \\eqref{e:extending-at-v_j-0}.\n\n\n\\end{proof}\n\n\n\n\\begin{thm}\\label{t:Sato-Tate-weight} (weight aspect) Let $\\theta\\in {\\mathscr C}(\\Gamma_1)$ and $\\phi_{S_0}\\in C^\\infty_c(G(F_{S_0}))$. Suppose that $\\{v_j\\}_{j\\ge 1}$ is a sequence in ${\\mathcal V}_F(\\theta)$ such that $q_{v_j}\\rightarrow \\infty$ as $j\\rightarrow \\infty$ and that $\\widehat{\\mu}^{\\mathrm{pl}}_{S_0}(\\widehat \\phi_{S_0})\\neq 0$.\n Then for every $\\widehat{f}\\in \\mathcal{F}(\\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta})$,\n $$\\lim_{(j,k)\\rightarrow \\infty} \\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j})\n = \\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}(\\widehat{f})$$\n if $q_{v_j}^{N} m(\\xi_k)^{-1}\\rightarrow 0$ as $k\\rightarrow \\infty$ for every integer $N\\ge1$.\n\n\\end{thm}\n\n\\begin{proof}\n Same as above, except that Theorem \\ref{t:weight-varies} is used\n instead of Theorem \\ref{t:level-varies}.\n\\end{proof}\n\n\\begin{rem}\n As we have mentioned in \\S\\ref{sub:ST-families},\n Theorems \\ref{t:Sato-Tate-level} and \\ref{t:Sato-Tate-weight}\n indicate that $\\{\\mathcal{F}_k\\}_{k\\ge1}$ are ``general''\n families of automorphic representations in the sense of\n Conjecture \\ref{c:ST-families}.\n\\end{rem}\n\n\\begin{cor}\n In the setting of Theorem \\ref{t:Sato-Tate-level} or \\ref{t:Sato-Tate-weight},\n suppose in addition that $|\\mathcal{F}_k|\\neq 
0$ for all $k\\ge 1$. Then\n $$\\lim_{(j,k)\\rightarrow \\infty} \\widehat{\\mu}^{\\mathrm{count}}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j})\n = \\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}(\\widehat{f}).$$\n\\end{cor}\n\\begin{proof}\n Follows from Corollary \\ref{c:estimate-F_k} and the two preceding theorems (cf. Remark \\ref{r:|F|}).\n\\end{proof}\n\n\\begin{rem}\n The assumption that $|\\mathcal{F}_k|\\neq 0$ is almost automatically satisfied.\n Corollary \\ref{c:estimate-F_k} and the assumption that $\\widehat{\\mu}^{\\mathrm{pl}}_{S_0}(\\widehat{\\phi}_{S_0})\\neq 0$ imply that\n $|\\mathcal{F}_k|\\neq 0$ for any sufficiently large $k$.\n\n\\end{rem}\n\n\n\\subsection{More general test functions at $S_0$}\\label{sub:general-functions-S_0}\n\n So far we worked primarily with families of Examples \\ref{ex:level-varies}\n and \\ref{ex:wt-varies}.\n We wish to extend Theorems \\ref{t:Sato-Tate-level}\n and \\ref{t:Sato-Tate-weight} when the local condition at $S_0$\nis given by $\\widehat{f}_{S_0}$,\nwhich may not be of the form $\\widehat{\\phi}_{S_0}$\nfor any $\\phi_{S_0}\\in C^\\infty_c(G(F_{S_0}))$\n(cf. 
Example \\ref{ex:example-for-f_S} and Remark \\ref{r:why-S_0}).\n\n\n\n\\begin{cor}\\label{c:extended} Let $\\theta\\in {\\mathscr C}(\\Gamma_1)$ and let $\\{v_j\\}_{j\\ge 1}$ be a sequence of places in ${\\mathcal V}_F(\\theta)$ such that\n $q_{v_j}\\rightarrow\\infty$ as $j\\rightarrow\\infty$.\n Consider $\\widehat{\\mu}_{\\mathcal{F}_k,v_j}$ where $$\\mathcal{F}_k=\\left\\{ \\begin{array}{cl}\n \\mathcal{F}(K^{S,\\infty}({\\mathfrak n}_k),\\widehat{f}_{S_0},v_j,\\xi) & \\mathrm{level~aspect,~or}\\\\\n \\mathcal{F}(U^{S,\\infty},\\widehat{f}_{S_0}, v_j,\\xi_k) & \\mathrm{weight~aspect}\n \\end{array}\\right.$$\n satisfying the conditions of Theorem \\ref{t:Sato-Tate-level} or\n Theorem \\ref{t:Sato-Tate-weight}, respectively.\n Then $$\\lim_{(j,k)\\rightarrow \\infty} \\widehat{\\mu}^{\\natural}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j})\n = \\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}(\\widehat{f})$$\n where the limit is taken as in Theorem \\ref{t:Sato-Tate-level} (resp.\n Theorem \\ref{t:Sato-Tate-weight}).\n\\end{cor}\n\n\\begin{proof}\n\n\n\n The proof is reduced to the case of $\\widehat{\\phi}$ and $\\widehat{\\phi}_{v_j}$ in place of\n$\\widehat{f}$ and $\\widehat{f}_{v_j}$, as in the proof of Theorem\n\\ref{t:Sato-Tate-level}. We can decompose $\\widehat{f}=\\widehat{f}^+ +\\widehat{f}^-$ with\n $\\widehat{f}^+,\\widehat{f}^-\\in \\mathcal{F}(\\widehat{T}_{c,\\theta}\/\\Omega_{c,\\theta})$ such that $\\widehat{f}^+$ and $\\widehat{f}^-$ are nonnegative everywhere.\n The corollary for $\\widehat{f}$ is proved as soon as it is proved for $\\widehat{f}^+$ and $\\widehat{f}^-$. 
Thus we may assume that $\\widehat{f}\\ge 0$ from now on.\n\n Fix any choice of $\\epsilon>0$.\n Proposition \\ref{p:density} ensures the existence of\n $\\phi_{S_0},\\psi_{S_0}\\in C^\\infty_c(G(F_{S_0}))$ such that\n $\\widehat{\\mu}^{\\mathrm{pl}}_{S_0}(\\widehat{\\psi}_{S_0})\\le \\epsilon$ and\n $|\\widehat{f}_{S_0}(\\pi_{S_0})-\\widehat{\\phi}_{S_0}(\\pi_{S_0})|\\le \\widehat{\\psi}_{S_0}(\\pi_{S_0})$\n for all $\\pi_{S_0}\\in G(F_{S_0})^\\wedge$. Of course we can guarantee in addition that $\\widehat{\\mu}^{\\mathrm{pl}}_{S_0}(\\widehat{\\phi}_{S_0})\\neq 0$. Put\n$\\mathcal{F}_k(\\widehat{\\phi}_{S_0})=\\mathcal{F}(K^{S,\\infty}({\\mathfrak n}_k),\\widehat{\\phi}_{S_0},\n v_j,\\xi)$ (resp. $\\mathcal{F}_k(\\widehat{\\phi}_{S_0})=\\mathcal{F}(U^{S,\\infty},\\widehat{\\phi}_{S_0},\n v_j,\\xi_k)$). Likewise we define $\\mathcal{F}_k(\\widehat{\\psi}_{S_0})$ and so on. Then (cf. a similar step in the proof of Theorem \\ref{t:Sato-Tate-level})\n\\begin{equation*}\\begin{aligned}\n| \\widehat{\\mu}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j})-\\widehat{\\mu}^{\\mathrm{pl}}_{S_0\\cup\\{v_j\\}}(\\widehat{f}_{S_0}\\widehat{f}_{v_j})|&\\le |\\widehat{\\mu}_{\\mathcal{F}_k(\\widehat{\\phi}_{S_0}),v_j}(\\widehat{f}_{v_j})-\\widehat{\\mu}^{\\mathrm{pl}}_{S_0\\cup\\{v_j\\}}(\\widehat{\\phi}_{S_0}\\widehat{f}_{v_j})| \\\\\n& + \\widehat{\\mu}_{\\mathcal{F}_k(|\\widehat{f}_{S_0}-\\widehat{\\phi}_{S_0}|)}(\\widehat{f}_{v_j})\n + \\widehat{\\mu}^{\\mathrm{pl}}_{S_0\\cup\\{v_j\\}}(|\\widehat{f}_{S_0}-\\widehat{\\phi}_{S_0}|\\widehat{f}_{v_j}).\n \\end{aligned}\n \\end{equation*}\n The first term on the right side tends to 0 as $(j,k)\\rightarrow \\infty$ by Theorems \\ref{t:Sato-Tate-level} and \\ref{t:Sato-Tate-weight}. The last term is bounded by $\\widehat{\\mu}^{\\mathrm{pl}}_{S_0\\cup\\{v_j\\}}(\\widehat{\\psi}_{S_0}\\widehat{f}_{v_j})\\le \\epsilon\\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{f}_{v_j})$ using the fact that $\\widehat{\\mu}^{\\mathrm{pl}}_{S_0}$ is a positive measure. 
In order to bound the second term, recall that we are either in the weight aspect, or in the level aspect with regular highest weight for $\\xi$. Then $a_{\\mathcal{F}_k(|\\widehat{f}_{S_0}-\\widehat{\\phi}_{S_0}|)}(\\pi)$ is a nonnegative multiple of $|\\widehat{f}_{S_0}(\\pi_{S_0})-\\widehat{\\phi}_{S_0}(\\pi_{S_0})|$ as in the proof of Theorem \\ref{t:Sato-Tate-level}. Thus\n $$|\\widehat{\\mu}_{\\mathcal{F}_k(|\\widehat{f}_{S_0}-\\widehat{\\phi}_{S_0}|)}(\\widehat{f}_{v_j})|\\le\\widehat{\\mu}_{\\mathcal{F}_k(\\widehat{\\psi}_{S_0})}\n (\\widehat{f}_{v_j}),$$\n and the right hand side tends to 0 as $(j,k)\\rightarrow \\infty$ again by Theorems \\ref{t:Sato-Tate-level} and \\ref{t:Sato-Tate-weight}.\n Hence we have shown that\n $$\\limsup_{(j,k)\\rightarrow\\infty} | \\widehat{\\mu}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j})-\\widehat{\\mu}^{\\mathrm{pl}}_{S_0\\cup\\{v_j\\}}(\\widehat{f}_{S_0}\\widehat{f}_{v_j})| \\le \\epsilon\\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{f}_{v_j}).$$\n Since $\\lim\\limits_{j\\rightarrow\\infty}\\widehat{\\mu}^{\\mathrm{pl}}_{v_j}(\\widehat{f}_{v_j})=\\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}(\\widehat{f})$ and we are free to choose $\\epsilon>0$,\n we deduce that $\\lim\\limits_{(j,k)\\rightarrow\\infty} \\widehat{\\mu}_{\\mathcal{F}_k,v_j}(\\widehat{f}_{v_j})=\\widehat{\\mu}^{\\mathrm{pl}}_{S_0}(\\widehat{f}_{S_0})\\widehat{\\mu}^{\\mathrm{ST}}_{\\theta}(\\widehat{f})$.\n\n\\end{proof}\n\n\\begin{rem}\n It would be desirable to improve Theorems \\ref{t:level-varies} and \\ref{t:weight-varies} similarly\n by prescribing conditions at $S_0$ in terms of $\\widehat{f}_{S_0}$ rather than the less general\n $\\widehat{\\phi}_{S_0}$. 
Unfortunately the argument proving Corollary \\ref{c:extended} does not carry over.\n For instance in the case of Theorem \\ref{t:level-varies}, one should know in addition that\n the multiplicative constant implicit in $O(q_{S_1}^{A_{\\mathrm{lv}}+B_{\\mathrm{lv}} \\kappa} {\\mathbb N}({\\mathfrak n}_k)^{-C_{\\mathrm{lv}}})$\n remains bounded as a sequence $\\widehat{\\phi}_{S_0}$\n approaches $\\widehat{f}_{S_0}$.\n\\end{rem}\n\n\n\n\n\\section{Langlands functoriality}\\label{s:langlands}\n\n\n Let $r:{}^L G\\rightarrow \\GL_d({\\mathbb C})$ be a representation of ${}^L G$. Let $\\pi\\in {\\mathcal{AR}}_{{\\rm disc},\\chi}(G)$ be\nsuch that $\\pi_v \\in \\Pi_{{\\rm disc}}(\\xi_v^\\vee)$ for each $v|\\infty$\n (recall the notation from \\S\\ref{sub:st-disc} and \\S\\ref{sub:aut-rep}).\nThe Langlands correspondence for $G(F_v)$ (\\cite{Lan88}) associates an $L$-parameter $\\varphi_{\\xi^\\vee_v}:W_{\\mathbb R}\\rightarrow {}^L G$\nto the $L$-packet $\\Pi_{{\\rm disc}}(\\xi_v^\\vee)$, cf. \\S\\ref{sub:st-disc}.\n The following asserts the existence\nof the functorial lift of $\\pi$ under $r$ as predicted by the Langlands functoriality principle.\n\n\\begin{hypo}\\label{hypo:functorial-lift}\n There exists an automorphic representation $\\Pi$ of $\\GL_d({\\mathbb A}_F)$ such that\n\\begin{enumerate}\n\\item $\\Pi$ is isobaric,\n\\item $\\Pi_v=r_*(\\pi_v)$ (defined in \\eqref{e:r_*}) when $G$, $r$ and $\\pi$ are unramified at $v$,\n\\item $\\Pi_v$ corresponds to $r\\varphi_{\\xi^\\vee_v}$ via the Langlands correspondence for $\\GL_d(F_v)$\nfor all $v|\\infty$.\n\\end{enumerate}\n\\end{hypo}\n\n If $\\Pi$ as above exists then it is uniquely determined by (i) and (ii) thanks to strong multiplicity\n one. 
Moreover\n \\begin{lem}\\label{lem:tempered}\n Hypothesis \\ref{hypo:functorial-lift}.(iii) implies that\n $\\Pi_v$ is tempered for all $v|\\infty$.\n\\end{lem}\n\n\\begin{proof}\n Recall the following general fact from \\cite[\\S3, (vi)]{Lan88}:\n Let $\\varphi$ be an $L$-parameter for a real reductive group and $\\Pi(\\varphi)$\n its corresponding $L$-packet. Then $\\varphi$ has relatively compact image if and only if\n $\\Pi(\\varphi)$ contains a tempered representation if and only if $\\Pi(\\varphi)$\n contains only tempered representations. In our case\n this implies that $\\varphi_{\\xi^\\vee_v}$ has relatively compact image for every $v|\\infty$, and the continuity\n of $r$ shows that the image of $r\\varphi_{\\xi^\\vee_v}$ is also relatively compact. The lemma follows.\n\\end{proof}\n\nAs before let $(\\widehat{B},\\widehat{T},\\{X_\\alpha\\}_{\\alpha\\in \\Delta^\\vee})$ denote the ${\\rm Gal}(\\overline{F}\/F)$-invariant\n splitting datum for $\\widehat{G}$.\n Recall that $\\lambda_{\\xi_v^\\vee}\\in X^*(\\widehat{T})^+$ designates the highest weight for\n $\\xi_v^\\vee$.\n Then $\\varphi_{\\xi^\\vee_v}|_{W_{\\mathbb C}}$ is described as\n $$ \\varphi_{\\xi^\\vee_v}(z)=\\left((z\/\\overline{z})^{\\rho+\\lambda_{\\xi_v^\\vee}},z\\right)\n \\in \\widehat{G}\\times W_{\\mathbb C},\\quad \\forall z\\in W_{\\mathbb C}={\\mathbb C}^\\times.$$\n It is possible to extend $\\varphi_{\\xi^\\vee_v}|_{W_{\\mathbb C}}$ to the whole of $W_{\\mathbb R}$ but this does not concern us.\n (The interested reader may consult pp.183-184 of \\cite{Kot90} for instance.)\n Let $\\widehat{\\mathbb{T}}$ be a maximal torus of $\\GL_d({\\mathbb C})$ containing the image\n $r(\\widehat{T})$, and $\\widehat{\\mathbb{B}}$ a Borel subgroup containing $\\widehat{\\mathbb{T}}$.\n Write $r|_{\\widehat{G}}=\\oplus_{i\\in I} r_i$ as a sum of irreducible $\\widehat{G}$-representations.\n For each $i\\in I$, denote by $\\lambda(r_i)\\in X^*(\\widehat{T})$ the $\\widehat{B}$-positive\n highest 
weight for $r_i$. Write\n$\\lambda(r_i)=\\lambda_0(r_i)+\\sum_{\\alpha\\in \\Delta} a(r_i,\\alpha)\\cdot \\alpha^\\vee$ for\n$\\lambda_0(r_i)\\in X_*(Z(G))_{ {\\mathbb Q}}$ and $a(r_i,\\alpha)\\in {\\mathbb Q}_{\\ge 0}$.\n Put $|\\lambda(r_i)|:=\\sum_{\\alpha\\in \\Delta} a(r_i,\\alpha)$ and\n$$M(\\xi_v):=\\max_{\\alpha\\in \\Delta} \\langle \\alpha,\\lambda_{\\xi^\\vee_v}\\rangle,~\nM(r):=\\max_{i\\in I} |\\lambda(r_i)|.$$\nSimilarly define $m(\\xi_v)$ and $m(r)$ by using minima in place of maxima.\nWe are interested in the case where $\\lambda_0(r_i)$ is trivial for every $i\\in I$.\nThis is automatically true if $Z(G)$ is finite. (Recall that we consistently assume $Z(G)=1$\nin the weight aspect.)\n\n\n\\begin{lemma}\\label{l:Cpi} Suppose that $\\lambda_0(r_i)$ is trivial for every $i\\in I$.\n Hypothesis \\ref{hypo:functorial-lift}.(iii) implies that for each $v|\\infty$,\n $$ (2+m(r)m(\\xi_v))^{|I|}\\le C(\\Pi_v)\\le (3+2M(r)M(\\xi_v))^{d}.$$\nIn particular if $Z(G)$ is finite, then the following holds for any fixed $L$-morphism $r$.\n\\begin{equation*}\n 1+ m(\\xi_v)\n \\ll_r C(\\Pi_v) \\ll_r\n M(\\xi_v)^{d}\n\\end{equation*}\n\\end{lemma}\n\n\\begin{proof}\n First we recall a general fact about archimedean $L$-factors.\n Let $\\varphi:W_{\\mathbb R}\\rightarrow \\GL_N({\\mathbb C})$ be a tempered $L$-parameter and decompose\n $\\varphi|_{W_{\\mathbb C}}$ into $\\GL_1$-parameters as $\\varphi|_{W_{\\mathbb C}}=\\oplus_{k=1}^N \\chi_k$.\n The archimedean $L$-factor associated with $\\varphi$ may be written in the form (cf. 
\\eqref{def:Lv-arch})\n \\begin{equation}\\label{e:pf-10.3-1}L(s,\\varphi)=\\prod_{k=1}^N \\Gamma_{{\\mathbb R}}(s-\\mu_k(\\varphi)).\\end{equation}\n For each $k$ assume that $\\chi_k(z)=(z\/\\overline{z})^{a_k}$ for some $a_k\\in \\frac{1}{2}{\\mathbb Z}$.\n Then we have for every $1\\le k\\le N$,\n $\\mu_k(\\varphi)\\in \\frac{1}{2}{\\mathbb Z}_{\\le 0}$ and, after reordering $\\mu_k(\\varphi)$'s if necessary,\n \\begin{equation}\\label{e:pf-10.3-2}|a_k|\\le |\\mu_k(\\varphi)| \\le |a_k|+1.\\end{equation}\n Indeed this comes from inspecting the definition of local $L$-factors as in \\cite{Tat79}*{3.1,3.3} for instance.\n (Use \\cite{Tat79}*{3.1} if $a_k=0$ and \\cite{Tat79}*{3.3} otherwise.)\n\n Returning to the setup of the lemma, we have by definition $L(s,\\Pi_v)=L(s,r\\varphi_{\\xi^\\vee_v})$. For each $i\\in I$ we consider the composite complex $L$-parameter\n $$W_{\\mathbb C}\\stackrel{\\varphi_{\\xi^\\vee_v}|_{W_{\\mathbb C}}}{\\rightarrow} \\widehat{G}\\times W_{\\mathbb C}\n \\stackrel{(r_i,1)}{\\rightarrow} \\GL_{\\dim r_i}({\\mathbb C})$$\n and decompose it as $\\oplus_{j=1}^{\\dim r_i} \\chi_{i,j}$.\n We can find $a_{i,j}\\in \\frac{1}{2}{\\mathbb Z}$ such that\n $\\chi_{i,j}(z)=(z\/\\overline{z})^{a_{i,j}}$.\n For each $i$, highest weight theory tells us that\n $a_{i,j}=\\langle \\rho+\\lambda_{\\xi^\\vee_v}, \\lambda(r_i)\\rangle\\ge 0$ for one $j$\n and $|a_{i,j'}|\\le a_{i,j}$ for the other $j'\\neq j$.\n By \\eqref{e:pf-10.3-1} and \\eqref{e:pf-10.3-2}, the analytic conductor for $\\Pi_v$ (introduced in \\S\\ref{sec:pp:Lfn})\n satisfies\n \\begin{equation*}\n\\begin{aligned}\n C(\\Pi_v)&=\\prod_{k=1}^{d}(2+|\\mu_k(\\Pi_v)|)\\le\n\\prod_{i\\in I} \\prod_{j=1}^{\\dim r_i} (3+|a_{i,j}|)\\\\\n&\\le \\prod_{i\\in I} (3+\\langle \\rho+\\lambda_{\\xi^\\vee_v}, \\lambda(r_i)\\rangle)^{\\dim r_i}.\n\\end{aligned}\n\\end{equation*}\nFurther $\\langle \\rho+\\lambda_{\\xi^\\vee_v}, \\lambda(r_i)\\rangle=\n\\langle \\rho,\\lambda(r_i)\\rangle+ \\langle 
\\lambda_{\\xi^\\vee_v}, \\lambda(r_i)\\rangle\n\\le |\\lambda(r_i)|+|\\lambda(r_i)|M(\\xi_v)\\le M(r)(1+M(\\xi_v))$. Hence\n$$C(\\Pi_v)\\le \\prod_{i\\in I} (3+M(r)(1+M(\\xi_v)))^{\\dim r_i}\n=(3+M(r)(1+M(\\xi_v)))^{d}.$$\n\n Now we establish a lower bound for $C(\\Pi_v)$. For each $i$, we apply \\eqref{e:pf-10.3-2}\n to the unique $j=j(i)$ such that $a_{i,j}=\\langle \\rho+\\lambda_{\\xi^\\vee_v}, \\lambda(r_i)\\rangle$.\n Then\n \\begin{equation*}\n\\begin{aligned}\n C(\\Pi_v)&\\ge \\prod_{i\\in I} (2+|a_{i,j(i)}|)=\n \\prod_{i\\in I} (2+\\langle \\rho+\\lambda_{\\xi^\\vee_v}, \\lambda(r_i)\\rangle)\\\\\n &\\ge (2+m(r)(1+m(\\xi_v)))^{|I|}.\n\\end{aligned}\n\\end{equation*}\n\n\n\\end{proof}\n\n\\section{Statistics of low-lying zeros}\\label{sec:zeros}\n\nAs explained in the introduction, an application of the quantitative Plancherel Theorems~\\ref{t:level-varies} and~\\ref{t:weight-varies} is to study the distribution of the low-lying zeros of families of $L$-functions $\\Lambda(s,\\Pi)$. The purpose of this section is to state the main results and make our working hypothesis precise.\n\n\\subsection{The random matrix models}\\label{sec:zeros:matrix} For the sake of completeness we recall briefly the limiting $1$-level density of normalized eigenvalues. We consider the three symmetry types $\\mathcal{G}(N)=SO(2N), U(N), USp(2N)$. For each integer $N\\ge 1$ these groups are endowed with their Haar probability measure. For all matrices $A\\in \\mathcal{G}(N)$ we have a sequence $\\vartheta_j=\\vartheta_j(A)$ of normalized angles~\\cite{book:KS}\n\\begin{equation}\\label{def:theta}\n0\\le \\vartheta_1 \\le \\vartheta_2 \\le \\cdots \\le \\vartheta_N \\le N.\n\\end{equation}\nNamely the eigenvalues of $A\\in U(N)$ are given by $e(\\frac{\\vartheta_j}{N})=e^{2i\\pi\\vartheta_j\/N}$. 
The eigenvalues of $A\\in USp(2N)$ or $A\\in SO(2N)$ occur in conjugate pairs and are given by $e(\\pm \\frac{\\vartheta_j}{2N})$.\n\nThe mean spacing of the sequence~\\eqref{def:theta} is about one. The $1$-level density is defined by\n\\begin{equation*}\nW_{\\mathcal{G}(N)}(\\Phi)=\\int_{\\mathcal{G}(N)} \\sum_{1\\le j\\le N}^{} \\Phi(\\vartheta_j(A))dA.\n\\end{equation*}\nThe limiting density as $N\\to \\infty$ is given by the following (\\cite{book:KS}*{Theorem~AD.2.2}).\n\\begin{proposition}\\label{prop:KS}\n Let $\\mathcal{G}=U,SO(even)$ or $USp$. For all Schwartz functions $\\Phi$ on $\\mathbb{R}_+$,\n\\begin{equation*}\n \\lim_{N\\to \\infty} W_{\\mathcal{G}(N)}(\\Phi) = \\int_{\\mathbb{R}_+}\n\\Phi(x) W(\\mathcal{G})(x)dx,\n\\end{equation*}\nwhere the density functions $W(\\mathcal{G})$ are given by~\\eqref{intro:WG}.\n\\end{proposition}\n\nThe density functions $W(\\mathcal{G})$ are defined a priori on $\\mathbb{R}_+$. They are extended to $\\mathbb{R}_-$ by symmetry, namely $W(\\mathcal{G})(x)=W(\\mathcal{G})(-x)$ for all $x\\in \\mathbb{R}$.\nFor a Schwartz function $\\Phi$ whose Fourier transform $\\widehat \\Phi$ has support inside $(-1,1)$, we have the identities\n\\begin{equation}\\label{W(G)}\n\\int_{-\\infty}^\\infty\n\\Phi(x) W(\\mathcal{G})(x)dx=\n\\begin{cases}\n\\widehat \\Phi(0) & \\text{if $\\mathcal{G}=U$,}\\\\\n\\widehat \\Phi(0)-\\frac{1}{2} \\Phi(0)& \\text{if $\\mathcal{G}=USp$,}\\\\\n\\widehat \\Phi(0)+\\frac{1}{2} \\Phi(0)& \\text{if $\\mathcal{G}=SO(even)$.}\n\\end{cases}\n\\end{equation}\n\n\n\\subsection{The $1$-level density of low-lying zeros}\\label{sec:zeros:onelevel}\nConsider a family $\\mathfrak{F}=(\\mathfrak{F}_k)_{k\\ge 1}$ of automorphic representations of $\\GL(d,\\mathbb{A}_F)$.\nThe $1$-level density of the low-lying zeros is defined by\n\\begin{equation}\\label{def:D1}\t\nD(\\mathfrak{F}_k,\\Phi):=\\frac{1}{\\abs{\\mathfrak{F}_k}} \\sum_{\\Pi \\in \\mathfrak{F}_k} \\sum_j 
\\Phi\n\\biggl(\n\\frac{\\gamma_{j}(\\Pi)}{2\\pi}\n\\log C(\\mathfrak{F}_k)\n\\biggr).\n\\end{equation}\nHere $\\Phi$ is a Schwartz function; we don't necessarily assume $\\Phi$ to be even because the automorphic representations $\\Pi\\in \\mathfrak{F}_k$ might not be self-dual. See also the discussion at the end of~\\S\\ref{sec:pp:explicit}. The properties of the analytic conductor $C(\\mathfrak{F}_k)\\ge 2$ will be described in~\\S\\ref{sec:zeros:cond}.\n\nSince $\\Phi$ decays rapidly at infinity, the zeros $\\gamma_j(\\Pi)$ of $\\Lambda(s,\\Pi)$ that contribute to the sum are within $O(1\/\\log C(\\mathfrak{F}_k))$ distance of the central point. Therefore the sum over $j$ only captures a few zeros for each $\\Pi$. The average over the family $\\Pi\\in \\mathfrak{F}_k$ is essential to have a meaningful statistical quantity.\n\n\n\n\\subsection{Properties of families of $L$-functions}\n\nRecall that in~\\S\\ref{sub:aut-families} we have defined two kinds of families $\\mathcal{F}=(\\mathcal{F}_k)_{k\\ge 1}$ of automorphic representations on $G(\\mathbb{A}_F)$. The families from Example~\\ref{ex:level-varies} are varying in the level aspect: $\\mathbb{N}(\\mathfrak{n}_k)\\to \\infty$, while the families from Example~\\ref{ex:wt-varies} are varying in the weight aspect: $m(\\xi_k)\\rightarrow\\infty$. In both cases we assume that $\\phi_{S_0}\\in C_c^\\infty(G(F_{S_0}))$ is normalized such that\n\\begin{equation} \\label{phiS0}\n \\widehat{\\mu}^{\\text{pl}}_{S_0}(\\phi_{S_0})=1.\n\\end{equation}\nFor families in the weight aspect we assume from now on that the weights are bounded away from the walls. Namely we assume that we are given a fixed $\\eta>0$ and that\n\\begin{equation}\\label{dimxik}\n (\\dim \\xi_k)^{\\eta} \\le m(\\xi_k),\\quad \\forall k.\n\\end{equation}\n\nGiven the continuous $L$-morphism $r:{}^LG\\to \\GL(d,\\mathbb{C})$ we can construct a family $\\mathfrak{F}=r_*\\mathcal{F}$ of automorphic $L$-functions. 
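A classical illustration of this construction, for orientation only (the example below is standard and is not used in the sequel), is the symmetric square on $\GL(2)$:

```latex
% Illustration (standard example, not needed for the general argument):
% take G = PGL(2), so that \widehat{G} = SL_2(C), and let
%   r = Sym^2 : SL_2(C) -> GL_3(C)
% be the symmetric square (adjoint) representation.  Functoriality is
% known unconditionally here by the Gelbart--Jacquet lift, and the
% family consists of the L-functions
%   L(s,\Pi) = L(s, Sym^2(pi)),   Pi = Sym^2(pi)  on  GL(3, A_F).
% Since Sym^2 preserves a symmetric bilinear form, its Frobenius--Schur
% indicator is s(r) = +1, and the expected symmetry type is symplectic:
\lim_{k\to\infty} D(\mathfrak{F}_k,\Phi)
   = \widehat{\Phi}(0) - \tfrac{1}{2}\,\Phi(0).
```

This matches the symplectic symmetry found for symmetric-square families in~\cite{ILS00}.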
Assuming Langlands functoriality in the form of Hypothesis~\\ref{hypo:functorial-lift}, for each $\\pi\\in \\mathcal{F}_k$ there is a unique isobaric automorphic representation $\\Pi=r_*\\pi$ of $\\GL(d,\\mathbb{A}_F)$. We denote by $\\mathfrak{F}_k=r_*\\mathcal{F}_k$ the corresponding set of all such $\\Pi$. Recall from~\\S\\ref{sub:aut-rep} that $\\mathcal{F}_k$ is a weighted set and that the weight of each representation $\\pi$ is denoted $a_{\\mathcal{F}_k}(\\pi)$. The same holds for $\\mathfrak{F}_k$ and in particular we have\n\\begin{equation*}\n \\abs{\\mathfrak{F}_k}=\\abs{\\mathcal{F}_k}=\\sum_{\\pi \\in \\mathcal{F}_k} a_{\\mathcal{F}_k}(\\pi).\n\\end{equation*}\nWe have seen in Corollary~\\ref{c:estimate-F_k} that $\\abs{\\mathfrak{F}_k}\\to \\infty$ as $k\\to \\infty$.\n\nBy definition (see~\\eqref{e:a(pi)-general}), if $\\pi\\in \\mathcal{F}_k$ then $\\pi_\\infty$ has the same infinitesimal character as $\\xi_k^{\\vee}$, i.e. $\\pi \\in \\Pi_{disc}(\\xi_k)$. If $\\Pi\\in \\mathfrak{F}_k$ then $\\Pi_\\infty$ corresponds to the composition $r\\circ \\phi_{\\xi_k}$ via the Langlands correspondence for $\\GL_d(F_\\infty)$ (this is Hypothesis~\\ref{hypo:functorial-lift}.(iii)). In particular $\\Pi_\\infty$ is uniquely determined by $\\xi_k$ and $r$, and it is the same for all $\\Pi\\in \\mathfrak{F}_k$.\n\nIt is shown in Lemma~\\ref{lem:tempered} that $\\Pi_\\infty$ is tempered. Therefore Proposition~\\ref{prop:tempered} applies and the bounds towards Ramanujan~\\eqref{eq:prop:tempered} are satisfied for all $\\Pi \\in \\mathfrak{F}_k$.\n\nTo simplify notation throughout this and the next section, we use the convention of omitting the weight when writing a sum over $\\mathfrak{F}_k$. 
If $l(\\Pi)$ is a quantity that depends on $\\Pi\\in \\mathfrak{F}_k$, we set\n\\begin{equation*}\n \\sum_{\\Pi\\in \\mathfrak{F}_k} l(\\Pi) := \\sum_{\\pi \\in \\mathcal{F}_k} a_{\\mathcal{F}_k}(\\pi) l(r_*\\pi).\n\\end{equation*}\nThis convention applies in particular to~\\eqref{def:D1} above.\n\n\\subsection{Occurrence of poles}\\label{sec:zeros:poles}\nWe make the following hypothesis concerning poles of $L$-functions in our families.\n\\begin{hypothesis}\\label{hyp:poles}\n\tThere is $C_{pole}>0$ such that the following holds uniformly as $k\\to \\infty$:\n\\begin{equation*}\n\t\\#\\set{\\Pi\\in \\mathfrak{F}_k,\\ \\Lambda(s,\\Pi) \\text{ has a pole}} \\ll \\abs{\\mathfrak{F}_k}^{1-C_{pole}}.\n\\end{equation*}\n\\end{hypothesis}\nThe hypothesis is natural because it is related to the Functoriality Hypothesis~\\ref{hypo:functorial-lift} in many ways. Of course it would be difficult to define the event that \\Lquote{$L(s,\\Pi) \\text{ has a pole}$} without assuming Hypothesis~\\ref{hypo:functorial-lift}. Also when Functoriality is known unconditionally it is usually possible to establish Hypothesis~\\ref{hyp:poles} unconditionally as well. We shall return to this question in a subsequent article.\n\n\n\n\\subsection{Analytic conductors}\\label{sec:zeros:cond} As in~\\cite{ILS00} we define an analytic conductor $C(\\mathfrak{F}_k)$ associated to the family. The significance of $C(\\mathfrak{F}_k)$ is clear: all $\\Pi\\in \\mathfrak{F}_k$ have an analytic conductor $C(\\Pi)$ comparable to $C(\\mathfrak{F}_k)$. 
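For orientation, here is the familiar shape of the analytic conductor in the classical setting of~\cite{ILS00} (holomorphic newforms; an illustration only, with the usual normalization of the gamma factor):

```latex
% f a holomorphic newform of weight k and level N; the completed
% L-function involves the gamma factor Gamma_C(s + (k-1)/2), and the
% analytic conductor of L(s,f) is of size
C(f) \asymp k^{2} N .
% Thus in the weight aspect C(F) \asymp k^2 with the level fixed, and
% in the level aspect C(F) \asymp N with the weight fixed, uniformly
% over the family.
```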
We distinguish between families in the weight and level aspect.\n\n\\subsubsection{Weight aspect} For families in the weight aspect we set $C(\\mathfrak{F}_k)$ to be the analytic conductor $C(\\Pi_\\infty)$ of the archimedean factor $\\Pi_\\infty$ (recall that $\\Pi_\\infty$ is the same for all $\\Pi\\in \\mathfrak{F}_k$).\nThen $C(\\Pi)\\asymp C(\\mathfrak{F}_k)$ for all $\\Pi\\in \\mathfrak{F}_k$.\nFrom Corollary~\\ref{c:estimate-F_k} we have that $\\abs{\\mathfrak{F}_k}\\asymp \\dim \\xi_k$ as $k\\to \\infty$.\n\nIt remains to relate the quantities $C(\\mathfrak{F}_k)$, $\\dim \\xi_k$ and $m(\\xi_k)$, which is achieved in~\\eqref{dimxik:2} and~\\eqref{xik-CFk} below.\n\\begin{lemma} Let $v|\\infty$. Let $\\xi_v$ be an irreducible finite dimensional algebraic representation of $G(F_v)$. Then\n $ m(\\xi_v)^{\\abs{\\Phi^+}}\\ll \\dim \\xi_v \\ll M(\\xi_v)^{\\abs{\\Phi^+}}$. Also $M(\\xi_v)\\ll \\dim \\xi_v$.\n\\end{lemma}\n\\begin{proof}\n This follows from Lemma~\\ref{l:dim-and-trace}. 
Recall the definition of $m(\\xi_v)$ in~\\S\\ref{sub:st-disc} and $M(\\xi_v)$ in~\\S\\ref{s:langlands}.\n\\end{proof}\n\nBecause of~\\eqref{dimxik} and the previous lemma we have that\n\\begin{equation} \\label{dimxik:2}\n m(\\xi_k)^{\\abs{\\Phi^+}} \\ll \\dim \\xi_k \\ll m(\\xi_k)^{1\/\\eta}.\n\\end{equation}\nFrom Lemma~\\ref{l:Cpi} we deduce that there are positive constants $C_1,C_2$ such that\n\\begin{equation} \\label{xik-CFk}\n m(\\xi_k)^{C_1} \\ll C(\\mathfrak{F}_k) \\ll m(\\xi_k)^{C_2}.\n\\end{equation}\n\n\n\n\n\n\n\\subsubsection{Level aspect} For families in the level aspect the situation is more complicated mainly because of the lack of knowledge of the local Langlands correspondence on general groups.\nWe define $C(\\mathfrak{F}_k)$ by the following\n\\begin{equation*}\n\\log C(\\mathfrak{F}_k):=\n\\frac{1}{\\abs{\\mathfrak{F}_k}} \\sum_{\\Pi\\in\\mathfrak{F}_k}\n\\log C(\\Pi),\n\\end{equation*}\nand we introduce the following hypothesis.\n\n\\begin{hypothesis}\\label{hyp:cond} There are constants $C_3, C_4>0$ such that\n\\begin{equation*}\n \\mathbb{N}(\\mathfrak{n}_k)^{C_3} \\ll C(\\mathfrak{F}_k) \\ll \\mathbb{N}(\\mathfrak{n}_k)^{C_4}.\n\\end{equation*}\n\\end{hypothesis}\n\n\n\n\n\\subsection{Main result}\\label{sec:zeros:main}\nWe may now state our main results on low-lying zeros of the family $\\mathfrak{F}=r_*\\mathcal{F}$. The following is a precise version of Theorem~\\ref{th:low-lying} from the introduction (compare with~\\eqref{W(G)}).\n\\begin{theorem}\\label{th:onelevel}\n Assume Hypotheses~\\ref{hypo:functorial-lift},~\\ref{hyp:poles} and~\\ref{hyp:cond}. 
There is $0<\\delta<1$ such that for all Schwartz functions $\\Phi$ whose Fourier transform $\\widehat \\Phi$ has support in $(-\\delta,\\delta)$ the following holds:\n\\begin{equation*}\n\\lim_{k\\to \\infty} D(\\mathfrak{F}_k,\\Phi)=\\widehat \\Phi(0) -\\frac{s(r)}{2}\\Phi(0),\n\\end{equation*}\nwhere $s(r)\\in \\set{-1,0,1}$ is the Frobenius--Schur indicator of $r:{}^LG\\to \\GL_d(\\mathbb{C})$.\n\\end{theorem}\n\n\n\n\n\n\n\\section{Proof of Theorem~\\ref{th:onelevel}}\\label{sec:pf}\n\nThe method of proof of the asymptotic distribution of the $1$-level density of low-lying zeros of families of $L$-functions has appeared in many places in the literature and is by now relatively standard. However we must justify the details carefully as families of $L$-functions have not been studied in such a general setting before. The advantage of working at this level of generality is that we have been able to isolate the essential mechanisms and arithmetic ingredients involved.\n\nIn order to keep the analysis concise we have introduced some technical improvements which are listed below and can be helpful in other contexts.\n\\begin{enumerate}[(i)]\n\\item The first observation is that it is better not to expand the Euler product of the $L$-functions $L(s,\\Pi)$ into Dirichlet series. We can apply the explicit formula (Proposition~\\ref{prop:explicit}) in a way that preserves the Euler product structure.\n\\item We use non-trivial bounds towards Ramanujan in a systematic way to handle ramified places.\n\\item We clarify that it is not necessary to assume that the representations are self-dual, or satisfy any other symmetry, to carry out the analysis.\n\\item Most importantly we exploit the properties of the Plancherel measure when estimating Satake parameters. 
All previous articles on the subject rely in one way or another on explicit Hecke relations, which makes the proofs indirect and lengthy, although manageable for groups of low rank.\n\\end{enumerate}\n\n\n\\subsection{Notation and outline}\\label{sec:pf:outline}\nTo formulate the main statements in an elegant way we introduce the following notation:\n\\begin{equation}\\label{def:hatL}\n\\widehat \\mathcal{L}_{k,v}(y):=\\frac{1}{\\abs{\\mathfrak{F}_k}}\n\\sum_{\\Pi \\in \\mathfrak{F}_k}\n\\int^\\infty_{-\\infty} \\frac{L'}{L}(\\frac{1}{2}+ix,\\Pi_v)\ne^{2\\pi iyx} dx\n,\\quad v\\in \\mathcal{V}_F,\\ y\\in \\mathbb{R}.\n\\end{equation}\nWe view $\\widehat \\mathcal{L}_{k,v}$ as a tempered distribution on $\\mathbb{R}$.\nNote that when $v$ is non-archimedean $\\widehat \\mathcal{L}_{k,v}$ is a signed measure supported on a discrete set inside $\\mathbb{R}_{>0}$.\n\n\n\nThe proof of the main theorems will follow from a fine estimation of $\\widehat \\mathcal{L}_{k,v}(y)$ as $k\\to \\infty$. The uniformity in both the places $v\\in \\mathcal{V}_F$ and the parameter $y\\in \\mathbb{R}$ will play an important role. Typically $q_v$ will be as large as $C(\\mathfrak{F}_k)^{O(\\delta)}$ and $y$ will be of the order of magnitude of $\\log C(\\mathfrak{F}_k)$.\n\nThe first step of the proof consists in applying the explicit formula (Proposition~\\ref{prop:explicit}). There are terms coming from the poles of $L(s,\\Pi)$ which are dealt with in~\\S\\ref{sec:pf:poles}. The second term on the right-hand side of Proposition~\\ref{prop:explicit} is expressed in terms of the arithmetic conductor $q(\\Pi)$ and will yield a positive contribution for families in the level aspect. 
When evaluating the $1$-level density $D(\\mathfrak{F}_k,\\Phi)$ it remains to consider the following sum over all places\n\\begin{equation}\\label{out:sumv}\n\\frac{1}{\\log C(\\mathfrak{F}_k)}\n\\sum_{v\\in\\CmV_F}\n\\left \\langle\n\\widehat \\mathcal{L}_{k,v} (y),\n\\widehat{\\Phi}\\Bigp{\\frac{2\\pi y}{\\log C(\\mathfrak{F}_k)}}\n\\right \\rangle,\n\\end{equation}\nplus a conjugate expression, see~\\S\\ref{sec:pf:explicit}.\n\nOur convention on Fourier transforms is standard. Let $\\Phi$ be a Schwartz function on $\\mathbb{R}$. The Fourier transform is as in~\\eqref{def:fourier} and the inverse Fourier transform reads\n\\begin{equation*}\n\\Phi(x)=\\int_{-\\infty}^{+\\infty} \\widehat\\Phi(y) e^{2\\pi i xy} dy.\n\\end{equation*}\nGiven two Schwartz functions $\\Phi$ and $\\Psi$ we let\n\\begin{equation*}\n\\langle \\Phi,\\Psi\\rangle:=\\int_{-\\infty}^{\\infty}\n\\Phi(x) \\Psi(x)dx.\n\\end{equation*}\nSometimes we use the notation $\\langle \\Phi(x),\\Psi(x)\\rangle$ to emphasize the variable of integration. The Plancherel formula reads\n\\begin{equation}\\label{fourier:pl}\n\\langle \\Phi(x),\\Psi(x)\\rangle=\\langle \\widehat\\Phi(y),\\widehat\\Psi(-y)\\rangle.\n\\end{equation}\nWe use a similar convention for tempered distributions. The Fourier transform of the pure phase function $x\\mapsto e^{2i\\pi ax}$ is the Dirac distribution $\\delta(a)$ centered at the point $a$.\nTo condense notation we write $\\Psi(y):=\\widehat{\\Phi}\\Bigp{\\dfrac{2\\pi y}{\\log C(\\mathfrak{F}_k)}}$ and express our remainder terms as functions of $\\pnorm{\\Psi}_\\infty$ and $\\pnorm{\\widehat \\Psi}_1$. Since $\\Phi$ is fixed these two norms are uniformly bounded.\n\n\n\nThere are different kinds of estimates depending on the nature of the place $v$. 
We shall distinguish the following sets of places:\n\\begin{enumerate}[(i)]\n\\item The set of archimedean places $S_\\infty$, the contribution of which is evaluated in~\\S\\ref{sec:pf:arch}.\n\\item A fixed set $S_0$ of non-archimedean places. These may be thought of as the \\Lquote{ramified places}. Their contribution is negligible as shown in~\\S\\ref{sec:pf:gn}.\n\\item The set $\\set{v\\mid \\mathfrak{n}_k}$ of places that divide the level. These play a role only when the level varies. We use the convention that for families in the weight aspect this set of places is empty.\n\\item The set of generic places $S_{\\text{gen}}$, which is the complement in $\\mathcal{V}_F$ of the above three sets of places. This set will actually be decomposed into two parts, $S_{\\text{main}}$ and $S_{\\text{cut}}$. The set $S_{\\text{cut}}$ is infinite and consists of those non-archimedean places $v\\in S_\\text{gen}$ such that $q_v$ is large enough to escape the support of $\\widehat\\Phi$ (see~\\eqref{def:Scut} below for the exact definition of $S_\\text{cut}$). For such places the pairing in~\\eqref{out:sumv} vanishes. This actually shows that the sum over $v\\in\\mathcal{V}_F$ in~\\eqref{out:sumv} is finitely supported.\n\\item The remaining set $S_\\text{main}$ is finite (but growing as $k \\to \\infty$). It will produce the main contribution. For all $v\\in S_\\text{main}$, $G$, $r$ and $\\pi$ are unramified over $F_v$. The set $S_\\text{main}$ is the disjoint union of the subsets $S_\\text{main} \\cap \\mathcal{V}_F(\\theta)$, $\\theta \\in \\mathscr{C}(\\Gamma_1)$ (see~\\S\\ref{s:Sato-Tate}).\n\\end{enumerate}\n\n\nThe outline of this section is as follows. For non-archimedean places $v\\in S_\\text{main}$ we set up in~\\S\\ref{sec:pf:moments} some notation on moments. 
The quantity $\\widehat \\mathcal{L}_{\\text{pl},v}$ in~\\eqref{CmL-pl} is the analogue of~\\eqref{def:hatL} where the average over automorphic representations $\\Pi\\in \\mathfrak{F}_k$ in~\\eqref{def:hatL} has been replaced by an average of $\\Pi_v$ against the unramified Plancherel measure.\n\nOur Plancherel equidistribution theorems for families (Theorems~\\ref{t:level-varies} and~\\ref{t:weight-varies}) imply that $\\widehat \\mathcal{L}_{k,v}$ is asymptotic to $\\widehat \\mathcal{L}_{\\text{pl},v}$ as $k\\to \\infty$. It is essential that these equidistribution theorems be quantitative in a strong sense for the error term to be negligible. Details are given in~\\S\\ref{sec:pf:pl} and \\S\\ref{sec:pf:main}.\n\nThe various remainder terms are handled in~\\S\\ref{sec:pf:error}.\nThe next step is to show the existence of the limit of the main term\n\\begin{equation} \\label{out:main}\n \\frac{1}{\\log C(\\mathfrak{F}_k)} \\sum_{v\\in S_\\text{main}}\n\\bigl \\langle\n\\widehat\\mathcal{L}_{\\text{pl},v},\\Psi\n\\bigr\\rangle\n\\end{equation}\nas $k\\to \\infty$. We need the Cebotarev density theorem (see~\\S\\ref{sec:pf:prime}) and an evaluation of $\\widehat\\mathcal{L}_{\\text{pl},v}$.\n\nThe evaluation of $\\widehat\\mathcal{L}_{\\text{pl},v}$ is a nice argument in representation theory, see~\\S\\ref{sec:pf:M12}. At this stage we see very clearly the role of the two optimal conditions on $r$ from Theorem~\\ref{th:low-lying} (irreducible and does not factor through $W_F$). The evaluation of $\\widehat\\mathcal{L}_{\\text{pl},v}$ can actually be quite complicated since it depends on the restriction of $r$ to the various subgroups $\\widehat G \\rtimes W_{F_v}$ and the unramified Plancherel measure on $G(F_v)$. 
Fortunately the expression simplifies when summing over all places $v\\in S_\\text{main}$ as shown in~\\S\\ref{sec:pf:M12}.\n\nThe overall conclusion of the analysis is that the limit of~\\eqref{out:main} as $k\\to \\infty$ (plus its conjugate) is equal\n\\footnote{A quick explanation for the minus sign is as follows. A local $L$-factor is of the form $L(s)=(1-\\alpha q^{-s})^{-1}$ with three minus signs, so its logarithmic derivative is $\\tfrac{L'}{L}(s)=-\\log q \\sum\\limits_{\\nu\\ge 1}\\alpha^{\\nu}q^{-\\nu s}$.}\nto $-s(r)\\Phi(0)$, where $s(r)$ is the Frobenius--Schur indicator of $r$ as in Theorem~\\ref{th:onelevel}.\n\n\n\\subsection{Explicit formula}\\label{sec:pf:explicit} We apply the explicit formula (Proposition~\\ref{prop:explicit}) for all $\\Pi\\in \\mathfrak{F}_k$. Averaging over $\\Pi$ we obtain\n\\begin{equation}\\label{D1explicit}\nD(\\mathfrak{F}_k,\\Phi)=\nD_{\\text{pol}}(\\mathfrak{F}_k,\\Phi)+\n\\frac{\\widehat \\Phi(0)}{\\abs{\\mathfrak{F}_k}}\n\\sum_{\\Pi \\in \\mathfrak{F}_k}^{} \\frac{\\log q(\\Pi)}{\\log C(\\mathfrak{F}_k)} + \\sum_{v\\in \\mathcal{V}_F}\n\\bigl(D_{v}(\\mathfrak{F}_k,\\Phi)+\\overline{D_{v}}(\\mathfrak{F}_k,\\Phi)\\bigr).\n\\end{equation}\nHere $D_{\\text{pol}}(\\mathfrak{F}_k,\\Phi)$ denotes the contribution of the possible poles and will be dealt with in~\\S\\ref{sec:pf:poles}. Also we have set\n\\begin{equation*}\nD_{v}(\\mathfrak{F}_k,\\Phi):=\n\\frac{1}{2\\pi \\abs{\\mathfrak{F}_k}}\n\\sum_{\\Pi\\in\\mathfrak{F}_k}\n\\int_{-\\infty}^{\\infty}\n\\frac{L'}{L}(\\frac{1}{2}+ix,\\Pi_v) \\Phi\n\\Bigp{\\frac{x}{2\\pi}\\log C(\\mathfrak{F}_k)}\ndx.\n\\end{equation*}\nSee also the remark in~\\eqref{explicit:switch} explaining how to shift contours. 
Recall that the scaling factor $\\tfrac{\\log C(\\mathfrak{F}_k)}{2\\pi}$ comes from the definition~\\eqref{def:D1}.\n\nApplying Fourier duality~\\eqref{fourier:pl} and the definition~\\eqref{def:hatL} implies the equality\\footnote{Note that the exponential in~\\eqref{def:hatL} is $e^{2i\\pi xy}$ with a plus sign.}\n\\begin{equation}\\label{def:D1v}\nD_{v}(\\mathfrak{F}_k,\\Phi)=\\frac{1}{\\log C(\\mathfrak{F}_k)}\n\\bigl\\langle \\widehat\\mathcal{L}_{k,v},\n\\widehat \\Phi \\Bigp{\\frac{2\\pi y}{\\log C(\\mathfrak{F}_k)}}\n\\bigr\\rangle.\n\\end{equation}\nSimilarly we have\n\\begin{equation*}\n\\begin{aligned}\n\\overline{D_{v}}(\\mathfrak{F}_k,\\Phi)\n&:=\n\\frac{1}{2\\pi \\abs{\\mathfrak{F}_k}}\n\\sum_{\\Pi\\in\\mathfrak{F}_k}\n\\int_{-\\infty}^{\\infty}\n\\overline{\n\\frac{L'}{L}(\\frac{1}{2}+ix,\\Pi_v) \\Phi\n}\n\\Bigp{\\frac{x}{2\\pi}\\log C(\\mathfrak{F}_k)}\ndx\\\\\n&= \\frac{1}{\\log C(\\mathfrak{F}_k)}\n\\bigl \\langle\n\\overline{\n\\widehat \\mathcal{L}_{k,v}},\n\\widehat \\Phi\n\\Bigp{\\frac{-2\\pi y}{\\log C(\\mathfrak{F}_k)}}\n\\bigr \\rangle.\n\\end{aligned}\n\\end{equation*}\nWe have made a change of variables so as to make explicit the multiplicative factor $1\/\\log C(\\mathfrak{F}_k)$ in front of the overall sum.\n\n\\subsection{Contribution of the poles}\\label{sec:pf:poles}\nThe contribution of the poles in the explicit formula is given by\n\\begin{equation*}\nD_{\\text{pol}}(\\mathfrak{F}_k,\\Phi):=\n\\frac{1}{\\abs{\\mathfrak{F}_k}}\n\\sum_{\\Pi\\in \\mathfrak{F}_k}\n\\sum_j\n\\Phi\\left(\n\\frac{r_j(\\Pi)}{2\\pi} \\log C(\\mathfrak{F}_k)\n\\right).\n\\end{equation*}\n\nWe bound the sum trivially and obtain\n\\begin{equation*} D_{\\text{pol}}(\\mathfrak{F}_k,\\Phi)\n\\ll \\frac{\\#\\set{\\Pi\\in \\mathfrak{F}_k,\\ L(s,\\Pi) \\text{ has a pole}}}\n{\\abs{\\mathfrak{F}_k}} C(\\mathfrak{F}_k)^{O(\\delta)}.\n\\end{equation*}\nThe last polynomial factor comes from the exponential growth of $\\Phi$ away from the real axis and the fact that 
the Fourier transform $\\widehat \\Phi$ is supported in $(-\\delta,\\delta)$. Now thanks to Hypothesis~\\ref{hyp:poles} the right-hand side goes to zero as $k\\to \\infty$. We also use the fact that $C(\\mathfrak{F}_k)\\ll \\abs{\\mathfrak{F}_k}^{O(1)}$, which follows from Corollary~\\ref{c:estimate-F_k} and the upper bound for $C(\\mathfrak{F}_k)$ from~\\eqref{xik-CFk}.\n\n\\subsection{Archimedean places}\\label{sec:pf:arch} In this subsection we handle the archimedean places $v\\in S_\\infty$. Recall that $0<\\theta<\\frac{1}{2}$ is a bound towards Ramanujan as in~\\S\\ref{sec:pp:Lfn}.\n\\begin{lemma}\\label{prop:psi-fn} For all $\\mu\\in \\mathbb{C}$ with $\\MRe \\mu \\le \\theta$, and all Schwartz functions $\\Psi$, the following holds uniformly\n\\begin{equation*}\n\\int_{-\\infty}^{\\infty}\n\\frac{\\Gamma'}{\\Gamma}(\\frac{1}{2}-\\mu+ix)\\Psi(x)dx\n=\\widehat \\Psi(0) \\log (\\frac{1}{2}-\\mu)\n+O(\\pnorm{\\Psi}_1+\\pnorm{x\\Psi(x)}_1).\n\\end{equation*}\n\\end{lemma}\n\\begin{proof}\nWe have the following Stirling approximation for the digamma function (traditionally denoted $\\psi(z)$):\n\\begin{equation}\\label{psi:asymp}\n\\frac{\\Gamma'}{\\Gamma}(z)= \\log z + O(1)\n\\end{equation}\nuniformly in the angular region $\\abs{\\arg z}\\le \\pi -\\epsilon$, see e.g.~\\cite{book:Iw}*{Appendix B}.\nSince $\\theta<1\/2$ all points $\\frac{1}{2}-\\mu+ix$ lie in the interior of the angular region and we can apply~\\eqref{psi:asymp}. 
We note also that uniformly\n\\begin{equation*}\n\\log(\\frac{1}{2}-\\mu+ix)=\\log(\\frac{1}{2}-\\mu) + O(\\log(2+\\abs{x})),\n\\end{equation*}\nand this concludes the proof of the lemma.\n\\end{proof}\n\\remark Note that the complete asymptotic expansion actually involves the Bernoulli numbers and is of the form\n\\begin{equation}\\label{psi:completeasymp}\n\\frac{\\Gamma'}{\\Gamma}(z)=\\log z - \\frac{1}{2z} - \\sum_{n=1}^{N} \\frac{B_{2n}}{2n z^{2n}}\n+O\\Bigl(\n\\frac{1}{z^{2N+2}}\n\\Bigr).\n\\end{equation}\nFrom~\\eqref{psi:completeasymp} we have that\n\\begin{equation*}\n\\frac{\\Gamma'}{\\Gamma}(\\sigma+it)+\\frac{\\Gamma'}{\\Gamma}(\\sigma-it)=2\\frac{\\Gamma'}{\\Gamma}(\\sigma)+O((t\/\\sigma)^2)\n\\end{equation*}\nholds uniformly for $\\sigma$ and $t$ real with $\\sigma >0$. As in~\\cite{ILS00}*{\\S4} this may be used when the test function $\\Psi$ is even (as is typically the case when all representations $\\Pi\\in \\mathfrak{F}$ are self-dual). We don't make this assumption and therefore use~\\eqref{psi:asymp} instead.\n\n\\begin{corollary} Uniformly for all archimedean places $v\\in S_\\infty$ and all Schwartz functions $\\Psi$, the following holds:\n\\begin{equation*}\n\\Bigl\\langle\n\\widehat \\mathcal{L}_{k,v},\n\\Psi\n\\Bigr\\rangle\n=\\frac{\\Psi(0)}\n{\\abs{\\mathfrak{F}_k}}\n\\sum_{\\Pi\\in\\mathfrak{F}_k}\n\\sum_{i=1}^{d}\n\\log_v (\\frac{1}{2} -\\mu_i(\\Pi_v))\n+ O(\\bigl\\lVert \\widehat \\Psi \\bigr \\rVert_1).\n\\end{equation*}\nHere we have set $\\log_v z:=\\frac{1}{2} \\log z$ when $v$ is real and $\\log_v z:= \\log z$ when $v$ is complex.\n\\end{corollary}\n\\begin{proof}\nRecall the convention~\\eqref{def:Lv-arch} on local $L$-factors at archimedean places $v\\in S_\\infty$. 
From Fourier duality~\\eqref{fourier:pl} and the definition~\\eqref{def:hatL} we have\n\\begin{equation*}\n\\Bigl\\langle\n\\widehat \\mathcal{L}_{k,v},\\Psi\n\\Bigr\\rangle\n=\n\\frac{1}{\\abs{\\mathfrak{F}_k}}\n\\sum_{\\Pi \\in \\mathfrak{F}_k}\n\\int^\\infty_{-\\infty} \\frac{L'}{L}(\\frac{1}{2}+ix,\\Pi_v)\n\\widehat \\Psi(x)\ndx.\n\\end{equation*}\nNote that\n\\begin{equation*}\n\\frac{\\Gamma'_v}{\\Gamma_v}(s)=\n\\begin{cases}\n-\\frac{1}{2}\\log \\pi +\\frac{1}{2} \\frac{\\Gamma'}{\\Gamma}(\\frac{s}{2})\n,&\n\\text{when $v$ is real,}\\\\\n-\\log (2\\pi)+\\frac{\\Gamma'}{\\Gamma}(s)\n,&\n\\text{when $v$ is complex.}\n\\end{cases}\n\\end{equation*}\nApplying Lemma~\\ref{prop:psi-fn}, the estimate in the corollary follows. Recall from Proposition~\\ref{prop:tempered} that the bounds towards Ramanujan in~\\S\\ref{sec:pp:Lfn} apply to all $\\Pi\\in \\mathfrak{F}_k$.\n \\end{proof}\n\nWe may continue the analysis of the contribution of the archimedean places to the one-level density. For $v\\in S_\\infty$, the local $L$-functions $L(s,\\Pi_v)$ are the same for all $\\Pi \\in \\mathfrak{F}_k$. We therefore conclude that\n\\begin{equation}\\label{archterms}\n\\begin{aligned}\n\\sum_{v\\in S_\\infty}^{}\nD_{v}(\\mathfrak{F}_k,\\Phi)\n+\\overline{D_{v}(\\mathfrak{F}_k,\\Phi)}\n&=\n\\frac{\\widehat \\Phi(0)}{\\log C(\\mathfrak{F}_k)}\n\\left(\n\\sum_{v\\in S_\\infty}^{}\n\\sum_{i=1}^{d}\n2 \\log_v \\abs{\\frac{1}{2}-\\mu_i(\\Pi_v)}\n+O(1)\n \\right) \\\\\n &=\n\\frac{\\widehat \\Phi(0)}{\\log C(\\mathfrak{F}_k)}\n\\left(\n\\sum_{v\\in S_\\infty}^{}\n\\log C(\\Pi_v)\n+O(1)\n \\right).\n\\end{aligned}\n\\end{equation}\nIn the last line we used the definition of the analytic conductor at archimedean places from~\\S\\ref{sec:pp:Lfn}.\n\n\\subsection{Moments}\\label{sec:pf:moments}\nNow let $v\\in \\mathcal{V}_F^\\infty$ be a non-archimedean place. 
A straightforward computation shows that\n\\begin{equation*}\n\\frac{L'}{L}(s,\\Pi_v)=-\\log q_v\n\\sum_{\\nu\\ge 1}\n\\beta^{(\\nu)}(\\Pi_v)q_v^{-\\nu s}\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\label{def:beta-Pi}\n\\beta^{(\\nu)}(\\Pi_v):=\n\\sum_{i=1}^d\n\\alpha^{\\nu}_i(\\Pi_v).\n\\end{equation*}\nAveraging over the family $\\mathfrak{F}$ we let\n\\begin{equation}\n\\label{def:beta-F}\n\\beta^{(\\nu)}_v(\\mathfrak{F}_k)\n:=\n\\frac{1}{\\abs{\\mathfrak{F}_k}}\n\\sum_{\\Pi\\in\\mathfrak{F}_k}\n\\beta^{(\\nu)}(\\Pi_v),\\quad v\\in\\mathcal{V}_F^\\infty,\\ \\nu\\ge 1.\n\\end{equation}\nThe coefficients $\\beta^{(\\nu)}_v$ are the moments of the Satake parameters and their estimation plays a central role in the proof of Theorem~\\ref{th:onelevel}. Then for all $v\\in \\mathcal{V}_F^{\\infty}$, the following formula is equivalent to the definition~\\eqref{def:hatL}:\n\\begin{equation}\n\\label{CmL-beta}\n\\widehat \\mathcal{L}_{k,v}=-\\log q_v \\sum_{\\nu\\ge 1}\n\\beta^{(\\nu)}_v(\\mathfrak{F}_k) q_v^{-\\nu\/2}\\delta\n\\bigp{\\frac{\\nu}{2\\pi} \\log q_v}.\n\\end{equation}\nSee~\\S\\ref{sec:pf:outline} for the convention on the Dirac delta.\n\nSimilarly for all $v\\in S_\\text{gen}$ we let\n\\begin{equation}\\label{CmL-pl}\n\\widehat \\mathcal{L}_{\\text{pl},v}:=-\\log q_v\n\\sum_{\\nu\\ge 1}\n\\beta^{(\\nu)}_{\\text{pl},v} q_v^{-\\nu\/2}\n\\delta\n\\bigp{\\frac{\\nu}{2\\pi} \\log q_v}.\n\\end{equation}\nHere the coefficients $\\beta^{(\\nu)}_{\\text{pl},v}$ are defined locally as follows. We assume that for all $v\\in S_\\text{gen}$, the group $G$ is unramified over $F_v$ and that the restriction $r|_{\\widehat{G}\\rtimes W_{F_v}}$ is an unramified $L$-morphism, i.e. it factors through $\\widehat G \\rtimes W^{\\text{ur}}_{F_v}$. 
The Plancherel measure $\\widehat \\mu^{\\text{pl}}_{v}$ is as in \\S~\\ref{s:Plancherel} and we recall the notation $\\widehat \\mu^{\\text{pl},\\mathrm{ur}}_v$ for the restriction of the Plancherel measure $\\widehat \\mu^{\\text{pl}}_{v}$ to $G(F_v)^{\\wedge,\\mathrm{ur}}$. Then\n\\begin{equation}\n\\label{def:beta-pl}\n\\beta^{(\\nu)}_{\\text{pl},v}:= \\widehat \\mu^{\\text{pl},\\mathrm{ur}}_v\n\\Bigl(\nr^*\\bigl(\nY_1^\\nu+\\cdots +Y_d^{\\nu}\n\\bigr)\n\\Bigr).\n\\end{equation}\nHere we are using the convention in \\S\\ref{sub:case-of-GL_d} for the unramified Hecke algebra $\\mathcal{H}^{\\mathrm{ur}}(\\GL_d)$ and the Satake isomorphism with the symmetric algebra in the variables $Y_1,\\ldots ,Y_d$. We recall the convention in \\S\\ref{sub:trun-unr-Hecke} for the $L$-morphism of Hecke algebras $r^*:\\mathcal{H}^{\\mathrm{ur}}(\\GL_d)\\to \\mathcal{H}^{\\mathrm{ur}}(G(F_v))$.\n\nThe supports of both measures $\\widehat \\mathcal{L}_{k,v}$ and $\\widehat \\mathcal{L}_{\\text{pl},v}$ are contained in the discrete set $\\frac{2\\pi}{\\log q_v}\\mathbb{N}_{\\ge 1}$. In particular we note that when $q_v$ is large enough this set is disjoint from the support of $\\widehat \\Phi$. As a consequence all sums over places $v\\in \\mathcal{V}_F$ considered in this section are actually finitely supported.\n\n\\subsection{General upper-bounds}\\label{sec:pf:gn} Recall from Proposition~\\ref{prop:tempered} that the bounds towards Ramanujan apply to all $\\Pi\\in \\mathfrak{F}_k$. 
Then for all non-archimedean $v\\in \\mathcal{V}_F^{\\infty}$, we have the upper bound\n\\begin{equation*}\n\\abs{\\alpha_i(\\Pi_v)}\\le q_v^{\\theta}.\n\\end{equation*}\nIt then follows that\n\\begin{equation}\\label{gn-beta}\n\\abs{\\beta^{(\\nu)}(\\Pi_v)}\n\\le d q_v^{\\nu \\theta},\n\\end{equation}\nand the same holds true for $\\beta^{(\\nu)}_v(\\mathfrak{F}_k)$.\n\\begin{proposition}\\label{prop:gn-bd}\nFor all $v\\in \\mathcal{V}^{\\infty}_F$ and all Schwartz functions $\\Psi$,\n\\begin{equation*}\n\\left\\langle \\widehat \\mathcal{L}_{k,v},\\Psi \\right\\rangle\n\\ll q_v^{\\theta-\\frac{1}{2}}\\log q_v \\pnorm{\\Psi}_{\\infty}.\n\\end{equation*}\n\\end{proposition}\n\\begin{proof} Inserting the upper bound~\\eqref{gn-beta} in~\\eqref{CmL-beta} we have\n\\begin{equation*}\n\\left\\langle \\widehat \\mathcal{L}_{k,v},\\Psi \\right\\rangle\n\\ll \\log q_v \\sum_\\nu q_v^{\\nu(\\theta-1\/2)} \\abs{\\Psi(\\frac{\\nu}{2\\pi}\\log q_v)}.\n\\end{equation*}\nBecause $0<\\theta<\\frac{1}{2}$, the geometric series is dominated by its first term $\\nu=1$ and the conclusion follows.\n\\end{proof}\n\nSimilarly, from~\\eqref{def:beta-pl} we have the bound\n\\begin{equation}\\label{gn-pl-beta}\n\\abs{\\beta_{\\text{pl},v}^{(\\nu)}}\\le d\n\\end{equation}\nfor all $v\\in S_\\text{gen}$. Indeed this follows from the fact that $\\widehat \\mu^{\\text{pl},\\mathrm{ur}}$ has total mass one, see \\S\\ref{s:Plancherel}, and the inequality $\\abs{\\widehat \\phi(\\pi)}\\le d$ which follows from~\\S\\ref{s:unramified-spectrum}. Here $\\pi\\in G(F_v)^{\\wedge,\\mathrm{ur},{\\rm temp}}$ which is the support of the unramified Plancherel measure $\\widehat \\mu^{\\text{pl},\\mathrm{ur}}$. For instance an unramified $L$-parameter $r\\phi:W_{F_v}^{\\mathrm{ur}} \\to \\GL_d(\\mathbb{C})$ has bounded image, and therefore all Frobenius eigenvalues have absolute value one. 
We deduce that\n\\begin{equation*}\n\\left\\langle \\widehat \\mathcal{L}_{\\text{pl},v},\\Psi \\right\\rangle\n\\ll q_v^{-\\frac{1}{2}}\\log q_v \\pnorm{\\Psi}_{\\infty}.\n\\end{equation*}\nWe recall that the Plancherel measure $\\widehat \\mu^{\\text{pl},\\mathrm{ur}}_v$ is supported on the tempered spectrum $G(F_v)^{\\wedge,\\mathrm{ur},{\\rm temp}}$.\n\n\\subsection{Plancherel equidistribution}\\label{sec:pf:pl} We are now in a position to apply the Plancherel equidistribution theorem for families established in Section~\\ref{s:aut-Plan-theorem}. We shall derive uniform asymptotics as $k\\to \\infty$ for the moments $\\beta^{(\\nu)}_{v}(\\mathfrak{F}_k)$ in~\\eqref{def:beta-F}.\n\\begin{proposition}\\label{prop:F-pl} The following holds uniformly in $\\nu \\ge 1$ and $v\\in S_\\text{gen}$:\n\\begin{equation} \\label{main-F-pl}\n\\beta^{(\\nu)}_{v}(\\mathfrak{F}_k)=(1+o(1)) \\beta^{(\\nu)}_{\\text{pl},v}\n+O(q_v^{A_7+B_9\\nu} C(\\mathfrak{F}_k)^{-C_5}).\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof} Let $S_0$ be a sufficiently large set of non-archimedean places which contains all places $v\\in\\mathcal{V}_F^\\infty$ where $G$ is ramified and where $r$ is ramified. Let $S_1:=\\set{v}$. We shall distinguish between our two types of families during the proof of Proposition~\\ref{prop:F-pl}. Before doing so, we begin with some general constructions.\n\n With the same convention as in~\\eqref{def:beta-pl} we set\n\\begin{equation*}\n\\widehat \\phi_v := r^* (Y_1^\\nu+\\cdots + Y_d^{\\nu}) \\in \\mathcal{H}^{\\mathrm{ur}}(G(F_v)).\n\\end{equation*}\nHere the conventions on the Satake isomorphism are as in~\\S\\ref{sub:Satake-trans} and \\S\\ref{sub:L-mor-unr-Hecke}. Recall that $\\widehat \\phi_{S_0}$ has been fixed earlier (independent of $k$) and normalized as in~\\eqref{phiS0}.\n\nBy definition we have that $\\beta_{\\text{pl},v}^{(\\nu)}=\\widehat \\mu^{\\text{pl}}_{v}(\\widehat \\phi_v)$. 
Next we can verify that (see the notation in \\S\\ref{sub:aut-rep})\n\\begin{equation*} \\begin{aligned}\n \\beta_v^{(\\nu)}(\\mathfrak{F}_k)&= \\frac{1}{\\abs{\\mathcal{F}_k}} \\sum_{\\pi\\in \\mathcal{F}_k} a_{\\mathcal{F}_k}(\\pi)\\widehat \\phi_v(\\pi_v)\\\\\n &=\\widehat \\mu^{\\text{count}}_{\\mathcal{F}_k,v}(\\widehat \\phi_v)\\\\\n &=\\frac{\\tau'(G)\\dim \\xi_k}{\\mu^{\\text{can}}(U^{S,\\infty}_k) \\abs{\\mathcal{F}_k}}\n \\widehat \\mu_{\\mathcal{F}_k,v}(\\widehat \\phi_v).\n\\end{aligned}\n\\end{equation*}\nFrom Corollary~\\ref{c:estimate-F_k}, $ \\dfrac{\\tau'(G)\\dim \\xi_k}{\\mu^{\\text{can}}(U^{S,\\infty}_k) \\abs{\\mathcal{F}_k}}=1+o(1)$ as $k\\to \\infty$ (note that the quantity is independent of $v\\in S_\\text{gen}$).\n\nThanks to Lemma~\\ref{l:bound-degree-Satake} we have that $\\phi_{v}\\in \\mathcal{H}^{\\mathrm{ur}}(G(F_v))^{\\le \\beta \\nu}$ and $\\abs{\\phi_v}\\ll 1$. Thus the first assumptions in Theorems~\\ref{t:level-varies} and~\\ref{t:weight-varies} are satisfied.\n\nFor families in the level aspect the remaining assumptions in Theorem~\\ref{t:level-varies} are also satisfied when the constant $\\epsilon>0$ is chosen small enough. It remains to use the upper bound for $C(\\mathfrak{F}_k)$ from~Hypothesis~\\ref{hyp:cond} to conclude the proof of Proposition~\\ref{prop:F-pl} in that case.\n\nFor families in the weight aspect the remaining assumptions in Theorem~\\ref{t:weight-varies} are also satisfied. 
We use the upper bound for $C(\\mathfrak{F}_k)$ from~\\eqref{xik-CFk} to conclude the proof of Proposition~\\ref{prop:F-pl} in that case.\n\\end{proof}\n\n\\subsection{Main term}\\label{sec:pf:main}\nWe deduce the following main estimate for $\\widehat \\mathcal{L}_{k,v}$.\n\\begin{proposition}\\label{prop:S1} For all $A>0$ there is $A_8>0$ such that the following holds uniformly in $v\\in S_\\text{gen}$ and for all Schwartz functions $\\Psi$:\n\\begin{equation*}\n\\bigl\\langle\n\\widehat\\mathcal{L}_{k,v},\\Psi\n\\bigr\\rangle\n=\n\\bigl\\langle\n\\widehat\\mathcal{L}_{\\text{pl},v},\\Psi\n\\bigr\\rangle\n(1+o(1))\n+O(q_v^{A_8} C(\\mathfrak{F}_k)^{-C_5}\n\\pnorm{\\Psi}_{\\infty})\n+O(q_v^{-A}\\pnorm{\\Psi}_\\infty).\n\\end{equation*}\n\\end{proposition}\n\n\n\\begin{proof}\nLet $\\kappa$ be a large enough integer. We apply the bounds towards Ramanujan in the form~\\eqref{gn-beta} to those terms in~\\eqref{CmL-beta} with $\\nu > \\kappa$. The contribution of those terms to $\\bigl\\langle\n\\widehat\\mathcal{L}_{k,v},\\Psi\n\\bigr\\rangle$ is uniformly bounded by\n\\begin{equation*}\n\\ll q_v^{\\kappa(\\theta-\\frac{1}{2})}\\pnorm{\\Psi}_{\\infty}.\n\\end{equation*}\nThus $A:=\\kappa(\\frac{1}{2}-\\theta)$ may be chosen as large as we want, since $\\theta < \\frac{1}{2}$ is fixed and $\\kappa$ is arbitrarily large.\n\nFor those terms in~\\eqref{CmL-beta} with $\\nu \\le \\kappa$ we apply~\\eqref{main-F-pl}. Their contribution to $\\bigl\\langle\n\\widehat\\mathcal{L}_{k,v},\\Psi\n\\bigr\\rangle$ is equal to\n\\begin{equation*}\n-\\log q_v \\sum_{1\\le \\nu\\le \\kappa}\n\\beta^{(\\nu)}_{\\text{pl},v} q_v^{-\\nu\/2}\n\\Psi(\\frac{\\nu}{2\\pi}\\log q_v) + O(q_v^{A_7+B_9\\kappa}\\pnorm{\\Psi}_{\\infty} C(\\mathfrak{F}_k)^{-C_5}).\n\\end{equation*}\n\nThe next step is to complete the $\\nu$-sum. 
Applying~\\eqref{gn-beta} we see that the terms $\\nu >\\kappa$ yield another remainder term of the form $q_v^{-A}\\pnorm{\\Psi}_{\\infty}$ with $A$ arbitrarily large (again depending on $\\kappa$).\n\\end{proof}\n\n\\subsection{Handling remainder terms}\\label{sec:pf:error} In this subsection we handle the various remainder terms and show that they do not contribute to $D(\\mathfrak{F}_k,\\Phi)$ in the limit as $k\\to \\infty$.\n\nThe estimates above are applied to the compactly supported function\n\\begin{equation}\\label{Psi-Phi}\n\\Psi(y):= \\widehat \\Phi(\\frac{2\\pi y}{\\log C(\\mathfrak{F}_k)}),\\quad y\\in \\mathbb{R}.\n\\end{equation}\nBoth $\\pnorm{\\Psi}_{\\infty}$ and $\\bigl\\lVert \\widehat \\Psi \\bigr \\rVert_1$ only depend on $\\Phi$, which has been fixed. As a consequence these norms are uniformly bounded.\n\nAbove the archimedean places $v\\in S_\\infty$ we encountered in~\\S\\ref{sec:pf:arch} the remainder term $O(\\bigl\\lVert \\widehat \\Psi \\bigr \\rVert_1)$. Because of the overall multiplicative factor $1\/\\log C(\\mathfrak{F}_k)$ in~\\eqref{def:D1v}, this remainder is negligible as $k\\to \\infty$.\n\nAbove the non-archimedean places with $v\\mid \\mathfrak{n}_k$ or $v\\in S_0$ we use the general bounds from~\\S\\ref{sec:pf:gn}. We obtain\n\\begin{equation}\\label{remainS0}\n\\sum_{v\\in S_0 \\text{ or } v\\mid \\mathfrak{n}_k}\n\\Bigl\\langle\n\\widehat \\mathcal{L}_{k,v},\\Psi\n\\Bigr\\rangle\n\\ll\n\\sum_{v\\in S_0 \\text{ or } v\\mid \\mathfrak{n}_k}\nq_v^{\\theta-\\frac{1}{2}} \\log q_v \\pnorm{\\Psi}_\\infty\n\\ll\n1+\n\\#\\set{v\\mid \\mathfrak{n}_k}.\n\\end{equation}\nIn the last inequality we used the fact that $S_0$ is fixed and that $\\theta < \\frac{1}{2}$. The overall multiplicative factor $1\/\\log C(\\mathfrak{F}_k)$ in~\\eqref{def:D1v} shows that these terms are negligible as $k\\to \\infty$. 
Indeed it is easy to verify that\n\\begin{equation}\\label{remainlevel}\n\\#\\set{v\\mid \\mathfrak{n}_k} = o(\\log \\mathbb{N}(\\mathfrak{n}_k)),\\qtext{as $k\\to \\infty$,}\n\\end{equation}\nand we conclude using Hypothesis~\\ref{hyp:cond} that this is $o(\\log C(\\mathfrak{F}_k))$.\n\nWe partition the set of generic non-archimedean places $S_\\text{gen}$ into two sets $S_\\text{main}$ and $S_\\text{cut}$, where\n\\begin{equation}\\label{def:Scut}\nS_\\text{cut}= \\set{v\\in S_\\text{gen}:\\ q_v>C(\\mathfrak{F}_k)^\\nu}.\n\\end{equation}\nSince the support of $\\widehat \\Phi$ is included in $(-\\delta,\\delta)$ we know that $\\widehat \\Phi(\\frac{\\nu \\log q_v}{\\log C(\\mathfrak{F}_k)})$ vanishes for all $v\\in S_\\text{cut}$.\n\nAbove the generic places $v\\in S_\\text{main}$ we use the estimate in Proposition~\\ref{prop:S1}. The second remainder term yields\n\\begin{equation}\\label{secondremain}\n\\sum_{v\\in S_\\text{main}} q_v^{-A} \\pnorm{\\Psi}_\\infty = O(1).\n\\end{equation}\nThis is again negligible in view of the overall factor $1\/\\log C(\\mathfrak{F}_k)$ in~\\eqref{def:D1v}. The first remainder term is negligible as well:\n\\begin{equation} \\label{firstremain}\n \\sum_{v\\in S_\\text{main}} q_v^{A_8} \\pnorm{\\Psi}_{\\infty} C(\\mathfrak{F}_k)^{-C_5} \\ll\nC(\\mathfrak{F}_k)^{\\delta(C+1) - c}.\n\\end{equation}\nRecall that $\\delta$ is chosen small enough such that $\\delta(A_8+1)8$~eV that the $MK'$ group prevails and the cumulant starts growing monotonically to its asymptotic limit.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.5\\textwidth, trim= 20mm 20mm 20mm 20mm, clip]{cumulant_sm}\n\\caption{(Color online) Cumulant spectral weight (solid curve) of the $\\lambda=5$ exciton and the corresponding GW-BSE spectrum (dashed curve) at $\\mathbf{q}=8\\mathbf{q}_0$. 
A red vertical line marks the energy of the lowest IP-transition.}\n\\label{fig:cumulant_sm}\n\\end{figure}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nPlease follow the steps outlined below when submitting your manuscript to the IEEE Computer Society Press.\nThis style guide now has several important modifications (for example, you are no longer warned against the use of sticky tape to attach your artwork to the paper), so all authors should read this new version.\n\n\\subsection{Language}\n\nAll manuscripts must be in English.\n\n\\subsection{Dual submission}\n\nPlease refer to the author guidelines on the CVPR\\ 2022\\ web page for a\ndiscussion of the policy on dual submissions.\n\n\\subsection{Paper length}\nPapers, excluding the references section, must be no longer than eight pages in length.\nThe references section will not be included in the page count, and there is no limit on the length of the references section.\nFor example, a paper of eight pages with two pages of references would have a total length of 10 pages.\n{\\bf There will be no extra page charges for CVPR\\ 2022.}\n\nOverlength papers will simply not be reviewed.\nThis includes papers where the margins and formatting are deemed to have been significantly altered from those laid down by this style guide.\nNote that this \\LaTeX\\ guide already sets figure captions and references in a smaller font.\nThe reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts.\nThe reviewing process cannot determine the suitability of the paper for presentation in eight pages if it is reviewed in eleven.\n\n\\subsection{The ruler}\nThe \\LaTeX\\ style defines a printed ruler which should be present in the version submitted for review.\nThe ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution.\nIf you are preparing a document 
using a non-\\LaTeX\\ document preparation system, please arrange for an equivalent ruler to appear on the final output pages.\nThe presence or absence of the ruler should not change the appearance of any other content on the page.\nThe camera-ready copy should not contain a ruler.\n(\\LaTeX\\ users may use options of cvpr.sty to switch between different versions.)\n\nReviewers:\nnote that the ruler measurements do not align well with lines in the paper --- this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly.\nJust use fractional references (\\eg, this line is $087.5$), although in most cases one would expect that the approximate location will be adequate.\n\n\n\\subsection{Paper ID}\nMake sure that the Paper ID from the submission system is visible in the version submitted for review (replacing the ``*****'' you see in this document).\nIf you are using the \\LaTeX\\ template, \\textbf{make sure to update paper ID in the appropriate place in the tex file}.\n\n\n\\subsection{Mathematics}\n\nPlease number all of your sections and displayed equations as in these examples:\n\\begin{equation}\n E = m\\cdot c^2\n \\label{eq:important}\n\\end{equation}\nand\n\\begin{equation}\n v = a\\cdot t.\n \\label{eq:also-important}\n\\end{equation}\nIt is important for readers to be able to refer to any particular equation.\nJust because you did not refer to it in the text does not mean some future reader might not need to refer to it.\nIt is cumbersome to have to use circumlocutions like ``the equation second from the top of page 3 column 1''.\n(Note that the ruler will not be present in the final copy, so is not an alternative to equation numbers).\nAll authors will benefit from reading Mermin's description of how to write mathematics:\n\\url{http:\/\/www.pamitc.org\/documents\/mermin.pdf}.\n\n\\subsection{Blind review}\n\nMany authors misunderstand the concept of anonymizing for blind review.\nBlind review 
does not mean that one must remove citations to one's own work---in fact it is often impossible to review a paper unless the previous citations are known and available.\n\nBlind review means that you do not use the words ``my'' or ``our'' when citing previous work.\nThat is all.\n(But see below for tech reports.)\n\nSaying ``this builds on the work of Lucy Smith [1]'' does not say that you are Lucy Smith;\nit says that you are building on her work.\nIf you are Smith and Jones, do not say ``as we show in [7]'', say ``as Smith and Jones show in [7]'' and at the end of the paper, include reference 7 as you would any other cited work.\n\nAn example of a bad paper just asking to be rejected:\n\\begin{quote}\n\\begin{center}\n An analysis of the frobnicatable foo filter.\n\\end{center}\n\n In this paper we present a performance analysis of our previous paper [1], and show it to be inferior to all previously known methods.\n Why the previous paper was accepted without this analysis is beyond me.\n\n [1] Removed for blind review\n\\end{quote}\n\n\nAn example of an acceptable paper:\n\\begin{quote}\n\\begin{center}\n An analysis of the frobnicatable foo filter.\n\\end{center}\n\n In this paper we present a performance analysis of the paper of Smith \\etal [1], and show it to be inferior to all previously known methods.\n Why the previous paper was accepted without this analysis is beyond me.\n\n [1] Smith, L and Jones, C. ``The frobnicatable foo filter, a fundamental contribution to human knowledge''. Nature 381(12), 1-213.\n\\end{quote}\n\nIf you are making a submission to another conference at the same time, which covers similar or overlapping material, you may need to refer to that submission in order to explain the differences, just as you would if you had previously published related work.\nIn such cases, include the anonymized parallel submission~\\cite{Authors14} as supplemental material and cite it as\n\\begin{quote}\n[1] Authors. 
``The frobnicatable foo filter'', F\\&G 2014 Submission ID 324, Supplied as supplemental material {\\tt fg324.pdf}.\n\\end{quote}\n\nFinally, you may feel you need to tell the reader that more details can be found elsewhere, and refer them to a technical report.\nFor conference submissions, the paper must stand on its own, and not {\\em require} the reviewer to go to a tech report for further details.\nThus, you may say in the body of the paper ``further details may be found in~\\cite{Authors14b}''.\nThen submit the tech report as supplemental material.\nAgain, you may not assume the reviewers will read this material.\n\nSometimes your paper is about a problem which you tested using a tool that is widely known to be restricted to a single institution.\nFor example, let's say it's 1969, you have solved a key problem on the Apollo lander, and you believe that the CVPR70 audience would like to hear about your\nsolution.\nThe work is a development of your celebrated 1968 paper entitled ``Zero-g frobnication: How being the only people in the world with access to the Apollo lander source code makes us a wow at parties'', by Zeus \\etal.\n\nYou can handle this paper like any other.\nDo not write ``We show how to improve our previous work [Anonymous, 1968].\nThis time we tested the algorithm on a lunar lander [name of lander removed for blind review]''.\nThat would be silly, and would immediately identify the authors.\nInstead write the following:\n\\begin{quotation}\n\\noindent\n We describe a system for zero-g frobnication.\n This system is new because it handles the following cases:\n A, B. Previous systems [Zeus et al. 
1968] did not handle case B properly.\n Ours handles it by including a foo term in the bar integral.\n\n ...\n\n The proposed system was integrated with the Apollo lunar lander, and went all the way to the moon, don't you know.\n It displayed the following behaviours, which show how well we solved cases A and B: ...\n\\end{quotation}\nAs you can see, the above text follows standard scientific convention, reads better than the first version, and does not explicitly name you as the authors.\nA reviewer might think it likely that the new paper was written by Zeus \\etal, but cannot make any decision based on that guess.\nHe or she would have to be sure that no other authors could have been contracted to solve problem B.\n\\medskip\n\n\\noindent\nFAQ\\medskip\\\\\n{\\bf Q:} Are acknowledgements OK?\\\\\n{\\bf A:} No. Leave them for the final copy.\\medskip\\\\\n{\\bf Q:} How do I cite my results reported in open challenges?\n{\\bf A:} To conform with the double-blind review policy, you can report results of other challenge participants together with your results in your paper.\nFor your results, however, you should not identify yourself and should not mention your participation in the challenge.\nInstead present your results referring to the method proposed in your paper and draw conclusions based on the experimental comparison to other results.\\medskip\\\\\n\n\\begin{figure}[t]\n \\centering\n \\fbox{\\rule{0pt}{2in} \\rule{0.9\\linewidth}{0pt}}\n \n\n \\caption{Example of caption.\n It is set in Roman so that mathematics (always set in Roman: $B \\sin A = A \\sin B$) may be included without an ugly clash.}\n \\label{fig:onecol}\n\\end{figure}\n\n\\subsection{Miscellaneous}\n\n\\noindent\nCompare the following:\\\\\n\\begin{tabular}{ll}\n \\verb'$conf_a$' & $conf_a$ \\\\\n \\verb'$\\mathit{conf}_a$' & $\\mathit{conf}_a$\n\\end{tabular}\\\\\nSee The \\TeX book, p165.\n\nThe space after \\eg, meaning ``for example'', should not be a sentence-ending space.\nSo \\eg is 
correct, {\\em e.g.} is not.\nThe provided \\verb'\\eg' macro takes care of this.\n\nWhen citing a multi-author paper, you may save space by using ``et alia'', shortened to ``\\etal'' (not ``{\\em et.\\ al.}'' as ``{\\em et}'' is a complete word).\nIf you use the \\verb'\\etal' macro provided, then you need not worry about double periods when used at the end of a sentence as in Alpher \\etal.\nHowever, use it only when there are three or more authors.\nThus, the following is correct:\n ``Frobnication has been trendy lately.\n It was introduced by Alpher~\\cite{Alpher02}, and subsequently developed by\n Alpher and Fotheringham-Smythe~\\cite{Alpher03}, and Alpher \\etal~\\cite{Alpher04}.''\n\nThis is incorrect: ``... subsequently developed by Alpher \\etal~\\cite{Alpher03} ...'' because reference~\\cite{Alpher03} has just two authors.\n\n\n\n\n\\begin{figure*}\n \\centering\n \\begin{subfigure}{0.68\\linewidth}\n \\fbox{\\rule{0pt}{2in} \\rule{.9\\linewidth}{0pt}}\n \\caption{An example of a subfigure.}\n \\label{fig:short-a}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{0.28\\linewidth}\n \\fbox{\\rule{0pt}{2in} \\rule{.9\\linewidth}{0pt}}\n \\caption{Another example of a subfigure.}\n \\label{fig:short-b}\n \\end{subfigure}\n \\caption{Example of a short caption, which should be centered.}\n \\label{fig:short}\n\\end{figure*}\n\n\\section{Formatting your paper}\n\\label{sec:formatting}\n\nAll text must be in a two-column format.\nThe total allowable size of the text area is $6\\frac78$ inches (17.46 cm) wide by $8\\frac78$ inches (22.54 cm) high.\nColumns are to be $3\\frac14$ inches (8.25 cm) wide, with a $\\frac{5}{16}$ inch (0.8 cm) space between them.\nThe main title (on the first page) should begin 1 inch (2.54 cm) from the top edge of the page.\nThe second and following pages should begin 1 inch (2.54 cm) from the top edge.\nOn all pages, the bottom margin should be $1\\frac{1}{8}$ inches (2.86 cm) from the bottom edge of the page for $8.5 \\times 
11$-inch paper;\nfor A4 paper, approximately $1\\frac{5}{8}$ inches (4.13 cm) from the bottom edge of the\npage.\n\n\\subsection{Margins and page numbering}\n\nAll printed material, including text, illustrations, and charts, must be kept\nwithin a print area $6\\frac{7}{8}$ inches (17.46 cm) wide by $8\\frac{7}{8}$ inches (22.54 cm)\nhigh.\nPage numbers should be in the footer, centered and $\\frac{3}{4}$ inches from the bottom of the page.\nThe review version should have page numbers, yet the final version submitted as camera ready should not show any page numbers.\nThe \\LaTeX\\ template takes care of this when used properly.\n\n\n\n\\subsection{Type style and fonts}\n\nWherever Times is specified, Times Roman may also be used.\nIf neither is available on your word processor, please use the font closest in\nappearance to Times to which you have access.\n\nMAIN TITLE.\nCenter the title $1\\frac{3}{8}$ inches (3.49 cm) from the top edge of the first page.\nThe title should be in Times 14-point, boldface type.\nCapitalize the first letter of nouns, pronouns, verbs, adjectives, and adverbs;\ndo not capitalize articles, coordinate conjunctions, or prepositions (unless the title begins with such a word).\nLeave two blank lines after the title.\n\nAUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title\nand printed in Times 12-point, non-boldface type.\nThis information is to be followed by two blank lines.\n\nThe ABSTRACT and MAIN TEXT are to be in a two-column format.\n\nMAIN TEXT.\nType main text in 10-point Times, single-spaced.\nDo NOT use double-spacing.\nAll paragraphs should be indented 1 pica (approx.~$\\frac{1}{6}$ inch or 0.422 cm).\nMake sure your text is fully justified---that is, flush left and flush right.\nPlease do not place any additional blank lines between paragraphs.\n\nFigure and table captions should be 9-point Roman type as in \\cref{fig:onecol,fig:short}.\nShort captions should be centred.\n\n\\noindent Callouts should be 9-point 
Helvetica, non-boldface type.\nInitially capitalize only the first word of section titles and first-, second-, and third-order headings.\n\nFIRST-ORDER HEADINGS.\n(For example, {\\large \\bf 1. Introduction}) should be Times 12-point boldface, initially capitalized, flush left, with one blank line before, and one blank line after.\n\nSECOND-ORDER HEADINGS.\n(For example, { \\bf 1.1. Database elements}) should be Times 11-point boldface, initially capitalized, flush left, with one blank line before, and one after.\nIf you require a third-order heading (we discourage it), use 10-point Times, boldface, initially capitalized, flush left, preceded by one blank line, followed by a period and your text on the same line.\n\n\\subsection{Footnotes}\n\nPlease use footnotes\\footnote{This is what a footnote looks like.\nIt often distracts the reader from the main flow of the argument.} sparingly.\nIndeed, try to avoid footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence).\nIf you wish to use a footnote, place it at the bottom of the column on the page on which it is referenced.\nUse Times 8-point type, single-spaced.\n\n\n\\subsection{Cross-references}\n\nFor the benefit of author(s) and readers, please use the\n{\\small\\begin{verbatim}\n \\cref{...}\n\\end{verbatim}} command for cross-referencing to figures, tables, equations, or sections.\nThis will automatically insert the appropriate label alongside the cross-reference as in this example:\n\\begin{quotation}\n To see how our method outperforms previous work, please see \\cref{fig:onecol} and \\cref{tab:example}.\n It is also possible to refer to multiple targets as once, \\eg~to \\cref{fig:onecol,fig:short-a}.\n You may also return to \\cref{sec:formatting} or look at \\cref{eq:also-important}.\n\\end{quotation}\nIf you do not wish to abbreviate the label, for example at the beginning of the sentence, you can use 
the\n{\\small\\begin{verbatim}\n \\Cref{...}\n\\end{verbatim}}\ncommand. Here is an example:\n\\begin{quotation}\n \\Cref{fig:onecol} is also quite important.\n\\end{quotation}\n\n\\subsection{References}\n\nList and number all bibliographical references in 9-point Times, single-spaced, at the end of your paper.\nWhen referenced in the text, enclose the citation number in square brackets, for\nexample~\\cite{Authors14}.\nWhere appropriate, include page numbers and the name(s) of editors of referenced books.\nWhen you cite multiple papers at once, please make sure that you cite them in numerical order like this \\cite{Alpher02,Alpher03,Alpher05,Authors14b,Authors14}.\nIf you use the template as advised, this will be taken care of automatically.\n\n\\begin{table}\n \\centering\n \\begin{tabular}{@{}lc@{}}\n \\toprule\n Method & Frobnability \\\\\n \\midrule\n Theirs & Frumpy \\\\\n Yours & Frobbly \\\\\n Ours & Makes one's heart Frob\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Results. 
Ours is better.}\n \\label{tab:example}\n\\end{table}\n\n\\subsection{Illustrations, graphs, and photographs}\n\nAll graphics should be centered.\nIn \\LaTeX, avoid using the \\texttt{center} environment for this purpose, as this adds potentially unwanted whitespace.\nInstead use\n{\\small\\begin{verbatim}\n \\centering\n\\end{verbatim}}\nat the beginning of your figure.\nPlease ensure that any point you wish to make is resolvable in a printed copy of the paper.\nResize fonts in figures to match the font in the body text, and choose line widths that render effectively in print.\nReaders (and reviewers), even of an electronic copy, may choose to print your paper in order to read it.\nYou cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic.\n\nWhen placing figures in \\LaTeX, it's almost always best to use \\verb+\\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below\n{\\small\\begin{verbatim}\n \\usepackage{graphicx} ...\n \\includegraphics[width=0.8\\linewidth]\n {myfile.pdf}\n\\end{verbatim}\n}\n\n\n\\subsection{Color}\n\nPlease refer to the author guidelines on the CVPR\\ 2022\\ web page for a discussion of the use of color in your document.\n\nIf you use color in your plots, please keep in mind that a significant subset of reviewers and readers may have a color vision deficiency; red-green blindness is the most frequent kind.\nHence avoid relying only on color as the discriminative feature in plots (such as red \\vs green lines), but add a second discriminative feature to ease disambiguation.\n\n\\section{Final copy}\n\nYou must include your signed IEEE copyright release form when you submit your finished paper.\nWe MUST have this form before your paper can be published in the proceedings.\n\nPlease direct any questions to the production editor in charge of these proceedings at the IEEE Computer Society 
Press:\n\\url{https:\/\/www.computer.org\/about\/contact}.\n\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n\n\\section{Introduction}\n\nAfter receiving paper reviews, authors may optionally submit a rebuttal to address the reviewers' comments, which will be limited to a {\\bf one page} PDF file.\nPlease follow the steps and style guidelines outlined below for submitting your author response.\n\nThe author rebuttal is optional and, following similar guidelines to previous CVPR conferences, is meant to provide you with an opportunity to rebut factual errors or to supply additional information requested by the reviewers.\nIt is NOT intended to add new contributions (theorems, algorithms, experiments) that were absent in the original submission and NOT specifically requested by the reviewers.\nYou may optionally add a figure, graph, or proof to your rebuttal to better illustrate your answer to the reviewers' comments.\n\nPer a passed 2018 PAMI-TC motion, reviewers should refrain from requesting significant additional experiments for the rebuttal or penalize for lack of additional experiments.\nAuthors should refrain from including new experimental results in the rebuttal, especially when not specifically requested to do so by the reviewers.\nAuthors may include figures with illustrations or comparison tables of results reported in the submission\/supplemental material or in other papers.\n\nJust like the original submission, the rebuttal must maintain anonymity and cannot include external links that reveal the author identity or circumvent the length restriction.\nThe rebuttal must comply with this template (the use of sections is not required, though it is recommended to structure the rebuttal for ease of reading).\n\n\n\\subsection{Response length}\nAuthor responses must be no longer than 1 page in length including any references and figures.\nOverlength responses will simply not be reviewed.\nThis includes responses where the margins and formatting are deemed to 
have been significantly altered from those laid down by this style guide.\nNote that this \\LaTeX\\ guide already sets figure captions and references in a smaller font.\n\n\\section{Formatting your Response}\n\n{\\bf Make sure to update the paper title and paper ID in the appropriate place in the tex file.}\n\nAll text must be in a two-column format.\nThe total allowable size of the text area is $6\\frac78$ inches (17.46 cm) wide by $8\\frac78$ inches (22.54 cm) high.\nColumns are to be $3\\frac14$ inches (8.25 cm) wide, with a $\\frac{5}{16}$ inch (0.8 cm) space between them.\nThe top margin should begin 1 inch (2.54 cm) from the top edge of the page.\nThe bottom margin should be $1\\frac{1}{8}$ inches (2.86 cm) from the bottom edge of the page for $8.5 \\times 11$-inch paper;\nfor A4 paper, approximately $1\\frac{5}{8}$ inches (4.13 cm) from the bottom edge of the page.\n\nPlease number any displayed equations.\nIt is important for readers to be able to refer to any particular equation.\n\nWherever Times is specified, Times Roman may also be used.\nMain text should be in 10-point Times, single-spaced.\nSection headings should be in 10 or 12 point Times.\nAll paragraphs should be indented 1 pica (approx.~$\\frac{1}{6}$ inch or 0.422 cm).\nFigure and table captions should be 9-point Roman type as in \\cref{fig:onecol}.\n\n\nList and number all bibliographical references in 9-point Times, single-spaced,\nat the end of your response.\nWhen referenced in the text, enclose the citation number in square brackets, for example~\\cite{Alpher05}.\nWhere appropriate, include the name(s) of editors of referenced books.\n\n\\begin{figure}[t]\n \\centering\n \\fbox{\\rule{0pt}{0.5in} \\rule{0.9\\linewidth}{0pt}}\n \n \\caption{Example of caption. 
It is set in Roman so that mathematics\n (always set in Roman: $B \\sin A = A \\sin B$) may be included without an\n ugly clash.}\n \\label{fig:onecol}\n\\end{figure}\n\nTo avoid ambiguities, it is best if the numbering for equations, figures, tables, and references in the author response does not overlap with that in the main paper (the reviewer may wonder if you talk about \\cref{fig:onecol} in the author response or in the paper).\nSee \\LaTeX\\ template for a workaround.\n\n\\subsection{Illustrations, graphs, and photographs}\n\nAll graphics should be centered.\nPlease ensure that any point you wish to make is resolvable in a printed copy of the response.\nResize fonts in figures to match the font in the body text, and choose line widths which render effectively in print.\nReaders (and reviewers), even of an electronic copy, may choose to print your response in order to read it.\nYou cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic.\n\nWhen placing figures in \\LaTeX, it is almost always best to use \\verb+\\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below\n{\\small\\begin{verbatim}\n \\usepackage{graphicx} ...\n \\includegraphics[width=0.8\\linewidth]\n {myfile.pdf}\n\\end{verbatim}\n}\n\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n\n\n\n\n\n\n\\section{Conclusion}\nIn this paper, we propose a {S}ymmetric Network with {S}patial Relationship {M}odeling method for NL-based vehicle retrieval.\nFirstly, we design a symmetric network to learn visual and linguistic representations.\nThen a spatial relationship modeling method is proposed to make better use of location information.\nThe quantitative experiments confirm the effectiveness of our method. 
It achieves a 43.92\\% MRR on the test set of the natural language-based vehicle retrieval track of the 6th AI City Challenge, ranking 1st among all valid submissions on the public leaderboard.\n\\\\\n\\hspace*{\\fill} \n\\\\\n\\textbf{Broader Impact.} \nOur research can promote the practical application of vehicle retrieval technology, but the data used in the research may cause privacy violations. Therefore, we only use data from the public dataset and mask the license plates to avoid privacy leaks and other security issues.\n\\section{Experiments}\n\n\\subsection{Datasets and Evaluation Settings.}\n\n\\noindent\\textbf{Datasets.} \nThe CityFlow-NL \\cite{Feng21CityFlowNL} dataset consists of 3028 vehicle tracks of 666 vehicles collected from 40 cameras, of which 2155 vehicle tracks were used for training.\nEach track was annotated with three natural language descriptions.\nIn addition, multiple natural language descriptions of the vehicle from other views are collected for each track.\nThe remaining data were used to evaluate the proposed method.\n\\\\\n\\hspace*{\\fill} \n\\\\\n\\textbf{Evaluation.} The vehicle retrieval by NL descriptions task \ngenerally utilizes the Mean Reciprocal Rank (MRR) as the standard metric, which is also used in \\cite{voorhees1999trec}.\nIt is formulated as:\n\\begin{equation}\nMRR=\\frac{1}{|Q|} \\sum _{i=1}^{|Q|}\\frac {1}{rank_{i}},\n\\end{equation}\nwhere $|Q|$ is the number of text description queries. $rank_{i}$ refers to the rank position of the correct track for the $i_{th}$ text description. 
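As a rough illustration of this metric (the function and variable names below are ours, not from the challenge evaluation kit), MRR can be computed directly from the 1-based rank at which the correct track was retrieved for each query:

```python
def mean_reciprocal_rank(ranks):
    """MRR over a list of 1-based ranks of the correct track, one per query."""
    assert all(r >= 1 for r in ranks)
    return sum(1.0 / r for r in ranks) / len(ranks)

# Three queries whose correct tracks were retrieved at ranks 1, 2 and 4:
print(mean_reciprocal_rank([1, 2, 4]))  # (1 + 0.5 + 0.25) / 3 = 0.5833...
```

A perfect system (every correct track at rank 1) scores exactly 1.0, which is why leaderboard MRR values are directly comparable across submissions.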
\n\n\\subsection{Implementation details}\n\nBoth the input local and global images are resized to 228 $\\times$ 228.\nWe do not adopt any image augmentation methods, such as random horizontal flips and random crops, which keeps motion information unchanged.\nWe set the batch size to 64 for each input, \\textit{i.e.,} vehicle local images, vehicle emphasis texts, global motion images, motion emphasis texts and texts from other views.\nThe total batch size is 320.\nThe two text encoders are frozen in the training process. \nOur model is trained for 400 epochs employing the AdamW \\cite{loshchilov2017adamw} optimizer with a weight decay of 1e-2.\nThe learning rate starts from 0.01 with a warm-up strategy for the first 40 epochs. \nA step scheduler is adopted to decay the learning rate every 80 epochs. \nThe $\\gamma$ and $m$ in the circle loss are set to 48 and 0.35, respectively.\nThe $m$ in the triplet loss is set to 0.\nWe train the model with different configurations and ensemble them in the final similarity calculation.\n\n\\subsection{Performance on CityFlow-NL}\n\n\\input{tab\/ablation}\n\n\\input{tab\/rank}\n\nWe conduct ablation studies with different configurations of our method.\nThe results are illustrated in Table \\ref{tab:ablation}.\nThe ``Baseline'' denotes the symmetric network with subject augmentation and symmetric InfoNCE loss.\nThe ``other view'' refers to the hard sample mining approach.\nIt can be observed that each part learns different characteristics and brings a slight performance boost.\nIn addition, the model ensemble integrates different information and achieves a great improvement.\n\nWe present the top 7 valid results on the public leaderboard,\nas shown in Table \\ref{tab:rank}.\nOur team (MegVideo) achieves an MRR score of 0.4392, ranking 1st among all valid submissions on the AI City Challenge 2022 Track 2.\n\\section{Introduction}\nNatural Language (NL) based vehicle retrieval aims to find specific vehicles 
given text descriptions, which is a vital part of intelligent transportation and urban surveillance systems.\nExisting vehicle retrieval systems are generally based on image-to-image matching, \\textit{i.e.,} vehicle re-identification.\nIt requires a query image to retrieve the same vehicle from the image gallery.\nHowever, in practical applications, the query image may not be available and we may only know a rough description of the target vehicle.\nThis leads to an urgent need for an NL-based vehicle retrieval system.\n\nCompared with image queries, natural language text descriptions are easier to obtain and modify, resulting in higher flexibility.\nHowever, because text belongs to a different modality from the image, it brings great challenges to retrieval.\nSpecifically, natural language-based vehicle retrieval is a cross-modal matching task that aims to learn a transferable model between vision and language. \nDUN \\cite{sun2021dun} proposes a dual-path network to align vehicle features in video and text embeddings supervised by the circle loss \\cite{CircleLoss}.\nHowever, it ignores motion information in matching, which is conducive to distinguishing similar vehicles with different motion states.\nDifferent from DUN, CLV \\cite{bai2021connecting} utilizes a symmetric InfoNCE loss \\cite{xie2018rethinking} to learn cross-modal representations like CLIP \\cite{radford2021learning}.\nThey generate a global motion image for each track, retaining not only vehicle appearance information but also motion information.\nIn addition, the subject descriptions in the text are enhanced, improving the impact of vehicle appearance information.\nHowever, their motion maps suffer from a loss of vehicle appearance information, which degrades model performance.\n\nInspired by these methods, we propose a \\textbf{S}ymmetric Network with \\textbf{S}patial Relationship \\textbf{M}odeling (SSM) approach to learn the visual and linguistic representations for NL-based vehicle retrieval.\nThe symmetric 
network can learn both the vehicle internal characteristics (\\textit{e.g.} vehicle type, color, and shape) and the vehicle external characteristics (\\textit{e.g.} motion state and surrounding environment) simultaneously.\nMore concretely, we adopt one visual encoder and one text encoder to extract the vehicle appearance feature and the vehicle text embedding, respectively, which are optimized by the symmetric InfoNCE loss and the pair-wise Circle loss.\nSymmetrically, another visual encoder and text encoder are employed to learn the representations of vehicle external characteristics.\nThe visual encoder can be EfficientNet B2 \\cite{xie2018rethinking} and the text encoder can be RoBERTa \\cite{liu2019roberta}.\nBoth internal and external features are fused to learn robust features of the vehicle track.\nWe also propose a hard text sample mining method to distinguish similar texts, where texts from other views are utilized.\nIn addition, by analyzing the ranking visualization, we found that the model can hardly learn the visual features of the surrounding environment and the relationship of multiple vehicles. \nTo tackle this problem, we design a spatial relationship modeling approach to enhance tracklet information, improving the performance of our method.\nWe achieve a 43.92\\% MRR on the test set of\nthe natural language-based vehicle retrieval track of the 6th AI City Challenge, ranking 1st among all valid submissions on the public leaderboard.\n\n\\section{Method}\n\n\\subsection{Overview}\n\nThe pipeline of our method is illustrated in Fig. 
\\ref{fig:framework}.\nThe symmetric network consists of four branches, where the upper two branches are mainly used to acquire local visual and language representations of vehicle appearance, and the lower two branches are employed to learn the global information of the vehicle, including the motion state and the environment in which it is located.\nThe two kinds of representations are fused together to generate the comprehensive feature representations of the vehicle track.\nWe apply the Symmetric InfoNCE Loss \\cite{oord2018representation} and Circle Loss \\cite{sun2020circle} to connect the representations of the two modalities of the text and the image, ensuring that they are projected into a unified representation space.\nTo further emphasize the motion and environment information, we construct triplets to distinguish text descriptions from different views of the same vehicle.\nIn addition, a spatial relationship modeling module is designed to use location information to constrain retrieval results in the retrieval process.\n\n\\subsection{Data Augmentation}\n\n\\subsubsection{Image Augmentation}\n\n\\input{fig\/iou}\n\nSimilar to \\cite{bai2021connecting}, we augment the motion image by pasting cropped vehicle images from the tracking frames to the background image. \nThe background image is generated by computing the mean value of all frames from the same camera, which can be formulated as:\n\\begin{equation}\nB_c = \\frac{1}{N_c}\\sum_{i=1}^{N_c} {F_i},\n\\end{equation}\nwhere $B_c$ is the background image of the $c_{th}$ camera, $N_c$ is the number of frames taken by the $c_{th}$ camera, and $F_i$ is the $i_{th}$ frame. \n\nThe motion image is generated by pasting cropped vehicle images from the same track on the corresponding background image.\nAs shown on the left of Fig. 
\\ref{fig:iou}, consecutive frames that are too close may occlude adjacent frames on the motion image.\nTo avoid this problem, we compute the Intersection Over Union (IOU) of adjacent frames and ignore frames whose IOU with already pasted frames is larger than the threshold.\nThe threshold is set to 0.05 in our method.\n\n\n\n\\subsubsection{Text Augmentation}\n\n\\input{fig\/text_aug}\n\nMost text descriptions contain vehicle appearance information, motion information and location information. \nTo make the model learn specific information better,\nwe adopt subject augmentation, motion augmentation and position augmentation respectively to emphasize different textual information, in which the respective text descriptions are repeated as shown in Fig. \\ref{fig:text}.\n\\\\\n\\hspace*{\\fill} \n\\\\\n\\textbf{Subject augmentation.} To enhance the appearance information, we need to repeat the description of the vehicle appearance. Because the vehicle appearance description usually appears as the first noun phrase of the sentence, we use ``spacy\"\\footnote{\\tiny\\url{https:\/\/spacy.io\/}} to extract all noun phrases and put the first one at the beginning of the sentence.\n\\\\\n\\hspace*{\\fill} \n\\\\\n\\textbf{Motion augmentation.} To enhance the motion information, we extract keywords which describe motion information and then repeat them. \nFor motion information, we assume that there are only three motion types, \\textit{i.e.} turn left, turn right and go straight. \nIf the keyword ``turn left\" or ``turn right\" exists in any text description of a track, we denote the motion type of this track as turning left or right. \nIf neither of these two keywords exists in the text descriptions, we denote the motion type of this track as going straight. \nWe append the motion type keyword ``left\", ``right\" or ``straight\" to the beginning of all text descriptions of the track to emphasize the motion information. 
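A minimal sketch of this keyword rule (our own illustration; the paper does not give the exact implementation, and the inflected forms such as ``turns left'' are our assumption about how real descriptions would need to be matched):

```python
def motion_augment(track_descriptions):
    """Prepend the motion keyword inferred from a track's descriptions."""
    joined = " ".join(d.lower() for d in track_descriptions)
    if "turn left" in joined or "turns left" in joined:
        keyword = "left"
    elif "turn right" in joined or "turns right" in joined:
        keyword = "right"
    else:
        keyword = "straight"  # default motion type when no turn keyword is found
    # The same keyword is prepended to every description of the track.
    return [keyword + " " + d for d in track_descriptions]
```

Note that the keyword is decided once per track from all of its descriptions, so a description that never mentions the turn still receives the correct prefix.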
\n\\\\\n\\hspace*{\\fill} \n\\\\\n\\textbf{Location augmentation.} For location enhancement, we only distinguish whether a location described by a track is an intersection. \nSo we look into the text descriptions of the track and check whether ``intersection\" exists in any of the text descriptions to determine whether the described location is an intersection. \nWe append ``intersection\" to the beginning of all text descriptions of the track if the track is located at an intersection.\n\n\\input{fig\/framework}\n\n\\subsection{Visual and Linguistic Representation Learning}\n\n\\subsubsection{Symmetric Network}\n\nThe task of natural language based vehicle retrieval refers to measuring the semantic similarity between a vehicle video and a language description.\nGenerally speaking, humans pay more attention to unique information when describing vehicle videos, such as vehicle appearance, shape and other attributes (termed internal characteristics), as well as motion state and surrounding environment (termed external characteristics).\nWe construct a dual-stream network to focus on the internal and external characteristics, respectively.\n\\\\\n\\hspace*{\\fill} \n\\\\\n\\textbf{Vehicle internal characteristic learning.}\nAs the upper two branches shown in Fig. 
\\ref{fig:framework},\nto enhance the internal characteristics in vehicle images, we crop the local vehicle images from the frames of the video and feed them into the image encoders.\nThe encoders are EfficientNet B2 \\cite{xie2018rethinking} or ibn-ResNet101-a \\cite{pan2018two} pretrained on ImageNet \\cite{deng2009imagenet}.\nIn addition, we employ a pretrained RoBERTa \\cite{liu2019roberta} as the text encoder to learn the text embedding of vehicles.\nIts input text is augmented by subject emphasis to acquire vehicle embeddings with more appearance and shape information.\nFollowing \\cite{bai2021connecting}, we adopt projection heads to map the visual feature and text embedding into a unified representation space, which is formulated as:\n\\begin{equation}\nf_{i}=g_{i}\\left(h_{i}\\right)=W_{2} \\sigma\\left(B N\\left(W_{1} h_{i}\\right)\\right),\n\\end{equation}\n\\begin{equation}\nf_{t}=g_{t}\\left(h_{t}\\right)=W_{2} \\sigma\\left(L N\\left(W_{1} h_{t}\\right)\\right),\n\\end{equation}\nwhere $h_i$ is the visual feature extracted by the image encoder and $h_t$ is the linguistic embedding extracted by the text encoder.\nBN denotes the Batch Normalization (BN) layer. 
\nLN is a Layer Normalization (LN) layer.\n$\\sigma$ is a ReLU activation layer.\n\\\\\n\\hspace*{\\fill} \n\\\\\n\\textbf{Vehicle external characteristic learning.}\nSimilar to the network structure in internal feature learning, we adopt an image encoder and a text encoder in the lower two branches to learn the global motion feature and embedding, respectively.\nThe input of the global image encoder is the global motion image generated from multiple frames of the video, which is beneficial to learn more action and environmental information.\nThe motion emphasis or location emphasis augmented text description is processed by the global text encoder to extract more vehicle external information.\nThe obtained global motion feature and motion embedding are projected into the same representation space as well.\n\\\\\n\\hspace*{\\fill} \n\\\\\n\\textbf{Vehicle comprehensive characteristic learning.}\nWe concatenate the local and global image features (text embeddings) to fuse information at different granularities.\nThe fused visual and linguistic representations are projected into the same space by the projection heads.\nThen in the inference process, we only utilize the fused track features.\n\n\\subsubsection{Loss functions}\n\nIn the unified feature representation space containing the two modalities, we apply the symmetric InfoNCE loss \\cite{xie2018rethinking} like CLIP \\cite{radford2021learning} and the pair-wise Circle loss \\cite{sun2020circle} to maximize the cosine similarity of image and text representations.\n\nGiven $N$ images and $N$ text descriptions, we can acquire $N$ visual features $f^{img}_i$ and $N$ text embeddings $f^{text}_i$, where $f^{img}_i$ and $f^{text}_i$ have the same label and become positive pairs with each other, \\textit{i.e.}, $\\langle f^{img}_i, f^{text}_i\\rangle$. 
\nFeatures with different labels are negative pairs of each other, including same-modal negative pairs $\\langle f^{img}_i, f^{img}_{j \\ne i}\\rangle$ and cross-modal negative pairs $\\langle f^{img}_i, f^{text}_{j \\ne i}\\rangle$.\n\nThe symmetric InfoNCE loss maximizes the cosine similarity of the positive pair and minimizes the similarity of cross-modal negative pairs.\nIt contains image-to-text optimization and text-to-image optimization for one positive pair.\nThe image-to-text loss is as follows:\n\\begin{equation}\n\\mathcal{L}_{i2t}=\\frac{1}{N} \\sum_{i=1}^{N}-\\log \\frac{\\exp(\\cos ( f^{img}_i,f^{text}_i) \/ \\tau)}{\\sum_{j=1}^{N} \\exp(\\cos ( f^{img}_i,f^{text}_j) \/ \\tau)}.\n\\end{equation}\nIn addition, the text-to-image loss is:\n\\begin{equation}\n\\mathcal{L}_{t2i}=\\frac{1}{N} \\sum_{i=1}^{N}-\\log \\frac{\\exp(\\cos ( f^{text}_i,f^{img}_i) \/ \\tau)}{\\sum_{j=1}^{N} \\exp(\\cos ( f^{text}_i,f^{img}_j) \/ \\tau)},\n\\end{equation}\nwhere $\\tau$ is a learnable temperature parameter and the cosine similarity is calculated by:\n\\begin{equation}\n\\cos (f_i, f_j) = \\frac{f_i \\cdot f_j}{||f_i|| \\cdot ||f_j||}.\n\\end{equation}\nThen, the symmetric InfoNCE loss is as follows:\n\\begin{equation}\n\\mathcal{L}_{INCE} = \\mathcal{L}_{i2t} + \\mathcal{L}_{t2i}.\n\\end{equation}\n\nDifferent from the symmetric InfoNCE loss, the Circle loss minimizes the similarity of all negative pairs.\nDenote the positive pair and negative pairs as $s_p$ and $s_n$, respectively.\nThe Circle loss is defined as follows:\n\\begin{equation}\n\\tiny\n\\mathcal{L}_{{circle}}=\\log \\left[1+\\sum_{j=1}^{L} \\exp \\left(\\gamma \\alpha_{n}^{j}\\left(s_{n}^{j}-\\Delta_{n}\\right)\\right) \\sum_{i=1}^{K} \\exp \\left(-\\gamma \\alpha_{p}^{i}\\left(s_{p}^{i}-\\Delta_{p}\\right)\\right)\\right],\n\\end{equation}\nwhere $K=1$ and $L=2(N-1)$; $\\Delta_{p}$ and $\\Delta_{n}$ are the intra-class and inter-class margins, respectively.\n$\\alpha_{p}$ and $\\alpha_{n}$ are 
calculated as:\n\\begin{equation}\n\\small\n\\begin{cases}\n\\alpha_{p}^{i}=\\left[O_{p}-s_{p}^{i}\\right]_{+} \\\\\n\\alpha_{n}^{j}=\\left[s_{n}^{j}-O_{n}\\right]_{+}\n\\end{cases},\n\\end{equation}\nin which $O_p$ and $O_n$ are the optimums of $s_p$ and $s_n$, respectively.\nThese parameters are set to $O_{p} = 1 + m$, $\\Delta _{p} = 1 - m$, $O_{n} = -m$ and $\\Delta _{n} = m$, respectively.\n\nThe symmetric InfoNCE loss and Circle loss are enforced on all three kinds of representations, which ensures that information at different levels can be learned.\n\n\\subsubsection{Hard Text Sample Mining}\n\n\nAlthough the non-appearance information is enhanced in external feature learning, our model still learns a lot of vehicle internal information and makes mistakes when discriminating difficult samples.\nFor instance, when the same car appears in two different videos, the text descriptions of the two videos will have similar subjects as shown in Fig. \\ref{fig:framework}, which will degrade the retrieval performance.\n\nTo address this problem, we implement hard sample mining by composing triplets with different view descriptions of the same vehicle and the current view motion image.\nMore concretely, given $N$ global motion images, $N$ text descriptions and $N$ text descriptions from other views, we have $N$ triplets $\\langle f^{img}_i, f^{text}_i, f^{text\\_v}_i\\rangle$, where the anchor is $f^{img}_i$, the positive feature is $f^{text}_i$ and the negative feature is $f^{text\\_v}_i$.\nThen we minimize the Triplet loss \\cite{hoffer2015triplet} to push $f^{text\\_v}_i$ away from $f^{img}_i$:\n\\begin{equation}\n\\small\n\\mathcal{L}_{triple}=\\frac{1}{N} \\sum_{i=1}^{N}\\left[ \\cos ( f^{img}_i,f^{text\\_v}_i) - \\cos ( f^{img}_i,f^{text}_i) + m\\right]_{+},\n\\end{equation}\nin which $m$ is the margin.\n\n\\subsection{Spatial Relationship Modeling}\n\\subsubsection{Long-distance Relationship Modeling}\nLong-distance relationship modeling means that we build 
positional relationships between text and images, \\textit{e.g.}, intersection prediction.\nThen the text and images whose positions are both intersections are given greater similarity.\nSpecifically, we distinguish whether a location is an intersection.\nTo increase the similarity of the text and image features if they both describe an intersection, we calculate the location similarity between the text and the image and then add the similarity score to the final similarity matrix.\n\nWe found that the pictures in the dataset are all taken by cameras at fixed locations, so we can determine whether an image is taken at an intersection by checking the location of its corresponding camera. We also found that if a camera is located at an intersection, some vehicles captured by this camera will stop for a while to wait for the traffic light. \nWe can calculate the location of the vehicle in each frame using the bounding box:\n\\begin{equation}\n(x_i, y_i) = (left_i+\\frac{1}{2}width_i, top_i + \\frac{1}{2}height_i),\n\\end{equation}\nwhere $left_i$ and $top_i$ are the top-left corner coordinates of the $i_{th}$ frame's bounding box, and $width_i$ and $height_i$ are the width and height of the $i_{th}$ frame's bounding box.\n\nWe can infer the movement of the vehicle through the coordinates in each frame. If the movement distances of the vehicle in $n$ consecutive frames are all zero, we consider the car to be at an intersection and label the corresponding camera as located at an intersection. 
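The bounding-box center and the stationary check above can be sketched as follows (the list-of-centers input format and the interpretation of $n$ as consecutive zero-movement steps are our assumptions for illustration):

```python
def box_center(left, top, width, height):
    """Center (x, y) of a bounding box, as in the equation above."""
    return (left + width / 2.0, top + height / 2.0)

def stays_still(centers, n):
    """True if the vehicle's center shows zero movement over n consecutive frame steps."""
    zero_moves = 0
    for (x0, y0), (x1, y1) in zip(centers, centers[1:]):
        # Count consecutive frame-to-frame steps with zero movement distance.
        zero_moves = zero_moves + 1 if (x0, y0) == (x1, y1) else 0
        if zero_moves >= n:
            return True
    return False
```

A camera would then be labeled as at an intersection when `stays_still` holds for some vehicle track it captured.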
We can get the visual location vector by:\n\\begin{equation}\n loc_v = \n \\begin{cases}\n (0, 1)^T,& \\quad c_v \\in \\{c | c ~\\text{is at intersection}\\} \\\\\n (1, 0)^T,& \\quad \\text{otherwise}\n \\end{cases},\n\\end{equation}\nwhere $c_v$ is the camera of track $v$ and $\\{c | c ~\\text{is at intersection}\\}$ is the collection of cameras located at intersections.\n\nWe can determine whether a sentence is describing an intersection by checking whether the keyword ``intersection\" exists in the sentence. We can get the text location vector by:\n\\begin{equation}\n loc_s = \n \\begin{cases}\n (0, 1)^T,& \\quad \\text{``intersection\"} \\subseteq s \\\\\n (1, 0)^T,& \\quad \\text{otherwise}\n \\end{cases}.\n\\end{equation}\n\nAfter we get the visual location embedding and the text location embedding, we can calculate their similarity by the dot product. Assuming we have $n$ text queries and $m$ visual tracks, we get a matrix $S_l \\in \\mathbb{R} ^ {n\\times m}$ representing the location similarity between each query and track. \n\n\\subsubsection{Short-distance Relationship Modeling}\nIn the queries, there are lots of sentences describing the relationship of more than one vehicle. \nHowever, our model cannot learn the relationship between multiple vehicles that are very close in the video frame.\nTo utilize this information, we perform relationship augmentation to make the proper text-visual pair more similar, termed short-distance relation modeling.\n\nMore concretely, we use `spacy' to extract all noun phrases in the sentence and keep all phrases describing vehicles. Then we use the vehicle description branch to extract the language embeddings of all the vehicle descriptions. We then use the provided detection file to extract the bounding boxes of all cars in the frame. We randomly select several frames in a track and then extract all detected cars in these frames. 
We use the cropped visual branch to extract the visual embeddings of all cars in these frames. If a text query $q$ describes the relationship between vehicles $v_1$ and $v_2$, we can calculate the similarity between $v_2$ and all detected vehicles in track $t$ and take the maximum value as the relationship similarity between $q$ and $t$. Assuming we have $n$ text queries and $m$ visual tracks, we get a matrix $S_r \\in \\mathbb{R} ^ {n\\times m}$ representing the relationship similarity between each query and track.\n\nThe final similarity matrix $S_{final}$ is formulated as the sum of the feature similarity matrix $S$, the location similarity matrix $S_l$ and the relationship similarity matrix $S_r$:\n\n\\begin{equation}\nS_{final} = S + \\alpha S_l + \\beta S_r,\n\\end{equation}\nwhere $\\alpha$ and $\\beta$ are hyper-parameters. We set $\\alpha=1$ and $\\beta=0.2$ in our experiments.\n\\section{Related Work}\n\n\\subsection{Natural Language Based Video Retrieval}\nNatural language based video retrieval aims to find the corresponding video given a natural language description. There is increasing interest in this area \\cite{li2021x,qi2021semantics,han2021fine,gabeur2022masking}.\nMost of the existing methods are based on representation learning, which tries to make the feature representations of the corresponding text and video similar. These methods usually use a language model such as LSTM \\cite{hochreiter1997long} or BERT \\cite{devlin2018bert} to extract the language feature and a visual backbone such as ResNet \\cite{he2016deep} or C3D \\cite{carreira2017quo} to extract the visual feature.\nZhang et al. \\cite{zhang2018cross} use hierarchical sequence embedding (HSE) to embed sequential data of different modalities into hierarchically semantic spaces with correspondence information. Miech et al. \\cite{miech2018learning} propose a Mixture-of-Embedding-Experts (MEE) model with the ability\nto handle missing input modalities during training. 
Dong et al. \\cite{dong2019dual} propose a dual\ndeep encoding network that encodes videos and queries into\npowerful dense representations of their own. Our method not only utilizes representation learning to learn transferable visual and text features, but also utilizes spatial relationship modeling to capture the surrounding environment and mutual relationships in the vehicle retrieval task.\n\n\n\n\\subsection{Vehicle Re-identification}\nIn the last decade, vehicle ReID has achieved considerable progress, especially for deep learning based methods \\cite{FACT,he2019part,zhuge2020attribute,chen2021vehicle}.\nDifferent from person ReID methods \\cite{sun2018beyond,zheng2022template}, it is still a challenging task due to the problems of similar appearance and viewpoint variations.\nTo deal with the two problems, some approaches \\cite{OrientationVeri,he2019part,Meng2020} focus on subtle details to learn discriminative local features.\nFor instance, OIFE \\cite{OrientationVeri} defines 20 keypoints and raises an orientation invariant feature embedding module to emphasize regions with discriminative information.\nPart-regularized \\cite{he2019part} trains a detector to focus on local regions and acquires local features to distinguish similar vehicles.\nVANet \\cite{Chu2019} adopts a view predictor and a modified triplet loss to generate viewpoint-aware features.\nIn addition, many loss functions are applied to address the above challenges, including representation learning loss functions and metric learning loss functions.\nRepresentation learning losses \\cite{LSoftmax,NormSoftmax,AMSoftmax,CosFace,ArcFace} acquire feature representations by classification.\nUnlike them, metric learning losses \\cite{LeCun2005,triplet2017,CircleLoss} directly optimize the similarity score of image pairs and are termed pair-wise losses, where margins are generally added between positive pairs and negative pairs to increase the distance.\nIn this paper, we learn the unified cross-modal 
representation through the pair-wise losses.\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro} \nThe problem of electron tunneling in systems where the band structure depends on the position, like in semiconductors, began to be treated in the early sixties \\cite{harri}; later it was proposed that this variation is simulated by a position-dependent effective mass in the one-electron Hamiltonian \\cite{bendani}, and in the Hamiltonian describing graded mixed semiconductors \\cite{gora}. From this time on the position-dependent mass Hamiltonians were studied in many articles in a wide range of areas other than electronic properties of semiconductors \\cite{bast}, \\cite{zhu}, \\cite{vonroos}, \\cite{young}, \\cite{geller}, \\cite{sinha}, like, for example, quantum wells and quantum dots \\cite{pharri}, \\cite{kesha}, \\cite{einevoll}, polarons \\cite{zhaoliang}, etc. \n\nIn most of those articles the choice of the position-dependent mass (PDM) Hamiltonians was guided by the characteristic of being self-adjoint, in the sense that the mean values of the physical quantities were consistently calculated in the associated Hilbert space with the usual integration measure. With this spirit, many PDM Hamiltonians were proposed and studied \\cite{gora}, \\cite{bast}, \\cite{zhu}, \\cite{vonroos}, \\cite{einevoll}, \\cite{gorabook}, \\cite{bastfurd}, \\cite{LiKelin}. As a consequence some physically consistent and possibly relevant Hamiltonians have been discarded because they were not self-adjoint \\footnote{See, for example, equation (1) in \\cite{gora}.}. \n\nIn the last decades PDM Hamiltonians have also been theoretically treated in a number of articles. 
The interest was directed to issues like non-self-adjointness \\cite{jiang2005}, solutions of the corresponding Schr\\\"odinger equations \\cite{plastino}, \\cite{jiang2005}, \\cite{quesnebag}, \\cite{ball}, \\cite{costa}, \\cite{regonobre}, \\cite{monteiro2013}, \\cite{dekar}, \\cite{chris}, \\cite{demetrios}, \\cite{nikitin}, ordering ambiguity \\cite{mustafa}, coherent states \\cite{ruby} and applications to some particular systems, like, for example, the Coulomb problem \\cite{quesnetka}. More recently, the issue of the PDM Hamiltonians which were not within the standard self-adjoint class mentioned above was analyzed following a different approach \\cite{ball}, \\cite{costa}, \\cite{regonobre}. An approach to consistently quantize a non-linear system \\cite{tsallis} was recently developed. In this approach it was necessary to introduce an additional independent field which is the analog of the complex conjugate field for standard linear quantum systems. \n\nIn this paper we study a family of linear PDM Hamiltonians and show that the problem of self-adjointness is completely solved under certain conditions. We depart from the approach of two independent fields and define a connection between the two fields through a mapping that depends on the position-dependent mass $m(x)$ and on a function $g(x)$. In order to have appropriately well-defined probability and current densities that satisfy a continuity equation, $g(x)$ must obey a condition that depends on the form of the Hamiltonian. We show that our general non self-adjoint Hamiltonians can be identified with the very general family of Hamiltonians proposed by Harrison \\cite{harri} to calculate wave functions in regions of varying band structure in superconductors, simple metals and semimetals. 
Inspired by the form of the probability density, we then propose an ansatz that takes the solutions $\\Psi(x)$ of the time independent Schr\\\"odinger equations for the original non self-adjoint Hamiltonian into new wave functions $\\Omega(x)$. The wave functions $\\Omega(x)$ are the solutions of the dual Hamiltonians which are self-adjoint with the usual inner product and quantum mechanically equivalent to the original non self-adjoint ones. We also define an inner product for the solutions $\\Psi(x)$ with a generalized measure that is a function of $m(x)$ and $g(x)$.\nWe study three different examples of the proposed family of PDM Hamiltonians. All of them belong to Harrison's family of Hamiltonians \\cite{harri}. For these cases we obtain the respective dual self-adjoint Hamiltonians. In one of them the kinetic part of the dual PDM Hamiltonian belongs to the von Roos general kinetic operator class \\cite{vonroos}, but the same does not happen in the other two cases. Finally, we analyze and analytically solve the three cases taking a physical position-dependent mass and a deformed harmonic oscillator potential, obtaining and comparing their respective energy levels. \n \nThis paper is organized as follows. In section 2 we present a family of Hamiltonians with a real general potential $V(x)$ depending on a function $f(m, m^\\prime)$, with $m(x)$ a position-dependent mass and $m^\\prime (x)$ its derivative, and on a constant parameter $\\alpha$. We obtain the Schr\\\"odinger equations generated by these Hamiltonians departing from a Lagrangian density which depends on two different fields $\\Psi(x,t)$ and $\\Phi(x,t)$ and on their time and spatial derivatives. We define a transformation between these two fields that allows us to work with only one of them, say $\\Psi(x)$, and to have a probability and a current density that satisfy a continuity equation. 
We build the quantum mechanically equivalent dual self-adjoint Hamiltonians on the Hilbert space of the solutions $\\Omega(x)$ of the time independent Schr\\\"odinger equations generated by them. We define the inner products for both $\\Psi(x)$ and $\\Omega(x)$. \nIn section 3 we analyze three different examples of the family of Hamiltonians presented, by choosing particular values for the constant parameter and for the function $f(m, m^\\prime)$, and find the particular function $g(x)$. In all of them we present the particular values of the parameters that identify them with Harrison's Hamiltonians. The three examples chosen are interesting: the first one recovers a model for an abrupt heterojunction between two semiconductors studied in \\cite{zhu}; the other two are typical, in the sense that they introduce scales. In these two cases the kinetic part of the dual Hamiltonian does not reduce to the von Roos general kinetic operator. In section 4 we choose a deformed harmonic oscillator potential and a physically motivated particular form for $m(x)$. We solve the Schr\\\"odinger equations for the three cases, obtaining their corresponding eigenfunctions and energy levels. \nIn section 5 we present our conclusions. \n\n\\section{A family of general position-dependent mass Hamiltonians}\n\\label{binentr}\n In this section we present a family of position-dependent-mass Hamiltonians which depend on a real function $f(m(x), m^\\prime(x))$, with $m(x)$ a general position-dependent mass and $m^\\prime (x)$ its derivative, and on a general real potential $V(x)$, given by\n\\begin{equation}\n\\label{genham}\nH = \\frac{-\\hbar^ 2}{2 m(x)} {\\partial_x^ 2} + \\frac{\\hbar^ 2}{2} \\alpha f(m(x),m^\\prime (x) ) \\partial_x + V(x) \\, ,\n\\end{equation}\nwhere $\\alpha \\in \\mathbb{R}$ is a constant, whose dimension depends on the choice of $f$, and $m(x)$ is an analytic, positive function for any value of $x$. 
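\n\nA quick dimensional check clarifies the role of $\\alpha$: for the two derivative terms of \\eqref{genham} to carry the same dimensions one needs $[\\hbar^2 \\partial_x^2 \/ m] = [\\hbar^2 \\alpha f \\partial_x]$, that is,\n\\begin{equation}\n[\\alpha f(m, m^\\prime)] = M^{-1} L^{-1} \\, .\n\\end{equation}\nThis will hold in all the examples below: for $f = m^\\prime\/m$, which has dimension $L^{-1}$, one needs $[\\alpha] = M^{-1}$, while for $f = 1\/m$ one needs $[\\alpha] = L^{-1}$.\n\n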
These Hamiltonians lead to Schr\\\"odinger equations which are not, as will be seen, self-adjoint in the usual Hilbert space of their eigenfunctions. In what follows we show under which conditions \\eqref{genham} admits a quantum mechanically equivalent self-adjoint formulation.\n\n\n\\subsection{Deriving the Schr\\\"odinger equations from a classical Lagrangian density} \n\\label{Scheq} \nWhen we solve the Schr\\\"odinger equation for a non self-adjoint Hamiltonian with respect to the usual inner product, we must take into account the adjoint equation as well \\cite{regonobre}. Our solution therefore includes, in principle, two different fields, $\\Psi(x,t)$ and $\\Phi(x,t)$, and their conjugates. Note that $\\Phi(x,t)$ is not the complex conjugate of $\\Psi(x,t)$. \nWe thus develop here an approach that starts from a Lagrangian density $\\mathcal{L}$ (as was done in \\cite{monteiro2013}), which depends on these \nfields and on their time and spatial derivatives, that is\n\\begin{align}\n\\nonumber\n&\\mathcal{L} = \\frac{i \\hbar }{2}\\, \\Phi(x,t)\\, \\partial_t \\Psi(x,t) - \n\\frac{\\hbar^2}{4 m(x)} \\partial_x \\Phi(x,t) \\partial_x \\Psi(x,t) \\, + \\\\\n\\nonumber\n& \\frac{\\hbar^2}{4 } \\frac{m^{\\prime}(x)}{m(x)^2} \\Phi(x,t) \\partial_x \\Psi(x,t) - \n \\frac{\\hbar^2 \\alpha}{4} f(m, m^\\prime)\\, \\Phi(x,t) \\,\\partial_x \\Psi(x,t) \\\\\n \\nonumber\n&- \\frac{i \\hbar }{2} \\Phi^ \\star(x,t) \\partial_t \\Psi^ \\star(x,t) + \n\\frac{\\hbar^2}{4 m(x)} \\partial_x \\Phi^\\star(x,t) \\partial_x \\Psi^\\star(x,t) \\, + \\\\\n& \\frac{\\hbar^2}{4 } \\frac{m^{\\prime}(x)}{m(x)^2} \\Phi^\\star(x,t) \\partial_x \\Psi^\\star(x,t) - \n \\frac{\\hbar^2 \\alpha}{4} f(m, m^\\prime)\\, \\Phi^ \\star(x,t) \\,\\partial_x \\Psi^ \\star(x,t)\n \\nonumber \\\\\n &- \\frac{1}{2} V(x) \\, \\Phi^ \\star(x,t) \\, \\Psi^ \\star(x,t) - \\frac{1}{2} V(x) \\, \\Phi(x,t) \\, \\Psi(x,t) \\, , \n \\label{lag}\n \\end{align}\n where $\\star$ denotes the standard complex conjugate.\n \nUsing the usual Euler-Lagrange 
equations for the fields $\\Phi(x,t)$, $\\Psi(x,t)$ and their conjugates, we straightforwardly get the following Schr\\\"odinger equations: \n\\begin{align}\n\\label{1schr}\n i \\hbar \\partial_t \\Psi(x,t) & = - \\frac{\\hbar^ 2}{2m(x)} \\partial_x^ 2 \\Psi(x,t) + \\frac{\\hbar^ 2}{2} \\alpha f(m,m^\\prime) \\partial_x \\Psi(x,t) + V(x) \\Psi(x,t) \\\\\n \\label{2schr}\n - i \\hbar \\partial_t \\Psi^ \\star (x,t) & = - \\frac{\\hbar^ 2}{2m(x)} \\partial_x^ 2 \\Psi^ \\star (x,t) + \\frac{\\hbar^ 2}{2} \\alpha f(m,m^\\prime) \\partial_x \\Psi^ \\star(x,t) +\n V(x) \\Psi^ \\star (x,t) \\, .\n \\end{align}\n \nFor the field $\\Phi(x,t)$ and its complex conjugate we obtain \n\\begin{align}\n \\label{3schr}\n - i \\hbar \\partial_t \\Phi(x,t) & = - \\frac{\\hbar^ 2}{2} \\partial_x^ 2 \\left(\\frac{\\Phi(x,t)}{m(x)} \\right) - \\frac{\\hbar^ 2 \\alpha}{2} \\partial_x [f(m,m^\\prime) \\Phi(x,t)] + V(x) \\Phi(x,t) \\\\\n \\label{4schr}\n i \\hbar \\partial_t \\Phi^ \\star (x,t) & = - \\frac{\\hbar^ 2}{2} \\partial_x^ 2 \\left(\\frac{\\Phi^ \\star (x,t)}{m(x)} \\right) - \\frac{\\hbar^ 2 \\alpha}{2} \\partial_x [f(m,m^\\prime) \\Phi^ \\star (x,t)] + V(x) \\Phi^ \\star (x,t) \\, .\n \\end{align}\n Note that when the mass is a constant, \\eqref{1schr} and \\eqref{2schr} are the same as, respectively, \\eqref{4schr} and \\eqref{3schr}. \n\n\n\\subsection{The continuity equation} \n\\label{conteq} \nWe define the function $\\rho(x,t)$ as \n\\begin{equation}\n\\label{pd}\n\\rho(x,t) = \\frac{1}{2 m_0} (\\Psi(x, t) \\Phi(x, t) + \\Phi^\\star(x, t) \\Psi^\\star(x,t)) \\, ,\n\\end{equation}\nwhere $m_0$ is a constant with dimension of mass. We are considering here only systems for which the integral of $\\rho(x,t)$ over the whole space is finite. Besides, in order to be a probability density, $\\rho(x,t)$ has to be non-negative. This can be assured if $\\Phi(x, t) = H(x) \\Psi^\\star(x, t)$, with $H(x) > 0$. 
For the sake of convenience, we choose \n$H(x) = g(x) m(x)$, and\n\\begin{equation}\n\\label{relphipsi}\n\\Phi(x,t) = g(x) m(x) \\Psi^\\star(x,t) \\, ,\n\\end{equation}\nwhere $m(x)$ is obviously positive and we impose $g(x) > 0$. Then, \\eqref{pd} becomes\n\\begin{equation}\n\\label{pd1}\n\\rho(x,t) = \\frac{1}{m_0} (g(x) m(x) \\Psi(x, t) \\Psi^\\star(x,t)) \\, .\n\\end{equation}\nIt is straightforward to show that \\eqref{pd1} obeys the continuity equation\n\\begin{equation}\n\\label{continuityeq}\n\\partial_t \\rho(x,t) + \\partial_x j(x,t) = 0, \n\\end{equation}\nwhere, using Schr\\\"odinger equations \\eqref{1schr} and \\eqref{2schr}, we find the current density to be\n\\begin{equation}\n\\label{current}\n j(x,t) = \\frac{\\hbar}{2 i m_0} g(x) \\left [ (\\partial_x \\Psi(x, t)) \\Psi^\\star(x,t) - \\Psi(x, t) (\\partial_x \\Psi^\\star(x,t)) \\right] \\, .\n\\end{equation}\nThis result is not valid for any function $g(x)$, but only for those obeying the condition\n\\begin{equation}\n\\label{condg}\n\\frac{dg(x)}{dx} = - \\alpha f(m,m^\\prime) m(x) g(x) \\, ,\n\\end{equation}\nwhich is a consequence of the continuity equation \\eqref{continuityeq}. Also, it is very simple to show that \\eqref{condg} makes equations \\eqref{1schr} and \\eqref{4schr} reduce to each other (respectively, \\eqref{2schr} and \\eqref{3schr} ). This means that the fields $\\Psi(x, t)$ and $\\Phi(x, t)$ related through \\eqref{relphipsi} and subject to condition \\eqref{condg} are not two independent fields. Condition \\eqref{condg} also means that, given a particular Hamiltonian of the family \\eqref{genham}, once we know $\\alpha$ and $f(m,m^\\prime)$, we have the function $g(x)$ and the probability and current densities that obey the continuity equation.\n\nIn \\cite{harri}, Harrison proposed a family of Hamiltonians to describe regions of varying band structure in semiconductors, semimetals and transition metals. By comparing his current density, eq. 
(2) in \\cite{harri}, \n\\begin{equation}\n\\label{harricurrent}\n j(x,t) = \\frac{\\hbar}{2 i m_0} \\frac{\\gamma}{\\beta} \\left [ (\\partial_x \\beta \\phi(x, t)) \\beta \\phi^\\star(x,t) - \\beta \\phi(x, t) (\\partial_x \\beta \\phi^\\star(x,t)) \\right] \\, , \n\\end{equation}\nwith our definition of current density, equation \\eqref{current}, and identifying $\\beta \\phi = \\Psi$, we have that \n\\begin{equation}\n\\label{cond2}\n \\frac{\\gamma}{\\beta} = g(x) \\, . \n\\end{equation}\nBesides, comparing his wave equation, eq. (4) in \\cite{harri}, which is the limiting case of continuous variations of the band structure, and where $\\beta$, $\\gamma$ and $k_x$ are functions of position, \n\\begin{equation} \n\\label{harri4}\n\\beta \\partial_x \\left[ \\frac{\\gamma}{\\beta} \\partial_x [\\beta \\phi(x)] \\right]+ \\gamma \\, k_x^2 \\, \\beta \\, \\phi(x)= 0 \\, ,\n\\end{equation}\nwith the time independent Schr\\\"odinger equation for the Hamiltonian \\eqref{genham}, $H \\Psi(x) = E \\Psi(x)$, \nwith $V(x) - E = \\gamma k_x^2$, \nand taking \\eqref{cond2} into account, we find \n\\begin{equation}\n\\beta = \\frac{-\\hbar^2}{2g(x)m(x)} \\, ,\n\\end{equation}\nand recover condition \\eqref{condg}, that is, $g^\\prime(x) = - \\alpha f(m, m^\\prime) m(x) g(x)$. \n\n\n\\subsection{Building the dual self-adjoint Hamiltonian}\n\nSuggested by the form of the probability and current densities, \\eqref{pd1} and \\eqref{current}, let us define a new wave function\n\\begin{equation}\n\\label{dualta} \n\\Omega(x,t) = \\sqrt{g(x) m(x)} \\Psi(x,t)\n\\end{equation} \nand, using Eq.\\eqref{condg}, rewrite Eq. 
\\eqref{1schr} for this new wave function: \n\\begin{align}\n\\nonumber\n i \\hbar \\partial_t \\Omega(x,t) = & -\\frac{\\hbar^ 2}{2 m(x)} \\partial_x^2 \\, \\Omega(x,t) \\, + \n \\frac{\\hbar^ 2}{2 } \\frac{m^\\prime(x)}{m(x)^2} \\partial_x \\, \\Omega(x,t) \\, - \\\\\n \\nonumber\n & - \\frac{\\hbar^ 2}{4 m(x) } \\left[ - \\frac{1}{2 } \\alpha^2 f(m, m^\\prime)^2 m(x)^2 - \\alpha f(m,m^\\prime) m^\\prime(x) \n-\\frac{3}{2} \\left(\\frac{m^\\prime(x)}{m(x)}\\right)^2 \\right. -\\\\\n& \\left. -\\frac{1}{2} \\alpha f^\\prime(m,m^\\prime) m(x) + \\frac{m^{\\prime \\prime}(x)}{m(x)} \\right]\n \\Omega(x,t) + V(x)\\, \\Omega(x,t) \\, .\n\\label{hermite1}\n \\end{align} \n\n\nIt is straightforward to show that the following Hamiltonian, defined from the above Schr\\\"odinger equation,\n\n\\begin{align}\n\\label{selfadham}\n \\nonumber\n&H = -\\frac{\\hbar^ 2}{2 m(x)} \\partial_x^2 + \n \\frac{\\hbar^ 2}{2 } \\frac{m^\\prime(x)}{m(x)^2} \\partial_x \n - \\frac{\\hbar^ 2}{4 m(x) } \\left[ - \\frac{1}{2 } \\alpha^2 f(m, m^\\prime)^2 m(x)^2 - \\right. \\\\ \n & \\left. \\alpha f(m,m^\\prime) m^\\prime(x) \n-\\frac{3}{2} \\left(\\frac{m^\\prime(x)}{m(x)}\\right)^2 \\right. \n \\left. -\\frac{\\alpha}{2} f^\\prime(m,m^\\prime) m(x) + \\frac{m^{\\prime \\prime}(x)}{m(x)} \\right]\n + V(x) \n \\end{align} \nis self-adjoint on the Hilbert space of the solutions $\\Omega(x,t)$ of equation (\\ref{hermite1}) with the usual inner product \nfor two given solutions $\\Omega_1$ and $\\Omega_2$: \n\\begin{equation} \n\\label{inprodomega}\n\\langle \\Omega_1(x,t), \\Omega_2(x,t) \\rangle \\equiv \\int dx \\, \\Omega_1^ \\star (x,t) \\Omega_2(x,t) \\, .\n\\end{equation}\n\nRelation (\\ref{dualta}) can be seen as a mapping from the solutions of Schr\\\"odinger equations (\\ref{1schr}) and (\\ref{2schr}) into the solutions of equation (\\ref{hermite1}) and its Hermitian conjugate. 
\nThus, motivated by (\\ref{pd1}), given two solutions of equation (\\ref{1schr}), namely $\\Psi_1$ and $\\Psi_2$, we can now define their inner product as\n\\begin{equation}\n\\label{inprod}\n\\langle \\Psi_1(x,t), \\Psi_2(x,t) \\rangle_{gm} \\equiv \\int dx \\, g(x) m(x) \\Psi_1^ \\star (x,t) \\Psi_2(x,t) \\, .\n\\end{equation}\nHaving the inner product above and a consistent definition of the probability density $\\rho(x,t)$ \\eqref{pd1}, we can calculate mean values for the system described by Hamiltonian \\eqref{genham}. Therefore, we have a method to deal with the non self-adjoint Hamiltonian \\eqref{genham}.\nThis result is valid for any analytic positive function $m(x)$, for the functions $g(x)$ obeying condition \\eqref{condg} and for any real potential $V(x)$. A similar result for a particular form of $m(x)$, $g(x) = 1$ and $V(x)$ was proved in theorem 1 of \\cite{ball}. \n\nIt is easy to see that the energy spectra computed from Hamiltonians (\\ref{genham}) and (\\ref{selfadham}) are the same. Moreover, just as the dual self-adjoint Hamiltonian is a redefinition of the original non self-adjoint one, the physical operators of the two dual systems will differ in form while their mean values coincide.\nThus, the two systems are quantum mechanically equivalent. \n\nThis is a general result, in the sense that it is valid for all the non self-adjoint Hamiltonians \\eqref{genham} depending on a real function $f(m, m^\\prime)$, with $\\alpha$ a real constant and any real potential $V(x)$, provided the two fields $\\Psi(x,t)$ and $\\Phi(x,t)$ are related by \\eqref{relphipsi} and $g(x)$ obeys condition \\eqref{condg}. Besides, this is a general method to find the dual self-adjoint Hamiltonians for systems described by non self-adjoint Hamiltonians that belong to the family of Hamiltonians \\eqref{genham}, under the conditions just mentioned. 
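\n\nCondition \\eqref{condg} is a first-order linear equation for $g(x)$ and can always be integrated formally. Assuming $f(m, m^\\prime)\\, m$ is integrable, its general solution is\n\\begin{equation}\ng(x) = g_0 \\exp\\left[ - \\alpha \\int^{x} f(m(s), m^\\prime(s))\\, m(s)\\, ds \\right] \\, ,\n\\end{equation}\nwith $g_0 > 0$ a constant, so that $g(x)$ is automatically positive, as required for the probability density \\eqref{pd1}. The examples of the next section correspond to particular choices of $\\alpha$ and $f$ in this expression.\n\n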
The non self-adjointness was the reason for discarding PDM Hamiltonians which appeared in phenomenological approaches to semiconductors (see, for instance, \\cite{gora}, \\cite{LiKelin}). \n\n\n\nIn \\ref{conteq} we showed that our Hamiltonian \\eqref{genham} is identified with the family of Hamiltonians proposed by Harrison in \\cite{harri}. It is important to note that with the method here presented we can find the dual self-adjoint Hamiltonian to any non self-adjoint Hamiltonian contained in Harrison's family. That is, given specific forms of the functions $\\beta$, $\\gamma$ and $m(x)$, we can find the corresponding parameter $\\alpha$ and the functions $f(m, m^\\prime)$ and $g(x)$ satisfying condition \\eqref{condg}, and therefore the quantum mechanically equivalent self-adjoint Hamiltonian \\eqref{selfadham}. \n\n\n\nA general kinetic-energy operator for a position-dependent mass $m(x)$ system was introduced by von Roos \\cite{vonroos}. In one dimension this operator is written as\n\\begin{equation} \n\\label{vonroos}\nT = - \\frac{\\hbar^ 2}{4} \\left[m(x)^ a \\partial_x m(x)^ b \\partial_x m(x)^ c + m(x)^ c \\partial_x m(x)^ b \\partial_x m(x)^ a \\right] \\, ,\n\\end{equation}\n where the arbitrary constants $a, b$ and $c$ obey the constraint $a + b + c = -1$. \nTaking this constraint into account, this operator can be written as\n\\begin{align} \n\\nonumber\nT = & - \\frac{\\hbar^ 2}{2 m(x)} \\partial_x ^ 2 + \\frac{\\hbar^ 2}{2 m(x)^ {2}} m^ \\prime(x) \\partial_x + \\\\ \n\\label{vonroos2} \n& + \\frac{\\hbar^ 2}{4 m(x)} \\left[ - 2 (1 + a + a^ 2 + b + ab) \\left( \\frac{m^ \\prime(x)}{m(x)} \\right)^ 2 \n+ (1 + b) \\frac{m^ {\\prime \\prime}(x)}{m(x)}\\right] \\, .\n\\end{align}\n\nThe comparative analysis of the von Roos kinetic operator, Eq. \\eqref{vonroos2}, and the kinetic part of our Hamiltonian given by Eq. \\eqref{selfadham} will be performed in the examples below. 
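\n\nFor orientation, it is worth recalling two familiar special cases of \\eqref{vonroos}. The choice $a = c = 0$, $b = -1$ gives the BenDaniel--Duke ordering,\n\\begin{equation}\nT = - \\frac{\\hbar^ 2}{2}\\, \\partial_x \\frac{1}{m(x)}\\, \\partial_x \\, ,\n\\end{equation}\nwhile $a = c = -1\/2$, $b = 0$ gives the symmetric ordering $T = - \\frac{\\hbar^2}{2}\\, m(x)^{-1\/2}\\, \\partial_x^2 \\, m(x)^{-1\/2}$, which will reappear in case (a) below.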
\n\n\n\n\n\\section{Three examples} }\n\\label{cases}\n\nWe have so far shown that it is possible to construct a well-defined continuity equation for the general position-dependent mass Hamiltonian (\\ref{genham}). \n\n In this subsection we analyze three different classes of Lagrangian (\\ref{lag}) specified by different choices of $f(m,m^\\prime)$ and $\\alpha$, namely:\n \\begin{itemize}\n\\item \n Case a: $\\alpha = 0$\n\\item\n Case b: $\\alpha = \\frac{1}{c_1m_0}, f(m, m^\\prime)= \\frac{m^\\prime}{m(x)}$ , $[\\alpha] = M^ {-1}$\n\\item\n Case c: $\\alpha = 2 \\alpha_0 c_2, f(m, m^\\prime) = \\frac{1}{m(x)}$, $\\alpha_0$ a constant, $[\\alpha_0] = L^ {-1}$ \\, . \n\\end{itemize} \nBoth constants $c_1$ and $c_2$ are dimensionless. These three cases are typical in the sense that the constant $\\alpha$ has no scale or scales as mass or length. \n \n\n\\subsection{Case a:}\n\nIn case (a) the non self-adjoint Hamiltonian \\eqref{genham} becomes\n\\begin{equation}\n\\label{ham1}\nH = \\frac{-\\hbar^ 2}{2 m(x)} \\partial_x^ 2 + V(x) \\, \n\\end{equation} \n\nFrom condition \\eqref{condg}, as $\\alpha = 0$, we have $g^\\prime(x) = 0$ and $g$ is a constant; we take it equal to $1$. Therefore, according to \\eqref{relphipsi}, \n\\begin{equation}\n\\label{phipsi}\n\\Phi(x,t) = m(x) \\Psi^ \\star (x,t) \\, .\n\\end{equation} \nThe functions $\\Psi(x,t)$ (respectively, $\\Phi(x,t)$) are the solutions of the Schr\\\"odinger equations \\eqref{1schr} (respectively, \\eqref{3schr}) for the Hamiltonian \\eqref{ham1}. \n\nIn the limit $m(x) = $ constant, we recover the usual expressions for both the probability and current densities. 
\n\nThe dual self-adjoint Hamiltonian \\eqref{selfadham} corresponding to Hamiltonian \\eqref{ham1} is then\n\\begin{align}\n\\label{selfadhama}\n \\nonumber\nH_a = & -\\frac{\\hbar^ 2}{2 m(x)} \\partial_x^ 2 + \\frac{\\hbar^ 2}{2 m(x)^ 2} m^ \\prime(x) \\partial_x +\n \\frac{\\hbar^ 2}{4 m(x)} \\left[ - \\frac{3}{2} \\left( \\frac{m^ \\prime(x)}{m(x)} \\right)^ 2 + \\frac{m^ {\\prime \\prime}(x)}{ m(x)} \n \\right] \\\\ + \n & V(x) \\, . \n \\end{align} \nComparing (\\ref{vonroos2}) with the kinetic part of (\\ref{selfadhama}), we see that they are the same for $a = c = - 1\/2$ and $b = 0$. \n \n \nFrom \\eqref{dualta} we have in this case \n\\begin{equation}\n\\label{aomega} \n\\Omega(x,t) = \\sqrt{m(x)} \\Psi(x,t) \\, ,\n\\end{equation} \nwhere $\\Omega(x,t)$ is the solution of the Schr\\\"odinger equations \\eqref{hermite1} in case (a). \n\nHamiltonian \\eqref{ham1} was proposed in \\cite{zhu} as a model for the abrupt heterojunction between two different semiconductors and rendered self-adjoint by an empirical approach which is a particular case of the method presented here. \n\nAs we showed in the general case, in \\ref{conteq}, the Hamiltonians of the family \\eqref{genham} are equivalent to those proposed by Harrison in \\cite{harri}. In this particular case, the $\\beta$ and $\\gamma$ functions of Harrison's Hamiltonian \\eqref{harri4} are $\\beta = \\gamma = \\frac{- \\hbar^2}{2m(x)}$ and $V(x) - E = -\\frac{\\hbar^2}{2 m(x)} k_x^2$. \n\n \n\\subsection{Case b:}\n\n In this case Hamiltonian \\eqref{genham} has the form\n\\begin{equation}\n\\label{bham}\nH = \\frac{-\\hbar^ 2}{2 m(x)} \\partial_x^ 2 + \\frac{\\hbar^ 2}{2 c_1 m_0 m(x)} m^\\prime (x) \\partial_x + V(x) \\, .\n\\end{equation} \nLet us note that in this case the parameter $\\alpha = \\frac{1}{c_1 m_0}$ has dimension $M^ {-1}$. 
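\n\nFor this case the integration of condition \\eqref{condg} is immediate: with $f = m^\\prime\/m$ and $\\alpha = 1\/(c_1 m_0)$,\n\\begin{equation}\n\\frac{g^\\prime(x)}{g(x)} = - \\frac{1}{c_1 m_0} \\frac{m^\\prime(x)}{m(x)}\\, m(x) = - \\frac{m^\\prime(x)}{c_1 m_0} \\quad \\Rightarrow \\quad g(x) = e^{- \\frac{m(x)}{c_1 m_0}} \\, ,\n\\end{equation}\nwhere the integration constant was set equal to one.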
\n\nIntegrating condition \\eqref{condg}, equation \\eqref{relphipsi} becomes \n\\begin{equation}\n\\label{phipsib}\n\\Phi(x,t) = e^ {-\\frac{m(x)}{c_1 m_0}}m(x) \\Psi^ \\star (x,t) \\, .\n\\end{equation} \n\nThe dual self-adjoint Hamiltonian \\eqref{selfadham} corresponding to Hamiltonian \\eqref{bham} is now\n\\begin{align}\n\\label{bselfadj}\n \\nonumber\nH_b = & -\\frac{\\hbar^ 2}{2 m(x)} \\partial_x^ 2 + \\frac{\\hbar^ 2}{2 m(x)^ 2} m^ \\prime(x) \\partial_x -\n \\frac{\\hbar^ 2}{2 m(x)} \\left[ \\frac{3 c_1^ 2 m_0^ 2 - m(x)^ 2}{4c_1^ 2 m_0^ 2 m(x)^ 2} m^ \\prime(x)^ 2 + \\right. \\\\\n & \\left. \\frac{m(x) - c_1 m_0}{2 c_1 m_0 m(x)} m^ {\\prime \\prime}(x) \\right] + V(x) \\, . \n \\end{align}\nFrom \\eqref{dualta} the new functions $\\Omega(x,t)$, \n\\begin{equation}\n\\label{bomega} \n\\Omega(x,t) = e^ {-\\frac{m(x)}{2 c_1 m_0}}\\sqrt{m(x)} \\Psi(x,t) \\, ,\n\\end{equation} \nare the solutions of the Schr\\\"odinger equations for the self-adjoint Hamiltonian \\eqref{bselfadj}. \n\n In this case, the kinetic operator of Hamiltonian (\\ref{bselfadj}) does not reduce to the von Roos general kinetic operator (\\ref{vonroos}) for any particular choice of the parameters. Indeed, it is easy to see that \n \\begin{equation}\n\\label{baham}\nH_b = H_a + \\frac{\\hbar^ 2}{2} \\left[\\frac{m^ \\prime(x)^ 2}{4 c_1^ 2 m_0^ 2 m(x)} - \\frac{m^ {\\prime \\prime}(x)}{2 c_1 m_0 m(x)} \\right] \\, ,\n\\end{equation}\nwhere $H_a$ is given by equation (\\ref{selfadhama}). This shows that the von Roos kinetic operator is not the most general self-adjoint kinetic operator, as has frequently been assumed in the literature over the last decades. Equation \\eqref{bselfadj} is a perfectly satisfactory PDM Hamiltonian that does not fit the von Roos proposal. 
\n\nIn this particular case, the $\\beta$ and $\\gamma$ functions of Harrison's Hamiltonian \\eqref{harri4} are $\\beta = \\frac{- \\hbar^2}{2m(x)} e^{\\frac{m(x)}{c_1 m_0}}$, $\\gamma = \\frac{- \\hbar^2}{2m(x)}$ and $V(x) - E = -\\frac{\\hbar^2}{2 m(x)} k_x^2$. In fact, this also shows that the kinetic part of Harrison's Hamiltonian proposed in \\cite{harri} is more general than the von Roos one. \n \n \n\n\\subsection{Case c:} \nFinally, in case (c) the Hamiltonian (\\ref{genham}) reads\n\\begin{equation}\n\\label{cham}\n H = \\frac{-\\hbar^ 2}{2 m(x)} \\partial_x^ 2 + \\frac{\\hbar^ 2 c_2 \\alpha_0}{m(x)} \\partial_x + V(x) \\, ;\n\\end{equation}\nthe relation between the functions $\\Psi^\\star(x,t)$ and $\\Phi(x,t)$ is\n\\begin{equation}\n\\label{mapc}\n\\Phi(x,t) = e^ {-2 c_2 \\alpha_0 x} m(x) \\Psi^\\star(x, t)\\, ,\n\\end{equation}\nand the dual self-adjoint Hamiltonian has the form\n\\begin{align}\n\\label{seladjhamc}\n\\nonumber\nH_c = & -\\frac{\\hbar^ 2}{2 m(x)} \\partial_x^ 2 + \\frac{\\hbar^ 2}{2 m(x)^ 2} m^ \\prime(x) \\partial_x -\n \\frac{\\hbar^ 2}{2 m(x)} \\left[ \\frac{3 }{4m(x)^ 2} m^ \\prime(x)^ 2 - \\right. \\\\\n & \\left. \\frac{1}{2 m(x)} m^ {\\prime \\prime}(x) - \\frac{c_2^ 2 \\alpha_0^ 2}{4}\\right] + V(x) \\, . \n\\end{align} \n Here the parameter $\\alpha_0$ has dimension $[\\alpha_0] = L^ {-1}$.\n \nNow the solutions of the Schr\\\"odinger equations for Hamiltonian \\eqref{seladjhamc} are\n\\begin{equation}\n\\label{dualtc} \n\\Omega(x,t) = e^ {-c_2 \\alpha_0 x} \\sqrt{m(x)} \\Psi(x,t) \\,\\, .\n\\end{equation}\n\nAs in case (b), Hamiltonian \\eqref{seladjhamc} can be written as \n\\begin{equation}\n\\label{bcham}\nH_c = H_a + \\frac{\\hbar^ 2 c_2^ 2 \\alpha_0^ 2}{8 m(x)} \\, ,\n\\end{equation}\nand its kinetic part does not reduce to the von Roos general kinetic operator, being another example of a satisfactory PDM Hamiltonian that is not included in the von Roos scheme. 
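\n\nThe map \\eqref{mapc} also follows in one line from condition \\eqref{condg}: with $f = 1\/m$ and $\\alpha = 2 \\alpha_0 c_2$,\n\\begin{equation}\n\\frac{g^\\prime(x)}{g(x)} = - 2 \\alpha_0 c_2\\, \\frac{1}{m(x)}\\, m(x) = - 2 \\alpha_0 c_2 \\quad \\Rightarrow \\quad g(x) = e^{- 2 c_2 \\alpha_0 x} \\, ,\n\\end{equation}\nwith the integration constant again set equal to one.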
\n\nIn this particular case, the $\\beta$ and $\\gamma$ functions of Harrison's Hamiltonian \\eqref{harri4} are $\\beta = \\frac{- \\hbar^2}{2m(x)} e^{2 c_2 \\alpha_0 x}$, $\\gamma = \\frac{- \\hbar^2}{2m(x)}$ and $V(x) - E = -\\frac{\\hbar^2}{2 m(x)} k_x^2$. \n\nWe remark that, after the introduction of the function $\\Omega(x,t)$, instead of the Lagrangian \\eqref{lag} we could write a Lagrangian only for the field $\\Omega(x,t)$, whose associated Hamiltonian is self-adjoint. This procedure was carried out in \\cite{pvp2015}. However, with that approach we cannot analyze non self-adjoint operators. \n\n\n\n\n \n\n\\section{Position-dependent mass Hamiltonians for a deformed harmonic oscillator}\n\\label{defosc}\n Position-dependent mass Schr\\\"odinger equations have been used to describe semiconductor heterostructures \\cite{heteros1,zhaoliang,heteros3}, as well as other kinds of systems \\cite{pharri, kesha, einevoll}. Among \n all the possible $m(x)$, there is strong motivation in the literature to study the case\n\\begin{equation}\n\\label{fx}\nm(x) = m_0 (1 + \\gamma x^2) \\, , \n\\end{equation} \nwhich may describe the $GaAs\/Al_xGa_{1-x}As$ system \\cite{galbraith, LiKelin, zhaoliang}. $m_0$ is a constant with dimension of mass. \n\nWe choose to study the Schr\\\"odinger equations of the cases (a), (b) and (c), described by Hamiltonians \\eqref{ham1}, \\eqref{bham} and \\eqref{cham}, with the position-dependent mass (\\ref{fx}), and the deformed harmonic oscillator potential $V(x)$ given by\n\\begin{equation} \n\\label{vx}\nV(x) = \\frac{k x^2}{2 m_0} (1 + \\gamma x^2)^{-1} \\, ,\n\\end{equation}\nwhere $\\gamma$ is a constant with dimension $L^{-2}$ which measures the departure from the usual harmonic oscillator and $k$ is the spring constant. With this choice the Hamiltonian \\eqref{ham1} is PT-symmetric, where P denotes parity and T the time reversal operator. Naturally, the standard harmonic oscillator is recovered when $\\gamma$ is zero. 
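\n\nIt is useful to note the asymptotic behavior of the potential \\eqref{vx}:\n\\begin{equation}\nV(x) = \\frac{k x^2}{2 m_0 (1 + \\gamma x^2)} \\, \\longrightarrow \\, \\frac{k}{2 m_0 \\gamma} \\, , \\qquad x \\rightarrow \\pm \\infty \\, ,\n\\end{equation}\nso that for $\\gamma > 0$ the potential saturates at a finite height; this plateau is what will bound the energy levels obtained below.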
\n\nIn all the cases presented in section 2 it is sufficient to solve the Schr\\\"odinger equations for the original non-self-adjoint Hamiltonians, namely \\eqref{ham1}, \\eqref{bham} and \\eqref{cham}. From their solutions we have immediately the solutions of the Schr\\\"odinger equations generated by the equivalent self-adjoint Hamiltonians, which are given by \\eqref{selfadhama}, \\eqref{bselfadj} and \\eqref{seladjhamc}. \n\\subsubsection{Case a:} \n\nThe Hamiltonian \\eqref{ham1} with potential \\eqref{vx} is \\cite{ball}\n\\begin{equation} \n\\label{defosca}\n\\mathcal{H}_a = \\frac{-\\hbar^ 2 }{2 m_0 (1 + \\gamma x^2)} \\partial_x^ 2 + \\frac{k x^2}{2 m_0 (1 + \\gamma x^2)} \\, .\n\\end{equation}\nIn the stationary case, for which $\\Psi(x,t) = e^{-i E t\/\\hbar} \\Psi(x)$, the time-independent Schr\\\"odinger equation $\\mathcal{H}_a \\Psi(x) = E_a \\Psi(x)$ is\n\\begin{equation}\n\\label{scheq1}\n(1 + \\tilde\\gamma y^2)^{-1} \\Psi^{\\prime \\prime}(y) + [ \\lambda - y^2 (1 + \\tilde\\gamma y^2)^{-1} ]\\Psi(y) = 0 \\, ;\n\\end{equation}\nabove, we have defined the new variable $y = (\\frac{m_0 k}{\\hbar^2})^{1\/4} x$ and redefined $\\tilde\\gamma = \\gamma (\\frac{\\hbar^2}{m_0 k})^{1\/2}$ and $\\lambda = \\frac{2E}{\\hbar \\omega}$ and, as usual, the frequency is $\\omega = \\sqrt{\\frac{k}{m_0}}$.\n\n\\noindent\nThe eigensolutions $\\Psi_{na}(y)$ are the functions: \n\\begin{equation}\n\\label{sola1}\n\\Psi_{na}(y) = c_a \\exp\\left(-\\frac{1}{2} y^2 \\sqrt{1- 2 \\tilde\\gamma E_{na} } \\right) \\textmd{H}_{n} \\left[ y (1-2 \\tilde\\gamma E_{na})^{1\/4} \\right] \\, ,\n\\end{equation}\nwhere $c_a$ is an arbitrary constant; $\\textmd{H}_n[y]$ is the Hermite polynomial and we have the energy levels \n\\begin{equation}\n\\label{enlevela}\nE_{na} = \\frac{2 n+1}{4} \\left[ -\\tilde\\gamma (2 n+1) + \\sqrt{4 + \\tilde\\gamma^2 (2n+1)^2} \\right] \\, .\n\\end{equation}\n$\\Phi(y,t)$ and $\\Omega(y,t)$ are given respectively by \\eqref{phipsi} and 
\\eqref{aomega}, taking into account the change of variables, with $m(y) = 1 + \\tilde\\gamma y^2$. Note that when $\\tilde\\gamma = 0$, we recover the energy levels of the harmonic oscillator.\n\nIn order to guarantee that the mass \\eqref{fx} is positive for every value of $y$, we must have $\\tilde\\gamma \\geqslant 0$. The asymptotic behavior of the energy levels as $n$ tends to $\\infty$ is then\n\n \\begin{equation}\n\\label{enasymposa}\nE_{na} \\sim \\frac{1}{2 \\tilde\\gamma} - \\frac{1}{8 \\tilde\\gamma^ 3 n^ 2} + O(1\/n^ 3) \\, ;\n\\end{equation}\ntherefore the energy is bounded above by $1\/2 \\tilde\\gamma$, which is the maximum of the potential \\eqref{vx}, and the square roots in the eigensolutions \\eqref{sola1} are well defined for all the values of the energy. \n\n\n\\subsubsection{Case b:} \n\nTaking the dimensionless constant $c_1 = 1$, the Hamiltonian \\eqref{bham} with potential \\eqref{vx} is\n\\begin{equation}\n\\label{defoscb}\n\\mathcal{H}_b = \\frac{-\\hbar^ 2}{2 m_0 (1 + \\gamma x^2)} \\partial_x^ 2 + \\frac{\\hbar^ 2}{2 m_0 (1 + \\gamma x^2)} m^\\prime (x) \\partial_x + \\frac{k x^2}{2 (1 + \\gamma x^2)} \\, .\n\\end{equation}\n\n\\noindent\nWith the same redefinition of variables as in case a, the eigensolutions of the time independent Schr\\\"odinger equations, $\\mathcal{H}_b \\Psi(x) = E_b \\Psi(x)$, are the functions \n\\begin{equation}\n\\label{solb1}\n\\Psi_{nb}(y) = c_b \\exp\\left[-\\frac{1}{2} y^2 \\left( - \\tilde\\gamma + \\sqrt{1+ \\tilde\\gamma^ 2 - 2 \\tilde\\gamma E_{nb}} \\right) \\right] \\textmd{H}_{n} \\left[ y (1 + \\tilde\\gamma^ 2 -2 \\tilde\\gamma E_{nb})^{1\/4} \\right] \\, ,\n\\end{equation}\nand the energy levels are\n\\begin{equation}\n\\label{enlevelb}\nE_{nb} = \\frac{1}{4} \\left[- [2 + (2 n+1)^ 2] \\tilde\\gamma + (2n + 1)\\sqrt{4 + \\tilde\\gamma^2 [8 + (2n+1)^2]} \\right]\\, .\n\\end{equation}\n$\\Phi(x,t)$ and $\\Omega(x,t)$ are given respectively by \\eqref{phipsib} and \\eqref{bomega}, with $m(y) 
= 1 + \\tilde\\gamma y^2$. Note that when $\\tilde\\gamma = 0$, we recover the energy levels of the harmonic oscillator.\n\nAs $\\tilde\\gamma \\geqslant 0$ for any value of $y$, the asymptotic behavior of the energy levels as $n$ tends to $\\infty$ is\n \\begin{equation}\n\\label{enasymposb}\nE_{nb} \\sim \\frac{\\tilde\\gamma^ 2 + 1}{2 \\tilde\\gamma} - \\frac{1}{8 \\tilde\\gamma^ 3}(2 \\tilde \\gamma^ 2 + 1)^ 2 \\frac{1}{n^ 2}+ O(1\/n^ 3) \\, ;\n\\end{equation}\nnow the energy is bounded above by $(\\tilde\\gamma^ 2 + 1)\/2 \\tilde\\gamma$, which ensures that the square roots in the eigensolutions \\eqref{solb1} are well defined for all the values of the energy. \n\nIn the dimensionless variable $y$, the dual of the Hamiltonian \\eqref{defoscb} can be written as\n\\begin{align}\n\\label{selfadhamb1}\n\\nonumber\n\\mathcal{H}_b = \\mathcal{H}_a + \\frac{1}{4 m(y)} \\left[ \\frac{m^ \\prime(y)^ 2}{2} -\n m^{\\prime \\prime}(y) \\right] \\, ; \n \\end{align}\ntherefore we have an effective potential given by\n\\begin{equation}\n\\label{potef}\n\\mathcal{W}(y) = \\frac{y^2}{2 (1 + \\tilde\\gamma y^2)} + \\frac{1}{4 m(y)} \\left[ \\frac{m^ \\prime(y)^ 2}{2} - m^{\\prime \\prime}(y) \\right] \\, .\n\\end{equation}\nIt is then easy to see that as $y \\rightarrow \\infty$ the limit of the potential $\\mathcal{W}(y)$ is exactly the limit value of the energy, $(\\tilde\\gamma^ 2 + 1)\/2 \\tilde\\gamma$. But analyzing the potential $V(y) = y^2 (2(1 + \\tilde\\gamma y^2))^{-1}$ we see that its limit is $1\/2 \\tilde\\gamma$. Therefore in this case the thresholds of the two potentials, $V(y)$ and $\\mathcal{W}(y)$, are not the same. 
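\n\nExplicitly, with $m(y) = 1 + \\tilde\\gamma y^2$, so that $m^\\prime(y) = 2 \\tilde\\gamma y$ and $m^{\\prime \\prime}(y) = 2 \\tilde\\gamma$, the correction term in \\eqref{potef} is\n\\begin{equation}\n\\frac{1}{4 m(y)} \\left[ \\frac{m^\\prime(y)^2}{2} - m^{\\prime \\prime}(y) \\right] = \\frac{\\tilde\\gamma^2 y^2 - \\tilde\\gamma}{2 (1 + \\tilde\\gamma y^2)} \\, \\longrightarrow \\, \\frac{\\tilde\\gamma}{2} \\, , \\qquad y \\rightarrow \\infty \\, ,\n\\end{equation}\nwhich, added to the limit $1\/2\\tilde\\gamma$ of the first term of $\\mathcal{W}(y)$, reproduces the energy threshold $(\\tilde\\gamma^2 + 1)\/2\\tilde\\gamma$.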
\n\n\\subsubsection{Case c:} \n{\\normalsize Following the same procedure as in cases (a) and (b), the Hamiltonian \\eqref{cham} with potential \\eqref{vx}, taking $c_2 = 1$ and $\\alpha_0 = 2 \\sqrt{\\gamma}$, is\n\\begin{equation}\n\\label{defoscc}\n\\mathcal{H}_c = \\frac{-\\hbar^ 2}{2 m_0 (1 + \\gamma x^2)} \\partial_x^ 2 + \\frac{\\hbar^ 2 \\alpha_0}{m_0 (1 + \\gamma x^2)} \\partial_x + \\frac{k x^2}{2 (1 + \\gamma x^2)} \\, ;\n\\end{equation}\n the eigensolutions are\n\\begin{equation}\n\\label{solc1}\n\\Psi_{nc}(y) = c_c \\exp\\left[2 y \\sqrt{\\tilde\\gamma}-\\frac{1}{2} y^2\\sqrt{1 - 2 \\tilde\\gamma E_{nc}} \\right] \\textmd{H}_n \\left[ y (1 - 2 \\tilde\\gamma E_{nc})^{1\/4} \\right] \\, ,\n\\end{equation}\nand the energy levels are\n\\begin{equation}\n\\label{enlevelc}\nE_{nc} = \\frac{1}{4} \\left[- [-8 + (2 n+1)^ 2] \\tilde\\gamma + (2n + 1)\\sqrt{4(1 - 4 \\tilde\\gamma^ 2) + \\tilde\\gamma^2 (2n+1)^2} \\right]\\, .\n\\end{equation}\nSince $\\tilde\\gamma \\geqslant 0$, as $n$ tends to $\\infty$, \\eqref{enlevelc} goes to\n \\begin{equation}\n\\label{enasymposc}\nE_{nc} \\sim \\frac{1}{2 \\tilde\\gamma} - \\frac{1}{8 \\tilde\\gamma^ 3}(4 \\tilde \\gamma^ 2 - 1)^ 2 \\frac{1}{n^ 2}+ O(1\/n^ 3) \\, ;\n\\end{equation}\nnow the energy is bounded above by $1\/2 \\tilde\\gamma$, which ensures that the square roots in the eigensolutions \\eqref{solc1} are well defined for all the values of the energy. \n\nIn this case the limit value of the effective potential of the dual Hamiltonian as $y \\rightarrow \\infty$ is the same as that of $V(y)$, namely $1\/2 \\tilde\\gamma$. \n\nIn figure \\ref{fig1} we see the behavior of the potential as a function of $x$ and the corresponding first five energy levels for the three cases presented, for $\\gamma = 0.1$. 
\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=4in]{figpot.pdf} \n\\caption{Behavior of the potential as a function of $x$ and the corresponding first five energy levels for the three cases presented, for $\\gamma = 0.1$. Case (a): continuous red line; case (b): dashed blue line; case (c): dotted black line.}\n\\label{fig1}\n\\end{center}\n\\end{figure}\n\n\\subsubsection{Case of negative $\\gamma$:}\n\nWhen the deformation parameter $\\gamma$ is negative, the problem can be completely solved in all our three examples of PDM Hamiltonians. For negative $\\gamma$, positivity of the mass $m(y) = 1 - |\\tilde\\gamma| y^ 2$ requires $-1\/\\sqrt{|\\tilde\\gamma|} < y < 1\/\\sqrt{|\\tilde\\gamma|}$, and the potential $V(y)$ is defined only in this interval. Therefore there are solutions only in this region. \n\nAs an example, in case b the solution of the dual Hamiltonian is \n\\begin{equation}\n\\label{solgammanegb}\n\\Omega_{nb}(y) = c_b \\sqrt{m(y)}\n\\exp\\left[-\\frac{1}{2} y^2 \\left(\\sqrt{1+ \\tilde\\gamma^ 2 + 2 |\\tilde\\gamma| E_{nb}} \\right) \\right] \\textmd{H}_n \\left[ y (1 + \\tilde\\gamma^ 2 + 2 |\\tilde\\gamma| E_{nb})^{1\/4} \\right] \\, ,\n\\end{equation}\nand the energy levels are given by\n\\begin{equation}\n\\label{enlevelnegb}\nE_{nb} = \\frac{1}{4} \\left[[2 + (2 n+1)^ 2] |\\tilde\\gamma| + (2n + 1)\\sqrt{4 + \\tilde\\gamma^2 [8 + (2n+1)^2]} \\right]. \n\\end{equation}\nIt is easy to see that as $n$ tends to $\\infty$ the above energy levels go to $2 |\\tilde\\gamma| n^2$. }\n\n\n\n\n\n\\section{Conclusions}\n\\label{conc}\nThe approach used here, which starts from a classical Lagrangian depending on two independent fields, allowed us to completely solve the question of self-adjointness of a class of position-dependent mass Hamiltonian systems. These Hamiltonians had been originally discarded in phenomenological approaches to semiconductors because they were not self-adjoint. 
We proved here that for a general class of these non self-adjoint Hamiltonians, we can construct Hamiltonians which are quantum mechanically equivalent to the original ones and are self-adjoint in the usual Hilbert space of the solutions of their Schr\\\"odinger equations. This can be done if \nsome function $g(x)$, which appears in consistent definitions of the probability and current densities, is constrained by the particular form of the non self-adjoint Hamiltonians. By consistent we mean that the probability density is positive and that it obeys the usual continuity equation with an appropriate definition of the current density. \n\nThe general non self-adjoint Hamiltonian proposed by us is identified with a large family of Hamiltonians constructed by Harrison in \\cite{harri} to calculate wave functions in regions of varying band structure in superconductors, simple metals and semimetals. That is, given specific forms of the functions involved in Harrison's Hamiltonians, we can find the form of our parameters and the function $g(x)$ satisfying condition \\eqref{condg}. Therefore we obtain the quantum mechanically equivalent self-adjoint Hamiltonians, dual to Harrison's family. \n\n\nWith this method we can solve many particular cases (that is, choosing the parameter $\\alpha$ and the function $f(m(x), m^\\prime(x))$) of the Hamiltonian \\eqref{genham} and consequently of Harrison's Hamiltonians. We can also expect this method to be successfully applied to Hamiltonians with non self-adjoint potentials \\cite{bender}. \n\n \n We have also solved three typical cases for a deformed harmonic oscillator potential, choosing a specific form of position-dependent mass which is potentially interesting for physical applications. Besides, the kinetic energies for these cases are particular cases of the kinetic energy in the family of Hamiltonians proposed by Harrison. 
We have studied these cases for positive and negative values of the deformation parameter introduced in the form of the mass. Moreover, for positive values of this parameter the systems solved here present bound-state solutions with an energy threshold; for negative values the systems are confined.\n \n We believe that some Hamiltonians that were discarded because of their non self-adjointness, like, for example, in \\cite{gora}, could now be treated by this method. \n \n \n \n\n\n\n\n\\section*{Acknowledgments}\n\nWe would like to acknowledge the Brazilian scientific agencies CNPq, FAPERJ and CAPES for financial support. We also thank the referees for valuable comments which helped to improve the article.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
$($Here $K_C$ is a canonical divisor on $C)$.\nA base point free complete very special linear system $g^r_n$ on $C$ is\ntrivial if there exists an integer $m\\ge 0$ and an effective divisor $E$\non $C$ of degree $md-n$ such that $g^r_n=|mg^2_d-E|$ and\n$r=\\fracd{m^2+3m}{2}-(md-n)$. A complete very special linear system\n$g^r_n$ on $C$ is trivial if its associated base point free linear\nsystem is trivial.}\n\\vs 2\nIn this paper, we consider the following question. For\n$n\\in{\\bbb Z}_{\\ge 1}$ find $r(n)$ such that there exists a very special\nnon-trivial complete linear system $g^{r(n)}_n$ on $C$ but no such linear\nsystem $g^{r(n)+1}_n$. Our main result is the following:\n\\vs 2\n{\\raggedright {\\sc Theorem}. }\n{\\it Let $g^r_n$ be a base point free very special non-trivial complete\nlinear system on $C$. Write $r=\\fracd{(x+1)(x+2)}{2}-\\beta$ with $x, \\beta$\nintegers satisfying $x\\ge 1, 0\\le\\beta\\le x$. Then\n$$\nn\\ge n(r):=(d-3)(x+3)-\\beta .\n$$}\n\\vs 2\nThis theorem only concerns linear systems of dimension $r\\ge 2$. However,\n 1-dimensional linear systems are studied in \\cite{c3}. From these\nresults one finds that $C$ has no non-trivial very special linear system\nof dimension 1 if $d\\le 5$ and for $d\\ge 6$, $C$ has non-trivial very\nspecial complete linear systems $g^1_{3d-9}$ but no such linear system\n$g^1_n$ with $n<3d-9$. The proof of this theorem is also effective for\nthe case $r=1$ if one modifies it a little bit.\n\nConcerning the original problem one can make the following observation.\nFor $x\\ge d-2$ one has $r>g(C)$ and of course $C$ has no non-trivial\nvery special linear systems $g^r_n$. For $x\\le d-3$ one has\n$(d-3)(x+2)\\le (d-3)(x+3)-x$. So, if the bound $n(r)$ is sharp, then\nalso the bound $r(n)$ can be found. Concerning the sharpness of the bound\n$n(r)$, we prove it in case ${\\rm char}(k)=0$ for $x\\le d-6$. 
In case\n${\\rm char}(k)\\ne 0$ we prove that there exists smooth plane curves of\ndegree $d$ with a very special non-trivial $g^r_{n(r)}$ in case\n$x\\le d-6$. Finally for the case $x>d-6$ we prove that there exist no\nbase point free complete very special non-trivial linear systems of\ndimension $r$ on $C$. Hence, at least in case ${\\rm char}(k)=0$ the\nnumbers $r(n)$ are determined.\n\\vs 2\n{\\raggedright{\\sc Some Notations}}\n\\vs 1\nWe write ${\\bbb P}_a$ to denote the space of effective divisors of degree $a$\non ${\\bbb P}^2$. If ${\\bbb P}$ is a linear subspace of some ${\\bbb P}_a$ then we write\n${\\bbb P} .C$ for the linear system on $C$ obtained by intersection with\ndivisors ${\\mit \\Gamma}\\in{\\bbb P}$ not containing $C$. We write $F({\\bbb P} .C)$ for the\nfixed point divisor of ${\\bbb P} .C$ and $f({\\bbb P} .C)$ for the associated base\npoint free linear system on $C$, so\n$f({\\bbb P} .C)=\\{ D-F({\\bbb P} .C):D\\in{\\bbb P} .C\\}$.\nIf $Z$ is a 0-dimensional subscheme of ${\\bbb P}^2$ then ${\\bbb P}_a(-Z)$ is the\nsubspace of divisors $D\\in{\\bbb P}_a$ with $Z\\subset D$.\n\n\\setcounter{equation}{0}\n\\section{Bound on the degree of non-trivial linear systems}\n\nA complete linear system $g^r_n$ on a smooth curve $C$ is called very\nspecial if $r\\ge 1$ and $\\dim |K_C-g^r_n|\\ge 1$. From now on, $C$ is a\nsmooth plane curve of degree $d$ and $g^r_n$ is a very special base point\n free linear system on $C$ with $r\\ge 2$.\n\\vs 1\n\\begin{defi}\n$g^r_n$ is called a trivial linear system on $C$ if there exists an\ninteger $m\\ge 0$ and an effective divisor $E$ on $C$ of degree $md-n$\nsuch that $g^r_n=|mg^2_d-E|$ and $r=\\fracd{m^2+3m}{2}-(md-n)$.\n\\label{def:trivial}\n\\end{defi}\n\\vs 2\n\\begin{thm}\nWrite $r=\\fracd{(x+1)(x+2)}{2}-\\beta$ with $x, \\beta$ integers satisfying\n$x\\ge 1, 0\\le\\beta\\le x$. 
If $g^r_n$ is not trivial, then\n$$\nn\\ge n(r):=(d-3)(x+3)-\\beta .\n$$\n\\label{thm:1}\n\\end{thm}\n\\vs 1\n\\begin{rem}\n{\\rm In the proof of this theorem we are going to make use of the main\nresult of Hartshorne \\cite{harts1} which describes the linear systems on\n$C$ of maximal dimension with respect to their degrees. The result is as\nfollows:}\n\nLet $g^r_n$ be a linear system on $C$ $($not necessarily very special$)$.\n Write $g(C)=\\fracd{(d-1)(d-2)}{2}$.\n\n{\\rm i)} If $n>d(d-3)$ then $r=n-g$ $($the non-special case$)$\n\n{\\rm ii)} If $n\\le d(d-3)$ then write $n=kd-e$ with\n$0\\le k\\le d-3, 0\\le ek+1\\\\\nr\\le\\fracd{k(k+3)}{2}-e & {\\rm if\\ }e\\le k+1.\n\\end{array}\\right.$\n\\vs 1\n{\\rm Hartshorne also gives a description for the case one has equality.\nThis theorem (a claim originally stated by M. Noether with an incomplete\nproof) is also proved in \\cite{cil}. However, Hartshorne also proves the\n theorem for integral plane curves using the concept of generalized\nlinear systems on Gorenstein curves. We need this more general result in\nthe proof of Theorem \\ref{thm:1}}\n\\label{rem:1}\n\\end{rem}\n\\vs 2\n{\\it Proof of Theorem\\\/} \\ref{thm:1}. Assume $g^r_n=rg^1_{n\/r}$ and\n$n<(x+3)(d-3)-\\beta$. Noting $2r=(x+1)(x+2)-2\\beta\\ge x^2+x+2\\ge x+3$,\nwe have $\\fracd{(x+3)(d-3)-\\beta}{r}< 2(d-2)$. Hence,\n$g^1_{n\/r}=|g^2_d-P|$ for some $P\\in C$. Since $\\dim |rg^1_{n\/r}|=r$,\ncertainly $\\dim |2g^1_{n\/r}|=2$. But $\\dim|2g^2_d-2P|=3$.\nA contradiction.\n\nSince $g^r_n$ is special, there exist an integer $1\\le m\\le d-3$\nand a linear system ${\\bbb P}\\subset{\\bbb P}_m$ such that\n$g^r_n=f({\\bbb P}.C)$ and ${\\bbb P}$ has no fixed components. 
In Lemma\n\\ref{lem:1} we are going to prove that, because $g^r_n$ is not a multiple\nof a pencil, a general element ${\\mit \\Gamma}$ of ${\\bbb P}$ is irreducible.\n\nNow, for each element ${\\mit \\Gamma}'$ of ${\\bbb P}$ we have\n$F({\\bbb P}.C)\\subset{\\mit \\Gamma}'$ (inclusion of subschemes of ${\\bbb P}^2$). In\nparticular $F({\\bbb P}.C)\\subset{\\mit \\Gamma}\\cap{\\mit \\Gamma}'$. This remark is known\nin the literature as Namba's lemma. As a subscheme of ${\\mit \\Gamma}$,\n$F({\\bbb P}.C)$ is an effective generalized divisor on ${\\mit \\Gamma}$ (terminology\n of \\cite{harts1}). We find that for each ${\\mit \\Gamma}'\\in{\\bbb P}$ with\n${\\mit \\Gamma}'\\ne{\\mit \\Gamma}$ the residual of $F({\\bbb P}.C)$ in ${\\mit \\Gamma}\\cap{\\mit \\Gamma}'$ (we denote\nit by ${\\mit \\Gamma}\\cap{\\mit \\Gamma}'-F({\\bbb P}.C)$) is an element of the generalized\ncomplete linear system on ${\\mit \\Gamma}$ associated to\n${\\cal O}_{{\\mit \\Gamma}}(m-F({\\bbb P}.C))$. Hence, we obtain a generalized linear\n system $g^{r-1}_{m^2-(md-n)}$ on ${\\mit \\Gamma}$.\n\nNow we are going to apply Hartshorne's theorem (Remark \\ref{rem:1}) to\nthis $g^{r-1}_{m^2-(md-n)}$ on ${\\mit \\Gamma}$. Since $g^r_n$ is non-trivial on\n$C$, we know that $r>\\fracd{m^2+3m}{2}-(md-n)$. If\n$m^2-(md-n)>m(m-3)$, then i) in Remark \\ref{rem:1} implies\n$r-1\\le m^2+n-md-\\fracd{(m-1)(m-2)}{2}$ so $r\\le\\fracd{m^2+3m}{2}-(md-n)$,\n a contradiction.\n\nSo $m^2-(md-n)\\le m(m-3)$ and we apply ii) in Remark \\ref{rem:1}. We\nfind $x\\le m-3$ and $m^2+n-md\\ge mx-\\beta$, so\n$n\\ge\\varphi (m):=-m^2+m(d+x)-\\beta$. Since $x+3\\le m\\le d-3$, we find\n$n\\ge\\varphi (x+3)=\\varphi (d-3)=(d-3)(x+3)-\\beta=n(r)$. This completes the proof\nof the theorem except for the proof of Lemma \\ref{lem:1}.\n\\vs 2\n\\begin{lem}\nLet $C$ be a smooth plane curve of degree $d$ and let $g^r_n$ be a base\npoint free complete linear system on $C$ which is not a multiple of a\none-dimensional linear system. 
Suppose there exists a linear system\n${\\bbb P}\\subset{\\bbb P}_e$ without fixed component for some $e\\le d-1$ such\nthat $g^r_n=f({\\bbb P}.C)$. Then the general element of ${\\bbb P}$ is irreducible.\n\\label{lem:1}\n\\end{lem}\n\\def\\underline{e}{\\underline{e}}\n\\def\\underline{m}{\\underline{m}}\n\\vs 2\n{\\it Proof\\\/}. Let $F=F({\\bbb P} .C)=\\sum_{j=1}^sn_jP_j$ with $n_j\\ge 1$ and\n$P_i\\ne P\\j$ for $i\\ne j$. For $t\\in{\\bbb Z}_{\\ge 1}, \\underline{e} =\n(e_1,\\dots ,e_t)\\in ({\\bbb Z}_{\\ge 1})^t$ with $\\sum_{i=1}^te_i=e$ and\n$\\underline{m} =[m_{ij}]_{1\\le i\\le t,1\\le j\\le s}$, let\n$$\nV(t,\\underline{e} ,\\underline{m} )=\\{ {\\mit \\Gamma}_1+\\cdots +{\\mit \\Gamma}_t:{\\mit \\Gamma}_i\\in{\\bbb P}_{e_i}{\\rm\\ is\\\nirreducible\\ and\\ }i({\\mit \\Gamma}_i,C;P_j)=m_{ij}\\}.\n$$\nIt is not so difficult to prove that this defines a stratification of\n${\\bbb P}_e$ by means of locally closed subspaces.\n\nSince ${\\bbb P}$ is irreducible there is a unique triple $(t_0,\\underline{e}_0,\n\\underline{m}_0)$ such that ${\\bbb P}\\cap V(t_0,\\underline{e}_0,\\underline{m}_0)$ is an open\nnon-empty subset of ${\\bbb P}$. In particular, ${\\bbb P}\\subset\n\\{ {\\mit \\Gamma}_1+\\cdots +{\\mit \\Gamma}_{t_0}:{\\mit \\Gamma}_i\\in{\\bbb P}_{e_{0i}}{\\rm\\ and\\ }\ni({\\mit \\Gamma}_i,C;P_j)\\ge m_{0ij}\\}$. We need to prove that $t_0=1$,\nso assume that $t_0>1$. Let forget the subscript $0$ from now on.\n\nLet $F_i=\\sum_{j=1}^sm_{ij}P_j\\subset C$. For each $D\\in g^r_n$ there\nexists ${\\mit \\Gamma} ={\\mit \\Gamma}_1+\\dots +{\\mit \\Gamma}_t$ with ${\\mit \\Gamma}_i\\in{\\bbb P}_i(-F_i)$ and\n$D={\\mit \\Gamma} .C-(F_1+\\dots +F_s)=\\sum_{j=1}^t({\\mit \\Gamma}_i.C-F_i)$. Writing\n$D_i={\\mit \\Gamma}_i.C-F_i\\in|e_ig^2_d-F_i|$ we find $D=\\sum_{i=1}^tD_i$.\nSuppose for some $1\\le i\\le t$ we have $\\dim|e_ig^2_d-F_i|=0$. 
If\n${\\mit \\Gamma}'$ and ${\\mit \\Gamma}''$ are in ${\\bbb P}_i(-F_i)$ then ${\\mit \\Gamma}'.C={\\mit \\Gamma}''.C$, but\n$e_i\\mu_P(Z)\\ge i({\\mit \\Gamma} ,{\\mit \\Gamma}';P)$. Also\n$({\\mit \\Gamma}'+\\sum_{i=1}^xL_i).C-Z\\in|mg^2_d-Z|$, hence\n$P\\in({\\mit \\Gamma}'+\\sum_{i=1}^xL_i).C-Z=({\\mit \\Gamma}'.C-{\\mit \\Gamma}'\\cap{\\mit \\Gamma} )+(\\sum_{i=1}^xL_i.C\n-E)$ (sum of two effective divisors). Since\n$P\\not\\in\\sum_{i=1}^xL_i.C-E$, we find $P\\in{\\mit \\Gamma}'.C-{\\mit \\Gamma}'\\cap{\\mit \\Gamma}$. This\nimplies $i({\\mit \\Gamma}',C;P)>i({\\mit \\Gamma} ,{\\mit \\Gamma}';P)$. But\n$i({\\mit \\Gamma} ,{\\mit \\Gamma}';P)\\ge\\min (i({\\mit \\Gamma} ,C;P),i({\\mit \\Gamma}',C;P))$ (so called Namba's\nlemma), hence we have a contradiction.\n\\vs{05}\nii) $\\dim (|mg^2_d-Z|)\\ge r$.\n\nIndeed, $({\\mit \\Gamma}'+{\\bbb P}_x(-E)).C-Z\\subset|mg^2_d-Z|$ and $\\dim ({\\mit \\Gamma}'+{\\bbb P}_x(-E))=\n\\fracd{(x+1)(x+2)}{2}-\\beta -1$. But also ${\\mit \\Gamma} .C-Z\\subset|mg^2_d-Z|$ while\n${\\mit \\Gamma} .C\\not\\in ({\\mit \\Gamma}'+{\\bbb P}_x(-E)).C$. This proves the claim.\n\\vs{05}\niii) $\\dim (|mg^2_d-Z|)=r$.\n\nIf $\\dim (|mg^2_d-Z|)>r$ then on ${\\mit \\Gamma}$ it induces a linear system\n$g^{r'}_{mx-\\beta}$ with $r'\\ge r$. But Hartshorne's theorem (see 1.3)\nimplies that this is impossible.\n\niv) $|mg^2_d-Z|$ is not trivial.\n\nFirst of all, $|mg^2_d-Z|$ is very special. Indeed $(d-3-m)g^2_d+Z\\subset\n|K_C-(mg^2_d-Z)|$. If $d-3=m$ then from the Riemann-Roch theorem, one\nfinds $\\dim|Z|=1$.\n\nSuppose $|mg^2_d-Z|$ would be trivial, i.~e. $|mg^2_d-Z|=|kg^2_d-F|$ with\n$r=\\fracd{k^2+3k}{2}-(dk-n)$. Since $g^r_n$ is very special, one has\n$ki({\\mit \\Gamma}',{\\mit \\Gamma} ;P)$\nand $i({\\mit \\Gamma} ,C;P)>i({\\mit \\Gamma}',{\\mit \\Gamma} ;P)$, a contradiction to Namba's lemma. This\nimplies $\\gamma'.C\\ge{\\mit \\Gamma}'.C-{\\mit \\Gamma}'\\cap{\\mit \\Gamma}$. 
Once more from Namba's lemma, we\nobtain ${\\mit \\Gamma}'.\\gamma'\\ge{\\mit \\Gamma}'.C-{\\mit \\Gamma}'\\cap{\\mit \\Gamma}$ and so\n$$\n\\deg ({\\mit \\Gamma}'.\\gamma')=3(k-x)\\ge\\deg ({\\mit \\Gamma}'.C-{\\mit \\Gamma}'\\cap{\\mit \\Gamma} )=3(d-x-3),\n$$\na contradiction to $km\\ge 4$. There exists ${\\mit \\Gamma}'\\in{\\bbb P}_3$ and a smooth ${\\mit \\Gamma}\\in{\\bbb P}_m$\nsuch that, as schemes, ${\\mit \\Gamma}\\cap{\\mit \\Gamma}'\\subset C$.\n\\end{thm}\n\\vs 1\n{\\it Proof\\\/}. Fix two general lines $L_1, L_2$ in ${\\bbb P}^2$, let\n$S=L_1\\cap L_2$. We may assume neither $L_1$ nor $L_2$ is a tangent\nline of $C$ and $S\\not\\in C$. Choose points $P_{11},\\dots ,P_{1m}$ on\n$C\\cap L_1$ and $P_{21},\\dots ,P_{2m}$ on $C\\cap L_2$. Choose a general\npoint $S'$ in ${\\bbb P}^2\\backslash C\\cup L_1\\cup L_2$. The pencil of lines\nin ${\\bbb P}^2$ through $S'$ induces a base point free $g^1_d$ on $C$.\nBecause $S'$ is general we have:\n\\begin{itemize}\n\\item If $Q$ is a ramification point of $g^1_d$ then the associated\ndivisor looks like $2Q+E$ with $Q\\not\\in E$ and $E$ consists of $d-2$\ndifferent points (here we use characteristic zero).\n\n\\item If $Q\\in L_i\\cap C$ $(i=1,2)$ then $Q$ is not a ramification of\n$g^1_d$. The associated divisor is $Q+E$ with\n$E\\cap (L_1\\cup L_2)=\\emptyset$.\n\n\\item The line $SS'$ is not a tangent line of $C$.\n\\end{itemize}\nOn the symmetric product $C^{(m)}$ we consider\n$V=\\{ E\\in C^{(m)}:{\\rm there\\ exists\\ }D\\in g^1_d\\ {\\rm with\\ }E\\le D\\}$.\n In terminology of \\cite{c2} it is the set $V^1_k(g^1_d)$ and we consider\n$V$ with its natural scheme structure. From Chapter 2 in\n{\\it loc.~cit.}, it follows that $V$ is a smooth curve.\n\nLet $D_0\\in g^1_d$ corresponding to the line $SS'$ and let\n$V_0=\\{ E\\in V:E\\le D_0\\}$. We define a map\n$\\psi :V\\backslash V_0\\to{\\bbb P}^1$ as\nfollows. Associated to $E\\in V\\backslash V_0$ there is a line $L$\nthrough $S'$. 
Write $E=P_{31}+\\cdots +P_{3m}$.\nWe distinguish 3 possibilities:\n\\vs{05}\ni) $E\\cap (L_1\\cup L_2)=\\emptyset$. Choose coordinates\n$x,y,z$ on ${\\bbb P}^2$ such that $L_1, L_2, L$ corresponds to the coordinate\naxes $x=0, y=0, z=0$, respectively. Let $(x_{ij}:y_{ij}:z_{ij})$ be the\ncoordinates of $P_{ij}$ $(i=1,2,3;1\\le j\\le m)$. Then\n$$\n\\psi (E)=\n\\prod^m_{j=1}\\left(\\frac{y_{1j}}{z_{1j}}\\right)\n\\prod^m_{j=1}\\left(\\frac{z_{2j}}{x_{2j}}\\right)\n\\prod^m_{j=1}\\left(\\frac{x_{3j}}{y_{3j}}\\right).\n$$\nAs long as we take $L_1, L_2, L$ as axes, this value is independent of\nthe coordinates.\n\\vs{05}\nii) $E\\cap (L_1\\cup L_2)\\ne\\emptyset$. Say\n$P_{11}=P_{31}\\in E\\cap (L_1\\cup L_2)$. Choose coordinates as before\nand let $\\alpha x+z=0$ be the equation of the tangent line to $C$ at\n$P_{11}$ $(\\alpha\\ne 0)$. Then\n$$\n\\psi (E)=\\alpha\n\\prod^m_{j=2}\\left(\\frac{y_{1j}}{z_{1j}}\\right)\n\\prod^m_{j=1}\\left(\\frac{z_{2j}}{x_{2j}}\\right)\n\\prod^m_{j=2}\\left(\\frac{x_{3j}}{y_{3j}}\\right).\n$$\nAgain taking $L_1, L_2, L$ as axes, this value is independent of\nthe coordinates. (Of course this is a function to ${\\bbb C}$ and we\nconsider ${\\bbb P}^1={\\bbb C}\\cup\\{\\infty\\}$.)\n\\vs{05}\niii) If $\\psi (E)$ is not defined in ${\\bbb C}$ then $\\psi (E)=\\infty$.\n\\vs{05}\nFor $E\\in V_0$, we define $\\psi (E)=(-1)^m$\n\\vs{05}\nThis map is a holomorphic map. Indeed, fixing coordinates $(x:y:z)$ such\nthat $L_1, L_2$ corresponds to $x=0,y=0$, resp. and $S'=(1:1:0)$, we can\nwrite $z-\\gamma (x-y)=0$ for the pencil of lines through $S'$ (except for\n$SS'$). If $E\\in V\\backslash V_0$ and $E$ is a part of a divisor of\n$g^1_d$ consisting of $d$ different points, then $\\gamma$ is a local\ncoordinate of $V$ at $E$. In case i) we write down $\\psi$ locally as a\nholomorphic function in $\\gamma$. 
It is easy to check that $\\psi$ is\ncontinuous at $E$ in case ii).\n\nFor $E\\in V_0$, write $\\beta z+(x-y)=0$ for the pencil of lines through\n$S'$ close to $SS'$. Let $(x_{3j}:x_{3j}:z_{3j})$ be the coordinates at\nthe points $P_{3j}$ of $E$. For $E'\\in V$ close to $E$ we have\n$E'=\\sum_{j=1}^mP'_{3j}$ and coordinates\n$(x'_{3j}:\\beta z'_{3j}+x'_{3j}:z'_{3j})$ at $P'_{3j}$. Here we can assume\nthat $x'_{3j}=x'_{3j}(\\beta ), z'_{3j}=z'_{3j}(\\beta )$ are holomorphic\nfunctions in $\\beta$ (local coordinate at $V$ in $E$) and\n$x_{3j}=x'_{3j}(0), z_{3j}=z'_{3j}(0)$.\n\nChoose new coordinates $\\xi =\\beta z+x-y, \\eta =y, \\zeta =x$. The coordinates\nof $P_{1j}$ are $(0:y_{1j}:\\beta z_{1j}+y_{1j})$, of $P_{2j}$ are\n$(x_{2j}:0:\\beta z_{2j}+x_{2j})$, of $P'_{3j}$ are\n$(x'_{3j}:\\beta z'_{3j}+x'{3j}:0)$. We find\n\\begin{eqnarray}\n\\psi (E') & = &\n\\prod^m_{j=1}\\frac{y_{1j}}{\\beta z_{1j}-y_{1j}}\n\\prod^m_{j=1}\\frac{\\beta z_{2j}+x_{2j}}{x_{2j}}\n\\prod^m_{j=1}\\frac{x'_{3j}}{\\beta z'_{3j}+x'_{3j}}\n\\label{eq:v0}\n\\\\\n& = & (-1)^m-\\left(\n\\sum^m_{j=1}\\frac{z_{3j}}{x_{3j}}-\n\\sum^m_{j=1}\\frac{z_{1j}}{y_{1j}}-\n\\sum^m_{j=1}\\frac{z_{2j}}{x_{2j}}\\right)\\beta\n+o(\\beta ).\\nonumber\n\\end{eqnarray}\nHence, $\\psi$ is continuous at $E$.\n\nSince $V$ is smooth and $\\psi$ is continuous on $V$ and holomorphic except\nfor a finite number of points, $\\psi$ is a holomorphic map $V\\to{\\bbb P}^1$.\n\nAt some component of $V$, $\\psi$ is not constant.\nIndeed, look at a fibre $2Q+E\\in g^1_d$ with $E\\in C^{(d-2)}$. Take a\nclose fibre $P_1+P_2+E'$ with $P_1, P_2$ close to $Q$. Choose $F\\le E'$\nwith $\\deg(F)=m-1$ and consider $P_1+F\\in V$. Let $W$ be the irreducible\ncomponent of $V$ containing $P_1+F$. Using monodromy one finds\n$P_2+F\\in W$. 
But clearly $\\psi (P_1+F)\\ne\\psi (P_2+F)$, hence\n$\\psi :W\\to{\\bbb P}^1$ is a covering.\nIn particular $\\psi^{-1}((-1)^m)\\ne\\emptyset$.\n\nIf for some $E\\in W\\backslash V_0$ we have $\\psi (E)=(-1)^m$ then the\ntheorem follows from Lemmas \\ref{lem:car1}, \\ref{lem:car2} and Remark\n\\ref{rem:33}. So, we have to take a closer look to $\\psi$ at $V_0$.\nBy the equation (\\ref{eq:v0}), if $E\\in V_0$ is not a simple zero of\n$\\psi -(-1)^m$ then the theorem follows from Lemma \\ref{lem:car3} and\nRemark \\ref{rem:33}.\n\nSuppose that each zero of $\\psi -(-1)^m$ belonging to $V_0$ is simple.\nThen $\\psi -(-1)^m$ has exactly $\\binom{d}{m}$ zeros at those points.\nNow we look at zeros of $\\psi$ on $V\\backslash V_0$. The number of zeros\nis finite. For case i) there is none. For case ii) we have two\npossiblities. If $E\\in V$ corresponds to a line $L$ through $S'$\ncontaining one of the points $P_{21},\\dots ,P_{2m}$ but\n$E\\cap (L_1\\cup L_2)=\\emptyset$. There are $m\\binom{d-1}{m}$ such\npossibilities. If $E\\in V$ corresponds to a line $L$ through $S'$ not\ncontaining any of the points $P_{11},\\dots ,P_{1m}$ but\n$E\\cap L_1\\ne\\emptyset$. There are $(d-m)\\binom{d-1}{m-1}$ such\npossibilities. So, on the components of $V$ where $\\psi$ is not constant,\n$\\psi$ has at least $m\\binom{d-1}{m}+(d-m)\\binom{d-1}{m-1}$ zeros.\nBut this number is greater than $\\binom{d}{m}$, so $\\psi -(-1)^m$ has a\nzero on $V\\backslash V_0$. This completes the proof of the theorem.\n\\vs 2\n\\begin{rem}\n{\\rm In order to obtain the bound $r(n)$ mentioned in the introduction,\nwe have to prove that $C$ possesses no base point free very special\nnon-trivial linear systems $g^r_n$ with $r\\ge\\fracd{(d-4)(d-3)}{2}-(d-5)$\n(i.~e. $x\\ge d-5$). In the introduction we already noticed that\n$x\\le d-3$. 
Assume $g^r_n$ is a very special non-trivial linear system.\n{}From Theorem \\ref{thm:1} we find $n\\ge n((d-4)(d-5)\/2-(d-5))=d^2-6d+11$.\nBut then $\\deg (K_C-g^r_n)\\le (d-1)(d-2)-2-(d^2-6d+11)=3d-11$. However,\nvery special linear systems $g^s_m$ of degree $m\\le 3d-11$ are trivial.\nSo, the associated base point free linear system $g^s_m$ of $|K_C-g^r_n|$\nis of type $|ag^2_d-E|$ with $a\\le d-4$, $E$ effective and\n$\\dim|K_C-g^r_n|=\\fracd{a^2+3a}{2}-\\deg E$. If $E\\ne\\emptyset$, then for\n$P\\in E$ one has $\\dim|ag^2_d-E+P|>\\dim|ag^2_d-E|$, so $\\dim|g^r_n-P|=r$.\nThis implies that $g^r_n$ has a base point, hence $E=\\emptyset$. But\nthen, $g^r_n=|(d-3-a)g^2_d-F|$ and since $\\dim|ag^2_d+F|=\\dim|ag^2_d|$,\nwe have $r=\\fracd{(d-3-a)^2+3(d-3-a)}{2}-\\deg F$. This is a\ncontradiction to the fact that $g^r_n$ would be non-trivial.}\n\\end{rem}\n\\vs 1\n\\begin{rem}\n{\\rm It would be interesting to find an answer to the following questions:\nFor which values of $n$ do there exist non-trivial base point free very\nspecial linear systems $g^r_n$ on a (general) smooth plane curve.\nClassify those linear systems and study $W^r_n$ on $J(C)$. More\nconcretely, is the subscheme of $W^r_{n(r)}$ corresponding to non-trivial\nlinear systems irreducible ? What are the dimension of those irreducible\ncomponents ? And so on.\n}\n\\end{rem}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzaxad b/data_all_eng_slimpj/shuffled/split2/finalzzaxad new file mode 100644 index 0000000000000000000000000000000000000000..cea0c4314331b9a6a307ad2c9cb370ec89187c25 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzaxad @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\n\nOne of the central concepts of linear algebra is the notion of a basis for a vector space: a subset of a vector space is called a basis for the former if every vector can be uniquely written as a finite linear combination of basis elements. 
Part of the importance of bases stems from the convenient consequences that follow from their existence. For example, linear transformations between vector spaces admit matrix representations relative to pairs of bases \\cite{lang2004algebra}, which can be used for efficient numerical calculations. The idea of a basis however is not restricted to the theory of vector spaces: other algebraic theories have analogous notions of bases -- sometimes by waiving the uniqueness constraint --, for instance modules, semi-lattices, Boolean algebras, convex sets, and many more. In fact, the theory of bases for vector spaces is different to others only in the sense that every vector space admits a basis, which is not the case for e.g. modules. In this paper we seek to give a compact definition of a basis that subsumes the well-known cases, and as a consequence allows us to lift results from one theory to the others. For example, one may wonder if there exists a matrix representation theory for convex sets that is analogous to the one of vector spaces.\n\nIn the category theoretic approach to universal algebra, algebraic structures are typically captured as algebras over a monad \\cite{eilenberg1965adjoint, linton1966some}. Intuitively, a monad may be seen as a generalisation of closure operators on partially ordered sets, and an algebra over a monad may be viewed as a set with an operation that allows the interpretation of formal linear combinations in a way that is coherent with the monad structure. For instance, a vector space over a field $k$, that is, an algebra for the free $k$-vector space monad, is given by a set $X$ with a function $h$ that coherently interprets a finitely supported $X$-indexed $k$-sequence $\\lambda$ as an actual linear combination $h(\\lambda) = \\sum_x \\lambda_x \\cdot x$ in $X$ \\cite{coumans2010scalars}. 
\nIt is straightforward to see that under this perspective a basis for a vector space thus consists of a subset $Y$ of $X$ and a function $d$ that assigns to a vector $x$ in $X$ a $Y$-indexed $k$-sequence $d(x)$ such that $h(d(x)) = x$ for all $x$ in $X$ and $d(h(\\lambda)) = \\lambda$ for all $Y$-indexed $k$-sequences $\\lambda$. In other words, the restriction of $h$ to $Y$-indexed $k$-sequences is an isomorphism with inverse $d$, and surjectivity corresponds to the fact that the subset $Y$ generates the vector space, while injectivity captures that $Y$ does so uniquely. As demonstrated in \\Cref{basisdefinition}, the concept easily generalises to arbitrary monads on arbitrary categories by making the subset relation explicit in form of a function.\n\n\nMonads however not only occur in the context of universal algebra, but also play a role in algebraic topology \\cite{godementtheorie} and theoretical computer science \\cite{moggi1988computational, moggi1990abstract,\t\tmoggi1991notions}. Among others, they are a convenient tool for capturing side-effects of coalgebraic systems \\cite{rutten1998automata}: popular examples include the powerset monad (non-determinism), the distribution monad (probability), and the neighbourhood monad (alternating). While coalgebraic systems with side-effects can be more compact than their deterministic counterparts, they often lack a unique minimal acceptor for a given language. For instance, every regular language admits a unique deterministic automaton with minimal state space, which can be computed via the Myhill-Nerode construction. On the other hand, for some regular languages there exist multiple non-isomorphic non-deterministic automata with minimal state space. The problem has been independently approached for different variants of side-effects, often with the common idea of restricting to a particular subclass \\cite{esposito2002learning, berndt2017learning}. 
For example, for non-deterministic automata, the subclass of so-called residual finite state automata has been identified as suitable.\n Moreover, it has turned out that in order to construct a unique minimal realisation in one of the subclasses, it is often sufficient to derive an equivalent system with free state space from a particular given system \\cite{van2017learning}. As Arbib and Manes realised \\cite{arbib1975fuzzy}, instrumental to the former is what they call a \\textit{scoop}, or what we call a generator in \\Cref{generatordefinition}, a slight generalisation of bases. In other words, our definition of a basis for an algebra over a monad has its origin in a context that is not purely concerned with universal algebra. Throughout the paper we will value these roots by lifting results of Arbib and Manes from scoops to bases. More importantly, we believe that our treatment allows us to uncover hidden ramifications between certain areas of universal algebra and the theory of coalgebras.\n\nThe paper is structured as follows. In \\Cref{preliminariessec} we recall the basic categorical notions of a monad, algebras over a monad, coalgebras, distributive laws, and bialgebras. In \\Cref{generatorssec} we introduce generators for algebras over monads and exemplify their relation with free bialgebras. The definition of bases for algebras over monads, basic results, and their relation with free bialgebras is covered in \\Cref{basesection}. Questions about the existence and the uniqueness of bases are answered in \\Cref{existencebasesec} and \\Cref{uniquebasesec}, respectively. In \\Cref{representationtheorysec} we generalise the representation theory of linear maps between vector spaces to a representation theory of homomorphisms between algebras over a monad.\nThe intuition that bases for an algebra over a monad coincide with free isomorphic algebras is clarified in \\Cref{basesfreealgebrasec}. 
In \\Cref{basesforbialgebrasec} we look into bases for bialgebras, which are algebras for a particular monad. In \\Cref{examplesec} we instantiate the theory for a variety of monads. Related work and future work are discussed in \\Cref{relatedworksec} and \\Cref{discussionsec}, respectively. Further details can be found in \\Cref{basesascoaglebrassec}.\n\n\n\n\n\\section{Preliminaries}\n\n\\label{preliminariessec}\n\nWe only assume a basic knowledge of category theory, e.g. an understanding of categories, functors, natural transformations, and adjunctions. All other relevant definitions can be found in the paper. In this section, we recall the notions of a monad, algebras for a monad, coalgebras, distributive laws, and bialgebras.\n\nThe concept of a monad can be traced back both to algebraic topology \\cite{godementtheorie} and to an alternative to Lawvere theory as a category theoretic formulation of universal algebra \\cite{eilenberg1965adjoint, linton1966some}. For an extended historical overview we refer to the survey of Hyland and Power \\cite{hyland2007category}. In the context of computer science, monads have been introduced by Moggi as a general perspective on exceptions, side-effects, non-determinism, and continuations \\cite{moggi1988computational, moggi1990abstract,\t\tmoggi1991notions}. 
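The unit-and-multiplication structure just mentioned can be made concrete before stating the formal definition. The sketch below is our own illustration, not part of the paper (the dict and pair encodings are our assumptions): it implements the multiset monad over the real numbers — finitely supported maps with singleton unit and flattening multiplication, as in the vector-space example recalled later in this section — and checks the monad laws on sample data.

```python
# TX: finitely supported maps X -> R, encoded as dicts {x: coeff}.
# T^2 X: formal combinations of such maps, encoded as lists of
# (dict, coeff) pairs, since dicts are not hashable.

def unit(x):
    # eta_X(x) = 1·x
    return {x: 1.0}

def mult(phi):
    # mu_X(sum_i s_i · phi_i)(x) = sum_i s_i · phi_i(x)
    out = {}
    for psi, s in phi:
        for x, t in psi.items():
            out[x] = out.get(x, 0.0) + s * t
    return {x: c for x, c in out.items() if c != 0.0}

v = {"a": 2.0, "b": -1.0}  # a sample element of TX

# left unit law:  mu_X ∘ eta_TX = id
assert mult([(v, 1.0)]) == v
# right unit law: mu_X ∘ T(eta_X) = id
assert mult([(unit(x), s) for x, s in v.items()]) == v

# associativity on a sample element of T^3 X
w = [([({"a": 1.0}, 2.0), ({"b": 3.0}, 1.0)], 4.0),
     ([({"a": -1.0}, 1.0)], 2.0)]
lhs = mult([(mult(chi), s) for chi, s in w])            # mu_X ∘ T(mu_X)
rhs = mult([(d, s * t) for chi, s in w for d, t in chi])  # mu_X ∘ mu_TX
assert lhs == rhs
```

Both composites collapse the nested combination to the same element of $TX$, which is exactly what the first commutative diagram in the definition below asserts.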
\n\n\\begin{definition}[Monad]\n\tA \\textit{monad} on $\\mathscr{C}$ is a tuple $\\mathbb{T} = (T, \\mu, \\eta)$ consisting of an endofunctor $T: \\mathscr{C} \\rightarrow \\mathscr{C}$ and natural transformations $\\mu: T^2 \\Rightarrow T$ and $\\eta: \\textnormal{id}_{\\mathscr{C}} \\Rightarrow T$ satisfying the commutative diagrams\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tT^3X \\arrow{r}{T\\mu_X} \\arrow{d}[left]{\\mu_{TX}} & T^2X \\arrow{d}{\\mu_X}\\\\\n\t\t\tT^2X \\arrow{r}{\\mu_X} & TX \n\t\t\\end{tikzcd}\t\n\t\t\\qquad\n\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTX \\arrow{r}{\\eta_{TX}} \\arrow{d}[left]{T\\eta_X} \\arrow{dr}{\\textnormal{id}_{TX}} & T^2X \\arrow{d}{\\mu_X}\\\\\n\t\t\tT^2X \\arrow{r}{\\mu_X} & TX \n\t\t\\end{tikzcd}\t\n\t\\end{equation*}\t\n\t\t\tfor all objects $X$ in $\\mathscr{C}$.\t\t\n\\end{definition}\n\nMany examples of monads arise as the result of a free-forgetful adjunction, for instance the free group monad or the free vector space monad. Below we provide some details for the latter case. More monads are covered in \\Cref{monoidmonad} and \\Cref{examplesec}.\n\n\\begin{example}[Vector spaces]\n\\label{freevectorspacemonadexample}\n\tThe free $k$-vector space monad is an instance of the so-called multiset monad over some semiring $S$, when $S$ is given by the field $k$. The underlying set endofunctor $T$ assigns to a set $X$ the set of finitely-supported $X$-indexed sequences $\\varphi$ in $S$, typically written as formal sums $\\sum_i s_i \\cdot x_i$ for $s_i = \\varphi(x_i)$; the unit $\\eta_X$ maps an element in $X$ to the singleton multiset $1 \\cdot x$; and the multiplication $\\mu_X$ satisfies $\\mu_X(\\sum_i s_i \\cdot \\varphi_i)(x) = \\sum_i s_i \\cdot \\varphi_i(x)$ \\cite{coumans2010scalars}. 
\n\\end{example}\n\nIf a monad results from a free-forgetful adjunction induced by some algebraic structure, the latter may be recovered in the following sense:\n\n\\begin{definition}[$\\mathbb{T}$-algebra]\n\tAn \\textit{algebra} for a monad $\\mathbb{T} = (T, \\mu, \\eta)$ on $\\mathscr{C}$ is a tuple $(X, h)$ consisting of an object $X$ and a morphism $h: TX \\rightarrow X$ such that the diagrams on the left and in the middle below commute\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tT^2X \\arrow{r}{\\mu_X} \\arrow{d}[left]{Th} & TX \\arrow{d}{h}\\\\\n\t\t\tTX \\arrow{r}{h} & X \n\t\t\\end{tikzcd}\t\n\t\t\\qquad \n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tX \\arrow{r}[above]{\\textnormal{id}_X} \\arrow{d}[left]{\\eta_X} & X \\\\\n\t\t\t TX \\arrow{ur}[right]{h} & \n\t\t\\end{tikzcd}\n\t\t\\qquad\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTX \\arrow{r}{Tf} \\arrow{d}[left]{h_X} & TY \\arrow{d}{h_Y}\\\\\n\t\t\tX \\arrow{r}{f} & Y\n\t\t\\end{tikzcd}\t.\n\t\\end{equation*}\nA \\textit{homomorphism} $f: (X, h_X) \\rightarrow (Y,h_Y)$ between $\\mathbb{T}$-algebras is a morphism $f: X \\rightarrow Y$ such that the diagram on the right above commutes. The category of $\\mathbb{T}$-algebras and homomorphisms is denoted by $\\mathscr{C}^{\\mathbb{T}}$.\n\\end{definition}\n\nThe canonical example for an algebra over a monad is the free $\\mathbb{T}$-algebra $(TX, \\mu_X)$ for any object $X$ in $\\mathscr{C}$. \nBelow we give some more details on how to recognise algebras over the free vector space monad as vector spaces.\n\n\\begin{example}[Vector spaces]\n\\label{vectorspacemonadalgebras}\n\tLet $\\mathbb{T}$ be the free vector space monad defined in \\Cref{freevectorspacemonadexample}. 
Every $\\mathbb{T}$-algebra $(X,h)$ induces a vector space structure on its underlying set by interpreting a finite formal linear combination $\\sum_i \\lambda_i \\cdot x_i$ as an element $h(\\varphi) \\in X$ for $\\varphi(x) := \\lambda_i$, if $x = x_i$, and $\\varphi(x) := 0$ otherwise. Conversely, every vector space with underlying set $X$ induces an algebra $(X,h)$ over $\\mathbb{T}$ by defining $h(\\varphi) := \\sum_{x} \\lambda_x \\cdot x$ with $\\lambda_x := \\varphi(x)$ for a finitely-supported sequence $\\varphi$.\n\t\\end{example}\n \n\n\n\nWe now turn our attention to the dual of algebras: coalgebras \\cite{rutten1998automata}. While algebras have been used in the context of computer science to model finite data types, coalgebras deal with infinite data types and have turned out to be well suited as an abstraction for a variety of state-based systems \\cite{rutten2000universal}. \n\n\\begin{definition}[$F$-coalgebra]\n\tA \\textit{coalgebra} for an endofunctor $F: \\mathscr{C} \\rightarrow \\mathscr{C}$ is a tuple $(X, k)$ consisting of an object $X$ and a morphism $k: X \\rightarrow FX$. A \\textit{homomorphism} $f: (X, k_X) \\rightarrow (Y,k_Y)$ between $F$-coalgebras is a morphism $f: X \\rightarrow Y$ satisfying the commutative diagram\n\t\\begin{equation*}\n\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tX \\arrow{r}{f} \\arrow{d}[left]{k_X} & Y \\arrow{d}{k_Y}\\\\\n\t\t\tFX \\arrow{r}{Ff} & FY \n\t\t\\end{tikzcd}\t.\t\n\\end{equation*}\nThe category of coalgebras and homomorphisms is denoted by $\\textsf{Coalg}(F)$. \nAn $F$-coalgebra $(\\Theta, k_{\\Theta})$ is \\textit{final} if it is final in $\\textnormal{\\textsf{Coalg}}(F)$, that is, for every $F$-coalgebra $(X,k_X)$ there exists a unique homomorphism \\[ !_{(X,k_X)}: (X,k_X) \\longrightarrow (\\Theta, k_{\\Theta}). 
\\]\n\\end{definition}\n\nFor example, coalgebras for the set endofunctor $FX = X^A \\times B$ are unpointed Moore automata with input $A$ and output $B$, and the final $F$-coalgebra homomorphism assigns to a state $x$ of an unpointed Moore automaton the behaviour in $A^* \\rightarrow B$ realised by the latter when started in the state $x$.\n\nWe are particularly interested in systems with side-effects, for instance non-deterministic automata. Often such systems can be realised as coalgebras for an endofunctor composed of a monad and an endofunctor similar to $F$ above. In these cases the compatibility between the dynamics of the system and its side-effects can be captured by a so-called distributive law. Distributive laws originally arose as a way to compose monads \\cite{beck1969distributive}, but now also exist in a wide range of other forms \\cite{Street2009}. For our particular case it is sufficient to consider distributive laws between a monad and an endofunctor.\n\n\\begin{definition}[Distributive law]\n\tLet $\\mathbb{T} = (T, \\mu, \\eta)$ be a monad on $\\mathscr{C}$ and $F: \\mathscr{C} \\rightarrow \\mathscr{C}$ an endofunctor. A natural transformation $\\lambda: TF \\Rightarrow FT$ is called a \\textit{distributive law} if it satisfies \n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tFX \\arrow{r}[above]{F\\eta_X} \\arrow{d}[left]{\\eta_{FX}} & FTX \\\\\n\t\t\t TFX \\arrow{ur}[right]{\\lambda_X} & \n\t\t\\end{tikzcd}\n\t\t\\qquad\n\t\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tT^2FX \\arrow{r}{T\\lambda_X} \\arrow{d}[left]{\\mu_{FX}} & TFTX \\arrow{r}{\\lambda_{TX}} & FT^2X \\arrow{d}{F\\mu_X} \\\\\n\t\t\tTFX \\arrow{rr}{\\lambda_X} && FTX \n\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\t\\end{definition}\n\nGiven a distributive law, it is straightforward to model the determinisation of a system with side-effects. 
Indeed, for any morphism $f: Y \\rightarrow X$ and $\\mathbb{T}$-algebra $(X,h)$, the natural isomorphism underlying the free-algebra adjunction yields a $\\mathbb{T}$-algebra homomorphism \n\\begin{equation}\n\\label{algebrainducedliftinggeneral}\n\tf^{\\sharp} := h \\circ Tf: (TY, \\mu_Y) \\longrightarrow (X, h).\n\\end{equation}\nThus, in particular, any $FT$-coalgebra $k: X \\rightarrow FTX$ lifts to an $F$-coalgebra\n\\begin{equation}\n\\label{ftlifting}\n\tk^{\\sharp} := (F \\mu_X \\circ \\lambda_{TX}) \\circ Tk: (TX, \\mu_X) \\longrightarrow (FTX, F \\mu_X \\circ \\lambda_{TX}).\n\\end{equation}\n\nFor instance, if $\\mathbb{P}$ is the powerset monad and $F$ is the set endofunctor for deterministic automata satisfying $FX = X^A \\times 2$, the disjunctive $\\mathbb{P}$-algebra structure on the set $2$ induces a canonical distributive law \\cite{jacobs2012trace}, such that the lifting \\eqref{ftlifting} is given by the classical determinisation procedure for non-deterministic automata \\cite{rutten2013generalizing}. \n\nOne can show that the state spaces of $F$-coalgebras obtained by the lifting \\eqref{ftlifting} can canonically be equipped with a $\\mathbb{T}$-algebra structure that is compatible with the $F$-coalgebra structure: they are $\\lambda$-bialgebras.\n\n\\begin{definition}[$\\lambda$-bialgebra]\n\\label{defbialgebras}\n\tLet $\\lambda$ be a distributive law between a monad $\\mathbb{T}$ and an endofunctor $F$. 
A $\\lambda$\\textit{-bialgebra} is a tuple $(X, h, k)$ consisting of a $\\mathbb{T}$-algebra $(X, h)$ and an $F$-coalgebra $(X, k)$, satisfying\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\tTX \\arrow{r}{Tk} \\arrow{dd}[left]{h} & TFX \\arrow{d}{\\lambda_X} \\\\\n\t\t& FTX \\arrow{d}{Fh} \\\\\n\t\tX \\arrow{r}{k} & FX.\n\t\t\\end{tikzcd}\n\t\\end{equation*}\n\tA \\textit{homomorphism} $f: (X, h_X, k_X) \\rightarrow (Y, h_Y, k_Y)$ between $\\lambda$-bialgebras is a morphism $f: X \\rightarrow Y$ that is simultaneously a $\\mathbb{T}$-algebra homomorphism and an $F$-coalgebra homomorphism. The category of $\\lambda$-bialgebras and homomorphisms is denoted by $\\textsf{Bialg}(\\lambda)$.\n\t\\end{definition}\n\t\nIt is well-known that a distributive law $\\lambda$ between a monad $\\mathbb{T}$ and an endofunctor $F$ induces simultaneously\n\\begin{itemize}\n\t\\item a monad $\\mathbb{T}_{\\lambda} = (T_{\\lambda}, \\mu, \\eta)$ on $\\textsf{Coalg}(F)$ by $T_{\\lambda}(X,k) = (TX, \\lambda_X \\circ Tk)$ and $T_{\\lambda}f = Tf$; and\n\t\\item an endofunctor $F_{\\lambda}$ on $\\mathscr{C}^{\\mathbb{T}}$ by $F_{\\lambda}(X,h) = (FX, Fh \\circ \\lambda_X)$ and $F_{\\lambda}f = Ff$,\n\\end{itemize}\nsuch that the algebras over $\\mathbb{T}_{\\lambda}$, the coalgebras of $F_{\\lambda}$, and $\\lambda$-bialgebras coincide \\cite{turi1997towards}. 
In light of the latter we will not distinguish between the different categories, and instead use the notation of $\\lambda$-bialgebras for all three cases.\n\nOne can further show that, if it exists, the final $F$-coalgebra $(\\Theta, k_{\\Theta})$ induces a final $F_{\\lambda}$-coalgebra $(\\Theta, h_{\\Theta}, k_{\\Theta})$ for $h_{\\Theta} :=\\ !_{(T\\Theta, \\lambda_{\\Theta} \\circ T k_{\\Theta})}$ the unique $F$-coalgebra homomorphism below:\n\\begin{equation*}\n\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\tT\\Theta \\arrow{d}[left]{\\lambda_{\\Theta} \\circ T k_{\\Theta}} \\arrow[dashed]{r}{h_{\\Theta}} & \\Theta \\arrow{d}{k_{\\Theta}} \\\\\n\t\t\t\tFT\\Theta \\arrow[dashed]{r}{Fh_{\\Theta}} & F\\Theta\n\t\t\t\\end{tikzcd}.\n\\end{equation*}\nFor instance, for the canonical distributive law between the powerset monad $\\mathbb{P}$ and the set endofunctor $F$ with $FX = X^A \\times 2$ as before, one verifies that the underlying state space $A^* \\rightarrow 2$ of the final $F$-coalgebra will in this way be enriched with a $\\mathbb{P}$-algebra structure that takes the union of languages \\cite{jacobs2012trace}.\n\n\n\\section{Generators for algebras}\n\n\\label{generatorssec}\n\nIn this section we define what it means to be a generator for an algebra over a monad. Our notion coincides with what is called a \\textit{scoop} by Arbib and Manes \\cite{arbib1975fuzzy}. One may argue that the morphism $i$ in the definition below should be mono, but we choose to continue without this requirement. 
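As a concrete preview of the definition below, the following Python sketch checks the generator condition $i^{\sharp} \circ d = \textnormal{id}_X$ for an algebra over the finite powerset monad, whose algebras are join-semilattices. The chosen semilattice, generating set, and decomposition map are our own illustrative assumptions.

```python
# Sketch: a generator (Y, i, d) for a powerset-monad algebra.
# X is the lattice of subsets of {'a', 'b'}; the algebra structure h
# takes a family of elements of X to their join (union).
from itertools import chain, combinations

def h(family):
    # T-algebra structure on X: the join of a finite family
    out = frozenset()
    for s in family:
        out |= s
    return out

X = [frozenset(s) for s in
     chain.from_iterable(combinations('ab', r) for r in range(3))]
Y = [frozenset(), frozenset('a'), frozenset('b')]  # candidate generators

def i(y):
    # i : Y -> X, here simply the inclusion
    return y

def d(x):
    # d : X -> TY, sending x to the set of generators below it
    return frozenset(y for y in Y if y <= x)

# generator condition: i_sharp(d(x)) = h({i(y) for y in d(x)}) = x
assert all(h({i(y) for y in d(x)}) == x for x in X)
```

Note that this choice of $d$ is not unique: omitting the empty set from each decomposition satisfies the same equation, which already hints at why the decomposition is treated as explicit data rather than mere existence.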
\n\n\n \n\n\n\\begin{definition}[Generator \\cite{arbib1975fuzzy}]\n\\label{generatordefinition}\n\tA \\textit{generator} for a $\\mathbb{T}$-algebra $(X, h)$ is a tuple $(Y, i, d)$ consisting of an object $Y$, a morphism $i: Y \\rightarrow X$, and a morphism $d: X \\rightarrow TY$, such that $i^{\\sharp} \\circ d = \\textnormal{id}_{X}$, that is, the diagram on the left below commutes\n\t\\begin{equation*}\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTY \\arrow{r}{Ti} & TX \\arrow{d}{h}\\\\\n\t\t\tX \\arrow{u}{d} \\arrow{r}{\\textnormal{id}_X} & X \n\t\t\\end{tikzcd}\n\t\t\\qquad\n\t\t\\begin{tikzcd}[row sep=2em, column sep = 3em]\n\t\t\t\t\t&TY_{\\alpha} \\arrow{dd}[right]{Tf} & \\\\\n\t\t\t\t\t X \\arrow{ur}{d_{\\alpha}} \\arrow{dr}[below]{d_{\\beta}} \\\\\n\t\t\t\t\t& TY_{\\beta} \n\t\t\t\t\\end{tikzcd}\n\t\t\t\t\t\t\t\t\\qquad\t\t\t\t\n\t\t\\begin{tikzcd}[row sep=2em, column sep = 3em]\n\t\t\t\t\tY_{\\alpha} \\arrow{dd}[left]{f} \\arrow{dr}{i_{\\alpha}} & \\\\\n\t\t\t\t\t& X \\\\\n\t\t\t\t\tY_{\\beta} \\arrow{ur}[below]{i_{\\beta}}\n\t\t\t\t\\end{tikzcd}.\n\t\t\\end{equation*}\n\t\tA morphism $f: (Y_{\\alpha},i_{\\alpha},d_{\\alpha}) \\rightarrow (Y_{\\beta},i_{\\beta},d_{\\beta})$ between generators for $(X,h)$ is a morphism $f: Y_{\\alpha} \\rightarrow Y_{\\beta}$ satisfying the two commutative diagrams on the right above.\n\\end{definition}\n\nWe give an example that slightly generalises the vector space situation mentioned in the introduction.\n\n\\begin{example}[Semimodules]\n\\label{semimoduleexample}\n\tLet $\\mathbb{T} = (T, \\mu, \\eta)$ be the multiset monad over some semiring $S$ defined in \\Cref{freevectorspacemonadexample}.\n\tFollowing the lines of \\Cref{vectorspacemonadalgebras}, one can show that $\\mathbb{T}$-algebras correspond to semimodules over $S$, such that a function $i: Y \\rightarrow X$ is part of a generator $(Y, i, d)$ for the former if and only if for all $x$ in $X$ there exists a finitely-supported $Y$-indexed 
$S$-sequence $d(x) = (s_y)_{y \\in Y}$, such that $x = \\sum_{y \\in Y} s_y \\cdot i(y)$.\n\\end{example}\n\nOne might want to adapt \\Cref{generatordefinition} by replacing the existence of a morphism $d$ with the property of $i^{\\sharp}$ being an epimorphism. Indeed, in every category a morphism with a right-inverse is an epimorphism. Conversely, however, not every epimorphism admits a right-inverse, and even if one exists, it need not be unique. For this reason we treat the morphism $d$ as explicit data.\n\nIt is well-known that every algebra over a monad admits a generator \\cite{arbib1975fuzzy}.\n\n\\begin{lemma}\n\\label{canonicalgenerator}\n$(X, \\textnormal{id}_X, \\eta_X)$ is a generator for any $\\mathbb{T}$-algebra $(X,h)$.\n\\end{lemma}\n\\begin{proof}\nFollows immediately from the equality $h \\circ \\eta_X = \\textnormal{id}_X$.\n\\end{proof}\n\nThe following result is a slight generalisation of a statement by Arbib and Manes \\cite{arbib1975fuzzy}. We are particularly interested in using the construction of an equivalent free bialgebra for a unified view on the theory of residual finite state automata and variations of it \\cite{van2016master, van2020phd, myers2015coalgebraic, denis2002residual}; more details are given in \\Cref{RFSAexample}.\n\n\\begin{proposition}\n\\label{forgenerator-isharp-is-bialgebra-hom}\n\tLet $(X, h, k)$ be a $\\lambda$-bialgebra and let $(Y, i, d)$ be a generator for the $\\mathbb{T}$-algebra $(X,h)$. 
Then $i^{\\sharp} := h \\circ Ti : TY \\rightarrow X$ is a $\\lambda$-bialgebra homomorphism $i^{\\sharp}: (TY, \\mu_Y, (Fd \\circ k \\circ i)^{\\sharp}) \\rightarrow (X, h, k)$ for $(Fd \\circ k \\circ i)^{\\sharp} := F\\mu_Y \\circ \\lambda_{TY}\\circ T(Fd \\circ k \\circ i)$.\n\\end{proposition}\n\\begin{proof}\nThe commuting diagram below shows that for an $FT$-coalgebra $f: Y \\rightarrow FTY$ with $f^{\\sharp}: TY \\rightarrow FTY$ the lifting in \\eqref{ftlifting}, the tuple $(TY, \\mu_Y, f^{\\sharp})$ constitutes a $\\lambda$-bialgebra\n\\begin{equation}\n\\label{freebialgebra}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\tT^2Y \\arrow{r}{T^2f} \\arrow{dd}[left]{\\mu_Y} & T^2 FTY \\arrow{dd}{\\mu_{FTY}} \\arrow{r}{T\\lambda_{TY}} & TFT^2Y \\arrow{d}{\\lambda_{T^2Y}} \\arrow{r}{TF\\mu_Y} & TFTY \\arrow{d}{\\lambda_{TY}} \\\\\n\t\t& & FT^3Y \\arrow{r}{FT{\\mu_Y}} \\arrow{d}[right]{F \\mu_{TY}} & FT^2Y \\arrow{d}[right]{F\\mu_Y} \\\\\n\t\tTY \\arrow{r}{Tf} & TFTY \\arrow{r}{\\lambda_{TY}} & FT^2Y \\arrow{r}{F\\mu_Y} & FTY\n\t\\end{tikzcd}.\n\\end{equation}\nIn particular, the tuple $(TY, \\mu_Y, (Fd \\circ k \\circ i)^{\\sharp})$ is thus a $\\lambda$-bialgebra. The lifting in \\eqref{algebrainducedliftinggeneral} turns $i^{\\sharp}$ into a $\\mathbb{T}$-algebra homomorphism. It thus remains to show that $i^{\\sharp}$ is an $F$-coalgebra homomorphism. 
The latter follows from the commutativity of the following diagram\n\\begin{equation*}\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\tTY \\arrow{r}{Ti} \\arrow{d}[left]{Ti} & TX \\arrow{dd}{Tk} \\arrow{rr}{h} & & X \\arrow{ddddd}{k} \\\\\n\t\tTX \\arrow{d}[left]{Tk} \\\\\n\t\tTFX \\arrow{d}[left]{TFd} \\arrow{r}{\\textnormal{id}_{TFX}} & TFX \\arrow{ddr}{\\lambda_X} \\\\\n\t\tTFTY \\arrow{d}[left]{\\lambda_{TY}} \\arrow{r}{TFTi} & TFTX \\arrow{u}{TFh} \\\\\n\t\tFT^2Y \\arrow{r}{FT^2i} \\arrow{d}[left]{F\\mu_Y} & FT^2X \\arrow{r}{FTh} \\arrow{d}{F \\mu_X} & FTX \\arrow{d}{Fh} \\\\\n\t\tFTY \\arrow{r}{FTi} & FTX \\arrow{r}{Fh} & FX \\arrow{r}{\\textnormal{id}_{FX}} & FX\n\t\\end{tikzcd}.\n\\end{equation*}\n\\end{proof}\n\nWe show next that the above construction extends to morphisms $f: (Y_{\\alpha}, i_{\\alpha}, d_{\\alpha}) \\rightarrow (Y_{\\beta}, i_{\\beta}, d_{\\beta})$ between generators for the underlying algebra of a bialgebra. For readability we abbreviate $k_{\\gamma} := (Fd_{\\gamma} \\circ k \\circ i_{\\gamma})^{\\sharp}$ for $\\gamma \\in \\lbrace \\alpha , \\beta \\rbrace$.\n\n\\begin{lemma}\n\tThe morphism $Tf: TY_{\\alpha} \\rightarrow TY_{\\beta}$ is a $\\lambda$-bialgebra homomorphism $Tf: (TY_{\\alpha}, \\mu_{Y_{\\alpha}}, k_{\\alpha}) \\rightarrow (TY_{\\beta}, \\mu_{Y_{\\beta}}, k_{\\beta})$ satisfying $(i_{\\beta})^{\\sharp} \\circ Tf = (i_{\\alpha})^{\\sharp}$.\n\\end{lemma}\n\\begin{proof}\nThe identity $(i_{\\beta})^{\\sharp} \\circ Tf = (i_{\\alpha})^{\\sharp}$ follows from the equality $i_{\\beta} \\circ f = i_{\\alpha}$, as shown below\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\tTY_{\\alpha} \\arrow{r}{Tf} \\arrow{d}[left]{Ti_{\\alpha}} & \tTY_{\\beta} \\arrow{d}{Ti_{\\beta}} \\arrow{r}{Ti_{\\beta}} & TX \\arrow{d}{h} \\\\\n\t\tTX \\arrow{r}{\\textnormal{id}_{TX}} & TX \\arrow{r}{h} & X\n\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\tOne easily verifies that the lifting \\eqref{ftlifting} extends to a 
functor from the category of $FT$-coalgebras to the category of $\\lambda$-bialgebras; see e.g. \\eqref{freebialgebra}. It thus remains to show that $f$ is an $FT$-coalgebra homomorphism $\n\tf: (Y_{\\alpha}, k_{\\alpha}) \\rightarrow (Y_{\\beta}, k_{\\beta})$. The latter follows from the commutativity of the following diagram\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tY_{\\beta} \\arrow{r}{i_{\\beta}} & X \\arrow{r}{k} & FX \\arrow{r}{Fd_{\\beta}} & FTY_{\\beta} \\\\\n\t\t\tY_{\\alpha} \\arrow{u}{f} \\arrow{r}{i_{\\alpha}} & X \\arrow{u}{\\textnormal{id}_X} \\arrow{r}{k} & FX \\arrow{u}{\\textnormal{id}_{FX}} \\arrow{r}{Fd_{\\alpha}} & FTY_{\\alpha} \\arrow{u}[right]{FTf}\n\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{proof}\n\nIn the following example we instantiate the previous results to recover the so-called canonical residual finite state automata \\cite{denis2002residual}.\n\n\\begin{example}[Canonical RFSA]\n\\label{RFSAexample}\t\nAs before, let $\\mathbb{P}$ be the powerset monad and $F$ the set endofunctor for deterministic automata over the alphabet $A$ satisfying $FX = X^A \\times 2$. One verifies that the disjunctive $\\mathbb{P}$-algebra structure on the set $2$ induces a canonical distributive law $\\lambda$ between $\\mathbb{P}$ and $F$, such that $\\lambda$-bialgebras are deterministic unpointed automata in the category of complete lattices and join-preserving functions \\cite{jacobs2012trace}; for more details see \\Cref{powersetmonadsec}.\nWe are particularly interested in the $\\lambda$-bialgebra that is typically called the minimal $\\mathbb{P}$-automaton $M_{\\mathbb{P}}(L)$ for a regular language $L$ \\cite{van2020phd, van2017learning}. \nAt a high level, $M_{\\mathbb{P}}(L)$ may be viewed as an algebraic closure of the well-known minimal automaton $M(L)$ for $L$ in the category of sets and functions. 
More concretely, it consists of the inclusion-ordered free complete lattice of unions of residuals of $L$, equipped with the usual transition and output functions for languages inherited from the final coalgebra for $F$. \nUsing well-known lattice-theoretic arguments one can show that the tuple $(\\mathcal{J}(M_{\\mathbb{P}}(L)),i,d)$, with $i$ the subset-embedding of join-irreducibles and $d$ the function assigning to a language the join-irreducible languages below it, is a generator for $M_{\\mathbb{P}}(L)$. Writing $k$ for the $F$-coalgebra structure of $M_{\\mathbb{P}}(L)$, it is easy to verify that the $F\\mathbb{P}$-coalgebra structure $Fd \\circ k \\circ i$ on $\\mathcal{J}(M_{\\mathbb{P}}(L))$ mentioned in \\Cref{forgenerator-isharp-is-bialgebra-hom} corresponds precisely to the so-called canonical residual finite state automaton for $L$ \\cite{denis2002residual}.\n\\end{example}\n\n\nWe close this section with a more compact characterisation of generators for free algebras.\n\n\\begin{lemma}\n\\label{setgenerator}\n\tLet $i: Y \\rightarrow X$ and $d: X \\rightarrow TY$ be morphisms such that the following diagram commutes:\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\tT^2Y \\arrow{r}{T^2i} & T^2X \\arrow{d}{\\mu_X} \\\\\n\t\t\t\t\tTX \\arrow{u}{Td} \\arrow{r}{\\textnormal{id}_{TX}} & TX\n\t\t\t\t\\end{tikzcd}\t.\n\t\t\t\t\\end{equation*}\t\n\tThen $(Y, \\eta_X \\circ i, \\mu_Y \\circ Td)$ is a generator for the $\\mathbb{T}$-algebra $(TX, \\mu_X)$.\n\\end{lemma}\n\\begin{proof}\nFollows from the commutativity of the diagram below\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\tTY \\arrow{r}{Ti} & TX \\arrow{r}{T\\eta_X} \\arrow{dr}{\\textnormal{id}_{TX}} & T^2X \\arrow{d}{\\mu_X} \\\\\n\t\t\t\t\tT^2Y \\arrow{u}{\\mu_Y} \\arrow{r}{T^2i} & T^2 X \\arrow{u}{\\mu_X} \\arrow{r}{\\mu_X} & TX \\arrow{d}{\\textnormal{id}_{TX}} \\\\\n\t\t\t\t\tTX \\arrow{u}{Td} 
\\arrow{rr}{\\textnormal{id}_{TX}} & & TX\n\t\t\t\t\\end{tikzcd}\t.\n\t\t\t\t\\end{equation*}\n\\end{proof}\n\n\\section{Bases for algebras}\n\n\\label{basesection}\n\nIn the last section we adopted the notion of a scoop by Arbib and Manes \\cite{arbib1975fuzzy} by introducing generators for algebras over a monad. In this section we extend the former to the definition of a basis for an algebra over a monad by adding a uniqueness constraint. While scoops have mainly occurred in the context of state-based systems, our extension allows us to emphasise their connections with universal algebra.\n\n\\begin{definition}[Basis]\n\\label{basisdefinition}\n\tA \\textit{basis} for a $\\mathbb{T}$-algebra $(X, h)$ is a tuple $(Y, i, d)$ consisting of an object $Y$, a morphism $i: Y \\rightarrow X$, and a morphism $d: X \\rightarrow TY$, such that $i^{\\sharp} \\circ d = \\textnormal{id}_{X}$ and $d \\circ i^{\\sharp} = \\textnormal{id}_{TY}$, that is, the following two diagrams commute:\n\t\\begin{equation*}\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTY \\arrow{r}{Ti} & TX \\arrow{d}{h}\\\\\n\t\t\tX \\arrow{u}{d} \\arrow{r}{\\textnormal{id}_X} & X \n\t\t\\end{tikzcd}\n\t\t\\qquad\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTX \\arrow{r}{h} & X \\arrow{d}{d}\\\\\n\t\t\tTY \\arrow{u}{Ti} \\arrow{r}{\\textnormal{id}_{TY}} & TY \n\t\t\\end{tikzcd}.\t\n\\end{equation*} \nA \\textit{homomorphism} between two bases for $(X,h)$ is a morphism between the underlying generators. 
The category consisting of bases for a $\\mathbb{T}$-algebra $(X,h)$ and homomorphisms between them is denoted by $\\textnormal{\\textsf{Bases}}(X,h)$.\n\\end{definition}\n\nWe begin with an example of a basis in the above sense for the theory of monoids.\n\n\\begin{example}[Monoids]\n\\label{monoidmonad}\n\tLet $\\mathbb{T} = (T, \\mu, \\eta)$ be the set monad whose underlying endofunctor $T$ assigns to a set $X$ the set of all finite words over the alphabet $X$; whose unit $\\eta_X$ assigns to a character in $X$ the corresponding word of length one; and whose multiplication $\\mu_X$ syntactically flattens words over words over the alphabet $X$ in the usual way. The monad $\\mathbb{T}$ is also known as the list monad. One verifies that the constraints for its algebras correspond to the unitality and associativity laws of monoids. A function $i: Y \\rightarrow X$ is thus part of a basis $(Y,i,d)$ for a $\\mathbb{T}$-algebra with underlying set $X$ if and only if for all $x \\in X$ there exists a unique word $d(x) = \\lbrack y_1, ...,y_n \\rbrack$ over the alphabet $Y$ satisfying $x = i(y_1) \\cdot ... \\cdot i(y_n)$. \n\t\n\t\\end{example}\n\nThe next result establishes that the morphism $d$ is in fact an algebra homomorphism, and, intuitively, that elements of a basis are uniquely generated by their image under the monad unit, that is, typically by themselves.\n\n\n\\begin{lemma}\n\\label{forbasis-d-isalgebrahom}\n\tLet $(Y, i, d)$ be a basis for a $\\mathbb{T}$-algebra $(X, h)$. 
Then the following two diagrams commute:\n\t\\begin{equation*}\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTX \\arrow{d}[left]{h} \\arrow{r}{Td} & T^2Y \\arrow{d}{\\mu_Y} \\\\\n\t\t\tX \\arrow{r}{d} & TY\n\t\t\\end{tikzcd}\n\t\t\\qquad\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tY \\arrow{r}[above]{i} \\arrow{d}[left]{\\eta_{Y}} & X \\arrow{dl}[right]{d} \\\\\n\t\t\t TY & \n\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{lemma}\n\\begin{proof}\nFollows from the commutativity of the following two diagrams\n\t\\begin{equation*}\n\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\tTX \\arrow{rr}{Td} \\arrow{d}[left]{\\textnormal{id}_{TX}} & & T^2Y \\arrow{dl}{T^2i} \\arrow{dd}{\\mu_Y} \\\\\n\t\t\t\tTX \\arrow{dd}[left]{h} & T^2X \\arrow{l}{Th} \\arrow{d}{\\mu_X} \\\\\n\t\t\t\t& TX \\arrow{dl}{h} & TY \\arrow{l}{Ti} \\arrow{d}{\\textnormal{id}_{TY}} \\\\\n\t\t\t\tX \\arrow{rr}{d} & & TY\n\t\t\t\\end{tikzcd}\n\t\t\t\\qquad\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tY \\arrow{r}{i} \\arrow{d}[left]{\\eta_Y} & X \\arrow{r}{\\textnormal{id}_X} \\arrow{d}[left]{\\eta_X} & X \\arrow{dddll}{d} \\\\\n\t\t\tTY \\arrow{dd}[left]{\\textnormal{id}_{TY}} \\arrow{r}{Ti} &TX \\arrow{ur}{h} \\\\ \n\t\t\t\\\\\n\t\t\tTY\n\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{proof}\n\nIn consequence we can derive the following three corollaries. First, every algebra with a basis is isomorphic to a free algebra.\n\n\\begin{corollary}\n\\label{basisimpliesfree}\n\tLet $(Y,i,d)$ be a basis for a $\\mathbb{T}$-algebra $(X,h)$. Then $d: X \\rightarrow TY$ is a $\\mathbb{T}$-algebra isomorphism $d: (X,h) \\rightarrow (TY, \\mu_Y)$.\n\\end{corollary}\n\\begin{proof}\n\tBy \\Cref{forbasis-d-isalgebrahom} the morphism $d$ is a $\\mathbb{T}$-algebra homomorphism. From general arguments it follows that the lifting $i^{\\sharp} = h \\circ Ti$ is a $\\mathbb{T}$-algebra homomorphism in the reverse direction. 
Since $(Y,i,d)$ is a basis, $d$ and $i^{\\sharp}$ are mutually inverse.\n\\end{proof}\n\nSecondly, an algebra with a basis embeds into the free algebra it spans. The monomorphism is fundamental to an alternative approach to bases \\cite{jacobs2011bases}; for more details see \\Cref{basesascoaglebrassec}. \n \n\\begin{corollary}\n\\label{Tid-algebrahom}\n\tLet $(Y,i,d)$ be a basis for a $\\mathbb{T}$-algebra $(X,h)$. Then $Ti \\circ d: X \\rightarrow TX$ is a $\\mathbb{T}$-algebra monomorphism $Ti \\circ d: (X,h) \\rightarrow (TX, \\mu_X)$ with left-inverse $h: (TX, \\mu_X) \\rightarrow (X, h)$.\n\\end{corollary}\n\\begin{proof}\n\tThe morphism $Ti \\circ d$ is a $\\mathbb{T}$-algebra homomorphism since by \\Cref{forbasis-d-isalgebrahom} the following diagram commutes\n\t\\begin{equation*}\n\t\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\tTX \\arrow{d}[left]{h} \\arrow{r}{Td} & T^2Y \\arrow{d}{\\mu_Y} \\arrow{r}{T^2i} & T^2X \\arrow{d}{\\mu_X} \\\\\n\t\t\t\t\tX \\arrow{r}{d} & TY \\arrow{r}{Ti} & TX\n\t\t\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\tThe morphism $h$ is a $\\mathbb{T}$-algebra homomorphism since the equality $h \\circ \\mu_X = h \\circ Th$ holds for all algebras over a monad. By the definition of a generator $h$ is a left-inverse to $Ti \\circ d$. The morphism $Ti \\circ d$ is mono since every morphism with left-inverse is mono.\t\n\t \\end{proof}\n\nThirdly, every algebra homomorphism is uniquely determined by its image on a basis.\n\n\n\\begin{corollary}\n\t\tLet $(Y, i, d)$ be a basis for a $\\mathbb{T}$-algebra $(X,h_X)$, and let $(Z,h_Z)$ be another $\\mathbb{T}$-algebra. For every morphism $f: Y \\rightarrow Z$ there exists a $\\mathbb{T}$-algebra homomorphism $f^{\\sharp}: (X,h_X) \\rightarrow (Z,h_Z)$ satisfying $f^{\\sharp} \\circ i = f$.\n\\end{corollary}\n\t\\begin{proof}\n\t\tWe define a candidate as follows $f^{\\sharp} := h_Z \\circ Tf \\circ d$. 
Using \\Cref{forbasis-d-isalgebrahom} we establish the commutativity of the following two diagrams\n\t\t\\begin{equation*}\n\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tX \\arrow{r}{d} & TY \\arrow{r}{Tf} & TZ \\arrow{r}{\\textnormal{id}_{TZ}} & TZ \\arrow{d}{h_Z} \\\\\n\t\t\t& Y \\arrow{ul}{i} \\arrow{r}{f} \\arrow{u}{\\eta_Y} &Z \\arrow{u}{\\eta_{Z}} \\arrow{r}{\\textnormal{id}_{Z}} & Z\n\t\t\\end{tikzcd}\n\t\t\\end{equation*}\n\t\t\t\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTX \\arrow{r}{Td} \\arrow{d}[left]{h_X} & T^2Y \\arrow{d}{\\mu_{Y}} \\arrow{r}{T^2f}& T^2 Z \\arrow{d}{\\mu_Z} \\arrow{r}{Th_Z} & TZ \\arrow{d}{h_Z} \\\\\n\t\t\tX \\arrow{r}{d} & TY \\arrow{r}{Tf} & TZ \\arrow{r}{h_Z} &Z\n\t\t\\end{tikzcd}.\t\n\t\t\\end{equation*}\n\t\tThe first diagram proves the identity $f^{\\sharp} \\circ i = f$, and the second diagram shows that $f^{\\sharp}$ is a $\\mathbb{T}$-algebra homomorphism.\n\t\\end{proof}\n\nIn \\Cref{basisimpliesfree} it was proven that every algebra with a basis is isomorphic to a free algebra. We show next that the statement can be strengthened to bialgebras.\nIn more detail, in \\Cref{forgenerator-isharp-is-bialgebra-hom} we have seen that a generator for the underlying algebra of a bialgebra allows one to construct a bialgebra with a free state space, such that $i^{\\sharp}$ extends to a bialgebra homomorphism from the latter to the former. As it turns out, for a basis, the homomorphism $i^{\\sharp}$ is in fact an isomorphism.\n\n\\begin{proposition}\n\\label{forbasis-d-is-bi-algebrahom}\n\tLet $(X, h, k)$ be a $\\lambda$-bialgebra and let $(Y, i, d)$ be a basis for the $\\mathbb{T}$-algebra $(X,h)$. 
Then $d: X \\rightarrow TY$ is a $\\lambda$-bialgebra homomorphism $d: (X, h, k) \\rightarrow (TY, \\mu_Y, (Fd \\circ k \\circ i)^{\\sharp})$ for $(Fd \\circ k \\circ i)^{\\sharp} := F\\mu_Y \\circ \\lambda_{TY}\\circ T(Fd \\circ k \\circ i)$.\n\\end{proposition}\n\\begin{proof}\nAs before, it follows by general arguments that $(TY, \\mu_Y, (Fd \\circ k \\circ i)^{\\sharp})$ is a $\\lambda$-bialgebra. The morphism $d$ is a $\\mathbb{T}$-algebra homomorphism by \\Cref{forbasis-d-isalgebrahom}. It thus remains to show that $d$ is an $F$-coalgebra homomorphism. The latter is established by \\Cref{forbasis-d-isalgebrahom}, as shown below\n\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tX \\arrow{rrrrr}{k} \\arrow{dd}[left]{d} \\arrow{rd}{\\textnormal{id}_X} & & & & & FX \\arrow{dd}{Fd} \\\\\n\t\t\t& X \\arrow{rrrru}{k} & & & FTX \\arrow{d}{FTd} \\arrow{ur}{Fh} & \\\\\n\t\t\tTY \\arrow{r}{Ti} & TX \\arrow{u}{h} \\arrow{r}{Tk} & TFX \\arrow{r}{TFd} \\arrow{rru}{\\lambda_X} & TFTY \\arrow{r}{\\lambda_{TY}} & FT^2Y \\arrow{r}{F\\mu_Y} & FTY\n\t\t\\end{tikzcd}.\n\\end{equation*}\n\\end{proof}\n\n\n\\begin{corollary}\n\\label{forbasis-bialgebra-areisomorphic}\n\tLet $(X, h, k)$ be a $\\lambda$-bialgebra and let $(Y, i, d)$ be a basis for the $\\mathbb{T}$-algebra $(X,h)$. Then the $\\lambda$-bialgebras $(X, h, k)$ and $(TY, \\mu_Y, (Fd \\circ k \\circ i)^{\\sharp})$ are isomorphic.\n\\end{corollary}\n\\begin{proof}\n\tBy \\Cref{forbasis-d-is-bi-algebrahom} the morphism $d$ is a $\\lambda$-bialgebra homomorphism of the right type. By \\Cref{forgenerator-isharp-is-bialgebra-hom} the morphism $i^{\\sharp}$ is a $\\lambda$-bialgebra homomorphism in the reverse direction to $d$. From the definition of a basis it follows that $d$ and $i^\\sharp$ are mutually inverse.\n\\end{proof}\n\n\\subsection{Existence of bases}\n\n\\label{existencebasesec}\n\nWhile every algebra over a monad admits a generator, cf. 
\\Cref{canonicalgenerator}, the same is not necessarily true for a basis. In this section we show, however, that every \\textit{free} algebra admits a basis. We begin with a characterisation of bases for free algebras that is slightly more compact than the one derived directly from the definition.\n\n\\begin{lemma}\n\\label{basisfreealgebraeasier}\n\tLet $i: Y \\rightarrow X$ and $d: X \\rightarrow TY$ be morphisms such that the following two diagrams commute:\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\tT^2Y \\arrow{r}{T^2i} & T^2X \\arrow{d}{\\mu_X} \\\\\n\t\t\t\t\tTX \\arrow{u}{Td} \\arrow{r}{\\textnormal{id}_{TX}} & TX\n\t\t\t\t\\end{tikzcd}\t\n\t\t\t\t\\qquad\n\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\tTX \\arrow{r}{Td} & T^2Y \\arrow{d}{\\mu_Y} \\\\\n\t\t\t\tTY \\arrow{u}{Ti} \\arrow{r}{\\textnormal{id}_{TY}} & TY\n\t\t\t\\end{tikzcd}.\n\t\t\t\t\\end{equation*}\t\n\tThen $(Y, \\eta_X \\circ i, \\mu_Y \\circ Td)$ is a basis for the $\\mathbb{T}$-algebra $(TX, \\mu_X)$.\n\\end{lemma}\n\\begin{proof}\nOne part of the claim follows from \\Cref{setgenerator}. 
The other part follows from the commutativity of the following diagram\n\t\\begin{equation*}\n\t\t\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\t\tT^2X \\arrow{rr}{\\mu_X} & & TX \\arrow{d}{Td} \\\\\n\t\t\t\t\t\tTX \\arrow{u}{T\\eta_X} \\arrow{urr}{\\textnormal{id}_{TX}} \\arrow{rr}{Td} & & T^2Y \\arrow{d}{\\mu_Y} \\\\\n\t\t\t\t\t\tTY \\arrow{u}{Ti} \\arrow{rr}{\\textnormal{id}_{TY}} & & TY\n\t\t\t\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{proof}\n\n\\begin{corollary}\n\t\\label{canonicalbasisforfreealgbera}\n\t$(X,\\eta_X, \\textnormal{id}_X)$ is a basis for the $\\mathbb{T}$-algebra $(TX, \\mu_X)$.\n\\end{corollary}\n\\begin{proof}\n\tUsing the equality $\t\\mu_X \\circ T\\eta_X = \\textnormal{id}_{TX}$, the claim follows from \\Cref{basisfreealgebraeasier} with $i= \\textnormal{id}_X$ and $d= \\eta_X$.\n\\end{proof}\n\n\\subsection{Uniqueness of bases}\n\n\\label{uniquebasesec}\n\nIn this section we investigate the uniqueness of bases for algebras over a monad.\nTo begin with, assume $(X,h)$ is an algebra over a monad $\\mathbb{T}$ and we are given a fixed morphism $i: Y \\rightarrow X$. Then any two morphisms $d_{\\alpha}$ and $d_{\\beta}$ turning $(Y,i,d_{\\alpha})$ and $(Y,i,d_{\\beta})$ into bases for $(X,h)$, respectively, are in fact identical: \\[ d_{\\alpha} = d_{\\alpha} \\circ i^{\\sharp} \\circ d_{\\beta} = d_{\\beta}. \\] If the morphism $i$ is not fixed, we have the following slightly weaker result about the uniqueness of bases:\n\n\\begin{lemma}\n\tLet $(Y_{\\alpha},i_{\\alpha},d_{\\alpha})$ and $(Y_{\\beta}, i_{\\beta}, d_{\\beta})$ be bases for a $\\mathbb{T}$-algebra $(X,h)$. 
Then the $\\mathbb{T}$-algebras $(TY_{\\alpha}, \\mu_{Y_{\\alpha}})$ and $(TY_{\\beta}, \\mu_{Y_{\\beta}})$ are isomorphic.\n\\end{lemma}\n\\begin{proof}\n\tWe have a $\\mathbb{T}$-algebra homomorphism \\[ d_{\\beta} \\circ (i_{\\alpha})^{\\sharp}: (TY_{\\alpha}, \\mu_{Y_{\\alpha}}) \\longrightarrow (TY_{\\beta}, \\mu_{Y_{\\beta}}) \\] since the components $(i_{\\alpha})^{\\sharp}$ and $d_{\\beta}$ are $\\mathbb{T}$-algebra homomorphisms by general arguments and \\Cref{forbasis-d-isalgebrahom}, respectively.\n\tAnalogously, by symmetry, the morphism $d_{\\alpha} \\circ (i_{\\beta})^{\\sharp}$ is a $\\mathbb{T}$-algebra homomorphism in the reverse direction.\n\tIt is easy to verify that the definition of a basis implies that both morphisms are mutually inverse.\n\\end{proof}\n\nIf a set monad preserves the set cardinality relation in the sense that \n\\[ \\vert Y_{\\alpha} \\vert \\not = \\vert Y_{\\beta} \\vert \\quad \\textnormal{implies} \\quad \\vert TY_{\\alpha} \\vert \\not = \\vert TY_{\\beta} \\vert, \\] the above result in particular shows that any two bases for a fixed algebra have the same cardinality.\n\n\\subsection{Representation theory}\n\n\\label{representationtheorysec}\n\nIn this section we use our general definition of a basis to derive a representation theory for homomorphisms between algebras over monads that is analogous to the representation theory for linear transformations between vector spaces.\n\nIn more detail, recall that a linear transformation $L: V \\rightarrow W$ between $k$-vector spaces with finite bases $\\alpha = \\lbrace v_1, ... , v_n \\rbrace$ and $\\beta = \\lbrace w_1, ..., w_m \\rbrace$, respectively, admits a matrix representation $L_{\\alpha \\beta} \\in \\textnormal{Mat}_{k}(m, n)$ with \\[ L(v_j) = \\sum_i (L_{\\alpha \\beta})_{i,j} w_i, \\] such that for any vector $v$ in $V$ the coordinate vectors $L(v)_{\\beta} \\in k^m$ and $v_{\\alpha} \\in k^n$ \nsatisfy the equality \\[ L(v)_{\\beta} = L_{\\alpha \\beta} v_{\\alpha}. 
\\] A great deal of linear algebra is concerned with finding bases for which the corresponding matrix representation has a convenient shape, for instance diagonal. The following definitions generalise this situation by substituting Kleisli morphisms for matrices.\n\n\\begin{definition}\n\tLet $\\alpha = (Y_{\\alpha}, i_{\\alpha}, d_{\\alpha})$ and $\\beta = (Y_{\\beta}, i_{\\beta}, d_{\\beta})$ be bases for $\\mathbb{T}$-algebras $(X_{\\alpha},h_{\\alpha})$ and $(X_{\\beta},h_{\\beta})$, respectively. The \\textit{basis representation} of a $\\mathbb{T}$-algebra homomorphism $f: (X_{\\alpha},h_{\\alpha}) \\rightarrow (X_{\\beta},h_{\\beta})$ with respect to $\\alpha$ and $\\beta$ is the composition \n\t\\begin{equation}\n\t\\label{basisrepresentation}\n\t\tf_{\\alpha \\beta} := Y_{\\alpha} \\overset{i_{\\alpha}}{\\longrightarrow} X_{\\alpha} \\overset{f}{\\longrightarrow} X_{\\beta} \\overset{d_{\\beta}}{\\longrightarrow} TY_{\\beta}.\n\t\\end{equation}\n\t Conversely, the morphism \\textit{associated} with a Kleisli morphism $p: Y_{\\alpha} \\rightarrow TY_{\\beta}$ with respect to $\\alpha$ and $\\beta$ is the composition \\begin{equation}\n\t\t\\label{associatedmorph}\n\t\tp^{\\alpha \\beta} := X_{\\alpha} \\overset{d_{\\alpha}}{\\longrightarrow} TY_{\\alpha} \\overset{Tp}{\\longrightarrow} T^2Y_{\\beta} \\overset{\\mu_{Y_{\\beta}}}{\\longrightarrow} TY_{\\beta} \\overset{Ti_{\\beta}}{\\longrightarrow} TX_{\\beta} \\overset{h_{\\beta}}{\\longrightarrow} X_{\\beta}.\n\t\t\t\\end{equation}\n\t\\end{definition}\n\nThe morphism associated with a Kleisli morphism should be understood as the analogue of the linear transformation between vector spaces induced by a matrix of the right type. 
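To ground the analogy, the following Python sketch (a toy example of ours with hypothetical basis sets, not part of the formal development) instantiates it for the powerset monad: a Kleisli morphism $p: Y_{\\alpha} \\rightarrow TY_{\\beta}$ plays the role of a Boolean matrix, its associated morphism is the induced join-preserving map between the free algebras, and taking the basis representation of that map recovers $p$.

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Hypothetical finite basis sets; the free P-algebras are their powersets.
Ya, Yb = {'x', 'y'}, {'u', 'v', 'w'}
Xa, Xb = powerset(Ya), powerset(Yb)

# Canonical basis data on a free algebra: i(y) = {y}, d = identity.
i_a = lambda y: frozenset({y})
d_b = lambda S: S

# A Kleisli morphism p: Ya -> P(Yb), i.e. a relation / "Boolean matrix".
p = {'x': frozenset({'u', 'v'}), 'y': frozenset({'w'})}

# Its associated morphism p^{ab}: Xa -> Xb is the induced join-preserving
# map; for the powerset monad, joins are unions.
def associated(S):
    return frozenset().union(*(p[y] for y in S)) if S else frozenset()

# The associated morphism indeed lands in Xb ...
assert all(associated(S) in Xb for S in Xa)

# ... and the basis representation d_b . associated . i_a recovers p.
recovered = {y: d_b(associated(i_a(y))) for y in Ya}
assert recovered == p
```

As in the linear-algebra case, composing "take the associated morphism" with "take the basis representation" is the identity, which the final assertion checks on this toy instance.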
The following result confirms this intuition.\n\t\n\\begin{lemma}\n\t\\eqref{associatedmorph} is a $\\mathbb{T}$-algebra homomorphism $p^{\\alpha \\beta}: (X_{\\alpha},h_{\\alpha}) \\rightarrow (X_{\\beta},h_{\\beta})$.\n\\end{lemma}\t\n\\begin{proof}\nUsing \\Cref{forbasis-d-isalgebrahom} we deduce the commutativity of the following diagram\n\\begin{equation*}\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\tTX_{\\alpha} \\arrow{d}[left]{h_{\\alpha}} \\arrow{r}{Td_{\\alpha}} & T^2Y_{\\alpha} \\arrow{r}{T^2p} \\arrow{d}{\\mu_{Y_{\\alpha}}} & T^3Y_{\\beta} \\arrow{r}{T\\mu_{Y_{\\beta}}} \\arrow{d}{\\mu_{TY_{\\beta}}} & T^2Y_{\\beta} \\arrow{d}{\\mu_{Y_{\\beta}}} \\arrow{r}{T^2i_{\\beta}} & T^2X_{\\beta} \\arrow{r}{Th_{\\beta}} \\arrow{d}{\\mu_{X_{\\beta}}} & TX_{\\beta} \\arrow{d}{h_{\\beta}} \\\\\n\tX_{\\alpha} \\arrow{r}{d_{\\alpha}} & TY_{\\alpha} \\arrow{r}{Tp} & T^2Y_{\\beta} \\arrow{r}{\\mu_{Y_{\\beta}}} & TY_{\\beta} \\arrow{r}{Ti_{\\beta}} & TX_{\\beta} \\arrow{r}{h_{\\beta}} & X_{\\beta}\t\n\t\\end{tikzcd}.\n\\end{equation*}\n\\end{proof}\n\nThe following result establishes a generalisation of the observation that for fixed bases, constructing a matrix representation of a linear transformation on the one hand, and associating a linear transformation to a matrix of the right type on the other hand, are mutually inverse operations.\n\\begin{lemma} \n\tThe operations $\\eqref{basisrepresentation}$ and $\\eqref{associatedmorph}$ are mutually inverse.\n\\end{lemma}\n\\begin{proof}\n\tThe definitions imply \n\t\\begin{align*}\n\t\t(p^{\\alpha \\beta})_{\\alpha \\beta} &= d_{\\beta} \\circ (h_{\\beta} \\circ Ti_{\\beta} \\circ \\mu_{Y_{\\beta}} \\circ Tp \\circ d_{\\alpha}) \\circ i_{\\alpha} \\\\\n\t\t(f_{\\alpha \\beta})^{\\alpha \\beta} &= h_{\\beta} \\circ Ti_{\\beta} \\circ \\mu_{Y_{\\beta}} \\circ T(d_{\\beta} \\circ f \\circ i_{\\alpha}) \\circ d_{\\alpha}.\n\t\\end{align*}\nUsing \\Cref{forbasis-d-isalgebrahom} we deduce the commutativity of the 
diagrams below\n\t\\[\n\t\\begin{gathered}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tY_{\\alpha} \\arrow{d}[left]{i_{\\alpha}} \\arrow{rrr}{p} \\arrow{dr}{\\eta_{Y_{\\alpha}}} & & & TY_{\\beta} \\arrow{d}{\\textnormal{id}_{TY_{\\beta}}} \\arrow{rr}{\\textnormal{id}_{TY_{\\beta}}} \\arrow{dl}{\\eta_{TY_{\\beta}}} & & TY_{\\beta} \\\\\n\t\t\tX_{\\alpha} \\arrow{r}{d_{\\alpha}} & TY_{\\alpha} \\arrow{r}{Tp} & T^2Y_{\\beta} \\arrow{r}{\\mu_{Y_{\\beta}}} & TY_{\\beta} \\arrow{r}{Ti_{\\beta}} & TX_{\\beta} \\arrow{r}{h_{\\beta}} & X_{\\beta} \\arrow{u}[right]{d_{\\beta}}\n\t\t\t\\end{tikzcd} \\\\\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\tX_{\\alpha} \\arrow{r}{\\textnormal{id}_{X_{\\alpha}}} \\arrow{d}[left]{Ti_{\\alpha} \\circ d_{\\alpha}} & X_{\\alpha} \\arrow{rr}{f} & & X_{\\beta} \\arrow{d}{d_{\\beta}} \\arrow{r}{\\textnormal{id}_{X_{\\beta}}} & X_{\\beta} \\\\\n\t\t\t\tTX_{\\alpha} \\arrow{ur}{h_{\\alpha}} \\arrow{r}{Tf} & TX_{\\beta} \\arrow{r}{Td_{\\beta}} \\arrow{urr}{h_{\\beta}} & T^2Y_{\\beta} \\arrow{r}{\\mu_{Y_{\\beta}}} & TY_{\\beta} \\arrow{r}{Ti_{\\beta}} & TX_{\\beta} \\arrow{u}[right]{h_{\\beta}}\n\t\t\t\t\\end{tikzcd}\t.\t\t\n\t\\end{gathered}\n\t\\]\t\t\t\t\t\n\\end{proof}\n\n\nThe next result establishes the compositionality of basis representations: the matrix representation of the composition of two linear transformations is given by the multiplication of the matrix representations of the individual linear transformations. 
On the left side of the following equation we use the usual Kleisli composition.\n\\begin{lemma}\n$g_{\\beta \\gamma} \\cdot f_{\\alpha \\beta} = (g \\circ f)_{\\alpha \\gamma}$.\n\\end{lemma}\n\\begin{proof}\n\tThe definitions imply\n\t\\begin{align*}\n\t\tg_{\\beta \\gamma} \\cdot f_{\\alpha \\beta} &= \\mu_{Y_{\\gamma}} \\circ T(d_{\\gamma} \\circ g \\circ i_{\\beta}) \\circ d_{\\beta} \\circ f \\circ i_{\\alpha} \\\\\n\t\t(g \\circ f)_{\\alpha \\gamma} &= d_{\\gamma} \\circ (g \\circ f) \\circ i_{\\alpha}.\n\t\\end{align*}\n\tWe delete common terms and use \\Cref{forbasis-d-isalgebrahom} to deduce the commutativity of the diagram below\n\\begin{equation*}\n\t\t\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\t\t X_{\\beta} \\arrow{d}[left]{\\textnormal{id}_{X_{\\beta}}} \\arrow{r}{d_{\\beta}} & TY_{\\beta} \\arrow{r}{Ti_{\\beta}} & TX_{\\beta} \\arrow{r}{Tg} \\arrow{dll}{h_{\\beta}} & TX_{\\gamma} \\arrow{r}{Td_{\\gamma}} \\arrow{dll}{h_{\\gamma}} & T^2 Y_{\\gamma} \\arrow{d}{\\mu_{Y_{\\gamma}}} \\\\\n\t\t\t\t\t\t X_{\\beta} \\arrow{r}[below]{g} & X_{\\gamma} \\arrow{rrr}[below]{d_{\\gamma}} & & & TY_{\\gamma}\n\t\t\t\t\t\\end{tikzcd}.\n\\end{equation*}\n\\end{proof}\n\nSimilarly to the previous result, the next observation captures the compositionality of the operation that assigns to a Kleisli morphism its associated homomorphism.\n\\begin{lemma}\n\t$q^{\\beta \\gamma} \\circ p^{\\alpha \\beta} = (q \\cdot p)^{\\alpha \\gamma}$. 
\n\\end{lemma}\n\\begin{proof}\n\tThe definitions imply \n\t\\begin{align*}\n\t\tq^{\\beta \\gamma} \\circ p^{\\alpha \\beta} &= (h_{\\gamma} \\circ Ti_{\\gamma} \\circ \\mu_{Y_{\\gamma}} \\circ Tq \\circ d_{\\beta}) \\circ (h_{\\beta} \\circ Ti_{\\beta} \\circ \\mu_{Y_{\\beta}} \\circ Tp \\circ d_{\\alpha}) \\\\\n\t\t(q \\cdot p)^{\\alpha \\gamma} &= h_{\\gamma} \\circ Ti_{\\gamma} \\circ \\mu_{Y_{\\gamma}} \\circ T\\mu_{Y_{\\gamma}} \\circ T^2q \\circ Tp \\circ d_{\\alpha}.\n\t\\end{align*}\n\tBy deleting common terms and using the equality $d_{\\beta} \\circ h_{\\beta} \\circ Ti_{\\beta} = \\textnormal{id}_{TY_{\\beta}}$ it is thus sufficient to show\n\t\\[\n\t\\mu_{Y_{\\gamma}} \\circ Tq \\circ \\mu_{Y_{\\beta}} = \\mu_{Y_{\\gamma}} \\circ T\\mu_{Y_{\\gamma}} \\circ T^2q.\n\t\\]\n\tThe above equation follows from the commutativity of the diagram below\n\t\\begin{equation*}\n\t\t\t\t\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\t\t\t\tT^2Y_{\\beta} \\arrow{d}[left]{T^2q} \\arrow{r}{\\mu_{Y_{\\beta}}} & TY_{\\beta} \\arrow{r}{Tq} & T^2Y_{\\gamma} \\arrow{d}{\\mu_{Y_{\\gamma}}} \\\\\n\t\t\t\t\t\t\t\tT^3Y_{\\gamma} \\arrow{urr}{\\mu_{TY_{\\gamma}}} \\arrow{r}[below]{T\\mu_{Y_{\\gamma}}} & T^2 Y_{\\gamma} \\arrow{r}[below]{\\mu_{Y_{\\gamma}}} & TY_{\\gamma}\n\t\t\t\t\t\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{proof}\n\nAt the beginning of this section we recalled the soundness identity \\[ L(v)_{\\beta} = L_{\\alpha \\beta} v_{\\alpha} \\] for the matrix representation $L_{\\alpha \\beta}$ of a linear transformation $L$. The next result is a natural generalisation of this statement. 
\n\n\\begin{lemma}\n\t$d_{\\beta} \\circ f = f_{\\alpha \\beta} \\cdot d_{\\alpha}$ \n\\end{lemma}\n\\begin{proof}\n\tThe definitions imply\n\t\\[\n\t f_{\\alpha \\beta} \\cdot d_{\\alpha} = \\mu_{Y_{\\beta}} \\circ T(d_{\\beta} \\circ f \\circ i_{\\alpha}) \\circ d_{\\alpha}.\n\t\\]\n\tUsing \\Cref{forbasis-d-isalgebrahom} we deduce the commutativity of the diagram below\n\t\\begin{equation*}\n\t\t\t\t\t\t\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\t\tX_{\\alpha} \\arrow{d}[left]{\\textnormal{id}_{X_{\\alpha}}} \\arrow{r}{d_{\\alpha}} & TY_{\\alpha} \\arrow{r}{Ti_{\\alpha}} & TX_{\\alpha} \\arrow{dll}{h_{\\alpha}} \\arrow{r}{Tf} & TX_{\\beta} \\arrow{dll}{h_{\\beta}} \\arrow{r}{Td_{\\beta}} & T^2Y_{\\beta} \\arrow{d}{\\mu_{Y_{\\beta}}} \\\\\n\t\t\t\t\tX_{\\alpha} \\arrow{r}[below]{f} & X_{\\beta} \\arrow{rrr}[below]{d_{\\beta}} & & & TY_{\\beta}\t\t\t\t\n\t\t\t\t\t\t\t\t\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{proof}\n\nAssume we are given bases $\\alpha, \\alpha'$ and $\\beta, \\beta'$ for $\\mathbb{T}$-algebras $(X_{\\alpha}, h_{\\alpha})$ and $(X_{\\beta}, h_{\\beta})$, respectively. 
\nThe following result makes clear how the two basis representations $f_{\\alpha \\beta}$ and $f_{\\alpha' \\beta'}$ are related.\n\n\\begin{proposition}\n\\label{similaritygeneral}\n\tThere exist Kleisli isomorphisms $p$ and $q$ such that $f_{\\alpha' \\beta'} = q \\cdot f_{\\alpha \\beta} \\cdot p$.\n\\end{proposition}\n\\begin{proof}\n\tThe Kleisli morphisms $p$ and $q$ and their respective candidates for inverses $p^{-1}$ and $q^{-1}$ are defined below\n\t\\begin{alignat*}{6}\n\t\t& p &&:= d_{\\alpha} \\circ i_{\\alpha'}: Y_{\\alpha'} &&\\longrightarrow TY_{\\alpha} \\qquad \\qquad && q &&:= d_{\\beta'} \\circ i_{\\beta}: Y_{\\beta} && \\longrightarrow TY_{\\beta'} \\\\\n\t\t& p^{-1} &&:= d_{\\alpha'} \\circ i_{\\alpha}: Y_{\\alpha} && \\longrightarrow TY_{\\alpha'} \\qquad \\qquad && q^{-1} &&:= d_{\\beta} \\circ i_{\\beta'} : Y_{\\beta'} && \\longrightarrow TY_{\\beta}.\n\t\\end{alignat*}\nFrom \\Cref{forbasis-d-isalgebrahom} it follows that the diagram below commutes \\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tY_{\\alpha} \\arrow{r}{i_{\\alpha}} \\arrow{dd}[left]{\\eta_{Y_{\\alpha}}} & X_{\\alpha} \\arrow{d}{\\textnormal{id}_{X_{\\alpha}}} \\arrow{r}{d_{\\alpha'}} & TY_{\\alpha'} \\arrow{dd}{Ti_{\\alpha'}} \\\\\n\t\t\t& X_{\\alpha} \\arrow{dl}{d_{\\alpha}} & \\\\\n\t\t\tTY_{\\alpha} & T^2Y_{\\alpha} \\arrow{l}{\\mu_{Y_{\\alpha}}} & TX_{\\alpha} \\arrow{ul}{h_{\\alpha}} \\arrow{l}{Td_{\\alpha}}\n\t\t\\end{tikzcd}.\n\t\t\\end{equation*}\n\t\tThis shows that $p^{-1}$ is a Kleisli right-inverse of $p$. A symmetric version of the above diagram shows that $p^{-1}$ is also a Kleisli left-inverse of $p$. 
Analogously, it follows that $q^{-1}$ is a Kleisli inverse of $q$.\n\t\t\n\t\tThe definitions further imply the equalities\n\t\t\\begin{align*}\n\t\t\tq \\cdot f_{\\alpha \\beta} \\cdot p &= \\mu_{Y_{\\beta'}} \\circ T(d_{\\beta'} \\circ i_{\\beta}) \\circ \\mu_{Y_{\\beta}} \\circ T(d_{\\beta} \\circ f \\circ i_{\\alpha}) \\circ d_{\\alpha} \\circ i_{\\alpha'} \\\\\n\t\t\tf_{\\alpha' \\beta'} &= d_{\\beta'} \\circ f \\circ i_{\\alpha'}.\n\t\t\\end{align*}\n\t\tWe delete common terms and use \\Cref{forbasis-d-isalgebrahom} to establish the commutativity of the diagram below\n\t\t\\begin{equation*}\n\t\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tX_{\\alpha} \\arrow{dr}{\\textnormal{id}_{X_{\\alpha}}} \\arrow{r}{d_{\\alpha}} \\arrow{dd}[left]{f} & TY_{\\alpha} \\arrow{r}{Ti_{\\alpha}} & TX_{\\alpha} \\arrow{dl}[above]{h_{\\alpha}} \\arrow{r}{Tf} & TX_{\\beta} \\arrow{ddll}{h_{\\beta}} \\arrow{d}{Td_{\\beta}} \\\\\n\t\t\t & X_{\\alpha} \\arrow{d}{f} & & T^2Y_{\\beta} \\arrow{dd}{\\mu_{Y_{\\beta}}} \\\\\n\tX_{\\beta} \\arrow{d}[left]{d_{\\beta'}} & X_{\\beta} \\arrow{l}[above]{\\textnormal{id}_{X_{\\beta}}} \\arrow{drr}{d_{\\beta}} & & \\\\\n\t\t\tTY_{\\beta'} & T^2 Y_{\\beta'} \\arrow{l}{\\mu_{Y_{\\beta'}}} & TX_{\\beta} \\arrow{llu}{h_{\\beta}} \\arrow{l}{Td_{\\beta'}} & TY_{\\beta} \\arrow{l}{Ti_{\\beta}}\n\t\t\\end{tikzcd}.\n\t\t\\end{equation*}\t\n\\end{proof}\n\nThe above result simplifies when one restricts to an endomorphism: the respective basis representations are similar.\n\n\\begin{corollary}\n\t\\label{similarity}\nThere exists a Kleisli isomorphism $p$ with Kleisli inverse $p^{-1}$ such that $f_{\\alpha' \\alpha'} = p^{-1} \\cdot f_{\\alpha \\alpha} \\cdot p$.\n\\end{corollary}\n\\begin{proof}\n\tIn \\Cref{similaritygeneral} let $\\beta = \\alpha$ and $\\beta' = \\alpha'$. 
One verifies that in the corresponding proof the definitions of the morphisms $p^{-1}$ and $q$ coincide.\n\\end{proof}\n\n\\subsection{Bases as free algebras}\n\n\\label{basesfreealgebrasec}\n\nIn \\Cref{basisimpliesfree} it was shown that an algebra over a monad with a basis is isomorphic to a free algebra. Conversely, in \\Cref{canonicalbasisforfreealgbera} it was proven that a free algebra over a monad admits a basis. Intuitively, one may thus think of bases for an algebra as corresponding to isomorphic free algebras.\nIn this section we make this idea precise on the level of categories.\n\nFormally, given an algebra $(X,h)$ over a monad $\\mathbb{T}$, let $\\textsf{Free}(X,h)$ denote the category defined as follows: \n\\begin{itemize}\n\t\\item objects are given by pairs $(Y, \\varphi)$ consisting of an object $Y$ and an isomorphism $\\varphi: (TY, \\mu_Y) \\rightarrow (X,h)$; and\n\t\\item a morphism $f: (Y_{\\alpha}, \\varphi_{\\alpha}) \\rightarrow (Y_{\\beta}, \\varphi_{\\beta})$ between objects consists of a morphism $f: Y_{\\alpha} \\rightarrow Y_{\\beta}$ such that $\\varphi_{\\alpha} = \\varphi_{\\beta} \\circ Tf$.\n\\end{itemize}\n\nThe next result shows that for a fixed algebra, the natural isomorphism \nunderlying the free-algebra adjunction restricts to an equivalence between the category of bases defined in \\Cref{basisdefinition}, and the category of isomorphic free algebras given above.\n\\begin{proposition}\n$\\textnormal{\\textsf{Bases}}(X,h) \\simeq \\textnormal{\\textsf{Free}}(X,h)$\n\\end{proposition}\n\\begin{proof}\n\tWe define functors $F$ and $G$ between the respective categories as follows\n\t\\begin{alignat*}{7}\n\t\t&F&&: \\textnormal{\\textsf{Bases}}(X,h) &&\\longrightarrow \\textnormal{\\textsf{Free}}(X,h) \\qquad && F(Y,i,d)&& = (Y, i^{\\sharp}) \\qquad && Ff &&= f \\\\\n\t\t&G&&: \\textnormal{\\textsf{Free}}(X,h) && \\longrightarrow \\textnormal{\\textsf{Bases}}(X,h) \\qquad && G(Y, \\varphi) && = (Y, \\varphi \\circ \\eta_Y, \\varphi^{-1}) \\qquad 
&&Gf &&= f.\n\t\\end{alignat*} \n\t\t\n\tThe functor $F$ is well-defined on objects since by \\Cref{basisimpliesfree} the morphism $i^{\\sharp}$ is an isomorphism with inverse $d$. Its well-definedness on morphisms is an immediate consequence of the constraint $i_{\\alpha} = i_{\\beta} \\circ f$ for morphisms $f: (Y_{\\alpha},i_{\\alpha},d_{\\alpha}) \\rightarrow (Y_{\\beta}, i_{\\beta}, d_{\\beta})$ between bases,\n\t\\[\n\t(i_{\\alpha})^{\\sharp} = h \\circ Ti_{\\alpha} = h \\circ Ti_{\\beta} \\circ Tf = (i_{\\beta})^{\\sharp} \\circ Tf.\n\t\\]\t\n\t\n\tThe functor $G$ is well-defined on objects since $(\\varphi \\circ \\eta_Y)^{\\sharp} = \\varphi$ and $\\varphi^{-1}$ are mutually inverse. Its well-definedness on morphisms $f: (Y_{\\alpha}, \\varphi_{\\alpha}) \\rightarrow (Y_{\\beta}, \\varphi_{\\beta})$ follows from the equality $\\varphi_{\\beta} \\circ Tf = \\varphi_{\\alpha}$,\n\t\\begin{align*}\n\t\tTf \\circ (\\varphi_{\\alpha})^{-1} &= (\\varphi_{\\beta})^{-1} \\circ \\varphi_{\\beta} \\circ Tf \\circ (\\varphi_{\\alpha})^{-1} = (\\varphi_{\\beta})^{-1} \\circ \\varphi_{\\alpha} \\circ (\\varphi_{\\alpha})^{-1} = (\\varphi_{\\beta})^{-1}\n\t\\end{align*}\n\tand the naturality of $\\eta$,\n\t\\[\n\t\\varphi_{\\alpha} \\circ \\eta_{Y_{\\alpha}} = \\varphi_{\\beta} \\circ Tf \\circ \\eta_{Y_{\\alpha}} = \\varphi_{\\beta} \\circ \\eta_{Y_{\\beta}} \\circ f.\n\t\\]\n\t\n\tThe functors are clearly mutually inverse on morphisms. 
For objects the statement follows from\n\t\\begin{align*}\n\t\tF \\circ G(Y, \\varphi) &= (Y, (\\varphi \\circ \\eta_Y)^{\\sharp}) = (Y, \\varphi) \\\\\n\t\tG \\circ F(Y, i,d) &= (Y, i^{\\sharp} \\circ \\eta_Y, (i^{\\sharp})^{-1}) = (Y,i,d).\n\t\\end{align*}\n\\end{proof}\n\n\n\\subsection{Bases for bialgebras}\n\n\\label{basesforbialgebrasec}\n\nIt is well-known that a distributive law $\\lambda$ between a monad $\\mathbb{T}$ and an endofunctor $F$ induces a monad \n$\\mathbb{T}_{\\lambda}$ on the category of $F$-coalgebras such that the algebras over $\\mathbb{T}_{\\lambda}$ coincide with the $\\lambda$-bialgebras of \\Cref{defbialgebras}.\n This section is concerned with generators and bases for $\\mathbb{T}_{\\lambda}$-algebras, or equivalently, $\\lambda$-bialgebras.\n\nBy definition, a generator for a $\\lambda$-bialgebra $(X,h,k)$ consists of an $F$-coalgebra $(Y,k_Y)$ and morphisms $i: Y \\rightarrow X$ and $d: X \\rightarrow TY$, such that the three diagrams on the left below commute\n\\begin{equation}\n\\label{bialgebrageneratorequations}\t\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\tY \\arrow{r}{i} \\arrow{d}[left]{k_Y} & X \\arrow{d}{k} \\\\\n\t\tFY \\arrow{r}{Fi} &FX\n\t\\end{tikzcd}\n\t\\quad\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\tX \\arrow{r}{d} \\arrow{d}[left]{k} & TY \\arrow{d}[left]{\\lambda_Y \\circ Tk_Y} \\\\\n\t\tFX \\arrow{r}{Fd} &FTY\n\t\\end{tikzcd}\n\t\\quad\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTY \\arrow{r}{Ti} & TX \\arrow{d}{h}\\\\\n\t\t\tX \\arrow{u}{d} \\arrow{r}{\\textnormal{id}_X} & X \n\t\t\\end{tikzcd}\n\t\t\\quad\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTX \\arrow{r}{h} & X \\arrow{d}{d}\\\\\n\t\t\tTY \\arrow{u}{Ti} \\arrow{r}{\\textnormal{id}_{TY}} & TY \n\t\t\\end{tikzcd}.\n\\end{equation}\n\nA basis for a $\\lambda$-bialgebra is a generator such that, in addition, the diagram on the right above commutes.\n\nIt is easy to verify that by 
forgetting the $F$-coalgebra structure, every generator for a bialgebra in particular provides a generator for the underlying algebra of the bialgebra. By \\Cref{forgenerator-isharp-is-bialgebra-hom} it thus follows that there exists a $\\lambda$-bialgebra homomorphism \n\\[ i^{\\sharp} = h \\circ Ti : (TY, \\mu_Y, (Fd \\circ k \\circ i)^{\\sharp}) \\longrightarrow (X,h,k). \\]\nThe next result establishes that $i^{\\sharp}$ is also a $\\lambda$-bialgebra homomorphism from a second free bialgebra, which carries a different coalgebra structure.\n\n\\begin{lemma}\n\\label{generatorbialgebraisharp}\n\tLet $(Y,k_Y, i,d)$ be a generator for $(X,h,k)$. Then $i^{\\sharp}: TY \\rightarrow X$ is a $\\lambda$-bialgebra homomorphism $i^{\\sharp} : (TY, \\mu_Y, \\lambda_Y \\circ Tk_Y) \\rightarrow (X,h,k)$.\n\\end{lemma}\n\\begin{proof}\n\tClearly $i^{\\sharp}$ is a $\\mathbb{T}$-algebra homomorphism. It is an $F$-coalgebra homomorphism since the diagram below commutes\n\t\\begin{equation*}\n\t\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\tTY \\arrow{r}{Ti} \\arrow{d}[left]{Tk_Y} & TX \\arrow{r}{h} \\arrow{d}{Tk} & X \\arrow{dd}{k} \\\\\n\t\t\tTFY \\arrow{r}{TFi} \\arrow{d}[left]{\\lambda_Y} & TFX \\arrow{d}{\\lambda_X} & \\\\\n\t\t\tFTY \\arrow{r}{FTi} & FTX \\arrow{r}{Fh} & FX\n\t\t\\end{tikzcd}.\n\t\\end{equation*}\t\n\\end{proof}\n\nIf one passes from generators for bialgebras to bases for bialgebras, the two coalgebra structures coincide.\n\n\\begin{lemma}\n\\label{samecoalgebrastructures}\n\tLet $(Y,k_Y, i, d)$ be a basis for $(X,h,k)$. Then $\\lambda_Y \\circ Tk_Y = (Fd \\circ k \\circ i)^{\\sharp}$.\n\\end{lemma}\n\\begin{proof}\n\tUsing \\Cref{forbasis-d-isalgebrahom} we establish the commutativity of the diagram below\n\t\\begin{equation*}\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\tTY \\arrow{dd}[left]{\\textnormal{id}_{TY}} \\arrow{rr}{Tk_Y} & & TFY \\arrow{rr}{\\lambda_Y} \\arrow{dd}{TFi} & & FTY \\arrow{d}{FTi} \\arrow{rr}{\\textnormal{id}_{FTY}} && FTY \\arrow{dd}{\\textnormal{id}_{FTY}} \\\\\n\t& & & 
& FTX \\arrow{d}{FTd} \\arrow{r}{Fh} & FX \\arrow{dr}{Fd} & \\\\\n\tTY \\arrow{r}{Ti} & TX \\arrow{r}{Tk} & TFX \\arrow{r}{TFd} & TFTY \\arrow{r}{\\lambda_{TY}} & FT^2Y \\arrow{rr}{F\\mu_{Y}} & & FTY\t\n\t\\end{tikzcd}.\n\t\\end{equation*}\n\\end{proof}\n\nWe close this section by observing that a basis for the underlying algebra of a bialgebra is sufficient for constructing a generator for the full bialgebra.\n\n\\begin{lemma}\n\tLet $(X,h,k)$ be a $\\lambda$-bialgebra and $(Y,i,d)$ a basis for the $\\mathbb{T}$-algebra $(X,h)$. Then $(TY,(Fd \\circ k \\circ i)^{\\sharp},i^{\\sharp},\\eta_{TY} \\circ d)$ is a generator for $(X,h,k)$.\n\\end{lemma}\n\\begin{proof}\n\tIn the following we abbreviate $k_{TY} := (Fd \\circ k \\circ i)^{\\sharp}$. By \\Cref{forgenerator-isharp-is-bialgebra-hom} the morphism $i^{\\sharp}$ is an $F$-coalgebra homomorphism $ i^{\\sharp}: (TY, k_{TY}) \\rightarrow (X, k)$. This shows the commutativity of the diagram on the left of \\eqref{bialgebrageneratorequations}. \n\tBy \\Cref{forbasis-d-is-bi-algebrahom} the morphism $d$ is an $F$-coalgebra homomorphism in the reverse direction. 
Together with the commutativity of the diagram on the left below\n\t\\[\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\tTY \\arrow{r}{\\eta_{TY}} \\arrow{dd}[left]{k_{TY}} & T^2Y \\arrow{d}{Tk_{TY}} \\\\\n\t& TFTY \\arrow{d}{\\lambda_{TY}} \\\\\n\tFTY \\arrow{ur}{\\eta_{FTY}} \\arrow{r}{F\\eta_{TY}} & FT^2Y\t\n\t\\end{tikzcd}\n\t\\qquad\n\t\\begin{tikzcd}[row sep=3em, column sep = 3em]\n\t\t\t\tT^2Y \\arrow{r}{T^2i} & T^2X \\arrow{rr}{Th} \\arrow{dr}{\\mu_X} & & TX \\arrow{dd}{h} \\\\\n\t\t\t\tTY \\arrow{u}{\\eta_{TY}} \\arrow{r}{Ti} & TX \\arrow{u}{\\eta_{TX}} \\arrow{r}{\\textnormal{id}_{TX}} & TX \\arrow{dr}{h} \\\\\n\t\t\t\tX \\arrow{u}{d} \\arrow{rrr}{\\textnormal{id}_X} & & & X.\n\t\t\t\\end{tikzcd}\n\t\\] this implies the commutativity of the second diagram from the left in \\eqref{bialgebrageneratorequations}.\t\t\n\tSimilarly, the commutativity of the third diagram from the left in \\eqref{bialgebrageneratorequations} follows from the commutativity of the diagram on the right above.\n\\end{proof}\n\n\\section{Examples}\n\n\\label{examplesec}\n\nIn this section we give examples of generators and bases for algebras over the powerset-, downset-, distribution-, and neighbourhood monad.\n\n\\subsection{Powerset monad}\n\n\\label{powersetmonadsec}\n\nThe powerset monad $\\mathbb{P} = (\\mathcal{P}, \\mu, \\eta)$ on the category of sets is arguably the most well-known monad. Its underlying set endofunctor $\\mathcal{P}$ assigns to a set its powerset and to a function the function that maps a subset to its direct image. Multiplication and unit transformations are further given by \n\\[ \\eta_X(x) = \\lbrace x \\rbrace \\qquad \\mu_X(U) = \\bigcup_{A \\in U} A. \\]\n\nThe category of algebras for the powerset monad is famously isomorphic to the category of complete lattices and join-preserving functions. 
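As a small sanity check (a toy example of ours, not part of the text), the disjunctive algebra structure on the two-element set $2$, which reappears below as the output algebra for automata, exhibits the correspondence concretely in Python:

```python
# The disjunctive P-algebra on 2 = {0, 1}: h takes a subset to its
# supremum, i.e. h(A) = 1 iff 1 is in A; the empty join is 0.
h = lambda A: int(any(A))

# The induced order x <= y iff h({x, y}) = y is the usual 0 <= 1 ...
leq = lambda x, y: h(frozenset({x, y})) == y
assert leq(0, 1) and not leq(1, 0) and leq(0, 0) and leq(1, 1)

# ... and h computes suprema with respect to it.
assert h(frozenset({0, 1})) == 1   # join of {0, 1}
assert h(frozenset()) == 0         # bottom element
```

The general construction of the order from the algebra structure is recalled next.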
Indeed, since the existence of all joins is equivalent to the existence of all meets, one can define, given a $\\mathbb{P}$-algebra $(X,h)$, a complete lattice $(X, \\leq)$ with $x \\leq y :\\Leftrightarrow h(\\lbrace x, y \\rbrace) = y$ and supremum $\\bigvee A := h(A)$. Conversely, given a complete lattice $(X, \\leq)$ with supremum $\\bigvee$ one defines a $\\mathbb{P}$-algebra $(X,h)$ by $h(A) := \\bigvee A$. \n\nLet $F$ be the set endofunctor satisfying $FX = X^A \\times 2$. Coalgebras for $F$ are easily recognised as deterministic unpointed automata over the alphabet $A$, and coalgebras for the composition $F\\mathcal{P}$ coincide with unpointed non-deterministic automata over the same alphabet. The output set $2$ can be equipped with a disjunctive $\\mathbb{P}$-algebra structure, such that bialgebras for the induced canonical distributive law between $\\mathbb{P}$ and $F$ consist of deterministic automata with a complete lattice as state space and supremum-preserving transition and output functions \\cite{jacobs2012trace}. For instance, every deterministic automaton derived from a non-deterministic automaton via the classical subset construction is of such a form. \n\n The following observation relates generators for algebras over $\\mathbb{P}$ with the induced complete lattice structure of the latter.\n\n\\begin{lemma}\n\\label{generatorpowerset}\n\t$(Y, i, d)$ is a generator for a $\\mathbb{P}$-algebra $(X,h)$ iff \n\\[\nx = \\bigvee_{y \\in d(x)} i(y)\n\\]\nfor all $x \\in X$.\n\\end{lemma}\n\\begin{proof}\nFollows immediately from the equality\n\\begin{align*}\n\th \\circ \\mathcal{P}i \\circ d(x) &= h(\\lbrace i(y) \\mid y \\in d(x) \\rbrace) = \\bigvee_{y \\in d(x)} i(y).\n\\end{align*}\t\n\\end{proof}\n\nRecall that a non-zero element $x \\in X$ of a lattice $L = (X, \\leq)$ is called join-irreducible if $x = y \\vee z$ implies $x = y$ or $x = z$. 
\n It is well-known that in a finite lattice, or more generally in a lattice satisfying the descending chain condition, any element is the join of the join-irreducible elements below it. In other words, by \\Cref{generatorpowerset}, if $i: \\mathcal{J}(L) \\rightarrow L$ is the subset embedding of join-irreducibles and $d: L \\rightarrow \\mathcal{P}(\\mathcal{J}(L))$ satisfies \\[ d(x) = \\lbrace a \\in \\mathcal{J}(L) \\mid a \\leq x \\rbrace, \\] then $(\\mathcal{J}(L), i, d)$ is a generator for the $\\mathbb{P}$-algebra $L$.\n\n\\subsection{Downset monad}\n\nIn the previous section we have seen that the category of algebras for the powerset monad is equivalent to the category of complete lattices and join-preserving functions. It is probably less well-known that there exists a second monad with the same category of algebras: the downset monad $\\mathbb{P}_{\\downarrow} = (\\mathcal{P}_{\\downarrow}, \\mu, \\eta)$ on the category of posets. \n\nFor a subset $Y$ of a poset let $Y_{\\downarrow}$ be its so-called downward closure, that is, the set of those poset elements that lie below some element of $Y$. A subset is called a downset, or downclosed, if it coincides with its downward closure. The endofunctor $\\mathcal{P}_{\\downarrow}$ underlying the downset monad assigns to a poset the inclusion-ordered poset of its downclosed subsets, and to a monotone function the monotone function mapping a downclosed subset to the downclosure of its direct image. \nThe natural transformations $\\eta$ and $\\mu$ are further given by\n\\[\n\\eta_P(x) = \\lbrace x \\rbrace_{\\downarrow} \\qquad \\mu_P(U) = \\bigcup_{A \\in U} A.\n\\]\n\nGiven an algebra $(P,h)$ over $\\mathbb{P}_{\\downarrow}$, one verifies that the poset $P$ is a complete lattice with supremum $\\bigvee A := h(A_{\\downarrow})$. Conversely, given a complete lattice $P$ with supremum $\\bigvee$, one defines an algebra $(P,h)$ over $\\mathbb{P}_{\\downarrow}$ by $h(A) := \\bigvee A$. 
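As a concrete sketch (a toy lattice of ours, not from the text), the following Python fragment computes the join-irreducibles of the finite distributive lattice $L = (\\mathcal{P}(\\lbrace 1,2,3 \\rbrace), \\subseteq)$ and checks that the assignment $d(x) = \\lbrace a \\in \\mathcal{J}(L) \\mid a \\leq x \\rbrace$ recovers every element as the join of the join-irreducibles below it, as required of a generator:

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# The finite distributive lattice L = (P({1,2,3}), subset-inclusion),
# whose join is union (the empty join being the empty set).
L = powerset({1, 2, 3})
join = lambda A: frozenset().union(*A) if A else frozenset()

# x is join-irreducible iff x is non-zero and x = y v z forces x in {y, z}.
def irreducible(x):
    return x != frozenset() and all(
        x in (y, z) for y in L for z in L if join([y, z]) == x)

J = [x for x in L if irreducible(x)]
assert sorted(map(sorted, J)) == [[1], [2], [3]]  # the singletons, i.e. atoms

# d(x) = { a in J | a <= x } yields a generator: x is the join of d(x).
d = lambda x: [a for a in J if a <= x]
assert all(join(d(x)) == x for x in L)
```

For this lattice each set $d(x)$ is automatically downclosed within $\\mathcal{J}(L)$, matching the Birkhoff-style map discussed below.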
The following observation relates generators for algebras over $\\mathbb{P}_{\\downarrow}$ with the induced complete lattice structure of the latter.\n\n\\begin{lemma}\n\t$(Y,i,d)$ is a generator for a $\\mathbb{P}_{\\downarrow}$-algebra $(P,h)$ iff\n\t\\[\n\tx = \\bigvee_{y \\in d(x)} i(y)\n\t\\]\n\tfor all $x \\in P$.\n\\end{lemma}\n\\begin{proof}\nFollows immediately from the equality\n\t\\[\n\th \\circ \\mathcal{P}_{\\downarrow}i \\circ d(x) = h(\\lbrace i(y) \\mid y \\in d(x) \\rbrace_{\\downarrow}) = \\bigvee_{y \\in d(x)} i(y).\n\t\\]\n\\end{proof}\n\nIt is due to Birkhoff's Representation Theorem that if $L$ is a finite (hence complete) distributive lattice, the monotone function $d: L \\rightarrow \\mathcal{P}_{\\downarrow}(\\mathcal{J}(L))$ assigning to an element the downward closed set of join-irreducibles below it, is an isomorphism with an inverse satisfying $A \\mapsto \\bigvee A$. In other words, in light of the previous result, if $i: \\mathcal{J}(L) \\rightarrow L$ is the subset embedding of join-irreducibles, then $(\\mathcal{J}(L), i, d)$ constitutes a basis for the $\\mathbb{P}_{\\downarrow}$-algebra $L$.\n\n\n\n\\subsection{Distribution monad}\n\nThe distribution monad $\\mathbb{D} = (\\mathcal{D}, \\mu, \\eta)$ on the category of sets is given as follows. The underlying set endofunctor $\\mathcal{D}$ assigns to a set $X$ its set of distributions with finite support,\n\\[\n\\mathcal{D}X = \\lbrace p: X \\rightarrow \\lbrack 0, 1 \\rbrack \\mid \\textnormal{supp}(p) \\textnormal{ finite, and } \\sum_{x \\in X} p(x) = 1 \\rbrace\n\\]\n\tand to a function $f: X \\rightarrow Y$ the direct image $\\mathcal{D}f: \\mathcal{D}X \\rightarrow \\mathcal{D}Y$ satisfying the equality \n\t\\[ \\mathcal{D}(f)(p)(y) = \\sum_{x \\in f^{-1}(y)} p(x). \\]\nThe natural transformations $\\eta$ and $\\mu$ are further given by \n\\[ \\eta_X(x)(y) = \\lbrack x = y \\rbrack \\qquad \\mu_X(\\Phi)(x) = \\sum_{p \\in \\mathcal{D}X} p(x) \\Phi(p). 
\\]\n\nIt is well-known that the category of algebras for the distribution monad is isomorphic to the category of convex sets and affine functions \\cite{jacobs2010convexity}. Indeed, any $\\mathbb{D}$-algebra $(X,h)$ can be turned into a unique convex set with finite sums defined by $\\sum_{i} r_i x_i := h(p)$ for $p(x) = r_i$ if $x=x_i$, and $p(x) = 0$ otherwise. \n\nLet $F$ be the set endofunctor satisfying $FX = X^A \\times \\lbrack 0, 1 \\rbrack$. Coalgebras for the composed endofunctor $F\\mathcal{D}$ are known as unpointed Rabin probabilistic automata over the alphabet $A$ \\cite{rabin1963probabilistic, rutten2013generalizing}. The unit interval can be equipped with a $\\mathbb{D}$-algebra structure $h$ which satisfies $h(p) = \\sum_{x \\in \\lbrack 0, 1 \\rbrack} x p(x)$ and induces a canonical distributive law between $\\mathbb{D}$ and $F$ \\cite{jacobs2012trace}. The respective bialgebras consist of unpointed Moore automata with input $A$ and output $\\lbrack 0, 1 \\rbrack$, a convex set as state space, and affine transition and output functions. For instance, every Moore automaton derived from a probabilistic automaton by assigning to the state space of the latter all its distributions with finite support constitutes such a bialgebra.\n\n\n The next observation relates generators of algebras over $\\mathbb{D}$ with the induced convex set structure on the latter. For simplicity we assume that in the following statement the function $i: Y \\rightarrow X$ is an injection.\n\n\\begin{lemma}\n\t$(Y,i,d)$ is a generator for a $\\mathbb{D}$-algebra $(X,h)$ iff\n\t\\[\n\tx = \\sum_{y \\in Y} d(x)(y) i(y)\t\\]\n\tfor all $x \\in X$.\n\\end{lemma}\n\\begin{proof}\nFor $x \\in X$ let $p_{x} \\in \\mathcal{D}X$ be the distribution satisfying the equality $p_{x}(\\overline{x}) = \\sum_{y \\in i^{-1}(\\overline{x})} d(x)(y)$. Since $i$ is injective we find $p_{x}(\\overline{x}) = d(x)(y)$, if $\\overline{x} = i(y)$, and $p_{x}(\\overline{x}) = 0$ otherwise. 
Thus we can deduce\t\\[\t\th \\circ \\mathcal{D}i \\circ d(x) = h(p_{x}) = \\sum_{y \\in Y} d(x)(y) i(y). \\]\n\t\n\\end{proof}\n\n\\subsection{Neighbourhood monad}\n\nIt is well-known that the contravariant set functor assigning to a set its powerset, and to a function the map that precomposes characteristic functions with it, is dually self-adjoint. The monad $\\mathbb{H} = (N, \\mu, \\eta)$ induced by the adjunction is known under the name neighbourhood monad, since its coalgebras are related to neighbourhood frames in modal logic \\cite{jacobs2017recipe}. Its underlying set endofunctor $N$ is given by\n\\begin{gather*}\n\tNX = 2^{2^X} \\qquad Nf(\\Phi)(\\varphi) = \\Phi(\\varphi \\circ f), \n\\end{gather*}\nand its unit $\\eta$ and multiplication $\\mu$ satisfy\n\\begin{gather*}\n\t\\eta_X(x)(\\varphi) = \\varphi(x) \\qquad \\mu_X(\\Phi)(\\varphi) = \\Phi(\\eta_{2^X}(\\varphi)).\n\\end{gather*}\n\nRecall that a non-zero element of a lattice is called an atom if there exists no non-zero element strictly below it, and a lattice is called atomic if each element can be written as the join of the atoms below it. \nIt is well-known that (i) the category of algebras over $\\mathbb{H}$ is equivalent to the category of complete atomic Boolean algebras; and (ii) the category of complete atomic Boolean algebras is contravariantly equivalent to the category of sets \\cite{taylor2002subspaces}. 
\n\nIn more detail \\cite{bezhanishvili2020minimisation}, the equivalence (i) assigns to a $\\mathbb{H}$-algebra $(X,h)$ the complete atomic Boolean algebra on $X$ with pointwise induced operations\n\\begin{equation}\n\t\\label{inducedcaba}\n\t\\begin{gathered}\n\t0 = h(\\emptyset) \\qquad 1 = h(2^X) \\qquad \\neg x = h(\\sim \\eta_X(x)) \\\\\n\t \\bigvee A = h(\\bigcup_{x \\in A} \\eta_X(x)) \\qquad \\bigwedge A = h(\\bigcap_{x \\in A} \\eta_X(x)),\n\\end{gathered}\n\\end{equation}\nwhile the equivalence (ii) assigns to a complete atomic Boolean algebra $B$ its set of atoms $At(B)$, and to a set $X$ the complete atomic Boolean powerset algebra $\\mathcal{P}X$. That the two assignments in the latter equivalence are mutually inverse is witnessed by the Boolean algebra isomorphism between $B$ and $\\mathcal{P}(At(B))$ that assigns to an element the set of atoms below it.\nFor $K$ the Eilenberg-Moore comparison functor induced by the monadic self-dual powerset adjunction, and $J$ the equivalence in (i), one may recover the representation in (ii) as the composition of $J$ with $K$ \\cite{taylor2002subspaces}.\n\nAs before, let $F$ be the set endofunctor satisfying $FX = X^A \\times 2$. One verifies that coalgebras for the composed endofunctor $F N$ can be recognised as unpointed alternating automata \\cite{bezhanishvili2020minimisation}. The set $2$ can be equipped with a $\\mathbb{H}$-algebra structure $h$ which satisfies $h(\\varphi) = \\varphi(\\textnormal{id}_2)$ and induces a canonical distributive law between $\\mathbb{H}$ and $F$ \\cite{jacobs2012trace}. The corresponding bialgebras consist of deterministic automata with a complete atomic Boolean algebra as state space and join-preserving transition and output Boolean algebra homomorphisms. For instance, every deterministic automaton derived from an alternating automaton via a double subset construction is of such a form. 
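The Boolean algebra isomorphism $B \cong \mathcal{P}(At(B))$ mentioned above can be checked concretely on a small powerset algebra. The sketch below is our own illustration (an arbitrary three-element base set), not part of the paper's development:

```python
from itertools import chain, combinations

X = {0, 1, 2}

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

B = powerset(X)                        # the Boolean algebra P(X), ordered by inclusion
atoms = [b for b in B if len(b) == 1]  # the atoms of P(X) are exactly the singletons

def to_atoms(b):
    """The representation map B -> P(At(B)): the atoms below b."""
    return frozenset(a for a in atoms if a <= b)

def from_atoms(A):
    """Its inverse: the join (here, union) of a set of atoms."""
    return frozenset().union(*A) if A else frozenset()

# The two maps are mutually inverse ...
assert all(from_atoms(to_atoms(b)) == b for b in B)
# ... and the representation preserves binary joins and meets.
assert all(to_atoms(b | c) == to_atoms(b) | to_atoms(c) for b in B for c in B)
assert all(to_atoms(b & c) == to_atoms(b) & to_atoms(c) for b in B for c in B)
```

For a general complete atomic Boolean algebra the same map, an element to the set of atoms below it, plays the role checked here for $\mathcal{P}(X)$.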
\n\nThe next observation relates generators of a $\\mathbb{H}$-algebra with the induced complete atomic Boolean algebra structure on the latter.\n\n\\begin{lemma}\n\t$(Y, i, d)$ is a generator for a $\\mathbb{H}$-algebra $(X,h)$ iff \n\t\\[\n\tx = \\bigvee_{A \\in d(x)} ( \\bigwedge_{y \\in A} i(y) \\wedge \\bigwedge_{y \\not \\in A} \\neg i(y) )\n\t\\]\n\tfor all $x \\in X$. \n\\end{lemma}\n\\begin{proof}\nOne verifies that, after identifying a subset with its characteristic function, any $\\Phi \\in NY$ satisfies \\[ \\Phi(\\psi) = \\bigvee_{A \\in \\Phi} ( \\bigwedge_{y \\in A} \\psi(y) \\wedge \\bigwedge_{y \\not \\in A} \\neg \\psi(y)). \\] In particular we can deduce the following equality:\n\t\\begin{align*}\n\t\tx &= h \\circ Ni \\circ d(x) = h(\\lambda \\varphi.d(x)(\\varphi \\circ i)) \\\\\n\t\t&= h(\\bigcup_{A \\in d(x)} (\\bigcap_{y \\in A} \\eta_X(i(y)) \\cap \\bigcap_{y \\not \\in A} \\sim \\eta_X(i(y)))).\n\t\\end{align*}\n\tThe statement follows from the latter by \\eqref{inducedcaba}. \n\t\\end{proof}\n\n\n\\section{Related work}\n\n\\label{relatedworksec}\n\nOne of the main motivations for the present paper has been our broad interest in active learning algorithms for state-based models \\cite{angluin1987learning}, in particular automata for NetKAT \\cite{anderson2014netkat}, a formal system for the verification of networks based on Kleene Algebra with Tests \\cite{kozen1996kleene}. \nOne of the main challenges in learning non-deterministic models such as NetKAT automata is the common lack of a unique minimal acceptor for a given language \\cite{denis2002residual}. The problem has been independently approached for different variants of non-determinism, often with the common idea of finding a subclass admitting a unique representative \\cite{esposito2002learning, berndt2017learning}. A more general and unifying perspective has been given by van Heerdt \\cite{van2017learning}; see also \\cite{van2016master, van2020phd}. 
\n\nOne of the central notions in the work of van Heerdt is the concept of a scoop, originally introduced by Arbib and Manes \\cite{arbib1975fuzzy}.\nIn the present paper this notion coincides with what we call a generator in \\Cref{generatordefinition}. Scoops have primarily been used as a tool for constructing minimal realisations of automata, similarly to \\Cref{forgenerator-isharp-is-bialgebra-hom}. Strengthening the definition of Arbib and Manes to the notion of a basis in \\Cref{basisdefinition} allows us to further extend such automata-theoretic results, e.g. \\Cref{forbasis-bialgebra-areisomorphic}, but also uncovers connections with universal algebra, leading for instance to a representation theory of algebra homomorphisms in the same framework.\n\nA generalisation of the notion of a basis to algebras of arbitrary monads has been approached before. For instance, in \\cite{jacobs2011bases} Jacobs defines a basis for an algebra as a coalgebra for the comonad on the category of algebras induced by the free algebra adjunction. One can show that a basis in the sense of \\Cref{basisdefinition} always induces a basis in the sense of \\cite{jacobs2011bases}. Conversely, given certain assumptions about the existence and preservation of equalisers, it is possible to recover a basis in the sense of \\Cref{basisdefinition} from a basis in the sense of \\cite{jacobs2011bases}. Starting with a basis in the sense of \\Cref{basisdefinition}, the composition of both translations yields a basis that is no less compact than the basis one began with; in certain cases they coincide. As equalisers do not necessarily exist and are not necessarily preserved, our approach carries additional data and can thus be seen as finer. 
\n\n\n\\section{Discussion and future work}\n\n\\label{discussionsec}\n\nWe have presented a notion of a basis for an algebra over a monad on an arbitrary category that subsumes the familiar notion for algebraic theories.\nWe have covered questions about the existence and uniqueness of bases, and established a representation theory for homomorphisms between algebras over a monad in the spirit of linear algebra by substituting Kleisli morphisms for matrices. \nBuilding on foundations in the work of Arbib and Manes \\cite{arbib1975fuzzy}, we further have established that a basis for the underlying algebra of a bialgebra yields an isomorphic bialgebra with free state space.\nMoreover, we have established an equivalence between the category of bases for an algebra and the category of its isomorphic free algebras, and looked into bases for bialgebras.\nFinally we gave characterisations of bases for algebras over the powerset, downset, distribution, and neighbourhood monad.\n\nFor the future we are particularly interested in using the present work for a unified view on the theory of residual finite state automata \\cite{denis2002residual} (RFSA) and variations of it, for instance the theories of residual probabilistic automata \\cite{esposito2002learning} and residual alternating automata \\cite{berndt2017learning}. RFSA are non-deterministic automata that share with deterministic automata two important properties: for any regular language they admit a unique minimal acceptor, and the language of each state is a residual of the language of its initial state. In \\Cref{RFSAexample} we have demonstrated that the so-called canonical RFSA can be recovered as the bialgebra with free state space induced by a generator of join-irreducibles for the underlying algebra of a particular bialgebra. We believe we can uncover similar correspondences for other variations of non-determinism. 
Similar ideas have already served as motivation in the work of Arbib and Manes \\cite{arbib1975fuzzy} and have recently come up again in \\cite{myers2015coalgebraic}. We are also interested in insights into the formulation of active learning algorithms along the lines of \\cite{angluin1987learning} for different classes of residual automata, as sketched in the related work section.\n\n\\section{Acknowledgements}\nThis research has been supported by GCHQ via the VeTSS grant \"Automated black-box verification of networking systems\" (4207703\/RFA 15845).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/teaser.png}\n \\caption{Fusing a single-view depth probability distribution predicted by a DNN with standard photometric error terms helps to resolve ambiguities in the photometric error due to occlusions or lack of texture. The above projected keyframe depth map was created by our system.}\n \\label{fig:teaser}\n \\vspace{2mm}\\hrule\n\\end{figure}\n\nThere has been continued research interest in using structure-from-motion (SfM) and visual simultaneous localisation and mapping (SLAM) for the incremental creation of dense 3D scene geometry due to its potential applications in safe robotic navigation, augmented reality, and manipulation.\nUntil recently, dense monocular reconstruction systems typically worked by minimising the photometric error over several frames.\nAs this minimisation problem is not well-constrained due to occlusion boundaries or regions of low texture, most reconstruction systems employ regularisers based on smoothness (\\cite{Newcombe:etal:ICCV2011}, \\cite{Pizzoli:etal:ICRA2014}) or planar (\\cite{Concha:etal:RSS2014, Concha:Civera:IROS2015, Concha:etal:ICRA2016}) assumptions.\n\nWith the continued success of deep learning in computer vision, there have been many suggestions for data-driven approaches to the monocular 
reconstruction problem.\nSeveral of these approaches propose a completely end-to-end framework, predicting the scene geometry from either a single image (\\cite{Eigen:etal:NIPS2014, Laina:etal:3DV2016, Godard:etal:CVPR2017, Fu:etal:CVPR2018}) or several consecutive frames (\\cite{Ummenhofer:etal:CVPR2017, Zhou:etal:CVPR2017, Mahjourian:etal:CVPR2018, Zhou:etal:ECCV2018, Yao:etal:ECCV2018, Chang:Chen:CVPR2018}).\nMost promising, however, are those systems that combine deep learning with standard geometric constraints (\\cite{Weerasekera:etal:ICRA2017, Tateno:etal:CVPR2017, Yang:etal:ECCV2018, Bloesch:etal:CVPR2018, Wang:etal:CVPR2018, Laidlow:etal:ICRA2019, Tang:Tan:2019}).\nIt was shown in \\cite{Facil:etal:RAL2017} that learning-based and geometry-based approaches have a complementary nature as learning-based systems tend to perform better on the interior points of objects but blur edges, whereas geometry-based systems typically do well on areas with a high image gradient but perform poorly on interior points that may lack texture.\n\nThe optimal way to combine these two approaches, however, is not clear.\nThe best current results seem to come from systems that take the output of traditional geometry-based systems and feed these into a deep neural network (DNN). 
A particularly impressive example of this type of system is DeepTAM \\cite{Zhou:etal:ECCV2018}, which passes a photometric cost volume through a network to extract a depth map.\n\nIt may be desirable, however, to use learning-based systems as an additional component that is fused into the pipeline of a traditional system.\nThis approach would keep the probabilistic framework of the reconstruction system, a requirement for many robotic applications.\nPossible benefits of such a framework include avoiding the necessity of having to perform an expensive neural network pass every time the geometric information is updated, and, as DNNs perform best on images close to the training dataset, it might be possible to switch the network component on or off or switch between different networks depending on the environment being reconstructed.\nThe difficulty of this approach, however, is that to probabilistically fuse the network outputs into a 3D reconstruction system, some measure of the uncertainty associated with each prediction is required.\n\nIn this paper, we propose a 3D reconstruction system that fuses together the output of a DNN with a standard photometric cost volume to create dense depth maps for a set of keyframes.\nWe train a network to predict a discrete, nonparametric probability distribution for the depth of each pixel over a given range from a single image.\nLike \\cite{Liu:etal:CVPR2019}, we refer to this collection of probability distributions for each pixel in the keyframe as a ``probability volume''.\nThen, with each subsequent frame, we create a probability volume based on the photometric consistency between the current frame and the keyframe image and fuse this into the keyframe volume.\nThe main contribution of this paper is to demonstrate that combining the probability volumes from these two sources often results in a better conditioned probability volume.\nWe extract depth maps from the probability volume by optimising a cost function that includes 
a regularisation term based on network predicted surface normals and occlusion boundaries.\nPlease see Figure \\ref{fig:teaser} for an example keyframe reconstruction created by our system.\n\n\n\n\\section{RELATED WORKS}\n\nIn general, uncertainty can be classified into two categories: model or epistemic uncertainty, and statistical or aleatoric uncertainty. In \\cite{Gal:Ghahramani:ICML2016}, the authors suggest using a Monte Carlo dropout technique to estimate the model uncertainty of a network, but this requires multiple expensive network passes.\n\nLike \\cite{Bishop:MDN}, the authors of \\cite{Kendall:Gal:NIPS2017} propose having the network predict its own aleatoric uncertainty and using a Gaussian or Laplacian likelihood as the loss function during training, which was used by \\cite{Laidlow:etal:ICRA2019} for 3D reconstruction.\nThe problem with this approach is that it forces the network to predict a parametric and unimodal distribution.\nAs shown in \\cite{Campbell:etal:ECCV2008}, this type of distribution may be particularly ill-suited to dense reconstruction where there is a clear need for a multi-hypothesis prediction.\n\nOne proposal has been to use a multi-headed network (\\cite{Zhou:etal:ECCV2018}, \\cite{Peretroukhin:etal:CVPRW2019}) with each head making a separate prediction.\nFrom these many predictions, one can calculate the mean and covariance to use in a probabilistic fusion algorithm.\nThe drawbacks of this approach are that it increases the size of the network and requires a careful balancing of the relative size of the network body and heads. 
\n\nRecently, both \\cite{Fu:etal:CVPR2018} and \\cite{Liu:etal:CVPR2019} achieved impressive results by having their networks predict discrete, nonparametric probability distributions.\nWhile \\cite{Liu:etal:CVPR2019} uses these distributions to fuse the output with other network predictions, to the best of our knowledge, no one has used this method to fuse the predictions of networks with the output of standard reconstruction pipelines, which is what we aim to do in this paper.\n\n\n\n\\section{METHOD}\n\nIn this section, we describe our method for fusing predictions from DNNs into a standard 3D reconstruction pipeline to produce dense depth maps.\n\nOur system represents the observed geometry as a collection of keyframe-based ``probability volumes''.\nThat is, instead of representing the surface as a depth map with a single depth estimate per pixel, the depth is represented with a per-pixel discrete probability distribution over a given depth range.\nThese probability volumes are initialised with the output of a monocular depth prediction network.\nWith each additional RGB image, the system computes a cost volume based on the photometric consistency.\nThis cost volume is then converted to a probability volume and fused into the volume of the current keyframe.\nOnce the number of inliers drops below a given threshold, a new keyframe is created. 
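The per-pixel fusion step just outlined (as specified later in this section, a bin-wise product of the keyframe and reference distributions, renormalised to sum to one along the ray) can be sketched as follows. The toy numbers are our own invention, purely for illustration; this is not the system's implementation:

```python
def fuse(p_kf, p_rf):
    """Fuse two discrete depth distributions for one pixel:
    bin-wise product, renormalised to sum to one along the ray."""
    prod = [a * b for a, b in zip(p_kf, p_rf)]
    total = sum(prod)
    if total == 0:  # degenerate product (our own guard, not from the paper):
        return list(p_kf)  # fall back to the keyframe belief
    return [v / total for v in prod]

# Toy example: a bimodal network prior and a photometric likelihood peaked
# near the second mode resolve to a single strong peak after fusion.
p_network     = [0.05, 0.40, 0.10, 0.40, 0.05]
p_photometric = [0.10, 0.10, 0.20, 0.50, 0.10]
p_fused = fuse(p_network, p_photometric)
assert abs(sum(p_fused) - 1.0) < 1e-9
assert max(range(5), key=lambda k: p_fused[k]) == 3  # ambiguity resolved
```

Because the product damps bins where either source assigns low probability, repeated fusion of reference frames progressively sharpens the keyframe volume, which is the behaviour illustrated qualitatively in the results section.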
To propagate information from one keyframe to another, we warp the previous distribution and fuse it into the new one.\n\nWhen we want to extract a depth map from the probability volume, we could take the maximum probability depth values, but in featureless regions where there is also high network uncertainty this would be susceptible to false minima and cause local inconsistencies in the prediction.\nAlso, as the probability distribution is discrete, taking the maximum would result in a quantisation of the final depth prediction.\nTo overcome these shortcomings, we first construct a smooth probability density function (PDF) from the volume using a kernel density estimation (KDE) technique.\nWe then minimise the negative log probability of this PDF along with a regularisation term.\nWhile many dense systems propose using regularisers based on smoothness (\\cite{Newcombe:etal:ICCV2011, Pizzoli:etal:ICRA2014}) or planar (\\cite{Concha:etal:RSS2014, Concha:etal:ICRA2014, Concha:Civera:IROS2015}) assumptions, we follow the examples of \\cite{Weerasekera:etal:ICRA2017} and \\cite{Laidlow:etal:ICRA2019} and penalise our reconstruction for deviating from the surface normals predicted by a DNN.\n\n\\subsection{Multi-Hypothesis Monocular Depth Prediction}\n\nRather than predict a single depth value for each pixel, our network predicts a discrete depth probability distribution over a given range, similar to \\cite{Fu:etal:CVPR2018} and \\cite{Liu:etal:CVPR2019}.\nNot only does this allow the network to express uncertainty about its prediction, but it also allows the network to make a multi-hypothesis depth prediction.\nAs discussed in \\cite{Fu:etal:CVPR2018}, the prediction of the depth probability distribution can be improved by having a variable resolution over the depth range.\nWe choose a log-depth parameterisation, following the examples of \\cite{Weerasekera:etal:ICRA2018} and \\cite{Eigen:etal:NIPS2014}.\nBy uniformly dividing the depth range in log-space, we achieve 
the desired result of having higher resolution in the areas close to the camera and lower resolution farther away.\n\nFor our network architecture (see Figure \\ref{fig:network}), we use a ResNet-50 encoder \\cite{He:etal:CVPR2016} followed by three upsample blocks, each consisting of a bilinear upsampling layer, a concatenation with the input image, and then two convolutional layers to bring the output back up to the input resolution.\nAll inputs and outputs have a resolution of 256$\\times$192.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/network_diagram_arrows.png}\n \\caption{Our network consists of a ResNet-50 encoder with an output stride size of 8 and no global pooling layer. We then pass the output of the encoder through three upsample blocks consisting of a bilinear resize, concatenation with the input image, and then two convolutional layers to match the output resolution to the input. The probability distribution that the network outputs is discretised over 64 channels.}\n \\label{fig:network}\n \\vspace{2mm}\\hrule\n\\end{figure}\n\nAs we are having the network predict a discrete distribution rather than a depth map, we cannot use a standard loss function based on the sum of squared errors.\nA cross-entropy loss would not be ideal either, as we would like to penalise the network less for predicting high probabilities in incorrect bins that are close to the true bin than in bins farther away.\nInstead, we choose to use the ordinal loss function proposed in \\cite{Fu:etal:CVPR2018}:\n\\begin{equation}\n\\begin{split}\n \\mathcal{L}(\\mbs \\theta) ={} & -\\sum_i {} \\Biggl[ {} \\sum_{k = 0}^{k_i^*} \\log(p_{\\mbs \\theta,i}(k_i^* \\geq k)) \\\\\n &+ \\sum_{k = k_i^* + 1}^{K-1} \\log(1 - p_{\\mbs \\theta,i}(k_i^* \\geq k)) \\Biggr],\n\\end{split}\n\\end{equation}\n\\noindent\nwhere\n\\begin{equation}\n p_{\\mbs \\theta,i}(k_i^* \\geq k) = \\sum_{j = k}^{K-1} p_{\\mbs \\theta,i}(k_i^* = 
j),\n\\end{equation}\n\\noindent\n$\\mbs \\theta$ is the set of network weights, $K$ is the number of bins over which the depth range is discretised, $k_i^*$ is the index of the bin containing the ground truth depth for pixel $i$, and $p_{\\mbs \\theta,i}(k_i^* = j)$ is the network prediction of the probability that the ground truth depth is in bin $j$.\n\nLike \\cite{Liu:etal:CVPR2019}, we train our network on the ScanNet RGB-D dataset \\cite{Dai:etal:CVPR2017}.\nNo fine-tuning was done on our evaluation dataset, the TUM RGB-D dataset \\cite{Sturm:etal:IROS2012}.\nWe set the depth range to be between 10cm and 12m and group the log-depth values uniformly into 64 bins.\n\nEach keyframe created by our system is initialised with this network output.\n\n\\subsection{Fusion with Photometric Error Terms}\n\nFor each additional reference frame, we construct a DTAM-style cost volume \\cite{Newcombe:etal:ICCV2011}.\nFirst, we normalise both the keyframe and reference frame images by subtracting their means and dividing by their standard deviations.\nWe then calculate the photometric error by warping the normalised keyframe image into the reference frame for each depth value in the cost volume and taking the sum of squared differences on 3$\\times$3 patches.\nTo simplify the later fusion, we use the midpoint of each of the depth bins used for the network prediction as the depth values in the cost volume.\nPoses are obtained from an oracle, such as a separate tracking system like ORB-SLAM2 \\cite{Mur-Artal:etal:TRO2017}.\n\nTo convert to a probability volume, we separately scale the negative of the squared photometric error for each pixel such that it sums to one over the ray.\nWe then fuse this new probability volume, $p_{\\text{RF}}$, into the current keyframe volume, $p_{\\text{KF}}$:\n\\begin{equation}\n p_i(k_i^* = k) = p_{\\text{KF},i}(k_i^* = k) p_{\\text{RF},i}(k_i^* = k),\n\\end{equation}\n\\noindent\nfor each pixel $i$ and depth $k$, which is then scaled to sum to 
one over the ray.\n\n\\subsection{Kernel Density Estimation}\n\nTo avoid a quantisation of the final depth prediction and to have a smooth function to use in the optimisation step, we construct a PDF for the depth of each pixel using a KDE technique with Gaussian basis functions:\n\\begin{equation}\n f_i(d) = \\sum_{k = 0}^{K-1} p_i(k_i^* = k) \\phi\\left(d(k), \\sigma\\right)\n\\end{equation}\n\\noindent\nwhere $\\phi\\left(\\mu, \\sigma\\right)$ is the probability density of the Gaussian distribution with mean $\\mu$ and standard deviation $\\sigma$, $d(k)$ is the depth value at the midpoint of bin $k$, and $\\sigma$ is a constant smoothing parameter across all pixels and depth values.\nThe value of $\\sigma$ is a hyperparameter that needs to be tuned empirically; we found that $\\sigma = 0.1$ works well in our setting.\n\nAn example of a discrete PDF produced by our system and the smoothed result after applying the KDE technique is shown in Figure \\ref{fig:smoothing}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/smoothed.png}\n \\caption{Our fusion algorithm produces a discrete probability distribution for each pixel in the keyframe. 
To reduce discretisation errors and to have a continuous cost function for the optimiser, we convert the probability values along each ray into a smooth probability density function using a kernel density estimation technique.}\n \\label{fig:smoothing}\n \\vspace{2mm}\\hrule\n\\end{figure}\n\n\\subsection{Regularisation}\n\nAlthough the fused probability volume has more local consistency than one built from the photoconsistency terms alone, the result can still be improved by adding a regularisation term to the optimisation used to extract the depth map.\nWhile most dense reconstruction systems base their regularisers on smoothness or planar assumptions, we propose using the surface normals predicted by a DNN, as was done in both \\cite{Weerasekera:etal:ICRA2017} and \\cite{Laidlow:etal:ICRA2019}, as this may allow for better preservation of fine-grained local geometry.\nTo predict the surface normals from the keyframe image, we use the state-of-the-art network SharpNet \\cite{Ramamonjisoa:Lepetit:ICCVW2019}.\nAs we determine the local surface orientation of our depth estimate from neighbouring pixels and do not wish to incur high costs at depth discontinuities, we mask the regularisation term at occlusion boundaries, which are also predicted by SharpNet.\nSince SharpNet actually predicts the probability of each pixel belonging to an occlusion boundary, we include in the mask all pixels with a probability higher than 0.4.\nExample predictions of surface normals and occlusion boundaries made by SharpNet on the TUM RGB-D dataset \\cite{Sturm:etal:IROS2012} are shown in Figure \\ref{fig:sharpnet}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/sharpnet.png}\n \\caption{To regularise our depth estimate, we use the surface normals and occlusion boundaries predicted by SharpNet \\cite{Ramamonjisoa:Lepetit:ICCVW2019}. Some examples of the predictions made by SharpNet on the TUM RGB-D dataset \\cite{Sturm:etal:IROS2012} are shown above. 
From left to right: input RGB images, predicted normals, predicted occlusion boundaries with a probability greater than 0.4.}\n \\label{fig:sharpnet}\n \\vspace{2mm}\\hrule\n\\end{figure}\n\n\\subsection{Optimisation}\n\nTo extract a depth map from the probability volume, we minimise a cost function consisting of two terms:\n\\begin{equation}\n c(\\mbf{d}) = c_f(\\mbf{d}) + \\lambda c_{\\mbf{\\hat n}}(\\mbf{d}),\n\\end{equation}\n\\noindent\nwhere $\\mbf{d}$ is the set of depth values to be estimated, and $\\lambda$ is a hyperparameter used to adjust the strength of the regularisation term. Empirically, we found $\\lambda = 1.0 \\cdot 10^7$ to work well.\n\nThe first term, $c_f$, imposes a unary constraint on each of the pixels:\n\\begin{equation}\n c_{f}(\\mbf{d}) = -\\sum_i \\log \\left( f_i(d_i) \\right)\n\\end{equation}\n\\noindent\nwhere $f_i(d_i)$ is the smoothed PDF of pixel $i$ evaluated at depth $d_i$.\n\nThe second term, $c_{\\mbf{\\hat n}}$, is a regularisation term that combines two pairwise constraints:\n\\begin{equation}\n\\begin{split}\n c_{\\mbf{\\hat n}}(\\mbf{d}) = \\sum_i & b_i \\left( \\langle \\mbf{\\hat n}_i, d_i \\mbf{K}^{-1} \\mbf{\\tilde u}_i - d_{i+1} \\mbf{K}^{-1} \\mbf{\\tilde u}_{i+1} \\rangle \\right)^2 \\\\\n &+ b_i \\left( \\langle \\mbf{\\hat n}_i, d_i \\mbf{K}^{-1} \\mbf{\\tilde u}_i - d_{i+\\text{W}} \\mbf{K}^{-1} \\mbf{\\tilde u}_{i+\\text{W}} \\rangle \\right)^2\n\\end{split}\n\\end{equation}\n\\noindent\nwhere $b_i \\in \\{0, 1\\}$ is the value of the occlusion boundary mask for pixel $i$, $\\langle \\cdot, \\cdot \\rangle$ is the dot product operator, $\\mbf{\\hat n}_i$ is the normal vector predicted by SharpNet, $\\mbf{K}$ is the camera intrinsics matrix, $\\mbf{\\tilde u}_i$ is the homogeneous pixel coordinates for pixel $i$, and $\\text{W}$ is the width of the image in pixels.\n\nWe minimise the cost function by applying a maximum of 100 iterations of gradient descent with a step size of 0.2, and initialise the optimisation 
with the maximum probability depth values from the fused probability volume.\nAs the focus of this paper is on the benefits of fusing learning-based and geometry-based approaches, the system was not implemented to achieve real time results.\nCurrently, the forward pass of the network is not a major bottleneck (it takes approximately 53ms), but the process of going from a fused probability volume through the smoothing and optimisation to an extracted depth map can take up to 1.2s, depending on how many iterations are required before the stopping criterion is met.\nThis could be improved significantly by using Newton's method or the primal-dual algorithm instead of gradient descent; however, we leave this for future work.\n\n\n\\subsection{Keyframe Warping}\n\nTo avoid throwing away information on the creation of each new keyframe, we warp the probability volume of the current keyframe and use it to initialise the new one.\nAs the probability volume is a distribution over the depth values of a pixel, however, warping the probability volume is not trivial.\nTo do this, we propose using a discrete variation of the method described in \\cite{Loop::etal::3DV2016}, where we first convert the depth probability distribution to an occupancy-based probability volume, where for each depth bin along the ray there is a probability that the associated point in space is occupied. 
\nWe then warp this occupancy grid into the new frame and convert back to a depth probability distribution.\n\nWe start by defining the probability that the voxel $S_{k,i}$ (corresponding to depth bin $k$ along the ray of pixel $i$) is occupied, conditioned on the depth belonging to bin $j$:\n\\begin{equation}\n p(S_{k,i} = 1 \\lvert k_i^* = j) =\n \\begin{cases}\n 0 & \\text{if}\\ k < j \\\\\n 1 & \\text{if}\\ k = j \\\\\n \\frac{1}{2} & \\text{if}\\ k > j\n \\end{cases}\n\\end{equation}\n\nTo convert a depth probability distribution into an occupancy probability, we marginalise out the conditional:\n\\begin{equation}\n p(S_{k,i} = 1) = \\sum_{j=0}^{K-1} p_i(k_i^* = j) p(S_{k,i} = 1 \\lvert k_i^* = j)\n\\end{equation}\n\\noindent\nwhere $p_i(k_i^* = j)$ is the value of the depth probability volume in bin $j$ for pixel $i$.\n\nAs the occupancy grid represents probabilities for locations in 3D space, we can directly warp this into the new keyframe, filling in any unknown values with a default occupancy probability.\nIn our work, we use a default probability of 0.01.\n\nAfter warping, the occupancy grid can be converted back into a depth probability distribution:\n\\begin{equation}\n p_i(k_i^* = k) = \\prod_{j < k} \\left[1 - p(S_{j,i} = 1)\\right] p(S_{k,i} = 1),\n\\end{equation}\n\\noindent\nand scaled so that the distribution sums to one along the ray.\n\n\\section{EXPERIMENTAL RESULTS}\n\nWe evaluate our system on the Freiburg 1 sequences of the TUM RGB-D dataset \\cite{Sturm:etal:IROS2012}.\nPlease note that only the RGB images are processed by our system; the depth channel is used only as ``ground truth'' against which to validate our results.\n\n\\subsection{Qualitative Results}\n\nFigure \\ref{fig:fusion} shows the various PDFs for a sample of four pixels taken from a keyframe in the TUM RGB-D sequence \\textit{fr1\/desk}.\nThe PDFs in the first row are those predicted by the DNN. 
Note that the network is able to make multi-hypothesis predictions and can have varying degrees of certainty.\nThe PDFs in the second row are those that result from the photometric cost volume.\nFor some of the pixels (such as pixels A and C), the photometric error results in a clear peak.\nThis situation is most often found on corners and edges in the image where there are large intensity gradients.\nFor pixels in textureless regions or on occlusion boundaries or areas with repeating patterns, the photometric PDF may have many peaks (such as pixel B) or no peak at all (such as pixel D).\nThe final row of the figure shows the fused PDF for each of the pixels.\nBy fusing the two PDFs together, uncertainty can be reduced and ambiguous photometric data can be resolved.\n\nAn example reconstruction for a single keyframe with various ablations is shown in Figure \\ref{fig:qualex}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/mosaic_small.png}\n \\caption{This figure shows a grid of probability densities for a sample of four pixels from a keyframe (left). The first row, in red, shows the probability densities predicted by the network. The second row, in green, shows the probability densities estimated from the photometric error after the addition of 25 reference frames. The final row, in blue, shows the fused probability densities that result from our algorithm. Note that both the network and the photometric error are capable of producing multiple peaks. In some cases (such as pixel C), both the network and the photometric methods produce good estimates. In others (such as pixel A), both the network and photometric error are relatively uncertain, but together produce a strong peak. In pixels B and D, the network helps resolve ambiguous photometric peaks from either a repetition or lack of texture. 
The vertical black bars show the location of the ground truth depth.}\n \\label{fig:fusion}\n \\vspace{2mm}\\hrule\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/qualex_labels.png}\n \\caption{Qualitative results from an example keyframe and 6 additional reference frames in the TUM RGB-D \\textit{fr1\/360} sequence. The top left image is the keyframe image, and the bottom left is the ground truth depth. The remaining images on the top row are the depth estimates obtained by taking the maximum probability depth from each corresponding probability volume. The bands of colour show the quantisation that results from using this method. The remaining images in the bottom row are the depth estimates that result after performing the optimisation step. Note that the photometric error is only capable of estimating the depth at pixels with a high image gradient (the repeated edges are the result of pose error). While using only the network prediction results in a good reconstruction, the best reconstruction is obtained by fusing the network and photometric volumes together.}\n \\label{fig:qualex}\n \\vspace{2mm}\\hrule\n\\end{figure*}\n\n\\subsection{Quantitative Evaluation}\n\nWe demonstrate the value of fusion on the reconstruction pipeline by comparing the performance of the system on each of the Freiburg 1 TUM RGB-D sequences under three different scenarios: using only the network probability volume, using only the photometric probability volume, and using the fused probability volume.\nTo isolate the performance of our reconstruction system, we use the ground truth poses provided in the dataset.\nWe evaluate the performance using three metrics defined in \\cite{Eigen:etal:NIPS2014}: the absolute relative difference (L1-rel), the squared relative difference (L2-rel) and the root mean squared error (RMSE).\nNote that since the photometric probability volume has extremely noisy results on textureless surfaces, we found 
that the results were improved by initialising the optimisation with the expected value of the depth from the probability volume rather than the highest probability depth.\n\nThe results are presented in Table \\ref{tab:fusion_ablation}.\nIn six of the sequences, the best result is achieved by fusing together the network and photometric probability volumes: while the network alone already provides a large performance gain over the photometric probability volume, fusing the two yields a further improvement.\nIn one of the sequences (\\textit{fr1\/floor}), the best result is achieved by using only the photometric probability volume.\nFor this entire sequence, the camera is aimed at a bare wooden floor; as such a scene is well outside the training distribution, the network produces particularly bad priors.\nIn the remaining two sequences, the best result is achieved using only the network probability volume.\nIn one of these sequences (\\textit{fr1\/rpy}), the camera motion is purely rotational and the photoconsistency-based subsystem is not able to produce meaningful depth estimates.\n\n\n\\begin{table}[H]\n\\centering\n\\def0.9{0.9}\n\\begin{tabular}{l l c c c}\n \\toprule\n \\textbf{Sequence} & \\textbf{System} & \\textbf{L1-rel} & \\textbf{L2-rel} & \\textbf{RMSE} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/360} & Network-Only & 0.193 & 0.147 & \\textbf{0.555} \\\\\n & Photometric-Only & 0.633 & 1.008 & 1.514 \\\\\n & Fused & \\textbf{0.191} & \\textbf{0.143} & \\textbf{0.555} \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.194 & 0.174 & 0.559 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/desk} & Network-Only & 0.295 & 0.201 & 0.447 \\\\\n & Photometric-Only & 0.541 & 0.503 & 0.859 \\\\\n & Fused & \\textbf{0.278} & \\textbf{0.177} & \\textbf{0.427} \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.089 & 0.031 & 0.213 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/desk2} & Network-Only & 0.237 & 0.139 & \\textbf{0.423} \\\\\n & Photometric-Only & 0.522 & 0.494 & 0.890 \\\\\n & Fused & 
\\textbf{0.236} & \\textbf{0.138} & 0.424 \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.111 & 0.049 & 0.270 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/floor} & Network-Only & 0.806 & 0.727 & 0.821 \\\\\n & Photometric-Only & \\textbf{0.488} & \\textbf{0.303} & \\textbf{0.562} \\\\\n & Fused & 0.785 & 0.691 & 0.796 \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.131 & 0.034 & 0.156 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/plant} & Network-Only & 0.426 & 0.502 & \\textbf{0.816} \\\\\n & Photometric-Only & 0.726 & 1.422 & 1.983 \\\\\n & Fused & \\textbf{0.416} & \\textbf{0.485} & 0.833 \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.167 & 0.143 & 0.602 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/room} & Network-Only & 0.231 & 0.155 & 0.493 \\\\\n & Photometric-Only & 0.605 & 0.762 & 1.187 \\\\\n & Fused & \\textbf{0.226} & \\textbf{0.147} & \\textbf{0.488} \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.132 & 0.079 & 0.367 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/rpy} & Network-Only & \\textbf{0.242} & \\textbf{0.199} & \\textbf{0.577} \\\\\n & Photometric-Only & 0.514 & 0.577 & 1.047 \\\\\n & Fused & 0.255 & 0.212 & 0.614 \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.154 & 0.101 & 0.427 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/teddy} & Network-Only & \\textbf{0.294} & \\textbf{0.271} & \\textbf{0.773} \\\\\n & Photometric-Only & 0.748 & 1.569 & 2.108 \\\\\n & Fused & 0.297 & 0.277 & 0.792 \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.173 & 0.157 & 0.626 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/xyz} & Network-Only & 0.241 & 0.162 & 0.432 \\\\\n & Photometric-Only & 0.517 & 0.403 & 0.764 \\\\\n & Fused & \\textbf{0.225} & \\textbf{0.137} & \\textbf{0.401} \\\\\n \\cmidrule(l){2-5}\n & DeepTAM* & 0.065 & 0.017 & 0.164 \\\\\n \\bottomrule\n\\end{tabular}\n\\caption{Comparison of reconstruction errors on Freiburg 1 TUM RGB-D \\cite{Sturm:etal:IROS2012} sequences showing the relative performance of using only the network-predicted probability volume, only the photometric probability volume, and the 
fused probability volume. *Despite more accurate\nresults, DeepTAM does not maintain a probabilistic formulation.}\n\\label{tab:fusion_ablation}\n\\vspace{2mm}\\hrule\n\\end{table}\n\n\\begin{table}[h]\n\\centering\n\\def0.9{0.9}\n\\begin{tabular}{l l c c c}\n \\toprule\n \\textbf{Sequence} & \\textbf{System} & \\textbf{L1-rel} & \\textbf{L2-rel} & \\textbf{RMSE} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/360} & No Optimisation & 0.210 & 0.184 & 0.608 \\\\\n & Smoothing-Only & 0.207 & 0.179 & 0.601 \\\\\n & Total Variation & 0.194 & 0.152 & 0.565 \\\\\n & Normals + Occlusions & \\textbf{0.191} & \\textbf{0.143} & \\textbf{0.555} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/desk} & No Optimisation & 0.324 & 0.292 & 0.537 \\\\\n & Smoothing-Only & 0.323 & 0.289 & 0.533 \\\\\n & Total Variation & 0.296 & 0.226 & 0.470 \\\\\n & Normals + Occlusions & \\textbf{0.278} & \\textbf{0.177} & \\textbf{0.427} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/desk2} & No Optimisation & 0.283 & 0.240 & 0.537 \\\\\n & Smoothing-Only & 0.280 & 0.235 & 0.532 \\\\\n & Total Variation & 0.254 & 0.181 & 0.470 \\\\\n & Normals + Occlusions & \\textbf{0.236} & \\textbf{0.138} & \\textbf{0.424} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/floor} & No Optimisation & 0.806 & 0.772 & 0.861 \\\\\n & Smoothing-Only & 0.807 & 0.771 & 0.860 \\\\\n & Total Variation & 0.801 & 0.738 & 0.836 \\\\\n & Normals + Occlusions & \\textbf{0.785} & \\textbf{0.691} & \\textbf{0.796} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/plant} & No Optimisation & 0.436 & 0.558 & 0.863 \\\\\n & Smoothing-Only & 0.435 & 0.555 & 0.857 \\\\\n & Total Variation & 0.425 & 0.520 & \\textbf{0.830} \\\\\n & Normals + Occlusions & \\textbf{0.416} & \\textbf{0.485} & 0.833 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/room} & No Optimisation & 0.267 & 0.227 & 0.583 \\\\\n & Smoothing-Only & 0.265 & 0.223 & 0.577 \\\\\n & Total Variation & 0.243 & 0.179 & 0.522 \\\\\n & Normals + Occlusions & \\textbf{0.226} & \\textbf{0.147} & \\textbf{0.488} \\\\\n 
\\midrule\n \\multirow{4}{*}{fr1\/rpy} & No Optimisation & 0.327 & 0.434 & 0.781 \\\\\n & Smoothing-Only & 0.320 & 0.390 & 0.755 \\\\\n & Total Variation & 0.265 & 0.231 & 0.621 \\\\\n & Normals + Occlusions & \\textbf{0.255} & \\textbf{0.212} & \\textbf{0.614} \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/teddy} & No Optimisation & 0.296 & 0.300 & 0.799 \\\\\n & Smoothing-Only & 0.295 & 0.296 & 0.792 \\\\\n & Total Variation & \\textbf{0.290} & \\textbf{0.276} & \\textbf{0.771} \\\\\n & Normals + Occlusions & 0.297 & 0.277 & 0.792 \\\\\n \\midrule\n \\multirow{4}{*}{fr1\/xyz} & No Optimisation & 0.299 & 0.303 & 0.595 \\\\\n & Smoothing-Only & 0.296 & 0.298 & 0.590 \\\\\n & Total Variation & 0.255 & 0.212 & 0.493 \\\\\n & Normals + Occlusions & \\textbf{0.225} & \\textbf{0.137} & \\textbf{0.401} \\\\\n \\bottomrule\n\\end{tabular}\n\\caption{Comparison of reconstruction errors on Freiburg 1 TUM RGB-D \\cite{Sturm:etal:IROS2012} sequences showing the relative performance of different regularisation schemes. No Optimisation: results from taking the depth value with the maximum probability in the probability volume. Smoothing-Only: results from minimising the smoothed negative log probability density function without including a regularisation term. Total Variation: results from using the total variation of the depth as a regulariser. 
Normals + Occlusions: the pipeline as described in this paper.}\n\\label{tab:reg_ablation}\n\\vspace{2mm}\\hrule\n\\end{table}\n\nAs discussed in the introduction, the best results (in terms of the accuracy of the final depth maps) seem to come from systems that take classic, photometric-based approaches and feed the results into a DNN for regularisation.\nDeepTAM \\cite{Zhou:etal:ECCV2018} is a state-of-the-art example of such a system.\nWe argue, however, that a probabilistic formulation is necessary for many applications of depth estimation in robotics and that it is important to investigate methods of fusing the output of learning-based systems into standard reconstruction pipelines that maintain this formulation.\nWe therefore do not expect or claim to be able to achieve more accurate depth reconstructions than those produced by DeepTAM.\nInstead, we claim that we are able to improve the performance of standard SLAM systems by fusing in the outputs of a deep neural network while maintaining a probabilistic formulation.\nFor the sake of transparency, we have also run the DeepTAM mapping system with ground truth poses on the same sequences and have included the results in Table \\ref{tab:fusion_ablation}.\n\nTo show the benefit of our method of regularisation, we compare the performance of the full system against three other regularisation schemes: using no optimisation at all (taking the depth values that maximise the discrete probability distribution), optimising without any regularisation (this will allow for the smoothing of the depth maps based on the continuous PDF, but provide no regularisation), and regularising using the total variation.\n\nFor the total variation, we tuned the hyperparameters of our system for the best performance ($\\lambda = 1.0 \\cdot 10^2$ and a step size of 0.05).\n\nThe results are presented in Table \\ref{tab:reg_ablation}. 
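For reference, the total-variation baseline can be sketched as follows. This is our own minimal illustration, not the paper's implementation: gradient descent on a linearly interpolated per-pixel cost plus an anisotropic TV subgradient, with the toy cost volume and all names being assumptions.

```python
import numpy as np

def interp_cost_grad(volume, d):
    """Derivative w.r.t. d of the per-pixel cost, obtained by linearly
    interpolating a discrete cost volume (K, H, W) along the bin axis."""
    K = volume.shape[0]
    d = np.clip(d, 0.0, K - 1.000001)
    k0 = np.floor(d).astype(int)
    rows, cols = np.indices(d.shape)
    c0 = volume[k0, rows, cols]
    c1 = volume[np.minimum(k0 + 1, K - 1), rows, cols]
    return c1 - c0  # slope of the linear interpolant

def tv_subgrad(d):
    """Subgradient of the anisotropic total variation sum |dx| + |dy|."""
    g = np.zeros_like(d)
    sx = np.sign(d[:, 1:] - d[:, :-1])
    sy = np.sign(d[1:, :] - d[:-1, :])
    g[:, 1:] += sx
    g[:, :-1] -= sx
    g[1:, :] += sy
    g[:-1, :] -= sy
    return g

def refine_depth(volume, lam=100.0, step=0.05, iters=50):
    """Start from the minimum-cost bin per pixel (the 'no optimisation'
    estimate) and descend on cost + lam * TV."""
    d = volume.argmin(axis=0).astype(float)
    for _ in range(iters):
        d -= step * (interp_cost_grad(volume, d) + lam * tv_subgrad(d))
    return d
```

The defaults mirror the hyperparameters quoted above ($\lambda = 10^2$, step size 0.05); a fixed iteration count stands in for the stopping criterion.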
In almost all cases, the best performance is achieved when using the surface normals and occlusion masks predicted by SharpNet.\n\nFinally, to evaluate our method for warping probability volumes between keyframes, we compare our system against a version without warping where each keyframe is initialised only with the network output and does not receive any information from other keyframes.\n\nThe results are presented in Table \\ref{tab:warp_ablation}.\nUsing our warping method improves the performance of the system on all but one of the sequences.\n\n\\begin{table}[t]\n\\centering\n\\def0.9{0.9}\n\\begin{tabular}{l l c c c}\n \\toprule\n \\textbf{Sequence} & \\textbf{System} & \\textbf{L1-rel} & \\textbf{L2-rel} & \\textbf{RMSE} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/360} & No Keyframe Warping & 0.202 & 0.157 & 0.575 \\\\\n & Keyframe Warping & \\textbf{0.191} & \\textbf{0.143} & \\textbf{0.555} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/desk} & No Keyframe Warping & 0.316 & 0.236 & 0.474 \\\\\n & Keyframe Warping & \\textbf{0.278} & \\textbf{0.177} & \\textbf{0.427} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/desk2} & No Keyframe Warping & 0.283 & 0.195 & 0.480 \\\\\n & Keyframe Warping & \\textbf{0.236} & \\textbf{0.138} & \\textbf{0.424} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/floor} & No Keyframe Warping & \\textbf{0.776} & \\textbf{0.684} & \\textbf{0.787} \\\\\n & Keyframe Warping & 0.785 & 0.691 & 0.796 \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/plant} & No Keyframe Warping & 0.420 & 0.490 & 0.845 \\\\\n & Keyframe Warping & \\textbf{0.416} & \\textbf{0.485} & \\textbf{0.833} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/room} & No Keyframe Warping & 0.256 & 0.189 & 0.528 \\\\\n & Keyframe Warping & \\textbf{0.226} & \\textbf{0.147} & \\textbf{0.488} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/rpy} & No Keyframe Warping & 0.297 & 0.263 & 0.654 \\\\\n & Keyframe Warping & \\textbf{0.255} & \\textbf{0.212} & \\textbf{0.614} \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/teddy} & No Keyframe Warping & 
0.302 & 0.286 & \\textbf{0.791} \\\\\n & Keyframe Warping & \\textbf{0.297} & \\textbf{0.277} & 0.792 \\\\\n \\midrule\n \\multirow{2}{*}{fr1\/xyz} & No Keyframe Warping & 0.315 & 0.247 & 0.521 \\\\\n & Keyframe Warping & \\textbf{0.225} & \\textbf{0.137} & \\textbf{0.401} \\\\\n \\bottomrule\n\\end{tabular}\n\\caption{Comparison of reconstruction errors on Freiburg 1 TUM RGB-D \\cite{Sturm:etal:IROS2012} sequences showing the performance gain from using our method to warp keyframe probability volumes.}\n\\label{tab:warp_ablation}\n\\vspace{2mm}\\hrule\n\\end{table}\n\n\\section{CONCLUSION}\n\nWe have presented a method for fusing learned monocular depth priors into a standard pipeline for 3D reconstruction.\nBy training a DNN to predict nonparametric probability distributions, we allow the network to express uncertainty and make multi-hypothesis depth predictions.\n\nThrough a series of experiments, we demonstrated that by fusing the discrete probability volume predicted by the network with a probability volume computed from the photometric error, we often achieve better performance than either on its own.\nFurther experiments showed the value of our regularisation scheme and warping method.\n\n\n\\bibliographystyle{IEEEtran}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nVideo understanding is a central task of artificial intelligence that requires complex grounding and reasoning over multiple modalities. Among many tasks, multiple-choice question answering has been seen as a top-level task \\citep{richardson_mctest_2013} toward this goal due to its flexibility and ease of evaluation. A line of research towards constructing Video QA datasets have been completed \\cite{Tapaswi2016MovieQAUS,Lei2018TVQALC,Zadeh2019SocialIQAQ}. Ideally, a model for this task should understand each modality well and have a good way to aggregate information from different modalities. 
To this end, it is a natural choice for researchers to use the state-of-the-art models for each subtask and modality. Recently in the Natural Language domain, BERT \\citep{devlin-etal-2019-bert} and other transformer-based models have become baselines in many research works. However, it is a known phenomenon that complex multimodal models tend to overfit to a strong-performing single modality \\cite{cirik_visual_2018,mudrakarta_did_2018, thomason_shifting_2019}. To caution against such undesirable modality collapsing, we study how well RoBERTa \\citep{Liu2019RoBERTaAR}, a better-trained version of BERT, can perform on the QA-only task.\n\n\\paragraph{Our main contributions include:}\n(1) We show that RoBERTa baselines exceed all previously published QA-only baselines on two popular video QA datasets. (2) The strong QA-only results indicate the existence of non-trivial biases in the datasets that may not be obvious to human eyes but can be exploited by modern language models like RoBERTa. We provide analyses and ablations to root-cause \nthese QA biases, recommend best practices for dataset splits and share insights on subjectivity vs. 
objectivity for question answering.\n\n\n\\section{Model}\n\\label{sec:model}\n\n\n\\begin{table*}[h]\n\\centering\n\\begin{small}\n\\begin{tabular}{@{\\hspace{1pt}}c@{\\hspace{20pt}}l@{\\hspace{20pt}}l@{\\hspace{20pt}}l@{\\hspace{5pt}}c}\n \\bf Dataset & \\bf Model Name\/Source & \\bf Modality & \\bf QA Model & \\bf Val Acc (\\%) \\\\\n \\toprule\n \\multirow{4}{*}{\\shortstack{MovieQA \\\\(A5)}} & \\bf Our Answer-only & \\bf A & \\bf RoBERTa (fine-tune) & \\bf 34.16\\\\\n & \\bf Our QA-only & \\bf Q+A & \\bf RoBERTa (fine-tune) & \\bf 37.33\\\\\n & \\bf Our QA-only& \\bf Q+A & \\bf RoBERTa (freeze) & \\bf 22.52\\\\\n & SOTA \\cite{Jasani2019AreWA} &V+S+Q+A & w2v & 48.87\\\\\n & Random Guess & - & - & 20.00 \\\\\n \n \\cmidrule{1-5}\n \\multirow{6}{*}{\\shortstack{TVQA \\\\(A5)}} & \\bf Our Answer-only & \\bf A & \\bf RoBERTa (fine-tune) & \\bf 46.58\\\\\n & \\bf Our QA-only& \\bf Q+A & \\bf RoBERTa (fine-tune) & \\bf 48.91\\\\\n & \\bf Our QA-only& \\bf Q+A & \\bf RoBERTa (freeze) & \\bf 30.75\\\\\n & QA-only with GloVe \\cite{Jasani2019AreWA} & Q+A & GloVe + LSTM & 42.77\\\\\n & SOTA's QA-only \\cite{yang2020bert}& Q+A & BERT (fine-tune) & 46.88 \\\\\n & SOTA\\cite{yang2020bert} &V+S+Q+A& BERT (fine-tune) & 72.41 \\\\\n & Random Guess & - & - & 20.00 \\\\\n\n\n\t\\bottomrule\n\\end{tabular}\n\\end{small}\n\\caption{\nComparison with State-of-the-art Performance.\n}\n\\label{tab:baselines}\n\\end{table*}\n\n\\begin{table*}[]\n\\centering\n\\begin{small}\n\\begin{tabular}{|l|c|c|c|c|c|c|}\n\\hline\n & \\multicolumn{1}{l|}{\\textbf{\\# of questions}} & \\multicolumn{1}{l|}{\\textbf{\\# of annotators}} & \\multicolumn{1}{l|}{\\textbf{\\% of why\/how }} & \\multicolumn{1}{l|}{\\textbf{\\% of other type }} & \\multicolumn{1}{l|}{\\textbf{avg len of Q}} & \\multicolumn{1}{l|}{\\textbf{avg len of A}} \\\\ \\hline\n\\textbf{Movie QA} & 14,944 & --- & 20.9\\% & 79.1\\% & 5.2 & 5.29 \\\\ \\hline\n\\textbf{TVQA} & 152,545 & 1,413 & 14.5\\% & 85.5\\% & 13.5 & 4.72 \\\\ 
\\hline\n\\end{tabular}\n\\end{small}\n\\caption{Dataset Statistics.}\n\\end{table*}\n\n\nWe fine-tune pretrained RoBERTa from \\citet{Liu2019RoBERTaAR} to solve the question answering task. Specifically, for one multiple-choice question with five answers (1 correct and 4 incorrect), we concatenate the tokenized question with each of the five tokenized answers and feed each of these five q-a pairs into RoBERTa. RoBERTa is connected to a 4-layer MLP (Multi-Layer Perceptron) head to produce a scalar score for each q-a pair. These five scores are then passed through Softmax to output five probabilities indicating how likely the model thinks it is for each q-a pair to be correct. During training, the probabilities are trained with a Cross Entropy loss; during testing, the q-a pair with the highest probability is selected as the model's prediction.\n\n\n\n\\section{Datasets}\n\\label{sec:datasets}\n\nWe evaluate our baseline model against two popular multimodal QA datasets: MovieQA and TVQA.\\\\\n\n\\paragraph{MovieQA:}\nMovieQA \\citep{Tapaswi2016MovieQAUS} was created from 408 subtitled movies. Each movie has a set of questions with 5 multiple choice answers, only one of which is correct. The dataset also contains plot synopses collected from Wikipedia.\n\n\\paragraph{TVQA:} \nTVQA \\citep{Lei2018TVQALC} was collected from 6 long-running TV shows from 3 genres. There are 21,793 video clips in total for QA collection, accompanied by subtitles and aligned with transcripts to add character names. Depending on the type of TV show, a video clip is 60 or 90 seconds long. Each video clip has a set of questions with 5 multiple choice answers, only one of which is correct.\n\n\n\n\\paragraph{Notation:}\nIn this paper, we use A5 to denote the tasks on datasets. 
A5 means the multiple choice question consists of 1 correct answer and 4 incorrect answers.\n\n\\section{Bias Analysis}\n\n\\subsection{QA Bias and Inability to Generalize}\n\n\n\\begin{table}[h!]\n\\centering\n\\begin{small}\n\\begin{tabular}{@{\\hspace{5pt}}l@{\\hspace{10pt}}r@{\\hspace{10pt}}r@{\\hspace{10pt}}}\n \\bf Train Set & \\multicolumn{2}{c}{{\\bf Validation Accuracy (\\%)}} \\\\\n \\toprule\n & \\bf MovieQA & \\bf TVQA \\\\\n \\cmidrule{2-3}\n MovieQA & \\bf 37.33 & 31.18 \\\\\n\tTVQA & 33.45 & \\bf 48.91 \\\\\n\n\t\\bottomrule\n\\end{tabular}\n\\end{small}\n\\caption{\nAcross-dataset generalization accuracy. Both datasets are trained and evaluated on the A5 task: multiple-choice questions with 1 correct answer and 4 incorrect answers (random guess yields 20\\% accuracy). \\textbf{Bold} number is the highest number in each column.\n}\n\\label{tab:interdataset_generalization}\n\\end{table}\n\n\n\n\n\nFor the two datasets introduced in Section \\ref{sec:datasets}, we run QA-only baselines using the pretrained language model as described in Section \\ref{sec:model}. Table \\ref{tab:baselines} shows how our QA-only model's performance compares to random guess, state-of-the-art full modality performance and its associated QA-only ablation performance.\n\nFrom Table \\ref{tab:baselines}, looking at the numbers in bold font, we discover that a language model like RoBERTa is able to answer a significant portion of the questions correctly, even though these questions are supposed to be unanswerable without looking at the video. This result indicates that the model exploits the biases in these datasets. In addition, we also find that answer-only performance is quite close to QA-only performance, indicating that the answer alone gives the model a good hint as to whether it is likely to be the correct answer.\n\nKnowing there are biases in the datasets, we are then curious whether these learned biases are transferable between datasets. 
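For reference, the scoring scheme from Section \ref{sec:model} can be sketched as follows. This is a toy stand-in in which a fixed random projection replaces the RoBERTa encoder and a single hidden layer replaces the 4-layer MLP head, so all dimensions and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class ChoiceScorer:
    """Score five question-answer pairs and softmax over the five scalar
    scores, as in the QA-only baseline (encoder and head are toy stand-ins)."""
    def __init__(self, dim=16, hidden=32):
        self.W1 = rng.normal(0.0, 0.1, (dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))

    def forward(self, pair_embeddings):   # (5, dim) -> (5,) probabilities
        h = np.maximum(pair_embeddings @ self.W1, 0.0)  # ReLU hidden layer
        return softmax((h @ self.W2).ravel())

def cross_entropy(probs, correct_idx):
    """Training loss; at test time the argmax pair is the prediction."""
    return -np.log(probs[correct_idx])

scorer = ChoiceScorer()
probs = scorer.forward(rng.normal(size=(5, 16)))   # five hypothetical q-a pairs
prediction = int(np.argmax(probs))
```

Training would backpropagate the cross-entropy loss through both the head and the encoder, which is what fine-tuning RoBERTa amounts to in the real system.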
This investigation is important because if the biases are transferable, then perhaps they are not necessarily bad, because one could argue the model has captured some common sense in these questions and answers; but if these biases are not transferable, then they are only patterns tied to one particular dataset, which we would not want the model to learn. To verify this with experiments, we train a model on each of the two datasets' train splits and evaluate these two models on each of the two datasets' validation splits. The results are shown in Table \\ref{tab:interdataset_generalization}.\n\nLooking at each row in Table \\ref{tab:interdataset_generalization}, we see that performance decreases in every cross-dataset evaluation relative to the same-dataset evaluation. This means that although the model learns some tricks to answer the questions without context, such tricks learned from one dataset no longer work when applied to a different dataset. In other words, the model learns bias in the dataset and such bias is not transferable. This undesirable behavior is what motivates our analysis in the next sections.\n\n\n\n\n\n\n\\subsection{Source of Bias: Annotator}\n\n\n\n\\newcommand{\\decreased}[1] {\\textcolor{red}{#1$\\downarrow$}}\n\\newcommand{\\increased}[1] {\\textcolor{green}{#1$\\uparrow$}}\n\n\n\\begin{table*}[h!]\n\\centering\n\\begin{small}\n\\begin{tabular}{c c c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}} c@{\\hspace{4pt}}}\n & \\bf Overlap Acc (\\%) & \\multicolumn{10}{c}{\\textbf{Non-overlap Acc Shift (\\%) vs. 
Dropped annotator}} \\\\\n \\toprule\n \\multirow{3}{*}{\\shortstack{TVQA\\\\ (A5)}} & & w17 & w366 & w24 & w297 & w118 & w313 & w14 & w19 & w2 & w254 \\\\\n \\cmidrule{3-12}\n & \\bf 50.59 & \\decreased{-5.59} & \\decreased{-11.28} & \\decreased{-20.14} & \\decreased{-10.55} & \\increased{+23.22} & \\decreased{-20.59} & \\decreased{-1.69} & \\decreased{-5.96} & \\decreased{-12.23} & \\decreased{-17.28}\\\\\n\n\t\\bottomrule\n\\end{tabular}\n\\end{small}\n\\caption{\nNon-overlapping dataset re-split results on the top-10-annotator subset. The ``Overlap Acc\" column is the validation accuracy where the train and validation set both contain questions from all 10 annotators. The ``Non-overlap Acc Shift vs. Dropped annotator\" is the validation accuracy where the train set contains questions from 9 annotators and the validation set only contains questions from the dropped annotator.\n}\n\\label{tab:non-overlap_resplit}\n\\end{table*}\n\n\n\nWe hypothesize that one source of bias is the annotators. To verify our hypothesis, we obtain the Annotator IDs corresponding to the questions in TVQA \\footnote{We thank the authors of TVQA for sharing this information. Annotator information for MovieQA is unfortunately not available to us.} and construct a confusion matrix between the top-10 annotators. The results are shown in Figure \\ref{fig:inter_annotator}. For each of the annotators, we construct a mini-train and mini-valid set. For TVQA, each mini-train and mini-valid set contains 1980 and 220 A5 questions, respectively.\n\nFigure \\ref{fig:inter_annotator} reveals a pattern where most cells except for those on the diagonals are light colored, which means the accuracy decreases when the train set's and validation set's questions are not from the same annotator. This indicates the model learns to guess for one specific annotator's questions, but such a guessing strategy is not transferable to other annotators' questions. 
This reveals that RoBERTa has the capacity to overfit to the annotators' QA style in the train set.\n\nLooking at the bottom number in each \\emph{diagonal} cell from Figure \\ref{fig:inter_annotator}, we see that our model performs quite differently on different annotators. Some annotators, such as w118 and w14, have a very high performance (90.0\\% and 64.5\\%, respectively), while some annotators, such as w24 and w313, have a relatively low performance (31.4\\% and 24.6\\%). This shows that different annotators' questions have different levels of bias.\n\nWe also discover that all annotators seem to transfer well to w118. We hypothesize that w118 may have asked many questions that are similar to other annotators' questions, which the model has already learned to answer during training.\n\n\n\\begin{figure}[t]\n\n \\centering\n \n \\includegraphics[width=\\linewidth]{figures\/tvqa_inter_annotator_confusion_matrix.pdf} \n \n\\caption{TVQA Inter-annotator accuracy shift confusion matrix. Each $w_i$ represents an annotator id and each cell represents a train-test combination between annotators. The cells are colored based on accuracy shift (the top number in each cell): lighter color means more negative accuracy shift and darker color means more positive accuracy shift. Accuracy shift is defined as the difference between each cell's accuracy (the bottom number) and the same-row diagonal cell's accuracy (again, the bottom number).}\n\\label{fig:inter_annotator}\n\\end{figure}\n\n\\paragraph{Dataset Re-split} The observation above motivates further investigation: what if we construct a re-split of the dataset where the validation set does not contain annotators in the train set? We conduct this experiment with the limited scope of the top-10 annotators used in Figure \\ref{fig:inter_annotator} for clearer comparison. 
We create 11 re-splits of the dataset: 1 with annotator-overlapping train and validation set and 10 with annotator-non-overlapping train and validation set (9 annotators for the train set and 1 annotator for the validation set). The results are shown in Table \\ref{tab:non-overlap_resplit}. We find that 9 out of the 10 TVQA non-overlapping re-splits incur a decrease in performance (less bias). Interestingly, the re-split where there is an increase in performance, w118, matches the columns in Figure \\ref{fig:inter_annotator} whose cells' color is darker than average. This further verifies our explanation that w118 asks similar questions to other annotators. Nonetheless, this overall performance decrease trend after re-split suggests that for pretrained language models, an annotator-non-overlapping re-split is a harder task than an annotator-overlapping split and such a re-split can help alleviate the QA bias. Based on this observation, we recommend that future research create and use an annotator-non-overlapping split for train, validation and test sets whenever possible. The performance reported under such a setting will contain fewer annotator biases and is thus a more accurate indicator of progress.\n\n\n\n\\subsection{Source of Bias: Question Type}\n\n\\begin{figure}[h!]\n \\centering\n \n \\includegraphics[width=.95\\linewidth]{figures\/movieqa_question_type_analysis.pdf} \n \\caption{MovieQA (A5) Accuracy by Question Type}\n \\label{fig:movieqa_q_type}\n\\end{figure}\n\n\\begin{figure}[h!]\n \\centering\n \n \\includegraphics[width=.95\\linewidth]{figures\/tvqa_question_type_analysis.pdf} \n \\caption{TVQA (A5) Accuracy by Question Type}\n \\label{fig:tvqa_q_type}\n\\end{figure}\n\n\nWe also hypothesize that the type of question, e.g. reasoning questions (why\/how) vs. factual questions (where\/who), can be a source of bias. To verify this, we ablate the model's accuracy based on the question's prefix. 
The results are shown in Figure \\ref{fig:movieqa_q_type} and \\ref{fig:tvqa_q_type}. These ablations are done on the A5 version of each dataset; recall that the random guess baseline in this case is 20\\%.\n\nIn Figure \\ref{fig:movieqa_q_type}, we see MovieQA shows a clear distinction ($>10\\%$) between ``why\" and ``how\" questions vs. ``what\", ``who\", ``where\" questions. The model fits the former significantly better than the latter. \n\nIn Figure \\ref{fig:tvqa_q_type} for TVQA, the model can guess ``why\" questions better than other question categories, while guessing ``who\" remains difficult.\n\nIn general, we observe a trend that questions such as ``why\" and ``how\", which are reasoning and abstract questions and whose answers are more complex, incur more biases that a language model can exploit; whereas ``what\", ``who\" and ``where\" questions, which are factual and direct and whose answers are simple, are less bias-prone.\n\n\n\n\n\n\\section{Related Work}\nAlthough more analyses \\citep{goyal_making_2017, leibe_revisiting_2016} have been done on Visual Question Answering (VQA) \\citep{Agrawal2015VQAVQ}, there are few works analysing biases in Video Question Answering datasets. \\citet{Jasani2019AreWA} suggest that MovieQA contains biases by showing that about half of the questions can be answered correctly under the QA-only setting. However, their word embeddings are trained from plot synopses of movies in the dataset and thus they actually introduce context information into their model, making it no longer QA-only.\n\\citet{goyal2017making} propose that language provides a strong prior that can result in good superficial performance, thereby preventing the model from focusing on the visual content. 
They attempt to fight these language biases by creating a balanced dataset that forces the model to focus on the visual information.\nSimilarly, \\citet{cadene2019rubi} design a training strategy named RUBi to reduce the amount of bias learned by VQA models and counter the strong biases in the language modality.\n\\citet{manjunatha2019explicit} provide a method that can capture macroscopic rules that a VQA model ostensibly utilizes to answer questions.\nHowever, those works do not clearly explain where the bias in the dataset comes from, which is the main topic of our work.\n\n\n\n\\section{Conclusion}\nIn this work, we fine-tune pretrained language model baselines for two popular Video QA datasets and discover that our simple baselines exceed previously published QA-only baselines. These strong baselines reveal the existence of non-trivial biases in the datasets. Our ablation study demonstrates that these biases can come from annotator splits and question types. Based on our analysis, we recommend that researchers and dataset creators use annotator-non-overlapping splits for the train, validation and test sets; we also caution the community that reasoning questions are likely to carry more biases than factual questions.\n\nThis paper is a post-hoc analysis of the datasets. However, the tools used in this paper could potentially also be extended to aid dataset creation. For example, a dataset creator could have a RoBERTa model trained \\emph{online} as annotators add more data. The annotators can use this language model's predictions to self-check whether they are injecting any QA bias while coming up with the questions and answers. 
The dataset creator can also use a confusion matrix like Figure \\ref{fig:inter_annotator} to monitor and identify low-quality annotators and decide the best strategy to reduce biases during the dataset creation process.\n\n\\section*{Acknowledgments}\n\nWe thank the anonymous reviewers for providing helpful feedback.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction and preliminaries }\nThe maximal green sequence was originally defined to be a particular sequence of mutations of framed cluster quivers, first introduced by Keller in \\cite{Kel}. Maximal green sequences are not only an important subject in cluster\n algebras, but also have important applications to many other objects, such as counting BPS states\n in string theory, producing quantum dilogarithm identities and computing refined Donaldson-Thomas\n invariants.\n\nCluster algebras have close relations with representation theory via categorification, and it follows that maximal\n green sequences can be interpreted from the viewpoint of tilting theory and silting theory. For example,\n a maximal green sequence for a cluster quiver corresponds to a sequence of forward mutations of a specific\n heart to its shift in a particular triangulated category. We refer to \\cite{BDP} for more details. Inspired\n by $\\tau$-tilting theory, Br$\\mathrm{\\ddot{u}}$stle, Smith and Treffinger defined a maximal green sequence as a\n particular finite chain of torsion classes for a finite dimensional algebra in \\cite{BST0}; maximal green sequences can\n also be naturally defined in arbitrary abelian categories \\cite{BST1}.\n\nThroughout this paper we always assume that $\\mathcal{A}$ is a small abelian category.\n\nLet us first recall some basic concepts. Suppose $X$ is an object in $\\mathcal{A}$.\n We say that $X$ has finite length if there exists a finite filtration\n \\[0=X_0 \\subset X_1 \\subset X_2 \\subset \\dots \\subset X_m =X\\]\n such that $X_i\/X_{i-1}$ is simple for all $i$. 
Such a filtration is called a {\\bf{\\em Jordan-H\\\"older}\n series} of $X$. It is well-known that if $X$ has finite length, then the length of the\n {\\em Jordan-H\\\"older} series of $X$ is uniquely determined by $X$, which will be denoted\n by $l(X)$. Recall that an {\\bf abelian length category} is an abelian category such that every object has\n finite length. Throughout this article, we always assume that $\\mathcal{A}$ is an abelian length category.\n\n\nLet $\\mathcal{A}$\n be an abelian length category, and let $\\mathcal{T}$ and $\\mathcal{F}$ be full subcategories of $\\mathcal{A}$\n which are closed under isomorphisms. The pair $(\\mathcal{T}, \\mathcal{F})$ is called a {\\bf torsion pair} if it satisfies\n the following conditions.\n \\begin{enumerate}\n \\item[(i)] For any objects $X\\in \\mathcal{T}$ and $Y\\in \\mathcal{F}$, we have $Hom(X, Y) = 0$;\n \\item[(ii)] An object $X$ belongs to $\\mathcal{T}$ if and only if $Hom(X, Y)=0$ for any object $Y\\in \\mathcal{F}$;\n \\item[(iii)] An object $Y$ belongs to $\\mathcal{F}$ if and only if $Hom(X, Y)=0$ for any object $X\\in \\mathcal{T}$.\n \\end{enumerate}\n\n For a torsion pair\n $(\\mathcal{T}, \\mathcal{F})$, the full subcategories $\\mathcal{T}$ and $\\mathcal{F}$ are called a {\\bf torsion class}\n and a {\\bf torsion-free class}, respectively. It is well-known that a full subcategory in $\\mathcal{A}$ is a torsion class\n if and only if it is closed under extensions and factors, and a full subcategory in $\\mathcal{A}$ is a torsion-free\n class if and only if it is closed under extensions and subobjects. 
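As a concrete illustration (a standard small example, not taken from this article), consider the category of representations of the $A_2$ quiver $1 \rightarrow 2$ over a field $k$, whose indecomposables are the simples $S_1 = (k \rightarrow 0)$, $S_2 = (0 \rightarrow k)$ and the projective $P_1 = (k \rightarrow k)$:

```latex
% For \mathcal{A} = rep(1 -> 2) there are exactly five torsion pairs
% (\mathcal{T}, \mathcal{F}). Each torsion class \mathcal{T} is closed
% under factors and extensions, and \mathcal{F} = \mathcal{T}^{\perp}
% is closed under subobjects and extensions:
\[
(0,\ \mathcal{A}),\quad
(\mathrm{add}\,S_2,\ \mathrm{add}\,S_1),\quad
(\mathrm{add}\,S_1,\ \mathrm{add}(S_2 \oplus P_1)),
\]
\[
(\mathrm{add}(S_1 \oplus P_1),\ \mathrm{add}\,S_2),\quad
(\mathcal{A},\ 0).
\]
% By contrast, add(S_1 \oplus S_2) is NOT a torsion class: the nonsplit
% extension 0 -> S_2 -> P_1 -> S_1 -> 0 shows it fails to be closed
% under extensions, since P_1 does not lie in add(S_1 \oplus S_2).
```

Checking the closure conditions directly on this short list is a quick exercise with the characterization just stated.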
One of the important properties of a torsion pair is that for any\n object $X$ in $\\mathcal{A}$, there is a unique exact sequence $0 \\rightarrow X_1 \\rightarrow X\n \\rightarrow X_2 \\rightarrow 0$ with $X_1\\in \\mathcal{T}$ and $X_2\\in \\mathcal{F}$ up to isomorphism,\n which is called the {\\em canonical sequence} for $X$ with respect to the torsion pair $(\\mathcal{T}, \\mathcal{F})$.\n\nLet $\\mathcal{T}$ and $\\mathcal{T}'$ be two torsion classes in $\\mathcal{A}$. We say the torsion\n class $\\mathcal{T}'$ {\\em covers} $\\mathcal{T}$ if $\\mathcal{T} \\subsetneq \\mathcal{T}'$ and $\\mathcal{X} = \\mathcal{T}$ or $\\mathcal{X} = \\mathcal{T}'$ for any torsion class $\\mathcal{X}$ satisfying $\\mathcal{T} \\subset \\mathcal{X} \\subset \\mathcal{T}'$.\n In this case, we write $\\mathcal{T} \\lessdot \\mathcal{T}'$.\n\n\\begin{Definition}[\\cite{BST1}] A {\\bf maximal green sequence} in an abelian length category $\\mathcal{A}$ is a finite\n sequence of torsion classes with covering relations\n \\[\\,\\mathcal{T}_0\\,\\lessdot\\, \\mathcal{T}_1\\, \\lessdot\\, \\mathcal{T}_2\\, \\lessdot \\,\\dots\\,\n \\lessdot\\, \\mathcal{T}_m\\]\n such that $\\mathcal{T}_0 = 0$ and $\\mathcal{T}_m = \\mathcal{A}$.\n\\end{Definition}\n\nStability conditions and Harder-Narasimhan filtrations have been widely studied by many authors\n and remain an active subject. They were introduced in different contexts. 
For example, King introduced stability\n functions on quiver representations in \\cite{King}, and Rudakov extended them to abelian categories in \\cite{Rud}.\n Let us recall basic definitions on stability functions and the important Harder-Narasimhan property for abelian\n length categories from \\cite{Rud}.\n\n\n\n\\begin{Definition}[\\cite{Rud, BST1}]\nLet $\\mathcal{P}$ be a totally ordered set and $\\phi : \\mathcal{A}^*\\rightarrow \\mathcal{P}$ a\n function on $\\mathcal{A}^*=\\mathcal{A}\\backslash \\{0\\}$ which is constant on isomorphism classes.\n The map $\\phi$ is called a {\\bf stability function} if for each short exact sequence\n $0\\rightarrow L\\rightarrow M\\rightarrow N\\rightarrow 0$\n of nonzero objects in $\\mathcal{A}$ one has the so-called {\\bf see-saw property}:\n\n\n either $\\phi(L) = \\phi(M) = \\phi(N)$,\n\n or $\\phi(L) > \\phi(M) > \\phi(N)$,\n\n or $\\phi(L) < \\phi(M) < \\phi(N)$.\n\nMoreover, a nonzero object\n $M$ in $\\mathcal{A}$ is said to be {\\bf $\\phi$-stable} (or {\\bf $\\phi$-semistable}) if every nontrivial subobject\n $L \\subset M$ satisfies $\\phi(L) < \\phi(M)$ (or $\\phi(L) \\leq \\phi(M)$, respectively).\n\\end{Definition}\n\nLet $\\phi$ be a stability function on $\\mathcal{A}$. For any nonzero object $X$ in $\\mathcal{A}$, we call $\\phi(X)$ the {\\bf phase} of $X$.\n When there is no confusion, we will simply call an object {\\bf semistable} (respectively, {\\bf stable})\n instead of $\\phi$-semistable (respectively, $\\phi$-stable). Rudakov proved the Harder-Narasimhan property as follows.\n\n\\begin{Theorem}[\\cite{Rud}]\\label{ruda}\nLet $\\phi : \\mathcal{A}^*\\rightarrow \\mathcal{P}$ be a stability function, and let $X$ be a\n nonzero object in $\\mathcal{A}$. 
Then up to isomorphism, $X$ admits a unique Harder-Narasimhan filtration,\n that is, a filtration\n \\[0=X_0\\subsetneq X_1\\subsetneq X_2 \\subsetneq \\dots \\subsetneq X_l =X\\]\n such that the quotients $F_i = X_i\/X_{i-1}$ are semistable,\n and $\\phi(F_1) > \\phi(F_2) > \\dots > \\phi(F_l)$.\n\nOn the other hand, if $Y$ is a semistable object in $\\mathcal{A}$, then there exists a filtration of $Y$\n \\[0=Y_0\\subsetneq Y_1\\subsetneq Y_2 \\subsetneq \\dots \\subsetneq Y_m =Y\\]\nsuch that the quotients $G_i = Y_i\/Y_{i-1}$ are stable, and $\\phi(Y) = \\phi(G_m) = \\dots =\\phi(G_1)$.\n\\end{Theorem}\n\nThe second part of Theorem \\ref{ruda} claims that any semistable object admits a stable subobject and a\n stable quotient with the same phase as the semistable object.\n Following \\cite{BST1}, we call $F_1 = X_1$ the {\\bf maximally destabilizing subobject} of $X$ and\n $F_l = X_l\/X_{l-1}$ the {\\bf maximally destabilizing quotient} of $X$. They are unique up to isomorphism.\n\nFor a stability function $\\phi : \\mathcal{A}^*\\rightarrow \\mathcal{P}$, T. Br$\\mathrm{\\ddot{u}}$stle, D. Smith, and H. Treffinger proved in \\cite{BST1} that it induces a torsion pair $(\\mathcal{T}_{\\geq p}, \\mathcal{F}_{< p})$ in $\\mathcal{A}$ for every $p\\in \\mathcal{P}$, which\n is given as follows.\n \\[ \\mathcal{T}_{\\geq p}\\, =\\, \\{X\\,\\in\\,Obj(\\mathcal{A}): \\phi(X')\\,\\geq\\, p\\,\\, \\text{for the maximally destabilizing\n quotient $X'$ of $X$}\\} \\cup \\{0\\},\\]\n \\[ \\mathcal{F}_{< p}\\, =\\, \\{X\\,\\in\\,Obj(\\mathcal{A}): \\phi(X'')\\,<\\, p\\,\\, \\text{for the maximally destabilizing\n subobject $X''$ of $X$}\\} \\cup \\{0\\}.\\]\n Moreover, $\\mathcal{T}_{\\geq p} \\subseteq \\mathcal{T}_{\\geq s}$ whenever $p > s$.\n In \\cite{BST1}, Br$\\mathrm{\\ddot{u}}$stle, Smith and Treffinger proved that\n under some conditions on the stability function, the chain of torsion classes induced by the stability function is\n a maximal green sequence in $\\mathcal{A}$.\n\n On the other hand, important examples of stability functions are given by central charges. Let $\\mathcal{A}$ be\n an abelian length category with exactly $n$ nonisomorphic simple objects $S_1, S_2, \\dots, S_n$. 
We know that the Grothendieck\n group $K_0(\\mathcal{A})$ of $\\mathcal{A}$ is isomorphic to $\\mathbb{Z}^n$.\n \\begin{Definition}\nA {\\bf central charge} $Z$ on $\\mathcal{A}$ is an additive map $Z: K_0(\\mathcal{A}) \\rightarrow \\mathbb{C}$\nwhich is given by \\[Z(X) = \\langle\\alpha, [X]\\rangle + \\mathrm{i}\\langle\\beta, [X]\\rangle\\]\n for $X\\in Obj(\\mathcal{A})$. Here $\\alpha\\in \\mathbb{R}^n$ and $\\beta\\in \\mathbb{R}_{>0}^n$ are fixed, $\\langle\\cdot \\,, \\cdot\\rangle$ is the canonical inner product on $\\mathbb{R}^n$ and $\\mathrm{i}= \\sqrt{-1}$.\n\\end{Definition}\n\nSince $\\langle\\beta, [X]\\rangle > 0$ for any nonzero object $X$ in $\\mathcal{A}$, the value $Z(X)$ lies in the strict upper half-plane of the complex plane.\n It is well-known that every central charge $Z$ on $\\mathcal{A}$ determines a stability function $\\phi_{Z}$ (see also the proof of Theorem \\ref{main}), which is given by\n\\[\\phi_Z(X) = \\frac{\\arg Z(X)}{\\pi}.\\]\n\nWe say that a maximal green sequence can be induced by a central charge if the stability function determined by the central charge induces this maximal green sequence.\\\\\n\nThis article is organized as follows.\n In Section \\ref{2}, we study relations between maximal green sequences and complete forward hom-orthogonal sequences.\n In Section \\ref{3.1}, we study properties of maximal green sequences induced\n by stability functions. 
In Section \\ref{3.2}, we define {\\bf crossing inequalities} for maximal green sequences (see Definition \\ref{as}),\n and then prove the following main result.\n\n\n{\\bf Theorem \\ref{main}}\\;\n{\\em A maximal green sequence $\\mathcal{T}: 0 = \\mathcal{T}_0\\, \\lessdot\\, \\mathcal{T}_1\\, \\lessdot\\, \\mathcal{T}_2\\, \\lessdot \\,\\dots\\, \\lessdot\\, \\mathcal{T}_m = \\mathcal{A} $ in an abelian length category $\\mathcal{A}$ is induced by some central charge $Z : K_0(\\mathcal{A}) \\rightarrow \\mathbb{C}$ if and only if\n $\\mathcal{T}$ satisfies crossing inequalities. }\n\nIn Section \\ref{4.1}, for finite dimensional algebras, we formulate relations between maximal green sequences of torsion classes and maximal green sequences of $\\tau$-tilting pairs, which are defined via $c$-vectors. In Section \\ref{4.2}, we prove the Rotation Lemma for finite dimensional Jacobian algebras and apply Theorem \\ref{main} to formulate relations of crossing inequalities between Jacobian algebras and its mutation.\n\n\n\\section{Correspondence between maximal green sequences and complete forward hom-orthogonal sequences}\\label{2}\n\n\\subsection{Complete forward hom-orthogonal sequences}\nWe recall the concept of complete forward hom-orthogonal sequences from \\cite{Ig1, Ig2}. Let us introduce some notations. 
Let $\\mathcal{A}$\n be an abelian length category, $\\mathcal{C}$ be a subcategory of $\\mathcal{A}$ and $N$ be an object in $\\mathcal{A}$.\n A {\\bf wide subcategory} of $\\mathcal{A}$ is an abelian subcategory closed under extensions.\n The full subcategory $N^{\\bot}$ is defined to be $N^{\\bot} : = \\{X\\in \\mathcal{A} | Hom(N,X)=0\\}$\n and the full subcategory $\\mathcal{C}^{\\bot}$ is defined to be $\\mathcal{C}^{\\bot} : = \\{X\\in \\mathcal{A} | Hom(Y,X)=0, \\forall Y\\in \\mathcal{C}\\}$.\n The full subcategories $^{\\bot}N$ and $^{\\bot}\\mathcal{C}$ are defined similarly.\n We also write $\\mathcal{F}(N):= N^{\\bot}$ and $\\mathcal{G}(N):=\n {}^\\bot{\\mathcal{F}(N)}$ for every object $N\\in Obj(\\mathcal{A})$.\n\n Then it is clear that $\\mathcal{F}(N) = \\mathcal{G}(N)^{\\bot}$ and\n $(\\mathcal{G}(N),\\, \\mathcal{F}(N))$ is a torsion pair in $\\mathcal{A}$.\n\n\\begin{Proposition}[\\cite{Ig2}]\\label{gx}\n Suppose that $Hom(X, Y) = 0$ and $\\mathcal{C} = X^{\\bot} \\cap {}^{\\bot}Y$. Then $\\mathcal{G}(X) =\n {}^{\\bot}\\mathcal{C} \\cap {}^{\\bot}Y$.\n\\end{Proposition}\n\n\\begin{Definition}\nAn object $X$ in $\\mathcal{A}$ is called a {\\bf brick} if $End X$ is a division ring.\n\\end{Definition}\n\nIt is obvious that any brick is indecomposable. Let $\\mathcal{S}$ be a subset of $Obj(\\mathcal{A})$. We use $Filt(\\mathcal{S})$ to denote\nthe full subcategory of $\\mathcal{A}$ consisting of objects having a finite filtration whose subquotients\nare isomorphic to indecomposable objects in $\\mathcal{S}$, i.e., $X\\in Filt(\\mathcal{S})$ if and only if there exists a finite filtration of\n$X$: \\[0=X_0 \\subset X_1 \\subset X_2 \\subset \\dots \\subset X_m =X\\]\nsuch that $X_i\/X_{i-1}\\in Ind(\\mathcal{S})$ for all $i$. 
For an indecomposable object $X$, we will denote $Filt(\\{X\\})$ by $Filt(X)$.\n\n The following lemma is well-known.\n\n\\begin{Lemma}[\\cite{Rin}]\nIf $X$ is a brick in $\\mathcal{A}$, then $Filt(X)$ is a wide subcategory of $\\mathcal{A}$.\n\\end{Lemma}\n\n\n\\begin{Definition}[\\cite{Ig1, Ig2}]\\label{cfho}\nA {\\bf complete forward hom-orthogonal sequence} (briefly, CFHO sequence) in $\\mathcal{A}$ is a finite sequence of bricks $N_1, N_2, \\dots , N_m$\n such that\n \\begin{enumerate}\n \\item[(i)] $Hom(N_i, N_j) = 0$ for all $1\\leq i \\lneqq j \\leq m$;\n \\item[(ii)] The sequence is maximal in $\\mathcal{G}(N)$, where $N = N_1 \\oplus N_2 \\oplus \\dots \\oplus N_m$.\n By maximal we mean that no other bricks can be inserted into $N_1, N_2, \\dots , N_m$ preserving (i);\n \\item[(iii)] $\\mathcal{G}(N) = \\mathcal{A}$.\n \\end{enumerate}\n\\end{Definition}\n\nNote that \\cite{Ig2} (page 4) claims that if the sequence $N_1, N_2, \\dots , N_m$ satisfies Definition \\ref{cfho} (i), then the condition (ii) in this definition is equivalent to the fact that for all $k$, $$\\mathcal{G}(N) \\cap (N_1 \\oplus \\dots \\oplus N_k)^{\\bot} \\cap {}^{\\bot}(N_{k+1} \\oplus \\dots \\oplus N_m) = 0.$$\n\n\n\\begin{Corollary}\\label{cgx}\nLet $M_1, M_2, \\dots , M_m$ be a complete forward hom-orthogonal sequence in $\\mathcal{A}$, and let\n $M_0 = 0 = M_{m+1}$, $X_i = M_0 \\oplus M_1 \\oplus \\dots \\oplus M_i$ and $Y_i = M_{i+1} \\oplus \\dots\n \\oplus M_m \\oplus M_{m+1}$ for $0 \\leq i \\leq m$. Then $\\mathcal{G}(X_i) = {}^{\\bot}Y_i$ for every $0 \\leq i \\leq m$.\n\\end{Corollary}\n\\begin{proof}\nSince $M_1, M_2, \\dots , M_m$ is a complete forward hom-orthogonal sequence, we have $Hom(X_i, Y_i) = 0$\n and $\\mathcal{C}_i = X^{\\bot}_i \\cap {}^{\\bot}Y_i = \\mathcal{A} \\cap X^{\\bot}_i \\cap {}^{\\bot}Y_i =\n \\mathcal{G}(M) \\cap X^{\\bot}_i \\cap {}^{\\bot}Y_i =0$. 
It follows from Proposition \\ref{gx} that\n $\\mathcal{G}(X_i) = {}^{\\bot}Y_i$.\n\\end{proof}\n\n In \\cite{Ig2}, Igusa also proved the following property of complete forward hom-orthogonal sequences, which\n shows that simple objects are important ingredients in a complete forward hom-orthogonal sequence.\n\n\\begin{Lemma}[\\cite{Ig2}]\\label{sim}\n Let $N_1, N_2, \\dots , N_m$ be a complete forward hom-orthogonal sequence in $\\mathcal{A}$.\n Then the sequence contains all simple objects (up to isomorphism) in $\\mathcal{A}$. Moreover $N_1$ and $N_m$\n are simple objects.\n\\end{Lemma}\n\n\\begin{Corollary}\nIf $\\mathcal{A}$ admits a complete forward hom-orthogonal sequence, then there are only\n finitely many simple objects in $\\mathcal{A}$ up to isomorphism.\n\\end{Corollary}\n\n\\subsection{Maximal green sequences and CFHO sequences}\nMinimal extending objects for a torsion class were introduced by Barnard, Carroll, and Zhu in \\cite{BCZ}\n to study covers of the torsion class.\n\\begin{Definition}[\\cite{BCZ}]\\label{MEO}\nSuppose $\\mathcal{T}$ is a torsion class in $\\mathcal{A}$. An object $M$ in $\\mathcal{A}$ is called\n a {\\bf minimal extending object} for $\\mathcal{T}$ provided the following conditions hold:\n \\begin{enumerate}\n \\item[(i)] Every proper factor of $M$ is in $\\mathcal{T}$;\n \\item[(ii)] If $0 \\rightarrow M \\rightarrow X \\rightarrow T \\rightarrow 0$ is a non-split exact sequence with\n $T\\in \\mathcal{T}$, then $X\\in \\mathcal{T}$;\n \\item[(iii)] $\\mathrm{Hom}(\\mathcal{T}, M) = 0$.\n \\end{enumerate}\n\\end{Definition}\n\nNote that if $M$ is a minimal extending object for a torsion class $\\mathcal{T}$, then $M$ is indecomposable\n by Definition \\ref{MEO} (i). Moreover, assuming (i), condition (iii) is equivalent to\n $M\\notin \\mathcal{T}$. 
We write $[M]$ for the isoclass of the object $M$, $ME(\\mathcal{T})$ for the set\n of isoclasses $[M]$ such that $M$ is a minimal extending object for $\\mathcal{T}$, and $Filt(\\mathcal{T}\n \\cup \\{M\\})$ for the iterative extension closure of $Filt(\\mathcal{T})\\cup \\{M\\}$. The following results\n were proved for the category of finitely generated modules over a finite-dimensional algebra in \\cite{BCZ}.\n The results in Section 2 of \\cite{BCZ} also hold for abelian length categories.\n\n\\begin{Proposition}[\\cite{BCZ}]\\label{pop} Suppose $\\mathcal{T}$ is a torsion class in $\\mathcal{A}$ and\n $M$ is an indecomposable object such that every proper factor of $M$ lies in $\\mathcal{T}$. Then\n $Filt(\\mathcal{T}\\cup \\{M\\})$ is a torsion class and $M$ is a brick.\n\\end{Proposition}\n\nThe following result was proved for finite dimensional algebras in \\cite{BCZ}. We give a new proof for abelian length categories.\n\\begin{Lemma}[\\cite{BCZ}]\\label{lem}\nLet $\\mathcal{T}$ be a torsion class in $\\mathcal{A}$ and $M\\notin \\mathcal{T}$ be an indecomposable object in $\\mathcal A$ such\nthat each proper factor of $M$ is in $\\mathcal{T}$. Let $N\\in Filt(\\mathcal{T}\\cup \\{M\\})\\backslash \\mathcal{T}$ such that each proper\nfactor of $N$ lies in $\\mathcal{T}$. If $Filt(\\mathcal{T}\\cup \\{M\\}) \\gtrdot \\mathcal{T}$, then $M \\cong N$.\n\\end{Lemma}\n\\begin{proof}\nIt is clear that $N$ is indecomposable. By Proposition \\ref{pop}, the full subcategory $Filt(\\mathcal{T}\\cup \\{N\\})$\nis a torsion class satisfying that $\\mathcal{T} \\subsetneq Filt(\\mathcal{T}\\cup \\{N\\}) \\subset Filt(\\mathcal{T}\\cup \\{M\\})$,\nwhich implies that $Filt(\\mathcal{T}\\cup \\{N\\}) = Filt(\\mathcal{T}\\cup \\{M\\})$ since $Filt(\\mathcal{T}\\cup \\{M\\}) \\gtrdot \\mathcal{T}$.\n\nWe claim that $Hom(M, N) \\neq 0$ and $Hom(N, M) \\neq 0$. Note that $Hom(\\mathcal{T}, M) = 0$,\n since each proper factor of $M$ is in $\\mathcal{T}$ and $M\\notin \\mathcal{T}$. 
If $Hom(N, M) = 0$,\n then it is easy to see that $Hom(Filt(\\mathcal{T}\\cup \\{N\\}),\\,\\, M)=0$. This contradicts the fact\n that $M\\in Filt(\\mathcal{T}\\cup \\{M\\}) = Filt(\\mathcal{T}\\cup \\{N\\})$. Then $Hom(N, M) \\neq 0$. Similarly,\n we have that $Hom(M, N) \\neq 0$.\n\n Suppose that $M \\ncong N$. Let $f: M\\rightarrow N$ and $g: N\\rightarrow M$ be two nonzero morphisms.\n Then $f$ and $g$ are not epimorphisms. Otherwise, one of $M$ and $N$ would be a proper factor of the other and hence would lie in $\\mathcal{T}$,\n which contradicts the facts that $M\\notin \\mathcal{T}$ and $N\\notin \\mathcal{T}$. Thus $\\mathrm{coker}f$ is a proper factor of $N$ and therefore belongs to $\\mathcal{T}$.\n\n If $f$ is not a monomorphism, then $\\mathrm{Im}f$ is a proper factor of $M$. Then $\\mathrm{Im}f$ and $\\mathrm{coker}f$ belong to $\\mathcal{T}$,\n which implies that $N\\in \\mathcal{T}$, contradicting $N\\not\\in \\mathcal{T}$. Hence $f$ is a monomorphism and similarly $g$ is also a monomorphism.\n\n Note that $gf \\neq 0$, since $f\\not=0$ and $g$ is a monomorphism. Therefore $gf: M\\rightarrow M$ is\n an isomorphism since $M$ is a brick. This implies $g$ is an epimorphism, which is a\n contradiction.\n\n Thus $M\\cong N$.\n\\end{proof}\n\\begin{Theorem}[\\cite{BCZ}]\\label{mini} Suppose $\\mathcal{T}$ is a torsion class in $\\mathcal{A}$. Then the map\n $\\eta_{\\mathcal{T}}\\,:\\, [M]\\,\\mapsto\\,Filt(\\mathcal{T}\\cup \\{M\\})$\n is a bijection from the set $ME(\\mathcal{T})$ to the set of $\\mathcal{T}'$\n such that $\\mathcal{T} \\lessdot \\mathcal{T}'$. 
Moreover, for each\n such $\\mathcal{T}'$, there exists a unique indecomposable\n object $M$ such that $\\mathcal{T}' = Filt(\\mathcal{T}\\cup \\{M\\})$, and in this case,\n $M$ is a minimal extending object for $\\mathcal{T}$.\n Furthermore, the map $Filt(\\mathcal{T}\\cup \\{M\\}) \\mapsto [M]$ is the\n inverse to $\\eta_{\\mathcal{T}}$.\n\\end{Theorem}\nIn \\cite{BCZ}, the statement that $M$ is a minimal extending object for $\\mathcal{T}$ in this case was given in the proof of this theorem.\n\nThe following results are the main tools for us to construct\n a stability function for a given class of maximal green sequences.\n\\begin{Theorem}\\label{m2}\nSuppose that the sequence $N_1, N_2, \\dots , N_m$ is a complete forward hom-orthogonal sequence in\n $\\mathcal{A}$. Let $\\mathcal{G}_i = \\mathcal{G}(N_0\\oplus N_1 \\oplus \\dots \\oplus N_{i})$ for\n each $0 \\leq i \\leq m$, where $N_0 = 0$. Then,\n\n (i)\\; $\\mathcal{G}_i = Filt(N_0, N_1, \\dots, N_i)$;\n\n(ii)\\; $N_i$ is a minimal extending object for $\\mathcal{G}_{i-1}$ satisfying that $\\mathcal{G}_i = Filt(\\mathcal{G}_{i-1}\\cup \\{N_i\\})$;\n\n(iii)\\; The sequence\n $0 = \\mathcal{G}_0\\, \\lessdot\\, \\mathcal{G}_1\\, \\lessdot\\, \\mathcal{G}_2\\, \\lessdot \\,\\dots\\,\n \\lessdot\\, \\mathcal{G}_m = \\mathcal{A} $ is a maximal green sequence in $\\mathcal{A}$.\n\\end{Theorem}\n\n\n\\begin{proof}\nBy Corollary \\ref{cgx}, we have $\\mathcal{G}_i = \\mathcal{G}(N_0\\oplus N_1 \\oplus \\dots \\oplus N_{i}) = {}^{\\bot}(N_{i+1}\\oplus \\dots \\oplus N_m \\oplus N_{m+1})$,\n where $N_{m+1}=0$. We will prove (i) and (ii) by induction. It is obvious that $\\mathcal{G}_0 = Filt(N_0) = 0$ and $N_1 \\in ME(\\mathcal{G}_0)$\n since $N_1$ is a simple object.\n\nSuppose that $\\mathcal{G}_i = Filt(N_0, N_1, \\dots, N_i)$ and $N_i\\in ME(\\mathcal{G}_{i-1})$ for $1\\leq i \\leq j$.\n We claim that $N_{j+1}\\in ME(\\mathcal{G}_j)$. 
First note that $Hom(\\mathcal{G}_j, N_k) = 0$ for $k>j$ since $\\mathcal{G}_j^{\\bot} = (N_0\\oplus \\dots \\oplus N_j)^{\\bot}$.\n In particular, $Hom(\\mathcal{G}_j, N_{j+1}) = 0$. Second, suppose that $N$ is a proper quotient of $N_{j+1}$; then\n it is clear that $N \\in {}^{\\bot}(N_{j+2}\\oplus \\dots \\oplus N_m)$. If $f\\in Hom(N, N_{j+1})$ is nonzero, then the composition of the epimorphism $N_{j+1}\\rightarrow N$ with $f$ is a nonzero endomorphism of the brick $N_{j+1}$, hence an isomorphism; this forces $N\\cong N_{j+1}$, which is a contradiction. Thus $N\\in \\mathcal{G}_j =\n {}^{\\bot}(N_{j+1}\\oplus \\dots \\oplus N_m)$. Let $0 \\rightarrow N_{j+1} \\stackrel{a}{\\longrightarrow} X \\stackrel{b}{\\longrightarrow} T \\rightarrow 0$\n be a nonsplit exact sequence with $T\\in \\mathcal{G}_j$. Then it is enough to prove that $Hom(X, N_{j+1}) = 0$.\n Let $f\\in Hom(X, N_{j+1})$. If $fa\\neq 0$, then $fa$ is an isomorphism since $N_{j+1}$ is a brick, so $a$ is a section and the sequence splits, which is a contradiction.\n Then $fa=0$, and thus $f$ factors through $b$. Since $Hom(T, N_{j+1})=0$, then $f=0$. Then $X\\in \\mathcal{G}_j$ and\n $N_{j+1}\\in ME(\\mathcal{G}_j)$. Thus $\\mathcal{G}_{j+1} = Filt(\\mathcal{G}_j \\cup \\{N_{j+1}\\}) = Filt(Filt(N_0, N_1, \\dots , N_{j}) \\cup \\{N_{j+1}\\}) = Filt(N_0, N_1, \\dots , N_{j+1})$. Then by induction, (i) and (ii) hold.\n\nClearly, $\\mathcal{G}_0=0$ and $\\mathcal{G}_m=\\mathcal{A}$. By Theorem \\ref{mini} and (ii), we have $\\mathcal{G}_i\\lessdot \\mathcal{G}_{i+1}$ for any $i$. Then (iii) holds.\n\\end{proof}\n\n\n\n\\begin{Theorem}\\label{m1}\nLet $0 = \\mathcal{T}_0\\, \\lessdot\\, \\mathcal{T}_1\\, \\lessdot\\, \\mathcal{T}_2\\, \\lessdot \\,\\dots\\,\n \\lessdot\\, \\mathcal{T}_m = \\mathcal{A} $ be a maximal green sequence in $\\mathcal{A}$. 
Then there\n exists a sequence of bricks $N_1, N_2, \\dots , N_m$ such that\n \\begin{enumerate}\n \\item[(i)] $N_i$ is a minimal extending object of $\\mathcal{T}_{i-1}$ and $\\mathcal{T}_i =\n Filt(\\mathcal{T}_{i-1} \\cup \\{N_i\\})$ for each $1 \\leq i \\leq m$.\n \\item[(ii)] $\\mathcal{T}_i = Filt(N_0, N_1, \\dots, N_i) = \\mathcal{G}(N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)\n = {}^{\\bot}(N_{i+1}\\oplus \\dots \\oplus N_m \\oplus N_{m+1})$ for each $0 \\leq i \\leq m$,\n where $N_0 = 0 = N_{m+1}$.\n \\item[(iii)] The sequence $N_1, N_2, \\dots , N_m$ is a complete forward hom-orthogonal sequence in $\\mathcal{A}$.\n \\item[(iv)] $Filt(N_i) = \\mathcal{T}_i \\cap \\mathcal{F}_{i-1}$ for each $1 \\leq i \\leq m$.\n \\item[(v)] Up to isomorphism, each object $X$ in $\\mathcal{A}$ admits a unique filtration\n \\[0=X_0\\subset X_1\\subset X_2 \\subset \\dots \\subset X_m =X\\]\n such that $X_i\/X_{i-1} \\in Filt(N_i)$ for each $1 \\leq i \\leq m$.\n \\end{enumerate}\n\\end{Theorem}\n\\begin{proof}\n(i) By Theorem \\ref{mini}, there exist indecomposable objects $N_1, N_2, \\dots, N_m$\nsuch that $N_i$ is a minimal\n extending object of $\\mathcal{T}_{i-1}$ and $\\mathcal{T}_i = Filt(\\mathcal{T}_{i-1} \\cup \\{N_i\\})$\n for each $1 \\leq i \\leq m$. Then due to Proposition \\ref{pop} and by the definition of minimal extending objects, $N_1, N_2, \\dots, N_m$ are bricks.\n\n(ii) It is obvious that $\\mathcal{T}_0 = Filt(N_0)$ and $\\mathcal{T}_1 = Filt(N_0, N_1)$. Then we can\n prove that $\\mathcal{T}_i = Filt(N_0, N_1, \\dots, N_i)$ by induction. To prove that $\\mathcal{T}_i =\n \\mathcal{G}(N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)$, it is enough to prove that $\\mathcal{F}_i =\n (N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)^{\\bot}$ for each $0 \\leq i \\leq m$. Assume that $Y \\in\n (N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)^{\\bot}$, it is clear that $Hom(\\mathcal{T}_i, Y) = 0$\n since $\\mathcal{T}_i = Filt(N_0, N_1, \\dots, N_i)$. 
Thus $Y \\in \\mathcal{F}_i$. Conversely, if\n $X\\in \\mathcal{F}_i$, then we have that $Hom(\\mathcal{T}_i, X) = 0$ and hence $X \\in\n (N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)^{\\bot}$. Therefore $\\mathcal{T}_i = \\mathcal{G}(N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)$.\n\n The statement $\\mathcal{T}_i = {}^{\\bot}(N_{i+1}\\oplus \\dots \\oplus N_m \\oplus N_{m+1})$\n will follow from (iii) by Corollary \\ref{cgx}.\n\n(iii) At first, by (i), $N_1, N_2, \\dots, N_m$ are bricks.\n\nBy the above proof of (ii), we have $N_i \\in \\mathcal{T}_{j-1} = Filt(N_0,N_1, \\dots, N_{j-1})$ for all $i < j$. And by (i), we have $Hom(\\mathcal{T}_{j-1}, N_j) = 0$. Hence, $Hom(N_i, N_j) = 0$ for all $i < j$. By (ii), we\n have shown that $\\mathcal{G}(N_1\\oplus \\dots \\oplus N_m) = \\mathcal{T}_m = \\mathcal{A}$. Now it is enough\n to prove that for all $i$, $$(N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)^{\\bot} \\cap {}^{\\bot}(N_{i+1} \\oplus \\dots \\oplus N_m \\oplus N_{m+1})=0.$$\n If $0\\not=X\\in (N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)^{\\bot} \\cap {}^{\\bot}(N_{i+1} \\oplus \\dots \\oplus N_m \\oplus N_{m+1})$, then $X\\in \\mathcal{F}_i$ and thus $X\\notin \\mathcal{T}_i$. Hence there exists $k$\n such that $k > i$ and $X\\in \\mathcal{T}_k\\backslash \\mathcal{T}_{k-1}$. We have that $N_k$ is a factor of $X$, i.e., there\n is an epimorphism $X\\rightarrow N_k$. Note that $X\\in {}^{\\bot}(N_{i+1} \\oplus \\dots \\oplus N_m \\oplus N_{m+1})$\n and $k>i$, therefore $Hom(X, N_k) = 0$, which is a contradiction. Thus\n $(N_0 \\oplus N_1\\oplus \\dots \\oplus N_i)^{\\bot} \\cap {}^{\\bot}(N_{i+1} \\oplus \\dots \\oplus N_m \\oplus N_{m+1})=0$\n and the sequence $N_1, N_2, \\dots , N_m$ is a complete forward hom-orthogonal sequence in $\\mathcal{A}$. 
We have also\n proved that $\\mathcal{T}_i = {}^{\\bot}(N_{i+1}\\oplus \\dots \\oplus N_m \\oplus N_{m+1})$ for all $i$.\n\n(iv) Suppose $X$ is a nonzero object in $\\mathcal{T}_i \\cap \\mathcal{F}_{i-1}$, then $X\\in \\mathcal{F}_{i-1}\n = (N_0 \\oplus N_1\\oplus \\dots \\oplus N_{i-1})^{\\bot}$ and thus $N_i$ is a subobject of $X$. Therefore there is\n an exact sequence $0\\rightarrow N_i \\rightarrow X \\rightarrow Y \\rightarrow 0$. It is clear that $Y\\in \\mathcal{T}_i$.\n We claim that $Y\\in \\mathcal{F}_{i-1} = (N_0 \\oplus N_1\\oplus \\dots \\oplus N_{i-1})^{\\bot}$. Otherwise assume that\n there is a nonzero morphism $h: N_k \\rightarrow Y$ with $k q$ and $\\mathcal{A}_{\\geq p} \\subsetneq \\mathcal{A}_{\\geq q}$.\n Let $X\\in \\mathcal{A}_{\\geq q}\\backslash \\mathcal{A}_{\\geq p}$ with phase $\\phi(X) = r$.\n Note that $X$ admits a quotient $N$ satisfying that $N$ is stable and $\\phi(X) = \\phi(N) = r$.\n It is obvious that $q \\leq r < p$ and hence $\\mathcal{T}_{\\geq p} \\subset \\mathcal{T}_{\\geq r} \\subset \\mathcal{T}_{\\geq q}$.\n Since $X$ is semistable, the maximal destabilizing quotient of $X$ is itself. Then $X\\in \\mathcal{T}_{\\geq r}$ and\n $X\\notin \\mathcal{T}_{\\geq p}$. Therefore $\\mathcal{T}_{\\geq p} \\subsetneq \\mathcal{T}_{\\geq r}$ and hence\n $\\mathcal{T}_{\\geq r} = \\mathcal{T}_{\\geq q}$, which implies $\\phi(N)=r\\in [q]$.\n\nSuppose there are two stable objects $N$ and $N'$ with phase $\\phi(N)=r$ and $\\phi(N')=r'$ satisfying that $r, r' \\in [q]$.\n If $r=r'$, then $N \\cong N'$ since $\\phi$ is discrete at $r\\in [q]$ by Proposition \\ref{cov}. Otherwise, we may assume that\n $r < r'$. Then we have that $\\mathcal{T}_{\\geq q} = \\mathcal{T}_{\\geq r} = \\mathcal{T}_{\\geq r'}$ and thus\n $N\\in \\mathcal{T}_{\\geq r} = \\mathcal{T}_{\\geq r'}$. By the definition of $\\mathcal{T}_{\\geq r'}$, we have\n that $r=\\phi(N) \\geq r'$, which is a contradiction. 
The uniqueness follows.\n\nWe shall prove that every proper factor of $N$ is in $\\mathcal{T}_{\\geq p}$. Indeed, let $N'$ be a nontrivial factor of $N$,\nand let $N''$ be the maximally destabilizing quotient of $N'$ and $N'''$ be the stable quotient of $N''$ with phase $\\phi(N''') = \\phi(N'') = s$.\nNote that $N'''\\in \\mathcal{T}_{\\geq q} = \\mathcal{T}_{\\geq r}$ implies that $s \\geq r$. On the other hand, if $s = r$, then $N''' \\cong N$ by the uniqueness of $N$, which is impossible since $l(N''') \\leq l(N') < l(N)$.\nTherefore we have $s > r$.\nIf $s \\geq p$, then $N' \\in \\mathcal{T}_{\\geq p}$.\nIf $s < p$, we claim that $\\mathcal{T}_{\\geq s} = \\mathcal{T}_{\\geq p}$, and then $N' \\in \\mathcal{T}_{\\geq p}$ also follows.\nIndeed, since $r< s < p$, we have that $\\mathcal{T}_{\\geq p} \\subset \\mathcal{T}_{\\geq s} \\subset \\mathcal{T}_{\\geq r}$.\nIf $\\mathcal{T}_{\\geq p} \\subsetneq \\mathcal{T}_{\\geq s}$, we have $\\mathcal{T}_{\\geq s} = \\mathcal{T}_{\\geq r}$, which implies\nthat $s\\in [p]$. This contradicts the uniqueness of $N$. As a consequence, every proper factor of $N$ is in $\\mathcal{T}_{\\geq p}$.\nIt is easy to see that $N$ is a minimal extending object for $\\mathcal{T}_{\\geq p}$ by Lemma \\ref{lem} and Theorem \\ref{mini}.\n\n\nIn particular, if $r_1\\in\\mathcal{P}$ satisfies $\\phi(N)= r < r_1 \\leq p$,\n then $\\mathcal{T}_{\\geq p} \\subset \\mathcal{T}_{\\geq r_1} \\subset \\mathcal{T}_{\\geq q}$.\n If $\\mathcal{T}_{\\geq r_1} = \\mathcal{T}_{\\geq q}$, then $\\mathcal{T}_{\\geq p} \\lessdot \\mathcal{T}_{\\geq r_1}$.\n Thus there exists a stable object $M$ satisfying that $\\phi(M) \\in [r_1]$, and in particular $r < r_1 \\leq \\phi(M) < p$.\n This contradicts the uniqueness of $N$. 
Hence $\mathcal{T}_{\geq r_1} = \mathcal{T}_{\geq p}$.\n\end{proof}\n\n\nFor the stability function $\phi$, suppose that there exist $r_0 > r_1 > \dots > r_m$ in $\mathcal{P}$ such that\n\[0 = \mathcal{T}_{\geq r_0} \lessdot \mathcal{T}_{\geq r_1} \lessdot \dots \lessdot \mathcal{T}_{\geq r_m} = \mathcal{A}\]\nforms a maximal green sequence.\nAssume that $N_i$ is the minimal extending object of $\mathcal{T}_{\geq r_{i-1}}$ such that\n$\mathcal{T}_{\geq r_i} = Filt(\mathcal{T}_{\geq r_{i-1}}\cup \{N_i\})$. By Theorem \ref{cor}, we know that $N_i$ is stable,\nand we may, without loss of generality, assume that $\phi(N_i) = r_i$.\nRecall that for $p\in \mathcal{P}$, the full subcategory $\mathcal{A}_p$ is given by\n\[\mathcal{A}_p = \{0\} \cup \{M\in \mathcal{A}\,\, |\,\, M\,\, \text{is semistable and}\,\,\phi(M)=p \}.\]\nIt is shown in \cite{BST1} that $\mathcal{A}_p$ is a wide subcategory for each $p\in \mathcal{P}$. In particular, we have the following\nresult.\n\begin{Proposition}\nWith the assumptions and notations above, we have that $\mathcal{A}_{r_i} = Filt(N_i)$ for each $i$ with $1 \leq i \leq m$.\n\end{Proposition}\n\begin{proof}\nSince $N_i$ is stable with phase $r_i$, we have $N_i \in \mathcal{A}_{r_i}$.\nSince $\mathcal{A}_{r_i}$ is a wide subcategory, it is closed under extensions.\nThen we have that $Filt(N_i) \subset \mathcal{A}_{r_i}$. On the other hand, if $N\in \mathcal{A}_{r_i}$ is nonzero,\nthen $N$ admits a stable factor $N'$ with phase $\phi(N') = \phi(N) = r_i$. Since $\phi$ is discrete at $r_i$,\nwe get $N' \cong N_i$. Therefore we have a short exact sequence $0\rightarrow L \rightarrow N \rightarrow N_i \rightarrow 0$.\nSince $N, N_i\in \mathcal{A}_{r_i}$ and $\mathcal{A}_{r_i}$ is closed under kernels, we have $L\in \mathcal{A}_{r_i}$. Note that\n$l(L) \phi(N_2) > \dots > \phi(N_m)$, and for any $X \in Filt(N_i)$, $X$ is semistable\n with phase $r_i$. 
Then for each nonzero object in $\mathcal{A}$, the filtration induced by $\phi$\n (see Theorem \ref{ruda}) is the same as that induced by the maximal green sequence\n (see Theorem \ref{m1}(v)) by the uniqueness of the filtration.\n\n\begin{Corollary}\nWith the assumptions and notations above, we have that\n\begin{enumerate}\n \item[(i)] The set of $\phi$-semistable objects in $\mathcal A$ is equal to $\bigcup_{i=1}^{m}\mathcal A_{r_i}\setminus \{0\}$;\n \item[(ii)] The set $\{N_1, N_2, \dots, N_m\}$\n is a complete set of pairwise nonisomorphic $\phi$-stable objects in $\mathcal{A}$.\n \end{enumerate}\n\end{Corollary}\n\begin{proof}\n It is enough to show that every\n semistable object $M$ in $\mathcal{A}$ lies in $Filt(N_i)$ for some $1\leq i \leq m$, and thus\n $\phi(M)\in \{r_1, r_2, \dots ,r_m\}$. Indeed, the filtration $0 \subsetneq M$ of $M$ induced by the stability function $\phi$ is the same\n as the one induced by the maximal green sequence, so $M\in Filt(N_i)$ for some $1\leq i \leq m$,\n and thus $\phi(M) = r_i$. In particular, since $\phi$ is discrete\n at each $r_i$, the set $\{N_1, N_2, \dots, N_m\}$\n is a complete set of pairwise nonisomorphic $\phi$-stable objects in $\mathcal{A}$.\n\end{proof}\n\n\subsection{Maximal green sequences induced by central charges}\label{3.2}\nLet $\mathcal{A}$ be an abelian length category. If there is a maximal green sequence in $\mathcal{A}$,\nthen $\mathcal{A}$ admits finitely many simple objects up to isomorphism. Let $S_1, S_2, \dots, S_n$\nbe a complete set of representatives of the isomorphism classes of simple objects in $\mathcal{A}$. Then we know that the Grothendieck\ngroup $K_0(\mathcal{A})$ is isomorphic to $\mathbb{Z}^n$. 
We may write $[X]\in \mathbb{Z}^n$ for the image of $X\in\mathcal{A}$.\n Note that for $\theta\in \mathbb{R}^n$ and $X\in obj(\mathcal{A})$, we also denote by $\langle\theta,X\rangle$ the inner product $\langle\theta, [X]\rangle$ for simplicity.\n\n\begin{Definition}\label{as}\n Let $\mathcal{T}: 0 = \mathcal{T}_0\, \lessdot\, \mathcal{T}_1\, \lessdot\, \mathcal{T}_2\, \lessdot \,\dots\, \lessdot\, \mathcal{T}_m = \mathcal{A} $\n be a maximal green sequence in the abelian length category $\mathcal{A}$, and $N_1, N_2, \dots, N_m$ be the corresponding complete forward\n hom-orthogonal sequence. If there exist vectors $\alpha\in \mathbb{R}^n$ and $\beta\in \mathbb{R}_{>0}^n$ such that\n $$\langle\alpha,N_i\rangle \langle\beta,N_{i+1}\rangle < \langle\alpha,N_{i+1}\rangle \langle\beta,N_i\rangle,$$ that is,\n $\frac{\langle\alpha,N_i\rangle}{\langle\beta,N_i\rangle} < \frac{\langle\alpha,N_{i+1}\rangle} {\langle\beta,N_{i+1}\rangle}$ for all $i$, then the maximal green sequence $\mathcal{T}$ is said to satisfy {\bf crossing inequalities}.\n\end{Definition}\n\n\n\begin{Theorem}\label{main}\n A maximal green sequence $\mathcal{T}: 0 = \mathcal{T}_0\, \lessdot\, \mathcal{T}_1\, \lessdot\, \mathcal{T}_2\, \lessdot \,\dots\, \lessdot\, \mathcal{T}_m = \mathcal{A} $ in an abelian length category $\mathcal{A}$ can be induced by some central charge $Z : K_0(\mathcal{A}) \rightarrow \mathbb{C}$ if and only if\n $\mathcal{T}$ satisfies crossing inequalities.\n\end{Theorem}\n\begin{proof}\nSuppose first that $\mathcal{T}$ satisfies crossing inequalities.\nLet $\alpha\in \mathbb{R}^n$ and $\beta\in \mathbb{R}_{>0}^n$ be such that\n $\langle\alpha,N_i\rangle \langle\beta,N_{i+1}\rangle < \langle\alpha,N_{i+1}\rangle \langle\beta,N_i\rangle$ for all $i$.\n We define a central charge \[Z : K_0(\mathcal{A}) \rightarrow \mathbb{C}\] which is given by\n $Z(X) = 
\langle\alpha, X\rangle + \mathrm{i}\cdot \langle\beta, X\rangle$,\n where $\mathrm{i} = \sqrt{-1}$ and $\langle\cdot \,, \cdot\rangle$ is the canonical inner product on $\mathbb{R}^n$.\n Since $\langle\beta, X\rangle >0$ for any nonzero object $X$, the complex number $Z(X)$\n lies in the open upper half-plane. Then we define a map $\phi : \mathcal{A}^* \rightarrow (0, 1)$ which is given by\n $\phi(X) = \frac{\arg Z(X)}{\pi}$ for any nonzero object $X$ in $\mathcal{A}$.\n\n It is obvious that $0 < \arg Z(X) < \pi$. Note that \[\cot \arg Z(X) = \frac{\langle\alpha,X\rangle}{\langle\beta,X\rangle}.\]\n For simplicity, we will write $\cot X$ for $\cot \arg Z(X)$. It is easy to see that\n for any two nonzero objects $X$ and $Y$, $\phi(X)\leq \phi(Y)$ (resp.\ $\phi(X)< \phi(Y)$) if and\n only if $\cot X \geq \cot Y$ (resp.\ $\cot X > \cot Y$),\n which is also equivalent to $\langle\alpha,X\rangle \langle\beta,Y\rangle-\langle\alpha,Y\rangle \langle\beta,X\rangle\geq 0$ (resp.\ $> 0$).\n It is well-known that $\phi$ is a stability function. 
Indeed, it is obvious that $\\phi(X) = \\phi(Y)$ if $X \\cong Y$.\n On the other hand, for any exact sequence $0 \\rightarrow L \\rightarrow M \\rightarrow N \\rightarrow0$ with $L, M, N \\neq 0$,\n we have that\n \\[\n\\left|\\begin{array}{cccc}\n \\langle\\alpha,M\\rangle & \\langle\\alpha,L\\rangle \\\\\n \\langle\\beta,M\\rangle & \\langle\\beta,L\\rangle\\\\\n \\end{array}\\right| =\n \\left|\\begin{array}{cccc}\n \\langle\\alpha,L\\rangle+\\langle\\alpha,N\\rangle & \\langle\\alpha,L\\rangle \\\\\n \\langle\\beta,L\\rangle+\\langle\\beta,N\\rangle & \\langle\\beta,L\\rangle\\\\\n \\end{array}\\right|=\n \\left|\\begin{array}{cccc}\n \\langle\\alpha,N\\rangle & \\langle\\alpha,L\\rangle \\\\\n \\langle\\beta,N\\rangle & \\langle\\beta,L\\rangle\\\\\n \\end{array}\\right|=\n \\left|\\begin{array}{cccc}\n \\langle\\alpha,N\\rangle & \\langle\\alpha,M\\rangle \\\\\n \\langle\\beta,N\\rangle & \\langle\\beta,M\\rangle\\\\\n \\end{array}\\right|,\n\\]\nwhich implies that $\\phi$ satisfies the seesaw property, and then $\\phi$ is a stability function on $\\mathcal{A}$.\n Since $\\frac{\\langle\\alpha,N_1\\rangle}{\\langle\\beta,N_1\\rangle} < \\frac{\\langle\\alpha,N_2\\rangle}{\\langle\\beta,N_2\\rangle} <\\dots < \\frac{\\langle\\alpha,N_m\\rangle}{\\langle\\beta,N_m\\rangle}$\n and $\\mathrm{cot} N_i = \\frac{\\langle\\alpha,N_i\\rangle}{\\langle\\beta,N_i\\rangle}$ for each $i$, we have that $$\\phi(N_1) > \\phi(N_2) > \\dots > \\phi(N_m).$$\n\n We claim that any nonzero object $X$ in $Filt(N_i)$ is $\\phi$-semistable with phase $\\phi(X) = \\phi(N_i)$.\n Since $\\phi$ satisfies the seesaw property, then $\\phi(X) = \\phi(N_i)$ and thus\n $\\mathrm{cot}X = \\mathrm{cot}N_i =\\frac{\\langle\\alpha,N_i\\rangle}{\\langle\\beta,N_i\\rangle}$ for any nonzero object $X\\in Filt(N_i)$.\n Suppose that $L$ is a nontrivial subobject of $X$. 
By Theorem \ref{m1} (v), there is\n a unique filtration of $L$:\n \[0=L_0\subset L_1\subset L_2 \subset \dots \subset L_m = L\]\n such that $L_j\/L_{j-1}\in Filt(N_j)$ for each $1 \leq j \leq m$. Then we may assume that $[L_j\/L_{j-1}] = l_j[N_j]$.\n Thus, $[L]=\sum_{j=1}^ml_j[N_j]$, $\langle\alpha,L\rangle = \sum_{j=1}^ml_j\cdot \langle\alpha,N_j\rangle$ and $\langle\beta,L\rangle = \sum_{j=1}^ml_j\cdot \langle\beta,N_j\rangle$.\n Since $X\in Filt(N_i)= \mathcal{T}_i \cap \mathcal{F}_{i-1}$,\n we get $$L\in \mathcal{F}_{i-1}=(N_0\oplus N_1\oplus \dots \oplus N_{i-1})^{\bot}.$$\n Therefore $L_0 = L_1 = \dots = L_{i-1} = 0$ and thus $l_1 = l_2 = \dots = l_{i-1}= 0$.\n Since $\frac{\langle\alpha,N_j\rangle}{\langle\beta,N_j\rangle} \geq \frac{\langle\alpha,N_i\rangle}{\langle\beta,N_i\rangle}$ holds for all $j \geq i$, we have that\n \begin{equation}\label{ine}\n \mathrm{cot} L = \frac{\langle\alpha,L\rangle}{\langle\beta,L\rangle}\n = \frac{ \sum\limits_{j=i}^{m}l_j\langle\alpha,N_j\rangle}{\sum\limits_{j=i}^{m}l_j\langle\beta,N_j\rangle}\n \geq \frac{ \sum\limits_{j=i}^{m}l_j\langle\beta,N_j\rangle\langle\alpha,N_i\rangle\/\langle\beta,N_i\rangle}{\sum\limits_{j=i}^{m}l_j\langle\beta,N_j\rangle}\n = \frac{\langle\alpha,N_i\rangle}{\langle\beta,N_i\rangle} = \mathrm{cot} X.\n \end{equation}\n Then $\phi(L)\leq \phi(X)$, which implies that $X$ is $\phi$-semistable. In particular, if $X = N_i$,\n then we claim that $Hom(N_i, L) = 0$. Otherwise, the composition $N_i \rightarrow L \hookrightarrow N_i$ is\n nonzero and hence an isomorphism, which is a contradiction since $L$ is a nontrivial subobject of $N_i$. 
Then $L\\in \\mathcal{F}_i =\n (N_0\\oplus \\dots \\oplus N_i)^{\\bot}$, and thus $l_i = 0$ and the inequality (\\ref{ine}) is strict.\n Then $N_i$ is $\\phi$-stable.\n\nBy Theorem \\ref{m1}, for any nonzero object $X\\in \\mathcal{A}$, there is a unique filtration of $X$:\n \\begin{equation}\\label{fil}\n 0=X_{i_0}\\subsetneq X_{i_1}\\subsetneq X_{i_2} \\subsetneq \\dots \\subsetneq X_{i_l} =X\n \\end{equation}\n such that $X_{i_j}\/X_{i_{j-1}} \\in Filt(N_{i_j})$ for each $1 \\leq j \\leq l$ and $1 \\leq i_1 < i_2 < \\dots < i_l \\leq m$.\n Since $X_{i_j}\/X_{i_{j-1}}$ are $\\phi$-semistable with phase $\\phi(X_{i_j}\/X_{i_{j-1}}) = \\phi(N_{i_j})$ and\n $\\phi(N_{i_1}) > \\phi(N_{i_2}) > \\dots > \\phi(N_{i_l})$, the filtration (\\ref{fil}) of $X$ induced by the maximal\n green sequence is the same as the unique one induced by the stability function $\\phi$ as in Theorem \\ref{ruda}. Then\n $X_{i_l}\/X_{i_{l-1}}$ is the maximally destabilizing quotient of $X$. Hence the phase of the maximally destabilizing\n quotient of each nonzero object in $\\mathcal{A}$ is in $\\{\\phi(N_1), \\phi(N_2), \\dots , \\phi(N_m)\\}$. 
For simplicity,\n we write $\phi(N_i) = r_i$ for each $1 \leq i \leq m$.\n\nFor each $r\in [0,1]$, recall that the torsion class $\mathcal{T}_{\geq r}$ induced by $\phi$\n is given by\n \[ \mathcal{T}_{\geq r}\, =\, \{X\,\in\,Obj(\mathcal{A}): \phi(X')\,\geq\, r\,\, \text{for the maximally destabilizing\n quotient $X'$ of $X$}\} \cup \{0\}.\]\n We shall prove that $\mathcal{T}_{\geq r_i} = \mathcal{T}_i$ for each $1 \leq i \leq m$.\n Note that $\phi(X') \geq r_i$ if and only if $\phi(X') \in \{r_1, r_2, \dots, r_i\}$.\n\nSuppose that $Y$ is an arbitrary nonzero object in $\mathcal{A}$. By considering the unique filtration of $Y$,\n it is easy to see that $Y \in \mathcal{T}_k\backslash \mathcal{T}_{k-1}$ if and only if $\phi(Y') = r_k$,\n where $Y'$ is the maximally destabilizing quotient of $Y$.\n\n Now, if $X\in \mathcal{T}_i\backslash \{0\}$, then\n there exists $j\leq i$ such that $X \in \mathcal{T}_j\backslash \mathcal{T}_{j-1}$. Thus $\phi(X') = r_j \geq r_i$\n where $X'$ is the maximally destabilizing quotient of $X$, and hence\n $X \in \mathcal{T}_{\geq r_i}$. Conversely, if $W\in \mathcal{T}_{\geq r_i}$, then $\phi(W') = r_j \geq r_i$ for some $j\leq i$ where $W'$ is the maximally destabilizing quotient of $W$, and\n hence $W\in \mathcal{T}_j\backslash \mathcal{T}_{j-1}$. Then $W\in \mathcal{T}_i$.\n Thus $\mathcal{T}_{\geq r_i} = \mathcal{T}_i$ for each $1 \leq i \leq m$.\n\n In particular, we have that\n $\mathcal{T}_{\geq r_m} = \mathcal{T}_m = \mathcal{A}$. On the other hand, take $r_0\in [0,1]$ such that $r_10}^n$.\n Then by definition, $\mathcal{T}$ is induced by the stability function $\phi_Z$. Let $N_1, N_2, \dots, N_m$\n be the corresponding complete forward hom-orthogonal sequence. 
Then we have that $\phi_Z(N_1) > \phi_Z(N_2) > \dots >\phi_Z(N_m)$\n by Theorem \ref{cor}, and thus $\cot \arg Z(N_1) < \cot \arg Z(N_2) < \dots < \cot \arg Z(N_m)$. This implies that\n $$\langle\alpha,N_i\rangle\cdot \langle\beta,N_{i+1}\rangle < \langle\alpha,N_{i+1}\rangle\cdot \langle\beta,N_i\rangle$$ for each $1 \leq i \leq m-1$. Then $\mathcal{T}$ satisfies crossing inequalities.\n\end{proof}\n\n\nFor a given maximal green sequence, it is hard to determine whether it is induced by a central charge. Theorem \ref{main} provides a\npossible (though not complete) way to construct a central charge which induces the given maximal green sequence.\n In the following, we give an example to show that this is practicable.\n We refer to \cite{BDP} for basic concepts on $c$-matrices and maximal green sequences of a quiver.\n\n\n\n\begin{Example}\n\nConsider the following quiver $Q$ of type $A_4$ and let $A=KQ$ be the path algebra, where $K$ is an algebraically closed field.\n\n\[\begin{tikzpicture}[scale=1.3]\n\node at (0,0) (11) {$1$};\n\node at (1,0) (21) {$2$};\n\node at (2,0) (31) {$3$};\n\node at (3,0) (41) {$4$};\n\path[-angle 90]\n\t(11) edge (21)\n\t(21) edge (31)\n\t(31) edge (41);\n\end{tikzpicture}\]\n\nTo give a maximal green sequence for $mod(KQ)$, let us first consider the maximal green sequence $(2, 1, 4, 1, 2, 3)$ of $Q$; the corresponding mutations of $c$-matrices are given as follows.\n\n\[\begin{split} &\begin{bmatrix} 1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1 \end{bmatrix} \xrightarrow{\mu_2}\begin{bmatrix} 1&0&0&0\\0&-1&1&0\\0&0&1&0\\0&0&0&1 \end{bmatrix} \xrightarrow{\mu_1}\n\begin{bmatrix} -1&0&1&0\\0&-1&1&0\\0&0&1&0\\0&0&0&1 \end{bmatrix} \xrightarrow{\mu_4}\begin{bmatrix} -1&0&1&0\\0&-1&1&0\\0&0&1&0\\0&0&0&-1 \end{bmatrix}\\& \xrightarrow{\mu_3}\n\begin{bmatrix} 0&0&-1&0\\1&-1&-1&0\\1&0&-1&0\\0&0&0&-1 \end{bmatrix} \xrightarrow{\mu_1} \begin{bmatrix} 
0&0&-1&0\\-1&0&0&0\\-1&1&0&0\\0&0&0&-1 \end{bmatrix} \xrightarrow{\mu_2}\n\begin{bmatrix} 0&0&-1&0\\-1&0&0&0\\0&-1&0&0\\0&0&0&-1 \end{bmatrix}\n\end{split}\]\n\nRecall that Igusa showed in \cite{Ig1} that for an acyclic quiver, there is a bijection between\nmaximal green sequences of the quiver and CFHO (complete forward Hom-orthogonal) sequences of its path algebra over\nan algebraically closed field; more precisely, under this correspondence the mutated $c$-vectors of a maximal green sequence of the quiver are the dimension vectors\nof the bricks in the corresponding CFHO sequence of the path algebra.\n\nThen the sequence of mutated $c$-vectors above gives a complete forward hom-orthogonal sequence $N_1, N_2, N_3, N_4, N_5, N_6$ in $mod(KQ)$ satisfying that\n\[\underline{dim}N_1 = \begin{bmatrix} 0\\1\\0\\0 \end{bmatrix},\,\,\underline{dim}N_2 = \begin{bmatrix} 1\\0\\0\\0 \end{bmatrix},\,\,\underline{dim}N_3 = \begin{bmatrix} 0\\0\\0\\1 \end{bmatrix},\,\, \underline{dim}N_4 = \begin{bmatrix} 1\\1\\1\\0 \end{bmatrix},\,\,\underline{dim}N_5 = \begin{bmatrix} 0\\1\\1\\0 \end{bmatrix},\,\,\underline{dim}N_6 = \begin{bmatrix} 0\\0\\1\\0 \end{bmatrix}.\n\]\n\nBy Theorem \ref{m2}, the CFHO sequence $N_1, N_2, N_3, N_4, N_5, N_6$ gives a maximal green sequence $0 = \mathcal{T}_0 \lessdot \mathcal{T}_1 \lessdot \mathcal{T}_2 \lessdot \mathcal{T}_3 \lessdot \mathcal{T}_4 \lessdot \mathcal{T}_5 \lessdot \mathcal{T}_6$ of $mod(KQ)$, which is given by $\mathcal{T}_i = Filt(\{N_1, N_2,\dots, N_i\})$.\n\nFor the maximal green sequence $\mathcal{T} : \mathcal{T}_0 \lessdot \mathcal{T}_1 \lessdot \mathcal{T}_2 \lessdot \mathcal{T}_3 \lessdot \mathcal{T}_4 \lessdot \mathcal{T}_5 \lessdot \mathcal{T}_6$, we try to find $\alpha \in \mathbb{R}^4$ and $\beta \in \mathbb{R}^4_{>0}$ such that $\mathcal{T}$ satisfies the crossing inequalities.\n\nLet us fix a positive integer vector 
$\beta=(1,1,1,1)^T$. Assume that $\alpha = (x_1,x_2,x_3,x_4)^T \in \mathbb{R}^4$. Then the conditions $\langle\alpha,N_i\rangle\cdot \langle\beta,N_{i+1}\rangle\,\, <\,\, \langle\alpha,N_{i+1}\rangle \cdot \langle\beta, N_i\rangle $ for $1\leq i \leq 5$, i.e., \[\frac{\langle\alpha,N_1\rangle}{\langle\beta,N_1\rangle} < \frac{\langle\alpha,N_2\rangle}{\langle\beta,N_2\rangle}\n< \frac{\langle\alpha,N_3\rangle}{\langle\beta,N_3\rangle} < \frac{\langle\alpha,N_4\rangle}{\langle\beta,N_4\rangle} < \frac{\langle\alpha,N_5\rangle}{\langle\beta,N_5\rangle} < \frac{\langle\alpha,N_6\rangle}{\langle\beta,N_6 \rangle}\] are given by the following inequalities:\n\n \[ x_2 < x_1 < x_4 < \frac{x_1+x_2+x_3}{3} < \frac{x_2+x_3}{2} i$, we have that $gf=0$ since $Hom(M_i,M_j)=0$ and hence $g=0$ because $f$ is an epimorphism. Therefore $Hom(S, M_j) = 0$ for $j>i$. Since $S\in {^{\bot}S'_k}$ (because the simple module $S$ is not isomorphic to $S'_k$) and $F_k^+:S_k^{\bot}\rightarrow {}^{\bot}S'_k$ is an equivalence,\nthere is a module $R\in S_k^{\bot}$ such that $S=F_k^+(R)$, and it is clear that $R$ is a brick. For the sequence $N_1=S_k$, $\dots$, $N_i$, $R$, $N_{i+1}$, $\dots$, $N_m$, we have that \[R\in S_k^{\bot},\,\, i.e., \,\,Hom(N_1, R) = 0;\]\n\[\text{for}\,\, 2\leq p \leq i,\,\, Hom_{\mathcal{A}}(N_p, R) = Hom_{S_k^{\bot}}(N_p, R) = Hom_{^{\bot}S'_k}(F_k^+(N_p), S) = 0;\]\n\[\text{for}\,\, i+1\leq p \leq m,\,\,Hom_{\mathcal{A}}(R, N_p) = Hom_{S_k^{\bot}}(R,N_p) = Hom_{^{\bot}S'_k}(S,F_k^+(N_p)) = 0.\]\nThis contradicts the fact that $N_1, N_2, \dots, N_m$ is a complete forward Hom-orthogonal sequence in $\mathcal{A}$. 
Thus the lemma is true.\n\nNow we can prove our claim that if $S$ is a simple module in $\mathcal{A'}$, then $S \in \{F_k^+(N_2), \dots, F_k^+(N_m), S'_k\}$ (up to isomorphism).\nIndeed, if $S \notin \{F_k^+(N_2), \dots, F_k^+(N_m), S'_k\}$ (up to isomorphism), then by the Lemma above, $Hom(F_k^+(N_i), S) = 0$ for $2 \leq i \leq m$ and $Hom(S'_k, S) = 0$.\nSince $S$ is simple and not isomorphic to $S'_k$, we also have that $S\in {}^{\bot}S'_k$. Thus there is a module $M\in S_k^{\bot}$ such that $S = F_k^+(M)$.\nThen for any $i \geq 2$, we have that\n \[0 = Hom_{\mathcal{A'}}(F_k^+(N_i), S) = Hom_{\mathcal{A'}}(F_k^+(N_i), F_k^+(M)) = Hom_{^{\bot}S'_k}(F_k^+(N_i), F_k^+(M))= Hom_{S_k^{\bot}}(N_i,M).\]\nThus $M\in (N_1\oplus N_2\oplus \dots \oplus N_m)^{\bot} = 0$, which is a contradiction. The claim is true. By the claim, it is easy to see that $(F_k^+(N_2)\oplus \dots \oplus F_k^+(N_m)\oplus S'_k )^{\bot} = 0$.\n\nNow we can show that no other bricks can be inserted into the sequence $F_k^+(N_2), \dots, F_k^+(N_m), S'_k$ while preserving the forward Hom-orthogonal\nproperty. Indeed, suppose that there is a brick $M$ in $\mathcal{A}'$ satisfying the condition. Note that $M$ cannot be the last one because we have proved that $(F_k^+(N_2)\oplus \dots \oplus F_k^+(N_m)\oplus S'_k )^{\bot} = 0$. Then we may assume that the new sequence satisfying the forward Hom-orthogonal\nproperty is $F_k^+(N_2), \dots,F_k^+(N_i),M, F_k^+(N_{i+1}),\dots, F_k^+(N_m), S'_k$ with $M\in {}^{\bot}S'_k$. Note that there exists a module $H \in S_k^{\bot}$ such that $M=F_k^+(H)$ and $H$ is a brick. It is easy to see that the sequence $N_1, \dots, N_i, H, N_{i+1}, \dots, N_m$ satisfies the forward Hom-orthogonal\nproperty, which is a contradiction. 
Thus $F_k^+(N_2), \dots, F_k^+(N_m), S'_k$ is a complete forward Hom-orthogonal sequence in $\mathcal{A'}$.\n\n\n\end{proof}\n\n\begin{Corollary}\nLet $N_1, N_2, \dots, N_m$ be a complete forward Hom-orthogonal sequence in $\mathcal{A} = mod J(Q,w)$ with\n$N_1 = S_k$. If there are two vectors $\alpha, \beta\in \mathbb{R}^n$ such that $B_k\beta \in \mathbb{R}_{>0}^n$ and\n \[\n \frac{\langle\alpha,N_2\rangle}{\langle\beta,N_2\rangle} <\dots <\frac{\langle\alpha,N_{m}\rangle} {\langle\beta,N_{m}\rangle}<\frac{\langle\alpha,N_{1}\rangle} {\langle\beta,N_{1}\rangle},\n \]\n then the maximal green sequence $F_k^+(N_2), \dots, F_k^+(N_m), S'_k$ in $\mathcal{A'} = mod J(Q',w')$ can be induced by the central charge\n $Z: K_0(\mathcal{A}') \rightarrow \mathbb{C}$\nwhich is given by \[Z(X) = \langle B_k^T\alpha, [X]\rangle + \mathrm{i}\langle B_k^T\beta, [X]\rangle.\]\n\end{Corollary}\n\begin{proof}\nBy Theorem \ref{main}, it is enough to prove that\n\begin{equation}\label{z}\n \frac{\langle B_k^T\alpha, F_k^+N_2\rangle}{\langle B_k^T\beta, F_k^+N_2\rangle} <\dots <\frac{\langle B_k^T\alpha, F_k^+N_{m}\rangle} {\langle B_k^T\beta,F_k^+N_{m}\rangle}<\frac{\langle B_k^T\alpha,S'_k\rangle} {\langle B_k^T\beta, S'_k\rangle}.\n\end{equation}\n\nBy Theorem \ref{eq}(5), for $2 \leq i \leq m$, since $B_k^2=I$, we have that\n\[\n \frac{\langle B_k^T\alpha, F_k^+N_i\rangle}{\langle B_k^T\beta, F_k^+N_i\rangle} = \frac{\langle B_k^T\alpha, B_kN_i\rangle}{\langle B_k^T\beta, B_kN_i\rangle}\n =\frac{(\underline{dim}N_i)^T (B_k^T)^2\alpha}{(\underline{dim}N_i)^T (B_k^T)^2\beta} = \frac{(\underline{dim}N_i)^T \alpha}{(\underline{dim}N_i)^T \beta} = \frac{\langle\alpha,N_i\rangle}{\langle\beta,N_i\rangle}.\n \]\n\nOn the other hand, by the definition of $B_k$, we have that\n\[\n\frac{\langle B_k^T\alpha,S'_k\rangle} {\langle B_k^T\beta, S'_k\rangle} = \frac{a_k}{b_k} = 
\frac{\langle\alpha,N_{1}\rangle} {\langle\beta,N_{1}\rangle}.\n\]\nIt is clear that the inequalities (\ref{z}) hold.\n\end{proof}\n\n\n\vspace{5mm}\n{\bf Acknowledgements:}\; This project is supported by the National Natural Science Foundation of China (No.11671350) and the Zhejiang Provincial Natural Science Foundation of China (No.LY19A010023).\n\n\vspace{10mm}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:motivation} Motivation}\n\nThe aim of high-energy physics (HEP) is to understand matter at the most fundamental level. The current understanding of the building blocks of the universe and their interactions is embodied in the so-called standard model (SM) of particle physics. In this model, fundamental particles are quantum mechanical entities with fixed quantum numbers such as mass, spin, electric, and color charge, and are regarded as excitations of relativistic quantum fields. While the field of HEP has enjoyed many decades of progress, a long-standing challenge for both theorists and experimentalists in HEP is that the properties and behaviors of perturbative quantum fields are mathematically complex, present infrared and ultraviolet divergences, and yield expansions that converge slowly when the coupling constant is large. Furthermore, the time evolution of final-state particles and their hadronization is particularly complex and difficult to simulate accurately on digital computers, even on high-performance computing (HPC) systems. \n\nMeanwhile, a new approach to computing that explicitly leverages the complexities of quantum mechanics for computational purposes has been in development. 
In the last few years, prototype quantum computers capable of performing small computations using a few dozen quantum bits (``qubits'') have become widely accessible, prompting a burgeoning field of research in applications of near-term and future quantum computers.\nQuantum computing (QC) approaches have been developed and demonstrated for applications in many disciplines including chemistry, materials science, nuclear physics, and HEP. However, the field of QC is far from mature and the potential of QC for HEP has only begun to be explored.\n\nIn this paper, we highlight some of the significant opportunities and challenges for the development of quantum computing systems and associated software to deliver unprecedented capabilities for HEP discovery science.\nIn Sec.~\\ref{sec:background}, we survey current approaches to applying QC to HEP.\nIn Sec.~\\ref{sec:challenges}, we describe the leading technical challenges and gaps facing the adoption of quantum computing for HEP research problems, while in Sec.~\\ref{sec:prior}, we respond with long-term research priorities that will address these gaps in the coming decade.\n\n\\section{Background}\n\\label{sec:background}\n\nQuantum computing may be viewed as part of the larger subject of quantum information science and technology (QIST), which concerns the use of quantum systems to store, transmit, and process information \\cite{humble2019quantum}.\nQIST involves the precise isolation and control of well-defined quantum particles and fields.\nThese quantum physical systems can be used to simulate naturally occurring quantum phenomena (such as quantum chromodynamics (QCD)) or to solve computational tasks unrelated to physical phenomena (such as clustering or regression on a dataset).\nAs will be discussed later in this section, key concepts in quantum computing such as superposition, entanglement, and computational complexity have also provided new, information-oriented ways to address fundamental questions in 
HEP.\n\nFor the most part, efforts to apply QC in HEP to date have consisted of adapting established general-purpose quantum algorithms and tools to HEP-specific tasks. While most demonstrations have involved only toy problems, there have been a few in which the quantum computations were competitive with more traditional methods. Some examples include the simulation of a full SU(2) gauge theory on a noisy, intermediate-scale quantum (NISQ) device \cite{Atas2021SU2}, the training of large-scale quantum machine learning (QML) models for supernova classification \cite{Peters2021}, and the generation of synthetic detector data \cite{delgadoHamilton2022unsupervised}. \n\par \nIn the United States, much of the work to date exploring the potential of quantum information science for HEP has been through the Department of Energy (DOE) QuantISED program. QuantISED was launched in 2018 as part of the DOE Office of Science (SC) QIS initiative following a series of community roundtables, pilot studies, and reports \n\cite{DOESensorsReport,GrandChallengesReport}. The program has relied upon interdisciplinary collaboration between HEP and QIS researchers and partnerships between DOE laboratories, university consortia, and hardware vendors such as IBM, Rigetti, and Google. Major topic areas addressed by QuantISED include:\n\n\begin{itemize}\n \item \textbf{Cosmos and qubits:} theoretical connections between cosmic physics and qubit systems that can be studied in laboratory settings. In particular, the study of quantum gravity in the context of quantum information has yielded new insights. 
For example, principles of computational complexity and quantum error correction have helped scientists investigate the interiors of black holes \\cite{Kim2020} and whether gravity emerges from entanglement \\cite{Periwal2021}.\n \n \\item \\textbf{Foundational theory:} formulations of gauge theories amenable to simulation on quantum computers \\cite{PhysRevD.104.094519}.\n \n \\item \\textbf{Quantum simulation:} algorithms for simulating quantum mechanical systems including scalar quantum field theories \\cite{yeter2019scalar}, nuclear physics \\cite{yeter2020practical}, and other many-body systems \\cite{yeter2021scattering}.\n \n \\item \\textbf{Quantum computing:} quantum-enhanced machine learning and data analysis for HEP experiments. Developments in this area include: quantum machine learning algorithms \\cite{Peters2021, 2021Wu, 2021WuSunGuan} and optimization algorithms for the analysis of collider data and quantum simulation of field theories \\cite{QASimNachman2021}.\n \n \\item \\textbf{Quantum sensors:} sensors leveraging quantum information concepts and technologies to yield new detection capabilities. One advancement in this area is quantum-enhanced searches for dark matter \\cite{QISforAxions, PhysRevLett.126.141302}.\n \n \\item \\textbf{Quantum technology:} advances in QIST and its integration with HEP technology. Efforts in this topic include quantum internet \\cite{PRXQuantum.1.020317}, noise mitigation techniques for near-term quantum devices \\cite{UnfoldingNachman2020}, analysis of the technological requirements for HEP programs \\cite{PhysRevD.103.054507}, and others.\n\\end{itemize}\n\n\nWhile there is not space to describe all of these efforts in detail, recent work relating black hole thermodynamics and quantum information theory exemplifies the kinds of fruitful connections being developed between QIS and HEP. 
\nA long-standing problem in particle physics is the so-called ``black-hole information paradox'', which concerns the apparent loss of information that occurs when a particle disappears inside a black hole. The particle's information chaotically mixes with all the other matter and information inside the black hole, generating quantum entanglement between distant regions and seemingly making it impossible to retrieve. Recent landmark work \cite{Almheiri2019,Penington2020} suggests that the information passing through the event horizon of a black hole is not lost forever but is eventually released.\n\nIn the field of quantum information, this process of apparent information loss through widespread entanglement is known as scrambling. A quantum processor can directly measure the scrambling-induced spread of initially localized information via the decay of out-of-time-ordered correlation (OTOC) functions. In \cite{blok_quantum_2021}, the authors realize scrambling behavior in a multi-qutrit system. In \cite{Landsman2019}, a seven-qubit circuit executed on an ion-trap quantum computer enabled the authors to bound the scrambling-induced decay of the OTOC measurement experimentally.\nThese two experiments pave the way for laboratory investigations of the black hole information paradox, something that would hardly be imaginable without QC. \n\n\section{\label{sec:challenges} Challenges}\nThe emerging field of QC faces many technical challenges related to the development of algorithms, software, hardware, and infrastructure to support HEP research. In this section, we identify how these existing research challenges influence ongoing efforts to solve key problems in HEP research using QC. In Sec.~\ref{sec:ana}, we survey algorithms and applications which are known to apply to HEP challenge problems, and we identify several key gaps in extending these methods beyond proof-of-principle efforts. 
In Sec.~\\ref{sec:sah}, we review how existing computational tools, including quantum programming methods and numerical simulators, have influenced the accessibility and adoption of QC for the HEP community. In Sec.~\\ref{sec:tfh}, we briefly summarize early efforts in developing testbeds for experimental QC with a focus on HEP application areas. Then, in Sec.~\\ref{sec:infra}, we discuss some of the broader infrastructure challenges that face adoption of QC by the HEP community. \n\\subsection{Algorithms and Applications}\n\\label{sec:ana}\nThe Hamiltonian formulation of lattice quantum field theories allows us to use the tools from quantum computation to solve problems in strongly correlated quantum systems by understanding the dynamics in terms of quantum circuits or representing the action of a measurement as a projection in a Hilbert space \\cite{Banuls2020,Buser2021}. Different approaches in the quantum computation domain for quantum field theories cover subatomic many-body physics \\cite{jordan2012quantum,lu2018simulations}, real-time model dynamics of lattice gauge theories \\cite{martinez2016real,muschik2017u}, self-verifying variational quantum simulation of the lattice Schwinger model \\cite{kokail2018self}, non-abelian SU(2) lattice gauge theories on superconducting devices \\cite{mezzacapo2015non}, optical abelian lattice gauge theories and simulations of non-abelian gauge theories on ultracold-atom platforms \\cite{tagliacozzo2013optical,tagliacozzo2013simulation}, and simulation of Z(2) lattice gauge theories with dynamical fermionic matter \\cite{zohar2017digital,zohar2017digitalA}. \n\\par \nIn most of these applications, the limited qubit counts and gate fidelities available in QC hardware implementations represent bottlenecks for a proper comparison with classical algorithms and, therefore, for any assessment of quantum advantage. 
On the other hand, applying the lattice gauge approach to QCD \\cite{kan2021lattice, stetina2022} represents an exciting challenge for real-device implementation: it requires novel designs that respect quantum-device limitations, state-of-the-art classical simulators handling enough qubits to benchmark quantum algorithms against existing classical approaches, and the exploration of gravity\/QFT duality \\cite{Buser2021}.\n\\par\nFrom an algorithmic point of view, several challenges remain: the exploration of variational quantum algorithms in higher dimensions, the design of quantum algorithms beyond the existing hybrid classical-quantum format that combine different strategies to solve problems in HEP, the exploitation of quantum singular-value-decomposition (SVD) methods for unfolding problems in HEP, and the on-hardware implementation of quantum algorithms to reconstruct Feynman path integrals. \n\\subsubsection{Digital and Analog Simulations}\nInterestingly, digital, analog, and mixed digital-analog models of quantum computation have been considered within the context of lattice gauge theories \\cite{RevModPhys.86.153,PRXQuantum.2.017003}. In the digital realm, a \\textit{gauge-invariant} problem formulation is already a non-trivial matter; quantum-error-correction-like protocols \\cite{stryker2019} and manifestly gauge-invariant formulations \\cite{Davoudi2021,Ciavarella2021} have both been considered. After overcoming this first hurdle of problem formulation, the ever-growing set of quantum algorithms, including quantum phase estimation and linear combinations of unitaries or block encoding, can be used to compute the various quantities of interest appearing in the target theory \\cite{Roggero2019,Roggero2020, Ciavarella2021}. 
These algorithms seem suitable for fault-tolerant computation rather than near-term quantum computational models.\n\nAnalog approaches have a long history, with synthetic gauge fields in cold-atom simulators being a prime example of analog simulation \\cite{Galitski2019}. Analog realizations of gauge-invariant encodings, such as the loop-string-hadron encoding in cold-atom systems \\cite{Dasgupta2022}, have also appeared. Lastly, with some modest abstraction, one popular near-term approach has been to leverage the natural device physics in order to augment the conventional digital gate sets with analog processes \\cite{PhysRevA.104.042602}. For example, select bosonic vibrational modes can be used to simulate $U(1)$ gauge variables in a resource-efficient manner \\cite{tong2021provably}, e.g., using trapped-ion systems \\cite{nguyen2021digital}.\n\\subsubsection{Variational Methods}\nGenerative modeling learns how to generate a distribution by analyzing known data. The model itself may then be used to draw additional samples representative of the distribution. With a quantum processor, preparing and sampling from arbitrary distributions can be implemented directly, with no need to compute partition functions, which may be intractable. Additionally, using quantum systems has an attractive scaling characteristic: a model built on $n$ qubits can potentially prepare and sample from a distribution over $2^n$ states, where each state is a length-$n$ binary bitstring. \n\nQuantum machine learning is a broad field of research that includes using classical machine learning algorithms for studying and simulating quantum systems \\cite{gao2017efficient,torlai2018neural,torlai2020machine} and using quantum algorithms to implement analogues of classical machine learning algorithms. 
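As a minimal illustration of the scaling characteristic noted above, the sketch below prepares a toy $n$-qubit state on a classical state-vector simulator and samples bitstrings from its Born distribution. The circuit layout (one RY rotation per qubit followed by a chain of CNOTs) and the parameter values are illustrative assumptions, not a model from the literature.

```python
import numpy as np

def born_sample(thetas, n_shots=1000, seed=0):
    """Prepare a toy n-qubit state (one RY rotation per qubit followed by
    a chain of CNOTs) and sample bitstrings from its Born distribution."""
    n = len(thetas)
    state = np.zeros(2**n)
    state[0] = 1.0  # start in |00...0>
    # Single-qubit RY rotations; qubit 0 is taken as the most significant bit.
    for q, th in enumerate(thetas):
        ry = np.array([[np.cos(th / 2), -np.sin(th / 2)],
                       [np.sin(th / 2),  np.cos(th / 2)]])
        op = np.eye(1)
        for k in range(n):
            op = np.kron(op, ry if k == q else np.eye(2))
        state = op @ state
    # Nearest-neighbour CNOT entanglers.
    cnot = np.eye(4)
    cnot[[2, 3]] = cnot[[3, 2]]
    for q in range(n - 1):
        full = np.kron(np.kron(np.eye(2**q), cnot), np.eye(2**(n - q - 2)))
        state = full @ state
    probs = np.abs(state)**2  # Born rule: probabilities over 2^n bitstrings
    rng = np.random.default_rng(seed)
    samples = rng.choice(2**n, size=n_shots, p=probs)
    return probs, samples
```

With all angles zero the state remains $|0\ldots0\rangle$ and every sample is the all-zeros bitstring; non-trivial angles spread the probability mass over all $2^n$ outcomes, which is the scaling a quantum generative model exploits.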
Many machine learning applications in HEP leverage generative modeling \\cite{salamani2018deep,di2019dijetgan,alanazi2020simulation,hariri2021graph}, and the leading model is the generative adversarial network (GAN). Quantum GAN models \\cite{dallaire2018quantum,zoufal2019quantum,chakrabarti2019quantum} have been developed that utilize a trainable quantum circuit model for the generator network, yet adversarial training can still encounter mode collapse as with classical models. However, there are other generative models that can be constructed with quantum circuits, including quantum Boltzmann machines \\cite{amin2018quantum,zoufal2021variational} and quantum circuit Born machines (and related Ising Born machines) \\cite{benedetti2019generative,coyle2020born}.\n\nNear-term demonstrations of QML can reach performance comparable to classical machine learning models but have not yet demonstrated an advantage over them. Classical machine learning and deep learning can utilize large, distributed computing clusters to implement models with millions of parameters. Coupled with parallelization, these training workflows dwarf the current state of the art in quantum machine learning that uses variational, hybrid algorithms. However, as the size and capabilities of available quantum devices improve, larger quantum models can be built and executed. While near-term demonstrations and applications may not outperform classical ML models, their development is essential for understanding how quantum models learn.\n\n\\subsubsection{Noise Mitigation}\nThe design space associated with quantum generative models, and parameterized quantum models in general, is large. Best practices for building parameterized quantum circuit models, in order to effectively prepare and sample from these distributions, remain an open question \\cite{nakaji2021expressibility}. 
Heuristic approaches abound: sparse parameterizations (single rotation gates instead of arbitrary three-angle decompositions) reduce training overhead at the expense of trainability, and fewer entangling and two-qubit operations reduce susceptibility to hardware noise and improve transpilation for sparse hardware connectivity graphs, but at the expense of the correlations that can be modeled. \n\nMitigating hardware noise requires efficient noise characterization \\cite{PhysRevA.103.042603,dahlhauser2022benchmarking}. Many methods of error mitigation that rely on regression modeling (e.g. \\cite{lowe2021unified}) are focused on mitigating the effect of noise on expectation values. For generative modeling and algorithms that use the distribution over all basis states, matrix-based error mitigation methods have been incorporated into variational training workflows \\cite{hamilton2020error}, and scalable error mitigation methods are an active area of research \\cite{hamilton2020scalable,nation2021scalable}. These methods are needed to assess the stability of the quantum devices as well as the reproducibility of the applications \\cite{dasgupta2022characterizing}.\n\n\\subsubsection{Quantum Machine Learning}\nModern machine learning (ML) techniques, including deep learning, are rapidly being applied, adapted, and developed for HEP applications. ML is currently used across the different areas of particle physics, from collider physics \\cite{Stakia2021, Mikuni2021, Nachman2021} to the study of the cosmos \\cite{Ostdiek2022, List2021} and quantum gravity. ML is also present throughout the various stages of HEP studies, from the simulation of hypothesized particles \\cite{brehmer2020simulationbased} and the parameterization of cross-sections calculated for sensitivity analyses \\cite{Carrazza2021} to the analysis of experimental data \\cite{Stakia2021,Mikuni2021,Nachman2021}. 
Most of these applications have replaced non-ML methods and techniques and can process experimental data very efficiently. These methods and techniques have even been deployed in FPGAs \\cite{Iiyama2021} and integrated into the data acquisition systems for fast selection of information. \n\nMore recently, applications at the intersection of quantum computing and machine learning have been explored to analyze experimental data. The hope is to adapt or develop algorithms that can efficiently process HEP data. One example is the use of algorithms to construct physics objects amenable to analysis from the signals generated in a particle detector--i.e., the clustering of detector hits into so-called tracks for reconstructing a particle's trajectory \\cite{magano2021quantum, Zlokapa2021, Bapst2019, Shapoval2019, das2020track, Tysz2021, 2020Cenk, Quiroz2021} or tracks and calorimeter energy depositions into jets \\cite{Wei2020, pires2020adiabatic, pires2021digital}. Furthermore, quantum-assisted algorithms have been explored in unsupervised learning settings to classify jets according to their origin (b-tagging) \\cite{gianelle2022quantum}, for generative tasks \\cite{bravoprieto2021stylebased, Chang2021, perez2021determining}, and for the selection of events or interactions along with background suppression \\cite{Mott2017, qamlz, kim2021leveraging, Peters2021, caldeira2020restricted, Belis2021, Terashi2021, AlexiadesArmenakas2021, Bargassa2021, matchev2020quantum, Blance2021, Blance2021a, 2021Wu, chen2020quantum, Heredge2021}. In particular, generative models have been explored extensively as an alternative for the simulation of particle interactions and the detector's response to such interactions \\cite{Chang2021, delgadoHamilton2022unsupervised}.\n\nLeveraging quantum computers for machine learning in HEP has several advantages. 
In general, quantum computers can operate more efficiently in high-dimensional tensor product spaces because of quantum superposition, which is vital for analyzing large-scale and complex HEP data patterns. Moreover, they can leverage quantum tunneling to train machine learning models more efficiently \\cite{date2021qubo,date2021adiabatic,arthur2021balanced,date2019classical}. In some cases, quantum-assisted techniques have been shown to train on less data than their classical counterparts. This might be particularly useful in the context of data augmentation in simulation tasks, where using a generative model can reduce the resources allocated for traditional ML methods.\n\nNonetheless, even though the applications of QML to HEP data are many, there are several challenges associated with the trainability and deployment of these models on NISQ devices. For example, the size and complexity of HEP datasets require an efficient encoding into quantum states, resulting in a large overhead in pre-processing. Many training workflows serially encode training data, and parallelization requires either multiple devices or large quantum devices that can run multiple circuits with no cross-talk or interference between circuits. Furthermore, there is little understanding of how to tailor ansatz\/circuit design for specific applications, and a need for error correction\/mitigation techniques that can be scalably constructed and incorporated into variational training workflows. \n\\subsection{Software and Compilers}\n\\label{sec:sah}\n\n\\subsubsection{Compiling}\n\nImprovements in quantum computers, and their subsequent enabling of increasingly nontrivial simulations of fermionic systems, have pushed the development of novel encodings of fermionic degrees of freedom. 
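For reference, the most common such encoding, the Jordan--Wigner mapping, sends the annihilation and creation operators of fermionic mode $j$ to Pauli strings (up to convention-dependent signs):
\\begin{equation*}
a_j \\mapsto \\Bigl(\\prod_{k<j} Z_k\\Bigr)\\,\\frac{X_j + i Y_j}{2}, \\qquad
a_j^{\\dagger} \\mapsto \\Bigl(\\prod_{k<j} Z_k\\Bigr)\\,\\frac{X_j - i Y_j}{2}.
\\end{equation*}
The string of $Z_k$ operators acting on all preceding modes is precisely what renders the resulting spin operators non-local.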
The commonly used mappings from these degrees of freedom to those of spins (i.e., the Jordan-Wigner, Bravyi-Kitaev, or parity mappings) in general lead to largely non-local spin Hamiltonians and unnecessarily large qubit numbers per fermionic mode mapped. These properties drastically compound the circuit and measurement complexity of simulations. To counter these costs, several works have shown both general and system-specific mappings that outperform Jordan-Wigner and others in terms of the locality of the spin Hamiltonians as well as the number of fermionic modes per qubit \\cite{brayvi2002,derby2021}. \n\nAlthough these economic mappings are useful, their utility in the fault-tolerant era may be overshadowed by instead using \\textit{logical} fermionic mappings \\cite{setia2019,landahl2021,li2018,jiang2019}. These avoid the overhead associated with a fermionic-to-qubit mapping followed by the cost of error-correcting codes by directly mapping the underlying fermionic symmetries to error-correcting code spaces. While the benefit is obvious in terms of qubit and gate overhead, such embeddings require couching fundamental fermionic operations in terms of fault-tolerant operations, which will likely encourage simple, more physically motivated implementations of algorithms and compiling of code. \n\nExplorations of such mappings, and their relative quantum resource efficiency for gauge-invariant HEP problems of interest, remain an open, but important, first step. There have been several successful HEP problem implementations, generally focusing on how to truncate gauge degrees of freedom while simultaneously satisfying gauge constraints efficiently \\cite{klco2021,Davoudi2021}. This has paved the way, similarly to fermionic mappings, for methods of how to embed HEP simulations natively in error detection and correction schemes \\cite{rajput2021,stryker2019}. These methods have so far proven to be very specific to the target models and their symmetries. 
Thus the endeavor to find general, optimal mappings will be essential in the coming years for scaling up near-term HEP simulations into the fault-tolerant era.\n\n\\subsection{Experimental Testbeds}\n\\label{sec:tfh}\nThe current state of the art for testbed devices consists of quantum processors that contain dozens of qubits. These machines are capable of circuit depths of hundreds of gates, and up to thousands of gates in the most advanced systems. By contrast, algorithms for simulating HEP problems of interest have so far been implemented with considerably fewer qubits and shallower circuits. \n\n\\par\nGiven the current level of noise and imperfection in practical realizations of quantum computing devices, efficient numerical simulators capable of running on large-scale heterogeneous high-performance computing (HPC) platforms will continue to play an important role in the characterization, verification, and validation of quantum devices, as well as in the analysis of the quantum algorithms intended for those devices. Probing the potential for quantum advantage will require extensive research and development in classical algorithms for quantum computing simulations in order to map out the true boundary where quantum computation outperforms classical. \n\\par \nAn ability to utilize existing and future classical hardware efficiently is another dimension of optimization on the classical computing side. Furthermore, the HEP-related quantum simulators will require generalization to the qudit-based representation of quantum devices and algorithms. In order to better understand and cope with noise and decoherence, analog-level simulations of open quantum device dynamics will become vital for their performance optimization. These simulations will require significant HPC resources coupled with highly optimized computer codes. 
Yet another challenge is related to loading classical (or quantum) data to quantum devices--a problem that can also benefit from advanced quantum-inspired classical algorithms coupled to large-scale HPC resources. \n\n\\subsection{Instrument and Data Networks}\n\\label{sec:infra}\n\nHistorically, HEP computing has been performed on sizeable, purpose-built computing systems. These began as single-site computing facilities but have evolved into the distributed computing grids that we use today. Integration with these heterogeneous computing resources is therefore needed to accelerate the adoption of quantum computing technologies within the HEP community. For example, the Fermilab HEPCloud is a portal to diverse computing resources on local clusters, campus farms, grid resources, commercial clouds, HPC centers, and quantum computing resources.\n\n\\section{\\label{sec:prior} Priorities}\n\\subsection{Algorithms and Applications}\n\\label{sec:ana_prior}\nWith qubit counts growing into the hundreds or thousands while devices remain in the NISQ era, quantum algorithm research aimed at improving on, or overtaking, classical methods for HEP applications is an important research focus for the next ten years. Quantum computing for HEP provides a great challenge and important intellectual stimulus for overcoming practical quantum information challenges with big data. \nIn addition, quantum computing for simulating lattice gauge theories provides both motivation and a framework for interdisciplinary research towards developing special-purpose digital and analog quantum simulators, and ultimately scalable universal quantum computers.\n\nAnother potential area that may yield quantum advantage involves processing the large data sets resulting from detected events.\nFor instance, simulations and on-hardware implementations for the unfolding problem can exploit both quantum generative modeling and singular value decomposition. 
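To fix ideas about the SVD route, the toy sketch below unfolds a smeared spectrum by inverting a detector response matrix through a rank-limited SVD. The response matrix, bin contents, and truncation rank are illustrative assumptions, and the computation is entirely classical; it is shown only to make the unfolding problem concrete.

```python
import numpy as np

def svd_unfold(response, measured, rank):
    """Toy unfolding: invert the detector response via a rank-limited SVD.
    Discarding small singular values regularizes the inversion, which
    would otherwise amplify statistical fluctuations in the data."""
    u, s, vt = np.linalg.svd(response)
    inv_s = np.zeros_like(s)
    inv_s[:rank] = 1.0 / s[:rank]  # keep only the largest singular values
    return vt.T @ (inv_s * (u.T @ measured))

# Toy response: each true bin leaks some events into its neighbours.
R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.1, 0.8]])
truth = np.array([100.0, 50.0, 25.0])
measured = R @ truth
unfolded = svd_unfold(R, measured, rank=3)  # full rank recovers the truth
```

In a realistic setting the truncation rank trades bias against noise amplification; the quantum variants cited above aim to perform the generative or decomposition step on quantum hardware.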
\nRecent advances have also enabled light-front encoding of relativistic fields and subsequent simulation with variational methods~\\cite{kreshchuk_quantum_2020,kreshchuk_light-front_2021}. Encoding techniques allow large simulations to be mapped onto relatively few qubits, e.g., Lie-algebraic methods that reduce simulation complexity through gauge-invariant and symmetry sub-sector encodings~\\cite{klco_quantum-classical_2018}, while computing the quantum gradient on a Riemannian manifold allows for increased circuit depths once complexity reductions have been maximized. Approaches beyond the current hybrid quantum-classical paradigm, which combine efficient quantum algorithms with classical distributed algorithms, may also benefit HEP applications.\n\n\\subsubsection{Variational Algorithms}\nOne advantage of using quantum models for statistical learning tasks is that with $n$ qubits, one can build a parameterized model that can efficiently prepare arbitrary distributions over $2^n$ states. The development of QML models for HEP applications can provide insight into how data can be efficiently transformed in high-dimensional probability spaces. The challenges for these approaches to quantum computing can be organized into two areas: designing circuit ansatz models that can efficiently transform data into high-dimensional quantum Hilbert spaces, and building scalable training methods that can optimize these models in noisy landscapes. \n\nMany state-of-the-art quantum algorithms in the HEP application space that can be deployed on near-term hardware are variational algorithms: these are hybrid workflows that utilize quantum and classical processors to train parameterized quantum models. Training a variational model is a non-convex optimization problem, and the training task can be NP-hard \\cite{bittel2021training}. 
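A minimal sketch of such a hybrid workflow is shown below, with the device-estimated expectation value replaced by a classical stand-in function so the loop is self-contained. The cost function, step size, and update rule are illustrative assumptions, not a prescription from the literature.

```python
import numpy as np

def expval(theta):
    """Stand-in for the quantum step: the expectation value <Z> after
    preparing RY(theta)|0>. On hardware this would be estimated from
    repeated measurement shots rather than computed exactly."""
    return np.cos(theta)

def train(theta=2.0, lr=0.2, eps=1e-4, steps=200):
    """Classical outer loop: finite-difference gradient descent on the
    device-estimated cost, here minimizing <Z>."""
    for _ in range(steps):
        grad = (expval(theta + eps) - expval(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta, expval(theta)

theta_opt, cost = train()
```

Each loop iteration alternates a "quantum" evaluation with a classical parameter update; on real hardware the two gradient evaluations per step become separate batched circuit executions, which is why queue and batching policies matter for training throughput.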
In classical machine learning, gradient-based training methods can be scaled to large-scale applications in non-convex and convex optimization, and efficient training of deep learning models has been facilitated through the use of backpropagation. Analogous methods for variational workflows have been developed \\cite{verdon2018universal,beer2020training}, but have not yet been demonstrated on hardware. \n\nCurrently, gradient-based and gradient-free optimization are the leading approaches to training. Gradient-based training is predicted to improve convergence \\cite{napp2021gradient}, yet developing gradient-based optimization routines that can effectively incorporate queue-based access to quantum hardware should be a priority for near-term research. As the number of parameters in a circuit and the number of qubits in the register increase, the experiments required to execute one step of an optimization method must be scheduled effectively. Hardware access which allows for job batching (where a set of experiments sent by a single user are executed and results are returned as a single group) can facilitate efficient training. With batched jobs, it may be possible to execute all circuits needed for one optimization step, whether for evaluating the loss-function gradient (e.g. for gradient descent \\cite{liu2018differentiable,mitarai2018quantum,hamilton2019generative}), for executing a direct search (e.g. for particle swarm \\cite{zhu2019training} or coordinate descent \\cite{delgadoHamilton2022unsupervised}), or for approximating the gradient (e.g. using finite differences or SPSA \\cite{spall1992multivariate}). Additionally, a larger number of shots is needed to efficiently sample from $n$-qubit states. \n\n\\subsubsection{Noisy Algorithms}\nThe challenges of effective state preparation and of circuit training in noisy landscapes are not mutually exclusive. 
One observed effect of hardware noise is a flattening of the loss landscape and the appearance of ``barren plateaus'' \\cite{mcclean2018barren,wang2021noise}. In a flattened landscape, the efficacy of gradient-based training will be reduced. However, the impact of barren plateaus can be mitigated with circuit ansatz and cost function design \\cite{cerezo2021cost,pesah2021absence,uvarov2021barren}.\n\nA diverse set of error mitigation methods has been developed for short-depth circuits \\cite{temme2017error}. But highly expressible variational models may not be short-depth, and variational training is not guaranteed to prepare well-verified states. An alternate approach to error mitigation is to use matrix-based methods where sparse linear filters are used to reduce spurious counts in prepared states. Matrix-based error mitigation methods can be incorporated into variational training methods as a data post-processing step, at the added cost of preparing the linear filter. \n\n\\subsubsection{Machine Learning Algorithms}\n\nThe known challenges for quantum machine learning (QML) in the HEP application space can be organized into three areas: embedding data into quantum Hilbert spaces, training quantum models in noisy landscapes, and working with quantum data. Currently, the capabilities in classical distributed computing make it unlikely that a quantum advantage will be observed with classical data, or classical representations of quantum data, using QML models deployed on near-term hardware with $<100$ qubits. However, the development of QML models for HEP applications can provide insight into how data can be efficiently transformed in high-dimensional probability spaces. \n\nThese three challenges are not mutually exclusive. For example, the challenges of training quantum models are encountered irrespective of the data source (classical data, classical representation of quantum data, or quantum data). 
The challenges of embedding data manifest, for example, in finding low-dimensional representations of classical data in qubit registers, or in the high-fidelity transmission of quantum data from sensors to the qubits used for QML. \nIn addition to these, it is important in QML to explore purely quantum approaches for both training and inference, i.e., approaches that are not hybrid in nature (such as variational approaches).\nWorking with NISQ-era machines in the near term, it is also important to explore quantum error-correction and error-mitigation subroutines specific to QML circuits used in HEP.\nFrom a logistical point of view, accessing quantum computers requires better scheduling policies and remote access protocols.\nLastly, today's quantum computers are noisy and error-prone; this severely limits our ability to work with large-scale data reliably.\nIn this regard, it is important to build fault-tolerant quantum computers that are also scalable.\n\n\\subsection{Software and Compilers}\n\nAs discussed in Sec.~\\ref{sec:ana} and Sec.~\\ref{sec:ana_prior}, hybrid quantum algorithms, such as those based on variational principles, have emerged as promising candidates for quantum computational advantage in HEP areas, especially in the Noisy Intermediate-Scale Quantum (NISQ) era~\\cite{preskill_nisq}. The iterative nature of these algorithms necessitates a tight integration of quantum hardware with classical computing resources. For instance, the execution time of quantum circuits that involve a variational parameter loop has been shown to be dominated (more than 90\\%~\\cite{ibm_clops}) by classical computing procedures for compilation, control, and data transfer. 
As quantum computing capabilities mature, software infrastructures for compilation and control will necessarily need to improve in order to utilize the full computing power of heterogeneous quantum-classical resources for handling practical applications.\n\nWe envision the need for system-level and hardware-agnostic compiler toolchains akin to those we have today for classical computing: extensible, modular systems with unified intermediate representations that enable a wide array of optimization and code-generation techniques for multiple source languages or target architectures \\cite{McCaskeyICRC2018}. The most important aspect of any compilation toolchain design is the Intermediate Representation (IR), which is how code is represented in the compiler. A standardized, robust, and forward-looking quantum-classical IR is, therefore, an essential step towards ensuring full interoperability within the quantum software ecosystem, promoting contributions from broad communities, and enabling the optimization and transformation of quantum programs that we discuss in more detail below.\n\nFor instance, multiple community-led efforts~\\cite{qedc_web, qir_alliance_web} in the field have coalesced around the idea of leveraging existing classical compiler infrastructures, like LLVM, for quantum computing. This approach ensures low-level coupling of quantum computation with classical computing resources and also supports integration with classical languages and tools readily available in those compiler toolchains. For example, the Quantum Intermediate Representation (QIR) Alliance~\\cite{qir_alliance_web} is an open-source community within the Linux Foundation focusing on enabling an LLVM-based IR specification for quantum computation (QIR), as well as developing tooling around this unified representation. 
Oak Ridge National Laboratory is a founding member of this organization along with several industry players, including Microsoft, Quantinuum, Rigetti, and Quantum Circuits Inc. Importantly, by leveraging LLVM for quantum compilation we can also benefit from its powerful tools, especially the Multi-Level Intermediate Representation (MLIR)~\\cite{lattner_mlir} library, which is designed for heterogeneous hardware and domain-specific languages analogous to the quantum-classical computing paradigm. In this regard, we have successfully demonstrated a prototype compiler toolchain, QCOR~\\cite{qcor}, capable of compiling the novel OpenQASM version 3~\\cite{openqasm3} quantum programming language to LLVM IR adhering to the QIR specification, adopting the multi-stage, progressive-lowering approach of MLIR. This compilation technology is specifically relevant to many HEP application use cases whereby the algorithmic procedure is best captured by high-level domain-specific descriptions, such as analog and digital quantum simulation. \n\nAn essential feature of compiler toolchains is the ability to apply transformations on the IR to improve execution quality, e.g., the code's runtime or the computing resources required, while preserving the semantics of the input program. In compiler technology, such transformations are often called \\emph{passes} since they can be assembled into a pipeline in which each pass processes the IR and applies its transformation rule.\n\nIn the NISQ era of quantum computing, compiler passes that implement circuit simplification and efficient hardware topology mapping are indispensable to any quantum software stack. A broad variety of techniques has been developed for quantum circuit simplification at the gate level, such as those based on ZX-calculus~\\cite{van_zx, zx_opt}, template-based~\\cite{template_opt}, or peephole~\\cite{nam2018automated} optimizations. 
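As a toy illustration of a peephole pass, the sketch below scans a gate list and cancels adjacent identical self-inverse gates. The gate representation is a hypothetical minimal one, not any particular compiler's IR.

```python
def peephole_cancel(circuit, self_inverse=("H", "X", "Z", "CNOT")):
    """Toy peephole pass: delete adjacent identical self-inverse gates
    acting on the same qubits, since U followed by U equals the identity.
    Gates are (name, qubits) tuples in a hypothetical minimal IR."""
    out = []
    for gate in circuit:
        if out and out[-1] == gate and gate[0] in self_inverse:
            out.pop()  # cancel the pair; may expose a new cancellation
        else:
            out.append(gate)
    return out
```

Because cancelled pairs are popped from a stack, nested patterns such as H-X-X-H collapse in a single sweep. Production passes go well beyond this, commuting gates past one another to expose cancellations and matching multi-gate templates, which is what the ZX-calculus and template-based methods above generalize.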
Similarly, various circuit-to-hardware placement methods have been developed to minimize the number of qubit swapping operations~\\cite{staq, sabre} required or to maximize circuit execution fidelity by mapping the circuit to best-performing qubits~\\cite{triq}. \n\nFor dynamical Hamiltonian simulation algorithms that are particularly relevant to HEP applications, techniques such as advanced term splitting~\\cite{Hatano2005} and ordering~\\cite{childs2019faster, tranter2019ordering} or algebraic circuit compression~\\cite{camps2021algebraic} can also be incorporated as optimization passes targeting the high-level description of the quantum algorithms, where the algorithmic semantics of the program are expressed. In this respect, a quantum compiler infrastructure that supports a hierarchy of abstractions, such as the MLIR, is highly desirable since it can retain high-level algorithmic information in the IR, about which compiler optimizers can reason \\cite{nguyen2021quantum}. \n\nAs quantum computers evolve into fault-tolerant machines, the associated software infrastructure will also need to be upgraded in order to handle quantum error correction protocols \\cite{humble2021quantum}. Specifically, we envision that quantum programs would always be expressed in terms of logical operations, with the corresponding physical-qubit-level instructions, incorporating a target-dependent quantum error correction code, generated by the compiler software infrastructure \\cite{britt2017high}. In this regard, quantum compilers leveraging classical computing technology, such as LLVM, would be best suited to performing this logical-physical transformation thanks to their ability to incorporate a variety of classical computing resources required by the error decoding and correction protocol, e.g., minimum weight perfect matching~\\cite{dennis2002topological} or maximum likelihood decoding~\\cite{bravyi_mld} for the surface code~\\cite{fowler_surface_code}. 
Furthermore, as quantum programming languages move towards this fault-tolerant paradigm, whereby qubit measurement-based control flow is intertwined with that of conventional computing for classical data, we can leverage the vast IR optimization capability of the LLVM infrastructure to handle common classical optimization passes like function inlining or loop unrolling. While these optimization passes are intrinsically classical, they help unlock many quantum-related simplification patterns which would otherwise remain opaque to the quantum optimizer due to the complex nature of the IR tree with control flow.\n\\subsubsection{Encodings}\nThe HEP-specific encodings discussed in section II are an important first step to solving gauge-invariant HEP problems of interest in a resource-efficient way. As noted there, the successful implementations to date \\cite{klco2021,Davoudi2021} and the associated schemes for embedding HEP simulations natively in error detection and correction codes \\cite{rajput2021,stryker2019} have so far proven to be very specific to the target models and their symmetries, so the endeavor to find general, optimal mappings will be essential in the coming years for scaling up near-term HEP simulations into the fault-tolerant era. 
Comparing these methods to general fault-tolerant encodings for universal QC will yield insight into the efficacy of building custom machines, dedicated to HEP problems, that implement custom encodings in hardware, rather than reconfigurable devices capable of universal operations.\n\n\\subsubsection{Numerical Simulators}\nSince numerical simulators will continue to play a vital role in quantum device verification and validation as well as in quantum algorithm analysis \\cite{arute2019quantum,villalonga2020establishing}, the relevant research priorities are concerned with scaling numerical simulation techniques towards larger NISQ devices as well as improving their ability to faithfully reproduce the behavior of actual quantum hardware via more accurate noise models, in particular proper modeling of multi-qubit cross-talk \\cite{mccaskey2018validating}. In turn, devising more accurate noise models will require further research and development in scalable approaches for modeling open quantum system dynamics coupled to either a Markovian or non-Markovian bath. Further research related to simulations of multi-level qudit systems at their native Hamiltonian level will be necessary for assessing their computational power as compared to qubit-based quantum devices. Pulse-level simulations, optimal quantum control, and quantum gate implementation optimization will form another direction of simulation-heavy research and development efforts. More efficient numerical tensor-algebraic techniques coupled with advances in classical machine learning will be required for scalable characterization of larger NISQ devices.
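As a minimal sketch of the kind of noise model such simulators incorporate (a generic single-qubit depolarizing channel from the standard open-systems toolbox; the 5\% error rate per step is an arbitrary assumption, and realistic multi-qubit cross-talk models are far richer):

```python
import numpy as np

def depolarize(rho, p):
    """Single-qubit depolarizing channel: rho -> (1-p) rho + p I/2."""
    return (1 - p) * rho + p * np.eye(2) / 2

# start from the pure state |+><+|
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

purities = [np.real(np.trace(rho @ rho))]
for _ in range(10):                # noise accumulated over 10 "gate" steps
    rho = depolarize(rho, 0.05)
    purities.append(np.real(np.trace(rho @ rho)))

assert abs(purities[0] - 1.0) < 1e-12          # pure initial state
assert abs(np.trace(rho).real - 1.0) < 1e-12   # trace is preserved
```

The purity $\mathrm{Tr}\,\rho^2$ decays monotonically towards the fully mixed value $1/2$, which is the qualitative behavior a density-matrix simulator must reproduce when validating against hardware.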
On the engineering side, all these newly devised numerical simulation techniques will have to be implemented in an efficient manner in order to fully exploit the computational power of the Exascale HPC platforms which are becoming widely available worldwide \\cite{britt2017quantum,doi.org\/10.1049\/qtc2.12024,nguyen2021scalable,humble2021quantum}.\n\n\\subsection{Experimental Hardware}\n\\subsubsection{Testbeds} \nIt is in the interest of the HEP community to build specialized devices to address the above application needs. For example, multiple platforms offer analog-level operations to users (i.e., pulse controls). In many scenarios, access to lower-level, analog operations allows for higher accuracy in quantum simulation problems. Further, hybrid analog-digital simulations have shown increased accuracy in field theory problems for scattering calculations~\\cite{arrazola_digital-analog_2016}.\nCurrently, HEP programs, especially those in the QuantISED program in the US, utilize testbeds around the world, including the DOE-supported testbeds consisting of trapped-ion and superconducting devices, to solve prototypical problems of interest in the field. For example, the information scrambling problem~\\cite{blok_quantum_2021} has been demonstrated on superconducting hardware \\cite{Kreshchuk2021}, while variational methods for solving problems using the light-field encoding have been mapped onto trapped-ion platforms \\cite{Echevarria2021}. Uncovering the most useful aspects of the testbeds in these demonstrations will help us discover the most important ingredients for testbeds that support HEP goals. In particular, the \"close to the metal\" analog controls were a defining factor in these demonstrations.\n\nIn the NISQ era, fully analog systems will also be useful for implementing complex Hamiltonians for which digital, pre-fault-tolerant platforms do not support sufficient circuit depth.
Hamiltonians for problems of interest can be mapped to trapped-ion systems' motional mode degrees of freedom, or to the spin-spin coupling present in neutral atom, quantum dot, or other platforms which support both analog spin-spin coupling and tunable transverse fields. In the future, when digital platforms move closer to fault tolerance, direct fermionic (HEP problem-specific) encodings should be supported in custom testbeds. These encodings ensure that the resources required in a given testbed to represent a specific HEP problem are far fewer than those required for general-purpose, universal quantum simulation. A custom, fault-tolerant fermionic code may not be suitable to simulate boson sampling, for example, but it would be ideal for simulating scattering in fermionic field theories.\n\n\\subsubsection{Analog Simulators}\nWhile digital quantum computers have yet to scale to the extent needed for fault-tolerant quantum computations, \\textit{analog} quantum simulators are already pushing the state of the art, for example probing topological spin liquids on a programmable device at a scale not previously observed on neutral atom-based simulators \\cite{semeghini_probing_2021,scholl_quantum_2021,bluvstein_controlling_2021}.\nAnalog neutral-atom platforms supplement the photonic, superconducting, trapped-ion, and many other analog modalities which currently exist. There has also been recent research interest in exploring digital-analog computational modalities (in which hard operations are done in an analog fashion and digital operations play a complementary supporting role, e.g. providing local basis transformations) and their application to HEP (and related condensed-matter) models as discussed in the previous section. \nDespite this promise, developing current analog methodologies to solve substantially more complex HEP domain problems is a research avenue in its own right.
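The combination of analog spin-spin coupling and a tunable transverse field mentioned above corresponds to the transverse-field Ising Hamiltonian; a minimal exact-diagonalization sketch (with arbitrary toy couplings, purely for illustration) is:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def op_on(site_ops, n):
    """Kron together single-site operators (identity on all other sites)."""
    return reduce(np.kron, [site_ops.get(i, I2) for i in range(n)])

def tfim_hamiltonian(n, J, h):
    """H = -J sum_i Z_i Z_{i+1} - h sum_i X_i on an open chain of n spins."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op_on({i: Z, i + 1: Z}, n)
    for i in range(n):
        H -= h * op_on({i: X}, n)
    return H

def ground_energy(n, J, h):
    return np.linalg.eigvalsh(tfim_hamiltonian(n, J, h))[0]

e0 = ground_energy(3, 1.0, 0.0)   # classical ferromagnet: E0 = -2 (two bonds)
e1 = ground_energy(3, 1.0, 1.0)   # the transverse field lowers the energy
assert abs(e0 + 2.0) < 1e-10
assert e1 < e0
```

On an analog platform the same ground-state physics is reached by engineering the couplings directly rather than by diagonalizing a $2^n$-dimensional matrix, which is precisely why such simulators can scale beyond classical reach.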
\n\n\\subsubsection{HEP-dedicated Testbeds}\n\nFor HEP, an open and relevant question is whether quantum computers can efficiently simulate quantum field theories (QFT). QFT encompasses all fundamental interactions, possibly excluding gravity. The simulation of QFT has HEP applications related to event generators for quantum chromodynamics (QCD) simulations of nuclear matter. Outside HEP, it might find applications in the simulation of strongly coupled theories and the characterization of the computational complexity of quantum states.\n\nNonetheless, the simulation of quantum field theories is not a trivial task due to the sign problem. In physics, the sign problem is encountered in calculations of the properties of a quantum mechanical system with a large number of strongly interacting fermions or QFTs involving a non-zero density of strongly interacting fermions. Since the particles are strongly interacting, perturbation theory is inapplicable, and one is forced to use brute-force numerical methods. The sign problem is one of the major unsolved problems in the physics of many-particle systems, limiting progress in nuclear physics, preventing the ab initio calculation of properties of nuclear matter, and limiting our understanding of nuclei and neutron stars. In quantum field theory, it prevents the use of lattice QCD \\cite{deforcrand2010simulating} to predict the phases and properties of quark matter \\cite{Philipsen2012}.\n\nAn exciting and promising alternative to the simulation of QFT is based on the idea of adiabatic quantum computation (AQC) \\cite{farhi2000quantum, Albash2018}. In general, AQC is as powerful as universal quantum computation when a non-stoquastic Hamiltonian is used \\cite{Marvian2019OnTC}, which is the case of Hamiltonians dealing with the sign problem. 
One of the critical aspects of AQC is quantum annealing \\cite{Kadowaki1998}, which is a metaheuristic algorithm for solving combinatorial optimization problems by changing the parameters adiabatically or even non-adiabatically. Measurements on currently available quantum annealers are done only in the standard computational basis \\cite{Johnson2011}, and the implemented Hamiltonian is stoquastic. \n\nThus, a quantum testbed that fully realizes or exploits the power of AQC is needed to show that a quantum speedup can be achieved when non-stoquastic Hamiltonians relevant to HEP are used. Some models undergoing first-order and second-order phase transitions with non-stoquastic Hamiltonians have been proposed and shown to exhibit a quantum speedup \\cite{Ikeda2020}. This natural and efficient way to simulate QFT on a quantum testbed would eliminate the need to reformulate HEP-relevant Hamiltonians (such as QCD) to fit a finite-dimensional Hilbert space amenable to circuit implementations.\n\nA successful implementation of AQC would provide QCD studies with an alternative to collider experiments, where large statistics are needed to understand rare processes. If we can simulate the QCD Hamiltonian, we can study some observables in the context of information theory, for example by understanding the probabilistic parton distributions in terms of entanglement entropy \\cite{Kuvshinov:2017ncj}. In other words, in a collider experiment we have access to a probability distribution by collapsing the wave function to one of its states, whereas with a quantum-mechanical simulation we have access to the full density matrix.\n\n\n\\subsection{Infrastructures, Platforms and Data Networks}\n\\subsubsection{Software Development Frameworks, Benchmarks, and Optimisation}\nThe HEP community has developed over the years a broad range of algorithms and methods for different applications, from optimization to simulation and machine learning.
Initial work on developing and assessing quantum equivalents of common algorithms has produced valuable insight \\cite{Guan_2021}. However, the performance of NISQ systems is still limited and the full advantage is expected only once reasonably fault-tolerant devices become available. To fully exploit the potential of quantum algorithms, software and hardware systems should be designed as part of co-development efforts across the HEP and quantum computing science and engineering communities. Experience in the classical domain shows that innovation in hardware technology, readily available to software and algorithm experts, can boost the development of new software stacks and algorithms that, in turn, produce critical feedback to hardware development in a virtuous cycle. Examples include hardware-aware compiler optimization, design of application-aware hardware architectures (e.g. systems dedicated to accelerating quantum simulations) and so on.\n\\par \nA fundamental component of measuring the progress and impact of quantum algorithms is the establishment of open benchmarking frameworks and platforms where different combinations of algorithms, software and hardware architectures can be tested, and the results published as a means of establishing baselines for further development. Such benchmarks can form the basis for the development of common software tools and interfaces with an increasing level of abstraction and platform neutrality, as is typical today for classical computing resources. \n\\par \nAs an example, in 2021, CERN launched the ABAQUS project with the objective of building an open, easily accessible platform for researchers to become familiar with different quantum computing architectures and tools, produce or get access to benchmarks over different metrics, and build a community knowledge base on the applicability of quantum algorithms to HEP workloads.\nThere are specific aspects that need to be investigated and studied.
For example, some classes of circuits admit easier simulations that are only polynomially hard, such as tensor networks with limited entanglement, or circuits with a limited number of non-Clifford gates, whose simulation cost scales polynomially in the number of Clifford gates but exponentially in the number of non-Clifford gates. It needs to be understood how to adapt and employ these simplifications. \n\\subsubsection{Heterogeneous Computing Networks}\nWhen dealing with the quantum computing paradigm, the role of infrastructure deserves particular attention. Quantum computing should not over time require separate computing infrastructures, application platforms, or languages, but must be integrated into the existing cloud-based physical and software infrastructure of today and enable the deployment of hybrid, heterogeneous workloads where the quantum processing units (QPUs) are part of a broad range of accelerators. Presently, classical resources can reliably store, manage and process huge quantities of data, while quantum devices are expected to efficiently explore high-dimensional spaces to extract insights and identify optimal answers.\n\\par\nThe development and deployment of such hybrid infrastructures is currently happening across a series of steps. As current NISQ resources are limited in availability and efficiency, experience is built by simulating quantum algorithms and building noise mitigation strategies on testbeds of classical resources of increasing scale. Since memory requirements double for every qubit added in the simulation and quickly hit an exponential wall, simulation of large-scale systems can be performed by means of distributed and parallelised computation across HPC clusters of CPUs\/GPUs.
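The statement that memory doubles for every added qubit can be made concrete with a back-of-the-envelope estimate (assuming a full statevector of double-precision complex amplitudes, 16 bytes each):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a full statevector: 2**n complex amplitudes,
    16 bytes each for double-precision complex numbers."""
    return (2 ** n_qubits) * bytes_per_amplitude

# one extra qubit doubles the requirement
assert statevector_bytes(31) == 2 * statevector_bytes(30)

gib = statevector_bytes(34) / 2**30
print(f"34 qubits: {gib:.0f} GiB")   # prints: 34 qubits: 256 GiB
```

By about 50 qubits the requirement reaches the petabyte scale, beyond the memory of any single existing HPC system, which is why distributed statevector partitioning and the tensor-network shortcuts above become necessary.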
Novel architectural frameworks to integrate quantum-capability APIs into cloud architectures are also emerging \\cite{Grossi_2021}.\n\\par\nAt the same time, it is critical to start building paths to architecture optimization of hybrid resources, workload splitting and hybrid algorithms, scheduling, data flows, and circuit transpilation processes. Looking at hybrid models such as variational classifiers, reinforcement-learning optimisation algorithms, and quantum-classical GAN models for simulation, the infrastructure is evolving rapidly to make the classical-quantum interaction more efficient, defining containerised modules to be tested in distributed HPC environments and loaded on real quantum computers. This will need to be achieved by integrating HPC and quantum computing strategies as part of national and international initiatives.\n\\par \nOne example of how such integration may be addressed is the Quantum Computing User Program (QCUP) at the Oak Ridge Leadership Computing Facility, a US Department of Energy user facility operated by Oak Ridge National Laboratory \\cite{qcup}. QCUP offers access to a diversity of quantum computing systems following merit-based review of user proposals for demonstrating scientific applications. In addition, these users are capable of integrating their quantum computing workflows alongside conventional HPC systems. As quantum computing technology matures, this setting will offer a natural means by which to integrate quantum computing and conventional HPC systems for purposes of advanced computation.\n\\subsubsection{Quantum Data Networks}\nA particular aspect of future Quantum\/HPC infrastructures is the role of quantum data networks. Today quantum computing and quantum communication are usually handled as related but separate fields of application of quantum technologies.
However, quantum infrastructures, time and frequency distribution networks, and the development of quantum data technologies will play a fundamental and transformative role in physics experiments at different energy regimes (for example standard reference signals for anti-matter and low-energy experiments, such as ASACUSA and ALPHA, using high-precision laser spectroscopy \\cite{Amsler:2748998}).\n\\par\nMany national and international initiatives are being deployed to build future quantum networks, and the HEP community must remain an active part of this development, as it was at the beginning of the century in the development of distributed grid and cloud networks.\n\\subsubsection{International Collaborations and Co-Development}\nThe HEP community is distributed over the whole globe and collaboration across institutes has always played a fundamental role in pushing the limits of science and technology. For example, the development and operation of the Worldwide LHC Computing Grid (WLCG) was, and still is, instrumental for joint progress, employing brain power wherever it is located in the world. The same approach needs to be taken in the development of future quantum infrastructures. Synergies between the Snowmass exercise and equivalent discussions in the European Particle Physics Strategy and other similar initiatives will be critical. Joint R\\&D projects, testbeds, progressive development of common frameworks and tools across US Quantum Information Science institutes, the CERN Quantum Technology Initiative \\cite{DiMeglio:2789149}, DESY QUANTUM, and many other national and international initiatives in the EU and other countries will accelerate the development and adoption of quantum computing across the HEP community.
In the same way, attention should be given to co-development not only across HEP, but also between HEP institutes, industry, and other scientific disciplines, such as Astrophysics, Cosmology, Earth Observation, Climate research, Condensed Matter, Chemistry, Biology, and many more. Often very similar challenges appear in different disciplines and can be addressed with a common solution.\n\n\n\\section*{Acknowledgements}\nThis work was supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Science Center, and the Advanced Scientific Computing Research program office Accelerated Research for Quantum Computing (ARQC) program. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzaydw b/data_all_eng_slimpj/shuffled/split2/finalzzaydw new file mode 100644 index 0000000000000000000000000000000000000000..be89be7c10b9f8b8260a16d96bfa56289b4542f7 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzaydw @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe field of cosmic microwave background (CMB) anisotropies has seen \na rapid development since its first detection by the COBE satellite\nonly a few years ago. There are now several reported experimental \nresults that are detecting anisotropies\non degree angular scales (see \\cite{Scott95} and\n\\cite{Bond96} for recent reviews), \nwhich together with a few upper limits on smaller \nangular scales already give interesting limits on \ncosmological models. \nWith \nthe development of the new generation of experiments now being \nproposed, one hopes to accurately map the CMB sky \nfrom arcminute scales to several degree scales.
\nThe amount of data thus provided would \nallow for an unprecedented accuracy in the determination of cosmological \nparameters. As theoretical modelling shows \n(\\cite{Bond94,Hu95c,Jungman95,Seljak94}), \nCMB anisotropies are \nsensitive to most of the cosmological parameters and have a distinctive\nadvantage over other cosmological observations in that they probe the universe\nin the linear regime. This avoids the complications caused by physical \nprocesses in the nonlinear regime and allows the use of powerful statistical\ntechniques to search over the parameter space for the best cosmological \nmodel (see e.g. \\cite{Jungman95}). \nA large stumbling block in this program at present is the speed of \ntheoretical model calculations, which are still too slow to allow for\na rapid search over the parameter space. This limitation was partially \nremoved by the development of approximation methods (\\cite{Hu95a},b; \n\\cite{Seljak94}), \nwhich can give fast predictions of CMB anisotropy with a 10\\%\naccuracy. However, \nthese approximations are not accurate enough to exploit the complete\namount of information that will be present in future CMB observations.\nThis is especially true for some of the more extreme cosmological models, \nwhere simple approximations break down and lead to systematic \ninaccuracies in the results.\nObviously, it would be useful to have a fast method that would not \nbe based on any approximations and would lead to accurate results for \nany cosmological model. The purpose of this paper is to present a new\nmethod of CMB calculation that satisfies these requirements.\n\nTheoretical calculations of the CMB anisotropies are based on the\nlinear theory of cosmological perturbations, developed first by \nLifshitz (1946) and applied to the CMB anisotropies by \nPeebles and Yu (1970).
In this early calculation only photons and \nbaryons were included, but later workers extended the calculations to \ninclude dark matter (Bond \\& Efstathiou 1984, 1987; Vittorio \\& Silk 1984),\ncurvature (Wilson \\& Silk 1981; Sugiyama \\& Gouda 1992; \nWhite \\& Scott 1995), \ngravity waves or tensor modes (Crittenden \net al. 1993) and massive neutrinos (Bond \\& Szalay 1983; Ma \\& Bertschinger \n1995; Dodelson, Gates \\& Stebbins 1995). \nMost of these and more recent calculations (e.g. \nHoltzmann 1989; Stompor 1994; Sugiyama 1995) solve for each Fourier mode of\ntemperature anisotropy $\\Delta_T(\\vec k)$ by \nexpanding it in a Legendre series up to some desired $l_{max}$ and then \nnumerically evolving this system of equations in time from the radiation \ndominated epoch until today. Typically this means evolving a system \nof several thousand coupled differential equations in time, a slow \nprocess even for present-day computers. In addition, because \neach multipole moment is a rapidly oscillating function, one has to \nsample it densely in values of \n$k$, with a typical number of evaluations of the order of $l_{max}$.\nEven the fastest codes at present \nrequire several hours of \nCPU time for each theoretical model\n(Sugiyama 1995), \nwhile some numerically more accurate ones (e.g. \\cite{BB95})\nrequire tens or hundreds of hours.\n\nIn this paper we explore a different approach to computing CMB anisotropies,\nbased on integration of the sources over the photon\npast light cone. The method is a generalization \nof the approximate method developed by one of the authors (Seljak 1994).\nIt differs from it in being exact, in the sense that\nit can achieve arbitrary precision within the limits of linear \nperturbation theory. By rewriting the system of equations in \nintegral form one can separate the geometrical and dynamical\ncontributions to the anisotropies.
The former do not depend on the \nmodel and need to be computed only once, while the latter contain all\nthe information on the particular model and can be computed with a \nsmall system of equations. Solving for CMB anisotropies using this\nintegral form greatly reduces the required computational time. \nThe outline of the paper is as follows:\nin \\S 2 we present the basic system of equations that needs to be solved \nboth in the standard and in the integral method.\nIn \\S 3 we present in some detail a practical \nimplementation of the integral method, \nhighlighting the computational \ndifferences between it and the standard Boltzmann method.\nWe conclude in \\S 4 by discussing \npossible applications where the new method can be particularly useful.\n\n\\section{Method}\n\nIn this section we first present the standard system of equations \nthat needs to be solved for temperature\nanisotropies, which is based on solving the Boltzmann equation using a \nLegendre expansion of the photon distribution function. \nThis part follows closely \nthe existing literature (e.g. Ma \\& Bertschinger 1995, Bond 1996\nand references therein) \nand only the final results are given. We do\nnot discuss the technical details of the standard Boltzmann \nmethod, except where our approach differs significantly from it. \nIn the second\npart of the section we present the integral solution of the photon \ndistribution, \nwhich is the basis of our method. \nIn this paper we restrict the analysis to a spatially flat universe. \n\n\\subsection{Boltzmann, Einstein and Fluid Equations}\n\nThe temperature anisotropy at position $\\vec{x}$ \nin the direction $\\vec n$ \nis \ndenoted with $\\Delta_T(\\vec x, \\vec n)$. \nIn principle it depends both on the direction and \non the frequency, but because spectral distortions are only introduced\nat second order, the frequency dependence can in the lowest order\nbe integrated out.
Anisotropy $\\Delta_T(\\vec x,\\vec n)$ can\nbe expanded in terms of Fourier modes $\\Delta_T(\\vec k,\\vec n)$,\nwhich in linear perturbation theory evolve independently of \none another. Assuming perturbations are axially symmetric \naround $\\vec k$, we may further Legendre\nexpand the anisotropy in the angle \n$\\mu=\\vec k \\cdot \\vec n\/k$,\n\\begin{equation}\n\\Delta_T(\\vec k, \\vec n)=\\sum_l (2l+1)(-i)^l\\Delta_{Tl} P_l(\\mu),\n\\label{delta}\n\\end{equation}\nwhere $P_l(\\mu)$ is the Legendre polynomial of order $l$ and $\\Delta_{Tl}$ \nis the associated multipole moment.\nA similar decomposition also applies to \nthe amplitude of polarization anisotropy $\\Delta_P(\\vec{k},\\vec{n})$ \n(\\cite{BE84}, Crittenden et al. 1993,\nKosowsky 1995, Zaldarriaga \\& Harari 1995). \n\nEvolution of the temperature anisotropy is governed by the Boltzmann\nequation (\\cite{Peebles70}, \\cite{Wilson81}, \n\\cite{BE84}). Its collisionless part is given by the time component\nof the geodesic equation, which depends on the metric. Here we will \nuse the metric \nin the longitudinal gauge (\\cite{bard80}; Bertschinger 1996), \nwhich is similar to the gauge-invariant formalism (Kodama \\& Sasaki 1984) \nand gives expressions that are most similar to their \nNewtonian counterparts.\nThe choice of gauge is purely a matter of convenience\nand in some cases (e.g. isocurvature models)\nother gauge choices such as the synchronous gauge \nare computationally advantageous over the longitudinal \ngauge (see e.g. \\cite{BB95}, \\cite{Bond96}).
In the longitudinal gauge\nthe perturbations are specified with two scalar potentials $\\phi$ and\n$\\psi$ and a gauge-invariant \ntensor perturbation $h$ (we will ignore vector perturbations in this paper,\nas they most likely have a negligible contribution to CMB anisotropy).\nThe corresponding temperature and polarization \nanisotropies are denoted as $\\Delta^{(S)}_T$, $\\Delta^{(S)}_P$\nfor scalar and $\\Delta^{(T)}_T$, $\\Delta^{(T)}_P$\nfor tensor components. In linear perturbation theory the scalar \nand tensor perturbations \nevolve independently and the total power is given by the sum of the \ntwo contributions.\n\nThe collisional part of the photon Boltzmann equation is determined by the\nThomson scattering term. After angular and momentum \nintegration the Boltzmann evolution equations \nfor scalar perturbations can be written as (\\cite{BE87}),\n\\begin{eqnarray} \n\\dot\\Delta_T^{(S)} +ik\\mu \\Delta_T^{(S)} \n&=&\\dot\\phi-ik\\mu \\psi+\\dot\\kappa\\{-\\Delta_T^{(S)} +\n\\Delta_{T0}^{(S)} +i\\mu v_b +{1\\over 2}P_2(\\mu)\\Pi\n\\} \\nonumber \\\\ \n\\dot\\Delta_P^{(S)} +ik\\mu \\Delta_P^{(S)} &=& \\dot\\kappa \\{-\\Delta_P^{(S)} +\n{1\\over 2} [1-P_2(\\mu)] \\Pi\\} \\nonumber \\\\\n\\Pi&=&\\Delta_{T2}^{(S)}\n+\\Delta_{P2}^{(S)}+\n\\Delta_{P0}^{(S)}.\n\\label{Boltzmann}\n\\end{eqnarray}\nHere the derivatives are taken with respect to the conformal time $\\tau$ \nand $v_b$ is the velocity of baryons.\nDifferential optical depth for Thomson scattering is denoted as \n$\\dot{\\kappa}=an_ex_e\\sigma_T$, where $a(\\tau)$ \nis the expansion factor normalized\nto unity today, $n_e$ is the electron density, $x_e$ is the ionization \nfraction and $\\sigma_T$ is the Thomson cross section. The total optical \ndepth at time $\\tau$ is obtained by integrating $\\dot{\\kappa}$,\n$\\kappa(\\tau)=\\int_\\tau^{\\tau_0}\\dot{\\kappa}(\\tau) d\\tau$.\nA useful variable is the visibility function $g(\\tau)=\\dot{\\kappa}\n{\\rm exp}(-\\kappa)$. 
Its peak \ndefines the epoch of recombination, when the \ndominant contribution to the CMB anisotropies arises.\n\nExpanding the \ntemperature anisotropy in multipole moments \none finds the following hierarchy of coupled \ndifferential equations (\\cite{Wilson81}, \\cite{BE84}, \\cite{Ma95}),\n\\begin{eqnarray}\n \\dot{\\Delta}_{T0}^{(S)}\n &=& -k\\Delta_{T1}^{(S)} +\\dot{\\phi}\n \\,, \\nonumber\\\\\n\\dot{\\Delta}_{T1}^{(S)} &=&\n{k \\over 3}\\left[ \\Delta_{T0}^{(S)}-\n2\\Delta_{T2}^{(S)}\n+\\psi\\right] + \\dot{\\kappa} ({v_b \\over 3}-\\Delta_{T1}^{(S)})\\,,\\nonumber\\\\\n\\dot{\\Delta}_{T2}^{(S)} &=&{k \\over 5}\\left[2\\Delta_{T1}^{(S)}-3\n\\Delta_{T3}^{(S)}\\right] +\n\\dot{\\kappa} \\left[{\\Pi \\over 10}-\\Delta_{T2}^{(S)}\\right]\\,, \\nonumber\\\\\n\\dot{\\Delta}_{Tl}^{(S)}&=&{k \\over\n2l+1}\\left[l\\Delta_{T(l-1)}^{(S)}-(l+1)\n\\Delta_{T(l+1)}^{(S)}\\right]\n-\\dot{\\kappa}\\Delta_{Tl}^{(S)} \\,, l>2 \\nonumber \\\\\n\\dot{\\Delta}_{Pl}^{(S)}&=&{k \\over\n2l+1}\\left[l\\Delta_{P(l-1)}^{(S)}-(l+1)\n\\Delta_{P(l+1)}^{(S)}\\right]\n+\\dot{\\kappa}\\left[-\\Delta_{Pl}^{(S)}+{1 \\over 2}\\Pi\\left(\n\\delta_{l0}+{\\delta_{l2} \\over 5}\\right)\\right],\n\\label{photon}\n\\end{eqnarray}\nwhere $\\delta_{ij}$ is the Kronecker symbol.\nA similar system of equations without the Thomson scattering terms \nand polarization\nalso applies for massless neutrinos.\nFor the\nmassive neutrinos the system of equations is more complicated,\nbecause the momentum dependence cannot be integrated out of the\nexpressions (see e.g. \\cite{Ma95}).\n\nBaryons and cold dark matter can be approximated as fluids and \ntheir evolution can be obtained \nfrom the local conservation of energy-momentum tensor. 
This gives the \nequations for cold dark matter density $\\delta_c$ and its \nvelocity $v_c$,\n\\begin{equation}\n\\label{cdm2}\n \\dot{\\delta_c} = -kv_c + 3\\dot{\\phi}\\,, \\quad\n \\dot{v}_c = - {\\dot{a}\\over a}\\,v_c+k\\psi \\,.\n\\end{equation}\nFor baryons one has additional terms in Euler's equation\ncaused by Thomson scattering and pressure,\n\\begin{eqnarray}\n\\label{baryon2}\n\\dot{\\delta}_b &=& -kv_b + 3\\dot{\\phi} \\,, \\nonumber\\\\\n\\dot{v}_b &=& -{\\dot{a}\\over a}v_b\n+ c_s^2 k\\delta_b\n+ {4\\bar\\rho_\\gamma \\over 3\\bar\\rho_b}\n \\dot{\\kappa}(3\\Delta_{T1}^{(S)}-v_b) + k\\psi\\,.\n \\label{cdmb}\n \\end{eqnarray}\nHere $c_s$ is the baryon sound speed and $\\bar\\rho_\\gamma$, $\\bar\\rho_b$\nare the mean photon and baryon densities, respectively.\n\nFinally, the evolution of scalar metric perturbations is given by Einstein\nequations, which couple the sources and the metric perturbations. Only \ntwo equations are needed to specify the evolution. Here we choose them to\nbe the energy and momentum constraint equations,\n\\begin{eqnarray}\nk^2 \\phi+3{\\dot{a}\\over a}\\left(\\dot{\\phi}+{\\dot{a}\\over a}\\psi\\right)\n=-4\\pi Ga^2\\delta \\rho \\nonumber \\\\\nk^2\\left(\\dot{\\phi}+{\\dot{a}\\over a}\\psi\\right)=4\\pi Ga^2 \\delta f,\n\\label{einstein}\n\\end{eqnarray}\nwhere $\\delta \\rho$ and $\\delta f$ are the total \ndensity and momentum density perturbations, respectively. They are \nobtained by summing over the contributions from all species, $\\delta \\rho\n= \\sum_i \\delta \\rho_i$, $\\delta f=\\sum_i \\delta f_i$, $\\delta \\rho_i=\n\\bar{\\rho_i}\\delta_i$ and $\\delta f_i=(\\bar{\\rho_i}+\\bar{p_i})v_i$, where \n$\\bar{\\rho_i}$ and $\\bar{p_i}$ are the mean density and pressure of the $i$-th\nspecies.\n\nFor tensor perturbations the \nBoltzmann equation is given by (Crittenden et al.
1993),\n\\begin{eqnarray}\n\\dot{\\Delta}_{T }^{(T)} + i k \\mu\n \\Delta_{T }^{(T)}&=& - {\\dot h}- \n\\dot{\\kappa} (\\Delta_{T}^{(T)} -\n\\Psi) \\ , \\nonumber \\\\\n\\dot{\\Delta}_{P}^{(T)} + i k \\mu \n\\Delta_{P}^{(T)} &=& - \n\\dot{\\kappa}( \\Delta_{P}^{(T)} + \n\\Psi) \\ , \\nonumber \\\\\n\\Psi \\equiv \\Biggl\\lbrack\n{1\\over10}\\Delta_{T0}^{(T)}\n&+&{1\\over 35}\n\\Delta_{T2}^{(T)}+ {1\\over210}\n\\Delta_{T4}^{(T)}\n -{3\\over 5}\\Delta_{P0}^{(T)}\n+{6\\over 35}\\Delta_{P2}^{(T)}\n-{1\\over 210}\n\\Delta_{P4}^{(T)} \\Biggr\\rbrack . \n\\label{tensorl}\n\\end{eqnarray}\nThe only external source is that of the tensor metric perturbation, \nwhich evolves according to the Einstein equations as\n\\begin{equation}\n\\ddot{h}+2 {\\dot{a}\\over a}\\dot{h}+k^2h=0.\n\\label{tensoreinstein}\n\\end{equation}\nWe ignored the source term on the right-hand side of the equation above \n(caused by neutrino and photon anisotropic stress), \nas it is always negligible compared to the terms on the left-hand side.\n\nTo obtain the temperature anisotropy for a given mode $\\vec k$ one has to\nstart at an early time in the radiation-dominated epoch with initial \nconditions of the appropriate type (e.g. isentropic or isocurvature) \nand evolve the system of equations until the present. The anisotropy\nspectrum is then obtained by integrating over the initial \npower spectrum of the metric perturbation $P_\\psi(k)$,\n\\begin{equation}\nC_l^{(S)}=(4\\pi)^2\\int k^2dk P_\\psi(k)|\\Delta^{(S)}_{Tl}(k,\\tau=\\tau_0)|^2.\n\\label{cl}\n\\end{equation}\nAn analogous expression holds for the polarization spectrum and for the\ntensor spectrum (where the initial power spectrum \n$P_\\psi(k)$ has to be replaced by the initial tensor power spectrum\n$P_h(k)$).
\n\nThe spectrum $C_l$ is related to the angular correlation function,\n\begin{equation}\nC(\theta)=\langle \Delta(\vec n_1)\Delta(\vec n_2)\rangle_{\vec n_1\n\cdot\n\vec n_2=\cos \theta}={1 \over 4\pi}\sum_{l=0}^\infty(2l+1)C_lP_l(\cos \theta).\n\label{ctheta}\n\end{equation}\nTo test a model on a given angular scale\n$\theta$ one has to solve for $\Delta_{Tl}$ up to\n$l \approx 1\/\theta$.\nIf one is interested in small angular scales\nthis leads to a large system of differential\nequations to be evolved in time and\nthe computational time becomes long. For a typical spectrum with\n$l_{\rm max} \sim 1000$ one has to evolve a system of 3000 differential\nequations (for photon and neutrino anisotropy and photon\npolarization)\nuntil the present epoch. Moreover, the solutions are rapidly oscillating\nfunctions of time, so the integration has to proceed in small time\nincrements to achieve the required accuracy on the final values.\n\n\subsection{Integral solution}\n\nInstead of solving the coupled system of differential equations (\ref{photon})\none may formally integrate equations (\ref{Boltzmann})\nalong the photon past light cone to obtain (e.g.\nZaldarriaga \& Harari 1995),\n\begin{eqnarray}\n\Delta_T^{(S)} &=&\int_0^{\tau_0}d\tau e^{i k \mu (\tau -\tau_0)}\ne^{-\kappa}\n\{ \dot\kappa [\Delta_{T0}+i\mu v_b + {1\over 2} P_2(\mu)\n\Pi]+\dot\phi-ik\mu\psi\} \nonumber \\\n\Delta_P^{(S)} &=& -{1\over 2}\int_0^{\tau_0} d\tau e^{i k \mu (\tau -\tau_0)}\ne^{-\kappa} \dot\kappa [1-P_2(\mu)]\n\Pi .\n\label{formal}\n\end{eqnarray}\nThe expressions above can be further modified by eliminating the angle $\mu$ in\nthe integrand through integration by parts.\nThe boundary terms can be dropped, because they\nvanish as $\tau \rightarrow 0$ and are unobservable for $\tau=\tau_0$\n(i.e. 
only the monopole term is affected).\nThis way each time a given term is multiplied by a $\\mu$, it\nis replaced by its time derivative. This manipulation leads\nto the following expression,\n\\begin{eqnarray}\n\\Delta^{(S)}_{T,P} &=&\\int_0^{\\tau_0}d\\tau e^{i k \\mu (\\tau -\\tau_0)}\nS^{(S)}_{T,P}(k,\\tau) \n\\nonumber \\\\\nS^{(S)}_T(k,\\tau)&=&g\\left(\\Delta_{T0}+\\psi-{\\dot{v_b} \\over k}-{\\Pi \\over 4}\n-{3\\ddot{\\Pi}\\over 4k^2}\\right)\\nonumber \\\\\n&+& e^{-\\kappa}(\\dot{\\phi}+\\dot{\\psi})\n-\\dot{g}\\left({v_b \\over k}+{3\\dot{\\Pi}\\over 4k^2}\\right)-{3 \\ddot{g}\\Pi \\over\n4k^2} \\nonumber \\\\\nS^{(S)}_P(k,\\tau)&=&-{3 \\over 4 k^2}\\left(g\\{k^2\\Pi+\\ddot{\\Pi}\\}+2\n\\dot{g}\\dot{\\Pi}+\\ddot{g}\\Pi\\right).\n\\label{source}\n\\end{eqnarray}\nSome of the terms in the source function $S^{(S)}_T(\\tau)$ are \neasily recognizable. The first two contributions in the first term\nare the \nintrinsic anisotropy and gravitational potential \ncontributions from the last-scattering surface, while \nthe third contribution\nis part of the velocity term, the other part being the\n$k^{-1}\\dot{g}v_b$ term in the second row. \nThese terms \nmake a dominant contribution to the \nanisotropy in the standard recombination models. \nThe first term in \nthe second row, \n$e^{-\\kappa}(\\dot{\\phi}+\\dot{\\psi})$, is the so-called integrated\nSachs-Wolfe term and is important after recombination.\nIt is especially important if matter-radiation equality occurs \nclose to the recombination or in $\\Omega_{\\rm matter}\\ne 1$ models. In both\ncases gravitational \npotential decays with time, which leads to an enhancement of \nanisotropies on large angular scales.\nFinally we have the terms \ncaused by photon polarization and anisotropic\nThomson scattering, which contribute to $\\Pi$.\nThese terms affect the \nanisotropy spectra at the 10\\% level and are important for accurate\nmodel predictions. 
Moreover, they are the sources \nfor photon polarization.\nEquation (\\ref{source}) is a generalization of the tight-coupling\nand instantaneous recombination approximation \n(Seljak 1994) and reduces to it in the limit where the visibility function\nis a delta-function and $\\Pi$ can be neglected. \nIn that approximation one only needs to evaluate the sources at \nrecombination and then free stream them to obtain the anisotropy today. \nIn the more general case presented here\none has to perform an additional integration over time, which includes\nthe contributions arising during and after the recombination.\nMoreover, because the\ntight-coupling approximation is breaking down at the time of recombination,\nboth polarization and photon anisotropic stress \nare being generated and\n$\\Pi$ makes a non-negligible contribution to the anisotropy.\nFor exact calculations one has to use (\\ref{source}), which \nproperly includes all the terms that are relevant in \nthe linear perturbation theory.\n\nTo solve for the angular power spectrum one\nhas to expand the plane wave $e^{i k \\mu (\\tau -\\tau_0)}$ \nin terms of the radial and angular eigenfunctions (spherical \nBessel functions and Legendre polynomials, respectively), \nperform the ensemble average \n\\footnote[1]{In performing \nensemble average we assume that only the amplitude and \nnot the phase of a \ngiven mode evolves in time. While this is \nvalid in linear theory for most models of structure formation,\nit may not be correct in some versions\nof models with topological defects (Albrecht et al. 1995).} \nand integrate over the angular variable \n$\\mu$. 
This leads to (\ref{cl}), where\nthe multipole moment at the present time $\Delta^{(S)}_{(T,P)l}\n(k,\tau=\tau_0)$\nis given by the following expression,\n\begin{equation}\n\Delta^{(S)}_{(T,P)l}(k,\tau=\tau_0)=\n\int_0^{\tau_0}S^{(S)}_{T,P}(k,\tau)j_l[k(\tau_0-\tau)]d\tau,\n\label{finalscalar}\n\end{equation}\nwhere $j_l(x)$ is the spherical Bessel function.\nNote that\nwhile the angular eigenfunctions were integrated out in the angular averaging,\nthe radial eigenfunctions\nremain and enter in (\ref{finalscalar}).\nThe main advantage of (\ref{finalscalar})\nis that it decomposes the anisotropy into a source term\n$S^{(S)}_{T,P}$, which does\nnot depend on the multipole moment $l$, and a geometrical term $j_l$, which\ndoes not depend on the particular cosmological model.\nThe latter thus only needs to be computed once and can be stored for\nsubsequent calculations.\nThe source term\nis the same for all multipole moments and only depends on a small number\nof contributors in (\ref{source})\n(gravitational potentials, baryon velocity and photon moments\nup to $l=4$). By specifying the source term as a function of time one can\ncompute the corresponding spectrum of anisotropies.\nEquation (\ref{finalscalar})\nis formally an integral system of equations, because the\n$l<4$ moments appear on both sides of the equations. To solve for these\nmoments it is best to use the equations in their\ndifferential form (\ref{photon}),\ninstead of the integral form above.\nOnce the moments that enter into the source function are computed\none can solve for the higher moments\nby performing the integration in (\ref{finalscalar})\n(see section 3 for more details).\n\nThe solution for the tensor modes can similarly be written as an integral\nover the source term and the tensor spherical eigenfunctions $\chi^l_k$.\nThe latter are\nrelated to the spherical Bessel functions (\cite{abbott86}),\n\begin{equation}\n\chi^l_k(\tau)=\sqrt{{(l+2)! 
\over 2(l-2)!}}\n{j_l(k\tau)\n\over (k\tau)^2}.\n\label{eigentensor}\n\end{equation}\nThis gives\n\begin{equation}\n\Delta^{(T)}_{(T,P)l}=\int_0^{\tau_0}d\tau S_{T,P}^{(T)}(k,\tau)\n\chi^l_k(\tau_0-\tau),\n\label{finaltensor}\n\end{equation}\nwhere from equation (\ref{tensorl}) it follows that\n\begin{equation}\nS_T^{(T)}=-\dot{h}e^{-\kappa}+g\Psi \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\n S_P^{(T)}=-g\Psi.\n\label{sourcet}\n\end{equation}\nEquations (\ref{source}-\ref{sourcet})\nare the main equations of this paper and form the\nbasis of the line of sight integration method\nof computing CMB anisotropies. In the next section we will\ndiscuss in more detail the computational advantages of this\nformulation of the Boltzmann equation and its implementation.\n\n\section{Calculational Techniques}\n\nIn the previous section we presented the expressions needed\nfor the implementation of the line of sight integration\nmethod.\nAs shown in (\ref{finalscalar}) and (\ref{finaltensor}),\none needs to integrate over\ntime the source term at time $\tau$\nmultiplied by the spherical Bessel function\nevaluated at $k(\tau_0-\tau)$. The latter does not depend\non the model and can be precomputed in advance.\nFast algorithms exist which can compute spherical Bessel functions\non a\ngrid in $k$ and $l$ in a short amount of time (e.g. Press et al. 1993). The\ngrid is then stored on disk and used for all the subsequent calculations.\nThis leaves us with the\ntask of accurately calculating the source term,\nwhich determines the CMB spectrum for a given model.\nBelow we discuss some of the calculational techniques needed for\nthe implementation of the method.\nWe especially highlight the differences\nbetween this approach and the standard Boltzmann\nintegration approach. Our goal is to develop a method which is\naccurate to 1\% in $C_l$ up to $l \sim 1000$\nover the whole range of cosmological\nparameters of interest. 
These include models with varying amounts of\ndark matter, baryonic matter, Hubble constant, vacuum energy,\nneutrino mass, shape of the initial spectrum of perturbations, reionization\nand tensor modes. The choice of accuracy is based on estimates of\nobservational accuracies that will be achievable in the next generation\nof experiments and also on the theoretical limitations of model\npredictions (e.g. cosmic variance, second order effects etc.).\nMost of the figures where we discuss the choice\nof parameters\nare calculated for the standard CDM model.\nThis model is a reasonable choice in the sense that it is a model which\nexhibits most of the physical effects present in realistic models, including\nacoustic oscillations, the early-time integrated Sachs-Wolfe effect and\nSilk damping. One has to be careful, however, not to tune the parameters\nbased on a single model. We compared our results with results from other\ngroups (Bode \& Bertschinger 1995; Sugiyama 1995) for a number of\ndifferent models. We find a better than 1\% agreement\nwith these calculations over most of the parameter space of models.\nThe computational parameters we recommend below are based on this more detailed\ncomparison and are typically more stringent than\nwhat one would find based on the comparison with the standard CDM model only.\n\n\subsection{Number of coupled differential equations}\n\nIn the standard Boltzmann method the\nphoton distribution function is expanded to a high $l_{\rm max}$\n(\ref{photon}) and typically one has to\nsolve a coupled system of several thousand differential equations.\nIn the integral method one\nevaluates the source terms $S(k,\tau)$ as a function\nof time (\ref{source}), (\ref{sourcet}) and\none only requires the knowledge of the photon multipole\nmoments up to $l=4$,\nplus the metric perturbations and baryon velocity.\nThis greatly reduces\nthe number of coupled differential equations that need to be solved. 
\nFor an accurate evaluation of the lowest multipoles in the integral\nmethod one has to extend\nthe hierarchy somewhat beyond $l=4$, because the\nlower multipole moments are coupled to the higher multipoles\n(\\ref{photon}). \nBecause power is only being \ntransferred from lower to higher $l$ \nit suffices to keep a few \nmoments to achieve a high numerical accuracy of $l<5$ moments.\nOne has to be careful however to avoid \nunwanted \nreflections of the power being transferred from low $l$ to high $l$, \nwhich occur for example if a simple cut-off in the hierarchy is imposed.\nThis can be achieved by modifying \nthe boundary condition for the last term in the \nhierarchy \nusing the free streaming approximation\n(\\cite{Ma95}, \\cite{Hu95}). In the absence of scattering \n(the so-called free streaming regime),\nthe recurrence relation among the \nphoton multipoles in equation (\\ref{photon}) becomes the\ngenerator of spherical Bessel functions. \nOne can therefore use a different recurrence \nrelation among the spherical Bessel functions \nto approximate the last term in the hierarchy without reference to the \nhigher terms.\nThe same approximation can also be used for polarization and\nneutrino hierarchies. This type of\nclosure scheme works extremely well and only a few multipoles\nbeyond $l=4$ are needed for an accurate calculation of the source \nterm. This is shown in \nfigure \\ref{fig1}, where a relative error in the spectrum is plotted\nfor several choices of maximal number of photon multipoles.\nWe choose to end the\nphoton hierarchy (both anisotropy and polarization) \nat $l_\\gamma=8$ and massless neutrino at $l_\\nu=7$, which \nresults in an error lower than $0.1\\%$ compared to the exact case. \nInstead of a few thousand coupled differential equations \nwe therefore evolve about 35 equations\nand the integration time is correspondingly reduced. 
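As a concrete check of this closure scheme, the sketch below (illustrative scipy-based code with assumed values, not the authors' implementation) verifies that the spherical Bessel recurrence used to estimate the moment just beyond the truncation reproduces the exact next moment in the free streaming regime.

```python
import numpy as np
from scipy.special import spherical_jn

# In the free-streaming regime the photon hierarchy becomes the generator
# of spherical Bessel functions, so the moment beyond the truncation can
# be estimated from the last two evolved moments via the exact recurrence
#   j_{l+1}(x) = (2l+1)/x * j_l(x) - j_{l-1}(x),
# which avoids reflecting power back down the hierarchy.
def closure_estimate(l, x):
    """Estimate of the (l+1)-th free-streaming moment from moments l, l-1."""
    return (2 * l + 1) / x * spherical_jn(l, x) - spherical_jn(l - 1, x)

x = 12.7  # illustrative argument x = k * tau
for l in range(4, 9):
    exact = spherical_jn(l + 1, x)
    approx = closure_estimate(l, x)
```

Because the recurrence is exact for pure free streaming, the estimate agrees with the true Bessel function to machine precision; in the full problem it is only a boundary condition for the last evolved multipole.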
\n\n\begin{figure}[t]\n\vspace*{8.3 cm}\n\caption{CMB spectra produced by\nvarying the number of evolved photon multipole moments, together\nwith the\nrelative error (in \%)\ncompared to the exact case. While using $l_\gamma=5$ produces\nup to 2\% error, using $l_\gamma=7$ gives results almost identical to the\nexact case.}\n\special{psfile=losfig1.ps\nvoffset=-10 hoffset=70 hscale=47 vscale=45}\n\n\label{fig1}\n\end{figure}\n\n\subsection{Sampling of CMB multipoles}\n\nIn the standard Boltzmann integration method one solves for the whole\nphoton hierarchy (\ref{photon}) and the\nresultant $\Delta_l$ is automatically obtained for each $l$ up to some\n$l_{\rm max}$.\nThe CMB spectra are, however,\nvery smooth (see figure \ref{fig1}), except for the lowest $l$, where the\ndiscrete nature of the spectrum becomes important. This means that\nthe spectrum need not be sampled at each $l$; instead\nit suffices to sparsely sample\nthe spectrum in a number of points and interpolate between them.\nFigure \ref{fig2} shows the result of such an interpolation\nwith cubic splines (see e.g. \cite{Press92}) when every\n20th, 50th or 70th\n$l$ is sampled beyond $l=100$, with an increasingly denser sampling\ntowards small $l$, so that each $l$ is sampled below $l=10$.\nWhile sampling every 70th $l$ results in a maximal error of\n1\%, sampling\nevery 20th or 50th $l$ gives errors below 0.2 and 0.4\%, respectively.\nWe choose to compute every 50th $C_l$ beyond $l=100$ in addition to\n15 $l$ modes\nbelow $l=100$, so that a total of\n45 $l$ modes are calculated up to $l_{\rm max} =1500$. This gives\na typical (rms) error of 0.1\%, with excursions of up to 0.4\%. 
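A minimal sketch of this sparse sampling strategy (illustrative only; a smooth toy function stands in for the true spectrum, and the exact sampled multipoles are assumptions):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Toy stand-in for the smooth combination l(l+1)C_l; in the real method
# the sampled values would come from the line of sight integration.
def toy_dl(l):
    l = np.asarray(l, dtype=float)
    return 1.0 + 0.3 * np.cos(l / 90.0)

# Dense sampling at low l, progressively sparser, every 50th beyond l=100.
l_sparse = np.unique(np.concatenate([np.arange(2, 10),
                                     np.arange(10, 100, 6),
                                     np.arange(100, 1501, 50)]))
spline = CubicSpline(l_sparse, toy_dl(l_sparse))

l_all = np.arange(2, 1501)
rel_err = np.abs(spline(l_all) / toy_dl(l_all) - 1.0)
```

The spline is exact at the sampled multipoles, so finer sampling directly reduces the rms deviation between the knots.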
\nThe number of integrals in equation (\\ref{cl}) is thus \nreduced by 10-50 and the computational \ntime needed for the integrals becomes comparable or smaller\nthan the time\nneeded to solve for the system of differential equations.\n\n\\begin{figure}[t]\n\\vspace*{9.3 cm}\n\\caption{ \nRelative error between the exact and interpolated spectrum, \nwhere every 20th, 50th or 70th multipole is calculated. The \nmaximal error for the three approximations is less\nthan 0.2, 0.4 and 1.2\\%, respectively. \nThe rms deviation from the exact spectrum is further improved by\nfiner sampling, because the\ninterpolated spectra are exact in the sampled points. For the sampling\nin every 50th multipole the rms error is 0.1\\%.} \n\\special{psfile=losfig2.ps\nvoffset=20 hoffset=70 hscale=47 vscale=45}\n\\label{fig2}\n\\end{figure}\n\n\\subsection{Free streaming}\n\nAfter recombination and in the absence of a time changing gravitational \npotential the source function often becomes negligible. This is \nthe so called free streaming regime, where the\nphotons are freely propagating through the universe. \nMost of the standard Boltzmann codes use a special free streaming algorithm \nto map the anisotropies from a given epoch $\\tau_{\\rm fs}$ into \nanisotropies today (\\cite{BE84}).\nIn the line of sight integration method\nthe free streaming regime is only a special case where $S(k,\\tau)=0$ after \nsome time $\\tau_{\\rm fs}$.\nThus one can stop the integration at the \ntime $\\tau_{\\rm fs}$ beyond which the sources are not important and there is\nno need for a separate algorithm to evolve the anisotropies until today. \nFor example, if one\nassumes that only the first term in (\\ref{source})\nis important\none only needs to integrate over \nthe source where the visibility function $g$ appreciably differs from 0. 
In \nthe absence of reionization this restricts the time \nintegration to a narrow interval during recombination \naround $z \\approx 1100$.\nAlthough most of the contributions to the anisotropies come\nfrom this epoch, time dependent\ngravitational potential (and, to a smaller extent, other source terms\nin equation (\\ref{source})) make \na nonnegligible contribution to the anisotropies\neven after recombination. \nAs mentioned earlier,\nthis is especially important\nin $\\Omega_{\\rm matter}\\ne 1$ models and in models with low $\\Omega_{\n\\rm matter}h^2$. In the first case \nthe gravitational potential is decaying at late times,\nwhile in the second class of models the matter-radiation equality\nduring which gravitational potential\nchanges in time is pushed to a lower redshift.\nEven in standard CDM model ($\\Omega_{\\rm matter}=1,h=0.5$)\ngravitational potential is still \nsignificantly changing in time at moderately low redshifts\nof $z \\sim 100$ (\\cite{Hu95}).\nSimilarly one cannot use free streaming\nin the models with late reionization, \nwhere the visibility\nfunction is nonvanishing at low redshifts. \nWe choose to integrate until the present time \nfor most of the models, except for\nthe models with $\\Omega_{\\rm matter}=1$, \nwhere we stop the integration at \n$z=10$. In this case the \ncomputational time is reduced significantly (typically 50\\%) \ncompared to that of evolving the \nequations until the present time. \n\n\\subsection{Integration over time}\n\nFor each Fourier mode $k$ the source term is integrated\nover time $\\tau$ (\\ref{finalscalar}). \nThe sampling in time need not be \nuniform, because the dominant contribution arises from the epoch\nof recombination around $z\\sim 1100$, the width of which is \ndetermined by the visibility function $g$ and is rather narrow in \nlook-back time for\nstandard recombination scenarios. 
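The kind of non-uniform time grid this implies can be sketched as follows (the peak position and width of the visibility function are placeholder numbers in units of the conformal time today, not a recombination calculation):

```python
import numpy as np

# Dense sampling (40 points) across the narrow visibility-function peak
# at recombination, then coarse sampling (40 points) for the integrated
# Sachs-Wolfe contribution up to the present. tau_rec and half_width are
# illustrative assumptions.
tau_rec, half_width = 0.02, 0.005
grid = np.concatenate([
    np.linspace(tau_rec - half_width, tau_rec + half_width, 40),  # recombination
    np.linspace(tau_rec + half_width, 1.0, 41)[1:],  # drop duplicated endpoint
])
```

In a reionized model one would add a third dense segment where the visibility function becomes non-negligible again.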
During this epoch the sources\nacoustically oscillate on a time scale of $(c_sk)^{-1}$,\nso that the longest wavelength modes are the slowest to vary.\nFor short wavelengths the rate of sampling should therefore be higher.\nEven for long wavelengths the source function\nshould still be sampled in several\npoints across the last-scattering surface. This is because the\nterms in (\ref{source}) depend on the derivatives\nof the visibility function.\nIf the visibility function $g$ is narrow then its derivative will\nalso be narrow and will sharply\nchange sign at the peak of $g$. Its integration\nwill lead to numerical roundoff\nerrors if not properly sampled, even though the\npositive and negative contributions nearly cancel out when integrated\nover time and make only a small contribution to the integral.\nFigure \ref{fig3} shows\nthe error in integration caused by sampling this epoch with 10, 20\nor 40 points. Based on a comparison with several models\nwe choose to sample the recombination epoch\nwith 40 points, which results in very small ($\sim$ 0.1\%) errors.\nAfter this\nepoch the main contribution to the anisotropies arises from the\nintegrated Sachs-Wolfe term. This is typically a slowly changing\nfunction and it is sufficient to sample\nthe entire range in time until the present in 40 points.\nThe exceptions here are models\nwith reionization, where the visibility function becomes non-negligible\nagain and a new last-scattering surface is created. In this case a\nmore accurate sampling of the source is also needed at lower redshifts.\n\n\begin{figure}[t]\n\vspace*{9.3 cm}\n\caption{ Error in the spectrum caused by insufficient\ntemporal sampling of the source term.\nInaccurate sampling of the source during recombination leads to\nnumerical errors, which can reach the level of 1\% if the source\nis sampled in only 10 points across the recombination epoch.\nFiner sampling in time gives much smaller errors for this model. 
\nComparisons with other models indicate that sampling in 40\npoints is needed for accurate integration.}\n\n\special{psfile=losfig3.ps\nvoffset=20 hoffset=70 hscale=47 vscale=45}\n\n\label{fig3}\n\end{figure}\n\n\subsection{Integration over wavenumbers}\n\nThe main computational cost of standard\nCMB calculations is solving the coupled\nsystem of differential equations.\nThe number of $k$-modes for which the system is solved is the main\nfactor that determines the speed of the method. For results accurate\nto $l_{\rm max}$ one has to sample the wavenumbers up to a maximum value\n$k_{\rm max}=l_{\rm max}\/ \tau_0$.\nIn the line of sight integration method\nsolving the coupled\nsystem of differential equations still dominates the computational time\n(although for each mode\nthe time is significantly shorter than in the\nstandard Boltzmann method\nbecause of a smaller system of equations).\nIt is therefore instructive to compare the number of $k$\nevaluations needed in each of the methods to achieve a given\naccuracy in the final spectrum.\n\nIn the standard Boltzmann method\none solves for $\Delta_{T,l}^{(S)}(k)$ directly, so this quantity\nmust be sampled densely enough for accurate integration.\nFigure \ref{fig4}a shows $\Delta_{T,l}^{(S)}(k)$\nfor $l=150$ in a standard CDM model. One can see that it is a rapidly\noscillating function with a frequency $k \sim \tau_0^{-1}$.\nEach oscillation needs to be sampled in at least a few points to assure\nan accurate integration.\nTo obtain a smooth CMB spectrum one typically\nrequires 6 points over one period, implying\n$2l_{\rm max}$ $k$-mode evaluations.\nThis number can be reduced somewhat\nby filtering out the sampling noise in the spectrum (\cite{Hu95}), but even\nin this case one requires at least 1-2 points per period or\n$l_{\rm max}\/2$ $k$-mode evaluations. 
\n\nTo understand the nature of these rapid oscillations in\n$\Delta_{T,l}^{(S)}(k)$ we will consider wavelengths larger than\nthe width of the last scattering surface. In this case the\nBessel function in (\ref{finalscalar}) can be pulled out of the integral as\n$j_l(k\tau_0)$ because the time at which recombination occurs,\nwhen the dominant contribution to $\Delta_{T,l}^{(S)}(k)$ is\ncreated, is\nmuch smaller than $\tau_0$ and $k \Delta\tau\ll 1$ ($\Delta\tau$\nis the interval of time for which the visibility function differs\nappreciably from zero).\nSo the final $\Delta_{T,l}^{(S)}(k)$ is approximately\nthe product of $j_l(k\tau_0)$ and $S_T^{(S)}$ integrated over time,\nif the finite width of the last scattering surface and the\ncontributions after recombination can be ignored.\n\nFigure \ref{fig4}b shows the\nsource term $S_T^{(S)}$ integrated over time\nand the Bessel function $j_l(k\tau_0)$.\nIt shows that the high frequency\noscillations in $\Delta_{T,l}^{(S)}(k)$ seen in figure \ref{fig4}a\nare caused by the oscillation of the spherical\nBessel functions, while the oscillations of the source term have a\nmuch longer period in $k$.\nThe different periods of the two\noscillations can be understood\nusing the tight coupling approximation (\cite{Hu95}, \cite{Seljak94}).\nPrior to and during recombination the photons are coupled to the baryons and the\ntwo oscillate together\nwith a typical acoustic timescale $\tau_{s}\sim \tau_{\rm rec}\/\sqrt{3}\n\sim \tau_0\/\sqrt{3z_{\rm rec}} \sim \tau_0\/50$. The\nperiod in $k$ of the acoustic oscillations, $\sim \tau_{\rm rec}^{-1}$,\nis therefore about 50 times longer than\nthe period of the oscillations of the spherical Bessel functions, which\noscillate with $k \sim \tau_0^{-1}$.\n\n\begin{figure}[t]\n\n\vspace*{9.3 cm}\n\caption{In (a) $\Delta_{T,150}^{(S)}(k)$ is plotted as a function of\nwavevector\n$k$. 
In (b) $\\Delta_{T,150}^{(S)}(k)$ is decomposed into the \nsource term $S_T^{(S)}$ integrated over time\nand the spherical\nBessel function $j_{150}(k\\tau_0)$. The high frequency oscillations \nof $\\Delta_{T,150}^{(S)}(k)$ are caused by oscillations of \nthe spherical Bessel function $j_{150}(k\\tau_0)$, whereas the source \nterm varies much more slowly. This allows one to reduce the number of\n$k$ evaluations in the line of sight integration method, because only \nthe source term needs to be sampled. } \n\\special{psfile=losfig4.ps \nvoffset=40 hoffset=70 hscale=47 vscale=45}\n\\label{fig4}\n\\end{figure}\n\n\\begin{figure}[t]\n\\vspace*{8.3 cm}\n\\caption{ Error in the spectrum caused by insufficient \n$k$-mode sampling of the source term. \nSampling the source with 40 points up to \n$k = 2l_{\\rm max}$ leads to 1\\% errors, while \nwith 60 or 80 points the maximal error decreases to 0.2\\%.\nComparisons with other models indicate that sampling in 60 \npoints is sufficient for accurate integration.} \n\\special{psfile=losfig5.ps \nvoffset=0 hoffset=70 hscale=47 vscale=45}\n\\label{fig5}\n\\end{figure}\n\nBecause an accurate sampling of the \nsource term requires only a few \npoints over each acoustic oscillation, the total number of $k$ evaluations\nin the integral method \ncan be significantly reduced compared to the standard methods. \nTypically a few dozen\nevaluations are needed\nover the entire range of $k$, compared to about 500 evaluations\nin the standard method when a noise filtering technique is \nused and 2000 otherwise \n(for $l_{\\rm max} \\sim 1000$). Once the source\nterm is evaluated at these points one can\ninterpolate it at points with preevaluated \nspherical Bessel functions, which can be much more densely sampled \nat no additional computational cost. 
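The bookkeeping just described can be sketched as follows (illustrative only: a smooth toy function with unit initial power spectrum stands in for the time-integrated source, and the grid sizes are assumptions):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.special import spherical_jn

tau0, l = 1.0, 150
k_dense = np.linspace(1e-3, 3000.0, 30001)   # grid where j_l is tabulated
k_sparse = np.linspace(1e-3, 3000.0, 61)     # ~ l_max/30 source evaluations

def toy_source(k):
    # Smooth stand-in for the time-integrated source: a few samples per
    # "acoustic" oscillation suffice for the spline to capture it.
    return np.exp(-(k / 600.0) ** 2) * np.cos(k / 50.0)

# Spline the sparsely sampled source onto the dense Bessel grid, then
# integrate k^2 |S(k) j_l(k tau0)|^2 by the trapezoidal rule.
source = CubicSpline(k_sparse, toy_source(k_sparse))(k_dense)
jl = spherical_jn(l, k_dense * tau0)
integrand = k_dense ** 2 * (source * jl) ** 2
cl = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k_dense))
```

Only the slowly varying source is evaluated sparsely; the rapidly oscillating Bessel factor comes from the dense precomputed table at no extra cost.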
\nThe end result is the same accuracy as in the standard method,\nprovided that the source is sampled in a sufficient number of points.\nFigure \ref{fig5} shows the relative error in the CMB spectrum for the\ncases where the\nsource term is calculated in 40, 60 and 80 points between\n0 and $k \tau_0 =3000$ (for $l_{\rm max}=1500$).\nWhile using 40 points results in up to 1\% errors, using 60 points\ndecreases the maximum error to below 0.2\% for this model.\nIn general it suffices to use\n$l_{\rm max}\/30$ $k$ modes, which is at least\nan order of magnitude smaller than in\nthe standard methods.\nNote that with this method there is no need to\nfilter the spectrum to reduce the sampling noise, because\nthe latter is mainly caused by insufficient sampling\nof the spherical Bessel functions, which\nare easy to precompute. The additional operations needed for a higher\nsampling (summation and source interpolation) do not\nsignificantly affect the overall computational time. Moreover,\nif each $C_l$ is accurately calculated they can be sparsely sampled\nand interpolated\n(section 3.2); this would not be possible if they had a significant noise\ncomponent added to them.\n \n\n\section{Conclusions}\n \nIn this paper we presented a new method for accurate calculations of\nCMB anisotropy and polarization spectra.\nThe method is not based on any approximations and is an\nalternative to the standard Boltzmann calculations, which are based\non solving large numbers of differential equations. The\napproach proposed here uses a hybrid integro-differential\nformulation of the same system of equations.\nBy rewriting the Boltzmann equations in the integral form\nthe solution for the photon anisotropy\nspectrum can be written as an integral over a\nsource and a geometrical term. 
The first is determined by a small number\nof contributors to the photon equations of motion and the second is\ngiven by the radial eigenfunctions, which do not depend on the\nparticular cosmological model, but only on the geometry of space.\n\nOne advantage of the split between geometrical and dynamical\nterms is that\nit clarifies their different contributions to\nthe final spectrum. A good example of this is the temperature anisotropy\nin the non-flat universe, which can be written using a similar\ndecomposition, except\nthat the spherical Bessel functions have to be replaced with their\nappropriate generalization (\cite{abbott86}).\nThis will be discussed in more detail in a future\npublication; here we simply remark that replacing the radial eigenfunctions\nin a non-flat space with their flat space counterparts (keeping the comoving\nangular distance to the LSS unchanged) is only approximate and\ndoes not become exact even in the large\n$l$ (small angle) limit. The geometry of the\nuniverse leaves its signature in the CMB spectra in a rather\nnontrivial way and does not lead only to a simple rescaling of the\nspectrum by $\Omega_{\rm matter}^{-1\/2}$ (\cite{Jungman95}).\n\nThe main advantage of our line of sight integration method is\nits speed and accuracy.\nFor a given set of parameters it is two orders of magnitude\nfaster than the standard Boltzmann\nmethods, while preserving the same accuracy.\nWe compared our results with the results of Sugiyama (1995) and\nof Bode \& Bertschinger (1995), and in both cases the agreement was\nbetter than 1\% up to very high $l$ for all of the models we\ncompared.\n\nThe method is useful for fast and accurate normalizations\nof density power spectra from CMB measurements,\nwhich for a given model require the CMB anisotropy spectrum and the\nmatter transfer function, both of which are provided by the output\nof the method. 
\nSpeed and accuracy are even more important for the accurate determination\nof cosmological parameters from CMB measurements. In such applications\none wants to perform a search over a large parameter space,\nwhich typically requires calculating the spectra of\nseveral thousand models (e.g. \cite{Jungman95}).\nOne feasible way to do so is to use the approximation methods\nmentioned in the introduction. These\ncan be made extremely fast, but at the cost of sacrificing\naccuracy. While several percent accuracy is sufficient for analyzing\nthe present-day experiments, it will not satisfy the requirements\nof the future all-sky surveys of the microwave sky. Provided that\nforeground contributions can be successfully filtered out (see\nTegmark \& Efstathiou 1995 for a recent discussion) one can hope for\naccuracies on the spectrum close to the cosmic variance limit,\nwhich for broad band averages can indeed reach below 1\% at\n$l>100$. It is at this stage that fast and accurate CMB calculations\nsuch as the one presented in this paper\nwill become crucial and might enable one\nto determine many cosmological parameters with an unprecedented\naccuracy.\n\n\acknowledgements\nWe would like to thank Ed Bertschinger for encouraging\nthis work and providing helpful comments.\nThis work was partially supported by grant NASA NAG5-2816.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\nThe Riemann zeta function $\zeta(z)$ is conventionally represented as the sum or the integral\n\begin{equation}\label{(1)}\n\zeta(z)=\sum_{n=1}^{\infty}\frac{1}{n^{z}}=\frac{1}{\Gamma(z)}\int_{0}^{\infty}\frac{t^{z-1}}{e^{t}-1}dt,\quad\textrm{Re}(z) > 1\n\end{equation}\n(\cite[Theorem 12.2]{Apostol}). 
The integral reduces to the sum if the denominator of the integrand is expanded in a geometric series.\nIt is well known that $\zeta(z)$ can be analytically continued to the whole complex plane except for a single pole at $z=1$ with residue 1,\nand that $\zeta(z)$ satisfies the two equivalent functional equations (we present one of them here; the other can be obtained from this by letting $z\to1-z$)\n\begin{equation}\label{Functional}\n\zeta(z)=2(2\pi)^{z-1}\Gamma(1-z)\sin\left(\frac{\pi z}{2}\right)\zeta(1-z),\quad\textrm{Re}(z) < 1\n\end{equation}\n(\cite[Theorem 12.7]{Apostol}).\nFrom the above equation, we see that the zeros of $\zeta(z)$ are distributed symmetrically about the axis $\textrm{Re}(z)=\frac{1}{2}$.\n\nBy (\ref{Functional}), from the vanishing of the function $\sin\left(\frac{\pi z}{2}\right)$, the negative even integers are also zeros of $\zeta(z)$,\nwhich are called the trivial zeros.\nFurthermore, from the representation of the zeta function in terms of products involving prime numbers, it can be shown that the zeta function has no zero for Re$(z)>1$, since none of the factors can vanish there.\nIn 1859, Riemann \cite{Riemann} conjectured that the nontrivial zeros of $\zeta(z)$ all lie on the critical line $\textrm{Re}(z)=\frac{1}{2},$\nnamely,\n\begin{equation}\label{Re-zero}\n\zeta\left(\frac12+ie_* \right)=0,\n\end{equation}\nwhere $e_*$ denotes the location of a zero on the critical line.\nThis has become one of the most remarkable conjectures in mathematics, known as the Riemann hypothesis.\nFor the purpose of solving the Riemann hypothesis, Hilbert and P\'olya \cite{Hilbert} conjectured that the imaginary parts of the zeros of $\zeta(z)$\nmight correspond to the eigenvalues of a Hermitian, self-adjoint operator.\nBerry and Keating \cite{Berry} conjectured that the classical counterpart of such a Hamiltonian may have the form $\hat{H}=\hat{x}\hat{p}$,\nbut a Hamiltonian possessing this property has not yet been found. 
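These statements are easy to check numerically. The sketch below (illustrative, using the mpmath library) verifies the integral representation at $z=3$ and the functional equation, in its standard form $\zeta(z)=2(2\pi)^{z-1}\Gamma(1-z)\sin(\pi z/2)\zeta(1-z)$, at the sample point $z=-5/2$.

```python
import mpmath as mp

mp.mp.dps = 30  # working precision in decimal digits

# Integral representation at z = 3: (1/Gamma(3)) * int_0^inf t^2/(e^t - 1) dt,
# which should equal zeta(3).
z = mp.mpf(3)
integral = mp.quad(lambda t: t**(z - 1) / (mp.exp(t) - 1), [0, mp.inf]) / mp.gamma(z)

# Standard functional equation evaluated at z = -5/2 (where Re z < 1):
zp = mp.mpf('-2.5')
rhs = (2 * (2 * mp.pi)**(zp - 1) * mp.gamma(1 - zp)
       * mp.sin(mp.pi * zp / 2) * mp.zeta(1 - zp))
```

Both quantities agree with `mp.zeta` to essentially the working precision; evaluating at other sample points works the same way.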
In quantum mechanics, associated with each measurable parameter in a physical system is a quantum mechanical operator, and the operator associated with the system energy is called the Hamiltonian.\n\nIn 2016, Bender, Brody and M\\\"uller \\cite{Bender} found an interesting correspondence between the nontrivial zeros of\n$\\zeta(z)$ and the eigenvalues of Hamiltonian for a quantum system.\nThat is, let $\\hat x$ be the position operator and let $\\hat{p}=-i\\frac d{dx}$ be the momentum operator. \nThey constructed a non-Hermitian Hamiltonian\n\\begin{equation}\n\\hat{H}=\\frac{\\mathbbm{1}}{\\mathbbm{1}-e^{-i\\hat{p}}}(\\hat{x}\\hat{p}+\\hat{p}\\hat{x})({\\mathbbm{1}-e^{-i\\hat{p}}}),\n\\end{equation}\nwhich satisfies the conditions of the Hilbert-P\\'olya conjecture, that is, if the eigenfunctions of $\\hat{H}$ satisfy the boundary condition $\\psi_{n}(0)=0$ for all $n$, then\nthe real eigenvalues $E_{n}$ have the property that $\\frac{1}{2}(1-iE_{n})$ are the nontrivial zeros of the Riemann zeta function, and the Hurwitz zeta functions\n$\\psi_{n}(x)=-\\zeta(z_{n},x+1)$ are the corresponding eigenstates. They also constructed the metric operator\nto define an inner-product space, on which the Hamiltonian is Hermitian. 
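The dictionary between real eigenvalues and critical-line zeros used in this correspondence is elementary but central to the Hilbert-P\'olya strategy (a small remark added here for completeness, consistent with the relations stated above):

```latex
E_{n}=i(2z_{n}-1)
\quad\Longleftrightarrow\quad
z_{n}=\tfrac{1}{2}\left(1-iE_{n}\right),
\qquad\text{so}\qquad
E_{n}\in\mathbb{R}
\;\Longleftrightarrow\;
\operatorname{Re}(z_{n})=\tfrac{1}{2}.
```

Hence a proof that the spectrum of $\hat{H}$ is real would place all of the corresponding zeros on the critical line.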
They remarked that if the analysis can be made rigorous to show that\n$\\hat{H}$ is manifestly self-adjoint, then this implies that the Riemann hypothesis holds true.\n\nIn this paper, we present a general construction of a Hamiltonian $\\hat{H}_{f}$ for which a general family of Hurwitz zeta functions $(-1)^{z_{n}-1}L(f,z_{n},x+1)$ become the eigenstates under a suitable boundary condition, \nand the eigenvalues $E_{n}$ have the property that $z_{n}=\\frac{1}{2}(1-iE_{n})$ are the zeros of a general family of zeta functions $L(f,z)$ (see Theorem \\ref{main} below).\nThe definitions of $L(f,z,x)$ and $L(f,z)$ will be given in the next section.\n\n\\section{Mellin transform and zeta functions}\nRecall that a function $f$ tends rapidly to $0$ at infinity if for any $k\\geq 0$, the function $t^{k}f(t)$ tends to $0$ as $t\\to +\\infty$. Let $tf(-t)$ be a $C^{\\infty}$ function on $[0,\\infty)$\ntending rapidly to $0$ at infinity. For $\\textrm{Re}(z)>1$, we define a general family of zeta functions $L(f,z)$ by the following integral representation\n \\begin{equation}\\label{(5)}\nL(f,z)=\\frac{1}{\\Gamma(z)}\\int_{0}^{\\infty}t^{z}f(-t)\\frac{dt}{t}.\n\\end{equation}\nAs pointed out by Cohen \\cite[Proposition 10.2.2(2)]{Cohen}, if $f(-t)$ itself is a $C^{\\infty}$ function on $[0,\\infty)$ and\nit tends rapidly to $0$ at infinity, then $L(f,z)$ can be analytically continued to a holomorphic function on the whole of $\\mathbb{C}$.\n\nThe Hurwitz zeta function $\\zeta(z,x)$ is defined by the series\n\\begin{equation}\n\\zeta(z,x)=\\sum_{n=0}^{\\infty}\\frac{1}{(n+x)^{z}},\\quad\\textrm{Re}(z)>1,\n\\end{equation}\nwhere $x\\in (0,\\infty)$ (see Cohen \\cite[p.~72, Definition 9.6.1]{Cohen} and the discussion of the domain of $x$ there).\nIt has the integral representation\n\\begin{equation}\n\\zeta(z,x)=\\frac{1}{\\Gamma(z)}\\int_{0}^{\\infty}e^{-(x-1)t}\\frac{t^{z-1}}{e^{t}-1}dt.\n\\end{equation}\n\n\nInspired by this, we also define a 
general family of Hurwitz zeta functions $L(f,z,x)$ via the following integral representation\n\\begin{equation}\\label{(6)}\nL(f,z,x)=\\frac{1}{\\Gamma(z)}\\int_{0}^{\\infty} t^{z}e^{-(x-1)t} f(-t)\\frac{dt}{t},\n\\end{equation}\nwhere Re$(z)>1$ and $x\\in (0,\\infty).$\nIt is easy to see that $L(f,z,1)=L(f,z)$.\n\nFor different choices of $f(t)$ in (\\ref{(5)}) and (\\ref{(6)}), $L(f,z,x)$ and $L(f,z)$ become\nthe well-known Riemann zeta function, Hurwitz zeta function, Dirichlet $L$-functions, Hecke series and other zeta functions. \nThe following are some examples. \n\\begin{itemize}\n\\item\nLetting $f(t)=\\frac{1}{e^{-t}-1}$ in (\\ref{(5)}) and (\\ref{(6)}), we have the Riemann zeta function $\\zeta(z)$ and the Hurwitz zeta function $\\zeta(z,x)$, respectively (see Colmez's Tsinghua lecture notes \\cite[p.~4]{Colmez}).\n\n\\item \nSuppose $\\chi$ is a Dirichlet character modulo $m$ and $$L(\\chi,z)=\\sum_{n=1}^{\\infty}\\frac{\\chi(n)}{n^{z}}=\\prod_{p}\\left(1-\\frac{\\chi{(p)}}{p^{z}}\\right)^{-1} $$\nis the Dirichlet $L$-function. 
From the integral representation of the Gamma function\n$$\\int_{0}^{\\infty}e^{-nt}t^{z}\\frac{dt}{t}=\\frac{\\Gamma(z)}{n^{z}},$$\nwe have $$L(\\chi,z)=\\frac{1}{\\Gamma(z)} \\int_{0}^{\\infty}G_{\\chi}(t)t^{z-1}dt,\\quad\\textrm{Re}(z)>1,$$\nwhere \n$$\n\\begin{aligned}\nG_{\\chi}(t)&=\\sum_{n=1}^{\\infty}\\chi(n)e^{-nt}=\\sum_{r=1}^{m}\\chi(r)\\sum_{q=0}^{\\infty}e^{-(r+qm)t}\n\\\\\n&=\\sum_{r=1}^m\\frac{\\chi(r)e^{-rt}}{1-e^{-mt}}\n\\end{aligned}\n$$\n(see \\cite[p.~160]{Cohen} and \\cite[p.~102]{Iwa}).\nThus letting $f(t)=G_{\\chi}(-t)$ in (\\ref{(5)}) and (\\ref{(6)}), \nwe recover the Dirichlet $L$-function $L(\\chi,z)$ (see \\cite{Iwa}) and the two variable Dirichlet $L$-function $L(\\chi,z,x)$\n(see \\cite{fox}).\n\n\\item \nLet \n$$\n\\lambda(z)=\\sum_{n=0}^\\infty\\frac1{(2n+1)^z},\\quad\\text{Re}(z)>1\n$$\nbe the Dirichlet lambda function according to Abramowitz and Stegun's handbook \\cite[pp.~807--808]{AS}, \nwhich has also been studied by Euler under the notation $N(z)$ (see \\cite[p.~70]{Varadarajan}).\nBy \\cite[(2.10)]{HK}, we have the following integral representation\n$$\n\\lambda(z)=\\frac1{\\Gamma(z)}\\int_0^\\infty\\frac{e^{t}t^{z-1}}{e^{2t}-1}dt, \\quad\\text{Re}(z)>1.\n$$\nThus letting $f(t)=\\frac{e^{-t}}{e^{-2t}-1}$ in (\\ref{(5)}), we recover the Dirichlet lambda function $\\lambda(z)$.\n\n\\item \nLet $w$ be an even integer, and let $S_{w+2}$ be the space of cusp forms with respect to $\\Gamma=SL(2,\\mathbb Z)\/(\\pm1)$ \non the upper half-plane of one complex variable. 
For $\\Phi\\in S_{w+2},$ \nwe write $\\Phi$ as a Fourier series\n$\\Phi(z)=\\sum_{n=1}^\\infty\\lambda_n e^{2\\pi inz}.$\nLet $L_\\Phi(z)$ be the Hecke series of $\\Phi.$ Namely,\n$$L_\\Phi(z)=\\sum_{n=1}^\\infty\\frac{\\lambda_n}{n^z},$$\nwhich converges for Re$(z)\\geq z_{0}$ for some $z_{0}>0$.\n\nIt is clear that $L_\\Phi(z)$ has the following expression:\n$$\\begin{aligned}\nL_\\Phi(z)\n&=\\frac{(2\\pi)^z}{\\Gamma(z)}\\sum_{n=1}^\\infty{\\lambda_n}\\int_0^\\infty e^{-2\\pi nt}t^{z-1}dt \\\\\n&=\\frac{(2\\pi)^z}{\\Gamma(z)}\\int_0^\\infty \\Phi(it)t^{z-1}dt.\n\\end{aligned}$$\nSo letting $f(-t)=(2\\pi)^z\\Phi(it)$ in (\\ref{(5)}), we recover the Hecke series $L_\\Phi(z)$\n(see \\cite{Man}).\n\\end{itemize}\n\nAssuming the function $tf(t)$ is analytic on an open disc of radius $r$ centered at $0$ in the complex plane $\\mathbb{C}$, we have the following result.\n\n\\begin{proposition}\\label{2.1} \nLet $C$ be a loop around the negative real axis. If $x\\in (0,\\infty)$, the function defined by the contour integral\n$$\nI(f,z,x)=\\frac{1}{2\\pi i}\\int_{C} t^{z}e^{(1-x)t} f(t)\\frac{dt}{t}\n$$\nis an entire function of $z$.\nMoreover, we have\n$$\nL(f,z,x)=\\Gamma(1-z)I(f,z,x), \\quad{\\rm Re}(z)>1.\n$$\n\\end{proposition}\n\n\\begin{remark}\\label{rem-2} \nIf $\\textrm{Re}(z)\\leq 1$ and $x\\in (0,\\infty),$ we define $L(f,z,x)$ by the equation\n$$\nL(f,z,x)=\\Gamma(1-z)I(f,z,x).\n$$\nThis equation provides the analytic continuation of $L(f,z,x)$ to the entire complex plane. \n\\end{remark}\n\n\\begin{proof} The proof follows from arguments similar to those of \\cite[Theorem 12.3]{Apostol}.\n\nWe regard the interval $[0,\\infty)$ of integration as a path of a complex integral, and then expand it a little. 
Here we consider the following\ncontour $C.$\nFor $\\varepsilon>0,$ we define $C$ by a curve $\\varphi:(-\\infty,\\infty)\\to\\mathbb C$ given by\n\\begin{equation}\nC : \\quad \\varphi(u)=\\begin{cases}\nu &\\text{if }u<-\\varepsilon, \\\\\n\\varepsilon\\exp\\left(\\pi i \\frac{u+\\varepsilon}{\\varepsilon}\\right) &\\text{if }-\\varepsilon\\leq u\\leq \\varepsilon, \\\\\n-u &\\text{if }u>\\varepsilon.\n\\end{cases}\n\\end{equation}\nIn the definition of $C,$ the parts for $u<-\\varepsilon$ and $u>\\varepsilon$ overlap, but we interpret\nit that for $u<-\\varepsilon$ we take the path above the real axis and for $u>\\varepsilon$ below the real axis. This path is illustrated in Figure 1.\n\n\\begin{figure}[h]\n\\centerline{ \\includegraphics[width=3.5in, height=0.77in]{negative-2.png}}\n\\caption{Path of $C$}\n\\end{figure}\n\nNow consider the complex contour integral\n\\begin{equation}\\label{c-int}\n\\int_{C}t^{z}e^{(1-x)t}f(t)\\frac{dt}{t}.\n\\end{equation}\nSince we have to treat $t^{z-1}$ on $C,$ we shall choose a complex power $t^z$ for $z\\in\\mathbb C.$\nDenoting the argument of $t$ by $\\arg t,$ we can define a single-valued function $\\log t$ on\n$\\mathbb C-\\{s=a+ib\\mid b=0,a\\leq0\\}$ by\n\\begin{equation}\n\\log t=\\log|t|+i\\arg t \\quad(-\\pi<\\arg t<\\pi).\n\\end{equation}\nUsing this, a single-valued analytic function $t^z$ on $\\mathbb C-\\{s=a+ib\\mid b=0,a\\leq0\\}$\nis defined by\n\\begin{equation}\nt^z=e^{z\\log t}=e^{z(\\log|t|+i\\arg t)},\n\\end{equation}\nwhere $-\\pi<\\arg t<\\pi.$\nWe divide the contour $C$ into three pieces as follows.\n\n$\\quad C_1$: the part of the real axis from $-\\infty$ to $-\\varepsilon,$\n\n$\\quad C_2$: the positively oriented circle of radius $\\varepsilon$ with center at the origin,\n\n$\\quad C_3$: the part of the real axis from $-\\varepsilon$ to $-\\infty.$\n\n\\noindent\nWe have $C=C_1+C_2+C_3.$ For $t$ on $C_1,$ we put $\\arg t=-\\pi,$ and for $t$ on $C_{3},$ we put $\\arg t=\\pi.$\nThen the integral on 
$C_1$ is given by\n\\begin{equation}\n\\begin{aligned}\n\\int_{C_1}t^{z}e^{(1-x)t}f(t)\\frac{dt}{t}&=\\int_{\\infty}^\\varepsilon t^{z-1}e^{-\\pi iz}e^{-(1-x)t}f(-t)dt \\\\\n&=-e^{-\\pi iz}\\int^{\\infty}_\\varepsilon t^{z-1}e^{-(1-x)t}f(-t)dt\n\\end{aligned}\n\\end{equation}\nand on $C_3$ by\n\\begin{equation}\n\\begin{aligned}\n\\int_{C_3}t^{z}e^{(1-x)t}f(t)\\frac{dt}{t}&=\\int_\\varepsilon^{\\infty} t^{z-1}e^{\\pi iz}e^{-(1-x)t}f(-t)dt \\\\\n&=e^{\\pi iz}\\int^{\\infty}_\\varepsilon t^{z-1}e^{-(1-x)t}f(-t)dt.\n\\end{aligned}\n\\end{equation}\nThus, together we get\n\\begin{equation}\\label{inte 1-3}\n\\begin{aligned}\n\\int_{C}t^{z}e^{(1-x)t}f(t)\\frac{dt}{t}&=(e^{\\pi iz}-e^{-\\pi iz})\\int^{\\infty}_\\varepsilon t^{z-1}e^{-(1-x)t}f(-t)dt \\\\\n&+\\int_{C_2}t^{z}e^{(1-x)t}f(t)\\frac{dt}{t}.\n\\end{aligned}\n\\end{equation}\nThe circle $C_2$ is parameterized as $t=\\varepsilon e^{i\\theta}(-\\pi\\leq \\theta\\leq\\pi),$ so on $C_2$ we have\n$t^{z-1}=\\varepsilon^{z-1}e^{i(z-1)\\theta},$\nand the absolute value of the integral is estimated from above as\n\\begin{equation}\\label{inte-esi}\n\\begin{aligned}\n\\left|\\int_{C_2}t^{z}e^{(1-x)t}f(t)\\frac{dt}{t}\\right|\n&\\leq \\int_{-\\pi}^\\pi \\left| \\varepsilon^{z-1} e^{i(z-1)\\theta}e^{(1-x)\\varepsilon e^{i\\theta}}f(\\varepsilon e^{i\\theta})i\\varepsilon\ne^{i\\theta}\\right| d\\theta \\\\\n&=\\varepsilon^{\\text{Re}(z)}\n \\int_{-\\pi}^\\pi \\left| e^{iz\\theta}e^{(1-x)\\varepsilon e^{i\\theta}} f(\\varepsilon e^{i\\theta})\\right| d\\theta.\n\\end{aligned}\n\\end{equation}\nSince $tf(t)$ is analytic on an open disc of radius $r$ centered at 0, we have $\\varepsilon f(\\varepsilon e^{i\\theta})$ is bounded as a function of $\\varepsilon$ and $\\theta$\nif $\\varepsilon\\leq r\/2$ and $-\\pi\\leq \\theta\\leq\\pi$.\nSo if we take the limit $\\varepsilon\\to0$, then, for Re$(z)>1,$ by (\\ref{inte-esi}) we 
get\n\\begin{equation}\\label{inte-esi-0}\n\\lim_{\\varepsilon\\to0}\\int_{C_2}t^{z}e^{(1-x)t}f(t)\\frac{dt}{t}=0.\n\\end{equation}\nTherefore, using (\\ref{inte-esi-0}), we get\n\\begin{equation}\\label{inte-final}\n\\int_{C}t^{z}e^{(1-x)t}f(t)\\frac{dt}{t}=(e^{\\pi iz}-e^{-\\pi iz})\\int_0^{\\infty} t^{z-1}e^{-(1-x)t}f(-t)dt.\n\\end{equation}\nLet\n\\begin{equation}\nI(f,z,x)=\\frac{1}{2\\pi i}\\int_{C} t^{z}e^{(1-x)t} f(t)\\frac{dt}{t}.\n\\end{equation}\nBy (\\ref{(6)}) and (\\ref{inte-final}), we have\n\\begin{equation}\\label{inte-final-1}\n\\begin{aligned}\nI(f,z,x)&=\\frac{1}{2\\pi i}\\int_{C} t^{z}e^{(1-x)t} f(t)\\frac{dt}{t} \\\\\n&=\\frac{e^{\\pi iz}-e^{-\\pi iz}}{2\\pi i}\\int_0^{\\infty} t^{z-1}e^{-(1-x)t}f(-t)dt \\\\\n&=\\frac{\\sin\\pi z}{\\pi}\\Gamma(z)L(f,z,x) \\\\\n&=\\frac1{\\Gamma(1-z)}L(f,z,x),\n\\end{aligned}\n\\end{equation}\nwhich is the desired result.\n\\end{proof}\n\n\n\\section{Main results}\n\nDenote $\\mathbb{R}^{+}=[0,\\infty)$ and let $H=L^{2}(\\mathbb{R}^{+})$ be the Hilbert space of all complex-valued functions $g$\ndefined almost everywhere on $\\mathbb{R}^{+}$ such that $|g|^{2}$ is Lebesgue integrable, with the pointwise operations\nof addition and multiplication-by-scalars, and with the $L^{2}$-norm\n\\begin{equation}\n\\|g\\|_{2}=\\left(\\int_{\\mathbb{R}^{+}}|g(x)|^{2}dx\\right)^{1\/2}.\n\\end{equation}\nGiven a function $f(t)$ such that $tf(-t)$ is a $C^{\\infty}$ function on $[0,\\infty)$\ntending rapidly to $0$ at infinity, assume further that $tf(t)$ is analytic on an open disc of radius $r$ centered at $0$ in the complex plane $\\mathbb{C}$ and has the following power series expansion at $t=0$\n\\begin{equation}\\label{(3.1)}\n(-t)f(t)=\\sum_{n=0}^{\\infty}a_{n}t^{n}\n\\end{equation}\nwith $a_{0}= 0$ and $a_{1}\\neq 0.$ Define an operator on $H$ by\n\\begin{equation}\\label{delta}\n\\hat{\\Delta}_{f}\\Psi(x)=\\frac{\\mathbbm{1}}{f(-i\\hat{p})}\\Psi(x),\n\\end{equation}\nfor any $\\Psi(x)\\in H$. 
Here we recall that $\\hat{p}=-i\\frac d{dx}$ denotes the momentum operator. Then by (\\ref{delta}), \n\\begin{equation}\n\\hat{\\Delta}_{f}^{-1}\\Psi(x)=f(-i\\hat{p})\\Psi(x),\n\\end{equation}\nand from (\\ref{(3.1)}), we get\n\\begin{equation}\\label{op-f}\n\\hat{\\Delta}_{f}^{-1}\\Psi(x)=f(-i\\hat{p})\\Psi(x)=\\frac{\\mathbbm{1}}{i\\hat{p}}\\sum_{n=0}^{\\infty}a_{n}(-i\\hat{p})^{n}\\Psi(x).\n\\end{equation}\nUnfortunately, the above series does not always converge. However, if we truncate it by setting\n\\begin{equation}\\label{delta-inv-tr}\ns_N(x)=\\frac{\\mathbbm{1}}{i\\hat p}\\sum_{n=0}^Na_{n}(-i\\hat p)^{n}\\Psi(x)\n\\end{equation}\nfor some integer $N\\geq1,$ then we require that\n\\begin{equation}\\label{delta-inv-tr1}\n\\hat{\\Delta}_{f}^{-1}\\Psi(x)=s_N(x)+O(x^{-N}) \\quad\\text{as } x\\to\\infty.\n\\end{equation}\n\n\nWe give the following two examples of the operator $\\hat{\\Delta}_{f}$.\n\nFirst, let $\\{a_n\\}=\\{B_n\/n!\\}$ in (\\ref{(3.1)}), where $B_n$ are the Bernoulli numbers (see \\cite[p. 803]{AS}). \nThen by (\\ref{op-f}), we obtain\n\\begin{equation}\\label{op-f-ex1}\n\\hat{\\Delta}_{f}^{-1}=f(-i\\hat p)=\\frac{\\mathbbm{1}}{i\\hat p}\\sum_{n=0}^{\\infty}B_{n}\\frac{(-i\\hat p)^{n}}{n!}.\n\\end{equation}\nFrom the generating function of Bernoulli numbers, we have\n\\begin{equation}\\label{op-f-ex1-1}\n\\hat{\\Delta}_{f}^{-1}=\\frac{\\mathbbm{1}}{i\\hat p}\\sum_{n=0}^{\\infty}B_{n}\\frac{(-i\\hat p)^{n}}{n!}=\\frac{\\mathbbm{1}}{i\\hat p}\\frac{-i\\hat p}{e^{-i\\hat p}-\\mathbbm{1}},\n\\end{equation}\nwhich is equivalent to $\\hat{\\Delta}_{f}=\\mathbbm{1}-e^{-i\\hat p}$ (see \\cite{Bender}).\n\nNext, let $\\{a_n\\}=\\{E_n(0)\/n!\\}$ in (\\ref{(3.1)}), where $E_n(x)$ are the Euler polynomials (see \\cite[p. 803]{AS}). 
\nSimilarly, by (\\ref{op-f}), we have\n\\begin{equation}\\label{op-f-ex2}\n\\hat{\\Delta}_{f}^{-1}=f(-i\\hat p)=\\frac{\\mathbbm{1}}{i\\hat p}\\sum_{n=0}^{\\infty}E_{n}(0)\\frac{(-i\\hat p)^{n}}{n!}.\n\\end{equation}\nFrom the generating function of Euler polynomials, we have\n\\begin{equation}\\label{op-f-ex2-1}\n\\hat{\\Delta}_{f}^{-1}=\\frac{\\mathbbm{1}}{i\\hat p}\\sum_{n=0}^{\\infty}E_{n}(0)\\frac{(-i\\hat p)^{n}}{n!}=\\frac{2}{i\\hat p}\\frac{\\mathbbm{1}}{e^{-i\\hat p}+\\mathbbm{1}},\n\\end{equation}\nwhich is equivalent to $\\hat{\\Delta}_{f}=\\frac12(i\\hat p)(\\mathbbm{1}+e^{-i\\hat p}).$\n\nWe are now in a position to state the main result. \n\n\\begin{theorem}\\label{main}\nLet $\\hat{p}=-i\\frac d{dx}$ be the momentum operator \nand $\\hat{H}_{f}=\\hat{\\Delta}_{f}^{-1}(\\hat{x}\\hat{p}+\\hat{p}\\hat{x})\\hat{\\Delta}_{f}$ be an operator\non the Hilbert space $H=L^{2}(\\mathbb{R}^{+}),$ where $\\hat x$ is the position operator. If $z\\neq 1$, then\n$$\n\\hat{H}_{f}\\Psi(f,z,x)=i(2z-1)\\Psi(f,z,x),\n$$\nwhere $\\Psi(f,z,x)=(-1)^{z-1}L(f,z,x+1)$ and $L(f,z,x)$ is defined in (\\ref{(6)}) and Remark \\ref{rem-2}.\nFurthermore, if we impose the boundary condition\n$$\n\\Psi(f,z,0)=0\n$$\non the differential equation $\\hat{H}_{f}\\Psi(f,z,x)=i(2z-1)\\Psi(f,z,x),$ then\nthe $n$th eigenstates of the Hamiltonian $\\hat{H}_{f}$ are\n$$\\Psi(f,z_{n},x)=(-1)^{z_{n}-1}L(f,z_{n},x+1),$$ \nthe $n$th eigenvalues are $E_{n}=i(2z_{n}-1)$ for all $n,$ and\nfrom the boundary condition, $z_{n}=\\frac{1}{2}(1-iE_{n})$ are the zeros of the general family of zeta functions $L(f,z)\\equiv L(f,z,1)$.\n\\end{theorem}\n\\begin{proof} \nBy (\\ref{op-f}), \n\\begin{equation}\\label{op-f-a}\n\\hat{\\Delta}_{f}^{-1}\\Psi(x)=\\frac{\\mathbbm{1}}{i\\hat{p}}\\sum_{n=0}^{\\infty}a_{n}(-i\\hat{p})^{n}\\Psi(x).\n\\end{equation}\nSince $\\hat{p}=-i\\frac d{dx}$ and $i\\hat{p}=\\frac d{dx}$, for $z\\neq 1$, we 
have\n\\begin{equation}\n\\begin{aligned}\ni\\hat{p}\\left(\\frac{x^{1-z}}{1-z}\\right)&=i(-i)\\frac d{dx}\\left[\\frac{x^{1-z}}{1-z}\\right]\\\\\n&=\\frac d{dx}\\left(\\frac{x^{1-z}}{1-z}\\right)\\\\\n&=x^{-z}.\n\\end{aligned}\n\\end{equation}\nThus\n\\begin{equation}\\label{op-xz-1}\n\\frac{\\mathbbm{1}}{i\\hat{p}}x^{-z}=\\frac{x^{1-z}}{1-z}.\n\\end{equation}\nBy (\\ref{op-f-a}) with $\\Psi(x)=x^{-z}$ and (\\ref{op-xz-1}), we have\n\\begin{equation}\\label{(7)}\n\\begin{aligned}\n\\hat{\\Delta}_{f}^{-1}x^{-z}\n&=\\sum_{n=0}^{\\infty}a_{n}(-i\\hat{p})^{n}\\frac{\\mathbbm{1}}{i\\hat{p}}x^{-z} \\\\\n&=\\sum_{n=0}^{\\infty}a_{n}(-i\\hat{p})^{n}\\left(\\frac{x^{1-z}}{1-z}\\right)\\\\\n&=\\frac{1}{1-z}\\sum_{n=0}^{\\infty}a_{n}(-i\\hat{p})^{n}x^{1-z}.\n\\end{aligned}\n\\end{equation}\nSince $i\\hat{p}=\\frac d{dx}$ and $\\left(\\frac {d}{dx}\\right)^nx^{\\mu}=\\frac{\\Gamma(\\mu+1)}{\\Gamma(\\mu-n+1)}x^{\\mu-n}$, from (\\ref{(7)}), setting $\\mu=1-z,$ we obtain the asymptotic series (see \\cite[(4)]{Bender})\n\\begin{equation}\\label{(8)}\n \\hat{\\Delta}_{f}^{-1}x^{-z}\n=\\frac{\\Gamma(2-z)}{1-z}\\sum_{n=0}^{\\infty}a_{n}(-1)^{n}\\frac{x^{1-z-n}}{\\Gamma(2-z-n)},\n \\end{equation}\n which is valid in the limit $x\\to\\infty.$\n Since $\\Gamma(2-z-n)$ has the following integral representation\n \\begin{equation*}\n \\frac{1}{\\Gamma(2-z-n)}=\\frac{1}{2\\pi i}\\int_{C}e^{u}u^{n+z-2}du,\n \\end{equation*}\n where $C$ denotes a Hankel contour that encircles the negative-$u$ axis in the positive orientation (see \\cite[p.~255]{AS}),\nthen by (\\ref{(3.1)}) and (\\ref{(8)}) we have\n \\begin{equation}\\label{(9)}\n \\begin{aligned}\n \\hat{\\Delta}_{f}^{-1}x^{-z}\n&=\\frac{\\Gamma(1-z)}{2\\pi i}x^{1-z}\\int_{C}e^{u}u^{z-2}du\\sum_{n=0}^{\\infty}a_{n}\\left(-\\frac{u}{x}\\right)^{n}\\\\\n&=\\frac{\\Gamma(1-z)}{2\\pi i}x^{1-z}\\int_{C}e^{u}u^{z-2}f\\left(-\\frac{u}{x}\\right)\\left(\\frac{u}{x}\\right)du.\n\\end{aligned}\n \\end{equation}\n Letting 
$\\frac{u}{x}=t$ in (\\ref{(9)}), we have\n \\begin{equation}\n \\begin{aligned}\n \\hat{\\Delta}_{f}^{-1}x^{-z}&=\\frac{\\Gamma(1-z)}{2\\pi i}\\int_{C} t^{z}e^{xt} f(-t)\\frac{dt}{t}\\\\\n &=(-1)^{z-1}\\frac{\\Gamma(1-z)}{2\\pi i}\\int_{C}t^{z}e^{-xt} f(t)\\frac{dt}{t} \\\\\n &=(-1)^{z-1}\\Gamma(1-z)\\frac{1}{2\\pi i}\\int_{C}t^{z}e^{(1-(x+1))t} f(t)\\frac{dt}{t}.\n \\end{aligned}\n \\end{equation}\nTherefore, by Proposition \\ref{2.1} and Remark \\ref{rem-2},\n\\begin{equation}\\label{(10)}\n \\hat{\\Delta}_{f}^{-1}x^{-z}=(-1)^{z-1}L(f,z,x+1).\n \\end{equation}\n Note that\n \\begin{equation}\\label{note}\n \\begin{aligned}\n (\\hat x\\hat p+\\hat p\\hat x)(x^{-z})&=-i\\left(2x\\frac d{dx}+\\mathbbm{1}\\right)(x^{-z}) \\\\\n &=-i\\left(2x\\frac d{dx}(x^{-z}) +\\mathbbm{1}(x^{-z}) \\right) \\\\\n &=-i\\left(2x(-z)x^{-z-1}+x^{-z}\\right) \\\\\n &=i(2z-1)x^{-z}.\n \\end{aligned}\n \\end{equation}\n Let $\\hat{H}_{f}=\\hat{\\Delta}_{f}^{-1}(\\hat{x}\\hat{p}+\\hat{p}\\hat{x})\\hat{\\Delta}_{f}$.\n Denote by $\\Psi(f,z,x)=(-1)^{z-1}L(f,z,x+1)$,\n from (\\ref{(10)}) and (\\ref{note}), we have $\n \\Psi(f,z,x)=\\hat{\\Delta}_{f}^{-1}x^{-z}$ and\n \\begin{equation}\\label{(11*)}\n \\begin{aligned}\n \\hat{H}_{f}\\Psi(f,z,x)&=\\hat{\\Delta}_{f}^{-1}(\\hat{x}\\hat{p}+\\hat{p}\\hat{x})\\hat{\\Delta}_{f}(\\hat{\\Delta}_{f}^{-1} x^{-z})\\\\\n &=\\hat{\\Delta}_{f}^{-1}(\\hat{x}\\hat{p}+\\hat{p}\\hat{x})x^{-z}\\\\\n &=\\hat{\\Delta}_{f}^{-1}[i(2z-1)x^{-z}]\\\\\n &=i(2z-1)\\hat{\\Delta}_{f}^{-1}x^{-z}\\\\\n &=i(2z-1)\\Psi(f,z,x).\n \\end{aligned}\n \\end{equation}\n\nThus if we propose the boundary condition\n$\\Psi(f,z,0)=0$ to the differential equation (\\ref{(11*)}), then we have\nthe $n$th eigenstates of the Hamiltonian $\\hat{H}_{f}$ are\n\\begin{equation}\\label{Eq1}\n\\Psi(f,z_{n},x)=(-1)^{z_{n}-1}L(f,z_{n},x+1),\n\\end{equation} \nthe $n$th eigenvalues are $E_{n}=i(2z_{n}-1)$ for all $n.$\nFrom the boundary condition $\\Psi(f,z,0)=0,$ \nwe have $\\Psi(f,z_{n},0)=0$ and by 
(\\ref{Eq1}) $L(f,z_{n},1)=0,$\nthus $z_{n}=\\frac{1}{2}(1-iE_{n})$ are the zeros of the general family zeta functions $L(f,z)$ \n(since by (\\ref{(6)}) $L(f,z,1)=L(f,z)$).\n\\end{proof}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe growing population of known luminous~($\\sim10^{47}~\\mathrm{erg~s^{-1}}$) quasars at the highest redshifts~($z\\gtrsim6$) shows that some supermassive black holes~(SMBHs) were already in place within the first $\\sim1~\\mathrm{Gyr}$ since the Big Bang. The inferred masses of these quasars range from $\\sim10^9-10^{10}~M_{\\odot}$, similar to the most massive SMBHs in the local universe.\nTo date, $\\gtrsim200$ quasars have already been discovered at $z\\gtrsim6$~\\citep{2001AJ....122.2833F,2010AJ....139..906W,2015MNRAS.453.2259V,2016ApJ...833..222J,2016Banados,2017MNRAS.468.4702R,2018ApJS..237....5M}, which overall correspond to number densities of $1~\\mathrm{cGpc}^{-3}$. Additionally, there are\na handful of objects discovered at $z\\gtrsim7$~\\citep{2011Natur.474..616M,2018ApJ...869L...9W,2019ApJ...872L...2M,2019AJ....157..236Y}, which includes the most distant quasars observed to date \\citep{2018Natur.553..473B,2021ApJ...907L...1W} at $z\\sim7.6$. The recently launched James Webb Space Telescope\n\\citep[JWST;][]{2006SSRv..123..485G} and planned facilities such as Lynx X-ray Observatory \n\\citep{2018arXiv180909642T} have a promising prospect of revealing the AGN~(active galactic nuclei) progenitors of these quasars at even higher redshifts. Additionally, gravitational wave events from Laser Interferometer Space Antenna \n\\citep[LISA;][]{2019arXiv190706482B} will also provide insights into the prevalence of BH mergers and the growth history of these quasars. These observations are going to be crucial to understanding the assembly of these quasars, which is an outstanding challenge for theoretical models of BH formation and growth. 
\n\n\nThe origin\nof these $z\\gtrsim6$ quasars, and the larger SMBH populations in general, is a subject of active debate. Remnants of the first generation of Pop III stars, a.k.a. Pop III BH seeds, are popular candidates ~\\citep{2001ApJ...550..372F,2001ApJ...551L..27M,2013ApJ...773...83X,2018MNRAS.480.3762S}. The BH seed mass that results from the conjectured Pop III scenario depends on the initial mass function of Pop III stars themselves. This is predicted to be more top-heavy than that of present-day stellar populations, with masses typically ranging from $\\sim10-100~M_{\\odot}$~\\citep{2014ApJ...781...60H, 2016ApJ...824..119H}. But even the most massive Pop III seeds~(initial BH masses of $\\sim10^2~M_{\\odot}$) would require significant periods of super-Eddington accretion to grow by $\\gtrsim7$ orders of magnitude to form a $z\\gtrsim6$ quasar. These stringent growth rate requirements can be alleviated to an extent with channels producing more massive seeds. Theories proposed for massive seed formation\ninclude runaway collisions of stars or black holes in dense nuclear star clusters forming ``NSC seeds\" with masses $\\sim10^{2}-10^{3}~M_{\\odot}$ ~\\citep{2011ApJ...740L..42D,2014MNRAS.442.3616L,2020MNRAS.498.5652K,2021MNRAS.503.1051D,2021MNRAS.tmp.1381D}, and direct collapse of gas in atomic cooling~($T_{\\mathrm{vir}}>10^{4}~\\mathrm{K}$) halos forming ``direct collapse black hole (DCBH) seeds\" with masses $\\sim10^{4}-10^{6}~M_{\\odot}$~\\citep{2003ApJ...596...34B,2006MNRAS.370..289B,2014ApJ...795..137R,2016MNRAS.458..233L,2018MNRAS.476.3523L,2019Natur.566...85W,2020MNRAS.492.4917L}. \n\nThe most massive DCBH seeds are seen as promising candidates for explaining the rapid formation of the $z\\gtrsim6$ quasars. Their formation requires gas to undergo an isothermal collapse at temperatures $\\gtrsim10^4~\\mathrm{K}$~(corresponding to a Jeans mass $\\gtrsim10^{4}~M_{\\odot}$). 
For this to occur, the gas needs to be devoid of chemical species that are efficient coolants at $\\lesssim10^4~\\mathrm{K}$, namely metals and molecular hydrogen. To suppress molecular hydrogen, the gas must be exposed to Lyman Werner radiation with minimum fluxes $\\gtrsim1000~J_{21}$ as inferred from small scale hydrodynamic simulations~\\citep{2010MNRAS.402.1249S} as well as one-zone chemistry models~\\citep{2014MNRAS.445..544S,2017MNRAS.469.3329W}. Such high fluxes can only be provided by nearby star forming galaxies~\\citep{2014MNRAS.445.1056V,2017NatAs...1E..75R,2021MNRAS.503.5046L,2021MNRAS.tmp.3110B}. However, these star forming regions can also pollute the gas with metals, which would then eliminate any possibility of direct collapse. Overall, this implies that the window for DCBH seed formation is extremely narrow, and it is unclear whether they form abundantly enough to explain the inferred densities of these objects. \n\n\nSemi analytic models~(SAMs) have\nso far been extensively used in the modeling of black hole seeds~\\citep{2007MNRAS.377.1711S,Volonteri_2009, 2012MNRAS.423.2533B,2018MNRAS.476..407V, 2018MNRAS.481.3278R, 2019MNRAS.486.2336D, 2020MNRAS.491.4973D}. Several such SAMs have also been used to study the feasibility of different seeding channels as possible origins of $z\\gtrsim6$ quasars. For example, \\cite{2011MNRAS.416.1916V,2012MNRAS.427L..60V,2014MNRAS.444.2442V} developed the \\texttt{GAMETTE-QSODUST} data constrained SAM to probe the $z\\gtrsim6$ quasars and their host galaxies. This model was used in \\cite{2016MNRAS.457.3356V} and \\cite{2021MNRAS.506..613S}, showing that the formation of heavy seeds~($\\sim10^5~M_{\\odot}$) is most crucial to the assembly of the first quasars, particularly in models where the BH accretion rate is capped at the Eddington limit. 
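The preference for heavy seeds under Eddington-capped accretion can be made concrete with a short e-folding estimate (a sketch; the radiative efficiency $\epsilon=0.1$ and uninterrupted Eddington-rate growth are illustrative assumptions, not parameters taken from the models cited above):

```python
import math

# Eddington timescale t_Edd = sigma_T * c / (4 pi G m_p), in years (SI inputs).
sigma_T = 6.652e-29   # Thomson cross-section [m^2]
c = 2.998e8           # speed of light [m/s]
G = 6.674e-11         # gravitational constant [m^3 kg^-1 s^-2]
m_p = 1.673e-27       # proton mass [kg]
t_edd_yr = sigma_T * c / (4.0 * math.pi * G * m_p) / 3.156e7   # ~4.5e8 yr

# For radiative efficiency eps, the e-folding time of Eddington-limited
# growth is t_e = t_Edd * eps / (1 - eps), i.e. ~50 Myr for eps = 0.1.
eps = 0.1
t_efold_yr = t_edd_yr * eps / (1.0 - eps)

def growth_time_gyr(m_seed, m_final):
    """Time in Gyr to grow from m_seed to m_final at the Eddington rate."""
    return t_efold_yr * math.log(m_final / m_seed) / 1e9

t_light = growth_time_gyr(1e2, 1e9)   # Pop III-like seed
t_heavy = growth_time_gyr(1e5, 1e9)   # DCBH-like seed
```

Under these assumptions a $10^2~M_{\odot}$ seed needs $\approx0.8~\mathrm{Gyr}$ of uninterrupted Eddington accretion to reach $10^9~M_{\odot}$, close to the entire age of the universe at $z\sim6$, whereas a $10^5~M_{\odot}$ seed needs only $\approx0.5~\mathrm{Gyr}$.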
\\cite{2016MNRAS.458.3047P} and \\cite{2017MNRAS.471..589P} showed that light seeds~($\\sim10^2~M_{\\odot}$) require super-Eddington accretion to grow into the $z\\gtrsim6$ quasars. \\cite{2021MNRAS.503.5046L} applied a semi-analytic framework on a dark matter only simulation of a $3\\times10^{12}~M_{\\odot}$ halo forming at $z=6$~(presumably hosting a luminous quasar); they demonstrated that the progenitors of this halo can be sites for the formation of massive DCBH seeds.\n\nWhile SAMs, being computationally inexpensive, can probe a wide range of seed models relatively quickly, they are unable to self-consistently track the hydrodynamics of gas. Cosmological hydrodynamic simulations~\\citep{2012ApJ...745L..29D,2014Natur.509..177V,2015MNRAS.452..575S,2015MNRAS.450.1349K,2015MNRAS.446..521S,2016MNRAS.460.2979V,2016MNRAS.463.3948D,2017MNRAS.467.4739K,2017MNRAS.470.1121T,2019ComAC...6....2N,2020MNRAS.498.2219V} are more readily able to decipher the role of gas hydrodynamics in forming the $z\\gtrsim6$ quasars~(see, e.g., the review by \\citealt{2020NatRP...2...42V}). Since the $z\\gtrsim6$ quasars are extremely rare, we need extremely large volumes to probe these objects~(note however that they are much more computationally expensive than SAMs). \\texttt{MassiveBlack}~\\citep{2012ApJ...745L..29D}, with a volume of $[0.75~\\mathrm{Gpc}]^3$, revealed that $z\\gtrsim6$ quasars can form in extremely massive halos~($\\gtrsim10^{12}~M_{\\odot}\/h$ at $z\\sim6$) via a steady inflow of cold gas driving sustained accretion rates close to the Eddington limit. This was further confirmed using follow-up zoom simulations at much higher resolutions~\\citep{2014MNRAS.440.1865F}. 
\\texttt{BlueTides}~\\citep{2016MNRAS.455.2778F}, with a volume of $[0.5~\\mathrm{Gpc}]^3$~(but higher resolution compared to \\texttt{MassiveBlack}), further revealed the role of higher order features~(particularly low tidal fields, see \\citealt{2017MNRAS.467.4243D}) of the initial density field in producing the fastest accretion rates necessary to assemble the $z\\gtrsim7$ quasars. \n\nThe results of \\cite{2017MNRAS.467.4243D} motivated \\cite{2021MNRAS.tmp.2867N}~(hereafter N21), which was a systematic study of the impact of higher order features of rare density peaks on the subsequent black hole~(BH) growth. Using the method of constrained Gaussian realizations~\\citep{1991ApJ...380L...5H,1996MNRAS.281...84V}, N21 was able to generate initial conditions comprising of the rarest density peaks with the desired higher order features~(i.e. 1st and 2nd order derivatives). They demonstrated that highly compact peaks with low tidal fields led to the fastest BH growth. Due to their finite resolution however, cosmological hydrodynamic simulations are limited in terms of their ability to probe low-mass\nBH seeding channels. Consequently, the vast majority of the simulations targeting $z\\gtrsim6$ quasars described in the previous paragraph~(also including \\citealt{2009MNRAS.400..100S, 2014MNRAS.439.2146C, 2016MNRAS.457L..34C, 2020arXiv201201458Z}) used simple halo based seeding prescriptions~(seeds are inserted in halos above a prescribed halo mass) that do not distinguish between different physical seeding channels. Therefore, while all these simulations have been generally successful in broadly reproducing the $z\\gtrsim6$ quasars, their ability to reveal insights into the seeding environments of these objects is still limited. 
With upcoming LISA measurements being amongst the most promising probes for revealing the mechanism of BH seed formation, the time is ripe for developing simulations that can reliably distinguish between different BH seeding channels.\n\nNumerous studies have implemented\ngas-based black hole seeding prescriptions~\\citep{2011ApJ...742...13B,2013MNRAS.428.2885D,2014MNRAS.442.2304H,2015MNRAS.448.1835T,2016MNRAS.460.2979V,2017MNRAS.470.1121T,2017MNRAS.467.4739K,2017MNRAS.468.3935H,2019MNRAS.486.2827D,2020arXiv200601039L,2020arXiv200204045T}. \\citet{2021MNRAS.507.2012B,2021MNRAS.tmp.3110B} have recently conducted\na systematic study \nto assess the impact of gas-based black hole seeding prescriptions on $z\\gtrsim7$ SMBH populations.\nThese seed models are built on the framework of the \\texttt{IllustrisTNG} galaxy formation model~\\citep{2017MNRAS.465.3291W,2018MNRAS.473.4077P}. They\nseeded black holes in halos via criteria based on dense, metal poor gas mass, halo mass, gas spin as well as incident Lyman Werner~(LW) flux. The resulting family of models is generally agnostic about which theoretical seeding channels they represent, but their parameters could be tuned to represent any of the seeding channels described above\n(PopIII, NSC or DCBH).\nBy applying these models to zoom simulations of modestly overdense regions~($3.3\\sigma$ overdensity, targeting a $\\sim10^{11}~M_{\\odot}\/h$ halo at $z=5$), they\nfound that changing different seed parameters would leave qualitatively distinct imprints on the BH merger rates. In particular, \\cite{2021MNRAS.507.2012B} found that when the dense, metal poor gas mass threshold is increased, it suppresses the seeding and merger rates more strongly at $z\\lesssim15$ compared to higher redshifts. On the other hand, an increase in the total halo mass threshold for seeding causes stronger suppression of seeding and merger rates at $z\\sim11-25$ compared to $z\\lesssim11$. 
These results suggest that differences in the merger rates of LISA binaries will encode insights into their seeding environments. \\cite{2021MNRAS.tmp.3110B} found that even a moderately low LW flux threshold for seeding~($\\gtrsim50~J_{21}$) can dramatically suppress seed formation and prevent the assembly of $z\\gtrsim7$ SMBHs. This suggests that the bulk of the $z\\gtrsim7$ SMBH population~(likely to be revealed by JWST and Lynx) may not originate from DCBH seeding channels. \n\n\nThe zoom regions of \\cite{2021MNRAS.507.2012B,2021MNRAS.tmp.3110B} were not nearly overdense enough to be possible sites for the formation of $z\\gtrsim6$ quasars. In this work, we use constrained Gaussian realizations of extremely overdense regions~($\\gtrsim5\\sigma$ overdensities forming $\\gtrsim10^{12}~M_{\\odot}\/h$ halos by $z\\sim7$), and investigate the impact of BH seed models on the formation of the $z\\gtrsim6$ quasars. Apart from the seed models, our underlying galaxy formation model is adopted from the \\texttt{IllustrisTNG} simulation suite. \n\nSection \\ref{Simulation_setup_sec} describes the simulation setup, including the main features of the \\texttt{IllustrisTNG} galaxy formation model, the BH seeding and accretion models, and the generation of the constrained initial conditions. Section \\ref{Results_sec} describes the main results concerning the impact of environment, seeding and accretion models on BH growth. Finally, Section \\ref{Conclusions_sec} summarizes the main conclusions of our work.\n\n\\section{Simulation setup}\n\\label{Simulation_setup_sec}\nOur simulations were run using the \\texttt{AREPO} code~\\citep{2010MNRAS.401..791S,2011MNRAS.418.1392P,2016MNRAS.462.2603P,2020ApJS..248...32W}, which\nincludes a gravity and magneto-hydrodynamics~(MHD) solver. 
The simulations are cosmological, performed within a representative portion of an expanding universe described by a fixed comoving volume~($9~\\mathrm{cMpc\/h}$ box size) with the following cosmology adopted from \\cite{2016A&A...594A..13P}: ($\\Omega_{\\Lambda}=0.6911, \\Omega_m=0.3089, \\Omega_b=0.0486, H_0=67.74~\\mathrm{km~s^{-1}~Mpc^{-1}},\\sigma_8=0.8159,n_s=0.9667$). The code uses a PM-Tree~\\citep{1986Natur.324..446B} method to solve for gravity, sourced by dark matter, gas, stars and BHs. Within the resulting gravitational potential, the gas dynamics is computed by the MHD solver, which uses a quasi-Lagrangian description of the fluid within an unstructured grid generated via a Voronoi tessellation of the domain.\n\nOur galaxy formation model is adopted from the \\texttt{IllustrisTNG} simulation suite~\\citep{2018MNRAS.475..676S,2018MNRAS.475..648P,2018MNRAS.475..624N,2018MNRAS.477.1206N,2018MNRAS.480.5113M,2019ComAC...6....2N} \\citep[see also][]{2018MNRAS.479.4056W,2018MNRAS.474.3976G,2019MNRAS.485.4817D,2019MNRAS.484.5587T,2019MNRAS.483.4140R,2019MNRAS.490.3196P,2021MNRAS.500.4597U,2021MNRAS.503.1940H}. The only substantive changes to the galaxy formation model implemented here are in the sub-grid prescriptions\nfor BH seeding and accretion. The remaining aspects of our galaxy formation model are the same as in \\texttt{IllustrisTNG}, as detailed in \\cite{2017MNRAS.465.3291W} and \\cite{2018MNRAS.473.4077P}; here, we provide a brief summary: \n\n\\begin{itemize}\n \\item Energy loss via radiative cooling includes contributions from primordial species~($\\mathrm{H},\\mathrm{H}^{+},\\mathrm{He},\\mathrm{He}^{+},\\mathrm{He}^{++}$, based on \\citealt{1996ApJS..105...19K}), as well as metals~(using pre-calculated tables for cooling rates as in \\citealt{2008MNRAS.385.1443S}) in the presence of a spatially uniform, time-dependent UV background. 
Note that cooling due to molecular hydrogen~($\\mathrm{H}_2$) is not explicitly included in the model.\n \\item Stars are stochastically formed within gas cells with densities exceeding $0.1~\\mathrm{cm}^{-3}$, with an associated time scale of $2.2~\\mathrm{Gyr}$. The star forming gas cells then represent an unresolved multiphase interstellar medium, which is modeled by an effective equation of state~\\citep{2003MNRAS.339..289S,2014MNRAS.444.1518V}. The model implicitly assumes that stars are produced within an unresolved cold dense component in these gas cells, which would presumably form via $\\mathrm{H}_2$ cooling. \n \\item The stellar evolution model is adopted from \\cite{2013MNRAS.436.3031V} with modifications for \\texttt{IllustrisTNG} as in \\cite{2018MNRAS.473.4077P}. Star particles represent a single stellar population with fixed age and metallicity. The initial mass function is assumed to follow \\cite{2003PASP..115..763C}. The stellar evolution subsequently leads to chemical enrichment, wherein the abundances of seven metal species~(C, N, O, Ne, Mg, Si, Fe) are individually tracked in addition to H and He. \n \n \\item Feedback from stars and Type Ia\/II Supernovae is modelled as galactic scale winds~\\citep{2018MNRAS.475..648P}, via which mass, momentum and metals are deposited onto the gas surrounding the star particles. \n \n\\item Models for BH formation and growth are detailed in the next two subsections. The treatment of BH dynamics and mergers is the same as in \\texttt{IllustrisTNG}. Due to the limited gas mass resolution, our simulations cannot self-consistently reveal the small-scale dynamics of BHs, particularly at their lowest masses. To stabilize the BH dynamics, each BH is ``re-positioned'' to the nearest potential minimum within its ``neighborhood''~(defined by $10^3$ nearest neighboring gas cells). As a result, a BH is also promptly merged when it is within the neighborhood of another BH. 
\n\\end{itemize}\n\\subsection{Black hole seeding}\n\\label{Black hole seeding}\nWe consider a range of BH seeding prescriptions, which include the default halo-based seeding prescription of \\texttt{IllustrisTNG} where seeds of mass $8\\times10^5~M_{\\odot}\/h$ are inserted in halos which exceed a threshold mass of $5\\times10^{10}~M_{\\odot}\/h$ and do not already contain a BH~(hereafter referred to as the ``TNG seed model''). \n\nAdditionally, we explore the gas-based seeding prescriptions developed in \\cite{2021MNRAS.507.2012B} and \\cite{2021MNRAS.tmp.3110B}. These comprise a combination of seeding criteria based on various gas properties of halos. These criteria are designed such that our overall family of seed models broadly encompasses popular theoretical channels such as Pop III, NSC and DCBH seeds, all of which exclusively form in regions of dense, metal poor gas. Here we briefly summarize them as follows: \n\\begin{itemize}\n\\item \\textit{Dense, metal poor gas mass criterion:} Seeds can only form in halos that exceed a threshold for dense~($>0.1~\\mathrm{cm}^{-3}$), metal poor~($Z<10^{-4}~Z_{\\odot}$) gas mass, specified by $\\tilde{M}_{\\mathrm{sf,mp}}$ in units of the seed mass $M_{\\mathrm{seed}}$.\n\\item \\textit{Halo mass criterion:} Seeds can only form in halos that have exceeded a threshold for the total halo mass, specified by $\\tilde{M}_{h}$ in units of the seed mass $M_{\\mathrm{seed}}$.\n\n\\item \\textit{LW flux criterion:} In selected models, we also require the dense, metal poor gas to be exposed to Lyman-Werner~(LW) fluxes above a critical value $J_{\\mathrm{crit}}$. More specifically, seeds only form in halos with a minimum threshold for dense, metal poor, LW illuminated gas mass, denoted by $\\tilde{M}_{\\mathrm{sf,mp,LW}}$ in units of the seed mass $M_{\\mathrm{seed}}$. Star formation is suppressed in these seed forming regions. 
Given that our simulations do not contain full radiative transfer, the LW flux from Pop III and Pop II stars is computed using an analytic prescription described in \\cite{2021MNRAS.tmp.3110B}.\n\\end{itemize}\n\nOur seed model is therefore described by four parameters, namely $\\tilde{M}_{\\mathrm{sf,mp}}$, $\\tilde{M}_{\\mathrm{h}}$, $J_{\\mathrm{crit}}$ and $M_{\\mathrm{seed}}$. All of our simulations include the first two parameters, and throughout the text\nthe \\textit{dense, metal poor gas mass criterion} and \\textit{halo mass criterion} are labelled as \\texttt{SM*_FOF*} where the `*'s correspond to the values of $\\tilde{M}_{\\mathrm{sf,mp}}$ and $\\tilde{M}_{\\mathrm{h}}$. For example, $\\tilde{M}_{\\mathrm{sf,mp}}=5$ and $\\tilde{M}_{\\mathrm{h}}=3000$ corresponds to \\texttt{SM5_FOF3000}. Runs which additionally apply the \\textit{LW flux criterion} contain an extra suffix \\texttt{LW*} where `*' corresponds to $J_{\\mathrm{crit}}$; for example, if a criterion with $J_{\\mathrm{crit}}=300~J_{21}$ is added to \\texttt{SM5_FOF3000}, it is labelled as \\texttt{SM5_FOF3000_LW300}. 
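To make the interplay of these criteria explicit, the combined check can be sketched as a simple per-halo function. The following is our illustrative Python sketch, not the actual \\texttt{AREPO} implementation; the function name and argument layout are ours, and all thresholds are expressed in units of $M_{\\mathrm{seed}}$ as above:

```python
# Minimal sketch (ours, not the actual AREPO implementation) of the
# gas-based seeding check described above; thresholds are in units of M_seed.

def forms_seed(m_dense_mp, m_halo, m_seed=8e5,
               m_sfmp_tilde=5.0, m_h_tilde=3000.0,
               m_dense_mp_lw=None, j_crit=None):
    """True if a BH-less halo satisfies every active seeding criterion.

    Masses in Msun/h. The LW criterion is only applied when j_crit is set,
    in which case m_dense_mp_lw is the dense, metal-poor, LW-illuminated
    gas mass (i.e. gas with incident flux above j_crit).
    """
    if m_dense_mp < m_sfmp_tilde * m_seed:      # dense, metal-poor gas mass
        return False
    if m_halo < m_h_tilde * m_seed:             # total halo mass
        return False
    if j_crit is not None:                      # optional LW flux criterion
        if m_dense_mp_lw is None or m_dense_mp_lw < m_sfmp_tilde * m_seed:
            return False
    return True

# Example: SM5_FOF3000 with M_seed = 8e5 Msun/h
print(forms_seed(m_dense_mp=5e6, m_halo=3e9))   # True: both thresholds met
```

In this picture, adding the LW suffix (e.g. \\texttt{SM5_FOF3000_LW300}) only ever removes candidate halos, which is why the LW criterion can strongly suppress seed formation.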
Lastly, the seed mass $M_{\\mathrm{seed}}$ will be explicitly stated in the text and figure legends.\n\n\\subsection{BH accretion and feedback models}\nBlack holes grow via a modified Bondi-Hoyle accretion prescription, with\nthe maximum accretion rate limited to some factor $f_{\\mathrm{edd}} \\geq 1$ times\nthe Eddington accretion rate (which we refer to as the `Eddington factor'):\n\\begin{eqnarray}\n\\dot{M}_{\\mathrm{BH}}=\\mathrm{min}(\\alpha \\dot{M}_{\\mathrm{Bondi}}, \\: f_{\\mathrm{edd}}\\dot{M}_{\\mathrm{Edd}})\\\\\n\\dot{M}_{\\mathrm{Bondi}}=\\frac{4 \\pi G^2 M_{\\mathrm{BH}}^2 \\rho}{c_s^3}\\\\\n\\dot{M}_{\\mathrm{Edd}}=\\frac{4\\pi G M_{\\mathrm{BH}} m_p}{\\epsilon_r \\sigma_T~c}\n\\label{bondi_eqn}\n\\end{eqnarray}\nHere, $\\alpha$ is the `Bondi boost' factor, often used to enhance the accretion rate to compensate for the inability to resolve the small-scale vicinity of the BH. $G$ is the gravitational constant, $\\rho$ is the local gas density, $M_{\\mathrm{BH}}$ is the BH mass, $c_s$ is the local sound speed, $m_p$ is the proton mass, and $\\sigma_T$ is the Thomson scattering cross section. In practice, ``local'' quantities are calculated as the kernel-weighted averages over nearby particles, typically those within a few $\\times h^{-1} \\mathrm{pc}$. Accreting black holes radiate at luminosities given by\n\\begin{equation}\n L=\\epsilon_r \\dot{M}_{\\mathrm{BH}} c^2,\n \\label{bol_lum_eqn}\n\\end{equation}\nwhere $\\epsilon_r$ is the radiative efficiency.\n\nIn the \\texttt{IllustrisTNG} implementation, AGN feedback occurs both in `thermal mode' as well as `kinetic mode'. 
For Eddington ratios~($\\eta \\equiv \\dot{M}_{\\mathrm{BH}}\/\\dot{M}_{\\mathrm{Edd}}$) higher than a critical value of $\\eta_{\\mathrm{crit}}=\\mathrm{min}[0.002(M_{\\mathrm{BH}}\/10^8 M_{\\odot})^2,0.1]$, thermal energy is deposited onto the neighboring gas at a rate of $\\epsilon_{f,\\mathrm{high}} \\epsilon_r \\dot{M}_{\\mathrm{BH}}c^2$ where $\\epsilon_{f,\\mathrm{high}} \\epsilon_r=0.02$. $\\epsilon_{f,\\mathrm{high}}$ is called the ``high accretion state'' coupling efficiency. If the Eddington ratio is lower than the critical value, kinetic energy is injected into the gas at irregular time intervals, which manifests as a `wind' oriented along a randomly chosen direction. The injection rate is $\\epsilon_{f,\\mathrm{low}}\\dot{M}_{\\mathrm{BH}}c^2$ where $\\epsilon_{f,\\mathrm{low}}$ is called the `low accretion state' coupling efficiency~($\\epsilon_{f,\\mathrm{low}} \\lesssim 0.2$). For further details, we refer the interested reader to \\cite{2017MNRAS.465.3291W}. \n\nThe main parameters of our accretion model include the Bondi boost $\\alpha$, the radiative efficiency $\\epsilon_r$ and the Eddington factor $f_{\\mathrm{edd}}$. The default values adopted in the \\texttt{IllustrisTNG} suite are $\\alpha=1,\\epsilon_r=0.2~\\&~f_{\\mathrm{edd}}=1$. We largely use this accretion model, and hereafter refer to it as the ``TNG accretion model''. However, we also run some simulations with different variations of these parameters, particularly when comparing our results to other studies. These variations include different combinations of $\\alpha=1~\\&~100$, $\\epsilon_r=0.2~\\&~0.1$ and $f_{\\mathrm{edd}}=1-100$. 
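The accretion cap and the thermal\/kinetic mode switch described above can be illustrated with a short sketch. This is our simplified stand-in for the actual \\texttt{AREPO} routines, written with cgs constants; the function names are ours:

```python
import numpy as np

# Physical constants in cgs
G = 6.674e-8           # gravitational constant [cm^3 g^-1 s^-2]
C = 2.998e10           # speed of light [cm s^-1]
M_P = 1.673e-24        # proton mass [g]
SIGMA_T = 6.652e-25    # Thomson cross section [cm^2]
M_SUN = 1.989e33       # solar mass [g]

def accretion_rate(m_bh, rho, c_s, alpha=1.0, eps_r=0.2, f_edd=1.0):
    """Bondi rate capped at f_edd times the Eddington rate (cgs units)."""
    mdot_bondi = 4.0 * np.pi * G**2 * m_bh**2 * rho / c_s**3
    mdot_edd = 4.0 * np.pi * G * m_bh * M_P / (eps_r * SIGMA_T * C)
    return min(alpha * mdot_bondi, f_edd * mdot_edd)

def feedback_mode(m_bh, eta):
    """Thermal mode above the TNG-style critical Eddington ratio, kinetic below."""
    eta_crit = min(0.002 * (m_bh / (1e8 * M_SUN))**2, 0.1)
    return "thermal" if eta >= eta_crit else "kinetic"

# A dense, cold environment pushes the Bondi rate above the Eddington cap:
mdot = accretion_rate(m_bh=1e6 * M_SUN, rho=1e-20, c_s=1e6)
print(feedback_mode(1e6 * M_SUN, eta=1e-3))   # thermal (eta_crit ~ 2e-7)
```

Note how the mass-dependent $\\eta_{\\mathrm{crit}}$ makes low-mass BHs almost always accrete in thermal mode, while massive BHs at modest Eddington ratios switch to kinetic-mode feedback.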
In the figure legends, these are labelled as \\texttt{Boost*_RadEff*_EddFac*} where the `*'s correspond to values of $\\alpha$, $\\epsilon_r$ and $f_{\\mathrm{edd}}$ respectively.\n\n\n\n\n\n\\subsection{Initial Conditions: constrained Gaussian realizations}\n\nWe expect the brightest $z>6$ quasars to live in the rarest and most extreme overdensities in the Universe. In order to create initial conditions~(ICs) that can produce such regions within a relatively small $9~\\mathrm{cMpc}\/h$ box, we apply the technique of \\textit{constrained Gaussian realizations} (CR). \nThe CR method can efficiently sample a Gaussian random field conditioned on various (user-specified) large-scale features.\nThis technique was originally introduced by \\cite{1991ApJ...380L...5H} and \\cite{1996MNRAS.281...84V}. We use the most recent implementation of this technique, i.e. the \\href{https:\/\/github.com\/yueyingn\/GaussianCR}{\\texttt{GaussianCR}} code to generate the initial conditions. This code was developed by N21, who extensively tested it against large-volume uniform cosmological simulations in terms of reproducing the halo assembly, star formation and BH growth histories. Here we briefly summarize the main features for completeness, while the full details of the underlying formalism are described in N21. \n\nOverall, \\href{https:\/\/github.com\/yueyingn\/GaussianCR}{\\texttt{GaussianCR}} constrains 18 parameters at the peak location~(see N21 for details). In this work, we vary three parameters that were shown by N21 to be most consequential to BH growth. 
These are the following:\n\\begin{itemize}\n \\item The peak height `$\\nu$' quantifies the `rarity' of the peak by specifying its height in units of the rms fluctuation of $\\delta_G (\\textbf{r})$, denoted by $\\sigma_{R_G}$, i.e.\n \\begin{eqnarray}\n \\nu \\equiv \\delta_G(\\textbf{r}_{\\mathrm{peak}})\/\\sigma_{R_G}, \\\\\n \\sigma_{R_G}^2\\equiv\\left<\\delta_G (\\textbf{r}) \\delta_G (\\textbf{r})\\right>= \\int \\frac{d^3k}{(2\\pi)^3} P(k) \\hat{W}^2(k,R_G) \n \\end{eqnarray}\n where $\\textbf{r}_{\\mathrm{peak}}$ is the peak position, $P(k)$ is the power spectrum, and $\\delta_G(\\textbf{r})\\equiv \\int \\delta (\\textbf{r}') W(|\\textbf{r}-\\textbf{r}'|,R_G)\\, d^3r'$ is the overdensity smoothed over a scale $R_G$ using the Gaussian window function $W(\\textbf{r},R_G)= \\exp(-r^2\/2R_G^2)$ and its Fourier transform $\\hat{W}(k,R_G)=\\exp(-k^2 R_G^2\/2)$. \n\n \\item The peak compactness `$x_d$' is set by the second-order derivatives of the smoothed overdensity field. More quantitatively, the eigenvalues of the Hessian matrix $\\partial_{ij}\\delta_{G}$~($i,j=1,2,3$ denote the $x,y,z$ components) can be parametrized as \n \\begin{eqnarray}\n \\lambda_1= \\frac{x_d \\sigma_2(R_G)}{1+a_{12}^2+a_{13}^2}\\\\\n \\lambda_2= a_{12}^2 \\lambda_1\\\\\n \\lambda_3= a_{13}^2 \\lambda_1\n \\end{eqnarray}\n where $\\sigma_2^2(R_G) \\equiv \\int \\frac{d^3k}{(2\\pi)^3} P(k) \\hat{W}^2(k,R_G) k^4$, and $a_{12}$ and $a_{13}$ are the axis ratios that determine the shape of the mass distribution around the ellipsoidal peak. \n \n\n \\item Lastly, the tidal strength $\\epsilon$ is determined by the second-order derivatives of the gravitational potential, i.e. the tidal tensor $T_{ij}$~($i,j=1,2,3$). 
The eigenvalues of the tidal tensor can be parametrized as \n \\begin{equation}\n \\left[\\epsilon \\cos{\\frac{\\omega+2\\pi}{3}}, \\epsilon \\cos{\\frac{\\omega-2\\pi}{3}}, \\epsilon \\cos{\\frac{\\omega}{3}} \\right],\n \\end{equation} \n where $\\epsilon$ determines the overall magnitude of the tidal tensor, and $\\omega$ determines the relative strengths of the tidal tensor along the three eigenvectors.\n\\end{itemize}\n\n\n\\subsubsection{Our choice of peak parameters}\n\\begin{table*}\n\\centering\n\\begin{tabular}{|c|c|c|c|c|}\n IC & $R_G~(\\mathrm{Mpc}\/h)$ & $\\nu~(\\sigma_{R_G})$ & $x_d~(\\sigma_2)$ & $\\epsilon~(\\mathrm{km~s^{-1}~Mpc^{-1}})$ \\\\\n \\hline\n \\texttt{5SIGMA} & 1.0 & 5 & 3.6~(ave) & 34.0~(ave) \\\\\n \\texttt{5SIGMA_COMPACT} & 1.0 & 5 & 5.8~($+3\\sigma$) & 15.0~($-2\\sigma$)\\\\\n \\texttt{6SIGMA} & 1.3 & 6 & 4.0~(ave) & 34.0~(ave)\\\\\n \\hline\n\\end{tabular}\n\\caption{The adopted values for the peak parameters for $\\texttt{5SIGMA}$, $\\texttt{5SIGMA_COMPACT}$ and $\\texttt{6SIGMA}$. These constrained initial conditions~(ICs) are characterized by the smoothing scale $R_G$~(2nd col), the peak height $\\nu$~(3rd col), the peak compactness $x_d$~(4th col) and the tidal scalar $\\epsilon$~(5th col); these 4 parameters are most consequential to BH growth. The remaining parameters do not significantly impact BH growth, and have been fixed to the typical values of their underlying distributions~(see Figure 5 of N21).}\n\\label{peak_parameters_table}\n\\end{table*}\nN21 showed that BH growth is most efficient within rare peaks~(high $\\nu$) that are compact~(high $x_d$) and allow for gas infall to occur from all directions~(low tidal strength $\\epsilon$).\n Therefore, throughout this paper, we make intentional choices for $\\nu$, $x_d$ and $\\epsilon$ and fix the remaining parameters at their most probable values~(see Figure 5 of N21). 
Table \\ref{peak_parameters_table} summarizes the adopted parameter values for $\\nu$, $x_d$ and $\\epsilon$. More specifically, we look at the following three regions:\n\\begin{itemize}\n \\item We choose a $5\\sigma$ peak~($\\nu=5$) at scales of $R_G=1~\\mathrm{Mpc}\/h$, with $x_d$ and $\\epsilon$ corresponding to the typical values, i.e. the maxima of their respective distributions. The peak height was chosen to produce a target halo mass of $10^{12}~M_{\\odot}\/h$ at $z=7$. It is hereafter referred to as \\texttt{5SIGMA}.\n \\item We again choose a $5\\sigma$ peak at $R_G=1~\\mathrm{Mpc}\/h$, but with a compactness $x_d$ that is $3\\sigma$ above the typical value, and a tidal strength $\\epsilon$ that is $2\\sigma$ below the mean value. This also targets the assembly of a $10^{12}~M_{\\odot}\/h$ halo at $z=7$, and is referred to as \\texttt{5SIGMA_COMPACT}.\n \\item Lastly, we choose a $6\\sigma$ peak~($\\nu=6$) at scales of $R_G=1.3~\\mathrm{Mpc}\/h$ with typical values for $x_d$ and $\\epsilon$. This targets a $5\\times10^{12}~M_{\\odot}\/h$ halo at $z=7$, and is referred to as \\texttt{6SIGMA}.\n\\end{itemize}\nNote that the target halos produced in \\texttt{6SIGMA} and \\texttt{5SIGMA_COMPACT} regions have number densities roughly similar to those of the observed $z\\sim6$ quasars~($\\sim1~\\mathrm{Gpc}^{-3}$).\nIn contrast, \\texttt{5SIGMA} produces a target halo that is $\\sim100$ times more common.\n\nFinally, we also note that the BH growth can depend on the specific realization of the large scale density field. However, upon exploring five different realizations for a select few BH models, we found that the differences in BH growth were mild~($z\\sim6$ BH masses vary by factors $\\lesssim2$). 
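As a concrete illustration of the peak-height definition above, the amplitude of a constrained $\\nu=5$ peak follows directly from $\\sigma_{R_G}$. The sketch below is ours and uses a toy power-law $P(k)$ rather than the adopted cosmology's spectrum, so the numbers are purely illustrative:

```python
import numpy as np
from scipy.integrate import quad

def sigma_RG(P, R_G):
    """rms of the overdensity smoothed with a Gaussian window:
    sigma^2 = int d^3k/(2pi)^3 P(k) W_hat^2(k, R_G), with
    W_hat(k, R_G) = exp(-k^2 R_G^2 / 2)."""
    integrand = lambda k: k**2 / (2.0 * np.pi**2) * P(k) * np.exp(-(k * R_G)**2)
    var, _ = quad(integrand, 0.0, np.inf)
    return np.sqrt(var)

# Toy power-law spectrum (illustrative only, not the adopted cosmology's P(k))
P_toy = lambda k: 2.0e4 * k**0.9667

sigma = sigma_RG(P_toy, R_G=1.0)   # R_G = 1 Mpc/h, as in the 5SIGMA ICs
delta_peak = 5.0 * sigma           # amplitude imposed at a nu = 5 peak
print(f"sigma_RG = {sigma:.3f}, peak overdensity = {delta_peak:.3f}")
```

Increasing $R_G$ suppresses the high-$k$ contribution to the integral, so $\\sigma_{R_G}$ decreases with smoothing scale; this is why the $\\nu=6$ peak at $R_G=1.3~\\mathrm{Mpc}\/h$ corresponds to a larger collapsing mass rather than simply a larger overdensity amplitude.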
\n\n\n\n\\subsection{Simulation resolution}\n\nIn \\cite{2021MNRAS.507.2012B,2021MNRAS.tmp.3110B}, we performed detailed resolution convergence tests for our BH seed model and found that for our fiducial model with $\\tilde{M}_{\\mathrm{sf,mp}}=5$ and $\\tilde{M}_{h}=3000$, the seed formation rates are reasonably well converged for gas mass resolutions $\\lesssim10^4~M_{\\odot}\/h$. The resolution convergence becomes slower as the models are made more restrictive by increasing $\\tilde{M}_{\\mathrm{sf,mp}}$ or by introducing a LW flux criterion. As we shall see in Section \\ref{BH growth at higher resolutions_sec}, the resolution convergence properties of our constrained regions are similar to those of the zoom region of \\cite{2021MNRAS.507.2012B,2021MNRAS.tmp.3110B}.\n\nTo achieve a gas mass resolution of $\\sim10^4~M_{\\odot}\/h$ in a box size of $9~\\mathrm{Mpc}\/h$, we need $N=720$ DM particles per dimension~(note that the number of gas cells is initially equal to the number of DM particles, but as the simulation evolves the gas cells can undergo refinement or de-refinement). However, running such a simulation until $z=6$ requires a substantial amount of computing time and memory, particularly in regions with extreme overdensities. Therefore, to facilitate a rapid exploration of the large parameter space of our seed models, we choose $N=360$. We adopt this as our fiducial resolution; it corresponds to a gas mass resolution of $\\sim10^5~M_{\\odot}\/h$. Note that this is only slightly lower than the highest resolution box of the \\texttt{IllustrisTNG} suite, i.e. TNG50. That being said, we do run higher resolution realizations~($N=720$) for a few selected models, particularly those that successfully produce a $z\\gtrsim6$ quasar. As we shall see in Section \\ref{BH growth at higher resolutions_sec}, the final BH mass at $z\\lesssim7$ is not significantly impacted by resolution. 
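The quoted mass resolutions follow directly from the mean baryon density of the box; the following back-of-the-envelope check (our arithmetic, assuming $\\rho_{\\mathrm{crit}}=2.775\\times10^{11}\\,h^2\\,M_{\\odot}\\,\\mathrm{Mpc}^{-3}$ and the cosmology of Section \\ref{Simulation_setup_sec}) reproduces the $\\sim10^5$ and $\\sim10^4~M_{\\odot}\/h$ figures to order of magnitude:

```python
# Back-of-the-envelope gas mass resolution for an N^3 run in a 9 cMpc/h box.
OMEGA_M, OMEGA_B = 0.3089, 0.0486
RHO_CRIT = 2.775e11      # critical density [Msun/h per (Mpc/h)^3]
BOX = 9.0                # box size [cMpc/h]

def gas_cell_mass(n_per_dim):
    """Mean initial gas-cell mass in Msun/h (one gas cell per DM particle)."""
    return OMEGA_B * RHO_CRIT * (BOX / n_per_dim)**3

def dm_particle_mass(n_per_dim):
    """DM particle mass in Msun/h."""
    return (OMEGA_M - OMEGA_B) * RHO_CRIT * (BOX / n_per_dim)**3

print(f"N=360: gas ~ {gas_cell_mass(360):.1e} Msun/h")   # of order 1e5 (fiducial)
print(f"N=720: gas ~ {gas_cell_mass(720):.1e} Msun/h")   # of order 1e4 (high-res)
```

Since the cell mass scales as $N^{-3}$, doubling $N$ improves the mass resolution by a factor of 8, at a correspondingly steep cost in computing time and memory.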
Additionally, we use the $N=720$ runs to probe the lowest seed mass considered in this work, i.e. $M_{\\mathrm{seed}}=1.25\\times10^4~M_{\\odot}\/h$. The fiducial $N=360$ run can only probe seed masses of $M_{\\mathrm{seed}}=1\\times10^5~\\&~8\\times10^5~M_{\\odot}\/h$. Hereafter, unless otherwise stated, we use $N=360$; runs that use $N=720$ are explicitly noted in the captions or the text.\n\n\\section{Results}\n\n\\label{Results_sec}\n\n\\subsection{Halo environments: evolution from $z\\sim20-6$}\n\n\\begin{figure*}\n\\includegraphics[width=5.2 cm]{5sigma_typical.png}\n\\includegraphics[width=5.2 cm]{5sigma_compact.png}\n\\includegraphics[width=6.5 cm]{6sigma_typical.png}\n\\caption{Gas density field at the $z=6$ snapshot for the three constrained Gaussian initial conditions we explored in this work. The left panel is centered at a $5\\sigma$ overdensity peak at $R_{G}=1~\\mathrm{Mpc}\/h$, with typical values for compactness $x_d$ and tidal strength $\\epsilon$; this is hereafter referred to as \\texttt{5SIGMA}. The middle panel is centered at a $5\\sigma$ overdensity peak with $+3\\sigma$ higher compactness, and $-2\\sigma$ lower tidal strength; we refer to this as \\texttt{5SIGMA_COMPACT}. The right panel is centered at a $6\\sigma$ overdensity peak at $R_{G}=1.3~\\mathrm{Mpc}\/h$ with average values of compactness and tidal strength; this is referred to as \\texttt{6SIGMA}. We can see that the gas distribution in \\texttt{5SIGMA_COMPACT} is more isotropic and centrally concentrated compared to \\texttt{5SIGMA} and \\texttt{6SIGMA}.}\n\\label{halo_properties_fig}\n\\end{figure*}\n\n\\begin{figure}\n\\includegraphics[width=8 cm]{Halo_properties.png}\n\\caption{The evolution of the most massive halo~(MMH) from $z\\sim20$ to $z\\sim7$ for the \\texttt{5SIGMA}~(blue), \\texttt{5SIGMA_COMPACT}~(green) and \\texttt{6SIGMA}~(orange) runs. The top, middle and bottom panels show the total halo mass, gas mass and stellar mass respectively. 
For \\texttt{5SIGMA} and \\texttt{5SIGMA_COMPACT}, we reach our target halo mass of $\\sim10^{12}~M_{\\odot}\/h$ at $z=7$; they both assemble a total gas mass of $2\\times10^{11}~M_{\\odot}\/h$. Likewise, the \\texttt{6SIGMA} volume assembles the desired target halo mass of $6\\times10^{12}~M_{\\odot}\/h$ at $z=7$, and a gas mass of $5\\times10^{11}~M_{\\odot}\/h$. \\texttt{5SIGMA_COMPACT} assembles a stellar mass of $\\sim10^{11}~M_{\\odot}\/h$ at $z=7$, similar to that of \\texttt{6SIGMA}; this is significantly higher than \\texttt{5SIGMA}, which assembles a stellar mass of $9\\times10^{9}~M_{\\odot}\/h$. More compact peaks~(at fixed peak height) lead to enhanced star formation.}\n\\label{halo_evolution_fig}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[width=8.5 cm]{SM_HM_relations.png}\n\\caption{Stellar mass vs halo mass relation at $z=6$ for the MMHs of \\texttt{5SIGMA_COMPACT}~(green), \\texttt{5SIGMA}~(blue) and \\texttt{6SIGMA}~(orange) respectively, compared with the full halo population of TNG300~(a $[300~\\mathrm{Mpc}]^3$ box---the largest volume in the \\texttt{IllustrisTNG} simulation suite; faded grey circles). The predictions from the constrained runs are broadly consistent with the trends extrapolated from the TNG300 results. The \\texttt{5SIGMA} and \\texttt{6SIGMA} runs produce stellar mass predictions close to the mean trend, whereas the \\texttt{5SIGMA_COMPACT} run produces a galaxy that is somewhat overly massive for its host halo.}\n\\label{SM_HM_relation_fig}\n\\end{figure}\n\n\n\\begin{figure*}\n\\includegraphics[width=18 cm]{1D_density_distributions.png}\n\\caption{Radially averaged 1D profiles of the gas density~(physical units), centered around the most massive BH in the MMH of our simulation volume for \\texttt{5SIGMA}, \\texttt{5SIGMA_COMPACT} and \\texttt{6SIGMA}. Lines of different colors show the redshift evolution from $z=14$ to $z=6$. 
We see that the gas density in the central regions steeply increases with time between $z\\sim9-6$. The steepest increase is seen for \\texttt{5SIGMA_COMPACT}.}\n\\label{1D_density_distributions_fig}\n\\includegraphics[width=19 cm]{FullMapAllFieldsconstrained.png}\n\n\\caption{2D profiles of the gas density, metallicity and the star formation rate density within the vicinity of the most massive BH in the MMH of the \\texttt{5SIGMA_COMPACT} run. We compute these quantities averaged over a slice of thickness $10~\\mathrm{kpc}\/h$. Panels from left to right show the redshift evolution from $z=10$ to $z=6$. At $z\\lesssim9$, the steep increase in the gas density leads to increased star formation and metal enrichment in the halo.}\n\\label{FullMapAllFieldsconstrained_fig}\n\\end{figure*}\n\nBefore looking at the properties of BHs, we first look at the environments in which they form and grow. Figure \\ref{halo_properties_fig} shows the $z=6$ gas density profiles centered at the location of the constrained \\texttt{5SIGMA}, \\texttt{5SIGMA_COMPACT} and \\texttt{6SIGMA} peaks. Visually, we can clearly see that the gas distribution around the \\texttt{5SIGMA_COMPACT} peak is more compact and isotropic compared to that of \\texttt{5SIGMA} and \\texttt{6SIGMA}. Figure \\ref{halo_evolution_fig} shows the evolution of the most massive halo~(MMH) from $z\\sim20-6$, in terms of the total mass, gas mass and stellar mass. We see that both \\texttt{5SIGMA_COMPACT} and \\texttt{5SIGMA} runs assemble their target mass of $\\sim10^{12}~M_{\\odot}\/h$ by $z=7$, which grows to $\\sim2\\times10^{12}~M_{\\odot}\/h$ by $z=6$. 
The \\texttt{6SIGMA} run assembles halo masses of $\\sim7\\times10^{12}~M_{\\odot}\/h$ and $\\sim10^{13}~M_{\\odot}\/h$ by $z=7$ and $z=6$ respectively.\n\nInterestingly, the halo assembly history~(see Figure \\ref{halo_evolution_fig}: top panel) of the three regions shows that for \\texttt{5SIGMA_COMPACT}, the MMHs at $z\\gtrsim10$ are $\\sim5-10$ times more massive than those of \\texttt{5SIGMA}~(as well as \\texttt{6SIGMA}). But at $z\\lesssim10$, the halo growth rate for the \\texttt{5SIGMA_COMPACT} run becomes slower compared to the \\texttt{5SIGMA} run, thereby explaining the similar final halo masses that both runs assemble at $z\\sim6-7$. This is likely because the MMH in the \\texttt{5SIGMA_COMPACT} peak becomes more isolated at $z\\lesssim10$~(after having merged with most of its neighboring massive halos by $z\\sim10$). However, the \\texttt{5SIGMA_COMPACT} MMH continues to become denser during $z\\sim9-6$~(due to continued gravitational collapse), which likely causes the higher compactness of the \\texttt{5SIGMA_COMPACT} peak compared to \\texttt{5SIGMA}~(as well as \\texttt{6SIGMA}) at $z\\sim6-7$.\n\nThe evolution of the gas mass~(see Figure \\ref{halo_evolution_fig}: middle panel) mirrors that of the total halo mass at $z\\sim20-10$. More specifically, the gas mass of the MMH in \\texttt{5SIGMA_COMPACT} is $\\sim5-10$ times higher than that of \\texttt{5SIGMA} and \\texttt{6SIGMA}~(similar to the total halo mass) at $z\\sim20-10$. Notably, we find that at $z\\sim9-6$, there is no significant increase in the gas mass for the MMH in \\texttt{5SIGMA_COMPACT}, unlike the MMHs of \\texttt{5SIGMA} and \\texttt{6SIGMA}. As a result, by $z=6$, the MMH in \\texttt{5SIGMA_COMPACT} ends up with a lower gas mass compared to \\texttt{5SIGMA} and \\texttt{6SIGMA}. As we shall see, this is happening because the gas in \\texttt{5SIGMA_COMPACT} is being rapidly consumed by star formation and BH accretion, more so than in \\texttt{5SIGMA} and \\texttt{6SIGMA}. 
\nThe enhanced star formation in \\texttt{5SIGMA_COMPACT} can be seen in the bottom panel of Figure \\ref{halo_evolution_fig}, wherein the stellar mass is $\\sim7$ times higher than that of \\texttt{5SIGMA} at $z\\sim6-7$~(similar to the stellar mass produced by \\texttt{6SIGMA}). \nThe \\texttt{5SIGMA_COMPACT} region therefore produces an overly massive \ngalaxy for its host halo mass, as clearly seen in Figure \\ref{SM_HM_relation_fig}. Figure \\ref{SM_HM_relation_fig} also demonstrates that our constrained runs are consistent with the stellar mass vs. halo mass relations predicted by TNG300, thereby validating this technique.\n\nHaving discussed the evolution of the global properties of the MMHs, we now focus on the evolution of their internal gas distributions, which are much more consequential to BH growth. The evolution of the radially averaged gas density profiles from $z\\sim14-6$ is shown in Figure \\ref{1D_density_distributions_fig}. In all\nthree regions,\nlittle evolution occurs at $z\\gtrsim9$, and the central $\\sim 1$ kpc\/$h$ is unresolved owing to low gas densities. Between $z\\sim9-6$, however, the gas densities start to significantly increase, particularly close to the halo centers. Specifically, while the overall gas mass of the MMH increases only by factors of $\\sim50$ between $z\\sim9-6$~(Figure \\ref{halo_evolution_fig}: middle panel), the central gas densities increase by factors of $\\sim100-10000$ during the same time interval. Amongst the three regions, \\texttt{5SIGMA_COMPACT} shows the steepest increase in density between $z\\sim9-6$. Figure \\ref{FullMapAllFieldsconstrained_fig} shows the 2D color maps of the evolution of the gas density, star formation rates and metallicity for the \\texttt{5SIGMA_COMPACT} region at redshift snapshots of $z=12,10,8~\\&~6$. 
We can clearly see that the steep increase in central gas density leads to a commensurate boost in star formation, as well as metal enrichment in the central regions of the halo.\nAs we shall see, this increase in the central gas densities leads to substantially increased importance of accretion-driven BH growth at $z\\sim9-6$. \n\n\n\\subsection{BH growth in different halo environments: Impact of BH seeding models}\n\\begin{figure*}\n\\includegraphics[width=16 cm]{No_of_BHs.png}\n\\caption{Number of BHs present in the most massive halo~(MMH) at different redshift snapshots for the \\texttt{5SIGMA}~(blue), \\texttt{5SIGMA_COMPACT}~(green) and \\texttt{6SIGMA}~(orange) runs. Error-bars correspond to Poisson errors. In the leftmost panel, we use the default seeding prescription from the IllustrisTNG simulation suite~(referred to as TNG seed model). In the middle and right panels, we use the gas-based seeding prescription with $\\tilde{M}_h=3000$ and $\\tilde{M}_{\\mathrm{sf,mp}}=5$~(\\texttt{SM5_FOF3000}). All these runs use the default accretion prescription from the IllustrisTNG simulation suite~(TNG accretion model). We see that the onset of seed formation happens earliest within the \\texttt{5SIGMA_COMPACT} run; therefore, it contains the highest number of BHs around $z\\gtrsim10$. However, between $z\\sim9-6$, the \\texttt{5SIGMA_COMPACT} MMH does not acquire many new BHs. On the other hand, the \\texttt{5SIGMA} and \\texttt{6SIGMA} MMHs continue to acquire new BHs from nearby halos between $z\\sim9-6$. 
Therefore, by $z=6$, the \\texttt{5SIGMA_COMPACT} peak has the fewest BHs and the \\texttt{6SIGMA} peak has the most BHs.}\n\n\\label{No_of_BHs_fig}\n\n\n\\includegraphics[width=5.7 cm]{TNG_seed_model.png}\n\\includegraphics[width=5.7 cm]{Mh3000_Msfmp5_seed5.90.png}\n\\includegraphics[width=5.7 cm]{Mh3000_Msfmp5.png}\n\\caption{Evolution of the most massive BH at $z=6$ in the MMH of the\n\\texttt{5SIGMA}~(blue), \\texttt{5SIGMA_COMPACT}~(green) and \\texttt{6SIGMA}~(orange) runs (hereafter, all growth histories plotted for different runs are for this particular BH in each simulation). The 1st row corresponds to the BH mass; solid lines show the total BH mass and the dashed lines show the mass accumulated only by mergers. The 2nd row shows the fraction of the current mass accumulated by gas accretion. The 3rd and 4th rows show the total bolometric luminosity in units of $\\mathrm{ergs~s^{-1}}$ and the Eddington luminosity, respectively. All the runs use the TNG accretion model. In the left panels, we use the TNG seed model. In the middle and right panels, we use the gas-based seeding prescription \\texttt{SM5_FOF3000}. The BH seed mass is $8 \\times 10^5 M_{\\odot}\/h$ and $1 \\times 10^5 M_{\\odot}\/h$ in the middle and right panels, respectively. Among the constrained volumes we explore, \\texttt{5SIGMA_COMPACT} assembles the highest BH mass in all cases. Even in this region, the TNG seed model achieves a maximum BH mass of $\\sim7\\times10^7~M_{\\odot}\/h$ by $z=6$, which is significantly smaller than the typical masses of observed $z\\sim6$ quasars. In contrast, the \\texttt{SM5_FOF3000} models are able to assemble $10^9~M_{\\odot}\/h$ SMBHs by $z=6$. While these massive BHs are active as luminous quasars with near-Eddington luminosities of $\\sim10^{47}~\\mathrm{ergs~s^{-1}}$ at $z=6$, their growth at $z\\gtrsim 9$ is dominated by BH mergers. 
Overall, the BH growth is fastest within rare massive high-z halos that are also sufficiently compact and have low tidal fields. To produce $z\sim6$ quasars in these halos with the TNG accretion model, an early boost in BH mass driven by mergers is necessary.}\n\label{bh_growth_env_fig}\n\end{figure*}\n\begin{figure*}\n\includegraphics[width=14 cm]{SM_BHM_relations.png}\n\caption{BH mass vs. halo mass scaling relation at $z=6$. In the left panel, each data point corresponds to a halo; so we are plotting the ``halo mass vs. total mass of all BHs in the halo'', for all halos at $z=6$. In the right panel, each data point corresponds to a BH, so we are plotting the ``BH mass vs. host halo mass'', for all individual BHs at $z=6$. The blue, green and orange colors correspond to the BHs within the MMHs of the \texttt{5SIGMA}, \texttt{5SIGMA_COMPACT} and \texttt{6SIGMA} regions respectively. All the runs use the TNG accretion model. The circles correspond to the TNG seed model, which we directly compare to the full population of TNG300. The left panel shows that for the TNG seed model, the total BH masses of the MMHs produced by the constrained runs are consistent with the extrapolated trends from the TNG300 population. This further validates our constrained runs. The squares \& stars correspond to the gas-based seed model \texttt{SM5_FOF3000} with seed masses of $8\times10^5~\&~1\times10^5~M_{\odot}\/h$, respectively; these models produce much higher BH masses compared to the TNG seed model. The MMHs tend to host a large number of BHs during their assembly history, generally ranging between $\sim10-50$ depending on the constrained region as well as the seed model~(revisit Figure \ref{No_of_BHs_fig}). \n The \texttt{6SIGMA} run produces the highest number of BHs and also the highest total BH mass in its target halo, commensurate with its halo mass. 
However, when we look at the masses of \textit{individual} BHs in the right panel, the most massive BH is actually produced by \texttt{5SIGMA_COMPACT}. In fact, with the \texttt{SM5_FOF3000} seed model, only \texttt{5SIGMA_COMPACT} can produce \textit{individual} $\sim10^9~M_{\odot}\/h$ BHs similar to the observed $z\sim6$ quasars.}\n\label{SM_BHM_relations_fig}\n\end{figure*}\nWe are particularly interested in the growth histories of BHs occupying the MMH in each region. Figure \ref{No_of_BHs_fig} shows the number of BHs that are present within the MMH at different redshift snapshots, with different panels showing different seed models. We note that these MMHs tend to acquire several BHs during their assembly history. Although many of these BHs inevitably merge with the central BH, the overall number of BHs hosted by the MMHs increases up to at least $z\sim9-10$. This is because the MMHs continue to acquire new BHs from surrounding merging halos. At $z=6$, the MMHs can generally host up to $\sim10-50$ BHs depending on the constrained region as well as the seed model.\n\nNext, we look at how the number of BHs in the MMHs varies between different regions and seed models. For all seed models, we generally see that the \texttt{5SIGMA_COMPACT} runs tend to start forming seeds at earlier times, and therefore host a higher number of BHs at $z\sim10-20$ compared to the \texttt{5SIGMA} and \texttt{6SIGMA} runs. This is because the MMHs at $z\sim10-20$ for the \texttt{5SIGMA_COMPACT} runs are more massive compared to the other two regions~(revisit Figure \ref{halo_evolution_fig}). However, between $z\sim10-6$, we find that there is no significant increase in the number of BHs for the MMH in \texttt{5SIGMA_COMPACT}, unlike \texttt{5SIGMA} and \texttt{6SIGMA}. 
This is likely because in the \texttt{5SIGMA_COMPACT} runs, most of the nearby massive halos have already merged with the MMH by $z\sim10$, leaving behind an isolated MMH during $z\sim10-6$ with very few nearby halos to acquire new BHs from. On the other hand, for the \texttt{5SIGMA} and \texttt{6SIGMA} runs, the MMHs are not as isolated at $z\sim10-6$, and they continue to acquire new BHs during this time. All of this ultimately leads to fewer BHs in the MMH of \texttt{5SIGMA_COMPACT} at $z=6$, compared to the \texttt{5SIGMA} and \texttt{6SIGMA} regions. Lastly, note that the TNG seed model~(Figure \ref{No_of_BHs_fig}: left panel) is substantially more restrictive and produces far fewer seeds compared to our fiducial gas-based seed models with $\tilde{M}_{\mathrm{sf,mp}}=5, \tilde{M}_{\mathrm{h}}=3000$~(hereafter \texttt{SM5_FOF3000}; shown in Figure \ref{No_of_BHs_fig}: middle and right panels).\n\nThe primary science focus of this work is the growth of the most massive BH located in the MMH of our simulations. So unless otherwise stated, all subsequent references to BH growth histories are for the most massive BH at $z=6$ in each simulation. Figure \ref{bh_growth_env_fig} shows the BH growth histories for \texttt{5SIGMA}, \texttt{5SIGMA_COMPACT} and \texttt{6SIGMA}. We first focus on the halo-based TNG seed model~(leftmost panels), which starts to seed BHs around $z\sim10-12$. Note that for this seed model, very few BHs are formed overall, which results in very little growth via mergers~(see dashed lines in Figure \ref{bh_growth_env_fig}: top left panel). We find that amongst all three regions, \texttt{5SIGMA_COMPACT} assembles the highest-mass BH at $z=6$, despite containing the fewest BHs within its MMH~(also recall that \texttt{6SIGMA} has a higher-mass MMH at $z=6$). 
This is because the \\texttt{5SIGMA_COMPACT} run produces a higher gas density at the peak location, thereby leading to the fastest growth via gas accretion at $z\\lesssim9$. This result is overall consistent with N21, showing that compact peaks with low tidal fields are the most ideal environments for rapid BH growth. However, with this TNG seed model, the overall BH mass assembled at $z=6$ is only $\\sim5\\times10^{7}~M_{\\odot}$, which is significantly smaller than the typical masses of the observed high-z quasars~($\\sim10^9~M_{\\odot}$).\n\nNext, we look at the predictions from gas-based seed models, particularly \\texttt{SM5_FOF3000}~($\\tilde{M}_{\\mathrm{sf,mp}}=5$ and $\\tilde{M}_{\\mathrm{h}}=3000$) with seed masses of $8\\times10^5$ and $1\\times10^5~M_{\\odot}\/h$~(Figure \\ref{bh_growth_env_fig}: middle and right panels respectively). Again, these models produce substantially higher numbers of seeds that start forming at much earlier times~($z\\sim17-25$) compared to the TNG seed model. As a result, there is now considerable growth via mergers. To that end, we note that regardless of how early the seeds form, accretion-driven BH growth does not become significant until $z\\lesssim9$~(see 2nd rows of Figure \\ref{bh_growth_env_fig}). This results from\nthe fact that the central gas densities remain relatively low until $z\\sim9$ but start to steeply increase between $z\\sim9-6$~(revisit Figure \\ref{1D_density_distributions_fig}). As a result, the BH growth at $z\\gtrsim9$ is completely driven by BH mergers. In fact, for \\texttt{SM5_FOF3000} seed models, the $z\\gtrsim9$ merger-driven growth assembles a BH mass of $\\sim3\\times10^7~M_{\\odot}$ by $z\\sim9$, in contrast to the TNG seed model where the BHs are still close to the seed mass of $\\sim10^6~M_{\\odot}$ at $z\\sim9$. 
Between $z\\sim9-6$, the accretion-driven BH growth becomes increasingly significant for the \\texttt{SM5_FOF3000} seed models, pushing\nthe BH mass to values $\\gtrsim 10^8~M_{\\odot}$ at $z=6$ for all three constrained regions. Amongst the three regions, \\texttt{5SIGMA_COMPACT} again produces the highest BH mass that now reaches close to $\\sim10^9~M_{\\odot}$ at $z=6$, consistent with the observed $z\\sim6$ quasars. Additionally, note that the merger-driven growth can also be boosted, by simply reducing the halo mass threshold and forming more seeds. Therefore, $z\\sim6$ quasars could also be formed within a ``halo mass only\" seed model~(e.g. TNG seed model) with a sufficiently low halo mass threshold. \n\nThe bolometric luminosities and Eddington ratios~(see 3rd and 4th rows of Figure \\ref{bh_growth_env_fig}) of the BHs remain low~($L_{\\mathrm{bol}}\\sim10^{42}~\\mathrm{ergs~s^{-1}}\\sim10^{-3}~\\mathrm{L_{\\mathrm{bol}}^{\\mathrm{edd}}}$) at $z\\gtrsim9$ wherein the accretion-driven BH growth is insignificant. This is generally true for all seed models and constrained regions. As we go from $z\\sim9-6$ during which the central gas densities steeply rise~(revisit Figure \\ref{1D_density_distributions_fig}), the accretion-driven BH growth becomes increasingly efficient. This leads to a sharp increase in the BH luminosities. By $z\\sim6$, the BHs start to grow close to the Eddington limit for all of the runs, generally corresponding to bolometric luminosities $\\gtrsim10^{45}~\\mathrm{ergs~s^{-1}}$. However, luminosities of observed $z\\sim6$ quasars are even higher i.e. $\\sim10^{47}~\\mathrm{ergs~s^{-1}}$. These luminosities are produced only by the $\\sim10^9~M_{\\odot}$ BHs that are formed within $\\texttt{5SIGMA_COMPACT}$ region using the \\texttt{SM5_FOF3000} seed models. 
Overall, we find that to form BHs that resemble the observed $z\sim6$ quasars~(masses of $\sim10^9~M_{\odot}$ and luminosities of $\sim10^{47}~\mathrm{ergs~s^{-1}}$) in our simulations with \texttt{IllustrisTNG} physics, we need massive compact halos together with seed models such as \texttt{SM5_FOF3000} that allow for substantial merger-driven BH growth at $z\gtrsim9$.\n\nFigure \ref{SM_BHM_relations_fig} shows the predictions of our constrained runs for the $z=6$ halos, as well as the BH populations in the TNG300 uniform simulation. Note that the most massive $z=6$ BHs produced by TNG300 are $\sim50$ times smaller than the observed $z\gtrsim6$ quasars. This is simply because TNG300 does not have the volume to produce such rare objects, despite being among the largest simulations to be run to $z=6$ and beyond. In fact, this is true for almost all major cosmological hydrodynamic simulations run to date~(see \citealt{2021MNRAS.503.1940H,2022MNRAS.tmp..271H,2022MNRAS.509.3015H} for combined analyses of BH populations from several simulations). The only exception to this would be the \texttt{BlueTides} simulation~\citep{2016MNRAS.455.2778F}, which produces a $6.4\times10^8~M_{\odot}$ BH in a volume of $[400~\mathrm{Mpc}\/h]^3$ by $z\sim7.5$~\citep{2019MNRAS.483.1388T}. Overall, this further highlights the power of our constrained simulations, which are able to produce such rare objects within smaller-volume, higher-resolution simulations in reasonable computing time, so as to allow an exploration of a wide range of model parameters.\n\nWe now specifically compare the BHs produced in the MMHs of the constrained runs to those of the TNG300 simulation. The left panel of Figure \ref{SM_BHM_relations_fig} shows the halo mass versus the total mass of all BHs within the halos. We find that for the TNG seed model, predictions for the constrained runs are consistent with an extrapolation of the BH mass vs. 
halo mass relation from TNG300.\nThese results, together with the stellar mass vs. halo mass relations in Figure \ref{SM_HM_relation_fig}, serve as a good validation of our constrained runs. As expected from the results in the previous paragraph, the gas-based seed models \texttt{SM5_FOF3000} produce halos with total BH masses that are significantly higher than the extrapolated trend of the TNG300 halos. As an additional note, the $M_{\mathrm{bh}}-M_{\mathrm{h}}$ relation for the lowest-mass BHs within TNG300 forms streaks of horizontal lines that can be clearly seen in Figure \ref{SM_BHM_relations_fig}. This is likely an artifact of the TNG seed model, where BHs seeded within $\gtrsim10^{10}~M_{\odot}$ halos do not show significant accretion-driven growth until the halos reach masses $\gtrsim10^{11}~M_{\odot}$. \n\nCloser examination of Figure \ref{SM_BHM_relations_fig} (left panel) reveals another interesting result. For the TNG seed model as well as the gas-based seed model, the MMH in \texttt{6SIGMA} achieves the highest {\em total} BH mass.\nThis contrasts with Figure \ref{bh_growth_env_fig} and with the right panel of Figure \ref{SM_BHM_relations_fig}, which clearly show that \texttt{5SIGMA_COMPACT} produces the highest {\em individual} BH mass in the MMH.\nAs it turns out, while the MMHs in \texttt{5SIGMA} and \texttt{6SIGMA} end up with a higher number of BHs than \texttt{5SIGMA_COMPACT}~(revisit Figure \ref{No_of_BHs_fig}), the individual BHs in \texttt{5SIGMA} and \texttt{6SIGMA} are significantly smaller than the most massive BH in \texttt{5SIGMA_COMPACT}. This is generally true for the TNG seed model as well as the gas-based seed models. Particularly for the gas-based seed model \texttt{SM5_FOF3000}, this means that while a typical rare massive halo~($\sim10^{13}~M_{\odot}$) at $z\sim6$ can acquire a total BH mass exceeding $\sim10^9~M_{\odot}$, it may not produce ``individual BHs'' of such masses. 
Therefore, to host the observable $z\gtrsim6$ quasars that correspond to individual $\sim10^9~M_{\odot}$ BHs growing close to the Eddington limit, we need halos that are not just massive enough, but are also highly compact and have low tidal fields. In these compact MMHs, the BHs are more likely to have close encounters with each other, and can therefore merge more readily to form a single massive BH at the halo center, compared to typical halos of the same mass. To that end, recall that the small-scale dynamics are poorly resolved in our simulations, particularly for lower-mass BHs. Several recent works with more realistic treatments of BH small-scale dynamics~\citep[e.g.][]{2017MNRAS.470.1121T,2018ApJ...857L..22T,2022MNRAS.tmp..460N,2022MNRAS.510..531C} have found that many of the seeds~(particularly lower-mass seeds) do not sink efficiently to the local potential minima, thereby leading to a population of wandering BHs~\citep{2018ApJ...857L..22T,2021MNRAS.503.6098R,2021ApJ...916L..18R,2021MNRAS.508.1973M,2022MNRAS.tmp..221W}. Therefore, prompt mergers resulting from our current BH repositioning scheme could overestimate the rate at which the central BH grows. In future work, we shall investigate this in the context of the assembly of the $z\sim6$ quasars.\n\n\n\begin{figure*}\n\includegraphics[width=8 cm]{varyingmsfmp.png}\n\includegraphics[width=8 cm]{varyingmsfmp_5.90.png}\n\caption{Evolution of the most massive black hole in \texttt{5SIGMA_COMPACT} for $\tilde{M}_{\mathrm{sf,mp}}=5,150~\&~1000$. Left and right panels correspond to seed masses of $10^5~M_{\odot}\/h$ and $8\times10^5~M_{\odot}\/h$ respectively. All the runs here use the TNG accretion model. The different rows show the same set of quantities as Figure \ref{bh_growth_env_fig}. As the seeding criteria become more stringent, there are fewer mergers to grow the BH at $z\gtrsim9$. 
As a result, the final $z=6$ BH mass decreases and falls significantly short of producing the observed $z\sim6$ quasars.}\n\label{gas_based_models_fig}\n\end{figure*}\n\nNext, we focus on the \texttt{5SIGMA_COMPACT} region and further explore different variations of gas-based seeding models to study their impact on BH growth. Note that \texttt{SM5_FOF3000}~($\tilde{M}_{\mathrm{sf,mp}}=5~\&~\tilde{M}_{\mathrm{h}}=3000$), which successfully produces a $z\sim6$ quasar, is the least restrictive amongst the family of BH seeding models developed in \cite{2021MNRAS.507.2012B,2021MNRAS.tmp.3110B}. Figure \ref{gas_based_models_fig} shows the impact of further increasing $\tilde{M}_{\mathrm{sf,mp}}$ to values of 150 and 1000 on the BH mass, luminosity and Eddington ratio evolution. As we increase $\tilde{M}_{\mathrm{sf,mp}}$, fewer seeds form and the merger-driven BH growth is commensurately suppressed. This leads to a significant slow-down of BH growth, thereby decreasing the BH mass assembled by $z=6$. For $10^5~M_{\odot}\/h$ seeds, increasing $\tilde{M}_{\mathrm{sf,mp}}$ from $5$ to $1000$ decreases the final $z=6$ BH mass by a factor of $\sim100$. The $z=6$ luminosities also drop from $\sim10^{47}~\mathrm{erg~s^{-1}}$ to $\sim10^{43}~\mathrm{erg~s^{-1}}$. For more massive $8\times10^5~M_{\odot}\/h$ seeds, the impact is even stronger~(no $8\times10^5~M_{\odot}\/h$ seeds form for $\tilde{M}_{\mathrm{sf,mp}}=1000$). In general, any gas-based seeding prescription that is more restrictive than \texttt{SM5_FOF3000} fails to produce BHs consistent with the observed $z\sim6$ quasars.\n\nOverall, we find that within the TNG galaxy formation model, the \texttt{SM5_FOF3000} gas-based seed model is able to successfully reproduce $z\sim6$ BHs that are comparable to the observed high-z quasars, but only in massive~($\sim10^{12}~M_{\odot}$) halos that are highly compact and have low tidal fields. 
Additionally, both 1) merger-dominated growth at $z\gtrsim9$, and 2) accretion-dominated growth at $z\sim6-9$ are crucial for producing these high-z quasars. \n\n\subsection{Implications for strongly restrictive seed models in producing $z\gtrsim6$ quasars}\n\label{Impact of BH accretion sec}\n\n\begin{figure*}\n\includegraphics[width=8 cm]{varyingeddfac.png}\n\includegraphics[width=8 cm]{varyingbondiradeff.png}\n\n\caption{Evolution of the most massive black hole in the \texttt{5SIGMA_COMPACT} volume for different accretion models, labelled hereafter as \texttt{Boost*_RadEff*_EddFac*} where the `*'s correspond to the values of $\alpha$, $\epsilon_r$ and $f_{\mathrm{edd}}$ respectively. All the runs use the most stringent gas-based seed model \texttt{SM1000_FOF3000}, wherein the growth via mergers is small. We then explore different accretion models. In the left panel, we keep $\alpha=1$ and $\epsilon_r=0.2$ fixed, and show the BH growth histories for different values of $f_{\mathrm{edd}}$ between 1-100. In the right panel, we consider different combinations of $\alpha=1-10000$ and $\epsilon_r=0.2~\&~0.1$. In the absence of significant mergers, accretion alone can assemble a $10^9~M_{\odot}$ BH at $z\sim6$ only if we enhance the maximum allowed accretion rate compared to the TNG accretion model. This can be achieved either by allowing for super-Eddington accretion rates, or by reducing the radiative efficiency. In contrast, a higher Bondi boost alone does not sufficiently enhance BH growth to assemble a $z\sim6$ quasar.}\n\label{varying_accretion_fig}\n\end{figure*}\n\n\begin{figure}\n\includegraphics[width=8 cm]{varying_msfmp_eddlim10.png}\n\caption{Here we consider the model \texttt{Boost1_Radeff0.2_Eddfac10}, which can already grow a $z\sim6$ quasar via accretion alone, and investigate the impact of enhancing the number of seeds and mergers on the final BH mass at $z=6$. 
Blue lines correspond to $\tilde{M}_{\mathrm{sf,mp}}=1000$~(\texttt{SM1000_FOF3000}), which does not produce many seeds and mergers. Orange lines correspond to $\tilde{M}_{\mathrm{sf,mp}}=5$~(\texttt{SM5_FOF3000}), which does produce a substantial number of seeds and mergers. We find that in models such as \texttt{Boost1_Radeff0.2_Eddfac10}, wherein BHs can already grow to $\sim10^{9}~M_{\odot}$ via accretion alone, introducing more seeds and mergers~(by reducing $\tilde{M}_{\mathrm{sf,mp}}$) does not lead to any further increase in the final BH mass at $z=6$. In the 3rd panel, the black dashed line is the detection limit of $10^{-19}~\mathrm{ergs~cm^{-2}~s^{-1}}$ of the Lynx $2-10~\mathrm{keV}$ band, derived using bolometric corrections adopted from \protect\cite{2007MNRAS.381.1235V}. The blue dashed and dotted lines are JWST detection limits of 31st and 29th apparent magnitude for exposure times of $10^5$ and $10^4~\mathrm{s}$ respectively~(same as those assumed in \protect\citealt{2020MNRAS.492.5167V}), with bolometric corrections adopted from \protect\cite{1994ApJS...95....1E}. Therefore, JWST and Lynx observations of the quasar progenitors at $z\sim9-10$ could potentially contain signatures of their seeding environments.} \n\label{varying_accretion_edd}\n\end{figure}\n\n\nWe have thus far seen that $z\sim6$ quasars cannot be assembled by our constrained simulations without an early boost in BH mass via mergers. This can only occur for relatively less restrictive seed models~($\tilde{M}_{\mathrm{sf,mp}}=5$), wherein enough seeds are formed to substantially contribute to merger-driven BH growth. Here we look for circumstances under which more restrictive seed models can produce $z\sim6$ quasars even in the absence of sufficient mergers. 
In particular, we explore models with enhanced accretion in these highly biased regions, compared to the TNG accretion model.\n\nIn Figure \ref{varying_accretion_fig}, we take one of our most restrictive seeding models, i.e. $\tilde{M}_{\mathrm{sf,mp}}=1000$, and explore different accretion models to identify the ones that can produce $z\sim6$ quasars. As already noted in the previous section, only a handful of seeding and merger events occur for this model. We then investigate the BH growth under different variations of the accretion model. In the left panel, we keep $\alpha=1~\&~\epsilon_r=0.2$ fixed and vary $f_{\mathrm{edd}}$ from $1-100$~(recall that $f_{\mathrm{edd}}$ sets the maximum accretion rate in units of the Eddington rate). Not unexpectedly, we find that as $f_{\mathrm{edd}}$ is increased, the final BH mass is enhanced via accretion-driven BH growth. $f_{\mathrm{edd}}\gtrsim10$ is required for growing BHs to $\sim10^9~M_{\odot}$ with luminosities $\sim10^{47}~\mathrm{erg~s^{-1}}$ by $z=6$~(Figure \ref{varying_accretion_fig}: green line). Notably, further increasing $f_{\mathrm{edd}}$ to 100 does not lead to any substantial increase in the final BH mass beyond $\sim10^9~M_{\odot}$~(Figure \ref{varying_accretion_fig}, left panel: red line). This is because the Bondi accretion rate exceeds $100~\times$ the Eddington limit for only a small fraction of the time.\n\nWe now keep $f_{\mathrm{edd}}$ fixed at 1 and examine the impact of the boost factor $\alpha$. When $\alpha$ is increased from $1$ to $100$, it leads to a factor $\sim5$ increase in the final BH mass at $z=6$~(blue vs orange lines in Figure \ref{varying_accretion_fig}: right panel). This is substantially smaller than the impact of increasing $f_{\mathrm{edd}}$ to 10. Additionally, further increasing the boost factor to 10000~(green lines in Figure \ref{varying_accretion_fig}: right panel) makes no significant difference in the $z=6$ BH mass. 
This implies that the maximum accretion rate set by the Eddington factor is much more consequential for the $z=6$ BH mass than the Bondi boost factor. This is because the majority of the BH mass assembly occurs at $z\lesssim9$, when the accretion rates are already at their maximum allowed value. To that end, note that the maximum accretion rate can also be increased by decreasing the radiative efficiency $\epsilon_r$. Several cosmological simulations~(including N21) use a lower efficiency of $\epsilon_r=0.1$. If we fix $\alpha=1$ and decrease the radiative efficiency from $0.2$ to $0.1$, the $z=6$ BH mass increases by a factor of $10$~(see blue vs red lines in Figure \ref{varying_accretion_fig}: right panel). Not surprisingly, this is similar to what we found when $f_{\mathrm{edd}}$ was increased to 2~(revisit Figure \ref{varying_accretion_fig}: left panel). Notably, at this lower radiative efficiency of $\epsilon_r=0.1$, applying a boost of $\alpha=100$ forms a $\sim10^9~M_{\odot}$ BH with a luminosity of $\sim10^{47}~\mathrm{erg~s^{-1}}$ at $z=6$~(purple line in Figure \ref{varying_accretion_fig}: right panel). This is consistent with the results of N21, as we shall see in more detail in Section \ref{Comparison with other theoretical works}. \n\nTo summarize our results thus far, we have shown that to form the observed $z\sim6$ quasars within rare dense compact halos in our simulations, one of the following two requirements must be fulfilled: \begin{enumerate} \item When the default TNG model is used~(i.e., $\alpha=1,\epsilon_r=0.2~\&~f_{\mathrm{edd}}=1$), we need a sufficiently early~($z\gtrsim9$) boost in BH mass driven by BH mergers to grow to $\sim10^9~M_{\odot}$ by $z=6$. Our gas-based seeding prescription with $\tilde{M}_{\mathrm{sf,mp}}=5$ and $\tilde{M}_h=3000$ satisfies this requirement. 
\item For more restrictive seeding models~($\tilde{M}_{\mathrm{sf,mp}}\gtrsim150$) where mergers are absent or insufficient, $z\sim6$ quasars cannot be produced unless we enhance the maximum allowed accretion rate~(by factors $\gtrsim10$) within these extremely overdense regions, by either increasing the Eddington factor or decreasing the radiative efficiency. Notably, this result is consistent with the recent work of \cite{2022arXiv220412513H}, which uses a semi-analytic approach to produce $z\sim6$ quasars using super-Eddington accretion, from both light~($10~M_{\odot}$) and heavy~($10^5~M_{\odot}$) seeds. \n\end{enumerate}\n\n\subsection{Impact of seed model on BH growth for `growth optimized' accretion parameters}\nGiven that some of these BH accretion models can produce massive BHs by $z\sim6$ without an early boost from BH mergers, we now explore what happens when these `growth optimized' accretion parameters are combined with merger-driven growth from less restrictive BH seed models. In the left panel of Figure \ref{varying_accretion_edd}, we take the accretion model $\alpha=1, \epsilon_r=0.2~\&~f_{\mathrm{edd}}=10$, and compare the BH growth histories for two seed models with $\tilde{M}_{\mathrm{sf,mp}}=5~\&~1000$~(\texttt{SM5_FOF3000} \& \texttt{SM1000_FOF3000} respectively). We have already seen that \texttt{SM1000_FOF3000} produces a $\sim10^9~M_{\odot}$ BH with $\sim10^{47}~\mathrm{erg~s^{-1}}$ luminosity at $z=6$ even in the absence of a significant number of mergers. We now examine whether the substantial merger-driven growth of \texttt{SM5_FOF3000} leads to any further increase in the $z=6$ BH mass much beyond $\sim10^9~M_{\odot}$, when combined with super-Eddington accretion of $f_{\mathrm{edd}}=10$. The \texttt{SM5_FOF3000} model grows BHs via mergers to $\sim10^7~M_{\odot}$ by $z\sim9$, which is a factor of $\sim 500$ higher than the \texttt{SM1000_FOF3000} model. 
Despite this large difference in masses, the luminosities are similar in both models, such that the higher-mass BH in \texttt{SM5_FOF3000} has a lower Eddington ratio. By $z=6$, the BH in \texttt{SM5_FOF3000} reaches a mass of $\sim10^9~M_{\odot}$, similar to \texttt{SM1000_FOF3000}. To summarize, if a given model already produces a $\sim10^9~\mathrm{M_{\odot}}$ BH at $z\sim6$ via accretion alone, boosting the merger-driven BH growth by forming more seeds does not significantly increase the final $z=6$ BH mass.\n\nThe results from Figure \ref{varying_accretion_edd} also imply that if the accretion model is such that $z\sim6$ quasars can be assembled via accretion alone, multiple sets of seed models can produce the observed $z\sim6$ quasars. In such a case, the $z\sim6$ quasar observations alone may not be able to constrain BH seed models. However, the progenitors of these quasars at $z\gtrsim9$ can have significantly different assembly histories depending on the seed model, particularly in terms of the contribution from BH mergers. In fact, \texttt{SM5_FOF3000} naturally predicts a $\sim100$ times higher number of mergers compared to \texttt{SM1000_FOF3000}. Moreover, these merging progenitors will likely include the most massive black holes at their respective redshifts. Therefore, detection of the loudest LISA events at $z\gtrsim9$ is likely to provide strong constraints on seed models. In terms of electromagnetic observations, the AGN progenitors are above the detection limits of Lynx and JWST~(with a limiting apparent magnitude of 31) up to $z\sim10$; this is true for both the \texttt{SM5_FOF3000} and \texttt{SM1000_FOF3000} models~(revisit 3rd row of Figure \ref{varying_accretion_edd}). However, the difference in luminosities produced by the two seed models is within a factor of $\sim10$, corresponding to a magnitude difference of only $\sim2.5$. 
Therefore, it is likely going to be difficult to find imprints of seed models within Lynx and JWST observations of the brightest AGN at higher redshifts~($z\gtrsim9$).\n\n\subsection{BH growth at higher resolutions}\n\label{BH growth at higher resolutions_sec}\n\begin{figure}\n\includegraphics[width=8.5 cm]{Seeding_distribution_resolution_convergence.png}\n\includegraphics[width=8.5 cm]{Growth_rate_resolution_convergence.png}\n\caption{Top and bottom panels show the total number of seeds formed and the growth of the most massive BH, respectively, for $M_{\mathrm{seed}}=10^5~M_{\odot}\/h$ at two different resolutions, namely $N=360$ and $N=720$. We show the resolution convergence for two distinct seed models which produce a $\sim10^9~M_{\odot}$ BH by $z=6$. Left panels correspond to the least restrictive seeding model of $\tilde{M}_{\mathrm{sf,mp}}=5, \tilde{M}_{\mathrm{h}}=3000$, wherein mergers substantially dominate the growth at $z\gtrsim9$ and the accretion model is Eddington-limited with $\alpha=1,\epsilon_r=0.2,f_{\mathrm{edd}}=1$. The right panels correspond to the much more restrictive seed model of $\tilde{M}_{\mathrm{sf,mp}}=1000, \tilde{M}_{\mathrm{h}}=3000$ with insignificant merger-driven growth; the accretion model corresponds to $\alpha=1,\epsilon_r=0.2,f_{\mathrm{edd}}=10$. For both seed models, the final BH masses at $z\sim7$ are similar for $N=360$ and $N=720$.}\n\label{resolution_convergence_fig}\n\end{figure}\nWe have thus far presented results at our fiducial resolution of $N=360$, corresponding to a gas mass resolution of $\sim10^5~M_{\odot}\/h$. While we were able to explore a wide range of models at this resolution at reasonable computational cost, we demonstrated in \cite{2021MNRAS.507.2012B} that our gas-based seed models start to become reasonably well converged only at resolutions $\lesssim10^4~M_{\odot}\/h$. 
Therefore, it is imperative to perform a resolution convergence test by running some of these simulations at gas mass resolutions of $\sim10^4~M_{\odot}\/h$~($N=720$). In particular, we consider the seed models \texttt{SM5_FOF3000} with $\alpha=1,\epsilon_r=0.2,f_{\mathrm{edd}}=1$, and \texttt{SM1000_FOF3000} with $\alpha=1,\epsilon_r=0.2,f_{\mathrm{edd}}=10$, both of which successfully produced a $\sim10^9~M_{\odot}$ quasar by $z\sim6$. For computational reasons, we could only run the higher resolution simulations~($N=720$) to $z=7$. \n\nThe results are shown in Figure \ref{resolution_convergence_fig}, where they are compared to the lower resolution runs~($N=360$). Let us start with \texttt{SM5_FOF3000}~(left panels), which produces enough seeds to allow for substantial amounts of merger-driven BH growth. The number of seeds formed~(Figure \ref{resolution_convergence_fig}: top left panel) is similar between $N=360~\&~720$ for $z\gtrsim12$. As shown in \cite{2021MNRAS.507.2012B}, at these redshifts, seeding is largely driven by the proliferation of new star-forming regions~(star formation is reasonably well converged for gas mass resolutions of $\lesssim10^5~M_{\odot}\/h$). At $z\lesssim12$, the higher resolution simulation produces a somewhat lower number of seeds~(by factors up to $\sim5$). The slower resolution convergence at $z\sim7-12$ is also fully consistent with the zoom simulations of \cite{2021MNRAS.507.2012B}. It is due to the markedly stronger metal dispersion at higher resolutions, which causes a stronger suppression of seeding at $z\sim7-12$ relative to lower resolution simulations. Nevertheless, the final $z=7$ BH mass of $\sim10^8~M_{\odot}\/h$ assembled by the higher resolution simulation~(Figure \ref{resolution_convergence_fig}: bottom left panel) is only slightly smaller~(by a factor of $\sim1.5$) than that of the lower resolution simulation. 
This strongly indicates that even at higher resolutions, the \\texttt{SM5_FOF3000} seed model would be able to assemble a $\\sim10^9~M_{\\odot}$ BH by $z=6$. \n\nNow let us focus on the \\texttt{SM1000_FOF3000} model~(with $f_{\\mathrm{edd}}=10$), where merger-driven BH growth is minimal and super-Eddington growth is used to produce a $\\sim10^9~M_{\\odot}$ BH by $z=6$. Here, the higher resolution run~($N=720$) produces only 1 seed, whereas the lower resolution run produced $\\sim10$ seeds~(Figure \\ref{resolution_convergence_fig}: top right panel). This is also consistent with our findings in \\cite{2021MNRAS.507.2012B}, where we showed that resolution convergence becomes poorer as seed models become more restrictive with higher $\\tilde{M}_{\\mathrm{sf,mp}}$. Despite this, the accretion-driven BH growth at $z\\lesssim9$ assembles a BH mass close to $\\sim10^8~M_{\\odot}$ by $z\\sim7$ at both the $N=720$ and $N=360$ resolutions~(Figure \\ref{resolution_convergence_fig}: bottom right panel). Overall, we find that the BH models which successfully produce a $z\\gtrsim6$ quasar at our fiducial resolution~($N=360$) will likely continue to do so at even higher resolutions. \n\\subsection{DCBHs as possible seeds of $z\\gtrsim7$ quasars}\n\n\\begin{figure}\n\\includegraphics[width=8 cm]{LW_distribution.png}\n\\caption{Total mass of gas cells illuminated by LW photons originating from Pop II stars.}\n\\end{figure}\n\n\\subsection{Comparison with previous theoretical works}\n\n\\begin{figure}\n\\caption{Growth of the most massive BH in our simulations of the \\texttt{BIG-BH} initial conditions, compared to N21. Our fiducial model~(with $\\alpha=1,f_{\\mathrm{edd}}=1,\\epsilon_r=0.2$) assembles a significantly lower BH mass~(by factors of $\\sim30$) compared to N21. This difference is due to the combined effect of the higher Bondi boost factor and Eddington factor, as well as the lower radiative efficiency, used in N21. In fact, when we use the same accretion parameters as N21~($\\alpha=100,f_{\\mathrm{edd}}=2,\\epsilon_r=0.1$), we produce a slightly higher~(by a factor of $\\sim2$) BH mass compared to their work.}\n\\label{arepo_vs_gadget_fig}\n\\end{figure}\n\nHere, we compare our results to other theoretical works that have explored the formation of the $z\\gtrsim6$ quasars. We will first compare with hydrodynamic simulations, where we note that most of the existing work has so far used seed models that are only based on halo mass. Our simulations using the TNG seed model therefore provide the most direct comparison to such studies. \nWe start with the constrained simulations of N21 produced using the \\texttt{MP-GADGET} code~\\citep{yu_feng_2018_1451799} with the \\texttt{BlueTides} galaxy formation model~\\citep{2016MNRAS.455.2778F}. Their primary constrained peak~(referred to as \\texttt{BIG-BH} in their work) is very similar to \\texttt{5SIGMA_COMPACT}. Their seed model is also similar to the TNG seed model; they adopt the same halo mass threshold for seeding~($>5\\times10^{10}~M_{\\odot}\/h$), but with a slightly smaller seed mass of $5\\times10^5~M_{\\odot}\/h$. Notably, their simulation produced a $\\sim3\\times10^8~M_{\\odot}$ BH by $z=7$, which is significantly higher than the predictions of our simulations with the TNG seeding and accretion model. However, N21 used a lower radiative efficiency of $\\epsilon_r=0.1$ and a Bondi boost factor of $\\alpha=100$, which we have already shown to produce much stronger growth than the TNG accretion model~(revisit Figure \\ref{varying_accretion_fig}: right panel). Additionally, they also used a higher Eddington factor of $f_{\\mathrm{edd}}=2$. 
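The sensitivity to $f_{\\mathrm{edd}}$ and $\\epsilon_r$ can be made explicit with the standard estimate for growth at a constant Eddington fraction~(a textbook expression, not a result of our simulations):

```latex
% Growth at a constant Eddington fraction f_edd with radiative efficiency eps_r:
%   dM/dt = f_edd (1 - eps_r) L_Edd / (eps_r c^2),  L_Edd = 4 pi G M m_p c / sigma_T
M(t) = M_{\mathrm{seed}}
  \exp\!\left[f_{\mathrm{edd}}\,\frac{1-\epsilon_r}{\epsilon_r}\,
  \frac{t}{t_{\mathrm{Edd}}}\right],
\qquad
t_{\mathrm{Edd}} \equiv \frac{\sigma_T\,c}{4\pi G m_p} \approx 450~\mathrm{Myr}.
```

Going from the TNG values $(f_{\\mathrm{edd}},\\epsilon_r)=(1,0.2)$ to the N21 values $(2,0.1)$ multiplies the exponent by $4.5$, which by itself accounts for orders of magnitude in the mass assembled over the $\\sim0.7$~Gyr available by $z\\sim7$.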
\n\nWe perform a more direct comparison to N21 in Figure \\ref{arepo_vs_gadget_fig} by simulating a box identical to their work, particularly in terms of volume~($20~\\mathrm{Mpc}\/h$ box length), resolution~($N=352$), initial condition~(\\texttt{BIG-BH}) and the BH seed model~($5\\times10^5~M_{\\odot}\/h$ seeds in $>5\\times10^{10}~M_{\\odot}\/h$ halos). If we apply the TNG accretion model~($\\alpha=1,f_{\\mathrm{edd}}=1,\\epsilon_r=0.2$), our \\texttt{BIG-BH} simulation assembles a BH of mass $\\sim10^7~M_{\\odot}$ by $z=7$~(Figure \\ref{arepo_vs_gadget_fig}: purple line); this is $\\sim30$ times smaller than the N21 prediction~(and similar to that of \\texttt{5SIGMA_COMPACT}). Next, if we individually adjust each of these accretion parameters~(Figure \\ref{arepo_vs_gadget_fig}: pink, green and blue lines) to the N21 values~(one parameter at a time), we find that they all lead to a notable enhancement of the BH growth~(as also seen in Section \\ref{Impact of BH accretion sec} for \\texttt{SM1000_FOF3000}). Finally, if all the accretion parameters in our simulations are simultaneously set to the N21 values~(Figure \\ref{arepo_vs_gadget_fig}: black line), we produce a $\\sim5\\times10^8~M_{\\odot}$ BH by $z=7$, which is only slightly higher than the N21 prediction. Overall, our results are broadly consistent with N21. The same general conclusion also applies to the comparison with the results of \\cite{2020MNRAS.496....1H}, who performed constrained runs using \\texttt{MP-GADGET} similar to those of N21. Notably, they find that the final mass at $z\\sim6$ is insensitive to the seed mass for $\\sim 5\\times10^4-5\\times10^5~M_{\\odot}\/h$ seeds, which is consistent with our findings. 
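The qualitative effect of each accretion parameter can be illustrated with a toy integration of Eddington-limited Bondi growth. This is only a sketch~(the constant ambient gas density and sound speed are made-up assumptions, not how the simulations work), but it shows why a large Bondi boost $\\alpha$, a higher $f_{\\mathrm{edd}}$ and a lower $\\epsilon_r$ together produce dramatically faster growth of low mass seeds:

```python
import numpy as np

# Physical constants (cgs units)
G, C, M_P, SIGMA_T = 6.674e-8, 2.998e10, 1.673e-24, 6.652e-25
M_SUN, MYR = 1.989e33, 3.156e13  # grams, seconds

def grow_bh(m0_msun, t_myr, alpha, f_edd, eps_r, rho=1e-24, cs=1e6):
    """Euler-integrate dM/dt = (1 - eps_r) * min(alpha*Mdot_Bondi, f_edd*Mdot_Edd).

    rho [g/cm^3] and cs [cm/s] are held constant -- a toy assumption.
    Returns the final mass in solar masses.
    """
    m, dt = m0_msun * M_SUN, 0.1 * MYR
    for _ in range(int(t_myr / 0.1)):
        mdot_bondi = alpha * 4.0 * np.pi * G**2 * m**2 * rho / cs**3
        mdot_edd = f_edd * 4.0 * np.pi * G * m * M_P / (eps_r * SIGMA_T * C)
        m += (1.0 - eps_r) * min(mdot_bondi, mdot_edd) * dt
    return m / M_SUN

# TNG-like accretion parameters vs the N21-like choice, for a 1e5 Msun seed
m_tng = grow_bh(1e5, 300.0, alpha=1.0, f_edd=1.0, eps_r=0.2)
m_n21 = grow_bh(1e5, 300.0, alpha=100.0, f_edd=2.0, eps_r=0.1)
```

With these~(arbitrary) ambient conditions, the unboosted run barely grows because of the $M^2$ Bondi scaling, while the boosted run quickly saturates its Eddington cap and then grows exponentially; the point is the parameter sensitivity, not the absolute numbers.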
\n\nThe zoom simulations of \\cite{2009MNRAS.400..100S}, \\cite{2014MNRAS.440.1865F} and \\cite{2014MNRAS.439.2146C} adopted a Bondi boost factor of 100 and radiative efficiencies of $0.05-0.1$; with this accretion model, they successfully produced the $z\\sim6$ quasars without the need for mergers or super-Eddington accretion, consistent with our results. \\cite{2020arXiv201201458Z} performed zoom simulations targeting the formation of a $\\sim10^{13}~M_{\\odot}$ halo at $z=6$. In their fiducial model, they placed $10^5~M_{\\odot}\/h$ seeds in $10^{10}~M_{\\odot}\/h$ halos. These seeds grew via Eddington limited Bondi accretion~($\\alpha=1,f_{\\mathrm{edd}}=1$) with a radiative efficiency of $\\epsilon_r=0.1$. With this model, they were able to grow a $\\sim10^9~M_{\\odot}$ BH by $z=6$ without the help of substantial merger-driven BH growth, super-Eddington accretion, or a Bondi boost. They achieve somewhat faster BH growth than our simulations, which only assemble a $\\sim2\\times10^8~M_{\\odot}$ BH by $z=6$ if $\\epsilon_r=0.1$ is applied without a Bondi boost~(see green line in Figure \\ref{arepo_vs_gadget_fig}). While it is not clear what causes the difference between our results and \\cite{2020arXiv201201458Z}, it may be attributed to differences in the implementation of other aspects of the galaxy formation model such as metal enrichment and stellar feedback. However, when they increased their radiative efficiency to 0.2, their final BH mass at $z=6$ dramatically decreased to $\\sim4\\times10^7~M_{\\odot}$, consistent with our findings. Finally, early work by \\cite{2007ApJ...665..187L} also produced a $\\sim10^{9}~M_{\\odot}$ quasar with $\\epsilon_r=0.1$ and $\\alpha=1,f_{\\mathrm{edd}}=1$ within an $8\\times10^{12}~M_{\\odot}$ halo at $z=6.5$. Notably, their host halo was assembled after a series of 8 major mergers between $z\\sim14-6.5$. Their results therefore highlighted yet another formation pathway for high-$z$ quasars, i.e. 
via a series of intermittent episodes of rapid growth driven by major mergers. \n\n\n\nSimilar to \\cite{2020arXiv201201458Z}, \\cite{2019MNRAS.488.4004L} was also able to produce a $\\sim10^9~M_{\\odot}$ BH by $z\\sim6$ with Eddington limited Bondi accretion and a radiative efficiency of 0.1, without applying a Bondi boost factor. This may be due to their adopted thermal feedback efficiency of 0.005, which is significantly smaller than the values~($\\sim0.05-0.15$) adopted in most other works including ours. The very recent work of \\cite{2021MNRAS.507....1V} considered even lower radiative efficiencies of $0.03$, allowing them to grow BHs to $\\sim10^9~M_{\\odot}$ by $z\\sim6$ via feedback regulated Bondi accretion without the need for a Bondi boost factor. Lastly, the radiation hydrodynamics zoom simulations of \\cite{2018ApJ...865..126S} also adopted a radiative efficiency of 0.1 and were able to assemble a $\\sim10^9~M_{\\odot}$ BH by $z\\sim7$ despite gas accretion being sub-Eddington~(and no substantial contribution from BH mergers) for almost the entire growth history; however, it is difficult to compare their results to our work since they adopted the alpha disk formalism~(see Eq.~(2) of \\citealt{2010MNRAS.406L..55D}) to calculate the BH accretion rate, in which there is no explicit dependence on BH mass. \n\nWhile previous hydrodynamic simulations probing $z\\gtrsim6$ quasars have mostly adopted halo mass based prescriptions for seeding, SAMs have been able to explore a broader range of seeding channels~(and seed masses) with more physically motivated seeding criteria~\\citep[for example,][]{2007MNRAS.377.1711S,Volonteri_2009, 2012MNRAS.423.2533B,2018MNRAS.476..407V, 2018MNRAS.481.3278R, 2019MNRAS.486.2336D, 2020MNRAS.491.4973D}. Here we shall compare with works that have used SAMs to make predictions specific to the $z\\gtrsim6$ quasars. 
\\cite{2016MNRAS.457.3356V} and \\cite{2021MNRAS.506..613S} used the \\texttt{GAMETE\/QSOdust} data-constrained SAM~(introduced in \\citealt{2016MNRAS.457.3356V}) to trace the formation of a $\\sim10^9~M_{\\odot}$ BH along the merger tree of a $10^{13}~M_{\\odot}\/h$ halo at $z=6.42$. They find that heavy seeds~($\\sim10^{5}~M_{\\odot}\/h$) contribute the most to the formation of $z\\gtrsim6$ quasars compared to light~($\\sim10^{2}~M_{\\odot}\/h$) and intermediate seeds~($\\sim10^{3}~M_{\\odot}\/h$). While we cannot probe light and intermediate seeds, their results for the heavy seeds do not conflict with our findings. They also apply a radiative efficiency of 0.1 combined with a Bondi boost factor of $50-150$, and are able to assemble $z\\sim6$ quasars without substantial contributions from BH mergers or super-Eddington accretion. This is fully consistent with our results, and also with results from other hydrodynamic simulations.\n\nOverall, we find that the differences between our results with \\texttt{IllustrisTNG} physics and the results from most previous works largely originate from differences in the modeling of BH accretion and feedback. This also shows that when the default \\texttt{IllustrisTNG} physics is applied to such extreme overdense regions, it is much more difficult to form $z\\sim6$ quasars than with the physics adopted in other simulations and SAMs. This is primarily due to the TNG accretion model, which has a higher radiative efficiency of 0.2~(most works adopt a value of 0.1). At the same time, the lack of a Bondi boost in the TNG accretion model~(most works adopted a value of 100) slows the BH growth even further. Note that the uncertainties within the modeling of BH accretion are significant, particularly at high redshifts, where the gas environments are likely to be very different from the assumptions underlying the Bondi accretion model. Additionally, the radiative efficiencies are also poorly constrained. 
To that end, different subgrid models are better or worse at reproducing different aspects of the observed SMBH and galaxy populations~\\citep[e.g.][]{2021MNRAS.503.1940H,2022MNRAS.509.3015H}. Moving forward, it will be necessary to build better subgrid models with fewer modeling uncertainties, as well as to improve the observational constraints, particularly on high-$z$ SMBHs. \n\n\n\n\n\n\n\n\n\n\n\n\\section{Discussion and Conclusions}\n\\label{Conclusions_sec}\nIn this work, we have studied the implications of the \\texttt{IllustrisTNG} galaxy formation model for the brightest $z\\gtrsim6$ quasar population, particularly in the context of different BH seeding models. These extremely rare~($\\sim1~\\mathrm{Gpc}^{-3}$) objects have grown to masses of $\\sim10^9-10^{10}~M_{\\odot}$~(comparable to the most massive $z\\sim0$ SMBHs) within the first Gyr after the Big Bang; this is difficult to achieve in general, and it is likely to place strong constraints on models for BH formation and growth. \n\nWe explore the following seeding prescriptions:\n\\begin{itemize}\n \\item TNG seed model: This is the default ``halo mass based\" prescription used within the \\texttt{IllustrisTNG} simulation suite, where we place $8\\times10^5~M_{\\odot}\/h$ seeds in $>5\\times10^{10}~M_{\\odot}\/h$ halos. \n \\item Gas-based seed models: Here we place seeds~($M_{\\mathrm{seed}}=1.25\\times10^4,1\\times10^5~\\&~8\\times10^5~M_{\\odot}\/h$) in halos that exceed critical thresholds for halo mass and dense, metal poor gas mass~(represented by $\\tilde{M}_h$ and $\\tilde{M}_{\\mathrm{sf,mp}}$ respectively, in units of $M_{\\mathrm{seed}}$). We also explore models where the dense, metal poor gas is required to have LW fluxes above a critical value $J_{\\mathrm{crit}}$. 
\n\\end{itemize}\n\n\nWith the above seeding prescriptions, we probe the possible formation of the $z\\sim6$ quasars~(defined as $\\sim10^9~M_{\\odot}$ BHs with luminosities of $\\sim10^{47}~\\mathrm{erg~s^{-1}}$) within extremely rare peaks in the density field using the technique of constrained Gaussian realizations. This technique allows us to constrain the peak of the density field so as to assemble $\\gtrsim10^{12}~M_{\\odot}\/h$ halos by $z\\sim7$ within a simulation volume of $(9~\\mathrm{Mpc\/h})^3$. Having a relatively small simulation volume allows us to build a large simulation suite exploring a variety of density peak parameters as well as seeding parameters. \n\nWe reproduce findings from previous work~(N21) showing that BH growth is most efficient at density peaks that have high compactness and a low tidal field. In fact, a highly compact $5\\sigma$ peak at $1.0$ Mpc\/h with low tidal field~(\\texttt{5SIGMA_COMPACT}) produces a more massive BH~(by factors of $\\sim2$) compared to a typical $6\\sigma$ peak at $1.3$ Mpc\/h~(\\texttt{6SIGMA}). The reason for this is two-fold: First, the target $z=7$ halo in \\texttt{5SIGMA_COMPACT} has more massive progenitors than that of \\texttt{6SIGMA}, allowing seeds to form in potentially higher numbers and boosting the merger-driven BH growth. Second, the \\texttt{5SIGMA_COMPACT} region forms a more compact gas cloud which falls towards the BH more symmetrically from all directions compared to \\texttt{6SIGMA}; this leads to higher gas densities in their neighborhood and boosts the accretion-driven BH growth. \n\nDespite the enhanced accretion and merger-driven BH growth in \\texttt{5SIGMA_COMPACT}, we find that when the TNG seed model is used, the final mass of the central BH in the target halo at $z=6$ is only $\\sim5\\times10^7~M_{\\odot}$ with luminosities of $\\sim10^{45}~\\mathrm{erg~s^{-1}}$. This significantly falls short of producing an observed $z\\sim6$ quasar i.e. 
a $\\sim10^9~M_{\\odot}$ BH with a bolometric luminosity of $\\sim10^{47}~\\mathrm{erg~s^{-1}}$. However, when we apply the more physically motivated gas-based seeding prescription where BHs are seeded in halos with a minimum star forming, metal poor gas mass of $5$ times the seed mass and a total halo mass of $3000$ times the seed mass~($\\tilde{M}_{\\mathrm{sf,mp}}=5$ and $\\tilde{M}_{\\mathrm{h}}=3000$), we find that there is a substantial amount of merger-driven BH growth at $z\\gtrsim10$ compared to the TNG seed model. As a result, the BH assembles a mass of $\\sim10^{9}~M_{\\odot}$ at $z=6$ and grows close to the Eddington limit with a bolometric luminosity of $\\sim10^{47}~\\mathrm{erg~s^{-1}}$. This is consistent with the observed $z\\sim6$ quasars, and is achievable for all seed mass values between $\\sim10^4-10^6~M_{\\odot}\/h$. Lastly, note that this can also be achieved by enhancing merger-driven growth within halo mass based seed models~(like the TNG seed model) by sufficiently reducing the halo mass threshold.\n\nNotably, there are two distinct phases of BH growth in our simulations: 1) $z\\gtrsim9$, when the BH growth is predominantly driven by BH mergers, and 2) $z\\sim9-6$, when gas accretion dominates the BH growth. \nTo form a $z\\gtrsim6$ quasar within a universe with \\texttt{IllustrisTNG} physics, the BH growth has to be boosted by BH mergers at $z\\gtrsim9$. Amongst all the seed models we explored, only the one with $\\tilde{M}_{\\mathrm{sf,mp}}=5$ and $\\tilde{M}_{\\mathrm{h}}=3000$ provides enough mergers to assemble $z\\sim6$ quasars.\n\nFor much more restrictive gas-based seed models~($\\tilde{M}_{\\mathrm{sf,mp}}=1000$ and $\\tilde{M}_{\\mathrm{h}}=3000$, for example), very few seeds are formed and there is little to no merger-driven growth; as a result, they fail to produce $z\\sim6$ quasars in the \\texttt{IllustrisTNG} universe. 
However, recall that the \\texttt{IllustrisTNG} model was calibrated to reproduce properties of relatively common galaxies and BHs at low redshifts. We therefore explored the possibility of enhanced accretion in these extreme overdense regions compared to the TNG accretion model. We found that in order to form $z\\sim6$ quasars with these restrictive seed models, it is crucial to increase~(by factors $\\gtrsim10$) the maximum accretion rate allowed for a BH of a given mass. This can be achieved by either increasing the Eddington factor or decreasing the radiative efficiency. To that end, increasing the Bondi boost factor alone does not sufficiently boost the BH mass assembly to produce the $z\\sim6$ quasars. Lastly, note that even for such high values of $\\tilde{M}_{\\mathrm{sf,mp}}$, one can enhance merger-driven BH growth by choosing a lower halo mass threshold $\\tilde{M}_{\\mathrm{h}}$; this would relax the constraints on the accretion model in producing $z\\sim6$ quasars.\n\n \n\nProspects for DCBH formation in the \\texttt{5SIGMA_COMPACT} region are limited if the critical LW fluxes are indeed $\\gtrsim1000~J_{21}$, as predicted by one-zone chemistry models and small scale hydrodynamics simulations. This is because the LW intensities within the dense, metal poor pockets of the \\texttt{5SIGMA_COMPACT} region do not significantly exceed $\\sim300~J_{21}$ between $z\\sim7-22$. The \\texttt{5SIGMA_COMPACT} region produces a handful of seeds only for somewhat lower critical fluxes of $\\sim300~J_{21}$. Even for these optimistic estimates of $J_{\\mathrm{crit}}$, given the obvious lack of merger-driven BH growth, DCBHs would require one of the optimal accretion scenarios described in the previous paragraph in order to grow into a $z\\gtrsim6$ quasar. 
As for other theoretical seeding channels such as Pop III and NSC seeds, without being able to explicitly resolve their formation conditions, it is currently difficult to tell whether they form and merge abundantly enough to qualify as potential origins of the $z\\gtrsim6$ quasars; we shall investigate this in the future. \n\nWe note that our results are specific to features of our underlying \\texttt{IllustrisTNG} galaxy formation model. They may significantly depend on the prescriptions for star formation, metal enrichment, stellar feedback and BH dynamics. Additionally, there are also several other BH seeding, accretion and feedback models beyond the ones explored in this work that could potentially produce $z\\sim6$ quasars. Black hole accretion and feedback modeling is a major source of uncertainty. For example, the lack of accretion-driven BH growth at $z\\gtrsim9$ may be partly influenced by the Bondi accretion model, which struggles to grow low mass BHs due to the $M_{bh}^2$ scaling of the accretion rate. This $M_{bh}^2$ scaling also implies that at these early epochs, when self-regulation by feedback is relatively weak, the BH growth is extremely sensitive to the local gas environment. This local gas environment may be impacted by other aspects of galaxy formation, such as star formation, stellar feedback~\\citep[e.g.][]{2017MNRAS.468.3935H}, metal enrichment and gas cooling. While the $M_{bh}^2$ scaling appears as a generic feature of all accretion models based on a gas capture radius\n\\citep{2005MNRAS.361..776S,2007ApJ...665..107P,2009MNRAS.398...53B}, there are also models such as gravitational torque driven accretion~\\citep{2017MNRAS.464.2840A,2019MNRAS.486.2827D} where the scaling exponent is smaller~($M_{bh}^{1\/6}$). This can significantly boost the growth of low mass BHs, but also slow down the growth of high mass BHs. 
As a result, it can have non-trivial implications for the feasibility of various BH models to produce $z\\gtrsim6$ quasars. \n\nA final caveat to our results lies in our modelling of BH dynamics. In particular, due to the limited simulation resolution, we use the standard BH repositioning scheme, which instantaneously relocates the BH to a nearby potential minimum. In fact, several simulations with more realistic dynamics models~\\citep[e.g.][]{2017MNRAS.470.1121T} have now indicated that it may be difficult for many of the seeds~(particularly lower mass seeds) to sink to the local potential minima, thereby leading to a population of wandering BHs~\\citep{2018ApJ...857L..22T,2021MNRAS.503.6098R,2021ApJ...916L..18R,2021MNRAS.508.1973M,2022MNRAS.tmp..221W}. This would have two important effects: 1) overestimating the accretion rates, since the repositioned BHs may spend more time around dense gas than they would with more realistic dynamics models, and 2) overestimating the merger rates at early times, since nearby BHs are promptly merged. In the future, we shall assess the impact of all of these caveats on the formation of $z\\gtrsim6$ quasars.\n\nDespite the caveats, our results overall indicate a strong prospect of revealing the seeding environments of the observed $z\\gtrsim6$ quasars using upcoming facilities such as LISA. In particular, regardless of the accretion model, different seed models predict distinct merger and accretion histories for the progenitors of these quasars at $z\\gtrsim9$. These progenitors will also be amongst the most massive sources at their corresponding redshifts. In addition to the strong prospect of detecting their mergers with LISA up to $z\\sim20$, their AGN luminosities also exceed the detection limits of Lynx and JWST up to $z\\sim10$. However, the difference in the predicted AGN luminosities between different seed models is small~($\\lesssim2.5$ dex in magnitude). 
Therefore, detecting electromagnetic signatures of seeding is going to be challenging for JWST and Lynx. \n\n\n\n \n\n\n\n\n\n\\section*{Acknowledgements}\nAKB thanks Dylan Nelson for valuable discussion and feedback. LB acknowledges support from NSF award AST-1909933 and Cottrell Scholar Award \\#27553 from the Research Corporation for Science Advancement.\nPT acknowledges support from NSF-AST 2008490.\nRW is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference \\#CITA 490888-16.\nTDM acknowledges funding from NSF AST-1616168, NASA ATP 80NSSC20K0519,\n NASA ATP 80NSSC18K101, and NASA ATP NNX17AK56G.\nThis work was also supported by the NSF AI Institute: Physics of the Future, NSF PHY-2020295. YN acknowledges support from the McWilliams fellowship.\n\\section*{Data availability}\nThe underlying data used in this work shall be made available upon reasonable request to the corresponding author.\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section*{Introduction}\n\nThe $K^\\pm\\to\\pi^\\pm\\pi^+\\pi^-$ decay can be described~\\cite{pdg} in\nterms of the Lorentz invariant kinematic variables $u$ and $v$\ndefined as\n\\begin{equation}\nu=\\frac{s_{12}-s_0}{m_\\pi^2},~~~~v=\\frac{s_{13}-s_{23}}{m_\\pi^2},\n\\end{equation}\n$$\ns_{ij}=(P_i+P_j)^2,~~~i,j=1,2,3,~~~i<j,~~~~s_0=\\frac{s_{12}+s_{13}+s_{23}}{3},\n$$\nwhere $P_i$ are the pion four-momenta. The matrix element squared is conventionally parameterized by the polynomial expansion\n\\begin{equation}\n|M(u,v)|^2\\sim 1+gu+hu^2+kv^2,\n\\label{slopes}\n\\end{equation}\nwhere $g$, $h$ and $k$ are the slope parameters. The selected $K^\\pm\\to\\pi^\\pm\\pi^+\\pi^-$ candidates were required to satisfy the following criteria:\n\\begin{itemize}\n\\item Decay vertex downstream of the final collimator: $Z_{vtx}>Z_{final~coll.}$;\n\\item Transverse vertex radius within the beam area: $R_{vtx}<3$~cm;\n\\item Kaon momentum within the nominal range:\n$54~{\\rm GeV}\/c<|\\vec P_K|<66~{\\rm GeV}\/c$.\n\\end{itemize}\n\nTo improve the resolution on the kinematic variables, and to reduce\nthe impact of differences between data and MC resolutions, the\nevents were passed through a kinematic fitting procedure with three\nconstraints (constraining the initial kaon direction to be along the\n$z$ axis, and the $3\\pi$ invariant mass to the kaon mass). 
Events\nwith the quality of the kinematic fit corresponding to probability\n$p<10^{-5}$ were rejected. The fraction of these rejected events\nincreases as a function of deviation of the reconstructed $3\\pi$\nmass from the PDG kaon mass $|\\Delta M|$; in particular, all the\nevents with $|\\Delta M|>10$~MeV\/$c^2$ are outside the signal region,\ni.e. rejected by the above condition.\n\nThe geometric acceptance for the $K^\\pm\\to\\pi^\\pm\\pi^+\\pi^-$ decays\nis mainly determined by the beam pipe traversing the centres of the\nDCHs, and the material in the central region of each DCH where\ncentral DCH wires terminate. This material defines a region of high\nDCH inefficiency\\footnote{The acceptance is not biased by the finite\nouter size of the DCHs due to a relatively small $Q$-value of the\n$K^\\pm\\to\\pi^\\pm\\pi^+\\pi^-$ decay: $Q=75.0$ MeV.}. This inefficiency\ntogether with beam optics performance and variations influences the\nacceptance, and is difficult to reproduce accurately with a MC\nsimulation. To minimize the effects of this problem, it is required\nthat the transverse positions of each pion in DCH1 and DCH4 planes\n$\\vec R_{\\pi i}$ ($i=1,4$) satisfy the condition $|\\vec R_{\\pi\ni}-\\vec R_{0}|>18$~cm, where $\\vec R_{0}$ is the position of the\nmomentum-weighted average of the three pions' impact points: $\\vec\nR_{0}=\\sum_{i=1}^3 (\\vec R_{\\pi i}|\\vec P_{\\pi i}|)\/\\sum_{i=1}^3\n|\\vec P_{\\pi i}|$ (for DCH4 plane, trajectories of pions are\nlinearly extrapolated from DCH1 and DCH2 planes). $\\vec R_{0}$\ncorresponds to the transverse position of the line of flight of the\ninitial kaon. The value of 18 cm was chosen to exclude safely the\ninefficient central region taking into account the beam sizes and\nvariations of their average transverse positions. 
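The momentum-weighted centre $\\vec R_{0}$ and the $18$~cm cut just described can be sketched as follows~(an illustrative snippet with hypothetical impact points and momenta, not the actual reconstruction code):

```python
import numpy as np

def passes_dch_cut(r_pi, p_pi, r_min_cm=18.0):
    """Apply the DCH acceptance cut |R_pi_i - R_0| > r_min_cm to one event.

    r_pi : (3, 2) array of transverse pion impact points in a DCH plane [cm]
    p_pi : (3,) array of pion momentum magnitudes [GeV/c]

    R_0 is the momentum-weighted average of the three impact points,
    approximating the transverse position of the kaon line of flight.
    """
    r0 = (r_pi * p_pi[:, None]).sum(axis=0) / p_pi.sum()
    return bool(np.all(np.linalg.norm(r_pi - r0, axis=1) > r_min_cm))

# Hypothetical event: three pion impact points in one DCH plane
r = np.array([[25.0, 0.0], [-20.0, 15.0], [-5.0, -30.0]])
p = np.array([22.0, 18.0, 20.0])
ok = passes_dch_cut(r, p)
```

In this made-up event all three pions are more than 18~cm from $\\vec R_{0}$, so it passes; an event with all tracks clustered near the beam axis would be rejected.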
The described\nselection criterion costs about $50\\%$ of the statistics; however an\nappropriate MC description of the experimental conditions is more\nimportant than the sample size for the present analysis\\footnote{In\na different case, the charge asymmetry analysis~\\cite{k3pic} is\nperformed with soft cuts maximizing the selected data sample, and is\nbased on cancellation, rather than simulation, of the systematic\neffects.}. As will be shown below in the discussion leading to the\nresults presented in Fig.~\\ref{fig:datamc}, the MC simulation of the\nexperimental conditions reproduces the data distributions to a level\nof a few parts per mille.\n\nThe selection leaves a sample of $4.71\\times 10^8$ events, which is\npractically background free, as $K^\\pm\\to\\pi^\\pm\\pi^+\\pi^-$ is by\nfar the dominant decay mode of the charged kaon with more than one\ncharged particle in the final state. The fact that backgrounds due\nto other decays of beam kaons and pions are negligible was also\nchecked with a MC simulation.\n\nThe distribution of the reconstructed $3\\pi^\\pm$ invariant mass of\ndata events (before the kinematic fitting) and its comparison with\nMC are presented in Fig.~\\ref{fig:kmass}a. The non-Gaussian tails of\nthe mass distribution are primarily due to\n$\\pi^\\pm\\to\\mu^\\pm\\nu_\\mu$ decays in flight, and are well understood\nin terms of MC simulation. The ratio of MC to data mass spectra is\npresented in Fig.~\\ref{fig:kmass}b. It demonstrates the imperfection\nof resolution description in MC, and a deficit of MC events in the\nlow mass region, which however contains a small fraction of the\nevents, and is mostly outside the signal region. The Dalitz plot\ndistribution of the selected data events (after the kinematic\nfitting) $F_{data}(u,|v|)$ used for the subsequent analysis is\npresented in Fig.~\\ref{fig:kmass}c. 
The bin sizes of the Dalitz plot\ndistributions used in the analysis are $\\delta u=\\delta v=0.05$.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\begin{tabular}{@{}l@{}r@{}}\n{\\put(-2,100){\n\\resizebox{0.47\\textwidth}{!}{\\includegraphics{eps\/mk.eps}}}\n\\resizebox{0.47\\textwidth}{!}{\\includegraphics{eps\/mcdataratio.eps}}\n\\put(-30,148){(1)}\\put(-30,184){(2)}\\put(-195,190){(3)}\n\\put(-176,273){\\Large\\bf a} \\put(-176,79){\\Large\\bf b}\n\\put(-188,188){\\vector(1,-2){8}}} &\n\\put(7,40){{\\resizebox{0.53\\textwidth}{!}{\\includegraphics{eps\/uv3d.eps}}}\n\\put(-30,25){$\\bf u$}\\put(-240,107){$\\bf |v|$}\n\\put(-195,196){\\Large\\bf c}} {\\rule{0.53\\textwidth}{0mm}}\n\\\\\n\\end{tabular}\n\\end{center}\n\\vspace{-8mm} \\caption{(a) Reconstructed spectrum of $3\\pi^\\pm$\ninvariant mass $M(3\\pi)$ (upper line) and its presentation in terms\nof normalized MC components: (1) events without $\\pi\\to\\mu\\nu$ decay\nin flight, (2) events with $\\pi\\to\\mu\\nu$ decay, (3) IB radiative\n$K_{3\\pi\\gamma}$ events. 
(b) The ratio of MC\/data $M(3\\pi)$ spectra\ndemonstrating the imperfection of resolution description in MC, and\na deficit of MC events at low $M(3\\pi)$ outside the signal region.\n(c) Reconstructed distribution in the kinematic variables\n$F_{data}(u,|v|)$ (after kinematic fit).} \\label{fig:kmass}\n\\end{figure}\n\n\\vspace{2mm}\n\n\\noindent {\\bf Correction for trigger inefficiency} \\vspace{2mm}\n\n\\noindent To simplify the treatment of the trigger inefficiency,\nstable trigger performance was the main condition used to select the\nsample to be used for the analysis\\footnote{As it was already noted,\nthe size of the data sample is not a limitation for this analysis.}.\nInefficiencies of both L1 and L2 trigger components were directly\nmeasured as functions of ($u,|v|$) using control data samples of\nprescaled low bias triggers collected along with the main triggers,\nwhich allowed a correction of the observed ($u,|v|$) distributions,\nand propagation of the statistical errors of trigger inefficiencies\ninto the result.\n\nThe L1 trigger condition requiring a coincidence of hits in two of\nthe 16 non-overlapping HOD segments is loose, as there are three\ncharged particles in a fully reconstructed event, providing a rather\nlow (and stable in time) inefficiency of $0.6\\times 10^{-3}$ for the\nselected event sample. However, the L1 inefficiency depends rather\nstrongly on the kinematic variables. 
The primary mechanism\ngenerating such a dependence is the enhancement of inefficiency for\ntopologies with two pions hitting the same HOD segment; such events\npreferably belong to the kinematic regions characterized by small\nrelative velocity of a certain $\\pi\\pi$ pair in the kaon rest frame.\n\nThe L2 inefficiency, which is due to local DCH inefficiencies\naffecting the trigger more strongly than the off-line reconstruction\ndue to lower redundancy and trigger timing effects, was measured to\nbe $(0.32\\pm0.05)\\times 10^{-2}$ (the error indicates the maximum\nsize of its variation during the data taking period). It did not\nexhibit any significant correlation to the kinematic variables, due\nto relative complexity of the decision-making algorithm.\n\n\\vspace{2mm}\n\n\\noindent {\\bf Monte Carlo simulation} \\vspace{2mm}\n\n\\noindent A detailed GEANT-based MC simulation was developed, which\nincludes full detector geometry and material description, simulation\nof stray magnetic fields, DCH local inefficiencies and misalignment,\nthe beam line (which allows a reproduction of the kaon momentum\nspectra and beam profiles), and $K^+\/K^-$ relative fluxes. Moreover,\ntime variations of the above effects during the running period were\nsimulated. A production of $6.7\\times 10^9$\n$K^\\pm\\to\\pi^\\pm\\pi^+\\pi^-$ events distributed according to the\nmatrix element (\\ref{slopes}) with PDG values of the slope\nparameters~\\cite{pdg} was performed to determine the detector\nacceptance. A sample of $1.16\\times10^9$ MC events (almost 2.5 times\nlarger than the data sample) passes the selection.\n\nComparison of data and MC distributions in such significant\nvariables as longitudinal decay vertex position and illuminations of\nDCH1 and DCH4 planes by pions is presented in Fig.~\\ref{fig:datamc},\nand demonstrates that MC simulation reproduces the data\ndistributions to a level of a few units of $10^{-3}$. 
The precision\nof the data description can be improved by tighter cuts on pion\nradial positions.\n\n\begin{figure}[tb]\n\begin{center}\n\resizebox{0.94\textwidth}{!}{\includegraphics{eps\/dmc.zvtx.eps}}\n\put(-240,100){\Large\bf a}\\\n\resizebox{0.94\textwidth}{!}{\includegraphics{eps\/dmc.r1.eps}}\n\put(-240,100){\Large\bf b}\\\n\resizebox{0.94\textwidth}{!}{\includegraphics{eps\/dmc.r4.eps}}\n\put(-240,100){\Large\bf c}\n\end{center}\n\vspace{-8mm} \caption{Left column: reconstructed data\ndistributions, right column: the corresponding ratios of data\/MC\ndistributions of (a) vertex $z$ position, and pion radial position\n(for each of the 3 pions) in the planes of (b) DCH1 and (c) DCH4.}\n\label{fig:datamc}\n\end{figure}\n\n\vspace{2mm}\n\n\noindent {\bf Fitting procedure} \vspace{2mm}\n\n\noindent The measurement method is based on fitting the binned\nreconstructed data distribution $F_{data}(u,|v|)$ presented in\nFig.~\ref{fig:kmass}c with a sum of four reconstructed MC components\ngenerated according to the four terms in the polynomial\n(\ref{slopes}) presented in Fig.~\ref{fig:mc3d}. Let us denote these\nreconstructed MC distributions as $F_0(u,|v|)$, $F_u(u,|v|)$,\n$F_{u^2}(u,|v|)$, and $F_{v^2}(u,|v|)$. To obtain them, the MC\nsample (distributed in kinematic variables according to the PDG\nslope parameters) was divided into four subsamples\footnote{The\nrelative sizes of the four subsamples were optimized in order to\nminimize the statistical error of the measurement. 
The\n$F_u(u,|v|)$, $F_{u^2}(u,|v|)$, and $F_{v^2}(u,|v|)$ samples are of\nequal sizes, while the $F_0(u,|v|)$ sample is 5 times larger than\neach of the former three.}, and events in each subsample were\nassigned appropriate weights depending on the generated ($u,|v|$) to\nobtain the desired distributions.\n\n\begin{figure}[tb]\n\vspace{-7mm}\n\begin{center}\n\resizebox{0.90\textwidth}{0.32\textheight}{\includegraphics{eps\/mc3d-1.eps}}\n\put(-371,180){\Large\bf a} \put(-160,180){\Large\bf b}\n\put(-27,18){$\bf u$}\put(-234,18){$\bf u$} \put(-203,105){$\bf\n|v|$}\put(-408,103){$\bf |v|$}\n\\\n\vspace{-4mm}\n\resizebox{0.90\textwidth}{0.32\textheight}{\includegraphics{eps\/mc3d-2.eps}}\n\put(-371,180){\Large\bf c} \put(-160,180){\Large\bf d}\n\put(-27,18){$\bf u$}\put(-234,18){$\bf u$} \put(-203,103){$\bf\n|v|$}\put(-408,103){$\bf |v|$}\n\end{center}\n\vspace{-11mm} \caption{Reconstructed distributions in the kinematic\nvariables ($u$,$|v|$) of the four MC components: (a) $F_0(u,|v|)$,\n(b) $F_u(u,|v|)$, (c) $F_{u^2}(u,|v|)$, (d) $F_{v^2}(u,|v|)$.}\n\label{fig:mc3d}\n\end{figure}\n\nThe following functional, which quantifies the agreement between the\nshapes of the data and MC distributions, is minimized using the MINUIT\npackage~\cite{minuit} to measure the values of the slope\nparameters $(g,h,k)$:\n\begin{equation}\n\label{chi2} \chi^2(g,h,k,N)=\sum_{u,|v|~{\rm\nbins}}\frac{(F_{data}(u,|v|)-NF_{MC}(g,h,k,u,|v|))^2} {\delta^2\nF_{data}(u,|v|)+N^2\delta^2F_{MC}(g,h,k,u,|v|)}.\n\end{equation}\nThe sum is evaluated over all the bins of the reconstructed $(u,|v|)$\ndistributions with at least 1000 data events, which eliminates the\nneed to account for the non-Gaussian behaviour of the errors. 
Here $\\delta\nF_{data}(u,|v|)$ is an uncertainty of the number of data events in a\ngiven bin (composed of a statistical part and a trigger efficiency\npart added in quadrature), $F_{MC}(g,h,k,u,|v|)$ is a MC population\nof a bin for given values of $(g,h,k)$:\n\\begin{eqnarray}\nF_{MC}(g,h,k,u,|v|) &=& F_0(u,|v|)\/I_0 + gF_u(u,|v|)\/I_{u} +\\\\\n&&hF_{u^2}(u,|v|)\/I_{u^2} + kF_{v^2}(u,|v|)\/I_{v^2},\\nonumber\n\\end{eqnarray}\nand $\\delta F_{MC}(g,h,k,u,|v|)$ is its statistical error:\n\\begin{eqnarray}\n\\delta^2F_{MC}(g,h,k,u,|v|) &=& \\delta^2F_0(u,|v|)\/I_0^2 +\ng^2\\delta^2\nF_u(u,|v|)\/I_{u}^2 +\\\\\n&&h^2\\delta^2F_{u^2}(u,|v|)\/I_{u^2}^2 + k^2\\delta^2\nF_{v^2}(u,|v|)\/I_{v^2}^2.\\nonumber\n\\end{eqnarray}\nHere $I_0$, $I_u$, $I_{u^2}$ and $I_{v^2}$ are the normalization\nconstants computed taking into account the numbers of generated\nevents in each of the four MC subsamples, and the integrals of the\nfour terms in (\\ref{slopes}) over the Dalitz plot. The free\nparameters of the functional (\\ref{chi2}) are the slope parameters\n$(g,h,k)$ and an overall MC normalization parameter $N$.\n\nThe minimization yields $\\chi^2\/{\\rm NDF}=1669\/1585$, corresponding\nto a satisfactory probability of 7.0\\%. The results of the fit and\nthe trigger corrections are presented in Table~\\ref{tab:fit}. 
The\nnon-zero values of the corrections arise mostly from the L1 trigger\ninefficiency dependence on kinematic variables, while their\nstatistical errors receive contributions of similar sizes from L1\nand L2 trigger efficiency uncertainties.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{c|rcl|cc|rcl}\n\\hline Parameter\n&\\multicolumn{3}{c|}{Value}&\\multicolumn{2}{c|}{Uncertainties}&\n\\multicolumn{3}{c}{Trigger correction}\n\\\\\n\\cline{5-6} &&&& statistical & MC stat.&&\\\\\n\\hline\n$g\\times10^2$&$-21.134$&$\\pm$&$0.013$&0.009&0.008&$-0.008$&$\\pm$&0.005\\\\\n$h\\times10^2$&$ 1.848$&$\\pm$&$0.022$&0.015&0.013&$ 0.116$&$\\pm$&0.009\\\\\n$k\\times10^2$&$ -0.463$&$\\pm$&$0.007$&0.005&0.004&$ 0.033$&$\\pm$&0.003\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{-5mm} \\caption{Fit results with statistical and MC\nstatistical uncertainties, trigger corrections and their statistical\nuncertainties. The trigger corrections are included into the\nresulting values.} \\label{tab:fit}\n\\end{table}\n\nKeeping only the linear term $gu$ in (\\ref{slopes}) yields a fit of\nunacceptable quality: $\\chi^2\/{\\rm NDF}=13683\/1583$. Including the\nterms proportional to $u^3$ and $uv^2$ (the only cubic terms allowed\nby Bose symmetry) yields values for the corresponding cubic slope\nparameters compatible with zero.\n\n\\vspace{2mm}\n\n\\noindent {\\bf Stability checks} \\vspace{2mm}\n\n\\noindent Stability of the results with respect to variations of the\nselection conditions on vertex fit quality, $P_T$, $R_{vtx}$, $|\\vec\nP_K|$ and $\\vec R_{\\pi i}$, and binning variations was checked.\nStability with respect to exclusion of ($u$, $|v|$) bins with large\ndeviations of the Coulomb factor (\\ref{coulomb}) from unity and with\nrespect to kaon sign\\footnote{Combined $K^+$ and $K^-$ sample is\nused to obtain the result. 
Stability of the slope parameters with\nrespect to kaon sign is a consequence of the experimental\nfact~\cite{k3pic} that CP invariance holds at the discussed level of\nprecision.} was checked as well. No statistically significant\ndependencies were found. Stability with respect to various ways of\nbinning the data was checked; comparisons of the slope measurements in\nbins of the reconstructed longitudinal decay vertex coordinate\n$Z_{vtx}$ (since the acceptance depends strongly on this variable)\nand in data taking periods are shown in Fig.~\ref{fig:stab} as the\nmost significant examples.\n\n\begin{figure}[tb]\n\begin{center}\n\resizebox{0.49\textwidth}{!}{\includegraphics{eps\/g-zvtx.eps}}\n\resizebox{0.49\textwidth}{!}{\includegraphics{eps\/gtime.eps}}\\\n\resizebox{0.49\textwidth}{!}{\includegraphics{eps\/h-zvtx.eps}}\n\resizebox{0.49\textwidth}{!}{\includegraphics{eps\/htime.eps}}\\\n\resizebox{0.49\textwidth}{!}{\includegraphics{eps\/k-zvtx.eps}}\n\resizebox{0.49\textwidth}{!}{\includegraphics{eps\/ktime.eps}}\\\n\end{center}\n\vspace{-8mm} \caption{Slope measurements in bins of $Z_{vtx}$ (left\ncolumn) and in running periods (right column). Systematic errors are\nnot shown.} \label{fig:stab}\n\end{figure}\n\n\vspace{2mm}\n\n\noindent {\bf Systematic uncertainties} \vspace{2mm}\n\n\noindent The Coulomb factor (\ref{coulomb}) used in the description of the\nevent density contains a pole $C(u,v)\to\infty$ corresponding to a\npair of opposite sign pions having zero relative velocity. The\nimplemented Monte Carlo simulation involves a certain approximation\nto $C(u,v)$ in the pole region. In view of this, the\nsensitivity of the result to the\ntreatment of the pole was studied, in particular by using an\nalternative fitting method involving projections of the distributions\nin each of the kinematic variables. The assigned conservative systematic\nuncertainties due to the fitting procedure are listed in\nTable~\ref{tab:syst}. 
They are expected to be diminished to\na negligible level in a future analysis of the full statistics.\n\n\\begin{table}[tb]\n\\begin{center}\n\\begin{tabular}{l|rrr}\n\\hline Effect&~~~$\\delta g\\times10^2$&~~~$\\delta h\\times10^2$&~~~$\\delta k\\times10^2$\\\\\n\\hline\nFitting procedure &0.009~~~&0.007~~~&0.006~~~\\\\\nPion momentum resolution &0.004~~~&0.031~~~&0.009~~~\\\\\nSpectrometer magnetic field &0.002~~~&0.008~~~&0.004~~~\\\\\nSpectrometer misalignment &0.002~~~&0.002~~~&0.001~~~\\\\\nStray magnetic field &0.001~~~&0.002~~~&0.001~~~\\\\\n\\hline\nTotal &0.010~~~&0.033~~~&0.012~~~\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\vspace{-5mm} \\caption{A summary of the systematic uncertainties.}\n\\label{tab:syst}\n\\end{table}\n\nAn important source of systematic uncertainty is the imperfect\ndescription of the resolution in pion momentum, which can be\nobserved in Fig.~\\ref{fig:kmass}b as a slight disagreement of the\nshapes of the reconstructed $M(3\\pi)$ spectra for data and MC in the\nsignal region\\footnote{As was discussed above, a large disagreement\nat low $M(3\\pi)$ values is outside the signal region.}. To evaluate\nthe corresponding effect, two different plausible ways of\nintroducing smearing of the MC resolution were used: either by\nincreasing the smearing of DCH space points from 90$\\mu$m to\n100$\\mu$m, or by adding an extra $0.09\\%X_0$ layer of matter in the\nposition of the Kevlar window (the former correction is more\nrealistic). The sizes of the added perturbations are such as to\ncorrect for the ``double bump'' shape of the ratio in the signal\nregion. Both methods lead to similar systematic uncertainties on the\nslope parameters attributed to description of resolution in pion\nmomentum. These uncertainties are listed in Table~\\ref{tab:syst}.\n\nEffects due to imperfect knowledge of the magnetic field in the\nspectrometer magnet were evaluated. 
The variation of the magnet\ncurrent can be monitored with a relative precision of $5\\times\n10^{-4}$. Smaller variations are continuously controlled with a\nprecision of $\\sim 10^{-5}$ by the deviation of the measured\ncharge-averaged kaon mass from the nominal PDG value. A\ntime-dependent correction is introduced by scaling the reconstructed\npion momenta, decreasing the effect of overall field scale to a\nnegligible level. To account for possible differences between the\nshape of the field map used for simulation and the true field,\nvariations of the MC field map were artificially introduced,\nconsistently with the known precision of field measurement of $\\sim\n10^{-3}$. The corresponding uncertainties are listed in\nTable~\\ref{tab:syst}.\n\nThe transverse positions of DCHs and individual wires were\ncontrolled and realigned at the level of reconstruction software\nevery 2--4 weeks of data taking using data collected in special runs\nin which muon tracks were recorded with no magnetic field in the\nspectrometer. This allows an alignment precision of $\\sim30~\\mu$m to\nbe reached. However, time variations of DCH alignment on a shorter\ntime scale can bias the measurement. These variations were measured\nby the difference between the average reconstructed $3\\pi$ invariant\nmasses for $K^+$ and $K^-$ decays, and taken into account. 
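The momentum-scale adjustment mentioned above can be illustrated with a toy sketch (not the NA48/2 software): a global factor applied to the reconstructed pion momenta is tuned by bisection until the $3\pi$ invariant mass matches a chosen reference value; all numbers below are made up.

```python
import math

M_PI = 0.13957  # charged pion mass, GeV

def inv_mass_3pi(momenta, scale=1.0):
    """Invariant mass of three pions after scaling all momenta by `scale`."""
    E = sum(math.sqrt(M_PI ** 2 + scale ** 2 * (px * px + py * py + pz * pz))
            for px, py, pz in momenta)
    P = [scale * sum(p[i] for p in momenta) for i in range(3)]
    return math.sqrt(E * E - sum(c * c for c in P))

def momentum_scale(momenta, m_target):
    """Global scale factor making m(3pi) match m_target.
    m(3pi) grows monotonically with the scale, so bisection is safe."""
    lo, hi = 0.9, 1.1
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if inv_mass_3pi(momenta, mid) < m_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In the analysis, the reference is the nominal kaon mass, and the resulting time-dependent scale factor absorbs the overall spectrometer field scale.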
The\nprecision with which these effects are simulated leads to systematic\nuncertainties presented in Table~\ref{tab:syst}.\n\nThe effects due to the limited precision of the measurement of the stray\nmagnetic field in the decay volume were estimated by variation of\nthe stray field map used for decay vertex reconstruction; the\ncorresponding systematic effects are presented in\nTable~\ref{tab:syst}.\n\nThe kaon momentum spectra were carefully simulated, and the related\nresidual uncertainties were found to be negligible.\nPossible differences between\ndata and MC transverse scales were found to have a negligible\ninfluence on the result.\n\nThe total systematic errors were obtained by summing the above\ncontributions in quadrature, and are presented in\nTable~\ref{tab:syst}.\n\n\section*{Conclusions}\n\nThe Dalitz plot slope parameters of the $K^\pm\to\pi^\pm\pi^+\pi^-$\ndecays, measured with a fraction of the NA48\/2 data sample neglecting\nradiative effects (apart from the Coulomb factor) and strong\nrescattering effects, are:\n$$\ng=(-21.134\pm0.017)\%,~~ h=(1.848\pm0.040)\%,~~k=(-0.463\pm0.014)\%.\n$$\nThese values are in agreement with the world averages\footnote{The\nPDG averages results separately for $K^+$ and $K^-$~\cite{pdg};\naveraging the PDG data between $K^+$ and $K^-$ decays should take\ninto account the correlated systematic uncertainties of the $K^+$ and\n$K^-$ measurements by the same experiment~\cite{fo72}.}, and have\nuncertainties an order of magnitude smaller. This is the first\nmeasurement of a non-zero value of the quadratic slope parameter\n$h$. 
The compatibility of the measured distribution with the PDG\npolynomial parameterization~\\cite{pdg} appears still to be\nacceptable at an improved level of precision; no significant higher\norder slope parameters were found.\n\nThe measurement of the slope parameters is in agreement with a full\nnext-to-leading order computation~\\cite{ga03}:\n$$\ng=(-22.0\\pm2.0)\\%,~~ h=(1.2\\pm0.5)\\%,~~k=(-0.54\\pm0.15)\\%.\n$$\n\nThe whole NA48\/2 sample suitable for $K^\\pm\\to3\\pi^\\pm$ Dalitz plot\ndistribution analysis contains at least three times more data; a\nmore elaborate analysis is foreseen when the corresponding\ntheoretical framework is available.\n\n\\section*{Acknowledgements}\n\nIt is a pleasure to thank the technical staff of the participating\nlaboratories, universities and affiliated computing centres for\ntheir efforts in the construction of the NA48 apparatus, in\noperation of the experiment, and in data processing.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\nThe formation of gas-giant planets like Jupiter is a central problem in planetary science. These objects contain large amounts of hydrogen and helium \\citep{owen:1999}, which were presumably acquired from the gaseous portion of their star's protoplanetary disk. Most disks are found around stars less than a few million years old \\citep{haisch:2001}, and this sets an upper limit on the time available to form gas giants. This time constraint is more severe than for rocky planets, such as Earth, which may acquire much of their bulk after the protoplanetary disk disperses. \n\nReproducing the existence and characteristics of gas giants is a challenging test for theories of planet formation. 
Moreover, understanding giant-planet formation is essential for understanding the origin and character of planetary systems in general, since gravitational perturbations from giant planets can affect the growth of objects elsewhere in the same system \citep{chambers:2002, levison:2003}. For example, perturbations from Jupiter and Saturn play a central role in many models for the origin of the Sun's asteroid belt \citep{wetherill:1992, chambers:2001, walsh:2011}. The gravitational influence of giant planets is further enhanced by the apparent tendency of many giants to migrate radially during or after their formation \citep{trilling:1999, armitage:2007}.\n\nUsually, the formation of gas-giant planets is described using the ``core-accretion'' model. Here, a giant begins as a solid protoplanet that grows by accreting asteroid-sized planetesimals \citep{pollack:1996, movshovitz:2010}. Gas in the vicinity of the protoplanet is compressed and heated by the protoplanet's gravity, forming an extended atmosphere surrounding a solid core. The mass of the atmosphere is determined by a balance between the inward pull of gravity and the outward pressure gradient. Typically, the atmospheric mass increases rapidly with increasing core mass \citep{pollack:1996}. For a given set of conditions there is a critical core mass above which a static atmosphere is no longer possible \citep{mizuno:1980}. If the core exceeds the critical mass, gas flows into the atmosphere from the disk at an increasing rate, eventually forming a massive, gas-rich planet.\n\nMultiple factors can affect the critical core mass, in particular the luminosity, opacity, and composition of the atmosphere \citep{stevenson:1982, hori:2011}. Lowering the opacity and\/or luminosity increases the density in the atmosphere, raising the mass of the atmosphere for a given core mass. This in turn reduces the critical core mass. 
The main source of luminosity is energy released by infalling planetesimals, while opacity in the outer atmosphere is mostly due to dust embedded in the gas \\citep{mizuno:1980}. Early studies of giant-planet formation typically assumed opacities appropriate for an interstellar dust size distribution, leading to large critical core masses, around 10--20 Earth masses \\citep{pollack:1996}. More recent works that account for dust coagulation within the atmosphere find lower opacities and critical core masses below 10 Earth masses \\citep{movshovitz:2010, mordasini:2014}.\n\nPrevious studies have mainly considered atmospheres with compositions similar to the Sun---that is roughly 99\\% hydrogen and helium, with trace amounts of heavier elements \\citep{pollack:1996, rafikov:2006, movshovitz:2010, piso:2014}. However, the atmospheric composition can be altered by the addition of material evaporating from planetesimals as they fall towards the core \\citep{stevenson:1984, hori:2011}. The evaporated materials have higher molecular weights than hydrogen and helium, which can substantially increase the density of the atmosphere. For atmospheres dominated by heavy elements, the critical core mass can be smaller than Earth or even Mars \\citep{hori:2011, venturini:2015}.\n\nWhile conventional models consider cores that accrete planetesimals, some recent studies have suggested that planets acquire their solid material in the form of mm-to-m-sized pebbles instead \\citep{lambrechts:2012, levison:2015, chambers:2016}. There is some observational evidence that pebbles are abundant in protoplanetary disks \\citep{testi:2003, wilner:2005}. Pebble accretion may also alleviate the problem of forming critical-mass cores within the lifetime of a protoplanetary disk. 
Pebbles experience strong drag with disk gas during an encounter with a core, greatly increasing the capture probability and the core growth rate in some circumstances \\citep{ormel:2010, lambrechts:2012}.\n\nOutside a protoplanetary disk's ice line, pebbles are likely to be a mixture of volatile ices and refractory materials (silicates, metals etc., referred to here as ``rock''). Temperatures in a protoplanet's atmosphere increase with depth \\citep{rafikov:2006}, so the icy component of incoming pebbles will evaporate in most cases, raising the molecular weight of the atmosphere and lowering the critical core mass. This process is likely to be much more effective than evaporation from planetesimals for two reasons: (i) the surface area to volume ratio of pebbles is much greater due to the small size of pebbles, and (ii) pebbles typically settle towards the core relatively slowly, at terminal velocity, whereas planetesimals approach at roughly the escape velocity of the core. This allows more time for evaporation to occur.\n\nRelatively little evaporation will take place in the cool, outer regions of an atmosphere since the vapor pressure of water is low here, and the atmosphere soon becomes saturated. Above a certain temperature, however, the ice component of pebbles will evaporate entirely. Pebbles with a solar composition of condensible materials will have an ice-to-rock ratio of roughly unity \\citep{lodders:2003}. Thus, cores that grow by accreting icy pebbles should have vapor-rich atmospheres that are more massive than the core itself, when the hydrogen\/helium component is taken into account. This contrasts with the standard H\/He dominated case where the critical mass usually occurs when the core and atmospheric masses are comparable \\citep{pollack:1996}.\n\nSo far, the atmospheres of planets undergoing pebble accretion have received relatively little attention. 
In this paper, we investigate the growth and structure of the atmospheres of giant-planet cores below the critical mass as they accrete ice-rich pebbles. We examine the various stages that such planets pass through as they grow, and determine the point at which they reach the critical mass. This work extends previous studies in several ways: (i) using realistic estimates for the pebble accretion rate rather than adopting a fixed luminosity, (ii) allowing for varying atmospheric composition with depth due to water condensation and precipitation, and (iii) including the presence of an ocean where appropriate. \n\nThe rest of this paper is organized as follows. Section~2 describes the model used for the protoplanet's atmosphere and the rate of pebble accretion. Section~3 looks at the various evolutionary stages that a protoplanet and its atmosphere pass through as the mass increases. Section~4 examines the critical mass as a function of key model parameters. Section~5 contains a discussion, and the main results are summarized in Section~6.\n\n\section{The Model}\nWe consider a single protoplanet moving on a circular orbit in a gas-rich protoplanetary disk. A constant mass flux of pebbles drifts past the planet, moving towards the star due to gas drag. For simplicity, the pebbles are assumed to be composed of rock and water ice only. A fraction of the pebbles are captured by the planet. The rocky component of these pebbles is assumed to sediment to the planet's solid core. At each radial distance from the core, the pebbles' ice component is assumed to evaporate immediately until the local atmosphere is saturated in water vapor, or until the ice has evaporated completely. 
If the pebbles still contain some ice when they reach the base of the atmosphere this ice is added to the solid core, or forms an ocean entirely composed of liquid water directly above the core, depending on the temperature at this point.\n\nValues for the most important parameters used in the nominal model are listed in Table~1. In particular, we consider a protoplanet orbiting at $a=3$ AU from a solar mass star. We assume this location lies just outside the ice line in the disk, so that the pebbles here contain a mixture of rock and water ice, while more volatile ices have evaporated from the pebbles. The local temperature and gas density are 160 K and $10^{-10}$ g\/cm$^3$ respectively, and these values provide the outer boundary conditions for the protoplanet's atmosphere.\n\nThe planet's atmosphere is assumed to be in thermal and hydrostatic equilibrium. The temperature, pressure and density all increase inwards. The atmosphere is heated by the gravitational energy released by infalling pebbles, and this is assumed to be the only significant source of heating. In the inner regions of the atmosphere, this energy is typically transported outwards by convection, while heat transport in the outer atmosphere is by radiation. The outer regions of the atmosphere are assumed to be fully saturated with water vapor due to evaporation from incoming pebbles. For low-mass planets, the saturated atmosphere extends down to the surface of an ocean that contains the bulk of the water budget. When temperatures are too high for an ocean to exist, the outer atmosphere is still saturated with water vapor, but the inner regions may be undersaturated (if insufficient water is available), or above the critical temperature for water. 
\n\nRadial profiles for temperature $T$, pressure $P$, density $\\rho$, and interior mass $M$ are calculated using the standard stellar structure equations:\n\\begin{eqnarray}\n\\frac{dM}{dr}&=&4\\pi\\rho r^2 \\nonumber \\\\\n\\frac{dP}{dr}&=&-\\frac{GM\\rho}{r^2} \\nonumber \\\\\n\\frac{dT}{dr}&=&-\\frac{3\\kappa L\\rho}{64\\pi\\sigma_BT^3r^2}\n\\hspace{18mm} {\\rm radiative}\n\\nonumber \\\\\n\\frac{dT}{dP}&=&\\frac{\\nabla_{\\rm ad} T}{P}\n\\hspace{30mm} {\\rm convective}\n\\label{eq_main_equations}\n\\end{eqnarray}\n\\citep{inaba:2003}, where $r$ is the radius, $\\kappa$ is the opacity, $L$ is the luminosity, $\\sigma_B$ is the Stefan-Boltzmann constant, and $\\nabla_{\\rm ad}$ is the adiabatic gradient. The luminosity is assumed to be independent of radius, and equal to the gravitational energy released by the rocky component of the pebbles falling to the solid core of the planet: \n\\begin{equation}\nL=\\frac{GM_{\\rm core}f_{\\rm rock}}{R_{\\rm core}}\n\\frac{dM}{dt}\n\\end{equation}\nwhere $M_{\\rm core}$ and $R_{\\rm core}$ are the mass and radius of the solid part of the planet, $dM\/dt$ is the pebble mass accretion rate, and $f_{\\rm rock}$ is the rock mass fraction of the pebbles.\n\nThe atmosphere is assumed to be convective wherever the convective temperature gradient is shallower than the radiative gradient. In convective regions saturated with water vapor, the temperature profile is assumed to follow a moist adiabat due to condensation of water\/ice. Condensed water (or ice) is assumed to immediately precipitate to lower, unsaturated regions of the atmosphere or the ocean\\footnote{Since the temperature of a rising parcel of gas undergoing convection falls at the same rate as its surroundings, the saturation vapor pressures are the same in each case, and so are the water fractions. Thus we can use the Schwarzschild criterion for convection rather than the more general Ledoux criterion.}. 
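As a toy illustration of integrating the first two structure equations above, consider the nearly isothermal outer-atmosphere limit, with an ideal-gas EOS and the interior mass frozen at the core mass (a strong simplification of the full model; all values below are hypothetical):

```python
import numpy as np

G, K_B, M_H = 6.674e-8, 1.381e-16, 1.673e-24   # cgs units

def isothermal_atmosphere(m_core, r_core, r_out, p_out, temp, mu=2.3, n=200000):
    """Integrate dP/dr = -G M rho / r^2 inward from the disk boundary,
    with rho = P mu m_H / (k T) and M held at m_core."""
    r = np.linspace(r_out, r_core, n)          # inward radial grid
    p = p_out
    for i in range(n - 1):
        rho = p * mu * M_H / (K_B * temp)      # ideal-gas density
        dpdr = -G * m_core * rho / r[i] ** 2   # hydrostatic equilibrium
        p += dpdr * (r[i + 1] - r[i])          # Euler step (dr < 0)
    return p
```

In this limit the pressure profile has the analytic form $P\propto\exp[GM\mu m_H/(kT)\,(1/r-1/r_{\rm out})]$, which provides a check on the integration; the full model replaces the isothermal assumption with the radiative and convective gradients above.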
Convective regions that are not saturated, or lie above the critical temperature of water, follow the usual dry adiabatic profile \\citep{kasting:1988, venturini:2015}.\n\nFollowing \\citet{kasting:1988}, the moist adiabatic gradient is given by\n\\begin{equation}\n\\frac{d\\ln P}{d\\ln T}=\\frac{1}{\\nabla_{\\rm ad}}\n=\\frac{P_v}{P}\\frac{d\\ln P_v}{d\\ln T}\n+\\frac{P_g}{P}\\left[1+\\frac{d\\ln\\rho_v}{d\\ln T}-\\frac{d\\ln\\alpha_v}{d\\ln T}\\right]\n\\end{equation}\nwhere the subscripts $v$ and $g$ refer to water vapor and the non-condensible gas (hydrogen plus helium) respectively. Here, the derivatives of $P_v$ and $\\rho_v$ are taken along the saturation vapor pressure-temperature curve of water, taken from \\citet{haar:1984}. Also, $\\alpha_v$ is the ratio of the water vapor density to the gas density, whose derivative is given by\n\\begin{equation}\n\\frac{d\\ln\\alpha_v}{d\\ln T}=\n\\frac{k\\rho_g (d\\ln\\rho_v\/d\\ln T)-\\rho_g\\mu_gm_Hc_g\n-\\rho_v\\mu_gm_H(ds_v\/d\\ln T)}\n{\\rho_v\\mu_g m_H\\Delta s+k\\rho_g}\n\\end{equation}\nwhere $k$ is Boltzmann's constant, $m_H$ is the mass of a hydrogen atom, $\\mu$ is the molecular weight, $c$ is the specific heat at constant volume, $s$ is specific entropy, and $\\Delta s$ is the specific entropy difference between water vapor and condensed water (liquid or solid). Note that in deriving this equation, the non-condensible gas is assumed to be ideal in the saturated regions of the atmosphere. 
(See \\citet{kasting:1988} for a detailed derivation of these equations.)\n\n\\startlongtable\n\\begin{deluxetable}{ll}\n\\tablecaption{Parameters used in the nominal model}\n\\tablehead{\n\\colhead{Parameter} & \\colhead{Value} \\\\\n}\n\\startdata\nStellar mass & 1 solar mass \\\\\nPlanet semi-major axis & 3 AU \\\\\nDisk temperature & 160 K \\\\\nDisk density & $10^{-10}$ g\/cm$^3$ \\\\\nTurbulent viscosity $\\alpha$ & 0.001 \\\\\nRock density & 4 g\/cm$^3$ \\\\\nIce\/rock ratio & 1:1 \\\\\nHydrogen\/helium ratio & 3:1 \\\\\nPebble flux & $10^{-5}M_\\oplus$\/y \\\\\nPebble Stokes number & 0.01 \\\\\n\\enddata\n\\end{deluxetable}\n\nFollowing \\cite{lissauer:2009}, the outer radius of the atmosphere is given by\n\\begin{equation}\nr_{\\rm max}=\\min\\left[\\frac{r_H}{4}, r_B\\right]\n\\end{equation}\nwhere $r_H$ and $r_B$ are the Hill radius and Bondi radius, given by\n\\begin{eqnarray}\nr_H&=&a\\left(\\frac{M_{\\rm tot}}{3M_\\odot}\\right)^{1\/3}\n\\nonumber \\\\\nr_B&=&\\frac{GM_{\\rm tot}}{c_s^2}\n\\end{eqnarray}\nwhere $M_{\\rm tot}$ is the total mass of the planet including its atmosphere, and $c_s$ is the local sound speed of the gas in the disk.\n\nThe opacity is given by the sum of contributions due to dust and gas, using relations developed by \\cite{freedman:2014} for the gas opacity. We adopt a constant dust opacity that is a model parameter, using $\\kappa_{\\rm dust}=0.01$ cm$^2$\/g in the nominal case. More detailed treatments that take into account dust coagulation in the atmosphere have been developed recently \\citep{mordasini:2014}. However, these models do not include source and sink terms due to a large flux of pebbles passing through the atmosphere and evaporation\/condensation of water, so we use the simpler, constant-$\\kappa_{\\rm dust}$ case here.\n\nTo solve the atmospheric structure equations, we also need an equation of state (EOS). Here we adopt EOS for hydrogen and helium developed by \\citep{saumon:1995}. 
For temperatures below 2000 K, we use the water\/steam EOS by \cite{haar:1984}. Above 3000 K, we use EOS data from \cite{french:2009}. In between these temperatures, we smoothly interpolate between these two cases. For water ice I, we follow \cite{feistel:2006}. High pressure phases of ices could also exist on some protoplanets, but we do not encounter these phases in the models presented here. The approximate density of a H\/He\/water vapor mixture is determined by calculating the density for each component separately using the appropriate EOS and the partial pressure of the component, then summing the densities.\footnote{A more commonly used approximation is $1\/\rho=\sum_iX_i\/\rho_i$ where $X_i$ is the mass fraction of component $i$, and $\rho_i$ is calculated using the total pressure rather than the partial pressure \citep{nettelmann:2008}. However this method is not feasible when one of the components will condense under the total pressure.}\n\nWe calculate the rate at which the protoplanet accretes pebbles using the relations developed by \cite{ormel:2010}, which depend on the Stokes number $\rm St$ of the pebbles, given by\n\begin{equation}\n{\rm St}=\Omega t_{\rm stop}\n\end{equation}\nwhere $\Omega$ is the Keplerian orbital frequency and $t_{\rm stop}$ is the stopping time due to gas drag.\n\nSmall pebbles settle towards the planet at terminal velocity, leading to a capture radius $r_c$ given by\n\begin{equation}\n\left(\frac{r_c}{r_H}\right)^3+\frac{2v_{\rm rel}}{3\Omega r_H}\left(\frac{r_c}{r_H}\right)^2\n-8\rm St=0\n\end{equation}\nwhere the average approach velocity $v_{\rm rel}$ of the pebbles is given by\n\begin{equation}\nv_{\rm rel}=\Omega\times\max\left[\eta a,\frac{3r_c}{2}\right]\n\end{equation}\nwhere $\eta\simeq(c_s\/a\Omega)^2$ is the fractional orbital velocity difference between the gas and the protoplanet.\n\nPebbles with Stokes numbers larger than a critical value ${\rm St}_{\rm crit}$ do not 
reach terminal velocity, but still experience an enhanced capture probability due to gas drag. \cite{ormel:2010} find empirically that the capture radius in this case is given by the expression above modified by a factor of $\exp[-(\rm St\/{\rm St}_{\rm crit})^{0.65}]$, where\n\begin{equation}\n{\rm St}_{\rm crit}=\min\left[1, 12\left(\frac{\Omega a}{v_{\rm rel}}\right)^3\right]\n\end{equation}\n\nThe growth rate of the protoplanet is \n\begin{equation}\n\frac{dM}{dt}=\min\left[2r_c,\frac{\pi r_c^2}{H_{\rm peb}}\right]\times\Sigma v_{\rm rel}\n\end{equation}\n\nHere, $H_{\rm peb}$ is the scale height of the pebbles. Following \cite{youdin:2007}, we assume that $H_{\rm peb}$ is set by a balance between gravitational settling and turbulent stirring, and is given by\n\begin{equation}\nH_{\rm peb}=H_{\rm gas}\left(\frac{\alpha}{\alpha+\rm St}\right)^{1\/2}\n\end{equation}\nwhere $H_{\rm gas}$ is the gas scale height, and $\alpha$ is the turbulent viscosity parameter \citep{shakura:1973}.\n\nThe surface density of pebbles $\Sigma$ is related to the pebble flux $F$ and the radial velocity $v_r$:\n\begin{equation}\nF=2\pi a\Sigma v_r\n\end{equation}\nwhere, following \cite{weidenschilling:1977}, we have\n\begin{equation}\nv_r=\frac{2\eta a\Omega\rm St}{1+\rm St^2}\n\end{equation}\n\n\begin{figure}\n\centering\n\includegraphics[height=120mm,angle=270]{steam_fig01.pdf}\n\caption{Schematic diagram showing several evolutionary stages of a protoplanet accreting ice-rich pebbles. (1) Planet without an atmosphere. (2) Planet with an atmosphere and a solid ice\/rock surface. The outer atmosphere is radiative. The inner atmosphere is convective with ice grains precipitating towards the surface. (3) Planet with a rocky core and an ocean. The outer atmosphere is radiative, and the inner atmosphere is convective with ice and water precipitation. (4) Planet with a rock core and no ocean. 
The outer atmosphere is radiative, and the inner atmosphere is convective. Precipitation occurs at mid altitudes, but the deep atmosphere is either too hot for water to condense or is undersaturated.}\n\label{fig_schematic}\n\end{figure}\n\n\section{Nominal Case}\nIn this section, we consider the atmospheric structure of a growing protoplanet for the model parameters listed in Table~1. The planet and its atmosphere pass through a series of evolutionary stages, some of which include an ocean. At each stage, we assume the body is in thermal and hydrostatic equilibrium. The main stages are illustrated schematically in Figure~\ref{fig_schematic}.\n\nInitially, the protoplanet is too small to possess an atmosphere. Pebbles accreting onto the planet remain frozen until they settle to the surface, so that the planetary composition is the same as that of the pebbles themselves. The protoplanet first begins to acquire an atmosphere when its Bondi radius, $GM\/c_s^2$, equals its physical radius. At this point, the planet's mass is\n\begin{equation}\nM=\frac{c_s^3}{G^{3\/2}}\left(\frac{3}{4\pi\rho_{\rm solid}}\right)^{1\/2}\n\end{equation}\nwhere $\rho_{\rm solid}$ is the mean density of the solid body. For the nominal model parameters, and a planet composed of 50\% rock and 50\% ice, an atmosphere first forms when the total mass reaches about $0.0017$ Earth masses. The atmosphere is extremely tenuous at first, but the density near the surface grows rapidly with increasing mass of the protoplanet.\n\nWhen the total mass of the planet reaches about 0.084 Earth masses, the surface becomes warm enough for ice to melt. At this point, an ocean begins to form. In practice, some time will be required to melt all of the planet's ice component, but here we assume an ocean forms instantaneously. The mass of H and He gas is very small at this stage, and the rocky core contains almost 50\% of the total mass, or 0.042 Earth masses.\n\nFigure~\ref{fig_first_ocean} shows the radial structure of the atmosphere and ocean in this case. 
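As a sanity check on the onset mass quoted above, the condition that the Bondi radius $GM/c_s^2$ equal the physical radius $(3M/4\pi\rho_{\rm solid})^{1/3}$ can be evaluated numerically. The disk temperature, mean molecular weight, and solid density used below are illustrative assumptions, not values taken from Table~1:

```python
import math

# Physical constants (cgs)
G = 6.674e-8          # gravitational constant
k_B = 1.381e-16       # Boltzmann constant
m_H = 1.673e-24       # hydrogen mass
M_earth = 5.972e27    # Earth mass in grams

# Illustrative disk conditions near the ice line (assumed, not from Table 1)
T = 160.0             # K
mu = 2.34             # mean molecular weight of H/He gas
rho_solid = 1.9       # g/cm^3, roughly 50% rock + 50% ice

c_s = math.sqrt(k_B * T / (mu * m_H))   # isothermal sound speed
# Mass at which the Bondi radius G M / c_s^2 equals the physical radius
M_onset = c_s**3 / G**1.5 * math.sqrt(3.0 / (4.0 * math.pi * rho_solid))
print(M_onset / M_earth)   # of order 1e-3 Earth masses
```

With these assumed values the result lands within a factor of a few of the quoted onset mass; the exact number depends on the disk conditions adopted.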
Most of the atmosphere is nearly isothermal, with a temperature similar to that of the surrounding protoplanetary disk. Energy in this region is transported efficiently by radiation since this part of the atmosphere is optically thin. Pressure increases roughly exponentially in the outer atmosphere. The low temperature means that the mass fraction of water is very low, even though the atmosphere is fully saturated with water vapor, as shown in the lower, left panel of the figure. The water mass fraction decreases to a minimum of about $4\times 10^{-5}$ at 3 core radii, before the rising temperature allows it to increase again at smaller radii.\n\nAt a radius of about 2.1 core radii, the atmosphere becomes convective. This results in a sharp change in the temperature-pressure gradient, shown in the lower, right panel of Figure~\ref{fig_first_ocean}. The atmosphere now enters a moist adiabatic regime in which gas rises and cools, causing water to condense and releasing latent heat. This release of latent heat reduces the adiabatic gradient. Initially, the condensed water forms ice, which is assumed to precipitate immediately towards the core. Once the triple point is reached, excess vapor condenses as liquid water instead. This transition results in a small discontinuity in the temperature-pressure gradient due to the different entropy content of ice versus water.\n\nAt about 1.6 core radii, the atmosphere gives way to an ocean, which contains the bulk of the planet's water budget. The pressure and density jump by several orders of magnitude at this point, as can be seen in the upper, right panel of Figure~\ref{fig_first_ocean}. The ocean is convective since heat released by infalling pebbles is assumed to be deposited at the surface of the rocky core. 
However, the increase in temperature with ocean depth is small in this case.\n\n\\begin{figure}\n\\plotone{steam_fig02.pdf}\n\\caption{The structure of the atmosphere and ocean for a rocky core with a mass of 0.042 Earth masses. The other model parameters are given in Table~1. The upper, left panel shows temperature. The upper, right panel shows total pressure (solid line), and the saturation vapor pressure of water (dashed line). The lower, left panel shows the water mass fraction. The lower, right panel shows the logarithmic temperature-pressure gradient.}\n\\label{fig_first_ocean}\n\\end{figure}\n\nAs the mass of a protoplanet increases, the atmospheric pressure and temperature increase as well. A hotter atmosphere allows more water vapor to be present due to the higher saturation vapor pressure. The increase in temperature also means that the surface of the ocean grows hotter. Since the ocean itself is convective, the lower portions eventually become a super-critical fluid once the temperature at the surface of the core exceeds 647~K, the critical temperature of water. However, density and temperature vary smoothly across the critical transition.\n\nFigure~\\ref{fig_last_ocean} shows the structure of the atmosphere and ocean for a protoplanet with a rocky core mass of 0.0794 Earth masses. For this core mass, the ocean surface temperature is 637~K, only slightly below the critical temperature. Thus, this core is one of the most massive that can sustain an ocean. The structure of the outer atmosphere in this case is similar to that in Figure~\\ref{fig_first_ocean}. Heat is transported by radiation, the temperature is nearly constant, and the pressure increases exponentially with decreasing radius. As before, the low temperature means that the water mass fraction remains very low in this region.\n\n\\begin{figure}\n\\plotone{steam_fig03.pdf}\n\\caption{The structure of the atmosphere and ocean for a rocky core with a mass of 0.0794 Earth masses. 
The other model parameters are given in Table~1.}\n\label{fig_last_ocean}\n\end{figure}\n\nThe radiative-convective boundary in Figure~\ref{fig_last_ocean} occurs at about 3.4 core radii. This is larger than in Figure~\ref{fig_first_ocean} for two reasons. Firstly, the planet is more massive, so the atmosphere is denser at a given distance from the core. This increases the temperature gradient in the radiative region (see the third of Eqns.~\ref{eq_main_equations}). Secondly, the stronger gravity of the planet increases the efficiency of pebble accretion and the energy deposited by infalling pebbles. As a result, the luminosity is about 3 times higher than in the case shown in Figure~\ref{fig_first_ocean}. The increased luminosity and atmospheric density both mean that convection must operate further from the core than before.\n\nInterior to the radiative zone is a convective zone extending down to the ocean surface at 1.6 core radii. The convective atmosphere follows a moist adiabat, with the H\/He gas saturated in water vapor. The temperature-pressure gradient decreases moving inwards until the temperature reaches the triple point of water, at 273~K, and then increases again. As before, the water mass fraction decreases with depth in the outer atmosphere, reaching a minimum of $6\times 10^{-5}$ at about 5 core radii. \n\nThe higher temperatures in the inner atmosphere lead to much higher water fractions. By the time the ocean surface is reached, the atmosphere is 81\% water by mass, compared to only 38\% in Figure~\ref{fig_first_ocean}. The large amount of water present at the base of the atmosphere, coupled with temperatures near the critical point, means the discontinuity at the ocean surface is much less pronounced than in the previous case. The density increases by only a factor of about 4, for example. 
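The very different water fractions in the cold outer and hot inner atmosphere follow directly from the saturation condition: the water partial pressure cannot exceed $p_{\rm sat}(T)$. A rough sketch, using a simple Clausius--Clapeyron estimate for $p_{\rm sat}$ in place of the tabulated EOS data used in the paper (the latent heat and anchoring constants below are textbook approximations, not the paper's values):

```python
import math

def p_sat_water(T):
    """Rough Clausius-Clapeyron saturation pressure of water (dyn/cm^2),
    anchored at the triple point (273.16 K, 6.117e3 dyn/cm^2).
    A crude stand-in for the tabulated EOS used in the paper."""
    L_vap = 2.5e10    # latent heat of vaporization, erg/g (approximate)
    R_w = 4.615e6     # specific gas constant of water vapor, erg/(g K)
    return 6.117e3 * math.exp(-(L_vap / R_w) * (1.0 / T - 1.0 / 273.16))

def water_mass_fraction(T, p_total, mu_gas=2.34, mu_w=18.0):
    """Mass fraction of water vapor in saturated H/He gas at (T, p_total)."""
    p_w = min(p_sat_water(T), p_total)   # partial pressure capped by saturation
    p_gas = p_total - p_w
    return (mu_w * p_w) / (mu_w * p_w + mu_gas * p_gas)

# Cold outer atmosphere: tiny water fraction even at full saturation
f_cold = water_mass_fraction(160.0, 1.0e3)
# Hot base of the atmosphere: water can dominate by mass
f_warm = water_mass_fraction(600.0, 1.0e9)
print(f_cold, f_warm)
```

Even this crude estimate reproduces the qualitative contrast described above: a water mass fraction of order $10^{-5}$ in the cold outer atmosphere and of order unity near the ocean surface.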
The temperature and density continue to rise with depth in the convective ocean, passing smoothly through the critical point, and reaching a temperature of 940~K and a density of 1.6 g\/cm$^3$ at the rocky core.\n\nAt core masses slightly higher than in Figure~\ref{fig_last_ocean}, an ocean ceases to exist. Instead, the entire water budget exists as either water vapor or a super-critical fluid. We assume that the water becomes intimately mixed with the hydrogen and helium throughout the resulting atmosphere. (In practice, some mixing may occur when the ocean is slightly below the critical temperature, but we do not attempt to model this effect here.) Below a particular transition temperature, the atmosphere is saturated in water vapor. Above this temperature, we assume the water mass fraction is constant, and set by the total amount of water available (which is equal to the mass of the rocky core in the nominal model). For rocky core masses below 0.105 Earth masses, this transition occurs at or close to the critical point, while for more massive cores, the transition occurs at progressively lower temperatures. Thus, more massive cores have lower maximum water fractions, and increasing fractions of H\/He gas.\n\n\begin{figure}\n\plotone{steam_fig04.pdf}\n\caption{The structure of the atmosphere for a rocky core with a mass of 0.391 Earth masses. The other model parameters are given in Table~1.}\n\label{fig_no_ocean}\n\end{figure}\n\nFigure~\ref{fig_no_ocean} shows the atmospheric structure for a rocky core with a mass of 0.391 Earth masses. This object has a total mass (core plus atmosphere) of 1 Earth mass. The outer, nearly-isothermal part of the atmosphere looks similar to the previous cases. However, the radiative-convective boundary now lies at 14.3 core radii, much further out than before, due to the larger core mass and the higher luminosity from pebble accretion. 
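The pebble-accretion luminosity entering these structures depends on the capture radius, which in this model solves the implicit cubic of \cite{ormel:2010} given earlier (implicit because $v_{\rm rel}$ itself depends on $r_c$). A minimal bisection sketch, with arbitrary illustrative values for $\Omega$, $r_H$, and $\eta a$, and with the $\exp[-({\rm St}/{\rm St}_{\rm crit})^{0.65}]$ damping for large pebbles omitted:

```python
def capture_radius(St, Omega=2.0e-8, r_H=1.0e11, eta_a=5.0e10):
    """Bisection solve of the settling capture-radius relation used in the
    text: (r_c/r_H)^3 + (2 v_rel / 3 Omega r_H)(r_c/r_H)^2 - 8 St = 0,
    with v_rel = Omega * max(eta*a, 3 r_c / 2).
    Omega, r_H, and eta_a take arbitrary illustrative values."""
    def residual(x):            # x = r_c / r_H
        v_rel = Omega * max(eta_a, 1.5 * x * r_H)
        return x**3 + (2.0 * v_rel / (3.0 * Omega * r_H)) * x**2 - 8.0 * St
    lo, hi = 0.0, 10.0          # residual increases monotonically with x
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if residual(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi) * r_H   # capture radius in cm

# Larger Stokes numbers give larger capture radii in the settling regime
r1, r2 = capture_radius(0.01), capture_radius(0.04)
print(r1 / 1.0e11, r2 / 1.0e11)
```

Bisection is a deliberately simple choice here: the residual is monotone in $r_c$, so no derivative information is needed.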
\n\nInside the radiative-convective boundary lies a relatively narrow moist adiabatic region, extending inwards to 8.5 core radii, where the temperature is 321~K. Inside this point, the atmosphere becomes undersaturated in water vapor, as can be seen in the upper, right panel of Figure~\\ref{fig_no_ocean}. At this radius, there is a large jump in the temperature-pressure gradient (see lower, right panel), and from this point inwards, the atmosphere follows a steeper, ``dry'' abiabat due to the absence of latent heat released by condensing water vapor.\n\nBelow 8.5 core radii, the water mass fraction is constant at 0.64, with the remaining mass made up of hydrogen and helium. The steep convective gradient means the temperature increases rapidly with depth, reaching 5900~K at the rocky core. (The changes in the temperature-pressure gradient seen at 2000~K and 3000~K in the lower, right panel of the figure are artifacts caused by changes in the EOS at these points.) The great majority of the atmospheric mass is contained in the region below 8.5 core radii, with only about 0.15\\% of the total planetary mass lying at larger radii.\n\nThe trends described above continue for larger core masses, but eventually a point is reached where the core mass reaches a maximum. We examine this point in the next section.\n\n\\begin{figure}\n\\plotone{steam_fig05.pdf}\n\\caption{Upper panel: rocky core mass as a function of total planet mass for the nominal model. Middle panel: gas mass fraction of these objects. Lower panel: luminosity due to pebble accretion.}\n\\label{fig_critical_mass}\n\\end{figure}\n\n\\section{Critical Core Mass}\nFigure~\\ref{fig_critical_mass} shows the change in atmospheric properties of protoplanets with increasing total mass for the parameters listed in Table~1. The top panel shows the rocky core mass as a function of the total planetary mass. The middle panel shows the gas (H plus He) mass fraction. 
The bottom panel shows the luminosity due to pebble accretion.\n\nFor total masses below about 10 Earth masses, the rocky core mass increases roughly linearly with total mass. A growing planet should follow this trajectory, with the total mass increasing smoothly with core mass. Above 10 Earth masses, the curve in the top panel of Figure~\\ref{fig_critical_mass} starts to turn over. Small increases in core mass lead to progressively larger increases in atmospheric mass and thus total mass. At a total mass of 17 Earth masses, the core mass reaches a maximum value of 3.03 Earth masses. At this point H and He represent about 64\\% of the total planetary mass.\n\nFurther increases in total mass result in a smaller core mass, before the core mass starts to increase again when the total mass reaches 20 Earth masses, eventually reaching a second maximum at a total mass of 50 Earth masses. In practice, the core mass should increase monotonically with time, so the region between 17 and 20 Earth masses is unphysical. It is possible that continued gas accretion will allow a real planet to move across this region before rejoining the solutions above 20 Earth masses. However, examining this possibility would require a model that includes time evolution and luminosity generated by gas accretion rather than the static model used here. \n\nIt may seem surprising that a given core can support more than one type of atmosphere. However, the solutions differ in a number of respects. In particular, the luminosities are different because the capture radius for pebble accretion is determined by the total mass rather than the core mass. The atmospheric compositions are also different since a given mass of water is diluted to different degrees by the different masses of H\/He gas for each atmospheric solution. We note that \\cite{mizuno:1980} also found multiple atmospheric solutions for a given core mass. 
Here we follow \\cite{mizuno:1980} and \\cite{venturini:2015}, and associate the first core-mass maximum with the ``critical'' core mass, assuming that the static solutions become invalid beyond this point. \n\nA second criterion that is sometimes used to determine the onset of rapid gas accretion is the ``crossover'' mass \\cite{pollack:1996, rafikov:2006}. This is the point at which the masses of the core and atmosphere are equal. In the model presented here, the atmospheric mass is always greater than the core mass, once an ocean ceases to exist, due to the presence of water vapor. Instead, we note the point at which H and He make up half of the total mass. Using this criterion, rapid gas accretion in Figure~\\ref{fig_critical_mass} would begin at a core mass of 2.68 Earth masses, slightly below the critical mass.\n\nIn the following subsections, we will examine how the critical mass and crossover mass vary depending on some key model parameters.\n\n\\subsection{Dependence on Opacity}\nOne of the biggest uncertainties in the model is the opacity of dust grains in the atmosphere. Some dust is swept into the atmosphere with the inflowing gas. The opacity of this dust will depend on the local dust-to-gas ratio in the protoplanetary disk, which in turn depends on how much dust was converted into pebbles. A bigger uncertainty is the fate of infalling pebbles once their icy component evaporates. The rocky residues may remain intact, settling quickly towards the core and leaving the opacity unchanged. Alternatively, pebbles may disintegrate into sub-$\\mu$m-size grains once the ice cementing these grains evaporates. In this case, the opacity could be raised by orders of magnitude. Intermediate scenarios are also possible. 
Given these uncertainties, we choose to treat the dust opacity as a parameter with a wide range of possible values.\n\n\begin{figure}\n\plotone{steam_fig06.pdf}\n\caption{Dependence of the critical core mass and crossover mass (50\% of the total mass in H and He) on the opacity of dust grains in the atmosphere.}\n\label{fig_opacity}\n\end{figure}\n\nFigure~\ref{fig_opacity} shows the critical core mass and the crossover core mass as a function of the dust opacity $\kappa_{\rm dust}$. Other model parameters are unchanged from those in Table~1. The critical mass varies with $\kappa_{\rm dust}$ in a complicated manner. It first increases with increasing $\kappa_{\rm dust}$, peaking at $\kappa_{\rm dust}=0.006$ cm$^2$\/g, and then falls again, becoming nearly constant at about 3 Earth masses for $\kappa_{\rm dust}>0.01$ cm$^2$\/g. The crossover (50\% H and He) mass increases monotonically with dust opacity, rising rapidly for $\kappa_{\rm dust}<0.007$ cm$^2$\/g, and more slowly at larger opacities.\n\nThe complicated behavior seen in the critical core mass is partly due to the occasional existence of multiple solutions for the core mass as a function of total mass, as seen in Figure~\ref{fig_critical_mass}. For most values of $\kappa_{\rm dust}$ in Figure~\ref{fig_opacity}, a single maximum exists, and this is the critical core mass. However, for intermediate values of $\kappa_{\rm dust}$, a second maximum appears at smaller core masses than the first. In these cases, we assign the first maximum to be the critical mass, leading to the decrease in critical mass seen at intermediate values of $\kappa_{\rm dust}$ in Figure~\ref{fig_opacity}. \n\nThe crossover mass doesn't exhibit the complicated behavior shown by the critical mass. (The change of slope seen in the figure coincides with a change in the outer boundary condition from the Bondi radius to 1\/4 of the Hill radius.) 
For this reason, the crossover mass may represent a more robust measure of the onset of rapid gas accretion than the critical mass. It is worth noting that the critical and crossover masses are similar for large dust opacities.\n\nOverall, the core mass at the onset of rapid gas accretion tends to increase with increasing dust opacity. A higher opacity is associated with a steeper rise in temperature in the radiative region of the atmosphere. As a result, the boundary between the radiative and convective regions lies further from the core when the opacity is large. In the radiative region, pressure and density both increase roughly exponentially moving inwards, while these quantities vary more slowly in the convective part of the atmosphere (see Figure~\\ref{fig_no_ocean}). Since the total atmospheric mass depends on the integrated density, a smaller radiative region typically leads to a lower total mass for a given core mass, increasing the core mass needed for rapid gas accretion to begin.\n\n\\begin{figure}\n\\plotone{steam_fig07.pdf}\n\\caption{Dependence of the critical core mass and crossover mass (50\\% of the total mass in H and He) on the radial mass flux of pebbles drifting through the disk.}\n\\label{fig_pebble_flux}\n\\end{figure}\n\n\\subsection{Dependence on Pebble Flux}\nThe mass flux of pebbles passing through the region containing the planet is another source of uncertainty. Pebble drift rates depend on the size of the pebbles and the radial pressure gradient in the disk. The pebble flux depends on the drift rate, and also on the total mass of pebbles in the disk, how long they have been drifting, and the radial extent of the disk. The pebble flux is likely to vary with time and location, possibly by large amounts. In our model, the luminosity of the planet's atmosphere is determined by the pebble accretion rate, which depends on the pebble flux as well as the capture radius. 
Thus, we consider a range of possible pebble fluxes.\n\nFigure~\\ref{fig_pebble_flux} shows the critical core mass and crossover mass for a range of pebble mass flux values. The dependence of both masses is broadly similar to the dependence on the dust opacity shown in Figure~\\ref{fig_opacity}. The critical mass increases with increasing flux, reaches a peak at intermediate values, then declines to a roughly constant value for large mass fluxes. The crossover mass increases monotonically with increasing flux, rapidly at first and then slowly. For large pebble fluxes the two masses are similar.\n\nThe similar behavior in Figures~\\ref{fig_opacity} and \\ref{fig_pebble_flux} occurs because the opacity and luminosity (which is proportional to the pebble flux) have the same effect on the atmosphere. The temperature profile depends linearly on each quantity in the radiative region, while neither quantity directly affects the profile in the convective region, as can be seen in Eqns.~\\ref{eq_main_equations}. The main difference between dust opacity and pebble mass flux is that the rate at which the planetary core accretes mass cannot exceed the pebble flux itself, and this will be a factor in the next subsection.\n\n\\begin{figure}\n\\plotone{steam_fig08.pdf}\n\\caption{Dependence of the critical core mass and crossover mass (50\\% of the total mass in H and He) on the Stokes number of the pebbles.}\n\\label{fig_pebble_stokes}\n\\end{figure}\n\n\\subsection{Dependence on Pebble Size}\nThe size of pebbles in the planet-forming regions of protoplanetary disks is also uncertain, and probably varies with time and location. The typical Stokes number of pebbles will also vary depending on the local gas density. Here we examine a range of possible Stokes numbers $\\rm St$. \n\nFigure~\\ref{fig_pebble_stokes} shows the critical core mass and crossover mass as a function of $\\rm St$. 
For $\\rm St<0.002$, the critical mass is 5.2 Earth masses, almost independent of the Stokes number. For $0.002<\\rm St<0.06$, a second maximum appears in the relation between core mass and total mass. For this range of Stokes numbers, the critical mass falls to roughly 3 Earth masses. At larger Stokes numbers, the critical core mass returns to the previous value, before declining with $\\rm St$ for $\\rm St>0.1$.\n\nThe constant critical mass for $\\rm St<0.002$ arises because in these cases the pebble capture radius is large enough that the entire flux of pebbles is captured by the planet. The radial drift speeds of the pebbles are slow enough that they are all accreted by the planet in the time it takes them to drift inwards across the planet's Hill radius. As a result, changing the pebble Stokes number has no effect on the luminosity of the atmosphere or the atmospheric structure. \n\nFor larger pebbles (larger Stokes numbers), the behavior of the critical mass is more complicated due to the appearance of an additional core-mass maximum as was the case in the previous sections. Here, the accretion rate is less than the pebble flux, so differences become apparent for different values of $\\rm St$. As before, the crossover mass behaves in a simpler manner than the critical mass. The crossover mass depends only weakly on $\\rm St$. Although the pebble capture radius tends to increase with increasing pebble size, this effect is offset by the greater radial drift speeds, which lowers the pebble surface density by a comparable amount. 
As a result, the onset of rapid gas accretion probably depends only weakly on pebble size for a given mass flux of pebbles.\n\n\\begin{figure}\n\\plotone{steam_fig09.pdf}\n\\caption{Dependence of the critical core mass and crossover mass (50\\% of the total mass in H and He) on the mass fraction of ice in the pebbles.}\n\\label{fig_ice_fraction}\n\\end{figure}\n\n\\subsection{Dependence on Ice-to-Rock Ratio}\nSo far, we have considered changes that alter the structure of a planetary atmosphere's outer, radiative zone via the opacity or luminosity. In this section, we examine differences caused by altering the ice-to-rock ratio of the pebbles accreted by the planet, which tends to affect the lower atmosphere. In the previous cases, we assumed that the ice mass fraction is 50\\%. However, other values are possible for the solar nebula and other protoplanetary disks. In the solar nebula for example, the ice-to-rock ratio is somewhat uncertain due to uncertainties in the carbon-to-oxygen ratio of the Sun, and uncertainties regarding the main carrier of carbon in the solar nebula \\citep{asplund:2009, lodders:2003}.\n\nFigure~\\ref{fig_ice_fraction} shows the critical core mass and crossover mass as a function of the ice mass fraction of pebbles. Unlike the other model parameters, both the critical mass and crossover mass show a clear, nearly monotonic trend with ice fraction. The masses decline steadily with ice fraction for fractions less than about 0.65, falling from 8.1 Earth masses to about 2.4 Earth masses for the critical mass and 1.9 Earth masses for the crossover mass. At this point, the critical mass increases somewhat, before resuming the previous trend. The crossover mass declines monotonically with ice fraction for all values considered here.\n\nThe strong dependence of the critical mass on the ice fraction comes from the fact that the atmospheric mass depends on the mean molecular weight, and this is greater when large amounts of water are present. 
When the mean molecular weight is high, the atmospheric mass is larger for a given core mass, leading to a smaller critical mass. This is true only in the hot inner regions however. The effect seen in Figure~\\ref{fig_ice_fraction} would be even stronger if the amount of water in the outer atmosphere wasn't restricted by the saturation pressure of water.\n\n\\section{Discussion}\nIn this paper we have examined the growth and atmospheric structure of planets accreting ice-rich pebbles in a protoplanetary disk of H and He gas. Such planets pass through several evolutionary stages, as shown in Figure~\\ref{fig_schematic}, beginning with objects too small to appreciably affect the gas in the surrounding disk. When the planet's mass reaches about 0.002 Earth masses, it begins to acquire an atmosphere. The atmospheric mass increases rapidly with increasing core mass. The outer regions are optically thin and nearly isothermal, while close to the core temperatures increase rapidly and a convective zone develops.\n\nIncoming pebbles sediment slowly towards the core at terminal velocity. As they encounter warmer regions, the icy component of the pebbles begins to evaporate, saturating the atmosphere with water vapor. Excess water and ice precipitate towards the core, forming an ocean when the surface temperature and pressure exceed the triple point of water. For the model parameters used here, an ocean first forms when the mass of rock reaches about 0.04 Earth masses. In the convective region, water vapor in the saturated gas condenses as the gas rises and cools. The condensed water precipitates back to the ocean, while the latent heat released by condensation means that the rising gas follows a shallow, moist-adiabatic temperature profile.\n\nWhen the rocky core mass reaches about 0.08 Earth masses, the temperature becomes too high for an ocean to exist. 
We assume that water in the region closest to the rocky core forms a super-critical convecting fluid that mixes with gas in the layers above. Temperatures decline with distance from the core, passing smoothly through the critical point of water. The outer part of the convective zone continues to follow a moist adiabat. However, the higher temperatures in the inner part of the convective zone mean that the gas is undersaturated in water vapor, and here the temperature profile follows a steeper dry adiabat. It is possible that water from the ocean doesn't fully mix with the gas in the layers above when the ocean boils, owing to the steep molecular-weight gradient. However, this probably won't make much difference to the outcome since the planetary mass at this stage is much lower than the critical mass. The amount of water that remains trapped in such a ``core'' is much less than the water supplied by further infalling pebbles, and this additional water will be mixed with gas in the convecting region of the atmosphere.\n\nAs the core mass increases beyond 0.08 Earth masses in our model, the atmospheric mass continues to grow, and the mass ratio of water vapor to gas decreases. The region of the atmosphere that is saturated and convecting becomes increasingly narrow as a result. At a core mass of a few Earth masses, 50\% of the total planetary mass is made up of H and He gas. At a slightly higher mass, the rocky core mass reaches a maximum, and we assume the static model used here ceases to be valid at this point. Presumably, rapid gas accretion will begin to take place at around this critical mass, eventually forming a gas-giant planet.\n\nThe critical mass often exhibits complicated behavior as a function of the main model parameters. This is due to the existence of multiple maxima in the relation between total mass and core mass. 
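The convention adopted here, taking the critical mass to be the first turnover of core mass versus total mass, amounts to a small search over a sampled curve. The toy curve below only mimics the qualitative shape of Figure~\ref{fig_critical_mass}, not its actual values:

```python
def first_local_maximum(total_mass, core_mass):
    """Return (M_total, M_core) at the first local maximum of core mass
    versus total mass: the 'critical' core mass in the convention of
    Mizuno (1980) adopted in the text."""
    for i in range(1, len(core_mass) - 1):
        if core_mass[i] >= core_mass[i - 1] and core_mass[i] > core_mass[i + 1]:
            return total_mass[i], core_mass[i]
    return total_mass[-1], core_mass[-1]   # monotone curve: no turnover

# Toy curve with two maxima; shapes only, not the paper's numbers
Mt = [float(m) for m in range(1, 61)]
Mc = [0.2 * m if m <= 17 else
      (3.4 - 0.2 * (m - 17) if m <= 20 else 2.8 + 0.01 * (m - 20))
      for m in Mt]
print(first_local_maximum(Mt, Mc))   # first turnover, near Mt = 17
```

When a second maximum appears, as happens for intermediate parameter values, this rule always selects the first one, which is why the critical mass can jump discontinuously as parameters vary.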
A more robust indicator of the onset of rapid gas accretion may be the crossover mass (defined here as the point when 50\% of the planet's mass is H and He), since this typically varies in a simpler manner when the parameters are varied. Better estimates of the onset of rapid gas accretion will come from evolving models that include the energy released due to gas accretion.\n\n\cite{hori:2011} and \cite{venturini:2015} have carried out studies similar to the one in this paper, investigating the effects of evaporating volatiles on the atmospheric structure and critical mass of protoplanets. \cite{hori:2011} considered a two-layer atmosphere: an outer region with composition similar to the protoplanetary disk, and an inner region uniformly enriched in elements heavier than He. These authors assumed all volatile constituents remained in the vapor phase. \cite{venturini:2015} considered a single, uniformly enriched layer, and accounted for condensation of ice and water, but not precipitation.\n\n\cite{hori:2011} found that the critical core mass typically decreases with increasing heavy-element concentration in the atmosphere for heavy-element mass fractions $Z>0.1$. For mass fractions above about 0.75, the critical mass is less than one Earth mass, and approaches Mars's mass for atmospheres almost wholly composed of heavy elements. \cite{venturini:2015} find similar results in the absence of water condensation. When condensation is included, they find that the critical mass is reduced further, falling below 0.1 Earth masses when $Z>0.75$.\n\nIn this paper, we find larger critical core masses, typically a few Earth masses, even though large amounts of water are present due to the evaporation of ice from pebbles. There are several reasons for this difference. Firstly, we consider only a single volatile, water, whereas \cite{hori:2011} and \cite{venturini:2015} included carbon-bearing species. 
The presence of carbon compounds, and transitions between the dominant species with changing temperature, lead to modest differences in atmospheric structure due to differences in the adiabatic temperature gradient. In addition, \cite{hori:2011} and \cite{venturini:2015} consider a fixed atmospheric luminosity $L=10^{27}$ erg\/s, independent of planetary mass. Here, we calculate the luminosity self-consistently based on the pebble accretion rate, typically yielding slightly higher values at the critical mass.\n\nHowever, the main reason for the larger critical masses found here is that we consider atmospheres with water mass fractions that vary considerably with altitude due to precipitation. Temperatures in the outer, radiative region are only slightly higher than in the local protoplanetary disk. As a result, the saturation pressure and mass fraction of water vapor are very low in this region. \cite{venturini:2015} assumed that condensed ice and water would remain where they form, presumably in the form of clouds. We believe this is unlikely, particularly for the large water mass fractions they considered, and that most of the condensed water will precipitate to the ocean or undersaturated regions deeper in the atmosphere. In fact, if mixing in the radiative region is inefficient, the water abundance may always be small since incoming pebbles will not evaporate where the gas is saturated in water vapor.\n\nThe very low abundance of water vapor in the upper atmosphere means that the radiative region and the upper convective region behave somewhat like the corresponding regions in conventional models for core accretion that consider a solar mixture of gases dominated by H and He, and yield large critical core masses \citep{pollack:1996, movshovitz:2010}. 
The presence of higher water fractions deep in the atmosphere clearly reduces the critical core mass, as can be seen in Figure~\ref{fig_ice_fraction}, but the effect is more modest than found by \cite{hori:2011} and \cite{venturini:2015}. One difference between the conventional models and the one described here is that we assume that convection in the outer atmosphere begins when the temperature gradient reaches a moist adiabat, which is typically shallower than the dry adiabat that is usually considered. As a result, the radiative-convective boundary lies at a higher altitude in the model described here, raising the critical mass somewhat. Overall, however, previous estimates of the critical mass based on H- and He-dominated atmospheres may still be reasonably valid when the effect of ice evaporation from pebbles is taken into account.\n\nIn our model, two sources of negative feedback act to raise the critical mass even as the core itself grows. Firstly, an increase in the planetary mass increases the luminosity of the atmosphere, both because the capture cross section for pebbles becomes larger, and because the gravitational energy released by each infalling pebble becomes greater as the core's potential well deepens. Secondly, as the planet grows, the water-to-gas ratio in the atmosphere declines. Thus, the density effect due to water vapor is increasingly diluted, and the presence of water vapor becomes less important for the structure of the atmosphere. Both factors need to be taken into account when determining the onset of rapid gas accretion.\n\nIn this study, we have considered only a single volatile species heavier than He, namely water. The rationale for this choice is that we consider a planet located just beyond the ice line that is accreting pebbles. While large planetesimals in this region may retain other, more-volatile ices, these ices will surely have evaporated from pebble-size particles before they encounter the planet. 
One possibility not considered here is that protoplanetary disks may contain an appreciable mass of tar-like organic compounds \\citep{lodders:2004}, which are intermediate in volatility between water ice and rock. Such tars would be present in pebbles just outside the ice line, but would evaporate in the hotter regions of the planet's atmosphere. The presence of large amounts of tar-like material could alter the atmospheric structure and introduce some of the effects found by \\cite{hori:2011} and \\cite{venturini:2015}. Furthermore, the high temperatures we find close to the core of the planet may be sufficient for rocky materials to evaporate and decompose rather than sedimenting to the core as we have assumed here. These possibilities are worth studying further in future.\n\nSome other factors may warrant further study. For example, the current model considers a constant mass flux of pebbles passing through the protoplanetary disk, and uses a fixed Stokes number for these pebbles. Neither quantity is likely to be constant in real disks \\citep{birnstiel:2012}. The pebble flux probably declines over time, and the typical pebble size is likely to decrease as big pebbles are preferentially lost due to radial drift. Both factors will decrease the pebble capture rate and luminosity of the planet over time. Another factor we have not considered here is that the supply of pebbles may be shut off entirely once the planet becomes massive enough to alter the sign of the radial pressure gradient in nearby regions of the disk \\citep{lambrechts:2014}.\n\nIn addition, we have assumed that the temperature and density at the outer boundary of the planet remain fixed. These quantities could change over time as the disk evolves or the planet migrates through the disk. Previous works have noted that the critical core mass is insensitive to the outer boundary conditions provided that the outer atmosphere is radiative \\citep{stevenson:1982, rafikov:2006}. 
However, this will no longer be true if the outer boundary crosses the condensation front for a major species such as the ice line, since the composition of infalling pebbles will change abruptly at this point. In particular, a planet moving inwards across the ice line may change from a situation in which there is a net gain of water to a net loss as the water fraction in the outer atmosphere increases above the local disk value.\n\n\\section{Summary}\nIn this paper, we have modeled the structure and critical mass of a planetary atmosphere when the planet is accreting ice-rich pebbles. The main findings are:\n\\begin{enumerate}\n\\item The planet goes through a series of stages with increasing core mass. An atmosphere first appears for a core mass of 0.002 Earth masses. For core masses (rock + ice) between 0.08 and 0.16 Earth masses, the surface temperature is between the triple point and critical point of water, and an ocean forms. At larger masses, temperatures are too high for an ocean to exist, and water exists as a super-critical fluid mixed with H and He gas accreted from the protoplanetary disk.\n\n\\item Temperature in the atmosphere increases with depth. Some or all of the ice component of pebbles evaporates in the planet's atmosphere, adding water vapor to the H and He gas from the disk. The saturation pressure of water is very low in the outer atmosphere, so the water mass fraction in this region is small. The water mass fraction is higher in the deep atmosphere, becoming constant below a certain altitude.\n\n\\item The outer atmosphere is radiative, while the inner atmosphere is convective. In saturated, convective regions, the temperature profile follows a shallow, moist adiabat due to water condensation in the rising gas. Unsaturated regions follow a steeper dry adiabat. 
As the core mass increases, the water-to-gas fraction in the atmosphere declines, and the boundary between unsaturated and saturated regions moves to lower temperatures and higher altitudes.\n\n\\item The mass of the planet's rocky core increases with total planetary mass. For a 1:1 ice-to-rock ratio, the core mass reaches a maximum value (the critical mass) at 2--5 Earth masses, at which point the static model used here is no longer valid. Presumably, runaway gas accretion begins at this point.\n\n\\item The critical mass depends in a complicated manner on the pebble size, the mass flux through the disk, and the atmospheric dust opacity; this is because, for some parameter values, there is more than one core-mass maximum for a given total planetary mass. The critical mass declines nearly monotonically with increasing ice-to-rock ratio, from 8 Earth masses at an ice fraction of 0.1 to about 2.5 Earth masses at an ice fraction of 0.75.\n\n\\item The point at which 50\\% of the planetary mass is H and He (the crossover mass) varies with model parameters in a more straightforward way than the critical mass, and may be a better indicator of the onset of rapid gas accretion. The crossover mass increases with atmospheric dust opacity and pebble mass flux, and decreases with ice-to-rock ratio. The crossover mass is relatively insensitive to the pebble size.\n\n\\end{enumerate}\n\n\\acknowledgments\n\nI would like to thank Alan Boss and Lindsey Chambers for helpful comments and discussions during the preparation of this paper. 
I also thank an anonymous referee whose comments have improved this paper.\n\n\\section{Summary}\n\nThe CMS Collaboration has searched for narrow resonances in the\ninvariant mass spectrum of dimuon and dielectron final\nstates in event samples corresponding to integrated luminosities of\n$40$\\pbinv and $35$\\pbinv, respectively. The spectra are consistent\nwith standard model expectations, and upper limits on the\ncross section times branching fraction for $\\cPZpr$ decay into\nlepton pairs, relative to standard model $\\cPZ$ boson\nproduction, have been set.\nMass limits have been set on neutral gauge bosons $\\cPZpr$\nand RS Kaluza--Klein gravitons $\\GKK$. A $\\cPZpr$ with\nstandard-model-like couplings can be excluded below 1140\\GeV, the\nsuperstring-inspired $\\ZPPSI$ below 887\\GeV, and RS\nKaluza--Klein gravitons below 855 (1079)\\GeV for couplings of\n0.05 (0.10), all at 95\\%~C.L.\nThe higher centre-of-mass energy used in this search, compared with that\nof previous experiments, has resulted in limits that are comparable to, or\nexceed, those previously published, despite the much lower\nintegrated luminosity accumulated at the LHC thus far.\n\n\\section*{Acknowledgments}\n\nWe wish to congratulate our colleagues in the CERN accelerator\ndepartments for the excellent performance of the LHC machine. 
We thank\nthe technical and administrative staff at CERN and other CMS\ninstitutes, and acknowledge support from: FMSR (Austria); FNRS and FWO\n(Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria);\nCERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES\n(Croatia); RPF (Cyprus); Academy of Sciences and NICPB (Estonia);\nAcademy of Finland, ME, and HIP (Finland); CEA and CNRS\/IN2P3\n(France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NKTH\n(Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN\n(Italy); NRF and WCU (Korea); LAS (Lithuania); CINVESTAV, CONACYT,\nSEP, and UASLP-FAI (Mexico); PAEC (Pakistan); SCSR (Poland); FCT\n(Portugal); JINR (Armenia, Belarus, Georgia, Ukraine, Uzbekistan); MST\nand MAE (Russia); MSTD (Serbia); MICINN and CPAN (Spain); Swiss\nFunding Agencies (Switzerland); NSC (Taipei); TUBITAK and TAEK\n(Turkey); STFC (United Kingdom); DOE and NSF (USA).\n\n\\subsection{\\texorpdfstring{\\cPZ\/$\\gamma^*$}{Z\/gamma*} Backgrounds}\n\nThe shape of the dilepton invariant mass spectrum is obtained from\nDrell--Yan production using an MC simulation based on the\n\\PYTHIA event generator. The simulated\nspectrum at the invariant mass peak of the $\\cPZ$ boson is normalized to\nthe data. The dimuon analysis uses the data events in the $\\cPZ$\nmass interval of 60--120\\GeV; the dielectron analysis\nuses data events in the narrower interval of 80--100\\GeV in order to\nobtain a comparably small background contamination.\n\nA contribution to the uncertainty attributed to the extrapolation of the\nevent yield and the shape of the Drell--Yan background to high\ninvariant masses arises from higher-order QCD corrections.\nThe next-to-next-to-leading order (NNLO) $k$-factor is computed using\n{\\sc FEWZ v1.X}~\\cite{Melnikov:2006kv}, with \\PYTHIA~{\\sc v6.409} and\n{\\sc CTEQ6.1} PDF~\\cite{Stump:2003yu}\nas a baseline. 
It is found that the variation of the\n$k$-factor with mass does not exceed 4\\%, where\nthe main difference arises from the comparison of \\PYTHIA and {\\sc FEWZ} calculations.\nA further source of uncertainty arises from the PDFs. The {\\sc lhaglue}~\\cite{Bourilkov:2003kk}\ninterface to the {\\sc lhapdf-5.3.1}~\\cite{Whalley:2005nh} library is used to evaluate these uncertainties,\nusing the error PDFs from the {\\sc CTEQ6.1} and the MRST2006nnlo~\\cite{Martin:2007bv}\nuncertainty eigenvector sets.\nThe uncertainty on the ratio of the background in\nthe high-mass region to that in the region of the $\\cPZ$ peak is below\n4\\% for both PDF sets and masses below 1\\TeV. Combining the higher-order QCD and PDF uncertainties in\nquadrature, the resulting uncertainty in the number of events normalized to those expected at the\n$\\cPZ$ peak is about 5.7\\% for masses between 200\\GeV and 1\\TeV.\n\n\\subsection{Other Backgrounds with Prompt Lepton Pairs\n\\label{sec:e-mu} }\n\nThe dominant non-Drell--Yan electroweak contribution at high\n$m_{\\ell\\ell}$ is {$\\ttbar$}; in addition there are contributions\nfrom $\\tq\\PW$ and diboson production. In the $\\cPZ$ peak\nregion, $\\cPZ \\rightarrow \\Pgt\\Pgt$ decays also contribute.\nAll these processes are flavour symmetric and produce twice as\nmany $\\Pe\\Pgm$ pairs as $\\Pe\\Pe$ or $\\Pgm\\Pgm$ pairs.\nThe invariant mass spectrum from\n$\\Pe^\\pm\\Pgm^\\mp$ events is expected to have the same shape as that of\nsame-flavour $\\ell^+\\ell^-$ events but without significant contamination\nfrom Drell--Yan production.\n\nFigure~\\ref{fig:muonselectrons} shows the\nobserved $\\Pe^\\pm\\Pgm^\\mp$ dilepton invariant mass spectrum\nfrom a dataset corresponding to 35\\pbinv, overlaid on\nthe prediction from simulated background processes. This spectrum was\nobtained using the same single-muon trigger as in the dimuon analysis\nand by requiring oppositely charged leptons of different flavour. 
A very similar spectrum is obtained when an electron\ntrigger is used instead.\nDifferences in the geometric acceptances and efficiencies result in the predicted ratios\nof $\\Pgmp\\Pgmm$ and $\\Pe\\Pe$ to $\\Pe^\\pm\\Pgm^\\mp$ being approximately 0.64\nand 0.50, respectively. In the data, shown in\nFig.~\\ref{fig:muonselectrons}, there are 32 (7) $\\Pe^\\pm\\Pgm^\\mp$ events\nwith invariant mass above 120 (200)\\GeV. This yields an expectation of\nabout 20 (4) dimuon events and 16 (4) dielectron events. A direct\nestimate from MC simulations of the processes involved\npredicts $20.1\\pm3.6$ $(5.3\\pm 1.0)$ dimuon events and\n$13.2\\pm2.4$ $(3.5\\pm0.6)$ dielectron events. The uncertainty\nincludes both statistical and systematic contributions, and is dominated\nby the theoretical uncertainty of 15\\% on the $\\ttbar$ production cross\nsection~\\cite{Campbell:2010ff,Kleiss:1988xr}.\nThe agreement\nbetween the observed and predicted distributions validates the estimated\nbackground contributions from prompt leptons\nobtained using MC simulations.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=90,width=0.49\\textwidth]{figures\/MuonsElectronsOppSign.pdf}\n\\end{center}\n\\caption{\\label{fig:muonselectrons}\nThe observed opposite-sign $\\Pe^\\pm\\Pgm^\\mp$ dilepton invariant mass spectrum\n(data points). The uncertainties on the data points (statistical only)\nrepresent 68\\% confidence intervals for the Poisson means.\nFilled histograms show contributions to the\nspectrum from $\\ttbar$, other sources of prompt leptons\n($\\tq\\PW$, diboson production, $\\cPZ\\to\\Pgt\\Pgt$), and\nthe multi-jet background (from Monte Carlo simulation).\n}\n\\end{figure}\n\n\\subsection{Events with Misidentified and Non-Prompt Leptons}\n\nA further source of background arises when objects are falsely\nidentified as prompt leptons. 
The misidentification of jets as\nleptons, the principal source of such backgrounds, is more likely to\noccur for electrons than for muons.\n\nBackgrounds arising from jets that are misidentified as electrons\ninclude $\\PW\\to \\Pe\\cPgn$ + jet events with\none jet misidentified as a prompt electron, and also multi-jet events with\ntwo jets misidentified as prompt electrons.\nA prescaled single EM cluster trigger is used for collecting a sample of\nevents to determine the rate of jets misreconstructed as electrons and\nto estimate the backgrounds from misidentified electrons.\nThe events in this sample are\nrequired to have no more than one\nreconstructed electron, and missing transverse energy of less than 20\\GeV,\nto suppress the contributions from $\\cPZ$\nand $\\PW$ events, respectively.\nThe probability for an EM cluster with $H\/E<5\\%$ to be\nreconstructed as an electron is determined in bins of $\\ET$ and $\\eta$ from\na data sample dominated by multi-jet events and is used to appropriately weight\nevents that have two such clusters passing\nthe double EM trigger.\nThe estimated background\ncontribution to the dielectron mass spectrum due to misidentified jets is\n$8.6\\pm3.4$ $(2.1\\pm0.8)$ events for $m_{\\Pe\\Pe} > 120$ (200)\\GeV.\n\nIn order to estimate the residual contribution from background events\nwith at least one non-prompt or misidentified muon, events are\nselected from the data sample with single muons that pass all\nselection cuts except the isolation requirement.\nA map is created, showing the isolation probability for these muons as\na function of $\\pt$ and $\\eta$.\nThis probability map is corrected for the expected\ncontribution from events with single prompt muons from $\\ttbar$ and\n$\\PW$ decays and for the observed correlation between the\nprobabilities for two muons in the same event. 
The probability map is\nused to predict the number of background events with two isolated\nmuons based on the sample of events that have two non-isolated muons.\nThis procedure, which has been validated using simulated events,\npredicts a mean background for $m_{\\Pgm\\Pgm} > 120$ (200)\\GeV of\n$0.8\\pm0.2\\ (0.2\\pm0.1)$ events.\n\nAs the signal sample includes the requirement that the muons in the\npair have opposite electric charge, a further cross-check of the\nestimate is performed using events with two isolated muons of the same\ncharge. There are\nno events with same-charge muon pairs and $m_{\\Pgm\\Pgm} > 120$\\GeV,\na result that is statistically compatible with both the figure of\n$1.6\\pm0.3$ events predicted from SM processes using MC simulation and\nthe figure of $0.4\\pm0.1$ events obtained using methods based on data.\n\n\\subsection{Cosmic Ray Muon Backgrounds}\n\nThe $\\Pgmp\\Pgmm$ data sample is susceptible to contamination from\ntraversing cosmic ray muons, which may be misreconstructed as a pair\nof oppositely charged, high-momentum muons.\nCosmic ray events can be removed from the data sample\nbecause of their distinct topology (collinearity of the two tracks\nassociated with the\nsame muon)\nand their uniform distribution of impact parameters with respect to the collision vertex.\nThe residual mean expected background from cosmic ray muons is\nmeasured using sidebands to be less than 0.1 events with\n$m_{\\Pgm\\Pgm} > 120$\\GeV.\n\n\\section{Dilepton Invariant Mass Spectra}\n\nThe measured dimuon and dielectron invariant mass spectra are displayed in\nFigs.~\\ref{fig:spectra} (left) and (right), respectively,\nalong with the expected signal from a $\\ZPSSM$ with a mass of 750\\GeV.\nIn the dimuon sample, the highest invariant mass\nevent has $m_{\\Pgm\\Pgm}=463$\\GeV, with the $\\pt$ of the two\nmuons measured to be 258 and 185\\GeV.\nThe highest invariant mass event in the dielectron sample\nhas $m_{\\Pe\\Pe}=419$\\GeV, with the electron candidates 
having $\\ET$ of 125 and 84\\GeV.\n\nThe expectations from the various background sources,\n$\\cPZ{\/}\\gamma^*$, $\\ttbar$, other sources of prompt leptons ($\\tq\\PW$, diboson\nproduction, $\\cPZ\\to\\Pgt\\Pgt$), and multi-jet events,\nare also overlaid in Fig.~\\ref{fig:spectra}. For the dielectron\nsample, the multi-jet background estimate was obtained directly from\nthe data. The prediction for Drell--Yan production of $\\cPZ{\/}\\gamma^*$\nis normalized to the observed $\\cPZ\\to\\ell\\ell$ signal. All other MC\npredictions are normalized to the expected cross sections.\nFigures~\\ref{fig:cum_spectra} (left) and (right) show the corresponding\ncumulative distributions of the spectra for the dimuon and\ndielectron samples. Good agreement is observed between the data and the\nexpectation from SM processes over the mass region above the $\\cPZ$ peak.\n\nSearches for narrow resonances at the Tevatron~\\cite{D0_Zp,CDF_Zp}\nhave placed lower limits on resonance masses in the range $600$\\GeV to\n$1000$\\GeV. The region with dilepton masses $120\\GeV < m_{\\ell\\ell} <\n200\\GeV$ is part of the region for which resonances have been excluded\nby previous experiments, and thus should be dominated by SM processes. The\nobserved good agreement between the data and the prediction in this\ncontrol region gives confidence that the SM expectations and the detector\nperformance are well understood.\n\nIn the $\\cPZ$~peak mass region, defined as $60 < m_{\\ell\\ell} <\n120$\\GeV, the numbers of dimuon and dielectron candidates are 16\\,515\nand 8\\,768, respectively, with very small backgrounds. The\ndifference in the electron and muon numbers is due to the higher $\\ET$\nthreshold in the electron analysis and lower electron identification\nefficiencies at these energies. The expected yields in the control\nregion (120--200\\GeV) and high invariant mass region ($>$ 200\\GeV) are listed in\nTable~\\ref{tab:event_yield}. 
The agreement between the observed data\nand expectations, while not used in the shape-based analysis,\nis good.\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=90,width=0.49\\textwidth]{figures\/MuonsPlusMuonsMinus_log.pdf}\n\\includegraphics[angle=90,width=0.49\\textwidth]{figures\/massHist35ForPaper.pdf}\n\\end{center}\n\\caption{\\label{fig:spectra}\nInvariant mass spectrum of $\\Pgmp\\Pgmm$ (left) and $\\Pe\\Pe$ (right) events. The points with\nerror bars represent the data. The uncertainties on the data points (statistical only) \nrepresent 68\\% confidence intervals for the Poisson means. The\nfilled histograms represent the expectations from\nSM processes: $\\cPZ{\/}\\gamma^*$, $\\ttbar$, other sources of \nprompt leptons ($\\tq\\PW$, diboson production, $\\cPZ\\to\\Pgt\\Pgt$), and\nthe multi-jet backgrounds.\nThe open histogram shows the signal expected for a $\\ZPSSM$ with\na mass of 750\\GeV.\n}\n\\end{figure}\n\n\\begin{figure*}[htbp!]\n\\begin{center}\n\\includegraphics[angle=90,width=0.49\\textwidth]{figures\/MuonsPlusMuonsMinus_cumulative_log.pdf}\n\\includegraphics[angle=90,width=0.49\\textwidth]{figures\/cMassHist35ForPaper.pdf}\n\\end{center}\n\\caption{\\label{fig:cum_spectra} Cumulative distribution of invariant\nmass spectrum of $\\Pgmp\\Pgmm$ (left) and $\\Pe\\Pe$ (right) events. The\npoints with error bars represent the data, and the filled histogram\nrepresents the expectations from SM processes. }\n\\end{figure*}\n\n\n\n\\begin{table*}[htb!]\n\\centering\n\\caption{Number of dilepton events with invariant mass in the control\nregion $120~<~m_{\\ell\\ell}~<~200$\\GeV and the search region $m_{\\ell\\ell} > 200$\\GeV.\nThe expected number of $\\cPZpr$ events is given within ranges of 328\\GeV and \n120\\GeV for the dimuon sample and the dielectron sample respectively, centred \non 750\\GeV. The total background is the sum of the SM processes listed.\nThe MC yields are normalized to the expected cross sections. 
\nUncertainties include both statistical and systematic components\nadded in quadrature.}\n\\label{tab:event_yield}\n\\begin{tabular}{l|c|c|c|c}\n\\hline\\hline\nSource & \\multicolumn{4}{c}{Number of events} \\\\\n & \\multicolumn{2}{c|}{Dimuon sample }\n & \\multicolumn{2}{c}{Dielectron sample} \\\\\n & ($120-200$)\\GeV & $ >$200\\GeV\n & ($120-200$)\\GeV & $ >$200\\GeV \\\\ \\hline\n CMS data & 227 & 35 & 109 & 26 \\\\\n$\\ZPSSM$ (750\\GeV) & --- & $15.0 \\pm 1.9$ & --- & $8.7\\pm1.1$ \\\\\nTotal background & $204 \\pm 23$ & $36.3 \\pm 4.3$ & $120 \\pm 14$ & $24.4\\pm 3.0$ \\\\ \\hline\n$\\cPZ{\/}\\gamma^*$ & $187 \\pm 23$ & $30.2 \\pm 3.6$ & $104 \\pm 14$ & $18.8\\pm 2.3$ \\\\\n$\\ttbar$ & $12.3 \\pm 2.3$ & $4.2 \\pm 0.8$ & $7.6 \\pm 1.4$ & $2.7 \\pm 0.5$ \\\\\nOther prompt leptons & $4.4 \\pm 0.5$ & $1.7 \\pm 0.2$ & $2.1 \\pm 0.2$ & $0.8 \\pm 0.1$ \\\\\nMulti-jet events & $0.6 \\pm 0.2$ & $0.2 \\pm 0.1$ & $6.5 \\pm 2.6$ & $2.1 \\pm 0.8$ \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\n\n\n\n\n\n\\section{Event Samples and Selection}\n\nSimulated event samples for the signal and associated backgrounds\nwere generated with the \\PYTHIA~{\\sc v6.422}~\\cite{Sjostrand:2006za}\nMC event generator, and with\n\\MADGRAPH~\\cite{MADGRAPH} and\n{\\sc powheg v1.1}~\\cite{Alioli:2008gx, Nason:2004rx, Frixione:2007vw}\ninterfaced with the \\PYTHIA parton-shower generator \nusing the {\\sc CTEQ6L1}~\\cite{Pumplin:2002vw} PDF set.\nThe response of the detector was simulated in detail using\n\\GEANTfour~\\cite{GEANT4}. 
These samples were further processed\nthrough the trigger emulation and event reconstruction chain of the CMS\nexperiment.\n\nFor both dimuon and dielectron final states, \ntwo isolated same flavour leptons that pass\nthe lepton identification criteria described in Section~\\ref{sec:leptonID} are required.\nThe two charges are\nrequired to have opposite sign in the case of dimuons (for which a charge\nmisassignment implies a large momentum measurement error), but not in the\ncase of dielectrons (for which charge assignment is decoupled from\nthe ECAL-based energy measurement). \nAn opposite-charge requirement for\ndielectrons would lead to a loss of signal efficiency of a few percent.\n\n\nOf the two muons selected, one is required to satisfy the ``tight''\ncriteria. \nThe electron sample requires at least one\nelectron candidate in the barrel because events with both electrons in the endcaps\nwill have a lower signal-to-background ratio. \nFor both channels, each event is required to have a\nreconstructed vertex with at least four associated tracks, located\nless than 2~cm from the centre of the detector in the direction\ntransverse to the beam and less than 24\\cm in the direction along the\nbeam. This requirement provides protection against cosmic rays.\nAdditional suppression of cosmic ray muons\nis obtained by requiring the\nthree-dimensional opening angle between the two muons to be smaller than $\\pi - 0.02$ radians.\n\n\n\\section{Introduction}\n\nMany models of new physics predict the existence of narrow\nresonances, possibly at the TeV mass scale, that decay to a pair of\ncharged leptons. This Letter describes a search for resonant signals\nthat can be detected by the Compact Muon Solenoid (CMS) detector at\nthe Large Hadron Collider (LHC)~\\cite{lhc} at CERN. 
\nThe Sequential Standard Model $\\ZPSSM$ with standard-model-like couplings, the\n$\\ZPPSI$ predicted by grand unified theories~\\cite{Leike:1998wr},\nand Kaluza--Klein graviton excitations arising in \nthe Randall-Sundrum (RS) model of extra\ndimensions~\\cite{Randall:1999vf, Randall:1999ee} were used as benchmarks. \nThe RS model has two free parameters: the mass of the first graviton excitation and \nthe coupling $k\/\\overline{M}_{\\rm Pl}$, \nwhere $k$ is the curvature of the extra dimension and \n$\\overline{M}_{\\rm Pl}$ is the reduced effective Planck scale. \nTwo values of the coupling parameter were considered: $k\/\\overline{M}_{\\rm Pl}$~=~0.05 and 0.1.\nFor a resonance mass of 1\\TeV, the widths are 30, 6\\ and 3.5\\ (14)\\GeV \nfor a $\\ZPSSM$, $\\ZPPSI$, and $\\GKK$ with $k\/\\overline{M}_{\\rm Pl}$~=~0.05 (0.1), respectively.\n\nThe results of searches for narrow $\\cPZpr \\rightarrow \\ell^+ \\ell^-$ and\n$\\GKK \\rightarrow \\ell^+ \\ell^-$ resonances in $\\Pp\\Pap$ collisions at the\nTevatron with over\n5\\fbinv of integrated luminosity at centre-of-mass energy\nof 1.96\\TeV have previously\nbeen reported~\\cite{D0_RS,D0_Zp,CDF_RS,CDF_Zp}. \nIndirect constraints have been placed on the mass of the virtual \n$\\cPZpr$ bosons by LEP-II experiments~\\cite{delphi,aleph,opal,l3}\nby examining the cross sections and angular distribution\nof dileptons and hadronic final states in $\\Pep\\Pem$ collisions.\n\nThe results presented in this Letter were obtained from an analysis of\ndata recorded in 2010, corresponding to an integrated luminosity of $40 \\pm\n4$\\pbinv in the dimuon channel, and $35 \\pm 4$\\pbinv in the dielectron\nchannel, obtained from $\\Pp\\Pp$ collisions at a centre-of-mass energy\nof 7\\TeV. 
The total integrated luminosity used for the electron\nanalysis is smaller than that for the muon analysis because of the\ntighter quality requirements imposed on the data.\nThe search for resonances is based on a shape analysis of the dilepton mass spectra,\nin order to be robust against uncertainties in the absolute background\nlevel. By examining the dilepton mass spectrum from below the $\\cPZ$\nresonance to the highest mass events recorded, we obtain\nlimits on the ratio of the production cross section times branching\nfraction for high-mass resonances to that of the $\\cPZ$. Using\nfurther input describing the dilepton-mass dependence of the effects of\nparton distribution functions (PDFs) and $k$-factors,\nmass bounds are calculated for specific models.\nIn addition, model-independent limit contours are determined in the\ntwo-parameter $(c_d,c_u)$ plane~\\cite{Carena:2004xs}. Selected\nbenchmark models for $\\cPZpr$ production are illustrated in this plane,\nwhere $c_u$ and $c_d$ are model-dependent couplings of the\n$\\cPZpr$ to up- and down-type quarks, respectively, allowing lower bounds\nto be determined.\n\n\\section{The CMS Detector}\n\nThe central feature of the CMS~\\cite{JINST} apparatus is a\nsuperconducting solenoid, of 6~m internal diameter, providing an axial field\nof 3.8~T. Within the field volume are the silicon pixel and strip\ntrackers, the crystal electromagnetic calorimeter (ECAL), and the\nbrass\/scintillator hadron calorimeter (HCAL). The endcap hadronic\ncalorimeters are segmented in the $z$-direction. Muons are measured in\ngas-ionization detectors embedded in the steel return yoke. In\naddition to the barrel and endcap detectors, CMS has extensive forward\ncalorimetry.\n\nCMS uses a right-handed coordinate system, with the origin at the\nnominal interaction point, the $x$-axis pointing to the centre of the\nLHC, the $y$-axis pointing up (perpendicular to the LHC plane), and\nthe $z$-axis along the anticlockwise-beam direction. 
The polar angle,\n$\\theta$, is measured from the positive $z$-axis, and the azimuthal\nangle, $\\phi$, is measured in the $x$-$y$ plane.\n\nMuons are measured in the pseudorapidity range $|\\eta|< 2.4$, with\ndetection planes based on one of three technologies: drift tubes in the barrel\nregion, cathode strip chambers in the endcaps, and resistive plate chambers\nin the barrel and part of the endcaps.\nThe inner tracker (silicon pixels and strips) detects charged particles within the pseudorapidity range\n$|\\eta| < 2.5$.\n\nThe electromagnetic calorimeter consists of nearly 76\\,000 lead\ntungstate crystals, which provide coverage in pseudorapidity $|\\eta| < 1.479$\nin the barrel region (EB, with crystal size\n$\\Delta\\eta=0.0174$ and $\\Delta\\phi = 0.0174$) and $1.479 < |\\eta| <\n3.0$ in the two endcap regions (EE, with somewhat larger crystals).\nA preshower detector\nconsisting of two planes of silicon sensors interleaved with a total of\n3$\\,X_0$ of lead is located in front of the EE.\n\nThe first level (L1) of the CMS trigger system, composed of custom\nhardware processors, selects the most interesting events\nusing information from the calorimeters and muon detectors.\nThe High Level Trigger (HLT) processor farm further decreases the\nevent rate, employing the full event information, including that from the\ninner tracker. The muon selection algorithms in the HLT use information from the\nmuon detectors and the silicon pixel and strip trackers. The electromagnetic (EM)\nselection algorithms use the energy deposits in the ECAL and HCAL; the\nelectron selection in addition requires tracks matched to clusters. Events\nwith muons or electromagnetic clusters with $\\pt$ above L1 and HLT\nthresholds are recorded.\n\n\\section{Electron and Muon Selection \\label{sec:lepton}}\n\n\\subsection{Triggers \\label{sec:triggers}}\n\nThe events used in the dimuon channel analysis were collected using a\nsingle-muon trigger. 
\nThe algorithm requires a muon candidate to be found in the muon detectors by the L1 trigger.\nThe candidate track is then matched to a silicon tracker track, forming an HLT muon.\nThe HLT muon is required to have $\\pt >9$ to 15\\GeV, depending on the running period. \n\nA double EM cluster trigger was used to select\nthe events for the dielectron channel. ECAL clusters are formed by\nsumming energy deposits in crystals surrounding a ``seed'' that is\nlocally the highest-energy crystal. \nThe clustering algorithm takes into account the emission of bremsstrahlung.\nThis trigger requires two clusters with the ECAL transverse energy $\\ET$\nabove a threshold of 17 to 22\\GeV, depending on the running period. \nFor each of these clusters, the ratio $H\/E$, where $E$ is the energy of the ECAL cluster and\n $H$ is the energy in the HCAL cells situated behind it, is required to be less than 15\\%.\nAt least one of these clusters must\nhave been associated with an energy deposit identified by the L1 trigger.\n\n\\subsection{Lepton Reconstruction\\label{sec:leptonID}}\n\nThe reconstruction, identification, and calibration of muons\nand electrons\nfollow standard CMS methods~\\cite{EWK-10-002-PAS}.\nCombinations of test beam, cosmic ray muons, and \ndata from proton collisions \nhave been used to calibrate the relevant detector systems\nfor both muons and electrons.\n\n\nMuons are reconstructed independently as tracks in both the muon\ndetectors and the silicon tracker~\\cite{MUONPAS}. \nThe two tracks can be matched and fitted simultaneously to\nform a ``global muon''. Both muons in the event must be identified as\nglobal muons, with at least 10 hits in the silicon tracker and with\n$\\pt > 20$\\GeV. All muon candidates that satisfy these criteria\nare classified as ``loose'' muons. 
At least one of the two muons in\neach event must be further classified as a ``tight'' muon by passing the\nfollowing additional requirements: a transverse impact parameter with\nrespect to the collision point less than 0.2~cm; a $\\chi^2$ per degree\nof freedom less than 10 for the global track fit; at least one hit in\nthe pixel detector; hits from the muon tracking system in at least two\nmuon stations on the track; and correspondence with the single-muon trigger.\n\nElectrons are reconstructed by associating a cluster in the ECAL with\na track in the tracker~\\cite{EGMPAS}. Track reconstruction, which \nis specific to\nelectrons to account for bremsstrahlung emission, is seeded from the\nclusters in the ECAL, first using the cluster position and energy to\nsearch for compatible hits in the pixel detector, and then using these\nhits as seeds to reconstruct a track in the silicon tracker. A minimum\nof five hits is required on each track. Electron candidates are required to be\nwithin the barrel or endcap acceptance regions, with pseudorapidities\nof $|\\eta|<1.442$ and $1.560<|\\eta|<2.5$, respectively. A candidate\nelectron is required to deposit most of its energy in the ECAL and\nrelatively little in the HCAL ($H\/E<5\\%$). The transverse shape of the\nenergy deposit is required to be consistent with that expected for an\nelectron, and the associated track must be well-matched in $\\eta$ and\n$\\phi$. Electron candidates must have $\\ET > 25$~GeV.\n\nIn order to suppress misidentified leptons from jets and non-prompt muons from\nhadron decays, both lepton selections impose isolation requirements.\nCandidate leptons are required to be isolated within a narrow cone of\nradius $\\Delta R = \\sqrt{(\\Delta\\eta)^2 + (\\Delta\\phi)^2} = 0.3$,\ncentred on the lepton. 
Muon isolation requires that the sum of the\n$\\pt$ of all tracks within the cone, excluding the muon,\nis less than 10\\% of the $\\pt$ of the muon.\nFor electrons, the sum of the $\\pt$ of the tracks, excluding\nthe tracks within an inner cone of $\\Delta R = 0.04$, is required to be less than 7$\\GeV$\nfor candidates reconstructed within the barrel acceptance and 15$\\GeV$\nwithin the endcap acceptance. The calorimeter isolation\nrequirement for electron candidates within the barrel acceptance is that,\nexcluding the $\\ET$ of the candidate, the sum of the $\\ET$ resulting from\ndeposits in the ECAL and the HCAL within a cone of $\\Delta R=0.3$ be less than\n0.03$\\ET$ + 2\\GeV. For candidates within the endcap acceptance, the\nsegmentation of the HCAL in the $z$-direction is exploited. For candidates with $\\ET$\nbelow 50\\GeV (above 50\\GeV), the isolation energy is required to be\nless than 2.5\\GeV ($0.03(\\ET-50) + 2.5\\GeV)$, where $\\ET$ is\ndetermined using the ECAL and the first layer of the segmented HCAL. The\n$\\ET$ in the second layer of the HCAL is required to be less than\n0.5\\GeV. 
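The cone-based isolation logic just described can be sketched as standalone selection functions; this is a simplified illustration, not the CMS reconstruction code, and the track/deposit containers and field names are hypothetical:

```python
import math

CONE_DR = 0.3    # outer isolation cone radius used for both lepton flavours
INNER_DR = 0.04  # inner veto cone for the electron track isolation

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R = sqrt(d_eta^2 + d_phi^2), with phi wrapped to (-pi, pi]."""
    dphi = math.atan2(math.sin(phi1 - phi2), math.cos(phi1 - phi2))
    return math.hypot(eta1 - eta2, dphi)

def muon_isolated(mu_pt, mu_eta, mu_phi, tracks):
    """Track isolation: sum of track pT in DR < 0.3 (the muon's own track,
    at DR = 0, is excluded) must stay below 10% of the muon pT."""
    iso = sum(t["pt"] for t in tracks
              if 0.0 < delta_r(mu_eta, mu_phi, t["eta"], t["phi"]) < CONE_DR)
    return iso < 0.10 * mu_pt

def electron_track_isolated(el_eta, el_phi, tracks, barrel):
    """Sum of track pT in 0.04 < DR < 0.3; threshold 7 GeV (barrel), 15 GeV (endcap)."""
    iso = sum(t["pt"] for t in tracks
              if INNER_DR < delta_r(el_eta, el_phi, t["eta"], t["phi"]) < CONE_DR)
    return iso < (7.0 if barrel else 15.0)

def electron_calo_isolated(el_et, calo_iso, hcal2_iso, barrel):
    """ET-dependent calorimeter isolation quoted in the text.  `calo_iso` is the
    ECAL (+ first HCAL layer in the endcap) isolation sum in DR < 0.3 excluding
    the candidate; `hcal2_iso` is the second HCAL layer sum (endcap only)."""
    if barrel:
        return calo_iso < 0.03 * el_et + 2.0
    threshold = 2.5 if el_et < 50.0 else 0.03 * (el_et - 50.0) + 2.5
    return calo_iso < threshold and hcal2_iso < 0.5
```

The numerical thresholds are the ones quoted above; everything else (data layout, exclusion of the lepton's own deposits) is schematic.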
These requirements ensure that the candidate electrons are\nwell-measured and have minimal contamination from jets.\n\nThe performance of the detector systems for the data sample presented in this paper is \nestablished using measurements of standard model (SM) $\\PW$ and $\\cPZ$ processes \nwith leptonic final states~\\cite{EWK-10-002-PAS} \nand using traversing cosmic ray muons~\\cite{CMS_CFT_09_014}.\n\nMuon momentum resolution varies from 1\\% at momenta\nof a few tens of \\GeV to 10\\% at momenta of several hundred \\GeV,\nas verified with measurements made with cosmic rays.\nThe alignment of the muon and inner tracking systems is important\nfor obtaining the best momentum resolution, and hence mass resolution,\nparticularly at the high masses relevant to the $\\cPZpr$ search.\nAn additional contribution to the momentum\nresolution arises from the presence of distortion modes in the tracker\ngeometry that are not completely constrained by the alignment procedures.\nThe dimuon mass resolution is estimated to have an rms of\n5.8\\% at 500\\GeV and 9.6\\% at 1\\TeV.\n\nThe ECAL has an ultimate energy resolution of better than $0.5\\%$ for\nunconverted photons with transverse energies above $100\\GeV$.\nThe ECAL energy resolution obtained thus far is\non average 1.0\\% for the barrel and 4.0\\% for the endcaps.\nThe mass resolution is estimated to be\n1.3\\% at 500\\GeV and 1.1\\% at 1\\TeV.\nElectrons from $\\PW$ and $\\cPZ$ bosons were used to calibrate ECAL\nenergy measurements. 
\nFor both muons and electrons, the energy scale\nis set using the $\\cPZ$ mass peak, except for electrons in the barrel\nsection of the ECAL, where the energy scale is set using neutral pions,\nand then checked using the $\\cPZ$ mass peak.\nThe ECAL energy scale uncertainty is 1\\% in the barrel and 3\\% in the\nendcaps.\n\n\n\n\\subsection{Efficiency Estimation \\label{sec:eff}}\n\nThe efficiency for identifying and reconstructing lepton candidates is\nmeasured with the tag-and-probe method~\\cite{EWK-10-002-PAS}.\nA tag lepton is established by applying tight cuts\nto one lepton candidate; the other candidate is used as a probe. A\nlarge sample of high-purity probes is obtained by requiring that the\ntag-and-probe pair have an invariant mass consistent with the $\\cPZ$\nboson mass ($80 < m_{\\ell\\ell} < 100\\GeV)$. \nThe factors contributing to the overall efficiency are measured in the data. They are:\nthe trigger efficiency, the reconstruction efficiency in the silicon tracker,\nthe electron clustering efficiency, and the lepton reconstruction and\nidentification efficiency. All efficiencies and scale factors\nquoted below are computed\nusing events in the $\\cPZ$ mass region.\n\nThe trigger efficiencies are defined relative to the full offline\nlepton requirements. For the dimuon events, the efficiency of the\nsingle muon trigger with respect to loose muons is measured to be\n$89\\% \\pm 2\\%$~\\cite{EWK-10-002-PAS}. The overall efficiency, defined\nwith respect to particles within the physical acceptance of the\ndetector, for loose (tight) muons is measured to be $94.1\\%\\pm1.0\\%$\n($81.2\\%\\pm1.0\\%$). Within the statistical precision allowed by the\ncurrent data sample, the dimuon efficiency is constant as a function of\n$\\pt$ above 20\\GeV, as is the ratio of the efficiency in the data to that in the Monte Carlo (MC)\nof 0.977 $\\pm$ 0.004. 
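The tag-and-probe counting described above can be sketched as follows; the input format is hypothetical and the plain binomial uncertainty is a simplification of the full treatment:

```python
import math

Z_WINDOW = (80.0, 100.0)  # GeV, the Z mass window quoted in the text

def tag_and_probe_efficiency(pairs):
    """Counting estimate of a selection efficiency from tag-probe pairs.

    `pairs` is a list of (invariant_mass, probe_passes) tuples; only pairs
    whose mass falls inside the Z window enter the count.  Returns
    (efficiency, binomial uncertainty)."""
    in_window = [passed for mass, passed in pairs
                 if Z_WINDOW[0] < mass < Z_WINDOW[1]]
    n = len(in_window)
    k = sum(in_window)
    eff = k / n
    err = math.sqrt(eff * (1.0 - eff) / n)
    return eff, err
```

A data/MC scale factor, as used in the text, is then just the ratio of two such measurements.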
For dielectron events, the double EM cluster trigger is\n100\\% efficient (99\\% during the early running period). The total\nelectron identification efficiency is 90.1\\% $\\pm$ 0.5\\%\\ (barrel) and 87.2\\% $\\pm$ 0.9\\%\\ (endcap). \nThe ratio of the electron efficiency measured from the data to that\ndetermined from MC simulation at the $\\cPZ$ resonance is 0.979 $\\pm$ 0.006\\ (EB)\nand 0.993 $\\pm$ 0.011\\ (EE). To determine the efficiency applicable to\nhigh-energy electrons in the data sample, this correction factor is\napplied to the efficiency found using MC simulation.\nThe efficiency of electron identification increases as a\nfunction of the electron transverse energy until it becomes flat beyond an\n$\\ET$ value of about 45\\GeV. Between 30 and 45 GeV it increases by about 5\\%.\n\n\n\\section{Limits on the Production Cross Section}\n\nThe observed invariant mass spectrum agrees with expectations based on\nstandard model processes, therefore limits are set on the possible\ncontributions from a narrow heavy resonance.\nThe parameter of interest is the ratio of the products of cross sections\nand branching fractions:\n\\begin{equation}\n\\label{eq:rsigma}\nR_\\sigma = \\frac{\\sigma(\\Pp\\Pp\\to \\cPZpr+X\\to\\ell\\ell+X)}\n {\\sigma(\\Pp\\Pp\\to \\cPZ+X \\to\\ell\\ell+X)}.\n\\end{equation}\n\nBy focusing on the ratio\n$R_\\sigma$, we eliminate the uncertainty in the integrated\nluminosity, reduce the dependence on experimental\nacceptance, trigger, and offline efficiencies, and generally obtain a more robust result.\n\nFor statistical inference about $R_\\sigma$, we first estimate the Poisson mean\n$\\mu_\\cPZ$ of the number of $\\cPZ\\to\\ell\\ell$ events in the sample\nby counting the number of events in the $\\cPZ$ peak mass region and\ncorrecting for a small ($\\sim 0.4\\%$) background contamination (determined with MC simulation).\nThe uncertainty on $\\mu_\\cPZ$ is about 1\\% (almost all statistical) and contributes negligibly to\nthe uncertainty 
on $R_\\sigma$.\n\nWe then construct an extended unbinned likelihood function for the\nspectrum of $\\ell\\ell$ invariant mass values $m$ above\n200\\GeV, based on a sum of analytic probability density functions\n(pdfs) for the signal and background shapes.\n\nThe pdf $f_\\mathtt{S}(m|\\Gamma,M,w)$ for the resonance signal is a\nBreit-Wigner of width $\\Gamma$ and mass $M$ convoluted with a Gaussian\nresolution function of width $w$ (section~\\ref{sec:leptonID}). The width $\\Gamma$ is taken to\nbe that of the $\\ZPSSM$ (about 3\\%); as noted below, the high-mass limits\nare insensitive to this width.\nThe Poisson mean of\nthe yield is $\\mu_\\mathtt{S} = R_\\sigma \\cdot \\mu_\\cPZ \\cdot R_\\epsilon$,\nwhere $R_\\epsilon$ is the ratio of selection efficiency times detector\nacceptance for $\\cPZpr$ decay to that of $\\cPZ$ decay; $\\mu_\\mathtt{B}$\ndenotes the Poisson mean of the total background yield.\nA background\npdf $f_\\mathtt{B}$ was chosen and its shape parameters fixed by\nfitting to the simulated Drell--Yan spectrum in the mass range\n$200 < m_{\\ell\\ell} < 2000$\\GeV.\nTwo functional forms for the dependence of $f_\\mathtt{B}$\non shape parameters $\\alpha$ and $\\kappa$ were tried:\n$f_\\mathtt{B}(m|\\alpha,\\kappa) \\sim \\exp(-\\alpha m^\\kappa)$ and\n$\\sim \\exp(-\\alpha m)m^{-\\kappa}$. 
Both\nyielded good fits and consistent results for both the dimuon and dielectron spectra.\nFor definiteness, this Letter presents results obtained with the latter form.\n\nThe extended likelihood ${\\cal L}$ is then\n\n\\begin{equation}\n\\label{eq:likelihood}\n{\\cal L}({\\boldsymbol m}|R_\\sigma,M,\\Gamma,w,\\alpha,\\kappa,\\mu_\\mathtt{B}) =\n\\frac{\\mu^N e^{-\\mu}}{N!}\\prod_{i=1}^{N}\\left(\n\\frac{\\mu_\\mathtt{S}(R_\\sigma)}{\\mu}f_\\mathtt{S}(m_i|M,\\Gamma,w)+\n\\frac{\\mu_\\mathtt{B}}{\\mu}f_\\mathtt{B}(m_i|\\alpha,\\kappa)\n\\right),\n\\end{equation}\n\nwhere ${\\boldsymbol m}$ denotes the dataset in which\nthe observables are the invariant mass values of the lepton pairs,\n$m_i$; $N$ denotes the total number of events observed above 200\\GeV;\nand $\\mu=\\mu_\\mathtt{S}+ \\mu_\\mathtt{B}$ is\nthe mean of the Poisson distribution from which $N$ is an observation.\n\nStarting from Eqn.~\\ref{eq:likelihood}, confidence\/credible intervals\nare computed using more than one approach, both frequentist (using\nprofile likelihood ratios) and Bayesian (multiplying ${\\cal L}$ by prior\npdfs including a uniform prior for the signal mean).\nWith no candidate events in the region of small expected background\nabove 465\\GeV, the result is insensitive to\nthe statistical technique, and also with respect to the width of the $\\cPZpr$\nand to changes in systematic uncertainties and their functional forms,\ntaken to be log-normal distributions with fractional uncertainties.\n\nFor $R_\\epsilon$, we assign an uncertainty of 8\\% for the dielectron\nchannel and 3\\% for the dimuon channel. These values reflect our current\nunderstanding of the detector acceptance and reconstruction efficiency\nturn-on at low mass (including PDF uncertainties\nand mass-dependence of $k$-factors), as well as the corresponding values\nat high mass, where\ncosmic ray muons are available to study muon performance but not electron\nperformance. 
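A minimal numerical sketch of this extended unbinned likelihood follows; the grid sizes, the coarse convolution scheme, and all parameter values are illustrative choices and not the analysis implementation:

```python
import math

M_MIN, M_MAX, NBINS = 200.0, 2000.0, 3600
STEP = (M_MAX - M_MIN) / NBINS
GRID = [M_MIN + i * STEP for i in range(NBINS + 1)]

def _normalize(vals):
    norm = sum(vals) * STEP
    return [v / norm for v in vals]

def f_bkg(alpha, kappa):
    """Background shape exp(-alpha*m) * m**(-kappa), normalized on [200, 2000] GeV."""
    return _normalize([math.exp(-alpha * m) * m ** (-kappa) for m in GRID])

def f_sig(M, Gamma, w):
    """Breit-Wigner(M, Gamma) smeared with a Gaussian resolution of width w,
    evaluated by direct numerical convolution on a coarse inner grid."""
    inner = GRID[::10]
    bw = [(Gamma / (2 * math.pi)) / ((mp - M) ** 2 + (Gamma / 2) ** 2) for mp in inner]
    vals = []
    for m in GRID:
        vals.append(sum(b * math.exp(-0.5 * ((m - mp) / w) ** 2)
                        for b, mp in zip(bw, inner)))
    return _normalize(vals)

def extended_nll(masses, mu_s, mu_b, sig_pdf, bkg_pdf):
    """Negative log of the extended likelihood (up to the constant log N! term)."""
    nll = mu_s + mu_b
    for m in masses:
        i = min(int((m - M_MIN) / STEP), NBINS)
        nll -= math.log(mu_s * sig_pdf[i] + mu_b * bkg_pdf[i])
    return nll
```

Minimizing (or profiling/integrating) this quantity over the parameters corresponds to the statistical procedures discussed below.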
The uncertainty in the mass scale affects only the mass\nregion below 500\\GeV where there are events in both channels\nextrapolating from the well-calibrated observed resonances.\nFor the dielectron channel, it is set to 1\\% based on linearity studies.\nFor the dimuon channel, it is set to zero, as a sensitivity study showed\nnegligible change in the results up to the maximum misalignment consistent\nwith alignment studies (corresponding to several percent change in momentum scale).\nThe acceptance for $\\GKK$ (spin 2) is higher than for $\\cPZpr$ (spin 1) by\nless than 8\\% over the mass range 0.75--1.1 TeV. This was\nconservatively neglected when calculating the limits.\n\nIn the frequentist calculation, the mean background level\n$\\mu_\\mathtt{B}$\nis the maximum likelihood estimate; in the fully\n Bayesian\ncalculation a prior must be assigned to the mean background\n level,\nbut the result is insensitive to reasonable choices (i.e., for which\nthe likelihood dominates the prior).\n\n\n\nThe upper limits on $R_{\\sigma}$ (Eqn.~\\ref{eq:rsigma}) from the various approaches\nare similar, and we report the Bayesian\nresult (implemented with Markov Chain Monte Carlo in\n{\\sc RooStats}~\\cite{MCMC}) for definiteness.\nFrom the dimuon and dielectron data, we obtain the upper limits on the cross section\nratio $R_{\\sigma}$ at 95\\% confidence level (C.L.) 
shown in\nFigs.~\\ref{fig:limits}(upper) and (middle), respectively.\n\nIn Fig.~\\ref{fig:limits}, the predicted cross section ratios\nfor $\\ZPSSM$ and $\\ZPPSI$ production are superimposed\ntogether with those\nfor $\\GKK$ production with dimensionless graviton coupling\nto SM fields $k\/\\overline{M}_\\mathrm{Pl}=0.05$ and $0.1$.\nThe leading order cross section predictions for\n$\\ZPSSM$ and $\\ZPPSI$\nfrom \\PYTHIA using {\\sc CTEQ6.1} PDFs are corrected for a mass dependent\n$k$-factor obtained using {\\textsc ZWPRODP}~\\cite{Accomando:2010fz,Hamberg:1990np,\nvanNeerven:1991gh,ZWPROD} to account for NNLO contributions.\nFor the RS graviton model, a constant NLO $k$-factor of 1.6 is\nused~\\cite{Mathews:2005bw}. The uncertainties\ndue to the QCD scale parameter and PDFs are indicated as a band.\nThe NNLO prediction for the $\\cPZ$ production cross\nsection is 0.97$\\pm\\,$0.04~nb~\\cite{Melnikov:2006kv}.\n\n\nPropagating the above-mentioned uncertainties into the\ncomparison of the experimental limits with the predicted cross section\nratios, we exclude\nat 95\\%~C.L. $\\cPZpr$ masses as follows. From the dimuon only\nanalysis,\nthe $\\ZPSSM$ can be excluded below 1027\\GeV, the\n$\\ZPPSI$ below 792\\GeV, and the\nRS $\\GKK$ below 778 (987)\\GeV for couplings\nof 0.05 (0.1). For the dielectron analysis, the\nproduction of $\\ZPSSM$ and $\\ZPPSI$ bosons is\nexcluded for masses below 958 and 731\\GeV, respectively. The\ncorresponding lower limits on the mass for\nRS $\\GKK$ with\ncouplings of 0.05 (0.10) are 729 (931)\\GeV.\n\n\\subsection{Combined Limits on the Production Cross Section Using\nDimuon and Dielectron Events}\n\nThe above statistical formalism is generalized to combine the results from the\ndimuon and dielectron channels,\nby defining the combined likelihood as the product\nof the likelihoods for the individual channels with $R_\\sigma$ forced to\nbe the same value for both channels. 
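Schematically, the combination amounts to multiplying the per-channel likelihood curves at a common value of $R_\sigma$ and reading off the posterior quantile; a grid-based sketch (the analysis itself uses Markov Chain Monte Carlo, and the curves here are placeholders):

```python
def bayesian_upper_limit(channel_likelihoods, r_grid, cl=0.95):
    """Upper limit at confidence level `cl` on a common R_sigma: multiply the
    per-channel likelihood curves (same R for all channels), apply a uniform
    prior, and return the `cl` quantile of the normalized posterior."""
    posterior = []
    for i in range(len(r_grid)):
        p = 1.0
        for lk in channel_likelihoods:
            p *= lk[i]
        posterior.append(p)
    total = sum(posterior)
    acc = 0.0
    for r, p in zip(r_grid, posterior):
        acc += p
        if acc >= cl * total:
            return r
    return r_grid[-1]
```

Combining channels tightens the limit, as seen in the mass limits quoted below.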
The combined limit is shown in\nFig.~\\ref{fig:limits}~(bottom).\n\nBy combining the two channels, the following\n 95\\% C.L. lower limits on the mass of a $\\cPZpr$ resonance are obtained:\n1140\\GeV for the $\\ZPSSM$, and 887\\GeV for $\\ZPPSI$ models. RS\nKaluza--Klein gravitons are excluded below 855 (1079)\\GeV\n for values of couplings 0.05 (0.10).\nOur observed limits are more restrictive than or comparable to\nthose previously obtained via similar direct searches by the\nTevatron experiments~\\cite{D0_RS,D0_Zp,CDF_RS,CDF_Zp},\nor indirect searches by LEP-II experiments~\\cite{delphi,aleph,opal,l3},\nwith the exception of $\\ZPSSM$, where the value from LEP-II\nis the most restrictive.\n\n\n\nThe distortion of the observed limits at $\\sim$400\\GeV visible in\nFig.~\\ref{fig:limits} is the result of a clustering of several dimuon\nand dielectron events around this mass. We have tested for the\nstatistical significance of these excesses ($p$-values expressed as\nequivalent $Z$-values, i.e. effective number of Gaussian sigma in a\none-sided test), using the techniques described in \\cite{PTDR2}.\nFor the dimuon sample, the probability of an enhancement at\nleast as large as that at 400 GeV occurring anywhere above 200 GeV in\nthe observed sample size corresponds to $Z<0.2$; for the electron\nsample, it is less. 
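The conversion between one-sided $p$-values and equivalent Gaussian $Z$-values used in this discussion is the standard inverse-normal mapping; a minimal sketch:

```python
from statistics import NormalDist

def p_to_z(p):
    """One-sided p-value -> equivalent number of Gaussian sigma."""
    return NormalDist().inv_cdf(1.0 - p)

def z_to_p(z):
    """Equivalent significance -> one-sided p-value."""
    return 1.0 - NormalDist().cdf(z)
```

For example, $Z = 1.1$ corresponds to a one-sided $p$-value of about 0.136.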
For the combined data sample, the corresponding\nprobability in a joint peak search is equivalent to $Z=1.1$.\n\n\\begin{figure}[htbp!]\n\\begin{center}\n\\includegraphics[width=0.65\\textwidth,angle=0]{figures\/zpr_ssm_ratio_mcmc_mumu_40pb_a.pdf}\n\\includegraphics[width=0.65\\textwidth,angle=0]{figures\/zpr_ssm_ratio_mcmc_ee_35pb_b.pdf}\n\\includegraphics[width=0.65\\textwidth,angle=0]{figures\/zpr_ssm_ratio_mcmc_comb_40pb_c.pdf}\n\\end{center}\n\\caption{\\label{fig:limits} Upper limits as a function\nof resonance mass $M$,\non the production ratio $R_{\\sigma}$ of\n cross section times branching fraction into lepton pairs\n for $\\ZPSSM$ and $\\GKK$ production and $\\ZPPSI$\n boson production. The limits are shown from (top) the $\\Pgmp\\Pgmm$ final\n state, (middle) the $\\Pe\\Pe$ final state and (bottom) the combined dilepton result.\nShaded yellow and red bands correspond to the $68\\%$ and $95\\%$ quantiles \nfor the expected limits.\nThe predicted cross section ratios are shown as bands, with widths\nindicating the theoretical uncertainties. \n}\n\n\n\\end{figure}\n\nIn the narrow-width approximation, the cross section for the process\n$\\Pp\\Pp\\to \\cPZpr+X\\to\\ell\\ell+X$ can be\nexpressed~\\cite{Carena:2004xs,Accomando:2010fz} in terms of the\nquantity $c_u w_u + c_d w_d$, where $c_u$ and $c_d$ contain the\ninformation from the model-dependent $\\cPZpr$ couplings to fermions\nin the annihilation of charge 2\/3 and charge $-$1\/3 quarks,\nrespectively, and where $w_u$ and $w_d$ contain the information about\nPDFs for the respective annihilation at a given $\\cPZpr$ mass.\n\n\nThe\ntranslation of the experimental limits into the ($c_u$,$c_d$) plane has\nbeen studied in the context of both the narrow-width and finite width\napproximations. The procedures have been shown to give the same\nresults. 
In Fig.~\\ref{fig:CuCd} the limits on the $\\cPZpr$ mass are\nshown as lines in the $(c_d,c_u)$ plane intersected by curves from\nvarious models which specify $(c_d,c_u)$ as a function of a model\nmixing parameter.\nIn this plane, the thin solid lines labeled by mass are\niso-contours of cross section with constant\n$c_u + (w_d\/w_u)c_d$, where $w_d\/w_u$ is in the range 0.5--0.6 for the\nresults relevant here. As this linear combination\nincreases or decreases by an order of magnitude, the mass limits\nchange by roughly 500 GeV.\nThe point labeled SM corresponds to the\n$\\ZPSSM$; it lies on the more general curve for the\nGeneralized Sequential Standard Model (GSM) for which the generators\nof the $U(1)_{T_{3L}}$ and $U(1)_Q$ gauge groups are mixed with a mixing\nangle $\\alpha$. Then $\\alpha = -0.072\\pi$ corresponds to the\n$Z^\\prime_{SSM}$ and $\\alpha=0$ and $\\pi\/2$ define the $T_{3L}$ and\n$Q$ benchmarks, respectively, which have larger values of $(c_d,c_u)$\nand hence larger lower bounds on the masses. Also shown are contours\nfor the E$_6$ model (with $\\chi$, $\\psi$, $\\eta$, $S$, and $N$\ncorresponding to angles 0, 0.5$\\pi$, $-0.29\\pi$, 0.13$\\pi$, and\n0.42$\\pi$, respectively) and Generalized LR models (with $R$, $B-L$,\n$LR$, and $Y$ corresponding to angles 0, 0.5$\\pi$, $-0.13\\pi$, and\n0.25$\\pi$, respectively)~\\cite{Accomando:2010fz} .\n\n\n\\begin{figure}[htbp!]\n\\begin{center}\n\\includegraphics[width=0.6\\textwidth]{figures\/plot_lhc_ee_mm_35_40_all_ALL_cms.pdf}\n\\end{center}\n\\caption{\\label{fig:CuCd}\n95\\% C.L. lower limits on the $\\cPZpr$ mass, represented by the thin continuous lines\nin the $(c_d,c_u)$\nplane. Curves for three classes of model are shown. Colours on the\ncurves correspond to different mixing angles of the generators defined\nin each model. 
For any point on a curve, the mass limit corresponding\nto that value of $(c_d,c_u)$ is given by the intersected contour.}\n\\end{figure}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\\subsection{Eulerian summation of coprimes}\n\nLet be $r \\in \\mathbb{N}$, $\\mathbf{i} \\in \\mathbb{N}^r$ and $n \\in \\mathbb{N}$. We introduce the sum $\\sum_{\\substack{ p_1 + p_2 + ... + p_r =n \\\\ p_1 \\wedge p_2 \\wedge ... \\wedge p_r= 1 }} p_1^{i_1} p_2^{i_2}... p_r^{i_r}$ with the aim to asymptotically estimate it. We denote it $S_n(\\mathbf{i})$, and $j = \\mathbf{1} \\cdot \\mathbf{i} $. Note that $S_n(i)$ is the sum of a power function over the primitive points contained in the surface of the dilates of the $(r-1)$-dimensional standard simplex. It is noteworthy that the number of such points has been computed in \\cite{DezaPournin:primitivepoint} and that more generally a coprime Ehrhart theory is introduced in \\cite{Sanyal:erhart}. \\\\\n\n\\begin{prop}\\label{eulerian_sum_notation} \n\n\\begin{align}\n \\sum_{k \\geq 0} k^j \\sum_{n \\geq 1} S_n(\\mathbf{i}) y^{nk} = \\frac { \\prod_{\\lambda = 1}^{r} A_{i_\\lambda}(y)}{(1-y) ^{j+r}} .\n\\end{align} \n\\end{prop}\n\n\\begin{proof}\n\nThis relation relies on the partition $\\mathbb{Z}_+^r =\\bigcup_{k \\geq 0} \\left\\{ kp, p \\in \\mathbb{P}^r_+ \\right\\}$. We write\n\\\\\n\n\\begin{align*}\n \\sum_{k \\geq 0 } k^{j} \\sum_{n \\geq 1} S_n(\\mathbf{i}) \\left(y^n\\right)^k &= \\sum_{k \\geq 0} \\sum_{n \\geq 1} \\left(\\sum_{\\substack{ p_1 + ... + p_r =n \\\\ p_1 \\wedge ... \\wedge p_r= 1 }}(k p_1)^{i_1}y^{kp_1} ... (k p_{r})^{i_{r}}y^{kp_{r}} \\right) \\\\\n&= \\sum_{l \\geq 1} \\left( \\sum_{\\substack{ l_1 + l_2 + ... + l_r =l\\\\ l_p \\geq 1 }}l_1^{i_1}y^{l_1} (l_2)^{i_2}y^{l_2}... 
(l_r)^{i_r}y^{l_r} \\right) \\\\\n&= \\prod_{p = 1}^{r} \\left( \\sum_{l_p \\geq 1} l_p^{i_p} y^{l_p} \\right) = \\frac { \\prod_{p = 1}^{r} A_{i_p}(y)}{(1-y) ^{j+r}} .\n\\end{align*}\n\n\\end{proof}\n\nThe $n^{th}$ Eulerian polynomial is of degree $n$ except for $A_0(y) = y$ and has no constant term, which causes the degree of $\\prod_{\\lambda = 1}^{r} A_{i_\\lambda}(y)$ to be no more than $j+r$. We write it as follows: \n\n$$\\prod_{\\lambda = 1}^{r} A_{i_\\lambda}(y) = B_{\\mathbf{i}}(y) = b_{\\mathbf{i},j+r} y^{j+r} + ... + b_{\\mathbf{i}, 1} y. $$\n\nDespite this heavy notation, the sums under study in the zonotopal case are usually $S_n(\\mathbf{0})$, $S_n(1, 0,...,0)$ or $S_n(2, 0,...,0)$, making $B_{\\mathbf{i}}(y)$ equal to $y^{d}$, $y^{d+1}$, or $y^{d+2}+ y^{d+1}$.\n\n\\subsection{Mellin transform}\n\nMellin transforms appear naturally in the asymptotic analysis of infinite products (see \\cite{Bodini:polyomino,Flajolet:AC,Gittenberger:hadmi}). In the following, the analysis of zonotopes will require applying a Mellin transform to sums over coprime tuples. The following proposition is of primary importance in the sequel:\n\\begin{prop}\\label{property2}\nFor $t> 0$, the Mellin transform of $\\sum_{n \\geq 1} S_n(\\mathbf{i}) e^{-n t}$ is \n \\begin{align}\n \\frac{1}{\\zeta(s-j)} \\left( \\sum_{n = 0}^{j + r } b_{\\mathbf{i},n} \\sum_{k \\geq 1} \\frac{\\binom{ k + j + r - n - 1}{j + r -1 }}{k^s} \\Gamma(s)\\right) .\n \\end{align} \n \\end{prop}\n\n\\begin{proof}\n\nAs we do throughout the paper, we use the change of variables $y = e^{-t}$ and will always try to write products as sums, to have an expression compatible with the Mellin transform. 
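As a side check (not part of the proof), the identity of Proposition~\ref{eulerian_sum_notation} can be verified coefficient by coefficient on truncated series. For $r=2$ and $\mathbf{i}=(2,0)$ one has $A_2(y) A_0(y) = y^2 + y^3$ and $j+r = 4$, so the right-hand side is $(y^2+y^3)/(1-y)^4$:

```python
from math import gcd, comb

def S(n, i):
    """S_n(i, 0): sum of p**i over p + q = n with p, q >= 1 and p coprime to q."""
    return sum(p ** i for p in range(1, n) if gcd(p, n - p) == 1)

def lhs_coeff(m):
    """Coefficient of y^m in sum_{k>=0} k^2 sum_{n>=1} S_n(2,0) y^{nk}
    (the k = 0 term vanishes because of the k^2 factor)."""
    return sum(k * k * S(m // k, 2) for k in range(1, m + 1) if m % k == 0)

def rhs_coeff(m):
    """Same coefficient read off (y^2 + y^3)/(1-y)^4."""
    return comb(m + 1, 3) + comb(m, 3)
```

The two coefficient sequences agree term by term, as the proposition asserts.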
This leads to:\n\n\\begin{align}\\label{sum_expression}\n \\sum_{k \\geq 0 } k^{j} \\sum_{n \\geq 1} S_n(\\mathbf{i}) \\left(e^{-n t}\\right)^k = \\prod_{\\lambda = 1}^{r} A_{i_\\lambda}(e^{-t}) \\left( \\sum_{k\\geq 0} \\binom{ k + j + r - 1}{j + r -1 } e^{-kt} \\right) = \\sum_{n = 0}^{j + r} b_{\\mathbf{i},n} \\:\\: \\sum_{k\\geq 1} \\binom{ k + j + r -n - 1}{j + r -1 } e^{-k t} .\n\\end{align}\n\nWe can now make use of the Mellin transform, which matches the exponential with the Gamma function: \n\n$$\\forall s, \\Re(s) > 0, \\:\\:\\: \\Gamma(s) = \\int_{0}^{+ \\infty} e^{-t} t^{s - 1} dt.$$\n\nUnder conditions that will be detailed just below, the Mellin transform of a function $f$ is defined as\n\n$$ \\mathcal{M}\\left[f\\left(e^{-t}\\right)\\right](s) = \\int_0^{+\\infty} f\\left(e^{-t}\\right)t^{s-1}dt.$$\n\nFor $\\Re(s) > j+r -1$, the term $ \\left|\\sum_{k\\geq1} \\frac{\\binom{ k + j + r - n - 1}{j + r -1 }}{k^s}\\right| $ is bounded, which makes the use of Fubini's theorem possible, so the Mellin transform of $\\sum_{n\\geq1}S_n(\\mathbf{i})e^{-nt}$ can be written, on the one hand:\n\n$$ \\int_{0}^{+\\infty} \\sum_{n = 0}^{j + r } b_{\\mathbf{i},n} \\:\\: \\sum_{k\\geq 1} \\binom{ k + j + r -n - 1}{j + r -1 } e^{-k t} t^{s -1} d t =\\sum_{n = 0}^{j + r } b_{\\mathbf{i},n} \\sum_{k \\geq 1} \\frac{\\binom{ k + j + r - n - 1}{j + r -1 }}{k^s} \\Gamma(s),$$\n\nand on the other hand:\n\n$$\\int_0^{+\\infty} \\sum_{k \\geq 0 } k^{j} \\sum_{n \\geq 1} S_n(\\mathbf{i}) e^{- n k t} t^{s-1} d t= \\zeta(s-j) \\int_0^{+\\infty} \\sum_{n \\geq 1} S_n(\\mathbf{i}) e^{- n t} t^{s-1} dt. 
$$\n\nThese holomorphic functions being defined and equal on $\\Re(s) > j+r-1$, we can extend them to their meromorphic continuations on $\\mathbb{C}$.\n\\end{proof}\n\nThe Mellin transform is widely used to estimate asymptotic behaviors using its correspondence with the poles of the transformed function, as we will do in the next subsection (and as it is done in \\cite{Bodini:polyomino} and \\cite{Bureaux:polygons}). Property \\ref{property2} makes it possible to compute the asymptotic equivalent of any of the sums $\\sum_{n \\geq 1} S_n(\\mathbf{i}) y^n $ near $y =1$. \n\n\n\n\\subsection{Riemann's non-trivial zeros challenge, with the example \\texorpdfstring{$\\sum_{ \\protect\\substack{ p < n \\\\ p \\wedge n = 1 }}\\protect p^2$}{TEXT} }\n\nWe use this subsection to detail more specifically the way the Mellin inversion formula is handled, as we will encounter it multiple times. Riemann's $\\zeta$ function intervenes in it, as do its non-trivial zeros. Following \\cite{Bureaux:polygons}, we will manage all the zeros in the critical strip with a single integral, but we could also write a sum over all non-trivial zeros. This term is numerically discussed in Section \\ref{numerical_discu}. \n\n\n\n\\begin{propos}\nWe denote the contour $\\gamma = \\gamma_{\\text{right}}\\cup \\gamma_{\\text{left}}$, with $\\gamma_\\text{right} = 3 - \\frac{A}{\\log(2 + |t|)} +it$ and $\\gamma_\\text{left} = 2 + \\frac{A}{\\log(2 + |t|)}+it $ as $t$ runs from $-\\infty$ to $+\\infty$, with $A >0$ such that $\\gamma$ surrounds all the zeros in the critical strip of $\\zeta(s-2)$. 
\\\n\nHereafter we will denote $$I_\\text{crit} = \\frac{1}{2i\\pi} \\int_\\gamma \\mathcal{M}[\\sum_{n \\geq 1 } S_n (2, 0) e^{- n t}](s) t^{-s} d s$$,\\\n\nand $$I_\\text{err} =\\frac{1}{2i\\pi} \\int_{\\frac{1}{2} - i\\infty}^{\\frac{1}{2}+i\\infty} \\mathcal{M}[\\sum_{n \\geq 1 } S_n (2, 0) e^{- n t}](s) t^{-s} d s$$.\\\n\nWe have for all $t>0$:\n\n\\begin{align}\n \\sum_{n \\geq 1 } S_n (2, 0) e^{- n t} = \\frac{2}{\\zeta(2) t^{4}} + I_\\text{crit}(t) + \\frac{1}{3 t^2} - \\frac{1}{90 \\zeta'(-1)}\\log(t) + I_\\text{err}(t).\n\\end{align}\n\n\nWhen $t$ goes to 0, $I_\\text{err}(t) = O(t^{1\/2})$, $I'_\\text{err}(t) = o(t^{-1})$ and the $k$-th derivative of $I_\\text{crit}$ is $o(t^{-k-1})$.\n\n\\end{propos}\n\n\\begin{proof}\nConsider the general family of sums $\\left(\\sum_{\\substack{ p < n \\\\ p \\wedge n = 1 }} p^i \\right)_{i \\geq 0}$.\\\nAs $p \\wedge n = 1$ is equivalent to $p \\wedge (n-p) = 1$, those sums can be rewritten as $\\sum_{ \\substack{ p +q = n \\\\ p \\wedge q = 1 }} p^i = S_n(i,0)$.\\\nWe fix $i = 2$. Applying properties \\ref{eulerian_sum_notation} and \\ref{property2}, we have:\n\n\\begin{align*}\n \\sum_{k \\geq 0 } k^{2} \\sum_{n \\geq 1 } S_n (2, 0) e^{- k n t} &= \\frac{A_0(e^{-t}) A_2 (e^{-t})}{(1-e^{-t})^{4}}\\\n &= (e^{-3 t} + e^{-2t}) \\sum_{k \\geq 0} \\binom{k+3}{3} e^{-k t}. 
\n\\end{align*}\n\n\nFor $t >0, s >1 $, we have the following Mellin transform: \n\n\\begin{align*}\n \\mathcal{M} \\left[ \\sum_{n \\geq 1 } S_n (2, 0) e^{- n t}\\right] (s) &= \\frac{1}{\\zeta(s-2)}\\left( \\sum_{k \\geq 1} \\frac{\\binom{k+1}{3} + \\binom{ k }{3}}{k^s} \\Gamma(s)\\right)\\\\\n &= \\frac{2 \\zeta(s-3) - 3 \\zeta(s-2) + \\zeta(s-1)}{6 \\zeta(s-2)}\\Gamma(s) .\n\\end{align*}\n\n\nTherefore we have, using the Mellin inversion formula, for all $c > 4$ (to ensure $\\sup_{s\\in c+ i \\mathbb{R}} \\zeta(s-3) < \\infty$):\n \n $$ \\sum_{n \\geq 1 } S_n (2, 0) e^{- n t} = \\frac{1}{2i\\pi} \\int_{c- i\\infty}^{c+ i\\infty} \\frac{2 \\zeta(s-3) - 3 \\zeta(s-2) + \\zeta(s-1)}{6 \\zeta(s-2)}\\Gamma(s) t^{-s} d s. $$\n\nWe use the residue theorem to shift the line of integration to the left, since $s= 4$ is a pole of the Mellin transform. It follows that, for all $c_1\\in ]3, 4[$:\n\n$$ \\sum_{n \\geq 1 } S_n (2, 0) e^{- n t} = \\frac{2}{\\zeta(2) t^{4}} + \\frac{1}{2i\\pi} \\int_{c_1- i\\infty}^{c_1+ i\\infty} \\frac{2 \\zeta(s-3) - 3 \\zeta(s-2) + \\zeta(s-1)}{6 \\zeta(s-2)}\\Gamma(s) t^{-s} d s.$$\n\nThe remaining poles of the Mellin transform are the double pole at $s = 0$, and simple ones at $s=2$ and at the non-trivial zeros of $\\zeta(s-2)$, which lie in the critical strip $2<\\Re(s)<3$, and in the left half of the complex plane. \\\n\nNow consider the contour $\\gamma$ as described in the proposition. 
The existence of $A$ such that $\\gamma$ surrounds all the zeros of $\\zeta(s-2)$ in the critical strip is a consequence of Theorem 3.8 in \\cite{Titchmarsh:riemann}.\n\n\\begin{align}\\label{riemann_crit_strip}\n I_\\text{crit}(t) = \\frac{1}{2i\\pi} \\int_\\gamma \\frac{2 \\zeta(s-3) - 3 \\zeta(s-2) + \\zeta(s-1)}{6 \\zeta(s-2)}\\Gamma(s) t^{-s} d s.\n\\end{align}\n\nFor any $c_2 \\in ]0,2[$, we have:\n\n$$ \\sum_{n \\geq 1 } S_n (2, 0) e^{- n t} = \\frac{2}{\\zeta(2) t^{4}} + I_\\text{crit}(t) + \\frac{1}{3 t^2} + \\frac{1}{2i\\pi} \\int_{c_2- i\\infty}^{c_2+ i\\infty} \\frac{2 \\zeta(s-3) - 3 \\zeta(s-2) + \\zeta(s-1)}{6 \\zeta(s-2)}\\Gamma(s) t^{-s} d s. $$\n\nFinally denoting $I_\\text{err} (t) = \\frac{1}{2i \\pi}\\int_{\\frac{1}{2} - i\\infty}^{\\frac{1}{2}+i\\infty} \\frac{2 \\zeta(s-3) - 3 \\zeta(s-2) + \\zeta(s-1)}{6 \\zeta(s-2)}\\Gamma(s) t^{-s} d s$, we have for $t>0$\n\n$$ \\sum_{n \\geq 1 } S_n (2, 0) e^{- n t} = \\frac{2}{\\zeta(2) t^{4}} + I_\\text{crit}(t) + \\frac{1}{3 t^2} - \\frac{1}{90 \\zeta'(-1)}\\log(t) + I_\\text{err}(t). $$\n\nThe bounds on the order of $I_\\text{crit}$ and $I_\\text{err}$ come from Lemma 2.2 of \\cite{Bureaux:polygons}.\n\\end{proof}\n\n\n\n\n\\subsection{Intermediate sequence of polynomials}\\label{polynome_zono}\n\nAs we make use of the Mellin inversion formula over the logarithm of the generating function, for each $\\delta$ between 1 and $d$, the coefficient $\\binom{k-1}{\\delta-1}$ from Property \\ref{property2} will appear with $S_n(\\underbrace{0,...,0}_{\\delta \\text{ times}})$. In the rest of the paper (as well as in Theorem \\ref{theorem}), we denote:\n\n$$\\sum_{\\delta = 1}^d \\binom{d}{\\delta} 2^{\\delta-1} \\binom{{k} - 1}{\\delta-1} = P_d({k}) = p_{d,d-1} {k}^{d-1} + ... 
+ p_{d,1} {k} + p_{d,0} .$$\n\nThe first terms of $\\left( P_d({X})\\right)_{d \\geq 1}$ are: \\\n\\begin{align*}\n P_1({X}) &= 1 \\\n P_2({X}) &= 2 {X}\\\n P_3({X}) &= 2 {X}^2 + 1 \n\\end{align*}\n\nThese polynomials satisfy the relation $P_{d+2}({X})= \\frac{2 {X}}{d+1}P_{d+1}({X}) + P_d({X} )$ for $d \\geq 1$. \\\n\nTwo properties naturally follow:\n\\begin{itemize}\n \\item The odd-index polynomials only contain even powers of ${X}$ and have a constant term equal to $1$. Similarly, the even-index polynomials only contain odd powers of ${X}$. \n \\item The leading term of $P_d$ is of degree $d-1$ and its coefficient is $\\frac{2^{d-1}}{(d-1)!}$.\n\\end{itemize}\n\nAs each exponent $\\delta$ of $k$ is reflected in the Riemann $\\zeta$ function as $\\zeta(s-\\delta)$ when using the Mellin transform, we introduce an operator over complex functions:\n\n\\begin{defn}\\label{definition_operator}\nLet $\\mathrm{M}(\\mathbb{C})$ be the field of meromorphic functions in $\\mathbb{C}$. We define the operator $\\mathbf{\\Pi}_d$ as\n\\begin{align*}\n \\mathbf{\\Pi}_d \\colon \\mathrm{M}(\\mathbb{C}) &\\to \\mathrm{M}(\\mathbb{C})\\\\\n \\phi &\\mapsto p_{d,d-1} \\phi(\\cdot \\: - (d-1)) + ... 
+ p_{d,1} \\phi(\\cdot \\: - 1) + p_{d,0} \\phi.\n\\end{align*}\n\\end{defn}\n\nIt is important to note that we can write, for $s\\neq 1$, $\\mathbf{\\Pi}_d (\\zeta)(s) = \\sum_{k \\geq 1} \\frac{P_d(k)}{k^s}$.\n\n\\subsection{Asymptotic analysis of the generating function}\n\n\\begin{lem}\\label{lemma_equivalent}\nLet $A$ be a positive number such that $\\gamma = \\gamma_{\\text{right}}\\cup \\gamma_{\\text{left}}$, with $\\gamma_\\text{right} = 1 - \\frac{A}{\\log(2 + |t|)} +it$ and $\\gamma_\\text{left} = \\frac{A}{\\log(2 + |t|)}+it $ as $t$ runs from $-\\infty$ to $+\\infty$, surrounds all the zeros of $\\zeta$ in the critical strip.\\\n\nFor $\\theta>0$, as $\\theta \\rightarrow 0$, we have: \n\\begin{align}\n Zon \\left(e^{-\\theta} \\right) = \\theta^{2 \\mathbf{\\Pi}_d \\left(\\zeta\\right)(0) } e^{2 \\mathbf{\\Pi}_d \\left(\\log(2\\pi) \\zeta - \\zeta' \\right)(0) } \\exp\\left( \\sum_{\\delta=1}^{d-1}\\left(p_{d,\\delta}\\frac{\\zeta(\\delta+2) \\Gamma(\\delta+1)}{\\zeta(\\delta+1) \\theta^{\\delta+1}} \\right) + I_{\\text{crit}}(d, \\theta) + I_\\text{err}(d, \\theta) \\right) , \n\\end{align}\n\nwith $$I_\\text{crit}(d,\\theta) = \\frac{1}{2i\\pi}\\int_{\\gamma}\\frac{\\mathbf{\\Pi}_d (\\zeta)(s) }{\\zeta(s)}\\zeta(s+1)\\Gamma(s) \\theta^{-s} d s $$,\\\n\nand $$I_\\text{err}(d,\\theta) = \\frac{1}{2i\\pi}\\int_{-\\frac{1}{2} - i\\infty}^{- \\frac{1}{2}+ i \\infty}\\frac{\\mathbf{\\Pi}_d (\\zeta)(s) }{\\zeta(s)}\\zeta(s+1)\\Gamma(s) \\theta^{-s} d s =O(\\theta^{1\/2})$$. 
\\\\\n\n\\end{lem}\n\n\n\\begin{proof}\nWe apply the logarithm to the generating function with the change of variables $x = e^{-\\theta}$, which gives:\\\\\n\\begin{align*}\n \\log\\left(Zon\\left(e^{-\\theta}\\right)\\right) &= - \\sum_{\\delta=1}^d \\left(\\binom{d}{\\delta}2^{\\delta-1} \\sum_{\\mathbf{v} \\in \\mathbb{P}^{\\delta}_+} \\log\\left( 1 - e^{- \\mathbf{1}\\cdot \\mathbf{v} \\theta} \\right)\\right)\\\\\n &=- d \\log(1 - e^{-\\theta}) - \\sum_{\\delta = 2}^d \\left( \\binom{d}{\\delta} 2^{\\delta-1} \\sum_{n= 2}^{+\\infty} \\left( \\sum_{\\substack{ n_{1}+...+ n_{\\delta} = n \\\\ n_{1} \\wedge ... \\wedge n_{\\delta} = 1 }} 1 \\right) \\log \\left( 1 - e^{- n \\theta} \\right) \\right). \n\\end{align*}\n\n\nThe sum $\\sum_{\\substack{ n_{1}+...+ n_{\\delta} = n \\\\ n_{1} \\wedge ... \\wedge n_{\\delta} = 1 }} 1 $ is of the form studied in Section \\ref{section_eulerset}, namely $S_n(\\overbrace{0,...,0}^{\\delta \\text{ times}})$. From a geometrical point of view, it is exactly the number of primitive points. 
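As a quick side check (our illustrative sketch, not code from the paper), the polynomials $P_d$ and the two properties stated above can be generated directly from the recurrence $P_{d+2}({X}) = \frac{2{X}}{d+1}P_{d+1}({X}) + P_d({X})$ with $P_1 = 1$, $P_2 = 2{X}$:

```python
# Illustrative sketch (not from the paper): generate P_d from the recurrence,
# storing each polynomial as its coefficient list [p_{d,0}, p_{d,1}, ...].
from fractions import Fraction

def zonotope_polys(d_max):
    """Return the coefficient lists of P_1 ... P_{d_max}."""
    P = [[Fraction(1)], [Fraction(0), Fraction(2)]]          # P_1, P_2
    for d in range(1, d_max - 1):
        # (2X/(d+1)) * P_{d+1}: shift coefficients by one degree and rescale
        shifted = [Fraction(0)] + [2 * c / (d + 1) for c in P[-1]]
        prev = P[-2] + [Fraction(0)] * (len(shifted) - len(P[-2]))
        P.append([a + b for a, b in zip(shifted, prev)])
    return P[:d_max]
```

Running this reproduces $P_3 = 2X^2 + 1$, the parity pattern of the coefficients, and the leading coefficient $2^{d-1}/(d-1)!$.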
Therefore we compute the Mellin transform accordingly (using logarithmic series), for all $\\Re(s) > d$:\\\\\n\n$$ \\mathcal{M}\\left[\\log\\left(Zon\\left(e^{-\\theta}\\right)\\right)\\right](s) = \\sum_{\\delta = 1}^d \\binom{d}{\\delta} 2^{\\delta-1} \\left(\\sum_{k=1}^{\\infty} \\frac{\\binom{k-1}{\\delta-1}}{k^s} \\right) \\frac{ \\zeta(s+1) \\Gamma(s) }{\\zeta(s)}.$$\n\nWith the notation introduced in Definition \\ref{definition_operator}, the transform simplifies to the following form:\\\\\n\\begin{propos} For every $d>1$, we have:\n\\begin{align}\n \\mathcal{M}\\left[\\log \\left(Zon \\left(e^{-\\theta}\\right)\\right)\\right] (s) =\\frac{\\mathbf{\\Pi}_d (\\zeta)(s) }{\\zeta(s)}\\zeta(s+1)\\Gamma(s).\n\\end{align}\n\\end{propos}\n\n\n\\leavevmode\\\\\n\nWe can extend this function to a meromorphic one on $\\mathbb{C}$, and apply the residue theorem successively to sweep the poles of the Mellin transform from right to left, as we did in the previous section. We start with $c> d$ and\\\\\n\n$$ \\log(Zon(e^{-\\theta})) = \\frac{1}{2i\\pi} \\int_{c - i \\infty}^{c + i \\infty} \\frac{p_{d,d-1} \\zeta(s - d +1) + ... 
+ p_{d,1} \\zeta(s - 1) + p_{d,0} \\zeta(s) }{\\zeta(s)}\\zeta(s+1)\\Gamma(s) \\theta^{-s} \\, ds,$$\n\n\\leavevmode\\\\\n\nwhich leads to \n\n\\begin{align*}\n \\log(Zon(e^{-\\theta})) = &\\sum_{\\delta=1}^{d-1}\\left(p_{d,\\delta}\\frac{\\zeta(\\delta+2) \\Gamma(\\delta+1)}{\\zeta(\\delta+1) \\theta^{\\delta+1}} \\right) + \\frac{1}{2i\\pi} \\int_{\\gamma} \\frac{\\mathbf{\\Pi}_d (\\zeta)(s) }{\\zeta(s)}\\zeta(s+1)\\Gamma(s) \\theta^{-s} ds + 2 \\mathbf{\\Pi}_d \\left(\\log(2\\pi) \\zeta - \\zeta'\\right) (0) \\\\\n &+ 2 \\mathbf{\\Pi}_d \\left(\\zeta\\right)(0) \\log(\\theta) + \\frac{1}{2i\\pi} \\int_{ -\\frac{1}{2}- i \\infty}^{- \\frac{1}{2} + i \\infty} \\frac{\\mathbf{\\Pi}_d (\\zeta)(s) }{\\zeta(s)}\\zeta(s+1)\\Gamma(s) \\theta^{-s} ds.\n\\end{align*}\n\nWe denote by $I_\\text{crit}(d,\\theta)$ and $I_\\text{err}(d,\\theta)$ the first and the second integral above, respectively. \n\n\n\\end{proof}\n\n\\begin{cor}\nThe asymptotic equivalent of $ Zon\\left(e^{-\\theta}\\right) $ is:\n\n$$ Zon \\left(e^{-\\theta} \\right) \\sim \\theta^{2 \\mathbf{\\Pi}_d \\left(\\zeta\\right)(0) } e^{2 \\mathbf{\\Pi}_d \\left(\\log(2\\pi) \\zeta - \\zeta' \\right)(0) } \n\\exp\\left( \\sum_{\\delta=1}^{d-1}\\left(p_{d,\\delta}\\frac{\\zeta(\\delta+2) \\Gamma(\\delta+1)}{\\zeta(\\delta+1) \\theta^{\\delta+1}} \\right) + I_{\\text{crit}}(d, \\theta) \\right), \\text{ as } \\theta\\rightarrow 0 . 
$$\n\n\\end{cor}\n\n\n\n\\subsection{The saddle-point equation}\n\n\\begin{lem}\\label{sadlle_equation_solution}\nDenoting $\\widetilde{\\theta}_n = \\left( \\frac{2^{d-1} \\zeta(d+1)}{\\zeta(d) n} \\right)^{\\frac{1}{d+1}}$, we have: \n\n$$\\left.\\frac{\\partial \\log \\left(Zon\\left(\\textbf{\\textit{e}}^{- \\boldsymbol{\\theta}}\\right) \\right) }{\\partial \\theta_i} \\right|_{\\boldsymbol{\\theta} = \\boldsymbol{\\widetilde{\\theta}_n}:= \\widetilde{\\theta}_n \\textbf{1} } = - n + o(n) $$\n\n$\\widetilde{\\theta}_n$ is called the asymptotic solution of the saddle-point equation.\n\\end{lem}\n\n\n\\begin{proof}\n\nWe fetch the coefficient of the multivariate series of the generating function by using Cauchy's integral formula and therefore we look for parameters $\\boldsymbol{\\widetilde{\\theta}_n} = (\\widetilde{\\theta}_{n,1},..., \\widetilde{\\theta}_{n,d})$ such that for all $ 1 \\leq i \\leq d$,\n\n\\begin{align}\\label{saddle_equation}\n \\left.\\frac{\\partial }{\\partial \\theta_i} \\left( \\log \\left(Zon\\left(\\textbf{\\textit{e}}^{- \\boldsymbol{\\theta}}\\right) \\right) + (n + 1) \\theta_{i} \\right) \\right|_{\\boldsymbol{\\theta} = \\boldsymbol{\\widetilde{\\theta}_n} } = 0.\n\\end{align}\n \n\nThe symmetry of the variables in $Zon(\\boldsymbol{\\theta})$ implies $\\theta_{n,i} = \\theta_n $ for $1 \\leq i \\leq d$. 
The system (\\ref{saddle_equation}) is then restricted to $\\theta_1$ only:\n\n\n\\begin{align*}\n \\left.\\frac{\\partial}{\\partial \\theta_1} \\log \\left(Zon\\left(\\textbf{\\textit{e}}^{- \\boldsymbol{\\theta}} \\right)\\right) \\right|_{\\substack{\\boldsymbol{\\theta} = \\boldsymbol{\\widetilde{\\theta}_n}}} &= - \\sum_{\\delta=1}^d 2^{\\delta-1} \\binom{d-1}{\\delta-1} \\sum_{\\textbf{\\textit{v}}\\in \\mathbb{P}^{\\delta}_+} v_1 \\frac{e^{- \\boldsymbol{<\\widetilde{\\theta}_n}, \\textbf{\\textit{v}}> }}{1 - e^{-<\\boldsymbol{\\widetilde{\\theta}_n}, \\textbf{\\textit{v}}> }} \\\\\n &= - \\sum_{\\delta=1}^d 2^{\\delta-1} \\binom{d-1}{\\delta-1} \\sum_{m=\\delta}^{+\\infty} \\:\\:\\:\\underbrace{\\sum_{\\substack{\\textbf{\\textit{v}}\\in \\mathbb{P}^{\\delta}_+ \\\\ <\\textbf{1},\\textbf{\\textit{v}}> = m}} v_1} _ { S_m(1, \\textbf{0}_{\\mathbb{R}^{d-1}}) } \\frac{e^{- m \\theta_n }}{1 - e^{-m \\theta_n }}\n\\end{align*}\n\\leavevmode\\\\\n\nUsing the analysis of the sums $S_m$ in Section \\ref{section_eulerset}, we have the following Mellin transform, for $\\Re(s) > d+1$:\n\n$$\\mathcal{M}\\left[\\left.\\frac{\\partial}{\\partial \\theta_1} \\log \\left(Zon \\left(\\textbf{\\textit{e}}^{- \\boldsymbol{\\theta}} \\right)\\right) \\right|_{\\substack{\\boldsymbol{\\theta} = \\boldsymbol{\\widetilde{\\theta}_n}}} \\right](s) = - \\sum_{\\delta=1}^d 2^{\\delta-1} \\binom{d-1}{\\delta-1} \\left( \\sum_{k \\geq d} \\frac{\\binom{k}{\\delta}}{k^s} \\right) \\frac{\\zeta(s) \\Gamma(s)}{\\zeta(s-1)}. 
$$\n\nThis extends to a meromorphic function on $\\mathbb{C}$, so we take $c\\in \\mathbb{C}$ with $\\Re(c) > d+1$ and use the Mellin inversion formula:\n\n\n$$\\left.\\frac{\\partial}{\\partial \\theta_1} \\log\\left(Zon \\left(\\textbf{\\textit{e}}^{- \\boldsymbol{\\theta}} \\right)\\right) \\right|_{\\substack{\\boldsymbol{\\theta} = \\boldsymbol{\\widetilde{\\theta}_n}}} = - \\frac{1}{2 i \\pi} \\int_{c - i\\infty}^{c+ i \\infty} \\sum_{\\delta=1}^d 2^{\\delta-1} \\binom{d-1}{\\delta-1} \\left( \\sum_{k \\geq d} \\frac{\\binom{k}{\\delta}}{k^s} \\right) \\frac{\\zeta(s) \\Gamma(s)}{\\zeta(s-1)} \\theta_n^{-s} d s. $$\n\nWith the residue theorem, we can shift the line of integration to the left of $\\Re(s) = d+1$. The resulting integral is $o\\left(\\frac{1}{\\theta_n^{d+1}}\\right)$, which leads to:\n\n$$\\left.\\frac{\\partial}{\\partial \\theta_1} \\log\\left(Zon \\left(\\textbf{\\textit{e}}^{- \\boldsymbol{\\theta}} \\right)\\right) \\right|_{\\substack{\\boldsymbol{\\theta} = \\boldsymbol{\\widetilde{\\theta}_n}}} = - \\frac{2^{d-1} \\zeta(d+1)}{\\zeta(d) \\theta_n^{d+1}} + o\\left(\\frac{1}{\\theta_n^{d+1}}\\right). $$\n\nGeneralizing the notation of \\cite{Bureaux:polygons}, we denote $\\kappa_d = \\frac{2^{d-1} \\zeta(d+1)}{\\zeta(d)}$. 
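Both the operator identity $\mathbf{\Pi}_d(\zeta)(s) = \sum_{k\geq 1} P_d(k)/k^s$ (taking $d=3$, where $P_3(k) = 2k^2+1$) and the constant $\kappa_d$ just defined can be checked numerically with a truncated Dirichlet series. This is our own hedged sketch, not the authors' code:

```python
# Illustrative numerical check (not from the paper).
def zeta(s, N=10_000):
    # truncated Dirichlet series plus an integral tail correction, valid for s > 1
    return sum(k ** -s for k in range(1, N + 1)) + N ** (1 - s) / (s - 1)

# Operator identity for d = 3: Pi_3(zeta)(s) = 2*zeta(s-2) + zeta(s) = sum (2k^2+1)/k^s
s = 5.0
lhs = 2 * zeta(s - 2) + zeta(s)
rhs = sum((2 * k ** 2 + 1) / k ** s for k in range(1, 200_000))

# kappa_d = 2^{d-1} zeta(d+1) / zeta(d) and the saddle-point parameter below
def kappa(d):
    return 2 ** (d - 1) * zeta(d + 1) / zeta(d)

def saddle_theta(d, n):
    return (kappa(d) / n) ** (1.0 / (d + 1))
```

For $d=2$ this gives $\kappa_2 = 2\zeta(3)/\zeta(2) \approx 1.46$.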
We finally obtain the parameter solving the saddle-point equation: \n\n\\begin{align}\n \\theta_n = \\left( \\frac{\\kappa_d}{n} \\right)^{\\frac{1}{d+1}}\n\\end{align}\n\n\\end{proof}\n\n\\subsection{H-admissibility}\n\n\\import{Parties\/Development\/}{4h_admissibility.tex} \n\n\\subsection{Asymptotic number of lattice zonotopes}\n\nAs $Zon$ is H-admissible, and given the solution of the saddle-point equation $\\widetilde{\\theta}_n$, we can write:\n\n$$ z_d(n):= \\frac{1}{(2i\\pi)^d} \\int_{[0,2\\pi]^d} Zon\\left( \\textbf{\\textit{e}}^{- \\boldsymbol{ \\widetilde{\\theta}_n}+ i \\boldsymbol{\\theta}}\\right)e^{(n+1)( d \\theta_n +i < \\textbf{1},\\boldsymbol{\\theta}>)} d \\boldsymbol{\\theta} \\underset{n \\rightarrow \\infty}{\\sim} \\frac{Zon\\left( \\textbf{\\textit{e}}^{- \\boldsymbol{ \\widetilde{\\theta}_n}}\\right)e^{ n d \\theta_n }}{\\sqrt{(2 \\pi)^d \\left|\\textbf{B}\\left( \\textbf{\\textit{e}}^{- \\boldsymbol{ \\widetilde{\\theta}_n}}\\right) \\right|}}, $$\n\nwith the notation from the H-admissibility proof ($\\textbf{B} $ is the matrix of the second partial derivatives of the generating function's logarithm). We recall that:\n\n$$ |\\textbf{B}(\\textbf{\\textit{x}}_n)| \\underset{n \\rightarrow \\infty}{\\sim} (d+1) \\left(\\frac{\\kappa_d}{ \\theta_n^{d+2}} \\right)^{d}. 
$$\n\nUsing $\\kappa_d = \\frac{2^{d-1}\\zeta(d+1)}{\\zeta(d)}$, the contour $\\gamma = \\gamma_\\text{right} \\cup \\gamma_\\text{left}$ with $\\gamma_\\text{right} = 1 - \\frac{A}{\\log(2 + |t|)} +it$ and $\\gamma_\\text{left} = \\frac{A}{\\log(2 + |t|)}+it $, $A>0$ such that $\\gamma$ surrounds the critical strip of the $\\zeta$ function, and the operator $\\mathbf{\\Pi}_d $ defined in Definition~\\ref{definition_operator}, we finally have: \n\n\\begin{align}\\label{final_result}\n z_d(n) \\underset{n \\rightarrow \\infty}{\\sim} \\alpha_d n ^{\\beta_d} \\exp \\left( Q_d(n^\\frac{1}{d+1}) + J_{\\text{crit}}(d, n) \\right),\n\\end{align}\n\n\\begin{align*}\n \\text{with } \n &\\alpha_d = \\frac{\\kappa_d^{\\frac{d}{2(d+1)} + \\frac{2}{d+1} \\mathbf{\\Pi}_d (\\zeta)(0) } \\exp \\left( 2\\mathbf{\\Pi}_d \\left(\\log(2\\pi)\\zeta - \\zeta' \\right)(0) \\right) }{ (2\\pi)^{d\/2 } \\sqrt{d+1}},\\\\\n &\\beta_d = \\frac{-1}{d+1}\\left(d(d+2)\/2 + 2 \\mathbf{\\Pi}_d \\left(\\zeta\\right) (0) \\right),\\\\\n &Q_d(X) = (d+1) \\kappa_d^{\\frac{1}{d+1}} X^d + \\sum_{\\delta=2}^{d-1} p_{d,\\delta-1}\\frac{\\zeta(\\delta+1) (\\delta-1)!}{\\zeta(\\delta) }\\kappa_d^{-\\frac{\\delta}{d+1}} X^{\\delta}, \\\\\n &J_{\\text{crit}}(d, n) = \\left.\\frac{1}{2i\\pi}\\int_{\\gamma}\\frac{\\mathbf{\\Pi}_d (\\zeta)(s) }{\\zeta(s)}\\zeta(s+1)\\Gamma(s) t^{-s} d s \\right|_{t = \\left( \\frac{\\kappa_d}{n} \\right)^{ \\frac{1}{d+1}}}.\n\\end{align*}\n\nNote that $J_{\\text{crit}}(d, n)$ is just the evaluation of $I_{\\text{crit}}(d, \\theta)$ at the saddle-point value $\\theta= \\left( \\frac{\\kappa_d}{n} \\right)^{ \\frac{1}{d+1}}.$\n\n\n\n\n\n\\leavevmode\\\\\n\\subsection{Order of magnitude of $J_{\\text{crit}}$}\nThroughout this paper, we have left $J_{\\text{crit}}$ in its integral form, but we could also write it as an infinite sum of residues, as we did in the introduction, with $r$ ranging over the non-trivial zeros of $\\zeta$: \n\n$$ \\sum_{r} \\operatorname{Res}_{s=r}\\left(\\frac{1}{\\zeta(s)}\\right) 
\\mathbf{\\Pi}_d (\\zeta)(r) \\, \\zeta(r+1)\\Gamma(r) \\left( \\frac{\\kappa_d}{n} \\right)^{ \\frac{-r}{d+1}} .$$\n\nThis sum is in practice invisible for any computationally accessible $n$. The density of the zeros $r$ has been tightly bounded since Selberg's work in the early 1940s \\cite{Selberg:Riemann}. Due to this and to the exponential decay of the $\\Gamma$ function along vertical lines, the value of $J_{\\text{crit}}$ is, to a good approximation, the first term of the sum. It is composed of the term at the first non-trivial zero, approximately at $r = \\frac{1}{2} + 14.13472i$, and of the term at its conjugate, which gives for respectively $J_{\\text{crit}}(2, n)$, $J_{\\text{crit}}(3, n)$, and $J_{\\text{crit}}(4, n)$ the following approximations:\n\n\\begin{align}\\label{numeric_I_crit}\n J_{\\text{crit}}(2, n) &\\approx -1.3579 {\\scriptstyle\\times} 10^{-10}n^{1\/6}\\cos\\left(4.7116\\ln(0.6842 n)\\right) - 1.4236{\\scriptstyle\\times}10^{-9}n^{1\/6}\\sin\\left(4.7116\\ln(0.6842n)\\right),\\nonumber\\\\\n J_{\\text{crit}}(3, n) &\\approx -1.2325 {\\scriptstyle\\times}10^{-10}n^{1\/8}\\cos\\left(3.5337\\ln(0.2777n)\\right) - 1.2921 {\\scriptstyle\\times}10^{-9}n^{1\/8}\\sin\\left(3.5337\\ln(0.2777n)\\right),\\\\\n J_{\\text{crit}}(4, n) &\\approx -3.1764{\\scriptstyle\\times}10^{-9}n^{1\/10}\\cos\\left(2.8269\\ln(0.1305n)\\right) - 8.0628{\\scriptstyle\\times}10^{-9}n^{1\/10}\\sin(2.8269\\ln(0.1305n)). \\nonumber\n\\end{align}\n\n\n\n\nThe size of these coefficients mainly relies on $\\Gamma(r)$. The second non-trivial zero of $\\zeta$ lies approximately at $r= \\frac{1}{2} + 21.02203 i$, whose contribution is smaller by a factor of order $10^{-4}$ in the first dimensions. 
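To make the orders of magnitude concrete, the $d=2$ constants of \eqref{final_result} and the $\Gamma$-driven size of $J_{\text{crit}}$ can be evaluated numerically. This is our illustrative sketch, not the authors' code: it uses the classical special values $\zeta(-1)=-1/12$ and $\zeta'(-1)\approx-0.16542$, and the reflection-formula identity $|\Gamma(\tfrac{1}{2}+it)|^2=\pi/\cosh(\pi t)$; the numerical bounds below are our own checks.

```python
import math

# d = 2: P_2(X) = 2X, so Pi_2(phi)(0) = 2*phi(-1).  Classical values used:
# zeta(-1) = -1/12, zeta'(-1) ~ -0.1654211437, zeta(2) = pi^2/6, zeta(3) ~ 1.2020569.
zeta_m1, zeta_prime_m1 = -1.0 / 12.0, -0.1654211437
kappa_2 = 2 * 1.2020569031595942 / (math.pi ** 2 / 6)

Pi2_zeta_0 = 2 * zeta_m1                                       # Pi_2(zeta)(0) = -1/6
Pi2_psi_0 = 2 * (math.log(2 * math.pi) * zeta_m1 - zeta_prime_m1)

d = 2
beta_2 = -(d * (d + 2) / 2 + 2 * Pi2_zeta_0) / (d + 1)         # = -11/9
alpha_2 = (kappa_2 ** (d / (2 * (d + 1)) + 2 * Pi2_zeta_0 / (d + 1))
           * math.exp(2 * Pi2_psi_0)
           / ((2 * math.pi) ** (d / 2) * math.sqrt(d + 1)))

# Size of J_crit: |Gamma(1/2 + i t)| = sqrt(pi / cosh(pi t)), evaluated at the
# imaginary parts of the first two non-trivial zeros of zeta.
def abs_gamma_half(t):
    return math.sqrt(math.pi / math.cosh(math.pi * t))

g1 = abs_gamma_half(14.134725)   # sets the 1e-10 .. 1e-9 scale seen above
g2 = abs_gamma_half(21.022040)   # second zero: several orders of magnitude smaller
```

With $|\Gamma|$ of order $10^{-10}$ at the first zero, the oscillatory correction is numerically negligible for any accessible $n$; the $\Gamma$ factor alone suppresses the second zero's term by roughly four to five further orders of magnitude.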
\n\n\\begin{comment}\nIt becomes obvious when we plot the quotient $\\frac{z_2(n)}{\\alpha_2 n ^{\\beta_2} \\exp(Q_2(n^{1\/3}))}$, as follows:\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{images\/zonogones convergence asymptotique.png}\n \\caption{Quotient of $z_2(n)$ and $\\alpha_2 n ^{\\beta_2} \\exp(Q_2(n^{1\/3}))$. }\n \\label{fig:my_label}\n\\end{figure}\n\n\\end{comment}\n\n\\subsection{About moving up in dimension}\n\nFor a more down-to-earth analysis of the logarithmic equivalent, i.e.\\ the leading term of the exponent, we can use the expansion $\\zeta(d) = 1+ 2^{-d} + o(2^{-d})$ as $d$ grows large. One can see that: \n\n$$ \\left(\\frac{2^{d-1}\\zeta(d+1)}{\\zeta(d)} \\right)^{\\frac{1}{d+1}} = 2 + O\\left(\\frac{1}{d}\\right), \\:\\:\\:\\:\\:\\:\\:\\:\\text{ as } d \\rightarrow + \\infty.$$\n\nTherefore, in higher dimensions, the logarithm of $z_d(n)$ is nearly equivalent to $ 2(d+1) n^{\\frac{d}{d+1}} $.\\\\\n\n\n\n\n\\subsection{Generalization to rectangular boxes}\nOur result generalizes to rectangular boxes $[0,n_1]\\times...\\times[0,n_d]$ with $n_i = \\alpha_i n$ (each $\\alpha_i$ a positive number). In order to lighten the results, and because the proof is very similar to what has been done before, we only write the formula here with a sketch of the proof. One can refer to \\cite{Bogatchev:limit} for a detailed generalization in dimension 2. \n\n\\begin{thm}\n\\end{thm}\n\n\\begin{proof}\nThe saddle-point equation differs depending on the coordinate. This yields the following result:\n\n\\begin{lem}\nDenoting $\\boldsymbol{\\widetilde{\\theta}_n} = \\left( \\widetilde{\\theta}_n^i\\right)$ with $\\widetilde{\\theta}_n^i = \\frac{\\prod_k \\alpha_k^{1\/(d+1)}}{\\alpha_i} \\widetilde{\\theta}_n$ for each $i$, $\\boldsymbol{\\widetilde{\\theta}_n}$ is the solution of the saddle-point equation, i.e. 
for each $i$ we have\n\n$$\\left.\\frac{\\partial \\log\\left(Zon\\left(\\boldsymbol{\\theta}\\right)\\right) }{\\partial \\theta_i} \\right|_{\\boldsymbol{\\theta} = \\boldsymbol{\\widetilde{\\theta}_n}} = - \\alpha_i n + o(n) .$$\n\\end{lem}\n\nThen the univariate generating function that has to be asymptotically approximated is \n\n$$Zon\\left(e^{-\\frac{t}{\\alpha_1} },..., e^{- \\frac{t}{\\alpha_d} }\\right) $$\n \n \n\\end{proof}\n\\end{comment} \n\n\\begin{comment}\n\\subsection{On polytopes, zonotopes and probabilities}\n\nAs said in the introduction, this work falls within the framework of counting polytopes in convex bodies. Here, the asymptotic number of zonotopes was established in hypercubes only, which calls for attempts to extend this result to more general bodies, such as rectangular boxes, cones, etc. \n\nConcerning combinatorial parameters that come with zonotopes, one could wonder whether the number of $\\delta$-dimensional faces of a $d$-dimensional zonotope (aggregated into the f-vector) can be studied with this method (i.e.\\ the mean number of $\\delta$-dimensional faces).\n\nFinally, the estimation of zonotopes was addressed from a combinatorial point of view but, as mentioned previously, the same equations appear when we lay a Boltzmann-like distribution over sets of generators and estimate the number of zonotopes through a local limit theorem (see \\cite{Bureaux:polygons}). In that case, it is interesting to study the deformation of the limit shape of the zonotopes and the evolution of the parameters we computed while varying the probability distribution. 
\\\\ \n\n\\end{comment}\n\n\n\n\n\n\n\\section{Class of zonotopes and generating function}\\label{section_combi}\n\\import{Parties\/Development\/}{1zonotopes_generating_function.tex}\n\\leavevmode\\\\\n\n\\section{Study of sums of coprime numbers}\\label{section_eulerset}\n\\import{Parties\/Development\/}{2eulerian_poly.tex}\n\\leavevmode\\\\\n\n\\section{Analysis of the generating function}\n\\import{Parties\/Development\/}{3analysis.tex}\n\\leavevmode\\\\\n\n\\section{Proof of the Theorem}\\label{proof_main}\n\\import{Parties\/Development\/}{5proof_theorem.tex}\n\\leavevmode\\\\\n\n\\section{Numerical elements about the speed of convergence to the asymptotic regime}\\label{numerical_discu}\n\\import{Parties\/Development\/}{6comments.tex}\n\\leavevmode\\\\\n\n\n\\section{The average diameter of Zonotopes}\n\\import{Parties\/Development\/}{7properties.tex}\n\\leavevmode\\\\\n\n\\section{Acknowledgments}\n\\import{Parties\/Development\/}{8acknowledgment.tex}\n\\leavevmode\\\\\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn quiescent regions irradiated by cosmic-- or X--rays, the oxygen chemistry is initiated by the charge transfer from\nH$^+$ and H$_{3}^{+}$ to atomic oxygen, forming O$^+$ and OH$^+$. \nIn warm environments it can also start\nwith the reaction of atomic oxygen with H$_2$($\\nu$=0) to form OH.\nThis endothermic reaction (by $\\sim$0.08\\,eV) possesses an activation barrier \nof a few thousand~K and high gas temperatures ($\\gtrsim$400\\,K)\nare needed to produce significant OH abundances (\\textit{e.g.,}~in shocked gas).\nIn molecular clouds exposed to strong far-ultraviolet (FUV) \n radiation fields, the so--called PDRs,\nthe gas is heated to relatively high temperatures and there are also high\nabundances of FUV-pumped vibrationally excited molecular hydrogen H$_{2}^{*}$($\\nu$$=$1,2...) \\citep{hol97}. 
\nThe internal energy available in H$_{2}^{*}$ \ncan be used to overcome the O($^3P$)~+~H$_2$($\\nu$=0) reaction barrier \\citep[see][and references therein]{agu10}.\nAlthough not well constrained observationally, enhanced OH abundances are expected in warm~PDRs. \n\nOH is a key intermediary molecule in the FUV-illuminated gas because further reaction of OH with \nH$_2$, C$^+$, O, N or S$^+$ leads to the formation of H$_2$O, CO$^+$, O$_2$, NO or SO$^+$ respectively. \nBesides, OH is the product of H$_2$O photodissociation,\nthe main destruction route of water vapour in the gas unshielded against FUV radiation.\nObservations of OH in specific environments thus constrain different chemical routes of the oxygen chemistry.\n\nPrevious observations with {\\em KAO} and {\\em ISO} \nhave demonstrated that OH is a powerful tracer of the warm neutral gas in shocked gas;\nfrom protostellar outflows and supernova remnants to extragalactic nuclei \\citep[\\textit{e.g.},][]{sto81,mel87,gon04}.\nUnfortunately, the poor angular resolution ($>$1$'$) and sensitivity of the above telescopes prevented us\nfrom resolving the OH emission from interstellar PDRs. 
\n\n\n\n\\begin{figure*}[t]\n\\vspace{-0.1cm}\n\\resizebox{\\hsize}{!}{\\includegraphics[angle=-90]{16977f1.eps}}\n\\caption{OH (X $^2$$\\Pi$; $\\nu$=0) rotational lines detected with \\textit{Herschel}\/PACS towards the \n$\\alpha_{2000}$$\\simeq$5$^h$35$^m$21.9$^s$, $\\delta_{2000}$$\\simeq$$-$5$^o$25$'$06.7$''$ position\nwhere the higher excitation OH line peak is observed (see Figure~\\ref{fig:OH_maps}).\nThe red dotted lines show the expected wavelength for each $\\Lambda$-doublet\n (see Figure~\\ref{fig_levels} in the appendix~\\ref{sec:appenixB} for a complete OH rotational energy diagram).\nSmall intensity asymmetries are observed in most OH $\\Lambda$--doublets.\nTransition upper level energies and $A_{ij}$ spontaneous transition probabilities\n are indicated.} \n\\label{fig:OH_lines}\n\\end{figure*}\n\n\nOwing to its proximity \nand nearly edge-on orientation, \nthe interface region between the Orion Molecular Cloud~1 (OMC1) and the \nH\\,{\\sc ii} region illuminated by the Trapezium cluster, \nthe Orion Bar,\nis the prototypical warm PDR \\citep[with a FUV radiation field\n at the ionization front of $\\chi$$\\simeq$2.5$\\times$10$^4$ times the mean interstellar \nfield in Draine units;][]{mar98}.\nThe most commonly accepted scenario is that \nan extended gas component, with mean gas densities $n_H$ of 10$^{4-5}$\\,cm$^{-3}$,\ncauses the chemical stratification seen in the PDR \\citep{hog95}. \nMost of the low-$J$ molecular line emission arises in this extended ``interclump'' medium \\citep{tie85,sim97,wie09,hab10}.\nIn addition, another component of higher density clumps\n was introduced to fit\nthe observed H$_{2}$, high--$J$~CO, CO$^+$ and other high density and temperature tracers \n\\citep{bur90,par91,sto95,wer96,you00}.\nOwing their small filling factor this clumpy structure would allow FUV radiation to permeate the region.\nThe presence of dense clumps is still controversial. 
\n\nIn this letter we present initial results from a spectral scan of the Orion Bar\ntaken with the PACS instrument \\citep{pog10} on board $\\textit{Herschel}$ \\citep{pil10} \nas part of the ``HEXOS'' key programme \\citep{ber10}. \nPACS observations of OH lines towards young stellar objects have recently been reported\nby Wampfler et al. \\citep{wam10}.\nHere we present the first detection of OH towards this\nprototypical PDR. \n\n\n\n\n\n\\vspace{-0.2cm}\n\\section{Observations and data reduction}\n\nPACS observations were carried out on 7 and 8 September 2010\nand consist of two spectral scans in Nyquist sampling wavelength\nrange spectroscopy mode.\nThe PACS spectrometer uses photoconductor detectors and provides 25 spectra over a 47$''$$\\times$47$''$ field-of-view\nresolved in 5$\\times$5 spatial pixels (``spaxels''), each with a size of $\\sim$9.4$''$$\\times$9.4$''$ on the sky.\nThe measured width of the spectrometer point-spread function (PSF) is relatively constant at\n$\\lambda$$\\lesssim$125\\,$\\mu$m but it increases above the spaxel size for longer wavelengths. \nThe resolving power varies between $\\lambda$\/$\\Delta$$\\lambda$$\\sim$1000 (R1 grating order) and $\\sim$5000 \n(B3A). \nThe central spaxel was centred at \n$\\alpha_{2000}$: 5$^h$35$^m$20.61$^s$, $\\delta_{2000}$: -5$^o$25$'$14.0$''$ target position.\nObservations were carried out in the ``chop-nodded'' mode with the largest chopper throw of 6~arcmin.\nNominal capacitances (0.14\\,pF) were used.\nThe integration time was 3.2\\,h for the 1342204117 observation (B2B and R1) \n and 2.7\\,h for the 1342204118 observation (B3A and R1). 
\nThe data were processed with HIPE using a pipeline upgraded with a spectral flatfield\nalgorithm that reduces the spectral fringing seen in the Nyquist-sampled wavelength spectra of bright sources.\nFigure~\\ref{fig:OH_lines} shows the resulting OH lines towards the OH emission peak and Figure~\\ref{fig_correlations} \nshows the \nintensities measured in each of the 25 spaxels for several lines of OH, CO, CH$^+$, H$_2$O and [N\\,{\\sc{ii}}]. \nIn order to better sample the PSF and obtain accurate line intensities to be reproduced with our models, \nwe fit the OH emission averaged over several adjacent spaxels in \nSection~\\ref{sec:excitation} (see also appendix~\\ref{sec:appenixA}).\n\n\n\\begin{figure}[h]\n\\vspace{-0.2cm}\n\\includegraphics[width=8.cm, angle=-90]{16977f2.eps}\n\\caption{PACS rotationally excited\n OH $^{2}\\Pi_{3\/2}$ $J$=7\/2$\\rightarrow$5\/2 lines at $\\sim$84\\,$\\mu$m ($E_u$\/$k$=291\\,K)\noverlaid on the\ndistribution of the CO $J$=6-5 peak brightness temperature (colour image)\nobserved with the CSO telescope at $\\sim$11$''$ resolution \\citep{lis98}.\nWhite contours show the brightest regions of H$_{2}^{*}$ $v$=1-0 $S$(1) emission \\citep{wal00}. \nLower intensity H$_{2}^{*}$ extended emission is\npresent in the entire field \\citep{wer96}.\nViolet stars shows the position of the H$^{13}$CN $J$=1-0 clumps deeper inside the Bar\n\\citep{lis03}. Note the decrease of OH line intensity\nwith distance from the ionization front.}\n\\label{fig:OH_maps}\n\\end{figure}\n\n\n\\vspace{-0.2cm}\n\\section{Results}\n\nOf all the observed OH lines only the ground-state lines at $\\sim$119\\,$\\mu$m show widespread bright emission \nat all positions. 
The $\\sim$119\\,$\\mu$m lines mainly arise from the background OMC1 complex\n(the same applies to most ground-state lines of other species).\nFigure~\\ref{fig:OH_maps} shows the spatial distribution of the\nrotationally excited OH $^{2}\\Pi_{3\/2}$ $J$=7\/2$\\rightarrow$5\/2 lines at $\\sim$84\\,$\\mu$m ($E_u$\/$k$=291\\,K)\nsuperimposed over the CO $J$=6-5 peak brightness temperature \\citep[colour image from][]{lis98} \nand over the brightest H$_{2}^{*}$ $\\nu$=1-0 $S$(1) line emission regions\n\\citep[white contours from][]{wal00}.\nThe emission from the other OH $\\Lambda$-doublets at $\\sim$79 and $\\sim$163\\,$\\mu$m \n(see Figure~\\ref{fig:OH_maps2} in appendix~\\ref{sec:appenixB}) \ndisplays a similar spatial distribution that follows the ``bar'' morphology peaking\nnear the H$_{2}^{*}$~$v$=1-0 $S$(1) bright emission region and then decreases with distance from the\nionization front \\citep[note that H$_{2}^{*}$ shows lower level extended\nemission with small-scale structure in the entire observed field;][]{wer96}.\nThe excited OH spatial distribution, however, does not follow the CO~$J$=6-5 emission maxima, \nwhich approximately trace the gas temperature \nin the extended ``interclump'' component.\n\n\n\n\n\nFigure~\\ref{fig:OH_lines} shows the detected OH $\\Lambda$-doublets \n(at $\\sim$65, $\\sim$79, $\\sim$84, $\\sim$119 and $\\sim$163\\,$\\mu$m) towards the \nposition where the higher excitation OH lines peak. 
\nThe total intensity of the observed FIR lines \n is $\\sum$$I$(OH)$\\simeq$5$\\times$10$^{-4}$~erg\\,s$^{-1}$\\,cm$^{-2}$\\,sr$^{-1}$.\n All OH doublets appear in emission, with intensity asymmetry ratios up to 40\\% \n(one line of the $\\Lambda$-doublet is brighter than the other).\n\n\nNote that the upper energy level of the $^{2}\\Pi_{3\/2}$~$J$=9\/2$\\rightarrow$7\/2 transition \nat $\\sim$65\\,$\\mu$m lies at E$_u$\/$k$$\\sim$511\\,K.\nThe critical densities ($n_{cr}$) of the observed OH transitions are high,\n$n_{cr}$$\\gtrsim$10$^{8}$\\,cm$^{-3}$. For much lower gas densities, and in the presence of strong\nFIR radiation fields,\nmost lines would have been observed in absorption, especially those in the $^{2}\\Pi_{3\/2}$ ladder \\citep{goi02}.\nHence, the observed OH lines must\narise in an widespread component of warm and dense gas.\n\n\n\nAlthough our PACS observations do not provide a fully sampled map,\n the line emission observed in the 25 spaxels \ncan be used to carry out a first-order analysis on the spatial correlation of different\n lines (neglecting perfect PSF sampling, line opacity and excitation effects).\nExcept for the OH ground-state lines at $\\sim$119\\,$\\mu$m (that come from the background OMC1 cloud),\nwe find that the rotationally excited OH lines correlate well with the high-$J$ CO and CH$^+$ emission\nbut, as expected, they do not correlate with the ionized gas emission.\nFigure~\\ref{fig_correlations} (\\textit{lower panel}) compares the observed OH~$\\sim$84.597\\,$\\mu$m\nline intensities with those of \nCO $J$=21-20 ($E_u$\/$k$$\\sim$1276\\,K), CH$^+$ $J$=3-2 ($E_u$\/$k$$\\sim$240\\,K)\nand [N\\,{\\sc{ii}}]121.891\\,$\\mu$m (all observed with a similar PSF)\nand also with the CO $J$=15-14 ($E_u$\/$k$$\\sim$663\\,K), H$_2$O 3$_{03}$-2$_{12}$ ($E_u$\/$k$$\\sim$163\\,K) and\nOH~$\\sim$163.397\\,$\\mu$m lines (\\textit{upper panel}).\nThis simple analysis suggests\nthat the excited OH, high-$J$ CO and CH$^+$ $J$=3-2 lines arise 
from the same gas component.\nIt also shows that the emission from different excited OH lines is well correlated,\nwhile the OH and H$_2$O emission is not (within the PSF sampling caveats).\n\n\n\n\\vspace{-0.2cm}\n\\section{OH column density determination}\n\\label{sec:excitation}\n\nDetermining the OH level populations is no trivial excitation problem.\nIn addition to relatively strong asymmetries in the collisional \nrate coefficients\\footnote{We used \ncollision rate coefficients of OH with $para$- and $ortho$-H$_2$ \nfrom Offer \\& van Dishoeck \\citep{off92} and Offer et al. \\citep{off94}.\nStrong differences in the intensity of\neach OH $\\Lambda$-doublet component due to asymmetries in the collisional\nrates between OH and $para$-H$_2$ were predicted\n (\\textit{e.g.,}~$I$(84.597)\/$I$(84.420)$>$$I$(119.441)\/$I$(119.234)$>$1).\nAsymmetries are significantly reduced when collisions with $ortho$-H$_2$ are included\n (\\textit{i.e.,}~in the warm gas) and when FIR radiative pumping plays a role.\nWe assume that the H$_2$ $ortho$-to-$para$ ratio is thermalized to the gas temperature, \n\\textit{e.g.,}~$\\sim$1.6 at 100\\,K and $\\sim$2.9 at 200\\,K.} between\neach $\\Lambda$-doubling component (Offer \\& Van Dishoeck 1992),\nradiative and opacity effects (pumping by the ambient IR radiation field and line trapping) can play a significant role\nif the gas density is much lower than~$n_{cr}$.\nHere we use a nonlocal\nand non-LTE code that treats both the OH line \nand continuum radiative transfer \\citep[see appendix in][]{goi06}.\nThe continuum measured by PACS and SPIRE in the region (H. Arab et al. 
in prep.)\n can be approximately reproduced by a modified blackbody with a colour temperature of $\\sim$55\\,K and an\nopacity dependence of $\\sim$0.05(100\/$\\lambda$)$^{1.75}$ \\citep[see][]{lis98}.\nOur calculations include thermal, turbulent, and\nopacity line broadening with a turbulent velocity dispersion of\n$\\sigma$=1.0\\,km\\,s$^{-1}$ and FWHM=2.355$\\times\\sigma$\n\\citep[see \\textit{e.g.}, the linewidths measured by][]{hog95}. \nA grid of single-component models for different $N$(OH), gas temperatures, densities and beam filling factors \nwas run. \nThe best-fitting model was found by minimizing the ``$\\chi^2$-value\" (see the appendix~\\ref{sec:appenixA} for its definition).\n\n \n\nFrom the excitation models we conclude that the\nhigh $I$(65.279)\/$I$(84.596)$\\simeq$0.3 and $I$(65.132)\/$I$(84.420)$\\simeq$0.5 intensity ratios in the $^{2}\\Pi_{3\/2}$ ladder\ncan only be reproduced if the gas is dense, with $n_H$=$n$(H)+2$n$(H$_2$) of at least a few 10$^6$\\,cm$^{-3}$.\nIn addition, no model is able to produce even a crude fit to the data if one assumes \nthe average molecular gas temperature (T$_k$$\\simeq$85\\,K) in the lower density ``interclump'' medium \\citep{hog95}.\nIn the $^{2}\\Pi_{1\/2}$ ladder, the intensity of the $\\sim$163\\,$\\mu$m lines is sensitive to the \nFIR radiation field in the region through the absorption of FIR dust continuum photons in the\nOH\\,$\\sim$34 and $\\sim$53\\,$\\mu$m cross-ladder transitions ($^{2}\\Pi_{3\/2}$-$^{2}\\Pi_{1\/2}$ $J$=3\/2$\\rightarrow$5\/2\nand $J$=3\/2$\\rightarrow$3\/2 respectively). \nHowever, the $\\sim$163\\,$\\mu$m lines in the Orion Bar are not particularly strong and the OH~$\\sim$34 and $\\sim$53\\,$\\mu$m lines\nare not present in the ISO spectra, \nthus FIR pumping does not dominate the OH excitation. \nAll in all, the best model is found for a source of high density ($n_H$$\\lesssim$10$^7$~cm$^{-3}$)\nand warm gas temperatures (T$_{k}$=160-220\\,K). 
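For orientation, the quoted continuum model (a $\sim$55\,K blackbody scaled by $\tau(\lambda)\simeq 0.05\,(100\,\mu$m$/\lambda)^{1.75}$) can be sketched as follows; the optically thin form $\tau\,B_\nu$ is our own assumption for illustration, not the authors' radiative-transfer code:

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI: Planck, c, Boltzmann

def planck_nu(nu_hz, T):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    return (2 * H * nu_hz ** 3 / C ** 2) / math.expm1(H * nu_hz / (KB * T))

def continuum(lambda_um, T=55.0):
    """Modified blackbody tau(lambda) * B_nu(T), optically thin approximation."""
    tau = 0.05 * (100.0 / lambda_um) ** 1.75   # opacity law quoted in the text
    nu = C / (lambda_um * 1e-6)
    return tau * planck_nu(nu, T)
```

The resulting SED peaks in the far-infrared near $\sim$100\,$\mu$m, as expected for $\sim$55\,K dust.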
\nThis temperature lies in between\nthe $\\sim$600\\,K derived in the H$_2$ emitting regions \\citep{all05}\nand the $\\sim$150\\,K derived from NH$_3$ lines \\citep{bat03} \n near the ridge of dense and cooler H$^{13}$CN clumps \\citep[T$_k$$\\simeq$50\\,K;][]{lis03}. \nIn our simple model, the best fit is obtained for a source of small filling factor ($\\eta\\simeq$10$\\%$)\nwith a source-averaged OH column density of $\\gtrsim$10$^{15}$\\,cm$^{-2}$ (see~appendix~\\ref{sec:appenixA}).\n\n\\begin{figure} [t]\n\\vspace{-0.0cm}\n\\resizebox{\\hsize}{!}{\\includegraphics[angle=-90]{16977f3.eps}}\n\\caption{Line intensity spatial correlations between the OH \n$^{2}\\Pi_{3\/2}$ $J$=7\/2$^-$$\\rightarrow$5\/2$^+$ line at 84.597\\,$\\mu$m \nand lines from other species.\nIntensities are in units of 10$^{-5}$\\,erg\\,cm$^{-2}$\\,s$^{-1}$\\,sr$^{-1}$.}\n\\label{fig_correlations}\n\\end{figure}\n\n\n\\vspace{-0.2cm}\n\\section{OH chemistry and line emission origin}\n\\label{sec:PDRmods}\n\nWe used the Meudon PDR code \\citep{lpt06,goi07} to estimate the OH column \ndensity in a slab of gas at different densities ($n_H$ from 5$\\times$10$^4$ to 10$^7$\\,cm$^{-3}$). \nThe adopted FUV radiation field, $\\chi$=10$^4$, roughly corresponds to the attenuation\nof the FUV field at the ionization front by a column density of $N_H$$\\simeq$10$^{21}$\\,cm$^{-2}$ in\na $n_H$$\\simeq$10$^4$\\,cm$^{-3}$ medium. 
This attenuation is equivalent to a spatial length\nof $\\sim$10$^{17}$\\,cm, consistent with the observed decrease of excited\n OH emission with projected distance from the ionization front (see Figure~\\ref{fig:OH_maps}).\nGiven the high gas temperature, FUV field and moderate grain temperature \n(T$_{gr}$$\\sim$70-100\\,K) in the regions traced by \nFIR OH, CO and CH$^+$ lines, we neglected molecule freeze-out and ice desorption\n\\citep[which are important deeper inside at A$_V$$\\gtrsim$3;][]{hol09}.\nIn our models OH is a surface tracer that reaches its peak abundance at A$_V$$\\lesssim$1\n(Figure~\\ref{fig:pdr_mod}) where OH formation is driven by the endothermic reaction O($^3P$)~+~H$_2$~$\\rightarrow$~OH~+~H, \nslightly enhanced by the O($^3P$)~+~H$_{2}^{*}$ reaction \n\\citep[included in our models; see][]{agu10}.\nGas temperatures around $\\sim$1000-500\\,K are predicted near the slab surface at A$_V$=0.01\nand around $\\sim$100\\,K at A$_V$$\\gtrsim$1. In these H\/H$_2$ transition layers where the OH abundance peaks,\nthe electron density is still high ($\\lesssim$[C$^+$\/H]\\,$n_H$) and hydrogen is not fully molecular, \nwith $f$(H$_2$)=2$n$(H$_2$)\/[$n$(H)+2$n$(H$_2$)]$\\simeq$0.5.\nIn general, the higher the gas temperature where enough H$_2$ has formed, \nthe higher the predicted OH abundance.\n\n\nIn the A$_V$$\\lesssim$1 layers, OH destruction is dominated by photodissociation \n(OH~+~$h\\nu$~$\\rightarrow$~O~+~H)\nand to a lesser extent, by reactions of OH with H$_2$ to form H$_2$O \n(only when the gas temperature and density are very high).\nWater vapour photodissociation (H$_2$O~+~$h\\nu$~$\\rightarrow$~OH~+~H) in the surface layers limits the H$_2$O formation\n and leads to OH\/H$_2$O abundance ratios ($>$1), much higher than those expected in equally warm regions without \nenhanced FUV radiation fields (\\textit{e.g.} in C--shocks).\nThe lack of apparent correlation between the excited OH and H$_2$O 3$_{03}$-2$_{12}$ lines \n(see 
Figure~\\ref{fig_correlations}) and the\nabsence of high-excitation H$_2$O lines in the PACS spectra (only weak H$_2$O 2$_{21}$-2$_{12}$, 2$_{12}$-1$_{01}$ \nand 3$_{03}$-2$_{12}$ lines are clearly detected) suggest\nthat the bulk of the OH and H$_2$O column densities arise from different depths.\n\nAs the temperature decreases inwards, the gas-phase production of OH also decreases.\nThe spatial correlation between excited OH, CH$^+$ $J$=3-2 and high-$J$ CO lines is a good \nsignature of their common origin in the warm gas at low $A_V$ depths. \n\n\n\nOur PDR models predict OH column densities in the range $\\sim$10$^{12}$\\,cm$^{-2}$ to $\\sim$10$^{15}$\\,cm$^{-2}$\nat A$_V$$\\lesssim$1\nfor gas densities between $n_H$=5$\\times$10$^4$ and 10$^7$\\,cm$^{-3}$.\nHence, even if we take into account possible inclination \neffects, high-density and high-temperature models \nproduce OH columns closer to the values derived in Section~4 just from excitation\nconsiderations (note that a precise determination of the\ngas density would require knowing the collisional rate coefficients of OH with H atoms and electrons). \nThe OH abundance in these dense surface layers is of the order of $\\approx$10$^{-6}$ with respect to H nuclei.\nHowever, optical depths of A$_V$$\\lesssim$1 correspond to spatial scales of only $\\sim$10$^{15}$\\,cm\n\\citep[\\textit{i.e.}, much smaller than the H$^{13}$CN clumps detected by][deeper\ninside the cloud]{lis03},\nbut we detect extended OH emission over $\\sim$10$^{17}$\\,cm scales. Therefore, we have to conclude that \nthe observed OH emission arises from a small filling factor ensemble of unresolved structures \nthat are exposed to FUV radiation (overdense clumps or filaments). 
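For scale, the spatial lengths quoted above can be turned into angles with a small-angle estimate. This back-of-the-envelope sketch is illustrative only and is not part of the original analysis; the adopted $\sim$414\,pc distance to the Orion Bar is an assumption not stated in this section:

```python
PC_CM = 3.0857e18            # 1 pc in cm
D_ORION_CM = 414.0 * PC_CM   # assumed distance to the Orion Bar (~414 pc)
RAD_TO_ARCSEC = 206265.0

def angular_size_arcsec(length_cm, distance_cm=D_ORION_CM):
    """Small-angle conversion of a physical length into arcseconds."""
    return length_cm / distance_cm * RAD_TO_ARCSEC

# ~1e15 cm (the A_V <~ 1 layers) versus the ~1e17 cm extent over which
# extended OH emission is detected.
print(f"1e15 cm -> {angular_size_arcsec(1e15):.2f} arcsec")
print(f"1e17 cm -> {angular_size_arcsec(1e17):.1f} arcsec")

# Geometric area filling factor of a single 1e15 cm clump seen against
# the 1e17 cm emitting region.
print(f"single-clump filling factor ~ {(1e15 / 1e17) ** 2:.0e}")
```

A single such clump covers only $\sim$10$^{-4}$ of the region in area, so a filling factor of order 10\% requires an ensemble of many unresolved clumps, consistent with the conclusion drawn above.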
\nNote that owing to the lower grain temperature compared to the gas, \nthe expected FIR continuum emission from these clumps will still be below the continuum\nlevels observed by \\textit{Herschel}\/PACS.\n\nThe minimum size of the dense clumps is $\\sim$10$^{15}$\\,cm (from OH photochemical models)\nand the maximum size is $\\sim$10$^{16}$\\,cm (from the inferred beam dilution factor). Both correspond\nto $\\lesssim$0.2$''$-2$''$ at the distance of Orion.\nAs an example, H$_2$ photoevaporating clumps of $\\sim$10$^{16}$\\,cm size have been unambiguously resolved \ntowards the S106 PDR \\citep{noe05}. However, higher angular resolution observations \n(\\textit{e.g.,}~with 8-10m telescopes) are needed\nto resolve smaller H$_2$ clumps from the H$_2$ interclump emission in the Orion Bar.\n\n\n\n\n\\begin{figure}[t]\n\\vspace{-0.00cm}\n\\resizebox{\\hsize}{!}{\\includegraphics[angle=-90]{16977f4.eps}}\n\\caption{Gas-phase PDR models of a FUV-illuminated slab ($\\chi$=10$^4$) of gas with\n different densities:\n$n_H$=10$^7$, 10$^6$, 10$^5$ and 5$\\times$10$^4$\\,cm$^{-3}$. \nOH column densities are shown as continuous curves (left axis) \nwhile OH abundances, $n$(OH)\/$n_H$, are shown as dashed curves (right axes).}\n\\label{fig:pdr_mod}\n\\end{figure}\n\n\n\n\nIf the observed FIR OH line emission does not arise from such a high-density gas component, a different non-thermal \nexcitation mechanism able to populate the OH $^{2}\\Pi_{3\/2}$ $J$=9\/2 and 7\/2 levels would be needed.\nTwo alternative scenarios can be explored, at least qualitatively.\nFirst, OH molecules produced by H$_2$O photodissociation are expected to form mostly in the ground electronic\nand vibrational state but in unusually high-energy $J$$>$70\/2 levels \\citep[a few thousand~K!;][]{van00}. 
\nNevertheless, they \n will cascade down radiatively to lower-energy rotational ladders extremely rapidly.\nAlthough very excited suprathermal OH $J$$\\simeq$70\/2-15\/2 lines have been reported\ntowards the HH\\,211 outflow\nand were interpreted as H$_2$O photodissociation \\citep{tap08}, we\ndid not find any of them in the SWS or in the IRS spectra of the Orion Bar\n(taken and processed by us from the ISO and \\textit{Spitzer} basic calibrated data archives). \nBesides, even if photodissociation is the main H$_2$O destruction mechanism in the A$_V$$\\lesssim$1 warm layers, \nthis is not the main production pathway of OH (the O($^3P$)~+~H$_2$ reaction dominates).\n\n\nSecond, experiments and quantum calculations suggest that the reaction of O($^3P$) atoms with H$_{2}^{*}$($\\nu$=1) \ncan produce significant OH in the $\\nu$=1 vibrationally excited state, with OH($\\nu$=1)\/OH($\\nu$=0) \npopulation ratios $\\gtrsim$1 for moderate kinetic energies\n\\citep[$\\gtrsim$0.05-0.1\\,eV; \\textit{e.g.,}][]{bal04}.\nComplementarily, absorption of near-- and mid--IR photons (from the continuum or from bright overlapping H$_{2}^{*}$ \nand ionic lines) can also pump OH to the $\\nu$=1 state.\nIn both cases, subsequent de-excitation through the $\\nu$=1-0 rotation--vibration band \nat $\\sim$2.80\\,$\\mu$m\nwould populate the OH($\\nu$=0) rotationally excited levels that we observe with PACS.\nHowever, the OH $\\nu$=1-0 band at $\\sim$2.80\\,$\\mu$m is not present in the ISO\/SWS spectra.\nEven assuming that the O($^3P$)+H$_{2}^{*}$($\\nu$=1) reaction only forms OH($\\nu$=1),\nsignificant gas column densities at very hot temperatures (T$_k$$\\sim$2000\\,K) would be needed to match\nthe observations if $n_H$$\\simeq$10$^{4-5}$\\,cm$^{-3}$.\nStudying the OH vibrational pumping mechanism quantitatively is beyond the scope of this work, \nbut these chemical and pumping effects could contribute to the excitation of FIR OH lines\nin lower-density gas.\n\nDifferent scenarios for the 
origin and nature of photoevaporating clumps have been proposed\n\\citep{gor02}, but without a more precise determination of their\nsizes and densities it is hard to conclude on any of them. Subarcsec resolution observations of OH gas-phase products, \n\\textit{e.g.,}~direct observation of CO$^+$ or SO$^+$ clumps with \\textit{ALMA},\nwill help us to assay the clumpy nature of the Orion Bar in the near future.\n \n\n\n\n\\begin{acknowledgements}\nPACS has been developed by\na consortium of institutes led by MPE (Germany)\nand including UVIE (Austria); KU Leuven, CSL,\nIMEC (Belgium); CEA, LAM (France); MPIA (Germany); \nINAF-IFSI\/OAA\/OAP\/OAT, LENS, SISSA (Italy); IAC (Spain). \nWe thank Darek Lis and Malcolm Walmsley for providing us their CO $J$=6-5 and H$_{2}^{*}$~1-0~S(1) maps.\nWe also thank Emilie Habart, Bart Vandenbussch and Pierre Royer for useful discussions and the referee \nfor his\/her constructive report.\nWe thank the Spanish MICINN for funding support\nthrough grants AYA2006-14876, AYA2009-07304 and CSD2009-00038. \nJRG is supported by a Ram\\'on y Cajal research contract.\n We acknowledge the use of the LAMDA data base \\citep{sch05}. \n\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe concept of a fuzzy set was introduced by Zadeh \\cite{zadeh}, in 1965.\nSince its inception, the theory has developed in many directions and found\napplications in a wide variety of fields. There has been a rapid growth in\nthe interest of fuzzy set theory and its applications from the past several\nyears. Many researchers published high-quality research articles on fuzzy\nsets in a variety of international journals. The study of fuzzy set in\nalgebraic structure has been started in the definitive paper of Rosenfeld\n1971 \\cite{Rosen}. Fuzzy subgroup and its important properties were defined\nand established by Rosenfeld \\cite{Rosen}. 
In 1981, Kuroki introduced the\nconcept of fuzzy ideals and fuzzy bi-ideals in semigroups in his paper \\cite\n{Kuroki}.\n\nThere are several extensions of fuzzy sets in the fuzzy set theory, for\nexample, intuitionistic fuzzy sets, interval-valued fuzzy sets, vague sets,\netc. The bipolar-valued fuzzy set is another extension of the fuzzy set, whose\nmembership degree range differs from those of the above extensions. Lee \\cite\n{Lee} introduced the notion of bipolar-valued fuzzy sets. Bipolar-valued\nfuzzy sets are an extension of fuzzy sets whose membership degree range is\nenlarged from the interval $[0,1]$ to $[-1,1]$. In a bipolar-valued fuzzy\nset, the membership degree $0$ indicates that elements are irrelevant to the\ncorresponding property, the membership degrees on $(0,1]$ indicate that\nelements somewhat satisfy the property, and the membership degrees on \n$[-1,0)$ indicate that elements somewhat satisfy the implicit counter-property \n\\cite{Lee, Lee2}.\n\nAkram et al. \\cite{Akram} introduced the concept of bipolar fuzzy\nK-algebras. In \\cite{Jun}, Jun and Park applied the notion of bipolar-valued\nfuzzy sets to BCH-algebras. They introduced the concept of bipolar fuzzy\nsubalgebras and bipolar fuzzy ideals of a BCH-algebra. Lee \\cite{KLee2}\napplied the notion of bipolar fuzzy subalgebras and bipolar fuzzy ideals to\nBCK\/BCI-algebras. Some results on bipolar-valued fuzzy BCK\/BCI-algebras\nwere also given by Saeid in \\cite{Saeid}.\n\nThis paper concerns the relationship between bipolar-valued fuzzy sets and\nleft almost semigroups. The left almost semigroup, abbreviated as\nLA-semigroup, was first introduced by Kazim and Naseerudin \\cite{Kazim}.\nThey generalized some useful results of semigroup theory. They introduced\nbraces on the left of the ternary commutative law $abc=cba$ to get a new\npseudo-associative law, that is $(ab)c=(cb)a,$ and named it the left\ninvertive law. 
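As a quick numerical sanity check (illustrative only, not part of the original paper), the left invertive law and the medial law stated in the preliminaries can be verified for the subtraction operation $a\ast b=b-a$ on the integers, which appears as Example 2.1 below:

```python
import itertools

def star(a, b):
    """The operation of Example 2.1: a * b = b - a on the integers."""
    return b - a

sample = range(-3, 4)

# Left invertive law: (a*b)*c == (c*b)*a for all a, b, c.
assert all(star(star(a, b), c) == star(star(c, b), a)
           for a, b, c in itertools.product(sample, repeat=3))

# Medial law (stated in the preliminaries): (a*b)*(c*d) == (a*c)*(b*d).
assert all(star(star(a, b), star(c, d)) == star(star(a, c), star(b, d))
           for a, b, c, d in itertools.product(sample, repeat=4))

# The structure is genuinely non-associative: (1*2)*3 = 2 but 1*(2*3) = 0.
assert star(star(1, 2), 3) == 2
assert star(1, star(2, 3)) == 0
print("left invertive and medial laws hold on", list(sample))
```

Indeed, $(a\ast b)\ast c=a-b+c$ while $a\ast(b\ast c)=c-b-a$, so the operation satisfies the invertive law without being associative.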
An LA-semigroup is the midway structure between a commutative\nsemigroup and a groupoid. Although the structure is non-associative\nand non-commutative, it nevertheless possesses many interesting properties\nwhich we usually find in commutative and associative algebraic structures.\nMushtaq and Yusuf produced useful results on locally\nassociative LA-semigroups in 1979 \\cite{1979}. In this structure they defined powers of\nan element and congruences using these powers. They constructed quotient\nLA-semigroups using these congruences. It is a useful non-associative\nstructure with wide applications in the theory of flocks.\n\nIn this paper, we introduce the notion of bipolar-valued fuzzy\nLA-subsemigroups and bipolar-valued fuzzy left (right, bi-, interior) ideals\nin LA-semigroups.\n\n\\section{\\textbf{Preliminaries and basic definitions}}\n\n\\textbf{Definition 2.1. }\\cite{Kazim} A groupoid $\\left( S,\\cdot \\right) $\\\nis called an LA-semigroup if it satisfies the left invertive law \n\\begin{equation*}\n\\left( a\\cdot b\\right) \\cdot c=\\left( c\\cdot b\\right) \\cdot a,\\text{ \\ for\nall }a,b,c\\in S\\text{.}\n\\end{equation*}\n\n\\textbf{Example 2.1 }\\cite{1978} Let $\\left( \n\\mathbb{Z}\n,+\\right) $\\ denote the commutative group of integers under addition. Define\na binary operation \\textquotedblleft $\\ast $\\textquotedblright\\ in \n$\\mathbb{Z}$\nas follows:\n\\begin{equation*}\na\\ast b=b-a,\\text{ \\ for all }a,b\\in \n\\mathbb{Z}\n\\text{.}\n\\end{equation*}\nwhere \\textquotedblleft $-$\\textquotedblright\\ denotes the ordinary\nsubtraction of integers. 
Then $\\left( \n\\mathbb{Z}\n,\\ast \\right) $\\ is an LA-semigroup.\n\n\\textbf{Example 2.2 }\\cite{1978} Define a binary operation \\textquotedblleft \n$\\ast $\\textquotedblright\\ in \n$\\mathbb{R}\\setminus \\{0\\}$\nas follows:\n\\begin{equation*}\na\\ast b=b\\div a,\\text{ \\ for all }a,b\\in \n\\mathbb{R}\\setminus \\{0\\}\n\\text{.}\n\\end{equation*}\nThen $\\left( \n\\mathbb{R}\\setminus \\{0\\},\\ast \\right) $\\ is an LA-semigroup.\n\n\\begin{lemma}\n\\label{l1}\\cite{1979} If $S$ is an LA-semigroup with left identity $e$, then \n$a(bc)=b(ac)$ for all $a,b,c\\in S.$\n\\end{lemma}\n\nLet $S$ be an LA-semigroup. A nonempty subset $A$ of $S$ is called an\nLA-subsemigroup of $S$ if $ab\\in A$ for all $a,b\\in A$. A nonempty subset $L$\nof $S$ is called a left ideal of $S$ if $SL\\subseteq L$ and a nonempty\nsubset $R$ of $S$ is called a right ideal of $S$ if $RS\\subseteq R$. A\nnonempty subset $I$ of $S$ is called an ideal of $S$ if $I$ is both a left\nand a right ideal of $S$. A subset $A$ of $S$ is called an interior ideal of \n$S$ if $(SA)S\\subseteq A$. An LA-subsemigroup $A$ of $S$ is called a\nbi-ideal of $S$ if $(AS)A\\subseteq A$.\n\nIn an LA-semigroup the medial law holds\n\\begin{equation*}\n(ab)(cd)=(ac)(bd),\\text{ \\ for all }a,b,c,d\\in S.\n\\end{equation*}\n\nIn an LA-semigroup $S$ with left identity, the paramedial law holds\n\\begin{equation*}\n(ab)(cd)=(dc)(ba),\\text{ \\ for all }a,b,c,d\\in S.\n\\end{equation*}\n\nNow we will recall the concept of bipolar-valued fuzzy sets.\n\n\\textbf{Definition 2.2 }\\cite{Lee2} Let $X$\\ be a nonempty set. 
A\nbipolar-valued fuzzy subset (BVF-subset, in short) $B$\\ of $X$\\ is an object\nhaving the for\n\\begin{equation*}\nB=\\left\\{ \\left\\langle x,\\mu _{B}^{+}(x),\\mu _{B}^{-}(x)\\right\\rangle :x\\in\nX\\right\\} .\n\\end{equation*\nWhere $\\mu _{B}^{+}:X\\rightarrow \\lbrack 0,1]$\\ and $\\mu\n_{B}^{-}:X\\rightarrow \\lbrack -1,0]$.\n\nThe positive membership degree $\\mu _{B}^{+}(x)$ denotes the satisfaction\ndegree of an element $x$ to the property corresponding to a bipolar-valued\nfuzzy set $B=\\left\\{ \\left\\langle x,\\mu _{B}^{+}(x),\\mu\n_{B}^{-}(x)\\right\\rangle :x\\in X\\right\\} $, and the negative membership\ndegree $\\mu _{B}^{-}(x)$ denotes the satisfaction degree of $x$ to some\nimplicit counter property of $B=\\left\\{ \\left\\langle x,\\mu _{B}^{+}(x),\\mu\n_{B}^{-}(x)\\right\\rangle :x\\in X\\right\\} $. For the sake of simplicity, we\nshall use the symbol $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $\nfor the bipolar-valued fuzzy set $B=\\left\\{ \\left\\langle x,\\mu\n_{B}^{+}(x),\\mu _{B}^{-}(x)\\right\\rangle :x\\in X\\right\\} .$\n\n\\textbf{Definition 2.3 }Let $B_{1}=\\left\\langle \\mu _{B_{1}}^{+},\\mu\n_{B_{1}}^{-}\\right\\rangle $ and $B_{2}=\\left\\langle \\mu _{B_{2}}^{+},\\mu\n_{B_{2}}^{-}\\right\\rangle $ be two BVF-subsets of a nonempty set $X$. Then\nthe product of two BVF-subsets is denoted by $B_{1}\\circ B_{2}$ and defined\nas: \n\\begin{eqnarray*}\n\\left( \\mu _{B_{1}}^{+}\\circ \\mu _{B_{2}}^{+}\\right) \\left( x\\right) \n&=&\\left\\{ \n\\begin{array}{l}\n\\tbigvee_{x=yz}\\left\\{ \\mu _{B_{1}}^{+}\\left( y\\right) \\wedge \\mu\n_{B_{2}}^{+}\\left( z\\right) \\right\\} ,\\text{ if }x=yz\\text{ for some }y,z\\in\nS \\\\ \n\\text{ }0\\text{ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ otherwise.\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \n\\end{array\n\\right. 
\\\\\n\\left( \\mu _{B_{1}}^{-}\\circ \\mu _{B_{2}}^{-}\\right) \\left( x\\right) \n&=&\\left\\{ \n\\begin{array}{l}\n\\tbigwedge_{x=yz}\\left\\{ \\mu _{B_{1}}^{-}\\left( y\\right) \\vee \\mu\n_{B_{2}}^{-}\\left( z\\right) \\right\\} ,\\text{ if }x=yz\\text{ for some }y,z\\in\nS \\\\ \n\\text{ }0\\text{ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n\\ otherwise.\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \n\\end{array\n\\right. \n\\end{eqnarray*}\n\nNote that an LA-semigroup $S$ can be considered as a BVF-subset of itself\nand le\n\\begin{eqnarray*}\n\\Gamma &=&\\left\\langle \\mathcal{S}_{\\Gamma }^{+}(x),\\mathcal{S}_{\\Gamma\n}^{-}(x)\\right\\rangle \\\\\n&=&\\left\\{ \\left\\langle x,\\mathcal{S}_{\\Gamma }^{+}(x),\\mathcal{S}_{\\Gamma\n}^{-}(x)\\right\\rangle :\\mathcal{S}_{\\Gamma }^{+}(x)=1\\text{ and }\\mathcal{S\n_{\\Gamma }^{-}(x)=-1,\\text{ for all }x\\text{ in }S\\right\\}\n\\end{eqnarray*\nbe a BVF-subset and $\\Gamma =\\left\\langle \\mathcal{S}_{\\Gamma }^{+}(x)\n\\mathcal{S}_{\\Gamma }^{-}(x)\\right\\rangle $ will be carried out in\noperations with a BVF-subset $B=\\left\\langle \\mu _{B}^{+},\\mu\n_{B}^{-}\\right\\rangle $ such that $\\mathcal{S}_{\\Gamma }^{+}$ and $\\mathcal{\n}_{\\Gamma }^{-}$ will be used in collaboration with $\\mu _{B}^{+}$ and $\\mu\n_{B}^{-}$ respectively.\n\nLet $BVF(S)$ denote the set of all BVF-subsets of an LA-semigroup $S.$\n\n\\begin{proposition}\n\\label{P1}Let $S$ be an LA-semigroup, then the set $(BVF(S),\\circ )$ is an\nLA-semigroup.\n\\end{proposition}\n\n\\textbf{Proof. }Clearly $BVF(S)$ is closed. Let $B_{1}=\\left\\langle \\mu\n_{B_{1}}^{+},\\mu _{B_{1}}^{-}\\right\\rangle ,$ $B_{2}=\\left\\langle \\mu\n_{B_{2}}^{+},\\mu _{B_{2}}^{-}\\right\\rangle $ and $B_{3}=\\left\\langle \\mu\n_{B_{3}}^{+},\\mu _{B_{3}}^{-}\\right\\rangle $ be in $BVF(S)$. Let $x$ be any\nelement of $S$ such that $x\\neq yz$ for some $y,z\\in S$. 
Then we hav\n\\begin{equation*}\n\\left( \\left( \\mu _{B_{1}}^{+}\\circ \\mu _{B_{2}}^{+}\\right) \\circ \\mu\n_{B_{3}}^{+}\\right) (x)=0=\\left( \\left( \\mu _{B_{3}}^{+}\\circ \\mu\n_{B_{2}}^{+}\\right) \\circ \\mu _{B_{1}}^{+}\\right) (x).\n\\end{equation*\nAn\n\\begin{equation*}\n\\left( \\left( \\mu _{B_{1}}^{-}\\circ \\mu _{B_{2}}^{-}\\right) \\circ \\mu\n_{B_{3}}^{-}\\right) (x)=0=\\left( \\left( \\mu _{B_{3}}^{-}\\circ \\mu\n_{B_{2}}^{-}\\right) \\circ \\mu _{B_{1}}^{-}\\right) (x).\n\\end{equation*\nLet $x$ be any element of $S$ such that $x=yz$ for some $y,z\\in S$. Then we\nhav\n\\begin{eqnarray*}\n\\left( \\left( \\mu _{B_{1}}^{+}\\circ \\mu _{B_{2}}^{+}\\right) \\circ \\mu\n_{B_{3}}^{+}\\right) (x) &=&{\\bigvee }_{x=yz}\\left\\{ \\left( \\mu\n_{B_{1}}^{+}\\circ \\mu _{B_{2}}^{+}\\right) (y)\\wedge \\mu\n_{B_{3}}^{+}(z)\\right\\} \\\\\n&=&{\\bigvee }_{x=yz}\\left\\{ \\left( {\\bigvee }_{y=pq}\\left\\{ \\mu\n_{B_{1}}^{+}(p)\\wedge \\mu _{B_{2}}^{+}(q)\\right\\} \\right) \\wedge \\mu\n_{B_{3}}^{+}(z)\\right\\} \\\\\n&=&{\\bigvee }_{x=yz}\\text{ }{\\bigvee }_{y=pq}\\left\\{ \\mu\n_{B_{1}}^{+}(p)\\wedge \\mu _{B_{2}}^{+}(q)\\wedge \\mu _{B_{3}}^{+}(z)\\right\\}\n\\\\\n&=&{\\bigvee }_{x=(pq)z}\\left\\{ \\mu _{B_{1}}^{+}(p)\\wedge \\mu\n_{B_{2}}^{+}(q)\\wedge \\mu _{B_{3}}^{+}(z)\\right\\} \\\\\n&=&{\\bigvee }_{x=(zq)p}\\left\\{ \\mu _{B_{3}}^{+}(z)\\wedge \\mu\n_{B_{2}}^{+}(q)\\wedge \\mu _{B_{1}}^{+}(p)\\right\\} \\\\\n&=&{\\bigvee }_{x=sp}\\left\\{ \\left( {\\bigvee }_{s=zq}\\left\\{ \\mu\n_{B_{3}}^{+}(z)\\wedge \\mu _{B_{2}}^{+}(q)\\right\\} \\right) \\wedge \\mu\n_{B_{1}}^{+}(p)\\right\\} \\\\\n&=&{\\bigvee }_{x=sp}\\left\\{ \\left( \\mu _{B_{3}}^{+}\\circ \\mu\n_{B_{2}}^{+}\\right) (s)\\wedge \\mu _{B_{1}}^{+}(p)\\right\\} \\\\\n&=&\\left( \\left( \\mu _{B_{3}}^{+}\\circ \\mu _{B_{2}}^{+}\\right) \\circ \\mu\n_{B_{1}}^{+}\\right) (x).\n\\end{eqnarray*\nAn\n\\begin{eqnarray*}\n\\left( \\left( \\mu _{B_{1}}^{-}\\circ \\mu _{B_{2}}^{-}\\right) \\circ 
\\mu\n_{B_{3}}^{-}\\right) (x) &=&{\\bigwedge }_{x=yz}\\left\\{ \\left( \\mu\n_{B_{1}}^{-}\\circ \\mu _{B_{2}}^{-}\\right) (y)\\vee \\mu _{B_{3}}^{-}(z)\\right\\}\n\\\\\n&=&{\\bigwedge }_{x=yz}\\left\\{ \\left( {\\bigwedge }_{y=pq}\\left\\{ \\mu\n_{B_{1}}^{-}(p)\\vee \\mu _{B_{2}}^{-}(q)\\right\\} \\right) \\vee \\mu\n_{B_{3}}^{-}(z)\\right\\} \\\\\n&=&{\\bigwedge }_{x=yz}\\text{ }{\\bigwedge }_{y=pq}\\left\\{ \\mu\n_{B_{1}}^{-}(p)\\vee \\mu _{B_{2}}^{-}(q)\\vee \\mu _{B_{3}}^{-}(z)\\right\\} \\\\\n&=&{\\bigwedge }_{x=(pq)z}\\left\\{ \\mu _{B_{1}}^{-}(p)\\vee \\mu\n_{B_{2}}^{-}(q)\\vee \\mu _{B_{3}}^{-}(z)\\right\\} \\\\\n&=&{\\bigwedge }_{x=(zq)p}\\left\\{ \\mu _{B_{3}}^{-}(z)\\vee \\mu\n_{B_{2}}^{-}(q)\\vee \\mu _{B_{1}}^{-}(p)\\right\\} \\\\\n&=&{\\bigwedge }_{x=sp}\\left\\{ \\left( {\\bigwedge }_{s=zq}\\left\\{ \\mu\n_{B_{3}}^{-}(z)\\vee \\mu _{B_{2}}^{-}(q)\\right\\} \\right) \\vee \\mu\n_{B_{1}}^{-}(p)\\right\\} \\\\\n&=&{\\bigwedge }_{x=sp}\\left\\{ \\left( \\mu _{B_{3}}^{-}\\circ \\mu\n_{B_{2}}^{-}\\right) (s)\\vee \\mu _{B_{1}}^{-}(p)\\right\\} \\\\\n&=&\\left( \\left( \\mu _{B_{3}}^{-}\\circ \\mu _{B_{2}}^{-}\\right) \\circ \\mu\n_{B_{1}}^{-}\\right) (x).\n\\end{eqnarray*}\n\nHence $(BVF(S),\\circ )$ is an LA-semigroup. $\\ \\ \\Box $\n\n\\begin{corollary}\n\\label{C1}If $S$ is an LA-semigroup, then the medial law holds in $BVF(S)$.\n\\end{corollary}\n\n\\textbf{Proof. }Let $B_{1}=\\left\\langle \\mu _{B_{1}}^{+},\\mu\n_{B_{1}}^{-}\\right\\rangle $, $B_{2}=\\left\\langle \\mu _{B_{2}}^{+},\\mu\n_{B_{2}}^{-}\\right\\rangle ,$ $B_{3}=\\left\\langle \\mu _{B_{3}}^{+},\\mu\n_{B_{3}}^{-}\\right\\rangle $ and $B_{4}=\\left\\langle \\mu _{B_{4}}^{+},\\mu\n_{B_{4}}^{-}\\right\\rangle $ be in $BVF(S)$. 
By successive use of left\ninvertive la\n\\begin{eqnarray*}\n\\left( \\mu _{B_{1}}^{+}\\circ \\mu _{B_{2}}^{+}\\right) \\circ \\left( \\mu\n_{B_{3}}^{+}\\circ \\mu _{B_{4}}^{+}\\right) &=&\\left( \\left( \\mu\n_{B_{3}}^{+}\\circ \\mu _{B_{4}}^{+}\\right) \\circ \\mu _{B_{2}}^{+}\\right)\n\\circ \\mu _{B_{1}}^{+} \\\\\n&=&\\left( \\left( \\mu _{B_{2}}^{+}\\circ \\mu _{B_{4}}^{+}\\right) \\circ \\mu\n_{B_{3}}^{+}\\right) \\circ \\mu _{B_{1}}^{+} \\\\\n&=&\\left( \\mu _{B_{1}}^{+}\\circ \\mu _{B_{3}}^{+}\\right) \\circ \\left( \\mu\n_{B_{2}}^{+}\\circ \\mu _{B_{4}}^{+}\\right) .\n\\end{eqnarray*\nAn\n\\begin{eqnarray*}\n\\left( \\mu _{B_{1}}^{-}\\circ \\mu _{B_{2}}^{-}\\right) \\circ \\left( \\mu\n_{B_{3}}^{-}\\circ \\mu _{B_{4}}^{-}\\right) &=&\\left( \\left( \\mu\n_{B_{3}}^{-}\\circ \\mu _{B_{4}}^{-}\\right) \\circ \\mu _{B_{2}}^{-}\\right)\n\\circ \\mu _{B_{1}}^{-} \\\\\n&=&\\left( \\left( \\mu _{B_{2}}^{-}\\circ \\mu _{B_{4}}^{-}\\right) \\circ \\mu\n_{B_{3}}^{-}\\right) \\circ \\mu _{B_{1}}^{-} \\\\\n&=&\\left( \\mu _{B_{1}}^{-}\\circ \\mu _{B_{3}}^{-}\\right) \\circ \\left( \\mu\n_{B_{2}}^{-}\\circ \\mu _{B_{4}}^{-}\\right) .\n\\end{eqnarray*\nHence this shows that the medial law holds in $BVF(S)$. 
$\\ \\ \\Box $\n\n\\section{\\textbf{Bipolar-valued fuzzy ideals in LA-semigroup}}\n\n\\textbf{Definition 3.1 }A BVF-subset $B=\\left\\langle \\mu _{B}^{+},\\mu\n_{B}^{-}\\right\\rangle $ of an LA-semigroup $S$ is called a bipolar-valued\nfuzzy LA-subsemigroup of $S$ i\n\\begin{equation*}\n\\mu _{B}^{+}\\left( xy\\right) \\geq \\mu _{B}^{+}\\left( x\\right) \\wedge \\mu\n_{B}^{+}\\left( y\\right) \\text{\\ \\ and \\ \\ }\\mu _{B}^{-}\\left( xy\\right) \\leq\n\\mu _{B}^{-}\\left( x\\right) \\vee \\mu _{B}^{-}\\left( y\\right)\n\\end{equation*\nfor all $x,y\\in S$.\n\n\\textbf{Definition 3.2 }A BVF-subset $B=\\left\\langle \\mu _{B}^{+},\\mu\n_{B}^{-}\\right\\rangle $ of an LA-semigroup $S$ is called a bipolar-valued\nfuzzy left ideal of $S$ i\n\\begin{equation*}\n\\mu _{B}^{+}\\left( xy\\right) \\geq \\mu _{B}^{+}\\left( y\\right) \\text{ \\ \\ and\n\\ \\ }\\mu _{B}^{-}\\left( xy\\right) \\leq \\mu _{B}^{-}\\left( y\\right)\n\\end{equation*\nfor all $x,y\\in S$.\n\n\\textbf{Definition 3.3 }A BVF-subset $B=\\left\\langle \\mu _{B}^{+},\\mu\n_{B}^{-}\\right\\rangle $ of an LA-semigroup $S$ is called a bipolar-valued\nfuzzy right ideal of $S$ i\n\\begin{equation*}\n\\mu _{B}^{+}\\left( xy\\right) \\geq \\mu _{B}^{+}\\left( x\\right) \\text{ \\ \\ and\n\\ }\\mu _{B}^{-}\\left( xy\\right) \\leq \\mu _{B}^{-}\\left( x\\right)\n\\end{equation*\nfor all $x,y\\in S$.\n\nA BVF-subset $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ of an\nLA-semigroup $S$ is called a BVF-ideal or BVF-two-sided ideal of $S$ if \nB=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is both BVF-left and\nBVF-right ideal of $S$.\n\n\\textbf{Example 3.1} Let $S=\\{a,b,c,d\\}$, the binary operation \"$\\cdot $\" on \n$S$ be defined as follows\n\\begin{equation*}\n\\begin{tabular}{l|llll}\n$\\cdot $ & $a$ & $b$ & $c$ & $d$ \\\\ \\hline\n$a$ & $b$ & $d$ & $c$ & $a$ \\\\ \n$b$ & $a$ & $b$ & $c$ & $d$ \\\\ \n$c$ & $c$ & $c$ & $c$ & $c$ \\\\ \n$d$ & $d$ & $a$ & $c$ & 
$b\n\\end{tabular\n\\end{equation*\nClearly, $S$ is an LA-semigroup. But $S$ is not a semigroup because \nd=d\\cdot (b\\cdot a)\\neq (d\\cdot b)\\cdot a=b.$ Now we define BVF-subset a\n\\begin{equation*}\nB=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle =\\left\\langle \\left( \n\\frac{a}{0.2},\\frac{b}{0.2},\\frac{c}{0.7},\\frac{d}{0.2}\\right) ,\\text{ \n\\left( \\frac{a}{-0.5},\\frac{b}{-0.5},\\frac{c}{-0.8},\\frac{d}{-0.5}\\right)\n\\right\\rangle .\n\\end{equation*\nClearly $B$ is a BVF-ideal of $S.$\n\n\\begin{proposition}\n\\label{BP2}Every BVF-left (BVF-right) ideal $B=\\left\\langle \\mu _{B}^{+},\\mu\n_{B}^{-}\\right\\rangle $ of an LA-semigroup $S$ is a bipolar-valued fuzzy\nLA-subsemigroup of $S$.\n\\end{proposition}\n\n\\textbf{Proof. }Let $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $\nbe a BVF-left ideal of $S$ and for any $x,y\\in S$,\n\n\\begin{equation*}\n\\mu _{B}^{+}\\left( xy\\right) \\geq \\mu _{B}^{+}\\left( y\\right) \\geq \\mu\n_{B}^{+}\\left( x\\right) \\wedge \\mu _{B}^{+}\\left( y\\right) \\text{.}\n\\end{equation*\nAn\n\\begin{equation*}\n\\mu _{B}^{-}\\left( xy\\right) \\leq \\mu _{B}^{-}\\left( y\\right) \\leq \\mu\n_{B}^{-}\\left( x\\right) \\vee \\mu _{B}^{-}\\left( y\\right) .\n\\end{equation*\nHence $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a\nbipolar-valued fuzzy LA-subsemigroup of $S$. The other case can be prove in\na similar way. $\\ \\ \\Box $\n\n\\begin{lemma}\n\\label{L56}Let $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ be a\nBVF-subset of an LA-semigroup $S.$ Then\n\\end{lemma}\n\n(1) $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a\nBVF-LA-subsemigroup of $S$\\ if and only if $\\mu _{B}^{+}\\circ \\mu\n_{B}^{+}\\subseteq \\mu _{B}^{+}$ and $\\mu _{B}^{-}\\circ \\mu _{B}^{-}\\supseteq\n\\mu _{B}^{-}.$\n\n(2) $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a BVF-left\n(resp. 
BVF-right) ideal of $S$\\ if and only if $\\mathcal{S}_{\\Gamma\n}^{+}\\circ \\mu _{B}^{+}\\subseteq \\mu _{B}^{+}$ and $\\mathcal{S}_{\\Gamma\n}^{-}\\circ \\mu _{B}^{-}\\supseteq \\mu _{B}^{-}$ (resp. $\\mu _{B}^{+}\\circ \n\\mathcal{S}_{\\Gamma }^{+}\\subseteq \\mu _{B}^{+}$ and $\\mu _{B}^{-}\\circ \n\\mathcal{S}_{\\Gamma }^{-}\\supseteq \\mu _{B}^{-}$)$.$\n\n\\textbf{Proof. }(1)\\textbf{\\ }Let $B=\\left\\langle \\mu _{B}^{+},\\mu\n_{B}^{-}\\right\\rangle $ be a BVF-LA-subsemigroup of $S$ and $x\\in S.$ If \n\\left( \\mu _{B}^{+}\\circ \\mu _{B}^{+}\\right) \\left( x\\right) =0$ and $\\left(\n\\mu _{B}^{-}\\circ \\mu _{B}^{-}\\right) \\left( x\\right) =0,$ then $\\left( \\mu\n_{B}^{+}\\circ \\mu _{B}^{+}\\right) \\left( x\\right) \\leq \\mu _{B}^{+}\\left(\nx\\right) $ and $\\left( \\mu _{B}^{-}\\circ \\mu _{B}^{-}\\right) \\left( x\\right)\n\\geq \\mu _{B}^{-}\\left( x\\right) .$ Otherwise\n\\begin{equation*}\n\\left( \\mu _{B}^{+}\\circ \\mu _{B}^{+}\\right) \\left( x\\right) ={\\bigvee \n_{x=yz}\\left\\{ \\mu _{B}^{+}\\left( y\\right) \\wedge \\mu _{B}^{+}\\left(\nz\\right) \\right\\} \\leq {\\bigvee }_{x=yz}\\mu _{B}^{+}\\left( yz\\right) =\\mu\n_{B}^{+}\\left( x\\right) .\n\\end{equation*\nAn\n\\begin{equation*}\n\\left( \\mu _{B}^{-}\\circ \\mu _{B}^{-}\\right) \\left( x\\right) ={\\bigwedge \n_{x=yz}\\left\\{ \\mu _{B}^{-}\\left( y\\right) \\vee \\mu _{B}^{-}\\left( z\\right)\n\\right\\} \\geq {\\bigwedge }_{x=yz}\\mu _{B}^{-}\\left( yz\\right) =\\mu\n_{B}^{-}\\left( x\\right) .\n\\end{equation*\nThus $\\mu _{B}^{+}\\circ \\mu _{B}^{+}\\subseteq \\mu _{B}^{+}$ and $\\mu\n_{B}^{-}\\circ \\mu _{B}^{-}\\supseteq \\mu _{B}^{-}.$\\newline\nConversely, let $\\mu _{B}^{+}\\circ \\mu _{B}^{+}\\subseteq \\mu _{B}^{+}$, $\\mu\n_{B}^{-}\\circ \\mu _{B}^{-}\\supseteq \\mu _{B}^{-}$ and $x,y\\in S,$ the\n\\begin{equation*}\n\\mu _{B}^{+}\\left( xy\\right) \\geq \\left( \\mu _{B}^{+}\\circ \\mu\n_{B}^{+}\\right) \\left( xy\\right) ={\\bigvee }_{xy=ab}\\left\\{ 
\\mu\n_{B}^{+}\\left( a\\right) \\wedge \\mu _{B}^{+}\\left( b\\right) \\right\\} \\geq \\mu\n_{B}^{+}\\left( x\\right) \\wedge \\mu _{B}^{+}\\left( y\\right) .\n\\end{equation*\nAn\n\\begin{equation*}\n\\mu _{B}^{-}\\left( xy\\right) \\leq \\left( \\mu _{B}^{-}\\circ \\mu\n_{B}^{-}\\right) \\left( xy\\right) ={\\bigwedge }_{xy=ab}\\left\\{ \\mu\n_{B}^{-}\\left( a\\right) \\vee \\mu _{B}^{-}\\left( b\\right) \\right\\} \\leq \\mu\n_{B}^{-}\\left( x\\right) \\vee \\mu _{B}^{-}\\left( y\\right) .\n\\end{equation*\nSo $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a\nBVF-LA-subsemigroup of $S.$\n\n(2) Let $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ be a\nBVF-left ideal of $S$ and $x\\in S.$ If $\\left( \\mathcal{S}_{\\Gamma\n}^{+}\\circ \\mu _{B}^{+}\\right) \\left( x\\right) =0$ and $\\left( \\mathcal{S\n_{\\Gamma }^{-}\\circ \\mu _{B}^{-}\\right) \\left( x\\right) =0,$ then $\\left( \n\\mathcal{S}_{\\Gamma }^{+}\\circ \\mu _{B}^{+}\\right) \\left( x\\right) \\leq \\mu\n_{B}^{+}\\left( x\\right) $ and $\\left( \\mathcal{S}_{\\Gamma }^{-}\\circ \\mu\n_{B}^{-}\\right) \\left( x\\right) \\geq \\mu _{B}^{-}\\left( x\\right) .$\nOtherwise\n\\begin{eqnarray*}\n\\left( \\mathcal{S}_{\\Gamma }^{+}\\circ \\mu _{B}^{+}\\right) \\left( x\\right) &=\n{\\bigvee }_{x=ab}\\left\\{ \\mathcal{S}_{\\Gamma }^{+}\\left( a\\right) \\wedge \\mu\n_{B}^{+}\\left( b\\right) \\right\\} ={\\bigvee }_{x=ab}\\left\\{ 1\\wedge \\mu\n_{B}^{+}\\left( b\\right) \\right\\} \\\\\n&=&{\\bigvee }_{x=ab}\\mu _{B}^{+}\\left( b\\right) \\leq {\\bigvee }_{x=ab}\\mu\n_{B}^{+}\\left( ab\\right) =\\mu _{B}^{+}\\left( x\\right) .\n\\end{eqnarray*\nAn\n\\begin{eqnarray*}\n\\left( \\mathcal{S}_{\\Gamma }^{-}\\circ \\mu _{B}^{-}\\right) \\left( x\\right) &=\n{\\bigwedge }_{x=ab}\\left\\{ \\mathcal{S}_{\\Gamma }^{-}\\left( a\\right) \\vee \\mu\n_{B}^{-}\\left( b\\right) \\right\\} ={\\bigwedge }_{x=ab}\\left\\{ -1\\vee \\mu\n_{B}^{-}\\left( b\\right) \\right\\} 
\\\\\n&=&{\\bigwedge }_{x=ab}\\mu _{B}^{-}\\left( b\\right) \\geq {\\bigwedge \n_{x=ab}\\mu _{B}^{-}\\left( ab\\right) =\\mu _{B}^{-}\\left( x\\right) .\n\\end{eqnarray*\nThus $\\mathcal{S}_{\\Gamma }^{+}\\circ \\mu _{B}^{+}\\subseteq \\mu _{B}^{+}$ and \n$\\mathcal{S}_{\\Gamma }^{-}\\circ \\mu _{B}^{-}\\supseteq \\mu _{B}^{-}.$\\newline\nConversely, let $\\mathcal{S}_{\\Gamma }^{+}\\circ \\mu _{B}^{+}\\subseteq \\mu\n_{B}^{+}$, $\\mathcal{S}_{\\Gamma }^{-}\\circ \\mu _{B}^{-}\\supseteq \\mu\n_{B}^{-} $ and $x,y\\in S,$ then \n\\begin{eqnarray*}\n\\mu _{B}^{+}\\left( xy\\right) &\\geq &\\left( \\mathcal{S}_{\\Gamma }^{+}\\circ\n\\mu _{B}^{+}\\right) \\left( xy\\right) ={\\bigvee }_{xy=ab}\\left\\{ \\mathcal{S\n_{\\Gamma }^{+}\\left( a\\right) \\wedge \\mu _{B}^{+}\\left( b\\right) \\right\\} \\\\\n&\\geq &\\mathcal{S}_{\\Gamma }^{+}\\left( x\\right) \\wedge \\mu _{B}^{+}\\left(\ny\\right) =1\\wedge \\mu _{B}^{+}\\left( y\\right) =\\mu _{B}^{+}\\left( y\\right) .\n\\end{eqnarray*\nAnd \n\\begin{eqnarray*}\n\\mu _{B}^{-}\\left( xy\\right) &\\leq &\\left( \\mathcal{S}_{\\Gamma }^{-}\\circ\n\\mu _{B}^{-}\\right) \\left( xy\\right) ={\\bigwedge }_{xy=ab}\\left\\{ \\mathcal{S\n_{\\Gamma }^{-}\\left( a\\right) \\vee \\mu _{B}^{-}\\left( b\\right) \\right\\} \\\\\n&\\leq &\\mathcal{S}_{\\Gamma }^{-}\\left( x\\right) \\vee \\mu _{B}^{-}\\left(\ny\\right) =-1\\vee \\mu _{B}^{-}\\left( y\\right) =\\mu _{B}^{-}\\left( y\\right) .\n\\end{eqnarray*\nThus $\\mu _{B}^{+}\\left( xy\\right) \\geq \\mu _{B}^{+}\\left( y\\right) $ and \n\\mu _{B}^{-}\\left( xy\\right) \\leq \\mu _{B}^{-}\\left( y\\right) .$\\ Thus \nB=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a BVF-left ideal\nof $S.$ The second case can be seen in a similar way. 
$\\ \\ \\Box $\n\nLet $B_{1}=\\left\\langle \\mu _{B_{1}}^{+},\\mu _{B_{1}}^{-}\\right\\rangle $ and $B_{2}=\\left\\langle \\mu _{B_{2}}^{+},\\mu _{B_{2}}^{-}\\right\\rangle $ be two BVF-subsets of an LA-semigroup $S.$ The symbol $B_{1}\\cap B_{2}$ will mean the following:\n\\begin{equation*}\n\\left( \\mu _{B_{1}}^{+}\\cap \\mu _{B_{2}}^{+}\\right) (x)=\\mu _{B_{1}}^{+}(x)\\wedge \\mu _{B_{2}}^{+}(x),\\text{ for all }x\\in S.\n\\end{equation*}\n\n\\begin{equation*}\n\\left( \\mu _{B_{1}}^{-}\\cup \\mu _{B_{2}}^{-}\\right) (x)=\\mu _{B_{1}}^{-}(x)\\vee \\mu _{B_{2}}^{-}(x),\\text{ for all }x\\in S.\n\\end{equation*}\nThe symbol $B_{1}\\cup B_{2}$ will mean the following:\n\n\\begin{equation*}\n\\left( \\mu _{B_{1}}^{+}\\cup \\mu _{B_{2}}^{+}\\right) (x)=\\mu _{B_{1}}^{+}(x)\\vee \\mu _{B_{2}}^{+}(x),\\text{ for all }x\\in S.\n\\end{equation*}\n\n\\begin{equation*}\n\\left( \\mu _{B_{1}}^{-}\\cap \\mu _{B_{2}}^{-}\\right) (x)=\\mu _{B_{1}}^{-}(x)\\wedge \\mu _{B_{2}}^{-}(x),\\text{ for all }x\\in S.\n\\end{equation*}\n\n\\begin{theorem}\nLet $S$ be an LA-semigroup, let $B_{1}=\\left\\langle \\mu _{B_{1}}^{+},\\mu _{B_{1}}^{-}\\right\\rangle $ be a BVF-right ideal of $S$ and let $B_{2}=\\left\\langle \\mu _{B_{2}}^{+},\\mu _{B_{2}}^{-}\\right\\rangle $ be a BVF-left ideal of $S.$ Then $B_{1}\\circ B_{2}\\subseteq B_{1}\\cap B_{2}.$\n\\end{theorem}\n\n\\textbf{Proof. }Let $x\\in S.$ If $x\\neq yz$ for all $y,z\\in S,$ then we have\n\\begin{equation*}\n\\left( \\mu _{B_{1}}^{+}\\circ \\mu _{B_{2}}^{+}\\right) (x)=0\\leq \\mu _{B_{1}}^{+}(x)\\wedge \\mu _{B_{2}}^{+}(x)=\\left( \\mu _{B_{1}}^{+}\\cap \\mu _{B_{2}}^{+}\\right) (x).\n\\end{equation*}\nAnd\n\\begin{equation*}\n\\left( \\mu _{B_{1}}^{-}\\circ \\mu _{B_{2}}^{-}\\right) (x)=0\\geq \\mu _{B_{1}}^{-}(x)\\vee \\mu _{B_{2}}^{-}(x)=\\left( \\mu _{B_{1}}^{-}\\cup \\mu _{B_{2}}^{-}\\right) (x).\n\\end{equation*}\nOtherwise,\n\\begin{eqnarray*}\n\\left( \\mu _{B_{1}}^{+}\\circ \\mu _{B_{2}}^{+}\\right) (x) &=&{\\bigvee }_{x=yz}\\left\\{ \\mu _{B_{1}}^{+}\\left( y\\right) \\wedge \\mu _{B_{2}}^{+}\\left( z\\right) \\right\\} \\\\\n&\\leq &{\\bigvee }_{x=yz}\\left\\{ \\mu _{B_{1}}^{+}\\left( yz\\right) \\wedge \\mu _{B_{2}}^{+}\\left( yz\\right) \\right\\} \\\\\n&=&\\mu _{B_{1}}^{+}\\left( x\\right) \\wedge \\mu _{B_{2}}^{+}\\left( x\\right) \\\\\n&=&\\left( \\mu _{B_{1}}^{+}\\cap \\mu _{B_{2}}^{+}\\right) (x).\n\\end{eqnarray*}\nAnd\n\\begin{eqnarray*}\n\\left( \\mu _{B_{1}}^{-}\\circ \\mu _{B_{2}}^{-}\\right) (x) &=&{\\bigwedge }_{x=yz}\\left\\{ \\mu _{B_{1}}^{-}\\left( y\\right) \\vee \\mu _{B_{2}}^{-}\\left( z\\right) \\right\\} \\\\\n&\\geq &{\\bigwedge }_{x=yz}\\left\\{ \\mu _{B_{1}}^{-}\\left( yz\\right) \\vee \\mu _{B_{2}}^{-}\\left( yz\\right) \\right\\} \\\\\n&=&\\mu _{B_{1}}^{-}\\left( x\\right) \\vee \\mu _{B_{2}}^{-}\\left( x\\right) \\\\\n&=&\\left( \\mu _{B_{1}}^{-}\\cup \\mu _{B_{2}}^{-}\\right) (x).\n\\end{eqnarray*}\nThus we get $\\mu _{B_{1}}^{+}\\circ \\mu _{B_{2}}^{+}\\subseteq \\mu _{B_{1}}^{+}\\cap \\mu _{B_{2}}^{+}$ and $\\mu _{B_{1}}^{-}\\circ \\mu _{B_{2}}^{-}\\supseteq \\mu _{B_{1}}^{-}\\cup \\mu _{B_{2}}^{-}.$ Hence $B_{1}\\circ B_{2}\\subseteq B_{1}\\cap B_{2}.$ $\\ \\ \\Box $\n\n\\begin{proposition}\n\\label{P101}\\textit{Let }$B_{1}=\\left\\langle \\mu _{B_{1}}^{+},\\mu _{B_{1}}^{-}\\right\\rangle $\\textit{\\ 
and }$B_{2}=\\left\\langle \\mu _{B_{2}}^{+},\\mu _{B_{2}}^{-}\\right\\rangle $\\textit{\\ be two BVF-LA-subsemigroups of }$S$\\textit{. Then }$B_{1}\\cap B_{2}$\\textit{\\ is also a BVF-LA-subsemigroup of }$S$\\textit{.}\n\\end{proposition}\n\n\\textbf{Proof.\\ }Let $B_{1}=\\left\\langle \\mu _{B_{1}}^{+},\\mu _{B_{1}}^{-}\\right\\rangle $ and $B_{2}=\\left\\langle \\mu _{B_{2}}^{+},\\mu _{B_{2}}^{-}\\right\\rangle $ be two BVF-LA-subsemigroups of $S$ and let $x,y\\in S$. Then\n\\begin{eqnarray*}\n\\left( \\mu _{B_{1}}^{+}\\cap \\mu _{B_{2}}^{+}\\right) \\left( xy\\right) &=&\\mu _{B_{1}}^{+}\\left( xy\\right) \\wedge \\mu _{B_{2}}^{+}\\left( xy\\right) \\\\\n&\\geq &\\left( \\mu _{B_{1}}^{+}\\left( x\\right) \\wedge \\mu _{B_{1}}^{+}\\left( y\\right) \\right) \\wedge \\left( \\mu _{B_{2}}^{+}\\left( x\\right) \\wedge \\mu _{B_{2}}^{+}\\left( y\\right) \\right) \\\\\n&=&\\left( \\mu _{B_{1}}^{+}\\left( x\\right) \\wedge \\mu _{B_{2}}^{+}\\left( x\\right) \\right) \\wedge \\left( \\mu _{B_{1}}^{+}\\left( y\\right) \\wedge \\mu _{B_{2}}^{+}\\left( y\\right) \\right) \\\\\n&=&\\left( \\mu _{B_{1}}^{+}\\cap \\mu _{B_{2}}^{+}\\right) \\left( x\\right) \\wedge \\left( \\mu _{B_{1}}^{+}\\cap \\mu _{B_{2}}^{+}\\right) \\left( y\\right) .\n\\end{eqnarray*}\nAnd\n\\begin{eqnarray*}\n\\left( \\mu _{B_{1}}^{-}\\cup \\mu _{B_{2}}^{-}\\right) \\left( xy\\right) &=&\\mu _{B_{1}}^{-}\\left( xy\\right) \\vee \\mu _{B_{2}}^{-}\\left( xy\\right) \\\\\n&\\leq &\\left( \\mu _{B_{1}}^{-}\\left( x\\right) \\vee \\mu _{B_{1}}^{-}\\left( y\\right) \\right) \\vee \\left( \\mu _{B_{2}}^{-}\\left( x\\right) \\vee \\mu _{B_{2}}^{-}\\left( y\\right) \\right) \\\\\n&=&\\left( \\mu _{B_{1}}^{-}\\left( x\\right) \\vee \\mu _{B_{2}}^{-}\\left( x\\right) \\right) \\vee \\left( \\mu _{B_{1}}^{-}\\left( y\\right) \\vee \\mu _{B_{2}}^{-}\\left( y\\right) \\right) \\\\\n&=&\\left( \\mu _{B_{1}}^{-}\\cup \\mu _{B_{2}}^{-}\\right) \\left( x\\right) \\vee \\left( \\mu _{B_{1}}^{-}\\cup \\mu _{B_{2}}^{-}\\right) \\left( y\\right) .\n\\end{eqnarray*}\nThus $B_{1}\\cap B_{2}$ is also a BVF-LA-subsemigroup of $S$. $\\ \\ \\Box $\n\n\\begin{proposition}\n\\textit{Let }$B_{1}=\\left\\langle \\mu _{B_{1}}^{+},\\mu _{B_{1}}^{-}\\right\\rangle $\\textit{\\ and }$B_{2}=\\left\\langle \\mu _{B_{2}}^{+},\\mu _{B_{2}}^{-}\\right\\rangle $\\textit{\\ be two BVF-left (resp. BVF-right, BVF-two-sided) ideals of }$S$\\textit{. Then }$B_{1}\\cap B_{2}$\\textit{\\ is also a BVF-left (resp. BVF-right, BVF-two-sided) ideal of }$S.$\n\\end{proposition}\n\n\\textbf{Proof.\\ }The proof is similar to the proof of Proposition \\ref{P101}. $\\ \\ \\Box $\n\n\\begin{lemma}\n\\label{L2}In an LA-semigroup $S$ with left identity $e$, for every BVF-left ideal $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ of $S$ we have $\\Gamma \\circ B=B,$ where $\\Gamma =\\left\\langle \\mathcal{S}_{\\Gamma }^{+}(x),\\mathcal{S}_{\\Gamma }^{-}(x)\\right\\rangle .$\n\\end{lemma}\n\n\\textbf{Proof. }Let $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ be a BVF-left ideal of $S.$ Since $B$ is a BVF-left ideal, we already have $\\mathcal{S}_{\\Gamma }^{+}\\circ \\mu _{B}^{+}\\subseteq \\mu _{B}^{+}$ and $\\mathcal{S}_{\\Gamma }^{-}\\circ \\mu _{B}^{-}\\supseteq \\mu _{B}^{-}$, so it is sufficient to show the reverse inclusions $\\mathcal{S}_{\\Gamma }^{+}\\circ \\mu _{B}^{+}\\supseteq \\mu _{B}^{+}$ and $\\mathcal{S}_{\\Gamma }^{-}\\circ \\mu _{B}^{-}\\subseteq \\mu _{B}^{-}$. Now $x=ex$ for all $x$ in $S$, as $e$ is the left identity in $S$. So\n\\begin{equation*}\n(\\mathcal{S}_{\\Gamma }^{+}\\circ \\mu _{B}^{+})(x)={\\bigvee }_{x=yz}\\left\\{ \\mathcal{S}_{\\Gamma }^{+}(y)\\wedge \\mu _{B}^{+}(z)\\right\\} \\geq \\mathcal{S}_{\\Gamma }^{+}(e)\\wedge \\mu _{B}^{+}(x)=1\\wedge \\mu _{B}^{+}(x)=\\mu _{B}^{+}(x).\n\\end{equation*}\nAnd\n\\begin{equation*}\n(\\mathcal{S}_{\\Gamma }^{-}\\circ \\mu _{B}^{-})(x)={\\bigwedge }_{x=yz}\\left\\{ \\mathcal{S}_{\\Gamma }^{-}(y)\\vee \\mu _{B}^{-}(z)\\right\\} \\leq \\mathcal{S}_{\\Gamma }^{-}(e)\\vee \\mu _{B}^{-}(x)=-1\\vee \\mu _{B}^{-}(x)=\\mu _{B}^{-}(x).\n\\end{equation*}\nThus $\\mathcal{S}_{\\Gamma }^{+}\\circ \\mu _{B}^{+}\\supseteq \\mu _{B}^{+}$ and $\\mathcal{S}_{\\Gamma }^{-}\\circ \\mu _{B}^{-}\\subseteq \\mu _{B}^{-}.$ Hence $\\Gamma \\circ B=B.$ $\\ \\ \\Box $\n\n\\textbf{Definition 3.4 }Let $S$ be an LA-semigroup and let $\\emptyset \\neq A\\subseteq S.$ Then the bipolar-valued fuzzy characteristic function $\\chi _{A}=\\left\\langle \\mu _{\\chi _{A}}^{+},\\mu _{\\chi _{A}}^{-}\\right\\rangle $ of $A$ is defined as\n\\begin{equation*}\n\\mu _{\\chi _{A}}^{+}(x)=\\left\\{ \n\\begin{array}{c}\n1\\text{ \\ \\ if }x\\in A \\\\ \n0\\text{ \\ \\ if }x\\notin A\n\\end{array}\n\\right. \\text{ \\ \\ and \\ }\\mu _{\\chi _{A}}^{-}(x)=\\left\\{ \n\\begin{array}{c}\n-1\\text{ \\ \\ if }x\\in A \\\\ \n\\text{ \\ }0\\text{ \\ \\ if }x\\notin A\n\\end{array}\n\\right.\n\\end{equation*}\n\n\\begin{theorem}\n\\label{T119}Let $A$ be a nonempty subset of an LA-semigroup $S$. Then $A$ is an LA-subsemigroup of $S$ if and only if $\\chi _{A}$ is a BVF-LA-subsemigroup of $S$.\n\\end{theorem}\n\n\\textbf{Proof. }Let $A$ be an LA-subsemigroup of $S$. For any $x,y\\in S,$ we have the following cases:\n\nCase $\\left( 1\\right) :$ If $x,y\\in A$, then $xy\\in A$, since $A$ is an LA-subsemigroup of $S$. Then $\\mu _{\\chi _{A}}^{+}\\left( xy\\right) =1,$ $\\mu _{\\chi _{A}}^{+}\\left( x\\right) =1$ and $\\mu _{\\chi _{A}}^{+}\\left( y\\right) =1$. 
Therefore\n\\begin{equation*}\n\\mu _{\\chi _{A}}^{+}\\left( xy\\right) =\\mu _{\\chi _{A}}^{+}\\left( x\\right) \\wedge \\mu _{\\chi _{A}}^{+}\\left( y\\right) .\n\\end{equation*}\nAnd $\\mu _{\\chi _{A}}^{-}\\left( xy\\right) =-1,$ $\\mu _{\\chi _{A}}^{-}\\left( x\\right) =-1$ and $\\mu _{\\chi _{A}}^{-}\\left( y\\right) =-1$. Therefore\n\\begin{equation*}\n\\mu _{\\chi _{A}}^{-}\\left( xy\\right) =\\mu _{\\chi _{A}}^{-}\\left( x\\right) \\vee \\mu _{\\chi _{A}}^{-}\\left( y\\right) .\n\\end{equation*}\nCase $\\left( 2\\right) :$ If $x,y\\notin A$, then $\\mu _{\\chi _{A}}^{+}\\left( x\\right) =0$ and $\\mu _{\\chi _{A}}^{+}\\left( y\\right) =0$. So\n\\begin{equation*}\n\\mu _{\\chi _{A}}^{+}\\left( xy\\right) \\geq 0=\\mu _{\\chi _{A}}^{+}\\left( x\\right) \\wedge \\mu _{\\chi _{A}}^{+}\\left( y\\right) .\n\\end{equation*}\nAnd $\\mu _{\\chi _{A}}^{-}\\left( x\\right) =0$ and $\\mu _{\\chi _{A}}^{-}\\left( y\\right) =0$. So\n\\begin{equation*}\n\\mu _{\\chi _{A}}^{-}\\left( xy\\right) \\leq 0=\\mu _{\\chi _{A}}^{-}\\left( x\\right) \\vee \\mu _{\\chi _{A}}^{-}\\left( y\\right) .\n\\end{equation*}\n\nCase $\\left( 3\\right) :$ Exactly one of $x$ and $y$ belongs to $A$. If $x\\in A$ and $y\\notin A$, then $\\mu _{\\chi _{A}}^{+}\\left( x\\right) =1$ and $\\mu _{\\chi _{A}}^{+}\\left( y\\right) =0$. So\n\\begin{equation*}\n\\mu _{\\chi _{A}}^{+}\\left( xy\\right) \\geq 0=\\mu _{\\chi _{A}}^{+}\\left( x\\right) \\wedge \\mu _{\\chi _{A}}^{+}\\left( y\\right) .\n\\end{equation*}\nNow if $x\\notin A$ and $y\\in A$, then $\\mu _{\\chi _{A}}^{+}\\left( x\\right) =0$ and $\\mu _{\\chi _{A}}^{+}\\left( y\\right) =1$. So\n\\begin{equation*}\n\\mu _{\\chi _{A}}^{+}\\left( xy\\right) \\geq 0=\\mu _{\\chi _{A}}^{+}\\left( x\\right) \\wedge \\mu _{\\chi _{A}}^{+}\\left( y\\right) .\n\\end{equation*}\nSimilarly for the negative part: if $x\\in A$ and $y\\notin A$, then $\\mu _{\\chi _{A}}^{-}\\left( x\\right) =-1$ and $\\mu _{\\chi _{A}}^{-}\\left( y\\right) =0$. So\n\\begin{equation*}\n\\mu _{\\chi _{A}}^{-}\\left( xy\\right) \\leq 0=\\mu _{\\chi _{A}}^{-}\\left( x\\right) \\vee \\mu _{\\chi _{A}}^{-}\\left( y\\right) .\n\\end{equation*}\nNow if $x\\notin A$ and $y\\in A$, then $\\mu _{\\chi _{A}}^{-}\\left( x\\right) =0$ and $\\mu _{\\chi _{A}}^{-}\\left( y\\right) =-1$. So\n\\begin{equation*}\n\\mu _{\\chi _{A}}^{-}\\left( xy\\right) \\leq 0=\\mu _{\\chi _{A}}^{-}\\left( x\\right) \\vee \\mu _{\\chi _{A}}^{-}\\left( y\\right) .\n\\end{equation*}\nHence $\\chi _{A}=\\left\\langle \\mu _{\\chi _{A}}^{+},\\mu _{\\chi _{A}}^{-}\\right\\rangle $ is a BVF-LA-subsemigroup of $S$.\n\nConversely, suppose $\\chi _{A}=\\left\\langle \\mu _{\\chi _{A}}^{+},\\mu _{\\chi _{A}}^{-}\\right\\rangle $ is a BVF-LA-subsemigroup of $S$ and let $x,y\\in A$. Then we have\n\\begin{eqnarray*}\n\\mu _{\\chi _{A}}^{+}\\left( xy\\right) &\\geq &\\mu _{\\chi _{A}}^{+}\\left( x\\right) \\wedge \\mu _{\\chi _{A}}^{+}\\left( y\\right) =1\\wedge 1=1, \\\\\n\\mu _{\\chi _{A}}^{+}\\left( xy\\right) &\\geq &1\\text{ but }\\mu _{\\chi _{A}}^{+}\\left( xy\\right) \\leq 1, \\\\\n\\mu _{\\chi _{A}}^{+}\\left( xy\\right) &=&1.\n\\end{eqnarray*}\nAnd\n\\begin{eqnarray*}\n\\mu _{\\chi _{A}}^{-}\\left( xy\\right) &\\leq &\\mu _{\\chi _{A}}^{-}\\left( x\\right) \\vee \\mu _{\\chi _{A}}^{-}\\left( y\\right) =-1\\vee -1=-1, \\\\\n\\mu _{\\chi _{A}}^{-}\\left( xy\\right) &\\leq &-1\\text{ but }\\mu _{\\chi _{A}}^{-}\\left( xy\\right) \\geq -1, \\\\\n\\mu _{\\chi _{A}}^{-}\\left( xy\\right) &=&-1.\n\\end{eqnarray*}\nHence $xy\\in A$. Therefore $A$ is an LA-subsemigroup of $S.$ $\\ \\ \\Box $\n\n\\begin{theorem}\nLet $A$ be a nonempty subset of an LA-semigroup $S$. Then $A$ is a left (resp. 
right) ideal of $S$ if and only if $\\chi _{A}$ is a BVF-left (resp. BVF-right) ideal of $S$.\n\\end{theorem}\n\n\\textbf{Proof. }The proof of this theorem is similar to Theorem \\ref{T119}. $\\ \\ \\Box $\n\n\\textbf{Definition 3.5 }A BVF-subset $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ of an LA-semigroup $S$ is called a \\textbf{BVF-generalized bi-ideal} of $S$ if\n\\begin{equation*}\n\\mu _{B}^{+}\\left( (xy)z\\right) \\geq \\mu _{B}^{+}(x)\\wedge \\mu _{B}^{+}(z)\\text{ \\ and \\ }\\mu _{B}^{-}\\left( (xy)z\\right) \\leq \\mu _{B}^{-}(x)\\vee \\mu _{B}^{-}(z),\\text{ for all }x,y,z\\in S.\n\\end{equation*}\n\n\\textbf{Definition 3.6 }A BVF-LA-subsemigroup $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ of an LA-semigroup $S$ is called a \\textbf{BVF-bi-ideal} of $S$ if\n\\begin{equation*}\n\\mu _{B}^{+}\\left( (xy)z\\right) \\geq \\mu _{B}^{+}(x)\\wedge \\mu _{B}^{+}(z)\\text{ \\ and \\ }\\mu _{B}^{-}\\left( (xy)z\\right) \\leq \\mu _{B}^{-}(x)\\vee \\mu _{B}^{-}(z),\\text{ for all }x,y,z\\in S.\n\\end{equation*}\n\n\\begin{lemma}\n\\label{L57}A BVF-subset $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ of an LA-semigroup $S$ is a BVF-generalized bi-ideal of $S$ if and only if $\\left( \\mu _{B}^{+}\\circ \\mathcal{S}_{\\Gamma }^{+}\\right) \\circ \\mu _{B}^{+}\\subseteq \\mu _{B}^{+}$ and $\\left( \\mu _{B}^{-}\\circ \\mathcal{S}_{\\Gamma }^{-}\\right) \\circ \\mu _{B}^{-}\\supseteq \\mu _{B}^{-}.$\n\\end{lemma}\n\n\\textbf{Proof. 
}Let $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ be a BVF-generalized bi-ideal of an LA-semigroup $S$ and $x\\in S.$ If $\\left( \\left( \\mu _{B}^{+}\\circ \\mathcal{S}_{\\Gamma }^{+}\\right) \\circ \\mu _{B}^{+}\\right) \\left( x\\right) =0$ and $\\left( \\left( \\mu _{B}^{-}\\circ \\mathcal{S}_{\\Gamma }^{-}\\right) \\circ \\mu _{B}^{-}\\right) (x)=0,$ then\n\\begin{equation*}\n\\left( \\left( \\mu _{B}^{+}\\circ \\mathcal{S}_{\\Gamma }^{+}\\right) \\circ \\mu _{B}^{+}\\right) \\left( x\\right) =0\\leq \\mu _{B}^{+}\\left( x\\right) \\text{ \\ and \\ }\\left( \\left( \\mu _{B}^{-}\\circ \\mathcal{S}_{\\Gamma }^{-}\\right) \\circ \\mu _{B}^{-}\\right) (x)=0\\geq \\mu _{B}^{-}(x).\n\\end{equation*}\nOtherwise,\n\\begin{eqnarray*}\n\\left( \\left( \\mu _{B}^{+}\\circ \\mathcal{S}_{\\Gamma }^{+}\\right) \\circ \\mu _{B}^{+}\\right) \\left( x\\right) &=&{\\bigvee }_{x=ab}\\left\\{ \\left( \\mu _{B}^{+}\\circ \\mathcal{S}_{\\Gamma }^{+}\\right) \\left( a\\right) \\wedge \\mu _{B}^{+}\\left( b\\right) \\right\\} \\\\\n&=&{\\bigvee }_{x=ab}\\left\\{ {\\bigvee }_{a=mn}\\left\\{ \\mu _{B}^{+}\\left( m\\right) \\wedge \\mathcal{S}_{\\Gamma }^{+}\\left( n\\right) \\right\\} \\wedge \\mu _{B}^{+}\\left( b\\right) \\right\\} \\\\\n&=&{\\bigvee }_{x=ab}{\\bigvee }_{a=mn}\\left\\{ \\left( \\mu _{B}^{+}\\left( m\\right) \\wedge 1\\right) \\wedge \\mu _{B}^{+}\\left( b\\right) \\right\\} \\\\\n&=&{\\bigvee }_{x=ab}{\\bigvee }_{a=mn}\\left\\{ \\mu _{B}^{+}\\left( m\\right) \\wedge \\mu _{B}^{+}\\left( b\\right) \\right\\} \\\\\n&\\leq &\\mu _{B}^{+}\\left( x\\right) .\n\\end{eqnarray*}\nAnd\n\\begin{eqnarray*}\n\\left( \\left( \\mu _{B}^{-}\\circ \\mathcal{S}_{\\Gamma }^{-}\\right) \\circ \\mu _{B}^{-}\\right) \\left( x\\right) &=&{\\bigwedge }_{x=ab}\\left\\{ \\left( \\mu _{B}^{-}\\circ \\mathcal{S}_{\\Gamma }^{-}\\right) \\left( a\\right) \\vee \\mu _{B}^{-}\\left( b\\right) \\right\\} \\\\\n&=&{\\bigwedge }_{x=ab}\\left\\{ {\\bigwedge }_{a=mn}\\left\\{ \\mu _{B}^{-}\\left( m\\right) \\vee \\mathcal{S}_{\\Gamma }^{-}\\left( n\\right) \\right\\} \\vee \\mu _{B}^{-}\\left( b\\right) \\right\\} \\\\\n&=&{\\bigwedge }_{x=ab}{\\bigwedge }_{a=mn}\\left\\{ \\left( \\mu _{B}^{-}\\left( m\\right) \\vee -1\\right) \\vee \\mu _{B}^{-}\\left( b\\right) \\right\\} \\\\\n&=&{\\bigwedge }_{x=ab}{\\bigwedge }_{a=mn}\\left\\{ \\mu _{B}^{-}\\left( m\\right) \\vee \\mu _{B}^{-}\\left( b\\right) \\right\\} \\\\\n&\\geq &\\mu _{B}^{-}\\left( x\\right) .\n\\end{eqnarray*}\nThus $\\left( \\mu _{B}^{+}\\circ \\mathcal{S}_{\\Gamma }^{+}\\right) \\circ \\mu _{B}^{+}\\subseteq \\mu _{B}^{+}$ and $\\left( \\mu _{B}^{-}\\circ \\mathcal{S}_{\\Gamma }^{-}\\right) \\circ \\mu _{B}^{-}\\supseteq \\mu _{B}^{-}.$\\newline\nConversely, assume that $\\left( \\mu _{B}^{+}\\circ \\mathcal{S}_{\\Gamma }^{+}\\right) \\circ \\mu _{B}^{+}\\subseteq \\mu _{B}^{+}$ and $\\left( \\mu _{B}^{-}\\circ \\mathcal{S}_{\\Gamma }^{-}\\right) \\circ \\mu _{B}^{-}\\supseteq \\mu _{B}^{-}.$ Let $x,y,z\\in S.$ Then\n\\begin{eqnarray*}\n\\mu _{B}^{+}\\left( (xy)z\\right) &\\geq &\\left( \\left( \\mu _{B}^{+}\\circ \\mathcal{S}_{\\Gamma }^{+}\\right) \\circ \\mu _{B}^{+}\\right) \\left( (xy)z\\right) ={\\bigvee }_{(xy)z=cd}\\left\\{ \\left( \\mu _{B}^{+}\\circ \\mathcal{S}_{\\Gamma }^{+}\\right) \\left( c\\right) \\wedge \\mu _{B}^{+}\\left( d\\right) \\right\\} \\\\\n&\\geq &\\left( \\mu _{B}^{+}\\circ \\mathcal{S}_{\\Gamma }^{+}\\right) \\left( xy\\right) \\wedge \\mu _{B}^{+}\\left( z\\right) =\\left\\{ {\\bigvee }_{xy=pq}\\left\\{ \\mu _{B}^{+}\\left( p\\right) \\wedge \\mathcal{S}_{\\Gamma }^{+}\\left( q\\right) \\right\\} \\right\\} \\wedge \\mu _{B}^{+}\\left( z\\right) \\\\\n&\\geq &\\left\\{ \\mu _{B}^{+}\\left( x\\right) \\wedge \\mathcal{S}_{\\Gamma }^{+}\\left( y\\right) \\right\\} \\wedge \\mu _{B}^{+}\\left( z\\right) =\\left\\{ \\mu _{B}^{+}\\left( x\\right) \\wedge 1\\right\\} \\wedge \\mu _{B}^{+}\\left( z\\right) \\\\\n&=&\\mu _{B}^{+}\\left( x\\right) \\wedge \\mu _{B}^{+}\\left( z\\right) .\n\\end{eqnarray*}\nAnd\n\\begin{eqnarray*}\n\\mu _{B}^{-}\\left( (xy)z\\right) &\\leq &\\left( \\left( \\mu _{B}^{-}\\circ \\mathcal{S}_{\\Gamma }^{-}\\right) \\circ \\mu _{B}^{-}\\right) \\left( (xy)z\\right) ={\\bigwedge }_{(xy)z=cd}\\left\\{ \\left( \\mu _{B}^{-}\\circ \\mathcal{S}_{\\Gamma }^{-}\\right) \\left( c\\right) \\vee \\mu _{B}^{-}\\left( d\\right) \\right\\} \\\\\n&\\leq &\\left( \\mu _{B}^{-}\\circ \\mathcal{S}_{\\Gamma }^{-}\\right) \\left( xy\\right) \\vee \\mu _{B}^{-}\\left( z\\right) =\\left\\{ {\\bigwedge }_{xy=pq}\\left\\{ \\mu _{B}^{-}\\left( p\\right) \\vee \\mathcal{S}_{\\Gamma }^{-}\\left( q\\right) \\right\\} \\right\\} \\vee \\mu _{B}^{-}\\left( z\\right) \\\\\n&\\leq &\\left\\{ \\mu _{B}^{-}\\left( x\\right) \\vee \\mathcal{S}_{\\Gamma }^{-}\\left( y\\right) \\right\\} \\vee \\mu _{B}^{-}\\left( z\\right) =\\left\\{ \\mu _{B}^{-}\\left( x\\right) \\vee -1\\right\\} \\vee \\mu _{B}^{-}\\left( z\\right) \\\\\n&=&\\mu _{B}^{-}\\left( x\\right) \\vee \\mu _{B}^{-}\\left( z\\right) .\n\\end{eqnarray*}\nThus $\\mu _{B}^{+}\\left( (xy)z\\right) \\geq \\mu _{B}^{+}\\left( x\\right) \\wedge \\mu _{B}^{+}\\left( z\\right) $ and $\\mu _{B}^{-}\\left( (xy)z\\right) \\leq \\mu _{B}^{-}\\left( x\\right) \\vee \\mu _{B}^{-}\\left( z\\right) ,$ which implies that $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a BVF-generalized bi-ideal of $S.$ $\\ \\ \\Box $\n\n\\begin{lemma}\n\\label{L30}Let $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ be a BVF-subset of an LA-semigroup $S$. Then $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a BVF-bi-ideal of $S$ if and only if $\\mu _{B}^{+}\\circ \\mu _{B}^{+}\\subseteq \\mu _{B}^{+},$ $\\mu _{B}^{-}\\circ \\mu _{B}^{-}\\supseteq \\mu _{B}^{-},$ $\\left( \\mu _{B}^{+}\\circ \\mathcal{S}_{\\Gamma }^{+}\\right) \\circ \\mu _{B}^{+}\\subseteq \\mu _{B}^{+}$ and 
$\\left( \\mu _{B}^{-}\\circ \\mathcal{S}_{\\Gamma }^{-}\\right) \\circ \\mu _{B}^{-}\\supseteq \\mu _{B}^{-}.$\n\\end{lemma}\n\n\\textbf{Proof. }Follows from Lemma \\ref{L56}(1) and Lemma \\ref{L57}. $\\ \\ \\Box $\n\n\\textbf{Definition 3.7 }A BVF-subset $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ of an LA-semigroup $S$ is called a BVF-interior ideal of $S$ if\n\\begin{equation*}\n\\mu _{B}^{+}\\left( (xy)z\\right) \\geq \\mu _{B}^{+}\\left( y\\right) \\text{ \\ and \\ }\\mu _{B}^{-}\\left( (xy)z\\right) \\leq \\mu _{B}^{-}\\left( y\\right) ,\\text{ for all }x,y,z\\in S.\n\\end{equation*}\n\n\\begin{lemma}\n\\label{L31}Let $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ be a BVF-subset of an LA-semigroup $S$. Then $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a BVF-interior ideal of $S$ if and only if $\\left( \\mathcal{S}_{\\Gamma }^{+}\\circ \\mu _{B}^{+}\\right) \\circ \\mathcal{S}_{\\Gamma }^{+}\\subseteq \\mu _{B}^{+}$ and $\\left( \\mathcal{S}_{\\Gamma }^{-}\\circ \\mu _{B}^{-}\\right) \\circ \\mathcal{S}_{\\Gamma }^{-}\\supseteq \\mu _{B}^{-}.$\n\\end{lemma}\n\n\\textbf{Proof. }The proof of this lemma is similar to the proof of Lemma \\ref{L57}. $\\ \\ \\Box $\n\n\\begin{remark}\nEvery BVF-ideal is a BVF-interior ideal of an LA-semigroup $S,$ but the converse is not true.\n\\end{remark}\n\n\\textbf{Example 3.2 }Let $S=\\{a,b,c,d\\}$ and let the binary operation ``$\\cdot $'' on $S$ be defined as follows:\n\\begin{equation*}\n\\begin{tabular}{l|llll}\n$\\cdot $ & $a$ & $b$ & $c$ & $d$ \\\\ \\hline\n$a$ & $c$ & $c$ & $c$ & $d$ \\\\ \n$b$ & $d$ & $d$ & $c$ & $c$ \\\\ \n$c$ & $d$ & $d$ & $d$ & $d$ \\\\ \n$d$ & $d$ & $d$ & $d$ & $d$\n\\end{tabular}\n\\end{equation*}\nClearly, $S$ is an LA-semigroup. 
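Because $S$ has only four elements, the left invertive law $(x\cdot y)\cdot z=(z\cdot y)\cdot x$ behind this claim can be checked exhaustively. A minimal Python sketch of such a check (the dictionary `mul` encodes the Cayley table above):

```python
# Cayley table for S = {a, b, c, d} from Example 3.2.
mul = {
    ('a', 'a'): 'c', ('a', 'b'): 'c', ('a', 'c'): 'c', ('a', 'd'): 'd',
    ('b', 'a'): 'd', ('b', 'b'): 'd', ('b', 'c'): 'c', ('b', 'd'): 'c',
    ('c', 'a'): 'd', ('c', 'b'): 'd', ('c', 'c'): 'd', ('c', 'd'): 'd',
    ('d', 'a'): 'd', ('d', 'b'): 'd', ('d', 'c'): 'd', ('d', 'd'): 'd',
}
S = ['a', 'b', 'c', 'd']

def m(x, y):
    return mul[(x, y)]

# Left invertive law: (x.y).z == (z.y).x for all triples.
assert all(m(m(x, y), z) == m(m(z, y), x)
           for x in S for y in S for z in S)
```

Replacing the checked identity by `m(x, m(y, z)) == m(m(x, y), z)` exhibits the failure of associativity at the triple $(a,a,b)$.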
But $S$ is not a semigroup because $c=a\\cdot (a\\cdot b)\\neq (a\\cdot a)\\cdot b=d.$ Now we define a BVF-subset as\n\\begin{equation*}\nB=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle =\\left\\langle \\left( \\frac{a}{0.5},\\frac{b}{0.3},\\frac{c}{0.1},\\frac{d}{0.8}\\right) ,\\ \\left( \\frac{a}{-0.7},\\frac{b}{-0.4},\\frac{c}{-0.2},\\frac{d}{-0.9}\\right) \\right\\rangle .\n\\end{equation*}\nIt can be verified that $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a BVF-interior ideal of $S.$ But since\n\\begin{equation*}\n\\mu _{B}^{+}\\left( b\\cdot c\\right) =\\mu _{B}^{+}\\left( c\\right) =0.1<0.3=\\mu _{B}^{+}\\left( b\\right) \n\\end{equation*}\nand\n\\begin{equation*}\n\\mu _{B}^{-}\\left( b\\cdot c\\right) =\\mu _{B}^{-}\\left( c\\right) =-0.2>-0.4=\\mu _{B}^{-}\\left( b\\right) ,\n\\end{equation*}\n$B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is not a BVF-right ideal of $S,$ that is, $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is not a BVF-two-sided ideal of $S.$\n\n\\begin{proposition}\nEvery BVF-subset $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ of an LA-semigroup $S$ with left identity is a BVF-right ideal if and only if it is a BVF-interior ideal.\n\\end{proposition}\n\n\\textbf{Proof. }Suppose the BVF-subset $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ of $S$ is a BVF-right ideal. For $x$, $a$ and $y$ in $S$, consider\n\\begin{equation*}\n\\mu _{B}^{+}((xa)y)\\geq \\mu _{B}^{+}(xa)=\\mu _{B}^{+}((ex)a)=\\mu _{B}^{+}((ax)e)\\geq \\mu _{B}^{+}(ax)\\geq \\mu _{B}^{+}(a).\n\\end{equation*}\nAnd\n\\begin{equation*}\n\\mu _{B}^{-}((xa)y)\\leq \\mu _{B}^{-}(xa)=\\mu _{B}^{-}((ex)a)=\\mu _{B}^{-}((ax)e)\\leq \\mu _{B}^{-}(ax)\\leq \\mu _{B}^{-}(a),\n\\end{equation*}\nwhich implies that $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a BVF-interior ideal. Conversely, for any $x$ and $y$ in $S$ we have\n\\begin{equation*}\n\\mu _{B}^{+}(xy)=\\mu _{B}^{+}((ex)y)\\geq \\mu _{B}^{+}(x)\\text{ \\ and \\ }\\mu _{B}^{-}(xy)=\\mu _{B}^{-}((ex)y)\\leq \\mu _{B}^{-}(x).\n\\end{equation*}\nHence $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a BVF-right ideal, as required. $\\ \\ \\Box $\n\n\\begin{theorem}\nLet $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ be a BVF-left ideal of an LA-semigroup $S$ with left identity. Then $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a BVF-bi-ideal of $S$.\n\\end{theorem}\n\n\\textbf{Proof. }Since $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a BVF-left ideal of $S$, we have $\\mu _{B}^{+}(xy)\\geq \\mu _{B}^{+}(y)$ and $\\mu _{B}^{-}(xy)\\leq \\mu _{B}^{-}(y)$ for all $x$ and $y$ in $S$. As $e$ is the left identity in $S$, we get\n\\begin{equation*}\n\\mu _{B}^{+}(xy)=\\mu _{B}^{+}((ex)y)\\geq \\mu _{B}^{+}(x)\\text{ \\ and \\ }\\mu _{B}^{-}(xy)=\\mu _{B}^{-}((ex)y)\\leq \\mu _{B}^{-}(x),\n\\end{equation*}\nwhich implies that $\\mu _{B}^{+}(xy)\\geq \\mu _{B}^{+}(x)\\wedge \\mu _{B}^{+}(y)$ and $\\mu _{B}^{-}(xy)\\leq \\mu _{B}^{-}(x)\\vee \\mu _{B}^{-}(y)$ for all $x$ and $y$ in $S$. Thus $B=\\left\\langle \\mu _{B}^{+},\\mu _{B}^{-}\\right\\rangle $ is a BVF-LA-subsemigroup of $S$. 
For any $x,y$ and \nz$ in $S$, we ge\n\\begin{equation*}\n\\mu _{B}^{+}((xy)z)=\\mu _{B}^{+}((x(ey))z)=\\mu _{B}^{+}((e(xy))z)\\geq \\mu\n_{B}^{+}(xy)=\\mu _{B}^{+}((ex)y)\\geq \\mu _{B}^{+}(x).\n\\end{equation*\nAn\n\\begin{equation*}\n\\mu _{B}^{-}((xy)z)=\\mu _{B}^{-}((x(ey))z)=\\mu _{B}^{-}((e(xy))z)\\leq \\mu\n_{B}^{-}(xy)=\\mu _{B}^{-}((ex)y)\\leq \\mu _{B}^{-}(x).\n\\end{equation*\nAls\n\\begin{equation*}\n\\mu _{B}^{+}((xy)z)=\\mu _{B}^{+}((zy)x)=\\mu _{B}^{+}((z(ey))x)=\\mu\n_{B}^{+}((e(zy))x)\\geq \\mu _{B}^{+}(zy)=\\mu _{B}^{+}((ez)y)\\geq \\mu\n_{B}^{+}(z).\n\\end{equation*\nAn\n\\begin{equation*}\n\\mu _{B}^{-}((xy)z)=\\mu _{B}^{-}((zy)x)=\\mu _{B}^{-}((z(ey))x)=\\mu\n_{B}^{-}((e(zy))x)\\leq \\mu _{B}^{-}(zy)=\\mu _{B}^{-}((ez)y)\\leq \\mu\n_{B}^{-}(z).\n\\end{equation*\nHence $\\mu _{B}^{+}((xy)z)\\geq \\mu _{B}^{+}(x)\\wedge \\mu _{B}^{+}(z)$ and \n\\mu _{B}^{-}((xy)z)\\leq \\mu _{B}^{-}(x)\\vee \\mu _{B}^{-}(z)$ for all $x,y$\nand $z$ in $S$. $\\ \\ \\Box $\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nIn 1954 Wick and Cutkosky noticed that a certain class of ladder-type Feynman integrals with massive propagators features a massive dual conformal symmetry \\cite{Wick:1954eu,Cutkosky:1954ru}. \nWhile most of the formal insights into quantum field theory inspired by the AdS\/CFT correspondence are limited to massless situations, this massive dual conformal symmetry is naturally realized in the context of this duality~\\cite{Alday:2009zm,Caron-Huot:2014gia}. In particular, the extended dual conformal symmetry limits the variables that certain massive Feynman integrals can depend on and thus simplifies their computation. In the present letter we argue that for large classes of Feynman integrals, this massive dual conformal symmetry is in fact only the zeroth level of an infinite dimensional massive Yangian algebra. 
In addition to limiting the number of variables, this new symmetry strongly constrains the functional form of the integrals. While these symmetry properties naturally extend the observations on the integrability of massless Feynman integrals \\cite{Chicherin:2017cns,Chicherin:2017frs,Loebbert:2019vcj}, to the knowledge of the authors this is the first occurrence of quantum integrability in massive quantum field theory in $D>2$ spacetime dimensions. \n\nFor \\emph{massless} $\\mathcal{N}=4$ super Yang--Mills (SYM) theory it was recently argued that planar integrability is preserved in a certain double scaling limit, which (in the simplest case) results in the so-called bi-scalar fishnet theory \\cite{Gurdogan:2015csr}. Here, individual (massless) Feynman integrals inherit the Yangian symmetry that underlies the integrability of the prototypical examples of the AdS\/CFT duality \\cite{Chicherin:2017cns,Chicherin:2017frs}. A similar starting point, i.e.\\ an \\emph{integrable massive} avatar of $\\mathcal{N}=4$ super Yang--Mills theory, is not known. We thus investigate massive Feynman integrals directly, i.e.\\ we consider the properties of functions of the type\n\\begin{equation}\n\\includegraphicsbox{FigTwoStars.pdf}\n\\quad\n=\n\\int \\frac{\\mathrm{d}^D x_0 \\mathrm{d}^D x_{\\bar 0}}\n{\n\\hat x_{01}^{2a_1}\n\\hat x_{02}^{2a_2}\nx_{0\\bar 0}^{2b_0}\n\\hat x_{\\bar 03}^{2a_3}\n\\hat x_{\\bar 04}^{2a_4}\n},\n\\end{equation}\nwhere $x_{jk}^\\mu=x_j^\\mu-x_k^\\mu$ and $\\hat x_{jk}^2=x_{jk}^2+(m_j-m_k)^2$. 
Here the dashed internal propagator is massless, i.e.\\ $m_0=m_{\\bar 0}=0$, while the other propagators are massive.\nThe $x$-variables denote dualized momenta (dotted green diagram) related via $p^{\\mu}_j=x^{\\mu}_j-x^{\\mu}_{j+1}$ \\footnote{Note that the $p_j^2$ are unconstrained and the $m_j$ are generic; we have $x^{\\mu}_j=x^{\\mu}_{1}-\\sum_{k < j} p^{\\mu}_k $ and $x_{n+1}^\\mu=x_{1}^\\mu$.}.\nOur findings suggest that all Feynman graphs, which are cut from regular tilings of the plane and have massive propagators on the boundary, feature a massive $D$-dimensional Yangian symmetry. \nWe will demonstrate the usefulness of this Yangian for bootstrapping massive Feynman integrals. Finally, we will show that, when translated to momentum space, the non-local Yangian symmetry can be interpreted as a massive generalization of momentum space conformal symmetry. \nThis suggests interpreting this novel symmetry within the AdS\/CFT correspondence.\n\n\\section{Massive Yangian}\n\nMassive dual conformal symmetry is realized in the form of partial differential equations obeyed by coordinate space Feynman integrals. 
That is, the integrals are annihilated by the tensor product action of the level-zero dual conformal generators\n$\n\\gen{J}^a = \\sum_{j=1}^n \\gen{J}_{j}^a, \n$\nwhere $\\gen{J}_j^a$\ndenotes one of \nthe following densities acting on $x_j$:\n\\begin{align}\n\\gen{P}^{\\hat \\mu}_j &= -i \\partial_{x_{j}}^{\\hat \\mu}, \n\\qquad\\qquad\n\\gen{L}_j^{\\hat \\mu\\hat \\nu} = i x_j^{\\hat \\mu} \\partial_{x_{j}}^{\\hat \\nu} - ix^{\\hat \\nu}_j \\partial_{x_{j}}^{\\hat \\mu}, \n\\nonumber\n\\\\\n\\gen{D}_j &= -i \\brk!{x_{j\\mu} \\partial_{x_j}^\\mu + m_j \\partial_{m_j} + \\Delta_j},\n\\label{eqn:massdualconfrep}\n\\\\\n \\gen{K}^{\\hat \\mu}_j &= -2ix_j^{\\hat \\mu}\\brk!{x_{j\\nu} \\partial_{x_j}^\\nu + m_j\\partial_{m_j} + \\Delta_j} +i (x^2_j + m^2_j)\\partial_{x_{j}}^{\\hat \\mu}.\\notag\n\\end{align}\nThese can be understood as massless generators in $D+1$ dimensions with $x_j^{D+1}=m_j$. Only the components $\\hat \\mu=1,\\dots,D$ of the generators correspond to symmetries.\nHere we work with the Euclidean metric and the index $\\hat{\\mu}$ runs from 1~to~$D+1$, while $\\mu$ runs from 1~to~$D$.\n\nThe massive Yangian is spanned by the above level-zero Lie algebra generators and the bi-local level-one generators \ndefined as\n\\begin{equation}\n\\label{eq:DefLev1}\n \\gen{\\widehat J}^a=\\sfrac{1}{2} f^a{}_{bc}\\sum_{j$10$^{10}$ \\\\\nFine-tuned on non-toxic & 1.8 & 0.03 & 17.2 \\\\\nRandom vector & 4.8 & 0.06 & 16.4 \\\\\\midrule\nNegative task vector & 0.8 & 0.01 & 16.9 \\\\\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:toxicity}\n\n\\end{table*}\nWe study whether we can mitigate a particular model behavior by negating a task vector \\emph{trained to do that behavior}.\nIn particular, we aim to reduce the amount of toxic generations produced by GPT-2 models of various sizes \\citep{radford2019language}.\nWe generate task vectors by fine-tuning on data from Civil Comments \\citep{borkan2019nuanced} where the toxicity score is \\textit{higher} than 
0.8, and then negating such task vectors.\nAs in Section \\ref{sec:forget_img}, we also compare against baselines that use gradient ascent when fine-tuning \\citep{golatkar2020eternal,tarun2021fast}, and using a random task vector of the same magnitude. Additionally, we compare against fine-tuning on non-toxic samples from Civil Comments (toxicity scores smaller than 0.2), similar to \\citet{liu2021dexperts}.\nWe measure the toxicity of one thousand model generations with Detoxify \\citep{Detoxify}. For the control task, we measure the perplexity of the language models on WikiText-103 \\citep{merity2016pointer}.\n\nAs shown in Table \\ref{tab:toxicity}, editing with negative task vectors is effective, reducing the amount of generations classified as toxic from 4.8\\% to 0.8\\%, while maintaining perplexity on the control task within 0.5 points of the pre-trained model.\nIn contrast, fine-tuning with gradient ascent lowers toxic generations by degrading performance on the control task to an unacceptable level, while fine-tuning on non-toxic data is worse than task vectors both in reducing task generations and on the control task. 
As an experimental control, adding a random vector has little impact either on toxic generations or perplexity on WikiText-103.\nWe present additional experimental details and results in Appendix \\ref{sec:appendix-neg-lang}.\n\\section{Learning via Addition}\n\\label{sec:addition}\n\nWe now turn our attention to \\emph{adding} task vectors, either to build multi-task models that are proficient on multiple tasks simultaneously, or to improve single-task performance.\nThis operation allows us to reuse and transfer knowledge either from in-house models, or from the multitude of publicly available fine-tuned models, without additional training or access to training data.\nWe explore addition on various image classification and natural language processing tasks.\n\n\\subsection{Image classification}\n\\label{sec:add_img}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/clip_add_v3.pdf}\n \\caption{\\textbf{Adding pairs of task vectors} from image classification tasks. Adding task vectors from two tasks improves accuracy on both, resulting in a single model that is competitive with using two specialized fine-tuned models.}\n \\label{fig:clip-add-2}\n \n\\end{figure}\n\n\nWe start with the same eight models used in Section \\ref{sec:negation}, fine-tuned on a diverse set of image classification tasks (Cars, DTD, EuroSAT, GTSRB, MNIST, RESISC45, SUN397 and SVHN). In Figure \\ref{fig:clip-add-2}, we show the accuracy obtained by adding all pairs of task vectors from these tasks.\nTo account for the difference in difficulty of the tasks, we normalize accuracy on each task by the accuracy of the model fine-tuned on that task. After normalizing, the performance of fine-tuned models on their respective tasks is one, and so the average performance of using multiple specialized models is also one. 
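The normalization just described is a one-liner; a sketch with made-up accuracies (the task names follow the section, the numbers are illustrative only):

```python
# Normalized accuracy: divide the accuracy of the edited multi-task model on each
# task by the accuracy of the model fine-tuned on that task, then average.
finetuned_acc = {"MNIST": 0.996, "EuroSAT": 0.990}   # illustrative values
multitask_acc = {"MNIST": 0.980, "EuroSAT": 0.975}

normalized = {t: multitask_acc[t] / finetuned_acc[t] for t in finetuned_acc}
avg_normalized = sum(normalized.values()) / len(normalized)
```

By construction, a fine-tuned model scores exactly one on its own task under this metric.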
As shown in Figure \\ref{fig:clip-add-2}, adding pairs of task vectors leads to a single model that outperforms the zero-shot model by a large margin, and is competitive with using two specialized models (98.9\\% normalized accuracy on average).\n\n\nBeyond pairs of tasks, we explore adding task vectors for \\textit{all} possible subsets of the tasks ($2^8$ in total). In Figure \\ref{fig:clip-add-all}, we show the normalized accuracy of the resulting models, averaged over all eight tasks.\nAs the number of available task vectors increases, better multi-task models can be produced. \nWhen all task vectors are available, the best model produced by adding task vectors reaches an average performance of 91.2\\%, despite compressing several models into one. Additional experiments and details are presented in Appendix \\ref{sec:appendix-add}.\n\n\\begin{SCfigure}\n    \\centering\n    \\begin{minipage}{0.53\\linewidth}\n    \\includegraphics[width=1\\textwidth]{figures\/clip_add_allevals.pdf}\n    \\end{minipage}\n    \\captionsetup{width=1\\textwidth}\n    \\sidecaptionvpos{figure}{t}\n    \\caption{\\textbf{Adding task vectors builds multi-task models} for image classification tasks. Accuracy is averaged over all downstream tasks.\n    When more task vectors are available, better multi-task models can be built.\n    Each point represents an experiment with a subset of the eight tasks we study, and the solid line connects the average performance for each subset size. Recall that the average normalized accuracy of using multiple fine-tuned models is always one. \n    Additional details and experiments are in Appendix \\ref{sec:appendix-add}.}\n    \\label{fig:clip-add-all}\n\\end{SCfigure}\n\n\\subsection{Natural language processing}\n\\label{sec:add-nlp}\n\n\nIn addition to building multi-task models, we explore whether adding task vectors is a useful way of improving performance on a single target task. 
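Building a multi-task model from a subset of task vectors is a single weighted sum in weight space; a minimal sketch (toy tensors, one shared scaling coefficient):

```python
import numpy as np

def add_task_vectors(theta_pre, taus, lam):
    # theta = theta_pre + lam * (tau_1 + ... + tau_k)
    return {name: theta_pre[name] + lam * sum(t[name] for t in taus)
            for name in theta_pre}

# Hypothetical task vectors for two tasks.
theta_pre = {"w": np.zeros(3)}
tau_a = {"w": np.array([1.0, 0.0, 0.0])}
tau_b = {"w": np.array([0.0, 2.0, 0.0])}

multitask = add_task_vectors(theta_pre, [tau_a, tau_b], lam=0.5)
```

In practice the coefficient is tuned on held-out validation data, as described in the experiments.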
\nTowards this goal, we first fine-tune T5-base models on four tasks from the GLUE benchmark \\citep{wang2018glue}, as in \\citet{wortsman2022model}.\nThen, we search for compatible checkpoints on Hugging Face Hub, finding 427 candidates in total.\nWe try adding each of the corresponding task vectors to our fine-tuned models, choosing the best checkpoint and scaling coefficient based on held-out validation data.\nAs shown in Table \\ref{tab:glue}, adding task vectors can \\textit{improve} performance on target tasks, compared to fine-tuning.\nAdditional details and experiments---including building multi-task models from public checkpoints from Hugging Face Hub---are presented in Appendix \\ref{sec:appendix-add}.\n\n\\begin{table*}\n\\caption{\\textbf{Improving performance on target tasks with external task vectors.} For four text classification tasks from the GLUE benchmark, adding task vectors downloaded from the Hugging Face Hub can improve accuracy of fine-tuned T5 models. Appendix \\ref{sec:appendix-add-lang} shows additional details.}\n\\setlength\\tabcolsep{4.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\small\n\\begin{center}\n\\begin{tabular}{lccccc}\n\\toprule\nMethod & MRPC & RTE & CoLA & SST-2 & Average \\\\\\midrule\nZero-shot & 74.8 & 52.7 & 8.29 & 92.7 & 57.1 \\\\\nFine-tuned & 88.5 & 77.3 & 52.3 & 94.5 & 78.1 \\\\\nFine-tuned + task vectors & 89.3 \\tiny{(+0.8)} & 77.5 \\tiny{(+0.2)} & 53.0 \\tiny{(+0.7)} & 94.7 \\tiny{(+0.2)} & 78.6 \\tiny{(+0.5)} \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:glue}\n\\end{table*}\n\n\n\\section{Task Analogies}\n\\label{sec:analogies}\n\nIn this section, we explore task analogies in the form ``$A$ is to $B$ as $C$ is to $D$\", and show that task arithmetic using vectors from the first three tasks improves performance on task $D$ even if little or no data for that task is available.\n\n\\paragraph{Domain generalization.} For many target tasks, gathering unlabeled data is easier and 
cheaper than collecting human annotations. When labeled data for a \\textit{target} task is not available, we can use task analogies to improve accuracy on the target task, using \n an \\textit{auxiliary} task for which there is labeled data and an unsupervised learning objective. For example, consider the target task of sentiment analysis using data from Yelp \\citep{zhang2015character}. Using task analogies, we can construct a task vector $\\hat{\\tau}_\\textrm{yelp;\\,sent} = \\tau_\\textrm{amazon;\\,sent} + (\\tau_\\textrm{yelp;\\,lm} - \\tau_\\textrm{amazon;\\,lm})$, where $\\tau_\\textrm{amazon;\\,sent}$ is obtained by fine-tuning on labeled data from an auxiliary task (sentiment analysis using data from Amazon; \\citet{mcauley2013hidden}), and $\\tau_\\textrm{yelp;\\,lm}$ and $\\tau_\\textrm{amazon;\\,lm}$ are task vectors obtained via (unsupervised) language modeling on the inputs from both datasets.\n\n In Table \\ref{tab:sentiment-analog}, we show that using such task analogies improves accuracy of T5 models at multiple scales, both for Amazon and Yelp binary sentiment analysis as target tasks. We empirically found that giving a higher weight to the sentiment analysis task vector led to higher accuracy, and we thus used two independent scaling coefficients for these experiments---one for the sentiment analysis task vector and one for both the language modeling task vectors. More details are presented in Appendix \\ref{sec:app-sentiment}. 
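The edit just described is one linear combination of four sets of weights; a toy sketch (hypothetical tensors; `lam_sent` and `lam_lm` are the two independent scaling coefficients mentioned above):

```python
import numpy as np

def task_analogy(theta_pre, tau_aux_sent, tau_lm_target, tau_lm_aux,
                 lam_sent, lam_lm):
    # theta = theta_pre + lam_sent * tau_aux_sent
    #                   + lam_lm * (tau_lm_target - tau_lm_aux)
    # e.g. tau_aux_sent = Amazon sentiment vector, tau_lm_* = LM vectors.
    return {k: theta_pre[k]
               + lam_sent * tau_aux_sent[k]
               + lam_lm * (tau_lm_target[k] - tau_lm_aux[k])
            for k in theta_pre}

# Toy stand-ins for the four models involved in the analogy.
theta_pre = {"w": np.zeros(2)}
tau_amazon_sent = {"w": np.array([1.0, 0.0])}
tau_yelp_lm = {"w": np.array([0.0, 1.0])}
tau_amazon_lm = {"w": np.array([0.0, 0.5])}

edited = task_analogy(theta_pre, tau_amazon_sent, tau_yelp_lm, tau_amazon_lm,
                      lam_sent=1.0, lam_lm=0.5)
```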
Using task vectors outperforms fine-tuning on the remaining auxiliary sentiment analysis task for all models and datasets, approaching the performance of fine-tuning on the target task.\n \n\\begin{table*}\n\\caption{\\textbf{Improving domain generalization with task analogies.} Using an auxiliary task for which labeled data is available and unlabeled data from both the auxiliary and the target datasets, task analogies improve the accuracy for multiple T5 models and two sentiment analysis target tasks \\citep{zhang2015character,mcauley2013hidden}, without using any labeled data from the target tasks.}\n\\setlength\\tabcolsep{6.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\small\n\\begin{center}\n\\begin{tabular}{lcccccccc} \n\\toprule\n & & \\multicolumn{3}{c}{target = Yelp} & & \\multicolumn{3}{c}{target = Amazon} \\\\\\cmidrule{3-5}\\cmidrule{7-9}\nMethod & & T5-small & T5-base & T5-large & & T5-small & T5-base & T5-large \n\\\\\\midrule\nFine-tuned on auxiliary & & 88.6 & 92.3 & 95.0 & & 87.9 & 90.8 & 94.8 \\\\\nTask analogies & & 89.9 & 93.0 & 95.1 & & 89.0 & 92.7 & 95.2 \\\\\nFine-tuned on target & & 91.1 & 93.4 & 95.5 & & 90.2 & 93.2 & 95.5 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:sentiment-analog}\n\\vspace{6pt}\n\\end{table*}\n\n\n\\paragraph{Subpopulations with little data.} There is often some inherent scarcity in certain data subpopulations---for example, images of lions in indoor settings are rarer, compared to lions in outdoor settings or dogs in general (indoors or outdoors). Whenever such subpopulations admit analogies to others with more abundant data (as in this case), we can apply task analogies, e.g., $\\hat{\\tau}_\\textrm{lion indoors} = \\tau_\\textrm{lion outdoors} + (\\tau_\\textrm{dog indoors} - \\tau_\\textrm{dog outdoors})$.\n\nWe explore this scenario by creating four subpopulations, using 125 overlapping classes between ImageNet and a dataset of human sketches \\citep{eitz2012humans}. 
\nWe split these classes into two subsets of roughly equal size, creating four subpopulations $A$, $B$, $C$ and $D$, where the pairs $(A,C)$ and $(B, D)$ share the same classes, and $(A, B)$ and $(C, D)$ share the same style (photo-realistic images or sketches).\nAlthough these subpopulations have many classes in our experiments, we use the simplified subsets ``real dog'', ``real lion'', ``sketch dog'' and ``sketch lion'' as a running example. We present more details and samples in Appendix \\ref{sec:appendix-sketches}.\n\nGiven a target subpopulation, we create task vectors by fine-tuning three models independently on the remaining subpopulations, and then combine them via task arithmetic, e.g., $\\hat{\\tau}_\\textrm{sketch lion} = \\tau_\\textrm{sketch dog} + (\\tau_\\textrm{real lion} - \\tau_\\textrm{real dog})$ for the target subpopulation ``sketch lion''. We show the results in Figure \\ref{fig:clip-analogies}, averaged over the four target subpopulations.\nCompared to the pre-trained model, task vectors improve accuracy by 3.4 percentage points on average.\nMoreover, when some data from the target subpopulation is available for fine-tuning, starting from the edited model leads to consistently higher accuracy than starting from the pre-trained model.\nThe gains from analogies alone (with no additional data) are roughly the same as those of collecting and annotating around one hundred training samples for the target subpopulation.\n\n\\paragraph{Kings and queens.} We explore whether an image classifier can learn a new category (e.g., ``king\") using data from three related classes that form an analogy relationship (e.g., ``queen\", ``man\" and ``woman\"). 
Our results are presented in Appendix \\ref{sec:appendix-kingsandqueens}, showing that task analogies yield large gains in accuracy over pre-trained models on the new target category, despite having no training data for it.\n\n\\begin{SCfigure}\n    \\centering\n    \\begin{minipage}{0.43\\linewidth}\n    \\hspace{-0.52cm}\n    \\includegraphics[width=1\\textwidth]{figures\/sketches_fewshot_full.pdf}\n    \\end{minipage}\n    \\captionsetup{width=1\\textwidth}\n    \\vspace{-0.25cm}\n    \\sidecaptionvpos{figure}{t}\n    \\caption{\\textbf{Learning about subpopulations via analogy}. Combining task vectors from related subpopulations improves accuracy on the target subpopulation, when little or no data from the target subpopulation is available. Accuracy is averaged over the four target subpopulations and three CLIP models. Additional details are in Appendix \\ref{sec:appendix-sketches}.}\n    \\label{fig:clip-analogies}\n\\end{SCfigure}\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\nIn this section, we provide further insight into previous results by exploring the similarity between task vectors for different tasks, as well as the impact of different learning rates and random seeds. Additional analyses are presented in Appendix \\ref{sec:app-ensembles}, including discussions on the connection between ensembles and weight averaging. We conclude by discussing some limitations of our approach.\n\n\n\\begin{SCfigure}\n    \\centering\n    \\begin{minipage}{0.43\\linewidth}\n    \\hspace{-0.52cm}\n    \\includegraphics[width=1\\textwidth]{figures\/vision_cossim.pdf}\n    \\end{minipage}\n    \\captionsetup{width=1\\textwidth}\n    \\vspace{-0.25cm}\n    \\sidecaptionvpos{figure}{t}\n    \\caption{\\textbf{Task vectors are typically close to orthogonal.} The plot shows the cosine similarities between vectors for different tasks, using CLIP. 
The largest deviations from orthogonality are found when tasks are similar to each other, for instance, for MNIST, SVHN and GTSRB---where recognizing digits is either the task itself (MNIST and SVHN), or a capability needed to solve the task (GTSRB, where the task is traffic sign recognition)---and EuroSAT and RESISC45, two satellite imagery recognition datasets.}\n \\label{fig:cossim}\n \\vspace{8pt}\n\\end{SCfigure}\n\n\n\n\\textbf{Similarity between task vectors.} In Figure \\ref{fig:cossim}, we explore the cosine similarity between task vectors for different tasks, in an effort to understand how multiple models can be collapsed into a single multi-task model via addition (Section \\ref{sec:addition}).\nWe observe that vectors from different tasks are typically close to orthogonal, and speculate that this enables the combination of task vectors via addition with minimal interference.\nWe also observe higher cosine similarities when tasks are semantically similar to each other.\nFor example, the largest cosine similarities in Figure \\ref{fig:cossim} (left) are between MNIST, SVHN and GTSRB, where recognizing digits is essential for the tasks, and between EuroSAT and RESISC45, which are both satellite imagery recognition datasets. 
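The similarities in Figure \ref{fig:cossim} are plain cosines between flattened task vectors; a sketch with toy stand-ins for the real CLIP task vectors:

```python
import numpy as np

def cosine_similarity(tau_a, tau_b):
    # Flatten all parameter tensors into one long vector, then take cos(angle).
    a = np.concatenate([v.ravel() for v in tau_a.values()])
    b = np.concatenate([v.ravel() for v in tau_b.values()])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors: "mnist" and "cars" orthogonal, "svhn" correlated with "mnist".
tau_mnist = {"w": np.array([1.0, 0.0, 1.0])}
tau_cars = {"w": np.array([0.0, 1.0, 0.0])}
tau_svhn = {"w": np.array([1.0, 0.0, 0.5])}
```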
This similarity in ``task space'' could help explain some results in \\citet{ilharco2022patching}, where interpolating the weights of a model fine-tuned on one task and the pre-trained model weights---in our terminology, applying a single task vector---sometimes improves accuracy on a different task for which no data is available (e.g., applying the MNIST task vector improves accuracy on SVHN).\n\n\\textbf{The impact of the learning rate.} In Figure \\ref{fig:lr}, we observe that increasing the learning rate degrades accuracy both when using task vectors and when fine-tuning individual models, but the decrease is more gradual for individual models.\nThese findings align with those of \\cite{wortsman2022model}, who observed that accuracy decreases on the linear path between two fine-tuned models when using a larger learning rate.\nThus, while larger learning rates may be acceptable when fine-tuning individual models, we recommend more caution when using task vectors. Further, we hypothesize that larger learning rates may explain some of the variance when adding vectors from natural language processing tasks, where we take models fine-tuned by others in the community.\n\n\n\n\\begin{SCfigure}\n \\centering\n \\begin{minipage}{0.4\\linewidth}\n \\hspace{-0.52cm}\n \\includegraphics[width=1\\textwidth]{figures\/lr_ablation.pdf}\n \\end{minipage}\n \\captionsetup{width=1\\textwidth}\n \\vspace{-0.6cm}\n \\sidecaptionvpos{figure}{t}\n \\caption{\\textbf{The impact of learning rate when fine-tuning.} When adding task vectors from CLIP ViT-L\/14 models fine-tuned on MNIST and EuroSAT, lower learning rates make the best use of the fine-tuned models, and also correspond to the highest accuracies of the fine-tuned models on the target task.}\n \\label{fig:lr}\n\\end{SCfigure}\n\n\\begin{figure*}\n \\centering\n \n \\includegraphics[width=.94\\textwidth]{figures\/intermediate_tvs.pdf}\n \\caption{\\textbf{How task vectors evolve throughout fine-tuning.} Left: the cosine 
similarity between the final task vector and task vectors produced at intermediate points during fine-tuning. Right: Accuracy obtained by adding intermediate task vectors from MNIST and EuroSAT. Adding intermediate task vectors can lead to high accuracy, despite fine-tuning for substantially fewer steps.}\n \\label{fig:intermediate}\n \\vspace{-8pt} \n\\end{figure*}\n\n\\textbf{The evolution of task vectors throughout fine-tuning.} In Figure \\ref{fig:intermediate}, we show how task vectors evolve throughout fine-tuning. Intermediate task vectors converge rapidly to the direction of the final task vector obtained at the end of fine-tuning. Moreover, the accuracy of the model obtained by adding intermediate task vectors from two image classification tasks saturates after just a few hundred steps. These results suggest that using intermediate task vectors can be a useful way of saving compute with little harm in accuracy.\n\n\n\\textbf{Limitations.} Task vectors are restricted to models with the same architecture, since they depend on element-wise operations on model weights. Further, in all of our experiments we perform arithmetic operations only on models fine-tuned from the same pre-trained initialization, although emerging work shows promise in relaxing this assumption \\citep{ainsworth2022git}. 
We also note that some architectures are very popular, and have ``standard'' initializations---e.g., at the time of writing there are over 3,000 models on Hugging Face Hub fine-tuned from the same BERT-base initialization \\cite{devlin-etal-2019-bert}, and over 800 models fine-tuned from the same T5-small initialization.\n\\section{Related work}\n\n\\paragraph{The loss landscape and interpolating weights.} The geometry of neural network loss surfaces has attracted the interest of several authors in recent years \\citep{li2018visualizing,garipov2018loss,draxler2018essentially,kuditipudi2019explaining,fort2019deep,czarnecki2019deep,pmlr-v139-wortsman21a, benton2021loss,entezari2021role,li2022branch}.\nDespite neural networks being non-linear, previous work has empirically found that interpolations between the weights of two neural networks can maintain their high accuracy, provided these two neural networks share part of their optimization trajectory~\\citep{frankle2020linear,izmailov2018averaging,neyshabur2020being,fort2020deep,wortsman2022model,choshen2022fusing,ilharco2022patching}. 
\n\nIn the context of fine-tuning, accuracy increases steadily when gradually moving the weights of a pre-trained model in the direction of its fine-tuned counterpart \\citep{wortsman2021robust,matena2021merging,ilharco2022patching}.\nBeyond a single task, \\citet{matena2021merging,ilharco2022patching} found that when multiple models are fine-tuned on different tasks from the same initialization, averaging their weights can improve accuracy on the fine-tuning tasks.\nSimilar results were found by \\citet{li2022branch} when averaging the parameters of language models fine-tuned on various domains.\n\\citet{choshen2022fusing} showed that ``fusing\" fine-tuned models by averaging their weights creates a better starting point for fine-tuning on a new downstream task.\n\\citet{wortsman2022model} found that averaging the weights of models fine-tuned on multiple tasks can increase accuracy on a new downstream task, without any further training.\nThese findings are aligned with results shown in Section \\ref{sec:addition}. In this work, we go beyond interpolating between models, examining extrapolating between models and additional ways of combining them (Sections \\ref{sec:negation} and \\ref{sec:analogies}). \n\n\n\\paragraph{Model interventions.} Considering that re-training models is prohibitively expensive in most circumstances, several authors have studied more efficient methods for modifying a model's behavior with interventions after pre-training, referring to this process by different names, such as patching \\citep{goel2020model,sung2021training,ilharco2022patching,murty2022fixing}, editing \\citep{shibani2021editing,mitchell2021fast,mitchell2022memory}, aligning \\citep{ouyang2022training,askell2021general,kasirzadeh2022conversation,sparrow}, or debugging \\citep{ribeiro2022adaptive,geva2022lm}. 
\nIn contrast to previous literature, our work provides a unique way of editing models, where capabilities can be added or deleted in a modular and efficient manner by re-using fine-tuned models.\nCloser to our work is that of \\cite{subramani2022}, who explore steering language models with vectors added to their hidden states.\nUnlike their approach, our work applies vectors in the weight space of pre-trained models and does not modify the standard fine-tuning procedure.\n\n\\paragraph{Task embeddings.} \\citet{achille2019task2vec,vu2020exploring,vu2022spot}, inter alia, explored strategies for representing tasks with continuous embeddings, in order to predict task similarities and transferability, or to create taxonomic relations. While the task vectors we build could be used for such purposes, our main goal is to use them as tools for steering the behavior of pre-trained models.\n\n\\section{Conclusion}\n\nIn this paper we introduce a new paradigm for editing models based on arithmetic operations over \\emph{task vectors}. For various vision and NLP models, \\emph{adding} multiple specialized task vectors results in a single model that performs well on all target tasks, or even improves performance on a single task.\n\\emph{Negating} task vectors allows users to remove undesirable behaviors, e.g., toxic generations, or even forget specific tasks altogether, while retaining performance everywhere else. Finally, \\emph{task analogies} leverage existing data to improve performance on domains or subpopulations where data is scarce.\n\nArithmetic operations over task vectors only involve adding or subtracting model weights, and thus are efficient to compute, especially when compared to alternatives that involve additional fine-tuning. 
Thus, users can easily experiment with various model edits, recycling and transferring knowledge from large collections of publicly available fine-tuned models.\nSince these operations result in a single model of the same size, they incur no extra inference cost. Our code is available at {\\footnotesize \\url{https:\/\/github.com\/mlfoundations\/task_vectors}}.\n\n\n\\section*{Acknowledgements}\nWe thank \nAlex Fang,\nAri Holtzman,\nColin Raffel,\nDhruba Ghosh,\nJesse Dodge,\nMargaret Li,\nOfir Press,\nSam Ainsworth,\nSarah Pratt,\nStephen Mussmann,\nTim Dettmers, and\nVivek Ramanujan\nfor helpful discussion and comments on the paper.\n\n\n\\section{The loss landscape, weight averaging and ensembles}\n\\label{sec:app-ensembles}\n\nWhen two neural networks share part of their optimization trajectory---such as when fine-tuning from the same pre-trained initialization---previous work found that performance does not decrease substantially when linearly interpolating between their weights \\citep{frankle2020linear,izmailov2018averaging,neyshabur2020being,fort2020deep,wortsman2022model,choshen2022fusing,ilharco2022patching}. 
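Since only linear operations are involved, applying a single task vector with coefficient $\lambda$ coincides with linear interpolation between the pre-trained and fine-tuned weights; a quick numerical check on random toy weights:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_pre = rng.normal(size=100)  # toy pre-trained weights
theta_ft = rng.normal(size=100)   # toy fine-tuned weights
tau = theta_ft - theta_pre        # task vector

lam = 0.3
via_task_vector = theta_pre + lam * tau
via_interpolation = (1 - lam) * theta_pre + lam * theta_ft
```

The two expressions agree exactly by algebra; values of $\lambda$ outside $[0, 1]$ correspond to extrapolation, e.g., the negation experiments.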
\nApplying a task vector---and any vectors produced via the arithmetic expressions we study in this work---is equivalent to a linear combination of the pre-trained model and the fine-tuned models used to generate the task vectors, since only linear operations are used.\nInterpolating between the weights of a fine-tuned model and its pre-trained counterpart as in \\citet{wortsman2021robust,ilharco2022patching} is equivalent to applying a single task vector, and adding different task vectors is equivalent to a weighted average of all models, similar to experiments from \\citet{wortsman2022model,ilharco2022patching,li2022branch}.\nOverall, previous work has empirically observed that averaging weights of neural networks can produce models with strong performance when compared to the best individual network, for several architectures, domains and datasets. \n\n\\begin{figure*}\n    \\centering\n    \\includegraphics[width=.5\\textwidth]{figures\/ensembles_correlation.pdf}\n    \\caption{When adding two task vectors, the performance of the resulting model approximates the performance of ensembling the corresponding fine-tuned models.}\n    \\label{fig:ensembles-corr}\n\\end{figure*}\n\nOur motivation for studying task vectors is also well aligned with findings of \\citet{lucas2021analyzing,ilharco2022patching}, who observed that performance steadily increases on the linear path between a model before and after training.\\footnote{This property of neural networks is sometimes referred to as Monotonic Linear Interpolation (MLI) \\citep{lucas2021analyzing}.} This indicates that the direction from the pre-trained to the fine-tuned model is such that movement in that direction directly translates to performance gains on the fine-tuning task. 
Moreover, \\citet{ilharco2022patching} found that linear interpolations between a pre-trained model and a fine-tuned model are able to preserve accuracy on tasks that are unrelated to fine-tuning, while greatly improving accuracy on the fine-tuning task compared to the pre-trained model. That accuracy on the fine-tuning task and on unrelated tasks are independent of each other along the linear path between pre-trained and fine-tuned models is well aligned with our results from Section \\ref{sec:negation}, where we find that \\textit{extrapolating} from the pre-trained model away from the fine-tuned model leads to worse performance on the fine-tuning task with little change in behavior on control tasks. \n\nFinally, we highlight the connection between linear combinations of neural network weights and the well-established practice of \\textit{ensembling} their predictions.\\footnote{For the sake of completeness, the ensemble of two models $f$ with weights $\\theta_1$ and $\\theta_2$ for an input $x$ is given by $(1-\\alpha)f_{\\theta_1}(x) + \\alpha f_{\\theta_2}(x)$, for some mixing coefficient $\\alpha$. Ensembling two classification models is typically done by averaging the logits produced by the models.} This connection is discussed in depth by \\citet{wortsman2021robust,wortsman2022model}, and we briefly revisit it in the context of adding task vectors. First, recall that the arithmetic operations we study result in linear combinations of model weights. As shown by \\citet{wortsman2021robust}, in certain regimes, the result from linearly combining the weights of neural networks approximates ensembling their outputs. This approximation holds whenever the loss can be locally approximated by a linear expansion, which is referred to as the NTK regime \\citep{jacot2018neural}. Moreover, as shown by \\citet{fort2020deep}, this linear expansion becomes more accurate in the later phase of training neural networks, which closely resembles fine-tuning. 
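For a purely linear model $f_\theta(x) = \theta^\top x$, this correspondence is exact rather than approximate, as a toy check confirms (this idealized case stands in for the linearized NTK regime, not an actual neural network):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_1, theta_2 = rng.normal(size=5), rng.normal(size=5)  # two "fine-tuned" models
x = rng.normal(size=5)                                      # an input

alpha = 0.5
# Ensembling the outputs of the two linear models...
ensemble_out = (1 - alpha) * (theta_1 @ x) + alpha * (theta_2 @ x)
# ...gives the same prediction as averaging their weights first.
merged_out = ((1 - alpha) * theta_1 + alpha * theta_2) @ x
```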
When the approximation holds exactly, weight averaging and ensembles are exactly equivalent \\citep{wortsman2021robust}. This connection is further studied analytically and empirically by \\citet{wortsman2022model}.\n\nWe empirically validate the connection between ensembles and linear weight combinations in the context of adding two task vectors. Note that the model resulting from adding two task vectors with a scaling coefficient $\\lambda=0.5$ is equivalent to a uniform average of the weights of the fine-tuned models.\\footnote{\n$\\theta_\\textrm{pre} + 0.5(\\tau_1+ \\tau_2) = \\theta_\\textrm{pre} + 0.5((\\theta_1-\\theta_\\textrm{pre}) + (\\theta_2-\\theta_\\textrm{pre})) = 0.5 (\\theta_1 + \\theta_2)$.}\nWe then investigate whether the accuracy of the model obtained using the task vectors correlates with the accuracy of ensembling the fine-tuned models, as predicted by theory. \nAs shown in Figure \\ref{fig:ensembles-corr}, we indeed observe that the accuracy of the model produced by adding two task vectors closely follows the accuracy of the corresponding ensemble. We observe a slight bias towards higher accuracy for the ensembles on average, and that the two quantities are also strongly correlated, with a Pearson correlation of 0.99.\n\n\n\\section{Forgetting image classification tasks}\n\\label{sec:clip-neg-extended}\n\nThis section presents additional experimental details and results complementing the findings presented in Section \\ref{sec:forget_img}, showcasing the effect of negating task vectors from image classification tasks.\n\n\\subsection{Experimental details}\n\\label{sec:clip-exp-details}\n\nWe follow the same procedure from \\cite{ilharco2022patching} when fine-tuning CLIP models \\citep{radford2021learning}. 
Namely, we fine-tune for 2000 iterations with a batch size of 128, learning rate 1e-5 and a cosine annealing learning rate schedule with 200 warm-up steps and the AdamW optimizer \\citep{loshchilov2018decoupled, paszke2019pytorch}, with weight decay 0.1.\nWhen fine-tuning, we freeze the weights of the classification layer output by CLIP's text encoder, so that we do not introduce additional learnable parameters, as in \\cite{ilharco2022patching}.\nAs shown by \\cite{ilharco2022patching}, freezing the classification layer does not harm accuracy.\nAfter fine-tuning, we evaluate scaling coefficients $\\lambda \\in \\{0.0, 0.05, 0.1, \\cdots, 1.0\\}$, choosing the highest value such that the resulting model still retains at least 95\\% of the accuracy of the pre-trained model on the control task.\n\n\\subsection{Baselines}\n\\label{sec:app-neg-baselines}\n\nWe contrast our results with two baselines: fine-tuning with gradient ascent as in \\citet{golatkar2020eternal,tarun2021fast}, and using a random vector of the same magnitude as the task vector on a layer-by-layer basis.\n\nIn practice, for fine-tuning with gradient ascent, we use the same hyper-parameters as for standard fine-tuning. However, instead of optimizing to minimize the cross-entropy loss $\\ell=\\mathbb{E}_{x,y \\in \\mathcal{D}}[-\\log f(x)_y]$, we optimize to minimize its negative value, $\\ell_\\textrm{neg}=-\\ell=\\mathbb{E}_{x,y \\in \\mathcal{D}}[\\log f(x)_y]$, where $x,y$ are samples in the dataset $\\mathcal{D}$ and $f(x)_y$ is the probability assigned by the model $f$ that the input $x$ belongs to label $y$. This is equivalent to performing gradient ascent on $\\ell$.\n\nFor the random vector baseline, we first compute the difference between the parameters of the pre-trained and fine-tuned models for each layer $L$, $\\tau^{(L)} = \\theta^{(L)}_\\textrm{ft}-\\theta^{(L)}_\\textrm{pre}$. 
Then, we draw a new vector $\\tau^{(L)}_\\textrm{rand} \\sim \\mathcal{N}(0,I)$ where each element is drawn from a normal distribution with mean 0 and variance 1. We then scale this vector so it has the same magnitude as $\\tau^{(L)}$, resulting in $\\tau^{(L)}_{\\textrm{scaled}} = \\tau^{(L)}_\\textrm{rand} \\frac{||\\tau^{(L)}||}{||\\tau^{(L)}_\\textrm{rand}||}$. Finally, we concatenate all the vectors $\\tau^{(L)}_{\\textrm{scaled}}$ for all layers to form a new vector with the same dimensionality as the model parameters $\\theta$, which is used in the same way as task vectors. \n\n\n\\subsection{Breakdown per task}\n\nTables \\ref{tab:forget_image_l14}, \\ref{tab:forget_image_b16} and \\ref{tab:forget_image_b32} show a breakdown of accuracy for the eight tasks and the three CLIP models we examine.\n\nWe observe qualitatively similar results in all cases. Similarly to what is observed in \\cite{ilharco2022patching}, we also see that results improve with scale: on average, the largest model, ViT-L\/14, achieves \\textit{lower} accuracy on the target tasks, compared to the smaller models. \n\n\\begin{table*}\n\\caption{Forgetting via negation on image classification tasks. 
Results are shown for a CLIP ViT-L\/14 model \\citep{radford2021learning}, reporting accuracy on both the target (T) and control (C) tasks.}\n\\setlength\\tabcolsep{2.3pt}\n\\renewcommand{\\arraystretch}{1.05}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lcc?cc?cc?cc?cc?cc?cc?cc}\n\\toprule\n\\multirow{2}{*}{Method} & \\multicolumn{2}{c?}{{Cars}} & \\multicolumn{2}{c?}{DTD} & \\multicolumn{2}{c?}{EuroSAT} & \\multicolumn{2}{c?}{GTSRB} & \\multicolumn{2}{c?}{MNIST} & \\multicolumn{2}{c?}{{RESISC45}} & \\multicolumn{2}{c?}{{SUN397}} & \\multicolumn{2}{c}{{SVHN}} \\\\\n & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ \\\\\\midrule\nPre-trained & 77.8 & 75.5 & 55.4 & 75.5 & 60.2 & 75.5 & 50.6 & 75.5 & 76.4 & 75.5 & 71.0 & 75.5 & 68.3 & 75.5 & 58.6 & 75.5 \\\\\nFine-tuned & 92.8 & 73.1 & 83.7 & 72.3 & 99.2 & 70.5 & 99.3 & 73.1 & 99.8 & 72.9 & 96.9 & 73.8 & 82.4 & 72.7 & 98.0 & 72.6 \\\\\nNeg. gradients & 0.00 & 4.82 & 2.13 & 0.10 & 9.26 & 1.07 & 1.19 & 0.07 & 9.80 & 67.0 & 2.14 & 0.07 & 0.25 & 0.00 & 6.70 & 57.2 \\\\%\\midrule\nRandom vector & 72.0 & 73.3 & 52.1 & 72.2 & 59.7 & 73.5 & 43.4 & 72.5 & 74.8 & 72.8 & 70.8 & 73.0 & 66.9 & 72.7 & 47.1 & 72.9\\\\\\midrule\nNeg. task vector & 32.0 & 72.4 & 26.7 & 72.2 & 7.33 & 73.3 & 6.45 & 72.2 & 2.69 & 74.9 & 19.7 & 72.9 & 50.8 & 72.6 & 6.71 & 72.7 \\\\\n\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:forget_image_l14}\n\\end{table*}\n\n\n\\begin{table*}\n\\caption{Forgetting via negation on image classification tasks. 
Results are shown for a CLIP ViT-B\/16 model \\citep{radford2021learning}, reporting accuracy on both the target (T) and control (C) tasks.}\n\\setlength\\tabcolsep{2.3pt}\n\\renewcommand{\\arraystretch}{1.05}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lcc?cc?cc?cc?cc?cc?cc?cc}\n\\toprule\n\\multirow{2}{*}{Method} & \\multicolumn{2}{c?}{{Cars}} & \\multicolumn{2}{c?}{DTD} & \\multicolumn{2}{c?}{EuroSAT} & \\multicolumn{2}{c?}{GTSRB} & \\multicolumn{2}{c?}{MNIST} & \\multicolumn{2}{c?}{{RESISC45}} & \\multicolumn{2}{c?}{{SUN397}} & \\multicolumn{2}{c}{{SVHN}} \\\\\n & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$\\\\\\midrule\nPre-trained & 64.6 & 68.3 & 44.9 & 68.3 & 53.9 & 68.3 & 43.4 & 68.3 & 51.6 & 68.3 & 65.8 & 68.3 & 65.5 & 68.3 & 52.0 & 68.3 \\\\\nFine-tuned & 87.0 & 61.9 & 82.3 & 57.5 & 99.1 & 56.0 & 99.0 & 54.7 & 99.7 & 55.2 & 96.4 & 62.2 & 79.0 & 61.7 & 97.7 & 56.8 \\\\\nNeg. gradients & 0.36 & 0.11 & 2.13 & 0.09 & 9.26 & 0.14 & 0.71 & 0.10 & 0.04 & 1.20 & 2.60 & 0.10 & 0.25 & 0.00 & 0.08 & 3.69\\\\\nRand. task vector & 61.0 & 65.6 & 43.9 & 66.3 & 51.7 & 66.2 & 43.1 & 65.0 & 51.6 & 68.3 & 63.6 & 65.6 & 63.7 & 65.2 & 46.2 & 65.5 \\\\\\midrule\nNeg. task vector & 30.8 & 65.4 & 26.5 & 65.6 & 12.3 & 65.8 & 9.53 & 65.8 & 9.55 & 65.4 & 26.5 & 65.1 & 48.6 & 65.1 & 6.43 & 65.4 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:forget_image_b16}\n\\end{table*}\n\n\n\\begin{table*}\n\\caption{Forgetting via negation on image classification tasks. 
Results are shown for a CLIP ViT-B\/32 model \\citep{radford2021learning}, reporting accuracy on both the target (T) and control (C) tasks.}\n\\setlength\\tabcolsep{2.3pt}\n\\renewcommand{\\arraystretch}{1.05}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lcc?cc?cc?cc?cc?cc?cc?cc}\n\\toprule\n\\multirow{2}{*}{Method} & \\multicolumn{2}{c?}{{Cars}} & \\multicolumn{2}{c?}{DTD} & \\multicolumn{2}{c?}{EuroSAT} & \\multicolumn{2}{c?}{GTSRB} & \\multicolumn{2}{c?}{MNIST} & \\multicolumn{2}{c?}{{RESISC45}} & \\multicolumn{2}{c?}{{SUN397}} & \\multicolumn{2}{c}{{SVHN}} \\\\\n & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ & T$\\downarrow$ & C$\\uparrow$ \\\\\\midrule\nPre-trained & 59.6 & 63.4 & 44.1 & 63.4 & 45.9 & 63.4 & 32.5 & 63.4 & 48.7 & 63.4 & 60.7 & 63.4 & 63.2 & 63.4 & 31.5 & 63.4 \\\\\nFine-tuned & 79.2 & 55.2 & 78.7 & 49.3 & 98.6 & 47.2 & 98.5 & 39.1 & 99.6 & 42.5 & 95.0 & 53.2 & 75.1 & 54.6 & 97.2 & 44.7 \\\\\nNeg. gradients & 0.01 & 0.11 & 2.13 & 0.10 & 9.26 & 0.10 & 1.19 & 0.07 & 0.00 & 1.22 & 2.60 & 0.10 & 0.25 & 0.01 & 6.38 & 0.29 \\\\\nRand. task vector & 54.1 & 60.9 & 39.9 & 61.5 & 45.8 & 63.4 & 27.9 & 60.7 & 48.3 & 63.4 & 57.1 & 60.9 & 61.3 & 60.5 & 31.2 & 60.7 \\\\\\midrule\nNeg. 
task vector & 36.0 & 61.1 & 27.8 & 60.2 & 13.6 & 61.3 & 8.13 & 61.4 & 16.7 & 60.7 & 31.7 & 61.0 & 50.7 & 60.5 & 7.65 & 61.0 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:forget_image_b32}\n\\end{table*}\n\n\n\\subsection{Additional visualizations}\n\nIn Figure \\ref{fig:forget_img_lambdas}, we show how accuracy on the target and control tasks varies as we change the scaling coefficient $\\lambda$, both for the task vector obtained by fine-tuning on the target task and for a random vector of the same magnitude.\n\nAs the scaling coefficient increases, the curves traced by the task vector and a random vector behave differently. For task vectors, performance on the target tasks ($y$-axis) initially decreases faster than performance on the control task ($x$-axis), so there exist models with high accuracy on the control task but low accuracy on the target task. In contrast, such points do not exist in the curves traced by random vectors, which move more linearly towards the origin. In practice, this means forgetting is effective for task vectors obtained by fine-tuning, but not for random vectors.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/wses_randomdir.pdf}\n \\caption{Comparison between task vectors and random vectors for forgetting image classification tasks.}\n \\label{fig:forget_img_lambdas}\n\\end{figure*}\n\n\\subsection{The effect of class overlap}\n\nIn Tables \\ref{tab:forget_image_l14}, \\ref{tab:forget_image_b16} and \\ref{tab:forget_image_b32}, we observe that the tasks where forgetting via task vectors is least effective are those whose image distributions are closest to ImageNet: SUN397 \\citep{sun397}, a scene understanding dataset with classes such as ``church\" and ``tower\", and Stanford Cars \\citep{cars}, a dataset with many car categories such as ``2012 Tesla Model S\" or ``2012 BMW M3 coupe\". 
One reasonable hypothesis is that forgetting is less effective for those tasks due to the overlap of their images with the control task. \n\nTo better understand this effect, we measure accuracy on a subset of classes from ImageNet, chosen such that the overlap is minimized.\nConcretely, we exclude classes based on their nodes in the WordNet hierarchy, on which the ImageNet classes are based.\\footnote{A visualization is available at \\url{https:\/\/observablehq.com\/@mbostock\/imagenet-hierarchy}}\nFor the Cars dataset, we exclude all subnodes under the node ``wheeled vehicle\" (e.g., ``minivan\", ``jeep\", ``limousine\").\nFor SUN397, we exclude all subnodes under the nodes ``structure\" and ``geological formation\". \nAs shown in Table \\ref{tab:overlap-ablation}, we do not observe large differences after filtering. \n\n\\begin{table*}\n\\caption{The effect of semantic overlap with the control task in forgetting experiments on image classification tasks. Results are shown for a CLIP ViT-L\/14 model, reporting accuracy both on the target task and control task (Ctrl, ImageNet).}\n\\setlength\\tabcolsep{4.4pt}\n\\renewcommand{\\arraystretch}{1.05}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lcc?cc?cc?cc}\n\\toprule\n\\multirow{2}{*}{Method} & \\multicolumn{4}{c?}{{Without filtering}} & \\multicolumn{4}{c}{With filtering} \\\\\n & Cars ($\\downarrow$) & Ctrl ($\\uparrow$) & SUN397 ($\\downarrow$) & Ctrl ($\\uparrow$) & Cars ($\\downarrow$) & Ctrl ($\\uparrow$) & SUN397 ($\\downarrow$) & Ctrl ($\\uparrow$) \\\\\\midrule\nPre-trained & 77.8 & 75.5 & 68.3 & 75.5 & 77.8 & 75.5 & 68.3 & 76.1 \\\\\nFine-tuned & 92.8 & 73.1 & 82.4 & 72.7 & 92.8 & 73.3 & 82.4 & 73.1 \\\\\\midrule\nNeg. 
task vector & 32.0 & 72.4 & 50.8 & 72.6 & 32.0 & 72.5 & 48.1 & 72.4\\\\\n\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:overlap-ablation}\n\\end{table*}\n\n\n\\subsection{Interpolating with a model fine-tuned with gradient ascent}\n\\label{sec:apapendix-gradient-ascent}\n\nOne baseline in our experiments is fine-tuning with gradient ascent, as explored in \\citet{golatkar2020eternal,tarun2021fast}. Our results show that this strategy is effective at reducing the accuracy on target tasks, but also substantially deteriorates accuracy on the control task, which is undesirable.\n\nWe further examine whether interpolations between the pre-trained model and the model fine-tuned with gradient ascent help with forgetting. Our results, shown in Figure \\ref{fig:neggrad}, indicate that interpolations greatly mitigate the fine-tuned model's low accuracy on the control task, leading to even better accuracy trade-offs than the solutions obtained by extrapolation with standard fine-tuning.\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/wses_avg_neggrad.pdf}\n \\caption{Comparison with interpolations between the pre-trained model and models fine-tuned with gradient ascent.}\n \\label{fig:neggrad}\n\\end{figure*}\n\n\\subsection{When negating task vectors works best}\n\\label{sec:when-neg-works}\n\nWe observe a positive correlation between the gain in accuracy from fine-tuning and the drop in accuracy when subtracting the corresponding task vector, both in comparison with the pre-trained model (Figure \\ref{fig:forget-corr}).\nWe speculate that the reason for this correlation is that when the gains from fine-tuning are small, the task vector provides a less clear direction of improvement, and the opposite direction thus provides a less clear direction of performance deterioration.\nIn the extreme case where fine-tuning does not improve accuracy, it would be surprising if the corresponding task vector is 
useful.\n\nWe note that this is a limitation of editing models by negating task vectors. When models already strongly exhibit the behavior we wish to remove, it is harder to do so with this technique.\nIn those circumstances, a more promising approach is to add the task vector obtained with gradient ascent, as described in Appendix \\ref{sec:apapendix-gradient-ascent}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/forget_correlation.pdf}\n \\caption{Correlation between the gain in accuracy from fine-tuning and the drop in accuracy when subtracting the corresponding task vector for image classification tasks.}\n \\label{fig:forget-corr}\n\\end{figure*}\n\n\n\\subsection{Additional tasks}\n\nIn addition to the tasks explored in Section \\ref{sec:add_img}, we study two other tasks, OCR and person identification.\n\nFor OCR, we use the synthetic dataset from \\citet{ilharco2022patching}, built using images from SUN-397 as backgrounds and mismatched class names as texts.\nThe task vector is produced by fine-tuning on those images, with the objective of predicting the written text (and not the background).\nAs shown in Figure \\ref{fig:forget-more-tasks} (left), especially for the larger CLIP models, negating the task vectors leads to large drops in performance with little change in accuracy on ImageNet. \n\nFor person identification, we use the Celebrity Face Recognition dataset, containing close to a million pictures of around one thousand celebrities.\\footnote{\\url{https:\/\/github.com\/prateekmehta59\/Celebrity-Face-Recognition-Dataset}.} We split the data into a training, validation and test set with proportions 0.8, 0.1 and 0.1.\nResults are shown in Figure \\ref{fig:forget-more-tasks} (right).\nWhile negating the task vectors leads to performance deterioration, we find that forgetting is less effective compared to other tasks like OCR. 
We hypothesize that one explanation for this could be that fine-tuning on this dataset provides only small gains in accuracy, as discussed in Appendix \\ref{sec:when-neg-works}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/forget_more_tasks.pdf}\n \\caption{Forgetting by negating task vectors on additional vision tasks, OCR and person identification.}\n \\label{fig:forget-more-tasks}\n\\end{figure*}\n\n\\section{Forgetting with text generation}\n\\label{sec:appendix-neg-lang}\n\nThis section presents additional experimental details and results complementing the findings presented in Section \\ref{sec:forget_lang}, showcasing the effect of negating task vectors obtained from text generation tasks.\n\n\\subsection{Experimental details}\n\nTo obtain task vectors, we fine-tune GPT-2 models \\citep{radford2019language} from the Hugging Face transformers library \\citep{wolf2019huggingface} on data from Civil Comments \\citep{borkan2019nuanced} where the toxicity score is larger than 0.8.\nWe use a learning rate of 1e-5, and fine-tune with a causal language modeling objective with the AdamW optimizer for 5 epochs using a global batch size of 32.\nAfter fine-tuning, we evaluate models obtained by adding task vectors with scaling coefficients $\\lambda \\in \\{0.0, 0.1, \\cdots, 1.0\\}$.\nIn Table \\ref{tab:toxicity}, we report results for the largest scaling coefficient such that perplexity is still within 0.5 points of the perplexity of the pre-trained model.\nTo evaluate toxicity, we generate 1000 samples from the models. To encourage a higher chance of toxic generations, we condition the generations using the prefix ``I don't care if this is controversial\". 
In early experiments, we also tried other prompts, which led to similar qualitative results.\nWe evaluate other prompts in Appendix \\ref{sec:app-realtoxic}.\nTo evaluate fluency, we measure the perplexity of the models on WikiText-103 with a striding window of size 1024 and a stride of 512 tokens.\n\n\n\\subsection{Additional models}\n\nIn addition to the GPT-2 Large model shown in Table \\ref{tab:toxicity}, we present results for GPT-2 Medium and GPT-2 Small models in Tables \\ref{tab:toxicity_gpt2med} and \\ref{tab:toxicity_gpt2small}.\nWe observe the same qualitative trends for the additional models. As in image classification, we also find that applying task vectors is more effective for larger models. \n\n\\begin{table*}\n\\caption{Making language models less toxic with negative task vectors. Results are shown for the GPT-2 Medium model.}\n\\setlength\\tabcolsep{4.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lrrr}\n\\toprule\n Method & \\% toxic generations ($\\downarrow$)& Avg. toxicity score ($\\downarrow$) & WikiText-103 perplexity ($\\downarrow$)\n \\\\\\midrule\nPre-trained & 4.3 & 0.06 & 18.5 \\\\\nFine-tuned & 54.5 & 0.54 & 20.2 \\\\\nGradient ascent & 0.0 & 0.00 & $>$10$^{10}$ \\\\\nRandom task vector & 4.2 & 0.05 & 18.5 \\\\\\midrule\nNegative task vector & 1.8 & 0.02 & 18.9 \\\\\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:toxicity_gpt2med}\n\\end{table*}\n\n\n\\begin{table*}\n\\caption{Making language models less toxic with negative task vectors. Results are shown for the GPT-2 Small model.}\n\\setlength\\tabcolsep{4.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lrrr}\n\\toprule\n Method & \\% toxic generations ($\\downarrow$)& Avg. 
toxicity score ($\\downarrow$) & WikiText-103 perplexity ($\\downarrow$)\n \\\\\\midrule\nPre-trained & 3.7 & 0.04 & 25.2 \\\\\nFine-tuned & 62.9 & 0.61 & 28.1 \\\\\nGradient ascent & 0.0 & 0.00 & $>$10$^{10}$ \\\\\nRandom task vector & 3.2 & 0.04 & 25.3 \\\\\\midrule\nNegative task vector & 2.5 & 0.03 & 25.3 \\\\\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:toxicity_gpt2small}\n\\end{table*}\n\n\\subsection{RealToxicityPrompts}\n\\label{sec:app-realtoxic}\n\nWe present additional experiments using RealToxicityPrompts \\citep{gehman2020realtoxicityprompts}, a dataset of natural language prompts used for measuring toxicity in language models. As in \\citet{gehman2020realtoxicityprompts}, we evaluate language models with 25 generations per prompt, scored using the Perspective API.\\footnote{\\url{https:\/\/github.com\/conversationai\/perspectiveapi}}\n\nIn Figure \\ref{fig:realtoxic}, we present results showing the expected maximum toxicity across the 25 generations and the perplexity on WikiText-103 as we vary the scaling coefficients.\nWe show results both for the \\textit{challenging} subset of the dataset, containing 1.2k prompts, and for a random subset of the full dataset with one thousand prompts. 
\nIn both cases, we see qualitatively similar trends: negating task vectors produced by fine-tuning on toxic data reduces the toxicity of the generations.\nFor GPT-2 Large, we see close to vertical movement as the scaling coefficient increases, showing large decreases in toxicity with little change in perplexity on WikiText-103.\nHowever, especially for the challenging set of the benchmark, there is still significant headroom for improvement.\n\n\n\\begin{figure}%\n \\centering\n \\subfloat{{\\includegraphics[width=0.495\\linewidth]{figures\/forget_realtoxic_random.pdf} }}%\n \n \\subfloat{{\\includegraphics[width=0.495\\linewidth]{figures\/forget_realtoxic_challenging.pdf} }}%\n \n \\caption{\\textbf{Toxicity results using RealToxicityPrompts} \\citep{gehman2020realtoxicityprompts}, for various GPT-2 models.}%\n \\label{fig:realtoxic}%\n \n\\end{figure}\n\n\n\\section{Learning via addition}\n\\label{sec:appendix-add}\n\n\nIn all experiments, we add task vectors together and use a \\textit{single} scaling coefficient for the sum of the vectors, $\\lambda \\in \\{0, 0.05, 0.1, \\cdots, 1.0\\}$.\nWhile scaling each task vector by its own coefficient could improve performance, exploring all combinations of scaling coefficients quickly becomes intractable when the number of tasks is not small, due to the curse of dimensionality. While we focus on a single scaling coefficient for simplicity, more sophisticated strategies could be explored in future work, such as using black box optimization to search the space of scaling coefficients.\n\nFurthermore, we note that the best multi-task model given a set of task vectors is often not obtained by using all of the task vectors, as shown in Figure \\ref{fig:clip-add-all}. Since adding task vectors is computationally efficient and evaluations are usually substantially less expensive than training, practitioners could try out many subsets of task vectors and choose the ones that maximize performance on the tasks of interest. 
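In code, the editing operation used throughout this appendix (building task vectors as weight differences and adding their scaled sum to the pre-trained weights) can be sketched as follows. This is a minimal sketch over dictionaries of NumPy arrays standing in for model state; all names are illustrative:

```python
import numpy as np

def task_vector(theta_pretrained, theta_finetuned):
    # A task vector is the element-wise difference between the
    # fine-tuned and the pre-trained weights.
    return {name: theta_finetuned[name] - theta_pretrained[name]
            for name in theta_pretrained}

def add_task_vectors(theta_pretrained, task_vectors, lam):
    # Edit the pre-trained model by adding the sum of several task
    # vectors, scaled by a single coefficient lambda.
    return {name: theta_pretrained[name]
                  + lam * sum(tv[name] for tv in task_vectors)
            for name in theta_pretrained}
```

In practice the dictionaries would be model state dicts (e.g., PyTorch tensors keyed by parameter name), and the coefficient would be chosen on held-out validation data as described above.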
Moreover, faster techniques such as the greedy algorithm proposed by \\citet{wortsman2022model} could allow users to efficiently discard task vectors that do not improve accuracy.\n\n\n\\subsection{The impact of random seeds}\n\n\n\nWe fine-tune five CLIP models on MNIST and five models on EuroSAT, varying only the random seed. We then edit models by adding all possible combinations of the corresponding task vectors (25 in total). The results in Figure \\ref{fig:seeds} indicate that different random seeds have little impact on the resulting accuracy of the edited models in this setup. It is possible that we would observe larger variance in other settings such as natural language processing \\citep{dodge2020fine,juneja2022linear}, but we note again that users can simply discard task vectors that yield no improvement on validation data.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{figures\/seed_ablation.pdf}\n \\caption{\\textbf{The impact of random seeds when fine-tuning.} Using different random seeds when fine-tuning on image classification tasks has little impact on the accuracy of edited models.}\n \\label{fig:seeds}\n\\end{figure}\n\n\n\n\\subsection{Multi-task training}\n\nIn addition to using multiple specialized models, we compare against a single multi-task model obtained by jointly fine-tuning on the eight image classification tasks we study. We fine-tune with the same hyper-parameters described in Appendix \\ref{sec:clip-exp-details}, also freezing the classification heads. \n\nMulti-task fine-tuning on the eight tasks achieves an average normalized performance of 0.994, compared to the best result obtained with task vectors, 0.912 (recall that 1.0 is obtained with multiple specialized models). Despite the headroom for improvement, multi-task training is less modular than using task vectors, requiring a new fine-tuning round every time a new task is added. 
In contrast, task vectors can be combined without any additional training and without the need to store or transfer the data used to create them, and practitioners can draw from the large pool of existing fine-tuned models such as the ones available on model hubs.\n\n\\subsection{Scaling coefficients}\n\nIn Figure \\ref{fig:acc-per-alpha} (left), we show the optimal scaling coefficients for the experiments where task vectors are added together. Recall that a single scaling coefficient is used for each experiment, regardless of the number of task vectors involved. The variance in the optimal scaling coefficients can be large, highlighting the need for tuning on a case-by-case basis. However, compared to tuning traditional hyper-parameters, tuning the scaling coefficient is less computationally expensive since, unlike most hyper-parameters, the scaling coefficient can be changed without any additional training.\n\n\n\n\\begin{figure}%\n \\centering\n \\subfloat{{\\includegraphics[width=0.495\\linewidth]{figures\/clip_add_alphas.pdf} }}%\n \\subfloat{{\\includegraphics[width=0.495\\linewidth]{figures\/clip_add_acc_per_alpha.pdf} }}%\n \\caption{\\textbf{The effect of scaling coefficients when adding task vectors}. Left: Optimal scaling coefficients when adding task vectors. Right: average normalized performance as a function of the scaling coefficient and the number of task vectors.}%\n \\label{fig:acc-per-alpha}%\n\\end{figure}\n\n\n\nIn Figure \\ref{fig:acc-per-alpha} (right), we show the average normalized performance across experiments as we vary the scaling coefficient and the number of task vectors. 
Scaling coefficients in the range 0.3 to 0.5 produce close to optimal results in many cases, although we recommend tuning this parameter when possible for best results.\n\n\\subsection{Accuracy on subsets of tasks}\n\nComplementing our results in the main paper, we show in Figure \\ref{fig:add-subsets} the average performance for all subsets of task vectors, averaged only over the tasks that originated the task vectors (recall that in Figure \\ref{fig:clip-add-all} we presented the normalized accuracy averaged over \\textit{all} tasks). We find that for smaller subsets, the single model obtained by adding task vectors more closely matches the performance of multiple specialized models, although that gap increases as the size of the subsets grows. \n\n\n\\begin{figure}%\n \\centering\n \\includegraphics[width=0.55\\linewidth]{figures\/clip_add_v2.pdf}\n \\caption{\\textbf{Building multi-task models by adding task vectors.} Unlike results shown in Figure \\ref{fig:clip-add-all}, here performance is averaged only over the tasks used to build the task vectors in each experiment.}%\n \\label{fig:add-subsets}%\n\\end{figure}\n\n\n\n\\subsection{ImageNet experiments}\n\nIn addition to results presented in Section \\ref{sec:add_img}, we explore whether addition performs well when fine-tuning on a larger-scale dataset, ImageNet. We fine-tune with the same hyper-parameters as described in Appendix \\ref{sec:clip-exp-details}, except for using a larger number of steps (4 epochs, around 40 thousand steps), to account for the larger size of ImageNet.\n\nWe then add the ImageNet task vector with each of the eight task vectors from Section \\ref{sec:add_img}, measuring accuracy both on ImageNet and on the task from the second task vector. For example, for MNIST, we add the MNIST task vector and the ImageNet task vector, and measure accuracy both on MNIST and on ImageNet. 
As shown in Figure \\ref{fig:add-imagenet}, adding the task vectors produces a single model with high accuracy on both tasks, which in most experiments is competitive with the fine-tuned models on their respective datasets.\n\n\n\n\\begin{figure}%\n \\centering\n \\includegraphics[width=0.99\\linewidth]{figures\/imagenet_add_eval_imagenet.pdf}%\n \\bigbreak\n \\includegraphics[width=0.99\\linewidth]{figures\/imagenet_add_eval_other.pdf}%\n \\caption{\\textbf{Adding pairs of task vectors containing a task vector from ImageNet}. For all eight other target tasks from Section \\ref{sec:add_img}, adding their task vector with an ImageNet task vector produces a model with high accuracy both on that task and on ImageNet.}%\n \\label{fig:add-imagenet}%\n\\end{figure}\n\n\\subsection{Adding pairs of task vectors from NLP tasks}\n\\label{sec:appendix-add-lang}\n\n\nIn this section, we present results for building multi-task models using checkpoints that were \\textit{not} fine-tuned by the authors, and were instead downloaded directly from a hub that hosts model checkpoints publicly (the Hugging Face Hub).\\footnote{\\url{https:\/\/huggingface.co\/models}}\n\nOur motivation is aligned with that of previous work on building multi-task models \\citep{colin2020exploring,khashabi2020unifiedqa,zhong2021adapting,mishra2022cross,wei2021finetuned,sanh2022multitask,min2022metaicl,wang2022benchmarking}.\n\nMore specifically, we explore six fine-tuned T5 models \\citep{colin2020exploring} downloaded from the Hugging Face Hub using popularity and diversity as criteria. The models were fine-tuned on a diverse set of natural language processing tasks, including sentiment analysis using movie reviews from IMDB \\citep{maas2011imdb}, question answering (RACE, \\citet{lai2017race}; QASC, \\citet{allenai:qasc}), summarization (MultiNews, \\citet{alex2019multinews}), question generation (SQuAD, \\citet{squadv1}), and constrained text generation (CommonGen, \\citet{lin2020commongen}). 
The checkpoints and tasks were chosen based on the availability of models that were fine-tuned from the same initialization (a T5-Base model) without introducing new parameters, and based on the diversity of the tasks and the popularity of the checkpoints on the hub. \nThe specific checkpoints we use are:\n\n\\begin{itemize}\n \\item IMDB: \\texttt{mrm8488\/t5-base-finetuned-imdb-sentiment}\n \\item RACE: \\texttt{mrm8488\/t5-base-finetuned-race}\n \\item QASC: \\texttt{mrm8488\/t5-base-finetuned-qasc}\n \\item MultiNews: \\texttt{mrm8488\/t5-base-finetuned-summarize-news}\n \\item SQuAD: \\texttt{mrm8488\/t5-base-finetuned-question-generation-ap}\n \\item CommonGen: \\texttt{mrm8488\/t5-base-finetuned-common\\_gen}\n\\end{itemize}\n\n\n\nFor evaluation, we use accuracy for the text classification task (IMDB), exact match for question answering tasks (RACE and QASC) and ROUGE-2\\footnote{\\url{https:\/\/huggingface.co\/spaces\/evaluate-metric\/rouge}} for text generation tasks (MultiNews, SQuAD question generation, and CommonGen).\nAs in Section \\ref{sec:add_img}, we normalize the performance on each task by the performance of the fine-tuned model on that task, to account for differences in task difficulty and evaluation metric.\n\nAs in image classification, we find that we can compress pairs of models into a single multi-task model with little performance loss (Figure \\ref{fig:add-nlp}).\nThese results are somewhat surprising, since the gap between the pre-trained model and fine-tuned models is much larger than in image classification, and tasks vary widely in terms of input domain, length, and output type.\nMoreover, while there is more variance across different subsets of tasks when compared to image classification, in various cases we observe \\emph{higher} performance than that of specialized models.\nOn average, the normalized performance of the model obtained by adding task vectors is 96.7\\%.\n\n\n\\begin{figure}%\n \\centering\n 
\\includegraphics[width=0.99\\linewidth]{figures\/t5_add_v3.pdf}%\n \\caption{Adding pairs of task vectors from natural language processing tasks.}%\n \\label{fig:add-nlp}%\n\\end{figure}\n\n\n\n\\subsection{GLUE experiments}\n\\label{sec:app-glue}\n\nIn this section, we describe the experimental setup used for the investigations presented in Section \\ref{sec:add-nlp}, studying whether performance on specific target tasks can be improved by adding external task vectors.\n\nOur experiments use T5-base models, fine-tuned on four tasks from the GLUE benchmark:\n\n\\begin{itemize}\n \\item \\textbf{Microsoft Research Paraphrase Corpus} (MRPC; \\citet{dolan2005automatically}) is a paraphrase task containing pairs of sentences labeled as either nearly semantically equivalent or not. The dataset is evaluated using the average of $F_1$ and accuracy.\n \\item \\textbf{Recognizing Textual Entailment} (RTE; \\citet{wang2018glue}) is a dataset where models are tasked to predict whether a sentence entails or contradicts another sentence. The data is originally from a series of datasets \\cite{dagan2005pascal, bar2006second, giampiccolo2007third, bentivogli2009fifth}. Accuracy is used as the evaluation metric.\n \\item \\textbf{Corpus of Linguistic Acceptability} (CoLA;\n \\citet{warstadt2018neural}) is a dataset with sentences labeled as either grammatical or ungrammatical. Models are evaluated on Matthews correlation (MCC; \\cite{matthews1975comparison}), which ranges between $-1$ and $1$.\n \\item \\textbf{Stanford Sentiment Treebank} (SST-2; \\citet{socher2013recursive}) is a sentiment analysis task, containing sentences labeled as expressing \\textit{positive} or \\textit{negative} sentiment. Accuracy is used as the evaluation metric.\n\\end{itemize}\n\nFor all tasks, we split the training set into two subsets, one used for fine-tuning and one used for determining the best external task vector, with the same size as the original validation sets. 
For fine-tuning, we use a batch size of 32, a learning rate of 1e-5, and fine-tune for 5 epochs using AdamW and a linear learning rate schedule. All results are averaged over 3 random seeds.\nWhen evaluating, we perform two forward passes for each sample, one for each label, and choose the label that minimizes the perplexity of the decoder.\n\n\n\\section{Task analogies}\n\nSimilarly to the experiments where multiple models are added together, we use a \\textit{single} scaling coefficient for the vector resulting from the task arithmetic, $\\lambda \\in \\{0, 0.1, \\cdots, 1.0\\}$.\nWhile scaling each task vector by its own coefficient could improve performance, we avoid this strategy since it complicates the search space and makes explorations more expensive. \nWe note that visual analogies have been explored in previous literature, albeit not at the task level \\cite{sadeghi2015visalogy}.\n\n\\subsection{Domain generalization}\n\\label{sec:app-sentiment}\n\nHere, we use task analogies to improve performance on tasks where no labeled data is available. We consider both Yelp \\citep{zhang2015character} and Amazon \\citep{mcauley2013hidden} binary-sentiment analysis as target tasks, using the \\texttt{amazon\\_polarity} and \\texttt{yelp\\_polarity} datasets from Huggingface datasets \\citep{lhoest-etal-2021-datasets}. As detailed in Section \\ref{sec:analogies}, given target and auxiliary tasks, we construct task vectors using the relationship $\\hat{\\tau}_\\textrm{target;\\,sent} = \\tau_\\textrm{target;\\,lm} + (\\tau_\\textrm{auxiliary;\\,sent} - \\tau_\\textrm{auxiliary;\\,lm})$. We apply two scaling coefficients: one to the auxiliary sentiment task vector, and another to the language modeling task vectors. \n\nWe compare our task analogy approach to two baselines: fine-tuning on the auxiliary task, and fine-tuning on the target task. The latter represents a performance upper bound, assuming we have labeled data for the target task. 
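As a concrete illustration, the analogy arithmetic above, with its two scaling coefficients, can be sketched as follows. This is a minimal sketch over dictionaries of NumPy arrays; the function names and the coefficient names `lam_sent` and `lam_lm` are illustrative:

```python
import numpy as np

def analogy_task_vector(tv_target_lm, tv_aux_sent, tv_aux_lm,
                        lam_sent, lam_lm):
    # tau_hat_{target; sent} = lam_sent * tau_{aux; sent}
    #                        + lam_lm * (tau_{target; lm} - tau_{aux; lm}),
    # applied independently to each parameter tensor.
    return {name: lam_sent * tv_aux_sent[name]
                  + lam_lm * (tv_target_lm[name] - tv_aux_lm[name])
            for name in tv_aux_sent}

def apply_task_vector(theta_pretrained, tau):
    # Edit the pre-trained weights with the combined task vector.
    return {name: theta_pretrained[name] + tau[name]
            for name in theta_pretrained}
```

Note that with both coefficients set to 1, this reduces to the unscaled relationship given above.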
\n\nTo produce language model task vectors, we use consecutive 128-token chunks of text in each task as input-output pairs, following \\citet{lester-etal-2021-power}. To make predictions under the classification task, we follow the evaluation technique described in \\ref{sec:app-glue}.\n\n\n\nFor all models, we perform a single epoch of fine-tuning, setting a batch size of 2 and accumulating gradients across 8 steps. We use AdamW and a linear learning rate schedule. We set the maximum input and output sequence length to be 128. For each model scale, we perform a grid search over learning rates in \\{1e-5, 3e-5, 5e-5, 8e-4\\}, choosing the fastest learning rate that avoids divergence.\n\n\nTo construct a task vector using the task analogy, we perform a grid over the values $\\lambda \\in \\{0.0, 0.1, ..., 1.0\\}$ for each scaling coefficient. Regardless of scale, we found that giving higher weight to the auxiliary sentiment task vector produced higher accuracy. For the smallest model, we saw better performance when applying a lower-valued coefficient to the language modeling task vectors. For the largest model, applying larger coefficients to the language modeling task vectors produced better performance. This trend may be reflective of the finding in \\ref{sec:forget_lang} that task forgetting is more effective with larger models.\n\n\\subsection{Kings and Queens}\n\\label{sec:appendix-kingsandqueens}\n\nAs a warm-up, we consider the task of classifying images as ``queen\", ``king\", ``woman\" or ``man\".\nWe collect 200 images from the web (50 for each category), by manually searching for the terms ``queen\", ``king\", ``man\" and ``woman\" using Google Images searches. 
We present samples in Figure \\ref{fig:kings-and-queens-samples}.\n\nOur experiments explore whether we can improve accuracy on each target category using only data from the other three categories.\nFor each category, we fine-tune CLIP models on the remaining three categories, and combine the task vectors according to the analogy relationship, e.g., $\\hat{\\tau}_\\textrm{king} = \\tau_\\textrm{queen} + (\\tau_\\textrm{man} - \\tau_\\textrm{woman})$.\nIn addition to evaluating on our collected set of images, we also evaluate on the ImageNet dataset as a control task.\n\nAs shown in Table \\ref{tab:kingsandqueens}, task analogies yield large gains in accuracy over pre-trained models with very little change in the control task, despite having no training data for the target task. Similar to \\citet{ilharco2022patching,ramasesh2021effect}, we find that results improve with model scale.\n\n\n\\begin{table*}\n\\caption{\\textbf{Learning via analogy.} By leveraging vectors from related tasks, we can improve accuracy on four new target tasks without any training data, and with little change on control settings. 
Results are shown for the CLIP models \\citep{radford2019language}, additional details are provided in Appendix \\ref{sec:appendix-kingsandqueens}.}\n\\setlength\\tabcolsep{4.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lcccccccc}\n\\toprule\n \\multirow{2}{*}{Method} & \\multicolumn{2}{c}{Queens} & \\multicolumn{2}{c}{Kings} & \\multicolumn{2}{c}{Woman} & \\multicolumn{2}{c}{Men}\n \\\\\n & Target & Control & Target & Control & Target & Control & Target & Control \\\\\n \\midrule\n ViT-B\/32 & 0.00 & 63.4 & 0.00 & 63.4 & 0.00 & 63.4 & 0.00 & 63.4\\\\\n \\quad{+ task vectors} & 42.0 & 62.4 & 30.0 & 62.4 & 69.4 & 62.5 & 58.0 & 62.6\\\\\\midrule\n ViT-B\/16 & 0.00 & 68.3 & 0.00 & 68.3 & 0.00 & 68.3 & 0.00 & 68.3 \\\\\n\\quad{+ task vectors} & 66.0 & 67.5 & 94.0 & 67.4 & 87.8 & 67.5 & 62.0 & 67.6 \\\\\\midrule\nViT-L\/14 & 0.00 & 75.5 & 0.00 & 75.5 & 0.00 & 75.5 & 0.00 & 75.5 \\\\\n\\quad{+ task vectors} & 100 & 74.7 & 100 & 74.5 & 100 & 74.6 & 96.0 & 74.6\\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:kingsandqueens}\n\\end{table*}\n\n\n\nFine-tuning CLIP models is done as described in Section \\ref{sec:clip-exp-details}, with the exception of using 40 optimization steps because of the small size of the datasets. When fine-tuning, we use only the images from one category (e.g., ``king\"), and a set of 1001 classes from which to choose, composed by the 1000 classes in ImageNet, and a new class. Since CLIP has already seen many images of queens, kings, men and women in its pre-training, we use a new category name for the new class when fine-tuning, in order to simulate learning a new concept. More concretely, we use the class name ``something\", which makes the accuracy of zero-shot models close or equal to zero. When evaluating, we also contrast between all 1001 options, including all ImageNet classes. 
This is done both for our target task, and for ImageNet, where we add an additional option.\nNote that we do not need to introduce any new task-specific weights to do all of these operations, since CLIP can perform classification with any set of classes by using its text encoder (which is frozen as in Section \\ref{sec:clip-exp-details}).\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=.85\\textwidth]{figures\/kinds_and_queens.pdf}\n \\caption{Samples from the dataset we collect for classifying queens, kings, women and men, as described in Section \\ref{sec:appendix-kingsandqueens}.}\n \\label{fig:kings-and-queens-samples}\n\\end{figure*}\n\n\n\\subsection{Subpopulations}\n\\label{sec:appendix-sketches}\n\nWe fine-tune CLIP models on each of the subpopulations with the same hyper-parameters as described in Section \\ref{sec:clip-exp-details}, using 500 optimization steps regardless of the number of samples. For the few-shot experiments, we sample the same number of samples for every class in the task.\nFor convenience, let ImageNet-A\\footnote{Not to be confused with the adversarial dataset from \\citet{imageneta}.} and ImageNet-B represent the two subpopulations from ImageNet, and Sketches-A and Sketches-B represent the two subpopulations from the sketches dataset from \\citet{eitz2012humans}. Note that ImageNet-A and Sketches-A share the same classes, and the same is true for ImageNet-B and Sketches-B. 
We present samples in Figure \\ref{fig:sketches-samples}.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=.85\\textwidth]{figures\/sketches.pdf}\n \\caption{Samples from the datasets used for the analogies with subpopulations experiments, as described in Section \\ref{sec:appendix-sketches}.}\n \\label{fig:sketches-samples}\n\\end{figure*}\n\nComplementing Figure \\ref{fig:clip-analogies}, we show a breakdown per model and for every subpopulation as a target in Table \\ref{tab:sketches}.\n\n\n\n\\paragraph{Independent scaling coefficients.} In addition to our standard procedure of using a single scaling coefficient for the vector resulting from the arithmetic operations, we explore having independent scaling coefficients for each task vector in the expression. In other words, we explore the models $\\theta_\\textrm{new} = \\theta + \\lambda_C\\tau_C + \\lambda_B\\tau_B - \\lambda_A\\tau_A$ for various scaling coefficients $\\lambda_A, \\lambda_B, \\lambda_C \\in \\{0, 0.1, \\cdots, 1.0\\}$.\nOn average, the optimal scaling coefficients were $\\lambda_B^\\star=\\lambda_C^\\star=0.32$ and $\\lambda_A^\\star=0.28$.\nUsing independent scaling coefficients improved performance over using a single scaling coefficient by 0.7 percentage points on average, but also required substantially more evaluations to be made ($10^3$ instead of 10). 
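The independent-coefficient search described above can be sketched as follows, assuming an `evaluate` function that scores a candidate set of weights on the target task (an assumption here; any held-out metric would do).

```python
# Illustrative grid search over independent scaling coefficients for
# theta_new = theta + lambda_C * tau_C + lambda_B * tau_B - lambda_A * tau_A.
# `evaluate` is a stand-in for target-task accuracy, not a real API.
import itertools

def grid_search(theta, tau_a, tau_b, tau_c, evaluate, step=0.1, top=1.0):
    grid = [round(i * step, 10) for i in range(int(top / step) + 1)]
    best = (-float("inf"), None)
    for la, lb, lc in itertools.product(grid, repeat=3):
        theta_new = {k: theta[k] + lc * tau_c[k] + lb * tau_b[k] - la * tau_a[k]
                     for k in theta}
        acc = evaluate(theta_new)
        if acc > best[0]:
            best = (acc, (la, lb, lc))
    return best
```

With an 11-point grid per coefficient this evaluates $11^3$ candidate models, which is why the single-coefficient search (11 evaluations) is the default in our experiments.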
\n\n\n\n\\begin{table*}\n\\caption{\\textbf{Learning by analogy on subpopulations.} Results are shown for multiple CLIP models, as detailed in Section \\ref{sec:appendix-sketches}.}\n\\setlength\\tabcolsep{4.5pt}\n\\renewcommand{\\arraystretch}{0.9}\n\\footnotesize\n\\begin{center}\n\\begin{tabular}{lccccccc}\n\\toprule\n \\multirow{2}{*}{Model} & Samples & \\multirow{2}{*}{Task vectors} & \\multicolumn{5}{c}{Accuracy} \\\\\n& per class & & Sketches-A & Sketches-B & ImageNet-A & ImageNet-B & Average\\\\\\midrule\n\\multirow{8}{*}{ViT-B\/32} & 0 & \\xmark & 0.712 & 0.677 & 0.861 & 0.923& 0.793 \\\\\n& 0 & \\cmark & 0.782 & 0.758 & 0.861 & 0.926& 0.832 \\\\\n& 1 & \\xmark & 0.754 & 0.758 & 0.868 & 0.919& 0.825 \\\\\n& 1 & \\cmark & 0.782 & 0.766 & 0.866 & 0.922& 0.834 \\\\\n& 2 & \\xmark & 0.768 & 0.778 & 0.868 & 0.919& 0.833 \\\\ \n& 2 & \\cmark & 0.786 & 0.800 & 0.867 & 0.922& 0.844 \\\\\n& 4 & \\xmark & 0.810 & 0.780 & 0.871 & 0.926& 0.847 \\\\\n& 4 & \\cmark & 0.802 & 0.796 & 0.871 & 0.927& 0.849 \\\\\\midrule\n\\multirow{8}{*}{ViT-B\/16} & 0 & \\xmark & 0.716 & 0.732 & 0.885 & 0.946& 0.820\\\\\n& 0 & \\cmark & 0.794 & 0.794 & 0.889 & 0.953& 0.858\\\\\n& 1 & \\xmark & 0.758 & 0.812 & 0.894 & 0.948& 0.853\\\\\n& 1 & \\cmark & 0.796 & 0.804 & 0.897 & 0.957& 0.863\\\\\n& 2 & \\xmark & 0.792 & 0.817 & 0.897 & 0.951& 0.865\\\\\n& 2 & \\cmark & 0.804 & 0.829 & 0.899 & 0.956& 0.872\\\\\n& 4 & \\xmark & 0.815 & 0.812 & 0.904 & 0.952& 0.871\\\\\n& 4 & \\cmark & 0.831 & 0.825 & 0.904 & 0.953& 0.878\\\\\\midrule\n\\multirow{8}{*}{ViT-L\/14} & 0 & \\xmark & 0.823 & 0.831 & 0.913 & 0.962& 0.882\\\\\n& 0 & \\cmark & 0.879 & 0.861 & 0.922 & 0.968& 0.908\\\\\n& 1 & \\xmark & 0.845 & 0.863 & 0.923 & 0.971& 0.900\\\\\n& 1 & \\cmark & 0.879 & 0.863 & 0.930 & 0.973& 0.911\\\\\n& 2 & \\xmark & 0.865 & 0.881 & 0.925 & 0.973& 0.911\\\\\n& 2 & \\cmark & 0.875 & 0.881 & 0.932 & 0.975& 0.916\\\\\n& 4 & \\xmark & 0.875 & 0.883 & 0.934 & 0.973& 0.916\\\\\n& 4 & \\cmark & 0.903 & 0.887 & 
0.941 & 0.975& 0.927\\\\\n\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\label{tab:sketches}\n\\end{table*}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nStochastic Resonance (SR) was discovered barely about two and half decades\nago, yet it has proved to be very useful in explaining many phenomena in \nnatural sciences[1-3]. SR refers to an enhanced response of a nonlinear system \nto a subthreshold periodic input signal in the presence of noise of optimum \nstrength. Here, noise plays a constructive role of pumping power in a \nparticular mode, that is in consonance with the applied field, at the cost of\nthe entire spectrum of modes present in it. SR, so defined, leaves a lot of \nliberty as to what is the physical quantity that is to be observed which \nshould show a maximum as a function of noise strength[4-23]. In other words, \nno unique quantifier of SR is specified. Also, in order that SR be a bonafide\nresonance the quantifier must show maximum as a function of frequency of the \napplied field as well. For instance, in a double-well system, hysteresis loop\narea, input energy or work done on the system in a period of the driving \nfield and area under the first peak in the residence time (in a well) \ndistribution are used to characterize SR as a bonafide resonance[4-17,19-22].\n\nIn the present work, motivated by recently discovered fluctuation theorems,\nwe show that in an overdamped bistable system input energy per period as well\nas the energy absorbed per period by the system from the bath, i.e, the heat,\ncan be used as quantifiers to study SR. Also, it is found that the relative \nvariance of both the quantities exhibit minimum at resonance; that is, \nwhenever input energy and heat show maximum as a function of noise strength \n(as also frequency), their respective relative fluctuations show minimum. \nThis shows that at SR the system response exhibits greater degree of \ncoherence. 
These fluctuations, however, are very large and often the physical\nquantities in question become non-self-averaging. We study some of these \naspects in the light of the fluctuation theorems in the following sections. \nThe fluctuation theorems are of fundamental importance to nonequilibrium \nstatistical mechanics[24-46]. They describe rigorous \nrelations for properties of distribution functions of physical variables such\nas work, heat, entropy production, etc., for systems in far-from-equilibrium \nregimes where the Einstein and Onsager relations no longer hold. These theorems \nare expected to play an important role in determining thermodynamic \nconstraints that can be imposed on the efficient operation of machines at \nthe nanoscale. Some of these theorems have been verified experimentally[47-53].\n\n\section{The Model}\nWe consider the motion of a particle in a double-well potential \n$V(x)=-\frac{a x^{2}}{2}+\frac{b x^{4}}{4}$ under the action of a weak \nexternal field $h(t)=A\sin(\omega t)$. The motion is described by the \noverdamped Langevin equation[44]\n\begin{equation}\n\gamma \frac{dx}{dt}=-\frac{\partial U(x)}{\partial x}+\xi(t) ,\n\end{equation}\nwhere $U(x)=V(x)-h(t)x$. The random forces satisfy \n$\langle \xi(t) \rangle =0$ and \n$\langle \xi(t)\xi(t^{'}) \rangle=2\gamma k_{B} T \delta(t-t^{'})$,\nwhere $\gamma$ is the coefficient of friction, $T$ is the absolute \ntemperature and $k_{B}$ is the Boltzmann constant. In the following we use a \ndimensionless form of equation (1), namely,\n\begin{equation}\n\frac{dx}{dt}=-\frac{\partial U(x)}{\partial x}+\xi(t),\n\end{equation}\nwhere $U(x)=-\frac{x^{2}}{2}+\frac{x^{4}}{4}-xh(t)$, and \nthe external field $h(t)=A\sin(\omega t)$. Now, $\xi(t)$ satisfies \n$\langle \xi(t) \xi(t^{'}) \rangle=D \delta(t-t^{'})$, where $D=2 k_{B} T$. \nAll the parameters are given in dimensionless units (in terms of \n$\gamma$, $a$ and $b$). 
We consider $A \ll 0.25$, so that the forcing \namplitude is much smaller than the barrier height between the two wells.\n\nFollowing the stochastic energetics formalism developed by Sekimoto[55], the \nwork done by the external drive $h(t)$ on the system, or the input energy per \nperiod (of time $\tau_{\omega}$), is defined as[21] \n\bdm\nW_{p}= \int_{t_{0}}^{t_{0}+\tau_{\omega}} \frac{\partial U}{\partial t} dt\n\edm\n\begin{equation}\n= -\int_{t_{0}}^{t_{0}+\tau_{\omega}} x(t) \frac{dh(t)}{dt} dt,\n\end{equation} \nwhere $h(t)$ is the drive field, which completes its period in time \n$\tau_{\omega}$. The completion of one period of $h(t)$, however, does not \nguarantee that the system comes back to the same state as the starting one. In \nother words, $x(t+\tau_{\omega})$ need not be equal to $x(t)$, and \n$U(x,t+\tau_{\omega})$ may differ from $U(x,t)$. The work done over a period, \n$W_{p}$, equals the change in internal energy, \n$\Delta U=U(x,t_{0}+\tau_{\omega})-U(x,t_{0})$, plus the heat $Q$ dissipated over a \nperiod (first law of thermodynamics), i.e., $W_{p}=\Delta U_{p}+Q_{p}$. Since \n$x(t)$ is stochastic, $W_{p}$, $\Delta U_{p}$ and $Q_{p}$ are not the same \nfor different cycles (or periods) of $h(t)$. The averages are evaluated from \na single long trajectory $x(t)$ (eqn (2)). From the same calculations one can \nalso obtain the probability distribution $P(W)$ and various moments of $W$.\nSimilarly, appealing to the first law of thermodynamics as stated above, we \ncan obtain $P(Q_{p})$ and $P(\Delta U_{p})$ and their moments, where the \nsubscript p indicates evaluation of the physical quantities over one period \nof the field. Numerical simulation of our model was carried out using \nHeun's method[56]. To calculate $W_{p}$ and $Q_{p}$ we first evolve the \nsystem and neglect initial transients. To get better statistics we calculate \n$W_{p}$, $Q_{p}$ for $10^{6}$ cycles. 
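The simulation loop just described can be sketched as follows: Heun integration of the dimensionless Langevin equation (eqn (2)) together with accumulation of the work integral of eqn (3) over one drive period. Parameter values and names are illustrative, not the production code.

```python
# Minimal sketch: Heun (predictor-corrector) integration of
# dx/dt = x - x^3 + h(t) + xi(t), with work W_p = -int x dh per period.
import math
import random

A, OMEGA, D, DT = 0.1, 0.1, 0.12, 1e-3  # illustrative values

def h(t):
    return A * math.sin(OMEGA * t)

def force(x, t):
    # -dU/dx with U(x,t) = -x^2/2 + x^4/4 - x*h(t)
    return x - x**3 + h(t)

def heun_step(x, t, rng):
    # Additive noise: the same Wiener increment enters predictor and corrector.
    dw = math.sqrt(D * DT) * rng.gauss(0.0, 1.0)
    x_pred = x + force(x, t) * DT + dw
    return x + 0.5 * (force(x, t) + force(x_pred, t + DT)) * DT + dw

def work_per_period(x0=1.0, t0=0.0, seed=0):
    rng = random.Random(seed)
    x, t = x0, t0
    n_steps = int(2 * math.pi / (OMEGA * DT))  # one full drive period
    w = 0.0
    for _ in range(n_steps):
        w -= x * (h(t + DT) - h(t))  # -x dh increment of eqn (3)
        x = heun_step(x, t, rng)
        t += DT
    return w
```

Repeating `work_per_period` over many successive periods (without resetting `x` and `t`) gives the samples of $W_{p}$ from which the distributions and moments are built.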
In some cases we evaluate $W$, \n$\Delta U$ and $Q$ over many periods, $n$, and calculate their averages, \nagain, for $10^6$ such entities.\n\section{Results and Discussions}\nThe internal energy being a state variable, the average change in its value over \na period, $\Delta U_{p}$, is identically equal to zero. Thus, in the time-periodic \nasymptotic state, the averaged work done over the period, \n$\langle W_{p} \rangle$, is dissipated into heat $\langle Q_{p} \rangle$ by \nthe system to the bath, and $\langle Q_{p} \rangle$ can also be identified \nas the hysteresis loop area. As has been reported earlier[19-22], \n$\langle W_{p} \rangle$, the input energy per period, shows a maximum as a \nfunction of $D$. Fig(1) shows that $\langle W_{p} \rangle$ and \n$\langle Q_{p} \rangle$ coincide; thus both physical quantities show\nSR. Hence, in this case the input energy per period, the heat per period, or the \nhysteresis loop area can equally well quantify stochastic resonance. However,\nin this work we focus mostly on the fluctuation properties of these \nquantities.\n\nThe relative variances $R_{W}$ and $R_{Q}$ of $W_{p}$ and $Q_{p}$, \nrespectively, show a minimum (fig(2)) as a function of $D$. It may be \nnoted that even though $\langle W_{p} \rangle$ and $\langle Q_{p} \rangle$ are identical, fluctuations in $W_{p}$ differ from the fluctuations in $Q_{p}$.\nThe relative variance of $Q_{p}$ is always larger than that of $W_{p}$ for all $D$. It is also noteworthy that the minimum value of the relative \nvariance is larger than one. However, the minimum becomes less than one if \nthe averages are taken not over a single period of the field but over a \nlarger (integral) number, $n>1$, of periods. Therefore, in order to obtain \nmeaningful averages of these physical quantities in such driven systems one \nneeds to average over time scales much larger than one period, so that the \naverages are significantly larger than the deviations about them. 
Also, as \n$n$ becomes large, the differences between the relative variances of $W$ and \n$Q$ become insignificant (see inset of fig(2)). Importantly, in the system \nunder study, this situation (mean $>$ dispersion) can be achieved by \nincreasing the duration of averaging (or the number of periods, $n$) more \neasily around the value of $D$ where SR occurs. The minimum of the relative \nvariance occurs just because the mean value is largest there, and not because \nthe dispersions are smallest. However, as the number of periods $n$ is increased, \nthe mean value of the heat dissipated over the $n$ periods \n$\langle Q_{np} \rangle \sim n$ for all $n$, whereas the dispersion \n$\sim \sqrt{n}$ for large $n$, so that the relative variance decreases with \n$n$ as $\frac{1}{\sqrt{n}}$ and one gets a range of $D$ where the averages \nbecome meaningful. We have observed numerically that $Q_{np}$ behaves as an \nindependent variable only when evaluated over a larger number of cycles $n$ \nthan in the case of $W_{np}$. For our present parameters, $Q_{np}$ is \napproximately uncorrelated beyond $10$ periods, whereas $W_{np}$ is uncorrelated beyond $5$ periods.\n\nIn fig(3), we have plotted the average heat dissipated \n$\langle Q_{p} \rangle$ ($=\langle W_{p} \rangle$) over a single period as a \nfunction of frequency. The values of the physical parameters are given in the \nfigure caption. The figure shows a maximum, as found in earlier literature[21].\nThus $\langle Q_{p} \rangle $ acts as a quantifier of bonafide stochastic \nresonance. In the inset we give the corresponding relative variances of heat \nand work as a function of frequency. We observe that heat fluctuations are \nlarger than work fluctuations at all frequencies. Near the resonance the \nrelative variance shows a minimum. It may be noted that the minimum relative \nvariance of both quantities $W_{p}$ and $Q_{p}$ is larger than one (fig(2) and fig(3)). 
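The $\frac{1}{\sqrt{n}}$ decay of the relative variance claimed above follows from the central limit theorem once the per-period samples decorrelate, and is easy to illustrate numerically. The sketch below uses synthetic i.i.d. per-period "heat" samples (exponentially distributed, so the single-period relative variance is one), not the Langevin simulation itself.

```python
# Illustration: relative variance (std/mean) of n-period sums of i.i.d.
# positive samples decays as 1/sqrt(n). Synthetic data, for intuition only.
import math
import random

def relative_variance(samples):
    m = sum(samples) / len(samples)
    var = sum((s - m) ** 2 for s in samples) / len(samples)
    return math.sqrt(var) / m

rng = random.Random(0)
single = [rng.expovariate(1.0) for _ in range(200000)]  # mean 1, rel. var. ~ 1

def rv_over_n_periods(n):
    sums = [sum(single[i:i + n]) for i in range(0, len(single) - n + 1, n)]
    return relative_variance(sums)
```

For example, `rv_over_n_periods(25)` is close to $1/\sqrt{25}=0.2$, mirroring how averaging $Q$ over $n=25$ drive periods tames its fluctuations.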
\n\nIn fig(4), we plot the probability distributions of $W_{p}$ and $Q_{p}$ for \nvarious values of $D$. For low values of $D$ (e.g., $D=0.02$) $P(W_{p})$ is \nGaussian, whereas $P(Q_{p})$ has a long exponential tail, as in the case of a \nsystem driven in a harmonic well, with almost no chance of a particle \ngoing over to the other well of the double-well potential. As $D$ is \ngradually increased, rare passages to the other well become a possibility and\na very small peak appears at a finite positive value of $W_{p}$ (or $Q_{p}$)\n(e.g., at $D=0.04$). As $D$ is increased further, $P(W_{p})$ and $P(Q_{p})$ \nbecome multipeaked and the averages $\langle W_{p} \rangle$, \n$\langle Q_{p} \rangle$ shift towards positive values. The distributions \nbecome most asymmetric at around $D=0.12$ (where SR occurs) and the asymmetry\nreduces again at larger $D$ (fig(4)). When $D$ becomes large (e.g., $D=0.5$) \nthe distribution becomes completely symmetric; at such high $D$ values the\npresence of the potential hump becomes ineffective at splitting the distribution into\ntwo or more peaks. At very small and very large $D$ values $P(W_{p})$ is close to \nGaussian, and so is $P(Q_{p})$, but with a slowly decaying exponential tail. In all\nthe graphs, the distributions $P(Q_{p})$ ($P(W_{p})$) extend to negative values of \n$Q_{p}$ ($W_{p}$). A finite value of the distribution on the negative side is \nnecessary to satisfy certain fluctuation theorems. Moreover, $P(Q_{p})$\nhas more weight at large negative $Q_{p}$ than $P(W_{p})$ has at large negative $W_{p}$.\n\nIt is worth reemphasizing that $W$ and $Q$ behave as additive (or extrinsic) \nphysical quantities with respect to the number of periods $n$, and hence \n$\langle W_{np} \rangle $ and $\langle Q_{np} \rangle $ increase \nin proportion to $n$, whereas $\Delta U$, in this case, is an intrinsic \nphysical quantity and $\frac{\Delta U}{n} \rightarrow 0$ as \n$n \rightarrow \infty$. 
This indicates that the distributions $P(W_{np})$ and\n$P(Q_{np})$ both have identical characteristics as $n \\rightarrow \\infty$.\nTherefore, the difference between \n$(\\frac{\\sqrt{\\langle W_{np}^{2} \\rangle -\\langle W_{np} \\rangle ^{2}}}\n{\\langle W_{np} \\rangle})$ and \n$(\\frac{\\sqrt{\\langle Q_{np}^{2} \\rangle -\\langle Q_{np} \\rangle ^{2}}}\n{\\langle Q_{np} \\rangle})$ vanishes as $n \\rightarrow \\infty$. In the recent \nliterature it is shown that the distribution $P(W_{np})$ over a large number\nof periods approaches a Gaussian. Also, if one considers $W_{p}$ over a \nsingle period by increasing the noise strength, $P(W_{p})$ approaches \nGaussian and satisfies the steady state fluctuation theorem (SSFT). SSFT \nimplies[26,34-36,44-46,51-53] the probability of physical quantity $x$ to \nsatisfy the relation $P(x)\/P(-x) = \\exp(\\beta x)$, where $\\beta$ is \nthe inverse temperature and $x$ may be work, heat, etc. In fig(5), the \nevolution of $ P(Q_{np})$ is shown as $n$ is increased . As $n$ increases the \ncontribution of negative $Q$ to the distribution decreases; besides, the \ndistribution gradually becomes closer and closer to Gaussian. There is a \ncontribution to $P(Q_{np})$ due to change in the internal energy $\\Delta U$ \nwhich is supposed to dominate at very large $Q$ making the distribution \nexponential in the asymptotic regime[34,35,53]. However, it is not possible to \ndetect this exponential tail in our simulations. For large $n$, $P(Q_{np})$ \napproaches Gaussian(inset of fig(5)). The Gaussian fit of the graph almost \noverlaps and the calculated ratio, \n$\\frac{\\langle Q_{np}^{2} \\rangle -\\langle Q_{np} \\rangle^{2}}{\\frac{2}\n{\\beta} \\langle Q_{np} \\rangle}$ equals $0.99$ for $n=25$. This ratio is \ncloser to one, a requirement for SSFT to hold where $P(Q)$ is Gaussian[22,44,45]. \nFig(6) shows the plot of $ln(\\frac{P(Q_{np})}{P(-Q_{np})})$ as a function of \n$\\beta Q_{np}$ for various values of $n$. 
One can readily see that the slope of \n$\ln(\frac{P(Q_{np})}{P(-Q_{np})})$ approaches $1$ for \n$Q \ll \langle Q_{np} \rangle $ for large $n$. This is a statement of the \nconventional steady state fluctuation theorem. As the number of periods $n$,\nover which $Q_{np}$ is calculated, is increased, the conventional SSFT is \nsatisfied for $Q_{np}$ less than $\langle Q_{np} \rangle$ (e.g., for $n=25$,\nSSFT is valid for $Q_{np}$ less than $0.4$, for $D=0.16$). There exists an \nalternative relation for heat fluctuations, namely, the extended heat \nfluctuation theorem[34,35]. Here, the distribution function obeys a different \nsymmetry property for $Q \gg \langle Q_{np} \rangle$ for finite $n$. As \n$n \rightarrow \infty$, $\langle Q_{np} \rangle \rightarrow \infty$, \nand hence the conventional SSFT holds, as has been clarified earlier \nfor linear systems[53]. \n\nIt is further interesting to investigate effects associated with SR in an asymmetric double-well potential involving two hopping time scales instead of one as in the symmetric case. We therefore consider a scaled asymmetric potential $V(x)=\frac{- x^{2}}{2}+\frac{x^{4}}{4}-cx$ driven by the external field $h(t)$. Fig(7) shows the average input energy $\langle W_{p} \rangle$ and the average heat $\langle Q_{p} \rangle $ over a single period as a function of $D$ for various values of the asymmetry parameter $c$. From this figure we find that the peak becomes broader and lower as $c$ is increased. The peak shifts to larger values of the noise intensity for higher $c$. In other words, the phenomenon of SR is not as pronounced[2] as in the case of $c=0$ (fig(2)). This is because the synchronization between the signal and the particle hopping between the two wells becomes weak: for $c \neq 0$, the mean passage time from well $1$ to well $2$ is different from the mean passage time from well $2$ to well $1$. As a consequence the relative variances $R_{W}$ and $R_{Q}$ become 
larger than in the case of $c=0$ (fig(2)), as shown in the inset of fig(7).\n\nIn fig(8(a)) and fig(8(b)) we have plotted the probability distributions $P(W_{p})$ and $P(Q_{p})$ over a single period for different values of the asymmetry parameter $c$, for fixed values of $D=0.12$, $A=0.1$ and $\omega =0.1$. As the asymmetry increases, the probability for the particle to remain in the lowest well increases. The particle then performs simple oscillations around the most stable minimum over a longer time before making transitions to the other well. Hence the Gaussian-like peak near $W \approx 0$ or $Q \approx 0$ grows as $c$ increases. The weight of $P(W_{p})$ at larger values of work (positive as well as negative) decreases with increasing $c$. However, for $P(Q_{p})$, the magnitude at large positive and negative values of $Q_{p}$ increases as we increase the asymmetry parameter. This contrasting behavior can be attributed to the larger fluctuations of the internal energy $\Delta U_{p}$ as one increases $c$, which we have verified separately. Due to this contribution of $\Delta U_{p}$ to $Q_{p}$, the natures of $P(W_{p})$ and $P(Q_{p})$ are qualitatively different. In all cases, for fixed asymmetry $c$, fluctuations in heat are larger than fluctuations in work.\n \n In fig(9) and fig(10), the evolution of $P(W_{np})$ and $P(Q_{np})$, respectively, is plotted for various numbers of periods $n$. We clearly observe that as $n$ increases both distributions tend to Gaussians with the fluctuation ratio $\frac {V}{\frac{2}{\beta}\langle M \rangle }=1$ between their variance $V$ and mean $\langle M \rangle$, as required to satisfy the SSFT, as mentioned earlier. To satisfy the SSFT for heat we have to take a larger number of periods than for work. Only in the large-$n$ limit does the contribution to heat from the internal energy become negligible. In the insets of fig(9) and fig(10) we show a Gaussian fit (with fluctuation ratio equal to one), which agrees 
perfectly well with our numerical data. Conclusions regarding the validity of the SSFT in the asymmetric case for larger numbers of periods remain the same as for the symmetric case.\n\n\nIn summary, we find that the SR shown by a particle moving in a (symmetric) double-well \npotential and driven by a weak periodic field can be characterized well by the heat $\langle Q_{p}\rangle$ dissipated to the bath, or the hysteresis loop area. It can equally well be characterized by the relative dispersion of $\langle W_{p}\rangle$ and $\langle Q_{p}\rangle$. At resonance the relative dispersion shows a minimum as a function of both $D$ and $\omega$. We also show that the minimum relative variance can be made less than one by taking long-time protocols of the applied field. For long-time protocols the distribution $P(Q_{np})$ satisfies the conventional SSFT at $Q_{np} \ll \langle Q_{np} \rangle$ for finite $n$[53]. We have also shown that SR gets weakened in the presence of an asymmetric potential and, as a consequence, fluctuations in heat and work become larger. The SSFT too is satisfied for both work and heat when they are calculated over a large number of periods.\n\section{Acknowledgements:}\nAMJ and MCM thank BRNS, DAE, Govt. 
of India for partial financial support.\nAMJ also thanks DST, India for financial support. MCM acknowledges IOP, Bhubaneswar for hospitality.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\uppercase{Introduction}}\n\nWhen programming the small devices that constitute the nodes of\nthe Internet of Things (IoT), one has to adapt to the limitations of\nthese devices.\n\nApart from their very limited processing power (especially compared to\ncurrent personal computers, and even mobile devices like smartphones\nand tablets), the main specificity of these devices is that they are operated\non small batteries (e.g.: AAA or button cells).\n\nThus, one of the main challenges with these motes is the need to reduce\ntheir energy consumption as much as possible. We want their batteries to last\nas long as possible, for economic but also practical reasons: it may be\ndifficult---even almost impossible---to change the batteries of some of these\nmotes, because of their locations (e.g.: on top of buildings, under roads,\netc.)\n\nIoT motes are usually very compact devices: they are typically built around\na central integrated chip that contains the main processing unit and several\nbasic peripherals (such as timers, A\/D and D\/A converters, I\/O\ncontrollers\\ldots) called microcontroller units or MCUs. Apart from the MCU,\na mote generally only contains some ``physical-world'' sensors and a radio\ntransceiver for networking. The main radio communication protocol currently\nused in the IoT field is IEEE 802.15.4. Some MCUs do integrate an 802.15.4\ntransceiver on-chip.\n\nAmong the various components that constitute a mote, the most power-consuming\nblock is the radio transceiver. Consequently, to reduce the power consumption\nof IoT motes, a first key point is to use the radio transceiver only when\nneeded, keeping it powered-off as much as possible. 
The software element\nresponsible for controlling the radio transceiver in an adequate manner is\nthe \emph{MAC~\/ RDC (Media Access Control \& Radio Duty Cycle)}\nlayer of the network stack.\n\nAn efficient power-saving strategy for IoT motes thus relies on finding the\nbest trade-off between minimizing the radio duty cycle and keeping\nnetworking efficiency at the highest possible level. This is achieved\nby developing new, ``intelligent'' MAC~\/ RDC protocols.\n\nTo implement new, high-performance MAC~\/ RDC protocols, one needs to be\nable to react to events with good reactivity (lowest latency possible) and\nflexibility. These protocols rely on precise timing to ensure efficient\nsynchronization between the different motes and other radio-networked\ndevices of a \emph{Personal Area Network (PAN)}, thus allowing\nthe radio transceivers to be turned on \emph{only} when needed.\n\nAt the system level, being able to follow such accurate timings means having\nvery efficient interruption management, and making extensive use of hardware\ntimers, which are the most precise timing source available.\n\nThe second most power-consuming element in a mote, after the radio\ntransceiver, is the MCU itself: every current MCU offers ``low-power modes'',\nwhich consist in disabling the various hardware blocks, beginning with the CPU\ncore. The main way to minimize energy consumption with an MCU is thus\nto disable its features as much as possible, only using them when needed:\nthat effectively means putting the whole MCU to sleep as much as possible.\n\nAs for the radio transceiver, using the MCU efficiently while keeping\nthe system efficient and reactive means optimal use of interruptions,\nand of hardware timers for synchronization.\n\nThus, in both cases, we need to optimally use interruptions as well as\nhardware timers. 
Being able to use them both efficiently without too much\nhassle implies the use of a specialized operating system (OS), especially\nto easily benefit from multitasking abilities. That is what we will\ndiscuss in this paper.\n\n\n\\section{\\uppercase{Previous work and problem statement}}\n\nSpecialized OSes for the resource-constrained devices that constitute\nwireless sensor networks have been designed, published, and made available\nfor quite a long time.\n\n\\subsection{TinyOS}\n\nThe first widely used system in this domain was \\emph{TinyOS} \\cite{TinyOS}.\nIt is an open-source OS, whose first stable release (1.0) was published in\nseptember 2002. It is very lightweight, and as such well adapted to limited\ndevices like WSN motes. It has brought many advances in this domain, like\nthe ability to use Internet Protocol (IP) and routing (RPL) on 802.15.4\nnetworks, including the latest IPv6 version, and to simulate networks\nof TinyOS motes via TOSSIM \\cite{TOSSIM}.\n\nIts main drawback is that one needs to learn a specific language---named\nnesC---to be able to efficiently work within it. This language is quite\ndifferent from standard C and other common imperative programming languages,\nand as such can be difficult to master.\n\nThe presence of that specific language is no coincidence: TinyOS is built\non its own specific paradigms: it has an unique stack, from which the\ndifferent components of the OS are called as statically linked callbacks.\nThis makes the programming of applications complex, especially for\ndecomposing into various ``tasks''. The multitasking part is also\nquite limited: tasks are run in a fixed, queue-like order. 
Finally,\nTinyOS requires a custom GNU-based toolchain to be built.\n\nAll of these limitations, plus a relatively slow development pace (the last\nstable version dates back to August 2012), have harmed its adoption,\nand it is no longer the most widely used OS in the domain.\n\n\\subsection{Contiki}\n\nThe current reference OS in the domain of WSN and IoT is \\emph{Contiki}\n\\cite{ContikiOS}. It is also an open-source OS, which was first released\nin 2002. It is also at the origin of many assets: we can mention, among\nothers, the uIP Embedded TCP\/IP Stack \\cite{uip}, which has been extended\nto uIPv6, the low-power Rime network stack \\cite{Rime}, or the Cooja advanced\nnetwork simulator \\cite{Cooja}.\n\nWhile a bit more resource-demanding than TinyOS, Contiki is also very\nlightweight and well adapted to motes. Its greatest advantage over TinyOS\nis that it is based on standard, well-known OS paradigms, and coded\nin standard C language, which makes it relatively easy to learn and program.\nIt offers an event-based kernel, implemented using cooperative multithreading,\nand a complete network stack. All of these features and advantages have made\nContiki widespread, making it the reference OS when it comes to WSN.\n\nContiki developers have also made advances in the MAC\/RDC domain: many\nprotocols have been implemented as part of the Contiki network stack, and\na specifically developed one, ContikiMAC, was published in 2011\n\\cite{ContikiMAC} and implemented in Contiki as the default\nRDC protocol (designed to be used with standard CSMA\/CA as MAC layer).\n\nHowever, Contiki's extremely compact footprint and high optimization come\nat the cost of some limitations that prevented us from using it as our\nsoftware platform.\n\nContiki OS is indeed not a real-time OS: the processing of ``events''---using\nContiki's terminology---is performed by the kernel's scheduler, which is\nbased on cooperative multitasking. 
This scheduler only triggers at a specific,\npre-determined rate; on the platforms we are interested in, this rate is\nfixed at 128~Hz: this corresponds to a time skew of up to 8~milliseconds\n(8000~microseconds) to process an event, interrupt handling being\none of the possible events. Such a large granularity is clearly\na huge problem when implementing high-performance MAC\/RDC protocols:\nsince the transmission of a full-length 802.15.4 packet takes\nabout 4~milliseconds (4000~microseconds), a time granularity of\n320~microseconds is needed, corresponding to one backoff period (BP).\n\nTo address this problem, Contiki provides a real-time feature,\n\\texttt{rtimer}, which makes it possible to bypass the kernel scheduler and use\na hardware timer to trigger execution of user-defined functions. However,\nit has very severe limitations:\n\n\\begin{itemize}\n\n\\item only one instance of \\texttt{rtimer} is available, thus only one\nreal-time event can be scheduled or executed at any time; this limitation\nforbids development of advanced real-time software---like high-performance\nMAC~\/ RDC protocols---or at least makes it very hard;\n\n\\item moreover, it is unsafe to execute from \\texttt{rtimer}, even\nindirectly, most of the Contiki basic functions (e.g.: kernel, network\nstack, etc.), because these functions are not designed to handle pre-emption.\nContiki is indeed based on cooperative multithreading, whereas the\n\\texttt{rtimer} mechanism seems like an ``independent feature'', coming\nwith its own paradigm.\nOnly a precise set of functions known as ``interrupt-safe'' (like\n\\texttt{process\\_poll()}) can be safely invoked from \\texttt{rtimer};\nusing other parts of Contiki almost certainly means a crash or\nunpredictable behaviour. 
This restriction practically makes it very\ndifficult to write Contiki extensions (like network stack layer drivers)\nusing \\texttt{rtimer}.\n\n\n\\end{itemize}\n\nAlso note that this cooperative scheduler is designed to manage a specific\nkind of task: the \\emph{protothreads}. This solution allows managing\ndifferent threads of execution without requiring each of them to have\nits own separate stack \\cite{Protothreads}. The great advantage of\nthis mechanism is the ability to use a unique stack, thus greatly\nreducing the amount of RAM needed by the system. The trade-off is\nthat one must be careful when using certain C constructs (e.g.:\nit is impossible to use the \\texttt{switch} statement in\nsome parts of programs that use protothreads).\n\nFor all these reasons, we were unable to use Contiki OS to develop and\nimplement our high-performance MAC\/RDC protocols. We definitely needed\nan OS with efficient real-time features and an efficient event handling\nmechanism.\n\n\\subsection{Other options}\n\nThere are other, less widely used OSes designed for the WSN\/IoT domain,\nbut none of them fulfilled our requirements, for the following reasons:\n\\begin{description}\n\n\\item[SOS] \\cite{SOS} This system's development has been discontinued since November\n 2008; its authors explicitly recommend on their website to\n ``consider one of the more actively supported alternatives''.\n\n\\item[Lorien] \\cite{LorienOS} While its component-oriented approach is\n interesting, this system does not seem very widespread. It is currently available for only\n one hardware platform (TelosB\/SkyMote), which seriously\n limits the portability we can expect from using an OS. 
 \n Moreover, its development seems to have slowed down quite\n a bit, since the latest available Lorien release was published\n in July 2011, while the latest commit in the project's\n SourceForge repository (r46) dates back to January 2013.\n\n\\item[Mantis] \\cite{MantisOS} While this project claims to be Open Source,\n it has made no public release on its SourceForge web site,\n and access to the source repository\n (\\texttt{http:\/\/mantis.cs.colorado.edu\/viewcvs\/}) seems\n to be stalled. Moreover, reading the project's main web page\n shows us that the last posted news item mentions a first beta\n to be released in 2007. The last publications about\n Mantis OS also seem to date from 2007. All of these elements\n tend to indicate that this project is abandoned\\ldots\n\n\\item[LiteOS] \\cite{LiteOS} This system offers very interesting features,\n especially the ability to update the nodes' firmware over the air,\n as well as the built-in hierarchical file system. Unfortunately,\n it is currently only available on IRIS\/MicaZ platforms,\n and requires AVR Studio for programming (which imposes\n Microsoft Windows as a development platform). This\n greatly hinders portability, since LiteOS is clearly strongly\n tied to the AVR microcontroller architecture.\n\n\\item[MansOS] \\cite{MansOS} This system is very recent and offers many\n interesting features, like optional preemptive multitasking,\n a network stack, runtime reprogramming, and a scripting\n language. It is available on two MCU architectures: AVR and\n MSP430 (but not ARM). However, none of the real-time features\n we wanted seem to be available: e.g. 
only software timers with\n a 1~millisecond resolution are available.\n\n\\end{description}\nIn any case, none of the alternative OSes cited above offers the real-time\nfeatures we were looking for.\n\n\\bigskip\n\nOn the other hand, ``bare-metal'' programming is also unacceptable for us:\nit would mean sacrificing portability and multitasking; and we would also\nneed to redevelop many tools and APIs to make application programming\neven remotely practical enough for third-party developers who would\nwant to use our protocols.\n\n\\bigskip\n\nWe also considered using an established real-time OS (RTOS) as a base\nfor our work. The current reference when it comes to open-source RTOS is\n\\emph{FreeRTOS} (\\texttt{http:\/\/www.freertos.org\/}). It is a robust, mature\nand widely used OS. Its codebase consists of clean and well-documented\nstandard C language. However, it offers only core features, and doesn't\nprovide any network subsystem at all. Redeveloping a whole network stack\nfrom scratch would have been too time-consuming.\n(Network extensions exist for FreeRTOS, but they are either immature,\nor very limited, or proprietary and commercial software; and most of them\nare tied to a particular piece of hardware, thus ruining\nthe portability advantage offered by the OS.)\n\n\\subsection{Summary: Wanted Features}\n\nTo summarize the issue, what we required is an OS that:\n\\begin{itemize}\n\\item is adapted to the limitations of the deeply-embedded MCUs that\n constitute the core of WSN\/IoT motes;\n\\item provides real-time features powerful enough to support the\n development of advanced, high-performance MAC~\/ RDC protocols;\n\\item includes a network stack (even a basic one) adapted to wireless\n communication on the 802.15.4 radio medium.\n\\end{itemize}\nHowever, none of the established OSes commonly used either in the IoT domain\n(TinyOS, Contiki) or in the larger spectrum of RTOS (FreeRTOS)\ncould match our needs.\n\n\n\\section{\\uppercase{The RIOT 
Operating System}}\n\nConsequently, we focused our interest on \\emph{RIOT OS} \\cite{RIOT}.\n\nThis new system---first released in 2013---is also open-source and\nspecialized in the domain of low-power, embedded wireless sensors.\nIt offers many interesting features, which we will now describe.\n\nIt provides the basic benefits of an OS: portability (it has been ported\nto many devices powered by ARM, MSP430, and---more recently---AVR\nmicrocontrollers) and a comprehensive set of features, including\na network stack.\n\nMoreover, it offers key features that are otherwise still unknown in\nthe WSN\/IoT domain:\n\n\\begin{itemize}\n\n\\item an efficient, interrupt-driven, tickless \\emph{micro-kernel};\n\n\\item that kernel includes a priority-aware task scheduler, providing\n \\emph{pre-emptive multitasking};\n\n\\item a highly efficient use of \\emph{hardware timers}: all of them can be\n used concurrently (especially since the kernel is tickless), offering\n the ability to schedule actions with high granularity; on low-end\n devices, based on the MSP430 architecture, events can be scheduled\n with a resolution of 32~microseconds;\n\n\\item RIOT is entirely written in \\emph{standard C language}; but unlike\n Contiki, there are no restrictions on usable constructs (such as\n those introduced by the protothreads mechanism);\n\n\\item a clean and \\emph{modular design}, which makes development with and\n \\emph{into} the system itself easier and more productive.\n\n\\end{itemize}\n\nThe first three features listed above make RIOT a full-fledged\n\\emph{real-time} operating system.\n\nWe also believe that the tickless kernel and the optimal use of hardware\ntimers should make RIOT OS a well-suited software platform for optimizing energy\nconsumption on battery-powered, MCU-based devices.\n\nA drawback of RIOT, compared to TinyOS or Contiki, is its higher memory\nfootprint: the full network stack (from PHY driver up to RPL routing with\n\\mbox{6LoWPAN} and MAC~\/ RDC 
layers) cannot be compiled for Sky\/TelosB\nbecause it overflows the available memory. Right now, constrained devices like\nMSP430-based motes are limited to the role of what the 802.15.4 standard\ncalls \\emph{Reduced Function Devices (RFD)}, the role of \\emph{Full\nFunction Devices (FFD)} being reserved for more powerful motes (e.g.:\nbased on ARM microcontrollers).\n\nHowever, we also note that, thanks to its modular architecture, the RIOT\nkernel, compiled with only PHY and MAC~\/ RDC layers, is actually lightweight\nand consumes little memory. We consequently believe that the current\nsituation will improve with the maturation of the higher layers of the RIOT\nnetwork stack, and that in the future more constrained devices could also be\nused as FFD with RIOT OS.\n\n\\medskip\n\nWhen we began to work with RIOT, it also had two other issues: the MSP430\nversions were not stable enough to make real use of the platform; and\nbeyond basic CSMA\/CA, no work related to the MAC~\/ RDC layer had been\ndone on that system. This is where our contributions fit in.\n\n\n\\section{\\uppercase{Our contributions}}\n\nFor our work, we use---as our main hardware platform---IoT motes built\naround MSP430 microcontrollers.\n\nMSP430 is a microcontroller (MCU) architecture from Texas Instruments,\noffering very low power consumption, a low price, and good performance thanks\nto a custom 16-bit RISC design. This architecture is very common in IoT motes.\nIt is also very well supported, especially by the Cooja simulator\n\\cite{Cooja}, which makes simulations of network scenarios---especially\nwith many devices---much easier to design and test.\n\nRIOT OS has historically been developed first on legacy ARM devices\n(ARM7TDMI-based MCUs), then ported to more recent microcontrollers\n(ARM Cortex-M) and other architectures (MSP430 then AVR). 
However,\nthe MSP430 port was, before we improved it, still not as ``polished''\nas the ARM code and thus prone to crashes.\n\nOur contribution can be summarized in the following points:\n\n\\begin{enumerate}\n\n\\item analysis of the limitations of current OSes (TinyOS, Contiki, etc.),\n and why they are incompatible with the development of real-time\n extensions like advanced MAC~\/ RDC protocols;\n\n\\item addition of debugging features to the RIOT OS kernel, more precisely\n a mechanism to handle fatal errors: crashed systems can be\n ``frozen'' to facilitate debugging during development; or,\n in production, can be made to reboot immediately, thus reducing\n the unavailability of a RIOT-running device to a minimum;\n\n\\item porting of RIOT OS to a production-ready, MSP430-based device:\n the Zolertia Z1 mote (already supported by Contiki,\n and used in real-world scenarios running that OS);\n\n\\item debugging of the MSP430-specific portion of RIOT OS---more specifically:\n the hardware abstraction layer (HAL) of the task scheduler---making\n RIOT OS robust and production-ready on MSP430-based devices.\\\\\n Note that all of these contributions have been reviewed by RIOT's\n development team and integrated into the ``master'' branch of RIOT OS'\n Github repository (i.e.: they are now part of the standard code base of\n the system).\n\n\\item running on MSP430-based devices also allows RIOT OS applications\n to be simulated with the Cooja simulator; this greatly improves\n the speed and ease of development.\n\n\\item thanks to these achievements, we now have a robust and full-featured\n software platform offering all the features needed to develop\n high-performance MAC\/RDC protocols---such as all of the time-slotted\n protocols.\n\n\\end{enumerate}\n\nAs a proof of concept of this last statement, we have implemented one\nof our own designs, and obtained very promising results, shown in\nthe next section.\n\n\n\\section{\\uppercase{Use Case: implementing the S-CoSenS RDC protocol}}\n\n\\subsection{The 
S-CoSenS Protocol}\n\nThe first protocol we wanted to implement is S-CoSenS \\cite{TheseBNefzi},\nwhich is designed to work on top of the IEEE 802.15.4 physical and MAC\n(i.e.: CSMA\/CA) layers.\n\nIt is an evolution of the already published CoSenS protocol \\cite{CosensConf}:\nit adds to the latter a sleeping period for energy saving.\nThus, the basic principle of S-CoSenS is to delay the forwarding (routing)\nof received packets, by dividing the radio duty cycle into three periods:\na sleeping period (SP), a waiting period (WP) during which routers listen\nto the radio medium to collect incoming 802.15.4 packets, and\nfinally a burst transmission period (TP) for adequately transmitting\nthe packets enqueued during the WP.\n\nThe main advantage of S-CoSenS is its ability to adapt dynamically to the\nwireless network throughput at runtime, by calculating for each radio duty\ncycle the length of the SP and WP, according to the number of relayed\npackets during previous cycles. Note that the set of the SP and the WP\nof the same cycle is called the \\emph{subframe}; it is the part of an S-CoSenS\ncycle whose length is computed and known \\textit{a priori}; on the contrary,\nthe TP duration is always unknown up to its very beginning, because it depends\non the amount of data successfully received during the WP that precedes it.\n\nThe computation of the WP duration follows a ``sliding average'' algorithm,\nwhere the WP duration for each duty cycle is computed from the average\nof previous cycles as:\n\\begin{eqnarray*}\n&&\n\\overline{\\mathrm{WP}_{n}} = \\alpha \\cdot \\overline{\\mathrm{WP}_{n-1}}\n + (1 - \\alpha) \\cdot \\mathrm{WP}_{n-1}\n\\\\ &&\n\\mathrm{WP}_{n} = \\max ( \\mathrm{WP}_{min},\n \\min ( \\overline{\\mathrm{WP}_{n}}, \\mathrm{WP}_{max} ) )\n\\end{eqnarray*}\nwhere $\\overline{\\mathrm{WP}_{n}}$ and $\\overline{\\mathrm{WP}_{n-1}}$\nare the average WP lengths at the $n^{\\mathrm{th}}$ and\n$(n-1)^{\\mathrm{th}}$ cycles respectively, while $\\mathrm{WP}_{n}$ and 
$\\mathrm{WP}_{n-1}$\nare the actual WP lengths of the $n^{\\mathrm{th}}$ and\n$(n-1)^{\\mathrm{th}}$ cycles, respectively; $\\alpha$ is a parameter between 0 and 1\nrepresenting the relative weight of the history in the computation,\nand $\\mathrm{WP}_{min}$ and $\\mathrm{WP}_{max}$ are the lower and upper limits\nimposed by the programmer on the WP duration.\n\nThe length of the whole subframe being a parameter given at compilation time,\nthe SP duration is simply computed by subtracting the calculated WP duration\nfrom the subframe duration for every cycle.\n\nThe local synchronization between an S-CoSenS router and its leaf nodes\nis achieved with a beacon packet, which is broadcast by the router at\nthe beginning of each cycle. This beacon contains the duration\n(in microseconds) of the SP and WP for the currently\nbeginning cycle.\n\nThe whole S-CoSenS cycle workflow for a router is summarized in figure\n\\ref{FigSCosensDutyCycle} hereafter.\n\n\\begin{figure}[!ht]\n\\centering\n\\begin{tikzpicture}[>=latex]\n\\fill[black] (0cm, -0.25cm) rectangle +(0.2cm, 0.5cm);\n\\draw[->,thick] (0.1cm, 0.25cm) -- +(0, 0.5cm);\n\\draw (0.1cm, 1.3cm) node {Beacon};\n\\draw[anchor=west] (-0.6cm, 0.9cm) node {(broadcasted)};\n\\draw[thick] (0cm, -0.25cm) -- +(0, 0.5cm);\n\\foreach \\x in {1,2,3,4,5,6}\n{\n \\fill[lightgray] (0.2cm + \\x * 0.25cm, -0.25cm) rectangle +(0.05cm, 0.5cm);\n}\n\\draw (1.1cm, 0) node {\\textbf{SP}};\n\\draw[thick] (2cm, -0.25cm) -- +(0, 0.5cm);\n\\fill[lightgray] (2cm, -0.25cm) rectangle +(2cm, 0.5cm);\n\\draw (3cm, 0) node {\\textbf{WP}};\n\\draw[thick] (4cm, -0.25cm) -- +(0, 0.5cm);\n\\fill[lightgray] (4cm, -0.25cm) rectangle +(2cm, 0.5cm);\n\\draw (5cm, 0) node {\\textbf{TP}};\n\\draw[thick] (6cm, -0.25cm) -- +(0, 0.5cm);\n\\draw[->] (-0.5cm, 0.25cm) -- +(7cm, 0);\n\\draw[->] (-0.5cm, -0.25cm) -- +(7cm, 0);\n\\draw[->,thick] (2.5cm, 0.75cm) -- +(0, -0.5cm);\n\\draw (2.5cm, 1cm) node {P1};\n\\draw[->,thick] (3cm, 0.75cm) -- +(0, -0.5cm);\n\\draw (3cm, 1cm) 
node {P2};\n\\draw[->,thick] (3.5cm, 0.75cm) -- +(0, -0.5cm);\n\\draw (3.5cm, 1cm) node {P3};\n\\draw[->,thick] (4.5cm, 0.25cm) -- +(0, 0.5cm);\n\\draw (4.5cm, 1cm) node {P1};\n\\draw[->,thick] (5cm, 0.25cm) -- +(0, 0.5cm);\n\\draw (5cm, 1cm) node {P2};\n\\draw[->,thick] (5.5cm, 0.25cm) -- +(0, 0.5cm);\n\\draw (5.5cm, 1cm) node {P3};\n\\draw (0cm, -0.5cm) .. controls +(0, -0.25cm) .. +(1cm, -0.25cm);\n\\draw (1cm, -0.75cm) .. controls +(1cm, 0) .. +(1cm, -0.25cm);\n\\draw (2cm, -1cm) .. controls +(0, 0.25cm) .. +(1cm, 0.25cm);\n\\draw (3cm, -0.75cm) .. controls +(1cm, 0) .. +(1cm, 0.25cm);\n\\draw (2cm, -1.25cm) node {\\textbf{Subframe}};\n\\draw (0cm, -1.5cm) .. controls +(0, -0.25cm) .. +(1.5cm, -0.25cm);\n\\draw (1.5cm, -1.75cm) .. controls +(1.5cm, 0) .. +(1.5cm, -0.25cm);\n\\draw (3cm, -2cm) .. controls +(0, 0.25cm) .. +(1.5cm, 0.25cm);\n\\draw (4.5cm, -1.75cm) .. controls +(1.5cm, 0) .. +(1.5cm, 0.25cm);\n\\end{tikzpicture}\n\\caption{A typical S-CoSenS router cycle.\\\\\n The gray strips in the SP represent the short wake-up-and-listen\n periods used for inter-router communication.}\n\\label{FigSCosensDutyCycle}\n\\end{figure}\n\nAn interesting property of S-CoSenS is that leaf (i.e.: non-router) nodes\nalways have their radio transceiver offline, except when they have packets\nto send. When a data packet is generated on a leaf node, the latter wakes up\nits radio transceiver, listens and waits for the first beacon emitted by\nan S-CoSenS router, then sends its packet using CSMA\/CA at the beginning\nof the WP described in the beacon it received. 
A leaf node will put its\ntransceiver offline during the delay between the beacon and that WP\n(that is: the SP of the router that emitted the received beacon), and\nwill go back to sleep mode once its packet is transmitted.\nAll of this procedure is shown in figure \\ref{FigSCoSenSPktTx}.\n\n\\begin{figure}[!h]\n\\centering\n\\begin{tikzpicture}[>=latex]\n\\draw (-0.5cm, 0) node {\\large \\textit{R}};\n\\draw[thick] (1cm, -0.25cm) -- +(0, 0.5cm);\n\\draw (2cm, 0) node {\\textbf{SP}};\n\\draw[thick] (3cm, -0.25cm) -- +(0, 0.5cm);\n\\fill[lightgray] (3cm, -0.25cm) rectangle +(2cm, 0.5cm);\n\\draw (4cm, 0) node {\\textbf{WP}};\n\\draw[thick] (5cm, -0.25cm) -- +(0, 0.5cm);\n\\fill[lightgray] (5cm, -0.25cm) rectangle +(0.5cm, 0.5cm);\n\\draw (5.25cm, -0.5cm) node {\\textbf{TP}};\n\\draw[thick] (5.5cm, -0.25cm) -- +(0, 0.5cm);\n\\draw[->] (-0.5cm, 0.25cm) -- +(6.5cm, 0);\n\\draw[->] (-0.5cm, -0.25cm) -- +(6.5cm, 0);\n\\draw (-0.5cm, -1.5cm) node {\\large \\textit{LN}};\n\\fill[gray] (0cm, -1.25cm) rectangle +(1.3cm, -0.5cm);\n\\fill[gray] (2.9cm, -1.25cm) rectangle +(0.5cm, -0.5cm);\n\\fill[black] (1cm, -0.25cm) rectangle +(0.2cm, 0.5cm);\n\\draw[->,thick] (1.1cm, 0.25cm) -- +(0, -1.5cm);\n\\draw[anchor=east] (1cm, -0.75cm) node {Beacon};\n\\fill[black] (1cm, -1.25cm) rectangle +(0.2cm, -0.5cm);\n\\draw[->,very thick] (0cm, -2.5cm) -- +(0, 0.75cm);\n\\draw[anchor=west] (0cm, -2.5cm)\n node {\\footnotesize \\textbf{packet arrival}};\n\\fill[black] (3.1cm, -1.25cm) rectangle +(0.2cm, -0.5cm);\n\\draw[->,thick] (3.2cm, -1.25cm) -- +(0, 1cm);\n\\draw[anchor=west] (3.2cm, -0.75cm) node {P1};\n\\fill[black] (3.1cm, -0.25cm) rectangle +(0.2cm, 0.5cm);\n\\fill[black] (5.1cm, -0.25cm) rectangle +(0.2cm, 0.5cm);\n\\draw[->,thick] (5.2cm, 0.25cm) -- +(0, 0.5cm);\n\\draw (5.2cm, 1cm) node {P1};\n\\draw[->] (-0.5cm, -1.25cm) -- +(6.5cm, 0);\n\\draw[->] (-0.5cm, -1.75cm) -- +(6.5cm, 0);\n\\end{tikzpicture}\n\\caption{A typical transmission of a data packet with the S-CoSenS 
protocol\n between a leaf node and a router.}\n\\label{FigSCoSenSPktTx}\n\\end{figure}\n\nWe thus need to synchronize different devices (which can be based on\ndifferent hardware platforms) with sufficient accuracy, on cycles whose\nperiods are dynamically calculated at runtime, with a resolution\nin the sub-millisecond range. This is where RIOT OS advanced real-time\nfeatures really shine, while the other comparable OSes are\ndefinitely lacking for that purpose.\n\n\\subsection{Simulations and Synchronization Accuracy}\n\nWe have implemented S-CoSenS under RIOT, and made first tests by performing\nsimulations---with Cooja---of an 802.15.4 PAN (Personal Area Network)\nconsisting of a router and ten motes acting as ``leaf nodes''.\nThe ten nodes regularly send data packets to the router, which retransmits\nthese data packets to a nearby ``sink'' device. Both the router and the ten\nnodes use exclusively the S-CoSenS RDC\/MAC protocol. This is summarized\nin figure \\ref{FigPANtest}.\n\n\\begin{figure}[!h]\n\\centering\n\\begin{tikzpicture}[>=latex]\n\\draw (0, 1cm) circle (0.25cm); \\draw (0, 1cm) node {S};\n\\draw[->,thick] (0, 0.25cm) -- (0, 0.75cm);\n\\draw (0, 0) circle (0.25cm); \\draw (0, 0) node {R};\n\\foreach \\x in {6,7,8,9,10}\n{\n \\fill[white] (\\x * 1cm - 8cm, -1.75cm) circle (0.25cm);\n \\draw (\\x * 1cm - 8cm, -1.75cm) circle (0.25cm);\n \\draw (\\x * 1cm - 8cm, -1.75cm) node {\\x};\n \n \\draw[->,thick] (\\x * 1cm - 8cm, -1.5cm)\n -- (\\x * 0.02cm - 0.16cm, -0.25cm);\n}\n\\foreach \\x in {1,2,3,4,5}\n{\n \\fill[white] (\\x * 1cm - 3cm, -1cm) circle (0.25cm);\n \\draw (\\x * 1cm - 3cm, -1cm) circle (0.25cm);\n \\draw (\\x * 1cm - 3cm, -1cm) node {\\x};\n \n \\draw[->,thick] (\\x * 1cm - 3cm, -0.75cm)\n -- (\\x * 0.05cm - 0.15cm, -0.25cm);\n}\n\\end{tikzpicture}\n\\caption{Functional schema of our virtual test PAN.}\n\\label{FigPANtest}\n\\end{figure}\n\nOur first tests clearly show an excellent synchronization between the\nleaf nodes and the 
router, thanks to the time resolution offered by the RIOT OS\nevent management system (especially the availability of many hardware\ntimers for direct use). This can be seen in the screenshot of our\nsimulation in Cooja, shown in figure \\ref{Screenshot}. For readability,\nthe central portion of the timeline window of that screenshot (delimited\nby a thick yellow rectangle) is magnified in figure \\ref{ZoomTimeline}.\n\n\\begin{figure*}[ptb]\n\\centering\n\\includegraphics[width=15.75cm]{S-CoSenS-Cooja10.png}\n\\caption{Screenshot of our test simulation in Cooja.\n(Despite the window title mentioning Contiki, the simulated application\n is indeed running on RIOT OS.)}\n\\label{Screenshot}\n\\end{figure*}\n\n\\begin{figure*}[pbt]\n\\centering\n\\includegraphics{S-CoSenS-Cooja10-Timeline.png}\n\\caption{Zoom on the central part of the timeline of our simulation.}\n\\label{ZoomTimeline}\n\\end{figure*}\n\nIn figure \\ref{ZoomTimeline}, the numbers on the left side are the motes'\nnumerical IDs: the router has ID number \\textsf{1}, while the leaf nodes\nhave IDs \\textsf{2} to \\textsf{11}. Grey bars represent the radio transceiver\nbeing online for a given mote; blue bars represent packet emission, and green\nbars correct packet reception, while red bars represent collisions (when\ntwo or more devices emit data concurrently) and thus the reception of\nundecipherable radio signals.\n\nFigure \\ref{ZoomTimeline} covers a short amount of time (around\n100~milliseconds) at the end of a duty cycle of the router:\nthe first 20~milliseconds are the end of the SP, the remaining 80~milliseconds\nthe WP, followed by the beginning of a new duty cycle (the TP has been disabled\nin our simulation). 
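As an illustration of the duty cycling at work in this simulation, the sliding-average rule described in the previous subsection can be sketched in C. This is a minimal sketch, not the actual S-CoSenS implementation: all parameter values (subframe length, WP bounds, and a history weight alpha = 7/8 written as an integer fraction to avoid floating point on MSP430-class MCUs) are hypothetical.

```c
#include <stdint.h>

/* Hypothetical parameter values, in microseconds; the real
 * implementation chooses them at compile time. */
#define SCOSENS_WP_MIN      10000u
#define SCOSENS_WP_MAX      90000u
#define SCOSENS_SUBFRAME   100000u
#define SCOSENS_ALPHA_NUM       7u   /* alpha = 7/8: weight of the history */
#define SCOSENS_ALPHA_DEN       8u

typedef struct {
    uint32_t wp_avg;   /* sliding average of past WP durations */
    uint32_t wp;       /* WP duration chosen for the current cycle */
    uint32_t sp;       /* SP duration for the current cycle */
} scosens_cycle_t;

/* Compute the next cycle's WP and SP from the actual WP duration of
 * the previous cycle, following the sliding-average rule:
 * avg_n = alpha * avg_{n-1} + (1 - alpha) * WP_{n-1}, then clamp. */
static void scosens_next_cycle(scosens_cycle_t *c, uint32_t wp_prev)
{
    c->wp_avg = (SCOSENS_ALPHA_NUM * c->wp_avg
                 + (SCOSENS_ALPHA_DEN - SCOSENS_ALPHA_NUM) * wp_prev)
                / SCOSENS_ALPHA_DEN;
    uint32_t wp = c->wp_avg;
    if (wp < SCOSENS_WP_MIN) wp = SCOSENS_WP_MIN;
    if (wp > SCOSENS_WP_MAX) wp = SCOSENS_WP_MAX;
    c->wp = wp;
    /* The subframe length is fixed, so SP is simply the remainder. */
    c->sp = SCOSENS_SUBFRAME - wp;
}
```

With the 20~ms SP / 80~ms WP split visible in the timeline, a cycle whose WP was fully used keeps the same split for the next cycle, while an idle cycle shortens the next WP (and lengthens the next SP) accordingly.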
 \n\nIn our example, four nodes have data to transmit to the router: motes\n\\textsf{3}, \\textsf{5}, \\textsf{9}, and \\textsf{10}; the other nodes\n(\\textsf{2}, \\textsf{4}, \\textsf{6}, \\textsf{7}, \\textsf{8}, and \\textsf{11})\nare preparing to transmit a packet in the next duty cycle.\n\nAt the instant marked by the first yellow arrow (in the top left of figure\n\\ref{ZoomTimeline}), the SP ends and the router activates its radio\ntransceiver to enter the WP. Note how the four nodes that are to send packets\n(\\textsf{3}, \\textsf{5}, \\textsf{9}, and \\textsf{10}) also activate their\nradio transceivers at \\emph{precisely} the same instant: this is thanks to\nthe precise real-time mechanism of RIOT OS (based on hardware timers), which\nallows the different nodes to synchronize precisely on the timing values\ntransmitted in the previous beacon packet. Thanks also to that mechanism,\nthe nodes are able to keep both their radio transceiver \\emph{and} their\nMCU in low-power mode, since the RIOT OS kernel is interrupt-driven.\n\nDuring the waiting period, we also see that several collisions occur; they\nare resolved by the S-CoSenS protocol by forcing motes to wait a random\nduration before re-emitting a packet in case of conflict. In our example,\nour four motes can finally transmit their packets to the router in the\nfollowing order: \\textsf{3} (after a first collision), \\textsf{5}, \\textsf{10} (after\ntwo other collisions), and finally \\textsf{9}. Note that every time the\nrouter (device number \\textsf{1}) successfully receives a packet, an\nacknowledgement is sent back to the emitter: see the very thin blue bars that\nfollow each green bar on the first line.\n\nFinally, at the instant marked by the second yellow arrow (in the top right\nof figure \\ref{ZoomTimeline}), the WP ends and a new duty cycle begins.\nConsequently, the router broadcasts a beacon packet containing PAN timing and\nsynchronization data to all of the ten nodes. 
We can see that all of the\nsix nodes waiting to transmit (\\textsf{2}, \\textsf{4}, \\textsf{6}, \\textsf{7},\n\\textsf{8}, and \\textsf{11}) go idle after receiving this beacon (beacon\npackets are broadcast and thus not acknowledged): they go\ninto low-power mode (both at radio transceiver and MCU level), and will\ntake advantage of RIOT real-time features to wake up precisely when\nthe router goes back into WP mode and is ready to receive their\npackets.\n\n\\subsection{Performance Evaluation: Preliminary Results}\n\nWe will now present the first, preliminary results we obtained through the\nsimulations described above.\n\nImportant: note that \\emph{we evaluate here the implementations}, and not the\nintrinsic advantages or weaknesses of the protocols themselves.\n\nWe have first focused on QoS results, by computing Packet Reception Rates\nand end-to-end delays between the various leaf nodes and the sink of the test\nPAN presented earlier in figure \\ref{FigPANtest}, to evaluate the quality\nof the transmissions allowed by each of the protocols.\n\nFor these first tests, we used default parameters for both RDC protocols\n(ContikiMAC and S-CoSenS), only pushing the CSMA\/CA MAC layer of Contiki\nto make up to 8 attempts at transmitting the same packet, so as to put it\non par with our implementation on RIOT OS. We have otherwise not yet\ntried to tweak the various parameters offered by the two RDC protocols\nto optimize results. 
This will be the subject of our next experiments.\n\n\\subsubsection{Packet Reception Rates (PRR)}\n\nThe results obtained for the PRR using both protocols are shown in figure\n\\ref{FigPRRresults} as well as table \\ref{TblPRRresults}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=7.5cm]{PRRgraph.png}\n \\caption{PRR results for both ContikiMAC and S-CoSenS RDC protocols,\n using default values for parameters.}\n \\label{FigPRRresults}\n\\end{figure}\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|r|r|r|}\n\\hline\n PAI \\textbackslash\\ Protocol & ContikiMAC & S-CoSenS \\\\\n\\hline\n 1500 ms & 49.70\\% & 98.10\\% \\\\\n 1000 ms & 32.82\\% & 96.90\\% \\\\\n 500 ms & 14.44\\% & 89.44\\% \\\\\n 100 ms & 0.64\\% & 25.80\\% \\\\\n\\hline\n\\end{tabular}\n\\caption{PRR results for both ContikiMAC and S-CoSenS RDC protocols,\n using default values for parameters.}\n\\label{TblPRRresults}\n\\end{table}\n\nThe advantage of S-CoSenS shown in the figure is clear and significant\nregardless of the packet arrival interval considered. 
Except for the ``extreme''\nscenario corresponding to an over-saturation of the radio channel, S-CoSenS\nachieves an excellent PRR ($\\gtrapprox 90\\%$), while ContikiMAC's PRR\nis always $\\lessapprox 50\\%$.\n\n\\subsubsection{End-To-End Transmission Delays}\n\nThe results obtained for end-to-end delays using both protocols are shown in\nfigure \\ref{FigDelaysResults} and table \\ref{TblDelaysResults}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=7.5cm]{DelaysGraph.png}\n \\caption{End-to-end delay results for both ContikiMAC and S-CoSenS RDC\n protocols, using default values for parameters; note that\n the vertical axis is drawn with a logarithmic scale.}\n \\label{FigDelaysResults}\n\\end{figure}\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|r|r|r|}\n\\hline\n PAI \\textbackslash\\ Protocol & ContikiMAC & S-CoSenS \\\\\n\\hline\n 1500 ms & 3579 ms & 108 ms \\\\\n 1000 ms & 4093 ms & 108 ms \\\\\n 500 ms & 6452 ms & 126 ms \\\\\n 100 ms & 12913 ms & 168 ms \\\\\n\\hline\n\\end{tabular}\n\\caption{End-to-end delay results for both ContikiMAC and S-CoSenS RDC\n protocols, using default values for parameters.}\n\\label{TblDelaysResults}\n\\end{table}\n\nS-CoSenS also clearly has the upper hand here, so much so that we had to use\na logarithmic scale for the vertical axis to keep figure \\ref{FigDelaysResults}\neasily readable. 
The advantage of S-CoSenS holds whatever the packet\narrival interval, our protocol being able to keep delays below an acceptable\nlimit (on the order of hundreds of milliseconds), while ContikiMAC\ndelays rocket up to tens of seconds when the network load increases.\n\n\\subsubsection{Summary: QoS Considerations}\n\nWhile these are only preliminary results, it seems that being able to\nleverage real-time features is clearly a significant advantage when designing\nand implementing MAC\/RDC protocols, at least when it comes to QoS results.\n\n\n\n\\section{\\uppercase{Future Works and Conclusion}}\n\nWe plan, in the near future:\n\n\\begin{itemize}\n\n\\item to bring new contributions to the RIOT project: we are especially\n interested in the portability that the RIOT solution offers us;\n this OS is indeed actively being ported to many devices based on\n powerful ARM Cortex-M microcontrollers (especially\n Cortex-M3 and Cortex-M4), and we intend to help in this porting\n effort, especially on the high-end IoT motes we seek to use in our\n work (e.g.: as advanced FFD nodes with full network stack,\n or routers);\n\n\\item to use the power of this OS to further advance our work on MAC\/RDC\n protocols; more precisely, we are implementing other innovative\n MAC\/RDC protocols---such as iQueue-MAC \\cite{iQueueMAC}---under RIOT,\n taking advantage of its high-resolution real-time features to obtain\n excellent performance, optimal energy consumption, and out-of-the-box\n portability.\n\n\\end{itemize}\n\nRIOT is a powerful real-time operating system, adapted to the limitations\nof deeply embedded hardware microcontrollers, while offering state-of-the-art\ntechniques (preemptive multitasking, tickless scheduler, optimal use\nof hardware timers) that---we believe---make it one of the most\nsuitable OSes for the embedded and real-time world.\n\nWhile we have not yet been able to accurately quantify energy consumption,\nwe can reasonably expect that lowering 
the activity of the MCU and radio\ntransceiver will significantly reduce the energy consumption of devices\nrunning RIOT OS. This will be the subject of some of our future\nresearch work.\n\n\\bigskip\n\nCurrently, RIOT OS supports high-level IoT protocols (6LoWPAN\/IPv6, RPL,\nTCP, UDP, etc.). However, it still lacks high-performance MAC~\/ RDC layer\nprotocols.\n\nThrough this work, we have shown that RIOT OS is also suitable for\nimplementing high-performance MAC~\/ RDC protocols, thanks to its real-time\nfeatures (especially hardware timer management).\n\nMoreover, we have improved the robustness of the existing ports of RIOT OS\non MSP430, making it a suitable software platform for tiny motes and devices.\n\n\n\n\n\n\n\\vfill\n\\bibliographystyle{apalike}\n{\\small\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Abstract}\n{\\bf \nWe derive explicit expressions for dynamical correlations of the field and density operators in the Lieb-Liniger model, within an arbitrary eigenstate with a small particle density ${\\cal D}$. They are valid for all space and time and any interaction strength $c>0$, and are the leading order of an expansion in ${\\cal D}$. This expansion is obtained by writing the correlation functions as sums over form factors when formally decomposed into partial fractions.\n\n}\n\n\\renewcommand\\Affilfont{\\fontsize{9}{10.8}\\itshape}\n\n \n\\tableofcontents\n\\section{Introduction}\nThe Lieb-Liniger model is a key paradigm of many-particle systems\\cite{LiebLiniger63a,Brezin64,korepin}. In the repulsive regime, it is considered one of the simplest interacting quantum integrable models, having the simplifying feature of involving only real rapidities\\cite{Lieb63,YangYang69}. The main objects of interest are the correlation functions of local observables in the thermodynamic limit, which are the macroscopic output of the model resulting from the short-range interactions between the bosons. 
Moreover, their Fourier transform is directly measurable in cold-atom experiments \\cite{naegerl15,Bouchoule16,Fabbri15}. Although quantum integrable models are exactly solvable, the computation of their correlation functions is a notoriously difficult problem. In particular the computation of dynamical correlations for an arbitrary interaction strength $c$ and within an arbitrary eigenstate is an open problem.\n\nSpecial cases of this problem were studied and solved in the past. The first of these special cases is the impenetrable bosons limit $c\\to\\infty$, which can be reformulated in terms of free fermions \\cite{girardeau}. Here, both because of the free fermionic nature of the model and the simple structure of the form factor\\footnote{The Transverse Field Ising Model, although also equivalent to free fermions, cannot be fully treated along the same lines because of its more complex form factors \\cite{GFE20}.}, the Lehmann representation of dynamical correlations can be fully resummed into a Fredholm determinant\\cite{KS90,slavnov90,izergin87,kojima97} and its asymptotics extracted with differential equations \\cite{IIKS90,IIKV92}. Another important special case is the ground state static correlation at finite $c$, which was treated within the Algebraic Bethe ansatz framework\\cite{kitanineetalg} and led to the first ab initio calculation of the critical exponents previously predicted by Conformal Field Theory and Luttinger Liquid theory\\cite{BIK86,IKR87,haldane,cazalilla}. This approach was then generalized to static correlations within arbitrary eigenstates \\cite{kozlowskimailletslvanov,kozlowskimailletslvanov2}. The particular case of static correlators within thermal eigenstates was also studied with quantum transfer matrix methods\\cite{suzuki85,klumper92,patuklumper}. 
The full asymptotics of ground state dynamical correlations were derived in \\cite{kitanineetcformfactor,kozlowskimaillet,kozlowski4} from form factor expansions, confirming predictions of Non-Linear Luttinger Liquid theory\\cite{IG08,PAW09,ISG12,P12,shashipanfilcaux,Price17}.\n\n\nProgress on the general case of dynamical correlations within arbitrary states at finite $c$ has been much more limited. This calculation is very different from the zero-temperature and static special cases and poses important problems. The successful methods for static correlations such as the algebraic Bethe ansatz approach or the quantum transfer matrix methods do not apply to the dynamical case, and the form factor expansion used to compute ground state dynamical correlations relied on combinatorial identities that are usable for zero-entropy states only. This general case has however been studied through several approaches. Firstly, Generalized HydroDynamics (GHD) provides predictions for the leading asymptotics of the dynamical correlations of conserved charges such as the density within arbitrary macrostates\\cite{CADY16,BCDF16,Doyon18}. However, the approach cannot a priori be applied to compute the next corrections, which restricts its predictions for the dynamical structure factor to small frequencies and momenta, and it does not apply to semi-local operators such as the field correlations either. Secondly, numerical summations of dominant form factors proved very efficient and made it possible to obtain numerical estimates of the dynamical structure factor on the full plane \\cite{cauxcalabreseslavnov,PC14}. On the Bethe-ansatz side, one-point functions within arbitrary eigenstates were studied in \\cite{negrosmirovn,bastianellopiroli,bastianellopirolicalabrese}. 
An approach based on thermodynamic form factors was initiated in \\cite{deNP15,deNP16,DNP18,panfil20,cortescuberopanfil1,cortescuberopanfil2}, but it still involves non-integrable singularities and so requires a careful treatment of this feature. A regularized form factor expansion was derived for the XXZ spin chain in \\cite{kozlowski1}. In \\cite{granetessler20}, the full spectral sum at order $c^{-2}$ was computed for all root densities and all momenta and frequencies, involving one- and two-particle-hole excitations. It showed the necessity of a fine-tuned treatment of the non-integrable singularities that is crucial for e.g. detailed balance to be satisfied in thermal states.\n\n\nThe objective of this paper is to derive the full dynamical correlations for all space and time and all interaction strengths $c$, in the limit where the particle density ${\\cal D}$ of the averaging state becomes small. This low-density limit is defined in terms of a partial fraction decomposition of the form factor, initially introduced in \\cite{GFE20} in a model that can be reformulated into free fermions. This decomposition naturally organizes the spectral sum as an expansion in the particle density ${\\cal D}$ of the averaging state, and the low-density limit is defined as the leading term of this expansion. \n\nThis result provides another limiting case where the initial problem becomes solvable, namely the computation of dynamical correlations for all space and time and arbitrary $c$ within finite-entropy macrostates. Moreover, the framework also makes it possible to compute the subleading corrections in the particle density, as was explicitly shown in \\cite{GFE20} for the Transverse Field Ising Model. The computation of these subleading corrections in the interacting case however comes with greater technical difficulties and should be the object of further work. 
Finally, this low-density limit calculation sheds light on the structure of the spectral sum and the nature of the states contributing in the thermodynamic limit.\n\nIn Section \\ref{sec1} we introduce the Lieb-Liniger model and recall known results on its form factors. In Section \\ref{ldd} we define what is meant by the low-density limit of the dynamical correlations, taking the field correlations as an example. In Section \\ref{fieldsection} we compute the low-density limit of the field two-point function \\eqref{field}, and in Section \\ref{densitysection} the low-density limit of the density two-point function \\eqref{densityde}.\n\n\n\\section{\\texorpdfstring{Lieb-Liniger model}{Lg}}\n\\label{sec1}\n\\subsection {Definition}\n\nThe Lieb-Liniger model \\cite{LiebLiniger63a} is a non-relativistic\nquantum field theory with Hamiltonian\n\\begin{equation}\nH=\\int_0^L dx\\left[-\\psi^\\dagger(x)\\frac{d^2}{dx^2}\\psi(x)+c\\psi^\\dagger(x)\\psi^\\dagger(x)\\psi(x)\\psi(x)\n\\right]\\,,\n\\label{HLL}\n\\end{equation}\nwhere the canonical Bose field $\\psi(x)$ satisfies the equal-time\ncommutation relations\n\\begin{equation}\n[\\psi(x),\\psi^\\dagger(y)]=\\delta(x-y)\\,.\n\\end{equation}\nWe will impose periodic boundary\n conditions.\nFor later convenience we define the time-$t$ evolved version of the field $\\psi(x,t)=e^{iHt}\\psi(x)e^{-iHt}$. 
We also define the density operator at position $x$\n\\begin{equation}\n\\sigma(x)=\\psi^\\dagger(x)\\psi(x)\\,,\n\\end{equation}\nand its time-$t$ evolved version $\\sigma(x,t)=e^{iHt}\\sigma(x)e^{-iHt}$.\n\n\\subsection {The Bethe ansatz solution}\n\\subsubsection{The spectrum}\nThe Lieb-Liniger model is solvable by the Bethe ansatz: an eigenstate $|\\pmb{\\lambda}\\rangle$ with $N$ bosons can be written as\n\\begin{equation}\n|\\pmb{\\lambda}\\rangle=B(\\lambda_1)...B(\\lambda_N)|0\\rangle\\,,\n\\end{equation}\nwith the $B(\\lambda)$'s some creation operators, $|0\\rangle$ the pseudo-vacuum and the $\\lambda_i$'s some rapidities that satisfy the following set of 'Bethe equations'\n\\begin{equation}\ne^{iL\\lambda_k}=\\prod_{\\substack{j=1\\\\j\\neq k}}^N\\frac{\\lambda_k-\\lambda_j+ic}{\\lambda_k-\\lambda_j-ic}\\,,\n\\quad k=1,\\dots, N.\n\\end{equation}\nThe\nenergy $E$ and the momentum $P$ of this state read \n\\begin{equation}\nE(\\pmb{\\lambda})=\\sum_{i=1}^N \\lambda_i^2\\,,\\qquad P(\\pmb{\\lambda})=\\sum_{i=1}^N\\lambda_i\\,.\n\\end{equation}\nIt is convenient to express the Bethe equations in logarithmic form\n\\begin{equation}\n\\label{belog}\n\\frac{\\lambda_k}{2\\pi}=\\frac{I_k}{L}-\\frac{1}{L}\\sum_{j=1}^N \\frac{1}{\\pi}\\arctan \\frac{\\lambda_k-\\lambda_j}{c}\\,,\n\\end{equation}\nwith $I_k$ an integer if $N$ is odd, a half-integer if $N$ is even. For $c>0$ all the solutions to this equation are real \\cite{korepin}. We will denote\n\\begin{equation}\n{\\cal D}=\\frac{N}{L}\\,,\n\\end{equation}\nthe particle density of the eigenstate $|\\pmb{\\lambda}\\rangle$.\n\n\\subsubsection{The field form factors}\nOur aim is to calculate correlation functions in an eigenstate $|\\pmb{\\lambda}\\rangle$ at low particle density ${\\cal D}$. 
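The eigenstates entering these correlation functions are specified by solutions of the logarithmic Bethe equations above, which for repulsive interactions can be obtained by simple fixed-point iteration. A minimal numerical sketch (the system size, particle number, and ground-state quantum numbers below are our own illustrative choices, not taken from the text):

```python
import math

def solve_bethe(N, L, c, tol=1e-12, max_iter=100000):
    """Fixed-point iteration for the logarithmic Bethe equations
    lambda_k = 2*pi*I_k/L - (2/L) * sum_j arctan((lambda_k - lambda_j)/c),
    with ground-state quantum numbers I_k = -(N-1)/2, ..., (N-1)/2."""
    I = [k - (N - 1) / 2 for k in range(N)]
    lam = [2 * math.pi * Ik / L for Ik in I]   # free-particle initial guess
    for _ in range(max_iter):
        new = [2 * math.pi * I[k] / L
               - (2 / L) * sum(math.atan((lam[k] - lam[j]) / c) for j in range(N))
               for k in range(N)]
        err = max(abs(a - b) for a, b in zip(new, lam))
        lam = new
        if err < tol:
            return lam
    raise RuntimeError("fixed-point iteration did not converge")

# Illustrative parameters: the iteration map is a contraction when 2N/(L*c) < 1.
roots = solve_bethe(N=4, L=20.0, c=2.0)
```

For these parameters the resulting rapidities are real, distinct, and symmetric about zero, as expected for $c>0$; the contraction condition $2N/(Lc)<1$ quoted in the comment is a sufficient (not necessary) convergence criterion.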
We will focus on the two-point function of the field operator\n\\begin{equation}\n \\left\\langle \\psi^\\dagger( x,t) \\psi ( 0,0) \\right\\rangle =\\frac {\\left\\langle \\pmb{\\lambda} \\left| \\psi^\\dagger( x,t) \\psi ( 0,0) \\right| \\pmb{\\lambda} \\right\\rangle } {\\left\\langle \\pmb{\\lambda} |\\pmb{\\lambda} \\right\\rangle }\\,,\n\\end{equation}\nand the two-point function of the density operator\n\\begin{equation}\n \\left\\langle \\sigma( x,t) \\sigma ( 0,0) \\right\\rangle =\\frac {\\left\\langle \\pmb{\\lambda} \\left| \\sigma( x,t) \\sigma ( 0,0) \\right| \\pmb{\\lambda} \\right\\rangle } {\\left\\langle \\pmb{\\lambda} |\\pmb{\\lambda} \\right\\rangle }\\,.\n\\end{equation}\nOur strategy is to use a Lehmann representation in terms of energy eigenstates\n$|\\pmb{\\mu}\\rangle =|\\mu_1,...,\\mu_{N'}\\rangle$, where\n$\\{\\mu_1,\\dots,\\mu_{N'}\\}$ are solutions to the Bethe equations \\fr{belog} to rewrite the correlation functions as sums of form factors over the full spectrum. For the two-point function of a generic operator ${\\cal O}$ this representation reads\n\\begin{equation}\n\\label{bigsumfield}\n\\begin{aligned}\n \\left\\langle {\\cal O}^\\dagger( x,t){\\cal O}( 0,0) \\right\\rangle &=\\sum _{ \\pmb{\\mu}}\\frac {\\left| \\left\\langle \\pmb{\\mu} |{\\cal O}( 0) |\\pmb{\\lambda}\\right\\rangle \\right| ^{2}} {\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }e^{it\\left( E\\left( \\pmb{\\lambda}\\right) -E\\left( \\pmb{\\mu} \\right) \\right) +ix\\left( P\\left( \\pmb{\\mu}\\right) -P\\left( \\pmb{\\lambda}\\right) \\right) }\\,.\n\\end{aligned}\n\\end{equation}\n\nThe (normalized) form factors of the field and density operators between two Bethe\nstates $|\\pmb{\\lambda}\\rangle,|\\pmb{\\mu}\\rangle$ with respective numbers of Bethe roots $N,N'$ have been computed previously \\cite{Korepin82,Slavnov89,Slavnov90,KorepinSlavnov99,Oota04,KozlowskiForm11}. 
\n\nIn the case of the field operator, it reads\n\\begin{equation}\\label{FF}\n\\begin{aligned}\n&\\frac { \\left\\langle \\pmb{\\mu} |\\psi ( 0) |\\pmb{\\lambda}\\right\\rangle } {\\sqrt{\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }}=\\delta_{N,N'+1}\\frac{i^{N+1}(-1)^{N(N-1)\/2}}{L^{N-1\/2}\\sqrt{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}}\\frac{\\prod_{i< j}|\\lambda_i-\\lambda_j|\\prod_{i< j}|\\mu_i-\\mu_j|}{\\prod_{i, j}(\\mu_j-\\lambda_i)}\\\\\n&\\qquad\\times\\sqrt{\\frac{\\prod_{i\\neq j}(\\lambda_i-\\lambda_j+ic)}{\\prod_{i\\neq j}(\\mu_i-\\mu_j+ic)}}\\prod_{\\substack{j=1\\\\j\\neq p,s}}^N(V_j^+-V_j^-) \\,\\,\\underset{i,j=1,...,N}{\\det}\\Bigg[\\delta_{ij}+U_{ij}\\Bigg]\\,,\n\\end{aligned}\n\\end{equation}\nfor any $p,s=1,...,N$. The various terms entering this expression are\n\\begin{equation}\nV_i^\\pm=\\frac{\\prod_{k=1}^{N-1}(\\mu_k-\\lambda_i\\pm ic)}{\\prod_{k=1}^{N}(\\lambda_k-\\lambda_i\\pm ic)}\\,,\n\\end{equation}\nand the $N\\times N$ matrix\n\\begin{equation}\nU_{jk}=\\frac{i}{V_j^+-V_j^-}\\left[\\frac{2c}{c^2+(\\lambda_j-\\lambda_k)^2}-\\frac{4c^2}{(c^2+(\\lambda_p-\\lambda_k)^2)(c^2+(\\lambda_s-\\lambda_j)^2)}\\right]\\frac{\\prod_{m=1}^{N-1}(\\mu_m-\\lambda_j)}{\\prod_{m\\neq j}(\\lambda_m-\\lambda_j)}\\,,\n\\end{equation}\nand finally\n\\begin{equation}\n\\label{norm}\n\\mathcal{N}_{\\pmb{\\lambda}}=\\det G(\\pmb{\\lambda})\\,,\n\\end{equation}\nwith the Gaudin matrix \\cite{Gaudin71}\n\\begin{equation}\\label{gaudin}\nG_{ij}(\\pmb{\\lambda})= \\delta_{ij} \\left(1+\\frac{1}{L}\\sum_{k=1}^N \\frac{2c}{c^2+(\\lambda_i-\\lambda_k)^2}\\right)-\\frac{1}{L}\\frac{2c}{c^2+(\\lambda_i-\\lambda_j)^2}\\,.\n\\end{equation}\nThe form factor of the density operator reads\n\\begin{equation}\\label{desnityff}\n\\begin{aligned}\n&\\frac { \\left\\langle \\pmb{\\mu} |\\sigma ( 0) |\\pmb{\\lambda}\\right\\rangle } {\\sqrt{\\left\\langle \\pmb{\\lambda} 
\\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }}=\\delta_{N,N'}\\frac{i^{N+1}(-1)^{N(N-1)\/2}(\\sum_{j=1}^N \\lambda_j-\\mu_j)}{L^{N}\\sqrt{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}}\\frac{\\prod_{i< j}|\\lambda_i-\\lambda_j|\\prod_{i< j}|\\mu_i-\\mu_j|}{\\prod_{i, j}(\\mu_j-\\lambda_i)}\\\\\n&\\qquad\\times\\sqrt{\\prod_{i, j}\\frac{\\lambda_i-\\lambda_j+ic}{\\mu_i-\\mu_j+ic}}\\prod_{j\\neq p}(V_j^+-V_j^-) \\,\\,\\underset{i,j=1,...,N}{\\det}\\Bigg[\\delta_{ij}+U'_{ij}\\Bigg]\\,,\n\\end{aligned}\n\\end{equation}\nfor any $p=1,...,N$, with\n\\begin{equation}\nU'_{jk}=i\\frac{\\mu_j-\\lambda_j}{V_j^+-V_j^-}\\left[\\frac{2c}{(\\lambda_j-\\lambda_k)^2+c^2}-\\frac{2c}{(\\lambda_p-\\lambda_k)^2+c^2}\\right]\\prod_{m\\neq j}\\frac{\\mu_m-\\lambda_j}{\\lambda_m-\\lambda_j}\\,,\n\\end{equation}\nand with now\n\\begin{equation}\nV_j^\\pm=\\frac{\\prod_{k=1}^{N}(\\mu_k-\\lambda_j\\pm ic)}{\\prod_{k=1}^{N}(\\lambda_k-\\lambda_j\\pm ic)}\\,.\n\\end{equation}\n\\subsubsection{Root densities}\nIn the thermodynamic limit, any sum of a non-singular function over the Bethe roots can be expressed in terms of a \\textit{root density} that characterizes a macrostate as far as such quantities are concerned\n\\begin{equation}\n\\underset{L\\to\\infty}{\\lim}\\, \\frac{1}{L^n}\\sum_{i_1,...,i_n}f(\\lambda_{i_1},...,\\lambda_{i_n})=\\int_{-\\infty}^\\infty \\dots\\int_{-\\infty}^\\infty f(\\lambda_1,...,\\lambda_n)\\rho(\\lambda_1)\\dots\\rho(\\lambda_n)\\D{\\lambda_1}\\dots\\D{\\lambda_n}\\,.\n\\end{equation}\nHowever if the function is singular the result will in general depend on the representative state of the macrostate, see \\cite{granetessler20}.\nIt is customary to introduce the hole density $\\rho_h(\\lambda)$ defined by\n\\begin{equation}\n\\label{vartheta}\n\\rho(\\lambda)+\\rho_h(\\lambda)=\\frac{1}{2\\pi}+\\frac{1}{2\\pi}\\int_{-\\infty}^\\infty 
\\frac{2c}{c^2+(\\lambda-\\mu)^2}\\rho(\\mu)\\D{\\mu}\\,.\n\\end{equation}\n\n\\section {Definition of the low-density limit \\label{ldd}}\nThe purpose of this section is to define what is meant by the \\textit{low density limit} of correlation functions. It is defined as the \\textit{leading order of an expansion} in ${\\cal D}$, obtained by decomposing the form factor in partial fractions. As a consequence it is an expression valid for all $x,t$ and $c$, which becomes closer to the dynamical correlations as the particle density ${\\cal D}$ of the averaging state becomes smaller.\\\\\n\nThis definition requires some technicalities, but is rigorous and allows for a computation of the next orders, as shown in \\cite{GFE20} for a model that can be reformulated into free fermions. However, it a priori lacks an intuitive picture. For that reason we provide an interpretation of this low-density limit so defined as a Lehmann representation in terms of the \\textit{low density limit of the form factor}. The reasoning is here rather different, and consists in first approximating the form factors by the thermodynamic limit value they take when one of the two states is a dilute state, i.e. such that for any pair $i,j$ we have $L(\\lambda_i-\\lambda_j)\\to\\infty$. In this limit, the spectral sum of the dynamical correlations indeed matches the leading order of the expansion in ${\\cal D}$, providing an interesting and intuitive consistency check. 
But it must be stressed that the right definition of the low-density limit is more general than this intuitive calculation, since it only requires ${\\cal D}$ to be small, not the root density $\\rho(\\lambda)$ to be small everywhere.\n\n \n\n\\subsection {Partial fraction decomposition}\n\\subsubsection{Recall \\label{pfddefsec}}\nWe recall that the partial fraction decomposition (PFD) of a ratio of two polynomials $\\frac{P(X)}{\\prod_{i=1}^n(X-x_i)^{a_i}}$ with distinct $x_i$'s is the decomposition\n\\begin{equation}\\label{pfddef}\n\\frac{P(X)}{\\prod_{i=1}^n(X-x_i)^{a_i}}=P_0(X)+\\sum_{i=1}^n \\sum_{\\nu=1}^{a_i}\\frac{B_{i,\\nu}}{(X-x_i)^\\nu}\\,,\n\\end{equation}\nwith $P_0(X)$ a polynomial, and $B_{i,\\nu}$ coefficients given by\n\\begin{equation}\nB_{i,\\nu}=\\frac{1}{(a_i-\\nu)!}\\left(\\tfrac{d}{dX}\\right)^{a_i-\\nu}\\left[\\frac{(X-x_i)^{a_i}P(X)}{\\prod_{j\\neq i}(X-x_j)^{a_j}}\\right]\\Bigg|_{X=x_i}\\,.\n\\end{equation}\nThe polynomial $P_0(X)$ can be determined by studying e.g. the large $X$ behaviour of the ratio of the two polynomials on the left-hand side of \\eqref{pfddef}.\n\\subsubsection{The poles of the normalized form factor}\nWe consider $ \\pmb{\\lambda} $ and $ \\pmb{\\mu} $ two sets of respectively $N$ and $N-1$ rapidities, and would like to apply a partial fraction decomposition to the square of the normalized field form factor, with respect to each of the $\\mu_i$'s successively. The first task is to identify the poles in a given $\\mu_i$ at fixed values of the other $\\mu_j$'s. There are a priori three types of poles for $\\mu_i$\n\\begin{enumerate}\n\\item Double poles in $\\lambda_j$ for all $j$\n\\item Simple poles in $\\mu_j \\pm ic$ for all $j$\n\\item Poles corresponding to zeros of the determinant of the Gaudin matrix $\\mathcal{N}_{\\pmb{\\mu}}$\n\\end{enumerate}\nWe remark that the last two types of poles come from the fact that we consider normalized form factors. 
We also remark that since some of the entries of the Gaudin matrix diverge when $\\mu_i-\\mu_j\\to \\pm ic$, the second type of pole could happen to be absent from the full normalized form factor, but this possibility will not be relevant to our discussion. Lastly we notice that the last two types of poles are in fact never attained when all the roots $\\mu_i$ are real, which is always the case if $c>0$. Indeed, when the roots are real, the Gaudin matrix is strictly diagonally dominant, i.e. satisfies $\\forall i=1,...,N-1,\\quad |G_{ii}|>\\sum_{j\\neq i}|G_{ij}|$, hence is invertible. However, when performing the PFD of the form factor it has to be considered as a mere ratio of polynomials in $\\mu_i$'s that do not necessarily satisfy the Bethe equations, and these poles do have to be taken into account.\n\nAmong these three types of poles, the zeros of $\\mathcal{N}_{\\pmb{\\mu}}$ are the most problematic since their location is a complicated function of the other $\\mu_j$'s. For this reason we are going to consider the PFD of the form factor without these factors.\nNamely we define $F_\\psi(\\pmb{\\lambda},\\pmb{\\mu})$ by\n\\begin{equation}\\label{Fpsidef}\n\\begin{aligned}\n\\frac {\\left| \\left\\langle \\pmb{\\mu} |\\psi \\left( 0\\right) |\\pmb{\\lambda}\\right\\rangle \\right| ^{2}} {\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }= \\frac{F_\\psi(\\pmb{\\lambda},\\pmb{\\mu})}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}L^{2N-1}} \\,,\n\\end{aligned}\n\\end{equation}\nand consider the PFD of $F_\\psi(\\pmb{\\lambda},\\pmb{\\mu})$ only, with respect to the $\\mu_i$'s. 
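As a concrete sanity check of the residue formula recalled in Section \ref{pfddefsec}, one can verify the decomposition on a toy rational function with one double and one simple pole; the polynomial below is our own illustrative choice (its degree is low enough that $P_0(X)=0$):

```python
from fractions import Fraction

# Toy rational function (3X+1) / ((X-1)^2 (X-2)): double pole at 1, simple at 2.
def f(X):
    return Fraction(3 * X + 1) / ((X - 1) ** 2 * (X - 2))

# Coefficients from the residue formula, worked out by hand for this example:
B_1_2 = Fraction(3 * 1 + 1, 1 - 2)            # (X-1)^2 f(X) at X=1         -> -4
B_1_1 = Fraction(3 * (1 - 2) - (3 * 1 + 1),   # d/dX [(3X+1)/(X-2)] at X=1  -> -7
                 (1 - 2) ** 2)
B_2_1 = Fraction(3 * 2 + 1, (2 - 1) ** 2)     # (X-2) f(X) at X=2           ->  7

def pfd(X):
    return B_1_1 / (X - 1) + B_1_2 / (X - 1) ** 2 + B_2_1 / (X - 2)

# Exact agreement at sample points away from the poles:
assert all(f(X) == pfd(X) for X in (0, 3, 5, -7))
```

Exact rational arithmetic makes the agreement an identity rather than a numerical coincidence.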
\n\n\\subsubsection{The shape of the PFD of the reduced form factor}\nThe reduced form factor $F_\\psi(\\pmb{\\lambda},\\pmb{\\mu})$, seen as a function of $\\mu_1$, is a ratio of two polynomials with double poles in each of the $\\lambda_i$'s and simple poles in $\\mu_j\\pm ic$, so one can apply the decomposition written in Section \\ref{pfddefsec}. Since the reduced form factor goes to zero when $\\mu_1\\to\\infty$, we have $P_0(X)=0$ and so one can write\n\\begin{equation}\\label{154}\nF_\\psi(\\pmb{\\lambda},\\pmb{\\mu})=h_{\\mu_1}(\\mu_2,...,\\mu_{N-1})+\\sum_{i=1}^N \\sum_{\\nu=1}^{2}\\frac{B_{i,\\nu}(\\mu_2,...,\\mu_{N-1})}{(\\mu_1-\\lambda_i)^\\nu}\\,,\n\\end{equation}\nwith\n\\begin{equation}\nh_{\\mu_1}(\\mu_2,...,\\mu_{N-1})=\\sum_{i=2}^{N-1} \\frac{C^+_{i}(\\mu_2,...,\\mu_{N-1})}{\\mu_1-\\mu_i+ic}+\\sum_{i=2}^{N-1} \\frac{C^-_{i}(\\mu_2,...,\\mu_{N-1})}{\\mu_1-\\mu_i-ic}\\,,\n\\end{equation}\nwhere $B_{i,\\nu}(\\mu_2,...,\\mu_{N-1})$, $C^\\pm_{i}(\\mu_2,...,\\mu_{N-1})$ are 'coefficients' independent of $\\mu_1$, but which still depend on the remaining $\\mu_k$'s. They have the same pole structure, except that each $\\mu_i$ for $i\\neq 1$ no longer has a pole in $\\mu_1\\pm ic$. The function $h_{\\mu_1}(\\mu_2,...,\\mu_{N-1})$ is a function of $\\mu_1$ that has no poles in $\\mu_1$ when all the $\\mu_j$'s are real. 
\nWe now apply the same procedure to $B_{i,\\nu}(\\mu_2,...,\\mu_{N-1})$ and $h_{\\mu_1}(\\mu_2,...,\\mu_{N-1})$ with respect to $\\mu_2$ to obtain\n\\begin{equation}\nF_\\psi(\\pmb{\\lambda},\\pmb{\\mu})=h_{\\mu_1,\\mu_2}(\\mu_3,...,\\mu_{N-1})+\\sum_{i=1}^N\\sum_{j=1}^N\\sum_{\\nu_i=0}^2\\sum_{\\nu_j=1}^2\\frac{B_{i,\\nu_i,j,\\nu_j}(\\mu_3,...,\\mu_{N-1})}{(\\mu_1-\\lambda_i)^{\\nu_i}(\\mu_2-\\lambda_j)^{\\nu_j}}\\,,\n\\end{equation}\nwhere $B_{i,\\nu_i,j,\\nu_j}(\\mu_3,...,\\mu_{N-1})$ is a 'coefficient' independent of $\\mu_1,\\mu_2$, and $h_{\\mu_1,\\mu_2}(\\mu_3,...,\\mu_{N-1})$ a function of $\\mu_1,\\mu_2$ without singularities in real $\\mu_1,\\mu_2$. We note that the poles in $\\mu_2$ arising from the function $h_{\\mu_1}(\\mu_2,...,\\mu_{N-1})$, which has no poles in real $\\mu_1$, are counted through the case $\\nu_i=0$. Proceeding recursively, one obtains\n\n\\begin{equation}\\label{pfd}\nF_\\psi(\\pmb{\\lambda},\\pmb{\\mu})=\\sum_{\\{\\nu\\},f}\\frac{A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)}{\\prod_{i=1}^{N-1}(\\mu_i-\\lambda_{f(i)})^{\\nu_i}}\\,,\n\\end{equation}\nwhere each $\\nu_i$ takes the value $0$, $1$ or $2$, and where $f$ are functions defined on the points $i\\in\\{1,...,N-1\\}$ where $\\nu_i>0$, namely\n\\begin{equation}\nf:\\{i\\in\\{1,...,N-1\\} | \\nu_i>0 \\}\\to \\{1,...,N\\}\\,.\n\\end{equation}\nThe coefficients $A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)$ crucially do not depend on any $\\mu_i$ whenever $\\nu_i>0$, and are bounded regular functions of real $\\mu_i$ when $\\nu_i=0$. 
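The nested structure of this variable-by-variable decomposition can be made concrete on a two-variable toy function with simple poles only; the "rapidity" values below are our own illustrative choices, and the coefficients obtained after decomposing first in $\mu_1$ and then in $\mu_2$ are indeed independent of both variables:

```python
from fractions import Fraction

l1, l2 = 2, 5  # two illustrative integer "rapidities"

def F(u, v):
    """Toy two-variable function with simple poles at l1, l2 in each variable."""
    return Fraction(1) / ((u - l1) * (u - l2) * (v - l1) * (v - l2))

# One-variable PFD: 1/((X-l1)(X-l2)) = c[l1]/(X-l1) + c[l2]/(X-l2).
c = {l1: Fraction(1, l1 - l2), l2: Fraction(1, l2 - l1)}

def nested_pfd(u, v):
    # Decomposing in u, then in v, leaves constant coefficients c[x]*c[y]
    # in front of each product of pole factors.
    return sum(c[x] * c[y] / ((u - x) * (v - y))
               for x in (l1, l2) for y in (l1, l2))

assert all(F(u, v) == nested_pfd(u, v)
           for u in (0, 1, 7) for v in (-3, 4, 9))
```

In the form factor itself the coefficients are far less trivial, but the bookkeeping, one decomposition per variable, with coefficients independent of every variable that carries a pole, is the same.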
\n\n\\subsubsection{Computing the coefficients}\nIn the special case where all $\\nu_i>0$, the coefficients $A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)=A(\\pmb{\\lambda},\\{\\nu\\},f)$ do not depend on any $\\mu_i$'s and can be computed according to the formula\n\\begin{equation}\n\\label{formulA}\nA(\\pmb{\\lambda},\\{\\nu\\},f)=\\prod_{i=1}^{N-1}\\left[\\left(\\frac{d}{d\\mu_i}\\right)^{2-\\nu_i} (\\mu_i-\\lambda_{f(i)})^2 \\right]F_\\psi(\\pmb{\\lambda},\\pmb{\\mu})|_{\\mu_i=\\lambda_{f(i)}}\\,.\n\\end{equation}\nIf now there is a subset $K\\subset\\{1,...,N-1\\}$ such that $\\nu_i=0$ for $i\\in K$, one first defines the following function of the $\\mu_i$'s for $i\\in K$\n\\begin{equation}\n\\label{formulA2}\n\\bar{A}(\\{\\mu_i\\}_{i\\in K}|\\pmb{\\lambda},\\{\\nu\\},f)=\\prod_{\\substack{i=1\\\\ i\\notin K}}^{N-1}\\left[\\left(\\frac{d}{d\\mu_i}\\right)^{2-\\nu_i} (\\mu_i-\\lambda_{f(i)})^2 \\right]F_\\psi(\\pmb{\\lambda},\\pmb{\\mu})|_{\\mu_i=\\lambda_{f(i)},\\, i\\notin K}\\,.\n\\end{equation}\nThe function $\\bar{A}(\\{\\mu_i\\}_{i\\in K}|\\pmb{\\lambda},\\{\\nu\\},f)$ still has poles in $\\mu_i$ for $i\\in K$ since it also contains all the cases when $\\nu_i>0$. To compute the coefficient $A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)$ one has to remove from this function any pole in real $\\mu_i$ for $i\\in K$. Although these formulas are not very explicit, they are still of practical use to compute the simplest terms in the PFD, as we will see below.\\\\\n\n\n\nThe functions $f$ over which we sum in \\eqref{pfd} are actually rather constrained. 
First, since the form factor vanishes whenever two $\\mu_i$'s or two $\\lambda_i$'s coincide, we see from \\eqref{formulA} and \\eqref{formulA2} that if $\\nu_i=2$ or $\\nu_j=2$ and $f(i)=f(j)$, then the corresponding coefficient vanishes (namely, $A(\\pmb{\\lambda},\\{\\nu\\},f)=0$ if all the $\\nu_k$ are non-zero, and $\\bar{A}(\\{\\mu_i\\}_{i\\in K}|\\pmb{\\lambda},\\{\\nu\\},f)=0$ if there is a vanishing $\\nu_k$). If we have $\\nu_i=\\nu_j=2$ it directly follows from the absence of derivatives in \\eqref{formulA}; if $\\nu_i$ or $\\nu_j$ is equal to $1$, then it follows from the fact that there is a zero of order $2$ in the numerator with only one derivative. Similarly, if $k$ indices have a $\\nu_i=1$ and take the same value through $f$, then there is a zero of order $k(k-1)$ in the numerator, with only $k$ derivatives. Hence this imposes $k\\leq 2$. Thus one can impose in \\eqref{pfd} the two following constraints: (i) that $f(i)\\neq f(j)$ whenever $\\nu_i=2$ or $\\nu_j=2$, and (ii) that $f$ takes the same value at most twice.\\\\\n\nIn the following we will denote the number of elements of a set $E$ by\n\\begin{equation}\n|E|\\qquad \\text{or}\\qquad \\# E\\,,\n\\end{equation}\naccording to the most readable choice in the context.\n\n\\subsection {A density expansion}\n\nLet us now rewrite the Lehmann representation \\eqref{bigsumfield} in the following way. Instead of summing over the Bethe roots $\\mu_i$, we sum over their Bethe numbers $J_i$, and trade the ordering of the Bethe numbers for a non-ordered sum with a $\\frac{1}{(N-1)!}$ factor. Whenever two Bethe numbers coincide, the form factor is zero so that the two representations are indeed equivalent. 
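This bookkeeping between ordered and unordered sums can be checked on a toy summand; the symmetric function below is our own illustrative stand-in for the form factor weight:

```python
import math
from itertools import combinations, product

def F(J):
    """Toy 'form factor weight': symmetric in the quantum numbers and
    vanishing whenever two of them coincide, as in the text."""
    if len(set(J)) < len(J):
        return 0              # coincident Bethe numbers give zero
    return sum(J) ** 2        # any symmetric function works here

M, pool = 3, range(-4, 5)     # M plays the role of N-1

ordered = sum(F(J) for J in combinations(pool, M))      # J_1 < J_2 < J_3
unordered = sum(F(J) for J in product(pool, repeat=M))  # all tuples
assert ordered * math.factorial(M) == unordered
```

Because the summand vanishes on coinciding quantum numbers, every unordered configuration is counted exactly $M!$ times in the unrestricted sum, which is precisely the $\frac{1}{(N-1)!}$ factor introduced above.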
Using \\eqref{pfd}, we obtain\n\\begin{equation}\\label{pfdbeg}\n\\begin{aligned}\n \\left\\langle \\psi^\\dagger\\left( x,t\\right) \\psi \\left( 0,0\\right) \\right\\rangle =&\\frac{1}{L^{2N-1}(N-1)!}\\\\\n &\\sum_{\\{\\nu\\},f}\\sum_{J_1,...,J_{N-1}}\\frac{A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}\\frac{e^{it\\left( E( \\pmb{\\lambda}) -E( \\pmb{\\mu}) \\right) +ix\\left( P( \\pmb{\\mu}) -P( \\pmb{\\lambda}) \\right) }}{\\prod_{i=1}^{N-1}(\\mu_i-\\lambda_{f(i)})^{\\nu_i}}\\,.\n\\end{aligned}\n\\end{equation}\nThe sum over $J_1,...,J_{N-1}$ is invariant under any change $\\tilde{f}=f \\circ (i \\,j)$ with $(i \\,j)$ the permutation of indices $i,j$, whenever $\\nu_i=\\nu_j>0$ and $f(i)$ and $f(j)$ are attained the same number of times by $f$. Hence this sum only depends on the \\textit{set of points attained a given number of times} by $f$, not on the particular realization of the function $f$. \n\n\n\nTo rewrite the sum without these functions $f$, let us define $I_k$ for $k=0,1,2$ the set of points $i$ in $\\{1,...,N\\}$ that are attained $k$ times by $f$ from points where $\\nu_j=1$, namely\n\\begin{equation}\n\\begin{aligned}\nI_k&=\\Bigg\\{i\\in\\{1,...,N\\} \\left| \\# \\{j\\in \\{1,...,N-1\\}| \\nu_j=1\\text{ and }f(j)=i\\}=k\\Bigg\\}\\right.\\,.\n\\end{aligned}\n\\end{equation}\nAs a consequence the points in $\\{1,...,N\\}$ attained by $f$ from points where $\\nu_j=2$ are $\\{1,...,N\\}-(I_0\\cup I_1\\cup I_2)$. These subsets $I_0,I_1,I_2\\subset \\{1,...,N\\}$ have to be disjoint and to satisfy $|I_0|=|I_2|+1+p$ with $p=|\\{i|\\nu_i=0\\}|$ the number of points with $\\nu_i=0$, because $f$ can take at most twice the same value. 
\n\nWe will denote\n\\begin{equation}\nn=|I_2|\\,,\\qquad m=|I_1|\\,,\n\\end{equation}\nand parametrize\n\\begin{equation}\n\\begin{aligned}\nI_2&=\\{j_{1},...,j_{n}\\}\\\\\nI_0&=\\{j_{n+1},...,j_{2n+p+1}\\}\\\\\nI_1&=\\{j_{2n+p+2},...,j_{2n+p+m+1}\\}\\\\\n\\{1,...,N\\}-(I_0\\cup I_1 \\cup I_2)&=\\{j_{2n+p+m+2},...,j_{N}\\}\\,.\n\\end{aligned}\n\\end{equation}\nWhen rewriting \\eqref{pfdbeg} in terms of these subsets, one picks up a combinatorial factor corresponding to the number of functions $f$ with such an output. Choosing the set of points where $\\nu_i=0$ yields a factor ${N-1\\choose p}$, those where $\\nu_i=2$ a factor $(N-2n-p-m-1)!{N-1-p\\choose N-2n-p-m-1}$, those attained only once by $f$ and where $\\nu_i=1$ a factor $m!{2n+m\\choose m}$. Finally those attained twice by $f$ yield a factor $(2n-1)!!\\,n!$. Writing $(2n-1)!!=\\tfrac{(2n)!}{n!2^n}$ yields a total combinatorial factor\n\\begin{equation}\n\\frac{(N-1)!}{2^{n}p!}\\,.\n\\end{equation}\nWe conclude that we can write\n\\begin{equation}\\label{expdensity}\n\\left\\langle \\psi^\\dagger\\left( x,t\\right) \\psi \\left( 0,0\\right) \\right\\rangle =\\sum_{n,m,p\\geq 0}S_{n,m,p}\\,,\n\\end{equation}\nwith\n\\begin{equation}\n\\begin{aligned}\n&S_{n,m,p}=\\frac{1}{2^np!L^{2N-1}}\\sum_{\\substack{I_{0,1,2}\\subset \\{1,...,N\\}\\\\|I_0|=n+p+1\\\\|I_1|=m\\\\|I_2|=n \\\\\\text{all disjoint}}}\\sum_{J_1,...,J_{N-1}}\\frac{{\\cal A}(I_0,I_1,I_2|\\{\\mu_i\\}_{i=2n+1}^{2n+p})}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}\\\\\n &\\qquad\\qquad\\times\\frac{e^{it\\left( E\\left( \\pmb{\\lambda}\\right) -E\\left( \\pmb{\\mu} \\right) \\right) +ix\\left( P\\left( \\pmb{\\mu}\\right) -P\\left( \\pmb{\\lambda}\\right) \\right) }}{\\prod_{i=1}^{n}(\\mu_{2i-1}-\\lambda_{j_i})(\\mu_{2i}-\\lambda_{j_i})\\prod_{i=2n+1+p}^{2n+m+p}(\\mu_i-\\lambda_{j_{i+1}})\\prod_{i=2n+m+p+1}^{N-1}(\\mu_i-\\lambda_{j_{i+1}})^{2}}\\,.\n\\end{aligned}\n\\end{equation}\nThe specific ordering of the $\\mu_i$'s in this expression is 
irrelevant. \n\nHere we have ${\\cal A}(I_0,I_1,I_2|\\{\\mu_i\\}_{i=2n+1}^{2n+p})=A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)$ with $\\nu_i=1$ for $i=1,...,m+2n$, $\\nu_i=0$ for $i=m+2n+1,...,m+2n+p$ and $\\nu_i=2$ for $i=m+2n+p+1,...,N-1$, and with the function $f$ taking the values in $I_1$ over $1,...,m$, twice the values in $I_2$ over $m+1,...,m+2n$, and the values in $\\{1,...,N\\}-(I_0\\cup I_1\\cup I_2)$ over $m+2n+p+1,...,N-1$.\n\nSince each choice of an index in $\\{1,...,N\\}$ comes with a factor ${\\cal D}$, the term $S_{n,m,p}$ is of order ${\\cal O}({\\cal D}^{1+2n+m+p})$. Hence expression \\eqref{expdensity} is \\textit{an expansion in the particle density} ${\\cal D}$ of the averaging state. \n\n\\subsection {Definition of the low-density limit of the correlation function}\nThe low density limit of the dynamical correlations is defined as retaining only the leading term $S_{0,0,0}$ in \\eqref{expdensity}. It is obtained with $p=0$ and $I_1=I_2=\\varnothing$, and so $I_0=\\{a\\}$ for $a=1,...,N$. 
Namely, reparametrising $\\pmb{\\mu}=\\{\\mu_1,...,\\mu_{a-1},\\mu_{a+1},...,\\mu_N\\}$ for convenience\n\\begin{equation}\\label{S00}\n\\begin{aligned}\n &S_{0,0,0}=\\frac{1}{L}\\sum_{a=1}^N\\sum_{J_i,\\, i\\neq a}\\frac{{\\cal A}(\\{a\\},\\varnothing,\\varnothing|\\varnothing)}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}\\frac{e^{it\\left( E\\left( \\pmb{\\lambda}\\right) -E\\left( \\pmb{\\mu} \\right) \\right) +ix\\left( P\\left( \\pmb{\\mu}\\right) -P\\left( \\pmb{\\lambda}\\right) \\right) }}{\\prod_{i\\neq a}L^2(\\mu_i-\\lambda_{i})^{2}}\\,.\n\\end{aligned}\n\\end{equation}\nSince all the other terms in \\eqref{expdensity} carry at least one additional factor ${\\cal D}$, we have\n\\begin{equation}\\label{psipsild1}\n\\begin{aligned}\n &\\left\\langle \\psi^\\dagger\\left( x,t\\right) \\psi \\left( 0,0\\right) \\right\\rangle =S_{0,0,0}(1+{\\cal O}({\\cal D}))\\,.\n\\end{aligned}\n\\end{equation}\nIn the rest of the paper, we will use the symbol $\\underset{\\text{l.d.}}{\\sim}$ to indicate a low-density limit. Namely\n\\begin{equation}\nX\\underset{\\text{l.d.}}{\\sim} Y\\qquad \\text{means}\\qquad X=Y(1+{\\cal O}({\\cal D}))\\,.\n\\end{equation}\nUsing formula \\eqref{formulA}, one finds\n\\begin{equation}\n{\\cal A}(\\{a\\},\\varnothing,\\varnothing|\\varnothing)=\\prod_{i\\neq a}\\frac{4c^2}{(\\lambda_i-\\lambda_a)^2+c^2}\\,.\n\\end{equation}\nHence the low-density limit\n\\begin{equation}\\label{psipsild2}\n\\begin{aligned}\n &\\left\\langle \\psi^\\dagger\\left( x,t\\right) \\psi \\left( 0,0\\right) \\right\\rangle \\underset{\\text{l.d.}}{\\sim}\\frac{1}{L}\\sum_{a=1}^Ne^{it\\lambda_a^2-ix\\lambda_a}\\sum_{J_i,\\, i\\neq a}\\frac{1}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}\\prod_{i\\neq a}\\left(\\frac{4c^2}{(\\lambda_i-\\lambda_a)^2+c^2}\\frac{e^{it\\left( \\lambda_i^2-\\mu_i^2 \\right) +ix\\left( \\mu_i-\\lambda_i \\right) }}{L^2(\\mu_i-\\lambda_{i})^{2}}\\right)\\,.\n\\end{aligned}\n\\end{equation}\nA few comments are in order. 
Up to now, nothing has been said of the Bethe equations, and in principle the $\\mu_i$'s in this expression should satisfy exactly the Bethe equations \\eqref{belog}. If one wishes to determine the dynamical correlations at leading order in ${\\cal D}$, there remains only the term \\eqref{psipsild2} in the full expansion \\eqref{expdensity}; but one can also satisfy the Bethe equations \\eqref{belog} only at leading order in ${\\cal D}$, since their exact solution will involve higher orders in ${\\cal D}$ that are of the same order as the terms discarded in \\eqref{expdensity}. Stated differently, the leading order in ${\\cal D}$ of the dynamical correlations is obtained by both retaining only \\eqref{psipsild2} \\textit{and} satisfying \\eqref{belog} at leading order in ${\\cal D}$, while the higher orders in ${\\cal D}$ will require both taking into account higher terms in \\eqref{expdensity} \\textit{and} satisfying \\eqref{belog} at higher orders in ${\\cal D}$ in \\eqref{psipsild2}.\n\n\n\\subsection {Interpretation: low-density limit of the field form factor\\label{intuitive}}\nThe low-density limit is defined above as the leading term in an expansion of the correlation functions obtained by decomposing the form factors in partial fractions, that turns out to be an expansion in the density of the averaging state ${\\cal D}$. \nThis definition allows for a systematic calculation of the next corrections in the density by taking into account more terms in \\eqref{expdensity}.\\\\\n\nThe low-density limit of the correlation functions can however be recovered more intuitively but less rigorously with the following reasoning. 
If the root density $\\rho(\\lambda)$ of the averaging state is small, then the distance between two consecutive roots $L(\\lambda_i-\\lambda_j)$ is 'typically'\\footnote{This cannot be true for any representative state of the density, but is true for a 'typical state' whose roots are regularly spaced according to the value of the density, see \\cite{granetessler20}.} large compared to $1$, and in the limit of vanishingly low density becomes infinite. We will say that a sequence of states satisfying this property is \\textit{dilute}, namely\n\\begin{equation}\n(\\pmb{\\lambda}^{(L)})_{L\\in\\mathbb{N}}\\text{ dilute}\\iff \\forall i\\neq j,\\, \\underset{L\\to\\infty}{\\lim}\\, L|\\lambda^{(L)}_i-\\lambda^{(L)}_j|=\\infty\\,.\n\\end{equation}\nFor notational convenience we will drop the $L$ dependence of the sequence of states. Let us consider a dilute state $\\pmb{\\lambda}$ and investigate the consequences it has on the form factors between $\\pmb{\\lambda}$ and another state $\\pmb{\\mu}$.\n\nLet us first note that because of the assumption of diluteness, a Bethe number $J_i$ of $\\pmb{\\mu}$ can be at a distance ${\\cal O}(1)$ from at most one Bethe number $I_j$ of $\\pmb{\\lambda}$. Hence given a root $\\mu_i$, the quantity $L(\\mu_i-\\lambda_j)$ can be of order $1$ for at most one $\\lambda_j$. Consequently, it is seen from the expression \\eqref{FF} that in the low-density limit the form factor is non-zero only if each Bethe number $J_i$ of $\\pmb{\\mu}$ is at a distance of order $1$ from exactly one Bethe number $I_j$ of $\\pmb{\\lambda}$, and conversely if these $N-1$ Bethe numbers $I_j$ are each at a distance of order $1$ from exactly one Bethe number $J_k$ of $\\pmb{\\mu}$. Since there are $N$ roots in $\\pmb{\\lambda}$, there is one remaining root $\\lambda_a$ that is not close to any of the $\\mu_i$'s. 
We can re-label the roots $\\pmb{\\mu}=\\{\\mu_1,...,\\mu_{a-1},\\mu_{a+1},...,\\mu_N\\}$ so that $L(\\mu_i-\\lambda_i)$ is of order $1$ for all $i\\neq a$. Let us then investigate the value taken by the normalized form factor \\eqref{FF} in this regime.\n\nLet us first study the determinant $\\mathcal{N}_{\\pmb{\\lambda}}$ in the low-density limit. We have\n\\begin{equation}\\label{gaudin2}\nG_{ij}(\\pmb{\\lambda})= \\delta_{ij}-\\frac{1}{L}\\frac{2c}{c^2+(\\lambda_i-\\lambda_j)^2}+{\\cal O}({\\cal D})\\,,\n\\end{equation}\nso that\n\\begin{equation}\\label{det1det1}\n\\begin{aligned}\n\\mathcal{N}_{\\pmb{\\lambda}}&=\\exp \\,\\text{tr}\\, \\log G(\\pmb{\\lambda})\\\\\n&=\\exp\\left( -\\sum_{n=1}^\\infty \\frac{1}{n}\\,\\text{tr}\\, g^n+{\\cal O}({\\cal D})\\right)\\\\\n&=1+{\\cal O}({\\cal D})\\,,\n\\end{aligned}\n\\end{equation}\nwith $g_{ij}=\\frac{1}{L}\\frac{2c}{c^2+(\\lambda_i-\\lambda_j)^2}$, since $\\,\\text{tr}\\, g^n={\\cal O}({\\cal D}^n)$. Here, we did not use the diluteness of $\\pmb{\\lambda}$, but only evaluated the leading order in ${\\cal D}$ of the determinant.\n\nSecond, again because $\\pmb{\\lambda}$ is dilute, all the differences $\\mu_i-\\lambda_i$ are negligible compared to any $\\lambda_a-\\lambda_j$. 
It follows that\n\\begin{equation}\nV_j^+-V_j^-\\underset{\\text{l.d.}}{\\sim}\\frac{-2ic}{(\\lambda_a-\\lambda_j)^2+c^2}\\,.\n\\end{equation}\nWe also have\n\\begin{equation}\n\\sqrt{\\frac{\\prod_{i\\neq j}(\\lambda_i-\\lambda_j+ic)}{\\prod_{i\\neq j}(\\mu_i-\\mu_j+ic)}}\\underset{\\text{l.d.}}{\\sim} i^{N-1}\\prod_{j\\neq a}\\sqrt{(\\lambda_j-\\lambda_a)^2+c^2}\\,,\n\\end{equation}\nand\n\\begin{equation}\n\\frac{\\prod_{i< j}|\\lambda_i-\\lambda_j|\\prod_{i< j}|\\mu_i-\\mu_j|}{\\prod_{i, j}(\\mu_j-\\lambda_i)}\\underset{\\text{l.d.}}{\\sim} (-1)^{(N-1)(N-2)\/2}\\prod_{j\\neq a}\\,\\text{sgn}\\,(\\lambda_j-\\lambda_a)\\frac{1}{\\prod_{j\\neq a}(\\mu_j-\\lambda_j)}\\,.\n\\end{equation}\nAs for the matrix $U$, setting $\\lambda_p=\\lambda_s=\\lambda_a$, the dominant entries are\n\\begin{equation}\nU_{aj}\\underset{\\text{l.d.}}{\\sim}-1+\\frac{2}{c}\\,,\n\\end{equation}\nwhile the other entries are of order ${\\cal O}(L^{-1})$. Hence\n\\begin{equation}\n\\underset{i,j}{\\det}(\\delta_{ij}+U_{ij})\\underset{\\text{l.d.}}{\\sim}\\frac{2}{c}\\,.\n\\end{equation}\nWe obtain the following low-density limit of the form factor\n\\begin{equation}\\label{ldff}\n\\frac { \\left\\langle \\pmb{\\mu} |\\psi \\left( 0\\right) |\\pmb{\\lambda}\\right\\rangle } {\\sqrt{\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }}\\underset{\\text{l.d.}}{\\sim}\n\\frac{\\phi}{\\sqrt{L}}\\prod_{j\\neq a}\\frac{2c}{\\sqrt{(\\lambda_j-\\lambda_a)^2+c^2}}\\frac{1}{L(\\mu_j-\\lambda_j)}\\,,\n\\end{equation}\nwith the phase\n\\begin{equation}\n\\phi=(-i)^N \\prod_{j\\neq a}\\,\\text{sgn}\\,(\\lambda_j-\\lambda_a)\\,.\n\\end{equation}\n\nLet us reformulate the meaning of this approximation. 
We consider $\\pmb{\\lambda}$ and $\\pmb{\\mu}$ two Bethe states with $N$ and $N-1$ particles respectively, and denote $\\iota:\\{1,...,N-1\\}\\to \\{1,...,N\\}$ the function such that the element of $\\{\\lambda_1,...,\\lambda_N\\}$ that is the closest to $\\mu_i$ is $\\lambda_{\\iota(i)}$. For a dilute $\\pmb{\\lambda}$, the form factor $\\frac { \\left\\langle \\pmb{\\mu} |\\psi \\left( 0\\right) |\\pmb{\\lambda}\\right\\rangle } {\\sqrt{\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }}$ is non-negligible in the thermodynamic limit only if $\\iota$ is one-to-one from $\\{1,...,N-1\\}$ to $\\{1,...,N\\}-\\{a\\}$ for some $a=1,...,N$. In this case, the form factor reads\n\\begin{equation}\\label{ldff2}\n\\frac { \\left\\langle \\pmb{\\mu} |\\psi \\left( 0\\right) |\\pmb{\\lambda}\\right\\rangle } {\\sqrt{\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }}\\underset{\\text{l.d.}}{\\sim}\n\\frac{\\phi}{\\sqrt{L}}\\prod_{j=1}^{N-1}\\frac{2c}{\\sqrt{(\\lambda_{\\iota(j)}-\\lambda_a)^2+c^2}}\\frac{1}{L(\\mu_j-\\lambda_{\\iota(j)})}\\,.\n\\end{equation}\nEquation \\eqref{ldff} corresponds to relabelling $\\mu_{\\iota^{-1}(j)}$ into $\\mu_j$ for $j=1,...,N,\\, j\\neq a$.\\\\\n\n\nSuch an expression implies an ordering of the roots $\\mu_1,...,\\mu_{N-1}$ according to the ordering of the $\\lambda_i$'s. Hence in the spectral sum \\eqref{bigsumfield}, there is no factor $1\/(N-1)!$ once expressed in terms of the Bethe numbers of the $\\mu_j$'s. 
Using this expression for the form factor, one indeed recovers the low-density correlation function \\eqref{psipsild2} properly defined from the partial fraction decomposition, with ${\\cal N}_{\\pmb{\\lambda}}, {\\cal N}_{\\pmb{\\mu}}=1+{\\cal O}({\\cal D})$ already imposed at leading order in ${\\cal D}$.\n\n\n\n\n\n\\section {Field two-point function\\label{fieldsection}}\nIn this section we compute $S_{0,0,0}$ in \\eqref{S00} in the low-density limit.\n\n\\subsection{States contributing to the thermodynamic limit}\nIn order to carry out the sum over the Bethe numbers in \\eqref{S00}, let us first investigate which values of the Bethe numbers give a non-zero contribution in the thermodynamic limit.\n\nLet us consider Bethe number configurations such that $\\mu_i-\\lambda_i=\\mathcal{O}(L^{-b_i})$ for $i\\neq a$ with $ b_i\\geq 0$. Since the parity of the number of roots $\\lambda_i$ and $\\mu_i$ is different, the Bethe numbers of $\\lambda_i$ are integers (resp. half-odd integers) if those of $\\mu_i$ are half-odd integers (resp. integers). Hence from the Bethe equations it follows that one has $b_i\\leq 1$.\n\nNow, since the Bethe number $J_i$ can take $\\mathcal{O}(L^{1-b_i})$ values (for $\\mu_i-\\lambda_i=\\mathcal{O}(L^{-b_i})$ to be satisfied), and since $a$ in \\eqref{S00} can take $\\mathcal{O}(L)$ values, there are $\\mathcal{O}(L^{N-\\sum_{i\\neq a}b_i})$ such configurations. Besides, each summand in \\eqref{S00} is $\\mathcal{O}(L^{-2N+1+2\\sum_{i\\neq a}b_i})$. Hence the contribution of these configurations is $\\mathcal{O}(L^{-N+1+\\sum_{i\\neq a}b_i})$. Given that $0\\leq b_i\\leq 1$, the only possibility to obtain a non-vanishing result in the thermodynamic limit is to have $\\forall i,\\, b_i=1$. Hence the only non-vanishing configurations in \\eqref{S00} are those for which the Bethe numbers of $\\mu_i$ differ from those of $\\lambda_i$ by $\\mathcal{O}(L^{0})$. 
We will denote\n\\begin{equation}\nn_i+\\frac{1}{2}=J_i-I_i\\qquad \\text{for }i\\neq a\\,,\n\\end{equation}\nthe difference between the Bethe numbers of $\\mu_i$ and $\\lambda_i$, with $n_i$ an integer of order ${\\cal O}(L^0)$.\n\n\\subsection{Decoupling of the spectral sum in the low-density limit}\nThe Bethe roots $\\mu_i$ involved in \\eqref{S00} depend on the difference of Bethe numbers $n_i$ and are all coupled through the Bethe equations. Taking the difference of the Bethe equations \\eqref{belog} for $\\mu_k$ and $\\lambda_k$, one obtains\n\\begin{equation}\n\\mu_k-\\lambda_k=\\frac{2\\pi}{L}(G^{-1}\\tilde{n})_k+{\\cal O}(L^{-2})\\,,\n\\end{equation}\nwith $G=G(\\pmb{\\mu})$ introduced in \\eqref{gaudin}, and the vector\n\\begin{equation}\n\\begin{aligned}\n\\tilde{n}_k&=n_k+\\frac{1}{2}+\\frac{1}{\\pi}\\arctan \\frac{\\lambda_k-\\lambda_a}{c}\\,.\n\\end{aligned}\n\\end{equation}\nIn the low-density limit, one has $Gx \\underset{\\text{l.d.}}{\\sim} x$ for all vectors $x$, so that the roots decouple and are expressed as\n\\begin{equation}\n\\mu_k-\\lambda_k\\underset{\\text{l.d.}}{\\sim} \\frac{2\\pi}{L}(n_k+\\alpha_k(\\lambda_a))+{\\cal O}(L^{-2})\\,,\n\\end{equation}\nwhere we introduced\n\\begin{equation}\\label{alpha}\n\\alpha_i(\\nu)=\\frac{1}{2}+\\frac{1}{\\pi}\\arctan \\frac{\\lambda_i-\\nu}{c}\\,.\n\\end{equation}\nFinally, at leading order in ${\\cal D}$, the determinants ${\\cal N}_{\\pmb{\\lambda}},{\\cal N}_{\\pmb{\\mu}}$ are equal to $1$ according to \\eqref{det1det1}. 
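The statement that the norm determinants equal 1 up to a correction of order the density can be illustrated numerically. This is our own sketch (the rapidities and couplings are arbitrary sample values): the determinant of I - g, with g the leading-order kernel above, approaches 1 as the density N/L is lowered at fixed rapidities:

```python
import numpy as np

def norm_det(lams, L, c):
    """det(I - g) with g_ij = (1/L) * 2c / (c^2 + (lam_i - lam_j)^2),
    the leading-order Gaudin-type matrix used in the text."""
    d = lams[:, None] - lams[None, :]
    g = (2.0 * c / (c**2 + d**2)) / L
    return np.linalg.det(np.eye(len(lams)) - g)

c = 1.0
lams = np.linspace(-1.0, 1.0, 5)       # N = 5 fixed sample rapidities
for L in (100.0, 1000.0, 10000.0):     # density D = N/L decreases
    print(L, norm_det(lams, L, c))     # deviation from 1 shrinks like D
```

The deviation from 1 is dominated by tr g, which is of order N/L, in line with tr g^n = O(D^n).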
The spectral sum \\eqref{S00} can thus be expressed as a sum over $a$ of products of $N-1$ one-dimensional sums\n\\begin{equation}\\label{S001}\n\\begin{aligned}\nS_{0,0,0}\\underset{\\text{l.d.}}{\\sim}\\frac{1}{L}&\\sum_{a=1}^Ne^{it \\lambda_a^2-ix\\lambda_a}\\\\\n&\\times\\prod_{j\\neq a}\\left(\\frac{1}{\\pi^2}\\frac{c^2}{(\\lambda_j-\\lambda_a)^2+c^2}\\sum_{\\substack{n=-\\infty}}^{\\infty}\\frac{e^{i\\tfrac{2\\pi}{L} (n+\\alpha_{j}(\\lambda_a))(x-2\\lambda_j t)-it\\left(\\tfrac{2\\pi}{L}\\right)^2(n+\\alpha_j(\\lambda_a))^2}}{(n+\\alpha_j(\\lambda_a))^2}\\right)\\,.\n\\end{aligned}\n\\end{equation}\n\n\\subsection{Thermodynamic limit of the spectral sum}\nIn order to proceed we need to determine the thermodynamic limit of each of the one-dimensional sums, which are of the type\n\\begin{equation}\n\\sum_{n\\in\\mathbb{Z}}\\frac{e^{i\\frac{w}{L}(n+\\alpha)+i\\frac{\\tau}{L^2}(n+\\alpha)^2}}{(n+\\alpha)^2}\\,,\n\\end{equation}\nfor real $\\alpha,w,\\tau$, with $\\alpha$ not an integer. Let us first consider the case $\\tau=0$. The quantity $\\sum_{n\\in\\mathbb{Z}}\\frac{e^{iWn}}{(n+\\alpha)^2}$ is exactly the Fourier series of a certain $2\\pi$-periodic function of $W$. Noticing that for $n$ integer\n\\begin{equation}\n\\int_{-\\pi}^{\\pi}\\left[\\left(\\frac{\\pi}{\\sin \\pi\\alpha}\\right)^2e^{-iW\\alpha} +\\frac{i\\pi}{\\sin\\pi\\alpha}W e^{i\\pi\\alpha\\,\\text{sgn}\\,(W)-iW\\alpha} \\right]e^{-iWn}\\D{W}=\\frac{2\\pi}{(n+\\alpha)^2}\\,,\n\\end{equation}\nwe conclude that\n\\begin{equation}\n\\sum_{n\\in\\mathbb{Z}}\\frac{e^{iW(n+\\alpha)}}{(n+\\alpha)^2}=\\left(\\frac{\\pi}{\\sin \\pi\\alpha}\\right)^2 +\\frac{i\\pi}{\\sin\\pi\\alpha}W e^{i\\pi\\alpha\\,\\text{sgn}\\,(W)}\\,,\\qquad\\text{for }-\\pi<W<\\pi\\,.\n\\end{equation}
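The closed form just obtained for the tau = 0 sum can be checked numerically. A sketch of ours (truncation level and the sample values of W and alpha are arbitrary; W must lie in (-pi, pi) and alpha must not be an integer):

```python
import numpy as np

def series_lhs(W, alpha, N=200_000):
    # symmetric partial sum of sum_{n in Z} exp(iW(n+alpha)) / (n+alpha)^2
    n = np.arange(-N, N + 1)
    return np.sum(np.exp(1j * W * (n + alpha)) / (n + alpha) ** 2)

def series_rhs(W, alpha):
    # (pi/sin(pi a))^2 + (i pi/sin(pi a)) W exp(i pi a sgn(W)), for -pi < W < pi
    s = np.pi / np.sin(np.pi * alpha)
    return s**2 + 1j * s * W * np.exp(1j * np.pi * alpha * np.sign(W))

for W in (-2.0, -0.7, 0.4, 2.5):
    assert abs(series_lhs(W, 0.3) - series_rhs(W, 0.3)) < 1e-3
```

The truncation error of the symmetric partial sum is bounded by the tail of 1/n^2, of order 1/N, well below the tolerance used.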
The resulting expression is the leading term in the expansion \\eqref{expdensity}, and constitutes the low-density limit of the dynamical correlations of the field.\n\n\n\n\\section {Density two-point function\\label{densitysection}}\nIn this section we apply the same reasoning as in Sections \\ref{ldd} and \\ref{fieldsection} but to the density two-point function.\n\n\\subsection{Partial fraction decomposition}\nWe consider $ |\\pmb{\\lambda} \\rangle $ and $ |\\pmb{\\mu} \\rangle $ eigenstates of the Lieb-Liniger Hamiltonian with $N$ rapidities, and define $F_\\sigma(\\pmb{\\lambda},\\pmb{\\mu})$ by\n\\begin{equation}\n\\begin{aligned}\n\\frac {\\left| \\left\\langle \\pmb{\\mu} |\\sigma \\left( 0\\right) |\\pmb{\\lambda}\\right\\rangle \\right| ^{2}} {\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }= \\frac{F_\\sigma(\\pmb{\\lambda},\\pmb{\\mu})}{L^{2N}\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}} \\,.\n\\end{aligned}\n\\end{equation}\nSimilarly to the field case $F_\\psi$, the reduced form factor $F_\\sigma(\\pmb{\\lambda},\\pmb{\\mu})$ is a ratio of two polynomials in the Bethe roots, so that it can be decomposed into partial fractions. Repeating the arguments that apply to the field case, we similarly obtain the decomposition\n\\begin{equation}\\label{pfdsigma}\nF_\\sigma(\\pmb{\\lambda},\\pmb{\\mu})=\\sum_{\\{\\nu\\},f}\\frac{A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)}{\\prod_{i=1}^{N}(\\mu_i-\\lambda_{f(i)})^{\\nu_i}}\\,,\n\\end{equation}\nwhere each $\\nu_i$ takes the value $0,1$ or $2$, and where $f$ ranges over functions $\\{i\\in\\{1,...,N\\}| \\nu_i\\neq 0\\}\\to \\{1,...,N\\}$. The coefficients $A(\\pmb{\\lambda},\\pmb{\\mu},\\{\\nu\\},f)$ crucially do not depend on any $\\mu_i$ whenever $\\nu_i>0$, and are bounded functions of real $\\mu_i$ if $\\nu_i=0$. 
The function $f$ has the same constraints as in the field case, namely it can take at most twice the same value, and takes only once a value on points with $\\nu_i=2$.\n\nThis decomposition also leads to an expansion in the particle density ${\\cal D}$. Namely, with the parametrization $m=|I_1|$, $n=|I_2|$, $p=|\\{i| \\nu_i=0\\}|$\n\\begin{equation}\n\\begin{aligned}\nI_2&=\\{j_{1},...,j_{n}\\}\\\\\nI_0&=\\{j_{n+1},...,j_{2n+p}\\}\\\\\nI_1&=\\{j_{2n+p+1},...,j_{2n+p+m}\\}\\\\\n\\{1,...,N\\}-(I_0\\cup I_1 \\cup I_2)&=\\{j_{2n+p+m+1},...,j_{N}\\}\\,.\n\\end{aligned}\n\\end{equation}\nwe have\n\\begin{equation}\\label{expdensitysigma}\n\\left\\langle \\sigma\\left( x,t\\right) \\sigma \\left( 0,0\\right) \\right\\rangle =\\sum_{n,m,p\\geq 0}S_{n,m,p}\\,,\n\\end{equation}\nwith\n\\begin{equation}\n\\begin{aligned}\n&S_{n,m,p}=\\frac{1}{2^np!L^{2N}}\\sum_{\\substack{I_{0,1,2}\\subset \\{1,...,N\\}\\\\|I_0|=n+p\\\\|I_1|=m\\\\|I_2|=n \\\\\\text{all disjoint}}}\\sum_{J_1,...,J_{N}}\\frac{{\\cal A}(I_0,I_1,I_2|\\{\\mu_i\\}_{i=2n+1}^{2n+p})}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}\\\\\n &\\qquad\\qquad\\times\\frac{e^{it\\left( E\\left( \\pmb{\\lambda}\\right) -E\\left( \\pmb{\\mu} \\right) \\right) +ix\\left( P\\left( \\pmb{\\mu}\\right) -P\\left( \\pmb{\\lambda}\\right) \\right) }}{\\prod_{i=1}^{n}(\\mu_{2i-1}-\\lambda_{j_i})(\\mu_{2i}-\\lambda_{j_i})\\prod_{i=2n+1+p}^{2n+m+p}(\\mu_i-\\lambda_{j_{i}})\\prod_{i=2n+m+p+1}^{N}(\\mu_i-\\lambda_{j_{i}})^{2}}\\,.\n\\end{aligned}\n\\end{equation}\n The leading term is a priori obtained with $\\nu_i=2$, and the next terms by replacing some of the $\\nu_i$'s by $1$ or $0$, each time picking a factor ${\\cal D}$. 
However, in contrast to the field case, the coefficient of the a priori leading term is actually found to vanish, using the analogue of \\eqref{formulA} for the density operator case\n \\begin{equation}\n{\\cal A}(\\varnothing,\\varnothing,\\varnothing|\\varnothing)=0\\,.\n\\end{equation}\nIf one $\\nu_i$ is set to $1$, the coefficient still vanishes\n \\begin{equation}\n{\\cal A}(\\varnothing,\\{a\\},\\varnothing|\\varnothing)=0\\,.\n\\end{equation}\nHowever, if it is set to $0$, the coefficient is non-zero and reads\n \\begin{equation}\n{\\cal A}(\\{a\\},\\varnothing,\\varnothing|\\mu_a)=\\prod_{j\\neq a}\\frac{4c^2(\\lambda_a-\\mu_a)^2}{[(\\lambda_j-\\lambda_a)^2+c^2][(\\lambda_j-\\mu_a)^2+c^2]}\\,.\n\\end{equation}\nHence the leading order in the density expansion, i.e. the low-density limit, is obtained with\n\\begin{equation}\n\\left\\langle \\sigma( x,t) \\sigma( 0,0) \\right\\rangle\\underset{\\text{l.d.}}{\\sim} S_{0,0,1}\\,.\n\\end{equation}\nIt yields the following low-density limit of the density two-point function\n\\begin{equation}\\label{2pfdensityld}\n\\begin{aligned}\n & \\left\\langle \\sigma( x,t) \\sigma( 0,0) \\right\\rangle \\underset{\\text{l.d.}}{\\sim} \\frac{1}{L^2}\\sum_{\\lambda_a,\\mu_a}e^{ix(\\mu_a-\\lambda_a)+it(\\lambda_a^2-\\mu_a^2)}\\\\&\\times\\sum _{J_i,\\, i\\neq a}\\frac{1}{\\mathcal{N}_{\\pmb{\\lambda}}\\mathcal{N}_{\\pmb{\\mu}}}\\prod_{j\\neq a}\\frac{c^2}{(\\lambda_j-\\lambda_a)^2+c^2}\\frac{c^2}{(\\lambda_j-\\mu_a)^2+c^2}\\cdot\n \\frac{4(\\lambda_a-\\mu_a)^2}{c^2L^2(\\mu_j-\\lambda_j)^2}e^{ix(\\mu_j-\\lambda_j)+it(\\lambda_j^2-\\mu_j^2)}\\,.\n\\end{aligned}\n\\end{equation}\n\n\n\n\\subsection {Interpretation: low-density limit of the density form factor}\nSimilarly to the field operator, the low-density limit of the density two-point function is defined in terms of a partial fraction decomposition of the form factor. 
In the field case, one was also able to obtain a low-density limit of the field form factor by considering dilute states, see Section \\ref{intuitive}. In the density case, we are going to follow a different route in order to show the usefulness of \\eqref{ldff}, by recovering the low-density limit of the density correlation \\eqref{2pfdensityld} from the low-density approximation of the field form factor \\eqref{ldff}.\n The rationale is that the density form factor can be obtained as a limit of a field two-point function between different eigenstates $\\pmb{\\lambda}$ and $\\pmb{\\mu}$ with $N$ roots\n\\begin{equation}\n\\frac{\\langle \\pmb{\\mu}|\\sigma(0)|\\pmb{\\lambda}\\rangle}{\\sqrt{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}}=\\underset{x\\to 0}{\\lim}\\, \\frac{\\langle \\pmb{\\mu}|\\psi^\\dagger( x,0) \\psi ( 0,0)|\\pmb{\\lambda}\\rangle}{\\sqrt{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}}\\,.\n\\end{equation}\nThis two-point function can itself be expressed as a Lehmann representation\n\\begin{equation}\n\\frac{\\langle \\pmb{\\mu}|\\psi^\\dagger( x,0) \\psi ( 0,0)|\\pmb{\\lambda}\\rangle}{\\sqrt{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}}=\\sum_{\\pmb{\\nu}}\\frac{\\langle \\pmb{\\nu}|\\psi(0)|\\pmb{\\mu}\\rangle^* \\langle \\pmb{\\nu}|\\psi(0)|\\pmb{\\lambda}\\rangle}{\\sqrt{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}\\langle \\pmb{\\nu}|\\pmb{\\nu}\\rangle}e^{ix(P(\\pmb{\\nu})-P(\\pmb{\\mu}))}\\,,\n\\end{equation}\nwhere $\\pmb{\\nu}$ is a state with $N-1$ roots. In the low-density limit, according to \\eqref{ldff}, for the field form factors not to vanish one has to have $N-1$ roots $\\lambda_i$ with exactly one $\\nu_j$ around at a distance ${\\cal O}(L^{-1})$, leaving a $\\lambda_a$ without $\\nu_j$'s around. 
The same holds for the $\\mu_i$'s with respect to the $\\nu_j$'s. This implies that in the low-density limit, there is also exactly one $\\mu_i$ around each $\\lambda_j$ at a distance ${\\cal O}(L^{-1})$ for $j\\neq a$, leaving a $\\mu_a$ without $\\nu_j$'s nor $\\lambda_j$'s around. Since $\\pmb{\\mu}$ and $\\pmb{\\lambda}$ are an input of the problem, these $\\mu_a$ and $\\lambda_a$ are fixed by the choices of $\\pmb{\\lambda}$ and $\\pmb{\\mu}$ (they are not free parameters like in the two-point function case with $\\pmb{\\lambda}=\\pmb{\\mu}$). Then using \\eqref{ldff} we obtain the following low-density limit\n\\begin{equation}\n\\begin{aligned}\n&\\frac{\\langle \\pmb{\\mu}|\\psi^\\dagger( x,0) \\psi ( 0,0)|\\pmb{\\lambda}\\rangle}{\\sqrt{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}}\\underset{\\text{l.d.}}{\\sim} \\\\\n&\\qquad\\qquad\\frac{e^{-ix\\mu_a}}{L}\\sum_{\\pmb{\\nu}}\\prod_{j\\neq a}\\frac{2c}{\\sqrt{(\\lambda_j-\\lambda_a)^2+c^2}}\\frac{2c}{\\sqrt{(\\mu_j-\\mu_a)^2+c^2}} \\frac{1}{L(\\nu_j-\\lambda_j)}\\frac{e^{ix(\\nu_j-\\mu_j)}}{L(\\nu_j-\\mu_j)}\\,.\n\\end{aligned}\n\\end{equation}\nThe Bethe equations allow us to write\n\\begin{equation}\n\\begin{aligned}\n\\nu_j-\\lambda_j&\\underset{\\text{l.d.}}{\\sim} \\frac{2\\pi}{L}(n_j+\\alpha_j(\\lambda_a))\\\\\n\\nu_j-\\mu_j&\\underset{\\text{l.d.}}{\\sim} \\frac{2\\pi}{L}(n_j+\\alpha_j(\\mu_a)-p_j)\\,,\n\\end{aligned}\n\\end{equation}\nwith $n_j,p_j$ integers. The integer $p_j$ is related to $\\mu_j$ and $\\lambda_j$ through\n\\begin{equation}\n\\mu_j-\\lambda_j\\underset{\\text{l.d.}}{\\sim} \\frac{2\\pi}{L}(p_j+\\alpha_j(\\lambda_a)-\\alpha_j(\\mu_a))\\,,\n\\end{equation}\nwhich is a parameter of the problem that is not to be summed over. 
We obtain the following factorization\n\\begin{equation}\n\\begin{aligned}\n\\frac{\\langle \\pmb{\\mu}|\\psi^\\dagger( x,0) \\psi ( 0,0)|\\pmb{\\lambda}\\rangle}{\\sqrt{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}}\\underset{\\text{l.d.}}{\\sim} \\frac{e^{-ix\\mu_a}}{L}&\\prod_{j\\neq a}\\frac{c^2}{\\pi^2}\\frac{1}{\\sqrt{(\\lambda_j-\\lambda_a)^2+c^2}}\\frac{1}{\\sqrt{(\\lambda_j-\\mu_a+{\\cal O}(L^{-1}))^2+c^2}} \\\\\n&\\times\\sum_{n=-\\infty}^\\infty\\frac{e^{\\frac{2i \\pi x}{L}(n+\\alpha_j(\\mu_a)-p_j)}}{(n+\\alpha_j(\\lambda_a))(n+\\alpha_j(\\mu_a)-p_j)}\\,.\n\\end{aligned}\n\\end{equation}\nIn order to determine the thermodynamic limit of the field two-point function we need to carry out the sums over the integers, which reduce to sums of the type\n\\begin{equation}\n\\sum_{n\\in\\mathbb{Z}}\\frac{e^{i\\tfrac{w}{L}(n+\\alpha)}}{n+\\alpha}\\,.\n\\end{equation}\nWe notice that $\\sum_{n\\in\\mathbb{Z}}\\frac{e^{iWn}}{n+\\alpha}$ is the Fourier series of a $2\\pi$-periodic function of $W$, and\n\\begin{equation}\n\\int_{-\\pi}^\\pi \\frac{\\pi}{\\sin \\pi\\alpha}e^{i\\pi\\alpha\\,\\text{sgn}\\,(W)-iW\\alpha} e^{-iWn}\\D{W}=\\frac{2\\pi}{n+\\alpha}\\,.\n\\end{equation}\nHence\n\\begin{equation}\n\\sum_{n\\in\\mathbb{Z}}\\frac{e^{iW(n+\\alpha)}}{n+\\alpha}=\\frac{\\pi}{\\sin \\pi\\alpha}e^{i\\pi\\alpha\\,\\text{sgn}\\,(W)}\\,,\\qquad\\text{for }-\\pi<W<\\pi\\,.\n\\end{equation}\nThe resulting expression is the leading term in the expansion \\eqref{expdensitysigma}, and is the low-density limit of the dynamical correlations of the density.\n\n\\subsection{Bare particle-hole excitations\\label{which}}\nIn the spectral sum \\eqref{bigsumfield} the intermediate states $\\pmb{\\mu}$ can be seen as excited states above the averaging state $\\pmb{\\lambda}$. In this picture it is natural to \\textit{expand} the spectral sum \\eqref{bigsumfield} in terms of the number of \\textit{particle-hole excitations} that $\\pmb{\\mu}$ has over $\\pmb{\\lambda}$. 
Such an expansion consists in writing\n\\begin{equation}\n\\left\\langle \\sigma( x,t) \\sigma( 0,0) \\right\\rangle={\\cal D}^2+\\sum_{n\\geq 1}\\mathfrak{S}_n\n\\end{equation}\nwith\n\\begin{equation}\\label{bare}\n\\mathfrak{S}_n=\\sum _{ \\substack{\\pmb{\\mu}\\\\ |\\{I_a\\}\\cap \\{J_a\\}|=N-n}}\\frac {\\left| \\left\\langle \\pmb{\\mu} |\\sigma( 0) |\\pmb{\\lambda}\\right\\rangle \\right| ^{2}} {\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }e^{it\\left( E\\left( \\pmb{\\lambda}\\right) -E\\left( \\pmb{\\mu} \\right) \\right) +ix\\left( P\\left( \\pmb{\\mu}\\right) -P\\left( \\pmb{\\lambda}\\right) \\right) }\\,,\n\\end{equation}\nwhich is the spectral sum restricted to intermediate states $\\pmb{\\mu}$ that share $N-n$ Bethe numbers $J_a$ with the Bethe numbers $I_a$ of $\\pmb{\\lambda}$. Ideally, each individual $\\mathfrak{S}_n$ would have a well-defined and finite thermodynamic limit $L\\to\\infty$ that could be represented as a multiple integral over root densities and hole densities\n\\begin{equation}\\label{multip}\n\\begin{aligned}\n\\underbrace{\\int_{-\\infty}^\\infty...\\int_{-\\infty}^\\infty}_{2n}&F_n(\\lambda_1,\\mu_1,...,\\lambda_n,\\mu_n)e^{it(E(\\pmb{\\lambda})-E(\\pmb{\\mu}))+ix(P(\\pmb{\\mu})-P(\\pmb{\\lambda}))}\\\\\n&\\qquad\\qquad\\qquad\\times\\rho(\\lambda_1)\\rho_h(\\mu_1)...\\rho(\\lambda_n)\\rho_h(\\mu_n)\\D{\\lambda_1}\\D{\\mu_1}...\\D{\\lambda_n}\\D{\\mu_n}\\,.\n\\end{aligned}\n\\end{equation}\nThe function $F_n(\\lambda_1,\\mu_1,...,\\lambda_n,\\mu_n)$ appearing in this expression would thus be identified as a \\textit{thermodynamic form factor} for $n$ particle-hole excitations\\cite{deNP15,deNP16,DNP18,panfil20}. 
This idea was backed by calculations of\n\\begin{equation}\n\\underset{\\mu\\to\\lambda}{\\lim}\\,F_1(\\lambda,\\mu)\\,,\n\\end{equation}\nthat is finite and well-defined in the thermodynamic limit, by various means \\cite{deNP16,cortescuberopanfil2}.\n\nHowever, this picture seems to be a priori contradicted by results of \\cite{granetessler20}, where the thermodynamic limit of the spectral sum \\eqref{bigsumfield} was computed exactly at order $c^{-2}$. At this order, the spectral sum \\textit{exactly} truncates to one- and two-particle-hole excitations, hence to $\\mathfrak{S}_1$ and $\\mathfrak{S}_2$. But it was found that these two separate sums in the thermodynamic limit individually \\textit{diverge} and \\textit{depend on the representative state} $\\pmb{\\lambda}$ of the root density, at order $c^{-2}$. Namely we have\n\\begin{equation}\n\\begin{aligned}\\label{s1s2}\n&\\mathfrak{S}_1=LA_1+f_1(\\rho,\\gamma_{-2})+{\\cal O}(L^{-1})+{\\cal O}(c^{-3})\\\\\n&\\mathfrak{S}_2=LA_2+f_2(\\rho,\\gamma_{-2})+{\\cal O}(L^{-1})+{\\cal O}(c^{-3})\\,,\n\\end{aligned}\n\\end{equation}\nwith $A_{1,2}$ some reals of order $c^{-2}$ and $f_{1,2}(\\rho,\\gamma_{-2})$ functions of the root density and the pair distribution function (see \\cite{granetessler20} for a precise definition). However, their sum $\\mathfrak{S}_1+\\mathfrak{S}_2$ is not divergent and depends only on $\\rho$, as we expect. To fit in the general picture \\eqref{multip}, the only solution would be that these divergences in $L$ are an artefact of the $1\/c$ expansion, i.e. that the finite-size corrections to $\\mathfrak{S}_{1,2}$ in the thermodynamic limit would be e.g. of the form $\\varphi(L\/c)$ with a function $\\varphi(x)\\to 0$ when $x\\to \\pm\\infty$. 
In the framework of the low-density expansion, one can investigate this issue since the calculations are performed non-perturbatively in $1\/c$, and compute the thermodynamic limit of $\\mathfrak{S}_{1}$ for a finite fixed $c$, in the low-density limit.\\\\\n\n\nTo that end, let us use the expression \\eqref{lddensity} for the low-density limit of the form factor to write\n\\begin{equation}\\label{lowdensquare}\n\\begin{aligned}\n\\frac{|\\langle \\pmb{\\mu}|\\sigma(0)|\\pmb{\\lambda}\\rangle|^2}{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}\\underset{\\text{l.d.}}{\\sim} \\frac{1}{L^2}&\\prod_{j\\neq a}\\frac{c^2}{(\\lambda_j-\\lambda_a)^2+c^2}\\frac{c^2}{(\\lambda_j-\\mu_a)^2+c^2}\\frac{4(\\mu_a-\\lambda_a)^2}{c^2L^2(\\mu_j-\\lambda_j)^2}\\,,\n\\end{aligned}\n\\end{equation}\nand assume that $\\pmb{\\mu}$ only involves one particle-hole excitation above $\\pmb{\\lambda}$. Hence for all $j\\neq a$, the Bethe numbers $J_j$ of $\\pmb{\\mu}$ are equal to the Bethe numbers $I_j$ of $\\pmb{\\lambda}$. 
In the low-density limit we have then\n\\begin{equation}\n\\forall j\\neq a,\\qquad\\mu_j-\\lambda_j\\underset{\\text{l.d.}}{\\sim} \\frac{2\\pi}{L}(\\alpha_j(\\lambda_a)-\\alpha_j(\\mu_a))\\,.\n\\end{equation}\nIt follows that in the thermodynamic limit, the low-density form factor squared \\eqref{lowdensquare} becomes\n\\begin{equation}\n\\frac{|\\langle \\pmb{\\mu}|\\sigma(0)|\\pmb{\\lambda}\\rangle|^2}{\\langle \\pmb{\\mu}|\\pmb{\\mu}\\rangle \\langle \\pmb{\\lambda}|\\pmb{\\lambda}\\rangle}\\underset{\\text{l.d.}}{\\sim} \\frac{1}{L^2}\\exp[L\\phi(\\lambda_a,\\mu_a)+{\\cal O}(L^0)]\\,,\n\\end{equation}\nwith\n\\begin{equation}\\label{phi}\n\\phi(\\lambda,\\mu)=\\int_{-\\infty}^\\infty \\log \\left[\\frac{1}{c^2}\\frac{1}{1+\\tfrac{(\\nu-\\lambda)^2}{c^2}} \\frac{1}{1+\\tfrac{(\\nu-\\mu)^2}{c^2}}\\left( \\frac{\\mu-\\lambda}{\\arctan \\tfrac{\\mu-\\nu}{c}-\\arctan \\tfrac{\\lambda-\\nu}{c}}\\right)^2\\right] \\rho(\\nu)\\D{\\nu}\\,.\n\\end{equation}\nThe sign of the logarithm can be determined by applying the following inequality, valid for all real $u$,\n\\begin{equation}\\label{ineq}\nu^2\\geq \\sin^2 u\\,,\n\\end{equation}\nto the value\n\\begin{equation}\nu=\\arctan x-\\arctan y\\,,\n\\end{equation}\nfor real $x,y$. Indeed, using trigonometric relations, Eq.~\\eqref{ineq} is exactly\n\\begin{equation}\n\\left( \\frac{\\arctan x-\\arctan y}{x-y}\\right)^2\\geq \\arctan'(x)\\arctan'(y)\\,,\n\\end{equation}\nwith equality only for $x=y$. This allows us to deduce that the logarithm in \\eqref{phi} is always negative whenever $\\lambda\\neq \\mu$. 
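The equivalent form of the inequality, with arctan'(x) = 1/(1+x^2), can be spot-checked numerically. This is our own sketch (the sampling range and the cutoff excluding nearly equal points are arbitrary choices made to keep floating-point cancellation under control):

```python
import math
import random

random.seed(0)
for _ in range(10_000):
    x = random.uniform(-5.0, 5.0)
    y = random.uniform(-5.0, 5.0)
    if abs(x - y) < 0.01:
        continue  # equality holds only at x = y; skip the near-degenerate region
    lhs = ((math.atan(x) - math.atan(y)) / (x - y)) ** 2
    rhs = 1.0 / ((1.0 + x * x) * (1.0 + y * y))  # arctan'(x) * arctan'(y)
    assert lhs >= rhs * (1.0 - 1e-9)
```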
One concludes that, whenever $\lambda\neq \mu$, we have\n\begin{equation}\n\phi(\lambda,\mu)<0\,,\n\end{equation}\nwhile for $\lambda=\mu$ we have\n\begin{equation}\n\phi(\lambda,\lambda)=0\,.\n\end{equation}\nIt follows that in the framework of a multiple integral representation in terms of bare particle-hole excitations in the thermodynamic limit \eqref{multip}, the function $F_1(\lambda,\mu)$ would in fact be zero everywhere except at $\mu=\lambda$, where it takes the value $1$ as deduced from \eqref{lowdensquare}. Namely, we would have\n\begin{equation}\nF_1(\lambda,\mu)= \begin{cases}0\qquad \text{if }\lambda\neq \mu\\\n1\qquad \text{if }\lambda= \mu\n\end{cases}+{\cal O}({\cal D})\,.\n\end{equation}\n\n\nThis analysis shows that the integral representation \eqref{multip} in terms of 'bare' particle-hole excitations \eqref{bare} is singular. Even if the leading behaviour at large space and time of \eqref{multip} is indeed obtained when $\lambda_1=\mu_1$ in \eqref{multip} for $n=1$, where the function $F_1(\lambda,\mu)$ takes a finite value, the singular behaviour of this representation should be an obstacle to the computation of the subleading orders.\n\n\n\subsection{Dressed particle-hole excitations\label{which2}}\nLet us now investigate the nature of the states summed over to obtain the expression \eqref{densityde} in the low-density limit. The double integral over the root density and hole density accounts for one-particle-hole excitations with a \textit{macroscopic} amplitude (i.e. the difference between the Bethe numbers is ${\cal O}(L)$). The exponentials, however, arise from the product of a macroscopic number of one-dimensional sums over all the other remaining roots of the averaging states. These take into account an \textit{arbitrary} number of particle-hole excitations, but with only a \textit{microscopic} or \textit{mesoscopic} amplitude (i.e. 
the difference between the Bethe numbers can be ${\\cal O}(L^\\nu)$ for any $\\nu<1$). These configurations are represented in Figure \\ref{exfnp1k0}. They correspond to what is called a \\textit{dressed one-particle-hole excitation}. They entail expanding the spectral sum \\eqref{bigsumfield} according to\n\\begin{equation}\\label{dressedexp}\n\\left\\langle \\sigma( x,t) \\sigma( 0,0) \\right\\rangle={\\cal D}^2+\\sum_{n\\geq 1}\\mathfrak{S}^{\\rm dr}_n\\,,\n\\end{equation}\nwith\n\\begin{equation}\\label{sdr}\n\\sum_{m=1}^n\\mathfrak{S}^{\\rm dr}_m=\\sum _{ \\substack{\\pmb{\\mu}\\\\ \\exists \\tau\\text{ permutation of }\\{1,...,N\\},\\\\\\,\\# \\{a\\text{ s.t. }|I_a-J_{\\tau(a)}|={\\cal O}(L)\\} \\leq n}}\\frac {\\left| \\left\\langle \\pmb{\\mu} |\\sigma( 0) |\\pmb{\\lambda}\\right\\rangle \\right| ^{2}} {\\left\\langle \\pmb{\\lambda} \\left| \\pmb{\\lambda} \\right\\rangle \\left\\langle \\pmb{\\mu}\\right| \\pmb{\\mu}\\right\\rangle }e^{it\\left( E\\left( \\pmb{\\lambda}\\right) -E\\left( \\pmb{\\mu} \\right) \\right) +ix\\left( P\\left( \\pmb{\\mu}\\right) -P\\left( \\pmb{\\lambda}\\right) \\right) }\\,.\n\\end{equation}\n\n\n\\begin{figure}[H]\n\\begin{center}\n\\begin{tikzpicture}[scale=1]\n\\draw[->,blue] (3.5,0.25) arc (180: 0:2.5);\n\\draw[->,red] (0.5,0.25) arc (180: 0:0.25);\n\\draw[->,red] (5.,0.25) arc (180: 0:0.25);\n\\draw[->,red] (10.5,0.25) arc (0: 180:0.5);\n\\node at (-1,0) {.};\n\\draw[black] (-0.5,0) circle (3pt);\n\\node at (0,0) {.};\n\\draw[black] (0.5,0) circle (3pt);\n\\node at (1,0) {.};\n\\draw[black] (1.5,0) circle (3pt);\n\\draw[black] (2,0) circle (3pt);\n\\node at (2.5,0) {.};\n\\node at (3,0) {.};\n\\draw[black] (3.5,0) circle (3pt);\n\\draw[black] (4,0) circle (3pt);\n\\draw[black] (4.5,0) circle (3pt);\n\\draw[black] (5,0) circle (3pt);\n\\node at (5.5,0) {.};\n\\draw[black] (6,0) circle (3pt);\n\\node at (6.5,0) {.};\n\\node at (7,0) {.};\n\\draw[black] (7.5,0) circle (3pt);\n\\node at (8,0) {.};\n\\node at (8.5,0) 
{.};\n\draw[black] (9,0) circle (3pt);\n\node at (9.5,0) {.};\n\draw[black] (10,0) circle (3pt);\n\draw[black] (10.5,0) circle (3pt);\n\draw[black] (11,0) circle (3pt);\n\node at (11.5,0) {.};\n\draw[black] (12,0) circle (3pt);\n\node at (12.5,0) {.};\n\filldraw[black] (-0.5,0) circle (2pt);\n\filldraw[black] (1.,0) circle (2pt);\n\filldraw[black] (1.5,0) circle (2pt);\n\filldraw[black] (2,0) circle (2pt);\n\filldraw[black] (8.5,0) circle (2pt);\n\filldraw[black] (4,0) circle (2pt);\n\filldraw[black] (4.5,0) circle (2pt);\n\filldraw[black] (5.5,0) circle (2pt);\n\filldraw[black] (6,0) circle (2pt);\n\filldraw[black] (9,0) circle (2pt);\n\filldraw[black] (7.5,0) circle (2pt);\n\filldraw[black] (10,0) circle (2pt);\n\filldraw[black] (9.5,0) circle (2pt);\n\filldraw[black] (11,0) circle (2pt);\n\filldraw[black] (12,0) circle (2pt);\n\end{tikzpicture}\n\end{center}\n\caption{Sketch of a dressed one-particle-hole excitation: positions of the\nmomenta of the averaging state $\pmb{\lambda}$ (empty circles) and the\nintermediate state $\pmb{\mu}$ (filled circles) respectively, and positions of \nthe holes (dots). In red are indicated the 'soft modes' corresponding to microscopic excitations, and in blue the only macroscopic excitation.} \n\label{exfnp1k0}\n\end{figure}\n\nLoosely speaking, $\mathfrak{S}^{\rm dr}_n$ includes $n$ macroscopic particle-hole excitations and any number of microscopic particle-hole excitations. Since these configurations are not disjoint, one requires the expression \eqref{sdr} for a precise definition. In the low-density limit the full spectral sum truncates to $\mathfrak{S}^{\rm dr}_1$, and this expression is found to be finite and well-defined in the thermodynamic limit.\n\n\nWe note that these dressed particle-hole excitations are in principle what is computed in \cite{deNP15,deNP16,DNP18}. There, however, the 'soft modes' contribute as a numerical factor that multiplies the form factor. 
Here, within the low density expansion, it is seen in \\eqref{densityde} that these soft modes actually carry an $x$ and $t$ dependence as well. Namely we have a representation for $\\mathfrak{S}_n^{\\rm dr}$\n\\begin{equation}\n\\begin{aligned}\n\\underbrace{\\int_{-\\infty}^\\infty...\\int_{-\\infty}^\\infty}_{2n}&F_n^{x,t}(\\lambda_1,\\mu_1,...,\\lambda_n,\\mu_n)e^{it(E(\\pmb{\\lambda})-E(\\pmb{\\mu}))+ix(P(\\pmb{\\mu})-P(\\pmb{\\lambda}))}\\\\\n&\\qquad\\qquad\\qquad\\times\\rho(\\lambda_1)\\rho_h(\\mu_1)...\\rho(\\lambda_n)\\rho_h(\\mu_n)\\D{\\lambda_1}\\D{\\mu_1}...\\D{\\lambda_n}\\D{\\mu_n}\\,,\n\\end{aligned}\n\\end{equation}\nwhere the 'dressed thermodynamic form factor' $F_n^{x,t}(\\lambda_1,\\mu_1,...,\\lambda_n,\\mu_n)$ carries a $x,t$ dependence coming from the soft modes summation, although the soft modes do not modify the energy and momentum of the state in the thermodynamic limit. This dependence emerges from the double poles that lift to order $L^0$ some finite-size corrections. The function $F_1^{x,t}(\\lambda,\\mu)$ can be directly deduced from \\eqref{densityde}.\n\nFrom Sections \\ref{which} and \\ref{which2}, we conclude that the right expansion scheme of the spectral sum is an expansion in terms of dressed particle-hole excitations \\eqref{dressedexp}, in the sense that the truncated spectral sums $\\mathfrak{S}_n^{\\rm dr}$ are finite and well-defined in the thermodynamic limit, and are smooth functions of the macroscopic excited rapidities. Moreover, as shown in \\cite{granetessler20}, they admit a well-defined and uniform $1\/c$ expansion without any spurious divergences in $L$ coming from $c$-dependent finite-size effects. 
These two properties are not satisfied by the 'bare' particle-hole excitations expansion.\n\n\n\n\section{Summary and conclusion}\nWe computed the dynamical two-point functions of the field and density operators averaged within a state with a small particle density ${\cal D}$, given by Equations \eqref{field} and \eqref{densityde}. They are valid for an arbitrary interaction strength $c>0$ and for all space and time -- hence one can deduce the spectral function and dynamical structure factor in the full momentum-frequency plane, in the same regime of small ${\cal D}$. This low-density limit is defined as the leading term in the correlation functions obtained by decomposing the form factors into partial fractions. \n\nBesides the explicit expressions obtained in the low-density regime, this work also provides interesting insights into the nature of the states that contribute to the spectral sum, and into its possible expansions, as detailed in Sections \ref{which} and \ref{which2}. The low-density regime is indeed naturally interpreted as a single dressed particle-hole excitation, i.e. a macroscopic particle-hole excitation with an arbitrary number of microscopic particle-hole excitations. In contrast, an integral representation in terms of 'bare' particle-hole excitations in the thermodynamic limit is found to be singular, in the sense that the 'thermodynamic form factor' for one-particle-hole excitations is non-zero only in the limit where the amplitude of the macroscopic excitation vanishes. Hence this work indicates that the right expansion of the spectral sum is in terms of dressed particle-hole excitations, as already suggested by the $1\/c$ expansion developed in \cite{granetessler20}. Importantly, this dressing by microscopic particle-hole excitations also comes with an $x$ and $t$ dependence.\n\n\nThe PFD framework allows in principle for a computation of the next orders, which constitutes the most natural direction in which to extend this work. 
The fact that the next orders can indeed be computed with the PFD has been shown in \\cite{GFE20} for a model that can be reformulated in terms of free fermions. In the Lieb-Liniger case, the interaction introduces technical but not fundamental difficulties, and we hope to be able to pursue this program in following works. \n\n\n\n\\paragraph{Acknowledgements}\nWe thank Fabian Essler and Jacopo De Nardis for helpful discussions. This work was\nsupported by the EPSRC under grant EP\/S020527\/1.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbpxp b/data_all_eng_slimpj/shuffled/split2/finalzzbpxp new file mode 100644 index 0000000000000000000000000000000000000000..6dd1fcad4dd84b3d55398311ded2f6eabc3d7844 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbpxp @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nNamed entities are sequences of words that bring basic predefined semantic information that usually refers to locations, persons, organization\\ldots that can be denoted by proper nouns or that are unique in the real world, and they usually include numeric and temporal values. Named entities often constitute the first semantic bricks to extract in order to construct a structured semantic representation of a document content. \n\nNamed entity recognition (NER) is among SLU tasks that usually extract semantic information from textual documents. \nUntil now, NER from speech is made through a pipeline process that consists in processing first an automatic speech recognition (ASR) on the audio and then processing a NER on the ASR outputs.\nSuch approach has some disadvantages.\n\nFor instance, ASR errors have a negative impact on the NER performances, introducing noise within the text to be processed~\\cite{hatmi2013named}. Rule-based NER systems are usually built to process written language and are not robust to ASR errors. 
Machine-learning-based systems do not perform well when they are trained on perfect transcriptions and deployed to process ASR ones, even if this can be partially compensated by simulating ASR errors in the textual training data~\cite{simonnet:hal-01715923}.\nAdditionally, ASR systems are generally tuned to get the lowest word error rate on a validation corpus, but this metric is not optimal for the NER task. For instance, it does not distinguish between errors on verbs and on proper nouns, although such errors do not have the same impact on NER. To compensate for this problem, dedicated metrics to tune ASR systems for better NER performance have been proposed, such as in~\cite{jannet2017investigating}. Another inconvenience is that usually no information about named entities is used in the ASR process, while such information could help to better choose the partial recognition hypotheses that are dropped during the decoding process. As a consequence, even when confusion networks or word lattices are used to go beyond the 1-best ASR hypothesis for better robustness to ASR errors~\cite{hakkani2006beyond}, such search spaces have been pruned without taking knowledge about named entities into account.\n\nIn the past, an integrated approach built on a tight coupling of ASR and NER modules was proposed~\cite{servan2006conceptual}, based on the finite-state machine (FSM) paradigm (\textsl{i.e.} transducer composition), showing that such integration can offer significant improvements in terms of NER quality. The main limitation of this approach is the FSM paradigm itself, which is not able to natively model long-distance constraints without combinatorial explosion and which, by nature, can only express dependencies through a regular grammar. 
Another proposition to inject information about named entities into the ASR consists in directly adding some expressions of named entities to the ASR vocabulary~\cite{hatmi2013incorporating}, and in estimating a language model for speech recognition that takes these named entity expressions into account. The main drawback of such an approach is that it cannot detect named entities that were not injected into the ASR vocabulary.\n\nAll of these issues motivate our research work on a neural end-to-end approach to extract named entities from speech. \nIn this way, a joint optimization of both ASR and NER is possible from a NER task perspective, the architecture is more compact than the ones used in the usual pipeline, and we expect to benefit from the capacity of deep neural architectures to capture long-distance constraints at the sentence level. \nVery recently, a similar approach was proposed by Facebook in a paper posted on the arXiv.org website~\cite{serdyuk2018towards}. This end-to-end approach is dedicated to domain and intent classification tasks, and experiments were carried out on internal data close in spirit to the ATIS corpus, as expressed by the authors.\n\nIn this paper, we present a first study of an end-to-end approach to extract named entities.\nOur neural architecture is very similar to the Deep Speech 2 neural ASR system proposed by Baidu in~\cite{amodei2016deep}. To use it for named entity recognition, we apply multi-task training and modify the sequence of characters to be recognized from speech.\nExperiments were carried out on easily accessible, and thus reproducible, French data that were distributed in the framework of evaluation campaigns and are still available. \nThis paper is structured as follows. Section 2 describes the neural ASR architecture we used. Section 3 explains how we propose to exploit such a neural architecture for named entity extraction from speech. 
Section 4 presents some propositions to optimize the system and also to compensate for the lack of manually annotated audio data. Section 5 presents our experimental results, before the conclusion.\n\n\n\section{Model architecture}\n\nThe RNN architecture used in this study is similar to the Deep~Speech~2 neural ASR system proposed by Baidu in~\cite{amodei2016deep}. \nThis architecture is composed of $nc$ convolution layers (CNN), followed by $nr$ uni- or bidirectional recurrent layers, a lookahead convolution layer~\cite{wang2016lookahead}, and one fully connected layer just before the softmax layer, as shown in Figure~\ref{DeepSpeech}.\n\vspace{-0.2cm}\n\begin{figure}[!htbp]\n\begin{center}\n\includegraphics [scale=0.45]{DeepSpeech2}\n\caption{\label{DeepSpeech} Deep RNN architecture used to extract named entities from French speech.}\n\end{center}\n\end{figure}\n\vspace{-0.3cm}\nThe system is trained end-to-end using the CTC loss function~\cite{graves2006connectionist}, in order to predict a sequence of characters from the input audio. \nIn our experiments we used two CNN layers and six bidirectional recurrent layers with batch normalization, as mentioned in~\cite{amodei2016deep}. \n\nGiven an utterance $x^{i}$ and label $y^{i}$ sampled from a training set $X =\{(x^{1}, y^{1}), (x^{2}, y^{2}), \ldots\}$, the RNN architecture is trained to convert an input sequence $x^{i}$ into a final transcription $y^{i}$. For notational convenience, we drop the superscripts and use $x$ to denote a chosen utterance and $y$ the corresponding label.\n\nThe RNN takes as input an utterance $x$ represented by a sequence of log-spectrograms of power-normalized audio clips, calculated on 20~ms windows. As output, all the characters $l$ of a language alphabet may be emitted, in addition to the space character used to segment character sequences into word sequences (space denotes word boundaries).\n\nThe RNN makes a prediction $p(l_{t}|x)$ at each output time step $t$. 
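As a rough illustration of this input representation (a simplified NumPy sketch under our own assumptions about windowing and normalization, not the exact Deep Speech 2 front end):

```python
import numpy as np

def log_spectrogram(audio, sample_rate=16000, win_ms=20, hop_ms=10, eps=1e-10):
    """Log-spectrogram on 20 ms windows with a crude per-utterance normalization."""
    win = int(sample_rate * win_ms / 1000)   # 320 samples at 16 kHz
    hop = int(sample_rate * hop_ms / 1000)   # 160 samples at 16 kHz
    window = np.hanning(win)
    frames = np.stack([audio[i:i + win] * window
                       for i in range(0, len(audio) - win + 1, hop)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    feats = np.log(power + eps)
    # per-utterance mean/variance normalization (one possible choice)
    return (feats - feats.mean()) / (feats.std() + eps)

# One second of a 440 Hz tone -> 99 frames of 161 frequency bins
features = log_spectrogram(np.sin(2 * np.pi * 440 * np.arange(16000) / 16000))
```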
\n\nAt test time, the CTC model is coupled with a language model trained on a large textual corpus. A specialized beam search CTC decoder~\cite{hannun2014first} is used to find the transcription $y$ that maximizes:\n\begin{equation}\nQ(y)=\log p(y|x) + \alpha \log p_{LM}(y) + \beta\, wc(y)\n\end{equation}\nwhere $wc(y)$ is the number of words in the transcription $y$. The weight $\alpha$ controls the relative contributions of the language model and the CTC network. The weight $\beta$ controls the number of words in the transcription.\n\n\section{Named entity extraction process} \nIn the literature, many studies focus on named entity recognition from text. \nState-of-the-art systems are based on neural network architectures. Some of them rely heavily on hand-crafted features and domain-specific knowledge~\cite{collobert2011natural,chiu2015named}. Recent approaches~\cite{lample2016neural,ma2016end} benefit from both word- and\/or character-level embeddings learned automatically, by using combinations of bidirectional LSTM, CNN and CRF. \nHowever, named entity recognition from automatic transcriptions is less studied. \nThis task is performed through a pipeline process that consists in first applying automatic speech recognition (ASR) to the audio and then applying NER to the ASR outputs~\cite{raymond2013robust}. \nUsually, the named entity recognition task is to assign a named entity tag to every word in a sentence. A single named entity can span several words within a sentence. For this reason, the word-level begin-inside-outside (BIO) label encoding~\cite{Lance1995BIO} is very often adopted.\n\nIn this preliminary study, we focus on named entity extraction from speech using the network described above, without changing the neural architecture. \nWe would like to evaluate whether this neural architecture is able to capture high-level semantic information that allows it to recognize named entities. 
For that, we propose to modify the character sequence that the neural network has to produce: information about named entities is added to the initial character sequence.\nInstead of applying a BIO approach, we propose to add some tag characters to this sequence to delimit named entity boundaries, but also their category. \nWe are interested in eight NE categories: \textit{person}, \textit{function}, \textit{organization}, \textit{location}, \textit{production}, \textit{amount}, \textit{time} and \textit{event}. \n\nIn our experiments, the system attributes a tag ``$B_{NE}$'' or ``end'' only before and after the named entities; the other words are not concerned. \nTo distinguish the named entity category, we consider a begin tag for each NE category. Only one ``end'' tag is used for all the NE categories, considering that, since there is no overlap between named entities in such a representation, this information is sufficient to delimit the end of a named entity. \n\nAccording to the eight named entity categories targeted by the task, nine NE tags have to be added to the character list emitted by the neural network: ``$B_{pers}$'', ``$B_{func}$'', ``$B_{org}$'', ``$B_{loc}$'', ``$B_{prod}$'', ``$B_{amount}$'', ``$B_{time}$'', ``$B_{event}$'', and $``end''$. \nAs such a neural model predicts a character at each time step, we propose to map each of these nine NE tags to one single special character, respectively: ``['', ``('', ``\{'',``\$'', ``\&'', ``\%'', ``\#'', ``)'' and ``]'', as illustrated in Figure~\ref{fig:example}. \n\nIn this way, the NE tags are included in the prediction process, and are taken into account by the CTC loss function during the training process.\n\vspace{-0.2cm}\n\begin{figure*}[!htbp]\n\begin{center}\n\includegraphics [scale=0.45]{example}\n\caption{\label{fig:example} Example of mapping the real NE tags to a character sequence. 
This sentence means, in English and with proper casing: \"the sculptor Caesar died yesterday in Paris at the age of seventy-seven years\"}\n\vspace{-0.1cm}\n\end{center}\n\end{figure*}\n\section{Multi-task training, data augmentation, and starred mode} \n\label{sec:tricks}\n\nAudio recordings with both manual transcriptions and manual annotations of named entities are relatively rare, while neural end-to-end approaches are known to need large amounts of data to become competitive.\n\nTo compensate for this lack of data, we first propose to apply a multi-task learning approach to train the neural network. This consists in first training it only for the ASR task, without emitting the characters used to represent named entities, on all the audio recordings available with their manual transcriptions. At the end, the softmax layer is reinitialized to take into consideration the named entity tag markers, and a new training process is carried out on the named entity recognition task, with only the training data that have manual annotations of named entities.\n\nA second proposition consists in artificially increasing the training data for the named entity recognition task. For this purpose, we propose to apply a named entity recognition system dedicated to text data in order to tag the manual transcriptions used to train the ASR neural network. Then, these manual transcriptions automatically annotated with named entities can be injected into the training data used to train the neural network to extract named entities from speech.\n\nIn addition, since we want the system to focus on named entities, and since the CTC loss gives the same importance to each character, we propose to modify the character sequence that the neural network must emit in order to give more importance to named entities. This proposition is interesting to better understand how the CTC loss behaves in this case, and consists in replacing with a star ''*'' all character subsequences that do not contain a named entity. 
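A minimal token-level sketch of this transformation (illustrative Python; the one-character tag convention follows the mapping of Section 3, and the function name is ours):

```python
BEGIN_TAGS = set("[({$&%#)")   # one begin-tag character per NE category
END_TAG = "]"                  # single shared end tag

def star_outside_entities(tokens):
    """Collapse every maximal run of tokens outside named entities into one '*'."""
    out, inside, pending_star = [], False, False
    for tok in tokens:
        if tok in BEGIN_TAGS:
            if pending_star:          # flush the collapsed outside run
                out.append("*")
                pending_star = False
            inside = True
            out.append(tok)
        elif tok == END_TAG:
            out.append(tok)
            inside = False
        elif inside:                  # keep entity words verbatim
            out.append(tok)
        else:                         # outside an entity: mark for collapsing
            pending_star = True
    if pending_star:
        out.append("*")
    return out
```

Applied to the tagged transcription of the running example, this reproduces the starred sequence given in the text.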
For instance, the character sequence presented in Figure~\ref{fig:example} becomes: \textit{* [ c\'esar ] * \# hier ] * \$ paris ] * \% soixante dix sept ans ]}. We call this approach the starred mode, and we expect that it can make the neural model more sensitive to named entities.\n\vspace{-0.1cm}\n\section{Experiments}\n\subsection{Experimental setups}\n\label{sec:corpus}\nExperiments have been carried out on four different French corpora: ESTER 1\&2, ETAPE and Quaero.\nThese corpora are composed of data recorded from francophone radio and TV stations, and are annotated with named entities.\n\nThe ESTER corpora were divided into three parts: training, development and evaluation.\nThe ESTER 1 \cite{galliano2005ester} training (73 hours) and development (17 hours) corpora are composed of data recorded from four French-language radio stations. The ESTER 1 test corpus is composed of 10 hours coming from the same four radio stations plus two other stations, all recorded 15 months after the development data.\n\nThe ESTER 2 \cite{galliano2009ester} training corpus was not annotated with named entities and was not used in this study. The development (17 hours) and test (10 hours) sets are composed of manual transcriptions of speech recorded from six radio stations (two of those radio stations were already used in ESTER 1).\n\nThe ETAPE \cite{gravier2012etape} data consist of manual transcriptions and annotations of TV and radio shows. They contain 36 hours of speech, recorded between 2010 and 2011, divided into three parts: training (22 hours), development (7 hours) and test (7 hours).\n\nThe QUAERO (ELRA-S0349) data are composed of 12 hours of manual transcriptions of TV and radio shows coming from 6 different sources recorded in 2010.\n\nOur corpus, called DeepSUN, is the combination of these four corpora. 
\nThe training corpus is composed of the training sets of ESTER 1, ETAPE and QUAERO, while, the development and test sets are composed respectively of the development and test sets of ESTER 1\\&2, and ETAPE.\nIt contains almost 160 hours of speech (training 107 hours, test 24 hours, development 30 hours). \nThe distribution of named entities by categories in the corpus is summarized in Table \\ref{tab:deepsun_guideline}. \n\\vspace{-0.1cm}\n\\begin{table}[th]\n \\caption{Distribution of named entities by categories in the DeepSUN corpus}\n \\label{tab:deepsun_guideline}\n \\centering\n \\begin{tabular}{ cccc }\n \\toprule\n \\multicolumn{1}{c}{\\textbf{}} & \\multicolumn{3}{c}{\\textbf{DeepSUN}} \\\\\n \\multicolumn{1}{c}{\\textbf{category}} & \\multicolumn{1}{c}{\\textbf{dev}} & \\multicolumn{1}{c}{\\textbf{test}} & \\multicolumn{1}{c}{\\textbf{train}}\\\\\n \\midrule\n pers & 6719 & 4766 \t& 22115 \t~~~ \\\\\n func & 1830 \t& 1425 & 6628 \t~~~ \\\\\n org & 5133 \t& 3506 \t& 15804 ~~~ \\\\\n loc & 5195 \t& 3915 \t& 18159 ~~~ \\\\\n prod & 652 \t& 606 \t& 2317 \t~~~ \\\\\n time & 3763\t& 2769 \t& 12020 \t~~~ \\\\\n amount & 1591\t& 1450 \t& 5959 \t~~~ \\\\\n\tevent & 79 \t& 0 \t& 321 \t~~~\\\\\n\t \\midrule\n \\multicolumn{1}{c}{\\textbf{Sum}} &\n \t\t\\multicolumn{1}{c}{\\textbf{24962}} & \\multicolumn{1}{c}{\\textbf{18437}} & \\multicolumn{1}{c}{\\textbf{83323}} \\\\\n\t\\bottomrule\n\t\\end{tabular}\n\\end{table}\n\\vspace{-0.1cm}\n\n\n\n\n\n\n\n\n\n\n\n\nThe performance of our approach is evaluated in terms of precision (P), recall(R) and F-measure for named entity detection, the named entity\/value detection \nand the accuracy of the value detection when the named entities tags are correctly detected. 
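These span-level scores reduce to the usual definitions; for reference, a minimal sketch (illustrative Python, not the sclite implementation):

```python
def precision_recall_f(tp, fp, fn):
    """Precision, recall and F-measure from span-level counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure
```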
\nThese evaluations are made with the help of the \textit{sclite}\footnote{http:\/\/www.icsi.berkeley.edu\/Speech\/docs\/sctk-1.2\/sclite.htm} tool.\n\subsection{Multi-task training}\nFor multi-task training, we first train the E2E architecture only for the ASR task, without emitting the characters used to represent named entities. \nThe system is trained on all the audio recordings available with their manual transcriptions, around 297.7 hours of training data, including the data described above. \n\nIt is composed of two convolution layers and six BLSTM layers with batch normalization, and the number of epochs was set to $35$.\nThis system achieves 20.70\% word error rate (WER) and\n8.01\% character error rate (CER) on the dev corpus (30.2 hours), and 19.95\% WER and 7.68\% CER on the test set (40.8 hours). These results were obtained by applying CTC beam search decoding coupled with a trigram language model. \n Once this system is trained, the softmax layer is reinitialized to take into consideration the named entity tag markers, and a new training process is realized on the named entity recognition task, with only the training data with manual annotations of named entities described in Table~\ref{tab:deepsun_guideline}.\n\textbf{ In addition, for the training of both E2E and ASR systems, each training audio sample is randomly perturbed in gain and tempo at each iteration.}\n\n\n\subsection{Experimental results}\n\nWe present in this section some experimental results. Table~\ref{tab:4G_etiquettes} shows the performance of the end-to-end model (E2E) in detecting NE categories (among the eight ones). This means that in this evaluation we do not consider the values associated with the detected NEs. The starred mode is also evaluated and is denoted E2E* in the table: this mode provides better results on this task than the normal mode. 
\n\n\n\begin{table}[!htbp]\n\vspace{-0.1cm}\n \caption{Named entity category detection results for E2E and E2E* (starred mode) systems} \n \label{tab:4G_etiquettes}\n \centering\n\begin{tabular}{ |l|c|c|c|c| }\n \n \hline\n \multicolumn{1}{|c|}{\textbf{System}} &\textbf{Corpus} & \multicolumn{1}{|c|}{\textbf{Precision}} & \multicolumn{1}{|c|}{\textbf{Recall}} & \multicolumn{1}{|c|}{\textbf{F-measure}} \\\n \n \hline\n \hline\n \n \n E2E & dev & \textbf{0.85}&0.57& 0.68 \\\n\n \n \hline\n\tE2E & test & \textbf{0.83} & 0.52& 0.64 \\\n \hline\n \hline\n E2E* &dev & 0.75 & \bf{0.65}& \bf{0.71} \\\n \hline\n E2E* &test &0.82 & \bf{0.57}& \bf{0.67} \\\n\n\n \hline\n \end{tabular}\n\end{table}\n\n \n\vspace{-0.1cm}\nTable~\ref{tab:4G_val_etiquettes} evaluates the quality of the category\/value pairs that have been recognized. While precision and recall do not behave the same way in the normal and starred modes, both modes get the same F-measure value. 
\n\n\n\\begin{table}[!htbp]\n \\caption{Named entity category+value pair detection results for E2E and E2E* systems} \n \\label{tab:4G_val_etiquettes}\n \\centering\n \\begin{tabular}{ |l|c|c|c|c|}\n \n \\hline\n \\multicolumn{1}{|c|}{\\textbf{System}} &\\textbf{Corpus} & \\multicolumn{1}{|c|}{\\textbf{Precision}} & \\multicolumn{1}{|c|}{\\textbf{Recall}} & \\multicolumn{1}{|c|}{\\textbf{F-measure}} \\\\\n \n \\hline\n \\hline\n E2E & dev & \\bf0.64 & 0.45& 0.53 \\\\\n E2E& test & \\bf0.55 & 0.36& 0.44 \\\\\n \\hline\n \\hline\n E2E* &dev & 0.57 & \\bf{0.47} & 0.52 \\\\\n E2E* &test & 0.47& \\bf{0.38}& 0.42 \\\\\n\n \\hline\n \\end{tabular}\n \n\\end{table}\n\n\n\n\n\n\n\nLast, we would like to compare these results to the ones obtained by a pipeline process, that consists in applying a text named entity recognition on the automatic transcripts produced by the end-to-end ASR system trained on the first step of the multi-task learning presented above.\n\nThe text named entity recognition system used for this experiment is based on the combination of bi-directional LSTM (BLSTM), CNN and CRF modules~\\cite{ma2016end}, and takes benefits from both word and character-level embeddings learned automatically during the training process. For this experiment, we used the NeuroNLP2 implementation\\footnote{https:\/\/github.com\/XuezheMax\/NeuroNLP2}.\nConvolutional neural network encodes character-level information of a word into its character-level embedding. Then the character-and word-level embeddings are fed into the BLSTM to model context information of each word. On top of BLSTM, the sequential CRF is used to jointly decode labels for the whole sentence. \nIn addition, this system can be enriched with syntactic information like part of speech tagging (POS).\nIn our experiment, NeuroNLP2 is used as a NER system and Deep Speech 2 as the ASR system. 
Both are trained on the DeepSUN corpus described in section~\ref{sec:corpus}.\nAutomatic transcriptions of the dev and test data have been annotated with the NER system. To measure the impact of POS, we used the MACAON system~\cite{nasr2011macaon} to tag the DeepSUN corpus and the manual and automatic transcriptions. \n\nTo feed NeuroNLP2, one-hot vectors represent the POS information. Word embeddings, character representations and one-hot vectors are concatenated and fed to the BLSTM layer.\nAs we can see in Table~\ref{tab:neuro_etiquettes_seules}, the pipeline process is less competitive than the end-to-end model at recognizing NE categories, but is more efficient at extracting NE values. The results also confirm that linguistic information like POS is really important for the NER task. This observation will help future work in the continuity of this study.\n\n\n\begin{table}[!htbp]\n \caption{NER results for the pipeline approach (Pip) on the \textbf{test} data. When POS are used to tag ASR outputs before NER processing, the system is called Pip+POS} \n \label{tab:neuro_etiquettes_seules}\n \centering\n \begin{tabular}{| l|l|c|c|c| }\n \hline\n \n \multicolumn{1}{|c|}{\textbf{System}} &\textbf{Detection} &\multicolumn{1}{|c|}{\textbf{Precision}} & \multicolumn{1}{|c|}{\textbf{Recall}} & \multicolumn{1}{|c|}{\textbf{F-measure}} \\\n \n \hline\n \hline\n Pip & category & \bf0.75 & 0.56 & 0.64 \\\n Pip+POS &category & 0.74 & \bf0.58 & \bf0.65 \\\n\n \hline\n Pip &cat+value & \bf0.58 & 0.43 & 0.49 \\\n Pip+POS &cat+value & 0.57 & \bf0.45 & \bf0.50 \\\n\t\hline\n\n \end{tabular}\n \vspace{-0.5cm}\n\end{table}\n\n\n\n\n\n\n\n\nAs described in section~\ref{sec:tricks}, we applied NeuroNLP2 (the version using POS tagging) on the manual transcriptions of the ASR training data in order to augment the amount of ''NER from speech'' training data. In this experiment, the normal and starred modes were used. 
Table~\\ref{tab:e2e_augment} shows the improvement obtained by the end-to-end system when trained on these imperfect augmented data using the normal (E2E+) and the starred (E2E+*) modes. As we can see, the use of the augmented data was helpful for the starred mode.\n\n\\begin{table}[!htbp]\n \\caption{NER results on the \\textbf{test} data for the E2E system trained with imperfect augmented data (E2E+) in comparison to the E2E system trained with imperfect augmented data and the starred mode (E2E+*)} \n \\label{tab:e2e_augment}\n \\centering\n \\begin{tabular}{ |l|l|c|c|c| }\n \\hline\n \\multicolumn{1}{|c|}{\\textbf{System}} &\\textbf{Detection} & \\multicolumn{1}{|c|}{\\textbf{Precision}} & \\multicolumn{1}{|c|}{\\textbf{Recall}} & \\multicolumn{1}{|c|}{\\textbf{F-measure}} \\\\\n \\hline\n \\hline\n E2E+ & category & \\bf0.82 & \\bf0.57 & 0.67 \\\\\n E2E+* & category & 0.76 & 0.63 & \\bf0.69 \\\\\n \\hline\n E2E+ & cat+value & \\bf0.55 & 0.40 & 0.46 \\\\\n E2E+* & cat+value & 0.49 & \\bf0.41 & \\bf0.47 \\\\\n \\hline\n \\end{tabular}\n \\vspace{-0.5cm}\n\\end{table}\n\n\n\n\n\\section{Conclusion}\nThis paper presents a first study of end-to-end named entity extraction from speech. By inserting into the character sequence emitted by a CTC end-to-end speech recognition system special characters that delimit and categorize named entities, we showed that such extraction is feasible.\nTo compensate for the lack of training data, we propose a multi-task learning approach (ASR + NER) in addition to an artificial augmentation of the training corpus with automatic annotation of named entities.\nA starred mode is also proposed to make the neural network focus more on named entities.\nExperimental results show that this end-to-end approach, in starred mode with training data augmentation, provides better results (F-measure of 0.69 on test) than a pipeline approach for the detection of named entity categories (F-measure of 0.64).
On the other hand, the performance of this end-to-end approach at extracting named entity values is worse than that of the pipeline process.\n\nIn conclusion, this study presents promising results in a first attempt at an end-to-end approach to extract named entities, and constitutes an interesting starting point for future work, which could begin by combining the starred mode with training data augmentation, but also explore other directions, such as injecting linguistic information into the end-to-end neural architecture.\n\\vspace{-0.2cm}\n\\section{Acknowledgements}\n\\small\nThis work was supported by the French ANR Agency through the CHIST-ERA M2CR project, under the contract number ANR-15-CHR2-0006-01, and by the RFI Atlanstic2020 RAPACE project.\nThe authors would like to sincerely thank Sean Naren for making his implementation of Deep Speech 2 available, as well as the contributors to the NeuroNLP2 project.\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nSoft anomalous dimensions are fundamental field-theoretical functions that are essential in determining soft-gluon contributions to partonic scattering cross sections (for a review see Ref. \\cite{NKtop}). Such cross sections factorize in Laplace or Mellin moment space into products of hard, soft, and jet functions.\n\nThe soft function, $S$, is in general a matrix in color space and it satisfies the renormalization group equation \\cite{NKGS}\n\\begin{equation}\n\\left(\\mu \\frac{\\partial}{\\partial \\mu}\n+\\beta(g_s)\\frac{\\partial}{\\partial g_s}\\right)\\,S\n=-\\Gamma^\\dagger_S \\; S-S \\; \\Gamma_S\n\\end{equation}\nwith soft anomalous dimension $\\Gamma_S$, also a matrix.
The evolution of the soft function results in the exponentiation (resummation) of logarithms of the moment variable.\nResummation at NNLL accuracy requires two-loop soft anomalous dimensions while at N$^3$LL accuracy it requires three-loop soft anomalous dimensions.\n\nIn a top-quark production partonic process, $f_{1} + f_{2} \\rightarrow t + X$, \nwe define a threshold variable $s_4$ that measures distance from partonic threshold. When the resummed cross section is inverted to momentum space, the soft-gluon corrections involve logarithms of $s_4$, but a prescription is needed for all-order results to deal with infrared divergences. Such resummed results are strongly prescription-dependent, and some prescriptions give poor results by underestimating the true size of the corrections. Alternatively, finite-order expansions can be performed with better control over subleading effects and no prescriptions. Expansions to second and third order (with matching with lower-order complete results) provide approximate NNLO (aNNLO) and N$^3$LO (aN$^3$LO) predictions which are state-of-the-art. As we approach partonic threshold, $s_4 \\rightarrow 0$, there is diminishing energy left for additional gluon radiation, and the contributions from soft-gluon emission are dominant and they provide excellent predictions for the high-order corrections.\n\nIn the next section, we present results through three loops for the cusp anomalous dimension, which is the simplest soft anomalous dimension. We consider the case when both eikonal lines are massive and the case when one line is massive and one is massless. 
In Section 3, we present results through three loops for the soft anomalous dimensions in single-top production processes, in processes with a top quark in new physics models, and in top-antitop pair production.\n\n\\section{Three-loop cusp anomalous dimension}\n\nA basic ingredient of soft anomalous dimensions for partonic processes is the cusp anomalous dimension \\cite{KR,NK2loop,GHKM,NK3lcusp,NK3loop} (see Fig. \\ref{cusp} for representative eikonal diagrams at one, two, and three loops).\nThe cusp angle is defined by $\\theta=\\cosh^{-1}(p_i\\cdot p_j\/\\sqrt{p_i^2 p_j^2})$, in terms of the momenta of eikonal lines $i$ and $j$, and the perturbative expansion for the cusp anomalous dimension is $\\Gamma_{\\rm cusp}=\\sum_{n=1}^{\\infty} \\left(\\frac{\\alpha_s}{\\pi}\\right)^n \\Gamma^{(n)}_{\\rm cusp}$. When the lines represent a top and an antitop, then the cusp anomalous dimension can be thought of as the soft anomalous dimension for $e^+ e^- \\rightarrow t{\\bar t}$. From the UV poles of the loop diagrams in dimensional regularization we calculate results at each order. 
\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.3\\textwidth]{loop1.pdf}\n\\hspace{4mm}\n\\includegraphics[width=0.3\\textwidth]{loop2.pdf}\n\\hspace{7mm}\n\\includegraphics[width=0.3\\textwidth]{loop3.pdf}\n\\caption{Representative eikonal diagrams for the cusp anomalous dimension at one, two, and three loops.}\n\\label{cusp}\n\\end{center}\n\\end{figure}\n\nAt one loop,\n\\begin{equation}\n\\Gamma_{\\rm cusp}^{(1)}=C_F (\\theta \\coth\\theta-1) \\, ,\n\\end{equation}\nwhere $C_F=(N^2-1)\/(2N)$, with $N$ the number of colors.\nIn terms of the heavy-quark speed $\\beta=\\tanh(\\theta\/2)$, we have\n$\\theta=\\ln\\left(\\frac{1+\\beta}{1-\\beta}\\right)$\nand \n\\begin{equation}\n\\Gamma_{\\rm cusp}^{(1)}=C_F\\left[-\\frac{(1+\\beta^2)}{2\\beta}\n\\ln\\frac{(1-\\beta)}{(1+\\beta)}-1\\right] \\, .\n\\end{equation}\n\nAt two loops \\cite{NK2loop},\n\\begin{eqnarray}\n\\Gamma_{\\rm cusp}^{(2)}&=&K^{'(2)} \\, \\Gamma_{\\rm cusp}^{(1)}\n+\\frac{1}{2}C_F C_A \\left\\{1+\\zeta_2+\\theta^2 \n-\\coth\\theta\\left[\\zeta_2\\theta+\\theta^2\n+\\frac{\\theta^3}{3}+{\\rm Li}_2\\left(1-e^{-2\\theta}\\right)\\right] \\right. 
\n\\nonumber \\\\ && \\hspace{29mm} \\left.\n{}+\\coth^2\\theta\\left[-\\zeta_3+\\zeta_2\\theta+\\frac{\\theta^3}{3}\n+\\theta \\, {\\rm Li}_2\\left(e^{-2\\theta}\\right)\n+{\\rm Li}_3\\left(e^{-2\\theta}\\right)\\right] \\right\\}\n\\end{eqnarray}\nwhere $K^{'(2)}=K^{(2)}\/C_F=C_A (67\/36-\\zeta_2\/2)-5 n_f\/18$, with $C_A=N$ and $n_f$ the number of light quark flavors.\n\nAt three loops \\cite{GHKM,NK3lcusp},\n\\begin{equation}\n\\Gamma_{\\rm cusp}^{(3)}= K^{'(3)} \\Gamma_{\\rm cusp}^{(1)}\n+2 K^{'(2)} \\left[\\Gamma_{\\rm cusp}^{(2)}-K^{'(2)}\\Gamma_{\\rm cusp}^{(1)}\\right]\n+C_{\\rm cusp}^{(3)} \n\\label{3lc}\n\\end{equation}\nwhere $K^{'(3)}=K^{(3)}\/C_F$ is a lengthy expression with color terms and constants (note that in general $K^{(n)}$ denotes the proportionality factor of the $n$-loop cusp anomalous dimension in the massless limit with $\\theta$). The expression for $C_{\\rm cusp}^{(3)}$ is very long (see \\cite{NK3lcusp} for details). For top-quark production $n_f=5$, and a simple numerical expression is \\cite{NK3lcusp}\n\\begin{equation}\n\\Gamma_{\\rm cusp}^{(3)\\, \\rm approx}(\\beta)=\n0.09221 \\, \\beta^2+2.80322 \\; \\Gamma_{\\rm cusp}^{(1)}(\\beta) \\, .\n\\label{3lca}\n\\end{equation}\n\nFor the case where one eikonal line is massive and one is massless we find simpler expressions for the cusp anomalous dimension, which we now denote as $\\Gamma_c$ to distinguish it from the fully massive case. 
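As a quick numerical consistency check (a sketch for orientation, not part of the original calculation), the two parametrizations of $\\Gamma_{\\rm cusp}^{(1)}$ given above, in terms of the cusp angle $\\theta$ and of the heavy-quark speed $\\beta=\\tanh(\\theta\/2)$, can be verified to agree:

```python
import math

CF = 4.0 / 3.0  # C_F = (N^2 - 1)/(2N) for N = 3 colors

def gamma1_theta(theta):
    # Gamma_cusp^(1) = C_F (theta coth(theta) - 1)
    return CF * (theta / math.tanh(theta) - 1.0)

def gamma1_beta(beta):
    # Same quantity written in terms of beta = tanh(theta/2)
    return CF * (-(1.0 + beta * beta) / (2.0 * beta)
                 * math.log((1.0 - beta) / (1.0 + beta)) - 1.0)

# With theta = ln((1+beta)/(1-beta)) the two forms are identical.
for beta in (0.1, 0.5, 0.9):
    theta = math.log((1.0 + beta) / (1.0 - beta))
    assert abs(gamma1_theta(theta) - gamma1_beta(beta)) < 1e-12
```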
If eikonal line $i$ represents a massive quark and eikonal line $j$ a massless quark, then \n\\begin{equation}\n\\Gamma_c^{(1)}=C_F \\left[\\ln\\left(\\frac{2 p_i \\cdot p_j}{m_i \\sqrt{s}}\\right)\n-\\frac{1}{2}\\right] \\, ,\n\\end{equation}\n\\begin{equation}\n\\Gamma^{(2)}_c=K^{(2)} \\left[\\ln\\left(\\frac{2 p_i \\cdot p_j}{m_i \\sqrt{s}}\\right\n) -\\frac{1}{2} \\right]+\\frac{1}{4} C_F C_A (1-\\zeta_3) \\, ,\n\\end{equation}\n\\begin{eqnarray}\n\\Gamma^{(3)}_c&=&K^{(3)} \\left[\\ln\\left(\\frac{2 p_i \\cdot p_j}{m_i \\sqrt{s}}\\right)-\\frac{1}{2}\\right]+\\frac{1}{2} K^{(2)} C_A (1-\\zeta_3)\n\\nonumber \\\\ &&\n{}+C_F C_A^2\\left[-\\frac{1}{4}+\\frac{3}{8}\\zeta_2-\\frac{\\zeta_3}{8}-\\frac{3}{8}\\zeta_2 \\zeta_3+\\frac{9}{16} \\zeta_5\\right] \\, .\n\\end{eqnarray}\n\nThe cusp anomalous dimension is an essential component in calculations of soft anomalous dimension matrices for general partonic processes. In the fully massless case, two-loop soft anomalous dimension matrices are proportional to the one-loop quantity \\cite{ADS}; however, when masses are present this is no longer the case. We also note that three-parton correlations with at least two massless lines vanish at any order due to constraints from scaling symmetry \\cite{BN,GM}. However, four-parton correlations at three loops and beyond do not necessarily vanish even in the massless case \\cite{ADG}. 
We will show explicit results at one, two, and three loops for various top-quark processes in the next section.\n\n\\section{Soft anomalous dimensions for top-quark processes}\n\nWe now present results for the soft anomalous dimensions through three loops for a variety of top-quark production processes.\n\n\\subsection{Single-top-quark production}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.8\\textwidth]{singletop2019plot.pdf}\n\\hspace{4mm}\n\\caption{Theoretical single-top cross sections for the $t$ and $s$ channels at aNNLO \\cite{NKtop,NKsingletop} and for $tW$ production at aN$^3$LO \\cite{NKtW}, using MMHT2014 NNLO pdf \\cite{MMHT2014}, at LHC energies compared with LHC data \\cite{ATLAS13tch,CMS13tch,LHC7and8singletop,ATLAS13tW,CMS13tW}.}\n\\label{singletop}\n\\end{center}\n\\end{figure}\n\nWe begin with single-top-quark production \\cite{NKtop,NK3loop,NKst,NKsingletop,NKtW}. Before presenting the analytical form of the soft anomalous dimensions, we show in Fig. \\ref{singletop} theoretical predictions at NNLL accuracy for the cross sections at LHC energies. The soft-gluon corrections are important and the theoretical predictions are in very good agreement with the data. 
\n\n\\subsubsection{Single-top $t$-channel production}\n\nThe partonic processes in single-top $t$-channel production are \n$b(p_1)+q(p_2) \\rightarrow t(p_3) +q'(p_4)$.\nWe define the usual partonic kinematical variables \n$s=(p_1+p_2)^2$, $t=(p_1-p_3)^2$, and $u=(p_2-p_3)^2$, and \nthe threshold variable $s_4=s+t+u-m_t^2$.\nWe choose a singlet-octet $t$-channel color basis $c_1=\\delta_{13} \n\\delta_{24}$ and $c_2=T^c_{31} T^c_{42}$.\nThe soft anomalous dimension is a $2\\times 2$ matrix.\n\nAt one loop, the four matrix elements are \\cite{NKtop,NKst,NKsingletop}\n\\begin{eqnarray}\n&&{\\Gamma}_{S\\, 11}^{t \\, (1)}=\nC_F \\left[\\ln\\left(\\frac{t(t-m_t^2)}{m_t s^{3\/2}}\\right)-\\frac{1}{2}\\right] \\, , \n\\quad\n{\\Gamma}_{S\\, 12}^{t \\, (1)}=\\frac{C_F}{2N} \\ln\\left(\\frac{u(u-m_t^2)}{s(s-m_t^2)}\\right)\n \\, , \n\\quad\n{\\Gamma}_{S\\, 21}^{t \\, (1)}= \\ln\\left(\\frac{u(u-m_t^2)}{s(s-m_t^2)}\\right) \\, ,\n\\nonumber \\\\ &&\n{\\Gamma}_{S\\, 22}^{t \\, (1)}= C_F \\left[\\ln\\left(\\frac{t(t-m_t^2)}{m_t s^{3\/2}}\\right)-\\frac{1}{2}\\right]-\\frac{1}{N}\\ln\\left(\\frac{u(u-m_t^2)}{s(s-m_t^2)}\\right) \n+\\frac{N}{2}\\ln\\left(\\frac{u(u-m_t^2)}{t(t-m_t^2)}\\right) \\, .\n\\end{eqnarray}\n\nAt two loops, we find \\cite{NK3loop,NKsingletop}\n\\begin{eqnarray}\n&&\\Gamma_{S\\,11}^{t \\, (2)}= K^{'(2)} \\Gamma_{S\\,11}^{t \\, (1)}+\\frac{1}{4} C_F C_A (1-\\zeta_3)\\, , \n\\hspace{8mm}\n\\Gamma_{S\\,12}^{t \\, (2)}= K^{'(2)} \\Gamma_{S\\,12}^{t \\, (1)} \\, ,\n\\nonumber \\\\ &&\n\\Gamma_{S\\,21}^{t \\, (2)}= K^{'(2)} \\Gamma_{S\\,21}^{t \\, (1)} \\, , \\hspace{8mm}\n\\Gamma_{S\\,22}^{t \\, (2)}= K^{'(2)} \\Gamma_{S\\,22}^{t \\, (1)}+\\frac{1}{4} C_F C_A (1-\\zeta_3) \\, .\n\\end{eqnarray}\n\nAt three loops, we only need the first element of the matrix for N$^3$LL resummation, and to calculate the N$^3$LO soft-gluon corrections. 
We have \\cite{NK3loop}\n\\begin{equation}\n\\Gamma_{S\\,11}^{t \\, (3)}= K^{'(3)} \\Gamma_{S\\,11}^{t \\, (1)}\n+ \\frac{1}{2} K^{(2)} C_A (1-\\zeta_3) \n+C_F C_A^2\\left[-\\frac{1}{4}+\\frac{3}{8}\\zeta_2-\\frac{\\zeta_3}{8}\n-\\frac{3}{8}\\zeta_2 \\zeta_3+\\frac{9}{16} \\zeta_5\\right] \\, .\n\\end{equation}\nDue to the simple structure of the leading-order hard matrix, the other three matrix elements of $\\Gamma_S^t$ at three loops do not contribute to the N$^3$LO corrections. We can still provide expressions for those three matrix elements, up to four-parton correlations, as discussed in the context of $s$-channel production below.\n\n\\subsubsection{Single-top $s$-channel production}\n\nThe partonic processes in single-top $s$-channel production are \n$q(p_1)+{\\bar q}'(p_2) \\rightarrow t(p_3) +{\\bar b}(p_4)$.\nWe again have a $2\\times 2$ soft anomalous dimension matrix, and we choose a singlet-octet $s$-channel color basis, $c_1=\\delta_{12} \\delta_{34}$ and \n$c_2=T^c_{21} T^c_{34}$.\n\nAt one loop, the four matrix elements are \\cite{NKtop,NKst,NKsingletop}\n\\begin{eqnarray}\n&&\\Gamma_{S\\, 11}^{s \\, (1)}=C_F \\left[\\ln\\left(\\frac{s-m_t^2}{m_t\\sqrt{s}}\\right)\n-\\frac{1}{2}\\right] \\, ,\n\\quad\n\\Gamma_{S\\, 12}^{s \\, (1)}=\\frac{C_F}{2N} \\ln\\left(\\frac{t(t-m_t^2)}{u(u-m_t^2)}\\right) \\, , \n\\quad\n\\Gamma_{S\\, 21}^{s \\, (1)}= \\ln\\left(\\frac{t(t-m_t^2)}{u(u-m_t^2)}\\right) \\, ,\n\\nonumber \\\\ &&\n\\Gamma_{S\\, 22}^{s \\, (1)}=C_F \\left[\\ln\\left(\\frac{s-m_t^2}{m_t \\sqrt{s}}\\right)-\\frac{1}{2}\\right]-\\frac{1}{N}\\ln\\left(\\frac{t(t-m_t^2)}{u(u-m_t^2)}\\right)\n+\\frac{N}{2} \\ln\\left(\\frac{t(t-m_t^2)}{s(s-m_t^2)}\\right) \\, .\n\\end{eqnarray}\n\nAt two loops, we have \\cite{NK3loop,NKsingletop}\n\\begin{eqnarray}\n&&\\Gamma_{S\\, 11}^{s\\, (2)}=K^{'(2)} \\Gamma_{S\\,11}^{s \\, (1)}+\\frac{1}{4} C_F C_A (1-\\zeta_3) \\, , \\hspace{8mm}\n\\Gamma_{S\\,12}^{s \\, (2)}=K^{'(2)} \\Gamma_{S\\,12}^{s \\, (1)} \\, 
,\n\\nonumber \\\\ &&\n\\Gamma_{S\\,21}^{s \\, (2)}= K^{'(2)} \\Gamma_{S\\,21}^{s \\, (1)} \\, , \\hspace{8mm}\n\\Gamma_{S\\,22}^{s \\, (2)}= K^{'(2)} \\Gamma_{S\\,22}^{s \\, (1)}+\\frac{1}{4} C_F C_A (1-\\zeta_3) \\, .\n\\label{sch2l}\n\\end{eqnarray}\n\nAt three loops, again we only need the first element of the matrix for N$^3$LL resummation and N$^3$LO corrections. We find \\cite{NK3loop}\n\\begin{equation}\n\\Gamma_{S\\, 11}^{s \\, (3)}= K^{'(3)} \\Gamma_{S\\, 11}^{s \\, (1)}\n+\\frac{1}{2} K^{(2)} C_A (1-\\zeta_3) +C_F C_A^2\\left[-\\frac{1}{4}+\\frac{3}{8}\\zeta_2-\\frac{\\zeta_3}{8}\n-\\frac{3}{8}\\zeta_2 \\zeta_3+\\frac{9}{16} \\zeta_5\\right] \\, .\n\\label{sch3l}\n\\end{equation}\nThe other three matrix elements of $\\Gamma_S^s$ at three loops are not fully known. However, we can see that the structure of the results will be similar to those at two loops.\nThe three-loop off-diagonal matrix elements will have a similar form as at two loops [replace all two-loop terms, denoted by superscript (2), in Eq. (\\ref{sch2l}) by the corresponding three-loop terms, denoted by superscript (3)] while $\\Gamma_{S\\, 22}^{s \\, (3)}$ should have the form of Eq. (\\ref{sch3l}) [just replace the element subscript 11 with 22] up to four-parton correlations. 
The same also applies to the $t$-channel results as indicated previously.\n\n\\subsubsection{Associated $tW$ production}\n\nIn $tW$ production, the partonic process is\n\\begin{equation}\nb(p_1)+g(p_2) \\rightarrow t(p_3) +W^-(p_4) \\, .\n\\end{equation}\nIn this case the soft anomalous dimension is a simple function (not a matrix).\n\nAt one loop, we have \\cite{NKst,NKsingletop}\n\\begin{equation}\n\\Gamma_S^{tW \\, (1)}=C_F \\left[\\ln\\left(\\frac{m_t^2-t}{m_t\\sqrt{s}}\\right)\n-\\frac{1}{2}\\right] +\\frac{C_A}{2} \\ln\\left(\\frac{u-m_t^2}{t-m_t^2}\\right)\\, .\n\\end{equation}\n\nAt two loops, we have \\cite{NKsingletop}\n\\begin{equation}\n\\Gamma_S^{tW (2)}=K^{'(2)} \\Gamma_S^{tW \\, (1)}\n+\\frac{1}{4}C_F C_A (1-\\zeta_3) \\, .\n\\end{equation}\n\nAt three loops, we have \\cite{NK3loop}\n\\begin{equation}\n\\Gamma_S^{tW \\, (3)}=K^{'(3)} \\Gamma_S^{tW \\, (1)}+\\frac{1}{2} K^{(2)} C_A (1-\\zeta_3)+C_F C_A^2\\left[-\\frac{1}{4}+\\frac{3}{8}\\zeta_2-\\frac{\\zeta_3}{8}\n-\\frac{3}{8}\\zeta_2 \\zeta_3+\\frac{9}{16} \\zeta_5\\right] \\, .\n\\end{equation}\n\n\\subsection{$tZ$, $tZ'$, $t \\gamma$, and $tH^-$ production}\n\nTop quarks can also be produced in association with electroweak and Higgs bosons in models of new physics. 
Such processes include the associated production of a top quark with a charged Higgs boson via $bg \\rightarrow tH^-$ \\cite{NKtH}; the associated production of a top quark with a $Z$ boson, $qg \\rightarrow tZ$ \\cite{NKtZ}, or with a photon, $qg \\rightarrow t\\gamma$ \\cite{MFNK}, via anomalous couplings; and the associated production of a top quark with a $Z'$ boson either via anomalous couplings, $qg \\rightarrow tZ'$, or via initial-state tops, $tg \\rightarrow tZ'$ \\cite{MGNK}.\nThe soft anomalous dimensions for all these processes are identical to the one for $bg \\rightarrow tW^-$ that was presented in the previous subsection.\n\n\\subsection{Top-antitop pair production}\n\nWe continue with top-antitop pair production \\cite{NKtop,NKGS,NKtt2l,NKtt}. The soft anomalous dimension is a $2\\times 2$ matrix for the quark-initiated channel, $q{\\bar q} \\rightarrow t{\\bar t}$, and a $3\\times 3$ matrix for the gluon-initiated channel, $gg \\rightarrow t{\\bar t}$ \\cite{NKGS,NKtt2l,FNPY}.\n\nWe begin with the soft anomalous dimension matrix for the $q{\\bar q} \\rightarrow t{\\bar t}$ channel in a color tensor basis of $s$-channel singlet and octet exchange, $c_1 = \\delta_{12}\\delta_{34}$ and $c_2 = T^c_{21} \\, T^c_{34}$.\n\nAt one loop for $q{\\bar q} \\rightarrow t{\\bar t}$ \\cite{NKGS,NKtt2l}:\n\\begin{eqnarray}\n\\Gamma^{q{\\bar q}\\, (1)}_{S\\, 11}&=&\\Gamma_{\\rm cusp}^{(1)} \\, ,\n\\quad\n\\Gamma^{q{\\bar q}\\, (1)}_{S\\, 12}=\n\\frac{C_F}{C_A} \\ln\\left(\\frac{t-m_t^2}{u-m_t^2}\\right) \\, , \n\\quad \n\\Gamma^{q{\\bar q}\\, (1)}_{S\\, 21}=\n2\\ln\\left(\\frac{t-m_t^2}{u-m_t^2}\\right) \\, ,\n\\nonumber \\\\ \n\\Gamma^{q{\\bar q}\\, (1)}_{S\\, 22}&=&\\left(1-\\frac{C_A}{2C_F}\\right)\n\\Gamma_{\\rm cusp}^{(1)} \n+4C_F \\ln\\left(\\frac{t-m_t^2}{u-m_t^2}\\right)\n-\\frac{C_A}{2}\\left[1+\\ln\\left(\\frac{s m_t^2 (t-m_t^2)^2}{(u-m_t^2)^4}\\right)\\right]. 
\n\\end{eqnarray}\n\nAt two loops for $q{\\bar q} \\rightarrow t{\\bar t}$ \\cite{NKtop,NKtt2l}:\n\\begin{eqnarray}\n\\Gamma^{q{\\bar q}\\,(2)}_{S\\, 11}&=&\\Gamma_{\\rm cusp}^{(2)} \\, ,\n\\quad \n\\Gamma^{q{\\bar q}\\,(2)}_{S\\, 12}=\n\\left(K^{'(2)}-C_A N_S^{(2)}\\right) \\Gamma^{q{\\bar q} \\,(1)}_{S\\, 12} \\, ,\n\\quad\n\\Gamma^{q{\\bar q} \\,(2)}_{S\\, 21}=\n\\left(K^{'(2)}+C_A N_S^{(2)}\\right) \\Gamma^{q{\\bar q} \\,(1)}_{S\\, 21} \\, ,\n\\nonumber \\\\\n\\Gamma^{q{\\bar q} \\,(2)}_{S\\, 22}&=&\nK^{'(2)} \\Gamma^{q{\\bar q} \\,(1)}_{S\\, 22}\n+\\left(1-\\frac{C_A}{2C_F}\\right)\n\\left(\\Gamma_{\\rm cusp}^{(2)}-K^{'(2)}\\Gamma_{\\rm cusp}^{(1)}\\right) \\, ,\n\\label{qq2l}\n\\end{eqnarray}\nwhere \n\\begin{equation}\nN_S^{(2)}=\\frac{\\theta^2}{4}+\\frac{1}{4} \\coth\\theta \\left[\\zeta_2-\\theta^2-{\\rm Li}_2\\left(1-e^{-2\\theta}\\right)\\right] \\, .\n\\end{equation}\n\nAt three loops, it is evident that the diagonal elements of the three-loop matrix for $q{\\bar q} \\rightarrow t{\\bar t}$ receive contributions from the three-loop massive cusp anomalous dimension, Eqs. (\\ref{3lc}), (\\ref{3lca}), but we do not yet have complete three-loop results. However, in analogy to our discussion for $s$-channel and $t$-channel single-top production, the structure of the results at three loops should be analogous to that at two loops [replace all two-loop terms, denoted by superscript (2), in Eq. (\\ref{qq2l}) by the corresponding three-loop terms] up to four-parton correlations.\n\nWe continue with the soft anomalous dimension matrix for the $gg \\rightarrow t{\\bar t}$ channel in a color tensor basis $c_1=\\delta_{12}\\,\\delta_{34}$, $c_2=d^{12c}\\,T^c_{34}$, and $c_3=i f^{12c}\\,T^c_{34}$. 
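For orientation, the color-factor combinations recurring in the expressions above can be evaluated numerically; a small sketch assuming $N=3$ and $n_f=5$ light flavors:

```python
import math

zeta2 = math.pi ** 2 / 6.0
zeta3 = 1.2020569031595943
zeta5 = 1.0369277551433699

N = 3.0                       # number of colors
nf = 5.0                      # light quark flavors (top-quark production)
CA = N
CF = (N * N - 1.0) / (2.0 * N)

# K'^(2) = K^(2)/C_F = C_A (67/36 - zeta2/2) - 5 nf/18
K2p = CA * (67.0 / 36.0 - zeta2 / 2.0) - 5.0 * nf / 18.0

# The C_F C_A^2 [...] constant entering the three-loop results above
C3 = CF * CA ** 2 * (-0.25 + 3.0 / 8.0 * zeta2 - zeta3 / 8.0
                     - 3.0 / 8.0 * zeta2 * zeta3 + 9.0 / 16.0 * zeta5)

print(round(K2p, 4), round(C3, 4))  # roughly 1.727 and 0.7005
```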
\n\nAt one loop for $gg \\rightarrow t{\\bar t}$ \\cite{NKGS,NKtt2l}:\n\\begin{eqnarray}\n\\Gamma^{gg\\,(1)}_{S\\, 11}&=& \\Gamma_{\\rm cusp}^{(1)}\\, , \\quad\n\\Gamma^{gg\\,(1)}_{S\\, 12}=0 \\, , \\quad \\Gamma^{gg\\,(1)}_{S\\, 21}=0\\, , \\quad\n\\Gamma^{gg\\,(1)}_{S\\, 13}= \\ln\\left(\\frac{t-m_t^2}{u-m_t^2}\\right) \\, , \\quad\n\\Gamma^{gg\\,(1)}_{S\\, 31}= 2 \\ln\\left(\\frac{t-m_t^2}{u-m_t^2}\\right) \\, ,\n\\nonumber \\\\\n\\Gamma^{gg\\,(1)}_{S\\, 22}&=& \\left(1-\\frac{C_A}{2C_F}\\right)\n\\Gamma_{\\rm cusp}^{(1)}\n-\\frac{C_A}{2}\\left[1+\\ln\\left(\\frac{s m_t^2}{(t-m_t^2)(u-m_t^2)}\\right)\\right] \\, , \n\\nonumber \\\\\n\\Gamma^{gg\\,(1)}_{S\\, 23}&=&\\frac{C_A}{2} \\ln\\left(\\frac{t-m_t^2}{u-m_t^2}\\right) \\, , \\quad\n\\Gamma^{gg\\,(1)}_{S\\, 32}=\\frac{(N^2-4)}{2N} \\ln\\left(\\frac{t-m_t^2}{u-m_t^2}\\right) \\, , \\quad \\Gamma^{gg\\,(1)}_{S\\, 33}=\\Gamma^{gg\\,(1)}_{S\\, 22} \\, .\n\\end{eqnarray}\n\nAt two loops for $gg \\rightarrow t{\\bar t}$ \\cite{NKtop,NKtt2l}:\n\\begin{eqnarray}\n\\Gamma^{gg\\,(2)}_{S\\, 11}&=& \\Gamma_{\\rm cusp}^{(2)} \\, , \\quad\n\\Gamma^{gg\\,(2)}_{S\\, 12}=0\\, , \\quad \\Gamma^{gg\\,(2)}_{S\\, 21}=0\\, , \\quad \n\\Gamma^{gg\\,(2)}_{S\\, 13}=\\left(K^{'(2)}-C_A N_S^{(2)}\\right) \n\\Gamma^{gg \\,(1)}_{S\\, 13} \\, , \n\\nonumber \\\\ \n\\Gamma^{gg\\,(2)}_{S\\, 31}&=&\\left(K^{'(2)}+C_A N_S^{(2)}\\right) \n\\Gamma^{gg \\,(1)}_{S\\, 31} \\, , \\quad\n\\Gamma^{gg\\,(2)}_{S\\, 22}= K^{'(2)} \\Gamma^{gg \\,(1)}_{S\\, 22}\n+\\left(1-\\frac{C_A}{2C_F}\\right) \n\\left(\\Gamma_{\\rm cusp}^{(2)}-K^{'(2)}\\Gamma_{\\rm cusp}^{(1)}\\right) \\, ,\n\\nonumber \\\\\n\\Gamma^{gg\\,(2)}_{S\\, 23}&=& K^{'(2)} \\Gamma^{gg \\,(1)}_{S\\, 23} \\, , \\quad\n\\Gamma^{gg\\,(2)}_{S\\, 32}= K^{'(2)} \\Gamma^{gg \\,(1)}_{S\\, 32} \\, , \\quad\n\\Gamma^{gg\\,(2)}_{S\\, 33}=\\Gamma^{gg\\,(2)}_{S\\, 22} \\, .\n\\label{gg2l}\n\\end{eqnarray}\n\nAt three loops, again it is clear that the diagonal elements of the three-loop matrix for $gg 
\\rightarrow t{\\bar t}$ receive contributions from the three-loop massive cusp anomalous dimension, Eqs. (\\ref{3lc}), (\\ref{3lca}), but we do not yet have complete three-loop results. However, in analogy to our discussion for the quark-initiated channel, the structure of the results at three loops should be analogous to that at two loops [replace all two-loop terms, denoted by superscript (2), in Eq. (\\ref{gg2l}) by the corresponding three-loop terms] up to four-parton correlations.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn the framework of SETI (Search for Extraterrestrial Intelligence)\none of the important projects was the search for the interstellar\nradio communication. A rather different and quite original method\nwas proposed by the prominent physicist Freeman Dyson in 1960, who\nsuggested that if an extraterrestrial intelligence has reached a\nlevel of supercivilization, it might consume an energy of its own\nstar \\citep{dyson}. For this purpose, to have the maximum efficiency\nof energy transformation, it would be better to construct a thin\nshell completely surrounding the star. In the framework of this\napproach the author assumes that the mentioned superintelligence\nobserved by us will have been in existence for millions of years,\nhaving reached a technological level exceeding ours by many orders\nof magnitude. Kardashev in his famous article \\citep{kardashev},\nexamining the problem of transmission of information by\nextraterrestrial civilizations, has classified them by a\ntechnological level they have already achieved: (I) - a\ntechnological level close to the level of the civilization on earth,\nconsuming the energy of the order of $4\\times 10^{19}$ergs s$^{-1}$;\n(II) - a civilization consuming the energy of its own star -\n$4\\times 10^{33}$ergs s$^{-1}$ and (III) - a civilization capable of\nharnessing the energy accumulated in its own galaxy: $4\\times\n10^{44}$ergs s$^{-1}$. 
In this classification Dyson's idea concerns a civilization of type-II, consuming energy exceeding ours by a factor of approximately $\\frac{4\\times 10^{33}}{4\\times 10^{19}}=10^{14}$ (see the similar estimates by \\cite{dyson}). If we assume that an average growth rate of $1\\%$ per year in industry is maintained, the level of a type-II civilization might be achieved in $\\sim 3000$ years \\citep{dyson}, which is quite reasonable in the context of the assumption that a civilization exists for millions of years.\n\n\n\nDyson suggested that if such a civilization exists, then it is possible to detect it by observing the spherical shell surrounding the star. In particular, it is clear that the energy radiated by the star must be absorbed by the inner surface of the sphere and might be consumed by the civilization. The author implied that the size of the sphere should be comparable with that of the Earth's orbit. It is clear that, to maintain energy balance, the spherical shell must re-radiate the energy, but in a different spectral interval: in the infrared domain, with a black body temperature of the order of $(200-300)$K \\citep{dyson}.\n\nAttempts to identify Dyson spheres on the sky were performed by several groups \\cite{jugaku,slish,timofeev}, but no significant results were obtained. Recently, interest in such an ambitious idea has significantly increased: a couple of years ago Carrigan published an article titled \"IRAS-based whole sky upper limit on Dyson spheres\" \\citep{carrigan}, where he considered the results of the instrument IRAS (the Infrared Astronomical Satellite). This satellite covered almost $96\\%$ of the sky, making the study nearly a whole-sky survey. According to the study, the searches were conducted for both fully and partially cloaked Dyson spheres. The search revealed $16$ Dyson sphere candidates, but the author pointed out that further investigation was required.
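The $\\sim 3000$-year estimate quoted above follows from simple exponential growth; a one-line check (an illustrative sketch, not taken from the cited works):

```python
import math

# Years needed for energy consumption to grow by a factor of 10**14
# (Kardashev type I -> type II) at an average growth rate of 1% per year.
years = math.log(1.0e14) / math.log(1.01)
print(round(years))  # about 3240, consistent with the ~3000 yr estimate
```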
Recently an interesting series of works has been presented \\citep{wright1,wright2,wright3}, in which the authors discuss a broad class of related problems, starting from philosophical aspects of SETI \\citep{wright1} and examining in detail the strategy of the search for infrared galactic and extragalactic sources corresponding to Kardashev-II\/III civilizations \\citep{wright2,wright3}.\n\n\nIn this manuscript we present a rather different idea of how to search for an advanced intelligence. In the framework of Dyson's idea, a spherical shroud (with a radius of the order of $(1-3)$AU) is constructed around a star. It is clear that, in order to consume almost the total energy radiated by the star, the star must be embedded inside a closed spherical shell, requiring an enormous amount of material to construct. On the other hand, it is very well known that pulsars - rapidly rotating neutron stars - emit huge energy in narrow channels (see Figure \\ref{fig1}); therefore, if a supercivilization exists, it can in principle utilize the energy of these objects. But in these cases, instead of sphere-like envelopes, the superintelligence has to use ring-like structures around the pulsars. If the angle between the rotation axis and the direction of emission, $\\alpha$, is close to $90^o$, the ring will be located in the equatorial plane of the neutron star. As we will see later, in the case of relatively slowly spinning pulsars (with rotation periods of the order of $1$s), an advantage will be the quite small sizes of these artificial constructions. Another class of neutron stars is the so-called $X$-ray pulsars, which have a strong emission pattern in the $X$-ray band. Usually these objects are characterized by short rotation periods with quite high slowdown rates, and by luminosities exceeding those of normal pulsars by several orders of magnitude.
It is worth noting that a habitable zone around them might be located farther out than around slowly spinning pulsars, implying that the possible size of the artificial \"ring\" could be extremely large. Therefore, we can hardly believe that these objects are interesting in the context of the search for extraterrestrial super advanced intelligence.\n\nThe organization of the paper is the following: after introducing the theoretical background, we work out the details of the Dyson \"rings\" surrounding the pulsars in Sec. 2, and in Sec. 3 we summarize our results.\n\n\n\n\n\\section[]{Theoretical background and results}\n\nIn this section we consider the pulsars and estimate the corresponding physical parameters of artificial ring-like constructions surrounding the rotating neutron stars.\n\nGenerally speaking, any rotating neutron star, characterized by the slowdown rate $\\dot{P}\\equiv dP\/dt>0$, where $P$ is the rotation period, loses energy with the following power (called the slowdown luminosity, $L_{sd}$): $\\dot{W} = I\\Omega|\\dot{\\Omega}|$. Here $I=2MR_{\\star}^2\/5$ denotes the moment of inertia of the neutron star, $M\\approx 1.5\\times M_{\\odot}$ and $M_{\\odot}\\approx 2\\times 10^{33}$g are the pulsar's mass and the solar mass respectively, $R_{\\star}\\approx 10^6$cm is the neutron star's radius, $\\Omega\\equiv 2\\pi\/P$ is its angular velocity and $\\dot{\\Omega}\\equiv d\\Omega\/dt=-2\\pi\\dot{P}\/P^2$. The slowdown luminosity for relatively slowly spinning pulsars is of the order of\n$$L_{sd}\\approx 4.7\\times 10^{31}\\times\nP^{-3}\\times\\left(\\frac{\\dot{P}}{10^{-15}ss^{-1}} \\right)\\times$$\n\\begin{equation}\n\\label{lumin} \\times\\left(\\frac{M}{1.5M_{\\odot}}\\right)ergs\\ s^{-1},\n\\end{equation}\nwhere the parameters are normalized to their typical values. As is clear from Eq. (\\ref{lumin}), the energy budget is very high, forcing a supercivilization to construct a \"ring\" close to the host object.
On the other hand, these sources exist during the time scale $P\/\\dot{P}$, which is long enough to colonize the star.\n\nIn the framework of the standard definition, a habitable zone (HZ) is a region of space with favorable conditions for life based on complex carbon compounds and on the availability of liquid water, etc. \\citep{hanslmeier}. This means that the surface of the \"ring\" must be irradiated by the same flux as the surface of the Earth (henceforth - the flux method). Therefore, the mean radius of the HZ, defined as $R_{HZ} = \\left(\\kappa L_{sd}\/L_{\\odot}\\right)^{1\/2}$AU, where $L_{\\odot}\\approx 3.83\\times 10^{33}$ergs s$^{-1}$ is the solar bolometric luminosity, is expressed as follows\n$$R_{_{HZ}}\\approx 3.5\\times 10^{-2}\\times\nP^{-3\/2}\\times\\left(\\frac{\\kappa}{0.1}\\right)^{1\/2}\\times$$\n\\begin{equation}\n\\label{rHZ} \\times\\left(\\frac{\\dot{P}}{10^{-15}ss^{-1}}\n\\right)^{1\/2}\\times\\left(\\frac{M}{1.5M_{\\odot}}\\right)^{1\/2}AU,\n\\end{equation}\nwhere we have taken into account that the bolometric luminosity, $L$, of the pulsar is less than the slowdown luminosity and is expressed as $\\kappa L_{sd}$, with $\\kappa < 1$. As we see, the radius of the HZ for $1$s pulsars is very small compared to the radius of the typical Dyson sphere - $1$AU. The best option for the supercivilization could be to find a pulsar with an angle between the magnetic moment (direction of one of the channels) and the axis of rotation close to $90^o$, because in this case an artificial \"ring\" has to be constructed in the equatorial plane.
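As a numerical cross-check of Eqs. (\\ref{lumin}) and (\\ref{rHZ}), the slowdown luminosity and the habitable-zone radius can be reproduced from first principles in CGS units; a minimal sketch for the fiducial values $P=1$s, $\\dot{P}=10^{-15}$ss$^{-1}$, $\\kappa=0.1$:

```python
import math

# Fiducial parameters (CGS units)
P = 1.0                 # rotation period, s
Pdot = 1.0e-15          # slowdown rate, s/s
M = 1.5 * 2.0e33        # neutron star mass, g
R = 1.0e6               # neutron star radius, cm
kappa = 0.1             # fraction of L_sd emitted as radiation
L_sun = 3.83e33         # solar bolometric luminosity, erg/s

# Slowdown luminosity: L_sd = I * Omega * |Omegadot|, with I = (2/5) M R^2
I = 0.4 * M * R ** 2
Omega = 2.0 * math.pi / P
Omegadot = 2.0 * math.pi * Pdot / P ** 2
L_sd = I * Omega * Omegadot      # ~4.7e31 erg/s, matching Eq. (1)

# Habitable-zone radius (in AU): R_HZ = sqrt(kappa * L_sd / L_sun)
R_HZ = math.sqrt(kappa * L_sd / L_sun)  # ~3.5e-2 AU, matching Eq. (2)
```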
In the case of\nrelatively small inclination angles, there should be two rings, each\nof them shifted from the equatorial plane, although in this case it\nis unclear how the \"rings\" would keep staying in their planes;\ntherefore, we focus only on pulsars with $\\alpha\\approx 90^o$.\n\n\n\n\nAccording to the standard model of pulsars it is believed that the\nradio emission is generated by means of the curvature radiation\ndefining the opening angle of the emission channel \\citep{rudsuth}\n$$\\beta\\approx 32^o\\times\nP^{-13\/21}\\times\\left(\\frac{\\rho}{10^6cm}\\right)^{2\/21} \\times$$\n\\begin{equation}\n\\label{angle} \\times\\left(\\frac{\\dot{P}}{10^{-15}ss^{-1}}\n\\right)^{1\/14}\\times\\left(\\frac{\\omega} {1.0\\times\n10^{10}Hz}\\right)^{-1\/3},\n\\end{equation}\nwhere the radius of curvature of the magnetic field lines, $\\rho$, and\nthe cyclic frequency of the radio waves, $\\omega$, are normalized to\ntheir typical values \\citep{rudsuth}. One can straightforwardly\ncheck that the artificial \"ring\" with equal radii of the\nspherical segment bases should have a height in the following interval:\n$\\left(1-2\\right)R_{_{HZ}}\\sin\\left(\\beta\\right)\\approx\\left(0.006-0.06\\right)$AU.\n\n\nOne of the significant parameters in the search for the \"rings\" is their\ntemperature, which can be estimated quite straightforwardly. In\nparticular, if the aim of the super civilization is to consume the\ntotal energy radiated by the host pulsar, then the albedo of the\nmaterial the \"ring\" has to be made of must be close to zero. For\nsimplicity we consider $\\alpha = 90^o$; then it is clear that the\ninner surface of the \"ring\", irradiated by the pulsar, absorbs\nenergy of the order of $L$ every second.
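For reference, Eq. (\ref{angle}) is easy to evaluate numerically; the helper below (an illustrative sketch, not part of the paper) returns the opening angle of the emission cone in degrees for given pulsar parameters:

```python
def opening_angle_deg(P, Pdot=1.0e-15, rho=1.0e6, omega=1.0e10):
    """Opening angle of the radio emission cone, Eq. (angle), in degrees."""
    return (32.0
            * P**(-13.0 / 21.0)
            * (rho / 1.0e6)**(2.0 / 21.0)
            * (Pdot / 1.0e-15)**(1.0 / 14.0)
            * (omega / 1.0e10)**(-1.0 / 3.0))

print(opening_angle_deg(1.0))  # 32.0 at the fiducial normalization
print(opening_angle_deg(0.5))  # a wider cone for faster rotation
```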
On the other hand, energy\nbalance requires that the \"ring\" must emit the same amount of energy\nin the same interval of time, $L = A_{ef}\\sigma T^4$, leading to the\nfollowing expression\n\\begin{equation}\n\\label{balance} T = \\left(\\frac{L}{A_{_{ef}}\\sigma}\\right)^{1\/4},\n\\end{equation}\nwhere $\\sigma\\approx 5.67\\times 10^{-5}$erg cm$^{-2}$s$^{-1}$K$^{-4}$ is the\nStefan-Boltzmann constant, $A_{_{ef}} = 8\\pi\nR_{_{HZ}}^2\\sin\\left(\\beta\/2\\right)$ is the effective area of the\nspherical segment taking into account inner and outer surfaces and\n$T$ is the average temperature of the \"ring\".\n\n\n\n\n\n\n\n\n\nAfter applying Eqs. (\\ref{lumin},\\ref{rHZ},\\ref{angle}), one can\nobtain $T$. On Figure \\ref{fig2} (top panel) we plot the behaviour\nof temperature versus the period of rotation of a pulsar for\ndifferent values of the slow down rate $\\dot{P}$: $\\dot{P} =\n10^{-15}$ss$^{-1}$ (solid line); $\\dot{P} = 10^{-14}$ss$^{-1}$\n(dashed line); $\\dot{P} = 2\\times 10^{-14}$ss$^{-1}$ (dotted-dashed\nline). As we see from the graph, for relatively slowly spinning\npulsars the typical values of the effective temperature of the\nartificial construction are in the following interval $\\sim\n(400-500)$K. The corresponding radius of the HZ varies from $\\sim\n10^{-2}$AU to $\\sim 10^{-1}$AU (see the bottom panel).\n\nGenerally speaking, the distance to the HZ is not strictly defined,\nbecause our knowledge about life is limited to the conditions\nwe know on Earth. But even in the framework of life on Earth the\nradius of the HZ might be defined in a different way, assuming a\ndistance enabling the effective temperature in the interval:\n$(273-373)$K where water can be in a liquid state (henceforth - the\ntemperature method) \\footnote{In reality the temperature range might\nbe even narrower. For example the melting temperature of DNA is\napproximately $60^o$C.}. From Eq.
(\\ref{balance}) one can show that\n\\begin{equation}\n\\label{balance1} R_{HZ} =\n\\left(\\frac{L}{8\\pi\\sigma\\sin{\\left[\\beta\/2\\right]}T^4}\\right)^{1\/2}.\n\\end{equation}\nOn Figure \\ref{fig2} (bottom panel) we plot the graphs of $R_{HZ}$\nversus $T$ for the same values of $\\dot{P}$. It is evident that in\nthis case the distance to the HZ ranges from $2\\times 10^{-4}$AU to\n$1.3\\times 10^{-2}$AU.\n\nAnother possibility for the supercivilization might be the\n\"colonization\" of millisecond pulsars since these objects are\ncharacterized by extremely high luminosities. In particular, for a pulsar\nwith $P = 0.01$s and $\\dot{P} = 10^{-13}$ss$^{-1}$ the slowdown\nluminosity is of the order of $9.5\\times 10^{38}$ergs s$^{-1}$ and\nby taking into account $\\kappa\\approx 0.01$ (a value quite common for\nmillisecond pulsars), one can show that the bolometric luminosity\n$$L\\approx 9.5\\times 10^{36}\\times\n\\left(\\frac{P}{0.01s}\\right)^{-3}\\times\\left(\\frac{\\kappa}{0.01}\\right)\n\\times$$\n\\begin{equation}\n\\label{lb} \\times\\left(\\frac{\\dot{P}}{10^{-13}ss^{-1}}\n\\right)\\times\\left(\\frac{M}{1.5M_{\\odot}}\\right)ergs\\ s^{-1}\n\\end{equation}\nis several orders of magnitude higher than that of relatively slowly\nrotating pulsars. Here the physical quantities are normalized to\ntypical values of rapidly rotating neutron stars. It is clear that\nall rapidly spinning pulsars have extremely high energy output in\nthe form of electromagnetic waves. As a result, the corresponding\ndistance from the central object to the HZ must be much larger than\nfor slowly rotating pulsars.\n\nIn order to estimate the height of the \"ring\" one has to define the\nopening angle of the radiation cone for millisecond pulsars radiating\nin the $X$-ray spectrum. According to the classical paper\n\\citep{machus}, the $X$-ray emission of pulsars has a synchrotron\norigin, maintained by quasi-linear diffusion.
As it was shown by\nMachabeli \\& Usov (1979) the corresponding angle of the radiation\ncone is given by\n\\begin{equation}\n\\label{angle2} \\beta\\approx\n8^o\\times\\left(\\frac{0.01s}{P}\\right)^{1\/2}\\times\\left(\\frac{R_{st}}{10^6cm}\\right)^{1\/2},\n\\end{equation}\nwhich automatically gives an interval of height of the artificial\nconstruction: $(0.02-0.15)$AU.\n\n\n\n\n\n\n\n\nAfter combining Eqs. (\\ref{balance},\\ref{angle2}), as in the previous\ncase of slowly rotating neutron stars, one can estimate the\neffective temperature for millisecond pulsars. In particular, on\nFigure \\ref{fig3} (top panel), in the framework of the flux method,\nwe show the behaviour of the temperature of the \"ring\" as a function of\n$P$; as we see, for rotation periods in the interval\n$(0.01-0.05)$s the temperature ranges from $\\sim 540$K to\napproximately $660$K. The size of the HZ ranges from $30$AU to\n$350$AU. Contrary to this, by applying the temperature method, on\nFigure \\ref{fig3} (bottom panel) we show the behaviour of $R_{HZ}$\nversus $T$ for three different values of $\\dot{P}$: $\\dot{P} =\n10^{-13}$ss$^{-1}$ (solid line); $\\dot{P} = 2\\times\n10^{-13}$ss$^{-1}$ (dashed line); $\\dot{P} = 4\\times\n10^{-13}$ss$^{-1}$ (dotted-dashed line). From the figure it is clear\nthat the distance to the HZ ranges from $10$AU to $30$AU.\n\nTo understand how realistic these constructions are, it is\ninteresting to estimate their masses. Let us assume that the\nmaterial the rings are made of has a density of the order of that of the\nEarth, $\\rho\\sim 5.5$g cm$^{-3}$. Then it is straightforward to show\nthat the mass is given by\n\\begin{equation}\n\\label{mass} M_{ring}\\approx 4\\pi\\rho R_{_{HZ}}^2\\Delta\nR\\sin\\left(\\beta\/2\\right),\n\\end{equation}\nwhere $\\Delta R$ is the average thickness of the ring.
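Evaluating Eq. (\ref{mass}) numerically is straightforward; the snippet below (an illustrative sketch) plugs in an Earth-like density, a thickness $\Delta R = 10$ m, and slow-pulsar values $R_{HZ}\sim 10^{-3}$AU, $\beta\approx 32^o$:

```python
import math

AU = 1.5e13  # cm

def ring_mass(R_hz_au, dR_cm, beta_deg, rho=5.5):
    """M_ring = 4 pi rho R_HZ^2 dR sin(beta/2), in grams (Eq. mass)."""
    R = R_hz_au * AU
    return (4.0 * math.pi * rho * R**2 * dR_cm
            * math.sin(math.radians(beta_deg / 2.0)))

# slowly rotating pulsar: R_HZ ~ 1e-3 AU, dR = 10 m, beta ~ 32 deg
print(f"{ring_mass(1.0e-3, 1.0e3, 32.0):.1e} g")  # a few 1e24 g, well below the Earth mass
```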
If we\nconsider slowly rotating pulsars, by assuming $\\Delta R\\sim 10$m and\n$R_{_{HZ}}\\sim 10^{-3}$AU (see Figure \\ref{fig2}), one can show that\n$M_{ring}\\approx 4\\times 10^{24}$g, which is three orders of\nmagnitude less than the mass of the Earth. Therefore, a\nplanetary system could contain quite enough material to construct the\nring-like structure around $1$s pulsars.\n\nA similar analysis for millisecond pulsars shows that the mass of\nthe ring, $4\\times 10^{32}$g, exceeds the total mass of all\nplanets, planetoids, asteroids, comets, centaurs and interplanetary\ndust in the solar system by three orders of magnitude. This means\nthat the nearby regions of millisecond pulsars can hardly be considered\nattractive sites for colonization.\n\nAnother issue one has to address is the force acting on the\nstructure by means of the radiation. By assuming that half of the\ntotal energy is radiated in each of the radiation cones, the\ncorresponding force can be estimated as follows\n\\begin{equation}\n\\label{fr} F_{rad}\\approx\\frac{L}{2c}.\n\\end{equation}\nOn the other hand, the stress forces in the ring, caused by\ngravitational forces, should be of the order of the gravitational\nforce acting on the mentioned area of the ring. This force is given\nby\n\\begin{equation}\n\\label{fg} F_{g}\\approx\\frac{GM\\lambda A}{R_{_{HZ}}^2},\n\\end{equation}\nwhere $\\lambda$ is the mass area density of the ring and $A = 2\\pi\nR_{_{HZ}}^2(1-\\cos\\left(\\beta\/2\\right))$ is the area of the ring,\nirradiated by the pulsar. It is clear that for maintaining stability\nof the structure the radiation force should be small compared to the\ngravitational force.
By imposing this condition, Eqs.\n(\\ref{fr},\\ref{fg}) lead to $\\lambda\\gg L\/\\left(4\\pi\nGMc(1-\\cos\\left(\\beta\/2\\right))\\right)$, which for $\\beta = 32^o$\nreduces to\n\\begin{equation}\n\\label{sigma} \\lambda\\gg 3.4\\times\n10^{-6}\\frac{L_{31}}{M_{1.5}}g\\;cm^{-2},\n\\end{equation}\nwhere $L_{31}\\equiv L\/(10^{31}ergs\\ s^{-1})$ and $M_{1.5}\\equiv\nM\/(1.5M_{\\odot})$. As we see from the aforementioned estimate, for\nrealistic scenarios the emission cannot significantly perturb the\nDyson construction.\n\nGenerally speaking, the rotation of pulsars drives high energy outflows\nof plasma particles around the star \\citep{guda} and the influence of these particles\ndeserves to be considered as well. A similar result can be obtained by\nconsidering the interaction of the pulsar wind with the ring. In\nparticular, in Eq. (\\ref{sigma}), instead of the radiation term,\n$L\/2$, one should apply the pulsar wind's kinematic luminosity, which\ncannot exceed the slowdown luminosity. After considering the maximum\npossible kinematic wind luminosity, $L_{sd}$, the critical surface\ndensity will be of the same order of magnitude. Therefore, the Dyson\nstructure can survive such an extreme pulsar environment.\n\n\nOn the other hand, it is clear that gravitationally such a system\nmight not be stable. In particular, \\cite{stability} has considered\na point mass and a thin solid ring having initially coincident\ncenters of mass, thus being in the equilibrium state. However, it\nhas been shown that this equilibrium configuration is stable with\nrespect to perturbations normal to the plane of the ring and\nunstable with respect to perturbations within that plane.\n\nOn Fig. \\ref{fig4} we schematically show the in-plane displaced ring\nwith respect to the pulsar.
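The numeric coefficient in Eq. (\ref{sigma}) can be recovered directly from the bound $\lambda\gg L\/(4\pi GMc(1-\cos(\beta\/2)))$ (an illustrative sketch in CGS units, using standard values of the constants):

```python
import math

G = 6.674e-8       # cm^3 g^-1 s^-2
C = 3.0e10         # cm/s
M_SUN = 2.0e33     # g

def critical_surface_density(L, M=1.5 * M_SUN, beta_deg=32.0):
    """Lower bound on the ring's area density, g/cm^2 (Eq. sigma)."""
    return L / (4.0 * math.pi * G * M * C
                * (1.0 - math.cos(math.radians(beta_deg / 2.0))))

print(f"{critical_surface_density(1.0e31):.1e} g/cm^2")  # ~3.4e-6 for L = 1e31 erg/s
```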
The gravitational force between the\npulsar and the ring element with mass $dm$,\n\\begin{equation}\n\\label{grav1} df = G\\frac{Mdm}{r'^2},\n\\end{equation}\nprojected onto the $r$ line and integrated over the whole ring\n\\citep{stability},\n\\begin{equation}\n\\label{grav2} f_r(\\zeta) = G\\frac{MM_{ring}}{2\\pi\nR^2}\\int_0^{2\\pi}\\frac{\\zeta+\\cos\\phi}{\\left(1+2\\zeta\\cos\\phi+\\zeta^2\\right)^{3\/2}}d\\phi,\n\\end{equation}\nfor small values of $\\zeta\\equiv r\/R$ leads to the equation\ndescribing the in-plane dynamics \\citep{stability}\n\\begin{equation}\n\\label{stab} \\frac{d^2\\zeta}{dt^2} -\\frac{1}{\\tau^2}\\zeta = 0,\n\\end{equation}\nwhere $\\tau\\equiv \\sqrt{2R^3\/(GM)}$ is the timescale of the process.\nThis equation has the following solution\n\\begin{equation}\n\\label{solut} \\zeta(t) = \\zeta_0 e^{t\/\\tau},\n\\end{equation}\nindicating that the equilibrium configuration is unstable against\nin-plane perturbations. Here, $\\zeta_0$ is the initial\nnondimensional perturbation. By means of this process the ring gains\nkinetic energy with the following power\n\\begin{equation}\n\\label{power} W = M_{ring}R^2\\frac{d\\zeta}{dt}\\frac{d^2\\zeta}{dt^2};\n\\end{equation}\ntherefore, in order to restore the equilibrium state of the ring,\nthe same amount of energy has to be drawn from the pulsar\nover the instability timescale. On the other hand, it is\nevident that such constructions may make sense only if the power\nneeded to maintain stability is small compared to the bolometric\nluminosity of the pulsar. Taking this into account, one can\nstraightforwardly show that the initial displacement from the\nequilibrium position must satisfy the condition\n\\begin{equation}\n\\label{cond} \\zeta_0\\ll\n\\frac{0.37}{R}\\left(\\frac{L_b}{M_{ring}}\\right)^{1\/2}\\left(\\frac{2R^3}{GM}\\right)^{3\/4},\n\\end{equation}\nwhich for the parameters presented in Fig. \\ref{fig3} leads to\n$\\zeta_0\\ll\\left( 10^{-5}-10^{-4}\\right)$.
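The small-$\zeta$ expansion behind Eq. (\ref{stab}) can be verified numerically: the angular integral in Eq. (\ref{grav2}) behaves as $-\pi\zeta$ for $\zeta\ll 1$, which is what produces the destabilizing term $+\zeta\/\tau^2$ (an illustrative sketch using a plain midpoint Riemann sum):

```python
import math

def ring_integral(zeta, n=200_000):
    """Midpoint Riemann sum of
    Int_0^{2pi} (zeta + cos phi) / (1 + 2 zeta cos phi + zeta^2)^{3/2} dphi."""
    h = 2.0 * math.pi / n
    total = 0.0
    for k in range(n):
        phi = (k + 0.5) * h
        total += (zeta + math.cos(phi)) / (1.0 + 2.0 * zeta * math.cos(phi) + zeta**2) ** 1.5
    return total * h

zeta = 1.0e-3
print(ring_integral(zeta), -math.pi * zeta)  # the two agree to leading order in zeta
```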
It is worth noting that\nsuch a precision of distance measurements cannot be a problem for a\nsupercivilization. For instance, in the Lunar laser ranging\nexperiment \\footnote{The corresponding data is available from the Paris Observatory\nLunar Analysis Center: http:\/\/polac.obspm.fr\/llrdatae.html} the distance is measured with a relative precision of the order\nof $10^{-10}$.\n\nIn the vicinity of pulsars radiation protection could be one of the important challenges\nfacing the super advanced civilization. In particular, the radiation intensity\n\\begin{equation}\n\\label{int} I = \\frac{L_b}{4\\pi R^2\\left(1-\\cos\\left(\\beta\/2\\right)\\right)},\n\\end{equation}\nfor the shortest ring radius, $2\\times 10^{-4}$AU, is of the order of\n$10^{12}$erg s$^{-1}$ cm$^{-2}$ and therefore, it should be of great importance\nto protect the civilization from high energy gamma rays.\nFor this purpose one can use special shields made of a material that efficiently\nabsorbs the radiation, which in turn might significantly reduce the corresponding\nintensity. Using the half-value\nlayer, $HVL$, which is the thickness of the material at which the intensity is reduced by\none half, we can estimate whether the thickness of the ring is enough to provide strong protection\nfrom the extremely high level of emission. If as an example we examine concrete with $HVL=6.1$cm,\none can show that a layer of thickness $2.5$m reduces the intensity by a factor of the order of $10^{12}$,\nbeing even more than enough for radiation protection.\n\n\n\n\\section{Conclusion}\n\nWe extended the idea of Freeman Dyson about the search for\nsupercivilizations and considered neutron stars. As a first example\nwe examined relatively slowly rotating pulsars, considering the\nparameters $P=\\left(0.5-2\\right)$s; $\\dot{P} = 10^{-15}$ss$^{-1}$;\n$\\dot{P} = 10^{-14}$ss$^{-1}$; $\\dot{P} = 2\\times\n10^{-14}$ss$^{-1}$.
It has been shown that the size of the \"ring\" must\nbe $(1-4)$ orders of magnitude less than that of the Dyson\nsphere, which is thought to be of the order of $1$AU. The\ncorresponding temperatures of the artificial construction should be\nin the interval $(300-600)$K.\n\nBy considering the parameters of millisecond pulsars,\n$P=\\left(0.01-0.05\\right)$s; $\\dot{P} = 10^{-13}$ss$^{-1}$; $\\dot{P}\n= 2\\times 10^{-13}$ss$^{-1}$; $\\dot{P} = 4\\times 10^{-13}$ss$^{-1}$,\nwe found that the radius of the \"ring\" should be of the order of\n$(10-350)$AU with an enormous mass, $10^{32}$g, exceeding the total\nplanetary mass (excluding the central star) by several orders of\nmagnitude. Therefore, it is clear that millisecond pulsars are\nless interesting in the context of the search for extraterrestrial\nsuperintelligence. Contrary to this class of objects, for slowly\nrotating pulsars the corresponding masses of the Dyson structures\nshould be three orders of magnitude less than the Earth mass. We\nhave also examined the stresses caused by radiation and\npulsar winds, and it has been shown that they will not significantly\nperturb the Dyson construction located in the habitable zone if the\narea density of the ring satisfies the quite realistic condition\n$\\lambda\\gg 3.4\\times 10^{-6}$g cm$^{-2}$. We have examined the stability\nproblem of the ring's in-plane dynamics and it has been shown that under certain\nconditions the power required to restore the equilibrium position might be\nmuch less than the power extracted from the pulsar. Also, considering the problem\nof radiation protection, we have found that it is quite\nrealistic to reduce the high level of emission by many orders of magnitude.\n\n\nIt is worth noting that in the framework of the paper we do not\nsuggest that an advanced civilization would arise around a massive\nstar, surviving its supernova.
On the contrary, we consider the\npossibility to colonize the nearby regions of pulsars by building\nlarge-scale Dyson structures.\n\nGenerally speaking, the total luminosity budget of a pulsar is\nemitted over a broad range of wavelengths, which can be harvested by\nmeans of Faraday's law of induction, converting\nelectromagnetic energy into electricity.\n\nAs we see, pulsars seem to be attractive sites for super\nadvanced cosmic intelligence and therefore, the corresponding search\nfor relatively small ($0.0001$AU-$0.1$AU) infrared \"rings\" (with\ntemperatures in the interval $(300-600)$K) might be quite promising.\n\n\n\n\n\\section*{Acknowledgments}\n\nThe research was partially supported by the Shota Rustaveli National\nScience Foundation grant (N31\/49).\n\n\n\\begin{figure}\n \\centering {\\includegraphics[width=7cm]{fig1.eps}}\n \\caption{On the picture we schematically show the pulsar, its axis of rotation,\n and two emission channels with an opening angle $\\beta$. It is worth noting that\n when $\\alpha$ is close to $90^o$, the Dyson construction has to be located in the equatorial\n plane. Contrary to this, for relatively smaller angles, the emission channels\n will irradiate two different ring-like structures located in different planes\n parallel to that of the equator.}\\label{fig1}\n\\end{figure}\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{fig2a.eps}}\n\\resizebox{\\hsize}{!}{\\includegraphics{fig2b.eps}}\n \\caption{On the top panel, in the framework of\nthe flux method, we plot the dependence of $T$ on $P$ for three\ndifferent values of $\\dot{P}$: $\\dot{P} = 10^{-15}$ss$^{-1}$ (solid\nline); $\\dot{P} = 10^{-14}$ss$^{-1}$ (dashed line); $\\dot{P} =\n2\\times 10^{-14}$ss$^{-1}$ (dotted-dashed line). As is clear from\nthe graph, for typical values of relatively slowly spinning pulsars\nthe effective temperature of the artificial construction ranges from\n$\\sim 400$K to $\\sim 500$K.
On the bottom panel, in the framework of\nthe temperature method, we show the dependence of $R_{HZ}$ on $T$\nfor the same values of $\\dot{P}$. As we see the distance to the HZ\nranges from $2\\times 10^{-4}$AU to $1.3\\times 10^{-3}$AU.}\n\\label{fig2}\n\\end{figure}\n\n\\begin{figure}\n\\resizebox{\\hsize}{!}{\\includegraphics{fig3a.eps}}\n\\resizebox{\\hsize}{!}{\\includegraphics{fig3b.eps}}\n \\caption{On the top panel we present the\nbehaviour of $T$ versus $P$ in the framework of the flux method. It\nis evident that for rapidly spinning pulsars the effective\ntemperature of the \"ring\" is in the following interval:\n$(540-660)$K. On the bottom panel, in the framework of the\ntemperature method, we show the dependence of $R_{HZ}$ on $T$ for\nthree different values of $\\dot{P}$: $\\dot{P} = 10^{-13}$ss$^{-1}$\n(solid line); $\\dot{P} = 2\\times 10^{-13}$ss$^{-1}$ (dashed line);\n$\\dot{P} = 4\\times 10^{-13}$ss$^{-1}$ (dashed-dotted line). As we\nsee the distance to the HZ ranges from $10$AU to $30$AU.}\n\\label{fig3}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering {\\includegraphics[width=7cm]{fig4.eps}}\n \\caption{Here we show the in-plane displaced ring\n with respect to the pulsar, denoted by $M$.}\\label{fig4}\n\\end{figure}\n\n\n\n\n\\section{Appendix}\n\\subsection{Glossary of Equations and Notations}\\label{sect:eq-cheat-sheet}\nFor easier reference, we compressed the most frequently referenced equations and notations on two pages.
\nThe Wainwright-Hsu equations are given by:\n\\begin{align*}\n N_i' &= -(\\Sigma^2 +2 \\langle \\taubp_i,\\bpt\\Sigma\\rangle)N_i \\tag{\\ref{eq:ode-ni}}\\\\\n &= -\\left(\\left|\\bpt\\Sigma+\\taubp_i\\right|^2 -1\\right)N_i \\tag{\\ref{eq:ode2-ni}}\\\\\n\\bpt\\Sigma' &= N^2 \\bpt\\Sigma + 2 \\threemat{\\taubp_1}{\\taubp_2}{\\taubp_3}{\\taubp_3}{\\taubp_1}{\\taubp_2}[\\bpt N, \\bpt N],\\quad\\text{where}\\tag{\\ref{eq:ode2-sigma}}\\\\\n\\taubp_1 &= (-1,0) \\qquad \\taubp_2 = \\left(\\frac 1 2, -\\frac 1 2 \\sqrt{3}\\right) \\qquad \\taubp_3 = \\left(\\frac 1 2, \\frac 1 2 \\sqrt{3} \\right) \\tag{\\ref{eq:taub-def}}\\\\\n1&\\overset{!}{=} \\Sigma^2+N^2 = \\Sigma_+^2+\\Sigma_-^2 + N_1^2+N_2^2+N_3^2-2(N_1N_2+N_2N_3+N_3N_1). \\tag{\\ref{eq:constraint}}\n\\end{align*}\nAuxilliary quantities are given by:\n\\begin{align*}\n\\delta_i &= 2\\sqrt{|N_jN_k|} \\tag{\\ref{eq:delta-r-def-delta}}\\\\\nr_i &= \\sqrt{(|N_j|-|N_k|)^2 + \\frac{1}{3}\\langle\\taubp_j-\\taubp_k, \\bpt\\Sigma\\rangle^2} \\tag{\\ref{eq:delta-r-def-r}}\\\\\n\\delta_i' &= -\\left(\\left|\\bpt\\Sigma-\\frac{\\taubp_i}{2}\\right|^2-\\frac{1}{4}\\right)\\delta_{i}.\\tag{\\ref{eq:ode2-delta}}\n\\end{align*}\nIn polar coordinates, the equations around $r_1 \\approx 0$ become for $N_2,N_3>0$:\n\\begin{align*}\nr_1 &= \\Sigma_-^2+ N_-^2 \\quad N_-=N_3-N_2\\quad N_+=N_3+N_2\\\\\n{\\mathrm{D}}_t \\log r_1&= N^2 - (\\Sigma_++1) \\frac{N_-^2}{r_1^2} +\\sqrt{3} N_1 \\frac{\\Sigma_- N_-}{r_1^2} \\tag{\\ref{eq:neartaub-b9-q:r}}\\\\\n&=r_1^2\\sin^2\\psi \\frac{-\\Sigma_+}{1-\\Sigma_+} + N_1\\, h_r\\tag{\\ref{eq:neartaub-b9-t:r}}\\\\\n{\\mathrm{D}}_t \\log \\delta_1 &= N^2-(\\Sigma_++1)\\tag{\\ref{eq:neartaub-b9-q:delta}}\\\\\n&= \\frac{-1}{1-\\Sigma_+}r_1^2\\cos^2\\psi + \\frac{-\\Sigma_+}{1-\\Sigma_+}r_1^2\\sin^2\\psi + N_1\\,h_\\delta \\tag{\\ref{eq:neartaub-b9-t:delta}}\\\\\n{\\mathrm{D}}_t \\log \\frac{\\delta_1}{r_1} &= -(\\Sigma_+ + 1)\\frac{\\Sigma_-^2}{r_1^2} -\\sqrt{3}N_1\\frac{\\Sigma_- N_-}{r_1^2} 
\\tag{\\ref{eq:neartaub-b9-q:delta-r}}\\\\\n&=\\frac{-1}{1-\\Sigma_+}r_1^2\\cos^2\\psi +N_1(h_\\delta-h_r)\\tag{\\ref{eq:neartaub-b9-t:delta-r}}\\\\\n\\psi' &=\\sqrt{3}r_1 \\sqrt{\\sin^2\\psi + \\frac{\\delta_1^2}{r_1^2}} - \\frac{r_1^2}{1-\\Sigma_+} \\sin\\psi \\cos\\psi + N_1\\sin\\psi\\, h_{\\psi},\\tag{\\ref{eq:neartaub-b9-t:psi}}\n\\end{align*}\n\\noindent\nwhere the terms $|h_r|,|h_\\delta|, |h_\\psi|$ are bounded (if $|N_i|$, $\\Sigma_+<0$, and $\\Sigma_-$ are bounded) and given in \\eqref{eq:neartaub-b9-t-h}, page \\pageref{eq:neartaub-b9-t-h}.\n\nIn polar coordinates, the equations around $r_1 \\approx 0$ become for $N_2>0$ , $N_3<0$:\n\\begin{align*}\nr_1 &= \\Sigma_-^2+ N_-^2 \\quad N_-=N_2+N_3\\quad N_+=N_2-N_3\\\\\n{\\mathrm{D}}_t \\log r_1&= N^2 - (\\Sigma_++1) \\frac{N_-^2}{r_1^2} +\\sqrt{3} N_1 \\frac{\\Sigma_- N_+}{r_1^2} \\tag{\\ref{eq:neartaub-b8-q:r}}\\\\\n&=\\frac{-\\Sigma_+}{1-\\Sigma_+}r_1^2\\sin^2\\psi +\\delta_1^2\\frac{\\cos^2\\psi-\\Sigma_+}{1-\\Sigma_+} + N_1\\, h_r\\tag{\\ref{eq:neartaub-b8-t:r}}\\\\\n{\\mathrm{D}}_t \\log \\delta_1 &= N^2-(\\Sigma_++1)\\tag{\\ref{eq:neartaub-b9-q:delta}}\\\\\n&=\\frac{-1}{1-\\Sigma_+}r_1^2\\cos^2\\psi + \\frac{-\\Sigma_+}{1-\\Sigma_+}r_1^2\\sin^2\\psi +\\frac{-\\Sigma_+}{1-\\Sigma_+}\\delta_1^2+ N_1\\,h_\\delta \\tag{\\ref{eq:neartaub-b8-t:delta}}\\\\\n{\\mathrm{D}}_t \\log \\frac{\\delta_1}{r_1} &= -(\\Sigma_+ + 1)\\frac{\\Sigma_-^2}{r_1^2} -\\sqrt{3}N_1\\frac{\\Sigma_- N_+}{r_1^2} \\tag{\\ref{eq:neartaub-b8-q:delta-r}}\\\\\n&=\\frac{-1}{1-\\Sigma_+}r_1^2\\cos^2\\psi - \\delta_1^2\\frac{\\sin^2\\psi}{1-\\Sigma_+} + N_1(h_\\delta-h_r)\\tag{\\ref{eq:neartaub-b8-t:delta-r}}\\\\\n\\psi' &=\\sqrt{3}r_1 \\sqrt{\\cos^2\\psi + \\frac{\\delta_1^2}{r_1^2}} - \\frac{r_1^2+\\delta_1^2}{1-\\Sigma_+} \\cos\\psi \\sin\\psi + N_1\\cos\\psi\\, h_{\\psi},\\tag{\\ref{eq:neartaub-b8-t:psi}}\n\\end{align*}\n\\noindent\nwhere the terms $|h_r|,|h_\\delta|, |h_\\psi|$ are bounded (if $|N_i|$, $\\Sigma_+<0$, $\\Sigma_-$, and 
$\\frac{\\delta_1}{r_1}$ are bounded) and given in \\eqref{eq:neartaub-b8-t-h}, page \\pageref{eq:neartaub-b8-t-h}.\n\nWe use $\\mathcal M= \\{\\bpt x\\in {\\mathbb{R}}^5:\\,G(\\bpt x)=1\\}$, and $\\mathcal M_{{{\\hat{n}}}}\\subset \\mathcal M$ to denote the signs of the three $N_i$, with ${{\\hat{n}}}\\in \\{+,-,0\\}^3$. If we use $\\pm$ in subscripts, the repeated occurences are unrelated, such that $\\mathcal M_{\\pm\\pm\\pm}=\\{\\bpt x\\in \\mathcal M:\\,\\text{all three $N_i\\neq 0$}\\}$. We use the notation $\\mathcal T_i = \\{\\bpt x\\in\\mathcal M:\\, \\langle\\taubp_j \\bpt \\Sigma\\rangle=\\langle\\taubp_k,\\Sigma\\rangle,\\,N_j=N_k\\}$ for the Taub-spaces, where $i,j,k$ are a permutation of $\\{1,2,3\\}$. \n\nWe frequently use the following subsets of $\\mathcal M$ (with the obvious definition for subscripts ${{\\hat{n}}}\\in \\{+,-,0\\}^3$ ):\n\\[\\begin{aligned}\n\\textsc{Basin}[\\epsilon]&=\\{\\bpt x\\in\\mathcal M: \\max_i \\frac{\\delta_i}{r_i}<\\epsilon, \\max_i \\delta_i < \\epsilon\\}\\\\\n{\\textsc{Cap}}[\\epsilon_N, \\epsilon_d]&=\\{\\bpt x\\in \\mathcal M: \\max|N_i|\\ge \\epsilon_N, \\max_i\\delta_i \\le \\epsilon_d\\}\\\\\n{\\textsc{Circle}}[\\epsilon_N, \\epsilon_d]&= \\{\\bpt x\\in \\mathcal M: \\max |N_i|\\le\\epsilon_N,\\, \\max_i\\delta_i \\le \\epsilon_d\\}\\\\\n{\\textsc{Hyp}}[\\varepsilon_\\taubp, \\epsilon_N, \\epsilon_d]&= {\\textsc{Circle}}[\\epsilon_N, \\epsilon_d] \\setminus \\left[B_{\\varepsilon_\\taubp}(\\taubp_1)\\cup B_{\\varepsilon_\\taubp}(\\taubp_2)\\cup B_{\\varepsilon_\\taubp}(\\taubp_3) \\right].\n\\end{aligned}.\n\\]\n\n\\subsection{Properties of the Kasner Map}\\label{sect:appendix-kasner-map}\nWe deferred a detailed discussion of the\nKasner-map $K$ in Section \\ref{sect:lower-b-types}, especially the proof of Proposition \\ref{prop:kasnermap-homeomorphism-class}.\nWe will first give a simple proof of Proposition \\ref{prop:kasnermap-homeomorphism-class}, and then discuss classical ways of describing the Kasner 
map.\n\\begin{figure}[hbpt]\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/graphics\/angle-stuff.pdf}\n \\caption{A graphical proof of Lemma \\ref{lemma:kasnermap-expansion}.} \\label{fig:angle-stuff}\n \\end{subfigure}\n ~~%\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/graphics\/classa-seg-labels.pdf}\n \\caption{The Kasner-parameter in the six segments.}\n \\label{fig:seg-labels}\n \\end{subfigure}\n \\caption{Expansion of the Kasner-map} \n\\end{figure}\n\\paragraph{Expansion of the Kasner-map.}\nWe can see from Figure \\ref{fig:double-cover} (p.~\\pageref{fig:double-cover}) that the Kasner-map is a double-cover, has three fixed points and reverses orientation. From Figure \\ref{fig:angle-stuff}, we can see that $K$ is non-uniformly expanding:\n\n\\begin{lemma}\\label{lemma:kasnermap-expansion}\nConsider the vectorfield $\\partial_{\\mathcal K}(\\bpt p)= (-\\Sigma_-(\\bpt p), +\\Sigma_+(\\bpt p))$. Assume without loss of generality that $\\bpt p_-$ is such that the Kasner-map $\\bpt p_+=K(\\bpt p_-)$ proceeds via the $N_1$-cap, i.e.~$d(\\bpt p_-, -\\taubp_1)<1$ (see Figure \\ref{fig:kasner-segments}).\n\nThen the Kasner-map $K$ is differentiable at $\\bpt p_-$ and we have\n\\[\nK'(\\bpt p_-)= -\\frac{|\\bpt p_+ +2\\taubp_1| }{|\\bpt p_- +2\\taubp_1|}<-1,\\qquad\\text{where $K'(\\bpt p_-)$ is defined by}\\qquad K_*\\partial_{\\mathcal K}(\\bpt p_-)=K'(\\bpt p_-)\\partial_{\\mathcal K}(\\bpt p_+).\n\\]\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma \\ref{lemma:kasnermap-expansion}]\nInformally, differentiability is evident from the construction in Figure \\ref{fig:angle-stuff}. The component of $\\partial_{\\mathcal K}$ which is normal to the line through $\\bpt p_+$, $\\bpt p_-$ and $-2\\taubp_1$ gets just elongated by a factor $\\lambda =\\frac{|\\bpt p_++2\\taubp_1|}{|\\bpt p_-+2\\taubp_1|}$.
The angle between this line and the Kasner-circle, i.e.~$\\partial_{\\mathcal K}$, is constant; therefore, the length of the component tangent to $\\mathcal K$ must get elongated by the same factor. The negative sign is evident from Figure \\ref{fig:angle-stuff}.\n\nFormally, the relation between $\\bpt p_+$ and $\\bpt p_-$ is described by\n\\[\n|\\bpt p_-|=|\\bpt p_+|=1\\qquad \\bpt p_+ +2 \\taubp_1= \\lambda(\\bpt p_-+2\\taubp_1).\n\\]\nSetting $\\bpt p_-=\\bpt p_-(t)$, we obtain a differentiable function $\\lambda=\\lambda(t)$ by the implicit function theorem (as long as $\\langle\\bpt p_+, \\bpt p_-+2\\taubp_1\\rangle \\neq 0$) and obtain (where we use $'$ to denote derivatives with respect to $t$):\n\\[\n\\bpt p_+' = \\lambda'(\\bpt p_-+2\\taubp_1)+\\lambda\\bpt p_-'.\n\\]\nAssuming $\\bpt p_+\\neq \\bpt p_-$, we can set\n\\[\\bpt v = \\frac{\\bpt p_++2\\taubp_1}{|\\bpt p_++2\\taubp_1|}= \\frac{\\bpt p_-+2\\taubp_1}{|\\bpt p_-+2\\taubp_1|}=\\frac{\\bpt p_+-\\bpt p_-}{|\\bpt p_+-\\bpt p_-|},\\]\nand compute the projection to the normal component of $\\bpt v$:\n\\((1-\\bpt v \\bpt v^T)\\bpt p_+' = \\lambda (1-\\bpt v \\bpt v^T)\\bpt p_-',\\) \nsince the vector coefficient of $\\lambda'$ is parallel to $\\bpt v$. Now the vectors $\\bpt p_+'$ and $\\bpt p_-'$ are tangent to the Kasner-circle; letting $J(\\Sigma_+,\\Sigma_-)=(-\\Sigma_-, \\Sigma_+)$ be the unit rotation we can see that\n\\( \\bpt p_+' = \\pm |\\bpt p_+'|J\\bpt p_+ \\) and \\( \\bpt p_-' = \\pm |\\bpt p_-'|J\\bpt p_- \\). 
\nHence\n\\[\\begin{aligned}\n|(1-\\bpt v \\bpt v^T)\\bpt p_+'|^2&=\\left(1-\\langle v, J\\bpt p_+\\rangle^2\\right)|\\bpt p_+'|^2 \n&= \\left(1-\\frac{\\langle\\bpt p_-, J\\bpt p_+\\rangle^2}{|\\bpt p_+-\\bpt p_-|^2}\\right)|\\bpt p_+'|^2\\\\\n|(1-\\bpt v \\bpt v^T)\\bpt p_-'|^2&=\\left(1-\\langle v, J\\bpt p_-\\rangle^2\\right)|\\bpt p_-'|^2 \n&= \\left(1-\\frac{\\langle\\bpt p_+, J\\bpt p_-\\rangle^2}{|\\bpt p_+-\\bpt p_-|^2}\\right)|\\bpt p_-'|^2.\n\\end{aligned}\\]\nBy antisymmetry of the matrix $J$, i.e.~$\\langle\\bpt p_-, J\\bpt p_+\\rangle=-\\langle J\\bpt p_-,\\bpt p_+\\rangle$, we therefore have\n\\(\n|\\bpt p_+'| = \\lambda |\\bpt p_-'|.\n\\)\n\\end{proof}\n\\paragraph{Symbolic Description.}\nFor a given $\\bpt p_0\\in\\mathcal K$, we can symbolically encode the trajectory $(\\bpt p_n)_{n\\in{\\mathbb{N}}}$ (with $\\bpt p_{n+1}=K(\\bpt p_n)$) under the Kasner-map. The easiest way to do so is to encode it by $(s_n)_{n\\in{\\mathbb{N}}}\\in \\{1,2,3\\}^{\\mathbb{N}}$, where $s_n = i$ if $\\bpt p_n\\to\\bpt p_{n+1}$ occurs via the $|N_i|>0$-cap. Then $(s_n)_{n\\in{\\mathbb{N}}}$ has the property that no symbol repeats, i.e.~$s_{n+1}\\neq s_n$ for all $n\\in{\\mathbb{N}}$. We have, however, an ambiguity if $\\bpt p_N=\\taubp_i$ for some $N>0$. If this occurs, then also all later points have $\\bpt p_{N+n}=\\taubp_i$. We chose to allow both encodings $\\bpt p_N=j$ and $\\bpt p_N=k$, as long as the property that no symbol repeats is preserved. 
Factoring out this ambiguity gives us a map\n\\[\\begin{aligned}\n\\Psi: \\mathcal K &\\to \\{(s_{n})_{n\\in{\\mathbb{N}}}\\in\\{1,2,3\\}^{\\mathbb{N}}:\\, \\text{no symbol repeats} \\}\\, \/ \\, \\{(*\\overline{ij}) = (*\\overline{ji})\\},\\\\\n\\Psi(\\bpt p_0)&= (s_n)_{n\\in{\\mathbb{N}}},\\qquad\\text{such that}\\ d(\\bpt p_n, -\\taubp_{s_n})\\le 1\\ \\text{and no symbol repeats},\n\\end{aligned}\\]\nwhere $*$ stands for an arbitrary initial piece and $\\overline{jk}$ stands for a periodic tail $(*\\overline{jk})=(*jkjkjk\\ldots)$.\nThis map $\\Psi$ is continuous (since the Kasner-map is continuous), where we endow the target space $\\{1,2,3\\}^{\\mathbb{N}}\/\\sim$ with the quotient topology of the product topology. Note that, by construction, $\\Psi$ semiconjugates the Kasner-map $K$ to the shift-map $\\sigma$:\n\\[\n\\Psi\\circ K=\\sigma\\circ \\Psi,\\quad\\text{where}\\quad \\sigma: (s_0s_1s_2\\ldots)\\to (s_1s_2\\ldots).\n\\]\n\nIn order to see that $\\Psi$ is a homeomorphism, we construct a continuous inverse. \nDenote the three segments of $\\mathcal K$ as $\\mathcal K_i = \\{\\bpt p\\in\\mathcal K:\\, d(\\bpt p, -\\taubp_i)\\le 1\\}$. We can construct inverse maps $K^{-1}_{ij}:\\mathcal K_j \\to \\mathcal K_i$, such that $K\\circ K^{-1}_{ij} = \\mathrm{id}:\\mathcal K_j \\to \\mathcal K_j$. Then we get an inverse map\n\\[\n\\Psi^{-1}: (s_{n})_{n\\in{\\mathbb{N}}} \\to \\bigcap_{\\ell\\in{\\mathbb{N}}}K^{-1}_{s_0 s_1}K^{-1}_{s_1s_2}\\ldots K^{-1}_{s_{\\ell}s_{\\ell+1}}(\\mathcal K_{s_{\\ell+1}}).\n\\]\nWe now need to show that $\\Psi^{-1}((s_n)_{n\\in{\\mathbb{N}}})=\\{\\bpt p\\}$ is a single point, which depends continuously on $(s_n)_{n\\in{\\mathbb{N}}}$, and is actually the inverse of $\\Psi$.\n\nWe first consider a sequence $(s_n)_{n\\in{\\mathbb{N}}}$ which does not end up in a Taub-point. \n\nIn order to see that $\\Psi^{-1}((s_n)_{n\\in{\\mathbb{N}}})$ is nonempty, note that it is the intersection of a descending sequence of nonempty compact sets.
In order to see that it contains only a single point, note that $K_{ij}^{-1}$ is (nonuniformly) contracting by Lemma \ref{lemma:kasnermap-expansion}; hence the length $\textsc{len}_{\ell}=|K^{-1}_{s_0 s_1}K^{-1}_{s_1s_2}\ldots K^{-1}_{s_{\ell}s_{\ell+1}}(\mathcal K_{s_{\ell+1}})|$ is decreasing. The length $\textsc{len}_{\ell}$ also cannot converge to some limit $\textsc{len}_{\infty}>0$ as $\ell\to\infty$: we have $|K^{-1}_{ij}(I)|< |I|$ for any interval $I$ with $|I|>0$, and by compactness this contraction is uniform over intervals of length at least $\textsc{len}_{\infty}$.\nIn order to see that $\Psi^{-1}$ is continuous, we need to show that $\mathrm{diam}\,\Psi^{-1}\{(r_n)_{n\in{\mathbb{N}}}:\,r_n=s_n\, \forall n\le N\} \to 0$ as $N\to\infty$. This also follows from the previous argument of decreasing lengths.\n\nNext, we consider a sequence $(s_n)_{n\in{\mathbb{N}}}$ which does end up in a Taub-point $\taubp_i$ at $n=N$. Let $(s'_n)_{n\in{\mathbb{N}}}$ denote the other representative of $(s_n)_{n\in{\mathbb{N}}}$, i.e. we changed $(s_0,\ldots,s_{N-1},\overline{jk})\leftrightarrow (s_0,\ldots,s_{N-1},\overline{kj})$. The previous arguments about single-valuedness and continuity still apply for each of the two representatives; we only need to show that $\Psi^{-1}$ coincides for both. This is clear: both nested intersections contain the unique point $\bpt p$ whose orbit realizes the initial symbols $s_0,\ldots,s_{N-1}$ and reaches $\taubp_i$ at step $N$, and being singletons, they coincide.\n\nIt is also obvious by our construction that $\Psi$ and $\Psi^{-1}$ are inverse to each other. Hence $\Psi$ and $\Psi^{-1}$ $C^0$-conjugate $K$ to the shift-map\n\[\n\Psi\circ K\circ\Psi^{-1}=\sigma: (s_0s_1s_2\ldots)\to (s_1s_2\ldots).\n\]\nThe exact same arguments apply in order to conjugate the map $D:{\mathbb{R}}\/3{\mathbb{Z}}\to{\mathbb{R}}\/3{\mathbb{Z}}$, $D:[x]_{3{\mathbb{Z}}}\to [-2x]_{3{\mathbb{Z}}}$ to the same shift, where we replaced $\taubp_i=i$ and $\mathcal K_i = [{i+1}, {i+2}]$. 
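For intuition, the no-repeat property of the symbolic itineraries can be checked numerically on the model map $D$. The following Python sketch (our own illustration, not part of the original argument) iterates $D(x)=-2x \bmod 3$ with exact rational arithmetic and records which of the segments $\mathcal K_i = [i+1,i+2] \bmod 3$ each iterate visits:

```python
from fractions import Fraction

def D(x):
    """The model map D : R/3Z -> R/3Z, x -> -2x (mod 3)."""
    return (-2 * x) % 3

def segments(x):
    """Indices i with x in K_i = [i+1, i+2] (mod 3); boundary points lie in two."""
    return {i for i in (1, 2, 3) if (x - (i + 1)) % 3 <= 1}

x = Fraction(1, 7)          # a sample starting point avoiding the Taub points 0, 1, 2
itinerary = []
for _ in range(50):
    (s,) = segments(x)      # interior points lie in exactly one segment
    itinerary.append(s)
    x = D(x)

# the symbolic itinerary never repeats a symbol: s_{n+1} != s_n
assert all(a != b for a, b in zip(itinerary, itinerary[1:]))
```

Interior points lie in exactly one segment, and consecutive symbols always differ, matching the symbolic description above.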
Hence, the Kasner-map is $C^0$ conjugate to $D$:\n\\begin{hproposition}{\\ref{prop:kasnermap-homeomorphism-class}}\nThere exists a homeomorphism $\\psi:\\mathcal K \\to {\\mathbb{R}}\/3{\\mathbb{Z}}$, such that $\\psi(\\taubp_i)=\\left[i\\right]_{3{\\mathbb{Z}}}$ and\n\\[\n\\psi(K(\\bpt p))=[-2 \\psi(\\bpt p)]_{3{\\mathbb{Z}}}\\qquad\\forall \\bpt p\\in\\mathcal K.\n\\]\n\\end{hproposition}\n\\paragraph{Kasner Eras and Epochs.}\n\\def\\ensuremath{\\textsc{S}}{\\ensuremath{\\textsc{S}}}\n\\def\\ensuremath{\\textsc{L}}{\\ensuremath{\\textsc{L}}}\nA useful and customary description of the symbolic dynamics of $K$ is obtained by distinguishing between ``small'' bounces around a Taub-point, called ``Kasner epochs'' and denoted by the letter \\ensuremath{\\textsc{S}}{} in this work, and ``long'' bounces, called ``Kasner eras'' and denoted by the letter \\ensuremath{\\textsc{L}}{} in this work. The \\ensuremath{\\textsc{S}}{} and \\ensuremath{\\textsc{L}}{} encoding of an orbit can be obtained from previous $\\{1,2,3\\}$-encoding by the map\n\\[\\textsc{EpochEra}: \\{s\\in \\{1,2,3\\}^{\\mathbb{N}}:\\, s_n\\neq s_{n+1}\\}\\to \\{(s_0s_1|r_0r_1\\ldots):\\, s_0,s_1 \\in\\{1,2,3\\},\\,r_n\\in\\{\\ensuremath{\\textsc{S}}, \\ensuremath{\\textsc{L}}\\}\\}\\]\n\\[(s_0s_1s_2\\ldots) \\to (s_0s_1| r_0r_1\\ldots),\\quad\\text{where}\\quad\\left\\{\\begin{aligned}\nr_n &= \\ensuremath{\\textsc{S}} \\quad\\text{if $s_{n}=s_{n+2}$,}\\\\\nr_n &= \\ensuremath{\\textsc{L}} \\quad\\text{if $s_{n}\\neq s_{n+2}$.}\n\\end{aligned}\\right.\\]\nWe can remember the value of $s_n$ as a subindex of $r_n$, such that e.g.\n\\[\n(132121213231\\ldots)\\to(13|\\ensuremath{\\textsc{L}}_1\\ensuremath{\\textsc{L}}_3 \\ensuremath{\\textsc{L}}_2 \\ensuremath{\\textsc{S}}_1 \\ensuremath{\\textsc{S}}_2 \\ensuremath{\\textsc{S}}_1 \\ensuremath{\\textsc{S}}_2 \\ensuremath{\\textsc{L}}_1 \\ensuremath{\\textsc{L}}_3 \\ensuremath{\\textsc{S}}_2 \\ensuremath{\\textsc{L}}_3 *_1\\ldots).\n\\]\nThen the Kasner-map 
becomes\n\\[ (s_0s_1|r_0r_1r_2r_3\\ldots)\\to \\left\\{\\begin{aligned}\n(s_1 i|r_1r_2\\ldots) &\\quad \\text{if $s_0=i$ and $r_0=\\ensuremath{\\textsc{S}}$}\\\\\n(s_1 i|r_1r_2\\ldots) &\\quad \\text{if $s_0\\neq i\\neq s_1$ and $r_0=\\ensuremath{\\textsc{L}}$.}\n\\end{aligned}\\right.\\]\nNote that the first two indices in $\\{1,2,3\\}$ describe in which of the six symmetric segments of $\\mathcal K$ a point lies, see Figure \\ref{fig:seg-labels}.\n\nAnother customary way of writing such sequences is to write every symbol \\ensuremath{\\textsc{L}}{} as a semicolon ``;'' and abbreviate the \\ensuremath{\\textsc{S}}{} symbols in between by just their number, such that the previous example becomes\n\\[\n(13|\\ensuremath{\\textsc{L}}_1 \\ensuremath{\\textsc{L}}_3 \\ensuremath{\\textsc{L}}_2 \\ensuremath{\\textsc{S}}_1 \\ensuremath{\\textsc{S}}_2 \\ensuremath{\\textsc{S}}_1 \\ensuremath{\\textsc{S}}_2 \\ensuremath{\\textsc{L}}_1 \\ensuremath{\\textsc{L}}_3 \\ensuremath{\\textsc{S}}_2 \\ensuremath{\\textsc{L}}_3 *_1\\ldots)\\to \n(13|0;0; 0; 4 ; 0 ; 1 ; \\ldots).\n\\]\n\\paragraph{The Kasner-Parameter.}\nThere exists an explicit coordinate transformation, related to the continued fraction expansion, which realizes the conjugacy to the shift-space. This is done via the so-called \\emph{Kasner-parameter} $u$, and is the most standard way of discussing the Kasner-map. \n\nIn order to introduce the Kasner-parameter, it is useful to use the coordinates which make the permutation symmetry of the indices more apparent. 
This is done via\n\\[\\Sigma_i = 2\\langle \\taubp_i, \\bpt \\Sigma\\rangle \\qquad\n\\Sigma_+ = -\\frac 1 2 \\Sigma_1 \\qquad\n\\Sigma_- = \\frac{1}{2\\sqrt{3}}(\\Sigma_3-\\Sigma_2).\\]\nThese variables are constrained by $\\Sigma_1+\\Sigma_2+\\Sigma_3=0$ and have $\\Sigma_1^2+\\Sigma_2^2+\\Sigma_3^2 =6(\\Sigma_+^2+\\Sigma_-^2)$.\n\n\nThe parametrization of $\\mathcal K$ depends not only on a real parameter $u\\in{\\mathbb{R}}$, but also on a permutation $(i,j,k)$ of $\\{1,2,3\\}$ and is given by $\\bpt \\Sigma=\\bpt\\Sigma(u, (ijk))$ such that\n\\[\\begin{aligned}\n\\Sigma_i &= -1 + 3\\frac{-u}{u^2+u+1} &\n\\Sigma_j &= -1 + 3\\frac{u+1}{u^2+u+1} &\n\\Sigma_k &= -1 + 3\\frac{u^2+u}{u^2+u+1}.\n\\end{aligned}\\]\nWe can immediately observe that $\\langle \\taubp_1+\\taubp_2+\\taubp_3, \\bpt \\Sigma\\rangle=0$, as it should be; hence, the above really defines a function $\\psi:{\\mathbb{R}}\\times \\textsc{sym}_3\\to {\\mathbb{R}}^2$, where $\\textsc{sym}_3$ is the set of permutations of $\\{1,2,3\\}$. \n Direct calculation shows that $\\Sigma_1^2+\\Sigma_2^2+\\Sigma_3^2=6$ for all $u\\in{\\mathbb{R}}$.\nHence, the $u$ coordinates do actually parametrize the Kasner circle. We have the noteworthy symmetry properties\n\\[\\begin{array}{rlll}\n\\bpt \\Sigma\\left(u, (ijk)\\right)\n&=\\bpt \\Sigma\\left(u^{-1}, (ikj)\\right) \n&=\\bpt \\Sigma\\left(-(u+1), (jik)\\right)\n&= \\bpt\\Sigma\\left(\\frac{-1}{u+1}, (jki)\\right)\\\\\n&= \\bpt\\Sigma\\left(-\\frac{u}{u+1}, (kji)\\right)\n&= \\bpt\\Sigma\\left(-\\frac{u+1}{u}, (kij)\\right).&\n\\end{array}\\]\nWe can use the symmetry to normalize $u$ to a value $u\\in[1,\\infty]$, thus giving a parametrization of the Kasner circle as in Figure \\ref{fig:seg-labels}.\n\n\\paragraph{The Kasner-map in $u$-coordinates.}\nWe consider without loss of generality a heteroclinic orbit $\\gamma\\subseteq \\mathcal M_{+00}$. 
In the $\\bpt \\Sigma$-projection, this heteroclinic orbit is a straight line through $-2\\taubp_1$; hence, the quotient $\\frac{\\Sigma_-}{2-\\Sigma_+}$ stays constant. It is given by \n\\[\n\\frac{\\Sigma_-}{2-\\Sigma_+}(u)=\\sqrt{3}\\frac{u^2-1}{4(u^2+u+1)-(u^2+4u+1)} = \\frac{\\sqrt{3}}{3}\\frac{u^2-1}{u^2+1}.\n\\]\nIf we write the $\\alpha$-limit of $\\gamma$ with respect to the $123$ permutation, then we must have $u\\in [0,\\infty]$ (because $N_1$ would not be unstable otherwise). Then the Kasner-map must be given by $(u,(123))\\to (-u,123)$. Assume that $u\\in[2,\\infty]$; then we can renormalize $K(u)$, such that $K(u, 123)= (u-1, 213)$. If instead $u\\in [1,2]$, then we can renormalize such that $K(u,123)=\\left(\\frac{1}{u-1},231\\right)$. Applying symmetrical arguments for the other caps yields the following way of writing the Kasner map:\n\\[\\begin{gathered}\n\\mathcal K = \\textsc{sym}_3\\times [1,\\infty] \\,\\big\/\\, \\left\\{ (1, ijk)=(1,ikj), (\\infty, ijk)=(\\infty, jik) \\right\\}\\\\\nK:\\mathcal K\\to \\mathcal K\\quad (u, ijk)\\to \n\\left\\{\\begin{aligned}\n(u-1, jik) &\\qquad \\text{if $u\\in [2,\\infty]$}\\\\\n\\left(\\frac{1}{u-1}, jki\\right)&\\qquad \\text{if $u\\in [1,2]$}.\n\\end{aligned}\\right.\n\\end{gathered}\\]\nNote that the Kasner-map is actually well-defined and continuous at $u=2$, due to the identification $(1,ijk)=(1,ikj)$. It is also well-defined at $u=1$, since the identifications $(1,ijk)=(1,ikj)$ and $(\\infty, ijk)=(\\infty, jik)$ are compatible. It is continuous at $u=1$, since we use the usual compactification at $u=\\infty$, such that a neighborhood basis of $(\\infty, ijk)$ is given by $\\{([R,\\infty], ijk)\\cup ([R,\\infty], jik)\\}_{R \\gg 0}$. 
Likewise, the Kasner-map is well-defined and continuous at the three fixed-points $u=\infty$, due to the identification $(\infty, ijk)=(\infty, jik)$ and the compactification at $u=\infty$.\n\paragraph{Symbolic description in $u$-coordinates.}\nThe two local inverses of the Kasner-map in $u$-coordinates are given by\n\[\begin{aligned}\n\ensuremath{\textsc{S}}: (u,ijk)&\to (u+1, jik)\\\n\ensuremath{\textsc{L}}: (u,ijk) &\to \left(1+\frac{1}{u}, kij\right).\n\end{aligned}\]\nUsing the same construction as in the proof of Proposition \ref{prop:kasnermap-homeomorphism-class}, we see that the inverse coding map $\textsc{Cfe}^{-1}$ is given by the continued fraction expansion\n\[\n\textsc{Cfe}^{-1}:(ij|a_0;a_1;a_2;\ldots)\to \left(1+a_0 + \frac{1}{1+a_1+\frac{1}{1+a_2+\ldots}}\,,\, ijk\right),\n\]\nand the Kasner-map is given by\n\[\n\textsc{Cfe}\circ K\circ \textsc{Cfe}^{-1}: (ij|a_0;a_1;a_2;\ldots)\to\left\{\begin{array}{ll}\n(ji\,|\,a_0-1;a_1;a_2;\ldots) &\quad\text{if $a_0>0$}\\\n(jk\,|\,a_1;a_2;\ldots) &\quad\text{if $a_0=0$}.\n\end{array}\right.\n\]\n\n\subsection{Transformation of Volumes on Manifolds}\label{sect:general-vol-app}\nThis section contains some basic facts about the transformation of volumes under flows.\n\n\paragraph{Volume Transformation.}\nLet $M$ be an $n$-dimensional differentiable manifold.\nIn the language of differential forms, we can write the transformation law for a non-vanishing volume-form $\omega$ under a diffeomorphism $\Phi:M\to M$ as\n\[\begin{aligned}\n\lambda(\bpt x) = \frac{\Phi^*\omega(\bpt x)}{\omega(\bpt x)}\\\n\mathrm{vol}_\omega(\Phi(U)) = \int_{\Phi(U)}\omega = \int_U \Phi^*\omega = \int_U \lambda \omega.\n\end{aligned}\]\nLet $X_1,\ldots, X_n$ be a frame, i.e.~a set of vectorfields which form a basis of $T_{\bpt x}M$ at every point $\bpt x$. 
Then we can write the density of $\omega$ as $\rho=\omega[X_1,\ldots, X_n]$ and observe that $\omega[Y_1,\ldots, Y_n]= \rho \det (a_{ij})$, where $Y_i = \sum_j a_{ij}X_j$. This allows us to write \n\[\n\lambda(\bpt x) \rho(\bpt x) = \Phi^*\omega [X_1,\ldots X_n] = \omega[\Phi_*X_1,\ldots \Phi_*X_n] = \rho(\Phi(\bpt x))\det J,\n\]\nwhere $J$ is the Jacobian, i.e.~the matrix of ${\mathrm{D}}_x \Phi(\bpt x):T_{\bpt x}M\to T_{\Phi(\bpt x)}M$ with respect to the bases $X_1(\bpt x),\ldots X_n(\bpt x)$ and $X_1(\Phi(\bpt x)),\ldots X_n(\Phi(\bpt x))$, i.e. $J=J_{ij}$ with $\Phi_*X_i(\Phi(\bpt x)) = {\mathrm{D}}_x \Phi(\bpt x)\cdot X_i(\bpt x) = \sum_{j}J_{ij} X_j(\Phi(\bpt x))$.\n\n\paragraph{Volume Transformation under flows.}\nWe study the behaviour of a volume-form $\omega$ under a flow $\phi:M\times {\mathbb{R}}\to M$ corresponding to a vectorfield $f:M\to TM$; we are interested in $\lambda(\bpt x, t)=\frac{\phi^*(\bpt x, t)\omega}{\omega}$.\n\nGiven a set of coordinates $x_1,\ldots, x_n$ and the vectorfield $f=f_1 \partial_1+\ldots+ f_n\partial_n$ and volume-form $\omega=\rho {\mathrm{d}} x_1\land \ldots \land {\mathrm{d}} x_n$ we can therefore write, using ${\mathrm{D}}_x\phi$ as a shorthand for the Jacobian with respect to the basis $\partial_1,\ldots, \partial_n$:\n\[\begin{aligned}\n\lambda(\bpt x, t)&=\frac{\phi^*(\bpt x, t)\omega}{\omega} = \frac{\rho(\phi(\bpt x, t))}{\rho(\bpt x)}\det {\mathrm{D}}_x \phi(\bpt x, t)\\\n\frac{{\mathrm{d}}}{{\mathrm{d}} t} \lambda(\bpt x, t)& = \frac{({\mathrm{D}}_f \rho)(\phi(\bpt x, t))}{\rho(\bpt x)}\det {\mathrm{D}}_x \phi(\bpt x, t) + \frac{\rho(\phi(\bpt x, t))}{\rho(\bpt x)}{\mathrm{D}}_t \det {\mathrm{D}}_x \phi(\bpt x, t)\\\n&= \lambda(\bpt x, t){\mathrm{D}}_f \log \rho(\phi(\bpt x, t))+ \frac{\rho(\phi(\bpt x, t))}{\rho(\bpt x)}\mathrm{tr}\left[({\mathrm{D}}_t{\mathrm{D}}_x\phi(\bpt x, 
t)){\\mathrm{D}}_x\\phi(\\bpt x, t)^{-1}\\right] \\det {\\mathrm{D}}_x \\phi(\\bpt x, t)\\\\\n&= \\lambda(\\bpt x, t)\\left({\\mathrm{D}}_f \\log \\rho(\\phi(\\bpt x, t)) + \\mathrm{tr}{\\mathrm{D}}_x f(\\phi(\\bpt x, t)){\\mathrm{D}}_x\\phi(\\bpt x, t)^{-1}\\right)\\\\\n&= \\lambda(\\bpt x, t)\\left({\\mathrm{D}}_f \\log \\rho(\\phi(\\bpt x, t)) + \\mathrm{tr}\\partial_x f(\\phi(\\bpt x, t))\\right)\\\\\n&= \\lambda(\\bpt x, t)\\left({\\mathrm{D}}_f \\log \\rho(\\phi(\\bpt x, t)) + \\sum_i (\\partial_i f_i)(\\phi(\\bpt x, t))\\right),\n\\end{aligned}\\]\nwhere we used the general formula for differentiable families $A:{\\mathbb{R}}\\to {\\mathbb{R}}^{n\\times n}$ of invertible matrices:\n\\[\\begin{aligned}\n{\\mathrm{D}}_{t|t=0} \\det A(t) &= {\\mathrm{D}}_{t|t=0}\\det A(t)A(0)^{-1}A(0)\\\\ \n&=\\det A(0) {\\mathrm{D}}_{t|t=0}\\det A(t)A(0)^{-1} = \\det A(0)\\mathrm{tr}\\,{\\mathrm{D}}_{t|t=0}A(t)A(0)^{-1}.\n\\end{aligned}\\]\n\n\\paragraph{Restriction to Iso-Surfaces.}\nLet $\\Phi:M\\to M$ be a diffeomorphism on an $n$-dimensional manifold $M$ with volume form $\\omega$, which gets transported by $\\lambda= \\frac{\\Phi^*\\omega}{\\omega}$.\nSuppose that $G:M\\to {\\mathbb{R}}$ is a preserved quantity under $\\Phi$, i.e.~$G(\\Phi(\\bpt x))=G(\\bpt x)$ and suppose that $1$ is a regular value of $G$. Now we are interested in the isosurface $M_1 = \\{\\bpt x\\in M: G(\\bpt x) =1\\}$ and in a volume form $\\omega_0$ on $M_1$, which gets transported by the same $\\lambda$. This can be realized by choosing some vectorfield $X:M_1\\to TM$ with ${\\mathrm{D}}_X G=\\mathrm{const}=1$ and setting $\\omega_1=\\iota_X \\omega$. 
Since $G$ is preserved, we have $\Phi_*X-X \in T M_1$ and hence for a basis $X_1,\ldots,X_{n-1}$ of $TM_1$:\n\[\begin{aligned}\n\lambda \iota_X \omega[X_1,\ldots, X_{n-1}] &= \lambda\omega[X,X_1,\ldots,X_{n-1}]=\omega[\Phi_*X,\Phi_*X_1,\ldots,\Phi_*X_{n-1}]\\\n&=\omega[X,\Phi_*X_1,\ldots,\Phi_*X_{n-1}]+\omega[\Phi_*X-X,\Phi_*X_1,\ldots,\Phi_*X_{n-1}]\\\n&=\Phi^*(\iota_X\omega)[X_1,\ldots,X_{n-1}],\n\end{aligned}\]\nwhere the second summand vanishes, since all of its $n$ arguments lie in the $(n-1)$-dimensional space $TM_1$.\nIt is clear that the induced volume $\omega_1$ does not depend on the choice of $X$, up to possibly a sign.\n\paragraph{Contraction with invariant vectorfields.}\nLet $\Phi:M\to M$ be a diffeomorphism on an $n$-dimensional manifold $M$ with volume form $\omega$, which gets transported by $\lambda= \frac{\Phi^*\omega}{\omega}$.\nSuppose that the vectorfield $Y:M\to TM$ is a preserved quantity under $\Phi$, i.e.~$\Phi_*Y=Y$. Then the contracted volume-form $\omega_1=\iota_Y\omega$ has $\Phi^*\omega_1=\lambda\omega_1$:\n\[\begin{aligned}\n\Phi^*\omega_1[X_2,\ldots,X_n]&=\omega[Y, \Phi_*X_2,\ldots,\Phi_*X_n]=\omega[\Phi_*Y,\Phi_*X_2,\ldots,\Phi_*X_n]\\\n&=\lambda \omega[Y,X_2,\ldots,X_n]=\lambda\omega_1[X_2,\ldots,X_n].\n\end{aligned}\]\nSuppose that $\phi:M\times {\mathbb{R}}\to M$ is a flow, $U\subseteq M$ is open and $T:U\to {\mathbb{R}}$ is smooth. Consider $\Phi:\bpt x\to \phi(\bpt x, T(\bpt x))$. 
Then $\\Phi_* X = \\phi(\\cdot, T(\\cdot))_* X + Y{\\mathrm{D}}_X T$, and hence\n\\[\\begin{aligned}\n\\Phi^*\\omega_1[X_2,\\ldots,X_n]=\\omega[Y, \\Phi_*X_2+ Y{\\mathrm{D}}_{X_2} T,\\ldots,\\Phi_*X_n+Y{\\mathrm{D}}_{X_n} T]=\\lambda \\omega_1[X_2,\\ldots,X_n].\n\\end{aligned}\\]\n\\subsection{Derivation of the Wainwright-Hsu equations}\\label{sect:append:derive-eq}\nThe goal of this section is to connect the Einstein field equations of general relativity to the Wainwright-Hsu equations \\eqref{eq:ode} discussed in this work.\n\n\\subsubsection{Spatially Homogeneous Spacetimes}\\label{sect:homogeneous}\nWe are interested in spatially homogeneous spacetimes. We assume that $(M^4,g)$ is a Lorentz-manifold, and we have a symmetry adapted co-frame: $\\{\\omega_1,\\omega_2,\\omega_3, {\\mathrm{d}} t\\}$, corresponding to a frame of vectorfields $\\{e_0,e_1,e_2,e_3\\}$, such that $e_1,e_2,e_3$ are Killing, i.e.~the metric depends only on $t$. We thus assume that the metric has the form\n\\[\ng = g_{00}(t){\\mathrm{d}} t\\otimes {\\mathrm{d}} t + g_{11}(t)\\omega_1\\otimes \\omega_1 + g_{22}(t)\\omega_2\\otimes \\omega_2 + g_{33}(t)\\omega_3\\otimes \\omega_3,\n\\]\nwhere $g_{00}<0$ and the other three $g_{ii}>0$. 
The spatial homogeneity is described by the commutators of the three Killing fields $e_1,e_2,e_3$; we assume that it is given (for positive permutations $(i,j,k)$ of $(1,2,3)$) by:\n\\[\n[e_i, e_j] = \\gamma_{ij}^k e_k=\\hat n_k e_k\n\\qquad\n{\\mathrm{d}} \\omega_i = -\\hat n_i \\omega_j\\land \\omega_k,\n\\]\nwhere $\\hat n_i \\in \\{-1, 0, +1\\}$ describe the Bianchi type of the surfaces $\\{t=\\textrm{const}\\}$ of spatial homogeneity.\n\n\n\\paragraph{General equations for the Christoffel symbols.}\nThe general equations for Christoffel symbols are given by:\n\\[\\begin{aligned}\n\\nabla_{e_i}e_j &= \\sum_k \\Gamma_{ij}^k e_k\\\\\n[e_i, e_j] &= \\nabla_{e_i}e_j - \\nabla_{e_j}e_i\\quad\\Rightarrow\\quad\n\\Gamma_{ij}^k -\\Gamma_{ji}^k = \\gamma_{ij}^k\\\\\n{\\mathrm{D}}_{e_i} g(e_j, e_k) &= \\partial_{e_i}g_{jk}\n=\\sum_{\\ell} g_{\\ell k}\\Gamma_{ij}^{\\ell} + g_{\\ell j}\\Gamma_{ik}^\\ell\\\\\n\\partial_{e_i}g_{jk} + \\partial_{e_j}g_{ik} - \\partial_{e_k}g_{ij} \n&=\\sum_{\\ell} g_{\\ell k} \\Gamma_{ij}^\\ell + g_{\\ell j}\\Gamma_{ik}^\\ell \n+g_{\\ell k} \\Gamma_{ji}^\\ell + g_{\\ell i}\\Gamma_{jk}^\\ell \n- g_{\\ell i} \\Gamma_{kj}^\\ell - g_{\\ell j}\\Gamma_{ki}^\\ell\\\\\n&=\\sum_{\\ell} g_{\\ell i}\\gamma^{\\ell}_{jk} + g_{\\ell j}\\gamma_{ik}^\\ell + g_{\\ell k}\\gamma_{ji}^\\ell + 2 g_{\\ell k}\\Gamma_{ij}^\\ell\n\\end{aligned}\\]\nWe can solve this for the Christoffel symbols by multiplying with the inverse metric\n\\newcommand\\myrebind[6]{\n\\gdef\\iA{#1}\\gdef\\iB{#2}\\gdef\\iC{#3}\\gdef\\iD{#4}\\gdef\\iE{#5}\\gdef\\iF{#6}\n}\n\\myrebind{j}{i}{k}{\\ell}{n}{*}\n\\[\\begin{aligned}\n\\Gamma_{ij}^k&=\n\\frac 1 2 g^{k \\ell} \\left(\n\\partial_{e_i}g_{j\\ell} + \\partial_{e_j}g_{i\\ell} - \\partial_{e_\\ell}g_{ij} -\\sum_{n} g_{n i}\\gamma^{n}_{j\\ell} - g_{n j}\\gamma_{i\\ell}^n - g_{n \\ell}\\gamma_{ji}^n\\right)\\\\\n&=\\frac 1 2 g^{k k} \\left(\n\\partial_{e_i}g_{jk} + \\partial_{e_j}g_{ik} - \\partial_{e_k}g_{ij} - g_{i i}\\gamma^{i}_{jk} - g_{j 
j}\\gamma_{ik}^j - g_{k k}\\gamma_{ji}^k\\right),\n\\end{aligned}\\]\nwhere we used the fact that the metric is diagonal in the last equation.\n\\paragraph{Christoffel symbols for spatially homogeneous space times.}\nIf we insert the indices into this equation, we obtain up to index permutations the following non-vanishing Christoffel symbols:\n\\[\\begin{aligned}\n\\nabla_{e_0}e_0 &= \\Gamma_{00}^0 e_0 & \\Gamma_{00}^0 &= \\phantom{-}\\frac 1 2 g^{00}\\partial_{e_0} g_{00}\\\\\n\\nabla_{e_1}e_2 &= \\Gamma_{12}^3 e_3 & \\Gamma_{12}^3 &= -\\frac 1 2 g^{33}\\left(\\;g_{11}\\gamma^{1}_{23}+ g_{22}\\gamma_{13}^2 + g_{33}\\gamma_{21}^3\\right)\\\\\n& & &=\\phantom{-}\\frac 1 2 g^{33}\\left(g_{11}\\hat n_1 \\;- g_{22}\\hat n_2 \\;- g_{33}\\hat n_3\\right)\\\\\n\\nabla_{e_0}e_1 &= \\Gamma_{01}^1 e_1 = \\Gamma_{10}^1 e_1 & \\Gamma_{10}^1 &=\\phantom{-}\\frac 1 2 g^{11}\\partial_{e_0}g_{11}\\\\\n\\nabla_{e_1}e_1 &= \\Gamma_{11}^0 e_0 & \\Gamma_{11}^0 &= -\\frac 1 2 g^{00}\\partial_{e_0}g_{11}\n\\end{aligned}\\]\n\\paragraph{Extrinsic Curvature of surfaces of homogeneity.} %\nThe Weingarten-map $K_{i}^{\\ j}$ and second fundamental form $K_{ij}$ of the surfaces of spatial homogeneity given, up to permutation, by:\n\\[\\begin{aligned} \nK(e_1)&=\\nabla_{e_1}\\frac{1}{\\sqrt{-g_{00}}}e_0 = \\sqrt{-g^{00}}\\Gamma_{10}^1 e_{1}\n\\qquad K_i^{ j}= \\delta_{i}^{j}\\sqrt{-g^{00}}\\Gamma_{i0}^i\\\\\nK_{ij}&= g(e_i, \\nabla_{e_j}\\sqrt{-g^{00}}e_0)= \\sqrt{-g_{00}}\\Gamma_{ji}^0 =\\sum_{k} g_{k i}\\sqrt{-g^{00}}\\Gamma_{j0}^k,\\quad{\\text{i.e.,}}\\\\\n\\Gamma_{10}^1 &= \\sqrt{-g_{00}} K_1^1, \\qquad\\qquad \\Gamma_{11}^0 = \\sqrt{-g^{00}}g_{11}K_{1}^1\n\\end{aligned}\\]\n\nThe extrinsic curvature corresponds to the normalized time-derivative of the spatial coefficients of the metric:\n\\[\\begin{aligned}\n\\sqrt{-g^{00}}\\nabla_{e_0} \\sqrt{g_{ii}} &= \\sqrt{-g^{00}}\\frac 1 2 \\sqrt{g_{ii}}g^{jj}\\nabla_{e_0}g_{kk}=\\sqrt{-g^{00}}\\Gamma_{i0}^i \\sqrt{g_{ii}} = 
K_{i}^i\n\end{aligned}\]\n\paragraph{Riemannian Curvature.}\nThe Riemannian curvature tensor is given by the equation\n\[\begin{aligned}\n \sum_{\ell}R_{ijk}^{\ \ \ \ell} e_{\ell} &= \left(\nabla_{e_i}\nabla_{e_j}-\nabla_{e_j}\nabla_{e_i} -\nabla_{[e_i,e_j]}\right) e_k\\\n &=\sum_{\ell, n}\left(\Gamma_{jk}^n\Gamma_{in}^\ell - \Gamma_{ik}^n\Gamma_{jn}^\ell + \nabla_{e_i}\Gamma_{jk}^\ell - \nabla_{e_j}\Gamma_{ik}^\ell - \gamma_{ij}^n\Gamma_{nk}^\ell\right)e_{\ell}.\n\end{aligned}\]\nIf we lower the last index, we have\n\(\nR_{ijk\ell}=g((\nabla_{e_i}\nabla_{e_j}-\nabla_{e_j}\nabla_{e_i}-\nabla_{[e_i,e_j]}) e_k, e_\ell).\n\) \nTogether with $(\nabla_{e_i}\nabla_{e_j}-\nabla_{e_j}\nabla_{e_i}-\nabla_{[e_i,e_j]}) g(e_k,e_\ell)=0$, this makes apparent the anti-symmetries\n\(\nR_{ijk\ell}=-R_{jik\ell}=R_{ji\ell k}=-R_{ij\ell k}.\n\)\n\nInserting the indices gives us the following potentially non-vanishing terms of the Riemann tensor, up to permutation and anti-symmetry, which are relevant for the Ricci curvature:\n\[\begin{aligned}\nR_{121}^{\ \ \ 2} &= \Gamma_{21}^3\Gamma_{13}^2 - \Gamma_{11}^0\Gamma_{20}^2 - \gamma_{12}^3\Gamma_{31}^2\\\nR_{010}^{\ \ \ 1} &= \Gamma_{10}^1\Gamma_{01}^1 - \Gamma_{00}^0\Gamma_{10}^1 + \nabla_{e_0}\Gamma_{10}^1.\n\end{aligned}\]\nRaising and using $\widetilde R$ for the intrinsic curvature of the surfaces of homogeneity gives us\n\[\begin{aligned}\nR_{12}^{\ \ 1 2} &= g^{11}\Gamma_{21}^3\Gamma_{13}^2- \gamma_{12}^3\Gamma_{31}^2g^{11} - g^{11}\Gamma_{11}^0\Gamma_{20}^2 \\\n&= \widetilde{R}_{12}^{\ \ 1 2} -K_{1}^1 K_2^2\\\nR_{01}^{\ \ 0 1} &= g^{00}\Gamma_{10}^1\Gamma_{01}^1 - g^{00}\Gamma_{00}^0\Gamma_{10}^1 + g^{00}\nabla_{e_0}\Gamma_{10}^1\\\n&= -K_1^1 K_1^1 + \sqrt{-g^{00}}\Gamma_{00}^0 K_1^1 +g^{00} \nabla_{e_0} \sqrt{-g_{00}}K_1^1\\\n&=-K_1^1 K_1^1 
-\\sqrt{-g^{00}}\\nabla_{e_0}K_1^1.\n\\end{aligned}\\]\nSetting\n\\[\\begin{aligned}\nn_i&=\\hat n_i \\sqrt{g^{11}g^{22}g^{33}}g_{ii} = \\hat n_i \\widetilde n_i,\\quad\\text{i.e.}\\quad\n\\widetilde n_i = \\sqrt{g^{11}g^{22}g^{33}}g_{ii},\n\\end{aligned}\\]\nthe spatial curvature is given by\n\\[\\begin{aligned}\n\\widetilde R_{12}^{\\ \\ 12} &=g^{11}\\Gamma_{21}^3\\Gamma_{13}^2- g^{11}\\gamma_{12}^3\\Gamma_{31}^2\n= g^{11}\\left(\\Gamma_{21}^3\\Gamma_{13}^2 - \\Gamma_{12}^3\\Gamma_{31}^2 + \\Gamma_{21}^3\\Gamma_{31}^2\\right)\\\\\n&=\\frac 1 4\\left(-n_1^2 -n_2^2 +3n_3^2 +2n_1n_2 -2n_2n_3 -2n_3n_1\\right)\\\\\n\\widetilde R_{1}^{\\ 1}&= \\widetilde R_{12}^{\\ \\ 12}+ \\widetilde R_{13}^{\\ \\ 13}\n=\\frac 1 2\\left(-n_1^2 + n_2^2+n_3^2 -2 n_2n_3 \\right)\\\\\n\\widetilde R &= \\widetilde R_{1}^{\\ 1} + \\widetilde R_{2}^{\\ 2} + \\widetilde R_{3}^{\\ 3}\n= \\frac 1 2(n_1^2+n_2^2+n_3^2 -2(n_1n_2+n_2n_3+n_3n_1)).\n\\end{aligned}\\]\n\\subsubsection{The Einstein Field equation}\\label{sect:gr-efe}\nThe Einstein equations in vacuum state that the space-time is Ricci-flat, i.e.~\n\\[\\begin{aligned}\nR_{0}^{\\ 0} &= R_{01}^{\\ \\ 01}+R_{02}^{\\ \\ 02}+R_{02}^{\\ \\ 02} =0\\\\\nR_{1}^{\\ 1} &= R_{01}^{\\ \\ 01}+R_{12}^{\\ \\ 12}+R_{13}^{\\ \\ 13} =0\\\\\nR_{2}^{\\ 2} &= R_{02}^{\\ \\ 02}+R_{21}^{\\ \\ 21}+R_{23}^{\\ \\ 23} =0\\\\\nR_{3}^{\\ 3} &= R_{03}^{\\ \\ 03}+R_{31}^{\\ \\ 31}+R_{32}^{\\ \\ 32} =0.\n\\end{aligned}\\]\nAdding the last three equations and subtracting the first gives us an equation which does not contain time-derivatives of $K$ and hence is a constraint equation, called the ``Gauss constraint''. 
It is given by\n\\[\\begin{aligned}\n0 &= R_{12}^{\\ \\ 12} +R_{23}^{\\ \\ 23} + R_{31}^{\\ \\ 31} = \\frac 1 2 \\widetilde R - K_{1}^1K_2^2 - K_2^2K_3^3 - K_3^3K_1^1.\n\\end{aligned}\\]\nThe evolution equations are given by\n\\[\\begin{aligned}\n\\sqrt{-g^{00}}\\nabla_{e_0}\\widetilde n_i &= \\sqrt{-g^{00}} \\nabla_{e_0} \\sqrt{g_{ii}g^{jj}g^{kk}}\n= \\sqrt{-g^{00}} (\\Gamma_{i0}^i -\\Gamma_{j0}^j -\\Gamma_{k0}^k)\\widetilde n_i= (K_i^i -K_j^j - K_k^k)\\widetilde n_i,\n\\end{aligned}\\]\nand\n\\[\\begin{aligned}\nR_{0i}^{\\ \\ 0i} &= -R_{ij}^{\\ \\ ij}-R_{ik}^{\\ \\ ik}=R_{jk}^{\\ \\ jk}\\\\\n\\sqrt{-g^{00}}\\nabla_{e_0} K_i^i &= -K_i^iK_i^i + K_{j}^j K_k^k - \\widetilde R_{jk}^{\\ \\ jk}.\n\\end{aligned}\\]\n\n\n\\paragraph{Trace-free formulation.}\nIt is useful to split the variables into their trace and trace-free parts:\n\\[\\begin{aligned}\nH &= \\frac 1 3(K_1^1+K_2^2+K_3^3),\\qquad\\text{i.e.~$H$ is the mean curvature of $\\{t=\\mathrm{const}\\}$}\\\\ \n\\sigma_i &= K_i^i -H\\\\\n\\frac 1 6 \\widetilde R &= \\frac 1 3\\left(\\widetilde R_{12}^{\\ \\ 12}+\\widetilde R_{23}^{\\ \\ 23}+\\widetilde R_{31}^{\\ \\ 31}\\right)\n= \\frac 1 {12}\\left(n_1^2+n_2^2+n_3^2 -2(n_1n_2+n_2n_3+n_3n_1)\\right)\\\\\ns_i &= \\widetilde R_{jk}^{\\ \\ jk}-\\frac 1 6 \\widetilde R\n= \\frac 1 {3}\\left(2n_i^2 - n_j^2 - n_k^2 - n_in_j +2 n_jn_k - n_kn_i \\right),\n\\end{aligned}\\]\nwhere we note that $(\\sigma_1+\\sigma_2+\\sigma_3)^2 = 0=\\sigma_1^2+\\sigma_2^2+\\sigma_3^2 +2(\\sigma_1\\sigma_2+\\sigma_2\\sigma_3+\\sigma_3\\sigma_1)$.\nWe then obtain\n\\[\\begin{aligned}\n0 & = \\frac 1 2 \\widetilde R - (\\sigma_1+H)(\\sigma_2+H) - (\\sigma_2+H)(\\sigma_3+H) - (\\sigma_1+H)(\\sigma_3+H)\\\\\n&= \\frac 1 2 \\widetilde R - 3H^2 - (\\sigma_1\\sigma_2+\\sigma_2\\sigma_3+\\sigma_3\\sigma_1)\\\\\n&= \\frac 1 2 \\widetilde R - 3H^2 +\\frac 1 2 (\\sigma_1^2+\\sigma_2^2+\\sigma_3^2)\\\\\n\\sqrt{-g^{00}}\\nabla_{e_0}\\widetilde n_i &= (2\\sigma_i-H)\\widetilde 
n_i\\\n\sqrt{-g^{00}}\nabla_{e_0} H &= \frac 1 3 \Big[\;-(\sigma_1+H)^2 -(\sigma_2+H)^2 -(\sigma_3+H)^2\\\n&\qquad+(\sigma_1+H)(\sigma_2+H) +(\sigma_2+H)(\sigma_3+H) + (\sigma_3+H)(\sigma_1+H) \\\n&\qquad- \widetilde R_{23}^{\ \ 23} - \widetilde R_{13}^{\ \ 13} - \widetilde R_{12}^{\ \ 12}\Big]\\\n&=-\frac 1 2(\sigma_1^2+\sigma_2^2+\sigma_3^2) -\frac 1 6 \widetilde R\n= -\frac 1 3(\sigma_1^2+\sigma_2^2+\sigma_3^2) - H^2 \\\n\sqrt{-g^{00}}\nabla_{e_0} \sigma_1 &= -(\sigma_1+H)^2 + (\sigma_2+H)(\sigma_3+H) - \widetilde R_{23}^{\ \ 23} - \sqrt{-g^{00}}\nabla_{e_0}H\\\n&=\sigma_2\sigma_3-\sigma_1^2 +(\sigma_2+\sigma_3-2\sigma_1)H - \widetilde R_{23}^{\ \ 23} - \sqrt{-g^{00}}\nabla_{e_0}H\\\n&=(\sigma_1+\sigma_3)(\sigma_1+\sigma_2)-\sigma_1^2 -3\sigma_1H +\frac 1 2 (\sigma_1^2+\sigma_2^2+\sigma_3^2) + \frac 1 6 \widetilde R - \widetilde R_{23}^{\ \ 23}\\\n&= -3\sigma_1H -s_1.\n\end{aligned}\]\n\paragraph{Hubble Normalization.}\nWe can further simplify by Hubble-normalizing to $\overline{\Sigma}_i = \frac{\sigma_i} H$ and $\overline{N}_i = \frac{n_i} H$, and introducing a shorthand for ${\Sigma}^2$ and ${N}^2$\n\[\begin{aligned}\n\overline{N}_i &= \widetilde{\overline{N}}_i {{\hat{n}}}_i\\\n{\Sigma}^2 &= \frac 1 6\left(\overline{\Sigma}_1^2+\overline{\Sigma}_2^2+\overline{\Sigma}_3^2\right)\\\n N^2&= \frac 1 {12} \left[\overline N_1^2+\overline N_2^2+\overline N_3^2-2(\overline N_1\overline N_2+\overline N_2\overline N_3+\overline N_3\overline N_1)\right]= \frac 1 6 \widetilde R H^{-2}\\\n S_i &= -\frac 1 3 \left[-2 \overline N_i^2 + \overline N_j^2 + \overline N_k^2 +\overline N_i\overline N_j -2\overline N_j\overline N_k +\overline N_k\overline N_i \right]= s_i H^{-2}\end{aligned}\]\nwhich yields the equations\n\[\begin{aligned}\n1 &= \Sigma^2+ N^2\\\n\sqrt{-g^{00}}\nabla_{e_0} H &=-\left(2\Sigma^2 +1 
\right)H^2\\\n\sqrt{-g^{00}}\nabla_{e_0}\widetilde{\overline N}_i &= (2 \overline\Sigma_i - 1 + 2 \Sigma^2 +1)\widetilde{\overline N}_iH=2(\Sigma^2+ \overline\Sigma_i)\widetilde{\overline N}_i H\\\n\sqrt{-g^{00}}\nabla_{e_0} \overline \Sigma_1 &=\n-3\overline \Sigma_1 H - s_1 H^{-1} + (2\Sigma^2+1)\overline\Sigma_1 H = 2( \Sigma^2-1)\overline\Sigma_1H - S_1H.\n\end{aligned}\]\n\paragraph{The Wainwright-Hsu equations as used in this work.}\nAssuming $H<0$ for an initial surface (which can be obtained if $H\neq 0$ by choosing the direction of the unit normal $\sqrt{-g^{00}}e_0$ and reversing the direction of time), we can set $\sqrt{-g^{00}} = -\frac 1 2 H$. Setting $\widetilde N_i = \sqrt{12}\,\widetilde{\overline N}_i$, we obtain the variant of the Wainwright-Hsu equations used in this work, \eqref{eq:ode-from-gr}, corresponding to the metric \eqref{eq:metric-in-wsh}.\n\n\n\section{Introduction}\label{sect:intro}\n\paragraph{Spatially homogeneous cosmological models.}\nThe behaviour of cosmological models is governed by the Einstein field equations, coupled with equations describing the presence of matter. Simpler models are obtained under symmetry assumptions. The class of models studied in this work, the Bianchi models, assumes spatial homogeneity, i.e.~``every point looks the same''. Then, one only needs to describe the behaviour over time of any single point, and the partial differential Einstein field equations become a system of ordinary differential equations. \n\nDirectional isotropy assumes that ``every spatial direction looks the same''. This leads to the well-known FLRW (Friedmann-Lemaitre-Robertson-Walker) models. 
These models describe \nan initial (``big bang'') singularity, followed by an expansion of the universe, slowed down by ordinary and dark matter and accelerated by a competing positive cosmological constant (``dark energy'').\n\n \nWe will assume spatial homogeneity, but relax the assumption of directional isotropy.\nSpatial homogeneity assumes that there is a Lie-group $G$ of spacetime isometries, which foliates the spacetime into three-dimensional space-like hypersurfaces on which $G$ acts transitively: For every two points $\bpt x, \bpt y$ in the same hypersurface there is a group element $f\in G$ such that $f\cdot \bpt x=\bpt y$. The resulting ordinary differential equations depend on the Lie-algebra of $G$, the so-called Killing fields.\nThe three-dimensional Lie-algebras have been classified by Luigi Bianchi in 1898, hence the name ``Bianchi-models''; for a commented translation see \cite{Bianchi2001, jantzen2001editor}, and for a modern treatment see \cite[Section 1]{wainwright2005dynamical}.\n\nThe two most studied classes of spatially homogeneous anisotropic cosmological models are the Bianchi-types $\textsc{IX}$ ($so(3)$) and $\textsc{VIII}$ ($sl(2,{\mathbb{R}})$), which are the focus of this work.\nBoth of these models exhibit a big-bang-like singularity in at least one time-direction, and a universe that initially expands from this singularity, until, in the case of Bianchi \textsc{IX}, it recollapses into a time-reversed big bang (``big crunch'').\nThe big-bang singularity is present even in the vacuum case, where matter is absent and only gravity self-interacts. According to conventional wisdom, ``matter does not matter'' near the singularity. 
For this reason, we simplify our analysis by considering only the vacuum case.\n\nNote that the symmetry assumptions already restrict the global topology of the space-like hypersurfaces, and that the isotropic FLRW-models are not contained as a special case: The only homogeneous isotropic vacuum model is flat Minkowski space.\n\nFor a detailed introduction to Bianchi models, we refer to \cite{wainwright2005dynamical}. A short derivation of the governing ordinary differential Wainwright-Hsu equations \eqref{eq:ode} is given in Section \ref{sect:append:derive-eq}, and physical interpretations of some of our results are given in Section \ref{sect:gr-phys-interpret}. For an excellent survey on Bianchi cosmologies, we refer to \cite{heinzle2009mixmaster}, and for further physical questions we refer to \cite{uggla2003past, heinzle2009cosmological}.\n\paragraph{The Taub-spaces.}\nThe dynamical behaviour of Bianchi \textsc{VIII} and \textsc{IX} spacetimes is governed by the so-called Wainwright-Hsu equations. \nThis dynamical system contains an invariant set $\mathcal T$ of codimension two, which we call the Taub-spaces. Solutions in this set are also called Taub-NUT spacetimes, or LRS (locally rotationally symmetric) spacetimes. The latter name is descriptive, in the sense that these spacetimes have additional (partial, local) isotropy. Taub spacetimes behave, in several ways, differently from general (i.e.~non-Taub) Bianchi spacetimes. For a detailed description, we refer to \cite{ringstrom2001bianchi}.\n\n\paragraph{The Mixmaster attractor.}\nThe Bianchi dynamical system also contains an invariant set $\mathcal A$, called the Mixmaster attractor, consisting of Bianchi Type \textsc{II} and \textsc{I} solutions. There are good heuristic arguments that $\mathcal A$ really is an attractor for time approaching the singularity, and that the dynamics on and near $\mathcal A$ can be considered chaotic (sometimes also called ``oscillatory''). 
\nIt has been rigorously proven only in Bianchi \\textsc{IX} models that $\\mathcal A$ is actually attracting, with the exception of some Taub solutions, c.f.~\\cite{ringstrom2001bianchi}. \n\n\\paragraph{Stable Foliations.}\nOne may ask for a more precise description of how solutions get attracted to $\\mathcal A$.\nCertain heteroclinic chains in $\\mathcal A$ are known to attract hypersurfaces of codimension one, c.f.~\\cite{liebscher2011ancient, beguin2010aperiodic}.\nReiterer and Trubowitz \\cite{reiterer2010bkl} claim related results, for a much wider class of heteroclinic chains, but with less focus on the regularity or codimension of the attracted sets.\nThis general class of constructions, i.e.~partial stable foliations over specific solutions in $\\mathcal A$, is not the focus of this work. Instead, we describe and estimate solutions and the solution operator (i.e.~flow) directly, without explicitly focussing on the symbolic description of $\\mathcal A$.\n\n\\paragraph{The question of particle horizons.}\nOne of the most salient features of relativity is \\emph{causality}: The state of the world at some point in spacetime is only affected by states in its past light-cone and can only affect states in its future light-cone. \n\nSuppose for this paragraph that we orient our spacetime $M$ such that the big bang singularity is situated in the past.\nTwo points $\\bpt p, \\bpt q\\in M$ are said to \\emph{causally decouple} towards the singularity if their past light-cones are disjoint, i.e.~if there is no past event which causally influences both $\\bpt p$ and $\\bpt q$. The past communication cone of $\\bpt p$ is defined to be the set of points, which do not causally decouple from $\\bpt p$. The cosmic horizon, also called particle horizon, is the boundary of the past communication cone. 
Hence, everything beyond the horizon is causally decoupled.\n\n\\[\\numberthis\\label{eq:intro:comcone}\\begin{aligned}\n\\omit\\rlap{Past light cone of $\\bpt p$:}&\\\\\nJ^-(\\bpt p) &= \\{\\bpt q:\\, \\text{there is } \\gamma:[0,1]\\to M\\,\\,\\text{with}\\, \\gamma(0)=\\bpt p, \\gamma(1)=\\bpt q,\\,\\text{time-like past directed}\\}\\\\\n\\omit\\rlap{Future light cone of $\\bpt p$:}&\\\\\nJ^+(\\bpt p) &= \\{\\bpt q:\\, \\text{there is } \\gamma:[0,1]\\to M\\,\\,\\text{with}\\, \\gamma(0)=\\bpt p, \\gamma(1)=\\bpt q,\\,\\text{time-like future directed}\\}\\\\\n\\omit\\rlap{Past communication cone of $\\bpt p$:}&\\\\\nJ^+(J^-(\\bpt p)) &= \\bigcup_{\\bpt q \\in J^-(\\bpt p)}J^+(\\bpt q) = \\{\\bpt q:\\, {J^-(\\bpt q)}\\cap {J^{-}(\\bpt p)}\\neq\\emptyset\\}\\\\\n\\omit\\rlap{Past cosmic horizon of $\\bpt p$:}&\\\\\n\\partial J^+(J^-(\\bpt p))& = \\mathrm{closure}\\,J^{+}(J^-(\\bpt p))\\,\\setminus\\, \\mathrm{interior}\\, J^{+}(J^-(\\bpt p)).\n\\end{aligned}\\]\nIn Figure \\ref{fig:intro:comm-cone}, we illustrate the formation of particle horizons. An example where no particle horizons form is given by the (flat, connected) Minkowski-space $M={\\mathbb{R}}\\times {\\mathbb{R}}^3$: There is no singularity, and $M= J^+(J^-(\\bpt p))$ for all $\\bpt p$, and hence $\\partial J^+(J^-(\\bpt p)) = \\emptyset$. \n\nApart from the question of convergence to $\\mathcal A$, the next important physical question in the context of Bianchi cosmologies is that of locality of light-cones:\n\\begin{enumerate}\n\\item Do nonempty particle horizons $\\partial J^+(J^-(\\bpt p))\\neq \\emptyset$ form towards the singularity? Does this happen if $\\bpt p$ is sufficiently near to the singularity?\n\\item Are the spatial hypersurfaces $\\{t= t_0=\\mathrm{const}\\}\\cap (J^+(J^-(\\bpt p)), \\partial J^+(J^-(\\bpt p)))$, considered as three-dimensional manifolds with boundary, homeomorphic to the three-dimensional unit ball $(B_1(0), \\partial B_1(0))$? 
\n\\item Are the past communication cones of $\\bpt p$ spatially bounded? Do they shrink down to a point, as $\\bpt p$ and $t_0$ go towards the singularity?\n\\end{enumerate}\nThe first question is formulated completely independently of the foliation of the spacetime into space-like hypersurfaces. The second question depends on the foliation, but is at least easy to clearly state. The third question is not clearly stated here, because it requires us to choose a way of comparing spatial extents of the communication cones at different times. The subtleties of this are discussed in Section \\ref{sect:gr-phys-interpret}. \n\nSince this work is concerned only with proving affirmative answers to these questions, we will conflate them: We say that a solution forms a particle horizon if all these questions are answered with ``yes''. \n\n\n\\begin{figure}[hbt]\n\\centering\n\\begin{subfigure}[b]{\\textwidth}\n\\includegraphics[width=\\textwidth]{.\/graphics\/decoupled-lc.pdf}\n\\caption{Two points $\\bpt p$, $\\bpt q$ that decouple towards the singularity. Their past light-cones are disjoint.}\n\\end{subfigure}\n\\begin{subfigure}[b]{\\textwidth}\n\\includegraphics[width=\\textwidth]{.\/graphics\/decoupled-cc.pdf}\n\\caption{Two points $\\bpt p$, $\\bpt q$ that decouple. The point $\\bpt q$ lies outside the communication cone of $\\bpt p$.}\n\\end{subfigure}\n\\begin{subfigure}[b]{\\textwidth}\n\\includegraphics[width=\\textwidth]{.\/graphics\/coupled-cc-lc.pdf}\n\\caption{Two points $\\bpt p$, $\\bpt q$ that do not decouple towards the singularity. Their past light-cones have nonempty intersection (the darkest shaded region). 
The point $\\bpt q$ lies inside the communication cone of $\\bpt p$.}\n\\end{subfigure}\n\\caption{Examples of decoupling and non-decoupling towards the singularity at $t=t_{\\mathrm{sing}}$, and of particle horizons.}\n\\label{fig:intro:comm-cone}\n\\end{figure}\n\n\nOriginally, Misner \\cite{misner1969mixmaster} suggested that no particle horizons should form in Bianchi \\textsc{IX}. This was proposed as a possible explanation of the observed approximate homogeneity of the universe: If the homogeneity is due to past mixing, then different observed points in our current past light-cone must themselves have a shared causal past. Misner later changed his mind to the current consensus intuition that typical Bianchi \\textsc{VIII} and \\textsc{IX} solutions should form particle horizons. Some more details on this are given in Section \\ref{sect:gr-phys-interpret}. Further discussion of these questions can be found in e.g.~\\cite[Chapter 5]{wald1984general}, \\cite[Chapter 5]{hawking1973large}.\n\n\n\n\n\\paragraph{The BKL picture.}\nSpatially homogeneous spacetimes, and especially the question of particle horizons, play an essential role in the so-called BKL picture (also often called BKL-conjecture). The BKL picture is due to Belinskii, Khalatnikov and Lifshitz (\\cite{belinskii1970oscillatory}), and describes generic cosmological singularities in terms of homogeneous spacetimes. This picture roughly claims the following: \n\\begin{enumerate}\n\\item Generic cosmological singularities ``are curvature-dominated'', i.e.~behave like the vacuum case. More succinctly, ``matter does not matter''.\n\\item Generic cosmological singularities are ``oscillatory'', which means that the directions that get stretched or compressed switch over time.\n\\item Generic cosmological singularities ``locally behave like'' spatially homogeneous ones, especially Bianchi \\textsc{IX} and \\textsc{VIII}. 
By this, one means that:\n\\begin{enumerate}\n\\item Different regions causally decouple towards the singularity, i.e.~particle horizons form.\n\\item If one restricts attention to a single communication cone, then, as time goes towards the singularity, the spacetime can be well approximated by a homogeneous one.\n\\item Different spatial regions may have different geometry towards the singularity (since they decouple). This kind of behaviour has been described as ``foam-like''.\n\\end{enumerate}\n\\end{enumerate}\nBoundedness of the communication cones, i.e.~formation of particle horizons, in spatially homogeneous models is a necessary condition for the consistency of the BKL-picture: $(3.a)$ claims that different spatial regions causally decouple towards the big bang, and $(3.b)$ claims that such decoupled regions behave ``as if they were homogeneous''; hence, homogeneous solutions had better allow different regions to spatially decouple.\n\n\\paragraph{Previous results.}\nOne way of viewing the formation of particle horizons is as a race between the expansion of the universe (shrinking towards the singularity) and the eventual blow-up at the singularity. If the blow-up is faster than the expansion, particle horizons form; otherwise, they don't. In the context of the Wainwright-Hsu equations, the question can be boiled down to: \\emph{Do solutions converge to $\\mathcal A$ sufficiently fast? If yes, then particle horizons form. If not, then the questions of particle horizons may have subtle answers.}\n\nThe aforementioned solutions constructed in \\cite{liebscher2011ancient, beguin2010aperiodic}, with initial conditions on certain hypersurfaces of codimension one, all converge essentially uniformly exponentially to $\\mathcal A$, which is definitely fast enough for particle horizons to form. \n\nReiterer and Trubowitz claim in \\cite{reiterer2010bkl} that the solutions constructed therein also converge to $\\mathcal A$ fast enough for this to happen. 
The claimed results in \\cite{reiterer2010bkl}\nare somewhat nontrivial to parse; let us give a short overview: They construct solutions converging to certain parts of the Mixmaster attractor $\\mathcal A$. These parts of the Mixmaster attractor have full (one-dimensional) volume, and all these constructed solutions form particle horizons. Claims about full-dimensional measure (or Hausdorff-dimensions, etc) are not made in \\cite{reiterer2010bkl}. \n\nThe solutions constructed in \\cite{liebscher2011ancient, beguin2010aperiodic} were the first known nontrivial solutions that could be proven to form particle horizons. \nIt is still unknown whether there exist nontrivial counterexamples, i.e. non-Taub solutions that fail to form particle horizons.\n\n\n\n\\paragraph{Main results.}\nThe first main result of this work extends Ringstr\\\"om's Bianchi \\textsc{IX} attractor theorem (Theorem \\ref{farfromA:thm:b9-attract}, c.f.~also \\cite{ringstrom2001bianchi, heinzle2009new}) to the case of Bianchi \\textsc{VIII} vacuum. It can be summarized in the following:\n\\begin{hthm}{\\ref{thm:local-attractor}, \\ref{thm:b9-attractor-global}, \\ref{thm:b8-attractor-global} and \\ref{thm:b8-attractor-global-genericity}}[Paraphrased Attractor Theorem]\nWith certain exceptions, solutions in Bianchi \\textsc{IX} and \\textsc{VIII} vacuum converge to the Mixmaster attractor $\\mathcal A$.\n\nLower bounds on the speed of convergence are given, but are insufficient to ensure the formation of particle horizons.\n\nThe dynamics of the exceptional solutions is described. 
The set of initial conditions corresponding to the exceptional solutions is nongeneric, both in the sense of Lebesgue (it is a set of zero Lebesgue measure) and Baire (it is a meagre set).\n\\end{hthm}\nApart from the applicability to Bianchi \\textsc{VIII}, this also extends Ringstr\\\"om's previous result by providing lower bounds on the speed of convergence, and provides a new proof.\n\n\nThe most important result of this work is the following:\n\\begin{hthm}{\\ref{thm:horizon-formation}}[Almost sure formation of particle horizons]\nAlmost every solution in Bianchi \\textsc{VIII} and \\textsc{IX} vacuum forms particle horizons towards the big bang singularity. A more rigorous formulation of the theorem is on page \\pageref{thm:horizon-formation}.\n\\end{hthm}\nIt remains open whether particle horizons form for initial conditions that are generic in the sense of Baire\\footnote{A set is called generic in the sense of Baire if it is \\emph{co-meagre}, i.e.~it contains a countable intersection of open and dense sets. Then its complement is called \\emph{meagre}. By construction, countable intersections of co-meagre sets are co-meagre and countable unions of meagre sets are meagre. Baire's Theorem states that co-meagre subsets of complete metric spaces are always dense and in particular nonempty.}. \nWe strongly suspect that the answer is no, i.e.~particle horizons fail to form for a co-meagre set of initial conditions, for reasons which will be explained in future work.\n\n\n\\paragraph{Structure of this work.}\nWe will give the Wainwright-Hsu equations in Section \\ref{sect:eqs}, as well as some notation and transformations that will be needed later on. The most referenced equations are also summarized in Appendix \\ref{sect:eq-cheat-sheet}, page \\pageref{sect:eq-cheat-sheet}, for easier reference. 
A derivation of the Wainwright-Hsu equations from the Einstein field equations of general relativity is given in Section \\ref{sect:append:derive-eq}.\n\n\nAn overview of the dynamical behaviour and some first proofs will be given in Section \\ref{sect:farfromA}. Sections \\ref{sect:near-A} and \\ref{sect:near-taub} will describe in detail two different regimes in the neighborhood of $\\mathcal A$; these two descriptions are synthesized into general attractor theorems in Section \\ref{sect:global-attract}. \nThe measure-theoretic results are all contained in Section \\ref{sect:volume-form}. Section \\ref{sect:gr-phys-interpret} relates dynamical properties of solutions to the Wainwright-Hsu equations to physical properties of the corresponding spacetime.\n\n\n\\paragraph{Strategy.}\nOur analysis of the behaviour of solutions of the Wainwright-Hsu equations is structured around two invariant objects: The Mixmaster-attractor $\\mathcal A$ and the Taub-spaces $\\mathcal T$. We will measure the distances from these sets by functions $\\delta(\\bpt x) \\sim d(\\bpt x, \\mathcal A)$ and $r(\\bpt x)\\sim d(\\bpt x, \\mathcal T)$.\n\nThere exist standard heuristics based on normal hyperbolicity, cf.~\\cite{heinzle2009mixmaster}. According to these, solutions near $\\mathcal A$: \n \\begin{enumerate}\n \\item can be described by the so-called Kasner-map (see Section \\ref{sect:farfromA}),\n \\item converge exponentially to $\\mathcal A$, and \n \\item have associated spacetimes that form particle horizons. 
\n \\end{enumerate}\nThese heuristics break down near the Taub-spaces $\\mathcal T \\cap \\mathcal A$, where two eigenvalues pass through zero.\n\nIn Section \\ref{sect:far-from-taub} we will formally prove the validity of the heuristic description of solutions near $\\mathcal A$ that stay bounded away from $\\mathcal T$.\nMore precisely, we will show in Proposition \\ref{prop:farfromtaub-main} that for any $\\epsilon_T>0$, there exists $\\epsilon_d>0$ such that all the previously mentioned hyperbolicity heuristics apply for solutions $\\bpt x: [0,T] \\to \\{\\bpt y:\\, \\delta(\\bpt y) <\\epsilon_d,\\,r(\\bpt y) > \\epsilon_T\\}$.\n\n\n\nTo provide a complete picture of the dynamics, we still need to control solutions in a neighborhood of $\\mathcal T$. This is the goal of Section \\ref{sect:near-taub}.\nIt is well known that $\\mathcal T$ is transient, i.e.~solutions may approach $\\mathcal T$ but cannot converge to $\\mathcal T$; they must leave a neighborhood of $\\mathcal T$ again. \nEvery component of $\\mathcal T\\cap \\mathcal A$ consists of two equilibria connected by a heteroclinic orbit $-\\taubp \\to +\\taubp$; the standard heuristics described in Section \\ref{sect:far-from-taub} break down at $+\\taubp$, but continue to work near $-\\taubp$ and near the heteroclinic orbit $-\\taubp \\to +\\taubp$.\nThus, solutions can leave the region controlled by normal hyperbolicity only by approaching first $-\\taubp$ and then $+\\taubp$.\n\n\n\\label{paragraph:intro:strategy:quotients} The most important quantity in the local analysis near $+\\taubp$ is the quotient $\\frac{\\delta}{r}$ of the distances to $\\mathcal A$ and $\\mathcal T$. 
As long as this quotient is small, we can prove exponential decay of $\\delta(t)$ (Proposition \\ref{prop:neartaub:main}) and slow growth of $r(t)$; hence, the quotient $\\frac{\\delta}{r}$ continues to decrease.\nThe structure of this kind of estimate is not surprising; in fact, \nit is trivial (by varying $\\epsilon_d$ and $\\epsilon_T$) to construct continuous nonnegative functions $\\hat\\delta$ and $\\hat r$ with $\\mathcal A=\\{\\bpt x: \\hat\\delta(\\bpt x)=0\\}$ and $\\mathcal T=\\{\\bpt x: \\hat r(\\bpt x)=0\\}$, such that estimates of the above form are true near $+\\taubp$.\nHowever, our particular choice of functions $\\delta$ and $r$ (explicitly given in Section \\ref{sect:polar-coords}) allows the \\emph{same} quotient $\\frac{\\delta}{r}$ to be controlled near $-\\taubp$ and near $+\\taubp$.\nWe do not know of any reason to a priori expect this fortuitous fact; it is, however, easily verified by direct calculation.\n\nThe analysis from Section \\ref{sect:near-taub} fits together with the analysis from Section \\ref{sect:far-from-taub}: Solutions near $\\mathcal A$ (i.e.~$\\delta\\ll 1$) that leave regions with $r>\\epsilon_T$ to enter the neighborhood of $\\mathcal T$ with $r\\le\\epsilon_T$ must have, at the moment where $r=\\epsilon_T$, a very small quotient $\\delta\/r < \\epsilon_T^{-1}\\epsilon_d \\ll 1$.\nThis gives rise to a local attractor Theorem \\ref{thm:local-attractor}: If $\\widetilde \\delta=\\max(\\delta, \\frac{\\delta}{r})$ is small enough for some initial condition, then $\\widetilde \\delta$ converges to zero. Hence, we provide a family of forward invariant neighborhoods of $\\mathcal A\\setminus \\mathcal T$ attracted to $\\mathcal A$. 
\nUsing the local attractor Theorem \\ref{thm:local-attractor}, it is rather straightforward to adapt the ``global'' (i.e.~away from $\\mathcal A$) arguments from \\cite{ringstrom2001bianchi} in order to produce the global attractor Theorems \\ref{thm:b9-attractor-global} and \\ref{thm:b8-attractor-global}.\n\n\nNote that as in previous works \\cite{ringstrom2001bianchi}, a ``global attractor theorem'' does not imply that \\emph{all} solutions converge to $\\mathcal A$, but rather it describes the exceptions, i.e.~solutions where the local attractor Theorem \\ref{thm:local-attractor} does not eventually hold. \nIn Bianchi \\textsc{IX} models, i.e.~in Theorem \\ref{thm:b9-attractor-global}, these exceptions are exactly the solutions contained in the lower-dimensional Taub-spaces $\\mathcal T$ (originally shown in \\cite{ringstrom2001bianchi}; we provide an alternative proof).\nIn Bianchi \\textsc{VIII} models, i.e.~in Theorem \\ref{thm:b8-attractor-global}, these exceptions are either contained in the lower-dimensional Taub-spaces $\\mathcal T$ or must follow the very particular asymptotics described in Theorem \\ref{thm:b8-attractor-global}, case $\\textsc{Except}$.\nAs we show in Theorem \\ref{thm:b8-attractor-global-genericity}, this exceptional case $\\textsc{Except}$ applies only for a set of initial conditions that is meagre (small in the sense of Baire category) and has Lebesgue measure zero. It is currently unknown \nwhether this exceptional case \\textsc{Except} is possible at all. \n\n\nThese genericity results (Theorem \\ref{thm:b8-attractor-global-genericity}) rely on measure-theoretic considerations in Section \\ref{sect:volume-form}: We provide a volume-form $\\omega_4$, which is expanded under the flow (given in \\eqref{eq:omega5-def}, \\eqref{eq:omega4-def}). This is in itself not surprising: \n Hamiltonian systems preserve their canonical volume form. 
Since the Wainwright-Hsu-equations derive from the (Hamiltonian) Einstein Field equations by an essentially monotone time-dependent rescaling, we expect to find a volume form $\\omega_4$ that is essentially monotonically expanding.\nSuch an expanding volume form has the useful property that all forward invariant sets must have either infinite or zero $\\omega_4$-volume. \n\nWe prove the genericity Theorem \\ref{thm:b8-attractor-global-genericity} by noting that the exceptional solutions form an invariant set; using their detailed description, we can show that its $\\omega_4$-volume is finite and hence zero.\n\nThe results on the formation of particle horizons are also proved in Section \\ref{sect:volume-form}.\nIn our language, the primary question is whether $\\int_0^{\\infty} \\delta(t){\\mathrm{d}} t <\\infty$. If this integral is finite, then particle horizons form and the singularity is ``local''; this behaviour is both predicted by the BKL picture and required for its consistency. Heuristically, time spent away from $\\mathcal T$ helps convergence of the integral (since $\\delta$ decays uniformly exponentially in these regions), while time spent near $\\mathcal T$ gives large contributions to the integral. Looking at our analysis near $+\\taubp$ (Proposition \\ref{prop:neartaub:main}), we get a contribution to the integral of order $\\frac{\\delta}{r^2}$. \nThe local attractor Theorem \\ref{thm:local-attractor} is insufficient to decide the question of locality, since it can only control the quotient $\\frac{\\delta}{r}$ (and hence only yields $\\int \\delta^2\\,{\\mathrm{d}} t<\\infty$). \n\nHowever, we can show by elementary calculation that the set $\\{\\bpt x:\\,\\delta>r^4 \\}$ has finite $\\omega_4$-measure. 
Using some uniformity estimates on the volume expansion, we can infer that the (naturally invariant) set $\\textsc{Bad}$ of initial conditions that fail to have $\\delta< r^4$ for all sufficiently large times has finite and hence vanishing $\\omega_4$-volume. Therefore, for Lebesgue almost every initial condition, $\\delta < r^4$ holds eventually, allowing us to bound the contribution of the entire stay near $+\\taubp$ by $\\frac{\\delta}{r^2}<\\sqrt{\\delta}$, which decays exponentially in the ``number of episodes near Taub-points'' (also called Kasner-eras).\nThis gives rise to Theorem \\ref{thm:horizon-formation}: Lebesgue almost every initial condition in Bianchi \\textsc{VIII} and \\textsc{IX} forms particle horizons towards the big bang singularity.\n\nA finer look at the integral even allows us to bound certain $L^p_{\\textrm{loc}}(\\omega_4)$ integrals in Theorem \\ref{thm:horizon-formation-alpha-p}.\n\n\n\n\\section{Setting, Notation and the Wainwright-Hsu equations}\\label{sect:eqs}\nThe subject of this work, i.e.~the behaviour of homogeneous anisotropic vacuum space-times with Bianchi Class A homogeneity under the Einstein field equations of general relativity, can be described by a system of ordinary differential equations, called the Wainwright-Hsu equations \\eqref{eq:ode-unpacked}. \n \nIn Section \\ref{sect:equations}, we will introduce the Wainwright-Hsu ordinary differential equations and various auxiliary quantities and definitions, and provide a rough summary of their dynamics.\nThen we transform the Wainwright-Hsu equations into polar coordinates in Section \\ref{sect:polar-coords}, which are essential for the analysis in Section \\ref{sect:near-taub}. \n\n\nThere are multiple equivalent formulations of the Wainwright-Hsu equations in use by different authors, which differ in sign and scaling conventions, most importantly the direction of time. 
This work uses reversed time, such that the big bang singularity is at $t=+\\infty$.\nThe relation of the Wainwright-Hsu equations to the Einstein equations of general relativity will be relegated to Section \\ref{sect:append:derive-eq}. The relation between properties of solutions to the Wainwright-Hsu equations and physical properties of the corresponding spacetimes is discussed in Section \\ref{sect:gr-phys-interpret}.\n\\paragraph{General Notations.}\nIn this work, we will often use the notation $\\bpt x = (x_1,\\ldots,x_n)$ in order to emphasize that a variable $\\bpt x$ refers to a point and not to a scalar quantity. If we consider a curve $\\bpt x(t)$ into a space where different coordinates have names, e.g.~$\\bpt x: {\\mathbb{R}}\\to {\\mathbb{R}}^5=\\{(\\Sigma_+,\\Sigma_-,N_1,N_2,N_3)\\}$, then we will in an abuse of notation write $N_1(t)=N_1(\\bpt x(t))$ in order to refer to the $N_1$-coordinate of $\\bpt x(t)$.\n\nWe will use $\\pm$ to refer to either $+1$ or $-1$, and different occurrences of $\\pm$ are always unrelated, such that e.g.~$(\\pm,\\pm,\\pm)\\in\\{(+,+,+),(-,+,+), (+,-,+), (-,-,+),\\ldots\\}$. We will use $*$ to refer to either $+1$, $-1$, or $0$, also such that different occurrences of $*$ are unrelated. \n\n\n\\subsection{Spatially Homogeneous Spacetimes}\\label{sect:equations}\nWe study the behaviour of homogeneous spacetimes, also called Bianchi-models. These are Lorentz four-manifolds, foliated by space-like hypersurfaces on which a group of isometries acts transitively, subject to the vacuum Einstein Field equations. 
That is, we assume that we have a frame of four linearly independent vector fields $e_0=\\partial_t, e_1,e_2,e_3$, where $e_1,e_2,e_3$ are Killing fields, with dual co-frame ${\\mathrm{d}} t, \\omega_1,\\omega_2,\\omega_3$, such that the metric has the form\n\\[\ng = g_{00}(t){\\mathrm{d}} t\\otimes {\\mathrm{d}} t + g_{11}(t)\\omega_1\\otimes \\omega_1 + g_{22}(t)\\omega_2\\otimes \\omega_2 + g_{33}(t)\\omega_3\\otimes \\omega_3,\n\\]\nand the commutators (i.e.~the Lie-algebra of the spatial homogeneity) have the form\n\\[\n[e_i,e_j]=\\sum_k \\gamma_{ij}^k e_k \\qquad \\gamma_{ij}^k={{\\hat{n}}}_k \\epsilon_{ijk},\n\\]\nwhere $\\epsilon_{ijk}$ is the usual Levi-Civita symbol ($\\epsilon_{ijk}=+1$ if $(ijk)\\in \\{(123), (231), (312)\\}$, $\\epsilon_{ijk}=-1$ if $(ijk)\\in \\{(132), (321), (213)\\}$ and $\\epsilon_{ijk}=0$ otherwise). The signs ${{\\hat{n}}}_i\\in \\{+1,-1,0\\}$ determine the Bianchi Type of the cosmological model, according to Table \\ref{table:bianchi-types}.\n\nThe metric is described by the seven Hubble-normalized variables $H,\\widetilde N_i, \\Sigma_i$, with $i\\in\\{1,2,3\\}$, according to \n\\begin{equation}\\label{eq:metric-in-wsh}\ng_{00}= - \\frac 1 4 H^{-2}\\qquad g_{ii}=\\frac 1 {48}\\frac{H^{-2}}{\\widetilde N_j \\widetilde N_k},\n\\end{equation}\nwhere $(i,j,k)$ is always assumed to be a permutation of $\\{1,2,3\\}$, and subject to the linear and sign constraints\n\\[\\numberthis\\label{eq:gauge-constraints}\n\\Sigma_1+\\Sigma_2+\\Sigma_3=0,\\qquad H < 0, \\qquad \\widetilde N_i >0 \\quad\\text{for all }i\\in\\{1,2,3\\}.\\]\nThe variable $H$ corresponds to the Hubble scalar, i.e.~the expansion speed of the cosmological model, that is, the mean curvature of the surfaces $\\{t = \\textrm{const}\\}$ of spatial homogeneity. The ``shears'' $\\Sigma_i$ correspond to the trace-free Hubble-normalized principal curvatures (hence, the linear ``trace-free'' constraint). 
The condition $H<0$ corresponds to our choice of the direction of time: We choose to orient time such that the universe is shrinking, i.e.~the singularity (big bang) lies in the future; this unphysical choice of time-direction is just for convenience of notation. \n\nThe vacuum Einstein Field equations state that the space-time is Ricci-flat. If we express the normalized trace-free principal curvatures $\\Sigma_i$ as time-derivatives of the metric variables $\\widetilde N_i$, then the Einstein Field equations become the Wainwright-Hsu equations \\eqref{eq:ode-from-gr}, which are a system of seven ordinary differential equations, subject to the linear constraint equation $\\Sigma_1+\\Sigma_2+\\Sigma_3=0$ from \\eqref{eq:gauge-constraints} and one algebraic equation, called the Gauss constraint $G=1$: \n\n\\[\\numberthis\\label{eq:ode-from-gr}\\begin{aligned}\nH'&=\\frac 1 2 (1+2\\Sigma^2)H\\\\\n\\Sigma_i' &= (1-\\Sigma^2)\\Sigma_i +\\frac 1 2 S_i\\\\\n\\widetilde N_i'&=-(\\Sigma^2+\\Sigma_i)\\widetilde N_i\\\\\n 1 &\\overset{!}{=} \\Sigma^2+N^2 := G,\n\\end{aligned}\\]\nwhere we used the shorthands\n\\[\\begin{aligned}\nN_i &:= \\hat n_i \\widetilde N_i \\\\\n\\Sigma^2 &:= \\frac{1}{6}(\\Sigma_1^2+\\Sigma_2^2+\\Sigma_3^2) \\\\\nN^2&:= N_1^2+N_2^2+N_3^2 -2(N_1N_2+N_2N_3+N_3N_1)\\\\\nS_i &:= 4\\left(N_i(2N_i - N_j -N_k)-(N_j-N_k)^2\\right).\n\\end{aligned}\\]\n\\begin{table}\n\\fbox{ \\[\\begin{array}{cclcrclc}\n{{\\hat{n}}}_1 & {{\\hat{n}}}_2 & {{\\hat{n}}}_3 \\qquad & \\text{Bianchi Type} &\\qquad\n{{\\hat{n}}}_1 & {{\\hat{n}}}_2 & {{\\hat{n}}}_3 \\qquad & \\text{Bianchi Type}\\\\\n+&+&+\\qquad& \\text{\\textsc{IX}}&\\qquad\n+&-&+\\qquad& \\text{\\textsc{VIII}}\\\\\n+&+&0&\\text{$\\textsc{VII}_0$} &\n+&-&0& \\text{$\\textsc{VI}_0$}\\\\\n+&0&0& \\text{\\textsc{II}} &\n0&0&0& \\text{\\textsc{I}}.\n\\end{array}\\]}\n\\caption[Bianchi Types]{Bianchi Types of Class A, depending on ${{\\hat{n}}}_i$ (up to permutation and simultaneous 
sign-reversal)}\n\\label{table:bianchi-types}\n\\end{table}\n\\subsection{The Wainwright-Hsu equations}\nThe equation for $H$ is decoupled from the remaining equations. Thus, we can drop the equation for $H$, solve the remaining equations, and afterwards integrate to obtain $H$. Likewise, we can stick with the equations for $N_i$ instead of $\\widetilde N_i$, such that $\\hat n_i = \\mathrm{sign}\\, N_i$; for Bianchi-types \\textsc{VIII} and \\textsc{IX} this already determines the metric, and for the lower Bianchi types we can again integrate afterwards. This yields a standard form of the Wainwright-Hsu equations from \\eqref{eq:ode-from-gr}, as used in \\cite{heinzle2009mixmaster}, \\cite{heinzle2009new}, up to constant factors. The most useful equations are also summarized in Section \\ref{sect:eq-cheat-sheet}.\n\nIt is useful to solve for the linear constraint $\\Sigma_1+\\Sigma_2+\\Sigma_3=0$, introducing $\\bpt \\Sigma=(\\Sigma_+,\\Sigma_-)$ by\n\\[\\numberthis \\label{eq:taub-def}\\begin{gathered}\n\\taubp_1 = (-1,0) \\qquad\n\\taubp_2 = \\left(\\frac 1 2, -\\frac 1 2 \\sqrt{3}\\right) \\qquad\n\\taubp_3 = \\left(\\frac 1 2, \\frac 1 2 \\sqrt{3} \\right)\\\\\n\\Sigma_i = 2\\langle \\taubp_i, \\bpt \\Sigma \\rangle \\qquad \\Sigma_+ = -\\frac 1 2 \\Sigma_1 \\qquad \\Sigma_- = \\frac 1 {2\\sqrt 3}(\\Sigma_3-\\Sigma_2),\n\\end{gathered}\\]\nwhich turns the vacuum Wainwright-Hsu differential equations into a system of five ordinary differential equations on ${\\mathbb{R}}^5=\\{(\\Sigma_+,\\Sigma_-, N_1,N_2,N_3)\\}=\\{\\bpt{\\Sigma}, \\bpt{N}\\}$, with one algebraic constraint equation \\eqref{eq:constraint}. The three points $\\taubp_1,\\taubp_2,\\taubp_3$ are called Taub-points. We will, in an abuse of notation, consider the Taub-points both as points in ${\\mathbb{R}}^2$, and as points in ${\\mathbb{R}}^5$ (where all three $N_i$ vanish). 
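The algebraic identities behind \\eqref{eq:taub-def} are easy to verify numerically. The following short Python sketch (our own illustrative code with ad-hoc names, not part of the formal development) checks that the Taub-points are unit vectors summing to zero, that $\\Sigma_i = 2\\langle \\taubp_i, \\bpt\\Sigma \\rangle$ automatically satisfies the trace-free constraint, and that $\\frac{1}{6}\\sum_i \\Sigma_i^2 = \\Sigma_+^2+\\Sigma_-^2$, so the two expressions for $\\Sigma^2$ (cf.~\\eqref{eq:ode-from-gr} and \\eqref{eq:constraint}) agree:

```python
import math
import random

# Taub points p_1, p_2, p_3 from eq. (taub-def); all names here are ours.
SQ3 = math.sqrt(3.0)
P = [(-1.0, 0.0), (0.5, -0.5 * SQ3), (0.5, 0.5 * SQ3)]

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

# The Taub points are unit vectors summing to zero.
assert all(abs(dot(p, p) - 1.0) < 1e-12 for p in P)
assert abs(sum(p[0] for p in P)) < 1e-12 and abs(sum(p[1] for p in P)) < 1e-12

# For arbitrary (Sigma_+, Sigma_-), the shears Sigma_i = 2 <p_i, Sigma>
# satisfy the trace-free constraint automatically, and
# (1/6) sum_i Sigma_i^2 equals Sigma_+^2 + Sigma_-^2.
random.seed(1)
for _ in range(100):
    S = (random.uniform(-2, 2), random.uniform(-2, 2))
    Sig = [2.0 * dot(p, S) for p in P]
    assert abs(sum(Sig)) < 1e-12                              # trace-free
    assert abs(sum(s * s for s in Sig) / 6.0 - dot(S, S)) < 1e-10
    assert abs(Sig[0] + 2.0 * S[0]) < 1e-12                   # Sigma_+ = -Sigma_1/2
```

This consistency is what allows switching freely between the $\\Sigma_i$ and the $(\\Sigma_+,\\Sigma_-)$ parametrizations.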
The Wainwright-Hsu equations are then given by the differential equations\n\\begin{subequations}\n\\label{eq:ode}\\begin{align}\nN_i' &= -(\\Sigma^2 +2 \\langle \\taubp_i,\\bpt\\Sigma\\rangle)N_i\\label{eq:ode-ni} \\\\\n&=-\\left(\\left|\\bpt\\Sigma+\\taubp_i\\right|^2 -1\\right)N_i\\label{eq:ode2-ni}\\\\\n\\bpt\\Sigma' &= N^2 \\bpt\\Sigma + 2 \\left(N_1^2 \\taubp_1 + N_2^2\\taubp_2 + N_3^2\\taubp_3 + N_1N_2\\taubp_3 + N_2N_3\\taubp_1 + N_3N_1\\taubp_2\\right)\\label{eq:ode2-sigma}\\\\\n&=N^2 \\bpt\\Sigma + 2 \\threemat{\\taubp_1}{\\taubp_2}{\\taubp_3}{\\taubp_3}{\\taubp_1}{\\taubp_2}[\\bpt N, \\bpt N],\n\\end{align}\\end{subequations}\nand the Gauss constraint equation\n\\begin{equation}\\label{eq:constraint}\n1\\overset{!}{=} \\Sigma^2 + N^2 =:G(\\bpt x),\n\\end{equation}\nwhere we used the shorthands\n\\[\n\\Sigma^2=\\Sigma_+^2+\\Sigma_-^2,\\qquad N^2 = N_1^2+N_2^2+N_3^2 - 2(N_1N_2+N_2N_3+N_3N_1).\n\\]\nWe can unpack these equations with unambiguous notation into \n\\begin{subequations}\\label{eq:ode-unpacked}\\begin{align}\nN_1' &= -(\\Sigma^2 -2\\Sigma_+)N_1 \\\\\nN_2' &= -(\\Sigma^2 +\\Sigma_+ - \\sqrt{3}\\Sigma_-)N_2 \\\\\nN_3' &= -(\\Sigma^2 +\\Sigma_+ + \\sqrt{3}\\Sigma_-)N_3 \\\\\n\\Sigma_+' &= N^2 \\Sigma_+ -2 N_1^2 +N_2^2 + N_3^2 + N_1N_2 -2 N_2N_3 + N_1N_3 \\\\\n\\Sigma_-' &= N^2\\Sigma_- +\\sqrt{3}\\left(- N_2^2 +N_3^2 + N_1N_2 -N_1N_3 \\right),\n\\end{align}\\end{subequations}\nwhich is, up to constant factors, the form of the Wainwright-Hsu equations used in \\cite{ringstrom2001bianchi}, \\cite{liebscher2011ancient} and \\cite{beguin2010aperiodic}.\n\nIt is occasionally useful to fully tensorize the Wainwright-Hsu equations, yielding the form\n\\begin{equation}\\label{eq:ode-fulltensor}\\begin{aligned}\n\\bpt N' &= -\\langle\\bpt \\Sigma, \\bpt \\Sigma\\rangle \\bpt N - \\bpt D[\\bpt \\Sigma,\\bpt N]\\\\\n\\bpt \\Sigma'&= Q[\\bpt N, \\bpt N]\\bpt \\Sigma + \\bpt T[\\bpt N, \\bpt N]\\\\\nG(\\bpt \\Sigma, \\bpt N)&= \\langle \\bpt \\Sigma, \\bpt 
\\Sigma\\rangle + Q[\\bpt N, \\bpt N]\\overset{!}{=}1,\n\\end{aligned}\\end{equation}\nwhere $Q:{\\mathbb{R}}^3\\times {\\mathbb{R}}^3\\to {\\mathbb{R}}$ and $\\bpt T: {\\mathbb{R}}^3\\times {\\mathbb{R}}^3\\to{\\mathbb{R}}^2$ and $\\bpt D: {\\mathbb{R}}^2\\times {\\mathbb{R}}^3 \\to {\\mathbb{R}}^3$. \nWe write $Q$ as a $3\\times 3$-matrix with entries in ${\\mathbb{R}}$ such that $Q[\\bpt N,\\bpt M]=\\bpt N^T Q\\bpt N=N^2$; we write $\\bpt T$ as a similar $3\\times 3$-matrix with entries in ${\\mathbb{R}}^2$. We write $\\bpt D$ as a $3\\times 3$-matrix with entries in ${\\mathbb{R}}^2$ such that $\\bpt D[\\bpt \\Sigma, \\bpt N] = (\\bpt D\\bpt N)\\cdot \\bpt \\Sigma$, where $\\bpt D\\bpt N$ is the usual matrix product (with entries in ${\\mathbb{R}}^2$) and the dot-product is evaluated component wise. Then the tensors $Q,\\bpt T, \\bpt D$ can be written as\n\n\\begin{equation}\\label{eq:ode-fulltensor-tensors}\\begin{aligned}\nQ= \\sixmat{1 & -2 & -2\\\\ &1 & -2 \\\\ & & 1}\\qquad\n\\bpt T = \\sixmat{2\\taubp_1 & 2\\taubp_3 & 2\\taubp_2\\\\ & 2\\taubp_2 & 2\\taubp_1\\\\ & & 2\\taubp_3} \\qquad\n\\bpt D = \\sixmat{2\\taubp_1 & & \\\\ & 2\\taubp_2 & \\\\ & & 2\\taubp_3}.\n\\end{aligned}\\end{equation}\n\n\n\n\\paragraph{Permutation Equivariance.}\nThe equations \\eqref{eq:ode-from-gr} are equivariant under permutations $\\sigma:\\{1,2,3\\}\\to\\{1,2,3\\}$ of the three indices. This permutation invariance also applies to \\eqref{eq:ode} as\n\\[\n(\\bpt \\Sigma, N_1,N_2,N_3)\\to (A_\\sigma\\bpt\\Sigma, N_{\\sigma(1)}, N_{\\sigma(2)}, N_{\\sigma(3)}),\n\\]\nwhere $A_\\sigma:{\\mathbb{R}}^2\\to{\\mathbb{R}}^2$ is the linear isometry with $A_\\sigma^{T}\\taubp_i = \\taubp_{\\sigma(i)}$. 
The equations are also equivariant under $(\\bpt\\Sigma, \\bpt N)\\to (\\bpt \\Sigma, -\\bpt N)$, as can be seen directly from \\eqref{eq:ode-fulltensor}.\n\n\\paragraph{Invariance of the Constraint.}\nThe signs of the $N_i$ are preserved under the flow, because $N_i'$ is a multiple of $N_i$ and therefore $N_i=0$ implies $N_i'=0$. The quantity $G$ is preserved under the flow. This can best be seen from \\eqref{eq:ode-fulltensor}:\n\\[\\begin{aligned}\n{\\mathrm{D}}_t G &= 2\\langle \\bpt \\Sigma, \\bpt \\Sigma'\\rangle + Q[\\bpt N, \\bpt N'] + Q[\\bpt N', \\bpt N]\\\\\n&= 2Q[\\bpt N, \\bpt N]\\langle \\bpt \\Sigma, \\bpt \\Sigma\\rangle + 2\\bpt \\Sigma \\cdot \\bpt T[\\bpt N, \\bpt N] - 2 \\langle \\bpt \\Sigma, \\bpt \\Sigma\\rangle Q[\\bpt N, \\bpt N] \\\\\n&\\qquad - Q[\\bpt D[\\bpt \\Sigma, \\bpt N], \\bpt N] - Q[\\bpt N, \\bpt D[\\bpt \\Sigma,\\bpt N]]\\\\\n&= \\bpt \\Sigma\\cdot \\bpt N^T \\left[2 \\bpt T - \\bpt D^T Q - Q \\bpt D\\right]\\bpt N.\n\\end{aligned}\\]\nUsing $\\taubp_1+\\taubp_2+\\taubp_3=0$ and \\eqref{eq:ode-fulltensor-tensors}, it is a simple matter of matrix multiplication to verify that $2\\bpt T-\\bpt D^TQ-Q\\bpt D=0$ and hence ${\\mathrm{D}}_t G=0$. 
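The same conservation can be observed numerically. The sketch below (illustrative only; the step size and initial data are arbitrary choices) integrates \eqref{eq:ode-unpacked} with a hand-rolled RK4 step and checks that $G$ and the signs of the $N_i$ are preserved up to integration error:

```python
import math

SQ3 = math.sqrt(3)

def rhs(x):
    """Unpacked Wainwright-Hsu vector field, cf. eq. (ode-unpacked)."""
    sp, sm, n1, n2, n3 = x
    S2 = sp*sp + sm*sm
    N2 = n1*n1 + n2*n2 + n3*n3 - 2*(n1*n2 + n2*n3 + n3*n1)
    return (N2*sp - 2*n1*n1 + n2*n2 + n3*n3 + n1*n2 - 2*n2*n3 + n1*n3,
            N2*sm + SQ3*(-n2*n2 + n3*n3 + n1*n2 - n1*n3),
            -(S2 - 2*sp)*n1,
            -(S2 + sp - SQ3*sm)*n2,
            -(S2 + sp + SQ3*sm)*n3)

def G(x):
    """Constraint function G = Sigma^2 + N^2."""
    sp, sm, n1, n2, n3 = x
    return sp*sp + sm*sm + n1*n1 + n2*n2 + n3*n3 - 2*(n1*n2 + n2*n3 + n3*n1)

def rk4_step(x, h):
    k1 = rhs(x)
    k2 = rhs(tuple(xi + 0.5*h*ki for xi, ki in zip(x, k1)))
    k3 = rhs(tuple(xi + 0.5*h*ki for xi, ki in zip(x, k2)))
    k4 = rhs(tuple(xi + h*ki for xi, ki in zip(x, k3)))
    return tuple(xi + h/6.0*(a + 2*b + 2*c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

# A Bianchi IX point: choose N_i > 0 small, then solve G = 1 for Sigma_+.
n = (0.1, 0.2, 0.15)
N2 = sum(v*v for v in n) - 2*(n[0]*n[1] + n[1]*n[2] + n[2]*n[0])
x = (math.sqrt(1.0 - N2), 0.0) + n   # G(x) = 1 exactly
for _ in range(2000):
    x = rk4_step(x, 1e-3)
    assert abs(G(x) - 1.0) < 1e-9    # constraint preserved up to RK4 error
    assert all(v > 0 for v in x[2:]) # signs of the N_i preserved
```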
Therefore, sets of the form $\\{\\bpt x\\in{\\mathbb{R}}^5: G(\\bpt x) = c\\}$ are invariant for any $c\\in{\\mathbb{R}}$ and especially for the physical $c=1$.\n\nThe set $\\mathcal M=\\{\\bpt x \\in {\\mathbb{R}}^5: G(\\bpt x)=1\\}$ is a smooth embedded submanifold; this is apparent from the implicit function theorem, since (if $\\bpt x\\neq 0$)\n\\[\\begin{multlined}\n\\frac{1}{2}{\\mathrm{d}} G = \\Sigma_+{\\mathrm{d}}\\Sigma_+ + \\Sigma_-{\\mathrm{d}} \\Sigma_- \\\\+ (N_1-N_2-N_3){\\mathrm{d}} N_1 + (N_2-N_3-N_1){\\mathrm{d}} N_2 + (N_3-N_1-N_2){\\mathrm{d}} N_3\\neq 0.\n\\end{multlined}\\]\n\n\n\\paragraph{Named invariant sets.}\nThere are several recurring important sets, which require names and are listed in Table \\ref{table:inv-sets}.\n\\begin{table}\n\\fbox{\\small \\[\\begin{aligned}\n\\mathcal M &= \\{\\bpt x\\in {\\mathbb{R}}^5: \\; G(\\bpt x)=1\\}&&\\qquad\\text{the physically relevant Phase-space}\\\\\n\\mathcal M_{{{\\hat{n}}}} &= \\{\\bpt x\\in \\mathcal M: \\; \\mathrm{sign}\\,N_i = {{\\hat{n}}}_i\\}&&\\qquad\\text{a specific octant of the Phase-space}\\\\\n\\mathcal K &= \\mathcal M_{000}=\\{\\bpt x\\in\\mathcal M: \\; \\bpt N=0\\} &&\\qquad\\text{the Kasner circle}\\\\\n\\mathcal A &= \\{\\bpt x\\in\\mathcal M: \\; \\text{at most one }N_i\\neq 0\\} && \\qquad\\text{the Mixmaster attractor}\\\\\n\\mathcal A_{{{\\hat{n}}}} &= \\overline{\\mathcal M_{{{\\hat{n}}}}}\\cap \\mathcal A && \\qquad\\text{a specific octant of $\\mathcal A$}\\\\\n\\mathcal T_i &=\\{\\bpt x\\in\\mathcal M: N_j = N_k,\\, \\langle \\taubp_j,\\bpt \\Sigma\\rangle = \\langle \\taubp_k,\\bpt \\Sigma\\rangle\\}&&\\qquad\\text{a Taub-space}\\\\\n\\mathcal T &=\\mathcal T_1 \\cup \\mathcal T_2 \\cup\\mathcal T_3 && \\qquad\\text{all three Taub-spaces}\\\\\n\\mathcal{TL}_i &=\\{\\bpt x\\in\\mathcal M: N_j = N_k,\\,N_i=0,\\,\\bpt \\Sigma = \\taubp_i\\}\\subseteq \\mathcal T_i&& \\qquad\\text{a Taub-line}\\\\\n\\mathcal{T}^G_i&=\\{\\bpt x\\in\\mathcal M: |N_j| = |N_k|,\\, \\langle 
\\taubp_j,\\bpt \\Sigma\\rangle = \\langle \\taubp_k,\\bpt \\Sigma\\rangle\\}&&\\qquad\\text{a generalized Taub-space; }\\\\\n&&&\\qquad\\text{ only invariant if $\\mathrm{sign}\\,N_j=\\mathrm{sign}\\,N_k$}\n\\end{aligned}\\]}\n\\caption[Named Subsets]{Named subsets. Here $(i,j,k)$ stands for a permutation of $\\{1,2,3\\}$ and ${{\\hat{n}}}\\in\\{+,0,-\\}^3$.\nAll of these sets, except for $\\mathcal{T}^G_i$, are invariant.}\n\\label{table:inv-sets}\n\\end{table}\n\nThe set $\\mathcal M$ is invariant because $G$ is a constant of motion. The Taub-space $\\mathcal T_i$ is invariant because of the equivariance under exchange of the two other indices $j$ and $k$. The invariance of the Taub-lines $\\mathcal{TL}_i$ can be seen by considering \\eqref{eq:ode-unpacked} for $i=1$ and applying the permutation invariance for $\\mathcal{TL}_2$ and $\\mathcal{TL}_3$. The generalized Taub-spaces $\\mathcal T^G_i$ are not invariant if ${{\\hat{n}}}_j\\neq{{\\hat{n}}}_k$. The other sets are invariant because the signs ${{\\hat{n}}}_i=\\mathrm{sign}\\,N_i$ are fixed.\n\nRecall that the signs of the $N_i$ correspond to the Bianchi Type of the Lie-algebra associated to the homogeneity of the cosmological model and are given in Table \\ref{table:bianchi-types} (up to index permutations).\n\n\n\n\\paragraph{Auxiliary Quantities.}\nThe following quantities turn out to be useful later on (where $(i,j,k)$ is a permutation of $(1,2,3)$):\n\\begin{subequations}\\label{eq:delta-r-def}\n\\begin{align} \n\\delta_i &= 2\\sqrt{|N_jN_k|} \\label{eq:delta-r-def-delta}\\\\\nr_i &= \\sqrt{(|N_j|-|N_k|)^2 + \\frac{1}{3}\\langle\\taubp_j-\\taubp_k, \\bpt\\Sigma\\rangle^2}\\label{eq:delta-r-def-r}\\\\\n\\nonumber \\psi_i &\\quad\\text{such that:} \\\\\nr_1\\cos\\psi_1&=\\frac{1}{\\sqrt{3}}\\langle\\taubp_{3}-\\taubp_{2}, \\bpt \\Sigma\\rangle=\\Sigma_- \n&r_1\\sin\\psi_1 &= |N_2|-|N_3| \\\\\nr_2\\cos\\psi_2&=\\frac{1}{\\sqrt{3}}\\langle\\taubp_{1}-\\taubp_{3}, \\bpt \\Sigma\\rangle=-\\frac{\\sqrt 
3}{2}\\Sigma_+ -\\frac{1}{2}\\Sigma_-\n&r_2\\sin\\psi_2 &= |N_3|-|N_1| \\\\\nr_3\\cos\\psi_3&=\\frac{1}{\\sqrt{3}}\\langle\\taubp_{2}-\\taubp_{1}, \\bpt \\Sigma\\rangle=\\frac{\\sqrt 3}{2}\\Sigma_+ -\\frac{1}{2}\\Sigma_-\n&r_3\\sin\\psi_3 &= |N_1|-|N_2|. \n\\end{align}\\end{subequations}\nThe auxiliary products $\\delta_i^2$ can be used to measure the distance from the Mixmaster attractor $\\mathcal A = \\{\\bpt x:\\, \\max_i \\delta_i(\\bpt x)=0\\}$. The $r_i$ and can be used to measure the distance from the generalized Taub-space $\\mathcal T_i^G = \\{\\bpt x:\\,r_i(\\bpt x)=0\\}$, and the $(r_i,\\psi_i)$-pairs form polar coordinates around the generalized Taub-spaces.\n\nThe products $\\delta_i$ obey an especially geometric differential equation, similar to \\eqref{eq:ode2-ni}:\n\\begin{equation}\\label{eq:ode2-delta}\n\\delta_i' = -\\left(\\left|\\bpt\\Sigma-\\frac{\\taubp_i}{2}\\right|^2-\\frac{1}{4}\\right)\\delta_{i}.\n\\end{equation}\n\n\\subsection{The Wainwright-Hsu equations in polar coordinates}\\label{sect:polar-coords}\nNear the generalized Taub-spaces $\\mathcal T^G_i$, it is possible to use polar coordinates \\eqref{eq:delta-r-def}. Without loss of generality we will only transform \\eqref{eq:ode} into these coordinates around the Taub-space $\\mathcal T^G_1$ (the other ones can be obtained by permuting the indices and rotating or reflecting $\\bpt \\Sigma$).\n\nThe use of polar coordinates near the Taub-spaces $\\mathcal T_i$ for Bianchi \\textsc{IX}, i.e. $\\mathcal M_{+++}$, is by no means novel (c.f.~e.g.~\\cite{ringstrom2001bianchi}, \\cite{heinzle2009mixmaster}). 
However, to the best of our knowledge, polar coordinates around the generalized Taub-spaces $\\mathcal T^G_i$ have not been used previously in the case where the generalized Taub-space fails to be invariant.\n\nWe only use polar coordinates on $\\mathcal M=\\{\\bpt x:\\,\\,G(\\bpt x)=1\\}$.\n\\paragraph{Polar Coordinates around the invariant Taub-spaces.}\nConsider the case $\\mathcal M_{*++}$, where $N_2,N_3>0$ and we are interested in a neighborhood of $\\mathcal T_1 = \\{\\bpt x:\\, \\Sigma_-=0,\\,N_2-N_3=0\\}$. The sign of $N_1$ does not significantly matter.\n\n\nWe use the additional shorthands\n\\[ N_- = N_2-N_3\\qquad N_+=N_2+N_3,\n\\]\nsuch that (with \\eqref{eq:delta-r-def}):\n\\[\\begin{aligned}\nr_1\\ge 0 &:& r_1^2 &= \\Sigma_-^2 + N_-^2\\\\\n\\psi &:& N_-&= r_1\\sin\\psi&\\Sigma_-&=r_1\\cos\\psi\\\\\n&&N_+^2 &= N_-^2+\\delta_1^2 &N^2&= N_-^2+N_1(N_1-2N_+).\n\\end{aligned}\\]\n\n\n\\noindent This gives us the differential equations (using $\\Sigma^2+N^2=1$):\n\\[\\begin{aligned}\nN_-' &= (N^2 -1 - \\Sigma_+)N_- +\\sqrt{3}\\Sigma_-N_+\\\\\nN_+'&= (N^2 -1 -\\Sigma_+)N_+ +\\sqrt{3}\\Sigma_-N_-\\\\\n\\Sigma_-' &= N^2\\Sigma_- - \\sqrt{3}N_-\\left(N_+ - N_1\\right),\n\\end{aligned}\\]\nallowing us to further compute\n\\begin{subequations}\\label{eq:neartaub-b9-q}\n\\begin{align}\n\\frac{r_1'}{r_1}&= \\frac{\\Sigma_-\\Sigma_-' + N_-N_-'}{r_1^2} \n= N^2 - (\\Sigma_++1) \\frac{N_-^2}{r_1^2} +\\sqrt{3} N_1 \\frac{\\Sigma_- N_-}{r_1^2}\\label{eq:neartaub-b9-q:r}\\\\\n\\psi' &=\\frac{\\Sigma_-N_-' - N_-\\Sigma_-'}{r_1^2}\n=\\sqrt{3}N_+ - (\\Sigma_++1)\\frac{N_-\\Sigma_-}{r_1^2} -\\sqrt{3}N_1\\frac{N_-^2}{r_1^2}\\label{eq:neartaub-b9-q:psi}\\\\\n\\frac{\\delta_1'}{\\delta_1} &= N^2-(\\Sigma_++1)\\label{eq:neartaub-b9-q:delta}\\\\\n\\partial_t\\log \\frac{\\delta_1}{r_1}&= -(\\Sigma_+ + 1)\\frac{\\Sigma_-^2}{r_1^2} -\\sqrt{3}N_1\\frac{\\Sigma_- N_-}{r_1^2}.\\label{eq:neartaub-b9-q:delta-r}\n\\end{align}\\end{subequations}\nNear $\\taubp_1$, i.e. 
for $\\Sigma_+\\approx -1$, we can use $1+\\Sigma_+=\\frac{\\Sigma_-^2+N^2}{1-\\Sigma_+}$ in order to rearrange some terms in \\eqref{eq:neartaub-b9-q}:\n\\begin{subequations}\\label{eq:neartaub-b9-t}\n\\begin{align}\n\\frac{r_1'}{r_1} &= r_1^2\\sin^2\\psi \\frac{-\\Sigma_+}{1-\\Sigma_+} + N_1\\, h_r\\label{eq:neartaub-b9-t:r}\\\\\n\\frac{\\delta_1'}{\\delta_1}&= \\frac{-1}{1-\\Sigma_+}r_1^2\\cos^2\\psi + \\frac{-\\Sigma_+}{1-\\Sigma_+}r_1^2\\sin^2\\psi + N_1\\,h_\\delta \\label{eq:neartaub-b9-t:delta}\\\\\n\\partial_t \\log \\frac{\\delta_1}{r_1}&=\\frac{-1}{1-\\Sigma_+}r_1^2\\cos^2\\psi +N_1(h_\\delta-h_r) \\label{eq:neartaub-b9-t:delta-r}\\\\\n\\psi'&= \\sqrt{3}r_1 \\sqrt{\\sin^2\\psi + \\frac{\\delta_1^2}{r_1^2}} - \\frac{r_1^2}{1-\\Sigma_+} \\sin\\psi \\cos\\psi + N_1\\sin\\psi\\, h_{\\psi}, \\label{eq:neartaub-b9-t:psi}\n\\end{align}\\end{subequations}\nwhere\n\\begin{subequations}\\label{eq:neartaub-b9-t-h}\\begin{align}\nh_r &=+\\sqrt{3}\\frac{\\Sigma_-N_-}{r_1^2} + (N_1-2N_+)\\left(1+\\frac{N_-^2}{(1-\\Sigma_+)r_1^2}\\right)\\\\\nh_\\delta &= (N_1-2N_+)\\frac{-\\Sigma_+}{1-\\Sigma_+}\\\\\nh_\\psi &= -\\sqrt{3}\\sin\\psi - \\cos\\psi \\frac{N_1-2N_+}{1-\\Sigma_+}.\n\\end{align}\\end{subequations}\nLet us point out that $|h_r|,|h_\\delta|, |h_\\psi| \\le 5$ if $|N_1|, |N_+|, |N_-|\\le \\frac 1 2$ and $\\Sigma_+<0$.\n\\paragraph{Polar Coordinates around the non-invariant generalized Taub-spaces.}\nWithout loss of generality, assume that we are in $\\mathcal M_{*+-}$, i.e.~$N_2>0>N_3$, and that we are interested in a neighborhood of $\\mathcal T^G_1 = \\{\\bpt x:\\, \\Sigma_-=0,\\,|N_2|-|N_3|=0\\}$.\n\nThe assumption $N_2>0>N_3$ is incompatible with $N_3=N_2$; hence, $\\mathcal M_{*+-}\\cap \\mathcal T_1=\\emptyset$, which is why we have to work with $\\mathcal T^G_1$. 
Since $\\mathcal T^G_1$ is not invariant, we expect the corresponding equations for $r_1'$ and $\\psi'$ to become singular near $r_1=0$.\n\nThere is a possible symmetry-based motivation for expecting usable polar coordinates around $\\mathcal T^G_1$ that will be given at the end of this section. Regardless of this motivation, the usability of such polar coordinates is proven by our use of them.\n\nWe will proceed analogous to the case of $\\mathcal T_1$, using the same names for quantities which fulfill the same function in this work, such that the definitions of e.g.~$N_+,N_-$ will depend on the signs ${{\\hat{n}}}_2,{{\\hat{n}}}_3$. Hence, we introduce shorthands\n\\[ N_- = |N_2|-|N_3|=N_2+N_3\\qquad N_+=|N_2|+|N_3|=N_2-N_3,\\]\nsuch that (with \\eqref{eq:delta-r-def}):\n\\[\\begin{aligned}\nr_1\\ge 0 &:& r_1^2 &= \\Sigma_-^2 + N_-^2\\\\\n\\psi &:& N_-&= r_1\\sin\\psi&\\Sigma_-&=r_1\\cos\\psi\\\\\n&&N_+^2 &= N_-^2+\\delta_1^2 &N^2&= N_+^2+N_1(N_1-2N_-).\n\\end{aligned}\\]\n\nThis gives us the differential equations (using $\\Sigma^2+N^2=1$):\n\\[\\begin{aligned}\nN_-' &= (N^2 -1 - \\Sigma_+)N_- +\\sqrt{3}\\Sigma_-N_+\\\\\nN_+'&= (N^2 -1 -\\Sigma_+)N_+ +\\sqrt{3}\\Sigma_-N_-\\\\\n\\Sigma_-' &= N^2\\Sigma_- - \\sqrt{3}\\left(N_-N_+ - N_+N_1\\right),\n\\end{aligned}\\]\nallowing us to further compute\n\\begin{subequations}\\label{eq:neartaub-b8-q}\n\\begin{align}\n\\frac{r_1'}{r_1}&= \\frac{\\Sigma_-\\Sigma_-' + N_-N_-'}{r_1^2} \n= N^2 - (\\Sigma_++1) \\frac{N_-^2}{r_1^2} +\\sqrt{3} N_1 \\frac{\\Sigma_- N_+}{r_1^2} \\label{eq:neartaub-b8-q:r}\\\\\n\\psi' &=\\frac{\\Sigma_-N_-' - N_-\\Sigma_-'}{r_1^2}\n=\\sqrt{3}N_+ - (\\Sigma_++1)\\frac{N_-\\Sigma_-}{r_1^2} -\\sqrt{3}N_1\\frac{N_-N_+}{r_1^2} \\label{eq:neartaub-b8-q:psi}\\\\\n\\frac{\\delta_1'}{\\delta_1} &= N^2-(\\Sigma_++1) \\label{eq:neartaub-b8-q:delta}\\\\\n\\partial_t\\log \\frac{\\delta_1}{r_1}&= -(\\Sigma_+ + 1)\\frac{\\Sigma_-^2}{r_1^2} -\\sqrt{3}N_1\\frac{\\Sigma_- 
N_+}{r_1^2}.\\label{eq:neartaub-b8-q:delta-r}\n\\end{align}\\end{subequations}\n\\noindent\nNear $\\taubp_1$, i.e. for $\\Sigma_+\\approx -1$, we can use $1+\\Sigma_+=\\frac{\\Sigma_-^2+N^2}{1-\\Sigma_+}$ in order to rearrange some terms in \\eqref{eq:neartaub-b8-q}:\n\n\n\\begin{subequations}\\label{eq:neartaub-b8-t}\n\\begin{align}\n\\frac{r_1'}{r_1} &= \\frac{-\\Sigma_+}{1-\\Sigma_+}r_1^2\\sin^2\\psi +\\delta_1^2\\frac{\\cos^2\\psi-\\Sigma_+}{1-\\Sigma_+} + N_1\\, h_r \\label{eq:neartaub-b8-t:r}\\\\\n\\frac{\\delta_1'}{\\delta_1}&= \\frac{-1}{1-\\Sigma_+}r_1^2\\cos^2\\psi + \\frac{-\\Sigma_+}{1-\\Sigma_+}r_1^2\\sin^2\\psi +\\frac{-\\Sigma_+}{1-\\Sigma_+}\\delta_1^2+ N_1\\,h_\\delta \\label{eq:neartaub-b8-t:delta}\\\\\n\\partial_t \\log \\frac{\\delta_1}{r_1}&=\\frac{-1}{1-\\Sigma_+}r_1^2\\cos^2\\psi - \\delta_1^2\\frac{\\cos^2\\psi}{1-\\Sigma_+} + N_1(h_\\delta-h_r) \\label{eq:neartaub-b8-t:delta-r}\\\\\n\\psi'&= \\sqrt{3}r_1 \\sqrt{\\cos^2\\psi + \\frac{\\delta_1^2}{r_1^2}} - \\frac{r_1^2+\\delta_1^2}{1-\\Sigma_+} \\cos\\psi \\sin\\psi + N_1\\sin\\psi\\, h_{\\psi},\\label{eq:neartaub-b8-t:psi}\n\\end{align}\\end{subequations}\nwhere\n\\begin{subequations}\\label{eq:neartaub-b8-t-h}\\begin{align}\nh_r &=-\\sqrt{3}\\frac{\\Sigma_-N_+}{r_1^2} + (N_1-2N_-)\\left(1-\\frac{N_-^2}{(1-\\Sigma_+)r_1^2}\\right)\\\\\nh_\\delta &= (N_1-2N_-)\\frac{-\\Sigma_+}{1-\\Sigma_+}\\\\\nh_\\psi &= -\\sqrt{3}\\sqrt{\\sin^2\\psi + \\frac{\\delta_1^2}{r_1^2}} - \\sin\\psi \\frac{N_1-2N_-}{1-\\Sigma_+}.\n\\end{align}\\end{subequations}\nLet us point out that, if $|N_1|, |N_+|, |N_-|\\le \\frac 1 2$ and $\\Sigma_+<0$ and $\\frac{\\delta_1}{r_1}\\le 1$, then $\\frac{N_+}{r_1}\\le \\sqrt{2}$ and hence $|h_r|,|h_\\delta|, |h_\\psi| \\le 5$.\n\n\\paragraph{Motivation for polar coordinates around $\\mathcal T_1^G$.}\nOne possible motivation for a priori expecting useful equations from this approach is by a symmetry argument: The Taub-space is invariant since it is the fixed point space of the 
reflection $\\sigma:(\\Sigma_+,\\Sigma_-, N_1,N_2,N_3)\\to (\\Sigma_+,-\\Sigma_-, N_1, N_3, N_2)$, and \\eqref{eq:ode} is equivariant under $\\sigma$. This transformation $\\sigma:\\mathcal M_{\\pm +-}\\to \\mathcal M_{\\pm-+}$ does not map the quadrant $N_2>0>N_3$ into itself. Instead, we have $\\mathcal T^G_1$ as the fixed point space of $\\widetilde\\sigma:(\\Sigma_+,\\Sigma_-, N_1,N_2,N_3)\\to (\\Sigma_+,-\\Sigma_-, N_1, -N_3, -N_2)$.\nThe Wainwright-Hsu equations are not equivariant under this reflection $\\widetilde\\sigma$, and we therefore have no reason to expect the fixed-point space $\\mathcal T^G_1$ of $\\widetilde \\sigma$ to be invariant. However, considering \\eqref{eq:ode}, equivariance is only spoiled by terms of the form $N_2N_1$ and $N_3N_1$ changing their signs; hence, we expect $\\mathcal T^G_1=\\{\\bpt x:\\, \\Sigma_-=0,\\,|N_2|-|N_3|=0\\}$ to be invariant up to terms of order $|N_2N_1|$ and $|N_3N_1|$. Such terms can be well controlled, as it will turn out in Section \\ref{sect:near-taub}.\n\n\n\\section{Description of the Dynamics} \\label{sect:farfromA\nWe will now give an overview of the behaviour of trajectories of \\eqref{eq:ode}. This overview will contain most of the classic results about Bianchi cosmological models. \n\nOur overview will be organized by first describing the simplest subsets named in Table \\ref{table:inv-sets} and then progressing to the higher dimensional subsets, finally describing Bianchi Type \\textsc{IX} ($\\mathcal M_{+++}$) and Bianchi Type \\textsc{VIII} ($\\mathcal M_{+-+}$) solutions. Our approach in this section is very similar to \\cite{ringstrom2001bianchi} and \\cite{heinzle2009new}; unless explicitly otherwise stated, all observations in this section can be found therein.\n\n \n\\paragraph{A very short summary of relevant dynamics.}\nThe Kasner circle $\\mathcal K$ is actually a circle and consists entirely of equilibria. 
The so-called Mixmaster attractor $\\mathcal A$ consists of three $2$-spheres $\\{\\bpt x:\\, \\Sigma_+^2+\\Sigma_-^2+N_i^2=1,\\,N_j=N_k=0\\}$, which intersect in $\\mathcal K$. Only half of these spheres are accessible for any trajectory, since the $\\mathrm{sign}\\,N_i$ are fixed. For this reason, these half-spheres are also called ``Kasner-caps'', e.g.~$\\mathcal A_{+00}$ is the $N_1>0$-cap.\n\nThe dynamics on the Kasner-caps will be discussed in Section \\ref{sect:lower-b-types}; each orbit in a Kasner-cap is a heteroclinic orbit connecting two equilibria on the Kasner-circle $\\mathcal K$.\n\nThe long-time behaviour of the lower dimensional Bianchi-types (at least one $N_i=0$) is well-understood: All such solutions converge to an equilibrium $\\bpt p \\in\\mathcal K$ as $t\\to\\infty$. The behavior in the highest-dimensional Bianchi Types \\textsc{IX} and \\textsc{VIII} is not yet fully understood, and is therefore of most interest in this work.\n\n\nIt is known (c.f.~\\cite{ringstrom2001bianchi}) that Bianchi Type \\textsc{IX} solutions that do not lie in a Taub-space converge to the Mixmaster attractor as $t\\to+\\infty$, i.e.~towards the big bang singularity. \nIt has been conjectured that generic Bianchi Type \\textsc{VIII} solutions share this behaviour; this will be proven in this work (Theorems \\ref{thm:b8-attractor-global} and \\ref{thm:b8-attractor-global-genericity}).\n\n\nThe question of particle horizons was already mentioned in the introduction and is further discussed from a physical viewpoint in Section \\ref{sect:gr-phys-interpret}. In terms of the Wainwright-Hsu equations, the question can be formulated as (see Section \\ref{thm:b8-attractor-global-genericity}, or c.f. e.g. \\cite{heinzle2009future}):\n\n\\[\n\\text{Is }\\quad\nI(\\bpt x) = \\max_i \\int_{0}^\\infty \\delta_i(\\phi(\\bpt x, t)){\\mathrm{d}} t = 2\\max_i\\int_0^\\infty \\sqrt{|N_jN_k|}(t){\\mathrm{d}} t<\\infty\\,?\n\\]\nHere $\\phi$ is the flow to \\eqref{eq:ode}. 
The space-time associated to the solution $\\phi(\\bpt x, \\cdot)$ forms a particle horizon if and only if $I<\\infty$.\n\n\nIt is known that there exist solutions in Bianchi \\textsc{IX} and \\textsc{VIII} with $I<\\infty$ (c.f.~\\cite{liebscher2011ancient}). It is not known whether there exist any nontrivial solutions with $I=\\infty$ (of course solutions starting in $\\mathcal T$ that do not converge to $\\mathcal A$ must have $I=\\infty$). We prove that, in both Bianchi \\textsc{IX} and \\textsc{VIII} and for Lebesgue almost every initial condition, particle horizons develop ($I<\\infty$) (Theorem \\ref{thm:horizon-formation}).\n\n\n\n\\subsection{Lower Bianchi Types}\\label{sect:lower-b-types}\n\\paragraph{Bianchi Type \\textsc{I}: The Kasner circle.}\nThe smallest, i.e.~lowest dimensional, Bianchi-type is Type \\textsc{I}, $\\mathcal M_{000}=\\mathcal K$, where all three $N_i$ vanish (see Table \\ref{table:inv-sets}). By the constraint $N^2+\\Sigma^2=1$, we can see that $\\mathcal K$ is the unit circle in the $(\\Sigma_+,\\Sigma_-)$-plane, and consists entirely of equilibria.\n\nThe linear stability of these equilibria is given by the following\n\\begin{lemma}\\label{farfromA:lemma:kasnermap-stability}\nLet $\\bpt p=(\\Sigma_+,\\Sigma_-,0,0,0)\\in \\mathcal K$; first consider the case $\\bpt p\\neq \\taubp_i$ and without loss of generality $|\\bpt p+\\taubp_1|<1$. 
Then the vector field has one central direction given by $\\partial_{\\mathcal K}=(-\\Sigma_-, \\Sigma_+,0,0,0)$, one unstable direction given by $\\partial_{N_1}=(0,0,1,0,0)$, and two stable directions given by $\\partial_{N_2}=(0,0,0,1,0)$ and $\\partial_{N_3}=(0,0,0,0,1)$.\n\nThe three Taub-points $\\taubp_i$ each have one stable direction given by $\\partial_{N_i}$ and three center directions given by $\\partial_{\\mathcal K}$, $\\partial_{N_j}$ and $\\partial_{N_k}$.\n\\end{lemma}\n\\begin{proof}\nWe first note that the four vectors $\\partial_{\\mathcal K}$ and $\\partial_{N_i}$ form a basis of the tangent space $T_{\\bpt p}\\mathcal M = \\mathrm{ker}\\,{\\mathrm{d}} G$. \n\nThe stability of an equilibrium is determined by the eigenvalues and eigenspaces of the Jacobian of the vector field; a generalized eigenspace is \\emph{central}, if its eigenvalue has vanishing real part, it is \\emph{stable} if its eigenvalue has negative real part, and it is \\emph{unstable} if its eigenvalue has positive real part.\nThe Jacobian ${\\mathrm{D}} f$ of the vector field $f$ given by \\eqref{eq:ode} at $\\bpt p\\in\\mathcal K$ acts diagonally on the $N_i$-directions, ${\\mathrm{D}} f = \\lambda_1 \\partial_{N_1} \\otimes {\\mathrm{d}} N_1 + \\lambda_2 \\partial_{N_2} \\otimes {\\mathrm{d}} N_2 + \\lambda_3 \\partial_{N_3}\\otimes {\\mathrm{d}} N_3$ with $\\lambda_i = 1-\\left|\\bpt p+\\taubp_i\\right|^2$; we can read off the stability from \\eqref{eq:ode2-ni} and Figure \\ref{fig:n-discs}.\n\\end{proof}\n\n\\begin{figure}[hbpt]\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/graphics\/classa-short-het}\n \\caption{The three discs, where $N_i'\/N_i>0$ and a short heteroclinic chain.}\\label{fig:short-het}\\label{fig:n-discs}\n \\end{subfigure}~~%\n %\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/graphics\/classa-stab-products.pdf}\n \\caption{The three discs, where $\\delta_i'\/\\delta_i >0$. 
Also, the Kasner-map $K$ is a double cover.}\\label{fig:double-cover}\\label{fig:delta-discs}\n \\end{subfigure}\n \\caption{Stability properties and the Kasner map. All figures are in $(\\Sigma_+, \\Sigma_-)$-projection.}\\label{fig:kasner-segments} \n\\end{figure}\n\n\\paragraph{The Taub-line.}\nThere exists another set of equilibria, given by $\\mathcal{TL}_i:= \\{\\bpt x\\in\\mathcal M: N_j = N_k,\\,N_i=0,\\,\\bpt \\Sigma = \\taubp_i\\}$. Up to index permutations and $\\bpt N\\to -\\bpt N$, this set has the form $\\mathcal{TL}_1\\cap\\mathcal M_{0++}=\\{\\bpt p:\\, \\bpt p =(-1,0,0,n,n),\\, n>0\\}$. We observe that this is a line of equilibria. Each such equilibrium has one stable direction (corresponding to $N_1$) and three center directions. \n\n\\paragraph{Bianchi Type \\textsc{II}: The Kasner caps.}\nConsider without loss of generality the set $\\mathcal M_{+00}$. The constraint $G=1$ then reads\n\\[1=N_1^2 + \\Sigma_+^2+\\Sigma_-^2,\\]\ni.e.~the so-called ``Kasner-cap'' $\\mathcal M_{+00}$ forms a half-sphere with $\\mathcal K$ as its boundary. Considering \\eqref{eq:ode}, we can see that $\\bpt \\Sigma' = (\\bpt \\Sigma+2\\taubp_1)N_1^2$ is a scalar multiple of $\\bpt \\Sigma+2\\taubp_1$; hence, the $\\bpt\\Sigma$-projection of the trajectory stays on the same line through $-2\\taubp_1$. Since $N_1^2> 0$ on $\\mathcal M_{+00}$, this trajectory is heteroclinic and converges in forward and backward times to the two intersections of this line with the Kasner circle $\\mathcal K$, where the $\\alpha$-limit is closer to $-2\\taubp_1$. Such a trajectory is depicted in Figure \\ref{fig:short-het}.\n\n\\paragraph{The Mixmaster-Attractor $\\mathcal A$.}\nThe most relevant set for the long-time behavior of the Bianchi system is the Mixmaster attractor $\\mathcal A$.\nIt consists of the union of all six Bianchi $\\textsc{II}$ pieces and the Kasner circle. 
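The straight-line motion of the $\bpt\Sigma$-projection on a Kasner-cap, described in the Bianchi \textsc{II} paragraph above, can also be observed numerically. In the following illustrative Python sketch (step size and initial condition are arbitrary choices), the cross product of $\bpt\Sigma(t)+2\taubp_1$ with its initial direction stays at the level of the integration error:

```python
import math

SQ3 = math.sqrt(3)

def rhs(x):
    """Unpacked Wainwright-Hsu vector field, cf. eq. (ode-unpacked)."""
    sp, sm, n1, n2, n3 = x
    S2 = sp*sp + sm*sm
    N2 = n1*n1 + n2*n2 + n3*n3 - 2*(n1*n2 + n2*n3 + n3*n1)
    return (N2*sp - 2*n1*n1 + n2*n2 + n3*n3 + n1*n2 - 2*n2*n3 + n1*n3,
            N2*sm + SQ3*(-n2*n2 + n3*n3 + n1*n2 - n1*n3),
            -(S2 - 2*sp)*n1,
            -(S2 + sp - SQ3*sm)*n2,
            -(S2 + sp + SQ3*sm)*n3)

def G(x):
    sp, sm, n1, n2, n3 = x
    return sp*sp + sm*sm + n1*n1 + n2*n2 + n3*n3 - 2*(n1*n2 + n2*n3 + n3*n1)

def rk4_step(x, h):
    k1 = rhs(x)
    k2 = rhs(tuple(xi + 0.5*h*ki for xi, ki in zip(x, k1)))
    k3 = rhs(tuple(xi + 0.5*h*ki for xi, ki in zip(x, k2)))
    k4 = rhs(tuple(xi + h*ki for xi, ki in zip(x, k3)))
    return tuple(xi + h/6.0*(a + 2*b + 2*c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

# Bianchi II initial condition on M_{+00}: N_2 = N_3 = 0, N_1 > 0 from G = 1.
sp0, sm0 = 0.1, 0.3
x = (sp0, sm0, math.sqrt(1.0 - sp0**2 - sm0**2), 0.0, 0.0)
v0 = (sp0 - 2.0, sm0)                # direction of Sigma(0) + 2*taub_1
for _ in range(5000):
    x = rk4_step(x, 1e-3)
cross = (x[0] - 2.0) * v0[1] - x[1] * v0[0]
assert abs(cross) < 1e-8             # Sigma stays on the line through -2*taub_1
assert abs(G(x) - 1.0) < 1e-9        # constraint preserved
assert x[2] > 0.0                    # still on the N_1 > 0 cap
```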
Since the signs of the $N_i$ stay constant along trajectories, it is useful to only study a piece of $\\mathcal A$, given without loss of generality by:\n\n\\[\n\\mathcal A_{+++}=\\mathcal M_{+00}\\cup \\mathcal M_{0+0}\\cup \\mathcal M_{00+}\\cup \\mathcal K.\n\\]\nThe set $\\mathcal A_{+++}$ is given by the union of three perpendicular half-spheres (the three Kasner-caps), which intersect in the Kasner-circle. The dynamics in the Mixmaster-Attractor now consists of the Kasner circle $\\mathcal K$ of equilibria and the three caps, consisting of heteroclinic orbits to $\\mathcal K$. It is described in detail by the so-called Kasner-map.\n\n\n\\paragraph{The Kasner-map $K:\\mathcal K\\to\\mathcal K$.}\nWe wish to describe which equilibria in $\\mathcal K$ are connected by heteroclinic orbits. We can collect this in a relation $K\\subseteq \\mathcal K\\times \\mathcal K$, i.e.~we write $\\bpt p_- K\\bpt p_+$ if either $\\bpt p_-=\\bpt p_+=\\taubp_i$ or there exists a heteroclinic orbit $\\gamma:{\\mathbb{R}}\\to \\mathcal A$ such that $\\bpt p_- = \\lim_{t\\to-\\infty}\\gamma(t)$ and $\\bpt p_+=\\lim_{t\\to+\\infty}\\gamma(t)$.\n\nWe can see by Figure \\ref{fig:n-discs} (or Lemma \\ref{farfromA:lemma:kasnermap-stability}) that each non-Taub point $\\bpt p_-$ has a one-dimensional unstable manifold, i.e. one trajectory in $\\mathcal A_{{{\\hat{n}}}}$, which converges to $\\bpt p_-$ in backwards time. Therefore, the relation $K$ can be considered as a (single-valued, everywhere defined) map.\n\n\nThis map is depicted in Figure \\ref{fig:double-cover}, and has a simple geometric description in the $(\\Sigma_+,\\Sigma_-)$-projection: \nGiven some $\\bpt p_-\\in\\mathcal K$, we draw a straight line through $\\bpt p_-$ and the nearest of the three points $-2\\taubp_i$. 
This line typically has two intersections with the Kasner-circle $\\mathcal K$: $\\bpt p_-$, which is nearer to $-2\\taubp_i$, and $\\bpt p_+$, which is further away. At a Taub-point $\\taubp_i$, there are two possible choices of nearest point, $-2\\taubp_j$ and $-2\\taubp_k$, and the lines through these points are tangent to $\\mathcal K$; we just set $K(\\taubp_i)=\\taubp_i$. We see from Figure \\ref{fig:double-cover} that this map $K:\\mathcal K\\to\\mathcal K$ is continuous and a double cover (i.e.~each point $\\bpt p_+$ has two preimages $\\bpt p_{-}^1$ and $\\bpt p_-^2$, which both depend continuously on $\\bpt p_+$).\n\n\nLooking at Figure \\ref{fig:short-het}, we can also see that the Kasner-map is expanding. Hence it is $C^0$-conjugate to either $[z]_{{\\mathbb{Z}}}\\to [2z]_{{\\mathbb{Z}}}$ or $[z]_{{\\mathbb{Z}}}\\to [-2z]_{{\\mathbb{Z}}}$; since it has three fixed points the latter case must apply. Hence we have\n\n\\begin{proposition}\\label{prop:kasnermap-homeomorphism-class}\nThere exists a homeomorphism $\\psi:\\mathcal K \\to {\\mathbb{R}}\/3{\\mathbb{Z}}$, such that $\\psi(\\taubp_i)=\\left[i\\right]_{3{\\mathbb{Z}}}$ and\n\\[\n\\psi(K(\\bpt p))=[-2 \\psi(\\bpt p)]_{3{\\mathbb{Z}}}\\qquad\\forall \\bpt p\\in\\mathcal K.\n\\]\n\\end{proposition}\nA formal proof of Proposition \\ref{prop:kasnermap-homeomorphism-class} is a digression; for this reason, it is deferred until Section \\ref{sect:appendix-kasner-map}, where we give a more detailed description of the Kasner map.\n\n\\paragraph{Basic Heuristics near the Mixmaster Attractor.}\nHeuristically, the Kasner-map determines the behavior of solutions near $\\mathcal A$: \nConsider an initial condition $\\bpt x_0\\in \\mathcal M_{\\pm\\pm\\pm}$ near $\\mathcal A$, i.e.~an initial condition where none of the $N_i$ vanish. Then the trajectory $\\bpt x(t)$ will closely follow the heteroclinic solution $\\gamma_1$ passing near $\\bpt x_0$. 
Let $\\bpt p_1$ be the end-point of this heteroclinic; $\\bpt x(t)$ will follow $\\gamma_1$ and stay for some time near $\\bpt p_1$ (since it is an equilibrium). However, if $\\bpt p_1\\neq \\taubp_i$, then one of the $N_i$ directions is unstable; therefore, $\\bpt x(t)$ will leave the neighborhood of $\\bpt p_1$ along the unique heteroclinic emanating from $\\bpt p_1$, and follow it until it is near $\\bpt p_2=K(\\bpt p_1)$. This should continue until $\\bpt x(t)$ leaves the vicinity of $\\mathcal A$, which should not happen at all (at least if the name ``Mixmaster Attractor'' is well deserved).\n\nThe expansion of the Kasner-map is the source of the (so far heuristic) chaoticity of the dynamics of the Bianchi system: The expansion along the Kasner-circle supplies the sensitive dependence on initial conditions, while the remaining two directions are contracting. Since we study a flow in four-dimensions, Lyapunov exponents need only account for three dimensions, the last one corresponding to the time-evolution.\n\n\n\n\n\\paragraph{Bianchi-Types $\\textsc{VII}_0$ and $\\textsc{VI}_0$.}\nThere are two Bianchi-types, where exactly one of the three $N_i$ vanishes: Types $\\textsc{VII}_0$ and $\\textsc{VI}_0$.\n In these Bianchi-Types, we have monotone functions (Lyapunov functions), which suffice to almost completely determine the long-time behaviour of trajectories. Without loss of generality, we focus on the case where $N_1=0$. Then we can write\n\\[\n\\Sigma_+' = (1-\\Sigma^2)(\\Sigma_++1)\\qquad 1= \\Sigma^2 + (N_2-N_3)^2.\n\\]\nWe can immediately see that $\\Sigma_+$ is non-decreasing along trajectories; indeed, we must have $\\lim_{t\\to \\pm \\infty}\\Sigma^2(t) = 1$ for all trajectories. 
Considering \\eqref{eq:ode2-ni} and Figure \\ref{fig:n-discs}, we can see that, for $t\\to +\\infty$ we must either have $\\Sigma_+ \\ge \\frac 1 2$, since both $N_2$ and $N_3$ must be stable, or $\\Sigma_+=-1$ all along; then $N_2=N_3$ and the trajectory lies in the Taub-line $\\mathcal {TL}_1$. In backward time, we must have $\\Sigma_+\\to -1$: All other points on the Kasner-circle have one of the two $N_2,N_3$ unstable. These statements can be formalized as\n\\begin{lemma}\\label{lemma:farfromA:b6}\nConsider an initial condition $\\bpt x_0$ with $\\bpt x_0\\in\\mathcal M_{0-+}$. Then, in forward time, the trajectory $\\bpt x(t)=\\phi(\\bpt x_0, t)$ converges to a point on the Kasner-circle $\\lim_{t\\to \\infty}\\bpt x(t)=\\bpt p_+\\in\\mathcal K$ with $\\Sigma_+(\\bpt p_+)\\ge \\frac 1 2$. In backwards time, the trajectory converges to the Taub-point $\\taubp_1=(-1,0,0,0,0)=\\lim_{t\\to -\\infty}\\bpt x(t)$.\n\\end{lemma}\n\\begin{lemma}\\label{lemma:farfromA:b7}\nConsider an initial condition $\\bpt x_0$ with $\\bpt x_0\\in\\mathcal M_{0++}\\setminus \\mathcal{TL}_1$. Then, in forward time, the trajectory $\\bpt x(t)=\\phi(\\bpt x_0, t)$ converges to a point on the Kasner-circle $\\lim_{t\\to \\infty}\\bpt x(t)=\\bpt p_+\\in\\mathcal K$ with $\\Sigma_+(\\bpt p_+)\\ge \\frac 1 2$. In backwards time, the $\\bpt\\Sigma$-projection of the trajectory converges to the Taub-point $\\taubp_1=(-1,0)=\\lim_{t\\to -\\infty}\\bpt\\Sigma(\\bpt x(t))$. No claim about the dynamics of the $N_i(t)$ for $t\\to-\\infty$ is made.\n\\end{lemma}\n\\begin{proof}\nContained in the preceding paragraph.\n\\end{proof}\nIt can be shown that, in the case of Bianchi $\\textsc{VII}_0$, i.e.~$\\bpt x_0\\in\\mathcal M_{0++}\\setminus \\mathcal{TL}_1$, the limit $\\lim_{t\\to -\\infty}\\bpt x(t)=\\bpt p\\in \\mathcal{TL}_1\\setminus\\{\\taubp_1\\}$ exists and does not lie on the Kasner circle. 
This claim follows directly from Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub}; however, the proof of Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} is rather lengthy and not required for our main results.\n\n\n\\subsection{Bianchi-Types \\textsc{VIII} and \\textsc{IX} for large $N$}\nAs we have seen, the lower Bianchi types do not support recurrent dynamics. This is different in the two top-dimensional Bianchi-types \\textsc{VIII} and \\textsc{IX}. This section is devoted to describing the behaviour far from $\\mathcal A$. \n\n\\begin{lemma}[Long-Time Existence]\nEvery solution $\\bpt x:[0,T)\\to \\mathcal M$ of \\eqref{eq:ode} has bounded $\\Sigma^2(t)$ and extends to a solution on $[0,\\infty)$.\n\\end{lemma}\n\\begin{proof}\nConsider first the case of Bianchi \\textsc{VIII}, where $N_1N_2N_3<0$: we can see that $N^2>0$ and (from $1=\\Sigma^2+N^2$) that $\\Sigma^2<1$ for all times $t<T$. Since $\\Sigma^2$ is bounded, the bound ${\\mathrm{D}}_t\\log{|N_i|} \\le |\\bpt \\Sigma|(2+|\\bpt \\Sigma|)$ shows that the $N_i$ can grow at most exponentially; hence the solution cannot blow up in finite time and extends to $[0,\\infty)$.\n\nIn the case of Bianchi \\textsc{IX}, we use that ${\\mathrm{D}}_t\\log|N_1N_2N_3| = -3\\Sigma^2\\le 0$, i.e.~$|N_1N_2N_3|$ is non-increasing; hence, for all times $t>0$:\n\\[\nN^2(t) \\ge -4 \\left(N_1N_2N_3\\right)^{\\frac{2}{3}}(0),\\qquad \\Sigma^2(t) = 1-N^2(t) \\le 1+4 \\left(N_1N_2N_3\\right)^{\\frac{2}{3}}(0).\n\\]\nUnbounded time of existence follows as in the case of Bianchi \\textsc{VIII}.\n\\end{proof}\nNext, we show that $|N_1N_2N_3|\\to 0$ as $t\\to \\infty$, and that this convergence is essentially uniformly exponential:\n\\begin{lemma}[Essentially exponential convergence of $|N_1N_2N_3|$]\\label{farfrom-A:lemma:uniform}\nFor every $C_{N}>0$, there exist constants $C_1,C_0>0$ such that for all trajectories $\\bpt x:[t_1,t_2]\\to \\mathcal M$ with $|N_1N_2N_3|(t_1)< C_N$ we have\n\\[\n|N_1N_2N_3|(t_2) \\le |N_1N_2N_3|(t_1) \\exp\\left(C_0 - C_1(t_2-t_1)\\right).\n\\]\n\\end{lemma}\n\\begin{proof}\nConsider the function $s:\\mathcal M\\to [0,\\infty)$\n\\[s(\\bpt x) = \\int_0^1 \\Sigma^2(\\phi(\\bpt x, t)){\\mathrm{d}} t,\\]\nwhere $\\phi:\\mathcal M\\times {\\mathbb{R}}\\to \\mathcal M$ is the flow associated to \\eqref{eq:ode}.\nFix some $C_N>0$. We will show that there is some constant $\\widetilde C>0$ such that $s(\\bpt x)> \\widetilde C$ whenever $|N_1N_2N_3|(\\bpt x)\\le C_N$. 
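The computations below repeatedly use the logarithmic derivatives of the $N_i$. Assuming \\eqref{eq:ode2-ni} takes the standard Wainwright--Hsu form in the normalization used here (a reconstruction, consistent with \\eqref{eq:ode2-delta} and with the bound ${\\mathrm{D}}_t\\log{|N_i|} \\le |\\bpt \\Sigma|(2+|\\bpt \\Sigma|)$ used below), they read:

```latex
\\[
{\\mathrm{D}}_t\\log|N_1| = -\\Sigma^2+2\\Sigma_+,\\qquad
{\\mathrm{D}}_t\\log|N_{2,3}| = -\\Sigma^2-\\Sigma_+\\pm\\sqrt{3}\\,\\Sigma_-,
\\]
so that summing the three identities yields
\\[
{\\mathrm{D}}_t\\log|N_1N_2N_3| = -3\\Sigma^2 \\le 0 .
\\]
```

This decay identity is the mechanism behind both the monotonicity of $|N_1N_2N_3|$ and the exponential convergence established next.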
From this, we can conclude\n\\[\n\\log\\frac{|N_1N_2N_3|(t_2)}{|N_1N_2N_3|(t_1)} = -3\\int_{t_1}^{t_2}\\Sigma^2(t){\\mathrm{d}} t \\le -3 (s(t_1)+ s(t_1+1) +\\ldots) \\le -3 (t_2-t_1-1)\\widetilde C.\n\\]\nWe will prove the estimate $s(\\bpt x)> \\widetilde C(C_N)$ by contradiction. Assume that we had a sequence $\\{\\bpt x_n\\}$ such that $s(\\bpt x_n)\\to 0$ and $|N_1N_2N_3|(\\bpt x_n)$ is bounded as $n\\to\\infty$.\n\nAt first, we show that this cannot happen in bounded regions of phase-space: We can see from \\eqref{eq:ode} that there do not exist any invariant sets with $\\Sigma^2=0$ (because then $N^2=1$ and $\\Sigma'\\neq 0$). Therefore $s(\\bpt x)>0$ for all $\\bpt x\\in\\mathcal M$. Since $s$ is continuous, the sequence $\\bpt x_n$ cannot converge, and hence cannot be bounded (otherwise, there would be some convergent subsequence).\n\nWe can assume without loss of generality that $\\Sigma^2(\\bpt x_n)\\to 0$ (since we assumed $\\int_0^1 \\Sigma^2(\\phi(\\bpt x_n, t)){\\mathrm{d}} t\\to 0$). In order to avoid convergent subsequences, we must have $\\max_{i}|N_i|(\\bpt x_n)\\to \\infty$ as $n\\to\\infty$.\n\n\nConsider first the case of Bianchi \\textsc{VIII} with $\\bpt x\\in\\mathcal M_{-++}$. Then \n\\[\nN^2 = (N_2-N_3)^2+N_1^2 -2 N_1 (N_2+N_3) = 1-\\Sigma^2.\n\\]\nAll three terms in the middle are non-negative and hence bounded; therefore, we must have $|N_2|,|N_3|\\to\\infty$ and $|N_1|\\to 0$. \nWe can write\n\\[{\\mathrm{D}}_t\\Sigma_+ = (1-\\Sigma^2)(\\Sigma_++1) + 3N_1(N_2+N_3-N_1).\\]\nWe can estimate the terms involving $N_i$ as \n\\[|N_1|(N_2+N_3+|N_1|) \\le C\\frac{C_N}{\\max(N_2,N_3)}\\to 0\\qquad\\text{as $n\\to\\infty$}.\\]\nSince ${\\mathrm{D}}_t\\log{|N_i|} \\le |\\bpt \\Sigma|(2+|\\bpt \\Sigma|)$ is bounded, the estimates $N_2,N_3 \\gg 1 \\gg |N_1|$ and hence $|N_1|(N_2+N_3+|N_1|)\\ll 1$ remain valid for at least one unit of time. 
\nWe therefore cannot have $\\lim_{n\\to\\infty}s(\\bpt x_n)=0$: $s(\\bpt x_n)\\approx 0$ is only possible if $\\max_{t\\in[0,1]}\\Sigma^2(\\phi(\\bpt x_n, t))\\approx 0$. This is, however, impossible since then $\\Sigma_+'\\approx 1$.\n\nConsider now the case of Bianchi \\textsc{IX}, i.e.~$\\bpt x\\in\\mathcal M_{+++}$. Assume without loss of generality that $N_3\\ge N_2\\ge N_1$. Then we can write\n\\[\n1-\\Sigma^2=N^2 = (N_1+N_2-N_3)^2 -4 N_1N_2 \\ge (N_1+N_2-N_3)^2 -4 C_N^{\\frac 2 3}.\n\\]\nTherefore, we must have $N_2,N_3\\to \\infty$ and $N_1\\to 0$. Apart from this, the same arguments as for Bianchi \\textsc{VIII} apply.\n\\end{proof}\nThis result, i.e.~Lemma \\ref{farfrom-A:lemma:uniform}, is not stated as explicitly in the previous works \\cite{ringstrom2001bianchi, heinzle2009new}, and certainly not used as extensively, but it is not a novel insight either. It directly proves that metric coefficients stay bounded; see Section \\ref{sect:gr-phys-interpret}.\n\nUsing Lemma \\ref{farfrom-A:lemma:uniform}, we can quickly see the following:\n\\begin{lemma}[Existence of $\\omega$-limits]\\label{lemma:farfromtaub:omega-existence}\nFor any initial condition $\\bpt x_0 \\in \\mathcal M$, the $\\omega$-limit set is nonempty, $\\omega(\\bpt x_0)\\neq \\emptyset$, i.e.~there exists a sequence of times $(t_n)_{n\\in{\\mathbb{N}}}$ with $\\lim_{n\\to\\infty}t_n=\\infty$ such that the limit $\\lim_{n\\to\\infty}\\bpt x(t_n)$ exists.\n\\end{lemma}\n\\begin{proof}\nWe begin again by considering the Bianchi \\textsc{VIII} case of $\\bpt x_0\\in\\mathcal M_{-++}$. The only way of avoiding the existence of an $\\omega$-limit is to have $\\lim_{t\\to \\infty}|\\bpt x(t)|=\\infty$. As in the proof of Lemma \\ref{farfrom-A:lemma:uniform}, this is only possible via $N_2,N_3\\to \\infty$ and $N_1\\to 0$. 
Then\n\\[\\begin{aligned}\n\\Sigma_+'(t) &= (1-\\Sigma^2)(\\Sigma_++1) + 3N_1(N_2+N_3-N_1) \\\\\n& \\ge (1-\\Sigma^2)(\\Sigma_++1) -9 |N_1N_2N_3|\\\\\n&\\ge (1-\\Sigma^2)(\\Sigma_++1) - C_1 e^{-C_2 t}\\\\\n\\frac{\\delta_1'}{\\delta_1}(t) &= -(\\Sigma^2 +\\Sigma_+),\n\\end{aligned}\\]\nwhere $\\delta_1=2\\sqrt{|N_2N_3|}$ as in \\eqref{eq:delta-r-def} and \\eqref{eq:ode2-delta}.\nThere are basically two possibilities with regard to the dynamics of $\\Sigma_+$: If $\\Sigma_+\\to -1$ as $t\\to+\\infty$, then this convergence must happen exponentially, since $(1-\\Sigma^2)(\\Sigma_++1)\\ge 0$. That is, if $\\Sigma_+\\to -1$, then we must have $|\\Sigma_++1|(t) \\le \\frac{C_1}{C_2}e^{-C_2 t}$ for all sufficiently large times $t>t_0>0$. Then \n\\[\n\\int_{t_0}^{\\infty}\\partial_t\\log \\delta_1(t){\\mathrm{d}} t = \\int_{t_0}^\\infty -\\Sigma_-^2 - \\Sigma_+(1+\\Sigma_+){\\mathrm{d}} t\\le 2\\frac{C_1}{C_2^2}e^{-C_2 t_0} < \\infty,\n\\]\nand hence $\\lim_{t\\to\\infty}\\delta_1(t)<\\infty$, contradicting our assumption $N_2,N_3\\to\\infty$. \n\nThe other option is that $\\Sigma_+\\not\\to-1$ as $t\\to\\infty$. Then we must have\n$1+\\Sigma_+(t) > \\epsilon$ for some $\\epsilon>0$ for all sufficiently large times. Informally, we can see from Figure \\ref{fig:delta-discs} that this contradicts $\\delta_1\\to\\infty$. \nFormally, we can say: Since $\\Sigma_+$ is bounded, $\\int (1-\\Sigma^2)\\epsilon{\\mathrm{d}} t<\\int \\Sigma_+'{\\mathrm{d}} t + C < \\infty$, and hence $\\int(1-\\Sigma^2){\\mathrm{d}} t <\\infty$. On the other hand, ${\\mathrm{D}}_t\\log(\\delta_1) = -\\Sigma^2 -\\Sigma_+ \\le 1-\\Sigma^2 -\\epsilon$ and integration shows $\\lim_{t\\to\\infty}\\delta_1(t) = 0$, contradicting our assumption $N_2,N_3\\to\\infty$.\n\nNext, we consider the Bianchi \\textsc{IX} case of $\\bpt x_0\\in\\mathcal M_{+++}$. Again, the only way of avoiding the existence of an $\\omega$-limit is to have $\\lim_{t\\to \\infty}|\\bpt x(t)|=\\infty$. 
As in the proof of Lemma \\ref{farfrom-A:lemma:uniform}, this is only possible via $N_2,N_3\\to \\infty$ and $N_1\\to 0$ for some permutation of indices. Using $1-\\Sigma^2 \\ge |1-\\Sigma^2|-8|N_1N_2N_3|^{\\frac 2 3}$ and replacing $1-\\Sigma^2$ by $|1-\\Sigma^2|$, we can use the same arguments as in the Bianchi \\textsc{VIII} case.\n\\end{proof}\n\\subsection{The Bianchi \\textsc{IX} Attractor Theorem}\\label{sect:farfromA:attract}\nThe Mixmaster Attractor received its name in the 1960s. However, the first proof that $\\mathcal A$ actually is an attractor was given in \\cite[Theorem $19.2$, page 65]{ringstrom2001bianchi}, and simplified in \\cite{heinzle2009new}. We shall state this important result:\n\\begin{thm}[Classical Bianchi \\textsc{IX} Attractor Theorem]\\label{farfromA:thm:b9-attract}\nLet $\\bpt x_0\\in \\mathcal M_{+++}\\setminus \\mathcal T$. Then \\[\\lim_{t\\to\\infty} \\mathrm{dist}(\\bpt x(t), \\mathcal A)=0.\\] Also, the $\\omega$-limit set $\\omega(\\bpt x_0)$ does not consist of a single point.\n\\end{thm}\nThe proofs of Theorem \\ref{farfromA:thm:b9-attract} given in \\cite{ringstrom2001bianchi} and \\cite{heinzle2009new} require some subtle averaging arguments (summarized as Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub}), which are lengthy and fail to generalize to the case of Bianchi \\textsc{VIII} initial data. We will now give the first steps leading to the proof of Theorem \\ref{farfromA:thm:b9-attract}, up to the missing averaging estimates for Bianchi \\textsc{IX} solutions. Then, we will state the missing estimates and show how they prove Theorem \\ref{farfromA:thm:b9-attract}. Afterwards, we will give a high-level overview of how we replace Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} in this work. 
Nevertheless, for the sake of completeness, we provide a proof of Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} in Section \\ref{sect:neartaub:forbidden-cones}.\n\\paragraph{Rigorous steps leading to Theorem \\ref{farfromA:thm:b9-attract}.}\nWe first show that solutions cannot converge to the Taub-line $\\mathcal{TL}_i$ if they do not start in the Taub-space $\\mathcal T_i$:\n\\begin{lemma}[Taub Space Instability]\\label{lemma:farfromA:taub-instability}\nLet $\\epsilon>0$ be small enough. Then there exists a constant $C_{r,\\epsilon}\\in(0,1)$, such that the following holds:\n\nRecall the definition of $r_1$, which measures the distance to $\\mathcal T_1$ (see \\eqref{eq:delta-r-def}). For any piece of trajectory $\\bpt x:[t_1,t_2]\\to \\{\\bpt x\\in\\mathcal M_{*++}: |\\bpt \\Sigma(\\bpt x)-\\taubp_1|\\le \\epsilon\\}$, the following estimate holds:\n\\[\\begin{aligned}\n r_1(\\bpt x(t_2))&\\ge C_{r,\\epsilon}h(\\bpt x(t_1)) r_1(\\bpt x(t_1)),\\qquad \\text{ where}\\\\\n h(\\bpt x) &= |N_1| + |N_1|^2 + |N_1N_2N_3|.\n \\end{aligned}\\]\n\\end{lemma}\n\\begin{proof}\nThe proof uses the polar coordinates introduced in Section \\ref{sect:polar-coords}. We use \\eqref{eq:neartaub-b9-t} to see\n\\[\\begin{aligned}\n{\\mathrm{D}}_t \\log r_1&\\ge r_1^2\\sin^2\\psi \\frac{-\\Sigma_+}{1-\\Sigma_+} - C|N_1| - C |N_1|^2 - C|N_1N_2N_3|,\\\\\n{\\mathrm{D}}_t \\log |N_1|&\\approx -3 < -1,\\\\\n{\\mathrm{D}}_t \\log|N_1N_2N_3|&\\approx -3 < -1.\n\\end{aligned}\\]\nThe desired estimate follows by integration.\n\\end{proof}\n\nTogether with Lemma \\ref{lemma:farfromtaub:omega-existence}, this allows us to see that there exist $\\omega$-limit points on $\\mathcal K$:\n\\begin{lemma}\\label{farfromA:lemma:limitpoint-on-A}\nLet $\\bpt x_0\\in\\mathcal M_{+++}\\setminus \\mathcal T$. 
Then there exists at least one $\\omega$-limit point \n$\\bpt p\\in \\left(\\mathcal K\\setminus\\{\\taubp_1, \\taubp_2,\\taubp_3\\}\\right)\\cap \\omega(\\bpt x_0)$.\n\nLet $\\bpt x_0\\in\\mathcal M_{+-+}\\setminus \\mathcal T_2$. Then there exists at least one $\\omega$-limit point \n$\\bpt p\\in \\left(\\mathcal K\\setminus\\{\\taubp_2\\}\\right)\\cap \\omega(\\bpt x_0)$.\n\\end{lemma}\n\\begin{proof}\nWe already know that there exists an $\\omega$-limit point $\\bpt y\\in\\omega(\\bpt x_0)$; this point must have $|N_1N_2N_3|=0$ and hence be of a lower Bianchi type. In view of Lemma \\ref{lemma:farfromA:b6} and Lemma \\ref{lemma:farfromA:b7} and the fact that both $\\alpha(\\bpt y)\\subseteq \\omega(\\bpt x_0)$ and $\\omega(\\bpt y)\\subseteq \\omega(\\bpt x_0)$, it suffices to exclude the case where $\\omega(\\bpt x_0)\\subseteq \\mathcal T\\mathcal L_i\\setminus \\mathcal K$ for some $i$. This possibility is excluded by Lemma \\ref{lemma:farfromA:taub-instability}.\n\\end{proof}\nTherefore, we know that $\\liminf_{t\\to\\infty}\\max_i\\delta_i(t)=0$ and hence $\\liminf_{t\\to\\infty}\\mathrm{dist}(\\bpt x(t), \\mathcal A)=0$ for initial conditions in $\\mathcal M_{\\pm\\pm\\pm}\\setminus \\mathcal T$. 
While we presently lack the necessary estimates to prove the missing part of the attractor theorem, $\\limsup_{t\\to\\infty}\\max_i\\delta_i(t)=0$, we can at least describe how this may fail: Each $\\delta_i$ can only grow by a meaningful factor in the vicinity of a Taub-point $\\taubp_i$:\n\\begin{lemma}\\label{lemma:farfromtaub:delta-increase-only-near-taub}\nThere exists a constant $C>0$ such that for every $\\epsilon_+>0$ there is an $\\epsilon_{123}=\\epsilon_{123}(\\epsilon_+)>0$ for which the following holds:\n\nSuppose we have a piece of trajectory $\\bpt x:[t_1,t_2]\\to \\mathcal M_{\\pm\\pm\\pm}$, such that:\n\\begin{enumerate}\n\\item We have the product bound $|N_1N_2N_3|(t)<\\epsilon_{123}$ for all $t\\in[t_1,t_2]$.\n\\item The first point of the partial trajectory is bounded away from the Taub-line $\\mathcal {TL}_1$, i.e.~$1+\\Sigma_+(t_1)>\\epsilon_+$.\n\\item The piece of trajectory has comparatively large $\\delta_1$, in the sense $1\\ge\\delta_1^4(t)\\ge|N_1N_2N_3|(t)$ for all $t\\in[t_1,t_2]$, i.e.~$|N_1|\\le 4\\delta_1^2=16|N_2N_3|$ for all $t\\in[t_1,t_2]$.\n\\end{enumerate}\nThen $\\delta_1$ can only increase by a bounded factor along this piece of trajectory, i.e.\n\\[\\delta_1(t_2)\\le \\exp \\left(\\frac{C}{\\epsilon_+}\\right)\\delta_1(t_1).\\]\n\\end{lemma}\n\\begin{proof}\nRecall the proof of Lemma \\ref{lemma:farfromtaub:omega-existence}. From $\\delta_1^4\\ge|N_1N_2N_3|$, we know that $|N_1|\\le 4\\delta_1^2$ and therefore \n\\[\\begin{aligned}\n\\Sigma_+' &= (1-\\Sigma^2)(\\Sigma_++1) + 3N_1(N_2+N_3-N_1) \\\\\n& \\ge |1-\\Sigma^2|(\\Sigma_++1) -C \\sqrt{|N_1N_2N_3|}.\n\\end{aligned}\\]\nIf $\\epsilon_{123}>0$ is small enough, this allows us to see that $1+\\Sigma_+(t)>\\frac 1 2 \\epsilon_+$ for all $t\\in [t_1,t_2]$. \nSince $|\\Sigma_+|\\le 2$ is bounded, we see that $\\int|1-\\Sigma^2|\\epsilon_+{\\mathrm{d}} t < C$ and hence $\\int |1-\\Sigma^2|{\\mathrm{d}} t < \\frac{C}{\\epsilon_+}$. 
On the other hand, ${\\mathrm{D}}_t \\log \\delta_1 < (1-\\Sigma^2) - \\frac 1 2\\epsilon_+$, which yields the claim upon integration.\n\\end{proof}\n\\paragraph{Sketch of classic proofs of Theorem \\ref{farfromA:thm:b9-attract}.}\nThe previous proofs of Theorem \\ref{farfromA:thm:b9-attract}, both in \\cite{ringstrom2001bianchi} and \\cite{heinzle2009new}, rely on the following estimate (Lemma $3.1$ in \\cite{heinzle2009new}, Section $15$ in \\cite{ringstrom2001bianchi}):\n\\begin{hlemma}{\\ref{lemma:farfromtaub:no-delta-increase-near-taub}}\\label{hlemma:no-delta-increase-near-taub}\nWe consider without loss of generality the neighborhood of $\\mathcal T_1$. Let $\\epsilon>0$ be small enough. Then there exists a constant $C_{\\delta,\\epsilon}\\in(1,\\infty)$, such that, for any piece of trajectory $\\gamma:[t_1,t_2]\\to \\{\\bpt x\\in\\mathcal M_{*++}: |\\bpt \\Sigma(\\bpt x)-\\taubp_1|\\le \\epsilon,\\,|N_1|\\le 10, |\\delta_1|\\le 10\\}$, the following estimate holds:\n\\[ \\delta_1(\\gamma(t_2))\\le C_{\\delta,\\epsilon}\\delta_1(\\gamma(t_1)). \\]\n\\end{hlemma}\nThe proof of this Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} requires some lengthy averaging arguments and will be deferred until Section \\ref{sect:averaging-unneeded}, page \\pageref{lemma:farfromtaub:no-delta-increase-near-taub}. We stress that Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} is not actually needed for any of the results in this work, and is proven only for the sake of completeness of the literature review.\n\\begin{remark}\nThe above formulation of Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} includes the Bianchi \\textsc{VIII} case of $N_2,N_3>0>N_1$. 
The versions stated in \\cite{heinzle2009new} and \\cite{ringstrom2001bianchi} only consider the Bianchi \\textsc{IX} case $N_1,N_2,N_3>0$; however, their proofs extend to this case virtually unchanged.\n\\end{remark}\n\\begin{proof}[Proof of Theorem \\ref{farfromA:thm:b9-attract} using Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub}]\nWe begin by showing $\\lim_{t\\to\\infty}\\delta_1(t)=0$.\nBy Lemma \\ref{farfromA:lemma:limitpoint-on-A}, we have $\\liminf_{t\\to\\infty}\\delta_1(t)=0$. Suppose $\\limsup_{t\\to\\infty}\\delta_1(t)>0$.\nThen $\\delta_1$ must increase from arbitrarily small values up to some fixed nonzero value infinitely often, and hence we find arbitrarily late subintervals $[T^L,T^R]\\subseteq [0,\\infty)$ such that $\\delta_1^4>|N_1N_2N_3|$ and $\\delta_1$ increases by an arbitrarily large factor. This contradicts Lemma \\ref{lemma:farfromtaub:delta-increase-only-near-taub} and Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub}, which basically say that large increases of $\\delta_1$ can happen neither with $1+\\Sigma_+>\\epsilon_+$ nor with $1+\\Sigma_+<\\epsilon_+$.\n\nThe same applies to $\\delta_2$ and $\\delta_3$. \n\nSuppose there were only a single $\\omega$-limit point. This point cannot lie in $\\mathcal K\\setminus \\{\\taubp_i\\}$, since at each of these points, at least one of the $N_i$ is unstable. The only remaining possibility is $\\omega(\\bpt x_0)=\\taubp_i$ for some $i\\in\\{1,2,3\\}$, which is excluded by Lemma \\ref{lemma:farfromA:taub-instability}.\n\\end{proof}\n\n\\begin{remark}\nThe above proof also shows that $N_2N_3\\to 0$ in the Bianchi \\textsc{VIII} case $\\bpt x_0\\in\\mathcal M_{-++}\\setminus \\mathcal T_1$. 
This generalization is directly possible while keeping the arguments of \\cite{heinzle2009new} virtually unchanged, even though it has not been explicitly noted therein.\n\n\nThe above proof also shows that in Bianchi $\\textsc{VII}_0$, i.e.~for any $\\bpt x_0\\in\\mathcal M_{0++}$,\n we must have $\\lim_{t\\to-\\infty}\\bpt x(t)=\\bpt p_-$ with $\\bpt p_-=(-1,0,0,N,N)$ for some $N>0$. This is false in the case of Bianchi $\\textsc{VI}_0$: There we have for any $\\bpt x_0\\in\\mathcal M_{0+-}$ that $\\lim_{t\\to-\\infty}\\bpt x(t)=(-1,0,0,0,0)$. Hence, $\\delta_1$ can grow by an arbitrarily large factor near $\\taubp_1$ in $\\mathcal M_{*+-}$, and\nno analogue of Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} can hold in the Bianchi \\textsc{VIII} models $\\mathcal M_{*+-}$ and $\\mathcal M_{*-+}$.\n\nThis difficulty is partially responsible for the fact that, for $\\bpt x_0\\in\\mathcal M_{+-+}$, it was previously unknown whether $\\lim_{t\\to\\infty}N_2N_3(t)=0$ and $\\lim_{t\\to\\infty}N_1N_3(t)=0$. \n\\end{remark}\n\\paragraph{Sketch of our replacement for Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub}.}\\label{paragraph:farfromA:sketch}\nIn this work, we will replace the rather subtle averaging estimates from Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} by the program outlined in this paragraph. Let us first repeat the reasons why we want to avoid Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub}:\n\\begin{enumerate}\n\\item The analogous statement of Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} in Bianchi \\textsc{VIII} is wrong. Lemma \\ref{lemma:farfromA:b6} shows that counterexamples to such a generalization can be found by taking any sequence $\\{\\bpt x_n\\}$ of initial data converging to any point in $\\mathcal M_{0-+}$. 
Therefore, any argument relying on Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} has no chance of carrying over to the Bianchi \\textsc{VIII} case.\n\\item The proof of Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} is lengthy and requires subtle averaging arguments.\n\\item The complexity of the proof of Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} is avoidable: Most of the effort is spent trying to understand asymptotic regimes \\emph{that do not occur anyway}.\n\\end{enumerate}\nOur replacement is described by the following program:\n\\begin{enumerate}\n\\item At first, we study pieces of trajectories $\\bpt x:[0,T]\\to\\mathcal M_{\\pm\\pm\\pm}$, which start near $\\mathcal A$ and stay bounded away from the generalized Taub-spaces $\\mathcal T^G_i$, i.e.~have all $r_i>\\epsilon$. Along such partial solutions, all $\\delta_i$ decay essentially exponentially (Proposition \\ref{prop:farfromtaub-main}).\n\\item Next, consider how solutions near $\\mathcal A$ can enter the neighborhood of the generalized Taub-spaces. This can only happen near some $-\\taubp_i$ (Proposition \\ref{prop:farfromtaub-main2}).\n\\item For such solutions entering the vicinity of $-\\taubp_i$, the quotient $\\frac{\\delta_i}{r_i}$ is initially small, and stays small near $-\\taubp_i$ and along the heteroclinic leading to $+\\taubp_i$ (Proposition \\ref{prop:near-taub:qc}).\n\\item Next, we study solutions near $\\taubp_i$ for which $\\frac{\\delta_i}{r_i}$ is initially small. Then, $\\frac{\\delta_i}{r_i}$ stays small. This additional condition ($\\delta_i \\ll r_i$) allows us to describe solutions with easier averaging arguments and stronger conclusions than Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub}. Bianchi \\textsc{VIII} solutions can be analyzed in the same way. 
This is done in Section \\ref{sect:neartaub:neartaub}, leading to the conclusion that $\\delta_i$ decays essentially exponentially, with nonuniform rate (Proposition \\ref{prop:neartaub:main}).\n\\item Finally, we combine the previous steps in Section \\ref{sect:global-attract} in order to prove Theorems \\ref{thm:local-attractor}, \\ref{thm:b9-attractor-global} and \\ref{thm:b8-attractor-global}. These extend Theorem \\ref{farfromA:thm:b9-attract} with somewhat finer control over solutions and provide an analogue in Bianchi \\textsc{VIII}.\n\\end{enumerate}\n\n\\section{Attractor Theorems}\\label{sect:global-attract}\nThe goal of this section is to prove that typical initial conditions converge to $\\mathcal A$. We have already seen Theorem \\ref{farfromA:thm:b9-attract}, which is however somewhat unsatisfactory: It tells us nothing about the speed or the details of the convergence; it relies on Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub}, which has a rather lengthy proof (page \\pageref{lemma:farfromtaub:no-delta-increase-near-taub}f) mainly discussing the case $\\delta_1 \\gg r_1$, \\emph{which is not supposed to happen anyway}; lastly, the proof of Theorem \\ref{farfromA:thm:b9-attract} has no chance of generalizing to the case of Bianchi \\textsc{VIII}.\n\nIn this section, we will combine the analysis of the previous Sections \\ref{sect:far-from-taub} and \\ref{sect:near-taub} in order to prove a \\emph{local} attractor result, holding both in Bianchi \\textsc{VIII} and \\textsc{IX}. 
Together with the results from Section \\ref{sect:farfromA} and some minor calculation, this will yield a ``global attractor theorem'', i.e.~a classification of the rare solutions failing to converge to $\\mathcal A$; in the case of Bianchi \\textsc{IX}, this recovers and extends Theorem \\ref{farfromA:thm:b9-attract}, and in the case of Bianchi \\textsc{VIII}, this answers a longstanding conjecture.\n\\paragraph{Statement of the attractor Theorems.}\nThe local attractor theorem is given by the following:\n\\begin{thm}[Local Attractor Theorem]\\label{thm:local-attractor}\nThere exist constants $C_1,C_2,C_3>0$ and $\\epsilon>0$ such that the following holds:\n\nLet $\\bpt x_0\\in \\mathcal M_{\\pm\\pm\\pm}$ be an initial condition in either Bianchi \\textsc{VIII} or \\textsc{IX}, with \n\\begin{equation}\\label{eq:local-attractor-condition}\n\\frac{\\delta_i}{r_i} <\\epsilon\\quad\\text{and}\\quad \\delta_i <\\epsilon\\qquad\\forall i\\in\\{1,2,3\\}.\n\\end{equation}\n\nThen, for all $i\\in \\{1,2,3\\}$ and $t_2\\ge t_1\\ge 0$:\n\\begin{subequations}\n\\begin{align}\n\\delta_i(t_2)&\\le C_1 {\\delta_i}(t_1)\\label{eq:local-attract:delta}\\\\\n\\frac{\\delta_i}{r_i}(t_2)&\\le C_2\\frac{\\delta_i}{r_i}(t_1) \\label{eq:local-attract:delta-r}.\n\\end{align}\nFurthermore, for all $i\\in \\{1,2,3\\}$,\n\\begin{align}\n\\lim_{t\\to \\infty}\\delta_i(t)&=0 \\label{eq:local-attract:delta-lim}\\\\\n\\lim_{t\\to\\infty} \\frac{\\delta_i}{r_i}(t)&=0\\label{eq:local-attract:delta-r-lim}\\\\\n\\int_0^{\\infty}\\delta_i^2(t){\\mathrm{d}} t &< C_3 \\frac{\\delta_i^2}{r_i^2}(\\bpt x_0),\\label{eq:local-attract:integral}\n\\end{align}\nand the $\\omega$-limit set $\\omega(\\bpt x_0)$ must contain at least three points in $\\mathcal K$.\n\\end{subequations}\n\\end{thm}\nThe name ``local attractor theorem'' is descriptive: We describe a subset of the basin of attraction, i.e.~an open neighborhood of $\\mathcal A\\setminus\\mathcal T$ which is attracted to $\\mathcal A$ 
and given by\n\\begin{equation}\n\\textsc{Basin}_{{\\hat{n}}}[\\epsilon] = \\{\\bpt x\\in \\mathcal M_{{\\hat{n}}}:\\, \\delta_i\\le\\epsilon,\\,\\frac{\\delta_i}{r_i}\\le \\epsilon\\quad\\forall i\\in\\{1,2,3\\}\\}.\n\\end{equation}\nThe integral estimate \\eqref{eq:local-attract:integral} tells us that the convergence to $\\mathcal A$ must be reasonably fast.\n\nWe can combine the local attractor Theorem \\ref{thm:local-attractor} with the discussion in Section \\ref{sect:farfromA} in order to prove a global attractor theorem. Since some trajectories fail to converge to $\\mathcal A$, most notably trajectories in the Taub-spaces, a global attractor theorem must necessarily take the form of a classification of all exceptions. In this view, the global result for the case of Bianchi Type \\textsc{IX} models is the following:\n\\begin{thm}[Bianchi \\textsc{IX} global attractor Theorem]\\label{thm:b9-attractor-global}\nConsider $\\mathcal M_{+++}$, i.e.~Bianchi \\textsc{IX}. Then, for any initial condition $\\bpt x_0\\in \\mathcal M_{+++}$, the long-time behaviour of $\\bpt x(t)$ falls into exactly one of the following mutually exclusive classes ($i\\in\\{1,2,3\\}$):\n\\begin{description}\n\\item[\\textsc{Attract.}] For large enough times, Theorem \\ref{thm:local-attractor} applies.\n\\item[$\\textsc{Taub}_i$.] 
We have $\\bpt x_0\\in \\mathcal T_i$, and hence $\\bpt x(t)\\in\\mathcal T_i$ for all times.\n\\end{description}\nThe set of initial conditions for which \\textsc{Attract} applies is ``generic'' in the following sense: It is open and dense in $\\mathcal M_{+++}$ and its complement has Lebesgue-measure zero (evident from the fact that $\\mathcal T_i$ are embedded lower dimensional submanifolds, of both dimension and codimension two).\n\\end{thm}\nThe analogous, novel result for the case of Bianchi \\textsc{VIII} is the following:\n\\begin{thm}[Bianchi \\textsc{VIII} global attractor Theorem]\\label{thm:b8-attractor-global}\nConsider $\\mathcal M_{+-+}$, i.e.~Bianchi \\textsc{VIII}. Then, for any initial condition $\\bpt x_0\\in \\mathcal M_{+-+}$, the long-time behaviour of $\\bpt x(t)$ falls into exactly one of the following mutually exclusive classes:\n\\begin{description}\n\\item[\\textsc{Attract.}] For large enough times, Theorem \\ref{thm:local-attractor} applies. \n\\item[$\\textsc{Taub}_2$.] We have $\\bpt x_0\\in \\mathcal T_2$, and hence $\\bpt x(t)\\in\\mathcal T_2$ for all times. \n\\item[$\\textsc{Except}_1$.] For large enough times, the trajectory follows the heteroclinic object\n\\[ -\\taubp_1 \\to W^u(-\\taubp_1) \\to \\taubp_1 \\to W^s(-\\taubp_1)\\to -\\taubp_1,\\]\nwhere $W^s(-\\taubp_1)$ and $W^u(-\\taubp_1)$ are the two-dimensional stable and one-dimensional unstable manifolds of $-\\taubp_1$. We have $\\lim_{t\\to\\infty}\\max(\\delta_2,\\delta_3)(t)=0$ and $\\limsup_{t\\to\\infty}\\delta_1(t)>0=\\liminf_{t\\to\\infty}\\delta_1(t)$.\n\\item[$\\textsc{Except}_3$.] 
The analogue of $\\textsc{Except}_1$ applies, with the indices $1$ and $3$ exchanged.\n\\end{description}\n\\end{thm}\nThis theorem should be read in conjunction with the following, the proof of which will be deferred until Section \\ref{sect:volume-form}, page \\pageref{proof:thm:b8-attractor-global-genericity}:\n\\begin{hthm}{\\ref{thm:b8-attractor-global-genericity}}[Bianchi \\textsc{VIII} global attractor Theorem genericity]\nIn Theorem \\ref{thm:b8-attractor-global}, \nthe set of initial conditions $\\bpt x_0$ for which \\textsc{Attract} applies is ``generic'' in the following sense: It is open and dense in $\\mathcal M_{+-+}$ and its complement has Lebesgue-measure zero. \n\\end{hthm}\n\\begin{question}\\label{quest:except}\nIt is currently unknown whether the case \\textsc{Except} in Bianchi \\textsc{VIII} is possible at all.\n\nWe expect that solutions in $\\textsc{Except}_1$ actually converge to the heteroclinic cycle, where $\\taubp_1 \\to -\\taubp_1$ is realized by the unique connection in $\\mathcal T_1^G$, i.e.~$\\lim_{t\\to\\infty}r_1(t)=0$, instead of following any other heteroclinic in $W^u(\\taubp_1)\\cap W^s(-\\taubp_1)$.\n\nFor topological reasons, we expect that the set of initial conditions where \\textsc{Except} applies is nonempty and of both dimension and codimension two.\n\\end{question}\n\\paragraph{Proof of the attractor Theorems.}\nThe remainder of this section is devoted to proving these theorems. We begin with the local attractor result:\n\\begin{proof}[Proof of the local attractor Theorem \\ref{thm:local-attractor}]\nWe already gave a rough outline of the proof at the end of Section \\ref{sect:farfromA:attract}, page \\pageref{paragraph:farfromA:sketch}. We now have all the ingredients to complete this program.\n\nWe apply Propositions \\ref{prop:farfromtaub-main2}, \\ref{prop:near-taub:qc} and \\ref{prop:neartaub:main}. 
If the constants have been arranged appropriately, then each of these propositions describes a piece of the trajectory that ends in the domain of the (cyclically) next proposition, and at least one of the three propositions is applicable.\n``Appropriate'' means at least $\\varepsilon_T^{\\ref{prop:neartaub:main}}> C_T^{\\ref{prop:farfromtaub-main2}}\\varepsilon_T^{\\ref{prop:farfromtaub-main2}}$.\n\nEach $\\delta_i$ shrinks by an arbitrarily large factor during each time interval where Proposition \\ref{prop:farfromtaub-main2} applies, and can grow by at most a bounded factor in each remaining time interval, which directly yields \\eqref{eq:local-attract:delta} and \\eqref{eq:local-attract:delta-lim} (``arbitrarily'' if we adjust $\\epsilon$ in \\eqref{eq:local-attractor-condition}). \n\nIn order to see the estimates \\eqref{eq:local-attract:delta-r} and \\eqref{eq:local-attract:delta-r-lim}, we note that each $r_i$ can change only by a factor bounded by $\\varepsilon_T^{-1}$ in the region where Proposition \\ref{prop:farfromtaub-main2} applies; however, the $\\delta_i$ must shrink by an arbitrarily large factor, and the quotient $\\frac{\\delta_i}{r_i}$ is directly controlled in the regions where Propositions \\ref{prop:near-taub:qc} and \\ref{prop:neartaub:main} apply.\n\nThe remaining estimate \\eqref{eq:local-attract:integral} follows by integration: The contribution away from the Taub-spaces can be trivially bounded by Proposition \\ref{prop:farfromtaub-main}, eq.~\\eqref{eq:nearA:exponential}; Proposition \\ref{prop:neartaub:main}, eq.~\\eqref{eq:neartaub:main:delta}, bounds the contribution near the Taub-spaces by $\\frac{\\delta^2}{r^2}$. The long-time behaviour of $\\frac{\\delta_i}{r_i}$ is controlled by Proposition \\ref{prop:near-taub:qc}. \n\\end{proof}\nNext, we prove the global result for Bianchi \\textsc{IX}. 
Theorem \\ref{thm:b9-attractor-global} is trivially equivalent to Theorem \\ref{farfromA:thm:b9-attract}, so we could cite \\cite{ringstrom2001bianchi} instead; the novel results are contained in the local Theorem \\ref{thm:local-attractor}. The following is a novel proof of the global part of the result, which avoids Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub}:\n\\begin{proof}[Proof of the global Bianchi \\textsc{IX} attractor Theorem \\ref{thm:b9-attractor-global}]\nSuppose that there is an initial condition $\\bpt x_0\\in\\mathcal M_{+++}$, such that neither \\textsc{Attract} nor \\textsc{Taub} applies. We know that $\\omega(\\bpt x_0)\\cap \\mathcal A \\subseteq \\mathcal T$ (because otherwise \\textsc{Attract} would apply). By Lemma \\ref{farfromA:lemma:limitpoint-on-A}, there must exist at least one $\\omega$-limit point $\\bpt y\\in\\omega(\\bpt x_0)\\cap \\mathcal K$. The $\\omega$-limit set $\\omega(\\bpt x_0)$ must be connected or unbounded (this holds for general dynamical systems).\n\nNow consider $\\omega(\\bpt x_0)\\cap \\mathcal T$. We claim that $\\omega(\\bpt x_0)\\cap \\mathcal T\\subseteq \\mathcal T_i$ for some $i\\in\\{1,2,3\\}$: If it were otherwise, there would need to exist a heteroclinic orbit $\\gamma\\subseteq \\omega(\\bpt x_0)$ which connects two Taub-spaces (since $|N_1N_2N_3|\\to 0$ we can apply Lemma \\ref{lemma:farfromA:b7}). Without loss of generality, such a heteroclinic orbit can only connect $\\{\\bpt x: \\bpt \\Sigma=\\taubp_1\\}$ to $-\\taubp_2$ with $\\gamma\\subseteq \\mathcal M_{0++}$. Along $\\gamma$, $\\delta_2$ and $\\delta_3$ and hence also $\\frac{\\delta_2}{r_2}$ and $\\frac{\\delta_3}{r_3}$ vanish; near the end of this heteroclinic, $\\delta_1$ and $\\frac{\\delta_1}{r_1}$ must become arbitrarily small. 
In other words, we have some $\\bpt p\\in\\gamma$ near $-\\taubp_2$ such that $\\frac{\\delta_1}{r_1}(\\bpt p)<\\widetilde \\epsilon\/3$.\nHence, for $\\epsilon_2=\\epsilon_2(\\bpt p)>0$ small enough, $B_{\\epsilon_2}(\\bpt p)\\cap \\mathcal M_{+++}\\subseteq\\textsc{Basin}_{+++}[\\widetilde \\epsilon]$; this contradicts our assumptions $\\bpt p\\in\\omega(\\bpt x_0)$ and $\\omega(\\bpt x_0)\\cap \\textsc{Basin}_{+++}[\\widetilde \\epsilon]=\\emptyset$.\n\nWe also know by Lemma \\ref{lemma:farfromA:taub-instability} that $\\omega(\\bpt x_0)$ cannot be contained in a Taub-line $\\mathcal{TL}_i=\\{\\bpt x: \\bpt \\Sigma(\\bpt x)=\\taubp_i\\}$. Therefore, the only remaining case is that the trajectory $\\phi(\\bpt x_0, \\cdot)$ follows the heteroclinic cycle\n\\[\n-\\taubp_i \\to W^u(-\\taubp_i) \\to \\taubp_i \\to W^s(-\\taubp_i)\\to -\\taubp_i,\n\\]\nwhere $W^u(-\\taubp_i)$ is the one-dimensional unstable manifold of $-\\taubp_i$ and $W^s(-\\taubp_i)$ is its two-dimensional stable manifold.\n\nWe could now use Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} to finish our proof, since it prevents $\\taubp_i\\to W^s(-\\taubp_i)$.\nHowever, we prefer to avoid relying on the subtle averaging arguments needed to prove Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub}, replacing them with the simpler Lemma \\ref{neartaub:lemma:joint-increase-in-forbidden}.\nAssume without loss of generality that our trajectory follows this dynamics with $i=1$ and hence has $\\frac{\\delta_1}{r_1}\\ge \\widetilde \\epsilon$ for all sufficiently large times, while $\\frac{\\delta_2}{r_2}\\to 0$ and $\\frac{\\delta_3}{r_3}\\to 0$. We will follow $\\frac{\\delta_1}{r_1}$ and show that $\\frac{\\delta_1}{r_1}\\to 0$, which contradicts our assumptions. 
Using\n\\[\n{\\mathrm{D}}_t\\log \\frac{\\delta_1}{r_1} \\le C|N_1N_2N_3|^{\\frac 2 3} + C |N_1|,\n\\]\nwe can conclude that $\\int \\max(0,{\\mathrm{D}}_t\\log\\frac{\\delta_1}{r_1}){\\mathrm{d}} t$ is bounded along every loop. However, by Lemma \\ref{neartaub:lemma:joint-increase-in-forbidden}, $\\frac{\\delta_1}{r_1}$ must decrease by an arbitrarily large factor near $\\taubp_1$ along every loop, contradicting the assumption $\\frac{\\delta_1}{r_1}>\\widetilde \\epsilon$. This is because Lemma \\ref{neartaub:lemma:joint-increase-in-forbidden} states that $\\frac{\\delta_1}{r_1}$ must decrease by a factor comparable to the factor by which $r_1$ increases; and $r_1$ must increase by an arbitrarily large factor near $\\taubp_1$, since we know that $r_1$ must eventually become larger than, say, $0.1$.\n\\end{proof}\n\n\\begin{proof}[Proof of the global Bianchi \\textsc{VIII} attractor Theorem \\ref{thm:b8-attractor-global}]\nThe proof works like the above proof of Theorem \\ref{thm:b9-attractor-global}: We use Lemma \\ref{lemma:no-b8-taub-convergence} in order to prevent convergence to $\\taubp_1$ and $\\taubp_3$ (instead of Lemma \\ref{lemma:farfromA:taub-instability}, which is still used near $\\mathcal {TL}_2$). \n\nPreventing convergence to the heteroclinic cycle near $\\mathcal T_2$ works as before. This leaves us with the possibility of convergence to one of the two remaining heteroclinic loops described in \\textsc{Except}. Since we cannot exclude this case, we allow it in the conclusions of the theorem.\n\\end{proof}\n\n\n\n\n\\section{Dynamics near the Mixmaster-Attractor $\\mathcal A$}\\label{sect:near-A}\\label{sect:far-from-taub}\nOur previous arguments in Section \\ref{sect:farfromA} about the dynamics of \\eqref{eq:ode} have been of a rather qualitative and global character. 
We have established that there exist $\\omega$-limit points on the Mixmaster-attractor $\\mathcal A$.\n\nWe have also sketched the classical proof that trajectories converge to $\\mathcal A$ in the case of Bianchi Type \\textsc{IX} (Theorem \\ref{farfromA:thm:b9-attract}) (where we deferred the proof of the crucial estimate Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub} to a later point). \n\nIn this section, we will give a more precise description of the behaviour near $\\mathcal A$. The goal of this section is to show that pieces of trajectories $\\gamma:[0,T]\\to \\mathcal M$ near $\\mathcal A$ converge to $\\mathcal A$ essentially exponentially, at least as long as they stay bounded away from the Taub-points $\\taubp_i$. \n\n\n\n\\begin{figure}[hbpt]\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/graphics\/classa-circles-ni.pdf}\n \\caption{The stability defining discs for the $N_i$}\\label{fig:n-discs2}\n \\end{subfigure}~~%\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/graphics\/classa-circles-delta.pdf}\n \\caption{The stability defining discs for the $\\delta_i$}\\label{fig:delta-discs2}\n \\end{subfigure}\\\\% \n %\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/graphics\/nuclear-evil.pdf}\n \\caption{A trajectory leaving the controlled regions}\\label{fig:evil-traj}\\label{fig:nuclear-evil}\n \\end{subfigure}~~%\n %\n %\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/graphics\/nuclear-cap.pdf}\n \\caption{The region \\textsc{Cap}}\\label{fig:nuclear-cap}\n \\end{subfigure}\\\\% \n %\n %\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/graphics\/nuclear-circle.pdf}\n \\caption{The region \\textsc{Circle}}\\label{fig:nuclear-circle} \n \\end{subfigure}~\n 
\\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{.\/graphics\/nuclear-hyp.pdf}\n \\caption{The region \\textsc{Hyp}}\n \\label{fig:nuclear-hyp}\n \\end{subfigure}\n \\caption{The relevant regions are colored in gray and are not to scale (except for Fig.~\\ref{fig:n-discs} and Fig.~\\ref{fig:delta-discs}). See also Fig.~\\ref{fig:kasner-segments}.}\\label{fig:nuclear} \n\\end{figure}\n\n\nThe goal of this section is to prove the following two Propositions \\ref{prop:farfromtaub-main} and \\ref{prop:farfromtaub-main2}:\n\\begin{proposition}[Essentially uniform exponential convergence to $\\mathcal A$ away from the Taub points]\\label{prop:farfromtaub-main}\nFor any $\\varepsilon_T>0$ small enough, there exist constants $\\epsilon_d, \\epsilon_N, \\epsilon_s, C_0>0$ (depending on $\\varepsilon_T$) and $C_T=5$ such that the following holds:\n\n\\begin{subequations}\nConsider a trajectory $\\bpt x:[0,T^*)\\to\\mathcal M_{\\pm\\pm\\pm}$ such that for all $t\\in [0,T^*)$ the following inequalities hold:\n\\begin{align}\n\\max_i\\delta_i(t)&< \\epsilon_d\\label{eq:nearA:capdist}\\\\\n\\min_i d(\\bpt x(t), \\taubp_i)&> \\varepsilon_T\\label{eq:nearA:taubdist}.\n\\end{align}\nAssume further that for the initial condition $\\bpt x_0=\\bpt x(0)$, the following stronger estimate holds:\n\\begin{equation}\n\\max_i\\delta_i(\\bpt x_0)< C_0^{-1}\\epsilon_d. \n\\end{equation}\n\\end{subequations}\\begin{subequations}\nThen, each $\\delta_i$ is essentially uniformly exponentially decreasing in $[0,T^*)$, i.e.\n\\begin{equation}\n\\delta_i(t_2) \\le C_0 e^{-\\epsilon_s(t_2-t_1)}\\delta_i(t_1)\\qquad\\forall\\,0\\le t_1\\le t_2<T^*.\\label{eq:nearA:exponential}\n\\end{equation}\n\\end{subequations}\n\\end{proposition}\n\\begin{proposition}[Trajectories near $\\mathcal A$ enter the vicinity of the Taub points only via the heteroclinic $-\\taubp_\\ell\\to\\taubp_\\ell$]\\label{prop:farfromtaub-main2}\nFor any $\\varepsilon_T>0$ small enough, there exist constants $\\epsilon_d, \\epsilon_N, \\epsilon_s, C_0>0$ and $C_T=5$ such that additionally the following holds:\n\n\\begin{subequations}\nAssume that $T^*<\\infty$, and that initially \n\\begin{equation}\\min_i d(\\bpt x_0, \\mathcal T_i) > C_T \\varepsilon_T.\\label{eq:nearA:init-far-from-taub}\\end{equation}\n\\end{subequations}\\begin{subequations}\n Then the final part 
of the trajectory preceding $T^*$ must have the form depicted in Figure \\ref{fig:evil-traj}, i.e.~there is $\\ell\\in\\{1,2,3\\}$ and there are times $0<T^1\\le T^2\\le T^3<T^*$ such that\n\\begin{align}\n\\bpt x([T^1,T^2])&\\subseteq B_{2\\varepsilon_T}(-\\taubp_\\ell)\\cap\\textsc{Circle}[\\epsilon_N,\\epsilon_d]\\quad\\text{and}\\quad \\bpt x([T^2,T^3])\\subseteq\\textsc{Cap}[\\epsilon_N,\\epsilon_d]\\cap\\{|N_\\ell|\\ge\\epsilon_N\\},\\label{eq:farfromtaub-thm-12}\\\\\n\\bpt x([T^3,T^*))&\\subseteq B_{2\\varepsilon_T}(\\taubp_\\ell)\\cap\\textsc{Circle}[\\epsilon_N,\\epsilon_d].\\label{eq:farfromtaub-thm-3star}\n\\end{align}\n\\end{subequations}\n\\end{proposition}\nWe first give informal proofs of both propositions.\n\\begin{proof}[Informal proof of Proposition \\ref{prop:farfromtaub-main}]\nNear $\\mathcal K$, each $|N_i|$ and each $\\delta_i$ is either increasing or decreasing, depending only on the $\\bpt \\Sigma$-coordinates (cf.~Figures \\ref{fig:n-discs2} and \\ref{fig:delta-discs2}); away from the Taub points, i.e.~where $\\min_i d(\\bpt x,\\taubp_i)>\\varepsilon_T$, this increase and decrease is uniform when $\\max_i |N_i|\\le \\epsilon_N$ and if $\\epsilon_N=\\epsilon_N(\\varepsilon_\\taubp)>0$ is small enough. Hence, for any small piece of trajectory $\\bpt x:[t_1,t_2]\\to \\{\\bpt x\\in\\mathcal M:\\,\\min_i d(\\bpt x,\\taubp_i)\\ge\\varepsilon_T,\\,\\max_i|N_i|\\le \\epsilon_N\\}$, one of the $|N_i|$ is uniformly exponentially increasing, while all three $\\delta_i$ and the remaining two $|N_j|,|N_k|$ are uniformly exponentially decreasing, with some rate $2\\epsilon_s=2\\epsilon_s(\\varepsilon_T, \\epsilon_N)>0$.\n\nEventually the trajectory will leave the neighborhood of $\\mathcal K$; since we assumed that we are near $\\mathcal A$, i.e.~$\\max_i \\delta_i < \\epsilon_d$, this can only happen near one of the Kasner caps (Bianchi type \\textsc{II}). By continuity of the flow, the trajectory will follow a heteroclinic orbit until it is near $\\mathcal K$ again, and will spend only a bounded amount of time $T < C(\\epsilon_N)$ for this transit. Hence, all $|N_i|$ and $\\delta_i$ can only change by a bounded factor during such a heteroclinic transit.\n\nThe time spent near $\\mathcal K$ between two heteroclinic transits is bounded below by $\\mathcal O(|\\log \\epsilon_d|)$ (for fixed $\\epsilon_N$): Consider an interval $[t_1,t_2]$ spent near $\\mathcal K$, where $t_1 >0$. Suppose without loss of generality that initially $|N_1|(t_1)=\\epsilon_N$ and that $|N_2|$ is uniformly exponentially increasing, such that $|N_2|(t_2)=\\epsilon_N$. Then, we must have $|N_2|(t_1)= \\frac{\\delta_3^2}{4|N_1|}(t_1)\\le C \\epsilon_N^{-1} \\epsilon_d^{2}$, and we must have $t_2-t_1 \\ge C |\\log \\epsilon_d|$. 
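\nIn more detail, this lower bound on the time can be sketched as follows, assuming only that the rate ${\\mathrm{D}}_t\\log|N_2|$ is bounded above by some universal constant $C'>0$ (the right-hand sides of \\eqref{eq:ode} are uniformly bounded on the compact phase space):\n\\[\n\\epsilon_N=|N_2|(t_2)\\le e^{C'(t_2-t_1)}|N_2|(t_1)= e^{C'(t_2-t_1)}\\,\\frac{\\delta_3^2}{4|N_1|}(t_1)\\le e^{C'(t_2-t_1)}\\,\\frac{\\epsilon_d^2}{4\\epsilon_N},\n\\]\nhence $t_2-t_1\\ge \\frac{1}{C'}\\log\\frac{4\\epsilon_N^2}{\\epsilon_d^2}\\ge \\frac{1}{C'}|\\log \\epsilon_d|$ once $\\epsilon_d\\ll \\epsilon_N$ is small enough.\n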
Hence, if $\\epsilon_d \\ll \\epsilon_N$ is small enough, the exponential decrease of the three $\\delta_i$ will dominate all contributions from the heteroclinic transits and we obtain an estimate of the form \\eqref{eq:nearA:exponential}. \n\\end{proof}\n\\begin{proof}[Informal proof of Proposition \\ref{prop:farfromtaub-main2}]\nWe again use continuity of the flow: Each small heteroclinic ``bounce'' near one of the $\\taubp_i$ must increase the distance from $\\taubp_i$ by at least some $C_u=C_u(\\varepsilon_T)$. By continuity of the flow, each episode with $\\max_i |N_i|\\ge\\epsilon_N$ therefore must increase the distance from $\\taubp_i$ by at least $C_u\/2$; near $\\mathcal K$, the trajectory is almost constant and $d(\\bpt x, \\taubp_i)$ cannot shrink by more than $C_u\/3$. Hence the only way to reach the vicinity of a Taub point is by following the heteroclinic $-\\taubp_i\\to\\taubp_i$.\n\nThe remainder of this section is devoted to making these informal proofs rigorous, i.e.~filling all the gaps and replacing the hand-wavy arguments with formal ones. 
We begin by naming the regions of the phase-space, where the various estimates hold:\n\n\\begin{definition}\\label{def:farfromtaub-base-sets}\nGiven $\\varepsilon_\\taubp, \\epsilon_N, \\epsilon_d>0$ (later chosen in this order) we define:\n\\[\\begin{aligned}\n{\\textsc{Cap}}[\\epsilon_N, \\epsilon_d]&=\\{\\bpt x\\in \\mathcal M: \\max|N_i|\\ge \\epsilon_N, \\max_i\\delta_i \\le \\epsilon_d\\}\\\\\n{\\textsc{Circle}}[\\epsilon_N, \\epsilon_d]&= \\{\\bpt x\\in \\mathcal M: \\max |N_i|\\le\\epsilon_N,\\, \\max_i\\delta_i \\le \\epsilon_d\\}\\\\\n{\\textsc{Hyp}}[\\varepsilon_\\taubp, \\epsilon_N, \\epsilon_d]&= {\\textsc{Circle}}[\\epsilon_N, \\epsilon_d] \\setminus \\left[B_{\\varepsilon_\\taubp}(\\taubp_1)\\cup B_{\\varepsilon_\\taubp}(\\taubp_2)\\cup B_{\\varepsilon_\\taubp}(\\taubp_3) \\right].\n\\end{aligned}\\numberthis\n\\]\n\\end{definition}\nThese sets are sketched in Figure \\ref{fig:nuclear} (not up to scale). They are constructed such that for appropriate parameter choices:\n\\begin{enumerate}\n\\item The union $\\textsc{Cap}\\cup\\textsc{Circle}$ contains an entire neighborhood of $\\mathcal A$ (by construction).\n\\item The region $\\textsc{Circle}$ is a small neighborhood of the Kasner circle. This is because of the constraint $1=\\Sigma^2+N^2$ and $|N^2|< C \\epsilon_N^2$ (see Figure \\ref{fig:nuclear-circle}).\n\\item The region $\\textsc{Cap}$ has three connected components, where one of the three $|N_i|\\gg 0$, because by $\\max_i |N_i|\\ge \\epsilon_N$ at least one $N_i$ must be bounded away from zero and by $\\max_{i}2\\sqrt{|N_jN_k|} \\le \\epsilon_d$ at most one $N_i$ can be bounded away from zero (this only works if $\\epsilon_d$ is small enough, depending on $\\epsilon_N$).\n\nThe region $\\textsc{Cap}$ is bounded away from the Kasner circle (see Figure \\ref{fig:nuclear-cap}). 
\nBy continuity of the flow, the dynamics in $\\textsc{Cap}$ can be approximated by pieces of heteroclinic orbits in $\\mathcal A$, up to uniformly small errors (Lemma \\ref{lemma:farfromtaub-cap}).\n\\item The region $\\textsc{Hyp}$ has three connected components. In each connected component, one of the $|N_i|$ is uniformly exponentially increasing and the remaining two $|N_j|$, $|N_k|$ are uniformly exponentially decreasing. All three products $\\delta_i$ are uniformly exponentially decreasing in $\\textsc{Hyp}$ (Lemma \\ref{lemma:farfromtaub-uniform-hyp}; this only works if $\\epsilon_N$ is small enough, depending on $\\varepsilon_\\taubp$).\n\\item The remaining part of the neighborhood of $\\mathcal A$, i.e.~$\\textsc{Circle}\\setminus\\textsc{Hyp}$, consists of the neighborhoods of the three Taub points. The analysis of the dynamics in these neighborhoods is deferred until Section \\ref{sect:near-taub}.\n\\end{enumerate}\n\\begin{lemma}[Uniform Hyperbolicity Estimates]\\label{lemma:farfromtaub-uniform-hyp}\nGiven any $\\varepsilon_T>0$ small enough, we find $\\epsilon_N>0$ and $\\epsilon_s>0$ small enough such that, for any $\\bpt x\\in \\textsc{Hyp}[\\varepsilon_{\\taubp}, \\epsilon_N,\\infty]$, we find one $i\\in\\{1,2,3\\}$ such that ${\\mathrm{D}}_t \\log |N_i|>2\\epsilon_s$, and the remaining two ${\\mathrm{D}}_t\\log|N_j|< -2\\epsilon_s$ and all three ${\\mathrm{D}}_t \\log \\delta_j <-2\\epsilon_s$.\n\nLet $\\varepsilon_\\taubp, \\epsilon_N, \\epsilon_s>0$ be as above.\nFor any piece of trajectory $\\bpt x:(t_1,t_2)\\to \\textsc{Hyp}[\\varepsilon_\\taubp, \\epsilon_N,\\infty]$, we can conclude\n\\[\\numberthis\\begin{aligned}\n\\int_{t_1}^{t_2}|N_i|{\\mathrm{d}} t &< \\frac{\\epsilon_N}{2\\epsilon_s}\\\\\n\\mathrm{diam}\\, \\gamma \\le \\int_{t_1}^{t_2}|\\gamma'(t)|{\\mathrm{d}} t &\\le C_\\Sigma\\int_{t_1}^{t_2}\\max_i|N_i|(t){\\mathrm{d}} t\\le C_\\Sigma\\frac{\\epsilon_N}{2\\epsilon_s},\n\\end{aligned}\\]\nwhere we can choose $C_\\Sigma= 2$ if $\\epsilon_N<0.1$ (from 
\\eqref{eq:ode}).\n\\end{lemma}\n\\begin{proof}\nThe first part of the lemma consists of choosing $\\epsilon_s$ and $\\epsilon_N$ dependent on $\\varepsilon_T$. \nFrom Equations \\eqref{eq:ode2-ni} and \\eqref{eq:ode2-delta}, we see that each ${\\mathrm{D}}_t \\log |N_i|$ and ${\\mathrm{D}}_t \\log \\delta_i$ depends only on the $\\bpt \\Sigma$-coordinates and is positive only on some disc in ${\\mathbb{R}}^2$. These six discs are plotted in Figures \\ref{fig:n-discs} and \\ref{fig:delta-discs}, and they touch or intersect $\\mathcal K$ only near the three Taub-points, a neighborhood of which is excluded. By the constraint $1-\\Sigma^2 = N^2$ and $|N^2|\\le 9 \\epsilon_N^2$, the set \\textsc{Hyp}, depicted in Figure \\ref{fig:nuclear}, lies near $\\mathcal K$, and the desired uniformity estimates hold.\n\n\nThe second part follows from the uniform hyperbolicity in \\textsc{Hyp}: In each component of \\textsc{Hyp}, exactly one $N_i$ is unstable (see Figure \\ref{fig:nuclear} and Figure \\ref{fig:n-discs}). Suppose without loss of generality that $N_1$ is the unstable direction; then we can estimate for $t\\in (t_1,t_2)$:\n\\[\n|N_1(t)| \\le e^{-2\\epsilon_s (t_2-t)}|N_1(t_2)|,\\qquad |N_2(t)|\\le e^{-2\\epsilon_s(t-t_1)}|N_2(t_1)|.\n\\]\nFor $|N_3|$, the same estimate as for $|N_2|$ holds.\nUsing $|N_1(t_2)|\\le \\epsilon_N$ and $|N_2(t_1)|\\le \\epsilon_N$ and integrating yields the claim about $\\int |N_i|{\\mathrm{d}} t$. From \\eqref{eq:ode}, we see $|\\bpt x'|\\le C_\\Sigma \\max_i|N_i|$ (if $\\max_i|N_i|\\le 1$).\n\\end{proof} \n\nContinuity of the flow allows us to approximate solutions in \\textsc{Cap} by heteroclinic solutions in $\\mathcal A$, up to any desired precision $\\varepsilon_c$, if we only choose the distance from $\\mathcal A$ (i.e.~$\\epsilon_d>0$) small enough. More precisely:\n\n\\begin{lemma}[Continuity of the flow near \\textsc{Cap}]\\label{lemma:farfromtaub-cap}\nLet $\\epsilon_N>0$ and $\\varepsilon_c>0$. 
Then there exists $\\epsilon_d=\\epsilon_d(\\epsilon_N, \\varepsilon_c)$ small enough and $\\hat C_0=\\hat C_0(\\epsilon_N)>0$ large enough, such that the following holds:\n\nLet $\\bpt x:(t_1,t_2)\\to \\textsc{Cap}[\\epsilon_N, \\epsilon_d]$ be a piece of a trajectory. \nThen $t_2-t_1 < \\hat C_0$ and there exists $y\\in \\mathcal A$ such that \n\\[\\numberthis\\label{eq:farfromtaum-cap-continuity} d(\\bpt x(t), \\phi(y, t-t_1))<\\varepsilon_c\\qquad \\text{for all}\\quad t\\in(t_1,t_2),\\]\nwhere $\\phi:\\mathcal M\\times {\\mathbb{R}}\\to\\mathcal M$ is the flow corresponding to \\eqref{eq:ode}.\n\\end{lemma}\n\\begin{proof}Follows from continuity of the flow and the fact that all trajectories in $\\mathcal A$ are heteroclinic and must leave \\textsc{Cap} at some time.\n\\end{proof}\n \nWe have now collected all the ingredients to formally prove the two main results from this section. First, we \ncombine Lemma \\ref{lemma:farfromtaub-uniform-hyp} and Lemma \\ref{lemma:farfromtaub-cap} in order to show \nthat each $\\delta_i$ is uniformly essentially exponentially decreasing in $\\textsc{Cap}\\cup \\textsc{Hyp}$:\n\\begin{proof}[Formal proof of Proposition \\ref{prop:farfromtaub-main}]\nGiven $\\varepsilon_T>0$, find $\\epsilon_N, \\epsilon_d, \\epsilon_s, \\hat C_0>0 $ such that Lemma \\ref{lemma:farfromtaub-uniform-hyp} and Lemma \\ref{lemma:farfromtaub-cap} hold (for arbitrary $\\varepsilon_c$).\n\nSet $\\mu_i = \\delta_i'\/\\delta_i + \\epsilon_s$; it suffices to show that $\\int_{t_1}^{t_2}\\mu_i(t) {\\mathrm{d}} t < \\log C_0$, independently of the trajectory $\\gamma$, of $i\\in\\{1,2,3\\}$ and of $t_1,t_2$. Fix $\\gamma$, $i$ and $t_1<t_2$, and decompose $(t_1,t_2)$ into maximal subintervals spent in \\textsc{Cap} and in \\textsc{Hyp}. Each \\textsc{Cap}-interval has length at most $\\hat C_0$ and therefore contributes at most $\\log C_0$ (for $C_0$ large enough, since the rates ${\\mathrm{D}}_t\\log\\delta_i$ are uniformly bounded); by the informal argument above, each intermediate \\textsc{Hyp}-interval $(S_k,T_k)$ satisfies $T_k-S_k > - \\frac{2}{3}\\log \\frac{\\epsilon_d}{\\epsilon_N}$.\nAdjust $\\epsilon_d>0$ to be so small that $\\epsilon_s(T_k-S_k) > 2\\log C_0$. Then such an interval gives us a contribution of $\\int_{S_k}^{T_k}\\mu_i(t){\\mathrm{d}} t < -\\log C_0$. 
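\nThis contribution can be computed directly from the uniform hyperbolicity (a sketch; recall $\\mu_i={\\mathrm{D}}_t\\log\\delta_i+\\epsilon_s$ and ${\\mathrm{D}}_t\\log\\delta_i\\le-2\\epsilon_s$ in \\textsc{Hyp} by Lemma \\ref{lemma:farfromtaub-uniform-hyp}):\n\\[\n\\int_{S_k}^{T_k}\\mu_i(t)\\,{\\mathrm{d}} t \\le (-2\\epsilon_s+\\epsilon_s)(T_k-S_k) = -\\epsilon_s(T_k-S_k) < -2\\log C_0 \\le -\\log C_0,\n\\]\nwhere the second-to-last inequality uses the adjustment of $\\epsilon_d$ above, and the last step assumes $C_0\\ge 1$.\n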
\n\n For the complete interval $(t_1,t_2)$, sum over $k$; two disjoint intervals in the \\textsc{Cap}-region must always enclose an interval in the \\textsc{Hyp}-section, which cancels the contribution of its preceding \\textsc{Cap}-region. Therefore, at most the last \\textsc{Cap}-region stays unmatched and we obtain\n \\(\n \\int_{t_1}^{t_2}\\mu_i(t){\\mathrm{d}} t < \\log C_0.\n \\)\n\\end{proof}\nNext, we adjust the constants from Lemma \\ref{lemma:farfromtaub-cap} in order to show that trajectories near $\\mathcal A$ can only enter the vicinity of Taub-points via the heteroclinic $-\\taubp_\\ell\\to\\taubp_\\ell$:\n\\begin{proof}[Formal proof of Proposition \\ref{prop:farfromtaub-main2}]\nWe find some $C_u>0$ such that $d(K(p), \\taubp_i)> d(p,\\taubp_i)+C_u$ for every $p\\in\\mathcal K$ with $d(p,\\taubp_i)\\in (\\varepsilon_T, 0.5]$. It is evident from Figure \\ref{fig:short-het} (or, formally, Proposition \\ref{prop:kasnermap-homeomorphism-class}) that this is possible.\n\nBy Lemma \\ref{lemma:farfromtaub-uniform-hyp}, we can make $\\epsilon_N$ small enough that $\\mathrm{diam}\\, \\gamma < C_u\/8$ for pieces of trajectories $\\gamma:(t_1,t_2)\\to{\\textsc{Hyp}}$. \nUsing the continuity of the flow, i.e. Lemma \\ref{lemma:farfromtaub-cap}, we can make $\\epsilon_d$ small enough such that pieces $\\bpt x:(t_1,t_2)\\to {\\textsc{Cap}}$ are approximated by heteroclinic orbits up to distance $C_u\/4$. \n\nNow suppose $T^*<\\infty$ and $\\min_i d(\\bpt x_0, \\mathcal T_i)>5\\varepsilon_T$.\nWe cannot have $\\max_i\\delta_i(\\bpt x(T^*))\\ge \\epsilon_d$; hence, $d(\\bpt x(T^*), \\taubp_\\ell)=\\varepsilon_T$ for some $\\ell\\in\\{1,2,3\\}$. Set\n\\[T^3 = \\sup\\left\\{t<T^*:\\ \\bpt x(t)\\not\\in\\textsc{Circle}[\\epsilon_N, \\epsilon_d]\\ \\text{or}\\ d(\\bpt x(t), \\taubp_\\ell)\\ge 1.5\\varepsilon_T\\right\\},\\]\nwhich is well-defined since initially $\\min_i d(\\bpt x_0, \\mathcal T_i)>5\\varepsilon_T>0$.\nWe cannot have $d(\\taubp_\\ell, \\bpt x(T^3))=1.5\\varepsilon_T$, since we already know $\\mathrm{diam}\\, \\bpt x([T^3,T^*])\\le C_u\/8 < 0.5 \\varepsilon_T$ (since, by construction, $\\bpt x([T^3, T^*])\\subseteq \\textsc{Hyp}(\\varepsilon_\\taubp, \\epsilon_N, \\epsilon_d)$). 
This proves \\eqref{eq:farfromtaub-thm-3star}, as well as \n\\[\\bpt x(T^3)\\in\\partial \\textsc{Cap}[\\epsilon_N, \\epsilon_d]\\cap \\partial \\textsc{Circle}[\\epsilon_N,\\epsilon_d]\\cap B_{1.5\\varepsilon_T}(\\taubp_\\ell).\\]\nNext, we set \n\\[T^2 = \\sup\\left\\{t\\in [0,T^3): \\bpt x([t,T^3))\\subseteq\\textsc{Cap}[\\epsilon_N, \\epsilon_d]\\right\\}.\\]\nIn the interval $t\\in [T^2,T^3]$, the trajectory is in one of the three \\textsc{Cap} regions; this must be the $|N_\\ell|\\ge \\epsilon_N$ cap, since otherwise $d(\\bpt x(t), \\taubp_\\ell)$ would be decreasing (see Figure \\ref{fig:short-het}). \nWe set \n\\[T^1 = \\sup\\left\\{t\\in [0,T^2): \\bpt x(t)\\not\\in \\textsc{Circle}[\\epsilon_N, \\epsilon_d]\\right\\}.\\]\nSimilar arguments yield the remaining claim \\eqref{eq:farfromtaub-thm-12}.\n\\end{proof}\n\n\\begin{remark}\nThe constants generated in this section are sub-optimal (at least doubly exponentially so). If one cared at all about their numerical values, then one would need to replace Lemma \\ref{lemma:farfromtaub-cap} and Proposition \\ref{prop:kasnermap-homeomorphism-class} by explicit estimates.\n\\end{remark}\n\n\n\\section{Analysis near the generalized Taub-spaces $\\mathcal T_i^G$}\\label{sect:near-taub}\nIn this section, we will study the dynamics in the vicinity of the generalized Taub-spaces, without loss of generality $\\mathcal T_1^G$, using the polar coordinates from Section \\ref{sect:polar-coords}. This section is structured in the following way:\n\nIn Section \\ref{sect:neartaub:motivation}, we will give a highly informal motivation for the general form of our estimates. This part can be safely skipped by readers who are uncomfortable with its hand-wavy nature. In Section \\ref{sect:neartaub:heteroclinic}, we will study the behaviour of trajectories near the heteroclinic orbit $-\\taubp_1\\to\\taubp_1$, which come from either the $|N_2|\\gg 0$ or the $|N_3|\\gg 0$ cap. 
In Section \\ref{sect:neartaub:neartaub}, we will study the further behaviour near $\\taubp_1$ of such trajectories. In Section \\ref{sect:neartaub:forbidden-cones}, we will study the behaviour of trajectories near $\\taubp_1$ which do \\emph{not} necessarily have the prehistory described in Section \\ref{sect:neartaub:heteroclinic}, and in particular provide the deferred proof of Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub}. This section is mostly optional for our main results: Any trajectory which ever leaves the region where Section \\ref{sect:neartaub:forbidden-cones} is necessary will never revisit this region, a fact which is proven without referring to any results from Section \\ref{sect:neartaub:forbidden-cones}.\n\n\n\n\n\\subsection{Informal Motivation}\\label{sect:neartaub:motivation}\nWe already alluded to the motivation for the estimates in this section in the introductory Section \\ref{sect:intro}, page \\pageref{paragraph:intro:strategy:quotients}: \nFrom Proposition \\ref{prop:farfromtaub-main}, by varying $\\varepsilon_\\taubp$, we can control the behaviour of trajectories near $\\taubp_1$ if $\\delta_1 \\ll r_1$ and obtain estimates of the following type for partial trajectories $\\gamma:(t_1,t_2)\\to B_{\\epsilon}(\\taubp_1)$ and continuous monotone functions $\\rho:(0,1]\\to (0,1]$:\n\\begin{quote}\nSuppose $\\delta_1(t_1)<\\rho_0(r_1(t_1))$;\\\\ then $\\delta_1(t_2) < C_0 e^{-\\rho_1(r_1(t_1))}\\delta_1(t_1)$ and $r_1(t_2)\\ge \\rho_2(r_1(t_1))$.\n\\end{quote}\nThe bounds will take the specific form $\\rho_0(r)=Cr$ and $\\rho_2(r) = C r$ and $\\rho_1(r)= \\frac{C}{r^2}$ (Proposition \\ref{prop:neartaub:main}).\n\nWe know some prehistory of trajectories entering the vicinity of $\\taubp_1$ (by Proposition \\ref{prop:farfromtaub-main2}), which allows us to track backwards the condition $\\delta_1<\\rho_0(r_1)$. 
Tracking this condition further backwards, \nit is clear that trajectories entering the vicinity of $-\\taubp_1$ must have $\\delta_1 \\le \\epsilon_d \\ll r_1\\sim \\epsilon_N$.\n\nAt least in the Bianchi \\textsc{IX}-like case, where $\\mathrm{sign}\\,N_2=\\mathrm{sign}\\,N_3$, the set $\\{\\bpt x:\\,r_1=0\\}$ is invariant, allowing us to obtain some $\\rho_3:(0,1]\\to (0,1]$ such that $\\delta_1 < \\rho_3(r_1)$ for \\emph{any} trajectory entering the vicinity of $\\taubp_1$ via the route in Proposition \\ref{prop:farfromtaub-main2}, i.e.~via $-\\taubp_1$ and some cap before.\n\nThese estimates combine well if we can make $\\rho_3 < \\rho_0$.\nThe estimates will take the specific form $\\rho_3(r)= \\epsilon r$, for arbitrarily small $\\epsilon>0$ (Proposition \\ref{prop:near-taub:qc}), which is precisely the required estimate at $\\taubp_1$. \nIn Bianchi \\textsc{VIII}, we have no qualitative a priori reason to expect bounds of the same form. Nevertheless, we will prove that they hold, which allows us to control any solution entering $\\taubp_1$ as in Proposition \\ref{prop:farfromtaub-main2}.\n\nFor the sake of brevity, we will present our analysis in the reverse order: We chronologically follow a trajectory from $-\\taubp_1$ to $+\\taubp_1$ and then until it leaves the vicinity of $+\\taubp_1$, instead of tracking estimates backwards.\n\n\\subsection{Analysis near $-\\taubp_1$ and near the heteroclinic $-\\taubp_1\\to\\taubp_1$}\\label{sect:neartaub:heteroclinic}\nThe behaviour of trajectories away from $\\taubp_1$ is already partially described by Proposition \\ref{prop:farfromtaub-main}; we only need to additionally estimate the quotient $\\frac{\\delta_1}{r_1}$ in this region. The necessary estimates can be summarized in the following:\n\\begin{proposition}\\label{prop:near-taub:qc}\nLet $\\varepsilon_{T}\\in (0,0.1)$. 
We can choose $\\epsilon_N, \\epsilon_d>0$ small enough, such that Propositions \\ref{prop:farfromtaub-main} and \\ref{prop:farfromtaub-main2} hold, as well as choose constants $C_1,\\ldots,C_5>0$ large enough, such that the following holds:\n\nLet $0< T_1 \\le T_2$ and $\\bpt x:[0,T_2)\\to \\mathcal M_{\\pm\\pm\\pm}$ be a piece of a trajectory such that\n\\begin{subequations}\n\\begin{align}\n\\bpt x(0)&\\in \\partial \\textsc{Circle}[\\epsilon_N, \\epsilon_d],\\label{eq:near-taub-main-ass-initial} \\\\\n\\nonumber &\\quad\\text{i.e.}\\quad \\max_i\\delta_i(0)<\\epsilon_d,\\ \\max(|N_2|,|N_3|)(0)=\\epsilon_N \\\\\n\\bpt x([0,T_1))&\\subseteq B_{2\\varepsilon_{T}}(-\\taubp_1)\\cap \\textsc{Circle}[\\epsilon_N, \\epsilon_d],\\label{eq:near-taub-main-ass-q}\\\\\n\\nonumber &\\quad\\text{i.e.}\\quad \\max_i\\delta_i(t)<\\epsilon_d,\\ \\max_i|N_i|(t)\\le\\epsilon_N,\\ d(\\bpt x(t), \\taubp_1)\\le 2\\varepsilon_T\\qquad\\forall\\,t\\in [0,T_1]\\\\\n\\bpt x([T_1,T_2))&\\subseteq\\textsc{Cap}[\\epsilon_N, \\epsilon_d]\\label{eq:near-taub-main-ass-c},\\\\\n\\nonumber &\\quad\\text{i.e.}\\quad \\max_i\\delta_i(t)<\\epsilon_d,\\ |N_1|(t)\\ge \\epsilon_N \\quad \\forall\\,t\\in [T_1,T_2),\n\\end{align}\\end{subequations}\ni.e.~we are in the situation of the conclusion of Proposition \\ref{prop:farfromtaub-main2}. 
\n\nThen the following estimates hold:\n\\begin{subequations}\\begin{align}\n\\frac{\\delta_1}{r_1}(0) &\\le C_1\\epsilon_d&&\\label{eq:near-taub-main-conc-initial}\\\\\n\\frac{\\delta_1}{r_1}(t_2) &\\le C_2 \\frac{\\delta_1}{r_1}(t_1)\\qquad&\\forall& 0 \\le t_1 \\le t_2 < T_2\\label{eq:near-taub-main-conc-delta-r-qc}\\\\\n\\delta_1(t_2) &\\le C_3 e^{- C_4^{-1}(t_2-t_1)}\\delta_1(t_1)\\qquad&\\forall&0 \\le t_1 \\le t_2 < T_2\\label{eq:near-taub-main-conc-delta-qc}\\\\\nr_1(t_2) &\\ge C_5^{-1} r_1(t_1)\\qquad&\\forall& T_1 \\le t_1 \\le t_2 < T_2\\label{eq:near-taub-main-conc-r-c}.\n\\end{align}\\end{subequations}\nAlternatively to the assumption \\eqref{eq:near-taub-main-ass-initial}, we can assume \\eqref{eq:near-taub-main-conc-initial}; then the case $0=T_1$ is valid as well.\n\\end{proposition}\n\\begin{proof}\nThe claim \\eqref{eq:near-taub-main-conc-delta-qc} is already proven in Proposition \\ref{prop:farfromtaub-main}.\nAssume without loss of generality that $|N_3|(0)=\\epsilon_N$. Assuming that $\\epsilon_d$ is small enough compared to $\\epsilon_N$, we have $|N_2|(0)\\le \\delta_1^2(0)\/\\epsilon_N \\le 0.5 \\epsilon_N$ and hence $r_1(0) \\ge |N_3|-|N_2|\\ge 0.5\\epsilon_N$. Therefore $\\frac{\\delta_1}{r_1}(0)\\le C\\delta_1(0)\\le C\\epsilon_d$ and claim \\eqref{eq:near-taub-main-conc-initial} holds.\nFirst, consider the Bianchi \\textsc{IX}-like case ${{\\hat{n}}}_2={{\\hat{n}}}_3$ in polar coordinates, i.e.~equations \\eqref{eq:neartaub-b9-q}. We can immediately estimate $\\partial_t\\log\\frac{\\delta_1}{r_1}\\le C|N_1|$. 
We already established in Section \\ref{sect:far-from-taub} that $\\int |N_1|{\\mathrm{d}} t$ is bounded for $t_1,t_2\\in [0,T_1]$ (Lemma \\ref{lemma:farfromtaub-uniform-hyp}) and that $T_2-T_1$ is bounded (Lemma \\ref{lemma:farfromtaub-cap}), yielding \\eqref{eq:near-taub-main-conc-delta-r-qc}.\n\nNext, consider the Bianchi \\textsc{VIII}-like case ${{\\hat{n}}}_2=+1$ and ${{\\hat{n}}}_3=-1$, i.e.~equations \\eqref{eq:neartaub-b8-q:delta-r}.\nAssume that we have for all $t\\in [0,T_2]$,\n\\begin{equation}\\label{eq:delta-r-one}\\frac{\\delta_1}{r_1}(t) < 1.\\end{equation}\nUnder the assumption \\eqref{eq:delta-r-one}, we can estimate \\[\\frac{N_+}{r_1}=\\frac{\\sqrt{N_-^2+\\delta_1^2}}{r_1}=\\sqrt{\\frac{N_-^2}{N_-^2+\\Sigma_-^2}+\\frac{\\delta_1^2}{r_1^2}}\\le \\sqrt{2},\\] and hence $\\partial_t\\log\\frac{\\delta_1}{r_1}\\le C|N_1|$, yielding \\eqref{eq:near-taub-main-conc-delta-r-qc}.\nIf we adjust $\\epsilon_d$ such that $C_1C_2\\epsilon_d<1$, then this argument bootstraps to prove \\eqref{eq:near-taub-main-conc-delta-r-qc}, without assuming a priori \\eqref{eq:delta-r-one} (proof: assume there was a first time $T\\in (0,T_2)$ at which \\eqref{eq:delta-r-one} is violated; then $\\frac{\\delta_1}{r_1}(T)\\le C_{2}\\frac{\\delta_1}{r_1}(0)\\le C_2C_1\\epsilon_d<1$, a contradiction).\n\nNext, we prove the remaining claim \\eqref{eq:near-taub-main-conc-r-c}.\nSince we know that $\\frac{\\delta_1}{r_1}(t) < 1$, we can estimate $|{\\mathrm{D}}_t \\log r_1| < C$ for all $t\\in [T_1,T_2)$, both in Bianchi \\textsc{VIII} and \\textsc{IX}. 
Since $T_2-T_1$ is bounded, we obtain \\eqref{eq:near-taub-main-conc-r-c}.\n\nConsidering the above proof, it is obvious that we can alternatively replace the assumption \\eqref{eq:near-taub-main-ass-initial} by \\eqref{eq:near-taub-main-conc-initial}, and then also allow $T_1=0$.\n\\end{proof}\n\n\\begin{remark}\nWe excluded the set $\\forbidden[\\epsilon_v]$ from our analysis, given by\n\\[\n\\forbidden[\\epsilon_v] = \\{\\bpt x \\in {\\mathbb{R}}^5: \\delta_1 \\ge \\epsilon_v r_1 \\}, \n\\]\ni.e.~we described the dynamics outside of $\\forbidden$ and showed that the set $\\forbidden$ cannot be reached by initial conditions described by Proposition \\ref{prop:farfromtaub-main2}. \n\nIgnoring the constraint $G=1$, the set $\\forbidden$ looks like a linear cone times ${\\mathbb{R}}^2$, since both $r_1$ and $\\delta_1$ are homogeneous of first order in $N_2,N_3,\\Sigma_-$ and independent of $N_1$ and $\\Sigma_+$. \n\nEven though Bianchi \\textsc{VIII} lacks an explicit invariant Taub-space, the $\\forbidden$-cone around the generalized Taub-space $\\mathcal T_1^G$ is a suitable ``morally backwards invariant'' replacement.\n\\end{remark}\n\n\\subsection{Analysis near $\\taubp_1$}\\label{sect:neartaub:neartaub}\nOur analysis of the neighborhood of $\\taubp_1$ can be summarized in the following\n\\begin{proposition}\\label{prop:neartaub:main}\nFor any $\\varepsilon_T\\in (0,0.1)$, there exist constants $\\epsilon_v>0$, $C_1,C_r,C_{\\delta,r}>0$ and $C_e=10$ such that the following holds:\n\nLet $\\gamma:[0,T^*)\\to B_{2\\varepsilon_T}(\\taubp_1)$ be a partial trajectory with\n\\begin{equation}\n\\frac{\\delta_1}{r_1}(0)< \\epsilon_v.\n\\end{equation}\nThen, for all $0\\le t_1\\le t_2<T^*$,\n\\begin{subequations}\\begin{align}\n\\frac{\\delta_1}{r_1}(t_2) &\\le C_{\\delta,r}\\,\\frac{\\delta_1}{r_1}(t_1)\\label{eq:neartaub:main:delta-r}\\\\\nr_1(t_2) &\\ge C_r^{-1}\\, r_1(t_1)\\label{eq:neartaub:main:r}\\\\\n\\delta_1(t_2) &\\le C_1 e^{-0.1\\int_{t_1}^{t_2} r_1^2(t){\\mathrm{d}} t}\\,\\delta_1(t_1)\\label{eq:neartaub:main:delta},\n\\end{align}\nand the trajectory leaves $B_{2\\varepsilon_T}(\\taubp_1)$ in finite time,\n\\begin{equation}\nT^*<\\infty.\\label{eq:neartaub:main:T}\n\\end{equation}\\end{subequations}\n\\end{proposition}\n\\begin{proof}[Proof of Proposition \\ref{prop:neartaub:main}, conclusion \\eqref{eq:neartaub:main:r}]\nIn polar coordinates, we can estimate ${\\mathrm{D}}_t \\log r_1 > -C|N_1|$, which upon integration yields the claim \\eqref{eq:neartaub:main:r}.\n\\end{proof}\nThe next estimate \\eqref{eq:neartaub:main:delta} requires a slightly more involved averaging-style argument, similar to the proof of Proposition \\ref{prop:farfromtaub-main}:\n\\begin{proof}[Proof 
of Proposition \\ref{prop:neartaub:main}, conclusion \\eqref{eq:neartaub:main:delta}]\nWe set\n\\[\\mu = {\\mathrm{D}}_t\\log\\delta_1 + 0.1 r_1^2.\\]\nIt suffices to prove that $\\int_{t_1}^{t_2}\\mu(t){\\mathrm{d}} t\\le \\log C_0$ for some $C_0>0$.\n\n\\paragraph{Strategy.}\nWe will first consider times where $|N_1|\\not\\ll r_1^2$; the integral $\\int \\mu {\\mathrm{d}} t$ over these times will be bounded by $\\int |N_1|{\\mathrm{d}} t$. Next we will split $\\mu$ into a nonpositive and a nonnegative part; the nonnegative (bad) part will have a contribution for every $\\psi$-rotation, which is bounded by $C r_1$, while the nonpositive (good) part will have a negative contribution for every $\\psi$-rotation which scales with $r_1\\log\\frac{\\delta_1}{r_1}$. Adjusting $\\epsilon_v$ will then yield the desired estimate (after summing over $\\psi$-rotations).\n\n\\paragraph{Estimates for large $|N_1|$.}\nChoose $\\widetilde T$ (possibly $\\widetilde T= 0$) such that $|N_1(t)|\\ge C r_1^2(t)$ (with $C=0.05$) for $t\\in (0, \\widetilde T]$ and $|N_1(t)|\\le 0.1 r_1^2(t)$ for $t \\in [\\widetilde T, T]$. This is possible, since ${\\mathrm{D}}_t \\log |N_1| < 2{\\mathrm{D}}_t \\log r_1$. \nThen $|\\mu(t)| \\le C \\sqrt{|N_1(t)|}$ for all $t\\in [0,\\widetilde T]$ and hence $\\int_{0}^{\\widetilde T}|\\mu(t)|{\\mathrm{d}} t< C$.\n\n\\paragraph{Averaging Estimates.}\nConsider without loss of generality $t_1,t_2\\ge \\widetilde T$. Using $\\Sigma_+\\approx -1$, $\\delta_1 \\le 0.1 r_1$ and $|N_1|\\le 0.1 r_1^2$, we can estimate\n\\[\\mu \\le -0.4 r_1^2 \\cos^2\\psi + 0.6 r_1^2\\sin^2\\psi + 0.01 r_1^2 + 0.1 r_1^2 + C|N_1| \\le -0.25 r_1^2 + r_1^2\\sin^2\\psi.\\]\nWe can also estimate $\\psi'$:\n\\[r_1 |\\sin\\psi| \\le \\psi' \\le 2r_1\\sqrt{\\sin^2\\psi + \\frac{\\delta_1^2}{r_1^2}}.\\]\nLet $\\mu_+ = r_1^2\\sin^2\\psi$ be the positive (bad) part of $\\mu$; for times $t_1\\le t_L<t_R\\le t_2$ spanning a single $\\psi$-rotation, the lower bound $\\psi'\\ge r_1|\\sin\\psi|$ yields $\\int_{t_L}^{t_R}\\mu_+(t){\\mathrm{d}} t \\le C_+ r_1(t_L)$ for some constant $C_+>0$, since $r_1$ changes only by a bounded factor during a single rotation.\nOn the other hand, let $\\mu_-=-0.25 r_1^2$ be the negative (good) part of $\\mu$. 
Take times $t_1\\le t_L<t_R\\le t_2$ spanning a single $\\psi$-rotation. For any $C_+>0$, we find $\\epsilon_v>0$ such that $\\frac{\\delta_1}{r_1}(t_1)<\\epsilon_v$ implies \n$\\int_{t_L}^{t_R}\\mu_-(t){\\mathrm{d}} t < -C_+ r_1(t_L)$: near $\\sin\\psi = 0$, the upper bound on $\\psi'$ forces the rotation to dwell for a long time, so that this integral is bounded above by $c\\, r_1(t_L)\\log\\frac{\\delta_1}{r_1}(t_L)$ for some constant $c>0$, which lies below $-C_+ r_1(t_L)$ once $\\frac{\\delta_1}{r_1}$ is small enough. Hence, by summing over $\\psi$-rotations (and adjusting $\\epsilon_v$), we can conclude the assertion \\eqref{eq:neartaub:main:delta}.\n\\end{proof}\n\\begin{proof}[Proof of Proposition \\ref{prop:neartaub:main}, conclusion \\eqref{eq:neartaub:main:T}]\nWe need to show that solutions with small quotient $\\frac{\\delta_1}{r_1}$ cannot stay near $\\taubp_1$ forever.\n\nAssuming without loss of generality $|N_1|\\ll r_1^2$, we have ${\\mathrm{D}}_\\psi \\log r_1 > C r_1 |\\sin\\psi|$. This shows that the only way of never leaving the vicinity of $\\taubp_1$ is for the angle $\\psi\\in {\\mathbb{R}}$ to stay bounded, i.e.~$\\lim_{t\\to\\infty}\\psi(t)=\\psi^{**}$ and $\\lim_{t\\to\\infty}\\delta_1(t)=0$ (since otherwise $r_1$ increases by too large an amount during each rotation). This is impossible, since the possible limit-points lie on the Kasner-circle $\\mathcal K\\setminus \\{\\taubp_1\\}$ and are not $\\taubp_1$; hence either $N_2$ or $N_3$ is unstable, and since initially $N_2\\neq 0\\neq N_3$, the trajectory cannot converge to such a point.\n\\end{proof}\n\\subsection{Analysis in the \\forbidden-cones}\\label{sect:neartaub:forbidden-cones}\\label{sect:averaging-unneeded}\nOur whole approach aims at avoiding the much trickier analysis of the dynamics in the \\forbidden-cones, where possibly $\\delta_1\\ge r_1$: Since trajectories starting outside of these cones never enter them, it is unnecessary to know what happens in the \\forbidden-cones. However, for various global questions, it is useful to collect at least some results about the dynamics inside these cones. 
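The rotation-averaging mechanism used in the proofs above can be made tangible with a short numerical sketch. It integrates the model system from the proof of Lemma \\ref{neartaub:lemma:joint-increase-in-forbidden} below, under simplifications assumed purely for illustration ($\\Sigma_+$ frozen at $-1$, all $\\mathcal O(|N_1|)$ terms dropped); the run exhibits the two monotonicities exploited throughout: $r_1$ grows while $\\frac{\\delta_1}{r_1}$ decays.

```python
import math

# Illustrative integration (not the full Wainwright-Hsu flow) of the model
# system from the proof of the joint-increase lemma, with the assumed
# simplifications Sigma_+ = -1 and all O(|N_1|) terms dropped.
# Writing q = delta_1 / r_1, the model reads
#   (log r_1)' =  0.5 * r_1^2 * sin(psi)^2
#   (log q)'   = -0.5 * r_1^2 * cos(psi)^2
#   psi'       =  sqrt(3) * r_1 * sqrt(sin(psi)^2 + q^2)
#                 - 0.5 * r_1^2 * sin(psi) * cos(psi)

def step(r, q, psi, h):
    """One explicit Euler step in logarithmic variables."""
    s, c = math.sin(psi), math.cos(psi)
    dlogr = 0.5 * r * r * s * s
    dlogq = -0.5 * r * r * c * c
    dpsi = math.sqrt(3) * r * math.sqrt(s * s + q * q) - 0.5 * r * r * s * c
    return r * math.exp(h * dlogr), q * math.exp(h * dlogq), psi + h * dpsi

r, q, psi = 0.05, 0.01, 0.0
r0, q0 = r, q
for _ in range(20000):          # total model time 200, a few psi-rotations
    r, q, psi = step(r, q, psi, 1e-2)

assert r > r0                   # r_1 increases ...
assert q < q0                   # ... while delta_1 / r_1 decreases
```

The monotone decay of $\\frac{\\delta_1}{r_1}$ together with the growth of $r_1$ is exactly the qualitative content of Lemma \\ref{neartaub:lemma:joint-increase-in-forbidden}.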
\n\nWe already know that solutions in Bianchi \\textsc{IX} cannot converge to the Taub-points; the same holds in Bianchi \\textsc{VIII}, even for solutions in \\forbidden:\n\\begin{lemma}\\label{lemma:no-b8-taub-convergence}\nFor an initial condition $\\bpt x_0\\in \\mathcal M_{+-+}$, it is impossible to have $\\lim_{t\\to\\infty}\\bpt x(t)=\\taubp_1$.\n\\end{lemma}\n\\begin{proof}\nSuppose we have such a solution.\nUsing equation \\eqref{eq:neartaub-b8-q}, we can write near $\\taubp_1$:\n\\[\n{\\mathrm{D}}_t \\log\\left(|N_1|\\frac{\\delta_1}{r_1}\\right) \\le \\sqrt{3}\\frac{|\\Sigma_-|}{r_1} |N_1|\\frac{\\sqrt{N_-^2 + \\delta_1^2} }{r_1} - 2.5.\n\\]\nHence, if ever $|N_1|\\frac{\\delta_1}{r_1}<1$, this inequality is preserved and $|N_1|\\frac{\\delta_1}{r_1}$ decays exponentially. Then we can estimate ${\\mathrm{D}}_t \\log r_1 \\ge -C |N_1| -C |N_1|\\frac{\\delta_1}{r_1}$; all the terms on the right hand side have bounded integral, so $r_1\\to 0$ is impossible.\n\nOn the other hand, if $r_1 < |N_1|\\delta_1$ for all sufficiently large times, we can estimate ${\\mathrm{D}}_t\\log \\delta_1\\ge -C r_1^2 - C |N_1| \\ge -C |N_1|$, which has bounded integral and thus contradicts $\\delta_1\\to 0$.\n\\end{proof}\n\nUnfortunately, this is all we can presently say in the \\forbidden{} cone in Bianchi \\textsc{VIII}. \n\n\\noindent \nIn the case of Bianchi \\textsc{IX}, we can still average over $\\psi$-rotations in order to show that $\\frac{\\delta_1}{r_1}$ decays, even in the \\forbidden{} region:\n\\begin{lemma}\\label{neartaub:lemma:joint-increase-in-forbidden}\nLet $h>0$. There exist constants $\\epsilon, C>0$ such that the following holds:\n\nLet $\\gamma:[0,T]\\to \\{\\bpt x\\in\\mathcal M_{*++}:\\, |N_1|\\le r_1^5,\\,r_1<\\epsilon,\\,\\delta_1\\ge h r_1,\\, d(\\bpt x, \\taubp_1)<0.1\\}$ be a partial trajectory near $\\taubp_1$. 
Then \n\\[\n\\log \\frac{\\delta_1}{r_1}(0) -\\log \\frac{\\delta_1}{r_1}(T) \\le C (\\log r_1(T)-\\log r_1(0)),\n\\]\ni.e.~the increase of $r_1$ and the decrease of $\\frac{\\delta_1}{r_1}$ have comparable rates.\n\\end{lemma}\n\\begin{proof}\nWe can estimate\n\\[\\begin{aligned}\n{\\mathrm{D}}_t \\log r_1 &= r_1^2\\sin^2\\psi \\frac{-\\Sigma_+}{1-\\Sigma_+} + \\mathcal O(|N_1|)\\\\\n{\\mathrm{D}}_t \\log \\frac{\\delta_1}{r_1}&=\\frac{-1}{1-\\Sigma_+}r_1^2\\cos^2\\psi +\\mathcal O(|N_1|)\\\\\n\\psi'&= \\sqrt{3}r_1 \\sqrt{\\sin^2\\psi + \\frac{\\delta_1^2}{r_1^2}} - \\frac{r_1^2}{1-\\Sigma_+} \\cos\\psi \\sin\\psi + \\mathcal O(|N_1\\sin\\psi|).\n\\end{aligned}\\]\nIt suffices to show an estimate of the form\n\\[\n\\int_0^{2\\pi}r_1^2\\frac{\\sin^2\\psi}{\\psi'}{\\mathrm{d}} \\psi >C \\int_0^{2\\pi}\\frac{r_1^2\\cos^2\\psi}{\\psi'}{\\mathrm{d}} \\psi.\n\\]\nUsing $\\delta_1 \\ge h r_1$ and $r_1\\le \\epsilon$, we can directly estimate $\\delta_1 \\le \\psi'\\le 2 \\sqrt{r_1^2+\\delta_1^2}$ (for $\\epsilon>0$ small enough). This allows us to see that $r_1, \\delta_1, \\frac{\\delta_1}{r_1}$ can all change only by a bounded factor during each rotation; hence it suffices to show for $\\psi(t_2)-\\psi(t_1)\\le 2\\pi$ that \n\\(\n\\min_{t\\in [t_1,t_2]}\\psi'(t) > C \\max_{t\\in [t_1,t_2]}\\psi'(t).\n\\)\n\n\\noindent \nHowever,\n\\[\n\\min_{t\\in [t_1,t_2]}\\psi'(t) \\ge \\min_t \\delta_1(t),\\qquad\n\\max_{t\\in [t_1,t_2]}\\psi'(t) \\le 2 \\max_{t} \\sqrt{r_1^2(t) + \\delta_1^2(t)}\\le 2\\sqrt{1+ h^{-2}}\\,\\max_t \\delta_1(t),\n\\]\nand we know that $\\delta_1$ can only change by a bounded factor during each rotation; hence\nthe desired estimate follows.\n\\end{proof}\n\nWith a more subtle averaging argument than the previous ones, we can also show the deferred Lemma \\ref{lemma:farfromtaub:no-delta-increase-near-taub}. 
However, this proof is only given for the sake of completeness and is nowhere used in this work, except for completing the literature review in Section \\ref{sect:farfromA:attract}.\n\\begin{lemma}\\label{lemma:farfromtaub:no-delta-increase-near-taub}\nWe consider without loss of generality the neighborhood of $\\mathcal T_1$. Let $\\epsilon>0$ be small enough. Then there exists a constant $C_{\\delta,\\epsilon}\\in(1,\\infty)$, such that, for any piece of trajectory $\\gamma:[t_1,t_2]\\to \\{\\bpt x\\in\\mathcal M_{*++}: |\\bpt \\Sigma(\\bpt x)-\\taubp_1|\\le \\epsilon,\\,|N_1|\\le 10,\\,\\delta_1\\le 10\\}$, the following estimate holds:\n\\[ \\delta_1(\\gamma(t_2))\\le C_{\\delta,\\epsilon}\\delta_1(\\gamma(t_1)). \\]\n\\end{lemma}\n\\begin{proof}\nWe can assume without loss of generality that $\\delta_1 > h r_1$ for all $t\\in [t_1,t_2]$ and for some $h=\\hat\\epsilon>0$; otherwise Proposition \\ref{prop:neartaub:main} applies. We can also assume without loss of generality that $|N_1| \\le C r_1^2$ for all $t\\in [t_1,t_2]$, by an argument similar to the one in the proof of Proposition \\ref{prop:neartaub:main}.\n\nWe will use the letters \n\\defC{C}\n\\defc{c}\n$C \\gg 1$ and $0 2p-1$, i.e.~$p< \\frac{1}{2-\\alpha}$.\nLet $M\\subset \\mathcal M_{\\pm\\pm\\pm}$ be a compact subset such that Theorem \\ref{thm:local-attractor} holds for any initial condition $\\bpt x_0\\in M$. 
Then $I_\\alpha \\in L^p(M)$, where\n\\begin{equation}\nI_\\alpha (\\bpt x_0) := \\int_0^\\infty \\max_i \\delta_i^\\alpha(t){\\mathrm{d}} t,\n\\end{equation}\ni.e., using $\\phi$ for the flow of \\eqref{eq:ode} and ${\\mathrm{d}}^4 \\bpt x$ for the (four dimensional) Lebesgue measure on $\\mathcal M_{\\pm\\pm\\pm}$,\n\\[\n\\int_{M} \\left[\\int_0^\\infty \\delta_i^\\alpha(\\phi(\\bpt x, t)){\\mathrm{d}} t\\right]^p {\\mathrm{d}}^4 \\bpt x<\\infty\\qquad\\forall\\,i\\in\\{1,2,3\\}.\n\\]\nIf instead $\\alpha\\ge 2$, we already know from Theorem \\ref{thm:local-attractor} that $I_\\alpha \\in L^\\infty(M)$.\n\\end{thm}\nTheorem \\ref{thm:horizon-formation-alpha-p} makes a much stronger claim than Theorem \\ref{thm:horizon-formation}, even for $\\alpha=1$: Local $L^p$-integrability is a sufficient condition for a.e.~finiteness, but very much not necessary. On the other hand, we are aware of no immediate physical interpretation of Theorem \\ref{thm:horizon-formation-alpha-p}.\n\nThe proof of Theorem \\ref{thm:horizon-formation-alpha-p} will not rely on Theorem \\ref{thm:horizon-formation}. Even though the proof of Theorem \\ref{thm:horizon-formation} is therefore entirely optional, we nevertheless choose to state and prove Theorem \\ref{thm:horizon-formation} separately, because we view it as more important and can give a more geometric proof for it than for Theorem \\ref{thm:horizon-formation-alpha-p}.\n\\begin{question}\nUnfortunately, the case $\\alpha=p=1$, i.e.~$I_1 \\in L^1_{\\textrm{loc}}$, is maddeningly out of reach of Theorem \\ref{thm:horizon-formation-alpha-p}, which only provides $I_1\\in L^{1-\\epsilon}_{\\textrm{loc}}$ for any $\\epsilon>0$. An extension to $\\alpha=p=1$ would imply \\emph{finite expectation} of the particle horizon integral.\n\nIs it possible to say something about $\\alpha=p=1$? 
Maybe for special compact subsets $M\\subset \\mathcal M_{\\pm\\pm\\pm}$?\n\\end{question}\n\n\\paragraph{Outline of the proofs.}\nLet us now give a short overview of the remainder of this section. Our primary tool will be a volume-form $\\omega_4$, which is \\emph{expanded} under the flow $\\phi$ of \\eqref{eq:ode}. An alternative description would be to say that we construct a density function, such that $\\phi$ is volume-expanding. The volume-form will be constructed and discussed in Section \\ref{sect:volume-construct}. We will use some very basic facts from the intersection of differential forms, measure theory and dynamical systems theory, which are given in Appendix \\ref{sect:general-vol-app} for the convenience of the reader.\n\nWe will use this expanding volume-form in order to prove our three Theorems in Section \\ref{sect:volume:proofs}.\n\n\\subsection{Volume Expansion}\\label{sect:volume-construct}\nThis section studies the evolution of phase-space volumes. Using logarithmic coordinates $\\beta_i=-\\log|N_i|$, the differential equations \\eqref{eq:ode} \\emph{without} the constraint \\eqref{eq:constraint} yield the remarkably simple and controllable formula ${\\mathrm{D}}_t \\omega_5 = 2N^2 \\omega_5$ for the evolution of the five-dimensional Lebesgue-measure $\\omega_5$ (with respect to $\\Sigma_\\pm, \\beta_i$). \nThis formula shows that the flow $\\phi$ expands the volume $\\omega_5$. Such a volume expansion is impossible for systems living on a manifold with finite volume; it is possible with logarithmic coordinates because these coordinates have pushed the attractor to infinity, and typical solutions escape to infinity in these coordinates. 
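The identity ${\\mathrm{D}}_t \\omega_5 = 2N^2\\omega_5$ is an instance of the Liouville formula: the volume-expansion factor of a flow equals the exponential of the divergence integrated along trajectories. The following self-contained sketch verifies this on a toy planar vector field (chosen purely for illustration; it is unrelated to \\eqref{eq:ode}) by comparing a finite-difference Jacobian determinant of the time-$t$ flow map with the exponential of the integrated divergence.

```python
import math

def f(x, y):
    # toy planar field with nonconstant divergence: div f = y - 1
    return (x * y, -y + x * x)

def div_f(x, y):
    return y - 1.0

def flow(x, y, t, n=4000):
    """RK4 time stepping; also accumulates int_0^t div f along the orbit."""
    h = t / n
    acc = 0.0
    for _ in range(n):
        acc += h * div_f(x, y)
        k1 = f(x, y)
        k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = f(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y, acc

def jacobian_det(x, y, t, eps=1e-6):
    """Finite-difference Jacobian determinant of the time-t flow map."""
    xp = flow(x + eps, y, t); xm = flow(x - eps, y, t)
    yp = flow(x, y + eps, t); ym = flow(x, y - eps, t)
    j11 = (xp[0] - xm[0]) / (2 * eps); j12 = (yp[0] - ym[0]) / (2 * eps)
    j21 = (xp[1] - xm[1]) / (2 * eps); j22 = (yp[1] - ym[1]) / (2 * eps)
    return j11 * j22 - j12 * j21

x0, y0, t = 0.3, 0.2, 1.0
lam_liouville = math.exp(flow(x0, y0, t)[2])   # exp(int div f dt)
lam_direct = jacobian_det(x0, y0, t)           # det(D_x phi(x, t))
assert abs(lam_direct - lam_liouville) < 1e-3 * lam_liouville
```

The same computation, carried out symbolically for \\eqref{eq:ode} in the $(\\bpt\\Sigma,\\bpt\\beta)$-coordinates, produces the divergence $2N^2$ used in the following paragraphs.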
\n\n\\paragraph{Volume expansion for the extended system (without constraint $G=1$).}\nConsider coordinates $\\beta_i$ given by\n\\[\\begin{aligned}\n\\beta_i &= -\\log |N_i|& N_i &= {{\\hat{n}}}_i e^{-\\beta_i}\\\\\n{\\mathrm{d}} \\beta_i &= -\\frac{{\\mathrm{d}} N_i}{N_i}& {\\mathrm{d}} N_i &= -{{\\hat{n}}}_i e^{-\\beta_i}{\\mathrm{d}}\\beta_i\\\\\n\\partial_{\\beta_i} &= -N_i \\partial_{N_i} & \\partial_{N_i}&=-{{\\hat{n}}}_i e^{\\beta_i}\\partial_{\\beta_i},\n\\end{aligned}\\]\nand consider the Lebesgue-measure with respect to the $(\\Sigma_\\pm, \\beta_i)$ coordinates:\n\\begin{equation}\\label{eq:omega5-def}\n\\omega_5 = |{\\mathrm{d}} \\Sigma_+ \\land {\\mathrm{d}} \\Sigma_- \\land {\\mathrm{d}} \\beta_1 \\land {\\mathrm{d}} \\beta_2 \\land {\\mathrm{d}} \\beta_3|,\n\\end{equation}\nwhich is given in $N_i$ coordinates by\n\\[\\begin{aligned}\n\\omega_5 &= \\left|\\frac{-1}{N_1N_2N_3}{\\mathrm{d}} \\Sigma_+ \\land {\\mathrm{d}} \\Sigma_- \\land {\\mathrm{d}} N_1 \\land {\\mathrm{d}} N_2 \\land {\\mathrm{d}} N_3\\right|.\n\\end{aligned}\\]\nLet $\\lambda(\\bpt x, t)$ denote the volume expansion for $\\phi(\\bpt x, t)$, i.e.\n\\[\n\\phi^*(\\bpt x, t)\\omega_5 = \\lambda(\\bpt x, t)\\omega_5,\n\\]\nwhere $\\phi^*$ is the pull-back acting on differential forms. 
Hence, in $(\\bpt \\Sigma,\\bpt \\beta)$-coordinates, $\\lambda(\\bpt x, t)=\\det \\partial_x \\phi(\\bpt x, t)$, and, with $f$ denoting the vectorfield corresponding to \\eqref{eq:ode}:\n\\[\\begin{aligned}\n{\\mathrm{D}}_t \\phi^*(\\bpt x, t)\\omega_5 &= \\left[(\\mathrm{div}\\, f)(\\phi(\\bpt x, t))\\right] \\phi^*(\\bpt x, t)\\omega_5 \\\\\n&= \\left[\\partial_{\\Sigma_+}f_{\\Sigma_+}+\\ldots+\\partial_{\\beta_3}f_{\\beta_3}\\right](\\phi(\\bpt x, t))\\, \\lambda(\\bpt x, t)\\omega_5=2N^2(\\phi(\\bpt x, t))\\lambda(\\bpt x, t)\\omega_5\\\\\n\\lambda(\\bpt x, t) &= \\exp\\left[ 2\\int_0^t N^2(\\phi(\\bpt x, s)){\\mathrm{d}} s\\right].\n\\end{aligned}\\]\nThe volume is really expanding: In most of the phase space, $N^2>0$, and always $N^2> -4|N_1N_2N_3|^{\\frac 2 3}$, whose integral along trajectories is bounded.\n\n\\paragraph{Volume expansion on $\\mathcal M$.}\nWe are not really interested in the behaviour of $\\phi$ on ${\\mathbb{R}}^5$, nor in the measure $\\omega_5$. Instead, we are interested in dynamics and measures on the set $\\mathcal M=\\{\\bpt x\\in {\\mathbb{R}}^5:\\, G(\\bpt x)=1\\}$. We can get an induced measure on $\\mathcal M$ by choosing a vectorfield $X:{\\mathbb{R}}^5\\to T{\\mathbb{R}}^5$ such that ${\\mathrm{D}}_X G =1$ in a neighborhood of $\\mathcal M$. Then we set\n\\begin{equation}\\label{eq:omega4-def}\n\\begin{multlined}\n\\omega_4=\\iota_{X}\\omega_5,\\\\\n\\text{i.e.}\\qquad\\omega_4[X_1,\\ldots, X_4]=\\omega_5[X,X_1,\\ldots, X_4]\\qquad\\text{for}\\quad X_1,\\ldots, X_4\\in T\\mathcal M.\n\\end{multlined}\\end{equation}\nThis induced volume is independent of the choice of $X$ (as long as ${\\mathrm{D}}_X G=1$), and fulfills (see Section \\ref{sect:general-vol-app})\n\\[\n\\phi^*(\\bpt x, t)\\omega_4 = \\lambda(\\bpt x, t)\\omega_4 = \\frac{\\phi^*(\\bpt x, t)\\omega_5}{\\omega_5}\\omega_4.\n\\]\n\n\\paragraph{Volume expansion for Poincar\\'e-maps.}\nWe consider the volume-form $\\omega_3=\\iota_f \\omega_4$, where $f$ is the vectorfield \\eqref{eq:ode}. 
By invariance of $f$ under $\\phi$, we again have $\\phi^*(\\bpt x, t)\\omega_3=\\lambda(\\bpt x, t)\\omega_3$. \n\nLet $U\\subseteq \\mathcal M$ be open and $T:U\\to {\\mathbb{R}}$ be differentiable. Then the map $\\Phi_T:U\\to \\mathcal M$ with $\\Phi_T(\\bpt x)=\\phi(\\bpt x, T(\\bpt x))$ has $\\Phi_T^*\\omega_3=\\lambda(\\bpt x, T(\\bpt x))\\omega_3|_U$ (also see Section \\ref{sect:general-vol-app}).\n\nThis especially holds when $S\\subset \\mathcal M$ is a Poincar\\'e-section and $\\Phi_S=\\Phi_{T_S}$ is the corresponding Poincar\\'e-map, i.e.~when $S\\subset \\mathcal M$ is a submanifold of codimension one, which is transverse to $f$, and $T=T_S(\\bpt x)= \\inf\\{t>0:\\,\\phi(\\bpt x, t)\\in S\\}$.\n\nIf $S\\subseteq \\mathcal M$ is a Poincar\\'e-section and $K\\subseteq S$ is a set with $|K|_{\\omega_3}=\\int_{S}\\mathrm{id}_{K}\\omega_3 =0$, then, by Fubini's Theorem, $|\\phi(K, {\\mathbb{R}})|_{\\omega_4}=0$. The measure $\\omega_4$ is mutually absolutely continuous with respect to the ordinary Lebesgue measure, i.e.~the notions of sets of measure zero coincide for the ordinary Lebesgue-measure and $\\omega_4$.\n\nThis especially applies to the boundary $\\partial S$ with $|\\partial S|=0$. Therefore, if $S_0,S$ are two Poincar\\'e-sections and $K\\subseteq S_0\\subseteq \\mathcal M$ is a set of initial conditions, such that $T_S(\\bpt x)<\\infty$ for almost every $\\bpt x\\in K$, then\n\\[\n\\int_{\\Phi_S(K)\\subseteq S} g(\\bpt x)\\omega_3 = \\int_{K\\subseteq S_0}g(\\Phi_S(\\bpt x))\\lambda(\\bpt x, T_S(\\bpt x))\\omega_3,\n\\]\nfor any $L^\\infty$ function $g$. This is because the map $\\Phi_S$ is, by assumption, sufficiently smooth almost everywhere. \n\nOur primary source of ``almost everywhere'' statements is the following:\n\\begin{lemma}\\label{lemma:vol-dichotomy}\nLet $S\\subset \\mathcal M$ be a Poincar\\'e-section with $N^2>\\epsilon>0$, and let $K\\subseteq S \\subset \\mathcal M$ be forward invariant, i.e. $\\Phi_S$ is well-defined in $K$ and $\\Phi_S(K)\\subseteq K$. 
\n\nThen either $|K|_{\\omega_3}=0$ or $|K|_{\\omega_3}=\\infty$. If $|K|_{\\omega_3}=0$, then $|\\phi(K,{\\mathbb{R}})|_{\\omega_4}=0$.\n\\end{lemma}\n\\begin{proof}\nSince $\\Phi_S(K)\\subseteq K$, we have\n\\[|K|_{\\omega_3}\\ge |\\Phi_S(K)|_{\\omega_3} = \\int_S \\mathrm{id}_{\\Phi_S(K)}\\omega_3 = \\int_S \\mathrm{id}_K \\Phi_S^*\\omega_3 = \\int_S \\mathrm{id}_K \\lambda \\omega_3 \\ge q |K|_{\\omega_3},\\]\nwhere $q=\\inf_{\\bpt x \\in K}\\lambda(\\bpt x, T_S(\\bpt x))>1$. Therefore, either $|K|_{\\omega_3}=0$ or $|K|_{\\omega_3}=\\infty$.\nIf $|K|_{\\omega_3}=0$, then this implies $|\\phi(K,[-h,h])|_{\\omega_4}=0$ for small $h>0$ by Fubini's Theorem and hence $|\\phi(K,{\\mathbb{R}})|_{\\omega_4}=0$.\n\\end{proof}\n\\subsection{Proofs of the main Theorems}\\label{sect:volume:proofs}\nWe will now use the $\\omega_3$-expansion between Poincar\\'e-sections in order to give proofs of the main results.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:b8-attractor-global-genericity} (Bianchi \\textsc{VIII} global attractor genericity)]\\label{proof:thm:b8-attractor-global-genericity}\n\\textsc{Attract} holds for an open set of initial conditions (by virtue of continuity of the flow). Since \\textsc{Taub} can only happen on an embedded submanifold of lower dimension, it suffices to prove that \\textsc{Except} happens only for a set of initial conditions with Lebesgue measure zero.\n\nLet \\textsc{Except} also denote the set of initial conditions for which the case \\textsc{Except} holds. Without loss of generality, we will consider the case $\\mathcal M_{+-+}$ and trajectories bouncing between $+\\taubp_1$ and $-\\taubp_1$.\n\n\nWe choose a small Poincar\\'e-section \n\\[S\\subseteq\\{\\bpt x\\in\\mathcal M:\\,N_1 = \\textrm{const}=h,\\, r_1 \\le \\epsilon,\\, \\delta_1\\le \\epsilon\\},\\]\nintersecting the unique heteroclinic $W^u(-\\taubp_1)$, which connects $-\\taubp_1$ to $\\taubp_1$, near $\\taubp_1$, such that $S$ is a graph over $(\\Sigma_-, N_2, N_3)$. 
Let $T_S:S\\to (0,\\infty]$ be the (partially defined) recurrence time and $\\Phi:S\\to S$ with $\\Phi(\\bpt x)= \\phi(\\bpt x, T_S(\\bpt x))$ be the (partially defined) Poincar\\'e-map to $S$.\n\n\n\n\\textsc{Except} is (by definition) invariant under the flow and $\\Phi(\\textsc{Except}\\cap S)\\subseteq \\textsc{Except}\\cap S$. Hence $\\textsc{Except}\\cap S$ has either vanishing or infinite $\\omega_3$-volume, by Lemma \\ref{lemma:vol-dichotomy}. Therefore, it suffices to prove $|\\textsc{Except}\\cap S|_{\\omega_3}<\\infty$.\n\n\nRecalling the local attractor Theorem \\ref{thm:local-attractor}, we can note $\\delta_1 > \\widetilde \\epsilon r_1$ for every $\\bpt x\\in \\textsc{Except}\\cap S$. This allows us to estimate\n\\begin{subequations}\n\\label{eq:bad-finite-measure}\\begin{align}\n|\\textsc{Except}\\cap S|_{\\omega_3} &\\le \\left|\\{\\bpt x\\in S:\\, \\delta_1 > \\widetilde \\epsilon r_1\\}\\right|_{\\omega_3}\n\\le \\left|\\{\\bpt x\\in S:\\, \\delta_1 > \\widetilde \\epsilon |\\Sigma_-|\\}\\right|_{\\omega_3}\\\\\n&\\le \\left|\\{\\bpt x\\in S:\\,\\beta_2+\\beta_3 < C+C|\\log|\\Sigma_-||\\}\\right|_{\\omega_3}\\\\\n&= \\int_{\\{\\bpt x\\in S:\\, \\beta_2+\\beta_3 < C+C|\\log|\\Sigma_-||\\}} \\omega_3[\\ldots] \\ |{\\mathrm{d}} \\Sigma_-\\land {\\mathrm{d}} \\beta_2 \\land{\\mathrm{d}} \\beta_3|\\label{eq:bad-finite-measure:last-s}\\\\\n&\\le C \\int_{\\{\\bpt x\\in S:\\, \\beta_2+\\beta_3 < C+C|\\log|\\Sigma_-||\\}}|{\\mathrm{d}} \\Sigma_-\\land {\\mathrm{d}} \\beta_2 \\land{\\mathrm{d}} \\beta_3|\\label{eq:bad-finite-measure:last-m1}\\\\\n&\\le C+ C \\int\\left|\\log |\\Sigma_-|\\right|^2 {\\mathrm{d}} \\Sigma_- <\\infty\\label{eq:bad-finite-measure:last},\n\\end{align}\n\\end{subequations}\nwhere we integrated $\\beta_2,\\beta_3$, using $\\beta_2,\\beta_3\\ge 0$ from \\eqref{eq:bad-finite-measure:last-m1} to \\eqref{eq:bad-finite-measure:last}. 
In order to go from \\eqref{eq:bad-finite-measure:last-s} to \\eqref{eq:bad-finite-measure:last-m1}, we used that (in $S$) $N_1=\\mathrm{const}=h$ and $\\Sigma_+=\\Sigma_+(\\Sigma_-, \\beta_2,\\beta_3)$ and \n\\[\\begin{aligned}\n\\omega_3 &= |{\\mathrm{d}} \\Sigma_-\\land {\\mathrm{d}} \\beta_2 \\land {\\mathrm{d}} \\beta_3|\\\\\n&\\qquad\\cdot\\,\\left|\\omega_5\\left[f,\n|\\partial_{\\beta_1} G|^{-1} \\partial_{\\beta_1},\n\\partial_{\\Sigma_-}+\\partial_{\\Sigma_-}\\Sigma_+\\partial_{\\Sigma_+}, \n\\partial_{\\beta_2}+\\partial_{\\beta_2}\\Sigma_+\\partial_{\\Sigma_+},\n\\partial_{\\beta_3}+\\partial_{\\beta_3}\\Sigma_+\\partial_{\\Sigma_+}\\right]\\right|\\\\\n&\\le C |{\\mathrm{d}} \\Sigma_-\\land {\\mathrm{d}} \\beta_2 \\land {\\mathrm{d}} \\beta_3|.\n\\end{aligned}\\]\nThis can be seen by noting $\\partial_{\\beta_1} G = 2 N_1^2 - 2 N_1N_- \\ge C>0$ if $\\epsilon>0$ is small enough and likewise $f_{\\Sigma_+}> C >0$, $|\\partial_{\\Sigma_-, \\beta_2,\\beta_3}\\Sigma_+| \\le \\mathcal O(\\epsilon)$.\n\\end{proof}\nWe will now give the first, more geometric proof of Theorem \\ref{thm:horizon-formation}. Readers who prefer the arithmetic variant can skip ahead to the proof of Theorem \\ref{thm:horizon-formation-alpha-p}, page \\pageref{proof:thm:horizon-formation-alpha-p}. The theorem follows from the following two Lemmas:\n\n\\begin{lemma}\\label{lemma:badrecurrent-zeroset}\nFix $i\\in\\{1,2,3\\}$, without loss of generality $i=1$. \nConsider a small Poincar\\'e-section $S$ as in the proof of Theorem \\ref{thm:b8-attractor-global-genericity}, i.e. 
$S\\subseteq \\mathcal M_{{{\\hat{n}}}}$ intersecting the unique heteroclinic connecting $-\\taubp_1$ to $\\taubp_1$, near \n$\\taubp_1$ such that Proposition \\ref{prop:neartaub:main} holds and $S$ is a smooth graph over $\\Sigma_-, N_2, N_3$.\n\nDenote\n\\[\\begin{aligned}\n\\textsc{Bad}&=\\{\\bpt x\\in S:\\, \\delta_1 > |\\Sigma_-|^4\\}\\\\\n\\textsc{BadRecurrent}&=\n\\{\\bpt x\\in S:\\, \\Phi_S^k(\\bpt x)\\in\\textsc{Bad}\\, \\text{for infinitely many $k$}\\},\n\\end{aligned}\\]\nwhere $\\Phi_S$ is the Poincar\\'e-map to $S$. Then\n\\(\n\\left|\\textsc{BadRecurrent}\\right|_{\\omega_3}=0.\n\\)\n\\end{lemma}\n\\begin{lemma}\\label{lemma:badrecurrent-small-int}\nConsider the setting of Lemma \\ref{lemma:badrecurrent-zeroset}.\n\nLet $\\bpt x_0\\in S\\cap \\textsc{Basin}$ be such that $\\bpt x_0\\not\\in \\textsc{BadRecurrent}$. Then $\\int_0^\\infty \\delta_1(\\phi(\\bpt x_0, t)){\\mathrm{d}} t<\\infty$.\n\\end{lemma}\n\\begin{proof}[Proof of Theorem \\ref{thm:horizon-formation} (almost sure formation of particle horizons)]\nFollows trivially from Lemma \\ref{lemma:badrecurrent-small-int} and Lemma \\ref{lemma:badrecurrent-zeroset}.\n\\end{proof}\n\\begin{proof}[Proof of Lemma \\ref{lemma:badrecurrent-small-int}]\nLet $\\bpt x_0\\not\\in \\textsc{BadRecurrent}$, and let $(T_n)_{n\\in{\\mathbb{N}}}\\subseteq {\\mathbb{R}}$ and $(\\bpt x_n)_{n\\in{\\mathbb{N}}}\\subseteq S$ be the recurrence times and points in $S$, i.e.~$\\bpt x_{n+1}=\\Phi_S(\\bpt x_n)=\\phi(\\bpt x_{n}, T_n)$, $T_n = T_S(\\bpt x_{n})$. By assumption, we have some $N>0$ such that, for all $k\\in{\\mathbb{N}}$, $\\bpt x_{N+k}\\not\\in\\textsc{Bad}$. 
We can estimate\n\\[\\begin{aligned}\n\\int_0^\\infty\\delta_1(\\phi(\\bpt x_0, t)){\\mathrm{d}} t &= \\int_{0}^{T_0+\\ldots+T_{N-1}}\\delta_1(\\phi(\\bpt x_0, t)){\\mathrm{d}} t + \\sum_{n=N}^{\\infty}\\int_{0}^{T_n}\\delta_1(\\phi(\\bpt x_{n}, t )){\\mathrm{d}} t\\\\\n&\\le C(\\bpt x_0) + \\sum_{n=N}^{\\infty}\\int_{0}^{T_n}C\\exp\\{-C r_1^2(\\bpt x_n)\\, t\\}\\delta_1(\\bpt x_n){\\mathrm{d}} t\\\\\n& \\le C(\\bpt x_0)+ \\sum_{n=N}^{\\infty}C\\frac{\\delta_1}{r_1^2}(\\bpt x_n)\\\\\n& \\le C(\\bpt x_0)+ \\sum_{n=N}^{\\infty}C\\sqrt{\\delta_1}(\\bpt x_n) <\\infty,\n\\end{aligned}\\]\nwhere we used Proposition \\ref{prop:neartaub:main} and $\\delta_1 < r_1^4$ and the fact that $\\delta_1(\\bpt x_{n+1})< \\frac 1 2 \\delta_1(\\bpt x_n)$.\n\\end{proof}\n\\begin{proof}[Proof of Lemma \\ref{lemma:badrecurrent-zeroset}]\nAnalogously to the proof of Theorem \\ref{thm:b8-attractor-global-genericity}, we have $|\\textsc{Bad}|_{\\omega_3}<\\infty$. Then, we can write\n\\[\\begin{aligned}\n\\left|\\textsc{BadRecurrent}\\right|_{\\omega_3}&=\\left|\\bigcap_{n\\in{\\mathbb{N}}}\\bigcup_{k\\ge n} \\Phi^{-k}(\\textsc{Bad})\\right|_{\\omega_3}\\\\\n&\\le \\lim_{n\\to \\infty} \\sum_{k\\ge n}\\left|\\Phi^{-k}(\\textsc{Bad})\\right|_{\\omega_3} \n\\le \\lim_{n\\to \\infty} \\sum_{k\\ge n}q^{-k}\\left|\\textsc{Bad}\\right|_{\\omega_3}=0,\n\\end{aligned}\\]\nwhere $q=\\inf \\{\\lambda(\\bpt x, T_S(\\bpt x)): \\bpt x\\in S\\}>1$.\n\\end{proof}\n\\begin{proof}[Proof of Theorem \\ref{thm:horizon-formation-alpha-p} ($L^p$ estimates for the generalized localization integral)]\\label{proof:thm:horizon-formation-alpha-p}\nThe claim for $\\alpha\\in [2,\\infty)$ follows trivially from Theorem \\ref{thm:local-attractor}. For the other case, we restrict our attention without loss of generality to $\\delta_1$. Furthermore, we can without loss of generality assume that we start with $C\\subseteq S$ for some Poincar\\'e section and estimate the integral with respect to $\\omega_3$. 
Recall the construction in the proof of Theorem \\ref{thm:b8-attractor-global-genericity} with a Poincar\\'e-section $S_1$ near the heteroclinic $-\\taubp_1\\to \\taubp_1$. We can estimate, for some positive $s\\in (2p-1, \\alpha p)$: \n\\begin{subequations}\n\\begin{align*}\n\\int_{\\bpt x\\in C} \\biggl[\\int_0^\\infty \\delta_i^\\alpha(&\\phi(\\bpt x, t)){\\mathrm{d}} t\\biggr]^p \\omega_3\n= \\int_{\\bpt x\\in C} \\left[\\sum_{n} \\int_{T_{n}}^{T_{n+1}}\\delta_i^\\alpha(\\phi(\\bpt x, t)){\\mathrm{d}} t\\right]^p \\omega_3\\\\\n&\\le \\sum_{n} \\int_{\\bpt x\\in C} \\left[\\int_{T_{n}}^{T_{n+1}}\\delta_i^\\alpha(\\phi(\\bpt x, t)){\\mathrm{d}} t\\right]^p \\omega_3\\numberthis\\label{eq:foobar:p}\\\\\n&\\le C\\sum_{n} \\int_{\\bpt x\\in C} \\left[\\frac{\\delta_i^\\alpha}{r_{i}^{2}}(T_n)\\right]^p \\omega_3\\\\\n&= C\\sum_{n} \\int_{\\bpt x\\in C} \\left[ \\delta_i^{\\alpha p -s} r_i^{-2p +s} \\left(\\frac{\\delta_i}{r_i}\\right)^{s}\\right](T_n)\\omega_3\\\\\n&\\le C\\sum_{n}\\, \\sup_{\\bpt x\\in C} \\left(\\frac{\\delta_i}{r_i}\\right)^{s}(T_n)\\,\\,\\cdot\\,\\, \\int_{\\bpt x\\in C} \\left[\\delta_i^{\\alpha p -s} r_i^{-2p +s}\\right] (T_n)\\omega_3.\\numberthis\\label{eq:foobar:hoelder}\n\\end{align*}\n\\end{subequations}\n\nWe have used $p<1$ in order to split the integral in \\eqref{eq:foobar:p} and the H\\\"older inequality in \\eqref{eq:foobar:hoelder}.\n We continue the estimates by noting that $\\sup_{\\bpt x\\in C}\\frac {\\delta_i} {r_i}(T_n)$ decreases exponentially in $n$. Hence we only need to bound the second factor. 
This can be done by using $\\alpha p -s>0$ and $-2p+s > -1$ in order to see\n\\[\\begin{aligned}\n\\int_{\\bpt x\\in C} \\left[\\delta_i^{\\alpha p -s} r_i^{-2p +s}\\right] (T_n)\\omega_3 \n&\\le C \\int_{S_1} \\left[\\delta_i^{\\alpha p -s} r_i^{-2p +s}\\right] \\omega_3\\\\\n&\\le C \\int_{-0.1}^{0.1} \\left[\\int_{\\beta_2,\\beta_3\\ge 0} e^{-\\frac{(\\beta_2+\\beta_3)(\\alpha p-s)}{2}}|\\Sigma_-|^{-2p+s} {\\mathrm{d}} \\beta_2\\land{\\mathrm{d}} \\beta_3 \\right] {\\mathrm{d}} \\Sigma_-\\\\\n&\\le C \\int_{-0.1}^{0.1}|\\Sigma_-|^{-2p+s} {\\mathrm{d}} \\Sigma_- < \\infty.\n\\end{aligned}\\]\n\\end{proof}\n\n\n\n\\section{Physical Properties of Solutions for Bianchi \\textsc{VIII} and \\textsc{IX}}\\label{sect:gr-phys-interpret}\nWe will now use the results of this work in order to describe some physical properties of Bianchi spacetimes. Recall Section \\ref{sect:equations}, where we described Bianchi spacetimes in terms of the Wainwright-Hsu equations; the derivation of these equations is recalled in Section \\ref{sect:append:derive-eq}.\n\n\n\\paragraph{Bounded life-time.}\nSince the mean curvature $H$ corresponds to the time-derivative of the spatial volume form $\\sqrt{g_{11}g_{22}g_{33}}$, the universe described by such a metric is contracting for $H<0$. Physically, we are interested in the behaviour towards the initial (big bang) singularity; this setting is time-reversed to physical time-variables, and we should look at the behaviour of solutions for $t\\to +\\infty$. Since $|H|$ is at least uniformly exponentially growing, we can immediately see \\( \\textsc{EigenFuture} = \\int_{0}^\\infty \\sqrt{-g_{00}}{\\mathrm{d}} t <\\infty,\\) that is, the universe has only a finite (eigen-) lifetime until $H$ blows up and a singularity occurs. 
In our coordinates, this singularity is placed at $t=+\\infty$.\n\nA priori, we cannot know whether this singularity is a physical singularity (with curvature blow-up) or just a coordinate singularity (it might be that only our coordinate system blows up, while the actual space-time remains regular). In the case of Bianchi \\textsc{VIII} and \\textsc{IX}, it has been proven in \\cite{ringstrom2001bianchi} that the singularity is physical, with curvature blow-up. This is done by considering the so-called Kretschmann scalar\n$\\kappa = \\sum_{\\alpha,\\beta,\\gamma,\\delta}R^{\\alpha\\beta}{}_{\\gamma\\delta} R^{\\gamma\\delta}{}_{\\alpha\\beta}$ and showing that $\\lim_{t\\to\\infty}\\kappa(t)=\\infty$. We refer to \\cite{ringstrom2001bianchi} for the details.\n\\paragraph{Bounded spatial metric coefficients.}\nNow we restrict our attention to the case of Bianchi \\textsc{IX} and \\textsc{VIII}, where all $\\hat n_i\\neq 0$. \n\n\\noindent The coefficients $g_{ii}=\\frac 1 {48}\\frac {|N_i|}{H^2 |N_1N_2N_3|}$ stay bounded: We know, using the global attractor Theorems \\ref{thm:b9-attractor-global} and \\ref{thm:b8-attractor-global}, that the $|N_i|$ stay bounded. We can compute:\n\\[\n{\\mathrm{D}}_t \\log |H^2N_1N_2N_3|= -3\\Sigma^2 + 1 + 2\\Sigma^2 = N^2 \\ge -4|N_1N_2N_3|^{\\frac 2 3},\n\\]\nwhich shows that $|H^2N_1N_2N_3|$ stays bounded away from zero, using Lemma \\ref{farfrom-A:lemma:uniform}. Hence all three $g_{ii}$ stay bounded for $t\\to+\\infty$. 
Indeed, since $N^2>\\epsilon>0$ for large amounts of time, we can conclude $\\lim_{t\\to+\\infty} g_{ii}(t)=0$.\n\n\\paragraph{Particle Horizons.}\nRecall the question of particle horizons from the introduction, and the definition of communication cones \\eqref{eq:intro:comcone}, which we here adjust to match our convention that the big bang singularity is situated in the future:\n\\[\\begin{aligned}\n&\\text{Singularity directed light cone of $\\bpt p$:}\\\\\n&\\quad J^-(\\bpt p) = \\{\\bpt q:\\, \\text{there is } \\gamma:[0,1]\\to M\\,\\,\\text{with}\\, \\gamma(0)=\\bpt p, \\gamma(1)=\\bpt q,\\,\\text{time-like future directed}\\}\\\\\n&\\text{Non-singularity directed light cone of $\\bpt p$:}\\\\\n&\\quad J^+(\\bpt p) = \\{\\bpt q:\\, \\text{there is } \\gamma:[0,1]\\to M\\,\\,\\text{with}\\, \\gamma(0)=\\bpt p, \\gamma(1)=\\bpt q,\\,\\text{time-like past directed}\\}\\\\\n&\\text{Communication cone of $\\bpt p$:}\\\\\n&\\quad \\phantom{\\partial}J^+(J^-(\\bpt p)) = \\bigcup_{\\bpt q \\in J^-(\\bpt p)}J^+(\\bpt q)\\\\\n&\\text{Cosmic horizon of $\\bpt p$:}\\\\\n&\\quad \\partial J^+(J^-(\\bpt p)) = \\text{the topological boundary of the past communication cone.}\n\\end{aligned}\\]\nWe can now relate the question of particle horizons to our estimates on $\\int \\sqrt{|N_iN_j|}(t) {\\mathrm{d}} t$. This gives the physical interpretation of Theorem \\ref{thm:horizon-formation}. 
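The mechanism can be illustrated with a short numerical sketch: if the coordinate speed of light, controlled by $\\max_{j\\neq k}\\sqrt{|N_jN_k|}$, decays integrably, then the diameter bound on the communication cone shrinks to zero. The decay profile $e^{-t}$ below is a hypothetical stand-in, not a solution of the actual equations.

```python
import math

def comcone_diam_bound(t0, C=1.0, n=100000, t_max=60.0):
    """Midpoint-rule approximation of C * int_{t0}^{infty} e^{-t} dt,
    modelling the communication-cone diameter bound with the hypothetical
    decay profile max_{j != k} sqrt(|N_j N_k|)(t) = e^{-t}."""
    h = (t_max - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        total += C * math.exp(-t) * h
    return total

# matches the closed form C * (e^{-t0} - e^{-t_max}) and vanishes as t0 grows
assert abs(comcone_diam_bound(0.0) - 1.0) < 1e-3
assert comcone_diam_bound(10.0) < 1e-4
```

In the lemma below, the role of the profile $e^{-t}$ is played by the true decay of $\\max_{j\\neq k}\\sqrt{|N_jN_k|}$, whose integrability is guaranteed almost surely by Theorem \\ref{thm:horizon-formation}.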
Remember that time is oriented such that the big bang singularity is situated in the future at $t=\\infty$.\n\\begin{lemma}\\label{lemma:horizon-integral}\nThere is a constant $C>0$, such that, for Bianchi \\textsc{IX} and \\textsc{VIII} vacuum spacetimes $M$, we can estimate for $\\bpt p\\in M$ and $t_0\\ge t(\\bpt p)$\n\\begin{equation}\\label{eq:comcone-estimate}\\begin{aligned}\n\\mathrm{diam}_h \\left[J^-(\\bpt p) \\cap \\{\\bpt q\\in M: t(\\bpt q)=t_0\\}\\right] &\\le C\\int_{t(\\bpt p)}^{t_0} \\max_{j\\neq k} \\sqrt{|N_jN_k|}(t){\\mathrm{d}} t\\\\\n\\mathrm{diam}_h \\left[J^+(J^-(\\bpt p)) \\cap \\{\\bpt q\\in M: t(\\bpt q)=t(\\bpt p)\\}\\right] &\\le C \\int_{t(\\bpt p)}^\\infty \\max_{j\\neq k} \\sqrt{|N_jN_k|}(t){\\mathrm{d}} t,\n\\end{aligned}\\end{equation}\nwhere the diameter is measured with the symmetry metric $h$ given on the surfaces of homogeneity $\\{t=\\mathrm{const}\\}$ by $h=\\omega_1 \\otimes \\omega_1 + \\omega_2\\otimes \\omega_2 + \\omega_3\\otimes \\omega_3$. \nFurthermore, suppose that $M$ is a spacetime corresponding to a Bianchi \\textsc{VIII} or \\textsc{IX} solution with $\\int_0^\\infty \\delta_i(t){\\mathrm{d}} t <\\infty$ for all $i\\in\\{1,2,3\\}$, as in the conclusion of Theorem \\ref{thm:horizon-formation}. Use the shorthand $J^+(J^-(t_0))=J^+(J^-(\\bpt p))$ for some $\\bpt p\\in M$ with $t(\\bpt p)=t_0$.\nThen the following holds:\n\\begin{enumerate}\n\\item $\\lim_{t_0\\to\\infty} \\mathrm{diam}_h\\left[J^+(J^-(t_0))\\cap \\{\\bpt q\\in M: t(\\bpt q)=t_0\\}\\right]=0$.\n\\item $\\lim_{t_0\\to\\infty} \\mathrm{diam}_g\\left[J^+(J^-(t_0))\\cap \\{\\bpt q\\in M: t(\\bpt q)=t_0\\}\\right]=0$.\n\\item For $t_0>0$ large enough, $\\partial J^+(J^-(t_0))\\neq \\emptyset$.\n\\item For $t_0>0$ large enough, the communication cone is homeomorphic to $(0,1)$ times the three-dimensional unit-ball with boundary. 
In other words, the following manifolds with boundary are homeomorphic, where $B_1^{{\\mathbb{R}}^3}(0)$ is the three-dimensional unit-ball:\n\\begin{multline}\\nonumber\n\\left[J^+(J^-(t_0)) \\cap \\{\\bpt q\\in M: t(\\bpt q) > t_0\\}\\,,\\, \\partial J^+(J^-(t_0)) \\cap \\{\\bpt q\\in M: t(\\bpt q) > t_0\\}\\right]\\\\\n\\sim \\left[(0,1) \\times B_1^{{\\mathbb{R}}^3}(0)\\,,\\, (0,1)\\times \\partial B_1^{{\\mathbb{R}}^3}(0)\\right]\\end{multline}\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nAny time-like singularity-directed curve $\\gamma$ starting in $\\bpt p$ must fulfill $|{\\mathrm{D}}_t \\gamma_i| \\le \\sqrt{-g_{00} g^{ii}}=\\sqrt{12}\\sqrt{\\widetilde N_j \\widetilde N_k}$. It is clear that the $h$-length of such a curve must be bounded by $C\\max_{j\\neq k} \\int_{t(\\bpt p)}^\\infty \\sqrt{\\widetilde N_j \\widetilde N_k}(t){\\mathrm{d}} t$ (parametrized over the time $t$ corresponding to \\eqref{eq:ode-from-gr}), if the curve only accesses times later than $t_0$. This proves the estimate on $J^-(\\bpt p)$. The estimate on $J^+(J^-(\\bpt p))$ follows.\n\nIf $\\int_{t_0}^{\\infty}\\max_{j\\neq k}\\sqrt{|N_jN_k|}(t){\\mathrm{d}} t <\\infty$, then $\\lim_{t_0\\to\\infty}\\int_{t_0}^{\\infty}\\max_{j\\neq k}\\sqrt{|N_jN_k|}(t){\\mathrm{d}} t=0$, which proves $(1)$. $(2)$ follows because the metric coefficients $g_{ij}$ are bounded. $(4)$ follows because the injectivity radius $\\mathrm{inj}_h(t)$ of the hypersurfaces of homogeneity is independent of the time $t$, if we measure it with respect to the (time-independent) $h$-metric. 
$(3)$ follows trivially from $(4)$.\n\\end{proof}\nHence, Theorem \\ref{thm:horizon-formation} really shows that almost every Bianchi \\textsc{VIII} and \\textsc{IX} vacuum solution forms particle horizons.\n\n\n\\begin{figure}[hbt]\n\\centering\n\\begin{subfigure}[t]{0.48\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{.\/graphics\/recombination-cone-comm.pdf}\n\\caption{A sketched space-time, where homogeneity of the observable universe could be explained by mixing between the big bang and recombination.}\n\\end{subfigure}~~\n\\begin{subfigure}[t]{0.48\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{.\/graphics\/recombination-cone-noncomm.pdf}\n\\caption{A sketched space-time, where homogeneity of the observable universe cannot be explained by mixing between the big bang and recombination.}\n\\end{subfigure}\n\\caption{Whether observed homogeneity could be explained by mixing depends on the relation of the conformal distance between recombination and singularity versus the conformal distance between the present time and recombination.}\\label{fig:recomb-cone}\n\\end{figure}\nIn the introduction, we promised to give some more details on the relation of the formation of particle horizons to homogenization of the universe by mixing. Astronomical observations show that the universe appears to be mostly homogeneous at large scales. However, optical astronomical observations go back only to recombination, the moment when the primordial plasma condensed to a gas and became transparent to light. Hence, there is only a bounded region of space-time in our past which is optically accessible. A possible explanation for the observed homogeneity, cf.~e.g.~\\cite{misner1969mixmaster}, might be that the universe, i.e.~matter, radiation, etc., mixed in the time-frame between the initial singularity and recombination. 
This is only possible if the outer parts of our optical past light-cone have a joint causal past (see Figure \\ref{fig:recomb-cone}). This is why the formation of particle horizons does not necessarily spell doom for attempts to explain the observed homogeneity through mixing \n(apart from the actual universe not being a homogeneous, anisotropic vacuum spacetime of Bianchi type $\\textsc{VIII}$ or $\\textsc{IX}$).\n\nIndeed, the same problem is present in the ``standard model'' of cosmology, which is a (homogeneous, isotropic) FLRW model. There, the observed homogeneity is typically explained by inflation, i.e.~one postulates a phase of rapid expansion that increases the conformal distance between recombination and singularity, mediated by an exotic, hitherto unobserved matter field.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nDeep learning is revolutionizing big-data applications, after being responsible for groundbreaking improvements in image classification \\cite{krizhevsky2012imagenet} and speech recognition \\cite{hinton2012deep}. With the recent upsurge in data, at a much faster rate than our computing capabilities, neural networks are growing larger to process information more effectively. In 2012, state-of-the-art convolutional neural networks contained at most 10 layers. Afterward, each successive year has brought deeper architectures with greater accuracy. Last year, Microsoft's deep residual network \\cite{he2015deep}, which won the ILSVRC 2015 competition with a 3.57\\% error rate, had 152 layers and 11.3 billion FLOPs. To handle such large neural networks, researchers usually train them on high-performance graphics cards or large computer clusters. \n\nThe basic building block of a neural network is a neuron. Each neuron activates when a specific feature appears in its input. The magnitude of a neuron's activation is a measure of the neuron's confidence for the presence or absence of a feature. 
In the first layer of a deep network, the neurons detect simple edges in an image. Each successive layer learns more complex features. For example, the neurons will find parts of a face first, such as the eyes, ears, or mouth, before distinguishing between different faces. For classification tasks, the final layer is a classic linear classifier - Softmax or SVM. Multiple layers combine to form a complex, non-linear function capable of representing any arbitrary function \\cite{cybenko1989approximation}. Adding more neurons to a network's layers increases its expressive power. This enhanced expressive power has made massive-sized deep networks a common practice in large scale machine learning systems, which have shown some very impressive boosts over old benchmarks. \n\nDue to the growing size and complexity of networks, finding efficient algorithms for training massive deep networks in a distributed and parallel environment is currently the most sought-after problem in both academia and the commercial industry. For example, Google \\cite{dean2012large} used a 1-billion parameter neural network, which took three days to train on a 1000-node cluster, totaling over 16,000 CPU cores. Each instantiation of the network spanned 170 servers. In distributed computing environments, the parameters of giant deep networks are required to be split across multiple nodes. However, this setup requires costly communication and synchronization between the parameter server and processing nodes in order to transfer the gradient and parameter updates. The sequential and dense nature of gradient updates prohibits any efficient splitting (sharding) of the neural network parameters across computer nodes. There is no clear way to avoid the costly synchronization without resorting to some ad-hoc breaking of the network. This ad-hoc breaking of deep networks is not well understood and is likely to hurt performance. Synchronization is one of the major hurdles in scalability. 
Asynchronous training is the ideal solution, but it is sensitive to conflicting, overlapping parameter updates, which leads to poor convergence.\n\nWhile deep networks are growing larger and more complex, there is also a push for greater energy efficiency to satisfy the growing popularity of machine learning applications on mobile phones and low-power devices. For example, there is recent work by \\cite{mcmahan2016federated} aimed at leveraging the vast data of mobile devices. This work has the users train neural networks on their local data, and then periodically transmit their models to a central server. This approach preserves the privacy of the user's personal data, but still allows the central server's model to learn effectively. Their work is dependent on training neural networks locally. Back-propagation~\\cite{rumelhart1988learning} is the most popular algorithm for training deep networks. Each iteration of the back-propagation algorithm is composed of giant matrix multiplications. These matrices are very large, especially for massive networks with millions of nodes in the hidden layer, which are common in industrial practice. Large matrix multiplications are parallelizable on GPUs, but not energy-efficient. Users require their phones and tablets to have long battery life. Reducing the computational costs of neural networks, which directly translates into longer battery life, is a critical issue for the mobile industry.\n\nThe current challenges for deep learning illustrate a great demand for algorithms that reduce the amount of computation and energy usage. To reduce the bottleneck matrix multiplications, there has been a flurry of works around reducing the amount of computations associated with them. Most of them revolve around exploiting low-rank matrices or low precision updates. We review these techniques in detail in Section~\\ref{sec:related_work}. 
However, updates with these techniques are hard to parallelize, making them unsuitable for distributed and large scale applications. On the contrary, our proposal capitalizes on the sparsity of the activations to reduce computations. To the best of our knowledge, this is the first proposal that exploits sparsity to reduce the amount of computation associated with deep networks. We further show that our approach admits asynchronous parallel updates, leading to perfect scaling with increasing parallelism. \n\nRecent machine learning research has focused on techniques for dealing with the famous problem of over-fitting with deep networks. A notable line of work \\cite{ba2013adaptive, makhzani2013k, makhzani2015winner} improved the accuracy of neural networks by only updating the neurons with the highest activations. Adaptive dropout \\cite{ba2013adaptive} sampled neurons in proportion to an affine transformation of the neuron's activation. The Winner-Take-All (WTA) approach \\cite{makhzani2013k, makhzani2015winner} kept the top-k\\% neurons, by calculating a hard threshold from mini-batch statistics. It was found that such a selective choice of nodes and sparse updates provide a natural regularization \\cite{srivastava2014dropout}. However, these approaches rely on inefficient, brute-force techniques to find the best neurons. Thus, these techniques are just as expensive as the standard back-propagation method, leading to no computational savings. \n\nWe propose a hashing-based indexing approach to train and test neural networks, which capitalizes on the rich theory of randomized sub-linear algorithms from the database literature to perform adaptive dropouts \\cite{ba2013adaptive} efficiently while requiring significantly less computation and memory overhead. 
Furthermore, hashing algorithms are embarrassingly parallel and can be easily distributed with small communications, which is perfect for large-scale, distributed deep networks.\n\nOur idea is to index the neurons (the weights of each neuron as a vector) into a hash table using locality sensitive hashing. Such hash tables of neurons allow us to select the neurons with the highest activations, without computing all of the activations, in sub-linear time, leading to significant computational savings. Moreover, as we show, since our approach randomly selects a sparse active set of neurons, the gradient updates are unlikely to overwrite one another. Such updates are ideal for asynchronous and parallel gradient updates. It is known that asynchronous stochastic gradient descent \\cite{recht2011hogwild} (ASGD) will converge if the number of simultaneous parameter updates is small. We heavily leverage this sparsity, which is unique to our proposal. On several deep learning benchmarks, we show that our approach outperforms standard algorithms including vanilla dropout \\cite{srivastava2014dropout} at high sparsity levels and matches the performance of adaptive dropout \\cite{ba2013adaptive} and winner-take-all \\cite{makhzani2013k, makhzani2015winner} while needing only 5\\% of the computation.\n\n\\subsection{Our Contributions}\n\\begin{enumerate} \n \\item We present a scalable and sustainable algorithm for training and testing of fully-connected neural networks. Our idea capitalizes on the recent successful technique of adaptive dropouts, combined with a smart data structure (hash tables) based on the recently developed locality sensitive hashing (LSH) for maximum inner product search (MIPS) \\cite{shrivastava2014asymmetric}. We show significant reductions in the computational requirement for training deep networks, without any significant loss in accuracy (within 1\\% of the accuracy of the original model on average). 
In particular, our method achieves the performance of other state-of-the-art regularization methods such as Dropout, Adaptive Dropout, and Winner-Take-All when using only 5\\% of the neurons in a standard neural network.\n \\item Our proposal reduces computations associated with both the training and testing (inference) of deep networks by reducing the multiplications needed for the feed-forward operation. \n \\item The key idea in our algorithm is to associate LSH hash tables~\\cite{gionis1999similarity,Proc:Indyk_STOC98} with every layer. These hash tables support constant-time $O(1)$ insertion and deletion operations.\n \\item Our scheme naturally leads to sparse gradient updates. Sparse updates are ideally suited for massively parallelizable asynchronous training \\cite{recht2011hogwild}. We demonstrate that this sparsity opens room for truly asynchronous training without any compromise in accuracy. As a result, we obtain near-linear speedup when increasing the number of processors.\n\\end{enumerate}\n\n\\section{Related Work}\n\\label{sec:related_work}\nThere have been several recent advances aimed at improving the performance of neural networks. \\cite{lin2015neural} reduced the number of floating point multiplications by mapping the network's weights stochastically to \\{-1, 0, 1\\} during forward propagation and performing quantized back-propagation that replaces floating-point multiplications with simple bit-shifts. Reducing precision is an orthogonal approach, and it can be easily integrated with any other approaches. \n\n\\cite{sindhwani2015structured} uses structured matrix transformations with low-rank matrices to reduce the number of parameters for the fully-connected layers of a neural network. This low-rank constraint leads to a smaller memory footprint. However, such an approximation is not well suited for asynchronous and parallel training, limiting its scalability. 
We instead use random but sparse activations, leveraging database advances in approximate query processing, because they can be easily parallelized (see Section~\\ref{sec:lowrank} for more details). \n\nWe briefly review dropout and its variants, which are popular sparsity promoting techniques relying on sparse activations. Although such randomized sparse activations have been found favorable for better generalization of deep networks, to the best of our knowledge, this sparsity has not been adequately exploited to make deep networks computationally cheap and parallelizable. We provide the first such evidence. \n\nDropout \\cite{srivastava2014dropout} is primarily a regularization technique that addresses the issue of over-fitting by randomly dropping half of the nodes in a hidden layer while training the network. The nodes are independently sampled for every stochastic gradient descent epoch~\\cite{srivastava2014dropout}. We can reinterpret Dropout as a technique for reducing the number of multiplications during the forward and backward propagation phases, by randomly ignoring nodes in the network while computing the feed-forward pass. It is known that the network's performance becomes worse when too many nodes are dropped from the network. Usually, only 50\\% of the nodes in the network are dropped when training the network. At test time, the network takes the average of the thinned networks to form a prediction from the input data, which requires the full computation. \n\nAdaptive dropout \\cite{ba2013adaptive} is an evolution of the dropout technique that adaptively chooses the nodes based on their activations. The methodology samples a few nodes from the network, where the sampling is done with probability proportional to the node activations, which are dependent on the current input. Adaptive dropout demonstrates better performance than vanilla dropout~\\cite{srivastava2014dropout}. 
A notable feature of adaptive dropout was that it was possible to drop significantly more nodes compared to vanilla dropout while still retaining superior performance. Winner-Take-All \\cite{makhzani2013k, makhzani2015winner} is an extreme form of adaptive dropout that uses mini-batch statistics to enforce a sparsity constraint. With this technique, only the k\\% largest, non-zero activations are used during the forward and backward phases of training. This approach requires computing the forward propagation step before selecting the k\\% nodes with a hard threshold. Unfortunately, all these techniques require full computation of the activations to selectively sample nodes. Therefore, they are only intended for better generalization and not for reducing computations. Our approach uses the insight that selecting a very sparse set of hidden nodes with the highest activations can be reformulated as a dynamic approximate query processing problem, which can be solved efficiently using locality sensitive hashing. The differentiating factor between adaptive dropout or winner-take-all and our approach is that we use sub-linear time randomized hashing to determine the active set of nodes instead of computing the inner product for each node individually. \n\nThere is also another orthogonal line of work which uses hashing to reduce memory. \\cite{chen2015compressing} introduced a new type of deep learning architecture called Hashed Nets. Their objective was to decrease the number of parameters in the given neural network by using a universal random hash function to tie node weights. Network connections that map to the same hash value (hash collisions) are restricted to have the same value. The architecture has the virtual appearance of a regular network while maintaining only a small subset of real weights. We point out that hashed nets are complementary to our approach because we focus on reducing the computational cost involved in using deep nets rather than their size in memory. 
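The brute-force Winner-Take-All selection discussed above is easy to make concrete. Below is a minimal sketch (our own illustration, not the cited papers' code; the function name and the per-example thresholding are hypothetical simplifications of their mini-batch statistics) of the top-k\% hard threshold. The selection step must touch every activation, which is why these techniques yield no computational savings:

```python
import numpy as np

def winner_take_all(activations, k_percent):
    """Hypothetical sketch: keep only the top-k% activations, zero the rest.

    The np.partition call is the brute-force selection step; it must
    inspect every activation, which is what makes this approach as
    expensive as standard back-propagation.
    """
    n_keep = max(1, int(len(activations) * k_percent / 100.0))
    # Hard threshold taken from the n_keep-th largest activation
    # (per-example here; the papers use mini-batch statistics).
    threshold = np.partition(activations, -n_keep)[-n_keep]
    return np.where(activations >= threshold, activations, 0.0)

a = np.array([0.1, 2.5, 0.3, 4.0, 0.05, 1.2, 0.7, 3.3, 0.2, 0.9])
sparse_a = winner_take_all(a, k_percent=20.0)  # keeps only 4.0 and 3.3
```

The resulting vector is sparse, but obtaining it required a full pass over all activations, in contrast to the hashing-based selection proposed in this paper.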
\n\n\\section{Low-Rank vs Sparsity}\n\\label{sec:lowrank}\nThe low-rank (or structured) assumption is very convenient for reducing the complexity of general matrix operations. However, low-rank and dense updates do not promote sparsity and are not friendly for distributed computing. The same principle holds with deep networks. We illustrate it with a simple example. \n\nConsider a layer of the first network (left) shown in Figure~\\ref{fig:structure_matrix}. The insight is that if the weight matrix $W \\in \\mathbb{R}^{m \\times n}$ for a hidden layer has low-rank structure where rank $r \\ll \\min(m,n)$, then it has a representation $W=UV$ where $U \\in \\mathbb{R}^{m \\times r}$ and $V \\in \\mathbb{R}^{r \\times n}$. This low-rank structure improves the storage requirements and matrix-multiplication time from $O(mn)$ to $O(mr + rn)$. As shown in Figure \\ref{fig:structure_matrix}, there is an equivalent representation of the same network using an intermediate hidden layer that contains $r$ nodes and uses the identity activation function. The weight matrices for the hidden layers in the second network (right) map to the matrix decomposition, $U$ and $V$. \\cite{sindhwani2015structured} uses structured matrix transformations with low-rank matrices to reduce the number of parameters for the fully-connected layers of a neural network. \n\nThe new network structure and the equivalent structure matrices require a dense gradient update, which is not ideally suited for data parallelism~\\cite{recht2011hogwild}. For example, they need sequential gradient descent updates. In this work, we move away from the low-rank assumption and instead make use of randomness and sparsity to reduce computations. We will later show that due to sparsity, our approach is well suited for Asynchronous Stochastic Gradient Descent (ASGD), leading to near-linear scaling. 
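The cost comparison above can be checked directly. The following NumPy sketch (the concrete sizes $m$, $n$, $r$ are our own illustrative choices, not from the paper) verifies that the factored multiply $U(Vx)$ produces the same output as the dense multiply $Wx$ at a fraction of the multiply-add count:

```python
import numpy as np

# Hypothetical layer sizes (for illustration): m outputs, n inputs,
# and a rank-r factorization W = U V.
m, n, r = 1024, 512, 32
rng = np.random.default_rng(0)
U = rng.standard_normal((m, r))
V = rng.standard_normal((r, n))
W = U @ V                       # full m-by-n weight matrix, rank <= r
x = rng.standard_normal(n)

y_dense = W @ x                 # O(m*n) multiply-adds
y_factored = U @ (V @ x)        # O(r*n + m*r) multiply-adds, same output

dense_cost = m * n              # 524288 multiply-adds
factored_cost = r * n + m * r   # 49152 multiply-adds, roughly a 10x saving
```

Both products agree up to floating-point rounding, but the factored update remains dense in the parameters, which is the parallelism problem discussed above.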
\n\nIt should be noted that our methodology is very different from the notions of making networks sparse by thresholding (or sparsifying) node weights permanently \\cite{srivastava2014dropout, makhzani2013k, makhzani2015winner}. Our approach makes use of every node weight but picks them selectively for different inputs. On the contrary, permanent sparsification only has a static set of weights which are used for every input and rest are discarded. \n\n\\begin{figure} [ht]\n\\begin{center}\n \\includegraphics[width=.50\\textwidth]{Structured_Matrix.pdf}\n\\end{center}\n\\begin{caption}\n{\\bf Illustration of why the Low-Rank assumption for neural networks naturally leads to fewer parameters:} \n(1) This network contains two layers with m and n neurons respectively. The weighted connections between the layers are characterized by the weight matrix $W \\in R^{m x n}$ of constrained rank $r$ such that $W = UV$ with $U \\in \\mathbb{R}^{m x r}$ and $V \\in \\mathbb{R}^{r x n}$.\n\n(2) An equivalent network contains three layers each with m, r, and n neurons. The top two layers are represented with the matrix $U \\in \\mathbb{R}^{m x r}$, while the bottom two layers are characterized with the matrix $V \\in \\mathbb{R}^{r x n}$. The intermediate layer uses the identity activation function $I$ such that the output from network 2 equals that of network 1. $a = f(W^T x) = f((U V)^T x) = f((V^T U^T) x) = f(V^T I(U^T x))$ where $a$ is the layer's output and f is a non-linear activation function.\n\\end{caption}\n\\label{fig:structure_matrix}\n\\end{figure}\n\n\\section{Background}\\subsection{Artificial Neural Networks}\nNeural networks are built from layers of neurons. The combination of multiple hidden layers creates a complex, non-linear function capable of representing any arbitrary function \\cite{cybenko1989approximation}. 
The forward propagation equation is $$a^{l} = f(W^{l} a^{l-1} + b^{l})$$ where $a^{l-1}$ is the output from the previous layer $l-1$, $W^{l}$ is an $N \\times M$ weight matrix for layer $l$, $b^{l}$ is the bias term for layer $l$, and $f$ is the activation function. Common non-linear activation functions are the sigmoid, tanh, and ReLU. Neural networks are trained using stochastic gradient descent (SGD) - $\\theta_{t+1} = \\theta_t - \\eta \\nabla J(\\theta)$. The gradient for the neural network $\\nabla J(\\theta)$ is calculated layer-by-layer using the back-propagation algorithm. The Momentum \\cite{polyak1964momentum} technique calculates the parameter update from a weighted sum of the previous momentum term and the current gradient - $\\nu_{t+1} = \\gamma \\nu_t + \\eta \\nabla J(\\theta)$. It reduces the training time of neural networks by mitigating the stochastic nature of SGD. Adaptive gradient descent (Adagrad) \\cite{duchi2011adaptive} is a variant of SGD that adapts the learning rate for each parameter. The global learning rate is normalized by the sum of squared gradients. It favors larger learning rates for infrequent or small parameter updates.\n\n\\subsection{Adaptive Dropout}\n\\label{sec:adaptive_dropout}\nAdaptive Dropout \\cite{ba2013adaptive} is a recently proposed dropout technique that selectively samples nodes with probability proportional to some monotonic function of their activations. To achieve this, the authors in \\cite{ba2013adaptive} chose a Bernoulli distribution. The idea is that given an input $a^{l-1}$ to a layer $l$, we generate a Bernoulli random variable for every node in the layer, with Bernoulli probability $q$ being a monotonic function of the activation. Then, the binary variable sampled from this distribution is used to determine if the associated neuron's activation is kept or set to zero. 
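The Bernoulli sampling step just described can be sketched in a few lines. In the sketch below, the keep probability is a sigmoid of a scaled pre-activation; the specific affine transform (`alpha`, `beta`) is our illustrative choice and not necessarily the one used in \cite{ba2013adaptive}:

```python
import numpy as np

def adaptive_dropout(z, alpha=1.0, beta=0.0, rng=None):
    """Hypothetical sketch of the Bernoulli sampling step of adaptive dropout.

    The keep probability q = sigmoid(alpha*z + beta) is a monotonic
    function of the pre-activation z, so strongly activated units are
    far more likely to survive than weakly activated ones.
    """
    rng = rng or np.random.default_rng()
    q = 1.0 / (1.0 + np.exp(-(alpha * z + beta)))   # keep probability per unit
    keep = rng.random(z.shape) < q                  # one Bernoulli draw per unit
    return np.maximum(z, 0.0) * keep                # ReLU activation, then mask

z = np.array([-4.0, -1.0, 0.5, 3.0, 6.0])
out = adaptive_dropout(z, alpha=2.0, rng=np.random.default_rng(0))
```

Note that computing `q` still requires every pre-activation, which is the full-computation cost that the hashing scheme in this paper avoids.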
This adaptive selectivity allows fewer neurons to be activated in the hidden layer while improving neural network performance over standard Dropout \\cite{srivastava2014dropout}. The Winner-Take-All approach \\cite{makhzani2015winner} is an extreme case of Adaptive Dropout that takes only the top k\\% activations in a hidden layer, which also performs very well.\n\n\\subsection{Locality-Sensitive Hashing (LSH)}\nLocality-Sensitive Hashing (LSH) \\cite{gionis1999similarity, sundaram2013streaming, huang2015query, gao2014dsh, shinde2010similarity} is a popular, sub-linear time algorithm for approximate nearest-neighbor search. The high-level idea is to map similar items into the same bucket with high probability. An LSH hash function maps an input data vector to an integer key - $h: x \\in \\mathbb{R}^D \\mapsto [0, 1, 2,\\dots, N]$. A collision occurs when the hash values for two data vectors are equal - $h(x) = h(y)$. The probability of collision for an LSH hash function is proportional to the similarity between the two data vectors - $Pr[h(x) = h(y)] \\propto \\mathrm{sim}(x,y)$. Essentially, similar items are more likely to collide with the same hash fingerprint and vice versa. \n\n\\subsubsection*{Sub-linear Search with $(K,L)$ LSH Algorithm} To be able to answer approximate nearest-neighbor queries in sub-linear time, the idea is to create hash tables (see Figure~\\ref{fig:algorithm}). Given the collection $\\mathcal{C}$, which we are interested in querying for the nearest-neighbor items, the hash tables are generated using the locality sensitive hash (LSH) family. We assume that we have access to the appropriate locality sensitive hash (LSH) family $\\mathcal{H}$ for the similarity of interest. In the classical $(K,L)$ parameterized LSH algorithm, we generate $L$ different hash functions given by $B_j(x) = [h_{1j}(x);h_{2j}(x);...;h_{{Kj}}(x)]$. 
Here $h_{ij}, i \\in \\{1,2,...,K \\}$ and $j \\in \\{1,2,...,L \\}$, are $KL$ different evaluations of the appropriate locality sensitive hash (LSH) function. Each of these hash functions is formed by concatenating $K$ sampled hash values from $\\mathcal{H}$.\n\nThe overall algorithm works in two phases (See~\\cite{Report:E2LSH} for details):\n\\begin{enumerate}\n \\itemsep0em\n \\item {\\bf Pre-processing Phase:} We construct $L$ hash tables from the data by storing all elements $x \\in \\mathcal{C}$ at location $B_j(x)$ in hash-table $j$ (See Figure~\\ref{fig:algorithm} for an illustration). We only store pointers to the vectors in the hash tables, as storing data vectors is very memory inefficient.\n \\item {\\bf Query Phase:} Given a query $Q$, we will search for its nearest-neighbors. We report the union of all the points in the buckets $B_j(Q)$ $\\forall j \\in \\{1,2,...,L\\}$, where the union is over $L$ hash tables. Note, we do not scan all the elements in $\\mathcal{C}$; we only probe $L$ different buckets, one from each hash table. \n\\end{enumerate}\n\n\\cite{Proc:Indyk_STOC98} shows that, given an LSH family for a similarity measure and an appropriate choice of $K$ and $L$, the above algorithm is provably sub-linear. In practice, it is near-constant time because the query only requires a few bucket probes. \n\n\\subsubsection*{Hashing Inner Products} It was recently shown that by allowing asymmetric hash functions, the same algorithm can be converted into a sub-linear time algorithm for Maximum Inner Product Search (MIPS) \\cite{shrivastava2014asymmetric}. In locality sensitive hashing, a query $x$ is always its own nearest neighbor, since the self-collision probability is $Pr[h(x) = h(x)] = 1$. However, self-similarity does not necessarily correspond with the highest inner product $x^Tx = ||x||^2_2$. Therefore, an asymmetric transformation is necessary to map maximum inner product search (MIPS) to the standard nearest-neighbor search (LSH). 
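As an illustration of the two-phase $(K,L)$ algorithm, here is a minimal sketch using random-hyperplane (SimHash) hash functions, whose collision probability is monotonic in cosine similarity. This is a plain symmetric LSH of our own construction; it deliberately omits the asymmetric MIPS transformation that the paper's method requires:

```python
import numpy as np
from collections import defaultdict

class SimHashTables:
    """Minimal (K, L) LSH sketch with random-hyperplane (SimHash) functions."""

    def __init__(self, dim, K=8, L=4, seed=0):
        rng = np.random.default_rng(seed)
        # L hash functions B_j, each the concatenation of K sign bits.
        self.planes = rng.standard_normal((L, K, dim))
        self.tables = [defaultdict(list) for _ in range(L)]

    def _keys(self, x):
        bits = (self.planes @ x) > 0   # shape (L, K): one sign bit per plane
        # Pack each row of K bits into a single integer bucket key.
        return [int("".join("1" if b else "0" for b in row), 2) for row in bits]

    def insert(self, idx, x):
        # Pre-processing phase: store a pointer (index) in one bucket per table.
        for table, key in zip(self.tables, self._keys(x)):
            table[key].append(idx)

    def query(self, q):
        # Query phase: union of the L probed buckets, one per table.
        out = set()
        for table, key in zip(self.tables, self._keys(q)):
            out.update(table[key])
        return out

lsh = SimHashTables(dim=16, K=8, L=4, seed=1)
rng = np.random.default_rng(2)
data = rng.standard_normal((20, 16))
for i, v in enumerate(data):
    lsh.insert(i, v)
candidates = lsh.query(data[0])  # always contains 0: a point collides with itself
```

A query touches only $L$ buckets rather than all 20 stored vectors, which is the source of the sub-linear search cost.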
Our work heavily leverages these algorithms. For the purpose of this paper, we will assume that we can create a very efficient data structure, which can be queried for large inner products. See~\\cite{shrivastava2014asymmetric} for details.\n\n\\subsubsection*{Multi-Probe LSH} One common complaint with the classical LSH algorithm is that it requires a significant number of hash tables. Large $L$ increases the hashing time and memory cost. A simple solution is to probe multiple ``close-by'' buckets in every hash table rather than probing only one bucket~\\cite{lv2007multi}. Thus, for a given query $Q$, in addition to probing $B_j(Q)$ in hash table $j$, we also generate several new addresses to probe by slightly perturbing the values of $B_j(Q)$. This simple idea significantly reduces the number of tables needed with LSH, and we can work with only a few hash tables. See~\\cite{lv2007multi} for more details. \n\n\\section{Proposed Methodology}\n\\subsection{Intuition}\nThe Winner-Take-All technique~\\cite{makhzani2013k, makhzani2015winner} shows that we should only consider a few nodes (top k\\%) with large activations for a given input and ignore the rest while computing the feed-forward pass. Furthermore, the back-propagation updates should only be performed on those few chosen weights. Let us denote by $AS$ (Active Set) the top k\\% nodes having significant activations. Let $n$ denote the total number of nodes in the neural network, with $AS \\ll n$. For one gradient update, Winner-Take-All first needs to perform $O(n \\log n)$ work to compute $AS$, followed by updating $O(AS)$ weights. $O(n \\log n)$ seems quite wasteful. In particular, given the input, finding the active set $AS$ is a search problem, which can be solved efficiently using smart data structures. Furthermore, if the data structure is dynamic and efficient, then the gradient updates will also be efficient. 
\n\nFor a node $i$ with weight $w_i$ and input $x$, its activation is a monotonic function of the inner product $w_i^Tx$. Thus, given $x$, selecting nodes with large activations (the set $AS$) is equivalent to searching $w_i$, from a collection of weight vectors, have large inner products with $x$. Equivalently from query processing perspective, if we treat input $x$ as a query, then the search problem of selecting top k\\% nodes can be done efficiently, in sub-linear time in the number of nodes, using the recent advances in maximum inner product search \\cite{shrivastava2014asymmetric}. Our proposal is to create hash tables with indexes generated by asymmetric locality sensitive hash functions tailored for inner products. With such hash tables, given the query input $x$, we can very efficiently approximate the active set $AS$. \n\nThere is one more implementation challenge. We also have to update the nodes (weights associated with them) in $AS$ during the gradient update. If we can perform these updates in $O(AS)$ instead of $O(n)$, then we save significant computation. Thus, we need a data structure where updates are efficient as well. With years of research in data structures, there are many choices which can make update efficient. We describe our system in details in Section~\\ref{sec:system}. \n\n\\subsection{Equivalence with Adaptive Dropouts}\nIt turns out that using randomized asymmetric locality sensitive hashing \\cite{shrivastava2014asymmetric} for finding nodes with large inner products is formally, in a statistical sense, equivalent to adaptive dropouts with a non-trivial sampling distribution. \n\nAdaptive Dropout technique uses the Bernoulli distribution (Section~\\ref{sec:adaptive_dropout}) to adaptively sample nodes with large activation. In theory, any distribution assigning probabilities in proportion to nodes activation is sufficient. We argue that Bernoulli is a sub-optimal choice. There is another non-intuitive, but a very efficient choice. 
This choice comes from the theory of Locality-Sensitive Hashing (LSH) \\cite{Proc:Indyk_STOC98}, which is primarily considered a black-box technique for fast sub-linear search. Our core idea comes from the observation that, for a given search query $q$, the $(K,L)$ parametrized LSH algorithm \\cite{Report:E2LSH} inherently samples, in sub-linear time, points from the dataset with probability proportional to $1 - (1-p^K)^L$. Here $p$ is the collision probability, a monotonic function of the similarity between the query and the retrieved point. See \\cite{Report:E2LSH} for details. The expression holds for any $K$ and $L$. $K$ controls the sparsity of buckets, and with a reasonable choice of $K$, we can always make buckets arbitrarily sparse. Even if the buckets are heavy, we can always randomly sub-sample within the buckets, which only adds a constant factor to the probability $p$ and keeps the monotonicity intact. \n\n\\begin{theorem} \n {\\bf Hash-Based Sampling} - For a given input $x$ to any layer, any $(K,L)$ parametrized hashing algorithm selects a node $i$, associated with weight vector $w_i$, with probability proportional to $1-(1-p^K)^L$. Here $p = Pr(h(x) = h(w_i))$ is the collision probability of the associated locality sensitive hash function. The function $1- (1-p^K)^L$ is monotonic in $p$.\n\\end{theorem}\n\n\\begin{proof}\nThe probability of finding node $i$ in any single hash table is $p^K$ (all $K$ bits collide, where each bit collides with probability $p$), so the probability of missing node $i$ in all $L$ independent hash tables is $(1-p^K)^L$. If we further choose to sample the buckets randomly by another constant factor $r < 1$, then the effective collision probability becomes $p' = rp$.\n\\end{proof}\n\nWith recent advances in hashing inner products, we can make $p$ a monotonic function of inner products \\cite{shrivastava2014asymmetric, shrivastava2014improved, shrivastava2015asymmetric, shrivastava2015probabilistic}. 
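The retrieval probability in the theorem is easy to check numerically. The sketch below (with `retrieval_prob` as an illustrative name of our own, using the $K=6$, $L=5$ setting from our experiments) verifies that $1-(1-p^K)^L$ is monotonic in $p$:

```python
def retrieval_prob(p, K=6, L=5):
    # Probability that a (K, L)-parametrized LSH scheme retrieves a point
    # whose per-table collision probability with the query is p.
    return 1.0 - (1.0 - p ** K) ** L

# Higher similarity -> higher collision probability p -> higher chance
# of landing in the sampled active set, for any fixed K and L.
probs = [retrieval_prob(i / 100.0) for i in range(101)]
assert all(a <= b for a, b in zip(probs, probs[1:]))  # monotone in p
assert retrieval_prob(0.0) == 0.0 and retrieval_prob(1.0) == 1.0
```
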
Translating into deep network notation, given the activation vector of the previous layer $a^{l-1}$ as the query and the layer's weights $W^{l}$ as the search space, we can sample with probability proportional to $$g(W^{l}) = 1-(1-f(W^{l} a^{l-1})^K)^L,$$ which is a monotonic function of node activation. Thus, we can naturally integrate adaptive dropouts with hashing. Overall, we are simply choosing a specific form of adaptive dropouts, one which leads to efficient sampling in time sub-linear in the number of neurons (or nodes). \n\n\\begin{corollary}\n{\\bf Adaptive Dropouts are Efficient} - There exists an efficient family of sampling distributions for adaptive dropouts, where the sampling and updating cost is sub-linear in the number of nodes. In particular, we can construct an efficient sampling scheme such that for any two nodes $i$ and $j$, in any given layer $l$, the probability of choosing node $i$ is greater than the probability of choosing node $j$ if and only if the current activation of node $i$ is higher than that of $j$.\n\\end{corollary}\n\n\\begin{proof}\nSince asymmetric LSH functions \\cite{shrivastava2014asymmetric, shrivastava2014improved, shrivastava2015asymmetric} have a collision probability $p$ that is a monotonic function of the inner product, it is also a monotonic function of the neuron's activation. All common activation functions, including sigmoid \\cite{funahashi1989approximate}, tanh \\cite{lecun2012efficient}, and ReLU \\cite{glorot2011deep}, are monotonic functions of the inner product. Composing monotonic functions retains monotonicity. Therefore, the corollary follows because $1-(1-p^K)^L$ is monotonic with respect to $p$. 
In addition, the monotonicity goes in both directions, as the inverse function exists and is also monotonic.\n\\end{proof}\n\n\\begin{figure}[ht]\n\\mbox{\n \\includegraphics[width=.50\\textwidth]{algorithm}} \\vspace{-0.3in}\n\\caption{{\\bf A visual representation of a neural network using randomized hashing.}\n(1) Build the hash tables by hashing the weights of each hidden layer (a one-time operation).\n(2) Hash the layer's input using the layer's randomized hash function.\n(3) Query the layer's hash table(s) for the active set $AS$.\n(4) Only perform forward and back-propagation on the neurons in the active set. The solid-colored neurons in the hidden layer are the active neurons.\n(5) Update the $AS$ weights and the hash tables by rehashing the updated weights to new hash locations. }\n \\label{fig:algorithm}\n\\end{figure}\n\n\\subsection{Hashing Based Back-Propagation}\n\\label{sec:system}\nAs argued, our approach is simple. We use randomized hash functions to build hash tables from the nodes in each hidden layer. We sample nodes from the hash table, with probability proportional to each node's activation, in sub-linear time. We then perform forward and back propagation only on the active nodes retrieved from the hash tables. We later update the hash tables to reorganize only the modified weights. \n\nFigure \\ref{fig:algorithm} illustrates an example neural network with two hidden layers, five input nodes, and two output nodes. Hash tables are built for each hidden layer, where the weighted connections for each node are hashed to place the node in its corresponding bucket. Creating hash tables to store all the initial parameters is a one-time operation whose cost is linear in the number of parameters. \n\nDuring a forward propagation pass, the input to the hidden layer is hashed with the same hash function used to build the hidden layer's hash table. The input's fingerprint is used to collect the active set $AS$ of nodes from the hash table. 
The hash table only contains pointers to the nodes in the hidden layer. Then, forward propagation is performed only on the nodes in the active set $AS$. The rest of the hidden layer's nodes, which are not part of the active set, are ignored and are automatically switched off without even touching them. On the back propagation pass, the active set is reused to determine the gradient and to update the parameters. We rehash the nodes in each hidden layer to account for the changes in the network during training. \n\nIn detail, the hash function for each hidden layer is composed of $K$ randomized hash functions. We use the sign of an asymmetrically transformed random projection (see~\\cite{shrivastava2014improved} for details) to generate the $K$ bits for each data vector. The $K$ bits are stored together efficiently as an integer, forming a fingerprint for the data vector. We create a hash table of $2^K$ buckets, but we only keep the nonempty buckets to minimize the memory footprint (analogous to hash-maps in Java). Each bucket stores pointers to the nodes whose fingerprints match the bucket's id, instead of the nodes themselves. In Figure~\\ref{fig:algorithm}, we show only one hash table, which in practice is likely to miss valuable nodes. In our implementation, we generate $L$ hash tables for each hidden layer, and each hash table has an independent set of $K$ random projections. Our final active set from these $L$ hash tables is the union of the buckets selected from each hash table. Effectively, we have two tunable parameters, $K$ bits and $L$ tables, to control the size and the quality of the active sets. The $K$ bits increase the precision of the fingerprint, meaning the nodes in a bucket are more likely to generate higher activation values for a given input. 
The $L$ tables increase the probability of finding useful nodes that are missed because of the randomness in the hash functions.\n\n\\begin{algorithm}[tb]\n \\caption{Deep Learning with Randomized Hashing}\n \\label{alg:algorithm}\n\\begin{algorithmic}\n \\STATE \/\/ HF$_l$ - Layer $l$ Hash Function\n \\STATE \/\/ HT$_l$ - Layer $l$ Hash Tables\n \\STATE \/\/ AS$_l$ - Layer $l$ Active Set\n \\STATE \/\/ $\\theta^{l}_{AS} \\in W^{l}_{AS}$, $b^{l}_{AS}$ - Layer $l$ Active Set parameters\n \\STATE Randomly initialize parameters $W^l$, $b^l$ for each layer $l$\n \\STATE HF$_l$ = constructHashFunction($K$, $L$)\n \\STATE HT$_l$ = constructHashTable($W^l$, HF$_l$)\n \\WHILE {not stopping criteria}\n \\FOR {{\\bf each} training epoch}\n \\STATE \/\/ Forward Propagation\n \\FOR {layer $l = 1 \\dots N$}\n \\STATE fingerprint$_l$ = HF$_l(a^l)$\n \\STATE AS$_l$ = collectActiveSet(HT$_l$, fingerprint$_l$)\n \\FOR {{\\bf each} node $i$ in AS$_l$}\n \\STATE $a^{l+1}_{i} = f(W^{l}_{i}a^{l} + b^{l}_{i})$\n \\ENDFOR\n \\ENDFOR\n \\STATE \/\/ Backpropagation\n \\FOR {layer $l = 1 \\dots N$}\n \\STATE $\\Delta J(\\theta^{l}_{AS})$ = computeGradient($\\theta^{l}_{AS}$, AS$_l$)\n \\STATE $\\theta^{l}_{AS}$ = updateParameters($\\theta^{l}_{AS}$, $\\Delta J(\\theta^{l}_{AS})$)\n \\ENDFOR\n \\STATE {\\bf for each} layer $l$: updateHashTables(HF$_l$, HT$_l$, $\\theta^{l}$)\n \\ENDFOR\n \\ENDWHILE\n\\end{algorithmic}\n\\end{algorithm}\n\\subsection{Efficient Query and Updates}\n\nOur algorithm critically depends on the efficiency of the query and update procedures. The hash table is one of the most efficient data structures, so this is not a difficult challenge. Querying a single hash table is a constant time operation when the bucket size is small. The bucket size can be controlled by $K$ and by sub-sampling the buckets. There is always a possibility of crowded buckets due to bad randomness or because of too many near-duplicates in the data. 
These crowded buckets are not very informative and can be safely ignored or sub-sampled. \n\nWe never need to store weights in the hash tables. Instead, we only store references to the weight vectors, which makes the hash tables very lightweight. Further, we reduce the number of hash tables required by using multi-probe LSH \\cite{lv2007multi}.\n\nUpdating a weight vector $w_i$ associated with a node $i$ can require changing the location of $w_i$ in the hash table, if the update leads to a change in the hash value. With the hash table data structure, the update is not an issue when the buckets are sparse. Updating a weight only requires one insertion and one deletion in the respective buckets. There are plenty of choices of efficient insert-and-delete data structures for buckets. In theory, even if buckets are not sparse, we can use a red-black tree \\cite{cormen2009introduction} to ensure that both the insertion and deletion costs are logarithmic in the size of the bucket. However, using simple arrays when the buckets are sparse is preferable because they are easy to parallelize. With arrays, insertion is $O(1)$ and deletion is $O(b)$, where $b$ is the size of the bucket. Controlling the size of $b$ can be easily achieved by sub-sampling the bucket. Since we create $L$ independent hash tables, even for reasonably large $L$, the process is quite robust to cheap approximations. We further reduce the size of $L$ using multi-probing \\cite{lv2007multi}. Multi-probing with binary hash functions is quite straightforward: we just have to randomly flip a few bits of the $K$-bit hash to generate more addresses. \n\nThere are plenty of other choices that can make hashing significantly faster, such as cheap re-ranking~\\cite{shrivastava2012fast}. See~\\cite{Shrivastava:SOCC_16}, where the authors show around a 500x reduction in computations for image search by incorporating different algorithmic and systems choices. 
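The bucket bookkeeping described above is simple to sketch (a hypothetical fragment; the names `move_node` and `multiprobe_addresses` are ours, and plain Python lists stand in for the pointer arrays in the buckets):

```python
import random
from collections import defaultdict

# K-bit fingerprint -> list of node ids; buckets hold pointers only,
# never the weight vectors themselves.
table = defaultdict(list)

def move_node(table, node_id, old_fp, new_fp):
    # After an SGD update changes a node's hash value, relocate its
    # pointer: one O(b) deletion from the old bucket, one O(1) append.
    if new_fp != old_fp:
        table[old_fp].remove(node_id)
        table[new_fp].append(node_id)

def multiprobe_addresses(fingerprint, K, extra_probes):
    # Binary multi-probe: flip random bits of the K-bit fingerprint
    # to generate additional bucket addresses to query.
    addrs = {fingerprint}
    while len(addrs) < extra_probes + 1:
        addrs.add(fingerprint ^ (1 << random.randrange(K)))
    return addrs

table[0b010110].append(7)                 # node 7 hashed to 010110
move_node(table, 7, 0b010110, 0b010111)   # weight update changed one bit
probes = multiprobe_addresses(0b010111, K=6, extra_probes=3)
```

With $L$ such tables per layer, a query hashes the input once per table, probes a handful of addresses, and unions the resulting pointer lists into the active set.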
\n\n\\subsection{Overall Cost}\nIn every layer, during every Stochastic Gradient Descent (SGD) update, we compute $K \\times L$ hashes of the input, probe around $10L$ buckets, and take their union. In our experiments, we use $K=6$ and $L=5$, i.e.\\ only 30 hash computations. There are many techniques to further reduce this hashing cost~\\cite{achlioptas2001database,li2006very,Proc:OneHashLSH_ICML14,Proc:Shrivastava_UAI14,Shrivastava:NIPS_16}. We probe around 10 buckets in each hash table to obtain 5\\% of the nodes as active, taking the union of 50 buckets in total. The process gives us the active set $AS$ of nodes, which is usually significantly smaller than the total number of nodes $n$. During SGD, we update all of the weights in $AS$ along with the hash tables. Overall, the cost is of the order of the number of nodes in $AS$, which our experiments show can go as low as $5\\%$ of $n$. For 1000 nodes in a layer, we only have to update around 10-50 nodes. The bottleneck cost is computing the activations (the actual inner products) of the nodes in $AS$, which is around $1000$ floating point multiplications per node. The benefits will be even more significant for larger networks. \n\n\\subsection{Bonus: Sparse Updates can be Parallelized}\nAs mentioned, we only need to update the set of weights associated with nodes in the active set $AS$. If $AS$ is very sparse, then it is unlikely that multiple SGD updates will overwrite the same set of weights. Intuitively, assuming enough randomness in the data vectors, we will have a small active set $AS$ chosen randomly from among all the nodes. It is unlikely that the active sets of multiple randomly selected data vectors will have significant overlaps. Small overlaps imply fewer conflicts while updating. Fewer conflicts while updating is ideal ground where SGD updates can be parallelized without any overhead. 
In fact, it has been shown both theoretically and experimentally that random and sparse SGD updates can be parallelized without compromising convergence~\\cite{recht2011hogwild}. Parallelizing SGD updates is one of the pressing challenges in large-scale deep learning systems~\\cite{chen2016revisiting}. Vanilla SGD for deep networks is sequential, and parallel updates lead to poor convergence due to significant overwrites. Our experimental results, in Section~\\ref{sec:ASGD}, support these known phenomena. Exploiting this unique property, we show near-linear scaling of our algorithm with an increasing number of processors without hurting convergence. \n\n\\section{Evaluations}\nWe design experiments to answer the following six important questions: \n\\begin{enumerate}\n \\item How much computation can we reduce without affecting the vanilla network's accuracy? \n \\item How effective is adaptive sampling compared to a random sampling of nodes?\n \\item How does approximate hashing compare with the expensive but exact approaches of adaptive dropouts~\\cite{ba2013adaptive} and Winner-Take-All~\\cite{makhzani2013k, makhzani2015winner} in terms of accuracy? In particular, is hashing working as intended? \n \\item How is the network's convergence affected by an increasing number of processors as we perform parallel SGD updates using our approach?\n \\item Is a sparse update necessary? 
Is there any deterioration in performance if we perform vanilla dense updates in parallel?\n \\item What is the wall-clock decrease in training time as a function of an increasing number of processors?\n\\end{enumerate}\n\nFor evaluation, we implemented the following five approaches to compare and contrast against.\n\\begin{itemize}\n \\item Standard (NN): A fully-connected neural network\n \\item Dropout (VD) \\cite{srivastava2014dropout}: A neural network that disables the nodes of a hidden layer using a fixed probability threshold\n \\item Adaptive Dropout (AD) \\cite{ba2013adaptive}: A neural network that disables the nodes of a hidden layer using a probability threshold based on the inner product of the node's weights and the input\n \\item Winner Take All (WTA) \\cite{makhzani2013k, makhzani2015winner}: A neural network that sorts the activations of a hidden layer and selects the k\\% largest activations\n \\item Randomized Hashing (LSH): A neural network that selects nodes using randomized hashing. A hard threshold limits the active node set to k\\% sparsity\n\\end{itemize}\n\n\\subsection{Datasets}\nTo test our neural network implementation, we used four publicly available datasets: MNIST8M \\cite{loosli-canu-bottou-2006}, NORB \\cite{lecun2004learning}, CONVEX, and RECTANGLES \\cite{larochelle2007empirical}. The statistics of these datasets are summarized in Table~\\ref{fig:dataset_size}. The MNIST8M, CONVEX, and RECTANGLES datasets contain 28x28 images, forming 784-dimensional feature vectors. The MNIST8M task is to classify each handwritten digit in the image correctly. It is derived by applying random deformations and translations to the MNIST dataset. The CONVEX dataset's objective is to identify whether a single convex region exists in the image. The goal for the RECTANGLES dataset is to discriminate between tall and wide rectangles overlaid on a black and white background image. 
The NORB dataset \\cite{lecun2004learning} contains images of 50 toys belonging to 5 categories under various lighting conditions and camera angles. Each data point is a 96x96 stereo image pair. We resize the images from 96x96 to 32x32 and concatenate the image pairs together to form a 2048-dimensional feature vector.\n\n\\begin{figure}\n \\begin{center}\n \\begin{tabular}{ |c|c|c| } \n \\hline\n Dataset & Train Size & Test Size \\\\ \n \\hline\n MNIST8M & 8,100,000 & 10,000 \\\\ \n NORB & 24,300 & 24,300 \\\\\n Convex & 8,000 & 50,000 \\\\\n Rectangles & 12,000 & 50,000 \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n \\caption{Datasets with training and test sizes}\n \\label{fig:dataset_size}\n\\end{figure}\n\n\\begin{figure*}[ht]\n\\begin{center}\n\\mbox{\\hspace{-0.185in}\n\\includegraphics[width=1.9in]{MNIST8M_784_1000_1000_10} \\hspace{-0.15in}\n\\includegraphics[width=1.9in]{NORB_2048_1000_1000_5} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Convex_784_1000_1000_2} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Rectangles_784_1000_1000_2} \\hspace{-0.19in}}\n\\mbox{\\hspace{-0.185in}\n\\includegraphics[width=1.9in]{MNIST8M_784_1000_1000_1000_10} \\hspace{-0.15in}\n\\includegraphics[width=1.9in]{NORB_2048_1000_1000_1000_5} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Convex_784_1000_1000_1000_2} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Rectangles_784_1000_1000_1000_2} \\hspace{-0.19in}}\n\\end{center}\n\\caption{Classification accuracy under different levels of active nodes with networks on the MNIST (1st), NORB (2nd), Convex (3rd) and Rectangles (4th) datasets. The standard neural network (dashed black line) is our baseline accuracy. We can clearly see that adaptive sampling with hashing (LSH) is significantly more effective than random sampling (VD). {\\bf Top Panels:} 2 hidden Layers. {\\bf 
Bottom Panels:} 3 hidden Layers}\n \\label{LSH_VD_STD}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n\\begin{center}\n\\mbox{\\hspace{-0.185in}\n\\includegraphics[width=1.9in]{MNIST8M_784_1000_1000_10_AD_WTA} \\hspace{-0.15in}\n\\includegraphics[width=1.9in]{NORB_2048_1000_1000_5_AD_WTA} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Convex_784_1000_1000_2_AD_WTA} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Rectangles_784_1000_1000_2_AD_WTA} \\hspace{-0.19in}}\n\\mbox{\\hspace{-0.185in}\n\\includegraphics[width=1.9in]{MNIST8M_784_1000_1000_1000_10_AD_WTA} \\hspace{-0.15in}\n\\includegraphics[width=1.9in]{NORB_2048_1000_1000_1000_5_AD_WTA} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Convex_784_1000_1000_1000_2_AD_WTA} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Rectangles_784_1000_1000_1000_2_AD_WTA} \\hspace{-0.19in}}\n\\end{center}\n\\caption{Classification accuracy under different levels of active nodes with networks on the MNIST (1st), NORB (2nd), Convex (3rd) and Rectangles (4th) datasets. The standard neural network (dashed black line) is our baseline accuracy. WTA and AD (dashed red + yellow lines) perform the same amount of computation as the standard neural network. Those techniques select nodes with high activations, after full computations, to achieve better accuracy. We compare our LSH approach to determine whether our randomized algorithm achieves comparable performance while reducing the total amount of computation. We do not have data for adaptive dropout at the 5\\% and 10\\% computation levels because those models diverged when the number of active nodes dropped below 25\\%. {\\bf Top Panels:} 2 hidden Layers. {\\bf Bottom Panels:} 3 hidden Layers }\n \\label{LSH_AD_WTA_STD}\n\\end{figure*}\n\n\\subsection{Sustainability}\n\\label{sec:dropout}\n\\subsubsection{Experimental Setting}\nAll of the experiments for our approach and the other techniques were run on a 6-core Intel i7-3930K machine with 16 GB of memory. 
Our approach uses stochastic gradient descent with Momentum and Adagrad \\cite{dean2012large}. Since our approach uniquely selects an active set of nodes for each hidden layer, we focused on a CPU-based approach to simplify combining randomized hashing with neural networks. The ReLU activation function was used for all methods. The learning rate for each approach was set using a standard grid search and ranged between $10^{-2}$ and $10^{-4}$. The parameters for the randomized hash tables were $K = 6$ bits and $L = 5$ tables, with multi-probe LSH~\\cite{lv2007multi} creating a series of probes in each hash table. We stop early if we find that we have sampled enough nodes before exhausting all buckets. Since we evaluate all levels of selection, we increase the number of probes into the buckets in order to increase the percentage of nodes retrieved. For the experiments, we use a fixed threshold to cap the number of active nodes selected from the hash tables to guarantee that the amount of computation stays within a certain level. \n\n\\subsubsection{Effect of computation levels}\nFigures \\ref{LSH_VD_STD} and \\ref{LSH_AD_WTA_STD} show the accuracy of each method on neural networks with 2 (Top panel) and 3 (Bottom panel) hidden layers, with the percentage of active nodes ranging over [0.05, 0.10, 0.25, 0.5, 0.75, 0.9]. The standard neural network is our baseline in these experiments and is marked with a dashed black line. Each hidden layer contains 1000 nodes. The x-axis represents the average percentage of active nodes per epoch selected by each technique. Our approach only performs the forward and back propagation steps on the nodes selected in each hidden layer. The other baseline techniques, except for Dropout (VD), first perform the forward propagation step for each node to compute all the activations, before setting node activations to zero based on the corresponding algorithm. 
Thus, only VD and our proposal require fewer multiplications than the standard neural network training procedure. \n\nFigures~\\ref{LSH_VD_STD} and \\ref{LSH_AD_WTA_STD} summarize the accuracy of the different approaches at various computation levels. From the figures, we conclude the following. \n\\begin{itemize}\n \\item The plots are consistent across all datasets and architectures with different depths.\n \\item Our method (LSH) gives the best overall accuracy with the fewest number of active nodes. The fact that our approximate method is even slightly better than WTA and adaptive dropouts is not surprising, as it has long been known that a small amount of random noise leads to better generalization. For example, see~\\cite{srivastava2014dropout}.\n \\item As the number of active nodes decreases from 90\\% to 5\\%, LSH experiences the smallest drop in performance and less performance volatility.\n \\item VD experiences the greatest drop in performance when reducing the number of active nodes from 50\\% to 5\\%.\n \\item As the number of hidden layers in the network increases, the performance drop for VD becomes steeper.\n \\item WTA performed better than VD when the percentage of active nodes is less than 50\\%.\n \\item The number of multiplications is a constant multiple of the percentage of active nodes in a hidden layer. For Adaptive Dropout, the parameters $\\alpha$ and $\\beta$ determine how many nodes are kept active. We used $\\alpha = 1.0$ and $\\beta \\in \\{-1.5, -1.0, 0, 1.0, 3.5\\}$ for our experiments. The model diverged when the number of active nodes dropped below 25\\%, so we do not have data for Adaptive Dropout at the 5\\% and 10\\% levels.\n \\item The performance of each method stabilizes as the computation level approaches 100\\%.\n\\end{itemize}\n\nLowering the computational cost of running neural networks by running fewer operations reduces the energy consumption and heat produced by the processor. 
However, large neural networks provide better accuracy, and arbitrarily reducing the amount of computation hurts performance. Our experiments show that our method (LSH) performs well at low computation levels, providing the best of both worlds - high performance and low processor computation. This approach is ideal for mobile phones, which have a thermal design power (TDP) of 3-4 Watts, because reducing the processor's load directly translates into longer battery life.\n\n\\begin{figure*}[ht]\n\\begin{center}\n\\mbox{\\hspace{-0.185in}\n\\includegraphics[width=1.9in]{MNIST8M_784_1000_1000_10_epoch} \\hspace{-0.15in}\n\\includegraphics[width=1.9in]{NORB_2048_1000_1000_1000_5_epoch} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Convex_784_1000_1000_1000_2_epoch} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Rectangles_784_1000_1000_1000_2_epoch} \\hspace{-0.19in}\n}\n\\end{center}\n\\caption{The convergence of our randomized hashing approach (LSH-5\\%) over several training epochs using asynchronous stochastic gradient descent with 1, 8, and 56 threads. We used a (3 hidden layer) network on the MNIST (1st), NORB (2nd), Convex (3rd) and Rectangles (4th) datasets. Only 5\\% of the standard network's computation was performed in this experiment.}\n \\label{asgd_accuracy}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n\\begin{center}\n\\mbox{\\hspace{-0.185in}\n\\includegraphics[width=1.9in]{MNIST8M_784_1000_1000_1000_10_compare} \\hspace{-0.15in}\n\\includegraphics[width=1.9in]{NORB_2048_1000_1000_1000_5_compare} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Convex_784_1000_1000_1000_2_compare} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Rectangles_784_1000_1000_1000_2_compare} \\hspace{-0.19in}\n}\n\\end{center}\n\\caption{Performance comparison between our randomized hashing approach and a standard network using asynchronous stochastic gradient descent on an Intel Xeon E5-2697 machine with 56 cores. 
We used (3 hidden layer) networks on MNIST (1st), NORB (2nd), Convex (3rd) and Rectangles (4th). All networks were initialized with the same settings for this experiment.}\n \\label{asgd_lsh_std}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n\\begin{center}\n\\mbox{\\hspace{-0.185in}\n\\includegraphics[width=1.9in]{MNIST8M_784_1000_1000_1000_10_wct_lg_xy} \\hspace{-0.15in}\n\\includegraphics[width=1.9in]{NORB_2048_1000_1000_1000_5_wct} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Convex_784_1000_1000_1000_2_wct} \\hspace{-0.19in}\n\\includegraphics[width=1.9in]{Rectangles_784_1000_1000_1000_2_wct} \\hspace{-0.19in}\n}\n\\end{center}\n\\caption{The wall-clock time per epoch for our approach (LSH-5\\%) gained by using asynchronous stochastic gradient descent. We used a (3 hidden layer) network on the MNIST (1st), NORB (2nd), Convex (3rd) and Rectangles (4th). We see smaller performance gains with the Convex and Rectangles datasets because there are not enough training examples to use all of the processors effectively. Only 5\\% of the standard network's computation was performed in this experiment.}\n\\label{asgd_performance}\n\\end{figure*}\n\n\\subsection{Scalability}\n\\label{sec:ASGD}\n\\subsubsection{Experimental Setting}\nWe now show experiments to demonstrate the scalability of our approach to large-scale, distributed computing environments. Specifically, we are testing whether our approach maintains accuracy and improves training time as we increase the number of processors. We use asynchronous stochastic gradient descent with Momentum and Adagrad \\cite{dean2012large, recht2011hogwild}. Our implementation runs the same model, with similar initializations, on multiple training examples concurrently. The gradient is applied without synchronization or locks to maximize performance. We run all of the experiments on an Intel Xeon E5-2697 machine with 56 cores and 256 GB of memory. 
The ReLU activation was used for all models, and the learning rate ranged between $10^{-2}$ and $10^{-3}$.\n\n\\subsubsection{Results with different numbers of processors}\nFigure \\ref{asgd_accuracy} shows how our method performs with asynchronous stochastic gradient descent (ASGD) using only 5\\% of the neurons of a full-sized neural network. The neural network has three hidden layers, and each hidden layer contains 1000 neurons. The x-axis represents the number of epochs completed, and the y-axis shows the test accuracy for the given epoch. We compare how our model converges with multiple processors working concurrently. Since our ASGD implementation does not use locks, it depends on the sparsity of the gradient to ensure the model converges and performs well \\cite{recht2011hogwild}. From our experiments, we see that our method converges at a similar rate and obtains the same accuracy regardless of the number of processors running ASGD.\n\nFigure \\ref{asgd_performance} illustrates how our method scales with multiple processors. The inherent sparsity of our randomized hashing approach reduces the number of simultaneous updates and allows for more asynchronous models without any performance penalty. We show the corresponding drop in wall-clock computation time per epoch while adding more processors. We achieve roughly a 31x speedup while running ASGD with 56 threads.\n\n\\subsubsection{ASGD Performance Comparison with Standard Neural Network}\nFigure \\ref{asgd_lsh_std} compares the performance of our LSH approach against a standard neural network (STD) when running ASGD with 56 cores. We used a standard network with a mini-batch of size 32. We clearly outperform the standard network for all of our experimental datasets. However, since there is a large number of processors applying gradients to the parameters, those gradients are constantly being overwritten, preventing ASGD from converging to an optimal local minimum. 
Our approach produces a sparse gradient that reduces the conflicts between the different processors, while keeping enough valuable nodes for ASGD to progress towards the local minimum efficiently. \n\nFrom Figures \\ref{asgd_accuracy}, \\ref{asgd_lsh_std} and \\ref{asgd_performance}, we conclude the following. \n\\begin{enumerate}\n\\item The gradient updates are quite sparse with 5\\% LSH, and running multiple ASGD updates in parallel does not affect the convergence rate of our hashing based approach. Even when running 56 cores in parallel, the convergence is indistinguishable from the sequential update (1 core) on all four datasets.\n\\item If we instead run vanilla SGD in parallel, then the convergence is affected. The convergence is in general slower compared to the sparse 5\\% LSH. This slow convergence is due to dense updates, which lead to overwrites. Parallelizing dense updates affects the four datasets differently. For the Convex dataset, the convergence is very poor.\n\\item As expected, we obtain a near-perfect decrease in the wall-clock times with an increasing number of processors with LSH-5\\%. Note that if there are too many overwrites, then atomic updates are necessary, which would create additional overhead and hurt the parallelism. Thus, near-perfect scalability also indicates fewer gradient overwrites. \n\\item On the largest dataset - MNIST8M, the running time per epoch for the 1-core implementation is 50,254 seconds. The 56-core implementation runs in 1,637 seconds. Since the convergence is not affected, this amounts to about a 31x speedup in the training process with 56 processors. \n\\item We see that the gains of parallelism become flat with the Convex and Rectangles datasets, especially while using a large number of processors. This poor scaling is because the two datasets have fewer training examples, so there is less parallel work for a large number of processors, especially when working with 56 cores. 
We do not see such behaviors with MNIST8M, which has around 8 million training examples.\n\\end{enumerate}\n\n\\section{Discussions}\nMachine learning with a huge parameter space is becoming a common phenomenon. SGD remains the most promising algorithm for optimization due to its effectiveness and simplicity. An SGD update from a single data point is unlikely to change the entire parameter space significantly. Each SGD iteration is expected to change only a small set (the active set) of parameters, depending on the current sample. This sparse update occurs because there is not enough information in one data point. However, identifying that small set of active parameters is a search problem, which typically requires computation on the order of the number of parameters. We can exploit the rich literature on approximate query processing to find this active set of parameters efficiently. Of course, the approximate active set contains a small amount of random noise, which is often good for generalization. Sparsity and randomness enhance data parallelism because sparse and random SGD updates are unlikely to overwrite each other. \n\nApproximate query processing already sits on decades of research from the systems and database community, which makes the algorithms scalable over a variety of distributed systems. Thus, we can forget about the systems challenges by reformulating the machine learning problem into an approximate query processing problem and leveraging the ideas and implementations from a very rich literature. We have demonstrated concrete evidence of this by showing how deep networks can be scaled up. We believe that such integration of sparsity with approximate query processing, which is already efficient over different systems, is the future of large-scale machine learning.\n\n\\section{Conclusion and Future Work}\nOur randomized hashing approach is designed to minimize the amount of computation necessary to train a neural network effectively. 
Training neural networks requires a significant amount of computational resources and time, limiting the scalability of neural networks and their sustainability on embedded, low-powered devices. Our results show that our approach performs better than the other methods while executing only 5\\% of the nodes of a standard neural network. The implication is that neural networks using our approach can run on mobile devices with longer battery life while maintaining performance. We also show that, due to its inherent sparsity, our approach scales near-linearly with asynchronous stochastic gradient descent running on multiple processors.\n\nOur future work is to optimize our approach for different mobile processors. There are many options, including the Nvidia Tegra and Qualcomm Snapdragon platforms. Recently, Movidius released plans for a specialized deep learning processor called the Myriad 2 VPU (visual processing unit) \\cite{barry2015always}. The processor combines the best aspects of GPUs, DSPs, and CPUs to produce 150 GFLOPS while using about 1 W of power. We plan to leverage the different platform architectures to achieve greater efficiency with our randomized hashing approach.\n\n\\section{Acknowledgments}\nThe work of Ryan Spring was supported by NSF Award 1547433. 
This work was supported by Rice Faculty Initiative Award 2016.\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzcgpa b/data_all_eng_slimpj/shuffled/split2/finalzzcgpa new file mode 100644 index 0000000000000000000000000000000000000000..5a13bd11cac9c63bb75851d931e2798c85577d5d --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzcgpa @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nIn text games, which are also known as Interactive Fiction (IF) \\cite{Montfort:2005}, the player is given a description of the game state in natural language and then chooses one of the actions which are also given by textual descriptions.\nThe executed action results in a change of the game state, producing new state description and waiting for the player's input again.\nThis process repeats until the game is over.\n\nThe engine that is responsible for running IF games is called simply a \\emph{game simulator} and it provides an interface for player (human or bot) interaction.\n\nWhile the output of the game simulator is almost always a text description, the form of the input that the game expects --- the game-player interface --- does vary.\nThis is one of the most common criteria for classifying IF games (for examples, see Figure \\ref{fig:if-types}):\n\n\\begin{itemize}\n\n\\item \\emph{parser-based}, where the player types in any text input freely,\n\n\\item \\emph{choice-based}, where multiple actions to choose from are typically available, \\emph{in~addition} to the state description,\n\n\\item \\emph{hypertext-based}, where multiple actions are present as clickable links \\emph{inside} the state description.\n\\end{itemize}\n\n\\begin{figure*}[!ht]\n\\centering\n\\includegraphics[width=\\textwidth]{if-types.png}\n\n\\caption{Parser, choice and hypertext-based games \\cite{DBLP:journals\/corr\/HeCHGLDO15}.}\n\\label{fig:if-types}\n\\end{figure*}\n\nThe largest source of information 
on IF games and their variants is the Interactive Fiction Database\\footnote{IFDB: \\url{http:\/\/ifdb.tads.org\/}\\label{ftn:ifdb}.}, which features useful filters, tags and lists for finding specific kinds of games.\nFor our purposes, though, the differences in game genre or storytelling elements largely do not matter.\n\nText games come in virtually all genres, often with different minigames or puzzles incorporated into them.\nWe mostly deal with games without any special parts that do not fit into the simple input-output loop, or we omit such parts from the selected games.\n\nIn this work, we only deal with \\emph{choice-based} and \\emph{hypertext-based} games, but both variants are referred to as \\emph{choice-based}, as the hyperlinks are simply considered additional choices.\nIn fact, all parser-based games with a finite number of actions can be converted to choice-based games by simply enumerating all the actions accepted by the interpreter at different time steps.\nThis trick was employed in \\cite{DBLP:journals\/corr\/NarasimhanKB15}, where a subset of all possible action combinations was presented as choices to the agent.\n\n\nMore formally speaking, text games are sequential decision-making tasks with both state and action spaces given in natural language.\nIn fact, a text game can be seen as a dialogue between the game and the player, and the task is to find an optimal policy (a mapping from states to actions) that would maximise the player's total reward.\nConsequently, the task of learning to play text games is very similar to learning how to correctly answer in a dialogue, making this task relevant for real-world applications.\n\nUsually, text games have multiple endings and countless paths that the player can take.\nIt is often clear that some of the paths or endings are better than others, and different rewards can be assigned to them.\nThe availability of these feedback signals makes text games an interesting platform for using 
reinforcement learning (RL), where one can make use of the feedback provided by the game in order to try and infer the optimal strategy of responding in the given game, and potentially, in a dialogue. \n\nOur aim is to:\n\n\\begin{enumerate}\n\\item Provide a platform which would enable researchers to easily add new text games. Importantly, this library unifies the interface of different types of text games, enabling researchers to conduct large-scale experiments.\n\\item Design a minimalistic agent capable of playing basic text games well. This agent will also implement the interface of the text-game library and could be used as a starting point for more complex models in the future.\n\\end{enumerate}\n\n\n\n\\section{Related Work}\n\nRL algorithms have a long history of being successfully applied to solving games, for example to Backgammon \\cite{DBLP:journals\/cacm\/Tesauro95} or the more recent general AlphaZero algorithm that achieved superhuman performance in Chess and Go \\cite{DBLP:journals\/corr\/abs-1712-01815}.\n\nThere have also been successful attempts \\cite{DBLP:journals\/corr\/NarasimhanKB15}, \\cite{DBLP:journals\/corr\/HeCHGLDO15} at playing IF games using RL agents.\nHowever, the selected games were quite limited in terms of their difficulty and even more importantly, the resulting models had mostly not been tested on games that had not been seen during learning.\n\nWhile being able to learn to play a text game is a success in itself, the resulting model must generalise to previously unseen data in order to be useful.\nIn other words, we can merely hypothesise that a successful IF game agent can at least partly understand the underlying game state and potentially transfer the knowledge to other, previously unseen, games, or even natural dialogues.\nAnd for the most part, it remains to be seen how the RL agents presented in \\cite{DBLP:journals\/corr\/NarasimhanKB15} and \\cite{DBLP:journals\/corr\/HeCHGLDO15} perform in terms of generalisation in 
the domain of IF games.\n\nPrior relevant work includes learning to play the game Civilization by leveraging text manuals \\cite{DBLP:journals\/corr\/BranavanSB14} and achieving human-like performance on Atari video games \\cite{DBLP:journals\/nature\/MnihKSRVBGRFOPB15}.\n\nAs our agent is partly inspired by the architectures employed in \\cite{DBLP:journals\/corr\/NarasimhanKB15} and \\cite{DBLP:journals\/corr\/HeCHGLDO15}, these models are next described in more detail.\n\nThe LSTM-DQN \\cite{DBLP:journals\/corr\/NarasimhanKB15} agent uses an LSTM network \\cite{hochreiter1991untersuchungen} for representation generation, followed by a variant of a Deep Q-Network (DQN) \\cite{DBLP:journals\/nature\/MnihKSRVBGRFOPB15} used for scoring the generated state and action vectors. The underlying task of playing parser-based games is effectively reduced to playing choice-based games by presenting a subset of all possible verb-object tuples as available actions to the agent.\n\nIn the framework, a single action consists of a verb and an object (e.g. \\emph{eat apple}) and the model computes Q-values for objects --- $Q(s, o)$ --- and actions --- $Q(s, a)$ --- separately. The final Q-value is obtained by averaging these two values.\n\nThe Deep Reinforcement Relevance Network (DRRN) \\cite{DBLP:journals\/corr\/HeCHGLDO15} was introduced for playing hypertext-based games.\nThe agent learns to play \\emph{Saving John} and \\emph{Machine of Death} games and used a simple bag-of-words (BOW) representation of the input texts.\n\nThe learning algorithm is also a variant of DQN; DRRN refers to the network that estimates the Q-value, using separate embeddings for states and actions. 
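As an illustrative toy sketch of scoring with separate state and action embeddings (this is not the DRRN implementation; the dimensions, the linear maps, and the bag-of-words inputs are all made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scorer: separate linear maps embed state and action bag-of-words
# vectors into a shared hidden space; all sizes and weights are illustrative.
vocab, hidden = 50, 8
W_s = 0.1 * rng.standard_normal((hidden, vocab))  # state embedding
W_a = 0.1 * rng.standard_normal((hidden, vocab))  # action embedding

def q_values(state_bow, action_bows):
    # Q(s, a_i) is the inner product of the state and action representations.
    h_s = np.tanh(W_s @ state_bow)
    return [float(np.tanh(W_a @ a) @ h_s) for a in action_bows]

state = rng.integers(0, 2, vocab).astype(float)
actions = [rng.integers(0, 2, vocab).astype(float) for _ in range(3)]
qs = q_values(state, actions)
best = int(np.argmax(qs))  # index of the greedy action
```

Scoring every candidate action against the same state representation is what lets such a network handle a variable number of choices per step.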
DRRN then uses a variable number of hidden layers and makes use of softmax action-selection.\nThe final Q-value is obtained by computing an inner product of the inner representation vectors of states and actions.\n\n\n\n\\section{Background}\n\nWe formally state the problem of learning to play the games as solving a Markov decision process.\nThen, we briefly review common machine learning methods that will be employed, most notably neural networks and Q-learning.\n\n\\subsection{Formulating the learning task}\n\\label{sec:task}\n\nA text game is a sequential decision-making task with both input and output spaces given in natural language.\n\n\\begin{definition}[Text game]\n\\label{def:text-game}\nLet us define a text game as a tuple $G = \\langle H, H_t, S, A, \\mathcal{D}, \\mathcal{T}, \\mathcal{R} \\rangle$, where\n\\begin{itemize}\n\\item $H$ is a set of game states, $H_t$ is a set of terminating game states, $H_t \\subseteq H$,\n\\item $S$ is a set of possible state descriptions,\n\\item $A$ is a set of possible action descriptions,\n\\item $\\mathcal{D}$ is a function generating text descriptions, $\\mathcal{D}: H \\to (S \\times 2^A)$,\n\\item $\\mathcal{T}$ is a transition function, $\\mathcal{T}: (H \\times A) \\to H$,\n\\item $\\mathcal{R}$ is a reward function, $\\mathcal{R}: (S \\times A \\times S) \\to \\mathbb{R}$.\n\\end{itemize}\n\\end{definition}\n\n\n\nGenerally speaking, both the transition function $\\mathcal{T}$ and the description function $\\mathcal{D}$ may be stochastic, and the properties of the game and its functions have a great impact on the task difficulty\\footnote{For a more thorough discussion, please refer to \\cite{zelinka}.}.\n\n\nIn particular, if both $\\mathcal{D}$ and $\\mathcal{T}$ are deterministic, the \\emph{whole game} is deterministic.\nConsequently, the problem reduces to a simple graph search that can be solved by standard search techniques, and there is no need --- from the perspective of finding optimal rewards 
--- to employ reinforcement learning methods.\nHowever, from the generalisation perspective, it still might be useful to attempt to learn simple games using RL techniques as the agents could potentially be able to generalise or transfer their knowledge to other, more difficult problems.\n\nWe define the task of learning to play a text game as the process of finding an optimal policy for the respective text game Markov decision process (MDP) \\cite{DBLP:books\/lib\/SuttonB98}, where $\\mathcal{M}~=~\\langle S_\\mathcal{M}, A_\\mathcal{M}, \\mathcal{P}, \\mathcal{R}, \\gamma_\\mathcal{M} \\rangle$.\n\nStates, actions and the reward function $S$, $A$, $\\mathcal{R}$ of the game correspond to the MDP states, actions and the reward function $S_\\mathcal{M}$, $A_\\mathcal{M}$, $\\mathcal{R}$.\nThe MDP probability function $\\mathcal{P}(s' | s, a)$ is realised by the game transition function $\\mathcal{T}$ and is crucially unknown to the agent.\n\nNotice that not all IF games necessarily have the Markov property, i.e. 
in text games, there can be long-term dependencies.\nIn fact, it is even difficult to determine whether a text game is or is not Markovian.\nHowever, even problems with non-Markovian characteristics are commonly represented and modelled as MDPs while still giving good results \\cite{DBLP:books\/lib\/SuttonB98}.\n\nFollowing the common notation, the agent tries to maximise the \\textbf{discounted cumulative reward} $\\sum^{\\infty}_{t=0} {\\gamma^t \\mathcal{R} (s_t, a_t, s_{t+1})}$ while following the policy $\\pi(s)$ by selecting actions $a \\gets \\pi(s)$.\nThe goal is to learn the \\textbf{optimal policy $\\pi^*$} by maximising the following term:\n\n\\begin{equation}\n\\label{eq:optimal-policy}\n\\sum^{\\infty}_{t=0} {\\gamma^t \\mathcal{R} (s_t, a_t, s_{t+1})}\\text{, where } a_t = \\pi^{*}(s_t).\n\\end{equation}\n\nUsing this approach, we have now formally reduced the task of learning to play the text-based games to estimating an optimal policy in a text-game MDP.\nThe Q-learning algorithm and its neural network estimator for solving the problem are described next.\n\n\n\\subsection{Deep Reinforcement Learning}\n\nFor finding the optimal policy in the text-game MDP, we employ Deep Reinforcement Learning (DRL) \\cite{DBLP:journals\/nature\/MnihKSRVBGRFOPB15}. This term is commonly used to describe RL algorithms that use (deep) neural networks as a part of their value function estimation.\n\nReinforcement learning is an area of machine learning methods and problems based on the notion of learning from numerical rewards obtained by an agent through interacting with an environment \\cite{DBLP:books\/lib\/SuttonB98}.\n\nIn any given step, the agent observes a \\emph{state} of the environment and receives a~\\emph{reward signal}.\nBased on the current state and the agent's behaviour function --- the \\emph{policy} --- the~agent chooses an~\\emph{action} to take.\nThe action is then sent to the environment which is updated and the loop repeats. 
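The interaction loop just described, together with the discounted return of Equation \ref{eq:optimal-policy}, can be sketched in a few lines; the toy environment and policy below are made up for illustration:

```python
def run_episode(env_step, policy, s0, gamma=0.95, max_steps=100):
    # Generic agent-environment loop: observe the state, pick an action,
    # receive a reward and next state; accumulate sum_t gamma^t * r_t.
    s, g, discount = s0, 0.0, 1.0
    for _ in range(max_steps):
        a = policy(s)
        s, r, done = env_step(s, a)
        g += discount * r
        discount *= gamma
        if done:
            break
    return g

# Toy deterministic environment: reward 10 upon reaching state 2.
def env_step(s, a):
    s2 = s + 1
    return s2, (10.0 if s2 == 2 else 0.0), s2 == 2

print(run_episode(env_step, policy=lambda s: 0, s0=0, gamma=0.5))  # 0.5 * 10 = 5.0
```

The discount factor makes rewards obtained later in the episode count for less, which is what drives the agent towards short paths to good endings.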
See Figure \\ref{fig:rl-loop} for an illustration of the agent-environment interface.\n\n\\begin{figure}[!ht]\\centering\n\\includegraphics[]{rl-loop.png}\n\\caption{Interaction between the agent and the environment \\cite{DBLP:books\/lib\/SuttonB98}.}\n\\label{fig:rl-loop}\n\\end{figure}\n\nThere are different approaches to learning the optimal policy.\nHere, we focus on Q-learning, a method for model-free control that has been shown to perform well on a variety of game-related tasks \\cite{DBLP:journals\/cacm\/Tesauro95}, \\cite{DBLP:journals\/nature\/MnihKSRVBGRFOPB15}.\n\nQ-learning makes use of the concept of a \\emph{value function}, which evaluates how good a given state under a given policy is --- what reward we can expect to obtain in the long run if we follow the policy from the specific state.\nThe action-value function, which determines the value of taking action $a$ in state $s$ under policy $\\pi$, is defined as $Q^\\pi(s, a) = \\mathbb{E}_{\\pi} \\left[ \\sum_{t=0}^{\\infty} \\gamma^t r_{t+1}\\ \\middle|\\ s, a \\right]$.\n\n\n\nNow the optimal policy, denoted $\\pi^*$, can also be characterised by the optimal action-value function $Q^*(s, a) = \\underset{\\pi}{\\max}\\ Q^\\pi(s,a)$.\n\n\nIn other words, if we know the optimal action-value function $Q^*$, we can obtain the optimal policy $\\pi^*$ by simply choosing the actions with maximum Q-values: $\\pi^{*}(s) = \\underset{a}{\\arg\\max}\\ Q^{*}(s,a)$.\n\n\nTypically, the agent's policy is also $\\epsilon$-greedy with respect to the Q-function, where $0 \\leq \\epsilon \\leq 1$ is a parameter that corresponds to the probability of choosing a random action.\n\nSince Q-learning attempts to find the optimal Q-values which obey the Bellman equation \\cite{bellman2013dynamic}, the update rule for improving the estimate of the Q-function is as follows \\cite{DBLP:books\/lib\/SuttonB98}:\n\\begin{equation} \\label{eq:q}\n\\begin{split}\n Q(s_{t},a_{t}) \\gets Q(s_{t},a_{t}) + 
\\phantom{=}\\phantom{=}\\phantom{=}\\phantom{=}\\phantom{=}\\phantom{=}\\phantom{=}\\phantom{=}\\phantom{=}\\phantom{=}\\phantom{=}\\phantom{=}\\phantom{=}\\phantom{.}\\\\\n\\alpha_{t} \\cdot \\left( r_{t+1} + \\gamma \\cdot \\max_{a_{t+1}}Q(s_{t+1}, a_{t+1}) - Q(s_{t},a_{t}) \\right),\n\\end{split}\n\\end{equation}\nwhere $0 \\leq \\alpha_t \\leq 1$ is the learning rate parameter.\n\nThe Q-learning algorithm is guaranteed to converge towards the optimal solution \\cite{DBLP:books\/lib\/SuttonB98}. In simpler problems and by default, it is assumed that the Q-values for all state-action pairs are stored in a table.\n\nThis approach is, however, not feasible for more complex problems such as the Atari platform games task \\cite{DBLP:journals\/nature\/MnihKSRVBGRFOPB15} or our text-game task, where the state and action spaces are simply too large to store.\nIn text games, for example, the spaces are infinite.\nWe address this problem by approximating the optimal Q-function by a function approximator in the form of a neural network. 
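For intuition, the tabular form of this update rule can be sketched directly; the states, actions and rewards below are toy values:

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, next_actions, alpha=0.1, gamma=0.95):
    # One tabular Q-learning step:
    # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max((Q[(s_next, a2)] for a2 in next_actions), default=0.0)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q[(s, a)]

Q = defaultdict(float)  # unseen state-action pairs default to 0
q_update(Q, "cellar", "take lamp", 1.0, "hall", ["go north", "go south"])
print(Q[("cellar", "take lamp")])  # 0.1 * (1.0 + 0.95 * 0 - 0) = 0.1
```

The neural approximator described next plays exactly the role of this table, replacing the per-pair entries with a parametrised function that can generalise across descriptions.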
The Q-function is parametrised as\n\\begin{equation}\n\\label{eq:nn-q}\nQ^*(s_t, a_t) \\approx Q(s_t, a_t, \\theta_t) = \\theta_t(s_t, a_t),\n\\end{equation}\nwhere the $\\theta$ function is realised by a neural network.\n\nThe advantage of this non-tabular approach is that even in infinite spaces, the neural network can generalise to previously unobserved inputs and consequently cover the whole search space with reasonable accuracy.\nIn contrast to linear function approximators, though, non-linear approximators such as neural networks do not guarantee convergence in this context.\n\n\\section{Goals}\n\nOur goal is to introduce a minimalistic architecture serving as a proof of concept, with the ability to capture important sentence-level features and ideally capable of reasonable generalisation to previously unseen data.\nFirst, we highlight some of the aspects of the LSTM-DQN and DRRN models that could be improved upon in terms of these requirements.\n\nThe main drawback of DRRN is its use of BOW for representation generation.\nConsequently, the model is incapable of properly handling state aliasing and differentiating simple yet important nuances in the input, such as the difference between \\emph{``There is a treasure chest to your left and a dragon to your right.''} and \\emph{``There is a treasure chest to your right and a dragon to your left.''}.\n\nMoreover,\nHe et al. \\cite{DBLP:journals\/corr\/HeCHGLDO15} claim that separate embeddings for state and action spaces lead to faster convergence and to a better solution. 
However, since both state and action spaces really contain the same data --- at least in most games and especially in hypertext games where actions are a subset of states --- we aim to employ a joint embedding representation of states and actions.\n\nWe also believe that a joint representation of states and actions should eventually lead to stronger generalisation capabilities of the model, since such model should be able to transfer knowledge between state and action descriptions as their representation would be shared.\n\nThe LSTM-DQN agent, on the other hand, utilises an LSTM network that can theoretically capture more complex sentence properties such as the word order.\nHowever, its architecture only accepts actions consisting of two words.\n\nAdditionally, the two action Q-values are finally averaged, which would arguably be problematic if the verbs and objects were not independent.\nFor example, the value of the verb ``\\emph{drink}'' varies highly based on the object; consider the difference between the values of ``\\emph{drink}'' when followed by either ``\\emph{water}'' or ``\\emph{poison}'' objects.\n\nWe thus aim to utilise a minimalistic architecture that should:\n\\begin{itemize}\n\\item be able to capture dependencies on sentence level such as word order,\n\\item accept text input of any length for both states and action descriptions,\n\\item accept and evaluate any number of actions,\n\\item use a powerful interaction function between states and actions.\n\\end{itemize}\n\n\\section{Methods}\nWe present the \\emph{pyfiction} platform and specify the relevant learning tasks. Then we describe the architecture of our agent capable of learning the games, leveraging the general game interface of the platform. 
Finally, we describe our agent architecture in detail.\n\n\\subsection{Platform}\n\nThe \\emph{pyfiction}\\footnote{\\emph{pyfiction}: \\url{https:\/\/github.com\/MikulasZelinka\/pyfiction}.} platform is a library for universal access to different kinds of IF games.\nIts interface is identical to the general RL interface (see Figure \\ref{fig:rl-loop}).\nCurrently, it supports the \\emph{Glulxe}, \\emph{Z-machine} and general HTML simulators.\n\nThere are eight games present as of now; however, adding new games is straightforward.\nApart from the game files, it is only necessary to provide a mapping from states to actions and to numerical rewards for the game to be playable by AI agents.\n\nAny agent compatible with the simple RL interface can directly play all games present in \\emph{pyfiction}. Pyfiction can also integrate with OpenAI Gym \\cite{DBLP:journals\/corr\/BrockmanCPSSTZ16}.\n\n\\subsection{The SSAQN architecture}\n\\label{sec:slsn}\n\nOur neural network model is inspired by both LSTM-DQN and DRRN. 
For the sake of clarity, it is referred to as \\textbf{SSAQN} (Siamese State-Action Q-Network).\n\nSimilarly to \\cite{DBLP:journals\/corr\/NarasimhanKB15} and \\cite{DBLP:journals\/corr\/HeCHGLDO15}, we employ a variant of DQN \\cite{DBLP:journals\/nature\/MnihKSRVBGRFOPB15} with experience replay and prioritised sampling, which uses a neural network (SSAQN) for estimating the DQN's Q-function.\n\nSSAQN uses a siamese network architecture \\cite{DBLP:conf\/nips\/BromleyGLSS93}, where two branches --- a state branch and an action branch --- share most of the layers that effectively generate useful representation features.\nThis is best illustrated by visualising the network's computational graph; see Figure \\ref{fig:architecture}.\n\nAs we are using a siamese architecture, the weights of the embedding and LSTM layers are shared between state and action data passing through.\nStates and actions are only differentiated in the dense layers, whose outputs are then fed into the similarity interaction function.\n\nThe most important differences from LSTM-DQN and DRRN are:\n\n\\begin{itemize}\n\\item the network accepts two text descriptions (state and action) of arbitrary length as an input,\n\\item the embedding and LSTM layers are the same for states and actions, i.e. their weights are shared,\n\\item the interaction function of inner vectors of states and actions is realised by a normalised dot product, commonly called cosine similarity (see Section \\ref{sec:interaction-function}).\n\\end{itemize}\n\nThe output of the SSAQN is the estimated Q-value for the given state-action pair, i.e. 
the network with parameters $\\theta$ realises a function $\\theta(s, a) = Q(s, a, \\theta) \\approx Q^*(s, a)$ where the input variables $s$ and $a$ contain preprocessed text\\footnote{For more details, see the \\emph{pyfiction} library.}.\n\n\nTo compute the $Q(s, a^i)$ for different $i$ --- for multiple actions --- we simply run the forward pass of the action branch multiple times.\n\nNext, the SSAQN architecture is described layer-by-layer in more detail.\n\n\\begin{figure}[!ht]\\centering\n\\includegraphics[width=0.5\\textwidth]{img\/nn_2_.pdf}\n\\caption[Architecture of the SSAQN model and its data flow.]{\nArchitecture of the SSAQN model and its data flow. Grey boxes represent data values in different layers; the bold text corresponds to the shape of the data~tensors.}\n\\label{fig:architecture}\n\\end{figure}\n\n\\subsubsection{Word embeddings}\n\\label{sec:emb-layer}\n\nAfter preprocessing the text, we convert the words to their vector representation using word embeddings \\cite{DBLP:conf\/nips\/MikolovSCCD13}.\nSince our dataset is comparatively small, we use a relatively low dimensionality for the word representations and work with \\cd{embedding\\_dim} of~$16$.\nThe weights of the word embeddings are initialised to small random values and trained as a part of the gradient descent algorithm. Using pre-trained vector models is also supported.\n\nAs the training is done in batches, we pad the input sequences of words for each batch so that they are all aligned to the same length. 
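Such batch padding can be sketched as follows; the pad index $0$ and the alignment to the batch maximum are assumptions of this sketch:

```python
def pad_batch(sequences, pad_index=0):
    # Left-align each word-index sequence and pad with pad_index up to the
    # longest sequence in the batch, yielding a rectangular batch tensor.
    max_len = max(len(s) for s in sequences)
    return [s + [pad_index] * (max_len - len(s)) for s in sequences]

batch = pad_batch([[4, 8, 15], [16, 23], [42]])
print(batch)  # [[4, 8, 15], [16, 23, 0], [42, 0, 0]]
```

Padding per batch rather than to a global maximum keeps short batches small while still allowing a single embedding lookup per batch.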
Note, however, that in general the output of the embedding layer remains variable in length.\n\n\\subsubsection{LSTM layer}\n\\label{sec:lstm-layer}\n\nThe inputs of the LSTM layer are the word embedding vectors of variable length.\nSimilarly to the embeddings, the weights of the LSTM units are initialised to small random values and are shared between states and actions.\n\nThe role of the LSTM layer is to successively look at the input words, changing its inner state in the process and thus detecting time-based features in its input.\nIt is also at this layer that we go from having a data shape of arbitrary length to having a fixed-length output vector.\nThe output size is equal to the number of LSTM units in the layer and in our experiments, we use \\cd{lstm\\_dim} of~$32$.\n\n\n\n\n\\subsubsection{Dense layer}\n\\label{sec:dense-layer}\n\nFollowing the shared LSTM layer, we now have two dense (fully-connected) layers, one for states and one for actions.\nAgain, we initialise the weights randomly and we also apply the hyperbolic tangent activation function $\\mathrm{tanh}(x) = \\frac{1 - e^{-2x}} {1 + e^{-2x}}$.\n\n\nAs the dense layers for states and actions are the only layers that do not necessarily share weights between the two network branches, they play an important role in building differentiated representations for state and action input data.\n\nNote that as the interaction layer uses a dot product, we require both outputs of the dense layers to be of the same dimension and we set \\cd{dense\\_dim} of both branches to~$8$.\nHowever, theoretically, it would be both possible and interesting to use different layer dimensions for states and actions at this level, as usually, the original state text descriptions carry more information than action descriptions in IF games.\nThus, a possible extension of the network would be to use two or more hidden dense layers in the state branch of the network and to only reduce the dimension to the 
action dimension in the last hidden dense state layer.\n\n\\subsubsection{Interaction function}\n\\label{sec:interaction-function}\n\nLastly, we apply the cosine similarity interaction function to the state and action dense activations, resulting in the final Q-value.\nIf the inputs are two vectors $\\mathbold{x}$ and $\\mathbold{y}$ of~$\\cd{dense\\_dim} = n$ elements, we define their cosine similarity as a dot product of their L2-normalised (and consequently unit-sized) equivalents:\n\\begin{equation}\n\\label{eq:cos}\n\\mathrm{cs}(\\mathbold{x}, \\mathbold{y}) = {\\frac{\\mathbold{x} \\cdot \\mathbold{y}} {\\|\\mathbold{x}\\|_2 \\|\\mathbold{y}\\|_2}} = \\frac{ \\sum\\limits_{i=1}^{n}{\\mathbold{x}_i \\mathbold{y}_i} }{ \\sqrt{\\sum\\limits_{i=1}^{n}{\\mathbold{x}_i^2}} \\sqrt{\\sum\\limits_{i=1}^{n}{\\mathbold{y}_i^2}} },\n\\end{equation}\nwhich corresponds to the cosine of the angle between the input vectors.\n\nCosine similarity is commonly used for determining document similarity \\cite{huang2008similarity}.\nHere, we apply it to the two hidden vectors of dense-layer values, which should meaningfully represent the condensed information originally received as text input by the network, and we interpret the resulting value as an indicator of mutual compatibility of the original state-action pair.\nSince the range of values of the $\\cos$ function is $[-1, 1]$ and since the original rewards that we aim to estimate have arbitrary values, we scale the approximated Q-values accordingly.\n\n\\subsubsection{Loss function and gradient descent}\n\nRecall the Q-learning rule (see equation \\ref{eq:q}). 
We define the loss function at time~$t$ as\n\\begin{equation}\n\\label{eq:loss}\n\\mathcal{L}_t = \\left( r_{t} + \\gamma \\cdot \\max_{a}Q(s_{t+1}, a) - Q(s_{t},a_{t}) \\right)^2,\n\\end{equation}\nwhich is simply a mean squared error (MSE) of the last estimated Q-value and the target Q-value.\n\nFor gradient descent, we make use of the RMSProp optimiser \\cite{rmsprop} that has been shown to perform well in numerous practical applications, especially when applied in LSTM networks \\cite{DBLP:journals\/corr\/DauphinVCB15}.\n\n\n\\subsection{Action selection}\n\\label{sec:action-selection}\n\nGiven an SSAQN $\\theta$, where $Q(s,a) \\gets \\theta(s, a)$, the agent selects an action by following the $\\epsilon$-greedy policy $\\pi_\\epsilon(s)$ \\cite{DBLP:books\/lib\/SuttonB98} realised by the following algorithm:\n\n\\begin{algorithm}[!ht]\n\\caption{Action selection}\\label{alg:action-selection}\n\\begin{algorithmic}[1]\n\\Statex \n\\Statex $\\epsilon$: probability of choosing a random action\n\\Statex $h(s,a)$: number of times the agent selected $a$ in $s$ in the current run\n\\Statex \n\\Function{Act}{$s$, $actions$, $\\epsilon$, $h$}\n \\If{$random() < \\epsilon$}\n \\Return {random action} \\EndIf\n \\State $q\\_values = \\theta(s, a_i)$ \\textbf{for} $a_i$ in $actions$ \\label{row:4}\n \\State $q\\_values = (q\\_values + 1) \/ 2$ \\Comment{from $[-1, 1]$ to~$[0, 1]$}\n \\State $q_i = q_i^{h(s, a_i) + 1}$ \\textbf{for} $q_i$ in $q\\_values$ \\Comment{history function} \\label{row:history}\n \\State $q\\_values = (q\\_values \\cdot 2) - 1$ \\Comment{from $[0, 1]$ to~$[-1, 1]$}\n \\State \\textbf{return} $a_i$ with $\\max q_i$ \\label{row:8}\n\n\\EndFunction\n\\end{algorithmic}\n\\end{algorithm}\n\n\nThe $\\epsilon$ is the exploration parameter.\nFor sampling in training phase (see Algorithm \\ref{alg:dqn}), $\\epsilon$ is gradually decayed from the starting value of~$1$, i.e. 
at first, the agent's policy is completely random.\nIn the testing phase, the agent is greedy, i.e. $\\epsilon$ is set to~$0$ and the agent always chooses the action with the maximum Q-value for the given state.\n\n\nThe only important difference between the standard $\\epsilon$-greedy control algorithm and our action selection policy is that we additionally employ a \\emph{history function}, $h(s, a)$.\nThe scope of the history function is a single run of the agent on a single game, i.e. it is reset every time a game ends.\n\nThe function $h(s,a)$ returns a value equal to the number of times the agent selected action $a$ in state $s$ in the current run.\nThat is, if the agent never selects an action twice in the same state during a run, the history function has no impact on action selection; otherwise, it penalises the already-visited state-action pairs, as seen on line \\ref{row:history} of Algorithm \\ref{alg:action-selection}.\n\nThe history function serves as a very simple form of intrinsic motivation \\cite{DBLP:conf\/nips\/SinghBC04}.\nIt is similar to optimistic initialisation \\cite{DBLP:books\/lib\/SuttonB98} in that it leads the agent to select previously unexplored state-action tuples.\nAdditionally, note that the history function is not Markovian in the sense that it takes the whole game episode into account.\nIn practice, the history function greatly helps the agent to avoid infinite loops, since in many games, an agent is likely to get stuck in an infinite loop when following a randomly chosen deterministic Markovian policy.\n\n\\subsection{Training loop}\n\nPutting together all the parts introduced above, we can now formally describe the agent's learning algorithm.\n\nWe use a variant of DQN \\cite{DBLP:journals\/nature\/MnihKSRVBGRFOPB15} with experience replay and prioritised sampling of experience tuples with positive rewards \\cite{DBLP:journals\/corr\/SchaulQAS15}. 
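The history-penalised selection of Algorithm \ref{alg:action-selection} can be sketched as follows; this is a toy illustration, and the rescaling back to $[-1, 1]$ is omitted since it does not change the argmax:

```python
import random

def act(q_values, visit_counts, epsilon=0.0, rng=random.Random(0)):
    # Epsilon-greedy choice with the history penalty: rescale Q-values from
    # [-1, 1] to [0, 1], then raise each to the power (visit count + 1) so
    # repeatedly chosen state-action pairs lose attractiveness.
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    scaled = [((q + 1) / 2) ** (h + 1) for q, h in zip(q_values, visit_counts)]
    return max(range(len(scaled)), key=scaled.__getitem__)

# Action 0 has the higher Q-value but was already taken three times in this
# state, so the penalty makes the greedy agent prefer the fresh action 1.
print(act([0.8, 0.6], [3, 0]))  # 1
```

Because the penalty is exponential in the visit count, even a strongly preferred action is eventually abandoned if choosing it does not end the episode.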
For more details, see Algorithm \\ref{alg:dqn}.\n\nNote that the agent inherently supports playing and learning multiple games at once.\n\n\\begin{algorithm}\n\\caption{Training algorithm (a variant of DQN)}\\label{alg:dqn}\n\\begin{algorithmic}[1]\n\\Statex \n\\Statex $b$: batch size, $p$: prioritised fraction\n\\Statex $\\epsilon$: exploration parameter, $\\epsilon\\_decay$: rate at which $\\epsilon$ decreases\n\\Statex \n\\Function{Train}{$episodes$, $b$, $p$, $\\epsilon = 1$, $\\epsilon\\_decay$}\n\n \\State Initialise experience replay memory $\\mathcal{D}$\n \\State Initialise the neural network $\\theta$ with random weights\n \\State Initialise all game simulators and load the vocabulary\n\n \\For {$e \\in {0, \\ldots, episodes - 1}$}\n \\State Sample each game once using $\\pi_\\epsilon$, store experiences into~$\\mathcal{D}$\n \\State $batch$ $\\gets$ $b$ tuples $(s_t, a_t, r_t, s_{t+1}, a_{t+1})$ from $\\mathcal{D}$, where a fraction of~$p$ have $r_t > 0$\n \n \\For {$i, (s_t^i, a_t^i, r_t^i, s_{t+1}^i, a_{t+1}^i)$ in $batch$}\n \\State $target^i \\gets r_t^i$\n \\If {$a_{t+1}^i$} \\Comment{$s_{t+1}^i$ is not terminal}\n \\State $ target^i \\mathrel{{+}{=}} \\gamma \\cdot \\mathrm{max}_{a_{t+1}^i}\\theta(s_{t+1}^i, a_{t+1}^i)$ \n \\EndIf\n \\EndFor\n \n \\State Define loss as $\\mathcal{L}_e(\\theta) \\gets (target^i - \\theta(s_t^i, a_t^i))^2$ \n \\State Perform gradient descent on $\\mathcal{L}_e(\\theta)$\n \\State $\\epsilon \\gets \\epsilon \\cdot \\epsilon\\_decay$ \n \n \\EndFor\n\\EndFunction \n\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Experiments}\n\nWe conduct experiments on six games, \\emph{Saving John} (SJ), \\emph{Machine of Death} (MoD), \\emph{Cat Simulator 2016} (CS), \\emph{Star Court} (SC), \\emph{The Red Hair} (TRH) and \\emph{Transit} (TT).\nAll of these games are available as a part of \\emph{pyfiction}.\nSince SJ and MoD are also present in \\cite{DBLP:journals\/corr\/HeCHGLDO15}, results on these two games for both agents 
are directly comparable.\n\nMore details about the games, including the annotated endings, are available in the \\emph{pyfiction} repository.\nFor basic statistics, see Table \\ref{tab:games-summary}.\n\n\\subsection{Setup}\n\nIn all experiments, we use the following SSAQN layer dimensions: Embedding: $16$, LSTM: $32$, Dense: $8$. The RMSProp optimiser is used with a learning rate between $0.001$ and $0.00001$, a batch size of $256$ with a prioritised fraction of $25\\%$, $\\gamma$ of $0.95$, $\\epsilon$ of $1$ and $\\epsilon$-decay of $0.99$.\nWe run each experiment five times.\n\nThe different testing scenarios are as follows.\n\n\\subsubsection{Single-game task} The agent is trained and evaluated on the same game.\n\\subsubsection{Transfer learning} We pre-train the agent on the five games other than the one it is then trained and evaluated on. \n\\subsubsection{Generalisation} Same as transfer learning, except the agent is not trained on the final game but only evaluated on it.\n\\subsubsection{Playing multiple games at once} We train and evaluate a single instance of the agent on all six games at once. 
In each training step, all games are presented successively to the agent.\n\n\nIn all of the scenarios, the agents are mainly compared to the random agent baseline.\nFor SJ and MoD, we can also compare the results with the DRRN agent as well as the baselines given in \\cite{DBLP:journals\/corr\/HeCHGLDO15}.\n\n\\subsection{Results}\n\nFinal rewards from all tasks can be seen in Table \\ref{tab:games-summary} and the learning progress is depicted in Figure \\ref{fig:results} for most of the tasks.\n\nIn the single-game task, the SSAQN agent learns to play all deterministic games optimally.\nMoD is not learned optimally, but the agent outperforms the DRRN agent (see Figure \\ref{fig:comparison_drrn}).\nThe agent does not significantly outperform the random baseline on SC; however, it is unknown whether it is possible to do so.\n\n\\begin{figure}[ht!]\n \\centering\n \n \\subfloat[DRRN on SJ \\cite{DBLP:journals\/corr\/HeCHGLDO15}]{\\includegraphics[width=0.25\\textwidth]{drrn_sj}\\label{fig:drrn_sj}}\n \\subfloat[DRRN on MoD \\cite{DBLP:journals\/corr\/HeCHGLDO15}]{\\includegraphics[width=0.25\\textwidth]{drrn_mod}\\label{fig:drrn_mod}}\n\n \n \\subfloat[SSAQN on SJ]{\\includegraphics[width=0.25\\textwidth]{random_sj}\\label{fig:random_sj}}\n \\subfloat[SSAQN on MoD]{\\includegraphics[width=0.25\\textwidth]{random_mod}\\label{fig:random_mod}}\n \n\\caption{Comparison of the DRRN and the SSAQN agents on Saving John (left) and Machine of Death (right). SSAQN converges considerably faster on both games and achieves higher performance, but it has higher variance on MoD. 
}\n \\label{fig:comparison_drrn}\n\\end{figure}\n\nTransfer learning resulted in a slower convergence rate but similar results on deterministic games and slightly worse results on MoD and SC.\nThe agent unfortunately, but unsurprisingly, did not generalise to unseen games in the generalisation task, and experiments at a much larger scale would be needed to safely conclude whether it is capable of doing so.\nWe attribute this mostly to the lack of relevant words and formulations shared between the games.\nTable \\ref{tab:games-summary} also depicts how many words from a given game are also present in other games.\n\nEven if this percentage is comparatively high, note that it is just a statistic of single words, meaning it is very unlikely that whole phrases, which are what actually matters, match between the games.\nAt any rate, more experiments at different scales are needed to verify this hypothesis.\n\nIn the multiple-games task, the agent also learned the deterministic games optimally; this time, however, it was also able to play MoD in an almost optimal way, significantly outperforming DRRN.\nCuriously, the result on MoD is better than in the single-game setting, suggesting that information learned from other games might have been useful.\n\n\n\\begin{figure*}[!ht]\n \\centering\n \n \\subfloat[Saving John]{\\includegraphics[width=0.33\\textwidth]{universal_sj.pdf}\\label{fig:sj}}\n \\hfill\n \\subfloat[Machine of Death]{\\includegraphics[width=0.33\\textwidth]{universal_mod.pdf}\\label{fig:mod}}\n \\hfill\n \\subfloat[Cat Simulator 2016]{\\includegraphics[width=0.33\\textwidth]{universal_cs.pdf}\\label{fig:cs}}\n\n \n \\subfloat[Star Court]{\\includegraphics[width=0.33\\textwidth]{universal_sc.pdf}\\label{fig:sc}}\n \\hfill\n \\subfloat[The Red Hair]{\\includegraphics[width=0.33\\textwidth]{universal_trh.pdf}\\label{fig:trh}}\n \\hfill\n \\subfloat[Transit]{\\includegraphics[width=0.33\\textwidth]{universal_t.pdf}\\label{fig:tt}}\n\n \n \\caption{Results of the SSAQN agent on all six 
games. Blue: random agent. Orange: train and evaluate on the same game \\emph{X}. Green: pre-train on all but \\emph{X}, then train and evaluate on \\emph{X}. Purple: train and evaluate on all games at once. Standard deviation is shown for the random agent and random games.}\n \\label{fig:results}\n\\end{figure*}\n\n\n\n\\begin{table*}[!t]\n\\vspace{2pt}\n\\centering\n\\begin{tabular}{llrrrrrr}\n\\toprule\n& & \\textbf{Saving John} & \\textbf{Machine of Death} & \\textbf{Cat Simulator 2016} & \\textbf{Star Court} & \\textbf{The Red Hair} & \\textbf{Transit} \\\\\n\n\\midrule\n\\multirow{6}{*}{\\rotatebox[origin=c]{90}{\\makecell{\\textsc{Properties}}}}\n \n&\\# tokens & 1119 & 2055 & 364 & 3929 & 155 & 575 \\\\\n&\\# states & 70 & $\\geq$ 200 & 37 & $\\geq$ 420 & 18 & 76 \\\\\n&\\# endings & 5 & $\\geq$ 14 & 8 & $\\geq$ 17 & 7 & 10 \\\\\n&Avg. words\/description & 73.9 & 71.9 & 74.4 & 66.7 & 28.7 & 87.0 \\\\\n&Deterministic transitions & Yes & No & Yes & No & Yes & Yes \\\\\n&Deterministic descriptions & Yes & Yes & Yes & No & Yes & Yes \\\\\n&\\% of tokens present in other games & 68.4 & 56.0 & 79.4 & 33.7 & 92.3 & 72.7 \\\\\n\n\\midrule\n\\multirow{6}{*}{\\rotatebox[origin=c]{90}{\\makecell{\\textsc{Final reward}}}}\n\n& Random agent (average) & -8.6 & -10.8 & -0.6 & -11.6 & -11.4 & -10.1 \\\\\n& Individual game & 19.4 & 15.4 & 19.4 & -2.2 & 19.3 & 19.5 \\\\\n& Generalisation & -11.2 & -15.1 & 5.7 & -13.2 & -10.0 & -10.2 \\\\\n& Transfer learning & 19.4 & 8.7 & 19.4 & -13.3 & 19.3 & 19.5 \\\\\n& Multiple games & 19.4 & 21.0 & 19.4 & -8.2 & 19.3 & 19.5 \\\\\n& Optimal & 19.4 & $\\approx$ 21.4 & 19.4 & ? 
& 19.3 & 19.5 \\\\\n\n\\bottomrule\n\\end{tabular}\n\\caption[Game properties and agent performance summary.]{Summary of game statistics and performance comparison of the SSAQN agent on different tasks.}\n\\label{tab:games-summary}\n\\vspace{3pt}\n\\end{table*}\n\n\n\\section{Conclusion and Future Work}\n\nWe presented a minimalistic text-game playing agent capable of learning to play multiple games at once.\nThe agent uses a twin-like SSAQN architecture and outperforms the previously suggested DRRN architecture while also being considerably simpler.\n\nWe also test the transfer learning and generalisation capabilities of the agent, concluding that it unfortunately does not transfer its knowledge or generalise in the limited scope of six games.\n\nTo this end, however, we present \\emph{pyfiction}, an easily extensible library for universal access to various text games that could, together with our agent, serve as a baseline for future research.\n\nFuture work should mainly focus on expanding the text-game domain and on conducting experiments at a much larger scale.\nThis should be made much easier thanks to the presented library, and we hypothesise that even agents as simple as the SSAQN agent should show some generalisation capabilities given enough data.\n\n\\section*{Acknowledgement}\n\nThe author would like to thank Rudolf Kadlec for his ideas and guidance during the author's work on this topic as a part of his Master's studies, and Martin Pil\\'{a}t for helpful suggestions and comments.\n\nThis research was supported by Charles University under SVV (project number 260 453) and by the Czech Science Foundation (project number 17-17125Y).\n\n\n\n\n\\pagebreak\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\\label{sec_intro}\n\nFor about two decades, the most accurate realization of the SI second has been obtained from caesium fountain clocks \\cite{Wynands2005}. 
Nowadays the best fountain clocks offer uncertainties at the low $10^{-16}$ level, while at the same time their instability allows statistical uncertainties at the same level to be reached after averaging times of only $\\approx$10\\,000\\,s \\cite{Guena2012}. Such frequency instabilities are enabled by sufficiently increasing the signal-to-noise ratio ($\\text{SNR}$) through the efficient loading of high numbers of atoms from a cold atom beam \\cite{Guena2012,Dobrev2016}. Furthermore, an instability degradation due to the phase noise of the interrogation oscillator (Dick effect, \\cite{Santarelli1998}) needs to be avoided. The latter has been achieved by utilizing a cryogenic sapphire oscillator instead of a quartz-oscillator-based frequency synthesis \\cite{Santarelli1999,Guena2012}. A drawback in this case is the need for regular, costly liquid helium refills. Recently, a new generation of pulse-tube cryocoolers has been successfully tested for the microwave synthesis of fountain clocks, avoiding liquid helium refills \\cite{Takamizawa2014,Abgrall2016}.\n\nIn an alternative approach, the required low oscillator phase noise level can be obtained by transferring the frequency stability of a cavity stabilized laser via a frequency comb to the microwave spectral range. For tight locking, modern combs are equipped with fast actuators (high bandwidth pump diode current controllers, intracavity electro-optic modulators) \\cite{Puppe2016,Hudson2005,Zhang2012}, so that the frequency stability of the comb repetition rate and its harmonics almost reaches the level of stability of the cavity stabilized laser frequency. \nConsequently, the low-noise microwave signal can be obtained directly from the femtosecond laser locked to an ultrastable optical cavity \\cite{Millo2009}. Lacking the option of a fast actuator in our setup, we have chosen a different approach by using the frequency comb as a transfer oscillator \\cite{Telle2002,Lipphardt2009,Weyers2009,Tamm2014}. 
\nIn this case, the beat frequencies between an ultrastable laser and the frequency comb, and the microwave oscillator and the comb are measured simultaneously. Since the generation of the control signal for the microwave oscillator is based on the evaluation of the frequency difference of the two beat notes, the noise contributions of the frequency comb, acting as the transfer oscillator, are suppressed within the bandwidth of the transfer. For the frequency comb stabilization, it is only required that all relevant beat frequencies are reliably kept within the bandwidths of the respective filters. \n\nTo realize this approach, we have extended our setup for absolute frequency measurements of optical clock frequencies (e.g. \\cite{Tamm2014},\\cite{Huntemann2014}). We added a commercial 1.5\\,$\\mu$m fiber laser, locked to a high-finesse optical cavity, and transferred the laser frequency stability via the fiber laser femtosecond frequency comb to a 9.6\\,GHz microwave oscillator, which is used for the synthesis of the caesium fountain interrogation signal at 9.2\\,GHz. At the same time, the 9.6\\,GHz microwave oscillator frequency is measured with reference to the 100\\,MHz output frequency of a hydrogen maser. 
Based on this measurement, the long-term drift of the optical cavity and the correlated 1.5\\,$\\mu$m fiber laser frequency drift are compensated for by an acousto-optic modulator (AOM), which ensures the locking of the microwave oscillator to the hydrogen maser frequency in the long term.\n\nIn Section~\\ref{sec_scheme} we describe our setup in detail and then present our results in Section~\\ref{sec_results}.\n\n\n\\section{Microwave oscillator stabilization scheme}\n\\label{sec_scheme}\n\nOur setup for absolute optical frequency measurements (Fig.~\\ref{fig1_Dromaser}) has been extended by two modules (gray shaded boxes in Fig.~\\ref{fig1_Dromaser}): the setup for the cavity stabilized fiber laser, and the stabilization scheme for the microwave oscillator, a dielectric resonator oscillator (DRO) at 9.6\\,GHz. The core of the whole setup is a frequency comb (FC1500 from Menlo Systems), based on an erbium-doped fiber laser, with a repetition rate $\\nu_\\mathrm{rep}$ of about 252\\,MHz at a wavelength of 1.5\\,$\\mu$m. The oscillator of the frequency comb generates pulses of 100\\,fs in length. The corresponding optical comb has a Fourier limited spectrum width of 30\\,nm. In principle, optical detection of the 100\\,fs laser pulses generates a ``second frequency comb'' consisting of the comb repetition rate and its harmonics extending well into the microwave spectral range up to 4\\,THz, given by the Fourier limit. Absolute frequency measurements of optical frequencies and the transfer of stability of optical frequencies into the microwave regime are enabled, because of the phase coherence between the optical comb frequencies and the comb repetition rate.\n\nTo fulfill the requirements of our microwave oscillator stabilization scheme regarding optical power and bandwidth, we have chosen a fast InGaAs-PIN photodiode (DSC 40S, BW\\,=\\,12\\,GHz) for the detection of the repetition rate and its harmonics. 
The 38$^{\\mathrm{th}}$ harmonic (at $\\approx 9.6$\\,GHz) is subsequently selected by a band-pass filter (bandwidth 400\\,MHz) and amplified by a microwave amplifier with phase noise specification (HMC-C050). \n\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=\\columnwidth]{Fig1_DROmaser}\n\t\\caption{Setup for the generation of the optically stabilized 9.6\\,GHz microwave signal and absolute optical frequency measurements. The two extra modules for microwave generation are indicated by the gray shaded boxes. DRO: dielectric resonator oscillator; AOM: acousto-optic modulator.}\n\t\\label{fig1_Dromaser} \n\\end{figure}\n\nThe detection and amplification constitute a fundamental limit for the short-term frequency stability transfer from the optical to the microwave domain in our setup. An optical power of 0.5\\,mW at the photodiode results in $-27$\\,dBm electrical power of the 38$^\\mathrm{th}$ harmonic and thus in a calculated $\\text{SNR}$ of 144\\,dB\/Hz. The amplifier limits the $\\text{SNR}$ to 140\\,dB\/Hz close to the carrier at 100\\,Hz and 142\\,dB\/Hz at frequencies larger than 1\\,kHz. \n\nOne part of the 38$^{\\mathrm{th}}$ harmonic's signal of the repetition rate is used for the fast locking of the microwave oscillator (see below). The other part of the signal is mixed with a 9.6\\,GHz frequency synthesis output signal $\\nu_\\mathrm{synth}$, referenced to the maser. The resulting beat signal $\\nu_\\mathrm{y}=38\\times \\nu_\\mathrm{rep}-\\nu_\\mathrm{synth}$ is kept constant via an offset lock for locking the repetition rate of the frequency comb to the 9.6\\,GHz signal with a bandwidth of about 1\\,kHz (not shown in Fig.~\\ref{fig1_Dromaser}). \n\nThree erbium-doped fiber amplifiers (not shown in Fig.~\\ref{fig1_Dromaser}), including matched nonlinear optical components, are employed to spread and optimize the femtosecond laser spectrum. 
One of these amplifiers generates the octave-spanning spectral range of 1--2\\,$\\mu$m to provide the carrier envelope offset frequency $\\nu_\\mathrm{ceo}$ of the frequency comb, which is realized in a $f-2f$ scheme \\cite{Jones2000, Holzwarth2000}. The offset frequency ($\\text{SNR}$\\,=\\,42\\,dB at RBW\\,=\\,300\\,kHz) is locked to an intermediate frequency of 20\\,MHz with a bandwidth of a few kHz (not shown in Fig.~\\ref{fig1_Dromaser}). The spectrum core areas of the two other amplifiers are tuned to optimize the beat notes $\\nu_\\mathrm{x1},\\nu_\\mathrm{x2}$ with the optical clock transition frequencies of our quadrupole ({\\em E}\\\/2) and octupole ({\\em E}\\\/3) $^{171}$Yb$^+$ frequency standards \\cite{Tamm2014}, \\cite{Huntemann2016}. For optical frequency measurements, the frequencies $\\nu_\\mathrm{rep}$, $\\nu_\\mathrm{ceo}$ and the difference frequencies $\\nu_\\mathrm{x1},\\nu_\\mathrm{x2}$ between the frequencies of the optical frequency standards and the individual comb teeth are registered by synchronous multichannel counters (K+K FXE \\cite{Kramer2004}), free of dead times. From the counter results, absolute optical frequencies, which are referenced to the maser and the caesium fountains, are calculated in a data processing unit (Fig.~\\ref{fig1_Dromaser}).\n\nTo provide the short-term stability of the optically stabilized DRO, a commercial fiber laser (Koheras BASIK E15) at a wavelength of 1.54\\,$\\mu$m is locked to an optical cavity made of ultra low expansion (ULE) glass with highly reflecting ULE mirrors by means of the Pound-Drever-Hall technique. With a cavity length of 75\\,mm (FSR\\,=\\,2\\,GHz), a finesse of 320\\,000 is obtained. The control bandwidth of this servo loop is limited by an AOM to 100\\,kHz. Low-frequency variations are compensated for by changing the effective fiber length of the laser using a piezo element adjusted by an integral element in the control loop. 
As a result, a frequency reference with a short-term stability of $10^{-15}$ for averaging times between 1 and 10\\,s is obtained with a prevailing linear drift of 40\\,mHz\/s. \n\nTo realize the transfer oscillator concept \\cite{Telle2002}, we need to simultaneously track the optical beat signals $\\nu_\\mathrm{x}$ and $\\nu_\\mathrm{ceo}$ and the microwave beat signal $\\nu_\\mathrm{z}$ between the DRO and the frequency comb. While the optical and microwave beat notes are from different spectral regions, they are based on strongly phase coupled modes of the same oscillator. Processing of the beat notes enables their subtraction from each other, which effectively eliminates the noise of the frequency comb. \n\nThe cavity stabilized laser light and the light from the frequency comb are combined via single-mode fibers. The beat frequency $\\nu_\\mathrm{x}$ (27\\,MHz, $\\text{SNR}$\\,=\\,43\\,dB at RBW\\,=\\,300\\,kHz) from the laser light (frequency $\\nu_\\mathrm{L}$) and the $m^\\mathrm{th}$ mode of the frequency comb ($m=769\\,794$) is detected by a standard InGaAs photodiode. An absolute frequency measurement with reference to the hydrogen maser frequency is reduced to the counting of the three radio frequencies $\\nu_\\mathrm{rep}$, $\\nu_\\mathrm{ceo}$ and $\\nu_\\mathrm{x}$:\n\n\\begin{equation}\n\\nu_\\mathrm{L}=m \\times \\nu_\\mathrm{rep} + \\nu_\\mathrm{ceo} + \\nu_\\mathrm{x}. \n\\label{laserf}\n\\end{equation}\n\nThe DRO frequency $\\nu_\\mathrm{MW}$ is mixed with the 38$^\\mathrm{th}$ harmonic of the repetition rate, which yields $\\nu_\\mathrm{MW}=38 \\times \\nu_\\mathrm{rep} +\\nu_\\mathrm{z}$. 
Eliminating $\\nu_\\mathrm{rep}$ by utilizing (\\ref{laserf}), we obtain the microwave frequency $\\nu_\\mathrm{MW}$ as\n\n\\begin{equation}\n\\nu_\\mathrm{MW}=\\frac{38}{m} \\times \\nu_\\mathrm{L} + [\\nu_\\mathrm{z}-\\frac{38}{m} (\\nu_\\mathrm{ceo}+\\nu_\\mathrm{x})]\n\\label{MW}\n\\end{equation}\n\n\\noindent which establishes a fixed relation between $\\nu_\\mathrm{MW}$ and $\\nu_\\mathrm{L}$, if the term within the square brackets is kept constant. The latter contains beat frequencies generated in the microwave ($\\nu_\\mathrm{z}$\\,=\\,5.9\\,MHz) and in the optical spectral region ($\\nu_\\mathrm{ceo}+\\nu_\\mathrm{x}$\\,=\\,47\\,MHz), which are separated by a large scaling factor of $m\/38$. \n\nThe spectral range adaptation of the beat notes is realized by a multiplication of $\\nu_\\mathrm{z}$ and a division of $\\nu_\\mathrm{ceo}+\\nu_\\mathrm{x}$. For this purpose, an extra module, an analogue data processor with a tracking bandwidth of about 1\\,MHz, is set up: An overtone oscillator generates the 8$^\\mathrm{th}$ harmonic of $\\nu_\\mathrm{z}$, which reduces the further requirements of the $\\text{SNR}$ and also of the counter resolution for data processing. Furthermore, $\\nu_\\mathrm{ceo}$ and $\\nu_\\mathrm{x}$ are added in a mixer, filtered by a tracking oscillator, amplified and divided by $c=m\/(38 \\times 8)$.\nThe division is undertaken in a four-step process: In three identical stages 100\\,MHz is added each time, before the sum frequency is divided by a factor of 8. Before the last of these stages another stage is inserted, in which 100\\,MHz is also added, before the sum frequency is divided by a scaling factor of 4.9457\\ldots which is obtained from a direct digital synthesizer with a 48-bit accumulator (AD9956). \n\nThe output frequency of the division process ($\\approx$15\\,MHz) is mixed with the overtone frequency of $\\nu_\\mathrm{z}$. 
As a final result, we obtain a transfer beat frequency $\\nu_\\mathrm{T}$ equal to the eightfold of the term in square brackets of (\\ref{MW}):\n\n\\begin{equation}\n\\nu_\\mathrm{T}=\\nu_\\mathrm{z} \\times 8 -\\frac{\\nu_\\mathrm{ceo}+\\nu_\\mathrm{x}}{c} = \\nu_\\mathrm{MW} \\times 8 - \\frac{\\nu_\\mathrm{L}}{c}. \n\\end{equation}\n\n\\noindent Since $\\nu_\\mathrm{T}$ is independent of $\\nu_\\mathrm{rep}$ and $\\nu_\\mathrm{ceo}$, fluctuations of these frequencies are effectively suppressed, and $\\nu_\\mathrm{T}$ comprises the direct phase comparison between the DRO and the stabilized fiber laser. The comparison result is used to control the DRO with a bandwidth of $\\approx$50\\,kHz using an offset lock (``fast lock'' in Fig.~\\ref{fig1_Dromaser}).\n\nFinally, to ensure the locking of the DRO to the maser, we need to compensate for the frequency drift of the optical cavity by locking $\\nu_\\mathrm{L}$ to the maser. For this purpose, we employ a slow lock (time constant $\\approx$50\\,s), which is based on a comparison of the maser frequency and the DRO frequency in the data processing unit, using an AOM for the adjustment of $\\nu_\\mathrm{L}$ (see Fig.~\\ref{fig1_Dromaser}).\n\nThe utilized ``state-of-the-art'' DRO (PSI, $-$\\,110\\,dBc at 10\\,kHz) has a tuning bandwidth of a few 100\\,kHz. The 100\\,MHz hydrogen maser reference frequency is delivered to our setup via a semi-rigid coaxial cable by a commercial distribution amplifier (SDI\u2026) with galvanic isolation. All synthesizers and counters of the setup are referenced to the maser frequency. A homemade frequency synthesis \\cite{Gupta2007} is utilized to generate the 96$^\\mathrm{th}$ harmonic $\\nu_\\mathrm{synth}$ of the 100\\,MHz signal. 
\n\nThe 9.6\\,GHz optically stabilized microwave signal from the maser-referenced DRO (Fig.~\\ref{fig1_Dromaser}) is split, amplified and delivered to the two syntheses of the PTB fountain clocks CSF1 \\cite{Weyers2001,Weyers2001b} and CSF2 \\cite{Gerginov2010,Weyers2012} to generate the 9.193\\,GHz signal needed for the atom interrogation. The syntheses are provided with electronic switches which enable either the utilization of the optically stabilized 9.6\\,GHz signal or alternatively a 9.6\\,GHz signal from another DRO stabilized to a low-noise quartz oscillator \\cite{Gupta2007}. In the case of a failure of the optically stabilized microwave signal, the fountain control software detects the resultant exceeding of the limits for the measured atom number, the transition probability or the frequency deviations, and arranges the automatic switching to the quartz-based 9.6\\,GHz signal.\n\n\n\\section{Results}\n\\label{sec_results}\n\n\\subsection{Characterization of the optically stabilized microwave oscillator signal}\n\\label{sec_CharDdro}\n\nFor the characterization of the optically stabilized microwave signal, a second frequency comb independently stabilized to the same laser was utilized. The short-term characterization by a phase noise measurement yielded a phase noise level of $-$117\\,dBc\/Hz at 10\\,Hz from the carrier frequency [curve (a) in Fig.~\\ref{fig2_Pn}] \\cite{Tamm2014}. Noise integration up to 30\\,kHz results in an Allan frequency deviation of $4 \\times 10^{-15}$ (1\\,s), dominated by the increase of the noise level at Fourier frequencies below 100\\,Hz. Also the noise analysis of the transfer beat frequency signal $\\nu_\\mathrm{T}$ (in-loop) showed the same limitation [curve (b) in Fig.~\\ref{fig2_Pn}], which was given by the noise properties of the utilized synthesizer for the offset lock to keep the transfer beat frequency $\\nu_\\mathrm{T}$ constant along with the limited selectivity of the utilized spectrum analyzer. 
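The step from an integrated phase noise spectrum to an Allan deviation, as used for the $4 \times 10^{-15}$ (1\,s) figure quoted above, follows the standard relation $\sigma_y^2(\tau)=2\int S_y(f)\,\sin^4(\pi f \tau)/(\pi f \tau)^2\,\mathrm{d}f$ with $S_\varphi(f)=2L(f)$ and $S_y(f)=(f/\nu_0)^2 S_\varphi(f)$. The sketch below illustrates the bookkeeping only; the simple $1/f$ phase noise model is an assumption and is not the measured spectrum:

```python
import numpy as np

# Illustrative conversion of an SSB phase noise spectrum L(f) of a
# 9.6 GHz carrier into an Allan deviation at tau = 1 s.  The assumed
# model (-117 dBc/Hz at 10 Hz with a 1/f slope) is a stand-in only.
nu0 = 9.6e9                                 # carrier frequency / Hz
f = np.linspace(1.0, 30e3, 300_000)         # integration band 1 Hz .. 30 kHz
L_dBc = -117.0 - 10.0 * np.log10(f / 10.0)  # assumed SSB phase noise / dBc/Hz
S_phi = 2.0 * 10.0 ** (L_dBc / 10.0)        # S_phi(f) = 2 L(f) in rad^2/Hz
S_y = (f / nu0) ** 2 * S_phi                # fractional frequency noise PSD
tau = 1.0
transfer = np.sin(np.pi * f * tau) ** 4 / (np.pi * f * tau) ** 2
sigma_y = np.sqrt(2.0 * np.sum(S_y * transfer) * (f[1] - f[0]))
print(sigma_y)                              # below 1e-15 for this toy model
```

A steeper noise increase below 100\,Hz, as in the measured spectrum, raises the result accordingly.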
\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=\\columnwidth]{Fig2_DROPN}\n\t\\caption{Single-sideband phase noise power spectral density $L$ relative to the carrier at 9.6\\,GHz. (a) red: measured phase noise spectrum of the employed dielectric resonator oscillator (DRO) for the case that it is locked to an ultrastable laser using the transfer oscillator scheme. (b) gray: measured phase noise spectrum of the transfer beat frequency signal $\\nu_\\mathrm{T}$. (c) green: measured phase noise spectrum of the transfer beat frequency signal $\\nu_\\mathrm{T}$ after modification of the offset lock of the transfer beat (see text). (d) blue: specified phase noise of the unstabilized DRO.}\n\t\\label{fig2_Pn} \n\\end{figure}\n\nRecently, the limitation has been overcome by a modification of the offset lock of the transfer beat avoiding additional noise from the offset synthesizer. Moreover, the servo loop bandwidth of the fast lock was increased to more than 100\\,kHz by utilizing filters with smaller delays. The resulting transfer beat spectrum is depicted in Fig.~\\ref{fig2_Pn} [curve (c)]. The expectation that the previous phase noise limitation at low Fourier frequencies has been overcome is supported by a frequency ratio measurement of the DRO and the clock laser of the $^{171}$Yb$^+$ single-ion frequency standard with the same frequency comb, which exhibited a significant improvement of the frequency instability. However, a real out-of-loop noise measurement would be needed to definitely confirm the removal of the former limitation.\n\nThe long-term performance of the optically stabilized microwave signal is characterized in terms of the Allan standard deviation. We measured the frequency instability of the hydrogen maser and of the cavity stabilized fiber laser locked to the hydrogen maser (Fig.~\\ref{fig3_DROADEV}). 
In both cases, the output frequency of the $^{171}$Yb$^+$ frequency standard served as a reference, which only contributes at a negligible level to the measured standard deviations. In the long term ($\\tau >$~100\\,s), both Allan deviations agree very well. Since the DRO is locked to the cavity stabilized fiber laser with a bandwidth of 100\\,kHz, the microwave signal exhibits the same long-term behavior. \n\n\\begin{figure}\n\\centering\n \\includegraphics[width=\\columnwidth]{Fig3_RegDROmaserADEV}\n\t\\caption{Measured Allan standard deviations $\\sigma_y (\\tau)$ of the frequencies of the hydrogen maser (solid line, red) and the cavity stabilized 1.5\\,$\\mu$m fiber laser (dashed line, blue), locked to the hydrogen maser. For both measurements, the output frequency of the $^{171}$Yb$^+$ single-ion frequency standard served as a reference, which only contributes at a negligible level to the measured standard deviations.}\n\t\\label{fig3_DROADEV} \n\\end{figure}\n\nIf required, the short-term stability of the microwave signal could be further improved: First, superior radio-frequency amplifiers with lower phase noise could be utilized. Second, the amplitude-to-phase conversion in the fast InGaAs-PIN photodiode for the detection of the repetition rate and its harmonics can be reduced by stabilizing the optical power \\cite{Taylor2011,Zhang2012a} and selecting the operating point of the photodiode \\cite{Zhang2010}. 
Finally, the $\\text{SNR}$ can be increased by ``multiplication of the pulse rate'' \\cite{Haboucha2011}.\n\n\n\\subsection{Resulting frequency instabilities of the fountain clocks CSF1 and CSF2}\n\\label{sec_InstabCSF}\n\nFor a fountain clock operated at the quantum projection noise limit, the $\\text{SNR}$ is given by $\\text{SNR}=N_{\\mathrm{at}}^{1\/2}$, with $N_{\\mathrm{at}}$ the total detected number of atoms in the F\\,=\\,3 and F\\,=\\,4 hyperfine components of the Cs ground state \\cite{Santarelli1999}. The frequency instability expressed by the Allan deviation is given by: \n\n\\begin{equation}\n\\label{sigmay}\n\t\\sigma_y\\left(\\tau\\right)=\\frac{1}{\\pi}\\frac{\\Delta\\nu}{\\nu_0}\\frac{1}{\\text{SNR}}\\sqrt{\\frac{T_c}{\\tau}}\n\\end{equation}\n\n\\noindent where $\\Delta\\nu$ is the full-width-at-half-maximum of the Ramsey fringe, $\\nu_0$\\,=\\,9\\,192\\,631\\,770\\,Hz is the clock transition frequency, $T_c$ is the cycle time, and $\\tau$ the measurement time. The precondition is that an instability degradation due to the phase noise of the interrogation oscillator (Dick effect, \\cite{Santarelli1998}) can be avoided. The instability described by (\\ref{sigmay}) is reduced with increased detected atom numbers (from increased loading times) as long as the factor with the square root of $T_c$ (also containing the loading time) does not become too large. This consideration gives a first criterion for the best choice of the employed loading time (and for the resulting detected atom number $N_{\\mathrm{at}}$ and cycle time $T_c$). To achieve the best compromise for the combined statistical and systematic uncertainties, one has to furthermore take into account that in general the systematic part of the uncertainty due to cold atom collisions scales with $N_{\\mathrm{at}}$ (e.g. see \\cite{Gerginov2010}). 
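The Allan deviation formula above can be evaluated directly. Note that the Ramsey fringe width $\Delta\nu$ is not stated in the text; the 0.9\,Hz used below is a typical value assumed for illustration, which happens to reproduce the tabulated instabilities well:

```python
from math import pi, sqrt

def fountain_adev_1s(snr, t_cycle, delta_nu=0.9, nu0=9_192_631_770.0):
    """Quantum-projection-noise-limited Allan deviation at tau = 1 s.

    delta_nu (Ramsey fringe FWHM in Hz) is an assumed typical value,
    not a number given in the text.
    """
    return (1.0 / pi) * (delta_nu / nu0) / snr * sqrt(t_cycle / 1.0)

# CSF2 with molasses loading (SNR = 1060, T_c = 1.2345 s)
print(f"{fountain_adev_1s(1060, 1.2345):.2e}")  # 3.27e-14, close to 3.3e-14
```

With the CSF1 parameters ($\text{SNR} = 350$, $T_c = 1.1145$\,s) the same assumption yields $\approx 9.4 \times 10^{-14}$, in reasonable agreement with the quoted $9.2 \times 10^{-14}$.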
\n\nIn the fountain CSF1, where magneto-optical trap (MOT) loading is utilized, the collisional frequency shift is evaluated by simply varying the microwave field power in the state selection cavity to operate the fountain at different atomic densities. This gives a 10\\% systematic uncertainty estimate for the collisional shift \\cite{Weyers2001b}. As a result, the usable $N_{\\mathrm{\\mathrm{at}}}$ is limited to keep the related uncertainty contribution at a reasonable level. We use an MOT loading time of 163.2\\,ms ($T_c$\\,=\\,1.1145\\,s), which optimizes the overall uncertainty. In the fountain CSF2 with molasses loading, the method of rapid adiabatic passage is employed for the collisional shift determination \\cite{Kazda2013}. Since the systematic uncertainty of the collisional shift is below 0.5\\%, much higher atom numbers are compatible with systematic collisional shift uncertainties below the $10^{-16}$ level. Molasses loading from a cold atom beam \\cite{Dobrev2016} enables a relatively short loading time of 340\\,ms ($T_c$\\,=\\,1.2345\\,s) for the minimization of the overall uncertainty. Increasing the loading time to 690\\,ms ($T_c$\\,=\\,1.5845\\,s), however, optimizes the statistical uncertainty.\n\nWith both fountains, $\\text{SNR}$ measurements utilizing either two $\\pi\/2$ pulses (at frequency detuning of half of the full linewidth) or two $\\pi\/4$ pulses (at zero frequency detuning) yield the same $\\text{SNR}$. Since the $\\text{SNR}$ is sensitive to frequency noise only in the first case, this finding is the first proof that the noise contribution by the optically stabilized microwave signal is negligible. 
The measured $\\text{SNR}$ values are given in Table~\\ref{table_instabilities} together with the resultant calculated frequency instabilities $\\sigma_\\mathrm{y} (\\mathrm{1\\,s})$ using (\\ref{sigmay}).\n\n\\begin{table*}[t]\n\\renewcommand{\\arraystretch}{1.0}\n\\caption{Loading and Cycle Times, Signal-to-Noise Ratios and Frequency Instabilities of the Fountain Clocks CSF1 and CSF2}\n\\label{table_instabilities}\n\\centering\n\\begin{tabular}{lcccccc}\n\\hline\n\\rule[-3mm]{0mm}{8mm} & Loading Time & Cycle Time & $\\text{SNR}$ & $\\sigma_\\mathrm{y} (\\mathrm{1\\,s})$ & $\\sigma_\\mathrm{y, DRO} (\\mathrm{1\\,s})$ & $\\sigma_\\mathrm{y, measured} (\\mathrm{1\\,s})$\\\\\n\\hline\\hline\n\\rule[-3mm]{0mm}{8mm}\\bfseries CSF1 & 163.2\\,ms & 1.1145\\,s & 350 & $9.2 \\times 10^{-14}$ & $4.8 \\times 10^{-15}$ & $9.1 \\times 10^{-14}$\\\\\n\\hline\n\\rule[-3mm]{0mm}{8mm}\\bfseries CSF2 & 340\\,ms & 1.2345\\,s & 1060 & $3.3 \\times 10^{-14}$ & $5.7 \\times 10^{-15}$ & $3.4 \\times 10^{-14}$\\\\\n\\rule[-3mm]{0mm}{5 mm}& 690\\,ms & 1.5845\\,s & 1660 & $2.4 \\times 10^{-14}$ & $7.8 \\times 10^{-15}$ & $2.5 \\times 10^{-14}$\\\\\n\\hline\n\\multicolumn{7}{l}{The frequency instabilities $\\sigma_\\mathrm{y}$ scale with $(\\tau\/1\\,\\mathrm{s})^{-1\/2}$. The instability $\\sigma_\\mathrm{y} (\\mathrm{1\\,s})$ is calculated from the}\\rule[0mm]{0mm}{4mm}\\\\\n\\multicolumn{7}{l}{measured $\\text{SNR}$ (see text) by using (\\ref{sigmay}). 
The instability contribution $\\sigma_\\mathrm{y, DRO} (\\mathrm{1\\,s})$ is caused by the Dick}\\\\\n\\multicolumn{7}{l}{effect \\cite{Santarelli1998} and calculated from the measured DRO phase noise (Fig.~\\ref{fig2_Pn}).}\\\\\n\\end{tabular}\n\\end{table*}\n\nTaking into account the measured phase noise level of the optically stabilized DRO (Fig.~\\ref{fig2_Pn}), we calculated the frequency instability contribution caused by the Dick effect \\cite{Santarelli1998} for the different cycle times of the two fountain clocks ($\\sigma_\\mathrm{y, DRO} (\\mathrm{1\\,s})$ in Table~\\ref{table_instabilities}). It becomes obvious that even with the formerly measured phase noise level shown in Fig.~\\ref{fig2_Pn}, which is a worst case estimate as explained in Section~\\ref{sec_CharDdro}, the resulting instability contribution of the optically stabilized DRO is negligible compared to the calculated instabilities $\\sigma_\\mathrm{y} (\\mathrm{1\\,s})$.\n\nIn Fig.~\\ref{fig4_CSFADEV}, the measured Allan standard deviations for fountain frequency measurements with cycle times $T_c$\\,=\\,1.1145\\,s (CSF1) and $T_c$\\,=\\,1.5845\\,s (CSF2) are presented. The results $\\sigma_\\mathrm{y, measured} (\\tau)=9.1 \\times 10^{-14} (\\tau\/1\\,\\mathrm{s})^{-1\/2}$ for CSF1 and $\\sigma_\\mathrm{y, measured} (\\tau)=2.5 \\times 10^{-14} (\\tau\/1\\,\\mathrm{s})^{-1\/2}$ for CSF2 agree very well with the instabilities calculated from the $\\text{SNR}$ data (see Table~\\ref{table_instabilities}) and prove again that the instability contributions of the optically stabilized DRO are negligible at the current fountain performance levels.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=\\columnwidth]{Fig4_AllanCSF}\n\t\\caption{Measured Allan standard deviations $\\sigma_y (\\tau)$ of the CSF1 [full data points (green)] and CSF2 [open data points (red)] frequencies. 
For the CSF1 measurement, a hydrogen maser served as the reference while the CSF2 frequency was referenced to the output frequency of the $^{171}$Yb$^+$ single-ion frequency standard [quadrupole (E2) transition], whose instability level is indicated by the dashed line. Subtraction of the noise levels of the maser and the single-ion frequency standard, respectively, yields the instability levels indicated by the full lines (CSF1: $9.1 \\times 10^{-14} (\\tau\/1\\,\\mathrm{s})^{-1\/2}$, CSF2: $2.5 \\times 10^{-14} (\\tau\/1\\,\\mathrm{s})^{-1\/2}$).}\n\t\\label{fig4_CSFADEV} \n\\end{figure}\n\n\n\\subsection{Operation of the optically stabilized microwave oscillator signal}\n\\label{sec_FrequMeas}\n\nIn March~2014, the first continuous 10-day measurement (dead time $\\approx$\\,1\\%) with the fountains and the presented optically stabilized DRO scheme was performed to contribute to the monthly calibration of International Atomic Time (TAI) by the Bureau International des Poids et Mesures (BIPM). Both fountain frequencies agreed at the $10^{-16}$ level, well within their combined uncertainty. \n\nSince June~2015, the optically stabilized DRO scheme has been routinely used for fountain operation. A number of TAI calibrations as well as optical frequency measurements have been performed with both fountains. The results demonstrate very good agreement with other fountain clocks and previously published optical frequency measurement data at the low $10^{-16}$ level. High duty cycles of the optically stabilized microwave oscillator signal, close to 100\\%, have been achieved during weeks of operation. 
In both fountains, the quartz-based microwave synthesis is only used when maintenance work needs to be done on the frequency comb or the cavity stabilized fiber laser, or in those rare cases when the DRO locking fails.\n\n\n\\section{Conclusion}\n\\label{sec_conclusion}\n\nThe technology of cavity stabilized lasers and femtosecond lasers is well established in many laboratories. Optically stabilized microwave setups as presented are therefore relatively straightforward to realize. The focus of our design is not the ultimate performance in terms of phase noise, but rather reliable continuous operation at a phase noise level which is compatible with fountain clock requirements. Our setup has proven to enable fountain clock frequency instabilities at a level otherwise only accessible by utilizing a cryogenic sapphire oscillator to provide the ultrastable microwave signal \\cite{Vian2005}.\n\nWe present phase noise measurements of the stabilized microwave oscillator and of the transfer beat frequency signal, and the measured Allan standard deviation of the cavity stabilized fiber laser, locked to the hydrogen maser. We demonstrate a quantum projection noise limited frequency instability of $2.5 \\times 10^{-14} (\\tau\/1\\,\\mathrm{s})^{-1\/2}$ and continuous long-term operation, as typically required by fountain clocks. 
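The quantum projection noise limited instabilities quoted above can be cross-checked against the $\\text{SNR}$ and cycle-time data of Table~\\ref{table_instabilities}. The following minimal Python sketch assumes the textbook quantum-projection-noise expression $\\sigma_y(1\\,\\mathrm{s}) = \\sqrt{T_c/1\\,\\mathrm{s}}\,/\,(\\pi\\,Q_{\\mathrm{at}}\\,\\mathrm{SNR})$ together with an atomic line quality factor $Q_{\\mathrm{at}} \\approx 1.0 \\times 10^{10}$ (a Ramsey linewidth of roughly 0.9\\,Hz); both the formula and the $Q_{\\mathrm{at}}$ value are illustrative assumptions here and not necessarily identical to (\\ref{sigmay}).

```python
import math

def qpn_sigma_1s(snr, t_cycle_s, q_at=1.0e10):
    # Quantum-projection-noise-limited Allan deviation at tau = 1 s:
    # sigma_y(1 s) = sqrt(Tc / 1 s) / (pi * Q_at * SNR).
    # q_at ~ 1e10 corresponds to a ~0.9 Hz Ramsey linewidth of the
    # 9.19 GHz Cs clock transition (assumed value, for illustration only).
    return math.sqrt(t_cycle_s) / (math.pi * q_at * snr)

def sigma_y(sigma_1s, tau_s):
    # White-frequency-noise scaling: sigma_y(tau) = sigma_y(1 s) * (tau/1 s)^(-1/2).
    return sigma_1s * tau_s ** -0.5

# (SNR, cycle time in s, sigma_y(1 s) from Table I) for the three configurations
configs = {
    "CSF1 (163.2 ms)": (350, 1.1145, 9.2e-14),
    "CSF2 (340 ms)":   (1060, 1.2345, 3.3e-14),
    "CSF2 (690 ms)":   (1660, 1.5845, 2.4e-14),
}
for name, (snr, tc, table_value) in configs.items():
    print(f"{name}: calculated {qpn_sigma_1s(snr, tc):.2e}, table {table_value:.2e}")
```

With these assumed values the calculated instabilities reproduce the tabulated $\\sigma_\\mathrm{y} (\\mathrm{1\\,s})$ to within a few per cent, and $\\sigma_y(\\tau)$ at longer averaging times follows from the $\\tau^{-1/2}$ scaling.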
\n\nImprovements of fountain clock frequency instabilities not only improve the achievable statistical uncertainty in a given measurement time, but also enable more accurate investigations of systematic effects, which in the end may lead to reduced systematic uncertainties.\n\n\n\\section*{Acknowledgment}\n\nThe authors would like to thank N.~Huntemann for providing the $^{171}$Yb$^+$ single-ion frequency standard reference signal, G.~Dobrev for his contributions to enhancing the output of the cold atom beam source of CSF2, M.~Kazda for providing the necessary interface in the fountain frequency syntheses to incorporate the optically stabilized microwave signal, and N.~Nemitz for the required fountain control software adaptation.\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\n\n\\section{\\@startsection{section}{1}{0pt}\n{-3.5ex plus -1ex minus -.2ex}{1.0ex plus .2ex}{\\large\\bf}}\n\\renewcommand\\subsection{\\@startsection{subsection}{1}{0pt}\n{2.5ex plus 1ex minus 
.2ex}{-1em}{\\bf}}\n\\makeatother\n\n\\setlength{\\textwidth}{165mm}\n\\setlength{\\oddsidemargin}{-1.5mm}\n\\setlength{\\textheight}{230mm}\n\\setlength{\\topmargin}{0mm}\n\\setlength{\\headheight}{0mm}\n\\setlength{\\headsep}{0mm}\n\n\\newcommand{\\vspace{1.5mm}}{\\vspace{1.5mm}}\n\\newcommand{\\vspace{3mm}}{\\vspace{3mm}}\n\n\\theoremstyle{plain}\n\\newtheorem{thm}{Theorem}[subsection]\n\\newtheorem{lem}[thm]{Lemma}\n\\newtheorem{prop}[thm]{Proposition}\n\\newtheorem{cor}[thm]{Corollary}\n\n\\newtheorem{claim}{Claim}[thm]\n\\renewcommand{\\theclaim}{\\arabic{claim}}\n\n\\newtheorem*{claim*}{Claim}\n\n\\newtheorem{ithm}{Theorem}\n\\newtheorem*{iprop}{Proposition}\n\n\\theoremstyle{definition}\n\\newtheorem{dfn}[thm]{Definition}\n\n\\newtheorem*{question}{Question}\n\n\\theoremstyle{remark}\n\\newtheorem{rem}[thm]{Remark}\n\\newtheorem{ex}[thm]{Example}\n\n\\newtheorem*{irem}{Remark}\n\n\\begin{document}\n\n\\setlength{\\baselineskip}{18pt}\n\n\\title{\\Large\\bf \nPolytopal Estimate of Mirkovi\\'c-Vilonen polytopes \\\\\nlying in a Demazure crystal}\n\\author{\n Syu Kato%\n \\footnote{Supported in part by JSPS Research Fellowships \n for Young Scientists (No.\\,20740011).} \\\\\n \\small Research Institute for Mathematical Sciences, Kyoto University, \\\\\n \\small Kitashirakawa-Oiwake-cho, Sakyo, Kyoto 606-8502, Japan \\\\\n \\small (e-mail: {\\tt kato@kurims.kyoto-u.ac.jp}) \\\\[5mm]\n Satoshi Naito%\n \\footnote{Supported in part by Grant-in-Aid for Scientific Research \n (No.\\,20540006), JSPS.} \\\\ \n \\small Institute of Mathematics, University of Tsukuba, \\\\\n \\small Tsukuba, Ibaraki 305-8571, Japan \\ \n (e-mail: {\\tt naito@math.tsukuba.ac.jp})\n \\\\[2mm] and \\\\[2mm]\n Daisuke Sagaki%\n \\footnote{Supported in part by JSPS Research Fellowships \n for Young Scientists (No.\\,19740004).} \\\\ \n \\small Institute of Mathematics, University of Tsukuba, \\\\\n \\small Tsukuba, Ibaraki 305-8571, Japan \\ \n (e-mail: {\\tt 
sagaki@math.tsukuba.ac.jp})\n}\n\\date{}\n\\maketitle\n\n\\begin{abstract} \\setlength{\\baselineskip}{16pt}\nIn this paper, we give a polytopal estimate of \nMirkovi\\'c-Vilonen polytopes lying in a Demazure crystal \nin terms of Minkowski sums of extremal Mirkovi\\'c-Vilonen polytopes.\nAs an immediate consequence of this result, \nwe provide a necessary (but not sufficient) \npolytopal condition for a Mirkovi\\'c-Vilonen polytope \nto lie in a Demazure crystal.\n\\end{abstract}\n\\section{Introduction.}\n\\label{sec:intro}\nThis paper is a continuation of our previous one \\cite{NS-dp}, and \nour purpose is to give a polytopal estimate of \nMirkovi\\'c-Vilonen polytopes lying in a Demazure crystal \nin terms of Minkowski sums of extremal Mirkovi\\'c-Vilonen polytopes.\nIt should be mentioned that \nas an immediate consequence of this result, \nwe can provide an affirmative answer to \na question posed in \\cite[\\S4.6]{NS-dp}.\n\nFollowing the notation and terminology of \\cite{NS-dp}, \nwe now explain our results more precisely.\nLet $G$ be a complex, connected, semisimple algebraic group \nwith Lie algebra $\\mathfrak{g}$, $T$ a maximal torus \nwith Lie algebra (Cartan subalgebra) $\\mathfrak{h}$, \n$B$ a Borel subgroup containing $T$, \nand $U$ the unipotent radical of $B$; \nby our convention, the roots in $B$ are the negative ones. \nLet $X_{*}(T)$ denote the coweight lattice $\\mathop{\\rm Hom}\\nolimits(\\mathbb{C}^{*},\\,T)$ \nfor $G$, which we regard as an additive subgroup of \na real form $\\mathfrak{h}_{\\mathbb{R}}:=\\mathbb{R} \\otimes_{\\mathbb{Z}} X_{*}(T)$ of $\\mathfrak{h}$. 
\nDenote by $W$ the Weyl group of $\\mathfrak{g}$, \nwith $e$ the identity element and $w_{0}$ \nthe longest element of length $m$.\nAlso, let $\\mathfrak{g}^{\\vee}$ denote\nthe (Langlands) dual Lie algebra of $\\mathfrak{g}$ \nwith Weyl group $W$, and let $U_{q}(\\mathfrak{g}^{\\vee})$ \nbe the quantized universal enveloping algebra \nof $\\mathfrak{g}^{\\vee}$ over $\\mathbb{C}(q)$. \n\nFor each dominant coweight \n$\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$, \nlet us denote by $\\mathcal{MV}(\\lambda)$ \nthe set of Mirkovi\\'c-Vilonen (MV for short) polytopes \nwith highest vertex $\\lambda$ that is contained in \nthe convex hull $\\mathop{\\rm Conv}\\nolimits (W \\cdot \\lambda)$ in $\\mathfrak{h}_{\\mathbb{R}}$\nof the Weyl group orbit $W \\cdot \\lambda$ through $\\lambda$, \nand by $\\mathcal{B}(\\lambda)$ the crystal basis of \nthe irreducible highest weight \n$U_{q}(\\mathfrak{g}^{\\vee})$-module $V(\\lambda)$ \nof highest weight $\\lambda$.\nRecall that Kamnitzer \\cite{Kam1}, \\cite{Kam2} \nproved the existence of an isomorphism of crystals \n$\\Psi_{\\lambda}$ from the crystal basis $\\mathcal{B}(\\lambda)$ \nto the set $\\mathcal{MV}(\\lambda)$ of MV polytopes, \nwhich is endowed with the Lusztig-Berenstein-Zelevinsky \n(LBZ for short) crystal structure; he also \nproved the coincidence of this LBZ crystal structure on \n$\\mathcal{MV}(\\lambda)$ with the Braverman-Finkelberg-Gaitsgory \n(BFG for short) crystal structure on $\\mathcal{MV}(\\lambda)$.\n\nIn \\cite{NS-dp}, for each $x \\in W$, \nwe gave a combinatorial description, \nin terms of the lengths of edges of an MV polytope, \nof the image $\\mathcal{MV}_{x}(\\lambda) \\subset \\mathcal{MV}(\\lambda)$ \n(resp., $\\mathcal{MV}^{x}(\\lambda) \\subset \\mathcal{MV}(\\lambda))$ of \nthe Demazure crystal $\\mathcal{B}_{x}(\\lambda) \\subset \\mathcal{B}(\\lambda)$ \n(resp., opposite Demazure crystal \n$\\mathcal{B}^{x}(\\lambda) \\subset \\mathcal{B}(\\lambda)$) \nunder the 
isomorphism \n$\\Psi_{\\lambda} : \\mathcal{B}(\\lambda) \\rightarrow \\mathcal{MV}(\\lambda)$ of crystals.\nFurthermore, in \\cite{NS-dp}, \nwe proved that for each $x \\in W$, \nan MV polytope $P \\in \\mathcal{MV}(\\lambda)$ lies in \nthe opposite Demazure crystal \n$\\mathcal{MV}^{x}(\\lambda)$ if and only if \nthe MV polytope $P$ contains (as a set) \nthe extremal MV polytope \n$P_{x \\cdot \\lambda}$ of weight $x \\cdot \\lambda$, \nwhich is identical to the convex hull \n$\\mathop{\\rm Conv}\\nolimits(W_{\\le x} \\cdot \\lambda)$ in $\\mathfrak{h}_{\\mathbb{R}}$\nof a certain subset $W_{\\le x} \\cdot \\lambda$ of $W \\cdot \\lambda$ \n(see \\S\\ref{subsec:extpoly} for details). \nHowever, we were unable to prove \nan analogous statement for Demazure crystals \n$\\mathcal{B}_{x}(\\lambda)$, $x \\in W$.\nThus, we posed the following question in \n\\cite[\\S4.6]{NS-dp}:\n\n\\begin{question}\nLet us take an arbitrary $x \\in W$. \nAre all the MV polytopes lying \nin the Demazure crystal $\\mathcal{MV}_{x}(\\lambda)$ \ncontained (as sets) in the extremal MV polytope \n$P_{x \\cdot \\lambda} = \\mathop{\\rm Conv}\\nolimits(W_{\\le x} \\cdot \\lambda)$ ?\n\\end{question}\n\n\\noindent\nNote that the converse statement fails to hold, \nas mentioned in \\cite[Remark~4.6.1]{NS-dp}.\n\nIn this paper, we provide an affirmative answer to this question. \nIn fact, we considerably sharpen the polytopal estimate above of \nMV polytopes lying in a Demazure crystal as follows. 
\nIn what follows, for each dominant coweight \n$\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$, \nwe denote by $W_{\\lambda} \\subset W$ \nthe stabilizer of $\\lambda$ in $W$, and \nby $W^{\\lambda}_{\\min} \\subset W$ \nthe set of minimal (length) coset representatives modulo \nthe subgroup $W_{\\lambda} \\subset W$.\n\\begin{ithm}[$=$ Theorem~\\ref{thm:main} combined \nwith Proposition~\\ref{prop:N3}] \\label{ithm1}\nLet $\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ \nbe a dominant coweight, and \nlet $x \\in W^{\\lambda}_{\\min} \\subset W$. \nIf an MV polytope $P \\in \\mathcal{MV}(\\lambda)$ lies \nin the Demazure crystal $\\mathcal{MV}_{x}(\\lambda)$, \nthen there exist a positive integer $N \\in \\mathbb{Z}_{\\ge 1}$ \nand minimal coset representatives \n$x_{1},\\,x_{2},\\,\\dots,\\,x_{N} \\in \n W^{\\lambda}_{\\min} \\subset W$ \nsuch that \n\\begin{equation*}\n\\begin{cases}\nx \\ge x_{k} \\quad \n \\text{\\rm for all $1 \\le k \\le N$;} \\\\[3mm]\nN \\cdot P \\subseteq\nP_{x_1 \\cdot \\lambda} + \nP_{x_2 \\cdot \\lambda} + \\cdots + \nP_{x_N \\cdot \\lambda},\n\\end{cases}\n\\end{equation*}\nwhere $N \\cdot P := \n\\bigl\\{N v \\mid v \\in P \\bigr\\} \\subset \\mathfrak{h}_{\\mathbb{R}}$ \nis an MV polytope in $\\mathcal{MV}(N \\lambda)$, and \n$P_{x_1 \\cdot \\lambda} + P_{x_2 \\cdot \\lambda} + \\cdots + \n P_{x_N \\cdot \\lambda}$ is the Minkowski sum of \nthe extremal MV polytopes $P_{x_1 \\cdot \\lambda}, \\,\nP_{x_2 \\cdot \\lambda},\\,\\dots,\\,P_{x_N \\cdot \\lambda}$.\n\\end{ithm}\n\n\\begin{irem}\nWe see from Remark~\\ref{rem:MV-LS} and \nTheorem~\\ref{thm:main} below that \nthe elements $x_{1}$, $x_{2}$, $\\dots$, $x_{N} \\in \nW^{\\lambda}_{\\min} \\subset W$ can be chosen \nin such a way that the vectors \n$x_1 \\cdot \\lambda$, $x_2 \\cdot \\lambda$, $\\dots$, \n$x_N \\cdot \\lambda \\in W \\cdot \\lambda \\subset \\mathfrak{h}_{\\mathbb{R}}$\ngive the directions of the Lakshmibai-Seshadri path of 
shape $\\lambda$ \nthat corresponds to the MV polytope $P \\in \\mathcal{MV}(\\lambda)$ \nunder the (inexplicit) bijection via the crystal basis $\\mathcal{B}(\\lambda)$.\nHence it also follows that \n$x_1 \\ge x_2 \\ge \\dots \\ge x_N$ \nin the Bruhat ordering on $W$.\n\\end{irem}\n\nFrom the theorem above, we can deduce immediately that \nfor an arbitrary $P \\in \\mathcal{MV}_{x}(\\lambda)$, \nthere holds $N \\cdot P \\subset N \\cdot P_{x \\cdot \\lambda}$ and \nhence $P \\subset P_{x \\cdot \\lambda}$. Indeed, this follows from \nthe inclusion \n$P_{x_k \\cdot \\lambda}=\\mathop{\\rm Conv}\\nolimits(W_{\\le x_k} \\cdot \\lambda) \n \\subset \\mathop{\\rm Conv}\\nolimits(W_{\\le x} \\cdot \\lambda)=P_{x \\cdot \\lambda}$\nfor each $1 \\le k \\le N$, and the fact that the Minkowski sum \n$P_{x \\cdot \\lambda}+P_{x \\cdot \\lambda}+\\cdots+P_{x \\cdot \\lambda}$ \n($N$ times) is identical to $N \\cdot P_{x \\cdot \\lambda}$ \n(see Remark~\\ref{rem:N1}). \n\nThe main ingredient in our proof of the theorem is \nthe following polytopal estimate of \ntensor products of MV polytopes.\nLet $\\lambda_{1},\\,\\lambda_{2} \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ \nbe dominant coweights. 
\nSince $\\mathcal{MV}(\\lambda) \\cong \\mathcal{B}(\\lambda)$ as crystals \nfor every dominant coweight $\\lambda \\in X_{\\ast}(T)$, \nthe tensor product \n$\\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$ of \nthe crystals $\\mathcal{MV}(\\lambda_{1})$ and $\\mathcal{MV}(\\lambda_{2})$ \ndecomposes into a disjoint union of \nconnected components as follows:\n\\begin{equation*}\n\\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1}) \\cong \n \\bigoplus_{\n \\begin{subarray}{c}\n \\lambda \\in X_{\\ast}(T) \\\\[0.5mm]\n \\text{$\\lambda$ : dominant}\n \\end{subarray}\n } \n\\mathcal{MV}(\\lambda)^{\\oplus m_{\\lambda_{1},\\lambda_{2}}^{\\lambda}},\n\\end{equation*}\nwhere $m_{\\lambda_{1},\\lambda_{2}}^{\\lambda} \\in \\mathbb{Z}_{\\ge 0}$ denotes \nthe multiplicity of $\\mathcal{MV}(\\lambda)$ in \n$\\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$. \nFor each dominant coweight \n$\\lambda \\in X_{\\ast}(T)$ \nsuch that $m_{\\lambda_{1},\\lambda_{2}}^{\\lambda} \\ge 1$, \nwe take (and fix) an arbitrary embedding \n$\\iota_{\\lambda}:\\mathcal{MV}(\\lambda) \\hookrightarrow \n \\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$ \nof crystals that maps $\\mathcal{MV}(\\lambda)$ \nonto a connected component of \n$\\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$, \nwhich is isomorphic to $\\mathcal{MV}(\\lambda)$ as a crystal. \n\\begin{ithm}[$=$ Theorem~\\ref{thm:tensor}] \\label{ithm2}\nKeep the notation above.\nLet $P \\in \\mathcal{MV}(\\lambda)$, and \nwrite $\\iota_{\\lambda}(P) \\in \n\\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$ as\\,{\\rm:} \n$\\iota_{\\lambda}(P)=P_{2} \\otimes P_{1}$ for some \n$P_{1} \\in \\mathcal{MV}(\\lambda_{1})$ and \n$P_{2} \\in \\mathcal{MV}(\\lambda_{2})$. \nWe assume that the MV polytope \n$P_{2} \\in \\mathcal{MV}(\\lambda_{2})$ is an extremal MV polytope \n$P_{x \\cdot \\lambda_{2}}$ for some $x \\in W$. 
\nThen, we have\n\\begin{equation*}\nP \\subset P_{1} + P_{2},\n\\end{equation*}\nwhere $P_{1} + P_{2}$ is the Minkowski sum of \nthe MV polytopes $P_{1} \\in \\mathcal{MV}(\\lambda_{1})$ and \n$P_{2} \\in \\mathcal{MV}(\\lambda_{2})$.\n\\end{ithm}\n\n\nWe have not yet found a purely combinatorial proof \nof the theorem above. In fact, our argument is a geometric one, \nwhich is based on results of Braverman-Gaitsgory in \\cite{BrGa}, \nwhere tensor products of highest weight crystals are \ndescribed in terms of MV cycles in the affine Grassmannian; \nhere we should remark that the convention on the tensor product\nrule for crystals in \\cite{BrGa} is opposite to ours, i.e., to \nthat of Kashiwara \\cite{Kasoc}, \\cite{Kasb}. \nAlso, it seems likely that the theorem above still holds \nwithout the assumption of extremality on \nthe MV polytope $P_{2} \\in \\mathcal{MV}(\\lambda_{2})$.\n\nThis paper is organized as follows.\nIn \\S\\ref{sec:MVda}, we first recall \nthe basic notation and terminology concerning MV polytopes and \nDemazure crystals, and also review the relation between \nMV polytopes and MV cycles in the affine Grassmannian. 
\nFurthermore, we obtain \na few new results on extremal MV polytopes and MV cycles, \nwhich will be used in the proof of Theorem~\\ref{ithm2} \n($=$ Theorem~\\ref{thm:tensor}).\nIn \\S\\ref{sec:multi}, we introduce \nthe notion of $N$-multiple maps \nfrom $\\mathcal{MV}(\\lambda)$ to $\\mathcal{MV}(N \\lambda)$ \nfor a dominant coweight $\\lambda \\in X_{\\ast}(T)$ and \n$N \\in \\mathbb{Z}_{\\geq 1}$, which is given explicitly \nby: $P \\mapsto N \\cdot P$ in terms of MV polytopes, \nand also show that for each MV polytope \n$P \\in \\mathcal{MV}(\\lambda)$, there exists some $N \\in \\mathbb{Z}_{\\geq 1}$ \nsuch that $N \\cdot P \\in \\mathcal{MV}(N \\lambda)$ \ncan be written as the tensor product of \ncertain $N$ extremal MV polytopes in $\\mathcal{MV}(\\lambda)$.\nFurthermore, assuming Theorem~\\ref{ithm2}, \nwe prove Theorem~\\ref{ithm1} above, \nwhich provides an answer to \nthe question mentioned above.\nIn \\S\\ref{sec:tensor}, after revisiting results of \nBraverman-Gaitsgory on tensor products of \nhighest weight crystals\nin order to adapt them to our situation, \nwe prove Theorem~\\ref{ithm2} by using \nthe geometry of the affine Grassmannian.\nIn the Appendix, we give a brief account of \nwhy Theorem~\\ref{thm:BG} below is \na reformulation of results of Braverman-Gaitsgory. \n\n\\section{Mirkovi\\'c-Vilonen polytopes and Demazure crystals.}\n\\label{sec:MVda}\n\n\\subsection{Basic notation.}\n\\label{subsec:notation}\n\nLet $G$ be a complex, connected, reductive algebraic group, \n$T$ a maximal torus, $B$ a Borel subgroup containing $T$, and \n$U$ the unipotent radical of $B$; \nwe choose the convention that \nthe roots in $B$ are the negative ones. 
\nLet $X_{*}(T)$ denote the (integral) coweight lattice \n$\\mathop{\\rm Hom}\\nolimits(\\mathbb{C}^{*},\\,T)$ for $G$, and $X_{*}(T)_{+}$ \nthe set of dominant (integral) coweights for $G$; \nwe regard the coweight lattice \n$X_{*}(T)$ as an additive subgroup of \na real form $\\mathfrak{h}_{\\mathbb{R}}:=\\mathbb{R} \\otimes_{\\mathbb{Z}} X_{*}(T)$ \nof the Lie algebra $\\mathfrak{h}$ of the maximal torus $T$. \nWe denote by $G^{\\vee}$ \nthe (complex) Langlands dual group of $G$. \n\nFor the rest of this paper, \nexcept in \\S\\ref{subsec:geom}, \\S\\ref{subsec:BG}, and \nthe Appendix, we assume that $G$ is semisimple. \nDenote by $\\mathfrak{g}$ the Lie algebra of $G$, \nwhich is a complex semisimple Lie algebra. \nLet \n\\begin{equation*}\n\\Bigl(A=(a_{ij})_{i,j \\in I}, \\, \n \\Pi:=\\bigl\\{\\alpha_{j}\\bigr\\}_{j \\in I}, \\, \n \\Pi^{\\vee}:=\\bigl\\{h_{j}\\bigr\\}_{j \\in I}, \\, \n \\mathfrak{h}^{\\ast},\\,\\mathfrak{h}\n \\Bigr)\n\\end{equation*}\nbe the root datum of $\\mathfrak{g}$, where \n$A=(a_{ij})_{i,j \\in I}$ is the Cartan matrix, \n$\\mathfrak{h}$ is the Cartan subalgebra, \n$\\Pi:=\\bigl\\{\\alpha_{j}\\bigr\\}_{j \\in I} \\subset \n \\mathfrak{h}^{\\ast}:=\\mathop{\\rm Hom}\\nolimits_{\\mathbb{C}}(\\mathfrak{h},\\,\\mathbb{C})$ \nis the set of simple roots, and \n$\\Pi^{\\vee}:=\\bigl\\{h_{j}\\bigr\\}_{j \\in I} \\subset \\mathfrak{h}$ \nis the set of simple coroots; note that \n$\\pair{h_{i}}{\\alpha_{j}}=a_{ij}$ for $i,\\,j \\in I$, \nwhere $\\pair{\\cdot}{\\cdot}$ denotes the canonical pairing \nbetween $\\mathfrak{h}$ and $\\mathfrak{h}^{\\ast}$.\nWe set $\\mathfrak{h}_{\\mathbb{R}}:= \\sum_{j \\in I} \\mathbb{R} h_{j} \\subset \\mathfrak{h}$, \nwhich is a real form of $\\mathfrak{h}$; \nwe regard the coweight lattice \n$X_{*}(T)=\\mathop{\\rm Hom}\\nolimits(\\mathbb{C}^{\\ast},\\,T)$ \nas an additive subgroup of $\\mathfrak{h}_{\\mathbb{R}}$. 
\nAlso, for $h,\\,h' \\in \\mathfrak{h}_{\\mathbb{R}}$, we write \n$h' \\ge h$ if $h'-h \\in Q^{\\vee}_{+}:=\\sum_{j \\in I}\\mathbb{Z}_{\\ge 0}h_{j}$. \nLet $W:=\\langle s_{j} \\mid j \\in I \\rangle$ \nbe the Weyl group of $\\mathfrak{g}$, where $s_{j}$, $j \\in I$, are \nthe simple reflections, with length function \n$\\ell:W \\rightarrow \\mathbb{Z}_{\\ge 0}$, \nthe identity element $e \\in W$, and \nthe longest element $w_{0} \\in W$; \nwe denote by $\\le$ the (strong) Bruhat ordering on $W$. \nLet $\\mathfrak{g}^{\\vee}$ be the Lie algebra of \nthe Langlands dual group $G^{\\vee}$ of $G$, \nwhich is the complex semisimple Lie algebra\nassociated to the root datum \n\\begin{equation*}\n\\Bigl({}^{t}A=(a_{ji})_{i,j \\in I}, \\, \n \\Pi^{\\vee}=\\bigl\\{h_{j}\\bigr\\}_{j \\in I}, \\, \n \\Pi=\\bigl\\{\\alpha_{j}\\bigr\\}_{j \\in I}, \\, \n \\mathfrak{h},\\,\\mathfrak{h}^{\\ast}\n \\Bigr);\n\\end{equation*}\nnote that the Cartan subalgebra of \n$\\mathfrak{g}^{\\vee}$ is $\\mathfrak{h}^{\\ast}$, not $\\mathfrak{h}$. \nLet $U_{q}(\\mathfrak{g}^{\\vee})$ be \nthe quantized universal enveloping algebra of \n$\\mathfrak{g}^{\\vee}$ over $\\mathbb{C}(q)$, and $U_{q}^{+}(\\mathfrak{g}^{\\vee})$ \nits positive part. \nFor a dominant coweight \n$\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$, \ndenote by $V(\\lambda)$ the integrable highest \nweight $U_{q}(\\mathfrak{g}^{\\vee})$-module \nof highest weight $\\lambda$, and by \n$\\mathcal{B}(\\lambda)$ the crystal basis of $V(\\lambda)$. \n\nNow, let $\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ \nbe a dominant coweight, and $x \\in W$. \nThe Demazure module $V_{x}(\\lambda)$ is defined to be \nthe $U_{q}^{+}(\\mathfrak{g}^{\\vee})$-submodule of $V(\\lambda)$ \ngenerated by the one-dimensional weight space \n$V(\\lambda)_{x \\cdot \\lambda} \\subset V(\\lambda)$ of \nweight $x \\cdot \\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$. 
\nRecall from \\cite{Kas} that \nthe Demazure crystal $\\mathcal{B}_{x}(\\lambda)$ is \na subset of $\\mathcal{B}(\\lambda)$ such that \n\\begin{equation} \\label{eq:dem-mod}\nV(\\lambda) \\supset \nV_{x}(\\lambda) = \n \\bigoplus_{b \\in \\mathcal{B}_{x}(\\lambda)}\n \\mathbb{C}(q) G_{\\lambda}(b),\n\\end{equation}\nwhere $G_{\\lambda}(b)$, $b \\in \\mathcal{B}(\\lambda)$, \nform the lower global basis of $V(\\lambda)$.\n\\begin{rem} \\label{rem:demcos}\nIf $x,\\,y \\in W$ satisfy \n$x \\cdot \\lambda=y \\cdot \\lambda$, then \nwe have $V_{x}(\\lambda)=V_{y}(\\lambda)$ since \n$V(\\lambda)_{x \\cdot \\lambda}=\n V(\\lambda)_{y \\cdot \\lambda}$. \nTherefore, it follows from \\eqref{eq:dem-mod} that \n$\\mathcal{B}_{x}(\\lambda)=\\mathcal{B}_{y}(\\lambda)$. \n\\end{rem}\n\nWe know from \\cite[Proposition~3.2.3]{Kas} that \nthe Demazure crystals $\\mathcal{B}_{x}(\\lambda)$, \n$x \\in W$, are characterized by the inductive relations: \n\\begin{align}\n& \\mathcal{B}_{e}(\\lambda)=\n \\bigl\\{u_{\\lambda}\\bigr\\}, \\label{eq:demint1} \\\\[1.5mm]\n& \\mathcal{B}_{x}(\\lambda)=\n \\bigcup_{k \\in \\mathbb{Z}_{\\ge 0}} f_{j}^{k} \n \\mathcal{B}_{s_{j}x}(\\lambda) \n \\setminus \\{0\\}\n \\quad \\text{for $x \\in W$ and $j \\in I$ with $s_{j}x < x$},\n \\label{eq:demint2}\n\\end{align}\nwhere $u_{\\lambda} \\in \\mathcal{B}(\\lambda)$ denotes \nthe highest weight element of $\\mathcal{B}(\\lambda)$, and \n$f_{j}$, $j \\in I$, denote the lowering Kashiwara operators\nfor $\\mathcal{B}(\\lambda)$. \n\n\\subsection{Mirkovi\\'c-Vilonen polytopes.}\n\\label{subsec:MV}\n\nIn this subsection, following \\cite{Kam1}, we recall \na (combinatorial) characterization of \nMirkovi\\'c-Vilonen (MV for short) polytopes; \nthe relation between this characterization and \nthe original (geometric) definition of MV polytopes \ngiven by Anderson \\cite{A} will be explained \nin \\S\\ref{subsec:geom}. 
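The inductive relations \\eqref{eq:demint1} and \\eqref{eq:demint2} above can be made concrete in the smallest case. The following minimal Python sketch (a toy model of ours, not taken from \\cite{Kas}) treats $\\mathfrak{g}^{\\vee}=\\mathfrak{sl}_{2}$, where $\\mathcal{B}(n)$ is a string crystal with a single lowering operator $f$: starting from $\\mathcal{B}_{e}(n)=\\{u_{n}\\}$ and closing under $f$ recovers the full crystal $\\mathcal{B}_{s}(n)=\\mathcal{B}(n)$.

```python
# Toy crystal B(n) for sl2: elements 0..n, where k counts the number of
# times f has been applied to the highest weight element u = 0;
# f(k) = k+1, and f kills the lowest element (returns None at k = n).
def f(k, n):
    return k + 1 if k < n else None

def demazure(word, n):
    # B_e = {u} (eq:demint1); each letter of a reduced word enlarges the
    # current subset by closing it under the lowering operator (eq:demint2).
    # For sl2 the only reduced words are [] (for e) and [1] (for s).
    current = {0}
    for _ in word:
        closure = set()
        for b in current:
            while b is not None:
                closure.add(b)
                b = f(b, n)
        current = closure
    return current

n = 3
print(demazure([], n))   # B_e(3) = {u} = {0}
print(demazure([1], n))  # B_s(3) = all of B(3) = {0, 1, 2, 3}
```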
\n\nAs in (the second paragraph of) \\S\\ref{subsec:notation}, \nwe assume that $\\mathfrak{g}$ is a complex semisimple Lie algebra. \nLet $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ be \na collection of elements of \n$\\mathfrak{h}_{\\mathbb{R}}=\\sum_{j \\in I} \\mathbb{R} h_{j}$.\nWe call $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ a \nGelfand-Goresky-MacPherson-Serganova \n(GGMS) datum if it satisfies \nthe condition that \n$w^{-1} \\cdot \\mu_{w'} - w^{-1} \\cdot \\mu_{w} \n \\in Q^{\\vee}_{+}$ for all $w,\\,w' \\in W$. \nIt follows by induction \nwith respect to the (weak) Bruhat ordering on $W$ \nthat $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ is a GGMS datum \nif and only if \n\\begin{equation} \\label{eq:length}\n\\mu_{ws_{i}}-\\mu_{w} \\in \\mathbb{Z}_{\\ge 0}\\,(w \\cdot h_{i}) \n\\quad \n\\text{for every $w \\in W$ and $i \\in I$}. \n\\end{equation}\n\\begin{rem} \\label{rem:GGMS-sum}\nLet $\\mu_{\\bullet}^{(1)}=(\\mu_{w}^{(1)})_{w \\in W}$ and \n$\\mu_{\\bullet}^{(2)}=(\\mu_{w}^{(2)})_{w \\in W}$ be GGMS data. \nThen, it is obvious from the definition of GGMS data \n(or equivalently, from \\eqref{eq:length}) that \nthe (componentwise) sum \n\\begin{equation*}\n\\mu_{\\bullet}^{(1)}+\\mu_{\\bullet}^{(2)}:=\n (\\mu_{w}^{(1)}+\\mu_{w}^{(2)})_{w \\in W}\n\\end{equation*}\nof $\\mu_{\\bullet}^{(1)}$ and $\\mu_{\\bullet}^{(2)}$ is \nalso a GGMS datum. \n\\end{rem}\n\nFollowing \\cite{Kam1} and \\cite{Kam2}, \nto each GGMS datum $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$, \nwe associate a convex polytope \n$P(\\mu_{\\bullet}) \\subset \\mathfrak{h}_{\\mathbb{R}}$ by:\n\\begin{equation} \\label{eq:poly}\nP(\\mu_{\\bullet})=\n\\bigcap_{w \\in W}\n \\bigl\\{ \n v \\in \\mathfrak{h}_{\\mathbb{R}} \\mid \n w^{-1} \\cdot v- w^{-1} \\cdot \\mu_{w} \\in \n \\textstyle{\\sum_{j \\in I}\\mathbb{R}_{\\ge 0} h_{j}}\n \\bigr\\}; \n\\end{equation}\nthe polytope $P(\\mu_{\\bullet})$ is called \na pseudo-Weyl polytope with GGMS datum $\\mu_{\\bullet}$. 
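Pseudo-Weyl polytopes interact simply with Minkowski sums: by Proposition~\\ref{prop:Minkowski} below, the Minkowski sum of two pseudo-Weyl polytopes is the pseudo-Weyl polytope attached to the componentwise sum of their GGMS data. As a purely convex-geometric sanity check (a minimal Python sketch in the plane, using a generic integral polygon rather than an actual MV polytope), one can verify the fact used later in Remark~\\ref{rem:N1}: the $N$-fold Minkowski sum of a convex polytope $P$ with itself equals the dilate $N \\cdot P$.

```python
def convex_hull(points):
    # Andrew's monotone-chain convex hull; strict turns only, so collinear
    # points are dropped, and integer coordinates keep all tests exact.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_sum(P, Q):
    # For convex polytopes, P + Q is the convex hull of all
    # pairwise sums of vertices.
    return convex_hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])

# A generic convex polygon (reduced to its hull vertices) ...
P = convex_hull([(0, 0), (4, 0), (4, 3), (1, 5), (0, 2), (2, 2)])
# ... whose 2-fold Minkowski sum P + P coincides with the dilate 2*P.
print(sorted(minkowski_sum(P, P)) == sorted((2 * x, 2 * y) for (x, y) in P))
```

The same check applies verbatim to the extremal polytopes $\\mathop{\\rm Conv}\\nolimits(W \\cdot \\lambda)$ in any planar realization, since only convexity is used.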
\nNote that the GGMS datum $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ is \ndetermined uniquely by the convex polytope $P(\\mu_{\\bullet})$. \nAlso, we know from \\cite[Proposition~2.2]{Kam1} that \nthe set of vertices of the polytope $P(\\mu_{\\bullet})$ \nis given by the collection $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ \n(possibly, with repetitions). In particular, we have\n\\begin{equation} \\label{eq:conv}\nP(\\mu_{\\bullet}) = \n\\mathop{\\rm Conv}\\nolimits\\,\\bigl\\{\\mu_{w} \\mid w \\in W\\bigr\\},\n\\end{equation}\nwhere for a subset $X$ of $\\mathfrak{h}_{\\mathbb{R}}$, \n$\\mathop{\\rm Conv}\\nolimits X$ denotes the convex hull in $\\mathfrak{h}_{\\mathbb{R}}$ of $X$. \n\nWe know the following proposition from \\cite[Lemma~6.1]{Kam1}, \nwhich will be used later. \n\\begin{prop} \\label{prop:Minkowski}\nLet $P_{1}=P(\\mu_{\\bullet}^{(1)})$ and \n$P_{2}=P(\\mu_{\\bullet}^{(2)})$ be pseudo-Weyl polytopes \nwith GGMS data \n$\\mu_{\\bullet}^{(1)}=(\\mu_{w}^{(1)})_{w \\in W}$ and \n$\\mu_{\\bullet}^{(2)}=(\\mu_{w}^{(2)})_{w \\in W}$, respectively. \nThen, the Minkowski sum\n\\begin{equation*}\nP_{1}+P_{2}:=\n \\bigl\\{v_{1}+v_{2} \\mid v_{1} \\in P_{1},\\,v_{2} \\in P_{2}\\bigr\\}\n\\end{equation*}\nof the pseudo-Weyl polytopes $P_{1}$ and $P_{2}$ is identical to \nthe pseudo-Weyl polytope $P(\\mu_{\\bullet}^{(1)}+\\mu_{\\bullet}^{(2)})$ \nwith GGMS datum $\\mu_{\\bullet}^{(1)}+\\mu_{\\bullet}^{(2)}=\n (\\mu_{w}^{(1)}+\\mu_{w}^{(2)})_{w \\in W}$ \n(see Remark~\\ref{rem:GGMS-sum}). \n\\end{prop}\n\nLet $R(w_{0})$ denote the set of all reduced words for $w_{0}$, that is, \nall sequences $(i_{1},\\,i_{2},\\,\\dots,\\,i_{m})$ of elements of $I$ \nsuch that $s_{i_{1}}s_{i_{2}} \\cdots s_{i_{m}}=w_{0}$, where \n$m$ is the length $\\ell(w_{0})$ of the longest element $w_{0}$.\nLet $\\mathbf{i}=(i_{1},\\,i_{2},\\,\\dots,\\,i_{m}) \\in R(w_{0})$ \nbe a reduced word for $w_{0}$. \nWe set $\\wi{l}:=\\si{1}\\si{2} \\cdots \\si{l} \\in W$ \nfor $0 \\le l \\le m$. 
For a GGMS datum \n$\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$, define integers \n(called lengths of edges) \n$\\Ni{l}=\\Ni{l}(\\mu_{\\bullet}) \\in \\mathbb{Z}_{\\ge 0}$, \n$1 \\le l \\le m$, via the following \n``length formula'' (see \\cite[Eq.(8)]{Kam1} and \n\\eqref{eq:length} above): \n\\begin{equation} \\label{eq:n}\n\\mu_{\\wi{l}}-\\mu_{\\wi{l-1}}=\\Ni{l} \\wi{l-1} \\cdot h_{i_{l}}.\n\\end{equation}\n\n\\vspace{3mm}\n\n{\\small \n\\hspace{55mm}\n\\input{dempoly2-fig003.tex}\n}\n\n\\vspace{3mm}\n\nNow we recall a (combinatorial) characterization of \nMirkovi\\'c-Vilonen (MV) polytopes, due to Kamnitzer \\cite{Kam1}. \nThis result holds for an arbitrary complex semisimple Lie algebra $\\mathfrak{g}$, \nbut we give its precise statement only in the case that \n$\\mathfrak{g}$ is simply-laced since we do not make use of it \nin this paper; we merely mention that \nwhen $\\mathfrak{g}$ is not simply-laced, \nthere are also conditions on the lengths \n$\\Ni{l}$, $1 \\le l \\le m$, $\\mathbf{i} \\in R(w_{0})$, \nfor the other possible values of $a_{ij}$ and $a_{ji}$\n(we refer the reader to \\cite[\\S3]{BeZe} for explicit formulas).\n\\begin{dfn} \\label{dfn:MV}\nA GGMS datum $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ is \nsaid to be a Mirkovi\\'c-Vilonen (MV) datum if \nit satisfies the following conditions: \n\n(1) If $\\mathbf{i}=(i_{1},\\,i_{2},\\,\\dots,\\,i_{m}) \\in R(w_{0})$ and \n$\\mathbf{j}=(j_{1},\\,j_{2},\\,\\dots,\\,j_{m}) \\in R(w_{0})$ are related by \na $2$-move, that is, if there exist indices $i,\\,j \\in I$ \nwith $a_{ij}=a_{ji}=0$ and an integer $0 \\le k \\le m-2$ \nsuch that $i_{l}=j_{l}$ for all $1 \\le l \\le m$ with \n$l \\ne k+1,\\,k+2$, and such that \n$i_{k+1}=j_{k+2}=i$, $i_{k+2}=j_{k+1}=j$, then \nthere hold\n\\begin{equation*}\n\\begin{cases}\n\\Ni{l}=\\Nj{l} \\quad \n \\text{for all $1 \\le l \\le m$ with $l \\ne k+1,\\,k+2$, and} \\\\[1.5mm]\n\\Ni{k+1}=\\Nj{k+2}, \\quad 
\\Ni{k+2}=\\Nj{k+1}.\n\\end{cases}\n\\end{equation*}\n\n\\vspace{5mm}\n\n\\newcommand{\\vertexa}{\n $\\mu_{\\wi{k+2}}=\\mu_{\\wi{k}s_is_j}=\n \\mu_{\\wj{k}s_js_i}=\\mu_{\\wj{k+2}}$\n}\n\n{\\small \n\\hspace*{-20mm}\n\\input{dempoly2-fig001.tex}\n}\n\n\\vspace{3mm}\n\n(2) If $\\mathbf{i}=(i_{1},\\,i_{2},\\,\\dots,\\,i_{m}) \\in R(w_{0})$ and \n$\\mathbf{j}=(j_{1},\\,j_{2},\\,\\dots,\\,j_{m}) \\in R(w_{0})$ are \nrelated by a $3$-move, that is, \nif there exist indices $i,\\,j \\in I$ \nwith $a_{ij}=a_{ji}=-1$ and an integer $0 \\le k \\le m-3$ \nsuch that $i_{l}=j_{l}$ for all $1 \\le l \\le m$ with \n$l \\ne k+1,\\,k+2,\\,k+3$, and such that\n$i_{k+1}=i_{k+3}=j_{k+2}=i$, \n$i_{k+2}=j_{k+1}=j_{k+3}=j$, then \nthere hold \n\\begin{equation*}\n\\begin{cases}\n\\Ni{l}=\\Nj{l} \\quad\n \\text{for all $1 \\le l \\le m$ with $l \\ne k+1,\\,k+2,\\,k+3$, and} \\\\[1.5mm]\n\\Nj{k+1}=\\Ni{k+2}+\\Ni{k+3}-\\min \\bigl(\\Ni{k+1},\\, \\Ni{k+3}\\bigr), \\\\[1.5mm]\n\\Nj{k+2}=\\min \\bigl(\\Ni{k+1},\\, \\Ni{k+3}\\bigr), \\\\[1.5mm]\n\\Nj{k+3}=\\Ni{k+1}+\\Ni{k+2}-\\min \\bigl(\\Ni{k+1},\\, \\Ni{k+3}\\bigr).\n\\end{cases}\n\\end{equation*}\n\n\\vspace{5mm}\n\n\\newcommand{\\vertexb}{\n $\\mu_{\\wi{k+3}}=\\mu_{\\wi{k}s_is_js_i}=\n \\mu_{\\wj{k}s_js_is_j}=\\mu_{\\wj{k+3}}$\n}\n\n{\\small \n\\hspace*{-22.5mm}\n\\input{dempoly2-fig002.tex}\n}\n\\end{dfn}\n\nThe pseudo-Weyl polytope $P(\\mu_{\\bullet})$ \nwith GGMS datum $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ \n(see \\eqref{eq:poly}) is \na Mirkovi\\'c-Vilonen (MV) polytope if and only if \nthe GGMS datum $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ is \nan MV datum (see the proof of \\cite[Proposition~5.4]{Kam1} and \nthe comment following \\cite[Theorem~7.1]{Kam1}).\nAlso, for a dominant coweight \n$\\lambda \\in X_{*}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ and a coweight \n$\\nu \\in X_{*}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$, \nan MV polytope $P=P(\\mu_{\\bullet})$ with GGMS datum \n$\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ is \nan MV 
polytope of highest vertex $\\lambda$ and lowest vertex $\\nu$ \nif and only if $\\mu_{w_{0}}=\\lambda$, $\\mu_{e}=\\nu$, and \n$P$ is contained in the convex hull $\\mathop{\\rm Conv}\\nolimits(W \\cdot \\lambda)$ of \nthe $W$-orbit $W \\cdot \\lambda \\subset \\mathfrak{h}_{\\mathbb{R}}$ \n(see \\cite[Proposition~7]{A}); \nwe denote by $\\mathcal{MV}(\\lambda)_{\\nu}$ the set of MV polytopes \nof highest vertex $\\lambda$ and lowest vertex $\\nu$. \nFor each dominant coweight \n$\\lambda \\in X_{*}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$, we set\n\\begin{equation*}\n\\mathcal{MV}(\\lambda):=\\bigsqcup_{\n \\nu \\in X_{*}(T)} \\mathcal{MV}(\\lambda)_{\\nu}.\n\\end{equation*}\n\n\\subsection{Relation between MV polytopes and MV cycles.}\n\\label{subsec:geom}\n\nIn this subsection, we review the relation between MV polytopes \nand MV cycles in the affine Grassmannian. \n\nLet us recall the definition of MV cycles \nin the affine Grassmannian, following \\cite{MV2} (and \\cite{A}). \nLet $G$ be a complex, connected, reductive algebraic group \nas in (the beginning of) \\S\\ref{subsec:notation}. \nLet $\\mathcal{O} = \\mathbb{C}[[t]]$ denote the ring of formal power series, \nand $\\mathcal{K} = \\mathbb{C}((t))$ the field of formal Laurent series \n(the fraction field of $\\mathcal{O}$). \nThe affine Grassmannian $\\CG r$ for $G$ over $\\mathbb{C}$ is \ndefined to be the quotient space $G(\\mathcal{K})\/G(\\mathcal{O})$, \nequipped with the structure of a complex, algebraic ind-variety, \nwhere $G(\\mathcal{K})$ denotes the set of $\\mathcal{K}$-valued points of $G$, and \n$G(\\mathcal{O}) \\subset G(\\mathcal{K})$ denotes the set of $\\mathcal{O}$-valued points of $G$; \nwe denote by $\\pi:G(\\mathcal{K}) \\twoheadrightarrow \\CG r=G(\\mathcal{K})\/G(\\mathcal{O})$ \nthe natural quotient map, which is locally trivial \nin the Zariski topology. 
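\n\\begin{ex}\nFor example, if $G=GL_{n}$, then (at the level of sets) $\\CG r$ can be identified with the set of $\\mathcal{O}$-lattices in $\\mathcal{K}^{n}$: since $G(\\mathcal{O})$ is precisely the stabilizer of the standard lattice $\\mathcal{O}^{n} \\subset \\mathcal{K}^{n}$, the coset $gG(\\mathcal{O})$ corresponds to the lattice $g \\cdot \\mathcal{O}^{n}$.\n\\end{ex}\n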
In the following, \nfor a subgroup $H \\subset G(\\mathcal{K})$ \nthat is stable under the adjoint action of $T$ and \nfor an element $w$ of the Weyl group $W \\cong N_{G}(T)\/T$ of $G$, \nwe denote by ${}^{w}H$ the $w$-conjugate $\\dot{w}H\\dot{w}^{-1}$ of $H$, \nwhere $\\dot{w} \\in N_{G}(T)$ is a lift of $w \\in W$. \n\nSince each coweight $\\nu \\in X_{*}(T)=\\mathop{\\rm Hom}\\nolimits(\\mathbb{C}^{\\ast},\\,T)$ \nis a regular map from $\\mathbb{C}^{\\ast}$ to $T \\subset G$, \nit gives a point $t^{\\nu} \\in G (\\mathcal{K})$, \nwhich, in turn, descends to a point \n$[t^{\\nu}] \\in \\CG r=G(\\mathcal{K})\/G(\\mathcal{O})$. \nThe following simple lemma will be used in \nthe proof of Lemma~\\ref{lem:tran}. \n\\begin{lem} \\label{lem:emb}\nLet $L \\subset G$ be a complex, connected, \nreductive algebraic group containing the maximal torus $T$ of $G$. \nThen, for each $\\nu \\in X_{*}(T)$, the inclusion \n$L(\\mathcal{K}) [t^{\\nu}] \\hookrightarrow \\CG r$ gives an embedding of \nthe affine Grassmannian for $L$ into $\\CG r$.\n\\end{lem}\n\n\\begin{proof}\nObserve that \n$\\bigl(t^{\\nu} G (\\mathcal{O}) t^{-\\nu}\\bigr) \\cap L (\\mathcal{K}) \n = t^{\\nu} L (\\mathcal{O}) t^{-\\nu}$.\nHence the map $i_{L}:L(\\mathcal{K}) \\rightarrow \\CG r$, \n$g \\mapsto g[t^{\\nu}]$, factors through \n$L(\\mathcal{K}) \/ (t^{\\nu} L (\\mathcal{O}) t^{-\\nu})$. \nSince $t^{\\nu} \\in L (\\mathcal{K})$, we conclude that \nthe map $i_{L}:L(\\mathcal{K}) \\rightarrow \\CG r$ descends to a map from \nthe affine Grassmannian for $L$ to $\\CG r$, as desired. \n(This construction is only at the level of sets, \nbut we can indeed show that the map above is compatible \nwith the ind-variety structures.)\n\\end{proof}\n\nFor each $\\nu \\in X_{*}(T)$, we set\n\\begin{equation*}\n\\CG r^{\\nu}:=G(\\mathcal{O})[t^{\\nu}] \\subset \\CG r,\n\\end{equation*}\nthe $G(\\mathcal{O})$-orbit of $[t^{\\nu}]$, which is a smooth \nquasi-projective algebraic variety over $\\mathbb{C}$. 
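\n\\begin{ex}\nAs a simple illustration, take $G=T=\\mathbb{C}^{\\ast}$, so that $X_{*}(T) \\cong \\mathbb{Z}$. Since every element of $G(\\mathcal{K})=\\mathcal{K}^{\\ast}$ can be written as $t^{n}u$ with $n \\in \\mathbb{Z}$ and $u \\in \\mathcal{O}^{\\ast}=G(\\mathcal{O})$, we see that, as a set, $\\CG r=\\mathcal{K}^{\\ast}\/\\mathcal{O}^{\\ast} \\cong \\mathbb{Z}$ via the $t$-adic valuation; under this identification, each orbit $\\CG r^{\\nu}$, $\\nu \\in X_{*}(T)$, is the single point $[t^{\\nu}]$.\n\\end{ex}\n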
\nAlso, for each $\\nu \\in X_{*}(T)$ and $w \\in W$, \nwe set \n\\begin{equation*}\nS_{\\nu}^{w} := {}^{w}U(\\mathcal{K})[t^{\\nu}] \\subset \\CG r,\n\\end{equation*}\nthe ${}^{w}U(\\mathcal{K})$-orbit of $[t^{\\nu}]$, which is a (locally closed) \nind-subvariety of $\\CG r$; we write simply $S_{\\nu}$ for $S_{\\nu}^{e}$.\nThen, we know the following two kinds \nof decompositions of $\\CG r$ into orbits. \nFirst, we have\n\\begin{equation*}\n\\CG r=\\bigsqcup_{\\lambda \\in X_{*}(T)_{+}} \\CG r^{\\lambda} \\qquad \n\\text{(Cartan decomposition)},\n\\end{equation*}\nwith $\\CG r^{w \\cdot \\lambda}=\\CG r^{\\lambda}$ \nfor $\\lambda \\in X_{*}(T)_{+}$ and $w \\in W$; \nnote that (see, for example, \\cite[\\S2]{MV2}) \nfor each $\\lambda \\in X_{*}(T)_{+}$, the quasi-projective \nvariety $\\CG r^{\\lambda}$ is simply-connected, and of dimension \n$2\\pair{\\lambda}{\\rho}$, where $\\rho$ denotes the half-sum \nof the positive roots $\\alpha \\in \\Delta_{+}$ for $G$, i.e., \n$2\\rho=\\sum_{\\alpha \\in \\Delta_{+}}\\alpha$. 
\nSecond, we have for each $w \\in W$, \n\\begin{equation*}\n\\CG r=\\bigsqcup_{\\nu \\in X_{*}(T)} S_{\\nu}^{w} \\qquad\n\\text{(Iwasawa decomposition)}.\n\\end{equation*}\nMoreover, the (Zariski-) closure relations among these orbits are \ndescribed as follows (see \\cite[\\S2 and \\S3]{MV2}): \n\\begin{equation} \\label{eq:Grlam}\n\\ol{\\CG r^{\\lambda}}=\n \\bigsqcup_{\n \\begin{subarray}{c} \n \\lambda' \\in X_{\\ast}(T)_{+} \\\\[1mm]\n \\lambda' \\le \\lambda\n \\end{subarray}\n } \\CG r^{\\lambda'}\n\\qquad \\text{for $\\lambda \\in X_{*}(T)_{+}$};\n\\end{equation}\n\\begin{equation} \\label{eq:Snuw}\n\\ol{S_{\\nu}^{w}}=\n \\bigsqcup_{\n \\begin{subarray}{c} \n \\gamma \\in X_{\\ast}(T) \\\\[1mm]\n w^{-1} \\cdot \\gamma \\ge w^{-1} \\cdot \\nu\n \\end{subarray}\n } S_{\\gamma}^{w}\n\\qquad \\text{for $\\nu \\in X_{\\ast}(T)$ and $w \\in W$}.\n\\end{equation}\n\\begin{rem} \\label{rem:Snuw}\nLet $\\mathcal{X} \\subset \\CG r$ be an irreducible algebraic subvariety, \nand $\\nu \\in X_{*}(T)$, $w \\in W$. Then, \nit follows from \\eqref{eq:Snuw} that the intersection \n$\\mathcal{X} \\cap S_{\\nu}^{w}$ is an open dense subset of $\\mathcal{X}$ \nif and only if $\\mathcal{X} \\cap S_{\\nu}^{w} \\ne \\emptyset$ and \n$\\mathcal{X} \\cap S_{\\gamma}^{w} = \\emptyset$ \nfor every $\\gamma \\in X_{*}(T)$ with \n$w^{-1} \\cdot \\gamma \\not\\ge w^{-1} \\cdot \\nu$. \n\\end{rem}\n\nFor $\\lambda \\in X_{*}(T)_{+}$, \nlet $L(\\lambda)$ denote the irreducible finite-dimensional \nrepresentation of the (complex) Langlands dual \ngroup $G^{\\vee}$ of $G$ with highest weight $\\lambda$, and \n$\\Omega(\\lambda) \\subset X_{*}(T)$ \nthe set of weights of $L(\\lambda)$. \nWe know from \\cite[Theorem~3.2 and Remark~3.3]{MV2} that \n$\\nu \\in X_{*}(T)$ is an element of $\\Omega(\\lambda)$ \nif and only if $\\CG r^{\\lambda} \\cap S_{\\nu} \\ne \\emptyset$, \nand then the intersection $\\CG r^{\\lambda} \\cap S_{\\nu}$ \nis of pure dimension $\\pair{\\lambda-\\nu}{\\rho}$. 
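\n\n\\begin{ex}\nAs a standard illustration of these dimension formulas, let $G=SL_{2}$, with unique simple root $\\alpha$ and simple coroot $\\alpha^{\\vee}$, so that $X_{*}(T)=\\mathbb{Z}\\alpha^{\\vee}$ and $\\rho=\\alpha\/2$. For $\\lambda=\\alpha^{\\vee}$, the Langlands dual group is $G^{\\vee}=PGL_{2}$, and $L(\\lambda)$ is its three-dimensional adjoint representation, with $\\Omega(\\lambda)=\\bigl\\{\\alpha^{\\vee},\\,0,\\,-\\alpha^{\\vee}\\bigr\\}$. Accordingly, $\\CG r^{\\lambda}$ is of dimension $2\\pair{\\lambda}{\\rho}=2$, and the intersections $\\CG r^{\\lambda} \\cap S_{\\nu}$ for $\\nu=\\alpha^{\\vee},\\,0,\\,-\\alpha^{\\vee}$ are of pure dimension $0$, $1$, $2$, respectively.\n\\end{ex}\n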
\n\nNow we come to the definition of MV cycles \nin the affine Grassmannian.\n\\begin{dfn}[{\\cite[\\S3]{MV2}; see also \\cite[\\S5.3]{A}}] \\label{dfn:MVcycle}\nLet $\\lambda \\in X_{*}(T)_{+}$ and $\\nu \\in X_{*}(T)$ be such that \n$\\CG r^{\\lambda} \\cap S_{\\nu} \\ne \\emptyset$, \ni.e., $\\nu \\in \\Omega(\\lambda)$. \nAn MV cycle of highest weight $\\lambda$ and weight $\\nu$ is \ndefined to be an irreducible component of the (Zariski-) closure of \nthe intersection $\\CG r^{\\lambda} \\cap S_{\\nu}$. \n\\end{dfn}\n\nWe denote by $\\mathcal{Z}(\\lambda)_{\\nu}$ the set of MV cycles of \nhighest weight $\\lambda \\in X_{*}(T)_{+}$ and weight $\\nu \\in X_{*}(T)$. \nAlso, for each $\\lambda \\in X_{*}(T)_{+}$, we set \n\\begin{equation*}\n\\mathcal{Z}(\\lambda) := \n \\bigsqcup_{\\nu \\in X_{*}(T)} \\mathcal{Z}(\\lambda)_{\\nu},\n\\end{equation*}\nwhere $\\mathcal{Z}(\\lambda)_{\\nu} := \\emptyset$ if \n$\\CG r^{\\lambda} \\cap S_{\\nu}=\\emptyset$. \n\\begin{ex}[{cf.~\\cite[Eq.(3.6)]{MV2}}] \\label{ex:MVc}\nFor each $\\lambda \\in X_{\\ast}(T)_{+}$, we have\n\\begin{equation*}\n\\mathcal{Z}(\\lambda)_{\\lambda} = \\bigl\\{\\,[ t^{\\lambda}]\\,\\bigr\\}, \n\\quad \\text{and} \\quad\n\\mathcal{Z}(\\lambda)_{w_{0} \\lambda} = \\bigl\\{\\,\\ol{\\CG r^{\\lambda}}\\,\\bigr\\}.\n\\end{equation*}\n\\end{ex}\n\\begin{rem}[{\\cite[Lemma~5.2]{NP}, \\cite[Eq.(3.6)]{MV2}}] \n\\label{rem:extcyc}\nLet $\\lambda \\in X_{*}(T)_{+}$. If $\\nu \\in X_{*}(T)$ is \nof the form $\\nu=x \\cdot \\lambda$ for some $x \\in W$, then \n\\begin{equation*}\n\\mathbf{b}_{x \\cdot \\lambda}:=\n\\ol{ U(\\mathcal{O})[t^{x \\cdot \\lambda}] } \\subset \n\\ol{ G(\\mathcal{O})[t^{\\lambda}] \\cap U(\\mathcal{K})[t^{x \\cdot \\lambda}] }=\n\\ol{ \\CG r^{\\lambda} \\cap S_{x \\cdot \\lambda} }\n\\end{equation*}\nis the unique MV cycle of highest weight $\\lambda$ \nand weight $x \\cdot \\lambda$ \n(extremal MV cycle of weight $x \\cdot \\lambda$). 
\nFor an explicit (combinatorial) description \nof the corresponding extremal MV polytope, \nsee \\S\\ref{subsec:extpoly} below. \n\\end{rem}\n\nMotivated by the discovery of MV cycles in the affine Grassmannian, \nAnderson \\cite{A} proposed considering the ``moment map images'' \nof MV cycles as follows: Let $\\lambda \\in X_{*}(T)_{+}$. \nFor an MV cycle $\\mathbf{b} \\in \\mathcal{Z}(\\lambda)$, we set \n\\begin{equation*}\nP(\\mathbf{b}):=\\mathop{\\rm Conv}\\nolimits \\bigl\\{\n \\nu \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}} \\mid \n [t^{\\nu}] \\in \\mathbf{b} \\bigr\\},\n\\end{equation*}\nand call $P(\\mathbf{b}) \\subset \\mathfrak{h}_{\\mathbb{R}}$ the moment map \nimage of $\\mathbf{b}$\\,; note that $P(\\mathbf{b})$ is indeed \na convex polytope in $\\mathfrak{h}_{\\mathbb{R}}$. \n\nFor the rest of this paper, except in \\S\\ref{subsec:BG} and the Appendix, \nwe assume that $G$ (and hence its Lie algebra $\\mathfrak{g}$) is semisimple. \nThe following theorem, due to Kamnitzer \\cite{Kam1}, \nestablishes an explicit relationship between \nMV polytopes and MV cycles. \n\\begin{thm} \\label{thm:Kam1}\n{\\rm (1)} \nLet $\\lambda \\in X_{*}(T)_{+}$ and $\\nu \\in X_{*}(T)$ be \nsuch that $\\CG r^{\\lambda} \\cap S_{\\nu} \\ne \\emptyset$. \nIf $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ denotes the GGMS datum \nof an MV polytope $P \\in \\mathcal{MV}(\\lambda)_{\\nu}$, that is, \n$P=P(\\mu_{\\bullet}) \\in \\mathcal{MV}(\\lambda)_{\\nu}$, then \n\\begin{equation*}\n\\mathbf{b}(\\mu_{\\bullet}):=\n\\ol{ \\bigcap_{w \\in W} S^{w}_{\\mu_{w}} } \\subset \\ol{\\CG r^{\\lambda}}\n\\end{equation*}\nis an MV cycle that belongs to $\\mathcal{Z}(\\lambda)_{\\nu}$. \n\n{\\rm (2)} \nLet $\\lambda \\in X_{*}(T)_{+}$. \nFor an MV polytope $P=P(\\mu_{\\bullet}) \\in \\mathcal{MV}(\\lambda)$ \nwith GGMS datum $\\mu_{\\bullet}$, \nwe set $\\Phi_{\\lambda}(P):=\\mathbf{b}(\\mu_{\\bullet})$. 
Then, \nthe map $\\Phi_{\\lambda}:\\mathcal{MV}(\\lambda) \\rightarrow \\mathcal{Z}(\\lambda)$, \n$P \\mapsto \\Phi_{\\lambda}(P)$, is a bijection \nfrom $\\mathcal{MV}(\\lambda)$ onto $\\mathcal{Z}(\\lambda)$ \nsuch that $\\Phi_{\\lambda}(\\mathcal{MV}(\\lambda)_{\\nu})=\n\\mathcal{Z}(\\lambda)_{\\nu}$ for all $\\nu \\in X_{\\ast}(T)$\nwith $\\CG r^{\\lambda} \\cap S_{\\nu} \\ne \\emptyset$. \nIn particular, for each MV cycle $\\mathbf{b} \\in \\mathcal{Z}(\\lambda)$, \nthere exists a unique MV datum $\\mu_{\\bullet}$ \nsuch that $\\mathbf{b}=\\mathbf{b}(\\mu_{\\bullet})$, and in this case, \nthe moment map image $P(\\mathbf{b})$ of the MV cycle $\\mathbf{b}=\\mathbf{b}(\\mu_{\\bullet})$\nis identical to the MV polytope $P(\\mu_{\\bullet}) \\in \\mathcal{MV}(\\lambda)$. \n\\end{thm}\n\\begin{rem}[{\\cite[\\S2.2]{Kam1}}] \\label{rem:moment}\nFor $\\nu \\in X_{*}(T)$ and $w \\in W$, \nthe ``moment map image'' $P(\\ol{S_{\\nu}^{w}})$ of \n$\\ol{S_{\\nu}^{w}}$ is, by definition, \nthe convex hull in $\\mathfrak{h}_{\\mathbb{R}}$ of the set\n$\\bigl\\{\n \\gamma \\in X_{*}(T) \\subset \\mathfrak{h}_{\\mathbb{R}} \\mid \n [t^{\\gamma}] \\in \\ol{S_{\\nu}^{w}}\n\\bigr\\} \\subset \\mathfrak{h}_{\\mathbb{R}}$,\nwhich is identical to the (shifted) convex cone\n$\\bigl\\{\n v \\in \\mathfrak{h}_{\\mathbb{R}} \\mid \n w^{-1} \\cdot v - w^{-1} \\cdot \\nu \\in \n \\textstyle{\\sum_{j \\in I}\\mathbb{R}_{\\ge 0}h_{j}}\n\\bigr\\}$. 
\n\\end{rem}\n\n\\subsection{Lusztig-Berenstein-Zelevinsky (LBZ) crystal structure.}\n\\label{subsec:cry-MV}\n\nWe keep the notation and assumptions of \\S\\ref{subsec:MV}.\nFor an MV datum $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ and $j \\in I$, \nwe denote by $f_{j}\\mu_{\\bullet}$ \n(resp., $e_{j}\\mu_{\\bullet}$ if $\\mu_{e} \\ne \\mu_{s_{j}}$; \nnote that $\\mu_{s_{j}}-\\mu_{e} \\in \n\\mathbb{Z}_{\\ge 0}h_{j}$ by \\eqref{eq:length})\nthe unique MV datum $\\mu_{\\bullet}'=(\\mu_{w}')_{w \\in W}$ \nsuch that $\\mu_{e}'=\\mu_{e}-h_{j}$ \n(resp., $\\mu_{e}'=\\mu_{e}+h_{j}$) and \n$\\mu_{w}'=\\mu_{w}$ for all $w \\in W$ with $s_{j}w < w$ \n(see \\cite[Theorem~3.5]{Kam2} and its proof); \nnote that $\\mu_{w_{0}}'=\\mu_{w_{0}}$ and \n$\\mu_{s_j}'=\\mu_{s_j}$. \n\nLet $\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ \nbe a dominant coweight. \nFollowing \\cite[\\S6.2]{Kam2}, we endow $\\mathcal{MV}(\\lambda)$ \nwith the Lusztig-Berenstein-Zelevinsky (LBZ) \ncrystal structure for $U_{q}(\\mathfrak{g}^{\\vee})$\nas follows. \nLet $P=P(\\mu_{\\bullet}) \\in \\mathcal{MV}(\\lambda)$ be \nan MV polytope with GGMS datum \n$\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$.\nThe weight $\\mathop{\\rm wt}\\nolimits(P)$ of $P$ is, by definition, \nequal to the vertex $\\mu_{e} \\in \\lambda-Q^{\\vee}_{+}$. 
\nFor each $j \\in I$, \nwe define the lowering Kashiwara operator\n$f_{j}:\\mathcal{MV}(\\lambda) \\cup \\{\\mathbf{0}\\} \\rightarrow \n \\mathcal{MV}(\\lambda) \\cup \\{\\mathbf{0}\\}$ and \nthe raising Kashiwara operator\n$e_{j}:\\mathcal{MV}(\\lambda) \\cup \\{\\mathbf{0}\\} \\rightarrow \n \\mathcal{MV}(\\lambda) \\cup \\{\\mathbf{0}\\}$ by: \n\\begin{align*}\ne_{j}\\mathbf{0}=f_{j}\\mathbf{0} & :=\\mathbf{0}, \\\\[3mm]\nf_{j}P=f_{j}P(\\mu_{\\bullet}) & :=\n\\begin{cases}\n P(f_{j}\\mu_{\\bullet}) \n & \\text{if $P(f_{j}\\mu_{\\bullet}) \\subset \n \\mathop{\\rm Conv}\\nolimits (W \\cdot \\lambda)$}, \\\\[1.5mm]\n \\mathbf{0} & \\text{otherwise}, \n \\end{cases} \\\\[3mm]\ne_{j}P=e_{j}P(\\mu_{\\bullet}) & :=\n\\begin{cases}\n P(e_{j}\\mu_{\\bullet}) & \n \\text{if $\\mu_{e} \\ne \\mu_{s_{j}}$ \n (i.e., $\\mu_{s_{j}}-\\mu_{e} \\in \\mathbb{Z}_{> 0}h_{j}$)}, \\\\[1.5mm]\n \\mathbf{0} & \\text{otherwise}, \n \\end{cases}\n\\end{align*}\nwhere $\\mathbf{0}$ is an additional element, \nnot contained in $\\mathcal{MV}(\\lambda)$. \nFor $j \\in I$, we set \n$\\varepsilon_{j}(P):=\n \\max \\bigl\\{k \\in \\mathbb{Z}_{\\ge 0} \\mid e_{j}^{k}P \\ne \\mathbf{0} \\bigr\\}$ and \n$\\varphi_{j}(P):=\n \\max \\bigl\\{k \\in \\mathbb{Z}_{\\ge 0} \\mid f_{j}^{k}P \\ne \\mathbf{0} \\bigr\\}$;\nnote that for each $j \\in I$, we have\n\\begin{equation} \\label{eq:ax}\n\\varphi_{j}(P)=\n\\pair{\\mathop{\\rm wt}\\nolimits(P)}{\\alpha_{j}}+\\varepsilon_{j}(P) \n \\qquad \\text{for all $P \\in \\mathcal{MV}(\\lambda)$.}\n\\end{equation}\n\\begin{rem} \\label{rem:ve}\nLet $P=P(\\mu_{\\bullet}) \\in \\mathcal{MV}(\\lambda)$ be \nan MV polytope with GGMS datum \n$\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$. 
\nThen, we deduce from the definition of \nthe raising Kashiwara operators $e_{j}$ \n(or, the MV datum $e_{j}\\mu_{\\bullet}$) that \n$\\mu_{s_{j}}-\\mu_{e}=\\varepsilon_{j}(P) h_{j}$ \nfor $j \\in I$.\n\\end{rem}\n\n\\begin{thm}[{\\cite[Theorem~6.4]{Kam2}}] \\label{thm:kam-int}\nThe set $\\mathcal{MV}(\\lambda)$, equipped with the maps \n$\\mathop{\\rm wt}\\nolimits$, $e_{j},\\,f_{j} \\ (j \\in I)$, and \n$\\varepsilon_{j},\\,\\varphi_{j} \\ (j \\in I)$ above, is \na crystal for $U_{q}(\\mathfrak{g}^{\\vee})$. \nMoreover, there exists a unique isomorphism \n$\\Psi_{\\lambda}:\\mathcal{B}(\\lambda) \n \\stackrel{\\sim}{\\rightarrow} \n \\mathcal{MV}(\\lambda)$ of crystals for $U_{q}(\\mathfrak{g}^{\\vee})$. \n\\end{thm}\n\n\\begin{rem}\nKamnitzer \\cite{Kam2} proved that \nfor each $\\lambda \\in X_{*}(T)_{+}$, the bijection \n$\\Phi_{\\lambda}:\\mathcal{MV}(\\lambda) \\rightarrow \\mathcal{Z}(\\lambda)$ in \nTheorem~\\ref{thm:Kam1}\\,(2) also intertwines the LBZ crystal \nstructure on $\\mathcal{MV}(\\lambda)$ and the crystal structure on $\\mathcal{Z}(\\lambda)$ \ndefined in \\cite{BrGa} (and \\cite{BFG}). \n\\end{rem}\n\nFor each $x \\in W$, we denote by \n$\\mathcal{MV}_{x}(\\lambda) \\subset \\mathcal{MV}(\\lambda)$ \nthe image $\\Psi_{\\lambda}(\\mathcal{B}_{x}(\\lambda))$ \nof the Demazure crystal \n$\\mathcal{B}_{x}(\\lambda) \\subset \\mathcal{B}(\\lambda)$ \nassociated to $x \\in W$ under the isomorphism \n$\\Psi_{\\lambda}:\\mathcal{B}(\\lambda) \n \\stackrel{\\sim}{\\rightarrow} \n \\mathcal{MV}(\\lambda)$ in Theorem~\\ref{thm:kam-int};\nfor a combinatorial description of $\\mathcal{MV}_{x}(\\lambda)$ \nin terms of the lengths $\\Ni{l} \\in \\mathbb{Z}_{\\ge 0}$, \n$\\mathbf{i} \\in R(w_{0})$, $1 \\le l \\le m$, \nof edges of an MV polytope, see \\cite[\\S3.2]{NS-dp}. \n\n\\subsection{Extremal MV polytopes.}\n\\label{subsec:extpoly}\n\nLet $\\mathfrak{g}$ be a complex semisimple Lie algebra as in \n(the second paragraph of) \\S\\ref{subsec:notation}. 
\nLet $\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ be \na dominant coweight. \nFor each $x \\in W$, we denote by \n$P_{x \\cdot \\lambda}$ the image of the extremal element \n$u_{x \\cdot \\lambda} \\in \\mathcal{B}(\\lambda)$ of weight \n$x \\cdot \\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ \nunder the isomorphism \n$\\Psi_{\\lambda}:\\mathcal{B}(\\lambda) \n \\stackrel{\\sim}{\\rightarrow} \n \\mathcal{MV}(\\lambda)$\nin Theorem~\\ref{thm:kam-int}; \nwe call $P_{x \\cdot \\lambda} \\in \\mathcal{MV}(\\lambda)$ \nthe extremal MV polytope of weight $x \\cdot \\lambda$.\nWe know the following polytopal description of \nthe extremal MV polytopes \nfrom \\cite[Theorem~4.1.5\\,(2)]{NS-dp}.\n\\begin{prop} \\label{prop:ext}\nLet $\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ \nbe a dominant coweight, and $x \\in W$. \nThe extremal MV polytope \n$P_{x \\cdot \\lambda}$ of weight $x \\cdot \\lambda$ \nis identical to the convex hull \n$\\mathop{\\rm Conv}\\nolimits(W_{\\le x} \\cdot \\lambda)$ \nin $\\mathfrak{h}_{\\mathbb{R}}$ of the set $W_{\\le x} \\cdot \\lambda$, \nwhere $W_{\\le x}$ denotes the subset \n$\\bigl\\{z \\in W \\mid z \\le x\\bigr\\}$ of $W$. \n\\end{prop}\n\n\\begin{rem}\nIn \\cite{NS-dp}, we proved Proposition~\\ref{prop:ext} above and \nTheorem~\\ref{thm:GGMS-Ext} below in the case that \n$\\mathfrak{g}$ is simply-laced. However, \nthese results hold also in the case that \n$\\mathfrak{g}$ is not simply-laced; for example, we can use \na standard technique of ``folding'' by diagram automorphisms \n(see \\cite{NS-fm}, \\cite{Hong}, and also \\cite{Lu08}). 
\n\\end{rem}\n\n\\begin{rem} \\label{rem:ext}\nIt follows from Theorem~\\ref{thm:Kam1} \nthat for each $\\lambda \\in X_{*}(T)_{+} \\subset \\mathfrak{h}_{\\mathbb{R}}$ \nand $x \\in W$, the extremal MV polytope $P_{x \\cdot \\lambda}$ \nis identical to the moment map image $P(\\mathbf{b}_{x \\cdot \\lambda})$ \nof the extremal MV cycle $\\mathbf{b}_{x \\cdot \\lambda}$ \n(see Remark~\\ref{rem:extcyc}). \nIn particular, the highest weight element \n$P_{e \\cdot \\lambda}=P_{\\lambda}$ of $\\mathcal{MV}(\\lambda)$ \nis identical to the set $P([t^{\\lambda}])=\\bigl\\{\\lambda\\bigr\\}$, \nand the lowest weight element $P_{w_{0} \\cdot \\lambda}$ of \n$\\mathcal{MV}(\\lambda)$ is identical to the set \n$P(\\ol{\\CG r^{\\lambda}})=\\mathop{\\rm Conv}\\nolimits(W \\cdot \\lambda)$. \n\\end{rem}\n\nThe GGMS datum of an extremal MV polytope is given \nas follows (see \\cite[\\S4.1]{NS-dp}). \nLet us fix a dominant coweight \n$\\lambda \\in X_{*}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ \nand $x \\in W$ arbitrarily. \nLet $p$ denote the length $\\ell(xw_{0})$ of $xw_{0} \\in W$. 
\nFor each $\\mathbf{i}=(i_{1},\\,i_{2},\\,\\dots,\\,i_{m}) \\in R(w_{0})$, \nwith $m=\\ell(w_{0})$, we set \n\\begin{equation*}\nS(xw_{0},\\,\\mathbf{i})=\\left\\{\n\\begin{array}{l|l}\n (a_{1},\\,a_{2},\\,\\dots,\\,a_{p}) \\in [1,m]_{\\mathbb{Z}}^{p} \\ & \\ \n\\begin{array}{l}\n1 \\le a_{1} < a_{2} < \\cdots < a_{p} \\le m, \\\\[1.5mm]\n\\si{a_1}\\si{a_2} \\cdots \\si{a_p}=xw_{0}\n\\end{array}\n\\end{array}\n\\right\\},\n\\end{equation*}\nwhere $[1,m]_{\\mathbb{Z}}:=\\bigl\\{a \\in \\mathbb{Z} \\mid 1 \\le a \\le m\\bigr\\}$.\nWe denote by $\\min S(xw_{0},\\,\\mathbf{i})$ \nthe minimum element of the set $S(xw_{0},\\,\\mathbf{i})$ \nin the lexicographic ordering;\nrecall that the lexicographic ordering $\\succeq$ on \n$S(xw_{0},\\,\\mathbf{i})$ is defined as follows: \n$(a_{1},\\,a_{2},\\,\\dots,\\,a_{p}) \\succ\n (b_{1},\\,b_{2},\\,\\dots,\\,b_{p})$ \nif there exists some integer $1 \\le q_0 \\le p$ \nsuch that $a_{q}=b_{q}$ for all $1 \\le q \\le q_0-1$ \nand $a_{q_0} > b_{q_0}$. \nNow we define a sequence \n$\\yi{0},\\,\\yi{1},\\,\\dots,\\,\\yi{m}$ of elements of $W$ \ninductively by the following formula (see \\cite[\\S4.2]{NS-dp}): \n\\begin{equation} \\label{eq:yi}\n\\yi{m}=e, \\qquad \n\\yi{l-1}=\n \\begin{cases}\n \\yi{l} & \\text{if $l$ appears in $\\min S(xw_{0},\\,\\mathbf{i})$}, \\\\[1.5mm]\n s_{\\bti{l}}\\yi{l} & \\text{otherwise}\n \\end{cases}\n\\end{equation}\nfor $1 \\le l \\le m$, \nwhere we set $\\bti{l}:=\\wi{l-1} \\cdot \\alpha_{i_{l}}$ \nfor $1 \\le l \\le m$, and denote \nby $s_{\\beta} \\in W$ the reflection \nwith respect to a root $\\beta$. \n\\begin{rem} \\label{rem:zi}\nThe element $\\yi{l} \\in W$ above does not depend \non the dominant coweight $\\lambda \\in X_{*}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$. \n\\end{rem}\n\\begin{rem} \\label{rem:vi}\nLet $\\mathbf{i}=(i_{1},\\,i_{2},\\,\\dots,\\,i_{m}) \\in R(w_{0})$. 
\nWe define a sequence \n$\\vi{0},\\,\\vi{1},\\,\\dots,\\,\\vi{m}$ of elements of $W$ \ninductively by the following formula: \n\\begin{equation*}\n\\vi{m}=e, \\qquad \n\\vi{l-1}=\n \\begin{cases}\n s_{i_{l}}\\vi{l} & \\text{if $l$ appears in $\\min S(xw_{0},\\,\\mathbf{i})$}, \\\\[1.5mm]\n \\vi{l} & \\text{otherwise}\n \\end{cases}\n\\end{equation*}\nfor $1 \\le l \\le m$; we see from the definition of \nthe set $S(xw_{0},\\,\\mathbf{i})$ that \n$\\ell(\\vi{l-1})=\\ell(\\vi{l})+1$ \nif $l$ appears in $\\min S(xw_{0},\\,\\mathbf{i})$. \nThen we know from \\cite[Lemma~4.2.1]{NS-dp}\nthat $\\yi{l}=\\wi{l}\\vi{l}w_{0}^{-1}$ for every $0 \\le l \\le m$. \n\\end{rem}\n\\begin{thm} \\label{thm:GGMS-Ext}\nKeep the notation and assumptions above. \nLet $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ be the GGMS datum of \nthe extremal MV polytope $P_{x \\cdot \\lambda}$, i.e., \n$P_{x \\cdot \\lambda}=P(\\mu_{\\bullet})$. \nLet $w \\in W$ be such that $w=\\wi{l}$ for some \n$\\mathbf{i} \\in R(w_{0})$ and $0 \\le l \\le m$. Then, we have\n$\\mu_{w}=\\mu_{\\wi{l}}=\\yi{l} \\cdot \\lambda$.\n\\end{thm}\n\nThe following results on extremal MV polytopes and extremal MV cycles \nplay an important role in the proof of Theorem~\\ref{thm:tensor}\ngiven in \\S\\ref{subsec:prf-tensor}. \n\\begin{lem} \\label{lem:ext1}\nKeep the notation and assumptions of \nTheorem~\\ref{thm:GGMS-Ext}.\nFor each $w \\in W$ and $j \\in I$ with $w < ws_{j}$, \nwe have either \n{\\rm (a)} $\\mu_{ws_{j}}=\\mu_{w}$, or \n{\\rm (b)} $\\mu_{ws_{j}}=ws_{j}w^{-1} \\cdot \\mu_{w}$.\nMoreover, in both of the cases {\\rm (a)} and {\\rm (b)}, \nwe have $\\pair{\\mu_{ws_{j}}}{w \\cdot \\alpha_{j}} \\ge 0$. \n\\end{lem}\n\n\\begin{proof}\nTake $\\mathbf{i}=(i_{1},\\,i_{2},\\,\\dots,\\,i_{m}) \\in R(w_{0})$ such that \n$\\wi{l-1}=w$ and $\\wi{l}=ws_{j}$ for some $1 \\le l \\le m$; \nnote that $i_{l}=j$, $\\bti{l}=\\wi{l-1} \\cdot \\alpha_{i_{l}}=\nw \\cdot \\alpha_{j}$, and hence $s_{\\bti{l}}=ws_{j}w^{-1}$. 
\nSince $\\mu_{w}=\\mu_{\\wi{l-1}}=\\yi{l-1} \\cdot \\lambda$ and \n$\\mu_{ws_{j}}=\\mu_{\\wi{l}}=\\yi{l} \\cdot \\lambda$ \nby Theorem~\\ref{thm:GGMS-Ext}, and since \n$\\yi{l-1}$ is equal to $\\yi{l}$ or \n$s_{\\bti{l}}\\yi{l}=ws_{j}w^{-1}\\yi{l}$ \nby definition, it follows immediately that \neither (a) $\\mu_{ws_{j}}=\\mu_{w}$ or \n(b) $\\mu_{ws_{j}}=ws_{j}w^{-1} \\cdot \\mu_{w}$ holds. \n\nWe will show that \n$\\pair{\\mu_{ws_{j}}}{w \\cdot \\alpha_{j}} \\ge 0$. \nFirst, let us assume that \n$l$ does not appear in $\\min S(xw_{0},\\,\\mathbf{i})$.\nThen, we have $\\yi{l-1}=s_{\\bti{l}}\\yi{l}$ by definition, \nand hence $\\mu_{\\wi{l-1}}=s_{\\bti{l}} \\cdot \\mu_{\\wi{l}}$ \nby Theorem~\\ref{thm:GGMS-Ext}. Also, \nit follows from the length formula \\eqref{eq:length} \n(or \\eqref{eq:n}) that \n$\\mu_{\\wi{l}}-\\mu_{\\wi{l-1}} \\in \n \\mathbb{Z}_{\\ge 0}(\\wi{l-1} \\cdot h_{i_{l}})=\n \\mathbb{Z}_{\\ge 0}(\\bti{l})^{\\vee}$, where \n$(\\bti{l})^{\\vee}$ denotes the coroot corresponding to \nthe root $\\bti{l}$. \nCombining these, we obtain \n\\begin{equation*}\n\\mathbb{Z}_{\\ge 0}(\\bti{l})^{\\vee} \\ni \n\\mu_{\\wi{l}}-\\mu_{\\wi{l-1}}=\n\\mu_{\\wi{l}}-s_{\\bti{l}} \\cdot \\mu_{\\wi{l}}=\n\\pair{\\mu_{\\wi{l}}}{\\bti{l}}(\\bti{l})^{\\vee}, \n\\end{equation*}\nand hence $\\pair{\\mu_{\\wi{l}}}{\\bti{l}} \\ge 0$. \nThis implies $\\pair{\\mu_{ws_{j}}}{w \\cdot \\alpha_{j}} \\ge 0$ \nsince $\\wi{l}=ws_{j}$ and $\\bti{l}=w \\cdot \\alpha_{j}$. \n\nNext, let us assume that \n$l$ appears in $\\min S(xw_{0},\\,\\mathbf{i})$. 
\nBecause $\\mu_{\\wi{l}}=\\yi{l} \\cdot \\lambda=\n\\wi{l}\\vi{l}w_{0}^{-1} \\cdot \\lambda$ \nby Theorem~\\ref{thm:GGMS-Ext} and Remark~\\ref{rem:vi}, \nwe see, by noting $\\wi{l}=ws_{j}$ and $i_{l}=j$, that \n\\begin{align*}\n\\pair{\\mu_{ws_{j}}}{w \\cdot \\alpha_{j}} & =\n\\pair{\\mu_{\\wi{l}}}{w \\cdot \\alpha_{j}} =\n\\pair{\\wi{l}\\vi{l}w_{0}^{-1} \\cdot \\lambda}{w \\cdot \\alpha_{j}} \\\\\n& =\n \\pair{ws_{j}\\vi{l}w_{0}^{-1} \\cdot \\lambda}{w \\cdot \\alpha_{j}}=\n -\\pair{\\vi{l}w_{0}^{-1} \\cdot \\lambda}{\\alpha_{j}} \\\\\n&= -\\pair{\\lambda}{ w_{0}(\\vi{l})^{-1} \\cdot \\alpha_{j} }\n = -\\pair{\\lambda}{ w_{0}(\\vi{l})^{-1} \\cdot \\alpha_{i_{l}} }.\n\\end{align*}\nAlso, since $l$ appears in $\\min S(xw_{0},\\,\\mathbf{i})$ by assumption, \nwe have \n$\\vi{l-1}=s_{i_{l}}\\vi{l}$ with $\\ell(\\vi{l-1})=\\ell(\\vi{l})+1$ \n(see Remark~\\ref{rem:vi}). \nIt follows from the exchange condition that \n$(\\vi{l})^{-1} \\cdot \\alpha_{i_{l}}$ is a positive root, \nand hence $w_{0}(\\vi{l})^{-1} \\cdot \\alpha_{i_{l}}$ \nis a negative root. Therefore, we conclude that \n\\begin{equation*}\n\\pair{\\mu_{ws_{j}}}{w \\cdot \\alpha_{j}}=\n- \\underbrace{\\pair{\\lambda}\n { w_{0}(\\vi{l})^{-1} \\cdot \\alpha_{i_{l}} }}_{\\le 0} \\ge 0\n\\end{equation*}\nsince $\\lambda \\in X_{*}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ is \na dominant coweight. This proves the lemma. \n\\end{proof}\n\nLet $G$ be a complex, connected, semisimple algebraic group \nwith Lie algebra $\\mathfrak{g}$. \nTake $\\lambda \\in X_{*}(T)_{+}$ and $x \\in W$ arbitrarily, and \nlet $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ denote the GGMS datum of \nthe extremal MV polytope $P_{x \\cdot \\lambda} \\in \\mathcal{MV}(\\lambda)$ of \nweight $x \\cdot \\lambda$, i.e., \n$P_{x \\cdot \\lambda}=P(\\mu_{\\bullet})$; \nrecall from Theorem~\\ref{thm:GGMS-Ext} that \n$\\mu_{w} \\in W \\cdot \\lambda$ for all $w \\in W$. 
\nNow, for each $w \\in W$, we consider the irreducible \nalgebraic variety \n\\begin{equation} \\label{eq:bbw}\n\\mathbf{b}^{w} := \\ol{\\CG r^{\\lambda} \\cap S_{\\mu_{w}}^{w}},\n\\end{equation}\nwhich is the $\\dot{w}$-translate of \nthe extremal MV cycle $\\mathbf{b}_{w^{-1} \\cdot \\mu_{w}}$ \nof weight $w^{-1} \\cdot \\mu_{w}$ \nsince $\\CG r^{w^{-1} \\cdot \\lambda}=\\CG r^{\\lambda}$ \n(see Remark~\\ref{rem:extcyc}); \nnote that $\\mathbf{b}^{e} = \\mathbf{b}_{x \\cdot \\lambda}$ \nsince $\\mu_{e}=x \\cdot \\lambda$. \n\nFor each $j \\in I$, \nwe set $P_{j} := B \\sqcup (B \\dot{s_{j}} B)$, \nwhich is the minimal parabolic subgroup \n(containing $B$) of $G$ corresponding to $s_{j} \\in W$. \nAlso, let $P_{j} = L_{j} U_{j}$ be its Levi decomposition \nsuch that $T \\subset L_{j}$. \n\\begin{lem} \\label{lem:tran}\nKeep the notation above. \nFor each $w \\in W$ and $j \\in I$ with $ws_{j} < w$, \nwe have ${}^{w} L_{j}(\\mathcal{O}) \\mathbf{b}^{ws_{j}} \\subset \\mathbf{b}^{w}$.\n\\end{lem}\n\n\\begin{proof}\nFor simplicity of notation, we write \n$N_{+}$ for ${}^{w}(L_{j} \\cap U)$; \nthe root in $N_{+}$ is $-w \\cdot \\alpha_{j}$ \nby our convention. \nBecause $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ is \nthe GGMS datum of the extremal MV polytope $P_{x \\cdot \\lambda}$, \nit follows from Lemma~\\ref{lem:ext1} that \nwe have either (a) $\\mu_{ws_{j}}=\\mu_{w}$ or \n(b) $\\mu_{ws_{j}}=ws_{j}w^{-1} \\cdot \\mu_{w}$, and that \nin both of the cases (a) and (b), we have\n$\\pair{\\mu_{w}}{w \\cdot \\alpha_{j}} \\le 0$. 
\nConsequently, by taking into account \nLemma~\\ref{lem:emb}, applied to ${}^{w}L_{j} \\subset G$ \nand $\\mu_{w} \\in X_{*}(T)$, we deduce from Example~\\ref{ex:MVc} \nthat\n\\begin{equation} \\label{eq:tran1}\n\\ol{N_{+}(\\mathcal{O}) [t^{\\mu_{w}}]} = \n\\ol{{}^{w} L_{j}(\\mathcal{O}) [t^{\\mu_{w}}]}.\n\\end{equation}\nAlso, by Remark~\\ref{rem:extcyc}, applied to \nthe extremal MV cycle $\\dot{w}^{-1} \\cdot \\mathbf{b}^{w}$ of \nweight $w^{-1} \\cdot \\mu_{w}$, we obtain \n$\\dot{w}^{-1} \\cdot \\mathbf{b}^{w}=\n\\ol{U(\\mathcal{O})[t^{w^{-1} \\cdot \\mu_{w}}]}$, \nand hence\n\\begin{equation} \\label{eq:tran2}\n\\mathbf{b}^{w}=\\ol{{}^{w}U(\\mathcal{O})[t^{\\mu_{w}}]}.\n\\end{equation}\nSimilarly, we obtain\n\\begin{equation} \\label{eq:tran3}\n\\mathbf{b}^{ws_{j}}=\\ol{{}^{ws_{j}}U(\\mathcal{O})[t^{\\mu_{ws_{j}}}]}.\n\\end{equation}\nHere we note that \n${}^{w}L_{j} {}^{ws_{j}}U = {}^{w}U {}^{w}L_{j}$ \nsince $L_{j}U=UL_{j}$ and $\\dot{s}_{j} \\in L_{j}$.\nIt follows that \n\\begin{equation*}\n{}^{w}L_{j}(\\mathcal{O}) {}^{ws_{j}}U(\\mathcal{O}) [t^{\\mu_{ws_{j}}}]\n= {}^{w}U(\\mathcal{O}) {}^{w}L_{j}(\\mathcal{O}) [t^{\\mu_{ws_{j}}}].\n\\end{equation*}\nBecause \n$[t^{\\mu_{ws_{j}}}] = [t^{\\mu_{w}}]$ in case (a), and \n$[t^{\\mu_{ws_{j}}}] = \\dot{w} \\dot{s}_{j} \\dot{w}^{-1} [t^{\\mu_{w}}]$ \nin case (b) as above, we deduce that in both of the cases (a) and (b), \n\\begin{equation*}\n{}^{w}U(\\mathcal{O}) {}^{w}L_{j}(\\mathcal{O}) [t^{\\mu_{ws_{j}}}]=\n{}^{w}U(\\mathcal{O}) {}^{w}L_{j}(\\mathcal{O}) [t^{\\mu_{w}}].\n\\end{equation*}\nThus, we get\n\\begin{equation*}\n{}^{w}L_{j}(\\mathcal{O}) {}^{ws_{j}}U(\\mathcal{O}) [t^{\\mu_{ws_{j}}}]=\n{}^{w}U(\\mathcal{O}) {}^{w}L_{j}(\\mathcal{O}) [t^{\\mu_{w}}].\n\\end{equation*}\nIn addition, we have \n\\begin{align*}\n{}^{w}U(\\mathcal{O}) {}^{w}L_{j}(\\mathcal{O}) [t^{\\mu_{w}}]\n& \\subset {}^{w}U(\\mathcal{O}) \\ol{{}^{w}L_{j}(\\mathcal{O}) [t^{\\mu_{w}}]}\n = {}^{w}U(\\mathcal{O}) \\ol{N_{+}(\\mathcal{O}) 
[t^{\\mu_{w}}]}\n \\quad \n \\text{by \\eqref{eq:tran1}} \\\\\n& \\subset {}^{w}U(\\mathcal{O}) \\ol{{}^{w}U(\\mathcal{O}) [t^{\\mu_{w}}]}\n \\quad\n \\text{since $N_{+} \\subset {}^{w}U$ by definition} \\\\\n& \\subset \\ol{ {}^{w}U(\\mathcal{O}){}^{w}U(\\mathcal{O}) [t^{\\mu_{w}}]} \\\\\n& = \\ol{{}^{w}U(\\mathcal{O}) [t^{\\mu_{w}}]}=\\mathbf{b}^{w}\n \\quad \\text{by \\eqref{eq:tran2}}.\n\\end{align*}\nHence we obtain \n${}^{w}L_{j}(\\mathcal{O}) \n {}^{ws_{j}}U(\\mathcal{O}) [t^{\\mu_{ws_{j}}}] \\subset \\mathbf{b}^{w}$. \nFrom this, we conclude, by using \\eqref{eq:tran3}, that\n\\begin{align*}\n{}^{w} L_{j}(\\mathcal{O}) \\mathbf{b}^{ws_{j}} & =\n{}^{w}L_{j}(\\mathcal{O}) \\ol{{}^{ws_{j}}U(\\mathcal{O}) [t^{\\mu_{ws_{j}}}]} \\\\\n& \\subset \n\\ol{{}^{w}L_{j}(\\mathcal{O}) {}^{ws_{j}}U(\\mathcal{O}) [t^{\\mu_{ws_{j}}}]} \\\\\n& \\subset \\mathbf{b}^{w}.\n\\end{align*}\nThis proves the lemma.\n\\end{proof}\n\n\\section{$N$-multiple maps for MV polytopes and their applications.}\n\\label{sec:multi}\n\nAs in (the second paragraph of) \\S\\ref{subsec:notation}, \nwe assume that $\\mathfrak{g}$ is a complex semisimple Lie algebra. \nLet $\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ be \nan arbitrary (but fixed) dominant coweight. \n\n\\subsection{$N$-multiple maps for MV polytopes.}\n\\label{subsec:multiple}\n\nLet $N \\in \\mathbb{Z}_{\\ge 1}$. \nFor a collection $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ of \nelements of $\\mathfrak{h}_{\\mathbb{R}}$, we set\n\\begin{equation*}\nN \\cdot \\mu_{\\bullet}:=(N\\mu_{w})_{w \\in W}.\n\\end{equation*}\nAlso, for a subset $P \\subset \\mathfrak{h}_{\\mathbb{R}}$, we set \n\\begin{equation*}\nN \\cdot P:=\\bigl\\{Nv \\mid v \\in P\\bigr\\} \\subset \\mathfrak{h}_{\\mathbb{R}}.\n\\end{equation*}\nThe next lemma follows immediately \nfrom the definitions. \n\\begin{lem} \\label{lem:N1}\nLet $N \\in \\mathbb{Z}_{\\ge 1}$ be a positive integer. 
\n\n{\\rm (1)} If $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ \nis a GGMS datum, then $N \\cdot \\mu_{\\bullet}$ is \nalso a GGMS datum.\n\n{\\rm (2)} If $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ \nis an MV datum, then $N \\cdot \\mu_{\\bullet}$ is \nalso an MV datum.\n\n{\\rm (3)} Let $P=P(\\mu_{\\bullet}) \\in \\mathcal{MV}$ be an MV polytope \nwith GGMS datum $\\mu_{\\bullet}$. Then, \n$N \\cdot P$ is the MV polytope\nwith GGMS datum $N \\cdot \\mu_{\\bullet}$, that is, \n$N \\cdot P=P(N \\cdot \\mu_{\\bullet})$. Moreover, \nif $P \\in \\mathcal{MV}(\\lambda)$, \nthen $N \\cdot P \\in \\mathcal{MV}(N\\lambda)$. \n\\end{lem}\n\\begin{rem} \\label{rem:N1}\nLet $N \\in \\mathbb{Z}_{\\ge 1}$ be a positive integer. \nIf $P=P(\\mu_{\\bullet})$ is a pseudo-Weyl polytope \nwith GGMS datum $\\mu_{\\bullet}$, then \nthe set $N \\cdot P$ is identical to the Minkowski sum \n$P+P+ \\cdots +P$ ($N$ times). Indeed, we see that \n\\begin{align*}\nN \\cdot P & = \n P(N \\cdot \\mu_{\\bullet}) \\quad \\text{by Lemma~\\ref{lem:N1}\\,(3)} \\\\\n& = \n P(\\underbrace{\\mu_{\\bullet}+\\mu_{\\bullet}+ \\cdots +\\mu_{\\bullet}}_{\\text{$N$ times}})\n \\quad \\text{(see Remark~\\ref{rem:GGMS-sum})} \\\\[3mm]\n& \n =\\underbrace{P(\\mu_{\\bullet})+P(\\mu_{\\bullet})+ \\cdots +P(\\mu_{\\bullet})}_{\\text{$N$ times}}\n \\quad \\text{by Proposition~\\ref{prop:Minkowski}} \\\\[3mm]\n& = \\underbrace{P+P+ \\cdots +P}_{\\text{$N$ times}}. \n\\end{align*}\n\\end{rem}\n\nBy Lemma~\\ref{lem:N1}\\,(3), \nwe obtain an injective map \n$S_{N}:\\mathcal{MV}(\\lambda) \\hookrightarrow \\mathcal{MV}(N\\lambda)$ \nthat sends $P \\in \\mathcal{MV}(\\lambda)$ to $N \\cdot P \\in \\mathcal{MV}(N\\lambda)$; \nwe call the map $S_{N}$ an $N$-multiple map. \nNote that $S_{N}(P_{\\lambda})=P_{N\\lambda}$ \n(see Remark~\\ref{rem:ext}).\n\n\\begin{prop} \\label{prop:N2}\nLet $N \\in \\mathbb{Z}_{\\ge 1}$. 
\nFor $P \\in \\mathcal{MV}(\\lambda)$, we have\n\\begin{align*}\n& \\mathop{\\rm wt}\\nolimits (S_{N}(P))=N \\mathop{\\rm wt}\\nolimits (P), \\\\\n& \nS_{N}(e_{j}P)=e_{j}^{N}(S_{N}(P)), \\qquad \nS_{N}(f_{j}P)=f_{j}^{N}(S_{N}(P)) \\qquad \n\\text{\\rm for $j \\in I$}, \\\\\n& \n\\varepsilon_{j}(S_{N}(P))=N \\varepsilon_{j}(P), \\qquad \n\\varphi_{j}(S_{N}(P))=N \\varphi_{j}(P) \\qquad \n\\text{\\rm for $j \\in I$},\n\\end{align*}\nwhere it is understood \nthat $S_{N}(\\mathbf{0})=\\mathbf{0}$. \n\\end{prop}\n\\begin{proof}\nLet $\\mu_{\\bullet}=(\\mu_{w})_{w \\in W}$ be \nthe GGMS datum of $P \\in \\mathcal{MV}(\\lambda)$. \nIt follows from the definition of $\\mathop{\\rm wt}\\nolimits$ and Lemma~\\ref{lem:N1}\\,(3) \nthat $\\mathop{\\rm wt}\\nolimits(P)=\\mu_{e}$ and $\\mathop{\\rm wt}\\nolimits(S_{N}(P))=\\mathop{\\rm wt}\\nolimits(N \\cdot P)=N\\mu_{e}$. \nHence we have $\\mathop{\\rm wt}\\nolimits (S_{N}(P))=N \\mathop{\\rm wt}\\nolimits (P)$. \n\nNext, let us show that \n$\\varepsilon_{j}(S_{N}(P))=N \\varepsilon_{j}(P)$ and \n$\\varphi_{j}(S_{N}(P))=N \\varphi_{j}(P)$ for $j \\in I$. \nLet $j \\in I$. By Remark~\\ref{rem:ve}, we have\n$\\mu_{s_{j}}-\\mu_{e}=\\varepsilon_{j}(P)h_{j}$. Also, \nsince $S_{N}(P)=N \\cdot P=P(N \\cdot \\mu_{\\bullet})$ \nby Lemma~\\ref{lem:N1}\\,(3), \nit follows from Remark~\\ref{rem:ve} that\n$N\\mu_{s_{j}}-N\\mu_{e}=\n\\varepsilon_{j}(N \\cdot P)h_{j}=\n\\varepsilon_{j}(S_{N}(P))h_{j}$. \nCombining these equations, we have\n\\begin{equation*}\n\\varepsilon_{j}(S_{N}(P))h_{j}=\nN\\mu_{s_{j}}-N\\mu_{e}=\nN(\\mu_{s_{j}}-\\mu_{e})=\nN\\varepsilon_{j}(P)h_{j}, \n\\end{equation*}\nwhich implies that \n$\\varepsilon_{j}(S_{N}(P))=N \\varepsilon_{j}(P)$. 
\nIn addition, we have \n\\begin{align*}\n\\varphi_{j}(S_{N}(P)) \n& =\n \\pair{\\mathop{\\rm wt}\\nolimits (S_{N}(P))}{\\alpha_{j}}+\\varepsilon_{j}(S_{N}(P))\n \\qquad \\text{by \\eqref{eq:ax}} \\\\\n& =\\pair{N \\mathop{\\rm wt}\\nolimits (P)}{\\alpha_{j}}+N\\varepsilon_{j}(P)\n \\qquad \\text{by the equations shown above} \\\\\n& =N\\bigl(\\pair{\\mathop{\\rm wt}\\nolimits (P)}{\\alpha_{j}}+\\varepsilon_{j}(P)\\bigr) \\\\\n& =N\\varphi_{j}(P)\n \\qquad \\text{by \\eqref{eq:ax}}.\n\\end{align*}\n\nFinally, let us show that \n$S_{N}(e_{j}P)=e_{j}^{N}(S_{N}(P))$ and \n$S_{N}(f_{j}P)=f_{j}^{N}(S_{N}(P))$ for $j \\in I$; \nwe give a proof only for the equality \n$S_{N}(e_{j}P)=e_{j}^{N}(S_{N}(P))$ \nsince the equality $S_{N}(f_{j}P)=f_{j}^{N}(S_{N}(P))$ \ncan be shown similarly. Let $j \\in I$. \nFirst observe that \n\\begin{align*}\nS_{N}(e_{j} P)=\\mathbf{0}\n & \\quad \\Longleftrightarrow \\quad \n e_{j} P=\\mathbf{0}\n \\quad \\text{by Remark~\\ref{rem:ve}} \n \\\\\n & \\quad \\Longleftrightarrow \\quad \n \\varepsilon_{j}(P)=0\n \\quad \\text{by the definition of $\\varepsilon_{j}(P)$} \\\\\n & \\quad \\Longleftrightarrow \\quad \n e_{j}^{N}(S_{N}(P))=\\mathbf{0}\n \\quad \\text{since $\\varepsilon_{j}(S_{N}(P))=N \\varepsilon_{j}(P)$}.\n\\end{align*}\nNow, assume that $S_{N}(e_{j} P) \\ne \\mathbf{0}$, \nor equivalently, $e_{j} P \\ne \\mathbf{0}$. Recall \nfrom the definition of the raising Kashiwara operator \n$e_{j}$ that the GGMS datum of \nthe MV polytope $e_{j} P \\in \\mathcal{MV}(\\lambda)$ is equal to \n$e_{j}\\mu_{\\bullet}=(\\mu_{w}')_{w \\in W}$, \nwhich is the unique MV datum \nsuch that $\\mu_{e}'=\\mu_{e}+h_{j}$ and \n$\\mu_{w}'=\\mu_{w}$ for all $w \\in W$ with $s_{j}w < w$. 
\nHence we see from Lemma~\\ref{lem:N1}\\,(3) that \nthe GGMS datum of the MV polytope \n$S_{N}(e_{j}P)=N \\cdot (e_{j} P) \\in \\mathcal{MV}(N\\lambda)$ is equal to \nthe unique MV datum $\\mu_{\\bullet}''=(\\mu_{w}'')_{w \\in W}$ \nsuch that $\\mu_{e}''=N\\mu_{e}+Nh_{j}$ and \n$\\mu_{w}''=N\\mu_{w}$ for all $w \\in W$ with $s_{j}w < w$. \nBecause the GGMS datum of the MV polytope \n$S_{N}(P)=N \\cdot P \\in \\mathcal{MV}(N\\lambda)$ \nis equal to $N \\cdot \\mu_{\\bullet}=(N\\mu_{w})_{w \\in W}$, \nwe deduce from the definition of \nthe raising Kashiwara operator $e_{j}$ that \nthe GGMS datum $\\mu_{\\bullet}'''=(\\mu_{w}''')_{w \\in W}$ of \n$e_{j}^{N}(S_{N}(P))=e_{j}^{N}(N \\cdot P) \\in \\mathcal{MV}(N\\lambda)$ \nalso satisfies the condition that \n$\\mu_{e}'''=N\\mu_{e}+Nh_{j}$ and \n$\\mu_{w}'''=N\\mu_{w}$ for all $w \\in W$ with $s_{j}w < w$. \nHence, by the uniqueness of such an MV datum, we obtain \n$\\mu_{\\bullet}''=\\mu_{\\bullet}'''$, which implies that \n$S_{N}(e_{j}P)=P(\\mu_{\\bullet}'')=\n P(\\mu_{\\bullet}''')=e_{j}^{N}(S_{N}(P))$.\nThis completes the proof of Proposition~\\ref{prop:N2}.\n\\end{proof}\n\n\\subsection{Application of $N$-multiple maps.}\n\\label{subsec:app-multiple}\n\nLet $\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ be \nan arbitrary (but fixed) dominant coweight \nas in \\S\\ref{subsec:multiple}. \nWe denote by $W_{\\lambda} \\subset W$ \nthe stabilizer of $\\lambda$ in $W$, and \nby $W^{\\lambda}_{\\min} \\subset W$ \nthe set of minimal (length) coset representatives \nmodulo the subgroup $W_{\\lambda} \\subset W$. 
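\nFor example, if $\\lambda$ is regular, that is, $\\pair{\\lambda}{\\alpha_{j}} > 0$ for all $j \\in I$, then $W_{\\lambda}=\\{e\\}$ and $W^{\\lambda}_{\\min}=W$; at the other extreme, if $\\lambda=0$, then $W_{\\lambda}=W$ and $W^{\\lambda}_{\\min}=\\{e\\}$. In general, since $\\lambda$ is dominant, $W_{\\lambda}$ is the standard parabolic subgroup generated by $\\bigl\\{s_{j} \\mid j \\in I,\\ \\pair{\\lambda}{\\alpha_{j}}=0\\bigr\\}$, and every $w \\in W$ factors uniquely as\n\\begin{equation*}\nw=xz, \\qquad x \\in W^{\\lambda}_{\\min}, \\quad z \\in W_{\\lambda}, \\quad \\ell(w)=\\ell(x)+\\ell(z),\n\\end{equation*}\nso that $w \\cdot \\lambda=x \\cdot \\lambda$.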
\n\nFor a positive integer $N \\in \\mathbb{Z}_{\\ge 1}$, \nlet us denote by \n\\begin{equation*}\nK_{N}: \\mathcal{MV}(\\lambda) \\hookrightarrow \n \\mathcal{MV}(\\lambda)^{\\otimes N}=\n \\underbrace{\n \\mathcal{MV}(\\lambda) \\otimes \n \\mathcal{MV}(\\lambda) \\otimes \\cdots \\otimes \n \\mathcal{MV}(\\lambda)}_{\\text{$N$ times}}\n\\end{equation*}\nthe composite of the $N$-multiple map \n$S_{N}:\\mathcal{MV}(\\lambda) \\hookrightarrow \\mathcal{MV}(N\\lambda)$ with \nthe canonical embedding \n$G_{N}:\\mathcal{MV}(N\\lambda) \\hookrightarrow \\mathcal{MV}(\\lambda)^{\\otimes N}$ \nof crystals that sends the highest weight element \n$P_{N\\lambda}$ of $\\mathcal{MV}(N\\lambda)$ to \nthe highest weight element \n$P_{\\lambda}^{\\otimes N}=\n P_{\\lambda} \\otimes P_{\\lambda} \\otimes \\cdots \\otimes P_{\\lambda}$ \n($N$ times) of $\\mathcal{MV}(\\lambda)^{\\otimes N}$.\n\\begin{prop} \\label{prop:N3}\nLet $\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ be \na dominant coweight, and \nlet $x \\in W^{\\lambda}_{\\min}$. \nIf an MV polytope $P \\in \\mathcal{MV}(\\lambda)$ lies \nin the Demazure crystal $\\mathcal{MV}_{x}(\\lambda)$, \nthen there exist a positive integer $N \\in \\mathbb{Z}_{\\ge 1}$ \nand minimal coset representatives \n$x_{1},\\,x_{2},\\,\\dots,\\,x_{N} \\in W^{\\lambda}_{\\min}$ \nsuch that \n\\begin{equation} \\label{eq:N3}\n\\begin{cases}\nx \\ge x_{k} \\quad \n \\text{\\rm for all $1 \\le k \\le N$;} \\\\[3mm]\nK_{N}(P)=\n P_{x_{1} \\cdot \\lambda} \\otimes \n P_{x_{2} \\cdot \\lambda} \\otimes \\cdots \\otimes \n P_{x_{N} \\cdot \\lambda}.\n\\end{cases}\n\\end{equation}\n\\end{prop}\n\n\n\\begin{rem} \\label{rem:MV-LS}\nKeep the notation and assumption above. 
\nA positive integer $N \\in \\mathbb{Z}_{\\ge 1}$ and \nminimal coset representatives \n$x_{1},\\,x_{2},\\,\\dots,\\,x_{N} \\in W^{\\lambda}_{\\min}$ \nsatisfying the conditions \\eqref{eq:N3}\nare, in a sense, determined by \nthe ``turning points'' and ``directions'' of \nthe Lakshmibai-Seshadri path of \nshape $\\lambda$ that corresponds to \nthe MV polytope $P \\in \\mathcal{MV}(\\lambda)$ \nunder the (inexplicit) bijection \nvia the crystal basis $\\mathcal{B}(\\lambda)$.\n\nWe will explain the remark above more precisely. \nLet $\\mathbb{B}(\\lambda)$ denote \nthe set of Lakshmibai-Seshadri (LS for short) paths\nof shape $\\lambda$, which is endowed with \na crystal structure for $U_{q}(\\mathfrak{g}^{\\vee})$ \nby the root operators $e_{j}$, $f_{j}$, $j \\in I$ \n(see \\cite{L-inv} and \\cite{L-ann} for details). \nWe know from \\cite[Theorem~4.1]{Kassim} \n(see also \\cite[Th\\'eor\\`eme~8.2.3]{Kasb}) and \n\\cite[Corollary~6.4.27]{Jo} that \n$\\mathbb{B}(\\lambda)$ is isomorphic to \nthe crystal basis $\\mathcal{B}(\\lambda)$ \nas a crystal for $U_{q}(\\mathfrak{g}^{\\vee})$.\nThus, we have $\\mathbb{B}(\\lambda) \\cong \n\\mathcal{B}(\\lambda) \\cong \\mathcal{MV}(\\lambda)$ \nas crystals for $U_{q}(\\mathfrak{g}^{\\vee})$. \n\nNow, take an element \n$P \\in \\mathcal{MV}_{x}(\\lambda) \\subset \\mathcal{MV}(\\lambda)$, \nand assume that a positive integer $N \\in \\mathbb{Z}_{\\ge 1}$ \nand elements $x_{1},\\,x_{2},\\,\\dots,\\,x_{N} \n\\in W^{\\lambda}_{\\min}$ satisfy \nconditions \\eqref{eq:N3}. 
Consider \nthe piecewise linear, continuous map \n$\\pi:[0,1] \\rightarrow \\mathfrak{h}_{\\mathbb{R}}$ defined by:\n\\begin{equation*}\n\\pi(t)=\n \\sum_{l=1}^{k-1} \\frac{1}{N}\\,x_{l} \\cdot \\lambda +\n \\left(t-\\frac{k-1}{N}\\right)x_{k} \\cdot \\lambda\n \\qquad\n \\text{for $t \\in \n \\left[\\frac{k-1}{N},\\,\\frac{k}{N}\\right]$, \\ \n $1 \\le k \\le N$};\n\\end{equation*}\nnote that for each $1 \\le k \\le N$, \nthe vector $x_{k} \\cdot \\lambda \\in W \\cdot \\lambda \\subset \\mathfrak{h}_{\\mathbb{R}}$ \ngives the direction of $\\pi$ on the interval \n$[(k-1)\/N,\\,k\/N]$, and, for $2 \\le k \\le N$, \nthe point $t=(k-1)\/N$ is \na turning point of $\\pi$ if \n$x_{k-1} \\cdot \\lambda \\ne x_{k} \\cdot \\lambda$. \nThen, we deduce from the proof of \n\\cite[Theorem~4.1]{Kassim} \n(or \\cite[Th\\'eor\\`eme~8.2.3]{Kasb}), together with\nthe commutative diagram \\eqref{CD:multi} below, that \nthis map $\\pi:[0,1] \\rightarrow \\mathfrak{h}_{\\mathbb{R}}$ is \nprecisely the LS path of shape $\\lambda$ \nthat corresponds to $P \\in \\mathcal{MV}(\\lambda)$ under \nthe isomorphism $\\mathbb{B}(\\lambda) \\cong \n\\mathcal{B}(\\lambda) \\cong \\mathcal{MV}(\\lambda)$ of crystals. \nThe argument above implies, in particular, that \nfor a fixed positive integer $N \\in \\mathbb{Z}_{\\ge 1}$, \nthe elements $x_{1},\\,x_{2},\\,\\dots,\\,x_{N} \\in W^{\\lambda}_{\\min}$ \nare determined uniquely by the MV polytope $P$ via \nthe corresponding LS path. \nAlso note that by the definition of LS paths, \nif a positive integer $N \\in \\mathbb{Z}_{\\ge 1}$ \nand elements $x_{1},\\,x_{2},\\,\\dots,\\,x_{N} \n\\in W^{\\lambda}_{\\min}$ satisfy \nconditions \\eqref{eq:N3}, then \nwe necessarily have \n$x_{1} \\ge x_{2} \\ge \\cdots \\ge x_{N}$. \n\\end{rem}\n\n\\begin{proof}[Proof of Proposition~\\ref{prop:N3}]\nLet $N \\in \\mathbb{Z}_{\\ge 1}$. 
\nWe know from \\cite[Theorem~3.1]{Kassim} and \n\\cite[Corollaire~8.1.5]{Kasb} that there exists \nan injective map $S_{N}:\\mathcal{B}(\\lambda) \\hookrightarrow \\mathcal{B}(N\\lambda)$ \nwhich sends the highest weight element $u_{\\lambda} \\in \\mathcal{B}(\\lambda)$ to \nthe highest weight element $u_{N\\lambda} \\in \\mathcal{B}(N\\lambda)$, and \nwhich has the same properties as the $N$-multiple map \n$S_{N}:\\mathcal{MV}(\\lambda) \\hookrightarrow \\mathcal{MV}(N\\lambda)$ \ngiven in Proposition~\\ref{prop:N2}.\nLet us denote by \n$K_{N}: \\mathcal{B}(\\lambda) \\hookrightarrow \n \\mathcal{B}(\\lambda)^{\\otimes N}$\nthe composite of \n$S_{N}:\\mathcal{B}(\\lambda) \\hookrightarrow \\mathcal{B}(N\\lambda)$ \nwith the canonical embedding \n$G_{N}:\\mathcal{B}(N\\lambda) \\hookrightarrow \\mathcal{B}(\\lambda)^{\\otimes N}$ \nof crystals that sends the highest weight element \n$u_{N\\lambda} \\in \\mathcal{B}(N\\lambda)$ to \nthe highest weight element \n$u_{\\lambda}^{\\otimes N} \\in \\mathcal{B}(\\lambda)^{\\otimes N}$ \n(see \\cite[p.~181]{Kassim} and \\cite[\\S8.3]{Kasb}). \nThen, it is easily shown that the following diagram commutes:\n\\begin{equation} \\label{CD:multi}\n\\begin{CD}\n\\mathcal{B}(\\lambda) @>{K_{N}}>> \\mathcal{B}(\\lambda)^{\\otimes N} \\\\\n@V{\\Psi_{\\lambda}}VV @VV{\\Psi_{\\lambda}^{\\otimes N}}V \\\\\n\\mathcal{MV}(\\lambda) @>{K_{N}}>> \\mathcal{MV}(\\lambda)^{\\otimes N}.\n\\end{CD}\n\\end{equation}\nIndeed, take an element $b \\in \\mathcal{B}(\\lambda)$, and write it as: \n$b=f_{j_{1}}f_{j_{2}} \\cdots f_{j_{r}}u_{\\lambda}$ \nfor some $j_{1},\\,j_{2},\\,\\dots,\\,j_{r} \\in I$; \nfor simplicity of notation, we set \n$f_{\\ast}:=f_{j_{1}}f_{j_{2}} \\cdots f_{j_{r}}$ and \n$f_{\\ast}^{N}:=f_{j_{1}}^{N}f_{j_{2}}^{N} \\cdots f_{j_{r}}^{N}$. 
\nWe have \n\\begin{align*}\nK_{N}(\\Psi_{\\lambda}(b))\n & = K_{N}(\\Psi_{\\lambda}(f_{\\ast}u_{\\lambda})) \n = K_{N}(f_{\\ast} P_{\\lambda}) \n = G_{N}(S_{N}(f_{\\ast} P_{\\lambda})) \\\\\n & = G_{N}(f_{\\ast}^{N} P_{N\\lambda})\n \\quad \\text{by Proposition~\\ref{prop:N2}} \\\\\n & = f_{\\ast}^{N}P_{\\lambda}^{\\otimes N}.\n\\end{align*}\nSimilarly, we have \n\\begin{align*}\n\\Psi_{\\lambda}^{\\otimes N}(K_{N}(b))\n & = \\Psi_{\\lambda}^{\\otimes N}(K_{N}(f_{\\ast}u_{\\lambda}))\n = \\Psi_{\\lambda}^{\\otimes N}\\bigl(G_{N}(S_{N}(f_{\\ast} u_{\\lambda}))\\bigr) \\\\\n & = \\Psi_{\\lambda}^{\\otimes N}\\bigl(G_{N}(f_{\\ast}^{N} u_{N\\lambda})\\bigr)\n = \\Psi_{\\lambda}^{\\otimes N}(f_{\\ast}^{N} u_{\\lambda}^{\\otimes N})\n = f_{\\ast}^{N} P_{\\lambda}^{\\otimes N}.\n\\end{align*}\nThus, we obtain $K_{N}(\\Psi_{\\lambda}(b))=\n\\Psi_{\\lambda}^{\\otimes N}(K_{N}(b))$ \nfor all $b \\in \\mathcal{B}(\\lambda)$. \n\nNow, let $P \\in \\mathcal{MV}(\\lambda)$. \nApplying \\cite[Proposition~8.3.2]{Kasb} to \n$\\Psi_{\\lambda}^{-1}(P) \\in \\mathcal{B}(\\lambda)$, we see that \nif $N \\in \\mathbb{Z}_{\\ge 1}$ has ``sufficiently many'' divisors, \nthen \n\\begin{equation} \\label{eq:c1}\nK_{N}(\\Psi_{\\lambda}^{-1}(P))=\n u_{x_{1} \\cdot \\lambda} \\otimes \n u_{x_{2} \\cdot \\lambda} \\otimes \\cdots \\otimes \n u_{x_{N} \\cdot \\lambda}\n\\end{equation}\nfor some $x_{1},\\,x_{2},\\,\\dots,\\,x_{N} \\in W^{\\lambda}_{\\min}$. \nThen, we have \n\\begin{align*}\nK_{N}(P) & = \n \\Psi_{\\lambda}^{\\otimes N}\\bigl(K_{N}(\\Psi_{\\lambda}^{-1}(P))\\bigr)\n \\quad \\text{by \\eqref{CD:multi}} \\\\\n& =\n \\Psi_{\\lambda}^{\\otimes N}(u_{x_{1} \\cdot \\lambda} \\otimes \n u_{x_{2} \\cdot \\lambda} \\otimes \\cdots \\otimes \n u_{x_{N} \\cdot \\lambda}) \\\\\n& = \n P_{x_{1} \\cdot \\lambda} \\otimes \n P_{x_{2} \\cdot \\lambda} \\otimes \\cdots \\otimes \n P_{x_{N} \\cdot \\lambda}. 
\n\\end{align*}\nIt remains to show that \n$x \\ge x_{k}$ for every $1 \\le k \\le N$. \nIn view of \\cite[Proposition~4.4]{Kas} \n(see also \\cite[\\S9.1]{Kasb}), \nit suffices to show that \n$u_{x_{k} \\cdot \\lambda} \\in \\mathcal{B}_{x}(\\lambda)$ \n for every $1 \\le k \\le N$. \nLet $x=s_{j_{1}}s_{j_{2}} \\cdots s_{j_{r}}$ \nbe a reduced expression of $x \\in W$. \nWe know from \\cite[Proposition~9.1.3\\,(2)]{Kasb} that \n\\begin{equation} \\label{eq:dem-f}\n\\mathcal{B}_{x}(\\lambda)=\n \\bigl\\{\n f_{j_{1}}^{c_{1}}\n f_{j_{2}}^{c_{2}} \\cdots \n f_{j_{r}}^{c_{r}}u_{\\lambda} \\mid \n c_{1},\\,c_{2},\\,\\dots,\\,c_{r} \\in \\mathbb{Z}_{\\ge 0}\n \\bigr\\} \\setminus \\{0\\}.\n\\end{equation}\nSince $P \\in \\mathcal{MV}_{x}(\\lambda)$ by our assumption, \n$\\Psi_{\\lambda}^{-1}(P)$ is contained in $\\mathcal{B}_{x}(\\lambda)$, \nand hence $\\Psi_{\\lambda}^{-1}(P)$ can be written as:\n\\begin{equation*}\n\\Psi_{\\lambda}^{-1}(P)=\n f_{j_{1}}^{c_{1}}\n f_{j_{2}}^{c_{2}} \\cdots \n f_{j_{r}}^{c_{r}}u_{\\lambda}\n\\quad \n\\text{for some \n$c_{1},\\,c_{2},\\,\\dots,\\,c_{r} \\in \\mathbb{Z}_{\\ge 0}$}.\n\\end{equation*}\nTherefore, we have \n\\begin{align*}\nK_{N}(\\Psi_{\\lambda}^{-1}(P)) & = \n K_{N}(f_{j_{1}}^{c_{1}}\n f_{j_{2}}^{c_{2}} \\cdots \n f_{j_{r}}^{c_{r}}u_{\\lambda})=\n G_{N}(S_{N}(f_{j_{1}}^{c_{1}}\n f_{j_{2}}^{c_{2}} \\cdots \n f_{j_{r}}^{c_{r}}u_{\\lambda})) \\\\\n& = G_{N}(f_{j_{1}}^{Nc_{1}}\n f_{j_{2}}^{Nc_{2}} \\cdots \n f_{j_{r}}^{Nc_{r}}u_{N\\lambda}) \n =f_{j_{1}}^{Nc_{1}}\n f_{j_{2}}^{Nc_{2}} \\cdots \n f_{j_{r}}^{Nc_{r}}u_{\\lambda}^{\\otimes N}. 
\n\\end{align*}\nIt follows from the tensor product rule for crystals that \n\\begin{align*}\n& f_{j_{1}}^{Nc_{1}}\n f_{j_{2}}^{Nc_{2}} \\cdots \n f_{j_{r}}^{Nc_{r}}u_{\\lambda}^{\\otimes N} = \\\\\n& \n \\bigl(f_{j_{1}}^{b_{1,1}}\n f_{j_{2}}^{b_{1,2}} \\cdots \n f_{j_{r}}^{b_{1,r}}u_{\\lambda}\\bigr) \\otimes\n \\bigl(f_{j_{1}}^{b_{2,1}}\n f_{j_{2}}^{b_{2,2}} \\cdots \n f_{j_{r}}^{b_{2,r}}u_{\\lambda}\\bigr) \\otimes \\cdots \\otimes\n \\bigl(f_{j_{1}}^{b_{N,1}}\n f_{j_{2}}^{b_{N,2}} \\cdots \n f_{j_{r}}^{b_{N,r}}u_{\\lambda}\\bigr)\n\\end{align*}\nfor some $b_{k,t} \\in \\mathbb{Z}_{\\ge 0}$, \n$1 \\le k \\le N$, $1 \\le t \\le r$, \nwith $\\sum_{k=1}^{N}b_{k,t}=Nc_{t}$ \nfor each $1 \\le t \\le r$.\nCombining these equalities with \\eqref{eq:c1}, \nwe obtain\n\\begin{align*}\n& u_{x_{1} \\cdot \\lambda} \\otimes \n u_{x_{2} \\cdot \\lambda} \\otimes \\cdots \\otimes \n u_{x_{N} \\cdot \\lambda}=K_{N}(\\Psi_{\\lambda}^{-1}(P))=\\\\\n& \n \\bigl(f_{j_{1}}^{b_{1,1}}\n f_{j_{2}}^{b_{1,2}} \\cdots \n f_{j_{r}}^{b_{1,r}}u_{\\lambda}\\bigr) \\otimes\n \\bigl(f_{j_{1}}^{b_{2,1}}\n f_{j_{2}}^{b_{2,2}} \\cdots \n f_{j_{r}}^{b_{2,r}}u_{\\lambda}\\bigr) \\otimes \\cdots \\otimes\n \\bigl(f_{j_{1}}^{b_{N,1}}\n f_{j_{2}}^{b_{N,2}} \\cdots \n f_{j_{r}}^{b_{N,r}}u_{\\lambda}\\bigr),\n\\end{align*}\nfrom which it follows that \n$u_{x_{k} \\cdot \\lambda}=\n f_{j_{1}}^{b_{k,1}}\n f_{j_{2}}^{b_{k,2}} \\cdots \n f_{j_{r}}^{b_{k,r}}u_{\\lambda}$ \nfor $1 \\le k \\le N$. \nThis implies that \n$u_{x_{k} \\cdot \\lambda} \\in \\mathcal{B}_{x}(\\lambda)$ \nfor each $1 \\le k \\le N$ since \n$f_{j_{1}}^{b_{k,1}}\n f_{j_{2}}^{b_{k,2}} \\cdots \n f_{j_{r}}^{b_{k,r}}u_{\\lambda} \\in \\mathcal{B}_{x}(\\lambda)$ \nby \\eqref{eq:dem-f}.\nThus, we have proved Proposition~\\ref{prop:N3}. 
\n\\end{proof}\n\\subsection{Main results.}\n\\label{subsec:main}\n\nIn this subsection, we prove the following theorem, \nby using a polytopal estimate \n(Theorem~\\ref{thm:tensor} below) of tensor products of \nMV polytopes. We use the setting of \\S\\ref{subsec:app-multiple}.\n\\begin{thm} \\label{thm:main}\nLet $\\lambda \\in X_{*}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ be a dominant coweight, \nand let $x \\in W^{\\lambda}_{\\min}$. \nIf a positive integer $N \\in \\mathbb{Z}_{\\ge 1}$ and \nminimal coset representatives \n$x_{1},\\,x_{2},\\,\\dots,\\,x_{N} \\in W^{\\lambda}_{\\min}$ \nsatisfy the condition \\eqref{eq:N3} \nin Proposition~\\ref{prop:N3}, then \n\\begin{equation*}\nN \\cdot P \\subseteq\nP_{x_1 \\cdot \\lambda} + \nP_{x_2 \\cdot \\lambda} + \\cdots + \nP_{x_N \\cdot \\lambda},\n\\end{equation*}\nwhere \n$P_{x_1 \\cdot \\lambda} + P_{x_2 \\cdot \\lambda} + \\cdots + \n P_{x_N \\cdot \\lambda}$ is the Minkowski sum of \nthe extremal MV polytopes $P_{x_1 \\cdot \\lambda}$, \n$P_{x_2 \\cdot \\lambda}$, $\\dots$, $P_{x_N \\cdot \\lambda}$.\n\\end{thm}\n\n\\begin{proof}\nBy our assumption, we have \n\\begin{equation*}\n\\begin{cases}\nx \\ge x_{k} \\quad \n \\text{\\rm for all $1 \\le k \\le N$;} \\\\[3mm]\nK_{N}(P)=G_{N}(N \\cdot P)=\n P_{x_{1} \\cdot \\lambda} \\otimes \n P_{x_{2} \\cdot \\lambda} \\otimes \\cdots \\otimes \n P_{x_{N} \\cdot \\lambda}.\n\\end{cases}\n\\end{equation*}\nHere we recall from \\S\\ref{subsec:app-multiple} \nthat $G_{N}:\\mathcal{MV}(N\\lambda) \\hookrightarrow \\mathcal{MV}(\\lambda)^{\\otimes N}$ \ndenotes the canonical embedding of crystals that \nsends $P_{N\\lambda} \\in \\mathcal{MV}(N\\lambda)$ to \n$P_{\\lambda}^{\\otimes N} \\in \\mathcal{MV}(\\lambda)^{\\otimes N}$.\nTherefore, by using Theorem~\\ref{thm:tensor} \n(or rather, Corollary~\\ref{cor:tensor}) successively, \nwe can show that\n\\begin{equation*}\nN \\cdot P \\subset \n P_{x_{1} \\cdot \\lambda}+\n P_{x_{2} \\cdot \\lambda}+\\cdots+\n P_{x_{N} 
\\cdot \\lambda}.\n\\end{equation*}\nThis completes the proof of Theorem~\\ref{thm:main}.\n\\end{proof}\n\nThis theorem, together with Proposition~\\ref{prop:N3}, yields \nTheorem~\\ref{ithm1} in the Introduction. \nAs an immediate consequence, \nwe can provide an affirmative answer to \na question posed in \\cite[\\S4.6]{NS-dp}.\n\\begin{cor} \\label{cor:main}\nLet $\\lambda \\in X_{*}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ be a dominant coweight, \nand let $x \\in W$. \nAll the MV polytopes lying in the Demazure crystal \n$\\mathcal{MV}_{x}(\\lambda)$ are contained (as sets) in \nthe extremal MV polytope $P_{x \\cdot \\lambda}=\n \\mathop{\\rm Conv}\\nolimits(W_{\\le x} \\cdot \\lambda)$ of weight $x \\cdot \\lambda$. \nNamely, for all $P \\in \\mathcal{MV}_{x}(\\lambda)$, there holds \n\\begin{equation*}\nP \\subset P_{x \\cdot \\lambda}=\n \\mathop{\\rm Conv}\\nolimits(W_{\\le x} \\cdot \\lambda).\n\\end{equation*}\n\\end{cor}\n\n\\begin{rem}\n(1) The assertion of Theorem~\\ref{thm:main} \nis not obvious, as explained in \n\\cite[Remark~4.6.2 and Example~4.6.3]{NS-dp}.\n\n(2) The converse statement fails to hold; \nsee \\cite[Remark~4.6.1]{NS-dp}.\n\\end{rem}\n\n\\begin{proof}[Proof of Corollary~\\ref{cor:main}]\nWe know from Remark~\\ref{rem:demcos} that \nif $x,\\,y \\in W$ satisfy \n$x \\cdot \\lambda=y \\cdot \\lambda$, \nthen $\\mathcal{MV}_{x}(\\lambda)=\\mathcal{MV}_{y}(\\lambda)$ and \n$P_{x \\cdot \\lambda}=P_{y \\cdot \\lambda}$. \nHence we may assume that $x \\in W^{\\lambda}_{\\min}$. \nLet us take an arbitrary $P \\in \\mathcal{MV}_{x}(\\lambda)$. \nBy Proposition~\\ref{prop:N3}, \nthere exist $N \\in \\mathbb{Z}_{\\ge 1}$ and \n$x_{1},\\,x_{2},\\,\\dots,\\,x_{N} \\in W^{\\lambda}_{\\min}$ \nsatisfying the condition \\eqref{eq:N3}. 
\nAlso, for each $1 \\le k \\le N$, \nit follows from Proposition~\\ref{prop:ext} and \nthe inequality $x \\ge x_{k}$ that \n\\begin{equation} \\label{eq:ext-inc02}\nP_{x_{k} \\cdot \\lambda}=\n \\mathop{\\rm Conv}\\nolimits(W_{\\le x_{k}} \\cdot \\lambda) \\subset\n \\mathop{\\rm Conv}\\nolimits(W_{\\le x} \\cdot \\lambda)=P_{x \\cdot \\lambda}. \n\\end{equation}\nTherefore, we have \n\\begin{align*}\nN \\cdot P \n & \\subset \n P_{x_{1} \\cdot \\lambda}+\n P_{x_{2} \\cdot \\lambda}+\\cdots+\n P_{x_{N} \\cdot \\lambda} \n \\quad \\text{by Theorem~\\ref{thm:main}} \\\\\n & \\subset \n \\underbrace{\n P_{x \\cdot \\lambda}+\n P_{x \\cdot \\lambda}+\\cdots+\n P_{x \\cdot \\lambda}\n }_{\\text{$N$ times}} \n \\quad \\text{by \\eqref{eq:ext-inc02}} \\\\[3mm]\n & =N \\cdot P_{x \\cdot \\lambda}\n \\quad \\text{by Remark~\\ref{rem:N1}}.\n\\end{align*}\nConsequently, we obtain \n$N \\cdot P \\subset N \\cdot P_{x \\cdot \\lambda}$, \nwhich implies that $P \\subset P_{x \\cdot \\lambda}$. \nThis proves Corollary~\\ref{cor:main}. \n\\end{proof}\n\n\n\\section{Polytopal estimate of tensor products of MV polytopes.}\n\\label{sec:tensor}\n\nThe aim of this section is to state and prove \na polytopal estimate of tensor products of MV polytopes.\n\\subsection{Polytopal estimate.}\n\\label{subsec:tensor}\n\nAs in (the second paragraph of) \\S\\ref{subsec:notation}, \nwe assume that $\\mathfrak{g}$ is a complex semisimple Lie algebra. \nLet $\\lambda_{1},\\,\\lambda_{2} \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ \nbe dominant coweights. 
\nBecause $\\mathcal{MV}(\\lambda) \\cong \\mathcal{B}(\\lambda)$ as crystals \nfor every dominant coweight $\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$, \nthe tensor product \n$\\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$ of \nthe crystals $\\mathcal{MV}(\\lambda_{1})$ and $\\mathcal{MV}(\\lambda_{2})$ \ndecomposes into a disjoint union of connected components \nas follows:\n\\begin{equation*}\n\\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1}) \\cong \n \\bigoplus_{\n \\begin{subarray}{c}\n \\lambda \\in X_{\\ast}(T) \\\\[0.5mm]\n \\text{$\\lambda$ : dominant}\n \\end{subarray}\n } \n\\mathcal{MV}(\\lambda)^{\\oplus m_{\\lambda_{1},\\lambda_{2}}^{\\lambda}},\n\\end{equation*}\nwhere $m_{\\lambda_{1},\\lambda_{2}}^{\\lambda} \\in \\mathbb{Z}_{\\ge 0}$ denotes \nthe multiplicity of $\\mathcal{MV}(\\lambda)$ in \n$\\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$. \nFor each dominant coweight $\\lambda \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ \nsuch that $m_{\\lambda_{1},\\lambda_{2}}^{\\lambda} \\ge 1$, \nwe take (and fix) an arbitrary embedding \n$\\iota_{\\lambda}:\\mathcal{MV}(\\lambda) \\hookrightarrow \n \\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$ of crystals \nthat maps $\\mathcal{MV}(\\lambda)$ onto a connected component of \n$\\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$, \nwhich is isomorphic to $\\mathcal{MV}(\\lambda)$ as a crystal. \n\\begin{thm} \\label{thm:tensor}\nKeep the notation above.\nLet $P \\in \\mathcal{MV}(\\lambda)$, and \nwrite $\\iota_{\\lambda}(P) \\in \n\\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$ as\\,{\\rm:} \n$\\iota_{\\lambda}(P)=P_{2} \\otimes P_{1}$ for some \n$P_{1} \\in \\mathcal{MV}(\\lambda_{1})$ and \n$P_{2} \\in \\mathcal{MV}(\\lambda_{2})$. 
\nWe assume that the MV polytope \n$P_{2} \\in \\mathcal{MV}(\\lambda_{2})$ is an extremal MV polytope \n$P_{x \\cdot \\lambda_{2}}$ for some $x \\in W$. \nThen, we have\n\\begin{equation} \\label{eq:Minkowski}\nP \\subset P_{1} + P_{2},\n\\end{equation}\nwhere $P_{1} + P_{2}$ is the Minkowski sum of \nthe MV polytopes $P_{1} \\in \\mathcal{MV}(\\lambda_{1})$ and \n$P_{2} \\in \\mathcal{MV}(\\lambda_{2})$.\n\\end{thm}\n\n\\begin{rem}\nIt should be mentioned that in the theorem above, \nthe element $\\iota_{\\lambda}(P)$ may lie in an arbitrary \nconnected component of \n$\\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$ \nthat is isomorphic to $\\mathcal{MV}(\\lambda)$ as a crystal. \n\\end{rem}\n\nThe proof of this theorem will be given \nin \\S\\ref{subsec:prf-tensor} below; \nit seems likely that this theorem still holds \nwithout the assumption of extremality \non the MV polytope $P_{2} \\in \\mathcal{MV}(\\lambda_{2})$.\n\nFor dominant coweights \n$\\lambda_{1},\\,\\lambda_{2} \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$, \nthere exists a unique embedding\n\\begin{equation*}\n\\iota_{\\lambda_{1},\\lambda_{2}}:\n \\mathcal{MV}(\\lambda_{1}+\\lambda_{2}) \\hookrightarrow \n \\mathcal{MV}(\\lambda_{1}) \\otimes \\mathcal{MV}(\\lambda_{2})\n\\end{equation*}\nof crystals, which maps $\\mathcal{MV}(\\lambda_{1}+\\lambda_{2})$ \nonto the unique connected component of \n$\\mathcal{MV}(\\lambda_{1}) \\otimes \\mathcal{MV}(\\lambda_{2})$ \n(called the Cartan component) \nthat is isomorphic to $\\mathcal{MV}(\\lambda_{1}+\\lambda_{2})$ \nas a crystal; note that \n$\\iota_{\\lambda_{1},\\lambda_{2}}(P_{\\lambda_{1}+\\lambda_{2}})=\n P_{\\lambda_{1}} \\otimes P_{\\lambda_{2}}$. \n %\nApplying Theorem~\\ref{thm:tensor} to the case \n$\\lambda=\\lambda_{1}+\\lambda_{2}$, we obtain \nthe following corollary; notice that the ordering of the \ntensor factors $\\mathcal{MV}(\\lambda_{1})$, $\\mathcal{MV}(\\lambda_{2})$ is reversed. 
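\nMore precisely, the corollary below is obtained from Theorem~\\ref{thm:tensor} by setting $\\lambda:=\\lambda_{1}+\\lambda_{2}$ and interchanging the roles of $\\lambda_{1}$ and $\\lambda_{2}$; under this substitution, the tensor factors correspond as\n\\begin{equation*}\n\\underbrace{P_{2} \\otimes P_{1}}_{\\text{in Theorem~\\ref{thm:tensor}}}\n\\quad \\longleftrightarrow \\quad\n\\underbrace{P_{1} \\otimes P_{2}}_{\\text{in Corollary~\\ref{cor:tensor}}},\n\\end{equation*}\nso that in both statements the extremality assumption is imposed on the leftmost tensor factor.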
\n\\begin{cor} \\label{cor:tensor}\nLet $\\lambda_{1},\\,\\lambda_{2} \\in X_{\\ast}(T) \n\\subset \\mathfrak{h}_{\\mathbb{R}}$ be dominant coweights. \nLet $P \\in \\mathcal{MV}(\\lambda_{1}+\\lambda_{2})$, and \nwrite $\\iota_{\\lambda_{1},\\lambda_{2}}(P) \\in \n\\mathcal{MV}(\\lambda_{1}) \\otimes \\mathcal{MV}(\\lambda_{2})$ as\\,{\\rm:} \n$\\iota_{\\lambda_{1},\\lambda_{2}}(P)=P_{1} \\otimes P_{2}$ for some \n$P_{1} \\in \\mathcal{MV}(\\lambda_{1})$ and \n$P_{2} \\in \\mathcal{MV}(\\lambda_{2})$. \nWe assume that the MV polytope \n$P_{1} \\in \\mathcal{MV}(\\lambda_{1})$ is an extremal MV polytope \n$P_{x \\cdot \\lambda_{1}}$ for some $x \\in W$. \nThen, we have\n\\begin{equation} \\label{eq:Minkowski2}\nP \\subset P_{1} + P_{2}. \n\\end{equation}\n\\end{cor}\n\nThe following is a particular case in which \nthe equality holds in \\eqref{eq:Minkowski2}. \n\\begin{prop} \\label{prop:Min-Ext}\nLet $\\lambda_{1},\\,\\lambda_{2} \\in X_{\\ast}(T) \\subset \\mathfrak{h}_{\\mathbb{R}}$ \nbe dominant coweights, and let $x \\in W$. Then, \n$\\iota_{\\lambda_{1},\\lambda_{2}}(P_{x \\cdot (\\lambda_{1}+\\lambda_{2})})=\nP_{x \\cdot \\lambda_{1}} \\otimes P_{x \\cdot \\lambda_{2}}$, and \n$P_{x \\cdot (\\lambda_{1}+\\lambda_{2})}=\n P_{x \\cdot \\lambda_{1}}+P_{x \\cdot \\lambda_{2}}$. \n\\end{prop}\n\n\\begin{proof}\nFirst we show the equality \n$\\iota_{\\lambda_{1},\\lambda_{2}}(P_{x \\cdot (\\lambda_{1}+\\lambda_{2})})=\n P_{x \\cdot \\lambda_{1}} \\otimes P_{x \\cdot \\lambda_{2}}$ \n(which may be well-known to experts) by induction on $\\ell(x)$. \nIf $x=e$, then we have \n$\\iota_{\\lambda_{1},\\lambda_{2}}(P_{\\lambda_{1}+\\lambda_{2}})=\n P_{\\lambda_{1}} \\otimes P_{\\lambda_{2}}$ as mentioned above. \nAssume now that $\\ell(x) > 0$. We take $j \\in I$ \nsuch that $\\ell(s_{j}x) < \\ell(x)$. 
Set\n\\begin{equation*}\nk:=\\pair{s_{j}x \\cdot (\\lambda_{1}+\\lambda_{2})}{\\alpha_{j}}, \\quad\nk_{1}:=\\pair{s_{j}x \\cdot \\lambda_{1}}{\\alpha_{j}}, \\quad\nk_{2}:=\\pair{s_{j}x \\cdot \\lambda_{2}}{\\alpha_{j}};\n\\end{equation*}\nnote that we have $k=k_{1}+k_{2}$, \nwith $k_{1},\\,k_{2},\\,k \\in \\mathbb{Z}_{\\ge 0}$, \nsince $\\ell(s_{j}x) < \\ell(x)$. We see from \n\\cite[Lemme~8.3.1]{Kasb} that \n$f_{j}^{k}P_{s_{j}x \\cdot (\\lambda_{1}+\\lambda_{2})}$\nis equal to $P_{x \\cdot (\\lambda_{1}+\\lambda_{2})}$. \nHence we have \n\\begin{align*}\n\\iota_{\\lambda_{1},\\lambda_{2}}\n (P_{x \\cdot (\\lambda_{1}+\\lambda_{2})}) & \n =\\iota_{\\lambda_{1},\\lambda_{2}}\n (f_{j}^{k}P_{s_{j}x \\cdot (\\lambda_{1}+\\lambda_{2})}) \n =f_{j}^{k}\\iota_{\\lambda_{1},\\lambda_{2}}\n (P_{s_{j}x \\cdot (\\lambda_{1}+\\lambda_{2})}) \\\\\n& = f_{j}^{k}(\n P_{s_{j}x \\cdot \\lambda_{1}} \\otimes \n P_{s_{j}x \\cdot \\lambda_{2}})\n\\quad \\text{by the induction hypothesis}.\n\\end{align*}\nHere, by the tensor product rule \nfor crystals, \n\\begin{equation*}\nf_{j}^{k}(\n P_{s_{j}x \\cdot \\lambda_{1}} \\otimes \n P_{s_{j}x \\cdot \\lambda_{2}}\n)\n= (f_{j}^{l_1}P_{s_{j}x \\cdot \\lambda_{1}}) \\otimes \n (f_{j}^{l_2}P_{s_{j}x \\cdot \\lambda_{2}})\n\\end{equation*}\nfor some $l_{1},\\,l_{2} \\in \\mathbb{Z}_{\\ge 0}$ with $k=l_{1}+l_{2}$.\nIt follows from \\cite[Lemme~8.3.1]{Kasb} that \n$l_{1}=k_{1}$ and $l_{2}=k_{2}$. \nTherefore, we deduce that \n\\begin{align*}\n\\iota_{\\lambda_{1},\\lambda_{2}}\n (P_{x \\cdot (\\lambda_{1}+\\lambda_{2})})\n& = \nf_{j}^{k}(\n P_{s_{j}x \\cdot \\lambda_{1}} \\otimes \n P_{s_{j}x \\cdot \\lambda_{2}})\n= (f_{j}^{k_1}P_{s_{j}x \\cdot \\lambda_{1}}) \\otimes \n (f_{j}^{k_2}P_{s_{j}x \\cdot \\lambda_{2}}) \\\\\n& = \nP_{x \\cdot \\lambda_{1}} \\otimes P_{x \\cdot \\lambda_{2}}\n\\quad \\text{by \\cite[Lemme~8.3.1]{Kasb}}.\n\\end{align*}\nThis proves the first equality. 
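\nThe computation above appeals to the tensor product rule for crystals without restating it. As a reminder (this is the standard rule for the lowering operators in Kashiwara's convention, which this paper follows; nothing here is new), for elements $b_{1} \otimes b_{2}$ of a two-fold tensor product of crystals one has:

```latex
f_{j}(b_{1} \otimes b_{2}) =
  \begin{cases}
    f_{j}b_{1} \otimes b_{2} & \text{if $\varphi_{j}(b_{1}) > \varepsilon_{j}(b_{2})$}, \\
    b_{1} \otimes f_{j}b_{2} & \text{otherwise}.
  \end{cases}
```

Iterating this rule shows that $f_{j}^{k}(b_{1} \otimes b_{2})$, when nonzero, is of the form $(f_{j}^{l_{1}}b_{1}) \otimes (f_{j}^{l_{2}}b_{2})$ for some $l_{1},\,l_{2} \in \mathbb{Z}_{\ge 0}$ with $l_{1}+l_{2}=k$, which is exactly the shape used in the displayed computation.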
\n\nNext we show the equality\n$P_{x \\cdot (\\lambda_{1}+\\lambda_{2})}=\n P_{x \\cdot \\lambda_{1}}+P_{x \\cdot \\lambda_{2}}$.\nLet us denote by\n\\begin{equation*}\n\\mu_{\\bullet}^{x \\cdot (\\lambda_{1}+\\lambda_{2})}=\n (\\mu_{w}^{x \\cdot (\\lambda_{1}+\\lambda_{2})})_{w \\in W}, \\quad\n\\mu_{\\bullet}^{x \\cdot \\lambda_{1}}=\n (\\mu_{w}^{x \\cdot \\lambda_{1}})_{w \\in W}, \\quad \\text{and} \\quad\n\\mu_{\\bullet}^{x \\cdot \\lambda_{2}}=\n (\\mu_{w}^{x \\cdot \\lambda_{2}})_{w \\in W}\n\\end{equation*}\nthe GGMS data of the extremal MV polytopes \n$P_{x \\cdot (\\lambda_{1}+\\lambda_{2})} \\in \\mathcal{MV}(\\lambda_{1}+\\lambda_{2})$, \n$P_{x \\cdot \\lambda_{1}} \\in \\mathcal{MV}(\\lambda_{1})$, and \n$P_{x \\cdot \\lambda_{2}} \\in \\mathcal{MV}(\\lambda_{2})$, respectively.\nWe verify the equality \n\\begin{equation} \\label{eq:me1}\n\\mu_{w}^{x \\cdot (\\lambda_{1}+\\lambda_{2})} = \n\\mu_{w}^{x \\cdot \\lambda_{1}}+\\mu_{w}^{x \\cdot \\lambda_{2}}\n\\quad \\text{for every $w \\in W$}.\n\\end{equation}\nLet $w \\in W$, and \ntake $\\mathbf{i} \\in R(w_{0})$ such that \n$w=\\wi{l}$ for some $0 \\le l \\le m$. \nThen it follows from \nTheorem~\\ref{thm:GGMS-Ext} that \n\\begin{align*}\n& \\mu_{w}^{x \\cdot (\\lambda_{1}+\\lambda_{2})}=\n \\mu_{\\wi{l}}^{x \\cdot (\\lambda_{1}+\\lambda_{2})}=\n \\yi{l} \\cdot (\\lambda_{1}+\\lambda_{2}), \\quad \\text{and} \\\\\n& \\mu_{w}^{x \\cdot \\lambda_{1}}=\n \\mu_{\\wi{l}}^{x \\cdot \\lambda_{1}}=\n \\yi{l} \\cdot \\lambda_{1}, \\qquad\n \\mu_{w}^{x \\cdot \\lambda_{2}}=\n \\mu_{\\wi{l}}^{x \\cdot \\lambda_{2}}=\n \\yi{l} \\cdot \\lambda_{2},\n\\end{align*}\nwhere $\\yi{l}$ is defined as \\eqref{eq:yi}; \nrecall from Remark~\\ref{rem:zi} that \n$\\yi{l} \\in W$ does not depend \non the dominant coweights $\\lambda_{1}+\\lambda_{2}$, \n$\\lambda_{1}$, and $\\lambda_{2}$. 
\nTherefore, we deduce that \n\begin{equation*}\n\mu_{w}^{x \cdot (\lambda_{1}+\lambda_{2})} = \n\yi{l} \cdot (\lambda_{1}+\lambda_{2})=\n\yi{l} \cdot \lambda_{1}+\yi{l} \cdot \lambda_{2}=\n\mu_{w}^{x \cdot \lambda_{1}}+\mu_{w}^{x \cdot \lambda_{2}},\n\end{equation*}\nas desired. Hence it follows that \n\begin{equation} \label{eq:GGMS-Ext}\n\mu_{\bullet}^{x \cdot (\lambda_{1}+\lambda_{2})}=\n (\mu_{w}^{x \cdot (\lambda_{1}+\lambda_{2})})_{w \in W}=\n(\mu_{w}^{x \cdot \lambda_{1}}+\mu_{w}^{x \cdot \lambda_{2}})_{w \in W}=\n\mu_{\bullet}^{x \cdot \lambda_{1}}+\n\mu_{\bullet}^{x \cdot \lambda_{2}}.\n\end{equation}\nConsequently, we have \n\begin{align*}\nP_{x \cdot \lambda_{1}}+\nP_{x \cdot \lambda_{2}} & =\nP(\mu_{\bullet}^{x \cdot \lambda_{1}})+\nP(\mu_{\bullet}^{x \cdot \lambda_{2}}) = \nP(\mu_{\bullet}^{x \cdot \lambda_{1}}+\n \mu_{\bullet}^{x \cdot \lambda_{2}}) \qquad\n\text{by Proposition~\ref{prop:Minkowski}} \\\n& =\nP(\mu_{\bullet}^{x \cdot (\lambda_{1}+\lambda_{2})}) \qquad\n\text{by \eqref{eq:GGMS-Ext}} \\\n& = P_{x \cdot (\lambda_{1}+\lambda_{2})}.\n\end{align*}\nThis proves the second equality, \nthereby completing the proof of the proposition. \n\end{proof}\n\n\subsection{Reformulation of Braverman-Gaitsgory's result on tensor products.}\n\label{subsec:BG}\nIn this subsection, we revisit a result of Braverman-Gaitsgory on \ntensor products of highest weight crystals, and provide \na reformulation of it, which is needed in our proof of \nTheorem~\ref{thm:tensor} given in \S\ref{subsec:prf-tensor}. \n\nWe now recall the construction of \ncertain twisted product varieties. \nLet $G$ be a complex, connected, reductive algebraic group \nas in (the beginning of) \S\ref{subsec:geom}. 
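Proposition~\ref{prop:Minkowski}, used in the proof just completed, reduces the Minkowski sum of two MV polytopes to the componentwise sum of their GGMS data. As a purely generic sanity check of this kind of computation (plain Python, in no way specific to MV polytopes; the function name is ours), one can compute Minkowski sums of planar vertex sets directly: the convex hull of all pairwise vector sums is the Minkowski sum of the convex hulls.

```python
from itertools import product

def pairwise_sums(P, Q):
    # All vector sums p + q over p in P, q in Q.  The convex hull of
    # the returned set equals the Minkowski sum of conv(P) and conv(Q).
    return {(px + qx, py + qy) for (px, py), (qx, qy) in product(P, Q)}

# Two intervals on the x-axis: [0, 3] + [0, 2] = [0, 5], with the
# lattice sums 2 and 3 appearing as interior (non-vertex) points.
A = {(0, 0), (3, 0)}
B = {(0, 0), (2, 0)}
print(sorted(pairwise_sums(A, B)))  # [(0, 0), (2, 0), (3, 0), (5, 0)]
```

For the extremal polytopes of the proposition no hull computation is needed: the vertices $\mu_{w}$, indexed by $w \in W$, simply add componentwise.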
\nThe twisted product variety $\\CG r \\,\\ti{\\times}\\, \\CG r$ is defined to be \nthe quotient space\n\\begin{equation*}\nG(\\mathcal{K}) \\times^{G(\\mathcal{O})} \\CG r = \\bigl(G(\\mathcal{K}) \\times \\CG r\\bigr)\/\\sim, \n\\end{equation*}\nwhere $\\sim$ is an equivalence relation on \n$G(\\mathcal{K}) \\times \\CG r$ given by: \n$(a,\\,bG (\\mathcal{O})) \\sim (ah^{-1},\\,hb G(\\mathcal{O}))$ for \n$a,\\,b \\in G(\\mathcal{K})$, and $h \\in G (\\mathcal{O})$; \nfor $(a,\\,bG(\\mathcal{O})) \\in G(\\mathcal{K}) \\times \\CG r$, we denote by \n$[(a,\\,bG(\\mathcal{O}))]$ the equivalence class of $(a,\\,bG(\\mathcal{O}))$. \nThis variety can be thought of as a fibration over $\\CG r$ \n(the first factor) with its typical fiber $\\CG r$ (the second factor). \nLet $\\pi_{1} : \\CG r \\,\\ti{\\times}\\, \\CG r \\twoheadrightarrow \\CG r$, \n$[(a,\\,bG(\\mathcal{O}))] \\mapsto a G(\\mathcal{O})$, \nbe the projection onto the first factor, and \n$\\mathsf{m}:\\CG r \\,\\ti{\\times}\\, \\CG r \\rightarrow \\CG r$, \n$[(a,\\,bG(\\mathcal{O}))] \\mapsto ab G(\\mathcal{O})$, the multiplication map. 
\nIf $\\mathcal{X} \\subset \\CG r$ is an algebraic subvariety and \n$\\mathcal{Y} \\subset \\CG r$ is an algebraic subvariety that \nis stable under the left $G(\\mathcal{O})$-action on $\\CG r$, \nthen we can form an algebraic subvariety\n\\begin{equation*}\n\\mathcal{X} \\,\\ti{\\times}\\, \\mathcal{Y} := \\ti{\\mathcal{X}} \\times^{G(\\mathcal{O})} \\mathcal{Y} \\subset \\CG r \\,\\ti{\\times}\\, \\CG r,\n\\end{equation*}\nwhere $\\ti{\\mathcal{X}}:=\\pi_{1}^{-1}(\\mathcal{X})$ is the pullback of \n$\\mathcal{X} \\subset \\CG r=G(\\mathcal{K})\/G(\\mathcal{O})$ to $G(\\mathcal{K})$.\n\\begin{prop}[{\\cite{Lu-Ast}, \\cite[Lemma~4.4]{MV2}}] \\label{prop:mv-44}\nThe multiplication map \n$\\mathsf{m}:\\CG r \\,\\ti{\\times}\\, \\CG r \\rightarrow \\CG r$, when restricted to \n$\\ol{\\CG r^{\\lambda_{1}}} \\,\\ti{\\times}\\, \\ol{\\CG r^{\\lambda_{2}}}$ \nfor $\\lambda_{1},\\,\\lambda_{2} \\in X_{*}(T)_{+}$, is \nprojective, birational, and semi-small \nwith respect to the stratification by \n$G(\\mathcal{O})$-orbits. 
In particular, \nfor $\lambda_{1},\,\lambda_{2} \in X_{*}(T)_{+}$ and \n$\lambda \in X_{*}(T)_{+}$, \n\begin{equation*}\n\mathsf{m} \left(\ol{\CG r^{\lambda_{1}}} \,\ti{\times}\, \ol{\CG r^{\lambda_{2}}} \right) = \n\ol{\CG r^{\lambda_{1} + \lambda_{2}}},\n\end{equation*}\n\begin{equation*}\n\mathsf{m}^{-1} (\CG r^{\lambda}) \cap \n\left(\n \ol{\CG r^{\lambda_{1}}} \,\ti{\times}\, \n \ol{\CG r^{\lambda_{2}}}\right) \neq \emptyset\n\quad \text{\rm if and only if} \quad\n\lambda_{1} + \lambda_{2} \ge \lambda;\n\end{equation*}\nif $\lambda_{1} + \lambda_{2} \ge \lambda$, then \n\begin{equation*}\n\dim \left( \n \mathsf{m}^{-1} (\CG r^{\lambda}) \cap \n \left(\ol{\CG r^{\lambda_{1}}} \,\ti{\times}\, \ol{\CG r^{\lambda_{2}}}\right)\n\right) \n\le \dim \CG r^{\lambda} + \n\pair{\lambda_{1}+\lambda_{2}-\lambda}{\rho}.\n\end{equation*}\n\end{prop}\n\nLet us introduce another kind of twisted product. \nFor each $\nu_{1},\,\nu_{2} \in X_{*}(T)$ and $w \in W$, \nwe define $S^{w}_{\nu_{1},\,\nu_{2}}$ to be the quotient space\n\begin{equation*}\n{}^{w}U(\mathcal{K}) t^{\nu_{1}} \times^{{}^{w}U(\mathcal{O})} {}^{w}U(\mathcal{K})[t^{\nu_{2}}]\n=\n\bigl(\n {}^{w}U(\mathcal{K}) t^{\nu_{1}} \times {}^{w}U(\mathcal{K})[t^{\nu_{2}}]\n\bigr)\/\sim,\n\end{equation*}\nwhere $\sim$ is an equivalence relation on \n${}^{w}U(\mathcal{K}) t^{\nu_{1}} \times {}^{w}U(\mathcal{K})[t^{\nu_{2}}]$ \ngiven by: \n$(a,\,b) \sim (au^{-1},\,ub)$ for \n$a \in {}^{w}U(\mathcal{K}) t^{\nu_{1}} \subset G(\mathcal{K})$, \n$b \in {}^{w}U(\mathcal{K})[t^{\nu_{2}}] \subset \CG r$, and \n$u \in {}^{w}U(\mathcal{O}) \subset G(\mathcal{O})$. 
\nSince ${}^{w}U(\\mathcal{O})=G(\\mathcal{O}) \\cap {}^{w}U(\\mathcal{K})$ and \n$t^{\\nu} ({}^{w}U(\\mathcal{K})) = ({}^{w}U(\\mathcal{K})) t^{\\nu}$ \nfor $\\nu \\in X_{*}(T)$, we have a canonical embedding\n\\begin{equation} \\label{eq:emb-s}\nS^{w}_{\\nu_{1},\\,\\nu_{2}} \\hookrightarrow \nG(\\mathcal{K}) \\times^{G(\\mathcal{O})} \\CG r=\\CG r \\,\\ti{\\times}\\, \\CG r.\n\\end{equation}\n\\begin{lem} \\label{lem:invS}\nFor each $\\nu \\in X_{*}(T)$ and $w \\in W$, we have\n\\begin{equation*}\n\\mathsf{m}^{-1} (S^{w}_{\\nu}) = \n \\bigsqcup_{\n \\begin{subarray}{c}\n \\nu_{1},\\,\\nu_{2} \\in X_{*}(T) \\\\[1.5mm]\n \\nu_{1} + \\nu_{2} = \\nu\n \\end{subarray}\n } S^{w}_{\\nu_{1},\\,\\nu_{2}}\n\\end{equation*}\nunder the canonical embedding \\eqref{eq:emb-s}.\n\\end{lem}\n\n\\begin{proof}\nIt follows from the Iwasawa decomposition\n$\\CG r=\\bigsqcup_{\\nu_{1} \\in X_{*}(T)} S_{\\nu_{1}}^{w}$ that \n\\begin{equation*}\n\\CG r \\,\\ti{\\times}\\, \\CG r=\n\\bigsqcup_{\\nu_{1} \\in X_{*}(T)} \n \\pi_{1}^{-1}(S_{\\nu_{1}}^{w}),\n\\end{equation*}\nwhere $\\pi_{1}^{-1}(S_{\\nu_{1}}^{w})=\n {}^{w}U(\\mathcal{K})t^{\\nu_{1}}G(\\mathcal{O}) \\times^{G(\\mathcal{O})}\\CG r$. \nTherefore it suffices to show that\n\\begin{equation*}\n\\pi_{1}^{-1}(S_{\\nu_{1}}^{w}) \\cap \n\\mathsf{m}^{-1} (S^{w}_{\\nu}) =\nS^{w}_{\\nu_{1},\\,\\nu-\\nu_{1}}\n\\quad\n\\text{for each $\\nu_{1} \\in X_{*}(T)$}.\n\\end{equation*}\nNow, for each $\\nu_{1} \\in X_{*}(T)$, \nlet us take $[y] \\in \n\\pi_{1}^{-1}(S_{\\nu_{1}}^{w}) \\cap \n\\mathsf{m}^{-1} (S^{w}_{\\nu})$, \nwhere $y \\in \n {}^{w}U(\\mathcal{K})t^{\\nu_{1}}G(\\mathcal{O}) \\times \\CG r$, and \nwrite it as: $y=(u_{1}t^{\\nu_{1}}g_{1},\\,g_{2}G(\\mathcal{O}))$ \nfor $u_{1} \\in {}^{w}U(\\mathcal{K})$, $g_{1} \\in G(\\mathcal{O})$, and \n$g_{2} \\in G(\\mathcal{K})$. 
\nSince $\\mathsf{m}([y])=u_{1}t^{\\nu_{1}}g_{1}g_{2}G(\\mathcal{O}) \\in S^{w}_{\\nu}$, \nwe have \n\\begin{equation*}\ng_{1}g_{2}G(\\mathcal{O}) \\in \n (u_{1}t^{\\nu_{1}})^{-1}S^{w}_{\\nu}=\n {}^{w}U(\\mathcal{K})t^{\\nu-\\nu_{1}}G(\\mathcal{O}).\n\\end{equation*}\nConsequently, using the equivalence relation $\\sim$ \non $G(\\mathcal{K}) \\times \\CG r$, we see that\n\\begin{equation*}\n[y]\n =\\bigl[ (u_{1}t^{\\nu_{1}}g_{1},\\,g_{2}G(\\mathcal{O})) \\bigr]\n =\\bigl[ (u_{1}t^{\\nu_{1}},\\,g_{1}g_{2}G(\\mathcal{O})) \\bigr]\n =\\bigl[ (u_{1}t^{\\nu_{1}},\\,u_{2}t^{\\nu-\\nu_{1}}G(\\mathcal{O})) \\bigr]\n\\end{equation*}\nfor some $u_{2} \\in {}^{w}U(\\mathcal{K})$. \nThis implies that \n\\begin{equation*}\n[y] \\in \n {}^{w}U(\\mathcal{K})t^{\\nu_{1}} \n \\times^{{}^{w}U(\\mathcal{O})} \n {}^{w}U(\\mathcal{K})[t^{\\nu-\\nu_{1}}]=\nS^{w}_{\\nu_{1},\\,\\nu-\\nu_{1}},\n\\end{equation*}\nand hence \n$\\pi_{1}^{-1}(S_{\\nu_{1}}^{w}) \\cap \n \\mathsf{m}^{-1} (S^{w}_{\\nu}) \\subset \nS^{w}_{\\nu_{1},\\,\\nu-\\nu_{1}}$.\nThe opposite inclusion is obvious. \nThus, we obtain \n$\\pi_{1}^{-1}(S_{\\nu_{1}}^{w}) \\cap \n \\mathsf{m}^{-1} (S^{w}_{\\nu})=\n S^{w}_{\\nu_{1},\\,\\nu-\\nu_{1}}$.\nThis proves the lemma. \n\\end{proof}\n\nFor $\\nu_{1},\\,\\nu_{2} \\in X_{*}(T)$, we set\n$S_{\\nu_{1},\\,\\nu_{2}} := S_{\\nu_{1},\\,\\nu_{2}}^{e}$. 
\nIf we take (and fix) an element $t \\in T(\\mathbb{R})$ such that\n\\begin{equation*}\n\\lim_{k \\to \\infty} \\mathop{\\rm Ad}\\nolimits(t^{k}) u = e\n\\quad \\text{for all $u \\in U$}, \n\\end{equation*}\nthen we have (by \\cite[Eq.(3.5)]{MV2})\n\\begin{equation*}\nS_{\\nu}=\n \\left\\{\n [y] \\in \\CG r \\ \\biggm| \\ \n \\lim_{k \\to \\infty} t^{k}[y]=[t^{\\nu}]\n \\right\\}\n\\quad \\text{for $\\nu \\in X_{*}(T)$}.\n\\end{equation*}\nFrom this, by using Lemma~\\ref{lem:invS}, \nwe have\n\\begin{equation} \\label{eq:Snn}\nS_{\\nu_{1},\\,\\nu_{2}} = \n \\left\\{ [y] \\in \\CG r \\,\\ti{\\times}\\, \\CG r \\ \\biggm| \\ \n \\lim_{k \\to \\infty} t^{k}[y]=\n (t^{\\nu_{1}},\\,[t^{\\nu_{2}}])\n \\right\\}\n\\quad \\text{for $\\nu_{1},\\,\\nu_{2} \\in X_{*}(T)$};\n\\end{equation}\nin particular, these strata of $\\CG r \\,\\ti{\\times}\\, \\CG r$ \nare simply-connected. \n\nFor each $\\nu \\in X_{*}(T)$ and $w \\in W$, \nwe set\n\\begin{equation*}\n\\mathbb{S}^{w}_{\\nu} := {}^{w}U(\\mathcal{K})t^{\\nu}({}^{w}U(\\mathcal{O})) \/ {}^{w}U(\\mathcal{O}), \n\\end{equation*}\nwhich is canonically isomorphic to \n${}^{w}U(\\mathcal{K}) t^{\\nu} G (\\mathcal{O}) \/ G (\\mathcal{O}) = S^{w}_{\\nu}$ \nsince ${}^{w}U(\\mathcal{K}) \\cap G(\\mathcal{O})={}^{w}U(\\mathcal{O})$; \nnote that ${}^{w}U(\\mathcal{K})t^{\\nu}({}^{w}U(\\mathcal{O}))={}^{w}U(\\mathcal{K})t^{\\nu}$\nsince $t^{\\nu}({}^{w}U(\\mathcal{K}))=({}^{w}U(\\mathcal{K}))t^{\\nu}$. \nAlso, for a subset $\\mathcal{X} \\subset \\CG r$, \nwe define the intersection $\\mathcal{X} \\cap \\mathbb{S}^{w}_{\\nu}$ \nto be the image of $\\mathcal{X} \\cap S^{w}_{\\nu} \\subset S_{\\nu}^{w}$ \nunder the identification $\\mathbb{S}^{w}_{\\nu} = S^{w}_{\\nu}$ above. 
\n\nIn the sequel, for an algebraic variety $\\mathcal{X}$, \nwe denote by $\\mathop{\\rm Irr}\\nolimits(\\mathcal{X})$ the set of irreducible components of $\\mathcal{X}$.\nLet $\\lambda \\in X_{*}(T)_{+}$ and $\\nu \\in X_{*}(T)$ be \nsuch that $\\CG r^{\\lambda} \\cap S_{\\nu} \\ne \\emptyset$, \ni.e., $\\nu \\in \\Omega(\\lambda)$; note that \n$\\CG r^{\\lambda} \\cap S_{\\nu} \\ne \\emptyset$ if and only if \n$\\CG r^{\\lambda} \\cap S_{w^{-1} \\cdot \\nu} \\ne \\emptyset$, i.e., \n$w^{-1} \\cdot \\nu \\in \\Omega(\\lambda)$ for each $w \\in W$. \nLet us take an arbitrary $w \\in W$. \nBecause $\\CG r^{\\lambda} \\cap S_{\\nu}^{w}=\n\\dot{w}(\\CG r^{\\lambda} \\cap S_{w^{-1} \\cdot \\nu})$ \nfor $w \\in W$, we have a bijection\n\\begin{equation*}\n\\mathop{\\rm Irr}\\nolimits\\left(\\ol{\\CG r^{\\lambda} \\cap S_{w^{-1} \\cdot \\nu}}\\right)\n\\rightarrow\n\\mathop{\\rm Irr}\\nolimits\\left(\\ol{\\CG r^{\\lambda} \\cap S_{\\nu}^{w}}\\right), \\quad\n\\mathbf{b} \\mapsto \\dot{w}\\mathbf{b}. \n\\end{equation*}\nThus, each element of \n$\\mathop{\\rm Irr}\\nolimits\\left(\\ol{\\CG r^{\\lambda} \\cap S_{\\nu}^{w}}\\right)$ \ncan be written in the form $\\dot{w}\\mathbf{b}$ for a unique \nMV cycle $\\mathbf{b} \\in \\mathcal{Z}(\\lambda)_{w^{-1} \\cdot \\nu}=\n\\mathop{\\rm Irr}\\nolimits\\left(\\ol{\\CG r^{\\lambda} \\cap S_{w^{-1} \\cdot \\nu}}\\right)$.\nThe variety $\\mathbf{b}^{w}$ defined by \\eqref{eq:bbw} is a special case of \nsuch elements, in which $\\mathbf{b} \\in \\mathcal{Z}(\\lambda)$ is an extremal \nMV cycle with GGMS datum $\\mu_{\\bullet}$ and $\\nu=\\mu_{w}$. 
\n\\begin{lem} \\label{lem:stable}\nWith the notation as above, \nlet us take an arbitrary element \n\\begin{equation*}\n\\dot{w}\\mathbf{b} \\in \n\\mathop{\\rm Irr}\\nolimits\\left(\\ol{\\CG r^{\\lambda} \\cap S_{\\nu}^{w}}\\right), \\quad\n\\text{\\rm where} \\quad \n\\mathbf{b} \\in \\mathcal{Z}(\\lambda)_{w^{-1} \\cdot \\nu}.\n\\end{equation*} \nThen, the intersection $\\dot{w}\\mathbf{b} \\cap S_{\\gamma}^{w}$ \n(and hence $\\dot{w}\\mathbf{b} \\cap \\mathbb{S}_{\\gamma}^{w}$) is \nstable under the action of ${}^{w}U(\\mathcal{O})$ \nfor all $\\gamma \\in X_{*}(T)$. \n\\end{lem}\n\n\\begin{proof}\nBy definition, each MV cycle is an irreducible component of \nthe (Zariski-) closure of the intersection of a $G(\\mathcal{O})$-orbit \nand a $U(\\mathcal{K})$-orbit. Since $U(\\mathcal{O})=G(\\mathcal{O}) \\cap U(\\mathcal{K})$ is \nconnected, such an irreducible component is stable under \nthe action of $U(\\mathcal{O})$. Therefore, the variety $\\dot{w}\\mathbf{b}$ \n(and hence its intersection with an arbitrary ${}^{w}U(\\mathcal{K})$-orbit) \nis stable under the action of ${}^{w}U(\\mathcal{O})$. \nThis proves the lemma. \n\\end{proof}\n\nLet $\\lambda_{1},\\,\\lambda_{2} \\in X_{*}(T)_{+}$ and \n$\\nu_{1},\\,\\nu_{2} \\in X_{*}(T)$ be such that \n$\\CG r^{\\lambda_{i}} \\cap S_{\\nu_{i}} \\ne \\emptyset$ \nfor $i=1,\\,2$, and let $w \\in W$. \nLet us take $\\mathbf{b}_{1} \\in \\mathcal{Z}(\\lambda_{1})_{\\nu_{1}}$, \n$\\dot{w}\\mathbf{b}_{2} \\in \n\\mathop{\\rm Irr}\\nolimits\\left(\\ol{\\CG r^{\\lambda_{2}} \\cap S_{\\nu_{2}}^{w}}\\right)$, \nand $\\gamma_{1},\\,\\gamma_{2} \\in X_{*}(T)$. 
\nThen, by virtue of the lemma above, \nwe can form the twisted product \n\\begin{equation*}\n(\\mathbf{b}_{1} \\cap \\mathbb{S}^{w}_{\\gamma_{1}})^{\\sim}\n \\times^{{}^{w}U(\\mathcal{O})} \n(\\dot{w}\\mathbf{b}_{2} \\cap \\mathbb{S}^{w}_{\\gamma_{2}}) \n\\subset \n{}^{w}U(\\mathcal{K})t^{\\gamma_{1}} \n \\times^{{}^{w}U(\\mathcal{O})} S^{w}_{\\gamma_{2}}=\nS^{w}_{\\gamma_{1},\\,\\gamma_{2}},\n\\end{equation*}\nwhere $(\\mathbf{b}_{1} \\cap \\mathbb{S}^{w}_{\\gamma_{1}})^{\\sim}$\ndenotes the pullback of \n\\begin{equation*}\n\\mathbf{b}_{1} \\cap \\mathbb{S}^{w}_{\\gamma_{1}} \\subset \\mathbb{S}^{w}_{\\gamma_{1}}=\n{}^{w}U(\\mathcal{K})t^{\\gamma_{1}}({}^{w}U(\\mathcal{O}))\/{}^{w}U(\\mathcal{O})\n\\end{equation*}\nto ${}^{w}U(\\mathcal{K})t^{\\gamma_{1}}({}^{w}U(\\mathcal{O}))=\n{}^{w}U(\\mathcal{K})t^{\\gamma_{1}} \\subset G(\\mathcal{K})$. \nBy $\\mathbf{b}_{1} \\str{w}{\\gamma_{1}}{\\gamma_{2}} \\dot{w}\\mathbf{b}_{2}$, \nwe denote the image of this algebraic subvariety of \n$\\CG r \\,\\ti{\\times}\\, \\CG r$ under the map \n$\\mathsf{m}:\\CG r \\,\\ti{\\times}\\, \\CG r \\rightarrow \\CG r$; note that \n$\\mathbf{b}_{1} \\str{w}{\\gamma_{1}}{\\gamma_{2}} \\dot{w}\\mathbf{b}_{2} \n \\subset \\mathsf{m}(S^{w}_{\\gamma_{1},\\,\\gamma_{2}})\n =S^{w}_{\\gamma_{1}+\\gamma_{2}}$. \n\nThe following is a reformulation of \nBraverman-Gaitsgory's result on tensor products of \nhighest weight crystals in \\cite{BrGa} (see also \\cite{BFG});\nwe will give a brief account of the relationship \nin the Appendix. Here we should warn the reader that \nthe convention on the tensor product rule for crystals \nin \\cite{BrGa} is opposite to ours, i.e., to that of Kashiwara \n(see, for example, \\cite{Kasoc} and \\cite{Kasb}). \n\\begin{thm} \\label{thm:BG}\nLet $\\lambda_{1},\\,\\lambda_{2} \\in X_{*}(T)_{+}$. 
\nThere exists a bijection\n\begin{equation*}\n\Phi_{\lambda_{1},\,\lambda_{2}} : \n \mathcal{MV} (\lambda_{1}) \times \mathcal{MV} (\lambda_{2})\n\rightarrow\n \bigsqcup_{\nu \in X_{*}(T)} \n \mathop{\rm Irr}\nolimits \left(\n \ol{\mathsf{m}^{-1} (S_{\nu}) \cap \n \bigl(\CG r^{\lambda_{1}} \,\ti{\times}\, \CG r^{\lambda_{2}}\bigr)} \n \right)\n\end{equation*}\ngiven as follows\,{\rm:}\nfor $P_{1} \in \mathcal{MV} (\lambda_{1})$ and $P_{2} \in \mathcal{MV}(\lambda_{2})$, \n\begin{equation} \label{eq:Phi-ll}\n\Phi_{\lambda_{1},\,\lambda_{2}}\n (P_{1},\,P_{2})=\n\ol{\n \bigl(\Phi_{\lambda_{1}}(P_{1}) \cap \mathbb{S}_{\nu_{1}}\bigr)^{\sim}\n \times^{U(\mathcal{O})}\n \bigl(\Phi_{\lambda_{2}}(P_{2}) \cap \mathbb{S}_{\nu_{2}}\bigr)\n}, \n\end{equation}\nwhere $\nu_{1}:=\mathop{\rm wt}\nolimits (P_{1})$, $\nu_{2}:=\mathop{\rm wt}\nolimits(P_{2})$, and \n$\bigl(\Phi_{\lambda_{1}}(P_{1}) \cap \mathbb{S}_{\nu_{1}}\bigr)^{\sim}$ denotes \nthe pullback of $\Phi_{\lambda_{1}}(P_{1}) \cap \mathbb{S}_{\nu_{1}} \subset S_{\nu_{1}}$ \nto $U(\mathcal{K})t^{\nu_{1}} \subset G(\mathcal{K})$. \nMoreover, the bijection $\Phi_{\lambda_{1},\,\lambda_{2}}$ \nhas the following properties. 
\n\n{\\rm (i)} \nFor each $P_{1} \\in \\mathcal{MV}(\\lambda_{1})$ and \n$P_{2} \\in \\mathcal{MV}(\\lambda_{2})$, \nthe image of \n$\\Phi_{\\lambda_{1},\\,\\lambda_{2}}(P_{1},\\,P_{2})$ \nunder the map $\\mathsf{m}$ is equal to $\\Phi_{\\lambda}(P)$ \nfor a unique $\\lambda \\in X_{*}(T)_{+}$ and \na unique $P \\in \\mathcal{MV}(\\lambda)$ such that \n$\\iota_{\\lambda}(P)=P_{2} \\otimes P_{1}$, \nwhere $\\iota_{\\lambda}:\\mathcal{MV}(\\lambda) \\hookrightarrow \n \\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$ is \nan embedding of crystals\\,{\\rm;}\n\n{\\rm (ii)}\n$\\pi_{1} (\\Phi_{\\lambda_{1},\\,\\lambda_{2}} (P_{1},\\,P_{2})) = \n \\Phi_{\\lambda_{1}} (P_{1})$ \nfor each $P_{1} \\in \\mathcal{MV} (\\lambda_{1})$ and \n$P_{2} \\in \\mathcal{MV} (\\lambda_{2})$\\,{\\rm;}\n\n{\\rm (iii)} \n$[(t^{\\nu_{1}},\\,\\Phi_{\\lambda_{2}} (P_{2}))] \\subset \n \\Phi_{\\lambda_{1},\\,\\lambda_{2}} (P_{1},\\,P_{2})$ \nfor each $P_{1} \\in \\mathcal{MV} (\\lambda_{1})$ with $\\nu_{1}=\\mathop{\\rm wt}\\nolimits (P_{1})$ \nand $P_{2} \\in \\mathcal{MV} (\\lambda_{2})$. \n\\end{thm}\n\n\\subsection{Proof of the polytopal estimate.}\n\\label{subsec:prf-tensor}\n\nThis subsection is devoted to the proof of Theorem~\\ref{thm:tensor}. \nLet $G$ be a complex, connected, semisimple algebraic group \nwith Lie algebra $\\mathfrak{g}$. We keep the setting of \\S\\ref{subsec:tensor}. \nLet $\\mu^{(1)}_{\\bullet}=(\\mu^{(1)}_{w})_{w \\in W}$ and \n$\\mu^{(2)}_{\\bullet}=(\\mu^{(2)}_{w})_{w \\in W}$ be the GGMS data of \n$P_{1} \\in \\mathcal{MV}(\\lambda_{1})$ and $P_{2} \\in \\mathcal{MV}(\\lambda_{2})$, \nrespectively. 
Also, let $\mu_{\bullet}=(\mu_{w})_{w \in W}$ be \nthe GGMS datum of $P \in \mathcal{MV}(\lambda)$; \nnote that\n\begin{equation*}\n\mu_{e}=\mathop{\rm wt}\nolimits(P)=\n\mathop{\rm wt}\nolimits(P_{1})+\mathop{\rm wt}\nolimits(P_{2})=\n\mu^{(1)}_{e}+\mu^{(2)}_{e}.\n\end{equation*}\nRecall from Proposition~\ref{prop:Minkowski} that \nthe Minkowski sum $P_{1}+P_{2}$ is a pseudo-Weyl polytope \n$P(\mu^{(1)}_{\bullet}+\mu^{(2)}_{\bullet})$ with GGMS datum \n$\mu^{(1)}_{\bullet}+\mu^{(2)}_{\bullet}\n =(\mu^{(1)}_{w}+\mu^{(2)}_{w})_{w \in W}$. \nTherefore, it follows from equation \eqref{eq:poly} \ntogether with Remark~\ref{rem:moment} that \n\begin{align*}\nP_{1}+P_{2} & = \n\bigcap_{w \in W}\n \bigl\{ \n v \in \mathfrak{h}_{\mathbb{R}} \mid \n w^{-1} \cdot v - w^{-1} \cdot \bigl(\mu^{(1)}_{w}+\mu^{(2)}_{w}\bigr) \n \in \textstyle{\sum_{j \in I}\mathbb{R}_{\ge 0}h_{j}}\n \bigr\} \\[3mm]\n& = \n\bigcap_{w \in W}\n \mathop{\rm Conv}\nolimits \biggl\{\n \gamma \in X_{*}(T) \subset \mathfrak{h}_{\mathbb{R}} \ \biggm| \ \n [t^{\gamma}] \in \ol{S^{w}_{\mu^{(1)}_{w}+\mu^{(2)}_{w}}}\n\biggr\}.\n\end{align*}\nAlso, recall from Theorem~\ref{thm:Kam1} that \n$\Phi_{\lambda}(P) \in \mathcal{Z}(\lambda)$ and \n\begin{equation*}\nP=\mathop{\rm Conv}\nolimits \bigl\{\n \gamma \in X_{*}(T) \subset \mathfrak{h}_{\mathbb{R}} \mid \n [t^{\gamma}] \in \Phi_{\lambda}(P)\n\bigr\}.\n\end{equation*}\nHence, in order to prove that $P \subset P_{1}+P_{2}$, \nit suffices to show that\n\begin{equation} \label{eq:goal}\n\Phi_{\lambda} (P) \subset \n \ol{S^{w}_{\mu_{w}^{(1)} + \mu_{w}^{(2)}}}\n\quad \text{for all $w \in W$}.\n\end{equation}\n\nWe set $\mathbf{b}^{(1)} := \Phi_{\lambda_{1}} (P_{1}) \in \mathcal{Z}(\lambda_{1})$ \nand $\mathbf{b}^{(2)} := \Phi_{\lambda_{2}} (P_{2}) \in \mathcal{Z}(\lambda_{2})$.\nBecause $P_{2}=P(\mu^{(2)}_{\bullet})$ is the extremal MV 
polytope \nof weight $x \cdot \lambda_{2}$ for some $x \in W$ by our assumption, \nwe know from Theorem~\ref{thm:GGMS-Ext} that \n$\mu^{(2)}_{w} \in W \cdot \lambda_{2}$ for all $w \in W$. \nHence the algebraic variety $\mathbf{b}^{(2),w}:=\n\ol{\CG r^{\lambda_{2}} \cap S^{w}_{\mu^{(2)}_{w}}}$ is \nirreducible, and is the $\dot{w}$-translate of \nthe extremal MV cycle $\mathbf{b}_{w^{-1} \cdot \mu^{(2)}_{w}}$ \nof weight $w^{-1} \cdot \mu^{(2)}_{w}$ \n(see Remark~\ref{rem:extcyc}); note that \n$\mathbf{b}^{(2),e}=\mathbf{b}^{(2)}$. \n\nNow suppose, contrary to our assertion \eqref{eq:goal}, \nthat \n$\Phi_{\lambda} (P) \not\subset \n \ol{S^{w}_{\mu_{w}^{(1)} + \mu_{w}^{(2)}}}$\nfor some $w \in W$; we take and fix such a $w \in W$. \nThen, by equation \eqref{eq:Snuw} (see also Remark~\ref{rem:Snuw}), \nthere exists some $\nu \in X_{*}(T)$ such that\n\begin{equation} \label{eq:prf1}\n\Phi_{\lambda}(P) \cap S_{\nu}^{w} \ne \emptyset\n\quad \text{and} \quad\nw^{-1} \cdot \nu \not\ge \nw^{-1} \cdot (\mu^{(1)}_{w} + \mu^{(2)}_{w}). \n\end{equation}\n\n\begin{claim*}\nFor the (fixed) $w \in W$ above, \nwe have the following inclusion of varieties when \nthey are regarded as subvarieties of $\CG r$\,{\rm:}\n\begin{equation*}\n\mathbf{b}^{(1)} \stra{e} \mathbf{b}^{(2)}\n\subset \n\mathbf{b}^{(1)} \stra{w} \mathbf{b}^{(2),w}.\n\end{equation*}\n\end{claim*}\n\n\noindent\n{\it Proof of Claim.} \nLet $w = s_{i_{1}} s_{i_{2}} \cdots s_{i_{\ell}}$ be \na reduced expression, and set \n$w_{k} = s_{i_{1}} s_{i_{2}} \cdots s_{i_{k}}$ \nfor $0 \le k \le \ell$. 
\nFor simplicity of notation, we set for $0 \\le k \\le \\ell$\n\\begin{equation*}\n\\mathbf{b}^{(2)}_{k}:=\\mathbf{b}^{(2),w_{k}}, \\qquad\n\\mathbf{b}^{(1)} \\star_{k} \\mathbf{b}^{(2)}:=\n\\mathbf{b}^{(1)} \\stra{w_{k}} \\mathbf{b}^{(2), w_{k}}, \n\\quad \\text{and}\n\\end{equation*}\n\\begin{equation*}\n\\mu^{(1)}_{k}:=\\mu^{(1)}_{w_{k}}, \\quad\n\\mu^{(2)}_{k}:=\\mu^{(2)}_{w_{k}}; \n\\end{equation*}\nnote that\n\\begin{equation*}\n\\mathbf{b}^{(1)} \\star_{0} \\mathbf{b}^{(2)}=\n\\mathbf{b}^{(1)} \\stra{e} \\mathbf{b}^{(2),e}=\n\\mathbf{b}^{(1)} \\stra{e} \\mathbf{b}^{(2)}. \n\\end{equation*}\nIf we can show the inclusion\n\\begin{equation} \\label{eq:incl}\n\\mathbf{b}^{(1)} \\star_{k} \\mathbf{b}^{(2)} \\subset \n\\ol{ \\mathbf{b}^{(1)} \\star_{k+1} \\mathbf{b}^{(2)} }\n\\quad \\text{for each $0 \\le k \\le \\ell-1$},\n\\end{equation}\nthen we will obtain the following sequence of \ninclusions: \n\\begin{align*}\n& \\mathbf{b}^{(1)} \\stra{e} \\mathbf{b}^{(2)}=\n \\mathbf{b}^{(1)} \\star_{0} \\mathbf{b}^{(2)} \\subset \n \\ol{ \\mathbf{b}^{(1)} \\star_{1} \\mathbf{b}^{(2)} } \\subset \n \\ol{ \\mathbf{b}^{(1)} \\star_{2} \\mathbf{b}^{(2)} } \\subset \\cdots \\\\\n& \\hspace*{20mm} \\cdots \\subset \n\\ol{ \\mathbf{b}^{(1)} \\star_{\\ell} \\mathbf{b}^{(2)} } =\n\\ol{ \\mathbf{b}^{(1)} \\stra{w_{\\ell}} \\mathbf{b}^{(2),w_{\\ell}} } =\n\\ol{ \\mathbf{b}^{(1)} \\stra{w} \\mathbf{b}^{(2),w} },\n\\end{align*}\nas desired. 
In order to show the inclusion \\eqref{eq:incl}, \ntake an element \n$[(y,\\,gG(\\mathcal{O}))] \\in \\mathbf{b}^{(1)} \\star_{k} \\mathbf{b}^{(2)}$, \nwhere \n\\begin{equation*}\ny \\in \n (\\mathbf{b}^{(1)} \\cap \\mathbb{S}^{w_{k}}_{\\mu^{(1)}_{k}})^{\\sim}\n \\subset {}^{w_{k}}U(\\mathcal{K})t^{\\mu^{(1)}_{k}}, \\qquad\ngG(\\mathcal{O}) \\in \n \\mathbf{b}^{(2)}_{k} \\cap S^{w_{k}}_{\\mu^{(2)}_{k}} \\cong \n \\mathbf{b}^{(2)}_{k} \\cap \\mathbb{S}^{w_{k}}_{\\mu^{(2)}_{k}}, \n\\end{equation*}\nand write the element $y \\in {}^{w_{k}}U(\\mathcal{K})t^{\\mu^{(1)}_{k}}$ \nas: $y =u_{k}t^{\\mu^{(1)}_{k}}$ for $u_{k} \\in {}^{w_{k}}U(\\mathcal{K})$. \nSince $\\mathbf{b}^{(1)}=\\ol{\\bigcap_{z \\in W} S^{z}_{\\mu^{(1)}_{z}}}$ \nby Theorem~\\ref{thm:Kam1}, we may (and do) assume that \n$yG(\\mathcal{O}) \\in S^{w_{k}}_{\\mu^{(1)}_{k}} \\cap \n S^{w_{k+1}}_{\\mu^{(1)}_{w_{k+1}}}$ to show \nthe inclusion \\eqref{eq:incl}. Therefore, we can take \n$u_{k+1} \\in {}^{w_{k+1}}U(\\mathcal{K})$ and $g_{k+1} \\in G(\\mathcal{O})$\nsuch that $y=u_{k}t^{\\mu^{(1)}_{k}}=\nu_{k+1}t^{\\mu^{(1)}_{k+1}}g_{k+1}$; note that \n\\begin{equation*}\ng_{k+1} \\in \n T(\\mathcal{K})({}^{w_{k+1}}U(\\mathcal{K}))({}^{w_{k}}U(\\mathcal{K}))T(\\mathcal{K}).\n\\end{equation*}\nHere, since $w_{k+1}=w_{k}s_{i_{k+1}}$ by definition, \nit follows that\n\\begin{equation*}\nT(\\mathcal{K})({}^{w_{k+1}}U(\\mathcal{K}))({}^{w_{k}}U(\\mathcal{K}))T(\\mathcal{K})\n = \\dot{w}_{k}\n \\bigl(\n \\dot{s}_{i_{k+1}} U(\\mathcal{K})\\dot{s}_{i_{k+1}}U(\\mathcal{K})\n \\bigr) \\dot{w}_{k}^{-1}\n \\subset {}^{w_{k}}P_{i_{k+1}}(\\mathcal{K}),\n\\end{equation*}\nand hence that $g_{k+1} \\in {}^{w_{k}}P_{i_{k+1}}(\\mathcal{K})$. \nMoreover, since $g_{k+1} \\in G(\\mathcal{O})$, we get \n$g_{k+1} \\in {}^{w_{k}}P_{i_{k+1}}(\\mathcal{K}) \\cap G(\\mathcal{O})=\n{}^{w_{k}}P_{i_{k+1}}(\\mathcal{O})$. 
Therefore, we obtain \n\\begin{equation*}\ng_{k+1}\\mathbf{b}^{(2)}_{k} \\subset \n{}^{w_{k}}P_{i_{k+1}}(\\mathcal{O})\\mathbf{b}^{(2)}_{k}=\n{}^{w_{k}}L_{i_{k+1}}(\\mathcal{O})\n{}^{w_{k}}U_{i_{k+1}}(\\mathcal{O})\\mathbf{b}^{(2)}_{k}. \n\\end{equation*}\nSince the extremal MV cycle $\\dot{w}_{k}^{-1}\\mathbf{b}^{(2)}_{k}$ is stable under \n$U_{i_{k+1}}(\\mathcal{O}) \\subset U(\\mathcal{O})=G(\\mathcal{O}) \\cap U(\\mathcal{K})$ \n(see the proof of Lemma~\\ref{lem:stable}), we have\n${}^{w_{k}}U_{i_{k+1}}(\\mathcal{O})\\mathbf{b}^{(2)}_{k} \\subset \\mathbf{b}^{(2)}_{k}$, \nand hence\n\\begin{equation*}\ng_{k+1}\\mathbf{b}^{(2)}_{k} \\subset \n{}^{w_{k}}L_{i_{k+1}}(\\mathcal{O})\\mathbf{b}^{(2)}_{k}. \n\\end{equation*}\nAlso, we see that \n\\begin{align*}\n{}^{w_{k}}L_{i_{k+1}}(\\mathcal{O})\n\\mathbf{b}^{(2)}_{k} \n& ={}^{w_{k+1}}L_{i_{k+1}}(\\mathcal{O})\\mathbf{b}^{(2)}_{k}\n \\quad \\text{since $w_{k+1}=w_{k}s_{i_{k+1}}$ and \n $\\dot{s}_{i_{k+1}} \\in L_{i_{k+1}}$} \\\\[3mm]\n& \\subset \\mathbf{b}^{(2)}_{k+1}\n \\quad \\text{by Lemma~\\ref{lem:tran} \n since $w_{k} < w_{k}s_{i_{k+1}}=w_{k+1}$}.\n\\end{align*}\nCombining these, we obtain \n$g_{k+1}\\mathbf{b}^{(2)}_{k} \\subset \\mathbf{b}^{(2)}_{k+1}$, \nand hence $g_{k+1}gG(\\mathcal{O}) \\in g_{k+1}\\mathbf{b}^{(2)}_{k} \n\\subset \\mathbf{b}^{(2)}_{k+1}$. \nFurthermore, we have \n\\begin{align*}\n[(y,\\,gG(\\mathcal{O}))] & =\n[(u_{k}t^{\\mu^{(1)}_{k}},\\,gG(\\mathcal{O}))]=\n[(u_{k+1}t^{\\mu^{(1)}_{k+1}}g_{k+1},\\,gG(\\mathcal{O}))] \\\\ \n& = [(u_{k+1}t^{\\mu^{(1)}_{k+1}},\\,g_{k+1}gG(\\mathcal{O}))]\n\\end{align*}\nby the equivalence relation \n$\\sim$ on $G(\\mathcal{K}) \\times \\CG r$, and \n\\begin{equation*}\nyG(\\mathcal{O})=u_{k+1}t^{\\mu^{(1)}_{k+1}}G(\\mathcal{O}) \n \\in \\mathbf{b}^{(1)} \\cap S^{w_{k+1}}_{\\mu^{(1)}_{k+1}}\n\\end{equation*}\nby the choice above of $y$. 
\nConsequently, we conclude that \n\\begin{align*}\n[(y,\\,gG(\\mathcal{O}))] & \\in \n\\ol{ \\mathsf{m} \\Bigl(\n \\bigl(\\mathbf{b}^{(1)} \\cap \\mathbb{S}^{w_{k+1}}_{ \\mu^{(1)}_{k+1} }\\bigr)^{\\sim}\n \\times^{{}^{w_{k+1}}U(\\mathcal{O})} \n \\mathbf{b}^{(2)}_{k+1}\n \\Bigr)\n} \\\\[3mm]\n& = \n\\ol{ \\mathsf{m} \\Bigl(\n \\bigl(\\mathbf{b}^{(1)} \\cap \\mathbb{S}^{w_{k+1}}_{ \\mu^{(1)}_{k+1} }\\bigr)^{\\sim}\n \\times^{{}^{w_{k+1}}U(\\mathcal{O})} \n \\bigl(\\mathbf{b}^{(2)}_{k+1} \\cap \\mathbb{S}^{w_{k+1}}_{ \\mu^{(2)}_{k+1} }\\bigr)\n \\Bigr) \n} \\\\[3mm]\n& = \\ol{ \\mathbf{b}^{(1)} \\star_{k+1} \\mathbf{b}^{(2)} },\n\\end{align*}\nsince $\\ol{ \\mathbf{b}^{(2)}_{k+1} \\cap S^{w_{k+1}}_{ \\mu^{(2)}_{k+1} } }=\n\\mathbf{b}^{(2)}_{k+1}$ (see Remark~\\ref{rem:extcyc}).\nThis proves the inclusion \\eqref{eq:incl}, \nand hence the claim. \\quad \\hbox{\\rule[-0.5pt]{3pt}{8pt}}\n\n\\vspace{3mm}\n\nFinally, we complete the proof of Theorem~\\ref{thm:tensor}. \nBy the claim above, we obtain\n\\begin{equation} \\label{eq:prf2}\nS^{w}_{\\nu} \\cap \n\\bigl(\\ol{\\mathbf{b}^{(1)} \\stra{e} \\mathbf{b}^{(2)}}\\bigr)\n\\subset \nS^{w}_{\\nu} \\cap \n\\bigl(\\ol{\\mathbf{b}^{(1)} \\stra{w} \\mathbf{b}^{(2),w}}\\bigr). \n\\end{equation}\nAlso, we have \n\\begin{equation} \\label{eq:prf4}\nS^{w}_{\\nu} \\cap \n\\bigl(\\ol{ \\mathbf{b}^{(1)} \\stra{w} \\mathbf{b}^{(2),w} }\\bigr)\n\\subset \t\nS^{w}_{\\nu} \\cap \n\\ol{S^{w}_{\\mu^{(1)}_{w}+\\mu^{(2)}_{w}}}\n\\end{equation}\nby the inclusion $\\mathbf{b}^{(1)} \\stra{w} \\mathbf{b}^{(2),w} \\subset \nS^{w}_{\\mu^{(1)}_{w}+\\mu^{(2)}_{w}}$, which is \nan immediate consequence of the definition. 
\nSince $w^{-1} \\cdot \\nu \\not\\ge w^{-1} \\cdot \n(\\mu^{(1)}_{w}+\\mu^{(2)}_{w})$ by \\eqref{eq:prf1}, \nit follows from equation \\eqref{eq:Snuw} that\n$S^{w}_{\\nu} \\cap \n\\ol{S^{w}_{\\mu^{(1)}_{w}+\\mu^{(2)}_{w}}}=\\emptyset$,\nand hence by \\eqref{eq:prf2} together with \\eqref{eq:prf4} that \n\\begin{equation} \\label{eq:prf3}\nS^{w}_{\\nu} \\cap \n\\bigl(\\ol{\\mathbf{b}^{(1)} \\stra{e} \\mathbf{b}^{(2)}}\\bigr)=\\emptyset.\n\\end{equation}\n\nNow, we know from Theorem~\\ref{thm:BG} that\n\\begin{equation*}\nS_{\\mu^{(1)}_{e}+\\mu^{(2)}_{e}} \\cap \n\\mathsf{m}(\\Phi_{\\lambda_{1},\\,\\lambda_{2}}(P_{1},\\,P_{2})) \n\\subset \\Phi_{\\lambda}(P)\n\\end{equation*}\nis an open dense subset, where \n$\\mathop{\\rm wt}\\nolimits(P)=\\mu^{(1)}_{e}+\\mu^{(2)}_{e}$; \nfor an MV polytope $P \\in \\mathcal{MV}(\\lambda)$, choosing \nan embedding $\\iota_{\\lambda}:\\mathcal{MV}(\\lambda) \\hookrightarrow \n\\mathcal{MV}(\\lambda_{2}) \\otimes \\mathcal{MV}(\\lambda_{1})$ of crystals \nso that $\\iota_{\\lambda}(P)=P_{2} \\otimes P_{1}$ \nfor some $P_{1} \\in \\mathcal{MV}(\\lambda_{1})$ and \n$P_{2} \\in \\mathcal{MV}(\\lambda_{2})$ corresponds, \nvia Theorems~\\ref{thm:Kam1} and \\ref{thm:BG}, to choosing \nan irreducible component \n$\\mathcal{X} \\in \\mathop{\\rm Irr}\\nolimits(\\CG r^{\\lambda_{1},\\,\\lambda_{2}} \\cap S_{\\nu_{1},\\,\\nu_{2}})$ \nsuch that $\\Phi_{\\lambda}(P) \\in \\mathcal{Z}(\\lambda)$ is the (Zariski-) closure \nof the image of $\\mathcal{X}$ under the map $\\star$ \nin the commutative diagram of Theorem~\\ref{thm:BGc}\nin the Appendix, where $\\nu_{1}=\\mathop{\\rm wt}\\nolimits(P_{1})=\\mu^{(1)}_{e}$ and \n$\\nu_{2}=\\mathop{\\rm wt}\\nolimits(P_{2})=\\mu^{(2)}_{e}$.\nAlso, we see from the explicit construction of \n$\\Phi_{\\lambda_{1},\\,\\lambda_{2}}(P_{1},\\,P_{2})$ \ngiven in Theorem~\\ref{thm:BG} that \n\\begin{equation*}\n\\mathbf{b}^{(1)} \\stra{e} \\mathbf{b}^{(2)} \n\\subset\nS_{\\mu^{(1)}_{e}+\\mu^{(2)}_{e}} \\cap 
\n\\mathsf{m}(\\Phi_{\\lambda_{1},\\,\\lambda_{2}}(P_{1},\\,P_{2}))\n\\end{equation*}\nis an open dense subset. Therefore, \n$\\mathbf{b}^{(1)} \\stra{e} \\mathbf{b}^{(2)} \\subset \\Phi_{\\lambda}(P)$ \nis an open dense subset. \nCombining this fact with \\eqref{eq:prf3}, we conclude that \n$\\Phi_{\\lambda}(P) \\cap S^{w}_{\\nu}=\\emptyset$, \nwhich contradicts \\eqref{eq:prf1}. Thus, we have completed \nthe proof of Theorem~\\ref{thm:tensor}. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nTo quote from \\citet{benjamini2014discussion},\n\\begin{quote}\nSignificance testing is an effort to address the selection of an\ninteresting finding regarding a single parameter from the background\nnoise. Modern science faces the problem of selection of promising\nfindings from the noisy estimates of many.\n\\end{quote}\nThis paper is about the latter. In contemporary studies, geneticists\nmay have measured hundreds of thousands of genetic variants and wish\nto know which of these influence a trait \\citep{sesia2019gene,\n sesia2020multi}. Scientists may be interested in discovering which\ndemographic and clinical variables influence the susceptibility to\nParkinson's disease \\citep{gao2018model}. Economists study which\nvariables from individual employment and wage histories affect future\nprofessional careers \\citep{klose2020pipeline}. In all these examples\nand countless others, we have hundreds or even thousands of\nexplanatory variables and are interested in determining which of these\ninfluence a response of interest. The problem is to select\nassociations which are replicable, that is, without having too many\nfalse positives.\n\nFormally, let $Y \\in \\mathbb{R}$ be the response we wish to study, and\n$X = (X_1, X_2, \\ldots, X_p) \\in \\mathbb{R}^p$ be the vector of\nexplanatory variables. We call variable $j$ a null variable if $X_j$\nis conditionally independent of $Y$ given the other $X$'s. 
This says\nthat the $j$-th variable does not provide information about the\nresponse beyond what is already provided by all the other variables\n(roughly, if it is not in the Markov blanket of $Y$). Expressed\ndifferently, a variable is null if and only if the hypothesis\n\\begin{equation}\n\\label{eqn:indep}\n\\mathcal{H}_j: X_j \\independent Y | X_{-j}, \n\\end{equation}\nis true. (Throughout, $X_{-j}$ is a shorthand for all $p$ variables\nexcept the $j$th.) Likewise, a variable $j$ is nonnull if\n$\\mathcal{H}_j$ is false. Let\n$\\mathcal{H}_0 \\subset \\cb{1, \\dots, p}$ be the subset of\nnulls. Suppose now we have $n$ independent samples assembled in a data\nmatrix $\\mathbf{X} \\in \\mathbb{R}^{n \\times p}$ and a response vector\n$\\mathbf{Y} \\in \\mathbb{R}^n$. The goal is to identify the nonnull\nvariables with some form of type-I error control. Specifically, we\nconsider in this paper the false discovery rate (FDR)\n\\citep{benjamini1995controlling}, namely, the expected fraction of\nfalse positives defined as\n\\begin{equation}\n\\operatorname{FDR} = \\EE{\\frac{|\\hat{\\mathcal{S}} \\cap \\mathcal{H}_0|\n }{|\\hat{\\mathcal{S}}| \\vee 1 }},\\footnote{Here and below, $a \\vee b = \\max(a,b)$ and\n $a\\wedge b = \\min(a,b)$}.\n \\end{equation}\nwhere $\\hat{\\mathcal{S}}$ is the selected set of variables.\n\n\n\n\\subsection{The conditional randomization test}\nNaturally, in order to identify the nonnull variables, one could test\nthe hypotheses $\\mathcal{H}_j$ in \\eqref{eqn:indep}.\n\\citet{candes2018panning} proposed to achieve this via the conditional\nrandomization test (CRT). To run the CRT, we resample $\\mathbf{X}_j$---the\n$j$th column of the matrix $\\mathbf{X}$---conditional on the other variables,\ncalculate the value of a test statistic, and compare it to the test\nstatistic computed on the true $\\mathbf{X}_j$. 
When the statistic computed on\nthe true $\\mathbf{X}_j$ has a high rank when compared with those obtained\nfrom imputed values, this is evidence against the null. Details of the\nCRT are given in Algorithm~\\ref{alg:crt}. There, the output $p\\operatorname{-value}$ is\nvalid in the sense that under the null, it is stochastically larger\nthan a uniform variable.\n\\begin{algorithm}[h]\n\\caption{Conditional Randomization Test (CRT)}\n\\label{alg:crt}\n\\begin{algorithmic}\n\\REQUIRE Data $(\\mathbf{X},\\mathbf{Y})$, test statistic $T(\\cdot)$, number of randomizations $B$\n\\FOR{$b \\in \\cb{1, \\dots, B}$}\n\t\\STATE Sample $\\mathbf{X}_j^{(b)}$ from the distribution of $\\mathbf{X}_j | \\mathbf{X}_{-j}$, independently of $\\mathbf{X}_j$ and $\\mathbf{Y}$.\n\\ENDFOR\n\\ENSURE The $p\\operatorname{-value}$\n\\[ p_j=\\frac{1}{B+1}\n\t\\p{1 + \\sum_{b = 1}^B \\one{\\cb{T(\\mathbf{X}_j, \\mathbf{X}_{-j}, \\mathbf{Y}) \\leq T(\\mathbf{X}_j^{(b)}, \\mathbf{X}_{-j}, \\mathbf{Y})}}}. \\footnotemark\n\t\\] \n\\end{algorithmic}\n\\end{algorithm}\n\\footnotetext{If some values of the test statistics are the same, we break ties randomly. }\n\nInformally, under the null hypothesis that the variable $X_j$ is\nindependent of $Y$ conditional on $X_{-j}$, each one of the new\nsamples $\\mathbf{X}^{(b)}_j$ has the same distribution as $\\mathbf{X}_j$, and they are\nall independent conditionally on $\\mathbf{Y}$ and $\\mathbf{X}_{-j}$. As a\nconsequence, each $T(\\mathbf{X}^{(b)}_j, \\mathbf{X}_{-j}, \\mathbf{Y})$ has the same distribution\nas $T(\\mathbf{X}_j, \\mathbf{X}_{-j}, \\mathbf{Y})$. Thus the rank of\n$T(\\mathbf{X}_j, \\mathbf{X}_{-j}, \\mathbf{Y})$ among all $B+1$ statistics will\nbe uniform in $\\cb{1, \\dots, B+1}$ assuming we break ties at\nrandom. 
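To make the loop concrete, here is a minimal Python sketch of the CRT p-value computation (not the authors' implementation; the helper names `crt_pvalue`, `sample_Xj`, and `T` are ours, and ties are counted conservatively against the null rather than broken at random as in the footnote):

```python
import numpy as np

def crt_pvalue(X, Y, j, sample_Xj, T, B=100, seed=None):
    """One CRT p-value for variable j (schematic version of Algorithm 1).

    sample_Xj(X_minus_j, rng) must draw from the conditional law of
    X_j given X_{-j}, which the Model-X framework assumes is known.
    T(xj, X_minus_j, Y) is any test statistic, larger meaning more
    evidence against the null.
    """
    rng = np.random.default_rng(seed)
    X_minus_j = np.delete(X, j, axis=1)
    t_obs = T(X[:, j], X_minus_j, Y)
    t_null = np.array([T(sample_Xj(X_minus_j, rng), X_minus_j, Y)
                       for _ in range(B)])
    # Compare the observed statistic with the B resampled ones; ties
    # are counted against the null here (a conservative choice).
    return (1 + np.sum(t_obs <= t_null)) / (B + 1)
```

For instance, with independent standard normal covariates, the conditional law of $X_j$ given the rest is again $\mathcal{N}(0,1)$, so `sample_Xj` can simply return fresh Gaussian noise; the resulting p-value always lies on the grid $\{1/(B+1), \dots, 1\}$.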
Formally, we have:\n\\begin{theo}[\\citet{candes2018panning}]\n If $X_j \\independent Y | X_{-j}$, then the $p\\operatorname{-values}$ from Algorithm\n \\ref{alg:crt} satisfy $\\PP{p_j \\leq \\alpha} \\leq \\alpha$, for any\n $\\alpha \\in [0,1]$. This holds regardless of the test statistic\n $T(\\cdot)$. \n\\end{theo}\nThe validity of the procedure does not rely on any assumptions on the\ndistribution of $Y|X$, parametric or not. Yet it requires knowledge of\nthe distribution of the covariates $X$. This is known as the Model-X\nframework, and is an appropriate assumption in many important\napplications, including genetic and economics studies, where either\nknowledge about the exact covariate distribution or a large amount of\nunsupervised data of covariates is available \\citep{cong2013multiplex,\n haldane1931inbreeding, peters2016comprehensive, saltelli2008global,\n tang2006reconstructing}. Rapid progress has been made on\nmethodological advances in this framework \\citep{candes2018panning,\n tansey2018holdout, sesia2019gene, bates2020metropolized,\n liu2020fast, romano2020deep, ren2020derandomizing} and in applications\nto genetic studies \\citep{sesia2019gene, sesia2020multi,\n bates2020causal}.\n\n\\subsection{Selective SeqStep+}\nWe still need a selection procedure that transforms the CRT $p\\operatorname{-values}$\ninto a selected set with FDR control guarantees. A natural choice of\nvariable selection procedure is the Benjamini-Hochberg procedure (BHq)\n\\citep{benjamini1995controlling}. If we were to apply BHq, we would\nneed to compare the $p\\operatorname{-values}$ with critical thresholds of the form\n$\\alpha_i = i q\/p$, where $q$ is the nominal FDR level. When $p$ is\nlarge, $q$ is $0.1$, and $i = 1,2, \\dots$, this requires $p\\operatorname{-values}$ on\nan extremely fine scale. 
The $p\\operatorname{-values}$ defined in Algorithm\n\\ref{alg:crt}, however, can only take values in\n$\\{1\/(B+1), 2\/(B+1), \\dots, 1\\}$, thus a huge number of randomizations\nin CRT is required. This makes the combination of CRT and BHq\ncomputationally expensive or even infeasible. This is the motivation\nfor this paper: can we find a\nselection procedure that does not require any of the $p\\operatorname{-values}$ to be\nvery small and works well with discrete $p\\operatorname{-values}$?\n\n\\begin{algorithm}[t]\n\\caption{Selective SeqStep+}\n\\label{alg:seqstep}\n\\begin{algorithmic}\n\\REQUIRE A sequence of $p\\operatorname{-values}$ $p_1, \\dots, p_p$\n\\STATE \nLet\n\\begin{equation}\n\\label{eqn:seqstep}\n\\hat{k} =\\max \\left\\{k \\in \\cb{1,\\dots, p}: \\frac{1+\\#\\left\\{j \\leq k: p_{j}>c\\right\\}}{\\#\\left\\{j \\leq k: p_{j} \\leq c\\right\\} \\vee 1} \\leq \\frac{1-c}{c} \\cdot q\\right\\}. \n\\end{equation}\n\\ENSURE Selected set of nonnulls $\\hat{\\mathcal{S}} = \\cb{ j \\leq \\hat{k}: p_{j} \\leq c}$. \nIf the set in $\\eqref{eqn:seqstep}$ is empty, $\\hat{\\mathcal{S}}$ is the empty set as well. \n\\end{algorithmic}\n\\end{algorithm}\n\nTo this end, we consider SeqStep+, a sequential testing procedure\nfirst introduced by \\citet{barber2015controlling}. We consider a\nspecific version, namely, Selective SeqStep+, which takes\na sequence of $p\\operatorname{-values}$ $p_1, \\dots, p_p$ as input, and outputs a\nselected set $\\hat{\\mathcal{S}}$. The procedure starts by finding an\ninteger $\\hat{k}$ such that among the $p\\operatorname{-values}$\n$\\cb{p_1, \\dots, p_{\\hat{k}}}$, few are greater than a user-specified\nthreshold $c$. In detail, $\\hat{k}$ is the largest $k$ in\n$\\cb{1, \\dots, p}$ such that the ratio between\n$1+\\#\\left\\{j \\leq k: p_{j}>c\\right\\}$ and\n$\\#\\left\\{j \\leq k: p_{j} \\leq c\\right\\} \\vee 1$ is no greater than\n$(1-c)q\/c$. 
The procedure then selects all $j$'s, such that\n$j \\leq \\hat{k}$ and $p_j \\leq c$. We include details of the procedure\nin Algorithm~\\ref{alg:seqstep}.\n\nTo understand why the procedure works, assume that the null $p\\operatorname{-values}$ are $\\text{i.i.d.}\\operatorname{Unif}[0,1]$. Then, the ratio of $\\#\\left\\{\\text {null } j \\leq \\hat{k}: p_{j} \\leq c\\right\\}$ to $\\#\\left\\{\\text {null } j \\leq \\hat{k}: p_{j}>c\\right\\}$ is roughly $c\/(1-c)$. Hence \n\\begin{align*}\n\\operatorname{FDP} &= \\frac{\\#\\left\\{\\text {null } j \\leq \\hat{k}: p_{j} \\leq c\\right\\}}{\\#\\left\\{j \\leq \\hat{k}: p_{j} \\leq c\\right\\} \\vee 1} \n\\approx \\frac{c}{1-c} \\cdot \\frac{\\#\\left\\{\\text {null } j \\leq \\hat{k}: p_{j}>c\\right\\}}{\\#\\left\\{j \\leq \\hat{k}: p_{j} \\leq c\\right\\} \\vee 1} \\\\\n& \\leq \\frac{c}{1-c} \\cdot \\frac{\\#\\left\\{j \\leq \\hat{k}: p_{j}>c\\right\\}}{\\#\\left\\{j \\leq \\hat{k}: p_{j} \\leq c\\right\\} \\vee 1}\n\\leq \\frac{c}{1-c} \\cdot \\frac{1-c}{c}q = q. \n\\end{align*}\nFormally, we have the following result. \n\\begin{theo}[\\citet{barber2015controlling}]\n\\label{theo:indep_FDR}\nAssume that the ordering of the $p\\operatorname{-values}$ is fixed. If all null $p\\operatorname{-values}$ are independent with $p_j \\geq \\operatorname{Unif} [0,1]$, and are independent from the nonnulls, then Selective SeqStep+ controls the FDR at level $q$. \n\\end{theo}\n\nA close look at \\eqref{eqn:seqstep} shows that the only information\nSelective SeqStep+ uses from the $p\\operatorname{-values}$ is whether or not\n$p_j \\leq c$. 
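As a sketch, the stopping rule \eqref{eqn:seqstep} takes only a few lines of Python (the function name `selective_seqstep_plus` is ours; this is a schematic of Algorithm \ref{alg:seqstep}, not the authors' code):

```python
import numpy as np

def selective_seqstep_plus(pvals, c, q):
    """Selective SeqStep+ (schematic version of Algorithm 2).

    `pvals` must already be in the chosen ordering; note that only
    the indicators 1{p_j <= c} enter the computation.
    """
    below = np.asarray(pvals) <= c
    # Running counts of p-values above / at-or-below the threshold c,
    # giving the ratio of equation (eqn:seqstep) for every k.
    ratio = (1 + np.cumsum(~below)) / np.maximum(np.cumsum(below), 1)
    ok = np.flatnonzero(ratio <= (1 - c) / c * q)
    if ok.size == 0:
        return np.array([], dtype=int)          # empty selected set
    k_hat = ok[-1]                              # zero-based \hat{k}
    return np.flatnonzero(below[: k_hat + 1])   # {j <= k_hat : p_j <= c}
```

Note that the p-values enter only through the indicators `pvals <= c`.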
This means that unlike BHq, Selective SeqStep+ does not\nrequire some of the $p\\operatorname{-values}$ to be very small to make rejections, and\nhence would require a much smaller number of randomizations $B$.\nSelective SeqStep+ would, therefore, be computationally far less\nintensive.\n\n\\subsection{Challenges and our contribution}\n\\label{subsection:introduction_challenges}\n\nIn this paper, we study variable selection procedures with the\nconditional randomization test and Selective SeqStep+. There are three challenges and we address them all. \n\nThe first challenge is in the dependency of the $p\\operatorname{-values}$. The $p\\operatorname{-values}$\nfrom CRT are not independent in general, hence Theorem\n\\ref{theo:indep_FDR} does not apply. In response, we will develop\ntheory in Section \\ref{section:theory} showing how we can make\nSeqStep+ valid under dependence. In particular, we will show examples\nof approximate FDR control when the $p\\operatorname{-values}$ are weakly dependent or\nwhen they are exchangeable in distribution.\n\n\\begin{figure}\n\\begin{subfigure}{\\textwidth}\n\\caption{A good ordering}\n\\includegraphics[width = \\textwidth]{plots\/good_ordering.pdf}\n\\label{fig:good_ordering}\n\\end{subfigure}\n\\begin{subfigure}{ \\textwidth}\n\\caption{A bad ordering}\n\\includegraphics[width = \\textwidth]{plots\/bad_ordering.pdf}\n\\label{fig:bad_ordering}\n\\end{subfigure}\n\\caption{An illustration of the ordering of the $p\\operatorname{-values}$ in Selective SeqStep+. The null $p\\operatorname{-values}$ are sampled from $\\operatorname{Unif}[0,1]$ and the nonnull $p\\operatorname{-values}$ from $\\operatorname{Unif}[0,0.2]$. We take $c = 0.2$ and $q = 0.1$. \n The $p\\operatorname{-values}$ are the same in the two plots, but they are ordered in two different ways. In (a), the nonnulls appear early in the sequence. In (b), the order of the $p\\operatorname{-values}$ is random. 
In terms of power, Selective SeqStep+ discovers all the nonnulls in (a) but only a subset of them in (b). }\n\\label{fig:ordering}\n\\end{figure}\n\nThe second challenge concerns the ordering of the $p\\operatorname{-values}$. Unlike\nthe Benjamini-Hochberg procedure, which takes as input the $p\\operatorname{-values}$\nonly, Selective SeqStep+ essentially requires two inputs: the $p\\operatorname{-values}$\nand an ordering of the $p\\operatorname{-values}$. In other words, if we change the\norder of the input $p\\operatorname{-values}$, we could end up selecting a very\ndifferent set of variables. To illustrate this, consider the example\nin Figure \\ref{fig:ordering}, which fixes the $p\\operatorname{-values}$ and compare two\ndifferent orderings. The ``good\" ordering has the nonnulls appear\nearly in the sequence and the ``bad\" ordering randomly permutes the\n$p\\operatorname{-values}$. With the good ordering, the output set contains all the\nnonnulls; but with the bad ordering, only a fraction of the nonnulls is \ndiscovered. When the nonnull $p\\operatorname{-values}$ appear early in the sequence,\nthe proportion of $p\\operatorname{-values}$ greater than $c$ will be smaller, thus the\nquantity\n$\\frac{1+\\#\\left\\{j \\leq k: p_{j}>c\\right\\}}{\\#\\left\\{j \\leq k: p_{j}\n \\leq c\\right\\} \\vee 1}$ in \\eqref{eqn:seqstep} will tend to be\nsmaller. Therefore a larger $\\hat{k}$ will be obtained and hence the\npower of the procedure will be higher. In general, to make the\nvariable selection procedure more powerful, it is important to look\nfor an informative ordering that places nonnulls early in the\nsequence.\n\nAnother requirement for the ordering is that it needs to be independent of the $p\\operatorname{-values}$. FDR is in general not controlled when the $p\\operatorname{-values}$ and the ordering are dependent. \n As a simple example, assume a researcher obtains independent $p\\operatorname{-values}$ and naively orders them by magnitude. 
Then the input sequence of $p\\operatorname{-values}$ into Selective SeqStep+ would be an ordered sequence $p_{(1)} \\leq p_{(2)} \\leq \\dots \\leq p_{(p)}$. In this case, the null $p\\operatorname{-values}$ that appear early in the sequence will tend to be smaller and hence no longer uniform. In the case of the global null (all hypotheses are null) with independent $p\\operatorname{-values}$, we would expect to make around $c p$ false discoveries. This is because there are approximately $c p$ $p\\operatorname{-values}$ that are smaller than or equal to $c$, and they all appear early in the sequence, hence for $k = (c + (1-c)q)p$, \n\\[ \\frac{1 + \\#\\left\\{j \\leq k: p_{j}>c\\right\\}}{\\#\\left\\{j \\leq k: p_{j} \\leq c\\right\\} \\vee 1} \\approx \\frac{(c + (1-c)q)p - cp}{cp} = \\frac{(1-c)q}{c}.\\]\n\nIn Section \\ref{section:method}, we will present two methods to obtain\nthe ordering: the {\\em split} version and the {\\em symmetric\n statistic} version. The former splits the data into two parts,\nobtaining $p\\operatorname{-values}$ from one fold, and the ordering from the\nother. This makes the ordering and the $p\\operatorname{-values}$ stochastically\nindependent. No data-splitting is required for the symmetric statistic\nversion; we obtain both the $p\\operatorname{-values}$ and the ordering from the whole\ndataset. To obtain the ordering, we compute a statistic $z_j$ for each\nvariable $j$, and sort the $z_j$'s. The statistic $z_j$ is obtained in\nsuch a way that $z_j$ is marginally independent of the $p\\operatorname{-value}$ $p_j$.\nIn theory, this notion of independence is not sufficient for FDR\ncontrol; however, we tested this method in many different empirical\nsettings and the $\\operatorname{FDR}$ was always controlled. In terms of power, the\nsymmetric statistic version is more powerful than the split\nversion. Thus in practice, we would recommend the symmetric statistic\nversion.\n\nThe third challenge is computational in nature. 
Recall that with the\nCRT (Algorithm \\ref{alg:crt}), we need to compute the test statistic\n$T_j^{(b)}$ for each $j$ and each $b$. Each statistic $T_j^{(b)}$ is\nobtained by sampling $\\mathbf{X}_j^{(b)}$ and running a machine learning\nalgorithm with $\\mathbf{Y}$ as a response and $\\mathbf{X}_j^{(b)}, \\mathbf{X}_{-j}$ as\npredictors. It is computationally expensive to run the machine\nlearning algorithm $B$ times to get a single $p\\operatorname{-value}$. In Section\n\\ref{section:computation}, we will present a faster way of obtaining\nthe test statistics and, hence, the $p\\operatorname{-values}$.\n\n\\section{Selective SeqStep+ under dependence}\n\\label{section:theory}\n\n\\subsection{Almost independent $p$-values}\n\\label{subsection:cond_prob}\nWhen employing SeqStep+, it is natural to ask whether the FDR is still\ncontrolled when the $p\\operatorname{-values}$ are ``close'' to being independent. This\nsection derives an upper bound on the FDR, which depends on the\nvalue of\n$\\max_{j \\in \\mathcal{H}_0} \\mathbb{P}[p_j \\leq c \\mid \\one\\cb{p_{-j} \\leq\n c}]$,\\footnote{For a set\n $S = \\cb{j_1, \\dots, j_K} \\subset \\cb{1, \\dots, p}$, we define\n $\\one\\cb{p_S \\leq c}$ as the Boolean vector\n $(\\one\\cb{p_{j_1} \\leq c}, \\dots, \\one\\cb{p_{j_K} \\leq c}) $.} the\nmaximum of the probability that $p_j$ is at most $c$ conditional on the Boolean vector indicating whether the other $p\\operatorname{-values}$ are smaller than or equal to $c$. \nUnder independence of the $p\\operatorname{-values}$, it holds that\n$\\max_{j \\in \\mathcal{H}_0} \\PP{p_j \\leq c \\mid \\one\\cb{p_{-j} \\leq c}} \\leq c$\nsince marginally, $\\PP{p_j \\leq c} \\leq c$ for $j \\in \\mathcal{H}_0$. 
Our first\nresult states that if\n$\\max_{j \\in \\mathcal{H}_0} \\PP{p_j \\leq c \\mid \\one\\cb{p_{-j} \\leq c}}$ is\nclose to $c$ with high probability, then the FDR inflation cannot be\nlarge.\n\\begin{theo}[Almost independent $p$-values]\n\\label{theo:cond_prob}\nSuppose the ordering of the $p\\operatorname{-values}$ is fixed. Set\n$a_j = \\PP{p_j \\leq c \\mid \\one\\cb{p_{-j} \\leq c}}$ and assume the\n$p\\operatorname{-values}$ satisfy\n$ \\PP{ \\max_{j \\in \\mathcal{H}_0} a_{j} \\leq c + \\delta} \\geq 1 -\n\\epsilon$. Then the output from Algorithm \\ref{alg:seqstep} obeys\n\\begin{equation}\n\\label{eqn:theo_cond_prob}\n \\operatorname{FDR} \\leq q \\frac{c+\\delta}{c}\\frac{1-c}{1-c-\\delta} + \\epsilon. \n\\end{equation}\n\\end{theo}\n\n\n\nAs an illustration, we describe two examples where the FDR bound can\nbe computed numerically. Consider data $(\\mathbf{X}, \\mathbf{Y})$, where each row of\n$\\mathbf{X}$ is generated independently from a multivariate Gaussian\ndistribution with block diagonal covariance; that is, $X_j$ is only\ndependent on nearby variables. Recalling that the $p\\operatorname{-values}$ are\nobtained from Algorithm \\ref{alg:crt}, the first example takes the\nmarginal test statistic to be\n$T(\\mathbf{X}_j, \\mathbf{X}_{-j}, \\mathbf{Y}) = \\abs{\\Corr{\\mathbf{X}_j, \\mathbf{Y}}}$, whereas in the\nsecond example, we regress $\\mathbf{Y}$ on $\\mathbf{X}_j$ and $\\mathbf{X}_{N(j)}$, and take\nthe test statistic $T(\\mathbf{X}_j, \\mathbf{X}_{-j}, \\mathbf{Y})$ to be the absolute value\nof the fitted coefficient of $\\mathbf{X}_j$. Here, the elements of $N(j)$ are\nthe ``neighbors\" of $j$, i.e. 
the indices in the same block as $j$.\nAdditional details of the simulation settings are included in Appendix\n\\ref{subsection:simulation_details_cond_prob}.\n\nBefore proceeding with the computation, we note that if $a_j$ were\ndefined conditional on additional information,\ne.g.~$a_j = \\PP{p_j \\leq c \\mid \\mathbf{Y}, \\one\\cb{p_{-j} \\leq c}}$, then\nTheorem \\ref{theo:cond_prob} would still hold. The specific block\ndiagonal structure of the covariance of $X$ implies that $X_i$ and\n$X_j$ are independent conditionally on $Y$ if $i$ and $j$ are not in\nthe same block. Thus,\n$a_j = \\PP{p_j \\leq c \\mid\\mathbf{Y}, \\one\\cb{p_{-j} \\leq c}} = \\PP{p_j \\leq\n c \\mid \\mathbf{Y}, \\one\\cb{p_{N(j)} \\leq c}}$. For a block of size $K$, the\nvariable $\\one\\cb{p_{N(j)} \\leq c}$ can take at most $2^K$ distinct\nvalues. In practice, the conditional probability\n$\\PP{p_j \\leq c \\mid \\mathbf{Y}, \\one\\cb{p_{N(j)} \\leq c}}$ can therefore be\nestimated using sample proportions. One can fix $\\mathbf{Y}$, sample $\\mathbf{X}$\nfrom the distribution of $X \\mid Y$, compute the corresponding\n$p\\operatorname{-values}$, and compute the frequency of the event $\\cb{p_j \\leq c}$\nconditional on the value of $\\one\\cb{p_{N(j)} \\leq c}$. This is the\nreason why the block diagonal structure of the covariance is used\nhere; this structure makes computations tractable since we are dealing\nwith $2^K$ rather than $2^{p-1}$ possible configurations.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width = 0.9\\textwidth]{plots\/hist_max_aj.pdf}\n\\caption{Histogram of $\\max_{j \\in \\mathcal{H}_0} a_{j}$ from 500\n samples. In (a), the test statistic of CRT is taken to be the\n absolute value of the correlation. In (b), the test statistic of CRT\n is taken to be the absolute value of the regression coefficient. We\n plot the threshold $c$ (blue dashed line). 
To apply the bound from\n Theorem \\ref{theo:cond_prob}, one possible choice of $\\delta$ is\n shown as the distance between the red and blue lines.}\n\\label{fig:hist_max_aj}\n\\end{figure}\n\nIn Figure \\ref{fig:hist_max_aj}, we plot the histogram of\n$\\max_{j \\in \\mathcal{H}_0} a_{j} = \\max_{j \\in \\mathcal{H}_0} \\PP{p_j \\leq c \\mid \\mathbf{Y}, \\one\\cb{p_{N(j)} \\leq c}}$ from 500 samples and show a possible choice of $\\delta$\nand $\\epsilon$. Here, the FDR threshold $q$ is set to be 0.1 and $c$\nis chosen to be $0.3$. In the example where the test statistic is the\nabsolute value of correlation between $\\mathbf{X}$ and $\\mathbf{Y}$, we can take\n$\\delta = 0.0893$ and $\\epsilon = 0.002$. The FDR bound in\n\\eqref{eqn:theo_cond_prob} is thus\n$q \\frac{c+\\delta}{c}\\frac{1-c}{1-c-\\delta} + \\epsilon = 0.1508$. In\nthe other example where the test statistic is the absolute value of\nthe fitted regression coefficient, we can take $\\delta = 0.11$ and\n$\\epsilon = 0.006$. The FDR bound in \\eqref{eqn:theo_cond_prob} is\nthus $q \\frac{c+\\delta}{c}\\frac{1-c}{1-c-\\delta} + \\epsilon = 0.1682$.\n\n\n\n\n\\iffalse\n\nA corresponding lower bound of achieving $ \\operatorname{FDR} \\approx q \\frac{1-c}{1-c-\\delta} $, when the null $p\\operatorname{-values}$ satisfy $p_{j} \\geq \\operatorname{Unif} [0,1]$ marginally. \n\n\\paragraph{An example: lower bound}\n\nLet $n_1$ be the number of nonnulls, $n_0$ be the number of nulls. $n = n_0 + n_1$. Consider an example where the variables $1, \\dots n_1$ are nonnulls and $n_1 + 1, \\dots n$ are nulls. Assume the nonnull $p\\operatorname{-values}$ are all 0. For the null $p\\operatorname{-values}$, they have a joint distribution of:\n\\begin{enumerate}\n\\item \nWith probability $\\delta\/(c+\\delta)$, they are i.i.d. $\\operatorname{U}[c,1]$.\n\n\\item \nWith probability $c\/(c+\\delta)$, they are i.i.d. 
Each one of the distributions is $\\operatorname{U}[0,c]$ with probability $c+\\delta$, and $\\operatorname{U}[c,1]$ with probability $1-c-\\delta$. \n\\end{enumerate}\n\nWe can easily verify that $a_j \\leq c+\\delta$ and hence we have $\\epsilon = 0$ as in proposition \\ref{prop:FDR_bound}. The marginal distribution of each null $p\\operatorname{-value}$ is $\\operatorname{U}[0,1]$. \n\nUnder the first scenario, the FDR is always 0 as we reject nothing. Under the second scenario, all the inequalities in the proof of proposition \\ref{prop:FDR_bound} become equalities except for the one that states \n\\[\\frac{ \\#\\cb{j\\leq \\hat{k}: p_j \\leq c, a_j \\leq c+ \\delta}}{1 + \\#\\cb{j\\leq \\hat{k}: p_j > c} } \\leq \\frac{1-c}{c}q. \n\\]\nThis will become very close to equality when both $n_1$ and $n_0$ go to infinity. With all these, we show that the FDR can be very close to\n\\[ \\frac{c}{c+ \\delta} q \\frac{c+\\delta}{c}\\frac{1-c}{1-c-\\delta} = q \\frac{1-c}{1-c-\\delta}. \\] \n\n\\fi\n\n\\subsection{Under exchangeability}\nIn this section, we study whether additional structure on the $p\\operatorname{-values}$\ncan be helpful in obtaining sharper FDR bounds. To this end, consider\nthe assumption of exchangeability. We say that the random variables\n$A_1, A_2, \\dots, A_m$ are \\emph{exchangeable} conditional on a random\nvariable $B$ if\n$(A_{1}, \\dots, A_{m} ) \\mid B \\stackrel{d}{=} (A_{\\pi(1)}, \\ldots,\nA_{\\pi(m)} ) \\mid B$ for any permutation $\\pi$. With this definition in place, this section makes use of the following assumption:\n\\begin{assu}\n\\label{assu:exch}\nThe null $p\\operatorname{-values}$ are exchangeable conditional on the nonnull $p\\operatorname{-values}$.\n\\end{assu}\n\nTo understand Assumption \\ref{assu:exch}, we study examples where it\nholds. Consider $p\\operatorname{-values}$ obtained from the CRT. 
A sufficient set of\nconditions is that the variables $X_j$'s are exchangeable and that the\ntest statistic $T(\\cdot)$ in the CRT (Algorithm \\ref{alg:crt}) is\nsymmetric in $\\mathbf{X}_{-j}$. As a concrete example, imagine $X$ follows a\n$\\mathcal{N}(\\mathbf{\\mu}, \\Sigma)$ distribution, where all entries in\n$\\mu$ are the same and all off-diagonal terms in $\\Sigma$ are the\nsame. Then if $T(\\mathbf{X}_j, \\mathbf{X}_{-j}, \\mathbf{Y})$ is obtained by running a lasso\nregression of $\\mathbf{Y}$ on $\\mathbf{X}$ and taking the regression coefficient\nof $j$, then the null $p\\operatorname{-values}$ are exchangeable conditional on the\nnonnulls.\n\nUnder the assumption of exchangeability, we can show that FDR\ninflation will not be large. In particular, if the $p\\operatorname{-values}$ are weakly\ncorrelated with each other, we get a sharper upper bound.\n\\begin{theo}[Under exchangeability]\n\\label{theo:exchangeable}\nSuppose the ordering of the $p\\operatorname{-values}$ is fixed and that the nulls are\nmarginally stochastically larger than uniform. Under Assumption\n\\ref{assu:exch}, Algorithm \\ref{alg:seqstep} gives\n\\begin{equation}\n\\label{eqn:exch_bound1}\n\\operatorname{FDR} \\leq q + c(1-q).\n\\end{equation}\nIf, in addition, the $p\\operatorname{-values}$ satisfy $\\Corr{\\one\\cb{p_i \\leq c}, \\one\\cb{p_j \\leq c}} \\leq \\rho$ for any nulls $i \\neq j$, then\n\\begin{equation}\n\\label{eqn:exch_bound2}\n\\operatorname{FDR} \\leq q + \\varepsilon(c, q, \\rho),\n\\end{equation}\nwhere\n$$ \\varepsilon(c, q, \\rho) = \\p{\\frac{\\delta}{1 + \\beta \\delta}\n \\sqb{\\frac{c}{1-c} - \\frac{c - c\\delta}{1 - (c-c\\delta)}q} } \\wedge\nc(1-q), \\quad \\beta = \\frac{c + (1-c)q}{(1-c)(1-q)}, \\quad \\delta =\n\\rho \\frac{c(1-q) + q}{c(1-q)}.$$ The two bounds \\eqref{eqn:exch_bound1} and\n\\eqref{eqn:exch_bound2} are sharp asymptotically. 
For illustration,\nthe bound \\eqref{eqn:exch_bound2} is plotted in\nFigure~\\ref{fig:exch_bounds}.\n\\end{theo}\n\n\\begin{proof}\nWe will show the asymptotic sharpness of \\eqref{eqn:exch_bound1} here. Specifically, we will show an example where the $\\operatorname{FDR}$ converges to $q + c(1-q)$ as $p \\to \\infty$. We include a proof of the two upper bounds and the asymptotic sharpness of \\eqref{eqn:exch_bound2} in Appendix \\ref{subsection:proof_exch}.\n\nAssume we are under the global null, i.e., all variables are\nnulls. Set $m_0 = 1 + \\lceil \\frac{c p}{q + c(1-q)} \\rceil$ and\nconsider null $p\\operatorname{-values}$ sampled as follows:\n\\begin{enumerate}\n\\item With probability $c p\/m_0$, pick $m_0$ indices uniformly at\n random from $\\cb{1, \\dots, p}$, and sample the corresponding\n $p\\operatorname{-values}$ as $\\text{i.i.d.}\\operatorname{Unif}[0,c]$; sample the other $p\\operatorname{-values}$\n independently from $\\operatorname{Unif}[c,1]$.\n\n\\item With probability $1 - c p\/m_0$, sample all $p\\operatorname{-values}$ as\n $\\text{i.i.d.}\\operatorname{Unif}[c,1]$.\n\\end{enumerate}\nOne can easily verify that each $p\\operatorname{-value}$ marginally follows a\n$\\operatorname{Unif}[0,1]$ distribution. On the first event, we always reject all\nthe variables because\n\\[ \\frac{1+\\#\\left\\{j \\leq p: p_{j}>c\\right\\}}{\\#\\left\\{j \\leq p:\n p_{j} \\leq c\\right\\} \\vee 1} = \\frac{1 + p - m_0}{m_0} \\leq\n \\frac{1 + p - m_0}{m_0 - 1} \\leq \\frac{1-c}{c} \\cdot q. \\] Thus\n$\\operatorname{FDP} = 1$. On the second event, we reject none of the variables, thus\n$\\operatorname{FDP} = 0$. Combining the two cases, we get\n$\\operatorname{FDR} = cp\/m_0 \\to q + c(1-q)$ as $p \\to \\infty$.\n\\end{proof}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width = 0.7\\textwidth]{plots\/bound.pdf}\n\\caption{The bound on FDR inflation $\\varepsilon(c, q, \\rho)$. 
Here we set $c= 0.1$, vary $\\rho$ from 0 to 0.5, and vary the FDR threshold $q$ from $0.05$ to $0.2$. }\n\\label{fig:exch_bounds}\n\\end{figure}\n\n\nCompared to the FDR bound in Theorem \\ref{theo:cond_prob}, Theorem\n\\ref{theo:exchangeable} is neither weaker nor stronger. Theorem\n\\ref{theo:exchangeable} holds when the null $p\\operatorname{-values}$ are exchangeable,\nwhereas Theorem \\ref{theo:cond_prob} holds when the $p\\operatorname{-values}$ are close\nto being independent. When the $p\\operatorname{-values}$ are exchangeable and highly\ncorrelated, for example in the most extreme case where all the\n$p\\operatorname{-values}$ are the same, then \\eqref{eqn:exch_bound1} in Theorem\n\\ref{theo:exchangeable} gives that $\\operatorname{FDR} \\leq q + c(1 - q)$, whereas\n\\eqref{eqn:theo_cond_prob} in Theorem \\ref{theo:cond_prob} would not\nbe informative at all. In a different setting where the $p\\operatorname{-values}$ are\nindependent but follow different distributions, Theorem\n\\ref{theo:cond_prob} can be used to show that $\\operatorname{FDR} \\leq q$, whereas\nTheorem \\ref{theo:exchangeable} cannot be applied.\n\n\n\n\\subsection{Beyond exchangeability or almost independence of the $p$-values}\n\nIn general, when the $p\\operatorname{-values}$ have an arbitrary dependence structure,\nwe can bound the FDR with a logarithmic inflation; the sharpness of\nthe bound below is an open question.\n\\begin{theo}[Arbitrary dependence]\n \\label{theo:no_assu}\n Suppose the ordering of the $p\\operatorname{-values}$ is fixed and that the nulls are\n marginally stochastically larger than uniform. If $(1-c)q < c$, then Algorithm\n \\ref{alg:seqstep} yields\n\\begin{equation}\n\\label{eqn:no_assu_bound}\n \\operatorname{FDR} \\leq (q + c(1-q)) \\sum_{j \\in \\mathcal{H}_0} \\frac{1}{j+1} \\leq (q + c(1-q))\\log p. 
\n\\end{equation}\n\\end{theo}\nWhen we have a good ordering of the $p\\operatorname{-values}$, i.e., when the null\n$p\\operatorname{-values}$ tend to have larger indices, then the right-hand side\n$(q + c(1-q)) \\sum_{j \\in \\mathcal{H}_0} \\frac{1}{j+1}$ is\nsmaller. Comparing to the case with exchangeability, we observe a\npotential logarithmic inflation on the FDR. A similar phenomenon has\nbeen observed for the BHq procedure, where an arbitrary dependence\namong the $p\\operatorname{-values}$ also brings a possible logarithmic inflation\n\\citep{benjamini2001control}.\n\n\n\\section{Methods to order hypotheses} \n\\label{section:method}\nWhen performing variable selection with CRT and Selective SeqStep+, it\nis important to have a good ordering of the hypotheses\/CRT $p\\operatorname{-values}$. A naive way\nof obtaining the ordering is as follows: apply any machine learning\nalgorithm to $(\\mathbf{X},\\mathbf{Y})$, compute a statistic $z_j$ providing evidence\nagainst the hypothesis that $j$ is null, sort the CRT $p\\operatorname{-values}$ by\ndecreasing order of the $z_j$'s, and apply Selective SeqStep+. As\nargued in Section \\ref{subsection:introduction_challenges}, despite\nthe intuitive structure of this procedure, the dependence between the\n$p\\operatorname{-values}$ and the ordering will, in general, imply a loss of FDR\ncontrol.\n\n\n\\subsection{Splitting}\n\\label{subsection:exact}\n\n\\floatname{algorithm}{Procedure}\n\\begin{algorithm}[h]\n\\caption{The Sequential CRT (Split version)}\n\\label{alg:exact}\n\\begin{algorithmic}\n\\REQUIRE Data $\\mathcal{D} = (\\mathbf{X}, \\mathbf{Y})$, number of randomizations $B$, test statistic $T(\\cdot)$, score function $Z$, $\\operatorname{FDR}$ threshold $q$, SeqStep threshold $c$. 
\n\\end{algorithmic}\n\n\\begin{enumerate}\n\\item \n\\begin{algorithmic}\n\\STATE Split the data into two folds $\\mathcal{D}_{\\operatorname{pval}} = (\\mathbf{X}^{\\operatorname{pval}}, \\mathbf{Y}^{\\operatorname{pval}})$ and $\\mathcal{D}_{\\operatorname{ordering}} = (\\mathbf{X}^{\\operatorname{ordering}}, \\mathbf{Y}^{\\operatorname{ordering}})$. \n\\end{algorithmic}\n\n\\item \n\\begin{algorithmic}\n\\STATE Obtain $p\\operatorname{-values}$ $p_1, \\dots, p_p$ on $\\mathcal{D}_{\\operatorname{pval}}$ from CRT (Algorithm \\ref{alg:crt}). \n\\end{algorithmic}\n\n\\item \n\\begin{algorithmic}\n\\STATE Compute statistics $z_j = Z\\p{\\mathbf{X}_j^{\\operatorname{ordering}},\\mathbf{X}_{-j}^{\\operatorname{ordering}}, \\mathbf{Y}^{\\operatorname{ordering}}}$ on $\\mathcal{D}_{\\operatorname{ordering}}$ for each $j \\in \\cb{1, \\dots, p}$, and obtain ordering $\\pi$ by sorting the statistics: $z_{\\pi(1)} \\geq z_{\\pi(2)} \\geq \\dots \\geq z_{\\pi(p)}$. \n\\end{algorithmic}\n\n\\item \n\\begin{algorithmic}\n\\STATE \nApply Selective SeqStep+ (Algorithm \\ref{alg:seqstep}) to $p_{\\pi(1)}, p_{\\pi(2)}, \\dots, p_{\\pi(p)}$. \n\\end{algorithmic}\n\\end{enumerate}\n\n\\begin{algorithmic}\n\\ENSURE Discoveries from Selective SeqStep+. \n\\end{algorithmic}\n\n\\end{algorithm}\nThe split version of the sequential CRT (Procedure \\ref{alg:exact}) makes the $p\\operatorname{-values}$ and\nordering independent through data splitting: the data is split into\ntwo folds; the $p\\operatorname{-values}$ are obtained from the CRT on the first fold;\nand the ordering is obtained on the second fold. Independence ensures\nthat Theorem~\\ref{theo:cond_prob} holds for this procedure. The\ndownside is that it suffers from a power loss, as is the case for\nmany other data-splitting procedures. 
This motivates us to look for\nprocedures that use the full data to obtain both the $p\\operatorname{-values}$ and the\nordering.\n\n\\subsection{Symmetric statistics}\n\\label{subsection:inexact}\n\nAs seen in Section \\ref{subsection:introduction_challenges}, the correlation between the null $p\\operatorname{-value}$ $p_j$ and the statistic $z_j$, which is sorted to obtain the ordering, largely accounts for the FDR inflation. It is thus natural to seek procedures that make $p_j$ and $z_j$ independent for nulls. To this end, recall that the $p\\operatorname{-value}$ $p_j$ is defined as \n\\[p_j=\\frac{1}{B+1} \\p{1 + \\sum_{b = 1}^B \\one \\cb{T(\\mathbf{X}_j, \\mathbf{X}_{-j},\n \\mathbf{Y}) \\geq T(\\mathbf{X}_j^{(b)}, \\mathbf{X}_{-j}, \\mathbf{Y})}}. \\] We propose a\nmethod with $p_j$ as above and each $z_j$ constructed as follows:\nconsider a function $Z$ that is symmetric in its first $B+1$\narguments,\\footnote{We say a function $h(x_1, \\dots, x_n)$ is\n symmetric in its first $m$ arguments if for any permutation $\\pi$ of\n $\\cb{1,2,\\dots, m}$,\n $h(x_1, x_2, \\dots, x_m, x_{m+1}, \\dots, x_n) = h(x_{\\pi(1)},\n x_{\\pi(2)}, \\dots, x_{\\pi(m)}, x_{m+1}, \\dots, x_n)$.} and define \n\\[z_j = Z\\p{\\mathbf{X}_j, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}, \\mathbf{X}_{-j}, \\mathbf{Y}}.\\]\nThis definition leads to Procedure \\ref{alg:inexact}.\n\n\\begin{algorithm}[h]\n\\caption{The Sequential CRT (Symmetric statistic version)}\n\\label{alg:inexact}\n\\begin{algorithmic}\n\\REQUIRE Data $\\mathcal{D} = (\\mathbf{X}, \\mathbf{Y})$, number of randomizations $B$, test statistic $T(\\cdot)$, score function $Z(\\cdot)$ (symmetric in its first $B+1$ arguments), $\\operatorname{FDR}$ threshold $q$, SeqStep threshold $c$. \n\\end{algorithmic}\n\n\\begin{enumerate}\n\\item \n\\begin{algorithmic}\n\\STATE Obtain $p\\operatorname{-values}$ $p_1, \\dots, p_p$ on $\\mathcal{D}$ from CRT (Algorithm \\ref{alg:crt}). 
\n\\end{algorithmic}\n\n\\item \n\\begin{algorithmic}\n \\STATE Compute statistics:\n $z_j = Z\\p{\\mathbf{X}_j, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}, \\mathbf{X}_{-j}, \\mathbf{Y}}$\n for each $j \\in \\cb{1, \\dots, p}$, and obtain an ordering $\\pi$ by\n sorting the statistics:\n $z_{\\pi(1)} \\geq z_{\\pi(2)} \\geq \\dots \\geq z_{\\pi(p)}$.\n\\end{algorithmic}\n\n\\item \n\\begin{algorithmic}\n\\STATE Apply Selective SeqStep+ (Algorithm \\ref{alg:seqstep}) to $p_{\\pi(1)}, p_{\\pi(2)}, \\dots, p_{\\pi(p)}$. \n\\end{algorithmic}\n\\end{enumerate}\n\n\\begin{algorithmic}\n\\ENSURE Discoveries from Selective SeqStep+. \n\\end{algorithmic}\n\n\\end{algorithm}\n\nIntuitively, the $p\\operatorname{-value}$ $p_j$ captures the relative rank of\n$\\mathbf{X}_j$ among $\\cb{\\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}}$, whereas $z_j$ is\nsymmetric in $\\cb{\\mathbf{X}_j, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}}$. The\nsymmetry allows us to permute elements in\n$\\Big\\{\\mathbf{X}_j, \\mathbf{X}_j^{(1)}, \\dots,$ $\\mathbf{X}_j^{(B)}\\Big\\}$ while keeping\n$z_j$ fixed. This means that $z_j$ provides no information\nabout the relative rank of $\\mathbf{X}_j$ among\n$\\cb{\\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}}$ and is hence independent of\n$p_j$. Put formally:\n\\begin{prop}\n\\label{theo:pj_zj}\nThe $p\\operatorname{-value}$ $p_j$ and the statistic $z_j$ defined in Procedure \\ref{alg:inexact} obey\n$p_j \\independent z_j$ for any null $j$. In addition, $p_j \\independent z_j |\\mathbf{X}_{-j}, \\mathbf{Y}$, for any null $j$. \n\\end{prop}\nThis result is a special case of Proposition~\\ref{prop:one_shot}, presented later. \n\nNote that Proposition \\ref{theo:pj_zj} is not sufficient to guarantee\nFDR control. Even though $p_j$ is independent of $z_j$, $p_j$ could,\nin principle, still have a complicated relationship with\n$z_{-j}$. This makes the $p\\operatorname{-values}$ not entirely independent of the\nordering. 
This however does not appear to lead to FDR inflation in\npractice. We indeed observe FDR control in various simulation studies\nin Section \\ref{section:simulation}.\n\nThe statistic $z_j$ can be computed using complicated machine learning\nmethods. For example, one can run a gradient boosting algorithm with regression trees as base learners, $\\mathbf{Y}$ as a response, and\n$\\mathbf{X}_j, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}, \\mathbf{X}_{-j}$ as predictors,\nobtain feature importance statistics of\n$\\mathbf{X}_j, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}$, and take $z_j$ to be the\nmaximum of the statistics. One can easily verify that with this\nspecific construction, $z_j$ is symmetric in\n$\\p{\\mathbf{X}_j, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}}$.\n\n\\section{Towards faster computation: one-shot CRT}\n\\label{section:computation}\nIn the original CRT (Algorithm \\ref{alg:crt}), to compute each $p\\operatorname{-value}$\n$p_j$ one runs a machine learning algorithm $B$ times to obtain the\ntest statistics $T_j^{(b)}$ for $b \\in \\cb{1, \\dots, B}$. This quickly\ngets computationally expensive when the machine learning algorithm is\nrun on a large dataset. To save computation time, another way of\ncomputing the statistics is to run the machine learning algorithm\nonce, with $\\mathbf{Y}$ as a response and\n$\\mathbf{X}_j, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}, \\mathbf{X}_{-j}$ as the predictors.\nFormally, we consider a procedure $T$ that takes\n$\\p{\\mathbf{X}_j^{(0)}, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}, \\mathbf{X}_{-j}, \\mathbf{Y}}$ as\ninput, and outputs importance statistics $T_j^{(b)}$ for each\n$b \\in \\cb{1, \\dots, B}$, i.e.,\n\\begin{equation}\n\\label{eqn:one_shot_procedure}\n \\p{T_j^{(0)}, \\dots, T_j^{(B)}} = T\\p{\\mathbf{X}_j^{(0)}, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}, \\mathbf{X}_{-j}, \\mathbf{Y}}. 
\n\\end{equation}\nWe restrict attention to procedures obeying the following symmetry\nproperty: for all permutations $\\pi$,\n\\begin{equation}\n\\label{eqn:one_shot_symmetric}\n T\\p{ \\sqb{\\mathbf{X}_j^{(0)}, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)} }_{\\operatorname{perm}(\\pi)} , \\mathbf{X}_{-j}, \\mathbf{Y} } \n = \\sqb{T\\p{\\mathbf{X}_j^{(0)}, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)} , \\mathbf{X}_{-j}, \\mathbf{Y} } }_{\\operatorname{perm}(\\pi)}. \n\\end{equation}\nIn words, permuting the input $\\mathbf{X}_j^{(b)}$'s has the effect of permuting the output statistics accordingly. \nWith these statistics, we obtain $p\\operatorname{-values}$ via \n\\begin{equation}\n\\label{eqn:one_shot_pvalue}\np_j=\\frac{1}{B+1}\n\t\\p{1 + \\sum_{b = 1}^B \\one{\\cb{T_j^{(0)} \\leq T_j^{(b)} }}}. \\footnote{We break ties randomly. }\n\\end{equation}\n We call this procedure \\textit{one-shot CRT}. \n\n As a concrete example, consider a case where the lasso is used to\n compute the test statistic $T_j^{(b)}$. To compute each $p\\operatorname{-value}$\n $p_j$, the original CRT runs the lasso by regressing $\\mathbf{Y}$ on\n $\\mathbf{X}_j^{(b)}, \\mathbf{X}_{-j}$ for each $b \\in \\cb{0, 1, \\dots, B}$, and\n takes $T_j^{(b)}$ to be the absolute value of the fitted coefficient\n for $\\mathbf{X}_j^{(b)}$. In total, we run $B+1$ regressions. In contrast, the one-shot\n CRT runs the lasso only once by regressing $\\mathbf{Y}$ on\n $\\mathbf{X}_j^{(0)}, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}, \\mathbf{X}_{-j}$ and takes\n $T_j^{(b)}$ to be the corresponding $|\\hat{\\beta}|$ for\n $\\mathbf{X}_j^{(b)}$ (this obeys \\eqref{eqn:one_shot_symmetric}).\n \n\nThe symmetry in \\eqref{eqn:one_shot_symmetric} ensures that the theoretical properties of the CRT $p\\operatorname{-values}$ still hold for the one-shot CRT $p\\operatorname{-values}$.\n\\begin{prop}\n\\label{prop:one_shot}\nConsider a null variable $j$. 
Assume that the $p\\operatorname{-value}$ $p_j$ is obtained from \\eqref{eqn:one_shot_procedure} and \\eqref{eqn:one_shot_pvalue}, and that \\eqref{eqn:one_shot_symmetric} holds. Then $p_j$ satisfies $\\PP{p_j \\leq \\alpha} \\leq \\alpha$, for any $\\alpha \\in [0,1]$, and $p_j \\independent z_j|\\mathbf{X}_{-j}, \\mathbf{Y}$, where $z_j$ is defined in Procedure \\ref{alg:inexact}.\n\\end{prop}\n\\begin{proof}\n To ease notation, set $\\mathbf{X}_j^{(0)} = \\mathbf{X}_j$. Consider a null\n $j$. By construction of $\\mathbf{X}_j^{(b)}$, all the $\\mathbf{X}_j^{(b)}$'s are\n i.i.d.~conditional on $\\mathbf{X}_{-j}$ and $\\mathbf{Y}$. Thus for any permutation\n $\\rho$ of $\\cb{0, \\dots, B}$,\n $\\p{\\mathbf{X}_j^{(0)}, \\dots, \\mathbf{X}_j^{(B)}} \\stackrel{d}{=}\n \\p{\\mathbf{X}_j^{(\\rho(0))}, \\dots, \\mathbf{X}_j^{(\\rho(B))}} \\Big| \\mathbf{X}_{-j},\n \\mathbf{Y}$. The symmetry of $Z$ in its first $B+1$ arguments further\n ensures that\n $z_j = Z\\p{\\mathbf{X}_j^{(0)}, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}, \\mathbf{X}_{-j},\n \\mathbf{Y}} = Z\\p{\\mathbf{X}_j^{(\\rho(0))}, \\mathbf{X}_j^{(\\rho(1))}, \\dots,\n \\mathbf{X}_j^{(\\rho(B))}, \\mathbf{X}_{-j}, \\mathbf{Y}}$. Combining these facts, we have\n\\[ \\p{\\mathbf{X}_j^{(0)}, \\dots, \\mathbf{X}_j^{(B)}} \\stackrel{d}{=} \\p{\\mathbf{X}_j^{(\\rho(0))}, \\dots, \\mathbf{X}_j^{(\\rho(B))}} \\Big| \\mathbf{X}_{-j}, \\mathbf{Y}, z_j.\\]\nBy property \\eqref{eqn:one_shot_symmetric}, \n\\[ \\p{T_j^{(\\rho(0))}, \\dots, T_j^{(\\rho(B))}} =\n T\\p{\\mathbf{X}_j^{(\\rho(0))}, \\mathbf{X}_j^{(\\rho(1))}, \\dots, \\mathbf{X}_j^{(\\rho(B))},\n \\mathbf{X}_{-j}, \\mathbf{Y}}. \\] This term has the same distribution as\n$T\\p{\\mathbf{X}_j^{(0)}, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}, \\mathbf{X}_{-j}, \\mathbf{Y}}$\nconditional on $\\mathbf{X}_{-j}, \\mathbf{Y}$ and $z_j$. 
Since\n$T\\p{\\mathbf{X}_j^{(0)}, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}, \\mathbf{X}_{-j}, \\mathbf{Y}}=\n\\p{T_j^{(0)}, \\dots, T_j^{(B)}}$, we have\n\\[ \\p{T_j^{(\\rho(0))}, \\dots, T_j^{(\\rho(B))}} \\stackrel{d}{=}\n \\p{T_j^{(0)}, \\dots, T_j^{(B)}} \\Big| \\mathbf{X}_{-j}, \\mathbf{Y}, z_j.\\] This\nimplies that conditional on $\\mathbf{X}_{-j}, \\mathbf{Y}, z_j$,\n$p_j \\sim \\operatorname{Unif}\\cb{\\frac{1}{B+1}, \\frac{2}{B+1}, \\dots,\n 1}$. Note that the same holds without conditioning on $z_j$, i.e.,\nconditioning on $\\mathbf{X}_{-j}, \\mathbf{Y}$ only. Hence\n$p_j \\independent z_j | \\mathbf{X}_{-j}, \\mathbf{Y}$. Finally, the claim that\n$\\PP{p_j \\leq \\alpha} \\leq \\alpha$ follows from the fact that the\ndistribution of\n$\\operatorname{Unif}\\cb{\\frac{1}{B+1}, \\frac{2}{B+1}, \\dots, 1}$ is\nstochastically greater than $\\operatorname{Unif}[0,1]$.\n\\end{proof}\nIn terms of FDR control, the theorems from Section\n\\ref{section:theory} still hold for either the split or the symmetric statistic\nversion of our variable selection method applied with the one-shot CRT\n$p\\operatorname{-values}$.\n\nAs an illustration, we show the average computation time of the\none-shot CRT and the original CRT (on all variables together) on\nsynthetic datasets in Table \\ref{table:time}. The number of\nrandomizations is set to $B = 9$, and other details of the simulation\nstudy are included in Appendix\n\\ref{subsection:simulation_details_table}. 
Compared with the original\nCRT, the one-shot CRT cuts the computation time by a factor of\nroughly $B$, as expected.\n\\begin{table}[h]\n\\centering\n\\caption{Average computation times (seconds)}\n\\label{table:time}\n\\begin{tabular}{|p{2.8cm} |p{2.8cm}| p{2.8cm}| p{2.95cm}| p{2.95cm}|}\n\\hline\nSetting & Linear & Logistic & Non-linear 1 & Non-linear 2 \\\\ \n\\hline\nDimension &$n = 300, p = 300$ &$n = 300, p = 300$ &$n = 500, p = 200$ & $n = 500, p = 200$ \\\\ \n\\hline\nStatistics are computed with& lasso & glmnet & gradient boosting & gradient boosting \\\\\n\\hhline{|=|=|=|=|=|}\nOriginal CRT & 694 & 709 & 2811 & 2672 \\\\\n \\hline\nOne-shot CRT & 89 & 91 & 248 & 277 \\\\\n \\hline\n\\end{tabular}\n\\end{table}\n\nThe trick of adding $B$ extra predictors to a machine learning\nalgorithm cannot be used when combining the CRT with BHq: since BHq\nrequires much finer $p\\operatorname{-values}$, $B$ would need to be much larger, and adding many\nirrelevant predictors to a regression problem is not generally a\nwise move.\n\nMore generally, the computational problem posed by the combination of\nthe CRT and BHq has been considered by \\citet{tansey2018holdout} and\n\\citet{liu2020fast}, who propose different methods to reduce the\nrunning time. \\citet{tansey2018holdout} consider data splitting: the\nalgorithm trains a complicated machine learning model on the first\npart of the data and obtains $p\\operatorname{-values}$ on the second part of the data,\nmaking use of the trained model. The data splitting trick ensures that\nthe complicated machine learning model will be fitted only once, and\nhence makes the algorithm much faster and computationally\nfeasible. \\citet{liu2020fast} propose a technique called\n\\emph{distillation}. 
Their proposed algorithm distills all the\nhigh-dimensional information in $\\mathbf{X}_{-j}$ about $\\mathbf{Y}$ into a\nlow-dimensional representation, computes the test statistic as a\nfunction of $\\mathbf{X}_j$, $\\mathbf{Y}$, and the low-dimensional representation,\nand obtains the $p\\operatorname{-values}$ based on the test statistics. The computation\ntime is much lower since the expensive model fitting takes place in\nthe distillation step, which is performed only once for each $j$. The\ntwo methods both give marginally valid $p\\operatorname{-values}$, but the $p\\operatorname{-values}$\nwould not be independent in general, and there is no theoretical\nguarantee on FDR control. (Both papers confirm in their simulations\nthat the FDR of each method is well controlled empirically.)\n\n\\section{Simulations}\n\\label{section:simulation}\n\nIn this section, we demonstrate the performance of our methods on synthetic data.\nSoftware for our method is available from \\url{https:\/\/github.com\/lsn235711\/sequential-CRT}, along with code to reproduce the analyses. We include in Appendix \\ref{section:simulation_details} implementation details and additional simulation studies. \n\n\\subsection{Comparison of the original CRT and the one-shot CRT}\n\\label{subsection:simulation_small}\nWe compare the proposed symmetric statistic version of the sequential CRT (with one-shot CRT) and the sequential CRT (with the original CRT). We also\ncompare our methods with Model-X knockoffs as a benchmark. We\nconsider a few different settings: linear\/non-linear(tree like)\nmodels, and Gaussian\/binomial responses. In all settings, the number of true nonnulls is set to be 20. For the distribution of $X$,\nwe consider a Gaussian autoregressive model and a hidden Markov\nmodel. To compute test statistics, we consider algorithms including\n$L_1$-regularized regression (glmnet) and gradient boosting with\nregression trees as base learners. 
The statistic $z_j$ in Procedure\n\\ref{alg:inexact} is taken to be\n$z_j = \\max_{b \\in \\cb{0, \\dots, B}}T_j^{(b)}$, where $T_j^{(b)}$ is\nthe test statistic computed in the CRT. Knockoffs are constructed with\nthe Gaussian semi-definite optimization algorithm\n\\citep{candes2018panning} for the Gaussian autoregressive model, and\nwith Algorithm 3 from \\citep{sesia2019gene} for the hidden Markov\nmodel. Details of the simulation study are included in Appendix\n\\ref{subsection:simulation_details_small}. Figure\n\\ref{fig:fdr_power_small} compares the performance of the above\nmethods in terms of empirical false discovery rate and power averaged\nover 100 independent replications. In all settings, the sequential CRT appears\nto control the FDR around the desired level $q = 0.1$. In terms of\npower, the performance of the one-shot CRT appears to be similar to\nthat of the original CRT in most of the settings. Compared to\nknockoffs, the sequential CRT (both original CRT and\none-shot CRT) is more powerful.\n\n\\begin{figure}\n\\begin{subfigure}{\\textwidth}\n\\caption{$X$ follows a Gaussian AR model}\n\\includegraphics[width = \\textwidth]{plots\/small_experiment_ar.pdf}\n\n\\end{subfigure}\n\\noindent\\rule{\\textwidth}{0.4pt}\n\\begin{subfigure}{\\textwidth}\n\\caption{$X$ follows an HMM}\n\\includegraphics[width = \\textwidth]{plots\/small_experiment_hmm.pdf}\n\\end{subfigure}\n\\caption{Performance of the sequential CRT with the original CRT and one-shot CRT (both use the symmetric statistic) and knockoffs on small synthetic datasets. The nominal false discovery rate level is 10\\%. Results are averaged over 100 independent experiments.}\n\\label{fig:fdr_power_small}\n\\end{figure}\n\n\\subsection{Comparison of the sequential CRT with knockoffs}\n\\label{subsection:simulation_large}\n We compare the proposed split version (Procedure \\ref{alg:exact}) and symmetric statistic version (Procedure \\ref{alg:inexact}) of the sequential CRT with Model-X knockoffs. 
We run the sequential CRT with the one-shot CRT. \nWe consider settings similar to those in Section \\ref{subsection:simulation_small}. Since the computation time of the one-shot CRT is much lower than that of the original CRT, here we run the experiments on larger datasets. In all settings in this section, the number of nonnulls is set to 50.\nOther details can be found in Section \\ref{subsection:simulation_small} and Appendix \\ref{subsection:simulation_details_large}. Figure \\ref{fig:fdr_power_large} compares the performance of the above methods in terms of empirical false discovery rate and power averaged over 100 independent replications. In all settings, the sequential CRT appears to control the FDR around the desired level $q = 0.1$. In terms of power, the symmetric statistic version is comparable to knockoffs. \n\n\\begin{figure}\n\\begin{subfigure}{\\textwidth}\n\\caption{$X$ follows a Gaussian AR model}\n\\includegraphics[width = \\textwidth]{plots\/large_experiment_ar.pdf}\n\n\\end{subfigure}\n\\noindent\\rule{\\textwidth}{0.4pt}\n\\begin{subfigure}{\\textwidth}\n\\caption{$X$ follows an HMM}\n\\includegraphics[width = \\textwidth]{plots\/large_experiment_hmm.pdf}\n\\end{subfigure}\n\\caption{Performance of the proposed split version and symmetric statistic version of the sequential CRT compared to knockoffs on larger synthetic datasets. The nominal false discovery rate level is 10\\%. Results are averaged over 100 independent experiments.}\n\\label{fig:fdr_power_large}\n\\end{figure}\n\n\n\\subsection{The role of the number of potential discoveries}\n\\label{subsection:simulation_num_nonnull}\n\\begin{figure}\n\\includegraphics[width = \\textwidth]{plots\/num_nonnull_experiment_ar.pdf}\n\\caption{Performance of the sequential CRT compared to knockoffs as a\n function of the number of nonnulls. The nominal false discovery\n rate level is 10\\%. 
Results are averaged over 100 independent\n experiments.}\n\\label{fig:fdr_power_num_nonnull}\n\\end{figure}\n\nComparing the results from Sections \\ref{subsection:simulation_small}\nand \\ref{subsection:simulation_large}, we observe that the power gain\nof the sequential CRT vis-\\`a-vis model-X knockoffs is more noticeable\nwhen the number of nonnulls is small. To understand this phenomenon,\nwe return to the connection between the knockoff filter and Selective\nSeqStep+. It was shown by \\citet{barber2015controlling} that the\nknockoff filter can be cast as a special case of Selective\nSeqStep+ applied to ``one-bit'' $p\\operatorname{-values}$ with $c$ chosen to be 0.5.\nWhen $q = 0.1$ and $c = 0.5$, the selected set of Selective\nSeqStep+ becomes\n$\\hat{\\mathcal{S}} = \\cb{ j \\leq \\hat{k}: p_{j} \\leq c}$, where\n\\begin{equation}\n\\label{eqn:seqstep_knockoffs}\n\\hat{k} =\\max \\left\\{k \\in \\cb{1,\\dots, p}: \\frac{1+\\#\\left\\{j \\leq k: p_{j}>0.5\\right\\}}{\\#\\left\\{j \\leq k: p_{j} \\leq 0.5\\right\\} \\vee 1} \\leq 0.1 \\right\\}. \n\\end{equation}\nWhen the number of nonnulls is small, the set above may be\nempty. Consider an example where the number of nonnulls is 8. Even in\nthe ideal case where all the nonnulls have vanishing $p\\operatorname{-values}$ and the\nnonnulls appear early in the sequence, for any $k \\geq 8$, the left-hand\nside of the inequality \\eqref{eqn:seqstep_knockoffs} becomes\n\\begin{equation}\n\\frac{1 + \\# \\cb{ \\text{null } 9\\leq j\\leq k: p_j > 0.5}}{8 + \\# \\cb{ \\text{null } 9\\leq j \\leq k: p_j \\leq 0.5}} \\gtrsim \\frac{1}{8} > 0.1.\n\\end{equation}\nTherefore, most of the time there is no $k$ satisfying the inequality in \\eqref{eqn:seqstep_knockoffs}, and we thus make no rejections. \nHence, the power will be low. \n\nThe sequential CRT, however, will not suffer from the same problem. We recall that throughout this paper, we take $c=0.1$ in the sequential CRT. 
With $c = 0.1$, the definition of $\\hat{k}$ becomes\n\\begin{equation}\n\\hat{k} =\\max \\left\\{k \\in \\cb{1,\\dots, p}: \\frac{1+\\#\\left\\{j \\leq k: p_{j}>0.1\\right\\}}{\\#\\left\\{j \\leq k: p_{j} \\leq 0.1\\right\\} \\vee 1} \\leq 0.9 \\right\\}. \n\\end{equation}\nWith a good ordering of the $p\\operatorname{-values}$, the left-hand side of the\ninequality can easily become lower than 0.9, a much less stringent\nthreshold.\n\nWe run simulations varying the number of nonnulls. We compare\nthe proposed symmetric statistic version of the sequential CRT with\nmodel-X knockoffs. We run the sequential CRT with the one-shot CRT. We\nconsider settings as in Section\n\\ref{subsection:simulation_small}; details are in Appendix\n\\ref{subsection:simulation_details_num_nonnull}. Figure\n\\ref{fig:fdr_power_num_nonnull} compares the performance of the above\nmethods in terms of empirical false discovery rate and power averaged\nover 100 independent replications. In all settings, the sequential CRT\nappears to control the FDR around the desired level $q = 0.1$. In\nterms of power, we see that the sequential CRT overcomes ``the threshold\nphenomenon'' discussed earlier. \n\n\\subsection{Choice of the threshold $c$ and the number $B$ of randomizations}\n\\label{subsection:simulation_Bc}\nHere we study the effect on power of the threshold $c$ and of the\n number $B$ of randomizations. We focus on the sequential CRT\n (symmetric statistic version with the one-shot CRT). Intuitively, we\n expect the procedure with a smaller $c$ and a smaller $B$ to be more\n powerful. With a smaller $c$, our procedure is more likely to overcome ``the threshold phenomenon'' as discussed in Section \\ref{subsection:simulation_num_nonnull}. When using a smaller value of $B$, we make sure\n that we are not including too many irrelevant predictors in the\n machine learning algorithm while running the one-shot CRT. 
In addition, when we compute the statistics $z_j$ in\n Procedure \\ref{alg:inexact}, we take the maximum (or the difference\n between the maximum and the median) of the feature importance\n statistics of $\\mathbf{X}_j, \\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}$; thus it is\n helpful to have a smaller $B$ so that the signal, i.e., the feature\n importance statistic of $\\mathbf{X}_j$, has a chance of standing out. If we\n use an extremely large value of $B$, there is a chance that the\n maximum of the feature importance statistics of\n $\\mathbf{X}_j^{(1)}, \\dots, \\mathbf{X}_j^{(B)}$ exceeds that of $\\mathbf{X}_j$. That\n said, $c$ and $B$ cannot be too small at the same time. At the very\n least, in order for our procedure to make any rejection, we need to\n have some $p\\operatorname{-values}$ no larger than $c$. Since the $p\\operatorname{-values}$ are\n bounded below by $1\/(B+1)$, a necessary condition for not being\n powerless is to have $c \\geq 1\/(B+1)$.\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}[b]{0.9\\textwidth}\n \\caption{Small datasets}\n \\centering\n \\includegraphics[width = \\textwidth]{plots\/experiment_300_Bc.pdf}\n \\label{fig:Bc_small}\n \\end{subfigure}\n\\begin{subfigure}[b]{0.9\\textwidth}\n\\noindent\\rule{\\textwidth}{0.4pt}\n \\caption{Large datasets}\n\\centering\n \\includegraphics[width = \\textwidth]{plots\/experiment_1000_Bc.pdf}\n \\label{fig:Bc_large}\n\\end{subfigure}\n\\caption{Power of the sequential CRT (symmetric statistic version with the one-shot CRT) with different choices of the threshold $c$ and the number of randomizations $B$. The nominal false discovery\n rate level is 10\\%. Results are averaged over 100 independent\n experiments.}\n\\label{fig:Bc}\n\\end{figure}\n\nThe above heuristic arguments are confirmed in simulation\nstudies. 
Figure \\ref{fig:Bc} compares the performance of the\nsequential CRT with different values of $c$ and $B$.\\footnote{In our\n simulation studies, we take $B = 10k + 9$ for some\n $k \\in \\mathbb{Z}$ because we want to make $B+1$ a multiple of 10,\n and thus make it possible for $\\ell\/(B+1) = c$ to hold for some\n integer $\\ell$. } We consider several settings: linear\/logistic\nmodels, small\/large synthetic datasets, low\/high signal to noise\nratios. We observe the same phenomenon in all settings. Namely, power\nincreases as $B$ decreases and $c$ decreases with the caveat that\nthey cannot both be small at the same time. It appears that the pair\n$(B, c) = (9, 0.1)$ is the most powerful in all settings, justifying\nthe choices we made in earlier simulation studies. In Appendix\n\\ref{subsection:simulation_details_Bc}, we provide implementation\ndetails to reproduce Figure \\ref{fig:Bc} and additionally show that\nthe FDR is controlled at the nominal level for all choices of $B$ and\n$c$.\n\n\\section{Real data application}\n\\label{section:application}\nWe now apply our method to a breast cancer dataset to identify gene expressions on which the cancer stage depends. The\ndataset is from \\cite{curtis2012genomic}, which consists of\n$n = 1,396$ staged cases of breast cancer. For each case, the data\nconsists of expression level (mRNA) and copy number aberration (CNA)\nof $p = 164$ genes. The goal is to identify genes whose expression\nlevel is not independent of the cancer stage, conditioning on all\nother genes and CNAs. The response variable, the progression stage of\nbreast cancer, is binary. We take the dataset from\n\\citep{liu2020fast} and pre-process the data as in their work. We\nrefer to Section~5 and Section~E of \\citep{liu2020fast} for further\ndetails. Following \\citep{liu2020fast}, we model the distribution of\nexpression levels using a multivariate Gaussian. 
The nominal false\ndiscovery rate is set to be 10\\%.\n\nBelow, we compare the following methods:\n\\begin{enumerate}\n\\item \\emph{Sequential CRT}: We consider Procedure \\ref{alg:inexact} with one-shot CRT. We take the SeqStep threshold $c$ to be 0.1, and the number of randomizations to be 9. We take the importance statistics to be the absolute values of the coefficient of a cross-validated $L_1$-penalized logistic regression. \n\\item \\emph{Distilled CRT} \\citep{liu2020fast}. We consider both $\\operatorname{d}_0$CRT and $\\operatorname{d}_{\\operatorname{I}}$CRT; we refer to Section 2.3 and 2.4 of \\citep{liu2020fast} for specific constructions of the dCRT. Since the response variable is binary, the distillation step is done by a cross-validated $L_1$-penalized logistic regression. \n\\item \\emph{HRT} \\citep{tansey2018holdout}. Algorithm 1 of \\citep{tansey2018holdout} is implemented with a logistic model fitted by a cross-validated $L_1$-penalized logistic regression and a data split of 50\\%-50\\%. \n\\item \\emph{Knockoffs} \\citep{candes2018panning}. Knockoffs are constructed with the Gaussian semi-definite optimization algorithm. We take the\nfeature importance statistic to be the glmnet coefficient difference. \n\\end{enumerate}\nFor \\emph{Distilled CRT} and \\emph{HRT}, we reproduce the analysis from \\citet{liu2020fast}. All methods considered above are randomized procedures, i.e., different runs of the same algorithm produce possibly different sets of discoveries. We run each method 100 times and compare the number of discoveries.\n Figure \\ref{fig:real_data_num_disc} is a boxplot showing the number of discoveries across random seeds. Our procedure appears to make more discoveries on average than the other methods. Compared to knockoffs, our procedure has less variability. 
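Every method compared above terminates with the Selective SeqStep+ step (Algorithm \ref{alg:seqstep}). For reference, here is a minimal sketch of that final step; the function name and the NumPy implementation are ours, with the cutoff rule written for general $c$ and $q$ using the threshold $\frac{1-c}{c}\, q$ that appears in the proof of Theorem \ref{theo:exchangeable}:

```python
import numpy as np

def selective_seqstep_plus(pvals, q=0.1, c=0.1):
    """Selective SeqStep+ applied to an ordered sequence of p-values.

    Rejects {j <= k_hat : p_j <= c}, where k_hat is the largest k with
        (1 + #{j <= k : p_j > c}) / max(#{j <= k : p_j <= c}, 1)
            <= (1 - c) / c * q.
    Returns the 0-based indices of the rejected hypotheses.
    """
    pvals = np.asarray(pvals, dtype=float)
    n_small = np.cumsum(pvals <= c)        # #{j <= k : p_j <= c}
    n_large = np.cumsum(pvals > c)         # #{j <= k : p_j > c}
    ratio = (1 + n_large) / np.maximum(n_small, 1)
    feasible = np.nonzero(ratio <= (1 - c) / c * q)[0]
    if feasible.size == 0:
        return np.array([], dtype=int)     # no k satisfies the cutoff rule
    k_hat = feasible[-1]
    return np.nonzero(pvals[: k_hat + 1] <= c)[0]
```

With one-bit $p$-values and $c = 0.5$ this reduces to the knockoff filter's stopping rule, and with $c = q = 0.1$ the cutoff on the ratio is $0.9$, matching the display in Section \ref{subsection:simulation_num_nonnull}.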
\n\n\\begin{figure}\n\\centering\n\\includegraphics[width = 0.8 \\textwidth]{plots\/real_data.pdf}\n\\caption{Number of discoveries on the breast cancer dataset}\n\\label{fig:real_data_num_disc}\n\\end{figure}\n\nWe present the full list of genes discovered by the sequential CRT in Table\n\\ref{table:discoveries_validate}. \nRecall that the sequential CRT is random; different runs could thus produce different results. Following \\cite{candes2018panning} and \\cite{sesia2019gene}, to make the discoveries more ``reliable'', we run the proposed method multiple times and only show the genes that are\nselected by our procedure more than 10\\% of the time. The 10\\% level is somewhat arbitrary, and we do not make any claims about the discoveries exceeding this threshold. We leave the study of setting a threshold achieving theoretical error control guarantees to future research. We observe, however, that all discoveries above the 10\\% threshold were shown in other independent studies to be related to the development of cancer. \n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{ |c |c |c|c|} \n\\hline\nSelection frequency & Gene & Discovered by dCRT? & Confirmed in? 
\\\\\n\\hline\n99\\% & {\\em HRAS} & Yes & \\citet{geyer2018recurrent}\\\\\n\\hline\n99\\% & {\\em RUNX1} & Yes & \\citet{li2019runx1}\\\\\n\\hline\n96\\% & {\\em FBXW7} & Yes & \\citet{liu2019fbxw7}\\\\ \n\\hline\n95\\% & {\\em GPS2} & Yes & \\citet{huang2016g}\\\\ \n\\hline\n95\\% & {\\em NRAS} & & \\citet{galie2019ras} \\\\ \n\\hline\n82\\% & {\\em FANCD2} & & \\citet{rudland2010significance} \\\\ \n\\hline\n78\\% & {\\em MAP3K13} & Yes & \\citet{han2016microrna}\\\\ \n\\hline\n76\\% & {\\em AHNAK} & & \\citet{chen2017ahnak}\\\\ \n\\hline\n67\\% & {\\em MAP2K4} & & \\citet{liu2019map2k4}\\\\ \n\\hline\n58\\% & {\\em CTNNA1} & & \\citet{clark2020loss} \\\\ \n\\hline\n35\\% & {\\em NCOA3} & & \\citet{gupta2016ncoa3}\\\\ \n\\hline\n13\\% & {\\em LAMA2} & & \\citet{liang2018targeted}\\\\ \n\\hline\n10\\% & {\\em GATA3} & & \\citet{mehra2005identification}\\\\ \n\\hline\n\\end{tabular}\n\\caption{Discoveries made by Procedure \\ref{alg:inexact} on the breast cancer dataset}\n\\label{table:discoveries_validate}\n\\end{table}\n\n\n\\section{Discussion}\n\\label{section:future_work}\n\\paragraph{Comparison with knockoffs}\nIn this paper, we proposed a variable selection procedure, the sequential CRT. As shown in the simulation studies in Section \\ref{section:simulation}, it is generally more powerful than model-X knockoffs; the power gain is especially noticeable when the number of nonnulls is small. Beyond the power gain, we note that the sequential CRT has another advantage over model-X knockoffs: it is usually easier to sample from the conditional distribution of $X_j \\mid X_{-j}$ than to generate knockoffs, especially for complicated joint distributions of $X$.
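To illustrate the last point in the Gaussian case: resampling $X_j \mid X_{-j}$ only requires the textbook conditional-mean and conditional-variance formulas, with no semidefinite program. A minimal sketch, assuming $X \sim N(0, \Sigma)$ with known covariance (the `Sigma` below is a placeholder, not from the data analysis above):

```python
import numpy as np

def sample_xj_given_rest(Sigma, j, x_rest, rng):
    """Draw X_j | X_{-j} = x_rest when X ~ N(0, Sigma):
    mean = S[j,-j] @ inv(S[-j,-j]) @ x_rest,
    var  = S[j,j] - S[j,-j] @ inv(S[-j,-j]) @ S[-j,j]."""
    idx = [i for i in range(Sigma.shape[0]) if i != j]
    s_jr = Sigma[j, idx]
    s_rr_inv = np.linalg.inv(Sigma[np.ix_(idx, idx)])
    mean = s_jr @ s_rr_inv @ x_rest
    var = Sigma[j, j] - s_jr @ s_rr_inv @ s_jr
    return rng.normal(mean, np.sqrt(var))

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])  # placeholder covariance
# Conditional law here is N(1.0, 0.75):
draw = sample_xj_given_rest(Sigma, 0, np.array([2.0]), rng)
```

By contrast, even in this Gaussian setting, knockoff construction requires choosing the knockoff covariance, e.g. via the semidefinite program mentioned in the experiments.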
\n\n\\paragraph{Derandomizing the sequential CRT}\nLike many other model-X procedures (e.g.~model-X knockoffs,\n distilled CRT, etc.), the sequential CRT is a randomized\n procedure. In other words, different runs of the method\n might produce different selected sets. When the method is applied in\n practice, one would report those features whose selection frequency\n exceeds a threshold along with the corresponding\n frequencies. \\citet{ren2020derandomizing} studies the problem of\n derandomizing knockoffs. It will be interesting to study whether it\n is possible to derandomize the proposed method so that results are\n more consistent across different runs.\n\n\\paragraph{Theoretically validating power gains}\nWhile this paper demonstrates enhanced statistical power through\nsimulations, it would be interesting to theoretically validate power\ngains. Intuitively, compared to model-X knockoffs, the proposed method\neffectively reduces the number of covariates by a factor of 2 when\ncomputing feature importance statistics. It would be of interest to\nunderstand theoretically how important such a reduction is in terms of\nstatistical power.\n\n\\paragraph{Robustness to misspecification in the distribution of the covariates}\nAnother interesting direction for future work is to study the\n robustness of the sequential CRT to misspecification in the\n distribution of the covariates. When the distribution of $X$ is\n known only approximately, \\citet{barber2020robust} quantifies the\n possible FDR inflation of model-X knockoffs, and\n \\citet{berrett2020conditional} bounds the inflation in type-I\n error of the CRT. It will be interesting to evaluate the FDR\n inflation of the sequential CRT both empirically and\n theoretically.\n\n\\section*{Acknowledgements}\n\nE. C. was supported by Office of Naval Research grant N00014-20-12157,\nby the National Science Foundation grants OAC 1934578 and DMS 2032014,\nand by the Simons Foundation under award 814641. S. L. 
was supported\nby the National Science Foundation grant OAC 1934578.\n\n\\bibliographystyle{unsrtnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nLet $M$ be a compact manifold with boundary with a chosen basepoint on the boundary. Denote by ${\\operatorname{map}}_*(M,M)$ the space of pointed maps with the compact open topology. Denote by ${\\operatorname{map}}_\\partial(M,M)$ the subspace of maps fixing the boundary point-wise. We denote by ${\\operatorname{aut}}_\\partial(M)$ the sub-monoid of homotopy self-equivalences fixing the boundary point-wise.\n\nThe first object we study in this paper is the rational homology of the classifying space $B{\\operatorname{aut}}_\\partial(M).$ More precisely, we show that it satisfies homological stability for certain families of manifolds. For example, for the $g$-fold connected sum $$N_g=\\big (\\mathlarger{\\#}_g (S^p\\times S^q) \\big ) \\smallsetminus {\\operatorname{int}} (D^{p+q})\\text{, where }3\\leq p\\leq q<2p-1,$$ we show that $H_i(B{\\operatorname{aut}}_\\partial(N_g);{\\mathbb Q})$ is independent of $g$ as long as $g>2i+2.$ Moreover we show that $H_i(B{\\operatorname{aut}}_\\partial(M\\#N_g);{\\mathbb Q})$ is independent of $g$ in the same range, where $M$ is some other connected sum of products of spheres. To make this statement precise, we introduce the following notation. \nLet $I$ be a finite indexing set for pairs of positive natural numbers $p_i,q_i$ and $n$ be a positive natural number, such that $3\\leq p_i\\leq q_i<2p_i-1$ and $p_i+q_i=n$ for all $i\\in I$. Note that this implies that necessarily $n\\geq 6.$ We define a smooth $n$-dimensional manifold with boundary diffeomorphic to $S^{n-1}$ $$N_I=\\left(\\mathlarger{\\#}_{i\\in I} (S^{p_i}\\times S^{q_i})\\right)\\smallsetminus {\\operatorname{int}} (D^n)$$ and we assume a basepoint chosen on the boundary.\n\nFor a given $p\\in {\\mathbb N}$ we define the ``generalized genus'' as $$g_p=\\begin{cases} \\operatorname{rank}(H_p(N_I))\/2 &\\text{ if } 2p=n \\\\ \\operatorname{rank}(H_p(N_I)) &\\text{ otherwise,} \\end{cases}$$ i.e.
$g_p$ is the number of $S^p\\times S^q$ summands of $N_I,$ where $q=n-p.$ In other words, we see $N_I$ as the connected sum of the manifolds\n$$N_I= \\big(\\mathlarger{\\#}_{i\\in \\{j\\in I|p_j\\neq p \\}} S^{p_i}\\times S^{q_i} \\big) \\# \\big(\\mathlarger{\\#}_{g_p}S^{p}\\times S^{q} \\big) \\smallsetminus {\\operatorname{int}} (D^{n}).$$\n\nSet $$V^{p,q}=S^p\\times S^q\\smallsetminus {\\operatorname{int}} (D_1^n\\sqcup D_2^n), \\text{ where $3\\leq p\\leq q<2p-1$ and $p+q=n.$}$$ We define a new manifold \n$$N'=N_I\\cup_{\\partial_1} V^{p,q},$$ by identifying one boundary component of $V^{p,q}$ with $\\partial N_I.$ Note that $N'$ is canonically diffeomorphic to the manifold $N_{I'}$ with $I'=I\\cup \\{i'\\}$, where $p_{i'}=p$ and $q_{i'}=q.$ Using this, we define the stabilization map $$\\sigma:{\\operatorname{aut}}_\\partial( N_I) \\rightarrow {\\operatorname{aut}}_\\partial( N_I\\cup_{\\partial_1} V^{p,q})\\xrightarrow{\\cong} {\\operatorname{aut}}_\\partial( N_{I'}), $$ by extending a self-map of $N_I$ by the identity on $V^{p,q}.$ In this paper we study the induced map on homology of the classifying spaces\n$$\\sigma_*:H_*(B{\\operatorname{aut}}_\\partial (N_I);{\\mathbb Q})\\rightarrow H_*(B{\\operatorname{aut}}_\\partial (N_{I'});{\\mathbb Q}).$$ The first main theorem of this paper is:\n\n\\begin{THMA}\nThe map\n$$H_i(B{\\operatorname{aut}}_\\partial(N_I);{\\mathbb Q})\\rightarrow H_i(B{\\operatorname{aut}}_\\partial(N_{I'});{\\mathbb Q})$$ \ninduced by the stabilization map with respect to $V^{p,q},$ where $3\\leq p\\leq q < 2p-1$, is an isomorphism for $g_p>2i+2$ when $2p\\neq n$ and $g_p>2i+4$ if $2p=n$, and an epimorphism for $g_p\\geq 2i+2$ respectively $g_p\\geq 2i+4$.\n\\end{THMA}\n\nThe block diffeomorphisms ${\\widetilde{\\operatorname{Diff}}}_\\partial (X)$ are the realization of the $\\Delta$-group, i.e.
a simplicial group without degeneracies, whose $k$-simplices are the face-preserving diffeomorphisms $$\\varphi:\\Delta^k\\times X\\rightarrow \\Delta^k\\times X,$$ such that $\\varphi$ is the identity on a neighborhood of $\\Delta^k\\times \\partial X.$ We map a $k$-simplex $\\varphi$ to the $k$-simplex in ${\\widetilde{\\operatorname{Diff}}}_\\partial(N_{I'})$ $$\\Delta^k\\times (N_I\\cup_{\\partial_1} V^{p,q})\\rightarrow \\Delta^k\\times (N_I\\cup_{\\partial_1} V^{p,q}),$$ given by $\\varphi$ on $\\Delta^k\\times N_I$ and the identity on $\\Delta^k\\times V^{p,q}$, where we use that $N_{I'}\\cong N_I\\cup_{\\partial_1} V^{p,q}.$ This induces the stabilization map $B{\\widetilde{\\operatorname{Diff}}}_\\partial(N_I)\\rightarrow B{\\widetilde{\\operatorname{Diff}}}_\\partial(N_{I'}).$\n\n\n\n\\begin{THMB}\nThe map\n$$H_i(B{\\widetilde{\\operatorname{Diff}}}_\\partial(N_I);{\\mathbb Q})\\rightarrow H_i(B{\\widetilde{\\operatorname{Diff}}}_\\partial(N_{I'});{\\mathbb Q})$$\ninduced by the stabilization map with respect to $V^{p,q},$ where $3\\leq p\\leq q < 2p-1,$ is an isomorphism for $g_p>2i+2$ when $2p\\neq n$ and $g_p>2i+4$ if $2p=n$, and an epimorphism for $g_p\\geq 2i+2$ respectively $g_p\\geq 2i+4$.\n\\end{THMB}\n\nThe argument is based on \\cite{berg12,berg13}, where homological stability for the automorphism spaces of the manifolds $\\mathlarger{\\#_g}(S^d\\times S^d) \\smallsetminus {\\operatorname{int}}(D^{2d})$ is shown. Berglund and Madsen moreover determine the stable homology of the homotopy automorphisms and the block diffeomorphisms. The argument to determine the stable homology relies on \\cite{galatiusrandalwilliams}, where the stable cohomology of the moduli spaces of many even-dimensional smooth manifolds is determined.
There is not an analogue of these results for odd-dimensional manifolds yet, but the homological stability results \\cite{galatiusrandalwilliamshomstab} have been generalized to odd-dimensional manifolds \\cite{perlmutter,perlmutter2}.\n\\subsection*{Outline of the argument}\nThe main idea of the proof is to consider the universal covering $$B{\\operatorname{aut}}_\\partial (N_I)\\langle 1 \\rangle \\rightarrow B{\\operatorname{aut}}_\\partial (N_I)$$ or rather, the universal covering spectral sequence with $E^2$-page\n$$H_s(\\pi_1(B{\\operatorname{aut}}_\\partial (N_I));H_t(B{\\operatorname{aut}}_\\partial (N_I)\\langle 1 \\rangle;{\\mathbb Q}))\\Rightarrow H_{s+t}(B{\\operatorname{aut}}_\\partial (N_I);{\\mathbb Q}).$$\nIf we can show that the stabilization map induces isomorphisms for large generalized genus $g_p$ in a range of $s$ and $t$, the spectral sequence comparison theorem implies homological stability for the homotopy automorphisms. So we reduced the problem to showing homological stability for the groups $\\pi_1(B{\\operatorname{aut}}_\\partial (N_I))$ with certain twisted coefficients.\n\nThe first step is to determine the homotopy classes of homotopy automorphisms (Section \\ref{on mapping class groups}), or rather the quotient acting non-trivially on the homology of the universal covering. More precisely, we determine the image and kernel of the ``reduced homology'' map $$\\tilde H:\\pi_0({\\operatorname{aut}}_\\partial(N_I))\\rightarrow {\\operatorname{Aut}} (\\tilde H_*(N_I))$$ to the automorphisms of the reduced homology as a graded group. For the manifolds $N_I$ the kernel is finite. This is in fact a consequence of the connectivity assumption $3\\leq p_i\\leq q_i < 2p_i-1$. The image which we call $\\Gamma_I\\subset {\\operatorname{Aut}} (\\tilde H_*(N_I))$ is the subgroup respecting the intersection form and when the dimension of $N_I$ is even, a certain tangential invariant. 
In particular we show later that elements in the kernel of $\\tilde H_*$ act trivially on $H_t(B{\\operatorname{aut}}_\\partial (N_I)\\langle 1 \\rangle;{\\mathbb Q}).$ Thus we only have to study the groups $\\Gamma_I.$\n\nIn Section \\ref{automorphisms of hyperbolic modules} we review hyperbolic modules and give a slight generalization in order to describe the $\\Gamma_I$ as automorphisms of an object with underlying graded ${\\mathbb Z}$-module $\\tilde H_*(N_I)$.\n\nIn Section \\ref{van der kallens and charneys homological stab results} we use homological stability results by Van der Kallen and Charney to show a homological stability result for the groups $\\Gamma_I$ with certain twisted coefficient systems of ``finite degree''.\nIn Section \\ref{On the rational homotopy type of homotopy automorphisms} we review results of Berglund and Madsen in order to show that the $H_t(B{\\operatorname{aut}}_\\partial (N_I)\\langle 1 \\rangle;{\\mathbb Q})$ are in fact a twisted coefficient system satisfying homological stability for the $\\Gamma_I.$ A crucial tool for showing twisted homological stability is the formalism of Schur multifunctors (defined in Section \\ref{Polynomial functors and Schur multifunctors}), which we use to determine the degree of the coefficient systems.
In fact, developing the Schur multifunctors as a tool to handle the different degrees coming from the homology $\\tilde H_*(N_I)$ was one of the main hurdles in generalizing Berglund and Madsen's result.\n\nTo show the homological stability for the block diffeomorphisms, we consider the homotopy fibration $${\\operatorname{aut}}_\\partial (N_I)\/{\\widetilde{\\operatorname{Diff}}}_\\partial (N_I) \\to B{\\widetilde{\\operatorname{Diff}}}_\\partial (N_I)\\to B{\\operatorname{aut}}_\\partial (N_I).$$ We can determine the homology of a component of the homotopy fiber using surgery theoretic methods applied by Berglund and Madsen, which suffices to show homological stability using similar arguments as for the homotopy automorphisms.\n\n\n\n\n\n\\subsection*{Acknowledgments}\nThis paper is based on the author's PhD thesis, which was supported by the Danish National Research Foundation through the Centre for Symmetry and Deformation (DNRF92). The author wants to thank his advisor Ib Madsen for suggesting this project and for his advice throughout working on it. The author moreover wants to thank Alexander Berglund for comments on the presentation of this article.\n\n\\section{Polynomial functors and Schur multifunctors}\\label{Polynomial functors and Schur multifunctors}\n\nWe start by recalling the definition of polynomial functors in the sense of \\cite[Section 9]{eilenberg}, slightly modified as in \\cite[Section 3]{dwyer}. \nLet $T:\\mathcal{A}\\rightarrow \\mathcal{B}$ be a (not necessarily additive) functor between abelian categories. The first cross-effect functor is defined to be $$T^1(X)=\\operatorname{kernel}(T(X)\\to T(0)),$$ where $X\\rightarrow 0$ is the natural map to the zero object in $\\mathcal{A}$. For $k>1$, the $k$-th cross-effect functor $$T^k:\\mathcal{A}^k\\rightarrow \\mathcal{B}$$ is uniquely defined up to isomorphism given $T^l$ for $l<k$ by the natural decomposition $$T(A_1\\oplus ... \\oplus A_k)\\cong T(0)\\oplus \\bigoplus_{\\emptyset\\neq\\{ i_1,...,i_r \\}\\subset \\{1,...,k\\}} T^r(A_{i_1},...,A_{i_r}).$$\n\\begin{defi}\nA functor $T:\\mathcal{A}\\rightarrow \\mathcal{B}$ is called \\emph{polynomial of degree $\\leq k$}, if the cross-effect functors $T^l$ vanish for $l>k$.
\n\\end{defi}\n\nAn immediate consequence is that a functor is of degree $\\leq 0$ if and only if it is constant. The higher cross-effects can be defined using deviations. The $k$-fold deviation of $k$ maps $f_1,...,f_k:A\\rightarrow B$ in $\\mathcal{A}$ is the map\n$$T(f_1\\top ... \\top f_k):T(A)\\rightarrow T(B),$$ given by $$T(f_1\\top ... \\top f_k)= (-1)^k T(0) + \\sum_{\\{ i_1,...,i_r \\}} (-1)^{k-r} T(f_{i_1}+ ... +f_{i_r}),$$ where the sum runs over all non-empty subsets $\\{ i_1,...,i_r \\}\\subset\\{ 1,...,k \\}$ and $0$ denotes the canonical map $A\\rightarrow 0 \\rightarrow B.$ Setting $A=A_1\\oplus...\\oplus A_k$ and denoting by $\\pi_i:A\\rightarrow A_i$ the projections and by $\\iota_i:A_i\\rightarrow A$ the inclusions, the $k$-th cross-effect functor is given on objects by $$T^k(A_1,...,A_k)=\\operatorname{Image}(T(\\iota_1\\circ\\pi_1\\top ... \\top \\iota_k\\circ\\pi_k)).$$\n\nThe following properties of polynomial functors are direct consequences of the definition.\n\n\\begin{prop}[See e.g. \\cite{dwyer}]\\mbox{}\n\\begin{enumerate}\n \\item An additive functor is of degree $\\leq 1$.\n \\item The composition of functors of degree $\\leq k$ and $\\leq l$ is a functor of degree $\\leq kl$.\n \\item Let $T:\\mathcal{A}\\rightarrow\\mathcal{B}$ and $R:\\mathcal{C}\\rightarrow\\mathcal{B}$ be of degree $\\leq k$ and $\\leq l$, respectively. The level-wise sum $$T\\oplus R :\\mathcal{A}\\times \\mathcal{C}\\rightarrow \\mathcal{B}$$ is polynomial of degree $\\leq \\operatorname{max}\\{k,l \\}.$\n\\end{enumerate}\n\\end{prop}\n\n\\begin{example}Let $\\operatorname{Mod}({\\mathbb Z})$ be the category of finitely generated ${\\mathbb Z}$-modules.
An example of a degree $\\leq k$ functor is the $k$-fold tensor product\n\\begin{displaymath}\n\\begin{aligned}\n\\otimes: &\\operatorname{Mod}({\\mathbb Z})^k &&\\rightarrow &&&\\operatorname{Mod}({\\mathbb Z})\\\\\n&(A_1,...,A_k) &&\\mapsto &&&\\bigotimes_i A_i.\n\\end{aligned}\t\n\\end{displaymath}\\end{example}Schur functors give examples of polynomial functors. Note that the following definitions also make sense for general commutative rings, but we are going to restrict our presentation to the category of graded rational vector spaces ${Vect_*}({\\mathbb Q}).$ Schur functors are treated for example in \\cite{algop}. We could not find any literature on Schur multifunctors and hence state the facts we need here. \n\nLet $\\mathscr{M}=\\mathscr{M}(n)\\in {Vect_*}({\\mathbb Q})$, $n\\geq 0$, be a sequence of ${\\mathbb Q}[\\Sigma_n]$-modules. We refer to them as $\\Sigma_n$-modules but implicitly use the ${\\mathbb Q}[\\Sigma_n]$-module structure; in particular $\\otimes_{\\Sigma_n}$ refers to the tensor product over ${\\mathbb Q}[\\Sigma_n]$. The \\emph{Schur functor} given by $\\mathscr{M}$ is defined to be the endofunctor of ${Vect_*}({\\mathbb Q})$ induced by\n$$\\mathscr{M}(V)=\\bigoplus_k \\mathscr{M}(k)\\otimes_{\\Sigma_k}V^{\\otimes k} \\text{ for all }V\\in {Vect_*}({\\mathbb Q}),$$\nwhere $V^{\\otimes k}$ is the left $\\Sigma_k$-module with action induced by the permutation of the factors by the inverse (with signs according to the Koszul sign convention). Note that $\\mathscr{M}(0)$ is just a constant summand. A Schur functor $\\mathscr{M}$ with $\\mathscr{M}(l)$ trivial for $l>k$ is a polynomial functor of degree $\\leq k$.\n\nLet $\\eta=(n_1,...,n_l),$ $n_1,...,n_l\\geq 0$ be a multiindex. Throughout this article we will assume all multiindices to have non-negative entries.
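To illustrate the degree bound for Schur functors just stated, here is a standard example (not specific to this paper), with $V$ concentrated in degree $0$: taking the sign representation in arity $k$ recovers the $k$-th exterior power,

```latex
\mathscr{M}(l)=\begin{cases}\operatorname{sgn}_k & l=k,\\ 0 & l\neq k,\end{cases}
\qquad\text{so}\qquad
\mathscr{M}(V)=\operatorname{sgn}_k\otimes_{\Sigma_k}V^{\otimes k}\cong\Lambda^k V.
```

This Schur functor is polynomial of degree $\leq k$, and the bound is sharp: $\Lambda^k(V_1\oplus\dots\oplus V_k)$ contains $V_1\otimes\dots\otimes V_k$ as a summand, so the $k$-th cross-effect is non-trivial.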
We use the following conventions:\\begin{displaymath}\n\\begin{aligned}\n\t|\\eta| &=\\sum_{i=1}^l n_i,\\\\\n\t\\ell(\\eta) &=l,\\\\\n\t\\mu+\\eta &=(m_1+n_1,...,m_l+n_l) \\text{, for $\\mu=(m_1,...,m_l)$},\\\\\n\t(V_i)^{\\otimes \\eta} &=V_1^{\\otimes n_1}\\otimes ... \\otimes V_l^{\\otimes n_l} \\text{, where }(V_i)\\in ({Vect_*}({\\mathbb Q}))^l,\\\\\n\t\\Sigma_\\eta &= \\Sigma_{n_1}\\times ... \\times \\Sigma_{n_l}.\n\\end{aligned}\n\\end{displaymath}\nConsider a sequence of ${\\mathbb Q}[\\Sigma_\\eta]$-modules $\\mathscr{N}=\\mathscr{N}(\\eta)\\in {Vect_*}({\\mathbb Q})$, $\\ell(\\eta)=l.$ As before, we refer to them as $\\Sigma_\\eta$-modules. We define the \\emph{Schur multifunctor} given by $\\mathscr{N}$ on objects by\n$$\\mathscr{N}(V_1,...,V_l)=\\bigoplus_{\\ell(\\eta)=l} \\mathscr{N}(\\eta)\\otimes_{\\Sigma_\\eta}(V_i)^{\\otimes \\eta}\\text{ for all }(V_i)\\in ({Vect_*}({\\mathbb Q}))^l.$$\n Similarly, a Schur multifunctor $\\mathscr{N}$ is polynomial of degree $\\leq k$ if $\\mathscr{N}(\\eta)$ is trivial for $|\\eta|>k$.\n\\begin{example}\nConsider Schur functors $\\mathcal{N}_i:Vect_*({{\\mathbb Q}})\\rightarrow Vect_*({{\\mathbb Q}})$, $i=1,...,l$. The tensor product $$\\bigotimes \\mathcal{N}_i:Vect_*({{\\mathbb Q}})^l\\rightarrow Vect_*({\\mathbb Q})$$ is a Schur multifunctor with $(\\bigotimes \\mathcal{N}_i)(\\eta)=\\bigotimes\\mathcal{N}_i(n_i).$\n\\end{example}\nWe define the tensor product of $\\mathscr{M}=\\mathscr{M}(\\mu)$ and $\\mathscr{N}=\\mathscr{N}(\\eta)$ as $$\\mathscr{M}\\otimes \\mathscr{N}(\\nu):= \\bigoplus_{\\substack{\\mu'+\\eta'=\\nu}}\\operatorname{Ind}^{\\Sigma_\\nu}_{\\Sigma_{\\mu'}\\times \\Sigma_{\\eta'}}\\mathscr{M}(\\mu')\\otimes \\mathscr{N}(\\eta').$$\nThe Schur functor defined by this tensor product is indeed (up to natural isomorphism) the level-wise tensor product of the two functors, as we see by the isomorphisms for $\\mu'+\\eta'=\\nu$ with
$\\ell(\\mu')=\\ell(\\eta')=\\ell(\\nu)$\n\\begin{displaymath}\n\\begin{aligned}\n&\\left( \\operatorname{Ind}^{\\Sigma_\\nu}_{\\Sigma_{\\mu'}\\times \\Sigma_{\\eta'}}\\mathscr{M}(\\mu')\\otimes \\mathscr{N}(\\eta') \\right)\\otimes_{\\Sigma_\\nu}(V_i)^{\\otimes \\nu} \\\\&= \\left( \\mathscr{M}(\\mu')\\otimes \\mathscr{N}(\\eta') \\otimes_{\\Sigma_{\\mu'}\\times \\Sigma_{\\eta'}} {\\mathbb Q}[\\Sigma_\\nu] \\right)\\otimes_{\\Sigma_\\nu}(V_i)^{\\otimes \\nu} \\\\\n&\\cong \\left( \\mathscr{M}(\\mu')\\otimes \\mathscr{N}(\\eta') \\right)\\otimes_{\\Sigma_{\\mu'}\\times \\Sigma_{\\eta'}}\\left( (V_i)^{\\otimes \\mu'}\\otimes (V_i)^{\\otimes \\eta'}\\right)\\\\\n&\\cong \\left( \\mathscr{M}(\\mu')\\otimes_{\\Sigma_{\\mu'}}(V_i)^{\\otimes \\mu'} \\right) \\otimes \\left( \\mathscr{N}(\\eta') \\otimes_{\\Sigma_{\\eta'}} (V_i)^{\\otimes \\eta'}\\right).\n\\end{aligned}\t\n\\end{displaymath}\n\nThe tensor powers of a $\\mathscr{N}=\\mathscr{N}(\\eta)$ are (up to natural isomorphism) explicitly described by\n\\begin{equation}\\label{schurtensorp}\n\\mathscr{N}^{\\otimes r}(\\nu)=\\bigoplus \\operatorname{Ind}_{\\Sigma_{\\eta_1}\\times...\\times \\Sigma_{\\eta_r}}^{\\Sigma_\\nu}\\mathscr{N}(\\eta_1)\\otimes ... \\otimes \\mathscr{N}(\\eta_r),\n\\end{equation}\nwhere the sum runs over all $r$-tuples $(\\eta_1,...,\\eta_r),$ where $\\ell(\\eta_i)=l,$ such that $\\Sigma_{i=1}^r \\eta_i=\\nu$.\n\nNow consider a Schur functor $\\mathscr{M}=\\mathscr{M}(m)$ and a Schur multifunctor $\\mathscr{N}=\\mathscr{N}(\\eta)$. The composition is a Schur multifunctor isomorphic to the Schur multifunctor given by\n\\begin{equation}\\label{schurcomp}\n(\\mathscr{M}\\circ \\mathscr{N})(\\nu)=\\bigoplus_{r} \\mathscr{M}(r) \\otimes_{\\Sigma_r} \\bigoplus \\operatorname{Ind}_{\\Sigma_{\\eta_1}\\times...\\times \\Sigma_{\\eta_r}}^{\\Sigma_\\nu}\\mathscr{N}(\\eta_1)\\otimes ... 
\\otimes \\mathscr{N}(\\eta_r), \n\\end{equation}\nwhere the second sum runs over all $r$-tuples $(\\eta_1,...,\\eta_r),$ where $\\ell(\\eta_i)=l,$ such that $\\Sigma_{i=1}^r \\eta_i=\\nu$. The action of $\\Sigma_r$ is by permuting the tuples $(\\eta_1,...,\\eta_r)$ by the inverse. Indeed as we check using (\\ref{schurtensorp}):\n\\begin{displaymath}\n\\begin{aligned}\n(&\\mathscr{M}\\circ \\mathscr{N})(V_i) = \\bigoplus_r \\mathscr{M}(r) \\otimes_{\\Sigma_r} \\mathscr{N}(V_i)^{\\otimes r}\\cong \\bigoplus_r \\mathscr{M}(r) \\otimes_{\\Sigma_r} \\mathscr{N}^{\\otimes r}(V_i)\\\\\n&\\cong \\bigoplus_{r,\\nu} \\mathscr{M}(r) \\otimes_{\\Sigma_r} \\left( \\mathscr{N}^{\\otimes r}(\\nu)\\otimes_{\\Sigma_\\nu} (V_i)^{\\otimes \\nu} \\right)\\\\\n&\\cong \\bigoplus_{r,\\nu} \\mathscr{M}(r) \\otimes_{\\Sigma_r} \\left( \\bigoplus_{\\Sigma_{i=1}^r \\eta_i=\\nu} \\operatorname{Ind}_{\\Sigma_{\\eta_1}\\times...\\times \\Sigma_{\\eta_r}}^{\\Sigma_\\nu}\\mathscr{N}(\\eta_1)\\otimes ... \\otimes \\mathscr{N}(\\eta_r)\\otimes_{\\Sigma_\\nu} (V_i)^{\\otimes \\nu} \\right)\n\\\\\n&\\cong \\bigoplus_\\nu \\bigoplus_r\\mathscr{M}(r) \\otimes_{\\Sigma_r} \\left( \\bigoplus_{\\Sigma_{i=1}^r \\eta_i=\\nu} \\operatorname{Ind}_{\\Sigma_{\\eta_1}\\times...\\times \\Sigma_{\\eta_r}}^{\\Sigma_\\nu}\\mathscr{N}(\\eta_1)\\otimes ... \\otimes \\mathscr{N}(\\eta_r) \\otimes_{\\Sigma_\\nu} (V_i)^{\\otimes \\nu}\\right) .\n\\end{aligned}\n\\end{displaymath}\n\n\\begin{rem}\nWe will later use Schur (multi)functors with domain the category of rational vector spaces - just consider them as graded rational vector spaces concentrated in degree $0$.\n\\end{rem}\n\n\n\n\\section{Automorphisms of hyperbolic modules over the integers}\\label{automorphisms of hyperbolic modules}\n\nIn this section we review hyperbolic modules in the sense of \\cite{MR632404} in the special case with ground ring the integers. 
Fix a $\\lambda\\in \\{+1,-1\\}.$ Let $\\Lambda\\subset {\\mathbb Z}$ be an additive subgroup, called the \\emph{form parameter}, such that \\begin{equation}\\label{Lambda}\n\\{ z - \\lambda z | z \\in {\\mathbb Z}\\}\\subset \\Lambda \\subset \\{ z\\in {\\mathbb Z}|z=-\\lambda z\\}. \n\\end{equation}\n\\begin{defi}[\\cite{MR632404}]\nA $\\Lambda$-quadratic module is a pair $(M,\\mu)$, where $M$ is a ${\\mathbb Z}$-module and $\\mu$ is a bilinear form, i.e. a homomorphism $$\\mu:M\\otimes M\\rightarrow {\\mathbb Z}.$$\\end{defi}\nTo a $\\Lambda$-quadratic module $(M,\\mu)$ we associate a \\emph{$\\Lambda$-quadratic form} $$q_\\mu:M\\rightarrow {\\mathbb Z}\/\\Lambda\\text{, } q_\\mu(x)=[\\mu(x,x)]$$ and a \\emph{$\\lambda$-symmetric bilinear form} $$\\langle - , - \\rangle_\\mu:M\\otimes M\\rightarrow {\\mathbb Z},$$ defined by $\\langle x , y \\rangle_\\mu=\\mu(x,y)+\\lambda \\mu(y,x).$ ($\\lambda$-symmetric bilinear forms arising in this way are called even.) We call a finitely generated projective $\\Lambda$-quadratic module $(M,\\mu)$ \\emph{non-degenerate}, if the map $$M\\rightarrow M^*\\text{, }x\\mapsto \\langle x,-\\rangle_\\mu $$ is an isomorphism.\n\nDenote by $Q^\\lambda({\\mathbb Z},\\Lambda)$ the category of non-degenerate $\\Lambda$-quadratic modules, with morphisms the linear maps respecting the associated $\\lambda$-symmetric bilinear form and the associated $\\Lambda$-quadratic form.\n\nGiven a finitely generated ${\\mathbb Z}$-module $M$, we define a non-degenerate $\\Lambda$-quadratic module $H(M)=(M\\oplus M^*,\\mu_M),$ where $\\mu_M((x,f),(y,g))=f(y).$ We call $H(M)$ the \\emph{hyperbolic module} on $M.$ A $\\Lambda$-quadratic module is called \\emph{hyperbolic}, if it is isomorphic to $H(N)$ for some finitely generated ${\\mathbb Z}$-module $N$.
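As a sanity check of these definitions in the smallest case (a routine computation, not taken from \cite{MR632404}): for $M={\mathbb Z}$ with basis element $e$ and dual basis element $f$, the hyperbolic module $H({\mathbb Z})$ satisfies

```latex
\mu(e,e)=\mu(f,f)=\mu(e,f)=0,\qquad \mu(f,e)=1,
\qquad\text{hence}\qquad
\langle e,f\rangle_\mu=\lambda,\quad \langle f,e\rangle_\mu=1,\quad q_\mu(e)=q_\mu(f)=0.
```

The Gram matrix of $\langle-,-\rangle_\mu$ in the basis $(e,f)$ is $\begin{pmatrix}0 & \lambda\\ 1 & 0\end{pmatrix}$, which is invertible over ${\mathbb Z}$, so $H({\mathbb Z})$ is indeed non-degenerate.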
Let $\\{e_i\\}$ be the standard basis for ${\\mathbb Z}^g$ and $\\{f_i\\}$ the dual basis of $({\\mathbb Z}^g)^*.$ Using this basis we consider the automorphisms of the $\\Lambda$-quadratic module $H({\\mathbb Z}^g)$ as a subgroup of $Gl_{2g}({\\mathbb Z}).$ The subgroups can be described as follows:\n\n\\begin{prop}[{\\cite[Cor. 3.2.]{MR632404}}]\nThe automorphisms of $H({\\mathbb Z}^g)$ in $Q^{\\lambda}({\\mathbb Z},\\Lambda)$ are isomorphic to the subgroup of $Gl_{2g}({\\mathbb Z})$ consisting of matrices $\\begin{pmatrix}\nA & B \\\\\nC & D \\end{pmatrix}$ such that\n\\begin{displaymath}\n\t\\begin{aligned}\n\t\t&D^TA + \\lambda B^TC=1 \\\\\n\t\t&D^TB+ \\lambda B^T D=0 \\\\\n\t\t&A^T C + \\lambda C^T A=0\\\\\n\t\t&\\text{$C^T A$ and $D^TB$ have diagonal entries in $\\Lambda.$}\n\t\\end{aligned}\n\\end{displaymath}\n\n\\end{prop}\n\nNote that if $\\lambda=1$ we necessarily have $\\Lambda=0.$ When $\\lambda=-1,$ the condition (\\ref{Lambda}) implies that $2{\\mathbb Z}\\subset \\Lambda \\subset {\\mathbb Z},$ thus we have the two cases $\\Lambda={\\mathbb Z}$ and $\\Lambda=2{\\mathbb Z}.$ Thus we can list the automorphisms of hyperbolic modules:\n\\begin{enumerate}\n\t\\item When $\\lambda=1$ and $\\Lambda=0,$ then ${\\operatorname{Aut}}(H({\\mathbb Z}^g))=O_{g,g}({\\mathbb Z})$ in $Q^{1}({\\mathbb Z},0).$ \n\\item When $\\lambda=-1$ and $\\Lambda={\\mathbb Z},$ then ${\\operatorname{Aut}}(H({\\mathbb Z}^g))=Sp_{2g}({\\mathbb Z})$ in $Q^{-1}({\\mathbb Z},{\\mathbb Z}).$\n\\item When $\\lambda=-1$ and $\\Lambda=2{\\mathbb Z},$ then ${\\operatorname{Aut}}(H({\\mathbb Z}^g))$ in $Q^{-1}({\\mathbb Z},2{\\mathbb Z})$ is the subgroup of $Sp_{2g}({\\mathbb Z})$ described as:\n$$\\left \\{\\begin{pmatrix}\nA & B \\\\\nC & D \\end{pmatrix}\\in Sp_{2g}({\\mathbb Z})| C^TA \\text{ and } D^TB \\text{ have even entries at the diagonal} \\right \\}.$$\n\\end{enumerate}\n\n\nLet $N$ be a ($d-1$)-connected $2d$-manifold.
Wall \\cite{MR0145540} has shown that the automorphisms of the homology realized by diffeomorphisms are the automorphisms of a $\\Lambda$-quadratic module with underlying ${\\mathbb Z}$-module $H_d(N).$ Later we show a similar statement for connected sums of products of spheres. For this we need a slight variation of $\\Lambda$-quadratic modules.\n\nLet $n\\in{\\mathbb N}$ and let $\\Lambda\\subset {\\mathbb Z}$ be an additive subgroup, such that \\begin{equation*}\n\\{ z - (-1)^{\\frac{n}{2}}z | z \\in {\\mathbb Z}\\}\\subset \\Lambda \\subset \\{ z\\in {\\mathbb Z}|z=-(-1)^{\\frac{n}{2}} z\\}. \n\\end{equation*} Note that in the case that $n$ is odd, $\\Lambda$ has to be the trivial group; in fact, in that case it will not be part of the following definitions at all, but we keep it in the notation for convenience.\n\\begin{defi}A \\emph{graded $\\Lambda$-quadratic module} is a pair $(M,\\mu)$, where $M$ is a graded ${\\mathbb Z}$-module and $\\mu$ is a bilinear $n$-pairing, i.e. a degree $0$ homomorphism \n$$\\mu: M\\otimes M \\rightarrow {\\mathbb Z}[n].$$\n\\end{defi}\nWe associate to $(M,\\mu)$ a \\emph{symmetric bilinear $n$-pairing}\n$$\\langle - , - \\rangle_\\mu:M\\otimes M\\rightarrow {\\mathbb Z}[n],$$ defined by $\\langle x , y \\rangle_\\mu=\\mu(x,y)+(-1)^{|x||y|} \\mu(y,x).$\nWhen $n=2d$, we associate a \\emph{$\\Lambda$-quadratic form} $$q_\\mu:M_d\\rightarrow {\\mathbb Z}\/\\Lambda\\text{, } q_\\mu(x)=[\\mu(x,x)].$$ We call a finitely generated projective graded $\\Lambda$-quadratic module $(M,\\mu)$ non-degenerate, if the map $$M\\rightarrow {\\operatorname{Hom}} (M,{\\mathbb Z}[n])\\text{, }x\\mapsto \\langle x,-\\rangle_\\mu $$ is an isomorphism.\n\nFor $n$ even, we define $Q_*^{n}({\\mathbb Z},\\Lambda)$ to be the category whose objects are non-degenerate graded $\\Lambda$-quadratic modules $(M,\\mu)$, where $(M,\\mu)$ is finitely generated as a ${\\mathbb Z}$-module and the morphisms respect $q_\\mu$ and $\\langle-,-\\rangle_\\mu.$\n\nFor $n$ odd, we
define $Q_*^{n}({\\mathbb Z},\\Lambda)$ to be the category whose objects are non-degenerate graded $\\Lambda$-quadratic modules $(M,\\mu)$, where $(M,\\mu)$ is finitely generated as a ${\\mathbb Z}$-module and the morphisms respect $\\langle-,-\\rangle_\\mu.$\n\nLet $Q_+^{n}({\\mathbb Z},\\Lambda)$ be the full subcategory with objects concentrated in positive degrees and hence necessarily concentrated in degrees $1,...,n-1$.\n\n\\section{Van der Kallen's and Charney's homological stability results}\\label{van der kallens and charneys homological stab results}\nA \\emph{coefficient system} for the groups $\\{Gl_g({\\mathbb Z})\\}_{g\\geq 1}$ is a sequence of $Gl_g({\\mathbb Z})$-modules $\\{\\rho_g\\}_{g\\geq 1}$ together with $Gl_g({\\mathbb Z})$-equivariant maps $F_g:\\rho_g\\rightarrow I^*(\\rho_{g+1})$, where $I:Gl_g({\\mathbb Z})\\rightarrow Gl_{g+1}({\\mathbb Z})$ denotes the upper inclusion and $J:Gl_g({\\mathbb Z})\\rightarrow Gl_{g+1}({\\mathbb Z})$ the lower inclusion. The shifted coefficient system $\\Sigma\\rho$ is given by $\\Sigma \\rho_g={J^*}(\\rho_{g+1})$. A coefficient system is called central if $c_{g+2}$ acts trivially on the image of $F_{g+1}F_g:\\rho_g\\rightarrow \\rho_{g+2}$, where we let $c_g\\in Gl_g({\\mathbb Z})$ $(g>1)$ be the element sending the $i$-th standard basis element to the $(i+1)$-st and the $g$-th to the first. \n\nDenote by $\\mu(c_g)$ the multiplication from the left by $c_g$. Then the following holds:\n\\begin{lem}[{\\cite[Lemma 2.1.]{dwyer}}]\nLet $\\rho$ be a central coefficient system. Then we have a map of coefficient systems $\\tau:\\rho\\rightarrow\\Sigma\\rho$, defined by\n$$\\tau_g:\\rho_g\\xrightarrow{F_g} I^*(\\rho_{g+1})\\xrightarrow{\\mu (c_{g+2})} {J^*}(\\rho_{g+1})=\\Sigma \\rho_g.$$\n\\end{lem}\nWe say that a central coefficient system $\\rho$ splits, if $\\Sigma \\rho$ is isomorphic to $\\rho\\oplus \\text{coker}(\\tau)$ via $\\tau$. We then denote $\\text{coker}(\\tau)$ by $\\Delta \\rho$. We now define the notion of degree of a strongly central coefficient system $\\rho$ inductively. We say it has degree $k<0$, if it is constant, and for $k\\geq 0$ we say that it has degree $\\leq k$, if $\\Sigma \\rho$ splits and $\\Delta \\rho$ is a strongly central coefficient system of degree $\\leq k-1$.\n\n\\begin{thm}[{Van der Kallen \\cite[p.
291]{vdk}}]\\label{vdkstab}\n Let $\\rho$ be a strongly central coefficient system of degree $\\leq k$. Then $$H_i(Gl_g({\\mathbb Z}),\\rho_g)\\rightarrow H_i(Gl_{g+1}({\\mathbb Z}),\\rho_{g+1})$$ is an isomorphism for $g>2i+k+2$ and an epimorphism for $g\\geq 2i+k+2$.\n\\end{thm}\n\nDenote by $\\lambda_g$ the standard representation of $Gl_g({\\mathbb Z})$ on ${\\mathbb Z}^g$ and by $\\bar \\lambda_g$ the action by the inverse transpose on ${\\mathbb Z}^g.$ Let $\\mathcal{A}$ be an abelian category. Given a functor $$T:\\operatorname{Mod}({\\mathbb Z})\\times\\operatorname{Mod}({\\mathbb Z})\\rightarrow \\mathcal{A},$$ we define a coefficient system $\\{T(\\lambda_g,\\bar \\lambda_g)\\}_{g\\geq 1}$ with structure maps induced by the standard inclusions and actions induced by $\\lambda_g$ and $\\bar \\lambda_g$.\n\n\\begin{lem}[{Compare \\cite[5.5.]{vdk} and \\cite[Lemma 3.1.]{dwyer}}]\nIf $$T:\\operatorname{Mod}({\\mathbb Z})\\times\\operatorname{Mod}({\\mathbb Z})\\rightarrow \\mathcal{A}$$ is a polynomial functor of degree $\\leq k$, then $\\{ T(\\lambda_g,\\bar \\lambda_{g})\\}_{g\\geq 1}$ is a strongly central coefficient system of degree $\\leq k$.
\n\\end{lem} \n\nDenote now by $G_g$ the group of automorphisms of $H({\\mathbb Z}^g)$ in $Q^{\\lambda}({\\mathbb Z},\\Lambda).$ Denote by $e_1,...,e_g$ the standard basis for ${\\mathbb Z}^g$ and by $f_1,...,f_g$ the dual basis of $({\\mathbb Z}^g)^*.$ We view $G_g$ as a subgroup of $Gl_{2g}({\\mathbb Z})$ by considering the elements of $G_g$ as $2g\\times 2g$-matrices acting on $H({\\mathbb Z}^g)\\cong {\\mathbb Z}^{2g}.$ We define the upper inclusion $$I:G_g\\rightarrow G_{g+1}\\text{, }\\begin{pmatrix}\nA & B \\\\\nC & D \\end{pmatrix}\\mapsto \\begin{pmatrix}\nA & 0 & B & 0\\\\\n0 & 1 & 0 & 0\\\\\nC & 0 & D & 0\\\\\n0 & 0 & 0 & 1\\\\\n\\end{pmatrix} $$ and similarly the lower inclusion $J:G_g\\rightarrow G_{g+1}.$ The definition of a coefficient system is very similar to the one for $Gl_g({\\mathbb Z})$ and we only briefly summarize it.\nA coefficient system for $\\{ G_g \\}_{g \\geq 1}$ is a sequence of $G_g$-modules $\\{ \\rho_g\\}_{g \\geq 1}$ together with $ G_g $-maps $F_g:\\rho_g\\rightarrow I^* (\\rho_{g+1}).$ We denote a coefficient system again by $\\rho$ and let maps of coefficient systems be as above. The shifted coefficient system $\\Sigma \\rho$ is the restriction via the lower inclusion as above. A coefficient system is called central if $c_{g+2}\\circ c_{2g+4}$ acts trivially on the image of $F_{g+1}F_g:\\rho_g\\rightarrow \\rho_{g+2}$. 
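As a sanity check on the conventions above, $c_g$ is nothing but the permutation matrix of the $g$-cycle. The following Python sketch (an illustration only, not part of the argument; indices are $0$-based) verifies that this matrix cycles the standard basis and has order $g$:

```python
def c(g):
    """Permutation matrix of the g-cycle: sends the i-th standard basis
    vector to the (i+1)-st and the last one to the first (0-indexed)."""
    return [[1 if i == (j + 1) % g else 0 for j in range(g)] for i in range(g)]

def matmul(A, B):
    """Plain integer matrix multiplication for nested lists."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

g = 5
cg = c(g)

# column j (the image of e_j) has its single 1 in row (j+1) mod g
for j in range(g):
    col = [cg[i][j] for i in range(g)]
    assert sum(col) == 1 and col[(j + 1) % g] == 1

# c_g has order g: the g-fold product is the identity matrix
P = cg
for _ in range(g - 1):
    P = matmul(P, cg)
assert P == [[1 if i == j else 0 for j in range(g)] for i in range(g)]
```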
For a central coefficient system we define the map of coefficient systems $\\tau:\\rho\\rightarrow\\Sigma\\rho$ by\n$$\\tau_g:\\rho_g\\xrightarrow{F_g} I^*(\\rho_{g+1})\\xrightarrow{\\mu (c_{g+2}\\circ c_{2g+4})} {J^*}(\\rho_{g+1})=\\Sigma \\rho_g.$$ We call a central coefficient system $\\rho$ split if $\\tau$ is injective and $\\Sigma \\rho \\cong \\tau(\\rho)\\oplus \\operatorname{coker}(\\tau).$ For a central coefficient system we define the degree inductively: we say it has degree $k<0$ if it is constant, and for $k\\geq 0$ we say that it has degree $\\leq k$ if $\\Sigma \\rho$ splits and $\\Delta \\rho=\\operatorname{coker}(\\tau)$ is a central coefficient system of degree $\\leq k-1$.\n\n\\begin{thm}[{Charney \\cite[Theorem 4.3.]{MR885099}}]\\label{charneystab}\n Let $\\rho$ be a central coefficient system of degree $\\leq k$. Then $$H_i(G_g,\\rho_g)\\rightarrow H_i(G_{g+1},\\rho_{g+1})$$ is an isomorphism for $g>2i+k+4$ and an epimorphism for $g\\geq 2i+k+4$.\n\\end{thm}\n\nAgain we get central coefficient systems from the standard $G_g$-action $\\lambda_{g,g}$ on $H({\\mathbb Z}^g)\\cong {\\mathbb Z}^{2g}$, induced by the inclusion $G_g\\subset Gl_{2g}({\\mathbb Z}).$ Let $\\mathcal{A}$ be an abelian category. 
Given a polynomial functor $$T:\\operatorname{Mod}({\\mathbb Z})\\rightarrow \\mathcal{A}$$ of degree $\\leq k$, the sequence $\\{T(\\lambda_{g,g})\\}_{g\\geq 1}$ is a central coefficient system of degree $\\leq k$ for $\\{G_g \\}_{g\\geq 1}$.\n\nNow we combine the homological stability results above into a result for the groups $\\Gamma_I$ defined in the previous section.\n\nDenote by $\\lambda_I$ the \\emph{standard representation of $\\Gamma_I$} on $({\\mathbb Z}^{g_1},...,{\\mathbb Z}^{2 g_{n\/2}},...,{\\mathbb Z}^{g_{n-1}})$ induced by the inclusion $\\Gamma_I\\subset \\prod_{i=1}^{\\lfloor n\/2\\rfloor} Gl_{r_i}({\\mathbb Z}),$ where we consider the factor ${\\mathbb Z}^{2 g_{n\/2}}$ to be omitted if $n$ is odd and \n$$r_k = \\begin{cases} 2g_k=2|\\{i\\in I|p_i=k\\}| &\\text{ if }k=n\/2\\\\ g_k=|\\{i\\in I|p_i=k\\}| &\\text{ if }k<n\/2.\\end{cases}$$\n\nGiven a functor to an abelian category $T:\\operatorname{Mod}({\\mathbb Z})^{n-1}\\rightarrow \\mathcal{A}$, we get a $\\Gamma_I$-module $T({\\mathbb Z}^{g_1},...,{\\mathbb Z}^{2g_{n\/2}},...,{\\mathbb Z}^{g_{n-1}})$ with the induced action. We denote this $\\Gamma_I$-module by $T(\\lambda_I).$ For a fixed $p\\in{\\mathbb N}$ such that $0<p\\leq n\/2$, let $I'$ be obtained from $I$ by adding one element $i'$ with $p_{i'}=p$ and $q_{i'}=n-p$.\n\n\\begin{prop}\nLet $T:\\operatorname{Mod}({\\mathbb Z})^{n-1}\\rightarrow \\mathcal{A}$ be a polynomial functor of degree $\\leq k$. Then the stabilization map $$H_i(\\Gamma_I,T(\\lambda_I))\\rightarrow H_i(\\Gamma_{I'},T(\\lambda_{I'}))$$ is an isomorphism for $g_p>2i+k+2$ when $2p\\neq n$ and $g_p>2i+k+4$ if $2p=n$ and an epimorphism for $g_p\\geq 2i+k+2$ respectively $g_p\\geq2i+k+4.$\n\\end{prop}\n\n\\begin{proof} Denote by $\\Gamma_{g_p}$ the summand of $\\Gamma_I\\subset \\prod_{i=1}^{\\lfloor n\/2\\rfloor} Gl_{r_i}({\\mathbb Z})$ that sits in $Gl_{r_p}({\\mathbb Z})$. 
Let $\\Gamma={\\operatorname{Aut}}(H_{\\bar I}),$ where $\\bar I= I\\smallsetminus \\{i\\in I| p_i=p\\}.$ Note that $\\Gamma\\oplus \\Gamma_{g_p}= \\Gamma_I$ and $\\Gamma_{I'}=\\Gamma_{g_p+1}\\oplus \\Gamma,$ where $\\Gamma_{g_p+1}$ is defined analogously to $\\Gamma_{g_p}.$\nConsider the functor $$\\mathfrak{I}_{p,n-p}:\\begin{cases}\n\\operatorname{Mod}({\\mathbb Z})\\rightarrow \\operatorname{Mod}({\\mathbb Z})^{n-1}\\text{ if $2p=n$}\\\\\n\\operatorname{Mod}({\\mathbb Z})\\times\\operatorname{Mod}({\\mathbb Z}) \\rightarrow \\operatorname{Mod}({\\mathbb Z})^{n-1}\t\\text{ otherwise,}\n\\end{cases}$$ defined by sending a module $M$ to $({\\mathbb Z}^{g_1},...,M,...,{\\mathbb Z}^{g_{n-1}})$, where $M$ sits at the $(n\/2)$-th summand, and a pair $(M,N)$ to $({\\mathbb Z}^{g_1},...,M,...,N,...,{\\mathbb Z}^{g_{n-1}})$, where $M$ sits at the $p$-th summand and $N$ at the $(n-p)$-th summand. This functor is clearly additive and hence of degree $\\leq 1.$ This implies that the composition $T\\circ \\mathfrak{I}_{p,n-p}$ is of degree $\\leq k$ and we get a (strongly) central coefficient system of degree $\\leq k$ for $$\\Gamma_{g_p}=\\begin{cases} G_{g_p} & \\text{ if $2 p=n$} \\\\\n\tGl_{g_p}({\\mathbb Z}) & \\text{otherwise.} \n\\end{cases}$$\nThis implies that the stabilization maps\n$$H_i(\\Gamma_{g_p},T\\circ \\mathfrak{I}_{n\/2,n\/2}(\\lambda_{g_p,g_p}))\\rightarrow H_i(\\Gamma_{g_{p}+1},T\\circ \\mathfrak{I}_{n\/2,n\/2}(\\lambda_{g_p+1,g_p+1}))$$ respectively\n$$H_i(\\Gamma_{g_p},T\\circ \\mathfrak{I}_{p,n-p}(\\lambda_{g_p},\\bar \\lambda_{g_p}))\\rightarrow H_i(\\Gamma_{g_{p}+1},T\\circ \\mathfrak{I}_{p,n-p}(\\lambda_{g_p+1},\\bar \\lambda_{g_p+1}))$$ are isomorphisms, respectively epimorphisms, in the ranges given in the statement of the Proposition. 
Observing that $T\\circ \\mathfrak{I}_{n\/2,n\/2}(\\lambda_{g_p,g_p})$ respectively $T\\circ \\mathfrak{I}_{p,n-p}(\\lambda_{g_p},\\bar \\lambda_{g_p})$ is precisely the restriction of the $\\Gamma_I$-representation $T(\\lambda_I)$ to the subgroup $\\Gamma_{g_p}$ and using the Lyndon-Hochschild-Serre spectral sequence $$H_k(\\Gamma,H_l(\\Gamma_{g_p},T(\\lambda_I))) \\Rightarrow H_{k+l}(\\Gamma_I,T(\\lambda_I)),$$ the result follows by comparing spectral sequences.\n\\end{proof}\n\n\\section{On Mapping Class Groups}\\label{on mapping class groups}\nSet $$N=N_I=\\left(\\#_{i\\in I} (S^{p_i}\\times S^{q_i})\\right)\\smallsetminus {\\operatorname{int}} (D^n),$$ where $|I|<\\infty,$ $3\\leq p_i\\leq q_i<2p_i-1$ and $p_i+q_i=n$ for $i\\in I$.\n\nDenote by ${\\operatorname{Aut}}(\\tilde{H}_*(N_I))$ the automorphisms of the graded group $\\tilde{H}_*(N_I).$\nIn this section we study the map $$\\tilde H_*:\\pi_0 \\; {\\operatorname{aut}}_\\partial (N_I)\\rightarrow {\\operatorname{Aut}}(\\tilde{H}_*(N_I)).$$ In particular we are going to determine its image and show that the kernel is finite.\n\nDenote by $in:\\partial N\\hookrightarrow N$ the inclusion of the boundary. We observe that $V_I=\\bigvee_{i\\in I} (S^{p_i}\\vee S^{q_i})\\subset N$ is a deformation retract of $N$ and denote by $$\\alpha_j:S^{p_j} \\hookrightarrow \\bigvee_{i\\in I} (S^{p_i}\\vee S^{q_i}) \\text{ and } \\beta_j:S^{q_j} \\hookrightarrow \\bigvee_{i\\in I} (S^{p_i}\\vee S^{q_i})$$ the canonical inclusions. We consider $in$ as an element of $\\pi_{n-1}(\\bigvee_{i\\in I} (S^{p_i}\\vee S^{q_i}))$ and we observe that it is given by $\\sum_{i\\in I} [\\alpha_i,\\beta_i]$. 
Denote by $$\\begin{aligned}\\langle-,-\\rangle:&H_*(N)\\otimes H_{n-*}(N) \\rightarrow {\\mathbb Z}\\\\ &x\\otimes y \\mapsto (PD^{-1}(x)\\cup PD^{-1}(y))([N,\\partial N])\\end{aligned}$$ the intersection form, where $PD^{-1}:H_*(N)\\rightarrow H^{n-*}(N,\\partial N)$ denotes the Poincar\\'e duality isomorphism and we evaluate on the fundamental class $[N,\\partial N].$ The $\\{ \\alpha_i \\}$ and $\\{ \\beta_i \\}$ define a basis for $\\tilde H_*(N)$ via the Hurewicz homomorphism, which we denote by $\\{ a_i\\}$ respectively $\\{b_i\\}$. Note that $b_i$ is dual to $a_i.$\n\nIn the case $n=2d$ is even we need to recall a further piece of structure from Wall's classification of highly connected even-dimensional manifolds \\cite{MR0145540}. Every element $x\\in H_{d}(N)$ can be represented by an embedded $S^{d}.$ Denote by $\\nu_x\\in \\pi_{d-1}(SO(d))$ the clutching function of the normal bundle of this embedding. It is independent of the choice of embedding, since homotopic embeddings are isotopic in this case. This defines a function $$q:H_{d}(N)\\rightarrow \\pi_{d-1}(SO(d))\\text{, }x\\mapsto [\\nu_x] .$$ Denote by $\\iota_{d}$ the class of the identity in $\\pi_{d}(S^{d})$ and by $$\\partial:\\pi_d(S^d)\\rightarrow \\pi_{d-1}(SO(d))$$ the boundary map in the fibration $SO(d)\\rightarrow SO(d+1)\\rightarrow S^d.$ The function $q$ satisfies:\n$$\\langle x,x \\rangle=HJq(x) \\text{ and } q(x+y)=q(x)+q(y)+\\langle x,y \\rangle\\partial \\iota_d,$$ where $\\pi_{d-1}(SO(d))\\xrightarrow{J} \\pi_{2d-1}(S^d)\\xrightarrow{H}{\\mathbb Z}$ denote the $J$-homomorphism and the Hopf invariant. There is also a purely homotopy theoretic description of $Jq$ in \\cite[Section 8]{MR0148075}. 
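The two identities above already determine $q$ on a hyperbolic summand. A small Python sketch (a toy model, not part of the argument: we take $d$ odd, so that $\\langle x,x\\rangle=0$, assume $q(a)=q(b)=0$ on a hyperbolic basis $a,b$, and identify the subgroup generated by $\\partial\\iota_d$ with ${\\mathbb Z}\/2$) shows how the refinement rule $q(x+y)=q(x)+q(y)+\\langle x,y\\rangle\\partial\\iota_d$ forces the closed form $q(sa+tb)=st$ mod $2$:

```python
def pairing(x, y):
    """Intersection form on a hyperbolic summand with basis a, b (d odd):
    x = (s, t) stands for s*a + t*b; <a,b> = 1, <b,a> = -1, <a,a> = <b,b> = 0."""
    return x[0] * y[1] - x[1] * y[0]

def q(x):
    """Quadratic refinement mod 2, computed only from q(a) = q(b) = 0 and
    the rule q(v + w) = q(v) + q(w) + <v, w>, by peeling off basis vectors."""
    s, t = x
    if s > 0:
        return (q((s - 1, t)) + pairing((s - 1, t), (1, 0))) % 2
    if t > 0:
        return (q((s, t - 1)) + pairing((s, t - 1), (0, 1))) % 2
    return 0

# the refinement rule forces q(s*a + t*b) = s*t mod 2
for s in range(6):
    for t in range(6):
        assert q((s, t)) == (s * t) % 2
```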
\n\nNote that for $a_i,b_j\\in H_{d}(N),$ we have $q(a_i)=q(b_j)=0.$ Hence $\\operatorname{Image}(q)$ is contained in the subgroup $\\langle\\partial\\iota_d\\rangle$ generated by $\\partial\\iota_d.$ The $J$-homomorphism restricts to an isomorphism $$J|\\langle\\partial\\iota_d\\rangle:\\langle\\partial\\iota_d\\rangle\\rightarrow J(\\langle\\partial\\iota_d\\rangle)\\cong \\begin{cases} \\langle [\\iota_d,\\iota_d]\\rangle \\xrightarrow[H]{\\cong} 2{\\mathbb Z} &\\text{if $d$ is even}\\\\\n0 &\\text{if $d=1,3,7$ }\\\\\n\\langle [\\iota_d,\\iota_d]\\rangle\\cong {\\mathbb Z}\/2{\\mathbb Z} &\\text{if $d$ is odd and not $1,3$ or $7$,}\n\\end{cases}$$ where the second isomorphism is induced by the Hopf invariant.\nLet $${\\operatorname{Aut}}(\\tilde H_*(N),\\langle-,-\\rangle, Jq)\\text{ and }{\\operatorname{Aut}}(\\tilde H_*(N),\\langle-,-\\rangle, q)$$ be the automorphisms of the reduced homology respecting the intersection form and the function $Jq$ ($q$ respectively). Note that $$\\langle x, y \\rangle=\\mu(x,y)+(-1)^{|x||y|}\\mu(y,x),$$ where $\\mu(-,-)$ is determined by $$\\mu(a_i,b_j)=\\delta_{i,j} \\text{ and } \\mu(b_i,a_j)=\\mu(a_i,a_j)=\\mu(b_i,b_j)= 0.$$ Now let $$\\Lambda=\\begin{cases}0 & \\text{ if $n=2d$ and $d$ is even}\\\\\n{\\mathbb Z} & \\text{ if $n=2d$ and $d$ is $3$ or $7$}\\\\\n2{\\mathbb Z} &\\text{ if $n=2d$ and $d$ is odd and not $3$ or $7$.} \n\\end{cases}$$ \n\nMoreover $Jq=q_\\mu,$ where $q_\\mu$ is the $\\Lambda$-quadratic form associated to $\\mu$, once we identify $\\langle [\\iota_d,\\iota_d]\\rangle$ with ${\\mathbb Z}$ and ${\\mathbb Z}\/2{\\mathbb Z}$ respectively. 
It suffices to check this for the elements $a_i+b_i$ and for these $Jq(a_i+b_i)=[\\iota_d,\\iota_d]$ and $q_\\mu(a_i+b_i)=1.$ By the discussion above we see that \n$${\\operatorname{Aut}}(\\tilde H_*(N),\\langle-,-\\rangle, q)\\cong {\\operatorname{Aut}}(\\tilde H_*(N),\\langle-,-\\rangle, Jq)\\cong\\Gamma_I= {\\operatorname{Aut}}(H_I) \\text{ in } Q_*^n({\\mathbb Z},\\Lambda).$$\n\nFor a representative $f$ of $[f]\\in \\pi_0 ({\\operatorname{aut}}_\\partial (N))$ it is clear that $\\tilde H_*(f)\\in \\Gamma_I$. We are now going to show that all elements of $\\Gamma_I$ can be realized by a homotopy self-equivalence, fixing the boundary pointwise.\n\n\n\\begin{prop}\\label{hmcg}\nThe group homomorphism\n\\begin{equation*}\n\\pi_0 ({\\operatorname{aut}}_\\partial (N))\\xrightarrow{\\tilde H_*} {\\operatorname{Aut}} (\\tilde H_* (N),\\langle-,-\\rangle, Jq )\n\\end{equation*}\nis surjective and has finite kernel.\n\\end{prop}\n\\iffalse\nBefore we prove the Proposition, we need the following Lemma. \n\n\\begin{lem}\\label{splitting}\nLet $A,B$ be pointed topological spaces, such that $\\tilde{H}_i(A)$ and $\\tilde{H}_i(B)$ are not both nontrivial for all $i.$ Then the image of $$\\tilde{H}_*:{\\operatorname{aut}}_* (A\\vee B)\\rightarrow {\\operatorname{Aut}}(\\tilde{H}_*(A\\vee B))$$ splits as $$\\operatorname{Image}(\\tilde{H}_*:{\\operatorname{aut}}_* (A)\\rightarrow {\\operatorname{Aut}}(\\tilde{H}_*(A)))\\oplus \\operatorname{Image}(\\tilde{H}_*:{\\operatorname{aut}}_* (B)\\rightarrow {\\operatorname{Aut}}(\\tilde{H}_*(B)))$$\n\\end{lem}\n\n\\begin{proof}\nWe observe that ${\\operatorname{Aut}}(\\tilde{H}_*(A\\vee B))\\cong {\\operatorname{Aut}}(\\tilde{H}_*(A))\\oplus {\\operatorname{Aut}}(\\tilde{H}_*(B))$ where the isomorphism is induced by the canonical isomorphism $\\tilde{H}_*(A\\vee B)\\cong \\tilde{H}_*(A) \\oplus \\tilde{H}_*(B).$ The lemma now follows.\n\\end{proof}\n\\fi\n\\begin{proof}Compare \\cite[Proof of Theorem 2.10]{berg12}. 
The cofibration $in:\\partial N\\hookrightarrow N$ induces a fibration \n$${\\operatorname{map}}_\\partial (N,N)\\rightarrow{\\operatorname{map}}_*(N,N) \\rightarrow {\\operatorname{map}}_*(\\partial N, N),$$ where ${\\operatorname{map}}_\\partial (N,N)$ is the fiber over $in$. Let ${\\operatorname{map}}_\\partial (N,N)$ and ${\\operatorname{map}}_*(N,N)$ be based at the identity. Restricting the total space to invertible elements, we also get the following fibration: $${\\operatorname{aut}}_\\partial (N)\\rightarrow{\\operatorname{aut}}_*(N) \\rightarrow {\\operatorname{map}}_*(\\partial N, N).$$\n\nWe are going to analyze the long exact homotopy sequences $$\\xymatrix{\n...\\ar[r]& \\pi_1 (map_*(\\partial N, N),in)\\ar[r]\\ar@{=}[d]& \\pi_0( {\\operatorname{aut}}_\\partial(N) )\\ar[r]\\ar@{^{(}->}[d]& \\pi_0 ({\\operatorname{aut}}_*(N))\\ar[r]\\ar@{^{(}->}[d]& [\\partial N,N]_*\\ar@{=}[d]\\\\\n...\\ar[r]& \\pi_1 (map_*(\\partial N, N),in)\\ar[r]& [N,N]_\\partial \\ar[r]& [N,N]_*\\ar[r]& [\\partial N,N]_* .}$$ We consider the monoid homomorphism $$\\tilde H_*:[N,N]_*\\rightarrow \\operatorname{End}(\\tilde H_*(N))$$ and show that it is onto with finite kernel. Using the relative Hurewicz isomorphism, it is easy to see that $V_I\\hookrightarrow \\prod_{i\\in I} (S^{p_i}\\times S^{q_i})$ is $(2\\operatorname{min}_{i\\in I}\\{ p_i \\}-1)$-connected and hence more than $\\operatorname{max}_{i\\in I}\\{ q_i \\}$-connected. 
Thus we get an isomorphism of sets \\begin{align*}&[N,N]_* \\cong [V_I,V_I]_* \\cong [V_I,\\prod_{i\\in I} (S^{p_i}\\times S^{q_i})]_* \\\\\n&\\cong \\prod_{(i,j)\\in I\\times I} [S^{p_i},S^{p_j}]_* \\times \\prod_{(i,j)\\in I\\times I} [S^{q_i},S^{q_j}]_* \\times \\prod_{(i,j)\\in I\\times I} [S^{q_i},S^{p_j}]_* \\times \\prod_{(i,j)\\in I\\times I} [S^{p_i},S^{q_j}]_* .\n\\end{align*} We write $I=\\bigcup_l I_l,$ where $I_l=\\{i|p_i=l\\}.$\nThe only non-finite factors of the product above are \\begin{equation}\\label{nonfinitefactors}\n\t\\prod_{l}\\prod_{(i,j)\\in I_l\\times I_l} \\left([S^{p_i},S^{p_j}]_*\\times[S^{q_i},S^{q_j}]_*\\right).\n\\end{equation} We identify $\\operatorname{End}(\\tilde H_*(N))\\cong \\prod \\operatorname{Mat}_{r_l}({\\mathbb Z}) ,$ where $r_l=\\operatorname{rank}(H_l(N)),$ using the basis $\\{a_i\\}\\cup\\{b_i\\}$. Note that for $l=n\/2$ a $b_i$ becomes a $(r_l\/2+i)$-th basis element. Denote by $\\alpha^l_1,...,\\alpha^l_{r_l}$ and $\\beta^l_1,...,\\beta^l_{r_l}$ the inclusions $S^l\\hookrightarrow V_I$ and $S^{n-l}\\hookrightarrow V_I$ respectively. 
There is a multiplicative section of $\\tilde H_*$ \\begin{equation}\\label{section} \\prod \\operatorname{Mat}_{r_l}({\\mathbb Z})\\rightarrow [N,N]_*,\\;\n\t(M^l)=(m^l_{i,j})\\mapsto f_{(M^l)}= \\bigvee_{l=1}^{\\lfloor n\/2\\rfloor} f_{M^l},\n\\end{equation} where $f_{M^l}:\\bigvee_{i\\in I_l} S^{p_i}\\vee S^{q_i}\\rightarrow V_I$ is given by $$f_{M^l}= \\begin{cases}\n\\bigvee_{i=1}^{r_l} (\\sum^{r_l}_{j=1} m^l_{i,j}\\alpha^l_j \\vee \\sum^{r_l}_{j=1} m^{n-l}_{i,j}\\beta^l_j ) &\\text{ if } l\\neq n\/2\\\\ \\\\\n\\bigvee_{i=1}^{r_l\/2} (\\sum^{r_l\/2}_{j=1} m^l_{i,j}\\alpha^l_j+\\sum^{r_l}_{j=r_l\/2+1} m^l_{i,j}\\beta^l_{(j-r_l\/2)})\\\\ \\vee \\bigvee_{i=r_l\/2+1}^{r_l} (\\sum^{r_l\/2}_{j=1} m^l_{i,j}\\alpha^l_j+\\sum^{r_l}_{j=r_l\/2+1} m^l_{i,j}\\beta^l_{(j-r_l\/2)}) &\\text{ if } l=n\/2.\n\\end{cases}$$ We observe that the image of this section is precisely the sub-monoid of $[N,N]_*$ corresponding to the non-finite factors (\\ref{nonfinitefactors}). Hence we get that $\\tilde H_*:[N,N]_*\\rightarrow \\operatorname{End}(\\tilde H_*(N))$ is surjective with finite kernel.\nRestricting to the submonoids of invertible elements and using the section (\\ref{section}), we get that $\\pi_0({\\operatorname{aut}}_*(N))\\rightarrow \\operatorname{Aut}(\\tilde H_*(N))$ is surjective with finite kernel. The image of $\\pi_0({\\operatorname{aut}}_\\partial(N))\\rightarrow \\pi_0({\\operatorname{aut}}_*(N))$ consists of the elements $[f]\\in \\pi_0({\\operatorname{aut}}_*(N)),$ such that $f\\circ in\\simeq in$ (we assume all homotopy equivalences in this proof to be pointed). 
Since (\\ref{section}) restricts to a section $\\operatorname{Aut}(\\tilde H_*(N))\\rightarrow \\pi_0({\\operatorname{aut}}_*(N))$ we get that the image of $$\\tilde H_*:\\pi_0({\\operatorname{aut}}_\\partial(N))\\rightarrow \\operatorname{Aut}(\\tilde H_*(N))$$ is given by the $(M^l)$ such that $f_{(M^l)}\\circ in\\simeq in.$ Using the Hilton-Milnor Theorem we identify \\begin{equation}\\label{hiltonmilnoridentification}\n \t\\pi_{n-1}(N)\\cong \\pi \\oplus \\bigoplus_{l} \\pi_{n-1}(\\bigvee_{i\\in I_l}(S^{p_i}\\vee S^{q_i})),\n\\end{equation}where $\\pi$ is some subgroup of $\\pi_{n-1}(N).$ We observe that $$ f_{(M^l)}\\circ in \\simeq \\sum_{i\\in I} [f_{(M^l)}\\circ\\alpha_i,f_{(M^{l})}\\circ\\beta_i]\\simeq \\sum_{l=1}^{\\lfloor n\/2\\rfloor}\\sum_{i\\in I_l} [f_{M^l}\\circ\\alpha^l_i,f_{M^l}\\circ\\beta^l_i],$$ i.e. that the action of $f_{(M^l)}$ respects the summands of the identification (\\ref{hiltonmilnoridentification}). Thus it suffices to check that $f_{M^l}\\circ \\sum_{i\\in I_l} [\\alpha_i^l,\\beta_i^l]\\simeq \\sum_{i\\in I_l}[\\alpha_i^l,\\beta_i^l]$ for all $l.$ We use that precomposition with a suspension distributes over sums \\cite[p. 126]{MR0041435}, i.e. $(x+y)\\circ \\Sigma z\\simeq x\\circ \\Sigma z+ y\\circ \\Sigma z $. 
For $l\\neq n\/2$ we calculate \\begin{align*} f_{M^l}\\circ \\sum_{i=1}^{r_l} [\\alpha_i^l,\\beta_i^l] \\simeq & \\sum_{i=1}^{r_l} [f_{M^l}\\circ\\alpha^l_i,f_{M^l}\\circ\\beta^l_i] \\simeq \\sum_{i,j,k} [m^l_{i,j}\\alpha^l_j,m^{n-l}_{i,k}\\beta^l_k] \\\\\n \\simeq & \\sum_{i,j,k} m^l_{i,j}m^{n-l}_{i,k} [\\alpha^l_j,\\beta^l_k] \\simeq \\sum_{j,k} ((M^l)^TM^{n-l})_{j,k} [\\alpha^l_j,\\beta^l_k].\n \\end{align*} This expression is homotopic to $\\sum_{i=1}^{r_l} [\\alpha_i^l,\\beta_i^l] $ if \\begin{equation}\\label{mcgconsition1}\n(M^l)^TM^{n-l}={\\operatorname{id}}_{Mat_{r_l}{({\\mathbb Z})}}.\t\n\\end{equation} For $l= n\/2$ we write $$M^l=\\begin{pmatrix}\nA^l & B^l \\\\\nC^l & D^l \n\\end{pmatrix}.$$ We calculate:\\begin{align*} &f_{M^l} \\circ \\sum_{i=1}^{r_l\/2} [\\alpha_i^l,\\beta_i^l] \\simeq \\sum_{i=1}^{r_l\/2} [f_{M^l}\\circ\\alpha^l_i,f_{M^l}\\circ\\beta^l_i] \\\\ \n \\simeq & \\sum_{i=1}^{r_l\/2}\\left[\\sum^{r_l\/2}_{j=1} (a^l_{i,j}\\alpha^l_j + b^l_{i,j}\\beta^l_{j}) ,\\sum^{r_l\/2}_{k=1} (c^l_{i,k}\\alpha^l_k+ d^l_{i,k}\\beta^l_{k})\\right]\\\\\n \\simeq& \\sum_{i=1}^{r_l\/2}\\sum^{r_l\/2}_{j=1}\\sum^{r_l\/2}_{k=1} a^l_{i,j} d^l_{i,k} [\\alpha^l_j,\\beta^l_{k}] +\\sum_{i=1}^{r_l\/2}\\sum^{r_l\/2}_{j=1}\\sum^{r_l\/2}_{k=1} b^l_{i,j}c^l_{i,k} [\\beta^l_{j},\\alpha^l_k] \\\\\n +& \\sum_{i=1}^{r_l\/2}\\sum^{r_l\/2}_{j=1}\\sum^{r_l\/2}_{k=1} a^l_{i,j}c^l_{i,k} [\\alpha^l_j ,\\alpha^l_k] +\\sum_{i=1}^{r_l\/2}\\sum^{r_l\/2}_{j=1}\\sum^{r_l\/2}_{k=1} b^l_{i,j}d^l_{i,k}[\\beta^l_{j},\\beta^l_{k}]\\\\\n \\simeq& \\sum^{r_l\/2}_{j=1}\\sum^{r_l\/2}_{k=1} ( (A^l)^TD^l +(-1)^{n\/2}((C^l)^TB^l ))_{j,k} [\\alpha^l_j,\\beta^l_{k}] \\\\\n +& \\sum^{r_l\/2}_{j=1}\\sum^{r_l\/2}_{k=1} ((A^l)^TC^l)_{j,k} [\\alpha^l_j ,\\alpha^l_k] +\\sum^{r_l\/2}_{j=1}\\sum^{r_l\/2}_{k=1} ((B^l)^T D^l)_{j,k}[\\beta^l_{j},\\beta^l_{k}]\n \\end{align*} This expression is homotopic to $\\sum_{i=1}^{r_l\/2} [\\alpha_i^l,\\beta_i^l] $, if\n$$\\begin{aligned}\n\t& (A^l)^TD^l 
+(-1)^{n\/2}(C^l)^TB^l=1 \\\\\n\t& (A^l)^TC^l+(-1)^{n\/2} (C^l)^TA^l=0\\\\\n\t& (B^l)^TD^l +(-1)^{n\/2}(D^l)^T B^l =0\\\\\n\t& (A^l)^TC^l \\text{ and } (B^l)^TD^l \\text{ have diagonal entries in $\\Lambda,$}\n\\end{aligned}$$\nwhere $$\\Lambda=\\begin{cases} 2{\\mathbb Z} &\\text{ if $n=2d$ and $d$ is odd and not $3$ or $7$} \\\\\n{\\mathbb Z} & \\text{ if $n=2d$ and $d$ is $3$ or $7$}\\\\\n0 &\\text{ if $n=2d$ and $d$ is even.}\\end{cases}$$ The diagonal entries of $(A^l)^TC^l$ and $(B^l)^TD^l$ have to lie in $\\Lambda$ in order to kill the elements $[\\alpha^l_i ,\\alpha^l_i]$ and $[\\beta^l_i ,\\beta^l_i].$ These are exactly the conditions to be an automorphism of $H({\\mathbb Z}^{g_{n\/2}})$ in $Q^{(-1)^{n\/2}}({\\mathbb Z},\\Lambda).$\nCombining this with the condition in (\\ref{mcgconsition1}) we see that the image of $\\tilde H_*$ in ${\\operatorname{Aut}}(\\tilde H_*(N))$ is given by $$\\Gamma_I \\subset \\prod^{\\lfloor n\/2\\rfloor}_{k=1} Gl_{r_k}({\\mathbb Z}).$$ Thus we have proved that $$\\tilde H_*:\\pi_0({\\operatorname{aut}}_\\partial (N)) \\rightarrow {\\operatorname{Aut}}(\\tilde H_*(N),\\langle-,-\\rangle,Jq)$$ is surjective.\nTo show that the kernel is finite it suffices to check that $\\pi_0({\\operatorname{aut}}_\\partial (N))\\rightarrow \\pi_0({\\operatorname{aut}}_* (N))$ has finite kernel. 
This follows from the fact that $$\\pi_1(in^*):\\pi_1({\\operatorname{aut}}_*(N),\\operatorname{id}_N)\\otimes {\\mathbb Q}\\rightarrow \\pi_1({\\operatorname{map}}_*(\\partial N, N),in)\\otimes {\\mathbb Q}$$ is surjective, as we will see in Remark \\ref{ratsur}.\n\n\\end{proof}\n\nIn fact it suffices to know $\\Gamma_I$ for our purposes, since the action of elements of the kernel of $\\tilde H_*$ by conjugation is trivial up to homotopy.\n\n\\begin{lem}\\label{gamma I action on universal cover}\nLet $f$ represent an element of the kernel of $$\\tilde H_*:\\pi_0( {\\operatorname{aut}}_\\partial(N_I))\\to \\Gamma_I.$$ Then $f^{-1}\\circ g\\circ f\\simeq g$ for all $g\\in {\\operatorname{aut}}_\\partial(N_I).$\n\\end{lem}\n\n\\begin{proof}\nNote that if $[f]$ is in the kernel of $\\tilde H_*$ then it is given by an element $\\partial \\alpha,$ where $\\alpha\\in \\pi_1({\\operatorname{map}}_*(\\partial N_I,N_I),in)\\cong \\pi_n(N_I).$ We represent $\\alpha$ as a map $$\\alpha:\\partial N_I\\times I\\rightarrow N_I, \\text{ such that } \\alpha(-,0)=\\alpha(-,1)=in.$$ Choosing a collar neighborhood of $\\partial N_I$ allows us to identify $$N_I\\xrightarrow{\\cong}N_I\\cup_{{\\operatorname{id}}_{\\partial N_I}}\\partial N_I\\times I.$$ Now we represent $f$ by the composite $$N_I\\xrightarrow{\\cong}N_I\\cup_{{\\operatorname{id}}_{\\partial N_I}}\\partial N_I\\times I\\xrightarrow{{\\operatorname{id}}_{N_I}\\cup \\alpha} N_I.$$ We represent $f^{-1}$ similarly using $-\\alpha$. 
We now represent $f^{-1}\\circ g\\circ f$ by the composition $$\\begin{aligned}\nN_I\\xrightarrow{\\cong}N_I\\cup_{{\\operatorname{id}}_{\\partial N_I}}\\partial N_I\\times I\\xrightarrow{{\\operatorname{id}}_{N_I}\\cup \\alpha} N_I \\xrightarrow{\\cong}N_I\\cup_{{\\operatorname{id}}_{\\partial N_I}}\\partial N_I\\times I\\\\ \\xrightarrow{g\\cup {\\operatorname{id}}_{\\partial N_I\\times I}} N_I\\cup_{{\\operatorname{id}}_{\\partial N_I}}\\partial N_I\\times I\\xrightarrow{{\\operatorname{id}}_{N_I}\\cup -\\alpha} N_I,\n\\end{aligned}$$ and see that it is homotopic to $$N_I\\xrightarrow{\\cong}N_I\\cup_{{\\operatorname{id}}_{\\partial N_I}}\\partial N_I\\times I\\cup_{{\\operatorname{id}}_{\\partial N_I}}\\partial N_I\\times I \\xrightarrow{g\\cup\\alpha \\cup -\\alpha}N_I.$$ This is homotopic (rel. $\\partial N_I$) to $g,$ since $$\\partial N_I\\times I\\cup_{{\\operatorname{id}}_{\\partial N_I}}\\partial N_I\\times I \\xrightarrow{\\alpha \\cup -\\alpha}N_I$$ is homotopic to the inclusion of $\\partial N_I\\times I\\cup_{{\\operatorname{id}}_{\\partial N_I}}\\partial N_I\\times I$ as a collar.\n\\end{proof}\n\nDenote by ${\\operatorname{Diff}}_\\partial(N)$ the space of self-diffeomorphisms of $N$ fixing a collar neighborhood of the boundary pointwise, equipped with the Whitney $C^\\infty$-topology. Let $J:{\\operatorname{Diff}}_\\partial(N) \\rightarrow{\\operatorname{aut}}_\\partial(N)$ be the canonical inclusion. 
To show homological stability for block diffeomorphisms the following fact about the mapping class group suffices.\n\n\\begin{prop}\\label{mcg} \n\\begin{itemize} \\item[]\n\\item[$\\operatorname{(1)}$] The map $\\tilde H_*: \\pi_0 ({\\operatorname{Diff}}_\\partial(N))\\rightarrow {\\operatorname{Aut}}({\\tilde H_*(N),\\langle-,-\\rangle,q})$ is surjective.\n\\item[$\\operatorname{(2)}$] The image of $\\pi_0(J):\\pi_0 ({\\operatorname{Diff}}_\\partial(N))\\rightarrow \\pi_0({\\operatorname{aut}}_\\partial(N))$ has finite index.\n\\end{itemize} \n\\end{prop}\n\n\\begin{proof}\nThe first part follows from \\cite{MR561244} and \\cite[Lemma 17]{MR0156354}. Kreck shows that all elements of ${\\operatorname{Aut}}({\\tilde H_{n\/2}}(N),\\langle-,-\\rangle,q)$ can be realized as self-diffeomorphisms of $$(\\#_{g_{n\/2}}(S^{n\/2}\\times S^{n\/2}))\\smallsetminus {\\operatorname{int}}(D^n)$$ fixing the boundary pointwise. Wall shows that for the manifolds $\\natural_g (D^{q+1}\\times S^p)$, where $3\\leq p\\leq q$ and $\\natural_g$ denotes the $g$-fold boundary connected sum, all automorphisms of the homology are realized by diffeomorphisms. Hence the same follows for the manifolds $\\#_{g_i} (S^{p_i}\\times S^{q_i}).$ Since we can assume that a diffeomorphism fixes a disk up to isotopy, we get it in particular for $(\\#_{g_i} (S^{p_i}\\times S^{q_i}))\\smallsetminus{\\operatorname{int}}(D^n).$ Extending the diffeomorphisms above by the identity on the complement of these submanifolds, the claim follows. 
The second part follows from the following commutative diagram\n\\begin{equation*}\n\\xymatrix{\n0 \\ar[r]& \\pi_0 (\\operatorname{S}{\\operatorname{Diff}}_\\partial (N) )\\ar[r]\\ar[d] & \\pi_0 ({\\operatorname{Diff}}_\\partial (N)) \\ar[r]^<<<<<{\\tilde H_*} \\ar[d]_{\\pi_0(J)}& {\\operatorname{Aut}}({\\tilde H_*(N),\\langle-,-\\rangle, q}) \\ar[r] \\ar[d]^\\cong & 0 \\\\\n0 \\ar[r]& \\pi_0 (\\operatorname{S}{\\operatorname{aut}}_\\partial (N)) \\ar[r] & \\pi_0 ({\\operatorname{aut}}_\\partial (N)) \\ar[r]^<<<<<{\\tilde H_*}& {\\operatorname{Aut}}({\\tilde H_*(N),\\langle-,-\\rangle , Jq}) \\ar[r]& 0, }\n\\end{equation*}\nwhere $\\pi_0 (\\operatorname{S}{\\operatorname{Diff}}_\\partial (N))$ and $\\pi_0 (\\operatorname{S}{\\operatorname{aut}}_\\partial (N))$ denote the kernels of the maps $\\tilde H_*$ and the fact that $\\pi_0 (\\operatorname{S}{\\operatorname{aut}}_\\partial (N))$ is finite by Proposition \\ref{hmcg}.\n\n\\end{proof}\n\n\\begin{rem}\\label{mcglitremark}\nThere is a lot of literature on the groups of components of mapping spaces of closed manifolds in different categories. Highly connected even-dimensional manifolds are for example studied in \\cite{MR561244} and \\cite{MR0245031}. Products of spheres are studied in \\cite{MR0248848,MR0253369,MR0250323}. Homotopy self-equivalences of manifolds and in particular of connected sums of products of spheres are treated in \\cite{MR1340168}. 
\n\n\\end{rem}\n\nFor later use we need the following Lemma.\n\n\\begin{lem}\\label{H1 triv pi0} The groups \n$\\pi_{0}{\\operatorname{aut}}_\\partial(N_I)$ and $\\operatorname{Im}(\\pi_0(J))$ are rationally perfect.\n\\end{lem}\n\n\\begin{proof} This follows from Lemma \\ref{gammaIh1} and the fact that the groups are finite extensions of $\\Gamma_I.$\n\\end{proof}\n\\section{On the rational homotopy type of homotopy automorphisms}\\label{On the rational homotopy type of homotopy automorphisms}\n\nIn the last section we determined the group $$\\pi_I=\\pi_1(B{\\operatorname{aut}}_\\partial (N_I),{\\operatorname{id}}_{N_I})\\cong \\pi_0({\\operatorname{aut}}_\\partial (N_I))$$ up to finite extensions. It acts on the simply-connected covering $X_I=B{\\operatorname{aut}}_\\partial (N_I){\\langle 1 \\rangle}$ by deck transformations. This section has two goals:\n\\begin{enumerate}\n\\item Describe the $\\pi_I$-modules $H_*(X_I;{\\mathbb Q})$ algebraically (Proposition \\ref{iso Hce and H as piI-modules}).\n\\item Make sure the algebraic model is appropriate for showing homological stability using the results in Section \\ref{van der kallens and charneys homological stab results} (Proposition \\ref{derivation as schurfunctor}).\n\\end{enumerate}\n\nAll the results in this section are either contained in \\cite{berg13} or straightforward generalizations. \n\nWe assume some familiarity with Quillen's approach to rational homotopy theory \\cite{MR0258031}, i.e. the functor $$\\lambda:\\operatorname{Top}_1\\rightarrow \\operatorname{dgL}_0$$ from the category of simply connected based topological spaces to the category of reduced differential graded (dg) Lie algebras. It induces an equivalence of homotopy categories, where the weak equivalences in $\\operatorname{Top}_1$ are the maps inducing isomorphisms on rational homotopy groups and in $\\operatorname{dgL}_0$ the quasi-isomorphisms. 
The homology of $\\lambda(X)$ allows us to recover the rational homotopy groups of $X.$ More precisely, there is an isomorphism of graded Lie algebras $$H_*(\\lambda(X))\\cong \\pi_*(\\Omega X)\\otimes {\\mathbb Q},$$ where the Lie bracket on the right hand side is given by the Samelson product. The rational homology of $X$ is given by the Chevalley-Eilenberg homology of $\\lambda(X),$ which we will explain later.\n\nFor a given simply connected space $X$ the value $\\lambda(X)$ is in general very complicated and one considers dg Lie models instead. A \\emph{dg Lie model} for a simply connected topological space $X$ is a free differential graded Lie algebra $({\\mathbb L}(V),\\partial)$ together with a quasi-isomorphism $$({\\mathbb L}(V),\\partial) \\xrightarrow{\\simeq} \\lambda(X).$$ When $\\lambda(X)$ is quasi-isomorphic to its homology $\\pi_*(\\Omega X)\\otimes {\\mathbb Q}$, regarded as a dg Lie algebra with trivial differential, the space $X$ is called \\emph{coformal}.\n\n\\subsection{On a dg Lie model for the simply-connected covering of the homotopy automorphisms}\n\nSince $N_I\\simeq \\bigvee_{i\\in I} (S^{p_i}\\vee S^{q_i})$, the free Lie algebra ${\\mathbb L}(s^{-1}\\tilde H_*(N_I,{\\mathbb Q}))$ with trivial differential is a dg Lie model for $N_I,$ where $s^{-1}$ denotes the desuspension.\nWe write $${\\mathbb L}_I={\\mathbb L}(s^{-1}\\tilde H_*(N_I,{\\mathbb Q})).$$ Recall that we denoted the homology classes represented by the inclusions $$\\alpha_i:S^{p_i}\\hookrightarrow N_I,\\; \\beta_i: S^{q_i}\\hookrightarrow N_I,$$ by $a_i$ respectively $b_i$. Set $$\\omega_I =\\sum_{i\\in I} -(-1)^{|a_i|}[ s^{-1}a_i, s^{-1}b_i ].$$ We model the inclusion of the boundary $in:\\partial N_I\\rightarrow N_I$ by $${\\mathbb L}(\\gamma)\\rightarrow {\\mathbb L}_I\\text{, }\\gamma\\mapsto \\omega_I,$$ where ${\\mathbb L}(\\gamma)$ is generated by a single generator of degree $n-2.$ \n\nLet $f:L\\rightarrow K$ be a map of differential graded Lie algebras. 
We say that a degree $n$ linear map $\\theta\\in {\\operatorname{Hom}}_n(L,K)$ is an \\emph{$f$-derivation} if $$\\theta [x,y]=[\\theta(x),f(y)]+(-1)^{n|x|}[f(x),\\theta(y)]\\text{, for all }x,y\\in L.$$ The $f$-derivations form a differential graded vector space ${\\operatorname{Der}}_f(L,K),$ with differential given by $$D(\\theta)=d_K\\circ \\theta - (-1)^{|\\theta |}\\theta\\circ d_L.$$ The \\emph{derivations} of a differential graded Lie algebra $L$ are the special case $${\\operatorname{Der}}(L)= {\\operatorname{Der}}_{\\text{id}_L}(L,L).$$ We define a bracket for $\\theta,\\eta\\in{\\operatorname{Der}} (L)$, by $$[\\theta,\\eta]=\\theta \\circ \\eta -(-1)^{|\\theta||\\eta|}\\eta\\circ \\theta,$$ which makes ${\\operatorname{Der}} (L)$ into a differential graded Lie algebra.\n\nDenote by ${\\operatorname{Der}}_\\omega({\\mathbb L}_I)$ the derivation Lie algebra annihilating $\\omega_I,$ i.e. the kernel of the evaluation map at $\\omega_I$, $ev_{\\omega_I}:{\\operatorname{Der}}({\\mathbb L}_I)\\to {\\mathbb L}_I.$ The \\emph{positive truncation} $L^+$ of a dg Lie algebra $L$ is given by $$ L_i^+ = \\begin{cases} L_i& \\text{for } i\\geq 2 \\\\\n \\text{ker}(d_L:L_1\\rightarrow L_0)& \\text{for } i=1 \\\\\n 0 & \\text{for } i\\leq 0 \\\\\n \\end{cases}$$ with its obvious differential and Lie bracket.\n\n\\begin{prop}[{Special case of \\cite[Corollary 3.11]{berg13}}]\\label{der isomorphism of gLie}\nThe simply-connected covering $B{\\operatorname{aut}}_\\partial (N_I){\\langle 1 \\rangle}$ is coformal and there is an isomorphism of graded Lie algebras $$\\pi_*(\\Omega B{\\operatorname{aut}}_\\partial (N_I){\\langle 1 \\rangle})\\otimes {\\mathbb Q}\\cong{\\operatorname{Der}}^+_\\omega({\\mathbb L}_I).$$\n\\end{prop}\n\n\\begin{proof} The manifolds $N_I$ are formal (in the sense of Sullivan's rational homotopy theory) and the multiplication on their reduced rational cohomology is trivial, since they are homotopy equivalent to wedges of spheres. 
Thus we can apply \\cite[Corollary 3.11]{berg13}.\n\\end{proof}\n\n\n\n\n\\subsection{Derivations and the cyclic Lie operad}\n\nIn this section we collect results from Sections 6.1 and 6.2 of \\cite{berg13}. Let $V$ be a graded finite dimensional rational vector space. By an inner product of degree $D$ we mean a degree $-D$ morphism $$\\langle-,-\\rangle:V\\otimes V\\rightarrow {\\mathbb Q}$$ that is non-degenerate in the sense that the adjoint $$V\\to {\\operatorname{Hom}}(V,{\\mathbb Q}),\\; v\\mapsto \\langle v,-\\rangle,$$ is an isomorphism of graded vector spaces. The inner product is graded anti-symmetric if $$\\langle x,y\\rangle = -(-1)^{|x||y|} \\langle y,x\\rangle \\text{ for all } x,y\\in V.$$ \nDenote by $Sp^D$ the category whose objects are graded finite dimensional rational vector spaces $V$ together with a graded anti-symmetric inner product $\\langle-,-\\rangle_V$. The morphisms in $Sp^D$ are linear maps that respect the inner products, i.e. linear maps $f:V\\rightarrow W$ such that $$\\langle x,y \\rangle_V=\\langle f(x),f(y) \\rangle_W\\text{, for all }x,y\\in V.$$ For a morphism $f:V\\rightarrow W,$ there is a unique linear map $f^!:W\\rightarrow V$ such that $\\langle f^!(x),y \\rangle_V=\\langle x,f(y) \\rangle_W$ for all $x\\in W$ and $y\\in V$. Since $f^!f={\\operatorname{id}}_V$ we get that morphisms in $Sp^D$ are injective.\n\n\n\nGiven an object $V$ of $Sp^D$, consider the inner product $\\langle -,- \\rangle_V$ as an element of ${\\operatorname{Hom}}(V^{\\otimes 2},{\\mathbb Q}).$ We identify $V^{\\otimes 2} \\cong {\\operatorname{Hom}}(V^{\\otimes 2},{\\mathbb Q})$ using the inner product on $V^{\\otimes 2}$ defined by $$\\langle x\\otimes y, x'\\otimes y'\\rangle = (-1)^{|x'||y|}\\langle x,x'\\rangle\\langle y,y'\\rangle.$$ Thus $\\langle -,- \\rangle_V$ gives rise to an element $\\omega_V\\in V^{\\otimes 2}.$ The anti-symmetry of $\\langle -,- \\rangle $ implies that we can consider $\\omega_V$ as an element $\\omega_V \\in {\\mathbb L}(V).$ If we choose a graded basis $\\iota_1,...,\\iota_r$ with dual basis $\\iota_1^\\#,...,\\iota_r^\\#$ for $V$, then $$\\omega_V=\\pm\\frac{1}{2}\\sum_i
[\\iota_i^\\#,\\iota_i].$$\n\\begin{example} We consider $s^{-1}H_I\\otimes {\\mathbb Q}$ as an object of $Sp^{(n-2)}.$ Then $\\omega_{s^{-1}H_I\\otimes {\\mathbb Q}}$ is equal to $\\omega_I$ up to sign.\n\\end{example}\n\nAn $Sp^D$-module in a category $\\mathcal{V}$ is a functor $Sp^D\\rightarrow \\mathcal{V}.$ Our goal in this section is to show that we can describe ${\\operatorname{Der}}_\\omega ({\\mathbb L}(-))$ as an $Sp^D$-module in a category of graded Lie algebras $gLie$. Moreover we are going to see that this functor is in fact naturally equivalent to a Schur functor.\n\nIt is clear that ${\\mathbb L}(-):Sp^D\\to gLie$ defines a functor. Moreover, using the adjoint $f^!$ of a morphism $f:V\\to W$ in $Sp^D,$ we can consider ${\\operatorname{Der}}({\\mathbb L}(-)):Sp^D\\to gLie$ as a functor, where for $\\theta \\in {\\operatorname{Der}} ({\\mathbb L}(V))$ the image ${\\operatorname{Der}}({\\mathbb L}(f))(\\theta)$ is the unique derivation determined by $$ {\\operatorname{Der}}({\\mathbb L}(f))(\\theta)(x)={\\mathbb L}(f)\\theta(f^!(x)) \\text{ for }x \\in W.$$\n(see \\cite[Proposition 6.1]{berg13}, where it is also shown that ${\\operatorname{Der}}({\\mathbb L}(f))$ is injective). Proposition 6.2 in \\cite{berg13} now states that for a morphism $f:V\\to W$ in $Sp^D,$ the diagram \\begin{displaymath}\n\\xymatrix{{\\operatorname{Der}} ({\\mathbb L}(V)) \\ar[r]^{ev_{\\omega_V}}\\ar[d]_{{\\operatorname{Der}}({\\mathbb L}(f))} & {\\mathbb L}(V) \\ar[d]^{{\\mathbb L}(f)} \\\\\n{\\operatorname{Der}} ({\\mathbb L}(W)) \\ar[r]^{ev_{\\omega_W}} &{\\mathbb L}(W)}\\end{displaymath} commutes.
This implies in particular that we get a functor $${\\operatorname{Der}}_\\omega ({\\mathbb L}(-)):Sp^D\\to gLie,$$ given by the kernel ${\\operatorname{Der}}_\\omega ({\\mathbb L}(V))$ of the evaluation map $ev_{\\omega_V}:{\\operatorname{Der}}({\\mathbb L}(V))\\rightarrow {\\mathbb L}(V)$ for $V\\in Sp^D$.\n\nWe identify the $Sp^D$-module ${\\operatorname{Der}}({\\mathbb L}(-))$ with the $Sp^D$-module ${\\mathbb L}(V)\\otimes V,$ using the map $$\\theta_{-,-}:{\\mathbb L}(V)\\otimes V\\to {\\operatorname{Der}}({\\mathbb L}(V)),\\; \\theta_{\\chi,x}(y)=\\chi \\langle x,y\\rangle\\text{, for $x\\in V$ and $\\chi\\in {\\mathbb L}(V)$ }$$ (see \\cite[Proposition 6.3]{berg13}). Under this identification the evaluation map becomes the map induced by the Lie bracket, i.e. the diagram \\begin{equation*}\n\\xymatrix{\n{\\mathbb L}(V)\\otimes V \\ar[rr]^-{[-,-]}\\ar[d]_{\\theta_{-,-}} && {\\mathbb L}(V) \\ar@{=}[d] \\\\\n{\\operatorname{Der}}({\\mathbb L}(V)) \\ar[rr]^{ev_{\\omega_V}} &&{\\mathbb L}(V)}\n\\end{equation*} commutes. Denoting by ${\\mathfrak{g}}(V)$ the kernel of $[-,-]$ and observing that $[-,-]$ surjects onto the graded Lie subalgebra ${\\mathbb L}^{\\geq 2}(V)$ of elements of bracket length $\\geq 2,$ we get a commutative diagram of $Sp^D$-modules\n\\begin{equation}\\label{gnat}\n\\xymatrix{\n0\\ar[r]&{\\mathfrak{g}}(V)\\ar[r]\\ar@{.>}[d]_\\cong &{\\mathbb L}(V)\\otimes V \\ar[rr]^-{[-,-]}\\ar[d]_{\\theta_{-,-}} && {\\mathbb L}^{\\geq 2}(V) \\ar@{=}[d]\\ar[r] & 0\\\\\n0\\ar[r]&{\\operatorname{Der}}_{\\omega}({\\mathbb L}(V))\\ar[r] &{\\operatorname{Der}}({\\mathbb L}(V)) \\ar[rr]^{ev_{\\omega_V}} &&{\\mathbb L}^{\\geq 2}(V)\\ar[r] & 0. } \\end{equation} \nNote that the top row does not use the inner product; thus it in fact defines a functor ${\\mathfrak{g}}$ on the category of graded vector spaces.\n\nDenote by $\\Lie= \\{\\Lie(n)\\}_{n\\geq 0}$ the graded Lie operad and by $\\Lie((n))$ the cyclic Lie operad, i.e.
the Lie operad $\\Lie(n-1)$ in arity $n-1$, seen as a $\\Sigma_{n}$-module. Denote by $t=(123...n)\\in \\Sigma_n$ the cyclic permutation and denote by $t*\\xi$ the action of $t$ on $\\xi \\in \\Lie((n))$. There are short exact sequences of $\\Sigma_n$-modules $$0\\to \\Lie((n)) \\xrightarrow{\\mu} {\\mathbb Q}[\\Sigma_n]\\otimes_{\\Sigma_{n-1}}\\Lie(n-1)\\xrightarrow{\\epsilon} \\Lie(n)\\rightarrow 0,$$ where $\\mu(\\xi)=\\sum_i t^i\\otimes t^{-i} * \\xi$ and $\\epsilon(\\sigma\\otimes \\zeta)=\\sigma[\\zeta,x_n]$ (see \\cite[Proposition 6.4]{berg13}).\n\nUsing this exact sequence we identify the rows of (\\ref{gnat}) with $$ s^{-D} \\bigoplus_{n\\geq 2} \\Lie((n))\\otimes_{\\Sigma_n} V^{\\otimes n}\\rightarrow s^{-D}\\bigoplus_{n\\geq 2}\\Lie(n)\\otimes_{\\Sigma_n} V^{\\otimes n} \\rightarrow \\bigoplus_{n\\geq 2}\\Lie(n) \\otimes_{\\Sigma_n} V^{\\otimes n}.$$ Motivated by this we define $$\\Lie((V))=s^{-D} \\bigoplus_{n\\geq 2} \\Lie((n))\\otimes_{\\Sigma_n} V^{\\otimes n}.$$ The Lie algebra structure on $\\Lie((V))$ can in fact be explicitly described using the cyclic operad structure on $\\Lie((n))$ and contractions on $V^{\\otimes n}$, but we are not going to need it.
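\nAs a sanity check on the exact sequence (this low-arity illustration is not taken from \\cite{berg13}), consider $n=2$: here $\\Lie((2))\\cong\\Lie(1)\\cong{\\mathbb Q}$ is one-dimensional, ${\\mathbb Q}[\\Sigma_2]\\otimes_{\\Sigma_1}\\Lie(1)$ is two-dimensional with basis $e\\otimes x_1$ and $t\\otimes x_1$, and $\\Lie(2)$ is one-dimensional, spanned by $[x_1,x_2]$. The map $\\epsilon$ sends $$e\\otimes x_1\\mapsto [x_1,x_2],\\qquad t\\otimes x_1\\mapsto [x_2,x_1]=-[x_1,x_2],$$ so its kernel is spanned by $e\\otimes x_1+t\\otimes x_1$, which is the image of $\\mu$ provided $t$ acts trivially on $\\Lie((2))$; the dimension count $1+1=2$ confirms exactness.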
We summarize the above as:\n\\begin{prop}[{\\cite[Proposition 6.6]{berg13}}]\\label{derivation as schurfunctor} There is an isomorphism of $Sp^D$-modules in $gLie$ $$\\Lie((-))\\cong {\\operatorname{Der}}_\\omega ({\\mathbb L}(-)).$$\n\\end{prop}\n\n\\begin{rem}\nSince $\\Lie((-))$ is a composition of the Schur functor $s^{-D}$ and the one given by the $\\Sigma$-module $\\{\\Lie((n))\\}_{n\\geq1}$, it is itself a Schur functor.\n\\end{rem}\n\n\\subsection{The action of the homotopy mapping class group}\n\n\n\n\nTo identify the action induced by deck transformations on ${\\operatorname{Der}}^+_\\omega({\\mathbb L}_I)$, we begin by noting that the Lie algebras (both with the Samelson product) $\\pi_*(\\Omega B{\\operatorname{aut}}_\\partial (N_I))\\otimes {\\mathbb Q}$ and $\\pi_*({\\operatorname{aut}}_\\partial (N_I))\\otimes {\\mathbb Q}$ are naturally isomorphic since ${\\operatorname{aut}}_\\partial (N_I)$ is a group-like monoid. Moreover, the deck transformation action can be described in terms of conjugation: when we identify $\\pi_*(B{\\operatorname{aut}}_\\partial (N_I))\\cong\\pi_{*-1}({\\operatorname{aut}}_\\partial (N_I))$, the deck transformation action of $\\pi_1(B{\\operatorname{aut}}_\\partial (N_I){\\langle 1 \\rangle})$ corresponds to the action of $\\pi_0({\\operatorname{aut}}_\\partial (N_I))$ on $\\pi_{*-1}({\\operatorname{aut}}_\\partial (N_I))\\otimes {\\mathbb Q}$ given by conjugation. The main tool to identify the action is the following theorem.\n\n\\begin{thm}[{\\cite{lupton} stated as in \\cite[Theorem 3.6]{berg13}}]\\label{lsiso}\n Let $f:X\\rightarrow Y$ be a map of simply connected CW-complexes with $X$ finite and $\\phi_f:{\\mathbb L}_X\\rightarrow {\\mathbb L}_Y$ a Quillen model.
There are natural isomorphisms of sets \n\\begin{equation*}\n \\pi_k ({\\operatorname{map}}_* (X,Y),f)\\otimes {\\mathbb Q} \\cong H_k({\\operatorname{Der}}_{\\phi_f}({\\mathbb L}_X,{\\mathbb L}_Y)), \\text{ for }k\\geq 1,\n\\end{equation*} which are vector space isomorphisms for $k>1.$ In the case $X=Y$ and $f=$id$_X,$ there are isomorphisms of vector spaces \\begin{equation*}\\label{natiso}\n \\pi_k ({\\operatorname{aut}}_* (X),\\text{id}_X)\\otimes {\\mathbb Q} \\cong H_k({\\operatorname{Der}}({\\mathbb L}_X)), \\text{ for }k\\geq 1,\n\\end{equation*} and the Samelson product corresponds to the Lie bracket.\n\\end{thm}\n\n\\begin{prop}[{Compare \\cite[Proposition 5.5]{berg13}}]\\label{der iso comp with action} There is a $\\pi_0({\\operatorname{aut}}_\\partial(N_I))$-equivariant isomorphism of graded Lie algebras\n$$\\pi_{*}^+({\\operatorname{aut}}_\\partial (N_I))\\otimes {\\mathbb Q}\\cong{\\operatorname{Der}}^+_\\omega({\\mathbb L}_I),$$ where the action on the right hand side is through the canonical action of $\\Gamma_I$ on $H_I$.\n\n\\end{prop}\n\n\\begin{proof}\n\nWe are going to study the long exact sequence of rational homotopy groups of the fibration\n$${\\operatorname{aut}}_\\partial(N_I) \\rightarrow {\\operatorname{aut}}_*(N_I)\\rightarrow {\\operatorname{map}}_*(\\partial N_I,N_I).$$ Denote by $\\varphi:{\\mathbb L}(\\omega_I)\\rightarrow {\\mathbb L}_I$ the inclusion of the graded Lie subalgebra of ${\\mathbb L}_I$ generated by $\\omega_I.$ Using Theorem \\ref{lsiso} we see that the map $$\\pi_k({\\operatorname{aut}}_*(N_I))\\otimes {\\mathbb Q}\\rightarrow \\pi_k({\\operatorname{map}}_*(\\partial N_I,N_I))\\otimes {\\mathbb Q}$$ is given by \\begin{equation*}\n{\\operatorname{Der}}({\\mathbb L}_{I})_k \\xrightarrow{\\varphi^*_k}{\\operatorname{Der}}_{\\varphi}({\\mathbb L} (\\omega_I) ,{\\mathbb L}_{I})_k,\n\\end{equation*}where $\\varphi^*_k$ is the restriction to ${\\mathbb L}(\\omega_I).$ Note that ${\\operatorname{Der}}_{\\varphi}({\\mathbb L}
(\\omega_I) ,{\\mathbb L}_{I})\\cong s^{(n-2)}{\\mathbb L}_I.$ Under this identification the map $\\varphi^*$ becomes the evaluation map, which is surjective as discussed for the diagram (\\ref{gnat}). Hence the long exact sequence of rational homotopy groups splits as \\begin{equation*}\n0\\rightarrow {\\operatorname{Der}}_\\omega({\\mathbb L}_{I})_*\\rightarrow {\\operatorname{Der}}({\\mathbb L}_{I})_* \\xrightarrow{\\varphi^*}{\\operatorname{Der}}_{\\varphi}({\\mathbb L} (\\omega_I) ,{\\mathbb L}_{I})_*\\rightarrow 0,\n\\end{equation*} where we use that ${\\operatorname{Der}}_\\omega({\\mathbb L}_{I})$ is the kernel of the evaluation map. The resulting isomorphism $$\\pi_*^+({\\operatorname{aut}}_{\\partial}({N_I}),{\\operatorname{id}}_{N_I})\\otimes {\\mathbb Q}\\cong {\\operatorname{Der}}^+_{{\\omega} }({\\mathbb L}_{I})$$ is in fact an isomorphism of graded Lie algebras. Indeed, since the inclusion ${\\operatorname{aut}}_\\partial ({N_I})\\rightarrow {\\operatorname{aut}}_*({N_I})$ is a map of topological monoids, the induced maps on rational homotopy groups respect the Samelson product, and $\\pi^+_*({\\operatorname{aut}}_*(N_I))\\otimes {\\mathbb Q}\\cong{\\operatorname{Der}}^+({\\mathbb L}_{I})$ is an isomorphism of Lie algebras. Hence we can calculate the Samelson product of $\\pi^+_*({\\operatorname{aut}}_\\partial(N_I))$ in ${\\operatorname{Der}}^+({\\mathbb L}_{I}).$\n\nNow let $f,g\\in {\\operatorname{aut}}_*({N_I})$. The action of $[f]\\in\\pi_0({\\operatorname{aut}}_*({N_I}))$ on $\\pi_k({\\operatorname{aut}}_*({N_I}))$ is induced by pointwise conjugation $g\\mapsto fgf^{-1},$ where $f^{-1}$ is some choice of homotopy inverse.
Let $\\phi_f$ be a Quillen model for $f$ and $\\theta\\in {\\operatorname{Der}} ({{\\mathbb L}_I})_k.$ The action of $[f]$ on ${\\operatorname{Der}}({\\mathbb L}_{I})_k$ is given by \n\\begin{equation*}\n\\theta\\mapsto \\phi_f\\circ \\theta\\circ \\phi_f^{-1},\n\\end{equation*} by the naturality of the identification $\\pi_k({\\operatorname{aut}}_*({N_I}))\\otimes {\\mathbb Q} \\cong {\\operatorname{Der}}({\\mathbb L}_{I})_k.$ \nFor a homotopy self-equivalence $f$, consider the induced map $f_*\\in {\\operatorname{Aut}} (\\tilde{H_*}(N_I))$. The map ${\\mathbb L}(s^{-1}(f_*\\otimes {\\mathbb Q}))$ is in fact a Lie model for $f$, which shows that we can identify the conjugation action with the induced action of ${\\operatorname{Aut}} (\\tilde{H_*}(N_I))$ on ${\\operatorname{Der}}({\\mathbb L}_I)_k$.\\\\\nUsing that ${\\operatorname{Der}}^+_\\omega({\\mathbb L}_{I})_k\\rightarrow {\\operatorname{Der}}^+({\\mathbb L}_{I})_k$ is injective, we calculate the conjugation action of $\\pi_0({\\operatorname{aut}}_{\\partial}({N_I}))$ on $\\pi_k({\\operatorname{aut}}_{\\partial}({N_I}),{\\operatorname{id}}_{N_I})$ in terms of ${\\operatorname{Der}}_{{\\omega} }({\\mathbb L}_{I})_k$. Let $f$ be an element of ${\\operatorname{aut}}_{\\partial}({N_I})$; it is in particular also an element of ${\\operatorname{aut}}_*({N_I})$, and we know that its homotopy class $[f]$ in $\\pi_0 ({\\operatorname{aut}}_*({N_I}))$ gives us an element in $\\Gamma_I$.
Considering $\\theta\\in {\\operatorname{Der}}_{{\\omega}}({\\mathbb L}_{I})_k$ as an element in ${\\operatorname{Der}}({\\mathbb L}_{I})_k$, we see that $[f]$ acts by the action induced by $f_*\\in \\Gamma_I$ on $\\tilde H_*(N_I).$\n\\end{proof}\n\n\\begin{rem}\\label{ratsur} As discussed in the proof above, the map $$\\pi_1({\\operatorname{aut}}_* (N_I)) \\otimes {\\mathbb Q}\\rightarrow \\pi_1({\\operatorname{map}}_*(\\partial N_I,N_I))\\otimes {\\mathbb Q}$$ is surjective; hence the kernel of $\\pi_0({\\operatorname{aut}}_\\partial(N_I))\\rightarrow \\Gamma_I$ is finite.\n\\end{rem}\n\nWe need to identify the maps induced by the stabilization map on rational homotopy groups. Given an element $\\theta\\in {\\operatorname{Der}}^+_{\\omega}({\\mathbb L}_I)$, we define an element $\\theta'\\in{\\operatorname{Der}}^+_{\\omega}({\\mathbb L}_{I'})$ by letting $\\theta'=\\theta$ on the generators $\\iota_i,\\kappa_i,$ where $i\\in I$, and $\\theta'(\\iota_{i'})=\\theta'(\\kappa_{i'})=0.$ Using that ${\\mathbb L}_{I'}$ is free we get a derivation $\\theta',$ which is indeed an element of ${\\operatorname{Der}}^+_{\\omega}({\\mathbb L}_{I'}),$ since $\\omega_{I'}=\\omega_{I}+(-1)^{|\\iota_{i'}|}[\\iota_{i'},\\kappa_{i'}].$ We again refer to this map as the stabilization map.\n\n\\begin{prop}\\label{der iso compatible with stab maps}\nThe isomorphism $$\\pi_*({\\operatorname{aut}}_\\partial (N_I){\\langle 1 \\rangle})\\otimes {\\mathbb Q}\\cong{\\operatorname{Der}}^+_\\omega({\\mathbb L}_I)$$ is compatible with the stabilization maps. \n\\end{prop}\n\n\\begin{proof} This works exactly as in \\cite[Proposition 7.7]{berg13}.\n\\end{proof}\n\nUltimately we describe the rational homology $H_*(B{\\operatorname{aut}}_\\partial(N_I){\\langle 1 \\rangle};{\\mathbb Q})$ as $\\pi_I$-modules. The link between a dg Lie model and the rational homology of a space is given by the Chevalley-Eilenberg homology.
The Chevalley-Eilenberg complex of a dg Lie algebra $L$ with differential $d_L$ is the chain complex $C_*^{CE}(L) = \\Lambda_* sL$ with differential $\\delta^{CE}=\\delta_0^{CE} + \\delta_1^{CE}$, where $s$ denotes the suspension and $\\Lambda_*$ the free graded commutative algebra. On words of length one and two the differentials are given by\\begin{equation*}\n\\delta_0^{CE}(sx) =-s d_L x\n\\end{equation*}\\begin{equation*}\n\\delta_1^{CE}(sx_1 \\wedge sx_2) =(-1)^{|x_1|}s[x_1,x_2],\n\\end{equation*}\nwhere $x,x_1,x_2\\in L$.\n\nQuillen showed that for a dg Lie model $L_X$ of a space $X$ the Chevalley-Eilenberg homology computes the rational homology of $X,$ i.e. $H^{CE}_*(L_X)\\cong H_*(X;{\\mathbb Q}).$ \n\nGrade the Chevalley-Eilenberg chains by word length, i.e. let $(\\Lambda^{p}(L))_q$ be the elements of word length $p$ and degree $q$. Denote by $H_{p,q}^{CE} (L)$ the homology of the chain complex $$...\\to (\\Lambda^{p+1}(L))_q\\xrightarrow{\\delta_1}(\\Lambda^{p}(L))_q\\xrightarrow{\\delta_1}(\\Lambda^{p-1}(L))_q\\to ...$$\nThe Quillen spectral sequence is the spectral sequence coming from the filtration by word length. If the dg Lie algebra $L_X$ is a model for a space $X,$ we can identify the $E^2$-page as $$E^2(L)_{p,q} = H_{p,q}^{CE} (L_X)\\cong H_{p,q}^{CE} (\\pi_*(\\Omega X)\\otimes {\\mathbb Q})\\Rightarrow H_*^{CE}(L_X)\\cong H_*(X;{\\mathbb Q}).$$ \n\nThe Quillen spectral sequence collapses on the $E^2$-page for coformal spaces, hence we get isomorphisms $$H_r(B{\\operatorname{aut}}_\\partial(N_I){\\langle 1 \\rangle};{\\mathbb Q})\\cong \\bigoplus_{p+q=r} H_{p,q}^{CE} (\\pi_*(\\Omega B{\\operatorname{aut}}_\\partial(N_I){\\langle 1 \\rangle})\\otimes {\\mathbb Q}).$$ The Quillen spectral sequence is in fact natural with respect to unbased maps of simply connected spaces (\\cite[Proposition 2.1.]{berg13}).
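\nAs a small illustration of the Chevalley-Eilenberg homology of a dg Lie model (a standard example, not taken from \\cite{berg13}), consider $X=S^{2m}$ with model the free graded Lie algebra ${\\mathbb L}(x)$ on one generator $x$ of degree $2m-1$ with trivial differential, so that ${\\mathbb L}(x)$ has basis $x,[x,x]$. The chains are $\\Lambda_*(sx, s[x,x])$ with $|sx|=2m$ and $|s[x,x]|=4m-1$, and the differential is determined by $$\\delta_1^{CE}(sx\\wedge sx)=(-1)^{|x|}s[x,x]=-s[x,x],\\qquad \\delta_1^{CE}(sx\\wedge s[x,x])=(-1)^{|x|}s[x,[x,x]]=0,$$ since $[x,[x,x]]=0$ by the Jacobi identity. One checks that the homology is spanned by $1$ and $sx$, recovering $H_*(S^{2m};{\\mathbb Q})$ as Quillen's theorem predicts.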
Since $\\pi_I$ is rationally perfect (see Lemma \\ref{H1 triv pi0}), we do not have any extension problems, hence the isomorphism above is in fact an isomorphism of $\\pi_I$-modules (see \\cite[Proposition 2.3]{berg13}). \n\n\\begin{prop}\\label{iso Hce and H as piI-modules}\nThere are isomorphisms of $\\pi_I$-modules $$H^{CE}_r({\\operatorname{Der}}_\\omega({\\mathbb L}_I))\\cong H_r(B{\\operatorname{aut}}_\\partial(N_I){\\langle 1 \\rangle};{\\mathbb Q})$$ compatible with the stabilization maps.\n\\end{prop}\n\n\\begin{proof}We use the isomorphism of $\\pi_I$-modules\n$$H_r(B{\\operatorname{aut}}_\\partial(N_I){\\langle 1 \\rangle};{\\mathbb Q})\\cong \\bigoplus_{p+q=r} H_{p,q}^{CE} (\\pi_*(\\Omega B{\\operatorname{aut}}_\\partial(N_I){\\langle 1 \\rangle})\\otimes {\\mathbb Q})$$ constructed above. By Proposition \\ref{der iso comp with action}, we identify $$H_{p,q}^{CE} (\\pi_*(\\Omega B{\\operatorname{aut}}_\\partial(N_I){\\langle 1 \\rangle})\\otimes {\\mathbb Q})\\cong H_{p,q}^{CE} ({\\operatorname{Der}}_\\omega ({\\mathbb L}_I))$$ as $\\pi_I$-modules. This in turn gives the isomorphism in the claim.\n\nThe compatibility with the stabilization maps follows from Proposition \\ref{der iso compatible with stab maps}.\n\\end{proof}\n\n\\section{Homological Stability}\n\n\\subsection{An algebraic homological stability result}\n\n\nRecall the graded hyperbolic modules $H_I\\cong \\tilde H_*(N_I)$ from Section \\ref{automorphisms of hyperbolic modules}, which have the groups $\\Gamma_I$ as their automorphism groups. Recall that we denoted by $\\lambda_I$ the $\\Gamma_I$-module $H_I$ with the standard action, which we considered as an object $$((H_I)_1,...,(H_I)_{n-1})\\in {\\operatorname{Mod}}({\\mathbb Z})^{n-1}.$$\nMoreover recall that we defined\n\\begin{displaymath}\n\t\\begin{aligned}\n\t\t&g_p=\\begin{cases}\\operatorname{rank} ((H_I)_p) &\\text{if } 2p\\neq n \\\\ \\tfrac{1}{2}\\operatorname{rank} ((H_I)_p) &\\text{if } 2p= n.\\end{cases}\n\t\\end{aligned}\n\\end{displaymath}\n\n\\begin{prop}\\label{polynomial functor degree} Let $n>3$.
For all $l\\geq 0$ there are polynomial functors $$\\mathscr{C}_l:{\\operatorname{Mod}}({\\mathbb Z})^{n-1}\\rightarrow {\\operatorname{Vect}}({\\mathbb Q})$$ of degree $\\leq l\/2$ and isomorphisms of $\\Gamma_I$-modules\n$$\\mathscr{C}_l(\\lambda_I)\\cong C_l^{CE}(\\mathfrak{g}_I)$$ compatible with the maps induced by inclusions.\n\\end{prop}\n\n\n\n\\begin{proof}\n\nIn Proposition \\ref{derivation as schurfunctor} we described the derivations ${\\operatorname{Der}}_\\omega({\\mathbb L}_I)$ as a Schur functor $$\\Lie((-)):Sp^{n-2}\\to gLie,\\; V\\mapsto s^{-n+2}\\bigoplus_{k\\geq 2} \\Lie((k))\\otimes_{\\Sigma_k}V^{\\otimes k},$$ which extends to the category of graded vector spaces $Vect_*({\\mathbb Q})$. Consider the inclusion $$\\mathcal{I}:\\prod_{i=0}^{n-2} Vect({\\mathbb Q})\\rightarrow Vect_*({\\mathbb Q}), \\; (V_i)_{i=0}^{n-2}\\mapsto \\bigoplus_{i=0}^{n-2} V_i[i].$$ The composition $\\Lie((-))\\circ \\mathcal{I}$ is a Schur multifunctor, which we denote by $\\widetilde {\\mathscr{U}},$ with $$\\widetilde {\\mathscr{U}}(\\mu)=s^{[1m_1+2m_2+...+(n-2)m_{n-2}-n+2]}\\Lie((|\\mu|)),$$ where $\\mu=(m_0,m_1,...,m_{n-2})$ and $\\Lie((|\\mu|))$ is endowed with the $\\Sigma_\\mu$-module structure induced by the inclusion $\\Sigma_\\mu\\subset \\Sigma_{|\\mu|}.$ Thus the positive degree derivations are given by the Schur multifunctor $$\\mathscr{U}:\\prod_{i=0}^{n-2} Vect({\\mathbb Q})\\to gLie,$$ where $\\mathscr{U}(\\mu)=\\widetilde{\\mathscr{U}}(\\mu)$ when $$1m_1+2m_2+...+(n-2)m_{n-2}-n+2\\geq 1\\Leftrightarrow \\sum_{i=1}^{n-2} m_i\\frac{i}{n-1}\\geq 1,$$ and $0$ otherwise. The Chevalley-Eilenberg chains are given by the Schur functor $\\Lambda$ with $\\Lambda(r)$ the trivial $\\Sigma_r$-module concentrated in degree $r$.
\nThe composition $\\widetilde{\\mathscr{C}}=\\Lambda\\circ\\mathscr{U}$ is now given by (using (\\ref{schurcomp})) $$\\widetilde{\\mathscr{C}}(\\mu)\\cong \\bigoplus_r \\Lambda(r) \\otimes_{\\Sigma_r} \\bigoplus\\operatorname{Ind}_{\\Sigma_{\\mu_1}\\times...\\times \\Sigma_{\\mu_r}}^{\\Sigma_\\mu}\\mathscr{U}(\\mu_1)\\otimes ... \\otimes \\mathscr{U}(\\mu_r), $$where the second sum runs over all $r$-tuples $(\\mu_1,...,\\mu_r)$ such that $\\sum_{s=1}^r \\mu_s=\\mu$. For fixed $r$ we get that the summand corresponding to $(\\mu_1,...,\\mu_r),$ where $\\mu_s=(m_{0,s},...,m_{n-2,s})$, is non-zero only if $\\sum_{i=1}^{n-2} (m_{i,s} \\frac{i}{n-1}) \\geq 1 $ for all $s=1,...,r$. That implies that\n\\begin{equation}\\label{r}\n\\sum_{i=1}^{n-2} m_i \\frac{i}{n-1}= \\sum_{s=1}^r (\\sum_{i=1}^{n-2} m_{i,s} \\frac{i}{n-1})\\geq r.\n\\end{equation}\nIf the summand is non-zero it is of degree\n\\begin{equation*}\n\\begin{aligned}\nr+ \\sum_{s=1}^r \\left(\\left(\\sum_{i=1}^{n-2} m_{i,s} i\\right)-n+2\\right) &=r+ \\left(\\sum_{i=1}^{n-2} m_ii\\right) -nr + 2r\\\\\n&=\\left(\\sum_{i=1}^{n-2} m_ii\\right) + r(3-n)\\\\\n&\\geq\\left(\\sum_{i=1}^{n-2} m_ii\\right)+ \\left(\\sum_{i=1}^{n-2} m_i \\frac{i}{n-1}\\right)(3-n) \\\\\n&=\\frac{2}{n-1}\\left(\\sum_{i=1}^{n-2} m_ii\\right),\n\\end{aligned}\n\\end{equation*} where we used that $3-n$ is negative and (\\ref{r}).
This now implies that the Chevalley-Eilenberg $l$-chains are given by a Schur multifunctor $\\widetilde{\\mathscr{C}}_l,$ where $\\widetilde{\\mathscr{C}}_l(\\mu)$ vanishes for \n$$l<\\frac{2}{n-1}\\left(\\sum_{i=1}^{n-2} m_ii\\right)\\leq 2\\sum_{i=1}^{n-2}m_i=2|\\mu|.$$\nHence it is of degree $\\leq l\/2.$\nThe functor $\\mathscr{C}_l$ in the statement is given by pre-composing $\\widetilde{\\mathscr{C}}_l$ with the additive functor $$-\\otimes {\\mathbb Q}: {\\operatorname{Mod}}({\\mathbb Z})^{n-1}\\to \\prod_{i=0}^{n-2} Vect({\\mathbb Q}).$$ The compatibility with the action follows from the fact that the functor lifts to $$Q^n_+({\\mathbb Z},\\Lambda)\\rightarrow Sp^{(n-2)}, \\; M\\mapsto s^{-1}(M\\otimes {\\mathbb Q}).$$\n\n\\end{proof}\n\nAs an immediate consequence of Proposition \\ref{polynomial functor degree} and Proposition \\ref{general algebraic homological stability result} we get:\n\n\\begin{cor}\\label{homstab cor}\nThe stabilization map\n$$H_k(\\Gamma_I, C^{CE}_l (\\mathfrak{g}_{I}))\\rightarrow H_k(\\Gamma_{I'},C^{CE}_l (\\mathfrak{g}_{I'}))$$\nis an isomorphism for $g_p> 2k+2l+2$ when $2p\\neq n$ and $g_p> 2k+2l+4$ if $2p=n$ and an epimorphism for $g_p\\geq 2k+2l+2$ respectively $g_p\\geq 2k+2l+4.$\n\\end{cor}\n\n\n\nBefore we prove the main proposition of this section, i.e.
deduce homological stability for the Chevalley-Eilenberg homology from the stability for the chains, we need the following observation about the Chevalley-Eilenberg chains.\n\\begin{lem}\\label{hyperhom lemma}\nThere exists a chain homotopy equivalence $C^{CE}_* (\\mathfrak{g}_I)\\xrightarrow{\\simeq}H^{CE}_* (\\mathfrak{g}_I)$ sending cycles $z\\mapsto[z]$ such that\n\\begin{equation*}\n\\xymatrix{\nC^{CE}_* (\\mathfrak{g}_I) \\ar[rr]^{\\sigma_*}\\ar[d]_{\\simeq} && C^{CE}_* (\\mathfrak{g}_{I'}) \\ar[d]_{\\simeq} \\\\\nH^{CE}_* (\\mathfrak{g}_I) \\ar[rr]^{\\sigma_*} && H^{CE}_* (\\mathfrak{g}_{I'})}\n\\end{equation*}\ncommutes up to chain homotopy of ${\\mathbb Q}[\\Gamma_I]$-chain complexes.\n\\end{lem}\n\\begin{proof}\nThis is true for all degree-wise finite dimensional ${\\mathbb Q}[G]$-chain complexes for rationally perfect groups $G$ by \\cite[Lemma B.1]{berg13} and \\cite[Proposition B.5]{berg13}. The groups $\\Gamma_I$ are rationally perfect (see Lemma \\ref{gammaIh1}) and the $C^{CE}_*(\\mathfrak{g}_I)$ are degree-wise finite dimensional, since the ${\\mathfrak{g}}_I$ are. \n\\end{proof}\n\n\n\n\\begin{prop}\\label{algebraic homological stability result}\nThe stabilization map\n$$H_k(\\Gamma_I, H^{CE}_l (\\mathfrak{g}_I))\\rightarrow H_k(\\Gamma_{I'},H^{CE}_l (\\mathfrak{g}_{I'}))$$\nis an isomorphism for $g_p> 2k+2l+2$ when $2p\\neq n$ and $g_p> 2k+2l+4$ if $2p=n$ and an epimorphism for $g_p\\geq 2k+2l+2$ respectively $g_p\\geq 2k+2l+4.$\n\\end{prop}\n\n\n\n\\begin{proof}\nConsider the first hyperhomology spectral sequence with $E^1$-page:\n$$ E^1_{k,l}(I)=H_l(\\Gamma_I; C^{CE}_k (\\mathfrak{g}_I))\\Rightarrow {\\mathbb H}_{k+l}(\\Gamma_I; C^{CE}_* (\\mathfrak{g}_I)).$$ By Corollary \\ref{homstab cor} $E^1_{k,l}(I)\\rightarrow E^1_{k,l}(I')$ is an isomorphism for $$g_p>\\begin{cases} 2k+2l+4 \\geq k+2l+4 & \\text{ if } p=n\/2 \\\\ 2k+2l+2 \\geq k+2l+2 & \\text{ otherwise} \\end{cases}$$ and an epimorphism for $\"\\geq\"$.
By the spectral sequence comparison theorem we get that the map\n$${\\mathbb H}_i(\\Gamma_I, C^{CE}_* (\\mathfrak{g}_I))\\rightarrow {\\mathbb H}_i(\\Gamma_{I'},C^{CE}_* (\\mathfrak{g}_{I'}))$$ induced by the stabilization map\nis an isomorphism for $g_p>2i+2$ when $2p\\neq n$ and $g_p>2i+4$ if $2p=n$ and an epimorphism for $\"\\geq\"$.\nUpon using Lemma \\ref{hyperhom lemma} and the chain homotopy invariance of hyperhomology we get that the map\n$${\\mathbb H}_i(\\Gamma_I, H^{CE}_* (\\mathfrak{g}_I))\\rightarrow {\\mathbb H}_i(\\Gamma_{I'},H^{CE}_* (\\mathfrak{g}_{I'}))$$ induced by the stabilization map\nis an isomorphism and an epimorphism in the same ranges as above. Ultimately we use the natural splitting for hyperhomology groups with coefficients in a chain complex with trivial differential\n\\begin{equation*}\n\\xymatrix{\n{\\mathbb H}_i(\\Gamma_I;H^{CE}_* (\\mathfrak{g}_I)) \\ar[r]^-{\\sigma_i}\\ar[d]_{\\cong} & {\\mathbb H}_i(\\Gamma_{I'}; H^{CE}_* (\\mathfrak{g}_{I'})) \\ar[d]_{\\cong} \\\\\n\\bigoplus_{k+l=i}H_k(\\Gamma_I;H^{CE}_l (\\mathfrak{g}_I)) \\ar[r]^-{\\sigma_{k,l}} & \\bigoplus_{k+l=i} H_k(\\Gamma_{I'};H^{CE}_l (\\mathfrak{g}_{I'})).}\n\\end{equation*}\nHence we see that the maps $\\sigma_{k,l}$ are isomorphisms and epimorphisms in the range in the statement of Proposition \\ref{algebraic homological stability result}.\n\n\\end{proof}\n\\subsection{Homological stability for homotopy automorphisms}\n\nThe first main result of this article now easily follows from the previous results.\n\n\\begin{THMA}\nThe map $$H_i(B{\\operatorname{aut}}_\\partial(N_I);{\\mathbb Q})\\rightarrow H_i(B{\\operatorname{aut}}_\\partial(N_{I'});{\\mathbb Q})$$ induced by the stabilization map\nis an isomorphism for $g_p> 2i+2$ when $2p\\neq n$ and $g_p> 2i+4$ if $2p=n$ and an epimorphism for $g_p\\geq 2i+2$ respectively $g_p\\geq 2i+4.$\n\\end{THMA}\n\n\\begin{proof} We begin by observing that $$H_k(\\Gamma_I, H^{CE}_l (\\mathfrak{g}_I))\\cong H_k(\\pi_I, H^{CE}_l
(\\mathfrak{g}_I)),$$ since the action of $\\pi_I$ is through $\\tilde H :\\pi_I\\to \\Gamma_I$ and the kernel of $\\tilde H$ is finite (Propositions \\ref{hmcg} and \\ref{der iso comp with action}).\nThe result now follows from Proposition \\ref{algebraic homological stability result} combined with Proposition \\ref{iso Hce and H as piI-modules} upon using the spectral sequence comparison theorem.\n\\end{proof}\n\n \n\\subsection{Homological stability for block diffeomorphisms}\n \nDenote by ${\\widetilde{\\operatorname{aut}}}_\\partial(X)$ the $\\Delta$-monoid of block homotopy equivalences, whose $k$-simplices are the face preserving homotopy equivalences $$\\varphi:\\Delta^k\\times X\\rightarrow \\Delta^k\\times X$$ such that $\\varphi|_{\\Delta^k\\times \\partial X}$ is the identity. The block diffeomorphisms ${\\widetilde{\\operatorname{Diff}}}_\\partial (X)$ form the sub-$\\Delta$-group whose $k$-simplices are the face preserving diffeomorphisms $$\\varphi:\\Delta^k\\times X\\rightarrow \\Delta^k\\times X$$ such that $\\varphi$ is the identity on a neighborhood of $\\Delta^k\\times \\partial X.$ We do not distinguish between $\\Delta$-objects and their realizations. Denote the inclusion ${\\widetilde{\\operatorname{Diff}}}_\\partial(X)\\hookrightarrow{\\widetilde{\\operatorname{aut}}}_\\partial(X)$ by $\\tilde J$. The inclusion $${\\operatorname{aut}}_\\partial (X) \\hookrightarrow {\\widetilde{\\operatorname{aut}}}_\\partial (X)$$ is a homotopy equivalence and hence we consider them as identified. The block diffeomorphisms ${\\widetilde{\\operatorname{Diff}}}_\\partial (X)$ and the diffeomorphisms ${\\operatorname{Diff}}_\\partial (X)$ with the Whitney C$^\\infty$-topology, on the other hand, are not homotopy equivalent; the difference is related to algebraic K-theory (see \\cite{MR1818774}).
The homogeneous space ${\\widetilde{\\operatorname{aut}}}_\\partial(X)\/{\\widetilde{\\operatorname{Diff}}}_\\partial(X)$ is by definition the homotopy fiber of the map $\\tilde J:B{\\widetilde{\\operatorname{Diff}}}_\\partial (X)\\rightarrow B{\\widetilde{\\operatorname{aut}}}_\\partial (X).$ It is related to surgery theory, as we now explain.\n\nLet $X$ be a smooth manifold of dimension $\\geq 5$ with boundary $\\partial X.$ Quinn \\cite{quinn} shows that there is a quasi-fibration of Kan $\\Delta$-sets $$\\mathcal{S}^{G\/O}_\\partial(X)\\rightarrow {\\operatorname{map}}_*(X\/\\partial X,G\/O)\\rightarrow \\mathbb{L}(X) $$ and that its homotopy exact sequence is the surgery exact sequence. $\\mathcal{S}^{G\/O}_\\partial(X)$ is the realization of a $\\Delta$-set with $k$-simplices the pairs $(W,f),$ where $W$ is a smooth $(k+3)$-ad and $f:W\\rightarrow \\Delta^k\\times X$ is a face preserving homotopy equivalence, such that $f$ restricts to a diffeomorphism $f|\\partial_{k+1}W:\\partial_{k+1}W\\rightarrow \\Delta^k\\times \\partial X.$ There is a map $${\\widetilde{\\operatorname{aut}}}_\\partial(X)\/{\\widetilde{\\operatorname{Diff}}}_\\partial(X)\\rightarrow \\mathcal{S}^{G\/O}_\\partial(X), $$ which by the h-cobordism theorem induces a weak homotopy equivalence $${\\widetilde{\\operatorname{aut}}}_\\partial(X)\/{\\widetilde{\\operatorname{Diff}}}_\\partial(X)_{(1)}\\simeq_{w.e.}\\mathcal{S}^{G\/O}_\\partial(X)_{(1)}$$ of the identity components (see \\cite[Section 3.2.]{berg12}). Now assume that $X$ is simply connected.
Since $$G\/O\\simeq_{{\\mathbb Q}}BO \\simeq_{{\\mathbb Q}} \\prod_{i\\geq 1} K({\\mathbb Q},4i),$$ we understand the rational homotopy groups $$\\pi_*({\\operatorname{map}}_*(X\/\\partial X,G\/O))\\otimes {\\mathbb Q} \\cong H^*(X,\\partial X;{\\mathbb Q})\\otimes \\pi_*(G\/O).$$ Note that if $X$ is simply connected $$\\pi_i(\\mathbb{L}(X))\\otimes {\\mathbb Q}\\cong L_{\\operatorname{dim}(X)+i}({\\mathbb Z})\\otimes {\\mathbb Q} \\cong\\begin{cases}{\\mathbb Q} &\\text{if } \\operatorname{dim}(X) + i \\equiv 0 \\text{ mod }4\\\\\n0 &\\text {otherwise.}\n\\end{cases}$$\n\nWe now specialize to $N_I.$\n\\begin{lem}[{\\cite[Lemma 3.5.]{berg12}}]\\label{plumbinglemma} The surgery obstruction map induces an isomorphism\n$$H^{n}(N_I,\\partial N_I;{\\mathbb Q})\\otimes \\pi_{n+k}(G\/O)\\rightarrow L_{n+k}({\\mathbb Z})\\otimes{\\mathbb Q}$$ for $n+k\\equiv 0 \\text{ mod } 4.$\n\\end{lem}\n\n\\begin{proof}\nConsider the smooth and topological surgery exact sequences:\n\\begin{displaymath}\n\\xymatrix{...\\ar[r] &N_\\partial^{G\/O} (N_I\\times D^k)\\otimes {\\mathbb Q} \\ar[d] \\ar[r] & L_{n+k}({\\mathbb Z})\\otimes {\\mathbb Q}\\ar@{=}[d] \\ar[r]&...\\\\\n...\\ar[r] & N_\\partial^{G\/Top} (N_I\\times D^k)\\otimes {\\mathbb Q}\\ar[r] & L_{n+k}({\\mathbb Z})\\otimes {\\mathbb Q}\\ar[r]&...} \n\\end{displaymath}\nThe left hand vertical map is an isomorphism since $\\pi_i(Top\/O)$ is finite (see e.g. \\cite{MR0645390}). Milnor's plumbing construction ensures that for $k+n$ even there is an element in $N_\\partial^{G\/Top} (N_I\\times D^k)$ with non-trivial surgery obstruction.
Since $$N_\\partial^{G\/Top} (N_I\\times D^k)\\cong \\pi_k({\\operatorname{map}}_* (N_I\/\\partial N_I,G\/Top)) \\text{ and }$$\n$$\\pi_k({\\operatorname{map}}_* (N_I\/\\partial N_I,G\/O))\\otimes {\\mathbb Q}\\cong H^{n}(N_I,\\partial N_I;{\\mathbb Q})\\otimes \\pi_{n+k}(G\/O), \\text{ for $n+k\\equiv 0 \\text{ mod } 4,$}$$ this implies the claim because both sides are just one-dimensional rational vector spaces.\n\\end{proof}\n\nThis now implies that we have a natural isomorphism\n\\begin{equation}\\label{naturalmapandH^*}\n\\pi_k(\\mathcal{S}^{G\/O}_\\partial(N_I))\\otimes {\\mathbb Q}\\cong H^n(N_I;{\\mathbb Q})\\otimes \\pi_{n+k}(G\/O) \\text{ for } k>0\n \\end{equation}\n(compare \\cite[Corollary 4.6.]{berg13}).\n\n\nRecall that $J_0\\pi_0({\\operatorname{Diff}}_\\partial (N_I))$ has finite index in $\\pi_0({\\operatorname{aut}}_\\partial (N_I))$ (Proposition \\ref{mcg}). By Cerf's pseudo-isotopy theorem, $\\tilde J_1\\pi_1(B{\\widetilde{\\operatorname{Diff}}}_\\partial (N_I))$ then also has finite index in $\\pi_1(B{\\operatorname{aut}}_\\partial (N_I)).$ Denote by $\\bar B{\\operatorname{aut}}_\\partial (N_I)$ the finite cover corresponding to $\\operatorname{Image}(\\tilde J_1).$ Note that it has the same (higher) rational homotopy groups. By construction $\\tilde J$ lifts to a map $B{\\widetilde{\\operatorname{Diff}}}_\\partial (N_I)\\rightarrow \\bar B{\\operatorname{aut}}_\\partial (N_I).$ Instead of ${\\widetilde{\\operatorname{aut}}}_\\partial(N_I)\/{\\widetilde{\\operatorname{Diff}}}_\\partial(N_I)$ we consider $$\\mathcal{F}_I=\\operatorname{hofib}(B{\\widetilde{\\operatorname{Diff}}}_\\partial (N_I)\\rightarrow \\bar B{\\operatorname{aut}}_\\partial (N_I)).$$ We have thus reduced the problem of showing rational homological stability for the block diffeomorphisms to the study of the Serre spectral sequence of the homotopy fibration above. The only missing ingredient is now to understand the rational homology groups of $\\mathcal{F}_I$ as $\\bar\\pi_I =\\pi_1(\\bar B{\\operatorname{aut}}_\\partial (N_I))$-modules. 
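As a sanity check on identifications of this type, consider the simplest case (our illustration, not taken from the paper): for a disc, where the quotient by the boundary is a sphere, the mapping-space computation reduces entirely to the rational homotopy of $G/O$.

```latex
% Illustration (not from the paper): X = D^n, so X/\partial X \simeq S^n.
% For k \geq 1 the rational homotopy of the mapping space is
\pi_k\bigl(\operatorname{map}_*(S^n, G/O)\bigr)\otimes\mathbb{Q}
  \;\cong\; \pi_{n+k}(G/O)\otimes\mathbb{Q}
  \;\cong\; \begin{cases}
      \mathbb{Q} & \text{if } n+k \equiv 0 \text{ mod } 4,\\
      0 & \text{otherwise,}
    \end{cases}
```

matching $H^*(D^n,S^{n-1};\mathbb{Q})\otimes\pi_*(G/O)$, which is concentrated in cohomological degree $n$.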
Observe that by Proposition \\ref{mcg} there is a surjection $$\\pi_1(\\bar B{\\operatorname{aut}}_\\partial (N_I))\\rightarrow \\Gamma_I$$ induced by $\\tilde H_*.$\n\nDenote by $\\eta: S^{G\/O}_\\partial(X) \\rightarrow \\pi_0{\\operatorname{map}}_*(X\/\\partial X,G\/O)$ the normal invariant and by $\\sigma:\\pi_0{\\operatorname{map}}_*(X\/\\partial X,G\/O) \\rightarrow L_{\\operatorname{dim}(X)}(X)$ the surgery obstruction. Using the surgery exact sequence, we see that for $n$ odd $\\pi_1(\\mathcal{F}_I)$ is abelian, since it is a subgroup of the abelian group $[\\Sigma (N_I\/\\partial N_I),G\/O]_*$. For $n$ even it is a finite extension of the abelian group $[\\Sigma (N_I\/\\partial N_I),G\/O]_*$ by a finite cyclic group (in case $L_{n+2}({\\mathbb Z})\\cong {\\mathbb Z}$, the proof of Lemma \\ref{plumbinglemma} makes sure that the map to $L_{n+2}({\\mathbb Z})$ is nonzero and hence the kernel of $\\sigma$ is a finite cyclic group). Define $$\\pi_k^{ab}(\\mathcal{F}_I)=\\begin{cases} \\pi_1(\\mathcal{F}_I)\/ \\operatorname{Image}(L_{n+2}({\\mathbb Z})\\rightarrow \\pi_1(\\mathcal{F}_I)) & \\text{ if } k=1 \\\\ \\pi_k(\\mathcal{F}_I) & \\text{ if } k>1.\\end{cases} $$\n\n\\begin{prop}There are isomorphisms of $\\bar\\pi_I$-modules compatible with the stabilization maps\n\\begin{enumerate} \n\t\\item[$\\operatorname{(1)}$] $\\pi^{ab}_k(\\mathcal{F}_I)\\otimes {\\mathbb Q}\\cong (\\tilde H_{*}(N_I,{\\mathbb Q}) \\otimes \\pi_*(G\/O))_{k}$, where $|a\\otimes \\alpha|=|\\alpha|-|a|$, $k\\geq 1$\n\t\\item[$\\operatorname{(2)}$] $H_*(\\mathcal{F}_I,{\\mathbb Q}) \\cong \\Lambda(\\pi^{ab}_*(\\mathcal{F}_I)\\otimes {\\mathbb Q})$,\n\\end{enumerate} where the actions on the left-hand side are induced by the standard actions of $\\Gamma_I$ on $\\tilde H_*(N_I).$ \n\\end{prop}\n\n\\begin{proof} Compare \\cite[p.26 and Theorem 3.6]{berg12} and \\cite[Proposition 7.15.]{berg13}. 
Observe that the rationalization $(\\mathcal{F}_I)_{{\\mathbb Q}}$ has rational homotopy groups $\\pi^{ab}_k(\\mathcal{F}_I)\\otimes {\\mathbb Q}.$\nConsider the splitting of the homotopy exact sequence of the surgery fibration as\n$$0\\rightarrow L_{n+k+1}({\\mathbb Z})\/\\operatorname{Image}(\\sigma)\\rightarrow \\pi_k (\\mathcal{S}_\\partial^{G\/O}(N_I)) \\rightarrow \\operatorname{Image}(\\eta)\\rightarrow 0.$$ By Lemma \\ref{plumbinglemma} we get $$L_{n+k+1}({\\mathbb Z})\/\\operatorname{Image}(\\sigma)\\otimes {\\mathbb Q}\\cong 0\\text{ and }\\operatorname{Image}(\\eta)\\otimes {\\mathbb Q} \\cong (\\tilde H^*(N_I,{\\mathbb Q}) \\otimes \\pi_*(G\/O))_{k}.$$ Using the isomorphism $$\\tilde H_*(N_I;{\\mathbb Q})\\cong {\\operatorname{Hom}}_{\\mathbb Q}(\\tilde H_*(N_I;{\\mathbb Q});{\\mathbb Q})\\cong \\tilde H^*(N_I;{\\mathbb Q})$$ we get the isomorphism $(1).$ We see that the action on the right-hand side is induced by the standard action of $\\Gamma_I$ as follows: Use the identification $$\\pi_k (\\mathcal{S}^{G\/O}_\\partial(N_I))\\cong S_\\partial^{G\/O}(N_I\\times D^k).$$ An element of $S_\\partial^{G\/O}(N_I\\times D^k)$ is represented by a manifold $(X,\\partial X)$ together with a homotopy equivalence $f:X\\rightarrow N_I\\times D^k,$ such that $f|\\partial X:\\partial X\\rightarrow\\partial (N_I\\times D^k)$ is a diffeomorphism. The action of an element $$[\\phi]\\in\\pi_1(\\bar B{\\widetilde{\\operatorname{aut}}}_\\partial(N_I))\\cong \\operatorname{Image}(\\tilde J_1) \\cong \\operatorname{Image}(J_0)\\subset \\pi_0 ({\\operatorname{aut}}_\\partial (N_I))$$ on $f$ is given by the composition $$X \\xrightarrow{f}N_I\\times D^k\\xrightarrow{ \\phi\\times \\operatorname{id}_{D^k}}N_I\\times D^k,$$ where $\\phi$ is a diffeomorphism representing $[\\phi]$ considered as an element of $\\operatorname{Image}(J_0)$. 
\\cite[Lemma 3.3.]{berg12} now implies that $$\\eta((\\phi\\times \\operatorname{id}_{D^k})\\circ f)=((\\phi\\times \\operatorname{id}_{D^k})^*)^{-1}(\\eta(f))+\\eta(\\operatorname{id}_{D^k})=((\\phi\\times \\operatorname{id}_{D^k})^*)^{-1}(\\eta(f)),$$ using that the normal invariant of a diffeomorphism is trivial. This implies that $[\\phi]$ acts on $(\\tilde H^*(N_I;{\\mathbb Q})\\otimes \\pi_*(G\/O))_k$ via $(\\phi^{-1})^*\\otimes {\\operatorname{id}}_{\\pi_*(G\/O)}.$ But this exactly corresponds to the standard action under the isomorphism $$\\tilde H^*(N_I;{\\mathbb Q})\\cong {\\operatorname{Hom}}_{\\mathbb Q}(\\tilde H_*(N_I;{\\mathbb Q});{\\mathbb Q})\\cong \\tilde H_*(N_I;{\\mathbb Q}).$$ If $\\phi$ lies in the kernel of the map $$\\pi_1(\\bar B{\\operatorname{aut}}_\\partial(N_I))\\rightarrow \\Gamma_I,$$ then it is in the kernel of $\\tilde H_*$ and a similar argument as before shows that it acts trivially on $\\pi^{ab}_k(\\mathcal{F}_I)$ (compare Lemma \\ref{gamma I action on universal cover}). The compatibility with the stabilization maps follows from the fact that the isomorphism (\\ref{naturalmapandH^*}) is natural. \n\nThe statement (2) follows from (1) by using the fact that $G\/O$ and hence also the mapping space ${\\operatorname{map}}_*(N_I\/\\partial N_I,G\/O)$ are infinite loop spaces. Thus all rational $k$-invariants vanish for ${\\operatorname{map}}_*(N_I\/\\partial N_I,G\/O).$ This is equivalent to: \\\\\nFor all elements $\\alpha\\in \\pi_k({\\operatorname{map}}_*(N_I\/\\partial N_I,G\/O))\\otimes {\\mathbb Q}$ there exists an element $c\\in H^k({\\operatorname{map}}_*(N_I\/\\partial N_I,G\/O);{\\mathbb Q}),$ such that $c(h(\\alpha))\\neq 0,$ where $h$ denotes the rational Hurewicz homomorphism. 
Since $$\\pi_k((\\mathcal{F}_I)_{{\\mathbb Q}})\\otimes {\\mathbb Q}\\rightarrow \\pi_k({\\operatorname{map}}_*(N_I\/\\partial N_I,G\/O))\\otimes {\\mathbb Q}$$ is injective, it follows that all rational $k$-invariants also vanish for $(\\mathcal{F}_I)_{{\\mathbb Q}}$. This shows that $(\\mathcal{F}_I)_{{\\mathbb Q}}$ is a product of Eilenberg-MacLane spaces and hence its homology is given by the free graded commutative algebra on its homotopy groups. Moreover, the $\\pi_1(\\bar B{\\operatorname{aut}}_\\partial (N_I))$-action is induced by the standard action.\n\\end{proof}\n\nWe use the previous proposition to give a Schur multifunctor description of $H_r(\\mathcal{F}_I;{\\mathbb Q}).$ For a multiindex $\\mu$ with $\\ell(\\mu)=n-1,$ consider the $\\Sigma_{\\mu}$-modules\n$\\Pi(\\mu)$ given by $$\\Pi(0,...,1,...,0)=s^{-i}\\pi_{*}(G\/O)\\otimes {\\mathbb Q},$$ where the $1$ sits in the $i$-th position, and $\\Pi(\\mu)=0$ otherwise. The corresponding Schur multifunctor $$\\Pi:Mod({\\mathbb Z})^{n-1}\\rightarrow Vect_*({\\mathbb Q})$$ has the property that there is an isomorphism of the induced $\\Gamma_I$-modules\n$$\\Pi(H_I)\\cong (\\tilde H_{*}(N_I,{\\mathbb Q}) \\otimes \\pi_*(G\/O))^+.$$\n\nIt follows now that we get an isomorphism of $\\Gamma_I$-modules $$\\Lambda\\circ \\Pi(H_I)\\cong \\Lambda((\\tilde H_{*}(N_I,{\\mathbb Q}) \\otimes \\pi_*(G\/O))^+)\\cong H_*(\\mathcal{F}_I,{\\mathbb Q}),$$ where the left-hand $\\Lambda$ denotes the free graded commutative algebra endofunctor of $Vect_*({\\mathbb Q}).$ Recall that $\\Lambda$ is given as the Schur functor with $\\Lambda(n)={\\mathbb Q}[n]$ and trivial $\\Sigma_n$-action. 
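For concreteness (our example, not from the paper): on a graded vector space with one even and one odd generator, the free graded commutative algebra combines a polynomial part and an exterior part.

```latex
% Example: V = \mathbb{Q}x \oplus \mathbb{Q}y with |x| = 2 (even), |y| = 3 (odd).
\Lambda(V) \;=\; \mathbb{Q}[x]\otimes\Lambda(y),
\qquad xy = yx, \quad y^2 = 0,
% so a homogeneous basis is \{\, x^a y^e : a \geq 0,\ e \in \{0,1\} \,\}.
```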
Now setting $\\mathscr{H}_r=\\Lambda_r\\circ \\Pi$ and observing that $\\Lambda_r$ is of degree $\\leq r$ and $\\Pi$ additive, we get the following:\n\n\\begin{prop}\\label{Fschur}\nThere is an isomorphism of $\\Gamma_I$-modules\n$$H_r(\\mathcal{F}_I;{\\mathbb Q})\\cong \\bigoplus_\\mu \\mathscr{H}_r(\\mu) \\otimes_\\mu H_I^{\\otimes_\\eta},$$ compatible with the stabilization maps, where the $\\mathscr{H}_r(\\mu)$ are trivial for $|\\mu|>r.$\n\\end{prop}\n\nWe now prove the second main theorem of this paper.\n\n\\begin{THMB}\nThe stabilization map $$H_i(B{\\widetilde{\\operatorname{Diff}}}_\\partial(N_I);{\\mathbb Q})\\rightarrow H_i(B{\\widetilde{\\operatorname{Diff}}}_\\partial(N_{I'});{\\mathbb Q})$$\n is an isomorphism for $g_p> 2i+2$ when $2p\\neq n$ and for $g_p> 2i+4$ when $2p=n,$ and an epimorphism for $g_p\\geq 2i+2$ respectively $g_p\\geq 2i+4.$\n\\end{THMB}\n\n\\begin{proof}Denote by $Y_I=B{\\widetilde{\\operatorname{Diff}}}_\\partial(N_I)$ and $\\bar X_I=\\bar B{\\widetilde{\\operatorname{aut}}}_\\partial(N_I).$\nConsider the Serre spectral sequences of the homotopy fibration $$\\mathcal{F}_I\\rightarrow Y_I\\rightarrow \\bar X_I$$ and the analogue for $I'.$ The stabilization map induces maps on the $E_2$-pages \n$$\\sigma_*:H_k(\\bar X_I;H_l(\\mathcal{F}_I;{\\mathbb Q}))\\rightarrow H_k(\\bar X_{I'};H_l(\\mathcal{F}_{I'};{\\mathbb Q})).$$ The theorem follows upon showing that these are isomorphisms for $g_p> 2k+2l+2$ $(+4$ if $p=n\/2)$ and epimorphisms for $g_p\\geq2k+2l+2$ $(+4$ if $p=n\/2).$\nFor this we consider the universal covering spectral sequence $$H_r(\\pi_1(\\bar X_I);H_s(\\bar X_I{\\langle 1 \\rangle};H_l(\\mathcal{F}_I;{\\mathbb Q})))\\Rightarrow H_{r+s}(\\bar X_I;H_l(\\mathcal{F}_I,{\\mathbb Q})).$$ The condition above would follow upon showing that the maps induced by the stabilization map on the $E^2$-page are isomorphisms for $g_p> 2r+2s+2l+2$ $(+4$ if $p=n\/2)$ and epimorphisms for $g_p\\geq2r+2s+2l+2$ $(+4$ if $p=n\/2).$ To show this we observe 
that there are isomorphisms of $\\Gamma_I$-modules compatible with the stabilization maps:\n$$H_s(\\bar X_I{\\langle 1 \\rangle};H_l(\\mathcal{F}_I;{\\mathbb Q}))\\cong H_s(\\bar X_I{\\langle 1 \\rangle})\\otimes H_l(\\mathcal{F}_I;{\\mathbb Q})\\cong H^{CE}_s({\\mathfrak{g}}_I)\\otimes H_l(\\mathcal{F}_I;{\\mathbb Q}),$$ where $\\Gamma_I$ acts on the second and third terms diagonally. Note that $\\bar X_I$ and $X_I$ have the same universal cover, which is moreover naturally homotopy equivalent to $B{\\operatorname{aut}}_\\partial(N_I){\\langle 1 \\rangle}.$ The stability for $$H_r(\\pi_1(\\bar X_I);H^{CE}_s({\\mathfrak{g}}_I)\\otimes H_l(\\mathcal{F}_I;{\\mathbb Q}))$$ follows from stability for $$H_r(\\pi_1(\\bar X_I);C^{CE}_s({\\mathfrak{g}}_I)\\otimes H_l(\\mathcal{F}_I;{\\mathbb Q})),$$ exactly as in Proposition \\ref{algebraic homological stability result} upon using the two hyperhomology spectral sequences and the fact that $\\bar \\pi_I$ is rationally perfect (Lemma \\ref{H1 triv pi0}). 
Hence we are left with showing that the stabilization maps \n$$H_r(\\pi_1(\\bar X_I);C^{CE}_s({\\mathfrak{g}}_I)\\otimes H_l(\\mathcal{F}_I;{\\mathbb Q}))\\rightarrow H_r(\\pi_1(\\bar X_{I'});C^{CE}_s({\\mathfrak{g}}_{I'})\\otimes H_l(\\mathcal{F}_{I'};{\\mathbb Q}))$$ are isomorphisms for $g_p> 2r+2s+2l+2$ $(+4$ if $p=n\/2)$ and epimorphisms for $g_p\\geq2r+2s+2l+2$ $(+4$ if $p=n\/2).$ The Lyndon spectral sequence reduces this to the corresponding statement for $$H_r(\\Gamma_I;C^{CE}_s({\\mathfrak{g}}_I)\\otimes H_l(\\mathcal{F}_I;{\\mathbb Q}))\\rightarrow H_r(\\Gamma_{I'};C^{CE}_s({\\mathfrak{g}}_{I'})\\otimes H_l(\\mathcal{F}_{I'};{\\mathbb Q})).$$\nPropositions \\ref{Fschur} and \\ref{der iso compatible with stab maps} give us isomorphisms of $\\Gamma_I$-modules compatible with the stabilization map $C^{CE}_s({\\mathfrak{g}}_I)\\otimes H_l(\\mathcal{F}_I;{\\mathbb Q})\\cong \\mathscr{C}_s(H_I) \\otimes \\mathscr{H}_l(H_I).$ The functor $\\mathscr{C}_s$ is polynomial of degree $\\leq s\/2$ and the functor $\\mathscr{H}_l$ is polynomial of degree $\\leq l.$ The tensor product (in the sense of Schur multifunctors) $\\mathscr{C}_s\\otimes \\mathscr{H}_l$ is of degree $\\leq s\/2+l.$ By Proposition \\ref{general algebraic homological stability result} the stabilization maps\n$$H_r(\\Gamma_I;\\mathscr{C}_s\\otimes \\mathscr{H}_l(H_I))\\rightarrow H_r(\\Gamma_{I'};\\mathscr{C}_s\\otimes \\mathscr{H}_l(H_{I'}))$$ are isomorphisms for $g_p> 2r+s\/2+l+2$ $(+4$ if $p=n\/2)$ and epimorphisms for $g_p\\geq 2r+s\/2+l+2$ $(+4$ if $p=n\/2),$ which finishes the proof.\n\\end{proof}\n\n\n\n\\bibliographystyle{alpha}
+{"text":"\\section{Introduction}\n\\label{sec:intro}\nImage quality has become a critical evaluation metric in most image-processing applications, including image denoising, image super-resolution, compression artifacts reduction, \\etc. Directly acquiring perceptual quality scores from human observers is accurate. However, this requires time-consuming and costly subjective experiments. The goal of Image Quality Assessment (IQA) is to allow computers to simulate the Human Visual System (HVS) through algorithms to score the perceptual quality of images. In this case, the images to be evaluated are often degraded during compression, acquisition, and post-processing. \n\nIn recent years, the invention of Generative Adversarial Networks (GANs)~\\cite{goodfellow2014generative} has greatly improved the image processing ability, especially image generation~\\cite{gu2020image,xia2021tedigan} and image restoration~\\cite{wang2018esrgan}, while it also brings new challenges to image quality assessment. GAN-based methods can fabricate seemingly realistic but fake details and textures~\\cite{jinjin2020pipal}. In detail, it is hard for the HVS to distinguish the misalignment of the edges and texture decreases in the region with dense textures. As long as the semantics of textures are similar, the HVS will ignore part of the subtle differences of textures. Most IQA methods for traditional distortion images assess image quality through pixel-wise comparison, which will lead to underestimation for GAN-generated images~\\cite{wang2004image}. To deal with the texture misalignment, recent studies~\\cite{bosse2017deep} introduce patch-wise prediction methods. \nSome following studies~\\cite{shi2021region,jinjin2020pipal} further propose different spatially robust comparison operations into the CNN-based IQA network. 
However, they take each patch as an independent input and separately calculate its score and weight, which loses context information and makes it impossible to model the relationships between patches. \n\nTherefore, on the basis of patch-level comparison, we need to better model the interrelationship between patches. To this end, we use the Vision Transformer (ViT)~\\cite{dosovitskiy2020image} as a feature extractor, which can effectively capture long-range dependencies among patches through its multi-head attention mechanism. However, the vanilla ViT uses a large convolution kernel to down-sample the input images spatially before they enter the network; some details are thereby lost that are also crucial to image quality assessment. Based on this observation, we find that a shallow CNN is a good choice to provide detailed spatial information. \nThe features extracted by a shallow CNN contain unwanted noise, and merging ViT features with them would decrease the performance. To alleviate the impact of this noise, we propose to mimic the characteristic of the HVS that humans always pay attention to the salient regions of images. \nInstead of injecting the complete features from a shallow CNN into those from ViT, we only use those that convey spatial details of the salient regions for image quality assessment, thereby alleviating the aforementioned noise.\nFurthermore, using max-pooling or average-pooling to directly predict the score of an image loses crucial information.\nTherefore, we use an adaptively weighted strategy to predict the score of an image.\n\n\nIn this work, we introduce an effective hybrid architecture for image quality assessment, which leverages local details from a shallow CNN and global semantic information captured by ViT to further improve IQA accuracy. Specifically, we first adopt a two-branch feature extractor. 
Then, we use semantic information captured by ViT to find the salient region in images through deformable convolution~\\cite{dai2017deformable}. Based on the consideration that each pixel in the deep feature map corresponds to different patches of the input image, we introduce the patch-wise prediction module, which contains two branches, one to calculate a score for each image patch, the other one to calculate the weight of each score.\n\nExtensive experiments show that our method outperforms current approaches in four benchmark image quality assessment datasets~\\cite{sheikh2006statistical,larson2010most,ponomarenko2015image,jinjin2020pipal}. The scatter diagram of the correlation between predicted scores and MOS is shown in \\cref{fig:scatter} where the plot for IQT is from our own implementation. Visualization experiments reveal that the proposed method is almost linear with MOS, which means that we can better imitate human image perception. Our primary contributions can be summarized as follows:\n\\begin{itemize}\n\\item We propose an effective hybrid architecture for image quality assessment, which compares images at the patch level, adds spatial details as a supplement, and scores images patch by patch, considering the relationship between patches and different contributions from each patch.\n\n\\item Our method outperforms the state-of-the-art approaches on four benchmark image quality assessment datasets. In particular, the proposed architecture achieves outstanding performance on the PIPAL dataset with various GAN-based distortion and ranked first in the NTIRE 2022 challenge on perceptual image quality assessment.\n\\end{itemize}\n\n\\section{Related Works}\n\\label{sec:related}\n\\subsection{Image Quality Assessment}\nThe goal of IQA is to mimic the HVS to rate the perceived quality of an image accurately. Although it's easy for human beings to assess an image's perceptual quality, IQA is considered to be difficult for machines. 
Depending on the scenarios and conditions, current IQA methods can be divided into three categories: full-reference (FR), reduced-reference (RR), and no-reference (NR) IQA. FR-IQA methods take the distortion image and the corresponding reference image as inputs to measure their perceptual similarity. The most widely-used FR-IQA metrics are PSNR and SSIM~\\cite{wang2004image}, which are conventional and easy to optimize.\n\\begin{figure*}[th]\n \\centering\n \\includegraphics[scale=0.48]{architecture_final.pdf}\n \\caption{Overview of AHIQ. The proposed model takes a pair of the reference image and distortion image as input and then obtains feature maps through ViT~\\cite{dosovitskiy2020image} and CNN, respectively. The feature maps of the reference image from ViT are used as global information to obtain the offset map of the deformable convolution~\\cite{dai2017deformable}. After the feature fusion module which fuses the feature maps, we use a patch-wise prediction module to predict a score for each image patch. The final output is the weighted sum of the scores.}\n \\label{fig1:arch}\n\\end{figure*}\nApart from the conventional IQA methods, various learning-based FR-IQA methods~\\cite{zhang2018unreasonable,bosse2017deep,prashnani2018pieapp} have been proposed recently to address the limitations of conventional IQA methods. Zhang \\etal~\\cite{zhang2018unreasonable} proposed to use the learned perceptual image patch similarity (LPIPS) metric for FR-IQA and proved that deep features obtained through pre-trained DNNs outperform previous classic metrics by large margins. WaDIQaM~\\cite{bosse2017deep} is a general end-to-end deep neural network that enables joint learning of local quality and local weights. PieAPP~\\cite{prashnani2018pieapp} is proposed to learn to rank rather than learn to score, which means the network learns the probability of preference of one image over another. 
IQT~\\cite{cheon2021perceptual} applies an encoder-decoder transformer architecture with a trainable extra quality embedding and ranked first in the NTIRE 2021 perceptual image quality assessment challenge. In addition, common CNN-based NR-IQA methods~\\cite{su2020blindly,wu2020end,xia2020domain} directly extract features from the low-quality images and outperform traditional handcrafted approaches. You \\etal~\\cite{you2021transformer} recently introduced a transformer architecture for NR-IQA.\n\n\n\n\n\\subsection{Vision Transformer}\nTransformer architecture based on the self-attention mechanism~\\cite{vaswani2017attention} was first proposed in the field of Natural Language Processing (NLP) and significantly improved the performance of many NLP tasks thanks to its representation capability. Inspired by its success in NLP, efforts have been made to apply transformers to vision tasks such as image classification~\\cite{dosovitskiy2020image}, object detection~\\cite{carion2020end, zhu2020deformable}, low-level vision~\\cite{yang2020learning}, \\etc. The Vision Transformer (ViT) introduced by Dosovitskiy~\\etal~\\cite{dosovitskiy2020image} is directly inherited from NLP, but takes raw image patches as input instead of word sequences. ViT and its follow-up studies have become one of the mainstream feature extraction backbones besides CNNs.\n\nCompared with the most commonly used CNNs, the transformer can derive global information, while CNNs mainly focus on local features. In IQA tasks, global and local information are both crucial to the performance because when human beings assess image quality, both kinds of information are naturally taken into account. Inspired by this assumption, we propose to combine long-distance features and local features captured by ViT and CNNs, respectively. To fulfill this goal, we use a two-branch feature extraction backbone and feature fusion modules, which will be detailed in~\\cref{sec:method}. 
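The global-versus-local contrast can be made concrete: in self-attention, every patch token attends to every other token in a single step, whereas a convolution only mixes a local neighborhood. Below is a minimal NumPy sketch of single-head scaled dot-product attention over a sequence of patch tokens (the names, sizes, and random toy weights are ours, purely for illustration, and not the paper's implementation):

```python
import numpy as np

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over patch tokens.

    tokens: (num_patches, dim). Every output token is a weighted
    mixture of ALL input tokens, i.e. a global receptive field.
    """
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])           # (N, N) all-pairs scores
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)          # row-wise softmax
    return attn @ v                                   # (N, dim)

rng = np.random.default_rng(0)
n, d = 196, 64                                        # e.g. 14x14 patch grid
x = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * d**-0.5 for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)                                      # (196, 64)
```

A convolution applied to the same grid would instead update each location from only its $3\times3$ (or similar) neighborhood, which is why the two branches are complementary.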
\n\n\n\\subsection{Deformable Convolution}\nDeformable convolution~\\cite{dai2017deformable} is an efficient and powerful mechanism which was first proposed to deal with sparse spatial locations in high-level vision tasks such as object detection~\\cite{dai2017deformable,bertasius2018object,zhu2019deformable}, semantic segmentation~\\cite{zhu2019deformable}, and human pose estimation~\\cite{sun2018integral}. By using deformed sampling locations with learnable offsets, deformable convolution enhances the spatial sampling locations and improves the transformation modeling ability of CNNs. Recently, deformable convolution has continued its strong performance in low-level vision tasks including video deblurring~\\cite{wang2019edvr} and video super-resolution~\\cite{chan2021understanding}. It was first combined with IQA methods by Shi~\\etal~\\cite{shi2021region}, who perform a reference-oriented deformable convolution in the full-reference scenario.\n\n\n\n\\section{Methodology}\n\\label{sec:method}\n\nIn this section, we introduce the overall framework of the Attention-based Hybrid Image Quality Assessment Network (AHIQ). As shown in Fig.~\\ref{fig1:arch}, the proposed network takes pairs of reference images and distortion images as input, and it consists of three key components: a feature extraction module, a feature fusion module, and a patch-wise prediction module. \n\nBecause GAN-based image restoration methods~\\cite{wang2018esrgan,gu2020image} often fabricate plausible details and textures, it is difficult for a network to distinguish GAN-generated texture from noise and real texture by pixel-wise image difference. Our proposed model aims to deal with this. We employ the Vision Transformer to model the relationship and capture long-range dependencies among patches. Shallow CNN features are introduced to add detailed spatial information. In order to help the CNN focus on the salient region, we use deformable convolution guided by semantic information from ViT. 
We use an adaptively weighted scoring mechanism to give a comprehensive assessment.\n\n\n\n\\subsection{Feature Extraction Module}\n\\label{subsec:extraction}\n\\begin{figure}[th]\n\\centering\n\\includegraphics[scale=0.41]{architecture_feat.pdf}\n\\caption{The illustration of the Vision Transformer feature extraction module. The class token (orange) is discarded when the feature maps are extracted.\n}\n\\label{fig2:feat}\n\\end{figure}\nAs is depicted in \\cref{fig1:arch}, the front part of the architecture is a two-branch feature extraction module that consists of a ViT branch and a CNN branch. The transformer feature extractor mainly focuses on extracting global and semantic representations. Self-attention modules in the transformer enable the network to model long-distance features and encode the input image patches into feature representations. Patch-wise encoding is helpful for assessing the output image quality of GAN-based image restoration because it enhances the tolerance of spatial misalignment. Since humans also pay attention to details when judging the quality of an image, detailed and local information is also important. To this end, we introduce another CNN extraction branch apart from the transformer branch to add more local textures.\n\nIn the forward process, a pair of the reference image and distortion image are fed into the two branches, respectively, and we then take out their feature maps in the early stages. For the transformer branch, as illustrated in~\\cref{fig2:feat}, output sequences from the Vision Transformer~\\cite{dosovitskiy2020image} are reshaped into feature maps $f_{T}\\in \\mathbb{R}^{p\\times p \\times 5c}$ discarding the class token, where $p$ represents the size of the feature map. For the CNN branch, we extract shallow feature maps from ResNet~\\cite{he2016deep}, $f_{C}\\in \\mathbb{R}^{4p\\times 4p \\times C}$ where $C=256\\times 3$. 
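In NumPy terms, the token-to-map reshaping just described can be sketched as follows (the helper name and random toy inputs are ours; the layout with the class token at index 0 follows the standard ViT convention):

```python
import numpy as np

def tokens_to_feature_map(tokens, p):
    """Drop the class token and reshape the remaining p*p patch tokens
    of shape (p*p + 1, c) into a spatial feature map of shape (p, p, c)."""
    patch_tokens = tokens[1:]                 # discard class token at index 0
    c = patch_tokens.shape[-1]
    return patch_tokens.reshape(p, p, c)

p, c = 14, 768
# toy stand-ins for the outputs of 5 ViT blocks, each (p*p + 1, c)
blocks = [np.random.randn(p * p + 1, c) for _ in range(5)]
f_T = np.concatenate([tokens_to_feature_map(b, p) for b in blocks], axis=-1)
print(f_T.shape)                              # (14, 14, 3840), i.e. (p, p, 5c)
```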
Finally, we put the obtained feature maps into the feature fusion module, which will be specified next.\n \n\n\\subsection{Feature Fusion Module}\n\\label{subsec:fusion}\nWe argue that feature maps from the early stages of a CNN provide low-level texture details but bring along some noise. To address this problem, we take advantage of the transformer architecture to capture global and semantic information. In our proposed network, feature maps from ViT with rich semantic information are used to find the salient region of the image. This perception procedure is performed in a content-aware manner and allows the network to better mimic the way humans perceive image quality. In particular, the feature maps from ViT are used to learn an offset map for deformable convolution, as is shown in~\\cref{fig2:feat}. Then we perform this deformable convolution~\\cite{dai2017deformable} operation on the feature maps from the CNN, as elaborated on previously. In this way, features from a shallow CNN can be better modified and utilized for further feature fusion. As the previous description makes clear, the feature maps from the two branches differ in spatial dimensions and need to be aligned. Therefore, a simple 2-layer convolution network is applied to project the feature maps after deformable convolution to the same width~\\textit{W} and height~\\textit{H} as ViT. 
The whole process can be formulated as follows:\n\\begin{align}\n && \\Delta p &= \\text{Conv1}(f_{T}), \\\\\n && f_{C} &= \\text{DConv}(f_{org},\\Delta p), \\\\\n && f_{C}^{'} &= \\text{Conv2}(\\text{ReLU}(\\text{Conv2}(f_{C}))), \\\\\n && f^{u} &= \\text{Concat}[f_{T},f_{C}^{'}], \\\\\n && f_{all} &= \\text{Concat}[f^{u}_{dis},f^{u}_{ref},f^{u}_{dis}-f^{u}_{ref}], \\\\\n && f_{out} &= \\text{Conv3}(\\text{ReLU}(\\text{Conv3}(f_{all}))),\n\\end{align}\nwhere $f_T$ denotes feature maps from the transformer branch, $\\Delta p$ denotes the offset map, $f_{org}$ and $f_C$ denote feature maps from the CNN, and DConv means deformable convolution. Note that Conv2 is a convolution operation with a stride of 2, downsampling $f_C \\in \\mathbb{R}^{4p \\times 4p \\times C}$ by four times to $f_C^{'} \\in \\mathbb{R}^{p \\times p \\times C}$. \n\n\\subsection{Patch-wise Prediction Module}\n\\label{subsec:pooling}\nGiven that each pixel in the deep feature map corresponds to a different patch of the input image and contains abundant information, the information in the spatial dimension is indispensable. However, in previous works, spatial pooling methods such as max-pooling and average-pooling are applied to obtain a final single quality score. This pooling strategy loses some information and ignores the relationships between image patches. Therefore, we introduce a two-branch patch-wise prediction module which is made up of a prediction branch and a spatial attention branch, as illustrated in \\cref{fig:pixel}. The prediction branch calculates a score for each pixel in the feature map, while the spatial attention branch calculates an attention map for the corresponding scores. Finally, we obtain the final score by a weighted summation of the scores. The weighted sum operation helps to model the significance of each region to simulate the human visual system. 
This can be expressed as follows:\n\\begin{equation}\n s_f = \\frac{\\sum \\textbf{s} * \\textbf{w}}{\\sum \\textbf{w}},\n\\end{equation}\nwhere $\\textbf{s}\\in \\mathbb{R}^{H\\times W \\times 1}$ denotes the score map, $\\textbf{w}\\in \\mathbb{R}^{H\\times W \\times 1}$ denotes the corresponding attention map, $*$ denotes the Hadamard product, and $s_f$ is the final predicted score. The MSE loss between the predicted score and the ground truth score is utilized for the training process in our proposed method.\n\n\\begin{figure}[th]\n\\centering\n\\includegraphics[scale=0.43]{architecture_patch.pdf}\n\\caption{The pipeline of the proposed patch-wise prediction module. This two-branch module takes feature maps as input, then generates a patch-wise score map and its corresponding attention map to obtain the final prediction by weighted average.}\n\\label{fig:pixel}\n\\end{figure}\n\n\\begin{table*}[th]\n\\centering\n\\renewcommand{\\arraystretch}{1.08}\n\\caption{IQA datasets for performance evaluation and model training.}\n\\begin{tabular}{cccccccc}\n\\toprule[1.2pt]\nDatabase & \\# Ref & \\# Dist & Dist. Type & \\# Dist. Type & Rating & Rating Type & Env. \\\\ \\hline\nLIVE~\\cite{sheikh2006statistical} & 29 & 779 & traditional & 5 & 25k & MOS & lab \\\\\nCSIQ~\\cite{larson2010most} & 30 & 866 & traditional & 6 & 5k & MOS & lab \\\\\nTID2013~\\cite{ponomarenko2015image} & 25 & 3,000 & traditional & 25 & 524k & MOS & lab \\\\\nKADID-10k~\\cite{lin2019kadid} & 81 & 10.1k & traditional & 25 & 30.4k & MOS & crowdsourcing \\\\\nPIPAL~\\cite{jinjin2020pipal} & 250 & 29k & trad.+alg.outputs & 40 & 1.13m & MOS & crowdsourcing \\\\ \\toprule[1.2pt]\n\\label{tab:data}\n\\end{tabular}\n\\end{table*}\n\n\\begin{table*}[th]\n\\centering\n\\caption{Performance comparisons on LIVE, CSIQ, and TID2013 Databases. Performance scores of other methods are as reported in the corresponding original papers and~\\cite{ding2020image}. 
The best scores are~\\textbf{bolded} and missing scores are shown as ``--''.}\n\\setlength{\\tabcolsep}{6mm}{\n\\begin{tabular}{ccccccc}\n\\toprule[1.2pt]\n\\multirow{2}{*}{Method} & \\multicolumn{2}{c}{LIVE} & \\multicolumn{2}{c}{CSIQ} & \\multicolumn{2}{c}{TID2013} \\\\ \\cline{2-7} \n & PLCC & SROCC & PLCC & SROCC & PLCC & SROCC \\\\ \\hline\nPSNR & 0.865 & 0.873 & 0.819 & 0.810 & 0.677 & 0.687 \\\\\nSSIM~\\cite{wang2004image} & 0.937 & 0.948 & 0.852 & 0.865 & 0.777 & 0.727 \\\\\nMS-SSIM~\\cite{wang2003multiscale} & 0.940 & 0.951 & 0.889 & 0.906 & 0.830 & 0.786 \\\\\nFSIMc~\\cite{zhang2011fsim} & 0.961 & 0.965 & 0.919 & 0.931 & 0.877 & 0.851 \\\\\nVSI~\\cite{zhang2014vsi} & 0.948 & 0.952 & 0.928 & 0.942 & 0.900 & 0.897 \\\\\nMAD~\\cite{larson2010most} & 0.968 & 0.967 & 0.950 & 0.947 & 0.827 & 0.781 \\\\\nVIF~\\cite{sheikh2006image} & 0.960 & 0.964 & 0.913 & 0.911 & 0.771 & 0.677 \\\\\nNLPD~\\cite{laparra2016perceptual} & 0.932 & 0.937 & 0.923 & 0.932 & 0.839 & 0.800 \\\\\nGMSD~\\cite{xue2013gradient} & 0.957 & 0.960 & 0.945 & 0.950 & 0.855 & 0.804 \\\\ \nSCQI~\\cite{bae2016novel} & 0.937 & 0.948 & 0.927 & 0.943 & 0.907 & 0.905 \\\\ \\hline\nDOG-SSIMc~\\cite{pei2015image} & 0.966 & 0.963 & 0.943 & 0.954 & 0.934 & 0.926 \\\\\nDeepQA~\\cite{kim2017deep} & 0.982 & 0.981 & 0.965 & 0.961 & 0.947 & 0.939 \\\\\nDualCNN~\\cite{varga2020composition} & - & - & - & - & 0.924 & 0.926 \\\\\nWaDIQaM-FR~\\cite{bosse2017deep} & 0.98 & 0.97 & - & - & 0.946 & 0.94 \\\\\nPieAPP~\\cite{prashnani2018pieapp} & 0.986 & 0.977 & 0.975 & 0.973 & 0.946 & 0.945 \\\\\nJND-SalCAR~\\cite{seo2020novel} & 0.987 & \\textbf{0.984} & 0.977 & \\textbf{0.976} & 0.956 & 0.949 \\\\\nAHIQ (ours) & \\textbf{0.989} & \\textbf{0.984} & \\textbf{0.978} & 0.975 & \\textbf{0.968} & \\textbf{0.962} \\\\ \\toprule[1.2pt]\n\n\\end{tabular}}\n\\label{tab:sota}\n\\end{table*}\n\n\\section{Experiment}\n\\subsection{Datasets}\n\nWe employ four datasets that are commonly used in the research of perceptual image quality assessment, 
including LIVE~\\cite{sheikh2006statistical}, CSIQ~\\cite{larson2010most}, TID2013~\\cite{ponomarenko2015image}, and PIPAL~\\cite{jinjin2020pipal}. \\cref{tab:data} compares the listed datasets in more detail. Apart from PIPAL, these datasets only include traditional distortion types, while PIPAL additionally contains a large number of distorted images, including GAN-generated ones. \n\nAs recommended, we randomly split the datasets into training (60\\%), validation (20\\%), and test (20\\%) sets according to reference images, so that the test and validation data are not seen during the training procedure. We use the validation set to select the model with the best performance and use the test set to evaluate the final performance.\n\n\\subsection{Implementation Details}\nSince we use ViT~\\cite{dosovitskiy2020image} and ResNet~\\cite{he2016deep} models pre-trained on ImageNet~\\cite{ILSVRC15}, we normalize all input images and randomly crop them to $224\\times224$. We use the outputs of five intermediate blocks $\\{0,1,2,3,4\\}$ in ViT, each of which consists of a self-attention module and a Feed-Forward Network (FFN). The feature maps from these blocks, $f\\in \\mathbb{R}^{p\\times p \\times c}$ with $c=768$ and $p=14 \\ \\text{or} \\ 28$, are concatenated into $f_{T}\\in \\mathbb{R}^{p\\times p \\times 6c}$. We also take out the output feature maps from all three layers in stage 1 of ResNet and concatenate them to obtain $f_{C}\\in \\mathbb{R}^{56\\times 56 \\times C}$, where $C=256\\times 3$. Random horizontal flipping and rotation are applied during training. The training loss is computed using a mean squared error (MSE) loss function. During the validation and test phases, we randomly crop each image 20 times, and the final score is the average over the 20 cropped images. 
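The patch-wise weighted pooling of \\cref{subsec:pooling} combined with this 20-crop test-time averaging can be sketched in a few lines of NumPy. This is a minimal illustration with random stand-in maps; the shapes and data are hypothetical, not the actual AHIQ implementation:

```python
import numpy as np

def patchwise_score(score_map, attn_map):
    # Weighted average of the per-patch scores:
    # s_f = sum(s * w) / sum(w), with "*" the Hadamard (elementwise) product.
    return float((score_map * attn_map).sum() / attn_map.sum())

rng = np.random.default_rng(0)
p = 28  # spatial size of the score/attention maps (hypothetical)
crop_scores = []
for _ in range(20):  # 20 random crops per image at test time
    s = rng.random((p, p))  # stand-in for the prediction-branch score map
    w = rng.random((p, p))  # stand-in for the attention-branch weight map
    crop_scores.append(patchwise_score(s, w))
final_score = float(np.mean(crop_scores))  # average over the 20 crops
```

In the full model the score and attention maps are produced by the two branches of the prediction module; the random arrays here only exercise the pooling arithmetic.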
It should be noted that we use pretrained ViT-B\/16 as the backbone in all experiments on the traditional datasets LIVE, CSIQ, and TID2013, while ViT-B\/8 is utilized on PIPAL.\n\nFor optimization, we use the AdamW optimizer with an initial learning rate $lr$ of $10^{-4}$ and a weight decay of $10^{-5}$. We set the minibatch size to 8. The learning rate of each parameter group follows a cosine annealing schedule, where $\\eta_{max}$ is set to the initial $lr$ and the maximum number of epochs $T_{max}$ is 50. \nWe implement our proposed model AHIQ in PyTorch and train it on a single NVIDIA GeForce RTX2080 Ti GPU. The practical training runtimes differ across datasets, as the number of images in each dataset is different; training one epoch on the PIPAL dataset requires thirty minutes.\n\n\\subsection{Comparison with the State-of-the-art Methods}\nWe assess the performance of our model with Pearson's linear\ncorrelation coefficient (PLCC) and Spearman's rank-order correlation coefficient (SROCC). \nPLCC assesses the linear correlation between the ground-truth and predicted quality scores, whereas SROCC describes the level of monotonic correlation.\n\n\\vspace{2pt}\n\\noindent\\textbf{Evaluation on Traditional Datasets.} We evaluate the effectiveness of AHIQ on four benchmark datasets. For all our tests, we follow the experimental setup described above. As shown in~\\cref{tab:sota}, AHIQ outperforms or is competitive with WaDIQaM~\\cite{bosse2017deep}, PieAPP~\\cite{prashnani2018pieapp}, and JND-SalCAR~\\cite{seo2020novel} on all tested datasets. Especially on the more complex dataset TID2013, our proposed model achieves a solid improvement over previous work. This shows that AHIQ can cope well with different types of distorted images.\n\\begin{table}[th]\n\\centering\n\\caption{Performance comparison after training on the entire KADID dataset~\\cite{lin2019kadid}, then testing on the LIVE, CSIQ, and TID2013 databases. 
Some performance scores of other methods are taken from~\\cite{ding2020image}. The best scores are~\\textbf{bolded} and missing scores are shown as ``--''.}\n\\scalebox{0.82}{\n\\begin{tabular}{cccc}\n\\toprule[1.2pt]\n\\multirow{2}{*}{Method} & LIVE & CSIQ & TID2013 \\\\ \\cline{2-4} \n& PLCC\/SROCC & PLCC\/SROCC & PLCC\/SROCC \\\\ \\hline\n\nWaDIQaM~\\cite{bosse2017deep} & 0.940\/0.947 & 0.901\/0.909 & 0.834\/0.831 \\\\\nPieAPP~\\cite{prashnani2018pieapp} & 0.908\/0.919 & 0.877\/0.892 & 0.859\/0.876 \\\\\nLPIPS~\\cite{zhang2018unreasonable} & 0.934\/0.932 & 0.896\/0.876 & 0.749\/0.670 \\\\\nDISTS~\\cite{ding2020image} & \\textbf{0.954}\/0.954 & 0.928\/0.929 & 0.855\/0.830 \\\\\nIQT~\\cite{cheon2021perceptual} & -\/\\textbf{0.970} & -\/0.943 & -\/0.899 \\\\\nAHIQ (ours) & 0.952\/\\textbf{0.970} & \\textbf{0.955\/0.951} & \\textbf{0.899\/0.901} \\\\ \\toprule[1.2pt]\n\\end{tabular}\n}\n\\label{tab:kadid}\n\\end{table}\n\n\\vspace{2pt}\n\\noindent\\textbf{Evaluation on PIPAL.} We compare our model with the state-of-the-art FR-IQA methods on the NTIRE 2022 IQA challenge validation and testing datasets. As shown in \\cref{tab:pipal}, AHIQ achieves outstanding performance in terms of PLCC and SROCC compared with all previous work. In particular, our method substantially outperforms IQT, which is recognized as the first transformer-based image quality assessment network, through the effective fusion of features from the shallow CNN and ViT as well as the proposed patch-wise prediction module. This verifies the effectiveness of our model for quality assessment of GAN-based distortions.\n\n\\begin{table}[th]\n\\centering\n\\caption{\nPerformance comparison of different IQA methods on the PIPAL dataset. 
AHIQ-C is the ensemble version we used for the NTIRE 2022 Perceptual IQA Challenge.\n}\n\\begin{tabular}{ccccc}\n\\toprule[1.2pt]\n\\multirow{2}{*}{Method} & \\multicolumn{2}{c}{Validation} & \\multicolumn{2}{c}{Test} \\\\ \\cline{2-5} \n & PLCC & SROCC & PLCC & SROCC \\\\ \\hline\nPSNR & 0.269 & 0.234 & 0.277 & 0.249 \\\\\nNQM~\\cite{damera2000image} & 0.364 & 0.302 & 0.395 & 0.364 \\\\\nUQI~\\cite{wang2002universal} & 0.505 & 0.461 & 0.450 & 0.420 \\\\\nSSIM~\\cite{wang2004image} & 0.377 & 0.319 & 0.391 & 0.361 \\\\\nMS-SSIM~\\cite{wang2003multiscale} & 0.119 & 0.338 & 0.163 & 0.369 \\\\\nRFSIM~\\cite{zhang2010rfsim} & 0.285 & 0.254 & 0.328 & 0.304 \\\\\nGSM~\\cite{liu2011image} & 0.450 & 0.379 & 0.465 & 0.409 \\\\\nSRSIM~\\cite{zhang2012sr} & 0.626 & 0.529 & 0.636 & 0.573 \\\\\nFSIM~\\cite{zhang2011fsim} & 0.553 & 0.452 & 0.571 & 0.504 \\\\\nVSI~\\cite{zhang2014vsi} & 0.493 & 0.411 & 0.517 & 0.458 \\\\\nNIQE~\\cite{mittal2012making} & 0.129 & 0.012 & 0.132 & 0.034 \\\\\nMA~\\cite{ma2017learning} & 0.097 & 0.099 & 0.147 & 0.140 \\\\\nPI~\\cite{blau2018perception} & 0.134 & 0.064 & 0.145 & 0.104 \\\\\nBrisque~\\cite{mittal2011blind} & 0.052 & 0.008 & 0.069 & 0.071 \\\\ \\hline\nLPIPS-Alex~\\cite{zhang2018unreasonable} & 0.606 & 0.569 & 0.571 & 0.566 \\\\\nLPIPS-VGG~\\cite{zhang2018unreasonable} & 0.611 & 0.551 & 0.633 & 0.595 \\\\\nDISTS~\\cite{ding2020image} & 0.634 & 0.608 & 0.687 & 0.655 \\\\\nIQT~\\cite{cheon2021perceptual} & 0.840 & 0.820 & 0.799 & 0.790 \\\\ \\hline\nAHIQ (ours) & 0.845 & 0.835 & 0.823 & 0.813 \\\\\nAHIQ-C (ours) & \\textbf{0.865} & \\textbf{0.852} & \\textbf{0.828} & \\textbf{0.822} \\\\ \\toprule[1.2pt]\n\\end{tabular}\n\\label{tab:pipal}\n\\end{table}\n\n\\vspace{2pt}\n\\noindent\\textbf{Cross-Database Performance Evaluation.} \nTo evaluate the generalization of our proposed AHIQ, we conduct the cross-dataset evaluation on LIVE, CSIQ, and TID2013. We train the model on KADID and the training set of PIPAL respectively. 
Then we test it on the full set of the other three benchmark datasets. As shown in \\cref{tab:kadid} and \\cref{tab:cross}, AHIQ achieves satisfactory generalization ability. \n\n\\begin{table}[th]\n\\centering\n\\caption{Performance comparison for cross-database evaluations.}\n\\scalebox{0.82}{\n\\begin{tabular}{cccc}\n\\toprule[1.2pt]\n\\multirow{2}{*}{Method} & LIVE & CSIQ & TID2013 \\\\ \\cline{2-4} \n & PLCC\/SROCC & PLCC\/SROCC & PLCC\/SROCC \\\\ \\hline\nPSNR & 0.865\/0.873 & 0.786\/0.809 & 0.677\/0.687 \\\\\nWaDIQaM~\\cite{bosse2017deep} & 0.837\/0.883 & -\/- & 0.741\/0.698 \\\\\nRADN~\\cite{shi2021region} & 0.878\/0.905 & -\/- & 0.796\/0.747 \\\\\nAHIQ (ours) & \\textbf{0.911\/0.920} & \\textbf{0.861\/0.865} & \\textbf{0.804\/0.763} \\\\ \\toprule[1.2pt]\n\\end{tabular}\n}\n\\label{tab:cross}\n\\end{table}\n\n\n\n\\subsection{Ablation Study}\nIn this section, we analyze the effectiveness of the proposed network by conducting ablation studies on the NTIRE 2022 IQA Challenge testing datasets~\\cite{gu2022ntire}. With different configurations and implementation strategies, we evaluate the effect of each of the three major components: the feature extraction module, the feature fusion module, and the patch-wise prediction module.\n\\begin{table}[ht]\n\\centering\n\\caption{Comparison of different feature fusion strategies on the NTIRE 2022 IQA Challenge testing datasets. CNN refers to ResNet50 and ViT refers to ViT-B\/8 in this experiment. 
}\n\n\\begin{tabular}{cccccc}\n\\toprule[1.2pt]\n\\multirow{2}{*}{No.} & \\multicolumn{2}{c}{Feature} & \\multirow{2}{*}{Fusion Method} & \\multirow{2}{*}{PLCC} & \\multirow{2}{*}{SROCC} \\\\ \\cline{2-3}\n & CNN & ViT & & & \\\\ \\hline\n1 & \\checkmark & \\checkmark & deform+concat & \\textbf{0.823} & \\textbf{0.813} \\\\\n2 & \\checkmark & \\checkmark & concat & 0.810 & 0.799 \\\\\n3 & \\checkmark & & - & 0.792 & 0.789 \\\\\n4 & & \\checkmark & - & 0.799 & 0.788 \\\\ \\toprule[1.2pt]\n\\end{tabular}\n\\label{tab:fusion}\n\\end{table}\n\n\n\\vspace{2pt}\n\\noindent\\textbf{Feature Extraction Backbone.} We experiment with different representative feature-extraction backbones, and the comparison results are provided in \\cref{tab:backbone}. The CNN backbones used for comparison include ResNet50, ResNet101, ResNet152~\\cite{he2016deep}, HRNet~\\cite{wang2020deep}, and Inception-ResNet-V2~\\cite{szegedy2017inception}, and the transformer backbones include ViT-B\/16 and ViT-B\/8~\\cite{dosovitskiy2020image}. It is noteworthy that ViT-B consists of 12 transformer blocks, and the image patch sizes are $16\\times 16$ and $8\\times 8$ respectively, with an input shape of $224\\times 224$. \n\nThe network using ResNet50 and ViT-B\/8 performs best. The experimental results demonstrate that a deeper and wider CNN is unnecessary for AHIQ. We believe this is because the CNN's role in AHIQ is to provide shallow and local feature information. 
Since we only take out the intermediate feature maps from the first stage, the extracted shallow features carry less useful information when the network is too deep or too complex.\n\n\\begin{table}[ht]\n\\centering\n\\caption{Comparison of different feature extraction backbones on the NTIRE 2022 IQA Challenge testing datasets.}\n\\scalebox{0.95}{\n\\begin{tabular}{ccccc}\n\\toprule[1.2pt]\nCNN & ViT & PLCC & SROCC & Main Score \\\\ \\hline\nResNet50 & \\multirow{5}{*}{ViT-B\/8} & \\textbf{0.823} & \\textbf{0.813} & \\textbf{1.636} \\\\\nResNet101 & & 0.802 & 0.788 & 1.590 \\\\\nResNet152 & & 0.807 & 0.793 & 1.600 \\\\\nHRNet & & 0.806 & 0.796 & 1.601 \\\\\nIncepResV2 & & 0.806 & 0.793 & 1.599 \\\\ \\hline\nResNet50 & ViT-B\/16 & 0.811 & 0.803 & 1.614 \\\\ \\toprule[1.2pt]\n\\end{tabular}}\n\\label{tab:backbone}\n\\end{table}\n\n\\vspace{2pt}\n\\noindent\\textbf{Fusion strategy.} We further examine the effect of features from the CNN and ViT as well as the feature fusion strategies. As tabulated in \\cref{tab:fusion}, the first two experiments adopt different methods for feature fusion. The first one is the method we adopt in our AHIQ. For the second experiment, the features from the CNN and ViT are simply concatenated. The first method outperforms the second one by a large margin, which demonstrates that using deformable convolution to modify the CNN feature maps is highly effective. This further illustrates how the global, semantic information in the transformer guides the shallow features to attend to salient regions.\n\nWe also conduct ablation studies on using features from ViT and from the CNN separately. Results are shown in the last two rows of~\\cref{tab:fusion}. One can observe that using only one of the CNN and transformer branches results in a dramatic decrease in performance. 
This experimental result shows that both the global semantic information brought by ViT and the local texture information introduced by the CNN are crucial for this task, which is consistent with our previous claim.\n\\begin{figure}[th]\n\\centering\n\\includegraphics[scale=0.26]{optical_flow2.pdf}\n\\caption{The visualization of learned offsets from deformable convolution. For each case, the vector flow displaying the learned offsets and zoomed-in details are included.}\n\\label{fig:optical}\n\\end{figure}\n\n\\vspace{2pt}\n\\noindent\\textbf{Visualization of Learned Offset.} We visualize the learned offsets from deformable convolution in \\cref{fig:optical}. It can be observed that the learned offsets indicated by arrows mainly affect edges and salient regions. In addition, most of the offset vectors point from the background to the salient regions, which means that during convolution the sampling locations move toward the salient regions according to the learned offsets. These visualization results support our earlier argument that semantic information from ViT helps the CNN see better through deformable convolution.\n\n\\begin{table}[ht]\n\\centering\n\\caption{Comparison of different pooling strategies on the NTIRE 2022 IQA Challenge testing datasets. Note that ``Patch'' denotes the patch-wise prediction and ``Spatial'' denotes the spatial pooling.}\n\\begin{tabular}{cccc}\n\\toprule[1.2pt]\nPooling Strategy & PLCC & SROCC & Main Score \\\\ \\hline\nPatch & \\textbf{0.823} & \\textbf{0.813} & \\textbf{1.636} \\\\\nSpatial & 0.794 & 0.795 & 1.589 \\\\\nPatch + Spatial & 0.801 & 0.791 & 1.593 \\\\ \\toprule[1.2pt]\n\\end{tabular}\n\\label{tab:pooling}\n\\end{table}\n\n\n\\vspace{2pt}\n\\noindent\\textbf{Pooling Strategy.} Experiments on different pooling strategies are conducted, and the results are shown in \\cref{tab:pooling}. We first perform patch-wise prediction, which is elaborated in \\cref{subsec:pooling}. 
For comparison, we follow WaDIQaM~\\cite{bosse2017deep} and IQMA~\\cite{guo2021iqma} to use spatial pooling that combines max-pooling and average-pooling in the spatial dimension to obtain a score vector $S\\in \\mathbb{R}^{1\\times 1 \\times C}$. The final score is the weighted sum of the score vector, and the result is shown in the second row of \\cref{tab:pooling}. In the third experiment, we combine the two pooling methods by averaging the output scores of patch-wise prediction and spatial pooling. The patch-wise prediction module proposed in AHIQ performs best, and the experimental results further prove the validity of the patch-wise prediction operation. It confirms our previous claim that different regions should contribute differently to the final score.\n\n\n\\subsection{NTIRE 2022 Perceptual IQA Challenge}\nThis work was developed to participate in the NTIRE 2022 perceptual image quality assessment challenge~\\cite{gu2022ntire}, the objective of which is to propose an algorithm to estimate image quality consistent with human perception. The final results of the challenge in the testing phase are shown in~\\cref{tab:final}. Our ensemble approach won first place in terms of PLCC, SROCC, and main score.\n\\begin{table}[ht]\n\\centering\n\\caption{The results of NTIRE 2022 challenge FR-IQA track on the testing dataset. 
This table shows only a subset of the participants; the best scores are~\\textbf{bolded}.}\n\\begin{tabular}{cccc}\n\\toprule[1.2pt]\nMethod & PLCC & SROCC & Main Score \\\\ \\hline\nOurs\n& \\textbf{0.828} & \\textbf{0.822} & \\textbf{1.651} \\\\\n\\nth{2} \n& 0.827 & 0.815 & 1.642 \\\\\n\\nth{3} & 0.823 & 0.817 & 1.640 \\\\\n\\nth{4} & 0.775 & 0.766 & 1.541 \\\\\n\\nth{5} & 0.772 & 0.765 & 1.538 \\\\ \\toprule[1.2pt]\n\\end{tabular}\n\\label{tab:final}\n\\end{table}\n\n\n\\section{Conclusion}\nIn this paper, we propose a novel network called Attention-based Hybrid Image Quality Assessment Network (AHIQ) for the full-reference image quality assessment task. The proposed hybrid architecture takes advantage of the global semantic features captured by ViT and the local detailed textures from a shallow CNN during feature extraction. To help the CNN pay more attention to the salient regions in the image, semantic information from ViT is adopted to guide the deformable convolution, so that the model can better mimic how humans perceive image quality. We further propose a feature fusion module to combine the different features. We also introduce a patch-wise prediction module to replace spatial pooling and preserve information in the spatial dimension. Experiments show that the proposed method not only outperforms the state-of-the-art methods on standard datasets, but also has a strong generalization ability on unseen and hard samples, especially GAN-based distortions. The ensembled version of our method ranked first in the FR track of the NTIRE 2022 Perceptual Image Quality Assessment Challenge.\n\n\\vspace{2pt}\n\\noindent\\textbf{Acknowledgment.} This work was supported by the Key Program of the National Natural Science Foundation of China under Grant No. 
U1903213 and the Shenzhen Key Laboratory of Marine IntelliSense and Computation under Contract ZDSYS20200811142605016.\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\n\nGraphene, a single atomic layer of carbon arranged in a honeycomb lattice, holds great promise for numerous applications due to its remarkable mechanical, optical, and electronic properties and serves as a powerful material platform for studying relativistic Dirac fermions due to its linearly dispersing bands\\cite{novoselov_two-dimensional_2005,geim_rise_2007, castro_neto_electronic_2009}. Graphene was also the first material in which a member of the striking class of macroscopic quantum phenomena\\cite{anderson_observation_1995,osheroff_evidence_1972,willett_observation_1987,bardeen_theory_1957} -- the quantum Hall effect (QHE)\\cite{klitzing_new_1980} -- could be observed at room temperature, when subject to large magnetic fields\\cite{novoselov_room-temperature_2007}. In the quantum Hall state, charge carriers are forced into cyclotron orbits with quantized radii and energies known as Landau levels (LLs), once subjected to the influence of a magnetic field. In order to observe this effect, certain conditions must be met: The magnetic field must be large enough that the resulting spacing between LLs is larger than the thermal energy $(\\Delta E_{LL}>k_BT)$; the charge carrier lifetime between scattering events must be longer than the characteristic time of the cyclotron orbit $(t_\\textrm{life}>1\/\\omega _c)$; and the magnetic field must be uniform on length scales greater than the LL orbit. This typically mandates the need for cryogenic temperatures, clean materials, and large applied magnetic fields. 
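As a rough numerical illustration of the first condition ($\\Delta E_{LL}>k_BT$): for graphene's Dirac spectrum, the gap between the $n=0$ and $n=1$ Landau levels is $v_F\\sqrt{2\\hbar eB}$, which at tens of tesla far exceeds the room-temperature thermal energy. The sketch below assumes a typical graphene Fermi velocity of $10^6\\ \\textrm{ms}^{-1}$ and a field of 30\\,T as illustrative values:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
e = 1.602176634e-19     # elementary charge, C
kB = 1.380649e-23       # Boltzmann constant, J/K
v_F = 1.0e6             # typical graphene Fermi velocity, m/s (assumed)

def landau_gap_eV(B):
    # n = 0 -> n = 1 Landau-level gap of a Dirac spectrum: v_F * sqrt(2 hbar e B)
    return v_F * math.sqrt(2.0 * hbar * e * B) / e

gap = landau_gap_eV(30.0)   # roughly 0.2 eV at 30 T
kT_room = kB * 300.0 / e    # roughly 0.026 eV at 300 K
# gap >> kT_room: Delta E_LL > k_B T is satisfied even at room temperature
```

The square-root dependence on $B$ is particular to Dirac fermions; for a conventional parabolic band the gap $\\hbar\\omega_c$ grows only linearly in $B$ and is far smaller at these fields.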
Dirac fermions in graphene provide a way to lift these restrictions: Under certain strain patterns, graphene's electrons behave as if they were under the influence of large magnetic fields, without applying an actual field from outside the material\\cite{guinea_energy_2010,levy_strain-induced_2010,liu_tailoring_2018,rechtsman_strain-induced_2013}. These so-called pseudomagnetic fields only couple to the relativistic electrons around the Dirac point and, under the QHE conditions above, lead to the formation of flat, quantized LLs. This has been successfully observed using a range of methods\\cite{levy_strain-induced_2010,liu_tailoring_2018,rechtsman_strain-induced_2013}, but has so far been restricted to small regions, which severely limits its applicability.\n\n\\section*{Results}\n\n\\begin{figure}\n\\makebox[\\textwidth]{\\includegraphics[width=183mm]{F2_AFM_v8.png}}\n\\caption{\\textbf{Identification of strained nanoprisms.} (\\textbf{A}) Horizontal derivative AFM topography image of our monolayer graphene grown on a SiC substrate. Triangular nanoprisms are dispersed on the surface. Inset: AFM topography image of the same area. Substrate terrace steps are about $10\\ \\textrm{nm}$ in height. (\\textbf{B}) Top: Close-up AFM topography of the area indicated by the black box in (\\textbf{A}). Bottom: Line cut through the AFM data marked by the purple line in the close-up. (\\textbf{C}) Overview STM topography image (200 nm x 200 nm, $V_{sample}=100\\ \\textrm{mV}$, $I_{tun.}=2\\ \\textrm{pA}$) showing a single nanoprism. (\\textbf{D}) Schematic structure of 6H-SiC, showing its layered ABCACB stacking order with epitaxial graphene on top (yellow). Inside the nanoprism a single layer \\emph{within} the unit cell is missing, exposing the graphene to a different substrate surface termination, as illustrated in the top view. The carbon buffer layer is not shown for clarity. 
(\\textbf{E}) Atomically resolved STM images (10\\,nm x 10\\,nm, $V_{sample}=30\\ \\textrm{mV}$, $I_{tun.}=2\\ \\textrm{pA}$) inside (top) and outside (bottom) of the nanoprism. (\\textbf{F}) Difference map of the two Fourier transformed (FT) images in (\\textbf{E}) visualizing the strain pattern inside the nanoprism.}\n\\label{fig:f2_afm}\n\\end{figure}\n\nHere, we directly visualize the formation of flat LLs close to the Fermi energy induced by pseudomagnetic fields on wafer-scale semiconductor samples. By measuring the hallmark $\\sqrt{n}$ energy spacing and momentum dependence of the ensuing pseudo-LLs with angle-resolved photoemission spectroscopy (ARPES), and with the aid of model calculations, we confirm their quantum Hall nature and extract a pseudomagnetic field strength of B = 41\\,T. This is made possible by the presence of a distribution of triangular nanoprisms underneath the monolayer graphene in our samples based on the well-established platform of epitaxial graphene on SiC substrates\\cite{emtsev_towards_2009,riedl_quasi-free-standing_2009,bostwick_observation_2010,zhou_substrate-induced_2007}, as revealed by a combination of atomic force microscopy (AFM) and scanning tunneling microscopy (STM) measurements.\\par\nOur topographic images of these samples (Fig.\\,\\ref{fig:f2_afm}A inset) exhibit the well-known terraces and step edges of graphene grown on 6H-SiC\\cite{emtsev_towards_2009}, which are due to a miscut of the wafers from the $(0001)$ direction of up to $0.1^\\circ$. A population of triangular nanoscale features is identified on the terraces of our samples (Fig.\\,\\ref{fig:f2_afm}A), which appear similar to those reported on comparable substrates\\cite{bolen_graphene_2009,momeni_pakdehi_homogeneous_2019}. These nanoprisms appear during the growth process of graphene on 6H-SiC and are controllable by the argon flow in the chamber\\cite{momeni_pakdehi_homogeneous_2019}. 
They cover between 5\\% and 10\\% of the terraces and are completely covered by monolayer graphene, the latter being demonstrated by our AFM adhesion images (see Supplementary Material Figs.\\,S8 and S9B). They are equilateral, have a narrow size distribution around $300\\,\\textrm{nm}$ side length, are oriented in the same direction, and are about $(2.7\\pm 0.7)\\, \\textrm{\\AA}$ deep (Fig.\\,\\ref{fig:f2_afm}B), which corresponds to a single missing SiC double layer or $\\frac{1}{6}$ of the 6H-SiC unit cell. This leads to a change in the registry between the silicon atoms in the top layer of the substrate and the graphene as illustrated in Fig.\\,\\ref{fig:f2_afm}D. The strain created inside the nanoprisms cannot be relieved, because the nanostructures are continuously covered by monolayer graphene without additional grain boundaries, as corroborated by our STM images across the edge (see Supplementary Material Fig.\\,S9A). To get a more detailed view of the strain pattern, we perform additional atomic-resolution STM measurements. The images taken inside and outside the nanoprisms (Fig.\\,\\ref{fig:f2_afm}E) show the expected $(6\\sqrt{3}\\times6\\sqrt{3})\\text{R}30^\\circ$ modulation with respect to SiC on top of the carbon honeycomb lattice\\cite{riedl_structural_2010}. However, taking the difference of the two Fourier transformed images (Fig.\\,\\ref{fig:f2_afm}F) reveals a shear strain pattern inside the nanoprism, with a maximal observed strain of roughly $3^\\circ$.\n\n\\begin{figure}\n\\makebox[\\textwidth]{\\includegraphics[width=184mm]{F1_ARPES_v9.png}}\n\\caption{\\textbf{Momentum-resolved visualization of Landau levels.} (\\textbf{A}) ARPES cut through the Dirac cone at the K point at 300\\,K. The data have been divided by the Fermi function and symmetrized to compensate for matrix element effects\\cite{shirley_brillouin-zone-selection_1995}. (\\textbf{B}) Cut along the energy axis integrated around the K point in (\\textbf{A}). 
(\\textbf{C}) Second derivative of the data in (\\textbf{A})\\cite{zhang_precise_2011}. (\\textbf{D}) Inverted second derivative of the data shown in (\\textbf{B}) after smoothing. (\\textbf{A})--(\\textbf{D}) Landau levels (LLs) are indicated by arrows. (\\textbf{E}) Summary of LL data sets, with model fit according to Eqn.\\,\\ref{eq:landaulevels} shown in black; the 95\\% confidence interval of the fit is shown in grey. Different symbols indicate different samples and temperatures: sample A (6\\,K) [hexagons], sample B (6\\,K) [squares], sample B 2nd data set (6\\,K) [stars], sample B (300\\,K) [diamonds], sample C (6\\,K) [circles], and sample C 2nd data set (6\\,K) [triangles]. ARPES data for the additional samples can be found in the supplementary Fig.\\,S7. Inset: Same data plotted versus $\\sqrt{n}$, giving the expected linear behaviour for LLs in a Dirac material. (\\textbf{F}) Sketch of various mechanisms which may lead to ARPES intensity inside the cone. Neither electron-phonon coupling nor contamination from bilayer graphene can explain the experimental findings.}\n\\label{fig:f1_arpes}\n\\end{figure}\n\nIn order to confirm if the induced strain pattern indeed leads to flat Landau levels close to the Fermi energy, we perform a series of high-resolution ARPES measurements. ARPES is a momentum- and energy-resolved technique that has proven to be a powerful tool in directly studying the electronic band structures of a vast variety of quantum phases of matter, from strongly-correlated electron systems and high-T$_{c}$ superconductors\\cite{damascelli_probing_2004} to topological insulators and semimetals\\cite{hsieh_topological_2008,chen_experimental_2009,lv_observation_2015}. Yet no study of quantum Hall states has been performed, since ARPES is strictly incompatible with the application of magnetic fields, as essential crystal momentum information carried by the photoemitted electrons would be lost through interaction with the field. 
This however is different for pseudomagnetic fields, as they only interact with the Dirac electrons inside the material. We note that, while a recently developed momentum-resolved technique amenable to magnetic fields has been reported\\cite{jang_full_2017}, it necessarily requires sophisticated heterostructures, physically accessible fields, and is limited to a small sector of the Brillouin zone.\n\nOur ARPES data, which -- due to the $\\sim$1\\,mm spot size of the photon source -- correspond to the spatial average over unstrained and strained regions of the sample, show the expected Dirac cone as well as new flat bands that gradually merge with the linear dispersion (Figs.\\,\\ref{fig:f1_arpes}A and \\ref{fig:f1_arpes}C). The unequal energy spacing of these newly observed bands can be extracted from cuts along the energy direction at the K point (Fig.\\,\\ref{fig:f1_arpes}B) and their second derivative (Fig.\\,\\ref{fig:f1_arpes}D). By plotting the positions of these bands (Fig.\\,\\ref{fig:f1_arpes}E), we observe the distinct $\\sqrt{n}$ energy spacing which is a hallmark of LLs for graphene's massless Dirac charge carriers\\cite{geim_rise_2007}, where $n$ is the integer LL index. The spectrum of LLs in graphene is given by\\cite{castro_neto_electronic_2009}\n\\begin{equation}\n E_n=\\text{sgn}(n)\\sqrt{2v_F^2\\hbar eB\\cdot |n|}+E_{DP}\n \\label{eq:landaulevels}\n\\end{equation} \nwhere $v_F$ is the velocity of the electrons at the Fermi level, $\\hbar$ the reduced Planck constant, $e$ the electron charge, $B$ the magnitude of the (pseudo-)magnetic field, and $E_{DP}$ the binding energy of the Dirac point. Using the ARPES dispersion map in Fig.\\,\\ref{fig:f1_arpes}A, the Fermi velocity is determined to be $v_F=(9.50\\pm0.08)\\times10^5\\ \\textrm{ms}^{-1}$ (see Supplementary Material Fig.\\,S1). 
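As a quick numerical consistency sketch, Eqn.~\\ref{eq:landaulevels} can be evaluated with the measured Fermi velocity and the 41\\,T pseudomagnetic field quoted above; this back-of-envelope calculation is only illustrative and not part of the actual fitting procedure:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
e = 1.602176634e-19     # elementary charge, C
v_F = 9.5e5             # m/s, from the ARPES dispersion
B = 41.0                # T, pseudomagnetic field
E_DP = -0.460           # eV, Dirac point binding energy below E_F

def E_n(n):
    # Eqn. (1): E_n = sgn(n) * sqrt(2 v_F^2 hbar e B |n|) + E_DP, in eV
    sgn = (n > 0) - (n < 0)
    return sgn * math.sqrt(2.0 * v_F**2 * hbar * e * B * abs(n)) / e + E_DP

levels = [E_n(n) for n in range(5)]
# E_1 - E_0 is about 0.22 eV, and the spacing shrinks as sqrt(n):
# E_n - E_0 is proportional to sqrt(n), so E_4 - E_0 = 2 * (E_1 - E_0)
```

The resulting $\\sqrt{n}$-spaced ladder between the Dirac point and the Fermi level is what is plotted in Fig.\\,\\ref{fig:f1_arpes}E.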
Fitting our experimental data to Eqn.\\,\\ref{eq:landaulevels} as done in Fig.\\,\\ref{fig:f1_arpes}E, we extract the magnitude of the pseudomagnetic field, which yields $B=(41\\pm 2)\\ \\textrm{T}$. Remarkably, this pseudomagnetic field value is consistent between several samples from cryogenic temperatures (6\\,K) up to room temperature. The model fit also consistently pinpoints the binding energy of the Dirac point to $E_{DP}=(460\\pm 10)\\ \\textrm{meV}$ relative to the Fermi level, which agrees well with previous reports on this sample system\\cite{bostwick_quasiparticle_2007,emtsev_towards_2009} and is attributed to charge transfer from the SiC substrate to the graphene layer. Additionally, the LLs are only resolved in the upper part of the Dirac cone, closer to the Fermi level\\cite{levy_strain-induced_2010, song_high-resolution_2010}. We attribute this effect to the increased scattering phase space as one moves away from the Fermi level, which manifests itself in our ARPES data by an increased line width of the bands (Supplementary Material Fig.\\,S1).\n\nAs for other alternative explanations of the data, we note that while previous ARPES studies of graphene on SiC have shown a rich variety of features\\cite{ludbrook_evidence_2015,ohta_controlling_2006}, the signature $\\sqrt{n}$ spacing of the levels (Fig.\\,\\ref{fig:f1_arpes}E and inset) allows us to unambiguously distinguish the observed effect from other possibilities (Fig.\\,\\ref{fig:f1_arpes}F). For example, if spectral weight inside the Dirac cone arose from coupling of electrons to phonons\\cite{ludbrook_evidence_2015}, it would be limited to characteristic vibrational energies. 
Similarly, contributions from bilayer and higher order graphene layers, which can appear in small quantities near step edges of the substrate during the growth process\\cite{ohta_controlling_2006} (see also AFM adhesion image in the Supplementary Material Fig.\\,S9B), would lead to a manifold of bands, but would not reproduce the observed band structure\\cite{ohta_interlayer_2007,marchenko_extremely_2018}. Furthermore, previously reported plasmaronic interactions in samples with higher electronic doping \\cite{bostwick_observation_2010} can also be excluded. They lead to renormalizations of electronic bands around the Dirac point, but show a distinctly different spectrum than what is observed in our experiments. Finally, the effects of different defect geometries in graphene and their influence on the Dirac cone dispersion have recently been discussed \\cite{kot_band_2018}, but do not lead to flat bands around the Dirac point.\n\n\\begin{figure}\n\\makebox[\\textwidth]{\\includegraphics[width=183mm]{F3_THEORY_v8.png}}\n\\caption{\\textbf{Model calculation of strain-induced Landau levels.} (\\textbf{A}) Top: Honeycomb lattice, with the two sublattices A (red) and B (yellow). The black arrows indicate the symmetry of the strain pattern. Bottom: Triangular flake with strain-induced pseudomagnetic field $B = 41$\\,T. The colour scale indicates the relative bond stretching. (\\textbf{B}) Spectral function for the gapless case with Semenoff mass $M = 0\\ \\textrm{meV}$. (\\textbf{C}) Energy cut through the Dirac point (K) of the spectral function in (\\textbf{B}). The dashed grey lines indicate the position of the Landau levels (LL) predicted by Eqn.\\,\\ref{eq:landaulevels}. (\\textbf{D}) Spectral function averaged over a uniform distribution of Semenoff masses $M \\in [-135, 135]$\\,meV. (\\textbf{E}) Energy cut through the Dirac point (K) of the spectral function in (\\textbf{D}). 
The shaded grey area indicates the broadening of the Landau levels predicted by Eqns.\\,\\ref{eq:landaulevels} and \\ref{eq:landaulevels_mass}.}\n\\label{fig:f3_theory}\n\\end{figure}\n\nTo gain deeper insights on the origin of the observed LLs, we model a region of graphene experiencing a uniform strain-induced pseudomagnetic field. We use the simplest such strain pattern, worked out by Guinea et al.\\cite{guinea_energy_2010}, which exhibits the triangular symmetry of the underlying honeycomb lattice. Using a tight-binding approach, we directly simulate a finite-size strained region with open boundary conditions and armchair edges (further details in Methods section). We find that the observed LL spectra can be well reproduced by a triangular flake of side length $L = 56$\\,nm (Fig.\\,\\ref{fig:f3_theory}A), subject to a uniform pseudomagnetic field $B = 41$\\,T over the entire flake (Fig.\\,\\ref{fig:f3_theory}A). The maximal strain (or relative bond stretching) reaches around $3\\%$, which is in good agreement with our STM measurements. The ARPES data can be simulated by calculating the energy and momentum-resolved spectral function $A(\\bm{k},\\omega)$ of this triangular flake, here shown in Figs.\\,\\ref{fig:f3_theory}B and \\ref{fig:f3_theory}C. Our simulation clearly reproduces the main features of the ARPES data, namely levels that: (i) follow $\\sqrt{n}$ spacing in energy; (ii) are flat \\emph{inside} the Dirac cone and merge with the linearly dispersing bands; (iii) become less clearly resolved with increasing index $n$.\n\nFeatures (ii) and (iii) can be understood by comparing the characteristic size of a Landau orbit $\\propto\\sqrt{n}\\,l_B$ (with the magnetic length $l_B=\\sqrt{\\frac{\\hbar}{eB}}$) to the length scale $\\lambda$ on which the pseudomagnetic field is uniform. 
For LLs to exist, an electron on a given Landau orbit must experience a uniform pseudomagnetic field\\cite{settnes_pseudomagnetic_2016}, leading to the condition $\\sqrt{n}\\,l_B \\ll \\lambda$. Hence, for large fields $B$ or large $\\lambda$, flat bands are expected across the entire Brillouin zone, whereas Dirac cones are recovered in the opposite limit (see Supplementary Material Fig.\\,S2). The bands observed in the ARPES data can thus be understood as LLs, where the orbit size is only somewhat smaller than $\\lambda$: by comparing the experimental data and the model calculation, we estimate $l_B \\sim 4$~nm and $\\lambda \\sim 30$\\,nm (see Methods section). Furthermore, since the size of Landau orbits grows as $ \\sim \\sqrt{|n|}$, eventually it becomes comparable to $\\lambda$, explaining why levels with higher index $n$ are less clearly resolved.\n\nHowever, our simple model (Figs.\\,\\ref{fig:f3_theory}B and \\ref{fig:f3_theory}C) consistently exhibits a sharp zeroth Landau level (LL0), which is absent in the ARPES data. This discrepancy is surprising, since LL0 is known to be stable against inhomogeneities of the magnetic field as well as against disorder, as long as the latter preserves the chiral symmetry of graphene\\cite{aharonov_ground_1979}. Below, we provide a possible mechanism that broadens LL0 without substantially affecting the higher LLs. It has been argued that graphene grown on SiC is subject to a sublattice-symmetry-breaking potential arising from the interaction with the substrate\\cite{zhou_substrate-induced_2007}. The minimal theoretical model describing this effect, which acts as a staggered potential between sublattices A and B, is the so-called Semenoff mass $M$\\cite{semenoff_condensed-matter_1984}. 
This mass term opens a gap at the Dirac point and shifts the LL spectrum for $n\\neq0$ to (for a more detailed discussion, and the particular case of $n=0$, see Supplementary Material)\\cite{hunt_massive_2013}:\n\\begin{equation}\n E_n = \\text{sgn}(n) \\sqrt{ 2 v_F^2 \\hbar e B\\cdot|n| + M^2}+E_{DP}.\n \\label{eq:landaulevels_mass}\n\\end{equation}\nHowever, a \\emph{uniform} mass term $M$ cannot explain the ARPES data. Indeed, a fit of the observed LL spectrum to Eqn.\\,\\ref{eq:landaulevels_mass} returns $M=(150\\pm 5)\\ \\textrm{meV}$, but places the Dirac point at an unrealistic binding energy of $E_{DP}=390\\ \\textrm{meV}$ (see Supplementary Material Fig.\\,S3 and Fig.\\,S6). Therefore, we postulate that the mass term $M$ varies on a length scale much greater than the magnetic length $l_B \\sim$\\,4\\,nm, but smaller than the ARPES spot size ($\\sim$\\,1\\,mm). In that situation, our ARPES measurements would simply average over spectral functions described by different mass terms. This is shown in Figs.\\,\\ref{fig:f3_theory}D and \\ref{fig:f3_theory}E for a uniform distribution in the interval $M \\in [-135, 135]$\\,meV. As evident from Eqn.\\,\\ref{eq:landaulevels_mass}, the distribution of mass terms affects LL0 most, while merely contributing an additional broadening to the higher levels. Note that, as observed experimentally, the variation of the mass term is not limited to the strained areas, but instead is a property of the whole sample; as a result, ARPES always picks up a spatial average of strained areas with LLs and unstrained areas with the usual Dirac cone dispersion, both having the same distribution of mass terms and corresponding Dirac point gaps. 
This phenomenological model is in good agreement with the experimental data and may renew interest in the variation of the mass term in this sample system\\cite{zhou_substrate-induced_2007}.\n\n\\section*{Discussion}\n\nThis study provides the first demonstration of the room temperature strain-induced quantum Hall effect in graphene on a wafer-scale platform, as well as the first direct momentum-space visualization of graphene electrons in the strain-induced quantum Hall phase by ARPES, whereby the linear Dirac dispersion collapses into a ladder of quantized LLs. This opens a path for future momentum-resolved studies of strain-induced, room temperature-stable topological phases in a range of materials including Dirac and Weyl semimetals\\cite{pikulin_chiral_2016,cortijo_elastic_2015,liu_quantum_2017}, monolayer transition metal dichalcogenides\\cite{rostami_theory_2015}, and even nodal superconductors\\cite{massarelli_pseudo-landau_2017,nica_landau_2018}, all under large, potentially controllable pseudomagnetic fields. Importantly, these systems will feature time reversal invariant ground states -- otherwise impossible with a true magnetic field -- and may act as future building blocks for pseudo spin- or valley-tronic based technologies\\cite{low_strain-induced_2010}. In light of the recently discovered unconventional superconductivity in 'magic angle' bilayer graphene\\cite{cao_correlated_2018,cao_unconventional_2018}, strain-induced pseudomagnetic fields likewise raise the possibility of engineering exotic variants of correlated states including superconductivity in LLs\\cite{uchoa_superconducting_2013} and fractional topological phases\\cite{ghaemi_fractional_2012}. 
Our results lay the foundations for bottom-up strain-engineering of novel quantum phases at room temperature and on a technologically relevant wafer-scale platform.\n\n\section*{Materials and Methods}\n\n\subsection*{Sample growth and characterization}\n\nGraphene samples with a carbon buffer layer were epitaxially grown on commercial 6H-SiC substrates. The substrates were hydrogen-etched prior to the growth under argon atmosphere. Details are described by \emph{S. Forti and U. Starke}\cite{forti_epitaxial_2014}. AFM characterization measurements were taken at the Max Planck Institute in Stuttgart. Adhesion images correspond to the force necessary to retract the tip from the sample. Adhesion is sensitive to the graphene coverage on the sample and can thus distinguish between zero layer, monolayer and bilayer graphene with sensitivity to grain boundaries.\n\n\subsection*{ARPES measurements}\n\nExperiments were performed at UBC in an ultra-high vacuum chamber equipped with a SPECS Phoibos 150 analyzer with $\Delta E=6\,\textrm{meV}$ and $\Delta k=0.01\,\textrm{\AA}^{-1}$ optimum energy and momentum resolutions, respectively, at a base pressure of better than $p=7\times10^{-11}\,\textrm{Torr}$. Photons with an energy of $21.2$\,eV were provided by a SPECS UVS300 monochromatized gas discharge lamp. Our homebuilt six-axis cryogenic manipulator allows for measurements between 300\,K and 3.5\,K. Additional data sets were taken at UBC with a second ARPES setup equipped with a Scienta R4000 analyzer and a Scienta VUV5000 UV source with $\Delta E=1.5\,\textrm{meV}$ and $\Delta k=0.01\,\textrm{\AA}^{-1}$ optimum energy and momentum resolutions, respectively, for $21.2\ \textrm{eV}$ photons. 
The samples were annealed at 600$^\\circ$C for about 2\\,h at $p=1\\times10^{-9}\\,\\textrm{Torr}$ and then at 500$^\\circ$C for about 10\\,h at $p=5\\times10^{-10}\\,\\textrm{Torr}$ immediately before the ARPES measurements.\n\n\\subsection*{STM measurements}\n\nExperiments were performed at UBC under ultra-high vacuum conditions ($<5\\times10^{-12}$\\,mbar) using a low-temperature scanning tunnelling microscope (Scienta Omicron) at liquid helium temperatures ($\\sim$\\,4.2\\,K). All images were acquired in constant-current mode using a cut\\linebreak platinum-iridium tip, which was conditioned by voltage pulsing and gentle indentation into a Ag(111) crystal. The samples were annealed at 550$^\\circ$C overnight with a final pressure of $p=3\\times10^{-10}\\,\\textrm{mbar}$ \\emph{in situ} prior to the STM measurements.\n\n\\subsection*{Model calculation}\n\nWe considered a minimal tight-binding model on the honeycomb lattice with nearest-neighbour hoppings and a sublattice-symmetry breaking Semenoff\\cite{semenoff_condensed-matter_1984} mass term $M$:\n\\begin{equation}\nH = -t \\sum_{<\\bm{r}, \\bm{r'}>} \\left( c_A^\\dagger(\\bm{r)} c_B(\\bm{r'}) + \\text{H.c.} \\right) + M \\left( \\sum_{\\bm{r}} c_A^\\dagger(\\bm{r}) c_A(\\bm{r}) - \\sum_{\\bm{r'}} c_B^\\dagger(\\bm{r'}) c_B(\\bm{r'}) \\right)\n\\label{eq:tight_binding}\n\\end{equation}\nwhere $c^\\dagger_A(\\bm{r})$ ($c^\\dagger_B(\\bm{r'})$) creates an electron in the $p_z$ orbital at lattice site $\\bm{r}$ ($\\bm{r'}$) on the sublattice A (B) of the honeycomb lattice, $t = 2.7\\ \\text{eV}$ and the nearest-neighbour distance is $a_0 =0.142$\\,nm. We neglected the electron spin, and thus considered effectively spinless fermions.\n\nWe constructed a flake in the shape of an equilateral triangle of side length $L \\sim 56$\\,nm. The use of armchair edges avoids the zero-energy edge modes appearing for zigzag edges\\cite{castro_neto_electronic_2009}. 
We applied the simplest strain pattern respecting the triangular symmetry of the problem at hand, namely, the pattern introduced by \\textit{Guinea et al.}\\cite{guinea_energy_2010} which gives rise to a uniform (out-of-plane) pseudomagnetic field\n\\begin{equation}\n \\bm{B} = 4 u_0 \\frac{\\hbar \\beta}{e a_0} \\hat{\\mathbf{z}}\n\\end{equation}\nwhere $\\beta \\approx 3.37$ in graphene\\cite{settnes_pseudomagnetic_2016}, and the corresponding displacement field is given by\n\\begin{equation}\n \\mathbf{u}(r,\\theta) = \n \\begin{pmatrix}\n u_r \\\\\n u_\\theta\n \\end{pmatrix} \n =\n \\begin{pmatrix}\n u_0 r^2 \\sin(3\\theta) \\\\\n u_0 r^2 \\cos(3\\theta)\n \\end{pmatrix}.\n\\end{equation}\nThe hopping parameter renormalization induced by this displacement field is calculated using the simple prescription:\n\\begin{align}\nt \\rightarrow t_{ij} = t \\exp \\left[ -\\frac{\\beta}{a_0^2} \\left( \\epsilon_{xx} x_{ij}^2 + \\epsilon_{yy} y_{ij}^2 + 2 \\epsilon_{xy} x_{ij} y_{ij} \\right) \\right]\n\\label{eq_hoppings_strain}\n\\end{align}\nwhere $(x_{ij}, y_{ij}) \\equiv \\bm{r}_i - \\bm{r}_j$ is the vector joining the original (unstrained) sites $i$ and $j$, and\n\\begin{equation}\n \\epsilon_{ij} = \\frac{1}{2} \\left[ \\partial_j u_i + \\partial_i u_j \\right]\n\\end{equation}\nis the strain tensor corresponding to the (in-plane) displacement field $\\mathbf{u}$. Outside the strained region (which we take as a triangle of slightly smaller length $L_S \\sim 48$\\,nm), we allowed the strain tensor to relax: $\\bm{\\epsilon} \\rightarrow e^{-\\frac{r^2}{2\\sigma^2}} \\bm{\\epsilon}$, where $r$ is the perpendicular distance to the boundary of the strained region, and $\\sigma \\sim 1$\\,nm. We defined the length scale of the homogeneous magnetic field $\\bm{B}$ to be the diameter of the largest inscribed circle in the triangle of side $L_S$: $\\lambda \\equiv L_S\/\\sqrt{3} \\sim 28$\\,nm. 
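The strain-to-hopping prescription above can be condensed into a short numerical sketch; the amplitude $u_0$ below is an illustrative value chosen so that the strain reaches a few per cent at $r \sim 30$\,nm, and the analytic derivatives of $\mathbf{u}$ are replaced by finite differences:

```python
import math

HBAR, E_CH = 1.054571817e-34, 1.602176634e-19
BETA, A0 = 3.37, 0.142e-9   # coupling parameter and C-C distance (m)
T0 = 2.7                    # bare hopping (eV)
U0 = 5e5                    # 1/m; illustrative amplitude of the Guinea pattern

def displacement(x, y):
    """u = (u_r, u_theta) = u0 r^2 (sin 3t, cos 3t), in Cartesian components."""
    r, th = math.hypot(x, y), math.atan2(y, x)
    ur, ut = U0 * r**2 * math.sin(3*th), U0 * r**2 * math.cos(3*th)
    return (ur*math.cos(th) - ut*math.sin(th),
            ur*math.sin(th) + ut*math.cos(th))

def strain_tensor(x, y, h=1e-12):
    """Finite-difference version of eps_ij = (d_i u_j + d_j u_i)/2."""
    uxr, uyr = displacement(x + h, y); uxl, uyl = displacement(x - h, y)
    uxu, uyu = displacement(x, y + h); uxd, uyd = displacement(x, y - h)
    exx = (uxr - uxl) / (2*h)
    eyy = (uyu - uyd) / (2*h)
    exy = 0.5 * ((uxu - uxd) / (2*h) + (uyr - uyl) / (2*h))
    return exx, eyy, exy

def hopping(t, xij, yij, exx, eyy, exy):
    """Strained hopping t_ij: exponential renormalization of the bare t."""
    return t * math.exp(-BETA/A0**2 * (exx*xij**2 + eyy*yij**2 + 2*exy*xij*yij))

# resulting uniform pseudofield B = 4 u0 hbar beta / (e a0): with this U0 it
# lands at a few tens of tesla, the same order as the fitted 41 T
B_FIELD = 4 * U0 * HBAR * BETA / (E_CH * A0)
```

At the origin the strain vanishes and the hopping reduces to the bare $t$; the bond stretching grows linearly with $r$, as in the colour map of Fig.\,3A.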
We stress here that our simulated flakes are much smaller than the experimentally observed triangular features of size $\sim 300$\,nm. The fact that we nevertheless reproduce the experimental features underlines how the number of observable LLs is limited by the length scale of the homogeneous pseudomagnetic field $\lambda$, rather than by the size $L$ of the nanoprisms themselves. This length scale could be caused by the more complicated strain pattern present in the nanoprisms or be induced by disorder.\n\nWe then diagonalized the Hamiltonian (Eqn.\,\ref{eq:tight_binding}) with hopping parameters given by Eqn.\,\ref{eq_hoppings_strain} to obtain the full set of eigenstates $\ket{n}$ with energies $E_n$, and computed the momentum-resolved, retarded Green's function using the Lehmann representation\n\begin{align}\n G_\alpha^R(\bm{k},\omega) &= \sum_n \frac{ | \bra n c^\dagger_\alpha({\bm{k}}) \ket{0} |^2}{\omega - (E_n - E_0) + i \eta}\n\end{align}\nwhere $\alpha = A,B$ is a sublattice (band) index, and $\eta \sim 20$\,meV is a small broadening parameter comparable to the experimental resolution. \nWe then computed the one-particle spectral function,\n\begin{align}\nA(\bm{k}, \omega) = -\frac{1}{\pi} \sum_\alpha \text{Im}\left[ G_\alpha^R(\bm{k}, \omega)\right] \n\end{align}\nwhich is proportional to the intensity measured in ARPES (modulo the Fermi-Dirac distribution and dipole matrix elements). 
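The Green's-function construction above can be illustrated with a minimal single-particle analogue; to keep the sketch short we substitute an open tight-binding chain for the honeycomb flake (our simplification), but the recipe — exact diagonalization, overlap of eigenstates with a plane-wave probe, Lorentzian broadening $\eta$ — is the same:

```python
import numpy as np

# Minimal analogue of the spectral-function calculation: an open chain
# stands in for the honeycomb flake (illustrative simplification).
N, t, eta = 60, 1.0, 0.05
H = -t * (np.eye(N, k=1) + np.eye(N, k=-1))   # nearest-neighbour hopping
E, V = np.linalg.eigh(H)                       # eigenstates |n>, energies E_n

sites = np.arange(N)
ws = np.linspace(-3.0, 3.0, 301)

def spectral(k, w):
    """A(k,w) = -1/pi Im G^R(k,w), with G^R built from the eigenbasis."""
    phase = np.exp(1j * k * sites) / np.sqrt(N)   # plane-wave probe |k>
    overlap2 = np.abs(V.conj().T @ phase) ** 2    # |<n|k>|^2
    G = np.sum(overlap2 / (w - E + 1j * eta))     # retarded Green's function
    return -G.imag / np.pi

# spectral weight concentrates on the dispersion E(k) = -2 t cos(k),
# broadened both by eta and by the finite system size, as for the flake
```

The finite size produces exactly the two effects noted below for the flake: a small gap at the band extrema and a momentum broadening of the bands.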
We note that using a finite system introduces two main effects in the momentum-resolved spectral function: the appearance of a small finite-size gap at the Dirac points (in the absence of a magnetic field) and a momentum broadening of the bands.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n+{"text":"\\section{Introduction}\n\nUnderstanding the origin of the Ultra High Energy Cosmic Rays (UHECR) with energy above $\\sim 5 \\; 10^{19}$ eV\nconstitutes a real challenge for theoretical models, since their acceleration requires extreme conditions hardly fulfilled by known astrophysical objects.\nPrevious studies considered a number of potential sources, including gamma ray bursts, active galactic nuclei, large scale jets and neutron stars, but results are still inconclusive \\cite{reviews}. \n\nIf, as many believe, UHECRs have an extragalactic origin, a suppression, known as the Greisen-Zatsepin-Kuzmin (GZK) cutoff, should be observed in the spectrum at an energy\n$\\sim 7 \\; 10^{19} {\\rm eV}$\n\\cite{GZK}.\nThis is because during propagation energetic protons undergo $p\\gamma$ interactions (mainly photo-pion production) in the\nCosmic Microwave Background (CMB)\nwith an energy loss length which decreases dramatically from $\\sim 650$ to $\\sim 20 \\, {\\rm Mpc}$ between energies of $7 \\; 10^{19}$ and $3 \\; 10^{20} {\\rm eV}$ \\cite{pgamma}.\nThe detection of a number of events above $7 \\, 10^{19}$ eV\nraised interest in non-acceleration\nmodels\nthat\ninvolve new physics and do not require the existence of a cutoff in the spectrum \\cite{reviews}.\nHowever, the issue of the presence\nof the GZK feature\nis still debated \\cite{daniel}.\nIn the following we assume that astrophysical objects capable of accelerating UHECRs\ndo exist and we discuss possible ways to identify them.\n\nIf UHECRs are\nnot significantly \ndeflected by the\nintergalactic magnetic field (IGMF)\ntheir arrival direction should point back to the position of their accelerators.\nIn 
principle, this fact can\nbe used to identify accelerators located at distances smaller than the proton loss length, that constitutes a sort of horizon for CRs. However, even in this case, the small event statistics and the high uncertainty\nin the determination of the CR arrival direction\nwould make any identification problematic. \nThe better angular resolution and\ncollection area of AUGER will hopefully provide \nadequate data to address this issue.\n\n\n\nThe interactions between UHECRs and CMB photons generate secondary gamma rays and electron-positron pairs (hereafter referred to as {\\it electrons}) that in turn initiate an electromagnetic (EM) cascade in the universal photon background \\cite{cascade}.\nSuch a cascade would appear to a distant observer as a flux of GeV\/TeV photons.\nRecently, Ferrigno et al. \\cite{carlo} suggested to\nidentify the sources of UHECRs\nby searching for this radiation.\nTheir calculations show that, in an unmagnetized Universe ($B_{IGMF} = 0$ G), a steady source emitting isotropically $2 \\; 10^{43}$ erg\/s in form of UHECRs ($E > 10^{19}$ eV) can be detected by a Cherenkov telescope like HESS up to a distance of $\\sim 100$ Mpc.\nIn this\ncase the source would be point-like, because in the absence of IGMF the EM cascade is one-dimensional and propagates radially away from the accelerator. \nUnfortunately, the scenario changes dramatically if\nthe more realistic case of a\nmagnetized Universe is considered. This is because low energy electrons produced during the last steps of the cascade are effectively deflected and eventually isotropized if their Larmor radius is smaller or comparable with the Compton cooling length. This condition is satisfied when the IGMF is above\n$\nB_{iso} \\sim 10^{-12} (E_{\\gamma}\/TeV) {\\rm G}\n$,\n$E_{\\gamma}$ being the energy of the Compton photon. This implies that \\textit{unless the IGMF is extremely weak} ($<< B_{iso}$), \nelectrons emit Compton photons after being fully isotropized. 
Thus, \\textit{an extended halo of emitting pairs forms around UHECR sources}. The radiation from the halo is emitted isotropically, resulting in a \\textit{very extended, and thus hard to be detected, gamma ray source} \\cite{cascade}.\n\nHowever, if the IGMF close to the accelerator is at the ${\\rm nG}$ level, first generation electrons ($E \\sim 10^{19}$ eV) generated during $p\\gamma$ interactions cool rapidly by emitting ${\\rm GeV}$ synchrotron photons and the development of the cascade is strongly inhibited. \nWe propose the possibility to detect these synchrotron photons from UHECR sources \\cite{noi}. \nThis is of great interest because of the following reasons.\nBoth synchrotron emitting electrons and parent protons are extremely energetic and not appreciably deflected by the IGMF, at least on the first Mpcs distance scale.\nFor this reason, \\textit{synchrotron photons are emitted in the same direction of parent protons}. Thus, they move away from the source almost radially, and \\textit{the observed radiation is expected to be point-like}, and thus easily detectable and distinguishable from the extended cascade component.\nRemarkably, a detection of these sources would allow to infer the value of the IGMF close to the accelerator, constraining it in the range $\\approx 10^{-10} \\div 10^{-8}$ G.\nFinally, since the Universe is transparent to ${\\rm GeV}$ photons, powerful UHECR accelerators located outside the CR horizon might be identified in this way. \n\n\\section{Development of the electromagnetic cascade}\n\nIn this section we describe the development of the EM cascade initiated by a UHECR.\nConsider a proton with energy $E_{p,20} = E_p\/10^{20}$ eV. 
The typical energies of photons and electrons produced in $p\gamma$ interactions are $\sim 10^{19} E_{p,20}$ and $\sim 5 \; 10^{18} E_{p,20}$ eV respectively \cite{pgamma}.\nIn the absence of IGMF, such electrons and photons interact via Compton and pair production processes with photons in the CMB and radio background.\nIn general, this would lead to the development of an EM cascade, in which the number of electrons and photons increases rapidly.\nIn fact, due to the extremely high energy of the particles considered here, each interaction occurs in the limit $\Gamma = \epsilon_b E \gg 1$, where $\epsilon_b$ and $E$ are the energies of the background photon and of the energetic electron (photon) respectively, both calculated in units of the electron rest mass energy.\nUnder this condition the Compton scattering is in the extreme Klein-Nishina limit, namely, the upscattered photon carries away most of the energy of the incoming electron.\nThe same happens during a pair production event, in which most of the energy goes to one of the two outgoing electrons.\nTherefore the problem essentially reduces to a single-particle problem, in which a leading particle continuously loses energy and changes state from electron to photon and back\ndue to alternate Compton\/pair production interactions.\nThus, the effective loss length of an energetic electron can be identified with the loss length of the leading particle \cite{gould}. \nWhen the leading particle has lost enough energy that $\Gamma \approx 1$,\nthe cascade enters the particle multiplication phase, in which the particle energy is roughly divided in half in every collision. 
This phase ends up with a large number of low energy electrons and photons.\nOn the other hand, if an IGMF is present, electrons also lose energy via synchrotron emission, subtracting energy from the cascade.\nIn Fig.~\ref{eloss} we show the effective electron loss length for Compton\/pair production (solid), together with the synchrotron loss length for an IGMF equal to $0.1$, $1$ and $10 ~ {\rm nG}$ (dashed lines).\nIt can be seen that if the IGMF is at the level of 1 nG or more,\nall the electrons \nwith energy above $\sim 10^{18}$ eV cool fast via synchrotron losses and the development of the cascade is strongly suppressed.\n\nSummarizing, three different regimes, corresponding to different values of the IGMF, can be distinguished:\n\n\begin{itemize}\n\item[$\bullet$]{{\bf Regime I:} $B_{IGMF} \ll B_{iso} \sim 10^{-12}$ G. The EM cascade is not affected at all by the\nIGMF.}\n\item[$\bullet$]{{\bf Regime II:} $B_{iso} \le B_{IGMF} \ll B_{syn} \sim 10^{-9}$ G. No energy is subtracted from the EM cascade by synchrotron losses, but low energy electrons are effectively isotropized by the IGMF.}\n\item[$\bullet$]{{\bf Regime III:} $B_{IGMF} \ge B_{syn}$. The development of the EM cascade is strongly suppressed from its very first steps due to strong synchrotron losses.}\n\end{itemize}\n\n\section{Regime I: one-dimensional cascade}\n\nIf the IGMF strength is much less than $B_{iso} \sim 10^{-12}$ G, electrons in the cascade do not suffer synchrotron losses, nor are they deflected. Thus, the EM cascade develops along a straight line. In this case, the calculations by Ferrigno et al. apply, and nearby and powerful UHECR sources might be detected as point-like TeV sources by currently operational Cherenkov telescopes \cite{carlo}.\n\nIn principle, since the strength of the large scale IGMF is basically unknown \cite{Bfield}, such low values of the field cannot be ruled out. 
However, this is probably not a good assumption in the vicinity of UHECR accelerators, where the IGMF is expected to be appreciable, especially if such accelerators are, as it seems reasonable to believe, correlated with the structures in the Universe. \nThe cascade might still be one-dimensional if its last steps develop sufficiently far away from the source, in a region of very low IGMF. Another necessary condition is that the IGMF close to the source must be small enough ($\\ll 10^{-9}$ G) to avoid a suppression of the cascade due to synchrotron losses of first generation electrons. \n\n\\section{Regime II: extended pair halos}\n\nIf the IGMF is strong enough to deflect the electrons in the cascade, but not enough to make synchrotron losses relevant (namely, $10^{-12} {\\rm G} \\le B_{IGMF} \\ll 10^{-9}$ G), then the EM cascade fully develops, low energy electrons are isotropized, and a very extended pair halo forms around the UHECR source. For an isotropic source, the size of the halo can be roughly estimated as follows. Let $E_{\\gamma}^{obs}$ be the energy of the gamma ray photons observed from the Earth. Such photons are CMB photons Compton-upscattered by electrons with energy $E_e \\sim 20 (E_{\\gamma}^{obs}\/{\\rm TeV})^{1\/2}$ TeV. These are the electrons forming the pair halo. \nSince electrons are rapidly isotropized in the IGMF, one can assume that they do not propagate away from the sites in which they are created.\nElectrons in the halo are in turn produced by parent photons with energy $E_{\\gamma}^{par} \\lesssim E_e$. \nSince the photon mean free path against pair production in the infrared background $\\lambda_{pp}$ decreases rapidly with increasing energy \\cite{IRabs}, we can safely neglect the contribution to the halo size from older generation (higher energy) photons. 
\nThus, the size of the halo \ncan be roughly estimated as $l_{halo} \\sim \\lambda_{pp}(E_{\\gamma}^{par})$ \\cite{cascade}.\nFor a $\\sim 20$ TeV photon the mean free path is about a few tens of megaparsecs (see Fig. 2 in \\cite{felixICRC}). In fact, for the situation considered here, the size of the halo is even bigger, since the UHECR protons and the first generation electrons\npropagate $\\sim 10 \\div 20$ Mpc before\ninitiating the EM cascade (see Fig. 2).\n Thus, a conservative estimate of the apparent angular size of the halo at 1 TeV can be given by: $\\vartheta \\sim (l_{halo}\/D) \\approx 10^o (l_{halo}\/20Mpc) (D\/100Mpc)^{-1}$, where $D$ is the distance of the source.\nThis indicates that \\textit{pair halos are extremely extended}, bigger than the field of view of Cherenkov telescopes (the HESS field of view is $\\sim 5^o$) \\textit{and thus hardly detectable}. The problem becomes even worse if one considers also protons with energy below $10^{20}$ eV, which have much longer loss length, increasing up to $\\sim 1$ Gpc for proton energies equal to $\\sim 5 \\; 10^{19}$ eV. On the other hand, such protons are likely to contribute only to the TeV flux of very distant sources.\n\nIn the recent work by Armengaud et al. \\cite{sigl} the deflection of electrons in the IGMF has been neglected, even when a IGMF stronger than $B_{iso} \\sim 10^{-12}$ G was assumed. On the other hand, the authors considered the deflection of $\\sim 10^{20}$ eV protons, which is in fact totally negligible if compared with the full isotropization of electrons. 
\nTherefore their claim\n about the detectability \nof cascade gamma-rays \nseems to us over-optimistic.\n\nThe cascade emission peaks at TeV energies \cite{carlo}, making a detection by GLAST problematic.\n\n\section{Regime III: synchrotron gamma rays}\n\nIf the IGMF close to the UHECR accelerator is at the level of 1 nG or above, the development of the cascade is strongly suppressed, since the very high energy electrons produced during $p\gamma$ interactions cool rapidly via synchrotron losses before undergoing Compton scattering. It is evident from Fig. 2 that for a $\sim {\rm nG}$ IGMF this is true for electron energies well in excess of $E_e \sim 10^{18}$ eV. Such electrons emit synchrotron photons with energy \n$E_{syn} \approx 2 \, (B\/{\rm nG}) (E\/10^{19}{\rm eV})^2 {\rm GeV}$, detectable by GLAST.\nIt is important to stress that our results are sensitive only to the value of the IGMF {\it close} to the source, while they are unaffected by the value of the field on much larger scales. \nThis is because synchrotron emitting electrons are produced within a proton interaction length $l_{p\gamma} \approx 10$ Mpc from the accelerator.\nAs a consequence, the only assumption required\nis that the size of the magnetized region surrounding the accelerator must be greater than or comparable with $l_{p\gamma}$.\nSuperclusters of galaxies constitute an example of large and magnetized regions satisfying our requirement \cite{Bfield}.\n\begin{figure}\n\centering\n\includegraphics[width=0.4\textwidth]{gabiciposter_fig1.ps}\n\caption{\label{eloss}Effective electron loss length. Solid: Compton\/pair-production losses in the CMB and radio background (with a cutoff at $2 {\rm MHz}$ \cite{radiobackground}). 
Dashed: synchrotron losses.\n}\n\end{figure}\n\n\nWe now estimate the angular size of the synchrotron emission.\nAfter propagating over an interaction length, a proton of energy $E_p$ is deflected by an angle\n$\vartheta_p \approx 0.8^o (10^{20}{\rm eV}\/E_p) (B\/{\rm nG}) \sqrt{(l_{p\gamma}\/10{\rm Mpc})} \sqrt{(l_c\/{\rm Mpc})}$ \nwhere $l_c$ is the IGMF coherence length \cite{deflection}.\nDue to the high energies considered, secondary electrons produced in $p\gamma$ interactions move in the same direction as the parent protons.\nIn a cooling time electrons are deflected by an angle $\vartheta_e \sim \alpha \lambda \/ R_L$,\n$R_L$ being the electron Larmor radius and $\alpha$ a number of order unity representing the probability that the leading particle is actually an electron \cite{gould}.\nRemarkably, if expressed as a function of the synchrotron photon energy, \nthe deflection angle is independent of the magnetic field strength and reads: $\vartheta_e \approx 0.5^o (E_{syn}\/10\,{\rm GeV})^{-1}$.\nThus, an observer at a distance $D$ would see a source with angular size: \n$\vartheta_{obs} \approx \sqrt{\vartheta_p^2+\vartheta_e^2} \left(\frac{l_{p\gamma}}{D}\right)$ that, for $D = 100$ Mpc and for photon energies of $1 \div 10$ GeV is of the order of a fraction of a degree.\nThis is comparable with the angular resolution of GLAST, which would classify these sources as point-like if they are\nlocated at a distance of $\sim 100 \; {\rm Mpc}$ or more. \nThis leads to the important conclusion that, \textit{even if synchrotron photons are produced in an extended region of size $\sim l_{p\gamma}$ surrounding the accelerator, the resulting gamma ray source would appear point-like to a distant observer}.\n\nIf the proton spectrum extends well above $10^{20}$ eV, these sources might also be detected by \nImaging Atmospheric Cherenkov Telescope arrays (IACT) operating at energies above $100 \; {\rm GeV}$. 
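The angular-size estimates above are easy to check numerically; the sketch below simply encodes the two deflection formulas and the quoted synchrotron energy scaling (all parameter names and defaults are ours, chosen to match the fiducial values in the text):

```python
import math

def E_syn_GeV(B_nG, E_e_19):
    """Characteristic synchrotron energy: E_syn ~ 2 (B/nG) (E_e/1e19 eV)^2 GeV."""
    return 2.0 * B_nG * E_e_19**2

def theta_obs_deg(E_syn_10GeV, E_p_20, B_nG,
                  l_pg_Mpc=10.0, l_c_Mpc=1.0, D_Mpc=100.0):
    """Apparent angular size (degrees) from the proton and electron
    deflection angles combined in quadrature, scaled by l_pg / D."""
    th_p = 0.8 * (1.0 / E_p_20) * B_nG \
        * math.sqrt(l_pg_Mpc / 10.0) * math.sqrt(l_c_Mpc)
    th_e = 0.5 / E_syn_10GeV   # independent of B when written vs. E_syn
    return math.hypot(th_p, th_e) * (l_pg_Mpc / D_Mpc)

# for B = 1 nG, E_p = 1e20 eV, D = 100 Mpc the source subtends ~0.1 degree:
# point-like for GLAST, slightly extended for an IACT
```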
The angular resolution of these instruments is a few arcminutes, and thus the sources will appear extended. However, IACTs are powerful instruments to image extended gamma ray sources \cite{SNR}, and thus they might still detect and map the emission from UHECR sources. Finally, powerful UHECR accelerators located at a distance of 1 Gpc or more would appear as point sources. In the next section we discuss the energy requirement for a detection.\n\n\begin{figure}\n\includegraphics[width=0.4\textwidth]{gabiciposter_fig2.ps}\n\caption{\label{spectra}Spectra for a source \nlocated at\n$100{\rm Mpc}$.\nThe luminosity in UHECRs is $2 \; 10^{44}{\rm erg\/s}$, with spectral index $\delta = 2$. TOP: $B_{IGMF} = 0.5$ (curve 1), $5$ (2), $50 \; {\rm nG}$ (3), $E_{cut} = 10^{21} {\rm eV}$. BOTTOM: $E_{cut} = 5 \, 10^{20}$, $10^{21}$, $5 \, 10^{21} {\rm eV}$, $B_{IGMF} = 1{\rm nG}$. Dotted: intrinsic spectra. Solid: spectra after absorption.}\n\end{figure}\n\n\section{Detectability and energetics}\n\nFig.~\ref{spectra} shows synchrotron spectra\nfor a steady source\nat a distance of $100 ~ {\rm Mpc}$ in a uniformly magnetized region of size 20 Mpc.\nSince the radiation is produced in a region about one interaction length away from the source, the actual structure of the field is not crucial.\nSteady state proton and electron spectra have been calculated taking into account all the relevant energy losses and proton escape from the magnetized region.\nSolid lines have been computed \ntaking into account the opacity of the Universe to very high energy photons due to pair production in the cosmic infrared background \cite{IRabs},\nwhile dotted lines show the unabsorbed spectra.\nThe total luminosity in UHECRs with energy above $10^{19} {\rm eV}$ is $L_{UHE} = 2 ~ 10^{44} {\rm erg\/s}$, with a differential energy distribution $\propto E^{-\delta} \exp(-E\/E_{cut})$, with \n$\delta =2$.\nResults are quite insensitive to the slope of the 
spectrum.\n\nIn the top panel of Fig.~\ref{spectra}, $E_{cut} = 10^{21} {\rm eV}$\nand the IGMF\nis equal to $0.5$, $5$ and $50 \; {\rm nG}$ (curves $1$, $2$ and $3$).\nIf the IGMF is significantly greater than $\sim 50 \; {\rm nG}$, the peak of the emission falls at TeV energies, where absorption is very strong.\nOn the other hand, if the field is well below $\sim 0.5 \; {\rm nG}$, synchrotron emission becomes unimportant and the cascade contribution dominates.\nHowever, \textit{for the broad interval of values of the IGMF strength between $0.5$ and $50 \; {\rm nG}$, the formation of a synchrotron point-like gamma ray source seems to be unavoidable}. \n\nIn the bottom panel of Fig.~\ref{spectra}, our predictions are compared with the sensitivities of GLAST and of a generic\nIACT such as HESS or VERITAS. \nAn IGMF of $1 \; {\rm nG}$ is assumed and the different curves refer to values of the cutoff energy in the proton spectrum equal to $5 \; 10^{20}$, $10^{21}$ and $5 \; 10^{21} {\rm eV}$ (top to bottom).\nFor such a field, the condition for the detectability of a point source by GLAST is roughly $L_{UHE} \ge 8 \; 10^{43} \div 2 \; 10^{44} (D\/100{\rm Mpc})^2 {\rm erg\/s}$ for $\delta = 2.0 \div 2.6$.\nIn contrast, for IACT arrays the minimum detectable luminosity is roughly 2 orders of magnitude higher, since the source has to be located at a distance of $\sim 1 {\rm Gpc}$ in order to appear point-like. However, less powerful accelerators can still be detected as extended sources.\nSince the peak of the emission falls at $\sim 10 ~ {\rm GeV}$, future IACTs operating in the energy range $10 \div 100$ GeV would be powerful tools to search for these sources. 
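The quadratic distance scaling of the detectability condition can be made concrete with a one-line helper. A hedged Python sketch: the helper name is ours, and the default normalisation is the delta = 2 value quoted above:

```python
def min_luminosity_erg_s(D_Mpc, L0_erg_s=2e44):
    """Minimum UHECR luminosity for a point-source detection,
    L_UHE >= L0 * (D / 100 Mpc)^2, with L0 ~ 8e43..2e44 erg/s
    for spectral indices delta = 2.0..2.6 (values quoted in the text)."""
    return L0_erg_s * (D_Mpc / 100.0) ** 2

# moving the source from 100 Mpc to ~1 Gpc raises the threshold
# by two orders of magnitude, as stated for IACT arrays
print(min_luminosity_erg_s(100.0), min_luminosity_erg_s(1000.0))
```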
\n\nIf the CR spectrum smoothly \nextends down to GeV energies with slope $\delta = 2$, the required total CR luminosity for a source to be detected by GLAST is $L_{CR} \ge 5 \, 10^{44} (D\/100{\rm Mpc})^2 {\rm erg\/s}$.\nFor a beamed source, the required luminosity is reduced by a factor $f_b \sim 0.02 (\vartheta_b\/10^o)$, $\vartheta_b$ being the beaming angle,\nand the detectability condition reads: $L_{CR} \ge 10^{43} (f_b\/0.02) (D\/100{\rm Mpc})^2 {\rm erg\/s}$.\nThis luminosity is small if compared, for example, with the\npower of an AGN jet, which can be as high as\n$10^{47} {\rm erg\/s}$ \cite{ghisella}.\nThus,\nastrophysical objects that can in principle satisfy the energy requirement for a detection do exist.\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n~~~~~The purpose of this paper is to bring a new tool to the\nstudy of the spectral type of rank one transformations. Rank one\ntransformations have simple spectrum and in \cite{Ornstein} D.S.\nOrnstein, using a random procedure, produced a family of mixing\nrank one transformations. It follows that Ornstein's class of\ntransformations may possibly contain a candidate for Banach's well-known\nproblem of whether there exists a dynamical system $(\Omega,{\mathcal\n{A}},\mu,T)$ with simple Lebesgue spectrum. But, in 1993, J.\nBourgain in \cite{Bourgain} proved that almost surely Ornstein's\ntransformations have singular spectrum. Subsequently, using the\nsame method, I. Klemes \cite{Klemes1} and I. Klemes \& K. Reinhold\n \cite{Klemes2} obtained that mixing \nstaircase transformations of Adams \cite{Adams1} and Adams \&\nFriedman \cite{Adams2} have singular spectrum. They conjectured\nthat all rank one transformations have singular spectrum.\par\n\nHere we shall exhibit a new class of rank one transformations with\nsingular spectrum. 
Our assumptions include a new class of\nOrnstein transformations and a class of Creutz-Silva rank one\ntransformations \cite{Creutz-silva}. Our proof is based on\ntechniques introduced by J. Bourgain\n\cite{Bourgain} in the context of rank one transformations and\ndeveloped by Klemes \cite{Klemes1}, Klemes-Reinhold\n\cite{Klemes2}, Dooley-Eigen \cite{Eigen}, together with some ideas from the proof of \nthe central limit theorem for trigonometric sums. The fundamental key, as noted by Klemes\n\cite{Klemes1}, is the estimation of the $L^1$-norm of a certain\n trigonometric polynomial $(|P_m|^2-1)$. We shall use the method of the central limit theorem for trigonometric sums to produce an\nestimate of this $L^1$-norm.\n\n\section*{2. Rank One Transformation by Construction}\n\nUsing the cutting and stacking method described in [Fr1], [Fr2],\none can construct inductively a family of measure preserving\ntransformations, called rank one transformations, as follows.\n\vskip 0.1cm Let $B_0$ be the unit interval equipped with the\nLebesgue measure. At stage one we divide $B_0$ into $p_0$ equal\nparts, add spacers and form a stack of height $h_{1}$ in the usual\nfashion. At the $k^{th}$ stage we divide the stack obtained at the\n$(k-1)^{th}$ stage into $p_{k-1}$ equal columns, add spacers and\nobtain a new stack of height $h_{k}$. If during the $k^{th}$ stage\nof our construction the number of spacers put above the $j^{th}$\ncolumn of the $(k-1)^{th}$ stack is $a^{(k-1)}_{j}$, $ 0 \leq\na^{(k-1)}_{j} < \infty$, $1\leq j \leq p_{k-1}$, then we have\n\n$$h_{k} = p_{k-1}h_{k-1} + \sum_{j=1}^{p_{k-1}}a_{j}^{(k-1)}.$$\n\vskip 3 cm \hskip 3.5cm\n \tower\n\vskip 3.0cm\n\noindent{}Proceeding in this way we get a rank one transformation\n$T$ on a certain measure space $(X,{\mathcal B} ,\nu)$ which may\nbe finite or\n$\sigma$-finite depending on the number of spacers added. 
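The height recursion above can be iterated directly. A small Python sketch (function name and toy parameters are ours) that tracks the stack heights given the cutting parameters and the spacer counts:

```python
def tower_heights(h1, cuts, spacers):
    """Iterate h_{k} = p_{k-1} * h_{k-1} + sum_j a_j^{(k-1)}:
    cuts[k] is the cutting parameter p_k, spacers[k] the list of
    spacer counts (a_j^{(k)}) added above the p_k columns at stage k."""
    heights = [h1]
    for p, a in zip(cuts, spacers):
        assert len(a) == p  # one spacer count per column
        heights.append(p * heights[-1] + sum(a))
    return heights

# toy construction: cut into 3 then 2 columns with a few spacers
print(tower_heights(1, [3, 2], [[1, 0, 2], [0, 1]]))
```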
\\\\\n\\noindent{} The construction of a rank one transformation thus\nneeds two parameters, $(p_k)_{k=0}^\\infty$ (parameter of cutting\nand stacking), and $((a_j^{(k)})_{j=1}^{p_k})_{k=0}^\\infty$\n(parameter of spacers). Put\n\n$$T \\stackrel {def}= T_{(p_k, (a_j^{(k)})_{j=1}^{p_k})_{k=0}^\\infty}$$\n\n\\noindent In \\cite{Nadkarni1} and \\cite{Klemes2} it is proved that\nthe spectral type of this transformation is given (upto possibly some discrete measure) by\n\n\\begin{eqnarray}\\label{eqn:type1}\nd\\sigma =W^{*}\\lim \\prod_{k=1}^n\\left| P_k\\right| ^2d\\lambda,\n\\end{eqnarray}\n\\noindent{}where\n\\begin{eqnarray*}\n&&P_k(z)=\\frac 1{\\sqrt{p_k}}\\left(\n\\sum_{j=0}^{p_k-1}z^{-(jh_k+\\sum_{i=1}^ja_i^{(k)})}\\right),\\nonumber \\\\\n\\nonumber\n\\end{eqnarray*}\n\\noindent{}$\\lambda$ denotes the normalized Lebesgue measure on the\ncircle group ${\\sym T}$ and $W^{*} \\lim$ denotes weak*limit in the space of\nbounded Borel measures on ${{\\sym T}}$.\\\\ \n\n\n\\noindent{}The principal result of this paper is the following:\\\\\n\n\n\\noindent {\\bf Theorem 2.1. }{\\it Let $T=T_{(p_k,\n(a_j^{(k)})_{j=1}^{p_k})_{k=0}^\\infty}$ be a rank one\ntransformation such that,\n\n\\begin{eqnarray}{\\label{eq:eqt}}\n&&(i)~~a_{j+1}^{(k)}\\geq 2s_k(j),j=0,\\cdots, p_k-1,\\nonumber\\\\\n&& {\\rm {~~with~~~~}} s_k(j)=a_1^{(k)}+\\ldots +a_j^{(k)},\ns_k(0)=0,\\nonumber\\\\\n&&(ii)~~\\frac{s_k{(p_k)}}{h_k}<\\frac12 \\nonumber\n\\end{eqnarray} then the spectrum of $T$ is singular.}\n\\\\\n\n\\noindent{}We remark that the\nspectrum of rank one transformation is always singular if the\ncutting parameter $p_k$ is bounded. In fact, Klemes-Reinhold\nprove that if $\\displaystyle \\sum_{k=0}^{\\infty}\n\\frac{1}{{p_k}^2}=\\infty$ then the associated rank one\ntransformation has singular spectrum. 
Henceforth, we assume that the series\n$\displaystyle \sum_{k=0}^{\infty} \frac{1}{{p_k}^2}$\nconverges.\\\nWe point out also that condition $(ii)$ of\nTheorem 2.1 holds in the case of rank one transformations satisfying a\nrestricted growth condition provided that $\displaystyle \min_{1 \leq j \leq p_k}(a_j^{(k)})=0$. Following Creutz-Silva \cite{Creutz-silva},\nwe say that a rank one transformation $T=T_{(p_k, (a_j^{(k)})_{j=1}^{p_k})_{k=0}^\infty}$ has\nrestricted growth if\n\[\n\displaystyle \frac{\displaystyle \sum_{j=1}^{p_k} \left( a_j^{(k)}- \displaystyle \min_{1 \leq j \leq p_k}(a_j^{(k)})\right) }{h_k+\displaystyle \min_{1 \leq j \leq p_k}(a_j^{(k)}) }\tend{k}{+\infty}0.\n\]\n\noindent{}The proof of our main result is based on the method of\nJ. Bourgain in \cite{Bourgain}. We shall recall the main ideas of\nthis\nmethod.\\\n\n\n\noindent {\bf Proposition 2.2. }{\it The following are\nequivalent:}\n\n\begin{enumerate}\n\item [(i)] {\it $\sigma \perp \lambda$ }\n\n\item[(ii)] {\it $\displaystyle \inf \{\int\n\prod_{l=1}^k\left| {P_{n_l}(z)}\right| d\lambda,~k\in\n{{\sym N}},~n_1<n_2<\cdots<n_k\}=0.$}\n\end{enumerate}\n\n\noindent{}Let $m>n_k$\nand put\n\[\nQ\left(z\right) =\prod_{i=1}^k\left |{P_{n_i}(z)}\right |.\n\]\n\n\noindent{}One can show, using the same arguments as in\n\cite{elabdal1}, the following lemma.\\\n\n\noindent {\bf Lemma 2.3. }{\it\n\begin{eqnarray*}\n &&\displaystyle \int Q(z) \left| P_m(z)\right|d\lambda(z)\n\leq \\\n &&\frac 12\left( \displaystyle \int Q d\lambda +\displaystyle \int\nQ(z) \left| P_m(z)\right| ^2 d\lambda(z)\right) -\frac 18\left(\n\displaystyle \int Q \left| \left| P_m(z)\right| ^2-1\right|\nd\lambda(z)\right) ^2. \end{eqnarray*}}\\\n\n\noindent {\bf Proposition 2.4. 
}{\it $\displaystyle \lim_{m\n\rightarrow\n \infty}\displaystyle \int Q \left|\nP_m(z)\right|^2d\lambda(z) = \displaystyle\n\int Q \, d\lambda(z).$}\\\n\\\n\begin{proof}\n\noindent{}The\nsequence of probability measures $\left| {P_m(z)}%\n\right|^2d\lambda(z)$ converges weakly to the Lebesgue measure.\n \end{proof}\n \\\n\n\n\noindent We have also the following proposition: \\\n\n\noindent {\bf Proposition 2.5. }{\it There exists a subsequence of\nthe sequence $\left (\left |\left| P_m(z)\right|-1 \right|\right)$\nwhich converges weakly to some non-negative function $\phi$ which\nsatisfies $\phi \leq 2$, almost surely with respect to the Lebesgue\nmeasure.}\n\\\n\n\begin{proof}\n\noindent{}The\nsequence $\left |\left| {P_m(z)}%\n\right|-1 \right |$ is bounded in $L^2$. It follows that there\nexists a subsequence which converges weakly to some non-negative\n$L^2$ function $\phi$. Let $\omega$ be a non-negative continuous\nfunction; then we have\n\n\begin{eqnarray*}\n\int \omega\left |\left| {P_m(z)} \right|-1 \right |d\lambda(z)\n&&\leq \int \omega\left| {P_m(z)} \right| d\lambda(z)+\int \omega\nd\lambda(z)\\\n&&\leq {(\int \omega d\lambda(z))}^{\frac12} {(\int \omega \left |\n{P_m(z)} \right|^2d\lambda(z))}^{\frac12}+\int \omega d\lambda(z).\n\end{eqnarray*}\n\noindent{}Hence\n\[\n\int \omega \phi d\lambda \leq 2 \int \omega d\lambda.\n\]\n\nSince this holds for all non-negative continuous $\omega$, we have $\phi \leq 2$ a.e. \n\end{proof}\n\\\n\n\noindent Put\n\[\n\alpha = \phi~~ d\lambda.\n\]\n\noindent We shall prove the following:\\\n\n\noindent {\bf Proposition 2.6. }{$\alpha \bot \sigma$}.\\\n\n\noindent For the proof of Proposition 2.6 we need the\nfollowing classical lemma \cite{Kilmer-Seaki}.\\\n\n\noindent {\bf Lemma 2.7. 
}{\it Let $\rho, \tau$ be two\nnonnegative finite measures on a measurable space $X$. Then the\nfollowing properties are equivalent:\n\begin{enumerate}\n\item[(i)] {\it $\rho \perp \tau$ }\n\n\item[(ii)] {\it Given $\varepsilon>0$, there exists a\nnonnegative measurable function $f$ on $X$ such that $ f >0,$\n$\tau$-a.e. and such that\n\[\n\left(\int f d\rho \right) \left (\int \frac{d\tau}{f} \right)\n<\varepsilon.\n\] }\n\end{enumerate}\n}\n\n\begin{proof}{of Proposition 2.6}\nLet $\displaystyle \beta_1=\inf\{\int Q\nd\alpha~:~Q=\prod_{i=1}^k\left | {P_{n_i}(z)}\right|, k \in {\sym N},\nn_1<n_2<\cdots<n_k\}$. Let $N$ be a positive integer and let $\mathcal{N}$ be a subset of $\{1,\ldots,N\}$.\nThen, by the Cauchy-Schwarz inequality, we have\n\n\begin{eqnarray}{\label{eq:Bo1}}\n\int \prod_{j=1}^{N}|P_j|d\alpha & =& \int \sqrt{\prod_{j \in\n\mathcal{N}}|P_j|} \sqrt{\prod_{j \in \mathcal{N}}|P_j|} \prod_{j\n\not \in \mathcal{N}}|P_j| d\alpha \nonumber\\\n&\leq& {\left(\int \prod_{j \in \mathcal{N}}|P_j|d\alpha\right\n)}^{\frac12} {\left(\int \prod_{j \in \mathcal{N}} |P_j|\n\prod_{j \not \in \mathcal{N}}|P_j|^2 d\alpha\right )}^{\frac12}\n\end{eqnarray}\n\n\noindent{}But\n\n\begin{eqnarray}{\label{eq:B2}} \int \prod_{j \in \mathcal{N}}\n|P_j| \prod_{j \not \in \mathcal{N}}|P_j|^2 d\alpha &\leq & 2\n\int \prod_{j \in \mathcal{N}} |P_j| \prod_{j \not \in\n\mathcal{N}}|P_j|^2 d\lambda \nonumber \\\n&\leq& 2\int \prod_{j=1}^{N} |P_j| \prod_{j \not \in\n\mathcal{N}}|P_j| d\lambda \nonumber\\\n &\leq& 2{\left(\int \prod_{j=1}^{N} |P_j|^2 d\lambda \right)}^{\frac12}\n{\left(\int\prod_{j \not \in \mathcal{N}}|P_j|^2\nd\lambda\right)}^{\frac12} \nonumber \\\n&\leq& 2\n\end{eqnarray}\n\n\noindent{}Combine (\ref{eq:Bo1}) and (\ref{eq:B2}) to get the\nclaim. The proof of the proposition follows from Lemma 2.7.\n\end{proof}\n\n\n\section*{3. 
Estimation of the $L^1$-Norm of ($\left |P_m(z)\right |-1$)\nand the Central Limit Theorem.}\n\nIn this section we assume that for $m$\nsufficiently large\n\begin{eqnarray}{\label{eq:rg}}\n &&(i)~~a_{j+1}^{(m)}\geq 2s_m(j),j=0,\cdots, p_m-1,\nonumber\n \\ &&{\rm {~~with~~~~}} s_m(j)=a_1^{(m)}+\ldots +a_j^{(m)},\ns_m(0)=0,\nonumber\\\n&&(ii)~~\frac{s_m{(p_m)}}{h_m}<\frac12 \nonumber\n\end{eqnarray}\n\n\noindent{}Under the above assumptions, we shall prove the following:\\\n\n\noindent {\bf Proposition 3.1. }{ $\alpha \geq K \lambda$, for\nsome positive\nconstant $K$}.\\\n\n\noindent{} The proof of the proposition is based on an\nestimate of $\displaystyle \int_A ||P_m|-1|d\lambda$, where\n$A$ is a Borel set with $\lambda(A)>0$. More precisely we shall\nstudy the stochastic behavior of the sequence $ |P_m|$. For that\nour principal strategy is based on the method of the central\nlimit theorem for trigonometric sums. A nice account can be\nfound in \cite{Kac}. It is well-known that\nHadamard lacunary trigonometric series satisfy the central limit\ntheorem \cite{Zygmund}. The central limit theorem for\ntrigonometric sums has been studied by many authors: Zygmund and\nSalem \cite{Zygmund}, Erd\"os \cite{Erdos},\n J.P. Kahane \cite{Kahane}, Berkes \cite{Berkes}, Murai \cite{Murai},\nTakahashi \cite{Takahashi}, Fukuyama and Takahashi\n\cite{Fukuyama}, and many others. The same method is used to\nstudy the asymptotic behavior of Riesz-Raikov sums \cite{petit}.\\\n\n\noindent Here, we shall prove the following:\\\n\n\noindent {\bf Proposition 3.2. } {If $(i)$ and $(ii)$ hold, then\nfor any Borel subset $A$ of ${\sym T}$ with $\lambda(A)>0$, the distribution of the\nsequence of random variables\n$\frac{\sqrt{2}}{\sqrt{p_m}}\sum_{j=0}^{p_m-1} \cos((jh_m+s_m(j))t)$\nconverges to the Gauss distribution. 
That is,\n\n\begin{eqnarray}{\label{eq:eq6}}\n\frac1{\lambda(A)}\lambda\left \{t \in A~~:~~\frac{\sqrt{2}}{\sqrt{p_m}}\n\sum_{j=0}^{p_m-1} \cos((jh_m+s_m(j))t) \leq x \right \}\nonumber \\\n\tend{m}{\infty}\frac1{\sqrt{2\pi}}\int_{-\infty}^{x}\ne^{-\frac12u^2}du\stackrel{\rm {def}}{=}{\mathcal {N}}\left( \left] -\infty,x \right] \right) .\n\end{eqnarray}\n}\\\nThe proof of Proposition 3.2 is based on the idea of the proof of\nthe martingale central limit theorem due to McLeish \cite{Mcleish}.\nThe main ingredient is the following.\\\n\n\noindent {\bf Lemma 3.3. } {For $n \geq 1$, let $U_n,T_n$ be\nrandom variables such that\n\begin{enumerate}\n\item $U_n\longrightarrow a$ in probability.\n\item $\{T_n\}$ is uniformly integrable.\n\item $\{|T_nU_n|\}$ is uniformly integrable.\n\item ${\sym E}(T_n) \longrightarrow 1$.\n\end{enumerate}\nThen ${\sym E}(T_nU_n)\longrightarrow a$.}\\\n\n\begin{proof}\nWrite $T_nU_n=T_n(U_n-a)+aT_n$. As ${\sym E}(T_n) \longrightarrow\n1$, we need to show that ${\sym E}(T_n(U_n-a))\longrightarrow 0$.\\\nSince $\{T_n\}$ is uniformly integrable, we have\n$T_n(U_n-a)\longrightarrow 0$ in probability. Also, both $T_nU_n$\nand $aT_n$ are uniformly integrable, and so the combination\n$T_n(U_n-a)$ is uniformly integrable. Hence,\n${\sym E}(T_n(U_n-a))\longrightarrow 0$.\n\end{proof}\\\n\n\noindent{}Let us recall the following expansion\n\[\n\exp(ix)=(1+ix)\exp(-\frac{x^2}2+r(x)),\n\]\n\noindent{}where $|r(x)| \leq |x|^3$, for real $x$.\\\n\n\noindent {\bf Theorem 3.4. }{Let $\{X_{nj}~~:~~ 1 \leq j \leq\nk_n, n\geq 1\}$ be a triangular array of random variables.\n$\displaystyle S_n=\sum_{j=1}^{k_n}X_{nj}$, $\displaystyle T_n=\prod_{j=1}^{k_n}(1+itX_{nj})$,\nand $\displaystyle U_n=\exp \left (-\frac{t^2}2 \sum_j\nX_{nj}^2+\sum_{j}r(tX_{nj})\right )$. 
Suppose that\n\begin{enumerate}\n\item $\{T_n\}$ is uniformly integrable.\n\item ${\sym E}(T_n) \longrightarrow 1$.\n\item $\displaystyle \sum_j X_{nj}^2 \longrightarrow 1$ in probability.\n\item $\max |X_{nj}|\longrightarrow 0$ in probability.\n\end{enumerate}\nThen ${\sym E}(\exp(it S_n))\longrightarrow \exp(-\displaystyle\frac{t^2}2).$}\\\n\n\begin{proof}\nLet $t$ be fixed. From conditions $(3)$ and $(4)$,\n\[\n|\sum_j r(tX_{nj})| \leq |t|^3 \sum_{j}|X_{nj}|^3 \\\n\leq |t|^3 \max_j|X_{nj}| \sum_j X_{nj}^2\n\tend{n}{\infty}0 {\rm {~in~probability}}.\n\]\n\noindent{}$U_n=\exp\left (-\displaystyle \frac{t^2}2 \sum_j\nX_{nj}^2+\sum_{j}r(tX_{nj})\right ) \longrightarrow \exp(-\displaystyle\n\frac{t^2}2)$ in probability as $n \longrightarrow +\infty$. This verifies condition $(1)$ of Lemma\n3.3 with $a=\exp(-\displaystyle \frac{t^2}2).$ It is easy to check\nthat conditions $(2)$, $(3)$ and $(4)$ of Lemma 3.3 hold.\nThus $E\left( \exp(itS_n)\right) =E\left( T_nU_n\right) \longrightarrow\n\exp(-\displaystyle \frac{t^2}2)$.\n\end{proof}\\\n\n\noindent{}Let $m$ be a positive integer and put\n\n\begin{eqnarray*}\nW_m \stackrel {\rm {def}}{=}&& \big \{ \big ({\sum_{i \in\nI}\varepsilon_i p_i}\big ) h_m+\sum_{i \in I}\varepsilon_i\ns_m(p_i) ~~:~~\n\varepsilon_i \in \{0,-1,1\}, I \subset \{0,\cdots,p_m-1\},\\\n&&p_i \in \{0,\cdots,p_m-1\} \}.\n\end{eqnarray*}\n\n\noindent{} The element $w =(\sum_{i \in\nI}\varepsilon_ip_i)h_m+\sum_{i \in I}\varepsilon_is_m(p_i)$ is\ncalled a word.\\\n\nWe shall need the following combinatorial lemma.\\\n\n\noindent {\bf Lemma 3.5.}{ Under the\nassumptions (i) and (ii) of Theorem 2.1, all the words of $W_m$ are distinct.}\\\n\n\begin{proof}{}\nLet $w,w' \in W_m$ and write\n\begin{eqnarray*}\nw &=&(\sum_{i \in I}\varepsilon_ip_i)h_m+\sum_{i \in\nI}\varepsilon_is_m(p_i)\\\nw' &=&(\sum_{i \in 
I'}\varepsilon'_ip'_i)h_m+\sum_{i \in\nI'}\varepsilon'_is_m(p'_i).\n\end{eqnarray*}\n\noindent{}Then $w=w'$ implies\n\[\n\{(\sum_{i \in I}\varepsilon_ip_i)-(\sum_{i \in\nI'}\varepsilon'_ip'_i)\}h_m=\sum_{i \in\nI'}\varepsilon'_is_m(p'_i)-\sum_{i \in I}\varepsilon_is_m(p_i)\n\]\n\noindent{}But the LHS is an integer multiple of $h_m$, while the\nmodulus of the RHS is less than $2 \sum_{j=0}^{p_m-1}a_j^{(m)}$, which is smaller than $h_m$ by (ii). It\nfollows that\n\begin{eqnarray*}\n\sum_{i \in I}\varepsilon_ip_i&=& (\sum_{i \in\nI'}\varepsilon'_ip'_i){\rm {~~~and }} \\\sum_{i \in\nI}\varepsilon_is_m(p_i)&=&\sum_{i \in I'}\varepsilon'_is_m(p'_i).\n\end{eqnarray*}\nSince from (i) we have $s_m(p+1) \geq 3 s_m(p)$, we get that the\nrepresentation in the form $\sum_{i \in I}\varepsilon_is_m(p_i)$\nis unique and the proof of the lemma is complete.\n\end{proof}\n\\\n\n\begin{proof}{of Proposition 3.2} Let $A$ be a Borel set with\n$\lambda(A)>0$. Using the Helly theorem we may assume that the\nsequence $\displaystyle \frac{\sqrt{2}}{\sqrt{p_m}}\n\sum_{j=0}^{p_m-1} \cos((jh_m+s_m(j))t)$ converges in distribution.\nAs is well-known, it is sufficient to show that for every real\nnumber $x$,\n\n\begin{eqnarray}{\label {eq:eq7}}\n \displaystyle \frac1{\lambda(A)}\int_A \exp\left \{-ix\frac{\sqrt{2}}{\sqrt{p_m}}\n \sum_{j=0}^{p_m-1} \cos((jh_m+s_m(j))t) \right \} dt\n \tend{m}{\infty} \exp(-\frac{x^2}2). \nonumber\n\end{eqnarray}\n\n\noindent{} To this end we apply Theorem 3.4 in the following\ncontext. The measure space is the given Borel set $A$ of positive Lebesgue\nmeasure in the circle with the normalised measure and the random\nvariables are given by\n$$\nX_{mj}=\frac{\sqrt{2}}{\sqrt{p_m}} \cos((jh_m+s_m(j))t),~~~~{\rm {where}}~~~0 \leq j \leq p_m-1,~\nm \in {\sym N}.\n$$\n\n\noindent{}It is easy to check that the variables $\{X_{mj}\}$\nsatisfy conditions (1) and (4). 
Further, condition (3) follows from the fact that\n$$\n\int_0^{2\pi} \left |\sum_{j=0}^{p_m-1} X_{mj}^2-1 \right |^2 dt\n\tend{m}{\infty} 0.\n$$\n\noindent{} It remains to verify condition (2) of Theorem 3.4. It is sufficient to show that\n\begin{eqnarray}{\label {eq8}}\n\int_A \prod_{j=0}^{p_m-1}\left (\n1-ix\frac{\sqrt{2}}{\sqrt{p_m}}\cos((jh_m+s_m(j))t)\right)dt\n\tend{m}{\infty}\lambda(A).\n\end{eqnarray}\n\n\n\noindent{}Write\n\begin{eqnarray*}\n\Theta_m(x,t)&=&\prod_{j=0}^{p_m-1}\n\left(1-ix\frac{\sqrt{2}}{\sqrt{p_m}}\cos((jh_m+s_m(j))t)\right)\\\n&=&1+\sum_{w=1}^{N_m}{\rho_w}^{(m)}(x) \cos(wt),\n\end{eqnarray*}\n\n\noindent{}where $\rho_w=0$ if $w$ is not of the form $(\sum_{i\n\in I}\varepsilon_ip_i)h_m+\sum_{i \in I}\varepsilon_is_m(p_i)$,\n$N_m=\displaystyle \left \{\frac{p_m(p_m-1)}2\right\n\}h_m+s_m(p_m-1)+s_m(p_m-2)+\cdots+1$.\\\n\n\noindent{}We claim that it is sufficient to prove the following:\n\n\begin{eqnarray}{\label{density}}\n\int_{0}^{2\pi} R(t) \prod_{j=0}^{p_m-1} \left\n(1-ix\frac{\sqrt{2}}{\sqrt{p_m}}\cos((jh_m+s_m(j))t)\right) dt\n\tend{m}{\infty}\int_{0}^{2\pi} R(t) dt,\n\end{eqnarray}\n\n\noindent{}for any trigonometric polynomial $R$. In fact, assume\nthat (\ref{density}) holds and let $\epsilon>0$.\nThen, by the density of trigonometric polynomials, one can find a\ntrigonometric polynomial $R_\epsilon$ such that\n\[\n||\chi_A-R_\epsilon||_1 <\epsilon,\n\] where\n\noindent{}$\chi_A$ is the indicator function of $A$. 
But\n\n\\begin{eqnarray*}\n\\left |\\prod_{j=0}^{p_m-1}\\left (\n1-ix\\frac{\\sqrt{2}}{\\sqrt{p_m}}\\cos((jh_m+s_m(j))t)\\right)\\right| \\leq\n\\left \\{\\prod_{j=0}^{p_m-1}\\left ( 1+\\frac{2x^2}{p_m}\\right) \\right\n\\}^{\\frac12},\n\\end{eqnarray*}\n\n\\noindent{}Since $1+u \\leq e^u$, we get\n\n\\begin{eqnarray}{\\label{eq:eq11}}\n\\mid\\Theta_m(x,t)\\mid \\leq e^{x^2}.\n\\end{eqnarray}\n\n\\noindent{}Hence, according to (\\ref{density}), for $m$\nsufficiently large, we have\n\n\\begin{eqnarray*}{\\label{eq :eq12}}\n&&\\left |\\int_A \\Theta_m(x,t) dt -\\lambda(A)\\right|\n=|\\int_A\\Theta_m(x,t)dt -\\int_{0}^{2\\pi} \\Theta_m(x,t) R_\\epsilon(t) dt+\n\\\\&&\\int_{0}^{2\\pi} \\Theta_m(x,t) R_\\epsilon(t) dt-\\int_{0}^{2\\pi} R_\\epsilon(t) dt + \\int_{0}^{2\\pi}\nR_\\epsilon(t) dt-\\lambda(A)|< e^{x^2}\\epsilon+2\\epsilon.\n\\end{eqnarray*}\n\n\\noindent{}The proof of the claim is complete. It still remains to prove\n(\\ref{density}). Observe that\n\n$$\\int_{0}^{2\\pi} \\Theta_m(x,t)R(t) dt =\n \\int_{0}^{2\\pi} R(t) dt+\\sum_{w=1}^{N_m}{\\rho_w}^{(m)}(x) \\int_{0}^{2\\pi} R(t)\n \\cos(wt)dt$$\n\n\\noindent{}and for\n$w=p_{i_1}h_m+s_m(p_{i_1})+\\sum_{j=1}^{r}\\varepsilon_j\n\\{(p_{i_j}h_m)+s_m(p_{i_j})\\}$, we have\n\n\\[\n|{\\rho_w}^{(m)}(x)| \\leq \\frac{2^{1-r}|x|^r}{p_m^{\\frac{r}2}},\n\\]\n\n\\noindent{}hence\n\\[\n\\max_{w \\in W}|{\\rho_w}^{(m)}(x)| \\leq\n\\frac{|x|}{p_m^{\\frac{1}2}}\\tend{m}{\\infty}0.\n\\]\n\\noindent{}Since $\\displaystyle \\sum_{w \\in{\\sym Z}}|\\int_{0}^{2\\pi} e^{-iwt}\nR(t)dt|$ is bounded, we deduce\n\\[\n|\\sum_{w=1}^{N_m}{\\rho_w}^{(m)}(x) \\int_{0}^{2\\pi} R(t)\n \\cos(wt)dt|\\leq \\frac{|x|}{p_m^{\\frac{1}2}}\n \\sum_{w \\in{\\sym Z}}|\\int_{0}^{2\\pi} e^{-iwt} R(t)dt|\\tend{m}{\\infty}0.\n\\]\n\\noindent{}The proof of the proposition 3.2. 
is complete.\n\end{proof}\\\n\n\begin{proof}{of Proposition 3.1}\nLet $A$ be a Borel subset of ${\sym T}$ and let $x \in ]1,+\infty[$. Then, for any\npositive integer $m$, we have\n\n\begin{eqnarray*}{\label{eq:eqfin}}\n\int_A ||P_m(\theta)|-1| d\lambda(\theta) &\geq& \int_{\{\theta\n\in A\n~~~:~~~~|P_m(\theta)|>x\}}||P_m|-1| d\lambda(\theta)\\\n&\geq&(x-1)\lambda\{\theta \in A~:~|P_m(\theta)|>x\}\\\n&\geq& (x-1) \lambda\{\theta \in A ~~:~~|\Re({P_m(\theta)})|>x\}\n\end{eqnarray*}\n\noindent{} Let $m$ go to infinity and use Propositions\n2.5 and 3.2 to get\n\n\[\n\int_A \phi d\lambda \geq (x-1)\{1-{\mathcal\n{N}}([-\sqrt{2}x,\sqrt{2}x])\}\lambda(A).\n\]\n\n\noindent{}Put $K=(x-1)\{1-{\mathcal {N}}([-\sqrt{2}x,\sqrt{2}x])\}$. Hence $\n\alpha(A) \geq K \lambda(A)$, for any Borel subset $A$ of ${\sym T}$, which proves the proposition.\n\end{proof}\\\n\n\noindent{}Now we give the proof of our main result.\\\n\n\begin{proof}{of Theorem 2.1} This follows from Proposition 2.6 combined with\nProposition 3.1. \n\end{proof}\\\n\n\noindent{}Let us mention that the same proof works for the\nfollowing more general statement.\\\n\n\noindent {\bf Theorem 3.6. 
}{\it Let $T=T_{(p_k,\n(a_j^{(k)})_{j=1}^{p_k})_{k=0}^\infty}$ be a rank one\ntransformation such that,\n\n\begin{eqnarray}{\label{eq:eqt2}}\n&&(i)~~a_{j+1}^{(k)}\geq 2s_k(j),j=0,\cdots, p_k-1,\nonumber\\\n&& {\rm {~~with~~~~}} s_k(j)=a_1^{(k)}+\ldots +a_j^{(k)},\ns_k(0)=0,\nonumber\\\n&&(ii)~~\frac{s_k{(p_k)}-p_k\min_{1 \leq j \leq\np_k}(a_j^{(k)})}{h_k+\min_{1 \leq j \leq p_k}(a_j^{(k)})}<\frac12\n\nonumber\n\end{eqnarray} then the spectrum of $T$ is singular.}\n\\\n\n\n\noindent{}{\bf Remark.} We note that, in \cite{Bourgain} and\n\cite{Klemes1}, the strategy of the authors is to show that the\nabsolutely continuous measure $\beta$, obtained as the limit of\nsome subsequence of the sequence $(||P_m|^2-1| d\lambda)_{m \geq\n0}$, is equivalent to Lebesgue measure; in fact, the authors prove\nthat\n$$ \beta \geq K \lambda,~~~~~~~~~~~~~~~~ (E)$$\n\noindent{}for some $K>0$. In the case of Ornstein\ntransformations, the relation (E) holds almost surely.\\\n\n\n\n\section*{4. Simple Proof of Bourgain's Theorem}\n\nBourgain's theorem deals with Ornstein transformations. In\nOrnstein's construction, the $p_k$'s are rapidly increasing, and\nthe numbers of spacers, $a_i^{(k)}$, $1 \leq i\leq p_k-1$, are\nchosen randomly. This may be organized in different ways as\npointed out by J. Bourgain in \cite{Bourgain}. Here we suppose that we are given\ntwo sequences $(t_k)$, $(p_k)$ of positive integers and a sequence $(\xi_k)$ \n of probability measures such that the support of each\n$\xi_k$ is a subset of $X_k = \{-\displaystyle\n\frac{t_k}{2},\cdots,\displaystyle \frac{t_k}{2}\}$. We now choose\nindependently, according to $\xi_k$, the numbers\n$(x_{k,i})_{i=1}^{p_k-1}$, and $x_{k,p_k}$ is chosen\ndeterministically in ${\sym N}$. 
We put, for $1 \leq i \leq p_k$,\n\n$$a_i^{(k)} = t_{k} + x_{k,i} - x_{k,i-1}, ~~{\rm with} ~~x_{k,0}\n= 0.$$\\\n\n\noindent{} We have\n\n$$h_{k+1} = p_k(h_k + t_{k}) + x_{k,p_k}.$$\n\n\noindent{}So the deterministic sequences of positive integers\n$(p_k)_{k=0}^\infty$, $(t_k)_{k=0}^\infty $ and\n$(x_{k,p_k})_{k=0}^\infty$ determine completely the sequence of\nheights $(h_k)_{k=0}^\infty$. The total measure of the resulting\nmeasure space is finite if\n\begin{eqnarray}\label{eqn:fini}\n\sum_{k=0}^{\infty}\frac{t_k}{h_k}+\sum_{k=0}^\infty\n\frac{x_{k,p_k}}{p_kh_k} < \infty.\n\end{eqnarray}\n \noindent{}We will assume that this\nrequirement is satisfied.\\\nWe thus have a probability space of Ornstein transformations\n$\Omega=\prod_{l=0}^\infty X_l^{p_l-1}$ equipped with the natural\nprobability measure ${\sym P} \stackrel {\rm def}\n{=}\otimes_{l=0}^{\infty} P_l$, where $P_l\stackrel {\rm def}\n{=}\otimes_{j=1}^{p_l-1}{\xi_l}$; ${\xi_l}$ is the probability\nmeasure on $X_l$. We denote this space by $(\Omega, {\mathcal\n{A}}, {{\sym P}})$. So $x_{k,i}$, $1 \leq i \leq p_k -1$, is the\nprojection from $\Omega$ onto the $i^{th}$ co-ordinate space of\n$\Omega_k \stackrel {\rm def} {=} X_k^{p_k-1}$. Naturally each point $\omega =(\omega_k =\n(x_{k,i}(\omega))_{i=1}^{p_k-1})_{k=0}^\infty$ in $\Omega$ defines\nthe spacers and therefore a rank one transformation\n$T_{\omega,x}$, where $x=(x_{k,p_k})$.\n\n\noindent{}This construction is more general than the\n construction due to Ornstein \cite{Ornstein} which corresponds to the case \n$t_k=h_{k-1}$, $\xi_k$ is the uniform distribution on $X_k$ and $p_k \gg h_{k-1}$. \\\n\n\n\noindent{} We recall that Ornstein in \cite{Ornstein} proved that\nthere exists a sequence ${(p_k,x_{k,p_k})}_{k \in {\sym N}}$ such that\n$T_{\omega,x}$ is almost surely mixing. 
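The spacer rule above is easy to sample. A Python sketch with xi_k taken uniform on X_k (one admissible choice; the helper name and the toy parameter values are ours):

```python
import random

def ornstein_spacers(t_k, p_k, x_last, rng):
    """Spacers a_i^{(k)} = t_k + x_{k,i} - x_{k,i-1} with x_{k,0} = 0,
    x_{k,1..p_k-1} drawn from a uniform xi_k on {-t_k/2, ..., t_k/2},
    and x_{k,p_k} = x_last fixed deterministically."""
    half = t_k // 2
    x = [0] + [rng.randint(-half, half) for _ in range(p_k - 1)] + [x_last]
    return [t_k + x[i] - x[i - 1] for i in range(1, p_k + 1)]

a = ornstein_spacers(t_k=10, p_k=5, x_last=3, rng=random.Random(0))
print(a, sum(a))  # telescoping gives sum(a) = p_k * t_k + x_{k,p_k}
```

The telescoping sum reproduces the height recursion: h_{k+1} = p_k h_k + sum(a) = p_k (h_k + t_k) + x_{k,p_k}.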
Later in \cite{Prikhodko}\nPrikhod'ko obtained the same result for a special choice of the\nsequence of distributions ${(\xi_m)}$, and recently, using an\nidea of D. Creutz and C. E. Silva \cite{Creutz-silva}, one can\nextend this result to a large family of probability measures\nassociated to the Ornstein construction. In our general construction,\naccording to (\ref{eqn:type1}), the spectral type of each\n$T_{\omega}$, up to a discrete measure, is given by\n \[\n\sigma_{T_\omega }=\sigma^{(\omega)}\n_{\chi_{B_0}}=\sigma^{(\omega)} =W ^{*}\lim \prod_{l=1}^N\frac\n1{p_l}\left| \sum_{p=0}^{p_l-1}z^{p(h_l+t_l)+x_{l,p}}\right|\n^2d\lambda.\n\]\n\n\noindent{} With the above notations, we state Bourgain's theorem in\nthe following\nform:\\\n\n\bigskip\n\noindent {\bf Theorem 4.1 (\cite{elabdal2}). }{\it For every\nchoice of $(p_k), (t_k), (x_{k,p_k})$ and for any family of\nprobability measures ${\xi_k}$ on $X_k=\{-t_k,\cdots,t_k\}$ of\n${\sym Z}$, $ {k \in {\sym N}^*}$, for which\n$$\sum_{s \in X_m}\xi_m(s)^2 \tend{m}{\infty}0,$$ \noindent{}the\nassociated generalized Ornstein transformation almost surely has\nsingular spectrum, i.e.,\n\[\n{\sym P}\{\omega~:~ \sigma^{(\omega)} ~\bot ~\lambda\}=1,\n\]\n\noindent{}where ${\sym P} \stackrel {\rm def} {=}\otimes_{l=0}^{\infty}\n\otimes_{j=1}^{p_l-1}{\xi_l}$ is the probability measure on\n$\Omega=\prod_{l=0}^\infty X_l^{p_l-1}$. } \vskip0.5cm\n\n\n\noindent{}In the context of Ornstein's construction, we state\nProposition 2.5 in the following form:\\\n\n\noindent {\bf Proposition 4.2. 
}{\\it There exists a subsequence of\nthe sequence $\\left (\\left |\\left| P_m(z)\\right|-1 \\right|\\right)$\nwhich converges weakly to some non-negative function\n$\\phi(\\omega,\\theta)$ which satisfies $\\phi \\leq 2$, almost surely\nwith respect to ${\\sym P} \\otimes \\lambda$.}\n\\\\\n\n\\begin{proof}\nEasy exercise.\n\\end{proof}\\\\\n\n\n\\noindent Put, for any $\\omega\\in \\Omega$,\n\\[\n\\alpha_{\\omega}=\\phi(\\omega,\\theta)d\\lambda.\n\\]\nWe shall prove that $\\alpha_{\\omega}$ is equivalent to the\nLebesgue measure for almost all $\\omega$. In fact, we have the\nfollowing proposition:\n\\\\\n\n\\noindent {\\bf Proposition 4.3. }{\\it There exists an absolute\npositive constant $K$ such that\n for almost all $\\omega$ we have\n$$\\alpha_{\\omega} \\geq K \\lambda.$$}\n\\\\\n\n\\noindent The proof is based on the following two lemmas.\n\n\\noindent {\\bf Lemma 4.4. }{\\it $\\displaystyle \\int \\int\n||P_m|-|P_m'|| d\\theta d{\\sym P} \\tend{m} {\\infty} 0$, where $\\displaystyle\nP_m'(\\theta)=P_m(\\theta)-\\int_{\\Omega}P_m(\\theta)d{\\sym P}.$ }\n\n\n\\begin{proof} For any $m \\in {\\sym N}$, we have\n\\begin{eqnarray*}\n\\int \\int ||P_m|-|P_m'|| d\\theta d{\\sym P} &\\leq& \\int \\int |P_m-P_m'|\nd\\theta d{\\sym P} \\\\ & = &\n\\int |\\int P_m d{\\sym P}| d\\theta\\\\\n&\\leq& \\int |\\sum_{p=0}^{p_m-1}\\frac1{\\sqrt{p_m}} z^{p(h_m+t_m)}| |\\sum_{s\\in\nX_{m}}\\xi_m(s)z^s| dz.\n\\end{eqnarray*}\nHence, by the Cauchy-Schwarz inequality,\n\\begin{eqnarray*}\n\\int \\int ||P_m|-|P_m'|| d\\theta d{\\sym P} \\leq \\Big(\\sum_{s \\in X_m}\\xi_m(s)^2\\Big)^{1\/2}\n\\tend{m}{\\infty}0.\n\\end{eqnarray*}\nThe proof of the lemma is complete.\n\\end{proof}\\\\\n\\vskip0.5cm \\noindent Now observe that we have\n\\[\n\\int_T |\\sum_{s\\in X_{m}}\\xi_m(s)z^s|^2 dz=\n\\int_T |\\sum_{s\\in X_{m}}\\xi_m(s)z^{2s}|^2 dz=\\sum_{s \\in X_m}\\xi_m(s)^2\n\\tend{m}{\\infty}0.\n\\]\nSo, we may extract a subsequence $(m_k)$ for which, for almost all\n$t \\in [0,2\\pi)$, we 
have\n\\[\n\\sum_{s\\in X_{m_k}}\\xi_{m_k}(s)e^{ i s t} \\tend{k}{\\infty}0\n{\\rm {~~and~~}} \\sum_{s\\in X_{m_k}}\\xi_{m_k}(s)e^{ 2 i s t} \\tend{k}{\\infty}0.\n\\]\n\\noindent Define\n\\[\n\\Theta\n\\stackrel{def}{=}\\{\\theta ~:~ \\sum_{s\\in X_{m_k}}\\xi_{m_k}(s)e^{i\n s \\theta}\\tend{k}{\\infty}0 {\\rm {~and~}} \\sum_{s\\in X_{m_k}}\\xi_{m_k}(s)e^{ 2 i s \\theta}\n \\tend{k}{\\infty}0\\}.\n\\]\n\\noindent Choose $m \\in \\{m_k\\}$, $t \\in \\Theta$ and put, for $j \\in\n\\{0,\\cdots,p_m-1\\}$,\n\\begin{eqnarray*}\nY_{m,j}(\\omega)&=&\\cos((j(h_m+t_m)+x_{m,j}(\\omega))t)-\\int\n\\cos((j(h_m+t_m)+x_{m,j}(\\omega))t) d{\\sym P},\\\\\nZ_{m,j}(\\omega)&=&\\sqrt{\\frac{2}{p_m}}Y_{m,j}(\\omega).\n\\end{eqnarray*}\n\\\\\n\\noindent {\\bf Lemma 4.5. }{\\it For any fixed $t \\in \\Theta$, the\ndistribution of the sequence of random variables $\\displaystyle\n\\sum_{j=0}^{p_m-1} Z_{m,j}(\\omega)$ converges\nto the Gaussian distribution. That is,\n\\\\\n\\begin{eqnarray}{\\label{eq:beq6}}\n{\\sym P} \\left \\{\\omega \\in \\Omega~~:~~\n\\sum_{j=0}^{p_m-1} Z_{m,j}(\\omega) \\leq x\n\\right \\} \\tend{m}{\\infty}\\frac1{\\sqrt{2\\pi}}\\int_{-\\infty}^{x}\ne^{-\\frac12u^2}du\\stackrel{\\rm {def}}{=}{\\mathcal {N}}(]-\\infty,x]).\n\\nonumber\n\\end{eqnarray}\n}\n\\begin{proof}\nSince the random variables are independent, centred and uniformly bounded by $\\displaystyle\n\\frac{2\\sqrt{2}}{\\sqrt{p_m}}$, conditions (1), (2) and (4) of\nTheorem 3.4 are satisfied. 
We have also the following:\n\\begin{eqnarray*}\n&&{\\sym E} \\left \\{ \\left(\\sum_{j=1}^{p_m-1}{Z_{m,j}^2-1}\\right) ^{2} \\right\\} \\\\\n&& = \\frac{4(p_m-1)}{p_m^2}{\\sym E} ( Y_{m,1}^4)+\\frac{(p_m-1)(p_m-2)}{p_m^2}\n({\\sym E} (2Y_{m,1}^2))^2-2\\frac{p_m-1}{p_m} {\\sym E} (2 Y_{m,1}^2)+1.\n\\end{eqnarray*}\n\\noindent{}Since ${\\sym E}(2Y_{m,1}^2) \\tend{m}{\\infty}1$, it follows\nthat the variables $\\{Z_{m,j}\\}$ satisfy condition (3) of Theorem 3.4.\nThus all the conditions of Theorem 3.4 hold and we conclude that the distribution of\n$\\displaystyle \\sum_{j=0}^{p_m-1} Z_{m,j}(\\omega)$\nconverges to the normal distribution.\n\\end{proof}\\\\\n\n\\begin{proof}[Proof of Proposition 4.3]\nLet $A$ be a Borel subset of ${\\sym T}$, $C$ a cylinder set in\n$\\Omega$, and $x \\in ]1,+\\infty[$. Then, for any positive integer\n$m$, we have\n\n\\begin{eqnarray*}{\\label{eq :beqfin}}\n&&\\int_{A \\times C} ||P_m(\\theta)|-1| d\\lambda(\\theta) d{\\sym P} \\\\\n&\\geq& {\\sym P}(C) \\int_{A \\times \\Omega} ||P'_m(\\theta)|-1|\nd\\lambda(\\theta) d{\\sym P}\n-\\int||P_m|-|P_m'||d{\\sym P} d\\lambda\\\\\n&\\geq& {\\sym P}(C) \\int_{ \\{|P'_m|>x\\} \\bigcap A \\times\n\\Omega}||P'_m|-1| d\\lambda(\\theta)d{\\sym P}\n-\\int||P_m|-|P_m'||d{\\sym P} d\\lambda\\\\\n&\\geq&{\\sym P}(C) (x-1)\\int_A{\\sym P}\\{|\\Re\n{(P'_m(\\theta))}|>x\\}d\\lambda-\\int||P_m|-|P_m'||d{\\sym P} d\\lambda.\n\\end{eqnarray*}\n\\noindent{} Let $m$ go to infinity and combine Lemmas 4.4 and\n4.5 to get\n\n\\[\n\\int_{A \\times C} \\phi d\\lambda d{\\sym P} \\geq (x-1)\\{1-{\\mathcal\n{N}}([-\\sqrt{2}x,\\sqrt{2}x])\\}\\lambda(A) {\\sym P}(C).\n\\]\n\n\\noindent{}Put $K=(x-1)\\{1-{\\mathcal {N}}([-\\sqrt{2}x,\\sqrt{2}x])\\}$. 
Hence, for\nalmost all $\\omega$, we have, for any Borel set $A \\subset {\\sym T}$, $ \\alpha_{\\omega}(A) \\geq K\n\\lambda(A)$, and the proof of the\nproposition is complete.\n\\end{proof}\\\\\n\n\\begin{proof}[Proof of Theorem 4.1]\nFollows easily from Proposition 4.3 combined with\nProposition 2.6.\n\\end{proof}\n\\\\\n\n\\noindent{}{\\bf Remark.} We point out that there exist rank one \nmixing transformations on a space with finite measure satisfying\nthe condition of Theorem 3.6. In fact, following the notations of section 4, one may define\nthe spacers in the Ornstein construction by\n\\[\n {a_j^{(k)}}={3^j} t_k+x_{k,j}-x_{k,j-1},\n\\]\n\n\\noindent{}and choose the sequence $(t_k)_{k \\in {\\sym N}}$ such that the measure of the dynamical system is finite.\nThus the condition of Theorem 3.6 holds and the class is\nmixing almost surely.\\\\\n\n\\begin{center} {\\bf Acknowledgements}\n\\end{center}\nI would like to express my thanks to J-P. Thouvenot who posed to me\nthe problem of the singularity of the spectrum of rank one\ntransformations, and also to B. Host, F. Parreau and F. Bassam for\nmany conversations on the subject I have had with them. My thanks\nalso to Andrew Granville and the organizers of SMS-NATO ASI 2005\nSummer School.\\\\\nI am also grateful to the referee for a number of valuable suggestions and remarks and for \nhaving corrected the mistakes in the first version of this work.\n\n\\bibliographystyle{nyjplain}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\nLet $G\\subset \\mathrm{SU(2)}$ be a finite subgroup. According to the McKay correspondence,\nsuch a subgroup gives rise to a graph $Q$ which turns out to be an affine\nDynkin diagram of ADE type. Let $\\g$ be the corresponding affine Lie algebra. 
\n\n\n\nThere are several approaches allowing one to construct $\\g$ from $G$: \n\\begin{enumerate}\n\\item There is a geometric construction, due to Nakajima and Lusztig, of $\\g$\n and its representations in terms of the quiver varieties associated to graph\n $Q$. These varieties are closely related to the moduli spaces of instantons\n on the resolution of singularities $\\widetilde{\\CC^2\/G}$. \n\n\\item There is an algebraic construction, due to Ringel \\cite{ringel2}, which\n allows one to get the universal enveloping algebra $U\\n_+$ of the positive\n part of $\\g$ as the Hall algebra of the category $\\Rep Q_h$, where $h$ is an\n orientation of $Q$ and $Q_h$ is the corresponding quiver. It was shown by \n Peng and Xiao \\cite{peng-xiao} that replacing $\\Rep Q_h$ by the\n quotient $R=D^b(\\Rep Q_h)\/T^2$ of the corresponding derived category, we\n can get a description of all of $\\g$. Different choices of orientation\n give rise to equivalent derived categories $R$, with equivalences given\n by Bernstein-Gelfand-Ponomarev reflection functors; in\n terms of $\\g$, these functors correspond to the action of the braid group\n of the corresponding Weyl group.\n \n \\end{enumerate}\nRecently, a third approach was suggested by Ocneanu \\cite{ocneanu}\n(unpublished), in a closely related setting of a subgroup in quantum\n$\\mathrm{SU(2)}$, for $q$ being a root of unity. His approach is based on\nstudying essential paths in ${\\widehat{Q}}=Q\\times \\Z\/h\\Z$, where $h$ is the\norder of the root of unity. This approach is purely combinatorial: all\nconstructions are done using this finite graph and vector spaces of\nessential paths between points in this graph, without involving any\ncategories at all. 
\n\nThis paper grew out of the author's attempt to understand Ocneanu's\nconstruction and in particular, find the appropriate\ncategorical interpretation of his combinatorial constructions;\nhowever, for simplicity we do it for subgroups in classical $\\mathrm{SU(2)}$,\nleaving the analysis of the subgroups in quantum $\\mathrm{SU(2)}$ for future\npapers. We show that Ocneanu's essential paths in ${\\widehat{Q}}$ have a\nnatural interpretation in terms of the category ${Coh_G(\\PP)}$ of\n$G$--equivariant $\\OO$-modules on $\\PP$ (or, rather,\nits ``even part'' $\\C={Coh_G(\\PP)}_0$, for certain natural $\\Z_2$ grading on\n${Coh_G(\\PP)}$). This also provides a relation with Ringel--Lusztig\nconstruction: for (almost) any choice of\norientation $h$ of $Q$, we construct an equivalence of triangulated\ncategories \n$$\nR\\Ph_h\\colon D^b( {Coh_G(\\PP)}_0)\\simeq D^b(\\Rep Q_h)\n$$\nThese equivalences agree with the equivalences of $D^b(\\Rep Q_h)$ for\ndifferent choices of $h$ given by reflection functors. As a corollary, we\nsee that the Grothendieck group $L=K(\\C)$ is an affine root lattice, and\nthe set $\\Delta$ of classes of indecomposable modules is an affine root\nsystem. \n\nThis construction of the affine root system via equivariant\nsheaves has a number of remarkable properties, namely:\n\\begin{enumerate}\n \\item This does not require a choice of orientation of\n $Q$ (unlike the category $D^b(\\Rep Q_h)$, where we first choose\n an orientation and then prove that the resulting\n derived category is independent of orientation).\n \\item The indecomposable objects in the category ${Coh_G(\\PP)}$ can be\n explicitly described. 
Namely, they are the\n sheaves $\\OO(n)\\ttt X_i$, where $X_i$ are irreducible\n representations of $G$ (these sheaves and their translations\n correspond to real roots of $\\g$) and\n torsion sheaves, whose support is a $G$--orbit in $\\PP$ (in\n particular, torsion sheaves whose support is an orbit of a\n generic point correspond to imaginary roots of $\\g$).\n \\item This construction of the affine root system does not give \n a natural polarization into negative and positive roots.\n Instead, it gives a canonical Coxeter element in the corresponding\n affine Weyl group, which is given by $C[\\F]=[\\F\\ttt \\OO(-2)]$\n (this also corresponds to the Auslander--Reiten functor\n $\\tau$).\n \n \\item This construction gives a bijection of the vertices of the\n affine Dynkin diagram $Q$ and (some of) the $C$-orbits in\n $\\Delta$ (as opposed to the quiver construction, where vertices\n of $Q$ are in bijection with the simple positive roots). \n \n\\end{enumerate}\n\n\n\nFor $G=\\{1\\}$ (which is essentially equivalent to $G=\\{\\pm 1\\}$),\nthese results were first obtained in the paper \\cite{baumann-kassel},\nwhere it was shown that the corresponding Hall algebra contains the\nsubalgebra isomorphic to $U\\widehat{\\slt}$. This paper in turn was\ninspired by an earlier paper of Kapranov \\cite{kapranov}. \n\n\nIt should be noted that many of the results obtained here have already\nbeen proved in other ways. Most importantly, it had been proved by\nLenzing \\cite{geigle-lenzing,lenzing} that the derived category of\nequivariant sheaves on $\\PP$ is equivalent to the derived category of\nrepresentation of the corresponding quiver; this result has been used by\nSchiffmann \\cite{schiffmann,schiffmann2} for construction of a subalgebra\nin the corresponding affine Lie algebra via Hall algebra of the category of\nequivariant sheaves. However, the construction of\nequivalence in \\cite{lenzing} is different than the one suggested here. 
The\nprimary difference is that in Lenzing's construction, the Dynkin diagram\n$Q$ is constructed as the star diagram, with lengths of branches determined\nby the branching points of the cover $\\PP\\to X=\\PP\/G$, and he uses\na standard orientation of this diagram. In the construction presented\nhere, the diagram $Q$ is defined in a more standard way, using the set\n$I$ of irreducible representations of $G$; more importantly, we construct\nnot a single equivalence but a collection of equivalences, one for each\nadmissible orientation. In Lenzing's construction, torsion sheaves\nnaturally play a major role; in our construction, we concentrate on\nthe study of locally free sheaves. \n\nLenzing's results apply not only to $\\PP\/G$ but to a much larger\nclass of ``non-commutative projective curves''. However, the downside of\nthis is that the language he uses is rather technical, making his papers\nsomewhat hard for non-experts. For this reason, we have chosen to give\nindependent proofs of some of the results, thus saving the reader the\nnecessity of learning the language of non-commutative curves. Of course,\nwe tried to clearly mark the results which were already known.\n\n\n{\\bf Acknowledgments.} The author would like to thank P.~Etingof,\nV.~Ostrik, and O.~Schiffmann for fruitful discussions. Special thanks to\nA.~Ocneanu, whose talk inspired this paper. \n\n\\section{Basic setup}\\label{s:basic}\nThroughout the paper, we work over the base field $\\CC$ of complex\nnumbers. $G$ is a finite subgroup in $\\mathrm{SU(2)}$; for simplicity, we assume\nthat $\\pm I\\in G$ and denote $\\Gbar=G\/\\{\\pm I\\}$ (this excludes\n$G=\\Z_n$, $n$ odd; this case can also be included, but some of the\nconstructions of this paper will require minor changes). 
We denote by\n$V$ the standard 2-dimensional space, considered as a tautological\nrepresentation of $\\mathrm{SU(2)}$ (and thus of $G$) and \n$$\nV_k=S^k V\n$$\nis the $k+1$--dimensional irreducible representation of $\\mathrm{SU(2)}$ (and\nthus a representation, not necessarily irreducible, of $G$). \n\nWe denote by $\\Rep G$ the category of finite-dimensional\nrepresentations of $G$ and by $I=\\Irr G$ the set of isomorphism\nclasses of simple representations; for $i\\in I$, we denote by $X_i$\nthe corresponding representation of $G$. Since $G\\supset \\{\\pm I\\}$,\nthe category $\\Rep G$ is naturally $\\Z_2$--graded:\n\\begin{equation}\\label{e:even-odd}\n\\Rep G=\\Rep_0 G\\oplus \\Rep_1 G,\\quad \n\\Rep_p G=\\{V\\in \\Rep G\\st (-I)|_V=(-1)^p\\id \\}\n\\end{equation}\nFor homogeneous object $X$, we define its ``parity'' $p(X)\\in \\Z_2$\nby \n\\begin{equation}\\label{e:parity}\np(X)=p\\text{ if } X\\in \\Rep_p G\n\\end{equation}\nin particular, $p(V)=1$. We will also define, for $i\\in I$, its\nparity $p(i)=p(X_i)$. This gives a decomposition \n\\begin{equation}\\label{e:decomposition_I}\nI=I_0 \\sqcup I_1.\n\\end{equation}\n\n\nWe define the graph $Q$ with the set of vertices $V(Q)=I$ and\nfor every two vertices $i,j$, the number of edges connecting them is\ndefined by \n$$\nn(i,j)=\\dim \\Hom_G(X_i,X_j\\ttt V)\n$$\n\nNote that decomposition \\eqref{e:decomposition_I} shows that this\ngraph is bipartite: $V(Q)=V_0(Q)\\sqcup V_1(Q)$, and all edges connect\nvertices of different parities. \n\n\nIt is well-known that one can construct an isomorphism\n$$\n\\{\\text{Paths of length $l$ in $Q$ connecting $i,j$}\\}\n=\\Hom_G(X_i, V^{\\ttt l}\\ttt X_j)\n$$\nand that $\\Hom_G(X_i, V_l \\ttt X_j)$ can be described as the space of\n``essential paths'' in $Q$, which is naturally a direct summand in\nthe space of all paths (see \\cite{coq-garcia}). The algebra of essential\npaths is also known as the preprojective algebra of $Q$. 
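The multiplicities $n(i,j)=\dim \Hom_G(X_i,X_j\ttt V)$ can be computed directly by character theory. The following sketch (an illustration not present in the original text) does this for the quaternion group $Q_8\subset \mathrm{SU(2)}$, using its standard character table, and recovers the affine $D_4$ diagram: the two-dimensional representation is joined to each of the four one-dimensional ones by a single edge.

```python
from itertools import product

# Characters of the quaternion group Q8 in SU(2) on its five conjugacy
# classes {1}, {-1}, {+-i}, {+-j}, {+-k}, with class sizes 1,1,2,2,2.
class_sizes = [1, 1, 2, 2, 2]
chars = {                      # irreducible characters chi_X
    "triv": [1, 1, 1, 1, 1],
    "eps1": [1, 1, 1, -1, -1],
    "eps2": [1, 1, -1, 1, -1],
    "eps3": [1, 1, -1, -1, 1],
    "V":    [2, -2, 0, 0, 0],  # the tautological 2-dimensional representation
}
order = sum(class_sizes)       # |Q8| = 8

def n_edges(i, j):
    """n(i,j) = dim Hom_G(X_i, X_j (x) V), by character orthogonality."""
    return round(sum(s * chars[i][c] * chars[j][c] * chars["V"][c]
                     for c, s in enumerate(class_sizes)) / order)

adj = {(i, j): n_edges(i, j) for i, j in product(chars, repeat=2)}
# "V" is joined to each one-dimensional node exactly once, and there are
# no other edges: the McKay graph of Q8 is the affine D4 diagram.
```

Since $V$ is self-dual, the matrix `adj` is automatically symmetric, as required for an (unoriented) graph.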
\n\n\nBy McKay correspondence, $Q$ must be an affine Dynkin diagram. \nWe denote by $\\Delta(Q)$ the corresponding affine root system; it has a\nbasis of simple roots $\\alpha_i, i\\in I$. We denote \n$$\nL(Q)=\\bigoplus_{i\\in I} \\Z\\alpha_i =\\Z^I\n$$\nthe corresponding root lattice. It has a natural bilinear\nform given by $(\\alpha_i,\\alpha_i)=2$ and $(\\alpha_i,\\alpha_j)=-n(i,j)$; as is\nwell-known, this form is positive semidefinite. The kernel of this\nform is $\\Z\\delta$, where $\\delta$ is the imaginary root of $\\Delta(Q)$. \n\nWe denote by $s_i\\colon L(Q)\\to L(Q)$ the reflection around root\n$\\alpha_i$. \n\nFinally, let $K(G)$ be the Grothendieck group of the category $\\Rep G$.\nIt is freely generated by classes $[X_i], i\\in I$; thus, we have a\nnatural isomorphism \n\\begin{equation}\\label{e:K(G)}\n\\begin{aligned}\nK(G)&\\isoto L(Q)\\\\\n[X_i]&\\mapsto \\alpha_i\n\\end{aligned}\n\\end{equation}\n\n\n\n\\section{Quiver $Q_h$}\n\nWe will consider a special class of orientations of $Q$. \n\n\\begin{definition}\\label{d:height_function}\nA {\\em height function} $h$ is a map $I\\to \\Z$ such that \n$h(i)-h(j)=\\pm 1$ if $i,j$ are connected by an edge in $Q$, and\n$h(i)\\equiv p(i) \\pmod 2$. \n\\end{definition}\n\nEvery height function gives rise to orientation of edges of $Q$: if\n$h(j)=h(i)-1$ then all edges connecting $i$ and $j$ are directed\ntowards $j$:\n$$\ni\\longrightarrow j \\quad\\text{ if } h(j)=h(i)-1\n$$\n\n\nWe will denote by $Q_h$ the quiver given by this orientation. We will\nwrite $i\\to j$ if there exists an edge whose tail is $i$ and head is\n$j$. The notation $\\sum_{j\\colon j\\to i}$ will mean the sum over all\nvertices $j$ connected with $i$ by an edge $j\\to i$; if there are\nmultiple edges, the corresponding vertex $j$ will be taken more than\nonce. \n\nOrientations obtained in this way will be called {\\em\nadmissible} (note: our use of this word is slightly different from the\nuse in other sources). 
It is easy to check that if $Q$ has no loops,\nthen any orientation of $Q$ is admissible. For type $A$, an\norientation is admissible if the total number of clockwise arrows is\nequal to the number of counterclockwise ones (which again rules out\ntype $\\widehat{A}_n$, $n$ even, corresponding to $G=\\Z_{n+1}$). It is\nalso obvious that $Q_h$ has no oriented loops, and that\nadding a constant to $h$ gives the same orientation. \n\nGiven a height function $h$, we will draw $Q$ in the plane so that\n$h$ is the $y$--coordinate; then all edges of $Q_h$ are directed\ndown. \\firef{f:Qh} shows an example of a height function for a\nquiver of type $D$.\n\\begin{figure}[ht]\n\\fig{Qh.eps}\n\\caption{Example of a height function for Dynkin diagram\n$\\widehat{D}_7$}\\label{f:Qh}\n\\end{figure}\n\n\\begin{definition}\\label{d:reflection_functors1}\nLet $h$ be a height function on $Q$, and $i\\in I$ be a sink in $Q_h$:\nthere are no edges of the form $i\\to j$ (in terms of $h$, it is\nequivalent to saying that $h$ has a local minimum at $i$). We define a\nnew height function \n$$\ns_i^+h (j)=\\begin{cases}\n h(j)+2, & j=i\\\\\n h(j), & j\\ne i\n \\end{cases}\n$$\nSimilarly, if $i$ is a source, i.e., there are no edges of the\nform $j\\to i$ (equivalently, $h$ has a local maximum at $i$), then \nwe define \n$$\ns_i^-h (j)=\\begin{cases}\n h(j)-2, & j=i\\\\\n h(j), & j\\ne i\n \\end{cases}\n$$\n\\end{definition}\nOne easily sees that $s_i^+h$, $s_i^-h$ are again height functions;\nthe quiver $Q_{s_i^\\pm h}$ is obtained from $Q_h$ by reversing\norientation of all edges adjacent to $i$. We will refer to $s_i^\\pm$\nas (elementary) orientation reversal operations. \n\nThe following lemma is known; however, for the benefit of the reader\nwe include the proof. \n\n\\begin{lemma}\\label{l:reflection_functors1}\nAny two height functions $h, h'$ can be obtained one from another by\na sequence of orientation reversal operations $s_i^\\pm$. 
\n\\end{lemma}\n\\begin{proof}\nDefine the ``distance'' between two height functions by \\\\\n$d(h,h')=\\sum_I |h(i)-h'(i)|\\in \\Z_+$. We will show that if $d(h,h')>0$, then\none can\nfind $s_i^\\pm$ such that $s_i^\\pm$ can be applied to $h$, and\n$d(s_i^\\pm h, h')<d(h,h')$.\n\nIndeed, let $I_+=\\{i\\st h(i)>h'(i)\\}$. One easily sees that if $i\\in I_+$, and\n$j\\to i$ in $Q_h$, then $j\\in I_+$. Thus, if $I_+$ is non-empty, it \nmust contain at least one source $i$ for $Q_h$. But then $d(s_i^- h,\nh')=d(h, h')-2$. \n\nSimilarly, let $I_-=\\{i\\st h(i)<h'(i)\\}$; if $I_-$ is non-empty, it must\ncontain at least one sink $i$ for $Q_h$, and then $d(s_i^+ h, h')=d(h,h')-2$.\n\\end{proof}\n\n\\noindent We denote by $\\Rep Q_h$ the category of finite-dimensional\nrepresentations of the quiver $Q_h$ over $\\CC$, by $\\D^b(Q_h)$ its bounded\nderived category, and by $K(Q_h)$ the Grothendieck group of $\\Rep Q_h$.\nRecall that the category $\\Rep Q_h$ is hereditary: for any $M, N$,\n$$\n\\Ext^i(M,N)=0 \\quad \\text{for } i>1. \n$$\n\n For a representation $M=(M_i)_{i\\in I}$ of $Q_h$, we define its\ndimension $\\dim M\\in L(Q)$ by $\\dim M=\\sum (\\dim M_i)\\alpha_i$ (recall that\n$L(Q)=\\Z[I]$ is the root lattice of the root system $\\Delta(Q)$, see\n\\seref{s:basic}). \n \nThe following theorem summarizes some of the known results about\nrepresentations of quivers and the root system $\\Delta(Q)$. \n\n\n\\begin{theorem}\\label{t:reps_of_quivers}\n\\par\\indent\n\\begin{enumerate}\n\\item The map $[X]\\mapsto \\dim X$ gives an isomorphism $K(Q_h)\\isoto\nL(Q)$. Under this isomorphism, the bilinear form on $L(Q)$ is identified\nwith the following bilinear form on $K(Q_h)$: \n $$\n (x,y)=\\<x,y\\>+\\<y,x\\>\n $$\n where by definition \n $$\n \\<[X],[Y]\\>= \\dim \\RHom (X,Y)=\\dim \\Hom (X,Y) -\\dim \\Ext^1(X,Y).\n $$\n \\item The set of dimensions of indecomposable modules is exactly the\n set $\\Delta_+(Q)$ of positive roots in $\\Delta(Q)$. For a real root\n $\\alpha$, there is, up to isomorphism, exactly one indecomposable\n module $M_\\alpha$ of dimension $\\alpha$; for an\n imaginary root $\\alpha$, there are infinitely many pairwise\n non-isomorphic modules of dimension $\\alpha$. \n \n\\end{enumerate}\n\\end{theorem}\n\nThere is also an explicit description of indecomposables in\n$\\D^b(Q_h)$ (see \\cite[Lemma I.5.2]{happel}). 
\n\\begin{theorem}\\label{t:indecompos_derived}\n Indecomposable objects in $\\D^b(Q_h)$ are of the form $M[n]$, where\n $M$ is an indecomposable object in $\\Rep Q_h$, $n\\in \\Z$. \n\\end{theorem}\n\n\nWe will also need reflection functors of Bernstein--Gelfand--Ponomarev.\nRecall that if $i$ is a sink in $Q_h$, then one has a natural functor \n\\begin{equation}\\label{e:reflection_2}\n S_i^+\\colon \\Rep(Q_h)\\to \\Rep(Q_{s_i^+h})\n\\end{equation}\nSimilarly, if $i$ is a source, one has a natural functor \n\\begin{equation}\\label{e:reflection_3}\n S_i^-\\colon \\Rep(Q_h)\\to \\Rep(Q_{s_i^-h})\n\\end{equation}\n(see definition in \\cite{drab-ringel},\n\\cite{kraft-riedtmann}). \n\nThe following result is known and easy to prove, so we skip the\nproofs. \n\\begin{theorem}\\label{t:coxeter2}\n\\par\\indent \n \\begin{enumerate}\n \\item The functor $S_i^+$ is left exact and $S_i^-$ is right exact. \n \n \n We will denote by $RS_i^+, LS_i^-\\colon \\D^b(Q_h)\\to\n \\D^b(Q_{s_i^\\pm h})$ the corresponding derived functors. \n \\item\n The functors $RS_i^+$, $LS_i^-$ are equivalences of categories\n $\\D^b(Q_h)\\to\\D^b(Q_{s_i^\\pm h})$, which induce the usual\n reflections $s_i$ on the Grothendieck group: \n $$\n \\dim RS_i^+ (X)=s_i(\\dim X), \\quad \\dim LS_i^- (X)=s_i(\\dim X)\n $$\n \\item If $i,j$ are not neighbors in $Q$, then $RS_i^+,\n RS_j^+$ commute (i.e., compositions in different orders are\n isomorphic) and similarly for $LS_i^-$. \n \\end{enumerate}\n\\end{theorem}\n\n\n\nIn particular, for a given height function $h$ let $s^+_{i_1}\\dots\ns^+_{i_r}$ be a sequence of elementary orientation\nreversal operations such that \n$$\ns_{i_1}^+\\dots s_{i_r}^+ (h) =h+2\n$$\nOne easily sees that this condition is equivalent to requiring that every\nindex $i\\in I$ appears in the sequence $\\{i_1,\\dots, i_r\\}$ exactly once;\nit follows from \\leref{l:reflection_functors1} that for every height\nfunction $h$, such sequences of orientation reversal operations exist. 
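Both the elementary reversals $s_i^\pm$ and the distance-decreasing argument in the proof of the lemma on height functions are easy to make algorithmic. The following sketch (a hypothetical example on the affine $A_3$ cycle, not part of the original text) greedily applies $s_i^+$ at sinks and $s_i^-$ at sources to transform one height function into another:

```python
# Sketch (hypothetical example: the affine A_3 cycle on vertices 0..3)
# of the elementary orientation reversals s_i^+/s_i^- acting on height
# functions, following the distance-decreasing argument of the proof.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
nbrs = {i: [b if a == i else a for a, b in edges if i in (a, b)]
        for i in range(4)}

def is_sink(h, i):             # local minimum of h: all arrows point into i
    return all(h[j] == h[i] + 1 for j in nbrs[i])

def is_source(h, i):           # local maximum of h: all arrows point out of i
    return all(h[j] == h[i] - 1 for j in nbrs[i])

def reverse_toward(h, target):
    """Apply s_i^+ at sinks / s_i^- at sources until h equals target;
    each step decreases d(h, target) = sum |h(i) - target(i)| by 2."""
    h = list(h)
    while h != list(target):
        for i in range(4):
            if h[i] < target[i] and is_sink(h, i):
                h[i] += 2      # s_i^+
                break
            if h[i] > target[i] and is_source(h, i):
                h[i] -= 2      # s_i^-
                break
    return h
```

For instance, `reverse_toward((0, 1, 0, 1), (2, 1, 2, 1))` reaches the target in two reversals, at the two sinks 0 and 2.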
\nFor such a sequence, the corresponding element of the Weyl group\n\\begin{equation}\\label{e:coxeter}\nc^+_h=s_{i_1}\\dots s_{i_r}\n\\end{equation}\nis called the {\\em Coxeter element}, and\nthe corresponding composition of reflection functors \n\\begin{equation}\\label{e:C+}\nRC^+_h=RS_{i_1}^+\\dots RS_{i_r}^+\\colon \\D^b(Q_h)\\to\\D^b(Q_{h+2})\n\\end{equation}\nwill be called the {\\em Coxeter functor}. Note that since $Q_h\\simeq\nQ_{h+2}$ as a quiver, we can consider $RC_h^+$ as an autoequivalence of\n$\\D^b(Q_h)$. \n\nIt is easy to show (see \\cite{drab-ringel}, \\cite{shi}) that the\nCoxeter element $c^+_h$ only depends on $h$ and not on the choice of\nthe sequence $i_1,\\dots,i_r$; moreover, the proof of this only uses\nthe fact that $s_i, s_j$ commute if $i,j$ are not connected in $Q$\nand does not use the braid relations. Thus, by \\thref{t:coxeter2},\nthis implies that up to an isomorphism, $RC^+_h$ also depends only on\n$h$; this justifies the notation $RC_h^+$.\n\nSimilarly, we can define functors \n\\begin{equation}\\label{e:coxeter2}\nLC^-_h\\colon \\D^b(Q_h)\\to\\D^b(Q_{h-2})\\simeq\\D^b(Q_h)\n\\end{equation}\nusing sequences of orientation reversals $s_{i_1}^-\\dots s_{i_r}^- h\n=h-2$; the corresponding element of the Weyl group will be denoted by\n$c_h^-$. As before, it can be shown that $LC_h^-$, $c_h^-$ only depend\non $h$.\n\n\nFor readers familiar with the theory of Auslander--Reiten sequences\n(see \\cite{auslander-reiten}, \\cite{happel}), we add\nthat the category $\\Rep Q_h$ has Auslander--Reiten sequences, and the\nAuslander--Reiten functor $\\tau$ is given by $\\tau=C_h^-$.\n\n\n\\section{Equivariant sheaves}\nIn this section, we introduce the main object of this paper, the\ncategory of equivariant sheaves on $\\PP$. Most of the results of\nthis section are well-known and given here only for the convenience\nof reference. \\leref{l:frobenius} does not seem to be easily\navailable in the literature, but is very easy to prove. 
\n\nLet $V^*$ be the dual of the tautological representation $V$ of\n$\\mathrm{SU(2)}$. Since $G$ is a finite subgroup in $\\mathrm{SU(2)}$, it naturally acts on\n$\\PP=\\mathbb{P}(V^*)$, and the structure sheaf $\\OO$ has a standard\n$\\mathrm{SU(2)}$- (and thus $G$-) equivariant structure. Moreover, all twisted\nsheaves $\\OO(n)$ also have a standard equivariant structure, so that\nthe space of global sections $\\Gamma(\\OO(n))$ is a representation of \n$G$: \n\\begin{equation}\\label{e:H0}\n\\Gamma(\\OO(n))=\\begin{cases}\n S^nV=V_n, &n\\ge 0\\\\\n 0, &n<0\n \\end{cases}\n\\end{equation}\nSimilarly, the higher homology spaces are naturally representations of $G$: it\nis well-known that $H^i(\\PP, \\OO(n))=0$ for $i>1$, and \n\\begin{equation}\\label{e:H1}\nH^1(\\PP,\\OO(n))=\\begin{cases}\n S^{-n-2}V^*=V^*_{-n-2}, &n\\le -2\\\\\n 0, &n\\ge -1\n \\end{cases}\n\\end{equation}\n\nLet ${Qcoh_G(\\PP)}$, ${Coh_G(\\PP)}$ be the categories of $G$--equivariant\nquasi-coherent (respectively, coherent) $\\OO$--modules on $\\PP$\n(see, e.g., \\cite[Section 4]{bkr} for definitions). Note that we are\nconsidering isomorphisms $\\lambda_g\\colon \\F\\to g^*\\F$ as part of the\nstructure of the $G$--equivariant sheaf. For brevity, we will denote \nmorphisms and $\\Ext$ groups in ${Qcoh_G(\\PP)}$ by $\\Hom_G(X,Y)$, $\\Ext_G(X,Y)$.\nFor an equivariant sheaf $\\F$ we will denote \n$$\n\\F(n)=\\OO(n)\\ttt_\\OO \\F\n$$\nwith the obvious equivariant structure. Similarly, for a\nfinite-dimensional representation $X$ of $G$, we denote \n$$\nX(n)=\\OO(n)\\ttt_\\CC X.\n$$\n\n\nWe list here some of the basic properties of equivariant\nsheaves; proofs can be found in \\cite[Section 4]{bkr}.\n\\begin{theorem}\\label{t:Gcoh}\n\\par\\noindent\n \\begin{enumerate}\n \\item ${Qcoh_G(\\PP)}$ is an abelian category, and a sequence $0\\to\n \\F_1\\to \\F_2\\to \\F_3\\to 0$ is exact in ${Qcoh_G(\\PP)}$ iff it is exact in\n $Qcoh(\\PP)$. 
\n\n \\item For any $\\F,\\G\\in {Qcoh_G(\\PP)}$, the space $\\Hom_\\OO(\\F,\\G)$ is a\n representation of $G$, and $\\Hom_G(\\F,\\G)=(\\Hom_\\OO(\\F,\\G))^G$.\n Similarly, \\\\\n $\\Ext_G^i(\\F,\\G)=(\\Ext^i_\\OO(\\F,\\G))^G$; in\n particular, $\\Ext_G^i(\\F, \\G)=0$ for $i>1$.\n \n \\item For any $\\F,\\G\\in {Coh_G(\\PP)}$, the spaces $\\Hom_G(\\F,\\G)$,\n $\\Ext_G^1(\\F,\\G)$ are finite-dimensional. \n\n \\item For any $\\F,\\G\\in {Coh_G(\\PP)}$, one has \n \\begin{align*}\n \\Hom_G(\\F, \\G(n))=\\Hom_G(\\F(-n),\\G)=0\\quad \\text{for }n\\ll 0,\\\\\n \\Ext_G^1(\\F, \\G(n))=\\Ext_G^1(\\F(-n),\\G)=0\\quad \\text{for }n\\gg 0.\n \\end{align*}\n \n\\end{enumerate}\n\\end{theorem}\n\n\nAs an immediate corollary of this, we see that the category ${Coh_G(\\PP)}$ has the \nKrull--Schmidt property: every object $\\F\\in {Coh_G(\\PP)}$ can be written as a direct\nsum of indecomposable modules, and the multiplicities do not depend on the\nchoice of such decomposition. It is also a hereditary category (recall that a\ncategory is called hereditary if $\\Ext^2(A,B)=0$ for any objects $A,B$; in\nparticular, this implies that a quotient of an injective object is injective,\nas can be easily seen from the long exact sequence of $\\Ext$ groups). \n\nWe say that $\\F\\in{Coh_G(\\PP)}$ is locally free if it is locally free as an\n$\\OO$-module. \n\\begin{theorem}\\label{t:locally_free}\n\\par\\noindent\n\\begin{enumerate}\n \\item Every locally free $\\F\\in{Coh_G(\\PP)}$ is isomorphic to a direct sum of\n sheaves of the form \n \\begin{equation}\n X_i(n)=\\OO(n)\\ttt X_i, \\quad X_i \\text{ -- irreducible representation of\n$G$} \\end{equation}\n \\item If $\\F\\in{Coh_G(\\PP)}$ is locally free, then the functor $\\ttt \\F\\colon\n {Coh_G(\\PP)}\\to {Coh_G(\\PP)}$ is exact. \n \n \\item Every coherent $G$-equivariant sheaf has a resolution consisting of\n locally free sheaves. 
\n \\item \\textup{(}Serre Duality\\textup{)} \n For any two locally free sheaves $\\F, \\G$ we have isomorphisms \n $$\n \\Ext_G^1(\\F, \\G(-2))=\\Ext^1(\\F(2),\\G)=\\Hom_G(\\G, \\F)^* \n $$\n\\end{enumerate}\n\\end{theorem}\n\nIt immediately follows from the computation of the cohomology of the\nsheaves $\\OO(n)$ that \n\\begin{equation}\\label{e:hom1}\n \\Hom_G ( X_i(n), X_j(k))\n =\\begin{cases}\n \\Hom_G( X_i, V_{k-n}\\ttt X_j), &k\\ge n\\\\\n 0, & k<n\n \\end{cases}\n\\end{equation}\n\nRecall that $\\C={Coh_G(\\PP)}_0$ denotes the ``even part'' of ${Coh_G(\\PP)}$;\nin particular, the sheaves $X_q=X_i(n)$ with $q=(n,i)$, $n\\equiv p(i)\n\\pmod 2$, are objects of $\\C$. We say that $q'\\preceq q$ if\n$\\Hom_\\C(X_{q'}, X_q)\\ne 0$, and $q'\\prec q$ if in addition $q'\\ne q$. A\nheight function $h$ determines the set $Q_h=\\{(h_i,i),\\ i\\in I\\}$, and one\neasily checks that, for $q=(n,i)$:\n\\begin{enumerate}\n \\item $n>h_i \\iff (q\\notin Q_h, q\\succ q'$ for some $q'\\in Q_h)$. In\n this case, we will say that $q$ is above $Q_h$ and write\n $q\\succ Q_h$.\n \\item $n<h_i \\iff (q\\notin Q_h, q\\prec q'$ for some $q'\\in Q_h)$; in\n this case, we say that $q$ is below $Q_h$ and write $q\\prec Q_h$.\n\\end{enumerate}\n\nDefine the functor $\\Ph_h\\colon \\C\\to \\Rep Q_h$ by\n\\begin{equation}\\label{e:Ph}\n(\\Ph_h(\\F))(i)=\\Hom_\\C(X_i(h_i),\\F),\n\\end{equation}\nwhere for an edge $i\\to j$ of $Q_h$ the corresponding map\n$\\Hom_\\C(X_i(h_i),\\F)\\to \\Hom_\\C(X_j(h_j),\\F)$ is given by composition\nwith $\\Hom_\\C(X_j(h_j), X_i(h_i))=\\Hom_G(X_j, V\\ttt X_i)$.\n\n\\begin{lemma}\\label{l:Ph}\n\\begin{enumerate}\n \\item $\\Ph_h$ is left exact;\n \\item $R^i\\Ph_h(\\F)=0$ for $i>1$, and \n$$\n(R^1\\Ph_h(\\F))(i)=\\Ext_\\C^1(X_i(h_i),\\F)\n$$\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nFollows from left exactness of the functor $\\Hom_\\C(X_i(h_i), -)$ and the definitions. \n\\end{proof}\n\n\n\\begin{example}\\label{x:standard}\nLet $h$ be the ``standard'' height function: $h(i)=0$ for $i\\in I_0$,\n$h(i)=1$ for $i\\in I_1$. Then the map $\\Ph_h$ can also be described\nas follows. Consider the functor $\\Rep Q_h\\to \\Rep G$ defined by\n$\\{V_i\\}\\mapsto \\bigoplus V_i\\ttt X_i$. 
Then the composition\n$$\n\\C\\xxto{\\Ph_h} \\Rep Q_h\\to \\Rep G\n$$\nis given by \n$$\n\\F\\mapsto \\Gamma(\\F)\\oplus \\Gamma(\\F(-1)).\n$$\n\nIndeed, for each representation $M$ of $G$ we have \n$$\nM\\simeq\\bigoplus \\Hom_G(X_i,M)\\ttt X_i\n$$\nApplying it to $\\Gamma(\\F)=\\Hom_\\OO(\\OO,\\F)$ we get \n$$\n\\Gamma(\\F)=\\bigoplus_{i\\in I} \\Hom_\\OO(X_i\\ttt \\OO,\\F)^G\\ttt X_i\n=\\bigoplus_{i\\in I} \\Hom_G (X_i\\ttt \\OO,\\F)\\ttt X_i\n$$\nSince $\\F\\in \\C={Coh_G(\\PP)}_0$, we have $\\Hom_G (X_i\\ttt \\OO,\\F)=0$ for\n$i\\in I_1$; thus, \n$$\n\\Gamma(\\F)=\\bigoplus_{i\\in I_0} \\Hom_G (X_i\\ttt \\OO,\\F)\\ttt X_i\n$$\nSimilar argument shows that \n$$\n\\Gamma(\\F(-1))=\\bigoplus_{i\\in I_1} \\Hom_G (X_i\\ttt \\OO(1),\\F)\\ttt X_i\n$$\nThus, \n$$\n\\Gamma(\\F)\\oplus \\Gamma(\\F(-1))=\n\\left(\\bigoplus_{i\\in I_0} \\Hom_\\C (X_i,\\F)\\ttt X_i\\right)\n\\oplus\n\\left(\\bigoplus_{i\\in I_1} \\Hom_\\C (X_i(1),\\F)\\ttt X_i\\right)\n$$\nas desired. \n\\end{example}\n\nSince $\\Ph_h$ is left exact, we can define the associated derived functor\n$R\\Ph_h\\colon \\D^b(\\C)\\to \\D^b(\\Rep Q_h)$. \nThe following two theorems are the main results of this paper. \n\n\\begin{theorem}\\label{t:main2}\nFor any height function $h$, the functor $R\\Ph_h\\colon \\D^b(\\C)\\to\n\\D^b(Q_h)$ defined by \\eqref{e:Ph} is an equivalence of triangulated\ncategories.\n\\end{theorem}\n\\begin{proof}\nIt follows from \\leref{l:vanishing_of_homs} that the object \n$$\nT=\\bigoplus_{q\\in Q_h}X_q\n$$\nsatisfies $\\Ext^1(T,T)=0$. By \\leref{l:generators}, direct summands of\n $T$ generate $\\D^b(\\C)$ as a triangulated category. Therefore, $T$ is\nthe tilting object in $\\D^b(\\C)$ as defined in \\cite{geigle-lenzing}.\nNow the statement of the theorem follows from the general result of\n\\cite{geigle-lenzing}.\n\\end{proof}\n\n\n\n\\begin{theorem}\\label{t:main1}\nLet $i$ be a sink for $h$, and $S_i^+$ the reflection functor defined by \n\\eqref{e:reflection_2}. 
Then the following diagram is commutative:\n$$\n\\xymatrixrowsep{4pt}\n\\xymatrixcolsep{40pt}\n\\xymatrix{\n &\\D^b(Q_{s_i^+h})\\\\\n\\D^b(\\C)\\ar[ur]^{R\\Ph_{s_i^+h}}\\ar[dr]^{R\\Ph_{h}}&\\\\\n &\\D^b(Q_{h})\\ar[uu]_{RS_i^+}\n} \n$$\nand similarly for $S_i^-$. \n\\end{theorem}\n\\begin{proof}\n\n\n\nLet us first prove that if $q\\succ Q_h$, then \n$$\nR\\Ph_{s_i^+h}(X_q)=RS_i^+\\circ R\\Ph_h (X_q).\n$$ \nIndeed, in this case it follows\nfrom \\leref{l:Ph}, \\leref{l:vanishing_of_homs} that $R^i\\Ph_h(X_q)=0$\nfor $i>0$,\nso $R\\Ph_h(X_q)=\\Ph_h(X_q)$; similarly, \n$R\\Ph_{s_i^+h}(X_q)=\\Ph_{s_i^+h}(X_q)$.\n\nLet $i$ be a sink for $Q_h$. Then we have the Auslander--Reiten exact\nsequence\n$$\n0\\to X_i(h_i)\\to \\bigoplus_{j} X_j(h_j)\\to X_i(h_i+2)\\to 0\n$$\nApplying to this sequence functor $\\Hom_\\C(-, X_q)$ and using\n\\leref{l:vanishing_of_homs} which gives vanishing of\n$\\Ext^1$, we get a short exact\nsequence \n$$\n0\\to \\Hom_\\C(X_i(h_i+2), X_q)\\to \\bigoplus_{j} \\Hom_\\C(X_j(h_j), X_q)\\to\n\\Hom_\\C(X_i(h_i), X_q)\\to 0\n$$\nComparing this with definition of $S_i^+$, we see that \n$\\Ph_{s_i^+h}(X_q)=S_i^+(\\Ph_h(X_q))$. \n\nThus, we have shown that for $q\\succ Q_h$, we\nhave $R\\Ph_{s_i^+h}(X_q)=RS_i^+\\circ R\\Ph_h (X_q)$. \n\n\nTo complete the proof, we now use the following easy result.\n\n\\begin{lemma}\\label{l:technical4}\n Let $\\D$, $\\D'$ be triangulated categories, $\\Ph_1,\\Ph_2\\colon\n \\D\\to \\D'$ triangle functors, and $\\alpha\\colon\n \\Ph_1\\to\\Ph_2$ a morphism of functors. Assume that there is a\n collection of objects $X_q\\in \\D$ which generate $\\D$ as a\n triangulated category in the sense of \\leref{l:generators} and such\n that for each $X_q$, \n $$\n \\alpha\\colon \\Ph_1(X_q)\\to \\Ph_2(X_q)\n $$\n is an isomorphism. Then $\\alpha$ is an isomorphism of functors. 
\n\\end{lemma} \n\nApplying this lemma to $\\D=\\D^b(\\C)$, $\\D'=\\D^b(Q_{s_i^+h})$,\n$\\Ph_1=R\\Ph_{s_i^+ h}$, $\\Ph_2=RS_i^+\\circ R\\Ph_h$ and the \ncollection of objects $X_q, q\\succ Q_h$ (which\ngenerate $\\D^b(\\C)$ by \\leref{l:generators}), we get the statement of\nthe theorem. \n\\end{proof}\n\n\n\\begin{corollary}\\label{c:coxeter}\nWe have the following commutative diagram\n$$\n\\xymatrixrowsep{20pt}\n\\xymatrixcolsep{30pt}\n\\xymatrix{\n \\D^b(\\C)\\ar[r]^-{R\\Ph_h} \n &\\D^b(Q_h)\\simeq \\D^b(Q_{h+2})\\\\\n \\D^b(\\C)\\ar[r]^{R\\Ph_h}\\ar[u]_{\\ttt \\OO(-2)} \n &\\D^b(Q_h)\\ar[u]_{RC_h^+}\n}\n$$\nwhere $RC_h^+$ is the Coxeter element \\eqref{e:C+}, and similarly for\n$LC_h^-$. \n\\end{corollary}\n\n\n\\begin{remark}\nIt should be noted that while $R\\Ph_h$ is an equivalence of derived\ncategories, it is definitely not true that $\\Ph_h$ is an equivalence\nof abelian categories. For example, there are no injective or\nprojective objects in $\\C$, while there are enough injectives and\nprojectives in $\\Rep Q_h$. Similarly, the sets of simple modules are\nvery different in these two categories. However, the sets of\nindecomposable modules are effectively the same. \n\\end{remark}\n\n\n\\section{The root system and the Grothendieck group}\nIn this section, we list some important corollaries of the equivalence of\ncategories constructed in the previous section. Again, many of these\nresults can also be obtained from the equivalence of categories\nconstructed by Lenzing; however, we choose to provide an independent\nexposition. \n\n\nThroughout this section, we let $L=K(\\C)$ be the Grothendieck group of\nthe category $\\C$. We define the inner product on $L$ by \n\\begin{equation}\\label{e:euler_form}\n (x,y)=\\<x,y\\>+\\<y,x\\>\n\\end{equation}\n where by definition \n$$\n \\<[X],[Y]\\>=\\dim \\RHom (X,Y)=\\dim \\Hom (X,Y) -\\dim \\Ext^1(X,Y)\n$$\n(compare with \\thref{t:reps_of_quivers}). 
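For representations of a quiver, the Euler form has the standard combinatorial expression $\<a,b\>=\sum_i a_ib_i-\sum_{s\to t}a_sb_t$, so the symmetrized form above is just the generalized Cartan matrix of the underlying graph. A small numerical sketch of this (ours, not from the paper; it assumes the quiver Euler form just quoted) for the cyclic quiver of length 4, i.e. type $\widehat{A}_3$, also checks that $\delta=\sum_i\alpha_i$ lies in the radical of the form:

```python
import numpy as np

def symmetrized_euler_form(n_vertices, edges):
    # <a,b> = sum_i a_i b_i - sum_{s->t} a_s b_t  (quiver Euler form),
    # symmetrized as (a,b) = <a,b> + <b,a>; returned as a Gram matrix.
    E = np.eye(n_vertices, dtype=int)
    for s, t in edges:
        E[s, t] -= 1
    return E + E.T

# cyclic quiver with 4 vertices (affine type A_3): edges i -> i+1 mod 4
n = 4
gram = symmetrized_euler_form(n, [(i, (i + 1) % n) for i in range(n)])

delta = np.ones(n, dtype=int)   # delta = sum of the simple roots
print(gram.diagonal())          # each (alpha_i, alpha_i) = 2
print(gram @ delta)             # delta is in the radical: all zeros
```

The resulting Gram matrix is exactly the affine Cartan matrix of $\widehat{A}_3$, consistent with the identification of $\Delta$ with an affine root system below.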
We define $\\Delta\\subset L$ by \n\\begin{equation}\n\\Delta=\\{[X], X \\text{--- a non-zero indecomposable object in\n}\\D^b(\\C)\\}\n\\end{equation}\n\nFinally, we define the map $C\\colon \\Delta\\to\\Delta$ by \n$$\nC([X])=[X(-2)]\n$$\n\n\n\n\\begin{theorem}\\label{t:main3}\n\\par\\indent\n\\begin{enumerate}\n \\item The set $\\Delta\\subset L$ is an affine root system, and $C$ is\n a Coxeter element. \n \\item Recall the lattice $L(Q)$ and root system $\\Delta(Q)$ from\n \\seref{s:basic}. Then for any height function $h$ on $Q$, the map\n \\begin{equation}\n \\begin{aligned}\n R\\Ph_h\\colon L&\\to L(Q)\\\\\n [\\F]&\\mapsto \\bigoplus \n \\biggl(\\dim \\RHom_\\C(X_i(h_i), \\F)\\biggr) \\alpha_i\n \\end{aligned}\n \\end{equation}\n is an isomorphism of abelian groups which identifies\n $\\Delta\\subset L$ with $\\Delta(Q)\\subset L(Q)$ and $C$ with the\n Coxeter element $c_h^+$ defined by \\eqref{e:coxeter}. \n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\n The first part follows from the second one; the\n second part follows from the fact that $R\\Ph_h$ is an equivalence of\n categories (\\thref{t:main2} and \\coref{c:coxeter}). \n\\end{proof}\n\n\n\n\\begin{corollary}\\label{c:generators_of_K(C)}\n \\par\\noindent\n \\begin{enumerate}\n \\item \n For any height function $h$, the classes $[X_q]$, $q\\in Q_h$, are free\n generators of the Grothendieck group $K(\\C)$. \n \\item \n $K(\\C)$ is generated by the classes $[X_i(n)], n\\equiv p(i)\\mod 2$,\n and AR relations \\eqref{e:aus-reiten3} is the full list of all\n relations among classes $[X_i(n)]$.\n \\end{enumerate}\n\\end{corollary}\n\\begin{proof}\nBy \\leref{l:generators}, classes $[X_q]$, $q\\in Q_h$, generate $K(\\C)$. On\nthe other hand, by \\thref{t:main3}, $K(\\C)$ is isomorphic to $\\Z^I$ and\nthus has rank $|I|$, so these generators must be linearly independent. 
\n\nTo prove the second part, denote temporarily $L'=Span([X_i(n)])\/J$, where\n$J$ is the subgroup generated by AR relations \\eqref{e:aus-reiten3}.\nSince $[X_i(n)]$ generate $K(\\C)$ and AR relations hold in $K(\\C)$, we\nsee that $K(\\C)$ is a quotient of $L'$. \n\nNow choose some height function $h$. It follows from \n\\leref{l:reflection_functors1} that $[X_q]$, $q\\in Q_h$, generate $L'$, so\nit has rank at most $|I|$. On the other hand, $K(\\C)$ has rank $|I|$.\nThus, $L'$ has rank $|I|$ and $L'=K(\\C)$. \n\\end{proof}\n\n\n\\begin{example}\\label{x:standard2}\n Let $h$ be the ``standard'' height function as defined in\n\\exref{x:standard}. Then the map\n$R\\Ph_h\\colon K(\\C)\\to L(Q)\\simeq K(G)$\n is given by \n \\begin{align*}\n &[X_i]\\mapsto \\alpha_i, \\quad i \\in I_0\\\\\n &[X_i(1)]\\mapsto \\alpha_i+\\sum_{j-i}\\alpha_j, \\quad i \\in I_1\n \\end{align*}\n and the corresponding Coxeter element is $C=(\\prod_{i\\in\n I_1}s_i)(\\prod_{i\\in I_0}s_i)$. \n \n This also implies that for this $h$, $[X_i(-1)]\\mapsto -\\alpha_i$, $i\\in\n I_1$; in particular, we see that the classes\n $\\alpha_i, i\\in I_0$, and $-\\alpha_i, i\\in I_1$, form a set of representatives\n of $C$-orbits on ${\\widehat{Q}}$. An analogous statement for finite root\n systems was proved in \\cite{kostant2}. \n\\end{example} \n\n\n\nFor completeness, we also describe here the indecomposable sheaves\ncorresponding to imaginary roots of $\\Delta$.\n\n\n\\begin{theorem}\\label{t:imaginary_root}\nLet $x\\in \\PP$ be a generic point: $\\Stab_\\Gbar x=\\{1\\}$, and let \n$$\n\\delta=[\\OO_{[Gx]}]\\in \\Delta\n$$\n\\textup{(}see \\eqref{e:torsion_sheaf}\\textup{)}. 
Then: \n\\begin{enumerate}\n\\item $\\delta$ does not depend on the choice of point $x$\n\\item $\\delta=\\delta_0-\\delta_1$, where \n\\begin{align*}\n\\delta_0&=\\sum_{i\\in I_0}d_i[X_i]=[R_0]\\\\\n\\delta_1&=\\sum_{i\\in I_1}d_i[X_i(-1)]=[R_1(-1)],\n\\end{align*}\n where $d_i=\\dim X_i$ and $R_0, R_1$ are even and odd parts of the\nregular representation defined by \\eqref{e:R_p}. \n\n\\item $C\\delta=\\delta$; $C\\delta_0=\\delta_0-2\\delta$; $C\\delta_1=\\delta_1-2\\delta$\n\\item For any $\\alpha\\in L$, $(\\delta,\\alpha)=0$\n\\item $\\delta$ is a generator of the set of imaginary roots of $\\Delta$:\n$$\n\\Delta^{im}=\\Z\\delta\n$$\n\\end{enumerate}\n\\end{theorem}\n\\begin{proof}\nWe start by proving (2), by explicitly constructing a resolution of \n$\\OO_{[Gx]}$. \n\nLet $\\varphi_x$ be a holomorphic section of $\\OO(1)$ which has a simple\nzero at $x$. Then we have a short exact sequence of sheaves (not\nequivariant):\n$$\n0\\to\\OO\\xxto{\\varphi_x}\\OO(1)\\to \\OO_{[x]}\\to 0\n$$\nTensoring it with $\\OO(-1)$, we get a sequence\n$$\n0\\to\\OO(-1)\\xxto{\\varphi_x}\\OO\\to \\OO_{[x]}\\to 0\n$$\nNow let us apply to this sequence functor $\\Ind$ defined by\n\\eqref{e:ind}. Using \\leref{l:frobenius}, we see that it gives a\n$\\Gbar$-equivariant short exact sequence \n$$\n0\\to R_1\\ttt \\OO(-1)\\to R_0\\ttt\\OO\\to \\OO_{[Gx]}\\to 0\n$$\nwhich gives equality $\\delta=\\delta_0-\\delta_1$, thus proving part (2) of the\ntheorem.\n\n\n\nPart (1) follows from (2).\n\nSince $\\OO_{[Gx]}\\ttt \\OO(-2)\\simeq \\OO_{[Gx]}$, we get $C\\delta=\\delta$.\nTo compute $C\\delta_0$, note that the same argument as in the proof of\npart (2) also gives a short exact sequence \n$$\n0\\to R_0\\ttt \\OO\\to R_1\\ttt\\OO(1)\\to \\OO_{[Gx]}\\to 0\n$$\nthus giving $C^{-1}\\delta_1=\\delta+\\delta_0=2\\delta+\\delta_1$. \n\nTo prove part (4), recall notation\n$\\<[X],[Y]\\>=\\dim\\Hom(X,Y)-\\dim\\Ext^1(X,Y)$. 
Then Serre duality\nimmediately gives \n$$\n\\<x,Cy\\>=-\\<y,x\\>.\n$$\nSince $C\\delta=\\delta$, we get \n$$\n(\\delta,x)=\\<x,\\delta\\>+\\<\\delta,x\\>=\\<x,\\delta\\>-\\<x,\\delta\\>=0\n$$\n\n\nPart (4) implies that $\\delta$ is an imaginary root. Moreover, by\n\\coref{c:generators_of_K(C)}, the classes $[X_i]$, $[X_i(-1)]$ are free\ngenerators of $L$; since some of the $d_i$ are equal to 1, it follows\nfrom part (2) that for any $k>1$, $\\delta\/k\\notin L$; thus, $\\delta$ must be\na generator of the set of imaginary roots. \n\\end{proof}\n\nSince every indecomposable object in $\\D^b(\\C)$ is of the form\n$\\F[n]$, where $\\F$ is an indecomposable $G$-equivariant sheaf\n(\\thref{t:indecomposable}), it follows that every root $\\alpha\\in\\Delta$\ncan be written as either $\\alpha=[\\F]$ or $\\alpha=-[\\F]$, thus giving some\nsplitting of $\\Delta$ into positive and negative roots. This polarization\ncan be described explicitly. \n\nRecall from algebraic geometry (see, e.g., \\cite[Exercises II.6.10,\nII.6.12]{hartshorne}) that for any coherent sheaf, we can\ndefine two integers, its {\\em rank} and {\\em degree}. In\nparticular, for a locally free sheaf $\\F=X\\ttt \\OO(n)$ (where $X$ is a\nfinite-dimensional vector space), we have \n\\begin{align*}\n&\\rk (X\\ttt \\OO(n))=\\dim X,\\\\\n&\\deg (X\\ttt \\OO(n))= n\\dim X.\n\\end{align*}\nDegree and rank give well-defined linear maps $K\\to \\Z$, where $K$\nis the Grothendieck group of the category of coherent sheaves. \n\nIn particular, we can define rank and degree for a $G$-equivariant\nsheaf, ignoring the equivariant structure; this gives linear maps\n$K(\\C)\\to\\Z$, which we also denote by $\\rk, \\deg$. \n\n\n\\begin{lemma}\\label{l:rank}\n\\par\\indent\n \\begin{enumerate}\n \\item If $\\F\\in\\C$ is a non-zero free sheaf, then $\\rk \\F>0$.\n \\item If $\\F\\in\\C$ is a non-zero torsion sheaf, then $\\rk \\F=0$,\n $\\deg\\F>0$. 
\n \\item For any $x\\in K(\\C)$, we have \n $$\n \\rk (x)=(x,\\delta_0)=(x,\\delta_1)\n $$\n where $\\delta_0$, $\\delta_1$ are defined in \\thref{t:imaginary_root}.\n \\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n The first two parts are well-known. \n \n To check $\\rk (x)=(x,\\delta_0)=(x,\\delta_1)$, it suffices to check it\n for $x=[X_i]$, $i\\in I_0$ and $x=[X_i(-1)], i\\in I_1$ (by\n \\exref{x:standard2}, they generate $L$). If $i\\in I_0$,\n then \n $$\n ([X_i], \\delta_0)=\\dim \\Hom_G(X_i,\\sum_{j\\in I_0} d_jX_j)=d_i=\\rk X_i\n $$\n thus proving $\\rk (x)=(x,\\delta_0)$. Since $\\delta_0-\\delta_1=\\delta$ is in the\n kernel of $(\\, , \\,)$ (\\thref{t:imaginary_root}), this implies\n $(x,\\delta_0)=(x,\\delta_1)$. \n \n If $i\\in I_1$, then \n $$\n ([X_i(-1)], \\delta_1)=\\dim \\Hom_G(X_i(-1),\\sum_{j\\in I_1}\n d_jX_j(-1))=d_i=\\rk X_i\n $$\n thus proving $\\rk (x)=(x,\\delta_1)$. Since $\\delta_0-\\delta_1=\\delta$ is in the\n kernel of $(\\, , \\,)$ (\\thref{t:imaginary_root}), this implies\n $(x,\\delta_0)=(x,\\delta_1)$.\n\n\\end{proof}\n\nNote that the functional $\\deg x$ can not be written in terms of the\nform $(\\, , \\,)$: indeed, $\\deg \\delta =|\\Gbar|=|G|\/2$, but $(\\delta,\n\\cdot)=0$. 
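The values $\rk\delta=0$ and $\deg\delta=|\Gbar|$ follow directly from additivity of rank and degree on the resolution $0\to R_1\ttt\OO(-1)\to R_0\ttt\OO\to\OO_{[Gx]}\to 0$ constructed in the proof of \thref{t:imaginary_root}. A minimal numeric check (ours, not from the paper; it takes $G=\Z_6$ as an example, so $\dim R_0=\dim R_1=|G|\/2=3$):

```python
# rank and degree of a locally free class [X ⊗ O(m)], given dim X and twist m
def rk(dim_X, m):
    return dim_X

def deg(dim_X, m):
    return m * dim_X

n = 6                      # G = Z_6 (cyclic of even order), so |G|/2 = 3
dim_R0 = dim_R1 = n // 2   # even and odd parts of the regular representation

# delta = [O_{[Gx]}] = [R_0 ⊗ O] - [R_1 ⊗ O(-1)], from the resolution above
rk_delta = rk(dim_R0, 0) - rk(dim_R1, -1)
deg_delta = deg(dim_R0, 0) - deg(dim_R1, -1)
print(rk_delta, deg_delta)   # rank 0, degree |Gbar| = |G|/2 = 3
```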
\n\n\\begin{theorem}\nLet $\\alpha\\in \\Delta$.\n\\begin{enumerate}\n \\item $\\alpha=[\\F]$ for some indecomposable free sheaf $\\F\\in \\C$ \n iff $\\rk(\\alpha)=(\\alpha,\\delta_0)>0$\n \\item $\\alpha=-[\\F]$ for some indecomposable free sheaf $\\F\\in \\C$ \n iff $\\rk(\\alpha)=(\\alpha,\\delta_0)<0$\n \\item $\\alpha=[\\F]$ for some indecomposable torsion sheaf $\\F\\in \\C$ \n iff $\\rk(\\alpha)=(\\alpha,\\delta_0)=0$, $\\deg(\\alpha)>0$\n \\item $\\alpha=-[\\F]$ for some indecomposable torsion sheaf $\\F\\in \\C$ \n iff $\\rk(\\alpha)=(\\alpha,\\delta_0)=0$, $\\deg(\\alpha)<0$\n\\end{enumerate}\n\\end{theorem}\nThus, we see that we have a triangular decomposition of $\\Delta$:\n\\begin{equation}\\label{e:polarization}\n \\begin{aligned}\n &\\Delta=\\Delta_+\\sqcup\\Delta_0\\sqcup \\Delta_-\\\\\n &\\Delta_+=\\{\\alpha\\in \\Delta\\st \\rk(\\alpha)>0\\}=\\{[\\F],\\ \\F\\text{ --- a free\n indecomposable sheaf}\\}\\\\\n &\\quad \\text{(note that $\\Delta_+$ is exactly the set of vertices of\n ${\\widehat{Q}}$)}\\\\\n &\\Delta_-=\\{\\alpha\\in \\Delta\\st \\rk(\\alpha)<0\\}=\\{-[\\F],\\ \\F\\text{ --- a free\n indecomposable sheaf}\\}\\\\\n &\\Delta_0=\\{\\alpha\\in \\Delta\\st \\rk(\\alpha)=0\\}=\\{\\pm[\\F],\\ \\F\\text{ --- an indecomposable\n torsion sheaf}\\}\n \\end{aligned}\n\\end{equation}\nThe set $\\Delta_+$ has been discussed by Schiffmann \\cite{schiffmann2}, who\nused notation ${\\widehat{Q}}_+$ and denoted the corresponding subalgebra in the\nloop algebra by $\\mathcal{L}\\mathfrak{n}$. Note, however, that this\nnotation is somewhat misleading: $\\mathcal{L}\\mathfrak{n}$ is not the\nloop algebra of a positive part of the finite-dimensional algebra\n$\\bar\\g$, which easily follows from the fact that there are real roots in \n$\\Delta_0$ (see the example in the next section).\n \n\\begin{theorem}\\label{t:C^g}\nLet $g=|\\Gbar|=|G|\/2$. 
Then for any $x\\in L$ we have \n\\begin{equation}\\label{e:C^g}\nC^g (x)=x-2(\\rk x)\\delta \n\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\nLet $\\varphi_x$ be a section of the sheaf $\\OO(1)$ which has a single zero at\ngeneric point $x$. Then we have a short exact sequence \n$$\n0\\to \\OO\\xxto{\\varphi_x} \\OO(1)\\to \\OO_{[x]}\\to 0\n$$\nwhich, however, is not equivariant even under the action of $\\{\\pm\nI\\}\\subset \\mathrm{SU(2)}$. To fix it we consider $\\varphi_x^2$ which gives the following\n$\\Z_2$-equivariant sequence\n$$\n0\\to \\OO(-2)\\xxto{\\varphi^2_x} \\OO\\to \\OO_{2[x]}\\to 0\n$$\nNow let us take product of pullbacks of $\\varphi_x^2$ under all $g\\in\\Gbar$\n$$\n0\\to \\OO(-2g)\\xxto{\\prod_{g\\in\\Gbar}g^*\\varphi^2_x} \\OO\\to \\OO_{2[Gx]}\\to 0\n$$\n\nTensoring it with any locally free sheaf $\\F$, we get \n$$\n0\\to \\F(-2g)\\xxto{\\prod_{g\\in\\Gbar}g^*\\varphi^2_x} \\F\\to\n\\OO_{2[Gx]}\\ttt \\F_x\\to 0\n$$\nwhich implies\n$C^g[\\F]-[\\F]+2(\\rk \\F)\\delta=0$.\n\\end{proof}\n\nThis result --- that $C^g$ is a translation --- was known before and\ncan be proved without the use of equivariant sheaves, see e.g.\nSteinberg \\cite{steinberg}. However, the approach via equivariant\nsheaves also provides a nice interpretation for the corresponding\nfunctional as the rank of the sheaf.\n\nComparing \\eqref{e:C^g} with the description of the action of Coxeter\nelement in the language of representations of the quiver, we see that rank\nis closely related to the notion of defect $\\partial_c(x)$ as defined in\n\\cite{drab-ringel}, namely \n$$\n\\rk(x)=-\\frac{1}{2}\\partial_c x\n$$\nTherefore, $\\Delta_0$ is exactly the set of indecomposable objects of defect\nzero, which shows that torsion sheaves correspond to regular\nrepresentations. 
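At the level of rank and degree, \eqref{e:C^g} can be checked on locally free classes: $C^g$ sends $[X\ttt\OO(m)]$ to $[X\ttt\OO(m-2g)]$, and both sides of \eqref{e:C^g} then agree, using $\rk\delta=0$ and $\deg\delta=g$ from \thref{t:imaginary_root} and the remark above. A small sketch (ours; the values of $g$, $\dim X$, and $m$ are arbitrary example choices):

```python
def rk_deg(dim_X, m):
    # (rank, degree) of the locally free class [X ⊗ O(m)]
    return (dim_X, m * dim_X)

g = 3                       # g = |Gbar|; example value
rk_delta, deg_delta = 0, g  # rank and degree of delta (Theorem t:imaginary_root)
dim_X, m = 2, 5             # an arbitrary locally free class [X ⊗ O(m)]

lhs = rk_deg(dim_X, m - 2 * g)        # C^g [X ⊗ O(m)] = [X ⊗ O(m - 2g)]
r, d = rk_deg(dim_X, m)
rhs = (r - 2 * dim_X * rk_delta,      # [X ⊗ O(m)] - 2 rk([X ⊗ O(m)]) delta,
       d - 2 * dim_X * deg_delta)     # compared in (rank, degree) coordinates
print(lhs == rhs)
```

Of course rank and degree only separate classes up to the kernel of $(\rk,\deg)$; the full statement is the K-theoretic identity proved in the theorem.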
\n\n\n\n\\begin{corollary}\n \\par\\noindent\n \\begin{enumerate}\n \\item For any $\\alpha\\in \\Delta_0$, $C^gx=x$; in particular, $C$-orbit\n of $\\alpha$ is finite.\n \\item $C$ acts freely on $\\Delta_+$, and the set of orbits $\\Delta_+\/C$\n is naturally in bijection with $Q$. \n \\end{enumerate}\n\\end{corollary}\n\n\n\\section{Example: $\\widehat{A}_n$}\nIn this section, we consider the example of the cyclic group of even\norder: $G=\\Z_n$, $n=2k$. \n\nThe irreducible representations of this group are $X_i$, $i\\in \\Z_n$;\nall of them are one-dimensional. The corresponding Dynkin diagram $Q$\nis the cycle of length $n$. \n\nThe root system $\\Delta(Q)$ can be described as follows. Let $V$ be a\nreal vector space of dimension $n+1$, with basis $\\delta, e_i$, $i\\in\n\\Z_n$. Define inner product in $V$ by $(e_i,e_j)=\\delta_{ij}$,\n$(v,\\delta)=0$. Then \n\\begin{equation}\n \\Delta(Q)=\\{e_i-e_j+a\\delta, i\\ne j\\in \\Z_n, a\\in \\frac{j-i}{n}+\\Z\\}\n\\end{equation}\nThe simple roots are \n$$\n\\alpha_i=e_i-e_{i+1}+\\frac{1}{n}\\delta,\\quad i\\in \\Z_n\n$$\nso that $\\sum_i\\alpha_i=\\delta$. 
The simple reflections $s_i$ are given by \n\\begin{align*}\n&s_i(e_i)=e_{i+1}-\\frac{1}{n}\\delta\\\\\n&s_i(e_{i+1})=e_{i}+\\frac{1}{n}\\delta\\\\\n&s_i(e_j)=e_j,\\quad j\\ne i,i+1\n\\end{align*}\n\n\nIt is easy to see that this description of\n$\\Delta(Q)$, while unusual, is equivalent to the standard\ndescription of the affine root system $\\widehat{A}_{n-1}$.\n\nWe choose standard height function $h$:\n\\begin{equation}\nh(i)=\\begin{cases}0,& i\\text{ even}\\\\\n 1, &i\\text{ odd}\n \\end{cases}\n\\end{equation}\nThe corresponding Coxeter element $C$ is \n$$\nC=\\left(\\prod_{i \\text{ odd}} s_i\\right)\n \\left(\\prod_{i \\text{ even}} s_i\\right)\n$$\nThe action of $C$ on $\\Delta(Q)$ is given by \n\\begin{equation*}\nC(e_i)=\\begin{cases}e_{i+2}-\\frac{2}{n}\\delta,& i\\text{ even}\\\\\n e_{i-2}+\\frac{2}{n}\\delta,& i\\text{ odd}\n \\end{cases}\n\\end{equation*}\nThus, we have \n\\begin{equation*}\nC^{n\/2}(e_i)=\\begin{cases}e_{i}-\\delta,& i\\text{ even}\\\\\n e_{i}+\\delta,& i\\text{ odd}\n \\end{cases}\n\\end{equation*}\nwhich implies\n\\begin{equation}\n\\begin{aligned}\n&C^{n\/2}(\\alpha)=\\alpha-(\\alpha,\\varepsilon)\\delta,\\\\\n&\\quad \\varepsilon = \\sum_{i \\text{ even}}e_i-\\sum_{i \\text{ odd}}e_i\\equiv \n\\sum_{i \\text{ even}}\\alpha_i\\equiv -\\sum_{i \\text{ odd}}\\alpha_i\\mod \\Z\\delta\n\\end{aligned}\n\\end{equation}\n(compare with \\thref{t:C^g}, \\leref{l:rank} ). 
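The stated action of $C$ and of $C^{n\/2}$ can be verified by composing the simple reflections above. A short computational sketch (ours, not from the paper), working over exact rationals in the basis $e_0,\dots,e_{n-1},\delta$:

```python
from fractions import Fraction

n = 4  # any even n; here affine A_{n-1} with n = 4

def zero():
    # coefficient vector over the basis e_0, ..., e_{n-1}, delta ('d')
    return {k: Fraction(0) for k in [*range(n), 'd']}

def s(i, v):
    # simple reflection: s_i(e_i) = e_{i+1} - (1/n) delta,
    #                    s_i(e_{i+1}) = e_i + (1/n) delta, fixing other e_j
    w = dict(v)
    a, b = v[i], v[(i + 1) % n]
    w[i], w[(i + 1) % n] = b, a
    w['d'] = v['d'] - a * Fraction(1, n) + b * Fraction(1, n)
    return w

def coxeter(v):
    # C = (product over odd i of s_i)(product over even i of s_i);
    # for even n, the reflections within each batch act on disjoint pairs
    for i in range(0, n, 2):
        v = s(i, v)
    for i in range(1, n, 2):
        v = s(i, v)
    return v

e0 = zero(); e0[0] = Fraction(1)
v = e0
for _ in range(n // 2):
    v = coxeter(v)
# v is now C^{n/2}(e_0), which should equal e_0 - delta (even index)
```

One application of `coxeter` to `e0` reproduces $C(e_0)=e_2-\tfrac{2}{n}\delta$, and $n\/2$ applications give the translation $e_0-\delta$, as claimed.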
\n\nExplicitly, we can write \n\\begin{align*}\n&C^{n\/2}(\\alpha)=\\begin{cases}\n \\alpha, &i\\equiv j\\mod 2 \\\\\n \\alpha-2\\delta, &(i,j)\\equiv (0,1)\\mod 2 \\\\\n \\alpha+2\\delta, &(i,j)\\equiv (1,0)\\mod 2 \n \\end{cases},\\\\\n&\\quad \\alpha=e_i-e_j+a\\delta\n\\end{align*}\nThus, in this case we have \n\\begin{align*}\n\\Delta_0=\\{e_i-e_j+a\\delta\\}, \\quad i\\equiv j\\mod 2\\\\\n\\Delta_+=\\{e_i-e_j+a\\delta\\}, \\quad i\\text{ even}, j\\text{ odd}\\\\\n\\Delta_-=\\{e_i-e_j+a\\delta\\}, \\quad i\\text{ odd}, j\\text{ even}\\\\\n\\end{align*}\n\n\n\n\\firef{f:An_graph} shows a segment of the AR graph ${\\widehat{Q}}$ for\nthis root system.\n\n\n\\begin{figure}[ht]\n$$ \n\\xy\n(0,0)*+{X_{i-2}}=\"A\"+(0,-3)*+{\\scriptstyle\n e_{i-2}-e_{i-1}+\\frac{1}{n}\\delta};\n(30,0)*+{X_{i}}=\"B\"+(0,-3)*{\\scriptstyle\n e_{i}-e_{i+1}+\\frac{1}{n}\\delta};\n(60,0)*+{X_{i+2}}=\"C\"+(0,-3)*{\\scriptstyle\n e_{i+2}-e_{i+3}+\\frac{1}{n}\\delta};\n(90,0)*+{X_{i+4}}=\"D\"+(0,-3)*{\\scriptstyle\n e_{i+4}-e_{i+5}+\\frac{1}{n}\\delta};\n(15,15)*+{X_{i-1}(1)}=\"E\"+(0,-3)*{\\scriptstyle\n e_{i-2}-e_{i+1}+\\frac{3}{n}\\delta}=\"EE\";\n(45,15)*+{X_{i+1}(1)}=\"F\"+(0,-3)*{\\scriptstyle\n e_{i}-e_{i+3}+\\frac{3}{n}\\delta}=\"FF\";\n(75,15)*+{X_{i+3}(1)}=\"G\"+(0,-3)*{\\scriptstyle\n e_{i+2}-e_{i+5}+\\frac{3}{n}\\delta}=\"GG\";\n(0,30)*+{X_{i-2}(2)}=\"H\"+(0,-3)*+{\\scriptstyle\n e_{i-4}-e_{i+1}+\\frac{5}{n}\\delta}=\"HH\";\n(30,30)*+{X_{i}(2)}=\"I\"+(0,-3)*{\\scriptstyle\n e_{i-2}-e_{i+3}+\\frac{5}{n}\\delta}=\"II\";\n(60,30)*+{X_{i+2}(2)}=\"J\"+(0,-3)*{\\scriptstyle\n e_{i}-e_{i+5}+\\frac{5}{n}\\delta}=\"JJ\";\n(90,30)*+{X_{i+4}(2)}=\"K\"+(0,-3)*{\\scriptstyle\n e_{i+2}-e_{i+7}+\\frac{5}{n}\\delta}=\"KK\";\n{\\ar \"A\";\"EE\"};\n{\\ar \"B\";\"EE\"};\n{\\ar \"B\";\"FF\"};\n{\\ar \"C\";\"FF\"};\n{\\ar \"C\";\"GG\"};\n{\\ar \"D\";\"GG\"};\n{\\ar \"E\";\"HH\"};\n{\\ar \"E\";\"II\"};\n{\\ar \"F\";\"II\"};\n{\\ar \"F\";\"JJ\"};\n{\\ar \"G\";\"JJ\"};\n{\\ar 
\"G\";\"KK\"};\n\\endxy\n$$\n \\caption{Fragment of graph ${\\widehat{Q}}$ for root system of type $A$.\n Here $i$ is an even number. The figure also shows, for each $X_q\\in\n {\\widehat{Q}}$, the image of $R\\Ph_h[X_q]$ for the standard choice of height\n function $h$ as in \\exref{x:standard}.} \\label{f:An_graph}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\\bibliographystyle{amsalpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzemgl b/data_all_eng_slimpj/shuffled/split2/finalzzemgl new file mode 100644 index 0000000000000000000000000000000000000000..3ee8582eeb35265176ea16162b34ef7c8633a0da --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzemgl @@ -0,0 +1,5 @@ +{"text":"\\section*{Introduction}\n\nFerromagnetic films with perpendicular magnetic anisotropy (PMA) are of wide interest for applications in established and nascent technologies such as ultrahigh density magnetic hard drives \\cite{doi:10.1063\/1.2750414}, MRAM\\cite{wong2015memory}, superconducting spintronics \\cite{linder2015superconducting}, and energy efficient spin-orbit torque memory \\cite{RevModPhys.91.035004}. PMA can be achieved in Pt\/Co\/Pt multilayer systems as a result of the interfacial anisotropy, however at a critical Co thickness, typically about 1~nm, the anisotropy will fall in-plane. Increasing the total ferromagnetic layer thickness further therefore involves adding additional interfaces to the multilayer. Having a multilayer structure introduces interfacial resistance and interfacial spin-flip scattering \\cite{bass2007spin}, which are disadvantageous for applications such as ours which require the transport current perpendicular-to-plane \\cite{birge2018spin, doi:10.1063\/1.5140095 ,satchell2021pt}. Alternately, robust intrinsic PMA can be achieved in certain Co$_x$Pt$_{100-x}$ alloys and compounds at any thickness, without increasing the number of interfaces. 
\n\nHere, we study the equiatomic Co$_{50}$Pt$_{50}$ alloy, hereafter referred to as CoPt. Through growth at elevated temperatures it is possible to form the $L1_1$ and $L1_0$ chemically ordered compounds of CoPt as epitaxial films. Previous experimental studies of such compounds tend to use MgO substrates as the basis for high temperature epitaxial growth. Visokay and Sinclair report $L1_0$ crystal structure on MgO [001] substrates for growth temperatures above 520$^{\\degree}$C \\cite{doi:10.1063\/1.113895}. Iwata \\textit{et al.} report growth of $L1_1$ crystal structure on MgO [111] substrates for a growth temperature of 300$^{\\degree}$C \\cite{619533}.\n\nEarly thin film studies of the chemically ordered CoPt (and the related FePt and FePd) compounds were motivated by the large out-of-plane anisotropy and narrow domain wall widths, making them candidates for high density storage media \\cite{619533, PhysRevB.50.3419, PhysRevB.52.13419, doi:10.1063\/1.362122, doi:10.1063\/1.361368, doi:10.1063\/1.113895, doi:10.1063\/1.364504, caro1998structure, doi:10.1063\/1.368158, doi:10.1063\/1.368831, doi:10.1063\/1.371397, doi:10.1126\/science.287.5460.1989, zeng2002orientation, PhysRevB.66.024413, chen2003effect, zhao2003promotion, doi:10.1021\/jp027314o, LAUGHLIN2005383, doi:10.1063\/1.1991968, LIAO2006e243, doi:10.1063\/1.2176306, PhysRevB.76.174435, doi:10.1063\/1.2730568, Sato2008, chen2008review, 4492859, doi:10.1063\/1.3153513, Seemann_2010, yuan2010effect, Antje_Dannenberg_2010, doi:10.1063\/1.3561115, doi:10.1063\/1.3672856, ohtake2012structure, sun2013formation, 6416998, ohtake2014structure, VARVARO2014415, doi:10.1063\/1.4960554, zygridou2017exploring, PhysRevApplied.10.054015, GAO2019406, doi:10.1021\/acsanm.9b01192, PhysRevMaterials.6.024403, spencer2022characterization}. 
Recently, renewed interest in these compounds has been driven by the discovery of self-induced spin-orbit torque switching in these materials, which can be used as the switching mechanism for a low-dissipation magnetic memory \\cite{PhysRevB.101.220402,https:\/\/doi.org\/10.1002\/adfm.202005201,doi:10.1063\/5.0028815,liu2021symmetry,9433557,DONG2022,doi:10.1063\/5.0077465, liu2022current}.\n\nOur motivation for studying [111] CoPt is to incorporate this PMA ferromagnet in an all epitaxial heterostructure suitable for superconducting Josephson devices\\cite{birge2018spin, doi:10.1063\/1.5140095 ,satchell2021pt} or MRAM\\cite{wong2015memory}. For these applications growth in the [111] direction is favourable. In Josephson junctions, the superconductor of choice is Nb, which can be grown epitaxially as a bcc structure in the [110] direction\\cite{WILDES20017}. In MRAM, the seed layer of choice is Ta, which has almost identical structural properties to Nb. On Ta or Nb [110] layers, we know that Pt and Co will grow with [111] orientation\\cite{PhysRevB.97.214509}. We therefore choose Al$_2$O$_3$ [0001] substrates for this study and use thin Pt [111] seed layers to grow CoPt [111]\\cite{yuan2010effect}. As far as we are aware, no previous works have reported the properties of [111] ordered films of $L1_0$ and $L1_1$ CoPt on Al$_2$O$_3$ [0001] substrates. We are also unaware of any measurements of the spin polarisation of CoPt. \n\n\nWe fabricate and report the properties of three sets of samples. The first set is designed to determine the optimal growth temperature. Therefore, we fix the thickness $d_{CoPt} = 40$~nm and vary the substrate heater temperature in the range from 27${\\degree}$C to 850${\\degree}$C. The next two sample sets are thickness series grown by varying the growth time at a fixed temperature and magnetron powers. 
The temperatures chosen for the thickness series are to produce either the $L1_1$ (350${\\degree}$C) or $L1_0$ (800${\\degree}$C) crystal structures. These temperatures are guided by the results of the first temperature series of samples. Thicknesses are varied over the range 1~nm$ \\leq d_\\text{CoPt} \\leq 128$~nm.\n\nOn each sample set we report systematically on the structural and magnetic properties of the CoPt. Additionally, on the thickest 128~nm $L1_1$ and $L1_0$ samples we perform point contact Andreev reflection (PCAR) measurements with a Nb tip at 4.2~K to determine the spin polarisation. The use of Nb as the tip, the temperature of this measurement, and the ballistic transport regime probed are relevant for our proposed application of the CoPt in Josephson devices\\cite{birge2018spin, doi:10.1063\/1.5140095 ,satchell2021pt}. \n\n\n\n\n\\section*{Results and Discussion}\n\n\\subsection*{\\label{Growth}CoPt properties as a function of growth temperature}\n\n We expect that as the growth temperature is increased the CoPt will form initially a chemically disordered $A1$ alloy phase, followed by a chemically ordered $L1_1$ crystal structure, an intermediate temperature $A1$ phase, and finally a chemically ordered $L1_0$ crystal structure respectively. In order to map out the growth parameters we report on Al$_2$O$_3$(sub)\/Pt(4~nm)\/CoPt(40~nm)\/Pt(4~nm) sheet films grown at different set temperatures in the range from room temperature (RT) to 850$^{\\degree}$C. \n \n \n\\subsubsection*{Structure}\n\nIn order to understand the magnetic phases of sputter deposited CoPt, it is necessary to understand the underpinning structure and the influence of the growth temperature. Therefore, the structural characteristics and film quality were investigated using x-ray diffraction (XRD) as a function of growth temperature. Fig.~\\ref{fig:XRD} (a-c) shows the acquired XRD for selected growth temperatures around the main CoPt [111] structural peak. 
Further XRD data are available in the Supplemental Information online. The CoPt [111] structural peak is clearly observable for all growth temperatures. The presented data at growth temperatures of 350$^{\\degree}$C and 800$^{\\degree}$C show additional Pendellosung fringes which bracket the primary CoPt [111] peak. In Fig.~\\ref{fig:XRD} (a), for a growth temperature of 350$^{\\degree}$C, the additional feature at $2\\theta\\approx40^{\\degree}$ corresponds to the superimposed Pendellosung fringes and the Pt [111] structural peak (bulk Pt has a lattice constant of 3.92\\AA). This additional Pt structural peak is present for growths up to 550$^{\\degree}$C (see Supplementary Information online), which is within the expected optimal temperature range for sputter deposited Pt films on Al$_2$O$_3$ \\cite{4856395}.\n \n\nUsing the Scherrer equation and the full widths at half maxima (FWHM) of the Gaussian fits to the CoPt [111] structural peaks, an estimation of the CoPt crystallite sizes can be made. The FWHM for the main [111] structural peak is given in Figure~\\ref{fig:XRD} (e). In the expected range of the ordered $L1_1$ crystal structure between 200$^{\\degree}$C and 400$^{\\degree}$C, the estimated CoPt crystallite size is 38~nm, which, compared to the nominal thickness of 40~nm, indicates that the CoPt has high crystallinity. On the other hand, at RT growth and intermediate temperatures between 400$^{\\degree}$C and 800$^{\\degree}$C, the estimated CoPt crystallite size is much lower, showing a minimum value of 22~nm at 700$^{\\degree}$C where we expect $A1$ growth. Interestingly, growth in the $A1$ window appears to produce both chemical disorder (random positions of the Co and Pt atoms in the unit cell) and a poorer crystallite size compared to the chemically ordered crystal structures. Finally, upon increasing the temperature further to 800$^{\\degree}$C and 850$^{\\degree}$C the crystallite size reaches a maximum of 52~nm. 
This value corresponds approximately to the entire thickness of the Pt\/CoPt\/Pt trilayer. \n\nFig.~\\ref{fig:XRD} (f) shows the CoPt $c$-plane spacing calculated from the center of the fitted Gaussian to the main [111] structural peak. At all temperatures the measured $c$-plane spacing is very close to the expected value based on single crystal studies \\cite{mccurrie1969magnetic}, indicating low out-of-plane strain. The trend with temperature, however, is non-monotonic and shows a discrete increase between the samples grown at 550$^{\\degree}$C and 650$^{\\degree}$C. The ordering associated with the $L1_0$ crystal structure is expected to result in a transition from a cubic to a tetragonal crystal with $c\/a=0.979$ in the single crystal \\cite{mccurrie1969magnetic}. In the single crystal study, however, the $c$-axis is in the [001] orientation. In our [111] orientated film, the small structural transition from cubic to tetragonal crystal structure is not expected to be visible in the [111] peak studied here. Nonetheless, Figure~\\ref{fig:XRD} (f) shows features consistent with the CoPt undergoing structural transitions.\n\n\n\n\n\n\n\\subsubsection*{Magnetic characterisation}\n\n\n\nThe magnetisation versus field data are shown in Fig. \\ref{fig:Magnetization_Growth_T} for Al$_2$O$_3$(sub)\/Pt(4 nm)\/CoPt(40 nm)\/Pt(4 nm) sheet film samples. Further magnetisation data for all samples in this study are available in the Supplemental Information online. The 350${\\degree}$C, 550${\\degree}$C, and 800${\\degree}$C samples are plotted here as they are representative of the magnetic response of the $L1_1$ crystal structure, the chemically disordered $A1$ phase, and the $L1_0$ crystal structure, respectively. Magnetisation is calculated from the measured total magnetic moments, the areas of the sample portions, and the nominal thicknesses of the CoPt layer. \n\nFor the chemically ordered $L1_1$ crystal structure shown in Fig. 
\\ref{fig:Magnetization_Growth_T} (a), the OOP hysteresis loop shows a wasp-waisted behaviour associated with the formation of magnetic domains at remanence. Such behaviour is common in CoPt alloys and multilayer thin films \\cite{doi:10.1063\/1.113895}. The wasp-waisted OOP hysteresis loop along with the low IP remanence and higher IP saturation field indicates that the 40~nm CoPt samples with the $L1_1$ crystal structure have strong PMA. We can estimate the uniaxial anisotropy using the expression $K_u = \\mu_0 M_s H_s\/2$, where $\\mu_0$ is the vacuum permeability, $M_s$ the saturation magnetisation, and $H_s$ the saturation magnetic field. From the data presented in Fig. \\ref{fig:Magnetization_Growth_T} (a) we find $K_u = 5\\pm1$ MJ\/m$^3$. \n\nFor samples grown at intermediate temperatures with the chemically disordered $A1$ structure, the magnetism favours IP anisotropy at 40~nm, shown for growth at 550${\\degree}$C in Fig. \\ref{fig:Magnetization_Growth_T} (b).\n\nFor the chemically ordered $L1_0$ crystal structure shown in Fig. \\ref{fig:Magnetization_Growth_T} (c), there is a significant increase in the coercivity and squareness ratio ($M_r\/M_s$) for both the IP and OOP field orientations. The increased coercive field suggests that the $L1_0$ CoPt is magnetically hard compared to the $L1_1$ and $A1$ samples. The magnetisation of the $L1_0$ 40~nm CoPt sample does not show clear IP or OOP anisotropy from these measurements. \n\nThe magnetisation vs growth temperature is shown in Fig. \\ref{fig:Magnetization_Growth_T} (d). At growth temperatures below 550${\\degree}$C the magnetisation remains approximately constant; at higher temperatures, however, the magnetisation begins to decrease with increasing temperature. A possible cause of this decrease is the higher growth temperature contributing to interdiffusion between the Pt and CoPt layers, creating magnetic dead layers.\n\nThe saturation field and squareness ratio vs growth temperature are shown in Fig. 
\\ref{fig:Magnetization_Growth_T} (e) and (f), respectively. The general trends can be seen in the differences observed in the hysteresis loops of Fig. \\ref{fig:Magnetization_Growth_T} (a-c). These trends allow us to characterise the growth temperatures corresponding to the different crystal structures in our samples, further supporting our conclusions from the XRD: the $L1_1$ crystal structure driving the magnetic response of samples grown from 200${\\degree}$C to 400${\\degree}$C, the $L1_0$ crystal structure for samples grown above 750${\\degree}$C, and the intermediate temperatures being the chemically disordered $A1$ phase. \n\n\nIn the $L1_0$ crystal structure, Fig. \\ref{fig:Magnetization_Growth_T} (c), the high squareness ratio for both field orientations suggests that the anisotropy axis of the material is neither parallel nor perpendicular to the film. Instead, it is possible that the magnetic anisotropy is perpendicular to the layer planes, which are stacked in the [100] direction. \n\nTo further investigate the anisotropy, we pattern our $L1_1$ 350${\\degree}$C and $L1_0$ 800${\\degree}$C samples into Hall bars and perform angle-dependent Hall resistance, R$_\\text{xy}$($\\theta$), measurements, Fig.~\\ref{fig:Hallbar}. The fabricated Hall bar and measurement geometry are shown in Fig.~\\ref{fig:Hallbar} (a). R$_\\text{xy}$($\\theta$) for the $L1_1$ 350${\\degree}$C and $L1_0$ 800${\\degree}$C Hall bars are shown in Fig.~\\ref{fig:Hallbar} (b) and (c), respectively. For the $L1_1$ 350${\\degree}$C sample with out-of-plane anisotropy, R$_\\text{xy}$($\\theta$) shows a plateau close to out-of-plane field and a smooth response at intermediate angles. The plateau is interpreted as an angle forming between the magnetisation and the applied field because of the anisotropy axis \\cite{doi:10.1063\/1.3262635}. 
In comparison, the $L1_0$ 800${\\degree}$C sample also shows an R$_\\text{xy}$($\\theta$) plateau for out-of-plane applied field plus an additional plateau for applied field angles between about 45${\\degree}$ and 60${\\degree}$. We interpret the additional plateau in R$_\\text{xy}$($\\theta$) for the $L1_0$ 800${\\degree}$C CoPt sample as evidence for an additional anisotropy axis, which we propose is perpendicular to the [100] direction. The [100] plane has a dihedral angle of 54.75${\\degree}$ with the [111] growth plane. Additional sources of anisotropy in our samples are interface anisotropy at the Pt\/CoPt interfaces, which would favour out-of-plane magnetisation for thin layers, and shape anisotropy, which for our thin films would favour in-plane magnetisation. \n\nFigure \\ref{fig:IP_Rotation} shows the extracted coercive field as a function of in-plane rotator angle for the $L1_0$ 800${\\degree}$C sample. The data show a clear six-fold symmetry. This is consistent with the easy axis of the tetragonal $L1_0$ phase lying along $[001]$ when grown on $\\{111\\}$ planes -- the $\\langle100\\rangle$ directions of the parent cubic structure are inclined at $\\pm 45\\degree$ from the plane and are coupled with the three-fold symmetry of $\\{111\\}$. 
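The dihedral angle quoted above follows from simple vector geometry: for a cubic lattice, the angle between two planes is the angle between their normals, and for (100) and (111) this is $\arccos(1/\sqrt{3}) \approx 54.74^{\degree}$. A minimal sketch:

```python
import math

def interplanar_angle(hkl1, hkl2):
    """Angle (degrees) between two cubic lattice planes, via their normals."""
    dot = sum(a * b for a, b in zip(hkl1, hkl2))
    n1 = math.sqrt(sum(a * a for a in hkl1))
    n2 = math.sqrt(sum(b * b for b in hkl2))
    # Clamp to guard against floating-point values just outside [-1, 1]
    c = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(c))

print(round(interplanar_angle((1, 0, 0), (1, 1, 1)), 2))  # -> 54.74
```
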
The magnetometry data therefore strongly suggest that over the sample the $[001]_{L1_0}$ can be found along any of the three possible $\\langle100\\rangle$ directions of the parent cubic structure without a strong preference for which twin grows.\n\n\n\n\n\n\\subsection*{CoPt properties as a function of thickness for $L1_1$ and $L1_0$ crystal structures}\n\nHaving obtained optimal growth conditions for CoPt with chemically ordered $L1_1$ and $L1_0$ crystal structures, in this section, we report on the properties of samples Al$_2$O$_3$ (sub)\/Pt (4~nm)\/CoPt ($d_\\text{CoPt}$)\/Pt (4~nm) with $d_\\text{CoPt}$ varied between 1~nm and 128~nm for both the $L1_1$ and $L1_0$ crystal structures.\n\n\n\n\n\\subsubsection*{\\label{MagL11}Magnetisation of $L1_1$ and $L1_0$ CoPt}\n\nThe hysteresis loops for Al$_2$O$_3$(sub)\/Pt(4~nm)\/CoPt(4~nm)\/Pt(4~nm) sheet films are shown in Fig.~\\ref{fig:Magnetization_d} (a) and (b) for the $L1_1$ (350${\\degree}$C) and $L1_0$ (800${\\degree}$C) crystal structures, respectively. Hysteresis loops over the full thickness range are given in the Supplementary Information online. The moment\/area at saturation (or 6~T) versus nominal CoPt thickness is presented in Fig.~\\ref{fig:Magnetization_d} (c) and (d).\n\nWe calculate the magnetisation ($M$) by fitting the moment\/area versus nominal CoPt thickness data. In order to account for interfacial contributions to the magnetisation of the CoPt, we model the system as a magnetic slab with possible magnetic dead layers and\/or polarised adjacent layers. Magnetic dead layers can form as a result of interdiffusion, oxidation, or at certain interfaces with non-ferromagnetic layers. At some ferromagnet\/non-ferromagnet interfaces, the ferromagnetic layer can create a polarisation inside the non-ferromagnetic layer by the magnetic proximity effect. 
Polarisation is particularly common at interfaces with Pt \\cite{doi:10.1063\/1.344903, PhysRevB.65.020405, PhysRevB.72.054430, rowan2017interfacial, PhysRevB.100.174418}. To take these into account, we fit to the expression,\n\\begin{equation}\n\\label{M2}\n \\text{moment\/area}= M (d_\\text{CoPt} - d_i),\n\\end{equation}\n\\noindent where $d_i$ is the effective thickness offset arising from any magnetic dead layers or polarisation of the adjacent layers. The resulting best fit and the moment\/area versus the nominal CoPt thickness are shown in Fig.~\\ref{fig:Magnetization_d}.\n\nFor $L1_1$ growth at 350${\\degree}$C, the result of fitting Eqn.~\\ref{M2} shown in Fig.~\\ref{fig:Magnetization_d} (c) gives $M=750\\pm 50$~emu\/cm$^3$. For $L1_0$ growth at 800${\\degree}$C, the thinnest 1~nm and 2~nm films did not display any magnetic response and are excluded from the analysis in Fig.~\\ref{fig:Magnetization_d}. The result of fitting Eqn.~\\ref{M2} shown in Fig.~\\ref{fig:Magnetization_d} (d) for the samples thicker than 2~nm gives $M=520\\pm 50$~emu\/cm$^3$. Interestingly, we find a significant difference in $M$ between the two crystal structures. It is possible that the difference in $M$ corresponds to a true difference in the saturation magnetisation of the two crystal structures. An alternative scenario is that the 6~T applied field is not large enough to fully saturate the $L1_0$ samples, leading to a reduced measured $M$.\n\nThe thickness dependence of the magnetic switching of the $L1_1$ samples is well summarised by the squareness ratio shown in Fig.~\\ref{fig:Magnetization_d} (e). The thickest 40 and 128~nm samples are wasp-waisted, as presented in Fig.~\\ref{fig:Magnetization_Growth_T}. At reduced thicknesses, between 2 and 8~nm, the $L1_1$ CoPt no longer displays the wasp-waisted switching for the out-of-plane field orientation, and instead has a square loop, shown in Fig.~\\ref{fig:Magnetization_d} (a). The 16~nm sample showed an intermediate behaviour. 
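The fit of Eqn.~\ref{M2} is a straight-line fit whose slope gives $M$ and whose intercept gives $-M d_i$. A minimal sketch with synthetic data (the thicknesses and the 1~nm offset below are hypothetical, chosen only to illustrate the extraction):

```python
import numpy as np

# Synthetic moment/area vs nominal CoPt thickness (hypothetical numbers),
# generated with M = 750 emu/cm^3 and a 1 nm dead-layer offset d_i.
d_nm = np.array([2.0, 4.0, 8.0, 16.0, 40.0, 128.0])   # nominal thickness (nm)
moment_per_area = 750.0 * (d_nm - 1.0) * 1e-7          # emu/cm^2 (1 nm = 1e-7 cm)

# Straight-line fit: moment/area = M*d - M*d_i
slope, intercept = np.polyfit(d_nm * 1e-7, moment_per_area, 1)
M = slope                        # magnetisation (emu/cm^3)
d_i = -intercept / slope * 1e7   # dead-layer offset, back in nm
print(round(float(M)), round(float(d_i), 2))  # -> 750 1.0
```
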
At 1~nm, the magnetic switching showed ``S''-shaped hysteresis loops for both in- and out-of-plane applied fields with small remanent magnetisation, see Supplementary Information online. \n\nThe thickness dependence of the $L1_0$ crystal structure samples is significantly different to that of the $L1_1$. In the thinnest films of 1 and 2~nm there is no evidence of ferromagnetic ordering in the hysteresis loops, see Supplementary Information online. For $L1_0$ growth at 800${\\degree}$C, the high growth temperature may cause interdiffusion at the Pt\/CoPt interfaces. The interdiffusion may account for magnetic dead layers, which in the thinnest samples may prevent ferromagnetic ordering. Upon increasing the thickness to 4~nm, a ferromagnetic response was recovered, however the hysteresis loops and extracted squareness ratio (Fig.~\\ref{fig:Magnetization_d} (f)) indicate that the 4~nm and 8~nm $L1_0$ samples have in-plane magnetisation. The in-plane magnetisation in the thinner films suggests that the long-range $L1_0$ ordering may not have been established at those thicknesses. \n\nThe thickness dependence in both crystal structures suggests that the Pt\/CoPt\/Pt trilayers grown on Al$_2$O$_3$ substrates are not suitable for applications where ultrathin magnetic layers are required. To improve the magnetic properties of the thinnest samples in this study our future work will focus on replacing the Pt layers with seed and capping layers where interdiffusion may be weaker. \n\n\n\n\n\n\\subsection*{Spin polarisation}\n\nTo estimate the spin polarisation in the chemically ordered $L1_0$ and $L1_1$ CoPt samples, we perform Point Contact Andreev Reflection (PCAR) spectroscopy experiments \\cite{doi:10.1126\/science.282.5386.85, baltz2009conductance, PhysRevB.85.064410, Seemann_2010, 6971749, PhysRevB.76.174435}. 
In the PCAR technique, the spin polarisation in the ballistic transport regime can be determined by fitting the bias dependence of the conductance with a modified Blonder-Tinkham-Klapwijk (BTK) model \\cite{PhysRevB.63.104510}. \n\nWe measure the Al$_2$O$_3$(sub)\/Pt(4~nm)\/CoPt(128~nm)\/Pt(4~nm) samples grown at 350${\\degree}$C, corresponding to the $L1_1$ crystal structure, and 800${\\degree}$C, corresponding to the $L1_0$ crystal structure. The PCAR experiment was performed with a Nb wire tip at $4.2\\,$K. Exemplar conductance spectra with fits to the BTK model are given in Figure \\ref{fig:PCAR} (a). The interpretation of PCAR data is rife with difficulties \\cite{Yates2018} and a common issue with the PCAR technique is the presence of degenerate local fitting minima. To ensure that a global best fit is obtained, the fitting code makes use of a differential-evolution algorithm and we then consider the spin polarisation and barrier strength parameter for a large number of independent contacts to the same sample.\n\nFigure \\ref{fig:PCAR} (c) shows the dependence of the polarisation as a function of the square of the barrier strength, $Z^2$. The dashed lines in Figure \\ref{fig:PCAR} are linear fits to the data. The value of the true spin polarisation is often taken to correspond to $Z=0$, however this is strictly nonphysical. Nevertheless, in an all-metal system it is possible to produce contacts approaching an ideal case, and the extrapolation to $Z=0$ lies close to the (finite) minimum. We find that $P = 47\\pm3$\\% for both the $L1_1$ and $L1_0$ CoPt samples. This compares to $\\approx42$\\% for $L1_0$ FePt \\cite{PhysRevB.76.174435} and $\\approx50$\\% for $L1_0$ FePd \\cite{Seemann_2010}.\n\n\n\n\n \n\n\n\n\\section*{Conclusions}\n\nThe major conclusions of this work may be summarised as follows. On $c$-plane Al$_2$O$_3$ with thin Pt [111] buffer layers, the Co$_{50}$Pt$_{50}$ grows following the [111] orientation. 
Through growth at elevated temperatures, Co$_{50}$Pt$_{50}$ is grown epitaxially in the chemically ordered $L1_1$ and $L1_0$ crystal structures or the chemically disordered $A1$ phase. The $L1_1$ Co$_{50}$Pt$_{50}$ grown between 200$^\\circ$C and 400$^\\circ$C shows perpendicular magnetic anisotropy for thicknesses $\\geq2$~nm. The $L1_0$ Co$_{50}$Pt$_{50}$ grown at 800$^\\circ$C and above is significantly magnetically harder, with an anisotropy axis perpendicular to the [100] direction for thicknesses $\\geq40$~nm. For growth at intermediate temperatures, between 450$^\\circ$C and 750$^\\circ$C, the Co$_{50}$Pt$_{50}$ shows a disordered structure and in-plane magnetic anisotropy associated with the $A1$ phase. The spin polarisation of the $L1_1$ and $L1_0$ Co$_{50}$Pt$_{50}$ is determined by the PCAR technique to be 47$\\pm3$\\%.\n\n\n\n\n\\section*{Methods}\n\n\n\n\n\\subsection*{Epitaxial growth}\n\nSamples are dc sputter deposited in the Royce Deposition System \\cite{Royce}. The magnetrons are mounted below, and confocal to, the substrate with a source-substrate distance of 134~mm. The base pressure of the vacuum chamber is $1\\times10^{-9}$~mbar. The samples are deposited at elevated temperatures, with an Ar (6N purity) gas pressure of $4.8\\times10^{-3}$~mbar.\n\nFor alloy growth, we use the co-sputtering technique. To achieve as close to a Co$_{50}$Pt$_{50}$ stoichiometry as possible, first, single layer samples of Co or Pt are grown at room temperature on $10\\times10$~mm thermally oxidised Si substrates varying the magnetron power. From this initial study, it is found that a growth rate of 0.05~nm s$^{-1}$ is achieved for a Co power of 45~W and a Pt power of 25~W. These growth powers are fixed for the rest of the study.\n\nFor the growth of the CoPt samples, $20\\times20$~mm $c$-plane sapphire substrates are used. The substrates are heated by a ceramic substrate heater mounted directly above the substrate holder. The measured substrate heater temperature is reported. 
We note that the temperature on the substrate surface is most likely to be below the reported heater temperature. The substrate heater is ramped up from room temperature to the set temperature at a rate of 3-5${\\degree}$C min$^{-1}$. Once at the set temperature, the system is given 30~min to reach equilibrium before starting the sample growth.\n\nOnce the system is ready for growth, a 4~nm Pt seed layer is deposited. The seed layer is immediately followed by the CoPt layer, which is deposited at a rate of 0.1~nm s$^{-1}$ by co-sputtering from the two targets at the determined powers. Finally, a 4~nm Pt capping layer is deposited to prevent the samples from oxidising. The final sample structure is Al$_2$O$_3$(sub)\/Pt(4~nm)\/CoPt($d_{CoPt}$)\/Pt(4~nm). Following deposition, the samples are post-growth annealed for 10~min at the growth temperature before the substrate heater is ramped down to room temperature at 10${\\degree}$C min$^{-1}$.\n\n\n\n\\subsection*{Characterisation}\n\n\n Magnetisation loops are measured using a Quantum Design MPMS 3 magnetometer. Angle-dependent magnetisation measurements are performed using the Quantum Design Horizontal Rotator option. X-ray diffraction and reflectivity are performed on a Bruker D8 diffractometer with an additional four-bounce monochromator to isolate Cu K$\\alpha$ at a wavelength of 1.5406~\\AA. Sheet films are patterned into Hall bars of 5~$\\mu$m width using conventional photolithography and Ar ion milling. The resulting devices are measured in a 4-point-probe geometry, with the Hall resistance recorded using a combined Keithley 6221-2182A current source and nano-voltmeter.\n\n\\subsection*{PCAR}\n\nOur experimental setup for performing PCAR measurements is described elsewhere \\cite{baltz2009conductance, PhysRevB.85.064410, Seemann_2010, 6971749, PhysRevB.76.174435}. The Nb tips are prepared from commercial 99.9\\% pure Nb wires with a diameter of 0.5~mm. 
An AC lock-in detection technique using Stanford Research Systems SR830 lock-in amplifiers is used for the differential conductance measurements. The tip position is mechanically adjusted by a spring-loaded rod driven by a micrometer screw. The experiment is carried out in liquid He at a fixed temperature of $4.2\\,$K and at zero applied magnetic field.\n\n\n\n\\section*{Data Availability}\n\nThe datasets generated during the current study are available in the University of Leeds repository, https:\/\/doi.org\/xx.xxxx\/xxx.\n\n\n\n\\section*{X-ray Diffraction Data}\n\nFigures \\ref{fig:XRD_ALL_1}, \\ref{fig:XRD_ALL_2}, \\ref{fig:XRD_ALL_3} show a complete set of X-ray diffraction scans for samples grown at each temperature. The very sharp peak at $2\\theta\\approx42\\degree$ is from the c-plane sapphire $(0\\,0\\,0\\,6)$ plane with $\\{1\\,1\\,1\\}$ peaks from the CoPt phase clearly visible to the left. The substrate $(0\\,0\\,0\\,12)$ and CoPt $\\{2\\,2\\,2\\}$ peaks are also present close to $2\\theta$ of $90\\degree$. Samples grown at $200-450\\degree\\,$C and also at $800\\degree\\,$C clearly exhibit Pendellosung fringes, indicative of good crystal ordering from the $L1_1$ and $L1_0$ phases. The peak intensity and FWHM shown in the main text are extracted from a Gaussian peak fit to the CoPt $\\{1\\,1\\,1\\}$ peak data, although similar data could be obtained from the CoPt $\\{2\\,2\\,2\\}$ data. \n\n\\section*{Magnetometry}\n\nMagnetic hysteresis loops were measured for a set of samples of common thickness grown at temperatures from room temperature to $850\\degree\\,$C with the field applied in the plane of the substrate and perpendicular to the substrate plane -- shown in Figure \\ref{fig:stream}. For the data shown here, a diamagnetic background resulting from the substrate and sample stick has been removed. 
The saturation magnetisation is determined by fitting a line to the outer section of the upper and lower branches and taking the difference in intercepts with the $H=0$ axis. The uncertainty is dominated by the determination of the sample area. The saturation field is determined from the point at which the measured data has deviated from the saturation magnetisation by more than twice the scatter in the data in the saturated regions of the loop. Figures \\ref{fig:stream2} and \\ref{fig:stream3} show similar magnetic hysteresis loops for samples of increasing thickness grown at $350\\degree\\,$C and $800\\degree\\,$C respectively.\n\n\\section*{PCAR}\n\nThe Point Contact Andreev Reflection spectra are measured by an AC lock-in technique where the contact is subjected to a controlled DC bias with a small AC modulation which is sensed across the contact and across a control resistor to allow a differential conductance to be determined. The DC bias is separately monitored as it is adjusted to give a tip bias typically in the range of $\\approx\\pm30\\,$meV. The DC bias is swept over six quarters from 0 to maximum, then to minimum and back to maximum, and finally to zero. We only analyse the parts of the trace from maximum to minimum and back to maximum. The measured DC bias is subject to a small instrumentation offset which is removed at this first stage of data processing. Since all the features of interest in the spectra should be symmetrical about zero bias, we decompose the data into symmetric and anti-symmetric parts and retain only the symmetric part. The BTK model is usually considered in terms of the normalised conductance of the contact -- normalised to the conductance at high bias, where transfer occurs between electron\/hole states in the ferromagnet and electron\/hole-like quasiparticles in the superconductor. 
In this high bias regime it is possible in some contacts to get a parabolic conductance due to tunnelling or Joule heating effects \\cite{baltz2009conductance} and so we normalise the experimental data by dividing by a parabola fitted to the outer 20\\% of the spectra.\n\nThe modified BTK model used in the analysis of the PCAR data has four fitting parameters. Firstly, a dimensionless interfacial barrier strength $Z$. This effectively determines a scattering potential at the interface and so even in a perfect contact it will take a small, but finite, value. Secondly, the spin polarisation $P=\\frac{N_\\uparrow v_{f\\uparrow}-N_\\downarrow v_{f\\downarrow}}{N_\\uparrow v_{f\\uparrow}+N_\\downarrow v_{f\\downarrow}}$, which thus ranges from 0 for non-spin-polarised materials to 1 for fully spin-polarised materials. Thirdly, the superconducting energy gap $\\Delta$, which might reasonably be expected not to be fully bulk-like at the tip of the contact. Finally there is a `smearing parameter' $\\omega$ that encompasses both thermal effects and athermal scattering processes not otherwise accounted for in the model.\n\nThe modified BTK model is then fitted to the now normalised data. We start with an initial set of fitting values of $Z=0.5$, $\\omega=0.5\\,$meV (c.f. $4.2\\,$K $\\equiv 0.36\\,$meV), $\\Delta=1.5\\,$meV and $P=0.5$. We use a differential-evolution algorithm to identify a global minimum $\\chi^2$ value to locate the best fit. This best-fit location is then used to seed a non-linear least squares fit using the Levenberg--Marquardt algorithm, allowing us to determine estimated standard errors for the fitting parameters.\n\nWe then discard fits with non-physical values of $\\Delta$ or excessively large $\\omega$ -- both of which would indicate an unaccounted-for spreading resistance -- before considering all the fitted $(Z,P)$ combinations. 
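The global-then-local fitting strategy described in this section can be sketched as follows. The conductance model below is a toy stand-in for the modified BTK expression (which is considerably more involved), and the parameter values, bounds and noise are illustrative only.

```python
import numpy as np
from scipy.optimize import differential_evolution, curve_fit

# Toy stand-in for the normalised BTK conductance: a smeared dip whose
# depth plays the role of the spin polarisation P.  NOT the real model.
def model(V, P, delta):
    return 1.0 - P * np.exp(-(V / delta) ** 2)

V = np.linspace(-10, 10, 201)                        # bias (meV)
data = model(V, 0.47, 1.5) + 0.002 * np.sin(5 * V)   # pseudo-noise

# Stage 1: differential evolution searches for the global chi^2 minimum.
chi2 = lambda p: np.sum((model(V, *p) - data) ** 2)
seed = differential_evolution(chi2, bounds=[(0, 1), (0.1, 5)], seed=1).x

# Stage 2: a local least-squares fit (Levenberg-Marquardt) seeded at the
# global minimum gives the covariance matrix and hence standard errors.
popt, pcov = curve_fit(model, V, data, p0=seed)
perr = np.sqrt(np.diag(pcov))
print(abs(popt[0] - 0.47) < 0.05)
```

The two-stage structure avoids the degenerate local minima mentioned above: the evolutionary search is robust but gives no error estimates, while the local refinement supplies the covariance used for the per-parameter standard errors.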
As noted in the main text, the extrapolation back to $Z^2=0$ is not strictly physical; however, in practice in an all-metal system, very low values of $Z$ with uncertainties that encompass $Z=0$ can be obtained.\n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{XRD_ALL_1}\n\\caption{X-ray diffraction for the growth temperature set of samples reported in the main text. Left column: Full data range. Right column: Cropped region around the 111 structural peak.}\n\\label{fig:XRD_ALL_1}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{XRD_ALL_2}\n\\caption{X-ray diffraction for the growth temperature set of samples reported in the main text. Left column: Full data range. Right column: Cropped region around the 111 structural peak.}\n\\label{fig:XRD_ALL_2}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{XRD_ALL_3}\n\\caption{X-ray diffraction for the growth temperature set of samples reported in the main text. Left column: Full data range. Right column: Cropped region around the 111 structural peak.}\n\\label{fig:XRD_ALL_3}\n\\end{figure}\n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=\\linewidth]{Magnetization_Growth_T_ALL}\n\\caption{Volume magnetisation for the growth temperature set of samples reported in the main text. The thickness of all samples is 40~nm.}\n\\label{fig:stream}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=\\linewidth]{Magnetization_350_ALL}\n\\caption{Volume magnetisation for the thickness series set of samples reported in the main text for growth at 350$^\\circ$C.}\n\\label{fig:stream2}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=\\linewidth]{Magnetization_800_ALL}\n\\caption{Volume magnetisation for the thickness series set of samples reported in the main text for growth at 800$^\\circ$C. 
For the 1~nm and 2~nm samples which did not show ferromagnetism, the uncorrected moment vs field is presented.}\n\\label{fig:stream3}\n\\end{figure}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWith the establishment of the standard model (SM) by the discovery of the Higgs boson, searching for physics \nbeyond the SM (BSM) and understanding the electroweak phase transition have become main topics in particle physics.\nAs one scenario of BSM, the gauge-Higgs unification (GHU) scenario has been studied\n\\cite{Manton,Hosotani1983,Hosotani1988,\nDavies:1987,Davies:1988,\nHIL,Hatanaka1999,Kubo:2001zc,Burdman:2002se,\nCsaki:2002,\nScrucca2003,\nAgashe,\nCacciapaglia2006,\nMedina,\nFalkowski}.\nIn GHU, the Higgs boson is a part of the extra-dimensional \ncomponent of gauge potentials, appearing as a fluctuation mode of an \nAharonov-Bohm (AB) phase $\\theta_H$ in the fifth dimension.\nMany GHU models are proposed for electroweak unification \\cite{\nHosotani:2007qw,\nHosotani:2008by,\nFunatsu:2013ni,\nFunatsu:2014fda,\nFunatsu:2016uvi,\nFunatsu:2017nfm,\nHatanaka:2013iya,\nFunatsu:2019xwr,\nFunatsu:2019fry,\nFunatsu:2020haj,\nFunatsu2019a,\nFunatsu-FT,\nFunatsu:2021yrh,\nYoon2017,Yoon2018,Yoon2019,\nKurahashi:2014jca,\nMatsumoto:2016okl,\nHasegawa:2018jze,\nKakizaki:2021kof\n},\nand GHU models for grand unification are also proposed \\cite{\nLim2007,\nHosotani:2015hoa,\nFurui,\nHosotani:2017edv,\nEnglert:2019xhz,\nEnglert:2020eep,\nKakizaki:2013eba,\nKojima:2017qbt,\nMaru:2019bjr,\nAngelescu:2021nbp\n}. \nAmong them, two types of $\\mathrm{SU}(3)_{C} \\times \\mathrm{SO}(5) \\times \\mathrm{U}(1)_{X}$ GHU models,\nthe A-model\\cite{\nHosotani:2007qw,\nHosotani:2008by,\nFunatsu:2013ni,\nFunatsu:2014fda,\nFunatsu:2016uvi,\nFunatsu:2017nfm,\nHatanaka:2013iya}\nand B-model\\cite{Funatsu:2019xwr,Funatsu:2019fry,Funatsu:2020haj,\nFunatsu2019a,\nFunatsu-FT,\nFunatsu:2021yrh}, in warped spacetime have been extensively studied. 
\nAt low energies below the electroweak scale the mass spectrum and gauge and Higgs couplings of\nSM particles are nearly the same as in the SM.\n\nCouplings of the first Kaluza-Klein (KK) neutral gauge bosons to right-handed SM fermions are large in the A-model,\nwhereas those to left-handed SM fermions become large in the B-model.\nIn proton-proton collisions the KK excitations of the photon and $Z$ boson appear as broad resonances\nof $Z'$ bosons in the Drell-Yan process, and can be seen in current and future hadron collider experiments\n \\cite{Funatsu:2014fda,Funatsu:2016uvi,Funatsu:2021yrh}.\nThe KK-excited states of the $W$ boson are also seen as resonances of $W'$ bosons. \nIn the A-model the couplings of the first KK $W$ boson to \nthe SM fermions are small.\nIn the B-model the couplings of the first KK $W$ boson to right-handed fermions are negligible, while couplings to left-handed fermions are much larger than the $W$ boson couplings.\nTherefore at the LHC the first KK $W$ boson appears as a narrow resonance of the $W'$ boson in the A-model, \nbut appears as a broad resonance in the B-model \\cite{Funatsu:2016uvi,Funatsu:2021yrh}. 
\n\n\nIn $\\mathrm{e}^{+}\\mathrm{e}^{-}$ collider experiments, effects of GHU can be examined\nby exploring interference effects among photon, $Z$ boson and $Z'$ bosons.\nIn the previous papers we have studied effects of new physics (NP) on\nsuch observable quantities as cross section, forward-backward asymmetry and left-right asymmetry \nin $\\mathrm{e}^{+}\\mathrm{e}^{-} \\to f\\bar{f}$ ($f\\ne \\mathrm{e}$) with polarized and unpolarized $\\mathrm{e}^{+}\\mathrm{e}^{-}$ beams\n\\cite{Funatsu:2017nfm,Funatsu:2020haj,\nYoon2018,\nYoon2019,\nBilokin2017,\nIrles2019,\nIrles2020}.\nIn Ref.\\ \\cite{Funatsu:2017nfm} we compared such observable quantities of GHU with those of the SM in LEP experiments\nat $\\sqrt{s} = M_Z$, and LEP2 experiments for $130 \\text{GeV} \\le \\sqrt{s} \\le 207 \\,\\text{GeV}$\n\\cite{ALEPH:2005ab,ALEPH:2013dgf}.\nIn Refs.~\\cite{Funatsu:2017nfm,Funatsu:2020haj} we also gave predictions of several signals of $Z'$ bosons in \nGHU in $\\mathrm{e}^{+}\\mathrm{e}^{-} $ collider experiments \ndesigned for future with collision energies $\\sqrt{s} \\ge 250 \\text{ GeV}$ with polarized electron and positron beams.\nIn the $\\mathrm{e}^{+}\\mathrm{e}^{-} \\to f\\bar{f}$ ($f\\ne \\mathrm{e}$) modes,\nthe deviations of total cross sections become large for\nright-polarized electrons in the A-model, whereas\nin the B-model the deviations are large for left-polarized electrons.\n\nDeviations from the SM can be seen in the Higgs couplings as well.\n$HWW$, $HZZ$ and Yukawa couplings deviate from those in the SM in a universal manner\\cite{Hosotani:2007qw,Hosotani:2008by,Kurahashi:2014jca}.\nThey are suppressed by a common factor\n\\begin{eqnarray}\n\\frac{g_{HWW}^{\\text{GHU}}}{g_{HWW}^{{\\rm SM}}},\\,\n\\frac{g_{HZZ}^{\\rm \\text{GHU}}}{g_{HZZ}^{{\\rm SM}}}\n\\simeq\n\\cos(\\theta_H) \n\\end{eqnarray}\nfor $W$ and $Z$ bosons, and \n\\begin{eqnarray}\n\\frac{y_{\\bar{f}f}^{\\text{GHU}}}{y_{\\bar{f}f}^{{\\rm 
SM}}}\n\\simeq\n\\begin{cases}\n\\cos(\\theta_{H}) & \\text{A-model \\cite{Hosotani:2008by}} \\\\\n\\cos^{2}(\\theta_{H}\/2) & \\text{B-model \\cite{Funatsu:2019fry}} \n\\end{cases}\n\\label{Hcoupling1}\n\\end{eqnarray}\nfor SM fermions $f$.\nIn the analysis of both $Z'$ and $W'$ bosons in hadron colliders\n\\cite{Funatsu:2014fda,Funatsu:2016uvi,\nFunatsu:2021yrh},\nit is found that the AB phase is constrained as $\\theta_{H} \\lesssim 0.1$.\nFor $\\theta_H = \\mathcal{O}(0.1)$, a probable value in these models, \nthe deviation of the couplings amounts to $(1-\\cos\\theta_H) = \\mathcal{O}(0.005)$, and is small.\nAt the International Linear Collider (ILC) at $\\sqrt{s} = 250\\,\\text{GeV}$, the $ZZH$ coupling can be measured \nwith $0.6\\%$ accuracy with $2\\text{ ab}^{-1}$ of data \\cite{Barklow:2017suo}.\nSince the masses and Higgs couplings of the SM fields in the GHU models are very close to those in the SM, \nthe electroweak phase transition (EWPT) occurs\nat $T_{C} \\sim 160\\,\\text{GeV}$ and appears to be very weakly first order \\cite{Hatanaka:2013iya,Funatsu-FT}\nin both the A- and B-models, which is very similar to the EWPT in the SM \\cite{Senaha:2020mop}.\n\nIn this paper we study the effects of $Z'$ bosons in GHU models\non $\\mathrm{e}^{+} \\mathrm{e}^{-} \\to \\mathrm{e}^{+} \\mathrm{e}^{-}$ Bhabha scattering.\nMeasurements of Bhabha scattering at linear colliders have\ncontributed to the establishment of the SM \\cite{\nAbe:1994sh,\nAbe:1994wx,\nAbe:1996nj\n}.\nBhabha scattering is also useful to explore NP \\cite{\nRichard:2018zhl,\nDas:2021esm\n}.\nUnlike $\\mathrm{e}^{+} \\mathrm{e}^{-} \\to f\\bar{f}$ ($f\\ne \\mathrm{e}$) scattering, \nin Bhabha scattering not only the $s$-channel but also the $t$-channel contribution enters the process.\nSince the $t$-channel photon-exchange contribution dominates the forward scattering amplitude,\nthe cross section becomes very large for small scattering angles,\nwhich improves the statistics of experiments.\nIt will be seen below that effects of 
$Z'$ bosons on cross sections can be measured \nwith large significances.\n\nBhabha scattering at very small scattering angle\nis used for the determination of the luminosity of the $\\mathrm{e}^{+}\\mathrm{e}^{-}$ beams.\nSince cross sections of all other scattering processes depend on the luminosity, \none needs to know whether or not effects of $Z'$ bosons on the $\\mathrm{e}^{+}\\mathrm{e}^{-} \\to \\mathrm{e}^{+}\\mathrm{e}^{-}$ cross section\nare sufficiently small.\nThe forward-backward asymmetry $A_{FB}$ of the cross section in Bhabha scattering is no longer a good quantity \nfor searching for NP, since the backward scattering cross section is much smaller than the forward scattering cross section.\nWe will propose a new quantity $A_X$ to measure with polarized $\\mathrm{e}^{+}\\mathrm{e}^{-}$ beams,\nwhich can be used to probe NP effects in place of $A_{FB}$.\n\nIn Section 2 we briefly review the GHU A- and B-models and \ndiscuss the $\\mathrm{e}^{+}\\mathrm{e}^{-} \\to \\mathrm{e}^{+}\\mathrm{e}^{-}$ scattering \nin both the SM and GHU models. 
In Section 3, we show formulas for the\n$\\mathrm{e}^{+}\\mathrm{e}^{-}\\to \\mathrm{e}^{+}\\mathrm{e}^{-}$ scattering cross sections for longitudinally polarized $\\mathrm{e}^{\\pm}$ beams, \nand numerically evaluate the effects\nof $Z'$ bosons in GHU models on differential cross sections and left-right asymmetries.\nWe also show that the effects of $Z'$ bosons on the cross section\nare very small at very small scattering angles.\n\n\\section{Gauge-Higgs unification}\n\nIn the GHU A- and B-models the electroweak $\\mathrm{SU}(2)\\times\\mathrm{U}(1)$ symmetry\nis embedded in the $\\mathrm{SO}(5)\\times\\mathrm{U}(1)_{X}$ symmetry in the Randall-Sundrum warped space \\cite{Randall:1999ee},\nwhose metric is given by\n\\begin{align}\nds^{2} &= \\frac{1}{z^{2}} \\left[\\eta_{\\mu\\nu}dx^{\\mu}dx^{\\nu} \n+ \\frac{dz^{2}}{k^{2}}\n\\right],\n\\quad\n1 \\le z \\le z_{L} = e^{kL}\n\\end{align}\nwhere $\\eta_{\\mu\\nu} = \\diag(-1,+1,+1,+1)$ and $k$ is the AdS curvature.\nWe refer to the two 4D hyperplanes at $z=1$ and $z=z_{L}$ as the UV and IR branes, respectively.\nThe $\\mathrm{SO}(5)$ symmetry is broken to $\\mathrm{SO}(4)\\simeq \\mathrm{SU}(2)_{L} \\times \\mathrm{SU}(2)_{R}$ by the boundary conditions at $z=1,\\,z_{L}$, and\nthe $\\mathrm{SU}(2)_{R} \\times \\mathrm{U}(1)_{X}$ symmetry is broken to $\\mathrm{U}(1)_{Y}$ by a scalar field \nlocalized on the UV brane.\nThe $\\mathrm{SU}(2)_{L}\\times\\mathrm{U}(1)_{Y}$ symmetry is broken to the \nelectromagnetic $\\mathrm{U}(1)_{\\rm EM}$ symmetry by\nthe VEV of the $z$-component of the gauge fields of $\\mathrm{SO}(5)\/\\mathrm{SO}(4)$. \nThe VEV is related to the gauge-invariant AB phase $\\theta_{H}$. \n\nThe difference between the A- and B-models lies in the content of fermions.\nIn the A-model, quarks and leptons in the SM are embedded in the $\\mathrm{SO}(5)$-vector representation.
\nIn the B-model, quarks and leptons are embedded in the $\\mathrm{SO}(5)$-spinor, vector and singlet representations.\nThese fields are naturally derived from the spinor and vector multiplets in the $\\mathrm{SO}(11)$ \ngauge-Higgs grand unification \\cite{Hosotani:2015hoa, Furui}. \n\nInteractions between the electron and the gauge bosons are given by\n\\begin{align}\n\\int d^{4}x \\int_{1}^{z_{L}} \\frac{dz}{k}\n\\biggl\\{\n\\bar{\\check{\\Psi}} \\gamma^{\\mu} (\\partial_{\\mu} - i g A_{\\mu}) \\check{\\Psi}\n\\biggr\\}\n\\end{align}\nwhere $A_{\\mu}(x^{\\mu},z)$ is the four-dimensional component of the 5D gauge field and\n$\\Psi(x^{\\mu},z) = z^2 \\check{\\Psi}$ is the 5D electron field.\nThe electron corresponds to the zero mode of $\\check{\\Psi}(x^{\\mu},z)$.\nIn the A-model the left-handed electron is localized in the vicinity of the UV brane and the right-handed \ncomponent is localized near the IR brane.\nIn the B-model the right-handed electron is localized in the vicinity of the UV brane and the left-handed \ncomponent is localized near the IR brane.\n$A_{\\mu}(x^{\\mu},z)$ has a KK expansion which contains the photon, the $Z$ boson and their KK-excited modes. \nThe wave function of the photon is constant in the fifth-dimensional coordinate $z$.\nThe wave function of the $Z$ boson is almost constant in $z$, but has a nontrivial dependence near the IR brane. \nCouplings of the electron to the $Z$ boson are very close to those in the SM.\nWave functions of the first KK-excited modes of the gauge bosons are localized near the IR brane, so that \nthe first KK-excited gauge bosons couple strongly with fermions localized near the IR brane.\nIn the A-model right-handed electrons have large couplings with the first KK-excited gauge bosons, \nwhereas in the B-model left-handed electrons couple strongly with the first KK-excited gauge bosons.
\n\nIn Tables~\\ref{tbl:model} and \\ref{tbl:couplings}, parameters and couplings in the A- and B-models are tabulated.\nHere, the model parameters ($\\theta_H,m_{{\\rm KK}}$ and $z_{L}$) and the\nmasses, widths and couplings of the $Z'$ bosons\nare taken from Refs.~\\cite{Funatsu2019a,Funatsu:2020haj}.\n\\begin{table}[t]\n\\caption{Parameters in GHU models.}\\label{tbl:model}\n\\vspace{5mm}\n\\centering\n\\begin{tabular}{c|ccc|cccccc}\n\\hline\\hline\nModel & $\\theta_{H}$ & $m_{{\\rm KK}}$ & $z_{L}$ \n& $m_{\\gamma^{(1)}}$ & $\\Gamma_{\\gamma^{(1)}}$ \n& $m_{Z^{(1)}}$ & $\\Gamma_{Z^{(1)}}$ \n& $m_{Z_{R}^{(1)}}$ & $\\Gamma_{Z_{R}^{(1)}}$ \n\\\\\n& [rad] & [TeV] & & [TeV] & [TeV] & [TeV] & [TeV] & [TeV] & [TeV] \n\\\\\n\\hline\nA & $0.08$ & $9.54$ & $1.01\\times10^{4}$\n& $7.86$ & $0.99$\n& $7.86$ & $0.53$\n& $7.31$ & $1.01$\n\\\\\nB & $0.10$ & $13.0$ & $3.87\\times10^{11}$ \n& $10.2$ & $3.25$ \n& $10.2$ & $7.84$ \n& $9.95$ & $0.816$ \n\\\\\n\\hline\n\\end{tabular}\n\\end{table}\nThe difference in the magnitude of $z_{L}$ in the A- and B-models\noriginates from the formula for the top-quark mass.\nIn the A-model \n$\nm_{\\rm top}^{\\rm A} \\simeq (m_{{\\rm KK}}\/(\\sqrt{2}\\pi)) \\sqrt{1 - 4c_{\\rm top}^{2}} \\sin\\theta_{H}\n$ \\cite{Funatsu:2013ni},\nwhereas in the B-model \n$\nm_{\\rm top}^{\\rm B} \\simeq (m_{{\\rm KK}}\/\\pi) \\sqrt{1-4c_{\\rm top}^{2}} \n\\sin\\frac{1}{2}\\theta_{H}$ \\cite{Funatsu:2019xwr}.\nIn both models the $W$ boson mass is given by\n$m_{W} \\simeq m_{{\\rm KK}}\/(\\pi \\sqrt{kL}) \\sin\\theta_{H}$,\nso that the lower bound on $z_{L}$ becomes\n$z_{L} \\gtrsim 8\\times10^{3}$ in the A-model and $z_{L} \\gtrsim 7 \\times 10^{7}$ in the B-model.\n\\begin{table}[t]\n\\caption{\nLeft-handed and right-handed couplings of the electron, \n$\\ell_{V},r_{V}$ ($V = Z,Z^{(1)}$, $Z_{R}^{(1)}$ and $\\gamma^{(1)}$), in units of $g_{w}\\equiv g_{A}\/\\sqrt{L}$ (see text).\nThe ratio $(e\/g_{w})^{2}$ is shown in the second\n
column.}\\label{tbl:couplings}\n\\vspace{5mm}\n\\centering\\small\n\\begin{tabular}{c|ccccccccc}\n\\hline\\hline\nModel & $(e\/g_{w})^{2}$ \n & $\\ell_{Z}$ & $r_{Z}$ \n & $\\ell_{Z^{(1)}}$ & $r_{Z^{(1)}}$ \n & $\\ell_{Z_{R}^{(1)}}$ & $r_{Z_{R}^{(1)}}$ \n & $\\ell_{\\gamma^{(1)}}$ & $r_{\\gamma^{(1)}}$\n\\\\\n\\hline\nA & $0.2312$\n& $-0.3066$ & $0.2638$\n& $0.1195$ & $0.9986$\n& $0.0000$ & $-1.3762$ \n& $0.1879$ & $-1.8171$ \\\\\nB & $0.2306$ \n& $-0.3058$ & $0.2629$ \n& $-1.7621$ & $-0.0584$ \n& $-1.0444$ & $0.0000$ \n& $-2.7587$ & $0.1071$\n\\\\\n\\hline\n\\end{tabular}\n\\end{table}\nIn Table~\\ref{tbl:couplings}, the left- and right-handed electron couplings to the $Z'$ bosons, $\\ell_{V}, r_{V}$ ($V = Z,Z^{(1)},Z_{R}^{(1)},\\gamma^{(1)}$), are tabulated.\nIn the table $g_{w}\\equiv g_{A}\/\\sqrt{L}$ is the 4D $\\mathrm{SO}(5)$ gauge coupling, where\n$g_{A}$ is the 5D $\\mathrm{SO}(5)$ coupling.\nIn terms of $g_{A}$ and the 5D $\\mathrm{U}(1)_{X}$ coupling $g_{B}$, a mixing parameter is defined as \\cite{Funatsu:2014fda,Funatsu:2020haj}\n\\begin{align}\ne\/g_{w} = \\sin\\theta_{W}^{0} &\\equiv \n\\frac{s_{\\phi}}{\\sqrt{1 + s_{\\phi}^{2}}},\n\\quad\ns_{\\phi} \\equiv g_{B}\/\\sqrt{ g_{A}^{2} + g_{B}^{2}}.\n\\end{align}\nThe value of $\\sin\\theta_{W}^{0}$ is determined so as to reproduce \nthe experimental value of the forward-backward asymmetry in \n$\\mathrm{e}^{+}\\mathrm{e}^{-}\\to \\mu^{+}\\mu^{-}$ scattering at the $Z$-pole.\nIn the A-model the electron's right-handed couplings to the $Z'$ bosons are larger than its left-handed couplings.\nIn the B-model the electron's left-handed couplings to the $Z'$ bosons are larger than its right-handed couplings.\n\n\n\\section{Bhabha scattering in $\\mathrm{e}^{+}\\mathrm{e}^{-}$ colliders}\n\nWe consider $\\mathrm{e}^{+} \\mathrm{e}^{-} \\to \\mathrm{e}^{+} \\mathrm{e}^{-}$ scattering in the center-of-mass frame.\nIn this frame, the Mandelstam variables $(s,t,u)$ are given by\n\\begin{align}\ns &= 4 E^{2},\n\\nonumber \\\\\nt
&=-\\frac{s}{2}(1-\\cos\\theta) = -s \\sin^{2}\\frac{\\theta}{2},\n\\nonumber \\\\\nu &=-\\frac{s}{2}(1+\\cos\\theta) = -s \\cos^{2} \\frac{\\theta}{2}\n\\end{align}\nwhere $E$ is the energy of the initial electron and positron, and $\\theta$ is the scattering angle of the electron.\nSince the $\\mathrm{e}^{+}\\mathrm{e}^{-} \\to \\mathrm{e}^{+}\\mathrm{e}^{-}$ scattering process consists of\nboth $s$- and $t$-channel processes,\nthe scattering amplitude is written in terms of the following six building blocks:\n\\begin{align}\nS_{LL} = S_{LL}(s) \n&\\equiv \n\\sum_{i} \\frac{\\ell_{V_{i}}^{2}}{s - M_{V_{i}}^{2} + iM_{V_{i}}\\Gamma_{V_{i}}},\n\\nonumber \\\\\nS_{RR} = S_{RR}(s) \n&\\equiv\n\\sum_{i} \\frac{r_{V_{i}}^{2}}{s - M_{V_{i}}^{2} + iM_{V_{i}}\\Gamma_{V_{i}}},\n\\nonumber \\\\\nS_{LR} = S_{LR}(s) &\\equiv\n\\sum_{i} \\frac{\\ell_{V_{i}} r_{V_{i}}}{s - M_{V_{i}}^{2} + iM_{V_{i}}\\Gamma_{V_{i}}},\n\\nonumber \\\\\nT_{LL} = T_{LL}(s,\\theta) &\\equiv\n\\sum_{i} \\frac{\\ell_{V_{i}}^{2}}{t - M_{V_{i}}^{2} + iM_{V_{i}}\\Gamma_{V_{i}}},\n\\nonumber \\\\\nT_{RR} = T_{RR}(s,\\theta) &\\equiv\n\\sum_{i} \\frac{r_{V_{i}}^{2}}{t - M_{V_{i}}^{2} + iM_{V_{i}}\\Gamma_{V_{i}}},\n\\nonumber \\\\\nT_{LR} = T_{LR}(s,\\theta) &\\equiv \n\\sum_{i} \\frac{\\ell_{V_{i}} r_{V_{i}} }{t - M_{V_{i}}^{2} + iM_{V_{i}}\\Gamma_{V_{i}}},\n\\label{eq:blocks}\n\\end{align}\nwhere $M_{V_{i}}$ and $\\Gamma_{V_{i}}$ are the mass and width of the vector boson $V_{i}$.\n$\\ell_{V_{i}}$ and $r_{V_{i}}$ are the left- and right-handed couplings of the electron to the vector boson $V_{i}$ ($V_{0} =\\gamma$, $V_{1} = Z$), respectively. \nIn particular, we have\n$\\ell_{\\gamma} = r_{\\gamma} = Q_{\\mathrm{e}} e$, $Q_{\\mathrm{e}} = -1$,\n$\\ell_{Z} = \\frac{e}{\\sin\\theta_{W}\\cos\\theta_{W}} [I_{\\mathrm{e}}^{3} - Q_{\\mathrm{e}}\\sin^{2}\\theta_{W}]$,\n$r_{Z} = \\frac{e}{\\sin\\theta_{W}\\cos\\theta_{W}} [- Q_{\\mathrm{e}}\\sin^{2}\\theta_{W}]$,\nand $I^{3}_{\\mathrm{e}} = -\\frac{1}{2}$ in the SM.
\nHere $e$, $I_{\\mathrm{e}}^{3}$ and $\\theta_{W}$ are the electromagnetic coupling, the weak isospin of the electron and the Weinberg angle, respectively.\n\nWhen the initial-state electrons and positrons are longitudinally polarized, \nthe differential cross section is given by\n\\begin{align}\n\\frac{d\\sigma}{d\\cos\\theta}(P_{\\mathrm{e}^{-}}, P_{\\mathrm{e}^{+}})\n&= \\frac{1}{4} \\biggl\\{\n(1+ P_{\\mathrm{e}^{-}}) (1+P_{\\mathrm{e}^{+}}) \n\\frac{d\\sigma_{\\mathrm{e}^{-}_{R} \\mathrm{e}^{+}_{R} }}{d\\cos\\theta}\n+\n(1 - P_{\\mathrm{e}^{-}}) (1 - P_{\\mathrm{e}^{+}}) \n\\frac{d\\sigma_{\\mathrm{e}^{-}_{L} \\mathrm{e}^{+}_{L} }}{d\\cos\\theta}\n\\nonumber\\\\& \\qquad\n+\n(1 + P_{\\mathrm{e}^{-}}) (1 - P_{\\mathrm{e}^{+}}) \n\\frac{d\\sigma_{\\mathrm{e}^{-}_{R} \\mathrm{e}^{+}_{L} }}{d\\cos\\theta}\n+\n(1 - P_{\\mathrm{e}^{-}}) (1 + P_{\\mathrm{e}^{+}}) \n\\frac{d\\sigma_{\\mathrm{e}^{-}_{L} \\mathrm{e}^{+}_{R} }}{d\\cos\\theta}\n\\biggr\\},\n\\end{align}\nwhere $P_{\\mathrm{e}^{-}}$ and $P_{\\mathrm{e}^{+}}$ are the polarizations of the electron and positron beams, respectively. $P_{\\mathrm{e}^{-}}=+1$ ($P_{\\mathrm{e}^{+}}=+1$) denotes purely right-handed electrons (positrons) \\cite{MoortgatPick:2005cw,Arbuzov:2020ghr}.\n$\\sigma_{\\mathrm{e}_{X}^{-} \\mathrm{e}_{Y}^{+}}$ ($X,Y=L,R$) denotes the cross section \nfor an electron of helicity $X$ and a positron of helicity $Y$.
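The polarization-weighted combination above can be sketched numerically. This is a minimal illustration, assuming nothing beyond the formula itself; the helicity cross-section values used below are arbitrary placeholders, not model predictions.

```python
def dsigma_polarized(p_em, p_ep, xs_rr, xs_ll, xs_rl, xs_lr):
    """Polarization-weighted differential cross section built from the
    four helicity cross sections.  p_em and p_ep are the electron and
    positron beam polarizations (+1 = purely right-handed)."""
    return 0.25 * ((1 + p_em) * (1 + p_ep) * xs_rr
                   + (1 - p_em) * (1 - p_ep) * xs_ll
                   + (1 + p_em) * (1 - p_ep) * xs_rl
                   + (1 - p_em) * (1 + p_ep) * xs_lr)

# Purely right-handed beams select the RR helicity combination alone,
# while unpolarized beams average over all four combinations.
assert dsigma_polarized(+1, +1, 7.0, 1.0, 2.0, 3.0) == 7.0
assert dsigma_polarized(0, 0, 7.0, 1.0, 2.0, 3.0) == 0.25 * (7.0 + 1.0 + 2.0 + 3.0)
```

The prefactors $(1 \pm P)/2$ are simply the fractions of right- and left-handed particles in each beam.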
\nWhen the electron mass is neglected, these cross sections \nare given by\n\\begin{align}\n\\frac{d\\sigma_{\\mathrm{e}^-_L \\mathrm{e}^+_R}}{d\\cos\\theta}\n&= \\frac{1}{8\\pi s}\n\\left( u^2 |S_{LL} + T_{LL}|^2 + t^2 |S_{LR}|^2 \\right),\n\\nonumber \\\\\n\\frac{d\\sigma_{\\mathrm{e}^-_R \\mathrm{e}^+_L}}{d\\cos\\theta}\n&= \\frac{1}{8\\pi s}\n\\left( u^2 |S_{RR} + T_{RR}|^2 + t^2 |S_{LR}|^2 \\right),\n\\nonumber \\\\\n\\frac{d\\sigma_{\\mathrm{e}^-_L \\mathrm{e}^+_L}}{d\\cos\\theta}\n&= \\frac{d\\sigma_{\\mathrm{e}^-_R \\mathrm{e}^+_R}}{d\\cos\\theta}\n= \\frac{1}{8\\pi s} \\left(s^2 |T_{LR}|^2 \\right).\n\\end{align}\n\nWhen $s, |t| \\ll M_{Z}^{2}$, \nthe cross section is approximated by the one at the QED level,\nwhere we obtain $S_{LL} = S_{RR} = S_{LR} = e^2\/s$ and\n$T_{LL} = T_{RR} = T_{LR} = e^2\/t$, and \n\\begin{align}\n\\frac{d\\sigma_{\\rm QED}}{d\\cos\\theta}(P_{e^-},P_{e^+})\n&= \n\\frac{e^{4}}{16\\pi s}\\left\\{\n(1 - P_{\\mathrm{e}^{-}}P_{\\mathrm{e}^{+}} ) \\frac{t^{4}+u^{4}}{s^{2}t^{2}}\n+ (1 + P_{\\mathrm{e}^{-}} P_{\\mathrm{e}^{+}}) \\frac{s^{2}}{t^{2}}\n\\right\\}.\n\\end{align}\nFor unpolarized electron and positron beams, the above expression reduces to\nthe familiar formula\n\\begin{align}\n\\frac{d\\sigma^{\\rm unpolarized}_{\\rm QED}}{ d\\cos\\theta }\n&=\n \\frac{e^4}{16\\pi s} \\frac{s^4 + t^4 + u^4}{s^2 t^2}.\n\\end{align}\nWe also note that in terms of the building blocks \\eqref{eq:blocks} we \ncan write down the $s$-channel, $t$-channel and interference components as\n\\begin{align}\n\\frac{d\\sigma}{d\\cos\\theta}\n&=\\frac{d\\sigma^{\\text{s-channel}}}{d\\cos\\theta}\n+ \\frac{d\\sigma^{\\text{t-channel}}}{d\\cos\\theta}\n+ \\frac{d\\sigma^{\\text{interference}}}{d\\cos\\theta},\n\\end{align}\nwhere each component is given by\n\\begin{align}\n\\frac{d\\sigma^{\\text{s-channel}}}{d\\cos\\theta}(P_{\\mathrm{e}^{-}},P_{\\mathrm{e}^{+}})\n&= \\frac{1}{32\\pi s} \\biggl\\{\n (1+P_{\\mathrm{e}^{-}})(1-P_{\\mathrm{e}^{+}}) \\left[ u^{2}
|S_{RR}|^{2} + t^{2} |S_{LR}|^{2} \\right]\n\\nonumber\\\\& \\qquad\n+ (1-P_{\\mathrm{e}^{-}})(1+P_{\\mathrm{e}^{+}}) \\left[ u^{2} |S_{LL}|^{2} + t^{2} |S_{LR}|^{2} \\right]\n\\biggr\\},\n\\nonumber \\\\\n\\frac{d\\sigma^{\\text{t-channel}}}{d\\cos\\theta}(P_{\\mathrm{e}^{-}},P_{\\mathrm{e}^{+}})\n&= \\frac{1}{32\\pi s}\\biggl\\{\n (1+P_{\\mathrm{e}^{-}})(1+P_{\\mathrm{e}^{+}}) s^{2} |T_{LR}|^{2}\n+ (1-P_{\\mathrm{e}^{-}})(1-P_{\\mathrm{e}^{+}}) s^{2} |T_{LR}|^{2}\n\\nonumber\\\\& \\qquad\n+ (1+P_{\\mathrm{e}^{-}})(1-P_{\\mathrm{e}^{+}}) u^{2} |T_{RR}|^{2}\n+ (1-P_{\\mathrm{e}^{-}})(1+P_{\\mathrm{e}^{+}}) u^{2} |T_{LL}|^{2}\n\\biggr\\},\n\\nonumber \\\\\n\\frac{d\\sigma^{\\text{interference}}}{d\\cos\\theta}(P_{\\mathrm{e}^{-}},P_{\\mathrm{e}^{+}})\n&= \\frac{1}{16\\pi s} u^{2} \\biggl\\{\n (1+P_{\\mathrm{e}^{-}})(1-P_{\\mathrm{e}^{+}}) \\Re(S_{RR} T_{RR}^{*})\n\\nonumber\\\\& \\qquad\n+(1-P_{\\mathrm{e}^{-}})(1+P_{\\mathrm{e}^{+}}) \\Re(S_{LL} T_{LL}^{*})\n\\biggr\\}.\n\\end{align}\n\n\nWhen initial electrons and\/or positrons are longitudinally polarized, one can measure left-right asymmetries.\nThe left-right asymmetry of polarized cross sections is given by\n\\begin{align}\nA_{\\text{LR}}(P_{-},P_{+})\n&\\equiv \\frac{\n\\sigma(P_{\\mathrm{e}^{-}} = -P_{-}, P_{\\mathrm{e}^{+}}= - P_{+})\n- \\sigma(P_{\\mathrm{e}^{-}} = +P_{-}, P_{\\mathrm{e}^{+}}= + P_{+})\n}{\n\\sigma(P_{\\mathrm{e}^{-}} = -P_{-}, P_{\\mathrm{e}^{+}}= - P_{+})\n+ \\sigma(P_{\\mathrm{e}^{-}} = +P_{-}, P_{\\mathrm{e}^{+}}= + P_{+})\n}\n\\nonumber \\\\\n&= (P_{-} - P_{+}) \\cdot \n\\frac{\n\\sigma_{\\mathrm{e}^{-}_{L} \\mathrm{e}^{+}_{R}} - \\sigma_{\\mathrm{e}^{-}_{R} \\mathrm{e}^{+}_{L}}\n}{\n(1 + P_{-} P_{+}) (\\sigma_{\\mathrm{e}^{-}_{L} \\mathrm{e}^{+}_{L}} + \\sigma_{\\mathrm{e}^{-}_{R} \\mathrm{e}^{+}_{R}})\n+\n(1 - P_{-} P_{+}) (\\sigma_{\\mathrm{e}^{-}_{L} \\mathrm{e}^{+}_{R}} + \\sigma_{\\mathrm{e}^{-}_{R} \\mathrm{e}^{+}_{L}})\n}\n,\n\\nonumber \\\\ \n& 1\\ge P_{-} \\ge 0,\\quad 1 
\\ge P_{+} \\ge -1,\n\\label{eq:ALR}\n\\end{align}\nwhere the cross section in a given bin $[\\theta_{1},\\theta_{2}]$ is given by\n$\n\\sigma \\equiv \\int_{\\cos\\theta_{1}}^{\\cos\\theta_{2}} \\frac{d\\sigma}{d\\cos\\theta} d\\cos\\theta\n$.\nWe have used $\\sigma_{\\mathrm{e}^{-}_{L} \\mathrm{e}^{+}_{L}} = \\sigma_{\\mathrm{e}^{-}_{R} \\mathrm{e}^{+}_{R}}$\nbecause $\\frac{d\\sigma_{\\mathrm{e}^{-}_{L} \\mathrm{e}^{+}_{L}}}{d\\cos\\theta} - \n \\frac{d\\sigma_{\\mathrm{e}^{-}_{R} \\mathrm{e}^{+}_{R}}}{d\\cos\\theta} = 0$.\nWe can also define the left-right asymmetry of the differential cross section as\n\\begin{align}\n\\lefteqn{A_{\\text{LR}}(P_{-},P_{+},\\cos\\theta)}\n\\quad\n\\nonumber\\\\\n&\\equiv\n\\frac{\n\\dfrac{d\\sigma}{d\\cos\\theta}(P_{\\mathrm{e}^{-}} = -P_{-}, P_{\\mathrm{e}^{+}}= - P_{+})\n- \\dfrac{d\\sigma}{d\\cos\\theta}(P_{\\mathrm{e}^{-}} = +P_{-}, P_{\\mathrm{e}^{+}}= + P_{+})\n}{\n\\dfrac{d\\sigma}{d\\cos\\theta}(P_{\\mathrm{e}^{-}} = -P_{-}, P_{\\mathrm{e}^{+}}= - P_{+})\n+ \\dfrac{d\\sigma}{d\\cos\\theta}(P_{\\mathrm{e}^{-}} = +P_{-}, P_{\\mathrm{e}^{+}}= + P_{+})\n}\n\\nonumber \\\\\n&= \\frac{\n(P_{-} + P_{+}) \\left(\\dfrac{d\\sigma_{\\mathrm{e}^{-}_{L} \\mathrm{e}^{+}_{L}}}{d\\cos\\theta} \n- \\dfrac{d\\sigma_{\\mathrm{e}^{-}_{R} \\mathrm{e}^{+}_{R}}}{d\\cos\\theta} \\right)\n+\n(P_{-} - P_{+}) \\left(\\dfrac{d\\sigma_{\\mathrm{e}^{-}_{L} \\mathrm{e}^{+}_{R}}}{d\\cos\\theta}\n - \\dfrac{d\\sigma_{\\mathrm{e}^{-}_{R} \\mathrm{e}^{+}_{L}}}{d\\cos\\theta} \\right)\n}{\n(1 + P_{-} P_{+}) \\left(\\dfrac{d\\sigma_{\\mathrm{e}^{-}_{L} \\mathrm{e}^{+}_{L}}}{d\\cos\\theta}\n + \\dfrac{d\\sigma_{\\mathrm{e}^{-}_{R} \\mathrm{e}^{+}_{R}}}{d\\cos\\theta} \\right)\n+\n(1 - P_{-} P_{+}) \\left(\\dfrac{d\\sigma_{\\mathrm{e}^{-}_{L} \\mathrm{e}^{+}_{R}}}{d\\cos\\theta}\n + \\dfrac{d\\sigma_{\\mathrm{e}^{-}_{R} \\mathrm{e}^{+}_{L}}}{d\\cos\\theta} \\right)\n}\n\\nonumber \\\\\n&= (P_{-} - P_{+}) \\cdot\n\\frac{\n \\Sigma_{LR-RL}\n}{(1 + P_{-} P_{+}) 
\\Sigma_{LL+RR} + (1-P_{-} P_{+}) \\Sigma_{LR+RL}},\n\\end{align}\nwhere we have used $d\\sigma_{\\mathrm{e}^{-}_{L} \\mathrm{e}^{+}_{L}}\/d\\cos\\theta = d\\sigma_{\\mathrm{e}^{-}_{R} \\mathrm{e}^{+}_{R}}\/d\\cos\\theta$ and defined\n\\begin{align}\n\\Sigma_{LL+RR} &\\equiv 2s^{2} |T_{LR}|^{2},\n\\nonumber \\\\\n\\Sigma_{LR+RL} &\\equiv u^{2} (|S_{LL} + T_{LL}|^{2} + |S_{RR} + T_{RR}|^{2}) + 2t^{2} |S_{LR}|^{2},\n\\nonumber \\\\\n\\Sigma_{LR-RL} &\\equiv u^{2} ( |S_{LL} + T_{LL}|^{2} - |S_{RR} + T_{RR}|^{2} ).\n\\end{align}\n\nIn $\\mathrm{e}^+ \\mathrm{e}^- \\to \\mathrm{e}^+ \\mathrm{e}^-$ scattering we have\n$A_{LR}(P_{-},+P_{+},\\cos\\theta)$ and $A_{LR}(P_{-},-P_{+},\\cos\\theta)$ as independent observables,\nand one may define the following non-trivial quantity:\n\\begin{align}\n&A_{X}(\\cos\\theta) \\equiv \n\\frac{\\Sigma_{LL+RR} - \\Sigma_{LR+RL}}{\\Sigma_{LL+RR} + \\Sigma_{LR+RL}}\n\\nonumber \\\\\n&\\quad\n= \\frac{1}{P_{-} P_{+}} \\cdot \\dfrac{\n (P_{-} - P_{+}) A_{\\text{LR}}(P_{-},-P_{+},\\cos\\theta) - (P_{-} + P_{+}) A_{\\text{LR}}(P_{-},+P_{+},\\cos\\theta)\n}{\n (P_{-} - P_{+}) A_{\\text{LR}}(P_{-},-P_{+},\\cos\\theta) + (P_{-} + P_{+}) A_{\\text{LR}}(P_{-},+P_{+},\\cos\\theta)\n}.\\label{eq:AX}\n\\end{align} \nThis quantity may be used to explore NP beyond the SM, as discussed below.\n\nSince $\\mathrm{e}^+ \\mathrm{e}^- \\to \\mathrm{e}^+ \\mathrm{e}^-$ scattering contains $t$-channel processes,\nforward scattering dominates.\n
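The second equality in Eq.~\eqref{eq:AX} can be checked numerically: for arbitrary (hypothetical) values of the three $\Sigma$ blocks, combining the two measurable asymmetries reproduces $(\Sigma_{LL+RR}-\Sigma_{LR+RL})/(\Sigma_{LL+RR}+\Sigma_{LR+RL})$. A short sketch; the $\Sigma$ values and polarizations below are illustrative inputs only.

```python
import math

def a_lr(p_minus, p_plus, sig_llrr, sig_lrrl, sig_diff):
    """Differential left-right asymmetry written in terms of the
    Sigma blocks, as in the closed form derived above."""
    num = (p_minus - p_plus) * sig_diff
    den = ((1 + p_minus * p_plus) * sig_llrr
           + (1 - p_minus * p_plus) * sig_lrrl)
    return num / den

# Hypothetical Sigma_{LL+RR}, Sigma_{LR+RL}, Sigma_{LR-RL} values
# and ILC-like beam polarizations (not model predictions).
X, Y, D = 5.0, 2.0, 1.5
pm, pp = 0.8, 0.3

a1 = a_lr(pm, +pp, X, Y, D)   # A_LR(P-, +P+)
a2 = a_lr(pm, -pp, X, Y, D)   # A_LR(P-, -P+)

# Reconstruct A_X from the two asymmetries, following Eq. (AX).
a_x = (1.0 / (pm * pp)) * (
    ((pm - pp) * a2 - (pm + pp) * a1)
    / ((pm - pp) * a2 + (pm + pp) * a1))

assert math.isclose(a_x, (X - Y) / (X + Y))
```

The common factor $\Sigma_{LR-RL}$ cancels in the ratio, which is why $A_X$ isolates the combination $(\Sigma_{LL+RR}-\Sigma_{LR+RL})/(\Sigma_{LL+RR}+\Sigma_{LR+RL})$.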
Therefore, unlike in $\\mathrm{e}^+\\mathrm{e}^- \\to f\\bar{f}$ ($f\\ne \\mathrm{e}^{-}$) scattering, \nthe forward-backward asymmetry of $\\mathrm{e}^{+} \\mathrm{e}^{-} \\to \\mathrm{e}^{+} \\mathrm{e}^{-}$ scattering is a less meaningful quantity.\n\nWe note that all of the above formulas can be applied\nto $\\ell^{+}\\ell^{-} \\to \\ell^{+}\\ell^{-}$ ($\\ell = \\mu,\\tau$) scatterings.\n\n\n\n\\section{Numerical Study}\n\nIn the following we calculate the $\\mathrm{e}^{+} \\mathrm{e}^{-} \\to \\mathrm{e}^{+}\\mathrm{e}^{-}$\ncross sections both in the SM and in the GHU models,\nand evaluate the effects of $Z'$ bosons in the GHU models on the observables\ngiven in the previous section.\nAs benchmark points, we have chosen typical parameters of the A- and B-models\nin Tables \\ref{tbl:model} and \\ref{tbl:couplings}. \nFor experimental parameters we choose\n$\\sqrt{s}=250$ GeV and $L_{\\mathrm{int}} = 250$ fb$^{-1}$ as typical values for linear $\\mathrm{e}^{+}\\mathrm{e}^{-}$ colliders such as the ILC \\cite{ILC}. \nWe also choose $L_{\\mathrm{int}} = 2$ ab$^{-1}$, which will be achieved\nat circular $\\mathrm{e}^{+}\\mathrm{e}^{-}$ colliders such as FCC-ee \\cite{Blondel:2021ema} and CEPC \\cite{CEPCStudyGroup:2018ghi}.
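The forward dominance that motivates these benchmark choices can be illustrated by evaluating the unpolarized tree-level QED formula quoted in the previous section. The fine-structure constant and the GeV$^{-2}\to$ pb conversion factor below are standard numerical inputs, not model parameters, and the tree-level QED expression is only a rough sketch of the full cross section.

```python
import math

ALPHA = 1.0 / 137.0        # fine-structure constant (rough value)
GEV2_TO_PB = 3.894e8       # (hbar c)^2 conversion: 1 GeV^-2 = 3.894e8 pb

def dsigma_qed(sqrt_s_gev, cos_theta):
    """Unpolarized tree-level QED Bhabha d(sigma)/d(cos theta) in pb,
    e^4/(16 pi s) * (s^4 + t^4 + u^4) / (s^2 t^2)."""
    s = sqrt_s_gev ** 2
    t = -0.5 * s * (1.0 - cos_theta)
    u = -0.5 * s * (1.0 + cos_theta)
    e4 = (4.0 * math.pi * ALPHA) ** 2   # e^2 = 4 pi alpha
    return e4 / (16.0 * math.pi * s) * (s**4 + t**4 + u**4) / (s**2 * t**2) * GEV2_TO_PB

# The 1/t^2 pole makes the forward region dominate the event rate.
forward = dsigma_qed(250.0, 0.9)
central = dsigma_qed(250.0, 0.0)
assert forward > 100.0 * central
```

The steep growth toward $\cos\theta \to 1$ is what provides the large event samples in the forward bins used below.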
\nFor the new asymmetry $A_{X}(\\cos\\theta)$ in \\eqref{eq:AX},\n we consider $\\sqrt{s} = 3$ TeV for future linear colliders such as CLIC \\cite{CLICdesign}.\nAs for the longitudinal polarization, we set the parameter ranges\n$- 0.8 \\le P_{\\mathrm{e}^{-}} \\le +0.8$ and $-0.3 \\le P_{\\mathrm{e}^{+}} \\le 0.3$,\nwhich can be achieved at the ILC \\cite{ILC}.\n\nIn Figure~\\ref{fig:dsigma}, the $\\mathrm{e}^{+}\\mathrm{e}^{-}\\to \\mathrm{e}^{+}\\mathrm{e}^{-}$ differential cross sections in the SM are plotted.\nIn the forward-scattering region ($\\cos\\theta>0$),\n the magnitudes of the $t$-channel and interference\nparts are much larger than that of the $s$-channel part.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{dsigma-ee-SM-P00.pdf}\n\\includegraphics[width=0.47\\textwidth]{dsigma-ee-SM-P00-Linear.pdf}\n\\caption{%\nDifferential cross sections for unpolarized $\\mathrm{e}^{+}\\mathrm{e}^{-}$ initial\nstates in the SM.\n(a) Log-scale plot with $-0.9 \\le \\cos\\theta \\le 0.9$.\n(b) Linear plot with $-0.7 \\le \\cos\\theta \\le 0.7$.\nIn both plots, red-solid lines indicate the sum of the $s$-channel, $t$-channel and interference contributions. \nThe $s$-channel and $t$-channel contributions are drawn as blue-dashed and purple-dotted lines, respectively. \nIn (b), the negative contribution from the interference is plotted with the green-dashed line.\n}\\label{fig:dsigma}\n\\end{figure}\n\n\n\n\\begin{figure}[H]\n\\includegraphics[width=0.48\\textwidth]{Delta-dsigma-ee-GHU-A4-P00.pdf}\\hspace{0.2cm}\n\\includegraphics[width=0.48\\textwidth]{Delta-dsigma-ee-GHU-B1-P00.pdf}\n\\caption{\nDeviations of differential cross sections \nof GHU from those in the SM,\n$\n\\frac{d\\sigma^{\\text{GHU}}}{d\\cos\\theta}\/\n\\frac{d\\sigma^{{\\rm SM}}}{d\\cos\\theta}-1\n$,\nfor unpolarized $\\mathrm{e}^{+}\\mathrm{e}^{-}$ beams\nare plotted. \nThe left plot is for the A-model and the right plot is for the B-model.\n
In each plot,\nthe red-solid curve represents the deviation of the sum of all the components of the differential cross section.\nBlue-dashed, purple-dotted and green dot-dashed curves correspond to\n the deviations of the $s$-channel, $t$-channel and interference components of the differential cross sections, respectively.\n Error-bars are estimated for $L_{\\mathrm{int}} = 250 \\text{ fb}^{-1}$. \n}\\label{fig:Delta-dsigma}\n\\end{figure}\n\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{Delta-dsigma-ee-GHU-A4-P+8-3.pdf}\\hspace{0.2cm}\n\\includegraphics[width=0.48\\textwidth]{Delta-dsigma-ee-GHU-A4-P-8+3.pdf}\\\\\n\\includegraphics[width=0.48\\textwidth]{Delta-dsigma-ee-GHU-B1-P-8+3.pdf}\\hspace{0.2cm}\n\\includegraphics[width=0.48\\textwidth]{Delta-dsigma-ee-GHU-B1-P+8-3.pdf}%\n\\caption{%\nDeviations of differential cross sections\nof GHU models from the SM \nfor polarized $\\mathrm{e}^{+}\\mathrm{e}^{-}$ beams.\nGHU-A [(a) and (b)] and GHU-B [(c) and (d)].\n(a) and (c) are for $(P_{e^{-}},P_{e^{+}}) = (-0.8,+0.3)$.\n(b) and (d) are for $(P_{e^{-}},P_{e^{+}}) = (+0.8,-0.3)$.\nMeanings of the curves and error-bars are the same as in Figure~\\ref{fig:Delta-dsigma}.\n}\\label{fig:Delta-dsigma0}\n\\end{figure}\nIn Figures~\\ref{fig:Delta-dsigma} and \\ref{fig:Delta-dsigma0}, \nthe differences of the differential cross sections of the GHU models from the SM for unpolarized and polarized beams are plotted, respectively.\nIn the figures, the differences of the $s$-channel, $t$-channel and interference contributions are also plotted.\nIn the $s$-channel, the NP effects contribute destructively in forward scattering.\nOn the other hand, in the $t$-channel the NP effects contribute constructively.\nSince the cross section is dominated by the $t$-channel, \nthe sum of the \n$s$-channel, $t$-channel and interference contributions \nincreases due to the NP effects.\n\nIn the A-model $Z'$ bosons have larger couplings to right-handed electrons than to left-handed electrons.\nTherefore\nthe cross section\n
of the $\\mathrm{e}_{R}^{-}\\mathrm{e}_{L}^{+}$ initial states becomes larger than that of $\\mathrm{e}_{L}^{-}\\mathrm{e}_{R}^{+}$. \nOn the other hand,\nin the B-model $Z'$ bosons have larger couplings to left-handed electrons than to right-handed electrons,\nand the cross section of the $\\mathrm{e}_{L}^{-}\\mathrm{e}_{R}^{+}$ initial states becomes larger.\n\nThe NP effects become smaller as $\\theta$ becomes smaller.\nThe statistical uncertainty, however, also becomes small since the cross section\nbecomes very large.\nTherefore the deviations of the cross section relative to the statistical uncertainties may still become large.\n\nFor unpolarized $\\mathrm{e}^{+}\\mathrm{e}^{-}$ beams (Figure~\\ref{fig:Delta-dsigma}),\nthe new physics effect in both models tends to enhance the cross section in\nforward scattering with almost the same magnitude.\nIn the B-model the suppression of NP effects due to the larger $Z'$ masses \nis compensated by $Z'$ couplings larger than those in the A-model.\nThe enhancement of the differential cross section due to the NP effects at $\\cos\\theta \\sim 0.3$ is around 1\\%.\n\nFor polarized beams the deviations can be much larger.\nIn the A-model, electrons have large right-handed couplings to $Z'$ bosons, and for a right-handed polarized electron beam the relative deviations of the cross section \nbecome as much as $2\\,\\%$ [Figure~\\ref{fig:Delta-dsigma0}-(b)], whereas for left-handed beams the relative deviations are around $0.1\\,\\%$ [Figure~\\ref{fig:Delta-dsigma0}-(a)].\nIn contrast, in the B-model a left-handed electron has large couplings to $Z'$ bosons.\n
Therefore in the B-model the deviations become large for a left-handed polarized electron beam [Figures~\\ref{fig:Delta-dsigma0}-(c) and (d)].\nIn Figure~\\ref{fig:Delta-dsigma},\nwe have also shown the statistical $1\\,\\sigma$ relative errors at $L_{\\mathrm{int}} = 250$ fb$^{-1}$ \nfor bins\n$[\\cos\\theta_{0}-0.05,\\cos\\theta_{0}+0.05]$ ($\\cos\\theta_{0} = -0.90, -0.80,\\dots, +0.90$) as vertical bars.\nIn each bin, the observed number of events and its statistical uncertainty are given by $N$ and $\\sqrt{N}$, respectively.\nTherefore the relative error of the cross section is \nestimated as the inverse of the square root of the number of events, $1\/\\sqrt{N}$.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.58\\textwidth]{sigdsigma.pdf}\n\\caption{Estimated significances of the deviations of the differential cross sections in GHU models\nwith unpolarized $\\mathrm{e}^{+} \\mathrm{e}^{-}$ beams and an integrated luminosity $L_{\\mathrm{int}} = 2\\text{ ab}^{-1}$.\nThe statistical significance corresponding to a 0.1\\% non-statistical error is plotted with a purple dot-dashed line.}\n\\label{fig:sigdsigma}\n\\end{figure}\n\nIn Figure~\\ref{fig:sigdsigma}, the statistical significances in the GHU models are plotted.\nAn estimated significance of the deviation of the cross section \nin a bin is given by\n\\begin{align}\n\\frac{|N_{\\text{GHU}} - N_{{\\rm SM}}|}{N_{{\\rm SM}}} \\bigg\/ \\frac{\\sqrt{N_{{\\rm SM}}}}{N_{{\\rm SM}}}\n= \\frac{|N_{\\text{GHU}} - N_{{\\rm SM}}|}{\\sqrt{N_{{\\rm SM}}}},\n\\end{align} \nwhere $N_{\\text{GHU}}$ and $N_{{\\rm SM}}$ are the numbers of events in a bin in the GHU models and in the SM, respectively.\nIn Figure~\\ref{fig:sigdsigma},\nthe significances are larger than $5\\,\\sigma$\nfor $\\cos\\theta\\gtrsim0.1$. \nThe significances are very large for forward scattering\nbut very small for backward scattering.\nIn Figure~\\ref{fig:sigdsigma}, relative 0.1\\% errors are also shown. The deviations due to the NP effects become very small, around $0.1\\%$, for $\\cos\\theta \\simeq 0.9$.
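The per-bin significance defined above scales as $\delta\sqrt{N}$ for a relative deviation $\delta$ of the cross section; a minimal sketch, with purely hypothetical event numbers chosen only to illustrate the scaling:

```python
import math

def significance(n_ghu, n_sm):
    """Statistical significance of a per-bin deviation,
    |N_GHU - N_SM| / sqrt(N_SM), as in the formula above."""
    return abs(n_ghu - n_sm) / math.sqrt(n_sm)

# A 1% deviation on a (hypothetical) 4x10^6-event bin:
# delta * sqrt(N) = 0.01 * 2000 = 20 standard deviations.
n_sm = 4.0e6
n_ghu = n_sm * 1.01
assert math.isclose(significance(n_ghu, n_sm), 20.0)
```

This is why the forward bins, where $N$ is largest, carry most of the sensitivity even though the relative NP deviation there is modest.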
\nA similar analysis has been given in Ref.~\\cite{Richard:2018zhl}.\n\nFor small scattering angles $\\theta$, the scattering amplitude is dominated by\n$t$-channel contributions, which are constructed with the blocks $T_{LL}$, $T_{RR}$ and $T_{LR}$.\nWhen $|t| \\simeq s\\theta^{2}\/4 \\ll M_{Z}^{2}, M_{Z'}^{2}$,\nwe can approximate the NP contribution to the block $T_{LL}$ in the scattering amplitude as \n\\begin{align}\nT_{LL}^{\\rm NP} &\\equiv \n\\sum_{Z'} \\frac{\\ell_{Z'}^{2}}{t - M_{Z'}^{2} + iM_{Z'}\\Gamma_{Z'}}\n\\simeq \n\\sum_{Z'} \\frac{\\ell_{Z'}^{2} }{- M_{Z'}^{2}+ i M_{Z'}\\Gamma_{Z'}}.\n\\end{align}\nWhen $s\\theta^{2}\/4 \\ll M_{Z}^{2}$, \n$T_{LL}$ is dominated by the QED part $T_{LL}^{\\rm QED} \\simeq -4e^{2}\/(\\theta^{2}s)$ and the NP effects are estimated as\n\\begin{align}\n\\frac{T_{LL}^{\\rm NP}}{T_{LL}^{\\rm QED}} &\\simeq \\theta^{2} \\sum_{Z'} \\frac{\n(\\ell_{Z'}^{2}\/4e^{2})s }{ M_{Z'}^{2} - i M_{Z'}\\Gamma_{Z'}}\n= \\theta^{2} \\cdot \\mathcal{O}(s\/M_{Z'}^{2}),\n\\end{align}\nand a similar analysis applies to $T_{LR}$ and $T_{RR}$.\nConsequently, this correction arises not only in the amplitudes but also in the\ndifferential cross sections.\nFor $\\sqrt{s} = 250\\,\\text{GeV}$ and $\\theta \\lesssim 300\\text{ mrad}$,\nthe QED $t$-channel contribution dominates \nand the corrections due to $Z'$ bosons are suppressed by a factor $\\theta^{2}s\/M_{Z'}^{2}$.\nIn Figure~\\ref{fig:farforward}, the deviations of the differential cross sections of the GHU models from the SM for $\\theta<300\\text{ mrad}$ are plotted.\nThe deviations of the cross sections from the SM are proportional to the square of \n$\\theta$ and become smaller than $0.1$\\% when $\\theta < 250\\text{ mrad}$.\nThe measurement of Bhabha scattering at small scattering angles is used for the determination of the luminosity of the $\\mathrm{e}^{+} \\mathrm{e}^{-}$ collisions, and the uncertainty of the luminosity should be smaller than $0.1$\\%.\n
In GHU models the NP effects on this uncertainty are well suppressed when $\\theta \\lesssim 100\\text{ mrad}$.\nAt the ILC, the luminosity calorimeter in ILD (SiD) operates between 43 and 68 (40 and 90) mrad \\cite{ILC}, where the influence of the $Z'$ bosons is below 0.1 \\%.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.55\\textwidth]{farforward.pdf}\n\\caption{Deviations of differential cross sections of GHU models from the SM in the forward-scattering region ($\\theta \\le 300\\text{ mrad}$).}\\label{fig:farforward}\n\\end{figure}\n\nWhen the initial electron and positron beams are longitudinally polarized,\nthe left-right asymmetry $A_{\\text{LR}}$ \\eqref{eq:ALR} can be measured.\nIn Figure~\\ref{fig:ALR}, the left-right asymmetries of the SM and GHU models are plotted.\nThe measured asymmetries become larger when $|P_{\\mathrm{e}^{-}} - P_{\\mathrm{e}^{+}}|$ is larger. \nAs seen in Figure~\\ref{fig:Delta-dsigma}, in the A-model the cross section of the $\\mathrm{e}_{R}^{-}\\mathrm{e}_{L}^{+}$ initial states becomes large, whereas\nin the B-model the cross section of the $\\mathrm{e}_{L}^{-}\\mathrm{e}_{R}^{+}$ initial states is enhanced\ndue to the large left-handed $Z'$ couplings.\nTherefore $A_{\\text{LR}}$ in the B-model is larger than in the SM, whereas $A_{\\text{LR}}$ in the A-model is smaller. \n Since $A_{\\text{LR}}$ is proportional to $|P_{\\mathrm{e}^{-}} - P_{\\mathrm{e}^{+}}|$,\nthe asymmetries in Figure~\\ref{fig:ALR}-(a) are almost twice as large as in (b).\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{ALR-ee-GHU-s=250-Pmppm.pdf}\n\\includegraphics[width=0.48\\textwidth]{ALR-ee-GHU-s=250-Pmpmp.pdf}\n\\caption{Left-right asymmetries. In both plots,\nblue-dotdashed, red-dashed and black-solid curves correspond to\n the GHU-A, GHU-B and the SM, respectively. \n Error-bars are indicated for $L_{\\mathrm{int}} = 250\\,\\mathrm{fb}^{-1}$ in each polarization.
\n (a) Asymmetries for $(P_{\\mathrm{e}^{-}},P_{\\mathrm{e}^{+}}) = (\\mp0.8,\\pm0.3)$.\n (b) Asymmetries for $(P_{\\mathrm{e}^{-}},P_{\\mathrm{e}^{+}}) = (\\mp0.8,\\mp0.3)$. }\\label{fig:ALR}\n\\end{figure}\nIn Figure~\\ref{fig:ALR}, the asymmetry $A_{\\text{LR}}$ in each bin and the \nstatistical error $\\Delta A_{\\text{LR}}$ are also shown. \nHere\n\\begin{align}\nA_{\\text{LR}} &= \\frac{N_{L} - N_{R}}{N_{L}+N_{R}},\n&\n\\Delta A_{\\text{LR}} &= \n\\sqrt{\\frac{2 (N_{L}^{2} + N_{R}^{2})}{ (N_{L} + N_{R})^{3}}} \n\\end{align} \nwith $N_{L}$ and $N_{R}$ being the observed numbers of events for the left-handed ($P_{\\mathrm{e}^{-}}<0$) and right-handed ($P_{\\mathrm{e}^{-}}>0$) electron beams, respectively.\nFor small scattering angles, $\\cos\\theta \\gtrsim 0.8$, \n$A_{\\text{LR}}^{\\text{GHU}}$ and $A_{\\text{LR}}^{{\\rm SM}}$ \nbecome close to each other.\n\nTo see how the NP effects grow relative to the statistical uncertainty for small $\\theta$,\nwe plot in Figure~\\ref{fig:sigALR} the statistical significances of the\nleft-right asymmetries in the GHU models in each bin, which are estimated as \n\\begin{align}\n\\frac{A_{LR}^{\\text{GHU}} - A_{LR}^{{\\rm SM}} }{\\Delta A_{\\text{LR}}}.\n\\end{align}\nFor forward scattering with $\\cos\\theta \\gtrsim 0.2$,\nthe deviations are larger than several standard deviations.\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.58\\textwidth]{sigALR.pdf}\n\\caption{%\nEstimated statistical significances of the left-right asymmetries in GHU models.\nThe blue-dotdashed, red-dashed and\nblack-dotted lines indicate\nGHU-A, GHU-B and\n 0.1\\% non-statistical errors, respectively.}\\label{fig:sigALR}\n\\end{figure}\nIn Figure~\\ref{fig:sigALR}, \nthe significance is larger than $5\\,\\sigma$ for $\\cos\\theta\\gtrsim0.2$.
\nBoth models are well distinguished from the SM.\nUsing the magnitude and sign of the deviations, the A-model\nand the B-model can also be distinguished from each other.\n\nIn Figure~\\ref{fig:AX},\nthe asymmetry $A_{X}$ defined in Eq.~\\eqref{eq:AX} is plotted\nfor $\\sqrt{s} = 250\\,\\text{GeV}$ and \n$\\sqrt{s} = 3\\,\\text{TeV}$.\nAt $\\sqrt{s} = 250\\,\\text{GeV}$, the NP effect on $A_{X}$ is very small.\nFor $\\sqrt{s} = 3\\,\\text{TeV}$, the asymmetries $A_{X}$ of the SM and the GHU models are clearly different and may be discriminated experimentally.\nIn the present analysis of NP effects, only the first KK excited states of the neutral bosons are taken into account. At $\\sqrt{s} \\sim 3\\,\\text{TeV}$, the effects of the second KK modes on $A_{X}$ are estimated to be a few percent, much smaller than the effects of the first KK modes.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{AX-ee-GHU-A4B1-s=250.pdf}\\hspace{0.2cm}\n\\includegraphics[width=0.48\\textwidth]{AX-ee-GHU-A4B1_s=3000.pdf}\n\\caption{Plot of the asymmetry $A_{X}$ in Eq.~\\eqref{eq:AX}.\nThe left plot (a) is for $\\sqrt{s} = 250\\,\\text{GeV}$ with $L_{\\mathrm{int}} = 250\\,\\mathrm{fb}^{-1}$ for each polarization.\nThe right plot (b) is for $\\sqrt{s} = 3\\,\\text{TeV}$ with $L_{\\mathrm{int}} = 3\\,\\mathrm{ab}^{-1}$ for each polarization.\nThe red-dashed, blue dot-dashed and black solid curves correspond to the A-model, B-model and the SM, respectively.\nError bars are indicated for $L_{\\mathrm{int}} = 250\\,\\mathrm{fb}^{-1}$ in each polarization.}\\label{fig:AX}\n\\end{figure}\n\n\n\\section{Summary}\nIn this paper we examined the effects of $Z'$ bosons\nin the gauge-Higgs unification (GHU) models in $\\mathrm{e}^{+}\\mathrm{e}^{-}\\to \\mathrm{e}^{+}\\mathrm{e}^{-}$ (Bhabha) scattering.\nWe first formulated the differential cross sections in Bhabha scattering including $Z'$ bosons.\nWe then numerically evaluated the deviations of the differential cross sections in the two \n$\\mathrm{SO}(5) 
\\times \\mathrm{U}(1) \\times \\mathrm{SU}(3)$ GHU models (the A- and B-models) at $\\sqrt{s} = 250\\,\\text{GeV}$. \nWe found that at $L_{\\mathrm{int}} = 2\\,\\mathrm{ab}^{-1}$ with unpolarized $\\mathrm{e}^{+}\\mathrm{e}^{-}$ beams, the deviation due to $Z'$ bosons \nin the GHU models from the SM can be clearly seen.\nWe also found that for $80\\%$-longitudinally polarized electron and $30\\%$-polarized positron beams, \nthe deviations of the differential cross sections from the SM \nbecome as large as a few percent for $\\cos\\theta \\sim 0.2$, and that\nthe A-model and the B-model are well distinguished\nat more than $3\\,\\sigma$ significance\nat $L_{\\mathrm{int}} = 250\\,\\mathrm{fb}^{-1}$.\nWe also checked that the effects of $Z'$ bosons are negligible\nfor scattering angles smaller than $100$ mrad at $\\sqrt{s} = 250\\,\\text{GeV}$. Therefore Bhabha scattering at very small $\\theta$ \ncan be safely used for luminosity measurements in $\\mathrm{e}^{+}\\mathrm{e}^{-}$ collisions. \nFinally, we introduced the new observable $A_{X}$ and\nnumerically evaluated it at $\\sqrt{s} = 250\\,\\text{GeV}$ and\n$3\\,\\text{TeV}$. \nThe effects of the GHU models on $A_{X}$\ncan be seen at future TeV-scale $\\mathrm{e}^{+}\\mathrm{e}^{-}$ colliders.\n\nIn this paper the effects of $Z'$ bosons are calculated at the Born level. 
Higher-order QED effects should be taken \ninto account for more precise evaluation \\cite{Bardin:1990,Bardin:2017}.\n\n\n\\section*{Acknowledgements}\n\nThis work was supported in part \nby European Regional Development Fund-Project Engineering Applications of \nMicroworld Physics (No.\\ CZ.02.1.01\/0.0\/0.0\/16\\_019\/0000766) (Y.O.), \nby the National Natural Science Foundation of China (Grant Nos.~11775092, \n11675061, 11521064, 11435003 and 11947213) (S.F.), \nby the International Postdoctoral Exchange Fellowship Program (IPEFP) (S.F.), \nand by Japan Society for the Promotion of Science, \nGrants-in-Aid for Scientific Research, Nos.\nJP19K03873 (Y.H.) and JP18H05543 (N.Y.).\n\n\n\\vskip 1.cm\n\n\\def\\jnl#1#2#3#4{{#1}{\\bf #2}, #3 (#4)}\n\n\\def{ Z.\\ Phys.} {{ Z.\\ Phys.} }\n\\def{ J.\\ Solid State Chem.\\ }{{ J.\\ Solid State Chem.\\ }}\n\\def{ J.\\ Phys.\\ Soc.\\ Japan }{{ J.\\ Phys.\\ Soc.\\ Japan }}\n\\def{ Prog.\\ Theoret.\\ Phys.\\ Suppl.\\ }{{ Prog.\\ Theoret.\\ Phys.\\ Suppl.\\ }}\n\\def{ Prog.\\ Theoret.\\ Phys.\\ }{{ Prog.\\ Theoret.\\ Phys.\\ }}\n\\def{ Prog.\\ Theoret.\\ Exp.\\ Phys.\\ }{{ Prog.\\ Theoret.\\ Exp.\\ Phys.\\ }}\n\\def{ J. Math.\\ Phys.} {{ J. Math.\\ Phys.} }\n\\def Nucl.\\ Phys. \\textbf{B}{ Nucl.\\ Phys. \\textbf{B}}\n\\def{ Nucl.\\ Phys.} {{ Nucl.\\ Phys.} }\n\\def{ Phys.\\ Lett.} \\textbf{B}{{ Phys.\\ Lett.} \\textbf{B}}\n\\def{ Phys.\\ Lett.} {{ Phys.\\ Lett.} }\n\\defPhys.\\ Rev.\\ Lett. {Phys.\\ Rev.\\ Lett. }\n\\defPhys.\\ Rev. \\textbf{B}{Phys.\\ Rev. \\textbf{B}}\n\\defPhys.\\ Rev. \\textbf{D}{Phys.\\ Rev. \\textbf{D}}\n\\defPhys.\\ Rep. {Phys.\\ Rep. }\n\\def{Ann.\\ Phys.\\ (N.Y.)}{{Ann.\\ Phys.\\ (N.Y.)}}\n\\defRev.\\ Mod.\\ Phys. {Rev.\\ Mod.\\ Phys. }\n\\defZ.\\ Phys. \\textbf{C}{Z.\\ Phys. \\textbf{C}}\n\\defScience{Science}\n\\defComm.\\ Math.\\ Phys. {Comm.\\ Math.\\ Phys. }\n\\defMod.\\ Phys.\\ Lett. \\textbf{A}{Mod.\\ Phys.\\ Lett. \\textbf{A}}\n\\defEur.\\ Phys.\\ J. \\textbf{C}{Eur.\\ Phys.\\ J. 
\\textbf{C}}\n\\defJHEP {JHEP }\n\\def{\\it ibid.} {{\\it ibid.} }\n\n\n\\renewenvironment{thebibliography}[1]\n {\\begin{list}{[$\\,$\\arabic{enumi}$\\,$]} \n {\\usecounter{enumi}\\setlength{\\parsep}{0pt}\n \\setlength{\\itemsep}{0pt} \\renewcommand{\\baselinestretch}{1.2}\n \\settowidth\n {\\labelwidth}{#1 ~ ~}\\sloppy}}{\\end{list}}\n\n\n\n\\leftline{\\Large \\bf References}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\\vskip -0.25cm\nCompressed sensing \\cite{Donoho2006,Candes2008} is a mathematical framework that defines the conditions and tools for the recovery of a signal from a small number of its linear projections (i.e. measurements). In the CS framework, the measurement device acquires the signal in the linear projections domain, and the full signal is reconstructed by convex optimization techniques. CS has diverse applications including image acquisition \\cite{Romberg2008}, radar imaging \\cite{5420035}, Magnetic Resonance Imaging (MRI) \\cite{4472246, 6153065}, spectrum sensing \\cite{6179814}, indoor positioning \\cite{6042868}, bio-signals acquisition \\cite{6184345}, and sensor networks \\cite{6159081}. In this paper we address the problem of block-based CS (BCS) \\cite{Fowler-NOW}, which employs CS on distinct low-dimensional segments of a high-dimensional signal. BCS is mostly suitable for processing very high-dimensional images and video, where it operates on distinct local patches. 
Our approach is based on a deep neural network \\cite{Bengio-2009}, which simultaneously learns the linear sensing matrix and the non-linear reconstruction operator.\\\\\nThe contributions of this paper are two-fold: (1) It presents for the first time, to the best knowledge of the authors, the utilization of a fully-connected deep neural network for the task of BCS; and (2) The proposed network performs both the linear sensing and non-linear reconstruction operators, and during training these operators are \\emph{jointly} optimized, leading to a significant advantage compared to state-of-the-art.\\\\\nThis paper is organized as follows: section \\ref{Problem Formulation} introduces CS concepts, and motivates the utilization of BCS for very high-dimensional images and video. Section \\ref{The Proposed Approach} presents the deep neural network approach, and discusses structure and training aspects. Section \\ref{Results} evaluates the performance of the proposed approach for compressively sensing and reconstructing natural images, and compares it with state-of-the-art BCS methods and full-image Total Variation-based CS. Section \\ref{Conclusions} concludes the paper and discusses future research directions. \\vskip -0.25cm\n\\section{Compressed Sensing Overview}\n\\vskip -0.25cm\n\\label{Problem Formulation}\n\\subsection{Full-Signal Compressed Sensing}\nGiven a signal $\\signal \\in \\mathbf{R}^N$, an $M \\times N$ sensing matrix $\\Phi$ (such that $M \\ll N$) and a measurements vector $\\measurement = \\Phi \\signal$, the goal of CS is to recover the signal from its measurements. The sensing rate is defined by $R=M\/N$, and since $R \\ll 1$ the recovery of $\\signal$ is not possible in the general case. 
According to CS theory \\cite{Donoho2006,Candes2008}, signals that have a sparse representation in the domain of some linear transform can be exactly recovered with high probability from their measurements: let $\\signal = \\Psi \\reps $, where $\\Psi$ is the inverse transform, and $\\reps$ is a sparse coefficients vector with only $S \\ll N$ non-zeros entries, then the recovered signal is synthesized by $ \\hat{\\signal} = \\Psi \\repsHat$, and $\\repsHat$ is obtained by solving the following convex optimization program:\n\\begin{equation}\n \\repsHat = \\argmin_{\\reps{'}} \\left\\|\\reps{'}\\right\\|_1 \\text{ subject to } \\measurement = \\Phi \\Psi \\reps{'},\n\\end{equation}\n\\vskip -0.25cm\n\\noindent where $\\left\\|\\alpha\\right\\|_1$ is the $l_1$-norm, which is a convex relaxation of the $l_0$ pseudo-norm that counts the number of non-zero entries of $\\alpha$. The exact recovery of $\\signal$ is guaranteed with high probability if $\\reps$ is sufficiently sparse and if certain conditions are met by the sensing matrix and the transform.\n\n\\begin{table*}[]\n \\caption{Average reconstruction PSNR [dB] and SSIM vs. 
sensing rate (R=M\/N): for each method and sensing rate, the result is displayed as PSNR | SSIM (each result is the average over the 10 test images).}\n \\label{Reconstruction-Quality}\n \\centering\n\\resizebox{\\textwidth}{!}{\n \\begin{tabular}{lccccc}\n \\hline\n Method & R = 0.1 & R = 0.15 & R = 0.2 & R = 0.25 & R = 0.3\\\\\n \\hline\n Proposed (block-size = 16$\\times$16) & \\textbf{28.21} | \\textbf{0.916} & \\textbf{29.73} | \\textbf{0.948} & \\textbf{31.03} | \\textbf{0.965} & \\textbf{32.15} | \\textbf{0.976} & \\textbf{33.11} | \\textbf{0.983}\\\\\n BCS-SPL-DDWT (16$\\times$16)\\cite{Fowler2009} & 24.92 | 0.789 & 26.12 | 0.834 & 27.17 | 0.873 & 28.16 | 0.898 & 29.02 | 0.917 \\\\\n BCS-SPL-DDWT (32$\\times$32)\\cite{Fowler2009} & 24.99 | 0.781 & 26.40 | 0.833 & 27.46 | 0.868 & 28.43 | 0.894 & 29.29 | 0.914 \\\\\n MH-BCS-SPL (16$\\times$16) \\cite{MH-Fowler} & 26.01 | 0.827 & 27.92 | 0.888 & 29.46 | 0.919 & 30.69 | 0.939 & 31.69 | 0.952 \\\\\n MH-BCS-SPL (32$\\times$32) \\cite{MH-Fowler} & 26.79 | 0.845 & 28.51 | 0.895 & 29.81 | 0.923 & 30.77 | 0.938 & 31.73 | 0.950 \\\\\n MS-BCS-SPL \\cite{MS-Fowler} & 27.32 | 0.883 & 28.77 | 0.909 & 30.04 | 0.934 & 31.15 | 0.956 & 32.05 | 0.974 \\\\\n MH-MS-BCS-SPL \\cite{MH-Fowler} & 27.74 | 0.889 & 29.10 | 0.919 & 30.78 | 0.947 & 31.38 | 0.960 & 32.82 | 0.979 \\\\\n TV (Full Image) \\cite{Romberg2008} & 27.41 | 0.867 & 28.57 | 0.890 & 29.62 | 0.909 & 30.63 | 0.926 & 31.59 | 0.939 \\\\\n \\hline\n \\end{tabular}}\n\\end{table*}\n\\subsection{Block-based Compressed Sensing}\n\\label{Block-based Compressed Sensing}\nConsider applying CS to an image of $L \\times L$ pixels: the technique described above can be employed by column-stacking (or row-stacking) the image to a vector $\\signal \\in \\mathbb{R}^{L^2}$, and the dimensions of the measurement matrix $\\Phi$ and the inverse transform $\\Psi$ are $M \\times {L^2}$ and $ {L^2} \\times {L^2}$, respectively. 
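As a concrete, purely illustrative instance of the full-signal recovery program above, the following Python sketch solves a small synthetic problem with $\Psi = I$ by iterative soft-thresholding (ISTA), a standard proxy for the $\ell_1$ program rather than the solver used in any of the works compared here:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, S = 64, 32, 4                        # signal length, measurements, sparsity

alpha = np.zeros(N)                        # sparse coefficient vector (Psi = I)
alpha[rng.choice(N, size=S, replace=False)] = rng.standard_normal(S)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian sensing matrix
y = Phi @ alpha                            # measurements, R = M/N = 0.5

lam = 1e-3                                 # small l1 weight (proxy for the constraint)
L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(2000):                      # ISTA: gradient step + soft-threshold
    g = x - Phi.T @ (Phi @ x - y) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

rel_err = np.linalg.norm(x - alpha) / np.linalg.norm(alpha)
```

With $S \ll M \ll N$ the reconstruction error is small, in line with the exact-recovery guarantees cited above.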
For modern high-resolution cameras, a typical value of $L$ is in the range of $2000$ to $4000$, leading to overwhelming memory requirements for storing $\\Phi$ and $\\Psi$: for example, with $L=2000$ and a sensing rate $R=0.1$ the dimensions of $\\Phi$ are $400,000 \\times 4,000,000$ and of $\\Psi$ are $4,000,000 \\times 4,000,000$. In addition, the computational load required to solve the CS reconstruction problem becomes prohibitively high. Following this line of arguments, a BCS framework was proposed in \\cite{Gan2007}, in which the image is decomposed into non-overlapping blocks (i.e. patches) of $B \\times B$ pixels, and each block is compressively sensed independently. The full-size image is obtained by placing each reconstructed block in its location within the reconstructed image canvas, followed by full-image smoothing. The dimensions of the block sensing matrix $\\Phi_B$ are ${B^2}R\\times{B^2}$, and the measurement vector of the \\emph{i}-th block is given by:\n\\begin{equation}\n\\label{block-based sensing}\n\\measurement_i = \\Phi_B \\signal_i,\n\\end{equation}\n\\noindent where $\\signal_i \\in \\mathbb{R}^{B^2}$ is the column-stacked block, and $\\Phi_B$ was chosen in \\cite{Gan2007} as an orthonormalized i.i.d Gaussian matrix. Following a per-block minimum mean squared error reconstruction stage, a full-image iterative hard-thresholding algorithm is employed for improving full-image quality. An improvement to the performance of this approach was proposed by \\cite{Fowler2009}, which employed the same BCS approach as \\cite{Gan2007} and evaluated the incorporation of directional transforms such as the Contourlet Transform (CT) and the Dual-tree Discrete Wavelet Transform (DDWT) in conjunction with a Smooth Projected Landweber (SPL) \\cite{bertero1998introduction} reconstruction of the full image. 
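The per-block sensing stage \eqref{block-based sensing} with the orthonormalized i.i.d. Gaussian $\Phi_B$ of \cite{Gan2007} can be sketched in a few lines of Python (illustrative sizes; the row-orthonormalization is done here via a QR factorization):

```python
import numpy as np

rng = np.random.default_rng(1)
B, R = 16, 0.25                       # block size and sensing rate
m = int(B * B * R)                    # measurements per block: 64

# Orthonormalized i.i.d. Gaussian block sensing matrix (m x B^2):
Q, _ = np.linalg.qr(rng.standard_normal((B * B, m)))
Phi_B = Q.T                           # rows orthonormal: Phi_B @ Phi_B.T = I

image = rng.random((64, 64))          # stand-in for an L x L image
blocks = [image[r:r + B, c:c + B].reshape(-1)     # stacked B x B blocks
          for r in range(0, 64, B) for c in range(0, 64, B)]
measurements = [Phi_B @ s_i for s_i in blocks]    # y_i = Phi_B s_i
```

Each block is sensed independently, so the sensing matrix occupies only $B^2 R \times B^2$ entries instead of the prohibitive full-image $L^2 R \times L^2$.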
The conclusion of the experiments conducted in \\cite{Fowler2009} was that in most cases the DDWT transform offered the best performance, and we term this method as BCS-SPL-DDWT. A multi-scale approach was proposed by \\cite{MS-Fowler}, termed MS-BCS-SPL, which improved the performance of BCS-SPL-DDWT by applying the block-based sensing and reconstruction stages in multiple scales and sub-bands of a discrete wavelet transform. A different block dimension was employed for each scale and with a 3-level transform, dimensions of $B=64,32,16$ were set for the high, medium and low resolution scales, respectively. A multi-hypothesis approach was proposed in \\cite{MH-Fowler} for images and videos, which is suitable for either spatial domain BCS (termed MH-BCS-SPL) or multi-scale BCS (termed MH-MS-BCS-SPL). In this approach, multiple predictions of a block are computed from neighboring blocks in an initial reconstruction of the full image, and the final prediction of the block is obtained by an optimal linear combination of the multiple predictions. For video frames, previously reconstructed adjacent frames provide the sources for multiple predictions of a block. The multi-scale version of this approach provides the best performance among all above mentioned BCS methods. A detailed survey of BCS theory and performance is provided in \\cite{Fowler-NOW}, which describes additional applications such as BCS of multi-view images and video, and motion-compensated BCS of video.\n\\section{The Proposed Approach}\n\\vskip -0.25cm\n\\label{The Proposed Approach}\nIn this paper we propose to employ a deep neural network that performs BCS by processing each block independently\\footnote{In this paper we treat only block-based processing, and a full-image post-processing stage is not performed.} as described in section \\ref{Block-based Compressed Sensing}. 
Our choice is motivated by the outstanding success of deep neural networks for the task of full-image denoising \\cite{Burger} in which a 4-layer neural network achieved state-of-the-art performance by block-based processing. In our approach, the first hidden layer performs the linear block-based sensing stage (\\ref{block-based sensing}) and the following hidden layers perform the non-linear reconstruction stage. The advantage and novelty of this approach is that during training, the sensing matrix and the non-linear reconstruction operator are \\emph{jointly} optimized, leading to better performance than state-of-the-art at a fraction of the computation time. \\\\ The proposed fully-connected network includes the following layers: (1) an input layer with $B^2$ nodes; (2) a compressed sensing layer with $B^{2}R$ nodes, $R\\ll1$ (its weights form the sensing matrix); (3) $K\\ge1$ reconstruction layers with $B^{2}T$ nodes, each followed by a ReLU \\cite{icml2010_NairH10} activation unit, where $T>1$ is the redundancy factor; and (4) an output layer with $B^2$ nodes. Note that the performance of the network depends on the block-size $B$, the number of reconstruction layers $K$, and their redundancy $T$. We have evaluated\\footnote{The network was implemented using Torch7 \\cite{Collobert_NIPSWORKSHOP_2011} scripting language, and trained on NVIDIA Titan X GPU card.} these parameters by a set of experiments that compared the average reconstruction PSNR of 10 test images, depicted in Figure \\ref{test_images}, and by training the network with 5,000,000 distinct patches, randomly extracted from 50,000 images in the LabelMe dataset \\cite{LabelMe}. The chosen optimization algorithm was AdaGrad \\cite{AdaGrad} with a learning rate of $0.005$ (100 epochs), and batch size of 16. 
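The layer sizes listed above fix the parameter count of the network; a short Python check (assuming dense layers with bias terms and a sensing layer of $\lfloor B^2 R \rfloor = 25$ nodes at $R = 0.1$) reproduces the total quoted in the text:

```python
B, R, T = 16, 0.1, 8                  # block size, sensing rate, redundancy

sizes = [
    B * B,                            # (1) input layer: 256 nodes
    int(B * B * R),                   # (2) compressed sensing layer: 25 nodes
    B * B * T,                        # (3) reconstruction layer 1: 2048 (+ReLU)
    B * B * T,                        #     reconstruction layer 2: 2048 (+ReLU)
    B * B,                            # (4) output layer: 256 nodes
]

# Fully connected layers: n_in * n_out weights plus n_out biases each.
n_params = sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))
print(n_params)                       # 4780569
```

This matches the 4,780,569 parameters reported for the $B=16$, $K=2$, $T=8$, $R=0.1$ configuration.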
Our study revealed that the best\\footnote{Note that by significantly increasing the training set, slightly different values of $B$, $K$, and $T$ may provide better results, as discussed in \\cite{Burger}.} performance was achieved with a block size $B\\times B = 16\\times16$, $K=2$ reconstruction layers and redundancy $T=8$, leading to a total of 4,780,569 parameters ($R=0.1$). Table \\ref{Reconstruction-Quality-vs-Size} provides a comparison for varying the block size from $8\\times8$ to $20\\times20$ (with 2 reconstruction layers and redundancy of 8), and indicates that a block size of $16\\times16$ provides the best results. Table \\ref{Reconstruction-Quality-vs-Redundancy} provides a comparison for varying the redundancy from 2 to 12 (with 2 reconstruction layers and block size of $16\\times 16$), and indicates that a redundancy of 8 provides the best results. Table \\ref{Reconstruction-Quality-vs-Layers} provides a comparison for varying the number of hidden reconstruction layers from 1 to 8 (with redundancy of 8 and block size of $16\\times 16$), and indicates that two reconstruction layers provided the best performance.\n\\begin{table}[]\n \\vskip -0.25cm\n \\caption{Reconstruction PSNR [dB] vs. block size ($B\\times B$)}\n \\label{Reconstruction-Quality-vs-Size}\n \\centering\n \\begin{tabular}{cllll}\n \\hline\n \n \n Training Examples & $B=8$ & $B=12$ & $B=16$ & $B=20$ \\\\\n \\hline\n \n $5 \\times 10^6$ & 27.21 & 27.66 & \\textbf{28.21} & 27.73 \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\begin{table}[]\n \\vskip -0.25cm\n \\caption{Reconstruction PSNR [dB] vs. network redundancy}\n \\label{Reconstruction-Quality-vs-Redundancy}\n \\centering\n \\begin{tabular}{cllll}\n \\hline\n \n \n Training Examples & $T=2$ & $T=4$ & $T=8$ & $T=12$ \\\\\n \\hline\n $5 \\times 10^6$ & 27.99 & 28.11 & \\textbf{28.21} & 28.15 \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\begin{table}[htb]\n \\vskip -0.25cm\n \\caption{Reconstruction PSNR [dB] vs. no. 
of reconstruction layers}\n \\label{Reconstruction-Quality-vs-Layers}\n \\centering\n \\begin{tabular}{ccccc}\n \\hline\n \n \n Training Examples & $K=1$ & $K=2$ & $K=4$ & $K=8$ \\\\\n \\hline\n $5 \\times 10^6$ & 27.98 & \\textbf{28.21} & 28.18 & 27.07 \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\begin{table}[t]\n \\caption{Computation time at R=0.25 ($512 \\times 512$ images):}\n \\label{Comp-Time}\n \\centering\n \\begin{tabular}{lc}\n \\hline\n Method & Time [seconds]\\\\\n \\hline\n \n Proposed & 0.80\\\\\n BCS-SPL-DDWT ($16 \\times 16$) \\cite{Fowler2009} & 13.57\\\\\n BCS-SPL-DDWT ($32 \\times 32$) \\cite{Fowler2009} & 13.10\\\\\n MH-BCS-SPL ($16 \\times 16$) \\cite{MH-Fowler} & 144.61\\\\\n MH-BCS-SPL ($32 \\times 32$) \\cite{MH-Fowler} & 69.73\\\\\n MS-BCS-SPL \\cite{MS-Fowler} & 6.39\\\\\n MH-MS-BCS-SPL \\cite{MH-Fowler} & 207.32\\\\\n TV (Full Image) \\cite{Romberg2008} & 1675.09\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\section{Performance Evaluation}\n\\vskip -0.25cm\n\\label{Results}\nThis section provides performance evaluation results of the proposed approach\\footnote{A MATLAB package implementing the proposed approach is available at: \\url{http:\/\/www.cs.technion.ac.il\/~adleram\/BCS_DNN_2016.zip}} vs. the leading BCS approaches: BCS-SPL-DDWT \\cite{Fowler2009}, MS-BCS-SPL \\cite{MS-Fowler}, MH-BCS-SPL \\cite{MH-Fowler} and MH-MS-BCS-SPL \\cite{MH-Fowler}, using the original code provided by their authors. The proposed approach was employed with block size $16 \\times 16$, BCS-SPL-DDWT with block sizes of $16 \\times 16$ and $32 \\times 32$ (the optimal size for this method), MH-BCS-SPL with block sizes of $16 \\times 16$ and $32 \\times 32$ (the optimal size for this method). MS-BCS-SPL and MH-MS-BCS-SPL utilized a 3-level discrete wavelet transform with block sizes as indicated in section \\ref{Block-based Compressed Sensing} (their optimal settings). 
In addition, we also compared to the classical full-image Total Variation (TV) CS approach of \\cite{Romberg2008} that utilizes a sensing matrix with elements from a discrete cosine transform and Noiselet vectors. Reconstruction performance was evaluated for sensing rates in the range of $R=0.1$ to $R=0.3$, using the average PSNR and SSIM \\cite{SSIM} over the collection of 10 test images ($512 \\times 512$ pixels), depicted in Figure \\ref{test_images}. Reconstruction results are summarized in Table \\ref{Reconstruction-Quality}, and reveal a consistent advantage of the proposed approach vs. all BCS methods as well as the full-image TV approach. Visual quality comparisons (best viewed in the electronic version of this paper) are provided in Figures \\ref{results_comp_01}-\\ref{results_comp_025_2}, and demonstrate the high visual quality of the proposed approach. Computation time comparison at a sensing rate $R=0.25$, with a MATLAB implementation of all tested methods (without GPU), is provided in Table \\ref{Comp-Time} and demonstrates that the proposed approach is over 200-times faster than state-of-the-art (MH-MS-BCS-SPL), and over 1600-times faster than full-image TV CS.\n\\begin{figure*}[]\n\\begin{minipage}{\\linewidth}\n\\vskip -1cm\n \\makebox[\\linewidth]{\n\\centering\n\\includegraphics[width=200mm, scale=0.55]{Test_Images.eps}}\n\\vskip -1cm\n \\caption{Test images ($512 \\times 512$): 'lena', 'bridge', 'barbara', 'peppers', 'mandril', 'houses', 'woman', 'boats', 'cameraman' and 'couple'.}\n\\label{test_images}\n \\end{minipage}\n\\end{figure*}\n \\begin{figure*}[]\n \\begin{minipage}{\\linewidth}\n \\vskip -1cm\n \\makebox[\\linewidth]{\n\\centering\n\\includegraphics[width=200mm,scale=0.5]{compare_reconst_rate_01_couple.eps}}\n\\vskip -1cm\n \\caption{Reconstruction of 'couple' at R = 0.1 (PSNR [dB] | SSIM): (a) Original; (b) Full image TV (27.1691 | 0.8812); (c) MS-BCS-SPL (26.8429 | 0.8756); (d) MH-MS-BCS-SPL (27.1804 | 0.8877); and (e) Proposed 
(28.5902 | 0.9414).}\n \\label{results_comp_01}\n \\end{minipage}\n\\end{figure*}\n \\begin{figure*}[]\n \\begin{minipage}{\\linewidth}\n \\vskip -1cm\n \\makebox[\\linewidth]{\n\\centering\n\\includegraphics[width=200mm,scale=0.5]{compare_reconst_rate_02_houses.eps}}\n\\vskip -1cm\n \\caption{Reconstruction of 'houses' at R = 0.2 (PSNR [dB] | SSIM): (a) Original; (b) Full image TV (31.0490 | 0.9304); (c) MS-BCS-SPL (31.4317 | 0.9497); (d) MH-MS-BCS-SPL (31.7030 | 0.9544); and (e) Proposed (32.9328 | 0.9766).}\n \\label{results_comp_02}\n \\end{minipage}\n\\end{figure*}\n \\begin{figure*}[]\n \\begin{minipage}{\\linewidth}\n \\vskip -1cm\n \\makebox[\\linewidth]{\n\\centering\n\\includegraphics[width=200mm,scale=0.5]{compare_reconst_rate_025.eps}}\n\\vskip -1cm\n \\caption{Reconstruction of 'lena' at R = 0.25 (PSNR [dB] | SSIM): (a) Original; (b) Full image TV (35.4202 | 0.9718); (c) MS-BCS-SPL (36.5555 | 0.9861); (d) MH-MS-BCS-SPL (35.7346 | 0.9825; and (e) Proposed (36.3734 | 0.9910).}\n \\label{results_comp_025_1}\n \\end{minipage}\n\\end{figure*}\n \\begin{figure*}[]\n \\begin{minipage}{\\linewidth}\n \\vskip -1cm\n \\makebox[\\linewidth]{\n\\centering\n\\includegraphics[width=200mm,scale=0.5]{compare_reconst_rate_03_boats.eps}}\n\\vskip -1cm\n \\caption{Reconstruction of 'boats' at R = 0.3 (PSNR[dB] | SSIM): (a) Original; (b) Full image TV (32.5986 | 0.9598); (c) MS-BCS-SPL (32.5063 | 0.9835); (d) MH-MS-BCS-SPL (32.7697 | 0.9847); and (e) Proposed (34.0065 | 0.9914).}\n \\label{results_comp_025_2}\n \\end{minipage}\n\\end{figure*}\n\\section{Conclusions}\n\\vskip -0.25cm\n\\label{Conclusions}\nThis paper presents a deep neural network approach to BCS, in which the sensing matrix and the non-linear reconstruction operator are jointly optimized during the training phase. The proposed approach outperforms state-of-the-art both in terms of reconstruction quality and computation time, which is two orders of magnitude faster than the best available BCS method. 
Our approach can be further improved by extending it to compressively sense blocks in a multi-scale representation of the sensed image, either by utilizing standard transforms or by deep learning of new transforms using convolutional neural networks.\n\\vskip -0.25cm\n\\bibliographystyle{IEEEbib}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe scalar sectors of the supersymmetric field theories especially\nthe supergravity theories which are the theories that govern the\nmassless sector low energy background coupling of the relative\nsuperstring theories \\cite{kiritsis} can be formulated as\nprincipal sigma models or non-linear sigma models whose target\nspaces are group manifolds or coset spaces. In particular a great\nmajority of the supergravity scalar sectors are constructed as\nsymmetric space sigma models\n\\cite{sm1,sm2,sm3,nej1,nej2,nej3,sssm1}. When the Abelian\n(Maxwell) vector multiplets are coupled to the graviton multiplets\nin these theories the scalar sector which has the non-linear sigma\nmodel interaction with in itself is coupled to the Abelian gauge\nfields through a kinetic term in the Lagrangian\n\\cite{julia1,julia2,ker1,ker2}.\n\nIn this work, we will focus on the gauge field equations of the\nprincipal sigma model and the Abelian gauge field couplings\nmentioned above. Bearing in mind that the non-linear sigma model\ncan be obtained from the principal sigma model by imposing extra\nrestrictions for the sake of generality we will consider the\ngeneric form of the principal sigma model as the non-linear\ninteraction of the scalars. We will simply show that the gauge\nfield equations can be locally integrated so that they can be\nexpressed as first-order equations containing arbitrary locally\nexact differential forms. 
The first-order form of the gauge field\nequations will be used to show that there exists a one-sided\ndecoupling between the scalars and the gauge fields, in the sense\nthat the scalar field equations do not contain the gauge fields,\nwhereas the scalars enter as sources in the gauge field\nequations. Thus the scalar solution space of the coupled theory\ncoincides with the general solution space of the pure sigma model.\nFurthermore, we will discuss how this hidden on-shell\nscalar-matter decoupling results in a number of coupled Maxwell\ntheories with sources, whose currents contain the general solutions\nof the principal sigma model, which is itself completely decoupled from\nthe Maxwell sector. We will therefore show that once the general\nsolutions of the pure principal sigma model are obtained, and once\none fixes a sector of the solution space of the coupled theory\nby fixing the field dependence or independence of the locally\nexact differential forms appearing in the first-order gauge field\nequations, one may determine the currents of the coupled Maxwell\nfields. In this respect one may solve for the gauge fields from the\nMaxwell-sector field equations. Consequently, the solution space of\nthe scalar-matter coupling can be entirely generated by the\ngeneral solutions of the pure principal sigma model and the\narbitrary choice of the locally exact differential forms. 
This\nfact is a consequence of the solution methodology which is based\non the implicit on-shell decoupling between the matter fields and\nthe scalar sector which provides current sources to the former.\n\\section{Hidden Decoupling Structure of the Gauge Fields and Their Sources}\nIn a $D$-dimensional spacetime $M$ the Lagrangian which gives the\ninhomogeneous Maxwell equations can be given as\n\\begin{equation}\\label{de1}\n {\\mathcal{L}}=-\\frac{1}{2}dA\\wedge\n \\ast dA-A\\wedge\\ast J,\n\\end{equation}\nwhere $A$ is the $U(1)$ electromagnetic gauge potential one-form\nand $F=dA$ is the field strength of it. Also in the units where\nthe speed of light is unity the current one-form in a local\ncoordinate basis $\\{dt,dx^{a}\\}$ is\n\\begin{equation}\\label{de2}\n J=-\\rho dt+\\mathbf{J}_{a}dx^{a},\n\\end{equation}\nwhere in the temporal component $\\rho$ is the charge density and\nthe spatial components $\\mathbf{J}_{a}$ are the current densities.\nThe Lagrangian \\eqref{de1} defines a media in which the charge\ndensity and the currents are not influenced by the electromagnetic\nfield. The current one-form is predetermined and static that is to\nsay although it acts as a source for the electromagnetic field it\ndoes not interact with it dynamically. From \\eqref{de1} the\ninhomogeneous Maxwell equations read\n\\begin{equation}\\label{de2.5}\n d\\ast F=-\\ast J.\n\\end{equation}\nIn this section we will consider the coupling of $N$ $U(1)$ gauge\nfield one-forms $A^{i}$ to the principal sigma model whose target\nspace is a group manifold $G$. 
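Before turning to the sigma model, note that the Maxwell equation \eqref{de2.5} follows from a one-line variation of \eqref{de1}: writing $F = dA$ and using $d(\delta A \wedge \ast F) = d\delta A \wedge \ast F - \delta A \wedge d\ast F$ for the one-form $\delta A$,

```latex
\delta \mathcal{L}
  = - d\delta A \wedge \ast F - \delta A \wedge \ast J
  = - d\left( \delta A \wedge \ast F \right)
    - \delta A \wedge \left( d \ast F + \ast J \right) ,
```

so that, discarding the boundary term, stationarity under arbitrary $\delta A$ gives $d\ast F = -\ast J$.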
The sigma model Lagrangian can be\ngiven as\n\\begin{equation}\\label{de3}\n {\\mathcal{L}}=\\frac{1}{2}\\, tr(\\ast dg^{-1}\\wedge\n dg).\n\\end{equation}\nHere we take a differentiable map\n\\begin{equation}\\label{de4}\n h: M\\longrightarrow G,\n\\end{equation}\nwe also consider a representation $f$ of $G$ in $Gl(N,\\Bbb{R})$\n\\begin{equation}\\label{de5}\n f: G\\longrightarrow Gl(N,\\Bbb{R}),\n\\end{equation}\nwhich may be taken as a differentiable homomorphism. Then the map\n$g$ can be given as\n\\begin{equation}\\label{de6}\n g=f\\circ h: M\\longrightarrow Gl(N,\\Bbb{R}),\n\\end{equation}\nwhich is a matrix-valued function on $M$\n\\begin{equation}\\label{de7}\ng(p)=\\left(\\begin{array}{ccc}\n \\varphi^{11}(p) & \\varphi^{12}(p) & \\cdots \\\\\n \\varphi^{21}(p) & \\varphi^{22}(p) & \\cdots \\\\\n \\vdots & \\vdots & \\vdots \\\\\n\\end{array}\\right),\n\\end{equation}\n$\\forall p\\in M$. Due to the presence of the scalar fields\n$\\{\\varphi^{ij}\\}$ the theory can be considered to be a scalar\nfield theory. In \\eqref{de3} we have a matrix multiplication with\nthe wedge product used between the components and the trace is\nover the representation chosen in \\eqref{de5}. Depending on the\nrestrictions on the sigma model and the nature of $G$ the scalars\nin \\eqref{de7} can be all independent or not. This model also\ncovers the non-linear (coset) sigma models \\cite{westsugra,tanii}\nand in particular the symmetric space sigma models\n\\cite{sm1,sm2,sm3} which shape the scalar sectors of the\nsupergravities thus the low energy effective string theories. 
The\nconstruction of the Lagrangian of the symmetric space sigma model\nin which an internal metric substitutes the map $g$ can be found\nin \\cite{nej1,nej2,nej3}.\n\nBy generalizing the supersymmetric coupling of the supergravity\nmatter multiplets with the graviton multiplets\n\\cite{julia1,julia2,ker1,ker2,sssugradivdim} we can write down the\ncoupling of $N$ $U(1)$ gauge field one-forms $A^{i}$ to the\nprincipal sigma model whose target space is a group manifold as\n\\begin{equation}\\label{de8}\n \\mathcal{L}_{tot}=\\frac{1}{2}tr( \\ast dg^{-1}\\wedge dg)\n -\\frac{1}{2}\\ast F^{T}g\\wedge\n F,\n\\end{equation}\nwhere we define the column vector $F$ whose components are $F^{i}$\nthus the coupling term can be explicitly written as\n\\begin{equation}\\label{de9}\n-\\frac{1}{2}\\ast F^{T}g\\wedge\n F =-\\frac{1}{2}g^{i}_{\\:\\:\\:j} \\ast F_{i}\\wedge\n F^{j}.\n\\end{equation}\n Now if we\nvary the Lagrangian \\eqref{de8} with respect to $A^{i}$ we find\nthe corresponding field equations as\n\\begin{equation}\\label{de10}\nd(\\mathcal{T}^{i}_{\\:\\:\\:k}\\ast F_{i})=0,\n\\end{equation}\nwhere\n\\begin{equation}\\label{de11}\n\\mathcal{T}^{i}_{\\:\\:\\:k}=g^{i}_{\\:\\:\\:k}+(g^{T})^{i}_{\\:\\:\\: k}.\n\\end{equation}\nThe expression \\eqref{de10} defines a closed form. Since locally\nany closed form is equal to an exact form we can integrate\n\\eqref{de10} to write\n\\begin{equation}\\label{de12}\n\\mathcal{T}^{i}_{\\:\\:\\:k}\\ast F_{i}=dC_{k},\n\\end{equation}\nwhere $\\{C_{k}\\}$ are arbitrary $N$ $(D-3)$-forms on $M$. They may\nbe chosen to depend on the fields $\\{\\varphi^{ij},A^{i}\\}$ or they\nmay be fixed. The former case is the most general one. The general\nsolutions of the field equations of the Lagrangian \\eqref{de8}\nmust satisfy \\eqref{de12} on-shell in which one can freely change\nthe set of $(D-3)$-forms $\\{C_{k}\\}$ which may be functions of\n$\\{\\varphi^{ij},A^{i}\\}$ or not. 
To generate the entire solution\nspace one can first choose a set of field-dependent or\nfield-independent $\\{C_{k}\\}$ and solve the system of field\nequations of \\eqref{de8} to find the corresponding solutions, and\nthen repeat this procedure for a different set of $\\{C_{k}\\}$.\nIn this respect the field-independent or the fixed $(D-3)$-forms\n$\\{C_{k}\\}$ can be regarded as integration constants arising\nfrom the reduction of the order of the gauge field equations,\nwhich are second-order differential equations. Now we will follow\nthe methodology described above to show that there exists an\non-shell decoupling between the scalars and the gauge fields. Let\nus first take a look at the more restricted case and fix a set of\nfield-independent $\\{C_{k}\\}$. In this case if we take the partial\nderivative of both sides of \\eqref{de12} with respect to the\nscalar fields $\\{\\varphi^{ml}\\}$ we immediately see that\n\\begin{equation}\\label{de13}\n\\frac{\\partial\\mathcal{T}^{i}_{\\:\\:\\:k}}{\\partial\\varphi^{ml}}\\ast\nF_{i}=0,\n\\end{equation}\nsince the right hand side of \\eqref{de12} is a fixed $(D-2)$-form\nand it does not depend on the scalar fields $\\{\\varphi^{ml}\\}$.\nThese conditions must be satisfied on-shell by the sector of the\nsolution space of $\\{\\varphi^{ml},A^{i}\\}$ that corresponds to\nthe choice of fixed $\\{C_{k}\\}$. After fixing the set\nof $(D-3)$-forms $\\{C_{k}\\}$, if we use \\eqref{de12} in \\eqref{de8}\nwe can obtain the on-shell Lagrangian as\n\\begin{equation}\\label{de14}\n \\mathcal{L}_{tot}=\\frac{1}{2}tr( \\ast dg^{-1}\\wedge dg)\n -\\frac{1}{2}dC_{j}\\wedge\n F^{j}.\n\\end{equation}\nSince we have already made use of the gauge field equations, varying this\non-shell Lagrangian with respect to $A^{i}$ yields an identity.
On\nthe other hand, by varying the above Lagrangian with respect to the\nscalars to find the solutions which satisfy \\eqref{de12} for a\nchosen fixed set of $\\{C_{k}\\}$, we find that the scalar field\nequations are the same as those which can be obtained directly from\nthe pure sigma model Lagrangian \\eqref{de3}, since the coupling\npart in \\eqref{de14} does not depend on the scalars. This result\ncan also be obtained by directly varying \\eqref{de8} and by using\nthe conditions \\eqref{de13}, which are satisfied by the sector of\nthe general solutions of the theory to which we have restricted\nourselves by fixing $\\{C_{k}\\}$. The same result would be\nobtained if one chooses another field-independent set of\n$\\{C_{k}\\}$. Thus we observe that the scalar solutions of a\nsub-sector of the scalar-gauge field theory defined by the\ncoupling Lagrangian \\eqref{de8} coincide with the general\nsolution space of the pure principal sigma model. This is a\nconsequence of the local integration given in \\eqref{de12} and\nfixing the arbitrary $\\{C_{k}\\}$.\n\nOn the other hand a similar result with a minor difference can\nalso be derived for the rest of the solution space of the theory.\nIf we consider the most general case of field-dependent\ndifferential forms $\\{C_{k}(\\varphi^{ml},A^{i})\\}$ and use\n\\eqref{de12} in the Lagrangian \\eqref{de8} we get\n\\begin{equation}\\label{de14.5}\n \\mathcal{L}_{tot}=\\frac{1}{2}tr( \\ast dg^{-1}\\wedge dg)\n -\\frac{1}{2}dC_{j}(\\varphi^{ml},A^{i})\\wedge\n F^{j}.\n\\end{equation}\nThis on-shell Lagrangian gives us the same decoupling conditions\ndiscussed above for the restricted sub-sector case. To see this,\nnote that the second term in the Lagrangian \\eqref{de14.5}, which\nis written in an on-shell form, is a closed differential form, and\nlocally any closed differential form is exact.
Thus the second term in the Lagrangian \\eqref{de14.5}\ncan be written as an exact differential form as\n\\begin{equation}\\label{de14.6}\n -\\frac{1}{2}dC_{j}(\\varphi^{ml},A^{i})\\wedge\n F^{j}=dB(\\varphi^{ml},A^{i}).\n\\end{equation}\nIn fact we may simply calculate $B(\\varphi^{ml},A^{i})$ as\n\\begin{equation}\\label{de14.65}\n B(\\varphi^{ml},A^{i})=-\\frac{1}{2}C_{j}(\\varphi^{ml},A^{i})\\wedge\n F^{j}.\n\\end{equation}\n Thus the Lagrangian \\eqref{de14.5} becomes\n\\begin{equation}\\label{de14.7}\n \\mathcal{L}_{tot}=\\frac{1}{2}tr( \\ast dg^{-1}\\wedge dg)\n +d(-\\frac{1}{2}C_{j}(\\varphi^{ml},A^{i})\\wedge\n F^{j}).\n\\end{equation}\nIf one varies the above Lagrangian one immediately sees that the\nsecond term does not contribute to the field equations, since by\nStokes' theorem we have\n\\begin{equation}\\label{de14.75}\n\\int\\limits_{M}d\\delta B(\\varphi^{ml},A^{i})=\\int\\limits_{\\partial\nM}\\delta B(\\varphi^{ml},A^{i}).\n\\end{equation}\nIf $M$ does not possess a boundary the right hand side of the above\nequation is automatically zero, whereas if it has a boundary then\nthe usual variational principles demand that the variations of the\nfields on the boundary be chosen to be zero, in which case again\nthe right hand side of \\eqref{de14.75} becomes zero. Therefore we\nconclude that the on-shell Lagrangian \\eqref{de14.5}, which is\nresponsible for the most general structure of the solution space,\ngives us scalar field equations which are the same as the\npure sigma model field equations, since, as we have proven above\nas an on-shell condition, the coupling part in \\eqref{de14.5}\ndoes not contribute to the scalar field equations at all.
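The identification \\eqref{de14.65} may be checked directly: since the field\nstrengths obey the Bianchi identities $dF^{j}=0$, one has\n\\begin{equation*}\nd\\Big(-\\frac{1}{2}C_{j}\\wedge F^{j}\\Big)=-\\frac{1}{2}dC_{j}\\wedge\nF^{j}-\\frac{1}{2}(-1)^{(D-3)}C_{j}\\wedge dF^{j}=-\\frac{1}{2}dC_{j}\\wedge F^{j},\n\\end{equation*}\nin agreement with \\eqref{de14.6}.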
Also, as\nwe have already encountered in the field-independent\n$\\{C_{k}\\}$ sub-sector case, varying \\eqref{de14.5} does not give\nany information about the gauge fields $A^{i}$ (it merely yields\nan identity), since we have already made use of their field\nequations in writing it.\n\nIn summary, we have shown that the solutions of the theory must\nobey \\eqref{de12} for arbitrary right hand sides. If one\ndetermines the right hand sides in \\eqref{de12} one restricts\noneself to a layer of solutions. In this case the equations\n\\eqref{de12}, which descend from the gauge field equations, become\non-shell conditions. We have proven that if we use these\nfirst-order field equations which are on-shell conditions back in\nthe general Lagrangian \\eqref{de8} then the scalars of this\nparticular layer of solutions do not have gauge fields in their\nfield equations. Therefore the scalars of this particular layer\nbelong to the general solutions of the pure principal sigma model.\nBy running the right hand sides in \\eqref{de12} over the entire\nlocal field-dependent exact $(D-2)$-forms one can generalize this\nresult to the whole solution space. Thus in this way we have\nproven that the entire set of scalar solutions of the coupled theory\ncoincides with the pure sigma model solution space. Now from\n\\eqref{de12} we can write down the gauge field strengths as\n\\begin{equation}\\label{de15}\nF_{l}=(-1)^{s}(\\mathcal{T}^{-1})^{k}_{\\:\\:\\:l}\\ast\ndC_{k}(\\varphi^{mn},A^{i}),\n\\end{equation}\nwhere $s$ is the signature of the spacetime. We observe that if we\nconsider the sector of the solution space generated by the\nfield-independent $\\{C_{k}\\}$ then after obtaining the general\nsolutions of the pure principal sigma model and after choosing a\nfixed set $\\{C_{k}\\}$ one can use these in \\eqref{de15} to find\nthe corresponding $U(1)$ gauge field strengths.
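The sign factor in \\eqref{de15} can be traced to the double application of\nthe Hodge star: with the convention $\\ast\\ast\\omega=(-1)^{p(D-p)+s}\\omega$\nfor a $p$-form $\\omega$, a two-form satisfies $\\ast\\ast F=(-1)^{s}F$, so that\napplying $\\ast$ to \\eqref{de12} written as $\\ast\nF_{l}=(\\mathcal{T}^{-1})^{k}_{\\:\\:\\:l}dC_{k}$ gives\n\\begin{equation*}\nF_{l}=(-1)^{s}\\ast(\\ast F_{l})=(-1)^{s}(\\mathcal{T}^{-1})^{k}_{\\:\\:\\:l}\\ast dC_{k}.\n\\end{equation*}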
Also for the more\ngeneral field-dependent case of $\\{C_{k}(\\varphi^{ml},A^{i})\\}$\none again obtains the general solutions of the pure principal\nsigma model and then inserts these solutions in \\eqref{de15} to\nsolve for the gauge fields. We can say that in general we have a\npartial decoupling between the scalars and the gauge fields. The\nscalars are not affected by the presence of the gauge fields; on\nthe other hand, as we will show next, they act as sources for the\ngauge fields. Now, after applying the Hodge star operator, if\nwe take the exterior derivative of both sides of \\eqref{de15} we\nobtain\n\\begin{equation}\\label{de16}\nd\\ast F_{l}=(d\\mathcal{T}^{-1})^{k}_{\\:\\:\\:l}\\wedge\ndC_{k}(\\varphi^{mn},A^{i}).\n\\end{equation}\nWhen we compare this result with the inhomogeneous Maxwell\nequations \\eqref{de2.5} we observe that the current one-forms\nbecome\n\\begin{equation}\\label{de17}\nJ_{l}=(-1)^{(D+s)}\\ast((d\\mathcal{T}^{-1})^{k}_{\\:\\:\\:l}\\wedge\ndC_{k}(\\varphi^{mn},A^{i})).\n\\end{equation}\nOne can furthermore verify that the currents in \\eqref{de17} obey\nthe current conservation law\n\\begin{equation}\\label{de18}\nd\\ast J_{l}=0,\n\\end{equation}\nwhich guarantees that the equations \\eqref{de16} have solutions.\nIn a local moving co-frame field $\\{e^{\\alpha}\\}$ on the spacetime\n$M$, if we introduce the components of the one-forms\n$(d\\mathcal{T}^{-1})^{k}_{\\:\\:\\:l}$, the $(D-2)$-forms\n$dC_{k}(\\varphi^{ml},A^{i})$, and the one-forms $J_{l}$ as\n\\begin{subequations}\\label{de19}\n\\begin{align}\n(d\\mathcal{T}^{-1})^{k}_{\\:\\:\\:l}&=(\\mathcal{T}^{-1k}_{l})_{\\alpha}e^{\\alpha},\\notag\\\\\ndC_{k}(\\varphi^{ml},A^{i})&=\\frac{1}{(D-2)!}(\\mathcal{C}_{k})_{\\alpha_{1}\\cdots\\alpha_{(D-2)}}e^{\\alpha_{1}\\cdots\n\\alpha_{(D-2)}},\\notag\\\\\nJ_{l}&=\\mathcal{J}_{l\\beta}e^{\\beta},\\tag{\\ref{de19}}\n\\end{align}\n\\end{subequations}\nfrom \\eqref{de17} we can calculate the components of the
current\none-forms as\n\\begin{equation}\\label{de20}\n\\mathcal{J}_{l\\beta}=\\frac{(-1)^{s}\\sqrt{|\\det H|}}{(D-2)!}(\\mathcal{T}^{-1k}_{l})_{\\alpha}\n(\\mathcal{C}_{k})_{\\alpha_{1}\\cdots\\alpha_{(D-2)}}H^{\\alpha_{1}\\beta_{1}}\\cdots\nH^{\\alpha\\beta_{(D-1)}}\\varepsilon_{\\beta_{1}\\cdots\\beta_{(D-1)}\\beta},\n\\end{equation}\nwhere we have introduced the metric $H$ on $M$ and the Levi-Civita\nsymbol $\\varepsilon$. In \\eqref{de20},\n$\\alpha,\\beta,\\alpha_{i},\\beta_{j}=1,\\cdots,D$, with\n$i=1,\\cdots,D-2$ for the $\\alpha_{i}$ and\n$j=1,\\cdots,D-1$ for the $\\beta_{j}$.\n\nIf one strips an exterior derivative from both sides of \\eqref{de16}\n(that is, performs a local integration) one finds\n\\begin{equation}\\label{de21}\n\\ast\nF_{l}=(\\mathcal{T}^{-1})^{k}_{\\:\\:\\:l}dC_{k}(\\varphi^{mn},A^{i})+dC^{\\prime}_{l}(\\varphi^{mn},A^{i}).\n\\end{equation}\nNow if we compare this with the first-order equations \\eqref{de12}\noriginating from \\eqref{de8} we see that our model is a sub-sector\nof the one defined in \\eqref{de21} with the choice of\n$dC^{\\prime}_{l}(\\varphi^{mn},A^{i})=0$. One may also inspect\nwhich scalar-gauge field coupling kinetic term would result in a\nfirst-order formulation of gauge fields in the form \\eqref{de21}.\n\nWe should finally state that by using the local first-order\nformulation \\eqref{de12} of the gauge field equations of\n\\eqref{de8} we have shown that there exists a decoupling between\nthe $U(1)$ gauge fields and the scalar fields of the theory. The\nscalars are proven to be the general solutions of the pure\nprincipal sigma model and they generate current sources for the\ngauge fields as can be explicitly seen in \\eqref{de16}. For the\nsub-sector of the solution space which is generated by the\nfield-independent $\\{C_{k}\\}$ we end up with a decoupled set of\n$N$ non-interacting Maxwell theories with prescribed and known\ncurrents whose sources are predetermined by the principal sigma\nmodel scalar fields.
These currents interact with each other via\nthe sigma model, which is completely decoupled from the Maxwell\nsector, and they define a medium which does not interact with the\nMaxwell fields. For this sub-sector the $N$ decoupled and\nnon-interacting Maxwell theories can alternatively be formulated\nby the Lagrangian\n\\begin{equation}\\label{de22}\n {\\mathcal{L}}^{\\prime}=\\sum\\limits_{i=1}^{N}\\bigg(-\\frac{1}{2}dA^{i}\\wedge\n \\ast dA^{i}+A^{i}\\wedge (d\\mathcal{T}^{-1})^{k}_{\\:\\:\\:i}\\wedge\n dC_{k}\\bigg).\n\\end{equation}\nWe see that for a single scalar field the above Lagrangian reduces\nto the ordinary Maxwell Lagrangian with a known current\none-form. We also realize that the Maxwell fields in this\nrestricted case are coupled to each other only by means of\nintegration constants. Each of these decoupled Maxwell theories is\nan embedding into the ordinary Maxwell theory with known sources.\nWe should remark that \\eqref{de22} must only be used to\nderive\\footnote{By integrating the field equations to first-order\nand by choosing $dC^{\\prime}_{l}=0$.} the field equations of the\ngauge fields which belong to a restricted sector of the solution\nspace generated by fixing $\\{C_{k}\\}$, and in this case the field\nequations of the scalars must again be derived from \\eqref{de3}.\nOn the other hand for the most general solution space elements\ngenerated by choosing field-dependent\n$\\{C_{k}(\\varphi^{ml},A^{i})\\}$ we can again adopt the general\nsolutions of the pure sigma model as the general scalar solutions\nof our theory; however, in this case we have $N$ coupled Maxwell\ntheories whose potentials may enter the currents.
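As a consistency check of the conservation law \\eqref{de18} note that, up to\nthe sign factor in \\eqref{de17}, the $(D-1)$-form $\\ast J_{l}$ coincides with\nthe right hand side of \\eqref{de16}, so that\n\\begin{equation*}\nd\\ast J_{l}=\\pm\\, d\\big((d\\mathcal{T}^{-1})^{k}_{\\:\\:\\:l}\\wedge\ndC_{k}\\big)=\\pm\\big(d(d\\mathcal{T}^{-1})^{k}_{\\:\\:\\:l}\\wedge\ndC_{k}-(d\\mathcal{T}^{-1})^{k}_{\\:\\:\\:l}\\wedge d(dC_{k})\\big)=0,\n\\end{equation*}\nsince both factors are exact forms and $d^{2}=0$.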
For the\nsub-sector generated by $\\{C_{k}(\\varphi^{ml})\\}$ which are not\ndependent on the gauge fields we have a situation similar to that\nof the decoupled Maxwell theories with prescribed currents discussed\nabove.\n\nFinally, before concluding, we should summarize the local general\nsolution methodology of the principal sigma model and $U(1)$ gauge\nfields coupling defined in \\eqref{de8}. The method has three steps:\nfirst, find the general scalar field solutions of the pure\nprincipal sigma model; secondly, define field-dependent or\nfield-independent $\\{C_{k}\\}$ and use the general scalar solutions\nin them; then insert these in \\eqref{de15} to solve for the\ncorresponding gauge field strengths. By combining the general pure\nprincipal sigma model solutions with a different set of\n$\\{C_{k}\\}$ each time one can generate the entire set of\nsolutions. Such a solution methodology of the general solution\nspace of \\eqref{de8} is a consequence of the first-order\nformulation of the gauge field equations in \\eqref{de12} and the\non-shell decoupling between the scalars and the gauge fields which\nwe have described in detail for both the restricted sector of the\nsolution space and for the entire solution space in its most\ngeneral generation process. From the scalars' point of view there\nis a complete decoupling, so that the scalars of the coupled theory\ncoincide with the pure sigma model solution space. However the\ngauge fields are not decoupled from the scalars, since we have\nshown that the scalars enter the current part of the gauge field\nequations.\n\\section{Conclusion}\nBy integrating the gauge field equations of the principal sigma\nmodel and Abelian gauge field coupling Lagrangian we have\nexpressed these equations in a first-order form.
Then we have\nshown that when these first-order, on-shell expressions, which\ncontain local $(D-3)$-forms, are used in the scalar-matter\nLagrangian, an implicit decoupling between the scalars\nand the matter gauge fields is revealed. We have proven that this decoupling\nstructure exists for the entire solution space. Therefore it is\nshown that the scalar solutions of the coupled theory are the\ngeneral solutions of the pure principal sigma model. We should\nstate here that a similar result is derived in \\cite{consist2} for\nthe heterotic string. However in that work the decoupling occurs\ndue to the existence of a dilatonic field and one derives the\ndecoupling structure of the coset scalars from the field equation\nof the dilaton. On the other hand in the present work we prove\nthat such a decoupling exists for a generic principal sigma model\nwith an arbitrary number of Abelian gauge field couplings.\n\nWe have also shown that the hidden on-shell scalar-matter\ndecoupling described above generates a theory composed of the pure\nsigma model and a number of coupled Maxwell theories with sources\ninduced by the scalars. In particular we have singled out a\nsub-sector of the general theory generated by the\nfield-independent integration constants. This sub-sector contains\na number of separate and dynamically non-interacting Maxwell\ntheories whose current sources are drawn from the general scalar\nsolutions of the pure principal sigma model and the integration\nconstants of the first-order formulation.\n\nIn this work, we prove that the scalars of the coupled theory are\nnot affected by the presence of the $U(1)$ gauge fields; however,\nthey play a role in the currents of the gauge fields. Thus one may\ncall such a coupling between the scalars and the gauge fields a\none-sided or a one and a half coupling.
We also see that for the\nrestricted sub-sector of the theory which is obtained by fixing\nthe integration constants the currents of the electromagnetic\nfields interact with each other via the sigma model but they do\nnot interact with the corresponding gauge fields. Thus for this\nsub-sector the currents form a subset of the general currents\nof the prescribed current-Maxwell theory. In this sub-sector the\nscalar-gauge field decoupling induces another decoupling scheme\nalso among the gauge fields and one obtains $N$ non-interacting\nMaxwell theories whose sources come from the pure principal sigma\nmodel. For each $U(1)$ gauge field we have an embedded sector of\nthe general predetermined current-Maxwell theory which has the\nmost general form of currents. In our case these currents are\nrestricted to the solutions of the pure principal sigma model. The\n$N$ $U(1)$ gauge fields in this case probe each other only by the\npresence of the integration constants; however, this does not\ncorrespond to a dynamic coupling. The coupling among the gauge\nfields occurs only at the level of determining the integration\nconstants from the boundary conditions. Since we have a static and\nunchanged current structure which emerges from the pure principal\nsigma model, where the currents interact with each other, and since\nthese currents are not affected by the presence of the gauge\nfields, this sub-sector of the general theory defines a theory of a\nstatic medium with $N$ non-dynamically interacting Maxwell fields.\nOne can inspect the other sectors of the general theory based on\nvarious choices of field-dependent $\\{C_{k}\\}$ which define dynamic\nmedia with current-gauge field interactions. For example one can\nstudy the sectors that result in gauge field equations that define\ndynamic media with specially defined conducting properties\n\\cite{thring}.
The analysis of this work can furthermore be\nextended to another scalar-gauge field coupling which might have\nbroader dynamic media sub-sectors, namely the gauged sigma\nmodel.\n\nIn conclusion, we may state that the decoupling structure studied\nin this work is an important observation about the solutions of the\nscalar-gauge field interactions which form a basic sector of\nthe bosonic dynamics of the supergravities and the effective\nstring theories. Therefore the scalar-gauge field decoupling\nrevealed here points to a simplification in seeking solutions to\nthese theories.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWe wrote a small class library to render\ncomputer graphics images of\nthe highly mathematical structures\ncreated by the artist Elias Wakan \\cite{wakan06}.\nThe \\hbox{{\\tt C++}}\\ classes we wrote will be extended\nto output the type of description\nrequired by a rendering package such as POV-Ray \\cite{povRay06}.\nWe were surprised that this thoroughly practical enterprise\nled us to two fascinating fundamental issues:\nthe limits of computability\nand the abstract nature of axiomatic geometry.\nHence our title ``Computational Euclid''.\n\n\\vspace{0.05in}\nTo accommodate the limits of computability,\nwe use interval arithmetic.\nWe use it in such a way that when we pose the question\nwhether two straight lines intersect, \nthe answer ``no'' has the force of a mathematical proof that they do not.\nThe other possible answer is a box in 3-space, typically very small.\nThis answer means that the intersection is in this box,\n\\emph{if there is an intersection}.\nThis proviso is probably essential\nbecause we sense that a decision procedure\nfor the intersection of two lines can be used\nto implement a decision procedure\nfor the equality of any two real numbers,\nwhich has been shown to be impossible by Turing \\cite{trng37,aberth98}.\nHowever, we have not pursued the details of such a problem
reduction.\n\nThe other fundamental issue\nis the abstract nature of an axiomatic approach to geometry.\n\nEuclid is widely credited with inaugurating the axiomatic method in which\naxioms contain references to undefined concepts whose meaning is\nonly constrained by the axioms. However, we should not look to the Elements\nfor a literal embodiment of the axiomatic method.\nIt fell to Hilbert in 1902 \\cite{hlbrt02} to cast Euclid's axioms in a form that\nis recognized today as an axiomatic treatment of the subject.\n\nLack of space prevents us from going into a detailed analysis of the concepts\nof Euclid's geometry here.\nSuffice it to say that Hilbert's formulation contains as undefined concepts,\namong others, the following that we found useful in our work:\n\\emph{point},\n\\emph{line},\n\\emph{segment},\n\\emph{plane},\n\\emph{angle}.\n\nIn the modern conception of the axiomatic method these are undefined.\nTheir meaning is only constrained by the relations between them as\nasserted by the axioms.
In logic this is formalized by the axioms being\na theory, which, if consistent, can have a variety of models.\nIt is only the model that says what a point \\emph{is}.\nFor example, in one type of model,\npoints, lines, and planes are solution spaces\nof sets of linear equations in three variables.\n\nWhat we find surprising is how well the axiomatic method,\nas realized in formal logic, combines with the most widely accepted principles\nof object-oriented software design \\cite{wbww90}.\nAccording to it, one looks for the \\emph{nouns} in an informal specification\nof the software to be written.\nThese are candidates for the \\emph{classes} of an object-oriented program.\n\nWe used Hilbert's axioms as specification.\nThe recipe of \\cite{wbww90} has, of course, to be taken with a grain of salt:\nonly the \\emph{important} nouns are candidates for classes.\nUsually, informal specifications contain a majority of not-so-important nouns.\nTo our delight,\nwe found that Hilbert's axioms\ncontain an unusually small number\nof not-so-important nouns.\n\n\\section{The structure of our class library}\n\nOf the nouns occurring in Hilbert's axioms,\n{\\tt Point}, {\\tt Line}, and {\\tt Plane}\\ are a special subset.\nThey are special in the sense\nthat any unordered pair of these determines\nan object in this trinity,\nunless a specific condition prevails.\nIn the case of a point and a line determining a plane,\nthe condition is ``unless the point is on the line''.\nNote that these conditions are called \\emph{predicates}\nby some authors~\\cite{effpred02,itar98}.\nThe table in Figure~\\ref{pointsLinesPlanes}\nsummarizes the operations for all unordered pairs,\neach with the attendant disabling condition.\n\\begin{figure*}\n\\begin{center}\n\\begin{tabular}{l|l}\n\\hline\n Construction & Disabling Condition \\\\\n\\hline\n{\\tt Point}\\ $\\times$ {\\tt Point}\\ $\\rightarrow$ {\\tt Line}\\ & equal \\\\\n{\\tt Point}\\ $\\times$ {\\tt Line}\\ $\\rightarrow$
{\\tt Plane}\\ & on \\\\\n{\\tt Point}\\ $\\times$ {\\tt Plane}\\ $\\rightarrow$ {\\tt Line}\\ & (none) \\\\\n{\\tt Line}\\ $\\times$ {\\tt Line}\\ $\\rightarrow$ {\\tt Line}\\ & parallel or intersect \\\\\n{\\tt Line}\\ $\\times$ {\\tt Plane}\\ $\\rightarrow$ {\\tt Point}\\ & in \\\\\n{\\tt Plane}\\ $\\times$ {\\tt Plane}\\ $\\rightarrow$ {\\tt Line}\\ & parallel \\\\\n \\hline\n\\end{tabular}\n\\caption{Operations for all unordered pairs formed from {\\tt Point}, {\\tt Line}\\ and {\\tt Plane}.\nThe operations cannot be performed if the condition listed holds between\nthe input arguments of the construction.\n}\n\\label{pointsLinesPlanes}\n\\end{center}\n\\end{figure*}\n\nTwo of the constructions involve perpendiculars.\nThe line determined by the point\nand the plane is the perpendicular\nto the plane through the point.\nThe line determined by two lines in general position\nlikewise is a perpendicular:\nthe unique one that is perpendicular to both given lines.\n\nIn this way, an object-oriented reading\nof Hilbert's axioms determines that the class {\\tt Line}\\\ncontains constructors with parameters\n({\\tt Point}, {\\tt Point}), with ({\\tt Point}, {\\tt Plane}), and with ({\\tt Plane}, {\\tt Plane}).\nThe class {\\tt Point}\\ contains a constructor\nwith parameters ({\\tt Line}, {\\tt Plane}).\nThe class {\\tt Plane}\\ contains a constructor\nwith arguments ({\\tt Point}, {\\tt Line}).\n\nThe constructors cannot be invoked\nwhen the conditions noted in Figure~\\ref{pointsLinesPlanes} hold\nbetween the arguments.\nFor example, if a {\\tt Point}\\ is on a {\\tt Line}, then these do not determine a plane.\nThese conditions are semi-decidable: they either determine that the\ncondition does not hold, or that the condition \\emph{may} hold.\nHowever, in rare cases it can be determined that, say, two\ninstances of {\\tt Point}\\ are equal.\nThe conditions therefore return the truth values of a 3-valued logic.\n\n\nThe abstract nature of an axiomatic approach to 
geometry\nrequires that the {\\tt Point}, {\\tt Line}\\ and {\\tt Plane}\\ \nare left undefined.\nThis abstraction is not only essential in \nthe axiomatic treatment of mathematical theories,\nbut it is also the essence of object-oriented design.\n\nIn object-oriented design one may distinguish two forms of abstraction.\nThe weaker form is achieved by any class in which the variables\nare private.\nOne can then modify the representation of the objects without\nconsequences for the code using the class.\nThere is also a stronger form of abstraction in which\npolymorphism makes it possible\nto use more than one implementation of the same abstraction simultaneously.\nThe concept is then represented by an abstract class for the concept in\nwhich the representation-dependent methods are virtual.\nFor each representation there is a separate derived class of\nwhich the methods are dispatched at run time.\nWe have found this stronger form of abstraction\nadvantageous in our suite of \\hbox{{\\tt C++}} classes.\n\nA UML class diagram summarizing the classes \nand the conditions above is shown in Figure~\\ref{geomuml}. 
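As an illustration of this stronger form of abstraction, the pattern can be sketched as follows. This is a minimal example only; the class names, members, and methods here (`Point`, `ExactPoint`, `BoxPoint`, `isExact`) are hypothetical and are not the actual interface of our library.

```cpp
// Abstract class for the undefined concept: client code sees only this
// interface, so several representations can coexist at run time.
struct Point {
    virtual ~Point() = default;
    // Representation-dependent query, dispatched at run time.
    virtual bool isExact() const = 0;
};

// One concrete representation: exact floating-point coordinates.
struct ExactPoint : Point {
    double x, y, z;
    ExactPoint(double x_, double y_, double z_) : x(x_), y(y_), z(z_) {}
    bool isExact() const override { return true; }
};

// Another representation: a box of points, as used with interval methods.
struct BoxPoint : Point {
    double xlo, xhi, ylo, yhi, zlo, zhi;
    BoxPoint(double xl, double xh, double yl, double yh, double zl, double zh)
        : xlo(xl), xhi(xh), ylo(yl), yhi(yh), zlo(zl), zhi(zh) {}
    bool isExact() const override {
        return xlo == xhi && ylo == yhi && zlo == zhi;
    }
};

// Client code uses both representations through the base class alone.
bool bothExact(const Point& a, const Point& b) {
    return a.isExact() && b.isExact();
}
```

The weaker form of abstraction would be obtained by making the data members of a single such class private; the stronger form is the virtual dispatch, which lets exact and interval-based representations be mixed in the same computation.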
\nThe purpose of the extra classes in the diagram is explained later.\n\\begin{figure}[!htbp]\n\\begin{center}\n\\epsfxsize=3.5in\n\\leavevmode\n\\epsfbox{Figures\/euclclassdiagram.eps}\n\\caption{\n\\label{geomuml}\nUML class diagram for our system.}\n\\end{center}\n\\end{figure}\n\n\\section{Consequences of computability limitations}\n\nWhatever computer representation is chosen,\nthere will only be finitely many points, lines, and planes\nthat can be represented.\nThe conventional method of mapping\nthe infinity of abstract objects\nto the finitely many representable ones\nis to choose a representation in terms of reals\nand then to map each real to a nearby floating-point number.\nWhen this method is followed,\nit has so far not been found possible\nto give precise meaning to the outcomes\nof tests such as whether a point is on a line.\nThe outcomes have to be interpreted\nas ``probably not'' and ``possibly'',\ndepending on whether the computed distance\n(subject to an unknown error)\nis greater than a certain tolerance.\n\nIt may seem that this degree of uncertainty\nis inherent in the limitation\nto a finite number of representations.\nThis is not the case.\nEven when restricted to floating-point numbers,\nit is possible to represent the point $p$\nby a set $P$ of points containing $p$;\nlikewise, the line $l$ can be represented by a set $L$.\nThese sets are specified in terms of floating-point numbers,\nso there are only finitely many of these.\nBecause of this finiteness it is decidable\nwhether the set of points contains any point\nthat is on any line in the set of lines.\nIt may seem computationally formidable\nto make such a determination.\nActually, the techniques of interval constraints\nmake this perfectly feasible \\cite{hcqvn99},\nand this is what we use.\n\nIf it is determined that no point in $P$ is on any line in $L$,\nthen it is clear that $p$ is not on $l$.\nIf, on the other hand, some point in $P$ is on some line in $L$,\nthis says nothing about whether $p$
is on $l$.\nHowever, if $P$ and $L$ are, in a suitable sense, small,\nthen it follows that $p$ is close to $l$.\nIt is this asymmetry that is a consequence of the fact that the test for a point\non a line can at best be a \nsemi-decision algorithm.\nSimilarly, the other tests\nin Figure~\\ref{pointsLinesPlanes} are semi-decision algorithms.\n\nIt is worth mentioning that to cope with the computability limitations \nin the area of computational geometry, the \\emph{exact geometric computing} \nparadigm was proposed \\cite{yap95}. This paradigm encompasses all techniques \nfor which the outcomes are correct. As shown in \\cite{itar98}, \ninterval arithmetic can be used to do exact geometric computing.\nThis paper is also classified under this paradigm.\n\n\n\n\\section{Our implementation}\nIn the previous section we explained the need for interval methods\nto ensure that in most cases where a test should have a negative\noutcome, this is indeed proved numerically.\nInterval methods can do this in several ways.\nIn \\cite{itar98}, Br{\\\"o}nnimann {\\it et al.} used \ninterval arithmetic to dynamically bound \narithmetic errors when computing tests (i.e. to compute dynamic filters).\nIn our case, we use interval constraints\nnot only to compute tests but also to implement\ngeometrical constructions. \nThis means that the representations\nof {\\tt Point}, {\\tt Line}, and {\\tt Plane}\\ are in the form of constraint\nsatisfaction problems.\nFor example, a plane is represented by the constraint\n$ax+by+cz+d=0$, where\n$a$, $b$, $c$ and $d$ are real-valued constants and\n$x$, $y$, and $z$ are real-valued variables.\nDue to computability limitations\ndiscussed in the previous section,\nthe coefficients $a$, $b$, $c$ and $d$\nare implemented as floating-point intervals.\nFor each point with coordinates\nin these intervals, the constraint has a different\nplane as solution. 
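To make this concrete, the flavour of such a semi-deciding test can be sketched as follows. This is an illustrative fragment only, not our actual classes: here the candidate point is an exact triple while the plane's coefficients are intervals, and the outward rounding of interval bounds that a sound implementation needs is omitted.

```cpp
#include <algorithm>

// Closed interval [lo, hi] of doubles.
struct Interval {
    double lo, hi;
};

Interval add(Interval a, Interval b) { return { a.lo + b.lo, a.hi + b.hi }; }

// Interval coefficient times an exact coordinate value.
Interval scale(Interval a, double x) {
    return { std::min(a.lo * x, a.hi * x), std::max(a.lo * x, a.hi * x) };
}

// A set of planes a*x + b*y + c*z + d = 0, one plane for each choice of
// coefficients inside the intervals.
struct PlaneSet {
    Interval a, b, c, d;
};

// Semi-decision: NO means "provably on no plane of the set";
// MAYBE carries no information (a definite YES is only rarely provable).
enum class OnPlane { NO, MAYBE };

OnPlane onPlane(const PlaneSet& p, double x, double y, double z) {
    Interval s = add(add(scale(p.a, x), scale(p.b, y)),
                     add(scale(p.c, z), p.d));
    // If 0 lies outside the enclosure, no plane in the set contains the point.
    return (s.hi < 0.0 || s.lo > 0.0) ? OnPlane::NO : OnPlane::MAYBE;
}
```

With outward rounding on every bound, the NO answer retains the force of a proof; the asymmetry between NO and MAYBE is exactly the semi-decidability discussed above.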
In this way our concrete representation is\na set of planes in the abstract sense.\nThe reader may refer to the following papers \n\\cite{rthpra01}, \\cite{bhlgfg99}, \\cite{hckmdw98}, \nand \\cite{hckvnmdn01} for more information\non constraints, propagation algorithms, interval constraints,\nand the correctness and implementation of interval constraints.\n\nAs shown in Figure~\\ref{geomuml},\nthe abstract classes {\\tt Point}, {\\tt Line}, and {\\tt Plane}\\ are extended \nand modelled using intervals and constraints.\nThe abstract class {\\tt Constraint}\\ represents the constraint class,\nwhich can be extended to implement primitive constraints\nsuch as {\\tt Sum} and {\\tt Prod}. \nEach of these primitive constraints has \na \\emph{domain reduction operator} (DRO), \nrepresented by the {\\tt shrink}() method,\nwhich removes inconsistent values from \nthe domains of the variables in the constraint.\nThe DROs of primitive constraints are computed based on \ninterval arithmetic. As an example, \nthe {\\tt Sum} constraint defined \nby $x+y=z$ has the\nfollowing DRO\n\\begin{eqnarray}\n&&X^{new}=X^{old} \\cap (Z^{old} - Y^{old})\\nonumber \\\\\n&&Y^{new}=Y^{old} \\cap (Z^{old} - X^{old})\\nonumber \\\\\n&&Z^{new}=Z^{old} \\cap (X^{old} + Y^{old})\\nonumber \n\\end{eqnarray}\nwhere the intervals $X^{old}$, $Y^{old}$, \nand $Z^{old}$ are the domains of $x$, $y$, and $z$\nrespectively before applying the DRO,\nand $X^{new}$, $Y^{new}$ and $Z^{new}$ are the\ndomains of $x$, $y$, and $z$ respectively after applying the DRO.\nFor a non-primitive constraint, such as {\\tt Line}\\ and {\\tt Plane}, \nwe first decompose it into primitive constraints and then\nuse the propagation algorithm to implement the {\\tt shrink}() method.\nA simple version of this algorithm is shown in Figure~\\ref{gpa}.\n\\begin{center}\n\\begin{figure}\n\\begin{center}\n\\fbox{\n\\parbox{1 cm}{\n\\small{\n\\begin{tabbing}\nmake $A$ the set of primitive constraints;\\\\\nwhile \\=( $A \\neq \\emptyset$)
$\\{$\\\\ \n \\>choose a constraint $C$ from $A$ and apply its DRO;\\\\\n \\> if one of the domains becomes empty, then stop;\\\\\n \\>add \\=to $A$ all constraints involving variables whose\\\\\n \\>\\> domains have changed, if any;\\\\\n \\>remove $C$ from $A$; $\\}$\n\\end{tabbing}\n}\n}\n}\n\\end{center}\n\\caption{\n\\label{gpa}\nPropagation algorithm.}\n\\end{figure}\n\\end{center}\nIn what follows, we present some examples\nin two dimensions illustrating the use of our implementation.\nWe ran the examples on a 400 MHz Pentium II machine\nwith 128 MB of memory.\n\\paragraph{Are two points on the same side of a line?}\nLet $L$ be a line represented by $[2.0,2.5]*x - [0.5,1.0]*y=[1.0,1.05]$.\nLet $P$ and $Q$ be two points represented respectively by\n$([0.0,0.0],[0.0,0.0])$ and $([0.5,0.5],[0.5,0.5])$.\nWe wish to\ndetermine whether $P$ and $Q$ are on the same side of $L$.\nUsing the function \\emph{sameSide(Point, Point)},\nwhich checks whether two points are on the same side of a line,\nour system returned the following output:\n\\begin{verbatim}\nDuration (musec): 179\nTrue: the points are in the same side\n\\end{verbatim}\nThis means that our system was able to prove that\n$P$ and $Q$ are on the same side of $L$.\n\nNow suppose that $Q$ is represented by $([1 , 1],[0.5 , 0.5])$.\nIn this case, our system returned the following output:\n\\begin{verbatim}\nDuration (musec): 133\nFalse: the points are not in the same side\n\\end{verbatim}\nIf the point $Q$ is only known to be represented by\n$([0.75 , 1],[0.25 , 0.5])$ (note that the intervals are not singletons),\nthen our system was not able to prove that\n$P$ and $Q$ are on the same side.\nThe output in this case is:\n\\begin{verbatim}\nDuration (musec): 265\nUndetermined\n\\end{verbatim}\n\\paragraph{Circumcenter of a triangle}\nGiven three points $P$, $Q$ and $R$ represented respectively by\n$([0.0,0.0],[0.0,0.0])$, 
$([1.0,1.0],[0.5,0.5])$ and $([0.5,0.5],[1.0,1.0])$,\nwe wish to find the center of the circle passing through $P$, $Q$ and $R$.\nThis example is taken from \\cite{fnpr02}.\nThis center is given by the intersection of $L_1$ and $L_2$,\nwhere $L_1$ is the line that passes through the midpoint of the segment $PQ$ and is\nperpendicular to the line passing through $P$ and $Q$, and $L_2$\nis the line that passes through the midpoint of the segment $QR$ and\nis perpendicular to the line passing through $Q$ and $R$.\nUsing the \\emph{intersect(Line)} function, which\nchecks whether a line intersects another\nline, our system returned the following output:\n\\begin{verbatim}\nDuration (msec): 6\nTrue: the lines intersect at\nx = [0.41666666666666663 , \n 0.41666666666666669]\ny = [0.41666666666666663 , \n 0.41666666666666669]\n\\end{verbatim}\nWe then checked whether the point\n$(x,y)$\nis on the line $L_3$\nthat passes through the midpoint of the segment $PR$\nand is perpendicular to the line passing through $P$ and $R$.\nSince the computed intervals are not singletons, our system cannot\nprove this; its output indicates only that it is possible:\n\\begin{verbatim}\nDuration (musec): 112\nUndetermined\n\\end{verbatim}\n\n\\section*{Acknowledgements}\nThis research was supported by the University of Victoria and by\nthe Natural Sciences and Engineering Research Council of Canada.\n\n\n\n\n\\newpage\n\\tableofcontents\n\\newpage\n\\section{Introduction}\nAuthors wishing to code their contribution\nwith \\LaTeX{}, as well as those who have already coded with \\LaTeX{},\nwill be provided with a document class that will give the text the\ndesired layout. 
Authors are requested to\nadhere strictly to these instructions; {\\em the class\nfile must not be changed}.\n\nThe text output area is automatically set to\n12.2\\,cm horizontally and 19.3\\,cm vertically.\n\nIf you are already familiar with \\LaTeX{}, then the\nLLNCS class should not give you any major difficulties.\nIt will change the layout to the required LLNCS style\n(it will for instance define the layout of \\verb|\\section|).\nWe had to invent some extra commands,\nwhich are not provided by \\LaTeX{} (e.g.\\\n\\verb|\\institute|; see also Sect.\\,\\ref{contbegin}).\n\nFor the main body of the paper (the text) you\nshould use the commands of the standard \\LaTeX{} ``article'' class.\nEven if you are familiar with those commands, we urge you to read\nthis entire documentation thoroughly. It contains many suggestions on\nhow to use our commands properly; thus your paper\nwill be formatted exactly to the LLNCS standard.\nFor the input of the references at the end of your contribution,\nplease follow our instructions given in Sect.\\,\\ref{refer} (References).\n\nThe majority of these hints are not specific to LLNCS; they may improve\nyour use of \\LaTeX{} in general.\nFurthermore, the documentation provides suggestions about the proper\nediting and use\nof the input files (capitalization, abbreviation etc.) 
(see\nSect.\\,\\ref{refedit}, How to Edit Your Input File).\n\\section{How to Proceed}\nThe package consists of the following files:\n\\begin{flushleft}\n\\begin{tabular}{@{}p{2.5cm}l}\n{\\tt history.txt}& the version history of the package\\\\[2pt]\n{\\tt llncs.cls} & class file for \\LaTeX{}\\\\[2pt]\n{\\tt llncs.dem} & an example showing how to code the text\\\\[2pt]\n{\\tt llncs.doc} & general instructions (source of this document),\\\\\n & {\\tt llncs.doc} means {\\itshape l\\\/}atex {\\itshape doc\\\/}umentation for\\\\\n & {\\itshape L\\\/}ecture {\\itshape N}otes in {\\itshape C\\\/}omputer {\\itshape S\\\/}cience\\\\\n{\\tt llncsdoc.sty} & class modifications for typesetting these instructions\\\\\n{\\tt llncs.ind} & an external (faked) author index file\\\\\n{\\tt subjidx.ind} & subject index demo from the Springer book package\\\\\n{\\tt llncs.dvi} & the resulting DVI file (remember to use binary transfer!)\\\\[2pt]\n{\\tt sprmindx.sty} & supplementary style file for MakeIndex\\\\\n & (usage: {\\tt makeindex -s sprmindx.sty})\n\\end{tabular}\n\\end{flushleft}\n\\subsection{How to Invoke the LLNCS Document Class}\nThe LLNCS class is an extension of the standard \\LaTeX{} ``article''\ndocument class. Therefore you may use all ``article'' commands for the\nbody of your contribution to prepare your manuscript.\nThe LLNCS class is invoked by replacing ``article'' by ``llncs'' in the\nfirst line of your document:\n\\begin{verbatim}\n\\documentclass{llncs}\n\\begin{document}\n \n\\end{document}\n\\end{verbatim}\n\\subsection{Contributions Already Coded with \\protect\\LaTeX{} without\nthe LLNCS Document Class}\nIf your file is already coded with \\LaTeX{} you can easily\nadapt it a posteriori to the LLNCS document class.\n\nPlease refrain from using any \\LaTeX{} or \\TeX{} commands\nthat affect the layout or formatting of your document (e.g. 
commands\nlike \\verb|\\textheight|, \\verb|\\vspace|, \\verb|\\headsep| etc.).\nThere may nevertheless be exceptional occasions on which to\nuse some of them.\n\nThe LLNCS document class has been carefully designed to produce the\nright layout from your \\LaTeX{} input. If there is anything specific you\nwould like to do and for which the style file does not provide a\ncommand, {\\em please contact us}. The same holds for any errors or bugs you\ndiscover (there is, however, no reward for this -- sorry).\n\\section{General Rules for Coding Formulas}\nWith mathematical formulas you may proceed as described\nin Sect.\\,3.3 of the {\\em \\LaTeX{} User's Guide \\& Reference\nManual\\\/} by Leslie Lamport (2nd~ed. 1994), Addison-Wesley Publishing\nCompany, Inc.\n\nEquations are automatically numbered sequentially throughout your\ncontribution using arabic numerals in parentheses on the right-hand\nside.\n\nWhen you are working in math mode, everything is typeset in italics.\nSometimes you need to insert non-mathematical elements (e.g.\\\nwords or phrases). Such insertions should be coded in roman\n(with \\verb|\\mbox|) as illustrated in the following example:\n\\begin{flushleft}\n{\\itshape Sample Input}\n\\end{flushleft}\n\\begin{verbatim}\n\\begin{equation}\n \\left(\\frac{a^{2} + b^{2}}{c^{3}} \\right) = 1 \\quad\n \\mbox{ if } c\\neq 0 \\mbox{ and if } a,b,c\\in \\bbbr \\enspace .\n\\end{equation}\n\\end{verbatim}\n{\\itshape Sample Output}\n\\begin{equation}\n \\left(\\frac{a^{2} + b^{2}}{c^{3}} \\right) = 1 \\quad\n \\mbox{ if } c\\neq 0 \\mbox{ and if } a,b,c\\in \\bbbr \\enspace .\n\\end{equation}\n\nIf you wish to start a new paragraph immediately after a displayed\nequation, insert a blank line so as to produce the required\nindentation. 
If there is no new paragraph either do not insert\na blank line or code \\verb|\\noindent| immediately before\ncontinuing the text.\n\nPlease punctuate a displayed equation in the same way as other\nordinary text but with an \\verb|\\enspace| before end punctuation.\n\nNote that the sizes of the parentheses or other delimiter\nsymbols used in equations should ideally match the height of the\nformulas being enclosed. This is automatically taken care of by\nthe following \\LaTeX{} commands:\\\\[2mm]\n\\verb|\\left(| or \\verb|\\left[| and\n\\verb|\\right)| or \\verb|\\right]|.\n\\subsection{Italic and Roman Type in Math Mode}\n\\begin{alpherate}\n\\item\nIn math mode \\LaTeX{} treats all letters as though they\nwere mathematical or physical variables, hence they are typeset as\ncharacters of their own in\nitalics. However, for certain components of formulas, like short texts,\nthis would be incorrect and therefore coding in roman is required.\nRoman should also be used for\nsubscripts and superscripts {\\em in formulas\\\/} where these are\nmerely labels and not in themselves variables,\ne.g. $T_{\\mathrm{eff}}$ \\emph{not} $T_{eff}$,\n$T_{\\mathrm K}$ \\emph{not} $T_K$ (K = Kelvin),\n$m_{\\mathrm e}$ \\emph{not} $m_e$ (e = electron).\nHowever, do not code for roman\nif the sub\/superscripts represent variables,\ne.g.\\ $\\sum_{i=1}^{n} a_{i}$.\n\\item\nPlease ensure that {\\em physical units\\\/} (e.g.\\ pc, erg s$^{-1}$\nK, cm$^{-3}$, W m$^{-2}$ Hz$^{-1}$, m kg s$^{-2}$ A$^{-2}$) and\n{\\em abbreviations\\\/} such as Ord, Var, GL, SL, sgn, const.\\\nare always set in roman type. 
To ensure\nthis use the \\verb|\\mathrm| command: \\verb|\\mathrm{Hz}|.\nOn p.\\ 44 of the {\\em \\LaTeX{} User's Guide \\& Reference\nManual\\\/} by Leslie Lamport you will find the names of\ncommon mathe\\-matical functions, such as log, sin, exp, max and sup.\nThese should be coded as \\verb|\\log|,\n\\verb|\\sin|, \\verb|\\exp|, \\verb|\\max|, \\verb|\\sup|\nand will appear in roman automatically.\n\\item\nChemical symbols and formulas should be coded for roman,\ne.g.\\ Fe not $Fe$, H$_2$O not {\\em H$_2$O}.\n\\item\nFamiliar foreign words and phrases, e.g.\\ et al.,\na priori, in situ, brems\\-strah\\-lung, eigenvalues should not be\nitalicized.\n\\end{alpherate}\n\\section{How to Edit Your Input (Source) File}\n\\label{refedit}\n\\subsection{Headings}\\label{headings}\nAll words in headings should be capitalized except for conjunctions,\nprepositions (e.g.\\ on, of, by, and, or, but, from, with, without,\nunder) and definite and indefinite articles (the, a, an) unless they\nappear at the beginning. Formula letters must be typeset as in the text.\n\\subsection{Capitalization and Non-capitalization}\n\\begin{alpherate}\n\\item\nThe following should always be capitalized:\n\\begin{itemize}\n\\item\nHeadings (see preceding Sect.\\,\\ref{headings})\n\\item\nAbbreviations and expressions\nin the text such as Fig(s)., Table(s), Sect(s)., Chap(s).,\nTheorem, Corollary, Definition etc. 
when used with numbers, e.g.\\\nFig.\\,3, Table\\,1, Theorem 2.\n\\end{itemize}\nPlease follow the special rules in Sect.\\,\\ref{abbrev} for referring to\nequations.\n\\item\nThe following should {\\em not\\\/} be capitalized:\n\\begin{itemize}\n\\item\nThe words figure(s), table(s), equation(s), theorem(s) in the text when\nused without an accompanying number.\n\\item\nFigure legends and table captions except for names and abbreviations.\n\\end{itemize}\n\\end{alpherate}\n\\subsection{Abbreviation of Words}\\label{abbrev}\n\\begin{alpherate}\n\\item\nThe following {\\em should} be abbreviated when they appear in running\ntext {\\em unless\\\/} they come at the beginning of a sentence: Chap.,\nSect., Fig.; e.g.\\ The results are depicted in Fig.\\,5. Figure 9 reveals\nthat \\dots .\\\\\n{\\em Please note\\\/}: Equations should usually be referred to solely by\ntheir number in parentheses: e.g.\\ (14). However, when the reference\ncomes at the beginning of a sentence, the unabbreviated word\n``Equation'' should be used: e.g.\\ Equation (14) is very important.\nHowever, (15) makes it clear that \\dots .\n\\item\nIf abbreviations of names or concepts are used\nthroughout the text, they should be defined at first occurrence,\ne.g.\\ Plurisubharmonic (PSH) Functions, Strong Optimization (SOPT)\nProblem.\n\\end{alpherate}\n\\section{How to Code the Beginning of Your Contribution}\n\\label{contbegin}\nThe title of a single contribution (it is mandatory) should be coded as\nfollows:\n\\begin{verbatim}\n\\title{}\n\\end{verbatim}\nAll words in titles should be capitalized except for conjunctions,\nprepositions (e.g.\\ on, of, by, and, or, but, from, with, without,\nunder) and definite and indefinite articles (the, a, an) unless they\nappear at the beginning. 
Formula letters must be typeset as in the text.\nTitles have no end punctuation.\n\nIf a long \\verb|\\title| must be divided please use the code \\verb|\\\\|\n(for new line).\n\nIf you are to produce running heads for a specific volume the standard\n(of no such running heads) is overwritten with the \\verb|[runningheads]|\noption in the \\verb|\\documentclass| line. For long titles that do not\nfit in the single line of the running head a warning is generated.\nYou can specify an abbreviated title for the running head on odd pages\nwith the command\n\\begin{verbatim}\n\\titlerunning{}\n\\end{verbatim}\n\nThere is also a possibility to change the text of the title that goes\ninto the table of contents (that's for volume editors only -- there is\nno table of contents for a single contribution). For this use the\ncommand\n\\begin{verbatim}\n\\toctitle{}\n\\end{verbatim}\n\nAn optional subtitle may follow then:\n\\begin{verbatim}\n\\subtitle{}\n\\end{verbatim}\n\nNow the name(s) of the author(s) must be given:\n\\begin{verbatim}\n\\author{}\n\\end{verbatim}\nNumbers referring to different addresses or affiliations are\nto be attached to each author with the \\verb|\\inst{}| command.\nIf there is more than one author, the order is up to you;\nthe \\verb|\\and| command provides for the separation.\n\nIf you have done this correctly, this entry now reads, for example:\n\\begin{verbatim}\n\\author{Ivar Ekeland\\inst{1} \\and Roger Temam\\inst{2}}\n\\end{verbatim}\nThe first name\\footnote{Other initials are optional\nand may be inserted if this is the usual\nway of writing your name, e.g.\\ Alfred J.~Holmes, E.~Henry Green.}\nis followed by the surname.\n\nAs for the title there exist two additional commands (again for volume\neditors only) for a different author list. 
One for the running head\n(on odd pages) -- if there is any:\n\\begin{verbatim}\n\\authorrunning{}\n\\end{verbatim}\nAnd one for the table of contents, where the\naffiliation of each author is simply added in braces.\n\\begin{verbatim}\n\\tocauthor{}\n\\end{verbatim}\n\nNext, the address(es) of institute(s), company etc. is (are) required.\nIf there is more than one address, the entries are numbered\nautomatically with \\verb|\\and|, in the order in which you type them.\nPlease make sure that the numbers match those placed next\nto the authors' names to reflect the affiliation.\n\\begin{verbatim}\n\\institute{\n\\and \n\\and }\n\\end{verbatim}\n\nIn addition, you can use\n\\begin{verbatim}\n\\email{}\n\\end{verbatim}\nto provide your email address within \\verb|\\institute|. If you need to\ntypeset the tilde character -- e.g. for your web page in your unix\nsystem's home directory -- the \\verb|\\homedir| command will happily do\nthis.\n\n\\medskip\nIf footnote-like things are needed anywhere in the contribution heading,\nplease code\n(immediately after the word where the footnote indicator should be\nplaced):\n\\begin{verbatim}\n\\thanks{}\n\\end{verbatim}\n\\verb|\\thanks| may only appear in \\verb|\\title|, \\verb|\\author|,\nand \\verb|\\institute|. If there are two or more\nfootnotes or affiliation marks to a specific item, separate them with\n\\verb|\\fnmsep| (i.e.\\ {\\itshape f}oot{\\itshape n}ote {\\itshape m}ark\n{\\itshape sep}arator).\n\n\\medskip\\noindent\nThe command\n\\begin{verbatim}\n\\maketitle\n\\end{verbatim}\nthen formats the complete heading of your article. If you leave\nit out, the work done so far will produce \\emph{no} text.\n\nThen the abstract should follow. 
Simply code\n\\begin{verbatim}\n\\begin{abstract}\n\n\\end{abstract}\n\\end{verbatim}\nor refer to the demonstration file {\\tt llncs.dem} for an example or\nto the {\\em Sample Input\\\/} on p.~\\pageref{samppage}.\n\n\\subsubsection{Remark to Running Heads and the Table of Contents}\n\\leavevmode\\\\[\\medskipamount]\nIf you are the author of a single contribution you normally have no\nrunning heads and no table of contents. Both are done only by the editor\nof the volume or at the printers.\n\\section{Special Commands for the Volume Editor}\nThe volume editor can produce a complete camera ready output including\nrunning heads, a table of contents, preliminary text (frontmatter), and\nindex or glossary. For activating the running heads there is the class\noption \\verb|[runningheads]|.\n\nThe table of contents of the volume is printed wherever\n\\verb|\\tableofcontents| is placed. A simple compilation of all\ncontributions (fields \\verb|\\title| and \\verb|\\author|) is done. If you\nwish to change this automatically produced list use the commands\n\\begin{verbatim}\n\\titlerunning \\toctitle\n\\authorrunning \\tocauthor\n\\end{verbatim}\nto enhance the information in the specific contributions. See the\ndemonstration file \\verb|llncs.dem| for examples.\n\nAn additional structure can be added to the table of contents with the\n\\verb|\\addtocmark{}| command. It has an optional numerical\nargument, a digit from 1 through 3. 3 (the default) makes an unnumbered\nchapter like entry in the table of contents. If you code\n\\verb|\\addtocmark[2]{text}| the corresponding page number is listed\nalso, \\verb|\\addtocmark[1]{text}| even introduces a chapter number\nbeyond it.\n\\section{How to Code Your Text}\nThe contribution title and all headings should be capitalized\nexcept for conjunctions, prepositions (e.g.\\ on, of, by, and, or, but,\nfrom, with, without, under) and definite and indefinite articles (the,\na, an) unless they appear at the beginning. 
Formula letters must be\ntypeset as in the text.\n\nHeadings will be automatically numbered by the following codes.\\\\[2mm]\n{\\itshape Sample Input}\n\\begin{verbatim}\n\\section{This is a First-Order Title}\n\\subsection{This is a Second-Order Title}\n\\subsubsection{This is a Third-Order Title.}\n\\paragraph{This is a Fourth-Order Title.}\n\\end{verbatim}\n\\verb|\\section| and \\verb|\\subsection| have no end punctuation.\\\\\n\\verb|\\subsubsection| and \\verb|\\paragraph|\nneed to be punctuated at the end.\n\nIn addition to the above-mentioned headings your text may be structured\nby subsections indicated by run-in headings (theorem-like environments).\nAll the theorem-like environments are numbered automatically\nthroughout the sections of your document -- each with its own counter.\nIf you want the theorem-like environments to use the same counter\njust specify the documentclass option \\verb|envcountsame|:\n\\begin{verbatim}\n\\documentclass[envcountsame]{llncs}\n\\end{verbatim}\nIf your first call for a theorem-like environment then is e.g.\n\\verb|\\begin{lemma}|, it will be numbered 1; if corollary follows,\nthis will be numbered 2; if you then call lemma again, this will be\nnumbered 3.\n\nBut in case you want to reset such counters to 1 in each section,\nplease specify the documentclass option \\verb|envcountreset|:\n\\begin{verbatim}\n\\documentclass[envcountreset]{llncs}\n\\end{verbatim}\n\nEven a numbering on section level (including the section counter) is\npossible with the documentclass option \\verb|envcountsect|.\n\n\\section{Predefined Theorem like Environments}\\label{builtintheo}\nThe following variety of run-in headings are at your disposal:\n\\begin{alpherate}\n\\item\n{\\bfseries Bold} run-in headings with italicized text\nas built-in environments:\n\\begin{verbatim}\n\\begin{corollary} \\end{corollary}\n\\begin{lemma} \\end{lemma}\n\\begin{proposition} \\end{proposition}\n\\begin{theorem} \\end{theorem}\n\\end{verbatim}\n\\item\nThe 
following generally appears as an {\\itshape italic} run-in heading:\n\\begin{verbatim}\n\\begin{proof} \\qed \\end{proof}\n\\end{verbatim}\nIt is unnumbered and may contain an eye-catching square (call for that\nwith \\verb|\\qed|) before the environment ends.\n\\item\nFurther {\\itshape italic} or {\\bfseries bold} run-in headings with roman\nenvironment body may also occur:\n\\begin{verbatim}\n\\begin{definition} \\end{definition}\n\\begin{example} \\end{example}\n\\begin{exercise} \\end{exercise}\n\\begin{note} \\end{note}\n\\begin{problem} \\end{problem}\n\\begin{question} \\end{question}\n\\begin{remark} \\end{remark}\n\\begin{solution} \\end{solution}\n\\end{verbatim}\n\\end{alpherate}\n\n\\section{Defining your Own Theorem like Environments}\nWe have enhanced the standard \\verb|\\newtheorem| command and slightly\nchanged its syntax to get two new commands, \\verb|\\spnewtheorem| and\n\\verb|\\spnewtheorem*|, that can be used to define additional\nenvironments. They require two additional arguments: first, the type\nstyle in which the keyword of the environment appears, and second, the\ntype style for the text of your new environment.\n\n\\verb|\\spnewtheorem| can be used in two ways.\n\\subsection{Method 1 {\\itshape (preferred)}}\nYou may want to create an environment that shares its counter\nwith another environment, say {\\em main theorem\\\/}, to be numbered like\nthe predefined {\\em theorem\\\/}. In this case, use the syntax\n\\begin{verbatim}\n\\spnewtheorem{}[]{}\n{}{}\n\\end{verbatim}\n\n\\noindent\nHere the environment with which the new environment should share its\ncounter is specified with the optional argument \\verb|[]|.\n\n\\paragraph{Sample Input}\n\\begin{verbatim}\n\\spnewtheorem{mainth}[theorem]{Main Theorem}{\\bfseries}{\\itshape}\n\\begin{theorem} The early bird gets the worm. \\end{theorem}\n\\begin{mainth} The early worm gets eaten. 
\\end{mainth}\n\\end{verbatim}\n\\medskip\\noindent\n{\\em Sample Output}\n\n\\medskip\\noindent\n{\\bfseries Theorem 3.}\\enspace {\\em The early bird gets the worm.}\n\n\\medskip\\noindent\n{\\bfseries Main Theorem 4.} The early worm gets eaten.\n\n\\bigskip\nThe sharing of the default counter (\\verb|[theorem]|) is desired. If you\nomit the optional second argument of \\verb|\\spnewtheorem| a separate\ncounter for your new environment is used throughout your document.\n\n\\subsection[Method 2]{Method 2 {\\itshape (assumes {\\tt[envcountsect]}\ndocumentstyle option)}}\n\\begin{verbatim}\n\\spnewtheorem{}{}[]\n{}{}\n\\end{verbatim}\n\n\\noindent\nThis defines a new environment \\verb|| which prints the caption\n\\verb|| in the font \\verb|| and the text itself in\nthe font \\verb||. The environment is numbered beginning anew\nwith every new sectioning element you specify with the optional\nparameter \\verb||.\n\n\\medskip\\noindent\n\\paragraph{Example} \\leavevmode\n\n\\medskip\\noindent\n\\verb|\\spnewtheorem{joke}{Joke}[subsection]{\\bfseries}{\\rmfamily}|\n\n\\medskip\n\\noindent defines a new environment called \\verb|joke| which prints the\ncaption {\\bfseries Joke} in boldface and the text in roman. The jokes are\nnumbered starting from 1 at the beginning of every subsection with the\nnumber of the subsection preceding the number of the joke e.g. 
7.2.1 for\nthe first joke in subsection 7.2.\n\n\\subsection{Unnumbered Environments}\nIf you wish to have an unnumbered environment, please\nuse the syntax\n\\begin{verbatim}\n\\spnewtheorem*{}{}{}{}\n\\end{verbatim}\n\n\\section{Program Codes}\nIn case you want to show pieces of program code, just use the\n\\verb|verbatim| environment or the \\verb|verbatim| package of \\LaTeX.\n(There also exist various pretty printers for some programming\nlanguages.)\n\\noindent\n\\subsection*{Sample Input {\\rmfamily(of a simple\ncontribution)}}\\label{samppage}\n\\begin{verbatim}\n\\title{Hamiltonian Mechanics}\n\n\\author{Ivar Ekeland\\inst{1} \\and Roger Temam\\inst{2}}\n\n\\institute{Princeton University, Princeton NJ 08544, USA\n\\and\nUniversit\\'{e} de Paris-Sud,\nLaboratoire d'Analyse Num\\'{e}rique, B\\^{a}timent 425,\\\\\nF-91405 Orsay Cedex, France}\n\n\\maketitle\n\\begin{abstract}\nThis paragraph shall summarize the contents of the paper\nin short terms.\n\\end{abstract}\n\\section{Fixed-Period Problems: The Sublinear Case}\nWith this chapter, the preliminaries are over, and we begin the\nsearch for periodic solutions \\dots\n\\subsection{Autonomous Systems}\nIn this section we will consider the case when the Hamiltonian\n$H(x)$ \\dots\n\\subsubsection*{The General Case: Nontriviality.}\nWe assume that $H$ is\n$\\left(A_{\\infty}, B_{\\infty}\\right)$-subqua\\-dra\\-tic\nat infinity, for some constant \\dots\n\\paragraph{Notes and Comments.}\nThe first results on subharmonics were \\dots\n\\begin{proposition}\nAssume $H'(0)=0$ and $ H(0)=0$. Set \\dots\n\\end{proposition}\n\\begin{proof}[of proposition]\nCondition (8) means that, for every $\\delta'>\\delta$, there is\nsome $\\varepsilon>0$ such that \\dots \\qed\n\\end{proof}\n\\begin{example}[\\rmfamily (External forcing)]\nConsider the system \\dots\n\\end{example}\n\\begin{corollary}\nAssume $H$ is $C^{2}$ and\n$\\left(a_{\\infty}, b_{\\infty}\\right)$-subquadratic\nat infinity. 
Let \\dots\n\\end{corollary}\n\\begin{lemma}\nAssume that $H$ is $C^{2}$ on $\\bbbr^{2n}\\backslash \\{0\\}$\nand that $H''(x)$ is \\dots\n\\end{lemma}\n\\begin{theorem}[(Ghoussoub-Preiss)]\nLet $X$ be a Banach Space and $\\Phi:X\\to\\bbbr$ \\dots\n\\end{theorem}\n\\begin{definition}\nWe shall say that a $C^{1}$ function $\\Phi:X\\to\\bbbr$\nsatisfies \\dots\n\\end{definition}\n\\end{verbatim}\n{\\itshape Sample Output\\\/} (follows on the next page together with\nexamples of the above run-in headings)\n\\newcounter{save}\\setcounter{save}{\\value{section}}\n{\\def\\addtocontents#1#2{}%\n\\def\\addcontentsline#1#2#3{}%\n\\def\\markboth#1#2{}%\n\\title{Hamiltonian Mechanics}\n\n\\author{Ivar Ekeland\\inst{1} \\and Roger Temam\\inst{2}}\n\n\\institute{Princeton University, Princeton NJ 08544, USA\n\\and\nUniversit\\'{e} de Paris-Sud,\nLaboratoire d'Analyse Num\\'{e}rique, B\\^{a}timent 425,\\\\\nF-91405 Orsay Cedex, France}\n\n\\maketitle\n\\begin{abstract}\nThis paragraph shall summarize the contents of the paper\nin short terms.\n\\end{abstract}\n\\section{Fixed-Period Problems: The Sublinear Case}\nWith this chapter, the preliminaries are over, and we begin the search\nfor periodic solutions \\dots\n\\subsection{Autonomous Systems}\nIn this section we will consider the case when the Hamiltonian\n$H(x)$ \\dots\n\\subsubsection{The General Case: Nontriviality.}\nWe assume that $H$ is\n$\\left(A_{\\infty}, B_{\\infty}\\right)$-subqua\\-dra\\-tic at\ninfinity, for some constant \\dots\n\\paragraph{Notes and Comments.}\nThe first results on subharmonics were \\dots\n\\begin{proposition}\nAssume $H'(0)=0$ and $ H(0)=0$. 
Set \\dots\n\\end{proposition}\n\\begin{proof}[of proposition]\nCondition (8) means that, for every $\\delta'>\\delta$, there is\nsome $\\varepsilon>0$ such that \\dots \\qed\n\\end{proof}\n\\begin{example}[{{\\rmfamily External forcing}}]\nConsider the system \\dots\n\\end{example}\n\\begin{corollary}\nAssume $H$ is $C^{2}$ and\n$\\left(a_{\\infty}, b_{\\infty}\\right)$-subquadratic\nat infinity. Let \\dots\n\\end{corollary}\n\\begin{lemma}\nAssume that $H$ is $C^{2}$ on $\\bbbr^{2n}\\backslash \\{0\\}$\nand that $H''(x)$ is \\dots\n\\end{lemma}\n\\begin{theorem}[Ghoussoub-Preiss]\nLet $X$ be a Banach Space and $\\Phi:X\\to\\bbbr$ \\dots\n\\end{theorem}\n\\begin{definition}\nWe shall say that a $C^{1}$ function $\\Phi:X\\to\\bbbr$ satisfies \\dots\n\\end{definition}\n}\\setcounter{section}{\\value{save}}\n\\section{Fine Tuning of the Text}\nThe following should be used to improve the readability of the text:\n\\begin{flushleft}\n\\begin{tabular}{@{}p{.19\\textwidth}p{.79\\textwidth}}\n\\verb|\\,| & a thin space, e.g.\\ between numbers or between units\n and num\\-bers; a line division will not be made\n following this space\\\\\n\\verb|--| & en dash; two strokes, without a space at either end\\\\\n\\verb*| -- |& en dash; two strokes, with a space at either end\\\\\n\\verb|-| & hyphen; one stroke, no space at either end\\\\\n\\verb|$-$| & minus, in the text {\\em only} \\\\[8mm]\n{\\em Input} & \\verb|21\\,$^{\\circ}$C etc.,|\\\\\n & \\verb|Dr h.\\,c.\\,Rockefellar-Smith \\dots|\\\\\n & \\verb|20,000\\,km and Prof.\\,Dr Mallory \\dots|\\\\\n & \\verb|1950--1985 \\dots|\\\\\n & \\verb|this -- written on a computer -- is now printed|\\\\\n & \\verb|$-30$\\,K \\dots|\\\\[3mm]\n{\\em Output}& 21\\,$^{\\circ}$C etc., Dr h.\\,c.\\,Rockefellar-Smith \\dots\\\\\n & 20,000\\,km and Prof.\\,Dr Mallory \\dots\\\\\n & 1950--1985 \\dots\\\\\n & this -- written on a computer -- is now printed\\\\\n & $-30$\\,K \\dots\n\\end{tabular}\n\\end{flushleft}\n\\section {Special 
Typefaces}\nNormal type (roman text) need not be coded. {\\itshape Italic}\n(\\verb|{\\em }| better still \\verb|\\emph{}|) or, if\nnecessary, {\\bfseries boldface} should be used for emphasis.\\\\[6pt]\n\\begin{minipage}[t]{\\textwidth}\n\\begin{flushleft}\n\\begin{tabular}{@{}p{.25\\textwidth}@{\\hskip6pt}p{.73\\textwidth}@{}}\n\\verb|{\\itshape Text}| & {\\itshape Italicized Text}\\\\[2pt]\n\\verb|{\\em Text}| & {\\em Emphasized Text --\n if you would like to emphasize a {\\em definition} within an\n italicized text (e.g.\\ of a {\\em theorem)} you should code the\n expression to be emphasized by} \\verb|\\em|.\\\\[2pt]\n\\verb|{\\bfseries Text}|& {\\bfseries Important Text}\\\\[2pt]\n\\verb|\\vec{Symbol}| & Vectors may only appear in math mode. The default\n \\LaTeX{} vector symbol has been adapted\\footnotemark\\\n to LLNCS conventions.\\\\[2pt]\n & \\verb|$\\vec{A \\times B\\cdot C}| yields $\\vec{A\\times B\\cdot C}$\\\\\n & \\verb|$\\vec{A}^{T} \\otimes \\vec{B} \\otimes|\\\\\n & \\verb|\\vec{\\hat{D}}$|yields $\\vec{A}^{T} \\otimes \\vec{B} \\otimes\n\\vec{\\hat{D}}$\n\\end{tabular}\n\\end{flushleft}\n\\end{minipage}\n\n\\footnotetext{If you absolutely must revive the original \\LaTeX{}\ndesign of the vector symbol (as an arrow accent), please specify the\noption \\texttt{[orivec]} in the \\texttt{documentclass} line.}\n\\newpage\n\\section {Footnotes}\nFootnotes within the text should be coded:\n\\begin{verbatim}\n\\footnote{Text}\n\\end{verbatim}\n{\\itshape Sample Input}\n\\begin{flushleft}\nText with a footnote\\verb|\\footnote{The |{\\tt footnote is automatically\nnumbered.}\\verb|}| and text continues \\dots\n\\end{flushleft}\n{\\itshape Sample Output}\n\\begin{flushleft}\nText with a footnote\\footnote{The footnote is automatically numbered.}\nand text continues \\dots\n\\end{flushleft}\n\\section {Lists}\nPlease code lists as described below:\\\\[2mm]\n{\\itshape Sample Input}\n\\begin{verbatim}\n\\begin{enumerate}\n \\item First item\n \\item 
Second item\n \\begin{enumerate}\n \\item First nested item\n \\item Second nested item\n \\end{enumerate}\n \\item Third item\n\\end{enumerate}\n\\end{verbatim}\n{\\itshape Sample Output}\n \\begin{enumerate}\n\\item First item\n\\item Second item\n \\begin{enumerate}\n \\item First nested item\n \\item Second nested item\n \\end{enumerate}\n\\item Third item\n\\end{enumerate}\n\\section {Figures}\nFigure environments should be inserted after (not in)\nthe paragraph in which the figure is first mentioned.\nThey will be numbered automatically.\n\nPreferably the images should be enclosed as PostScript files -- best as\nEPS data using the epsfig package.\n\nIf you cannot include them into your output this way and use other\ntechniques for a separate production,\nthe figures (line drawings and those containing halftone inserts\nas well as halftone figures) {\\em should not be pasted into your\nlaserprinter output}. They should be enclosed separately in camera-ready\nform (original artwork, glossy prints, photographs and\/or slides). The\nlettering should be suitable for reproduction, and after a\nprobably necessary reduction the height of capital letters should be at\nleast 1.8\\,mm and not more than 2.5\\,mm.\nCheck that lines and other details are uniformly black and\nthat the lettering on figures is clearly legible.\n\nTo leave the desired amount of space for the height of\nyour figures, please use the coding described below.\nAs can be seen in the output, we will automatically\nprovide 1\\,cm space above and below the figure,\nso that you should only leave the space equivalent to the size of the\nfigure itself. 
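When the figure is available electronically as recommended above, the reserved space is not needed and the image can be included directly; a minimal sketch (the file name `eagle.eps' and the width are placeholder assumptions, not part of the class documentation):

```latex
% Direct EPS inclusion with the epsfig package recommended above;
% eagle.eps is a hypothetical file name:
\begin{figure}
\epsfig{file=eagle.eps,width=0.6\textwidth}
\caption{This is the caption of the figure displaying a white eagle and
a white horse on a snow field}
\end{figure}
```

In this case no \verb|\vspace| is required, since the included image itself occupies the space.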
Please note that ``\\verb|x|'' in the following\ncoding stands for the actual height of the figure:\n\\begin{verbatim}\n\\begin{figure}\n\\vspace{x cm}\n\\caption[ ]{...text of caption...} (Do type [ ])\n\\end{figure}\n\\end{verbatim}\n\\begin{flushleft}\n{\\itshape Sample Input}\n\\end{flushleft}\n\\begin{verbatim}\n\\begin{figure}\n\\vspace{2.5cm}\n\\caption{This is the caption of the figure displaying a white\neagle and a white horse on a snow field}\n\\end{figure}\n\\end{verbatim}\n\\begin{flushleft}\n{\\itshape Sample Output}\n\\end{flushleft}\n\\begin{figure}\n\\vspace{2.5cm}\n\\caption{This is the caption of the figure displaying a white eagle and\na white horse on a snow field}\n\\end{figure}\n\\section{Tables}\nTable captions should be treated\nin the same way as figure legends, except that\nthe table captions appear {\\itshape above} the tables. The tables\nwill be numbered automatically.\n\\subsection{Tables Coded with \\protect\\LaTeX{}}\nPlease use the following coding:\\\\[2mm]\n{\\itshape Sample Input}\n\\begin{verbatim}\n\\begin{table}\n\\caption{Critical $N$ values}\n\\begin{tabular}{llllll}\n\\hline\\noalign{\\smallskip}\n${\\mathrm M}_\\odot$ & $\\beta_{0}$ & $T_{\\mathrm c6}$ & $\\gamma$\n & $N_{\\mathrm{crit}}^{\\mathrm L}$\n & $N_{\\mathrm{crit}}^{\\mathrm{Te}}$\\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n 30 & 0.82 & 38.4 & 35.7 & 154 & 320 \\\\\n 60 & 0.67 & 42.1 & 34.7 & 138 & 340 \\\\\n120 & 0.52 & 45.1 & 34.0 & 124 & 370 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\end{verbatim}\n\n\\medskip\\noindent{\\itshape Sample Output}\n\\begin{table}\n\\caption{Critical $N$ values}\n\\begin{center}\n\\renewcommand{\\arraystretch}{1.4}\n\\setlength\\tabcolsep{3pt}\n\\begin{tabular}{llllll}\n\\hline\\noalign{\\smallskip}\n${\\mathrm M}_\\odot$ & $\\beta_{0}$ & $T_{\\mathrm c6}$ & $\\gamma$\n & $N_{\\mathrm{crit}}^{\\mathrm L}$\n & $N_{\\mathrm{crit}}^{\\mathrm{Te}}$\\\\\n\\noalign{\\smallskip}\n\\hline\n\\noalign{\\smallskip}\n 
30 & 0.82 & 38.4 & 35.7 & 154 & 320 \\\\\n 60 & 0.67 & 42.1 & 34.7 & 138 & 340 \\\\\n120 & 0.52 & 45.1 & 34.0 & 124 & 370 \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nBefore continuing your text you need an empty line. \\dots\n\n\\vspace{3mm}\nFor further information you will find a complete description of\nthe tabular environment\non p.~62~ff. and p.~204 of the {\\em \\LaTeX{} User's Guide \\& Reference\nManual\\\/} by Leslie Lamport.\n\\subsection{Tables Not Coded with \\protect\\LaTeX{}}\nIf you do not wish to code your table using \\LaTeX{}\nbut prefer to have it reproduced separately,\nproceed as for figures and use the following coding:\\\\[2mm]\n{\\itshape Sample Input}\n\\begin{verbatim}\n\\begin{table}\n\\caption{text of your caption}\n\\vspace{x cm} \n\\end{table}\n\\end{verbatim}\n\\subsection{Signs and Characters}\n\\subsubsection*{Special Signs.}\nYou may need to use special signs. The available ones are listed in the\n{\\em \\LaTeX{} User's Guide \\& Reference Manual\\\/} by Leslie Lamport,\npp.~41\\,ff.\nWe have created further symbols for math mode (enclosed in \\$):\n\\begin{center}\n\\begin{tabular}{l@{\\hspace{1em}yields\\hspace{1em}}\nc@{\\hspace{3em}}l@{\\hspace{1em}yields\\hspace{1em}}c}\n\\verb|\\grole| & $\\grole$ & \\verb|\\getsto| & $\\getsto$\\\\\n\\verb|\\lid| & $\\lid$ & \\verb|\\gid| & $\\gid$\n\\end{tabular}\n\\end{center}\n\\subsubsection*{Gothic (Fraktur).}\nIf gothic letters are {\\itshape necessary}, please use those of the\nrelevant \\AmSTeX{} alphabet which are available using the amstex\npackage of the American Mathematical Society.\n\nIn \\LaTeX{} only the following gothic letters are available:\n\\verb|$\\Re$| yields $\\Re$ and \\verb|$\\Im$| yields $\\Im$. These should\n{\\itshape not\\\/} be used when you need gothic letters for your contribution.\nUse \\AmSTeX{} gothic as explained above. 
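By way of illustration, a sketch using the present-day AMS font packages in place of the older amstex package mentioned above (an assumption of this note: \verb|amssymb| loads \verb|amsfonts|, which provides the \verb|\mathfrak| Fraktur alphabet):

```latex
% Gothic (Fraktur) letters via the modern AMS fonts; \mathfrak is
% provided by amsfonts, which amssymb loads automatically:
\documentclass{llncs}
\usepackage{amssymb}
\begin{document}
Let $\mathfrak{a}$ be an ideal of the Lie algebra $\mathfrak{g}$ \dots
\end{document}
```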
For the real and the imaginary\nparts of a complex number within math mode you should use instead:\n\\verb|$\\mathrm{Re}$| (which yields Re) or \\verb|$\\mathrm{Im}$| (which\nyields Im).\n\\subsubsection*{Script.}\nFor script capitals use the coding\n\\begin{center}\n\\begin{tabular}{l@{\\hspace{1em}which yields\\hspace{1em}}c}\n\\verb|$\\mathcal{AB}$| & $\\mathcal{AB}$\n\\end{tabular}\n\\end{center}\n(see p.~42 of the \\LaTeX{} book).\n\\subsubsection*{Special Roman.}\nIf you need other symbols than those below, you could use\nthe blackboard bold characters of \\AmSTeX{}, but there might arise\ncapacity problems\nin loading additional \\AmSTeX{} fonts. Therefore we created\nthe blackboard bold characters listed below.\nSome of them are not esthetically\nsatisfactory. This need not deter you from using them:\nin the final printed form they will be\nreplaced by the well-designed MT (monotype) characters of\nthe phototypesetting machine.\n\\begin{flushleft}\n\\begin{tabular}{@{}ll@{ yields }\nc@{\\hspace{1.em}}ll@{ yields }c}\n\\verb|\\bbbc| & (complex numbers) & $\\bbbc$\n & \\verb|\\bbbf| & (blackboard bold F) & $\\bbbf$\\\\\n\\verb|\\bbbh| & (blackboard bold H) & $\\bbbh$\n & \\verb|\\bbbk| & (blackboard bold K) & $\\bbbk$\\\\\n\\verb|\\bbbm| & (blackboard bold M) & $\\bbbm$\n & \\verb|\\bbbn| & (natural numbers N) & $\\bbbn$\\\\\n\\verb|\\bbbp| & (blackboard bold P) & $\\bbbp$\n & \\verb|\\bbbq| & (rational numbers) & $\\bbbq$\\\\\n\\verb|\\bbbr| & (real numbers) & $\\bbbr$\n & \\verb|\\bbbs| & (blackboard bold S) & $\\bbbs$\\\\\n\\verb|\\bbbt| & (blackboard bold T) & $\\bbbt$\n & \\verb|\\bbbz| & (whole numbers) & $\\bbbz$\\\\\n\\verb|\\bbbone| & (symbol one) & $\\bbbone$\n\\end{tabular}\n\\end{flushleft}\n\\begin{displaymath}\n\\begin{array}{c}\n\\bbbc^{\\bbbc^{\\bbbc}} \\otimes\n\\bbbf_{\\bbbf_{\\bbbf}} \\otimes\n\\bbbh_{\\bbbh_{\\bbbh}} \\otimes\n\\bbbk_{\\bbbk_{\\bbbk}} \\otimes\n\\bbbm^{\\bbbm^{\\bbbm}} \\otimes\n\\bbbn_{\\bbbn_{\\bbbn}} 
\\otimes\n\\bbbp^{\\bbbp^{\\bbbp}}\\\\[2mm]\n\\otimes\n\\bbbq_{\\bbbq_{\\bbbq}} \\otimes\n\\bbbr^{\\bbbr^{\\bbbr}} \\otimes\n\\bbbs^{\\bbbs_{\\bbbs}} \\otimes\n\\bbbt^{\\bbbt^{\\bbbt}} \\otimes\n\\bbbz \\otimes\n\\bbbone^{\\bbbone_{\\bbbone}}\n\\end{array}\n\\end{displaymath}\n\\section{References}\n\\label{refer}\nThere are three reference systems available; only one, of course,\nshould be used for your contribution. With each system (by\nnumber only, by letter-number or by author-year) a reference list\ncontaining all citations in the\ntext, should be included at the end of your contribution placing the\n\\LaTeX{} environment \\verb|thebibliography| there.\nFor an overall information on that environment\nsee the {\\em \\LaTeX{} User's Guide \\& Reference\nManual\\\/} by Leslie Lamport, p.~71.\n\nThere is a special {\\sc Bib}\\TeX{} style for LLNCS that works along\nwith the class: \\verb|splncs.bst|\n-- call for it with a line \\verb|\\bibliographystyle{splncs}|.\nIf you plan to use another {\\sc Bib}\\TeX{} style you are customed to,\nplease specify the option \\verb|[oribibl]| in the\n\\verb|documentclass| line, like:\n\\begin{verbatim}\n\\documentclass[oribibl]{llncs}\n\\end{verbatim}\nThis will retain the original \\LaTeX{} code for the bibliographic\nenvironment and the \\verb|\\cite| mechanism that many {\\sc Bib}\\TeX{}\napplications rely on.\n\\subsection{References by Letter-Number or by Number Only}\nReferences are cited in the text -- using the \\verb|\\cite|\ncommand of \\LaTeX{} -- by number or by letter-number in square\nbrackets, e.g.\\ [1] or [E1, S2], [P1], according to your use of the\n\\verb|\\bibitem| command in the \\verb|thebibliography| environment. The\ncoding is as follows: if you choose your own label for the sources by\ngiving an optional argument to the \\verb|\\bibitem| command the citations\nin the text are marked with the label you supplied. 
Otherwise a simple\nnumbering is done, which is preferred.\n\\begin{verbatim}\nThe results in this section are a refined version\nof \\cite{clar:eke}; the minimality result of Proposition~14\nwas the first of its kind.\n\\end{verbatim}\nThe above input produces the citation: ``\\dots\\ refined version of\n[CE1]; the min\\-i\\-mality\\dots''. Then the \\verb|\\bibitem| entry of\nthe \\verb|thebibliography| environment should read:\n\\begin{verbatim}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeord b/data_all_eng_slimpj/shuffled/split2/finalzzeord new file mode 100644 index 0000000000000000000000000000000000000000..21e79b21db3f1244459087481de03c1b024bf30c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeord @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nMany dynamical properties, such as hyperbolicity, are robust in\n$C^r$-topology of diffeomorphisms. That is, the property holds under any appropriate small\nperturbation of the dynamical system. However, many others\ninteresting phenomena, non-hyperbolic strange attractors for\ninstance, are not stable in that sense. Hence, the question that\narises is whether such dynamical properties could be survive if not for\nall perturbations but, at least, for most. For one-dimensional\ndynamics the Malliavin-Shavgulidze measure has been recently\nproposed as a good analogy to the Lebesgue measure in order to\nquantify this abundance in a probabilistic\nsense~\\cite{Triestino}. However, in higher dimensions, it is not\nknown how to introduce a good notion of a measure in the space of\ndynamical systems. Kolmogorov in his plenary talk ending the ICM 1954\nproposed to consider finite dimensional parametric families taking into\naccount the Lebesgue measure in the parameter space\n(see~\\cite{hunt2010prevalence}). 
A parametric family $(f_a)_a$\nexhibits \\emph{persistently} a property $\\mathscr{P}$ if\nit is observed for $f_a$ in a set of parameter values\n$a$ with positive Lebesgue measure. Furthermore, the property\n$\\mathscr{P}$ is called \\emph{typical} (in the sense of\nKolmogorov) if there is a Baire (local) generic set of parametric\nfamilies exhibiting the property $\\mathscr{P}$\npersistently with full Lebesgue measure. In this direction, a\nmilestone in recent history of dynamical systems has been the\npaper of Berger~\\cite{Ber16} (see also~\\cite{Ber17}) where it was\nproven that the coexistence of infinitely many periodic sinks is\nKolmogorov typical in parametric families of endomorphisms in\ndimension two and diffeomorphisms in higher dimensions.\nThe work of Berger extends, in a measurable sense according to\nKolmogorov, the important results in the 70's due to\nNewhouse~\\cite{New74,New79} (see\nalso~\\cite{robinson1983bifurcation,PT93,PV94,GST93a}) on the local\ngenericity of the coexistence of infinitely many hyperbolic\nattractors (sinks) in $C^r$-topology. This celebrated result was coined as\n\\emph{Newhouse phenomena}. Mimicking this terminology we will refer\nto the Kolmogorov typical coexistence\nof infinitely many attractors as \\emph{Berger phenomena}.\n\nNewhouse phenomena has been shown to occur in open sets of\ndiffeomorphisms having a dense subset of systems displaying\nhomoclinic tangencies associated with saddle periodic points. Such\nan open set of dynamical systems is called a \\emph{Newhouse\ndomain}. In certain cases, these open sets are also the support of\nmany other interesting phenomena such as the coexistence of\ninfinitely many attracting invariant circles~\\cite{GST08} and\ninfinitely many strange\nattractors~\\cite{colli1998infinitely,leal2008high}, or wandering\ndomains~\\cite{KS17} among others. Berger phenomena also occurs\nwith respect to some open set but now in the topology of\nparametric families. 
Namely, in open sets where the families\nhaving persistent homoclinic tangencies are dense. As before,\nmimicking the terminology, we will refer to these open sets of\nparametric families as \\emph{Berger domains}. In the original\npaper of Berger~\\cite{Ber16,Ber17}, these open sets were\nimplicitly constructed for sectional dissipative dynamics. In this\npaper, we will introduce formally the notion of a Berger domain\nand construct new examples, not necessarily for sectional\ndissipative dynamics. As an application, we will prove Berger\nphenomena for a certain type of non-sectional dissipative Berger\ndomains and obtain that the coexistence of infinitely many\nattracting invariant circles is also Kolmogorov typical.\n\n\n\n\n\\subsection{Degenerate unfoldings}\n\nA $C^r$-diffeomorphism $f$ of a manifold $\\mathcal{M}$ has a\n\\emph{homoclinic tangency} if there is a pair of points ${P}$ and\n$Q$, in the same transitive hyperbolic set, so that the unstable\ninvariant manifold of ${P}$ and the stable invariant manifold of\n$Q$ have a non-transverse intersection at a point $Y$. The\ntangency is said to be of codimension $c>0$ if\n\\begin{equation*}\nc= c_Y(W^u(P),W^s(Q)) \\eqdef \\dim M - \\dim (T_YW^u(P)+T_YW^s(Q)).\n\\end{equation*}\nThis number measures how far from being transverse is the intersection between the invariant\nmanifolds at $Y$. Since the codimension\nof $W^u(P)$ coincides with the dimension of $W^s(Q)$ we have, in\nthis case, that the codimension $c$ at $Y$ coincides with $\\dim T_Y\nW^u(P)\\cap T_Y W^s(Q)$.\nA homoclinic tangency can be unfolded by considering a\n$k$-parameter family in the $C^{d,r}$-topology with $1\\leq d\\leq\nr$. 
That is, a $C^d$-family $(f_a)_a$ of $C^r$-diffeomorphisms\npa\\-ra\\-me\\-te\\-ri\\-zed by $a\\in \\mathbb{I}^k$ with $f_{a_0}=f$\nwhere $\\mathbb{I}=[-1,1]$ and $k\\geq 1$ (see~\\S\\ref{sec:topology}\nfor a more precise definition).\nThe unfolding of {a\ntangency $Y$ of codimension $c$} is said to be\n\\emph{$C^d$-{degenerate}} at $a=a_0$ if there are points $p_a \\in\nW^u(P_a)$, $q_a \\in W^s(Q_a)$ and $c$-dimensional subspaces $E_a$,\n$F_a$ of $T_{p_a}W^u(P_a)$ and $T_{q_a}W^s(Q_a)$ respectively\nsuch that\n$$\n d(p_a,q_a)=o(\\|a-a_0\\|^{d}) \\ \\ \\text{and} \\ \\\nd(E_a,F_a)={o(\\|a-a_0\\|^{d})} \\ \\ \\text{at $a=a_0$}.\n$$\nHere $P_a$ and $Q_a$ are the continuations of $P_{a_0}=P$ and\n$Q_{a_0}=Q$ for $f_a$. Also $p_{a_0}=q_{a_0}=Y$ and $(p_a,E_a)$,\n$(q_a,F_a)$ vary $C^d$-continuously with respect to the parameter\n$a\\in \\mathbb{I}^k$. Observe that in this case it is necessary to\nassume that $d<r$.\n\nAn open set $\\mathcal{N}$ of $C^r$-diffeomorphisms is called a\n\\emph{$C^r$-Newhouse domain} of homoclinic tangencies (of\ncodimension $c>0$) if there exists a dense set\n$\\mathcal{D}$ in $\\mathcal{N}$ such that every $g\\in \\mathcal{D}$\nhas a homoclinic tangency (of codimension $c>0$) associated with\nsome hyperbolic periodic saddle. A $C^r$-Newhouse domain\n$\\mathcal{N}$ ($r\\geq 1$) of homoclinic tangencies (of codimension\none) associated with sectionally dissipative periodic points gives\nrise to the $C^r$-Newhouse phenomenon. Namely, there exists a\nresidual subset $\\mathcal{R}$ of $\\mathcal{N}$ where every $g\\in\n\\mathcal{R}$ has infinitely many hyperbolic\nperiodic attractors\n\\cite{New74,New79,PT93,GST93a,PV94,Ro95,GST08}. 
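To fix ideas, the definition of a $C^d$-degenerate unfolding can be checked in a schematic one-parameter picture (an illustration of ours, not taken from the cited works):

```latex
% Schematic illustration in the plane (k = 1, c = 1): in local coordinates
%   W^s(Q_a) = { y = 0 }  and  W^u(P_a) = { y = x^2 + \mu(a) },  \mu(0) = 0,
% so there is a quadratic tangency at the origin for a = a_0 = 0. Choosing
%   p_a = (0, \mu(a)) \in W^u(P_a)  and  q_a = (0, 0) \in W^s(Q_a),
% with E_a = F_a the horizontal direction (both curves have horizontal
% tangent at x = 0), one gets d(E_a, F_a) = 0 and
\begin{equation*}
  d(p_a, q_a) = |\mu(a)| ,
\end{equation*}
% so with this choice the unfolding is C^d-degenerate at a = 0 precisely
% when \mu(a) = o(|a|^d); e.g. \mu(a) = a^{d+1} unfolds C^d-degenerately,
% while the generic unfolding \mu(a) = a does not.
```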
As Berger showed\nin~\\cite{Ber16}, open sets of families displaying degenerate\nunfoldings play the same role for parametric families as Newhouse\ndomains do for the case free of parameters.\nFor this reason\nmimicking the above terminology,\none could say that:\n\\begin{enumerate}[label={}, rightmargin=2em, leftmargin=2em]\n\\item \\it An open set $\\mathscr{U}$ of $k$-parameter $C^d$-families of\n$C^r$-diffeomorphisms is called a \\emph{$C^{d,r}$-Berger domain}\nof paratangencies (of codimension $c>0$) if the following holds.\nThere exists a dense set $\\mathscr{D}\\subset \\mathbb{I}^k\\times\n\\mathscr{U}$ such that for any $(a_0, f)\\in \\mathscr{D}$, the\nfamily $f=(f_a)_a$ displays a $C^d$-degenerate unfolding at\n$a=a_0$ of a homoclinic tangency (of codimension $c>0$) associated\nwith a hyperbolic periodic~saddle.\n\\end{enumerate}\nFor codimension $c=1$, this definition appears implicitly\nin~\\cite{Ber16} where\nit is proven that the\ncoexistence of infinitely many hyperbolic periodic attractors is\nKolmogorov typical.\n\nActually, by modifying the\ninitial construction Berger showed a stronger result\nin~\\cite{Ber16,Ber17} that we will refer to as\n\\emph{$C^{d,r}$-Berger phenomena}: the existence of a residual set\nin a $C^{d,r}$-open set of parametric families where each family\nhas\ninfinitely many sinks at\n\\emph{any} parameter.\nThe\nfollowing\nstronger version of the above tentative\ndefinition allowed Berger to prove such a result:\n\\begin{defi} \\label{def:Berger-domain}\nAn open set $\\mathscr{U}$ of $k$-parameter $C^d$-families of\n$C^r$-diffeomorphisms is called \\emph{$C^{d,r}$-Berger domain} of\npersistent homoclinic tangencies\n(of codimension $c>0$) if there exists a dense subset\n$\\mathscr{D}$ of $\\mathscr{U}$ such that for any $f=(f_a)_a \\in\n\\mathscr{D}$ there is a covering of $\\mathbb{I}^k$ by open balls\n$J_i$\nhaving the following property:\nthere is a continuation of a\nsaddle periodic point $Q_a$ having a homoclinic 
tangency $Y_{a}$\n(of codimension $c>0$) which depends $C^d$-continuously on the\nparameter $a\\in J_i$.\n\\end{defi}\n\n\nObserve that the first tentative definition above requires $d c^2+c$ admits an open\nset $\\mathscr{U}$ of $k$-parameter $C^d$-families of\n$C^r$-diffeomorphisms with {$00$.\n\\end{mainthm}\n}\n\nThe proof of Theorem~\\ref{thmD} is based on the notion of a\n$C^d$-degenerate unfolding of homoclinic tangencies and previous\nresults from~\\cite{BR20}. For this reason, we have only been able\nto show the existence of $C^{d,r}$-Berger for families of\ndiffeomorphisms with $d3$, any extended unstable manifold is transverse to the leaf of\nthe strong stable foliation which passes through the tangency\npoint. Observe that these conditions are generic.}, also in the sense of ~\\cite{GST08}.\n\n\n\n\n\n\\subsection{Berger phenomena:}\nThe $C^{d,r}$-Berger phenomena was shown in~\\cite{Ber16,Ber17} for\nsectionally dissipative families in dimension\n$m\\geq 3$. We will obtain similar results for\nfamilies that are not sectionally dissipative\nby working with a\n$C^{d,r}$-Berger domain\n$\\mathscr{U}$ of type $(2,1)$ with unstable index one. 
That is,\nthe persistent homoclinic tangencies are simple and associated with\nhyperbolic periodic points having multipliers\n$\\lambda_1,\\dots,\\lambda_{m-1}$ and $\\gamma$ satisfying\n\\begin{equation} \\label{eq1}\n |\\lambda_j|<|\\lambda|<1<|\\gamma| \\ \\ \\ \\text{and \\ \\ \\ $|\\lambda^2\\gamma|<1<|\\lambda\\gamma|$} \\quad \\text{for $j\\not= 1,2$ where\n $\\lambda_{1,2}=\\lambda e^{\\pm i \\varphi}$ with $\\varphi\\not=0,\\pi$.}\n\\end{equation}\nIn the following result we obtain Berger phenomena with respect to\nattracting invariant circles and hyperbolic sinks\nfor these new types of\nBerger domains.\n\n\n\\begin{mainthm} \\label{thm-thmA} Let $\\mathscr{U}$ be\na $C^{d,r}$-Berger domain whose persistent homoclinic tangencies\nare simple and associated with hyperbolic periodic points having multipliers\nsatisfying~\\eqref{eq1}. Then there exists a residual set\n$\\mathcal{R}\\subset \\mathscr{U}$ such that for every family\n$f=(f_a)_a \\in \\mathcal{R}$ and every $a\\in \\mathbb{I}^k$, the\ndiffeomorphism $f_a$ has simultaneously\n\\begin{enumerate}\n\\item[-] infinitely many normally hyperbolic attracting invariant circles and\n\\item[-] infinitely many hyperbolic periodic sinks.\n\\end{enumerate}\n\\end{mainthm}\n\n\n\n\n\n\n\\subsection{Topology of families of diffeomorphisms}\n\\label{sec:topology} Set $\\mathbb{I}=[-1,1]$. Given $00$ for families of diffeomorphisms with $d u^2+u \\geq 2$ (see \\S\\ref{BergerDomain}\nand Definition~\\ref{def:Berger-domain}).\n\nNow we will introduce the family that will be ``the organizing\ncenter'' of Berger domains. To do this, we need some results\nfrom~\\cite{BR20}. 
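Before stating them, we record an illustrative numerical instance of the multiplier condition~\eqref{eq1} (our own choice, for orientation only):

```latex
% Take m = 4 and multipliers \lambda_{1,2} = (1/2) e^{\pm i\pi/2},
% \lambda_3 = 1/4 and \gamma = 3, so that \lambda = 1/2 and
\begin{equation*}
  |\lambda_3| = \tfrac14 < |\lambda| = \tfrac12 < 1 < |\gamma| = 3,
  \qquad
  |\lambda^{2}\gamma| = \tfrac34 < 1 < \tfrac32 = |\lambda\gamma| .
\end{equation*}
% Since |\lambda\gamma| > 1 the periodic point is not sectionally
% dissipative, while |\lambda^{2}\gamma| < 1 still gives contraction of
% 3-dimensional volumes along the central directions.
```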
In~\\cite[Thm.~B]{BR20} we construct an open set\n$\\mathscr{U}\\subset C^{d,r}(\\mathbb{I}^k,\\mathcal{M})$\nfor $0u+u^2$ having a family of\n$cs$-blenders $\\Gamma=(\\Gamma_a)_a$ with unstable dimension $u\\geq\n1$\nand a family of folding manifolds $\\mathcal{S}=(\\mathcal{S}_a)_a$\nof dimension $m-u$ satisfying the following:\n\nFor any $a_0\\in \\mathbb{I}^k$, any family $g=(g_a)_a$ close enough\nto $f$ in the $C^{d,r}$-topology and any $C^{d,r}$-perturbation\n$\\mathcal{L}=(\\mathcal{L}_a)_a$ of $\\mathcal{S}$ there exists\n$z=(z_a)_a\\in C^d(\\mathbb{I}^k,\\mathcal{M})$ such that\n {\n\\begin{enumerate}\n\\item $z_a\\in\\Gamma_{g,a}$, where $\\Gamma_{g,a}$ denotes the\ncontinuation for $g_a$ of the blender $\\Gamma_a$,\n\\item the family of local unstable manifolds\n$\\mathcal{W}=(W^u_{loc}(z_a;g_a))_a$ and $\\mathcal{L}$ have a\ntangency\n of dimension $u$ at $a=a_0$ which unfolds $C^d$-degenerately.\n\\end{enumerate}\n}\n\\end{thm}\n\n\n\nLet us consider the family $\\Phi=(\\Phi_a)_a$ given in the above\ntheorem. 
Assume in addition the following hypotheses:\n\\begin{enumerate}[label=(H\\arabic*)]\n\\item \\label{H1}\n$\\Phi_a$ has an equidimensional cycle between\nsaddle periodic points $P_a$ and $Q_a$,\n\\item \\label{H2} $P_a$ belongs to $\\Gamma_a$ and the folding manifold $S_a$\nis contained in $W^s(Q_a,\\Phi_a)$.\n\\end{enumerate}\nTheorem~\\ref{thm-BR20} implies that the family $\\Phi=(\\Phi_a)_a$\nunder the above assumptions~\\ref{H1} and~\\ref{H2} defines a\n$C^{d,r}$-open set $\\mathscr{U}=\\mathscr{U}(\\Phi)$ of\n$k$-parameter $C^d$-families of $C^r$-diffeomorphisms\nsuch that any $g=(g_a)_a \\in \\mathscr{U}$ is a $C^d$-degenerate\nunfolding at any parameter $a=a_0$ of a tangency of dimension $u$.\nThe tangency is between $W^s(Q_{a_0},g_{a_0})$ and the local\nunstable manifold of some point in the blender $\\Gamma_{a_0}$ of\n$g_{a_0}$.\nSince the codimension of $W^s(Q_{a_0},g_{a_0})$ and the\ndimension of the local unstable manifolds of $\\Gamma_{a_0}$\ncoincide, the tangency also has codimension $u$. We will prove\nthat the open set $\\mathscr{U}$ is a $C^{d,r}$-Berger domain. To\ndo this, we will first need the following result,\nsee~\\cite[Lemma~3.7]{Ber16} and~\\cite[Lemma~3.2]{Ber17}.\n\n\n\n\\begin{prop}[Parametrized Inclination Lemma] \\label{thm-Parametrized-Inclination-Lemma}\nLet $g=(g_a)_a$ be a $C^{d,r}$-family of diffeomorphisms having a\nfamily $K=(K_a)_a$ of transitive hyperbolic sets $K_a$ with\nunstable dimension $d_u$. Let $C_a$ be a $C^r$-submanifold of\ndimension $d_u$ that intersects transversally a local stable\nmanifold $W^s_{loc}(x_a,g_a)$ with $x_a\\in K_a$ at a point $z_a$\nwhich we assume depends $C^d$-continuously on $a \\in\n\\mathbb{I}^k$. 
Then, for any $P_a\\in K_a$\nthere exists a $d_u$-dimensional disc $D_a \\subset C_a$\ncontaining $z_a$ such that the family of discs $D=(g^n_a(D_a))_a$\nis $C^{d,r}$-close to $W=(W^u_{loc}(P_a,g_a))_a$, for $n$ sufficiently large.\n\\end{prop}\n\n\n\n\nUsing the parametrized inclination lemma, the following\nproposition proves that the above open set $\\mathscr{U}$ is a\nBerger domain according to the first tentative (weaker) definition given\nin~\\S\\ref{BergerDomain}.\n\n\\begin{prop} \\label{prop-dense-of-paratangencies}\nFor any $a_0\\in \\mathbb{I}^k$ and $g\\in \\mathscr{U}$, there is\n$C^{d,r}$-arbitrarily close to $g$ a family $f=(f_a)_a$ such that\n$f_{a}=g_a$ for any parameter far from a small neighborhood of\n$a_0$ and {which displays a $C^d$-degenerate unfolding at $a=a_0$\nof a homoclinic tangency of codimension $u$ associated with the\nperiodic point $Q_{a_0}(f)$.}\n\\end{prop}\n\\begin{proof}\nBy construction, any $g=(g_a)_a\\in \\mathscr{U}$ is a\n$C^d$-degenerate unfolding at $a=a_0$ of a tangency between\n$W^s(Q_{a_0},g_{a_0})$ and some local unstable manifold\n$W^u_{loc}(x_{a_0},g_{a_0})$ of a point $x_{a_0}\\in \\Gamma_{a_0}$.\nFrom the assumptions~\\ref{H1} and~\\ref{H2} we have that both $x_a$\nand $Q_a$ belong to the homoclinic class $H(P_a,g_a)$ of $P_a$\nfor $g_a$. Moreover, we get that $W^{u}(Q_a,g_a)$ intersects\ntransversally $W^s_{loc}(x_a,g_a)$ at a point $z_a$ which depends\n$C^d$-continuously on $a\\in\\mathbb{I}^k$. 
Then\nProposition~\\ref{thm-Parametrized-Inclination-Lemma} implies the\nexistence of discs $D_a$ in $W^u(Q_a,g_a)$ containing $z_a$ such\nthat the family $D_n=(g^n_a(D_a))_a$ is $C^{d,r}$-close to\n$W=(W^u_{loc}(x_a,g_a))$ when $n$ is large.\nBy a small perturbation, we now will find a new family\n$C^{d,r}$-close to $g$, which is a $C^d$-degenerate unfolding at\n$a=a_0$ of a homoclinic tangency of codimension $u$ associated\nwith the continuation of the periodic point $Q_{a_0}$.\n\n\nWe take local coordinates denoted by $x$ in a neighborhood of\n$x_{a_0}$\nwhich correspond to the origin.\nAlso denote by $y$ the tangency point between\n$W^s(Q_{a_0},g_{a_0})$ and the local unstable manifold\n$W^u_{loc}(x_{a_0},g_{a_0})$.\nSince the tangency (of dimension $u$) unfolds $C^d$-degenerately,\nwe have $\\vec{p}=(p_a,E_a)_a, \\vec{q}=(q_a,F_a)_a\\in\nC^d(\\mathbb{I}^k,G_u(\\mathcal{M}))$ such that\n\\begin{align*}\nq_a \\in W^s(Q_{a},g_{a}), \\quad &\\text{and} \\quad E_a \\subset\nT_{q_{a}}W^s(Q_{a},g_{a}), \\\\\np_a \\in W^u_{loc}(x_{a},g_{a}) \\quad &\\text{and} \\quad F_a\n\\subset\nT_{p_a}W^u_{loc}(x_{a},g_{a}), \\\\\np_{a_0}=q_{a_0}=y \\quad &\\text{and} \\quad J(\\vec{p})=J(\\vec{q}).\n\\end{align*}\nTake $\\delta>0$ such that, in these local coordinates, the\n$2\\delta$-neighborhoods of $y$ and its iterations by $g_{a_0}$ and\n$g^{-1}_{a_0}$ are pairwise disjoint. Let\n$U$ be the $2\\delta$-neighborhood of $y$\nand assume that $p_a$ and $q_a$ belong to $U$ for all $a$ close\nenough to $a_0$. 
Call $C_a$ the local disc in $W^s(Q_{a},g_{a})$\ncontaining the point $q_a$ and we may suppose that $U$ is such that\nthe forward iterates of $C_a$, with respect to $g_a$,\nare disjoint from each other and from $U$.\nSince $D_n=(g^n_a(D_a))_a$ is $C^{d,r}$-close to $W$, we obtain a\n$C^{d,r}$-family $\\tau_n=(\\tau_{n,a})_a$ of diffeomorphisms of\n$\\mathbb{R}^m$ which sends, in local coordinates,\n$g^n(D_a)$ onto $W^u_{loc}(x_a,g_a)$,\nis equal to the identity\noutside of $U$ and is $C^{d,r}$-close to the constant family\n$I=(\\mathrm{id}_{\\mathbb{R}^m})_a$ as $n\\to \\infty$.\nLet $t_a$ be the point in $D_n\\subset W^u(Q_{a},g_{a})$ so that $\\tau_{n,a}(t_a)=p_a$.\n\nConsider a $C^\\infty$-bump function $ \\phi:\\mathbb{R}\\to\n\\mathbb{R}$ with support in $[-1,1]$ and equal to 1 on\n$[-1\/2,1\/2]$. Let\n$$\n\\rho: a=(a_1,\\dots,a_k) \\in \\mathbb{I}^k \\mapsto\n\\phi(a_1)\\cdot\\ldots\\cdot \\phi(a_k) \\in \\mathbb{R}.\n$$\nFor a fixed $\\alpha>0$, define the perturbation $g_n=(g_{n,a})_a$\nof $g=(g_a)_a$ by\n$$\n g_{n,a}= H_{n,a} \\circ g_a \\quad \\text{for all $a\\in \\mathbb{I}^k$,}\n$$\nwhere $H_{n,a}$ in the above local coordinates takes the form\n\\begin{align*}\n H_{n,a}(x)=x+ \\theta \\cdot (\\tau_{n,a}(x)-x)\n \\quad \\text{where} \\ \\\n \\theta=\\rho\\left(\\frac{a-a_0}{2\\alpha}\\right)\\phi\\left(\\frac{\\|x-y\\|}{2\\delta}\\right)\n\\end{align*}\nand is the identity otherwise. Observe that if $a\\not\\in a_0 +\n(-2\\alpha,2\\alpha)^k$ or $x\\not\\in U$ then $H_{n,a}(x)=x$. In\nparticular, $g_{n,a}(x)=g_a(x)$ if $a\\not\\in a_0 +\n(-2\\alpha,2\\alpha)^k$ or $x\\not\\in g_a^{-1}(U)$.\n\nOn the other hand, if $a\\in a_0 + (-\\alpha,\\alpha)^k$ and $x\\in\nU$, then $H_{n,a}(x)=\\tau_{n,a}(x)$. This implies that for $a\\in\na_0+(-\\alpha,\\alpha)^k$, the point $g_a^{-1}(t_a)$ that belongs to\n$W^u(Q_a,g_a)$ is sent by $g_{n,a}$ to $p_a=\\tau_{n,a}(t_a)$ and\ntherefore $p_a \\in W^u(Q_a,g_{n,a})$. 
Moreover, since $\\tau_{n,a}$\nis a $C^r$-diffeomorphism ($r\\geq 2$) we also have that $F_a\n\\subset T_{p_a}W^u(Q_a,g_{n,a})$.\n\n\nAt $a=a_0$ we have that $\\tau_{n,a_0}(t_{a_0})=p_{a_0}=q_{a_0}\\in\nW^s(Q_{a_0}, g_{n,a_0})$ and so the stable and unstable manifolds\nof $Q_{a_0}$ for $g_{n,a_0}$ meet at this point. Moreover, since\n$\\tau_{n,a_0}$ is a $C^r$-diffeomorphism ($r\\geq 2$) this\nintersection is still non-transverse, i.e., we have a homoclinic\ntangency of codimension~$u$. Observe that the perturbed family\n$g_{n,a}$ does not affect the disc $C_{a}$ of the stable manifold\nof $W^s(Q_a, g_a)$ in $U$. That is, $C_{a}\\subset W^s(Q_a,\ng_{n,a})$ and $E_a \\subset T_{q_a}W^s(Q_a,g_{n,a})$. Hence, since\nfrom the initial hypothesis $J(\\vec{p})=J(\\vec{q})$ we get a\n$C^d$-degenerate unfolding at $a=a_0$ of a homoclinic tangency of\ncodimension $u$ associated with the hyperbolic periodic point\n$Q_{a_0}(g_n)$. Finally, observe\n$$\n \\|f-g_n\\|_{C^{d,r}}=\\|(I-H)\\circ f\\|_{C^{d,r}} \\leq \\|f\\|_{C^{d,r}}\n \\|\\theta (\\tau_n -\n I)\\|_{C^{d,r}} \\leq \\|f\\|_{C^{d,r}} o_{\\alpha\\to 0}(\\alpha^{-d}) o_{\\delta\\to\n 0}(\\delta^{-r}) \\|\\tau_n-I\\|_{C^{d,r}}.\n$$\nSince $\\|\\tau_n-I\\|_{C^{d,r}}$ goes to zero as $n \\to \\infty$, we\ncan obtain that for a given $\\epsilon>0$, there are $n$ large\nenough so that $\\|f-g_n\\|_{C^{d,r}}<\\epsilon$.\n\\end{proof}\n\n\\begin{rem} \\label{rem-finite-points}\nNotice that the perturbation in the previous proposition,\nto create the degenerate homoclinic unfolding at $a_0$, is local\n(in the parameter space and in the manifold). 
Thus, fixing\n$N\\in\\mathbb{N}$ and a finite number of points $a_1,\\dots,a_N\\in\n\\mathbb{I}^k$, we can perform the same type of perturbation\ninductively and obtain a\ndense set of families in $\\mathscr{U}$ having degenerate unfoldings at any\n$a=a_i$ for\n$i=1,\\dots,N$.\n\\end{rem}\n\n\n\n\n\nThe following proposition is an adaptation to the context of\ndiffeomorphisms of~\\cite[Lemma~5.4]{Ber16}. Roughly speaking, this\nproposition explains how it is possible to ``stop'' a tangency for\nan interval of parameters using a degenerate unfolding,\ni.e., how to create a persistent homoclinic\ntangency in the language of~\\cite{Ber17}.\n\n\\begin{prop} \\label{prop-paratangecia}\nLet $f=(f_a)_a$ be a $k$-parameter $C^d$-family of\n$C^r$-diffeomorphisms of a manifold of dimension $m\\geq 2$.\nSuppose that $f$ is a $C^d$-degenerate unfolding at $a=a_0$ of a\nhomoclinic tangency of codimension $u>0$ associated with a\nhyperbolic periodic point $Q$. Then, for any $\\epsilon>0$ there\nexists $\\alpha_0>0$ such that for every $0<\\alpha<\\alpha_0$ there\nis a $C^{d}$-family $h=(h_a)_a$ of $C^r$-diffeomorphisms such~that\n\\begin{enumerate}\n\\item $h$ is $\\epsilon$-close to $f$ in the $C^{d,r}$-topology,\n\\item $h_a=f_a$ for every $a \\not \\in a_0 + (-2\\alpha,2\\alpha)^k$,\n\\item $h_a$ has a homoclinic tangency $Y_a$ of codimension $u$ associated with the\ncontinuation $Q_a$ of $Q$ for all $a\\in a_0+ (-\\alpha,\\alpha)^k$\nand which depends $C^d$-continuously on the parameter $a$.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\nBy assumption $f$ is a $C^d$-degenerate unfolding at $a=a_0$ of a\nhomoclinic tangency $Y$ associated with a hyperbolic periodic\npoint $P$. 
By the definition of degenerate unfolding we have\npoints $p_a\\in W^s(Q_a,f_a)$, $q_a\\in W^u(Q_a,f_a)$ and\n$c$-dimensional subspaces $E_a$ and $F_a$ of $T_{p_a}W^s(Q_a,f_a)$\nand $T_{q_a}W^u(Q_a,f_a)$ respectively such that\n\\begin{equation} \\label{eq:o(ad)}\n d(p_a,q_a)=o(\\|a-a_0\\|^{d}) \\ \\ \\text{and} \\ \\\nd(E_a,F_a)={o(\\|a-a_0\\|^{d})} \\ \\ \\text{at $a=a_0$}.\n\\end{equation}\nHere $Q_a$ is the continuation of $Q_{a_0}=P$ for $f_a$. Also\n$p_{a_0}=q_{a_0}=Y$ and $(p_a,E_a)$, $(q_a,F_a)$ vary\n$C^d$-continuously with respect to the parameter $a\\in\n\\mathbb{I}^k$. We take local coordinates $x$ in a neighborhood\nof $Q$ in which $Q$ corresponds to the origin.\nBy considering an iteration if necessary, we\nassume that the tangency point $Y$ belongs to\nthis neighborhood of local coordinates. Take $\\delta>0$ so that\nthe $2\\delta$-neighborhoods of $Y$ and its iterations by $f_{a_0}$\nand $f^{-1}_{a_0}$ are pairwise disjoint.\nWe will denote by $U$ the $2\\delta$-neighborhood of $Y$.\nAssume that\n$p_a$ and $q_a$ belong to $U$ for all $a$ close enough to $a_0$.\nFrom~\\eqref{eq:o(ad)} it follows that\n\\begin{equation} \\label{eq:orden1}\n p_a-q_a=o(\\|a-a_0\\|^d) \\quad \\text{at \\ $a=a_0$.}\n\\end{equation}\nObserve that the Grassmannian distance between $E_a$ and $F_a$\n is given by the norm of $I-R_a$ restricted\nto $F_a$, where $I$ denotes the identity and $R_a$ is the\northogonal projection onto $E_a$. 
Then we obtain\nfrom~\\eqref{eq:o(ad)} that\n\\begin{equation} \\label{eq:orden2}\nI-R_a =o(\\|a-a_0\\|^d) \\quad \\text{at \\ $a=a_0$}.\n\\end{equation}\nWe would like the rotation to occur around the point $p_a$,\nand so we consider\n$\\tilde{R}_a x =R_a (x-p_a) + p_a$.\nSince $(I-\\tilde{R}_a)x=(I-R_a)x+(R_a-I)p_a$ then\nfrom~\\eqref{eq:orden2} we still have\n\\begin{equation} \\label{eq:orden3}\n I-\\tilde{R}_a =o(\\|a-a_0\\|^d) \\quad \\text{at \\ $a=a_0$}.\n\\end{equation}\n\nConsider a $C^\\infty$-bump function $ \\phi:\\mathbb{R}\\to\n\\mathbb{R}$ with support in $[-1,1]$ and equal to 1 on\n$[-1\/2,1\/2]$. Let $$\\rho: a=(a_1,\\dots,a_k) \\in \\mathbb{I}^k\n\\mapsto \\phi(a_1)\\cdot\\ldots\\cdot \\phi(a_k) \\in \\mathbb{R}.\n$$\nFor a fixed $\\alpha>0$, define the perturbation $h=(h_a)_a$ of\n$f=(f_a)_a$ by the relation\n$$\n h_{a}= H_{a} \\circ f_a \\quad \\text{for all $a\\in \\mathbb{I}^k$}.\n$$\nHere $H_{a}$ in the above local coordinates takes the form\n\\begin{align*}\n \\bar{x}=\\big(I-\\theta\\cdot(I-\\tilde{R}_a)\\big)\\big(x- \\theta \\cdot\n (q_a-p_a)\\big) \\quad \\text{where} \\ \\ \\theta=\\rho\\left(\\frac{a-a_0}{2\\alpha}\\right)\\phi\\left(\\frac{\\|x-Y\\|}{2\\delta}\\right)\n\\end{align*}\nand is the identity otherwise. Observe that if $a\\not\\in a_0 +\n(-2\\alpha,2\\alpha)^k$ or $x\\not\\in U$ then $H_a(x)=x$. In\nparticular, $h_a(x)=f_a(x)$ if $a\\not\\in a_0 + (-2\\alpha,2\\alpha)^k$\nor $x\\not\\in f_a^{-1}(U)$. On the other hand, if $a\\in a_0 +\n(-\\alpha,\\alpha)^k$ and $x$ belongs to the $\\delta$-neighborhood of $Y$ then\n$$H_a(x)=\\tilde{R}_a\\big(x-(q_a-p_a)\\big).$$\nSince $\\tilde{R}_a$ fixes the point $p_a$, we have that\n$H_a(q_a)=p_a$. This implies that the point\n$q_a^{-1}=f_a^{-1}(q_a)$ that belongs to $W^u(Q_a,f_a)$ is sent by\n$h_a$ to $p_a$ which belongs to $W^s_{loc}(Q_a,f_a)$. As the orbits\n$f^{n}_a(p_a)$ for $n\\geq 0$ and $f_a^{-n}(q^{-1}_a)$ for $n>0$\nnever go through $f_a^{-1}(U)$, then $p_a \\in\nW^s_{loc}(Q_a,h_a)$ and $q_a^{-1}\\in W^u(Q_a,h_a)$. 
Thus, the\nstable and unstable manifolds of $Q_a$ for $h_a$ meet at\n$p_a=h_a(q_a^{-1})$. Moreover, $F_a^{-1}=Df_a(q^{-1}_a)^{-1}F_a$ is a\nsubspace of $T_{q_a^{-1}}W^u(Q_a,f_a)=T_{q_a^{-1}}W^u(Q_a,h_a)$\nand\n$$\nDh_a(q_a^{-1})F_a^{-1}=\nDH_a(q_a)Df_a(q_a^{-1})F_a^{-1}=DH_a(q_a)F_a=D\\tilde{R}_a(q_a)\nF_a= R_aF_a=E_a.\n$$\nSince $E_a$ is a subspace of\n$T_{p_a}W^s(Q_a,f_a)=T_{p_a}W^s(Q_a,h_a)$, then the intersection\nbetween the stable and unstable manifolds of $Q_a$ for $h_a$ is\ntangential.\n\nTo conclude the proposition we only need to prove that for a given\n$\\epsilon>0$ we can find $\\alpha_0$ such that for any\n$0<\\alpha<\\alpha_0$ the above perturbation $h=h(\\alpha)$ of $f$ is\nactually $\\epsilon$-close in the $C^{d,r}$-topology. Notice that\nthe $C^{d,r}$-norm satisfies\n$$\n \\|h-f\\|_{C^{d,r}}=\\|(H-I) \\circ f \\|_{C^{d,r}} \\leq\n \\|I-H\\|_{C^{d,r}} \\, \\|f\\|_{C^{d,r}}.\n$$\nThus we only need to calculate the $C^{d,r}$-norm of the family\n$(I-H_{a})_a$. Since $H_{a}=I$ if $a\\not \\in a_0 +\n(-2\\alpha,2\\alpha)^k$ or $x\\not \\in U$ then\n\\begin{align*}\n \\|I-H_{a}\\|_{C^{d,r}}\\leq\\left\\| \\theta \\cdot (I-\\tilde{R}_a)(x-\\theta\\cdot (q_a-p_a))+\\theta\\cdot (q_a-p_a)\\right\\|_{C^{d,r}}.\n\\end{align*}\nSince the $C^{d,r}$-norm of $\\phi(\\|x-Y\\|\/2\\delta)$ is bounded\n(depending only on $\\delta$), we can disregard this factor in\nthe estimate. Then, to bound the $C^{d,r}$-norms from above it is\nenough to show that for $a \\in a_0+(-2\\alpha,2\\alpha)^k$ the\nfunctions\n$$\n F_{\\alpha}(a)= \\rho(\\frac{a-a_0}{2\\alpha})(p_a-q_a) \\quad\n \\text{and} \\quad G_{\\alpha}(a) = \\rho(\\frac{a-a_0}{2\\alpha})(I-\\tilde{R}_a)\n$$\nhave small $C^d$-norm when $\\alpha$ is small enough. 
But this is\nclear from~\\eqref{eq:orden1} and \\eqref{eq:orden3}: taking into\naccount that $\\|a-a_0\\|\\leq 2\\alpha$, it follows that\n$$\n F_\\alpha(a)=\\alpha^{-d}\\cdot o_{a\\to a_0}(\\|a-a_0\\|^d)=o_{\\alpha\\to 0}(1) \\quad \\text{and}\n \\quad G_\\alpha(a)=\\alpha^{-d}\\cdot o_{a\\to a_0}(\\|a-a_0\\|^d)=o_{\\alpha \\to 0}(1).\n$$\nThis completes the proof.\n\\end{proof}\n\n\\begin{rem} \\label{rem-paratangency}\nObserve that the positive constant $\\alpha_0$ in\nthe above proposition depends initially on the family $f=(f_a)_a$,\nthe constants $\\epsilon>0$, $\\delta>0$ and the parameter $a_0\\in \\mathbb{I}^k$.\nThe dependence on $a_0$ comes from the function $o_{a\\to\na_0}(\\|a-a_0\\|^d)$ in~\\eqref{eq:o(ad)}. However, one can bound\nthis function by $\\nu(\\|a-a_0\\|)\\cdot \\|a-a_0\\|^d$ where\n$\\lim_{t\\to 0}\\nu(t)=0$, controlling the modulus of continuity\n$\\nu$ of the derivatives of the unfolding. In this way we can ensure\nthat $\\alpha_0$ does not depend on the parameter $a_0$. Also,\nsimilarly to what was done in the previous proposition, the surgery\nusing bump functions around a neighborhood of\nthe initial paratangency point $Y$ can actually be done around any point\n$f^N_{a_0}(Y)$\nbelonging to $W^s_{loc}(P_{a_0},f_{a_0})$.\nThis allows us to fix a\npriori a uniform $\\delta>0$ because we only need to control\nthe distance to one forward\/backward iterate of $f^N_{a_0}(Y)$.\nThus,\n$\\alpha_0$ also does not depend on\n$\\delta$. Finally, if\n$f$ belongs to $\\mathscr{U}$\nthen one can obtain a uniform bound on the continuity modulus\nusing the fact that we are dealing with compact families of local\nstable and unstable manifolds. 
This proves that in this construction\n$\\alpha_0$ only depends on $\\epsilon$ and $\\mathscr{U}$ (i.e., on\nthe dynamics of the organizing family $\\Phi$).\n\\end{rem}\n\n\\begin{rem} \\label{rem:simples}\nIn the case of codimension $u=1$, the tangency $Y_a$ obtained in\nthe previous proposition can be assumed \\emph{simple} in the\nsense of~\\cite{GST08}.\n\\end{rem}\n\nFinally, in the next theorem we will show the\nexistence of Berger domains as stated in\nDefinition~\\ref{def:Berger-domain}. The idea behind the proof is\nthe replication of the arguments coming from~\\cite[Sec.~6.1]{Ber16}\nand~\\cite[Sec.~7]{Ber17}.\n\\begin{thm} \\label{thm-berger-domain}\nThere exists a dense subset $\\mathscr{D}$ of $\\mathscr{U}$ such\nthat for any $h=(h_a)_a \\in \\mathscr{D}$ there is a covering of\n$\\mathbb{I}^k$ by open sets $J_i$\non each of which $h$ has a persistent homoclinic tangency\nof codimension $u$. That is, $\\mathscr{U}$ is a\n$C^{d,r}$-Berger domain (of paratangencies of codimension $u$).\n\\end{thm}\n\\begin{proof}\nFor a fixed family $g=(g_a)_a\\in \\mathscr{U}$ and $\\epsilon>0$,\nPropositions~\\ref{prop-dense-of-paratangencies},~\\ref{prop-paratangecia}\nand Remarks~\\ref{rem-finite-points},~\\ref{rem-paratangency} imply\nthe following. We obtain $\\alpha_0=\\alpha_0(\\epsilon)>0$ so that for a fixed\n$\\alpha$ with $0<\\alpha<\\alpha_0$ there are points $a_1,\\dots,a_N$ in\n$\\mathbb{I}^k$ such that\n\\begin{itemize}\n \\item[-] the open sets $a_i+(-2\\alpha,2\\alpha)^k$ for $i=1,\\dots,N$\n are pairwise disjoint;\n \\item[-] the union of $(a_i+(-2\\alpha,2\\alpha)^k)\\cap \\mathbb{I}^k$\n for $i=1,\\dots,N$ is dense in $\\mathbb{I}^k$;\n\\item[-] there is a $C^{d,r}$-family\n $h=(h_a)_a$, $\\epsilon$-close to $g$, having a persistent homoclinic tangency associated with the continuation $Q_a$ of $Q$\n for all $a\\in (a_i+(-\\alpha,\\alpha)^k)\\cap \\mathbb{I}^k$ and $i=1,\\dots,N$.\n\\end{itemize}\nHowever this result does not provide an open cover of\n$\\mathbb{I}^k$. 
We need to perturb $h$ again without destroying\nthe persistent homoclinic tangencies associated with $Q_a$ and\nat the same time provide new persistent homoclinic tangencies in the complement\nof the union of $(a_i+(-\\alpha,\\alpha)^k)\\cap \\mathbb{I}^k$\nfor $i=1,\\dots,N$.\nTo do this, we will need a finite set of different\nperiodic points $Q^j_a$ to replicate the above argument and ensure\nthat each new perturbation does not modify the previous one (i.e., there\nexists a common $\\delta>0$ so that the supports of all the perturbations\nare disjoint).\n\\begin{lem}\nThere exists a set $\\{Q^j_a\\}_{j=1,\\dots, 3^k}$ of $3^k$\nhyperbolic periodic points satisfying the assumptions~\\ref{H1} and~\\ref{H2}.\n\\end{lem}\n\\begin{proof}\nSince the\nhomoclinic class $H(Q_a,\\Phi_a)$ is not trivial (contains $P_a$),\nthere exists a horseshoe $\\Lambda_a$ containing $Q_a$. Thus,\nthere are infinitely many different hyperbolic periodic points of $\\Phi_a$\nwhose stable manifolds intersect transversely the unstable\nmanifold of $Q_a$. Then by the inclination lemma, and the\nrobustness of the folding manifold $S_a$, the stable manifolds of\nthese periodic points also contain a folding manifold.\n\\end{proof}\nAssociated with each point $Q^j_a$, as in Proposition~\\ref{prop-paratangecia},\nthere is the paratangency point $Y_j$ where the size of the perturbation is governed by\n$\\delta_j$. 
Thus, we can obtain a uniform $\\delta$ by taking the minimum over the $\\delta_j$.\nAlso corresponding to each $Q^j_a$, there exists a lattice of points\n$\\{a_i^j\\}_{i=1,\\dots, N}$ in $\\mathbb{I}^k$ such that\nthe union of the $\\alpha$-neighborhoods $a_i^j+(-\\alpha,\\alpha)^k$\nin $\\mathbb{I}^k$ covers $\\mathbb{I}^k$.\nThat is,\n$$\\mathbb{I}^k\\subset \\bigcup_{j=1}^{3^k}\\bigcup_{i=1}^N a_i^j+(-\\alpha,\\alpha)^k.$$\nThen we can apply Proposition~\\ref{prop-paratangecia} independently in $j$ to obtain\nthe family with the required properties, concluding the proof of the theorem.\n\\end{proof}\n\\begin{rem}\nThe persistent homoclinic tangencies in the open set $J_i$\nobtained in the above theorem can be associated with a collection\nof saddle periodic points, $Q_a^j$ for $j=1,\\dots,3^k$, where all\nof them have the same type of multipliers. For instance, we can\ntake these points being sectionally dissipative or of type~$(1,1)$,\n$(2,1)$, $(1,2)$ or $(2,2)$ according to the nomenclature\nintroduced in~\\cite{GST08}.\nAlso, according\nto Remark~\\ref{rem:simples}, in the codimension one case, the\npersistent homoclinic tangency can be assumed \\emph{simple} in the\nsense of~\\cite{GST08}.\n\\end{rem}\n\n\n\\section{Proof of Theorem~\\ref{thm-thmA}:\n periodic sinks and invariant circles} \\label{sec:thm-thmA}\nIn this section we will prove the $C^{d,r}$-Berger phenomenon of\nthe coexistence of infinitely many normally hyperbolic attracting\ninvariant circles and also obtain a similar result for hyperbolic periodic sinks.\nFor short, we will refer to\nboth of these types of attractors as \\emph{periodic attractors}.\n\nThe next proposition claims that every family $f=(f_a)_a$ in\n$\\mathscr{U}$ can be approximated by a family $g=(g_a)_a$ having a\nperiodic attractor for every parameter $a$ in $\\mathbb{I}^k$.\nMoreover, the period of the attractors can be chosen arbitrarily\nlarge. 
To prove this, since $\\mathscr{U}$ is a Berger domain, it\nis enough to restrict our attention to the dense set $\\mathcal{D}$\nof $\\mathscr{U}$ having persistent tangencies.\n\n\\begin{prop} \\label{main-lema}\nFor any $\\epsilon>0$ and every $f=(f_a)_a \\in \\mathcal{D}$ there\nexists $n_0=n_0(\\epsilon,f)\\in\\mathbb{N}$ such that for any\n$n\\geq n_0$ there is an $\\epsilon$-close family $g=(g_a)_a$ to\n$f=(f_a)_a$ in the $C^{d,r}$-topology satisfying that $g_a$ has a\nperiodic attractor of period $n$ for all $a\\in \\mathbb{I}^k$.\nMoreover, the attractor obtained for $g_a$ is the continuation of\na (hyperbolic or normally hyperbolic) $n$-periodic attractor\nobtained for a map $g_{a_0}$, where $a_0$ belongs to a finite\ncollection of parameters.\n\\end{prop}\n\nBefore proving the above proposition, we will conclude first from\nthis result the next main theorem of the section, which in particular\nproves Theorem~\\ref{thm-thmA}.\n\n\\begin{thm} For any $m\\in \\mathbb{N}$, there exists an open and\ndense set $\\mathcal{O}_m$ in $\\mathscr{U}$ such that for any\nfamily $g=(g_a)_a$ in $\\mathcal{O}_m$ there exist positive\nintegers $n_1<\\dots<n_m$ such that $g_a$ has an $n_i$-periodic\nattractor for all $a\\in \\mathbb{I}^k$ and $i=1,\\dots,m$.\n\\end{thm}\n\\begin{proof}\nWe proceed by induction on $m$. Applying\nProposition~\\ref{main-lema} to each $f\\in\\mathcal{D}$ and to a\nsequence $\\epsilon_i\\to 0$ with $\\epsilon_i>0$ for $i\\geq 1$, and\nusing the persistence of the periodic attractors so obtained, we get\nan open and dense set $\\mathcal{O}_1$ in $\\mathscr{U}$ where for\nany $g=(g_a)_a\\in \\mathcal{O}_1$ there exists a positive integer\n$n$ such that $g_a$ has an $n$-periodic attractor for all\nparameters $a\\in \\mathbb{I}^k$.\n\n\nNow we will assume that $\\mathcal{O}_m$ was constructed and show how to\nobtain $\\mathcal{O}_{m+1}$. Since $\\mathcal{O}_m$ is an open\nand dense set in $\\mathscr{U}$ we can start now by taking\n$f=(f_a)_a \\in \\mathcal{O}_m\\cap \\mathcal{D}$. Hence, there exist\npositive integers $n_1<\\dots<n_m$ such that $f_a$ has an\n$n_i$-periodic attractor for all $a\\in \\mathbb{I}^k$ and\n$i=1,\\dots,m$, and by the persistence of these attractors there\nexists $\\epsilon'>0$\nsuch that any $\\epsilon'$-close family $g=(g_a)_a$ still\nhas the same properties. 
Then, for any $\\epsilon_i<\\epsilon'\/2$,\nwe can apply again Proposition~\\ref{main-lema}, taking integers\n$n(i)_{m+1} > \\max\\{n_0(\\epsilon_i,f),n_{m}\\}$ in order to obtain\nan $\\epsilon_i$-perturbation $g=(g_a)_a$ of $f$ such that $g_a$ also\nhas an $n(i)_{m+1}$-periodic attractor for all $a\\in\n\\mathbb{I}^k$. As before, from the persistence of these\nattractors, we have a sequence of open sets\n$\\mathcal{O}_{m+1}(\\epsilon_i,f)\\subset \\mathcal{O}_m$ converging\nto $f$ where the same conclusion holds for any family in these\nopen sets. By taking the union of all these open sets for any\n$f\\in \\mathcal{O}_m\\cap \\mathcal{D}$ and\n$\\epsilon_i<\\epsilon'(f)$, we get an open and dense set\n$\\mathcal{O}_{m+1}$ in $\\mathscr{U}$ where for any $g=(g_a)_a\\in\n\\mathcal{O}_{m+1}$ there exist positive integers\n$n_1<\\dots<n_{m+1}$ such that $g_a$ has an $n_i$-periodic attractor\nfor all $a\\in \\mathbb{I}^k$ and $i=1,\\dots,m+1$. This completes the\ninduction and the proof.\n\\end{proof}\n\nThe proof of Proposition~\\ref{main-lema} rests on the following lemma.\n\n\\begin{lem} \\label{lema2}\nGiven $\\alpha>0$, let $g=(g_a)_a$ be a $C^{d,r}$-family and assume\nthat $g_a$ has a simple homoclinic tangency at a point~$Y_a$\n(depending $C^d$-continuously on~$a$) associated with a saddle\n$Q_a$ satisfying~\\eqref{eq1} for any parameter $a\\in\na_0+(-\\alpha,\\alpha)^k$. Then there exists a sequence of families\n$g_n=(g_{na})_a$ approaching $g$ in the $C^{d,r}$-topology such\nthat $g_{na}=g_a$ if $a\\not\\in a_0 + (-2\\alpha,2\\alpha)^k$ and\n$g_{na}$ has an $n$-periodic sink or invariant circle\nfor all $a \\in a_0\n+ (-\\alpha,\\alpha)^k$. 
Moreover, for $n$ large enough\n$$\n \\|g_n-g\\|_{C^{d,r}}=O\\left(\\frac{\\alpha^{-d}}{n}\\right).\n$$\n\\end{lem}\n\n\nBefore proving this result, let us show how to get\nProposition~\\ref{main-lema} from the above lemma.\n\n\n\n\\begin{proof}[Proof of Proposition~\\ref{main-lema}]\n Given $f=(f_a)_a$ in $\\mathcal{D}$,\nrelabeling and resizing if necessary, we can assume that the cover\nof $\\mathbb{I}^k$ by the open balls $J_i$ that appear in\nDefinition~\\ref{def:Berger-domain} is of the form\n$$\n \\mathbb{I}^k \\subset \\bigcup_{j=1}^M \\bigcup_{\\ell=1}^{N_j}\n J_{j\\ell} \\quad \\text{with} \\quad\n J_{j\\ell}=a_{j\\ell}+(-\\alpha_{j\\ell},\\alpha_{j\\ell})^k,\n \\quad a_{j\\ell}\\in \\mathbb{I}^k \\ \\ \\text{and} \\ \\ \\alpha_{j\\ell}>0.\n$$\nMoreover, the persistent homoclinic tangency $Y_a$ of $f_a$ on\n$J_{j\\ell}\\cap \\mathbb{I}^k$ is simple in the sense of~\\cite{GST08} and is associated with a saddle $Q_{ja}$\nfor $j=1,\\dots,M$, where for each $j$, the sets\n$\\overline{J_{j\\ell}}$ for $\\ell=1,\\dots,N_j$ are pairwise\ndisjoint. To avoid unnecessary complications in the\nnotation, we can assume that $\\alpha_{j\\ell}=\\alpha$ for all\n$j=1,\\dots,M$ and $\\ell=1,\\dots,N_j$. Moreover, for each $j$,\nthe cubes $2J_{j\\ell}=a_{j\\ell}+(-2\\alpha,2\\alpha)^k$ are pairwise disjoint with respect to $\\ell$.\n\n\nOn the other hand, given $\\epsilon>0$, according to\nLemma~\\ref{lema2}, we can control the approximation by a function\n$F(\\alpha,n)$ of order $O(\\alpha^{-d}n^{-1})$. We take\n$n_0=n_0(\\epsilon,f)\\in \\mathbb{N}$ such that\n$F(\\alpha,n_0)<\\epsilon$. Now, consider an\ninteger $n\\geq n_0$. We want to find an $\\epsilon$-close family\n$g=(g_a)_a$ having an $n$-periodic attractor at any parameter $a\n\\in \\mathbb{I}^k$. 
Taking into account that for each $j$ the\ncubes $2J_{j\\ell}$ are pairwise disjoint,\nwe can apply Lemma~\\ref{lema2} inductively to obtain an\n$\\epsilon$-close family $g=(g_{a})_a$ to $f$ such that $g_a$ has\nan $n$-periodic attractor for all $a\\in J_{j\\ell}$. This\nconcludes the proof.\n\\end{proof}\n\nFinally, to complete the proof, we will show Lemma~\\ref{lema2}.\nHowever, in order to understand better how periodic sinks\nand invariant circles appear in the unfolding of homoclinic\ntangencies associated with saddles of the form~\\eqref{eq1}, we need\nsome preliminaries on the theory of rescaling lemmas\nfrom~\\cite{GST08}.\n\n\\subsection{Rescaling lemma: Generalized H\\'enon map} \\label{sec:rescaling-lemma}\nLet $f$ be a $C^r$-diffeomorphism of a manifold of dimension\n$m\\geq 3$ with a homoclinic tangency\n associated with a hyperbolic\nperiodic point $Q$ with multipliers satisfying the assumptions~\\eqref{eq1}. We\nassume that the tangency is \\emph{simple} in the sense\nof~\\cite[Sec.~1, p.~928]{GST08}. That is, the tangency is\nquadratic, of codimension one and, in the case that the dimension\n$m>3$, any extended unstable manifold is transverse to the leaf of\nthe strong stable foliation which passes through the tangency\npoint. We need to consider a two-parameter unfolding\n$f_\\varepsilon$ of $f=f_0$ where\n$\\varepsilon=(\\mu,\\varphi-\\varphi_0)$, with $\\mu$ the parameter\nthat controls the splitting of the tangency and $\\varphi$ the\nparameter related to the eigenvalues of $Q$ (here $\\varphi_0$ is the value\nof $\\varphi$ at $\\varepsilon=0$). 
Let $T_0=T_0(\\varepsilon)$\ndenote the local map, which in this case\ncorresponds to $f^q_{\\varepsilon}$ defined in a neighborhood $W$ of $Q$, where $q$ is the period of $Q$.\nBy\n$T_1=T_1(\\varepsilon)$ we denote the map $f_\\varepsilon^{n_0}$\ndefined from a neighborhood $\\Pi^-$ of a tangency point $Y^-\\in\nW^u_{loc}(Q,f_0)\\cap W$ of $f_0$ to a neighborhood $\\Pi^+$ of\n$Y^+=f_0^{n_0}(Y^-)\\in W^s_{loc}(Q,f_0)\\cap W$. Then, for $n$\nlarge enough, one defines the first return map $T_n=T_1\\circ\nT_0^n$ on a subset $\\sigma_n=T_0^{-n}(\\Pi^-)\\cap \\Pi^+$ of $\\Pi^+$\nwhere $\\sigma_n\\to W^s_{loc}(Q)$ as $n\\to\\infty$. According\nto~\\cite[Lemmas~1 and~3]{GST08} we have the following result:\n\n\n\n\\begin{lem}\\label{lema-GHM} There exists a sequence of open sets $\\Delta_n$ of parameters\nconverging to $\\varepsilon=0$ such that for these values the\nfirst-return map $T_n$ has a two-dimensional attracting invariant\n$C^r$-manifold $\\mathcal{M}_n \\subset \\sigma_n$\nso that after a $C^r$-smooth transformation of\ncoordinates, the restriction of the map is given by the\nGeneralized H\\'enon map\n\\begin{equation} \\label{eq-GHM}\n \\bar{x}=y, \\qquad \\bar{y}=M-Bx-y^2-R_n(xy+ o(1)).\n\\end{equation}\n The rescaled parameters $M$, $B$ and $R_n$ are functions of\n$\\varepsilon \\in \\Delta_n$ such that $R_n$ converges to zero as\n$n\\to \\infty$ and $M$ and $B$ run over asymptotically large\nregions which, as $n\\to \\infty$, cover all finite values. Namely,\n\\begin{align*}\nM \\sim \\gamma^{2n}(\\mu + O(\\gamma^{-n}+\\lambda^n)), \\quad B \\sim\n(\\lambda \\gamma)^{n}\\cos(n\\varphi+O(1)) \\quad \\text{and} \\quad R_n\n= \\frac{2J_1}{B}(\\lambda^2\\gamma)^n\n\\end{align*}\nwhere $J_1\\not=0$ is the Jacobian of the global map $T_1$\ncalculated at the homoclinic point $Y^-$ for $\\varepsilon=0$. 
The\n$o(1)$-terms tend to zero as $n\\to \\infty$ along with all the\nderivatives up to order $r$ with respect to the coordinates\nand up to order $r-2$ with respect to the rescaled parameters\n$M$ and $B$. Moreover, the limit family is the H\\'enon map.\n\\end{lem}\n\nThe dynamics of the following generalized H\\'enon map\n\\begin{equation} \\label{eq-GHM-simplified}\n \\bar{x}=y, \\qquad \\bar{y}=M-Bx-y^2-R_nxy\n\\end{equation}\nwas studied in~\\cite{GG00,GG04,GKM05} (see also \\cite{GGT07}). We\npresent here the main results for the case of small $R_n$ with\nemphasis on the stable dynamics (stable periodic orbits and\ninvariant circles) in order to apply the corresponding results to\nthe first return maps coming from \\eqref{eq-GHM}. Observe that the difference between equations \\eqref{eq-GHM-simplified} (generalized H\\'enon map) and \\eqref{eq-GHM} (perturbed map) has order $O(R_n)$. Then the existence of stable periodic orbits and\ninvariant circles for \\eqref{eq-GHM} can be inferred from the bifurcation diagram of \\eqref{eq-GHM-simplified}. In Figure~\\ref{fig-MB} we show the\nbifurcation curves for the generalized H\\'enon map in \\eqref{eq-GHM-simplified}\nin the parameter space $(M,B)$.\n\n\n\\begin{figure}\\vspace{0.3cm}\n\\labellist \\small\\hair 2pt \\pinlabel $L^+_n$ at 100 470 \\pinlabel\n$L^-_n$ at 405 770 \\pinlabel $B$ at 490 530 \\pinlabel $M$ at 60\n790 \\pinlabel $\\mathrm{HT}_n$ at 180 686 \\pinlabel $\\mathrm{BT}_n$\nat 255 440 \\pinlabel $L^\\varphi_n$ at 210 570 \\pinlabel $L_n$ at\n340 570\n\\endlabellist\n\\includegraphics[scale=0.6]{GHM-MB}\n\\caption{Bifurcation curves for the generalized H\\'enon\nmap~\\eqref{eq-GHM-simplified} with $R_n > 0$.\n The case $R_n=0$ corresponds to the bifurcation diagram of the H\\'enon map.\n In that case, $L_n^\\varphi$ collapses with $L_n$ at $B=1$.\n The diagram for the case $R_n<0$ is similar, changing the position of the curves\n $L_n^\\varphi$ and $L_n$ and the stability of the invariant circle. 
}\n\\label{fig-MB}\n\\end{figure}\n\n\nThe map~\\eqref{eq-GHM-simplified} has in the parameter plane $(M,B)$\nthe following three bifurcation curves\n\\begin{align*}\nL^+_n \\ &: \\quad M = -\\frac{(1 +B)^2}{4(1 + R_n)} \\\\\nL^-_n \\ &: \\quad M = \\frac{1}{4}(1 +B)^2(3 + R_n) \\\\\nL^\\varphi_n \\ &: \\quad M=\\frac{\\cos^2 \\omega- \\cos \\omega(2 +\nR_n)}{(1+R_n\/2)^2}, \\quad B = 1+ \\frac{R_n\\cos \\omega}{1 + R_n\/2}.\n\\end{align*}\nThese correspond to the existence of fixed points having multipliers\non the unit circle: $+1$ at $(M,B) \\in L^+_n$; $-1$ at $(M,B)\\in\nL^-_n$; and $e^{\\pm i\\omega}$ at $(M,B)\\in L^\\varphi_n$. Note that\nthe curve $L^\\varphi_n$ is written in a parametric form such that\nthe argument $\\omega$ ($0 < \\omega < \\pi$) of the complex\nmultipliers is the parameter.\nWe also point out here the points\n\\begin{equation} \\label{eq:pontos}\n\\begin{aligned}\n\\mathrm{BT}_n \\ &: \\quad M = \\frac{-1 -R_n}{(1 + R_n\/2)^2}, \\quad B=1+\\frac{R_n}{1+R_n\/2} \\\\\n\\mathrm{HT}_n \\ &: \\quad M = \\frac{3 +R_n}{(1 + R_n\/2)^2}, \\quad\n\\ \\ \\ B=\\frac{1-R_n\/2}{1+R_n\/2}.\n\\end{aligned}\n\\end{equation}\nThe are called as follows: \\emph{Bogdanov-Takens point} for\n$\\mathrm{BT}_n$ and the \\emph{Horozov-Takens point} for\n$\\mathrm{HT}_n$. 
Also denoted by $L_n$ is an interesting\n(non-bifurcation) curve starting at the point $\\mathrm{BT}_n$, which\ncorresponds to the existence of a saddle fixed point\nof~\\eqref{eq-GHM-simplified} of neutral type (i.e., the fixed\npoint has positive multipliers whose product is equal to one).\nThis curve is drawn in Figure~\\ref{fig-MB} as the dotted line and\nits equation is given by the same expression of\n\\mbox{$L^\\varphi_n$ replacing $\\cos \\omega$ by $\\alpha>1$.}\n\n\\begin{figure}\n\\labellist \\small\\hair 2pt \\pinlabel $K$ at 414 638 \\pinlabel $H$\nat 310 690 \\pinlabel $L_n^-$ at 120 590 \\pinlabel $L^\\varphi_n$ at\n230 445 \\pinlabel {\\footnotesize$\\mathrm{HT}_n$} at 262 575\n\\endlabellist\n \\includegraphics[scale=0.6]{GHM-Bminus}\n\\caption{Bifurcation diagrams near a Horozov-Takens point for\n$R_n<0$. For $R_n=0$ (H\\'enon map) the curves $K$ and $H$ collapse\nat $B=1$. The diagram for the case $R_n>0$ is similar but changes\nthe position of $K$, $H$ and the stability of both invariant\ncurves. The open domain between $K$ and $H$ is rather small,\nbeing of finite size along the $M$-direction and of\norder $O(R_n)$ along the $B$-direction.} \\label{fig-Bminus}\n\\end{figure}\n\nThere is an open domain $D^s_n$, bounded by the curves $L^+_n$,\n$L^-_n$, $L^\\varphi_n$ with vertices\n$\\mathrm{BT}_n$, $\\mathrm{HT}_n$ (see\nFigure~\\ref{fig-MB}), such that the map~\\eqref{eq-GHM-simplified} has\na stable fixed point for parameters in $D^s_n$.\nThe bifurcations of periodic points\nwith multipliers $e^{\\pm i\\omega}$\ncan lead\nto asymptotically stable and\/or unstable invariant circles.\nThe first return map $T_n$ has an\ninvariant circle which is either stable if $R_n>0$ or unstable if\n$R_n<0$. Observe that the sign of $R_n$ actually only depends on\n$J_1\\not=0$. Thus, if $J_1>0$ we obtain parameter values where we\nhave a stable closed invariant curve. 
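The stable fixed point in $D^s_n$ can also be checked numerically on the model map~\eqref{eq-GHM-simplified}. The sketch below is only an illustration: the values $M=0.5$, $B=0.3$, $R_n=0.05$ are assumed (they are illustrative choices, not values taken from the references) to lie in $D^s_n$, and the script verifies that the diagonal fixed point is attracting and captures a nearby orbit.

```python
# Numerical sanity check for the generalized Henon map (eq-GHM-simplified):
#   x' = y,  y' = M - B*x - y^2 - R*x*y
# Illustrative parameters, assumed to lie in the stability domain D^s_n.
import cmath

M, B, R = 0.5, 0.3, 0.05

def step(x, y):
    """One iteration of the generalized Henon map."""
    return y, M - B * x - y * y - R * x * y

# Fixed points lie on the diagonal x = y and solve (1+R)x^2 + (1+B)x - M = 0.
disc = (1 + B) ** 2 + 4 * (1 + R) * M
x_star = (-(1 + B) + disc ** 0.5) / (2 * (1 + R))
fx, fy = step(x_star, x_star)
assert abs(fx - x_star) < 1e-12 and abs(fy - x_star) < 1e-12

# Multipliers from the Jacobian [[0, 1], [-B - R*y, -2*y - R*x]] at x = y.
tr, det = -(2 + R) * x_star, B + R * x_star
mults = [(tr + s * cmath.sqrt(tr * tr - 4 * det)) / 2 for s in (1, -1)]
assert max(abs(m) for m in mults) < 1  # multipliers inside the unit circle

# A nearby orbit is attracted to the fixed point.
x, y = 0.2, 0.2
for _ in range(500):
    x, y = step(x, y)
assert abs(x - x_star) < 1e-9 and abs(y - x_star) < 1e-9
```

Repeating the same check with $B$ close to $1$ and $M$ near $-1$ or $3$ probes the neighborhoods of the points $\mathrm{BT}_n$ and $\mathrm{HT}_n$ discussed above.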
In the case $J_1<0$, the\nexistence of a stable closed invariant curve\nin~\\eqref{eq-GHM-simplified} follows from the bifurcation analysis\nof the Horozov-Takens point $\\mathrm{HT}_n$. Some\nof the elements that appear in this non-degenerate bifurcation are\nshown in Figure~\\ref{fig-Bminus}. Actually, when $J_1\\not=0$ (i.e.,\n$R_n\\not =0$), near the point $\\mathrm{HT}_n$\nthere are open domains of parameter values where stable and\nunstable closed invariant curves coexist.\n\nMoreover, for any map that\nis $O(R_n)$-close to~\\eqref{eq-GHM-simplified} in the\n$C^3$-topology the corresponding bifurcations still remain\nnon-degenerate and preserve the same stability. Thus, we can\nobtain the same type of results for the perturbed map~\\eqref{eq-GHM}.\nTo summarize for future reference, for $n$ large\nenough we find open sets $A^1_n$, $A^2_n$ of $(M,B)$-parameters accumulating at\n$\\mathrm{BT}=(-1,1)$ and $\\mathrm{HT}=(3,1)$ respectively as $n\\to\n\\infty$ such that the following holds. If\n$(M,B)\\in A^1_n$ (resp.~$(M,B)\\in A^2_n$) then $T_n$ has a\nhyperbolic attracting periodic point (resp.~a normally hyperbolic\nattracting invariant circle).\n\n\n\n\n\n\\subsection{Proof of Lemma~\\ref{lema2}}\nBy assumption the map $g_a$ has a homoclinic tangency at a point\n$Y_a$ associated with a sectionally dissipative periodic point $Q_a$\nfor all $a\\in a_0+(-\\alpha,\\alpha)^k$. Actually, the tangency can\nbe smoothly continued up to $\\|a-a_0\\|_{\\infty}=\\alpha$. 
We\nconsider a two-parameter unfolding $g_{a,\\varepsilon}$ of the\nhomoclinic tangency $Y_a$ of $g_a$ for $a\\in a_0+\n[-\\alpha,\\alpha]^k$,\nwhere $\\varepsilon=(\\mu,\\varphi)$.\nHere $\\mu$ is the parameter that controls the splitting of the\ntangency and $\\varphi$ is the perturbation of the argument of the complex\neigenvalues of $Q_a$.\nWe can take local coordinates $(x,u,y)$ with $x\\in \\mathbb{C}$,\n$u\\in\\mathbb{R}^{m-3}$ and $y\\in \\mathbb{R}$ in a neighborhood of\n$Q_a$ which corresponds to the origin, such that $W^s_{loc}(Q_a)$\nand $W^u_{loc}(Q_a)$ acquire the form $\\{y=0\\}$ and $\\{x=0,u=0\\}$\nrespectively. Moreover, the complex eigenvalues related to the stable\nmanifold of $Q_a$ correspond to the $x$-variable and\nthe tangency point $Y_a$\nis represented by $(x^+,u^+,0)$.\n\nLet us consider a $C^\\infty$-bump\nfunction $ \\phi:\\mathbb{R}\\to \\mathbb{R}$ with support in $[-1,1]$\nand equal to 1 on $[-1\/2,1\/2]$. Let $$\\rho: a=(a_1,\\dots,a_k) \\in\n\\mathbb{I}^k \\mapsto \\phi(a_1)\\cdot\\ldots\\cdot \\phi(a_k) \\in\n\\mathbb{R}.$$ Take $\\delta>0$ so that the $\\delta$-neighborhoods\nin local coordinates of $Q_a$ and $g_a^{-1}(Y_a)$ are disjoint.\nObserve that\nthese two neighborhoods, called $U$ and $V$ respectively, can be taken\nindependent of $a$. 
We can write\n$$g_{a,\\varepsilon}= H_{a,\\varepsilon}\n\\circ g_a,$$ where $H_{a,{\\varepsilon}}$ in these local coordinates\ntakes the form\n\\begin{align*}\n \\bar{x}&=\\left(1-\\rho\\left(\\frac{a-a_0}{2\\alpha}\\right)\\phi\\left(\\frac{\\|(x,u,y)\\|}{2\\delta}\\right)\n (1-e^{i\\varphi})\\right) x\n \\\\\n \\bar{u}&=u \\\\\n \\bar{y}&=y+\\rho\\left(\\frac{a-a_0}{2\\alpha}\\right)\\phi\\left(\\frac{\\|(x,u,y)-(x^+,u^+,0)\\|}{2\\delta}\\right)\\mu,\n\\end{align*}\nand is the identity otherwise.\nObserve that if $a\\not\\in a_0 + (-2\\alpha,2\\alpha)^k$ then\n$g_{a,\\varepsilon}=g_a$ and if $(x,u,y)\\not \\in U\\cup V$ then\n$g_{a,\\varepsilon}=g_a$.\n\nRecall from Section~\\ref{sec:rescaling-lemma} the definition of\nthe first return map associated with the unfolding of a simple\nhomoclinic tangency. Since the tangency point $Y_a$ depends\n$C^d$-continuously on $a_0+[-\\alpha,\\alpha]^k$, we may assume that the\nfirst-return map $T_n=T_n(a,\\varepsilon)$ also depends smoothly as\na function of the parameter $a$ on $a_0+[-\\alpha,\\alpha]^k$.\n\n\\begin{lem}[Parametrized rescaling lemma]\\label{para-lema-GHM}\nThere exist\nfamilies of open sets $(\\Delta_n(a))_a$ of parameters converging to\n$\\varepsilon=0$ as $n\\to\\infty$ such that for any $\\varepsilon \\in\n\\Delta_n(a)$ the map $T_n=T_n(a,\\varepsilon)$ has an attracting\ninvariant $C^r$-manifold $\\mathcal{M}_n=\\mathcal{M}_n(a,\\varepsilon)$ for any $a\\in a_0+[-\\alpha,\\alpha]^k$.\nMoreover, there exists a $C^{d,r}$-family of transformations of coordinates which brings the first-return map $T_n$ restricted to $\\mathcal{M}_n$ to the form\ngiven by~\\eqref{eq-GHM}\n\\begin{equation} \\label{eq-GHM2}\n \\bar{x}=y, \\qquad \\bar{y}=M-Bx-y^2-R_n(xy+ o(1)).\n\\end{equation}\nHere the rescaled parameters $M=M_n(a,\\varepsilon)$ and $B=B_n(a,\\varepsilon)$ are at least $C^d$-smooth\nfunctions (recall that $d\\leq r-2$) on\n$$\n\\Delta_n=\\left\\{(a,\\varepsilon): a\\in a_0+(-\\alpha,\\alpha)^k \\\n\\text{and} \\ 
\\varepsilon \\in \\Delta_n(a)\\right\\}.\n$$\nThe same\nproperty holds for the coefficient $R_n=R_n(a,\\varepsilon)$ and the $o(1)$-terms. More\nspecifically,\n\\begin{align} \\label{eq:MB}\nM \\sim \\gamma^{2n}_a(\\mu + O(\\gamma_a^{-n}+\\lambda_a^n)), \\quad\nB \\sim (\\lambda_a\n\\gamma_a)^{n}\\cos(n\\varphi+O(1))\n \\quad \\text{and}\n\\quad R_n = \\frac{2J_{1a}}{B}(\\lambda^2_a\\gamma_a)^n\n\\end{align}\nwhere $\\lambda_a=\\lambda_a(g)$ and $\\gamma_a=\\gamma_a(g)$ are the\neigenvalues of $Q_a=Q_a(g)$ satisfying~\\eqref{eq1} for $g_a$.\n\\end{lem}\n\n\\begin{proof}\nLet us analyze the proof of the rescaling lemma in~\\cite{GST08}, more specifically\nthe change of coordinates for the first return maps given in\nSection 3.2~\\cite{GST08} for the case $(2,1)$ with $\\lambda\\gamma>1$.\nFrom equations ~\\cite[Eq.~(3.12)-(3.16)]{GST08} we can observe the all the transformations of coordinates can be performed smoothly on the parameter\n$a \\in a_0+[-\\alpha,\\alpha]^k$. The exponents that will appear in the orders of convergence will not depend on the parameter $a$ but only on $n$. On the other hand the constants in the $O$-terms will depend on the parameter but these can be uniformely bounded due to the compactness of the parameter space.\nThese considerations allow us basically to apply\nthe rescaling Lemma~\\ref{lema-GHM} smoothly on the parameter $a \\in\na_0+[-\\alpha,\\alpha]^k$.\n\\end{proof}\n\n\\begin{lem} \\label{Delta_expand}\nFor\n$n$ large enough, $\\Delta_{n}(a)$ can be taken in such a way that\n$$\n \\phi_{n,a}: \\Delta_{n}(a) \\to (-10,10)^2 \\setminus B(0,r), \\qquad \\phi_{n,a}(\\epsilon)\n=(M_n(a,\\epsilon), B_n(a,\\epsilon))\n$$\nis a diffeomorphism where $M_n$ and $B_n$ are the functions given\nin~\\eqref{eq:MB} for fixed $n$ and $a$. 
Here $B(0,r)$ is a closed\nball of some small fixed radius $r$ around the origin.\n\\end{lem}\n\n\\begin{proof}\nLet us introduce the parameter value $\\varepsilon^0_n(a)=(\\mu_n^0(a),\\varphi^0_n(a))$,\nwhich by definition satisfies $\\phi_{n,a}(\\varepsilon^0_n(a))=0$. It can be seen from\nthe expressions of $M$ and $B$ in~\\eqref{eq:MB} that $\\mu^0_n(a)=O(\\gamma^{-n}_a+\\lambda^n_a)$ and $\\varphi^0_n(a)= \\frac{\\pi}{2n} +O(1\/n)$. We remove $B(0,r)$ around the origin in $(-10,10)^2$ so that $B$ is bounded away from zero and $R_n$ is well-defined. Similar values were considered in~\\cite[pg.~946]{GST08} and, as is\nclaimed there, the rescaled functions $M$ and\n$B$ can take arbitrary finite values when $\\mu$ varies close to\n$\\mu^0_n(a)$ and $\\varphi$ near\n$\\varphi^0_n(a)$. Let us explain this.\nAlthough the $O$-functions in~\\eqref{eq:MB} depend on\n$\\varepsilon$, the functions $M$ and $B$ depend essentially only on\n$\\mu$ and $\\varphi$, respectively, for $n$ large enough. In fact,\nexplicit expressions for\n$M_{n,a}$ and $B_{n,a}$ are given in~\\cite[pg.~946]{GST08}.\nUsing them one can\nobserve that\n$$\n \\partial_\\mu M_{n,a} \\sim \\gamma_a^{2n} \\not = 0 \\quad \\text{and} \\quad\n \\partial_\\varphi B_{n,a} \\sim n(\\lambda_a\\gamma_a)^n\n \\sin(n\\varphi+O(1)) \\not = 0\n$$\nfor all $\\varepsilon=(\\mu,\\varphi)$ close to $\\varepsilon^0_n(a)$.\nThen the Jacobian of $\\phi_{n,a}$ converges to infinity, uniformly\non $a\\in a_0+[-\\alpha,\\alpha]^k$, as $n\\to \\infty$. The rate of\ngrowth is exponential. This implies that $\\phi_{n,a}$ is an\ninvertible expanding map with arbitrarily large uniform expansion\non $a\\in a_0 + [-\\alpha,\\alpha]^k$. On the other hand, the size of\n$\\Delta_{n}(a)$ (coming mainly from considerations on the angle\n$\\varphi$) where the expanding map $\\phi_{n,a}$ is defined has\ndecay of order $O(1\/n)$. 
Thus, for $n$ large enough we get that a\nneighborhood of $\\varepsilon^0_n(a)$ can be taken so that its\nimage under $\\phi_{n,a}$ is $(-10,10)^2$. In particular, we can\ntake $\\Delta_n(a)$ to be diffeomorphic to $(-10,10)^2\\setminus\nB(0,r)$.\n\\end{proof}\nConsequently,\n$$\n\\Phi_n(a,\\varepsilon)=(a,\\phi_{n,a}(\\varepsilon))\n$$\nis a diffeomorphism between the set $\\Delta_n$ defined above and\n$(a_0+[-\\alpha,\\alpha]^k)\\times (-10,10)^2\\setminus B(0,r)$.\nOn the other hand, although the coefficient $R_n$ depends on $B$,\nnote that the range of values it takes is negligible when $B$ is\nbounded away from zero and $n$ is large enough. Indeed, from the\nrelations in~\\eqref{eq:MB} it follows that $R_n=o(1)$. Thus, the\nbifurcation diagram of~\\eqref{eq-GHM2} can be studied from the\nresults described in Section~\\ref{sec:rescaling-lemma} assuming\n$R_n=o(1)$ independent of $B$.\n\nLet us remind the reader of the Bogdanov-Takens $\\mathrm{BT}_n(a)$\nand the Horozov-Takens $\\mathrm{HT}_n(a)$ points given\nin~\\eqref{eq:pontos}, which now also depend on $a$ and accumulate\nat $\\mathrm{BT}=(-1,1)$ and $\\mathrm{HT}=(3,1)$, respectively, as\n$n\\to \\infty$. Hence, according to\nSection~\\ref{sec:rescaling-lemma}, for each $n$ large enough we\nfind open subsets $A^1_n(a)$, $A^2_n(a)$ in the $(M,B)$-parameter\nplane such that if $(M,B)\\in A^1_n(a)$ (resp.~$(M,B)\\in\nA^2_n(a)$), then $T_n$ has a hyperbolic attracting periodic point\n(resp.~an attracting\nsmooth invariant circle) for all $a \\in a_0+[-\\alpha,\\alpha]^k$.\nMoreover, we can assume that the points $\\mathrm{BT}_n(a)$ and\n$\\mathrm{HT}_n(a)$\nbelong to the boundary of $A^1_n(a)$ and\n$A^2_n(a)$, respectively. 
Since these sets vary $C^d$-continuously\nwith respect to the parameter $a \\in a_0+[-\\alpha,\\alpha]^k$, we can\nchoose $C^d$-continuously $(M^*_n(a),B^*_n(a)) \\in A^*_n(a)$ for\n$*=1,2$ arbitrarily close to $\\mathrm{BT}_n(a)$ and\n$\\mathrm{HT}_n(a)$, respectively.\n\nSince $\\Phi_n$ is a diffeomorphism, we may find\na $C^{d}$-function\n$\\varepsilon^*_n(a)=(\\mu_n^*(a),\\varphi^*_n(a))$ for $a \\in\na_0+[-\\alpha,\\alpha]^k$, $*=1,2$, defined by\n$\\Phi_n^{-1}(a,(M^*_n(a),B^*_n(a)))=(a,\\varepsilon^*_n(a))$. In\nparticular,\n\\begin{equation} \\label{eq2}\n\\begin{aligned}\nM^*_n(a)&=M_{n,a}(\\varepsilon^*_n(a))\\sim \\gamma^{2n}_a(\\mu_n^*(a) +\nO(\\gamma_a^{-n}+\\lambda_a^n)) \\\\\nB^*_n(a)&=B_{n,a}(\\varepsilon^*_n(a)) \\sim (\\lambda_a\n\\gamma_a)^{n}\\cos(n\\varphi^*_n(a)+O(1)).\n\\end{aligned}\n\\end{equation}\nExtending $\\varepsilon^*_n(a)$ smoothly to $\\mathbb{I}^k$, we can\nconsider the sequence of families\n$\\tilde{g}_n=(\\tilde{g}_{n,a})_a$ where\n$$\n \\tilde{g}_{n,a}=g_{a,\\varepsilon^*_n(a)} \\qquad \\text{for $a\\in\n \\mathbb{I}^k$ and $n$ large enough}.\n$$\nObserve that $\\tilde{g}_{n,a}=g_a$ for $a \\not \\in a_0 +\n(-2\\alpha,2\\alpha)^k$ and that we may assume $\\tilde{g}_{n,a}$\nhas an $n$-periodic attractor (a sink or an invariant circle) for\nall $a\\in a_0+(-\\alpha,\\alpha)^k$.\n\nTo conclude the proof of the lemma, we only need to show that\n$\\tilde{g}_n$ converges to $g$ in the $C^{d,r}$-topology. To do\nthis, notice that the $C^{d,r}$-norm satisfies\n$$\n \\|\\tilde{g}_n-g\\|=\\|(I-H_{a,\\varepsilon^*_n(a)})\\circ g_a \\| \\leq\n \\|I-H_{a,\\varepsilon^*_n(a)}\\| \\, \\|g\\|\n$$\nwhere $I$ denotes the identity and the $C^{d,r}$-norm of any $g=(g_a)_a$ in\nthe Berger domain $\\mathscr{U}$ is bounded.\nThus, we\nonly need to calculate the $C^{d,r}$-norm of the family\n$(I-H_{a,\\varepsilon^*_n(a)})_a$. 
Since\n$H_{a,\\varepsilon^*_n(a)}=I$ if $a\\not \\in a_0 +\n(-2\\alpha,2\\alpha)^k$ or $(x,u,y)\\not \\in U \\cup V$, it follows that\n$\\|I-H_{a,\\varepsilon^*_n(a)}\\|$ is less than or equal to\n\\begin{align*}\n \\left\\|\\rho\\left(\\frac{a-a_0}{2\\alpha}\\right)\\phi\\left(\\frac{\\|(x,u,y)\\|}{2\\delta}\\right)\n (1-e^{i\\varphi^*_n(a)})x\\right\\| +\n \\left\\|\\rho\\left(\\frac{a-a_0}{2\\alpha}\\right)\\phi\\left(\\frac{\\|(x,u,y)-(x^+,u^+,0)\\|}{2\\delta}\\right)\\mu^*_n(a)\\right\\|.\n\\end{align*}\nTo estimate the $C^{d,r}$-norms above it suffices to show\nthat the functions\n$$\n F_{n}(a)= \\rho\\left(\\frac{a-a_0}{2\\alpha}\\right)(1-e^{i\\varphi^*_n(a)}) \\quad\n \\text{and} \\quad G_{n}(a) = \\rho\\left(\\frac{a-a_0}{2\\alpha}\\right)\\mu^*_n(a)\n$$\nhave small $C^d$-norm when $n$ is large and $a \\in a_0+(-2\\alpha,2\\alpha)^k$. In fact, we will prove\nthe following:\n\\begin{equation} \\label{eq:order}\n\\|F_{n}\\|_{C^d}= O\\left(\\frac{\\alpha^{-d}}{n}\\right) \\quad\n\\text{and} \\quad \\|G_{n}\\|_{C^d}=\nO\\left(\\frac{\\alpha^{-d}}{n}\\right).\n\\end{equation}\nObserve that this assertion completes the proof.\nTo prove this, we will need the following derivative estimates on the functions $\\mu_n^*(a)$ and $\\varphi_n^*(a)$. 
Here the symbol $\\partial_a^j$ is used to denote the $j$-th partial\nderivative with respect to the coordinates $a_i$ of $a$ using the multi-index notation.\n\n\\begin{lem} \\label{Est_der} For $1\\leq |j|\\leq d$\n\\begin{enumerate}[label=(\\roman*)]\n\\item $\\mu_n^*(a)=O(\\gamma_a^{-n}+\\lambda_a^n) \\ \\ \\text{and} \\ \\ \\partial_a^j\\mu_n^*(a)=O(n^{|j|}(\n\\gamma_a^{-n}+\\lambda_a^n)).$ \\\\[-0.3cm]\n\\item $\\varphi_n^*(a)=O\\left(\\frac{1}{n}\\right), \\ \\\n\\partial_a^j\\varphi_n^*(a)=O\\left(\\frac{n^{{|j|-1}}}{(\\gamma_a\\lambda_a)^{n}}\\right) \\ \\ \\text{and}\n\\ \\\n\\partial_a^j(e^{i\\varphi_n^*(a)})=O\\left(\\frac{n^{{|j|-1}}}{(\\gamma_a\\lambda_a)^{n}}\\right).$\n\\end{enumerate}\n\n\\end{lem}\n\nAssuming the above lemma let us now prove the estimates in~\\eqref{eq:order}, starting with the second one. To do\nthis, using the Leibniz formula,\n$$\n\\partial^\\ell_a G_{n}(a)=\\sum_{j:j\\leq\\ell} \\binom{\\ell}{j} \\ \\partial_a^{\\ell-j}\n\\rho\\left(\\frac{a-a_0}{2\\alpha}\\right) \\cdot \\partial_a^j\\mu_n^*(a).\n$$\nSubstituting the estimate from Lemma \\ref{Est_der} (i) in the above expression we\nobtain that\n$$\\partial_a^\\ell G_{n,\\alpha}(a)=O(\\alpha^{-d}\\cdot n^{\\left|\\ell \\right|}\n(\\gamma_a^{-n}+\\lambda_a^n))$$\nwhich, in fact, implies a better\nestimate than~\\eqref{eq:order}.\n\nTo prove the first estimate in~\\eqref{eq:order} for $\\|F_{n}\\|_{C^d}$, using again the Leibniz formula\n$$\n\\partial_a^\\ell F_{n,\\alpha}(a) =\n(1-e^{i\\varphi_n^*(a)}) \\ \\partial_a^\\ell\\rho\\left(\\frac{a-a_0}{2\\alpha}\\right)\n-\\sum_{j:\\ 0 1$, and fix $\\iota<\\ell$ such\nthat $\\left|\\iota\\right|=1$. 
Since\n$\\partial_a^{\\ell}B^*_n(a)=\\partial_a^{\\ell-\\iota}(\\partial_a^{\\iota}B^*_n(a))$\nwe get from~\\eqref{B*}, \\eqref{eq:O1} and by using the Leibniz\nformula,\n\\begin{equation} \\label{eq:final}\n \\partial_a^{\\ell}B^*_n(a) = n\\cdot\n\\partial_a^{\\ell-\\iota}(B^*_n(a)\\cdot \\partial_a(\\log \\lambda_a\\gamma_a)) +\n \\sum_{j:j\\leq (\\ell-\\iota)} \\binom{\\ell-\\iota}{j}\\,\n \\partial_a^{(\\ell-\\iota)-j}((\\lambda_a\\gamma_a)^{n}\\sin(h_n(a)))\\cdot\n \\partial_a^{j+\\iota} h_n(a).\n\\end{equation}\nNow we will determine the order of the terms in the above equation.\n\\begin{claim} \\label{claim2}\n\\begin{equation}\n\\partial_a^j\n((\\lambda_a\\gamma_a)^n\\sin(h_n(a)))=O(n^{|j|}(\\lambda_a\\gamma_a)^n)\n\\quad \\text{for all $1\\leq |j|\\leq d$}\n\\end{equation}\n\\end{claim}\n\\begin{proof}[Proof of Claim~\\ref{claim2}]\nObserve from~\\eqref{B*} that\n$[(\\lambda_a\\gamma_a)^n\\sin(h_n(a))]^2=(\\lambda_a\\gamma_a)^{2n}-(B^*_n)^2$.\nSince $\\partial_a^j B^*_n=o(1)$ from Claim~\\ref{claim1}, we get\nthat\n$$\\partial_a^j\n((\\lambda_a\\gamma_a)^n\\sin(h_n(a)))=O(\\partial_a^j(\\lambda_a\\gamma_a)^n).$$\nOn the other hand it can be seen by an inductive argument that\n$\\partial_a^j(\\lambda_a\\gamma_a)^n=O(n^{|j|}(\\lambda_a\\gamma_a)^n)$,\nwhich gives the required estimate.\n\\end{proof}\n\nFor $j<(\\ell-\\iota)$, by the induction hypothesis\n$\\partial_a^{j+\\iota} h_n(a)=\nO(n^{\\left|j+\\iota\\right|}(\\gamma_a\\lambda_a)^{-n})$. Combining\nthis with Claim~\\ref{claim2} we obtain that for $j<(\\ell-\\iota)$\n$$\\partial_a^{(\\ell-\\iota)-j}((\\lambda_a\\gamma_a)^{n}\\sin(h_n(a)))\\cdot\n \\partial_a^{j+\\iota} h_n(a)= O(n^{\\left|(\\ell-\\iota)-j\\right|}(\\lambda_a\\gamma_a)^n) \\cdot O(n^{\\left|j+\\iota\\right|}(\\gamma_a\\lambda_a)^{-n})=O(n^{\\ell}).$$\nOn the other hand $\\partial_a^{\\ell}B^*_n(a) = o(1)$ and $n\\cdot\n\\partial_a^{\\ell-\\iota}(B^*_n(a)\\cdot \\partial_a(\\log \\lambda_a\\gamma_a))=O(n)$. 
Thus, putting all these estimates together in~\\eqref{eq:final} and isolating the term corresponding to the index $j=\\ell-\\iota$ we obtain\n$$(\\lambda_a\\gamma_a)^{n}\\sin(h_n(a))\\cdot\n \\partial_a^{\\ell} h_n(a)=O(n)+O(n^{\\left|\\ell\\right|})+o(1).$$\nThis implies that $\\partial_a^\\ell h_n(a)=\nO(n^{\\left|\\ell\\right|}(\\gamma_a\\lambda_a)^{-n})$, concluding the\nproof.\n\\end{proof}\n\n\n\\bibliographystyle{alpha2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nThe use of identically prepared systems, or ensembles, has been\nessential to our understanding of equilibrium and far-from-equilibrium\nproperties of classical and quantum systems. Traditionally, three\ntypes of ensembles are used---(a) the microcanonical ensemble, which\ninvolves systems with fixed energy and particle number, (b) the\ncanonical ensemble (CE), which involves systems with fixed particle number\nin contact with a large reservoir (at temperature $T$) with which they\ncan exchange energy, and (c) the grand canonical ensemble (GE), which\ninvolves systems in contact with a reservoir with which they can\nexchange energy and particles (in equilibrium, the average particle\nnumber is determined by the chemical potential $\\mu$). Whereas these\nthree ensembles pose fundamentally different physical constraints, \nit can be shown that they are equivalent in the thermodynamic limit\n(provided, of course, that temperatures and chemical potentials are\nselected appropriately). Being technically easier to deal with, the\ncanonical and grand canonical ensembles are the most commonly used\nensembles in the literature. Several texts on statistical mechanics\ncover these topics in detail; see, e.g.,~Ref.~\\cite{huang87}.\n\nIn finite systems, differences appear between calculations carried \nout using the three ensembles. 
These differences, dubbed finite-size\neffects, have to do with the effect of energy and particle number\nfluctuations, and with boundary effects. For example, to describe an\nisolated system with mean energy $E$, it is most appropriate to \nuse the microcanonical ensemble with that energy. However, \none can also use a canonical ensemble at a temperature $T$ for which \nthe mean energy is $E$. Since the systems used to construct the \ncanonical ensemble have different energies from the ones used to \nconstruct the microcanonical ensemble, one finds differences in the\npredictions of each ensemble. Remarkably, one can show that energy\nfluctuations in the canonical ensemble typically scale as the square\nroot of the volume of the system, whereas the average energy scales \nas the volume of the system. Hence, the ratio between the energy\nfluctuations and the average energy scales as the inverse of the\nsquare root of the volume, and vanishes in the thermodynamic\nlimit. One then finds that differences between the predictions of \neach ensemble decrease polynomially with increasing volume\n(at fixed density). The same applies if one considers the grand\ncanonical ensemble, where particle number fluctuations typically \nscale with the square root of the volume of the system. Indeed, \nexplicit calculations in one dimensional (1D) lattices have shown \nthat the differences between the predictions of the canonical and\ngrand-canonical ensembles for various observables decrease with the\ninverse of the number of particles (or lattice sites) in the system\n\\cite{lebowitz61,rigol_05}.\n\nExperiments usually deal with thermodynamically large systems, whereas\nnumerical analyses of many-body interacting systems can generally be\ndone for only (relatively) much smaller system sizes. 
Hence, when\ntrying to theoretically predict\/reproduce the outcome of an experimental\nmeasurement, a question of much relevance is: \\emph{which ensemble should \none use to minimize finite-size effects and obtain the ``thermodynamic \nlimit'', i.e., experimental, result?} From the previous discussion about \nthe differences between ensembles, one might naively conclude that finite-size \neffects always scale polynomially with system size and that, therefore, \nthe best one can do theoretically is to optimize exponents and prefactors. \n\nIn this article we show that this is not the case. There is a\npreferred ensemble (the grand canonical ensemble) and preferred boundary \nconditions (periodic boundary conditions, so that the system is translationally \ninvariant) for which finite-size effects are exponentially small in the \nsystem size. This holds if the system of interest is in an unordered \n(i.e., without long- or quasi-long-range order) phase at finite temperature. \nWe also consider a different approach to calculating finite-temperature \nproperties of many-particle systems, namely, numerical linked cluster \nexpansions (NLCEs) \\cite{rigol_bryant_06,rigol_bryant_07,rigol_bryant_07b}. \nWe show that NLCEs not only exhibit exponential convergence with\nincreasing system size but also generally outperform grand canonical\nensemble calculations in systems with periodic boundary conditions.\n\nThe paper is organized as follows. In Sec.~\\ref{sec:gener-cons}\nwe argue, based on a high temperature expansion of the partition\nfunction, that grand canonical ensemble calculations in\ntranslationally invariant systems have exponentially small finite-size\nerrors. In Sec.~\\ref{sec:verification}, we discuss analytically\nsolvable examples, the 1D and 2D Ising models, that substantiate the\narguments in Sec.~\\ref{sec:gener-cons}. 
In Sec.~\\ref{sec:pert-th}, we present \na proof that finite-size errors are indeed exponentially small in the grand\ncanonical ensemble for translationally invariant noninteracting systems \nand that, within perturbation theory, the same scaling applies to interacting \nsystems. We then study numerically, in Sec.~\\ref{sec:numerical-tests}, \nthree examples where we systematically compare results from canonical and \ngrand canonical ensemble calculations, each for open boundary conditions (OBC) \nand periodic boundary conditions (PBC), and NLCEs. We\nsummarize our results and conclude in Sec.~\\ref{sec:conclusions}.\n\n\n\n\\section{General considerations}\n\\label{sec:gener-cons}\n\nIn this section, we argue that the GE for a translationally invariant\nsystem (we abbreviate the CE [GE] with open and\n periodic boundary conditions as CE-O [GE-O] and CE-P [GE-P]\n respectively) has exponentially small finite size corrections. For\nthat, we make use of a $\\beta$ expansion of the free energy, where\n$\\beta=(k_{B}T)^{-1}$ is the inverse temperature and $k_{B}$ is the\nBoltzmann constant. 
This kind of expansion has been used extensively\nin the literature to compute partition functions for various\nmodels \\cite{domb60a,domb60b,oitmaa_hamer_book_06}.\n\nConsider the Taylor expansion of the grand partition function $Z\\equiv\n\\Tr e^{-\\beta \\hat{H}}$ (we set $\\mu=0$ for brevity; all the arguments\nbelow are valid for nonzero $\\mu$, which will be required for bosons\nto prevent Bose-Einstein condensation):\n\\begin{equation}\n Z(\\beta) = \\Tr(1) - \\beta \\Tr(\\hat{H}) + \\frac{\\beta^{2}}{2!}\\Tr(\\hat{H}^{2}) + \\ldots\n\\end{equation}\nWe are interested in $\\ln Z$, from which thermodynamic quantities can\nbe obtained by taking suitable $\\beta$ or $\\mu$ derivatives,\n\\begin{multline}\n \\label{eq:9}\n \\ln Z(\\beta) = \\ln\\Tr(1) - \\beta \\frac{\\Tr(\\hat{H})}{\\Tr(1)} + \\\\\n + \\frac{\\beta^{2}}{2}\n \\left[\\frac{\\Tr(\\hat{H}^{2})}{\\Tr(1)}-\\frac{\\Tr(\\hat{H})^{2}}{\\Tr(1)^{2}}\\right]\n + \\cdots\n\\end{multline}\nWe note that\n\\begin{equation}\n \\label{eq:8}\n \\frac{\\Tr(\\hat{H}^{n})}{\\Tr(1)} = \\frac{\\Tr(\\hat{H}^{n}e^{-0\\cdot \\hat{H}})}\n {\\Tr(e^{-0\\cdot \\hat{H}})},\n\\end{equation}\nis an infinite temperature expectation value. At infinite temperature\nall unconnected parts of the system, however close to each other, are\nuncorrelated. Therefore, the expansion in Eq.~\\eqref{eq:9} reduces to\na sum over only the connected graphs that can be embedded in the\nfinite system \\cite{domb60a,sykes66,huang87}. In the CE, the particle\nnumber constraint, i.e., that the total particle number is fixed,\nimplicitly correlates unconnected pieces of a graph, and therefore\nthis simplification does not occur.\n\nFor a system that has no ordered phase or, equivalently, where order\nappears only at $T=0$, the above expansion must converge beyond a\ncertain order (because the correlation length is finite). 
Since this\nmust occur for any temperature $\\beta < \\infty$, the convergence must\ncome from the coefficients of the $\\beta$-expansion, i.e., from the\ntraces in Eq.~\\eqref{eq:9}. In other words, for the expansion $(\\ln\nZ)\/N = a_{0}+a_{1}\\beta + a_{2}\\beta^{2}+\\cdots$, the coefficient\n$a_{n}$ must fall faster than $e^{-n}$ for the series to converge for\nany $\\beta$. The convergence cannot come from cancellation of terms of\nopposite sign, since any such cancellation can work only at some\nfine-tuned value of $\\beta$.\n\nFor a system that has a phase transition between an unordered high\ntemperature phase and an ordered low temperature phase at a finite\ncritical temperature $\\beta_{c}$, the coefficients do not exhibit this\nbehavior -- in the critical phase the correlation length is infinite\nand all orders of this expansion are relevant. The convergence of the\nseries for $\\beta<\\beta_{c}$ instead comes from the fact that\n$\\beta\/\\beta_{c} < 1$. We verify these arguments in the 1D and 2D\nIsing models.\n\nFrom here on, we assume that we are in a phase where the\n$\\beta$-expansion converges. We will now show that with PBC (when the\nsystem is translationally invariant), all orders of the expansion\nEq.~\\eqref{eq:9} up to the system size (to be properly defined below)\nare identical to those in the thermodynamic limit. We will further\nshow that this is not the case with open boundary conditions.\n\n\\subsection{Periodic boundary conditions}\n\nConsider a system with $N$ sites and periodic boundary conditions (a\nsystem that is translationally invariant). The $\\beta$ expansion of\n$\\ln Z$ is shown in Eq.~\\eqref{eq:9}, in which each term can be\nrepresented by a graph embedded on the finite system. We will call\nthese graphs clusters \\cite{huang87}. First note that each cluster\nhas $N$ equivalent positions on the lattice since the system is\ntranslationally invariant. 
That gives a factor of $N$ that we move to\nthe left-hand side in Eq.~\\eqref{eq:9} to get $(\\ln Z)\/N$, i.e., an\nintensive quantity. Let us now consider a cluster with $c$\nsites. First, since we have a cumulant expansion, as discussed above,\nonly connected clusters enter (see, e.g.,\nRefs.~\\cite{huang87,domb60a,oitmaa_hamer_book_06} for details). If\nthe extent of our cluster in each direction is less than the system\nsize in that direction (say $L$), then the cluster has open boundary\nconditions. Furthermore, even if the system size is increased in any\ndirection, this cluster is present. Hence, this cluster is present in\nthe thermodynamic limit. In general, \\emph{every cluster with $c$\n sites that, in the finite lattice with $N$ sites, does not wrap\n around any boundary appears in the infinite system, and vice versa}.\nTherefore, the contribution of this cluster in a finite system is\nexactly the same as its contribution in the thermodynamic limit. On\nthe other hand, a cluster with $L$ sites in any given direction wraps\naround a boundary, i.e., it does not appear in the thermodynamic\nlimit. As a result, clusters that wrap around boundaries give\ncontributions that are not present in the thermodynamic limit\n\\cite{lebowitz61}. Hence, the difference between results in the\nthermodynamic limit and in finite-size periodic systems is\n$O(\\beta^{L-p})$, with $p$ determined by the Hamiltonian. We note\nthat $p$ is $O(1)$ for local Hamiltonians, which are the ones of\ninterest here. 
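The cluster-counting argument above can be checked numerically on the 1D Ising model (solved analytically in Sec.~\ref{sec:1d-ising}). The sketch below is our own illustration, not part of the text: it compares the brute-force $(\ln Z)/N$ of a periodic chain with the thermodynamic-limit value $\ln(2\cosh\beta)$ and verifies that the error shrinks by roughly a factor $\beta^{2}$ each time $L$ grows by two, consistent with an $O(\beta^{L})$ finite-size error.

```python
# Brute-force check (our own sketch) of the O(beta^L) finite-size error for a
# translationally invariant (PBC) calculation, using the 1D Ising model with
# H = -sum_i s_i s_{i+1} (J = 1).
from itertools import product
from math import log, exp, cosh

def ln_z_per_site_pbc(L, beta):
    """(ln Z)/L from enumeration of all 2^L spin configurations, PBC."""
    Z = sum(exp(beta * sum(s[i] * s[(i + 1) % L] for i in range(L)))
            for s in product((-1, 1), repeat=L))
    return log(Z) / L

def omega_exact(beta):
    """Thermodynamic-limit result, Omega(beta) = ln(2 cosh beta)."""
    return log(2.0 * cosh(beta))

beta = 0.1
errors = [abs(ln_z_per_site_pbc(L, beta) - omega_exact(beta))
          for L in (4, 6, 8, 10)]
# Each step L -> L + 2 should shrink the error by roughly beta^2 = 0.01.
ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
print(ratios)
```

For $\beta=0.1$ every ratio comes out close to $\beta^{2}=0.01$, whereas the corresponding open-boundary errors (not computed here) would only decrease as $1/L$.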
Further, based on the earlier argument, we must have\nthat the coefficient at this order falls faster than $e^{-(L-p)}$ or\nthat the expansion parameter ($\\beta\/\\beta_{c}$) is smaller than one.\nTherefore, finite-size errors in a GE calculation of a translationally\ninvariant system at any temperature in the unordered phase are\nsmaller than $O(e^{-L})$, for systems with linear dimension $L$.\n\n\\subsection{Open boundary conditions}\n\nFor a system with OBC, one immediately realizes that clusters do not\nhave $N$ equivalent positions on the lattice. As a result, even if a\ngiven cluster in the finite system appears in the thermodynamic limit,\nits contribution in the finite system will differ from the\nthermodynamic limit. For example, for a lattice model in which the\nHamiltonian is a sum of terms involving only nearest neighbor sites,\nthe term linear in $\\beta$ in Eq.~\\eqref{eq:9} for a system with OBC\nhas a correction $O(A\/2N)$ relative to the result for PBC, where $A$\nis the number of sites in the boundary. This correction vanishes as\n$1\/L$ with increasing system size. Complicated geometric and\ncombinatorial factors appear at higher orders, all of which approach\nthe thermodynamic limit result with increasing system size. 
Hence,\nnone of the coefficients of a $\\beta$ expansion for a finite system\nwith OBC match the result in the thermodynamic limit, and\nfinite size errors in $(\\ln Z)\/N$ are $O(1\/L)$.\n\n\\subsection{Numerical linked cluster expansions}\n\nRather than making calculations of finite systems with periodic or\nopen boundary conditions, and then extrapolating the results to the\nthermodynamic limit, another way to calculate finite-temperature\nproperties of lattice systems in the thermodynamic limit is to use\nNLCEs\n\\cite{rigol_bryant_06,rigol_bryant_07,rigol_bryant_07b,tang_khatami_13}.\nThe idea in this case is to directly use the linked cluster expansion\nof the infinite (translationally invariant) system, for which any\nextensive quantity ${\\cal O}$ per site can be computed as the sum\n\\begin{equation}\n \\label{eq:nlce1}\n \\frac{\\cal O}{N} = \\sum_{c}M(c)\\times W_{\\cal O}(c),\n\\end{equation}\nover all connected clusters $c$ that can be embedded in the infinite\nlattice. In Eq.~\\eqref{eq:nlce1}, $M(c)$ is the \\emph{multiplicity} of\ncluster $c$, namely, the number of ways per site in which cluster $c$\ncan be embedded on the lattice, and $W_{\\cal O}(c)$ is the\n\\emph{weight} of the cluster $c$ for observable ${\\cal O}$. \n$W_{\\cal O}(c)$ is calculated by an inclusion-exclusion principle, one\nsystematically subtracts contributions from the connected subclusters \nof $c$ \\cite{sykes66}\n\\begin{equation}\\label{eq:nlce2}\n W_{\\cal O}(c) = \\mathcal{O}(c) - \\sum_{s\\subset c}W_{\\cal O}(s).\n\\end{equation} \n${\\cal O}(c)$ is the value of the observable evaluated on the cluster\n$c$. In NLCEs, ${\\cal O}(c)$ is obtained using a full exact\ndiagonalization of the Hamiltonian for cluster $c$.\n\nDue to computational limitations, only a finite number of clusters can\nultimately be calculated in Eq.~\\eqref{eq:nlce1}. 
Nevertheless, as\nshown in\nRefs.~\\cite{rigol_bryant_06,rigol_bryant_07,rigol_bryant_07b}, NLCEs\ncan converge at lower temperatures than high temperature expansions,\nand sometimes all the way to the ground state for systems with\nunordered ground states. Also, NLCEs can provide very accurate\nresults for temperatures at which exact diagonalization results for\nsystems with periodic boundary conditions suffer from very large\nfinite-size effects. A pedagogical introduction to implementing NLCEs\ncan be found in Ref.~\\cite{tang_khatami_13}.\n\nIn what follows, we compare NLCE results with those obtained in\ncalculations in finite systems with different boundary conditions.\nOur goal is to find how each of them converges to the thermodynamic\nlimit result and which converges the fastest. For NLCEs, the accuracy\nof the results is determined by the size of the largest clusters\nconsidered in the sum in Eq.~\\eqref{eq:nlce1} and the model under\nconsideration.\n\n\\section{Verification in Ising models}\n\\label{sec:verification}\n\nIn this section, we verify the arguments given in\nSec.~\\ref{sec:gener-cons} in the 1D and 2D Ising models, both of which\ncan be solved analytically.\n\n\\subsection{1D Ising model}\n\\label{sec:1d-ising}\n\nIn the thermodynamic limit, the log of the partition function per\nsite, $\\Omega$, can be obtained using the transfer matrix method\n\\cite{huang87}, and is given by (we set $J=1$)\n\\begin{equation}\n \\label{eq:1}\n \\Omega(\\beta) = \\ln(e^{\\beta} + e^{-\\beta}).\n\\end{equation}\nThe expansion in powers of $\\beta$ of this result is\n\\begin{equation}\n \\label{eq:4}\n \\Omega(\\beta) = \\ln2 + \\frac{\\beta^{2}}{2}-\\frac{\\beta^{4}}{12} \n + \\frac{\\beta^{6}}{45}-\\frac{17\\beta^{8}}{2520}\n + \\frac{31\\beta^{10}}{14175}+ \\cdots\n\\end{equation}\n\nThe result for finite systems with periodic boundary conditions is\ngiven by\n\\begin{equation}\n \\label{eq:2}\n \\begin{split}\n \\Omega_L(\\beta) &= \\frac{1}{L}\\ln 
\\left[(e^{\\beta} +\n e^{-\\beta})^{L}\n + (e^{\\beta} - e^{-\\beta})^{L}\\right] \\\\\n &= \\Omega(\\beta) + \\frac{\\tanh^{L}\\beta}{L} + \\cdots\n \\end{split}\n\\end{equation}\nSince $0\\leq \\tanh\\beta<1$ for $0\\leq \\beta<\\infty$, the finite-size\nerror indeed vanishes faster than exponentially. Quantities like the energy,\nwhich are derivatives of the free energy, converge exponentially fast\nwith $L$.\n\nExpanding Eq.~\\eqref{eq:2} in powers of $\\beta$ for different values\nof $L$, we get\n\\begin{equation}\n \\label{eq:3}\n \\begin{split}\n \\Omega_2(\\beta) &= \\ln2 + \\beta^{2} + \\ldots \\\\\n \\Omega_3(\\beta) &= \\ln2 + \\frac{\\beta^{2}}{2} + \\frac{\\beta^{3}}{3} + \\ldots \\\\\n \\Omega_4(\\beta) &= \\ln2 + \\frac{\\beta^{2}}{2} + \\frac{\\beta^{4}}{6} + \\ldots \\\\\n \\Omega_5(\\beta) &= \\ln2 + \\frac{\\beta^{2}}{2} -\n \\frac{\\beta^{4}}{12} +\n \\frac{\\beta^{5}}{5} + \\ldots \\\\\n \\Omega_6(\\beta) &= \\ln2 + \\frac{\\beta^{2}}{2} -\n \\frac{\\beta^{4}}{12} + \\frac{17\\beta^{6}}{90} + \\ldots\n \\end{split}\n\\end{equation}\nAs one can see, the results are exact to $O(\\beta^{L-1})$. Naturally,\nas one goes to lower temperatures, the correlation length increases\nand larger systems are required to capture the relevant powers of\n$\\beta$. It is easy to verify that, with open boundary conditions,\nthe corrections are always $O(1\/L)$.\n\nAs discussed in Sec.~\\ref{sec:gener-cons}, although the coefficients\nin Eq.~\\eqref{eq:3} are exact up to $O(\\beta^{L-1})$, the convergence\nfor \\emph{all} temperatures comes from the fact that they fall off\nrapidly (faster than exponentially) with increasing expansion\norder. 
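The two boundary conditions can be contrasted directly using closed forms. In a minimal sketch of our own: for the open chain, summing the $L-1$ independent bonds gives $Z_L = 2\,(2\cosh\beta)^{L-1}$, so the error of $(\ln Z)/L$ is exactly $[\ln 2 - \ln(2\cosh\beta)]/L = O(1/L)$, while Eq.~\eqref{eq:2} gives an error $\sim\tanh^{L}\beta/L$ for PBC.

```python
# Contrast of boundary conditions for the 1D Ising chain (J = 1); our own
# numerical sketch based on the closed-form partition functions.
from math import log, cosh, sinh, tanh

def omega_inf(beta):           # thermodynamic limit, Eq. (1)
    return log(2 * cosh(beta))

def omega_pbc(L, beta):        # transfer-matrix result, Eq. (2)
    return log((2 * cosh(beta))**L + (2 * sinh(beta))**L) / L

def omega_obc(L, beta):        # open chain: Z = 2 (2 cosh beta)^(L-1)
    return (log(2) + (L - 1) * log(2 * cosh(beta))) / L

beta, L = 0.5, 20
err_pbc = abs(omega_pbc(L, beta) - omega_inf(beta))
err_obc = abs(omega_obc(L, beta) - omega_inf(beta))
print(err_pbc, err_obc)             # PBC error is orders of magnitude smaller
print(err_pbc * L / tanh(beta)**L)  # close to 1: confirms the tanh^L/L scaling
```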
Figure \\ref{fig:ising-fit} shows a plot of the coefficients of\nexpansion in Eq.~\\eqref{eq:4} along with a fit to $a b^{-L}\/L$ that\ndemonstrates the faster-than-exponential behavior.\n\nTo conclude our discussion of the 1D Ising model, we evaluate the\nfirst few orders of the NLCE for this model (instead of numerical\nexact diagonalization of the clusters, we obtain these results\nanalytically). First, we evaluate the partition function [$\\ln\nZ_L(\\beta)$] on finite clusters with OBC\n\\begin{equation}\n \\label{eq:ising-nlce-prop}\n \\begin{split}\n \\ln Z_1(\\beta) &= \\ln2, \\\\\n \\ln Z_2(\\beta) &= \\ln2 + \\ln\\left(e^{\\beta} + e^{-\\beta}\\right),\\\\\n \\ln Z_3(\\beta) &= \\ln2 + \\ln\\left(e^{2\\beta}+e^{-2\\beta} +\n 2\\right),\n \\end{split}\n\\end{equation}\nand then carry out the subtractions. The weights are given by [see\nEq.~\\eqref{eq:nlce2}]\n\\begin{equation}\n \\label{eq:ising-nlce-weights}\n \\begin{split}\n W_{1} &= \\ln Z_{1}(\\beta) = \\ln2, \\\\\n W_{2} &= \\ln Z_{2}(\\beta)-2W_{1} = \\ln\\left(e^{\\beta} + e^{-\\beta}\\right)- \\ln2,\\\\\n W_{3} &= \\ln Z_{3}(\\beta)-2W_{2}-3W_{1}= 0.\n \\end{split}\n\\end{equation}\nHence, the result for $\\Omega$ obtained in calculations including up\nto $n$ sites, $(\\Omega)_n$, is given by [see Eq.~\\eqref{eq:nlce1}]\n\\begin{equation}\n \\label{eq:ising-nlce-sums}\n \\begin{split}\n (\\Omega)_1 &= W_{1}=\\ln2, \\\\\n (\\Omega)_2 &= W_{1}+W_{2}= \\ln\\left(e^{\\beta} + e^{-\\beta}\\right), \\\\\n (\\Omega)_3 &= W_{1}+W_{2}+W_{3} = \\ln\\left(e^{\\beta} +\n e^{-\\beta}\\right).\n \\end{split}\n\\end{equation}\nIt can be verified that the last result is valid at all higher orders\nin the ``NLCE''. The thermodynamic limit result is therefore obtained\nby just considering clusters with one and two sites. This is an\ninfinite improvement over the use of the grand canonical ensemble with\nperiodic boundary conditions. 
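The bookkeeping of Eqs.~\eqref{eq:nlce1} and \eqref{eq:nlce2} for this example can also be sketched programmatically (our own code, with hypothetical function names): on a chain the connected clusters are open segments, each with multiplicity one per site, and an $s$-site segment embeds $n-s+1$ times in an $n$-site segment.

```python
# NLCE sketch for the 1D Ising chain (J = 1): weights from the
# inclusion-exclusion of Eq. (nlce2), with all multiplicities equal to 1
# per site.
from itertools import product
from math import log, exp, cosh

def ln_z_chain(n, beta):
    """ln Z for an open n-site Ising chain, by enumeration."""
    Z = sum(exp(beta * sum(s[i] * s[i + 1] for i in range(n - 1)))
            for s in product((-1, 1), repeat=n))
    return log(Z)

def nlce_partial_sums(n_max, beta):
    """Partial NLCE sums for (ln Z)/N including chains of up to n_max sites."""
    weights, sums = {}, []
    for n in range(1, n_max + 1):
        w = ln_z_chain(n, beta)
        for s in range(1, n):          # subtract the embedded subclusters
            w -= (n - s + 1) * weights[s]
        weights[n] = w
        sums.append(sum(weights.values()))
    return sums

beta = 0.7
sums = nlce_partial_sums(5, beta)
print(sums, log(2 * cosh(beta)))
```

Running this shows that the order-2 partial sum already equals $\ln(2\cosh\beta)$ and that every weight beyond two sites vanishes, in agreement with Eq.~\eqref{eq:ising-nlce-weights}.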
Whereas this infinite gain is specific\nto the 1D Ising model --- the model can, after all, be solved using a\ntwo dimensional transfer matrix, we show in what follows that the fact\nthat NLCEs outperform exact calculations in finite systems appears to\nbe generic.\n\n\\begin{figure}[!tb]\n \\centering\n \\includegraphics[width=0.95\\columnwidth]{ising-new.eps}\n \\caption{(Color online) Coefficients (absolute value) of the\n $\\beta$-expansion of the free energy vs. the expansion order\n $l$. We show the coefficients in Eq.~\\eqref{eq:4} for the 1D Ising\n model, and a fit to the function $a b^{-l}\/l$ with $a=2.0000$ and\n $b=1.5708$ (the difference between the exact result and\n $2(\\pi\/2)^{-l}\/l$ vanishes exponentially fast with $l$), and from\n Eq.~\\eqref{eq:7} for the 2D Ising model. The coefficients in the\n latter case do not fall off exponentially fast. Because of this,\n the $\\beta$-expansion only converges for $\\beta < \\beta_{c}$. It\n is only in this regime that finite-size errors in grand-canonical\n ensemble calculations of translationally invariant systems are\n exponentially small in system size. \\textbf{Inset:}\n Shows the rational part of the coefficients in\n Eq.~\\eqref{eq:7}. They decrease at first but increase after\n $O(\\eta^{24})$. 
See text for further discussion.}\n \\label{fig:ising-fit}\n\\end{figure}\n\n\\subsection{2D Ising model}\n\\label{sec:2d-ising}\n\nFor the 2D Ising model, $\\Omega$ is given by \\cite{huang87,Onsager_44}\n\\begin{multline}\n \\label{eq:5}\n \\Omega(\\beta) = \\ln[2\\cosh(2\\beta)] \\\\ +\n \\int_{0}^{\\pi}\\frac{d\\phi}{2\\pi}\n \\ln\\left[\\frac{1+\\sqrt{1-\\frac{4\\sin^{2}\\phi}{\\cosh^{2}(2\\beta)\\coth^{2}(2\\beta)}}}{2}\n \\right].\n\\end{multline}\nThe $\\beta$-expansion of this result is given by\n\\begin{equation}\n \\label{eq:6}\n \\Omega(\\beta) = \\ln2 + \\beta^{2} + \\frac{5\\beta^{4}}{6} + \\frac{32\\beta^{6}}{45} \n + \\frac{425\\beta^{8}}{252} + \\ldots\n\\end{equation}\nOne can see that the coefficients become larger than 1 at higher\norders. The correct expansion parameter for models with a\nfinite-temperature transition is $\\beta\/\\beta_{c}$. For the classical\n2D Ising model on a square lattice, $\\beta_{c} = \\ln(1+\\sqrt{2})\/2$.\nThis gives (with $\\eta \\equiv \\beta\/\\beta_{c}$)\n\\begin{equation}\n \\label{eq:7}\n \\Omega(\\beta) = \\ln2 + \\frac{a^{2}\\eta^{2}}{4} + \\frac{5a^{4}\\eta^{4}}{96} + \n \\frac{a^{6}\\eta^{6}}{90} + \\frac{425a^{8}\\eta^{8}}{64512} + \\ldots\n\\end{equation}\nwhere $a\\equiv\\ln(1+\\sqrt{2})<1$. The coefficients of the $\\eta$-expansion \nare plotted versus the order of the expansion in Fig.~\\ref{fig:ising-fit}. \nThey fall off neither exponentially nor faster than exponentially. \nWe note that the rational part of the coefficients of the \nfirst few orders of the $\\eta$-expansion reported in Eq.~\\eqref{eq:7} is \ndeceptive: they decrease with increasing order of the \nexpansion. This, together with the fact that $a<1$, might suggest that the \ncoefficients of the $\\eta$-expansion fall off \nfaster than exponentially. However, as shown in the inset in \nFig.~\\ref{fig:ising-fit}, the aforementioned rational part \\emph{increases} \nwith increasing order of the expansion after $O(\\eta^{24})$. 
Because of this,\nconvergence of the $\\eta$-expansion is only expected for $\\eta<1$, and \nit is driven by the powers of $\\eta$ rather than by a decay of the coefficients.\n\nWe also calculate $\\ln Z_{L\\times L}$ for small systems with $N=L\\times\nL$ sites in the GE for both periodic and open boundary\nconditions. With PBCs\n\\begin{eqnarray}\n \\label{eq:2d-ising-g-p}\n \\ln Z^{\\rm 2D-P }_{1\\times1} &=& \\ln2, \\nonumber\\\\\n \\ln Z^{\\rm 2D-P}_{2\\times 2} &=& \\ln2 + \\ln\\left(e^{4\\beta} + e^{-4\\beta}+2\\right), \\\\\n \\ln Z^{\\rm 2D-P}_{3\\times 3} &=& \\ln2 + \\ln\\bigl(e^{-18 \\beta\n }+9 e^{-10 \\beta }+24 e^{-6 \\beta }+ \\nonumber\\\\\n && +99 e^{-2 \\beta }+72 e^{2\n \\beta }+51 e^{6 \\beta }\\bigr),\\nonumber\n\\end{eqnarray}\nwhereas, with OBCs,\n\\begin{eqnarray}\n \\label{eq:2d-ising-g-o}\n \\ln Z^{\\rm 2D-O}_{1\\times1} &=& \\ln2, \\nonumber\\\\\n \\ln Z^{\\rm 2D-O}_{2\\times 2} &=& \\ln2 + \\ln\\left(e^{4\\beta} + e^{-4\\beta}+2\\right), \\\\\n \\ln Z^{\\rm 2D-O}_{3\\times 3} &=& \\ln2 + \\ln\\bigl(e^{-12 \\beta\n }+4 e^{-8 \\beta }+16 e^{-6 \\beta }+\\nonumber\\\\ \n &&+23 e^{-4 \\beta }+48 e^{-2\n \\beta }+\n 48 e^{2 \\beta }+23 e^{4 \\beta }+\\nonumber\\\\\n &&+ 16 e^{6 \\beta }+4 e^{8\n \\beta }+ e^{12 \\beta }+72 \\bigr)\\nonumber.\n\\end{eqnarray}\n\nFor the $\\beta$-expansion of the $3\\times 3$ systems, up to the first\nterm that differs from Eq.~\\eqref{eq:6}, we obtain\n\\begin{equation}\n \\label{eq:2d-ising-finite-expansion}\n \\begin{split}\n \\Omega^{\\rm 2D-O}_{3\\times 3} &= \\ln2 + \\frac{2\\beta^{2}}{3} +\n \\ldots \\\\\n \\Omega^{\\rm 2D-P}_{3\\times 3} &= \\ln2 + \\beta^{2} +\n \\frac{2\\beta^{3}}{3} + \\ldots\n \\end{split}\n\\end{equation}\nWe see that whereas for OBC the coefficient of the second-order term is\nincorrect, for PBC it is correct; i.e., once again GE-P gives results\nthat are correct to $O(\\beta^{L-1})$, with $L=3$ in this case.\n\nFor the NLCE calculation with clusters of up to four sites, we\nobtain\n\\begin{eqnarray}\n 
\\label{eq:2d-ising-nlce}\n &&(\\Omega)_4 = 20 \\ln \\left(e^{-\\beta }+e^{\\beta }\\right)+ 54\n \\ln \\left[e^{- \\beta }\\left(e^{2 \\beta }+1\\right)\\right]\\nonumber\\\\\n && \\ -38\n \\ln \\left(e^{-2 \\beta }+e^{2 \\beta }+2\\right)+\\ln \\left(e^{-4\n \\beta }+e^{4 \\beta\n }+6\\right).\n\\end{eqnarray}\nExpanding in powers of $\\beta$, and reporting terms up to the first\none that differs from Eq.~\\eqref{eq:6}, we get\n\\begin{equation}\\label{eq:2dinlcee}\n (\\Omega)_4 = \\ln2 + \\beta^{2} + \\frac{5\\beta^{4}}{6} -\n \\frac{58\\beta^{6}}{45} + \\ldots\n\\end{equation}\nThe above result is correct to $O(\\beta^{5})$. We must stress that\nEq.~\\eqref{eq:2dinlcee} was obtained in an expansion in which the\nlargest cluster has $N=4$, while\nEqs.~\\eqref{eq:2d-ising-finite-expansion} are for systems with\n$N=9$. The gain is evident.\n\n\\section{Finite temperature perturbation theory to all orders}\n\\label{sec:pert-th}\n\nIn the case of bosons or fermions on a lattice that can be treated by\nfinite-temperature perturbation theory (with a noninteracting theory\nas the unperturbed starting point), a proof that finite-size errors are \nexponentially small to all orders in perturbation theory can be made based \non the momentum-space representation of the Hamiltonian. The proof is \nessentially identical for bosons and fermions, so we focus on the former.\n\nWe consider a generic massive scalar field theory on a 1D lattice,\nwith unit lattice spacing, $L$ sites, and PBC:\n\\begin{equation}\n \\label{H}\n \\hat{H}_{s}=\\frac12 \\sum_{j=1}^L\\left[\\hat{\\pi}_j^2\n +(\\hat{\\varphi}_{j+1}-\\hat{\\varphi}_j)^2 + m^2\\hat{\\varphi}_j^2\\right],\n\\end{equation}\nwhere $[\\hat{\\varphi}_j,\\hat{\\pi}_{j'}]=i\\delta_{jj'}$ and\n$\\hat{\\varphi}_{j+L}\\equiv\\hat{\\varphi}_j$,\n$\\hat{\\pi}_{j+L}\\equiv\\hat{\\pi}_j$. 
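The scaling claim of this section can be checked numerically for the Hamiltonian above: the grand potential per site of the free chain converges exponentially fast in L. A minimal sketch (not from the paper; it uses the dispersion omega(k) = sqrt(4 sin^2(k/2) + m^2) of this chain, derived just below, with assumed parameters m = 1, beta = 1, mu = 0):

```python
import math

def omega(k, m=1.0):
    # Dispersion of the chain: omega(k) = sqrt(4 sin^2(k/2) + m^2).
    return math.sqrt(4.0 * math.sin(0.5 * k) ** 2 + m * m)

def F(k, beta=1.0, mu=0.0):
    # Per-mode contribution to (1/L) ln Z for free bosons (needs mu < m).
    return -math.log(1.0 - math.exp(-beta * (omega(k) - mu)))

def omega_per_site(L):
    # Grand potential per site on an L-site chain with PBC.
    return sum(F(2.0 * math.pi * n / L) for n in range(L)) / L

reference = omega_per_site(4096)  # proxy for the thermodynamic limit
errors = {L: abs(omega_per_site(L) - reference) for L in (8, 16, 32)}
```

The error roughly squares each time L doubles, i.e., it decays exponentially in L, in line with the Fourier-coefficient argument developed in this section.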
\nThis Hamiltonian is diagonalized via\n\\begin{equation}\n \\begin{split}\n \\hat{\\varphi}_j &= \\frac{1}{\\sqrt{2L}}\\sum_{n=0}^{L-1}\\omega_n^{-1\/2}\n \\left[e^{\\frac{2\\pi i n j}{L}}\\hat{a}_n + e^{-\\frac{2\\pi i n\n j}{L}}\\hat{a}^\\dagger_n\\right],\\\\\n \\hat{\\pi}_j &= -\\frac{i}{\\sqrt{2L}}\\sum_{n=0}^{L-1}\\omega_n^{1\/2}\n \\left[e^{\\frac{2\\pi i n j}{L}}\\hat{a}_n - e^{-\\frac{2\\pi i n j}{L}}\\hat{a}^\\dagger_n\\right],\n \\end{split}\n\\end{equation}\nso that\n\\begin{equation} \\label{Hk} \\hat{H}_{s} = \\sum_{n=0}^{L-1}\\omega_n\\hat{a}^\\dagger_n\n \\hat{a}_n + \\text{constant},\n\\end{equation}\nwhere $[\\hat{a}^{}_n,\\hat{a}^\\dagger_{n'}] = \\delta_{nn'}$, $\\omega_n =\\omega(k_n)$, $k_n = 2\\pi n\/L$, \n$n\\in[0,L-1]$, and\n\\begin{equation}\n \\label{wk}\n \\omega(k) = \\sqrt{2(1-\\cos k)+m^2} =\n \\sqrt{4\\sin^2(k\/2)+m^2}.\n\\end{equation}\n\nWe now want to compute the grand canonical partition function\n\\begin{equation}\n Z(\\beta,\\mu)\\equiv \\mathop{\\rm Tr}e^{-\\beta (\\hat{H} -\\mu \\hat{N})}\\,,\n \\label{Z}\n\\end{equation} \nwhere $\\hat{N}=\\sum_n \\hat{a}^\\dagger_n \\hat{a}_n$ is the total number operator.\nWe take the trace in the Fock basis of eigenstates of each $\\hat{a}^\\dagger_n \\hat{a}_n$,\n\\begin{equation}\n Z_L(\\beta,\\mu) = \\prod_{n=0}^{L-1}\\sum_{N_n=0}^\\infty\n e^{-\\beta(\\omega_n-\\mu)N_n} = \\prod_{n=0}^{L-1} \\frac{1}{1-e^{-\\beta(\\omega_n-\\mu)}},\n \\label{Z1}\n\\end{equation} \nwhere the geometric series converges provided $\\mu<\\omega_0=m$. Equivalently,\n\\begin{equation}\n \\Omega_L(\\beta,\\mu) = \\frac{1}{L}\\ln Z_L(\\beta,\\mu) = \n \\frac{1}{L}\\sum_{n=0}^{L-1} F(k_n),\n \\label{OL}\n\\end{equation} \nwhere we have defined\n\\begin{equation}\n \\label{F}\n F(k) \\equiv -\\ln[1-e^{-\\beta(\\omega(k)-\\mu)}].\n\\end{equation}\nIn the thermodynamic limit, the sum over $n$ becomes an integral,\n\\begin{equation}\n \\Omega(\\beta,\\mu)\\equiv\\lim_{L\\to\\infty}\\Omega_L(\\beta,\\mu)= \n \\frac{1}{2\\pi}\\int_{0}^{2\\pi}dk\\,F(k)\\,.\n 
\\label{Oi}\n\\end{equation}\n\nWe now wish to show that $|\\Omega_L(\\beta,\\mu)-\\Omega(\\beta,\\mu)|$ is\nexponentially small in $L$.\n\nWe first note that $F(k)$ is periodic in $k$ with period $2\\pi$.\nTherefore its Fourier expansion takes the form\n\\begin{equation}\n F(k)=\\sum_{j=-\\infty}^{+\\infty}\\tilde F_j\\,e^{i jk}\\,,\n \\label{fsum}\n\\end{equation} \nwhere the Fourier coefficients are given by\n\\begin{equation}\n \\tilde F_j = \\frac{1}{2\\pi} \\int_{0}^{2\\pi}dk\\,e^{-i jk}F(k)\\,.\n \\label{tildef}\n\\end{equation} \nUsing \\eqs{Oi} and (\\ref{tildef}), we get\n\\begin{equation}\n \\Omega(\\beta,\\mu) = \\tilde F_0\\,.\n \\label{Oi1}\n\\end{equation} \nUsing \\eqs{OL} and (\\ref{fsum}), we get\n\\begin{eqnarray}\n \\Omega_L(\\beta,\\mu) &=& \\frac{1}{L}\\sum_{n=0}^{L-1} F(2\\pi n\/L)\n = \\frac{1}{L}\\sum_{n=0}^{L-1}\\sum_{j=-\\infty}^{+\\infty}\\tilde\n F_j\\,e^{2\\pi i jn\/L} \\nonumber\\\\\n &=& \\sum_{j=-\\infty}^{+\\infty}\\tilde F_j\\biggl[\\frac{1}{L}\n \\sum_{n=0}^{L-1}e^{2\\pi i jn\/L}\\biggr] \\label{OL5}\n \\\\& =& \\sum_{j=-\\infty}^{+\\infty}\\tilde\n F_j\\delta_{j\\,\\rm{mod}\\,L,\\,0}\n = \\sum_{j'=-\\infty}^{+\\infty}\\tilde F_{j'\\!L}\\,.\\nonumber\n\\end{eqnarray}\nSubtracting \\eq{Oi1} from \\eq{OL5}, we get \\begin{equation}\n \\Omega_L(\\beta,\\mu)-\\Omega(\\beta,\\mu) = \\sum_{j\\ne 0}\\tilde F_{jL}\\,.\n \\label{Odiff}\n\\end{equation} \nExamining \\eqs{wk} and (\\ref{F}), we see that if $m>0$ and $\\mu<m$, then $F(k)$ is an\nanalytic function of $k$ in a strip around the real axis. Its Fourier coefficients\n$\\tilde F_j$ therefore fall off exponentially with $|j|$, and \\eq{Odiff} then implies\nthat $|\\Omega_L(\\beta,\\mu)-\\Omega(\\beta,\\mu)|$ is exponentially small in $L$.\nFor $m>0$ and small enough $k$, $F(k)$ can be approximated via\n\\begin{equation}\n F(k)\\simeq -\\ln[1-e^{-\\beta(m-\\mu)}e^{-\\beta k^2\/2m}].\n \\label{F0}\n\\end{equation}\n\n\\begin{section}{Preliminaries}\n\n\\begin{subsection}{Calderon--Zygmund estimates}\n\n\\begin{definition}\nA function $f \\in L^2(\\Omega)$ belongs to $BMO(\\Omega)$ if\n \\begin{equation*}\n \\lVert f \\rVert^2_{BMO(\\Omega)} \\equiv \\sup_{x\\in \\Omega,\\, r>0} \\frac{1}{r^n} \\int_{B_r(x) \\cap \\Omega } \\lvert f(y)-(f)_{r,x}\\rvert^2 dy\n +\\lVert f \\rVert^2_{L^2(\\Omega)}<\\infty,\n \\end{equation*}\nwhere $(f)_{r,x}$ is the average of $f$ in $ B_r(x) \\cap \\Omega$.\n\\end{definition}\n\n\\par\nThe proofs of the following results can be found in \\cite{GT98} when
$p<\\infty$\nand in \\cite{S93} when $p=\\infty$.\n\n\n\\begin{theorem}\n Consider the equation \n \\begin{equation*}\n \\Delta u= f \\textrm{ in } B_{2R}.\n \\end{equation*}\n If $f \\in L^p(B_{2R})$ for $ 1< p <\\infty $, then the solution $u\\in W^{2,p}(B_R)$, and\n \\begin{equation*}\n \\lVert D^2 u\\rVert_{L^p(B_R)}\\leq C_{p,n} \\left( \\lVert f \\rVert_{L^p(B_{2R})} \n + \\lVert u\\rVert_{L^1(B_{2R})} \\right).\n \\end{equation*}\n If $f \\in L^\\infty (B_{2R})$, then in general $ u \\notin W^{2,\\infty}(B_R)$, but\n\\begin{equation*}\n \\lVert D^2 u\\rVert_{BMO(B_R)}\\leq C_{\\infty,n} \\left( \\lVert f \\rVert_{L^\\infty(B_{2R})} \n + \\lVert u\\rVert_{L^1(B_{2R})} \\right),\n \\end{equation*}\nwhere $ C_{p,n}$ and $ C_{\\infty,n} $ are constants depending only on $p$ and the dimension $n$.\n\\end{theorem}\n\n\\end{subsection}\n\n\\begin{subsection}{The obstacle problem} \n\nIn this section we state the regularity of the solution to the following obstacle problem,\n\\begin{equation*}\n \\min (-\\Delta u+f, u-\\psi)= 0 \\textrm{ in } \\Omega,\n\\end{equation*}\nwith boundary conditions $u-g\\in W_0 ^{1,2}(\\Omega)$.\n\n\\par\nHere we will omit the variational formulation of the problem and the first\nregularity results, and will only state the $ C^{1,1}$-regularity of the solutions, \nreferring to the book \\cite{PSU12}.\n\n\\par\nIn order to be consistent with the assumptions in our paper, we will \nassume that $f\\in C^\\alpha$ and the obstacle $\\psi \\in C^{2,\\alpha}$,\nalthough these assumptions can be weakened.\n\n\\begin{theorem}\nAssume that $f\\in C^\\alpha$ and $\\psi \\in C^{2,\\alpha}$, and \n $ u $ solves the obstacle problem\n \\begin{equation*}\n \\min (-\\Delta u+f, u-\\psi)= 0 \\textrm{ a.e. 
in } \\Omega.\n\\end{equation*}\n Then $ u \\in C^{1,1}(\\Omega')$ for every $ \\Omega' \\Subset \\Omega$, and\n \\begin{equation*}\n \\lVert u \\rVert_{C^{1,1}(\\Omega')}\\leq C\\left( \\lVert u \\rVert_{L^\\infty(\\Omega)}+\n \\lVert f \\rVert_{C^{0,\\alpha}(\\Omega)}+ \\lVert \\psi \\rVert_{C^{2,\\alpha}(\\Omega)}\\right),\n \\end{equation*}\nwhere the constant $C$ depends on the dimension and on the \nsubset $ \\Omega' \\Subset \\Omega$.\n\\end{theorem}\n\n\\end{subsection}\n\n\\end{section} \n\n\\section{Existence of $C^{1,\\alpha}$ solutions}\n\n\\par\nWe consider the system \\eqref{system2} with boundary conditions $ u^i=g^i \\textrm{ on } \\partial \\Omega $,\nwhere $g^i \\in C^2$.\nWe also need to impose the following compatibility condition on the boundary data:\n\\begin{equation} \\label{bc}\n g^1-g^2+\\psi^1 \\geq 0, \\textrm{ and } g^2-g^1+\\psi^2 \\geq 0 \\textrm{ on } \\partial \\Omega.\n\\end{equation}\nClearly, without the compatibility conditions, there are no solutions to \\eqref{system2}\nachieving the boundary data.\n\n\\par\nWe are interested in deriving $C^{1,1}$-regularity for the solutions to our system,\nwhich is the best regularity one can expect.\nThroughout our discussion we will assume that \n\\begin{equation} \\label{costass}\n f^1, f^2 \\in C^\\alpha (\\Omega), \\textrm{ and } \\psi^1, \\psi^2 \\in C^{2,\\alpha} (\\Omega),\n\\end{equation}\nfor some $0 <\\alpha <1$.\nThese are natural assumptions, since $f$ being merely bounded or continuous \nis not enough for its Newtonian potential to be $C^{1,1}$. 
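For intuition, the scalar obstacle problem recalled in the preliminaries can be solved numerically by projected Gauss-Seidel. The sketch below uses hypothetical data chosen for illustration (f = 0, a concave parabolic obstacle, zero boundary values; none of it from the paper):

```python
# Solve min(-u'' + f, u - psi) = 0 on (0,1), u(0) = u(1) = 0, by
# projected Gauss-Seidel: relax the discrete Laplace equation at each
# node, then project back onto the constraint u >= psi.
N, h = 65, 1.0 / 64.0
x = [j * h for j in range(N)]
psi = [0.5 - 4.0 * (xj - 0.5) ** 2 for xj in x]   # negative at the boundary
f = [0.0] * N
u = [0.0] * N
for _ in range(20000):
    for j in range(1, N - 1):
        u[j] = max(0.5 * (u[j - 1] + u[j + 1]) - 0.5 * h * h * f[j], psi[j])
```

At convergence each interior node satisfies the discrete complementarity condition: either u touches the obstacle or the discrete Laplacian vanishes. The resulting u is C^{1,1} but not C^2 across the free boundary, matching the optimal regularity quoted above.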
\nWe also provide a one-dimensional counterexample to the existence of solutions in case\nthe switching costs are not smooth.\n\n\\begin{example}[Diogo Gomes]\nConsider the following system in the interval $(-1,1)$ with zero Dirichlet boundary conditions,\n\\begin{equation*} \n\\begin{cases}\n\\min\\left(-(u_{1})_{xx}, u_1-u_2+(1-|x|) \\cos\\left( \\frac \\pi\n{1-|x|}\\right)\\right)=0 , \\\\\n\\min\\left(-(u_2)_{xx}, u_2-u_1+(1-|x|)\\left(1- \\cos\\left( \\frac \\pi\n{1-|x|}\\right)\\right)\\right)=0.\n\\end{cases}\n\\end{equation*}\n Then the value function\nof the corresponding optimal control problem is not finite.\n\\end{example}\n\n\\begin{proof}\nIn our example the running costs are identically zero, the switching costs\nsatisfy the nonnegative loop assumption $ \\psi^{1}(x)+\\psi^{2}(x)>0 $ in $ (-1,1)$, and the\ncompatibility condition on the boundary, $ \\psi^{1}(\\pm1)=\\psi^{2}(\\pm1)=0 $.\n\n\\par\nThe example illustrates that when the switching costs are not smooth, their negative values can give infinite\ngrowth to the value function of the corresponding optimal control problem.\nIn order to show this, we choose controls $ i(t)$ as follows: the switching occurs at times $ t_{k} $ where\n$ \\frac \\pi {1-|x(t_{k})|}=\\pi k $. When $ \\frac \\pi {1-|x(t_{k})|}=\\pi k = \\pi (2n+1) $, $n\\in \\mathbb{N}_0$,\nwe switch from\nregime 1 to regime 2, gaining $ \\frac 1 {2n+1} $, and for the values $ \\frac \\pi {1-|x(t_{k})|}=\\pi k = 2\\pi n $\nwe switch back from regime 2 to 1, paying zero cost, and so\n\\begin{equation*}\nu_i(x)\\geq - \\sum_{0\\leq t\\leq T_{\\partial \\Omega}} \\psi_{i(t_k),\ni(t_{k+1})}(x(t)) = \\sum_n \\frac{1}{2n+1}.\n\\end{equation*}\nThen the conclusion follows from the divergence of the harmonic series.\n \n\\end{proof}\n\n\\subsection{Penalization method}\n\nIn this section we approximate the system \\eqref{system2} with a smooth penalized system.\nLet us take any smooth nonpositive 
function $\\beta : \\mathbb{R} \\rightarrow (-\\infty,0]$,\nsuch that \n\\begin{align*}\n \\beta (s)=0 \\textrm{ for } s\\geq 0, \\\\\n \\beta (s)<0 \\textrm{ for } s<0, \\\\\n 0<\\beta'(s)\\leq 1 \\textrm{ for } s <0, \\textrm{ and } \\\\\n \\lim_{s \\rightarrow -\\infty}\\beta (s)=-\\infty.\n\\end{align*}\nNext we consider the penalization functions $\\beta_\\varepsilon (s)= \\beta (s\/ \\varepsilon)$, \nfor $ s\\in \\mathbb{R}$, $\\varepsilon >0$, and the corresponding penalized system\n\\begin{equation} \\label{pensystem}\n\\begin{cases}\n-\\Delta u^1_{\\varepsilon}+f^1+\\beta_\\varepsilon (u^1_\\varepsilon-u^2_\\varepsilon+\\psi^1)=0 \\\\\n-\\Delta u^2_{\\varepsilon}+f^2+\\beta_\\varepsilon (u^2_\\varepsilon-u^1_\\varepsilon+\\psi^2)=0 ,\n\\end{cases}\n\\end{equation}\nwith boundary conditions $ u^i_\\varepsilon=g^i \\textrm{ on } \\partial\\Omega $. \n\n\\par\nFor $\\varepsilon >0$ fixed, the penalized system \\eqref{pensystem}\ncan be solved by several methods. In the paper \n\\cite{EF79} the authors use nonlinear functional analysis methods in order to derive the existence \nof classical solutions, that is $u^i_\\varepsilon \\in C^2(\\Omega)$, assuming that the switching costs are positive constants. \nThe proof is rather technical; however, it works line by line in our case with variable switching costs,\nand we therefore omit it.\n\n\\begin{lemma} \\label{penlemma}\nUnder the assumptions \\eqref{bc} and \\eqref{costass}, the solutions $u^i_{\\varepsilon}$ to the penalized system \n\\eqref{pensystem} satisfy the\nfollowing estimates for every $\\varepsilon>0$:\n\\begin{enumerate}\n\n\\item [ i.) ]\n \\begin{equation*} \n - \\max_i \\Arrowvert f^i \\rVert_{L^{\\infty}} \\leq -\\Delta u^i_{\\varepsilon} \n \\leq \\max_i \\Arrowvert \\Delta \\psi^i \\rVert_{L^{\\infty}}+\n 3 \\max_i \\Arrowvert f^i \\rVert_{L^{\\infty}}.\n \\end{equation*}\n \n \\item[ ii.) 
]\n \\begin{equation*}\n u^1_\\varepsilon-u^2_\\varepsilon+\\psi^1 \\geq -C \\varepsilon \n \\textrm{ and }\n u^2_\\varepsilon-u^1_\\varepsilon+\\psi^2 \\geq -C \\varepsilon.\n\\end{equation*}\n\\end{enumerate}\nIn $ ii.) $ the constant $C>0$ depends only on the given data and can \nbe computed explicitly in terms of $\\beta $.\n\\end{lemma}\n\n\\begin{proof}\n For convenience, let us denote $\\theta^1_\\varepsilon=u^1_\\varepsilon-u^2_\\varepsilon+\\psi^1$ and \n $\\theta^2_\\varepsilon=u^2_\\varepsilon-u^1_\\varepsilon+\\psi^2$, and observe that $\\theta^1_\\varepsilon $\n and $\\theta^2_\\varepsilon $ cannot be \n negative at the same time, according to\n the nonnegative loop assumption. \n \n \\par\n Now let us fix $\\varepsilon >0$ \n and consider the function $\\beta_\\varepsilon( \\theta^i_\\varepsilon(x))$, $x \\in \\Omega$. \n It is bounded from above by $0$; our aim is to prove that it is also\n bounded from below.\n Let $x_0=x_0(\\varepsilon)$ be a point of minimum of the function $\\beta_\\varepsilon( \\theta^1_\\varepsilon(x))$;\n moreover, without loss of generality, we may assume that \n \\begin{equation*}\n \\min_{i=1,2;x\\in \\overline{\\Omega}} \\beta_\\varepsilon( \\theta^i_\\varepsilon(x))=\n \\beta_\\varepsilon( \\theta^1_\\varepsilon(x_0))<0.\n \\end{equation*}\nIf $x_0 \\in \\partial \\Omega$, then $ \\beta_\\varepsilon( \\theta^1_\\varepsilon(x_0))=0$ \naccording to \\eqref{bc}. 
Therefore $ x_0 \\in \\Omega $ is an interior point, and $ \\beta_\\varepsilon( \\theta^1_\\varepsilon(x_0))<0$.\nThen $ \\theta^1_\\varepsilon(x_0) <0$, and since $\\theta^1_\\varepsilon+ \\theta^2_\\varepsilon \\geq 0$, we get\n$ \\theta^2_\\varepsilon(x_0) \\geq 0 $, and consequently $ \\beta_\\varepsilon( \\theta^2_\\varepsilon(x_0))=0$.\nSince $\\beta_\\varepsilon$ is nondecreasing and $\\beta_\\varepsilon(t)<0$ if and only if $t<0$, we \nget that \n\\begin{equation*}\n \\min_{i=1,2;x\\in \\overline{\\Omega}} \\theta^i_\\varepsilon(x)=\n \\theta^1_\\varepsilon(x_0).\n \\end{equation*}\n This implies that $\\theta ^1_\\varepsilon= u^1_\\varepsilon-u^2_\\varepsilon+\\psi^1 $ \n achieves its minimum at the interior point $x_0$,\n hence $\\Delta u^1_\\varepsilon-\\Delta u^2_\\varepsilon+\\Delta \\psi^1\\geq 0$\nat $x_0$. The last inequality, together with $-\\Delta u^2_\\varepsilon(x_0)+f^2(x_0)=0$\n(which follows from the second equation in \\eqref{pensystem}, since $\\beta_\\varepsilon( \\theta^2_\\varepsilon(x_0))=0$), shows that \n\\begin{align*}\n \\beta_\\varepsilon( \\theta^1_\\varepsilon(x_0))&= \\Delta u^1_{\\varepsilon}(x_0)-f^1(x_0)\\\\\n &= \\Delta u^1_{\\varepsilon}(x_0)-\\Delta u^2_{\\varepsilon}(x_0) +f^2(x_0)-f^1(x_0)\n \\geq -\\Delta \\psi^1 (x_0)+f^2(x_0)-f^1(x_0).\n\\end{align*}\nThe estimate above holds for any $\\varepsilon >0$, and therefore it proves the right inequality in $ i.) $. \nThe left inequality in $ i.) $ is a direct consequence of $ -\\beta_\\varepsilon \\geq 0$.\n\n\\par \nIn order to prove $ ii.) $, we\nrecall that $\\lim_{s\\rightarrow -\\infty} \\beta(s)=-\\infty$ and\n$\\beta_\\varepsilon(s)=\\beta(s\/\\varepsilon)$; hence the boundedness of $\\beta_\\varepsilon(\\theta^i_\\varepsilon)$\nfrom below implies that $ \\theta^i_\\varepsilon\/\\varepsilon$ is uniformly bounded from below \nby a negative constant $-C\\leq 0$. This finishes the proof of point $ ii.) 
$ in our lemma.\n\\end{proof}\n\n\n\\par\nUsing the Sobolev embedding theorem and Calderon-Zygmund estimates, we can conclude that\nthe functions $u^i_\\varepsilon$ are uniformly bounded in $W^{2,p}(\\Omega)$ for every $1<p<\\infty$.\n\n\\begin{proposition}\nUnder the assumptions \\eqref{bc} and \\eqref{costass}, through a subsequence, the solutions\n$u^i_\\varepsilon$ of the penalized system \\eqref{pensystem} converge in $C^{1,\\alpha}$ to functions\n$u^i_0 \\in W^{2,p}(\\Omega)$, $1<p<\\infty$, and the pair $(u^1_0,u^2_0)$ solves the system \\eqref{system2}\na.e. in $\\Omega$, together with the extra equation\n$ \\min(-\\Delta u^1_0+f^1, -\\Delta u^2_0+f^2)=0 $ a.e. in $\\Omega$.\n\\end{proposition}\n\n\\begin{proof}\nAssume that $u^1_0(x_0)-u^2_0(x_0)+\\psi^1(x_0)>0$ at some point $x_0 \\in \\Omega$;\nthen the strict inequality\n$u^1_\\varepsilon-u^2_\\varepsilon+\\psi^1>0$ holds in a small ball $B_r(x_0)$, centered at $x_0$,\nfor $\\varepsilon>0$ small enough. Then it follows that $ -\\Delta u^1_\\varepsilon+f^1= 0$ in $B_r(x_0)$, \nand we know that $ \\Arrowvert \\Delta u^1_\\varepsilon \\rVert_{L^\\infty}$ is uniformly bounded, \ntherefore through a subsequence, \n$ \\Delta u^1_\\varepsilon \\rightarrow \\Delta u^1_0 $ a.e. as $\\varepsilon \\rightarrow 0$, \nand consequently\n $ -\\Delta u^1_0+f^1= 0$ a.e. in $B_r(x_0)$. Moreover, since $f^1 \\in C^\\alpha$, we conclude that $u^1_0$ is\n a classical solution to\n $ -\\Delta u^1_0+f^1= 0$ in the ball $B_r(x_0)$.\n\n\\par\nThe solutions of the penalized system satisfy the equation\n\\begin{equation*}\n \\min(-\\Delta u^1_\\varepsilon+f^1, -\\Delta u^2_\\varepsilon+f^2)=0,\n\\end{equation*}\nsince $\\theta^1_\\varepsilon$ and $\\theta^2_\\varepsilon$ cannot be negative at the same time.\nAfter passing to the limit through a subsequence, we get\n\\begin{equation*}\n \\min(-\\Delta u^1_0+f^1, -\\Delta u^2_0+f^2)=0 \\textrm{ a.e.}\n\\end{equation*}\n\\end{proof}\n\n\\par\nProposition 1 shows that there exists $(u^1_0, u^2_0)$, with $u^i_0 \\in W^{2,p}$ for all $p <\\infty$, solving \n\\eqref{system2} in a strong sense. 
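The penalization scheme can also be explored numerically. A one-dimensional sketch with hypothetical data, all chosen for illustration and not taken from the paper: Omega = (0,1), f1 = -1, f2 = 1, constant switching costs psi1 = psi2 = 0.1, zero boundary values, and the merely Lipschitz (rather than smooth) penalty beta(s) = min(s, 0), beta_eps(s) = beta(s/eps). The penalty term is solved exactly at each node of a Gauss-Seidel sweep:

```python
N, h, eps = 41, 1.0 / 40.0, 1e-3
f1, f2, psi = -1.0, 1.0, 0.1
x = [j * h for j in range(N)]
u1, u2 = [0.0] * N, [0.0] * N

def sweep(u, v, f):
    # One Gauss-Seidel pass for -u'' + f + beta_eps(u - v + psi) = 0.
    # The piecewise-linear penalty is treated implicitly at each node.
    c = 0.5 * h * h / eps
    for j in range(1, N - 1):
        a = 0.5 * (u[j - 1] + u[j + 1]) - 0.5 * h * h * f
        if a - v[j] + psi >= 0.0:
            u[j] = a                                   # penalty inactive
        else:
            u[j] = (a + c * (v[j] - psi)) / (1.0 + c)  # penalty active

for _ in range(30000):
    sweep(u1, u2, f1)
    sweep(u2, u1, f2)
```

In the limit, u1 solves -u1'' - 1 = 0 (its constraint stays inactive), while u2 is pushed up to the obstacle u1 - 0.1 in the middle of the interval; both constraint violations stay of order eps, as in point ii.) of the lemma above.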
According to Lemma 1, $u^i_0$ has the following property:\n\n\\begin{equation}\\label{keyest}\n\\Arrowvert \\Delta u^i_0 \\rVert_{L^\\infty} \\leq\n\\max_i \\Arrowvert \\Delta \\psi^i \\rVert_{L^{\\infty}}+\n 3 \\max_i \\Arrowvert f^i \\rVert_{L^{\\infty}},\n\\end{equation}\nwhich will be relevant for deriving further regularity of solutions.\n\n\\par\n Furthermore, Proposition 1 tells us that the solution we get via the penalization method \n solves an extra equation,\n which turns out to be very important in the discussion of the uniqueness.\n\n\\subsection{Uniqueness}\nIt has been shown in the paper \\cite{LB83} that if there are no zero loops, then the\nsolution to the system \\eqref{system2} is unique. Here we give a counterexample showing\nthat the uniqueness does not hold in case \nthere are zero loops.\n\n\\begin{example}[Diogo Gomes] \nThe following system\n\\begin{equation} \\label{exun}\n\\begin{cases}\n\\min (-\\Delta u^1-M, u^1-u^2+\\psi)=0 \\\\\n\\min (-\\Delta u^2+M, u^2-u^1-\\psi)=0,\n\\end{cases}\n\\end{equation}\nwith given boundary conditions $u^i=g^i$, $g^1-g^2+\\psi =0 $ on $\\partial \\Omega $,\nadmits infinitely many solutions, provided $ 2 M > \\Arrowvert \\Delta \\psi \\rVert_{L^{\\infty}} $.\n\\par\nMoreover, \\eqref{exun} admits solutions $u^1, u^2 \\notin C^{1,1}$.\n\\end{example}\n\n\\begin{proof}\nLet $ (u^1, u^2)$ be a solution to the system \\eqref{exun}.\nSince both $u^1-u^2+\\psi \\geq 0$ and $u^2-u^1-\\psi \\geq 0$, it follows that\n $u^1-u^2+\\psi \\equiv 0$, therefore $ -\\Delta u^1= -\\Delta u^2+ \\Delta \\psi$.\n \n \\par\n Now let us take any $u^1 \\in W^{2,p}$, $p>n$, with $u^1=g^1$ on $\\partial \\Omega $, such that $ -\\Delta u^1-M \\geq 0$ a.e.\n Then the function $u^2=u^1+\\psi $ satisfies the boundary conditions\n$ u^2=g^2$ on $\\partial \\Omega $, and $ -\\Delta u^2+M \\geq 0$ a.e., \n since $ 2 M > \\Arrowvert \\Delta \\psi \\rVert_{L^{\\infty}} $. 
\nThus we get infinitely many solutions of the form $(u^1, u^1+\\psi)$, which may not be $C^{1,1}$.\n\\end{proof}\n\n\\par\nWe observe that if the zero loop set is empty, then the equation \n$ \\min(-\\Delta u^1+f^1, -\\Delta u^2+f^2)=0$ is satisfied automatically. \nUnder the nonnegative loop assumption, we saw that there exists a \nsolution to system \\eqref{system2} also solving system \\eqref{system3}. Next we show that \nthe system \\eqref{system3} has a unique solution, which is actually the minimal solution to (1).\n\n\\begin{proposition}\n The system \\eqref{system3} has a unique solution $(u^1_0,u^2_0)$ in $W^{2,p}$ for every $p<\\infty$.\n\\end{proposition}\n\n\\begin{proof}\nLet us assume that $(u^1,u^2)$ is a solution to system \\eqref{system3}, then \nthe difference $ U= u^1-u^2 $ solves the following double-obstacle\nproblem in $\\Omega $:\n\\begin{equation*}\n\\begin{cases}\n-\\Delta U +f^1-f^2 \\leq 0 \\textrm{ a.e. if } U >-\\psi^1 \\\\\n-\\Delta U +f^1-f^2 \\geq 0 \\textrm{ a.e. if } U <\\psi^2 \\\\\n-\\psi^1 \\leq U \\leq \\psi^2,\n\\end{cases}\n\\end{equation*}\nwith boundary conditions $ U= g^1-g^2 $\non $\\partial \\Omega $.\n\n\\par\nIt is well-known that the solution to the double-obstacle problem with given boundary data \nis unique in $W^{2,p}$. Indeed, let $V $ be another solution; then, \nwithout loss of generality, we may assume that \n$ \\max_{x\\in \\Omega } (U-V)= U(x_0)-V(x_0)>0$.\nThen in a small ball $B_r(x_0)$, one has $ U-V>0$, and $U-V$\n has a maximum at $x_0$. The inequality $ U>V \\geq -\\psi^1$ implies that\n $U > -\\psi^1$ in $B_r(x_0)$, hence $ -\\Delta U +f^1-f^2 \\leq 0$.\n Similarly, $V < \\psi^2 $ in $B_r(x_0)$ and therefore $ -\\Delta V +f^1-f^2 \\geq 0$.\n After combining the inequalities $ -\\Delta U +f^1-f^2 \\leq 0$ and\n $ -\\Delta V +f^1-f^2 \\geq 0$, we see that $U-V$ is a subharmonic function in the \n ball $B_r(x_0)$. 
Recalling that $U-V $ has a maximum at an interior point $x_0$, we get a \n contradiction to the maximum principle for subharmonic functions.\n\n\\par\nNow let us assume that $(v^1,v^2)$ \nis another solution to system \\eqref{system3}; then $ u^1-u^2 \\equiv v^1- v^2 $ in $\\Omega$.\nDenote $ h= u^1-v^1 \\equiv u^2- v^2$ in $\\Omega$; then $h=0$ on $\\partial \\Omega$.\n\n\\par\nNow let us plug in $ v^1= u^1-h $ and $ v^2= u^2-h $ into the equation\n \\begin{align*}\n 0=\\min(-\\Delta v^1+f^1, -\\Delta v^2+f^2)=\\\\\n \\min(-\\Delta u^1+f^1, -\\Delta u^2+f^2)+ \\Delta h= \\Delta h \\textrm{ a.e. }\n \\end{align*}\n Then it follows that $\\Delta h = 0$ a.e. in $\\Omega$, where $h\\in W^{2,p}(\\Omega)$\n for every $ 1<p<\\infty$ and $h=0$ on $\\partial \\Omega$. Hence $h$ is harmonic in $\\Omega$,\n and the maximum principle implies $h\\equiv 0$ in $\\Omega$, which proves the uniqueness.\n\\end{proof}\n\n\\section{Optimal regularity of the solutions}\n\n\\par\nIn this section we study the local regularity of the solutions $(u^1, u^2)$ to the system \\eqref{system}.\nDenote by $\\Omega_1$ the set of all points $x_0 \\in \\Omega$ for which there exists a ball\n$B_r(x_0)\\subset\\Omega$ such that $ u^1-u^2+\\varphi^1 \\equiv 0$ and $ -\\Delta u^1 = \\Delta \\varphi^1 >0$ a.e. \nin $B_r(x_0)$. Similarly we define the set $\\Omega_2$ corresponding to the function \n$u^2$, and let $\\Omega_{12}= \\Omega \\setminus \\overline{ \\Omega_1 \\cup \\Omega_2}$.\nThen $\\Omega_1$, $\\Omega_2 $ and $\\Omega_{12}$\nare disjoint open sets, and since $\\varphi^1, \\varphi^2 \\in C^{2,\\alpha}$,\n\\begin{align*}\n -\\Delta u^1= \\Delta \\varphi^1 >0, -\\Delta u^2= 0 \\textrm{ in } \\Omega_1, \\\\\n -\\Delta u^2= \\Delta \\varphi^2 >0, -\\Delta u^1= 0 \\textrm{ in } \\Omega_2, \\\\\n -\\Delta u^1=0, -\\Delta u^2=0 \\textrm{ in } \\Omega_{12}.\n\\end{align*}\n\n\\par\nIn the set $\\Omega \\setminus \\overline{\\Omega_1}$ we get \n$-\\Delta u^1=0$, and the function $u^2$ solves the\n obstacle problem $\\min(-\\Delta u^2, u^2-u^1+\\varphi^2)= 0$, with a $C^{2, \\alpha}$ obstacle\n $ u^1-\\varphi^2$. Therefore $u^2$ is locally $C^{1,1}$ in $ \\Omega \\setminus \\overline{\\Omega_1} $.\n Similarly, $u^1$ is locally $C^{1,1}$ in $ \\Omega \\setminus \\overline{\\Omega_2} $.\n\n \\par\n Next we need to study the regularity of the solution in a neighborhood of the set \n $\\partial \\Omega_1 \\cap \\partial \\Omega_2 \\cap \\Omega$. 
Let us note that it is contained in\n the zero loop set, $\\partial \\Omega_1 \\cap \\partial \\Omega_2 \\subset \\mathscr{ L}$, since \n $ u^1-u^2+\\varphi^1 \\equiv 0 \\textrm{ in } \\Omega_1$ and\n $ u^2-u^1+\\varphi^2 \\equiv 0 \\textrm{ in } \\Omega_2$.\n In the interior of the zero loop set the system \\eqref{system} reduces to the equation \n \\begin{equation} \\label{Leq}\n -\\Delta u^1 = (\\Delta \\varphi^1)^+, \\quad u^2=u^1+\\varphi^1 \\textrm{ in } \\mathscr{ L }^0.\n \\end{equation}\n From the classical theory, solutions to the equation \\eqref{Leq} are locally $C^{2, \\alpha}$\n if $\\Delta \\varphi^1 \\in C^{ \\alpha}$. So in a neighborhood of any point \n $x \\in \\partial \\Omega_1 \\cap \\partial \\Omega_2 $ with $ x \\in \\mathscr{ L}^0 $,\n the solution is $C^{2, \\alpha} $.\n \n \\par\n It remains to study the regularity of $(u^1, u^2)$ at the points\n $ x_0 \\in \\partial \\Omega_1 \\cap \\partial \\Omega_2 \\cap \\partial \\mathscr{ L}$, which we call \n ``meeting'' points. In this section we show that $u^1$ and $u^2$ are actually $C^{2,\\alpha}$-regular at such points.\n \n \\par\n For simplicity, let us study the system locally in the unit ball $B_1$, assuming that\n $0\\in \\partial \\mathscr{L} \\cap \\partial \\Omega_1 \\cap \\partial \\Omega_2$. 
\n We can always come to such a situation with a change of variables.\n\n \\subsection{Blow-up procedure}\nAssume that $(u^1, u^2)$ solves system \\eqref{system} in the unit ball $B_1$, and\n$0\\in \\partial \\mathscr{L} = \\partial \\mathscr{L}^0 $ is a meeting point, and let us study the regularity of the \nfunctions $u^1$ and $u^2$ at $0$.\n\n\\begin{definition} \\label{appol}\nFor a function $u \\in W^{2,2}$, define $\\Pi(u(x),r)=p_r(x) $, where\n$p_r(x)= x \\cdot A_r\\cdot x+b_r\\cdot x+c_r$ is a second order polynomial with the matrix $A_r$, vector $b_r$ \nand scalar $c_r$ minimizing the following expression\n\\begin{equation*}\n \\int_{B_r}\\left( |D^2u-2A_r|^2+|\\nabla u -b_r|^2 +|u-c_r|^2 \\right)dx.\n\\end{equation*}\n\\end{definition}\n\nThen $ \\Pi (u(rx),1)=p_r(rx)$, and\nit is easy to see that \n\\begin{equation*}\n p_r(x)=\\frac{1}{2} x \\cdot (D^2u)_r \\cdot x+ (\\nabla u)_r\\cdot x + (u)_r,\n\\end{equation*}\nwhere $ (u)_r:=(u)_{r,0}$, and $(u)_{r,x_0}$ is the average of $ u $ over the ball\n$B_r(x_0)= \\{ x \\in \\mathbb{R}^n \\mid \\lvert x-x_0 \\rvert < r \\}$.\n\n\\par\nFor $i=1,2$, let $2A^i_r:=(D^2u^i)_r$ and $p^i_r:=\\Pi(u^i(x),r)$, and define $S(r)$ for $r>0$ by\n\\begin{equation*}\n \\frac{S(r)}{r^2}=\\Arrowvert D^2 u^1(r x)-2A^1_{r} \\rVert_{L^2(B_1)}.\n\\end{equation*}\n\n\\par\nA change of variable will give us\n\\begin{align*}\n \\left(\\frac{S(r)}{r^2} \\right)^2=\\frac{1}{r^n} \\int_{B_{r}}|D^2u^1(y)-2A^1_{r}|^2dy =\\\\\n \\frac{1}{r^n} \\int_{B_{r}}|D^2u^1(y)-(D^2u^1)_{r}|^2 dy\\leq \n C \\left( \\max_ i \\Arrowvert \\Delta \\varphi^i \\rVert_{L^\\infty}+\\Arrowvert u^1 \\rVert_{L^2(B_1)}\\right)^2,\n\\end{align*}\ntherefore for all $ r<\\frac{1}{2} $\n\\begin{equation}\n \\frac{S(r)}{r^2}\n \\leq C \\left(\\max_ i \\Arrowvert \\Delta \\varphi^i \\rVert_{L^\\infty}+\n\\max_i \\Arrowvert u^i \\rVert_{L^2(B_1)} \\right) :=C_0.\n\\end{equation}\n\n \\par\nSo $S(r)$ has at least quadratic decay as $r\\rightarrow 0$.\n Next we improve the estimate, showing that actually \n 
$S(r) \\leq C_0 r^{2+\\alpha }$ for $r>0$ small enough.\n \n \\begin{proposition}\n Let $\\varphi^1, \\varphi^2 \\in C^{2, \\alpha}$ for some $0<\\alpha<1$, and $\\mathscr{L}= \\overline {\\mathscr{L}^0}$;\n then the function $\\frac{S(r)}{r^{2+\\alpha}}$ is uniformly bounded as $r$ goes to zero:\n \\begin{equation} \\label{main}\n \\frac{S(r)}{r^{2+\\alpha}} \\leq C \\left( \\max_ i \\Arrowvert \\Delta \\varphi^i \\rVert_{L^\\infty}+\n\\max_i \\Arrowvert u^i \\rVert_{L^2(B_1)} \\right),\n\\end{equation}\nwhere $C$ is a dimensional constant.\n\\end{proposition} \n\n\\begin{proof}\nLet us start with an important observation: the assumptions $ 0\\in \\partial \\mathscr{L} \\cap \\partial \\Omega_1 \\cap\n\\partial \\Omega_2$,\n$\\mathscr{L}=\\{ \\varphi^1+\\varphi^2 =0 \\}$, $\\mathscr{ L}= \\overline{\\mathscr{ L}^0}$ and \n$ \\varphi^1, \\varphi^2 \\in C^{2, \\alpha}$ imply that $ \\Delta \\varphi^1(0)+\\Delta \\varphi^2(0)=0 $. \nOn the other hand, $ \\Delta \\varphi^1>0 $ in $\\Omega_1$ and \n$ \\Delta \\varphi^2>0 $ in $\\Omega_2$, therefore $ \\Delta \\varphi^1(0) =\n\\Delta \\varphi^2(0)=0$.\n\n\\par\nNext we show that $\\varphi^i \\in C^{2, \\alpha}$, together with $\\Delta \\varphi^i(0)=0$, provides the growth \nestimate \\eqref{main}. The proof is by contradiction: assume that $\\frac{S(r)}{r^{2+\\alpha}}$\nis not bounded; \nthen there exists a sequence\n$r_k\\rightarrow 0$ as $ k\\rightarrow\\infty $, such that $S(r_k)=kr_k^{2+\\alpha}$ and $S(r)\\leq kr^{2+\\alpha}$ for\nall $r \\geq r_k$. 
Our aim is to study the convergence of the sequence $ v^i_{k}:=v^i_{r_k}$ as $k\\rightarrow\\infty$.\nFor that we will need some basic properties of the functions $v^i_r$, where $ 00$ is a dimensional constant.\nNext we apply Poincare's inequality one more time,\n$ \\Arrowvert v^i_r- (v^i_r)_1 \\rVert_{L^2(B_1)} \\leq C_n \\Arrowvert \\nabla v^i_r \\rVert_{L^2(B_1)}$.\n Therefore we may conclude that \n\\begin{equation} \\label{w22}\n \\Arrowvert \\nabla v^i_r \\rVert_{L^{2}(B_1)}\\leq C,\n \\Arrowvert v^i_r \\rVert_{L^{2}(B_1)}\\leq C\\left(1+ \n \\lVert \\varphi \\rVert_{C^{2,\\alpha}}\\frac{r^{2+\\alpha}}{S(r)}\\right),\n\\end{equation}\nfor every $ 00$ depends only on the dimension.\n\n\n\\par\nNext by using \\eqref{w22} we want to estimate the $\\Arrowvert v^i_k \\rVert_{W^{2,2}(B_R)} $ \nfor $ R < 1\/r_k $ as $k\\rightarrow\\infty$.\nLet us start by looking at the expressions $ \\lvert A^i_{2^lr_k}-A^i_{2^{l-1}r_k} \\rvert$,\nwhere $l\\in \\mathbb{N}$ and $r_k<1$ are chosen such that $s=r_k 2^{l-1}\\leq \\frac{1}{4}$.\nIt follows from Minkowski's inequality, that\n \\begin{align*}\n \\lvert A^i_{2s}-A^i_s \\rvert \\leq {\\left( \\fint _{B_1} |D^2u^i(xs)-2A^i_s|^2\\right)}^{\\frac{1}{2}}+\n \\left(\\fint _{B_1} |D^2u^i(xs)-2A^i_{2s}|^2 \\right)^{\\frac{1}{2}}\\\\\n \\leq \\frac{S(s)}{s^2}+2^{\\frac{n}{2}} \\left(\\fint _{B_{\\frac{1}{2}}} |D^2u^i(2xs)-2A^i_{2s}|^2 \\right)^{\\frac{1}{2}} \n\\leq \\frac{S(s)}{s^2}+2^{\\frac{n}{2}}\\frac{S(2s)}{4s^2}.\n \\end{align*}\nHence\n\\begin{equation*}\n \\lvert A^i_{2^lr_k}-A^i_{2^{l-1}r_k} \\rvert \\leq k(1+2^{\\frac{n}{2}+\\alpha})(r_k 2^{l-1})^\\alpha,\n\\end{equation*}\nprovided $r_k 2^{l-1} \\leq \\frac{1}{4}$.\n\n\\par\nNow let us take any $m \\in \\mathbb{ N }$ such that $ 2^{m+1} r_k \\leq 1$, then\n\\begin{align*}\n \\left(\\int _{B_{2^m}}|D^2 v^i_k(x)|^2dx \\right)^{\\frac{1}{2}}=\\frac{r_k^2}{S(r_k)}\n \\left(\\int _{B_{2^m}}|D^2 u^i(r_k x)-2A^i_{r_k}|^2dx \\right)^{\\frac{1}{2}}\\\\ \n \\leq \\frac{ 
2^{\\frac{mn}{2}}}{k r_k^{\\alpha}}\n \\left(\\int _{B_1}|D^2 u^i(2^m r_k x)-2A^i_{r_k}|^2dx \\right)^{\\frac{1}{2}} \\\\\n \\leq \\frac{ 2^{\\frac{mn}{2}}}{k r_k^{\\alpha}}\n \\left( \\left(\\int _{B_1}|D^2 u^i(2^m r_k x)-2A^i_{2^m r_k}|^2dx \\right)^{\\frac{1}{2}}+\n \\lvert A^i_{2^mr_k}-A^i_{r_k} \\rvert \\right)\\\\\n \\leq \\frac{ 2^{\\frac{mn}{2}}}{k r_k^{\\alpha}}\n \\left( \\frac{S(2^m r_k)}{(2^m r_k)^2} +\\sum_{j=1}^{m}\\lvert A^i_{2^jr_k}-A^i_{2^{j-1}r_k} \\rvert \\right)\\\\\n\\leq \\frac{ 2^{\\frac{mn}{2}}}{k r_k^{\\alpha}} \\left(k 2^{m \\alpha }r_k^\\alpha\n+ k 2^n r_k^\\alpha \\sum_{j=1}^{m} 2^{\\alpha (j-1)} \\right) \\leq\n2^{n+1} 2^{m\\left( \\frac{n}{2}+\\alpha \\right)}.\n \\end{align*}\n\n \\par\nFor every $R<\\frac{1}{2r_k}$ we can find an $m \\in \\mathbb{N }$ such that \n$2^{m-1}\\leq R<2^m $, and then applying the estimates above, we get \n\\begin{equation*}\n \\int _{B_R}|D^2 v^i_k(x)|^2dx\\leq C_n R^{n+2\\alpha},\n\\end{equation*}\nfor every $R<\\frac{1}{2r_k}$, where $C_n$ is a dimensional constant.\n Then we can also show that $ \\Arrowvert \\nabla v^i_{r_k} \\rVert_{L^2(B_R)}$ and\n $ \\Arrowvert v^i_{r_k} \\rVert_{L^2(B_R)}$ are bounded by a constant depending only on $R $.\n Indeed, applying the corresponding estimates for $A^i_{r_k}$, and the first inequality in \\eqref{w22}, we get\n \\begin{equation*}\n \\lvert b^i_{2^l r_k}- b^i_{2^{l-1}r_k}\\rvert \\leq C_{n,\\alpha} k(r_k 2^{l})^{1+\\alpha},\n \\end{equation*}\nand therefore\n\\begin{equation*}\n \\lvert b^i_{R r_k}- b^i_{r_k}\\rvert \\leq C_{n,\\alpha} k (r_k R )^{1+\\alpha}.\n\\end{equation*}\n\nThen the Poincare's inequality in a ball $B_R$ implies that\n\\begin{equation*}\n \\lVert \\nabla v^i_k -( \\nabla v^i_k)_R \\rVert_{L^2(B_R)} \\leq C_n R \\lVert D^2 v^i_k \\rVert_{L^2(B_R)},\n\\end{equation*}\nand\n\\begin{equation*}\n \\lVert v^i_k -( v^i_k)_R \\rVert_{L^2(B_R)} \\leq C_n R \\lVert \\nabla v^i_k \\rVert_{L^2(B_R)},\n\\end{equation*}\nwhere \n\\begin{align*}\n ( 
\\nabla v^i_k)_R= \\frac{r_k}{S(r_k)} ( \\nabla u^i_{r_k}-r_k A^i_{r_k} \\cdot x -b^i_{r_k} )_R \\\\\n= \\frac{r_k}{S(r_k)} \\left( (\\nabla u^i )_{R r_k}- b^i_{r_k}\\right)\n = \\frac{r_k}{k r_k^{2+\\alpha}} \\left( b^i_{ R r_k}- b^i_{r_k} \\right),\n\\end{align*}\nand\n\\begin{align*}\n (v^i_k)_R=\\frac{1}{S(r_k)} \\left( c^i_{R r_k} -c^i_{r_k} +\\frac{n}{2} r_k^2(\\Delta u^i)_{r_k}(x_1^2)_R \\right).\n\\end{align*}\n\n\\par\nNext let us observe that the second inequality in \\eqref{w22}, with the corresponding estimates for \n$A^i_r$ and $b^i_r$ imply that\n\\begin{equation*}\n \\lvert c^i_{Rr_k}-c^i_{r_k}\\rvert \\leq C k(R r_k)^{2+\\alpha}.\n\\end{equation*}\n\n\\par\nThen it follows from the triangle's inequality that\n\\begin{equation*}\n \\lVert \\nabla v^i_k \\rVert_{L^2(B_R)} \\leq C \\left( R^{\\frac{n}{2}+1+\\alpha}+\n R^{\\frac{n}{2}}\\frac{r_k}{k r_k^{2+\\alpha}}\\lvert b^i_{Rr_k}-b^i_{r_k}\\rvert \\right) \\leq C_{n}R^{\\frac{n}{2}+1+\\alpha},\n\\end{equation*}\nand also\n\\begin{align*}\n \\lVert v^i_k\\rVert_{L^2(B_R)}\\leq C \\left( R \\lVert \\nabla v^i_k \\rVert_{L^2(B_R)}+\n R^{\\frac{n}{2}}(v^i_k)_R \\right) \\leq \\\\\n C\\left( R^{\\frac{n}{2}+2+\\alpha} +R^{\\frac{n}{2}} \\frac{\\lvert c^i_{Rr_k}-c^i_{r_k}\\rvert}{S(r_k)} +\n \\lVert \\varphi \\rVert_{C^{2,\\alpha}} R^{\\frac{n}{2}+2} \\frac{r_k^{2+\\alpha}}{S(r_k)}\\right)\n \\leq C' R^{\\frac{n}{2}+2+\\alpha}.\n\\end{align*}\n\n\n \\par\nTherefore we have shown that the sequence $v^i_k$ is locally uniformly bounded in $W^{2,2}$, hence \n through a subsequence, $v^i_k$ converges weakly in $W^{2,2}(B_R)$, and strongly in $W^{1,2}(B_R)$, denote \n $ v^i_0= \\lim_{k\\rightarrow\\infty}v^i_k$ for $i=1,2$.\n Then the weak convergence of the second order derivatives implies that \n \\begin{equation*}\n \\int _{B_R}|D^2 v^i_0(x)|^2dx \\leq \\limsup_{k \\rightarrow \\infty} \n \\int _{B_R}|D^2 v^i_k(x)|^2dx ,\n \\end{equation*}\n and therefore\n \\begin{equation} \\label{impest}\n \\int _{B_R}|D^2 
v^i_0(x)|^2dx \\leq C_n R^{n+2\\alpha}.\n\\end{equation}\n\n\n\\par\nNext we describe further properties of the limit functions, $v^1_0$ and $v^2_0$.\nRecall that \n\\begin{equation*}\n v^i_k(x)=\\frac{u^i(r_k x)-\\Pi(u^i(r_k x),1)}{S(r_k)},\n\\end{equation*}\nthen we have\n\\begin{equation*}\n -\\Delta v^i_k (x)=\\frac{r_k^2}{S(r_k)} \\left( -\\Delta u^i(r_k x)+tr A^i_{r_k} \\right).\n\\end{equation*}\nLet us denote\n\\begin{align*}\n q^1_k(x)=\\frac{p^1_{r_k}(r_k x)-p^2_{r_k}(r_k x)+\\varphi^1(r_k x)}{S(r_k)}, \\textrm{ and } \\\\\n q^2_k(x)=\\frac{p^2_{r_k}(r_k x)-p^1_{r_k}(r_k x)+\\varphi^2(r_k x)}{S(r_k)}.\n\\end{align*}\n\nThen $ (v^1_k, v^2_k)$ is a strong solution to the following system\n\\begin{equation*}\n\\begin{cases}\n\\min (-\\Delta v^1_k-\\frac{tr A^1_{r_k}}{k r_k^\\alpha}, v^1_k-v^2_k+q^1_k)=0 \\\\\n\\min (-\\Delta v^2_k-\\frac{tr A^2_{r_k}}{k r_k^\\alpha}, v^2_k-v^1_k+q^2_k)=0 \\\\\n\\min (-\\Delta v^1_k-\\frac{tr A^1_{r_k}}{k r_k^\\alpha},-\\Delta v^2_k-\\frac{tr A^2_{r_k}}{k r_k^\\alpha})=0,\n\\end{cases}\n\\end{equation*}\ntherefore\n\\begin{equation*}\n -\\Delta v^1_k(x)=\n \\begin{cases}\n \\frac{tr A^1_{r_k}}{k r_k^\\alpha}+\\frac{\\Delta \\varphi^1(r_kx)}{k r_k^\\alpha}, \\textrm{ if }\n r_k x \\in \\Omega_1 \\\\\n \\frac{tr A^1_{r_k}}{k r_k^\\alpha}, \\textrm{ otherwise },\n \\end{cases}\n\\end{equation*}\nand\n\\begin{equation*}\n -\\Delta v^2_k(x)=\n \\begin{cases}\n \\frac{tr A^2_{r_k}}{k r_k^\\alpha}+\\frac{\\Delta \\varphi^2(r_kx)}{k r_k^\\alpha},\n \\textrm{ if } r_k x \\in \\Omega_2 \\\\\n \\frac{tr A^2_{r_k}}{k r_k^\\alpha}, \\textrm{ otherwise }.\n \\end{cases}\n\\end{equation*}\n\n\n\\par\nThen $ \\Delta \\varphi^i(0) =0$, for $i=1,2$ \ntogether with $\\varphi^i \\in C^{2,\\alpha}$, implies that \n\\begin{equation*}\n \\frac{\\Arrowvert\\Delta \\varphi^i(r_k\\cdot)\\rVert_{L^2(B_R)}}{kr_k^\\alpha} \\leq \n\\frac{C_n}{k} R^{\\frac{n}{2}+\\alpha} \\Arrowvert \\varphi^i \\rVert_{C^{2,\\alpha}} \n\\rightarrow 0, \\textrm{ as } 
k\\rightarrow\\infty,\n\\end{equation*}\nfor $i=1,2$, and for any fixed $1\\leq R<\\infty$.\n\n\\par\nWe have that $v^i_k \\rightharpoonup v^i_0 $ weakly in $W^{2,2}(B_R)$ and $ v^i_k \\rightarrow v^i_0$ \nin $W^{1,2}(B_R)$, therefore\n$ \\Delta v^i_k \\rightharpoonup \\Delta v^i_0$ weakly in $L^{2}(B_R)$, but\n$ \\Delta v^i_k = \\frac{tr A^i_{r_k}}{k r_k^\\alpha}\n+\\frac{\\Delta \\varphi^i(r_kx)}{k r_k^\\alpha} \\chi_{\\Omega_i}(r_kx)$, and\n$\\Arrowvert \\frac{\\Delta \\varphi^i(r_kx)}{k r_k^\\alpha} \\chi_{\\Omega_i}(r_kx) \\rVert_{L^2(B_R)} \\rightarrow 0$.\nThus we may conclude that the sequence of numbers \n$\\frac{tr A^i_{r_k}}{k r_k^\\alpha}$ converges, and we denote \n$ a^i:= \\lim_ {k \\rightarrow\\infty} \\frac{tr A^i_{r_k}}{k r_k^\\alpha}$.\nThen $ \\Arrowvert \\Delta v^i_k -a^i \\rVert_{L^2(B_R)} \\rightarrow 0 $ as $k\\rightarrow\\infty$ \nfor every $1\\leq R <\\infty$. Therefore\nboth $ -\\Delta v^1_0 -a^1 \\equiv 0 $ and $ -\\Delta v^2_0 -a^2 \\equiv 0 $ in $\\mathbb{R}^n$.\n\n\n\\par\nWe have shown that $ v^i_0(x)-a^i \\frac{\\lvert x \\rvert^2}{2n}$ is a harmonic function in $\\mathbb{ R}^n $.\nHence the matrix $ D^2v^i_0$ has harmonic entries $ D^k v^i_0$, where $k$ is a multi-index, $ \\lvert k \\rvert=2$.\nNext we can apply the derivative estimates for harmonic functions together with inequality \\eqref{impest}, to get \n\\begin{align*}\n \\lvert \\nabla D^k v^i_0(x_0) \\rvert \\leq R^{ -\\frac{n}{2}-1}\\lVert D^k v^i_0 \\rVert_{L^2(B_R(x_0))} \\\\\n \\leq R^{ -\\frac{n}{2}-1} \\lVert D^k v^i_0 \\rVert_{L^2(B_{2R})} \n \\leq C' R^{-1+\\alpha},\n\\end{align*}\nprovided $ R > \\lvert x_0\\rvert$.\nLetting $ R \\rightarrow \\infty$, we see that \nthe derivatives of $D^k v^i_0$ vanish, hence $ D^2 v^i_0$ is a constant matrix, and therefore\n$v^i_0$, $i \\in \\{1,2\\} $ is a second order polynomial. 
\n\n\n\\par\nAccording to our construction, \n$v^i_0$ are orthogonal to the second order polynomials in the $L^2(B_1)$-sense, hence\nboth $ v^1_0$ and $ v^2_0$ must be identically zero. Then the constants $a^1=a^2=0$, and\n$ \\Arrowvert \\Delta v^i_k \\rVert_{L^2(B_1)} \\rightarrow 0$ as $k\\rightarrow\\infty$; the latter \ncontradicts the condition $\\max_i \\Arrowvert D^2v^i_k \\rVert_{L^2(B_1)}=1$.\n \n\\end{proof}\n\n\n\\subsection{$C^{2, \\alpha}$-regularity at the meeting points}\n\nWe start by showing that the approximating polynomials $p^i_r$ \nconverge to a polynomial $p^i_0$, and describe the rate of convergence. \n\n\\begin{lemma}\nLet $(u^1, u^2)$ be a solution to \\eqref{system}, and assume that $\\varphi^i \\in C^{2, \\alpha}$.\nLet the polynomials $p^i_r$ be as in Definition \\ref{appol}, then there exists a polynomial $p^i_0$ such that \n\\begin{equation} \\label{pol}\n \\sup _{x \\in B_r}\\lvert p^i_r(x)-p^i_0(x)\\rvert \\leq C r^{2+\\alpha}.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nThe condition $\\Arrowvert D^2v^i_r \\rVert _{L^2(B_1)}\\leq 1$ together with the inequality \n\\eqref{main} implies that \n\\begin{equation} \\label{1}\n \\left( \\int_{B_1} \\lvert D^2u^i(rx)-D^2p^i_r \\rvert ^2 dx \\right) ^{\\frac{1}{2}} \\leq \\frac{S(r)}{r^2}\n \\leq C_0 r^{\\alpha}.\n\\end{equation}\n\nRecall that $A^i_r=D^2p^i_r$, then using the triangle inequality, and that $A^i_r$ minimizes \n$ \\Arrowvert D^2 u^i(rx) - A \\rVert_{L^2(B_1)}$ over matrices $A \\in \n\\mathbb{R}^n \\times \\mathbb{R}^n$, we get \n$ \\lvert A^i_r-A^i_{\\frac{r}{2}} \\rvert \\leq C_0 r^{\\alpha}$ for all $00$ depends only on the given data.\n\\end{theorem}\n\n\\begin{proof}\nWithout loss of generality, we may assume $x_0=0$,\nand consider the following rescalings\n\\begin{equation*}\n v^i_r(x)= \\frac{u^i(rx) -p^i_0(rx)}{r^{2+\\alpha}}, \n\\end{equation*}\nthen according to Lemma 2 and Corollary 2, $ \\Arrowvert v^i_r \\rVert_ {W^ {2,2}(B_1)} 
\\leq C$.\n\n\\par\nThe pair $(v^1_r, v^2_r)$ solves the following system\n\n\\begin{equation*}\n\\begin{cases}\n\\min (-\\Delta v^1_r-\\frac{tr A^1_0}{r^\\alpha}, v^1_r-v^2_r+q^1_r)=0 \\\\\n\\min (-\\Delta v^2_r- \\frac{tr A^2_0}{r^\\alpha}, v^2_r-v^1_r+q^2_r)=0 \\\\\n\\min (-\\Delta v^1_r-\\frac{tr A^1_0}{r^\\alpha}, -\\Delta v^2_r- \\frac{tr A^2_0}{r^\\alpha} )=0,\n\\end{cases}\n\\end{equation*}\n\nwhere\n\\begin{equation*}\n q^1_r(x)= \\frac{p^1_0(rx)-p^2_0(rx) +\\varphi^1(rx)}{r^{2+\\alpha}},\n q^2_r(x)= \\frac{p^2_0(rx)-p^1_0(rx) +\\varphi^2(rx)}{r^{2+\\alpha}}.\n\\end{equation*}\n\nThen\n\\begin{equation*}\n -\\Delta v^i_r=\\begin{cases}\n \\frac{tr A^i_0}{r^\\alpha}+ \\frac{\\Delta \\varphi^i(rx)}{r^\\alpha} \\textrm{ if } rx \\in \\Omega_i \\\\\n \\frac{tr A^i_0}{r^\\alpha} \\textrm{ otherwise. }\n \\end{cases}\n\\end{equation*}\n\n\\par\nWe assumed that $0 \\in \\partial \\Omega_1 \\cap \\partial \\Omega_2 \\cap \\partial \\mathscr{L}$, then \n$ \\Delta \\varphi^i (0)=0$, $i=1,2$. Hence \n$ \\lvert \\frac{\\Delta \\varphi^1(rx)}{r^\\alpha}\\rvert \\leq\n\\Arrowvert \\varphi^1_r \\rVert_{C^{2,\\alpha}(B_1)} \\lvert x \\rvert^\\alpha$.\nWe know that $ \\Arrowvert \\Delta v^i_r\\rVert_{L^2(B_1)}$\nis bounded, therefore $tr A^i_0=0$, and $ \\Delta v^i_r (x)$ is uniformly bounded.\n\n\n\n\\par\nWe have that $ \\Arrowvert v^i_r \\rVert_{L^2(B_1)} \\leq C$\nand we saw that $ \\Arrowvert \\Delta v^i_r \\rVert_{L^\\infty(B_1)} \\leq \n\\Arrowvert \\varphi^i \\rVert_{C^{2,\\alpha}(B_1)}$. \nUsing the Calderon-Zygmund estimates, we conclude that \n$ \\Arrowvert v^i_r \\rVert_{C^{1,\\gamma}}$\nis uniformly bounded. In particular, $ \\lvert v^i_r(x) \\rvert \\leq \nC'(C_0+\\Arrowvert \\varphi^i \\rVert_{C^{2,\\alpha}(B_1)} ) $ for every\n$ x \\in B_1$ and $r\\leq \\frac{1}{2} $. 
\n\n\\par\nRecall that we set $C_0=C \\left(\\max_ i \\Arrowvert \\Delta \\varphi^i \\rVert_{L^\\infty}+\n\\max_i \\Arrowvert u^i \\rVert_{L^2(B_1)} \\right) $, and $ v^i_r(x)= \\frac{u^i(rx) -p^i_0(rx)}{r^{2+\\alpha}} $.\nThen we get the desired inequality\n \\begin{equation*}\n \\sup _{x \\in B_r}\\lvert u^i(x)-p^i_0(x) \\rvert \\leq C\n \\left(\\max_{i \\in \\{1,2\\}} \\Arrowvert u^i \\rVert_{L^2(B_1)} +\\max_{i\\in \\{1,2\\}}\\Arrowvert \\varphi^i \\rVert_{C^{2,\\alpha}(B_1)} \\right)\n r^{2+\\alpha}.\n\\end{equation*}\n\\end{proof}\n\n\n\n\n\n\n\n\\subsection{A counterexample in case the zero-loop set has an isolated point}\nHere we give a counterexample, showing that if the zero loop set has an isolated point,\nthen the solution may not be $C^{1,1}$.\n\n\n\\par\nWe consider the following system in $\\mathbb{R}^2$\n\\begin{equation} \\label{countex}\n\\begin{cases}\n\\min (-\\Delta u^1, u^1-u^2+\\varphi)=0 \\\\\n\\min (-\\Delta u^2, u^2-u^1+\\varphi)=0,\n\\end{cases}\n\\end{equation}\nwith $ \\varphi= \\frac{1}{4} \\lvert x \\rvert ^2 $.\n\n\n\n\n\\par\nThen the difference $U=u^1-u^2$ solves the following double-obstacle problem \nin $\\mathbb{ R }^2$\n\n\\begin{equation} \\label{do}\n-\\Delta U = \\begin{cases}\n1, \\textrm{ if } U=-\\varphi \\\\\n-1, \\textrm{ if } U= \\varphi \\\\\n0 , \\textrm{ if } -\\varphi< U< \\varphi,\n\\end{cases}\n\\end{equation}\nand $-\\Delta u^1 =(-\\Delta U )^+$ and $ -\\Delta u^2 =(\\Delta U )^+ $.\n\n\\par\nNow let us consider a function $ w $ defined as follows\n\\begin{equation*}\nw = \\begin{cases}\n- \\frac{1}{4} \\lvert x \\rvert^2 , \\textrm{ if } x_1> 0, x_2>0 \\\\\n\\frac{1}{4} (x_1^2-x_2^2), \\textrm{ if } x_1<0, x_2 >0 \\\\\n\\frac{1}{4} (x_2^2-x_1^2), \\textrm{ if } x_1>0, x_2 <0 \\\\ \n \\frac{1}{4} \\lvert x \\rvert^2 , \\textrm{ if } x_1< 0, x_2<0.\n\\end{cases}\n\\end{equation*}\nThen $w \\in C^{1,1}$ also solves the double-obstacle problem \\eqref{do}, therefore we \nchoose $ U \\equiv w $. 
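The claim that $w$ solves \eqref{do} can be checked quadrant by quadrant. The short \textsc{SymPy} script below is an illustration added here (it is not part of the original argument): it verifies the piecewise Laplacian of $w$ against \eqref{do} and the $C^1$ matching of the gradients across the half-axis $\{x_1=0,\, x_2>0\}$; the remaining axes follow by the symmetries of $w$.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
phi = (x1**2 + x2**2) / 4          # the obstacle phi = |x|^2/4

# w on the four open quadrants, as defined above
w_pp = -phi                        # x1 > 0, x2 > 0 : w = -phi
w_mp = (x1**2 - x2**2) / 4         # x1 < 0, x2 > 0
w_pm = (x2**2 - x1**2) / 4         # x1 > 0, x2 < 0
w_mm = phi                         # x1 < 0, x2 < 0 : w = phi

lap = lambda f: sp.diff(f, x1, 2) + sp.diff(f, x2, 2)

# -Delta w should be 1 on {w = -phi}, -1 on {w = phi}, 0 in between
checks = (-lap(w_pp), -lap(w_mm), -lap(w_mp), -lap(w_pm))
print(checks)  # (1, -1, 0, 0)

# C^1 matching across {x1 = 0, x2 > 0}: gradients of w_pp and w_mp agree
grad = lambda f: [sp.diff(f, x1), sp.diff(f, x2)]
g_right = [g.subs(x1, 0) for g in grad(w_pp)]
g_left = [g.subs(x1, 0) for g in grad(w_mp)]
print(g_right == g_left)  # True
```

In the two middle quadrants one has $w-(-\varphi)=x_1^2/2>0$ and $\varphi-w=x_2^2/2>0$ (and symmetrically), so $-\varphi<w<\varphi$ there, in agreement with the third case of \eqref{do}.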
\n\n\\par\nNext we write $u^1$ explicitly in polar coordinates\n\\begin{equation*}\nu^1(r,\\theta) =\n\\begin{cases}\n -\\frac{1}{4}r^2-\\frac{1}{2 \\pi } r^2 \\theta \\cos 2\\theta -\\frac{1}{2 \\pi} r^2 \\ln r \\sin 2\\theta,\n \\textrm{ if } 0<\\theta \\leq \\frac{\\pi }{2}\\\\\n-\\frac{1}{4}r^2 \\cos 2 \\theta +\\frac{1}{2 \\pi } r^2 \\theta \\cos 2\\theta +\\frac{1}{2 \\pi} r^2 \\ln r \\sin 2\\theta, \\textrm{ otherwise }, \n\\end{cases}\n\\end{equation*}\nhere the function $ r^2 \\theta \\cos 2\\theta + r^2 \\ln r \\sin 2\\theta \\in C^{1, \\gamma} $ \nfor every $0<\\gamma<1$, solves the Laplace equation in $ \\mathbb{ R }^2 \\backslash \\{0\\}$,\nbut is not $C^{1,1}$ near the origin.\n\n\n\\par\nTherefore $u^1$ is a $C^{1,\\gamma}$ function in the unit ball in $\\mathbb{ R }^2$\nbut it is not $C^{1,1}$\nin the neighborhood of the origin, since $ \\lvert \\frac{\\partial^2 u^1}{\\partial r^2}\\rvert \n\\approx \\lvert \\ln r \\rvert \\rightarrow\\infty $ as $r\\rightarrow 0$.\nMoreover, \n $ -\\Delta u^1(r, \\theta) = \\chi _{\\{0<\\theta \\leq \\frac{\\pi }{2} \\}}= \\chi _{\\{ u^1 > 0 \\}}$,\nprovided $r> 0 $ is small enough.\n \n\n\n\n\\par\nNext we take $ u^2(r, \\theta )= u^1(r, \\theta)-w$, then \n\\begin{equation*}\nu^2(r,\\theta) =\n\\begin{cases}\n-\\frac{1}{2 \\pi } r^2 \\theta \\cos 2\\theta -\\frac{1}{2 \\pi} r^2 \\ln r \\sin 2\\theta,\n \\textrm{ if } 0<\\theta \\leq \\frac{\\pi }{2}\\\\\n - \\frac{1}{2}r^2 \\cos 2\\theta+\\frac{1}{2 \\pi } r^2 \\theta \\cos 2\\theta +\\frac{1}{2 \\pi} r^2 \\ln r \\sin 2\\theta,\n \\textrm{ if } \\frac{\\pi }{2} <\\theta \\leq \\pi \\\\\n - \\frac{1}{4}r^2- \\frac{1}{4}r^2 \\cos 2\\theta+\\frac{1}{2 \\pi } r^2 \\theta \\cos 2\\theta +\\frac{1}{2 \\pi} r^2 \\ln r \\sin 2\\theta,\n \\textrm{ if } \\pi <\\theta \\leq \\frac{3 \\pi }{2}\\\\\n \\frac{1}{2 \\pi } r^2 \\theta \\cos 2\\theta +\\frac{1}{2 \\pi} r^2 \\ln r \\sin 2\\theta,\n \\textrm{ if }\\frac{3 \\pi }{2} <\\theta \\leq 2 \\pi 
\n\\end{cases}\n\\end{equation*}\n\nNeither $u^1$ nor $u^2 $ is a $C^{1,1}$ function. However, it is easy to see that $(u^1,u^2) $\nsolves \\eqref{countex}.\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAfter the Higgs boson discovery in 2012~\\cite{Chatrchyan:2012ufa} and\nsubsequent analyses of its properties~\\cite{Hcoup}, evidence for physics\nbeyond the Standard Model (BSM) remains elusive. Although consistency\nwith SM Higgs properties is expected in many BSM\nscenarios, current measurements do not fully constrain\nthe Higgs sector. One coupling which is currently unconstrained and has recently been subject of much interest is the Higgs\nself-interaction $\\sim \\eta$, which is responsible for the spontaneous\nbreaking of electroweak gauge symmetry in the SM via the potential\n\\begin{equation}\n \\label{eq:higgspot}\n V(H^\\dagger H) =\n \\mu^2 H^\\dagger H + \\eta (H^\\dagger H)^2 \\, , \n\\end{equation}\nwith $\\mu^2<0$, where $H=(0,v+h)^T\/\\sqrt{2}$ in unitary gauge. The\nHiggs self-coupling manifests itself primarily in a destructive\ninterference in gluon fusion-induced di-Higgs\nproduction~\\cite{nigel,uli2,Maltoni:2014eza} through feeding into the\ntrilinear Higgs interaction with strength\n$\\lambda_{\\text{SM}}=m_h\\sqrt{\\eta\/2}=gm_h^2\/(4 m_W)$ in the SM. The\nlatter relation can be altered in BSM scenarios, e.g. the SM coupling\npattern can be distorted by the presence of a dimension six operator\n$\\sim (H^\\dagger H)^3$, and di-Higgs production is the only channel\nwith direct sensitivity to this interaction~\\cite{Azatov:2015oxa}. 
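As a quick numerical cross-check of the two expressions for $\lambda_{\text{SM}}$ quoted above, the snippet below (our illustration, not part of the paper) uses the tree-level relations $m_h^2 = 2\eta v^2$ and $m_W = g v/2$ with indicative input values; both expressions reduce to $m_h^2/(2v)$.

```python
import math

# Indicative electroweak inputs in GeV (illustrative values)
v, m_h, m_W = 246.22, 125.09, 80.38

eta = m_h**2 / (2 * v**2)          # quartic coupling from m_h^2 = 2*eta*v^2
g = 2 * m_W / v                    # SU(2) coupling from m_W = g*v/2

lam_1 = m_h * math.sqrt(eta / 2)   # lambda_SM = m_h * sqrt(eta/2)
lam_2 = g * m_h**2 / (4 * m_W)     # lambda_SM = g * m_h^2 / (4 m_W)

# Both reduce to m_h^2/(2v), roughly 32 GeV
print(lam_1, lam_2)
```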
A\n modification solely of the Higgs trilinear coupling, which is typically\ninvoked in di-Higgs feasibility studies, is predicted in models of\n$\\mu^2$-less electroweak symmetry breaking,~e.g.~\\cite{womu2}.\n\n\\begin{figure*}[!t]\n \\centering\n \\subfigure[\\label{fig:pth}]{\\includegraphics[width=0.45\\textwidth]{gf_ptmaxhiggs.eps}}\n \\hfill\n \\subfigure[\\label{fig:ptj}]{\\includegraphics[width=0.45\\textwidth]{gf_pt0.eps}}\n \\caption{\\label{fig:pt} Maximum Higgs and jet transverse momenta\n for gluon fusion-induced $hhjj$ production, including the ratio of\n full theory to the effective theory calculation for three\n different values of the Higgs trilinear coupling $\\lambda$.}\n\\end{figure*}\n\n\n\\begin{figure}[t!]\n \\centering\n \\vspace{0.3cm}\n \\parbox{0.45\\textwidth}{\n \\includegraphics[angle=-90,width=0.22\\textwidth]{ggHHgg_diag1.eps}\n \\includegraphics[angle=-90,width=0.22\\textwidth]{ggHHgg_diag2.eps}\\\\\n \\includegraphics[angle=-90,width=0.22\\textwidth]{ggHHuu_diag1.eps}\n \\includegraphics[angle=-90,width=0.22\\textwidth]{uuHHdd_diag1.eps}\n \\caption{Sample Feynman diagrams contributing to $pp\\to hhjj$ via gluon\n fusion.}\n \\label{fig:sample_gf}\n \\vspace{-0.5cm}\n}\n\\end{figure}\n\nAfter the Higgs discovery, analyses of the di-Higgs final state at the\nhigh-luminosity LHC and beyond have experienced a renaissance, and\ndi-Higgs final states such as the $b\\bar b \\gamma\n\\gamma$~\\cite{Baur:2003gp,Barger:2013jfa,Azatov:2015oxa,Barr:2014sga},\n$b\\bar b \\tau^+\\tau^-$~\\cite{Baur:2003gpa,us,us2}, $b \\bar b\nW^+W^-$~\\cite{Baur:2003gpa,Papaefstathiou:2012qe,us} and $b\\bar b b\n\\bar b$~\\cite{Baur:2003gpa,deLima:2014dta,us} channels have been\nstudied phenomenologically, often relying on boosted jet substructure\ntechniques~\\cite{bdrs} (see also an\ninvestigation~\\cite{Papaefstathiou:2015iba} of rare decay channels\nrelevant for a 100 TeV collider). 
Recent analyses by ATLAS and\nCMS~\\cite{atlasecfa,cmsecfa} have highlighted the complexity of these\nanalyses and the necessity to explore different production mechanisms\nto formulate constraints on the Higgs self-interactions in the\nfuture. This program has already been initiated by feasibility\nanalyses of the $hhj$, $hhjj$ and $t\\bar t hh$ production modes in\nRefs.~\\cite{us,Barr:2014sga,hhjj,tthh,Liu:2014rva}.\n\nDi-Higgs production in association with two jets is a particularly\nimportant channel in this regard since this final state receives\ncontributions from the weak boson fusion (WBF) production mode. The\nphenomenological appeal of the WBF mode is twofold. Firstly, the weak\nboson fusion component of $pp\\to hh jj$ is sensitive to modifications\nof the gauge-Higgs sector~\\cite{hhjj,Brooijmans:2014eja}, which can lead to \nlarge cross-section enhancements. Secondly, the QCD uncertainties for\nthe WBF topologies are known and under theoretical\ncontrol~\\cite{hhjj2,Frederix:2014hta}, such that a search for BSM\nelectroweak-induced deviations is not hampered by QCD\nsystematics. This situation is very different from QCD-induced\nproduction~\\cite{qcd}, and can be attributed to the particular\nphenomenology of WBF-like processes~\\cite{Figy:2003nv,hhjj1}\n\nHowever, an additional source of uncertainty that was neglected until\nrecently~\\cite{hhjj} is the correct inclusion of the gluon fusion\ncontribution to $pp\\to hh jj$ analyses. 
In contrast to single Higgs\nphenomenology, the correct inclusion of massive fermion\nthresholds is crucial to a reliable prediction of QCD-induced $pp \\to\nhh jj$~\\cite{us}.\n\nGiven that the cross sections in WBF $hhjj$ production are very\nsuppressed compared to WBF $hjj$ production (the WBF $hhjj$ cross\nsection is $\\sim 750$ times smaller), we have to rely on the dominant\nhadronic Higgs decay modes to be able to observe this final state.\nThis rules out one of the most crucial single Higgs WBF selection\ntools - the central jet veto~\\cite{dieter}. The observation of\nWBF-induced $pp\\to hhjj$ production is further hampered by the top\nthreshold in the QCD-mediated process. Since the top threshold sets\nthe scale of the di-Higgs subsystem, an analysis that tries to retain\nas many low $p_T$ Higgs bosons as possible leads to a QCD contribution\nthat dominates over the WBF component when minimal WBF-like cut\nrequirements are imposed~\\cite{hhjj}.\n\nIn this paper we extend the discussion of Ref.~\\cite{hhjj} in a number\nof directions. We first perform a detailed comparison of\nEFT-approaches to QCD-mediated $pp\\to hhjj$ against a calculation\nkeeping the full mass dependencies of top and bottom quarks in\nSec.~\\ref{sec:gf}. We compare the QCD-induced $pp\\to hhjj$\nphenomenology to the WBF signature in Sec.~\\ref{sec:wbf} before we\ndiscuss general approaches to isolate the signal from the dominant top\nbackgrounds in a hadron level analysis in Sec.~\\ref{sec:tt}. This sets\nthe stage for a discussion about the prospects to isolate the WBF and\nGF components in Secs.~\\ref{sec:isolgf} and \\ref{sec:isolwbf},\nfollowed by a study on constraining $VV hh$ couplings using the WBF\ninduced signal in Section~\\ref{sec:limitzeta}. 
We focus on collisions\nwith $14$ TeV throughout.\n\n\n\\section{The gluon fusion contribution}\n\\label{sec:gf}\n\\subsection{Finite top mass effects}\n\nIt is well known that effective field theory approximations in the\n$m_t\\to \\infty$ limit cannot be invoked to study di-Higgs final states\nat colliders in a reliable way due to the effects of top-quark\nthreshold~\\cite{uli2,Dawson:2015oha}. Further, the breakdown of the $m_t\\to\n\\infty$ approximation is worsened in the presence of additional jet\nemission~\\cite{us,Maierhofer:2013sha}. Finite $m_t$ effects must\ntherefore be considered for all QCD di-Higgs production channels,\nwhich will be required to set the best limits on the Higgs\nself-coupling or formulate a realistic estimate of the GF contribution\nin a WBF-like selection.\n\n\\begin{table}[!t]\n \\begin{tabular}{lccc}\n \\toprule\n $\\lambda$ \\hspace{1cm} & \\hspace{0.25cm}\t$0 \\cdot \\lambda_\\text{SM}$ [fb] \\hspace{0.25cm} & \\hspace{0.25cm} $1 \\cdot \\lambda_\\text{SM}$ [fb]\\hspace{0.25cm} & \\hspace{0.25cm} $2 \\cdot \\lambda_\\text{SM}$ [fb] \\\\\n \\botrule\n GF\t&\t10.73\t\t\t\t\t & 5.502\t\t\t\t\t & 2.669\t\t \\\\\n WBF &\t4.141\t\t\t\t\t & 2.010\t\t\t\t\t & 0.9648\t\t\\\\\n \\botrule\n \\end{tabular}\n \\caption{Cross section normalisations for the GF and WBF\n samples at 14~TeV, for details see text. The WBF normalisation follows\n from~\\cite{hhjj2} and includes higher order QCD effects.}\n \\label{tab:xsnormalisation}\n\\end{table}\n\nThe computational challenges in QCD-mediated $hhjj$ production are\nsignificant, with the gluon-fusion channels particularly time\nconsuming even when using state-of-the-art techniques. The standard\nmethod of simulating a differential cross section from unweighted\nevents is not feasible in this case, and we instead use a reweighting\ntechnique that is exploited in higher order calculations and\nexperimental analyses~(see e.g. 
\\cite{Aad:2013nca}).\n\nWe generate GF $hhjj$ events by implementing the\nrelevant higher dimensional operators in the $m_t\\to \\infty$ limit,\nobtained by expanding the low-energy effective theory~\\cite{kniehl}\n\\begin{equation}\n \\mathcal{L}_\\text{eff} = -\\frac{1}{4} \\frac{\\alpha_S}{3 \\pi} \n G^a_{\\mu \\nu} G^{a\\, \\mu \\nu} \\log(1 + h\/v)\n \\label{eq:efft}\n\\end{equation}\nin \\textsc{MadEvent} v5.1~\\cite{Alwall:2011uj} using the\n\\textsc{FeynRules\/Ufo}~\\cite{Degrande:2011ua} framework.\\footnote{The\n effective theory implementation can be modified in the sense that\n only one effective vertex insertion is allowed. This gives only a\n mild $\\sim 10\\%$ effect in the tail of the distribution, and is not\n relevant for an order one EFT\/full theory rescaling, see below.}\nThis allows us to sample a weighted set of events that we subsequently\nfeed into our analysis solely depending on their final state\nkinematics. If an event passes the selection requirements of a certain\nsearch region, we correct for the full mass dependence at this stage, using a\nreweighting library based on the \\textsc{GoSam} package~\\cite{gosam}. The reweighting employs exactly the same matrix elements\nused for the event generation, and the trilinear coupling is steered\nthrough a modification of the {\\sc{GoSam}} matrix element,\ni.e. variations of the trilinear coupling are part of the\nreweighting. A selection of Feynman diagrams which contribute to the\ngluon fusion signal is illustrated in Fig.~\\ref{fig:sample_gf}. The\n{\\sc{GoSam}} code used for the reweighting is based on a Feynman\ndiagrammatic approach using {\\sc{QGRAF}}~\\cite{qgraf} and\n{\\sc{FORM}}~\\cite{form} for the diagram generation, and\n{\\sc{Spinney}}~\\cite{spinney}, {\\sc{Haggies}}~\\cite{haggies} and\n{\\sc{FORM}} to write an optimised fortran output. 
The reduction of the\none-loop amplitudes was done using {\\sc{Samurai}}~\\cite{samurai},\nwhich uses a $d$-dimensional integrand level decomposition based on\nunitarity methods~\\cite{unitarity}. The remaining scalar integrals\nhave been evaluated using {\\sc{OneLoop}}~\\cite{oneloop}. Alternative\nreduction techniques can be used employing {\\sc{Ninja}}~\\cite{ninja}\nor {\\sc{Golem95}}~\\cite{golem95}. To validate the reweighting\nprocedure we regenerated the code that has been used in~\\cite{hhjj} \nwith the improvements that became available within {\\sc{GoSam 2.0}},\nin particular improvements in code optimisation and in the reduction\nof the amplitudes. For the reduction we used\n{\\sc{Ninja}}, which employs an improved reduction algorithm based\non a Laurent expansion of the integrand. This leads to substantial\nimprovements in both speed and numerical stability compared to the \nprevious version. We combined the\ncode with a phase space integration provided by\n\\textsc{MadEvent}~\\cite{Maltoni:2002qb}. Further substantial speed-up\nhas been obtained by Monte Carlo sampling over the helicities rather\nthan performing the helicity sum. This enabled us to perform a full\nphase space integration, and we found full agreement within the \nstatistical uncertainties between the result obtained from reweighting\nand the result from the full phase space integration.\n\n\n\n\n\\begin{figure*}[t!]\n \\subfigure[\\label{fig:mjj}]{\\includegraphics[width=0.45\\textwidth]{gf_jjmass.eps}}\n \\hfill\n \\subfigure[\\label{fig:etajj}]{\\includegraphics[width=0.44\\textwidth]{gf_deta12.eps}}\n \\caption{\\label{fig:jets} Invariant mass and pseudo-rapidity\n distributions of the jet system in QCD-mediated $hhjj$\n production. 
We show the effective theory and full theory results for\n three values of the trilinear Higgs coupling, applying only generator-level cuts of $p_{T,j} \\ge 20$ GeV\n and $|\\eta_j| < 5$.}\n\\end{figure*}\n\n\\begin{figure*}[t!]\n \\subfigure[\\label{fig:mhh}]{\\includegraphics[width=0.45\\textwidth]{gf_hhmass.eps}}\n \\hfill\n \\subfigure[\\label{fig:etahj}]{\\includegraphics[width=0.45\\textwidth]{gf_detahiggsj.eps}}\n \\caption{\\label{fig:hjet} Invariant mass and pseudo-rapidity distributions of the leading jet and di-Higgs system in QCD-mediated $hhjj$ production. We show the effective theory and full theory results for\n three values of the trilinear Higgs coupling, applying only generator-level cuts of $p_{T,j} \\ge 20$ GeV\n and $|\\eta_j| < 5$.}\n\\end{figure*}\n\n\\subsection{Phenomenology of QCD-mediated $hhjj$ production}\n\nTop thresholds are particularly prominent in the di-Higgs invariant\nmass distribution, which is thus well suited\nto benchmark the relation of the finite $m_t$ limit to the effective theory of Eq.~\\gl{eq:efft}. Other\nobservables constructed from the six particle final state are also\nrelevant when performing a targeted phenomenological analysis, and we\ndiscuss both these and the phase space dependent parton-level reweighting in detail\nin the following.\n\nIn Figures~\\ref{fig:pt}, \\ref{fig:jets}, and \\ref{fig:hjet} we show a\nselection of $hhjj$ final state observables for inclusive cuts\n$p_{T,j}>20~\\text{GeV}$ and $|\\eta_j|< 5$, no cuts on Higgs bosons are\nimposed. We label Higgs bosons and jets according to their hardness,\ni.e. $p_{T,h1}>p_{T,h2}$ and $p_{T,j1}>p_{T,j2}$. The cross sections\nare given in Tab.~\\ref{tab:xsnormalisation}. 
The inclusive gluon\nfusion cross section is about 2.5 times larger than the WBF cross\nsection, approximately independently of the value of the Higgs trilinear\ncoupling.\n\nAs previously established in~\\cite{uli2,us,hhjj}, the di-Higgs system\nis badly modelled by the effective theory, which under- and overshoots\nthe full theory cross section at low and high momenta,\nrespectively. For $pp\\to hhjj$ this behaviour is qualitatively similar\nto $pp\\to hh(j)$ production: the $m_{hh}$\ndistribution is the crucial observable which parametrises the finite\ntop quark mass effects. The EFT describes low maximum transverse\nHiggs momenta $p_{T,h1}$ reasonably well, as shown in\nFig.~\\ref{fig:pth}. The jet emission, on the other hand, integrates over\na considerable range of $m_{hh}$, and since the ratio of full theory to\neffective theory is smaller than one there, the finite $m_t$ calculation produces\na smaller integrated cross section than the $m_t\\to \\infty$ limit for\nthe jet kinematics.\n\nConsidering just the dijet system in Fig.~\\ref{fig:jets}, we observe\nthat the jet kinematics is not severely impacted by the reweighting\nprocedure upon marginalising over the di-Higgs kinematics. The phase\nspace dependence of the dijet invariant mass, Fig.~\\ref{fig:mjj}, is\nrelatively mild aside from the total rescaling of the inclusive cross\nsections, and the ratio for the pseudo-rapidity distribution of\nthe jets is nearly flat, Fig.~\\ref{fig:etajj}. This is also true for\nthe azimuthal angle difference $\\Delta\\phi_{jj}$. The angular\ndistributions of the leading members of the jet-Higgs system are also\nrelatively mildly impacted by the reweighting,\nFig.~\\ref{fig:etahj}. This agrees with $m_{hh}$ being the\nobservable most sensitive to the top threshold (as in $pp\\to hh(j)$),\nand is also supported by the larger impact of the reweighting on\n$m_{hh}$ in Fig.~\\ref{fig:mhh}. 
A reweighting based on $m_{hh}$ to\n correct for finite top mass effects suggests itself for future\n analyses as a time-saving approach with reasonable accuracy.\n\n\n\\section{The weak boson fusion contribution}\n\\label{sec:wbf}\n\nThe weak boson fusion contribution to $pp\\to hhjj$ has received\nconsiderable attention recently and precise higher-order QCD\ncorrections have been provided\nin~\\cite{hhjj1,hhjj2,Frederix:2014hta}. Due to the sensitivity of the\nWBF contribution to both the trilinear coupling and the quartic $VVhh$\n($V=W,Z,\\gamma$), as shown in the Feynman diagrams in\nFig.~\\ref{fig:sample_wbf}, weak boson fusion to two Higgs bosons can,\nin principle, provide complementary information about BSM physics\nwhich remains uncaptured in $pp\\to hh(j)$ and $pp\\to t\\bar t\nhh$~\\cite{Brooijmans:2014eja}.\n\nWe generate WBF samples with varying $\\lambda$ using \\textsc{MadEvent}\nv4~\\cite{Alwall:2007st} and normalise the cross section to NLO\naccuracy~\\cite{hhjj2}. The WBF $hhjj$ contribution shares the QCD properties\nof WBF $hjj$ production \\cite{Figy:2003nv} which means it shares\nthe distinctive $\\Delta \\eta(j1,j2)$ distribution shown in\nFig.~\\ref{fig:wbfeta}: To produce the heavy di-Higgs pair we probe the\ninitial state partons at large momentum fractions. This together with\na colour-neutral $t$-channel exchange of the electroweak\nbosons~\\cite{gaporig} (see also \\cite{gaporig2}) leads to energetic\nback-to-back jet configurations at large rapidity separation and\nmoderate transverse momenta with a centrally produced Higgs pair. The\nproduction of an additional Higgs boson in comparison to single Higgs\nproduction via WBF leads to a cross section reduction by three orders\nof magnitude (see Tab.~\\ref{tab:xsnormalisation}) in the SM. 
Such a\nsmall inclusive production cross section makes it necessary to\nconsider dominant Higgs decay channels such as $h\\to b \\bar b$ and\n$h\\to \\tau^+\\tau^-$; as a consequence, central jet vetos\n\\cite{dieter} are not available as a means to control the background\n{\\it{and}} the GF contribution in a targeted analysis.\n\n\\begin{figure}[!t]\n \\vspace{1.2cm}\n \\parbox{0.45\\textwidth}{\n \\includegraphics[width=0.22\\textwidth]{vbf_diag1.eps}\n \\includegraphics[width=0.22\\textwidth]{vbf_diag2.eps}\\\\\n \\includegraphics[width=0.22\\textwidth]{vbf_diag3.eps}\n \\includegraphics[width=0.22\\textwidth]{vbf_diag4.eps}\n }\n \\caption{Sample Feynman diagrams contributing to $pp \\to hhjj$ in weak boson\n fusion. \\label{fig:sample_wbf}}\n\\end{figure}\n\nThe gluon fusion contribution is larger than the\nWBF component of $hhjj$ production by a factor of 2.5; however, with increasing invariant\ndi-Higgs mass the WBF contribution is enhanced relative to GF\nproduction as a consequence of the GF suppression above the $2m_t$\nthreshold, as shown in Fig.~\\ref{fig:gfwbfmhh}.\n\n \\begin{figure}[t!]\n\\centering\n\\subfigure[\\label{fig:wbfeta}]{\\includegraphics[width=0.45\\textwidth]{wbf_deta12.eps}} \\\\[0.1cm]\n\\subfigure[\\label{fig:gfwbfmhh}]{\\includegraphics[width=0.45\\textwidth]{mhh_comp_partonlvl.eps}}\n\\caption{The $\\Delta \\eta(j1,j2)$ distribution of the weak boson fusion\n contribution at parton-level (a) and the $m_{hh}$ distribution of\n the weak boson fusion and gluon fusion contributions compared with\n correct cross section normalisation (b), both satisfying generator-level\n cuts of $p_{T,j} \\ge 20$ GeV and $|\\eta_j| < 5$.}\n\\label{fig:detapartonlvl}\n\\end{figure}\n\n\n\n\\begin{table*}[!t]\n\\renewcommand\\arraystretch{1.1}\n \\begin{tabular}{llllr}\n \\toprule\n Cut setup \t\t\t\t\t& Base Selection [fb]\\hspace{0.1cm}\t& GF Selection [fb]\\hspace{0.1cm}& WBF Selection [fb] \\hspace{0.1cm} & Normalisation$*$ [fb] \\\\\n 
\\botrule\n GF\t($\\lambda = 1 \\cdot \\lambda_\\text{SM}$)\t& \\hspace{0.1cm} \\num{0.01396}\t\t& \\hspace{0.1cm} \\num{0.005722} \t\t& \\hspace{0.1cm} \\num{0.0005378} &\t\\num{0.4013}\t\\\\\n GF\t($\\lambda = 0 \\cdot \\lambda_\\text{SM}$)\t& \\hspace{0.1cm} \\num{0.02562} \t& \\hspace{0.1cm} \\num{0.008122} \t\t& \\hspace{0.1cm} \\num{0.0008767} &\t\\num{0.7831}\t\\\\\n GF\t($\\lambda = 2 \\cdot \\lambda_\\text{SM}$)\t& \\hspace{0.1cm} \\num{0.007167} \t& \\hspace{0.1cm} \\num{0.003906} \t\t& \\hspace{0.1cm} \\num{0.0003034} &\t\\num{0.1947}\t\\\\\n WBF\t($\\lambda = 1 \\cdot \\lambda_\\text{SM}$)\t& \\hspace{0.1cm} \\num{0.003292} \t& \\hspace{0.1cm} \\num{0.0004999} \t\t& \\hspace{0.1cm} \\num{0.001485} & \t\\num{0.1466}\t\\\\\n WBF\t($\\lambda = 0 \\cdot \\lambda_\\text{SM}$)\t& \\hspace{0.1cm} \\num{0.007706} \t& \\hspace{0.1cm} \\num{0.0007154} \t\t& \\hspace{0.1cm} \\num{0.002820} &\t\\num{0.3020}\t\\\\\n WBF\t($\\lambda = 2 \\cdot \\lambda_\\text{SM}$)\t& \\hspace{0.1cm} \\num{0.001103} \t& \\hspace{0.1cm} \\num{0.0001815} \t\t& \\hspace{0.1cm} \\num{0.0003912} &\t\\num{0.07037} \\\\\n $t\\bar{t}jj$\t\t\t\t\t& \\hspace{0.1cm} \\num{5.712}\t \t& \\hspace{0.1cm} \\num{0.03390} \t\t& \\hspace{0.1cm} \\num{0.01801} &\t\\num{10130}\t\\\\\n $t\\bar{t}h$\t\t\t\t\t\t& \\hspace{0.1cm} \\num{0.06229} \t& \\hspace{0.1cm} \\num{0.007047} \t\t& \\hspace{0.1cm} \\num{0.00005658} & \t\\num{38.62}\t\\\\\n $Zhjj$\t\t\t\t\t\t& \\hspace{0.1cm} \\num{0.005118} \t& \\hspace{0.1cm} \\num{0.001278} \t\t& \\hspace{0.1cm} \\num{0.0001026} & \t\\num{47.37}\t\\\\\n $ZZjj$\t\t\t\t\t\t& \\hspace{0.1cm} \\num{0.001171} \t& \\hspace{0.1cm} \\num{0.00006659} \t\t& \\hspace{0.1cm} \\num{0.0000007639}\t&\t\\num{225.7}\t\\\\\n $ZWWjj$\t\t\t\t\t\t& \\hspace{0.1cm} \\num{0.00001888} & \\hspace{0.1cm} \\num{0.000005461} \t\t& \\hspace{0.1cm} \\num{0.0000002039}\t&\t\\num{0.5368} \\\\\n total background\t\t\t\t\t& \\hspace{0.1cm} \\num{5.781} \t& \\hspace{0.1cm} \\num{0.04230} \t\t& 
\\hspace{0.1cm} \\num{0.01817}\t&\t-\t\\\\\n \\botrule \n $S\/B$\t($\\lambda = 1 \\cdot \\lambda_\\text{SM}$) \t\t\t& \\hspace{0.3cm} 1\/335.1 \t& \\hspace{0.3cm} 1\/6.799\t& \\hspace{0.3cm} 1\/8.983 \\\\\n $S\/B$\tGF$^\\dagger$\t ($\\lambda = 1 \\cdot \\lambda_\\text{SM}$)\t& \\hspace{0.3cm} 1\/414.3 \t& \\hspace{0.3cm} 1\/7.480 \t& \\hspace{0.3cm} 1\/36.55 \\\\\n $S\/B$\tWBF$^\\dagger$\t($\\lambda = 1 \\cdot \\lambda_\\text{SM}$)\t\t& \\hspace{0.3cm} 1\/1760 \t& \\hspace{0.3cm} 1\/96.06 \t& \\hspace{0.3cm} 1\/12.60 \\\\\n \\botrule\n $S\/\\sqrt{\\text{B}}$ (3 ab$^{-1}$, $\\lambda = 1 \\cdot \\lambda_\\text{SM}$)\t\t& \\hspace{0.3cm} 0.3930 & \\hspace{0.3cm} 1.657\t& \\hspace{0.3cm} 0.8219 \\\\\n \\botrule\n $*$ branchings included in normalisation \\\\\n $^\\dagger$ considering only this as signal\n \\end{tabular}\n \\caption{Cross sections for the two sources of signal, and backgrounds, after the various selections described in the text are applied, together with various measures of significance in the bottom four rows.}\n \\label{tab:xstable}\n\\end{table*}\n\nSince we cannot rely on vetoing hadronic activity in the central part\nof the detector, a potential discrimination of GF from WBF needs to be\nbuilt on the following strategy, which we will investigate in\nSec.~\\ref{sec:tt}:\n\n\\begin{itemize}\n \\item To isolate the di-Higgs (WBF+GF) signal we can exploit the relative\n hardness of the di-Higgs pair which peaks around $\\sim 2m_t$. Such\n hard events are less likely to be produced by (ir)reducible\n backgrounds.\n\n \\item Focussing on large $m_{hh}$ we can enhance WBF over GF by\n stringent cuts on the jet rapidity separation. This will also\n imply a significant decrease of QCD-dominated backgrounds.\n\n \\item By explicitly allowing central jet activity, we can exploit\n the colour correlation differences in WBF vs GF to further purify\n our selection. 
Since colour flow is tantamount to energy flow in the\n detector, event shapes are particularly well-suited observables\n for unravelling the colour correlations in the final state once\n the reconstructed di-Higgs pair has been removed\\footnote{A\n detailed discussion of event shapes at hadron colliders can be\n found in~\\cite{Banfi:2010xy}.}. This strategy was first proposed\n for single Higgs\n analyses in \\cite{Englert:2012ct} (see also\n \\cite{plehn}).\n\\end{itemize}\n\n\n\n\n\\section{Taming the background}\n\\label{sec:tt}\nFor our hadron-level analysis we shower our signal samples with\n\\textsc{Herwig++}~\\cite{Bahr:2008pv} and generate backgrounds as\nfollows: $t\\bar{t}jj$, $t\\bar{t}h$, $Zhjj$, and $ZZjj$ with\n\\textsc{Sherpa}~\\cite{Gleisberg:2008ta}, and $ZWWjj$ with\n\\textsc{MadEvent} v5. We find the dominant backgrounds to be\n$t\\bar t jj$ and $t\\bar t h$ production, for which next-to-leading\norder results are available~\\cite{ttjjnlo,tthnlo}\nand we use inclusive $K$ factors $K_{t\\bar t jj}\\simeq 1$ and $K_{t\\bar t\n h}\\simeq 1.5$ to estimate the higher order contributions\nto these backgrounds. 
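The inclusive $K$-factor rescaling just described is a simple multiplicative correction of the leading-order rates; as a minimal sketch (the leading-order cross-section values below are placeholders for illustration, not our simulation output):

```python
# Inclusive K factors quoted in the text for the dominant backgrounds.
K_FACTORS = {"ttjj": 1.0, "tth": 1.5}

def nlo_estimate(process, sigma_lo_fb):
    """Estimate the higher-order cross section as sigma_NLO ~ K * sigma_LO."""
    return K_FACTORS[process] * sigma_lo_fb

# Placeholder leading-order cross sections in fb (illustration only):
sigma_nlo_tth  = nlo_estimate("tth", 24.0)    # scaled up by K = 1.5
sigma_nlo_ttjj = nlo_estimate("ttjj", 900.0)  # unchanged, K ~ 1
```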
Higgs branching ratios are set to the values\nagreed upon by the Higgs Cross Section Working\nGroup~\\cite{Heinemeyer:2013tqa}.\n\n\\begin{figure*}[t!]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{hadronlvl_deta12.eps}\n\\hfill\n\\includegraphics[width=0.45\\textwidth]{hadronlvl_hhmass.eps}\n\\caption{Shape comparison of $\\Delta \\eta(j1,j2)$ and $m_{hh}$\n distributions for our two sources of signal (GF and WBF), the dominant background\n $t\\bar{t}jj$ and the rest of the backgrounds (stacked and scaled by\n relative cross sections), after the Base Selection of\n Section~\\ref{sec:tt} has been applied.}\n\\label{fig:basehadronlvl}\n\\end{figure*}\n\n\n\nWe begin the hadron-level analysis implemented in\nRivet~\\cite{Buckley:2010ar} by recreating a base selection similar\nto~\\cite{hhjj}:\\footnote{Our analysis has been validated with two\n independent implementations.}\n\\begin{enumerate}[1.)]\n\\item We require two tau leptons using a di-tau trigger based on\n staggered transverse momentum selection cuts $p_T \\ge\n 29,20~\\text{GeV}$ in $|\\eta_\\tau|<2.5$ and assume a flat tau tagging\n efficiency of 70\\% with no fakes.\n\n Jets are constructed by clustering $R = 0.4$ anti-$k_T$ jets using\n \\textsc{FastJet}~\\cite{Cacciari:2011ma} with $p_{T,j} \\ge 25$ GeV\n and $| \\eta_j | \\le 4.5$.\n\\item The two leading jets are $b$-tagged with an acceptance of 70\\%\n and fake rate of 1\\% \\cite{btagging} in the central part of the\n detector $|\\eta_j|<2.5$. We remove events if either of the two\n leading jets overlaps with a tau. 
Any additional jets which do not\n overlap with a tau are considered as potential ``tagging jets'', of\n which we require at least two.\n\\item As a final step of this base selection we require the $b$ jet\n and tau pairs to reproduce the Higgs mass of 125 GeV within $\\pm 15$\n and $\\pm 25~\\text{GeV}$ respectively.\\footnote{A high mass\n resolution is a crucial cornerstone of any successful di-Higgs\n analysis to assure a minimum pollution of $Z$ boson decay\n backgrounds~\\cite{us2}.} \n\\end{enumerate}\nThe signal and background cross sections after these cuts\nare presented in the Base Selection column of\nTable~\\ref{tab:xstable}. We find that the background contribution of\n$t\\bar{t}jj$ dominates, with $t\\bar{t}h$ also providing a larger-than-signal\nbackground, resulting in $S\/B \\sim 1\/300$ and making a study based only on these \nselections extremely challenging. Since we only have $\\sim 40$ expected \ngluon fusion and $\\sim 10$ expected weak boson fusion events at 3 ab$^{-1}$ \nluminosity, additional selections must also be careful to retain enough signal\ncross section to allow statistically meaningful statements to be made\nwith a finite amount of data.\n\nShape comparisons for the jet rapidity separation and di-Higgs invariant mass\ndistributions, as motivated in the previous section, are shown in\nFig.~\\ref{fig:basehadronlvl}. Indeed, as expected, cutting on the\nangular distance of the jets will serve both to purify the sample towards a\nWBF-dominated selection and to reduce the background rate. The dominant\nbackgrounds are unlikely to produce a large invariant mass\n$m_{hh}$. 
However, the WBF contribution, due to the absence of the $2m_t$\nthreshold, peaks at a considerably lower invariant mass. A reasonably\nstrong cut on $m_{hh}$, which is required to observe the $hhjj$ signal\nat the given low signal yield even with 3 ab$^{-1}$ of luminosity,\ntherefore leads to a significant decrease of the WBF contribution.\n\n\\begin{figure*}[t!]\n \\centering\n \\subfigure[\\label{fig:nosystematics}]{\\includegraphics[width=0.45\\textwidth]{zetalimits_nosystematics.eps}}\n \\hfill\n \\subfigure[\\label{fig:20systematics}]{\\includegraphics[width=0.45\\textwidth]{zetalimits_20systematics.eps}}\n \\caption{Expected limits on the gauge-Higgs quartic couplings $\\zeta=g_{VV hh}\/g_{VV hh}^{\\rm{SM}}$ under the assumption of no systematic \n uncertainties~(a) and 20\\% systematic uncertainties~(b).}\n \\label{fig:zetalimits}\n\\end{figure*}\n\n\\begin{figure*}[t!]\n \\centering\n \\subfigure[\\label{fig:gfjetti32}]{\\includegraphics[width=0.45\\textwidth]{hadronlvl_GFnjettiness32.eps}}\n \\hfill\n \\subfigure[\\label{fig:wbfjetti32}]{\\includegraphics[width=0.45\\textwidth]{hadronlvl_WBFnjettiness32.eps}}\\\\[0.2cm]\n \\subfigure[\\label{fig:gfthrustmaj}]{\\includegraphics[width=0.45\\textwidth]{hadronlvl_GFthrustmajor.eps}}\n \\hfill\n \\subfigure[\\label{fig:wbfthrustmaj}]{\\includegraphics[width=0.45\\textwidth]{hadronlvl_WBFthrustmajor.eps}}\n \\caption{Shape comparisons of $N$-jettiness and thrust calculated in the\n major direction after the gluon fusion selection of Sec.~\\ref{sec:isolgf} (a,c)\n and WBF Selection of Sec.~\\ref{sec:isolwbf} (b,d) have been applied.}\n \\label{fig:evtshape}\n\\end{figure*}\n\n\n\n\\subsection{Prospects to isolate gluon fusion}\n\\label{sec:isolgf}\nWe can extend the analysis outlined in Sec.~\\ref{sec:tt} with the aim\nto purify the selection towards the GF component.\\footnote{Following\n the analysis of \\cite{Bredenstein:2008tm}, we can expect negligible\n interference between WBF and GF, which allows us to make this\n 
distinction.} We make use of the hard Higgs candidates to greatly\nreduce the backgrounds by requiring $m_{hh} \\ge 500$ GeV, and\nadditionally require $\\Delta \\eta(j1,j2) \\le 5$ to minimise the weak\nboson fusion contribution. The signal and background cross sections\nafter these cuts are applied are presented in the `GF Selection'\ncolumn of Table~\\ref{tab:xstable}.\n\nThe total background is reduced by a factor of $\\sim 100$, while the\ngluon fusion contribution is only reduced by a factor of $\\sim 2.5$,\nwhich allows for an encouraging $S\/\\sqrt{\\text{B}} \\sim 1.7$ with\n3 ab$^{-1}$ of data. The weak boson fusion contribution is \nalso suppressed compared to GF, which allows for a clean probe of the \nphysics accessible in the gluon fusion contribution.\n\n\\subsection{Prospects to isolate weak boson fusion}\n\\label{sec:isolwbf}\nSimilarly, we can extend the analysis towards isolating the WBF\ncomponent. Since it has slightly softer Higgs candidates, we require\n$m_{hh} \\ge 400$ GeV and $\\Delta \\eta(j1,j2) \\ge 5$ to reduce both the\ngluon fusion and background contributions. The signal and background\ncross sections after these cuts are applied are presented in the `WBF\nSelection' column of Table~\\ref{tab:xstable}.\n\nThe total background is reduced by a factor of $\\sim 300$, while three\ntimes more of the weak boson fusion contribution is retained compared\nto the GF selection, resulting in $S\/\\sqrt{\\text{B}} \\sim 0.8$ with 3\nab$^{-1}$ of data due to the large reduction in the gluon fusion\ncontribution. 
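The quoted significances follow directly from the cross sections in Table~\\ref{tab:xstable} by simple event counting; as a short sketch (numbers copied from the GF Selection column for $\\lambda = \\lambda_\\text{SM}$):

```python
import math

LUMI_FB = 3000.0  # integrated luminosity, 3 ab^-1 expressed in fb^-1

# GF Selection column of the cross-section table (fb), lambda = lambda_SM:
sigma_gf  = 0.005722   # gluon fusion signal
sigma_wbf = 0.0004999  # weak boson fusion signal
sigma_bkg = 0.04230    # total background

S = (sigma_gf + sigma_wbf) * LUMI_FB  # expected signal events (~19)
B = sigma_bkg * LUMI_FB               # expected background events (~127)
significance = S / math.sqrt(B)       # ~1.7, as quoted in the text
```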
Even so, the WBF selection is composed of GF and WBF in a ratio of\nroughly one to three, which means that measurements of physics that only\nenters the weak boson fusion contribution will need to take this gluon fusion\n``pollution'' into account.\n\n\\subsection{Constraining the quartic $VV hh$ contribution}\n\\label{sec:limitzeta}\nAs mentioned in Section~\\ref{sec:wbf}, there is a contribution from\nquartic $VV hh$ vertices to the WBF induced signal, and modifications\nof the corresponding $g_{VV hh}$ couplings away from their SM values\nusing the Higgs Cross Section Working Group $\\kappa$ framework\n\\cite{Heinemeyer:2013tqa} will greatly enhance the signal cross section. This\nallows us to constrain $\\zeta$ defined by $g_{VV hh} = \\zeta \\times\ng_{VV hh}^\\text{SM}$. To achieve this we have generated events with\nvarying $\\zeta$ using \\textsc{MadEvent} v5 and applied the WBF\nselections described in Section~\\ref{sec:isolwbf} to estimate the\nenhancement of the signal. This enhancement is compared to expected cross section\nlimits on the signal with 3 ab$^{-1}$ of data in the WBF selection,\nunder the assumptions of no systematic uncertainties and of 20\\% total\nsystematic uncertainties. The results are presented in\nFigure~\\ref{fig:zetalimits}. We find that in the more realistic\nscenario of 20\\% systematic uncertainties the expected constraint on\nthe $g_{VV hh}$ couplings is $0.55 < \\zeta < 1.65$ at 95\\% confidence\nlevel. A measurement of $pp\\to hhjj$ is therefore crucial to constrain\nnew physics which enters predominantly through enhancements to $g_{VV\n hh}$.\n\n\\subsection{Event shapes of the tagging jets system}\nThe analysis strategies outlined so far have mainly relied on\nexploiting correlations in the di-Higgs system, with only $\\Delta\n\\eta(j1,j2)$ carrying information about the tagging jets. 
Following similar applications in the context of single Higgs\nproduction \\cite{Englert:2012ct}, we investigate a range of event\nshapes in the tagging jets system in the following, which could offer\nadditional discriminating power through capturing colour correlations\nin the different signal contributions beyond angular\ndependencies. More specifically, we will focus on $N$-jettiness\n\\cite{jettiness,Thaler:2010tr} and thrust major, which provided \nthe best results.\n\nWe calculate $N$-jettiness by minimising\n\\begin{equation}\n \\tau_N = C \\sum_k p_{T,k} \\min(\\Delta R_{k,1},\\dots, \\Delta R_{k,N})\n\\end{equation}\nwhere $C$ is a normalisation which cancels when taking the ratio of\ntwo $\\tau$s, the sum is taken over all visible momenta which do not\nbelong to one of the identified Higgs candidates within $|\\eta| < 5$,\nand $\\Delta R_{k,n}$ is the distance in the $\\eta - \\phi$ plane \nbetween the $k$-th momentum and the $n$-th reference vector. \n$\\tau_{3\/2}$ is then explicitly given by $\\tau_{3}\/\\tau_{2}$.\n\nThrust major is defined by\n\\begin{subequations}\n \\begin{equation}\n T_{\\text{maj}}=\\max_{{\\vec{n}}\\cdot{\\vec{n}}_T=0} \\frac{\\sum_k\n |{\\vec{p}}_k\\cdot {\\vec{n}}|}{\\sum_k |{\\vec{p}}_k|}\n \\end{equation}\n where ${\\vec{n}}_T$ is the normalised thrust vector\n \\begin{equation}\n {\\vec{n}}_T=\\mathop{\\mathrm{arg\\,max}}_{\\vec{n}}\\frac{\\sum_k\n |{\\vec{p}}_k\\cdot {\\vec{n}}|}{\\sum_k |{\\vec{p}}_k|}\\,.\n \\end{equation}\n\\end{subequations}\nAgain the sums run over all visible momenta which do not belong\nto one of the identified Higgs candidates within $|\\eta| < 5$.\n\nWe find that $\\tau_{3\/2}$ and $T_\\text{maj}$ show promise for improving the\nWBF selection, but the signal cross section is already too low for us\nto be able to make meaningful use of this insight. The $\\tau_{3\/2}$\nand $T_\\text{maj}$ distributions after the GF and WBF selections have\nbeen applied are presented in Fig.~\\ref{fig:evtshape}. 
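For concreteness, the two event shapes can be sketched as follows. This is a simplified, self-contained version, not our analysis code: $\\tau_N$ is evaluated for a fixed set of reference axes rather than minimised over them (the normalisation $C$ is dropped, as it cancels in ratios), and thrust major is computed in the two-dimensional transverse plane, where the direction perpendicular to the thrust axis is unique; in three dimensions one still maximises within the plane perpendicular to $\\vec{n}_T$:

```python
import math

def tau_N(particles, axes):
    """N-jettiness up to the normalisation C (cancels in tau_3/tau_2):
    sum_k pT_k * min_n DeltaR(k, axis_n).
    particles: list of (pt, eta, phi); axes: list of (eta, phi)."""
    total = 0.0
    for pt, eta, phi in particles:
        dr_min = min(
            math.hypot(eta - aeta,
                       math.pi - abs(abs(phi - aphi) - math.pi))
            for aeta, aphi in axes
        )
        total += pt * dr_min
    return total

def thrust_major_2d(momenta, n_steps=720):
    """Transverse-plane sketch of thrust major: locate the thrust axis by
    a scan over directions, then project onto the perpendicular direction.
    momenta: list of (px, py)."""
    norm = sum(math.hypot(px, py) for px, py in momenta)
    best_sum, best_angle = -1.0, 0.0
    for i in range(n_steps):
        a = math.pi * i / n_steps
        s = sum(abs(px * math.cos(a) + py * math.sin(a)) for px, py in momenta)
        if s > best_sum:
            best_sum, best_angle = s, a
    a_perp = best_angle + math.pi / 2.0
    return sum(abs(px * math.cos(a_perp) + py * math.sin(a_perp))
               for px, py in momenta) / norm
```

Pencil-like, back-to-back configurations (WBF-like colour flow) give small $T_\\text{maj}$, while additional central radiation (GF-like) pushes it up, which is what the cut discussed below exploits.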
Cutting, e.g.,\non $T_{\\text{maj}}<0.05$, the gluon fusion contribution is reduced by\n80\\%, while the WBF contribution is reduced by only 55\\%, amounting to\na total of 2 expected WBF and 0.3 expected GF events, with backgrounds \nvery strongly suppressed. This means \nthat WBF can in principle be observed at a small rate that can be used \nto set constraints on new physics in an almost GF-free selection with \ngreatly reduced backgrounds.\n\nThe event shape distributions can also be used to greatly reduce the\nbackground in the GF selection, Fig.~\\ref{fig:gfthrustmaj}. It should\nbe noted that these improvements of GF vs WBF vs background ultimately\ndepend on underlying event and pile-up conditions and have to be taken\nwith a grain of salt at this early stage of Run 2. However, the clear\nseparation that can be achieved with these observables indicates that\nan analysis employing MVA techniques could, at least in theory,\nsignificantly improve the results presented here. These techniques may\nalso prove useful at a 100~TeV collider where the di-Higgs production\ncross section is substantially higher~\\cite{Barr:2014sga}.\n\n\\section{Summary and Conclusions}\nWith single Higgs production established at the Large\nHadron Collider, new analysis strategies need to be explored to\nfurther constrain the presence of new physics beyond the Standard\nModel. Higgs pair production is pivotal in this regard as constraints\nfrom multi-Higgs production contain complementary information, in\nparticular with respect to the Higgs boson's self-interaction. Cross\nsections for di-Higgs production are generically small at the LHC,\nwhich highlights the necessity to explore viable channels other than\n$pp\\to hh$ to enhance sensitivity in a combined fit at high\nluminosity. To this end, we have investigated $pp\\to hh jj$ production\nin detail in this paper. 
Keeping the full top and bottom mass\ndependencies, we find that LHC searches for $pp\\to hhjj$ are sensitive\nto production in the SM and beyond. The gluon fusion contribution\nremains important at high invariant di-Higgs masses where the dominant\nbackgrounds can be suppressed to facilitate a reasonable signal vs\nbackground discrimination. Unfortunately, the gluon fusion\ncontribution remains large even for selections that enhance the weak\nboson fusion fraction of $pp\\to hhjj$ events. This ``pollution'' is\nimportant when such selections are employed to set constraints on new\nphysics effects that enter in the WBF contribution exclusively. Large\nnew physics effects in the WBF contribution can still be constrained,\nwhich we have illustrated through an investigation of the constraints\nthat can be set on deviations of the quartic $VV hh$ couplings from\ntheir SM values with the HL-LHC, demonstrating that a measurement of\n$pp\\to hhjj$ will provide a powerful probe of these. Employing\nobservables which are intrinsically sensitive to the different colour\ncorrelations of WBF compared to GF, the discrimination between GF, WBF,\nand background can be further improved. However, the signal cross\nsection is typically already too small to use such a strategy to\nconstrain the presence of new physics if those effects are only a\nsmall deviation around the SM. If new physics effects are sizable,\nsuch an approach will remain a well-adapted strategy to minimise GF\ntowards a pure WBF selection.\n\n\n\\vskip 1\\baselineskip\n\\noindent {\\emph{Acknowledgements.}} CE and MS are grateful to the\nMainz Institute for Theoretical Physics (MITP) for its hospitality and\nits partial support during the workshop ``Higgs Pair Production at\nColliders'' where part of this work was completed. CE is supported in\nparts by the Institute for Particle Physics Phenomenology\nAssociateship programme. 
KN thanks the University of Glasgow College\nof Science \\& Engineering for a PhD scholarship. This research was\nsupported in part by the European Commission through the 'HiggsTools'\nInitial Training Network PITN-GA-2012-316704.\n\n\\section{Introduction}\n\nThe study of B physics is essential to determine the flavor\nstructure of the Standard Model, through knowledge of the \nCabibbo-Kobayashi-Maskawa (CKM) matrix describing quark mixing \nand CP violation, which may be associated with the lack of \nsymmetry between matter and anti-matter in the Universe. \nIn fact, since the amount of baryons in the Universe predicted using \nthe CKM mechanism is several orders of magnitude smaller\nthan what is observed by astronomers, extensions of the \nStandard Model propose additional sources of CP violation, \nwhich must be tested against Standard-Model predictions.\nB mesons provide the ideal environment for such tests.\\cite{Antonelli:2009ws}\nIn particular, high-precision theoretical inputs are needed\nfor hadronic matrix elements, which may be computed starting\nfrom the gauge theory itself using numerical simulations of lattice QCD.\nAt present, however, it is not yet feasible to perform simulations \non lattices that can simultaneously represent the two relevant scales \nof B physics:\nthe low energy scale $\\Lambda_{\\rm QCD}$, requiring\nlarge physical lattice size, and the high energy scale of\nthe b-quark mass $m_b$, requiring very small lattice spacing $a$.\nAn approximate framework is therefore needed, but one should strive\nto achieve sufficiently precise results, otherwise the task of\noverconstraining the parameters of the Standard Model is compromised.\n\nA promising such framework is to consider (lattice) heavy-quark effective \ntheory (HQET), which allows for an elegant theoretical treatment, with \nthe possibility of fully nonperturbative 
\nrenormalization.\\cite{Heitger:2003nj,Sommer:2006sj}\nThe approach is briefly described as follows.\nHQET provides a valid low-momentum description for systems with one\nheavy quark, with manifest heavy-quark symmetry in the limit \n$m_b \\to \\infty$. The heavy-quark flavor \nand spin symmetries are broken at finite values of $m_b$\nrespectively by kinetic and spin terms, with first-order corrections \nto the static Lagrangian parametrized by $\\omega_{\\rm kin}$ and \n$\\omega_{\\rm spin}$\n\\begin{equation}\n{\\cal L}^{\\rm HQET}\n\\;=\\; \\overline{\\psi}_{\\!h}(x)\\,D_0\\,\\psi_h(x)\n\\,-\\, \\omega_{\\rm kin}\\, {\\cal O}_{\\rm kin} \n\\,-\\, \\omega_{\\rm spin}\\,{\\cal O}_{\\rm spin}\\,,\n\\end{equation}\nwhere \n\\begin{equation}\n{\\cal O}_{\\rm kin}\\;=\\; \\overline{\\psi}_{\\!h}(x)\\,{\\bf D}^2\\,\\psi_h(x)\\,,\n\\quad\\quad\n{\\cal O}_{\\rm spin}\\;=\\; \n\\overline{\\psi}_{\\!h}(x)\\,{\\bf \\sigma}\\cdot{\\bf B}\\,\\psi_h(x)\\,.\n\\end{equation}\nThese ${\\rm O}(1\/m_b)$ corrections are incorporated by an expansion \nof the statistical weight in $1\/m_b$\nsuch that ${\\cal O}_{\\rm kin}$, ${\\cal O}_{\\rm spin}$ are\ntreated as insertions into static correlation functions.\nThis guarantees the existence of a continuum limit, with results that\nare independent of the regularization, provided that the renormalization\nbe done nonperturbatively. 
\n\nAs a consequence, expansions for masses and decay constants \nare given respectively by\n\\begin{equation}\nm_B \\;=\\; m_{\\rm bare} \\,+\\,E^{\\rm stat}\n\\,+\\, \\omega_{\\rm kin} \\, E^{\\rm kin} \\,+\\,\n\\omega_{\\rm spin} \\, E^{\\rm spin}\n\\end{equation}\nand\n\\begin{equation}\nf_B\\,\\sqrt{\\frac{m_B}{2}} \\;=\\; Z_A^{\\rm HQET}\\,p^{\\rm stat}\\,\n(1\\,+\\, c_A^{\\rm HQET} \\, p^{\\delta A}\n\\,+\\, \\omega_{\\rm kin} \\, p^{\\rm kin} \\,+\\,\n\\omega_{\\rm spin} \\, p^{\\rm spin}) \\,,\n\\end{equation}\nwhere the parameters $m_{\\rm bare}$ and $Z_A^{\\rm HQET}$ are written as\nsums of a static and an ${\\rm O}(1\/m_b)$ term (denoted respectively with the \nsuperscripts ``${\\rm stat}$'' and ``$1\/m_b$'' below), and \n$c_A^{\\rm HQET}$ is of order $1\/m_b$.\nBare energies ($E^{\\rm stat}$, etc.) and matrix elements \n($p^{\\rm stat}$, etc.) are computed in the numerical \nsimulation.\n\nThe divergences (with inverse powers of $a$) in the above parameters \nare cancelled through the nonperturbative renormalization, \nwhich is based on a matching of HQET parameters to QCD on lattices of \nsmall physical volume\n--- where fine lattice spacings can be considered --- and extrapolation \nto a large volume by the step-scaling method.\nSuch an analysis has been recently completed for the quenched \ncase.\\cite{Blossier:2010jk}\nIn particular, there are nonperturbative (quenched) \ndeterminations of the static coefficients $m_{\\rm bare}^{\\rm stat}$\nand $Z_{A}^{\\rm stat}$ for HYP1 and HYP2 static-quark \nactions\\cite{DellaMorte:2003mn} at the physical b-quark mass, and similarly\nfor the ${\\rm O}(1\/m_b)$ parameters\n$\\omega_{\\rm kin}$, $\\omega_{\\rm spin}$,\n$\\,m_{\\rm bare}^{1\/m_b}$, $\\,Z_{A}^{1\/m_b}$ and $c_A^{\\rm HQET}$.\n\nThe newly determined HQET parameters are very precise (with errors of \na couple of a percent in the static case) and show the expected behavior \nwith $a$. 
They are used in our calculations reported here, to perform\nthe nonperturbative renormalization of the (bare) observables computed\nin the simulation. Of course, in order to maintain high precision, \nthese bare quantities also have to be accurately determined. This is accomplished\nby an efficient use of the generalized eigenvalue problem\n(GEVP) for extracting energy levels $E_n$ and matrix elements, as\ndescribed below.\n\n\nA significant source of systematic errors in the determination of\nenergy levels in lattice simulations is the contamination from\nexcited states in the time correlators \n\\begin{equation}\nC(t) \\;=\\; \\,\\langle O(t)\\,O(0)\\rangle\\, \\;=\\;\n\\sum_{n=1}^{\\infty}\\, |\\langle n|\\,\\hat{O}\\,|0 \\rangle |^2\\;e^{-E_n t}\n\\end{equation}\nof fields $\\,O(t)$ with the quantum numbers of a given bound state. \n\nInstead of starting from simple local fields $O$\nand getting the (ground-state) energy from an effective-mass plateau in \n$C(t)$ as defined above, it is then advantageous to consider all-to-all \npropagators\\cite{Foley:2005ac} and to solve, instead, the GEVP \n\\begin{equation}\nC(t)\\,v_n(t,t_0)\\;=\\;\n\\lambda_n(t,t_0)\\,C(t_0)\\,v_n(t,t_0)\\,,\n\\end{equation}\nwhere $t>t_0$ and $C(t)$ is now a matrix of correlators, given by\n\\begin{equation}\nC_{ij}(t)\\,\\;=\\;\\,\\langle O_i(t) O_j(0)\\rangle\n\\,\\;=\\;\\, \\sum_{n=1}^\\infty {\\rm e}^{-E_n t}\\, \\Psi_{ni}\\Psi_{nj}\\,,\\quad\ni, j = 1,\\ldots, N\\,.\n\\end{equation}\nThe chosen interpolators $O_i$ are taken (hopefully) \nlinearly independent, e.g.\\ they may be built from the \nsmeared quark fields using $N$ different smearing levels. 
\nThe matrix elements $\\Psi_{ni}$ are defined by\n\\begin{equation}\n\\Psi_{ni} \\;\\equiv\\; (\\Psi_n)_i \\;=\\; \\langle n|\\hat O_i|0\\rangle\\;,\n\\quad\\; \\langle m|n\\rangle \\,=\\,\\delta_{mn}\\,.\n\\end{equation}\n\nOne thus computes $C_{ij}$ for the interpolator basis $O_i$ from \nthe numerical simulation, then gets effective energy levels $E_n^{\\rm eff}$ \nand estimates for the matrix elements $\\Psi_{ni}$ from the\nsolution $\\lambda_n(t,t_0)$ of the GEVP at large $t$. For the energies\n\\begin{equation}\nE_n^{\\rm eff}(t,t_0)\\;\\equiv\\; \n{1\\over a} \\, \\log{\\lambda_n(t,t_0) \\over \\lambda_n(t+a,t_0)}\n\\end{equation}\nit is shown\\cite{Luscher:1990ck}\nthat $E_n^{\\rm eff}(t,t_0)$ converges exponentially as $t\\to\\infty$ \n(and fixed $t_0$) to the true energy $E_n$. However, since the\nexponential falloff of higher contributions may be slow,\nit is also essential to study the convergence as a function of\n$t_0$ in order to achieve the required efficiency for the method.\nThis has been recently done,\\cite{Blossier:2009kd} by explicit application \nof (ordinary) perturbation theory to a hypothetical truncated problem \nwhere only $N$ levels contribute. The solution in this case is exactly \ngiven by the true energies, and corrections due to the higher states \nare treated perturbatively. 
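The GEVP and the effective energies defined above can be illustrated with a small numerical example (synthetic correlator data, not simulation output). With as many interpolators as contributing states, $N=2$, the truncated problem analysed perturbatively in the text is exact, so the effective energies reproduce the input energies for any $t > t_0$ (lattice units, $a=1$):

```python
import numpy as np

# Synthetic two-state system: C_ij(t) = sum_n exp(-E_n t) Psi_ni Psi_nj
E_true = np.array([0.5, 0.9])            # energies in lattice units (a = 1)
Psi = np.array([[1.0, 0.4],              # Psi[n, i] = <n|O_i|0>
                [0.3, 1.0]])

def C(t):
    return sum(np.exp(-E_true[n] * t) * np.outer(Psi[n], Psi[n])
               for n in range(len(E_true)))

def gevp_eigenvalues(t, t0):
    """Solve C(t) v = lambda C(t0) v; eigenvalues sorted, ground state first."""
    lam = np.linalg.eigvals(np.linalg.solve(C(t0), C(t)))
    return np.sort(lam.real)[::-1]

def effective_energy(n, t, t0):
    """E_n^eff(t, t0) = log( lambda_n(t, t0) / lambda_n(t+1, t0) ), a = 1."""
    return np.log(gevp_eigenvalues(t, t0)[n] / gevp_eigenvalues(t + 1, t0)[n])

e0 = effective_energy(0, t=4, t0=2)   # ground state
e1 = effective_energy(1, t=4, t0=2)   # first excited state
```

In a realistic simulation more than $N$ states contribute, and the plateaus are approached with the exponential corrections discussed in the text.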
We get\n\\newcommand{\\hat {\\cal A}_n^{\\rm eff}}{\\hat {\\cal A}_n^{\\rm eff}}\n\\newcommand{\\hat {\\cal Q}_n^{\\rm eff}}{\\hat {\\cal Q}_n^{\\rm eff}}\n\\newcommand{\\varepsilon_{n}}{\\varepsilon_{n}}\n\\newcommand{\\pi_{n}}{\\pi_{n}}\n\\begin{equation}\nE_n^{\\rm eff}(t,t_0) \\;=\\; \nE_n \\,+\\, {\\varepsilon_{n}(t,t_0)}\\,\n\\end{equation}\nfor the energies and\n\\begin{equation}\n{\\rm e}^{-\\hat H t}(\\hat {\\cal Q}_n^{\\rm eff}(t,t_0))^\\dagger|0\\rangle \\;=\\;\n|n\\rangle \\,+\\, \\sum_{n'=1}^\\infty \\pi_{nn'}(t,t_0)\n\\, |n'\\rangle\n\\end{equation}\nfor the eigenstates of the Hamiltonian, which may be estimated through\n\\begin{eqnarray}\n \\hat {\\cal Q}_n^{\\rm eff}(t,t_0) &=& R_n \\,(\\hat O\\,,\\,v_n(t,t_0)\\,) \\,, \\\\[2mm]\nR_n &=& {\\left(v_n(t,t_0)\\,,\\, C(t)\\,v_n(t,t_0)\\right)}^{-1\/2}\n\\; \\left[{\\lambda_n(t_0+a,t_0) \\over \\lambda_n(t_0+2a,t_0)}\\right]^{t\/2}\\,.\n\\end{eqnarray}\n\nIn our analysis we see that, due to cancellations of $t$-independent \nterms in the effective energy, the first-order corrections in \n${\\varepsilon_{n}(t,t_0)}$ are independent of $t_0$ and very strongly \nsuppressed at large $t$. We identify two regimes:\n1) for $t_0 \\,<\\, t\/2$, the 2nd-order corrections dominate and\ntheir exponential suppression is given by the smallest energy gap \n$\\,|E_m-E_n|\\equiv \\Delta E_{m,n}\\,$ between level $n$ and its neighboring \nlevels $m$; and 2) for $t_0 \\,\\geq\\, t\/2$,\nthe 1st-order corrections dominate and the suppression is given by the \nlarge gap $\\Delta E_{N+1,n}$. \nAmplitudes $\\,\\pi_{nn'}(t,t_0)\\,$ get main contributions\nfrom the first-order corrections. For fixed $t-t_0$ these are also \nsuppressed with $\\Delta E_{N+1,n}$.\nClearly, the appearance of large energy gaps in the second regime\nimproves convergence significantly. 
We therefore work with $t$, $t_0$\ncombinations in this regime.\n\n\n\\def1\/m_b{1\/m_b}\n\\def\\rm {stat}{\\rm {stat}}\n\nA very important step of our approach is to realize that the same \nperturbative analysis may be applied to get the $1\/m_b$ \ncorrections in the HQET correlation functions mentioned previously\n\\begin{equation}\n C_{ij}(t) \\;=\\; C_{ij}^{\\rm {stat}}(t) \\,+\\,\n \\omega \\,C_{ij}^{1\/m_b}(t) \\,+\\, {\\rm O}(\\omega^2)\\,,\n\\end{equation}\nwhere the combined ${\\rm O}(1\/m_b)$ corrections are symbolized by \nthe expansion parameter $\\omega$.\nFollowing the same procedure as above, we get similar exponential\nsuppressions (with the static energy gaps) for static and ${\\rm O}(1\/m_b)$ \nterms in the effective theory. We arrive at\n\\begin{equation}\n E_n^{\\rm eff}(t,t_0) \\;=\\; E_n^{{\\rm eff},{\\rm stat}}(t,t_0)\n +\\omega E_n^{{\\rm eff},{1\/m_b}}(t,t_0) +{\\rm O}(\\omega^2)\n\\end{equation}\nwith\n\\begin{eqnarray}\n E_n^{{\\rm eff},{\\rm stat}}(t,t_0) &=&\n E_n^{\\rm {stat}} \\,+\\, \\beta_n^{\\rm {stat}} \\,\n {\\rm e}^{-\\Delta E_{N+1,n}^{\\rm {stat}}\\, t}+\\ldots\\,, \n\\label{effenergies}\n\\\\[2mm]\n E_n^{\\rm eff,1\/m_b}(t,t_0) &=&\n E_n^{1\/m_b} \\,+\\, [\\,\\beta_n^{1\/m_b}\n \\,-\\, \\beta_n^{\\rm {stat}}\\,t\\;\\Delta E_{N+1,n}^{1\/m_b}\\,]\n {\\rm e}^{-\\Delta E_{N+1,n}^{\\rm {stat}}\\, t}+\\ldots\\, .\n\\end{eqnarray}\nand similarly for matrix elements.\nPreliminary results of our application of the methods described \nin this section were presented recently\\cite{Blossier:2009mg}\nand are summarized in the next section. 
A more detailed version of this \nstudy will be presented elsewhere.\\cite{inprep}\n\n\n\\section{Results}\n\nWe carried out a study of static-light B$_{\\rm s}$-mesons in \nquenched HQET with the nonperturbative parameters described in the\nprevious section,\nemploying the HYP1 and HYP2 lattice actions for the static quark\nand an ${\\rm O}(a)$-improved Wilson action for the strange quark\nin the simulations. The lattices considered were of the form\n$L^3 \\times 2L$ with periodic boundary conditions. We took\n$L\\approx1.5$ fm and lattice spacings $0.1$ fm, $0.07$ fm \nand $0.05$ fm, corresponding respectively to\n$\\beta=6.0219$, $6.2885$ and $6.4956$.\nWe used all-to-all strange-quark propagators constructed from\napproximate low modes, with 100 configurations. \nGauge links in interpolating fields were smeared with 3 iterations \nof (spatial) APE smearing, whereas Gaussian smearing (8 levels) was used\nfor the strange-quark field.\nA simple $\\gamma_0\\gamma_5$ structure in Dirac space was taken\nfor all 8 interpolating fields. \nAlso, the local field (no smearing) was included in order\nto compute the decay constant.\n\n\nThe resulting ($8\\times8$) correlation matrix may be\nconveniently truncated to an $N\\times N$ one and the GEVP\nsolved for each $N$, so that results can be studied as a function of $N$.\nWe then pick a basis from unprojected interpolators, by\nsampling the different smearing levels (from 1 to 7) as\n$\\{1,7\\}$,$\\{1,4,7\\}$, etc. We perform fits of the various energy \nlevels and values of $N$ to the behavior in Eq.\\ (\\ref{effenergies})\nand extract our results from the predicted plateaus.\nNext, we take the continuum limit, extrapolating our results to \n$a\\to 0$. 
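The plateau-fitting step described above can be sketched as follows. This is a toy illustration of fitting the single-gap behavior of Eq.\ (\ref{effenergies}) with `scipy.optimize.curve_fit`; the data are synthetic and noiseless, and all numbers are invented, not our simulation results.

```python
import numpy as np
from scipy.optimize import curve_fit

# Model of Eq. (effenergies): approach to the plateau is a single decaying
# exponential governed by the gap Delta = E_{N+1} - E_n (illustrative numbers).
def model(t, E, beta, gap):
    return E + beta * np.exp(-gap * t)

t = np.arange(3, 16, dtype=float)
true_E, true_beta, true_gap = 0.62, 0.15, 0.8
y = model(t, true_E, true_beta, true_gap)  # noiseless synthetic effective energies

popt, _ = curve_fit(model, t, y, p0=(0.5, 0.1, 1.0))
print(popt)  # recovered (E, beta, gap); E is the extracted plateau value
```

In practice one repeats such fits for several $N$ and interpolator bases and checks the stability of the extracted plateau before the continuum extrapolation.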
We see that the correction to the ground-state energy\ndue to terms of order $1\/m_b$, which is positive for finite $a$,\nis quite small (consistent with zero) in the continuum limit.\nOur results for the pseudoscalar meson \ndecay constant, both in the static limit and including ${\\rm O}(1\/m_b)$\ncorrections, are shown in terms of the combination \n$\\Phi^{\\rm HQET}\\equiv F_{\\rm PS}\\,\\sqrt{m_{\\rm PS}}\/C_{\\rm PS}$, \nwhere $C_{\\rm PS}(M\/\\Lambda_{\\rm QCD})$ \nis a known matching function and $\\Phi^{\\rm RGI}$ denotes the\nrenormalization-group-invariant matrix element of the static \naxial current.\\cite{DellaMorte:2007ij}\nThese two continuum extrapolations are shown in comparison with\nfully relativistic heavy-light (around charm-strange) \ndata\\cite{DellaMorte:2007ij} in Fig.\\ \\ref{fig:fBs_comp} below.\nNote that, up to perturbative corrections of order $\\alpha^3$\nin $C_{\\rm PS}$, HQET predicts a behavior $const. + {\\rm O}(1\/r_0 m_{\\rm PS})$\nin this graph. Surprisingly no $1\/(r_0 m_{\\rm PS})^2$ terms are \nvisible, even with our rather small errors.\n\n\n\\begin{figure}\n\\ \\vspace*{-6mm}\n\\begin{center}\n\\includegraphics[width=10truecm]{fb_new3.eps}\n\\ \\vspace*{-3mm}\n\\caption{Comparison of the continuum values for the pseudoscalar\nmeson decay constant from Fig.\\ 4\nto fully relativistic data in the charm region. The solid line is a\nlinear interpolation between the static limit and the points around\nthe charm-quark mass, which corresponds to $\\,1\/r_0\\, m_{\\rm PS}\\approx 0.2$.\n}\n\\label{fig:fBs_comp}\n\\end{center}\n\\ \\vspace*{-6mm}\n\\end{figure}\n\n\n\\section{Conclusions}\n\nThe combined use of nonperturbatively determined HQET parameters \n(in action and currents) and efficient GEVP allows us to reach \nprecisions of a few percent in matrix elements and of a few MeV \nin energy levels, even with only a moderate number of configurations. 
\nThe method is robust with respect to the choice of interpolator basis.\nAll parameters have been determined nonperturbatively and in \nparticular power divergences are completely subtracted.\nWe see that HQET plus ${\\rm O}(1\/m_b)$ corrections at the b-quark \nmass agrees well with an interpolation between the static point and the\ncharm region, indicating that linearity in $1\/m$ extends even to \nthe charm point. A corresponding study for $N_f=2$ is in progress.\n\n\n\\vskip 3mm\n\\noindent\n{\\bf Acknowledgements.}\nThis work is supported by the DFG\nin the SFB\/TR 09, and by the EU Contract \nNo.\\ MRTN-CT-2006-035482, ``FLAVIAnet''. \nT.M. thanks the A. von Humboldt Foundation;\nN.G. thanks the MICINN grant FPA2006-05807, the\nComunidad Aut\\'onoma de Madrid programme HEPHACOS P-ESP-00346 and\nthe Consolider-Ingenio 2010 CPAN (CSD2007-00042).\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeuaz b/data_all_eng_slimpj/shuffled/split2/finalzzeuaz new file mode 100644 index 0000000000000000000000000000000000000000..0369f91db41618f30b51654124077c1e27af6099 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeuaz @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nIn a fundamental paper on the coarse geometry of Banach spaces (\\cite{kaltonpropertyq}), N. Kalton introduced a property of metric spaces that he named property $\\mathcal Q$. In particular,\nits absence served as an obstruction to coarse embeddability into reflexive Banach spaces. This property is related to the behavior of Lipschitz maps defined on a particular family of metric graphs that we shall denote $([\\N]^k,d_{\\K}^k)_{k \\in \\N}$. We will recall the precise definitions of these graphs and of property $\\mathcal Q$ in section \\ref{subsection:Q}. 
Let us just say, vaguely speaking for the moment, that a Banach space $X$ has property $\\mathcal Q$ if for every Lipschitz map $f$ from $([\\N]^k,d_{\\K}^k)$ to $X$, there exists a full subgraph $[\\M]^k$ of $[\\N]^k$, with $\\M$ an infinite subset of $\\N$, on which $f$ satisfies a strong concentration phenomenon. It is then easy to see that if a Banach space $X$ has property $\\mathcal Q$, then the family of graphs $([\\N]^k,d_{\\K}^k)_{k \\in \\N}$ does not equi-coarsely embed into $X$ (see the definition in section \\ref{s:coarse}). One of the main results in \\cite{kaltonpropertyq} is that any reflexive Banach space has property $\\mathcal Q$. It then readily follows that a reflexive Banach space cannot contain a coarse copy of all separable metric spaces, or equivalently does not contain a coarse copy of the Banach space $c_0$. In fact, by refining this argument, Kalton proved an even stronger result in \\cite{kaltonpropertyq}: if a separable Banach space $X$ contains a coarse copy of $c_0$, then there is an integer $k$ such that the dual of order $k$ of $X$ is non separable. In particular, a quasi-reflexive Banach space does not contain a coarse copy of $c_0$.\nHowever, Kalton proved that the most famous example of a quasi-reflexive space, namely the James space $\\J$, as well as its dual $\\J^*$, fail property $\\mathcal Q$.\n\nThe main purpose of this paper is to show that, although they do not obey the concentration phenomenon described by property $\\mathcal Q$, neither $\\J$ nor $\\J^*$ equi-coarsely contains the family of graphs $([\\N]^k,d_{\\K}^k)_{k \\in \\N}$ (Corollary \\ref{James}). This provides a coarse invariant, namely ``not containing equi-coarsely the Kalton graphs'', that is very close to but different from property $\\mathcal Q$. This could allow one to find obstructions to coarse embeddability between seemingly close Banach spaces. Our result is actually more general. 
We prove in Theorem \\ref{general} that a quasi-reflexive Banach space $X$ such that both $X$ and $X^*$ admit an equivalent $p$-asymptotically uniformly smooth norm (see the definition in section \\ref{asymptotics}), for some $p$ in $(1,\\infty)$, does not equi-coarsely contain the Kalton graphs.\n\nWe conclude this note by showing that if the James tree space $\\J\\T$ or its predual coarsely embeds into a separable Banach space $X$, then there exists $k\\in \\N$ so that the dual of order $k$ of $X$ is non separable. This extends slightly Theorem 3.5 in \\cite{kaltonpropertyq}.\n\n\\section{Metric notions}\n\n\\subsection{Coarse embeddings}\\label{s:coarse}\n\nLet $M$, $N$ be two metric spaces and $f \\colon M \\to N$ be a map. We define the compression modulus $\\rho_f$ and the expansion modulus $\\omega_f$ as follows. For $t\\in [0,\\infty)$, we set\n\\begin{eqnarray*}\n&&\\rho_f (t) = \\inf \\{ d_N(f(x),f(y)) \\, : \\, d_M(x,y) \\geq t \\},\\\\\n&&\\omega_f (t) = \\sup \\{ d_N(f(x),f(y)) \\, : \\, d_M(x,y) \\leq t \\}.\n\\end{eqnarray*}\nWe adopt the convention $\\sup(\\emptyset)=0$ and $\\inf(\\emptyset)=\\infty$.\nNote that for every $x,y \\in M$,\n$$\\rho_f (d_M(x,y)) \\leq d_N(f(x),f(y)) \\leq \\omega_f (d_M(x,y)).$$\nWe say that $f$ is a \\emph{coarse embedding} if $\\omega_f (t) < \\infty$ for every $t \\in [0,+\\infty)$ and $\\lim_{t \\to \\infty} \\rho_f (t) = \\infty$.\n\nNext, let $(M_i)_{i \\in I}$ be a family of metric spaces. 
We say that the family $(M_i)_{i \\in I}$ \\emph{equi-coarsely embeds} into a metric space $N$ if there exist two maps $\\rho, \\, \\omega \\colon [0,+\\infty) \\to [0,+\\infty)$ and maps $f_i \\colon M_i \\to N$ for $i \\in I$ such that:\n\\begin{enumerate}[(i)]\n\\item $\\lim_{t \\to \\infty} \\rho(t) = \\infty$,\n\\item $\\omega(t) < \\infty$ for every $t \\in [0,+\\infty)$,\n\\item $\\rho(t) \\leq \\rho_{f_i}(t)$ and $\\omega_{f_i}(t) \\leq \\omega(t)$ for every $i \\in I$ and $t \\in [0,\\infty)$.\n\\end{enumerate}\n\n\n\n\\subsection{The Kalton interlaced graphs and property Q}\n\\label{subsection:Q}\n\nFor $k\\in \\N$ and $\\M$ an infinite subset of $\\N$, we put $[\\M]^{\\le k}=\\{ S\\subset \\M: |S|\\le k\\}$, $[\\M]^{ k}=\\{ S\\subset \\M: |S|=k\\} $, $[\\M]^\\omega=\\{ S\\subset \\M: S\\text{ is infinite}\\} $, and $[\\M]^{< \\omega}=\\{ S\\subset \\M: S\\text{ is finite}\\}$.\nWe always list the elements of some $\\m$ in $[\\N]^{< \\omega}$ or in $[\\N]^{ \\omega}$ in increasing order, meaning that if we write $\\m=(m_1,m_2,\\ldots, m_l)$ or $\\m=(m_1,m_2,m_3, \\ldots )$, we tacitly assume that $m_1<m_2<m_3<\\ldots$. 
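For finite metric spaces the compression and expansion moduli introduced above can be computed directly from their definitions. The sketch below (an illustration of the definitions, not code from the paper) evaluates both moduli for a toy map, with the conventions $\sup(\emptyset)=0$ and $\inf(\emptyset)=\infty$.

```python
import itertools
import math

# Compression and expansion moduli of a map between finite metric spaces,
# following the definitions above (inf(empty) = inf, sup(empty) = 0).
def moduli(points, d_M, d_N, f, t):
    rho = math.inf   # inf over pairs at distance >= t
    omega = 0.0      # sup over pairs at distance <= t
    for x, y in itertools.combinations(points, 2):
        if d_M(x, y) >= t:
            rho = min(rho, d_N(f(x), f(y)))
        if d_M(x, y) <= t:
            omega = max(omega, d_N(f(x), f(y)))
    return rho, omega

# Toy example: f(x) = x // 2 from {0,...,9} with the absolute-value metric
points = range(10)
d = lambda x, y: abs(x - y)
f = lambda x: x // 2
rho, omega = moduli(points, d, d, f, 2)
print(rho, omega)
```

For this map one always has $\rho_f(d(x,y)) \le d(f(x),f(y)) \le \omega_f(d(x,y))$, as noted above.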
\nNotice that the sets $\\arg\\max(F)$ and $\\arg\\min(F)$ are disjoint.\nWe select inductively natural numbers $a_1<b_1<a_2<b_2<\\ldots$ in the following way: $a_1=\\min\\left(\\arg\\max(F)\\right)$ and\n\\begin{itemize}\n\\item $b_i=\\min\\left( \\{n>a_i\\} \\cap \\arg\\min(F)\\right)$, if this is not empty.\n\\item $a_{i+1}=\\min\\left( \\{n>b_i\\} \\cap \\arg\\max(F)\\right)$, if this set is not empty.\n\\end{itemize}\nNotice that $\\{a_1,\\ldots,a_p\\} \\subset \\n \\setminus \\m$ and $\\{b_1,\\ldots,b_q\\} \\subset \\m\\setminus \\n$.\nNotice also that either $p=q$ or $p=q+1$.\nIn the latter case we define $b_p:=r$ for some $r$ such that $r>a_p$ and $F(r-1)>F(r)$.\nSuch $r$ must exist since $F(\\max\\{n_k,m_k\\})=0$.\nAlso we have $r \\in \\m \\setminus \\n$.\nWe will set\n$$\n\\lbar=\\n \\cup \\{b_1,\\ldots,b_p\\} \\setminus \\{a_1,\\ldots,a_p\\}.\n$$\nIt is clear that $\\lbar \\in [\\M]^k$.\nWe also have $\\max F_{\\lbar,\\m}=\\max F_{\\n,\\m}-1$ and $\\min F_{\\lbar,\\m}=\\min F_{\\n,\\m}$.\nIndeed, the point $\\lbar$ is constructed in such a way that when $F_{\\n,\\m}$ attains its maximum for the first time (going from the left), $F_{\\lbar,\\m}$ is reduced by one and stays reduced by one until the next time the minimum of $F_{\\n,\\m}$ is attained (or until the point $r$), where this reduction is corrected back; and so on.\nThus $d(\\lbar,\\m)=d(\\n,\\m)-1$.\nAlso, since the sets $\\set{a_1,\\ldots,a_p}$ and $\\set{b_1,\\ldots,b_p}$ are interlaced we have $F_{\\n,\\m}-1\\leq F_{\\lbar,\\m}\\leq F_{\\n,\\m}$.\nTherefore, since $F_{\\n,\\m}=F_{\\n,\\lbar}+F_{\\lbar,\\m}$, we have that $0\\leq F_{\\n,\\lbar}\\leq 1$ and so finally $d(\\n,\\lbar)=1$, since it is clear that $\\n \\neq \\lbar$.\n\\end{proof}\nNote that if $X$ is a Banach space and $f \\colon ([\\M]^k,d_{\\K}^k) \\to X$ is a map with finite expansion modulus $\\omega_f$, then $\\omega_f(1)$ is actually the Lipschitz constant of $f$ as $d_{\\K}^k$ is a graph distance on $[\\M]^k$.\n\nIn \\cite{kaltonpropertyq} the property $\\mathcal Q$ is defined in the setting of metric spaces. 
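The quantities manipulated in the proof above can be experimented with numerically. The sketch below (an illustration, not part of the proof) encodes $F_{\n,\m}(j)$ and the distance formula $d_\K^k(\n,\m)=\max F_{\n,\m}-\min F_{\n,\m}$ used above, and brute-force checks the one-step reduction property on all pairs of $3$-subsets of $\{0,\dots,5\}$; the function names are our own.

```python
import itertools

# F_{n,m}(j) = #{i : n_i <= j} - #{i : m_i <= j}; for equal-size subsets the
# proof above works with d_K(n, m) = max F - min F (note F(j) = 0 for large j).
def F(n, m, j):
    return sum(1 for x in n if x <= j) - sum(1 for x in m if x <= j)

def d_K(n, m):
    top = max(max(n), max(m))
    vals = [F(n, m, j) for j in range(top + 1)]
    return max(vals) - min(vals)

# Sanity checks on all pairs of 3-subsets of {0,...,5}
subsets = list(itertools.combinations(range(6), 3))
for n in subsets:
    for m in subsets:
        assert d_K(n, m) == d_K(m, n)
        assert (d_K(n, m) == 0) == (n == m)
        # one-step reduction as in the proof: some lbar is strictly closer to m
        if d_K(n, m) > 1:
            assert any(d_K(n, l) == 1 and d_K(l, m) == d_K(n, m) - 1
                       for l in subsets)
print("all checks passed")
```

Since $\overline\ell$ in the proof is built inside $\n\cup\m$, restricting the brute-force search to subsets of the same ground set is enough.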
For homogeneity reasons, its definition can be simplified for Banach spaces. Let us recall it here.\n\n\\begin{defi}\nLet $X$ be a Banach space. We say that $X$ has \\emph{property $\\mathcal{Q}$} if there exists $C\\ge 1$ such that for every $k \\in \\N$ and every Lipschitz map $f \\colon ([\\N]^k,d_{\\K}^k) \\to X$, there exists an infinite subset $\\M$ of $\\N$ such that:\n$$\\forall \\, \\overline{n},\\overline{m} \\in [\\M]^k,\\ \\|f(\\overline{n})-f(\\overline{m})\\| \\leq C\\omega_f(1).$$\n\\end{defi}\n\nThe following proposition should be clear from the definitions. We shall however include its short proof.\n\\begin{prop} \\label{QandGk}\nLet $X$ be a Banach space. If $X$ has property $\\mathcal Q$, then the family of graphs $([\\N]^k,d_{\\K}^k)_{k\\in \\N}$ does not equi-coarsely embed into $X$.\n\\end{prop}\n\n\\begin{proof}\nLet $C\\ge 1$ be given by the definition of property $\\mathcal Q$. Aiming for a contradiction, assume that the family $([\\N]^k,d_{\\K}^k)_{k\\in \\N}$ equi-coarsely embeds into $X$. That is, there are maps $f_k \\colon ([\\N]^k,d_{\\K}^k) \\to X$ and two functions $\\rho, \\omega \\colon [0,+\\infty)\\to[0,+\\infty)$ such that $\\lim_{t \\to \\infty} \\rho (t)=\\infty$ and\n$$\\forall k\\in \\N\\ \\ \\forall t>0\\ \\ \\rho(t) \\leq \\rho_{f_k}(t)\\ \\ \\text{and}\\ \\ \\omega_{f_k}(t) \\leq \\omega(t)<\\infty.$$\nThus, for every $k\\in \\N$, there exists an infinite subset $\\M_k$ of $\\N$ such that $\\diam(f_k([\\M_k]^k))\\le C\\omega(1)$. Since $\\diam([\\M_k]^k)=k$, this implies that for all $k\\in \\N$, $\\rho(k) \\le C\\omega(1)$. 
This contradicts the fact that $\\lim\\limits_{t \\to \\infty} \\rho(t)=\\infty$.\n\\end{proof}\n\nA concrete bi-Lipschitz copy of the metric spaces $([\\N]^k,d_{\\K}^k)$ in $c_0$ is given by the following proposition.\n\n\\begin{prop}\\label{Kgraphsinc0} Let $(s_n)_{n=1}^\\infty$ be the summing basis of $c_0$, that is\\\\\n$s_n=\\sum_{i=1}^ne_i$, where $(e_i)_{i=1}^\\infty$ is the canonical basis of $c_0$.\\\\\nFor $k\\in \\N$, define $f_k:([\\N]^k,d_{\\K}^k)\\to c_0$ by $f_k(\\n)=\\sum_{i=1}^k s_{n_i}$. Then\n$$\\frac12d_{\\K}^k(\\n,\\m)\\le \\|f_k(\\n)-f_k(\\m)\\|_\\infty\\le d_{\\K}^k(\\n,\\m)$$\nfor all $\\n,\\m \\in [\\N]^k$.\n\\end{prop}\n\n\\begin{proof}\nSince $\\dk=d$, one can show (as in the Fact in the proof of Proposition~\\ref{p:KaltonDistanceFormula}) that\n$\\dk(\\n,\\m)=\\max(f_k(\\n)-f_k(\\m))-\\min(f_k(\\n)-f_k(\\m)).$\nThe result then follows easily since $\\min(f_k(\\n)-f_k(\\m))\\leq 0\\leq \\max(f_k(\\n)-f_k(\\m))$ for all $\\n,\\m\\in [\\N]^k$.\n\\end{proof}\n\n\\begin{remark} We already explained that $c_0$ cannot coarsely embed into any Banach space with property $\\mathcal Q$ (in particular into any reflexive Banach space) and that Kalton even showed with additional arguments that if $c_0$ coarsely embeds into a separable Banach space $X$, then one of the iterated duals of $X$ has to be non separable. An inspection of his proof shows that \nthe uniformly discrete metric spaces\n\\[\n M_k=\\set{\\sum_{i=1}^k s_{n_i} \\times \\indicator{A}: (n_1,\\ldots,n_k) \\in [\\N]^k, A \\in [\\N]^\\omega} \\subset c_0\n\\]\ndo not equi-coarsely embed into any Banach space $X$ such that $X^{(r)}$ is separable for all $r$.\nSee Theorem \\ref{kalton} below for more on this subject.\n\\end{remark}\n\nStudying further the property $\\mathcal Q$ in \\cite{kaltonpropertyq}, Kalton exhibited non reflexive quasi-reflexive spaces with the property $\\mathcal Q$ but showed that $\\J$ and $\\J^*$ fail property $\\mathcal Q$. 
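Proposition \ref{Kgraphsinc0} can likewise be checked by brute force on small instances. The sketch below (illustrative code; the $\max-\min$ distance formula is taken as an assumption from the proof of that proposition) computes the coordinates of $f_k(\n)=\sum_{i=1}^k s_{n_i}$ in $c_0$ and verifies $\tfrac12 d_\K^k(\n,\m) \le \|f_k(\n)-f_k(\m)\|_\infty \le d_\K^k(\n,\m)$ on all pairs of $3$-subsets of $\{1,\dots,8\}$.

```python
import itertools

# f_k(n) = sum_i s_{n_i} has j-th coordinate #{i : n_i >= j}, since the
# summing basis satisfies s_m = e_1 + ... + e_m.
def embed(n, top):
    return [sum(1 for x in n if x >= j) for j in range(1, top + 1)]

top = 8
for n in itertools.combinations(range(1, top + 1), 3):
    for m in itertools.combinations(range(1, top + 1), 3):
        diff = [a - b for a, b in zip(embed(n, top), embed(m, top))]
        # distance formula quoted in the proof of the proposition:
        d = max(diff + [0]) - min(diff + [0])
        sup = max(abs(c) for c in diff)
        assert d / 2 <= sup <= d
print("bi-Lipschitz bounds verified")
```

The two inequalities follow exactly as in the proof: the sup norm of the difference lies between $\max-\min$ of its coordinates divided by two and $\max-\min$ itself, because the coordinatewise maximum and minimum have opposite signs.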
It is worth noticing that a theorem of Schoenberg \\cite{Schoenberg} implies that $\\ell_1$ coarsely embeds into $\\ell_2$, and therefore $\\ell_1$ provides a simple example of a non-reflexive Banach space with property $\\mathcal Q$.\n\n\\medskip\nWe conclude this section with two propositions that we state here for future reference. We start with a classical version of Ramsey's theorem.\n\\begin{prop}[Corollary 1.2 in \\cite{Gowers}] \\label{ramsey}\nLet $(K,d)$ be a compact metric space, $k \\in \\N$ and $f \\colon [\\N]^k \\to K$. Then for every $\\ep >0$, there exists an infinite subset $\\M$ of $\\N$ such that $d(f(\\overline{n}),f(\\overline{m}))< \\ep$ for every $\\overline{n},\\overline{m} \\in [\\M]^k$.\n\\end{prop}\n\nFor a Banach space $X$, we call \\emph{tree of height $k$} in $X$ any family $(x(\\n))_{\\n\\in[\\N]^{\\le k}}$, with $x(\\n)\\in X$. Then, if $\\M \\in [\\N]^\\omega$, $(x(\\n))_{\\n\\in[\\M]^{\\le k}}$ will be called a \\emph{full subtree} of $(x(\\n))_{\\n\\in[\\N]^{\\le k}}$. A tree $(x^*(\\n))_{\\n\\in[\\M]^{\\le k}}$ in $X^*$ is called \\emph{weak$^*$-null} if for any $\\n \\in [\\M]^{\\le k-1}$, the sequence $(x^*(n_1,\\ldots,n_{k-1},t))_{t>n_{k-1},t\\in \\M}$ is weak$^*$-null.\n\nThe next proposition is\nbased on a weak$^*$-compactness argument and will be crucial for our proofs. Although the distance considered on $[\\N]^k$ is different, the proof follows the same lines as Lemma 4.1 in \\cite{blms}. We therefore state it now without further detail.\n\n\\begin{prop}\\label{nulltree} Let $X$ be a separable Banach space, $k\\in\\N$, and $f:([\\N]^k,d_{\\K}^k)\\to X^*$ a Lipschitz map. 
Then there exist $\\M \\in [\\N]^\\omega$ and a weak$^*$-null tree $(x^*(\\m))_{\\m\\in[\\M]^{\\le k}}$ in $X^*$ with $\\|x^*_{\\m}\\| \\leq \\omega_f(1)$ for all $\\m\\in [\\M]^{\\leq k}\\setminus\\{\\emptyset\\}$ and so that\n$$\\forall \\n \\in [\\M]^k,\\ f(\\n)=\\sum_{i=0}^k x^*(n_1,\\ldots,n_i)=\\sum_{\\m \\preceq \\n} x^*(\\m).$$\n\\end{prop}\n\n\n\n\n\\section{Uniform asymptotic properties of norms and related estimates}\n\\label{asymptotics}\n\nWe recall the definitions that will be considered in this paper. For a Banach space $(X,\\|\\ \\|)$ we\ndenote by $B_X$ the closed unit ball of $X$ and by $S_X$ its unit\nsphere. The following definitions are due to V. Milman \\cite{Milman} and we adopt the notation from \\cite{JohnsonLindenstraussPreissSchechtman2002}. For $t\\in [0,\\infty)$ we define $$\\overline{\\rho}_X(t)=\\sup_{x\\in S_X}\\inf_{Y}\\sup_{y\\in S_Y}\\big(\\|x+t y\\|-1\\big),$$\nwhere $Y$ runs through all closed subspaces of $X$ of finite codimension. Then, the norm $\\|\\ \\|$ is said to be {\\it asymptotically uniformly smooth} (in short AUS) if\n$$\\lim_{t \\to 0}\\frac{\\overline{\\rho}_X(t)}{t}=0.$$\nFor $p\\in (1,\\infty)$ it is said to be {\\it $p$-asymptotically uniformly smooth} (in short $p$-AUS) if there exists $c>0$ such that for all $t\\in [0,\\infty)$, $\\overline{\\rho}_X(t)\\le ct^p$.\n\nWe will also need the dual modulus defined \nby\n$$ \\overline{\\delta}_X^*(t)=\\inf_{x^*\\in S_{X^*}}\\sup_{E}\\inf_{y^*\\in S_E}\\big(\\|x^*+ty^*\\|-1\\big),$$\nwhere $E$ runs through all finite-codimensional weak$^*$-closed subspaces of~$X^*$. \nThe norm of $X^*$ is said to be {\\it weak$^*$ asymptotically uniformly convex} (in short AUC$^*$) if $\\overline{\\delta}_X^*(t)>0$ for all $t$ in $(0,\\infty)$. 
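As a concrete illustration of these moduli (a classical computation, not taken from this paper), one can evaluate $\overline{\rho}_X$ for $X=\ell_p$:

```latex
% Worked example: \ell_p (1 \le p < \infty) is p-AUS.
% For x \in S_{\ell_p} with finite support (a dense set of x suffices) and
% Y the closed span of the coordinate vectors beyond the support of x,
% every y \in S_Y is disjointly supported from x, so
% \|x + t y\|^p = \|x\|^p + t^p\|y\|^p = 1 + t^p.  Hence
\[
\overline{\rho}_{\ell_p}(t) = (1+t^p)^{1/p} - 1 \le \tfrac{1}{p}\, t^p ,
\]
% by concavity of u \mapsto u^{1/p}.  Thus \overline{\rho}_{\ell_p}(t) \le c\,t^p
% with c = 1/p, i.e. \ell_p is p-asymptotically uniformly smooth.
```

Dually, $\ell_q$ with $q$ conjugate to $p$ is $q$-AUC$^*$, in line with the duality result recalled below.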
If there exists $c>0$ and $q\\in [1,\\infty)$ such that for all $t\\in [0,1]$ $\\overline{\\delta}_X^*(t)\\ge ct^q$, we say that the norm of $X^*$ is $q$-AUC$^*$.\nThe following proposition is elementary.\n\n\\begin{prop}\\label{as-sequences} Let $X$ be a Banach space. For any $t\\in (0,1)$, any weakly null sequence $(x_n)_{n=1}^\\infty$ in $B_{X}$ and any $x \\in S_X$ we have:\n$$ \\limsup_{n \\rightarrow \\infty} \\| x+tx_n \\| \\leq 1 + \\overline{\\rho}_{X}(t).$$\n\nFor any weak$^*$-null sequence $(x^*_n)_{n=1}^\\infty \\subset X^*$ and for any $x^* \\in X^*\\setminus \\set{0}$ we have\n\\[\n \\limsup_{n\\to \\infty} \\norm{x^*+x^*_n} \\geq \\norm{x^*}\\left(1+\\overline{\\delta}_X^*\\left(\\frac{\\limsup \\norm{x_n^*}}{\\norm{x^*}}\\right)\\right).\n\\]\n\n\\end{prop}\n\nWe will also need the following refinement (see Proposition 2.1 in \\cite{LancienRaja2018}).\n\n\\begin{prop}\\label{waus}\nLet $X$ be a Banach space. \nThen the bidual norm on $X^{**}$ has the following property. \nFor any $t\\in (0,1)$, any weak$^*$-null sequence $(x^{**}_n)_{n=1}^\\infty$ in $B_{X^{**}}$ and any $x \\in S_X$ we have:\n$$ \\limsup_{n \\rightarrow \\infty} \\| x+tx^{**}_n \\| \\leq 1 + \\overline{\\rho}_{X}(t).$$\n\\end{prop}\n\n\n\\medskip Let us now recall the following classical duality result concerning these moduli (see for instance \\cite{DKLR} Corollary 2.3 for a precise statement).\n\n\\begin{prop}\\label{duality} Let $X$ be a Banach space. 
\nThen $\\|\\ \\|_X$ is AUS if and only if $\\|\\ \\|_{X^*}$ is AUC$^*$.\n\nIf $p,q\\in (1,\\infty)$ are conjugate exponents, then $\\|\\ \\|_X$ is $p$-AUS if and only if $\\|\\ \\|_{X^*}$ is $q$-AUC$^*$.\n\\end{prop}\n\nWe conclude this section with a list of a few classical properties of Orlicz functions and norms that are related to these moduli.\nA map $\\varphi:[0,\\infty) \\to [0,\\infty)$ is called an \\emph{Orlicz function} if it is continuous, nondecreasing, convex and such that $\\varphi(0)=0$ and $\\lim_{t \\to \\infty}\\varphi(t)=\\infty$. \nThe \\emph{Orlicz norm} $\\|\\ \\|_{\\ell_\\varphi}$ associated with $\\varphi$ is defined on $c_{00}$, the space of finitely supported sequences, as follows:\n$$\\forall x=(x_n)_{n=1}^\\infty \\in c_{00},\\ \\ \\|x\\|_{\\ell_\\varphi}=\\inf\\big\\{r>0,\\ \\sum_{n=1}^\\infty \\varphi(|x_n|\/r)\\le 1\\big\\}.$$\nThe following is immediate from the definition.\n\n\\begin{lem}\\label{Orlicz-lp} Let $\\varphi:[0,\\infty) \\to [0,\\infty)$ be an Orlicz function and $p\\in [1,\\infty)$.\n\\begin{enumerate}[(i)]\n\\item If there exists $C>0$ such that $\\varphi(t)\\le Ct^p$, for all $t\\in [0,1]$, then there exists $A>0$ such that $\\|x\\|_{\\ell_\\varphi} \\le A\\|x\\|_{\\ell_p}$, for all $x\\in c_{00}$.\n\\item If there exists $c>0$ such that $\\varphi(t)\\ge ct^p$, for all $t\\in [0,1]$, then there exists $a>0$ such that $\\|x\\|_{\\ell_\\varphi} \\ge a\\|x\\|_{\\ell_p}$, for all $x\\in c_{00}$.\n\\end{enumerate}\n\\end{lem}\n\nAssume now that $\\varphi:[0,\\infty) \\to [0,\\infty)$ is an Orlicz function which is 1-Lipschitz and such that $\\lim_{t\\to \\infty}\\varphi(t)\/t=1$. 
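Since $r\mapsto \sum_n\varphi(|x_n|/r)$ is nonincreasing in $r$, the infimum in the definition of the Orlicz norm can be computed by bisection. A minimal sketch (illustrative; `orlicz_norm` and its arguments are our own naming, not from the paper):

```python
# Orlicz (Luxemburg) norm of a finitely supported sequence x, computed by
# bisection on r in the defining infimum (phi an Orlicz function).
def orlicz_norm(x, phi, iters=100):
    if not any(x):
        return 0.0
    hi = 1.0
    while sum(phi(abs(v) / hi) for v in x) > 1:  # grow hi until feasible
        hi *= 2
    lo = 0.0
    for _ in range(iters):                       # shrink [lo, hi] around the inf
        mid = (lo + hi) / 2
        if sum(phi(abs(v) / mid) for v in x) <= 1:
            hi = mid
        else:
            lo = mid
    return hi

# phi(t) = t^p recovers the l_p norm; e.g. p = 2 on (3, 4) gives 5.
print(orlicz_norm([3, 4], lambda t: t * t))
```

This also makes Lemma \ref{Orlicz-lp} plausible: comparing $\varphi$ with $t^p$ on $[0,1]$ compares the corresponding unit balls up to a constant.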
\nConsider for $(s,t) \\in \\R^2$, \n\\[\nN_2^\\varphi(s,t)=\n\\begin{cases}|s|+|s|\\varphi(|t|\/|s|) & \\text{ if }s\\neq 0,\\\\\n \\abs{t}& \\text{ if }s=0.\n\\end{cases}\n\\]\nThen define by induction for all $n\\geq 3$:\n$$\\forall (s_1,\\ldots,s_n)\\in \\R^n,\\ N_n^\\varphi(s_1,\\ldots,s_n)=\nN_2^\\varphi\\big(N_{n-1}^\\varphi(s_1,\\ldots,s_{n-1}),s_n\\big).$$\nThe following is proved in \\cite{KaltonTAMS2013} (see Lemma 4.3 and its preparation).\n\n\n\\begin{lemma}\\label{Orlicz-Kalton}\\\n\\begin{enumerate}[(i)]\n\\item For any $n \\ge 2$, the function $N_n^\\varphi$ is an absolute (or lattice) norm on $\\R^n$, meaning that $N_n^\\varphi(s_1,\\ldots,s_n)\\le N_n^\\varphi(t_1,\\ldots,t_n)$, whenever $|s_i|\\le |t_i|$ for all $i\\le n$.\n\\item For any $n\\in \\N$ and any $s\\in \\R^n$:\n$$\\frac12 \\|s\\|_{\\ell_\\varphi} \\le N_n^\\varphi(s) \\le e\\|s\\|_{\\ell_\\varphi}.$$\n\\end{enumerate}\n\\end{lemma}\n\nWhen $X$ is a Banach space, it is easy to see that $\\overline{\\rho}_X$ is a 1-Lipschitz Orlicz function such that $\\lim_{t\\to \\infty}\\overline{\\rho}_X(t)\/t=1$.\nBut due to its lack of convexity, $\\overline{\\delta}_X^*$ is not an Orlicz function and we need to modify it. Following \\cite{KaltonTAMS2013}, we define\n$$\\delta(t)=\\int_0^t \\frac{\\overline{\\delta}_X^*(s)}{s}\\,ds.$$\nIt is easy to see that $\\overline{\\delta}_X^*(t)\/{t}$ is increasing and tends to $1$ as $t$ tends to $\\infty$. 
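The recursive definition of $N_n^\varphi$ translates directly into code. The sketch below (illustrative naming, not from \cite{KaltonTAMS2013}) implements the recursion and checks it on $\varphi(t)=t$, a 1-Lipschitz Orlicz function with $\varphi(t)/t\to 1$, for which $N_2^\varphi(s,t)=|s|+|t|$ and hence $N_n^\varphi$ is exactly the $\ell_1$ norm (consistent with $\ell_\varphi=\ell_1$ in that case).

```python
# Kalton's recursively defined absolute norm N_n^phi from the display above.
def N2(phi, s, t):
    return abs(s) + abs(s) * phi(abs(t) / abs(s)) if s != 0 else abs(t)

def Nn(phi, coords):
    acc = abs(coords[0])
    for c in coords[1:]:
        acc = N2(phi, acc, c)   # N_n = N_2(N_{n-1}(s_1,...,s_{n-1}), s_n)
    return acc

# For phi(t) = t this is the l1 norm:
print(Nn(lambda t: t, [1, -2, 3]))
```

One can similarly probe the lattice property of part (i) numerically by increasing coordinates in absolute value.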
Therefore, $\\delta$ is an Orlicz function which is 1-Lipschitz, such that $\\lim_{t\\to \\infty}\\delta(t)\/t=1$, and which satisfies:\n$$\\forall t\\in [0,\\infty),\\ \\ \\overline{\\delta}_X^*(t\/2) \\le \\delta(t) \\le \\overline{\\delta}_X^*(t).$$\nThe following statement is now a direct consequence of Lemmas \\ref{Orlicz-lp} and \\ref{Orlicz-Kalton}.\n\n\\begin{lem}\\label{Nnorm-lp} Let $X$ be a Banach space and $p\\in [1,\\infty)$.\n\\begin{enumerate}[(i)]\n\\item If there exists $C>0$ such that $\\overline{\\rho}_X(t)\\le Ct^p$, for all $t\\in [0,1]$, then there exists $A>0$ such that\n$$\\forall n\\in \\N\\ \\forall x\\in \\R^n,\\ \\ N_n^{\\overline{\\rho}_X}(x)\\le A\\|x\\|_{\\ell_p^n}.$$\n\\item If there exists $c>0$ such that $\\overline{\\delta}_X^*(t)\\ge ct^p$, for all $t\\in [0,1]$, then there exists $a>0$ such that\n$$\\forall n\\in \\N\\ \\forall x\\in \\R^n,\\ \\ N_n^{\\delta}(x)\\ge a\\|x\\|_{\\ell_p^n}.$$\n\\end{enumerate}\n\\end{lem}\n\n\nWe will also use the following reformulation of Propositions~\\ref{as-sequences} and~\\ref{waus} in terms of the norms $N_2^\\delta$ and $N_2^{\\overline{\\rho}_X}$.\n\\begin{lem}\\label{l:asymptotic-Nnorm}\n Let $X$ be a Banach space. \n \\begin{enumerate}[(i)]\n \\item Let $(x_n^*) \\subset X^*$ be weak$^*$-null. 
Then for any $x^* \\in X^*$ we have \n \\[\n \\limsup_{n\\to \\infty} \\norm{x^*+x_n^*}\\geq N_2^\\delta(\\norm{x^*},\\limsup \\norm{x_n^*}).\n \\]\n \\item Similarly, if $(x_n^{**}) \\subset X^{**}$ is weak$^*$-null and $x\\in X$, then \n$$ \\liminf_{n \\rightarrow \\infty} \\| x+x^{**}_n \\| \\leq N_2^{\\overline{\\rho}_X}(\\norm{x},\\liminf \\norm{x_n^{**}}).$$\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\n If $x^*=0$ there is nothing to do, so we may assume that $x^*\\neq 0$.\n By application of Proposition~\\ref{as-sequences} we see that \n \\[\n \\begin{aligned}\n \\limsup_{n\\to\\infty}\\norm{x^*+x_n^*}&\\geq \\norm{x^*}\\left(1+\\overline{\\delta}^*_X\\left(\\frac{\\limsup\\norm{x_n^*}}{\\norm{x^*}}\\right)\\right)\\\\\n &\\geq \\norm{x^*}\\left(1+\\delta\\left(\\frac{\\limsup\\norm{x_n^*}}{\\norm{x^*}}\\right)\\right)=N_2^\\delta(\\norm{x^*},\\limsup \\norm{x_n^*})\n \\end{aligned}\n \\]\nThe proof of the second claim is even simpler so we leave it to the reader.\n\\end{proof}\n\n\n\n\n\\section{The general result}\n\nLet us first recall that a Banach space is said to be {\\it quasi-reflexive} if the image of its canonical embedding into its bidual is of finite codimension in its bidual. We can now state our main result.\n\n\\begin{thm}\\label{general} Let $X$ be a quasi-reflexive Banach space, let $p\\in (1,\\infty)$ and denote $q$ its conjugate exponent. Assume that $X$ admits an equivalent $p$-AUS norm and that $X^*$ admits an equivalent $q$-AUS norm. Then the family $([\\N]^k,d_{\\K}^k)_{k\\in \\N}$ does not equi-coarsely embed into $X^{**}$.\n\\end{thm}\n\nWe immediately deduce the following.\n\n\\begin{cor}\\label{generalcorollary} \nLet $X$ be a quasi-reflexive Banach space, let $p\\in (1,\\infty)$ and denote $q$ its conjugate exponent. \nAssume that $X$ admits an equivalent $p$-AUS norm and that $X^*$ admits an equivalent $q$-AUS norm. 
\nThen the family $([\\N]^k,d_{\\K}^k)_{k\\in \\N}$ does not equi-coarsely embed into $X$, nor does it equi-coarsely embed into any iterated dual $X^{(r)}$ ($r\\geq 0$) of $X$.\n\\end{cor}\n\n\\begin{proof} \nSince $X$ is quasi-reflexive we infer that $X^{(r)}$ admits an equivalent $p$-AUS norm when $r$ is even and it admits an equivalent $q$-AUS norm when $r$ is odd. \nIndeed, note that when $r$ is even $X^{(r)}$ is isomorphic to $X\\oplus_p F$ where $F$ is finite-dimensional (resp. $X^{(r)}\\simeq X^*\\oplus_q F$ when $r$ is odd).\nNow it is obvious from Theorem~\\ref{general} that $([\\N]^k)_{k\\in \\N}$ does not equi-coarsely embed into $X^{(r)}$ when $r$ is even. \nWhen $r$ is odd, we just exchange the roles of $p$ and $q$.\n\\end{proof}\n\nBefore going into the detailed proof of Theorem~\\ref{general} let us briefly indicate the main idea. We assume that there is an equi-coarse family of embeddings $(f_k)$ of $[\\N]^k$ into $X^{**}$ with moduli $\\rho$ and $\\omega$.\nWe fix $k$ sufficiently large and observe that, up to passing to a subgraph, $f_k$ can be represented as the sum along the branches of a weak$^*$-null countably branching tree of height $k$, say $(z_{\\n})_{\\n\\in [\\N]^{\\leq k}}$.\nMoreover the norms of the elements of this tree stabilize on each level towards values $(K_i)_{i=1}^k \\subset [0,\\omega(1)]$.\nApplying the existence of a $q$-AUS norm on $X^*$ one can show that $\\sum_{i=1}^k K_i^p \\leq c^p\\omega(1)^p$ where $c$ is a constant depending only on $X$.\nThe benefit of this observation is twofold.\nOn the one hand we will be able to construct two elements $\\n_0,\\m_0 \\in [\\N]^{l}$ (with $l\\leq k$) such that $\\sum_{i=1}^l z_{(n_1,\\ldots,n_i)}-z_{(m_1,\\ldots,m_i)}$ is small in norm (say less than $2c\\omega(1)$) while $d_{\\K}^l(\\n_0,\\m_0)$ is large (say $\\rho(d_{\\K}^l(\\n_0,\\m_0))> 3c\\omega(1)$). 
\nOn the other hand the $p$-AUS renormability of $X$ together with the quasi-reflexivity allows us to extend these elements to elements $\\n,\\m \\in [\\N]^k$ such that $\\dk(\\n,\\m)$ is still large and \n\\[\\begin{aligned}\n\\norm{\\sum_{i=l+1}^k z_{(n_1,\\ldots,n_i)}-z_{(m_1,\\ldots,m_i)}} &\\sim \\left(\\sum_{i=l+1}^k\\norm{z_{(n_1,\\ldots,n_i)}-z_{(m_1,\\ldots,m_i)}}^p\\right)^{1\/p}\\\\ &\\sim \\Big(\\sum_{i=l+1}^k K_i^p\\Big)^{1\/p}\\leq c\\omega(1) .\n\\end{aligned}\n\\]\nEventually, summing the tree from $1$ to $k$ over the branches ending with $\\n$ and $\\m$ we get the desired contradiction\n\\[\n 3c\\omega(1)<\\rho(\\dk(\\n,\\m))\\leq \\norm{f_k(\\n)-f_k(\\m)}\\leq 3c\\omega(1).\n\\]\n\n\\begin{proof}[Proof of Theorem \\ref{general}] Let us assume that there are two maps $\\rho, \\, \\omega \\colon [0,+\\infty) \\to [0,+\\infty)$ and maps $f_k \\colon ([\\N]^k,d_{\\K}^k) \\to (X^{**},\\|\\ \\|)$ for $k \\in \\N$ such that:\n\\begin{enumerate}[(i)]\n\\item $\\lim_{t \\to \\infty} \\rho(t) = \\infty$,\n\\item $\\omega(t) < \\infty$ for every $t \\in (0,+\\infty)$,\n\\item $\\rho(t) \\leq \\rho_{f_k}(t)$ and $\\omega_{f_k}(t) \\leq \\omega(t)$ for every $k \\in \\N$ and $t \\in (0,\\infty)$.\n\\end{enumerate}\nNote that all $f_k$'s are $\\omega(1)$-Lipschitz for $\\|\\ \\|$ and so $\\omega(1)>0$. Since all the sets $[\\N]^k$ are countable, we may and will assume that $X$, and therefore, by the quasi-reflexivity of $X$, all its iterated duals, are separable.\\\\\nLet us fix $N\\in \\N$. Pick $\\alpha\\in \\N$ such that $\\alpha\\ge \\frac{p}{q}$ and set $k=N^{1+\\alpha} \\in \\N$. We also fix $\\eta>0$. We shall provide at the end of our proof a contradiction if $N$ is chosen large enough and $\\eta$ small enough. We denote $\\|\\ \\|$ the original norm on $X$, as well as its dual and bidual norms. Let us assume, as we may, that $\\|\\ \\|$ is $p$-AUS on $X$. 
We denote by $\\overline{\\rho}_{\\|\\ \\|}$, or simply $\\overline{\\rho}_{X}$, its modulus of asymptotic uniform smoothness.\n\n\\medskip\nFor the first step of the proof we shall exploit the existence of an equivalent $q$-AUS norm $|\\ |$ on $X^*$ (we also denote by $|\\ |$ its dual norm on $X^{**}$). \nIt is worth mentioning that if $X$ is not reflexive, $|\\ |$ cannot be the dual norm of an equivalent norm on $X$ (see for instance Proposition 2.6 in \\cite{CauseyLancien}). \nAssume also that there exists $b>0$ such that\n\\begin{equation}\\label{equivalence}\n\\forall z \\in X^{**}\\ \\ b\\|z\\|\\le |z|\\le \\|z\\|.\n\\end{equation}\nThen we have that all $f_k$'s are also $\\omega(1)$-Lipschitz for $|\\ |$.\\\\\nBy Proposition \\ref{duality}, we have that there exists $c>0$ such that for all $t\\in [0,1]$, $\\overline{\\delta}_{|\\ |}^*(t)\\ge ct^p$. \nWe denote again\n$$\\delta(t)=\\int_0^t \\frac{\\overline{\\delta}_{|\\ |}^*(s)}{s}\\, ds.$$\nRecall that Lemma \\ref{Nnorm-lp} ensures the existence of $a>0$ such that for all $n\\in \\N$, $N_n^\\delta \\ge 2a\\|\\ \\|_{\\ell_p^n}$.\n\nFirst, using the separability of $X^*$ and Proposition \\ref{nulltree}, we may assume, by passing to a full subtree, that there exists a weak$^*$-null tree $(z(\\m))_{\\m\\in[\\N]^{\\le k}}$ in $X^{**}$ with $|z(\\m)| \\leq \\omega(1)$ for all $\\m\\in [\\N]^{\\leq k}\\setminus\\{\\emptyset\\}$ and so that\n$$\\forall \\n \\in [\\N]^k,\\ f_k(\\n)=\\sum_{i=0}^k z(n_1,\\ldots,n_i)=\\sum_{\\m \\preceq \\n} z(\\m).$$\n\nFor $r\\in \\N$ we denote $E_r=\\{\\m=(m_1,\\ldots,m_j)\\in [\\N]^{\\le k}\\setminus \\{\\emptyset\\},\\ m_j=r\\}$ and $F_r=\\bigcup_{u=1}^r E_u$. \nFix a sequence $(\\lambda_r)_{r=1}^\\infty$ in $(0,1)$ such that $\\prod_{r=1}^\\infty \\lambda_r > \\frac12$. 
\nWe now use Lemma~\\ref{l:asymptotic-Nnorm}~(i) and the fact that $(z(\\m))_{\\m\\in[\\N]^{\\le k}}$ is a weak$^*$-null tree to build inductively $n_1<\\ldotsj+2N+1$ such that\n\\begin{multline*}\n\\|x_{j+N+1}-x'_{j+N+1}+z_{j+N+2}-z'_{j+N+2}\\| \\\\\n\\le N_2^{\\overline{\\rho}_{X}}\\big(\\|x_{j+N+1}-x'_{j+N+1}\\|,\n\\|z_{j+N+2}-z'_{j+N+2}\\|\\big)+\\eta\n\\end{multline*}\nIt follows from (\\ref{stabilize}) that\n\\begin{multline*}\n\\|z_{j+N+1}-z'_{j+N+1}+z_{j+N+2}-z'_{j+N+2}\\| \\\\\n\\begin{aligned}\n&\\le N_2^{\\overline{\\rho}_{X}}\\big(\\|z_{j+N+1}-z'_{j+N+1}\\|+\\eta,\n\\|z_{j+N+2}-z'_{j+N+2}\\|\\big)+2\\eta\\\\\n&\\le N_2^{\\overline{\\rho}_{X}}\\Big(\\frac{2}{b}\\big(K_{j+N+1}+\\eta\\big)+\\eta,\\frac{2}{b}\\big(K_{j+N+2}+\\eta\\big)\\Big)+2\\eta.\n\\end{aligned}\n\\end{multline*}\nSimilarly, we can inductively find $m_{j+N+2}=u_{j+N+2}<\\cdots0$ such that $N_n^{\\overline{\\rho}_{X}}\\le C\\|\\ \\|_{\\ell_p^n}$ for all $n\\in \\N$ the above inequality yields \n$$\\Big\\|\\sum_{i=j+N+1}^k (z_i-z'_i)\\Big\\| \\le \\frac{2C}{b}\\Big(\\sum_{i=j+N+1}^k K_i^p\\Big)^{1\/p} +\\omega(1)\\le \\Big(\\frac{2CA}{b}+1\\Big)\\omega(1).$$\nFinally, combining the above estimate with (\\ref{samedebut}) and (\\ref{smallblock}), we get that\n$$\\|f(\\m)-f(\\overline{u})\\|\\le \\frac{3A+2CA+b}{b}\\omega(1).$$\nAs announced at the beginning of the proof, this yields a contradiction if $N$ was initially chosen, as it was possible, so that $\\rho(N)>\\frac{3A+2CA+b}{b}\\omega(1).$\n\\end{proof}\n\n\n\n\n\nUnlike reflexivity, quasi-reflexivity itself is not enough to prevent the Kalton graphs from embedding into a Banach space. We thank P. 
Motakis for showing us the next example.\n\n\\begin{prop}[Motakis]\\label{Pavlos} There exists a quasi-reflexive Banach space $X$ such that the family of graphs $([\\N]^k,d_\\K^k)_{k\\in \\N}$ equi-Lipschitz embeds into $X$.\n\\end{prop}\n\n\\begin{proof} The proof relies on the existence of a quasi-reflexive Banach space $X$ of order one which admits a spreading model, generated by a basis of $X$ that is equivalent to the summing basis $(s_n)_{n=1}^\\infty$ of $c_0$. This is shown in \\cite{FOSZ} (Proposition 3.2) and based on a construction given in \\cite{BHO}. We refer the reader to \\cite{BeauzLap} for the necessary definitions. Consequently, there exists a sequence $(x_n)_{n=1}^\\infty$ in $S_X$ and constants $A,B>0$ such that for all $k\\le n_1<\\cdots0\n\\end{align}\nfor every $s,t\\in[0,1]$ with $s0\n\\end{align}\nfor every $s,t\\in[0,1]$ with $s0,\\\\\n1 & \\text{if } \\kappa =0,\\\\\n\\dfrac{\\sinh(\\!\\sqrt{-\\kappa}\\,\\vartheta)}{\\sqrt{-\\kappa}\\,\\vartheta} & \\text{otherwise},\n\\end{cases}\\\\\n\\sigma_\\kappa^{(t)}(\\vartheta) &:= \\begin{cases} \\infty & \\textnormal{if }\\kappa\\,\\vartheta^2 \\geq \\pi^2,\\\\\nt\\,\\dfrac{\\mathfrak{S}_\\kappa(t\\,\\vartheta)}{\\mathfrak{S}_\\kappa(\\vartheta)} & \\textnormal{otherwise},\n\\end{cases}\n\\end{align*}\n\n\\begin{definition}\\label{Def:Dist coeff} For $K\\in\\R$ and $N\\in[1,\\infty)$, slightly abusing notation we set\n\\begin{align*}\n\\sigma_{K,N}^{(t)}(\\vartheta) &:= \\sigma_{K\/N}^{(t)}(\\vartheta),\\\\\n\\tau_{K,N}^{(t)}(\\vartheta) &:= t^{1\/N}\\,\\sigma_{K,N-1}^{(t)}(\\vartheta)^{1-1\/N}.\n\\end{align*}\n\\end{definition}\n\nNote that for every $t\\in(0,1)$ and every $\\vartheta > 0$, $\\smash{\\sigma_{K,N}^{(t)}(\\vartheta)}$ is continuous in $(K,N)\\in \\R\\times[1,\\infty)$, nondecreasing in $K$, and nonincreasing in $N$ \\cite[Rem.~2.2]{bacher2010}. Analogous claims apply to the quantity $\\smash{\\tau_{K,N}^{(t)}(\\vartheta)}$ \\cite[p.~138]{sturm2006b}. 
Furthermore, for every $t\\in [0,1]$, every $\\vartheta\\geq 0$, and every $\\kappa\\in (-\\infty,\\pi^2\/\\vartheta^2)$,\n\\begin{align}\\label{Eq:Distortion coeff property}\n\\sigma_\\kappa^{(t)}(\\vartheta) = \\sigma_{\\kappa\\vartheta^2}^{(t)}(1).\n\\end{align}\n\n\\begin{remark}\\label{Re:Lower bounds sigma} We recall the following elementary inequality from \\cite[Rem.~2.3]{cavalletti2017}: for every $K\\in\\R$, every $N\\in[1,\\infty)$, every $t\\in[0,1]$, and every $\\vartheta\\geq 0$,\n\\begin{align*}\n\\sigma_{K,N}^{(t)}(\\vartheta) \\geq t\\,{\\ensuremath{\\mathrm{e}}}^{-(1-t)\\vartheta\\sqrt{K^-\/N}}.\n\\end{align*}\n\\end{remark}\n\nLastly, in view of \\autoref{Th:Equivalence TCD* and TCDe} and \\autoref{Th:Equivalence TMCP* and TMCPe}, given any $t\\in[0,1]$ let us define ${\\ensuremath{\\mathrm{G}}}_t\\colon \\R^2 \\times (-\\infty,\\pi^2)\\to (-\\infty,\\infty]$ and ${\\ensuremath{\\mathrm{H}}}_t\\colon \\R\\times (-\\infty,\\pi^2)\\to (-\\infty,\\infty]$ by\n\\begin{align}\\label{Eq:GtHt}\n\\begin{split}\n{\\ensuremath{\\mathrm{G}}}_t(x,y,\\kappa) &:= \\log\\!\\big[\\sigma_\\kappa^{(1-t)}(1)\\,{\\ensuremath{\\mathrm{e}}}^x + \\sigma_\\kappa^{(t)}(1)\\,{\\ensuremath{\\mathrm{e}}}^y\\big]\\\\\n{\\ensuremath{\\mathrm{H}}}_t(x,\\kappa) &:= \\log\\!\\big[\\sigma_\\kappa^{(1-t)}(1)\\,{\\ensuremath{\\mathrm{e}}}^x\\big] = \\log\\sigma_\\kappa^{(1-t)}(1) + x.\n\\end{split}\n\\end{align}\nThen the functions ${\\ensuremath{\\mathrm{G}}}_t$ and ${\\ensuremath{\\mathrm{H}}}_t$ are jointly convex \\cite[Lem.~2.11]{erbar2015}.\n\n\\subsection{Nonsmooth Lorentzian spaces}\\label{Sub:Lorentzian nonsmooth} We continue with a concise digression on the theory of nonsmooth Lorentzian (pre-length, length, and geodesic) spaces. 
We refer to \\cite{cavalletti2020, kunzinger2018} for details, proofs, and examples about the corresponding notions.\n\n\\subsubsection{Chronology and causality} Let us fix a preorder $\\leq$ and a transitive relation $\\ll$, contained in $\\leq$, on $\\mms$. The triple $(\\mms,\\ll,\\leq)$ is called \\emph{causal space} \\cite[Def. 2.1]{kunzinger2018}. We say that $x,y\\in\\mms$ are \\emph{timelike} or \\emph{causally} related if $x\\ll y$ or $x\\leq y$, respectively. We write $x0$ if and only if $x\\ll y$, and\n\\item if $x\\leq y\\leq z$ we have the \\emph{reverse triangle inequality}\n\\begin{align}\\label{Eq:Reverse tau}\n\\uptau(x,z) \\geq \\uptau(x,y) + \\uptau(y,z).\n\\end{align}\n\\end{enumerate}\nThe existence of such a $\\uptau$ implies that $\\ll$ is an \\emph{open} relation \\cite[Prop.~2.13]{kunzinger2018}; in particular, the set $\\smash{I^\\pm(A)}$ is open for every $A\\subset\\mms$ \\cite[Lem.~2.12]{kunzinger2018}.\n\n\\begin{definition}\\label{Def:LLSSP} A \\emph{Lorentzian pre-length space} is a quintuple $(\\mms,\\met,\\ll,\\leq,\\uptau)$ which consists of a causal space $(\\mms,\\ll,\\leq)$ endowed with a proper metric $\\met$ and a time separation function $\\uptau$ as introduced above.\n\\end{definition}\n\n\n\\subsubsection{Length of curves}\\label{Sub:Length curves} Let $(\\mms,\\met,\\ll,\\leq,\\uptau)$ be a given Lorentzian pre-length space. 
A \\emph{path} designates a map $\\gamma\\colon [a,b]\\to\\mms$, where $a,b\\in\\R$ with $a0\n\\end{align}\nfor every $s,t\\in[0,1]$ with $s0$ such that the $\\met$-arclength of any causal curve contained in $C$ is no larger than $c$.\n\n\\begin{definition} The space $(\\mms,\\met,\\ll,\\leq,\\uptau)$ is \n\\begin{enumerate}[label=\\textnormal{\\alph*.}]\n\\item \\emph{globally hyperbolic} if it is non-totally imprisoning and the causal diamond $J(x,y)$ is compact for every $x,y\\in\\mms$, and\n\\item \\emph{${\\ensuremath{\\mathscr{K}}}$-globally hyperbolic} if it is non-totally imprisoning and the causal diamond $J(C_0,C_1)$ is compact for all compact $C_0,C_1\\subset\\mms$.\n\\end{enumerate}\n\\end{definition}\n\nWe list some properties of (${\\ensuremath{\\mathscr{K}}}$-)globally hyperbolic Lorentzian pre-length spaces which will lead to our standing \\autoref{Ass:ASS} below as well as useful consequences thereof. If $(\\mms,\\met,\\ll,\\leq,\\uptau)$ is locally causally closed, globally hyperbolic, and $I^\\pm(x) \\neq \\emptyset$ for every $x\\in \\mms$, then ${\\ensuremath{\\mathscr{K}}}$-global hyperbolicity holds \\cite[Lem.~1.5]{cavalletti2020}. On the other hand, every locally causally closed, ${\\ensuremath{\\mathscr{K}}}$-globally hyperbolic Lorentzian geodesic space is in fact causally closed \\cite[Lem.~1.6]{cavalletti2020}. By \\cite[Def.~3.25, Thm.~3.26]{kunzinger2018}, global hyperbolicity implies the nonsmooth analogue of the \\emph{strong causality condition} for smooth Lorentzian spacetimes, cf.~e.g.~\\cite[Def.~14.11]{oneill1983}. On every globally hyperbolic Lorentzian length space (see \\cite[Def.~3.22]{kunzinger2018} for the corresponding definition, we will not really need it), the time separation function $\\uptau$ is finite and continuous \\cite[Thm.~3.28]{kunzinger2018}; in particular, we have $\\uptau(x,x) = 0$ for every $x\\in\\mms$. 
Furthermore, every such space is geodesic by the nonsmooth analogue of Avez--Seifert's theorem \\cite[Thm.~3.30]{kunzinger2018}. In \\cite{burtscher2021}, a singular analogue of Geroch's characterization \\cite{geroch1970} of global hyperbolicity via Cauchy time functions is proven.\n\n\n\\subsection{Optimal transport on Lorentzian spaces}\\label{Sec:OT Lorentzian} Next, let us briefly review the theory of optimal transport on the class of spaces introduced above \\cite{cavalletti2020}. We refer to \\cite{eckstein2017,kellsuhr2020, mccann2020, mondinosuhr2018, suhr2018} for prior developments in the smooth case.\n\n\\subsubsection{Basic probabilistic notation} Let ${\\ensuremath{\\mathscr{P}}}(\\mms)$ denote the set of all Borel probability measures on $\\mms$. Let ${\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)$ and ${\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)$ be its subsets consisting of all compactly supported and $\\meas$-absolutely continuous elements, respectively; set $\\smash{{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)} := \\smash{{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)\\cap{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$. 
Given any $\\mu\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$, by $\\mu_\\perp$ we mean the $\\meas$-singular part in the corresponding Lebesgue decomposition of $\\mu\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$.\n\nFor a Borel map $F\\colon \\mms\\to \\mms'$ into a metric space $(\\mms',\\met')$, given any $\\mu\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$ the measure $F_\\push\\mu \\in{\\ensuremath{\\mathscr{P}}}(\\mms')$ designates the usual \\emph{push-forward} of $\\mu$ under $F$, defined by the formula $F_\\push\\mu[B] := \\smash{\\mu\\big[F^{-1}(B)\\big]}$ for every Borel set $B\\subset\\mms'$.\n\nGiven $\\mu,\\nu\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$, let $\\Pi(\\mu,\\nu)$ denote the set of all \\emph{couplings} of $\\mu$ and $\\nu$, i.e.~all $\\pi\\in{\\ensuremath{\\mathscr{P}}}(\\mms^2)$ with $\\pi[\\, \\cdot\\times\\mms] = \\mu$ and $\\pi[\\mms\\times\\cdot\\, ] = \\nu$.\n\nWith $\\Cont([0,1];\\mms)$ denoting the set of all curves $\\gamma\\colon [0,1]\\to\\mms$, endowed with the uniform topology, for $t\\in [0,1]$ the so-called \\emph{evaluation map} $\\eval_t\\colon \\Cont([0,1];\\mms) \\to \\mms$ is defined through $\\eval_t(\\gamma) := \\gamma_t$.\n\n\\subsubsection{Chronological and causal couplings} Let $(\\mms,\\met,\\ll,\\leq,\\uptau)$ be a Lorentzian pre-length space. We define the (possibly empty) set $\\Pi_\\ll(\\mu,\\nu)$ of all \\emph{chronological couplings} of two probability measures $\\mu,\\nu\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$ to consist of all $\\pi\\in\\Pi(\\mu,\\nu)$ such that $\\smash{\\pi[\\mms_\\ll^2]=1}$. The set $\\Pi_\\leq(\\mu,\\nu)$ of all \\emph{causal couplings} of $\\mu$ and $\\nu$ is defined analogously. 
Under causal closedness, clearly $\\pi\\in\\Pi_\\leq(\\mu,\\nu)$ if and only if $\\pi\\in\\Pi(\\mu,\\nu)$ as well as $\\smash{\\supp\\pi\\subset\\mms_\\leq^2}$; an analogous statement holds for the locally causally closed case if $\\mu,\\nu\\in{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)$.\n\nA chronological or causal coupling of $\\mu,\\nu\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$ intuitively describes a way of transporting an infinitesimal mass portion ${\\ensuremath{\\mathrm{d}}}\\mu(x)$ to an infinitesimal mass portion ${\\ensuremath{\\mathrm{d}}}\\nu(y)$ as to guarantee $x\\ll y$ or $x\\leq y$, respectively.\n\nWe call $\\mu,\\nu\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$ \\emph{chronologically related} if $\\Pi_\\ll(\\mu,\\nu)\\neq\\emptyset$.\n\n\\subsubsection{The $\\smash{\\ell_p}$-optimal transport problem} Given an exponent $p\\in (0,1]$ and following the expositions in \\cite{cavalletti2020,mccann2020} we will adopt the conventions\n\\begin{align*}\n\\sup\\emptyset &:= -\\infty,\\\\\n(-\\infty)^p &:= (-\\infty)^{1\/p} := -\\infty,\\\\\n\\infty -\\infty &:=-\\infty. \\textcolor{white}{a^{1\/p}}\n\\end{align*}\n\nLet the total transport cost function $\\ell_p\\colon{\\ensuremath{\\mathscr{P}}}(\\mms)^2\\to [0,\\infty]\\cup\\{-\\infty\\}$ be given by\n\\begin{align*}\n\\ell_p(\\mu,\\nu) &:= \\sup\\{\\Vert \\uptau\\Vert_{\\Ell^p(\\mms^2,\\pi)} : \\pi \\in \\Pi_\\leq(\\mu,\\nu)\\}\\\\\n&\\textcolor{white}{:}= \\sup\\{\\Vert l\\Vert_{\\Ell^p(\\mms^2,\\pi)} : \\pi\\in\\Pi(\\mu,\\nu)\\},\n\\end{align*}\nwhere $l\\colon \\mms^2 \\to [0,\\infty]\\cup\\{-\\infty\\}$ is defined through\n\\begin{align*}\nl^p(x,y) := \\begin{cases} \\uptau^p(x,y) & \\textnormal{if }x\\leq y,\\\\\n-\\infty & \\textnormal{otherwise}.\n\\end{cases}\n\\end{align*}\n\n\\begin{remark}\nThe sets of maximizers for both suprema defining $\\smash{\\ell_p(\\mu,\\nu)}$ coincide, including the case $\\smash{\\Pi_\\leq(\\mu,\\nu)=\\emptyset}$. 
One advantage of the formulation via $l$ is that under (local) causal closedness and global hyperbolicity, $l^p$ is upper semicontinuous. In this case, customary optimal transport techniques \\cite{ambrosiogigli2013, villani2009} are applicable to study the second problem, which in turn yields corresponding results for the first \\cite[Rem.~2.2]{cavalletti2020}. Note that the preimages $l^{-1}([0,\\infty))$ and $l^{-1}((0,\\infty))$ encode causality and chronology of points in $\\mms^2$, respectively.\n\\end{remark}\n\nA coupling $\\pi\\in\\Pi(\\mu,\\nu)$ of $\\mu,\\nu\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$ is called \\emph{$\\ell_p$-optimal} if $\\pi\\in\\Pi_\\leq(\\mu,\\nu)$ and \n\\begin{align*}\n\\ell_p(\\mu,\\nu) = \\Vert\\uptau\\Vert_{\\Ell^p(\\mms^2,\\pi)} = \\Vert l \\Vert_{\\Ell^p(\\mms^2,\\pi)}.\n\\end{align*}\nFor existence of such couplings, for our work it will suffice to keep the following in mind. If $(\\mms,\\met,\\ll,\\leq,\\uptau)$ is locally causally closed and globally hyperbolic, and if $\\mu,\\nu\\in{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)$ with $\\Pi_\\leq(\\mu,\\nu)\\neq\\emptyset$, existence of an $\\smash{\\ell_p}$-optimal coupling $\\pi$ of $\\mu$ and $\\nu$ holds, and $\\ell_p(\\mu,\\nu) <\\infty$ \\cite[Prop.~2.3]{cavalletti2020}.\n\nA key property of $\\smash{\\ell_p}$ is the \\emph{reverse triangle inequality} \\cite[Prop. 
2.5]{cavalletti2020} somewhat reminiscent of \\eqref{Eq:Reverse tau} and of $l$ satisfying the reverse triangle inequality for \\emph{every}, i.e.~not necessarily causally related, $x,y,z\\in\\mms$: for \\emph{every} $\\mu,\\nu,\\sigma\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$,\n\\begin{align}\\label{Eq:Reverse triangle lp}\n\\ell_p(\\mu,\\sigma) \\geq \\ell_p(\\mu,\\nu) + \\ell_p(\\nu,\\sigma).\n\\end{align}\n\n\\subsubsection{Timelike $p$-dualizability}\\label{Sub:Timelike dual} The concept of so-called \\textit{\\textnormal{(}strong\\textnormal{)} timelike $p$-dualiza\\-bi\\-lity}, where $p\\in(0,1]$, of pairs $(\\mu,\\nu)\\in{\\ensuremath{\\mathscr{P}}}(\\mms)^2$ originates in \\cite{cavalletti2020} and generalizes the notion of \\emph{$p$-separation} introduced in \\cite[Def.~4.1]{mccann2020}. Pairs satisfying this condition admit good duality properties \\cite[Prop.~2.19, Prop.~2.21, Thm.~2.26]{cavalletti2020}, which leads to a characterization of $\\smash{\\ell_p}$-geodesics (see \\autoref{Sub:Geodesics} below) in the smooth case \\cite[Thm.~4.3, Thm.~5.8]{mccann2020}, cf.~\\autoref{Th:Equiv}.\n\nIn view of the following \\autoref{Def:TL DUAL}, cf.~\\cite[Def.~2.18, Def.~2.27]{cavalletti2020}, we refer to \\cite[Def.~2.6]{cavalletti2020} for the definition of \\emph{cyclical monotonicity} of a set in $\\smash{\\mms_\\leq^2}$ with respect to $l^p$, see also \\cite[Def.~5.1]{villani2009}. 
It will not be relevant in our work.\n\nGiven $a,b\\colon\\mms\\to\\R$ we define $a\\oplus b\\colon\\mms^2\\to\\R$ by $(a\\oplus b)(x,y) := a(x) + b(y)$.\n\n\\begin{definition}\\label{Def:TL DUAL} Given any $p\\in (0,1]$, a pair $(\\mu,\\nu)\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$ is called \n\\begin{enumerate}[label=\\textnormal{\\alph*.}]\n\\item \\emph{timelike $p$-dua\\-lizable} by $\\pi\\in \\Pi_\\ll(\\mu,\\nu)$ if $\\pi$ is $\\smash{\\ell_p}$-optimal, and there exist Borel functions $a,b\\colon\\mms\\to\\R$ such that $a\\oplus b\\in\\Ell^1(\\mms^2,\\mu\\otimes\\nu)$ as well as $l^p\\leq a\\oplus b$ on $\\supp\\mu\\times\\supp\\nu$,\n\\item \\emph{strongly timelike $p$-dualizable} by $\\pi\\in\\Pi_\\ll(\\mu,\\nu)$ if the pair $(\\mu,\\nu)$ is timelike $p$-dualizable by $\\pi$, and there is an $l^p$-cyclically monotone Borel set $\\Gamma\\subset \\smash{\\mms_\\ll^2\\cap(\\supp\\mu_0\\times\\supp\\mu_1)}$ such that every given $\\sigma\\in \\Pi_\\leq(\\mu_0,\\mu_1)$ is $\\ell_p$-optimal if and only if $\\sigma[\\Gamma]=1$ \\textnormal{(}in particular, $\\sigma$ is chronological\\textnormal{)}, and\n\\item \\emph{timelike $p$-dualizable} if $(\\mu,\\nu)$ is timelike $p$-dualizable by some coupling $\\pi\\in\\Pi_\\ll(\\mu,\\nu)$; analogously for strong timelike $p$-dualizability.\n\\end{enumerate}\n\nMoreover, any $\\pi$ as in the above items is called \\emph{timelike $p$-dualizing}. 
\n\\end{definition}\n\n\n\\begin{remark}\\label{Re:Strong timelike} If $\\mu,\\nu\\in{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)$ then the pair $(\\mu,\\nu)$ is timelike $p$-dualizable if and only if there is an $\\ell_p$-optimal coupling $\\smash{\\pi\\in\\Pi_\\leq(\\mu,\\nu)}$ which is concentrated on $\\smash{\\mms_\\ll^2}$.\n\nIf $\\mu,\\nu\\in{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)$ on a causally closed, globally hyperbolic Lorentzian geodesic space $(\\mms,\\met,\\ll,\\leq,\\uptau)$ with $\\smash{\\supp\\mu\\times\\supp\\nu\\subset\\mms_\\ll^2}$, then $(\\mu,\\nu)$ is automatically strongly timelike $p$-dualizable, $p\\in (0,1]$ \\cite[Cor.~2.29]{cavalletti2020}, see also \\cite[Lem.~4.4, Thm.~7.1]{mccann2020}. \n\\end{remark}\n\n\\subsubsection{Geodesics revisited}\\label{Sub:Geodesics} The definitions we consider need the notion of an \\emph{$\\smash{\\ell_p}$-geodesic}, $p\\in (0,1]$, to be made precise. We propose \\autoref{Def:Lorentzian geodesic} as Lorentzian version of Wasserstein geodesics in metric spaces. Precise technical results will be deferred to \\autoref{App:B}.\n\nSolely in this subsection, for simplicity we assume compactness of $\\mms$; the results derived here and in \\autoref{App:B} will mostly be used by replacing $\\mms$ by a causal diamond of two compact sets, which is compact under ${\\ensuremath{\\mathscr{K}}}$-global hyperbolicity.\n\nRecall the continuous reparametrization map $\\smash{{\\ensuremath{\\mathsf{r}}}\\colon \\mathrm{TGeo}(\\mms) \\to \\mathrm{TGeo}^\\uptau(\\mms)}$ and the definition $\\mathrm{TGeo}^\\uptau(\\mms) := {\\ensuremath{\\mathsf{r}}}(\\mathrm{TGeo}(\\mms))$ from \\autoref{Sub:GEO}. If $(\\mms,\\met,\\ll,\\leq,\\uptau)$ is causally closed, the set $\\Geo(\\mms)$ is Polish \\cite[Prop.~3.17]{kunzinger2018}. Since $\\mathrm{TGeo}(\\mms)$ is open, $\\mathrm{TGeo}^\\uptau(\\mms)$ is a Suslin set, hence universally measurable. 
\n\nGiven any $\\mu_0,\\mu_1\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$, we define\n\\begin{align*}\n\\OptGeo_{\\ell_p}(\\mu_0,\\mu_1) &:= \\{{\\ensuremath{\\boldsymbol{\\pi}}}\\in{\\ensuremath{\\mathscr{P}}}(\\Geo(\\mms)) : (\\eval_0,\\eval_1)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\Pi_\\leq(\\mu_0,\\mu_1)\\\\\n&\\qquad\\qquad \\textnormal{ is }\\ell_p\\textnormal{-optimal}\\},\\\\\n\\mathrm{OptTGeo}_{\\ell_p}(\\mu_0,\\mu_1) &:= \\{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\OptGeo_{\\ell_p}(\\mu_0,\\mu_1) : (\\eval_0,\\eval_1)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}[\\mms_\\ll^2]=1\\},\\\\\n\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1) &:= {\\ensuremath{\\mathsf{r}}}_\\push\\mathrm{OptTGeo}_{\\ell_p}(\\mu_0,\\mu_1).\n\\end{align*}\nWe say that a measure ${\\ensuremath{\\boldsymbol{\\pi}}}\\in{\\ensuremath{\\mathscr{P}}}(\\Cont([0,1];\\mms))$ \\emph{represents} a curve $(\\mu_t)_{t\\in[0,1]}$ in ${\\ensuremath{\\mathscr{P}}}(\\mms)$ provided $\\mu_t=(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}$ for every $t\\in[0,1]$.\n\n\\begin{definition}\\label{Def:Lorentzian geodesic} We term a weakly continuous path $(\\mu_t)_{t\\in[0,1]}$ in ${\\ensuremath{\\mathscr{P}}}(\\mms)$\n\\begin{enumerate}[label=\\textnormal{\\alph*.}]\n\\item \\emph{causal $\\ell_p$-geodesic} if it is represented by some $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\OptGeo_{\\ell_p}(\\mu_0,\\mu_1)}$,\n\\item \\emph{timelike $\\ell_p$-geodesic} if it is represented by some $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}(\\mu_0,\\mu_1)}$, and\n\\item\\label{Spans} \\emph{timelike proper-time parametrized $\\ell_p$-geodesic} if it is represented by some $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$.\n\\end{enumerate}\n\nA measure ${\\ensuremath{\\boldsymbol{\\pi}}}$ as in the last item will be called \\emph{$\\smash{\\ell_p}$-optimal geodesic plan}.\n\\end{definition}\n\n\\begin{remark}\\label{Re:Lip} If 
$(\\mms,\\met,\\ll,\\leq,\\uptau)$ is ${\\ensuremath{\\mathscr{K}}}$-globally hyperbolic, every causal or timelike $\\smash{\\ell_p}$-geodesic connecting compactly supported $\\mu_0$ and $\\mu_1$ is Lipschitz continuous with respect to the $q$-Wasserstein metric $W_q$ induced by $(\\mms,\\met)$ for every $q\\in[1,\\infty]$.\n\\end{remark}\n\nBy construction, we have ${\\ensuremath{\\boldsymbol{\\pi}}}[\\mathrm{TGeo}(\\mms)]=1$ for every $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}(\\mu_0,\\mu_1)}$. Consequently, ${\\ensuremath{\\boldsymbol{\\pi}}}[\\mathrm{TGeo}^\\uptau(\\mms)]=1$ for every $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$, and therefore ${\\ensuremath{\\boldsymbol{\\pi}}}$-a.e.~$\\gamma\\in\\Cont([0,1];\\mms)$ obeys $\\uptau(\\gamma_s,\\gamma_t) = (t-s)\\,\\uptau(\\gamma_0,\\gamma_1)>0$ for every $s,t\\in [0,1]$ with $s 0\n\\end{align*}\nfor every timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ and every $s,t\\in[0,1]$ with $s0$, the rescaled measured Lorentzian pre-length space $(\\mms, a\\met, b\\meas,\\ll,\\leq, \\theta\\uptau)$ satisfies $\\smash{\\mathrm{TCD}_p^*(K\/\\theta^2,N)}$.\n\\end{enumerate}\n\nAnalogous statements are obeyed by the conditions $\\smash{\\mathrm{wTCD}_p^*(K,N)}$, $\\smash{\\mathrm{TCD}_p(K,N)}$, and $\\smash{\\mathrm{wTCD}_p(K,N)}$. \n\\end{proposition}\n\n\\begin{proof} Concerning \\ref{La:DieEINS}, for all four conditions consistency in $K$ follows by nondecreasingness of $\\smash{\\sigma_{K,N}^{(r)}(\\vartheta)}$ and $\\smash{\\tau_{K,N}^{{(r)}}(\\vartheta)}$ in $K\\in\\R$ for fixed $r\\in[0,1]$, $N\\in [1,\\infty)$, and $\\vartheta\\geq 0$. 
Consistency in $N$ is clear since the defining inequality, in either case, is asked to hold for every $N''\\geq N$, so is particularly satisfied for every $N''\\geq N'$.\n\nItem \\ref{La:DieZWEI} follows from the scaling properties \n\\begin{align*}\n\\sigma_{K\/\\theta^2,N'}^{(r)}(\\theta\\,\\vartheta) &= \\sigma_{K,N'}^{(r)}(\\vartheta),\\\\\n\\tau_{K\/\\theta^2,N'}^{(r)}(\\theta\\,\\vartheta) &= \\tau_{K,N'}^{(r)}(\\vartheta).\\qedhere\n\\end{align*}\n\\end{proof}\n\nLastly, recall that the $N$-R\u00e9nyi entropy ${\\ensuremath{\\mathscr{S}}}_N(\\mu)$, $N\\in[1,\\infty)$, at $\\mu\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$ only depends on the $\\meas$-absolutely continuous part of $\\mu$, which might be trivial. Along those timelike proper-time parametrized $\\smash{\\ell_p}$-geodesics in \\autoref{Def:TCD*} or \\autoref{Def:TCD}, this does not happen; in fact, under mild additional hypotheses, these always consist of $\\meas$-absolutely continuous measures. This is addressed in \\autoref{Le:Stu} and will be useful at various occasions. (We also use variants of it without explicit notice occasionally, e.g.~in \\autoref{Sub:Local global}.) Its proof uses \\eqref{Eq:Ent SN limit} and is analogous to the one of \\cite[Prop.~1.6]{sturm2006b}, hence omitted. \n\n\\begin{lemma}\\label{Le:Stu} Fix $p\\in (0,1)$, $K\\in\\R$, and $N\\in[1,\\infty)$. 
Let $(\\mu_0,\\mu_1) = (\\rho_0\\,\\meas,\\mu_1 \\,\\meas)\\in({\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)\\cap \\Dom(\\Ent_\\meas))^2$, let $\\smash{\\mu\\in{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)}$, and let $\\smash{\\pi\\in\\Pi(\\mu_0,\\mu_1)}$ be a coupling of $\\mu_0$ and $\\mu_1$ such that for every $N'\\geq N$, \n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu)&\\leq -\\int_{\\mms^2} \\sigma_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\rho_0(x^0)^{-1\/N'}\\d\\pi(x^0,x^1)\\\\\n&\\qquad\\qquad - \\int_{\\mms^2} \\sigma_{K,N'}^{(t)}(\\uptau(x^0,x^1))\\,\\rho_1(x^1)^{-1\/N'}\\d\\pi(x^0,x^1).\n\\end{align*}\nThen $\\smash{\\mu = \\rho\\,\\meas\\in\\Dom(\\Ent_\\meas)}$ with\n\\begin{align*}\n\\Ent_\\meas(\\mu) \\leq (1-t)\\Ent_\\meas(\\mu_0) + t\\Ent_\\meas(\\mu_1) - \\frac{K}{2}\\,t(1-t)\\,\\big\\Vert \\uptau\\big\\Vert_{\\Ell^2(\\mms^2,\\pi)}^2.\n\\end{align*}\n\\end{lemma}\n\n\\begin{remark} Using \\autoref{Le:Villani lemma for geodesic}, the assumptions in \\autoref{Le:Stu} are satisfied for $\\mu := \\mu_t$, $t\\in[0,1]$, and $\\pi$ if these objects are coming from a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic as well as a timelike $p$-dualizing coupling, respectively, witnessing the inequality defining $\\smash{\\mathrm{TCD}_p^*(K,N)}$.\n\\end{remark}\n\n\\begin{corollary} For every $p\\in (0,1)$, $K\\in\\R$, and $N\\in[1,\\infty)$, the $\\smash{\\mathrm{wTCD}_p^*(K,N)}$ condition implies $\\smash{\\mathrm{wTCD}_p(K,\\infty)}$ in the sense of \\textnormal{\\cite[Def.~4.1]{braun2022}}.\n\\end{corollary}\n\n\n\\subsection{Geometric inequalities} Now we derive fundamental geometric inequalities from the conditions introduced in \\autoref{Ch:TCD conditions}. We start with the more restrictive $\\smash{\\mathrm{wTCD}_p(K,N)}$-case which implies sharp versions of these facts, cf.~\\autoref{Cor:HD}; \\autoref{Sub:Versions reduced} provides nonsharp extensions for $\\smash{\\mathrm{wTCD}_p^*(K,N)}$-spaces. 
All proofs are variations of standard arguments \\cite{sturm2006b}, see also \\cite{bacher2010,cavalletti2020}.\n\nSome of the geometric inequalities to follow hold in fact under weaker timelike measure-contraction properties, cf.~\\autoref{Pr:TMCP to TCD} and \\autoref{Re:Geom inequ TMCP}.\n\nMoreover, all results in this section also hold if ${\\ensuremath{\\mathscr{X}}}$ is a locally causally closed and globally hyperbolic measured Lorentzian geodesic space.\n\n\\subsubsection{Sharp timelike Brunn--Minkowski inequality}\n\nTo introduce the \\emph{timelike Brunn--Minkowski inequality} in the next \\autoref{Pr:Brunn-Minkowski}, given $A_0,A_1\\subset \\mms$ and $t\\in [0,1]$, define the set of timelike $t$-intermediate points between $A_0$ and $A_1$ as\n\\begin{align*}\nA_t := \\{\\gamma_t : \\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms),\\, \\gamma_0 \\in A_0,\\, \\gamma_1\\in A_1\\}.\n\\end{align*}\nFurthermore, define \n\\begin{align}\\label{Eq:THETA}\n\\Theta := \\begin{cases} \\sup\\uptau(A_0\\times A_1) & \\textnormal{if }K<0,\\\\\n\\inf\\uptau(A_0\\times A_1) & \\textnormal{otherwise}.\n\\end{cases}\n\\end{align}\n\n\\begin{proposition}\\label{Pr:Brunn-Minkowski} Assume $\\smash{\\mathrm{wTCD}_p(K,N)}$ for some $p\\in (0,1)$, $K\\in\\R$, and $N\\in [1,\\infty)$. 
Let $A_0,A_1\\subset\\mms$ be relatively compact Borel sets with $\\meas[A_0]\\,\\meas[A_1] >0$, and assume strong timelike $p$-dualizability of \n\\begin{align*}\n(\\mu_0,\\mu_1) := (\\meas[A_0]^{-1}\\,\\meas\\mres A_0,\\meas[A_1]^{-1}\\, \\meas\\mres A_1).\n\\end{align*}\nThen for every $t\\in [0,1]$ and every $N'\\geq N$, \n\\begin{align}\\label{Eq:BM}\n\\meas[A_t]^{1\/N'} \\geq \\tau_{K,N'}^{(1-t)}(\\Theta)\\,\\meas[A_0]^{1\/N'} + \\tau_{K,N'}^{(t)}(\\Theta)\\,\\meas[A_1]^{1\/N'}.\n\\end{align}\n\nAssuming $\\smash{\\mathrm{TCD}_p(K,N)}$ in place of $\\smash{\\mathrm{wTCD}_p(K,N)}$, the same conclusion holds if $(\\mu_0,\\mu_1)$ is merely timelike $p$-dualizable.\n\\end{proposition}\n\n\\begin{proof} Without loss of generality, we assume $N' > 1$ and $\\meas[A_t] < \\infty$. Let a time\\-like proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in [0,1]}$ connecting $\\mu_0$ to $\\mu_1$ and a timelike $p$-dualizing $\\pi\\in\\smash{\\Pi_\\ll(\\mu_0,\\mu_1)}$ witness the semiconvexity inequality defining $\\smash{\\mathrm{wTCD}_p(K,N)}$. Note that $\\supp\\mu_t\\subset A_t$ and, by $\\smash{\\mathrm{wTCD}_p(K,N)}$, that $\\meas[A_t]>0$. Let $\\rho_t$ denote the density of the absolutely continuous part of $\\mu_t$ with respect to $\\meas$. 
Employing Jensen's inequality, and $\\smash{\\mathrm{wTCD}_p(K,N)}$ again we obtain\n\\begin{align*}\n&\\tau_{K,N}^{(1-t)}(\\Theta)\\,\\meas[A_0]^{1\/N'} + \\tau_{K,N}^{(t)}(\\Theta)\\,\\meas[A_1]^{1\/N'}\\\\\n&\\qquad\\qquad\\leq \\int_{A_t}\\rho_t^{1-1\/N'}\\d\\meas \\leq \\meas[A_t]\\,\\Big[\\!\\fint_{A_t}\\rho_t\\d\\meas\\Big]^{1-1\/N'} \\leq \\meas[A_t]^{-1\/N'},\n\\end{align*}\nwhich is the desired claim.\n\nThe proof of the last statement is completely analogous.\n\\end{proof}\n\n\n\\begin{remark} Note that in the case $K\\geq 0$, \\eqref{Eq:BM} implies\n\\begin{align*}\n\\meas[A_t]^{1\/N'} \\geq (1-t)\\,\\meas[A_0]^{1\/N'} + t\\,\\meas[A_1]^{1\/N'}.\n\\end{align*}\n\\end{remark}\n\n\\begin{remark} Recall from \\autoref{Re:Strong timelike} that the strong timelike $p$-dualizability hy\\-pothesis for $(\\mu_0,\\mu_1)$ in \\autoref{Pr:Brunn-Minkowski} is satisfied if $\\smash{A_0 \\times A_1 \\subset \\mms_\\ll^2}$ provided the space ${\\ensuremath{\\mathscr{X}}}$ is locally causally closed, globally hyperbolic, and geodesic.\n\\end{remark}\n\n\n\\subsubsection{Sharp timelike Bonnet--Myers inequality} Next, an immediate consequence of \\autoref{Pr:Brunn-Minkowski} is the subsequent \\emph{timelike Bonnet--Myers inequality}.\n\n\\begin{corollary}\\label{Cor:Bonnet-Myers} Assume the $\\smash{\\mathrm{wTCD}_p(K,N)}$ condition for some $p\\in(0,1)$, $K>0$, and $N\\in [1,\\infty)$. Then \n\\begin{align*}\n\\sup\\uptau(\\mms^2) \\leq \\pi\\sqrt{\\frac{N-1}{K}}.\n\\end{align*}\n\\end{corollary}\n\n\\begin{proof} Suppose to the contrapositive that for some $\\varepsilon > 0$, we have\n\\begin{align*}\n\\uptau(z_0,z_1) \\geq \\pi\\sqrt{\\frac{N-1}{K}}+4\\varepsilon\n\\end{align*}\nfor two given points $z_0,z_1\\in\\mms$. 
By continuity of $\\uptau$, we fix $\\delta > 0$ and $x_0,x_1\\in\\mms$ such that $\\smash{A_0 := {\\ensuremath{\\mathsf{B}}}^\\met(x_0,\\delta) \\subset I^+(z_0)}$, $\\smash{A_1 := {\\ensuremath{\\mathsf{B}}}^\\met(x_1,\\delta) \\subset I^-(z_1)}$, and \n\\begin{align*}\n\\inf\\uptau({\\ensuremath{\\mathsf{B}}}^\\met(x_0,\\delta) \\times {\\ensuremath{\\mathsf{B}}}^\\met(x_1,\\delta)) \\geq \\pi\\sqrt{\\frac{N-1}{K}}+\\varepsilon.\n\\end{align*}\nThis implies strong timelike $p$-dualizability of $\\smash{(\\meas[A_0]^{-1}\\,\\meas\\mres{A_0},\\meas[A_1]^{-1}\\,\\meas\\mres{A_1})}$ by \\autoref{Re:Strong timelike}. Hence \\autoref{Pr:Brunn-Minkowski} is applicable, and the inherent inequality $\\smash{\\Theta \\geq \\pi\\sqrt{(N-1)\/K}+\\varepsilon}$ together with the definition of $\\smash{\\tau_{K,N}^{(r)}}$ gives $\\smash{\\meas[A_{1\/2}]=\\infty}$. On the other hand, the set $\\smash{A_{1\/2}}$ is relatively compact since $\\smash{A_{1\/2} \\subset J(z_0,z_1)}$ by global hy\\-per\\-bolicity, which makes the situation $\\meas[A_{1\/2}]=\\infty$ impossible.\n\\end{proof}\n\n\\begin{remark} \\autoref{Cor:Bonnet-Myers} implies that, under its assumptions, the $\\uptau$-length of every causal curve in $\\mms$ is no larger than $\\smash{\\pi\\sqrt{(N-1)\/K}}$. Moreover, if $K>0$ and $N=1$ then $\\mms$ does not contain any chronologically related pair of points.\n\\end{remark}\n\n\\subsubsection{Sharp timelike Bishop--Gromov inequality}\\label{Sub:Sharp BG inequ} To prove volume growth estimates \u00e0 la \\emph{Bishop--Gromov}, cf.~\\autoref{Th:BG} below, we recall the notion of $\\uptau$-star-shaped sets from \\cite[Sec.~3.1]{cavalletti2020}; this is used to localize the volume of $\\uptau$-balls \n\\begin{align}\\label{Eq:TAU BALLS}\n{\\ensuremath{\\mathsf{B}}}^\\uptau(x,r) := \\{y\\in I^+(x) : \\uptau(x,y)< r\\}\\cup\\{x\\},\n\\end{align}\nwith $x\\in\\mms$ and $r>0$, which typically have infinite volume. 
\n\nA set $\\smash{E\\subset I^+(x)\\cup\\{x\\}}$ is termed \\emph{$\\uptau$-star-shaped} with respect to $x\\in\\mms$ if for every $\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$ with $\\gamma_0 = x$ and $\\gamma_1 \\in E$, we have $\\gamma_t\\in E$ for every $t\\in (0,1)$. For such $E$ and $x$, and given $r>0$, we define the quantities\n\\begin{align*}\n{\\ensuremath{\\mathrm{v}}}_r &:= \\meas\\big[\\bar{{\\ensuremath{\\mathsf{B}}}}^\\uptau(x,r)\\cap E\\big],\\\\\n{\\ensuremath{\\mathrm{s}}}_r &:= \\limsup_{\\delta \\to 0} \\delta^{-1}\\,\\meas\\big[(\\bar{{\\ensuremath{\\mathsf{B}}}}^\\uptau(x,r+\\delta)\\setminus {\\ensuremath{\\mathsf{B}}}^\\uptau(x,r))\\cap E\\big].\n\\end{align*}\nWhenever confusion is excluded, we avoid making notationally explicit the dependency of ${\\ensuremath{\\mathrm{v}}}_r$ and ${\\ensuremath{\\mathrm{s}}}_r$ on $E$ and $x$. Lastly, we set\n\\begin{align}\\label{Eq:integral def}\n\\mathfrak{v}_{K,N}(r) := \\Big[\\!\\int_0^r \\mathfrak{s}_{K,N-1}(s)^{N-1}\\d s\\Big]^{1\/N}.\n\\end{align}\n\nIn the next result, we employ the convention $1\/0 := \\infty$.\n\n\\begin{theorem}\\label{Th:BG} Assume $\\smash{\\mathrm{wTCD}_p(K,N)}$ for some $p\\in (0,1)$, $K\\in\\R$, and $N\\in (1,\\infty)$. Let $\\smash{E\\subset I^+(x)\\cup\\{x\\}}$ be a compact set which is $\\uptau$-star-shaped with respect to $x\\in\\mms$. Then for every $r,R > 0$ with $\\smash{r < R \\leq \\pi\\sqrt{(N-1)\/\\max\\{K,0\\}}}$,\n\\begin{align*}\n\\frac{{\\ensuremath{\\mathrm{s}}}_r}{{\\ensuremath{\\mathrm{s}}}_R} &\\geq \\Big[\\frac{\\mathfrak{s}_{K,N-1}(r)}{\\mathfrak{s}_{K,N-1}(R)}\\Big]^{N-1},\\\\\n\\frac{{\\ensuremath{\\mathrm{v}}}_r}{{\\ensuremath{\\mathrm{v}}}_R} &\\geq \\Big[\\frac{\\mathfrak{v}_{K,N}(r)}{\\mathfrak{v}_{K,N}(R)}\\Big]^N.\n\\end{align*}\n\\end{theorem}\n\n\\begin{proof} We assume $K>0$; the remaining cases are argued similarly. 
Define $t := r\/R$, let $\\delta > 0$, and let $\\varepsilon > 0$ be small enough such that $\\smash{A_0 \\times A_1 \\subset \\mms_\\ll^2}$, where $\\smash{A_0 := {\\ensuremath{\\mathsf{B}}}^\\uptau(x,\\varepsilon)\\cap E}$ and $A_1 := (\\bar{{\\ensuremath{\\mathsf{B}}}}^\\uptau(x,R+\\delta R) \\setminus {\\ensuremath{\\mathsf{B}}}^\\uptau(x,R))\\cap E$. Thus $(\\meas[A_0]^{-1}\\,\\meas\\mres{A_0},\\meas[A_1]^{-1}\\,\\meas\\mres{A_1})$ is strongly time\\-like $p$-dualizable according to \\autoref{Re:Strong timelike}. Now we observe that $\\smash{A_t \\subset ({\\ensuremath{\\mathsf{B}}}^\\uptau(x,r+\\delta r)\\setminus {\\ensuremath{\\mathsf{B}}}^\\uptau(x,r))\\cap E}$. By the reverse triangle inequality for $\\uptau$, we get $\\Theta \\leq R+\\delta R$. Employing \\autoref{Pr:Brunn-Minkowski}, we therefore obtain\n\\begin{align*}\n&\\meas\\big[(\\bar{{\\ensuremath{\\mathsf{B}}}}^\\uptau(x,r+\\delta r)\\setminus {\\ensuremath{\\mathsf{B}}}^\\uptau(x,r))\\cap E\\big]^{1\/N}\\\\\n&\\qquad\\qquad \\geq \\tau_{K,N}^{(1-r\/R)}(R+\\delta R)\\,\\meas\\big[{\\ensuremath{\\mathsf{B}}}^\\uptau(x,\\varepsilon)\\cap E\\big]^{1\/N}\\\\\n&\\qquad\\qquad\\qquad\\qquad + \\tau_{K,N}^{(r\/R)}(R+\\delta R)\\, \\meas\\big[(\\bar{{\\ensuremath{\\mathsf{B}}}}^\\uptau(x,R+\\delta R)\\setminus {\\ensuremath{\\mathsf{B}}}^\\uptau(x,R))\\cap E\\big]^{1\/N}\\\\\n&\\qquad\\qquad \\geq \\tau_{K,N}^{(r\/R)}(R+\\delta R)\\, \\meas\\big[(\\bar{{\\ensuremath{\\mathsf{B}}}}^\\uptau(x,R+\\delta R)\\setminus {\\ensuremath{\\mathsf{B}}}^\\uptau(x,R))\\cap E\\big]^{1\/N}.\n\\end{align*}\nRaising this to the $N$-th power, writing out the coefficient, namely $\\smash{\\tau_{K,N}^{(r\/R)}(R+\\delta R)^N} = (r\/R)\\,\\big[\\mathfrak{s}_{K,N-1}(r+\\delta r)\/\\mathfrak{s}_{K,N-1}(R+\\delta R)\\big]^{N-1}$, then dividing by $\\delta r = (r\/R)\\,\\delta R$ and finally sending $\\delta\\to 0$ implies the claimed lower bound for ${\\ensuremath{\\mathrm{s}}}_r\/{\\ensuremath{\\mathrm{s}}}_R$.\n\nAs in the proof of \\cite[Thm.~2.3]{sturm2006b}, we argue that ${\\ensuremath{\\mathrm{v}}}$ is locally Lipschitz continuous on $(0,\\infty)$. 
Hence, it is differentiable $\\smash{\\Leb^1}$-a.e.~on $(0,\\infty)$, and $\\smash{{\\ensuremath{\\mathrm{v}}}_r = \\int_0^r \\dot{{\\ensuremath{\\mathrm{v}}}}_s\\d s}$ for every $r > 0$. Therefore, the claimed lower bound on ${\\ensuremath{\\mathrm{v}}}_r\/{\\ensuremath{\\mathrm{v}}}_R$ follows from the previous part in com\\-bination with \\cite[Lem.~18.9]{villani2009}.\n\\end{proof}\n\n\\begin{remark} If $K=0$, the estimates from \\autoref{Th:BG} become\n\\begin{align*}\n\\frac{{\\ensuremath{\\mathrm{s}}}_r}{{\\ensuremath{\\mathrm{s}}}_R} &\\geq \\Big[\\frac{r}{R}\\Big]^{N-1},\\\\\n\\frac{{\\ensuremath{\\mathrm{v}}}_r}{{\\ensuremath{\\mathrm{v}}}_R} &\\geq \\Big[\\frac{r}{R}\\Big]^N.\n\\end{align*}\nIn fact, these estimates hold for $N=1$ as well.\n\\end{remark}\n\nIncidentally, from \\autoref{Th:BG} we deduce the following natural and sharp upper bound on the $\\uptau$-Hausdorff dimension as introduced and studied in \\cite{mccann2021}. Our results improve \\cite[Thm.~5.2, Cor.~5.3]{mccann2021}. We refer to \\cite{mccann2021} for details about the notions presented in the subsequent \\autoref{Cor:HD}.\n\n\\begin{corollary}\\label{Cor:HD} Let $(\\mms,\\langle\\cdot,\\cdot\\rangle)$ be a continuous, globally hyperbolic, causally plain spacetime of dimension $n\\in\\N$ \\textnormal{(}which is thus causally closed \\cite[Prop.~3.5]{kunzinger2018} and geodesic \\cite[Thm.~3.30, Thm.~5.12]{kunzinger2018}\\textnormal{)}. Assume its induced Lorentzian geodesic space, with $\\meas$ being the Lorentzian volume measure induced by $\\langle\\cdot,\\cdot\\rangle$, to obey $\\smash{\\mathrm{wTCD}_p(K,N)}$ for some $p\\in (0,1)$, $K\\in\\R$, and $N\\in[1,\\infty)$. 
Then the geometric dimension $\\smash{\\dim^\\uptau\\mms}$ in the sense of \\textnormal{\\cite[Def.~3.1]{mccann2021}} satisfies\n\\begin{align*}\nn=\\dim^\\uptau\\mms \\leq N.\n\\end{align*}\n\\end{corollary}\n\n\\begin{remark}\\label{Re:HD2} As in \\cite[Thm.~5.2]{mccann2021}, which is proven under $\\smash{\\mathrm{wTCD}_p^e(K,N)}$, assuming $\\smash{\\mathrm{wTCD}_p^*(K,N)}$ instead of $\\smash{\\mathrm{wTCD}_p(K,N)}$ in \\autoref{Cor:HD} would only lead to the conclusion $n = \\dim^\\uptau\\mms \\leq N+1$ by \\autoref{Th:Reduced BG}.\n\\end{remark}\n\n\\subsubsection{Versions in the reduced case}\\label{Sub:Versions reduced} With evident modifications, the following nonsharp versions of \\autoref{Pr:Brunn-Minkowski}, \\autoref{Cor:Bonnet-Myers}, \\autoref{Th:BG}, and \\autoref{Cor:HD} are readily verified under the more general $\\smash{\\mathrm{wTCD}_p^*(K,N)}$ condition. The deduced inequalities are analogous to their counterparts drawn from the $\\smash{\\mathrm{wTCD}_p^e(K,N)}$ condition in \\cite[Prop.~3.4, Prop.~3.5, Prop.~3.6]{cavalletti2020}.\n\nIn view of \\autoref{Th:Reduced BG} below, recall the definition of $\\mathfrak{v}_{K,N}$ from \\eqref{Eq:integral def}.\n\n\\begin{proposition}\\label{Propos} Assume $\\smash{\\mathrm{wTCD}_p^*(K,N)}$ for some $p\\in (0,1)$, $K\\in\\R$, and $N\\in[1,\\infty)$. Let $A_0,A_1\\subset\\mms$ be relatively compact Borel sets which satisfy $\\meas[A_0]\\,\\meas[A_1]>0$, and assume strong timelike $p$-dualizability of \n\\begin{align*}\n(\\mu_0,\\mu_1) := (\\meas[A_0]^{-1}\\,\\meas\\mres A_0, \\meas[A_1]^{-1}\\,\\meas\\mres A_1).\n\\end{align*}\nLet $\\Theta$ be as in \\eqref{Eq:THETA}. 
Then for every $t\\in [0,1]$ and every $N'\\geq N$,\n\\begin{align*}\n\\meas[A_t]^{1\/N'} \\geq \\sigma_{K,N'}^{(1-t)}(\\Theta)\\,\\meas[A_0]^{1\/N'} + \\sigma_{K,N'}^{(t)}(\\Theta)\\,\\meas[A_1]^{1\/N'}.\n\\end{align*}\n\nAssuming $\\smash{\\mathrm{TCD}_p^*(K,N)}$ in place of $\\smash{\\mathrm{wTCD}_p^*(K,N)}$, the same conclusion holds if $(\\mu_0,\\mu_1)$ is merely timelike $p$-dualizable.\n\\end{proposition}\n\n\\begin{corollary}\\label{Cor:Reduced BM} Assume the $\\smash{\\mathrm{wTCD}_p^*(K,N)}$ condition for some $p\\in(0,1)$, $K>0$, and $N\\in [1,\\infty)$. Then \n\\begin{align*}\n\\sup\\uptau(\\mms^2) \\leq \\pi\\sqrt{\\frac{N}{K}}.\n\\end{align*}\n\\end{corollary}\n\n\\begin{theorem}\\label{Th:Reduced BG} Assume $\\smash{\\mathrm{wTCD}_p^*(K,N)}$ for some $p\\in (0,1)$, $K\\in\\R$, and $N\\in (1,\\infty)$. Let $\\smash{E\\subset I^+(x)\\cup\\{x\\}}$ be a compact set which is $\\uptau$-star-shaped with respect to $x\\in\\mms$. Then for every $r,R > 0$ with $\\smash{r < R \\leq \\pi\\sqrt{N\/\\max\\{K,0\\}}}$,\n\\begin{align*}\n\\frac{{\\ensuremath{\\mathrm{s}}}_r}{{\\ensuremath{\\mathrm{s}}}_R} &\\geq \\Big[\\frac{\\mathfrak{s}_{K,N}(r)}{\\mathfrak{s}_{K,N}(R)}\\Big]^N,\\\\\n\\frac{{\\ensuremath{\\mathrm{v}}}_r}{{\\ensuremath{\\mathrm{v}}}_R} &\\geq \\Big[\\frac{\\mathfrak{v}_{K,N+1}(r)}{\\mathfrak{v}_{K,N+1}(R)}\\Big]^{N+1}.\n\\end{align*}\n\\end{theorem}\n\n\\subsection{Stability}\\label{Sec:Stability TCD} In this section, we prove a key property of our timelike curvature-dimension conditions, namely the weak stability of the notions from \\autoref{Def:TCD*} and \\autoref{Def:TCD}, cf.~\\autoref{Th:Stability TCD}. The relevant notion of convergence of measured Lorentzian pre-length spaces, see \\autoref{Def:Convergence}, is due to \\cite[Sec. 
3.3]{cavalletti2020}.\n\n\\begin{definition}\\label{Def:Lor isometry} For Lorentzian pre-length spaces $\\smash{(\\mms^i,\\met^i,\\ll^i,\\leq^i,\\uptau^i)}$, $i\\in\\{0,1\\}$, we term a map $\\smash{\\iota\\colon \\mms^0\\to\\mms^1}$ a \\emph{Lorentzian isometric embedding} if $\\iota$ is a topolo\\-gical embedding such that for every $x,y\\in\\mms^0$,\n\\begin{enumerate}[label=\\textnormal{\\alph*.}]\n\\item $x\\leq^0 y$ if and only if $\\iota(x) \\leq^1\\iota(y)$, and\n\\item $\\uptau^1(\\iota(x),\\iota(y)) = \\uptau^0(x,y)$.\n\\end{enumerate}\n\\end{definition}\n\nGiven any $k\\in\\N_\\infty := \\N\\cup\\{\\infty\\}$, let $(\\mms_k,\\met_k,\\meas_k,\\ll_k,\\leq_k,\\uptau_k)$ be a fixed measured Lorentzian pre-length space which, as in \\eqref{Eq:X}, we will abbreviate by ${\\ensuremath{\\mathscr{X}}}_k$.\n\n\\begin{definition}\\label{Def:Convergence} We define $({\\ensuremath{\\mathscr{X}}}_k)_{k\\in\\N}$ to converge to ${\\ensuremath{\\mathscr{X}}}_\\infty$ if the following holds.\n\\begin{enumerate}[label=\\textnormal{\\alph*\\textcolor{black}{.}}]\n\\item\\label{La:Blurr1reg} There exists a causally closed, ${\\ensuremath{\\mathscr{K}}}$-globally hyperbolic Lorentzian geodesic space $\\smash{(\\mms,\\met,\\ll,\\leq,\\uptau)}$ such that for every $k\\in\\N_\\infty$ there exists a Lorentzian isometric embedding $\\smash{\\iota_k\\colon \\mms_k\\to \\mms}$.\n\\item In terms of the maps $\\iota_k$ from \\ref{La:Blurr1reg}, the sequence $((\\iota_k)_\\push\\meas_k)_{k\\in\\N}$ converges to $(\\iota_\\infty)_\\push\\meas_\\infty$ in duality with $\\Cont_\\comp(\\mms)$, i.e.~for every $\\varphi\\in\\Cont_\\comp(\\mms)$ we have\n\\begin{align*}\n\\lim_{k\\to\\infty}\\int_\\mms \\varphi\\d(\\iota_k)_\\push\\meas_k = \\int_\\mms \\varphi\\d(\\iota_\\infty)_\\push\\meas_\\infty.\n\\end{align*}\n\\end{enumerate}\n\\end{definition}\n\n\\begin{remark} If $({\\ensuremath{\\mathscr{X}}}_k)_{k\\in\\N}$ converges to ${\\ensuremath{\\mathscr{X}}}_\\infty$ according to 
\\autoref{Def:Convergence}, ${\\ensuremath{\\mathscr{X}}}_k$ is automatically causally closed, ${\\ensuremath{\\mathscr{K}}}$-globally hyperbolic, and geodesic for every $k\\in\\N_\\infty$.\n\\end{remark}\n\n\n\nThe following \\autoref{Le:CM lemma} is an implicit consequence of the proof of \\cite[Lem. 3.5]{cavalletti2020} and will be useful for the proof of \\autoref{Th:Stability TCD}. \n\n\\begin{lemma}\\label{Le:CM lemma} Assume that ${\\ensuremath{\\mathscr{X}}}$ is causally closed, globally hyperbolic, and geodesic. Let the pair $\\smash{(\\mu_0,\\mu_1)=(\\rho_0\\,\\meas,\\rho_1\\,\\meas)\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)^2}$ admit some $\\smash{\\ell_p}$-optimal coupling $\\smash{\\pi\\in\\Pi_\\ll(\\mu_0,\\mu_1)}$, $p\\in (0,1]$. Then there exist sequences $(\\pi^n)_{n\\in\\N}$ in ${\\ensuremath{\\mathscr{P}}}(\\mms^2)$ and $(a_n)_{n\\in\\N}$ in $[1,\\infty)$ with the properties \n\\begin{enumerate}[label=\\textnormal{\\textcolor{black}{(}\\roman*\\textcolor{black}{)}}]\n\\item $(a_n)_{n\\in\\N}$ converges to $1$,\n\\item $\\pi^n[\\mms_\\ll^2]=1$ for every $n\\in\\N$,\n\\item $\\smash{\\pi^n = \\rho^n\\,\\meas^{\\otimes 2}\\in {\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms^2,\\meas^{\\otimes 2})}$ with $\\smash{\\rho^n\\in\\Ell^\\infty(\\mms^2,\\meas^{\\otimes 2})}$ for every $n\\in\\N$,\n\\item $(\\pi^n)_{n\\in\\N}$ converges weakly to $\\pi$, \n\\item\\label{DENS} the densities $\\smash{\\rho_0^n}$ and $\\smash{\\rho_1^n}$ of the first and second marginal of $\\pi^n$ with respect to $\\meas$ are no larger than $a_n\\,\\rho_0$ and $a_n\\,\\rho_1$, respectively, for every $n\\in\\N$, and\n\\item $\\smash{\\rho_0^n\\to \\rho_0}$ and $\\smash{\\rho_1^n\\to\\rho_1}$ in $\\Ell^1(\\mms,\\meas)$ as $n\\to\\infty$.\n\\end{enumerate}\n\\end{lemma}\n\nFurthermore, in the notation of \\autoref{Le:CM lemma}, given any $K\\in\\R$, $N\\in[1,\\infty)$, and $t\\in [0,1]$, as well as a measure $\\pi\\in\\Pi(\\mu_0,\\mu_1)$ with marginals 
$\\mu_0 = \\rho_0\\,\\meas\\in\\smash{{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$ and $\\mu_1 = \\rho_1\\,\\meas\\in\\smash{{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$, for brevity we define\n\\begin{align}\\label{Eq:TDdef}\n\\begin{split}\n{\\ensuremath{\\mathscr{T}}}_{K,N}^{(t)}(\\pi) &:= -\\int_{\\mms^2} \\tau_{K,N}^{(1-t)}(\\uptau(x^0,x^1))\\,\\rho_0(x^0)^{-1\/N}\\d\\pi(x^0,x^1)\\\\\n&\\qquad\\qquad -\\int_{\\mms^2} \\tau_{K,N}^{(t)}(\\uptau(x^0,x^1))\\,\\rho_1(x^1)^{-1\/N}\\d\\pi(x^0,x^1).\n\\end{split}\n\\end{align}\n\nWe note the following variant of \\cite[Lem.~3.2]{sturm2006b}. The latter proof easily carries over to the setting of \\autoref{Le:Const perturb}, where the marginal densities, instead of being assumed to be the same throughout the sequence as in \\cite{sturm2006b}, are allowed to be perturbed by constants vanishing in the limit. Compare with \\autoref{Le:USC lemma}.\n\n\\begin{lemma}\\label{Le:Const perturb} Suppose $\\meas\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$. Let $(a_k)_{k\\in\\N}$ be a sequence in $[1,\\infty)$ converging to $1$, and let $(b_k)_{k\\in\\N}$ be a sequence of nonnegative real numbers converging to $0$. 
Define $\\smash{\\nu_{k,0},\\nu_{k,1}\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$, $k\\in\\N$, by\n\\begin{align*}\n\\nu_{k,0} &:= (a_k+b_k)^{-1}\\,(a_k\\,\\mu_0 + b_k\\,\\meas) = (a_k+b_k)^{-1}\\,(a_k\\,\\rho_0 + b_k)\\,\\meas,\\\\\n\\nu_{k,1} &:= (a_k+b_k)^{-1}\\,(a_k\\,\\mu_1 + b_k\\,\\meas) = (a_k+b_k)^{-1}\\,(a_k\\,\\rho_1 + b_k)\\,\\meas.\n\\end{align*}\nThen for every sequence $(\\pi_k)_{k\\in\\N}$ of measures $\\pi_k\\in \\Pi(\\nu_{k,0},\\nu_{k,1})$ which converges weakly to $\\pi\\in\\Pi(\\mu_0,\\mu_1)$, every $K\\in\\R$, every $N\\in[1,\\infty)$, and every $t\\in[0,1]$, \n\\begin{align*}\n\\limsup_{k\\to\\infty}{\\ensuremath{\\mathscr{T}}}_{K,N}^{(t)}(\\pi_k) \\leq {\\ensuremath{\\mathscr{T}}}_{K,N}^{(t)}(\\pi).\n\\end{align*}\n\\end{lemma}\n\n\\begin{remark} Of course, the assertion from \\autoref{Le:Const perturb} remains valid if $\\smash{\\tau_{K,N}^{(1-t)}}$ and $\\smash{\\tau_{K,N}^{(t)}}$ are replaced by $\\smash{\\sigma_{K,N}^{(1-t)}}$ and $\\smash{\\sigma_{K,N}^{(t)}}$ in \\eqref{Eq:TDdef}, respectively.\n\\end{remark}\n\nThe subsequent proof of \\autoref{Th:Stability TCD} below follows \\cite[Thm.~3.1]{sturm2006b}. However, by the nature of our timelike curvature-dimension conditions, we additionally have to ensure chronological relations of many measures under consideration. Hence, the proof becomes much longer and quite technical. It may be skipped at first reading; alternatively, the reader may consult the proof of the counterpart \\autoref{Th:Stability TMCP} of \\autoref{Th:Stability TCD} below for a somewhat easier argument in which one does not have to trace chronological $\\smash{\\ell_p}$-optimal couplings.\n\n\n\n\n\\begin{theorem}\\label{Th:Stability TCD} Assume the convergence of $({\\ensuremath{\\mathscr{X}}}_k)_{k\\in\\N}$ to ${\\ensuremath{\\mathscr{X}}}_\\infty$ as in \\autoref{Def:Convergence}. 
Moreover, let $(K_k,N_k)_{k\\in\\N}$ be a sequence in $\\R\\times [1,\\infty)$ converging to $(K_\\infty,N_\\infty)\\in\\R\\times[1,\\infty)$. Suppose the existence of $p\\in (0,1)$ such that ${\\ensuremath{\\mathscr{X}}}_k$ obeys $\\mathrm{TCD}_p(K_k,N_k)$ for every $k\\in\\N$. \nThen $\\smash{{\\ensuremath{\\mathscr{X}}}_\\infty}$ satisfies $\\smash{\\mathrm{wTCD}_p(K_\\infty,N_\\infty)}$. \n\nThe analogous statement in which $\\smash{\\mathrm{TCD}_p(K_k,N_k)}$ and $\\smash{\\mathrm{wTCD}_p(K_\\infty,N_\\infty)}$ are replaced by $\\smash{\\mathrm{TCD}_p^*(K_k,N_k)}$ and $\\smash{\\mathrm{wTCD}_p^*(K_\\infty,N_\\infty)}$, $k\\in\\N$, respectively, holds too.\n\\end{theorem}\n\n\\begin{proof} We only prove the first implication; the second is similar. To relax notation, without restriction and without further notice, we identify $\\mms_k$ with its image $\\iota_k(\\mms_k)$ in $\\mms$, and the measure $\\meas_k$ with its push-forward $(\\iota_k)_\\push\\meas_k$ for every $k\\in\\N_\\infty$.\n\n\\textbf{Step 1.} \\textit{Reduction to compact $\\mms$.} Fix any strongly timelike $p$-dualizable pair $(\\mu_{\\infty,0},\\mu_{\\infty,1}) = (\\rho_{\\infty,0}\\,\\meas_\\infty,\\rho_{\\infty,1}\\,\\meas_\\infty)\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas_\\infty)^2$. By compactness of $\\supp\\mu_{\\infty,0}$ and $\\supp\\mu_{\\infty,1}$, and by ${\\ensuremath{\\mathscr{K}}}$-global hyperbolicity of $\\smash{\\mms}$, we restrict all arguments below to a compact subset $\\smash{C\\subset \\mms}$ with $\\smash{\\meas_k[C]^{-1}\\,\\meas_k\\mres C \\to \\meas_\\infty[C]^{-1}\\,\\meas_\\infty\\mres C}$ weakly as $k\\to\\infty$. To relax notation, we will thus assume with no loss of generality that $\\smash{\\mms}$ itself is compact, and that $\\smash{\\meas_k\\in{\\ensuremath{\\mathscr{P}}}(\\mms)}$ for every $k\\in\\N_\\infty$. 
In particular, without notationally expressing this property all the time, all measures will henceforth be compactly supported.\n\n\\textbf{Step 2.} \\textit{Restriction of the assumptions on $\\mu_{\\infty,0}$ and $\\mu_{\\infty,1}$.} We will first assume that $\\rho_{\\infty,0},\\rho_{\\infty,1}\\in\\Ell^\\infty(\\mms,\\meas_\\infty)$. The general case is discussed in Step 8 below; we note for now that this conclusion will not conflict with our reductions from Step 1.\n\n\\textbf{Step 3.} \\textit{Construction of a chronological recovery sequence.} The goal of this step is the construction of a sequence $\\smash{(\\mu_{k,0},\\mu_{k,1})_{k\\in\\N}}$ of timelike $p$-dualizable pairs $\\smash{(\\mu_{k,0},\\mu_{k,1})\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_k)^2}$, $k\\in\\N$, such that $\\smash{\\mu_{k,0}\\to \\mu_{\\infty,0}}$ and $\\smash{\\mu_{k,1}\\to \\mu_{\\infty,1}}$ weakly as $k\\to\\infty$, up to a subsequence. The highly nontrivial part we address here is that such a pair can in fact be constructed to be chronological.\n\n\\textbf{Step 3.1.} Let $W_2$ denote the $2$-Wasserstein distance on $\\smash{(\\mms,\\met)}$. Since $(\\meas_k)_{k\\in\\N}$ converges weakly to $\\meas_\\infty$ and since $\\mms$ is compact, we have $W_2(\\meas_k,\\meas_\\infty)\\to 0$ as $k\\to\\infty$. Let $\\mathfrak{q}_k\\in{\\ensuremath{\\mathscr{P}}}(\\mms^2)$ be a $W_2$-optimal coupling of $\\meas_k$ and $\\meas_\\infty$, $k\\in\\N$. For later use, we disintegrate $\\mathfrak{q}_k$ with respect to $\\pr_1$ and $\\pr_2$, respectively, writing\n\\begin{align*}\n{\\ensuremath{\\mathrm{d}}}\\mathfrak{q}_k(x, y) = {\\ensuremath{\\mathrm{d}}}\\mathfrak{p}_{x}^k(y)\\d\\meas_k(x) = {\\ensuremath{\\mathrm{d}}}\\mathfrak{p}_{y}^\\infty(x)\\d\\meas_\\infty(y)\n\\end{align*}\nfor certain Borel maps $\\smash{\\mathfrak{p}^k \\colon \\mms\\to {\\ensuremath{\\mathscr{P}}}(\\mms)}$ and $\\smash{\\mathfrak{p}^\\infty \\colon \\mms\\to {\\ensuremath{\\mathscr{P}}}(\\mms)}$. 
(Although $\\mathfrak{p}^\\infty$ depends on $k$ as well, we do not make this explicit to not overload our notation.) These disin\\-te\\-grations induce nonrelabeled Borel maps $\\smash{\\mathfrak{p}^k\\colon {\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_\\infty)\\to {\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_k)}$ and $\\smash{\\mathfrak{p}^\\infty}\\colon {\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_k)\\to {\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_\\infty)$ defined by\n\\begin{align*}\n{\\ensuremath{\\mathrm{d}}}\\mathfrak{p}^k(f\\,\\meas_\\infty)(x) &:= \\Big[\\!\\int_\\mms f(y)\\d\\mathfrak{p}_{x}^k(y) \\Big]\\d\\meas_k(x),\\\\\n{\\ensuremath{\\mathrm{d}}}\\mathfrak{p}^\\infty(g\\,\\meas_k)(y) &:= \\Big[\\!\\int_\\mms g(x)\\d\\mathfrak{p}_{y}^\\infty(x) \\Big]\\d\\meas_\\infty(y).\n\\end{align*} \n\n\\textbf{Step 3.2.} Let $\\pi_\\infty\\in\\Pi_\\ll(\\mu_{\\infty,0},\\mu_{\\infty,1})$ be timelike $p$-dualizing, and let $(\\pi^n_\\infty)_{n\\in\\N}$ be an asso\\-ciated sequence in $\\smash{{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms^2,\\meas_\\infty^{\\otimes 2})}$ as in \\autoref{Le:CM lemma}, where we write\n\\begin{align*}\n\\pi^n_\\infty &= \\rho_\\infty^n\\,\\meas_\\infty^{\\otimes 2},\\\\\n\\mu_{\\infty,0}^n := (\\pr_1)_\\push\\pi_\\infty^n &= \\rho_{\\infty,0}^n\\,\\meas_\\infty,\\\\\n\\mu_{\\infty,1}^n := (\\pr_2)_\\push\\pi_\\infty^n &= \\rho_{\\infty,1}^n\\,\\meas_\\infty.\n\\end{align*}\nFor $k\\in\\N$, define $\\smash{\\mu_{k,0}^n,\\mu_{k,1}^n\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_k)}$ by\n\\begin{align*}\n\\mu_{k,0}^n &:= \\mathfrak{p}^k(\\mu_{\\infty,0}^n) = \\rho_{k,0}^n\\,\\meas_k,\\\\\n\\mu_{k,1}^n &:= \\mathfrak{p}^k(\\mu_{\\infty,1}^n) = \\rho_{k,1}^n\\,\\meas_k\n\\end{align*}\nas well as $\\smash{\\pi_k^n\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms^2,\\meas_k^{\\otimes 2})}$ by\n\\begin{align*}\n{\\ensuremath{\\mathrm{d}}} \\pi_k^n(x^0,x^1) &:= 
\\Big[\\!\\int_{\\mms^2}\\rho_\\infty^n(y^0,y^1)\\d(\\mathfrak{p}_{x^0}^k\\otimes\\mathfrak{p}_{x^1}^k)(y^0,y^1)\\Big]\\d\\meas_k^{\\otimes 2}(x^0,x^1),\n\\end{align*}\nor in other words,\n\\begin{align*}\n\\pi_k^n = (\\pr_1,\\pr_3)_\\push\\big[(\\rho_\\infty^n \\circ (\\pr_2,\\pr_4))\\,\\mathfrak{q}_k\\otimes\\mathfrak{q}_k\\big].\n\\end{align*}\nA straightforward computation entails $\\smash{\\pi_k^n\\in\\Pi(\\mu_{k,0}^n,\\mu_{k,1}^n)}$. Also, for given $n\\in\\N$, the sequence $(\\mathfrak{q}_k)_{k\\in\\N}$ converges weakly, up to a nonrelabeled subsequence, to the unique $W_2$-optimal (diagonal) coupling of $\\meas_\\infty$ and $\\meas_\\infty$ by tightness \\cite[Lem.~4.3, Lem.~4.4]{villani2009}; this implies weak convergence of $\\smash{(\\pi_k^n)_{k\\in\\N}}$ to $\\smash{\\pi_\\infty^n}$, again up to a nonrelabeled subsequence.\n\n\\textbf{Step 3.3.} By \\autoref{Le:CM lemma}, a tightness and a diagonal argument, passage to a subsequence, and relabeling of indices, we find $n\\colon\\N\\to\\N$ such that, setting\n\\begin{align*}\n\\tilde{\\pi}_k := \\pi_k^{n_k},\n\\end{align*}\nthe sequence $\\smash{(\\tilde{\\pi}_k)_{k\\in\\N}}$ converges weakly to $\\pi_\\infty$. By Portmanteau's theorem,\n\\begin{align}\\label{Eq:Portmanteau in stability}\n1 = \\pi_\\infty[\\mms_\\ll^2] \\leq \\liminf_{k\\to\\infty} \\tilde{\\pi}_k[\\mms_\\ll^2].\n\\end{align}\nHence, up to passing to a subsequence, we henceforth assume that $\\tilde{\\pi}_k[\\mms_\\ll^2] > 0$ for every $k\\in\\N$. 
We write the mar\\-ginals $\\smash{\\tilde{\\mu}_{k,0},\\tilde{\\mu}_{k,1}\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_k)}$ of $\\tilde{\\pi}_k$ as\n\\begin{align*}\n\\tilde{\\mu}_{k,0} &= \\tilde{\\rho}_{k,0}\\,\\meas_k = \\rho_{k,0}^{n_k}\\,\\meas_k,\\\\\n\\tilde{\\mu}_{k,1} &= \\tilde{\\rho}_{k,1}\\,\\meas_k = \\rho_{k,1}^{n_k}\\,\\meas_k.\n\\end{align*}\n\n\n\n\\textbf{Step 3.4.} Now we define $\\smash{\\hat{\\pi}_k\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms^2,\\meas_k^{\\otimes 2})}$ by\n\\begin{align*}\n\\hat{\\pi}_k := \\tilde{\\pi}_k[\\mms_\\ll^2]^{-1}\\,\\tilde{\\pi}_k\\mres \\mms^2_\\ll.\n\\end{align*}\nNote that $\\smash{(\\hat{\\pi}_k)_{k\\in\\N}}$ converges weakly to $\\smash{\\pi_\\infty}$, whence\n\\begin{align}\\label{Eq:Der Limes}\n\\lim_{k\\to\\infty}\\int_{\\mms^2}\\uptau^p\\d\\hat{\\pi}_k = \\int_{\\mms^2}\\uptau^p\\d\\pi_\\infty.\n\\end{align}\nLet $\\smash{\\hat{\\mu}_{k,0},\\hat{\\mu}_{k,1}\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_k)}$ denote the marginals of $\\smash{\\hat{\\pi}_k}$, with\n\\begin{align*}\n\\hat{\\mu}_{k,0} &= \\hat{\\rho}_{k,0}\\,\\meas_k,\\\\\n\\hat{\\mu}_{k,1} &= \\hat{\\rho}_{k,1}\\,\\meas_k.\n\\end{align*}\n\nThese measures are chronologically related, but we do not know whether they have a chronological \\emph{$\\smash{\\ell_p}$-optimal} coupling, i.e.~whether they are timelike $p$-dualizable. To arrange for this, note that $\\smash{\\ell_p(\\hat{\\mu}_{k,0},\\hat{\\mu}_{k,1})} \\in (0,\\infty)$ by chronology of $\\smash{\\hat{\\pi}_k}$ as well as compactness of $\\mms^2$. Let $\\smash{\\check{\\pi}_k\\in\\Pi_\\leq(\\hat{\\mu}_{k,0},\\hat{\\mu}_{k,1})}$ be an $\\smash{\\ell_p}$-optimal coupling. By weak convergence of $\\smash{(\\hat{\\pi}_k)_{k\\in\\N}}$, its marginal sequences $\\smash{(\\hat{\\mu}_{k,0})_{k\\in\\N}}$ and $\\smash{(\\hat{\\mu}_{k,1})_{k\\in\\N}}$ are tight, and so is $\\smash{(\\check{\\pi}_k)_{k\\in\\N}}$ \\cite[Lem.~4.4]{villani2009}. 
By Prokhorov's theorem, up to a nonrelabeled subsequence the latter converges weakly to some $\\smash{\\check{\\pi}_\\infty\\in\\Pi(\\mu_{\\infty,0},\\mu_{\\infty,1})}$; causal closedness of $\\mms$ and Portmanteau's theorem imply $\\smash{\\check{\\pi}_\\infty[\\mms_\\leq^2]=1}$. Moreover, \\eqref{Eq:Der Limes} gives\n\\begin{align}\\label{Eq:Lp calc}\n\\begin{split}\n\\ell_p(\\mu_{\\infty,0},\\mu_{\\infty,1})^p &= \\int_{\\mms^2} \\uptau^p\\d\\pi_\\infty = \\lim_{k\\to\\infty}\\int_{\\mms^2} \\uptau^p\\d\\hat{\\pi}_k\\\\ &\\leq \\lim_{k\\to\\infty} \\int_{\\mms^2}\\uptau^p\\d\\check{\\pi}_k = \\int_{\\mms^2}\\uptau^p\\d\\check{\\pi}_\\infty.\n\\end{split}\n\\end{align}\nHere the inequality follows from $\\smash{\\ell_p}$-optimality of $\\smash{\\check{\\pi}_k}$. Thus, $\\smash{\\check{\\pi}_\\infty}$ is an $\\smash{\\ell_p}$-optimal coupling of $\\smash{\\mu_{\\infty,0}}$ and $\\smash{\\mu_{\\infty,1}}$. Since the latter pair is strongly timelike $p$-dualizable, every such coupling is chronological, whence $\\smash{\\check{\\pi}_\\infty[\\mms_\\ll^2]=1}$; thus,\n\\begin{align*}\n1 = \\check{\\pi}_\\infty[\\mms_\\ll^2] \\leq \\liminf_{k\\to\\infty} \\check{\\pi}_k[\\mms_\\ll^2].\n\\end{align*}\nAs usual, we may and will assume $\\smash{\\check{\\pi}_k[\\mms_\\ll^2] > 0}$ for every $k\\in\\N$.\n\n\\textbf{Step 3.5.} Lastly, we define $\\smash{\\bar{\\pi}_k\\in{\\ensuremath{\\mathscr{P}}}(\\mms^2)}$ by\n\\begin{align*}\n\\bar{\\pi}_k := \\check{\\pi}_k[\\mms_\\ll^2]^{-1}\\,\\check{\\pi}_k\\mres \\mms_\\ll^2.\n\\end{align*}\nSince the restriction of an $\\smash{\\ell_p}$-optimal coupling is $\\smash{\\ell_p}$-optimal \\cite[Lem.~2.10]{cavalletti2020}, $\\smash{\\bar{\\pi}_k}$ is a chronological $\\smash{\\ell_p}$-optimal coupling of its marginals $\\smash{\\mu_{k,0},\\mu_{k,1}\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_k)}$ with\n\\begin{align*}\n\\mu_{k,0} &= \\rho_{k,0}\\,\\meas_k,\\\\\n\\mu_{k,1} &= \\rho_{k,1}\\,\\meas_k.\n\\end{align*}\nThe pair $\\smash{(\\mu_{k,0},\\mu_{k,1})}$ is timelike 
$p$-dualizable by $\\bar{\\pi}_k$ for every $k\\in\\N$, as desired.\n\n\\textbf{Step 4.} \\textit{Invoking the $\\mathrm{TCD}$ condition.} Fix $K\\in\\R$ and $N\\in (1,\\infty)$ with $K < K_\\infty$ and $N > N_\\infty$. Then $K < K_k$ and $N > N_k$ for large enough $k\\in\\N$. Hence, again up to a subsequence, we may and will assume that the previous strict inequalities hold for every $k\\in\\N$.\n\nBy \\autoref{Pr:Consistency TCD}, for every $k\\in\\N$ there are a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_{k,t})_{t\\in[0,1]}$ con\\-necting $\\smash{\\mu_{k,0}}$ to $\\mu_{k,1}$ as well as a timelike $p$-dualizing coupling $\\smash{\\pi_k\\in\\Pi_\\ll(\\mu_{k,0},\\mu_{k,1})}$ such that for every $t\\in[0,1]$ and every $N'\\geq N$,\n\\begin{align}\\label{Eq:TCD COND INVOKING}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{k,t}) \\leq {\\ensuremath{\\mathscr{T}}}_{K,N'}^{(t)}(\\pi_k).\n\\end{align}\n\n\\textbf{Step 5.} \\textit{Estimating $\\smash{-{\\ensuremath{\\mathscr{T}}}_{K,N'}^{(t)}(\\pi_k)}$ from below.} Tracing back the construction from Step 3, we infer the inequalities\n\\begin{align}\\label{Eq:Inequ rhok0}\n\\begin{split}\n\\rho_{k,0} &\\leq \\check{\\pi}_k[\\mms_\\ll^2]^{-1}\\,\\tilde{\\pi}_k[\\mms_\\ll^2]^{-1}\\,\\tilde{\\rho}_{k,0}\\quad \\meas_k\\textnormal{-a.e.},\\\\\n\\rho_{k,1} &\\leq \\check{\\pi}_k[\\mms_\\ll^2]^{-1}\\,\\tilde{\\pi}_k[\\mms_\\ll^2]^{-1}\\,\\tilde{\\rho}_{k,1} \\quad\\meas_k\\textnormal{-a.e.}\n\\end{split}\n\\end{align}\nHowever, when using these estimates to bound $\\smash{-{\\ensuremath{\\mathscr{T}}}_{K,N'}^{(t)}(\\pi_k)}$ from below, we change the involved densities in the integrands, but not the coupling with respect to which they are integrated; we thus have to make up for this in the coupling as well. 
In turn, this requires us to take into account that neither $\\smash{\\tilde{\\pi}_k}$ nor $\\smash{\\check{\\pi}_k}$ is chro\\-nological, although both are close to being so for large $k\\in\\N$.\n\n\\textbf{Step 5.1.} We modify $\\pi_k$ into a measure with marginals \n\\begin{align*}\n\\nu_{k,0} &:= (1+\\delta_k+\\varepsilon_k+\\zeta_k)^{-1}\\,(\\tilde{\\rho}_{k,0}+ \\delta_k + \\varepsilon_k+\\zeta_k)\\,\\meas_k = \\varrho_{k,0}\\,\\meas_k,\\\\\n\\nu_{k,1} &:= (1+\\delta_k+\\varepsilon_k+\\zeta_k)^{-1}\\,(\\tilde{\\rho}_{k,1}+ \\delta_k + \\varepsilon_k+\\zeta_k)\\,\\meas_k = \\varrho_{k,1}\\,\\meas_k,\n\\end{align*}\nwhere $\\delta_k,\\varepsilon_k \\in[0,1]$ and $\\zeta_k \\geq 0$ are defined by\n\\begin{align}\\label{Eq:DEFS}\n\\begin{split}\n\\delta_k &:= \\tilde{\\pi}_k[\\{\\uptau=0\\}],\\\\\n\\varepsilon_k &:= \\check{\\pi}_k[\\{\\uptau=0\\}],\\\\\n\\zeta_k &:= W_2(\\meas_k,\\meas_\\infty).\n\\end{split}\n\\end{align}\nNote that $(\\delta_k)_{k\\in\\N}$, $(\\varepsilon_k)_{k\\in\\N}$, and $(\\zeta_k)_{k\\in\\N}$ converge to $0$. Define $\\varpi_k\\in{\\ensuremath{\\mathscr{P}}}(\\mms^2)$ by\n\\begin{align*}\n\\varpi_k &:= (1+\\delta_k+\\varepsilon_k+\\zeta_k)^{-1}\\,\\big[\\tilde{\\pi}_k[\\mms_\\ll^2]\\,\\check{\\pi}_k[\\mms_\\ll^2]\\,\\pi_k + \\tilde{\\pi}_k\\mres \\{\\uptau=0\\}\\\\\n&\\qquad\\qquad + (\\delta_k + \\varepsilon_k+\\zeta_k)\\,\\meas_k^{\\otimes 2} + \\tilde{\\pi}_k[\\mms_\\ll^2]\\,\\check{\\pi}_k\\mres \\{\\uptau=0\\}\\big].\n\\end{align*}\nBy tracing back the definitions of $\\tilde{\\pi}_k$ and $\\check{\\pi}_k$ in Step 3, one verifies that $\\varpi_k$ is a coupling, not necessarily $\\smash{\\ell_p}$-optimal, of $\\nu_{k,0}$ and $\\nu_{k,1}$. 
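\n\nFor instance, the verification for the first marginal can be sketched as follows; here we use that $\\uptau > 0$ holds precisely on $\\mms_\\ll^2$, so that $\\tilde{\\pi}_k = \\tilde{\\pi}_k\\mres\\mms_\\ll^2 + \\tilde{\\pi}_k\\mres\\{\\uptau=0\\}$, and likewise for $\\check{\\pi}_k$. Since $(\\pr_1)_\\push\\pi_k = \\mu_{k,0} = \\check{\\pi}_k[\\mms_\\ll^2]^{-1}\\,(\\pr_1)_\\push(\\check{\\pi}_k\\mres\\mms_\\ll^2)$ as well as $(\\pr_1)_\\push\\check{\\pi}_k = \\hat{\\mu}_{k,0} = \\tilde{\\pi}_k[\\mms_\\ll^2]^{-1}\\,(\\pr_1)_\\push(\\tilde{\\pi}_k\\mres\\mms_\\ll^2)$, we obtain\n\\begin{align*}\n(\\pr_1)_\\push\\big[\\tilde{\\pi}_k[\\mms_\\ll^2]\\,\\check{\\pi}_k[\\mms_\\ll^2]\\,\\pi_k + \\tilde{\\pi}_k[\\mms_\\ll^2]\\,\\check{\\pi}_k\\mres\\{\\uptau=0\\}\\big] &= \\tilde{\\pi}_k[\\mms_\\ll^2]\\,\\hat{\\mu}_{k,0}\\\\\n&= (\\pr_1)_\\push\\tilde{\\pi}_k - (\\pr_1)_\\push(\\tilde{\\pi}_k\\mres\\{\\uptau=0\\}).\n\\end{align*}\nAdding the two remaining terms in the definition of $\\varpi_k$ and normalizing thus yields\n\\begin{align*}\n(\\pr_1)_\\push\\varpi_k = (1+\\delta_k+\\varepsilon_k+\\zeta_k)^{-1}\\,(\\tilde{\\rho}_{k,0}+\\delta_k+\\varepsilon_k+\\zeta_k)\\,\\meas_k = \\nu_{k,0}.\n\\end{align*}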
\n\n\\textbf{Step 5.2.} By \\eqref{Eq:Inequ rhok0}, we have \n\\begin{alignat*}{3}\n\\rho_{k,0} &\\leq \\check{\\pi}_k[\\mms_\\ll^2]^{-1}\\,\\tilde{\\pi}_k[\\mms_\\ll^2]^{-1}\\,(1+\\delta_k + \\varepsilon_k+\\zeta_k)\\,\\varrho_{k,0} & \\quad & \\meas_k\\textnormal{-a.e.},\\\\\n\\rho_{k,1} &\\leq \\check{\\pi}_k[\\mms_\\ll^2]^{-1}\\,\\tilde{\\pi}_k[\\mms_\\ll^2]^{-1}\\,(1+\\delta_k + \\varepsilon_k+\\zeta_k)\\,\\varrho_{k,1} & \\quad & \\meas_k\\textnormal{-a.e.}\n\\end{alignat*}\nIn the sequel, to clear up notation, given two $k$-dependent quantities $a,b\\in\\R$ we write $a\\geq_k b$ if there exists a sequence $(c_k)_{k\\in\\N}$ of positive real numbers converging to $1$ with $a\\geq c_k\\,b$ for every $k\\in\\N$. By definition of the inherent distortion coefficients, we obtain\n\\begin{align*}\n-{\\ensuremath{\\mathscr{T}}}_{K,N'}^{(t)}(\\pi_k) &= \\int_{\\mms^2}\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\rho_{k,0}(x^0)^{-1\/N'}\\d\\pi_k(x^0,x^1)\\\\\n&\\qquad\\qquad +\\int_{\\mms^2}\\tau_{K,N'}^{(t)}(\\uptau(x^0,x^1))\\,\\rho_{k,1}(x^1)^{-1\/N'}\\d\\pi_k(x^0,x^1)\\\\\n&\\geq_k \\int_{\\mms^2} \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\pi_k(x^0,x^1)\\\\\n&\\qquad\\qquad + \\int_{\\mms^2} \\tau_{K,N'}^{(t)}(\\uptau(x^0,x^1))\\,\\varrho_{k,1}(x^1)^{-1\/N'}\\d\\pi_k(x^0,x^1)\\\\\n&\\geq_k\\int_{\\mms^2} \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\varpi_k(x^0,x^1)\\\\\n&\\qquad\\qquad + \\int_{\\mms^2}\\tau_{K,N'}^{(t)}(\\uptau(x^0,x^1))\\,\\varrho_{k,1}(x^1)^{-1\/N'}\\d\\varpi_k(x^0,x^1)\\\\\n&\\qquad\\qquad - (1-t)\\int_{\\{\\uptau=0\\}}\\varrho_{k,0}(x^0)^{-1\/N'} \\d\\tilde{\\pi}_k(x^0,x^1)\\\\\n&\\qquad\\qquad - t\\int_{\\{\\uptau=0\\}} \\varrho_{k,1}(x^1)^{-1\/N'}\\d\\tilde{\\pi}_k(x^0,x^1)\\\\\n&\\qquad\\qquad - (1-t)\\int_{\\{\\uptau=0\\}}\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\check{\\pi}_k(x^0,x^1)\\\\\n&\\qquad\\qquad - t\\int_{\\{\\uptau=0\\}}\\varrho_{k,1}(x^1)^{-1\/N'}\\d\\check{\\pi}_k(x^0,x^1)\\\\\n&\\qquad\\qquad - c\\,(\\delta_k 
+ \\varepsilon_k+\\zeta_k)\\int_{\\mms} \\varrho_{k,0}(x^0)^{-1\/N'}\\d\\meas_k(x^0)\\\\\n&\\qquad\\qquad - c\\,(\\delta_k + \\varepsilon_k+\\zeta_k)\\int_{\\mms} \\varrho_{k,1}(x^1)^{-1\/N'}\\d\\meas_k(x^1),\n\\end{align*}\nwhere, recalling our choice $K < K_k$ for large enough $k\\in\\N$ and \\autoref{Cor:Bonnet-Myers},\n\\begin{align}\\label{Eq:c value}\nc := \\max\\{\\sup\\tau_{K,N'}^{(1-t)}\\circ\\uptau(\\mms^2),\\sup\\tau_{K,N'}^{(t)}\\circ\\uptau(\\mms^2)\\}.\n\\end{align}\nSince $\\varrho_{k,0} \\geq_k \\delta_k + \\varepsilon_k+\\zeta_k$ $\\meas_k$-a.e.~and $\\varrho_{k,1} \\geq_k \\delta_k+\\varepsilon_k+\\zeta_k$ $\\meas_k$-a.e., by definition of $\\delta_k$ and $\\varepsilon_k$ we obtain the estimates\n\\begin{align*}\n(\\delta_k + \\varepsilon_k+\\zeta_k)^{1-1\/N'}&\\geq_k (1-t)\\int_{\\{\\uptau=0\\}} \\varrho_{k,0}(x^0)^{-1\/N'}\\d\\tilde{\\pi}_k(x^0,x^1) \\\\\n&\\qquad\\qquad + t\\int_{\\{\\uptau=0\\}} \\varrho_{k,1}(x^1)^{-1\/N'}\\d\\tilde{\\pi}_k(x^0,x^1),\\\\\n(\\delta_k + \\varepsilon_k+\\zeta_k)^{1-1\/N'} &\\geq_k (1-t)\\int_{\\{\\uptau=0\\}} \\varrho_{k,0}(x^0)^{-1\/N'}\\d\\check{\\pi}_k(x^0,x^1)\\\\\n&\\qquad\\qquad + t\\int_{\\{\\uptau=0\\}} \\varrho_{k,1}(x^1)^{-1\/N'}\\d\\check{\\pi}_k(x^0,x^1),\\\\\n2\\,(\\delta_k + \\varepsilon_k+\\zeta_k)^{1-1\/N'} &\\geq_k (\\delta_k+\\varepsilon_k+\\zeta_k)\\int_{\\mms}\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\meas_k(x^0)\\\\\n&\\qquad\\qquad + (\\delta_k+\\varepsilon_k+\\zeta_k)\\int_{\\mms}\\varrho_{k,1}(x^1)^{-1\/N'}\\d\\meas_k(x^1),\n\\end{align*}\nand therefore\n\\begin{align*}\n-{\\ensuremath{\\mathscr{T}}}_{K,N'}^{(t)}(\\pi_k) &\\geq_k\\int_{\\mms^2} \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\varpi_k(x^0,x^1)\\\\\n&\\qquad\\qquad + \\int_{\\mms^2}\\tau_{K,N'}^{(t)}(\\uptau(x^0,x^1))\\,\\varrho_{k,1}(x^1)^{-1\/N'}\\d\\varpi_k(x^0,x^1)\\\\\n&\\qquad\\qquad -2\\,(c+1)\\,(\\delta_k + \\varepsilon_k+\\zeta_k)^{1-1\/N'}.\\textcolor{white}{\\int}\n\\end{align*}\n\n\\textbf{Step 
5.3.} Let $\\smash{\\mathfrak{r}^0,\\mathfrak{r}^1\\colon \\mms\\to {\\ensuremath{\\mathscr{P}}}(\\mms)}$ denote the ($k$-dependent) disintegrations of $\\varpi_k$ with respect to $\\pr_1$ and $\\pr_2$, respectively, i.e.\n\\begin{align*}\n{\\ensuremath{\\mathrm{d}}} \\varpi_k(x^0,x^1) = {\\ensuremath{\\mathrm{d}}}\\mathfrak{r}^0_{x^0}(x^1)\\d\\nu_{k,0}(x^0) = {\\ensuremath{\\mathrm{d}}}\\mathfrak{r}^1_{x^1}(x^0)\\d\\nu_{k,1}(x^1),\n\\end{align*}\nand define\n\\begin{align}\\label{Eq:Disint tau}\n\\begin{split}\nv_0(x^0) &:= \\int_{\\mms} \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\d\\mathfrak{r}_{x^0}^0(x^1),\\\\\nv_1(x^1) &:= \\int_{\\mms} \\tau_{K,N'}^{(t)}(\\uptau(x^0,x^1))\\d\\mathfrak{r}_{x^1}^1(x^0).\n\\end{split}\n\\end{align}\nMoreover, define $\\smash{\\nu_{\\infty,0}^k,\\nu_{\\infty,1}^k\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_\\infty)}$ by\n\\begin{align*}\n\\nu_{\\infty,0}^k &:= (1+\\delta_k + \\varepsilon_k+\\zeta_k)^{-1}\\,(\\rho_{\\infty,0}^{n_k} + \\delta_k +\\varepsilon_k+\\zeta_k)\\,\\meas_\\infty = \\varrho_{\\infty,0}^k\\,\\meas_\\infty,\\\\\n\\nu_{\\infty,1}^k &:= (1+\\delta_k + \\varepsilon_k+\\zeta_k)^{-1}\\,(\\rho_{\\infty,1}^{n_k} + \\delta_k +\\varepsilon_k+\\zeta_k)\\,\\meas_\\infty = \\varrho_{\\infty,1}^k\\,\\meas_\\infty,\n\\end{align*}\nand observe that\n\\begin{align}\\label{Eq:Observation!}\n\\begin{split}\n\\mathfrak{p}^k(\\nu_{\\infty,0}^k) &= \\nu_{k,0} = \\varrho_{k,0}\\,\\meas_k,\\\\\n\\mathfrak{p}^k(\\nu_{\\infty,1}^k) &= \\nu_{k,1} = \\varrho_{k,1}\\,\\meas_k.\n\\end{split}\n\\end{align}\nThen by Jensen's inequality,\n\\begin{align*}\n&\\int_{\\mms^2} \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\varpi_k(x^0,x^1)\\textcolor{white}{\\sum_0^1}\\\\\n&\\qquad\\qquad\\qquad\\qquad + \\int_{\\mms^2}\\tau_{K,N'}^{(t)}(\\uptau(x^0,x^1))\\,\\varrho_{k,1}(x^1)^{-1\/N'}\\d\\varpi_k(x^0,x^1)\\\\\n&\\qquad\\qquad = \\sum_{i=0}^1 \\int_{\\mms} 
\\varrho_{k,i}(x^i)^{1-1\/N'}\\,v_i(x^i)\\d\\meas_k(x^i)\\\\\n&\\qquad\\qquad = \\sum_{i=0}^1 \\int_{\\mms}\\Big[\\!\\int_{\\mms} \\varrho_{\\infty,i}^k(y^i)\\d\\mathfrak{p}_{x^i}^k(y^i)\\Big]^{1-1\/N'}v_i(x^i)\\d\\meas_k(x^i)\\\\\n&\\qquad\\qquad \\geq \\sum_{i=0}^1 \\int_{\\mms^2} \\varrho_{\\infty,i}^k(y^i)^{1-1\/N'}\\,v_i(x^i)\\d\\mathfrak{p}_{x^i}^k(y^i)\\d\\meas_k(x^i)\\\\\n&\\qquad\\qquad = \\sum_{i=0}^1 \\int_{\\mms^2} \\varrho_{\\infty,i}^k(y^i)^{1-1\/N'}\\,v_i(x^i)\\d\\mathfrak{p}_{y^i}^\\infty(x^i)\\d\\meas_\\infty(y^i).\n\\end{align*}\n\n\\textbf{Step 5.4.} Next, we estimate the latter sum from below. Using \\eqref{Eq:Disint tau} and \\eqref{Eq:Observation!},\n\\begin{align*}\n&\\int_{\\mms^2} \\varrho_{\\infty,0}^k(y^0)^{1-1\/N'}\\,v_0(x^0)\\d\\mathfrak{p}_{y^0}^\\infty(x^0)\\d\\meas_\\infty(y^0)\\\\\n&\\qquad\\qquad =\\int_{\\mms^4}\\d\\mathfrak{p}_{x^1}^k(y^1)\\d\\mathfrak{r}_{x^0}^0(x^1)\\d\\mathfrak{p}_{y^0}^\\infty(x^0)\\d\\meas_\\infty(y^0)\\\\\n&\\qquad\\qquad\\qquad\\qquad \\Big[\\varrho_{\\infty,0}^k(y^0)^{-1\/N'}\\,\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\frac{\\varrho_{\\infty,0}^k(y^0)\\,\\varrho_{\\infty,1}^k(y^1)}{\\varrho_{k,1}(x^1)}\\Big].\n\\end{align*}\n\nLet $\\varepsilon > 0$. 
Since $\\smash{\\tau_{K,N'}^{(1-t)}\\circ\\uptau}$ and $\\smash{\\tau_{K,N'}^{(t)}\\circ\\uptau}$ are uniformly continuous by compactness of $\\smash{\\mms^2}$, our choice of $K$, and possibly invoking \\autoref{Cor:Bonnet-Myers}, we fix $\\delta> 0$ such that\n\\begin{align*}\n\\big\\vert \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1)) - \\tau_{K,N'}^{(1-t)}(\\uptau(y^0,y^1))\\big\\vert &\\leq \\varepsilon,\\\\\n\\big\\vert \\tau_{K,N'}^{(t)}(\\uptau(x^0,x^1)) - \\tau_{K,N'}^{(t)}(\\uptau(y^0,y^1))\\big\\vert &\\leq \\varepsilon\n\\end{align*}\nfor every $(x^0,y^0,x^1,y^1)\\in A_\\delta$, where\n\\begin{align}\\label{Eq:A_delta}\nA_\\delta := \\{ z \\in\\mms^4 : \\met(z_1,z_2) + \\met(z_3,z_4) \\leq \\delta\\}.\n\\end{align}\nLastly, somewhat suggestively, for $k\\in\\N$ we define a coupling $\\smash{\\eta_k\\in\\Pi(\\nu_{\\infty,0}^k,\\nu_{\\infty,1}^k)}$, which is notably independent of $\\varepsilon$, by\n\\begin{align*}\n{\\ensuremath{\\mathrm{d}}}\\eta_k(y^0,y^1) &:= \\int_{\\mms^2} \\frac{\\varrho_{\\infty,0}^k(y^0)\\,\\varrho_{\\infty,1}^k(y^1)}{\\varrho_{k,0}(x^0)\\,\\varrho_{k,1}(x^1)}\\d\\mathfrak{p}_{x^0}^k(y^0)\\d\\mathfrak{p}_{x^1}^k(y^1)\\d\\varpi_k(x^0,x^1)\\\\\n&\\textcolor{white}{:}= \\int_{\\mms^2} \\frac{\\varrho_{\\infty,0}^k(y^0)\\,\\varrho_{\\infty,1}^k(y^1)}{\\varrho_{k,1}(x^1)}\\d\\mathfrak{p}_{x^1}^k(y^1)\\d\\mathfrak{r}_{x^0}^0(x^1)\\d\\mathfrak{p}_{y^0}^\\infty(x^0) \\d\\meas_\\infty(y^0)\\\\\n&\\textcolor{white}{:}= \\int_{\\mms^2} \\frac{\\varrho_{\\infty,0}^k(y^0)\\,\\varrho_{\\infty,1}^k(y^1)}{\\varrho_{k,0}(x^0)}\\d\\mathfrak{p}_{x^0}^k(y^0)\\d\\mathfrak{r}_{x^1}^1(x^0)\\d\\mathfrak{p}_{y^1}^\\infty(x^1)\\d\\meas_\\infty(y^1).\n\\end{align*}\nThe previous computations then entail\n\\begin{align*}\n&\\int_{\\mms^2} \\varrho_{\\infty,0}^k(y^0)^{1-1\/N'}\\,v_0(x^0)\\d\\mathfrak{p}_{y^0}^\\infty(x^0)\\d\\meas_\\infty(y^0)\\\\\n&\\qquad\\qquad \\geq -\\varepsilon+\\int_{A_\\delta} 
\\d\\mathfrak{p}_{x^1}^k(y^1)\\d\\mathfrak{r}_{x^0}^0(x^1)\\d\\mathfrak{p}_{y^0}^\\infty(x^0)\\d\\meas_\\infty(y^0)\\\\\n&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\Big[\\varrho_{\\infty,0}^k(y^0)^{-1\/N'}\\,\\tau_{K,N'}^{(1-t)}(\\uptau(y^0,y^1))\\,\\frac{\\varrho_{\\infty,0}^k(y^0)\\,\\varrho_{\\infty,1}^k(y^1)}{\\varrho_{k,1}(x^1)}\\Big]\\\\\n&\\qquad\\qquad\\qquad\\qquad + \\int_{A_\\delta^{\\ensuremath{\\mathsf{c}}}} \\d\\mathfrak{p}_{x^1}^k(y^1)\\d\\mathfrak{r}_{x^0}^0(x^1)\\d\\mathfrak{p}_{y^0}^\\infty(x^0)\\d\\meas_\\infty(y^0)\\\\\n&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\Big[\\varrho_{\\infty,0}^k(y^0)^{-1\/N'}\\,\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\frac{\\varrho_{\\infty,0}^k(y^0)\\,\\varrho_{\\infty,1}^k(y^1)}{\\varrho_{k,1}(x^1)}\\Big]\\\\\n&\\qquad\\qquad = - \\varepsilon + \\int_{\\mms^2} \\varrho_{\\infty,0}^k(y^0)^{-1\/N'}\\,\\tau_{K,N'}^{(1-t)}(\\uptau(y^0,y^1))\\d\\eta_k(y^0,y^1)\\\\\n&\\qquad\\qquad\\qquad\\qquad + \\int_{A_\\delta^{\\ensuremath{\\mathsf{c}}}} \\d\\mathfrak{p}_{x^1}^k(y^1)\\d\\mathfrak{r}_{x^0}^0(x^1)\\d\\mathfrak{p}_{y^0}^\\infty(x^0)\\d\\meas_\\infty(y^0)\\\\\n&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\Big[\\varrho_{\\infty,0}^k(y^0)^{-1\/N'}\\,\\big[\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))- \\tau_{K,N'}^{(1-t)}(\\uptau(y^0,y^1))\\big]\\\\\n&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad \\times \\frac{\\varrho_{\\infty,0}^k(y^0)\\,\\varrho_{\\infty,1}^k(y^1)}{\\varrho_{k,1}(x^1)}\\Big].\n\\end{align*}\n\n\\textbf{Step 5.5.} Let us estimate the latter error term. We first recall the definition $\\zeta_k := W_2(\\meas_k,\\meas_\\infty)$ from \\eqref{Eq:DEFS}. 
By definition of $A_\\delta$ and the value $c$ from \\eqref{Eq:c value}, the $\\meas$-a.e.~valid inequality for the density $\\smash{\\rho_{\\infty,1}^{n_k}}$ involved in the definition of $\\smash{\\varrho_{\\infty,1}^k}$ from \\autoref{Le:CM lemma}, as well as H\u00f6lder's inequality,\n\\begin{align*}\n&\\Big\\vert\\!\\int_{A_\\delta^{\\ensuremath{\\mathsf{c}}}} \\d\\mathfrak{p}_{x^1}^k(y^1)\\d\\mathfrak{r}_{x^0}^0(x^1)\\d\\mathfrak{p}_{y^0}^\\infty(x^0)\\d\\meas_\\infty(y^0)\\\\\n&\\qquad\\qquad\\qquad\\qquad \\Big[\\varrho_{\\infty,0}^k(y^0)^{-1\/N'}\\,\\big[\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))- \\tau_{K,N'}^{(1-t)}(\\uptau(y^0,y^1))\\big]\\\\\n&\\qquad\\qquad\\qquad\\qquad \\times \\frac{\\varrho_{\\infty,0}^k(y^0)\\,\\varrho_{\\infty,1}^k(y^1)}{\\varrho_{k,1}(x^1)}\\Big]\\Big\\vert\\\\\n&\\qquad\\qquad \\leq 2\\,c\\,\\delta^{-1}\\!\\int_{\\mms^4} \\d\\mathfrak{p}_{x^1}^k(y^1)\\d\\mathfrak{r}_{x^0}^0(x^1)\\d\\mathfrak{p}_{y^0}^\\infty(x^0)\\d\\meas_\\infty(y^0)\\\\\n&\\qquad\\qquad\\qquad\\qquad\\Big[\\varrho_{\\infty,0}^k(y^0)^{-1\/N'}\\,\\big[\\met(x^0,y^0) + \\met(x^1,y^1)\\big]\\,\\frac{\\varrho_{\\infty,0}^k(y^0)\\,\\varrho_{\\infty,1}^k(y^1)}{\\varrho_{k,1}(x^1)}\\Big]\\\\\n&\\qquad\\qquad \\leq_k 2\\,c\\,\\delta^{-1}\\!\\int_{\\mms^2}\\varrho_{\\infty,0}^k(y^0)^{1-1\/N'}\\,\\met(x^0,y^0)\\d\\mathfrak{q}_k(x^0,y^0)\\\\\n&\\qquad\\qquad\\qquad\\qquad + 2\\,c\\,\\delta^{-1}\\,\\zeta_k^{-1\/N'}\\!\\int_{\\mms} \\rho_{\\infty,1}(y^1)\\,\\met(x^1,y^1)\\d\\mathfrak{q}_k(x^1,y^1)\\\\\n&\\qquad\\qquad \\leq_k 2\\,c\\,\\delta^{-1}\\,\\big\\Vert\\rho_{\\infty,0}\\big\\Vert_{\\Ell^\\infty(\\mms,\\meas_\\infty)}^{1-1\/N'}\\,\\zeta_k + 2\\,c\\,\\delta^{-1}\\,\\Vert\\rho_{\\infty,1}\\Vert_{\\Ell^\\infty(\\mms,\\meas_\\infty)}\\,\\zeta_k^{1-1\/N'}\\!.\\!\\!\\!\\textcolor{white}{\\int}\n\\end{align*}\n\n\\textbf{Step 5.6.} Taking the estimates from Step 5.4 and Step 5.5 together, we 
get\n\\begin{align*}\n&\\int_{\\mms^2}\\varrho_{\\infty,0}^k(y^0)^{1-1\/N'}\\,v_0(x^0)\\d\\mathfrak{p}_{y^0}^\\infty(x^0)\\d\\meas_\\infty(y^0)\\\\\n&\\qquad\\qquad \\geq_k -\\varepsilon + \\int_{\\mms^2}\\varrho_{\\infty,0}^k(y^0)^{-1\/N'}\\,\\tau_{K,N'}^{(1-t)}(\\uptau(y^0,y^1))\\d\\eta_k(y^0,y^1)\\\\\n&\\qquad\\qquad\\qquad\\qquad - 2\\,c\\,\\delta^{-1}\\,\\big\\Vert\\rho_{\\infty,0}\\big\\Vert_{\\Ell^\\infty(\\mms,\\meas_\\infty)}^{1-1\/N'}\\,\\zeta_k\\textcolor{white}{\\int}\\\\\n&\\qquad\\qquad\\qquad\\qquad - 2\\,c\\,\\delta^{-1}\\,\\Vert\\rho_{\\infty,1}\\Vert_{\\Ell^\\infty(\\mms,\\meas_\\infty)}\\,\\zeta_k^{1-1\/N'}\\!.\\textcolor{white}{\\int}\n\\end{align*}\nAnalogously, for the second summand in the last part of Step 5.3 we obtain\n\\begin{align*}\n&\\int_{\\mms^2} \\varrho_{\\infty,1}^k(y^1)^{1-1\/N'}\\,v_1(x^1)\\d\\mathfrak{p}_{y^1}^\\infty(x^1)\\d\\meas_\\infty(y^1)\\\\\n&\\qquad\\qquad \\geq_k -\\varepsilon + \\int_{\\mms^2}\\varrho_{\\infty,1}^k(y^1)^{-1\/N'}\\,\\tau_{K,N'}^{(t)}(\\uptau(y^0,y^1))\\d\\eta_k(y^0,y^1)\\\\\n&\\qquad\\qquad\\qquad\\qquad - 2\\,c\\,\\delta^{-1}\\,\\big\\Vert\\rho_{\\infty,1}\\big\\Vert_{\\Ell^\\infty(\\mms,\\meas_\\infty)}^{1-1\/N'}\\,\\zeta_k\\textcolor{white}{\\int}\\\\\n&\\qquad\\qquad\\qquad\\qquad - 2\\,c\\,\\delta^{-1}\\,\\Vert\\rho_{\\infty,0}\\Vert_{\\Ell^\\infty(\\mms,\\meas_\\infty)}\\,\\zeta_k^{1-1\/N'}\\!.\\textcolor{white}{\\int}\n\\end{align*}\n\n\\textbf{Step 6.} \\textit{Passage to the limit.} Let $(a_k)_{k\\in\\N}$ be a given sequence of normalization constants in $[1,\\infty)$ as provided by \\autoref{Le:CM lemma}, i.e. 
\n\\begin{align*}\n\\rho_{\\infty,0}^{n_k} &\\leq a_k\\,\\rho_{\\infty,0}\\quad\\meas\\textnormal{-a.e.},\\\\\n\\rho_{\\infty,1}^{n_k} &\\leq a_k\\,\\rho_{\\infty,1}\\quad\\meas\\textnormal{-a.e.}\n\\end{align*}\n\n\\textbf{Step 6.1.} For $k\\in\\N$, we define $\\smash{\\tilde{\\nu}_{\\infty,0}^k,\\tilde{\\nu}_{\\infty,1}^k}\\in\\smash{{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_\\infty)}$ by\n\\begin{align*}\n\\tilde{\\nu}_{\\infty,0}^k &:=(a_k^2+\\delta_k+\\varepsilon_k+\\zeta_k)^{-1}\\,(a_k^2\\,\\rho_{\\infty,0} + \\delta_k+\\varepsilon_k+\\zeta_k)\\,\\meas_\\infty = \\tilde{\\varrho}_{\\infty,0}^k\\,\\meas_\\infty,\\\\\n\\tilde{\\nu}_{\\infty,1}^k &:= (a_k^2+\\delta_k+\\varepsilon_k+\\zeta_k)^{-1}\\,(a_k^2\\,\\rho_{\\infty,1} + \\delta_k+\\varepsilon_k+\\zeta_k)\\,\\meas_\\infty = \\tilde{\\varrho}_{\\infty,1}^k\\,\\meas_\\infty.\n\\end{align*}\nWe turn $\\eta_k$ into a coupling $\\smash{\\alpha_k \\in\\Pi(\\tilde{\\nu}_{\\infty,0}^k,\\tilde{\\nu}_{\\infty,1}^k)}$ by setting\n\\begin{align*}\n\\alpha_k &:= (a_k^2+\\delta_k+\\varepsilon_k+\\zeta_k)^{-1} \\big[(1+\\delta_k+\\varepsilon_k+\\zeta_k)\\,\\eta_k\\\\\n&\\qquad\\qquad + a_k^2\\,\\mu_{\\infty,0}\\otimes\\mu_{\\infty,1} - \\tilde{\\mu}_{k,0}\\otimes\\tilde{\\mu}_{k,1}\\big].\n\\end{align*}\nThis construction yields\n\\begin{align*}\n&\\int_{\\mms^2}\\varrho_{\\infty,0}^k(y^0)^{-1\/N'}\\,\\tau_{K,N'}^{(1-t)}(\\uptau(y^0,y^1))\\d\\eta_k(y^0,y^1)\\\\\n&\\qquad\\qquad\\qquad\\qquad + \\int_{\\mms^2}\\varrho_{\\infty,1}^k(y^1)^{-1\/N'}\\,\\tau_{K,N'}^{(t)}(\\uptau(y^0,y^1))\\d\\eta_k(y^0,y^1)\\\\\n&\\qquad\\qquad \\geq_k \\int_{\\mms^2} \\tilde{\\varrho}_{\\infty,0}^k(y^0)^{-1\/N'}\\,\\tau_{K,N'}^{(1-t)}(\\uptau(y^0,y^1))\\d\\alpha_k(y^0,y^1)\\\\\n&\\qquad\\qquad\\qquad\\qquad - c\\int_\\mms \\varrho_{\\infty,0}^k(y^0)^{-1\/N'}\\,\\big\\vert \\rho_{\\infty,0}(y^0) - \\tilde{\\rho}_{k,0}(y^0)\\big\\vert\\d\\meas_\\infty(y^0)\\\\\n&\\qquad\\qquad\\qquad\\qquad + 
\\int_{\\mms^2}\\tilde{\\varrho}_{\\infty,1}^k(y^1)^{-1\/N'}\\,\\tau_{K,N'}^{(t)}(\\uptau(y^0,y^1))\\d\\alpha_k(y^0,y^1)\\\\\n&\\qquad\\qquad\\qquad\\qquad - c\\int_\\mms \\varrho_{\\infty,1}^k(y^1)^{-1\/N'}\\,\\big\\vert \\rho_{\\infty,1}(y^1) - \\tilde{\\rho}_{k,1}(y^1)\\big\\vert\\d\\meas_\\infty(y^1).\n\\end{align*}\n\n\\textbf{Step 6.2.} Taking together the above estimates with those obtained in Step 5 and invoking the $\\Ell^1$-convergence asserted in \\autoref{Le:CM lemma}, we obtain\n\\begin{align*}\n-\\limsup_{k\\to\\infty} {\\ensuremath{\\mathscr{T}}}_{K,N'}^{(t)}(\\pi_k) \\geq - \\limsup_{k\\to\\infty}{\\ensuremath{\\mathscr{T}}}_{K,N'}^{(t)}(\\alpha_k) - 2\\varepsilon.\n\\end{align*}\nNow, by Prokhorov's theorem, the sequence $(\\alpha_k)_{k\\in\\N}$ converges weakly to some $\\alpha\\in\\Pi(\\mu_{\\infty,0},\\mu_{\\infty,1})$ up to a nonrelabeled subsequence. Since $\\varepsilon$ was arbitrary and did not influence the construction of $\\alpha_k$, by \\autoref{Le:Const perturb} we thus get\n\\begin{align}\\label{Eq:Eqeqeqeq}\n-\\limsup_{k\\to\\infty}{\\ensuremath{\\mathscr{T}}}_{K,N'}^{(t)}(\\pi_k)\\geq -{\\ensuremath{\\mathscr{T}}}_{K,N'}^{(t)}(\\alpha).\n\\end{align}\n\n\\textbf{Step 6.3.} Now we send $k\\to\\infty$ in \\eqref{Eq:TCD COND INVOKING}. For every $k\\in\\N$, there exists some plan $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_k\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_{k,0},\\mu_{k,1})}$ representing the curve $(\\mu_{k,t})_{t\\in[0,1]}$. \\autoref{Le:Villani lemma for geodesic} allows us to extract a nonrelabeled subsequence of $\\smash{({\\ensuremath{\\boldsymbol{\\pi}}}_k)_{k\\in\\N}}$ converging weakly to some $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_{\\infty,0},\\mu_{\\infty,1})}$. 
The assignment $\\mu_{\\infty,t} := (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}$ produces a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_{\\infty,t})_{t\\in[0,1]}$ connecting $\\mu_{\\infty,0}$ to $\\mu_{\\infty,1}$; in particular, $\\mu_{k,t} = (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_k\\to \\mu_{\\infty,t}$ weakly for every $t\\in[0,1]$. Lower semicontinuity of R\u00e9nyi's entropy on ${\\ensuremath{\\mathscr{P}}}(\\mms)$ by compactness of $\\mms$ and \\eqref{Eq:Eqeqeqeq} thus yield, for every $t\\in[0,1]$ and every $N'\\geq N$,\n\\begin{align}\\label{Eq:Renyi inequ}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{\\infty,t})\\leq \\limsup_{k\\to\\infty} {\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{k,t}) \\leq \\limsup_{k\\to\\infty}{\\ensuremath{\\mathscr{T}}}_{K,N'}^{(t)}(\\pi_k) \\leq {\\ensuremath{\\mathscr{T}}}_{K,N'}^{(t)}(\\alpha).\n\\end{align}\n\n\\textbf{Step 7.} \\textit{Proof of the $\\smash{\\ell_p}$-optimality of $\\alpha$.} To conclude the desired property leading towards $\\smash{\\mathrm{TCD}_p(K,N)}$ for ${\\ensuremath{\\mathscr{X}}}_\\infty$, at least under the restrictions of Step 2, we have to prove the causality and the $\\smash{\\ell_p}$-optimality of $\\alpha$. Both are argued similarly to Step 5.4; we first concentrate on the proof of the estimate entailing $\\smash{\\ell_p}$-optimality, and then outline how to modify this argument to prove $\\smash{\\alpha[\\mms_\\ll^2]=1}$.\n\nGiven any $\\varepsilon>0$, fix $\\delta > 0$ with the property\n\\begin{align*}\n\\big\\vert\\uptau(x^0,x^1) - \\uptau(y^0,y^1)\\big\\vert \\leq \\varepsilon\n\\end{align*}\nfor every $(x^0,y^0,x^1,y^1)\\in\\mms^4$ belonging to the set $\\smash{A_\\delta}$ from \\eqref{Eq:A_delta}. 
As in Step 5.5 and tracing back the definitions of all considered couplings, we obtain\n\\begin{align*}\n&\\int_{\\mms^2}\\uptau^p(y^0,y^1)\\d\\alpha(y^0,y^1) = \\lim_{k\\to\\infty}\\int_{\\mms^2}\\uptau^p(y^0,y^1)\\d\\alpha_k(y^0,y^1)\\\\\n&\\qquad\\qquad \\geq \\liminf_{k\\to\\infty} \\int_{\\mms^2}\\uptau^p(y^0,y^1)\\d\\eta_k(y^0,y^1)\\\\\n&\\qquad\\qquad \\geq \\liminf_{k\\to\\infty} \\int_{\\mms^2} \\uptau^p(x^0,x^1)\\d\\varpi_k(x^0,x^1) - \\varepsilon\\\\\n&\\qquad\\qquad\\qquad\\qquad -2\\,c\\,\\delta^{-1}\\,\\Vert\\rho_{\\infty,0}\\Vert_{\\Ell^\\infty(\\mms,\\meas_\\infty)}\\limsup_{k\\to\\infty} W_2(\\meas_k,\\meas_\\infty)\\!\\!\\!\\textcolor{white}{\\int}\\\\\n&\\qquad\\qquad\\qquad\\qquad -2\\,c\\,\\delta^{-1}\\,\\Vert\\rho_{\\infty,1}\\Vert_{\\Ell^\\infty(\\mms,\\meas_\\infty)}\\limsup_{k\\to\\infty} W_2(\\meas_k,\\meas_\\infty)\\!\\!\\!\\textcolor{white}{\\int}\\\\\n&\\qquad\\qquad \\geq \\liminf_{k\\to\\infty}\\int_{\\mms^2}\\uptau^p(x^0,x^1)\\d\\pi_k(x^0,x^1)-\\varepsilon\\\\\n&\\qquad\\qquad = \\liminf_{k\\to\\infty} \\int_{\\mms^2}\\uptau^p(x^0,x^1)\\d\\bar{\\pi}_k(x^0,x^1)-\\varepsilon\\\\\n&\\qquad\\qquad \\geq \\liminf_{k\\to\\infty}\\int_{\\mms^2}\\uptau^p(x^0,x^1)\\d\\check{\\pi}_k(x^0,x^1)-\\varepsilon\\geq \\ell_p(\\mu_{\\infty,0},\\mu_{\\infty,1})^p-\\varepsilon.\n\\end{align*}\nIn the equality in the third-to-last step, we used that both $\\smash{\\pi_k,\\bar{\\pi}_k\\in\\Pi_\\ll(\\mu_{k,0},\\mu_{k,1})}$ are $\\smash{\\ell_p}$-optimal; the last inequality is due to \\eqref{Eq:Lp calc}.\n\nThe relation $\\smash{\\alpha[\\mms_\\ll^2]=1}$ is argued similarly by replacing, given $\\varepsilon > 0$, $\\uptau$ by a nonnegative function $\\phi\\in\\Cont(\\mms^2)$ obeying $\\smash{\\phi(\\mms_\\ll^2)=\\{1\\}}$, $\\sup\\phi(\\mms^2)\\leq 1$, and\n\\begin{align*}\n\\alpha[\\mms_\\ll^2] \\geq \\int_{\\mms^2}\\phi\\d\\alpha - \\varepsilon.\n\\end{align*}\nBoth results together and the arbitrariness of $\\varepsilon$ yield the claim.\n\n\\textbf{Step 
8.} \\textit{Relaxation of the assumptions on $\\mu_{\\infty,0}$ and $\\mu_{\\infty,1}$.} Now we outline how to get rid of the assumption $\\smash{\\rho_{\\infty,0},\\rho_{\\infty,1}\\in\\Ell^\\infty(\\mms,\\meas_\\infty)}$ from Step 2. \n\nIf $\\smash{\\rho_{\\infty,0}}$ and $\\smash{\\rho_{\\infty,1}}$ are not $\\meas$-essentially bounded, given any $i\\in\\N$ we set\n\\begin{align*}\nE_i := \\{\\rho_{\\infty,0} \\leq i,\\, \\rho_{\\infty,1}\\leq i\\}.\n\\end{align*}\nLet $\\smash{\\pi\\in\\Pi_\\ll(\\mu_{\\infty,0},\\mu_{\\infty,1})}$ be an $\\smash{\\ell_p}$-optimal coupling, and set\n\\begin{align*}\n\\pi_i := \\pi[E_i]^{-1}\\,\\pi\\mres E_i\n\\end{align*}\nprovided $\\pi[E_i]>0$. By restriction \\cite[Lem.~2.10]{cavalletti2020}, $\\pi_i$ is an $\\smash{\\ell_p}$-optimal coupling of its marginals $\\smash{\\lambda_{\\infty,0}^i,\\lambda_{\\infty,1}^i\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_\\infty)}$; moreover, the pair $\\smash{(\\lambda_{\\infty,0}^i,\\lambda_{\\infty,1}^i)}$ is strongly timelike $p$-dualizable. Let $\\smash{\\dot{\\pi}_i\\in\\Pi_\\ll(\\lambda_{\\infty,0}^i,\\lambda_{\\infty,1}^i)}$ be an $\\smash{\\ell_p}$-optimal coupling as constructed in the previous steps. Define $\\smash{\\beta_i\\in\\Pi_\\ll(\\mu_{\\infty,0},\\mu_{\\infty,1})}$ by\n\\begin{align*}\n\\beta_i := \\pi[E_i]\\,\\dot{\\pi}_i + \\pi\\mres E_i^{\\ensuremath{\\mathsf{c}}},\n\\end{align*}\nand note that $\\beta_i$ is an $\\smash{\\ell_p}$-optimal coupling of its marginals. 
Moreover, we observe that, for every $t\\in[0,1]$ and every $N'\\geq N$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{T}}}_{K,N'}^{(t)}(\\dot{\\pi}_i) &\\leq_i -\\int_{\\mms^2} \\rho_{\\infty,0}(y^0)^{-1\/N'}\\,\\tau_{K,N'}^{(1-t)}(\\uptau(y^0,y^1))\\d\\dot{\\pi}_i(y^0,y^1)\\\\\n&\\qquad\\qquad -\\int_{\\mms^2} \\rho_{\\infty,1}(y^1)^{-1\/N'}\\,\\tau_{K,N'}^{(t)}(\\uptau(y^0,y^1))\\d\\dot{\\pi}_i(y^0,y^1)\\\\\n&\\leq_i -\\int_{\\mms^2} \\rho_{\\infty,0}(y^0)^{-1\/N'}\\,\\tau_{K,N'}^{(1-t)}(\\uptau(y^0,y^1))\\d\\beta_i(y^0,y^1)\\\\\n&\\qquad\\qquad + c \\int_{E_i^{\\ensuremath{\\mathsf{c}}}} \\rho_{\\infty,0}(y^0)^{-1\/N'}\\d\\pi(y^0,y^1)\\\\\n&\\qquad\\qquad -\\int_{\\mms^2}\\rho_{\\infty,1}(y^1)^{-1\/N'}\\,\\tau_{K,N'}^{(t)}(\\uptau(y^0,y^1))\\d\\beta_i(y^0,y^1)\\\\\n&\\qquad\\qquad + c\\int_{E_i^{\\ensuremath{\\mathsf{c}}}}\\rho_{\\infty,1}(y^1)^{-1\/N'}\\d\\pi(y^0,y^1).\n\\end{align*}\n\nBy Prokhorov's theorem, $(\\beta_i)_{i\\in\\N}$ converges weakly to some $\\smash{\\beta\\in\\Pi(\\mu_{\\infty,0},\\mu_{\\infty,1})}$ which, thanks to Portmanteau's theorem, is a causal coupling. By stability \\cite[Thm. 2.14]{cavalletti2020}, we thus infer the $\\smash{\\ell_p}$-optimality of $\\beta$. Together with \\autoref{Le:Const perturb} and H\u00f6lder's inequality, this addresses the right-hand side of the desired inequality; the left-hand side is treated with the compactness argument from Step 6.3.\n\n\\textbf{Step 9.} \\textit{Passing from $K$ and $N$ to $K_\\infty$ and $N_\\infty$.} Lastly, under the restrictions from Step 4, the previous arguments imply $\\smash{\\mathrm{TCD}_p(K,N)}$ for ${\\ensuremath{\\mathscr{X}}}_\\infty$ for every $K < K_\\infty$ and every $N > N_\\infty$. 
Given $\\smash{(\\mu_{\\infty,0},\\mu_{\\infty,1})\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_\\infty)^2}$ strongly timelike $p$-dualizable, letting $K$ and $N$ approach $K_\\infty$ and $N_\\infty$, respectively, and using compactness arguments as in Step 6.2 and Step 6.3 gives the existence of a proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_{\\infty,t})_{t\\in[0,1]}$ from $\\mu_{\\infty,0}$ to $\\mu_{\\infty,1}$ and a timelike $p$-dualizing coupling $\\smash{\\pi\\in\\Pi_\\ll(\\mu_{\\infty,0},\\mu_{\\infty,1})}$ witnessing the desired inequality for the R\u00e9nyi entropy for every $K < K_\\infty$ and every $N' > N_\\infty$. The semicontinuity properties of both sides in the respective parameters yield $\\smash{\\mathrm{TCD}_p(K_\\infty,N_\\infty)}$. This completes the proof.\n\\end{proof}\n\n\\begin{remark}\\label{RE:POTEN} Very roughly speaking, the relevant convergences in the above proof are justified by \\emph{uniform continuity}, recall e.g.~Step 5.4. This is considerably weaker than \\emph{Lipschitz continuity} of the quantity inside the distortion coefficients, which has been used in the proof of \\cite[Thm.~3.1]{sturm2006b} in the metric case at a similar point, yet is useless here since $\\uptau$ is not a distance. Our proof thus has the potential to admit straightforward adaptations to settings with continuous potentials other than $\\uptau$ which do not come from a distance either.\n\\end{remark}\n\n\\begin{remark}\\label{Re:A remarkable} A remarkable byproduct of the above proof is that both $\\smash{\\mathrm{TCD}_p^*(K,N)}$ and $\\smash{\\mathrm{TCD}_p(K,N)}$ are weakly stable under convergence of \\emph{normalized} Lorentzian pre-length spaces with respect to the following Lorentzian modification of Sturm's transport distance $\\boldsymbol{\\mathsf{D}}$ \\cite[Def. 
3.2]{sturm2006a}: it is still defined in terms of $\\met$, but the inherent metric measure isometric embeddings should also respect the given Lorentzian structures, and are required to map into spaces obeying the regularity assumptions from item \\ref{La:Blurr1reg} of \\autoref{Def:Convergence}.\n\\end{remark}\n\n\\subsection{Good geodesics}\\label{Sub:Good TCD} In this section, following \\cite{braun2022}, see also \\cite{cavalletti2017,rajala2012a,rajala2012b}, we show the existence of timelike proper-time parametrized $\\smash{\\ell_p}$-geodesics with $\\meas$-densities uniformly in $\\Ell^\\infty$ in time under mild assumptions on their endpoints. This treatise does not require any nonbranching condition on the underlying space. The proofs of the results to follow are mostly analogous to those of \\cite{braun2022} and hence only outlined. \n\nWe regard the corresponding result from \\autoref{Th:Good geos TCD} as a key to develop the notion of \\emph{timelike weak gradients} on Lorentzian geodesic spaces by using so-called \\emph{test plans} --- many of which should exist by \\autoref{Th:Good geos TCD} --- following \\cite{ambrosio2014a}, and to prove a Lorentzian analogue of the \\emph{Sobolev-to-Lipschitz property} \\cite[Subsec.~4.1.3]{gigli2013}.\n\nThe crucial result providing the critical threshold for \\eqref{Eq:Thresh} is the following.\n\n\\begin{lemma}\\label{Le:Lalelu} Assume $\\smash{\\mathrm{wTCD}_p^*(K,N)}$ for $p\\in(0,1)$, $K\\in\\R$, and $N\\in[1,\\infty)$. Let $\\smash{(\\mu_0,\\mu_1)=(\\rho_0\\,\\meas,\\rho_1\\,\\meas)\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ be strongly timelike $p$-dualizable with $\\rho_0,\\rho_1\\in\\Ell^\\infty(\\mms,\\meas)$. Let $D$ be any real number no smaller than $\\sup\\uptau(\\supp\\mu_0\\times\\supp\\mu_1)$. 
Then there exists a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ from $\\mu_0$ to $\\mu_1$ such that $\\mu_t=\\rho_t\\,\\meas\\in\\Dom(\\Ent_\\meas)$ for every $t\\in[0,1]$, and\n\\begin{align*}\n\\meas\\big[\\{\\rho_{1\/2}>0\\}\\big] \\geq {\\ensuremath{\\mathrm{e}}}^{-D\\sqrt{K^-N}\/2}\\,\\max\\{ \\Vert\\rho_0\\Vert_{\\Ell^\\infty(\\mms,\\meas)},\\Vert\\rho_1\\Vert_{\\Ell^\\infty(\\mms,\\meas)}\\}^{-1}.\n\\end{align*}\n\\end{lemma}\n\n\\begin{proof} Recall that $\\smash{\\mathrm{wTCD}_p^*(K,N)}$ implies $\\smash{\\mathrm{wTCD}_p^*(K^-,N)}$ by \\autoref{Pr:Consistency TCD}. Let $\\smash{(\\mu_t)_{t\\in[0,1]}}$ be a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic from $\\mu_0$ to $\\mu_1$ and $\\smash{\\pi\\in\\Pi_\\ll(\\mu_0,\\mu_1)}$ be some $\\smash{\\ell_p}$-optimal coupling obeying the inequality defining the condition $\\smash{\\mathrm{wTCD}_p^*(K^-,N)}$ as in \\autoref{Def:TCD*}. Thanks to \\autoref{Le:Stu}, we obtain $\\mu_t\\in\\Dom(\\Ent_\\meas)$ for every $t\\in[0,1]$; in particular, we have $\\smash{\\mu_t = \\rho_t\\,\\meas\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$. 
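We shall use the following elementary estimate, which underlies \\autoref{Re:Lower bounds sigma} granted the usual hyperbolic-sine representation of the coefficients $\\sigma_{K,N}^{(t)}$ for negative curvature parameters: for every $a > 0$ and every $\\theta\\in (0,D]$,\n\\begin{align*}\n\\frac{\\sinh(\\theta a\/2)}{\\sinh(\\theta a)} = \\frac{1}{2\\cosh(\\theta a\/2)} \\geq \\frac{1}{2}\\,{\\ensuremath{\\mathrm{e}}}^{-\\theta a\/2} \\geq \\frac{1}{2}\\,{\\ensuremath{\\mathrm{e}}}^{-Da\/2},\n\\end{align*}\nby $\\sinh(2x) = 2\\sinh(x)\\cosh(x)$ and $\\cosh(x)\\leq {\\ensuremath{\\mathrm{e}}}^x$ for $x\\geq 0$; below, this is applied with $a := \\sqrt{K^-\/N}$, the factor $1\/2$ being absorbed by summing the two resulting terms. 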
Moreover, by \\autoref{Re:Lower bounds sigma},\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_N(\\mu_{1\/2})&\\leq -\\int_{\\mms^2} \\sigma_{K^-,N}^{(1\/2)}(\\uptau(x^0,x^1))\\,\\rho_0(x^0)^{-1\/N}\\d\\pi(x^0,x^1)\\\\\n&\\qquad\\qquad -\\int_{\\mms^2} \\sigma_{K^-,N}^{(1\/2)}(\\uptau(x^0,x^1))\\,\\rho_1(x^1)^{-1\/N}\\d\\pi(x^0,x^1)\\\\\n&\\leq -{\\ensuremath{\\mathrm{e}}}^{-D\\sqrt{K^-\/N}\/2}\\,\\max\\{\\Vert\\rho_0\\Vert_{\\Ell^\\infty(\\mms,\\meas)}, \\Vert\\rho_1\\Vert_{\\Ell^\\infty(\\mms,\\meas)}\\}^{-1\/N}.\\!\\!\\!\\textcolor{white}{\\int}\n\\end{align*}\nOn the other hand, we have $\\smash{{\\ensuremath{\\mathscr{S}}}_N(\\mu_{1\/2}) \\geq -\\meas[\\{\\rho_{1\/2}>0\\}]^{1\/N}}$ by Jensen's inequality, and rearranging terms provides the claim.\n\\end{proof}\n\n\\begin{theorem}\\label{Th:Good geos TCD} Under the hypotheses of \\autoref{Le:Lalelu}, there exists a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ from $\\mu_0$ to $\\mu_1$ such that for every $t\\in[0,1]$, $\\smash{\\mu_t = \\rho_t\\,\\meas\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ and\n\\begin{align}\\label{Eq:Thresh}\n\\Vert\\rho_t\\Vert_{\\Ell^\\infty(\\mms,\\meas)} \\leq {\\ensuremath{\\mathrm{e}}}^{D\\sqrt{K^-N}\/2}\\,\\max\\{\\Vert\\rho_0\\Vert_{\\Ell^\\infty(\\mms,\\meas)},\\Vert\\rho_1\\Vert_{\\Ell^\\infty(\\mms,\\meas)}\\},\n\\end{align}\nwhere\n\\begin{align*}\nD := \\sup\\uptau(\\supp\\mu_0\\times\\supp\\mu_1).\n\\end{align*}\n\\end{theorem}\n\n\\begin{proof} We construct the required geodesic at dyadic times in $[0,1]$ by a bisection argument and then define the rest of the geodesic by completion. By induction, given $n\\in\\N_0$ we assume that measures $\\smash{{\\ensuremath{\\boldsymbol{\\alpha}}}^0, \\dots, {\\ensuremath{\\boldsymbol{\\alpha}}}^n\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ have already been constructed with the following properties. 
For every $\\smash{k\\in\\{1,\\dots,2^n\\}}$, the pair $\\smash{((\\eval_{(k-1)2^{-n}})_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^n,(\\eval_{k2^{-n}})_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^n)\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)^2}$ is strongly timelike $p$-dualizable, and\n\\begin{align*}\n\\sup\\uptau(\\supp\\,(\\eval_{(k-1)2^{-n}})_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^n\\times\\supp\\,(\\eval_{k2^{-n}})_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^n) \\leq 2^{-n}\\,D.\n\\end{align*}\n\n\\textbf{Step 1.} \\textit{Minimization of an appropriate functional.} Set\n\\begin{align*}\nc_{n+1} := {\\ensuremath{\\mathrm{e}}}^{2^{-n-2}D\\sqrt{K^-N}}\\,\\max\\{\\Vert\\rho_0\\Vert_{\\Ell^\\infty(\\mms,\\meas)},\\Vert\\rho_1\\Vert_{\\Ell^\\infty(\\mms,\\meas)}\\}\n\\end{align*}\nand define the functional $\\smash{{\\ensuremath{\\mathscr{F}}}_{c_{n+1}}\\colon {\\ensuremath{\\mathscr{P}}}(\\mms)\\to [0,1]}$ by\n\\begin{align}\\label{Eq:Functional FC}\n{\\ensuremath{\\mathscr{F}}}_{c_{n+1}}(\\mu) := \\big\\Vert (\\rho- c_{n+1})^+\\big\\Vert_{\\Ell^1(\\mms,\\meas)} + \\mu_\\perp[\\mms]\n\\end{align}\nsubject to the decomposition $\\mu = \\rho\\,\\meas + \\mu_\\perp$. It measures how far $\\rho$ deviates from being bounded from above by $c_{n+1}$, and how much the input $\\mu$ fails to be $\\meas$-absolutely continuous. Let $\\smash{k\\in\\{1,\\dots,2^{n+1}-1\\}}$ be odd. As for \\cite[Lem.~3.13]{braun2022}, we prove that $\\smash{{\\ensuremath{\\mathscr{F}}}_{c_{n+1}}\\circ \\eval_{1\/2}}$ admits a minimizer $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_k^{n+1}\\in \\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_{(k-1)2^{-n-1}},\\mu_{(k+1)2^{-n-1}})}$. 
By gluing, we find a measure $\\smash{{\\ensuremath{\\boldsymbol{\\alpha}}}^{n+1}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ such that\n\\begin{align*}\n(\\Restr_{(k-1)2^{-n-1}}^{(k+1)2^{-n-1}})_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^{n+1} = {\\ensuremath{\\boldsymbol{\\pi}}}_k^{n+1}\n\\end{align*}\nfor every $k$ as above. Here, given $s,t\\in [0,1]$ with $s < t$, $\\smash{\\Restr_s^t}$ denotes the map which restricts a given geodesic to the interval $[s,t]$ and affinely reparametrizes it over $[0,1]$.\n\n\\textbf{Step 2.} For every $c' > c_{n+1}$, and any odd $k\\in\\smash{\\{1,\\dots,2^{n+1}-1\\}}$, arguing as in \\cite[Prop.~3.14]{braun2022} and using \\autoref{Le:Lalelu} we get\n\\begin{align}\\label{Eq:Fc inequality}\n\\begin{split}\n&\\inf\\{ {\\ensuremath{\\mathscr{F}}}_{c'}((\\eval_{1\/2})_\\push{\\ensuremath{\\boldsymbol{\\pi}}}) : {\\ensuremath{\\boldsymbol{\\pi}}}\\in \\mathrm{OptTGeo}_{\\ell_p}^\\uptau((\\eval_{(k-1)2^{-n-1}})_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^{n+1},\\\\\n&\\qquad\\qquad (\\eval_{(k+1)2^{-n-1}})_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^{n+1}) \\}=0.\n\\end{split}\n\\end{align}\nThe support of all considered measures belongs to the compact set $J(\\mu_0,\\mu_1)$; thus, using \\cite[Cor.~3.15]{braun2022} we get \\eqref{Eq:Fc inequality} for $c'$ replaced by $c_{n+1}$. Hence, $\\smash{(\\eval_{k2^{-n-1}})_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^{n+1}} = \\rho_{k2^{-n-1}}\\,\\meas\\in \\smash{{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ and, computing the geometric sum in the exponent,\n\\begin{align}\\label{Eq:DENS BD}\n\\Vert\\rho_{k2^{-n-1}}\\Vert_{\\Ell^\\infty(\\mms,\\meas)} \\leq {\\ensuremath{\\mathrm{e}}}^{D\\sqrt{K^-N}\/2}\\,\\max\\{\\Vert\\rho_0\\Vert_{\\Ell^\\infty(\\mms,\\meas)}, \\Vert\\rho_1\\Vert_{\\Ell^\\infty(\\mms,\\meas)}\\}.\n\\end{align}\n\n\\textbf{Step 3.} \\textit{Completion.} Let $\\smash{({\\ensuremath{\\boldsymbol{\\alpha}}}^n)_{n\\in\\N}}$ be the sequence in $\\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ iteratively constructed according to Step 1 and Step 2. 
Using \\autoref{Le:Villani lemma for geodesic}, this sequence converges weakly, up to a subsequence, to some $\\smash{\\ell_p}$-optimal geodesic plan $\\smash{{\\ensuremath{\\boldsymbol{\\alpha}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$. In particular, the assignment $\\mu_t := (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}$ defines a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ from $\\mu_0$ to $\\mu_1$. Since the functional ${\\ensuremath{\\mathscr{F}}}_c$, defined analogously to \\eqref{Eq:Functional FC} in terms of the threshold\n\\begin{align*}\nc := {\\ensuremath{\\mathrm{e}}}^{D\\sqrt{K^-N}\/2}\\,\\max\\{\\Vert\\rho_0\\Vert_{\\Ell^\\infty(\\mms,\\meas)}, \\Vert\\rho_1\\Vert_{\\Ell^\\infty(\\mms,\\meas)}\\},\n\\end{align*}\nis weakly lower semicontinuous in ${\\ensuremath{\\mathscr{P}}}(J(\\mu_0,\\mu_1))$ \\cite[Lem.~3.13]{braun2022}, \\eqref{Eq:DENS BD} yields ${\\ensuremath{\\mathscr{F}}}_c(\\mu_t) =0$ for every $t\\in[0,1]$, which is the claim.\n\\end{proof}\n\n\\subsection{Equivalence with the entropic TCD condition}\\label{Sub:Equiv TCDs} In this section, we prove the equivalence of $\\smash{\\mathrm{TCD}_p^*(K,N)}$ with its entropic counterpart $\\smash{\\mathrm{TCD}_p^e(K,N)}$ introduced in \\cite{cavalletti2020}, provided the underlying space is timelike $p$-essentially nonbranching according to \\autoref{Def:Essentially nonbranching}. We also obtain the equivalence of the respective weak versions with their strong counterparts; see \\autoref{Th:Equivalence TCD* and TCDe}. Yet another characterization will be obtained in \\autoref{Pr:MDPTS} below.\n\nThroughout this section, in addition to our standing assumptions on ${\\ensuremath{\\mathscr{X}}}$ we suppose the latter to be timelike $p$-essentially nonbranching for a fixed $p\\in (0,1)$. 
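Throughout, $\\smash{\\sigma_{K,N}^{(t)}}$ denotes the usual distortion coefficients, whose standard definition we recall for orientation (cf.~\\cite{bacher2010}): given $K\\in\\R$, $N\\in(0,\\infty)$, $t\\in[0,1]$, and $\\theta\\geq 0$,\n\\begin{align*}\n\\sigma_{K,N}^{(t)}(\\theta) := \\begin{cases} \\infty & \\textnormal{if }K\\theta^2\\geq N\\pi^2,\\\\ \\sin(t\\theta\\sqrt{K\/N})\/\\sin(\\theta\\sqrt{K\/N}) & \\textnormal{if }0<K\\theta^2<N\\pi^2,\\\\ t & \\textnormal{if }K\\theta^2 = 0,\\\\ \\sinh(t\\theta\\sqrt{-K\/N})\/\\sinh(\\theta\\sqrt{-K\/N}) & \\textnormal{if }K\\theta^2<0.\\end{cases}\n\\end{align*}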
\n\nFor the convenience of the reader, let us recall the following notions \\cite[Def.~3.2, Prop.~3.3]{cavalletti2020}, for which we define ${\\ensuremath{\\mathscr{U}}}_N \\colon {\\ensuremath{\\mathscr{P}}}(\\mms)\\to [0,\\infty]$, $N\\in (0,\\infty)$, by\n\\begin{align}\\label{Eq:Expo Boltzmann}\n{\\ensuremath{\\mathscr{U}}}_N(\\mu) := {\\ensuremath{\\mathrm{e}}}^{-\\Ent_\\meas(\\mu)\/N}.\n\\end{align}\n\n\\begin{definition}\\label{Def:CD entropic} Let $K\\in\\R$ and $N\\in (0,\\infty)$. \n\\begin{enumerate}[label=\\textnormal{\\alph*.}]\n\\item We say that ${\\ensuremath{\\mathscr{X}}}$ satis\\-fies the \\emph{entropic timelike curvature-dimension condition} $\\smash{\\mathrm{TCD}_p^e(K,N)}$ if for every timelike $p$-dualizable pair $(\\mu_0,\\mu_1)\\in\\Dom(\\Ent_\\meas)^2$, there exist a timelike proper-time parametrized $\\smash{\\ell_p}$-geo\\-desic $(\\mu_t)_{t\\in [0,1]}$ connecting $\\mu_0$ and $\\mu_1$ as well as a timelike $p$-dualizing coupling $\\pi\\in\\smash{\\Pi_\\ll(\\mu_0,\\mu_1)}$ such that for every $t\\in [0,1]$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{U}}}_N(\\mu_t) \\geq \\sigma_{K,N}^{(1-t)}\\big[\\Vert\\uptau\\Vert_{\\Ell^2(\\mms^2,\\pi)}\\big]\\,{\\ensuremath{\\mathscr{U}}}_N(\\mu_0) + \\sigma_{K,N}^{(t)}\\big[\\Vert\\uptau\\Vert_{\\Ell^2(\\mms^2,\\pi)}\\big]\\,{\\ensuremath{\\mathscr{U}}}_N(\\mu_1).\n\\end{align*}\n\\item If the previous claim holds for every strongly timelike $p$-dualizable $(\\mu_0,\\mu_1)\\in({\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)\\cap\\Dom(\\Ent_\\meas))^2$, we say that ${\\ensuremath{\\mathscr{X}}}$ satisfies the \\emph{weak entropic timelike curvature-dimension condition} $\\smash{\\mathrm{wTCD}_p^e(K,N)}$.\n\\end{enumerate}\n\\end{definition}\n\n\\begin{theorem}\\label{Th:Equivalence TCD* and TCDe} The following statements are equivalent for every given $K\\in\\R$ and $N\\in[1,\\infty)$.\n\\begin{enumerate}[label=\\textnormal{\\textcolor{black}{(}\\roman*\\textcolor{black}{)}}]\n\\item\\label{La:1} The 
condition $\\smash{\\mathrm{TCD}_p^*(K,N)}$ holds.\n\\item\\label{La:1.5} The condition $\\smash{\\mathrm{wTCD}_p^*(K,N)}$ holds.\n\\item\\label{La:2} For every timelike $p$-dualizable $(\\mu_0,\\mu_1) = (\\rho_0\\,\\meas,\\rho_1\\,\\meas)\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)^2$ there exists an $\\smash{\\ell_p}$-optimal geodesic plan ${\\ensuremath{\\boldsymbol{\\pi}}}\\in\\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ such that for every $t\\in [0,1]$, we have $\\smash{(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}=\\rho_t\\,\\meas\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$, and for every $N'\\geq N$, the inequality\n\\begin{align*}\n\\rho_t(\\gamma_t)^{-1\/N'}&\\geq \\sigma_{K,N'}^{(1-t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_0(\\gamma_0)^{-1\/N'} + \\sigma_{K,N'}^{(t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_1(\\gamma_1)^{-1\/N'}\n\\end{align*}\nholds for ${\\ensuremath{\\boldsymbol{\\pi}}}$-a.e.~$\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$.\n\\item\\label{La:3} The condition $\\smash{\\mathrm{TCD}_p^e(K,N)}$ holds.\n\\item\\label{La:3.5} The condition $\\smash{\\mathrm{wTCD}_p^e(K,N)}$ holds.\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{remark} The exceptional set in \\ref{La:2} may depend on $t$ and $N'$.\n\nBy \\autoref{Pr:iii to iv} and the proofs of \\autoref{Pr:ii to iii} and \\autoref{Pr:v to iii}, any of the claims in \\autoref{Th:Equivalence TCD* and TCDe} is equivalent to the following weaker version of \\ref{La:2}: for every pair $\\smash{(\\mu_0,\\mu_1)=(\\rho_0\\,\\meas,\\rho_1\\,\\meas)\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)^2}$ such that $\\smash{\\supp\\mu_0\\times\\supp\\mu_1\\subset\\mms_\\ll^2}$ and $\\rho_0,\\rho_1\\in\\Ell^\\infty(\\mms,\\meas)$, there exists $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ such that for every $t\\in[0,1]$, we have 
$\\smash{(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$ and\n\\begin{align*}\n\\rho_t(\\gamma_t)^{-1\/N}&\\geq \\sigma_{K,N}^{(1-t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_0(\\gamma_0)^{-1\/N} + \\sigma_{K,N}^{(t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_1(\\gamma_1)^{-1\/N}\n\\end{align*}\nholds for ${\\ensuremath{\\boldsymbol{\\pi}}}$-a.e.~$\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$, where $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}} = \\rho_t\\,\\meas$.\n\nAn analogous note applies to \\autoref{Th:Equiv TCD with geo}, \\autoref{Th:Equivalence TMCP* and TMCPe}, and \\autoref{Th:Equivalence TMCP}.\n\\end{remark}\n\nIt is clear that \\ref{La:1} implies \\ref{La:1.5}, and that \\ref{La:3} yields \\ref{La:3.5}. Moreover, \\ref{La:2} implies \\ref{La:1} by integration. The proofs of \\ref{La:1.5} implying \\ref{La:2}, \\ref{La:2} implying \\ref{La:3}, and \\ref{La:3.5} implying \\ref{La:2} are, for the sake of clarity, outsourced to \\autoref{Pr:ii to iii}, \\autoref{Pr:iii to iv}, and \\autoref{Pr:v to iii}, respectively. \n\n\\begin{remark}\\label{Re:Uniq} Below, we occasionally use two results which are only established later in \\autoref{Ch:TMCP}, namely the following.\n\\begin{itemize}\n\\item First, the condition $\\smash{\\mathrm{wTCD}_p^*(K,N)}$ implies the timelike measure-contraction property $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ according to \\autoref{Def:TMCP}, cf.~\\autoref{Pr:TMCP to TCD}.\n\\item Second, on timelike $p$-essentially nonbranching spaces satisfying the latter condition --- or $\\smash{\\mathrm{TMCP}_p^e(K,N)}$ according to \\autoref{Def:TMCPe} ---, chronological $\\smash{\\ell_p}$-optimal couplings and $\\smash{\\ell_p}$-optimal geodesic plans with sufficiently well-behaved endpoints are unique, cf.~\\autoref{Th:Uniqueness couplings}, \\autoref{Th:Uniqueness geodesics}, and \\autoref{Re:From TNB to TENB}. 
\n\\end{itemize}\nAs no result of this section is used in \\autoref{Ch:TMCP}, no circular reasoning occurs.\n\nThe key point of these results here is that the timelike proper-time parametrized $\\smash{\\ell_p}$-geodesics and the $\\smash{\\ell_p}$-optimal couplings from \\autoref{Def:TCD*}, \\autoref{Def:TCD}, as well as \\autoref{Def:TMCP}, which are a priori unrelated, are in fact induced by the same $\\smash{\\ell_p}$-optimal geodesic plan.\n\nThis means that we must have these uniqueness results at our disposal \\emph{a priori} to establish \\autoref{Th:Equivalence TCD* and TCDe}. In particular, these do \\emph{not} follow from \\autoref{Th:Equivalence TCD* and TCDe}, \\autoref{Re:From TNB to TENB}, and \\cite[Thm.~3.19, Thm.~3.20]{cavalletti2020}.\n\\end{remark}\n\n\\begin{proposition}\\label{Pr:ii to iii} Assume $\\smash{\\mathrm{wTCD}_p^*(K,N)}$ for some $K\\in\\R$ and $N\\in [1,\\infty)$. Then for every timelike $p$-dualizable pair $(\\mu_0,\\mu_1) = (\\rho_0\\,\\meas,\\rho_1\\,\\meas)\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)^2$ there exists some ${\\ensuremath{\\boldsymbol{\\pi}}}\\in\\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ such that for every $t\\in [0,1]$, we have $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}= \\rho_t\\, \\meas\\in\\smash{{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$, and for every $N'\\geq N$, the inequality\n\\begin{align*}\n\\rho_t(\\gamma_t)^{-1\/N'}\\geq \\sigma_{K,N'}^{(1-t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_0(\\gamma_0)^{-1\/N'} + \\sigma_{K,N'}^{(t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_1(\\gamma_1)^{-1\/N'}\n\\end{align*}\nholds for ${\\ensuremath{\\boldsymbol{\\pi}}}$-a.e.~$\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$, where $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}} = \\rho_t\\,\\meas$.\n\\end{proposition}\n\n\\begin{proof} \\textbf{Step 1.} \\emph{Reinforcement of the assumptions on $\\mu_0$ and $\\mu_1$.} We first assume that 
$\\smash{\\supp\\mu_0\\times\\supp\\mu_1\\subset\\mms_\\ll^2}$, that $\\mu_0,\\mu_1\\in{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)$, and that $\\rho_0,\\rho_1\\in\\Ell^\\infty(\\mms,\\meas)$. As mentioned above, timelike $p$-dualizing couplings and $\\ell_p$-optimal geodesic plans will then be unique throughout Steps 1, 2, and 3; we will use this without further notice. \n\n\n\\textbf{Step 2.} \\textit{Restriction.} Observe that the pair $(\\mu_0,\\mu_1)$ is in fact strongly timelike $p$-dualizable thanks to \\autoref{Re:Strong timelike}. Fix a $\\cap$-stable generator $\\{M_n : n\\in\\N\\}$ of the Borel $\\sigma$-field of $\\mms$; to be precise, the $M_n$'s should be chosen to generate the relative Borel $\\sigma$-field of the compact set $\\supp\\mu_0\\cup\\supp\\mu_1$, but we ignore this minor technicality by assuming compactness of $\\mms$ instead until Step 4. Given $n\\in\\N$ we cover $\\mms$ by the mutually disjoint Borel sets \n\\begin{align*}\nL_1 &:= M_1\\cap \\dots \\cap M_n,\\\\\nL_2 &:= M_1\\cap\\dots\\cap M_{n-1}\\cap M_n^{\\ensuremath{\\mathsf{c}}},\\,\\dots\\\\\nL_{2^n} &:= M_1^{\\ensuremath{\\mathsf{c}}} \\cap \\dots \\cap M_n^{\\ensuremath{\\mathsf{c}}}.\n\\end{align*}\nGiven the unique timelike $p$-dualizing coupling $\\smash{\\pi\\in\\Pi_\\ll(\\mu_0,\\mu_1)}$ of $\\mu_0$ and $\\mu_1$, let us define $\\smash{\\mu_0^{ij},\\mu_1^{ij}\\in{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)\\cap\\Dom(\\Ent_\\meas)}$ by \n\\begin{align*}\n\\mu_0^{ij}[A] &:= \\lambda_{ij}^{-1}\\,\\pi[(A\\cap L_i) \\times L_j] = \\varrho_0^{ij}\\,\\meas,\\\\\n\\mu_1^{ij}[A] &:= \\lambda_{ij}^{-1}\\,\\pi[L_i\\times (A\\cap L_j)] = \\varrho_1^{ij}\\,\\meas\n\\end{align*}\nfor $i,j\\in\\{1,\\dots,2^n\\}$ provided $\\lambda_{ij} := \\pi[L_i\\times L_j]> 0$. Then \n\\begin{align}\\label{Eq:MSING}\n\\begin{split}\n\\mu_0^{ij} &\\perp \\mu_0^{i'j'},\\\\\n\\mu_1^{ij} &\\perp \\mu_1^{i'j'}\n\\end{split}\n\\end{align}\nfor every $i,i',j,j'\\in\\{1,\\dots,2^n\\}$ with $i\\neq i'$ or $j\\neq j'$. 
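To illustrate the above cover, in the case $n=2$ the cells are, up to the enumeration convention, the four sets\n\\begin{align*}\nL_1 = M_1\\cap M_2,\\quad L_2 = M_1\\cap M_2^{\\ensuremath{\\mathsf{c}}},\\quad L_3 = M_1^{\\ensuremath{\\mathsf{c}}}\\cap M_2,\\quad L_4 = M_1^{\\ensuremath{\\mathsf{c}}}\\cap M_2^{\\ensuremath{\\mathsf{c}}},\n\\end{align*}\nand passing from $n$ to $n+1$ refines each cell into two.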
Also, $\\smash{(\\mu_0^{ij},\\mu_1^{ij})}$ is strongly timelike $p$-dualizable by \\autoref{Re:Strong timelike}. By the consistency part of \\autoref{Pr:Consistency TCD}, the uniqueness results mentioned above, and $\\mathrm{wTCD}_p^*(K,N)$, there exists some $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}^{ij}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0^{ij},\\mu_1^{ij})}$ with \n\\begin{align}\\label{Eq:THa inequ}\n\\begin{split}\n&\\int_{\\mathrm{TGeo}^\\uptau(\\mms)} \\sigma_{K,N'}^{(1-t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\varrho_0^{ij}(\\gamma_0)^{-1\/N'}\\d{\\ensuremath{\\boldsymbol{\\pi}}}^{ij}(\\gamma)\\\\\n&\\qquad\\qquad\\qquad\\qquad + \\int_{\\mathrm{TGeo}^\\uptau(\\mms)} \\sigma_{K,N'}^{(t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\varrho_1^{ij}(\\gamma_1)^{-1\/N'}\\d{\\ensuremath{\\boldsymbol{\\pi}}}^{ij}(\\gamma)\\\\\n&\\qquad\\qquad \\leq \\int_{\\mathrm{TGeo}^\\uptau(\\mms)} \\varrho_t^{ij}(\\gamma_t)^{-1\/N'}\\d{\\ensuremath{\\boldsymbol{\\pi}}}^{ij}(\\gamma)\n\\end{split}\n\\end{align}\nfor every $t\\in [0,1]$ and every $N'\\geq N$; note that since $\\smash{\\mu_0^{ij},\\mu_1^{ij}\\in {\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)\\cap\\Dom(\\Ent_\\meas)}$ as a consequence of the assumption $\\rho_0,\\rho_1\\in\\Ell^\\infty(\\mms,\\meas)$, we have $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}^{ij} = \\smash{\\varrho_t^{ij}}\\,\\meas \\in\\smash{{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$ thanks to \\autoref{Le:Stu}.\n\n\\textbf{Step 3.} \\textit{Pasting plans together and conclusion.}\nBy uniqueness, we get\n\\begin{align*}\n{\\ensuremath{\\boldsymbol{\\pi}}} = \\lambda_{11}\\,{\\ensuremath{\\boldsymbol{\\pi}}}^{11} + \\lambda_{12}\\,{\\ensuremath{\\boldsymbol{\\pi}}}^{12} + \\dots + \\lambda_{2^n2^n}\\,{\\ensuremath{\\boldsymbol{\\pi}}}^{2^n2^n}\n\\end{align*}\nfor every $n\\in\\N$, since the right-hand side belongs to $\\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$. 
In particular, setting $\\smash{\\mu_t := (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}} = \\rho_t\\,\\meas}$, $t\\in[0,1]$ --- the $\\meas$-ab\\-solute continuity stemming from \\autoref{Le:Stu} and again uniqueness ---, we have\n\\begin{align*}\n\\rho_t = \\lambda_{11}\\,\\varrho_t^{11} + \\lambda_{12}\\,\\varrho_t^{12} + \\dots + \\lambda_{2^n2^n}\\,\\varrho_t^{2^n2^n}\\quad\\meas\\text{-a.e.}\n\\end{align*}\nMoreover, \\eqref{Eq:MSING} and \\autoref{Le:Mutually singular} imply\n\\begin{align*}\n(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}^{ij} \\perp (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}^{i'j'}\n\\end{align*}\nfor every $t\\in (0,1)$ and every $i,i',j,j'\\in\\{1,\\dots,2^n\\}$ with $i\\neq i'$ or $j\\neq j'$, i.e.\n\\begin{align*}\n\\meas\\big[\\{\\varrho_t^{ij} > 0\\} \\cap \\{\\varrho_t^{i'j'}>0\\}\\big] = 0.\n\\end{align*}\nSetting $G_{ij} := (\\eval_0,\\eval_1)^{-1}(L_i\\times L_j)$, by construction we obtain from \\eqref{Eq:THa inequ} that\n\\begin{align*}\n&\\int_{G_{ij}} \\sigma_{K,N'}^{(1-t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_0(\\gamma_0)^{-1\/N'}\\d{\\ensuremath{\\boldsymbol{\\pi}}}(\\gamma)\\\\\n&\\qquad\\qquad\\qquad\\qquad + \\int_{G_{ij}} \\sigma_{K,N'}^{(t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_1(\\gamma_1)^{-1\/N'}\\d{\\ensuremath{\\boldsymbol{\\pi}}}(\\gamma)\\\\\n&\\qquad\\qquad \\leq \\int_{G_{ij}} \\rho_t(\\gamma_t)^{-1\/N'}\\d{\\ensuremath{\\boldsymbol{\\pi}}}(\\gamma).\n\\end{align*}\nSince $i,j\\in\\{1,\\dots,2^n\\}$ and $n\\in\\N$ were arbitrary, the claim follows.\n\n\\textbf{Step 4.} \\textit{Removing the restrictions on $\\mu_0$ and $\\mu_1$.} We initially address the first two restrictions in Step 1, then the last one. 
\n\n\\textbf{Step 4.1.} If the pair $\\smash{(\\mu_0,\\mu_1) \\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)^2}$ with $\\rho_0,\\rho_1\\in\\Ell^\\infty(\\mms,\\meas)$ is merely timelike $p$-dualizable, let $\\pi\\in\\Pi_\\ll(\\mu_0,\\mu_1)$ be a corresponding timelike $p$-dualizing coupling. Since $\\smash{\\mms_\\ll^2}$ is open and $\\met$ is separable, we cover $\\smash{\\supp\\pi\\cap \\mms_\\ll^2}$ by countably many (not necessarily disjoint) relatively compact rectangles $A_i\\times B_i$ with $\\smash{\\bar{A}_i\\times\\bar{B}_i \\subset \\mms_\\ll^2}$, where $A_i,B_i\\subset\\mms$ are open, $i\\in\\N$. Set $\\pi^0 := 0$, and define the measure $\\pi^i$ on $\\mms^2$ inductively by\n\\begin{align*}\n\\pi^i := (\\pi - \\pi^0 - \\dots - \\pi^{i-1})\\mres (A_i\\times B_i).\n\\end{align*}\nThen $\\smash{\\pi = \\pi^1 + \\pi^2 + \\dots}$ by construction. Define $\\smash{\\hat{\\pi}^i\\in{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms^2)}$ by\n\\begin{align*}\n\\hat{\\pi}^i := \\pi^i[\\mms^2]^{-1}\\,\\pi^i\n\\end{align*}\nprovided $\\smash{\\pi^i[\\mms^2]> 0}$, and consider its marginals\n\\begin{align*}\n\\mu_0^i := (\\pr_1)_\\push\\hat{\\pi}^i = \\varrho_0^i\\,\\meas,\\\\\n\\mu_1^i := (\\pr_2)_\\push\\hat{\\pi}^i = \\varrho_1^i\\,\\meas.\n\\end{align*} \nBy \\autoref{Re:Strong timelike}, the pair $\\smash{(\\mu_0^i,\\mu_1^i)}$ is strongly timelike $p$-dualizable for every $i\\in\\N$. 
Thus, the previous steps imply the existence of $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}^i\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0^i,\\mu_1^i)}$ such that for every $t\\in (0,1)$, we have $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}^i = \\smash{\\varrho_t^i}\\,\\meas\\in\\smash{{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$, and for every $N'\\geq N$,\n\\begin{align}\\label{Eq:The Inequality rhoT}\n\\varrho_t^i(\\gamma_t)^{-1\/N'} \\geq \\sigma_{K,N'}^{(1-t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\varrho_0^i(\\gamma_0)^{-1\/N'} + \\sigma_{K,N'}^{(t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\varrho_1^i(\\gamma_1)^{-1\/N'}\n\\end{align}\nholds for $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}^i}$-a.e.~$\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$. By construction,\n\\begin{align*}\n\\mu_0^i &\\perp \\mu_0^j,\\\\\n\\mu_1^i &\\perp \\mu_1^j\n\\end{align*}\nfor every $i,j\\in\\N$ with $i\\neq j$, and from \\autoref{Le:Mutually singular} we derive\n\\begin{align*}\n(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}^i \\perp (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}^j.\n\\end{align*}\nNote that the measure\n\\begin{align*}\n{\\ensuremath{\\boldsymbol{\\pi}}} := \\pi^1[\\mms^2]\\,{\\ensuremath{\\boldsymbol{\\pi}}}^1 + \\pi^2[\\mms^2]\\,{\\ensuremath{\\boldsymbol{\\pi}}}^2 + \\dots\n\\end{align*}\nis an $\\smash{\\ell_p}$-optimal geodesic plan interpolating $\\mu_0$ and $\\mu_1$. 
It satisfies $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}} = \\rho_t\\,\\meas\\in \\smash{{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$ for every $t\\in[0,1]$, and \n\\begin{align*}\n\\rho_t = \\pi^1[\\mms^2]\\,\\varrho_t^1 + \\pi^2[\\mms^2]\\,\\varrho_t^2 + \\dots\\quad\\meas\\textnormal{-a.e.}\n\\end{align*}\nTogether with \\eqref{Eq:The Inequality rhoT} this discussion implies, for ${\\ensuremath{\\boldsymbol{\\pi}}}$-a.e.~$\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$,\n\\begin{align*}\n\\rho_t(\\gamma_t)^{-1\/N'} \\geq \\sigma_{K,N'}^{(1-t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_0(\\gamma_0)^{-1\/N'} + \\sigma_{K,N'}^{(t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_1(\\gamma_1)^{-1\/N'}.\n\\end{align*}\n\n\\textbf{Step 4.2.} Finally, given any timelike $p$-dualizable pair $(\\mu_0,\\mu_1)= (\\rho_0\\,\\meas,\\rho_1\\,\\meas)\\in\\smash{{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)^2}$, for every $i,j\\in\\N$ we set \n\\begin{align*}\nE_i &:= \\{i-1 \\leq \\rho_0 < i\\},\\\\\nF_j &:= \\{j-1 \\leq \\rho_1 < j\\}.\n\\end{align*}\nBy restriction \\cite[Lem.~2.10]{cavalletti2020}, $\\smash{(\\mu_0^i,\\mu_1^j)}$ is timelike $p$-dualizable, where \n\\begin{align*}\n\\mu_0^i &:= \\mu_0[E_i]^{-1}\\,\\mu_0\\mres E_i = \\rho_0^i\\,\\meas,\\\\\n\\mu_1^j &:= \\mu_1[F_j]^{-1}\\,\\mu_1\\mres F_j = \\rho_1^j\\,\\meas\n\\end{align*}\nprovided $\\mu_0[E_i] > 0$ as well as $\\mu_1[F_j]>0$. Clearly $\\smash{\\rho_0^i,\\rho_1^j\\in\\Ell^\\infty(\\mms,\\meas)}$, and a similar argument as above leads to the conclusion. We omit the details.\n\\end{proof}\n\n\\begin{proposition}\\label{Pr:iii to iv} Let $K\\in\\R$ and $N\\in [1,\\infty)$. 
Assume that for every timelike $p$-dualizable pair $(\\mu_0,\\mu_1)=(\\rho_0\\,\\meas,\\rho_1\\,\\meas)\\in\\Dom(\\Ent_\\meas)^2$, there is $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ such that for every $t\\in [0,1]$, we have $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}} = \\rho_t\\,\\meas\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)$ and\n\\begin{align}\\label{Eq:rhot in}\n\\rho_t(\\gamma_t)^{-1\/N}\\geq \\sigma_{K,N}^{(1-t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_0(\\gamma_0)^{-1\/N} + \\sigma_{K,N}^{(t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_1(\\gamma_1)^{-1\/N}\n\\end{align}\nholds for ${\\ensuremath{\\boldsymbol{\\pi}}}$-a.e.~$\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$. Then ${\\ensuremath{\\mathscr{X}}}$ obeys $\\smash{\\mathrm{TCD}_p^e(K,N)}$.\n\\end{proposition}\n\n\\begin{proof} The assignment $\\mu_t := (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}$ creates a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ from $\\mu_0$ to $\\mu_1$. Moreover, $\\smash{\\pi := (\\eval_0,\\eval_1)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\Pi_\\ll(\\mu_0,\\mu_1)}$ forms a timelike $p$-dualizing coupling of $\\mu_0$ and $\\mu_1$. Taking logarithms on both sides of \\eqref{Eq:rhot in} and recalling \\eqref{Eq:Distortion coeff property}, we get\n\\begin{align*}\n-\\frac{1}{N}\\log\\rho_t(\\gamma_t) \\geq {\\ensuremath{\\mathrm{G}}}_t\\Big[\\!-\\!\\frac{1}{N}\\log\\rho_0(\\gamma_0), -\\frac{1}{N}\\log\\rho_1(\\gamma_1), \\frac{K}{N}\\,\\uptau^2(\\gamma_0,\\gamma_1)\\Big]\n\\end{align*}\nfor ${\\ensuremath{\\boldsymbol{\\pi}}}$-a.e.~$\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$; recall that the function ${\\ensuremath{\\mathrm{G}}}_t$ is defined in \\eqref{Eq:GtHt}. 
Integrating this inequality with respect to ${\\ensuremath{\\boldsymbol{\\pi}}}$ and using Jensen's inequality together with joint convexity of ${\\ensuremath{\\mathrm{G}}}_t$ yields\n\\begin{align*}\n-\\frac{1}{N}\\Ent_\\meas(\\mu_t) \\geq {\\ensuremath{\\mathrm{G}}}_t\\Big[\\!-\\!\\frac{1}{N}\\Ent_\\meas(\\mu_0), -\\frac{1}{N}\\Ent_\\meas(\\mu_1), \\frac{K}{N}\\,\\big\\Vert\\uptau\\big\\Vert_{\\Ell^2(\\mms^2,\\pi)}^2\\Big].\n\\end{align*}\nHere, in the case $K>0$ and $\\smash{\\Vert\\uptau\\Vert_{\\Ell^2(\\mms^2,\\pi)} = \\infty}$ the right-hand side is consistently interpreted as $\\infty$, which implies $\\Ent_\\meas(\\mu_t) = -\\infty$; for $K<0$ and $\\smash{\\Vert\\uptau\\Vert_{\\Ell^2(\\mms^2,\\pi)}}=\\infty$ the right-hand side has an evident interpretation. Exponentiating both sides now yields the desired $\\smash{\\mathrm{TCD}_p^e(K,N)}$ condition.\n\\end{proof}\n\n\\begin{proposition}\\label{Pr:v to iii} Assume $\\smash{\\mathrm{wTCD}_p^e(K,N)}$ for some $K\\in\\R$ and $N\\in [1,\\infty)$. Then for every timelike $p$-dualizable pair $(\\mu_0,\\mu_1) = (\\rho_0\\,\\meas,\\rho_1\\,\\meas)\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)^2$ there exists ${\\ensuremath{\\boldsymbol{\\pi}}}\\in\\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ such that for every $t\\in [0,1]$, we have $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}} = \\rho_t\\,\\meas$, and for every $N'\\geq N$, the inequality \n\\begin{align*}\n\\rho_t(\\gamma_t)^{-1\/N'}\\geq \\sigma_{K,N'}^{(1-t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_0(\\gamma_0)^{-1\/N'} + \\sigma_{K,N'}^{(t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_1(\\gamma_1)^{-1\/N'}\n\\end{align*}\nholds for ${\\ensuremath{\\boldsymbol{\\pi}}}$-a.e.~$\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$, where $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}} = \\rho_t\\,\\meas$. 
\n\\end{proposition}\n\n\\begin{proof} The proof is similar to the one of \\autoref{Pr:ii to iii} --- whose notation we retain here ---, whence we only outline the main differences. Again, it suffices to consider the case $\\smash{\\supp\\mu_0\\times\\supp\\mu_1\\subset\\mms_\\ll^2}$, $\\mu_0,\\mu_1\\in{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)$, and $\\smash{\\rho_0,\\rho_1\\in\\Ell^\\infty(\\mms,\\meas)}$. In particular, these restrictions imply $\\mu_0,\\mu_1\\in\\Dom(\\Ent_\\meas)$. The parallel uniqueness results outlined in \\autoref{Re:Uniq} under $\\smash{\\mathrm{TCD}_p^e(K,N)}$ which are implicitly used below are due to \\cite[Thm.~3.19, Thm.~3.20]{cavalletti2020}, see also \\autoref{Re:From TNB to TENB}.\n\nLet $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ be an $\\smash{\\ell_p}$-optimal geodesic plan interpolating $\\mu_0$ and $\\mu_1$. Given any $i,j\\in\\{1,\\dots,2^n\\}$, we define\n\\begin{align*}\n\\mu_0^{ij} &:= \\lambda_{ij}^{-1}\\,(\\pr_1)_\\push\\big[(\\eval_0,\\eval_1)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}\\mres(L_i\\times L_j)\\big] = \\varrho_0^{ij}\\,\\meas\\\\\n\\mu_1^{ij} &:= \\lambda_{ij}^{-1}\\,(\\pr_2)_\\push\\big[(\\eval_0,\\eval_1)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}\\mres(L_i\\times L_j)\\big] = \\varrho_1^{ij}\\,\\meas\n\\end{align*}\nprovided $\\smash{\\lambda_{ij} := {\\ensuremath{\\boldsymbol{\\pi}}}[G_{ij}] > 0}$. Recall that $\\smash{\\mathrm{wTCD}_p^e(K,N)}$ implies $\\smash{\\mathrm{wTCD}_p^e(K,N')}$ by \\cite[Lem.~3.10]{cavalletti2020}. 
Arguing as for \\autoref{Pr:ii to iii}, invoking $\\smash{\\mathrm{wTCD}_p^e(K,N')}$ for the strongly timelike $p$-dualizable pair $\\smash{(\\mu_0^{ij}, \\mu_1^{ij})} \\in \\smash{({\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)\\cap\\Dom(\\Ent_\\meas))^2}$, employing the uniqueness both of ${\\ensuremath{\\boldsymbol{\\pi}}}$ as well as the involved timelike $p$-dualizing coupling $\\pi^{ij}\\in\\smash{\\Pi_\\ll(\\mu_0^{ij},\\mu_1^{ij})}$ by \\autoref{Re:From TNB to TENB}, and taking logarithms on both sides of the resulting ``pasted inequality'' we infer that\n\\begin{align*}\n&-\\frac{\\lambda_{ij}^{-1}}{N'}\\int_{G_{ij}}\\log \\rho_t(\\gamma_t)\\d{\\ensuremath{\\boldsymbol{\\pi}}}(\\gamma)\\\\\n&\\qquad\\qquad \\geq {\\ensuremath{\\mathrm{G}}}_t\\Big[\\!-\\!\\frac{\\lambda_{ij}^{-1}}{N'}\\int_{G_{ij}} \\log\\rho_0(\\gamma_0)\\d{\\ensuremath{\\boldsymbol{\\pi}}}(\\gamma),-\\frac{\\lambda_{ij}^{-1}}{N'}\\int_{G_{ij}} \\log\\rho_1(\\gamma_1)\\d{\\ensuremath{\\boldsymbol{\\pi}}}(\\gamma),\\\\\n&\\qquad\\qquad\\qquad\\qquad \\lambda_{ij}^{-1}\\,\\frac{K}{N'}\\int_{G_{ij}}\\uptau^2(\\gamma_0,\\gamma_1)\\d{\\ensuremath{\\boldsymbol{\\pi}}}(\\gamma)\\Big]\\\\\n&\\qquad\\qquad\\geq \\lambda_{ij}^{-1}\\int_{G_{ij}} {\\ensuremath{\\mathrm{G}}}_t\\Big[\\!-\\!\\frac{1}{N'}\\log\\rho_0(\\gamma_0), -\\frac{1}{N'}\\log\\rho_1(\\gamma_1), \\frac{K}{N'}\\,\\uptau^2(\\gamma_0,\\gamma_1)\\Big]\\d{\\ensuremath{\\boldsymbol{\\pi}}}(\\gamma).\n\\end{align*}\nHere the deduction that $\\smash{(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}} = \\rho_t\\,\\meas\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$ comes directly from the definition of the functional ${\\ensuremath{\\mathscr{U}}}_N$, and the last inequality follows from joint convexity of ${\\ensuremath{\\mathrm{G}}}_t$ and Jensen's inequality. 
Since $i,j\\in\\{1,\\dots,2^n\\}$ and $n\\in\\N$ were arbitrary, the claim follows by taking exponentials of both integrands, respectively.\n\\end{proof}\n\nBy evident adaptations of the above arguments, we also obtain the following result for the conditions from \\autoref{Def:TCD}.\n\n\\begin{theorem}\\label{Th:Equiv TCD with geo} The following statements are equivalent for every given $K\\in\\R$ and $N\\in[1,\\infty)$.\n\\begin{enumerate}[label=\\textnormal{\\textcolor{black}{(}\\roman*\\textcolor{black}{)}}]\n\\item The condition $\\smash{\\mathrm{TCD}_p(K,N)}$ holds.\n\\item The condition $\\smash{\\mathrm{wTCD}_p(K,N)}$ holds.\n\\item For every timelike $p$-dualizable $(\\mu_0,\\mu_1) = (\\rho_0\\,\\meas,\\rho_1\\,\\meas)\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)^2$ there exists an $\\smash{\\ell_p}$-optimal geodesic plan ${\\ensuremath{\\boldsymbol{\\pi}}}\\in\\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ such that for every $t\\in [0,1]$, we have $\\smash{(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}} =\\rho_t\\, \\meas\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$, and for every $N'\\geq N$, the inequality\n\\begin{align*}\n\\rho_t(\\gamma_t)^{-1\/N'}&\\geq \\tau_{K,N'}^{(1-t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_0(\\gamma_0)^{-1\/N'} + \\tau_{K,N'}^{(t)}(\\uptau(\\gamma_0,\\gamma_1))\\,\\rho_1(\\gamma_1)^{-1\/N'}\n\\end{align*}\nholds for ${\\ensuremath{\\boldsymbol{\\pi}}}$-a.e.~$\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$.\n\\end{enumerate}\n\\end{theorem}\n\n\\subsection{From local to global}\\label{Sub:Local global} The goal of this section is to establish the equivalence of $\\smash{\\mathrm{TCD}_p^*(K,N)}$ to its corresponding local version in \\autoref{Def:TCD loc} below, i.e.~to establish a Lorentzian analogue of the local-to-global property from \\cite[Ch.~5]{bacher2010}.\n\nTo this aim, in addition to our standing assumptions on the base space, suppose again ${\\ensuremath{\\mathscr{X}}}$ to 
be timelike $p$-essentially nonbranching for $p\\in (0,1)$.\n\n\n\\begin{definition}\\label{Def:TCD loc} Let $K\\in\\R$ and $N\\in[1,\\infty)$. \n\\begin{enumerate}[label=\\textnormal{\\alph*.}]\n\\item We say that ${\\ensuremath{\\mathscr{X}}}$ satisfies $\\smash{\\mathrm{TCD}_p^*(K,N)}$ \\emph{locally}, briefly $\\smash{\\mathrm{TCD}_{p,\\loc}^*(K,N)}$, if there exists an open cover $(\\mms_i)_{i\\in I}$ of $\\mms$ with the following property. For every $i\\in I$ and every pair $(\\mu_0,\\mu_1)=(\\rho_0\\,\\meas,\\rho_1\\,\\meas)\\in\\smash{{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms_i,\\meas)^2}$ such that $\\smash{\\supp\\mu_0\\times\\supp\\mu_1\\subset (\\mms_i)_\\ll^2}$, there is a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ in ${\\ensuremath{\\mathscr{P}}}(\\mms)$ from $\\mu_0$ to $\\mu_1$ and an $\\smash{\\ell_p}$-optimal coupling $\\pi\\in\\Pi_\\ll(\\mu_0,\\mu_1)$ such that for every $t\\in[0,1]$ and every $N'\\geq N$,\n\\begin{align}\\label{Eq:RED}\n\\begin{split}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t) &\\leq -\\int_{\\mms^2} \\sigma_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\rho_0(x^0)^{-1\/N'}\\d\\pi(x^0,x^1)\\\\\n&\\qquad\\qquad -\\int_{\\mms^2}\\sigma_{K,N'}^{(t)}(\\uptau(x^0,x^1))\\,\\rho_1(x^1)^{-1\/N'}\\d\\pi(x^0,x^1).\n\\end{split}\n\\end{align}\n\\item If the previous statement holds for $\\smash{\\sigma_{K,N'}^{(1-t)}}$ and $\\smash{\\sigma_{K,N'}^{(t)}}$ replaced by $\\smash{\\tau_{K,N'}^{(1-t)}}$ and $\\smash{\\tau_{K,N'}^{(t)}}$, respectively, we say that ${\\ensuremath{\\mathscr{X}}}$ satisfies $\\smash{\\mathrm{TCD}_p(K,N)}$ \\emph{locally}, briefly $\\smash{\\mathrm{TCD}_{p,\\loc}(K,N)}$.\n\\end{enumerate} \n\\end{definition}\n\n\n\\begin{proposition}\\label{Pr:MDPTS} Given any $K\\in\\R$ and $N\\in[1,\\infty)$, the property $\\smash{\\mathrm{TCD}_p^*(K,N)}$ holds if and only if the following does. 
For every $\\smash{\\mu_0,\\mu_1\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ with $\\supp\\mu_0\\times\\supp\\mu_1 \\subset\\smash{\\mms_\\ll^2}$ there exists $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ such that for every $N'\\geq N$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}((\\eval_{1\/2})_\\push{\\ensuremath{\\boldsymbol{\\pi}}}) \\leq \\sigma_{K,N'}^{(1\/2)}(\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_0) + \\sigma_{K,N'}^{(1\/2)}(\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_1),\n\\end{align*}\nwhere\n\\begin{align*}\n\\theta := \\begin{cases} \\sup \\uptau(\\supp\\mu_0\\times\\supp\\mu_1) & \\textnormal{if }K<0,\\\\\n\\inf \\uptau(\\supp\\mu_0\\times\\supp\\mu_1) & \\textnormal{otherwise}.\n\\end{cases}\n\\end{align*}\n\\end{proposition}\n\n\\begin{proof} We only outline the main differences of the proof to its counterpart from \\cite[Prop.~2.10]{bacher2010}. The forward implication is clear. 
Conversely, similarly to the proof of \\autoref{Th:Good geos TCD}, by successively gluing plans corresponding to ``midpoints'' for which the R\u00e9nyi entropy obeys the given inequality as in the proof of \\cite[Prop.~2.10]{bacher2010}, we inductively construct a sequence $\\smash{({\\ensuremath{\\boldsymbol{\\alpha}}}^n)_{n\\in\\N}}$ in $\\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ such that for every $n\\in\\N$ and every odd $k\\in\\{1,\\dots,2^n-1\\}$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}((\\eval_{k2^{-n}})_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^n) &\\leq \\sigma_{K,N'}^{(1\/2)}(2^{-n+1}\\,\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}((\\eval_{(k-1)2^{-n}})_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^n)\\\\\n&\\qquad\\qquad + \\sigma_{K,N'}^{(1\/2)}(2^{-n+1}\\,\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}((\\eval_{(k+1)2^{-n}})_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^n)\\\\\n&\\leq \\sigma_{K,N'}^{(1\/2)}(2^{-n+1}\\,\\theta)\\,\\sigma_{K,N'}^{(1-(k-1)2^{-n})}(\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_0)\\\\\n&\\qquad\\qquad\\qquad\\qquad + \\sigma_{K,N'}^{(1\/2)}(2^{-n+1}\\,\\theta)\\,\\sigma_{K,N'}^{((k-1)2^{-n})}(\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_1)\\\\\n&\\qquad\\qquad + \\sigma_{K,N'}^{(1\/2)}(2^{-n+1}\\,\\theta)\\,\\sigma_{K,N'}^{(1-(k+1)2^{-n})}(\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_0)\\\\\n&\\qquad\\qquad\\qquad\\qquad + \\sigma_{K,N'}^{(1\/2)}(2^{-n+1}\\,\\theta)\\,\\sigma_{K,N'}^{((k+1)2^{-n})}(\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_1)\\\\\n&= \\sigma_{K,N'}^{(1-k2^{-n})}(\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_0) + \\sigma_{K,N'}^{(k2^{-n})}(\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_1)\n\\end{align*}\nholds for every $N'\\geq N$. The latter equality is a crucial property of the distortion coefficients $\\smash{\\sigma_{K,N'}^{(r)}}$ \\cite[Lem.~3.1]{rajala2012b} which does not hold for $\\smash{\\tau_{K,N'}^{(r)}}$. 
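In the special case $K=0$, where $\\sigma_{0,N'}^{(r)}(\\vartheta)=r$ for every $\\vartheta$, this crucial identity reduces to the elementary computation\n\\begin{align*}\n\\frac{1}{2}\\,\\big(1-(k-1)2^{-n}\\big) + \\frac{1}{2}\\,\\big(1-(k+1)2^{-n}\\big) = 1-k2^{-n},\n\\end{align*}\nand analogously for the coefficients in front of ${\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_1)$; for $K\\neq 0$, it is precisely the content of \\cite[Lem.~3.1]{rajala2012b}.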
Moreover, note that to proceed by induction in this strategy, it is important to know that the respective ``midpoints'' come from $\\smash{\\ell_p}$-optimal geodesic plans, which ensures that all chronology re\\-lations are preserved through this process.\n\nAs in the proof of \\autoref{Th:Good geos TCD}, $\\smash{({\\ensuremath{\\boldsymbol{\\alpha}}}^n)_{n\\in\\N}}$ admits an accumulation point ${\\ensuremath{\\boldsymbol{\\alpha}}} \\in \\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$, giving rise to a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ from $\\mu_0$ to $\\mu_1$ via $\\mu_t := (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}$. Thanks to the weak lower semicontinuity of the R\u00e9nyi entropy for probability measures with uniformly compact support, the latter satisfies, for every $t\\in[0,1]$ and every $N'\\geq N$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t)\\leq \\sigma_{K,N'}^{(1-t)}(\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_0) + \\sigma_{K,N'}^{(t)}(\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_1).\n\\end{align*}\n\nArguing as for \\cite[Prop.~2.8]{bacher2010}, with \\cite[Rem.~2.9]{bacher2010} replaced by \\autoref{Le:Stu}, and invoking \\autoref{Pr:ii to iii} as well as \\autoref{Th:Equivalence TCD* and TCDe}, we obtain $\\smash{\\mathrm{TCD}_p^*(K,N)}$.\n\\end{proof}\n\n\\begin{definition}\\label{Def:Chron lp geod} We term $\\smash{{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ \\emph{chronologically $\\smash{\\ell_p}$-geodesic} if any two $\\mu_0,\\mu_1\\in\\smash{{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ with $\\supp\\mu_0\\times\\supp\\mu_1\\subset\\smash{\\mms_\\ll^2}$ are joined by a timelike proper-time parame\\-tri\\-zed $\\smash{\\ell_p}$-geodesic $(\\mu_r)_{r\\in[0,1]}$ which consists of $\\meas$-absolutely continuous measures.\n\\end{definition}\n\n\\begin{theorem}\\label{Th:Local to global} Let $K\\in\\R$ and 
$N\\in[1,\\infty)$. Then $\\smash{\\mathrm{TCD}_p^*(K,N)}$ holds if and only if $\\smash{\\mathrm{TCD}_{p,\\loc}^*(K,N)}$ is satisfied and $\\smash{{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ is chronologically $\\smash{\\ell_p}$-geodesic.\n\\end{theorem}\n\nWe will prove \\autoref{Th:Local to global} only in the case $K\\geq 0$; the case $K<0$ follows by analogous computations. To this end, we formulate the following property in \\autoref{Def:Cn} for which, given any $m\\in\\N_0$, we define\n\\begin{align*}\nI_m := \\{k\\,2^{-m} : k\\in\\{0,\\dots,2^m\\}\\}.\n\\end{align*}\nMoreover, given a collection $\\mu:= (\\mu_t)_{t\\in[0,1]}$ in ${\\ensuremath{\\mathscr{P}}}(\\mms)$, let $\\smash{G_\\mu^m}$ denote the set of all $\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$ with $\\gamma_t\\in\\supp\\mu_t$ for every $t\\in I_m$.\n\n\\begin{definition}\\label{Def:Cn} Let $C\\subset\\mms$ be a compact set. Given $m\\in\\N_0$, we say that the property ${\\ensuremath{\\mathrm{P}}}_m(C)$ is satisfied if for every timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $\\mu:=(\\mu_r)_{r\\in[0,1]}$ consisting of $\\meas$-absolutely continuous measures with $\\mu_0,\\mu_1\\in\\smash{{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$ such that $\\smash{\\supp\\mu_0\\times\\supp\\mu_1\\subset\\mms_\\ll^2\\cap C^2}$, and every $s,t\\in I_m$ with $t-s =2^{-m}$, there is an $\\smash{\\ell_p}$-optimal geodesic plan $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_s,\\mu_t)}$ such that for every $N'\\geq N$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}((\\eval_{1\/2})_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}) \\leq \\sigma_{K,N'}^{(1\/2)}(\\theta_{s,t}^m)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_s) + \\sigma_{K,N'}^{(1\/2)}(\\theta_{s,t}^m)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t),\n\\end{align*}\nwhere\n\\begin{align}\\label{Eq:Thetat}\n\\theta_{s,t}^m := \\inf\\{\\uptau(\\gamma_s,\\gamma_t) : \\gamma\\in 
G_\\mu^m\\}.\n\\end{align}\n\\end{definition}\n\n\\begin{lemma}\\label{Le:PnC} Retaining the notation of \\autoref{Def:Cn}, ${\\ensuremath{\\mathrm{P}}}_m(C)$ implies ${\\ensuremath{\\mathrm{P}}}_{m-1}(C)$ for every $m\\in\\N$.\n\\end{lemma}\n\n\\begin{proof} Let a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $\\smash{\\mu:=(\\mu_r)_{r\\in[0,1]}}$ according to ${\\ensuremath{\\mathrm{P}}}_{m-1}(C)$ be given. Let $s,t \\in I_{m-1}$ such that $\\smash{t-s = 2^{1-m}}$, and let $\\smash{\\theta:=\\theta_{s,t}^{m-1}}$ be defined with respect to $\\mu$. Inductively, we build a sequence $\\smash{(\\mu^i)_{i\\in\\N_0}}$ of timelike proper-time parametrized $\\smash{\\ell_p}$-geodesics $\\smash{\\mu^i := (\\mu_r^i)_{r\\in[0,1]}}$ obeying $\\smash{\\mu^i_r = \\mu_r}$ for every $i\\in\\N_0$ and every $r\\in[0,s]\\cup[t,1]$ as follows. Initially, set $\\smash{\\mu^0 := \\mu}$. \n\n\\textbf{Step 1.} \\textit{Construction for odd $i\\in\\N_0$.} If the element $\\smash{\\mu^{2i}}$ has been constructed for a given $i\\in\\N_0$, as in the proof of \\cite[Thm.~2.10]{ambrosiogigli2013} and using that the respective ``midpoints'' result from an $\\smash{\\ell_p}$-optimal geodesic plan interpolating its endpoints, we construct a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $\\smash{\\mu^{2i+1}:= (\\mu_r^{2i+1})_{r\\in[0,1]}}$ with the following properties. 
For every $r\\in[0,s]\\cup[t,1]$, we have $\\smash{\\mu_r^{2i+1}=\\mu_r}$, and moreover, for every $N'\\geq N$, \n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{s+2^{-m-1}}^{2i+1}) &\\leq \\sigma_{K,N'}^{(1\/2)}\\Big[\\frac{\\theta}{2}\\Big]\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_s) + \\sigma_{K,N'}^{(1\/2)}\\Big[\\frac{\\theta}{2}\\Big]\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{s+2^{-m}}^{2i}),\\\\\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{s+3\\times 2^{-m-1}}^{2i+1}) &\\leq \\sigma_{K,N'}^{(1\/2)}\\Big[\\frac{\\theta}{2}\\Big]\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{s+2^{-m}}^{2i}) + \\sigma_{K,N'}^{(1\/2)}\\Big[\\frac{\\theta}{2}\\Big]\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t).\n\\end{align*}\nHere we have used the property ${\\ensuremath{\\mathrm{P}}}_m(C)$, the inequalities\n\\begin{align*}\n2\\,\\theta_{s,s+2^{-m}}^m &\\geq \\theta,\\\\\n2\\, \\theta_{s+2^{-m},t}^m &\\geq \\theta, \n\\end{align*}\nwhere both quantities on the left-hand sides are defined with respect to $\\smash{\\mu^{2i}}$, the nonpositivity of the functionals ${\\ensuremath{\\mathscr{S}}}_{N'}$, and nondecreasingness of $\\smash{\\sigma_{K,N'}^{(1\/2)}(\\vartheta)}$ in $\\vartheta\\geq 0$ by our assumption $K\\geq 0$.\n\n\\textbf{Step 2.} \\textit{Construction for even $i\\in\\N_0$.} Given $\\smash{\\mu^{2i+1}}$ exhibited in Step 1 above, similarly to Step 1 we construct a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $\\smash{\\mu^{2i+2} := (\\mu_r^{2i+2})_{r\\in[0,1]}}$ with the property that $\\smash{\\mu_r^{2i+2} = \\mu_r^{2i+1}}$ for every $r\\in[0,s+2^{-m-1}]\\cup\\smash{[s+3\\times 2^{-m-1},1]}$ such that for every $N'\\geq N$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{s+2^{-m}}^{2i+2}) \\leq \\sigma_{K,N'}^{(1\/2)}\\Big[\\frac{\\theta}{2}\\Big]\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{s+2^{-m-1}}^{2i+1}) + \\sigma_{K,N'}^{(1\/2)}\\Big[\\frac{\\theta}{2}\\Big]\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{s+3\\times 2^{-m-1}}^{2i+1}).\n\\end{align*}\n\n\\textbf{Step 3.} \\textit{Conclusion.} Pasting together 
the inequalities from Step 1 and Step 2 yields, for every $i\\in\\N_0$ and every $N'\\geq N$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu^{2i+2}_{s+2^{-m}}) &\\leq 2\\,\\sigma_{K,N'}^{(1\/2)}\\Big[\\frac{\\theta}{2}\\Big]^2\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{s+2^{-m}}^{2i})\\\\\n&\\qquad\\qquad + \\sigma_{K,N'}^{(1\/2)}\\Big[\\frac{\\theta}{2}\\Big]^2\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_s) + \\sigma_{K,N'}^{(1\/2)}\\Big[\\frac{\\theta}{2}\\Big]^2\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t). \n\\end{align*}\nIterating this inequality gives\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{s+2^{-m}}^{2i})&\\leq 2^i\\,\\sigma_{K,N'}^{(1\/2)}\\Big[\\frac{\\theta}{2}\\Big]^{2i}\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{s+2^{-m}})\\\\\n&\\qquad\\qquad + \\frac{1}{2}\\sum_{k=1}^i 2^k\\,\\sigma_{K,N'}^{(1\/2)}\\Big[\\frac{\\theta}{2}\\Big]^{2k}\\,\\big[{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_s) + {\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t)\\big].\n\\end{align*}\n\nAs $\\smash{\\mu_0^{2i} = \\mu_0}$ and $\\smash{\\mu_1^{2i}= \\mu_1}$ for every $i\\in\\N_0$ by construction, using \\autoref{Le:Villani lemma for geodesic} we get the existence of a sequence $\\smash{({\\ensuremath{\\boldsymbol{\\pi}}}^{2i})_{i\\in\\N_0}}$ in $\\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ with $\\smash{\\mu_r^{2i} = (\\eval_r)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}^{2i}}$ for every $r\\in[0,1]$ which converges weakly, up to a nonrelabeled subsequence, to an $\\smash{\\ell_p}$-optimal geodesic plan $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$. 
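In the limit $i\\to\\infty$, the right-hand side of the iterated inequality can be made explicit. For $K>0$ with $\\theta\\sqrt{K\/N'}<\\pi$, as may be assumed in view of the timelike Bonnet--Myers inequality, one has $\\smash{\\sigma_{K,N'}^{(1\/2)}(\\theta\/2) = [2\\cos(\\theta\\sqrt{K\/N'}\/4)]^{-1}}$, whence $\\smash{q := 2\\,\\sigma_{K,N'}^{(1\/2)}(\\theta\/2)^2 = [2\\cos^2(\\theta\\sqrt{K\/N'}\/4)]^{-1} < 1}$. Thus the first term, being a fixed multiple of $q^i$, vanishes as $i\\to\\infty$, while the double angle formula $2\\cos^2 x - 1 = \\cos(2x)$ yields\n\\begin{align*}\n\\frac{1}{2}\\sum_{k=1}^\\infty 2^k\\,\\sigma_{K,N'}^{(1\/2)}\\Big[\\frac{\\theta}{2}\\Big]^{2k} = \\frac{q}{2\\,(1-q)} = \\frac{1}{2\\,\\big[2\\cos^2(\\theta\\sqrt{K\/N'}\/4)-1\\big]} = \\frac{1}{2\\cos(\\theta\\sqrt{K\/N'}\/2)} = \\sigma_{K,N'}^{(1\/2)}(\\theta);\n\\end{align*}\nfor $K=0$, the same conclusion is immediate since then $q = 1\/2$ and $\\sigma_{K,N'}^{(1\/2)}\\equiv 1\/2$.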
By ${\\ensuremath{\\mathscr{K}}}$-global hyperbolicity, lower semicontinuity of ${\\ensuremath{\\mathscr{S}}}_{N'}$ on ${\\ensuremath{\\mathscr{P}}}(J(\\mu_0,\\mu_1))$, and the same computations as for \\cite[Clm.~5.2]{bacher2010} for the distortion coefficients on the right-hand side of the previous inequality, setting $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t} := (\\Restr_s^t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_s,\\mu_t)}$ we get\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}((\\eval_{1\/2})_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}) \\leq \\sigma_{K,N'}^{(1\/2)}(\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_s) + \\sigma_{K,N'}^{(1\/2)}(\\theta)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t).\n\\end{align*}\nHere $\\smash{\\Restr_s^t}$ is defined in \\eqref{Eq:Restr def}. This establishes ${\\ensuremath{\\mathrm{P}}}_{m-1}(C)$.\n\\end{proof}\n\n\\begin{proof}[Proof of \\autoref{Th:Local to global}] The forward implication is a consequence of \\autoref{Le:Stu}. \n\nConcerning the backward implication, by the partition procedure in the proof of \\autoref{Pr:ii to iii} it suffices to prove the desired inequality defining the condition $\\smash{\\mathrm{TCD}_p^*(K,N)}$ for every $\\smash{\\mu_0,\\mu_1\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ with $\\supp\\mu_0\\times\\supp\\mu_1\\subset\\smash{\\mms_\\ll^2}$. \n\nThanks to ${\\ensuremath{\\mathscr{K}}}$-global hyperbolicity, every member of any timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic from $\\mu_0$ to $\\mu_1$ has support in $J(\\mu_0,\\mu_1)$. 
By $\\smash{\\mathrm{TCD}_{p,\\loc}^*(K,N)}$ and by compactness of $J(\\mu_0,\\mu_1)$, there exist $\\delta > 0$, a disjoint cover $L_1,\\dots,L_n\\subset\\mms$ of $J(\\mu_0,\\mu_1)$, where $n\\in\\N$, and closed sets $\\smash{C_k \\subset\\mms}$ containing $\\smash{{\\ensuremath{\\mathsf{B}}}^\\met(L_k,\\delta)}$, $k\\in\\{1,\\dots,n\\}$, such that every two measures in $\\smash{{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(C_k,\\meas)}$ are joined by a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic obeying \\eqref{Eq:RED}. Let $m\\in \\N$ with\n\\begin{align}\\label{Eq:m choice}\n2^{-m} \\leq \\delta.\n\\end{align}\n\nWe claim $\\smash{{\\ensuremath{\\mathrm{P}}}_m(J(\\mu_0,\\mu_1))}$. Let $(\\mu_r)_{r\\in[0,1]}$ be a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic from $\\mu_0$ to $\\mu_1$ consisting of $\\meas$-absolutely continuous measures. Let ${\\ensuremath{\\boldsymbol{\\pi}}}\\in\\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ represent $(\\mu_r)_{r\\in[0,1]}$, and define $\\smash{\\varpi\\in{\\ensuremath{\\mathscr{P}}}(\\mms^{2^m+1})}$ by\n\\begin{align}\\label{Eq:PROJ}\n\\varpi := (\\eval_0,\\eval_{2^{-m}}, \\eval_{2^{-m+1}},\\dots,\\eval_{(2^m-1)2^{-m}},\\eval_1)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}.\n\\end{align}\nLet $s,t\\in I_m$ with $t-s = 2^{-m}$.\nFor $k\\in\\{1,\\dots,n\\}$, define $\\smash{\\nu_s^k,\\nu_t^k\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ by\n\\begin{align*}\n\\nu_s^k &:= \\alpha_k^{-1}\\,(\\pr_{2^ms})_\\push\\varpi\\mres L_k,\\\\\n\\nu_t^k &:= \\alpha_k^{-1}\\,(\\pr_{2^mt})_\\push\\big[(\\pr_{2^ms},\\pr_{2^mt})_\\push\\varpi\\mres (L_k\\times \\mms)\\big],\n\\end{align*}\nprovided $\\smash{\\alpha_k := \\mu_s[L_k]>0}$. Note that $\\smash{\\supp\\nu_s^k\\subset \\bar{L}_k}$ and $\\smash{\\supp\\nu_s^k\\times\\supp\\nu_t^k\\subset\\mms_\\ll^2}$. 
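These measures will be combined below with the following elementary scaling behavior of the R\u00e9nyi entropy: recalling that $\\smash{{\\ensuremath{\\mathscr{S}}}_{N'}(\\rho\\,\\meas) = -\\int_\\mms \\rho^{1-1\/N'}\\d\\meas}$, if $\\nu_1,\\dots,\\nu_n\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)$ are mutually singular with densities $\\rho_1,\\dots,\\rho_n$, and $\\alpha_1,\\dots,\\alpha_n\\geq 0$ sum to $1$, then\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\alpha_1\\,\\nu_1 + \\dots + \\alpha_n\\,\\nu_n) = -\\sum_{k=1}^n \\int_\\mms (\\alpha_k\\,\\rho_k)^{1-1\/N'}\\d\\meas = \\sum_{k=1}^n \\alpha_k^{1-1\/N'}\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\nu_k),\n\\end{align*}\nsince the density of the convex combination coincides $\\meas$-a.e.\\ with $\\alpha_k\\,\\rho_k$ on a Borel set carrying $\\nu_k$, for every $k\\in\\{1,\\dots,n\\}$.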
Since $r:=\\inf\\uptau(\\supp\\mu_0\\times\\supp\\mu_1) > 0$, every curve in $\\mathrm{TGeo}^\\uptau(\\mms)$ starting in $\\supp\\mu_0$ and ending in $\\supp\\mu_1$ belongs to the set $G_r$ defined in \\autoref{Cor:Cptness and equicty}, which is uniformly equi\\-continuous. Hence, using \\eqref{Eq:m choice} and \\eqref{Eq:PROJ} we get $\\smash{\\supp\\nu_s^k, \\supp\\nu_t^k \\subset \\bar{{\\ensuremath{\\mathsf{B}}}}^\\met(L_k,\\delta)\\subset C_k}$. By $\\smash{\\mathrm{TCD}_{p,\\loc}^*(K,N)}$ in the form described above, there is some $\\smash{\\ell_p}$-optimal geodesic plan $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}^k\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\nu_s^k,\\nu_t^k)}$ such that\n\\begin{align}\\label{Eq:Mrg}\n{\\ensuremath{\\mathscr{S}}}_{N'}((\\eval_{1\/2})_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}^k)\\leq \\sigma_{K,N'}^{(1\/2)}(\\theta_{s,t}^m)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\nu_s^k) + \\sigma_{K,N'}^{(1\/2)}(\\theta_{s,t}^m)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\nu_t^k)\n\\end{align}\nfor every $N'\\geq N$, where $\\theta_{s,t}^m$ is defined as in \\eqref{Eq:Thetat} with respect to $\\mu := (\\mu_r)_{r\\in[0,1]}$.\n\nDefine $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_s,\\mu_t)}$ by\n\\begin{align*}\n{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t} := \\alpha_1\\,{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}^1 + \\dots + \\alpha_n\\,{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}^n.\n\\end{align*}\nBy construction and \\autoref{Le:Mutually singular}, respectively, $\\smash{(\\eval_r)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}^1,\\dots,(\\eval_r)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}^n}$ are mutually singular for every fixed $r\\in\\{0,1\/2\\}$, whence\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_s) &= \\alpha_1^{1-1\/N'}\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\nu_s^1) + \\dots + 
\\alpha_n^{1-1\/N'}\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\nu_s^n),\\\\\n{\\ensuremath{\\mathscr{S}}}_{N'}((\\eval_{1\/2})_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}) &= \\alpha_1^{1-1\/N'}\\,{\\ensuremath{\\mathscr{S}}}_{N'}((\\eval_{1\/2})_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}^1) + \\dots + \\alpha_n^{1-1\/N'}\\,{\\ensuremath{\\mathscr{S}}}_{N'}((\\eval_{1\/2})_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}^n).\n\\end{align*} \nOn the other hand, since mutual singularity of $\\smash{\\nu_t^1,\\dots,\\nu_t^n}$ may fail, \n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t) \\geq \\alpha_1^{1-1\/N'}\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\nu_t^1) + \\dots + \\alpha_n^{1-1\/N'}\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\nu_t^n).\n\\end{align*}\nMerging these inequalities with \\eqref{Eq:Mrg} yields\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}((\\eval_{1\/2})_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_{s,t}) \\leq \\sigma_{K,N'}^{(1\/2)}(\\theta_{s,t}^m)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_s) + \\sigma_{K,N'}^{(1\/2)}(\\theta_{s,t}^m)\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t),\n\\end{align*}\nand the desired property ${\\ensuremath{\\mathrm{P}}}_m(J(\\mu_0,\\mu_1))$ follows.\n\nTo close the proof of \\autoref{Th:Local to global}, let $(\\mu_r)_{r\\in[0,1]}$ be a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic from $\\mu_0$ to $\\mu_1$ which consists of $\\meas$-absolutely continuous measures, as hypothesized. Iteratively, the property ${\\ensuremath{\\mathrm{P}}}_m(J(\\mu_0,\\mu_1))$ proven above implies ${\\ensuremath{\\mathrm{P}}}_0(J(\\mu_0,\\mu_1))$ by \\autoref{Le:PnC}. By \\autoref{Propos}, this already implies the condition $\\smash{\\mathrm{TCD}_p^*(K,N)}$.\n\\end{proof}\n\nThe following \\autoref{Cor:K-} easily follows from \\autoref{Th:Local to global} and the stability result from \\autoref{Th:Stability TCD}. 
\\autoref{Pr:Pr} is proven along the same lines as \\cite[Prop.~5.5]{bacher2010} by using uniform continuity of $\\uptau$ on compact subsets of $\\smash{\\mms^2}$; the point is that for small $\\vartheta\\geq 0$, the quantities $\\smash{\\sigma_{K,N}^{(r)}(\\vartheta)}$ and $\\smash{\\tau_{K,N}^{(r)}(\\vartheta)}$ ``almost coincide''. See also the computations in \\cite{deng}.\n\n\\begin{corollary}\\label{Cor:K-} Let $K\\in\\R$ and $N\\in[1,\\infty)$. Then chronological $\\smash{\\ell_p}$-geodesy of $\\smash{{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ and the condition $\\smash{\\mathrm{TCD}_{p,\\loc}^*(K',N)}$ hold for every $K' < K$ if and only if $\\smash{\\mathrm{TCD}_p^*(K,N)}$ is satisfied.\n\\end{corollary}\n\n\\begin{proposition}\\label{Pr:Pr} Assume $\\smash{{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ to be chronologically $\\smash{\\ell_p}$-geodesic. Given any $K\\in\\R$ and $N\\in[1,\\infty)$, $\\smash{\\mathrm{TCD}_{p,\\loc}^*(K',N)}$ holds for every $K' < K$.\n\\end{proposition}\n\n\\begin{proposition}\\label{Pr:Consistency TMCP} Let $p\\in (0,1)$, $K\\in\\R$, and $N\\in[1,\\infty)$, and suppose that $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ holds. Then the following are satisfied.\n\\begin{enumerate}[label=\\textnormal{\\textcolor{black}{(}\\roman*\\textcolor{black}{)}}]\n\\item For every $a,b,\\theta>0$, the rescaled measured Lorentzian pre-length space $(\\mms, a\\met, b\\meas,\\ll,\\leq, \\theta\\uptau)$ satisfies $\\smash{\\mathrm{TMCP}_p^*(K\/\\theta^2,N)}$.\n\\end{enumerate}\n\nAnalogous statements hold for the $\\smash{\\mathrm{TMCP}_p(K,N)}$ condition.\n\\end{proposition}\n\n\\begin{remark}\\label{Re:Unlikely} An analogue of \\autoref{Le:Stu} is unlikely to hold under $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ or $\\smash{\\mathrm{TMCP}_p(K,N)}$. In particular, we do not know a priori whether the measures $\\mu_t$, $t\\in[0,1)$, of a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic as in \\autoref{Def:TMCP} are always --- or can be chosen to be --- $\\meas$-absolutely continuous. 
Instead, we establish the existence of timelike proper-time parametrized $\\smash{\\ell_p}$-geodesics consisting of $\\meas$-absolutely continuous measures whose densities are uniformly $\\Ell^\\infty$ in time under $\\smash{\\mathrm{TMCP}_p^*(K,N)}$, at least if $\\smash{\\mu_0\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ has $\\meas$-essentially bounded density, by a variational method following \\cite{braun2022} (see also \\cite{cavalletti2017,rajala2012a,rajala2012b}) in \\autoref{Sub:Good}; cf.~\\autoref{Th:Good TMCP}. By uniqueness of timelike proper-time parametrized $\\smash{\\ell_p}$-geodesics under timelike essential nonbranching conditions, see \\autoref{Th:Uniqueness geodesics}, this suffices to show an equivalence result analogous to \\autoref{Th:Equivalence TCD* and TCDe} for our timelike measure-contraction property, see \\autoref{Th:Equivalence TMCP* and TMCPe} and \\autoref{Th:Equivalence TMCP}.\n\\end{remark}\n\nThe link of $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ and $\\smash{\\mathrm{TMCP}_p(K,N)}$ to their respective $\\mathrm{TCD}$ counterparts is more subtle. The proof of the corresponding \\autoref{Pr:TMCP to TCD} follows \\cite[Prop.~3.11]{cavalletti2020}. We need the following result, whose proof is a straightforward adaptation of the argument for \\cite[Lem.~3.3]{sturm2006b}; recall also \\autoref{Le:Const perturb}. The difference from \\cite{sturm2006b} is that we only require the marginal whose density is contained in the respective integrand to be constant.\n\n\\begin{lemma}\\label{Le:USC lemma} Let $K\\in\\R$ and $N\\in[1,\\infty)$. Let $\\rho\\colon\\mms\\to [0,\\infty)$ be a Borel function with $\\mu:= \\rho\\,\\meas\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$. 
Moreover, let $(\\pi_n)_{n\\in\\N}$ be a sequence in ${\\ensuremath{\\mathscr{P}}}(\\mms^2)$ converging weakly to $\\pi\\in{\\ensuremath{\\mathscr{P}}}(\\mms^2)$ such that for some $i\\in\\{1,2\\}$, we have \n\\begin{align*}\n(\\pr_i)_\\push\\pi_n = \\mu\n\\end{align*}\nfor every $n\\in\\N$. Then for every $r\\in[0,1]$,\n\\begin{align*}\n&\\int_{\\mms^2} \\tau_{K,N}^{(r)}(\\uptau(x^0,x^1))\\,(\\rho\\circ\\pr_i)(x^0,x^1)^{-1\/N} \\d\\pi(x^0,x^1)\\\\\n&\\qquad\\qquad\\leq \\liminf_{n\\to\\infty}\\int_{\\mms^2} \\tau_{K,N}^{(r)}(\\uptau(x^0,x^1))\\,(\\rho\\circ\\pr_i)(x^0,x^1)^{-1\/N}\\d\\pi_n(x^0,x^1).\n\\end{align*}\n\nAn analogous assertion holds with $\\smash{\\tau_{K,N}^{(r)}}$ replaced by $\\smash{\\sigma_{K,N}^{(r)}}$.\n\\end{lemma}\n\n\\begin{proposition}\\label{Pr:TMCP to TCD} The following hold for every $p\\in (0,1)$, $K\\in\\R$, and $N\\in[1,\\infty)$.\n\\begin{enumerate}[label=\\textnormal{\\textcolor{black}{(}\\roman*\\textcolor{black}{)}}]\n\\item\\label{LART} The condition $\\smash{\\mathrm{wTCD}_p(K,N)}$ implies $\\smash{\\mathrm{TMCP}_p(K,N)}$.\n\\item\\label{LARTT} The condition $\\smash{\\mathrm{wTCD}_p^*(K,N)}$ implies $\\smash{\\mathrm{TMCP}_p^*(K,N)}$.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof} We only prove \\ref{LART}; the proof of \\ref{LARTT} is similar.\n\n\\textbf{Step 1.} \\textit{Approximation of $\\mu_0$ and $\\mu_1$.} \nGiven any $\\varepsilon\\in (0,1)$, let $C_\\varepsilon \\subset\\mms$ be a compact set with $\\mu_0[C_\\varepsilon] \\geq 1-\\varepsilon$ and $\\smash{C_\\varepsilon \\times \\{x_1\\}\\subset\\mms_\\ll^2}$. 
Define $\\smash{\\mu_0^\\varepsilon\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ by\n\\begin{align*}\n\\mu_0^\\varepsilon := \\mu_0[C_\\varepsilon]^{-1}\\,\\mu_0\\mres C_\\varepsilon = \\rho_0^\\varepsilon\\,\\meas\n\\end{align*}\nMoreover, since $\\smash{\\mms_\\ll^2}$ is open while $C_\\varepsilon\\times\\{x_1\\}$ is compact, there exists $\\eta > 0$ such that $\\smash{C_\\varepsilon\\times {\\ensuremath{\\mathsf{B}}}^\\met(x_1,\\eta)\\subset\\mms_\\ll^2}$. For $\\delta\\in (0,\\eta)$, we set \n\\begin{align*}\n\\mu_1^\\delta := \\meas\\big[{\\ensuremath{\\mathsf{B}}}^\\met(x_1,\\delta)\\big]^{-1}\\,\\meas\\mres {\\ensuremath{\\mathsf{B}}}^\\met(x_1,\\delta) = \\rho_1^\\delta\\,\\meas.\n\\end{align*}\n\nBy \\autoref{Re:Strong timelike}, the pair $\\smash{(\\mu_0^\\varepsilon,\\mu_1^\\delta)}$ is strongly timelike $p$-dualizable, and using $\\smash{\\mathrm{wTCD}_p(K,N)}$ we find a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $\\smash{(\\mu_t^{\\varepsilon,\\delta})_{t\\in[0,1]}}$ connecting $\\smash{\\mu_0^\\varepsilon}$ to $\\smash{\\mu_1^\\delta}$ as well as a timelike $p$-dualizing $\\smash{\\pi^{\\varepsilon,\\delta}\\in\\Pi_\\ll(\\mu_0^\\varepsilon,\\mu_1^\\delta)}$, with support in $\\smash{C_\\varepsilon\\times {\\ensuremath{\\mathsf{B}}}^\\met(x_1,\\eta)}$, such that for every $t\\in[0,1]$ and every $N'\\geq N$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t^{\\varepsilon,\\delta}) &\\leq -\\int_{\\mms^2} \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\rho_0^\\varepsilon(x^0)^{-1\/N'}\\d\\pi^{\\varepsilon,\\delta}(x^0,x^1)\\\\\n&\\qquad\\qquad -\\int_{\\mms^2}\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\rho_1^\\delta(x^1)^{-1\/N'}\\d\\pi^{\\varepsilon,\\delta}(x^0,x^1)\\\\\n&\\leq -\\int_{\\mms^2}\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\rho_0^\\varepsilon(x^0)^{-1\/N'}\\d\\pi^{\\varepsilon,\\delta}(x^0,x^1).\n\\end{align*}\n\n\\textbf{Step 2.} \\textit{Sending $\\delta \\to 0$.} Given any $\\varepsilon > 0$ 
and a fixed sequence $(\\delta_n)_{n\\in\\N}$ decreasing to $0$, from $\\smash{(\\mu_t^{\\varepsilon,\\delta_n})_{t\\in[0,1]}}$, $n\\in\\N$, we construct a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic from $\\mu_0^\\varepsilon$ to $\\mu_1$ as follows. Let $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}^{\\varepsilon,\\delta_n}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0^\\varepsilon,\\mu_1^{\\delta_n})}$ represent $\\smash{(\\mu_t^{\\varepsilon,\\delta_n})_{t\\in[0,1]}}$. As $\\smash{(\\mu_1^{\\delta_n})_{n\\in\\N}}$ converges weakly to $\\mu_1$, this sequence is tight, and so is $\\smash{({\\ensuremath{\\boldsymbol{\\pi}}}^{\\varepsilon,\\delta_n})_{n\\in\\N}}$ by \\autoref{Le:Villani lemma for geodesic}. Let $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}^\\varepsilon\\in \\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0^\\varepsilon,\\mu_1)}$ be a weak limit of a nonrelabeled subsequence. Then the assignment $\\smash{\\mu_t^\\varepsilon := (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}^\\varepsilon}$, $t\\in[0,1]$, gives rise to a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic connecting $\\smash{\\mu_0^\\varepsilon}$ to $\\mu_1$.\n\nEvery weak limit point of $\\smash{(\\pi^{\\varepsilon,\\delta_n})_{n\\in\\N}}$ equals $\\smash{\\mu_0^\\varepsilon\\otimes\\mu_1 = \\mu_0^\\varepsilon\\otimes\\delta_{x_1}}$.\nTherefore, ${\\ensuremath{\\mathscr{K}}}$-global hyperbolicity, weak lower semicontinuity of $\\smash{{\\ensuremath{\\mathscr{S}}}_{N'}}$ on measures with uniformly bounded support, and \\autoref{Le:USC lemma} yield, for every $t\\in[0,1]$ and every $N'\\geq N$, \n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t^\\varepsilon) &\\leq \\limsup_{n\\to\\infty} {\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t^{\\varepsilon,\\delta_n})\\\\\n&\\leq -\\liminf_{n\\to\\infty}\\int_{\\mms^2}\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\rho_0^\\varepsilon(x^0)^{-1\/N'}\\d\\pi^{\\varepsilon,\\delta_n}(x^0,x^1)\\\\\n&\\leq -\\int_{\\mms} 
\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_1))\\,\\rho_0^\\varepsilon(x^0)^{-1\/N'}\\d\\mu_0^\\varepsilon(x^0).\n\\end{align*}\n\n\\textbf{Step 3.} \\textit{Sending $\\varepsilon \\to 0$.} Given a sequence $(\\varepsilon_n)_{n\\in\\N}$ decreasing to $0$, note that $\\smash{(\\mu_0^{\\varepsilon_n})_{n\\in\\N}}$ converges weakly to $\\mu_0$. Similarly to Step 2, from $\\smash{(\\mu_t^{\\varepsilon_n})_{n\\in\\N}}$ we construct a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ connecting $\\mu_0$ to $\\mu_1$. As in Step 2 and using Fatou's lemma we get, for every $t\\in[0,1]$ and every $N'\\geq N$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t) &\\leq \\limsup_{n\\to\\infty} {\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t^{\\varepsilon_n})\\\\\n&\\leq -\\liminf_{n\\to\\infty}\\int_{\\mms}\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_1))\\,\\rho_0^{\\varepsilon_n}(x^0)^{-1\/N'}\\d\\mu_0^{\\varepsilon_n}(x^0)\\\\\n&\\leq -\\int_{\\mms}\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_1))\\,\\rho_0(x^0)^{-1\/N'}\\d\\mu_0(x^0).\\qedhere\n\\end{align*}\n\\end{proof}\n\n\\begin{remark}\\label{Re:Geom inequ TMCP} With essentially identical proofs, the respective versions of the timelike Bonnet--Myers inequality, \\autoref{Cor:Bonnet-Myers} and \\autoref{Cor:Reduced BM}, and the timelike Bishop--Gromov inequality, \\autoref{Th:BG} and \\autoref{Th:Reduced BG}, hold as well under $\\smash{\\mathrm{TMCP}_p(K,N)}$ and $\\smash{\\mathrm{TMCP}_p^*(K,N)}$. We only need to assume ${\\ensuremath{\\mathscr{X}}}$ to be a causally closed and globally hyperbolic Lorentzian geodesic space.\n\\end{remark}\n\n\\subsection{Stability} Next, we discuss the stability of the notions from \\autoref{Def:TMCP}. 
In contrast to the weak stability from \\autoref{Th:Stability TCD}, both timelike measure-contraction properties are stable under the convergence introduced in \\autoref{Def:Convergence}.\n\n\\begin{theorem}\\label{Th:Stability TMCP} Assume the convergence of $({\\ensuremath{\\mathscr{X}}}_k)_{k\\in\\N}$ to ${\\ensuremath{\\mathscr{X}}}_\\infty$ as in \\autoref{Def:Convergence}. Moreover, let $(K_k,N_k)_{k\\in\\N}$ be a sequence in $\\R\\times [1,\\infty)$ converging to $(K_\\infty,N_\\infty)\\in\\R\\times[1,\\infty)$. Suppose the existence of $p\\in (0,1)$ such that ${\\ensuremath{\\mathscr{X}}}_k$ obeys $\\mathrm{TMCP}_p(K_k,N_k)$ for every $k\\in\\N$. \nThen $\\smash{{\\ensuremath{\\mathscr{X}}}_\\infty}$ satisfies $\\smash{\\mathrm{TMCP}_p(K_\\infty,N_\\infty)}$. \n\nThe analogous statement in which $\\smash{\\mathrm{TMCP}_p(K_k,N_k)}$ is respectively replaced by $\\smash{\\mathrm{TMCP}_p^*(K_k,N_k)}$, $k\\in\\N_\\infty$, holds as well.\n\\end{theorem}\n\n\\begin{proof} It suffices to prove the first statement; the second is argued analogously. \n\nIn this proof, we often adopt notation from the proof of \\autoref{Th:Stability TCD} without explicit notice. In particular, we again identify $\\mms_k$ with its image $\\iota_k(\\mms_k)$ in $\\mms$ and $\\meas_k$ with its push-forward $(\\iota_k)_\\push\\meas_k$ for every $k\\in\\N_\\infty$. \n\n\\textbf{Step 1.} \\textit{Reduction to compact $\\mms$.} Owing to \\autoref{Re:mae x1}, given any $\\mu_{\\infty,0} = \\rho_{\\infty,0}\\,\\meas_\\infty\\in {\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas_\\infty)$ and $x_{\\infty,1}\\in I^+(\\mu_{\\infty,0}) \\cap \\supp\\meas_\\infty$, as for \\autoref{Th:Stability TCD} we use the compactness of $\\supp\\mu_{\\infty,0}$ and $\\supp\\mu_{\\infty,1}$, where $\\smash{\\mu_{\\infty,1} := \\delta_{x_{\\infty,1}}}$, to assume without restriction that $\\mms$ is compact, and that $\\meas_k\\in{\\ensuremath{\\mathscr{P}}}(\\mms)$ for every $k\\in\\N_\\infty$. 
All measures considered below will thus be compactly supported. Moreover, we may and will suppose that $W_2(\\meas_k,\\meas_\\infty)\\to 0$ as $k\\to\\infty$.\n\n\\textbf{Step 2.} \\textit{Restriction of the assumptions on $\\mu_{\\infty,0}$ and $\\mu_{\\infty,1}$.} We will first assume that $\\tau(\\cdot,x_{\\infty,1})$ is bounded away from zero on $\\supp\\mu_{\\infty,0}$, that $\\rho_{\\infty,0}\\in\\Ell^\\infty(\\mms,\\meas_\\infty)$, and that $x_{\\infty,1}$ can be approximated with respect to $\\met$ by a sequence $(x_{k,1})_{k\\in\\N}$ of points $x_{k,1}\\in \\supp\\meas_k$ such that $x_{\\infty,1} \\in I^-(x_{k,1})$ for every $k\\in\\N$. The general case is discussed in Step 7 below; we note for now that this conclusion will not conflict with our reductions from Step 1.\n\n\\textbf{Step 3.} \\textit{Construction of a chronological recovery sequence.}\nIn this step, given the sequence $(x_{k,1})_{k\\in\\N}$ from Step 2 we construct a sequence $(\\mu_{k,0})_{k\\in\\N}$ of measures $\\smash{\\mu_{k,0} = \\rho_{k,0}\\,\\meas_k\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_k)}$, $k\\in\\N$, such that $\\mu_{k,0}\\to\\mu_{\\infty,0}$ weakly as $k\\to \\infty$ possibly up to extracting a subsequence, and $x_{k,1}\\in I^+(\\mu_{k,0})$ for every $k\\in\\N$. The constructed sequence will allow for the correct behavior of all functionals under consideration, cf.~Step 5 below.\n\nGiven any $k\\in\\N$, let $\\mathfrak{q}_k\\in{\\ensuremath{\\mathscr{P}}}(\\mms^2)$ be a $W_2$-optimal coupling of $\\meas_k$ and $\\meas_\\infty$. 
We disintegrate $\\mathfrak{q}_k$ with respect to $\\pr_1$, writing\n\\begin{align*}\n{\\ensuremath{\\mathrm{d}}} \\mathfrak{q}_k(x,y) = {\\ensuremath{\\mathrm{d}}}\\mathfrak{p}_x^k(y)\\d\\meas_k(x).\n\\end{align*}\nLet $\\smash{\\mathfrak{p}^k\\colon {\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_\\infty)\\to{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_k)}$ denote the canonically induced map.\n\nGiven any $k\\in\\N$, define $\\tilde{\\mu}_{k,0}\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_k)$ by\n\\begin{align*}\n\\tilde{\\mu}_{k,0} := \\mathfrak{p}^k(\\mu_{\\infty,0}) = \\tilde{\\rho}_{k,0}\\,\\meas_k.\n\\end{align*}\nNote that the measure $\\mathfrak{r}_k := (\\rho_{\\infty,0}\\circ\\pr_2)\\,\\mathfrak{q}_k\\in{\\ensuremath{\\mathscr{P}}}(\\mms^2)$ constitutes a coupling of $\\tilde{\\mu}_{k,0}$ and $\\mu_{\\infty,0}$. With the $W_2$-optimality of $\\mathfrak{q}_k$, this implies\n\\begin{align*}\nW_2(\\tilde{\\mu}_{k,0},\\mu_{\\infty,0}) \\leq \\big\\Vert\\rho_{\\infty,0}\\big\\Vert_{\\Ell^\\infty(\\mms,\\meas_\\infty)}^{1\/2}\\,W_2(\\meas_k,\\meas_\\infty),\n\\end{align*}\nand consequently $\\smash{\\tilde{\\mu}_{k,0}\\to \\mu_{\\infty,0}}$ weakly as $k\\to\\infty$. Hence, by our assumption on the se\\-quence $(x_{k,1})_{k\\in\\N}$ from Step 2 and Portmanteau's theorem,\n\\begin{align*}\n\\liminf_{k\\to\\infty} \\tilde{\\mu}_{k,0}[I^-(x_{k,1})] \\geq \\liminf_{k\\to\\infty} \\tilde{\\mu}_{k,0}[I^-(x_{\\infty,1})] \\geq \\mu_{\\infty,0}[I^-(x_{\\infty,1})] =1.\n\\end{align*}\nUp to passing to a subsequence we may and will thus assume $\\smash{\\tilde{\\mu}_{k,0}[I^-(x_{k,1})] >0}$ for every $k\\in\\N$. 
We then define $\\smash{\\mu_{k,0}\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_k)}$ by\n\\begin{align*}\n\\mu_{k,0} := \\tilde{\\mu}_{k,0}[I^-(x_{k,1})]^{-1} \\tilde{\\mu}_{k,0} \\mres I^-(x_{k,1}).\n\\end{align*}\nBy construction, the sequence $(\\mu_{k,0})_{k\\in\\N}$ converges to $\\mu_{\\infty,0}$ weakly as $k\\to\\infty$, and we have $x_{k,1}\\in I^+(\\mu_{k,0})$ for every $k\\in\\N$, as desired.\n\n\\textbf{Step 4.} \\textit{Invoking the $\\mathrm{TMCP}$ condition.} Fix $K\\in\\R$ and $N\\in (1,\\infty)$ such that $K< K_\\infty$ and $N>N_\\infty$. Up to passing to a subsequence if necessary, we may and will thus assume that $K<K_k$ and $N>N_k$ for every $k\\in\\N$.\n\nBy \\autoref{Pr:Consistency TMCP}, for every $k\\in\\N$ there exists a timelike proper-time para\\-metrized $\\smash{\\ell_p}$-geodesic $(\\mu_{k,t})_{t\\in[0,1]}$ connecting $\\mu_{k,0}$ and $\\smash{\\mu_{k,1} = \\delta_{x_{k,1}}}$ such that for every $t\\in [0,1)$ and every $N'\\geq N$,\n\\begin{align}\\label{Eq:TMCP cond}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{k,t}) \\leq -\\int_{\\mms} \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{k,1}))\\,\\rho_{k,0}(x^0)^{-1\/N'} \\d\\mu_{k,0}(x^0).\n\\end{align}\n\n\\textbf{Step 5.} \\textit{Estimating the previous right-hand side.} We estimate the negative of the right-hand side of \\eqref{Eq:TMCP cond} from below, up to errors which become arbitrarily small. By construction of $\\mu_{k,0}$, we find a sequence $(a_k)_{k\\in\\N}$ of normalization constants converging to $1$ such that $\\smash{\\rho_{k,0} \\leq a_k\\,\\tilde{\\rho}_{k,0}}$ $\\meas_k$-a.e.~for every $k\\in\\N$.\n\n\\textbf{Step 5.1.} First, we argue that $x_{k,1}$ can be replaced by $x_{\\infty,1}$ up to a small error. By our choice of $K$ and $N$ and the timelike Bonnet--Myers inequality from \\autoref{Re:Geom inequ TMCP},\n\\begin{align*}\nc := \\sup\\tau_{K,N'}^{(1-t)}\\circ\\uptau(\\mms^2)\n\\end{align*}\nis finite. Furthermore, the function $\\smash{\\tau_{K,N'}^{(1-t)}\\circ\\uptau}$ is uniformly continuous on $\\mms^2$. 
Thus, given any $\\varepsilon> 0$, by uniform continuity we obtain, for sufficiently large $k\\in\\N$,\n\\begin{align*}\n&\\int_\\mms \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{k,1}))\\,\\rho_{k,0}(x^0)^{-1\/N'}\\d\\mu_{k,0}(x^0)\\\\\n&\\qquad\\qquad \\geq \\int_\\mms \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\rho_{k,0}(x^0)^{-1\/N'}\\d\\mu_{k,0}(x^0)-\\varepsilon.\n\\end{align*}\n\n\\textbf{Step 5.2.} We modify $\\smash{\\tilde{\\mu}_{k,0}}$ into the measure\n\\begin{align*}\n\\nu_{k,0} := (1+\\delta_k)^{-1}\\,(\\tilde{\\rho}_{k,0}+\\delta_k)\\,\\meas_k =\\varrho_{k,0}\\,\\meas_k,\n\\end{align*} \nwhere $\\delta_k\\in[0,1]$ is defined by\n\\begin{align*}\n\\delta_k = \\tilde{\\mu}_{k,0}[I^-(x_{k,1})^{\\ensuremath{\\mathsf{c}}}].\n\\end{align*}\nNote that $(\\delta_k)_{k\\in\\N}$ converges to $0$. Employing the inequality $\\rho_{k,0} \\leq a_k\\,\\varrho_{k,0}$ $\\meas_k$-a.e., the definition of $\\mu_{k,0}$, and a similar notation as in the proof of \\autoref{Th:Stability TCD}, \n\\begin{align}\\label{Eq:Integral rechnung}\n&\\int_{\\mms} \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\rho_{k,0}(x^0)^{-1\/N'} \\d\\mu_{k,0}(x^0)\\nonumber\\\\\n&\\qquad\\qquad \\geq_k\\int_{\\mms}\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\mu_{k,0}(x^0)\\nonumber\\\\\n&\\qquad\\qquad \\geq_k\\int_{\\mms}\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\nu_{k,0}(x^0)\\\\\n&\\qquad\\qquad\\qquad\\qquad - \\int_{I^-(x_{k,1})^{\\ensuremath{\\mathsf{c}}}} \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\tilde{\\mu}_{k,0}(x^0)\\nonumber\\\\\n&\\qquad\\qquad\\qquad\\qquad - \\delta_k\\int_{\\mms}\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\meas_k(x^0).\\nonumber\n\\end{align}\n\n\\textbf{Step 5.3.} Possibly invoking the timelike Bishop--Gromov inequality outlined in \\autoref{Re:Geom inequ TMCP}, we obtain the 
estimates\n\\begin{align*}\nc\\,\\delta_k^{1-1\/N'} &\\geq \\int_{I^-(x_{k,1})^{\\ensuremath{\\mathsf{c}}}}\\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\tilde{\\mu}_{k,0}(x^0)\\\\\nc\\,\\delta_k^{1-1\/N'} &\\geq \\delta_k\\int_{\\mms} \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\meas_k(x^0).\n\\end{align*}\nHence, our main task is to estimate the integral in \\eqref{Eq:Integral rechnung} from below. To this aim, by definition of $\\nu_{k,0}$ and Jensen's inequality, we obtain\n\\begin{align*}\n&\\int_\\mms \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\nu_{k,0}(x^0)\\\\\n&\\qquad\\qquad = \\int_\\mms \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\Big[\\!\\int_\\mms (\\rho_{\\infty,0}(y^0)+\\delta_k)\\d\\mathfrak{p}_{x^0}^k(y^0)\\Big]^{1-1\/N'}\\!\\d\\meas_k(x^0)\\\\\n&\\qquad\\qquad \\geq \\int_{\\mms^2} \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\rho_{\\infty,0}(y^0)^{1-1\/N'}\\d\\mathfrak{q}_k(x^0,y^0).\n\\end{align*}\nLet $(\\phi_i)_{i\\in\\N}$ be a sequence in $\\Cont_\\bounded(\\mms)$ such that\n\\begin{align*}\n\\big\\Vert \\phi_i- \\rho_{\\infty,0}^{1-1\/N'}\\big\\Vert_{\\Ell^1(\\mms,\\meas_\\infty)} \\leq 2^{-i}\n\\end{align*}\nas well as $\\sup\\phi_i(\\mms) \\leq \\Vert\\rho_{\\infty,0} \\Vert_{\\Ell^\\infty(\\mms,\\meas_\\infty)}$ for every $i\\in\\N$. Then we get\n\\begin{align*}\n&\\int_\\mms \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\nu_{k,0}(x^0)\\\\\n&\\qquad\\qquad \\geq \\int_{\\mms^2} \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\phi_i(y^0)\\d\\mathfrak{q}_k(x^0,y^0) - 2^{-i}\\,c.\n\\end{align*}\n\n\\textbf{Step 5.4.} By tightness and stability of $W_2$-optimal couplings \\cite[Lem.~4.3, Lem. 4.4]{villani2009}, $(\\mathfrak{q}_k)_{k\\in\\N}$ converges weakly to the dia\\-gonal coupling of $\\meas_\\infty$ and $\\meas_\\infty$ along a nonrelabeled subsequence. 
In particular, by Lebesgue's theorem,\n\\begin{align*}\n&\\liminf_{k\\to\\infty} \\int_\\mms \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\varrho_{k,0}(x^0)^{-1\/N'}\\d\\nu_{k,0}(x^0)\\\\\n&\\qquad\\qquad \\geq \\liminf_{i\\to\\infty} \\int_\\mms \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\phi_i(x^0)\\d\\meas_\\infty(x^0) - c\\,\\limsup_{i\\to\\infty}2^{-i}\\\\\n&\\qquad\\qquad =\\int_\\mms \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\rho_{\\infty,0}(x^0)^{-1\/N'}\\d\\mu_{\\infty,0}(x^0).\n\\end{align*}\n\n\\textbf{Step 6.} \\textit{Conclusion.} Let $({\\ensuremath{\\boldsymbol{\\pi}}}_k)_{k\\in\\N}$ be a sequence of $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_k\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_{k,0},\\mu_{k,1})}$ representing the $\\smash{\\ell_p}$-geodesic $(\\mu_{k,t})_{t\\in[0,1]}$ in Step 4, $k\\in\\N$. By compactness of $\\mms$ and \\autoref{Le:Villani lemma for geodesic}, this sequence converges to an $\\smash{\\ell_p}$-optimal geodesic plan $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_\\infty\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_{\\infty,0},\\mu_{\\infty,1})}$ along a nonrelabeled subsequence. In particular, the assignment $\\smash{\\mu_{\\infty,t} := (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_\\infty}$ gives rise to a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_{\\infty,t})_{t\\in[0,1]}$ from $\\mu_{\\infty,0}$ to $\\mu_{\\infty,1}$; note that every optimal coupling of $\\mu_{\\infty,0}$ and $\\mu_{\\infty,1}$ is concentrated on $\\smash{\\mms_\\ll^2}$. 
Using weak lower semicontinuity of the R\u00e9nyi entropy and \\eqref{Eq:TMCP cond}, given any $\\varepsilon > 0$ this yields, for every $t\\in[0,1)$ and every $N'\\geq N$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{\\infty,t}) &\\leq \\limsup_{k\\to\\infty} {\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{k,t})\\\\\n&\\leq -\\liminf_{k\\to\\infty} \\int_\\mms \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{k,1}))\\,\\rho_{k,0}(x^0)^{-1\/N'}\\d\\mu_{k,0}(x^0)\\\\\n&\\leq \\varepsilon - \\int_\\mms \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}))\\,\\rho_{\\infty,0}(x^0)^{-1\/N'}\\d\\mu_{\\infty,0}(x^0).\n\\end{align*}\nSince $\\varepsilon > 0$ was arbitrary, the same estimate holds with $\\varepsilon$ replaced by $0$.\n\n\\textbf{Step 7.} \\textit{Relaxation of the assumptions on $\\mu_{\\infty,0}$ and $\\mu_{\\infty,1}$.} First, we argue how to construct the hypothesized sequence $(x_{k,1})_{k\\in\\N}$ from Step 2. The idea is to approximate points which lie ``in between'' $\\supp\\mu_{\\infty,0}$ and $x_{\\infty,1}$ from the future. For these points, the above discussion applies, and we will be able to conclude the desired $\\mathrm{TMCP}$ property by a tightness argument.\n\n\\textbf{Step 7.1.} As $\\inf\\uptau(\\supp\\mu_{\\infty,0}, x_{\\infty,1})>0$, we may and will fix a sequence $\\smash{(x_{\\infty,1}^i)_{i\\in\\N}}$ of points $\\smash{x_{\\infty,1}^i}\\in I(\\mu_{\\infty,0},\\mu_{\\infty,1})\\cap\\supp\\meas_\\infty$ converging to $\\smash{x_{\\infty,1}}$. Since $I(x_{\\infty,1}^i,x_{\\infty,1})$ is open and nonempty, for every $i\\in\\N$ we construct a sequence $\\smash{(x_{\\infty,1}^{i,j})_{j\\in\\N}}$ converging to $\\smash{x_{\\infty,1}^i}$ such that $\\smash{x_{\\infty,1}^{i,j} \\in I(x_{\\infty,1}^i, x_{\\infty,1})\\cap\\supp\\meas_\\infty}$ and\n\\begin{align*}\nx_{\\infty,1}^{i,j} \\ll x_{\\infty,1}^{i,j-1}\n\\end{align*}\nfor every $j\\in\\N$ with $j\\geq 2$. 
Since $\\smash{x_{\\infty,1}^{i,j}\\in\\supp\\meas_\\infty}$, the weak convergence of $(\\meas_k)_{k\\in\\N}$ implies the existence of a sequence $\\smash{(x_{k,1}^{i,j})_{k\\in\\N}}$ of points $\\smash{x_{k,1}^{i,j}\\in\\supp\\meas_k}$ which converges to $\\smash{x_{\\infty,1}^{i,j}}$. In particular, for a sufficiently large integer $k_j\\in \\N$, we have \n\\begin{align*}\nx_{\\infty,1}^i\\ll x_{\\infty,1}^{i,j} \\ll x_{k_j,1}^{i,j}\n\\end{align*}\nfor every $i,j\\in\\N$, as well as $\\smash{\\met(x_{k_j,1}^{i,j}, x_{\\infty,1}^i) \\to 0}$ as $j\\to\\infty$ for every $i\\in\\N$. \n\nThe above arguments applied for a fixed $i\\in\\N$ (along a suitable subsequence in $k$ which depends on $i$) yield the existence of a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $\\smash{(\\mu_{\\infty,t}^i)_{t\\in[0,1]}}$ from $\\mu_{\\infty,0}$ to $\\smash{\\mu_{\\infty,1}^i:= \\delta_{x_{\\infty,1}^i}}$ such that for every $t\\in[0,1)$ and every $N'\\geq N_\\infty$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_{\\infty,t}^i) \\leq -\\int_\\mms \\tau_{K,N'}^{(1-t)}(\\uptau(x^0,x_{\\infty,1}^i))\\,\\rho_{\\infty,0}(x^0)^{-1\/N'}\\d\\mu_{\\infty,0}(x^0).\n\\end{align*}\nSimilarly to Step 6 and using lower semicontinuity of R\u00e9nyi's entropy and Fatou's lemma, we find a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $\\smash{(\\mu_{\\infty,t})_{t\\in[0,1]}}$ connecting $\\smash{\\mu_{\\infty,0}}$ to $\\smash{\\mu_{\\infty,1}}$ such that the previous inequality holds for $\\smash{\\mu_{\\infty,t}^i}$ and $\\smash{x_{\\infty,1}^i}$ replaced by $\\smash{\\mu_{\\infty,t}}$ and $\\smash{x_{\\infty,1}}$, respectively, for every $t\\in[0,1)$ and every $N'\\geq N_\\infty$.\n\n\\textbf{Step 7.2.} Finally, we remove the two assumptions on $\\mu_{\\infty,0}$ from Step 2. Let $\\mu_{\\infty,0}\\in\\smash{{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_\\infty)}$ and $\\smash{x_{\\infty,1}\\in I^+(\\mu_{\\infty,0})\\cap\\supp\\meas_\\infty}$. 
Given sufficiently large $n,m\\in\\N$, define $\\smash{\\mu_{\\infty,0}^n,\\mu_{\\infty,0}^{n,m}\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas_\\infty)}$ by\n\\begin{align*}\n\\mu_{\\infty,0}^n &:= \\mu_{\\infty,0}\\big[\\{\\uptau(\\cdot, x_{\\infty,1}) \\geq 2^{-n}\\}\\big]^{-1}\\,\\mu_{\\infty,0} \\mres \\{\\uptau(\\cdot, x_{\\infty,1}) \\geq 2^{-n}\\} = \\rho_{\\infty,0}^n\\,\\meas_\\infty,\\textcolor{white}{\\big\\vert^{0}}\\\\\n\\mu_{\\infty,0}^{n,m} &:= \\big\\Vert \\!\\min\\{\\rho_{\\infty,0}^n,m\\}\\big\\Vert_{\\Ell^1(\\mms,\\meas_\\infty)}^{-1}\\,\\min\\{\\rho_{\\infty,0}^n,m\\}\\,\\meas_\\infty.\n\\end{align*}\nBy continuity of $\\uptau$, $\\smash{\\mu_{\\infty,0}^{n,m}}$ obeys the hypotheses from Step 2 for every $n,m\\in\\N$. A diagonal procedure gives rise to a map $m\\colon \\N\\to\\N$ such that $\\smash{(\\tilde{\\mu}_{\\infty,0}^n)_{n\\in\\N}}$, where \n\\begin{align*}\n\\tilde{\\mu}_{\\infty,0}^n := \\mu_{\\infty,0}^{n,m_n},\n\\end{align*}\nconverges weakly to $\\smash{\\mu_{\\infty,0}}$ as $n\\to\\infty$.\n\nThe above discussion thus yields the desired inequality along some timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic connecting $\\smash{\\tilde{\\mu}_{\\infty,0}^n}$ to $\\smash{\\mu_{\\infty,1} := \\delta_{x_{\\infty,1}}}$. 
The usual tightness and lower semicontinuity argument directly gives the claim; note that the convergence of the corresponding right-hand sides is guaranteed by Levi's theorem.\n\n\\textbf{Step 8.} \\textit{Passage from $K$ and $N$ to $K_\\infty$ and $N_\\infty$.} With the choices made in Step 4, all in all we infer $\\smash{\\mathrm{TMCP}_p(K,N)}$ for ${\\ensuremath{\\mathscr{X}}}_\\infty$ for every $K<K_\\infty$ and $N>N_\\infty$, and we deduce $\\smash{\\mathrm{TMCP}_p(K_\\infty,N_\\infty)}$ as in Step 9 in the proof of \\autoref{Th:Stability TCD}.\n\\end{proof}\n\n\\subsection{Good geodesics}\\label{Sub:Good} Now we prove an analogue of \\autoref{Th:Good geos TCD}, assuming the $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ condition. As indicated in \\autoref{Re:Unlikely}, this result will be used in \\autoref{Sec:Equiv TMCP's} below to prove the equivalence of the latter condition with the entropic timelike measure-contraction property from \\cite{cavalletti2020}.\n\nAs in \\autoref{Sub:Good TCD}, we only outline the proof and refer to \\cite[Sec.~4.3]{braun2022} (see also \\cite{cavalletti2017}) for a similar discussion in the entropic case.\n\nThe following result is proven similarly to \\autoref{Le:Lalelu}.\n\n\\begin{lemma}\\label{Le:Lululu} Assume the $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ condition for some $p\\in (0,1)$, $K\\in\\R$, and $N\\in[1,\\infty)$. Let $\\smash{\\mu_0 = \\rho_0\\,\\meas\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$, and let $x_1\\in I^+(\\mu_0)$. Finally, let $D$ be any real number no smaller than $\\sup\\uptau(\\supp\\mu_0\\times\\{x_1\\})$. 
Then there is a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ from $\\mu_0$ to $\\smash{\\mu_1 := \\delta_{x_1}}$ such that for every $t\\in [0,1)$, denoting by $\\rho_t$ the $\\meas$-ab\\-solutely continuous part of $\\mu_t$,\n\\begin{align*}\n\\meas\\big[\\{\\rho_t>0\\}\\big] \\geq (1-t)^N\\,{\\ensuremath{\\mathrm{e}}}^{-tD\\sqrt{K^-N}}\\,\\big\\Vert\\rho_0\\big\\Vert_{\\Ell^\\infty(\\mms,\\meas)}^{-1}.\n\\end{align*}\n\\end{lemma}\n\n\\begin{theorem}\\label{Th:Good TMCP} Under the hypotheses of \\autoref{Le:Lululu}, there exists a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ from $\\mu_0$ to $\\smash{\\mu_1:= \\delta_{x_1}}$ such that for every $t\\in[0,1]$, $\\smash{\\mu_t = \\rho_t\\,\\meas\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$ and\n\\begin{align*}\n\\Vert\\rho_t\\Vert_{\\Ell^\\infty(\\mms,\\meas)} \\leq \\frac{1}{(1-t)^N}\\,{\\ensuremath{\\mathrm{e}}}^{Dt\\sqrt{K^-N}}\\,\\Vert\\rho_0\\Vert_{\\Ell^\\infty(\\mms,\\meas)},\n\\end{align*}\nwhere\n\\begin{align*}\nD:= \\sup\\uptau(\\supp\\mu_0\\times\\{x_1\\}).\n\\end{align*}\n\\end{theorem}\n\n\\begin{proof} As for \\cite[Thm.~4.18]{braun2022}, we first observe that the bisection argument from the proof of \\autoref{Th:Good geos TCD} does not apply here since $\\smash{\\mu_1\\notin{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$, while we aim to establish that $\\smash{\\mu_t \\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$ for every $t\\in[0,1)$.\n\nGiven $n\\in\\N$ and $k\\in\\N_0$, we set $\\smash{s_n^k := (1-2^{-n})^k}$. 
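Let us record two elementary properties of these interpolation times, both immediate from the definition: for all $n,i\\in\\N$,\n\\begin{align*}\ns_n^{i-1} - s_n^i = 2^{-n}\\,s_n^{i-1}, \\qquad \\lim_{k\\to\\infty} s_n^k = 0.\n\\end{align*}\nThe first identity matches the factor $2^{-n}\\,s_n^{i-1}\\,D$ appearing in the induction hypothesis below, while the second one shows that the times $s_n^k$ accumulate at $0$.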
Fix $n\\in\\N$, and assume that a measure $\\smash{{\\ensuremath{\\boldsymbol{\\beta}}}_n^k\\in \\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_1,\\mu_0)}$ [sic] has already been defined in such a way that for every $i\\in\\{1,\\dots,k\\}$, we have $\\smash{(\\eval_{s_n^i})_\\push{\\ensuremath{\\boldsymbol{\\beta}}}_n^k \\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ and\n\\begin{align*}\n\\sup\\uptau(\\supp\\,(\\eval_{s_n^i})_\\push{\\ensuremath{\\boldsymbol{\\beta}}}_n^k\\times\\{x_1\\})\\leq 2^{-n}\\,s_n^{i-1}\\,D.\n\\end{align*}\n\n\\textbf{Step 1.} \\textit{Minimization of an appropriate functional.} Set\n\\begin{align*}\nc_n^{k+1} := \\frac{1}{(1-2^{-n})^N}\\,{\\ensuremath{\\mathrm{e}}}^{2^{-n}s_n^kD\\sqrt{K^-N}}\\,\\Vert\\rho_0\\Vert_{\\Ell^\\infty(\\mms,\\meas)}\n\\end{align*}\nand let the functional $\\smash{{\\ensuremath{\\mathscr{F}}}_{c_n^{k+1}}}$ be defined as in \\eqref{Eq:Functional FC} above. As in \\cite[Le.~3.13]{braun2022}, the functional $\\smash{{\\ensuremath{\\mathscr{F}}}_{c_n^{k+1}}\\circ \\eval_{2^{-n}}}$ admits some minimizer $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_n^{k+1}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau((\\eval_{s_n^k})_\\push{\\ensuremath{\\boldsymbol{\\beta}}}_n^k,\\mu_1)}$. Let ${\\ensuremath{\\boldsymbol{\\sigma}}}_{k+1}^n\\in\\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_1,(\\eval_{s_n^k})_\\push{\\ensuremath{\\boldsymbol{\\beta}}}_n^k)}$ be the $\\smash{\\ell_p}$-optimal geodesic plan obtained by ``time-reversal'' of $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_n^{k+1}}$. 
By gluing, we build $\\smash{{\\ensuremath{\\boldsymbol{\\beta}}}_n^{k+1}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_1,\\mu_0)}$ with\n\\begin{align*}\n(\\Restr_0^{s_n^k})_\\push{\\ensuremath{\\boldsymbol{\\beta}}}_n^{k+1} &= {\\ensuremath{\\boldsymbol{\\sigma}}}_{k+1}^n,\\\\\n(\\Restr_{s_n^k}^1)_\\push{\\ensuremath{\\boldsymbol{\\beta}}}_n^{k+1} &= (\\Restr_{s_n^k}^1)_\\push{\\ensuremath{\\boldsymbol{\\beta}}}_n^k.\n\\end{align*}\nHere, given any $s,t\\in[0,1]$ with $s<t$, $\\smash{\\Restr_s^t}$ denotes the map which restricts a geodesic to the interval $[s,t]$ and reparametrizes it affinely over $[0,1]$. Passing to appropriate limits, first as $k\\to\\infty$ and then as $n\\to\\infty$, and recalling \\eqref{Eq:BLUBBB} and \\eqref{Eq:Blubbbb}, we get a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ from $\\mu_0$ to $\\mu_1$ with $\\mu_t = \\rho_t\\,\\meas\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)$, and $\\Vert\\rho_t\\Vert_{\\Ell^\\infty(\\mms,\\meas)}$ obeys the desired upper bound for every $t\\in[0,1)$.\n\\end{proof}\n\n\\subsection{Uniqueness of $\\ell_p$-optimal couplings and $\\ell_p$-geodesics}\\label{Sub:Uniqueneess} In this section, we prove uniqueness of \\emph{chronological} $\\smash{\\ell_p}$-optimal couplings (if they exist), \\autoref{Th:Uniqueness couplings}, as well as of $\\smash{\\ell_p}$-optimal geodesic plans, \\autoref{Th:Uniqueness geodesics}. We follow \\cite[Sec.~3.4]{cavalletti2020}, see also \\cite{cavalletti2017}. A byproduct of our discussion is an extension of the results from \\cite{cavalletti2020} from the timelike nonbranching to the timelike \\emph{essential} nonbranching case, cf.~\\autoref{Re:From TNB to TENB}.\n\nIn this section, in addition to our standing assumptions, let ${\\ensuremath{\\mathscr{X}}}$ be timelike $p$-essentially nonbranching for some fixed $p\\in (0,1)$. \n\n\\begin{lemma}\\label{Le:Uniqueness Diracs} Assume $\\smash{\\mathrm{TMCP}^*_p(K,N)}$ for some $K\\in\\R$ and $N\\in [1,\\infty)$. 
Let $\\mu_0 = \\rho_0\\,\\meas\\in\\smash{{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$, and define $\\mu_1 := \\lambda_1\\,\\delta_{x_1} + \\dots + \\lambda_n\\,\\delta_{x_n}$ for $\\lambda_1,\\dots,\\lambda_n\\in (0,1]$ with $\\lambda_1+\\dots+\\lambda_n=1$ and pairwise distinct $x_1,\\dots,x_n\\in\\mms$. Let $\\pi\\in\\Pi_\\ll(\\mu_0,\\mu_1)$ be an $\\smash{\\ell_p}$-optimal coupling. Then there is a $\\mu_0$-measurable map $T\\colon \\supp\\mu_0\\to\\mms$ with \n\\begin{align*}\n\\pi = (\\Id,T)_\\push\\mu_0,\n\\end{align*}\nand consequently,\n\\begin{align*}\n\\ell_p(\\mu_0,\\mu_1)^p = \\int_\\mms \\uptau(x,T(x))^p\\d\\mu_0(x).\n\\end{align*}\n\nIn particular, $\\pi$ is the unique chronological $\\smash{\\ell_p}$-optimal coupling of $\\mu_0$ and $\\mu_1$.\n\\end{lemma}\n\n\\begin{proof} By a standard argument, cf.~the proof of \\cite[Lem.~3.17]{cavalletti2020}, it suffices to prove the existence of a $\\mu_0$-measurable map $T\\colon \\supp\\mu_0\\to\\mms$ such that $\\pi=(\\Id,T)_\\push\\mu_0$. To this aim, define the compact set $E\\subset\\supp\\mu_0$ by\n\\begin{align*}\nE := \\{x\\in \\mms : \\#[(\\{x\\}\\times\\mms)\\cap \\supp\\pi] \\geq 2\\}.\n\\end{align*}\nWe claim that $\\mu_0[E] = 0$, which directly gives the desired $T$.\n\nSuppose, to the contrary, that $\\mu_0[E]>0$. We first reduce the discussion to the uniform distribution on some subset of $\\mms$ as follows. Up to shrinking $E$, we assume without restriction that $\\varepsilon\\leq \\rho_0\\leq 1\/\\varepsilon$ $\\meas$-a.e.~on $E$ for some $\\varepsilon > 0$. A further possible shrinking of $E$ entails the existence of distinct points $z_1,z_2\\in\\{x_1,\\dots,x_n\\}$ and well-defined maps $T_1,T_2\\colon E\\to \\mms$ with $T_1(x) = z_1$ and $T_2(x)=z_2$ for every $x\\in E$. We may and will additionally assume that $\\smash{E\\times\\{z_1\\}, E\\times\\{z_2\\}\\subset \\mms_\\ll^2}$. 
By restric\\-tion \\cite[Lem.~2.10]{cavalletti2020}, the couplings\n\\begin{align*}\n\\pi_1 &:= \\meas[E]^{-1}\\,\\One_{E\\times\\{z_1\\}}\\,(\\rho_0\\circ\\pr_1)^{-1}\\,\\pi,\\\\\n\\pi_2 &:= \\meas[E]^{-1}\\,\\One_{E\\times\\{z_2\\}}\\,(\\rho_0\\circ\\pr_1)^{-1}\\,\\pi\n\\end{align*}\nare $\\smash{\\ell_p}$-optimal with $\\smash{\\pi_1\\in \\Pi_\\ll(\\nu_0,\\delta_{z_1})}$ and $\\smash{\\pi_2\\in\\Pi_\\ll(\\nu_0,\\delta_{z_2})}$, respectively, where\n\\begin{align*}\n\\nu_0 := \\meas[E]^{-1}\\,\\meas\\mres E = \\varrho_0\\,\\meas.\n\\end{align*}\n\nFrom now on, we will work with $\\nu_0$ instead of $\\mu_0$. Let $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_1\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\nu_0,\\delta_{z_1})}$ and $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_2\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\nu_0,\\delta_{z_2})}$ be $\\smash{\\ell_p}$-optimal geodesic plans which represent timelike proper-time parametrized $\\smash{\\ell_p}$-geodesics witnessing the defining inequality of $\\smash{\\mathrm{TMCP}_p^*(K,N)}$. Note that $(\\eval_1)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_1$ and $(\\eval_1)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_2$ are mutually singular since\n\\begin{align*}\n{\\ensuremath{\\boldsymbol{\\pi}}}_i[\\{\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms) : \\gamma_1 = z_i\\}] = 1\n\\end{align*}\nfor every $i\\in\\{1,2\\}$, and $z_1\\neq z_2$. By \\autoref{Le:Mutually singular}, \n\\begin{align}\\label{Eq:Mutual singularity}\n(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_1 \\perp (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_2\n\\end{align}\nfor every $t\\in(0,1]$. Given $i\\in\\{1,2\\}$, let $\\smash{\\varrho_t^i}$ denote the density of the $\\meas$-absolutely continuous part of $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_i$, which is nontrivial by $\\smash{\\mathrm{TMCP}_p^*(K,N)}$. 
By \\autoref{Re:Lower bounds sigma}, \n\\begin{align*}\n\\int_\\mms (\\varrho_t^i)^{1-1\/N'}\\d\\meas &\\geq (1-t)\\,{\\ensuremath{\\mathrm{e}}}^{-tD\\sqrt{K^-\/N'}}\\int_\\mms \\varrho_0^{1-1\/N'}\\d\\meas\\\\\n&= (1-t)\\,{\\ensuremath{\\mathrm{e}}}^{-tD\\sqrt{K^-\/N'}}\\,\\meas[E]^{1\/N'}\\textcolor{white}{\\int}\n\\end{align*}\nfor some $N'>1$. On the other hand, by Jensen's inequality,\n\\begin{align*}\n\\int_\\mms (\\varrho_t^i)^{1-1\/N'}\\d\\meas &\\leq \\meas\\big[\\{\\varrho_t^i > 0\\}\\big]\\,\\meas\\big[\\{\\varrho_t^i > 0\\}\\big]^{-1}\\int_{\\{\\varrho_t^i > 0\\}} (\\varrho_t^i)^{1-1\/N'}\\d\\meas\\\\\n&\\leq \\meas\\big[\\{\\varrho_t^i > 0\\}\\big]\\,\\Big[\\meas\\big[\\{\\varrho_t^i > 0\\}\\big]^{-1}\\int_{\\{\\varrho_t^i>0\\}}\\varrho_t^i\\d\\meas\\Big]^{1-1\/N'}\\\\\n&\\leq \\meas\\big[\\{\\varrho_t^i > 0\\}\\big]^{1\/N'}.\\textcolor{white}{\\int}\n\\end{align*}\nThese inequalities imply\n\\begin{align}\\label{Eq:mm}\n\\liminf_{t\\to 0} \\meas\\big[\\{\\varrho_t^i > 0\\}\\big] \\geq \\meas[E]= \\meas\\big[\\{\\varrho_0^i > 0\\}\\big].\n\\end{align}\n\nBy ${\\ensuremath{\\mathscr{K}}}$-global hyperbolicity, the sets\n\\begin{align*}\nF &:= \\{x\\in \\mms : x\\in\\supp\\,(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_i\\textnormal{ for some }t\\in[0,1],\\, i\\in\\{1,2\\}\\},\\\\\nG_\\delta &:= \\{x\\in F : \\uptau(y,x) \\leq \\delta\\textnormal{ for some }y\\in E\\}\n\\end{align*}\nare compact for every $\\delta>0$. Since $\\meas[G_\\delta]\\to \\meas[E]$ as $\\delta\\to 0$ by Lebesgue's theorem, there exists some $\\delta_0 \\in (0,1)$ such that\n\\begin{align}\\label{Eq:mmm}\n\\meas[G_{\\delta_0}] \\leq \\frac{3}{2}\\,\\meas[E].\n\\end{align}\nBy construction, for every $i\\in\\{1,2\\}$, every $t\\in (0,1)$, and $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_i$-a.e.~$x\\in \\mms$ there exists $\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$ such that $x=\\gamma_t$, $\\gamma_0\\in E$, and $\\smash{\\gamma_1 = z_i}$. 
By definition of $\\smash{G_{\\delta_0}}$, we thus obtain $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_i[G_{\\delta_0}] = 1$ for every $t\\in[0,\\delta_0]$. Hence, \\eqref{Eq:mm} and \\eqref{Eq:mmm} imply\n\\begin{align*}\n\\meas\\big[\\{\\varrho_s^1 > 0\\} \\cap \\{\\varrho_s^2 > 0\\}\\big] > 0\n\\end{align*}\nfor some $s\\in (0,\\delta_0)$. However, this contradicts \\eqref{Eq:Mutual singularity}.\n\\end{proof}\n\n\\begin{lemma}\\label{Pr:More than TMCP} Let $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ hold for some $K\\in\\R$ and $N\\in[1,\\infty)$. Let $\\smash{\\mu_0 = \\rho_0\\,\\meas \\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ and $\\mu_1\\in{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)$ with $\\smash{\\supp\\mu_0\\times\\supp\\mu_1\\subset\\mms_\\ll^2}$. Then there exist a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ connecting $\\mu_0$ to $\\mu_1$ as well as an $\\smash{\\ell_p}$-optimal coupling $\\pi\\in\\Pi_\\ll(\\mu_0,\\mu_1)$ such that for every $t\\in [0,1)$ and every $N'\\geq N$,\n\\begin{align}\\label{Eq:Claim conv TMCP}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t) \\leq -\\int_{\\mms^2} \\sigma_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\rho_0(x^0)^{-1\/N'}\\d\\pi(x^0,x^1).\n\\end{align}\n\\end{lemma}\n\n\\begin{proof} \\textbf{Step 1.} \\textit{Approximation of $\\mu_1$.} Let $B_1,\\dots,B_n\\subset\\mms$, $n\\in\\N$, be a given Borel partition of $\\supp\\mu_1$ with $\\mu_1[B_i]>0$ for every $i\\in\\{1,\\dots,n\\}$. Given such an $i$, fix $\\smash{x_1^i\\in B_i}$ and set $\\lambda_i := \\mu_1[B_i]$ as well as\n\\begin{align*}\n\\mu_1^n := \\lambda_1\\,\\delta_{x_1^1} + \\dots + \\lambda_n\\,\\delta_{x_1^n}.\n\\end{align*}\nSince every coupling of $\\mu_0$ and $\\smash{\\mu_1^n}$ is chronological, an $\\smash{\\ell_p}$-optimal coupling of these exists and is unique by \\autoref{Le:Uniqueness Diracs}. 
Let $\\smash{\\pi_n\\in \\Pi_\\ll(\\mu_0,\\mu_1^n)}$ be this coupling, given by a $\\mu_0$-measurable map $T_n\\colon \\supp\\mu_0\\to\\mms$. For $i\\in\\{1,\\dots,n\\}$, we define \n\\begin{align*}\nA_i := T_n^{-1}(\\{x_1^i\\})\\times\\{x_1^i\\},\n\\end{align*}\nand observe that $A_1,\\dots,A_n\\subset \\mms^2$ constitutes a Borel partition of $\\supp\\pi_n$. \n\nDefine $\\smash{\\pi_n^i\\in{\\ensuremath{\\mathscr{P}}}(\\mms^2)}$, $\\smash{\\nu_0^i\\in {\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$, and $\\smash{\\nu_1^i\\in{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)}$ by\n\\begin{align*}\n\\pi_n^i &:= \\lambda_i^{-1}\\,\\pi_n\\mres A_i,\\\\\n\\nu_0^i &:= (\\pr_1)_\\push\\pi_n^i = \\varrho_0^i\\,\\meas,\\\\\n\\nu_1^i &:= (\\pr_2)_\\push\\pi_n^i = \\delta_{x_1^i}.\n\\end{align*}\nBy construction, we have $\\mu_0 = \\lambda_1\\,\\nu_0^1 + \\dots + \\lambda_n\\,\\nu_0^n$ and thus\n\\begin{align*}\n\\rho_0 = \\lambda_1\\,\\varrho_0^1 + \\dots + \\lambda_n\\,\\varrho_0^n\\quad\\meas\\textnormal{-a.e.}\n\\end{align*}\nMoreover, $\\smash{\\supp\\nu_0^i \\cap \\supp\\nu_0^j = \\emptyset}$ for every $i,j\\in\\{1,\\dots,n\\}$ with $i\\neq j$, whence\n\\begin{align}\\label{Eq:MUT SING}\n\\nu_0^i\\perp\\nu_0^j.\n\\end{align}\n\n\\textbf{Step 2.} \\textit{Invoking the $\\mathrm{TMCP}$ condition.} As $\\smash{x_1^i\\in I^+(\\nu_0^i)}$ for every $i\\in\\{1,\\dots,n\\}$, using the $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ condition, there exists a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $\\smash{(\\nu_t^i)_{t\\in[0,1]}}$ from $\\smash{\\nu_0^i}$ to $\\smash{\\nu_1^i}$ such that for every $t\\in[0,1)$ and every $N'\\geq N$,\n\\begin{align}\\label{Eq:Interm TMCP}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\nu_t^i) \\leq -\\int_\\mms \\sigma_{K,N'}^{(1-t)}(\\uptau(x^0, x_1^i))\\,\\varrho_0^i(x^0)^{1-1\/N'}\\d\\meas(x^0).\n\\end{align}\n\nWith this information, we now build a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic 
$\\smash{(\\mu_t^n)_{t\\in [0,1]}}$ from $\\mu_0$ to $\\smash{\\mu_1^n}$ for which \\eqref{Eq:Claim conv TMCP} holds with $\\pi$ replaced by $\\pi_n$. By definition, $\\smash{(\\nu_t^i)_{t\\in[0,1]}}$ is represented by some $\\smash{{\\ensuremath{\\boldsymbol{\\alpha}}}^i\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\nu_0^i,\\nu_1^i)}$. Thanks to \\eqref{Eq:MUT SING} and \\autoref{Le:Mutually singular}, we obtain\n\\begin{align}\\label{Eq:MUT SING II}\n(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^i \\perp (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^j\n\\end{align}\nfor every $t\\in(0,1]$ and every $i,j\\in\\{1,\\dots,n\\}$ with $i\\neq j$. Then\n\\begin{align*}\n{\\ensuremath{\\boldsymbol{\\pi}}}_n := \\lambda_1\\,{\\ensuremath{\\boldsymbol{\\alpha}}}^1 + \\dots + \\lambda_n\\,{\\ensuremath{\\boldsymbol{\\alpha}}}^n\n\\end{align*}\nis an $\\smash{\\ell_p}$-optimal geodesic plan from $\\mu_0$ to $\\smash{\\mu_1^n}$, and the assignment $\\smash{\\mu_t^n := (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_n}$ gives rise to a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic connecting these probability measures. Let $\\smash{\\varrho_t^i}$ denote the nontrivial density of the $\\meas$-absolutely continuous part of $\\smash{(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}^i}$. Then we obtain that\n\\begin{align*}\n\\rho_t^n\\,\\meas := \\big[\\lambda_1\\,\\varrho_t^1 + \\dots + \\lambda_n\\,\\varrho_t^n\\big]\\,\\meas\n\\end{align*}\nis the $\\meas$-absolutely continuous part of $\\mu_t^n$, $t\\in[0,1]$. Now \\eqref{Eq:MUT SING II} ensures \n\\begin{align*}\n\\meas\\big[\\{\\varrho_t^i > 0\\} \\cap \\{\\varrho_t^j > 0\\}\\big] = 0\n\\end{align*}\nfor every $t\\in [0,1]$ and every $i,j\\in\\{1,\\dots,n\\}$ with $i\\neq j$. 
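Since the sets $\\{\\varrho_t^1 > 0\\},\\dots,\\{\\varrho_t^n > 0\\}$ are thus pairwise disjoint up to $\\meas$-negligible sets, we record that $\\meas$-a.e.,\n\\begin{align*}\n(\\rho_t^n)^{1-1\/N'} = \\sum_{i=1}^n \\big[\\lambda_i\\,\\varrho_t^i\\big]^{1-1\/N'} = \\sum_{i=1}^n \\lambda_i^{1-1\/N'}\\,\\big(\\varrho_t^i\\big)^{1-1\/N'};\n\\end{align*}\nintegrating with respect to $\\meas$ decomposes the R\u00e9nyi entropy of $\\mu_t^n$ accordingly, which is used in the first identity of the subsequent estimate.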
Hence, using \\eqref{Eq:Interm TMCP}, \\eqref{Eq:MUT SING}, and \\eqref{Eq:MUT SING II} we get, for every $t\\in[0,1)$ and every $N'\\geq N$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t^n) &=\n\\sum_{i=1}^n \\lambda_i^{1-1\/N'}\\,{\\ensuremath{\\mathscr{S}}}_{N'}(\\nu_t^i)\\\\\n&\\leq -\\sum_{i=1}^n\\lambda_i^{1-1\/N'}\\!\\int_\\mms \\sigma_{K,N'}^{(1-t)}(\\uptau(x^0,x_1^i))\\,\\varrho_0^i(x^0)^{-1\/N'}\\d\\nu_0^i(x^0)\\\\\n&= -\\sum_{i,i'=1}^n \\int_\\mms \\sigma_{K,N'}^{(1-t)}(\\uptau(x^0,x_1^i))\\,\\big[\\lambda_{i'}\\,\\varrho_0^{i'}(x^0)\\big]^{-1\/N'}\\d\\big[\\lambda_i\\,\\nu_0^i\\big](x^0)\\\\\n&= -\\int_{\\mms^2} \\sigma_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\rho_0(x^0)^{-1\/N'}\\d\\pi_n(x^0,x^1).\n\\end{align*}\n\n\\textbf{Step 3.} \\textit{Conclusion.} If $\\supp\\mu_1$ consists of finitely many points, the claim simply follows by choosing $n\\in\\N$ such that $\\smash{\\mu_1 = \\mu_1^n}$. \n\nTherefore, only the case $\\#\\supp\\mu_1=\\infty$ remains to be studied. By \\cite[Thm.~2.16]{cavalletti2020}, there exists a sequence $\\smash{(\\bar{\\mu}_1^n)_{n\\in\\N}}$ in ${\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)$ which converges weakly to $\\mu_1$ such that $\\smash{\\bar{\\mu}_1^n\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\mu_1^n)}$ for every $n\\in\\N$, and for every sequence $(\\pi_n)_{n\\in\\N}$ of $\\smash{\\ell_p}$-optimal couplings $\\smash{\\pi_n \\in\\Pi_\\leq(\\mu_0,\\bar{\\mu}_1^n)}$, every weak limit of any subsequence of the latter is $\\smash{\\ell_p}$-optimal for $\\mu_0$ and $\\mu_1$. 
In particular, the previous discussion applies to $\\smash{\\bar{\\mu}_1^n}$ in place of $\\smash{\\mu_1^n}$, $n\\in\\N$, and yields sequences $\\smash{({\\ensuremath{\\boldsymbol{\\pi}}}_n)_{n\\in\\N}}$ and $(\\pi_n)_{n\\in\\N}$ of $\\smash{\\ell_p}$-optimal geodesic plans $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}_n\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\bar{\\mu}_1^n)}$ and $\\smash{\\ell_p}$-optimal couplings $\\smash{\\pi_n\\in\\Pi_\\ll(\\mu_0,\\bar{\\mu}_1^n)}$ as in Step 2, respectively. By ${\\ensuremath{\\mathscr{K}}}$-global hyperbolicity, Prokhorov's theorem, and \\autoref{Le:Villani lemma for geodesic} these sequences converge weakly to some $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ and an $\\smash{\\ell_p}$-optimal coupling $\\pi\\in\\Pi_\\ll(\\mu_0,\\mu_1)$, respectively, up to a nonrelabeled subsequence. Define a timelike proper-time parametrized $\\smash{\\ell_p}$-geodesic $(\\mu_t)_{t\\in[0,1]}$ from $\\mu_0$ to $\\mu_1$ by $\\mu_t := (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}$. 
Employing weak lower semicontinuity of ${\\ensuremath{\\mathscr{S}}}_{N'}$ on measures with uniformly bounded support and \\autoref{Le:USC lemma}, we then obtain, for every $t\\in[0,1)$, every $N'\\geq N$, and with $\\smash{\\mu_t^n := (\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_n}$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t) &\\leq \\limsup_{n\\to\\infty} {\\ensuremath{\\mathscr{S}}}_{N'}(\\mu_t^n)\\\\\n&\\leq -\\liminf_{n\\to\\infty} \\int_{\\mms^2} \\sigma_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\rho_0(x^0)^{-1\/N'}\\d\\pi_n(x^0,x^1)\\\\\n&\\leq -\\int_{\\mms^2} \\sigma_{K,N'}^{(1-t)}(\\uptau(x^0,x^1))\\,\\rho_0(x^0)^{-1\/N'}\\d\\pi(x^0,x^1).\\qedhere\n\\end{align*}\n\\end{proof}\n\n\\begin{remark} Arguing as in Step 4 in the proof of \\cite[Prop.~3.18]{cavalletti2020} as well as Step 2 above, the conclusion of \\autoref{Pr:More than TMCP} holds under the following weaker assumption: the measures $\\mu_0$ and $\\mu_1$ admit an $\\smash{\\ell_p}$-optimal coupling $\\varpi\\in\\Pi_\\ll(\\mu_0,\\mu_1)$ such that $\\smash{\\supp\\varpi\\subset\\mms_\\ll^2}$. We will not need this extension in the sequel.\n\\end{remark}\n\nThe proof of the following \\autoref{Th:Uniqueness couplings} follows the lines of \\autoref{Le:Uniqueness Diracs} modulo some modifications we briefly discuss.\n\n\\begin{theorem}\\label{Th:Uniqueness couplings} Assume $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ for $K\\in\\R$ and $N\\in [1,\\infty)$. Suppose the pair $(\\mu_0,\\mu_1)\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)\\times{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)$ to be timelike $p$-dualizable by $\\smash{\\pi\\in\\Pi_\\ll(\\mu_0,\\mu_1)}$. 
Then there exists a $\\mu_0$-measurable map $T\\colon \\mms \\to \\mms$ such that\n\\begin{align*}\n\\pi = (\\Id,T)_\\push\\mu_0,\n\\end{align*}\nand consequently,\n\\begin{align*}\n\\ell_p(\\mu_0,\\mu_1)^p = \\int_\\mms \\uptau(x,T(x))^p\\d\\mu_0(x).\n\\end{align*}\n\nIn particular, $\\pi$ is the unique chronological $\\ell_p$-optimal coupling of $\\mu_0$ and $\\mu_1$.\n\\end{theorem}\n\n\\begin{proof} As in the proof of \\autoref{Le:Uniqueness Diracs}, it suffices to prove the existence of $T$. Let $\\Gamma\\subset\\mms_\\ll^2$ be an $\\smash{\\ell_p}$-cyclically monotone set with $\\pi[\\Gamma]=1$ \\cite[Prop.~2.8]{cavalletti2020}, and set\n\\begin{align*}\nE := \\{x\\in\\mms : \\#\\pr_2[\\Gamma \\cap (\\{x\\}\\times \\mms)] \\geq 2\\},\n\\end{align*}\nwhich is a Suslin set. We claim that $\\mu_0[E]=0$, which directly gives the desired $T$.\n\nAssume to the contrary that $\\mu_0[E]>0$. By the von Neumann selection theorem \\cite[Thm.~9.1.3]{bogachev2007b}, there exist $\\mu_0$-measurable maps $T_1,T_2\\colon E\\to \\mms$ whose graphs are both contained in $\\Gamma$ and such that $T_1(x) \\neq T_2(x)$ for every $x\\in E$. By Lusin's theorem, there exists a compact set $C\\subset E$ with $\\mu_0[C]>0$ such that the restrictions $\\smash{T_1\\big\\vert_C}$ and $\\smash{T_2\\big\\vert_C}$ are continuous. This yields\n\\begin{align*}\n\\min\\{\\met(T_1(x),T_2(x)) : x\\in C\\} > 0.\n\\end{align*}\nIn particular, there exist $z_1,z_2\\in\\mms$ and $r>0$ with $\\met(z_1,z_2)>r$, and a compact set $C'\\subset C$ with $\\mu_0[C']>0$ as well as $\\smash{T_1(C')\\subset{\\ensuremath{\\mathsf{B}}}^\\met(z_1,r\/2)}$ and $\\smash{T_2(C')\\subset{\\ensuremath{\\mathsf{B}}}^\\met(z_2,r\/2)}$. 
Up to possibly shrinking the radius $r$, we may and will assume without restriction that $\\smash{C'\\times [\\bar{{\\ensuremath{\\mathsf{B}}}}^\\met(z_1,r\/2)\\cup \\bar{{\\ensuremath{\\mathsf{B}}}}^\\met(z_2,r\/2)] \\subset\\mms_\\ll^2}$.\n\nAs in the proof of \\autoref{Le:Uniqueness Diracs}, we may further shrink $C'$, hence assume that $\\rho_0$ is bounded and bounded away from zero on $C'$. Consider $\\smash{\\nu_0\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$ with\n\\begin{align*}\n\\nu_0 := \\meas[C']^{-1}\\,\\meas\\mres C'.\n\\end{align*}\nFurthermore, define $\\smash{\\mu_1^1,\\mu_1^2\\in{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)}$ by\n\\begin{align*}\n\\mu_1^1 &:= (T_1)_\\push\\nu_0,\\\\\n\\mu_1^2 &:= (T_2)_\\push\\nu_0.\n\\end{align*}\nBy construction $\\smash{\\supp\\mu_1^1 \\cap \\supp\\mu_1^2 = \\emptyset}$, and the pairs $\\smash{(\\nu_0,\\mu_1^1)}$ and $\\smash{(\\nu_0,\\mu_1^2)}$ are both strongly $p$-timelike dualizable by \\autoref{Re:Strong timelike}. \\autoref{Pr:More than TMCP} applies to both pairs; following the proof of \\autoref{Le:Uniqueness Diracs} from now on gives the claim.\n\\end{proof}\n\n\\begin{theorem}\\label{Th:Uniqueness geodesics} Assume $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ for some $K\\in\\R$ and $N\\in[1,\\infty)$. Suppose the pair $(\\mu_0,\\mu_1)\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)\\times{\\ensuremath{\\mathscr{P}}}_\\comp(\\mms)$ to be timelike $p$-dualizable. 
Then for every ${\\ensuremath{\\boldsymbol{\\pi}}}\\in\\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ there is a $\\mu_0$-measurable map $\\smash{\\mathfrak{T} \\colon \\supp\\mu_0\\to \\mathrm{TGeo}^\\uptau(\\mms)}$ with\n\\begin{align*}\n{\\ensuremath{\\boldsymbol{\\pi}}} = \\mathfrak{T}_\\push\\mu_0.\n\\end{align*}\n\nIn particular, the set $\\smash{\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ is a singleton.\n\\end{theorem}\n\n\\begin{proof} By the usual argument outlined in the proof of \\autoref{Le:Uniqueness Diracs}, it suffices to prove that every $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$ is induced by a map $\\mathfrak{T}$ as above.\n\nAssume to the contrary that one such ${\\ensuremath{\\boldsymbol{\\pi}}}$, henceforth fixed, is not given by a map. We disintegrate ${\\ensuremath{\\boldsymbol{\\pi}}}$ with respect to $\\eval_0$, writing, with some abuse of notation,\n\\begin{align*}\n{\\ensuremath{\\mathrm{d}}}{\\ensuremath{\\boldsymbol{\\pi}}}(\\gamma) = {\\ensuremath{\\mathrm{d}}}{\\ensuremath{\\boldsymbol{\\pi}}}_x(\\gamma)\\d\\mu_0(x)\n\\end{align*}\nfor a $\\mu_0$-measurable map ${\\ensuremath{\\boldsymbol{\\pi}}}\\colon \\supp\\mu_0\\to {\\ensuremath{\\mathscr{P}}}(\\mathrm{TGeo}^\\uptau(\\mms))$. 
By assumption on ${\\ensuremath{\\boldsymbol{\\pi}}}$, there is a compact set $C\\subset\\supp\\mu_0$ such that $\\mu_0[C]>0$ and $\\#\\supp {\\ensuremath{\\boldsymbol{\\pi}}}_x \\geq 2$ for every $x\\in C$.\nHence, for every $x\\in C$ there exists $t_x\\in (0,1)$ such that $\\#\\supp\\,(\\eval_{t_x})_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_x \\geq 2$; since $\\smash{\\mathrm{TGeo}^\\uptau(\\mms)\\subset\\Cont([0,1];\\mms)}$ and $\\smash{\\supp{\\ensuremath{\\boldsymbol{\\pi}}}_x \\subset \\mathrm{TGeo}^\\uptau(\\mms)}$, there exists an open interval $I_x\\subset (0,1)$ with $t_x\\in I_x$ such that $\\#\\supp\\,(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_x \\geq 2$ for every $t\\in I_x$.\n\nGiven any $t\\in \\Q\\cap (0,1)$, we define\n\\begin{align*}\nC_t &:= \\{x \\in C : \\#\\supp\\,(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}_x\\geq 2\\}.\n\\end{align*}\nSince the sets $C_t$ cover $C$ as $t$ ranges over $\\Q\\cap (0,1)$, we find $s\\in \\Q\\cap (0,1)$ with $\\smash{\\mu_0[C_s] >0}$. Finally, we define $\\smash{{\\ensuremath{\\boldsymbol{\\alpha}}}\\in{\\ensuremath{\\mathscr{P}}}(\\mathrm{TGeo}^\\uptau(\\mms))}$ by\n\\begin{align*}\n{\\ensuremath{\\mathrm{d}}}{\\ensuremath{\\boldsymbol{\\alpha}}}(\\gamma) = \\mu_0[C_s]^{-1}\\,\\One_{C_s}(x)\\d{\\ensuremath{\\boldsymbol{\\pi}}}_x(\\gamma)\\d\\mu_0(x).\n\\end{align*}\nBy restriction \\cite[Lem.~2.10]{cavalletti2020}, $(\\eval_0,\\eval_1)_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}$ is an $\\smash{\\ell_p}$-optimal coupling of its marginals, the first one of which is $\\smash{\\mu_0[C_s]^{-1}\\,\\mu_0\\mres C_s\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)}$. 
Since ${\\ensuremath{\\boldsymbol{\\alpha}}}$ is concentrated on $\\mathrm{TGeo}^\\uptau(\\mms)$, a standard argument using the reverse triangle inequality \\eqref{Eq:Reverse tau} for $\\uptau$ implies that $(\\eval_0,\\eval_s)_\\push{\\ensuremath{\\boldsymbol{\\alpha}}}$ is a chronological $\\smash{\\ell_p}$-optimal coupling of its marginals. By definition of $C_s$, it is not given by a map, which contradicts \\autoref{Th:Uniqueness couplings}.\n\\end{proof}\n\n\\begin{remark}\\label{Re:From TNB to TENB} The above arguments --- which, as mentioned, lean on \\cite{cavalletti2020} --- show that the uniqueness results of \\cite[Thm.~3.19, Thm.~3.20]{cavalletti2020} remain true if the timelike nonbranching assumption therein is replaced by the hypothesis of timelike $p$-essential nonbranching for the considered exponent $p\\in (0,1)$.\n\\end{remark}\n\n\\subsection{Equivalence with the entropic TMCP condition}\\label{Sec:Equiv TMCP's} In this section, in addition to our standing assumptions we suppose ${\\ensuremath{\\mathscr{X}}}$ to be timelike $p$-essentially nonbranching for some fixed $p\\in (0,1)$. Building upon the results in \\autoref{Sub:Good} and \\autoref{Sub:Uniqueneess}, we establish the equivalence of $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ with the entropic timelike measure-contraction property from \\cite[Def.~3.7]{cavalletti2020}, restated in \\autoref{Def:TMCPe}. A byproduct of our argument is a pathwise characterization of $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ and $\\smash{\\mathrm{TMCP}_p^e(K,N)}$, cf.~\\autoref{Th:Equivalence TMCP* and TMCPe}, which is also available for $\\smash{\\mathrm{TMCP}_p(K,N)}$ according to \\autoref{Th:Equivalence TMCP}.\n\nRecall the definition \\eqref{Eq:Expo Boltzmann} of the exponentiated Boltzmann entropy ${\\ensuremath{\\mathscr{U}}}_N$.\n\n\\begin{definition}\\label{Def:TMCPe} Let $K\\in\\R$ and $N\\in (0,\\infty)$. 
We say that ${\\ensuremath{\\mathscr{X}}}$ satis\\-fies the \\emph{entropic timelike measure-contraction property} $\\smash{\\mathrm{TMCP}_p^e(K,N)}$ if for every $\\mu_0\\in{\\ensuremath{\\mathscr{P}}}_\\comp^{\\mathrm{ac}}(\\mms,\\meas)$ and every $x_1\\in I^+(\\mu_0)$, there exists an $\\smash{\\ell_p}$-geo\\-desic $(\\mu_t)_{t\\in [0,1]}$ connecting $\\mu_0$ and $\\smash{\\mu_1:= \\delta_{x_1}}$ such that for every $t\\in [0,1]$,\n\\begin{align*}\n{\\ensuremath{\\mathscr{U}}}_N(\\mu_t) \\geq \\sigma_{K,N}^{(1-t)}\\big[\\Vert\\uptau\\Vert_{\\Ell^2(\\mms^2,\\mu_0\\otimes\\mu_1)}\\big]\\,{\\ensuremath{\\mathscr{U}}}_N(\\mu_0).\n\\end{align*}\n\\end{definition}\n\n\\begin{theorem}\\label{Th:Equivalence TMCP* and TMCPe} The following statements are equivalent for every given $K\\in\\R$ and $N\\in[1,\\infty)$.\n\\begin{enumerate}[label=\\textnormal{\\textcolor{black}{(}\\roman*\\textcolor{black}{)}}]\n\\item\\label{LAAA} The condition $\\smash{\\mathrm{TMCP}_p^*(K,N)}$ holds.\n\\item\\label{LAAAA} For every $\\mu_0 = \\rho_0\\,\\meas \\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)$ and every $x_1\\in I^+(\\mu_0)$ there is an $\\smash{\\ell_p}$-optimal geodesic plan $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$, where $\\smash{\\mu_1 := \\delta_{x_1}}$, such that for every $t\\in[0,1)$, we have $\\smash{(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}} = \\rho_t\\,\\meas \\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$, and for every $N'\\geq N$, the inequality\n\\begin{align*}\n\\rho_t(\\gamma_t)^{-1\/N'} \\geq \\sigma_{K,N'}^{(1-t)}(\\uptau(\\gamma_0,x_1))\\,\\rho_0(\\gamma_0)^{-1\/N'}\n\\end{align*}\nholds for ${\\ensuremath{\\boldsymbol{\\pi}}}$-a.e.~$\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$.\n\\item\\label{LAAAAA} The condition $\\smash{\\mathrm{TMCP}_p^e(K,N)}$ holds.\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof} We outline the necessary adaptations of the arguments in 
\\autoref{Sub:Equiv TCDs} and leave the details to the reader. \n\nBy integration, \\ref{LAAAA} implies \\ref{LAAA}. The implication from \\ref{LAAA} to \\ref{LAAAA} is argued analogously to \\autoref{Pr:ii to iii}. Note that to obtain the asserted absolute continuity of $(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}}$ with respect to $\\meas$ for every $t\\in[0,1)$, we combine \\autoref{Th:Good TMCP} with the uniqueness \\autoref{Th:Uniqueness geodesics}. The step from \\ref{LAAAA} to \\ref{LAAAAA} follows since\n\\begin{align*}\n-\\frac{1}{N}\\log\\rho_t(\\gamma_t) \\geq {\\ensuremath{\\mathrm{H}}}_t\\Big[\\!-\\!\\frac{1}{N}\\log\\rho_0(\\gamma_0),\\frac{K}{N}\\,\\uptau^2(\\gamma_0,x_1)\\Big]\n\\end{align*}\nfor ${\\ensuremath{\\boldsymbol{\\pi}}}$-a.e.~$\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$, using that the function ${\\ensuremath{\\mathrm{H}}}_t$ defined in \\eqref{Eq:GtHt} has convexity properties similar to those of the function ${\\ensuremath{\\mathrm{G}}}_t$ used to derive \\autoref{Pr:iii to iv}, and then arguing as in the proof of the latter result. 
Similarly, the implication from \\ref{LAAAAA} to \\ref{LAAAA} is shown analogously to the proof of \\autoref{Pr:v to iii}.\n\\end{proof}\n\nEvident adaptations of the previous arguments give the following.\n\n\\begin{theorem}\\label{Th:Equivalence TMCP} The following statements are equivalent for every given $K\\in\\R$ and $N\\in[1,\\infty)$.\n\\begin{enumerate}[label=\\textnormal{\\textcolor{black}{(}\\roman*\\textcolor{black}{)}}]\n\\item The condition $\\smash{\\mathrm{TMCP}_p(K,N)}$ holds.\n\\item For every $\\mu_0 =\\rho_0\\,\\meas\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)$ and every $x_1\\in I^+(\\mu_0)$ there is an $\\smash{\\ell_p}$-optimal geodesic plan $\\smash{{\\ensuremath{\\boldsymbol{\\pi}}}\\in\\mathrm{OptTGeo}_{\\ell_p}^\\uptau(\\mu_0,\\mu_1)}$, where $\\smash{\\mu_1 := \\delta_{x_1}}$, such that for every $t\\in[0,1)$, we have $\\smash{(\\eval_t)_\\push{\\ensuremath{\\boldsymbol{\\pi}}} = \\rho_t\\,\\meas\\in{\\ensuremath{\\mathscr{P}}}^{\\mathrm{ac}}(\\mms,\\meas)}$, and for every $N'\\geq N$, the inequality\n\\begin{align*}\n\\rho_t(\\gamma_t)^{-1\/N'} \\geq \\tau_{K,N'}^{(1-t)}(\\uptau(\\gamma_0,x_1))\\,\\rho_0(\\gamma_0)^{-1\/N'}\n\\end{align*}\nholds for ${\\ensuremath{\\boldsymbol{\\pi}}}$-a.e.~$\\gamma\\in\\mathrm{TGeo}^\\uptau(\\mms)$.\n\\end{enumerate}\n\\end{theorem}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nOff-resonant Raman scattering is a robust approach to light-atom interfaces.\nOne of the methods is to induce spontaneous Stokes scattering in which\npairs of photons and collective atomic excitations - a two-mode squeezed\nstate - are created. These excitations can be stored and later retrieved\nin the anti-Stokes process \\cite{VanderWal2003,Bashkansky2012}. 
This approach is\ncommonly known to be a basic building block of the DLCZ protocol \\cite{Duan2001}.\n\nTypically rubidium and cesium have been used as atomic systems in\nboth warm and cold atomic ensembles \\cite{Chrapkiewicz2012,Michelberger2015,Dabrowski2014}.\nThese systems are coupled to light at near-IR wavelengths,\nsuch as 795 and 780 nm for rubidium D1 and D2 lines. Coupling to new wavelengths holds a promise to greatly extend capabilities of quantum memories. This\ncan be accomplished by non-linear frequency conversion in the four-wave\nmixing (4WM) process using strong resonant non-linearities in atoms\nthanks to transitions between excited states. Such processes\nhave been used to demonstrate frequency conversion in rubidium \\cite{Donvalkar2014,Becerra2008,Parniak2015,Akulshin2009a,Khadka2012} or to generate photon pairs in the\ncascaded spontaneous 4WM process \\cite{Willis2010,Srivathsan2013a,Chaneliere2006}. Multiphoton\nprocesses are also a developing method to interface light and Rydberg\natoms \\cite{Kondo2015,Huber2014a,Kolle2012}.\n\n\nChaneli\\`ere \\textit{et al.} \\cite{Chaneliere2006} proposed to combine the\nprocesses of Raman scattering and 4WM by first creating photon pairs\nin a cascaded spontaneous 4WM in one atomic ensemble and then storing\nphotons as collective atomic excitations in another cold ensemble,\nand an experiment was recently realized \\cite{Zhang2016}.\nAs a result they obtained a two-mode squeezed state of atomic excitations\nand telecom photons. Another approach was to frequency-convert light\ngenerated in quantum memory with 4WM \\cite{Radnaev2010a}, in order to create a frequency-converted\nquantum memory. In all cases one atomic ensemble was used for storage,\nand another for frequency conversion or photon generation. 
\n\n\\begin{figure}\n\\includegraphics[scale=1.02]{fig1}\\protect\\caption{(a) Configuration of atomic levels and fields we use to realize the\nfour-photon interface, (b) pulse sequence used in the experiment,\n(c) the central part of the experimental setup demonstrating the phase-matching\ngeometry and (d) trace of the 1 GHz FSR Fabry-P\\'erot interferometer signal showing that the frequency of the 4Ph field differs from what one would expect from the closed-loop process. Rows in (b) correspond to different beam paths presented\nin (c).}\n\\label{fig:schemat}\n\\end{figure}\n\n\nIn this paper we realize a Raman-like interface based\non 4WM in warm rubidium vapors driven by ground-state atomic coherence. The process we present may in principle be used to generate correlated pairs of collective atomic excitations\nand photons coupled to a transition between two excited states in a\nsingle, four-photon process in a single atomic ensemble. The transition between the two excited states corresponds to 776-nm light,\nas illustrated in Fig.~\\hyperref[fig:schemat]{\\ref*{fig:schemat}(a)}. As the two intermediate states we use $5\\mathrm{P}_{3\/2}$\nand $5\\mathrm{D}_{5\/2}$.\n\nThe paper is organized as follows. In Sec. \\ref{sec:theory} we discuss the principles behind our idea. In Sec. \\ref{sec:experimental} we describe the experimental setup and the methods we use to verify our findings. Finally, we give the results of our studies of the four-wave mixing interface, namely correlations and statistical properties in Sec. \\ref{sec:corr} and detuning dependencies in Sec. \\ref{sec:spectral}. We conclude the paper and give prospects for future developments in Sec. \\ref{sec:concl}.\n\n\\section{General idea}\\label{sec:theory}\nIn our experiment we generate ground-state atomic coherence $\\rho_{gh}$\nand light denoted by 2Ph in a two-photon stimulated Raman Stokes process, seeded by vacuum fluctuations. 
The advantage of this approach is the fact that it is a well-established and effective way to prepare atomic ground-state coherence. In particular, it may be used in different regimes, starting from the single-photon and single-excitation spontaneous regime as in the DLCZ protocol \\cite{Duan2001,Chrapkiewicz2012}, through the linear gain regime \\cite{Raymer1981,Duncan1990} and even in the nonlinear gain-saturation regime \\cite{Walmsley1985,Lewenstein1984,Trippenbach1985}. In the latter two cases, macroscopic ground-state coherence is generated \\cite{Zhang2014c}. The generated coherence and the number of scattered photons will be highly random but correlated. The atomic coherence is not averaged out to zero due to atomic motion, since the buffer gas makes the motion diffusive \\cite{Raymer2004}. Moreover, the generated Raman field remains coherent with the driving field, so phase fluctuations of the driving field do not disturb the process \\cite{Wu2010}. In particular, the generated macroscopic ground-state coherence may be probed \\cite{Chen2010}, read out \\cite{VanderWal2003}, or may enhance a further stimulated Raman process \\cite{Yuan2010}.\n\nIn this experiment, we observe concurrent generation of 776-nm light denoted by 4Ph in\na four-photon process analogous to stimulated Raman scattering driven by\nground-state coherence. It does not occur spontaneously in the macroscopic\nregime due to small gain. However, with macroscopic $\\rho_{gh}$ generated in the two-photon stimulated Raman process, the\ndriving fields $a$, $b$ and $c$ couple ground-state atomic coherence\nto the weak optical 4Ph field. In other words, the 4Ph process is stimulated by pre-existing atomic coherence. 
In the leading order in the drive-beam\nRabi frequencies $\\Omega_{i}$, the atomic polarization resulting\nin the emission of the 4Ph signal field is:\n\n\\begin{equation}\n\\mathbf{P}_\\mathrm{4Ph}=-n\\mathbf{d}_{e_{3}e_{2}}\\rho_{gh}\\frac{{\\Omega_{a}\\Omega_{b}\\Omega_{c}^{*}}}{4\\Delta_{g}\\delta\\Delta_{h}},\n\\label{eq:P2ndrho}\n\\end{equation}\n\n\nwhere $n$ is the atom number density and $\\mathbf{d}_{ij}$ are the\ndipole moments of the respective transitions. This formula shows\nwhich detunings play a role.\n\nFor the experimental observation, \\textit{a priori} knowledge of polarization\nproperties is crucial. To find it, we add contributions from\nall possible paths through intermediate hyperfine states. Since the\ndetunings from the intermediate $5\\mathrm{P}_{3\/2}$ state are much larger\nthan the respective hyperfine splittings, we ignore the latter. The same approximation\nis adopted for the highest excited state $5\\mathrm{D}_{5\/2}$.\nEven though the respective detuning $\\delta$ is of the order of several\nMHz, similar to the hyperfine splitting of the highest excited state\n$5\\mathrm{D}_{5\/2}$, the hyperfine structure is completely unresolved\ndue to significant pressure broadening \\cite{Zameroski2014} in the 0.5~torr krypton buffer gas we use. 
Consequently, we may omit any detuning\ndependence and calculate the unnormalized polarization vector $\\boldsymbol{\\epsilon}$\nof the signal light using the path-averaging formalism we developed in\n\\cite{Parniak2015}, by recalling the definitions of the Rabi frequencies\nin Eq.~\\hyperref[eq:P2ndrho]{(\\ref*{eq:P2ndrho})}:\n\n\\begin{equation}\n\\boldsymbol{\\epsilon}_\\mathrm{4Ph}\\propto\\sum_{F,\\:m_{F}}\\mathbf{d}_{e_{3}e_{2}}(\\mathbf{E}_{a}\\cdot\\mathbf{d}_{ge_{1}}\\mathbf{E}_{b}\\cdot\\mathbf{d}_{e_{1}e_{2}}\\mathbf{E}_{c}^{*}\\cdot\\mathbf{d}_{e_{3}h}^{*}),\n\\label{eq:polaryzacje}\n\\end{equation}\n\n\nwhere $\\mathbf{E}_{i}$ are the electric fields of the respective beams $i$.\nThe summation is carried out over all possible magnetic sublevels ($F_{i}$,\n$m_{F_{i}}$) of all intermediate states $|e_{1}\\rangle$, $|e_{2}\\rangle$\nand $|e_{3}\\rangle$. \n\\section{Experimental Methods}\\label{sec:experimental}\nThe experimental setup is built around a rubidium-87 vapor cell\nheated to 100$^{\\circ}$C, corresponding to an atom number density $n=7.5\\times10^{12}\\ \\mathrm{cm^{-3}}$ and an optical density of 1600 at the D2 line for an optically pumped ensemble. For the two lowest, long-lived states,\nwe use two hyperfine components of the rubidium ground-state manifold,\nnamely $|5\\mathrm{S}_{1\/2},F=1\\rangle=|g\\rangle$ and $|5\\mathrm{S}_{1\/2},F=2\\rangle=|h\\rangle$.\nFor the $|e_{1}\\rangle$ and $|e_{3}\\rangle$ states we take hyperfine\ncomponents of the $5\\mathrm{P}_{3\/2}$ manifold, and for the highest\nexcited state $|e_{2}\\rangle$ we take the $5\\mathrm{D}_{5\/2}$ manifold.\nAtoms are initially pumped into the $|g\\rangle$ state using a 400 $\\mu$s\noptical pumping pulse (at 795 nm). Next, three square-shaped driving pulses of 4 $\\mu$s duration each\nare applied simultaneously, as shown in Fig.~\\hyperref[fig:schemat]{\\ref*{fig:schemat}(b)}. Inside\nthe cell, the beams are collimated and have diameters of 400~$\\mu$m,\nbeing well-overlapped over a 2-cm-long cylindrical region. 
They intersect\nat a small angle of 14 mrad, with the 780-nm beams nearly counter-propagating\nwith respect to the 776-nm beams, as presented in Fig.~\\hyperref[fig:schemat]{\\ref*{fig:schemat}(c)}.\nThe powers of driving beams $a$, $b$ and $c$ are 10 mW, 45 mW and 8\nmW, respectively. \n\nThe 780-nm two-photon Raman signal 2Ph co-propagates with the driving\nfield $a$. It is separated using a Wollaston polarizer (Wol) and\ndetected on a fast photodiode (PD), with a $10^4$ signal to driving-field-leakage ratio. The four-photon signal 4Ph is\nemitted in the direction given by the phase matching condition $\\mathbf{k}_{a}+\\mathbf{k}_{b}=\\mathbf{k}_{\\mathrm{{4Ph}}}+\\mathbf{k}_{c}+\\mathbf{K}$. The wavevector $\\mathbf{K}$ of the spin-wave generated in the 2Ph Raman scattering process equals $\\mathbf{K}=\\mathbf{k}_a-\\mathbf{k}_{\\mathrm{2Ph}}$. We note that both the longitudinal and transverse components of the spin-wave wavevector $\\mathbf{K}$ are much smaller than those of the light wavevectors, and consequently it has only an insignificant effect on the 4Ph signal emission direction. This particular, nearly co-propagating geometry allows us to couple the same spin-wave of wavevector $\\mathbf{K}$ to both the 2Ph and 4Ph processes.\nIn addition to the desired 4Ph signal, 776-nm light coming from the\nclosed-loop process in which field $c$ couples level $|e_{3}\\rangle$\ndirectly to $|g\\rangle$ is emitted in the same direction \\cite{Parniak2015}\nand at a frequency differing by only 6.8~GHz. However,\nwhen driving fields $a$, $b$ and $c$ are $x$-, $y$- and $y$-\npolarized, respectively, the 4Ph signal light is $x$-polarized, while\nthe closed-loop 4WM light is $y$-polarized, where $x\\perp y$. 
This arrangement enables\nfiltering out the 4Ph signal from both stray 776-nm driving light\nand the closed-loop 4WM light with a polarizing beam-splitter (PBS, $10^2$ extinction).\nTo suppress residual drive laser background at 780 nm, the 4Ph signal goes through\nan interference filter (transmission of 80$\\%$ for 776 nm and $10^{-3}$ for 780 nm) and is detected by an avalanche photodiode\n(APD). We were able to obtain a signal to background ratio of up to $10^2$. \n\nBy rotating the half-wave\nplate ($\\lambda\/2$) we can easily switch between observing the 4Ph\nand closed-loop signals. For detunings optimal for the 4Ph process, we register less than 10 nW of the closed-loop 4WM light. The two signals display different temporal characteristics; moreover, only the 4Ph signal is correlated with the 2Ph light. \n\nWe verify the frequencies of the 4Ph and closed-loop light using\na scanning Fabry-P\\'erot confocal interferometer of 1 GHz free spectral range (FSR) inserted in the 4Ph beam path. Fig.~\\hyperref[fig:schemat]{\\ref*{fig:schemat}(d)} shows the trace obtained by scanning the interferometer for a detuning difference $\\Delta_g+\\Delta_h=2\\pi\\times3.43$ GHz. Leakage of driving field $b$, induced by slight misalignment, is used as a frequency reference. The middle peak corresponds to the 4Ph signal, with a solid line indicating the expected frequency. The dashed line corresponds to the frequency one would expect from the closed-loop signal. \n\n\n\\section{Results}\n\\subsection{Statistics and Correlations}\\label{sec:corr}\n\\begin{figure}[b]\n\\begin{centering}\n\\includegraphics[scale=1.2]{fig2}\n\\par\\end{centering}\n\n\\protect\\caption{(a - 2Ph, b - 4Ph) Averaged signal intensities (solid lines) with several\nsingle realizations (dashed lines), denoted by the same colors in (a)\nand (b), demonstrating visible strong correlations and (c) calculated\ncross-correlation $C(t)$ between signals of the four-photon (4Ph) and\nthe two-photon processes (2Ph). 
}\n\\label{fig:corr}\n\\end{figure}\n\n\n\\begin{figure}[b]\n\\begin{centering}\n\\includegraphics[scale=0.7]{fig3}\n\\par\\end{centering}\n\n\\protect\\caption{Joint statistics $P(\\mathcal{E}_\\mathrm{2Ph},\\mathcal{E}_\\mathrm{4Ph})$ together with marginal distributions of registered pulse energies for (a) short driving time of 200 ns, yielding nearly single-mode thermal statistics (note the logarithmic scale in this plot), and (b) longer driving time of 4 $\\mu$s with observable pulse energy stabilization, well described by multimode thermal statistics with mode number $\\mathcal{M}$ of 4.1 for 2Ph pulses and 1.6 for 4Ph pulses. Solid curves correspond to fitted thermal distributions.}\n\\label{fig:corrmap}\n\\end{figure}\n\nIn our experiment we remain in the macroscopic scattered light intensity\nregime. When a strong Raman driving field is present, atoms are transferred\nto $|h\\rangle$ simultaneously with the scattering of\n2Ph photons and the buildup of large atomic coherence $\\rho_{gh}$.\nThe temporal shape of the 2Ph pulse is an exponential only at the\nvery beginning, which we observe in Fig.~\\hyperref[fig:corr]{\\ref*{fig:corr}(a)}. Not only the pulse energies, but also the shapes fluctuate significantly from shot to shot, as the process is\nseeded by vacuum fluctuations \\cite{Raymer1989}. However, the\n4Ph pulse, presented in Fig.~\\hyperref[fig:corr]{\\ref*{fig:corr}(b)}, nearly exactly follows\nthe 2Ph pulse. We calculate temporal correlations between the 2Ph\nsignal, which is known to be proportional to the ground-state atomic\ncoherence $\\rho_{gh}$, and the 4Ph signal. 
The normalized intensity\ncorrelation at time $t$ between the 2Ph signal $I_{2\\mathrm{Ph}}(t)$\nand the 4Ph signal $I_{4\\mathrm{{Ph}}}(t)$ is calculated according\nto the formula $C(t)=\\langle\\Delta I_{\\mathrm{{2Ph}}}(t)\\Delta I_{\\mathrm{{4Ph}}}(t)\\rangle\/\\sqrt{\\langle\\Delta I_{\\mathrm{{2Ph}}}^{2}(t)\\rangle\\langle\\Delta I_{\\mathrm{{4Ph}}}^{2}(t)\\rangle}$\nby averaging over 500 realizations, where the variances\n$\\langle\\Delta I_{\\mathrm{{2Ph}}}^{2}(t)\\rangle$ and $\\langle\\Delta I_{\\mathrm{{4Ph}}}^{2}(t)\\rangle$\nare corrected for electronic noise. Figure \\hyperref[fig:corr]{\\ref*{fig:corr}(c)} presents\nthe cross-correlation $C(t)$. We observe that correlations are high\nduring the entire process, which proves that at any time both processes\ninteract with the same atomic coherence $\\rho_{gh}$, as in some previous works in $\\Lambda$-level configurations \\cite{Chen2010,Yang2012b}.\nIn particular, we are able to measure a high correlation of >0.9 at the very\nbeginning of the pulses, where light intensities are low. This regime\nis quite promising for further quantum applications. To estimate the\nuncertainty of the calculated correlations, we divided the data into 10 equal\nsets of 50 repetitions and calculated the correlation inside each\nset, finally obtaining the uncertainty by calculating the standard\ndeviation of the results from all sets. \n\nNext, we study statistics and correlations of pulse energies in detail. We consider short and long pulse duration regimes. Figure \\hyperref[fig:corrmap]{\\ref*{fig:corrmap}(a)} corresponds to a short driving time of 200 ns. In this regime light generated in both 2Ph and 4Ph processes is well-characterized by the single-mode thermal energy distribution $P(\\mathcal{E})=\\langle\\mathcal{E}\\rangle^{-1} \\exp(-\\mathcal{E}\/\\langle\\mathcal{E}\\rangle)$ \\cite{Raymer1982}, where $\\mathcal{E}$ is the total scattered light energy in a single realization. 
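The two quantitative tools of this section are straightforward to reproduce numerically: the correlation $C(t)$ is a per-time-bin Pearson coefficient estimated across realizations, and the pulse-energy statistics are (multimode) thermal, i.e.~Gamma-distributed. A minimal sketch in Python follows; the function and variable names are ours, and the electronic-noise correction described in the text is omitted:

```python
import math

import numpy as np


def cross_correlation(I_2ph, I_4ph):
    """Per-time-bin Pearson correlation C(t) between two intensity records.

    I_2ph, I_4ph: arrays of shape (realizations, time_bins).
    Returns C(t) with shape (time_bins,); the electronic-noise
    correction of the variances is not included here.
    """
    d2 = I_2ph - I_2ph.mean(axis=0)
    d4 = I_4ph - I_4ph.mean(axis=0)
    cov = (d2 * d4).mean(axis=0)
    return cov / np.sqrt((d2 ** 2).mean(axis=0) * (d4 ** 2).mean(axis=0))


def thermal_pdf(E, mean_E, M=1.0):
    """M-mode thermal (Gamma) energy density; M = 1 reproduces the
    single-mode thermal law P(E) = <E>^-1 exp(-E/<E>)."""
    return (M / (mean_E * math.gamma(M))) \
        * (E * M / mean_E) ** (M - 1) * math.exp(-E * M / mean_E)
```

For perfectly linearly related records the estimator returns $\\pm 1$, while the relative variance of the $M$-mode thermal density is $1\/M$, which is the pulse-energy stabilization mechanism discussed in the text.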
The observation of single-mode thermal statistics shows that we excite only a single transverse-spatial mode, as intended by using an adequately small size of the Raman driving beam $a$ \\cite{Raymer1985a,Duncan1990}. The thermal distribution yields very high pulse energy fluctuations (namely, the mean energy is equal to the standard deviation), which are due to vacuum fluctuations of the electromagnetic field and quantum fluctuations of the atomic state \\cite{Walmsley1985,Walmsley1983}. Still, we observe that the energies of 2Ph and 4Ph pulses are highly correlated, which is demonstrated by the joint statistics $P(\\mathcal{E}_\\mathrm{2Ph},\\mathcal{E}_\\mathrm{4Ph})$. The residual spread is mainly due to detection noise of both signals.\n\nIn the second regime [Fig. \\hyperref[fig:corrmap]{\\ref*{fig:corrmap}(b)}] the driving pulses of 4~$\\mu$s are longer than in the previous scheme. The relative fluctuations become smaller due to gain saturation. We found the marginal statistics to be well described by multimode thermal distributions (with number of modes $\\mathcal{M}$) given by \n\\begin{equation}\nP(\\mathcal{E})=\\frac{\\mathcal{M}}{\\langle\\mathcal{E}\\rangle\\Gamma(\\mathcal{M})}\\left(\\frac{\\mathcal{E}\\mathcal{M}}{\\langle\\mathcal{E}\\rangle}\\right)^{\\mathcal{M}-1}\\exp(-\\mathcal{E}\\mathcal{M}\/\\langle\\mathcal{E}\\rangle).\n\\end{equation}\nThe joint statistics $P(\\mathcal{E}_\\mathrm{2Ph},\\mathcal{E}_\\mathrm{4Ph})$ demonstrate clear correlations, which are here less influenced by detection noise than previously, as the pulse energies are higher.\n\n\n\\begin{figure}[b]\n\\begin{centering}\n\\includegraphics[scale=0.75]{fig4}\n\\par\\end{centering}\n\\protect\\caption{(a) Scheme of the experiment to demonstrate storage of ground-state atomic coherence for $\\tau=450$ ns. (b) Measured two-point correlation $C(t_1,t_2)$ between 2Ph and 4Ph signals with average registered signal powers. 
Off-diagonal elements of the correlation map demonstrate that the light fields interact with the same atomic coherence before and after the time delay. }\n\\label{fig:corrmaptemporal}\n\\end{figure}\n\nFinally, we check that the correlations are indeed mediated by the ground-state atomic coherence by interrupting the scattering process for a dark period of $\\tau=450$ ns. This is proved by strong correlations between the intensities of light scattered before and after the dark period, observed in both the 2Ph and 4Ph processes. \n\nAfter the atoms are optically pumped as in the original scheme, we drive the processes with 150 ns pulses of the three driving fields $a$, $b$ and $c$. After a dark period of $\\tau=450$ ns, we drive the process for another 200 ns. The coherence $\\rho_{gh}$ generated in the first stage induces optical polarization at both the 4Ph field frequency (as in Eq.~\\ref{eq:P2ndrho}) and the 2Ph field frequency:\n\\begin{equation}\n\\mathbf{P}_\\mathrm{2Ph}=-n\\mathbf{d}_{e_h e_1}\\rho_{gh}\\frac{\\Omega_a}{\\Delta_g},\n\\label{eq:2ph}\n\\end{equation}\nresulting in stimulated Raman emission. The full two-point correlation map presented in Fig.~\\hyperref[fig:corrmaptemporal]{\\ref*{fig:corrmaptemporal}(b)} is calculated as $C(t_1,t_2)=\\langle\\Delta I_{\\mathrm{{4Ph}}}(t_1)\\Delta I_{\\mathrm{{2Ph}}}(t_2)\\rangle\/\\sqrt{\\langle\\Delta I_{\\mathrm{{4Ph}}}^{2}(t_1)\\rangle\\langle\\Delta I_{\\mathrm{{2Ph}}}^{2}(t_2)\\rangle}$. Apart from the diagonal correlated areas at $t_1\\approx t_2 \\approx 100$ ns and 800 ns, we observe the anti-diagonal terms corresponding to correlations between the two pulses. Due to spontaneous emission and collisional dephasing all the excited atomic states decay quickly, with their lifetime limited to 20 ns. Since the dark period is over an order of magnitude longer than this lifetime, the surviving correlations cannot be carried by excited-state populations; this proves that we store information in the ground-state atomic coherence. The storage time $\\tau$ is limited mainly by atomic motion. In fact, the stored ground-state coherence, and in turn the correlations, decay in multiple ways, e.g. 
by diffusive spatial spread of the atomic coherence \\cite{Chrapkiewicz2014b,Parniak2014} and by the influx of optically pumped atoms into the interaction region.\nFinally, we mention that very similar results are obtained if the system is driven by field $a$ only in the first stage of the sequence, so that only the 2Ph field and the atomic coherence are generated. In the second stage of the sequence, where all driving fields are present, we observe that both the 2Ph and 4Ph signals are correlated with the 2Ph signal emitted in the first stage.\n\nThe agreement of the measured statistics with theoretical predictions and multiple previous experiments proves that the correlations do arise from vacuum fluctuations. With the driving power stable to within less than 1\\% and the laser frequency stable to within 1 MHz, these external contributions to the fluctuations can be neglected. The large magnitude of the fluctuations and the nearly perfect correlations between the 2Ph and 4Ph signals, together with the demonstrated storage of correlated signals, allow us to reject phase-noise to amplitude-noise conversion as a source of the correlations \\cite{Cruz2006}.\n\nAll of the above results were measured for $\\Delta_g\/2\\pi=1000$ MHz, $\\Delta_h\/2\\pi=1200$ MHz and $\\delta\/2\\pi=-50$ MHz.\n\n\n\\subsection{Detuning dependence}\\label{sec:spectral}\n\n\\begin{figure}\n\\includegraphics[scale=1.03]{fig5}\\protect\\caption{(a) Averaged (over 500 realizations) pulse shapes\nfor the intensities of the 2Ph and the 4Ph signals for a set of single-photon\ndetunings $\\Delta_{g}$ at constant $\\delta\/2\\pi=-50$\nMHz and $\\Delta_h\/2\\pi=1500$ MHz, (b) pulse full-widths at half-maximum (FWHM), (c) their energies and (d) pulse energy correlation between the two processes as\na function of the field $a$ single-photon detuning $\\Delta_{g}$. 
Subsequent\nplots of 2Ph (4Ph) signals in (a) are shifted by 120 $\\mu$W (40 nW).}\n\\label{fig:shapes}\n\\end{figure}\n\n\nNow we turn to verifying the properties of the 4Ph signal for various driving-field\ndetunings. The influence of the field $a$ detuning $\\Delta_{g}$\nis seen in Fig. \\ref{fig:shapes}. A number of pronounced effects\narise as this laser drives the Raman scattering and produces the\nground-state atomic coherence $\\rho_{gh}$. Initially the 2Ph signal grows exponentially. The corresponding\nRaman gain coefficient is strongly dependent on the driving-field detuning\n$\\Delta_{g}$. \nThe net effect is a shortening of the 2Ph pulses closer to resonance, as shown in Fig.~\\hyperref[fig:shapes]{\\ref*{fig:shapes}(b)}. The 4Ph pulse follows\nthe ground-state coherence $\\rho_{gh}$ and the 2Ph pulse as shown\nin the previous section; however, its maximum is somewhat delayed.\nWe attribute this effect to internal atomic dynamics at high driving-intensity\nlevels, which are not captured by Eq.~\\hyperref[eq:P2ndrho]{(\\ref*{eq:P2ndrho})}. The pulse energies\nare also higher closer to resonance, although the two-photon\nRaman pulse energy saturates due to absorption losses [see Fig.~\\hyperref[fig:shapes]{\\ref*{fig:shapes}(c)}].\n\nImportant insight is provided by calculating the energy correlation between\nthe 2Ph and 4Ph light pulses, which fluctuate significantly from shot\nto shot. 
The correlation is calculated as $C=\\langle\\Delta \\mathcal{E}_{\\mathrm{{2Ph}}}\\Delta \\mathcal{E}_{\\mathrm{{4Ph}}}\\rangle\/\\sqrt{\\langle\\Delta \\mathcal{E}_{\\mathrm{{2Ph}}}^{2}\\rangle\\langle\\Delta \\mathcal{E}_{\\mathrm{{4Ph}}}^{2}\\rangle}$,\nwhere $\\mathcal{E}_{\\mathrm{{2Ph}}}$ and $\\mathcal{E}_{\\mathrm{{4Ph}}}$ are the total energies\nof the 2Ph and 4Ph light pulses, respectively, in a single realization.\nThe averaging $\\langle\\cdot\\rangle$ is done over 500 realizations of\nthe process and the results are plotted in Fig.~\\hyperref[fig:shapes]{\\ref*{fig:shapes}(d)}.\nStrong correlations at various detunings reinforce the observation that we are indeed able\nto couple the 4Ph optical field to the ground-state coherence, since the number\nof photons in the 2Ph pulse is proportional to the generated atomic coherence $|\\rho_{gh}|^2$.\nWe attribute the drop in correlations close to the resonance line to absorption\nlosses. Finally, we estimate the efficiency of conversion from the ground-state atomic coherence to the 4Ph field, $\\eta=\\mathcal{E}_\\mathrm{4Ph}\/\\mathcal{E}_\\mathrm{2Ph}$, to be $5\\times10^{-4}$. By comparing Eq.~(\\ref{eq:P2ndrho}) with the analogous expression for the 2Ph process given by Eq.~(\\ref{eq:2ph}), we obtain $\\eta\\approx10^{-4}$, in order-of-magnitude agreement. This figure of merit could be improved by choosing different experimental geometries or by laser-cooling the atomic ensemble.\n\n\\begin{figure}\n\\begin{centering}\n\\includegraphics[scale=1.15]{fig6} \n\\par\\end{centering}\n\n\\protect\\caption{Experimental average pulse energies of the 2Ph and 4Ph signals as a function\nof the field $a$ detuning $\\Delta_{g}$ (measured from the $F=1\\rightarrow F'=1$\nresonance) and the field $b$ detuning $\\delta-\\Delta_{g}$ around the\ntwo-photon resonance. As we change the field $a$ detuning, both the single-photon\ndetuning $\\Delta_{g}$ and the two-photon detuning $\\delta$\nchange accordingly. 
Dots (squares) represent maxima (minima) of the 4Ph (2Ph)\nsignal, while the $\\delta=0$ line indicates the two-photon absorption resonance.}\n\\label{fig:detuning}\n\\end{figure}\n\n\nTo capture the physics in the vicinity of the two-photon resonance\nwe study the 4Ph and 2Ph pulse energies as a function of the detunings of\nfields $a$ and $b$. First we scan the field $b$ detuning $\\delta-\\Delta_{g}$\nacross the two-photon resonance line (see Fig. \\ref{fig:detuning}).\nWe observe strong suppression of the 2Ph signal (minima are marked by squares in Fig. \\ref{fig:detuning}) due to two-photon absorption\n(TPA) in the ladder configuration \\cite{Moon2013}. As a consequence,\nless atomic coherence $\\rho_{gh}$ is generated and the 4Ph signal\nis also reduced. In turn, the shifted positions of the 4Ph maxima (marked by dots) are due to a trade-off\nbetween TPA losses and 4WM efficiency, as the latter is highest at\nthe two-photon resonance (marked by the $\\delta=0$ line) according to Eq.~\\hyperref[eq:P2ndrho]{(\\ref*{eq:P2ndrho})}. The peak appears only on one side of the resonance due to the phase-matching condition being influenced by atomic dispersion \\cite{Zibrov2002}. \nAdditionally, we observe the expected 60-MHz broadening of the two-photon resonance due to buffer-gas collisions.\nBy changing the field $a$ detuning $\\Delta_{g}$ we see the expected\nshift of the two-photon resonance. We checked that even at a sub-optimal two-photon detuning $\\delta$ the 2Ph and 4Ph signals are correlated, but the correlations become harder to measure as the 4Ph signal becomes very weak. \n\n\\begin{figure}\n\\centering{}\\includegraphics[scale=1.15]{fig7}\\protect\\caption{Four-photon signal pulse energy as a function of the detuning $\\Delta_{h}$\n(measured from the $F=2\\rightarrow F'=2$ resonance) of field $c$ for\noptimal settings of the other lasers ($\\Delta_{g}\/2\\pi=900$~MHz and\n$\\delta\/2\\pi=-50$ MHz). 
The absorption line corresponds to the $F=2$\nhyperfine component of the ground state; the right side of the\nplot is the red-detuned side. The drop at around $\\Delta_{h}\/2\\pi=-900$\nMHz corresponds exactly to the 6.8 GHz detuning between the\ntwo 780-nm lasers, i.e., $\\Delta_{g}+\\Delta_{h}=0$.}\n\\label{fig:deltah}\n\\end{figure}\n\n\nContrary to the above, changing the detuning $\\Delta_{h}$ of the\n780-nm driving field $c$ has only a mild effect on the 4Ph signal.\nFig.~\\ref{fig:deltah} presents the 4Ph signal pulse energy as a function\nof $\\Delta_{h}$ while the other lasers were tuned for maximal signal\n($\\Delta_{g}\/2\\pi=900$ MHz and $\\delta\/2\\pi=-50$ MHz). Since the\n4Ph field frequency adapts to match energy conservation for the\n$|h\\rangle\\rightarrow|e_{3}\\rangle\\rightarrow|e_{2}\\rangle$ two-photon\ntransition, the frequency of the driving field $c$ is not critical.\nThe laser must only be off-resonant, so that the driving field is not absorbed\nand does not disturb the ground-state coherence. We also observe\na marked narrow drop in the 4Ph process efficiency when the detuning between\nfields $a$ and $c$ is exactly the ground-state hyperfine splitting, or equivalently $\\Delta_{g}+\\Delta_{h}=0$,\nwhich is due to the $\\Lambda$-configuration two-photon resonance yielding strong interaction with the ground-state coherence. \n\n\\section{Conclusions}\\label{sec:concl}\nThe experiment we performed is a proof-of-principle\nof a light-atom interface that enables coupling of a long-lived\nground-state atomic coherence to light resonant with a transition between\nexcited states. The non-linear process we discussed is a novel type\nof process with typical characteristics of both Raman scattering and\n4WM. 
\nThe observation of an inverted-Y-type nonlinear four-photon process involving ground-state coherence, performed in a very different regime in cold atoms, has so far been reported only in \\cite{Ding2012}.\nHere, we generated ground-state atomic coherence via the well-known\ntwo-photon process. We demonstrated the ability to couple the\nvery same atomic coherence to an optical field resonant with a transition\nbetween two excited states via a four-photon process. This was verified\nby measuring high correlations between the 2Ph and 4Ph fields, as well as the frequency and\npolarization characteristics of the four-photon process. \n\nWe studied the behavior of the pulse shapes as a function\nof the driving laser detunings. Among many results, we found that the maximum\nsignal is achieved when the lasers are detuned from the two-photon resonance\nby approximately $\\delta\/2\\pi=-50$ MHz, which is a trade-off\nbetween TPA spoiling the generation of atomic coherence and the efficiency\nof the four-photon process. These results demonstrate that we are able\nto control the 4WM process with the ground-state coherence, which constitutes\na novel type of Raman scattering driven by three non-degenerate fields,\nin analogy to hyper-Raman scattering, where the scattering process is driven\nby two degenerate fields.\n\nHere we used light at 776 nm coupled to the $5\\mathrm{P}_{3\/2}\\rightarrow5\\mathrm{D}_{5\/2}$\ntransition. Using different states, such as $4\\mathrm{D}_{3\/2}$ as\nthe highest excited state, and $5\\mathrm{P}_{1\/2}$ and $5\\mathrm{P}_{3\/2}$\nas the two intermediate states, would enable coupling to telecom light (at 1475.7 nm or 1529.3 nm). Such a process could be used as a building\nblock for a telecom quantum repeater or memory \\cite{Michelberger2015,Chaneliere2006,Zhang2016,Radnaev2010a}. 
By applying an\nexternal weak quantum field as the 4Ph field, the system may serve\nas an atomic quantum memory, based on a highly non-linear process\nas in \\cite{DeOliveira2015}, but still linear in the input field\namplitude. It may also solve a variety of filtering problems \\cite{Michelberger2015,Dabrowski2014},\nsince many similar configurations exist (e.g. with $5\\mathrm{P}_{3\/2}$,\n$5\\mathrm{D}_{3\/2}$ and $5\\mathrm{P}_{1\/2}$ as intermediate states) in which all driving lasers operate at different wavelengths\nthan the signal. Even though the 2Ph field was measured to be much stronger than the 4Ph field, we note that using a cold atomic ensemble would\noffer selectivity in the intermediate states of the process and small\ndetuning. Thanks to selection rules, exclusively the 4Ph process could then be driven. This is a prerequisite for generating pairs of photons and collective atomic excitations in the 4Ph process only. Additionally, the 4WM character of the\nprocess enables engineering of the phase-matching condition, namely changing the\nangles between the incident driving beams to address different spin-wave excitations \\cite{Chrapkiewicz2012},\nto explore the spatially-multimode capabilities of the system. In future\nstudies of the process we propose to address patterns unachievable\nby a typical Raman light-atom interface based on the $\\Lambda$ level configuration.\n\n\n\\begin{acknowledgments}\nWe acknowledge R. Chrapkiewicz, M. D\\k{a}browski and J. Nunn for insightful discussions and K. Banaszek and T. Stacewicz for their generous support.\nThis work was supported by Polish Ministry of Science and Higher Education ``Diamentowy Grant''\nProject No. DI2013 011943 and National Science Center Grant\nNo. 2011\/03\/D\/ST2\/01941.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\n\n\\section{\\fontsize{10}{15}\\selectfont A. 
Details of STS spectra and DFT calculations}\n\n\\textbf{We provide additional information about the STS spectra, the Mott states of the theoretical models, and the interlayer coupling effect.}\n\n\\renewcommand{\\figurename}{SFIG.}\n\n\\begin{figure} [h!]\n\\centering{ \\includegraphics[width=14.5cm]{Sfig1} }\n\\caption{ \\label{fig1}\nWell-filtered STM image of the NC phase.\nDashed lines separate two different domains and a domain wall.\nThe solid lines represent the same area as Fig. 2(c).\n}\n\\end{figure}\n\n\\begin{figure} [h!]\n\\centering{ \\includegraphics[width=16cm]{Sfig2} }\n\\caption{ \\label{fig1}\nSTS spectra of the NC phase and theoretical DOS of the DW-1 structure. (a) STS spectra, (b) total DOS as a function of the electronic temperature parameter $\\sigma$ (eV). (c) Local DOS of the domain and domain wall regions ($\\sigma$ = 0.15 and 0.2 eV). \n}\n\\end{figure}\n\n\\begin{figure} [ht]\n\\centering{ \\includegraphics[width=16cm]{Sfig3} }\n\\caption{ \\label{fig2}\nLocal density of states at the center Ta atom of the David stars for (a) the DW-1 and (b) the hexagonal domain model.\n}\n\\end{figure}\n\n\\clearpage\n\n\\begin{figure} [h!]\n\\centering{ \\includegraphics[width=16cm]{Sfig4} }\n\\caption{ \\label{fig3}\nOne possible tri-layer stacking order in the C phase. (a) Atomic structure; (b) and (c) are the band structure and total DOS without and with electron-electron correlations, respectively. The interlayer distance of 5.9 {\\AA} is fixed.\n}\n\\end{figure}\n\n\\begin{figure} [h!]\n\\centering{ \\includegraphics[width=16cm]{Sfig5} }\n\\caption{ \\label{fig4}\nTwo different bilayer stacking orders in the DW-1 structure: (a) on-top stacking and (b) shifted stacking. Top and bottom panels are the atomic structure and LDOS, respectively. All atoms are fixed at their atomic positions in the single-layer DW-1 structure. The interlayer distance of 5.9 {\\AA} is fixed.\n}\n\\end{figure}\n\n\\clearpage\n\n\\section{\\fontsize{10}{15}\\selectfont B. 
Details of Honeycomb Network Model}\nIn this supplementary material, we present the details of the honeycomb network model. Here we apply the strategy of Ref. \\cite{Efimkin} to the honeycomb lattice.\n\n\\subsection{Construction of Network Model}\n\nWe theoretically model the system by a regular array of one-dimensional metals living on the links of a honeycomb lattice. The construction of this model is motivated by the following experimental observations presented in the main text. \n\\begin{enumerate}\n\\item \\textbf{Emergent honeycomb lattice}: The domains of the NC-CDW state form a regular honeycomb lattice, and the domain walls are the links of this honeycomb lattice.\n\\item \\textbf{Metallic domain walls}: The domain walls trap a finite local density of states near the Fermi level (Note that this is generically expected for any domain wall of a charge-density wave since the domain wall carries in-gap states whose origin is topological \\cite{su79}). \n\\end{enumerate} \n\nMotivated by these, we consider a regular array of one-dimensional metals living on the links of a honeycomb lattice. Similar network models of one-dimensional metals have been studied to some degree in the context of quantum Hall plateau transitions, known as the ``Chalker-Coddington model'' \\cite{chal88}, and have recently been revived to explain the physics of the twisted bilayer graphene at a small twisting angle \\cite{Efimkin}. We apply this latest theoretical progress to model the network on the honeycomb lattice. \n\nSome details of the model are as follows: \n\\begin{enumerate}\n\\item \\textbf{Degrees of freedom}: On each link $a=x,y,z$ of the honeycomb lattice, we assign the two wavefunctions $\\psi_a$ and $\\psi_{\\bar{a}}$. Here $\\psi_a$ represents the chiral mode propagating from an A-sublattice to its neighboring B-sublattice and $\\psi_{\\bar{a}}$ the mode propagating from a B-sublattice to its neighboring A-sublattice (See Fig. 
\\ref{fig1}).\nMicroscopically they correspond to the low-energy modes near the Fermi momentum of one-dimensional metals propagating along the links. (Here we suppress the spin index for the modes because we are mainly interested in the spectral properties of the network model.)\n\\item \\textbf{Scattering between wavefunctions}: We assume that the modes propagate coherently within each link and scatter only at the nodes of the honeycomb lattice. We further assume that there are three-fold rotation and two-fold mirror symmetries at each node, and that the scattering between the modes respects the crystal symmetries. \n\\end{enumerate}\n\n\\begin{figure}[htbp]\n\\centering\\includegraphics[width=0.5\\textwidth]{Sfig6.pdf}\n\\caption{Pictorial representation of the network model: (A) labeling of the links by $x,y,z$ in real space. Each link hosts two degrees of freedom: one from the neighboring A-sublattice [See (B)] and another from the neighboring B-sublattice [See (C)]. They are written as $\\psi_a$ and $\\psi_{\\bar{a}}$, and the arrows represent the propagating directions of the modes. Here $\\hat{e}_a, a= x,y,z$ is the vector connecting the A-sublattice to the B-sublattice.}\\label{fig1}\n\\end{figure}\n\nHaving these in mind, we have the following scattering problem at the A-sublattice. \n\\begin{align}\n\\left[\n\\begin{array}{c}\n\\psi_x (\\bm{R}) \\\\\n\\psi_y (\\bm{R}) \\\\ \n\\psi_z (\\bm{R}) \n\\end{array} \\right] = e^{-i\\frac{E}{v_F \\hbar} L} \\cdot \\hat{T}_A \\cdot \\left[\n\\begin{array}{c}\n\\psi_{\\bar{x}} (\\bm{R} +\\hat{e}_x) \\\\\n\\psi_{\\bar{y}} (\\bm{R} + \\hat{e}_y) \\\\ \n\\psi_{\\bar{z}} (\\bm{R} +\\hat{e}_z) \n\\end{array} \\right]\n\\end{align}\nHere, the left-hand side $\\psi_a (\\bm{R}), a= x,y,z$ represents the out-going modes from the A-sublattice, which are related by a scattering matrix $\\hat{T}_A$ to the in-coming modes $\\psi_{\\bar{a}} (\\bm{R}), a = x,y,z$ appearing on the right-hand side (See Fig. \\ref{fig1}). 
The additional phase factor $\\sim \\exp (-i\\frac{E}{v_F \\hbar} L)$ is the phase accumulated by the incoming modes while they propagate coherently from the neighboring B-sublattices to the A-sublattice at $\\bm{R}$. Here $v_F$ is the Fermi velocity within the one-dimensional metal, which is expected to be similar to that of the bulk electron, and $L$ is the length of the link. The scattering matrix $\\hat{T}_A$ can be fixed by the three-fold rotation as well as the two-fold mirrors at the node. With the unitarity of the scattering matrix, we find \n\\begin{align}\n\\hat{T}_A = e^{i\\chi_A} \\left[\n\\begin{array}{ccc}\nT_A & t_A & t_A \\\\\nt_A & T_A & t_A \\\\ \nt_A & t_A & T_A \n\\end{array} \\right], ~~ |T_A| \\in \\Big[\\frac{1}{3}, ~1\\Big], ~ t_A = e^{i\\phi_A} \\sqrt{\\frac{1-|T_A|^2}{2}}, \n\\end{align}\nwith $\\phi_A = \\cos^{-1}(\\frac{|t_A|}{2|T_A|})$. Similarly we have the following scattering problem at the B-sublattice.\n\\begin{align}\n\\left[\n\\begin{array}{c}\n\\psi_{\\bar{x}} (\\bm{R}) \\\\\n\\psi_{\\bar{y}} (\\bm{R}) \\\\ \n\\psi_{\\bar{z}} (\\bm{R}) \n\\end{array} \\right] = e^{-i\\frac{E}{v_F \\hbar} L} \\cdot \\hat{T}_B \\cdot \\left[\n\\begin{array}{c}\n\\psi_{x} (\\bm{R} -\\hat{e}_x) \\\\\n\\psi_{y} (\\bm{R} - \\hat{e}_y) \\\\ \n\\psi_{z} (\\bm{R} -\\hat{e}_z) \n\\end{array} \\right],\n\\end{align}\nwhere $\\hat{T}_B$ has the same structure as $\\hat{T}_A$. \n\nNow we can perform the Fourier transformation on $\\bm{R}$ and solve these scattering problems. 
\n\\begin{align}\n\\bm{\\Psi}_{\\bm{q}} = e^{-i\\frac{E_{\\bm{q}}}{v_F \\hbar} L} \\hat{T}_{\\bm{q}} \\cdot \\bm{\\Psi}_{\\bm{q}}, ~~ \\bm{\\Psi}_{\\bm{q}} = \\left[\n\\begin{array}{c}\n\\psi_{x} (\\bm{q}) \\\\\n\\psi_{y} (\\bm{q}) \\\\ \n\\psi_{z} (\\bm{q}) \\\\\n\\psi_{\\bar{x}} (\\bm{q}) \\\\\n\\psi_{\\bar{y}} (\\bm{q}) \\\\ \n\\psi_{\\bar{z}} (\\bm{q}) \n\\end{array} \\right], ~~ \n\\hat{T}_{\\bm{q}} = \\left[\n\\begin{array}{cc}\n0 & \\hat{T}_A \\cdot \\hat{V}_{\\bm{q}} \\\\ \n\\hat{T}_B \\cdot \\hat{V}_{\\bm{q}}^* & 0 \n\\end{array} \\right],\n\\label{Energy}\n\\end{align}\nwhere $\\hat{V}_{\\bm{q}} = $ diag $[\\exp (i\\bm{q}\\cdot \\hat{e}_x), ~\\exp (i\\bm{q}\\cdot \\hat{e}_y), ~\\exp (i\\bm{q}\\cdot \\hat{e}_z)]$. Hence, the energy spectrum can be obtained by diagonalizing $\\hat{T}_{\\bm{q}}$, which is again a unitary matrix. In terms of the eigenvalues $e^{i \\epsilon_{j} (\\bm{q})}, j= 1, 2, \\cdots 6$ of $\\hat{T}_{\\bm{q}}$, we have the energy spectrum: \n\\begin{align}\nE_{j, \\bm{q}}^n = 2\\pi \\frac{v_F \\hbar}{L} n + \\frac{v_F \\hbar}{L} \\epsilon_{j} (\\bm{q}), ~~j = 1,2, \\cdots 6.\n\\end{align}\nHere $n \\in \\mathbb{Z}$ and thus the minibands repeat in energy with a period of $2\\pi \\frac{v_F \\hbar}{L}$. Mathematically this repetition in $n$ originates from the ambiguity of $\\epsilon_j (\\bm{q})$ by $2\\pi$ appearing in the eigenvalues $e^{i \\epsilon_{j} (\\bm{q})}, j= 1, 2, \\cdots 6$. Physically this repetition can be traced back to the excitation energy of the microscopic one-dimensional modes with the same momentum $\\bm{q}$, i.e., for a given $\\bm{q}$, there are multiple different one-dimensional modes with energy $2\\pi \\frac{v_F \\hbar}{L} n, ~ n\\in \\mathbb{Z}$. Thus we expect that the energy spectrum given by $\\frac{v_F \\hbar}{L}\\epsilon_j (\\bm{q})$ will repeat in energy with a period $2\\pi \\frac{v_F \\hbar}{L}$ and entirely fill up the bulk CDW gap. Below we will analyze only one period of the band spectrum. 
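The diagonalization of the $6\times6$ unitary matrix $\hat{T}_{\bm{q}}$ in Eq.~\eqref{Energy} is straightforward to carry out numerically. The Python sketch below is our own illustration, not part of the original analysis; the value $|T_A|=0.6$ is arbitrary, and the phase of the off-diagonal element $t_A$ is fixed here directly by the row-orthogonality (unitarity) condition. It constructs the node scattering matrices and extracts the eigenphases $\epsilon_j(\bm{q})$ at the $\Gamma$ point:

```python
import numpy as np

def node_matrix(abs_T, chi=0.0):
    """3x3 node scattering matrix with C3 and mirror symmetry.

    abs_T in [1/3, 1]; the phase of the off-diagonal element t is
    chosen so that the rows are orthonormal, making the matrix unitary.
    """
    abs_t = np.sqrt((1.0 - abs_T ** 2) / 2.0)
    phi = np.arccos(-abs_t / (2.0 * abs_T))  # row-orthogonality condition
    M = np.full((3, 3), abs_t * np.exp(1j * phi), dtype=complex)
    np.fill_diagonal(M, abs_T)
    return np.exp(1j * chi) * M

# Unit link vectors e_x, e_y, e_z of the honeycomb lattice
e = np.array([[0.0, 1.0],
              [-np.sqrt(3) / 2, -0.5],
              [np.sqrt(3) / 2, -0.5]])

def eigenphases(q, TA, TB):
    """Sorted eigenphases eps_j(q) of the 6x6 unitary network matrix T_q."""
    V = np.diag(np.exp(1j * (e @ q)))
    Tq = np.block([[np.zeros((3, 3)), TA @ V],
                   [TB @ V.conj(), np.zeros((3, 3))]])
    return np.sort(np.angle(np.linalg.eigvals(Tq)))

TA = TB = node_matrix(0.6)                  # C6-symmetric case T_A = T_B
eps_Gamma = eigenphases(np.zeros(2), TA, TB)
print(eps_Gamma)  # degenerate pairs signal the band touchings at Gamma
```

Sweeping $\bm{q}$ along a path such as $M \to K \to \Gamma \to M$ with this routine produces miniband spectra that repeat with the period $2\pi v_F\hbar/L$ discussed above.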
\n\n\\subsection{Band Spectrum}\nWe first consider the case where we have the full symmetry of the honeycomb lattice, i.e., $\\hat{T}_A = \\hat{T}_B$. As is apparent from Fig.~\\ref{fig2}, the spectrum features (i) Dirac cones at the $K$ and $K'$ points, (ii) flat bands, and (iii) quadratic band touchings at the $\\Gamma$ point. Now we discuss the stabilities of these features. \n\\begin{enumerate}\n\\item \\textbf{Dirac Cones at the $K$ and $K'$ points}: The Dirac cones are protected by the sublattice symmetry as in graphene. They are easily removed by breaking the symmetry, i.e., $T_A \\neq T_B$. See the spectrum in Fig.~\\ref{fig2}.\n\\item \\textbf{Quadratic Band Touching at the $\\Gamma$ point}: The quadratic band touchings can be protected by the six-fold rotation symmetry \\cite{sun09}. However, even when this symmetry is broken (while the three-fold rotation and mirror symmetries are kept), the band touchings are robust within our network model. See Fig.~\\ref{fig2}. \n\\item \\textbf{``Flatness'' of Flat bands}: The flatness of the bands is not symmetry-protected. However, within our network model (with the three-fold rotation $C_3$ and mirror symmetries), we find that it is robust. See Fig.~\\ref{fig2}. \n\\end{enumerate}\n\n\n\\begin{figure}[htbp]\n\\centering\\includegraphics[width=0.5\\textwidth]{Sfig7.pdf}\n\\caption{Energy Spectrum of the Honeycomb Network Model. Here we plot the spectrum $\\epsilon_j (\\bm{q})$ of the scattering matrix $\\hat{T}_{\\bm{q}}$ along $M \\to K \\to \\Gamma \\to M$. (A) $C_6$-symmetric spectrum $\\hat{T}_A = \\hat{T}_B$. It is straightforward to note the Dirac fermion at the $K$ point, the quadratic band touching at the $\\Gamma$ point, and also the flat bands. (B) $C_3$-symmetric spectrum $\\hat{T}_A \\neq \\hat{T}_B$. 
Here the Dirac band touching is removed.}\\label{fig2}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\\subsection{Symmetry Analysis of Quadratic Band Touching} \nHere we perform the symmetry analysis of the quadratic band touchings at the $\\Gamma$ point. By explicitly diagonalizing Eq.\\eqref{Energy} at the $\\Gamma$ point (with $\\hat{T}_A = \\hat{T}_B$), we obtain the wavefunctions of the degenerate states. They are labeled as $|\\Psi_a \\rangle, a =1,2$ (with $\\langle \\Psi_a | \\Psi_b \\rangle = \\delta_{ab}$), from which we reconstruct the representations of the three-fold rotation $C_3$, the x-mirror $R_x: x \\to -x$, and the six-fold rotation $C_6$, i.e., $\\{[C_3]_p, [R_x]_p, [C_6]_p\\}$ within these bands. For example, we obtain the three-fold rotation $[C_3]_p$ within the two states by computing\n\\begin{align}\\label{Ex}\n[C_3]_p = \\left[\n\\begin{array}{cc}\n\\langle \\Psi_1 | \\hat{C}_3 |\\Psi_1 \\rangle & \\langle \\Psi_2 | \\hat{C}_3 |\\Psi_1 \\rangle \\\\ \n\\langle \\Psi_1 | \\hat{C}_3 |\\Psi_2 \\rangle & \\langle \\Psi_2 | \\hat{C}_3 |\\Psi_2 \\rangle\n\\end{array} \\right],\n\\end{align} \nin which $\\hat{C}_3$ is the representation of the three-fold rotation in the six-component $\\Psi_{\\bm{q}}$ in Eq.\\eqref{Energy}. \n\n\n\n\nBy computing these explicitly, we find\n\\begin{align}\n[C_3]_p = e^{\\frac{2\\pi i}{3} \\sigma^z}, ~ [R_x]_p = e^{-\\frac{\\pi i}{3} \\sigma^z} \\sigma^x, ~[C_6]_p = e^{-\\frac{2\\pi i}{3} \\sigma^z}, \n\\end{align}\nwhere $\\sigma^a, a= x,y,z$ is the Pauli matrix acting on the space spanned by $\\{|\\Psi_1 \\rangle, |\\Psi_2 \\rangle\\}$.\n\nWith these, we can write down symmetry-allowed Hamiltonian near the $\\Gamma$ point. First, by imposing $[C_3]_p$ and $[R_x]_p$, we find that no perturbation is allowed to split $|\\Psi_1 \\rangle$ and $|\\Psi_2 \\rangle$ at the $\\Gamma$ point. 
\n\\begin{align}\n[C_3]_p^{\\dagger} H_0 [C_3]_p = H_0, ~~[R_x]_p^{\\dagger} H_0 [R_x]_p = H_0, ~ \\to H_0 \\propto \\mu \\sigma^0\n\\end{align}\nHence the degeneracy cannot be removed when $[C_3]_p$ and $[R_x]_p$ are imposed. On the other hand, near the $\\Gamma$ point, we find that a linear band touching is allowed: the conditions \n\\begin{align}\n[C_3]_p^{\\dagger} H (\\bm{k}) [C_3]_p = H (C_3^{-1}[\\bm{k}]), ~~[R_x]_p^{\\dagger} H(k_x, k_y) [R_x]_p = H(-k_x, k_y),\n\\end{align}\nallow $H(\\bm{k}) \\propto k_x \\hat{s}^x + k_y \\hat{s}^y$ (where $(\\hat{s}_x, \\hat{s}_y)$ are the Pauli matrices obtained by properly rotating $\\sigma^x$ and $\\sigma^y$). Hence, the quadratic band touching cannot be protected by $[C_3]_p$ and $[R_x]_p$ alone. Nevertheless, within our network model the touching is found to be robust even though it is not protected by these symmetries. \n\nWe can show that we need the six-fold rotation symmetry $[C_6]_p$ to protect the quadratic band touching, and this is consistent with Ref. \\cite{sun09}. Thus, on imposing $[C_6]_p$, we can fix the Hamiltonian as \n\\begin{align}\nH = \\epsilon_0 (|\\bm{k}|) + \\Big( \\frac{k_x^2 - k_y^2}{2m} \\sigma^x + \\frac{2k_x k_y}{2m} \\sigma^y \\Big).\n\\end{align}\nTo match the band spectrum seen in the model, we have $\\epsilon_0 (|\\bm{k}|) = \\frac{\\bm{k}^2}{2m}$, and thus the lower band is completely flat and the density of states at zero energy is divergent. \n\n\\subsection{Comparison with Twisted Graphene Bilayer}\nHere we extend our discussion in the main text on the similarity between our network system and the theoretical models \\cite{Efimkin, bist11} for the twisted graphene bilayers. In particular, we compare ours with the network model in Ref. \\cite{Efimkin} and a continuum Dirac fermion model in Ref. \\cite{bist11}.\n\nTo start with, we find that our network system is close to the network model that appeared in Ref.~\\cite{Efimkin}. In Ref. 
\\cite{Efimkin}, the twisted graphene bilayer at a small twisting angle has been considered. When the twisting angle is small, there is a periodic array of domain walls separating the locally AA-stacked regions and the locally AB-stacked regions. Ref. \\cite{Efimkin} argued that these domain walls trap localized one-dimensional metallic channels. These one-dimensional modes scatter at the nodes, which form a triangular lattice (in our case, the nodes form a honeycomb lattice). The structure of their model is quite similar to ours, and indeed Ref. \\cite{Efimkin} obtained a spectrum similar to ours: Dirac fermions, nearly flat bands, as well as van Hove singularities. \n\nWe can also make a comparison of our network system with the continuum Dirac theory of the magic-angle twisted bilayer graphene in Ref. \\cite{bist11}. In this approach, the Dirac fermions coming from the top and bottom layers interfere with each other, and as a result, the bands become flat. Theoretically, this flat spectrum is speculated to be the source of the surprising correlation-driven phenomena seen in the experiments \\cite{cao18, lian18, po18}. One may note that our network model appears to be different from the continuum Dirac theory of Ref. \\cite{bist11}. Despite the difference in the theoretical treatments, we emphasize that our network model and the result of Ref. \\cite{bist11} share strikingly similar features in the spectrum: Dirac fermions, flat bands and associated singularities in the density of states, which are believed to play an essential role in the correlation physics.\n\nBoth the twisted bilayer graphene and our honeycomb network host weak disorder \\cite{brih12, raza16}. For example, there are some imperfect hexagons in our network and imperfect triangles in the twisted bilayer graphene. Naively one expects that such weak disorder would immediately localize the flat bands and completely destroy the associated many-body physics. 
However, the previous study \\cite{chal10} surprisingly found that the flat bands do not get immediately localized but become critical. This implies that the flat bands are more robust against disorder than we naively expect. Though a more thorough investigation is desirable, we expect from Ref. \\cite{chal10} that the flat bands retain a relatively flat spectrum even with weak disorder and hence are expected to remain very susceptible to many-body physics.\n\nIn summary, we have shown that the two systems, the twisted graphene bilayer and our network system, share surprising similarities including the flat bands and a large density of states, which are the key to the exotic correlation-driven phenomena. \n\n\\subsection{Interlayer coupling}\n\nIn this subsection, we consider the effect of interlayer coupling on the electronic structures in the conducting network, and we will argue that, in general, the couplings between the layers will hardly affect the emergent electronic structures.\n\nFor the concreteness of our theoretical discussion, we first assume that the charge-density wave domains in the nearly commensurate phase remain insulating even after the inclusion of interlayer couplings [see SFig.5]. With this in hand, all the lowest-energy electronic states are in the domain walls in the conducting networks, and the interlayer couplings will introduce coupling between these metallic modes inside the conducting networks living in different layers.\n\nAmong various possible couplings, the most important one, which can largely modify the band structure, is the electron hopping process between the layers. Note that this is proportional to the wavefunction overlap between the states of domain walls in different layers, and the states are highly localized within each domain wall. Hence, the effect of the coupling will be strongly suppressed unless the networks almost exactly overlap when viewed along the c-axis. 
From the available literature \\cite{cho16}, we note that the networks in different layers are not correlated to each other and thus we expect that the emergent band structure of the low-energy theory will not be affected much by the interlayer coupling. \n \n\n\\newcommand{\\AP}[3]{Adv.\\ Phys.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\CMS}[3]{Comput.\\ Mater.\\ Sci. \\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\PR}[3]{Phys.\\ Rev.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\PRL}[3]{Phys.\\ Rev.\\ Lett.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\PRB}[3]{Phys.\\ Rev.\\ B\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\NA}[3]{Nature\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\NAP}[3]{Nat.\\ Phys.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\NAM}[3]{Nat.\\ Mater.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\NAC}[3]{Nat.\\ Commun.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\NAN}[3]{Nat.\\ Nanotechnol.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\NARM}[3]{Nat.\\ Rev.\\ Mater.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\NT}[3]{Nanotechnology {\\bf #1}, #2 (#3)}\n\\newcommand{\\JP}[3]{J.\\ Phys.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\JAP}[3]{J.\\ Appl.\\ Phys.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\JPSJ}[3]{J.\\ Phys.\\ Soc.\\ Jpn.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\JPC}[3]{J.\\ Phys.\\ C: \\ Solid State Phys.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\PNAS}[3]{Proc.\\ Natl.\\ Acad.\\ Sci. {\\bf #1}, #2 (#3)}\n\\newcommand{\\PRSL}[3]{Proc.\\ R.\\ Soc.\\ Lond. A {\\bf #1}, #2 (#3)}\n\\newcommand{\\PBC}[3]{Physica\\ B+C\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\PAC}[3]{Pure Appl.\\ Chem. \\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\SCI}[3]{Science\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\SCA}[3]{Sci.\\ Adv.\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\RPP}[3]{Rep.\\ Prog.\\ Phys. 
\\ {\\bf #1}, #2 (#3)}\n\\newcommand{\\SCR}[3]{Sci.\\ Rep.\\ {\\bf #1}, #2 (#3)}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzezpm b/data_all_eng_slimpj/shuffled/split2/finalzzezpm new file mode 100644 index 0000000000000000000000000000000000000000..c6c408b45613cd68ce35000aaefcb5cf6574f4c3 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzezpm @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\n The performance of direct-sequence code-division multiple-access (DS-CDMA)\ntransmission over mobile fading channels depends strongly on the\nreliability of the channel parameter estimates and the quality of synchronization for each user: state-of-the-art detection algorithms that\nexploit multiple-access interference and inter-symbol interference\nrequire very powerful estimation algorithms.\n\n\nA substantial number of relevant references on delay estimation have appeared in the\nliterature. Namely, a new perspective on maximum likelihood (ML) time-delay estimation is\npresented in \\cite{stuller97}. Code timing estimation in a near-far environment for DS-CDMA systems was introduced in \\cite{smith}.\n Joint symbol detection, time-delay and channel parameter estimation\nproblems for asynchronous DS-CDMA systems have been investigated in several previous works (e.g., \\cite{pelin,miller}). Most of these works either process one signal at a time and treat the other signals as interference, or employ a training sequence to obtain a coarse\nestimate of the channel parameters which is subsequently used to\ndetect data. It is clear that these approaches suffer from the disadvantages of\nhigher overhead and additional noise enhancement.\n\nSome other proposed approaches for joint blind multiuser detection and\nchannel estimation for DS-CDMA systems are subspace-based and\nlinear prediction-based methods. 
Subspace-based methods usually\nrequire a singular value decomposition or an eigenvalue decomposition,\nwhich is computationally costly and does not tolerate mismatched\nchannel parameters. Another drawback of this approach is that\naccurate rank determination may be difficult in a noisy\nenvironment \\cite{bensley,strom}. Moreover, it is not clear how these methods can be extended to include the estimation of the transmission delays jointly with the channel parameters.\n\nThe expectation maximization (EM) and space-alternating generalized EM (SAGE) algorithms are\nideally suited to this kind of problem as they are guaranteed to\nconverge in likelihood. Earlier work on delay estimation based on the EM algorithm has appeared in \\cite{Georg-90,Georg-91}.\nEfficient iterative receiver structures are presented in\n\\cite{kocian2003,kocian2007}, performing joint multiuser detection and\nchannel estimation for synchronous as well as asynchronous coded\nDS-CDMA systems operating over quasi-static flat Rayleigh fading\nchannels, under the assumption that the transmission delays are\nknown. The Bayesian EM\/SAGE algorithm can be used for joint\n\\emph{soft}-data estimation and channel estimation but the\ncomputational complexity of the resulting receiver architecture is non-polynomial in the number of\nusers \\cite{Gallo-04}. To overcome this drawback, Hu \\emph{et al.}\napplied the Variational Bayesian EM\/SAGE algorithms to joint\nestimation of the distributions for channel coefficients, noise\nvariance, and information symbols for synchronous DS-CDMA in\n\\cite{Hu-08}. Our work may be considered to be a twofold extension of the\nwork by Gallo \\emph{et al.} in \\cite{Gallo-04}:\nFirst, the proposed receiver performs joint channel coefficient and\ntransmission delay estimation within the SAGE framework. 
Secondly, the\nincorporation of the Monte-Carlo method into the SAGE framework makes it\npossible to compute \\emph{soft}-data estimates for all users at polynomial\ncomputational complexity, as well. Here, an\nefficient Markov chain Monte Carlo (MCMC) technique \\cite{gelfand} called\n{\\em Gibbs sampling} is used to compute the {\\em a posteriori}\nprobabilities (APP) of the data symbols \\cite{doucet}. The APPs can be computed\nexactly with the MCMC algorithm, which is significantly less complex than\na standard hidden Markov model approach. \nThe resulting receiver architecture\nworks in principle fully blind and is guaranteed to converge. For\nuncoded transmission, a few\npilot bits must be inserted, though, to resolve the phase ambiguity problem.\n\n\nThe theoretical framework for joint transmission delay and\nchannel estimation, as well as the data detection algorithms, can easily\nbe extended to coded transmission. \n\n\n\\section{System Description}\n\\label{sec:system}\n\nWe consider an asynchronous single-rate DS-CDMA system with $K$\nactive users employing binary phase-shift keying (BPSK) modulation and\nsharing the same propagation channel. The signal transmitted by each\nuser experiences flat Rayleigh fading, which is assumed to be\nconstant over the observation frame of $L$ data symbols. Each user employs\na random signature waveform for transmitting symbols of duration\n$T_{b}$, such that each symbol consists of $N_{c}$ chips with duration\n$T_{c}=T_{b}\/N_{c}$ where $N_{c}$ is an integer. The received signal\nis the noisy sum of all users' contributions, delayed by the propagation delays $\\tau_{k}\\in [0,T_{b}\/2)$, where the subscript $k$\ndenotes the label of the $k$th user. 
After down-converting the\nreceived signal to baseband and passing it through an\nintegrate-and-dump filter with integration time $T_{s}=T_{c}\/Q$, $Q\n\\in \\mathbb{Z}^+$, $Q N_{c}(L+1)-1$ samples over an observation frame of $L$ symbols are\nstacked into a signal column vector ${\\boldsymbol{r}} \\in \\mathbb{C}^{Q N_{c}\n(L+1)-1}$. Note that sampling is chip-synchronous without\nknowledge of the individual transmission delays. The received vector can\ntherefore be expressed as\n\\begin{equation}\n\\boldsymbol{r} = \n\\boldsymbol{S}(\\boldsymbol{\\tau})\\boldsymbol{A}\\boldsymbol{d}+\\boldsymbol{w}.\n\\label{sys:received}\n\\end{equation}\n\nIn this expression the matrix\n$\\boldsymbol{S} (\\boldsymbol{\\tau})\n\\in \\mathbb{C}^{Q N_{c} (L+1)-1 \\times LK}$ contains the signature\nsequences of all the users\n\\[\n\\boldsymbol{S}( \\boldsymbol \\tau)= \\left[ {\\begin{array}{*{20}c}\n \\boldsymbol{S}_{1} (\\tau_1), &\n \\boldsymbol{S}_{2}(\\tau_2), & \\cdots, &\n \\boldsymbol{S}_{K} (\\tau_K) \\\\\n\\end{array}} \\right]\n\\]\nwhere $\\boldsymbol{S}_k (\\tau_k) \\in \\mathbb{C}^{Q N_{c} (L+1)-1\n\\times L}$ has the form\\\\\n\\[\n\\boldsymbol{S}_k( \\tau_k)=\n\\left[ {\\begin{array}{*{20}c}\n | & | & & | \\\\\n \\boldsymbol{S}_{k} (\\tau_k,0) &\n \\boldsymbol{S}_{k}(\\tau_k,1) & \\cdots & \\boldsymbol{S}_{k} (\\tau_k,L-1) \\\\\n | & | & & | \\\\\n\\end{array}} \\right]\n\\]\nand the spreading code vector $\\boldsymbol{S}_k(\\tau_k,\\ell)\n\\in \\mathbb{C}^{Q N_{c}(L+1)-1 \\times 1}$ is given by\n\\[\n\\boldsymbol{S}_k (\\tau_k,\\ell)=\n\\left[ {\\begin{array}{c}\n \\boldsymbol{0}_{Q N_{c}\\ell+\\tau_{k} \\times 1} \\\\\n | \\\\\n \\boldsymbol{s}_{k}(\\tau_k,\\ell) \\\\\n | \\\\\n \\boldsymbol{0} \\\\\n\\end{array}} \\right].\n\\]\n\n The vector $\\boldsymbol{s}_k (\\tau_k,\\ell)$ contains the\n spreading code of user~$k$ having support\n $[\\ell N_{c}T_{c}, (\\ell+1) N_{c}T_{c}]$ with energy\n $\\boldsymbol{s}_k^{\\dag}(\\tau_k,\\ell)\\boldsymbol{s}_k\n (\\tau_k,\\ell)=1$. 
\n Finally, $\\boldsymbol{0}_{M \\times 1}$ denotes the $M \\times 1$-dim. all-zero\n column vector.\n\n\nThe block diagonal channel matrix\n$\\boldsymbol{A}\\in \\mathbb{C}^{LK\\times\n LK}$ in (\\ref{sys:received}) is given by $\\boldsymbol{A} =\n\\mbox{diag}\\{\\boldsymbol{A}_1,\\cdots, \\boldsymbol{A}_K\\}$. The channel\nmatrix for user~$k$,\n$\\boldsymbol{A}_k \\in \\mathbb{C}^{L\\times L}$,\n is given by\n$\\boldsymbol{A}_k = {\\bf I}_{L} \\otimes a_{k}$ where $\\bf{I}_{L}$ is the $L$-dim. identity matrix, and the symbol $\\otimes$\ndenotes the Kronecker product. The $k$th user's channel coefficient\n$a_{k}$ is a circularly symmetric complex\nGaussian random variable with zero mean and variance $\\sigma_k^2$. The\n$k$th user's transmission delay is assumed to be uniformly distributed.\n\nThe symbol vector $\\boldsymbol{d} \\in \\mathbb{C}^{LK}$ takes the form $\\boldsymbol{d} =\\mbox{col}\\{\\boldsymbol{d}_1,\\cdots,\\boldsymbol{d}_K\\}$\nwhere the vector $\\boldsymbol{d}_k \\in\\mathbb{C}^{L}$ contains the\n$k$th user's symbols, i.e., $\\boldsymbol{d}_k =\n\\mbox{col}\\{d_{k}(0),\\cdots,d_{k}(L-1) \\}$\nwith $d_{k}(\\ell) \\in\n\\{-1,+1\\}$ denoting the symbol transmitted by the $k$th user during\nthe $\\ell$th signalling interval. 
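As a toy numerical sketch of the signal model of (\\ref{sys:received}) (hypothetical miniature dimensions; the delays are rounded to the sample grid, consistent with the chip-synchronous sampling described above), the stacked observation can be assembled as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; far smaller than the paper's simulation setup).
K, L, Nc, Q = 2, 4, 4, 2            # users, symbols/frame, chips/symbol, samples/chip
n = Q * Nc * (L + 1) - 1            # dimension of the stacked observation vector r

# Unit-energy signature vectors at the sample rate, and chip-synchronous
# integer-sample delays tau_k in [0, T_b/2).
codes = rng.choice([-1.0, 1.0], size=(K, Q * Nc)) / np.sqrt(Q * Nc)
delays = rng.integers(0, Q * Nc // 2, size=K)

# S(tau): column (k, l) carries user k's signature shifted into symbol slot l.
S = np.zeros((n, K * L))
for k in range(K):
    for l in range(L):
        start = Q * Nc * l + delays[k]
        S[start:start + Q * Nc, k * L + l] = codes[k]

# Flat Rayleigh fading (one coefficient per user, constant over the frame),
# i.i.d. BPSK symbols, and circularly symmetric white Gaussian noise.
a = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
A = np.kron(np.diag(a), np.eye(L))  # A = diag{A_1, ..., A_K}, A_k = a_k I_L
d = rng.choice([-1.0, 1.0], size=K * L)
N0 = 0.1
w = np.sqrt(N0 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

r = S @ A @ d + w                   # the received vector of (\ref{sys:received})
```

Each column of the toy $\\boldsymbol{S}(\\boldsymbol\\tau)$ has unit energy, mirroring the normalization of the spreading code vectors above.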
Finally, the column vector\n$\\boldsymbol{w} \\in \\mathbb{C}^{QN_{c}(L+1)-1}$ contains complex,\ncircularly symmetric white Gaussian noise having covariance matrix\n$N_{0}{\\bf I}$.\nWe assume that the vectors\n$\\boldsymbol{a}\\triangleq \\mbox{col}\\{a_{1},a_{2},\\cdots,a_{K}\\}$,\n$\\boldsymbol{\\tau}\\triangleq\n\\mbox{col}\\{\\tau_{1},\\tau_{2},\\cdots,\\tau_{K}\\}$, ${\\boldsymbol{d}}$ and\n$\\boldsymbol{w}$ and their components are independent.\nThe receiver does not know the data sequences, the (complex) channel coefficients, or the transmission delays.\n\n\n\n\\section{Monte-Carlo SAGE Joint Parameter Estimation}\n\\label{sec:MCSAGE_appl}\n\n\\subsection{The SAGE Algorithm}\n\nIn previous applications, the SAGE algorithm \\cite{fessler} has been extensively used\nto iteratively approximate the ML\/MAP estimate of a parameter vector $\\boldsymbol{\\theta}$\nwith respect to the observed data ${\\boldsymbol{r}}$.\n To obtain a receiver architecture that iterates between soft-data and\n channel estimation, one might choose the parameter vector as\n ${\\boldsymbol{\\theta}}=\\left\\{\\mathfrak{R}(a_{1}),\\cdots,\n \\mathfrak{R}(a_{K}),\\mathfrak{I}(a_{1}),\\cdots,\\mathfrak{I}(a_{K}),\\tau_{1},\\cdots,\\tau_{K} \\right\\}$. The symbols $\\mathfrak{R}(\\cdot)$ and $\\mathfrak{I}(\\cdot)$ denote the real and imaginary parts of the complex argument, respectively. At iteration $i$,\nonly the parameter vector of user $k$, ${\\boldsymbol{\\theta}}_{k}$, is updated, while the\nparameter vectors of the other users ${{\\boldsymbol{\\theta}}}_{\\bar k}={\\boldsymbol{\\theta}}\n\\backslash {\\boldsymbol{\\theta}}_{k}$ are kept fixed. In the SAGE framework, ${\\boldsymbol{r}}$ is\nreferred to as the \\emph{incomplete} data. The so-called {\\em admissible hidden} data\n${\\boldsymbol{\\chi}}_k$ with respect to ${\\boldsymbol{\\theta}}$ is selected to be\n${\\boldsymbol{\\chi}}_k=\\{{\\boldsymbol{r}},{\\boldsymbol{d}}\\}$. 
Notice that ${\\boldsymbol{\\chi}}_k$ can only be partially\nobserved. Applying the SAGE algorithm to MAP parameter estimation yields the expectation (E)-step\n\\begin{equation}\n\\label{alg:sage_estep}\nQ_k({\\boldsymbol{\\theta}}_k,{\\boldsymbol{\\theta}}^{[i]})=E_{{\\boldsymbol{d}}} \\left\\{ \\log\n p\\left({\\boldsymbol{r}},{\\boldsymbol{d}},{\\boldsymbol{a}}_k,\\boldsymbol \\tau_k,{{\\boldsymbol{a}}}_{\\bar k}^{[i]},{\\boldsymbol \\tau}_{\\bar k}^{[i]} \\right) \\mid {\\boldsymbol{r}},{\\boldsymbol{a}}^{[i]},\\boldsymbol \\tau^{[i]} \\right\\}.\n\\end{equation}\nThe maximization (M)-step computes the value of ${\\boldsymbol{\\theta}}_k$ that maximizes\n(\\ref{alg:sage_estep}) to obtain the update\n${\\boldsymbol{\\theta}}_k^{[i+1]}$. The objective function is non-decreasing at each\niteration.\n\n\\subsection{The Monte-Carlo SAGE algorithm}\n\nWe will see that direct computation of the expectation in\n(\\ref{alg:sage_estep}) requires a non-polynomial number of operations in the\nnumber of users $K$ and thus becomes prohibitive with increasing $K$. To make the computation of the expectation in\n(\\ref{alg:sage_estep}) feasible though, we propose to use the technique of Markov\nchain Monte Carlo (MCMC) to obtain the Monte-Carlo SAGE\nalgorithm. MCMC is a statistical technique that allows generation of ergodic\npseudo-random samples ${\\boldsymbol{d}}^{[i,1]},\\ldots,{\\boldsymbol{d}}^{[i,N_t]}$ from the current\napproximation to the conditional pdf $p({\\boldsymbol{d}} | {\\boldsymbol{r}}, {\\boldsymbol{\\theta}}^{[i]})$. These samples are used to approximate the expectation in\n(\\ref{alg:sage_estep}) by the sample mean. The Gibbs sampler and the\nMetropolis-Hastings algorithm are widely used MCMC algorithms. Here we\ndescribe only the Gibbs sampler \\cite{Borunjeny,doucet}, as it is the\nmost commonly used in applications. 
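As a toy illustration of this sample-mean approximation (a hypothetical two-variable binary target distribution, not the receiver's actual conditional pdf), a componentwise Gibbs sweep and the resulting Monte-Carlo estimate of an expectation can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical unnormalized target p(d1, d2) on {-1, +1}^2 (positively correlated).
w = {(-1, -1): 4.0, (-1, +1): 1.0, (+1, -1): 1.0, (+1, +1): 4.0}

def gibbs_sweep(d, rng):
    """Resample each component in turn from its full conditional."""
    d = list(d)
    for k in (0, 1):
        pair = lambda v: (v, d[1]) if k == 0 else (d[0], v)
        p_plus = w[pair(+1)] / (w[pair(+1)] + w[pair(-1)])
        d[k] = +1 if rng.random() < p_plus else -1
    return tuple(d)

# Run the chain and approximate E[d1 * d2] by the sample mean.
d, total = (+1, +1), 0.0
N_t = 20000
for t in range(N_t):
    d = gibbs_sweep(d, rng)
    total += d[0] * d[1]
estimate = total / N_t

# Exact expectation by brute-force enumeration of the four configurations.
Z = sum(w.values())
exact = sum(d1 * d2 * wt for (d1, d2), wt in w.items()) / Z   # = 0.6
```

Averaging functions of the samples then plays the role of the expectation in (\\ref{alg:sage_estep}); for realistic $K$ and $L$ the brute-force enumeration used here for the reference value is exactly what becomes infeasible.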
Having initialized\n${\\boldsymbol{d}}^{[0,0]}$ randomly, the Gibbs sampler iterates the following loop\nat SAGE iteration $i$:\n\\begin{itemize}\n\\item Draw sample ${\\boldsymbol{d}}_1^{[i,t]}$ from\n $p({\\boldsymbol{d}}_1|{\\boldsymbol{d}}_2^{[i,t-1]},\\ldots,{\\boldsymbol{d}}_K^{[i,t-1]}, {\\boldsymbol{r}}, {\\boldsymbol{\\theta}}^{[i]})$\\\\\n\\item Draw sample ${\\boldsymbol{d}}_2^{[i,t]}$ from\n $p({\\boldsymbol{d}}_2|{\\boldsymbol{d}}_1^{[i,t]},{\\boldsymbol{d}}_3^{[i,t-1]}\\ldots,{\\boldsymbol{d}}_K^{[i,t-1]}, {\\boldsymbol{r}},\n {\\boldsymbol{\\theta}}^{[i]})$\\\\\n\\vdots\n\\item Draw sample ${\\boldsymbol{d}}_K^{[i,t]}$ from\n $p({\\boldsymbol{d}}_K|{\\boldsymbol{d}}_1^{[i,t]},\\ldots,{\\boldsymbol{d}}_{K-1}^{[i,t]}, {\\boldsymbol{r}}, {\\boldsymbol{\\theta}}^{[i]})$\\\\\n\\end{itemize}\nFollowing this approach, we have\n\\begin{equation*}\n\\label{alg:mcmc_sage_estep}\nQ_k({\\boldsymbol{\\theta}}_k,{\\boldsymbol{\\theta}}^{[i]})=\\frac{1}{N_t} \\sum_{t=1}^{N_t} \\left\\{ \\log\n p\\left({\\boldsymbol{r}},{\\boldsymbol{d}}^{[i,t]},{\\boldsymbol{a}}_k,\\boldsymbol \\tau_k,{{\\boldsymbol{a}}}_{\\bar\n k}^{[i]},{\\boldsymbol \\tau}_{\\bar k}^{[i]} \\right) \\right\\}.\n\\end{equation*}\n\nNotice that with increasing $N_t$, the Monte-Carlo SAGE algorithm converges to the MAP\nsolution ${\\boldsymbol{\\theta}} = {\\boldsymbol{\\theta}}^\\star$ up to random fluctuations around ${\\boldsymbol{\\theta}}^\\star$ \\cite{Tanner-90}.\n\n\n\\subsection{Receiver design}\n\n\nThis subsection is devoted to the derivation of a receiver\narchitecture for joint estimation of parameters within the Monte-Carlo SAGE framework. 
Discarding terms\nindependent of ${\\boldsymbol{a}}$ and $\\boldsymbol \\tau$, we obtain\n\\begin{equation}\n\\log p({\\boldsymbol{r}},{\\boldsymbol{d}},{\\boldsymbol{a}},\\boldsymbol \\tau) =\n\\log p({\\boldsymbol{r}}|{\\boldsymbol{d}},{\\boldsymbol{a}},\\boldsymbol \\tau) +\n\\log p({\\boldsymbol{d}}) +\n\\log p({\\boldsymbol{a}}) + \\log p(\\boldsymbol \\tau).\n\\label{appl:likefkt}\n\\end{equation}\nFrom (\\ref{sys:received}), it follows that\n\\begin{equation}\n\\log p({\\boldsymbol{r}}|{\\boldsymbol{a}},\\boldsymbol \\tau,{\\boldsymbol{d}}) \\varpropto\n\\Re\\{{\\boldsymbol{r}}^{\\dag}\\boldsymbol{S}\\boldsymbol{A}{\\boldsymbol{d}}\\} -\\frac{1}{2}\n{\\boldsymbol{\\mu}}({\\boldsymbol{\\theta}},{\\boldsymbol{d}})^{\\dag}{\\boldsymbol{\\mu}}({\\boldsymbol{\\theta}},{\\boldsymbol{d}}),\n\\label{appl:likelihood}\n\\end{equation}\nwhere ${\\boldsymbol{\\mu}}({\\boldsymbol{\\theta}},{\\boldsymbol{d}}) \\triangleq\n\\sum_{k=1}^{K}\\sum_{\\ell=0}^{L-1}{\\boldsymbol{S}}_{k}(\\tau_{k},\\ell)a_{k}d_{k}(\\ell)$\nand $(\\cdot)^{\\dag}$ denotes the conjugate transpose of the argument.\n\\subsubsection{The E-step}\nSubstituting (\\ref{appl:likelihood}) into (\\ref{appl:likefkt}) yields,\nafter some algebraic manipulations, the E-step of the Monte-Carlo SAGE algorithm:\n\\begin{eqnarray}\n\\lefteqn{Q_{k}({\\boldsymbol{\\theta}}_{k}|{\\boldsymbol{\\theta}}^{[i]})=}\\nonumber\\\\\n&& \\hspace*{-5ex}\n \\frac{2}{N_{0}} \\sum_{\\ell=0}^{L-1} \\Re\\left\\{a^{*}_{k}\n\\Psi(\\ell,\\tau_{k})\\right\\}-\\frac{L}{N_{0}}|a_{k}|^{2} -\n\\frac{1}{\\sigma_k^2}|a_k|^2 \\label{appl:Q-fct}\n\\end{eqnarray}\nwith the branch definition\n\\[\n\\Psi(\\ell,\\tau_{k}) \\triangleq \\boldsymbol{S}^{\\dag}_{k}(\\tau_{k},\\ell)\\left(\\exd{k}{\\ell}{\\boldsymbol{r}} -\n \\mathcal{I}_k^{[i]}(\\ell)\\right)\n\\]\nand the interference term \n\\begin{eqnarray*}\n \\mathcal{I}_k^{[i]}(\\ell) &\\triangleq& \\sum_{k' \\neq k}a_{k'}^{[i]}\\bigg(\\boldsymbol{S}_{k'}(\\tau^{[i]}_{k'},\\ell+1)\\corrd{k}{\\ell}{k'}{\\ell+1}\\\\\n&&\\hspace*{1ex} +\n\\boldsymbol{S}_{k'}(\\tau^{[i]}_{k'},\\ell)\\corrd{k}{\\ell}{k'}{\\ell}\\\\\n&&\\hspace*{1ex}+\\boldsymbol{S}_{k'}\n(\\tau^{[i]}_{k'},\\ell-1)\n\\corrd{k}{\\ell}{k'}{\\ell-1}\\bigg).\n\\end{eqnarray*}\nMoreover,\n\\begin{equation}\n\\label{appl:softdata}\n \\exd{k}{\\ell}\\triangleq \\sum_{m\\in\n \\mathcal{S}}m P(d_{k}(\\ell)=m| {\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})\n\\end{equation}\nand\n\\begin{eqnarray}\n\\label{appl:softcorr}\n\\lefteqn{\\corrd{k}{\\ell}{k'}{\\ell'} \\triangleq\n\\sum_{m\\in \\mathcal{S}}\\sum_{n\\in \\mathcal{S}}~m~n}\\nonumber \\\\&&\n\\!\\!\\times P(d_{k}(\\ell)=m,d_{k'}(\\ell')=n \\mid {\\boldsymbol{r}},\n \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}), \\mbox{ for } k'\\neq k,~\n\\end{eqnarray}\n where $\\mathcal{S}\\triangleq\\{-1,+1\\}$ is the signal constellation\n and the lag is within range $\\ell' \\in \\{\\ell-1,\\ell,\\ell+1\\}$.\n\n\n\\subsubsection{The M-step}\n\nThe M-step of the SAGE algorithm is realized by first maximizing (\\ref{appl:Q-fct}) with respect to the transmission delay $\\tau_k$,\n\\begin{equation}\n\\label{appl:tau_update}\n\\tau^{[i+1]}_{k}=\\arg \\max_{\\tau_k} \\left|\\sum_{\\ell=0}^{L-1}\\Psi(\\ell,\\tau_k)\\right|.\n\\end{equation}\nThen, inserting (\\ref{appl:tau_update}) into\n(\\ref{appl:Q-fct}), taking the derivative with respect to $a_k$,\nsetting the result equal to zero, and solving yields\n\\[\na_k^{[i+1]} = \\frac{1}{L+N_0\/\\sigma_k^2}\\sum_{\\ell=0}^{L-1}\\Psi(\\ell,\\tau^{[i+1]}_{k}).\n\\]\n\n\\section{Monte-Carlo Implementation for the Computation of A Posteriori Probabilities}\n\n\\subsection{Computation of the soft-data symbols in (\\ref{appl:softdata})}\n\n\\label{sec:softdata}\n\nLet $\\overline{{\\boldsymbol{d}}_{k}(\\ell)}\\triangleq {\\boldsymbol{d}} \\backslash \\{d_{k}(\\ell)\\}$.\nFor notational simplicity we use 
$\\bar{{\\boldsymbol{d}}}\\triangleq\\overline{{\\boldsymbol{d}}_{k}(\\ell)}$ throughout this section. Then, the {\\em a\nposteriori} probability of $d_{k}(\\ell)$ in (\\ref{appl:softdata}) can be evaluated as\n\n\\setlength\\arraycolsep{1pt}\n\\begin{eqnarray}\n\\lefteqn{P(d_{k}(\\ell)=m \\mid {\\boldsymbol{r}},\n \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})}\\nonumber\\\\&=&\\sum_{\\bar{{\\boldsymbol{d}}}}P(d_{k}(\\ell)=m \\mid \\bar{{\\boldsymbol{d}}}, {\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})~P( \\bar{{\\boldsymbol{d}}}|{\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}) \\nonumber\\\\\n &\\approx&\\frac{1}{N_t}\\sum_{t=1}^{N_t}\n P(d_{k}(\\ell)=m| \\bar{{\\boldsymbol{d}}}^{[i,t]},{\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}).\n\\label{MC:APP}\n\\end{eqnarray}\nTo compute $P(d_{k}(\\ell)=m| \\bar{{\\boldsymbol{d}}}^{[i,t]},{\\boldsymbol{r}},\n\\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})$ for this Markov chain Rao-Blackwellization technique, we define\n \\begin{equation}\n\\label{MC:LLR}\n \\lambda^{[i,t]}\\triangleq\\ln \\frac{P\\left(d_{k}(\\ell)= +1|\\bar{{\\boldsymbol{d}}}^{[i,t]},{\\boldsymbol{r}},\n\\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}\\right)}{ P\\left(d_{k}(\\ell)=\n-1|\\bar{{\\boldsymbol{d}}}^{[i,t]},{\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}\\right)}.\n \\end{equation}\nFor uncoded transmission, the data symbols are i.i.d. and equally\nlikely. 
Therefore, it follows from (\\ref{MC:LLR}) that\n\\begin{equation}\n\\label{MC:LLR_exp}\n \\lambda^{[i,t]} = \\ln \\frac{P( {\\boldsymbol{r}} \\mid d_{k}(\\ell)= +1, \\bar{{\\boldsymbol{d}}}^{[i,t]},\n\\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})}{ P( {\\boldsymbol{r}} \\mid d_{k}(\\ell)= -1,\n\\bar{{\\boldsymbol{d}}}^{[i,t]},\\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})},\n\\end{equation}\nfrom which it can be easily seen that\n\\begin{equation*}\nP\\left(d_{k}(\\ell)= m\\mid \\bar{{\\boldsymbol{d}}}^{[i,t]},{\\boldsymbol{r}},\n\\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}\\right)=\\frac{1}{1+\\exp\\left(-m\\lambda^{[i,t]}\\right)}.\n\\label{MC:logapp_lambda}\n\\end{equation*}\nFrom (\\ref{sys:received}), we have $p({\\boldsymbol{r}}|{\\boldsymbol{d}})\\thicksim \\exp(-\\frac{1}{N_{0}}|{\\boldsymbol{r}}-{\\boldsymbol{G}}\n{\\boldsymbol{d}}|^{2})$, with ${\\boldsymbol{G}}\\triangleq\\boldsymbol{S}(\\boldsymbol \\tau){\\boldsymbol{A}}$ and\n${\\boldsymbol{d}}=\\mbox{col}\\{{\\boldsymbol{d}}_{1},{\\boldsymbol{d}}_{2},\\cdots,{\\boldsymbol{d}}_{K}\\}$. After some\nalgebra, (\\ref{MC:LLR_exp}) can be expressed as\n\\begin{equation}\n\\label{MC:LLR_fin}\n\\lambda^{[i,t]}=\\frac{4}{N_{0}}\n\\Re\\left\\{({\\boldsymbol{g}}^{[i]}_{q})^{\\dag}({\\boldsymbol{r}}-{{\\boldsymbol{G}}}^{[i]}_{\\bar q} \\mbox{ }\n{{\\boldsymbol{d}}}^{[i,t]}_{\\bar q})\\right\\},\n\\end{equation}\nwhere $q\\triangleq kL+\\ell$, and ${{\\boldsymbol{G}}}_{\\bar q}$ is ${\\boldsymbol{G}}$ with its $q$th column\n${\\boldsymbol{g}}_{q}$ removed. 
Similarly, ${{\\boldsymbol{d}}}_{\\bar q}$ denotes the\nvector ${\\boldsymbol{d}}$ with its $q$th component removed.\n\nIn summary, for each $k=1,2,\\cdots,K$ and $\\ell=0,1,\\cdots,L-1$, to\nestimate the {\\em a posteriori} probabilities\n$P(d_{k}(\\ell)|{\\boldsymbol{r}},\\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]})$ in (\\ref{MC:APP}), the Gibbs\nsampler runs over all symbols $N_{t}$ times to generate a collection\nof vectors $\\left\\{\\bar{{\\boldsymbol{d}}}^{[i,t]}\\triangleq\n\\bar{{\\boldsymbol{d}}}^{[i,t]}_{k}(\\ell)\\right\\}_{t=1}^{N_{t}}$ which are used\nin (\\ref{MC:LLR_fin}) to estimate the desired quantities.\n\n\n\\subsection{Computation of the soft-value for the product of two\n data symbols in (\\ref{appl:softcorr})}\n\nSimilarly, a number of random samples\n$\\overline{\\overline{{\\boldsymbol{d}}}}^{{[i,t]}}\\triangleq\n\\overline{\\overline{{\\boldsymbol{d}}_{k,k'}(\\ell')}}^{[i,t]},\nt=1,2,\\cdots, N_t, \\ell'\n\\in \\{\\ell-1,\\ell,\\ell+1\\}$ are drawn, using the Gibbs sampling technique,\nfrom the joint conditional posterior distribution\n$P(\\overline{\\overline{{\\boldsymbol{d}}}} \\mid {\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},{\\boldsymbol{a}}^{[i]}).$ Based on the samples\n$\\overline{\\overline{{\\boldsymbol{d}}}}^{[i,t]}$, $\\corrd{k}{\\ell}{k'}{\\ell'}$ in (\\ref{appl:softcorr}) can be evaluated by\n\\begin{eqnarray*}\n\\lefteqn{\\corrd{k}{\\ell}{k'}{\\ell'}\\approx\n(1\/N_{t})}\\nonumber\\\\&&\n\\times \\sum_{t=1}^{N_{t}}\n\\sum_{m,n\\in \\mathcal{S}}m n\nP\\left(d_{k}(\\ell)=m,d_{k'}(\\ell')=n \\mid \\overline{\\overline{{\\boldsymbol{d}}}}^{{[i,t]}}, {\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},\n{\\boldsymbol{a}}^{[i]}\\right).\n\\end{eqnarray*}\nWe need to evaluate the probability in the expression above. 
Following the same route as in the previous section, after some algebra it can be expressed as\n\\[\nP\\left(d_{k}(\\ell)= m, d_{k'}(\\ell')= n \\mid \\overline{\\overline{{\\boldsymbol{d}}}}^{[i,t]}, {\\boldsymbol{r}}, \\boldsymbol \\tau^{[i]},\n{\\boldsymbol{a}}^{[i]}\\right)=\\]\n\\begin{equation}\\hspace*{3cm} \\frac{1}{1+\\exp\\left(-\\zeta^{[i,t]}\\right)}\\cdot\n\\frac{1}{1+\\exp\\left(-\\lambda^{[i,t]}\\right)}.\n\\label{MC:LLR_corr}\n\\end{equation}\n\nThe quantities $\\zeta^{[i,t]}$ and $\\lambda^{[i,t]}$ in (\\ref{MC:LLR_corr}) are given by\n\\begin{eqnarray*}\n\\zeta^{[i,t]}&=&\\frac{4}{N_{0}}\n\\Re\\left\\{n({\\boldsymbol{g}}^{[i]}_{p})^{\\dag}({\\boldsymbol{r}}-{{\\boldsymbol{G}}}^{[i]}_{\\overline{p,q}}\n\\mbox{ }\n{{\\boldsymbol{d}}}^{[i,t]}_{\\overline{p,q}})-mn({\\boldsymbol{g}}^{[i]}_{p})^{\\dag}{\\boldsymbol{g}}^{[i]}_{q}\\right\\},\\\\\n\\lambda^{[i,t]}&=&\\frac{4}{N_{0}}\n\\Re\\left\\{m({\\boldsymbol{g}}^{[i]}_{q})^{\\dag}({\\boldsymbol{r}}-{{\\boldsymbol{G}}}^{[i]}_{\\bar q} \\mbox{ }\n{{\\boldsymbol{d}}}^{[i,t]}_{\\bar q})\\right\\},\n\\end{eqnarray*}\nwhere $p\\triangleq k'L+\\ell'$ and $q\\triangleq kL+\\ell$.\n${{\\boldsymbol{G}}}_{\\overline{p,q}}$ is ${\\boldsymbol{G}}$ with its $p$th and $q$th columns\n${\\boldsymbol{g}}_{p},{\\boldsymbol{g}}_{q}$ removed. Similarly, ${{\\boldsymbol{d}}}_{\\overline{p,q}}$\ndenotes the vector ${\\boldsymbol{d}}$ with its $p$th and $q$th components\nremoved.\n\n\n\n\n\\section{Performance analysis}\n\n\\subsection{Modified Cramer-Rao Bounds for the Estimated Parameters}\nWe now derive the modified Cramer-Rao lower bounds (MCRB) on the\nvariances of any unbiased estimates $\\widehat{{\\boldsymbol{\\theta}}}$ of the\nparameter vector ${\\boldsymbol{\\theta}}$. 
It is shown in \\cite{VanTrees} that for $\\theta_{p} \\in {\\boldsymbol{\\theta}}$, $\\mbox{var}(\\widehat{\\theta}_{p}-\\theta_{p})\\geq [{\\boldsymbol{I}}^{-1}({\\boldsymbol{\\theta}})]_{pp}$, where ${\\boldsymbol{I}}({\\boldsymbol{\\theta}})$ is the $3K\\times 3K$ Fisher information matrix whose $(p,q)$th component is defined by\n\\begin{equation*}\n[{\\boldsymbol{I}}({\\boldsymbol{\\theta}})]_{pq}\\triangleq - E_{{\\boldsymbol{r}},{\\boldsymbol{a}}} \\bigg\\{\\frac{\\partial^{2}\\ln p({\\boldsymbol{r}},{\\boldsymbol{a}} \\mid \\boldsymbol \\tau)}{\\partial \\theta_{p}\\partial \\theta_{q}} \\bigg\\}, \\mbox{ for }p,q=1,2,\\cdots,3K.\n\\end{equation*}\nFor the joint likelihood function in (\\ref{appl:likelihood}),\nit is shown in \\cite{kay} that the Fisher information matrix can be computed by\n\\begin{equation}\n[{\\boldsymbol{I}}({\\boldsymbol{\\theta}})]_{pq}=\n\\frac{2}{N_{0}}E_{{\\boldsymbol{d}}}\\bigg\\{E_{{\\boldsymbol{a}}|{\\boldsymbol{d}}}\\bigg\\{\\Re\\bigg[\n\\frac{\\partial {\\boldsymbol{\\mu}}^{\\dag}({\\boldsymbol{\\theta}},{\\boldsymbol{d}})}{\\partial\n \\theta_{p}}\\frac{\\partial {\\boldsymbol{\\mu}}({\\boldsymbol{\\theta}},{\\boldsymbol{d}})}{\\partial\n \\theta_{q}}\\bigg]\\bigg\\}\\bigg\\},\n\\label{CRB_Fisher}\n\\end{equation}\n$p,q=1,2,\\cdots, 3K.$\n\nTaking the expectations with respect to channel coefficients ${\\boldsymbol{a}}$ and data ${\\boldsymbol{d}}$ after taking the partial derivatives in (\\ref{CRB_Fisher}) with respect to $\\theta_{p}$ and $\\theta_{q}$, for different regions of $p$ and $q$ values, under the assumption that the data sequences are independent and equally likely and the fact that ${\\boldsymbol{S}}^{\\dag}(\\tau_{p},\\ell){\\boldsymbol{S}}(\\tau_{p},\\ell)=1,$ for $p=1,2\\cdots K; \\mbox{ }\\ell=0,1,\\cdots,L-1$, the Fisher information matrix becomes a diagonal matrix whose $(p,p)$th component can be evaluated as\n\\begin{equation}\n[{\\boldsymbol{I}}({\\boldsymbol{\\theta}})]_{pp}=\\frac{2}{N_{0}}\\left\\{ 
\\begin{array}{ll}\n L; & p=1,\\cdots,K\\\\\n L; & p=K+1,\\cdots,2K\\\\\n \\sigma_{p}^{2}\\sum_{\\ell=0}^{L-1} \\mid {\\boldsymbol{S}}'(\\ell) \\mid^{2}; & p=2K+1,\\cdots,3K,\\\\\n \\end{array}\n \\right.\n\\label{CRB_Fisher_eval}\n\\end{equation}\nwith the shorthand ${\\boldsymbol{S}}'(\\ell) \\triangleq\n\\frac{\\partial{\\boldsymbol{S}}_p(\\tau_{p},\\ell)}{\\partial\\tau_{p}} \\mid_{t=\\ell\n T_b + \\widehat\\tau_p}$.\nThe final result for the MCRBs on the estimates of the channel coefficients and the transmission delays is obtained by inverting the diagonal matrix ${\\boldsymbol{I}}({\\boldsymbol{\\theta}})$ in (\\ref{CRB_Fisher_eval}) as follows.\n\\begin{eqnarray}\n\\mbox{var}(\\widehat{a}_{k})&\\geq & N_0\/L, \\label{MCRB:a} \\\\\n\\mbox{ }\\mbox{var}(\\widehat{\\tau}_{k})&\\geq & 1\/(8 \\pi^{2} L\\overline{\\gamma_{k}}\\mbox{ }B^{2}_{s_{k}} ) \\label{MCRB:tau},\n\\end{eqnarray}\n$k = 1,2,\\dots,K$. The symbol $\\overline{\\gamma_{k}}\\triangleq \\sigma^{2}_{k}\/N_{0}$\nis the average SNR, $B_{s_{k}}$ is the Gabor bandwidth of the $k$th user's\nspreading code waveform $s_{k}(t)$, i.e., \n\\begin{equation*}\nB_{s_{k}}\\triangleq \\bigg(\\int_{-\\infty}^{+\\infty}f^{2} \\mid S_{k}(f)\\mid^{2} df\\bigg)^{1\/2},\n\\end{equation*}\nand $S_{k}(f)$ is the Fourier transform of $s_{k}(t), t\\in\n[0,T_{b}]$. Note that the Gabor bandwidth $B_{s_{k}}$ tends to infinity for rectangular-shaped (continuous-time)\nchip waveforms.\n\n\n\n\n\\subsection{Numerical Examples}\n\nTo assess the performance of the proposed (non-linear) Monte-Carlo SAGE scheme, an asynchronous uncoded DS-CDMA system with $K = 5$ users, rectangular chip waveforms with processing gain $N_c = 8$, and $L = 80$ transmitted symbols per block is considered. The receiver processes $Q = 12$ samples per chip. For each data block, Gibbs sampling is performed over $50$ iterations. \nA few, say $L_p=4$, pilot symbols are embedded in each block to overcome the phase ambiguity problem. 
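For orientation, the channel-coefficient bound (\\ref{MCRB:a}) can be evaluated directly for the block length used here (a trivial numeric sketch, assuming unit user power $\\sigma_k^2 = 1$ so that $N_0 = 1\/\\overline{\\gamma_{k}}$ on a linear scale):

```python
# MCRB for the channel coefficients: var(a_hat) >= N0 / L.
L = 80                                        # symbols per block, as above
bounds = {snr_db: 10.0 ** (-snr_db / 10.0) / L for snr_db in (0, 10, 20)}
# e.g. at 10 dB average SNR the bound is 0.1 / 80 = 1.25e-3
```

The bound tightens linearly with the block length $L$, reflecting the coherent averaging over the frame.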
The initial estimates ${\\boldsymbol{a}}^{[0]}$ and $\\boldsymbol \\tau^{[0]}$ are obtained from each user's strongest path in the MMSE\nestimate of ${\\boldsymbol{a}}$, given the $K~Q~(L_p+1)-1$ samples of ${\\boldsymbol{r}}$ and the pilot symbols. The MMSE estimate of $\\boldsymbol{d}$, given\n${\\boldsymbol{r}}$ and weighted by ${\\boldsymbol{a}}^{[0]}$, yields the initial symbol estimate\n${\\boldsymbol{d}}^{[0]}$. We refer to this method as MMSE-separate estimation (MMSE-SE). \nFor comparison purposes, the SAGE scheme for joint data detection and channel estimation in\n\\cite{kocian2007}, with known transmission delays and hard-decision\ndecoding, has also been considered. We refer to this scheme as\n\"SAGE-JDE, $\\boldsymbol \\tau$ known\".\\\\\n\n\\begin{figure}\n\\begin{center}\n\\begin{psfrags}\n\\psfrag{0.10}[][][0.8]{$0.10$}\n\\psfrag{0.15}[][][0.8]{$0.15$}\n\\psfrag{0.20}[][][0.8]{$0.20$}\n\\psfrag{0.25}[][][0.8]{$0.25$}\n\\psfrag{0.30}[][][0.8]{$0.30$}\n\\psfrag{0.35}[][][0.8]{$0.35$}\n\\psfrag{0.40}[][][0.8]{$0.40$}\n\\psfrag{0.45}[][][0.8]{$0.45$}\n\\psfrag{0.50}[][][0.8]{$0.50$}\n\\psfrag{delay}[][][0.8]{$\\tau\/T_b$}\n\\psfrag{10}[][][0.8]{$10$}\n\\psfrag{-1}[][][0.6]{$-1$}\n\\psfrag{-2}[][][0.6]{$-2$}\n\\psfrag{-3}[][][0.6]{$-3$}\n\\psfrag{user1}[l][l][0.8]{MCMC-SAGE, user 1}\n\\psfrag{user3}[l][l][0.8]{MCMC-SAGE, user 3}\n\\psfrag{MCRLB(1)xxxxxxxxxxxx}[l][l][0.8]{MCRB (\\ref{MCRB:a}), user 1}\n\\psfrag{MCRLB(3)}[l][l][0.8]{MCRB (\\ref{MCRB:a}), user 3}\n\\psfrag{MSE(a)}[][][0.8]{$\\mbox{var}(\\widehat{a}_k)$}\n\\includegraphics[width=10cm]{MSEa33_MCMC.eps}\n\\end{psfrags}\n\\end{center}\n\\caption{$\\mbox{var}(\\widehat{a}_k)$ of the MCMC-SAGE in a near-far scenario. \\label{fig:MMSE_a}}\n\\end{figure}\n\nTo study the behavior of the proposed MCMC-SAGE scheme, we consider communication over an AWGN channel (not known to the receiver). 
The individual powers are given by\n\\begin{equation*}\n\\begin{array}{lll}\n\\sigma_1^2 = -4~\\mathrm{dB}, & \\hspace{2ex} \\sigma_2^2 = -2~\\mathrm{dB}, & \\hspace{2ex} \\sigma_3^2 = 0~\\mathrm{dB}, \\\\\n\\sigma_4^2 = +2~\\mathrm{dB}, & \\hspace{2ex} \\sigma_5^2 = +4~\\mathrm{dB}. \\\\ \n\\end{array}\n\\end{equation*}\nFig.~\\ref{fig:MMSE_a} shows the mean-square-error (MSE) of the channel estimates $\\widehat{\\boldsymbol a}_1$ (weakest user) and $\\widehat{\\boldsymbol a}_3$ (normal user) as a function of the normalized transmission delays $\\boldsymbol \\tau\/T_b$, which are uniformly distributed on the interval between zero and the value on the abscissa. It can be seen that the MCMC-SAGE performs close to the MCRB over the entire range of $\\boldsymbol \\tau$. Although not shown in the plot, convergence is achieved after around 25 iterations, i.e., after every user's parameter vector has been updated five times.\n\n\\begin{figure}\n\\begin{center}\n\\begin{psfrags}\n\\psfrag{0.10}[][][0.8]{$0.10$}\n\\psfrag{0.15}[][][0.8]{$0.15$}\n\\psfrag{0.20}[][][0.8]{$0.20$}\n\\psfrag{0.25}[][][0.8]{$0.25$}\n\\psfrag{0.30}[][][0.8]{$0.30$}\n\\psfrag{0.35}[][][0.8]{$0.35$}\n\\psfrag{0.40}[][][0.8]{$0.40$}\n\\psfrag{0.45}[][][0.8]{$0.45$}\n\\psfrag{0.50}[][][0.8]{$0.50$}\n\\psfrag{delay}[][][0.8]{$\\tau\/T_b$}\n\\psfrag{10}[][][0.8]{$10$}\n\\psfrag{-4}[][][0.6]{$-4$}\n\\psfrag{-5}[][][0.6]{$-5$}\n\\psfrag{-6}[][][0.6]{$-6$}\n\\psfrag{-7}[][][0.6]{$-7$}\n\\psfrag{user1xxxxxxxxxxxxx}[l][l][0.8]{MCMC-SAGE, user 1}\n\\psfrag{user3}[l][l][0.8]{MCMC-SAGE, user 3}\n\\psfrag{MSE(tau)}[][][0.8]{$\\mbox{var}(\\widehat{\\tau}_k)$}\n\\includegraphics[width=10cm]{MSEtau33_MCMC.eps}\n\\end{psfrags}\n\\end{center}\n\\caption{$\\mbox{var}(\\widehat{\\tau}_k)$ of the MCMC-SAGE in a near-far scenario. \\label{fig:MMSE_tau}}\n\\end{figure}\n\nFig.~\\ref{fig:MMSE_tau} depicts the MSE of the delay estimates $\\widehat{\\boldsymbol \\tau}_1$ and $\\widehat{\\boldsymbol \\tau}_3$. 
Notice that the MCRB for $\\boldsymbol \\tau$ tends to zero for time-continuous signature waveforms. It can be seen that user~3 does not encounter delay estimation errors for small transmission delays, i.e., $\\tau\/T_b \\leq 0.2$. This effect can be partially explained by the large number of samples per chip, i.e., $Q=12$. For higher transmission delays, though, $\\mbox{var}(\\widehat{\\tau}_3)$ no longer vanishes, because of the increasing residual interference in the receiver.\n\n\\begin{figure}\n\\begin{center}\n\\begin{psfrags}\n\\psfrag{2}[][][0.8]{$2$}\n\\psfrag{4}[][][0.8]{$4$}\n\\psfrag{6}[][][0.8]{$6$}\n\\psfrag{8}[][][0.8]{$8$}\n\\psfrag{10}[][][0.8]{$10$}\n\\psfrag{12}[][][0.8]{$12$}\n\\psfrag{14}[][][0.8]{$14$}\n\\psfrag{16}[][][0.8]{$16$}\n\\psfrag{effective SNR}[][][0.8]{$\\frac{L-L_p}{L}\\bar{\\gamma}_1=\\frac{L-L_p}{L}\\bar{\\gamma}_2=\\ldots=\\frac{L-L_p}{L}\\bar{\\gamma}_K$ [dB]}\n\\psfrag{10}[][][0.8]{$10$}\n\\psfrag{0}[][][0.6]{$0$}\n\\psfrag{-1}[][][0.6]{$-1$}\n\\psfrag{-2}[][][0.6]{$-2$}\n\\psfrag{-3}[][][0.6]{$-3$}\n\\psfrag{-4}[][][0.6]{$-4$}\n\\psfrag{BER(user1)}[][][0.6]{${\\mathrm{BER}_1}$}\n\\psfrag{BER(user3)}[][][0.6]{${\\mathrm{BER}_3}$}\n\\psfrag{SU}[l][l][0.7]{SU, known channel}\n\\psfrag{SAGE(kc)}[l][l][0.7]{MCMC-SAGE, $\\boldsymbol \\tau$ known}\n\\psfrag{SAGE}[l][l][0.7]{MCMC-SAGE, $\\hat{\\boldsymbol \\tau}$}\n\\psfrag{MMSE-SDEXXXXXX}[l][l][0.7]{MMSE-SE}\n\\includegraphics[width=10cm]{BER33_MCMC.eps}\n\\end{psfrags}\n\\end{center}\n\\caption{BER-performance in a near-far scenario. \\label{fig:BER}}\n\\end{figure}\n\nThe bit-error-rate ($\\overline{\\mathrm{BER}}$) of the proposed\nreceiver is plotted in Fig.~\\ref{fig:BER} versus the \\emph{effective}\nSNR $\\frac{L-L_p}{L}\\bar{\\gamma}_k$, $\\bar{\\gamma}_k \\triangleq\n\\sigma_k^2\/N_0$, $k=1,\\ldots,K$. The transmission delays are uniformly distributed on $[0,T_b\/2)$. 
It can be seen that the MMSE-SE scheme cannot handle delay estimation errors at all due to high\ncorrelations between the users' signature sequences. The proposed\nMCMC-SAGE scheme and the \"SAGE-JDE, $\\boldsymbol \\tau$ known\" scheme perform similarly. The weakest user 1 performs close to the single-user (SU) bound. The normal user 3 shows a loss in multiuser efficiency\nof roughly 1~dB over the entire range of SNR values.\n\n\\section{Conclusions}\nA computationally efficient estimation algorithm has been proposed\nfor estimating the transmission delays and the channel coefficients\njointly in a non-data-aided fashion via the SAGE algorithm. The {\\em a\nposteriori} probabilities needed to implement the SAGE algorithm\nhave been computed by means of the Gibbs sampling technique. Exact\nanalytical expressions have been obtained for the estimates of the\ntransmission delays and channel coefficients. At each iteration the\nlikelihood function is non-decreasing.\n\n\n\\bibliographystyle{IEEEbib}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction and Motivation}\n\nDESIDERATA: ``To translate the information contained in protein\ndatabases into the language of random variables in order to model the dynamics\nof folding and unfolding of proteins''.\n\nThe information on planetary motion has been recorded in Astronomical\nalmanacs (Ephemerides) over the centuries and can be derived and analyzed\nby Classical Dynamics and Deterministic Chaos as well as confirmed and\ncorrected by General Relativity. The information accumulated\nin Biological almanacs (Protein databases) over the last decades is\nstill waiting for its first description by a successful\ntheory of folding and unfolding of proteins. 
We think that this study\nshould be started from the evolution of protein families and their\nassociation into Family clans as a necessary step of their development.\n\nThe first fundamental idea to be developed here is that proteins do not\nevolve independently. We introduce several arguments, and we have\ndone many calculations, to bridge the facts about\nprotein evolution in order to emphasize the existence of a protein\nfamily formation process (PFFP), a successful pattern recognition method,\nand a coarse-grained protein dynamics driven by optimal control\ntheory \\cite{mondaini1,mondaini2,mondaini3,mondaini4,mondaini5,mondaini6}.\nProteins, or their ``intelligent'' parts,\nprotein domains, evolve together as a family of protein domains. We then\nrealize that the exclusion of the evolution of an ``orphan'' protein is\nguaranteed by the probabilistic approach to be introduced in the present\ncontribution. We think that the elucidation of the nature of intermediate\nstages of the folding\/unfolding dynamics, in order to circumvent the\nLevinthal ``paradox'' \\cite{levinthal,karplus}, as well as the determination\nof initial conditions, should come from a detailed study of this PFFP\nprocess. A byproduct of this approach is the possibility of testing the\nhypothesis of agglutination of protein families into Clans by rigorous\nstatistical methods like ANOVA \\cite{deGroot,mondaini4}.\n\nWe take many examples of Entropy Measures as the generalized\nfunctions of random variables in our modelling. These are the\nprobabilities of occurrence of amino acids in rectangular arrays\nwhich represent families of protein domains.\nIn section 2, the sample space which is adequate for this\nstatistical approach is described in detail. We start from the\ndefinition of probabilities of occurrence and the restrictions\nimposed on the number of feasible families by the structure of\nthis sample space. 
Section 3 introduces the set of Sharma-Mittal\nEntropy Measures \\cite{sharma,mondaini4} to be adopted as the\nfunctions of probabilities of occurrence in the statistical\nanalysis to be developed. The Mutual Information measures\nassociated with the Sharma-Mittal set, as well as the normalized\nJaccard distance measures, are also introduced in this section.\nIn section 4, we present a naive sketch of assessing a Protein\ndatabase, to set the stage for a more efficient approach in the\nfollowing sections. In section 5, we point out the inconvenience\nof the Maple computing system for the statistical calculations to\nbe done, by displaying tables with the CPU and real times\nnecessary to perform all the required calculations. We have also\nprovided in this section some adaptation of our methods for\nuse with the Perl computing system, and we compare the new\ntimes of calculation with those obtained using Maple at the beginning\nof the section. We also include some comments on the use of Perl,\nespecially on its awkwardness in calculating with input data given\nin arrays and the way of circumventing this. However, we also stress\nthat although joint probabilities and their powers can usually be\ncalculated, the output will come randomly ordered and the\nCPU and real times will increase too much to favour\nthe calculation of the entropy measures. This is due to the\nintrinsic ``Hash'' structure \\cite{cormen} of the Perl computing system.\nWe then introduce a modified array structure in order to calculate with\nPerl.\n\n\\section{The Sample Space for a Statistical Treatment}\nWe consider a rectangular array of \\textbf{\\emph{m}} rows (protein domains)\nand \\textbf{\\emph{n}} columns (amino acids). 
These arrays are organized\nfrom the protein database whose domains are classified into families and\nclans by the professional expertise of senior biologists \\cite{finn1,finn2}.\n\nThe random variable is the probability of occurrences of amino\nacids, $p_j(a)$, $j=1,2,\\ldots,n$, $a=A,C,D,\\ldots,W,Y$ (one-letter\ncode for amino acids), to be given by\n\\begin{equation}\n p_j(a) \\equiv \\frac{n_j(a)}{m} \\label{eq:eq1}\n\\end{equation}\n\n\\noindent where $n_j(a)$ is the number of occurrences of the amino acid\n\\textbf{\\emph{a}} in the $j$-th column. Eq.(\\ref{eq:eq1}) could be\nalso interpreted as the components of n vectors of 20 components\neach\n\\begin{equation}\n \\begin{pmatrix}\n p_1(A) \\\\ \\relax\n \\vdots \\\\ \\relax\n p_1(Y)\n \\end{pmatrix}\n \\begin{pmatrix}\n p_2(A) \\\\ \\relax\n \\vdots \\\\ \\relax\n p_2(Y)\n \\end{pmatrix}\n \\ldots\n \\begin{pmatrix}\n p_n(A) \\\\ \\relax\n \\vdots \\\\ \\relax\n p_n(Y)\n \\end{pmatrix} \\label{eq:eq2}\n\\end{equation}\n\n\\noindent and we have\n\\begin{equation}\n \\sum_a n_j(a)=m\\,, \\,\\, \\forall j \\, \\Rightarrow \\,\n \\sum_a p_j(a)=1\\,, \\,\\, \\forall j \\label{eq:eq3}\n\\end{equation}\n\nAnalogously, we could also introduce the joint probability\nof occurrence of a pair of amino acids \\textbf{\\emph{a}}, \\textbf{\\emph{b}}\nin columns \\textbf{\\emph{j}}, \\textbf{\\emph{k}}, respectively $P_{jk}(a,b)$\nas the random variables. 
These are given by\n\\begin{equation}\n P_{jk}(a,b) = \\frac{n_{jk}(a,b)}{m} \\label{eq:eq4}\n\\end{equation}\n\n\\noindent where $n_{jk}(a,b)$ is the number of occurrences of the pair\nof amino acids \\textbf{\\emph{a}}, \\textbf{\\emph{b}} in columns\n\\textbf{\\emph{j}}, \\textbf{\\emph{k}}, respectively.\n\nA convenient interpretation of these joint probabilities could be\nthe elements of $\\frac{n(n-1)}{2}$ square matrices of $20 \\times 20$\nelements, to be written as\n\\begin{equation}\n P_{jk} =\n \\begin{pmatrix}\n P_{jk}(A,A) & \\ldots & P_{jk}(A,Y) \\\\ \\relax\n \\vdots & \\ddots & \\vdots \\\\ \\relax\n P_{jk}(Y,A) & \\ldots & P_{jk}(Y,Y)\n \\end{pmatrix} \\label{eq:eq5}\n\\end{equation}\n\n\\noindent where $j=1,2,\\ldots,(n-1)$; $k=j+1,\\ldots,n$.\n\nWe can also write,\n\\begin{equation}\n P_{jk}(a,b)=P_{jk}(a|b)p_k(b) \\,, \\label{eq:eq6}\n\\end{equation}\nThis equation can be also taken as another definition of joint\nprobability. $\\!P\\!_{jk}(\\!a|b)$ is the Conditional probability of\noccurrence of the amino acid \\textbf{\\emph{a}} in column\n\\textbf{\\emph{j}} if the amino acid \\textbf{\\emph{b}} is already\nfound in column \\textbf{\\emph{k}}. 
We then have,\n\\begin{equation}\n \\sum_a P_{jk}(a|b) = 1 \\label{eq:eq7}\n\\end{equation}\nFrom eqs.(\\ref{eq:eq6}), (\\ref{eq:eq7}), we have:\n\\begin{equation}\n \\sum_a P_{jk}(a,b) = p_k(b) \\label{eq:eq8}\n\\end{equation}\nand from eq.(\\ref{eq:eq8}),\n\\begin{equation}\n \\sum_a\\sum_b P_{jk}(a,b) = 1 \\label{eq:eq9}\n\\end{equation}\n\n\\noindent which is an identity since $P_{jk}(a,b)$ is also a probability.\n\nEqs.(\\ref{eq:eq8}) and (\\ref{eq:eq9}) can be also derived from\n\\begin{equation}\n \\sum_a n_{jk}(a,b) = n_k(b) \\,;\\quad \\sum_a\\sum_b n_{jk}(a,b)\n = m \\label{eq:eq10}\n\\end{equation}\n\n\\noindent and the definitions, eqs.(\\ref{eq:eq1}), (\\ref{eq:eq4}).\n\nWe now have from Bayes' law:\n\\begin{equation}\n P_{jk}(a|b)p_k(b) = P_{kj}(b|a)p_j(a) \\label{eq:eq10a}\n\\end{equation}\nand from eq.(\\ref{eq:eq10a}), the property of symmetry,\n\\begin{equation}\n P_{jk}(a,b) = P_{kj}(b,a) \\label{eq:eq10b}\n\\end{equation}\n\nThe matrices $P_{jk}$ can be organized in a triangular array:\n\\begin{equation}\n P =\n \\begin{matrix}\n {} & P_{12} & P_{13} & P_{14} & \\ldots & P_{1\\,n-2} & P_{1\\,n-1}\n & P_{1\\,n} \\\\ \\relax\n {} & {} & P_{23} & P_{24} & \\ldots & P_{2\\,n-2} & P_{2\\,n-1}\n & P_{2\\,n} \\\\ \\relax\n {} & {} & {} & P_{34} & \\ldots & P_{3\\,n-2} & P_{3\\,n-1}\n & P_{3\\,n} \\\\ \\relax\n {} & {} & {} & {} & \\ddots & \\vdots & \\vdots & \\vdots \\\\ \\relax\n {} & {} & {} & {} & {} & P_{n-3\\,n-2} & P_{n-3\\,n-1}\n & P_{n-3\\,n} \\\\ \\relax\n {} & {} & {} & {} & {} & {} & P_{n-2\\,n-1} & P_{n-2\\,n}\n \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & P_{n-1\\,n}\n \\end{matrix} \\label{eq:eq11}\n\\end{equation}\nThe number of matrices until the $P_{jk}$-th one is given by\n\\begin{equation}\n C_{jk} = j(n-1) - \\frac{j(j-1)}{2} - (n-k) \\label{eq:eq12}\n\\end{equation}\n\n\\noindent These numbers can be also arranged as a triangular array:\n\\begin{equation}\n \\resizebox{.99\\hsize}{!}{$C =\n \\begin{matrix}\n 1 & 2 & 3 & 4 & 5 
& 6 & \\ldots & (n-3) & (n-2) & (n-1) \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & n & (n+1) & (n+2) & (n+3) & (n+4) & \\ldots & (2n-5) & (2n-4) &\n (2n-3) \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & (2n-2) & (2n-1) & 2n & (2n+1) & \\ldots & (3n-8) & (3n-7) &\n (3n-6) \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & (3n-5) & (3n-4) & (3n-3) & \\ldots & (4n-12) & (4n-11) &\n (4n-10) \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & {} & (4n-9) & (4n-8) & \\ldots & (5n-17) & (5n-16) & (5n-15)\n \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & {} & {} & (5n-14) & \\ldots & (6n-23) & (6n-22) & (6n-21)\n \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & {} & {} & {} & \\ddots & \\vdots & \\vdots & \\vdots \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & \\frac{1}{2}(n^2-n-10) & \\frac{1}{2}\n (n^2-n-8) & \\frac{1}{2}(n+2)(n-3) \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & \\frac{1}{2}(n^2-n-4) &\n \\frac{1}{2}(n+1)(n-2) \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & {} \\\\ \\relax\n {} & {} & {} & {} & {} & {} & {} & {} & {} & \\frac{1}{2}\\,n(n-1)\n \\end{matrix}\n $} \\label{eq:eq13}\n\\end{equation}\n\nEq.(\\ref{eq:eq11}) should be used for the construction of a computational\ncode to perform all necessary calculations. We postpone to other publication\nthe presentation of some interesting results on the analysis of eq.(\\ref{eq:eq13}).\n\nThe calculation of the matrix elements $P_{jk}(a,b)$ from a rectangular array\n$m \\times n$ of amino acids is done by the ``concatenation'' process\nwhich is easily implemented on computational codes. 
We choose a pair of\ncolumns $j=\\bar{\\jmath}$, $k=\\bar{k}$ from the strings, $a=A$, $C$,\n$\\ldots$, $W$, $Y$, $b=A$, $C$, $\\ldots$, $W$, $Y$ and we look for the\noccurrence of the combinations $ab=AA$, $AC$, $\\ldots$, $AW$, $AY$, $CA$,\n$CC$, $\\ldots$, $CW$, $CY$, $\\ldots$, $WA$, $WC$, $\\ldots$, $WW$, $WY$,\n$\\ldots$, $YA$, $YC$, $\\ldots$, $YW$, $YY$. We then calculate their numbers\nof occurrences $n_{\\bar{\\jmath}\\bar{k}}(A,A)$,\n$n_{\\bar{\\jmath}\\bar{k}}(A,C)$, $\\ldots$, $n_{\\bar{\\jmath}\\bar{k}}(Y,W)$,\n$n_{\\bar{\\jmath}\\bar{k}}(Y,Y)$ and the corresponding probabilities\n$P_{\\bar{\\jmath}\\bar{k}}(A,A)$, $P_{\\bar{\\jmath}\\bar{k}}(A,C)$, $\\ldots$,\\\\\n$P_{\\bar{\\jmath}\\bar{k}}(Y,W)$, $P_{\\bar{\\jmath}\\bar{k}}(Y,Y)$ from\neq.(\\ref{eq:eq4}). We do the same for the other $\\frac{n^2-n-2}{2}$ pairs of\ncolumns.\n\nAs an example, let us suppose that we have the $3 \\times 4$ array:\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.25\\linewidth]{figure1}\n \\caption{\\small An example of a $3 \\times 4$ array with amino acids\n A, C, D. \\label{fig1}}\n\\end{figure}\n\nLet us choose the pair of columns 1,2. We look for the occurrence of the\ncombinations $AA$, $AC$, $AD$, $CA$, $CC$, $CD$, $DA$, $DC$, $DD$ on the\npair of columns 1,2 of the array above and we found $n_{12}(A,C) = 1$,\n$n_{12}(C,A) = 1$, $n_{12}(D,A) = 1$. The others $n_{12}(a,b) = 0$. 
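The counting just described is straightforward to mechanize. A minimal Python sketch is given below; the row strings ACAD, CADD, DACC are the $3 \times 4$ toy array of Fig.~1, read off from the column triads used later in the text, and the helper names are our own.

```python
from collections import Counter
from fractions import Fraction

# The 3x4 toy array: rows read off from the column triads (ACD)(CAA)(ADC)(DDC).
rows = ["ACAD", "CADD", "DACC"]
m = len(rows)

def joint_counts(rows, j, k):
    """n_jk(a, b): occurrences of the pair (a, b) in columns j, k (1-based)."""
    return Counter((r[j - 1], r[k - 1]) for r in rows)

def joint_probs(rows, j, k):
    """P_jk(a, b) = n_jk(a, b) / m, kept as exact fractions."""
    return {pair: Fraction(cnt, m) for pair, cnt in joint_counts(rows, j, k).items()}

print(joint_probs(rows, 1, 2))
# {('A', 'C'): Fraction(1, 3), ('C', 'A'): Fraction(1, 3), ('D', 'A'): Fraction(1, 3)}
```

The printed result reproduces $n_{12}(A,C) = n_{12}(C,A) = n_{12}(D,A) = 1$ divided by $m = 3$, with all other pairs absent (probability zero).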
From\neq.(\\ref{eq:eq4}) we can write for the matrices $P_{jk}$ of eq.(\\ref{eq:eq5}):\n\\begin{align}\n & P_{12} =\n \\begin{pmatrix}\n 0 & 1\/3 & 0 \\\\ \\relax\n 1\/3 & 0 & 0 \\\\ \\relax\n 1\/3 & 0 & 0\n \\end{pmatrix}\n \\!;\n P_{13} =\n \\begin{pmatrix}\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 1\/3 & 0\n \\end{pmatrix}\n \\!;\n P_{14} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 1\/3 & 0\n \\end{pmatrix}\n \\nonumber \\\\\n & P_{23} =\n \\begin{pmatrix}\n 0 & 1\/3 & 1\/3 \\\\ \\relax\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!;\n P_{24} =\n \\begin{pmatrix}\n 0 & 1\/3 & 1\/3 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!;\n P_{34} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 1\/3 & 0 \\\\ \\relax\n 0 & 0 & 1\/3\n \\end{pmatrix} \\label{eq:eq14}\n\\end{align}\nThe Maple computing system ``recognizes'' the matricial structure through\nits Linear Algebra package. The Perl computing system ``operates'' only\nwith ``strings''. The results above are easily obtained in Maple, but in\nPerl we have to find alternative ways of calculating the joint probabilities.\nThe first method is to calculate the probabilities per row of the\n$3 \\times 4$ array. 
We have for the first row:\n\\begin{align}\n & \\Pi_{12}^{(1)} =\n \\begin{pmatrix}\n 0 & 1\/3 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{13}^{(1)} =\n \\begin{pmatrix}\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{14}^{(1)} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\nonumber \\\\\n & \\Pi_{23}^{(1)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{24}^{(1)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{34}^{(1)} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix} \\label{eq:eq15}\n\\end{align}\nFor the second row:\n\\begin{align}\n & \\Pi_{12}^{(2)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{13}^{(2)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{14}^{(2)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\nonumber \\\\\n & \\Pi_{23}^{(2)} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{24}^{(2)} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{34}^{(2)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 1\/3\n \\end{pmatrix} \\label{eq:eq16}\n\\end{align}\nFor the third row:\n\\begin{align}\n & \\Pi_{12}^{(3)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 1\/3 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{13}^{(3)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0 
\\\\ \\relax\n 0 & 1\/3 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{14}^{(3)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 1\/3 & 0\n \\end{pmatrix}\n \\nonumber \\\\\n & \\Pi_{23}^{(3)} =\n \\begin{pmatrix}\n 0 & 1\/3 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{24}^{(3)} =\n \\begin{pmatrix}\n 0 & 1\/3 & 0 \\\\ \\relax\n 0 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\!; \\,\\,\n \\Pi_{34}^{(3)} =\n \\begin{pmatrix}\n 0 & 0 & 0 \\\\ \\relax\n 0 & 1\/3 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix} \\label{eq:eq17}\n\\end{align}\n\nWe stress that Perl does not recognize these matrix structures. This is just\nour arrangement in order to make comparison with Maple calculations. However,\nPerl ``knows'' how to sum the calculations done per rows to obtain:\n\\begin{equation}\n \\Pi_{12}^{(1)}+\\Pi_{12}^{(2)}+\\Pi_{12}^{(3)} =\n \\begin{pmatrix}\n 0 & 1\/3 & 0 \\\\ \\relax\n 1\/3 & 0 & 0 \\\\ \\relax\n 1\/3 & 0 & 0\n \\end{pmatrix}\n \\equiv P_{12} \\label{eq:eq18}\n\\end{equation}\n\\begin{equation}\n \\Pi_{13}^{(1)}+\\Pi_{13}^{(2)}+\\Pi_{13}^{(3)} =\n \\begin{pmatrix}\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 1\/3 & 0\n \\end{pmatrix}\n \\equiv P_{13} \\label{eq:eq19}\n\\end{equation}\n\\begin{equation}\n \\Pi_{14}^{(1)}+\\Pi_{14}^{(2)}+\\Pi_{14}^{(3)} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 1\/3 & 0\n \\end{pmatrix}\n \\equiv P_{14} \\label{eq:eq20}\n\\end{equation}\n\\begin{equation}\n \\Pi_{23}^{(1)}+\\Pi_{23}^{(2)}+\\Pi_{23}^{(3)} =\n \\begin{pmatrix}\n 0 & 1\/3 & 1\/3 \\\\ \\relax\n 1\/3 & 0 & 0 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\equiv P_{23} \\label{eq:eq21}\n\\end{equation}\n\\begin{equation}\n \\Pi_{24}^{(1)}+\\Pi_{24}^{(2)}+\\Pi_{24}^{(3)} =\n \\begin{pmatrix}\n 0 & 1\/3 & 1\/3 \\\\ \\relax\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 0 & 0\n \\end{pmatrix}\n \\equiv P_{24} 
\\label{eq:eq22}\n\\end{equation}\n\\begin{equation}\n \\Pi_{34}^{(1)}+\\Pi_{34}^{(2)}+\\Pi_{34}^{(3)} =\n \\begin{pmatrix}\n 0 & 0 & 1\/3 \\\\ \\relax\n 0 & 1\/3 & 0 \\\\ \\relax\n 0 & 0 & 1\/3\n \\end{pmatrix}\n \\equiv P_{34} \\label{eq:eq23}\n\\end{equation}\n\nWe are then able to translate the Perl output into ``matrix language''.\nHowever, this output does not come as an ordered set of joint\nprobabilities, as in our arrangement above in the form of the\nmatrices $\\Pi_{jk}^{(l)}$, $j=1,2,3$, $k=2,3,4$, $l=1,2,3$. In order to\ncalculate functions of the probabilities such as the entropy measures, it will\ntake too much time for the Perl computing system to collect the necessary\nprobability values. This is due to the ``Hash'' structure of the Perl as\ncompared to the usual ``array'' structure of the Maple. A new form of\narranging the strings to favour an a priori ordering will circumvent this\ninconvenience of the ``hash'' structure. Let us then write the following\nextended string associated with the $m \\times n$ rectangular array:\n\\begin{equation*}\n \\Big(\\underbrace{(\\overset{1}{A}\\overset{2}{C}\\overset{3}{D})}_{1}\n \\underbrace{(\\overset{1}{C}\\overset{2}{A}\\overset{3}{A})}_{2}\n \\underbrace{(\\overset{1}{A}\\overset{2}{D}\\overset{3}{C})}_{3}\n \\underbrace{(\\overset{1}{D}\\overset{2}{D}\\overset{3}{C})}_{4}\\Big)\n\\end{equation*}\nWe then get\n\\begin{align}\n &P_{12}(A,C) = 1\/3,\\quad P_{12}(C,A) = 1\/3,\\quad P_{12}(D,A) = 1\/3 \\nonumber \\\\\n &P_{13}(A,A) = 1\/3,\\quad P_{13}(C,D) = 1\/3,\\quad P_{13}(D,C) = 1\/3 \\nonumber \\\\\n &P_{14}(A,D) = 1\/3,\\quad P_{14}(C,D) = 1\/3,\\quad P_{14}(D,C) = 1\/3 \\nonumber \\\\\n &P_{23}(C,A) = 1\/3,\\quad P_{23}(A,D) = 1\/3,\\quad P_{23}(A,C) = 1\/3 \\nonumber \\\\\n &P_{24}(C,D) = 1\/3,\\quad P_{24}(A,D) = 1\/3,\\quad P_{24}(A,C) = 1\/3 \\nonumber \\\\\n &P_{34}(A,D) = 1\/3,\\quad P_{34}(D,D) = 1\/3,\\quad P_{34}(C,C) = 1\/3 \\label{eq:eq24}\n\\end{align}\nAll the other joint 
probabilities $P_{jk}(a,b)$ are equal to zero.\n\nThis is a feasible treatment for the ``hash'' structure. In the example\nsolved above the probabilities will come already ordered in triads. This\nwill save time in the calculations with the Perl system.\n\nIt should be stressed that the Perl computing system does not recognize any\nformal relations of Linear Algebra. However, it does quite well if these\nrelations are converted into products and sums. In order to give an example\nof working with the Perl system, we take a calculation with the usual Shannon\nEntropy measure. The calculation of the Entropy for the\ncolumns $j$,$k$ is done by\n\\begin{equation}\n S_{jk} = -\\sum_a\\sum_b P_{jk}(a,b)\\log P_{jk}(a,b) =\n -\\mathrm{Tr}\\Big(P_{jk}(\\log P_{jk})^{\\mathrm{T}}\\Big) \\label{eq:eq25}\n\\end{equation}\n\n\\noindent where $P_{jk}$ is the matrix given in eq.(\\ref{eq:eq5}) and\n$\\mathrm{Tr,T}$ stands for the operations of taking the trace and transposing\na matrix, respectively. The matrix $(\\log P_{jk})^{\\mathrm{T}}$ is given by\n\\begin{equation*}\n (\\log P_{jk})^{\\mathrm{T}} =\n \\begin{pmatrix}\n \\log P_{jk}(A,A) & \\ldots & \\log P_{jk}(Y,A) \\\\ \\relax\n \\vdots & {} & \\vdots \\\\ \\relax\n \\log P_{jk}(A,Y) & \\ldots & \\log P_{jk}(Y,Y)\n \\end{pmatrix}\n\\end{equation*}\n\n\\noindent we also include for a useful reference, the matrix\n\\begin{equation*}\n p_j(p_k)^{\\mathrm{T}} =\n \\begin{pmatrix}\n p_j(A)p_k(A) & \\ldots & p_j(A)p_k(Y) \\\\ \\relax\n \\vdots & {} & \\vdots \\\\ \\relax\n p_j(Y)p_k(A) & \\ldots & p_j(Y)p_k(Y)\n \\end{pmatrix}\n\\end{equation*}\n\nSince from eqs.(\\ref{eq:eq18})--(\\ref{eq:eq23}), we have\n\\begin{equation}\n P_{jk} = \\sum_{l=1}^m \\Pi_{jk}^{(l)} \\label{eq:eq26}\n\\end{equation}\nwe can write:\n\\begin{equation}\n S_{jk} = -\\mathrm{Tr}\\left(\\left(\\sum_{l=1}^m \\Pi_{jk}^{(l)}\\right)\n (\\log P_{jk})^{\\mathrm{T}}\\right) = -\\sum_{l=1}^m \\mathrm{Tr}\n \\Big(\\Pi_{jk}^{(l)}(\\log 
P_{jk})^{\\mathrm{T}}\\Big) \\label{eq:eq27}\n\\end{equation}\nThere is no problem calculating in Perl if we prepare eq.(\\ref{eq:eq26})\nby writing out beforehand all the products and sums to be done. The real\nproblem with Perl calculations is the arrangement of the output of values\n$P_{jk}(a,b)$, due to the ``hash'' structure, as has been stressed above.\n\n\\section{Entropy Measures. The Sharma-Mittal set and the associated Jaccard\nEntropy measure}\nWe start this section with the definition of the two-parameter Sharma-Mittal\nentropies \\cite{sharma,mondaini1,mondaini2}\n\\begin{align}\n (SM)_{jk}(r,s) &= -\\frac{1}{1-r}\\left(1-\\left(\\sum_a\\sum_b\\big(P_{jk}(a,b)\n \\big)^s\\right)^{\\frac{1-r}{1-s}}\\right) \\label{eq:eq28} \\\\\n (SM)_{j}(r,s) &= -\\frac{1}{1-r}\\left(1-\\left(\\sum_a \\big(p_{j}(a)\n \\big)^s\\right)^{\\frac{1-r}{1-s}}\\right) \\label{eq:eq29}\n\\end{align}\n\n\\noindent where $p_j(a)$ and $P_{jk}(a,b)$ are the simple and joint probabilities\nof occurrence of amino acids as defined in eqs.(\\ref{eq:eq1}) and (\\ref{eq:eq4}),\nrespectively. \\textbf{\\emph{r}}, \\textbf{\\emph{s}} are non-dimensional parameters.\n\nWe can associate with the entropy measures above their corresponding\none-parameter forms to be given by the limits:\n\\begin{align}\n H_{jk}(s) &= \\lim_{r \\to s} (SM)_{jk}(r,s) = -\\frac{1}{1-s}\n \\left(1-\\sum_a\\sum_b\\big(P_{jk}(a,b)\\big)^s\\right) \\label{eq:eq30} \\\\\n H_{j}(s) &= \\lim_{r \\to s} (SM)_{j}(r,s) = -\\frac{1}{1-s}\n \\left(1-\\sum_a\\big(p_{j}(a)\\big)^s\\right) \\label{eq:eq31}\n\\end{align}\n\n\\noindent These are the Havrda-Charvat Entropy Measures and they will be\nespecially emphasized in the present work. 
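As a numerical sanity check of the limit $r \to s$, the short Python sketch below implements the single-distribution Sharma-Mittal measure of eq.(\ref{eq:eq29}) and its Havrda-Charvat limit, eq.(\ref{eq:eq31}); the three-outcome equiprobable distribution is an arbitrary illustrative choice, for which the Havrda-Charvat value at $s=2$ is $2/3$.

```python
def sharma_mittal(probs, r, s):
    """Two-parameter Sharma-Mittal entropy of a single distribution."""
    z = sum(p ** s for p in probs if p > 0)
    return -(1.0 - z ** ((1.0 - r) / (1.0 - s))) / (1.0 - r)

def havrda_charvat(probs, s):
    """One-parameter limit r -> s of the Sharma-Mittal entropy."""
    return -(1.0 - sum(p ** s for p in probs if p > 0)) / (1.0 - s)

probs = [1 / 3, 1 / 3, 1 / 3]                 # illustrative distribution
print(havrda_charvat(probs, 2.0))             # 2/3 for three equiprobable outcomes
print(sharma_mittal(probs, 2.0 + 1e-9, 2.0))  # approaches the same value as r -> s
```

Taking $r \to 1$ instead recovers the Renyi value $\frac{1}{1-s}\log\sum_a p^s$, which for this distribution and $s=2$ equals $\log 3$.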
Other alternative proposals for\nthe single parameter entropies are given by\n\n\\noindent The Renyi's Entropy measures:\n\\begin{align}\n R_{jk}(s) &= \\lim_{r \\to 1} (SM)_{jk}(r,s) = \\frac{1}{1-s}\n \\log\\left(\\sum_a\\sum_b\\big(P_{jk}(a,b)\\big)^s\\right) \\label{eq:eq32} \\\\\n R_{j}(s) &= \\lim_{r \\to 1} (SM)_{j}(r,s) = \\frac{1}{1-s}\n \\log\\left(\\sum_a\\big(p_{j}(a)\\big)^s\\right) \\label{eq:eq33}\n\\end{align}\nThe Landsberg-Vedral Entropy measures:\n\\begin{align}\n L_{jk}(s) =\\! \\lim_{\\quad r \\to 2-s} (SM)_{jk}(r,s) &= \\frac{1}{1-s}\n \\left(1-\\Big(\\sum_a\\sum_b\\big(P_{jk}(a,b)\\big)^s\\Big)^{-1}\\right)\n \\label{eq:eq34} \\\\\n &= \\frac{H_{jk}(s)}{\\sum\\limits_a\\sum\\limits_b\\big(P_{jk}(a,b)\\big)^s}\n \\nonumber \\\\\n L_{j}(s) =\\! \\lim_{\\quad r \\to 2-s} (SM)_{j}(r,s) &= \\frac{1}{1-s}\n \\left(1-\\Big(\\sum_a\\big(p_{j}(a)\\big)^s\\Big)^{-1}\\right) \\label{eq:eq35} \\\\\n &= \\frac{H_j(s)}{\\sum\\limits_a\\big(p_j(a)\\big)^s} \\nonumber\n\\end{align}\nAll these Entropy measures have the free-parameter Shannon entropy\nin the limit $s \\to 1$.\n\\begin{align}\n \\lim_{s \\to 1} H_{jk}(s) &= \\lim_{s \\to 1} R_{jk}(s) = \\lim_{s \\to 1}\n L_{jk}(s) = S_{jk} \\label{eq:eq36} \\\\\n \\lim_{s \\to 1} H_j(s) &= \\lim_{s \\to 1} R_j(s) = \\lim_{s \\to 1} L_j(s)\n = S_j \\label{eq:eq37}\n\\end{align}\nwhere\n\\begin{align}\n S_{jk} &= -\\sum_a\\sum_b P_{jk}(a,b)\\log P_{jk}(a,b) \\tag{\\ref{eq:eq25}} \\\\\n S_j &= -\\sum_a p_j(a)\\log p_j(a) \\label{eq:eq38}\n\\end{align}\nare the Shannon entropy measures \\cite{mondaini6}.\n\nWe now introduce a convenient version of a Mutual Information measure:\n\\begin{equation}\n M_{jk}(r,s) = \\frac{1}{1-r}\\left(1-\\left(\\frac{\\sum\\limits_a\\sum\\limits_b\n \\big(P_{jk}(a,b)\\big)^s}{\\sum\\limits_a\\sum\\limits_b\\big(p_j(a)p_k(b)\n \\big)^s}\\right)^{\\frac{1-r}{1-s}}\\right) \\label{eq:eq39}\n\\end{equation}\n\n\\noindent We can see that $M_{jk}(r,0) = 0$ and if $\\exists\\, 
\\bar{\\jmath}, \\bar{k}$\nsuch that $P_{\\bar{\\jmath}\\bar{k}}(a,b) = p_{\\bar{\\jmath}}(a)p_{\\bar{k}}(b)$\n$\\Rightarrow$ $M_{\\bar{\\jmath}\\bar{k}}(r,s) = 0$. We also have,\n\\begin{equation}\n M_{jk}(1,s) = \\lim_{r \\to 1} M_{jk}(r,s) = -\\frac{1}{1-s}\\log\\left(\n \\frac{\\sum\\limits_a\\sum\\limits_b\\big(P_{jk}(a,b)\\big)^s}{\\sum\\limits_a\n \\sum\\limits_b\\big(p_j(a)p_k(b)\\big)^s}\\right) \\label{eq:eq40}\n\\end{equation}\nand in the limit $s \\to 1$\n\\begin{align}\n M_{jk} = \\lim_{s \\to 1} M_{jk}(1,s) =& \\sum_a\\sum_b P_{jk}(a,b)\\log\n P_{jk}(a,b) \\nonumber \\\\\n &- \\sum_a\\sum_b p_j(a)p_k(b)\\log\\big(p_j(a)p_k(b)\\big) \\label{eq:eq41}\n\\end{align}\nand from the identities:\n\\begin{align*}\n &\\sum_a p_j(a) = 1 \\,,\\, \\forall j \\,; \\quad \\sum_b p_k(b) = 1 \\,,\\,\n \\forall k \\\\\n &\\sum_a P_{jk}(a,b) = p_k(b) \\,,\\, \\forall j \\,; \\quad \\sum_b P_{jk}(a,b)\n = p_j(a) \\,,\\, \\forall k\n\\end{align*}\nobtained from eqs.(\\ref{eq:eq3}), (\\ref{eq:eq4}), (\\ref{eq:eq6}), (\\ref{eq:eq7}),\nwe can also write instead eq.(\\ref{eq:eq41}):\n\\begin{equation}\n M_{jk} = \\sum_a\\sum_b P_{jk}(a,b)\\log P_{jk}(a,b) - \\sum_a\\sum_b P_{jk}(a,b)\n \\log\\big(p_j(a)p_k(b)\\big) \\label{eq:eq42}\n\\end{equation}\nIt should be stressed that we are not assuming that $P_{jk}(a,b) \\equiv\np_j(a)p_k(b)$ above. 
This equality is assumed to be valid only for\n$j = \\bar{\\jmath}$, $k = \\bar{k}$.\n\nEq.(\\ref{eq:eq41}) or (\\ref{eq:eq42}) can be also written as:\n\\begin{equation}\n M_{jk} = -S_{jk} + S_j + S_k \\label{eq:eq43}\n\\end{equation}\n\n\\noindent where $S_{jk}$ and $S_j$, $S_k$ are the Shannon entropy measures for\njoint and single probabilities, respectively, eqs.(\\ref{eq:eq25}), (\\ref{eq:eq38}).\n\nAs an additional topic, we emphasize that the Mutual Information measure can\nbe also derived from the Kullback-Leibler divergence \\cite{mondaini6} which is\nwritten as\n\\begin{equation}\n (KL)_{jk}(b) = \\sum_a P_{jk}(a|b)\\log\\left(\\frac{P_{jk}(a|b)}{p_j(a)}\n \\right) \\label{eq:eq44}\n\\end{equation}\n\n\\noindent where $P_{jk}(a|b)$ is the Conditional probability, eq.(\\ref{eq:eq8}).\nWe then have,\n\\begin{equation}\n (KL)_{jk}(b) = \\sum_a \\frac{P_{jk}(a,b)}{p_k(b)}\\log\\left(\\frac{P_{jk}\n (a,b)}{p_j(a)p_k(b)}\\right) \\label{eq:eq45}\n\\end{equation}\n\n\\noindent and the $M_{jk}$ mutual information measure will be given by\n\\begin{equation}\n M_{jk} = \\sum_b p_k(b)(KL)_{jk}(b) = \\sum_a\\sum_b P_{jk}(a,b)\\log\\left(\n \\frac{P_{jk}(a,b)}{p_j(a)p_k(b)}\\right) \\label{eq:eq46}\n\\end{equation}\nwhich is the same as eq.(\\ref{eq:eq42}), q.e.d.\n\nAs the last topic of this section, we now introduce the concept of\nInformation Distance and we then derive the Jaccard Entropy measure as\nan obvious consequence. 
Let us write:\n\\begin{equation}\n d_{jk}(r,s) = H_{jk}(r,s) - M_{jk}(r,s) \\label{eq:eq47}\n\\end{equation}\nSince we are working with Entropy measures, we have to satisfy the\nnon-negativeness criteria:\n\\begin{equation}\n H_{jk}(r,s) \\geq 0 \\,;\\quad M_{jk}(r,s) \\geq 0 \\,;\\quad H_{jk}(r,s)\n -M_{jk}(r,s) \\geq 0 \\label{eq:eq48}\n\\end{equation}\nThis means that the requirement of satisfying the inequalities (\\ref{eq:eq48})\nimposes restrictions on the $r$, $s$ parameters, which have to be identified\nand taken into account in the description of protein databases by Entropy\nmeasures like $H_{jk}(r,s)$.\n\nFrom inequalities (\\ref{eq:eq48}), we can write,\n\\begin{equation}\n 0 \\leq d_{jk}(r,s) = H_{jk}(r,s) - M_{jk}(r,s) \\leq H_{jk}(r,s)\n \\label{eq:eq49}\n\\end{equation}\nand\n\\begin{equation}\n 0 \\leq J_{jk}(r,s) \\leq 1 \\label{eq:eq50}\n\\end{equation}\nwhere\n\\begin{equation}\n J_{jk}(r,s) = 1-\\frac{M_{jk}(r,s)}{H_{jk}(r,s)} \\label{eq:eq51}\n\\end{equation}\n\n\\noindent is the normalized Jaccard Entropy Measure as obtained from the\nnormalized Information Distance. We then give below the results of checking\nthe inequalities (\\ref{eq:eq48}) for some families of the Pfam database. We\nshall take the limit $r \\to s$ and work with the corresponding\none-parameter Entropy measures: $H_j(s)$, $H_{jk}(s)$, $M_{jk}(s)$,\n$J_{jk}(s)$. We then have to check:\n\\begin{align*}\n H_{jk}(s) \\geq 0 \\,&,\\quad M_{jk}(s) \\geq 0 \\,, \\quad H_{jk}(s)-M_{jk}(s)\n \\geq 0 \\,, \\\\\n &0 \\leq J_{jk}(s) = 1 - \\frac{M_{jk}(s)}{H_{jk}(s)} \\leq 1\n\\end{align*}\n\n\\begin{table}[H]\n \\begin{center}\n \\caption{\\small Study of the non-negativeness of $H_{jk}(s)$, $M_{jk}(s)$\n and $d_{jk}(s)$ values for the protein family PF06850. 
\\label{tab1}}\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n $\\mathbf{s}$ & $\\mathbf{H_{jk}(s)}$ & $\\mathbf{M_{jk}(s)}$ &\n $\\mathbf{d_{jk}(s)}$ \\\\\n \\hline\n $0.1$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.3$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.5$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.7$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.9$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.0$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.2$ & $0$ & $718$ & $16$ \\\\\n \\hline\n $1.5$ & $0$ & $1708$ & $38$ \\\\\n \\hline\n $1.7$ & $0$ & $2351$ & $61$ \\\\\n \\hline\n $1.9$ & $0$ & $2898$ & $192$ \\\\\n \\hline\n $2.0$ & $0$ & $3139$ & $309$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\\noindent Each entry in these tables gives the number of negative values\nfound for the corresponding measure over the pairs of columns. The\n$s$-values corresponding to negative $M_{jk}(s)$ values do not lead to a\nuseful characterization of the Jaccard Entropy measure, since the inequality\nin eq.(\\ref{eq:eq49}) is violated in this case, and these $s$-values will\nnot be taken into consideration. Other studies of the Entropy values,\nespecially those of the behaviour of the association of entropies, will give\nadditional restrictions on the feasible $s$-range. The scope of the present\nwork does not allow an intensive study of these techniques of entropy\nassociation \\cite{mondaini2}, which will appear in a forthcoming\ncontribution.\n\n\n\\begin{table}[!ht]\n \\begin{center}\n \\caption{\\small Study of the non-negativeness of $H_{jk}(s)$, $M_{jk}(s)$\n and $d_{jk}(s)$ values for the protein family PF00135. 
\\label{tab2}}\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n $\\mathbf{s}$ & $\\mathbf{H_{jk}(s)}$ & $\\mathbf{M_{jk}(s)}$ &\n $\\mathbf{d_{jk}(s)}$ \\\\\n \\hline\n $0.1$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.3$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.5$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.7$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.9$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.0$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.2$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.5$ & $0$ & $0$ & $467$ \\\\\n \\hline\n $1.7$ & $0$ & $0$ & $14509$ \\\\\n \\hline\n $1.9$ & $0$ & $0$ & $19026$ \\\\\n \\hline\n $2.0$ & $0$ & $0$ & $19451$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\\begin{table}[!hbt]\n \\begin{center}\n \\caption{\\small Study of the non-negativeness of $H_{jk}(s)$, $M_{jk}(s)$\n and $d_{jk}(s)$ values for the protein family PF00005. \\label{tab3}}\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n $\\mathbf{s}$ & $\\mathbf{H_{jk}(s)}$ & $\\mathbf{M_{jk}(s)}$ &\n $\\mathbf{d_{jk}(s)}$ \\\\\n \\hline\n $0.1$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.3$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.5$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.7$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $0.9$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.0$ & $0$ & $0$ & $0$ \\\\\n \\hline\n $1.2$ & $0$ & $8$ & $5$ \\\\\n \\hline\n $1.5$ & $0$ & $33$ & $4741$ \\\\\n \\hline\n $1.7$ & $0$ & $55$ & $9679$ \\\\\n \\hline\n $1.9$ & $0$ & $65$ & $12442$ \\\\\n \\hline\n $2.0$ & $0$ & $69$ & $13203$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\nThe results in the previous three tables clarify the idea of restricting the\n$s$-values of entropy measures in order to obtain a sound classification of\nfamilies and clans in the Pfam database. We stress that the non-negativeness\nof the values of $H_{jk}(s)$, $M_{jk}(s)$ and $d_{jk}(s)$ is actually\nguaranteed if we restrict to $s \\leq 1$, for all 1069 families which are\nclassified into 68 clans and were already characterized in Section 2. 
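The behaviour reported in the tables can be reproduced on any joint probability table. The Python sketch below (our illustration; the 2x2 joint distribution is invented for the example) evaluates the one-parameter measures H_jk(s), M_jk(s), d_jk(s) and J_jk(s) and verifies the inequalities (48)-(50) for a value s < 1:

```python
import math

def havrda_charvat(probs, s):
    """One-parameter Havrda-Charvat entropy (the r -> s limit), s != 1."""
    return -(1.0 / (1.0 - s)) * (1.0 - sum(p ** s for p in probs if p > 0))

def mutual_information(P, p_j, p_k, s):
    """One-parameter mutual information M_jk(s), eq. (40) with r -> 1."""
    num = sum(P[a][b] ** s for a in range(len(p_j)) for b in range(len(p_k)))
    den = sum((p_j[a] * p_k[b]) ** s
              for a in range(len(p_j)) for b in range(len(p_k)))
    return -(1.0 / (1.0 - s)) * math.log(num / den)

# Invented 2x2 joint distribution and its marginals.
P = [[0.3, 0.2], [0.1, 0.4]]
p_j = [0.5, 0.5]
p_k = [0.4, 0.6]

s = 0.5                          # a value inside the feasible region s <= 1
H = havrda_charvat([P[a][b] for a in range(2) for b in range(2)], s)
M = mutual_information(P, p_j, p_k, s)
d = H - M                        # information distance, eq. (47)
J = 1.0 - M / H                  # normalized Jaccard measure, eq. (51)

# Inequalities (48)-(50) are satisfied for this s <= 1.
assert H >= 0 and M >= 0 and d >= 0 and 0.0 <= J <= 1.0
```

For s > 1 the same routines can be used to count, over all pairs of columns of a family, how often M_jk(s) or d_jk(s) turns negative, which is the content of the three tables above.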
In figures\n\\ref{fig2}, \\ref{fig3}, \\ref{fig4}, we present the histograms of the\nJaccard Entropy measures for some values $s \\leq 1$ of the\n$s$-parameter.\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=1\\linewidth]{figure2}\n \\caption{\\small Histograms of Jaccard Entropy for family PF06850.\n \\label{fig2}}\n\\end{figure}\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=1\\linewidth]{figure3}\n \\caption{\\small Histograms of Jaccard Entropy for family PF00135.\n \\label{fig3}}\n\\end{figure}\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=1\\linewidth]{figure4}\n \\caption{\\small Histograms of Jaccard Entropy for family PF00005.\n \\label{fig4}}\n\\end{figure}\n\n\\newpage\n\nWe also present the curves corresponding to the Average Jaccard Entropy\nMeasure for 09 families, a measure which is well posed under the\nrestriction $s \\leq 1$ and is given by\n\\begin{equation*}\n J(s,f) = \\frac{2}{n(n-1)}\\sum_j\\sum_k J_{jk}(s,f)\n\\end{equation*}\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=1\\linewidth]{figure5}\n \\caption{\\small Curves of the Average Jaccard Entropy measures for families\n PF00005, PF05673, PF13481, PF00135, PF06850, PF11339, PF02388, PF09924,\n PF13718. \\label{fig5}}\n\\end{figure}\n\n\\section{A First Assessment of Protein Databases with Entropy Measures}\nAs a motivation for future research to be developed in sections 6, 7, we\nnow introduce the first application of the formulae derived in the previous\nsections in terms of a naive analysis of averages and standard deviations of\nEntropy measure distributions. This will also be the first attempt at\nclassifying the distribution of amino acids in a generic protein database.\nA robust approach to this research topic will be introduced and intensively\nanalyzed in sections 6, 7 with the introduction of ANOVA statistics and the\ncorresponding Hypothesis testing.\n\nWe then consider a Clan with \\textbf{\\emph{F}} families. 
The Havrda-Charvat\nentropy measure associated with a pair of columns of the representative\n$m \\times n$ array of each family with a specified value of the\n\\textbf{\\emph{s}} parameter is given by\n\\begin{equation}\n H_{jk}(s;f) = -\\frac{1}{1-s} \\left(1-\\sum_a\\sum_b\\big(P_{jk}(a,b;f)\n \\big)^s\\right) \\label{eq:eq52}\n\\end{equation}\n\n\\noindent We can then define an average of these entropy measures for\neach family by\n\\begin{equation}\n \\langle H(s;f) \\rangle = \\frac{2}{n(n-1)} \\sum_j\\sum_k H_{jk}(s;f)\n \\label{eq:eq53}\n\\end{equation}\n\n\\noindent We also consider the average value of the averages over the\nset of \\emph{F} families:\n\\begin{equation}\n \\langle H(s) \\rangle_F = \\frac{1}{F} \\sum_{f=1}^F \\langle H(s;f) \\rangle\n \\label{eq:eq54}\n\\end{equation}\n\n\\noindent The Standard deviation of the Entropy measures $H_{jk}(s;f)$\nwith respect to the average in eq.(\\ref{eq:eq53}) can be written\nas:\n\\begin{equation}\n \\sigma(s;f) = \\left(\\frac{1}{\\frac{n(n-1)}{2}-1}\\sum_j\\sum_k\\big(\n H_{jk}(s;f)-\\langle H(s;f) \\rangle\\big)^2\\right)^{1\/2} \\label{eq:eq55}\n\\end{equation}\n\n\\noindent and finally, the Standard deviation of the average\n$\\langle H(s;f)\\rangle$ with respect to the average\n$\\langle H(s)\\rangle_F$:\n\\begin{equation}\n \\sigma_F(s) = \\left(\\frac{1}{F-1} \\sum_{f=1}^F \\big(\\langle H(s;f)\\rangle -\n \\langle H(s)\\rangle_F\\big)^2\\right)^{1\/2} \\label{eq:eq56}\n\\end{equation}\n\nWe present in figs.\\ref{fig6}, \\ref{fig7} below the diagrams corresponding to\nformulae (\\ref{eq:eq53}) and (\\ref{eq:eq55}). We should stress that only\nClans with a minimum of five families are considered.\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{figure6}\n \\caption{\\small The Average values of the Havrda-Charvat Entropy measures\n for the families of a selected set of Clans and eleven values of the\n $s$-parameter, eq.(\\ref{eq:eq53}). 
\\label{fig6}}\n\\end{figure}\n\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figure7}\n \\caption{\\small The standard deviation of the Havrda-Charvat Entropy\n measures with relation to the averages of these entropies for each family\n and eleven values of the $s$-parameter, eq.(\\ref{eq:eq55}). \\label{fig7}}\n\\end{figure}\n\n\\newpage\n\nWe now present the values of $\\langle H(s)\\rangle_F$ and $\\sigma_F(s)$ for\na selected number of Clans and eleven values of the $s$-parameter, according\nto eqs.(\\ref{eq:eq54}) and (\\ref{eq:eq56}).\n\\begin{table}[H]\n \\begin{center}\n \\caption{\\small The average values and the standard deviation of the Average\n Havrda-Charvat Entropy measures for eleven values of the $s$-parameter and\n a selected set of 8 Clans. \\label{tab4}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multicolumn{9}{|c|}{Clans from Pfam 27.0 --- Havrda-Charvat\n Entropies} \\\\\n \\hline\n Clan number & $s$ & $\\langle H(s)\\rangle_F$ & $\\sigma(s)_F$ & {} &\n Clan number & $s$ & $\\langle H(s)\\rangle_F$ & $\\sigma(s)_F$ \\\\\n \\cline{1-4} \\cline{6-9}\n {} & $0.1$ & $44.212$ & $6.396$ & {} &\n {} & $0.1$ & $43.001$ & $4.667$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $23.375$ & $3.057$ & {} &\n {} & $0.3$ & $22.851$ & $2.295$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $12.992$ & $1.492$ & {} &\n {} & $0.5$ & $12.765$ & $1.161$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.645$ & $0.746$ & {} &\n {} & $0.7$ & $7.546$ & $0.605$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.784$ & $0.384$ & {} &\n {} & $0.9$ & $4.741$ & $0.326$ \\\\\n \\cline{2-4} \\cline{7-9}\n CL0020 & $1.0$ & $3.875$ & $0.278$ & {} &\n CL0123 & $1.0$ & $3.846$ & $0.242$ \\\\\n \\cline{2-4} \\cline{7-9}\n (38 families) & $1.2$ & $2.661$ & $0.150$ & {} &\n (06 families) & $1.2$ & $2.648$ & $0.137$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.680$ & 
$0.063$ & {} &\n {} & $1.5$ & $1.676$ & $0.062$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.311$ & $0.038$ & {} &\n {} & $1.7$ & $1.309$ & $0.038$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.062$ & $0.023$ & {} &\n {} & $1.9$ & $1.061$ & $0.024$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.967$ & $0.018$ & {} &\n {} & $2.0$ & $0.966$ & $0.019$ \\\\\n %\n \\cline{1-4} \\cline{6-9}\n {} & $0.1$ & $39.235$ & $10.175$ & {} &\n {} & $0.1$ & $46.084$ & $6.790$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $20.908$ & $4.984$ & {} &\n {} & $0.3$ & $24.260$ & $3.224$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $11.733$ & $2.510$ & {} &\n {} & $0.5$ & $13.417$ & $1.561$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $6.982$ & $1.305$ & {} &\n {} & $0.7$ & $7.853$ & $0.772$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.422$ & $0.704$ & {} &\n {} & $0.9$ & $4.886$ & $0.392$ \\\\\n \\cline{2-4} \\cline{7-9}\n CL0023 & $1.0$ & $3.604$ & $0.525$ & {} &\n CL0186 & $1.0$ & $3.949$ & $0.282$ \\\\\n \\cline{2-4} \\cline{7-9}\n (119 families) & $1.2$ & $2.504$ & $0.302$ & {} &\n (29 families) & $1.2$ & $2.701$ & $0.149$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.606$ & $0.144$ & {} &\n {} & $1.5$ & $1.696$ & $0.060$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.263$ & $0.093$ & {} &\n {} & $1.7$ & $1.320$ & $0.035$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.030$ & $0.063$ & {} &\n {} & $1.9$ & $1.067$ & $0.020$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.940$ & $0.053$ & {} &\n {} & $2.0$ & $0.970$ & $0.016$ \\\\\n \\cline{1-4} \\cline{6-9}\n %\n {} & $0.1$ & $44.906$ & $7.996$ & {} &\n {} & $0.1$ & $42.862$ & $4.566$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $23.671$ & $3.906$ & {} &\n {} & $0.3$ & $22.791$ & $2.190$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $13.114$ & $1.961$ & {} &\n {} & $0.5$ & $12.741$ & $1.072$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.692$ & $1.015$ & {} &\n {} & $0.7$ & $7.538$ & 
$0.537$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.799$ & $0.545$ & {} &\n {} & $0.9$ & $4.739$ & $0.275$ \\\\\n \\cline{2-4} \\cline{7-9}\n CL0028 & $1.0$ & $3.882$ & $0.406$ & {} &\n CL0192 & $1.0$ & $3.847$ & $0.199$ \\\\\n \\cline{2-4} \\cline{7-9}\n (41 families) & $1.2$ & $2.660$ & $0.232$ & {} &\n (26 families) & $1.2$ & $2.650$ & $0.107$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.676$ & $0.109$ & {} &\n {} & $1.5$ & $1.678$ & $0.044$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.307$ & $0.070$ & {} &\n {} & $1.7$ & $1.311$ & $0.026$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.058$ & $0.047$ & {} &\n {} & $1.9$ & $1.062$ & $0.015$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.963$ & $0.039$ & {} &\n {} & $2.0$ & $0.967$ & $0.012$ \\\\\n %\n \\cline{1-4} \\cline{6-9}\n {} & $0.1$ & $42.312$ & $8.023$ & {} &\n {} & $0.1$ & $43.251$ & $7.469$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $22.454$ & $3.857$ & {} &\n {} & $0.3$ & $22.905$ & $3.564$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $12.534$ & $1.896$ & {} &\n {} & $0.5$ & $12.757$ & $1.734$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.411$ & $0.956$ & {} &\n {} & $0.7$ & $7.524$ & $0.863$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.660$ & $0.496$ & {} &\n {} & $0.9$ & $4.719$ & $0.440$ \\\\\n \\cline{2-4} \\cline{7-9}\n CL0063 & $1.0$ & $3.784$ & $0.362$ & {} &\n CL0236 & $1.0$ & $3.828$ & $0.317$ \\\\\n \\cline{2-4} \\cline{7-9}\n (92 families) & $1.2$ & $2.611$ & $0.198$ & {} &\n (21 families) & $1.2$ & $2.636$ & $0.169$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.658$ & $0.086$ & {} &\n {} & $1.5$ & $1.669$ & $0.069$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.297$ & $0.051$ & {} &\n {} & $1.7$ & $1.304$ & $0.040$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.053$ & $0.032$ & {} &\n {} & $1.9$ & $1.058$ & $0.024$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.960$ & $0.026$ & {} &\n {} & $2.0$ & $0.964$ & $0.019$ \\\\\n 
\\cline{1-4} \\cline{6-9}\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{table}\n\n\nIn figs.\\ref{fig8}a and \\ref{fig8}b below, we present the graphs\ncorresponding to Table \\ref{tab4}. These results point to the need for a\nmore elaborate formulation of the problem.\n\n\\begin{figure}[!hbt]\n \\centering\n \\includegraphics[width=1\\linewidth]{figure8}\n \\caption{\\small (a) The average values of Havrda-Charvat Entropies for a\n set of 08 Clans. (b) The Standard Deviation of the averages for\n Havrda-Charvat Entropies for a set of 08 Clans.\\label{fig8}}\n\\end{figure}\n\nWe now proceed to analyze a naive proposal for testing the robustness of\nthe Clan concept. We will check whether Pseudo-Clans, which have the same\nnumber of families (a minimum of 05 families) as the corresponding Clans,\nhave essentially different values of $\\langle H(s)\\rangle_F$ and $\\sigma_F(s)$.\nThe families to be associated with a Pseudo-Clan are obtained by random\ndraws from the set of 1069 families, with withdrawal of the families already\ndrawn (sampling without replacement).\nIn Table \\ref{tab5} below we present the values $\\langle H(s)\\rangle_F$ and\n$\\sigma_F(s)$ for the Pseudo-Clans obtained by the procedure described above.\n\nFigures 9a and 9b compare the data of Table \\ref{tab4} (Clans) with those\nof Table \\ref{tab5} (Pseudo-Clans). Clans are in red, Pseudo-Clans in blue.\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=1\\linewidth]{figure9}\n \\caption{\\small (a) The comparison of Clans and Pseudo-Clans average values.\n (b) The comparison of Clans and Pseudo-Clans standard deviation values.\n \\label{fig9}}\n\\end{figure}\n\n\\begin{table}[!ht]\n \\begin{center}\n \\caption{\\small The average values and the standard deviation of the\n Average Havrda-Charvat Entropy measures for eleven values of the\n $s$-parameter and a selected set of 8 Pseudo-Clans. 
\\label{tab5}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multicolumn{9}{|c|}{Pseudo-Clans \/ Pfam 27.0 --- Havrda-Charvat\n Entropies} \\\\\n \\hline\n Pseudo-Clan & \\multicolumn{1}{c|}{\\multirow{2}{*}{$s$}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{$\\langle H(s)\\rangle_F$}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{$\\sigma(s)_F$}} &\n \\multicolumn{1}{c|}{} & Pseudo-Clan &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{$s$}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{$\\langle H(s)\\rangle_F$}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{$\\sigma(s)_F$}} \\\\\n number & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} &\n \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & number &\n \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} &\n \\multicolumn{1}{c|}{} \\\\\n \\cline{1-4} \\cline{6-9}\n {} & $0.1$ & $42.381$ & $8.224$ & {} &\n {} & $0.1$ & $42.042$ & $4.484$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $22.480$ & $3.979$ & {} &\n {} & $0.3$ & $22.359$ & $2.175$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $12.542$ & $1.972$ & {} &\n {} & $0.5$ & $12.508$ & $1.082$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.410$ & $1.005$ & {} &\n {} & $0.7$ & $7.409$ & $0.555$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.657$ & $0.528$ & {} &\n {} & $0.9$ & $4.667$ & $0.293$ \\\\\n \\cline{2-4} \\cline{7-9}\n PCL0020 & $1.0$ & $3.781$ & $0.389$ & {} &\n PCL0123 & $1.0$ & $3.791$ & $0.216$ \\\\\n \\cline{2-4} \\cline{7-9}\n (38 families) & $1.2$ & $2.607$ & $0.216$ & {} &\n (06 families) & $1.2$ & $2.618$ & $0.120$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.655$ & $0.096$ & {} &\n {} & $1.5$ & $1.663$ & $0.053$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.295$ & $0.059$ & {} &\n {} & $1.7$ & $1.301$ & $0.032$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.051$ & $0.037$ & {} &\n {} & $1.9$ & $1.056$ & $0.020$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.958$ & $0.030$ & 
{} &\n {} & $2.0$ & $0.962$ & $0.016$ \\\\\n %\n \\cline{1-4} \\cline{6-9}\n {} & $0.1$ & $41.023$ & $9.007$ & {} &\n {} & $0.1$ & $40.888$ & $7.719$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $21.825$ & $4.360$ & {} &\n {} & $0.3$ & $21.790$ & $3.768$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $12.219$ & $2.163$ & {} &\n {} & $0.5$ & $12.220$ & $1.891$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.247$ & $1.104$ & {} &\n {} & $0.7$ & $7.258$ & $0.981$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.573$ & $0.582$ & {} &\n {} & $0.9$ & $4.585$ & $0.529$ \\\\\n \\cline{2-4} \\cline{7-9}\n PCL0023 & $1.0$ & $3.714$ & $0.427$ & {} &\n PCL0186 & $1.0$ & $3.730$ & $0.394$ \\\\\n \\cline{2-4} \\cline{7-9}\n (119 families) & $1.2$ & $2.573$ & $0.240$ & {} &\n (29 families) & $1.2$ & $2.582$ & $0.227$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.640$ & $0.109$ & {} &\n {} & $1.5$ & $1.646$ & $0.109$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.286$ & $0.068$ & {} &\n {} & $1.7$ & $1.290$ & $0.071$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.046$ & $0.044$ & {} &\n {} & $1.9$ & $1.048$ & $0.049$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.954$ & $0.037$ & {} &\n {} & $2.0$ & $0.956$ & $0.041$ \\\\\n \\cline{1-4} \\cline{6-9}\n %\n {} & $0.1$ & $42.716$ & $7.435$ & {} &\n {} & $0.1$ & $42.756$ & $9.361$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $23.658$ & $3.573$ & {} &\n {} & $0.3$ & $22.666$ & $4.535$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $12.641$ & $1.756$ & {} &\n {} & $0.5$ & $12.635$ & $2.253$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.468$ & $0.886$ & {} &\n {} & $0.7$ & $7.458$ & $1.151$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.692$ & $0.460$ & {} &\n {} & $0.9$ & $4.681$ & $0.608$ \\\\\n \\cline{2-4} \\cline{7-9}\n PCL0028 & $1.0$ & $3.808$ & $0.335$ & {} &\n PCL0192 & $1.0$ & $3.797$ & $0.448$ \\\\\n \\cline{2-4} \\cline{7-9}\n (41 families) & $1.2$ & $2.625$ & $0.183$ & {} &\n (26 
families) & $1.2$ & $2.615$ & $0.250$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.664$ & $0.079$ & {} &\n {} & $1.5$ & $1.657$ & $0.113$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.302$ & $0.048$ & {} &\n {} & $1.7$ & $1.296$ & $0.070$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.056$ & $0.030$ & {} &\n {} & $1.9$ & $1.051$ & $0.046$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.962$ & $0.024$ & {} &\n {} & $2.0$ & $0.958$ & $0.037$ \\\\\n %\n \\cline{1-4} \\cline{6-9}\n {} & $0.1$ & $41.870$ & $8.134$ & {} &\n {} & $0.1$ & $41.551$ & $7.476$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.3$ & $22.252$ & $3.914$ & {} &\n {} & $0.3$ & $22.090$ & $3.591$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.5$ & $12.440$ & $1.929$ & {} &\n {} & $0.5$ & $12.357$ & $1.763$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.7$ & $7.366$ & $0.977$ & {} &\n {} & $0.7$ & $7.322$ & $0.887$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $0.9$ & $4.638$ & $0.510$ & {} &\n {} & $0.9$ & $4.614$ & $0.459$ \\\\\n \\cline{2-4} \\cline{7-9}\n PCL0063 & $1.0$ & $3.771$ & $0.375$ & {} &\n PCL0236 & $1.0$ & $3.752$ & $0.334$ \\\\\n \\cline{2-4} \\cline{7-9}\n (92 families) & $1.2$ & $2.603$ & $0.207$ & {} &\n (21 families) & $1.2$ & $2.595$ & $0.181$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.5$ & $1.654$ & $0.092$ & {} &\n {} & $1.5$ & $1.652$ & $0.077$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.7$ & $1.295$ & $0.057$ & {} &\n {} & $1.7$ & $1.294$ & $0.046$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $1.9$ & $1.052$ & $0.036$ & {} &\n {} & $1.9$ & $1.051$ & $0.028$ \\\\\n \\cline{2-4} \\cline{7-9}\n {} & $2.0$ & $0.959$ & $0.030$ & {} &\n {} & $2.0$ & $0.959$ & $0.022$ \\\\\n \\cline{1-4} \\cline{6-9}\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{table}\n\nFrom the figures and tables above we can see that the region $s \\leq 1$ leads\nto a better characterization of the Entropy measures distributions on protein\ndatabases.\n\nFor completeness, we list some useful formulae obtained 
from\neqs.(\\ref{eq:eq53}), (\\ref{eq:eq54}), (\\ref{eq:eq56}) which help to predict\nthe profile of the curves above:\n\\begin{equation}\n \\langle H(s;f) \\rangle = -\\frac{1}{1-s} \\left(1-\\frac{2}{n(n-1)}\n \\sum_{a,b,j,k} e^{-s \\lvert \\log P_{jk}(a,b;f)\\rvert}\\right) \\label{eq:eq57}\n\\end{equation}\n\\begin{equation}\n \\langle H(s) \\rangle_F = -\\frac{1}{1-s} \\left(1-\\frac{2}{Fn(n-1)}\n \\sum_{f,a,b,j,k} e^{-s \\lvert \\log P_{jk}(a,b;f)\\rvert} \\right)\n \\label{eq:eq58}\n\\end{equation}\n\\begin{equation}\n \\sigma_F(s) = \\frac{2}{\\lvert 1-s\\rvert\\, n(n-1)}\\left(\\frac{1}{F-1}\n \\sum_{f=1}^F\\left(\\sum_{a,b,j,k} e^{-s \\lvert \\log P_{jk}(a,b;f)\\rvert}\n -\\frac{1}{F}\\sum_{f'=1}^F\\sum_{a,b,j,k}\n e^{-s \\lvert \\log P_{jk}(a,b;f')\\rvert}\\right)^2\\right)^{1\/2}\n \\label{eq:eq59}\n\\end{equation}\n\n\\section{The treatment of data with the Maple Computing system and its\ninadequacy for calculating Joint probabilities of occurrence. Alternative\nsystems.}\nIn this section, we specifically study the performance of the Maple system and\nan example of an alternative computing system, Perl, for calculating\nthe simple and joint probabilities of occurrences of amino acids. We also use\nthese two systems for calculating 19 $s$-power values of these probabilities.\nWe now select the family PF06850 in order to get an idea of the CPU and real\ntimes which are necessary for calculating the probabilities and their powers\nfor the set of 1069 families. We start the calculation by adopting the Maple\nsystem, version 18. There are some comments to be made on the construction of\na computational code for calculating joint probabilities; these will be made\nin detail later in this section. We first present the CPU and real times for\nthe calculation of the simple and joint probabilities by using the developed\ncode. 
The table below\nreports the times for calculating $200 \\times 20 = 4 \\times 10^3$ and $200\n\\times \\frac{200-1}{2} \\times 20 \\times 20 = 7.96 \\times 10^6$ simple and\njoint probability values, respectively, for the PF06850 Pfam family.\n\n\\begin{table}[!ht]\n \\begin{center}\n \\caption{\\small CPU and real times for the calculation of the simple and\n joint probabilities of occurrence associated with the protein family\n PF06850. \\label{tab6}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n Maple System, version 18 & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n Simple probabilities & $0.527$ & $0.530$ \\\\\n \\hline\n Joint probabilities & $5073.049$ & $4650.697$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\nAfter calculating all values of the probabilities $p_j(a)$ and $P_{jk}(a,b)$,\nwe can proceed to evaluate the powers $\\big(p_j(a)\\big)^s$ and $\\big(P_{jk}(a,\nb)\\big)^s$ for 19 $s$-values. Our aim will be to use these values for\ncalculating the Entropy Measures according to eqs.(\\ref{eq:eq30}),\n(\\ref{eq:eq31}). It should be noticed that the values of $p_j(a)$ and\n$P_{jk}(a,b)$ have to be calculated only once by using a specific\ncomputational code already referred to in this work. Nevertheless, the use of\nthe code for calculating the joint probabilities associated with the 1069\nprotein families is the hardest of all calculations to be undertaken and it\ntakes too much time. 
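The essential bookkeeping behind such a code is simple to state: build each probability table once and store it. A compact Python sketch of this step follows (our illustration only; the alignment, identifiers and alphabet are invented toy data, whereas the families treated in this section have n = 200 columns over the 20 amino acids):

```python
from collections import Counter
from itertools import combinations

# Toy alignment block: m rows (sequences) x n columns of amino acid symbols.
msa = [
    "ACDA",
    "ACDC",
    "AGDA",
    "ACEA",
    "AGEC",
]
m, n = len(msa), len(msa[0])

# Simple probabilities p_j(a): relative frequencies per column j.
p = [{a: c / m for a, c in Counter(row[j] for row in msa).items()}
     for j in range(n)]

# Joint probabilities P_jk(a,b) for every pair of columns j < k, computed
# ONCE and stored, so that all n(n-1)/2 entropy values can be evaluated
# later without recomputing any probability.
P = {}
for j, k in combinations(range(n), 2):
    counts = Counter((row[j], row[k]) for row in msa)
    P[(j, k)] = {ab: c / m for ab, c in counts.items()}

# Sanity checks: each table sums to 1 and marginalizes back to p_j.
assert abs(sum(P[(1, 2)].values()) - 1.0) < 1e-12
assert abs(sum(v for (a, _), v in P[(1, 2)].items() if a == "C")
           - p[1]["C"]) < 1e-12
```

Only non-zero pair frequencies are stored, which also keeps the subsequent power and entropy evaluations over the n(n-1)/2 tables cheap.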
Once calculated, these probabilities should be grouped into sets of 400\nvalues, each set corresponding to one pair of columns $j$, $k$ among the\n$\\frac{n(n-1)}{2}$ feasible ones, and each set is then used to compute the\nentropy value associated with that pair of columns. A careless\nimplementation, given an $s$-value, would calculate the entropy of the\nfirst pair, $H_{1\\,2}$, as a function of the 400 variables\n$\\big(P_{1\\,2}(a,b)\\big)^s$ and would then recompute all\n$\\frac{n(n-1)}{2} \\times (20)^2$ joint probability values from scratch in\norder to extract the set needed for the next entropy value, instead of\ntreating each entropy as a function of several variables evaluated from a\nsingle stored table of probabilities. After circumventing these pitfalls,\nwe keep all calculated values of the probabilities and then proceed to the\ncalculation of the powers $\\big(\np_j(a)\\big)^s$, $\\big(P_{jk}(a,b)\\big)^s$ of these values and the\ncorresponding entropy measures. In tables \\ref{tab7}, \\ref{tab8}, \\ref{tab9},\n\\ref{tab10} below, we report all these calculations for 19 values of the\n$s$-parameter.\n\n\\begin{table}[!ht]\n \\begin{center}\n \\caption{\\small CPU and real times for the calculation of 19 $s$-powers\n of simple probabilities of occurrence associated with the protein family\n PF06850. 
\\label{tab7}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\multicolumn{3}{|c|}{Maple System, version 18,} \\\\\n \\multicolumn{3}{|c|}{$s$-powers of probability $\\big(p_j(a)\\big)^s$}\\\\\n \\hline\n $s$ & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $0.263$ & $0.358$ \\\\\n \\hline\n $0.2$ & $0.137$ & $0.145$ \\\\\n \\hline\n $0.3$ & $0.268$ & $0.277$ \\\\\n \\hline\n $0.4$ & $0.139$ & $0.153$ \\\\\n \\hline\n $0.5$ & $0.240$ & $0.219$ \\\\\n \\hline\n $0.6$ & $0.144$ & $0.157$ \\\\\n \\hline\n $0.7$ & $0.276$ & $0.254$ \\\\\n \\hline\n $0.8$ & $0.144$ & $0.157$ \\\\\n \\hline\n $0.9$ & $0.264$ & $0.235$ \\\\\n \\hline\n $1.0$ & $0.088$\/$0.151$ & $0.095$\/$0.307$ \\\\\n \\hline\n $2.0$ & $0.153$ & $0.095$ \\\\\n \\hline\n $3.0$ & $0.128$ & $0.131$ \\\\\n \\hline\n $4.0$ & $0.148$ & $0.141$ \\\\\n \\hline\n $5.0$ & $0.096$ & $0.144$ \\\\\n \\hline\n $6.0$ & $0.148$ & $0.167$ \\\\\n \\hline\n $7.0$ & $0.148$ & $0.155$ \\\\\n \\hline\n $8.0$ & $0.181$ & $0.094$ \\\\\n \\hline\n $9.0$ & $0.104$ & $0.092$ \\\\\n \\hline\n $10.0$ & $0.104$ & $0.100$ \\\\\n \\hline\n Total & $3.173$ & $3.164$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\\begin{table}[!ht]\n \\begin{center}\n \\caption{\\small CPU and real times for the calculation of 19 $s$-powers\n of joint probabilities of occurrence associated with the protein family\n PF06850. 
\\label{tab8}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\multicolumn{3}{|c|}{Maple System, version 18,} \\\\\n \\multicolumn{3}{|c|}{$s$-powers of probability $\\big(P_{jk}(a,b)\\big)^s$}\n \\\\\n \\hline\n $s$ & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $390.432$ & $206.646$ \\\\\n \\hline\n $0.2$ & $382.887$ & $202.282$ \\\\\n \\hline\n $0.3$ & $401.269$ & $210.791$ \\\\\n \\hline\n $0.4$ & $416.168$ & $216.993$ \\\\\n \\hline\n $0.5$ & $427.572$ & $221.541$ \\\\\n \\hline\n $0.6$ & $430.604$ & $223.227$ \\\\\n \\hline\n $0.7$ & $421.904$ & $218.484$ \\\\\n \\hline\n $0.8$ & $434.888$ & $224.267$ \\\\\n \\hline\n $0.9$ & $431.948$ & $223.023$ \\\\\n \\hline\n $1.0$ & $442.933$\/$482.612$ & $224.731$\/$259.301$ \\\\\n \\hline\n $2.0$ & $176.212$ & $147.455$ \\\\\n \\hline\n $3.0$ & $234.100$ & $174.853$ \\\\\n \\hline\n $4.0$ & $289.184$ & $181.552$ \\\\\n \\hline\n $5.0$ & $327.740$ & $178.117$ \\\\\n \\hline\n $6.0$ & $334.800$ & $194.691$ \\\\\n \\hline\n $7.0$ & $349.064$ & $195.258$ \\\\\n \\hline\n $8.0$ & $361.304$ & $195.437$ \\\\\n \\hline\n $9.0$ & $386.217$ & $197.150$ \\\\\n \\hline\n $10.0$ & $397.276$ & $197.868$ \\\\\n \\hline\n Total & $7036.502$ & $3834.366$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\nThe last row in tables \\ref{tab7}, \\ref{tab8}, includes the times necessary\nfor calculating the probabilities of table \\ref{tab6}.\n\nWe are then able to proceed to the calculation of the corresponding\nHavrda-Charvat entropy measures, $H_j(s)$, $H_{jk}(s)$: The results for 19\n$s$-values are given in tables \\ref{tab9}, \\ref{tab10} below.\n\n\\begin{table}[!ht]\n \\begin{center}\n \\caption{\\small CPU and real times for the calculation of the Entropy measures\n $H_j(s)$ for the protein family PF06850. 
\\label{tab9}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\multicolumn{3}{|c|}{Maple System, version 18,} \\\\\n \\multicolumn{3}{|c|}{Entropy Measures $H_j(s)$}\\\\\n \\hline\n $s$ & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $0.148$ & $0.261$ \\\\\n \\hline\n $0.2$ & $0.084$ & $0.153$ \\\\\n \\hline\n $0.3$ & $0.120$ & $0.189$ \\\\\n \\hline\n $0.4$ & $0.124$ & $0.198$ \\\\\n \\hline\n $0.5$ & $0.160$ & $0.299$ \\\\\n \\hline\n $0.6$ & $0.092$ & $0.139$ \\\\\n \\hline\n $0.7$ & $0.159$ & $0.199$ \\\\\n \\hline\n $0.8$ & $0.137$ & $0.175$ \\\\\n \\hline\n $0.9$ & $0.120$ & $0.166$ \\\\\n \\hline\n $1.0$ & $0.192$\/$0.175$ & $0.339$\/$0.159$ \\\\\n \\hline\n $2.0$ & $0.144$ & $0.099$ \\\\\n \\hline\n $3.0$ & $0.147$ & $0.105$ \\\\\n \\hline\n $4.0$ & $0.084$ & $0.101$ \\\\\n \\hline\n $5.0$ & $0.136$ & $0.070$ \\\\\n \\hline\n $6.0$ & $0.096$ & $0.119$ \\\\\n \\hline\n $7.0$ & $0.115$ & $0.078$ \\\\\n \\hline\n $8.0$ & $0.120$ & $0.109$ \\\\\n \\hline\n $9.0$ & $0.133$ & $0.080$ \\\\\n \\hline\n $10.0$ & $0.132$ & $0.133$ \\\\\n \\hline\n Total & $2.443$ & $3.012$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\\begin{table}[!hb]\n \\begin{center}\n \\caption{\\small CPU and real times for the calculation of the Entropy measures\n $H_{jk}(s)$ for the protein family PF06850. 
\\label{tab10}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\multicolumn{3}{|c|}{Maple System, version 18,} \\\\\n \\multicolumn{3}{|c|}{Entropy Measures $H_{jk}(s)$}\\\\\n \\hline\n $s$ & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $156.332$ & $133.242$ \\\\\n \\hline\n $0.2$ & $160.797$ & $136.706$ \\\\\n \\hline\n $0.3$ & $169.024$ & $140.960$ \\\\\n \\hline\n $0.4$ & $176.824$ & $147.853$ \\\\\n \\hline\n $0.5$ & $184.120$ & $150.163$ \\\\\n \\hline\n $0.6$ & $190.304$ & $154.058$ \\\\\n \\hline\n $0.7$ & $196.633$ & $157.750$ \\\\\n \\hline\n $0.8$ & $205.940$ & $164.101$ \\\\\n \\hline\n $0.9$ & $215.559$ & $169.549$ \\\\\n \\hline\n $1.0$ & $253.648$\/$124.501$ & $204.634$\/$148.993$ \\\\\n \\hline\n $2.0$ & $141.148$ & $184.030$ \\\\\n \\hline\n $3.0$ & $158.536$ & $167.173$ \\\\\n \\hline\n $4.0$ & $173.136$ & $181.282$ \\\\\n \\hline\n $5.0$ & $197.680$ & $238.723$ \\\\\n \\hline\n $6.0$ & $215.000$ & $111.476$ \\\\\n \\hline\n $7.0$ & $145.257$ & $115.221$ \\\\\n \\hline\n $8.0$ & $156.848$ & $122.957$ \\\\\n \\hline\n $9.0$ & $157.300$ & $126.233$ \\\\\n \\hline\n $10.0$ & $166.399$ & $135.080$ \\\\\n \\hline\n Total & $3420.485$ & $2941.791$ \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\nThe total time for calculating the full Havrda-Charvat Entropy measure\ncontent of the probabilities of occurrence of amino acids in a\nspecific family is given in table \\ref{tab11} below.\n\n\\begin{table}[!hb]\n \\begin{center}\n \\caption{\\small Total CPU and real times for calculating the Entropy measure\n content of a family PF06850 and approximations for Grand\n Total of all sample space. 
\\label{tab11}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\small\n \\begin{tabular}{|c|c|c|}\n \\hline\n Maple System, & Entropy Measures &\n Entropy Measures \\\\\n version 18 & $H\\!_j(s)$ --- 19 $s$-values & $H\\!_{jk}(s)$ ---\n 19 $s$-values \\\\\n \\hline\n Total CPU time & $0.527+3.173+2.443$\n & $5,073.049+7,036.502+$ \\\\\n (family PF06850) & $=6.143$ sec & $3,420.485=15,530.036$ sec \\\\\n \\hline\n Grand Total CPU time & $6,566.867$ sec\n & $16,601,608.484$ sec \\\\\n (1069 families) & $=1.824$ h & $=192.148$ days \\\\\n \\hline\n Total Real time & $0.530+3.164+3.012$\n & $4,650.697+3,834.366+$ \\\\\n (family PF06850) & $=6.706$ sec & $2,941.791=11,426.854$ sec \\\\\n \\hline\n Grand Total Real time & $7,168.714$ sec\n & $12,215,306.926$ sec \\\\\n (1069 families) & $=1.991$ h & $=141.381$ days \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{table}\n\nFrom inspection of table \\ref{tab11}, we see that the Total CPU\nand real times for calculating the Havrda-Charvat entropies $H_j(s)$ of the\nsimple probabilities of occurrence $p_j(a)$ are obtained by summing up the\ntotal time results from tables \\ref{tab6}, \\ref{tab7} and \\ref{tab9}. For the\nHavrda-Charvat entropies $H_{jk}(s)$, we have to sum up the total times of tables\n\\ref{tab6}, \\ref{tab8} and \\ref{tab10}. We assume that the times for\ncalculating the Entropy Measure content of each family will not differ too much,\nso the results in the 3rd and 5th rows of table \\ref{tab11} are obtained by\nmultiplying by 1069 --- the number of families in the sample space.\n\nThe results of table \\ref{tab11} suggest that the Maple computing\nsystem is inadequate for analyzing the Entropy measure content of a protein\ndatabase of this size.\nWe have restricted ourselves to operating with the usual operating systems, Linux\nor OSX, on laptops. We have also worked with the alternative Perl computing\nsystem. 
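The extrapolation to the whole sample space described above is plain arithmetic, and the Grand Total rows of table \ref{tab11} can be checked directly. A minimal sketch follows in Python, used here purely for illustration (the computations reported in this work were performed with Maple and Perl); the figures are those reported for the family PF06850.

```python
# Extrapolation behind the "Grand Total" rows of table 11: the per-family
# totals are summed and then scaled by the 1069 families of the sample space.

FAMILIES = 1069  # number of Pfam families in the sample space

def grand_total(per_family_seconds: float) -> float:
    """Extrapolate a single-family time to the whole sample space."""
    return per_family_seconds * FAMILIES

# H_j(s): CPU totals for PF06850 from tables 6, 7 and 9
hj_cpu = 0.527 + 3.173 + 2.443                  # = 6.143 s
print(round(grand_total(hj_cpu), 3))            # 6566.867 s
print(round(grand_total(hj_cpu) / 3600, 3))     # 1.824 h

# H_jk(s): CPU totals for PF06850 from tables 6, 8 and 10
hjk_cpu = 5073.049 + 7036.502 + 3420.485        # = 15530.036 s
print(round(grand_total(hjk_cpu) / 86400, 3))   # 192.148 days
```

These reproduce the $6{,}566.867$ sec and $16{,}601{,}608.484$ sec Grand Totals of table \ref{tab11} exactly.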
The Maple computing system has an ``array'' structure which is very\neffective for calculations that require a knowledge of mathematical\nmethods. By contrast, the alternative Perl computing system has a\n``hash'' structure, as was emphasized in the 2nd section, and it performs\nelementary operations with very large numbers very well. The comparison is\nessentially that of a senior scholar who is conversant with a large body\nof mathematical methods versus a ``genius'' brought to fame by the media,\nwho is only able to multiply numbers of many digits very quickly.\n\nIn order to specify the computational configurations available with\nthe usual desktops and laptops, we list below those which have been\nused in the present work. The computing systems were Maple ($M$) and Perl\n($P$); the operating systems, Linux ($L$) and Mac OSX ($O$); and the\nstructures, Array I ($A_I$), Array II ($A_{I\\!I}$) and Hash ($H$) (page 8,\nsection 2). The available computational configurations for undertaking the task\nof assessing protein databases with Entropy measures can be listed as:\n\\begin{enumerate}\n \\item $MLA_I$ --- Maple, Linux, Array I\n \\item $POH$ --- Perl, OSX, Hash\n \\item $PLA_{I\\!I}$ --- Perl, Linux, Array II\n \\item $POA_{I\\!I}$ --- Perl, OSX, Array II\n\\end{enumerate}\n\nThe following table displays a comparison of the CPU and real times for\nthe calculation of 19 $s$-values of the joint probabilities $\\big(P_{jk}(a,\nb)\\big)^s$ for the protein family PF06850 by the four configurations\nnamed above. 
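The array-versus-hash distinction discussed above can be made concrete. The sketch below is in Python purely for illustration (the actual structures were Maple arrays and Perl hashes), and the uniform joint distribution is a placeholder, not Pfam data.

```python
# Illustrative contrast between the two storage structures: an "array"
# indexed by integer positions (as in Maple) versus a "hash" keyed directly
# by amino-acid pairs (as in Perl).  Placeholder uniform probabilities.

AMINO = "ACDEFGHIKLMNPQRSTVWY"            # the 20 standard one-letter codes
idx = {a: i for i, a in enumerate(AMINO)}  # letter -> integer index

# Array-like structure: a dense 20 x 20 table of joint probabilities
# P_jk(a, b); integer indexing favours systematic loops over all pairs.
p_array = [[1.0 / 400.0 for _ in AMINO] for _ in AMINO]

# Hash-like structure: entries keyed by the pair (a, b) itself; lookup by
# key favours sparse, occurrence-driven bookkeeping of very large counts.
p_hash = {(a, b): 1.0 / 400.0 for a in AMINO for b in AMINO}

# Both structures answer the same query:
a, b = "A", "W"
assert p_array[idx[a]][idx[b]] == p_hash[(a, b)] == 0.0025
```

The dense array pays its full memory cost up front but iterates fastest; the hash stores only the observed pairs, at the price of per-lookup overhead, which is consistent with the timing ordering reported below.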
It should be stressed that we are here comparing only the times\nfor calculating the $s$-powers, the values of the probabilities\nthemselves having been previously calculated and kept in a file.\n\n\\begin{sidewaystable}[p]\n \\begin{center}\n \\caption{\\small A comparison of calculation times (CPU and real) for 19\n $s$-powers of joint probabilities of the PF06850 protein family only, using\n the four configurations $MLA_I$, $POA_{I\\!I}$, $PLA_{I\\!I}$, $POH$. \\label{tab12}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\small\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \\hline\n \\multicolumn{1}{|c|}{\\multirow{2}{*}{$\\mathbf{s}$}} &\n \\multicolumn{4}{c|}{$\\mathbf{t_{CPU}}$ (sec)} &\n \\multicolumn{4}{c|}{$\\mathbf{t_R}$ (sec)} \\\\\n \\cline{2-9}\n \\multicolumn{1}{|c|}{} & $MLA_I$ & $POA_{I\\!I}$ & $PLA_{I\\!I}$ & $POH$ &\n $MLA_I$ & $POA_{I\\!I}$ & $PLA_{I\\!I}$ & $POH$ \\\\\n \\hline\n $0.1$ & $390.432$ & $33.478$ & $\\mathbf{41.633}$ & $24.849$ & $206.646$ &\n $88.527$ & $83.139$ & $26.862$ \\\\\n \\hline\n $0.2$ & $382.887$ & $31.711$ & $26.483$ & $25.093$ & $202.282$ & $80.624$\n & $53.384$ & $26.904$ \\\\\n \\hline\n $0.3$ & $401.269$ & $31.726$ & $25.286$ & $24.751$ & $210.791$ & $80.519$\n & $50.689$ & $27.305$ \\\\\n \\hline\n $0.4$ & $416.168$ & $31.448$ & $26.138$ & $26.345$ & $216.993$ & $79.032$\n & $51.409$ & $27.687$ \\\\\n \\hline\n $0.5$ & $427.572$ & $32.860$ & $\\mathbf{25.822}$ & $26.726$ & $221.541$\n & $93.048$ & $51.334$ & $27.528$ \\\\\n \\hline\n $0.6$ & $430.604$ & $33.444$ & $27.013$ & $25.021$ & $223.227$ & $102.317$\n & $52.889$ & $25.408$ \\\\\n \\hline\n $0.7$ & $421.904$ & $31.053$ & $25.814$ & $25.011$ & $218.484$ & $79.255$\n & $51.466$ & $26.414$ \\\\\n \\hline\n $0.8$ & $434.888$ & $31.526$ & $26.725$ & $25.183$ & $224.267$ & $80.469$\n & $53.388$ & $25.668$ \\\\\n \\hline\n $0.9$ & $431.948$ & $31.482$ & $26.895$ & $25.409$ & $223.023$ & $80.002$\n & $53.579$ & $25.536$ \\\\\n \\hline\n $1.0$ & $442.933$ & $32.056$ & $25.990$ & $25.096$ 
& $224.731$ & $80.917$\n & $51.687$ & $25.640$ \\\\\n \\hline\n $2.0$ & $176.212$ & $32.638$ & $27.089$ & $26.012$ & $147.454$ & $80.751$\n & $54.384$ & $26.960$ \\\\\n \\hline\n $3.0$ & $234.100$ & $31.892$ & $25.766$ & $24.498$ & $174.853$ & $85.853$\n & $51.843$ & $24.717$ \\\\\n \\hline\n $4.0$ & $284.184$ & $31.662$ & $26.515$ & $25.251$ & $181.552$ & $91.837$\n & $52.636$ & $25.718$ \\\\\n \\hline\n $5.0$ & $327.740$ & $32.295$ & $27.516$ & $24.925$ & $178.117$ & $87.486$\n & $54.908$ & $25.814$ \\\\\n \\hline\n $6.0$ & $334.800$ & $32.674$ & $28.126$ & $25.440$ & $194.691$ & $86.569$\n & $54.611$ & $25.847$ \\\\\n \\hline\n $7.0$ & $349.064$ & $31.674$ & $23.908$ & $26.389$ & $195.258$ & $86.215$\n & $49.262$ & $27.745$ \\\\\n \\hline\n $8.0$ & $361.304$ & $33.105$ & $26.020$ & $25.106$ & $195.437$ & $116.601$\n & $51.889$ & $26.735$ \\\\\n \\hline\n $9.0$ & $386.217$ & $31.881$ & $26.208$ & $24.783$ & $197.150$ & $81.372$\n & $53.114$ & $25.155$ \\\\\n \\hline\n $10.0$ & $397.276$ & $32.269$ & $26.125$ & $24.979$ & $197.868$ & $87.963$\n & $52.541$ & $26.504$ \\\\\n \\hline \\hline\n Total & $7036.502$ & $611.374$ & $515.072$ & $480.867$ & $3834.365$\n & $1649.357$ & $1028.655$ & $500.147$ \\\\\n \\hline \\hline\n Total & $7,522,020.640$ & $653,588.806$ & $550,611.968$ & $514,046.813$ &\n $4,098,936.190$ & $1,763,162.630$ & $1,099,632.200$ & $534,157.143$ \\\\\n (1069) families & $=87.067$ days & $=7.565$ days & $=6.373$ days & $=5.950$ days &\n $=47.441$ days & $=20.407$ days & $=12.727$ days & $=6.188$ days \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{sidewaystable}\n\nWe can check that the times on table \\ref{tab12} seem to be generically\nordered as\n\\begin{equation}\n t_{MLA_I} > t_{POA_{I\\!I}} > t_{PLA_{I\\!I}} > t_{POH} \\label{eq:eq60}\n\\end{equation}\nFrom this ordering of computing times, we are then able to consider that the\ninconvenience of using the ``hash'' structure which has been emphasized on\nsection 2, was not 
circumvented by working with a modified array structure\n($A_{I\\!I}$ instead of $H$), at least for the Mac Pro machine used in these\ncalculations. We also do not know whether this machine was running an\n``overload'' of programs from the assumed part-time job of the experimenter\n(maybe a 99.99\\% time job!). In any case, the usual Hash structure of the\nPerl computing system has delayed the calculation of the Entropy Measures,\nand even with the help of the modified $A_{I\\!I}$ structure, the computation\ndoes not succeed when operated on an OSX computing system.\n\nOn the other hand, the configuration $MLA_I$ could be chosen for parallelizing\nthe adopted code in future work with supercomputer facilities.\nIf we wish to avoid this kind of computational facility, in the belief that the\nproblem of classifying the distribution of amino acids of a protein database\nin terms of Entropy Measures can be treated with less powerful but very\nobjective ``weapons'', we should instead look for very fast laptop machines,\nworking with a Linux operating system, a Perl computing system and\na modified array structure. This means that it would be worthwhile to\ncontinue the present work with the $PLA_{I\\!I}$ configuration. This is\nnow in progress and will be published elsewhere.\n\nWe summarize the conclusions commented on above in tables \\ref{tab13}--\\ref{tab16}\nbelow for the calculation of CPU and Real times of 19 $s$-powers of joint\nprobabilities $\\big(P_{jk}(a,b)\\big)^s$ and the corresponding values of\nHavrda-Charvat entropy measures. The times necessary for calculating the joint\nprobabilities themselves have not been taken into consideration. 
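The two stages timed in the tables, the $s$-powers of the probabilities and the entropies built from them, can be sketched as follows. The sketch is in Python for illustration only, and it assumes the common normalization $H_s(p) = \big(1 - \sum_i p_i^s\big)/(s-1)$, with the Shannon entropy as the limiting case $s \to 1$; the exact normalization adopted in this work may differ.

```python
import math

# Minimal sketch of the two timed stages, under the assumed normalization
#     H_s(p) = (1 - sum_i p_i**s) / (s - 1),   H_1 = Shannon entropy.

def s_powers(probs, s):
    """Stage 1 (cf. tables 7/8): raise every probability to the power s."""
    return [p ** s for p in probs if p > 0.0]

def havrda_charvat(probs, s):
    """Stage 2 (cf. tables 9/10): entropy built from the s-powers."""
    if s == 1.0:  # limiting case: Shannon entropy
        return -sum(p * math.log(p) for p in probs if p > 0.0)
    return (1.0 - sum(s_powers(probs, s))) / (s - 1.0)

uniform = [0.25] * 4                   # toy distribution, not Pfam data
print(havrda_charvat(uniform, 1.0))    # Shannon: log 4, about 1.386
print(havrda_charvat(uniform, 2.0))    # (1 - sum p^2) / 1 = 0.75
```

Separating the two stages mirrors the bookkeeping of the tables: stage 1 dominates the cost, so its times are reported independently of the comparatively cheap entropy sums of stage 2.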
It would be\nvery useful to make a comparison of the results of table \\ref{tab10} with those\non tables \\ref{tab14}, \\ref{tab16}, and table \\ref{tab12} with tables \\ref{tab13},\n\\ref{tab15} as well.\n\nAs a last remark of this section, we shall take into consideration,\nthe restrictions of $s \\leq 1$ for working with Jaccard Entropy\nmeasures and we calculate the total CPU and real times for the set\nof $s$-values: $s =$ $0.1$, $0.2$, $0.3$, $0.4$, $0.5$, $0.6$, $0.7$,\n$0.8$, $0.9$, $1.0$. The results are presented in table \\ref{tab17}\nbelow for the $PLA_{I\\!I}$ configuration and the calculation of the\nHavrda-Charvat entropies.\n\n\\begin{sidewaystable}[p]\n \\begin{center}\n \\caption{\\small Calculation of CPU and Real times of 19 $s$-powers of joint\n probabilities $\\big(P_{jk}(a,b)\\big)^s$ measures for 06 families from 03\n Clans with the $POA_{I\\!I}$ configuration. \\label{tab13}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n \\hline\n {} & \\multicolumn{4}{c|}{CL0028} & \\multicolumn{4}{c|}{CL0023}\n & \\multicolumn{4}{c|}{CL0257} \\\\\n \\cline{2-13}\n s & \\multicolumn{2}{c|}{PF06850} & \\multicolumn{2}{c|}{PF00135}\n & \\multicolumn{2}{c|}{PF00005} & \\multicolumn{2}{c|}{PF13481}\n & \\multicolumn{2}{c|}{PF02388} & \\multicolumn{2}{c|}{PF09924} \\\\\n \\cline{2-13}\n {} & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $33.478$ & $88.527$ & $46.747$ & $130.014$ & $41.475$ & $169.679$\n & $44.187$ & $171.636$ & $44.716$ & $242.376$ & $46.177$ & $288.259$ \\\\\n \\hline\n $0.2$ & $31.711$ & $80.624$ & $42.157$ & $90.687$ & $36.893$ & $78.893$\n & $38.567$ & $88.754$ & $37.088$ & $88.091$ & $40.709$ & $148.371$ \\\\\n \\hline\n $0.3$ & $31.726$ & $80.519$ & $39.957$ & $82.240$ & $36.929$ & $97.270$\n & $38.800$ & 
$81.613$ & $37.350$ & $88.664$ & $38.986$ & $119.781$ \\\\\n \\hline\n $0.4$ & $31.448$ & $79.032$ & $41.737$ & $87.934$ & $36.252$ & $80.428$\n & $38.500$ & $78.757$ & $35.592$ & $80.337$ & $36.773$ & $87.217$ \\\\\n \\hline\n $0.5$ & $32.860$ & $93.048$ & $41.130$ & $89.997$ & $38.203$ & $93.595$\n & $37.618$ & $80.179$ & $35.117$ & $78.476$ & $35.534$ & $82.528$ \\\\\n \\hline\n $0.6$ & $33.944$ & $102.317$ & $41.417$ & $120.420$ & $37.143$ & $81.289$\n & $38.387$ & $81.667$ & $45.900$ & $501.222$ & $35.830$ & $83.439$ \\\\\n \\hline\n $0.7$ & $31.053$ & $79.255$ & $40.422$ & $78.531$ & $36.386$ & $84.421$\n & $43.216$ & $562.659$ & $43.452$ & $259.861$ & $35.261$ & $72.605$ \\\\\n \\hline\n $0.8$ & $31.526$ & $80.469$ & $41.556$ & $118.519$ & $36.955$ & $79.543$\n & $41.862$ & $148.028$ & $35.047$ & $78.469$ & $35.718$ & $85.142$ \\\\\n \\hline\n $0.9$ & $31.482$ & $80.002$ & $41.386$ & $81.747$ & $36.811$ & $81.025$\n & $35.518$ & $78.547$ & $35.540$ & $74.570$ & $40.534$ & $204.673$ \\\\\n \\hline\n $1.0$ & $32.056$ & $80.917$ & $40.724$ & $80.958$ & $37.234$ & $80.587$\n & $36.610$ & $87.848$ & $36.353$ & $78.767$ & $39.095$ & $125.292$ \\\\\n \\hline\n $2.0$ & $32.638$ & $80.751$ & $40.701$ & $79.667$ & $38.452$ & $79.336$\n & $36.916$ & $111.199$ & $36.658$ & $81.415$ & $40.415$ & $182.552$ \\\\\n \\hline\n $3.0$ & $31.892$ & $85.853$ & $40.822$ & $79.293$ & $38.223$ & $80.688$\n & $37.356$ & $84.312$ & $36.658$ & $81.415$ & $39.774$ & $157.090$ \\\\\n \\hline\n $4.0$ & $31.662$ & $91.837$ & $41.208$ & $79.825$ & $37.936$ & $98.905$\n & $37.000$ & $86.906$ & $36.012$ & $77.741$ & $38.794$ & $140.857$ \\\\\n \\hline\n $5.0$ & $32.295$ & $87.486$ & $41.059$ & $82.191$ & $38.293$ & $83.289$\n & $36.245$ & $80.873$ & $35.308$ & $77.225$ & $39.927$ & $147.368$ \\\\\n \\hline\n $6.0$ & $32.674$ & $86.569$ & $41.215$ & $86.311$ & $37.601$ & $82.375$\n & $36.395$ & $97.307$ & $35.748$ & $76.573$ & $39.327$ & $154.829$ \\\\\n \\hline\n $7.0$ & $31.674$ & $86.215$ & 
$41.290$ & $88.714$ & $38.341$ & $80.991$\n & $36.724$ & $95.148$ & $36.194$ & $75.341$ & $39.826$ & $153.204$ \\\\\n \\hline\n $8.0$ & $33.105$ & $116.601$ & $40.984$ & $83.157$ & $37.756$ & $81.073$\n & $40.524$ & $105.466$ & $36.716$ & $82.921$ & $41.056$ & $175.943$ \\\\\n \\hline\n $9.0$ & $31.881$ & $81.372$ & $41.469$ & $113.561$ & $38.748$ & $82.499$\n & $37.874$ & $94.981$ & $40.341$ & $121.311$ & $39.847$ & $144.595$ \\\\\n \\hline\n $10.0$ & $32.269$ & $87.963$ & $41.466$ & $89.366$ & $37.807$ & $81.043$\n & $37.134$ & $88.005$ & $40.053$ & $136.243$ & $40.333$ & $171.395$ \\\\\n \\hline \\hline\n Total & $611.374$ & $1649.357$ & $787.447$ & $1743.132$ & $717.438$\n & $1676.929$ & $729.433$ & $2303.885$ & $719.843$ & $2381.018$ & $743.916$\n & $2725.140$ \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{sidewaystable}\n\n\\begin{sidewaystable}[p]\n \\begin{center}\n \\caption{\\small Calculation of CPU and Real times of Havrda-Charvat Entropy\n measures for 06 families from 03 Clans with the $POA_{I\\!I}$\n configuration. 
\\label{tab14}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n \\hline\n {} & \\multicolumn{4}{c|}{CL0028} & \\multicolumn{4}{c|}{CL0023}\n & \\multicolumn{4}{c|}{CL0257} \\\\\n \\cline{2-13}\n s & \\multicolumn{2}{c|}{PF06850} & \\multicolumn{2}{c|}{PF00135}\n & \\multicolumn{2}{c|}{PF00005} & \\multicolumn{2}{c|}{PF13481}\n & \\multicolumn{2}{c|}{PF02388} & \\multicolumn{2}{c|}{PF09924} \\\\\n \\cline{2-13}\n {} & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $19.451$ & $26.196$ & $22.620$ & $56.287$ & $25.411$ & $121.508$\n & $18.968$ & $32.706$ & $23.380$ & $80.826$ & $22.502$ & $77.826$ \\\\\n \\hline\n $0.2$ & $19.245$ & $24.901$ & $22.851$ & $69.992$ & $23.362$ & $65.508$\n & $18.810$ & $26.835$ & $23.366$ & $87.098$ & $23.330$ & $60.684$ \\\\\n \\hline\n $0.3$ & $19.582$ & $26.001$ & $23.963$ & $60.960$ & $22.598$ & $72.363$\n & $18.602$ & $26.028$ & $20.012$ & $43.734$ & $21.929$ & $48.947$ \\\\\n \\hline\n $0.4$ & $20.194$ & $31.418$ & $22.211$ & $54.562$ & $23.448$ & $63.369$\n & $19.176$ & $30.482$ & $21.218$ & $67.817$ & $22.182$ & $45.480$ \\\\\n \\hline\n $0.5$ & $20.162$ & $31.653$ & $23.163$ & $67.391$ & $22.153$ & $55.173$\n & $19.281$ & $34.495$ & $21.037$ & $55.237$ & $23.200$ & $62.384$ \\\\\n \\hline\n $0.6$ & $20.941$ & $34.979$ & $23.961$ & $62.158$ & $22.342$ & $57.081$\n & $20.794$ & $44.558$ & $21.087$ & $54.900$ & $22.477$ & $62.421$ \\\\\n \\hline\n $0.7$ & $20.805$ & $34.057$ & $23.679$ & $82.669$ & $19.587$ & $35.750$\n & $19.982$ & $41.314$ & $20.375$ & $32.699$ & $22.398$ & $66.438$ \\\\\n \\hline\n $0.8$ & $20.900$ & $34.146$ & $22.787$ & $60.924$ & $19.081$ & $34.116$\n & $18.890$ & $28.054$ & $20.167$ & $32.685$ & $23.056$ & $61.376$ \\\\\n \\hline\n $0.9$ & $20.909$ & $34.568$ & $22.808$ & 
$54.890$ & $19.030$ & $33.258$\n & $18.894$ & $29.712$ & $19.422$ & $26.934$ & $23.543$ & $65.099$ \\\\\n \\hline\n $1.0$ & $21.353$ & $35.024$ & $22.860$ & $51.308$ & $19.789$ & $32.218$\n & $19.869$ & $37.922$ & $21.076$ & $35.774$ & $22.528$ & $52.209$ \\\\\n \\hline\n $2.0$ & $20.505$ & $32.920$ & $21.085$ & $46.921$ & $19.672$ & $33.778$\n & $19.083$ & $62.081$ & $22.530$ & $82.336$ & $21.783$ & $51.656$ \\\\\n \\hline\n $3.0$ & $23.923$ & $58.898$ & $22.020$ & $47.298$ & $18.788$ & $31.926$\n & $19.097$ & $35.399$ & $24.210$ & $83.371$ & $21.636$ & $69.905$ \\\\\n \\hline\n $4.0$ & $24.622$ & $68.692$ & $21.954$ & $50.909$ & $18.556$ & $26.512$\n & $19.208$ & $32.014$ & $24.300$ & $112.613$ & $21.458$ & $49.931$ \\\\\n \\hline\n $5.0$ & $24.221$ & $60.107$ & $22.131$ & $58.975$ & $19.424$ & $33.185$\n & $18.231$ & $25.898$ & $24.199$ & $95.807$ & $22.011$ & $55.109$ \\\\\n \\hline\n $6.0$ & $24.475$ & $64.843$ & $22.911$ & $72.004$ & $20.582$ & $38.006$\n & $18.330$ & $25.959$ & $22.741$ & $61.516$ & $21.857$ & $67.976$ \\\\\n \\hline\n $7.0$ & $24.593$ & $70.336$ & $22.779$ & $51.309$ & $20.564$ & $38.390$\n & $19.583$ & $30.719$ & $23.222$ & $88.997$ & $23.018$ & $85.054$ \\\\\n \\hline\n $8.0$ & $25.613$ & $83.423$ & $20.058$ & $34.861$ & $20.460$ & $35.589$\n & $19.977$ & $37.130$ & $23.825$ & $110.933$ & $24.474$ & $90.322$ \\\\\n \\hline\n $9.0$ & $24.139$ & $73.873$ & $21.418$ & $44.709$ & $19.274$ & $32.129$\n & $18.871$ & $29.089$ & $24.207$ & $106.471$ & $23.349$ & $74.724$ \\\\\n \\hline\n $10.0$ & $25.785$ & $101.370$ & $22.183$ & $46.878$ & $23.625$ & $87.442$\n & $22.652$ & $64.895$ & $23.779$ & $86.225$ & $21.802$ & $57.444$ \\\\\n \\hline \\hline\n Total & $421.418$ & $927.405$ & $427.442$ & $1075.005$ & $397.746$\n & $927.301$ & $368.298$ & $675.290$ & $424.153$ & $1345.973$ & $428.533$\n & $1204.985$ \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{sidewaystable}\n \n\\begin{sidewaystable}[p]\n \\begin{center}\n 
\\caption{\\small Calculation of CPU and Real times of 19 $s$-powers of joint\n probabilities $\\big(P_{jk}(a,b)\\big)^s$ measures for 06 families from 03\n Clans with the $PLA_{I\\!I}$ configuration. \\label{tab15}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n \\hline\n {} & \\multicolumn{4}{c|}{CL0028} & \\multicolumn{4}{c|}{CL0023}\n & \\multicolumn{4}{c|}{CL0257} \\\\\n \\cline{2-13}\n s & \\multicolumn{2}{c|}{PF06850} & \\multicolumn{2}{c|}{PF00135}\n & \\multicolumn{2}{c|}{PF00005} & \\multicolumn{2}{c|}{PF13481}\n & \\multicolumn{2}{c|}{PF02388} & \\multicolumn{2}{c|}{PF09924} \\\\\n \\cline{2-13}\n {} & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $41.633$ & $83.139$ & $43.135$ & $83.901$ & $44.003$ & $84.655$\n & $26.613$ & $52.365$ & $27.452$ & $53.350$ & $43.132$ & $83.249$ \\\\\n \\hline\n $0.2$ & $26.483$ & $53.384$ & $26.373$ & $50.442$ & $26.610$ & $52.904$\n & $26.599$ & $51.822$ & $43.199$ & $83.497$ & $25.762$ & $50.203$ \\\\\n \\hline\n $0.3$ & $25.286$ & $50.689$ & $26.644$ & $50.992$ & $29.965$ & $53.250$\n & $24.946$ & $48.846$ & $25.498$ & $50.303$ & $25.904$ & $50.737$ \\\\\n \\hline\n $0.4$ & $26.138$ & $51.409$ & $25.255$ & $48.576$ & $26.696$ & $52.482$\n & $27.769$ & $53.837$ & $25.753$ & $51.013$ & $25.684$ & $50.371$ \\\\\n \\hline\n $0.5$ & $25.822$ & $51.837$ & $26.041$ & $50.492$ & $26.648$ & $52.171$\n & $25.026$ & $48.927$ & $25.727$ & $49.887$ & $25.513$ & $48.793$ \\\\\n \\hline\n $0.6$ & $27.013$ & $52.889$ & $25.778$ & $50.324$ & $24.912$ & $49.340$\n & $26.062$ & $51.312$ & $26.066$ & $51.417$ & $25.435$ & $49.686$ \\\\\n \\hline\n $0.7$ & $25.814$ & $51.466$ & $25.496$ & $49.954$ & $25.284$ & $50.268$\n & $23.698$ & $48.134$ & $25.723$ & $50.760$ & $25.822$ & $51.122$ \\\\\n \\hline\n 
$0.8$ & $26.725$ & $53.388$ & $26.254$ & $50.865$ & $25.456$ & $50.870$\n & $25.816$ & $51.196$ & $26.883$ & $52.511$ & $29.837$ & $57.367$ \\\\\n \\hline\n $0.9$ & $26.895$ & $53.579$ & $23.740$ & $47.296$ & $26.237$ & $51.725$\n & $28.441$ & $54.743$ & $25.237$ & $50.532$ & $27.951$ & $54.289$ \\\\\n \\hline\n $1.0$ & $25.990$ & $51.687$ & $27.225$ & $54.384$ & $26.014$ & $51.969$\n & $27.148$ & $52.701$ & $23.672$ & $48.547$ & $26.314$ & $51.932$ \\\\\n \\hline\n $2.0$ & $27.089$ & $54.384$ & $26.237$ & $50.335$ & $27.756$ & $54.761$\n & $26.424$ & $52.503$ & $24.811$ & $49.859$ & $25.846$ & $51.125$ \\\\\n \\hline\n $3.0$ & $25.766$ & $51.843$ & $27.342$ & $52.508$ & $25.135$ & $50.954$\n & $27.753$ & $53.650$ & $25.072$ & $49.142$ & $25.727$ & $50.990$ \\\\\n \\hline\n $4.0$ & $26.515$ & $52.636$ & $25.716$ & $50.531$ & $25.106$ & $50.449$\n & $24.871$ & $50.279$ & $25.674$ & $51.536$ & $26.897$ & $52.227$ \\\\\n \\hline\n $5.0$ & $27.516$ & $54.908$ & $26.002$ & $50.390$ & $24.666$ & $49.463$\n & $23.973$ & $47.554$ & $25.675$ & $51.032$ & $27.810$ & $53.820$ \\\\\n \\hline\n $6.0$ & $28.126$ & $54.611$ & $27.441$ & $53.032$ & $26.014$ & $52.209$\n & $25.676$ & $50.426$ & $26.235$ & $51.375$ & $26.180$ & $51.346$ \\\\\n \\hline\n $7.0$ & $23.908$ & $49.262$ & $26.812$ & $51.556$ & $24.696$ & $49.783$\n & $25.519$ & $50.741$ & $25.863$ & $50.995$ & $25.593$ & $50.611$ \\\\\n \\hline\n $8.0$ & $26.020$ & $51.889$ & $25.431$ & $49.135$ & $26.757$ & $52.518$\n & $26.369$ & $49.668$ & $26.401$ & $52.682$ & $25.084$ & $50.596$ \\\\\n \\hline\n $9.0$ & $26.208$ & $53.114$ & $25.856$ & $50.090$ & $25.803$ & $51.253$\n & $24.628$ & $47.880$ & $25.379$ & $51.014$ & $27.049$ & $52.891$ \\\\\n \\hline\n $10.0$ & $26.125$ & $52.541$ & $24.963$ & $47.402$ & $24.369$ & $48.065$\n & $42.270$ & $83.281$ & $24.878$ & $49.926$ & $24.563$ & $48.616$ \\\\\n \\hline \\hline\n Total & $515.072$ & $1028.655$ & $511.741$ & $992.205$ & $509.127$\n & $1009.089$ & $509.601$ & $999.865$ 
& $505.198$ & $999.378$ & $516.103$\n & $1009.971$ \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{sidewaystable}\n\n\\begin{sidewaystable}[p]\n \\begin{center}\n \\caption{\\small Calculation of CPU and Real times of Havrda-Charvat Entropy\n measures for 06 families from 03 Clans with the $PLA_{I\\!I}$\n configuration. \\label{tab16}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n \\hline\n {} & \\multicolumn{4}{c|}{CL0028} & \\multicolumn{4}{c|}{CL0023}\n & \\multicolumn{4}{c|}{CL0257} \\\\\n \\cline{2-13}\n s & \\multicolumn{2}{c|}{PF06850} & \\multicolumn{2}{c|}{PF00135}\n & \\multicolumn{2}{c|}{PF00005} & \\multicolumn{2}{c|}{PF13481}\n & \\multicolumn{2}{c|}{PF02388} & \\multicolumn{2}{c|}{PF09924} \\\\\n \\cline{2-13}\n {} & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $21.219$ & $26.895$ & $18.698$ & $23.442$ & $19.383$ & $24.675$\n & $21.351$ & $26.966$ & $19.059$ & $24.560$ & $21.451$ & $26.829$ \\\\\n \\hline\n $0.2$ & $22.604$ & $28.014$ & $21.109$ & $26.536$ & $20.336$ & $25.665$\n & $19.555$ & $24.562$ & $19.370$ & $24.681$ & $22.504$ & $27.855$ \\\\\n \\hline\n $0.3$ & $19.056$ & $24.130$ & $20.692$ & $25.663$ & $21.934$ & $27.466$\n & $21.084$ & $26.236$ & $19.317$ & $24.934$ & $19.681$ & $24.732$ \\\\\n \\hline\n $0.4$ & $21.552$ & $27.442$ & $19.981$ & $25.021$ & $18.794$ & $24.142$\n & $21.065$ & $26.485$ & $20.878$ & $26.291$ & $21.947$ & $26.979$ \\\\\n \\hline\n $0.5$ & $31.952$ & $88.406$ & $21.161$ & $26.374$ & $19.123$ & $24.309$\n & $22.534$ & $27.851$ & $19.729$ & $25.011$ & $21.888$ & $27.168$ \\\\\n \\hline\n $0.6$ & $20.195$ & $25.362$ & $21.006$ & $26.371$ & $21.426$ & $30.303$\n & $20.640$ & $26.050$ & $19.424$ & $24.279$ & $20.750$ & $26.175$ \\\\\n \\hline\n $0.7$ & $19.168$ & 
$23.924$ & $20.989$ & $26.482$ & $21.440$ & $27.129$\n & $19.370$ & $24.287$ & $19.274$ & $24.779$ & $21.545$ & $26.830$ \\\\\n \\hline\n $0.8$ & $21.982$ & $27.362$ & $19.760$ & $24.681$ & $20.662$ & $25.948$\n & $21.324$ & $26.648$ & $21.685$ & $26.964$ & $21.644$ & $27.160$ \\\\\n \\hline\n $0.9$ & $21.356$ & $27.235$ & $21.251$ & $26.550$ & $19.518$ & $24.880$\n & $21.276$ & $26.736$ & $21.283$ & $26.981$ & $19.217$ & $23.910$ \\\\\n \\hline\n $1.0$ & $20.206$ & $25.608$ & $20.914$ & $26.173$ & $20.726$ & $25.877$\n & $21.430$ & $26.625$ & $20.102$ & $25.198$ & $19.810$ & $25.348$ \\\\\n \\hline\n $2.0$ & $19.435$ & $24.882$ & $20.660$ & $25.675$ & $20.798$ & $26.326$\n & $20.301$ & $25.481$ & $19.588$ & $24.837$ & $20.446$ & $25.755$ \\\\\n \\hline\n $3.0$ & $20.549$ & $25.646$ & $19.988$ & $25.479$ & $20.629$ & $25.516$\n & $19.914$ & $25.230$ & $20.438$ & $25.648$ & $19.393$ & $24.815$ \\\\\n \\hline\n $4.0$ & $19.528$ & $24.693$ & $20.649$ & $25.610$ & $19.428$ & $24.637$\n & $20.505$ & $26.001$ & $20.868$ & $26.009$ & $20.060$ & $25.643$ \\\\\n \\hline\n $5.0$ & $19.698$ & $24.824$ & $21.076$ & $26.468$ & $19.865$ & $24.753$\n & $19.120$ & $24.267$ & $19.466$ & $24.423$ & $21.760$ & $27.244$ \\\\\n \\hline\n $6.0$ & $20.809$ & $26.319$ & $20.352$ & $25.914$ & $19.507$ & $24.362$\n & $20.110$ & $25.587$ & $20.708$ & $25.961$ & $21.240$ & $26.665$ \\\\\n \\hline\n $7.0$ & $20.287$ & $25.951$ & $21.073$ & $26.459$ & $19.482$ & $24.861$\n & $18.272$ & $23.535$ & $19.015$ & $24.460$ & $20.646$ & $25.964$ \\\\\n \\hline\n $8.0$ & $21.427$ & $26.885$ & $19.494$ & $24.421$ & $20.107$ & $25.228$\n & $20.645$ & $26.095$ & $21.039$ & $26.222$ & $20.014$ & $25.155$ \\\\\n \\hline\n $9.0$ & $21.623$ & $27.335$ & $21.554$ & $26.517$ & $19.239$ & $24.529$\n & $20.629$ & $25.929$ & $21.035$ & $26.450$ & $21.086$ & $26.526$ \\\\\n \\hline\n $10.0$ & $20.815$ & $26.379$ & $21.127$ & $26.630$ & $19.927$ & $25.340$\n & $19.781$ & $25.063$ & $21.438$ & $27.019$ & $17.286$ & 
$22.390$ \\\\\n \\hline \\hline\n Total & $403.461$ & $557.294$ & $391.524$ & $490.466$ & $382.324$\n & $485.946$ & $388.906$ & $489.634$ & $383.716$ & $484.707$ & $392.368$\n & $493.143$ \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n \\end{center}\n\\end{sidewaystable}\n\n\\begin{sidewaystable}[p]\n \\begin{center}\n \\caption{\\small Calculation of CPU and Real times of Havrda-Charvat\n Entropy measures for 06 families from 03 Clans with parameters\n $0 < s \\leq 1$ and the $PLA_{I\\!I}$ configuration. \\label{tab17}}\n \\begingroup\n \\everymath{\\scriptstyle}\n \\footnotesize\n \\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}\n \\hline\n {} & \\multicolumn{4}{c|}{CL0028} & \\multicolumn{4}{c|}{CL0023}\n & \\multicolumn{4}{c|}{CL0257} \\\\\n \\cline{2-13}\n s & \\multicolumn{2}{c|}{PF06850} & \\multicolumn{2}{c|}{PF00135}\n & \\multicolumn{2}{c|}{PF00005} & \\multicolumn{2}{c|}{PF13481}\n & \\multicolumn{2}{c|}{PF02388} & \\multicolumn{2}{c|}{PF09924} \\\\\n \\cline{2-13}\n {} & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec)\n & $t_{CPU}$ (sec) & $t_R$ (sec) & $t_{CPU}$ (sec) & $t_R$ (sec) \\\\\n \\hline\n $0.1$ & $21.219$ & $26.895$ & $18.698$ & $23.442$ & $19.383$ & $24.675$\n & $21.351$ & $26.966$ & $19.059$ & $24.560$ & $21.451$ & $26.829$ \\\\\n \\hline\n $0.2$ & $22.604$ & $28.014$ & $21.109$ & $26.536$ & $20.336$ & $25.665$\n & $19.555$ & $24.562$ & $19.370$ & $24.681$ & $22.504$ & $27.855$ \\\\\n \\hline\n $0.3$ & $19.056$ & $24.130$ & $20.692$ & $25.663$ & $21.934$ & $27.466$\n & $21.084$ & $26.236$ & $19.317$ & $24.934$ & $19.681$ & $24.732$ \\\\\n \\hline\n $0.4$ & $21.552$ & $27.442$ & $19.981$ & $25.021$ & $18.794$ & $24.142$\n & $21.065$ & $26.485$ & $20.878$ & $26.291$ & $21.947$ & $26.979$ \\\\\n \\hline\n $0.5$ & $31.952$ & $88.406$ & $21.161$ & $26.374$ & $19.123$ & $24.309$\n & $22.534$ & $27.851$ & $19.729$ & $25.011$ & $21.888$ & $27.168$ \\\\\n \\hline\n $0.6$ & 
$20.195$ & $25.362$ & $21.006$ & $26.371$ & $21.426$ & $30.303$\n & $20.640$ & $26.050$ & $19.424$ & $24.279$ & $20.750$ & $26.175$ \\\\\n \\hline\n $0.7$ & $19.168$ & $23.924$ & $20.989$ & $26.482$ & $21.440$ & $27.129$\n & $19.370$ & $24.287$ & $19.274$ & $24.779$ & $21.545$ & $26.830$ \\\\\n \\hline\n $0.8$ & $21.982$ & $27.362$ & $19.760$ & $24.681$ & $20.662$ & $25.948$\n & $21.324$ & $26.648$ & $21.685$ & $26.964$ & $21.644$ & $27.160$ \\\\\n \\hline\n $0.9$ & $21.356$ & $27.235$ & $21.251$ & $26.550$ & $19.518$ & $24.880$\n & $21.276$ & $26.736$ & $21.283$ & $26.981$ & $19.217$ & $23.910$ \\\\\n \\hline\n $1.0$ & $20.206$ & $25.608$ & $20.914$ & $26.173$ & $20.726$ & $25.877$\n & $21.430$ & $26.625$ & $20.102$ & $25.198$ & $19.810$ & $25.348$ \\\\\n \\hline \\hline\n Total & $219.290$ & $324.380$ & $205.561$ & $257.293$ & $203.342$\n & $260.394$ & $209.629$ & $262.446$ & $200.121$ & $253.678$ & $210.437$\n & $262.986$ \\\\\n \\hline \\hline\n Grand & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1284.343$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1895.873$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1199.098$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1727.300$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1131.878$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1693.572$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1066.375$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1511.927$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1124.465$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1681.087$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1211.808$}}\n & \\multicolumn{1}{c|}{\\multirow{2}{*}{$1772.106$}} \\\\\n Total & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{}\n & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{}\n & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{}\n & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{}\n & \\multicolumn{1}{c|}{} \\\\\n \\hline\n \\end{tabular}\n \\endgroup\n 
\\end{center}\n\\end{sidewaystable}\n\n\\newpage\n\nThe corresponding times for calculating the joint probabilities and\ntheir $s$-powers have been added in to give the results for the\nHavrda-Charvat entropies in the Grand Total rows. We consider these\ntimes to be quite affordable indeed.\n\n\\begin{table}[H]\n \\begin{center}\n \\caption{\\small Total CPU and real times for calculating the Entropy measure\n content of a family PF06850 and approximations for Grand\n Total of all sample space. \\label{tab18}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\multicolumn{1}{|c|}{\\multirow{2}{*}{$POA_{I\\!I}$}} & Entropy Measures &\n Entropy Measures \\\\\n {} & $H\\!_j(s)$ --- 19 $s$-values & $H\\!_{jk}(s)$ ---\n 19 $s$-values \\\\\n \\hline\n Total CPU time & $0.358+2.562+0.292$\n & $550.129+611.374$ \\\\\n (family PF06850) & $=3.212$ sec & $+421.418=1,582.921$ sec \\\\\n \\hline\n Grand Total CPU time & $3,433.628$ sec\n & $1,692,142.549$ sec \\\\\n (1069 families) & $=0.954$ h & $=19.585$ days \\\\\n \\hline\n Total Real time & $1.062+9.300+0.332$\n & $593.848+1,649.357+$ \\\\\n (family PF06850) & $=10.694$ sec & $927.405=3,170.610$ sec \\\\\n \\hline\n Grand Total Real time & $11,431.886$ sec\n & $3,389,382.090$ sec \\\\\n (1069 families) & $=3.175$ h & $=39.229$ days \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\\begin{table}[H]\n \\begin{center}\n \\caption{\\small Total CPU and real times for calculating the Entropy measure\n content of a family PF06850 and approximations for Grand\n Total of all sample space. 
\\label{tab19}}\n \\begin{tabular}{|c|c|c|}\n \\hline\n \\multicolumn{1}{|c|}{\\multirow{2}{*}{$PLA_{I\\!I}$}} & Entropy Measures &\n Entropy Measures \\\\\n {} & $H\\!_j(s)$ --- 19 $s$-values & $H\\!_{jk}(s)$ ---\n 19 $s$-values \\\\\n \\hline\n Total CPU time & $0.291+1.801+0.640$\n & $787.254+515.072$ \\\\\n (family PF06850) & $=2.732$ sec & $+403.461=1,705.787$ sec \\\\\n \\hline\n Grand Total CPU time & $2,920.508$ sec\n & $1,823,486.303$ sec \\\\\n (1069 families) & $=0.811$ hrs & $=21.105$ days \\\\\n \\hline\n Total Real time & $0.642+7.261+1.291$\n & $1,068.026+1,028.655$ \\\\\n (family PF06850) & $=9.194$ sec & $+557.294=2,653.975$ sec \\\\\n \\hline\n Grand Total Real time & $9,828.386$ sec\n & $2,783,649.275$ sec \\\\\n (1069 families) & $=2.730$ hrs & $=32.218$ days \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\n\\section{Concluding Remarks and Suggestions for Future Work}\nThe treatment of the probability distributions of occurrence in protein\ndatabases is a twofold procedure. We intend to find a way of characterizing\nthe protein database by values of Entropy Measures, in order to center a\nsound discussion on the maximization of a convenient average\nEntropy Measure representing the entire protein database. We also intend to\nderive a partition function, and from it a thermodynamical theory\nassociated with the temporal evolution of the database. 
If the corresponding\nevolution of the protein families is assumed to be recorded in the\nsubsequent versions of the database (Table \\ref{tab17}), we will then be\nable to describe the sought thermodynamical evolution from this theory, as\nwell as to obtain from it a convenient description of all the intermediate\nLevinthal stages that seem necessary for describing the\nfolding\/unfolding dynamical process.\n\nWe summarize this approach as the need to proceed from a thermodynamical\ntheory of the evolution of protein databases via Entropy measures to the\nconstruction of a successful dynamical theory of protein families. In other\nwords, from the thermodynamics of evolution of a protein database, we will\nderive a statistical mechanics to give us physical insight into the\nconstruction of a successful dynamics of protein families.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\\label{intro}\n\n\nSemiflexible polymer is a term coined to describe a variety of physical systems involving linear molecules. For instance, understanding the behavior of such polymers serves as the basis for understanding phenomena encountered in the polymer industry, in biotechnology, and in mo\\-le\\-cu\\-lar processes in living cells \\cite{Fal-Sakaue2007}. Indeed, biopolymer functionality is governed by conformation, which in turn is considerably modified in the geometrically confined or crowded environment inside the cell \\cite{Fal-Koster2008, Fal-Reisner2005,Fal-Cifra2010, Fal-Benkova2017}. Beyond the most prominent examples of DNA compaction in the nucleus and viral DNA packed in capsids \\cite{Fal-Cifra2010, Fal-Locker2006}, there is also the important example of the DNA transcription and replication processes, which are governed by the binding of specific proteins. These mechanisms are strongly connected to polymer configuration \\cite{Fal-Gowers2003,Fal-Broek2008,Fal-Ostermeir2010}. 
Furthermore, a wide range of biophysical processes derives from DNA constrained to a ring enclosure and more general topologies \\cite{Fal-Witz2011}. \n\nOn the one hand, motivated by the packaging and coiling problems mentioned above, Mondescu and Muthukumar (MM) studied in \\cite{PDoi-Mondescu1998} the conformational states of an ideal Gaussian polymer \\cite{PDoi-Doi1988book} wrapping different curved surfaces, where they presented theoretical predictions for the mean-square end-to-end distance. Later on, Spakowitz and Wang (SW) in \\cite{PSaito-Spakowitz2003} studied the conformational states of an ideal semiflexible polymer confined to a spherical surface based on the continuous Worm-Like Chain Model (WLC) \\cite{PSaito-Saito1967}. Unlike the conformational states of the Gaussian polymer, SW found the existence of a shape transition from an ordered to a disordered phase, where the polymer roughly looks like cooked spaghetti or a random walk, respectively. Moreover, in the appropriate limit, the behavior of the semiflexible polymer reduces to that of the Gaussian polymer in the spherical case. Subsequently, the MM and SW results were confirmed through computer simulations, where the validity regimes for each theory were established. Additionally, as a consequence of the excluded volume effect, a helical state was found in \\cite{Fal-Cerda2005,Fal-Slosar2006} and a Tennis ball-like state in \\cite{Psim-Zhang2011}; these states are absent in both MM and SW theories. The problem becomes richer when short-range and long-range electrostatic interactions are considered, since they can induce a Tennis ball-like state that predominates over the helical conformation in the case of long-range electrostatic interactions \\cite{PSim-Angelescu2008}. A transition similar to that reported by SW is also observed, with slight corrections in the case of short-range interactions and more pronounced ones in the long-range case. 
In the extreme limit of zero temperature, the conformational states are expected to be in the ordered phase. Indeed, by a variational principle consistent with the WLC model one can obtain an abundance of conformational states, including those observed in the simulations mentioned above \\cite{Psim-Guven2012,Psim-Lin2007}. On the other hand, the confinement can induce a transition from a circular polymer to a figure eight shape \\cite{Fal-Ostermeir2010}; even when the conformation of the polymer is considered as a self-avoiding random walk, properties similar to those of a critical phenomenon are found \\cite{Fal-Berger2006,Fal-Sakaue2006,Fal-Sakaue2007,Fal-Usatenko2017}. Confinement can also occur in the three-dimensional volume enclosed by rigid or soft surfaces \\cite{Fal-Brochard2005, Fal-Cacciuto2006, Fal-Chen2007, Fal-Koster2008, PSaito-Morrison2009, Psim-Gao2014}, and in a crowded environment, for instance, modeled by a nanopost array \\cite{Fal-Benkova2017}. \n\nFurthermore, two-dimensional confinement in closed flat spaces can result in an order-disorder shape transition \\cite{Polcon-Liu2008} similar to that of SW. One advantage of confinement in the flat two-dimensional case is that it can be compared with experiments \\cite{Fal-Moukhtar2007,Fal-Witz2011, Fal-Japaridze2017, Fal-Frielinghaus2013}. However, as far as we know, there is no systematic study in the literature of the SW transition in a flat, bounded region, even for the ideal semiflexible chain. In this work, we present a theoretical and numerical analysis of the conformational states of an ideal semiflexible polymer in a compact two-dimensional flat space. First, we deduce the Hermans-Ullman (HU) equation \\cite{PSaito-Hermans1952} under the supposition that the conformational states correspond to stochastic realizations of paths defined by the Frenet equations, and the assumption that the stochastic ``curvature'' satisfies a fluctuation theorem given by a white noise distribution. 
This latter hypothesis is consistent with the continuous version of the WLC model, as we will see below. Using the HU equation, we shall perform a multipolar decomposition for the probability density function $P({\\bf R}, \\theta, {\\bf R}^{\\prime}, \\theta^{\\prime}, L)$ that gives the probability of finding a polymer of length $L$ with ends at ${\\bf R}$ and ${\\bf R}^{\\prime}$, and directions $\\theta$ and $\\theta^{\\prime}$, respectively. This decomposition allows us to find a hierarchy of equations associated with the multipoles of $P({\\bf R}, \\theta, {\\bf R}^{\\prime}, \\theta^{\\prime}, L)$, namely, the positional density distribution $\\rho({\\bf R}, {\\bf R}^{\\prime}, L)$, the dipolar distribution $\\mathbb{P}^{i}\\left({\\bf R}, {\\bf R}^{\\prime}, L\\right)$, the quadrupolar two-rank tensor distribution $\\mathbb{Q}_{ij}({\\bf R}, {\\bf R}^{\\prime}, L)$, and so on. We shall show, for instance, that the positional density and the quadrupolar distributions are exactly related through the modified Telegrapher's Equation (MTE), \n\\begin{eqnarray}\n\\frac{\\partial^2\\rho}{\\partial s^2}+\\frac{1}{2\\ell_{p}}\\frac{\\partial\\rho}{\\partial s}=\\frac{1}{2}\\nabla^2\\rho+\\partial_{i}\\partial_{j}\\mathbb{Q}^{ij}.\n\\end{eqnarray}\nIn particular, using this equation and the traceless condition of $\\mathbb{Q}_{ij}$, we are going to verify the well-known exact result of Kratky-Porod for a semiflexible polymer in a two-dimensional space \\cite{PSaito-Kratky1949}. Besides, we will show that, as a consequence of the exponential decay of $\\mathbb{Q}_{ij}$, one can define a regime where the quadrupolar distribution can be neglected in the MTE. In addition, we shall explore the conformational states for a semiflexible chain enclosed by a bounded compact two-dimensional domain through the mean-square end-to-end distance. 
In particular, for a square domain we will show the existence of an order-disorder shape transition of the same nature as the one found by SW \\cite{PSaito-Spakowitz2003}. Furthermore, we will develop a Monte Carlo algorithm for use in computer simulations in order to study the conformational states enclosed in a compact domain. Particularly, the algorithm is tailored to the square domain, which will additionally allow us to confirm the shape transition and validate the theoretical predictions. \n\n\nThis paper is organized as follows. In Sec. II, we present the stochastic version of the Frenet equations, whose Fokker-Planck formalism gives us a derivation of the Hermans-Ullman (HU) equation. In addition, we discuss a multipolar decomposition for the HU equation. In Sec. III, we provide an application of the methods developed in Sec. II in order to study semiflexible polymer conformations enclosed in a compact domain. Particularly, we focus on a square box domain. In Sec. IV, we present a Monte Carlo algorithm to study the conformational states of a semiflexible polymer enclosed in a compact domain. In Sec. V, we give the main results in a square box domain and we provide a comparison with the theoretical predictions. In the final Sec. VI, we give our concluding remarks and perspectives on this work.\n\n\n\\section{Preliminary notation and semiflexible polymers}\\label{theory}\n\nLet us consider a polymer in a two-dimensional Euclidean space as a plane curve $\\gamma$, ${\\bf R}: I\\subset \\mathbb{R}\\to\\mathbb{R}^2$, parametrized by arc length $s$. For each point $s\\in I$, a Frenet dihedral can be defined whose vector basis corresponds to the set\n $\\{{\\bf T}(s), {\\bf N}(s)\\}$, consisting of the tangent vector ${\\bf T}(s)={\\bf R}^{\\prime}(s)\\equiv d{\\bf R}\/ds$ and the normal vector ${\\bf N}(s)=\\boldsymbol{\\epsilon}{\\bf T}(s)$, where $\\boldsymbol{\\epsilon}$ is a rotation by an angle of $\\pi\/2$. 
Note that the components of the rotation correspond to the Levi-Civita antisymmetric tensor in two dimensions. Both are unit vectors ($\\left|{\\bf T}(s)\\right|=\\left|{\\bf N}(s)\\right|=1$), and by construction they are orthogonal to each other. It is well known that along the points of the curve these vectors satisfy the Frenet equations, ${\\bf T}^{\\prime}(s)=\\kappa(s){\\bf N}(s)$ and ${\\bf N}^{\\prime}(s)=-\\kappa(s){\\bf T}(s)$, where $\\kappa(s)$ is the curvature of the curve \\cite{Fal-Montiel2009}. \n \nIn the absence of thermal fluctuations, the conformations of the polymer are studied through different curve configurations determined by variational principles. For instance, one of the most successful models to des\\-cribe configurations of a semiflexible polymer is\n\\begin{eqnarray}\nH[{\\bf R}]=\\frac{\\alpha}{2}\\int ds ~\\kappa^2\\left(s\\right),\n\\label{funcional}\n\\end{eqnarray}\nwhere $H[{\\bf R}]$ is the bending energy, and $\\alpha$ the bending rigidity modulus. This energy functional (\\ref{funcional}) corresponds to the continuous form of the worm-like chain model (WLC) \\cite{PSaito-Saito1967}. In a rather different context, a classical problem originally proposed by D. Bernoulli and later studied by L. Euler in the XVIII century consists of finding the family of curves $\\{\\gamma\\}$ with a fixed length that minimizes the functional \\eqref{funcional}. The solution to this problem is composed of those curves whose curvature satisfies the differential equation $\\kappa^{\\prime\\prime}+\\frac{1}{2}\\kappa^{3}-\\frac{\\lambda}{\\alpha}\\kappa=0$, where $\\lambda$ is a Lagrange multiplier introduced to constrain the curve length \\cite{Polcla-Miura2017}. This problem has been generalized to study elastic curves in manifolds \\cite{Polcla-Singer2008, Psim-Guven2012, Polcla-Manning1987}, which are nowadays relevant to understanding the problem of DNA packaging and the winding problem of DNA around histone octamers \\cite{Fal-Hardin2012}. 
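As a quick numerical illustration (ours, not part of the original discussion), the elastica condition above can be integrated as a first-order system with a hand-rolled RK4 step; the constant-curvature solution $\kappa=\sqrt{2\lambda/\alpha}$, a circular arc, is an exact fixed point of the flow. The value of $\lambda/\alpha$ below is an arbitrary choice for illustration.

```python
import math

def elastica_rhs(state, lam_over_alpha):
    # state = (kappa, kappa'); elastica ODE: kappa'' = (lambda/alpha) kappa - kappa^3 / 2
    k, kp = state
    return (kp, lam_over_alpha * k - 0.5 * k**3)

def rk4_step(state, h, lam_over_alpha):
    # one classical Runge-Kutta step of size h along the arc length s
    f = lambda st: elastica_rhs(st, lam_over_alpha)
    k1 = f(state)
    k2 = f((state[0] + 0.5*h*k1[0], state[1] + 0.5*h*k1[1]))
    k3 = f((state[0] + 0.5*h*k2[0], state[1] + 0.5*h*k2[1]))
    k4 = f((state[0] + h*k3[0], state[1] + h*k3[1]))
    return (state[0] + h/6.0*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            state[1] + h/6.0*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

lam_over_alpha = 0.5                      # illustrative value of lambda/alpha
k_circle = math.sqrt(2 * lam_over_alpha)  # constant-curvature (circular-arc) solution
state = (k_circle, 0.0)
for _ in range(1000):
    state = rk4_step(state, 0.01, lam_over_alpha)
# The circular solution stays constant along the curve: the RHS vanishes there.
```

Non-constant solutions (wavelike elastica) follow from the same integrator with a different initial condition for $(\kappa, \kappa')$.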
\n\n\nIn what follows, we shall develop an unusual approach that incorporates the thermal fluctuations in the study of semiflexible polymers described by the bending energy \\eqref{funcional}. \n\n\\subsection{Stochastic Frenet Equations Approach}\nIn this section, we propose an approach to study the conformational states of a semiflexible polymer immersed in a thermal reservoir and confined to a two-dimensional Euclidean space. We start by postulating that each conformational realization of any polymer on the plane is described by a stochastic path satisfying the stochastic Frenet equations defined by\n\\begin{subequations}\n\\label{ecsestom1}\n\\begin{align}\n\\label{ecsesto0}\n\\frac{d}{ds}{\\bf R}\\left(s\\right)=&{\\bf T}(s),\\\\\n\\frac{d}{ds}{\\bf T}\\left(s\\right)=&{\\kappa}(s){\\boldsymbol{\\epsilon}}{\\bf T}(s),\n\\label{ecsesto}\n\\end{align}\n\\end{subequations}\nwhere ${\\bf R}(s)$, ${\\bf T}(s)$ and $\\kappa(s)$ are now random variables. According to this postulate, it can be shown that $\\left|{\\bf T}(s)\\right|$ is a constant, which can be fixed to unity. \n\nIn addition, we postulate that, for semiflexible polymers, $\\kappa(s)$ is a random variable, named here the stochastic curvature, distributed according to the following probability density\n\\begin{eqnarray}\n\\mathcal{P}[\\kappa]\\mathcal{D}\\kappa:=\\frac{1}{Z}\\exp\\left[-\\beta H\\right]\\mathcal{D}\\kappa, \n\\label{dft}\n\\end{eqnarray}\nwhere $H$ is given by Eq.~\\eqref{funcional}, $Z$ is an appropriate normalization constant, $\\mathcal{D}\\kappa$ is a functional measure, and $\\beta=1\/k_{B}T$ the inverse of the thermal energy $k_{B}T$, with $k_{B}$ and $T$ being the Boltzmann constant and the absolute temperature, respectively. It is also convenient to introduce the persistence length by $\\ell_{p}=\\beta\\alpha$. 
Note that due to the Gaussian structure of the probability density \\eqref{dft}, the stochastic curvature satisfies the following fluctuation theorem \\cite{Fal-Zinn1996}\n\\begin{subequations}\n\\label{flucm1}\n\\begin{align}\n\\label{fluc0}\n\\left<\\kappa(s)\\kappa(s^{\\prime})\\right>=&\\frac{1}{\\ell_{p}}\\delta(s-s^{\\prime}),\\\\\n\\left<\\kappa(s)\\right>=&0. \n\\label{fluc}\n\\end{align}\n\\end{subequations}\n \nSince the polymer is confined to a plane and ${\\bf T}(s)$ is a unit vector, it may be written as ${\\bf T}(s)=(\\cos\\theta(s), \\sin\\theta(s))$, where $\\theta(s)$ is another random variable. In this way, the stochastic equations \\eqref{ecsestom1} can be rewritten in the following manner\n\\begin{subequations}\n\\label{stochasticecs}\n\\begin{align}\n\\label{stochasticecs0}\n\\frac{d}{ds}{\\bf R}\\left(s\\right)=&\\left(\\cos\\theta(s),\\sin\\theta(s)\\right),\\\\\n\\frac{d}{ds}\\theta\\left(s\\right)=&\\kappa(s).\n\\label{stochasticecs1}\n\\end{align}\n\\end{subequations}\nThe most important feature of these equations is their analogy with the Langevin equations for an active particle in the overdamped limit, where the noise is introduced through the stochastic curvature $\\kappa(s)$ \\cite{Fal-Sevilla2014}. Moreover, these equations can be studied through traditional numerical methods, for example, using standard routines implemented in Brownian dynamics \\cite{Fal-Ermak1978}. Here, from an analytical viewpoint, we find it more convenient to use a Fokker-Planck formalism in order to extract information from the above stochastic equations (\\ref{stochasticecs}). \n\n\\subsection{From Frenet Stochastic Equations to the Hermans-Ullman Equation}\n\nIn this section, we present the Fokker-Planck formalism corresponding to the stochastic equations \\eqref{stochasticecs}. 
This description consists of determining the equation that governs the probability density function defined by\n\\begin{eqnarray}\nP(\\left.{\\bf R}, \\theta\\right|{\\bf R}^{\\prime}, \\theta^{\\prime}; s)=\\left<\\delta({\\bf R}-{\\bf R}(s))\\delta(\\theta-\\theta(s))\\right>, \n\\end{eqnarray}\nwhere ${\\bf R}$ and ${\\bf R}^{\\prime}$ are the ending positions of the polymer, and the angles $\\theta$ and $\\theta^{\\prime}$ are their corresponding directions, respectively. The parameter $s$ is the polymer length. \n\nApplying the standard procedure described in Refs. \\cite{Fal-Zinn1996, Fal-Gardiner1986} to the stochastic equations \\eqref{stochasticecs}, we obtain the corresponding Fokker-Planck equation\n\\begin{eqnarray}\n\\frac{\\partial P}{\\partial s}+\\nabla\\cdot\\left({\\bf t}\\left(\\theta\\right)P\\right)=\\frac{1}{2\\ell_{p}}\\frac{\\partial^2P}{\\partial\\theta^2},\n\\label{F-P}\n\\end{eqnarray}\nwhere ${\\bf t}(\\theta)=\\left(\\cos\\theta,\\sin\\theta\\right)$ and $\\nabla$ is the gradient operator with respect to ${\\bf R}$. Let us look carefully at the last equation. Surprisingly, this equation is exactly the equation found by J. J. Hermans and R. Ullman in 1952 \\cite{PSaito-Hermans1952}. They derived it supposing that the conformation of a polymer is determined by Markovian walks, taking the mean values of $\\theta$ and $\\theta^2$ as phenomenological parameters. These parameters are based on the X-ray scattering experiments performed by Kratky and Porod \\cite{PSaito-Kratky1949}. For this reason, from now on, we name \\eqref{F-P} the Hermans-Ullman (HU) equation. It must be mentioned that H. E. Daniels found an equivalent equation a few months before Hermans and Ullman \\cite{PSaito-Daniels1952}. A review of the methods used to obtain the HU equation can be found in Refs. \\cite{Wlc-Yamakawa2016, Polcon-Chen2016}. For instance, taking into account \\eqref{funcional}, the HU equation can be derived through the Green formalism \\cite{Polcon-Chen2016}. 
In contrast, in the present work, we have deduced the Hermans-Ullman equation considering two postulates, namely, I) the conformation of the semiflexible polymer satisfies the Frenet stochastic equations \\eqref{ecsestom1}, and II) the stochastic curvature is distributed according to \\eqref{dft}, which is consistent with the worm-like chain model (\\ref{funcional}). As far as we know, this procedure has not been reported in the literature. \n \n To end this section, let us remark, as pointed out in \\cite{Wlc-Yamakawa2016, Polcon-Chen2016}, that\n \\begin{eqnarray}\n \\int d^{2}{\\bf R}d^{2}{\\bf R}^{\\prime}~P(\\left.{\\bf R}, \\theta\\right|{\\bf R}^{\\prime}, \\theta_{0}; s)\\propto\\mathbb{Z}\\left(\\theta,\\theta_{0}, s\\right),\n \\end{eqnarray} \n where $\\mathbb{Z}\\left(\\theta,\\theta_{0}, s\\right)$ is the marginal probability density function (see appendix \\ref{A}), which establishes a bridge to the formalism in N. Sait$\\hat{\\rm o}$ et al. \\cite{PSaito-Saito1967} for the semiflexible polymer in a thermal bath.\n\n\\subsection{Multipolar decomposition for the Hermans-Ullman Equation} \\label{sect3}\nIt is necessary to emphasize that the HU equation naturally arises in the description of the motion of an active particle. Thus, with care in the interpretation, the methods developed in Refs. \\cite{Fal-Sevilla2014, Fal-Castro-Villarreal2018} to solve Eq.~\\eqref{F-P} can be applied in this context. 
Particularly, we use the multipolar expansion approach to solve Eq.~\\eqref{F-P}, which, in the orthonormal Cartesian basis $\\{1, 2t_{i}, 4(t_{i}t_{j}-\\frac{1}{2}\\delta_{ij}), \\cdots\\}$, takes the following form \\cite{Fal-Castro-Villarreal2018}\n\\begin{eqnarray}\nP({\\bf R},\\theta, s)&=&\\rho({\\bf R}, s)+2\\mathbb{P}_{i}({\\bf R}, s) {t_{i}}\\nonumber\\\\&+&4\\mathbb{Q}_{ij}\\left({\\bf R},s\\right)\\left(t_{i}t_{j}-\\frac{1}{2}\\delta_{ij}\\right)\\nonumber\\\\\n&+&8\\mathbb{R}_{ijk}\\left({\\bf R}, s\\right)\\left(t_{i}t_{j}t_{k}-\\frac{1}{4}\\delta_{\\left(ij\\right.}t_{\\left.k\\right)}\\right)+\\cdots,\\nonumber\\\\\n\\end{eqnarray}\nwhere we have adopted the Einstein summation convention, and the symbol $(ijk)$ means symmetrization on the indices $i,j,k$. The coefficients of the series are multipolar tensors given by\n \\begin{eqnarray}\n\\rho({\\bf R}, s)&=&\\int_{0}^{2\\pi} \\frac{d\\theta}{2\\pi}~P({\\bf R},\\theta, s)\\nonumber,\\\\ \n\\mathbb{P}_{i}({\\bf R}, s)&=&\\int_{0}^{2\\pi} \\frac{d\\theta}{2\\pi}~{t}_{i}~ P({\\bf R}, \\theta, s),\\nonumber\\\\ \n\\mathbb{Q}_{ij}({\\bf R}, s)&=&\\int_{0}^{2\\pi} \\frac{d\\theta}{2\\pi}~\\left(t_{i}t_{j}-\\frac{1}{2}\\delta_{ij}\\right)P({\\bf R}, \\theta, s), \\nonumber\\\\\n\\mathbb{R}_{ijk}({\\bf R}, s)&=&\\int_{0}^{2\\pi} \\frac{d\\theta}{2\\pi}~\\left(t_{i}t_{j}t_{k}-\\frac{1}{4}\\delta_{\\left(ij\\right.}t_{\\left.k\\right)}\\right)P({\\bf R}, \\theta, s), \\nonumber\\\\\n&\\vdots&.\n\\end{eqnarray}\nIn the latter coefficients, to simplify the notation, we have suppressed the $\\theta$ dependence of the vector ${\\bf t}$, as well as the dependence on ${\\bf R}^{\\prime}$ and $\\theta^{\\prime}$. 
The physical meaning of these tensors is as follows: $\\rho({\\bf R}, s)$ is the probability density function (PDF) of finding configurations with ends at ${\\bf R}$ and ${\\bf R}^{\\prime}$, $\\mathbb{P}_{i}({\\bf R}, s)$ \nis the local average of the polymer conformational direction, $\\mathbb{Q}_{ij}\\left({\\bf R}, s\\right)$ \nis the correlation between the components $i$ and $j$ of the polymer direction ${\\bf t}$, etc.\n\nFrom the Hermans-Ullman Eq.~\\eqref{F-P}, it is possible to determine hierarchy equations for the multipolar tensors, which have already been shown for active particles in Refs. \\cite{Fal-Sevilla2014, Fal-Castro-Villarreal2018}.\nThe same hierarchy equations can also be found in the semiflexible polymer context. Integrating over the angle $\\theta$ in Eq.~\\eqref{F-P}, we obtain the following continuity-type equation\n\\begin{eqnarray}\n\\frac{\\partial \\rho({\\bf R}, s)}{\\partial s}=-\\partial_{i}\\mathbb{P}^{i}\\left({\\bf R}, s\\right). \n\\label{eq1}\n\\end{eqnarray}\nThe related equation for $\\mathbb{P}_{i}\\left({\\bf R},s\\right)$ is obtained by multiplying Eq.~\\eqref{F-P} by ${\\bf t}(\\theta)$, and using the definition of the tensor $\\mathbb{Q}_{ij}({\\bf R}, s)$. Thus, we find\n\\begin{eqnarray}\n\\frac{\\partial \\mathbb{P}_{i}({\\bf R}, s)}{\\partial s}=-\\frac{1}{2\\ell_{p}}\\mathbb{P}_{i}({\\bf R}, s)-\\frac{1}{2}\\partial_{i}\\rho({\\bf R},s)-\\partial^{j}\\mathbb{Q}_{ij}({\\bf R}, s).\\nonumber\\\\\n\\label{eq2}\n\\end{eqnarray}\nIn the same way, we obtain the equation for $\\mathbb{Q}_{ij}({\\bf R}, s)$,\n\\begin{eqnarray}\n\\frac{\\partial\\mathbb{Q}_{ij}({\\bf R}, s)}{\\partial s}=-\\frac{2}{\\ell_{p}}\\mathbb{Q}_{ij}({\\bf R}, s)-\\frac{1}{4}\\mathbb{T}_{ij}({\\bf R}, s)-\\partial^{k}\\mathbb{R}_{ijk}({\\bf R}, s), \\nonumber\\\\\n\\end{eqnarray}\nwhere $\\mathbb{T}_{ij}$ denotes the second-rank tensor $-\\delta_{ij}\\partial_{k}\\mathbb{P}^{k}+\\left(\\partial^{i}\\mathbb{P}^{j}+\\partial^{j}\\mathbb{P}^{i}\\right)$. 
Similarly, the equations for the rest of the tensorial fields can be computed recursively for consecutive ranks. Taking a combination of \\eqref{eq1} and \\eqref{eq2}, we observe that the PDF $\\rho({\\bf R}, s)$ and the two-rank tensor $\\mathbb{Q}_{ij}({\\bf R}, s)$ are involved in one equation given by\n\\begin{eqnarray}\n\\frac{\\partial^2\\rho({\\bf R}, s)}{\\partial s^2}+\\frac{1}{2\\ell_{p}}\\frac{\\partial\\rho({\\bf R}, s)}{\\partial s}=\\frac{1}{2}\\nabla^2\\rho({\\bf R}, s)+\\partial_{i}\\partial_{j}\\mathbb{Q}^{ij}({\\bf R}, s).\\nonumber\\\\\n\\label{eq3}\n\\end{eqnarray}\n\nIt is noteworthy that Eq. (\\ref{eq3}) is a modified version of the Telegrapher's equation \\cite{Fal-Masoliver1989}, where the term $\\partial_{i}\\partial_{j}\\mathbb{Q}^{ij}({\\bf R}, s)$ makes the difference. In the following, we use Eq. (\\ref{eq3}) for a semiflexible polymer in the open Euclidean plane as a test case. This allows us to verify the famous experimental result of Kratky and Porod \\cite{PSaito-Kratky1949} using this procedure.\n\n\\subsubsection{Example: Testing the Kratky-Porod result.}\\label{sectII}\nIn this section, we study the case of a semiflexible polymer on the Euclidean plane. 
In order to reproduce the well-known result of Kratky-Porod \\cite{PSaito-Kratky1949}, we apply the multipolar series method shown in the previous section to compute the mean square end-to-end distance.\nThe end-to-end distance is defined as $\\delta{\\bf R}:={\\bf R}-{\\bf R}^{\\prime}$, thus the mean square end-to-end distance is given by \n\\begin{equation}\n \\left<\\delta{\\bf R}^2\\right>\\equiv\\int_{\\mathbb{R}^{2}\\times\\mathbb{R}^{2}}\\rho({\\bf R}|{\\bf R}^{\\prime}; s)\\delta{\\bf R}^2~d^{2}{\\bf R}~d^2{\\bf R}^{\\prime}.\n \\label{MS}\n \\end{equation}\nTo compute this quantity, we use Eq.~\\eqref{eq3} to show that the l.h.s. of (\\ref{MS}) satisfies \n {\\small\\begin{eqnarray}\n\\frac{\\partial^2\\left<\\delta{\\bf R}^2\\right>}{\\partial s^2}+\\frac{1}{2\\ell_{p}}\\frac{\\partial\\left<\\delta{\\bf R}^2\\right>}{\\partial s}&=&\\int d^{2}{\\bf R}~d^2{\\bf R}^{\\prime}\\left(\\delta{\\bf R}\\right)^2\\times\\nonumber\\\\&&\\left[\\frac{1}{2}\\nabla^2\\rho({\\bf R}, s)+\\partial_{i}\\partial_{j}\\mathbb{Q}_{ij}({\\bf R}, s)\\right].\\nonumber\\\\\n\\label{eq4}\n\\end{eqnarray}}\nIntegrating by parts on the r.h.s. of \\eqref{eq4} with respect to ${\\bf R}$, using that $\\nabla^2\\delta{\\bf R}^2=4$ and the traceless condition $\\delta_{ij}\\mathbb{Q}^{ij}=0$, we have that $\\left<\\delta{\\bf R}^2\\right>$ satisfies the differential equation\n\n\\begin{eqnarray}\n\\frac{\\partial^2\\left<\\delta{\\bf R}^2\\right>}{\\partial s^2}+\\frac{1}{2\\ell_{p}}\\frac{\\partial\\left<\\delta{\\bf R}^2\\right>}{\\partial s}=2.\n\\label{eq5}\n\\end{eqnarray}\nNow, we solve this differential equation with the initial conditions $\\left<\\delta{\\bf R}^2\\right>=0$ and $\\frac{d}{ds}\\left<\\delta{\\bf R}^2\\right>=0$ at $s=0$. The final polymer length is denoted by $L$. 
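As a numerical cross-check (ours, not part of the original derivation), the ODE above for $y=\left<\delta{\bf R}^2\right>$, namely $y'' + y'/(2\ell_p) = 2$ with $y(0)=y'(0)=0$, can be integrated with an explicit Euler scheme and compared against the Kratky-Porod closed form $4\ell_p L - 8\ell_p^2(1-e^{-L/2\ell_p})$. The persistence length below is an arbitrary illustrative value.

```python
import math

def msd_exact(L, lp):
    # Closed-form Kratky-Porod mean-square end-to-end distance in 2D
    return 4.0 * lp * L - 8.0 * lp**2 * (1.0 - math.exp(-L / (2.0 * lp)))

def msd_numeric(L, lp, n=50000):
    # Explicit Euler integration of y'' + y'/(2 lp) = 2, with y(0) = y'(0) = 0
    h = L / n
    y, yp = 0.0, 0.0
    for _ in range(n):
        y += h * yp
        yp += h * (2.0 - yp / (2.0 * lp))
    return y

lp = 1.5   # illustrative persistence length
L = 10.0   # total polymer length
# For a small step size, the numeric and closed-form values closely agree,
# interpolating between the ballistic (L^2) and Gaussian (4 lp L) regimes.
```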
\n\nIn this way, we find that the mean square end-to-end distance is given by\n\\begin{eqnarray}\n\\left<\\delta{\\bf R}^2\\right>=4\\ell_{p}L-8\\ell_{p}^2\\left(1-\\exp\\left(-\\frac{L}{2\\ell_{p}}\\right)\\right),\n\\label{planoabierto}\n\\end{eqnarray}\nwhich is the standard Kratky-Porod result for semiflexible polymers confined to a plane \n\\cite{PSaito-Kratky1949, PSaito-Hermans1952}. The last result has two well-known asymptotic limits, namely, \n\\begin{eqnarray}\n\\left<\\delta{\\bf R}^2\\right>\\simeq\\left\\{\\begin{array}{cc}\n 4\\ell_{p}L, & {\\rm if }~L\\gg \\ell_{p},\\\\\n &\\label{asymptotics}\\\\ \n L^2, & {\\rm if}~L\\ll\\ell_{p}.\\\\\n \\end{array}\\right.\n\\end{eqnarray}\n\nIn the first case, the polymer conformations are equivalent to Brownian trajectories, and the polymer is called a Gaussian polymer \\cite{PDoi-Doi1988book}. In the second case, the polymer takes only one configuration; it goes in a straight line, which is known as the ballistic limit. We remark that the result in Eq.~\\eqref{planoabierto} is usually obtained by using different analytical approaches (for example, see Appendix \\ref{A} and Refs. \\cite{MGD-Kamien2002, PSaito-Saito1967}).\n \n In the next section, we address the study of a polymer confined to a flat compact domain within the approach developed above.\n\n\\section{Semiflexible polymer in a compact plane domain}\n\n\\subsection{General expressions for a semiflexible polymer in an arbitrary compact domain}\\label{sectTE}\nIn this section, we apply the hierarchy equations developed in section \\ref{sect3} in order to determine the conformational states of a semiflexible polymer confined to a flat compact domain $\\mathcal{D}$. Commonly, it is necessary to truncate the hierarchy equations at some rank. 
For instance, at first order, if we consider $\\mathbb{P}_{i}({\\bf R}, s)$ as a constant vector field, then (\\ref{eq1}) implies that $\\rho({\\bf R}, s)$ is uniformly distributed, which is clearly not an accurate description, since it would imply that the mean square end-to-end distance is constant for all values of the polymer length $s$. An improved approximation consists of truncating at the second hierarchy rank, which corresponds to assuming that $\\mathbb{Q}_{ij}({\\bf R}, s)$ is uniformly distributed. Indeed, the truncation approximation gets better the larger the polymer length is since, as pointed out in \\cite{Fal-Castro-Villarreal2018}, from Eqs. (\\ref{eq1}) and (\\ref{eq2}) one can conclude that the tensors $\\mathbb{P}_{i}({\\bf R}, s)$ and $\\mathbb{Q}_{ij}({\\bf R}, s)$ damp out as $e^{-L\/(2\\ell_{p})}$ and $e^{-2 L\/\\ell_{p}}$, respectively. From these expressions, clearly $\\mathbb{Q}_{ij}({\\bf R}, s)$ damps out more strongly than $\\mathbb{P}_{i}({\\bf R}, s)$ for larger polymer lengths. In the polymer context, this means that the tangent directions of the polymer are uniformly correlated.\n\n In the following, let us define a characteristic length $a$ associated with the size of the compact domain $\\mathcal{D}$; if we scale the polymer length $s$ with $a$, one can consider $2a\/\\ell_{p}$ as a dimensionless attenuation coefficient associated with the damping of $\\mathbb{Q}_{ij}({\\bf R}, s)$. Thus, as long as we consider cases where $2a\/\\ell_{p}$ is far from 1, we may neglect the contribution of $\\mathbb{Q}_{ij}({\\bf R}, s)$. 
Here, we consider this latter case; therefore, according to (\\ref{eq3}), we take the Telegrapher's equation as the governing equation of the PDF $\\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s)$, that is, \n\\begin{eqnarray}\n\\frac{\\partial^2\\rho({\\bf R}, s)}{\\partial s^2}+\\frac{1}{2\\ell_{p}}\\frac{\\partial\\rho({\\bf R}, s)}{\\partial s}=\\frac{1}{2}\\nabla^2\\rho({\\bf R}, s),\n\\label{eq6}\n\\end{eqnarray}\nwith the initial conditions \n\\begin{subequations}\n\\label{IC}\n\\begin{align}\n\\label{ICa}\n\\lim_{s\\to 0}\\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s)=&\\delta^{\\left(2\\right)}({\\bf R}-{\\bf R}^{\\prime}),\\\\\n\\lim_{s\\to 0}\\frac{\\partial \\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s)}{\\partial s}=&0.\n\\label{ICb}\n\\end{align}\n\\end{subequations}\nThese conditions have the following physical meaning. Clearly, Eq.~\\eqref{ICa} means that the polymer ends coincide when the polymer length is zero, whereas Eq.~\\eqref{ICb} means that the polymer length does not change spontaneously. Since the polymer is confined to a compact domain, we also impose a Neumann boundary condition\n\\begin{eqnarray}\n\\left.\\nabla \\rho\\left(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s\\right)\\right|_{{\\bf R}, {\\bf R}^{\\prime}\\in\\partial \\mathcal{D}}&=&0, ~~~~\\forall s, \n\\label{BC}\n\\end{eqnarray}\nwhere $\\partial \\mathcal{D}$ is the boundary of $\\mathcal{D}$. \nThis boundary condition means that the polymer does not cross the boundary enclosing the domain. \n\nTo solve the differential equation \\eqref{eq6}, we use the standard method of separation of variables \\cite{Fal-Feshbach1953}. This method requires solving the so-called Neumann eigenvalue problem. It consists of finding all possible real values $\\lambda$ for which there exists a non-trivial solution $\\psi\\in C^{2}(\\mathcal{D})$ that satisfies the eigenvalue equation $-\\nabla^2\\psi=\\lambda\\psi$, and the Neumann boundary condition \\eqref{BC}. 
In this case, the set of eigenvalues is a sequence $\\lambda_{{\\bf k}}$ with ${\\bf k}$ in a countable set $I$, and each associated eigenspace is finite dimensional. These eigenspaces are orthogonal to each other in the space of square-integrable functions $L^2(\\mathcal{D})$ \\cite{Fal-Chavel1984, Fal-Feshbach1953}. That is, the sequence $\\lambda_{\\bf k}$ is associated with a set of eigenfunctions $\\{ \\psi_{\\bf k}({\\bf R})\\}$ that satisfy the orthonormality relation\n\\begin{eqnarray}\n\\int_{\\mathcal{D}}\\psi_{\\bf k}\\left({\\bf R}\\right)\\psi_{\\bf k^{\\prime}}\\left({\\bf R}\\right)d^{2}{\\bf R}=\\delta_{{\\bf k}, {\\bf k}^{\\prime}}. \\label{ortogonal}\n\\end{eqnarray}\n\n Next, we expand the probability density function $\\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s)$ as a linear combination of the eigenfunctions $\\{ \\psi_{\\bf k}({\\bf R})\\}$, that is, the spectral decomposition $\\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}; s)=\\sum_{{\\bf k}}g_{\\bf k}(s)\\psi_{\\bf k}({\\bf R})\\psi_{\\bf k}({\\bf R}^{\\prime})$. Substituting this series into the Telegrapher's equation \\eqref{eq6}, we find that the functions $g_{\\bf k}(s)$ satisfy the ordinary differential equation\n\\begin{eqnarray}\n\\frac{d^2g_{\\bf k}(s)}{ds^2}+\\frac{1}{2\\ell_{p}}\\frac{dg_{\\bf k}(s)}{ds}+\\frac{1}{2}\\lambda_{\\bf k}g_{\\bf k}(s)=0, \n\\label{eq8}\n\\end{eqnarray}\nwhere the initial conditions (\\ref{IC}) imply $g_{\\bf k}(0)=1$ and $dg_{\\bf k}(0)\/ds=0$. 
Therefore, the solution is given by \n\\begin{eqnarray}\ng_{\\bf k}(s)=G\\left(\\frac{s}{4\\ell_{p}}, 8\\ell_{p}^2\\lambda_{\\bf k}\\right),\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n{\\scriptsize G(v, w)=e^{-v}\\left[\\cosh\\left(v\\sqrt{1-w}\\right)+ \\frac{\\sinh\\left(v\\sqrt{1-w}\\right)}{\\sqrt{1-w}}\\right]}.\\nonumber\\\\\n\\label{Gfunction}\n\\end{eqnarray}\n\nFinally, using the above information the probability density function is given by \n\\begin{eqnarray}\n\\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s)=\\frac{1}{A\\left(\\mathcal{D}\\right)}\\sum_{{\\bf k}\\in I}G\\left(\\frac{s}{4\\ell_{p}}, 8\\ell_{p}^2\\lambda_{\\bf k}\\right)\\psi_{\\bf k}({\\bf R})\\psi_{\\bf k}({\\bf R}^{\\prime}), \\nonumber\\\\\\label{pdf}\n\\end{eqnarray}\nwhere $A(\\mathcal{D})$ is the area of the domain $\\mathcal{D}$, which is needed in order to have a normalized probability density function in the space $\\mathcal{D}\\times{\\mathcal{D}}$. Then, we have that $\\rho(\\left.{\\bf R}\\right|{\\bf R}^{\\prime}, s)d^{2}{\\bf R}d^{2}{\\bf R}^{\\prime}$ is the probability of having a polymer in a conformational state with polymer length $s$, and ends at ${\\bf R}$ and ${\\bf R}^{\\prime}$. 
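As a numerical aside (not part of the original derivation), the damping function $G(v,w)$ of Eq.~(\ref{Gfunction}) can be implemented directly: for $w>1$ the square root becomes imaginary and the hyperbolic functions turn into trigonometric ones, which a complex square root handles automatically. The sketch below also cross-checks the closed form against a Runge-Kutta integration of the ODE (\ref{eq8}); the function names are ours.

```python
import cmath
import math

def G(v, w):
    """Damping function of Eq. (Gfunction). A complex square root keeps the
    formula valid for w > 1, where sqrt(1 - w) becomes imaginary and the
    hyperbolic functions turn into trigonometric ones."""
    z = cmath.sqrt(1.0 - w)
    if abs(z) < 1e-12:                 # w -> 1 limit: sinh(v z)/z -> v
        return math.exp(-v) * (1.0 + v)
    val = cmath.cosh(v * z) + cmath.sinh(v * z) / z
    return (cmath.exp(-v) * val).real

def rk4_g(lp, lam, s_max, n=10000):
    """Integrate g'' + g'/(2 lp) + lam g / 2 = 0 with g(0)=1, g'(0)=0 by RK4;
    the result at s_max should equal G(s_max/(4 lp), 8 lp^2 lam)."""
    h = s_max / n
    g, gp = 1.0, 0.0
    acc = lambda g_, gp_: -gp_ / (2.0 * lp) - 0.5 * lam * g_
    for _ in range(n):
        k1g, k1p = gp, acc(g, gp)
        k2g, k2p = gp + 0.5 * h * k1p, acc(g + 0.5 * h * k1g, gp + 0.5 * h * k1p)
        k3g, k3p = gp + 0.5 * h * k2p, acc(g + 0.5 * h * k2g, gp + 0.5 * h * k2p)
        k4g, k4p = gp + h * k3p, acc(g + h * k3g, gp + h * k3p)
        g += h * (k1g + 2 * k2g + 2 * k3g + k4g) / 6.0
        gp += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6.0
    return g
```

For instance, $G(v,0)=1$ for any $v$ (no damping of the zero mode), and for $\ell_p=1/2$, $\lambda=3$ the closed form agrees with the direct integration of the ODE up to the integration error; the bound $G(v,w)\leq 1$ used below Eq.~(\ref{sol}) also holds on a numerical grid.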
Additionally, using the expression (\\ref{pdf}), the mean square end-to-end distance can be computed in the standard fashion by\n\\begin{eqnarray}\n\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}=\\sum_{{\\bf k}\\in I}a_{k}~G\\left(\\frac{s}{4\\ell_{p}}, 8\\ell_{p}^2\\lambda_{\\bf k}\\right), \n\\label{gen-sol}\n\\end{eqnarray}\nwhere the coefficients $a_k$ are obtained by \n\\begin{eqnarray}\na_{k}=\\int_{\\mathcal{D}\\times{\\mathcal{D}}}\\left({\\bf R}-{\\bf R}^{\\prime}\\right)^2 \\psi_{\\bf k}({\\bf R})\\psi_{\\bf k}({\\bf R}^{\\prime})~d^{2}{\\bf R}~d^{2}{\\bf R}^{\\prime}.\n\\label{coef}\n\\end{eqnarray}\n\nIn the following, we discuss the specific case where the polymer is enclosed in a square box.\n\n\\subsection{Example: semiflexible polymer in a square domain}\\label{thdomain}\n\nIn this section, we study the case where the semiflexible polymer is enclosed in a square box $\\mathcal{D}=\\left[0,a\\right]\\times\\left[0, a\\right]$. For this domain, it is well known that the eigenfunctions of the Laplacian operator $\\nabla^2$ correspond to a combination of products of trigonometric functions \\cite{Fal-Chavel1984}. That is, for each pair of non-negative integers $(n, m)$ and positions ${\\bf R}=(x,y)\\in\\mathcal{D}$, it is not difficult to show that the eigenfunctions of the Laplacian operator consistent with (\\ref{BC}) are \n\\begin{eqnarray}\n\\psi_{\\bf k}\\left({\\bf R}\\right)=\\frac{2}{a}\\cos\\left(\\frac{\\pi n}{a}x\\right)\\cos\\left(\\frac{\\pi m}{a}y\\right).\\nonumber\n\\end{eqnarray}\nThese functions constitute a complete orthonormal basis satisfying \\eqref{ortogonal}. The corresponding eigenvalues are $\\lambda_{\\bf k}={\\bf k}^2$, with ${\\bf k}=\\left(\\frac{\\pi n}{a}, \\frac{\\pi m}{a}\\right)$. \n\nNow, we proceed to determine the coefficients $a_{\\bf k}$ using Eq.~\\eqref{coef} in order to give an expression for the mean square end-to-end distance. 
By straightforward calculation, the coefficients $a_{\\bf k}$ are given explicitly by \n\\begin{eqnarray}\na_{\\bf k}=\\left\\{\\begin{array}{cc}\n\\frac{1}{3}a^2, & {\\bf k}=0,\\\\\n-\\frac{4a^2}{\\pi^4}\\left(\\frac{\\left(1-(-1)^{n}\\right)}{n^4}\\delta_{m,0}+ \\frac{\\left(1-(-1)^{m}\\right)}{m^4}\\delta_{n,0}\\right), & {\\bf k}\\neq 0.\n\\end{array}\\right.\\nonumber\\\\\n\\end{eqnarray}\nUpon substituting these coefficients in the general expression \\eqref{gen-sol}, we find that\n\\begin{eqnarray}\n\\frac{\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}}{a^2}=\\frac{1}{3}-\\sum_{n\\in 2\\mathbb{N}+1 }\\frac{32}{\\pi^4 n^4}G\\left(\\frac{L}{4\\ell_{p}}, 8\\pi^2\\left(\\frac{\\ell_{p}}{a}\\right)^2n^2\\right),\\nonumber\\\\\n\\label{sol}\n\\end{eqnarray}\nwhere $2\\mathbb{N}+1$ is the set of odd natural numbers. Since the function $G(v,w)$ satisfies $G(v, w)\\leq 1$ for all positive real numbers $v$ and $w$, the series in Eq.~\\eqref{sol} is convergent for all values of $L\/\\ell_{p}$ and $\\ell_{p}\/a$. Considering this last property, it is possible to prove the following assertions.\n\n\\begin{prop} \\label{result1}\nLet $L\/\\ell_{p}$ be any positive real number; then the mean square end-to-end distance \\eqref{sol} obeys \n\\begin{eqnarray}\n \\lim_{\\ell_{p}\/a\\to 0}\\frac{\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}}{\\ell^2_{p}}=\\frac{4L}{\\ell_{p}}-8\\left(1-\\exp\\left(-\\frac{L}{2\\ell_{p}}\\right)\\right). 
\\nonumber\n\\end{eqnarray}\n\\end{prop}\n\n\\begin{prop}\\label{result2} Let $L\/\\ell_{p}$ and $\\ell_{p}\/a$ be any positive real numbers and let $c=2\/3-64\/\\pi^4$; then the mean square end-to-end distance \\eqref{sol} obeys\n\\begin{eqnarray}\n 0\\leq \\frac{\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}}{a^2}\\leq \\frac{2}{3},\\nonumber\\end{eqnarray}\nand\n\\begin{eqnarray}\n 0\\leq \\frac{\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}}{a^2}-\\left(\\frac{1}{3}-\\frac{1}{3}G\\left(\\frac{L}{4\\ell_{p}}, 8\\pi^2\\frac{\\ell^2_{p}}{a^2}\\right)\\right)\\leq c.\\nonumber\n\\end{eqnarray}\n\n\\end{prop}\n\nClaim {\\bf \\ref{result1}} recovers the Kratky-Porod result for the mean square end-to-end distance (see Eq.~\\eqref{planoabierto}). Claim {\\bf\\ref{result2}} means that the mean-square end-to-end distance is bounded from below by $0$ and from above by $2a^2\/3$. In addition, this second claim also provides an approximation formula for $\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}$: for all values of $L\/\\ell_{p}$ and $\\ell_{p}\/a$ such that the condition\n\\begin{eqnarray}\n1-G\\left(\\frac{L}{4\\ell_{p}}, 8\\pi^2\\frac{\\ell^2_{p}}{a^2}\\right)\\gg 3c\\label{cond}\n\\end{eqnarray}\nholds, one has the following approximation\n{\\small\\begin{eqnarray}\n\\frac{\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}}{a^2}&\\simeq&\\frac{1}{3}-\\frac{1}{3}\\exp\\left(-\\frac{L}{4\\ell_{p}}\\right)\\nonumber\\\\\n&\\times&\\left\\{ \\cosh\\left[\\frac{L}{4\\ell_{p}}\\left(1-8\\pi^2\\frac{\\ell^2_{p}}{a^2}\\right)^{\\frac{1}{2}}\\right]\\right.\\nonumber\\\\&+&\\left.\\left(1-8\\pi^2\\frac{\\ell^2_{p}}{a^2}\\right)^{-\\frac{1}{2}}\\sinh\\left[\\frac{L}{4\\ell_{p}}\\left(1-8\\pi^2\\frac{\\ell^2_{p}}{a^2}\\right)^{\\frac{1}{2}}\\right]\\right\\}.\\nonumber\\\\ \n\\label{approx2}\n\\end{eqnarray}}\nLet us point out that this approximation is valid provided that condition (\\ref{cond}) holds, that is, whenever one can neglect 
the value $c$ (see Appendix \\ref{coefficients} for the proofs of both claims). \n\nNext, let us remark that for any fixed value of $a$, the r.h.s. of (\\ref{approx2}), as a function of $L$, shows the existence of a critical persistence length, $\\ell^{*}_{p}\\equiv a\/(\\pi\\sqrt{8})$, such that for all values $\\ell_{p}>\\ell^{*}_{p}$ it exhibits an oscillating behavior, whereas for $\\ell_{p}<\\ell^{*}_{p}$ it is monotonically increasing. In addition, for each value of $\\ell_{p}$ the function (\\ref{approx2}) converges to $1\/3$ as long as $L\\gg a$. The critical persistence length therefore distinguishes two conformational behaviors of the semiflexible polymer enclosed in the square box. \nIn Fig. \\ref{MSD-ex}, the mean-square end-to-end distance, Eq. (\\ref{sol}) and the r.h.s. of (\\ref{approx2}), is shown for the ratios $\\ell_{p}\/a=1\/32, 1\/16,1\/8, 1\/4,1\/2, 1$, where both conformational states can be appreciated. Furthermore, one of the most intriguing features of the approximation (\\ref{approx2}) is its structural similarity to the corresponding result for a polymer wrapping a spherical surface. Indeed, let us remark that \\eqref{approx2} has the same mathematical structure as the mean square end-to-end distance found by Spakowitz and Wang in \\cite{PSaito-Spakowitz2003}, exhibiting both conformational states. \n\nIn the next section, we address the study of the semiflexible polymer through a Monte Carlo algorithm in order to corroborate the results found here.\n\n\n\n\n\\section{Monte Carlo-Metropolis algorithm for semiflexible polymers }\\label{mc}\n\nHere, we develop a Monte Carlo algorithm to be used in computer simulations in order to study the conformational states of a semiflexible polymer enclosed in a compact domain. 
In particular, the algorithm will be suited to the square domain, which will additionally allow us to validate the analytical approximations of the previous section.\n \nAs we have emphasized above, the worm-like chain model is the suitable framework to describe the spatial distribution of semiflexible polymers, which are modeled as $n$ beads consecutively connected by $n-1$ rigid bonds, called Kuhn's segments \\cite{Fal-Yamakawa1971}. Each bead acts as a pivot, allowing us to define an angle $\\theta_i$ between two consecutive bonds, where $i$ is the label of the $i-$th bead. This model requires a potential energy description where all possible contributions due to bead-bead, bead-bond and bond-bond interactions are taken into account. In a general setting, energies of bond-stretching, elastic bond angle, electrostatic interaction, torsional potential, etc., should be considered, as in Refs.~\\cite{Wlc-Qiu2016, Psim-Chirico1994, Psim-Allison1989,Psim-DeVries2005,Psim-Jian1997,Psim-Jian1998}. However, here we are interested solely in the study of the possible spatial configurations of a single semiflexible polymer enclosed in a compact domain $\\mathcal{D}$, such as the one shown in Fig. \\ref{frame}. Thus, in our case we only take into account two energetic contributions, namely, the elastic bond angle and the wall-polymer interaction. The first contribution, the elastic bond-angle energy, is given by\n\\begin{equation}\nE_b=\\frac{g}{2}\\sum_i \\theta_i^2,\n\\end{equation}\nwhere $\\theta_i$ is the angle between two consecutive bonds, and $g=\\alpha\/l_{0}$, where we recall that $\\alpha$ is the bending rigidity and $l_{0}$ is the Kuhn length. 
In addition, we must consider the wall-polymer interaction given by\n\\begin{eqnarray}\n E_{w}=\n \\begin{cases}\n 0, & \\text{if all beads are in $\\mathcal{D}$}, \\\\\n \\infty, & \\text{if there are beads outside of $\\mathcal{D}$}.\n \\end{cases}\n\\end{eqnarray}\n\nIn the algorithm, the acceptance criteria for changes of the polymer spatial configuration are based on a Gaussian distribution function. In this context, we generate random chains enclosed in $\\mathcal{D}$, constituted by $N$ bonds of constant Kuhn length, implementing a growth algorithm. Our computational realization consists of bead generation according to the following conditions:\n{\\it starting bead}, {\\it beads far from walls}, {\\it beads near to walls}, and {\\it selection problem},\nwhich are explained in the following subsections. \n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=.45]{contornob}\n\\caption{A generic compact domain $\\mathcal{D}$ is shown, where the boundary wall $\\partial\\mathcal{D}$ is represented by the black continuous line. The condition defining the beads near the wall is represented by the blue filled region of width $l_s$. }\n\\label{frame}\n\\end{figure}\n\n\n\\subsection*{Starting bead}\n\nThis condition describes the process of initial bead generation and the acceptance criterion for the second bead. Let us assume, without loss of generality, that the origin of $\\mathbb{R}^2$ belongs to $\\mathcal{D}$. We choose the initial bead at $\\mathbf{x}_0$ as a uniformly distributed random point in the region $\\mathcal{D}$. We define the auxiliary vector $\\mathbf{R}_{l_0}=(l_0,0)$, which is parallel to the horizontal axis; it will allow us to determine whether the next bead is inside $\\mathcal{D}$. Also, we consider the angle $\\theta_0$ formed between the horizontal axis and the first bond, which is drawn from a uniform distribution on the interval $[0,2\\pi]$. 
Now, we compute the following vector\n\\begin{equation}\n\\mathbf{R}'=\\mathcal{R}\n\\left(\\theta_0\\right){\\mathbf{R}_{l_0}}^T,\n\\end{equation}\nwhere $\\mathcal{R}\n\\left(\\theta_0\\right)$\ndenotes the two-dimensional rotation matrix by an angle $\\theta_0$, defined as\n\\begin{equation}\n\\mathcal{R}\\left(\\theta_0\\right)=\n\\begin{bmatrix}\n\\cos \\theta_0 & -\\sin \\theta_0 \\\\\n\\sin \\theta_0 & \\cos \\theta_0 \\\\\n\\end{bmatrix},\n\\end{equation}\nand the superscript denotes the transposition of $\\mathbf{R}_{l_0}$. The resulting vector $\\mathbf{x}_0+\\mathbf{R}'^T$ will be the position of the second bead only if it is inside $\\mathcal{D}$. If so, the new bead is denoted by $\\mathbf{x}_1$ and the vector is updated as $\\mathbf{R}_{l_0}=\\mathbf{R}'^T$. Otherwise, we repeat the process until we find an angle for which the second bead is enclosed in the domain.\n\n\\subsection*{Beads far from walls}\\label{bffw}\n\nThis condition describes the method of subsequent bead generation. We say that a bead is far from walls if the perpendicular distance between the boundary $\\partial\\mathcal{D}$ and the bead is greater than a particular distance $l_{s}$. In this case, if the $(k-1)$-th bead satisfies this condition, we generate the subsequent bead by taking a random angle $\\theta_{k}$ distributed according to a Gaussian density function\n $\\mathcal{N}(0,l_{0}\/\\ell_{p})$, where we recall that $\\ell_{p}$ is the polymer persistence length. As in the previous condition, we compute the vector $\\mathbf{R}'=\\mathcal{R}\\left(\\theta_{k}\\right){\\mathbf{R}_{l_0}}^T$ corresponding to the $k$-th rotation of $\\mathbf{R}_{l_0}$. If the resulting bead lies inside $\\mathcal{D}$, it is accepted and the bond vector is updated as before; otherwise, a new angle is drawn. From the generated chains, the mean-square end-to-end distance is computed as $\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}=\\langle (\\mathbf{x}(L)-\\mathbf{x}_0)^2 \\rangle$. 
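The growth rules just described can be condensed into a short sketch. The code below is our own minimal illustration for a square box: it keeps only the starting-bead and far-from-wall rules (the near-wall band of width $l_s$ is omitted for brevity), and it reads $\mathcal{N}(0,l_0/\ell_p)$ as a Gaussian of variance $l_0/\ell_p$, so that $\langle\theta_k^2\rangle=l_0/\ell_p$; the function names are assumptions of ours.

```python
import math
import random

def rotate(vx, vy, th):
    # two-dimensional rotation matrix R(th) applied to the bond vector
    return (math.cos(th) * vx - math.sin(th) * vy,
            math.sin(th) * vx + math.cos(th) * vy)

def grow_chain(n_bonds, l0, lp, a, seed=1):
    """Grow a chain of n_bonds Kuhn segments of length l0 inside [0,a]x[0,a].
    Near-wall rules (the l_s band) are omitted in this simplified sketch."""
    rng = random.Random(seed)
    sigma = math.sqrt(l0 / lp)         # assumed variance l0/lp for the bending angle
    x = [(rng.uniform(0, a), rng.uniform(0, a))]   # starting bead, uniform in D
    # first bond: uniform direction, redrawn until the second bead is inside D
    while True:
        th0 = rng.uniform(0.0, 2.0 * math.pi)
        bx, by = rotate(l0, 0.0, th0)
        px, py = x[0][0] + bx, x[0][1] + by
        if 0.0 <= px <= a and 0.0 <= py <= a:
            x.append((px, py))
            break
    # subsequent beads: Gaussian bending angle, rejected if outside D
    while len(x) <= n_bonds:
        th = rng.gauss(0.0, sigma)
        nbx, nby = rotate(bx, by, th)
        px, py = x[-1][0] + nbx, x[-1][1] + nby
        if 0.0 <= px <= a and 0.0 <= py <= a:
            bx, by = nbx, nby
            x.append((px, py))
    return x
```

Averaging $(\mathbf{x}(L)-\mathbf{x}_0)^2$ over many such chains then gives the simulated $\left<\delta{\bf R}^2\right>_{\mathcal{D}}$.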
In Sec.~\\ref{results} we shall analyze the results obtained for $\\left<{\\delta \\bf R}^2\\right>_{\\mathcal{D}}$ using this algorithm as a function of the persistence length and the polymer length when $\\mathcal{D}$ is a square box.\n\n\\subsection*{Selection problem }\n\nThe selection problem consists of choosing the adequate value of $l_{s}$. This value should be suitable to avoid over- or under-bending of the polymer. For instance, if $\\ell_p$ is comparable with the size of $\\mathcal{D}$, and $l_s$ is not appropriate to promote chain bending when the polymer is near the boundary $\\partial\\mathcal{D}$, the generation of beads outside of the domain will be favored. As a result, the polymer will exhibit sharp bending angles where the chain meets the boundary. The selection problem is resolved using dimensional analysis of $\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}$. Indeed, observe that in the continuous limit of the chain, the mean-square end-to-end distance can be written as $\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}=a^2 g(\\ell_{p}\/a, L\/a)$, where $g(\\ell_{p}\/a, L\/a)$ is a dimensionless function. Then, we choose $l_{s}$ such that the mean-square end-to-end distance computed with the simulation data depends only on the combinations $\\ell_{p}\/a$ and $L\/a$. In other words, we calculate $k$ profiles of $\\left<\\delta{\\bf R}^2\\right>_{\\mathcal{D}}\/a^2$ for $k$ pairs $(\\ell^{1}_{p}, a^{1}),(\\ell^{2}_{p}, a^{2}), \\cdots, (\\ell^{k}_{p}, a^{k})$, with $\\ell^{i}_{p}\/a^{i}$ fixed for all $i=1,\\cdots, k$, and then we choose $l_{s}$ such that all these profiles collapse onto a unique curve. \n\n\\section{Semiflexible polymers enclosed in a square box: simulation vs analytical results}\\label{results}\nIn this section, we implement the algorithm explained in the preceding section for the particular case of a polymer enclosed in a square box of side $a$. 
In this case, let us first note that for beads near the corners, the conditions promoting chain bending must be checked for both adjacent walls at the same time. \nNext, in the simulation we set our unit length as $d=10^{2}~l_{0}$. Now, we present the selection of $l_{s}$ according to the last part of the general algorithm. Thus, for the fixed ratios $\\ell_{p}\/a=1\/50, 1\/32, 1\/16,1\/8, 1\/4,1\/2, 1$, we study three profiles corresponding to the values $a\/d=5,10,15$, respectively. In Table \\ref{tab1} the selected values of $l_{s}$ are shown, obtained by collapsing the three profiles onto a unique curve. \n\n\\begin{table}[h!]\n\\caption{\\label{tab1}Values of $l_s$ used in simulations for different values of persistence length $\\ell_p$.}\n\\begin{tabular}{|c| c|}\n\\hline\n$\\ell_p\/a$ & $ l_s\/a$\\\\\n\\hline\n1 & 0.085\\\\\n1\/2 & 0.065\\\\\n1\/4 & 0.050\\\\\n1\/8 & 0.040\\\\\n1\/16 & 0.010\\\\\n1\/32 & 0.005\\\\\n$\\leq$1\/50 & 0 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\nThe results shown in this section were computed as the average over $10^6$ spatial configurations of confined polymers, which were obtained using the algorithm described in the previous section. In particular, we shall study two regimes defined by comparing the box side $a$ with the polymer length $L$. The first one, polymers in weak confinement ($L\/a\\leq 1$), is discussed in subsection~\\ref{noconfinado}. The second one, polymers in strong confinement, corresponding to the situation when the polymer length is larger than the box side, is discussed in subsection~\\ref{confinado}.\n\n\n\n\n\\subsection{Polymer in weak confinement}\\label{noconfinado}\n\nIn this regime, once the selection problem for $l_{s}$ has been solved, we present the simulation results for polymers enclosed in a square box of side $a=10~d$ for different values of the persistence length. In Fig.~\\ref{ex-short}, examples of semiflexible polymers in weak confinement are shown. 
Notice that for very short persistence length, $\\ell_p\\simeq l_0$, the chain looks like a very curly string, resembling a random walk, whereas when $\\ell_p$ increases, the polymer adopts uncoiled configurations.\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=1.]{ex-shortb}\n\\caption{Examples of semiflexible polymers, with length $L=a$, in the weak confinement regime for several values of the persistence length. Solid black lines represent the walls of the box.}\n\\label{ex-short}\n\\end{figure} \n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=1]{test-MSD-short2b}\n\\caption{Mean square end-to-end distance for confined polymers (points) with several persistence lengths. The deviation from the predictions for a polymer in an infinite plane (solid lines) given by Eq.~\\eqref{planoabierto} arises because, on average, the chain meets the walls at lengths $L\\simeq a$.}\n\\label{fig-MSDs}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=1.]{MSD-short-weakb}\n\\caption{Mean square end-to-end distance for polymers in the Gaussian chain limit generated by the algorithm described in Sec.~\\ref{mc}.}\n\\label{gchain}\n\\end{figure}\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=1.]{MSD-Wangb}\n\\caption{Universal behavior of the mean square end-to-end distance in Fig.~\\ref{fig-MSDs} in the scaling $\\langle {\\delta \\bf R}^2 \\rangle\/ l_p^2$ vs $L\/l_p$ (solid lines). Dashed lines are references for the ballistic and diffusive regimes for polymers in weak confinement.}\n\\label{fig-MSDs-Wang}\n\\end{figure}\n\n\n\n\n\nThe mean-square end-to-end distance is shown in Fig. \\ref{fig-MSDs}, where one notes that it increases as $\\ell_p$ increases, allowing the polymer to explore more surface as a function of the total polymer length. 
Also, notice that for small polymer lengths the mean square end-to-end distance is in excellent agreement with the results for semiflexible polymers in an infinite plane given by Eq.~\\eqref{planoabierto}. Conversely, for polymer lengths around $L\\simeq a$ we observe a slight deviation between the mean square end-to-end distance and the infinite-plane solution (\\ref{planoabierto}), because of the finite size of the box. Nevertheless, for small persistence lengths ($\\ell_p\\simeq 10^{-3}a$), the mean square end-to-end distance is well fitted by Eq.~\\eqref{planoabierto}. In this case, the polymer does not seem to be affected by the walls, since the area explored by the chain is too small, on average, for the chain to meet the walls.\n\nFurthermore, when the mean square end-to-end distance and the polymer length are scaled by $l_p^2$ and $l_p$, respectively, we find that the data shown in Fig.~\\ref{fig-MSDs} and Fig.~\\ref{gchain} collapse into the unique plot shown in Fig.~\\ref{fig-MSDs-Wang}, evidencing a ballistic behavior for small values of $L\/\\ell_p$ followed by a ``diffusive'' regime for large values of $L\/\\ell_{p}$. These results correspond to the well-known asymptotic limits of the Kratky-Porod result (\\ref{asymptotics}). In addition, it is worth mentioning that these asymptotic limits are also reported in \\cite{PSaito-Spakowitz2003} for a semiflexible polymer wrapping a spherical surface in the corresponding plane limit. \n\n\\subsection{Polymer in strong confinement}\\label{confinado}\n\nIn this section, we discuss the case of a polymer enclosed in a square box when its length is large enough to touch the walls and to interact several times with them. We perform simulations in order to generate polymers of lengths up to $L\/a=10$ (chains of $10^4$ beads) for persistence lengths $\\ell_{p}\/a=1\/32, 1\/16,1\/8, 1\/4,1\/2, 1$ and for the values of box side $a\/d=5,10,$ and $15$, respectively. 
These simulations use the values of $l_s$ shown in Table~\\ref{tab1}. Examples of these polymers are shown in Fig. \\ref{evo-ex} and in Fig.~\\ref{MSD-ex}. \n\\begin{figure}[h!]\n\\centering\n\\includegraphics[scale=1]{test10b}\n\\caption{Semiflexible polymer realizations for the value of persistence length $l_p=a$. Each column shows a polymer with a particular length, from left to right $L\/a=2$, $L\/a=4$, $L\/a=6$, $L\/a=8$, and $L\/a=10$, respectively. Black filled circles indicate the ends of the polymer. It is also shown how the polymer rolls up around the square box as the polymer length becomes larger. }\n\\label{evo-ex}\n\\end{figure}\n\\begin{figure*}\n\\centering\n\\includegraphics[scale=.68]{MSD-ex4b}\n\\caption{\\small Examples of polymers, in the first and third columns, are shown as solid green lines, where the initial and final beads are represented by black filled circles, and solid black lines represent the walls of the box. The figure also shows, in the second and fourth columns, the mean-square end-to-end distance for polymers in strong confinement as a function of the polymer length: the bold black line represents the superposition of the theoretical prediction (Eqs. (\\ref{sol}) and (\\ref{approx2})) with the simulation results shown for different box sides $a\/d=5$ (blue triangles), $a\/d=10$ (red squares) and $a\/d=15$ (green circles). }\n\\label{MSD-ex}\n\\end{figure*}\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=1]{msd-Wang-strong22b}\n\\caption{\\small Mean square end-to-end distance for polymers in strong confinement as a function of the persistence length. Note that $\\langle \\delta \\mathbf{R} ^2 \\rangle$ shows an oscillating behavior for values of the persistence length satisfying $\\ell_p\/a>1\/8$, which is the same signature as for the mean square end-to-end distance of polymers confined to a sphere. 
Dashed and dotted lines are plotted as references for the ballistic and diffusive behaviors, respectively.}\n\\label{MSD-Wang-strong}\n\\end{figure}\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[scale=1]{convb}\n\\caption{\\small Convergence of the mean square end-to-end distance to $\\langle \\delta \\mathbf{R}^2\\rangle_{\\mathcal{D}}=a^2\/3$ for very large polymers in strong confinement as a function of the ratio $a\/l_p$.}\n\\label{MSD-conv}\n\\end{figure}\n\n\nIn Fig.~\\ref{MSD-ex}, we also report the mean square end-to-end distance scaled by $a^2$ as a function of $L\/a$ for the different box sides ($a\/d=5$, $10$, and $15$). In addition, in Fig. \\ref{MSD-Wang-strong} we also report the mean square end-to-end distance scaled by $\\ell_{p}^2$ as a function of $L\/\\ell_{p}$. By simple inspection, in both cases an oscillating behavior is exhibited for values $\\ell_p\/a>\\frac{1}{8}$, whereas a monotonically increasing behavior becomes evident for persistence lengths such that $\\ell_{p}\/a<\\frac{1}{8}$. A growth in the number of oscillations is observed as $\\ell_{p}\/a$ increases beyond $1\/8$. In addition, for values of $\\ell_{p}\/a$ less than $1\/8$ the behavior of the mean-square end-to-end distance corresponds to that of a Gaussian polymer enclosed in a box, and the corresponding conformational realization of the polymer looks like a confined random walk. As mentioned before, this transition between the oscillating and monotonic behavior of the conformation of the semiflexible polymer is very similar to that described by Wang and Spakowitz in Ref.~\\cite{PSaito-Spakowitz2003} for a semiflexible polymer confined to a spherical surface. Moreover, as noted in Fig. \\ref{MSD-Wang-strong}, in the small polymer length regime the mean square end-to-end distance shows a ballistic behavior, followed by a brief interval with a ``diffusive'' behavior. 
Furthermore, an interesting observation is that the mean square end-to-end distance exhibits an asymptotic plateau behavior for large values of $L\/l_p$ as a function of $a\/l_p$. In Fig.~\\ref{MSD-conv} we show the value of the mean square end-to-end distance for the polymer length $L=10~a$ in logarithmic scale as a function of $a\/l_p$ in binary logarithmic scale. Under these conditions, the plateau behavior is well fitted by a linear function, where the slope $m$ of the line satisfies $m\/\\log2=1.97\\sim 2$, whereas the intercept takes the value $b=-0.473\\pm0.006$. This last fact leads us to a universal scaling law of the mean square end-to-end distance with respect to the box side for very large polymers:\n\\begin{equation}\n\\frac{\\langle \\delta \\mathbf{R}(L=10a)^2\\rangle}{a^2}=10^b\\sim 0.336\\pm 0.004,\n\\label{eq-msd-a}\n\\end{equation}\nwhere the error has been computed by propagating the error from the linear fit of the data in Fig.~\\ref{MSD-conv}. This result expresses the universal convergence of the ratio $\\langle \\delta \\mathbf{R}^2\\rangle_{\\mathcal{D}}\/a^2$ to $1\/3$ for very large polymers, which becomes independent of the box side when the quotient $l_p\/a$ is kept fixed. This is understood if we consider all the available space in the box to be occupied, so that a uniform distribution of beads occurs. Indeed, through the definition (\\ref{MS}) with $\\rho(\\left. {\\bf R}\\right|{\\bf R}^{\\prime}; L\\to\\infty)=1\/a^4$ one can get the desired result. Finally, it is noticeable, by simple inspection of the five polymer realizations shown in Fig. \\ref{evo-ex} for $\\ell_{p}\/a=1$, that all of them display the same conspicuous relation between the period of oscillation and the number of turns that the polymer performs. \n\nAll these features of the behavior of the mean-square end-to-end distance are reproduced by the theoretical predictions (\\ref{approx2}) and (\\ref{sol}). 
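These features can also be checked by evaluating the truncated series of Eq.~(\ref{sol}) directly. The short sketch below is a numerical illustration of ours (not part of the original analysis): it exhibits the oscillating behavior for $\ell_p/a=1$, the monotonic growth for $\ell_p/a=1/32$, the bounds $0\leq \left<\delta{\bf R}^2\right>_{\mathcal{D}}/a^2\leq 2/3$, and the $1/3$ plateau for $L\gg a$.

```python
import cmath
import math

def G(v, w):
    # damping function of Eq. (Gfunction); the complex sqrt handles w > 1
    z = cmath.sqrt(1.0 - w)
    if abs(z) < 1e-12:                 # w -> 1 limit
        return math.exp(-v) * (1.0 + v)
    return (cmath.exp(-v) * (cmath.cosh(v * z) + cmath.sinh(v * z) / z)).real

def msd_over_a2(L_over_a, lp_over_a, n_max=2001):
    """Truncated series of Eq. (sol): <delta R^2>_D / a^2 for a square box."""
    v = L_over_a / (4.0 * lp_over_a)
    total = 1.0 / 3.0
    for n in range(1, n_max + 1, 2):   # sum over odd n only
        total -= 32.0 / (math.pi ** 4 * n ** 4) * G(v, 8.0 * math.pi ** 2 * lp_over_a ** 2 * n ** 2)
    return total

# sample L/a from 0.5 to ~10 for a stiff (lp/a = 1) and a flexible (lp/a = 1/32) chain
Ls = [0.5 + 0.25 * i for i in range(40)]
stiff = [msd_over_a2(L, 1.0) for L in Ls]
flexible = [msd_over_a2(L, 1.0 / 32.0) for L in Ls]
```

Plotting `stiff` and `flexible` against `Ls` reproduces the two conformational behaviors separated by the critical persistence length, with both curves approaching $1/3$ at large $L/a$.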
In particular, it is significant to note that the critical persistence length found in our earlier discussion (see Sec.~\\ref{thdomain}) satisfies approximately $\\ell^{*}_{p}\/a=1\/(\\pi\\sqrt{8})\\approx 1\/8$. We also note that for $\\ell_{p}\/a=1$ there is a slight discrepancy between the simulation results and the theoretical prediction (\\ref{sol}), appearing in the three local minima shown in Fig. \\ref{MSD-ex}. This is due to the fact that for values of $\\ell_{p}\/a$ near $2$ there is a breakdown of the Telegrapher approximation performed in Sec.~\\ref{sectTE}. In other words, the small disagreement appears for $\\ell_{p}\/a\\approx 1$ because the role of the tensor $\\mathbb{Q}_{ij}$ becomes important and it cannot be neglected in Eq. (\\ref{eq3}). \n\n\n\n\n\n\\section{Concluding remarks and perspectives}\\label{conclusions}\n\nIn this paper, we have analyzed the conformational states of a semiflexible polymer enclosed in a compact domain. The approach followed rests on two postulates, namely, that the conformation of a semiflexible polymer satisfies the stochastic Frenet equations \\eqref{ecsestom1}, and that the stochastic curvature is distributed according to \\eqref{dft}, which is consistent with the worm-like chain model. In addition, it turned out that the Fokker-Planck equation corresponding to the stochastic Frenet equations is exactly the same as the Hermans-Ullman equation (see Eq. (\\ref{F-P})) \\cite{PSaito-Hermans1952}. 
Furthermore, taking advantage of the analogy between the Hermans-Ullman equation and the Fokker-Planck equation for free active particle motion \\cite{Fal-Castro-Villarreal2018}, we establish a multipolar decomposition for the probability density function $P(\\left.{\\bf R}, \\theta\\right|{\\bf R}^{\\prime}, \\theta^{\\prime}; L)$, which describes the manner in which a polymer of length $L$ distributes in the domain with ends ${\\bf R}$ and ${\\bf R}^{\\prime}$ and associated directions $\\theta$ and $\\theta^{\\prime}$, respectively. In consequence, exploiting this analogy, we provide an approximation for the positional distribution $\\rho({\\bf R}, {\\bf R}^{\\prime}, L)$ through the Telegrapher's equation, which for a compact domain is a good approximation as long as $2a\/\\ell_{p}>1$, where $a$ is a characteristic length of the compact domain. In particular, we derive results for a semiflexible polymer enclosed in a square box domain, where we give a mathematical formula for the {\\it mean-square end-to-end distance}. \n\nFurthermore, we have developed a Monte Carlo-Metropolis algorithm to study the conformational states of a semiflexible polymer enclosed in a compact domain. In particular, for the square box domain, we compare the results of the simulation with the theoretical predictions, finding excellent agreement. We have considered two situations, namely, a {\\it polymer in weak confinement} and a {\\it polymer in strong confinement}, corresponding to polymers with length smaller and larger than the box side, respectively. In the weak confinement case, we reproduce the two-dimensional solution of a free chain, i.e., the Kratky-Porod result for polymers confined in two dimensions. 
In the strong confinement case, we showed the existence of a critical persistence length $\\ell^{*}_{p}\\simeq a\/8$ such that for all values $\\ell_{p}>\\ell^{*}_{p}$ the mean-square end-to-end distance exhibits an oscillating behavior, whereas for $\\ell_{p}<\\ell^{*}_{p}$, it is monotonically increasing. In addition, for each value of $\\ell_{p}$ the function converges to $1\/3$ as long as $L\\gg a$. The critical persistence length, thus, distinguishes two conformational behaviors of the semiflexible polymer enclosed in the square box. As was mentioned above, this result is of the same type as the one found by Wang and Spakowitz in \\cite{PSaito-Spakowitz2003} for a semiflexible polymer wrapping a spherical surface. As a consequence of this resemblance, one can conclude that the shape transition from oscillating to monotonic conformational states provides evidence of a universal signature for a semiflexible polymer enclosed in a compact space. \n\nOur approach can be extended in various directions. For instance, the whole formulation can be extended easily to semiflexible polymers in three dimensions. Although one must then consider a stochastic version of the Frenet-Serret equations, one would still obtain the three-dimensional version of the Hermans-Ullman equation, because the worm-like chain model involves just the curvature. In addition, the approach developed here can also be extended to the case where the semiflexible polymer wraps a curved surface. \n\n\\section*{Acknowledgement}\nP.C.V. acknowledges financial support by CONACyT Grant No. 237425 and PROFOCIE-UNACH 2017. J.E.R. acknowledges financial support from VIEP-BUAP (grant no. VIEP2017-123). 
The computer simulations were performed at the LARCAD-UNACH.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSeveral experimental collaborations at $ep$ and $p\\overline{p}$ colliders\npresented data on the differential cross section $d^2\\sigma\/dy\\,dp_T$ for the\ninclusive production of $D^0$, $D^+$, and $D_s^+$ mesons, $\\Lambda_c^+$\nbaryons, and their charge-conjugate counterparts.\nAt DESY HERA, such data were collected by the ZEUS Collaboration\n\\cite{zeus,pad} in low-$Q^2$ $ep$ collisions, equivalent to photoproduction,\nand by the H1 Collaboration \\cite{h1} in deep-inelastic $ep$ scattering.\nAt the Fermilab Tevatron, such data were taken by the CDFII Collaboration\n\\cite{cdf} in $p\\overline{p}$ collisions.\n\nOn the theoretical side, fragmentation functions (FF's) for the transitions\n$c,b\\to X_c$, where $X_c$ denotes a generic charmed hadron, are needed as\nnonperturbative inputs for the calculation of all the cross sections mentioned\nabove.\nSuch FF's are preferably constructed by using precise information from\n$e^+e^-\\to X_c+X$ via $e^+e^-$ annihilation at the $Z$-boson resonance, where \n$X$ denotes the hadronic rest.\nIn this process, two mechanisms contribute with similar rates:\n(i) $Z\\to c\\overline{c}$ decay followed by $c\\to X_c$ (or\n$\\overline{c}\\to X_c$) fragmentation; and\n(ii) $Z\\to b\\overline{b}$ decay followed by $b\\to X_b$ (or\n$\\overline{b}\\to X_b$) fragmentation and weak $X_b\\to X_c+X$ decay of the\nbottom-flavored hadron $X_b$.\nThe latter two-step process is usually treated as a one-step fragmentation\nprocess $b\\to X_c$.\n\nUsing ALEPH \\cite{aleph} and OPAL \\cite{opal} data on inclusive $D^{*+}$\nproduction at the $Z$-boson resonance, we determined separate FF's for\n$c\\to D^{*+}$ and $b\\to D^{*+}$ in collaboration with Binnewies \\cite{bkk}.\nIt is the purpose of this work to extract nonperturbative FF's for\n$c,b\\to D^0,D^+,D_s^+,\\Lambda_c^+$ from the respective 
data samples collected\nby the OPAL Collaboration at LEP1 \\cite{opal1} using the same theoretical\nframework as in Ref.~\\cite{bkk}.\n\nThe work in Ref.~\\cite{bkk} is based on the QCD-improved parton model\nimplemented in the modified minimal-subtraction ($\\overline{\\mathrm{MS}}$)\nrenormalization and factorization scheme in its pure form with $n_f=5$\nmassless quark flavors, which is also known as the massless scheme\n\\cite{spira} or zero-mass variable-flavor-number scheme.\nIn this scheme, the masses $m_c$ and $m_b$ of the charm and bottom quarks are\nneglected, except in the initial conditions of their FF's.\nThis is a reasonable approximation for center-of-mass (c.m.) energies\n$\\sqrt s\\gg m_c,m_b$ in $e^+e^-$ annihilation or transverse momenta\n$p_T\\gg m_c,m_b$ in $ep$ and $p\\overline{p}$ scattering, if the respective\nFF's are used as inputs for the calculation of the cross sections for these\nreactions.\nHence, we describe the $c,b\\to X_c$ transitions by nonperturbative FF's, as is\nusually done for the fragmentation of the up, down, and strange quarks into\nlight hadrons.\n\nThe outline of this paper is as follows.\nIn Sec.~\\ref{sec:two}, we briefly recall the theoretical framework underlying\nthe extraction of FF's from the $e^+e^-$ data, which has already been\nintroduced in Refs.~\\cite{bkk,bkk1}.\nIn Sec.~\\ref{sec:three}, we present the $D^0$, $D^+$, $D_s^+$, and\n$\\Lambda_c^+$ FF's we obtained by fitting the respective LEP1 data samples\nfrom OPAL \\cite{opal1} at leading order (LO) and next-to-leading order (NLO)\nin the massless scheme and discuss their properties.\nIn Sec.~\\ref{sec:four}, we present predictions for the inclusive production of\nthese $X_c$ hadrons in nonresonant $e^+e^-$ annihilation at lower c.m.\\\nenergies and compare them with data from other experiments.\nOur conclusions are summarized in Sec.~\\ref{sec:five}.\n\n\\section{Theoretical Framework}\n\\label{sec:two}\n\nOur procedure to construct LO and NLO sets of 
$D$ FF's has already been\ndescribed in Refs.~\\cite{bkk,bkk1}.\nAs experimental input, we use the LEP1 data from OPAL \\cite{opal1}.\n\nIn $e^+e^-$ annihilation at the $Z$-boson resonance, $X_c$ hadrons are\nproduced either directly through the hadronization of charm quarks produced \nby $Z\\to c\\overline{c}$ or via the weak decays of $X_b$ hadrons from\n$Z\\to b\\overline{b}$.\nIn order to disentangle these two production modes, the authors of\nRef.~\\cite{opal1} utilized the apparent decay length distributions and energy\nspectra of the $X_c$ hadrons.\nBecause of the relatively long $X_b$-hadron lifetimes and the hard $b\\to X_b$\nfragmentation, $X_c$ hadrons originating from $X_b$-hadron decays have\nsignificantly longer apparent decay lengths than those from primary production.\nIn addition, the energy spectrum of $X_c$ hadrons originating from\n$X_b$-hadron decays is much softer than that due to primary charm production. \n\nThe experimental cross sections \\cite{opal1} were presented as distributions\ndifferential in $x=2E(X_c)\/\\sqrt s$, where $E(X_c)$ is the measured energy of\nthe $X_c$-hadron candidate, and normalized to the total number of hadronic\n$Z$-boson decays.\nBesides the total $X_c$ yield, which receives contributions from $Z\\to c\\bar c$\nand $Z\\to b\\bar b$ decays as well as from light-quark and gluon fragmentation,\nthe OPAL Collaboration separately specified results for $X_c$ hadrons from\ntagged $Z\\to b\\bar b$ events.\nAs already mentioned above, the contribution due to charm-quark fragmentation\nis peaked at large $x$, whereas the one due to bottom-quark fragmentation has\nits maximum at small $x$.\n\nFor the fits, we use the $x$ bins in the interval $[0.15,1.0]$ and integrate\nthe theoretical cross sections over the bin widths used in the experimental\nanalysis.\nFor each of the four charmed-hadron species considered here,\n$X_c=D^0,D^+,D_s^+,\\Lambda_c^+$, we sum over the two charge-conjugate states as\nwas done in 
Ref.~\\cite{opal1}.\nAs a consequence, there is no difference between the FF's of a given quark\nand its antiquark.\nAs in Refs.~\\cite{bkk,bkk1}, we take the starting scales for the $X_c$ FF's of\nthe gluon and the $u$, $d$, $s$, and $c$ quarks and antiquarks to be\n$\\mu_0=2m_c$, while we take $\\mu_0=2m_b$ for the FF's of the bottom quark and\nantiquark.\nThe FF's of the gluon and the first three flavors are assumed to be zero at\ntheir starting scale.\nAt larger scales $\\mu$, these FF's are generated through the usual\nDokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) \\cite{dglap} evolution at LO\nor NLO.\nThe FF's of the first three quarks and antiquarks coincide with each other at\nall scales $\\mu$.\n\nWe employ two different forms for the parameterizations of the charm- and\nbottom-quark FF's at their respective starting scales.\nIn the case of charm, we use the distribution of Peterson {\\it et al.}\\\n\\cite{pet},\n\\begin{equation}\nD_c(x,\\mu_0^2)=N\\frac{x(1-x)^2}{[(1-x)^2+\\epsilon x]^2}.\n\\label{eq:peterson}\n\\end{equation}\nIn the case of bottom, we adopt the ansatz\n\\begin{equation}\nD_b(x,\\mu_0^2)=Nx^{\\alpha}(1-x)^{\\beta},\n\\label{eq:standard}\n\\end{equation}\nwhich is frequently used for the FF's of light hadrons.\nEquation~(\\ref{eq:peterson}) is particularly suitable for FF's that peak at\nlarge values of $x$, as is typically the case for $c\\to X_c$ transitions.\nSince the $b\\to X_c$ FF is a convolution of the $b\\to X_b$ fragmentation and\nthe subsequent $X_b\\to X_c+X$ decay, it has its maximum at small $x$ values.\nTherefore, Eq.~(\\ref{eq:peterson}) is less suitable in this case.\nWe apply Eqs.~(\\ref{eq:peterson}) and (\\ref{eq:standard}) for the FF's of all \nfour $X_c$-hadron species considered here.\n\nThe calculation of the cross section $(1\/\\sigma_{\\rm tot})d\\sigma\/dx$ for\n$e^+e^-\\to\\gamma\/Z\\to X_c+X$ is performed as described in Ref.~\\cite{bkk}, in\nthe pure $\\overline{\\mathrm{MS}}$ subtraction scheme, {\\it 
i.e.}, without the\nsubtraction terms $d_{Qa}(x)$ specified in Eq.~(2) of Ref.~\\cite{kks}.\nAll relevant formulas and references may be found in Ref.~\\cite{bkk1}.\nAs for the asymptotic scale parameter for five active quark flavors, we adopt\nthe LO (NLO) value $\\Lambda_{\\overline{\\rm MS}}^{(5)}=108$~MeV (227~MeV) from\nour study of inclusive charged-pion and -kaon production \\cite{bkk2}.\nThe particular choice of $\\Lambda_{\\overline{\\rm MS}}^{(5)}$ is not essential,\nsince other values can easily be accommodated by slight shifts of the other fit\nparameters.\nAs in Refs.~\\cite{bkk,bkk1}, we take the charm- and bottom-quark masses to be \n$m_c=1.5$~GeV and $m_b=5$~GeV, respectively.\n\n\\boldmath\n\\section{Determination of the $D^0$, $D^+$, $D_s^+$, and $\\Lambda_c^+$ FF's}\n\\label{sec:three}\n\\unboldmath\n\nThe OPAL Collaboration \\cite{opal1} presented $x$ distributions for their full\n$D^0$, $D^+$, $D_s^+$, and $\\Lambda_c^+$ samples and for their $Z\\to b\\bar b$\nsubsamples.\nWe received these data in numerical form via private communication\n\\cite{martin}.\nThey are displayed in Figs.~4 (for the $D^0$ and $D^+$ mesons) and 5 (for the\n$D_s^+$ meson and the $\\Lambda_c^+$ baryon) of Ref.~\\cite{opal1} in the form\n$(1\/N_{\\rm had})dN\/dx$, where $N$ is the number of $X_c$-hadron candidates\nreconstructed through appropriate decay chains.\nIn order to convert this into the cross sections\n$(1\/\\sigma_{\\rm tot})d\\sigma\/dx$, we need to divide by the branching \nfractions of the decays that were used in Ref.~\\cite{opal1} for the\nreconstruction of the various $X_c$ hadrons, namely,\n\\begin{eqnarray}\nB(D^0\\to K^-\\pi^+)&=&(3.84\\pm0.13)\\%,\\nonumber\\\\\nB(D^+\\to K^-\\pi^+\\pi^+)&=&(9.1\\pm0.6)\\%,\\nonumber\\\\\nB\\left(D_s^+\\to \\phi\\pi^+\\right)&=&(3.5\\pm0.4)\\%,\\nonumber\\\\\nB\\left(\\Lambda_c^+\\to pK^-\\pi^+\\right)&=&(4.4\\pm0.6)\\%,\n\\end{eqnarray}\nrespectively.\nThe experimental errors on these branching fractions are not 
included in our\nanalysis.\n\nThe values of $N$ and $\\epsilon$ in Eq.~(\\ref{eq:peterson}) and of $N$,\n$\\alpha$, and $\\beta$ in Eq.~(\\ref{eq:standard}), which result from our LO and\nNLO fits to the OPAL data, are collected in Table~\\ref{tab:par}.\nFrom there, we observe that the parameters $\\alpha$ and $\\beta$, which\ncharacterize the shape of the bottom FF, take very similar values for the\nvarious $X_c$ hadrons, which are also similar to those for the $D^{*+}$\nmeson listed in Table~I of Ref.~\\cite{bkk}.\nOn the other hand, the values of the $\\epsilon$ parameter, which determines\nthe shape of the charm FF, differ significantly from one particle species to\nanother.\nIn the $D^{*+}$ case \\cite{bkk}, our LO (NLO) fits to ALEPH \\cite{aleph} and\nOPAL \\cite{opal} data, which required separate analyses, yielded\n$\\epsilon=0.144$ (0.185) and 0.0851 (0.116), respectively.\nWe observe that, for each of the $X_c$-hadron species considered, the LO\nresults for $\\epsilon$ are considerably smaller than the NLO ones.\nFurthermore, we notice a tendency for the value of $\\epsilon$ to decrease as\nthe mass ($m_{X_c}$) of the $X_c$ hadron increases. 
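To make the role of $\epsilon$ concrete, the sketch below (Python; the grid search is purely illustrative and borrows two of the NLO values of $\epsilon$ from Table~\ref{tab:par}) locates the peak of the unnormalized Peterson shape $x(1-x)^2/[(1-x)^2+\epsilon x]^2$ of Eq.~(\ref{eq:peterson}) and confirms that a smaller $\epsilon$ corresponds to a harder spectrum, i.e., a peak at larger $x$:

```python
def peterson(x, eps):
    # Peterson et al. form, up to the overall normalization N
    return x * (1.0 - x) ** 2 / ((1.0 - x) ** 2 + eps * x) ** 2

def peak_x(eps, n=100000):
    # locate the maximum of the Peterson shape on a fine grid in (0, 1)
    xs = [(i + 0.5) / n for i in range(n)]
    return max(xs, key=lambda x: peterson(x, eps))

x_D0 = peak_x(0.203)   # NLO epsilon for the D^0 (Table tab:par)
x_Lc = peak_x(0.0218)  # NLO epsilon for the Lambda_c^+ (Table tab:par)
assert 0.0 < x_D0 < x_Lc < 1.0  # smaller eps -> peak at larger x
```

This matches the tendency noted above: the $\Lambda_c^+$, with the smallest fitted $\epsilon$, has the hardest charm FF.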
\n\n\\begin{table}\n\\begin{center}\n\\caption{Fit parameters of the charm- and bottom-quark FF's for the various\n$X_c$ hadrons at LO and NLO.\nThe corresponding starting scales are $\\mu_0=2m_c=3$~GeV and \n$\\mu_0=2m_b=10$~GeV, respectively.\nAll other FF's are taken to be zero at $\\mu_0=2m_c$.}\n\\label{tab:par}\n\\begin{tabular}{ccccccc}\n\\hline\\hline\n$X_c$ & Order & $Q$ & $N$ & $\\alpha$ & $\\beta$ & $\\epsilon$ \\\\\n\\hline\n$D^0$ & LO & $c$ & 0.998 & -- & -- & 0.163 \\\\\n & & $b$ & 71.8 & 1.65 & 5.19 & -- \\\\\n & NLO & $c$ & 1.16 & -- & -- & 0.203 \\\\\n & & $b$ & 97.5 & 1.71 & 5.88 & -- \\\\\n$D^+$ & LO & $c$ & 0.340 & -- & -- & 0.148 \\\\\n & & $b$ & 48.5 & 2.16 & 5.38 & -- \\\\\n & NLO & $c$ & 0.398 & -- & -- & 0.187 \\\\\n & & $b$ & 64.9 & 2.20 & 6.04 & -- \\\\\n$D_s^+$ & LO & $c$ & 0.0704 & -- & -- & 0.0578 \\\\\n & & $b$ & 40.0 & 2.05 & 4.93 & -- \\\\\n & NLO & $c$ & 0.0888 & -- & -- & 0.0854 \\\\\n & & $b$ & 21.8 & 1.64 & 4.71 & -- \\\\\n$\\Lambda_c^+$ & LO & $c$ & 0.0118 & -- & -- & 0.0115 \\\\\n & & $b$ & 44.1 & 1.97 & 6.33 & -- \\\\\n & NLO & $c$ & 0.0175 & -- & -- & 0.0218 \\\\\n & & $b$ & 27.3 & 1.66 & 6.24 & -- \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nIn Table~\\ref{tab:chi}, we list three values of $\\chi^2$ per degree of freedom\n($\\chi_{\\rm DF}^2$) for each of the fits from Table~\\ref{tab:par}:\none for the $Z\\to b\\overline{b}$ subsample, one for the total sample (sum of\ntagged-$c\\overline{c}$, tagged-$b\\overline{b}$, and gluon-splitting events),\nand an average one evaluated by taking into account the $Z\\to b\\overline{b}$\nsubsample and the total sample.\nThe actual $\\chi_{\\rm DF}^2$ values are rather small.\nThis is due to the sizeable errors and the rather limited number of data\npoints, especially for the $D_s^+$ and $\\Lambda_c^+$ data.\nIn each case, the $Z\\to b\\overline{b}$ subsample is somewhat less well\ndescribed than the total sample.\nThe NLO fits yield smaller $\\chi_{\\rm DF}^2$ 
values than the LO ones, except\nfor the $\\Lambda_c^+$ case.\n\n\\begin{table}\n\\begin{center}\n\\caption{$\\chi^2$ per degree of freedom achieved in the LO and NLO fits to the\nOPAL \\cite{opal1} data on the various $D$ hadrons.\nIn each case, $\\chi_{\\rm DF}^2$ is calculated for the $Z\\to b\\overline{b}$\nsample ($b$), the full sample (All), and the combination of both (Average).}\n\\label{tab:chi}\n\\begin{tabular}{ccccc}\n\\hline\\hline\n$X_c$ & Order & $b$ & All & Average \\\\\n\\hline\n$D^0$ & LO & 1.16 & 0.688 & 0.924 \\\\\n & NLO & 0.988 & 0.669 & 0.829 \\\\\n$D^+$ & LO & 0.787 & 0.540 & 0.663 \\\\\n & NLO & 0.703 & 0.464 & 0.584 \\\\\n$D_s^+$ & LO & 0.434 & 0.111 & 0.273 \\\\\n & NLO & 0.348 & 0.108 & 0.228 \\\\\n$\\Lambda_c^+$ & LO & 1.05 & 0.106 & 0.577 \\\\\n & NLO & 1.05 & 0.118 & 0.582 \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\nThe normalized differential cross sections $(1\/\\sigma_{\\rm tot})d\\sigma\/dx$\nfor $D^0$, $D^+$, $D_s^+$, and $\\Lambda_c^+$ hadrons (circles), extracted from\nRef.~\\cite{opal1} as explained above, are compared with our LO (upmost dashed\nlines) and NLO (upmost solid lines) fits in Figs.~\\ref{fig:xs}(a)--(d),\nrespectively.\nThe same is also done for the $Z\\to b\\overline{b}$ subsamples (squares).\nIn addition, our LO and NLO fit results for the $Z\\to c\\overline{c}$\ncontributions are shown.\nIn each case, the $X_c$ hadron and its charge-conjugate partner are summed\nover.\nFrom Figs.~\\ref{fig:xs}(a)--(d), we observe that the LO and NLO results are\nvery similar, except for very small values of $x$.\nThis is also true for the distributions at the starting scales, as may be seen\nby comparing the corresponding LO and NLO parameters in Table~\\ref{tab:par}.\nThe branching of the LO and NLO results at small values of $x$ indicates that,\nin this region, the perturbative treatment ceases to be valid. 
\nThis is related to the phase-space boundary for the production of $X_c$ hadrons\nat $x_{\\rm min}=2m_{X_c}\/\\sqrt s$.\nThese values are somewhat larger than the $x$ values where our NLO results turn\nnegative.\nSince our massless-quark approach is not expected to be valid in regions of\nphase space where finite-$m_{X_c}$ effects are important, our results should\nonly be considered meaningful for $x\\agt x_{\\rm cut}=0.1$, say.\nWe also encountered a similar small-$x$ behavior for the $D^{*+}$ FF's in\nRefs.~\\cite{bkk,bkk1}.\n\nAs mentioned above, we take the FF's of the partons\n$g,u,\\overline{u},d,\\overline{d},s,\\overline{s}$ to vanish at their\nstarting scale $\\mu_0=2m_c$.\nHowever, these FF's are generated via the DGLAP evolution to the high scale\n$\\mu=\\sqrt s$.\nThus, apart from the FF's of the heavy quarks $c,\\overline{c},b,\\overline{b}$,\nthese radiatively generated FF's also contribute to the cross section.\nAll these contributions are properly included in the total result for\n$(1\/\\sigma_{\\rm tot})d\\sigma\/dx$ shown in Figs.~\\ref{fig:xs}(a)--(d).\nAt LEP1 energies, the contribution from the first three quark flavors is still\nnegligible; it is concentrated at small values of $x$ and only amounts to a\nfew percent of the integrated cross section.\nHowever, the contribution from the gluon FF, which appears at NLO in \nconnection with $q\\overline{q}g$ final states, is numerically significant.\nAs in our previous works \\cite{bkk,bkk1}, motivated by the decomposition of\n$(1\/\\sigma_{\\rm tot})d\\sigma\/dx$ in terms of parton-level cross sections, we\ndistributed this contribution over the $Z\\to c\\bar c$ and $Z\\to b\\bar b$\nchannels in the ratio $e_c^2:e_b^2$, where $e_q$ is the effective electroweak\ncoupling of the quark $q$ to the $Z$ boson and the photon, including propagator\nadjustments.\nThis procedure should approximately produce the quantities that are compared\nwith the OPAL data \\cite{opal1}.\n\nAs in 
Refs.~\\cite{bkk,bkk1}, we study the branching fractions for the\ntransitions\\break $c,b\\to D^0,D^+,D_s^+,\\Lambda_c^+$, defined by\n\\begin{equation}\nB_Q(\\mu)=\\int_{x_{\\rm cut}}^1dx\\,D_Q(x,\\mu^2),\n\\label{eq:br}\n\\end{equation}\nwhere $Q=c,b$, $D_Q$ are the appropriate FF's, and $x_{\\rm cut}=0.1$.\nThis allows us to test the consistency of our fits with information presented\nin the experimental paper \\cite{opal1} that was used for our fits.\nThe contribution from the omitted region $0 2$. \n\nWhen $\\alpha = \\beta$, our result that $f(n, P, d) = \\Theta(n^{d - 1})$ for every $d$-dimensional tuple permutation matrix $P$, on one hand, generalizes Geneson's result \\cite{G}\nfrom $d=2$ to $d \\geq 2$. On the other hand, even when $d=2$ our ideas improve some key calculations in Geneson's paper \\cite{G}.\nThese improvements are vital in our derivation of a new upper bound on the limit superior of the sequence $\\{ {f(n,P,d) \\over n^{d-1}}\\}$ that we discuss below. \n\nThe importance of our result $f(n, P, d) = \\Theta(n^{d - 1})$ for every $d$-dimensional tuple permutation matrix $P$ lies in the fact that, in view of Proposition \\ref{Easy}, $\\Theta(n^{d - 1})$ is the lowest possible order for the extremal function of any nontrivial $d$-dimensional matrix. \n\nIn the second direction, we study the limit inferior and limit superior of the sequence $\\{ {f(n,P,d) \\over n^{d-1}} \\}$ where $P$ satisfies $f(n, P, d) = \\Theta(n^{d - 1})$. These are the multidimensional analogues of the F\\\"uredi-Hajnal limit. We show that the limit inferior is at least $d(k-1)$ for $k \\times \\cdots \\times k$ permutation matrices, \ngeneralizing Cibulka's result \\cite{C} from $d=2$ to all $d \\ge 2$. 
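For orientation, the smallest nontrivial case can be checked exhaustively. For $d=2$ and $P$ the $2\times 2$ identity matrix ($k=2$), it is classical that $f(n,P,2)=2n-1$, so ${f(n,P,2) \over n^{d-1}}\to 2=d(k-1)$, consistent with the lower bound just stated. The brute-force sketch below (Python; our illustration, not part of the argument) verifies this value for small $n$:

```python
from itertools import combinations, product

def contains_identity2(ones):
    # the 2x2 identity pattern occurs iff two 1-entries
    # strictly increase in both coordinates
    return any(r1 < r2 and c1 < c2
               for (r1, c1), (r2, c2) in combinations(ones, 2))

def f_identity2(n):
    # exhaustive search over all n x n zero-one matrices
    best = 0
    for bits in product((0, 1), repeat=n * n):
        ones = [(i // n, i % n) for i, b in enumerate(bits) if b]
        if len(ones) > best and not contains_identity2(ones):
            best = len(ones)
    return best

for n in (2, 3, 4):
    assert f_identity2(n) == 2 * n - 1  # matches the classical value
```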
\n\nWe observe that $f(n, P, d)$ is super-homogeneous in higher dimensions, i.e., $f(sn, P, d) \\geq K s^{d-1} f(n, P, d)$ for some positive constant $K$.\nThis super-homogeneity is key to our proof that \nthe limit inferior of $\\{ {f(n,P,d) \\over n^{d-1}} \\}$ has a lower bound $2^{\\Omega(k^{1 \/ d})}$ for a family of $k \\times \\cdots \\times k$ permutation matrices, generalizing Fox's result \\cite{Fox2} from $d=2$ to $d \\geq 2$.\n\nFinally, we show that the limit superior of the sequence $\\{ {f(n,P,d) \\over n^{d-1}} \\}$ is bounded above by $ 2^{O(k)}$ for all $k \\times \\cdots \\times k$ permutation matrices $P$.\nThis is a substantial improvement of the Klazar and Marcus upper bound $2^{O(k \\log k)}$ for $d > 2$ in \n\\cite{KM}, and it also generalizes Fox's bound $2^{O(k)}$ on the F\\\"uredi-Hajnal limit in two dimensions \\cite{Fox}. We further show that this upper bound $2^{O(k)}$ is also true for every tuple permutation matrix $P$, which is a new result even for $d=2$. We are able to extend the new upper bound from permutation matrices to tuple permutation matrices mainly because of our improvement of Geneson's approach as mentioned above.\n\nThe rest of the paper is organized as follows. In Section \\ref{Block}, we study $f(n,P,d)$ when $P$ is a block permutation matrix but not a tuple permutation matrix. The more difficult case when $P$ is a tuple permutation matrix is analyzed in Section \\ref{tuple}. \nIn Section 4, we study the limit inferior and limit superior of the sequence $\\{ {f(n,P,d) \\over n^{d-1}}\\}$ for permutation and tuple permutation matrices $P$. We conclude the paper and discuss our future directions in Section \\ref{conclusion}.\n\n\n\n\n\n\n\\section{Block permutation matrices}\n\\label{Block}\n\\bigskip\n\nIn this section, we study the extremal function of a variant of $d$-dimensional permutation matrices. 
\nWe are interested in the forbidden matrices which can be written as the Kronecker product of a $d$-dimensional permutation \nmatrix and a $d$-dimensional matrix of $1$-entries only.\n\nLet $R^{k_1,...,k_d}$ be the $d$-dimensional $k_1 \\times \\cdots \\times k_d$ matrix of all ones. We \nstudy lower and upper bounds on the extremal function of block permutation matrix $P \\otimes R^{k_1,...,k_d}$, \nwhere $P$ is a $d$-dimensional permutation matrix.\n\nWe first study the extremal function of $R^{k_1, \\ldots , k_d}$. We use the probabilistic method to obtain a lower \nbound on $f(n,R^{k_1,...,k_d},d)$. When $d=2$, this lower bound is classical \\cite{ES}.\n\\begin{theo}\n\\label{lowerbound}\nIf $k_1 \\cdot k_2 \\cdots k_d > 1$, then\n$f(n,R^{k_1, \\ldots , k_d},d) = \\Omega \\left( n^{d- \\beta(k_1, k_2, \\ldots , k_d)} \\right)$, where $\\beta = {k_1+ \\cdots +k_d - d \\over k_1 \\cdot k_2 \\cdots k_d-1}$.\n\\end{theo}\n\\begin{proof} Let each entry of a $d$-dimensional $n \\times \\cdots \\times n$ zero-one matrix \n$A$ be chosen to be $1$ with probability $p=n^{- \\beta(k_1, \\ldots , k_d)}$ and $0$ with probability $1 - p$. \nThe expected number of $1$-entries in $A$ is $pn^d$. There are ${n \\choose k_1} \\cdot {n \\choose k_2} \\cdots {n \\choose k_d}$ \npossible copies of $R^{k_1, \\ldots , k_d}$ in matrix $A$ and each has a probability of $p^{k_1 \\cdot k_2 \\cdots k_d}$ of occurring.\nThe expected number of copies of $R^{k_1, \\ldots , k_d}$ in $A$ is\n$$ {n \\choose k_1} \\cdot {n \\choose k_2} \\cdots {n \\choose k_d}p^{k_1 \\cdot k_2 \\cdots k_d} \\le Cn^{k_1+ \\cdots + k_d}p^{k_1 \\cdot k_2 \\cdots k_d} \\ ,$$\nwhere, since at least one of $k_1$, $\\ldots$ , $k_d$ is greater than one, $C$ is a positive constant less than 1. \n\nLet $A'$ be the matrix formed by changing a single $1$-entry in each copy of $R^{k_1, \\ldots , k_d}$ in \n$A$ to a 0-entry. 
Then $A'$ avoids $R^{k_1,...,k_d}$ and the expected number of $1$-entries in $A'$ is at least\n$pn^d- Cn^{k_1+k_2+ \\cdots + k_d}p^{k_1 \\cdot k_2 \\cdots k_d}=(1 - C) \\ n^{d-\\beta(k_1, k_2, \\ldots, k_d) }$.\n As a consequence, there exists some matrix $A'$ that avoids $R^{k_1, \\ldots , k_d}$ \nand has at least $(1 - C) \\ n^{d-\\beta(k_1, k_2, \\ldots, k_d)}$ $1$-entries.\n\\end{proof}\n\nWe now obtain an upper bound on the extremal function of $R^{k_1, \\ldots , k_d}$. When $d=2$, this upper bound is due to K\\H{o}v\\'{a}ri, S\\'{o}s, and Tur\\'{a}n \\cite{KST}.\n\\begin{theo}\n\\label{upperbound}\n$f(n,R^{k_1, \\ldots , k_d},d)=O(n^{d- \\alpha(k_1, \\ldots , k_d)})$, where $\\alpha = {\\max({k_1, \\ldots , k_d}) \\over k_1 \\cdot k_2 \\cdots k_d }$.\n\\end{theo}\n\\begin{proof}\nWe prove the theorem by induction on $d$. The base case of $d=1$ is trivial. Assuming that \n$f(n, R^{k_1, \\ldots, k_{d-1}},d-1)=O(n^{d-1- \\alpha(k_1, \\ldots, k_{d-1})})$ \nfor some $d \\geq 2$, we show that $f(n,R^{k_1, \\ldots , k_d},d)=O(n^{d- \\alpha(k_1, \\ldots , k_d)})$.\n\nThroughout the proof, we let $A = (a_{i_1, \\ldots , i_{d}})$ be a $d$-dimensional $n \\times \\cdots \\times n$ matrix \nthat avoids $R^{k_1, \\ldots , k_d}$ and has the maximum number, $f(n,R^{k_1, \\ldots , k_d},d)$, of ones. We need the following lemma on the number \nof $d$-rows that have $1$-entries in each of $k_d$ predetermined $d$-cross sections.\n\n\\begin{lem}\n\\label{d-row}\nFor any set of $k_d$ $d$-cross sections of $A$, there are $O\\left(n^{d-1-\\alpha(k_1, \\ldots , k_{d-1})} \\right)$ $d$-rows in $A$ which contain a 1-entry in each of these $d$-cross sections.\n\n\\end{lem}\n\\begin{proof}\nLet the $d^{\\text{th}}$ coordinates of these $d$-cross sections be $\\ell_1, \\ldots , \\ell_{k_d}$. 
Define a ($d-1$)-dimensional\n$n\\times \\cdots \\times n$ matrix $B=(b_{i_1, \\ldots , i_{d-1}})$ such that\n$b_{i_1, \\ldots , i_{d-1}}=1$ if $a_{ i_1, \\ldots , i_{d-1},\\ell_1}= \\cdots =a_{i_1, \\ldots , i_{d-1}, \\ell_{k_d}}=1$ \nand $b_{i_1, \\ldots , i_{d-1}}=0$ otherwise.\n\nWe claim that matrix $B$ must avoid $R^{k_1, \\ldots , k_{d-1}}$. Suppose to the contrary that \n$B$ contains $R^{k_1, \\ldots , k_{d-1}}$. Let $e_1, \\ldots , e_{k_1 \\cdot k_2 \\cdots k_{d-1}}$ be all the $1$-entries \nin $B$ that represent $R^{k_1,...,k_{d-1}}$. By the construction of $B$,\nthere are $k_d$ nonzero entries with coordinates $(x_1, \\ldots , x_{d-1},\\ell_1), \\ldots , (x_1, \\ldots , x_{d-1}, \\ell_{k_d})$ in $A$ corresponding to\neach $e_i$ with coordinates $(x_1, \\ldots , x_{d-1})$ in $B$. All these $k_1 \\cdot k_2 \\cdots k_d$ nonzero entries \nform a copy of $ R^{k_1, \\ldots , k_{d}}$ in $A$, a contradiction. Thus $B$ must avoid \n$R^{k_1, \\ldots , k_{d-1}}$ and by our inductive assumption, $B$ must have \n$O(n^{d-1-\\alpha(k_1, \\ldots , k_{d-1})})$ ones. The result follows.\n\\end{proof}\n\nSuppose all the $d$-rows of $A$ have $r_1, \\ldots , r_{n^{d-1}}$ non-zero entries, respectively. \nCounting the total number of sets of $k_d$ nonzero entries in the same $d$-row in two different ways yields\n\\begin{equation}\n\\label{two}\n\\sum_{i=1}^{n^{d-1}}{r_i \\choose k_d} = {n \\choose k_d}O\\left( n^{d-1-\\alpha(k_1, \\ldots , k_{d-1})} \\right) \\ ,\n\\end{equation}\nwhere we use Lemma \\ref{d-row} to obtain the right hand side.\n\nMatrix $A$ avoids $R^{k_1, \\ldots, k_d}$ and has the largest possible number of $1$-entries, so $r_i \\geq k_d - 1$ for $1 \\leq i \\leq n^{d-1}$. 
\nSince ${r \\choose k}$ is a convex function of $r$ for $r \\geq k - 1$, we apply Jensen's inequality to obtain\n\\begin{eqnarray*}\n\\sum_{i=1}^{n^{d-1}}{r_i \\choose k_d} \\ge n^{d-1} {{1\\over n^{d-1}}\\sum_{i=1}^{n^{d-1}} r_i \\choose k_d} \n= n^{d-1}{{1\\over n^{d-1}}f(n,R^{k_1, \\ldots , k_d},d) \\choose k_d} \\ ,\n\\end{eqnarray*}\nwhere, in the equality, we use the assumption that $A$ has $f(n,R^{k_1, \\ldots , k_d}, d)$ total $1$-entries.\nSubstituting this into equation (\\ref{two}) yields\n$$n^{d-1}{{1\\over n^{d-1}}f(n,R^{k_1, \\ldots , k_d},d) \\choose k_d} = {n \\choose k_d}O\\left(n^{d-1-\\alpha(k_1,\\ldots , k_{d-1})} \\right) \\ ,$$\n\\noindent\nwhich together with ${n \\choose k}=\\Theta(n^k)$ gives\n$$n^{d-1} \\left({1\\over n^{d-1}}f(n,R^{k_1, \\ldots , k_d},d)\\right)^{k_d} = O\\left( n^{ k_d}\\cdot n^{d-1-\\alpha(k_1,\\ldots , k_{d-1})} \\right) \\ . $$ \nThis implies\n$$f\\left( n,R^{k_1, \\ldots , k_d},d \\right) = O \\left( n^{d-{\\alpha(k_1, \\ldots , k_{d-1}) \\over k_d }} \\right) \\ . $$\nSimilarly, we have\n$$f(n,R^{k_1, \\ldots , k_d},d) = O\\left(n^{d-{\\alpha(k_2, \\ldots , k_d) \\over k_1}}\\right) \\ .$$ \nNote that $\\max\\left({\\alpha(k_2, \\ldots , k_d) \\over k_1}, {\\alpha(k_1, \\ldots , k_{d-1}) \\over k_d}\\right)=\\alpha(k_1, \\ldots , k_d)$. \nThus taking the smaller of the two upper bounds gives\n$$f(n,R^{k_1, \\ldots , k_d},d) = O\\left(n^{d-\\alpha(k_1, \\ldots , k_d)}\\right) $$ \nwhich completes the inductive step, and thus Theorem \\ref{upperbound} is proved.\n\\end{proof}\n\nWe make the following observation on $\\alpha(k_1, \\ldots , k_d)$ and $\\beta(k_1, \\ldots , k_d)$.\n\\begin{pro}\n\\label{alpha}\n Suppose $d > 1$ and let $k_1, \\ldots , k_d$ be positive integers such that $k_1 \\cdot k_2 \\cdots k_d > 1$. 
If only one of $k_1, \\ldots , k_d$ is greater than 1, then\n$\\alpha(k_1, \\ldots , k_d) = \\beta(k_1, \\ldots , k_d) = 1 $.\nOtherwise,\n$0 < \\alpha(k_1, \\ldots , k_d) < \\beta(k_1, \\ldots , k_d) < 1 $.\n\\end{pro}\n\nWe omit the proof since it is straightforward. Proposition \\ref{alpha} implies that the lower bound of Theorem \\ref{lowerbound} and the upper bound of Theorem \\ref{upperbound} are significant improvements of the bounds in Proposition \\ref{Easy}.\n\nWe now study the extremal function of the Kronecker product $P \\otimes R^{k_1, \\ldots , k_d}$, where $P$ is a $d$-dimensional permutation matrix. We show that the extremal functions of $P \\otimes R^{k_1, \\ldots , k_d}$ and $R^{k_1, \\ldots , k_d}$ share the same lower and upper bounds.\n\n\\noindent\n\\begin{theo}\n\\label{super}\nIf $P$ is a $d$-dimensional permutation matrix and at least two of $k_1, \\ldots , k_d$ are greater than 1, then there exist constants $C_1$ and $C_2$ such that for all $n$,\n\\begin{equation}\n\\label{block}\nC_1 n^{d - \\beta(k_1, \\ldots , k_d)} \\leq f(n,P\\otimes R^{k_1, \\ldots , k_d}, d) \\leq C_2 n^{d - \\alpha(k_1, \\ldots , k_d)}\n\\end{equation}\n\\end{theo}\n\\noindent\n\\begin{proof} We first have \n\\begin{equation}\n\\label{R}\nf(n, R^{k_1, \\ldots , k_d}, d) \\leq f(n,P\\otimes R^{k_1, \\ldots , k_d}, d).\n\\end{equation}\n This follows from the fact that any matrix that avoids $R^{k_1, \\ldots , k_d}$ must also avoid\n$P\\otimes R^{k_1, \\ldots , k_d}$. The left inequality of (\\ref{block}) is then the result of (\\ref{R}) and Theorem \\ref{lowerbound}.\n\nTo prove the right inequality of (\\ref{block}), we follow Hesterberg's idea for the $2$-dimensional case \\cite{H} to \nestimate $f(n,P\\otimes R^{k_1, \\ldots , k_d}, d)$ first for $n = c^m$, where $m$ is an arbitrary positive integer and $c$ \nis a positive integer to be determined, and then for all other positive integers $n$. 
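The induction behind the first step can be sanity-checked numerically. The sketch below (Python; the values $d=3$, $\alpha=1/2$ as for $k=(2,2,1)$, $K=L=1$, and $c=5$ are hypothetical choices of ours) iterates the worst case of the recurrence $f(cn)\le Lc^{d-1}f(n)+c^{d}g(n)$ derived later in the proof, with $g(n)=Kn^{d-\alpha}$, and checks that the bound $f(c^{m})\le 2c^{d}g(c^{m})$ indeed persists once $2Lc^{\alpha-1}\le 1$:

```python
# Numerical sanity check (ours, with hypothetical constants) of the induction:
# f(c^{m+1}) <= L c^{d-1} f(c^m) + c^d g(c^m),  g(n) = K n^{d - alpha},
# should preserve f(c^m) <= 2 c^d g(c^m) whenever 2 L c^{alpha - 1} <= 1.
d, alpha = 3, 0.5   # e.g. k = (2, 2, 1): alpha = max(k_i) / (k_1 k_2 k_3)
K, L = 1.0, 1.0     # hypothetical constants hidden in the O(.) bounds
c = 5               # large enough: 2 * L * 5**(-0.5) ~ 0.894 <= 1
assert 2 * L * c ** (alpha - 1) <= 1

def g(n):
    return K * n ** (d - alpha)

f, n = 1.0, 1       # base case: f(1) = 1 <= 2 c^d g(1)
for m in range(1, 9):
    f = L * c ** (d - 1) * f + c ** d * g(n)  # worst case of the recurrence
    n *= c
    assert f <= 2 * c ** d * g(n), (m, f)     # the claimed bound persists
```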
\n\nWe make use of the upper bound in Theorem \\ref{upperbound}\n\\begin{equation}\n\\label{Kconstant}\nf(n, R^{k_1, \\ldots , k_d}, d) \\leq g(n) \\ , \n\\end{equation}\nwhere $g(n) = K n^{d - \\alpha(k_1, \\ldots , k_d)}$ for some positive constant $K$, and claim that\n\\begin{equation}\n\\label{cm}\n f(c^m,P\\otimes R^{k_1, \\ldots , k_d}, d) \\leq 2 c^d g(c^m).\n\\end{equation}\n\nWe justify the claim by induction. The base case of $m=0$ is trivially true. Suppose that \n\\begin{equation}\n\\label{indu}\nf(n,P\\otimes R^{k_1, \\ldots , k_d},d) \\le 2c^d g(n) \n\\end{equation}\nfor $n = c^m$. We show that \n$f(cn,P\\otimes R^{k_1, \\ldots , k_d},d)\\le 2c^d g(cn)$.\n\nLet $A$ be a $d$-dimensional $cn\\times \\cdots \\times cn$ matrix avoiding $P\\otimes R^{k_1, \\ldots , k_d}$ with $f(cn, P \\otimes R^{k_1, \\ldots , k_d}, d)$ total $1$-entries. We divide $A = (a_{i_1, \\ldots , i_d})$ into $c^d$ \ndisjoint submatrices of size $n \\times \\cdots \\times n$. We label these submatrices by $S(i_1, \\ldots , i_d) = (s_{j_1, \\ldots , j_d})$, where \n$$s_{j_1, \\ldots , j_d} =a_{j_1+n(i_1-1), \\ldots , j_d+n(i_d-1)} \\ .$$\nThese are called $S$ submatrices throughout the paper. \n\nLet $C$ be the $d$-dimensional $c \\times \\cdots \\times c$ matrix such that $c_{i_1, \\ldots , i_d}=1$ \nif submatrix $S(i_1, \\ldots , i_d)$ of $A$ contains $R^{k_1, \\ldots , k_d}$ and that $c_{i_1, \\ldots , i_d}=0$ otherwise. \nSince any two $1$-entries of the permutation matrix $P$ differ in all coordinates, $C$ must avoid $P$ or else $A$ contains $P\\otimes R^{k_1, \\ldots , k_d}$.\n\nWe can classify all the $S$ submatrices of $A$ into two classes.\n\n\\medskip\n\n\\noindent\n{\\bf Case 1: $S$ contains $R^{k_1, \\ldots , k_d}$} \n\nSince $C$ avoids $P$, there are at most $f(c,P,d)$ such $S$ submatrices. Clearly each $S$ \nsubmatrix must avoid $P \\otimes R^{k_1, \\ldots , k_d}$, so it has at most $f(n,P\\otimes R^{k_1, \\ldots , k_d},d)$ \n$1$-entries. 
There are at most $f(c,P,d)f(n,P\\otimes R^{k_1, \\ldots , k_d},d)$ $1$-entries from this type of $S$ submatrices.\n\n\\medskip\n\n\\noindent\n{\\bf Case 2: $S$ avoids $R^{k_1, \\ldots , k_d}$} \n\nThere are at most $c^d$ such submatrices in total. Each has at most $f(n,R^{k_1, \\ldots , k_d},d)$ $1$-entries. \nThere are at most $c^df(n,R^{k_1, \\ldots , k_d},d)$ $1$-entries from the second type of $S$ submatrices.\n\n\\medskip\n\n\\noindent\nSumming the numbers of $1$-entries in both cases gives \n\\begin{equation}\nf(cn,P\\otimes R^{k_1, \\ldots , k_d},d)\\le f(c,P,d)f(n,P\\otimes R^{k_1, \\ldots , k_d},d)+c^df(n,R^{k_1, \\ldots , k_d},d) . \\nonumber\n\\end{equation}\nOn the right hand side of the inequality, $f(n,P\\otimes R^{k_1, \\ldots , k_d},d)$ has the upper bound $2c^d g(n)$ \nby the inductive assumption (\\ref{indu}), and $f(n,R^{k_1, \\ldots , k_d},d)$ has the upper bound $g(n)$ by (\\ref{Kconstant}). Since $f(c,P,d) = O(c^{d-1})$ \nfor any permutation matrix $P$ \\cite{KM}, \nthere exists a constant $L$ such that $f(c,P,d) \\le Lc^{d-1}$. Because at least two of $k_1, k_2, \\ldots , k_d$ are greater than $1$, it follows from Proposition \\ref{alpha}\nthat $\\alpha(k_1, \\ldots , k_d) < 1$. Hence, the integer $c$ can be chosen \nso large that $2 L c^{\\alpha - 1} \\leq 1$. Therefore,\n\\begin{eqnarray*}\nf(cn,P\\otimes R^{k_1, \\ldots , k_d},d)\n\\le (L c^{d-1})(2c^d g(n))+c^dg(n)\n\\le [2L c^{\\alpha(k_1, \\ldots, k_d) - 1}]c^{d}g(cn)+c^dg(cn)\n\\le 2c^dg(cn) \\ ,\n\\end{eqnarray*}\nwhere we use $g(n) = K n^{d - \\alpha}$ in the second inequality. 
This completes our induction and hence proves equation (\\ref{cm}).\n\nFinally, we estimate $f(n,P\\otimes R^{k_1, \\ldots , k_d}, d)$ for all positive integers $n$.\n\\begin{eqnarray*}\nf(n, P \\otimes R^{k_1,...,k_d}, d) &=& f(c^{\\log_c n},P \\otimes R^{k_1, \\ldots , k_d},d) \\\\\n&\\le& f(c^{\\lceil\\log_c n \\rceil},P \\otimes R^{k_1, \\ldots , k_d},d) \\\\\n& \\le & 2c^dg(c^{\\lceil\\log_c n \\rceil}) \\\\\n&\\le& 2c^dg(c^{\\log_c n +1}) \\\\\n&=& 2c^dg(cn) \\\\\n&\\le& 2c^d c^dg(n),\n\\end{eqnarray*}\nwhere $\\lceil\\log_c n \\rceil$ is the smallest integer $\\ge \\log_c n$, \nand we use (\\ref{cm}) in the second inequality and $g(n) = K n^{d - \\alpha}$ in the last inequality.\nThis proves the right inequality of (\\ref{block}).\n\nThe proof of Theorem \\ref{super} is completed.\n\\end{proof}\n\nWe conclude this section with an observation. If only one of $k_1, \\ldots , k_d$ is greater than one, the matrix $P \\otimes R^{k_1, \\ldots, k_d}$\n is a tuple permutation matrix. By Proposition \\ref{alpha}, $\\alpha(k_1, \\ldots , k_d) = 1$. The proof of Theorem \\ref{super} fails in this case, but it can be \nmodified to show that $f(n, P \\otimes R^{k_1, \\ldots , k_d}, d) = O(n^{d - 1 + \\epsilon})$, where $\\epsilon$ is an arbitrarily small positive number. To see this, we can replace $g(n)$ of (\\ref{Kconstant}) by $g(n) = K n^{d - 1 + \\epsilon}$ and choose $c$ so large that $2 L c^{- \\epsilon} \\leq 1$. In the next section, we improve this result and show that $f(n, P \\otimes R^{k_1, \\ldots , k_d}, d) = O(n^{d - 1})$. The method is quite different from that of this section.\n\n\n\n\n\n\n\n\n\n\\section{Tuple permutation matrices}\n\\label{tuple}\nIn this section, we study the extremal function of an arbitrary tuple permutation matrix. As previously mentioned, a tuple permutation matrix is the Kronecker product of a $d$-dimensional permutation matrix and $ R^{k_1, \\ldots , k_d}$, where only one of $k_1, \\ldots, k_d$ is larger than unity. 
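To make the definition concrete, the following sketch (again with a hypothetical permutation matrix, $d = 2$, not taken from the paper) constructs a $3$-tuple permutation matrix as the Kronecker product $P \otimes R^{3,1}$, so that exactly one of $k_1, k_2$ exceeds one.

```python
# A small illustrative sketch (not from the paper) of a tuple permutation
# matrix: the Kronecker product of a permutation matrix with R^{3,1}, so
# exactly one of k_1, k_2 exceeds 1. Each 1-entry of P becomes a vertical
# run of three consecutive 1-entries, giving a 3-tuple permutation matrix.
def kron(P, R):
    h, w = len(R), len(R[0])
    return [[P[i // h][j // w] * R[i % h][j % w]
             for j in range(len(P[0]) * w)]
            for i in range(len(P) * h)]

P = [[0, 1],             # a hypothetical 2x2 permutation matrix
     [1, 0]]
R31 = [[1], [1], [1]]    # R^{3,1}: a 3x1 column of ones

T = kron(P, R31)         # 6x2 matrix: a 3-tuple permutation matrix
for row in T:
    print(row)
# Every column contains exactly three ones, in consecutive rows.
```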
We improve Geneson's ideas for the $d=2$ case \\cite{G} and obtain a tight bound on the extremal function for $d \\geq 2$.\n \nSuppose $P$ is a permutation matrix. We call a matrix $P \\otimes R^{k_1, \\ldots , k_d}$ a $j$-tuple permutation matrix generated by $P$ if one of $k_1, \\ldots , k_d$ is equal to $j$ and the rest are unity. In particular, a $j$-tuple permutation matrix is called a double permutation matrix if $j=2$.\n\nLet\n\\begin{displaymath}\nF(n,j,k,d)=\\max_{M} f(n,M,d) \\ ,\n\\end{displaymath}\nwhere $M$ ranges over all $d$-dimensional $j$-tuple permutation matrices generated by $d$-dimensional $k \\times \\cdots \\times k$ permutation matrices.\n\n\n\\begin{theo}\n\\label{main}\nFor all $j \\ge 2$,\n$F(n,j,k,d)=\\Theta(n^{d-1})$.\n\\end{theo}\n\n\\noindent\nThe proof of this theorem is based on a series of lemmas.\n\nSince $F(n,j,k,d)$ has $n^{d-1}$ as a lower bound in view of Proposition \\ref{Easy}, it suffices to prove that it has an upper bound of $O(n^{d-1})$.\n\nWe first observe that $F(n,j,k,d)$ and $F(n,2,k,d)$ are bounded by each other.\n\\begin{lem}\n\\label{2j}\n$F(n, 2, k, d) \\leq F(n, j, k, d) \\leq (j-1) F(n, 2, k, d) ~~~~~ \\mbox{for $j > 2$} \\ .$\n\\end{lem}\n\\begin{proof}\nIt suffices to show that \n\\begin{equation}\n\\label{2j'}\nf(n, P, d) \\leq f(n, P', d) \\leq (j-1) f(n, P, d) ,\n\\end{equation}\nwhere $P$ is a double permutation $2k \\times k \\times \\cdots \\times k$ matrix, $P'$ is a $j$-tuple permutation $jk \\times k \\times \\cdots \\times k$ matrix, and both $P$ and $P'$ are generated from the same arbitrary permutation matrix of size $k \\times \\cdots \\times k$.\n\nThe left inequality of (\\ref{2j'}) follows from the fact that a $d$-dimensional $n \\times \\cdots \\times n$ matrix that avoids $P$ must also avoid $P'$.\n\nTo prove the right inequality, we suppose $A$ is a $d$-dimensional $n \\times \\cdots \\times n$ matrix that avoids $P'$ and has $f(n, P', d)$ nonzero entries. 
\nIn each $1$-row of $A$, we list all the $1$-entries $e_1, e_2, \\ldots$ in the order of increasing first coordinates and then change all the $1$-entries in this $1$-row \nexcept $e_1, e_{j}, e_{2j-1}, \\ldots$ to $0$-entries. In this way, we obtain a new matrix $A'$, which avoids $P$ since $A$ avoids $P'$.\nThis together with $|A| \\leq (j-1)|A'|$, where $|M|$ denotes the number of $1$-entries in $M$, justifies the right inequality of (\\ref{2j'}).\n\\end{proof}\n\nIn view of Lemma \\ref{2j}, it suffices to study the upper bound on $f(n, P, d)$, where $P$ is a $d$-dimensional double permutation matrix of size \n$2k \\times k \\times \\cdots \\times k$.\n\nSuppose $A$ is an arbitrary $d$-dimensional $kn \\times \\cdots \\times kn$ matrix that avoids $P$. As in Section 2, we study the $S$ submatrices of $A$, which are constructed by dividing $A$ into $n^d$ disjoint submatrices of size $k \\times \\cdots \\times k$ and labeling these submatrices as $S(i_1, \\ldots , i_d)$.\n\nThe contraction matrix of $A$ is defined to be the $d$-dimensional $n \\times \\cdots \\times n$ matrix $C = \\left(c_{i_1,i_2, \\ldots , i_d}\\right)$ such that $c_{i_1,i_2, \\ldots ,i_d}=1$ if $S(i_1, i_2, \\ldots , i_d)$ is a nonzero matrix and $c_{i_1,i_2, \\ldots , i_d}=0$ if $S(i_1, i_2, \\ldots ,i_d)$ is a zero matrix.\n\nWe now construct a $d$-dimensional $n \\times \\cdots \\times n$ zero-one matrix $Q=(q_{i_1, \\ldots , i_d})$. Each entry $q_{i_1, \\ldots , i_d}$ is defined based on the $S$ submatrices of $A$.\n\\begin{enumerate}\n\\item $q_{i_1, \\ldots , i_d}=0$ if $S(i_1, \\ldots , i_d)$ is a zero matrix.\n\n\\item $q_{i_1, \\ldots , i_d}=1$ if $S(i_1, i_2, \\ldots , i_d)$ is a nonzero matrix and $S(1,i_2, \\ldots , i_d)$, $\\ldots$, $S(i_1-1,i_2, \\ldots , i_d)$ are all zero matrices.\n\n\\item Let $x$ be the largest integer less than $i_1$ for which $q_{x,i_2, \\ldots , i_d}=1$. 
Then define $q_{i_1, i_2, \\ldots , i_d}=1$ if the augmented matrix \nformed by submatrices $S(x, i_2, \\ldots , i_d)$, $\\ldots$ , $S(i_1, i_2, \\ldots , i_d)$ contains at least two $1$-entries in the same $1$-row, and $q_{i_1, \\ldots , i_d}= 0$ otherwise.\n\n\\end{enumerate}\n\n\\noindent\n\\begin{lem}\n\\label{Q}\n$Q$ avoids $P$.\n\\end{lem}\n\\begin{proof}\nSuppose to the contrary that $Q$ contains $P$. Suppose the $1$-entries $e_1$, $e_2$, $\\ldots$ , $e_{2k}$, where $e_{2i-1}$ and $e_{2i}$ are in the same $1$-row for each $i = 1, \\ldots , k$, form a copy of $P$ in $Q$. Denote $e_{2i-1}= q_{x_1, x_2, \\ldots , x_d}$ and $e_{2i}= q_{x_1',x_2, \\ldots, x_d}$, where $x_1 < x_1'$. Then, by the definition of matrix $Q$, the augmented matrix formed by $S(x_1, x_2, \\ldots , x_d), \\ldots , S(x_1', x_2, \\ldots , x_d)$ contains two $1$-entries, denoted by $f_{2i-1}$ and $f_{2i}$, in the same $1$-row of $A$. The $1$-entries\n$f_1, \\ldots , f_{2k}$ form a copy of $P$ in $A$, a contradiction.\n\\end{proof}\n\nWe now study those $S$ submatrices of $A$ which contain two nonzero entries in the same $1$-row. The next lemma is the key difference between our approach and Geneson's approach \\cite{G} even for $d=2$.\n\\begin{lem}\n\\label{wide}\n$A$ has at most $F(n, 1, k, d)$ total $S$ submatrices with two nonzero entries in the same $1$-row.\n\\end{lem}\n\\begin{proof} We assume to the contrary that $A$ has more than $F(n,1,k,d)$ such $S$ submatrices. Let $A'$ be formed by changing all $1$-entries in all other $S$ submatrices of $A$ to $0$-entries. Suppose that the double permutation matrix $P$ is generated from the permutation matrix $P'$ and that $C'$ is the contraction matrix of $A'$. Matrix $C'$ has more than $F(n,1,k,d) \\ge f(n,P',d)$ $1$-entries, so it must contain $P'$. Denote by $e_1, \\ldots , e_k$ the $1$-entries in $C'$ forming a copy of $P'$. Then each of $S(e_1), \\ldots, S(e_k)$ is an $S$ submatrix of $A'$ that has at least two nonzero entries in the same $1$-row. 
All of these pairs of nonzero entries in $S(e_1), \\ldots , S(e_k)$ form a copy of $P$ in $A'$. Hence, $A'$ contains $P$ and so does $A$, a contradiction.\n\\end{proof}\n\nFor each 1-entry $q_{i_1, i_2, \\ldots , i_d}=1$ of $Q$, we define a chunk $C^*(i_1, i_2, \\ldots , i_d)$, which is an augmented matrix formed by consecutive $S$ submatrices, as follows \\cite{G}. \n\n\\begin{enumerate}\n\n\\item If $q_{i_1, i_2, \\ldots , i_d}=1$ and $i_1'$ is the smallest integer greater than $i_1$ such that $q_{i_1', i_2, \\ldots , i_d}=1$, then the chunk $C^*(i_1, i_2, \\ldots , i_d)$ is defined to be the augmented matrix formed by $S(i_1, i_2, \\ldots , i_d)$, $\\ldots$ , $S(i_1' - 1, i_2, \\ldots , i_d)$.\n\n\\item If $q_{i_1, i_2, \\ldots , i_d}=1$ and there is no $i_1'>i_1$ such that $q_{i_1', i_2, \\ldots , i_d}=1$, then $C^*(i_1, i_2, \\ldots , i_d)$ is the augmented matrix formed by $S(i_1, i_2, \\ldots , i_d)$, $\\ldots$ , $S(n, i_2, \\ldots , i_d)$.\n\n\\end{enumerate}\n\nWe call a chunk {\\it $j$-tall}, where $j=2,3, \\ldots , d$, if each of its $j$-cross sections contains at least one 1-entry. The ($d-1$)-dimensional matrix $M' = (m'_{i_1, \\ldots, i_{j-1}, i_{j+1}, \\ldots, i_d})$ is called the $j$-remainder of a $d$-dimensional matrix $M = (m_{i_1, \\ldots , i_d})$ if $m'_{i_1, \\ldots , i_{j-1},i_{j+1}, \\ldots , i_d}$ is defined to be $1$\nwhen there exists $i_j$ such that $m_{i_1, \\ldots , i_d}=1$ and to be $0$ otherwise.\n\n\\begin{lem}\n\\label{tall}\nFor each $j= 2, 3, \\ldots, d$ and each $m=1, \\ldots , n$, $A$ has at most $F(n,1+k^{d-2},k,d-1)$ total $j$-tall chunks of the form $C^*(i_1, \\ldots , i_{j-1},m,i_{j+1}, \\ldots , i_d)$.\n\\end{lem}\n\\begin{proof}\nAssume to the contrary that $A$ has $r$ chunks $C^*_1, C^*_2, \\ldots , C^*_r$, where $r>F(n,1+k^{d-2},k,d-1)$, of the form $C^*(i_1, \\ldots , i_{j-1},m,i_{j+1}, \\ldots , i_d)$ \nthat have $1$-entries in\nall their $j$-cross sections. 
Let $S_1, S_2, \\ldots, S_r$ be the starting $S$ submatrices of the chunks $C^*_1, C^*_2, \\ldots , C^*_r$, respectively. Let $A'$ be the matrix formed\nby changing all $1$-entries of $A$ that do not lie in the chunks $C^*_1, \\ldots , C^*_r$ to $0$-entries. We further change all the $1$-entries of $A'$ that do not sit in \n$S_1, \\ldots, S_r$ to $0$-entries and denote the resulting matrix by $A''$. \nDenote by $C$ the contraction matrix of the $j$-remainder of $A''$. Then $C$ is a $(d-1)$-dimensional $n \\times \\cdots \\times n$ matrix with $r > F(n,1+k^{d-2},k,d-1)$ ones, so it contains every $(1+k^{d-2})$-tuple \n$(d-1)$-dimensional permutation matrix generated by a $(d-1)$-dimensional $k \\times \\cdots \\times k$ permutation matrix.\n\nWe now pick a $(d-1)$-dimensional $(1 + k^{d-2})$-tuple permutation matrix. Since $P$ is a $d$-dimensional double permutation matrix of size $2k \\times k \\times \\cdots \\times k$ and $j \\neq 1$, the $j$-remainder of $P$ is a $(d-1)$-dimensional double permutation matrix\nof size $2k \\times k \\times \\cdots \\times k$. We denote by $P'$ the $(d-1)$-dimensional $(1 + k^{d-2})$-tuple permutation matrix of size $(1+k^{d-2})k \\times k \\times \\cdots \\times k$ such that $P'$ and the $j$-remainder of $P$ are generated from the same $(d-1)$-dimensional permutation matrix.\n\nFor each pair of ones in a $1$-row of $P$ with coordinates $(x_1,x_2, \\ldots , x_d)$ and $(x_1 + 1,x_2, \\ldots , x_d)$, $P'$ has corresponding ($1 + k^{d-2}$) \nones with coordinates $(\\tilde{x}_1, x_2, \\ldots, x_{j-1}, x_{j+1}, \\ldots , x_d)$, $(\\tilde{x}_1 + 1, x_2, \\ldots , x_{j-1}, x_{j+1}, \\ldots , x_d)$, $\\ldots$ , \n$(\\tilde{x}_1 + k^{d-2}, x_2, \\ldots , x_{j-1}, x_{j+1}, \\ldots, x_d)$ \nin a single $1$-row. 
Since $C$ contains $P'$, this set of ($1 + k^{d-2}$) ones is represented by $1$-entries with coordinates \n$(t_1(\\lambda), t_2, \\ldots , t_{j-1}, t_{j+1}, \\ldots , t_d)$, where $\\lambda = 1, 2, \\ldots , 1 + k^{d-2}$, in the same $1$-row of $C$.\n\nLet $S(t_1(\\lambda), t_2, \\ldots , t_{j-1}, m, t_{j+1}, \\ldots , t_d)$, $1 \\leq \\lambda \\leq 1 + k^{d-2}$, be the corresponding $S$ submatrices of $A'$. By the construction of $A'$, $A''$ and $C$, these $S$ submatrices are the starting $S$ submatrices of some of the chunks $C^*_1, \\ldots , C^*_r$. Each of these ($1 + k^{d-2}$) chunks has $1$-entries in every $j$-cross section; in particular, each chunk has a nonzero entry \nwith the same $j^{\\text{th}}$ coordinate $(m-1)k+x_j$. There are at least $1+k^{d-2}$ nonzero entries with this given $j^{\\text{th}}$ coordinate in these chunks, but there are only $k^{d-2}$ $1$-rows in a $j$-cross section of these chunks. By the pigeonhole principle, there exists a pair of $1$-entries in the same $1$-row of $A'$.\n\nHence, for each pair of ones in the same $1$-row of $P$, we have a corresponding pair of ones in the same $1$-row of $A'$. Since two $1$-entries of $P$ not in the same \n$1$-row differ in all their coordinates, $A'$ contains $P$, and so does $A$, a contradiction.\n\\end{proof}\n\nWe can now derive a recursive inequality on $F(n, j, k, d)$, whose resolution gives an upper bound on $F(n,j,k,d)$.\n\n\\begin{lem}\n\\label{Ine}\nLet $d$, $k$, $n$ be positive integers where $d \\ge 2$. Then\n\\begin{eqnarray}\nF(kn,2,k,d) &\\le& (d-1)nk^{d-1}F(n, 1+k^{d-2},k,d-1) + k^dF(n,1,k,d) \n +(k-1)^{d-1}F(n,2,k,d) . 
~~~~~\n\\label{IN}\n\\end{eqnarray}\n\\end{lem}\n\\begin{proof} We count the maximum number of $1$-entries in $A$ by counting the number of ones in three types of chunks of $A$.\n\n\\medskip\n\n\\noindent\n{\\bf Case 1: chunk has two $1$-entries in the same $1$-row} \n\nIn view of the definitions of matrix $Q$ and a chunk, such a chunk has only one nonzero $S$ submatrix, so it has at most $k^d$ nonzero entries. By Lemma \\ref{wide}, there are at most $F(n,1,k,d)$ such $S$ submatrices. Chunks of this type contain at most $k^d F(n, 1, k, d)$ nonzero entries.\n\n\\medskip\n\n\\noindent\n{\\bf Case 2: chunk is $j$-tall for some $j=2, 3, \\ldots, d$ and has no two $1$-entries in the same $1$-row}\n\nThere are $d-1$ choices of $j$, namely $j=2, 3, \\ldots , d$. For each $j$, the integer $m$ of Lemma \\ref{tall} can be $1, \\ldots , n$. A $j$-tall chunk with no two $1$-entries in the same $1$-row has at most $k^{d-1}$ $1$-entries. For each pair of $j$ and $m$, there are at most $F(n, 1 + k^{d-2}, k, d-1)$ such chunks in view of Lemma \\ref{tall}. In total, chunks of this type contain at most $(d-1)nk^{d-1}F(n, 1+k^{d-2},k,d-1)$ nonzero entries.\n\n\\medskip\n\n\\noindent\n{\\bf Case 3: chunk is not $j$-tall for any $j=2, 3, \\ldots , d$ and has no two $1$-entries in the same $1$-row}\n\nSuch a chunk has at most $(k-1)^{d-1}$ ones. By the definition of a chunk, the number of chunks is equal to the number of nonzero entries in matrix $Q$, which, by Lemma \\ref{Q}, has \nat most $F(n,2,k,d)$ nonzero entries. There are at most $(k-1)^{d-1} F(n,2,k,d)$ ones in chunks of this type.\n\nSumming over all three cases proves Lemma \\ref{Ine}.\n\\end{proof}\n\nWe are now ready to finish the proof of Theorem \\ref{main}.\n\n\\begin{proof}[Proof of Theorem \\ref{main}]\nWe proceed by induction on $d$. The base case of $d=1$ is trivial. 
We then make the inductive assumption that \n\\begin{equation}\nF(n,j,k,d-1)=O(n^{d-2}) ~~ \\mbox{for some $d \\geq 2$} \\label{inductive}\n\\end{equation}\nand prove that $F(n,j,k,d) = O(n^{d-1})$.\n\nWe first use Lemma \\ref{Ine} to show that \n\\begin{equation}\nF(n,2,k,d) \\leq k(c+dk)n^{d-1} \\label{j=2} ,\n\\end{equation}\nwhere $c$ is a positive constant to be determined.\n\nWe simplify inequality (\\ref{IN}) of Lemma \\ref{Ine}. Inductive assumption (\\ref{inductive}) implies that $F(n, 1+ k^{d-2},k,d-1)=O(n^{d-2})$. \nWe also have $F(n,1,k,d)=O(n^{d-1})$, which was proven by Marcus and Tardos \\cite{MT} for $d=2$ and by Klazar and Marcus \\cite{KM} for $d > 2$. Hence, we can choose a sufficiently large constant $c$ \nsuch that the sum of the first two terms on the right hand side of (\\ref{IN}) is bounded by $c n^{d-1}$. Therefore,\n\\begin{equation}\n F(kn,2,k,d) \\le (k-1)^{d-1}F(n,2,k,d)+cn^{d-1} ~~~~~ \\mbox{for all $n$.} \\label{sn}\n\\end{equation}\n\nWe then use another induction, which is a strong induction on $n$, to prove inequality (\\ref{j=2}). The base case of $n \\leq k$ is trivial. Assuming that (\\ref{j=2}) is true for all $n < m$, we show that (\\ref{j=2}) also holds for $n =m$.\n\nLet $N$ be the maximum integer that is less than $m$ and divisible by $k$. A $d$-dimensional $m \\times \\cdots \\times m$ zero-one matrix has at most $m^d-N^d \\le m^d-(m-k)^d \\le dkm ^{d-1}$ more entries than a $d$-dimensional $N \\times N \\times \\cdots \\times N$ matrix. Thus we have $F(m,2,k,d)\\le F(N,2,k,d)+dkm^{d-1}$. 
This together with (\\ref{sn}) gives\n\\begin{eqnarray*}\nF(m, 2, k, d) &\\le& (k-1)^{d-1}F\\left({N \\over k}, 2,k,d\\right)+c\\left({N \\over k}\\right)^{d-1}+dkm^{d-1} \\\\\n&\\le& (k-1)^{d-1}k(c+dk) \\left({N \\over k}\\right)^{d-1} + c\\left({N \\over k}\\right)^{d-1}+dkm^{d-1} \\\\\n&\\le& (k-1)(c+dk)N^{d-1}+(c+dk)m^{d-1} \\\\\n&\\le& k(c+dk)m^{d-1} \\ ,\n\\end{eqnarray*}\nwhere we use the strong inductive assumption in the second inequality. Hence, inequality (\\ref{j=2}) holds for $n=m$. The strong induction shows that (\\ref{j=2}) is true for all positive integers $n$.\n\nHaving verified inequality (\\ref{j=2}), we complete the induction on $d$ by showing that $F(n,j,k,d)=O(n^{d-1})$. This follows easily from inequality (\\ref{j=2}) and Lemma \\ref{2j}.\nWe have completed the induction.\n\nSince $F(n,j,k,d) = \\Omega(n^{d-1})$ in view of Proposition \\ref{Easy}, this together with $F(n,j,k,d)=O(n^{d-1})$ completes the proof of Theorem \\ref{main}.\n\\end{proof}\n\nWe conclude this section with a remark. In the paragraph between inequalities (\\ref{j=2}) and (\\ref{sn}), we use Klazar and Marcus' result \\cite{KM} $F(n, 1, k, d) = O(n^{d-1})$ to \nchoose the constant $c$ in (\\ref{j=2}). In fact, Klazar and Marcus gave a more refined upper bound ${F(n,1,k, d) \\over n^{d-1}} = 2^{O(k \\log k)}$. This allows us to improve the inductive assumption (\\ref{inductive}) to ${F(n,j,k,d-1) \\over n^{d-2}} = 2^{O( k \\log k)}$ and choose \n$c = 2^{O(k \\log k)}$. In this way, we are able to prove ${F(n,j,k,d) \\over n^{d-1}} = 2^{O(k \\log k)}$. \n\nIn the next section, we improve Klazar and Marcus' upper bound from $2^{O(k \\log k)}$ to $2^{O(k)}$. As a consequence, $c = 2^{O(k)}$ and hence ${F(n,j,k,d) \\over n^{d-1}} = 2^{O(k)}$. 
Lemma \\ref{wide} is crucial in making the extension from ${F(n,1,k,d) \\over n^{d-1}} = 2^{O(k)}$ to ${F(n,j,k,d) \\over n^{d-1}} = 2^{O(k)}$ possible.\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\\section{Limit inferior and limit superior }\n\\label{constant}\nIn this section, we consider matrices $P$ such that $f(n, P, d) = \\Theta(n^{d-1})$. This tight bound implies that $\\{{ f(n, P, d) \\over n^{d - 1}} \\}$ is a bounded sequence. We are interested in the limits of this sequence.\n\nWhen $d=2$, Pach and Tardos showed that $f(n, P, 2)$ is super-additive \\cite{PT}.\nBy Fekete's Lemma on super-additive sequences \\cite{Fekete}, the sequence $\\{ {f(n, P, 2) \\over n} \\}$ is convergent. The limit is known as the F\\\"uredi-Hajnal limit.\n\nWhen $d > 2$, it is still an open problem to prove the convergence of the sequence $\\{ {f(n, P, d) \\over n^{d -1 }} \\}$. Instead, we consider the limit inferior and limit superior of the sequence and define\n\\begin{displaymath}\n\\label{ISd}\nI(P, d) = \\liminf_{n \\rightarrow \\infty} { f(n, P, d) \\over n^{d - 1}} \\ , ~~~~~ S(P, d) = \\limsup_{n \\rightarrow \\infty} { f(n, P, d) \\over n^{d - 1}} \\ .\n\\end{displaymath}\nWe derive lower bounds on $I(P,d)$ and an upper bound on $S(P,d)$. These bounds are written in terms of the size of $P$.\n\nThe main ideas in this section are Fox's interval minor containment \\cite{Fox} and our observation that the extremal function is super-homogeneous in higher dimensions.\n\n\n\\subsection{An improved upper bound}\n\nKlazar and Marcus \\cite{KM} showed that $S(P, d) =2^{O(k \\log k)}$ \nfor $k \\times \\cdots \\times k$ permutation matrices $P$. In this subsection, we extend Fox's ideas for the $d=2$ case \\cite{Fox} to improve this upper bound to $2^{O(k)}$ for $d \\geq 2$. 
We then show that the new upper bound also holds for tuple permutation matrices, which is a new result even for $d=2$.\n\n\n\\begin{theo} \n\\label{IS}\nIf $P$ is a $d$-dimensional $k \\times \\cdots \\times k$ permutation matrix or a tuple permutation matrix generated by such a permutation matrix, then\n$S(P,d) = 2^{O(k)}$.\n\\end{theo}\n\nThe proof uses the notions of cross section contraction and interval minor containment \\cite{Fox}. Contracting several consecutive $\\ell$-cross sections of a $d$-dimensional matrix means that we replace these $\\ell$-cross sections by a single $\\ell$-cross section, placing a one in an entry of the new cross section if at least one of the corresponding entries in the original $\\ell$-cross sections is a $1$-entry and placing a zero in that entry otherwise. The contraction matrix, as defined in Section 3, of an $sn \\times \\cdots \\times sn$ matrix $A$ can be obtained by contracting every $s$ consecutive $\\ell$-cross sections of $A$ uniformly for $1 \\leq \\ell \\leq d$.\n\nWe say that $A$ contains $B$ as an interval minor if we can use repeated cross section contraction to transform $A$ into a matrix which contains $B$. Matrix $A$ avoids $B$ as an interval minor if $A$ does not contain $B$ as an interval minor. \n\nEquivalently, a $k_1 \\times k_2 \\times \\cdots \\times k_d$ matrix $B = (b_{i_1, i_2, \\ldots , i_d} )$ is an interval minor of a matrix $A$ if\n\\begin{itemize}\n\n\\item for each $i=1, \\ldots , d$, there are $k_i$ disjoint intervals, $W_{i, 1}, \\ldots , W_{i, k_i}$, which are sets of consecutive positive integers,\n\n\\item and if $b_{i_1, \\ldots , i_d} = 1$ then the submatrix $W_{1, i_1} \\times \\cdots \\times W_{d, i_d}$ of $A$ contains a $1$-entry.\n\n\\end{itemize}\n\nThe containment in previous sections is generally stronger than containment as an interval minor. Indeed, if $A$ contains $B$, then $A$ contains $B$ as an interval minor. 
However, since a permutation matrix has only one $1$-entry in every cross section, containment of a permutation matrix $P$ is equivalent to containment of $P$ as an interval minor.\n\nAnalogous to $f(n, P, d)$, we define $m(n, P, d)$ to be the maximum number of $1$-entries in a $d$-dimensional $n \\times \\cdots \\times n$ zero-one matrix that avoids $P$ as an interval minor.\n\nWe observe that\n\\begin{equation}\n\\label{fm}\nf(n, P, d) \\leq m(n, R^{k, \\ldots, k}, d) \n\\end{equation}\nfor every $k \\times \\cdots \\times k$ permutation matrix $P$. This follows from the fact that containment of $R^{k, \\ldots, k}$ as an interval minor implies containment of $P$. Hence, we seek an upper bound on $m(n, R^{k, \\ldots, k}, d)$. We denote by $f_{k_1, \\ldots , k_d}(n,t,s, d)$ the maximum number of $1$-rows that have at least $s$ nonzero entries in a $d$-dimensional $t \\times n \\times\\cdots \\times n$ matrix that avoids \n$R^{k_1, \\ldots , k_d}$ as an interval minor.\n\n\\begin{lem}\n\\label{mblock}\n\n\\begin{equation}\nm(tn,R^{k, \\ldots, k},d) \\le s^d m(n,R^{k, \\ldots, k},d)+dn t^df_{k, \\ldots, k}(n,t,s, d) ,\n\\end{equation}\n\n\\end{lem}\n\\begin{proof}\nLet $A$ be a $d$-dimensional $tn \\times \\cdots \\times tn$ matrix that avoids $R^{k, \\ldots , k}$ as an interval minor and has $m(tn,R^{k, \\ldots , k},d)$ $1$-entries. Partition $A$ uniformly into $S$ submatrices of size $t \\times \\cdots \\times t$. Let $C$ be the contraction matrix of $A$ as defined in Section 3.\n\nWe do casework based on whether an $S$ submatrix of $A$ has $s$ nonzero $\\ell$-cross sections for some $\\ell$.\n\nWe first count the number of $1$-entries from the $S$ submatrices which do not have $s$ nonzero $\\ell$-cross sections for any $\\ell$. 
The contraction matrix $C$ has at most\n$m(n,R^{k, \\ldots, k},d)$ $1$-entries, for otherwise $C$ contains $R^{k, \\ldots, k}$ as an interval minor, and thus $A$ contains $R^{k, \\ldots, k}$ as an interval minor as well, \na contradiction.\nHence, $A$ has at most $m(n,R^{k, \\ldots, k},d)$ such $S$ submatrices, each of which contains at most $(s-1)^d \\leq s^d$ $1$-entries. Then there is a $t \\times n \\times \\cdots \\times n$ matrix $A$ which avoids $R^{1,k,\\ldots,k}$\nas an interval minor and has more than $m(n,R^{k, \\ldots, k},d-1)$ $1$-rows with at least $s$ $1$-entries in each $1$-row. Let $B$ be\nthe $1 \\times n \\times \\cdots \\times n$ matrix obtained from $A$ by contracting all the $1$-cross sections. Then $B$, which can be viewed as a $(d-1)$-dimensional matrix, has over $m(n,R^{k, \\ldots, k},d-1)$ $1$-entries \nand thus contains the $(d-1)$-dimensional matrix $R^{k, \\ldots, k}$ as an interval minor. Consequently, $A$ contains the $d$-dimensional $R^{1, k, \\ldots, k}$ as\nan interval minor, a contradiction. Therefore, $$f_{1,k, \\ldots, k}(n,t,s, d) \\le m(n,R^{k, \\ldots, k},d-1) \\leq {2^{1-1}t^2 \\over s} \\ m(n, R^{k, \\ldots, k}, d - 1) \\ , $$ \nwhich proves the base case.\n\nAssuming that for all $s$ and $t$ that are powers of $2$ satisfying $2^{\\ell-2} \\le s \\le t$ we have \n\\begin{equation}\n\\label{f_i}\nf_{\\ell-1,k, \\ldots, k}(n,t,s, d) \\leq {2^{\\ell-2}t^2 \\over s}\\ m(n, R^{k, \\ldots, k}, d - 1)\n\\end{equation}\n for some $\\ell \\geq 2$, we need to show that \n\\begin{equation}\n\\label{f_}\nf_{\\ell ,k, \\ldots, k}(n,t,s, d) \\leq {2^{\\ell -1}t^2 \\over s} \\ m(n, R^{k, \\ldots, k}, d - 1) \n\\end{equation}\nfor all $s$ and $t$ that are powers of $2$ satisfying $2^{\\ell -1} \\le s \\le t$. \n\nWe use another induction on $t$ to show that (\\ref{f_}) is true for all $t \\geq s$ that are powers of $2$. The base case of $t=s$ is trivial. 
If $f_{\\ell,k, \\ldots, k}(n,t,s, d) \n\\leq {2^{\\ell-1}t^2 \\over s} m(n, R^{k, \\ldots, k}, d - 1) $ for some $t \\geq s$ that is a power of $2$, we prove the same inequality for $2t$.\nBy Lemma \\ref{f}, we have \n\\begin{eqnarray*}\nf_{\\ell,k, \\ldots, k}(n,2t,s, d) &\\le& 2f_{\\ell,k, \\ldots, k}(n,t,s, d)+2f_{\\ell-1,k, \\ldots, k}(n,t,s\/2, d) \\\\\n&\\le & 2{2^{\\ell-1}t^2 \\over s} m(n, R^{k, \\ldots, k}, d - 1) +2{2^{\\ell-2}t^2 \\over s\/2} m(n, R^{k, \\ldots, k}, d - 1) \\\\ \n&=& {2^{\\ell-1}(2t)^2 \\over s} m(n, R^{k, \\ldots, k}, d - 1) \\ ,\n\\end{eqnarray*}\nwhere we use the two inductive assumptions in the second inequality. \nThus our induction on $t$ is complete and (\\ref{f_}) is proved. As a result, our induction on $\\ell$ is also complete.\n\\end{proof}\n\nWe are now ready to prove Theorem \\ref{IS}.\n\\begin{proof}[Proof of Theorem \\ref{IS}]\nWe first bound the right hand side of inequality (\\ref{fm}). \nWe claim that\n\\begin{equation}\n\\label{mO}\n{m(n,R^{k, \\ldots, k},d) \\over n^{d -1}} =2^{O(k)} \\ .\n\\end{equation}\n\nThe base case of $d=1$ is trivial. 
Assuming that (\\ref{mO}) is true for $(d-1)$, we\ncombine Lemmas \\ref{mblock} and \\ref{closed} to get\n$$m(tn,R^{k, \\ldots , k},d) \\le s^dm(n,R^{k, \\ldots, k}, d)+dt^d{2^{k-1}t^2 \\over s}2^{O(k)}n^{d-1} .$$\nChoosing $t=2^{dk }$ and $s=2^{k-1}$ yields\n$$m(2^{dk}n,R^{k, \\ldots , k},d) \\le 2^{(k-1)d}m(n,R^{k, \\ldots, k},d)+d 2^{kd(d + 2)} 2^{O(k)}n^{d-1} .$$\nIn particular, if $n$ is a positive integer power of $2^{dk}$, iterating this inequality yields\n\\begin{eqnarray*}\n\\lefteqn{m((2^{dk})^L, R^{k, \\ldots, k}, d)} \\\\\n& \\leq& 2^{(k-1)d} m((2^{dk})^{L - 1}, R^{k, \\ldots, k}, d) \n + d 2^{kd(d+2)} 2^{O(k)} (2^{dk})^{(L-1)(d-1)} \\\\\n&\\leq & 2^{2(k-1)d} m((2^{dk})^{L - 2}, R^{k, \\ldots, k}, d) \n + \\ d 2^{kd(d+2)} 2^{O(k)} \\left( 1 + {1 \\over 2^{d(dk - 2k +1)}} \\right) (2^{dk})^{(L-1)(d-1)} \\\\\n&\\leq& 2^{L(k-1)d} m(1, R^{k, \\ldots, k}, d) \n + \\ d 2^{kd(d+2)} 2^{O(k)} \\left( 1 + \n{1 \\over 2^{d(dk - 2k +1)}} + {1 \\over 2^{2d(dk - 2k +1)}} + \\cdots \\right) (2^{dk})^{(L-1)(d-1)} \\\\\n&=& 2^{O(k)} (2^{dk})^{(L-1)(d-1)} \\ .\n\\end{eqnarray*}\nHence, if $(2^{kd})^{L-1} \\leq n < (2^{kd})^L$, then\n$$m(n, R^{k, \\ldots, k}, d) \\leq m((2^{dk})^L, R^{k, \\ldots, k}, d) = 2^{O(k)} (2^{dk})^{(L-1)(d-1)} \\leq 2^{O(k)} n^{d-1} \\ .$$\nThis completes the induction on $d$, and hence (\\ref{mO}) is proved.\n\nIt follows from (\\ref{fm}) and (\\ref{mO}) that Theorem \\ref{IS} is true for every permutation matrix $P$. By the remark at the end of Section 3, this result can be extended to tuple permutation matrices. The proof of Theorem \\ref{IS} is completed.\n\\end{proof}\n\n\\subsection{\nLower bounds and super-homogeneity}\n\nWe first use Cibulka's method in \\cite{C} to show that $I(P, d) \\geq d(k-1)$ for all permutation matrices of size $k \\times \\cdots \\times k$ and extend this lower bound to tuple permutation matrices. 
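As a numeric illustration of the constant $d(k-1)$: for the $k \times \cdots \times k$ identity pattern, the proof of Theorem \ref{llb} constructs an avoider with exactly $n^d - (n-k+1)^d$ ones, and ${n^d - (n-k+1)^d \over n^{d-1}} \to d(k-1)$ as $n \to \infty$. The sketch below (illustrative only, $d = 2$, not code from the paper) rebuilds that avoider for the corner entry $(i_1, i_2) = (1,1)$ and verifies avoidance by brute force.

```python
import itertools

# Illustrative check (d = 2) of the lower-bound construction in the proof
# of Theorem \ref{llb}: for the k x k identity pattern, an n x n matrix
# that is zero on the box [1, n-k+1] x [1, n-k+1] (1-indexed) and one
# elsewhere avoids the pattern and has n^2 - (n-k+1)^2 ones.
def build(n, k):
    # 0-indexed: the zero box occupies rows and columns 0 .. n-k
    return [[0 if (r <= n - k and c <= n - k) else 1
             for c in range(n)] for r in range(n)]

def contains_identity(A, k):
    n = len(A)
    ones = [(r, c) for r in range(n) for c in range(n) if A[r][c]]
    # brute force: look for k ones with strictly increasing rows and columns
    for pts in itertools.combinations(sorted(ones), k):
        if all(pts[i][0] < pts[i + 1][0] and pts[i][1] < pts[i + 1][1]
               for i in range(k - 1)):
            return True
    return False

n, k = 6, 3
A = build(n, k)
num_ones = sum(map(sum, A))
print(num_ones, n**2 - (n - k + 1)**2)   # both 20
print(contains_identity(A, k))           # False: A avoids the identity
```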
\n\\begin{theo}\n\\label{llb}\nIf $P$ is a $d$-dimensional $k \\times \\cdots \\times k$ permutation matrix or a tuple permutation matrix generated by such a permutation matrix, then $I(P, d) \\geq d(k-1)$. Furthermore, if $P$ is the identity matrix, then $I(P, d) = S(P, d) = d(k-1)$.\n\\end{theo}\n\\begin{proof} We first show that, for all $n\\ge k-1$, we have\n\\begin{equation}\n\\label{lb}\nf(n, P, d) \\geq n^d - (n - k +1)^d\n\\end{equation}\nfor every permutation matrix $P$. \nPick one nonzero entry $p_{i_1, \\ldots , i_d}=1$ of $P$. Construct a $d$-dimensional $n \\times \\cdots \\times n$ matrix $A$ with entries such that $a_{j_1, \\ldots , j_d}=0$ if $i_l \\le j_l \\le n-k+i_l$ for all $1 \\leq l \\leq d$ and $a_{j_1, \\ldots , j_d}=1$ otherwise. We first show that $A$ avoids $P$. Suppose to the contrary that $A$ contains $P$. Let the special nonzero entry \n$p_{i_1, \\ldots, i_d}=1$ of $P$ be represented by entry $a_{y_1, \\ldots, y_d}$ of $A$. By the construction of $A$, we must have \neither $y_l\\le i_l-1$ or $y_l \\ge n-k+i_l+1$ for some $1 \\leq l \\leq d$. If $y_l \\le i_l -1$, since $A$ contains $P$,\n$A$ has $i_l -1$ other nonzero entries whose $l^{\\text{th}}$ coordinates are smaller than $y_l \\leq i_l -1$ to represent $1$-entries of $P$, an impossibility. If $y_l \\geq n - k + i_l +1$, a similar argument leads to another impossibility. Counting the number of $1$-entries in $A$ proves (\\ref{lb}).\n\nWe next show that \n\\begin{equation}\n\\label{ub}\nf(n,P,d) \\le n^d-(n-k+1)^d\n\\end{equation}\nwhen $P$ is the identity matrix, i.e., $p_{i_1, \\ldots, i_d}$ is one on the main diagonal $i_1=\\cdots=i_d$ and zero otherwise. \nIf $A$ is a matrix that avoids $P$, each diagonal of $A$, which is parallel to the main diagonal, has at most $k-1$ nonzero entries. Summing over the maximum numbers of $1$-entries in all diagonals\nproves (\\ref{ub}). \n\nThe second part of Theorem \\ref{llb} follows immediately from (\\ref{lb}) and (\\ref{ub}). 
The first part is obvious for a permutation matrix $P$ because of (\\ref{lb}), and it also holds \nfor a tuple permutation matrix $P'$ since $f(n, P, d) \\leq f(n, P', d)$ if $P'$ is generated by a permutation matrix $P$.\n\\end{proof}\n\nThe lower bound given in Theorem \\ref{llb} is linear in $k$. One may ask how large a lower bound on $I(P,d)$ can be for some $P$. In the rest of this section, we extend Fox's idea for the $d=2$ case \\cite{Fox, Fox2} to show that a lower bound can be as large as an exponential function in $k$ in multiple dimensions. The crucial part in our approach is our observation that $f(n,P,d)$ is super-homogeneous.\n\n\\begin{theo}\n\\label{lower}\nFor each large $k$, there exists a family of $d$-dimensional $k\\times \\cdots \\times k$ permutation matrices $P$ such that \n$I(P,d)=2^{\\Omega(k^{1 \/ d})}$.\n\\end{theo}\nThe proof uses the super-homogeneity of extremal functions. In dimension two, the extremal function \nwas shown to be super-additive \\cite{PT}, i.e., $f(m + n, P, 2) \\geq f(m, P, 2) + f(n, P, 2)$. This was the key in showing the convergence of the sequence\n$\\{ {f(n,P, 2) \\over n} \\}$ for those matrices $P$ whose extremal functions are $\\Theta(n)$. The limit is the well-known F\\\"uredi-Hajnal limit \\cite{FH}.\n\nWe note that the super-additivity of $f(n, P, 2)$ implies super-homogeneity, i.e., $f(s n, P, 2) \\geq s f(n, P, 2)$ for every positive integer $s$. 
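Indeed, iterating super-additivity with $m = (s-1)n, (s-2)n, \\ldots, n$ yields\n$$f(sn, P, 2) \\ge f((s-1)n, P, 2) + f(n, P, 2) \\ge \\cdots \\ge s f(n, P, 2).$$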
In higher dimensions, \nwe show that $f(n, P, d)$ is\nsuper-homogeneous of a higher degree.\n\nA corner entry of a $k_1 \\times \\cdots \\times k_d$ matrix $P = (p_{i_1, \\ldots, i_d})$ is defined to be an entry $p_{i_1, \\ldots, i_d}$ located at a corner of $P$, i.e., where $i_{\\tau}=1$ or $k_{\\tau}$ for $1 \\leq \\tau \\leq d$.\n\n\\begin{lem}\n\\label{homo}\nIf $P$ is a $d$-dimensional matrix with a corner $1$-entry, then $f(sn,P,d) \\ge {s^{d-1} \\over (d-1)!}f(n,P,d)$.\n\\end{lem}\n\\begin{proof}\n Without loss of generality, we assume that $p_{1, \\ldots, 1}=1$ is the corner $1$-entry in $P$. Let $M$ be an $s \\times \\cdots\\times s$ matrix with $1$-entries \nat the coordinates $(i_1, \\ldots, i_d)$ where $i_1+ \\cdots+i_d=s+d-1$ and $0$-entries everywhere else, so $M$ has ${s+d-2 \\choose d-1} \\ge {s^{d-1} \\over (d-1)!}$ $1$-entries. \nLet $N$ be an $n \\times \\cdots \\times n$ matrix that avoids $P$ and has $f(n,P,d)$ $1$-entries. It then suffices to prove that $M \\otimes N$ avoids $P$.\n\nAssume for contradiction that the Kronecker product $M \\otimes N$ contains $P$. Pick an arbitrary $1$-entry $p^*$ in $P$ other than $p_{1, \\ldots, 1}$. Suppose that $p_{1, \\ldots, 1}$ and $p^*$ are represented by $e_1$ and $e_2$ in $M \\otimes N$, respectively.\nWe consider the $n \\times \\cdots \\times n$ $S$ submatrices of $M \\otimes N$. We may assume that $e_1$ and $e_2$ are in\nthe $S$-submatrices $S(i_1, \\ldots, i_d)$ and $S(j_1, \\ldots, j_d)$, respectively. Note that $i_1+ \\cdots +i_d = j_1+ \\cdots + j_d$. Since $p^*$\nhas larger coordinates than $p_{1, \\ldots, 1}$ in $P$, entry $e_2$ must also have larger coordinates than $e_1$ in $M \\otimes N$ and hence $i_{\\tau} \\leq j_{\\tau}$ for $\\tau = 1, 2, \\ldots, d$. It then follows from $i_1+ \\cdots +i_d = j_1+ \\cdots + j_d$\nthat $i_{\\tau} = j_{\\tau}$ for $\\tau= 1, 2, \\ldots, d$, i.e., \nthe two entries $e_1$ and $e_2$ must be in the same $S$ submatrix in $M \\otimes N$. 
Since $p^*$ is an arbitrary $1$-entry other than $p_{1, \\ldots , 1}$ in $P$, \nthe $S$ submatrix contains $P$. But this is a contradiction since each nonzero $S$ submatrix in $M \\otimes N$ is an exact copy of $N$, \nwhich avoids $P$. Thus $M \\otimes N$ avoids $P$.\n\\end{proof}\n\nJust as super-additivity leads to the F\\\"uredi-Hajnal limit in dimension two, super-homogeneity also produces an interesting result on limits.\n\\begin{lem}\n\\label{converge}\nIf $P$ is a $d$-dimensional matrix which contains a corner $1$-entry, then for any positive integer $m$,\n$$I(P,d) \\ge {1 \\over (d-1)!} \\ {f(m,P,d) \\over m^{d-1}}.$$\n\\end{lem}\n\\begin{proof}\nFor each fixed positive integer $m$, we write $n$ as $n=sm+r$, where $0 \\le r < m$. Since $f(n,P,d)$ is nondecreasing in $n$ and $n < (s+1)m$, Lemma \\ref{homo} gives\n$${f(n,P,d) \\over n^{d-1}} \\ge {f(sm,P,d) \\over ((s+1)m)^{d-1}} \\ge {1 \\over (d-1)!} \\ {f(m,P,d) \\over m^{d-1}} \\left({s \\over s+1}\\right)^{d-1}.$$\nLetting $n \\to \\infty$, so that $s \\to \\infty$, proves the lemma.\n\\end{proof}\n\n\\begin{lem}\n\\label{exists}\nFor every large $\\ell$, there exist a positive integer $N = 2^{\\Omega(\\ell)}$ and a $d$-dimensional $N \\times \\cdots \\times N$ matrix $A$ that avoids $R^{\\ell, \\ldots, \\ell}$ as an interval minor and has at least $\\Theta(N^{d - 1\/2})$ $1$-entries.\n\\end{lem}\n\\begin{proof}\nLet $A$ be a random $N \\times \\cdots \\times N$ matrix with independent $1$-entries of probability $q$, let $X$ be the event that $A$ contains $R^{\\ell, \\ldots, \\ell}$ as an interval minor, and let $Y$ be the complementary event. Let $r(x_1, \\ldots, x_d)$ bound the probability that a fixed choice of consecutive intervals of lengths $x_1, \\ldots, x_d$ witnesses such a containment; bounding $r$ uses $1 + x \\leq e^x$ for $x > 0$ and $(1 + 1\/x_1)^{x_1} \\leq e$ for $x_1 \\geq 1$. We also note that $$r(\\ell, \\ldots , \\ell) = (1-q)^{\\ell^d}\\left[{2\\ell-1\\choose \\ell-1}2^{\\ell}{\\ell^{\\ell} \\over \\ell!} \\right]^d \\le e^{-q\\ell^d}(2^{2\\ell-1}2^{\\ell}e^{\\ell})^d \\le \n2^{-20^{d-1}\\ell\/2 } (2^{3\\ell - 1}e^{\\ell})^d \\le (2N)^{-d}, $$\nwhere we use Stirling's inequality and ${2\\ell-1 \\choose \\ell-1}\\le 2^{2\\ell-1}$.\nWe now use the \nsymmetry of $r(x_1, \\ldots, x_d)$ to obtain\n$$ \\mathbb{P}(X) \\le \\sum_{x_1, \\ldots, x_d\\ge \\ell} r(x_1,\\ldots, x_d) \\le \\left(\\sum_{i=0}^{\\infty}(1\/2)^i \\right)^d r(\\ell,\\ldots,\\ell) \n\\le 2^d (2N)^{-d} = N^{-d}. $$\n\nWe now estimate the conditional expectation $\\mathbb{E}( \\xi | Y)$, where $\\xi = |A|$ is the number of $1$-entries of $A$. 
Note that $\\mathbb{E}(\\xi | Y)\\mathbb{P}(Y)=\\mathbb{E}(\\xi)-\\mathbb{E}(\\xi | X) \\mathbb{P}(X) \\ge \\Theta(N^{d-1\/2})-N^d N^{-d} = \\Theta(N^{d-1\/2})$ so $\\mathbb{E}(\\xi | Y)=\\Theta(N^{d-1\/2})$.\nThus, there exists an $A$ that avoids $R^{\\ell, \\ldots, \\ell}$ as an interval minor and has at least $\\Theta(N^{d-1\/2})$ $1$-entries.\n\\end{proof}\n\n\nWe are now ready to prove Theorem \\ref{lower}.\n\\begin{proof} [Proof of Theorem \\ref{lower}]\n\nLet $\\ell=\\lfloor k^{1\/d} \\rfloor$ be the largest integer less than or equal to $ k^{1\/d}$. There is a family of $d$-dimensional permutation matrices of size $\\ell^d \\times \\cdots \\times \\ell^d$ that contain $R^{\\ell, \\ldots , \\ell}$ as an interval minor and have at least one corner $1$-entry. To see this, \nmany such permutation matrices can be constructed so that they have exactly one $1$-entry in each of their $S$-submatrices of size $\\ell^{d-1} \\times \\cdots \\times \\ell^{d-1}$, including a corner $1$-entry.\n\nSince $k \\geq \\ell^d$, there is a family of permutation matrices $P$ of size $k \\times \\cdots \\times k$ that contain $R^{\\ell, \\ldots , \\ell}$ as an interval minor and have at least one corner $1$-entry.\nEach $P$ has a corner $1$-entry, so we can apply Lemma \\ref{converge} to obtain\n\\begin{equation}\n\\label{key}\nI(P,d) \\ge {1 \\over (d-1)!} \\ {f(N,P,d) \\over N^{d-1}} ,\n\\end{equation}\nwhere $N$ can be chosen to be the positive integer given in Lemma \\ref{exists}. 
\n\nMatrix $P$ contains $R^{\\ell,\\ldots,\\ell}$ as an interval minor, \nso $f(N,P,d)\\ge m(N,R^{\\ell, \\ldots, \\ell},d)$,\nwhich along with (\\ref{key}) and Lemma \\ref{exists} yields \n$$I(P,d) \\ge {1 \\over (d-1)!} \\ {m(N, R^{\\ell, \\ldots, \\ell}, d) \\over N^{d - 1}} \n \\geq \\Theta(N^{1 \\over 2}) \n= 2^{\\Omega(\\ell)}\n= 2^{\\Omega(k^{1 \/ d})}.\n$$\nThis completes the proof of Theorem \\ref{lower}.\n\\end{proof}\n\n\\section{Conclusions and future directions}\n\\label{conclusion}\nWe obtained non-trivial lower and upper bounds on $f(n,P,d)$ when $n$ is large for block permutation matrices $P$. In particular, we established the tight bound $\\Theta(n^{d-1})$ on $f(n,P,d)$ for every $d$-dimensional tuple permutation matrix $P$. \nWe improved the previous upper bound on the limit superior of the sequence $\\{ {f(n,P,d) \\over n^{d-1}}\\}$ for \nall permutation and tuple permutation matrices. We used the super-homogeneity of the extremal function to show that \nthe limit inferior is exponential in $k$ for a family of $k \\times \\cdots \\times k$ permutation matrices. Our results substantially advance the extremal theory of matrices. We believe that super-homogeneity is fundamental to pattern avoidance in multidimensional matrices.\n\nOne possible direction for future research would be to strengthen the super-homogeneity as expressed in Lemma \\ref{homo} to\n$f(sn, P, d) \\geq s^{d-1} f(n, P, d)$. We have successfully tested this super-homogeneity on the identity matrix and the matrices whose $1$-entries are on rectilinear paths. If this super-homogeneity is true \nfor permutation matrices $P$, we can then use a Fekete-like lemma to show the convergence of the sequence $\\{ {f(n,P,d) \\over n^{d-1}} \\}$.\n\nAnother possible direction would be to extend Theorem \\ref{lower} from a family of permutation matrices to almost all permutation matrices. 
We think this becomes possible if the corner $1$-entry condition is removed in Lemmas \\ref{homo} and \\ref{converge}.\n\n\\section{Acknowledgments}\n\nWe would like to thank Professor Jacob Fox for valuable discussions about this research. The first author was supported by the NSF graduate fellowship under grant number 1122374. The research of the second author was supported in part by the Department of Mathematics, MIT through PRIMES-USA 2014. It was also supported in part by the Center for Excellence in Education and the Department of Defense through RSI 2014. \n\n\n\\bibliographystyle{amsplain}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzezvq b/data_all_eng_slimpj/shuffled/split2/finalzzezvq new file mode 100644 index 0000000000000000000000000000000000000000..0d305b2b4ff9e1e29f348a5555a0e0674294127e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzezvq @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThis paper examines split-improvement feature importance scores for tree-based methods. Starting with Classification and Regression Trees \\citep[CART;][]{breiman1984classification} and C4.5 \\citep{quinlan2014c4}, decision trees have been a workhorse of general machine learning, particularly within ensemble methods such as Random Forests \\citep[RF;][]{breiman2001random} and Gradient Boosting Trees \\citep{friedman2001greedy}. They enjoy the benefits of computational speed, few tuning parameters and natural ways of handling missing values. Recent statistical theory for ensemble methods \\citep[e.g.][]{denil2014narrowing,scornet2015consistency,mentch2016quantifying,wager2018estimation,zhou2018boulevard} has provided theoretical guarantees and allowed formal statistical inference. 
\\textcolor{black}{Variants of these models have also been proposed, such as Bernoulli Random Forests \\citep{yisen2016bernoulli,wang2017novel} and Random Survival Forests \\citep{ishwaran2008random}.} For all these reasons, tree-based methods have seen broad applications, including in protein interaction models \\citep{meyer2017interactome}, in product suggestions on Amazon \\citep{sorokina2016amazon}, and in financial risk management \\citep{khaidem2016predicting}.\n\nHowever, in common with other machine learning models, large ensembles of trees act as ``black boxes'', providing predictions but little insight as to how they were arrived at. There has thus been considerable interest in providing tools either to explain the broad patterns that are modeled by these methods, or to provide justifications for particular predictions. This paper examines variable or feature\\footnote{We use ``feature'', ``variable'' and ``covariate'' interchangeably here to indicate individual measurements that act as inputs to a machine learning model from which a prediction is made.} importance scores that provide global summaries of how influential a particular input dimension is in the models' predictions. These have been among the earliest diagnostic tools for machine learning and have been put to practical use as screening tools; see, for example, \\citet{diaz2006gene} and \\citet{menze2009comparison}. Thus, it is crucial that these feature importance measures reliably produce well-understood summaries.\n\nFeature importance scores for tree-based models can be broadly split into two categories. Permutation methods rely on measuring the change in value or accuracy when the values of one feature are replaced by uninformative noise, often generated by a permutation. These have the advantage of being applicable to any function, but have been critiqued by \\citet{hooker2007generalized,strobl2008conditional,hooker2019please} for forcing the model to extrapolate. 
By contrast, in this paper we study the alternative split-improvement scores \\textcolor{black}{(also known as Gini importance, or mean decrease impurity)} that are specific to tree-based methods. These naturally aggregate the improvement associated with each node split and can be readily recorded within the tree building process \\citep{breiman1984classification,friedman2001greedy}. In Python, split-improvement is the default implementation for almost every tree-based model, including \\textbf{RandomForestClassifier}, \\textbf{RandomForestRegressor}, \\textbf{GradientBoostingClassifier} and \\textbf{GradientBoostingRegressor} from \\textbf{scikit-learn} \\citep{pedregosa2011scikit}.\n\nDespite their common use, split-improvement measures are biased towards features that exhibit more potential splits, in particular towards continuous features or features with large numbers of categories. \\textcolor{black}{This weakness was already noticed in \\citet{breiman1984classification}; \\citet{strobl2007bias} conducted thorough experiments, followed by further discussion in \\citet{boulesteix2011random} and \\citet{nicodemus2011letter}\\footnote{See \\underline{https:\/\/explained.ai\/rf-importance\/} for a popular demonstration of this.}.} While this may not be concerning when all covariates are similarly configured, in practice it is common to have a combination of categorical and continuous variables, in which case emphasizing more complex features may mislead any subsequent analysis. For example, gender will be a very important binary predictor in applications related to medical treatment; whether the user is a paid subscriber is also central to some tasks, such as at Amazon and Netflix. But each of these may be rated as less relevant than age \\textcolor{black}{which is a more complex feature} in either case. 
\\textcolor{black}{In the task of ranking single nucleotide polymorphisms with respect to their ability to predict a target phenotype, researchers may overlook rare variants as common ones are systematically favoured by the split-improvement measurement \\citep{boulesteix2011random}.}\n\nWe offer an intuitive rationale for this phenomenon and design a simple fix to solve the bias problem. The observed bias is analogous to overfitting when training machine learning models: we should not build a model and evaluate its performance using the same set of data. To fix this, split-improvement calculated from a separate test set is taken into consideration. We further demonstrate that this new measurement is unbiased in the sense that features with no predictive power for the target variable will receive an importance score of zero in expectation. \\textcolor{black}{These measures can be very readily implemented in tree-based software packages.} We believe the proposed measurement provides a more sensible means for evaluating feature importance in practice.\n\nIn the following, we introduce some background and notation for tree-based methods in Section \\ref{tree}. In Section \\ref{FI}, split-improvement is described in detail and its bias and limitations are presented. The proposed unbiased measurement is introduced in Section \\ref{USI}. Section \\ref{real} applies our idea to a simulated example and three real-world data sets. We conclude with some discussions and future directions in Section \\ref{dis}. Proofs and some additional simulation results are collected in Appendices \\ref{proof} and \\ref{sim}, respectively. \n\n\\section{Tree-Based Methods}\\label{tree}\n\nIn this section, we provide a brief introduction and mathematical formulation of tree-based models \\textcolor{black}{that will also serve to introduce our notation}. We refer readers to relevant chapters in \\citet{friedman2001elements} for a more detailed presentation. 
\n\n\\subsection{Tree Building Process}\\label{cart}\n\nDecision trees are a non-parametric machine learning tool for constructing prediction models from data. They are obtained by recursively partitioning feature space by axis-aligned splits and fitting a simple prediction function, usually constant, within each partition. The result of this partitioning procedure is represented as a binary tree. Popular tree building algorithms, such as CART and C4.5, may differ in how they choose splits or deal with categorical features. Our introduction in this section mainly reflects how decision trees are implemented in \\textbf{scikit-learn}.\n\nSuppose our data consists of $p$ inputs and a response, denoted by $z_i = (x_i, y_i)$ for $i = 1, 2, \\ldots, n$, with $x_i = (x_{i1}, x_{i2}, \\ldots, x_{ip})$. For simplicity we assume our inputs are continuous\\footnote{Libraries in different programming languages differ on how to handle categorical inputs. \\textbf{rpart} and \\textbf{randomForest} libraries in \\textbf{R} search over every possible subsets when dealing with categorical features. However, tree-based models in \\textbf{scikit-learn} do not support categorical inputs directly. Manually transformation is required to convert categorical features to integer-valued ones, such as using dummy variables, or treated as ordinal when applicable.}. Labels can be either continuous (regression trees) or categorical (classification trees). Let the data at a node $m$ represented by $Q$. Consider a splitting variable $j$ and a splitting point $s$, which results in two child nodes: \n$$\nQ_{l} = \\{(x, y)|x_j \\leq s \\}\n$$\n$$\nQ_{r} = \\{(x, y)|x_j > s \\}.\n$$\nThe impurity at node $m$ is computed by a function $H$, which acts as a measure for goodness-of-fit \\textcolor{black}{and is invariant to sample size}. 
Our loss function for split $\\theta = (j, s)$ is defined as the weighted average of the impurity at two child nodes:\n$$\nL(Q, \\theta) = \\frac{n_{l}}{n_m}H(Q_{l}) + \\frac{n_{r}}{n_m}H(Q_{r}),\n$$\nwhere $n_m, n_l, n_r$ are the number of training examples falling into node $m, l, r$ respectively.\nThe best split is chosen by minimizing the above loss function:\n\\begin{equation}\\label{best}\n \\theta^* = \\argmin_{\\theta}L(Q, \\theta).\n\\end{equation}\nThe tree is built by recursively splitting child nodes until some stopping criterion is met. For example, we may want to limit tree depth, or keep the number of training samples above some threshold within each node.\n\nFor regression trees, $H$ is usually chosen to be mean squared error, using average values as predictions within each node. At node $m$ with $n_m$ observations, $H(m)$ is defined as:\n$$\n\\bar{y}_m = \\frac{1}{n_m}\\sum_{x_i \\in m}y_i,\n$$\n$$\nH(m) = \\frac{1}{n_m}\\sum_{x_i \\in m}(y_i - \\bar{y}_m)^2.\n$$\nMean absolute error can also be used depending on specific application.\n\nIn classification, there are several different choices for the impurity function $H$. Suppose for node $m$, the target $y$ can take values of $1, 2, \\ldots, K$, define\n$$\np_{mk} = \\frac{1}{n_m}\\sum_{x_i \\in m}\\mathbbm{1}(y_i = k)\n$$\nto be the proportion of class $k$ in node $m$, for $k = 1, 2, \\ldots, K$. Common choices are:\n\\begin{enumerate}\n\\item Misclassification error:\n$$\nH(m) = 1 - \\max_{1 \\leq k \\leq K} p_{mk}.\n$$\n\\item Gini index:\n$$\nH(m) = \\sum_{k \\neq k'}p_{mk}p_{mk'} = 1 - \\sum_{k = 1}^K p_{mk}^2.\n$$\n\\item Cross-entropy or deviance:\n$$\nH(m) = -\\sum_{k=1}^K p_{mk}\\log p_{mk}.\n$$\n\\end{enumerate}\n\nThis paper will focus on mean squared error for regression and Gini index for classification. 
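For concreteness, the two impurity measures used in this paper and the weighted split loss $L(Q, \theta)$ can be written in a few lines of plain NumPy. This is our own illustrative sketch, not the \textbf{scikit-learn} internals, and the function names are ours:

```python
import numpy as np

def mse_impurity(y):
    """Regression impurity H(m): mean squared deviation from the node mean."""
    y = np.asarray(y, dtype=float)
    return float(np.mean((y - y.mean()) ** 2))

def gini_impurity(y):
    """Classification impurity H(m): 1 - sum_k p_mk^2."""
    _, counts = np.unique(np.asarray(y), return_counts=True)
    p = counts / counts.sum()
    return float(1.0 - np.sum(p ** 2))

def split_loss(y_left, y_right, H):
    """L(Q, theta): impurity of the two child nodes, weighted by their sizes."""
    n = len(y_left) + len(y_right)
    return (len(y_left) / n) * H(y_left) + (len(y_right) / n) * H(y_right)
```

A pure node has impurity $0$ under either measure, while a balanced binary node such as `[0, 0, 1, 1]` has Gini impurity $0.5$; the best split of Equation (\ref{best}) minimizes `split_loss` over candidate $(j, s)$ pairs.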
\n\n\\subsection{Random Forests and Gradient Boosting Trees}\n\nThough intuitive and interpretable, a single decision tree has two major drawbacks: it suffers from high variance, and in some situations it is too simple to capture complex signals in the data. Bagging \\citep{breiman1996bagging} and boosting \\citep{friedman2001greedy} are two popular techniques used to improve the performance of decision trees.\n\nSuppose we use a decision tree as a base learner $t(x; z_1, z_2, \\ldots, z_n)$, where $x$ is the input for prediction and $z_1, z_2, \\ldots, z_n$ are training examples as before. Bagging aims to stabilize the base learner $t$ by resampling the training data. In particular, the bagged estimator can be expressed as:\n$$\n\\hat{t}(x) = \\frac{1}{B}\\sum_{b=1}^Bt(x; z^*_{b1}, z^*_{b2}, \\ldots, z^*_{bn})\n$$\nwhere $z^*_{bi}$ are drawn independently with replacement from the original data (bootstrap sample), and $B$ is the total number of base learners. Each tree is constructed using a different bootstrap sample from the original data. Thus approximately one-third of the cases are left out and not used in the construction of each base learner. We call these \\textit{out-of-bag} samples.\n\nRandom Forests \\citep{breiman2001random} are a popular extension of bagging with additional randomness injected. At each step when searching for the best split, only $p_0$ features are randomly selected from all $p$ possible features and the best split $\\theta^*$ must be chosen from this subset. When $p_0 = p$, this reduces to bagging. 
Mathematically, the prediction is written as \n$$\n\\hat{t}^{RF}(x) = \\frac{1}{B}\\sum_{b=1}^Bt(x; \\xi_b, z^*_{b1}, z^*_{b2}, \\ldots, z^*_{bn})\n$$\nwith $\\xi_b \\stackrel{\\mbox{iid}}{\\sim} \\Xi$ denoting the additional randomness for selecting from a random subset of available features.\n\nBoosting is another technique widely used by data scientists to achieve state-of-the-art results on many machine learning challenges \\citep{chen2016xgboost}. Instead of building trees in parallel as in bagging, it does this sequentially, allowing the current base learner to correct for any previous bias. In \\citet{ghosal2018boosting}, the authors also consider boosting RF to reduce bias. We will skip over some technical details on boosting and restrict our discussion of feature importance to decision trees and RF. Note that as long as tree-based models combine base learners in an additive fashion, their feature importance measures are naturally calculated by (weighted) average across those of individual trees. \n\n\n\\section{Measurement of Feature Importance}\\label{FI}\n\nAlmost every feature importance measure used in tree-based models belongs to one of two classes: split-improvement or permutation importance. Though our focus will be on split-improvement, permutation importance is introduced first for completeness. \n\n\\subsection{Permutation Importance}\n\nPermutation is arguably the most popular method for assessing feature importance in the machine learning community. Intuitively, if breaking the link between a variable $X_j$ and $y$ increases the prediction error, then variable $j$ can be considered important. \n\nFormally, we view the training set as a matrix $X$ of size $n \\times p$, where each row $x_i$ is one observation. Let $X^{\\pi, j}$ be a matrix achieved by permuting the $j^{th}$ column according to some mechanism $\\pi$. 
If we use $l(y_i, f(x_i))$ as the loss incurred when predicting $f(x_i)$ for $y_i$, then the importance of the $j^{th}$ feature is defined as:\n\\begin{equation} \\label{p}\n\\text{VI}^{\\pi}_j = \\sum_{i=1}^n \\left[ l(y_i, f(x_i^{\\pi, j})) - l(y_i, f(x_i)) \\right],\n\\end{equation}\nthe increase in prediction error when the $j^{th}$ feature is permuted. Variations include choosing a different permutation mechanism $\\pi$ or evaluating Equation (\\ref{p}) on a separate test set. In Random Forests, \\citet{breiman2001random} suggests permuting only the values of the $j^{th}$ variable in the \\textit{out-of-bag} samples for each tree, and the final importance for the forest is given by averaging across all trees. \n\nThere is a small literature analyzing permutation importance in the context of RF. \\citet{ishwaran2007variable} studied paired importance.\n\\citet{hooker2007generalized, strobl2008conditional, hooker2019please} advocated against permuting features by arguing it emphasizes behavior in regions where there is very little data. More recently, \\citet{gregorutti2017correlation} conducted a theoretical analysis of the permutation importance measure for an additive regression model. 
Then the decrease in impurity for split $\\theta^*_m$ is defined as:\n\\begin{equation} \\label{di}\n\\Delta(\\theta^*_m) = \\omega_mH(m) - (\\omega_lH(l) + \\omega_rH(r)),\n\\end{equation}\nwhere $\\omega$ is the proportion of observations falling into each node, i.e., $\\omega_m = \\frac{n_m}{n}$, $\\omega_l = \\frac{n_l}{n}$ and $\\omega_r = \\frac{n_r}{n}$. Then, to get the importance for the $j^{th}$ feature in a single tree, we add up all $\\Delta(\\theta^*_m)$ where the split is at the $j^{th}$ variable:\n\\begin{equation} \\label{t}\n\\text{VI}^{\\text{T}}_j = \\sum_{m, j \\in \\theta^*_m}\\Delta(\\theta^*_m).\n\\end{equation}\nHere the sum is taken over all non-terminal nodes of the tree, and we use the notation $j \\in \\theta^*_m$ to denote that the split is based on the $j^{th}$ feature. \n\nThe notion of split-improvement for decision trees can be easily extended to Random Forests by taking the average across all trees. If there are $B$ base learners in the forest, we naturally define\n\\begin{equation} \\label{si}\n\\text{VI}^{\\text{RF}}_j = \\frac{1}{B}\\sum_{b = 1}^B\\text{VI}^{\\text{T(b)}}_j = \\frac{1}{B}\\sum_{b = 1}^B\\sum_{m, j \\in \\theta^*_m}\\Delta_b(\\theta^*_m).\n\\end{equation}\n\n\n\\subsection{Bias in Split-Improvement}\n\n\\citet{strobl2007bias} pointed out that the split-improvement measure defined above is biased towards increasing the importance of continuous features or categorical features with many categories. This is because of the increased flexibility afforded by a larger number of potential split points. We conducted a similar simulation to further demonstrate this phenomenon. All our experiments are based on Random Forests, which give more stable results than a single tree.\n\nWe generate a simulated dataset so that $X_1 \\sim N(0, 1)$ is continuous, and $X_2, X_3, X_4, X_5$ are categorically distributed with $2, 4, 10, 20$ categories, respectively. The probabilities are equal across categories within each feature. 
In particular, $X_2$ follows a Bernoulli distribution with $p=0.5$. In the classification setting, the response $y$ is also drawn from a Bernoulli distribution with $p=0.5$, but independent of all the $X$'s. For regression, $y$ is independently generated as $N(0, 1)$. We repeat the simulation 100 times, each time generating $n=1000$ data points and fitting a Random Forest model\\footnote{Our experiments are implemented using \\textbf{scikit-learn}. Unless otherwise noted, default parameters are used.} using the data set. Here categorical features are encoded into dummy variables, and we sum up importance scores for corresponding dummy variables as the final measurement for a specific categorical feature. In Appendix \\ref{sim}, we also provide simulation results when treating those categorical features as (ordered) discrete variables. \n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{Split_Improvement_for_Classification_Dummy.png}\n \\caption{Classification}\n \\label{fig:si_cls}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{Split_Improvement_for_Regression_Dummy.png}\n \\caption{Regression}\n \\label{fig:si_regr}\n \\end{subfigure}\n \\hfill\n \n \\caption{Split-improvement measures on five predictors. Box plot is based on 100 repetitions. 100 trees are built in the forest and maximum depth of each tree is set to 5.}\n \\label{fig:si}\n \n\\end{figure}\n\nBox plots are shown in Figures \\ref{fig:si_cls} and \\ref{fig:si_regr} for classification and regression respectively. The continuous feature $X_1$ is frequently given the largest importance score in the regression setting, and among the four categorical features, those with more categories receive larger importance scores. A similar phenomenon is observed in classification, where $X_5$ even appears to be artificially more important than $X_1$. 
Also note that all five features get positive importance scores, though we know that they have no predictive power for the target value $y$.\n\nWe now explore how strong a signal is needed in order for the split-improvement measures to discover important predictors. We generate $X_1, X_2, \\ldots, X_5$ as before, but in the regression setting we set $y = \\rho X_2 + \\epsilon$, where $\\epsilon \\sim N(0, 1)$. We choose $\\rho$ to range from 0 to 1 at step size 0.1 to encode different levels of signal. For classification experiments, we first make $y = X_2$ and then flip each element of $y$ when $U > \\frac{1 + \\rho}{2}$, where $U$ is Uniform$[0,1]$. This way, the correlation between $X_2$ and $y$ will be approximately $\\rho$. We report the average ranking of all five variables across 100 repetitions for each $\\rho$. The results are shown in Figure \\ref{fig:rank}.\n\nWe see that $\\rho$ needs to be larger than 0.2 to identify $X_2$ as the most important predictor in our classification setting, while in regression this value increases to 0.6. We also observe that a clear order exists for the remaining (all unimportant) four features. \n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{Average_Ranking_for_Classification_Dummy.png}\n \\caption{Classification}\n \\label{fig:c_rank}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{Average_Ranking_for_Regression_Dummy.png}\n \\caption{Regression}\n \\label{fig:r_rank}\n \\end{subfigure}\n \\hfill\n \n \\caption{Average feature importance ranking across different signal strengths over 100 repetitions. 100 trees are built in the forest and maximum depth of each tree is set to 5.}\n \\label{fig:rank}\n \n\\end{figure}\n\n\nThis bias phenomenon could make many statistical analyses based on split-improvement invalid. 
For example, gender is a very common and powerful binary predictor in many applications, but feature screening based on split-improvement might rank it as less important than age. In the next section, we explain intuitively why this bias is observed, and provide a simple but effective adjustment. \n\n\\subsection{Related Work}\n\n\\textcolor{black}{Before presenting our algorithm, we review some related work aimed at correcting the bias in split-improvement. Most of the methods fall into two major categories: they either propose new tree building algorithms by redesigning split selection rules, or act as a post hoc approach to debias the importance measurement.} \n\n\\textcolor{black}{There has been a line of work on designing trees that do not exhibit the bias observed in classical algorithms such as CART and C4.5. For example, the Quick, Unbiased and Efficient Statistical Tree \\citep[QUEST;][]{loh1997split} removed the bias by using F-tests on ordered variables and contingency table chi-squared tests on categorical variables. Based on QUEST, CRUISE \\citep{kim2001classification} and GUIDE \\citep{loh2009improving} were developed. We refer readers to \\citet{loh2014fifty} for a detailed discussion of this line of work. In \\citet{strobl2007bias}, the authors resorted to a different algorithm called cforest \\citep{hothorn2010party}, which was based on a conditional inference framework \\citep{hothorn2006unbiased}. They also implemented a stopping criterion based on multiple test procedures.} \n\n\\textcolor{black}{\\citet{sandri2008bias} expressed split-improvement as two components: a heterogeneity reduction and a positive bias. Then the original dataset $( \\mathbf{X}, Y)$ is augmented with pseudo data $\\mathbf{Z}$, which is uninformative but shares the structure of $\\mathbf{X}$ \\citep[this idea of generating pseudo data is later formulated in a general framework termed ``knockoffs'';][]{barber2015controlling}. 
The positive bias term is estimated by utilizing the pseudo variables $\\mathbf{Z}$ and subtracted to obtain a debiased estimate. \\citet{nembrini2018revival} later modified this approach to shorten computation time and provided empirical importance testing procedures. Most recently, \\citet{li2019debiased} derived a tight non-asymptotic bound on the expected bias of noisy features and provided a new debiased importance measure. However, this approach only alleviates the issue and still yields biased results.}\n\n\\textcolor{black}{Our approach works as a post hoc analysis, where the importance scores are calculated after a model is built. Compared to previous methods, it enjoys several advantages: \n\\begin{itemize}\n \\item It can be easily incorporated into any existing framework for tree-based methods in Python or R.\n \\item It does not require generating additional pseudo data or computational repetitions as in \\citet{sandri2008bias, nembrini2018revival}.\n \\item Compared to \\citet{li2019debiased}, which lacks a theoretical guarantee, our method is provably unbiased for noisy features.\n\\end{itemize}}\n\n\\section{Unbiased Split-Improvement}\\label{USI}\n\nWhen it comes to evaluating the performance of machine learning models, we generally use a separate test set to calculate generalization accuracy. The training error is usually smaller than the test error, as the algorithm is likely to ``overfit'' the training data. This is exactly why we observe the bias with split-improvement. Each split will favor continuous features, or categorical features with more categories, as they have more flexibility to fit the training data. The vanilla version of split-improvement is just like using training error to evaluate model performance. 
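To make this overfitting analogy concrete, the following sketch (our own illustration, not code from the paper) shows that even for a pure-noise continuous feature, searching over candidate thresholds always yields a positive Gini decrease on the training data:

```python
import numpy as np

def gini(y):
    """Gini impurity of a binary label vector."""
    if len(y) == 0:
        return 0.0
    p = np.mean(y)
    return 1.0 - p ** 2 - (1.0 - p) ** 2

def split_improvement(x, y, s):
    """Weighted Gini decrease for the candidate split x <= s, as in CART."""
    left, right = y[x <= s], y[x > s]
    n = len(y)
    return gini(y) - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)

rng = np.random.default_rng(0)
x = rng.normal(size=200)            # continuous feature: pure noise
y = rng.integers(0, 2, size=200)    # labels drawn independently of x

# Searching over all candidate thresholds still "finds" a strictly positive
# improvement, mirroring how training error understates generalization error.
best = max(split_improvement(x, y, s) for s in x[:-1])
```

The more thresholds a feature offers, the larger this spurious maximum tends to be, which is exactly the bias against binary features described above.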
\n\nBelow we propose methods to remedy this bias phenomenon by utilizing a separate test set, and prove that for features with no predictive power, we are able to obtain an importance score of 0 in expectation in both classification and regression settings. Our method is entirely based on the original framework of RF, requires essentially no additional computational effort, and can be easily integrated into existing software libraries.\n\nThe main ingredient of the proposed method is to calculate the impurity function $H$ using additional information provided by test data. In the context of RF, we can simply take \\textit{out-of-bag} samples for each individual tree. \\textcolor{black}{Our experiments below are based on this strategy. In the context of the honest trees proposed in \\citet{wager2018estimation}, which divide samples into a partition used to determine tree structures and a partition used to obtain leaf values, the latter could be used as our test data below. In boosting, it is common not to sample, but to keep a test set separate to determine a stopping time.} Since the choice of impurity function $H$ is different for classification and regression, in what follows we will treat them separately. \n\nFigures \\ref{fig:test} and \\ref{fig:urank} show the results on the previous classification and regression tasks when our unbiased method is applied\\footnote{Relevant code can be found at \\href{https:\/\/github.com\/ZhengzeZhou\/unbiased-feature-importance}{https:\/\/github.com\/ZhengzeZhou\/unbiased-feature-importance}.}. Feature scores for all variables are spread around $0$, though continuous features and categorical features with more categories tend to exhibit more variability. In the case where there is correlation between $X_2$ and $y$, even for the smallest $\\rho = 0.1$, we can still find the most informative predictor, whereas there is no clear order for the remaining noise features. 
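The out-of-bag strategy described above can be sketched with a hand-rolled bootstrap (for illustration only; production forests such as scikit-learn's manage the resampling internally):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))
y = (X[:, 0] > 0).astype(int)

trees, oob_sets = [], []
for _ in range(10):
    boot = rng.integers(0, n, size=n)        # bootstrap rows, drawn with replacement
    oob = np.setdiff1d(np.arange(n), boot)   # rows this tree never saw
    trees.append(DecisionTreeClassifier(max_depth=5).fit(X[boot], y[boot]))
    oob_sets.append(oob)

# Roughly 1/e (about 37%) of rows are out-of-bag for each tree; they can serve
# as that tree's private test set when evaluating the impurity function H'.
oob_fraction = np.mean([len(oob) / n for oob in oob_sets])
```

Because every tree has its own held-out rows, no data beyond the original training set is needed.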
\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{Unbiased_Split_Improvement_for_Classification_Dummy.png}\n \\caption{Classification}\n \\label{fig:test_cls}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{Unbiased_Split_Improvement_for_Regression_Dummy.png}\n \\caption{Regression}\n \\label{fig:test_regr}\n \\end{subfigure}\n \\hfill\n \n \\caption{Unbiased split-improvement. Box plots are based on 100 repetitions. 100 trees are built in the forest and the maximum depth of each tree is set to 5. Each tree is trained using bootstrap samples and \\textit{out-of-bag} samples are used as the test set.}\n \\label{fig:test}\n \n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{Unbiased_Average_Ranking_for_Classification_Dummy.png}\n \\caption{Classification}\n \\label{fig:uc_rank}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{Unbiased_Average_Ranking_for_Regression_Dummy.png}\n \\caption{Regression}\n \\label{fig:ur_rank}\n \\end{subfigure}\n \\hfill\n \n \\caption{Unbiased feature importance ranking across different signal strengths averaged over 100 repetitions. 100 trees are built in the forest and the maximum depth of each tree is set to 5. Each tree is trained using bootstrap samples and \\textit{out-of-bag} samples are used as the test set.}\n \\label{fig:urank}\n \n\\end{figure}\n\n\n\\subsection{Classification}\n\nConsider a parent node $m$ and its two child nodes, denoted by $l$ and $r$ respectively. The best split $\\theta^*_m = (j, s)$ is chosen by Formula (\\ref{best}), and the Gini index is used as the impurity function $H$. \n\nFor simplicity, we focus on binary classification. Let $p$ denote the class proportion within each node. 
For example, $p_{r, 2}$ denotes the proportion of class 2 in the right child node. Hence the Gini index for each node can be written as:\n$$\nH(m) = 1 - p_{m,1}^2 - p_{m,2}^2,\n$$\n$$\nH(l) = 1 - p_{l,1}^2 - p_{l,2}^2,\n$$\n$$\nH(r) = 1 - p_{r,1}^2 - p_{r,2}^2.\n$$\nThe split-improvement for a split at the $j^{th}$ feature when evaluated using only the training data is written as in Equation (\\ref{di}). This value is always positive no matter which feature is chosen and where the split is placed, which is exactly why selection bias leads to overestimation of feature importance.\n\nIf instead we have a separate test set available, the predictive impurity function for each node is modified to be:\n\\begin{equation}\n \\label{pi}\n \\begin{aligned}\n H'(m) &= 1 - p_{m,1}p'_{m,1} - p_{m,2}p'_{m,2}, \\\\ \n H'(l) &= 1 - p_{l,1}p'_{l,1} - p_{l,2}p'_{l,2}, \\\\\n H'(r) &= 1 - p_{r,1}p'_{r,1} - p_{r,2}p'_{r,2},\n \\end{aligned}\n\\end{equation}\nwhere $p'$ denotes the class proportions evaluated on the test data.\nSimilarly, \n\\begin{equation}\n \\label{delta_c}\n \\begin{aligned}\n\\Delta'(\\theta^*_m) & = \\omega_mH'(m) - (\\omega_lH'(l) + \\omega_rH'(r)) \\\\\n& = \\omega_l(H'(m) - H'(l)) + \\omega_r(H'(m) - H'(r)).\n \\end{aligned}\n\\end{equation}\n\\textcolor{black}{Using these definitions, we first demonstrate that an individual split is unbiased in the sense that if $y$ has no bivariate relationship with $X_j$, $\\Delta'(\\theta^*_m)$ will have expectation 0.}\n\\begin{lemma}\\label{l_c}\nIn classification settings, for a given feature $X_j$, if $y$ is marginally independent of $X_j$ within the region defined by node $m$, then \n$$\nE\\Delta'(\\theta^*_m) = 0\n$$\nwhen splitting at the $j^{th}$ feature.\n\\end{lemma}\n\n\\begin{proof}\nSee Appendix \\ref{proof}.\n\\end{proof}\nSimilar to Equation (\\ref{t}), the split-improvement of $x_j$ in a decision tree is defined as:\n\\begin{equation} \\label{ut}\n\\text{VI}^{\\text{T,C}}_j = \\sum_{m, j \\in 
\\theta^*_m}\\Delta'(\\theta^*_m).\n\\end{equation}\n\\textcolor{black}{We can now apply Lemma \\ref{l_c} to provide a global result so long as $X_j$ is always irrelevant to $y$.}\n\\begin{theorem}\nIn classification settings, for a given feature $X_j$, if $y$ is independent of $X_j$ in every hyper-rectangle subset of the feature space, then we always have\n$$\nE\\text{VI}^{\\text{T,C}}_j = 0.\n$$\n\\end{theorem}\n\n\\begin{proof}\nThe result follows directly from Lemma \\ref{l_c} and Equation (\\ref{ut}). \n\\end{proof}\n\nThis unbiasedness result can be easily extended to RF via (\\ref{si}), as it is an average across base learners. \\textcolor{black}{We note here that our independence condition is designed to account for relationships that appear both before and after accounting for splits on other variables, possibly due to relationships between $X_j$ and other features. It is trivially implied by the independence of $X_j$ from both $y$ and the other features. Our condition may also be stronger than necessary, depending on the tree-building process. We may be able to restrict the set of hyper-rectangles to be examined, but only by analyzing specific tree-building algorithms.}\n\n\\subsection{Regression}\n\nIn regression, we use the mean squared error as the impurity function $H$:\n$$\n\\bar{y}_m = \\frac{1}{n_m}\\sum_{x_i \\in m}y_i,\n$$\n$$\nH(m) = \\frac{1}{n_m}\\sum_{x_i \\in m}(y_i - \\bar{y}_m)^2.\n$$\nIf instead the impurity function $H$ is evaluated on a separate test set, we define \n$$\nH'(m) = \\frac{1}{n'_m}\\sum_{i=1}^{n'_m}(y'_{m,i} - \\bar{y}_m)^2 \n$$\nand similarly\n$$\n\\Delta'(\\theta^*_m) = \\omega_mH'(m) - (\\omega_lH'(l) + \\omega_rH'(r)).\n$$\nNote that here $H'(m)$ measures the mean squared error within node $m$ on test data with the fitted value $\\bar{y}_m$ from the training data. 
If we simply sum up $\\Delta'$ as feature importance, it will end up with negative values, as $\\bar{y}_m$ overfits the training data and thus makes the mean squared error much larger deep in the tree. In other words, it \\textit{over-corrects} the bias. For this reason, our unbiased split-improvement is defined slightly differently from the classification case (\\ref{ut}):\n\\begin{equation} \\label{utr}\n\\text{VI}^{\\text{T,R}}_j = \\sum_{m, j \\in \\theta^*_m} (\\Delta(\\theta^*_m) + \\Delta'(\\theta^*_m)).\n\\end{equation}\n\nNotice that although Equations (\\ref{ut}) and (\\ref{utr}) are different, they originate from the same idea of correcting bias using test data. Unlike Formula (\\ref{pi}) for the Gini index, where we could design a predictive impurity function by combining training and test data, it is hard to construct such a counterpart in the regression setting. \n\nJust as in the classification case, we can show the following unbiasedness results:\n\n\\begin{lemma}\\label{l_r}\nIn regression settings, for a given feature $X_j$, if $y$ is marginally independent of $X_j$ within the region defined by node $m$, then \n$$\nE(\\Delta(\\theta^*_m) + \\Delta'(\\theta^*_m)) = 0\n$$\nwhen splitting at the $j^{th}$ feature.\n\\end{lemma}\n\n\\begin{proof}\nSee Appendix \\ref{proof}.\n\\end{proof}\n\\begin{theorem}\nIn regression settings, for a given feature $X_j$, if $y$ is independent of $X_j$ in every hyper-rectangle subset of the feature space, then we always have\n$$\nE\\text{VI}^{\\text{T,R}}_j = 0.\n$$\n\\end{theorem}\n\n\n\n\\section{Empirical Studies}\\label{real}\n\n\\textcolor{black}{In this section, we apply our method to one simulated example and three real data sets. We compare our results to three other algorithms: the default split-improvement in \\textbf{scikit-learn}, cforest \\citep{hothorn2006unbiased} in the R package \\textbf{party}, and the bias-corrected impurity \\citep{nembrini2018revival} in the R package \\textbf{ranger}. 
We did not include a comparison with \\citet{li2019debiased} since their method does not enjoy the unbiasedness property. In what follows, we use the shorthand SI for the default split-improvement and UFI for our method (unbiased feature importance).}\n\n\\subsection{Simulated Data}\n\n\\textcolor{black}{The data has 1000 samples and 10 features, where $X_i$ takes values in $0, 1, \\ldots, i$ with uniform probability for $1 \\leq i \\leq 10$. Here, we assume only $X_1$ carries true signal; the remaining nine are noise features. The target value $y$ is generated as follows:\n\\begin{itemize}\n \\item Regression: $y = X_1 + 5 \\epsilon$, where $\\epsilon \\sim \\mathcal{N}(0, 1)$.\n \\item Classification: $P(y = 1 | X) = 0.55$ if $X_1 = 1$, and $P(y = 1 | X) = 0.45$ if $X_1 = 0$.\n\\end{itemize}\nNote that this task is designed to be extremely hard: the informative feature is the binary one, and we add large noise (regression) or set the signal strength low (classification). To evaluate the results, we look at the ranking of all features based on importance scores. Ideally $X_1$ should be ranked $1^{st}$ as it is the only informative feature. Table \\ref{table:sum} shows the average ranking of feature $X_1$ across 100 repetitions. The best result in each column is marked in \\textbf{bold}. Here we also compare the effect of tree depth by constructing shallow trees (with tree depth 3) and deep trees (with tree depth 10). Since cforest does not provide a parameter for directly controlling tree depth, we instead vary the value of mincriterion. 
}\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{} & \\multicolumn{2}{c|}{Tree depth = 3} & \\multicolumn{2}{c|}{Tree depth = 10} \\\\ \\cline{2-5} \n & R & C & R & C \\\\ \\hline\nSI & 3.71 & 4.10 & 10.00 & 10.00 \\\\ \\hline\nUFI & \\textbf{1.47} & 1.39 & \\textbf{1.55} & \\textbf{1.69} \\\\ \\hline\ncforest & 1.57 & \\textbf{1.32} & 1.77 & 1.88 \\\\ \\hline\nranger & 1.54 & 1.64 & 2.46 & 1.93 \\\\ \\hline\n\\end{tabular}\n\\caption{Average importance ranking of the informative feature $X_1$. R stands for regression and C for classification. The result averages over 100 repetitions. Lower values indicate better ability to identify the informative feature. In cforest, we set mincriterion to 2.33 (the 0.99 quantile of the standard normal distribution) for shallow trees and 1.28 (the 0.9 quantile) for deep trees.}\n\\label{table:sum}\n\\end{table}\n\n\\textcolor{black}{We can see that our method UFI achieves the best results in three of the four settings; the exception is the classification case with shallow trees, where it is only slightly worse than cforest. Another interesting observation is that deeper trees make it harder to identify informative features in the presence of noisy ones, since splits deep in the tree are more likely to use noisy features. This effect is most obvious for the default split-improvement, which performs the worst, especially for deep trees: the informative feature $X_1$ is consistently ranked as the least important ($10^{th}$ place). UFI does not seem to be much affected by tree depth.}\n\n\\subsection{RNA Sequence Data}\nThe first data set examined concerns the prediction of C-to-U edited sites in plant mitochondrial RNA. This task was studied statistically in \\citet{cummings2004simple}, where the authors applied Random Forests and used the original split-improvement as the feature importance measure. Later, \\citet{strobl2007bias} demonstrated the performance of cforest on this data set. 
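For the real data sets below, categorical predictors enter the forest through dummy variables, and a feature's final score is the sum of importances over its dummies. A minimal sketch of this aggregation (synthetic stand-in data; column names and sizes are invented):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-in: two categorical nucleotide-style columns plus one
# continuous column (names "nt1", "nt2", "fe" are invented for illustration).
df = pd.DataFrame({
    "nt1": rng.choice(list("ATCG"), size=n),
    "nt2": rng.choice(list("ATCG"), size=n),
    "fe": rng.normal(size=n),
})
y = rng.integers(0, 2, size=n)

X = pd.get_dummies(df, columns=["nt1", "nt2"])   # one 0/1 column per category
rf = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0).fit(X, y)

# Sum importances over the dummies belonging to each original feature.
imp = pd.Series(rf.feature_importances_, index=X.columns)
per_feature = imp.groupby(lambda c: c.split("_")[0]).sum()
```

The same grouping works for any importance measure computed per column, including the unbiased scores proposed here.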
\n\nRNA editing is a molecular process whereby an RNA sequence is modified from the sequence corresponding to the DNA template. In the mitochondria of land plants, some cytidines are converted to uridines before translation \\citep{cummings2004simple}. \n\nWe use the \\textit{Arabidopsis thaliana} data file\\footnote{The data set can be downloaded from \\href{https:\/\/bmcbioinformatics.biomedcentral.com\/articles\/10.1186\/1471-2105-5-132}{https:\/\/bmcbioinformatics.biomedcentral.com\/articles\/10.1186\/1471-2105-5-132}.} as in \\citet{strobl2007bias}. The features are based on the nucleotides surrounding the edited\/non-edited sites and on the estimated folding energies of those regions. After removing missing values and one unused column, the data file consists of 876 rows and 45 columns:\n\\begin{itemize}\n \\item the response (binary).\n \\item 41 nucleotides at positions -20 to 20 relative to the edited site (categorical, one of A, T, C or G).\n \\item the codon position (also 4 categories).\n \\item two continuous variables based on the estimated folding energies.\n\\end{itemize} \n\nFor implementation, we create dummy variables for all categorical features and build a forest of 100 trees. The maximum tree depth for this data set is not restricted, as the number of potential predictors is large. The final importance score of each feature is the sum of importances across its corresponding dummy variables. All default parameters are used unless otherwise specified.\n\nThe results are shown in Figure \\ref{fig:ctou}. Red error bars depict one standard deviation when the experiments are repeated 100 times. From the default split-improvement (Figure \\ref{fig:si_ctou}), we can see that, apart from several apparently dominant predictors (nucleotides at positions -1 and 1, and the two continuous features \\textit{fe} and \\textit{dfe}), the importance scores for the remaining nearly 40 features are indistinguishable. 
The feature importance scores given by UFI (Figure \\ref{fig:ufi_ctou}) and cforest (Figure \\ref{fig:cf_ctou}) are very similar. Compared with SI, although all methods agree that the top three features are the nucleotides at positions -1 and 1 and the continuous feature \\textit{fe}, there are some noticeable differences. Another continuous feature, \\textit{dfe}, is ranked fourth in Figure \\ref{fig:si_ctou}, but its importance scores are much lower under UFI and cforest. The result given by ranger (Figure \\ref{fig:ranger_ctou}) is slightly different from UFI and cforest, with more features receiving importance scores larger than 0. In general, we see a large portion of predictors with feature importance close to 0 for the three improved methods, which makes subsequent tasks like feature screening easier. \n\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.495\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{SI_on_CtoU.png}\n \\caption{SI}\n \\label{fig:si_ctou}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.495\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{UFI_on_CtoU.png}\n \\caption{UFI}\n \\label{fig:ufi_ctou}\n \\end{subfigure}\n \\hfill\n\n \\begin{subfigure}[b]{0.495\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cforest_on_CtoU.png}\n \\caption{cforest}\n \\label{fig:cf_ctou}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.495\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{ranger_on_CtoU.png}\n \\caption{ranger}\n \\label{fig:ranger_ctou}\n \\end{subfigure}\n \n \\caption{Feature importance for RNA sequence data. 100 trees are built in the forest. 
Red error bars depict one standard deviation when the experiments are repeated 100 times.}\n \\label{fig:ctou}\n \n\\end{figure}\n\n\n\\subsection{Adult Data}\n\nAs a second example, we use the Adult Data Set from the UCI Machine Learning Repository\\footnote{\\href{https:\/\/archive.ics.uci.edu\/ml\/datasets\/adult}{https:\/\/archive.ics.uci.edu\/ml\/datasets\/adult}}. The task is to predict whether income exceeds \\$50K\/yr based on census data. We remove all entries with missing values and focus only on people from the United States. In total, there are 27,504 samples; Table \\ref{table:adult} describes the relevant feature information. Notice that we add a standard normal random variable, which is shown in the last row. We randomly sample 5000 entries for training. \n\n\\begin{table}[]\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\nAttribute & Description \\\\ \\hline\nage & continuous \\\\ \\hline\nworkclass & categorical (7) \\\\ \\hline\nfnlwgt & continuous \\\\ \\hline\neducation & categorical (16) \\\\ \\hline\neducation-num & continuous \\\\ \\hline\nmarital-status & categorical (7) \\\\ \\hline\noccupation & categorical (14) \\\\ \\hline\nrelationship & categorical (6) \\\\ \\hline\nrace & categorical (5) \\\\ \\hline\nsex & binary \\\\ \\hline\ncapital-gain & continuous \\\\ \\hline\ncapital-loss & continuous \\\\ \\hline\nhours-per-week & continuous \\\\ \\hline\nrandom & continuous \\\\ \\hline\n\\end{tabular}\n\\caption{Attribute description for the adult data set.}\n\\label{table:adult}\n\\end{table}\n\n\nThe results are shown in Figure \\ref{fig:adult}. UFI (\\ref{fig:ufi_adult}), cforest (\\ref{fig:cf_adult}) and ranger (\\ref{fig:ranger_adult}) display similar feature rankings, which are quite different from the original split-improvement (\\ref{fig:si_adult}). Notice that the random normal feature we added (marked in black) is actually ranked the third most important in \\ref{fig:si_adult}. 
This is not surprising, as most of the features are categorical, and even for some continuous features, a large portion of the values are actually 0 (such as \\textit{capital-gain} and \\textit{capital-loss}). For UFI, cforest and ranger, the random feature is assigned an importance score close to 0. Another feature with a large discrepancy is \\textit{fnlwgt}, which is ranked among the top three originally but is the least important for the other methods. \\textit{fnlwgt} represents final weight, the number of units in the target population that the responding unit represents. Thus it is unlikely to have strong predictive power for the response. For this reason, some analyses have deleted this predictor before fitting models\\footnote{\\href{http:\/\/scg.sdsu.edu\/dataset-adult\\_r\/}{http:\/\/scg.sdsu.edu\/dataset-adult\\_r\/}}.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{SI_on_adult_data.png}\n \\caption{SI}\n \\label{fig:si_adult}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{UFI_on_adult_data.png}\n \\caption{UFI}\n \\label{fig:ufi_adult}\n \\end{subfigure}\n \\hfill\n \n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cforest_on_adult_data.png}\n \\caption{cforest}\n \\label{fig:cf_adult}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{ranger_on_adult_data.png}\n \\caption{ranger}\n \\label{fig:ranger_adult}\n \\end{subfigure}\n \n \\caption{Feature importance for adult data. 20 trees are built in the forest. 
Red error bars depict one standard deviation when the experiments are repeated 100 times.}\n \\label{fig:adult}\n\\end{figure}\n\n\n\\subsection{Boston Housing Data}\n\n\\textcolor{black}{We also conduct analyses on a regression example using the Boston Housing Data\\footnote{\\href{https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/housing\/}{https:\/\/archive.ics.uci.edu\/ml\/machine-learning-databases\/housing\/}}, which has been widely studied in the previous literature \\citep{bollinger1981book, quinlan1993combining}. The data set contains 12 continuous features, one ordinal feature and one binary feature, and the target is the median value of owner-occupied homes in \\$1000s. We again add a random feature distributed as $\\mathcal{N}(0, 1)$.}\n\n\\begin{figure}[h]\n \\centering\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{SI_on_boston_data.png}\n \\caption{SI}\n \\label{fig:si_boston}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{UFI_on_boston_data.png}\n \\caption{UFI}\n \\label{fig:ufi_boston}\n \\end{subfigure}\n \\hfill\n \n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{cf_on_boston_data.png}\n \\caption{cforest}\n \\label{fig:cf_boston}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{ranger_on_boston_data.png}\n \\caption{ranger}\n \\label{fig:ranger_boston}\n \\end{subfigure}\n \n \\caption{Feature importance for Boston housing data. 100 trees are built in the forest. Red error bars depict one standard deviation when the experiments are repeated 100 times.}\n \\label{fig:boston}\n\\end{figure}\n\n\\textcolor{black}{All four methods agree on the two most important features: RM (average number of rooms per dwelling) and LSTAT (\\% lower status of the population). 
In SI, the random feature still appears to be more important than several other features such as INDUS (proportion of non-retail business acres per town) and RAD (index of accessibility to radial highways), though the spurious effect is much smaller than in Figure \\ref{fig:si_adult}. As expected, the importance of the random feature is close to zero in UFI. In this example, SI does not seem to provide misleading results, as most of the features are continuous and the only binary feature, CHAS (Charles River dummy variable), turns out to be unimportant. }\n\n\\subsection{Summary}\n\n\\textcolor{black}{Our empirical studies confirm that the default split-improvement method is biased towards increasing the importance of features with more potential splits. The bias is more severe in deeper trees. Compared to three other approaches, our proposed method performs the best on a difficult task of identifying the only informative feature among nine noise features. For real-world data sets, though we do not have a ground truth for feature importance scores, our method gives meaningful outputs similar to those of two state-of-the-art methods, cforest and ranger.}\n\n\\section{Discussions}\\label{dis}\n\nTree-based methods are widely employed in many applications. One of their many advantages is that these models come naturally with feature importance measures, which practitioners rely on heavily for subsequent analyses such as feature ranking or screening. It is important that these measurements are trustworthy.\n\nWe show empirically that split-improvement, a popular measure of feature importance in tree-based models, is biased towards continuous features, or categorical features with more categories. This phenomenon is akin to overfitting in training any machine learning model. We propose a simple fix to this problem and demonstrate its effectiveness both theoretically and empirically. 
Though our examples are based on Random Forests, the adjustment can be easily extended to any other tree-based model. \n\nThe original version of split-improvement is the default and only feature importance measure for Random Forests in \\textbf{scikit-learn}, and is also returned as one of the measures in the \\textbf{randomForest} library in R. Statistical analyses utilizing these packages will suffer from the bias discussed in this paper. Our method can be easily integrated into existing libraries and requires almost no additional computational burden. \\textcolor{black}{As already observed, while we have used \\textit{out-of-bag} samples as a natural source of test data, alternatives such as sample partitions -- thought of as a subsample of \\textit{out-of-bag} data for our purposes -- can be used in the context of honest trees, or a held-out test set will also suffice. The use of subsamples fits within the methods used to demonstrate the asymptotic normality of Random Forests developed in \\citet{mentch2016quantifying}. This potentially allows formal statistical tests to be developed based on the unbiased split-improvement measures proposed here.} Similar approaches have been taken in \\citet{zhou2018approximation} for designing stopping rules in approximation trees. \n\n\nHowever, feature importance itself is very difficult to define exactly, with the possible exception of linear models, where the magnitude of the coefficients serves as a simple measure of importance. There is also considerable discussion of the subtleties introduced when correlated predictors exist; see for example \\cite{strobl2008conditional, gregorutti2017correlation}. We think \\textcolor{black}{that clarifying the relationship between split-improvement and the topology of the resulting function represents an important} future research direction. \n\n\\subsection*{Acknowledgements}\nThis work was supported in part by NSF grants DMS-1712554, DEB-1353039 and TRIPODS 1740882. 
\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn environmental epidemiology, we are often interested in making inference about the relationships between spatially-referenced exposures and spatially-referenced health outcomes. Spatial structure in exposures can arise from underlying environmental processes and human activities, while health outcomes can have spatial structure derived from both the exposure of interest and other spatially-varying factors. For example, coronary heart disease is associated with socioeconomic status \\citep{DiezRoux2001}, which can have geographic structure and is associated with air pollution \\citep{Brochu2011, Jerrett2004, Mensah2005}. When not measured (or measured incompletely), these factors can cause \\emph{unmeasured spatial confounding}, which is the inability to distinguish the effect of a spatially-varying exposure from residual spatial variation in a health outcome, resulting in biased point estimates and standard errors \\citep{Paciorek2010}.\n\nEarly discussion of spatial confounding in the literature includes \\citet{Clayton1993}, who described the `confounding effect due to location' in regression models for ecological studies, which they attributed to unmeasured confounding factors. \\citet{Clayton1993} advocated for the inclusion of a spatially-correlated error term in hierarchical models for spatial data \\citep{Clayton1987, Besag1991} and claimed this would account for confounding bias but might result in conservative inference.\nSince then, the approach of adding spatially structured error terms has frequently been used in spatial models for areal data \\citep{Reich2006, Wakefield2007, Hodges2010, Hughes2013, Hanks2015, Page2017}. 
In a causal inference framework, confounding due to geography has recently been addressed using spatial propensity scores \\citep{Papadogeorgou2019, Davis2019}; however, in those settings the exposure of interest was not explicitly spatial.\n\nFor point-referenced data, \\citet{Paciorek2010} provided a rigorous discussion of spatial confounding and the importance of the spatial scales of variability in the exposure and outcome.\n\\citet{Paciorek2010} demonstrated that reductions in bias could be obtained when the scale of unconfounded variability in the exposure, which he quantified by the spatial range parameter in a Mat\\'ern covariance function, is smaller than the scale of confounded variability. \n\nOne approach in the literature for adjusting for spatial confounding at broad scales with point-referenced data is to estimate the coefficient of interest from a semi-parametric model that includes spatial splines \\citep{Paciorek2010, Chan2015}. This approach is a natural extension of time series studies, where flexible, one-dimensional basis functions can account for confounding due to unmeasured temporal variation \\citep{Burnett1991,Schwartz1994, Dominici2003a, Dominici2004, SzpiroSheppard2014}. Thin-plate regression splines (TPRS) \\citep{Wood2003} are a commonly used basis, and the amount of adjustment can be tuned using the degrees of freedom ($df$) parameter \\citep{Paciorek2010}. However, particular values of $df$ do not have a clear spatial scale or interpretation. Generalizing inference about associations between exposures and health outcomes is difficult when the extent of spatial confounding adjustment is not easily quantified.\n\n\\subsection{Air Pollution in the NIEHS Sister Study}\n\\label{sec:spatconf_motivatingexmp}\nThis work is motivated by an analysis of systolic blood pressure and fine particulate matter (PM$_{2.5}$) in the NIEHS Sister Study. 
\\citet{Chan2015} found that a 10 $\\upmu$g\/m$^3$ difference in long-term exposure to ambient fine particulate matter (PM$_{2.5}$) was associated with 1.4 mmHg (95\\% Confidence Interval [CI]: 0.6, 2.3) higher systolic blood pressure (SBP) at baseline. \nTo account for spatial confounding from unmeasured regional differences in socioeconomic and health patterns, \\citet{Chan2015} included TPRS with 10 $df$ in their model. \n\n\nHere we consider a re-analysis of this cohort using PM$_{2.5}$ exposures at grid locations, rather than at subject residences, to accommodate the methods we describe below. \nWe use predictions of the 2006 annual average ambient concentration from the universal kriging model of \\citet{Sampson2013}, made on a 25km by 25km grid across the contiguous United States.\nFor each of the 47,206 Sister Study subjects, we assign exposure based upon the closest grid cell center.\n Using the same measured confounders as \\citet{Chan2015}, but no spatial adjustment, we find that a difference of 10 $\\upmu$g\/m$^3$ in PM$_{2.5}$ is associated with 0.26 mmHg higher SBP (95\\% CI: -0.14, 0.66). However, when TPRS with 10 $df$ are added to the model then the estimated difference in SBP is 0.77 mmHg (95\\% CI: 0.14, 1.41). \n Figure~\\ref{fig:sister_tprsdf} shows the estimates for other amounts of adjustment. \nThe change in the estimates as a function of $df$\n suggests that some form of spatial confounding is present, but it is not clear from Figure~\\ref{fig:sister_tprsdf} on what scales the confounding is occurring, how much adjustment should be done, and which estimate should be reported.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.9\\textwidth]{Sister_Est_TPRS_Semipar.pdf}\n\\caption{Estimated difference in SBP (in mmHg) associated with a difference of 10 $\\upmu$g\/m$^3$ in annual average ambient PM$_{2.5}$, when adjusting for TPRS with varying values of $df$ in the outcome model. 
The square marker (\\mysquare{black}) at $df=0$ is the estimate without spatial adjustment and has error bars indicating a 95\\% confidence interval. The thick black curve (\\drawline{ultra thick}) represents estimates for different choices of $df$ and the thin black curves (\\drawline{}) represent point-wise 95\\% confidence intervals.}\n\\label{fig:sister_tprsdf}\n\\end{center}\n\\end{figure}\n\n\\subsection{Manuscript Outline}\nIn the remainder of this paper, we address the question of how to interpret and select the spatial scale of adjustment.\nIn Section~\\ref{sec:adjustment} we introduce the statistical framework and procedures for spatial confounding adjustment.\nIn Section~\\ref{sec:spatial_scales} we describe three choices of spatial basis (TPRS, a Fourier basis, and a wavelet basis) and present a method for assigning variation in these bases to a spatial distance. In Section~\\ref{sec:choosing_m} we describe approaches for selecting the amount of adjustment and compare the approaches in simulations in Section~\\ref{sec:simulations}. \nIn Section~\\ref{sec:sisterfullapp} we apply the adjustment approaches to the motivating Sister Study cohort. Section~\\ref{sec:discussion} provides a concluding discussion.\n\n\\section{Methods for Spatial Confounding Adjustment}\n\\label{sec:adjustment}\n\n\\subsection{Statistical Framework}\n\\label{sec:framework}\nWe consider a cohort study of $n$ subjects with measured health outcomes $y_i$, exposures $x_i$, and observed confounder covariates ${\\bm z}_i$. \nWe partition the spatial domain into a rectangular grid $\\mathcal{S}$ with distinct locations $\\{1, \\dots, S\\}$. \nEach subject is assigned to a location ${\\bm s}_i \\in \\{1, \\dots, S\\}$. 
We assume \nthat the subjects are distributed evenly across the locations; we discuss non-uniform sampling of subject locations in Section~\\ref{sec:pop_spatial_dist}.\n\nExtending the framework of \\cite{SzpiroSheppard2014} to a spatial setting, \nwe assume that the exposure for each individual is measured without error and takes the form\n\\begin{equation}\nx_i = x({\\bm s}_i) = g({\\bm s}_i) + \\xi({\\bm s}_i),\n\\end{equation}\nwhere $g({\\bm s}_i)$ is a smooth spatial surface and $\\xi({\\bm s}_i)$ is (fixed) residual spatial variation. We decompose the observed confounders in a similar manner as ${\\bm z}_i = \\bm w({\\bm s}_i) + \\bm\\zeta_i$, except that $\\bm\\zeta_i$ is not a function of space and is modelled as $N(0, \\sigma_\\zeta^2{\\bm I})$. The stochasticity of $\\bm\\zeta$ arises from the sampling of subjects.\n\nWe assume that the observed health outcomes $y_i$ arise from the model\n\\begin{equation}\n\\label{eq:dgm}\ny_i = \\beta_0 + \\beta x_i + {\\bm z}_i^\\ensuremath{\\mathsf{T}} \\bm\\gamma + f({\\bm s}_i) + \\epsilon_i,\n\\end{equation}\nwhere $f({\\bm s}_i)$ is an unknown, fixed spatial surface and $\\epsilon_i \\stackrel{iid}{\\sim} N(0, \\sigma_\\epsilon^2)$. The target of inference is the parameter $\\beta \\in \\mathbb{R}$, which represents the association between exposure $x_i$ and the outcome $y_i$ \\emph{conditional} on $\\bm z$ and $f$. The $f({\\bm s}_i)$ term represents unmeasured spatial variability in $y_i$, which we assume to be fixed and unknown. \n\n\nIn contrast to the approach taken by ``restricted spatial regression'', which targets the unconditional effect of exposure on outcome \\citep{Hodges2010,Hughes2013}, our parameter of interest here is the exposure-outcome association conditional on the unmeasured confounder $f$.
In observational air pollution epidemiology studies, the conditional association is targeted in order to provide information that is generalizable beyond the specific time and location of the study cohort \\citep[e.g.][]{Peng2006}.\n\nTo model the spatially-varying terms in \\eqref{eq:dgm}, we employ a set of hierarchical spatial basis functions $\\{h_1(\\cdot), h_2(\\cdot), \\dots \\}$, ordered from coarsest to finest resolution.\n For compactness of notation, let ${H}_m = \\begin{bmatrix} \\bm h_1 & \\dots & \\bm h_m\\end{bmatrix}$ denote the $n \\times m$ matrix of the first $m$ basis functions evaluated at the $n$ subject locations.\nFollowing \\citet{Dominici2004} and \\citet{SzpiroSheppard2014}, we decompose $g(\\cdot)$, $\\bm w(\\cdot)$, and $f(\\cdot)$ as:\n \\begin{align}\n\\notag g({\\bm s}_i) &= \\sum_{k=1}^{m_1} \\alpha_kh_k({\\bm s}_i) = \\bm e_i^\\ensuremath{\\mathsf{T}}{H}_{m_1}\\bm\\alpha\\\\\n \\label{eq:gwf_basis} w_j({\\bm s}_i) & = \\sum_{k=1}^{m_2} \\delta_{jk}h_k({\\bm s}_i) = \\bm e_i^\\ensuremath{\\mathsf{T}}{H}_{m_2}\\bm\\delta_j\\\\\n\\notag f({\\bm s}_i) &= \\sum_{k=1}^{m_3} \\theta_kh_k({\\bm s}_i) = \\bm e_i^\\ensuremath{\\mathsf{T}}{H}_{m_3}\\bm\\theta,\n \\end{align}\n where $\\bm e_i$ is the unit vector with 1 as the $i$th element.\nWe assume\n$m_1 > m_3$. This assumption means that there is finer scale variability in the exposure surface than in the unobserved confounding surfaces, which \\citet{Paciorek2010} identified as a necessary condition to achieve reductions in bias via adjustment.\nIn Section~\\ref{sec:spatial_scales}, we consider three different choices of $h_j$: a regression spline basis, a Fourier basis, and a wavelet basis.\n\nThe ordinary least squares estimator from the model ${\\rm E}[y_i] = {\\beta}_0 + {\\beta}_1x_i + {\\bm z}_i^\\ensuremath{\\mathsf{T}}\\bm{\\gamma}$\n will be biased due to the correlation of the observed $g({\\bm s}_i)$ and $\\bm w({\\bm s}_i)$ with the unobserved $f({\\bm s}_i)$.
In the literature, this correlation is sometimes referred to as\n \\textit{concurvity} \\citep{Hastie1990, Ramsay2003,Ramsay2003a}.\n We describe two strategies for adjusting for this correlation in order to eliminate the confounding bias.\n \n \\subsection{Adjustment in Outcome Model}\n\\label{sec:adjust_health}\nA straightforward way to adjust for the unmeasured spatial structure $f({\\bm s}_i)$ is to fit the semiparametric regression model\n\\begin{equation}\n\\label{eq:adj_in_health_model}\n{\\rm E}[y_i] = \\breve{\\beta}_0 + \\breve{\\beta}_1x_i + {\\bm z}_i^\\ensuremath{\\mathsf{T}}\\breve{\\bm\\gamma} + \\bm e_i^\\ensuremath{\\mathsf{T}}{H}_m\\breve{\\bm \\theta},\n\\end{equation}\nfor a chosen value of $m$.\nThis is a direct extension of confounding-adjustment approaches used in time series methods \\citep{Dominici2004,Peng2006} and was the approach taken by \\cite{Chan2015} and in Figure~\\ref{fig:sister_tprsdf}. For basis functions such as TPRS that are easily computed, fitting model \\eqref{eq:adj_in_health_model} in standard statistical software is straightforward. \n\nIf $m \\ge m_3$, then the ordinary least squares estimator $\\widehat{\\breve{\\beta}}_1$ from model \\eqref{eq:adj_in_health_model} will be unbiased, because the inclusion of ${H}_m$ fully adjusts for variation in its column space, which includes $f$. \nIf $m < m_3$, then $\\widehat{\\breve{\\beta}}_1$ will remain biased because of correlation between $x({\\bm s})$ and the $m_3 - m$ terms of $f({\\bm s})$ (from the decomposition in \\eqref{eq:gwf_basis}) that are not within the column space of ${H}_m$.\n \n \n\\subsection{Pre-adjusting Exposure}\n\\label{sec:preadjust}\n\nA second approach to confounding adjustment is to first `pre-adjust' the exposure, to remove variation in the exposure that is correlated with the confounding surface.
In comparison to the semi-parametric adjustment approach described above, the pre-adjustment approach does not necessarily require the explicit construction of ${H}_m$, which allows for additional choices of spatial basis.\n\\cite{SzpiroSheppard2014} outlined how pre-adjustment can be done for cohort studies with time series exposures, and here we present an extension of that approach to spatial settings. This approach can be considered a form of spatial filtering \\citep[e.g.][]{Haining1991, Tiefelsdorf2007}.\n\nWe decompose the vector of observed exposures ${\\bm x} = \\begin{bmatrix} x({\\bm s}_1) & \\cdots &x({\\bm s}_n)\\end{bmatrix}^\\ensuremath{\\mathsf{T}} $ into two components, ${\\bm x}_1$ and ${\\bm x}_2$, such that ${\\bm x}_2$ is orthogonal to ${H}_m$ for some $m$ (we discuss methods for selecting $m$ in Section~\\ref{sec:choosing_m}). \nIn settings when ${H}_m$ can be explicitly computed, this decomposition can be accomplished by taking ${\\bm x}_1$ to be the projection of ${\\bm x}$ onto ${H}_m$ ($\\bm x_1 = {H}_m({H}_m^\\ensuremath{\\mathsf{T}}{H}_m)^{-1}{H}_m^\\ensuremath{\\mathsf{T}}{\\bm x}$) and ${\\bm x}_2$ to be its complement ($\\bm x_2 = {\\bm x} - {\\bm x}_1$).\nWhen it is not practical to explicitly construct ${H}_m$, as in the Fourier and wavelet approaches we discuss in Section~\\ref{sec:spatial_scales}, we use a filtering procedure in the frequency domain to achieve this decomposition. \n\n\n\nOnce ${\\bm x}$ has been partitioned into ${\\bm x}_1$ and ${\\bm x}_2$, we fit the model\n\\begin{equation}\n\\label{eq:preadj_model}\ny_i = \\tilde{\\beta}_0 + \\tilde{\\beta}_1x_{1i} + \\tilde{\\beta}_2x_{2i}+ {\\bm z}_i^\\ensuremath{\\mathsf{T}}\\tilde{\\bm\\gamma} + \\tilde{\\epsilon}_i,\n\\end{equation}\nwhere $\\tilde\\epsilon_i = \\epsilon_i + f({\\bm s}_i)$ and $\\tilde\\beta_1, \\tilde\\beta_2 \\in \\mathbb{R}$.
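As a concrete numerical illustration of this decomposition (not the implementation used in the paper), the following sketch uses a hypothetical one-dimensional polynomial basis standing in for a spatial basis $H_m$; the surfaces and coefficient values are illustrative only. Because the hypothetical confounder lies in the column space of the basis, the coefficient on the orthogonal remainder $x_2$ recovers the true association while the unadjusted slope does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
s = rng.uniform(0, 1, n)                 # 1-D stand-in for spatial locations

# Hypothetical hierarchical basis: polynomials in s, coarse to fine.
H = np.vander(s, 4, increasing=True)     # columns 1, s, s^2, s^3
Q, _ = np.linalg.qr(H)                   # orthonormal basis for col(H)

f = 2 * s**2 - s                         # unmeasured confounder; lies in col(H)
g = rng.normal(0, 1, n)                  # fine-scale, unconfounded exposure variation
x = 1.5 * f + g                          # exposure correlated with f
y = 1.0 * x + f + rng.normal(0, 1, n)    # outcome; true beta = 1

# Pre-adjustment: split x into its projection onto col(H) and the remainder.
x1 = Q @ (Q.T @ x)
x2 = x - x1
assert np.abs(x2 @ H).max() < 1e-8       # x2 is orthogonal to every basis column

# Fit y ~ 1 + x1 + x2; the coefficient on x2 is the de-confounded estimate.
X = np.column_stack([np.ones(n), x1, x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_naive = np.polyfit(x, y, 1)[0]      # unadjusted slope, biased upward
print(round(beta_naive, 3), round(coef[2], 3))
```

Here the coefficient on $x_2$ is close to the true value of 1, up to sampling noise, while the unadjusted slope absorbs the confounded variation; if the confounder had components outside the column space of the basis, residual bias would remain.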
Because $\\sum x_1x_2 = 0$, the bias of $\\hat{\\tilde{\\beta}}_2$ is proportional to\n\\begin{equation}\n\\label{eq:beta2bias}\n\\sum x_2f\\left(\\left(\\sum x_1z\\right)^2 - \\sum x_1^2\\sum z^2\\right) + \\sum x_2z\\left(\\sum x_1^2\\sum zf - \\sum x_1z\\sum x_1f\\right),\n\\end{equation}\nwhere the summations are over $i$. If $m \\ge m_3$, then ${\\bm x}_2$ is by construction orthogonal to $\\bm f$ and so $\\sum x_2 f = 0$. If $m \\ge m_2$, then ${\\bm x}_2$ is orthogonal to $\\bm z$ and so $\\sum x_2z=0$. Under these two conditions, the bias of $\\hat{\\tilde{\\beta}}_2$ given in \\eqref{eq:beta2bias} is zero.\nIf either $m < m_2$ or $m < m_3$, then $\\hat{\\tilde{\\beta}}_2$ will be biased. Forcing $f({\\bm s}_i)$ into the error term creates correlation between the $\\tilde\\epsilon_i$, but this can be accounted for using `sandwich' standard error estimates \\citep{White1980}. \n\n\n\\subsection{Non-Uniform Spatial Distributions of People}\n\\label{sec:pop_spatial_dist}\n\nSo far we have assumed that all locations $s \\in \\mathcal{S}$ have an observed subject and that the number of subjects was the same at each location. This assumption was necessary to achieve the orthogonality of the pre-adjusted exposure (${\\bm x}_2$) and the basis function matrix $H_m$ (and thus ${\\bm x}_1$, ${\\bm z}$, and $\\bm f$). \nAdjustment in the outcome model or exposure pre-adjustment in which $H_m$ can be explicitly constructed do not require that the locations be spatially uniform. However, pre-adjustment using the filtering approaches does require that all locations be observed the same number of times.\n\nTo implement pre-adjustment in the presence of duplicate locations, first the duplicated locations are removed (and unobserved locations added) to create the vector $\\bm x'$. The filtering pre-adjustment is done on $\\bm x'$ to generate $\\bm x_1'$ and $\\bm x_2'$.
By construction $\\bm x_2'^\\ensuremath{\\mathsf{T}} \\bm h_j' = 0 $, where $\\bm h_j'$ is the $j$th basis function evaluated at all locations in $\\mathcal{S}$ once.\nThe duplicated locations are then added back in by repeating the relevant entries in ${\\bm x}_2'$ to form $\\bm x_2$ (and similarly for $\\bm x_1$). However, the duplicated locations mean that in general $\\bm x_2^\\ensuremath{\\mathsf{T}} \\bm h_j \\ne 0$. To recover the orthogonality necessary to eliminate the bias term \\eqref{eq:beta2bias}, the outcome model can use re-weighting.\n\nSuppose that subject locations are distributed according to a spatial distribution $\\Phi({\\bm s})$, which has positive support across all of $\\mathcal{S}$. \nTo recover orthogonality of ${\\bm x}_2$ and $\\bm h_j$ ($j=1, \\dots, m$), we reweight the study population by its spatial distribution. Specifically, we fit a weighted least squares (WLS) version of \\eqref{eq:preadj_model} where the observation weights are proportional to the inverse of $\\frac{d}{d {\\bm s}} \\Phi({\\bm s}) = \\phi({\\bm s})$. \nThis ensures orthogonality of ${\\bm x}_2$ and $\\bm h_j$ as $n\\to \\infty$, since\n\\begin{equation*}\n \\frac{S}{n} \\sum_{i=1}^n \\frac{1}{\\phi({\\bm s}_i)}x_2({\\bm s}_i)h_j({\\bm s}_i) \\rightarrow \\sum_{{\\bm s} \\in \\mathcal{S}} \\phi({\\bm s}) \\left(\\frac{1}{\\phi({\\bm s})} x_2'({\\bm s})h_j'({\\bm s})\\right) = \\sum_{{\\bm s} \\in \\mathcal{S}} x_2'({\\bm s}) h_j'({\\bm s}) = \\bm x_2'^\\ensuremath{\\mathsf{T}}\\bm h_j' = 0.\n\\label{eq:reweighting}\n \\end{equation*}\n\n\nThis WLS approach essentially downweights the contribution of people living in highly-populated areas and gives added weight to those who live in more sparsely populated areas. The parameter being estimated remains the same. For applications in air pollution epidemiology, the correlation between high exposures and population density means that this has the effect of weighting subjects with different exposure levels differently.
This is similar in spirit to propensity score weighting, in which groups are re-weighted by their probability of exposure in order to remove bias due to confounding \\citep{Stuart2010}.\n\n\n\n\n\\section{Interpreting Spatial Scales of Adjustment} \n\\label{sec:spatial_scales}\n\n\nHere we present specific details for three different choices of spatial basis functions: thin-plate regression splines, a Fourier basis, and a wavelet basis. We describe how to implement adjustment for each basis and how to interpret the amount of adjustment in terms of physical distance. \nWe will describe methods for selecting the amount of adjustment in Section~\\ref{sec:choosing_m}.\n\n\\subsection{Effective Bandwidth $\\hat k$}\nFor each basis, we describe how to relate the amount of confounding adjustment (i.e. $m$) to a physical spatial scale. We call this scale, denoted by $\\hat k$, the \\emph{effective bandwidth} of the adjustment. Heuristically, the effective bandwidth is the width of the smoothing kernel induced by the choice of basis function. This allows adjustment using $m$ basis functions to be interpreted as adjusting for confounding by smoothing at distance $\\hat k$. In the context of a particular application, $\\hat k$ may represent a national, regional, or metropolitan scale of variation. \n\n\\subsection{Thin-plate regression splines}\n\\label{sec:tprs}\nThin-plate regression splines (TPRS) are low-rank approximations to thin-plate smoothing splines \\citep{Wood2003}. The full-rank thin-plate splines are derived from a set of radial basis functions, and TPRS achieve computational benefits by using a truncated eigenbasis to approximate the radial basis.
TPRS are based upon the spatial locations of observed points, eliminating the need for knot-selection.\n\nFor problems of inference, unpenalized TPRS are indexed by the degrees of freedom ($df$), which controls the dimension of the basis and can be fixed to yield a set of hierarchical basis functions $h_1(s), h_2(s), \\dots, h_{df}(s)$, where higher values of $df$ correspond to adjustment at a finer scale. The \\texttt{mgcv} package in R \\citep{Wood2003} makes explicit construction of the TPRS basis straightforward, so both the semi-parametric adjustment and pre-adjustment approaches can be used with this basis.\n\nWe propose the following procedure for computing the effective bandwidth for a chosen number of TPRS basis functions. The smoothing induced by TPRS can be represented in the smoothing matrix $ S = {H}_{df}({H}_{df}^\\ensuremath{\\mathsf{T}}{H}_{df})^{-1}{H}_{df}^\\ensuremath{\\mathsf{T}}$. Unlike the frequency-based methods discussed below, the amount of smoothing, which is given by the columns of $ S$, varies at each location due to the locations of neighboring points, which are in turn affected by the geometry of the domain. We compute $\\hat k$ by finding the distance at which the ``average'' value of the smoothing matrix first crosses zero. For each observation $i$, we first compute the vector of distances $\\bm d_i$ to all other points. We then fit a loess smooth to the $i$th column of $ S$ as a function of $\\bm d_i$. The procedure is repeated across all locations, and the pointwise median of the loess smooths is computed. The distance at which this median smooth first crosses zero is $\\hat k$. A detailed description of this procedure is provided in Supplementary Material Section A.
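The zero-crossing procedure can be sketched numerically. The code below is illustrative only: a projection onto a small polynomial basis on a regular one-dimensional grid stands in for the TPRS smoothing matrix (the paper uses TPRS over two-dimensional locations), and interpolation onto a common distance grid replaces the loess smooth.

```python
import numpy as np

def effective_bandwidth(m, n=200):
    """First zero crossing of the median smoother-weight curve."""
    s = np.linspace(0, 1, n)
    H = np.vander(s, m, increasing=True)    # hypothetical hierarchical basis
    Q, _ = np.linalg.qr(H)
    S = Q @ Q.T                             # smoothing ("hat") matrix

    grid = np.linspace(0, 1, 400)
    curves = []
    for i in range(n):
        d = np.abs(s - s[i])                # distances from location i
        order = np.argsort(d)
        # Smoother weights at location i as a function of distance, put on a
        # common distance grid (a crude stand-in for the paper's loess smooth).
        curves.append(np.interp(grid, d[order], S[order, i]))
    med = np.median(curves, axis=0)         # pointwise median across locations
    below = np.nonzero(med < 0)[0]
    return grid[below[0]] if below.size else np.inf

# Richer bases smooth over shorter distances, so k-hat shrinks as m grows.
for m in (3, 5, 9):
    print(m, round(effective_bandwidth(m), 3))
```

As with TPRS on a grid, the effective bandwidth computed this way decreases as the dimension of the basis grows.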
The publicly-available R package \\texttt{spconf} implements this procedure for any provided spline basis or smoothing matrix.\n\n\n\\subsection{Fourier decomposition and Frequency-based filtering}\n\\label{sec:fourier}\nA second choice for basis $\\{h_j\\}$ is the set of sinusoidal Fourier basis functions. Without loss of generality, let $u=0, \\dots, M-1$ and $v=0, \\dots, N-1$ denote coordinates of $\\mathcal{S}$. \nThe value of the function $g(u, v)$ can be written as a linear combination of sine and cosine functions, whose coefficients are determined by the Discrete Fourier Transform (DFT) of $g(u, v)$ \\citep{Gonzalez2008}. \nSpectral methods have been widely used in spatial statistics, often as a means to approximate covariance functions \\citep[e.g.][]{Guinness2017, Royle2005,Fuentes2007}.\n \nAllowing for ties as necessary, we can use the spectral coordinates $(p, q)$, with $p = 0, \\dots, M-1$ and $q = 0, \\dots, N-1$, to order the spatial basis functions by \\emph{effective frequency}: $\\omega_{(p, q)} = \\sqrt{\\left(\\frac{p}{M}\\right)^2 + \\left(\\frac{q}{N}\\right)^2}$ \\citep{Burger2009}.\nBecause each basis function is oriented to a particular angle, there may be multiple basis functions with the same frequency $\\omega$ but different orientation. \n\nTo avoid explicit construction of all $MN$ basis functions, we apply the pre-adjustment approach described in Section~\\ref{sec:preadjust} for a selected frequency $\\omega$. We decompose ${\\bm x}$ into its projection onto $H_{\\omega}$ and corresponding complement by applying a high-pass filter in the frequency domain. The values of the filter\n are\n\\begin{equation}\nF_{\\omega}(p, q) = \\begin{cases} 1 & \\text{ if } \\omega_{(p, q)}> \\omega \\\\ 0 & \\text{ if } \\omega_{(p, q)} \\le \\omega \\end{cases}.\n\\end{equation}\nTo implement pre-adjustment, we first compute the DFT $\\mathcal{X}(p,q)$ of $\\bm x'$, which is the gridded exposure surface ${\\bm x}({\\bm s})$ with duplicated locations removed.
We then define $\\mathcal{X}_{\\omega}$ to be the element-wise product of $\\mathcal{X}$ and $F_{\\omega}$. The inverse DFT of $\\mathcal{X}_{\\omega}$ provides ${\\bm x}_2'$. Duplicated locations are assigned the same value from ${\\bm x}_2'$, yielding ${\\bm x}_2$.\n\nOne interpretation of removing variation described by frequencies $\\omega$ and lower is that it removes periodic variation with periods $1\/\\omega$ and higher. %\nThe analogue of $\\hat k$ for the Fourier basis is the effective bandwidth of the kernel smoother in the spatial domain that corresponds to the frequency filter $F_{\\omega}$. \nThis smoother is given by the inverse Fourier transform of $F_{\\omega}$, which is analytically a `sinc' function. However, edge effects from finite grids mean the inverse DFT of $F_{\\omega}$ does not exactly follow the expected analytic form. Therefore, we compute the effective bandwidth $\\hat k$ of $F_{\\omega}$ empirically in a manner similar to the approach for TPRS: Let $F'_\\omega(u, v)$ denote the inverse DFT of $F_{\\omega}$. We take $\\hat k$ to be the minimum value of $\\sqrt{u^2 + v^2}$, scaled by grid size, for which $F'_\\omega(u, v)=0$. \n\n\\subsection{Wavelet Basis and Thresholding}\n\\label{sec:wavelets}\nA third choice of spatial basis $\\{h_j\\}$ is a set of wavelets. Like Fourier basis functions, wavelets are localized in frequency, but they are also localized in space \\citep{Nason2008}. This means that wavelets can compactly describe variation at different frequencies in different areas of a spatial domain. There are many different sets of wavelets, and here we use the smooth Daubechies wavelets \\citep{Daubechies1988}.\n Wavelet basis functions are indexed by \\emph{level}, based upon successive halving of the spatial domain. Within each level $L$ (and thus, within each frequency $2^{L}$), there are multiple wavelet basis functions with different orientations and spatial positions.
The effective bandwidth that corresponds to each level is $\\hat k = 2^{-L}$.\n \n Multi-resolution approaches such as wavelets have recently been used to quantify spatial variation of air pollution exposures \\citep{Antonelli2017}. However, \\cite{Antonelli2017} only described exposures with variation at ``low'' and ``high'' frequencies, and did not attempt to use wavelets for explicit confounding adjustment, nor did they attach a specific distance to these labels. De-correlation of the exposure and outcome via wavelets was presented in an ecological context by \\citet{Carl2008}. However, they applied the thresholding to the exposure and the outcome jointly and performed regression on the wavelet coefficients, which is not practical for large grids or when other confounders are included in the model.\n \n For finite data on a discrete grid, the Discrete Wavelet Transform (DWT) maps any surface to a set of wavelet coefficients. \n Unlike the DFT, the DWT requires that the grid dimensions be powers of two. Data on a non-square grid can be embedded within a larger grid with dyadic dimension to apply the DWT. We use an implementation of the DWT available in the R package \\texttt{wavethresh} \\citep{Nason2008}.\n\nTo pre-adjust exposure using wavelets, we apply a filtering approach similar to that used for a Fourier basis. We first compute the DWT $\\mathcal{W}$ of ${\\bm x}'$.
We then threshold all coefficients at levels $\\ell = 0, \\dots, L$ to get a modified wavelet transform $\\mathcal{W}_L$.\n Finally, we apply the inverse wavelet transform to $\\mathcal{W}_L$ to get the pre-adjusted exposure ${\\bm x}_2'$.\nAlthough we use a global threshold of all wavelet coefficients up to a particular level, the extent of thresholding could be varied across the exposure domain if desired.\n\n\n\\section{Selecting the amount of adjustment}\n\\label{sec:choosing_m}\n\nA critical question for data analysis is how much adjustment to do or, equivalently, what value of $\\hat k$ or $m$ should be chosen. In both approaches presented in Section~\\ref{sec:adjustment}, the estimates of $\\beta$ are unbiased only if $m \\ge m_3$ (with an additional condition of $m \\ge m_2$ for the pre-adjustment approach) and if the basis for adjustment is correctly chosen. \nBut in most practical settings, the true values of $m_1$, $m_2$, and $m_3$ and the correct choice of basis $\\{h(\\cdot)\\}$ are unknown. \nIn this section, we describe different approaches to choosing the amount of adjustment ($m$ or $\\hat k$).\n\nIf there is specific knowledge of assumed unmeasured confounders or other external, content-area knowledge, then $m$ could be selected \\emph{a priori}. The exact value of $m_3$ is unlikely to be known, but a choice of $m$ or $\\hat k$ might be based upon a combination of the known scale of variation in the exposure (e.g. if it is predicted from a spatial model with known parameter values) and the approximate scale of the possible confounders (e.g. regional variation in socioeconomic status). This choice might also be influenced by estimates of $m_1$ and $m_2$, which could be estimated from the data.
While an \\emph{a priori} choice of $m$ may not necessarily lead to an unbiased estimate, it does provide a way to pre-specify the extent and interpretation of adjustment, without risking overfitting the model to the outcome data $\\bm y$.\n\nIn the absence of external knowledge, the amount of adjustment $m$ can be selected in a data-driven manner. Estimating $m_3$ directly is challenging, since it requires knowing the amount of variation in $y$ that is due to $\\bm x$ and $\\bm z$--exactly what we are trying to estimate. However, measures of model fit (for either \\eqref{eq:adj_in_health_model} or \\eqref{eq:preadj_model}) can be used to estimate the scale of spatial variation in $\\bm y$. Specifically, after fitting the outcome model with a range of choices for $m$, the model that minimizes the Akaike Information Criterion (AIC) \\citep{Akaike1973} or Bayesian Information Criterion (BIC) \\citep{Schwarz1978} can provide a selection of the model that best fits the data. However, selecting the model that best fits $\\bm y$ is not guaranteed to reduce error in the estimate of $\\beta$.\n\nA variation of this procedure for estimating the scale of relevant spatial variation in $\\bm y$ is to fit outcome models that include the adjustment basis but do not include the exposure. The amount of adjustment that leads to the smallest AIC or BIC can be chosen as $m$ for the primary model. Observed confounders can either be included in the model, or first projected out from $\\bm y$. Here we consider the former approach and refer to it as the ``AIC-NE'' approach for choosing $m$, where ``NE'' stands for ``No Exposure''.
The BIC alternative is named analogously.\nThe rationale of this approach is that if the unmeasured confounders have a strong impact on the values of $\\bm y$ and the association of interest ($\\beta$) is relatively weak (a relatively common occurrence in environmental epidemiology), this approach can yield a choice of $m$ without overfitting the relationship between $\\bm y$ and $\\bm x$. Because it involves fitting a model with the outcome $\\bm y$, this approach is mostly limited to adjustment bases such as TPRS that can be explicitly computed. If all locations on the grid had a single outcome value, then this approach could be used for Fourier and wavelet filtering, but that setting is unlikely to occur in practice. \n\nAn approach that targets estimation error directly is to choose $m$ to minimize the estimated MSE of $\\hat{\\beta}(m)$. Let $m'$ be some large value for which the estimator $\\hat{\\beta}(m')$ is assumed to be unbiased. Then estimate the bias of $\\hat{\\beta}(m)$ as $\\hat{\\beta}(m) - \\hat{\\beta}(m')$ and estimate ${\\rm Var}(\\hat{\\beta}(m))$ via the `sandwich' estimator \\citep{White1980} for the fitted model. Together these provide a method for selecting $m$:\n\\begin{equation}\n\\label{eq:pick_m_mse}\n\\hat{m}_{MSE} = \\argmin_{\\tilde{m}} \\widehat{\\text{MSE}}(\\hat{\\beta}(\\tilde{m})) = \n\\argmin_{\\tilde{m}} \\left\\{\\left[\\hat{\\beta}(\\tilde{m}) - \\hat{\\beta}(m')\\right]^2 + \\widehat{\\rm Var}(\\hat{\\beta}(\\tilde{m}))\\right\\}.\n\\end{equation}\nThis approach, however, has the clear drawback that $\\hat{\\beta}(m')$ may not be an unbiased estimator (see Section~\\ref{sec:overadj}).\n\nApproaches that are more \\emph{ad hoc} are also possible. One such option is to choose $m$ by selecting a point right after a ``knee'' in the plot of $\\hat{\\beta}$ against $m$ (for example, Figure~\\ref{fig:sister_tprsdf}). While visually intuitive, this approach requires a cumbersome formal definition.
We define $\\hat{m}_{knee}$ to be:\n\\begin{equation}\n\\label{eq:pick_m_knee}\n\\hat{m}_{knee} = \\argmin_{\\tilde m} \\left\\{|D^1(\\hat{\\beta}(\\tilde m))| \\Big| |D^1(\\hat{\\beta}(\\tilde m))| < |D^1(\\hat{\\beta}(\\tilde m + 1))|, \\tilde{m} \\in \\mathcal{M}\\right\\},\n\\end{equation} \nwhere $D^k(\\cdot)$ is a $k$th-order difference operator and $\\mathcal{M} = \\{m | m > \\argmax_\\omega D^2(\\hat{\\beta}(\\omega))\\}$.\nThe set $\\mathcal{M}$ restricts attention to values of $m$ beyond the largest second-order difference (an approximation of curvature) in the $\\hat{\\beta}(m)$ sequence. Within this set, the value of $m$ that gives the first minimum in the first-order differences is selected. Importantly, this approach does not account for differences in the resolution of $m$ and can be greatly impacted by noise in the finite differences.\n\n\n \n\n\\subsection{Impact of over-adjustment}\n\\label{sec:overadj}\nIf the choice of basis functions exactly matches the underlying confounding surface $f(s)$, then sufficiently rich adjustment will remove bias. However, if the choice of basis functions does not match the underlying confounding surface $f(s)$, then residual bias may remain regardless of the choice of $m$. This could lead to increases in bias, since the adjustment basis ($H_m$) then becomes a near-instrumental variable (IV) that is highly correlated with the exposure $x(s)$ but only weakly related to the outcome $y$. Adjustment for a near-IV in contexts with residual confounding can amplify bias from the residual confounders at a rate greater than any bias reduction due to confounding by the near-IV \\citep{Pearl2011}.
This suggests that the MSE approach presented above may perform quite poorly if the choice of basis functions is incorrect.\n\n\n\n\\section{Simulations}\n\\label{sec:simulations}\n\\subsection{Primary Setup}\nWe conducted a set of simulations to compare the different adjustment approaches, demonstrating both different choices of spatial basis and the different methods for selecting the scale of adjustment.\nThe data for the simulations are created at points $(u,v)$ lying on a $512 \\times 512$ grid over the unit square $[0, 1) \\times [0, 1)$. \nFor each simulation, we constructed a fixed exposure surface $x(u, v)$ and a fixed unmeasured confounder surface $f(u, v)$ on this grid. \n\nWe considered six different unmeasured confounder surfaces, $f_1, \\dots, f_6$, constructed to have ``large'' and ``fine'' scale variation for each of three choices of basis: TPRS, sinusoidal functions, and a spatial Gaussian process (GP). Table~\\ref{tab:simf} summarizes these surfaces, and mathematical detail on their definitions is provided in the Supplemental Material, Section B.1. \n\n\\begin{table}\n\\caption{Description of the different confounder surfaces in the simulations.\n\\label{tab:simf}}\n\\centering\n\\fbox{\n\\begin{tabular}{ll}\nSurface & Description\\\\\n\\hline\n$f_1$ & TPRS with 10 df\\\\ %\n$f_2$ & TPRS with 50 df\\\\\n$f_3$ & Sinusoidal up to frequency $\\sqrt{40}$\\\\ %\n$f_4$ & Sinusoidal up to frequency $\\sqrt{500}$\\\\ %\n$f_5$ & Exponential GP with range 0.5 \\\\ %\n$f_6$ & Exponential GP with range 0.15\\\\ %\n\\hline\n\\end{tabular}\n}\n\\end{table}\n\n\n\nWe constructed the exposure surface as $x(u, v) = \\theta f(u, v) + g(u, v)$, where $g(u, v)$ is a fixed realization of a Gaussian process with exponential covariance structure.\nThis structure for $x$ allows for the amount of correlation between $x$ and $f$ to be controlled, without requiring that they are generated in the same manner (e.g. 
a correlated Gaussian process, as used by \\cite{Page2017}).\n For Simulations 1 and 2, we considered two values for the range parameter in the covariance function used to generate $g(u,v)$: 0.05 and 0.50, respectively. The value of 0.05 results in smaller scale spatial variation than the candidate confounders, which is needed to eliminate bias. The value of 0.50 results in spatial variation at a coarser scale than some of the confounder surfaces, which may lead to the persistence of bias but is also a situation that could reasonably arise in practice. The parameter $\\theta$ controls the correlation between $x$ and $f$ and thus the amount of bias due to unmeasured confounding. Because the correlations between $g(u, v)$ and $f(u, v)$ differ for each choice of $f(u, v)$, we calculated $\\theta$ so that the amount of bias was equal for all confounder surfaces, which makes comparison of the results more straightforward. Plots of the surfaces $g, f_1, \\dots, f_6$ and the formula for $\\theta$ are provided in Supplemental Material Section B.2. The values of $x(u, v)$ and $f(u, v)$ were standardized to have mean zero and unit variance.\n\nFor each simulation replication, the outcome was constructed as\n$y(u,v) = \\beta x(u, v) + f(u, v) + \\epsilon(u, v),$\n where $\\epsilon(u, v) \\stackrel{iid}{\\sim} N(0, 16)$. This setup is designed to reflect a feature of the Sister Study application in that the residual variation in the outcome, even after accounting for unmeasured confounding, is large relative to the exposure variation.\n We drew a new sample of $n=2,000$ observation locations (selected from the grid of points) for each of the 1,000 replications of each simulation.\nFor Simulations 1 and 2, we set $\\beta = 1$ and chose $\\theta$ such that the uncorrected bias would be 0.2.
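The calibration of $\\theta$ can be sketched with the standard omitted-variable bias identity: for fixed surfaces, the bias of the unadjusted OLS slope is the ratio of the covariance of $x$ and $f$ to the variance of $x$. The snippet below uses hypothetical random vectors in place of the paper's fixed spatial surfaces and solves for the value of $\\theta$ that yields a target bias of 0.2 by bisection (the paper's exact formula is given in its Supplemental Material, Section B.2).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096

# Hypothetical stand-ins for the fixed surfaces f and g (the paper's are
# spatial fields on a grid; any fixed, mildly correlated vectors will do).
f = rng.normal(size=n)
f = (f - f.mean()) / f.std()             # standardized, as in the paper
g = 0.1 * f + rng.normal(size=n)         # g shares some variation with f

def unadjusted_bias(theta):
    # Bias of the OLS slope of y on x when f is omitted: cov(x, f) / var(x),
    # which equals cov(x, f) once x is standardized to unit variance.
    x = theta * f + g
    x = (x - x.mean()) / x.std()
    return np.mean((x - x.mean()) * (f - f.mean()))

# Solve unadjusted_bias(theta) = 0.2 by bisection (the bias increases with theta).
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if unadjusted_bias(mid) < 0.2 else (lo, mid)
theta = (lo + hi) / 2
print(round(theta, 4), round(unadjusted_bias(theta), 4))
```

Because the baseline correlation between $g$ and $f$ differs across confounder surfaces, this calibration yields a different $\\theta$ for each surface while holding the induced bias fixed.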
For each simulation we also conducted a variation with no effect ($\\beta = 0$) to estimate Type I error at the nominal $\\alpha=0.05$ level.\n \nWe compare estimates from four sets of models: \n (i) a model with semi-parametric adjustment via TPRS in the outcome model, and models with pre-adjustment of the exposure (ii) by TPRS (at observed locations), (iii) by a high-pass Fourier filter, and (iv) by wavelet thresholding. We present estimates for a sequence of adjustment amounts and using the selection methods from Section~\\ref{sec:choosing_m}.\n\n\\subsection{Effective Bandwidth} \n The effective bandwidths $\\hat k$ for TPRS and a high-pass filter on the unit square are shown in Figure~\\ref{fig:eff_band_unit_square}. The difference in ranges of $\\hat k$ between the two bases is clear, with the high-pass frequency filter spanning a much larger range of spatial scales than TPRS. The slight ``hiccup'' in Figure~\\ref{fig:eff_bandwidth_tprs_unit_square} at $df= 10$ is an artifact of the shape of that particular basis function (and the constraints of the gridded locations). For the high-pass filter, the spatial smoothers corresponding to filters $F_1$ and $F_2$ never cross zero, which is why points are only shown for $\\omega \\ge 3$.
For both adjustment bases, there is an approximately linear relationship between the tuning parameter ($df$ or $\\omega$) and $\\hat k$ on the log-log scale.\n\n\n\\begin{figure}\n\\begin{center}\n\\subfloat[\\label{fig:eff_bandwidth_tprs_unit_square}]{\n\\includegraphics[width=0.45\\textwidth]{Grid_TPRS_Median_Effective_Bandwidth.pdf}\n}\n\\subfloat[\\label{fig:eff_bandwidth_hpf_unit_square}]{\n\\includegraphics[width=0.45\\textwidth]{Grid_HPF_Effective_Bandwidth.pdf}\n}\n\\caption{Effective bandwidth (a) by df for TPRS and (b) by frequency for HPF on a $512 \\times 512$ grid over the unit square.}\n\\label{fig:eff_band_unit_square}\n\\end{center}\n\\end{figure}\n \n \\subsection{Simulation Results}\n The point estimates for the different choices of adjustment basis and unobserved confounder surfaces are provided in \nFigure~\\ref{fig:sim_S1_g1_results} for Simulation 1 (exposure with fine scale variation) and Figure~\\ref{fig:sim_S1_g2_results} for Simulation 2 (exposure with larger scale variation). In each figure, each panel shows the mean estimate for the estimators that automatically select $\\hat m$ and for a range of fixed values of $m$. For both simulations, the mean of the unadjusted estimator, which ignores the unmeasured confounding, is 1.2 (a bias of 0.2), by construction.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{Simulation_S1_M512_Results_g1_Paper.pdf}\n\\caption{Estimates (\\drawline{thick}) of $\\beta$ in Simulation 1 when pre-adjusting exposure with different choices of spatial basis (panel columns) and different underlying confounding surfaces (panel rows). The far left column shows the unadjusted estimate. The dashed lines (\\drawline{dashed}) and error bars indicate twice the standard error.
The true parameter value $\\beta =1$ is plotted as a dotted line (\\drawline{thick, blue,densely dotted}).}\n\\label{fig:sim_S1_g1_results}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{Simulation_S1_M512_Results_g2_Paper.pdf}\n\\caption{Estimates (\\drawline{thick}) of $\\beta$ in Simulation 2 when pre-adjusting exposure with different choices of spatial basis (panel columns) and different underlying confounding surfaces (panel rows). The far left column shows the unadjusted estimate. The dashed lines (\\drawline{dashed}) and error bars indicate twice the standard error. The true parameter value $\\beta =1$ is plotted as a dotted line (\\drawline{thick, blue,densely dotted}).}\n\\label{fig:sim_S1_g2_results}\n\\end{center}\n\\end{figure}\n\n\n \\subsubsection{Results for fixed $m$}\n \nWhen the adjustment approach matches the basis used to generate the confounder, the bias can be removed after sufficient adjustment. For Simulation 1 when confounding is due to $f_1$ and $f_2$, this occurs when there is TPRS adjustment with at least $df=10$ and $df=50$, respectively. In addition to leading to low bias (top-left panels of Figure~\\ref{fig:sim_S1_g1_results}), this leads to low MSE that matches or beats all other estimators (Supplemental Materials Table~1), and nearly correct coverage rates (94\\% for a nominal 95\\% confidence interval [CI]; see Supplemental Materials Table~2).\nFor $f_3$ and $f_4$, complete adjustment occurs with a high-pass filter with cutoffs at frequencies of 7 and 23, respectively.
However, the variance of the estimates from the Fourier filtering approach increases greatly with the amount of adjustment, so the MSE remains relatively large despite the lack of bias (Supplemental Materials Table~1).\n\nWhen the adjustment approach does not match the basis generating the confounder, most bias can still be eliminated in some cases, but in other cases bias can persist. \nWhen the unmeasured confounder has large scale variation ($f_1$, $f_3$, and $f_5$), all approaches are able to eliminate most bias after sufficient adjustment.\nHowever, for the confounder surfaces that have small scale variation, we see the persistence of bias regardless of the amount of adjustment ($f_6$) and bias amplification with increasing adjustment ($f_4$). This bias amplification occurs even when the correct adjustment basis is chosen, which can be seen by the increase in bias for confounder surface $f_4$ when adjusting with Fourier filtering up to $\\omega = 22$.\n\nIn general, TPRS can successfully adjust for confounding even when the unobserved confounder is generated from a different basis. For example, when $f=f_3$, adjustment with TPRS is able to remove the confounding bias for $df>100$. However, when $f=f_4$, the bias is not eliminated even for very high choices of $df$ relative to the sample size of $n=2,000$. This difference is due to the smaller range of effective bandwidths of TPRS compared to those of the high-pass filter approach (Figure~\\ref{fig:eff_band_unit_square}). By construction, variation in $f_3$ and $f_4$ extends up to $\\omega =\\sqrt{40} \\approx 6.3$ and $\\omega = \\sqrt{500} \\approx 22.4$, respectively. Using the relationship in Figure~\\ref{fig:eff_bandwidth_hpf_unit_square}, this corresponds to values of $\\hat{k}$ of approximately 0.12 and 0.03, respectively. A value of $df = 85$ corresponds to approximately $\\hat k = 0.12$, but there is no value of $df$ for which TPRS can achieve $\\hat k = 0.03$.
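The high-pass Fourier pre-adjustment compared here can be sketched as follows. This is a simplified illustration on a periodic square grid; the radial frequency cutoff is our assumption for concreteness, not necessarily the paper's exact filter definition.

```python
import numpy as np

def high_pass(field, omega):
    """Zero all 2-D Fourier modes with radial frequency <= omega
    (in cycles per unit-square side) and return the filtered field."""
    n = field.shape[0]
    coefs = np.fft.fft2(field)
    freq = np.fft.fftfreq(n, d=1.0 / n)  # integer frequencies for a unit square
    kx, ky = np.meshgrid(freq, freq, indexing="ij")
    coefs[np.hypot(kx, ky) <= omega] = 0.0
    return np.fft.ifft2(coefs).real

# Example: a pure frequency-3 wave is removed entirely by a cutoff of
# omega >= 3 but passes through (apart from its mean) a lower cutoff.
n = 64
u = np.arange(n) / n
wave = np.sin(2 * np.pi * 3 * u)[:, None] * np.ones((1, n))
residual = high_pass(wave, omega=5)  # everything filtered out
kept = high_pass(wave, omega=2)      # wave retained
```

Regressing the outcome on `high_pass(x_grid, omega)` for increasing `omega` then traces out estimates across the sequence of adjustment scales, as in the figures.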
\n\n\nFor Simulation 2, in which the exposure has larger scale spatial variation, we see similar trends to most of the Simulation 1 results. However, we do see greater effects of bias amplification, which now appears when the confounder is $f_2$ as well as when the confounder is $f_4$ (Figure~\\ref{fig:sim_S1_g2_results}). Unlike in Simulation 1, the bias induced by $f_6$ is not reduced at all.\n\n\nIn both sets of simulations, empirical Type I error when $\\beta = 0$ followed the same patterns as the coverage rates when $\\beta = 1$. Specifically, the error rates were at or close to 0.05 when the correct amount of adjustment was done using the correct basis (Supplemental Materials Table~3).\n\n\n\n\n\n\n \\subsubsection{Results when selecting $m$ automatically}\n\nThe estimators that automatically select $\\hat m$ using information criteria perform generally well. For adjustment using TPRS in the outcome model in Simulation 1, the model selected by minimizing AIC from the outcome model without exposure (AIC-NE) performs the best in almost all settings. For confounders other than $f_4$, a large amount of adjustment is selected, which leads to reduced bias (Figure~\\ref{fig:sim_S1_g1_results}). The MSE of the estimator with adjustment selected by AIC-NE is smaller than the MSE for estimators using fixed values of $m$ (Supplemental Materials Table~1). For some surfaces, the amount of adjustment selected by BIC-NE yielded slightly smaller MSE, but coverage rates were always better for selection by AIC-NE (Supplemental Materials Table~2). \nIn Simulation 2, selection using BIC-NE performs best in bias (Figure~\\ref{fig:sim_S1_g2_results}) and MSE (Supplemental Materials Table~4) for all settings except when the confounder is $f_2$. This reflects the benefit of greater penalization for model complexity (which leads to less adjustment) in settings with extensive bias amplification. 
When the confounder is $f_2$, selection using BIC from the full outcome model performs the best. However, coverage rates are slightly better using AIC-NE for selection in most cases.\n\nWhen pre-adjusting with TPRS at observed locations in Simulation 1, the amount of adjustment selected by minimizing AIC-NE was also better in most settings. Again, while MSE was slightly lower for the estimator with the amount of adjustment selected by BIC-NE, coverage rates were always better for selection by AIC-NE. Performance in all settings was similar to using adjustment with TPRS in the outcome model.\n In Simulation 2, minimizing BIC-NE does better in most settings, having lower variance since it tends to select a smaller model. \n\nFor adjustment using Fourier or wavelet filtering, the minimally-adjusted model is almost always selected when minimizing AIC and BIC since the number of parameters estimated for filtering increases exponentially. This results in lower MSE and better coverage than the unadjusted models, but poorer performance than adjusting with TPRS (Supplemental Materials Tables~1 through 5). The exception is in Simulation 2 for confounders $f_2$ and $f_4$, when the models selected by AIC and BIC do much worse than the unadjusted model.\n\nSelecting the amount of adjustment by the estimated-MSE criterion (equation \\eqref{eq:pick_m_mse}) reduced most of the bias in Simulation 1. However, it led to standard errors that were much larger than those of the other approaches, as can be seen in Figure~\\ref{fig:sim_S1_g1_results}. For Fourier and wavelet filtering, the standard errors were so large that a single estimate provided little information. The corresponding MSE was much larger than that of all other estimators (Supplemental Materials Table~1).
In Simulation 2, the bias amplification led to estimates that were as bad as or worse than in Simulation 1.\nThe \\emph{ad hoc} ``knee'' based approach (equation \\eqref{eq:pick_m_knee}) performed similarly poorly to the estimated-MSE approach, although it had better coverage.\n\n\\subsection{Additional Simulations}\nWe conducted additional simulations that included measured confounders $\\bm z$; the results were not substantively different from Simulations 1 and 2, so they are not shown here.\nWe also conducted simulations in which the number of distinct locations was smaller than the number of subjects, leading to repeated locations. However, little difference was observed between those results and the primary results presented here. Approaches to reweighting had some benefit in terms of reducing variance in settings with extreme bias (small variance in $\\epsilon$ and $\\theta$ chosen for large relative bias), but no practical impact on the amount of bias.\n\n\\subsection{Simulation Conclusions}\n\\label{sec:sim_conclusions}\nBased on the results of Simulations 1 and 2, the best overall approach to reducing confounding bias is adjustment with TPRS at observed locations, either in the outcome model or using exposure pre-adjustment. The best approach for selecting the amount of adjustment is to minimize AIC-NE or BIC-NE (AIC or BIC from an outcome model without the exposure). In cases where bias amplification may be a concern, due to either only large scale variation in the exposure surface or very fine scale confounding, preference should be given to BIC-NE over AIC-NE.
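The AIC-NE/BIC-NE rule, fitting the outcome model without the exposure for each candidate amount of adjustment and minimizing the information criterion, can be sketched as follows. A generic orthonormal basis stands in for TPRS here, and the Gaussian-likelihood AIC/BIC formulas are our assumption for illustration.

```python
import numpy as np

def select_m_ne(y, basis, m_grid):
    """Choose the number of spatial basis columns by minimizing AIC/BIC
    of the outcome model WITHOUT the exposure (the AIC-NE / BIC-NE rule).
    Columns of `basis` are assumed ordered from coarse to fine."""
    n = len(y)
    aic, bic = {}, {}
    for m in m_grid:
        Z = np.column_stack([np.ones(n), basis[:, :m]])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = np.sum((y - Z @ coef) ** 2)
        p = Z.shape[1]
        aic[m] = n * np.log(rss / n) + 2 * p
        bic[m] = n * np.log(rss / n) + p * np.log(n)
    return min(aic, key=aic.get), min(bic, key=bic.get)

# Toy check: spatial signal spanned by the first 3 of 10 orthonormal
# basis columns, plus noise.
rng = np.random.default_rng(1)
n = 500
basis, _ = np.linalg.qr(rng.standard_normal((n, 10)))
y = 5.0 * basis[:, :3].sum(axis=1) + rng.normal(0.0, 0.1, n)
m_aic, m_bic = select_m_ne(y, basis, range(1, 11))
```

For a nested sequence of fits, the heavier BIC penalty guarantees `m_bic <= m_aic`, matching the observation above that BIC-NE selects less adjustment than AIC-NE.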
\n\n\n\n\\section{Sister Study Application}\n\\label{sec:sisterfullapp}\nWe now compare these different spatial confounding adjustment approaches in the Sister Study example of the association between SBP and PM$_{2.5}$.\nThe estimates from performing exposure pre-adjustment using TPRS across all observed grid locations, with each location repeated for the number of subjects at the same location, are provided \n in Figure~\\ref{fig:sister_obsgrid}.\nThese results are similar to those in Figure~\\ref{fig:sister_tprsdf}, as the adjustment basis is the same and the pre-adjustment that leads to Figure~\\ref{fig:sister_obsgrid} repeats locations as needed. Based upon the simulations (Section~\\ref{sec:sim_conclusions}), the preferred method for selecting the appropriate amount of adjustment is to minimize AIC-NE or BIC-NE. Because the exposure surface in this application is quite coarse (25 km grid), we use BIC-NE for selecting our primary result to reduce the impact of bias amplification from over-adjustment. This choice results in adjustment using $m=3$ TPRS basis functions. When adjustment is done in the health model, the estimated association is a difference of 0.96 mmHg (95\\% CI: 0.50, 1.42) in SBP for each 10 $\\mu g\/m^3$ difference in PM$_{2.5}$ (Table~\\ref{tab:sister_tprs_res}). When the exposure is pre-adjusted, the estimated association is a difference of 0.92 mmHg (95\\% CI: 0.46, 1.41) in SBP for each 10 $\\mu g\/m^3$ difference in PM$_{2.5}$.
\n\n\n\\begin{figure}[t]\n\\begin{center}\n\n\\subfloat[\\label{fig:sister_obsgrid}]{\n\\includegraphics[width=0.47\\textwidth]{Sister_Est_TPRS_PW_obsgrid.pdf}\n}\n\\subfloat[\\label{fig:sister_obs1grid}]{\n\\includegraphics[width=0.47\\textwidth]{Sister_Est_TPRS_PW_obs1grid.pdf}\n}\\\\\n\\subfloat[\\label{fig:sister_tprs_grid}]{\n\\includegraphics[width=0.47\\textwidth]{Sister_Est_TPRS_PW_grid.pdf}\n}\n\\caption{Estimates (\\drawline{ultra thick}) and pointwise confidence intervals (\\drawline{black}) of the association between SBP and PM$_{2.5}$ for different amounts of confounding adjustment using TPRS. The horizontal line (\\drawline{blue, dashed}) is at zero.}\\label{fig:sister_tprs}\n\\end{center}\n\\end{figure}\n\n\n \\begin{table}\n\\caption{\\label{tab:sister_tprs_res}Point estimates and 95\\% confidence intervals of the difference in SBP (in mmHg) associated with a difference of 10 $\\upmu$g\/m$^3$ in PM$_{2.5}$ exposure in the Sister Study, corresponding to different scales of adjustment using TPRS.}\n\\centering\n\\fbox{\n\\begin{tabular}{l c c c c cc}\n& \\multicolumn{2}{c}{Outcome Model} & \\multicolumn{2}{c}{Pre-Adjustment} \\\\\n& \\multicolumn{2}{c}{Adjustment} &\\multicolumn{2}{c}{Observed Locations} \\\\\n& $m$ & $\\hat \\beta$ (95\\% CI) & $m$ & $\\hat \\beta$ (95\\% CI) \\\\\n \\hline\n BIC-NE & 3 & $0.96$ ($0.50$, 1.42) & 3&0.92 (0.46, 1.41) \\\\\n AIC-NE & 401 & $-0.49$ ($-1.44$, 0.45) & 401 & $-0.43$ ($-1.38$, 0.52) \\\\\n BIC & 3 & $0.96$ ($0.50$, 1.42) & 3 &0.92 (0.46, 1.41) \\\\\n AIC & 401 & $-0.49$ ($-1.44$, 0.45) & 4 &0.95 (0.49, 1.41) \\\\\n $\\hat k = $ 1,000 km & 12 & 1.05 (0.37, 1.73) & 12 & 0.87 (0.20, 1.55) \\\\\nMSE & 352 & $-0.59$ ($-1.52$, 0.33) & 113 & $-0.37$ ($-1.18$, 0.44) \\\\\nKnee & 11& $1.10$ (0.43, 1.78) & 11 & $0.93$ ($0.26$, 1.60) \\\\\n\\hline\n\\hline\n& \\multicolumn{2}{c}{Pre-Adjustment} & \\multicolumn{2}{c}{Pre-Adjustment} \\\\\n& \\multicolumn{2}{c}{Observed Locations (1 time each) } & \\multicolumn{2}{c}{All 
Locations (1 time each)}\\\\\n& $m$ & $\\hat \\beta$ (95\\% CI) & $m$ & $\\hat \\beta$ (95\\% CI) \\\\\n \\hline\nAIC & 4 & 0.88 (0.43, 1.33) & 5 & 1.34 (0.81, 1.86)\\\\\nBIC & 3 & 0.80 (0.35, 1.26) & 4 & 0.81 (0.36, 1.26) \\\\\n$\\hat k = $1,000 km & 12 & 1.32 (0.68, 1.96) & 12 & 1.34 (0.73, 1.95) \\\\\nMSE & 268 &0.23 ($-0.58$, 1.03) & 3 & 0.62 (0.17, 1.07) \\\\\nKnee & 5 & 1.41 (0.86, 1.96) & 5 & 1.34 (0.81, 1.86) \\\\\n\\end{tabular}}\n\\end{table}\n\n\nFor comparison, Table~\\ref{tab:sister_tprs_res} also includes the results for the other approaches to selecting $m$. Using AIC-NE or AIC results in a large amount of adjustment being selected ($m=401$). This appears to be extensive overfitting of the model; the increases in the value of the log-likelihood as $m$ increases are small relative to the large sample size ($n=47,206$). The MSE approach also chooses a large amount of adjustment: $m=352$ for outcome model adjustment and $m=113$ for exposure pre-adjustment. All of these approaches with large adjustment result in negative point estimates due to the downward trend observed in Figures~\\ref{fig:sister_tprsdf} and \\ref{fig:sister_obsgrid}. This downward trend is likely due to bias amplification from residual confounding, since PM$_{2.5}$ only explains a small fraction of the variation in SBP and the measured confounders, while numerous, likely cannot capture all of the confounding relationships. Furthermore, the coarse spatial scale of the exposure means that variation due to unmeasured confounders is likely finer-scale than the exposure.\n\n\n\nResults from two alternative approaches to exposure pre-adjustment are presented in Figures~\\ref{fig:sister_obs1grid} and \\ref{fig:sister_tprs_grid}. These figures correspond, respectively, to pre-adjustment at observed locations, with duplicated locations removed, and pre-adjustment over all locations in the domain, with each location included once. 
\nThe point estimates corresponding to the different selection methods for these adjustment approaches are provided in the bottom half of Table~\\ref{tab:sister_tprs_res}. The amount of adjustment selected by AIC and BIC (using the full outcome model) is small, ranging from $m=3$ to $m=5$. These choices of adjustment yield point estimates ($0.80$ to $1.34$) similar to the main results from the models that adjust at all observed locations ($0.96$ and $0.92$).\nThe difference in estimates between Figures~\\ref{fig:sister_obs1grid} and \\ref{fig:sister_tprs_grid} is relatively small, suggesting that the restriction of the pre-adjustment to the observed locations does not have a large impact on the results. However, there is a notable qualitative difference between these results and those that included duplicate locations for the adjustment (Figures~\\ref{fig:sister_tprsdf} and \\ref{fig:sister_obsgrid}). While all four approaches show a negative trend in the point estimates for large values of $m$, the approaches that adjust using observed locations including duplicates decrease at smaller values of $m$ and yield negative point estimates. \nWe explored using reweighting to correct for the impact of the duplicated locations in the pre-adjustment approaches and obtained results that were either qualitatively similar to the results without reweighting or were highly unstable (see Supplemental Materials Section C.2). The magnitude of the difference in the point estimates from these alternative approaches using TPRS (including or excluding duplicate locations) suggests that residual confounding is likely occurring in this setting.\n\nEstimates for a fixed choice of $\\hat k = $ 1,000 km are also provided in Table~\\ref{tab:sister_tprs_res}. 
This corresponds to adjustment using $12$ degrees of freedom (Supplemental Materials Section C.1).\n A bandwidth of this size smooths large scale variation across the contiguous United States (which is approximately 4,500 km by 2,900 km). Because there are well-established large-scale trends in health outcomes across the United States \\citep{Mensah2005}, this value was chosen so that the analysis accounts for large-scale trends in systolic blood pressure. \n\n \nBased on the simulation results and the presence of duplicated observed locations, we recommend adjustment using TPRS as described above instead of using the Fourier or wavelet approaches. However, we present results for those approaches here for illustration.\nTo apply the pre-adjustment approaches with Fourier and wavelet basis functions, we embed the gridded locations within a larger square grid. \nBecause the predictions of 2006 annual average PM$_{2.5}$ of \\citet{Sampson2013} are only defined over the contiguous United States, \nthe added points are assigned an exposure concentration of zero. This results in a grid of size 184 $\\times$ 184 for the Fourier approach and 256 $\\times $ 256 for the wavelet approach. We see from Figures~\\ref{fig:sister_hpf} and \\ref{fig:sister_wave} that the point estimates are relatively stable for all amounts of adjustment. This is probably due in part to the sinusoidal and wavelet basis functions not being representative of the spatial structure of the real-world unmeasured confounders, so increased adjustment removes little variation from the outcome or exposure.
Additionally, because these adjustment methods rely on each location being present only a single time, we expect trends similar to Figures~\\ref{fig:sister_obs1grid} and \\ref{fig:sister_tprs_grid}.\n\n\n\n\\begin{figure}[t]\n\\begin{center}\n\\subfloat[\\label{fig:sister_hpf}]{\n\\includegraphics[width=0.47\\textwidth]{Sister_Est_HPF_PW.pdf}\n}\n\\subfloat[\\label{fig:sister_wave}]{\n\\includegraphics[width=0.47\\textwidth]{Sister_Est_Wave_PW.pdf}\n}\n\\caption{Estimates (\\drawline{ultra thick}) and pointwise confidence intervals (\\drawline{black}) of the association between SBP and PM$_{2.5}$ for different amounts of confounding adjustment using Fourier and wavelet filtering. The horizontal line (\\drawline{blue, dashed}) is at zero.}\n\\label{fig:sister_all}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\n\\section{Discussion}\n\\label{sec:discussion}\nWe have examined different approaches to adjusting for unmeasured spatial confounding and to quantifying and selecting the scale of adjustment. These approaches are motivated as extensions of time series methods for temporal confounding \\citep{Peng2006,SzpiroSheppard2014}. \nWe presented a method for comparing the spatial scales across choices of basis using the effective bandwidth $\\hat k$. We showed examples using TPRS, a Fourier basis, and wavelets, each of which indexes variability in different forms and on different scales.\nWe showed that adjustment with TPRS is limited to a smaller range of spatial scales than the Fourier approach. Within its smaller bandwidth range, however, TPRS offers more flexibility; when the true variation lies beyond the scales TPRS can reach, TPRS fails to remove confounding bias.\n\nFor a particular application, a choice of the amount of adjustment must be made.
We identified selecting the amount of adjustment using AIC-NE and BIC-NE as the preferred approaches for most settings.\nWhen there is substantive knowledge available about the scales of variation involved, $\\hat k$ can be chosen \\emph{a priori}.\n\\emph{Ad hoc} approaches such as the ``knee'' and estimated MSE methods performed poorly.\n\nPre-adjusting the exposure by Fourier filtering or wavelet thresholding is attractive due to the orthogonality of those bases. However, they are both severely limited by the requirement of a square grid, assumptions of periodicity, and the need to reweight the population. For wavelets, the thresholding at dyadic levels also leads to limited available intervals for adjustment. In settings where there is \\emph{a priori} knowledge about confounding relationships, however, non-uniform thresholding of the wavelet coefficients could add flexibility. \nFor these reasons, the more flexible TPRS are likely to be preferred over the Fourier or wavelet basis in settings where TPRS can represent variation at the scale of interest, such as the example study of PM$_{2.5}$ in the Sister Study.\nThe Fourier basis could be chosen when very fine-scale adjustment is needed or there is external evidence to support periodic variation in the confounders.\n\nOne of the key results from our simulations is that the addition of spatial basis functions, or an equivalent pre-adjustment procedure, does not necessarily reduce bias in the point estimate. In fact, it can lead to substantial bias amplification. \nThis is an important contrast to time series settings, in which more aggressive adjustment is recommended to reduce bias from unmeasured confounding \\citep{Peng2006}.
Exposures in time series settings are typically measured directly, which means that there is substantial residual variation in the exposure at each time point, whereas the use of an exposure prediction model in a spatial context means that spatial exposures are often quite smooth.\nThe potential for bias amplification means that using changes in the point estimates is often not a good approach to assessing the extent of confounding bias.\n\nWe have presented results here in the context of a linear health model. However, these approaches to spatial confounding adjustment can be applied in generalized linear model settings. The patterns of bias reduction (or increase) for a sequence of adjustment values $m$ will differ by context and choice of link function, but the connection between adjustment basis, spatial scale, and overall interpretation remains the same as in the linear case. For example, \\cite{Keet2018} used TPRS to adjust for large-scale confounding across the contiguous United States in an analysis of particulate matter and asthma-related outcomes. Automated selection could be done using extensions of information criteria such as QIC \\citep{Pan2001}.\n\nIn summary, we have presented methods for describing and selecting the spatial scale of spatial confounding adjustment using different choices of basis. \nThese methods can be used to more accurately describe the extent of spatial confounding adjustment in future studies with spatial exposures.\n\n\n\\section*{Acknowledgements}\nThis work was supported by grants T32ES015459 and R21ES024894 from the National Institute for Environmental Health Sciences (NIEHS). The Sister Study was supported in part by the Intramural Research Program of the NIH, NIEHS (Z01ES044005). This work was also supported in part by the US Environmental Protection Agency (EPA) through awards RD83479601 and RD835871. This work has not been formally reviewed by the EPA.
The views expressed in this document are solely those of the authors and do not necessarily reflect those of the Agency. EPA does not endorse any products or commercial services mentioned in this publication.\n\n\n\\bibliographystyle{chicago}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\@startsection {section}{1}{\\z@\n {-3.5ex \\@plus -1ex \\@minus -.2ex\n {2.3ex \\@plus.2ex\n {\\normalfont\\large\\bfseries}}\n\\renewcommand\\subsection{\\@startsection{subsection}{2}{\\z@\n {-3.25ex\\@plus -1ex \\@minus -.2ex\n {1.5ex \\@plus .2ex\n {\\normalfont\\bfseries}}\n\n\\parskip 6 pt\n\n \\marginparwidth 0pt\n \\oddsidemargin -0.3cm\n \\evensidemargin -0.3cm\n \\marginparsep 0pt\n \\topmargin -0.4in\n \\textwidth 7.1in\n \\textheight 9.0 in\n\n\\newcommand{\\be}{\\begin{equation}}\n\\newcommand{\\ee}{\\end{equation}}\n\\newcommand{\\bea}{\\begin{eqnarray}}\n\\newcommand{\\eea}{\\end{eqnarray}}\n\\newcommand{\\bse}{\\begin{subequations}}\n\\newcommand{\\ese}{\\end{subequations}}\n\\newcommand{\\beqa}{\\begin{eqnarray}}\n\\newcommand{\\eeqa}{\\end{eqnarray}}\n\\newcommand{\\beqar}{\\begin{eqnarray*}}\n\\newcommand{\\eeqar}{\\end{eqnarray*}}\n\\newcommand{\\bi}{\\begin{itemize}}\n\\newcommand{\\ei}{\\end{itemize}}\n\\newcommand{\\bn}{\\begin{enumerate}}\n\\newcommand{\\en}{\\end{enumerate}}\n\\newcommand{\\labell}[1]{\\label{#1}\\qquad_{#1}}\n\\newcommand{\\fixme}[1]{\\textbf{FIXME: }$\\langle$\\textit{#1}$\\rangle$}\n\\newcommand{\\note}[1]{\\textbf{NOTE: }$\\langle$\\textit{#1}$\\rangle$}\n\\newcommand{\\ba}{\\begin{array}}\n\\newcommand{\\ea}{\\end{array}}\n\\newcommand{\\bc}{\\begin{center}}\n\\newcommand{\\ec}{\\end{center}}\n\\newcommand{\\nnr}{\\nonumber \\\\}\n\\newcommand{\\nn}{\\nonumber}\n\n\\newcommand{\\cf}{{\\em cf.}\\ }\n\\newcommand{\\ie}{{\\em i.e.}\\ }\n\\newcommand{\\eg}{{\\em e.g.} }\n\\newcommand{\\viz}{{\\em viz.}\\ }\n\\newcommand{\\nb}{{\\em N.B.}\\ }\n\\newcommand{\\etal}{{\\em et al.}\\ 
}\n\n\\newcommand{\\tcb}{\\textcolor{blue}}\n\\newcommand{\\tcr}{\\textcolor{red}}\n\\definecolor{darkgreen}{rgb}{0,0.3,0}\n\\definecolor{darkblue}{rgb}{0,0,0.3}\n\\definecolor{darkred}{rgb}{0.7,0,0}\n\\definecolor{VioletRed4}{rgb}{0.55,0.13,0.32}\n\\definecolor{VioletRed}{rgb}{0.82,0.13,0.56}\n\\definecolor{VioletRed2}{rgb}{0.93,0.23,0.55}\n\\newcommand{\\tcdr}{\\textcolor{darkred}}\n\\newcommand{\\tcdg}{\\textcolor{darkgreen}}\n\\newcommand{\\tcdb}{\\textcolor{darkblue}}\n\n\n\n\\makeatother\n\n\\begin{document}\n\n\\newcommand{\\email}[1]{\\footnote{\\href{mailto:#1}{#1}}}\n\n\n\n\\title{\\bf\\Large{One-loop Photon's Effective Action in the Noncommutative Scalar QED$_{3}$}}\n\n\\author{\\bf{M.~Ghasemkhani}\\email{ghasemkhani@ipm.ir} $^{a}$, \\bf{R.~Bufalo}\\email{rodrigo.bufalo@ufla.br} $^{b}$, \\bf{V.~Rahmanpour}\\email{v.rahmanpour@mail.sbu.ac.ir} $^{a}$ and M.~Alipour\\email{moj.alipour@yahoo.com} $^{a}$ \\\\\\\\\n\\textit{\\small$^a$ Department of Physics, Shahid Beheshti University, G.C., Evin, Tehran 19839, Iran}\\\\\n\\textit{\\small $^b$ Departamento de F\\'isica, Universidade Federal de Lavras,}\\\\\n\\textit{\\small Caixa Postal 3037, 37200-000 Lavras, MG, Brazil}\\\\\n}\n\\maketitle\n\n\\begin{abstract}\nIn this paper, we consider the evaluation of the effective action for photons coupled to charged scalar fields in the framework of a $(2+1)$-dimensional noncommutative spacetime.\nIn order to determine the noncommutative Maxwell Lagrangian density, we follow a perturbative approach, by integrating out the charged scalar fields, to compute the respective graphs for the vev's $\\left\\langle AA \\right\\rangle$, $\\left\\langle AAA \\right\\rangle$ and $\\left\\langle AAAA \\right\\rangle$.\nSurprisingly, it is shown that these contributions are planar and that, in the highly noncommutative limit, correspond to the Maxwell effective action and its higher-derivative corrections.\nIt is explicitly verified that the one-loop effective action is gauge 
invariant, as well as under discrete symmetries: parity, time reversal, and charge conjugation.\nMoreover, a comparison of the main results with the noncommutative QED$_{3}$ is established.\nIn particular, the main difference is the absence of parity violating terms in the photon's effective action coming from integrating out the charged scalar fields.\n\n\\end{abstract}\n\n\n\n\\setcounter{footnote}{0}\n\\renewcommand{\\baselinestretch}{1.05} \n\n\\newpage\n\\tableofcontents\n\\section{Introduction}\n\\label{sec1}\n\nIn recent years a great amount of attention has been paid to the analysis and calculation of covariant effective actions for different types of quantum fields, exploring the diversity of new interactions that mainly depend on the spin of the fields involved as well as the spacetime dimensionality \\cite{Bonora:2016otz, Bonora:2017ykb, Quevillon:2018mfl}.\nOne may say that the canonical example of a complete analysis is the Euler-Heisenberg effective action \\cite{Heisenberg:1935qt}, where quantum effects from QED are responsible for inducing nonlinear interactions among photons.\nMoreover, the effective action framework has served as an important tool to explore different points of view on quantum gravity, where the Einstein-Hilbert action is augmented by higher-order terms in the metric and\/or torsion fields \\cite{Buchbinder:1992rb,Masud}.\n\nNaturally, since the framework of effective action is a powerful tool, there is a great expectation that this approach can be used to make contact with modern phenomenology of physics beyond the standard model.\nThe main idea behind this formulation is that at energies below some cutoff scale $\\mu$,\\footnote{That may signal symmetry violation, for instance Lorentz symmetry violation.} all the effects of the massive degrees of freedom above $\\mu$ can be encoded as new interactions among the fields remaining active below $\\mu$.\nThe effective action approach has been extensively used in the study
of Lorentz violating field theories, where the energy scale $\\mu$ is related to the Planck energy scale $E_{\\rm Pl}$ (or length $\\ell_{\\rm Pl}$) at which our notion of smooth geometry is expected to break down \\cite{ref53,Bluhm:2005uj}.\nIn this case, the current understanding is that the low energy Lorentz violating terms come as quantum corrections from heavy modes \\cite{Borges:2013eda,Borges:2016uwl}.\n\n\nAlthough most studies of Lorentz violating field theories are developed in a four-dimensional spacetime, there is considerable interest in the description of\nthree-dimensional (3D) ones \\cite{Charneski:2008hy, Nascimento:2014owa, Casana:2015hda}.\nBesides the algebraic richness of odd dimensional spacetimes, one may say that the most appealing aspect of 3D field theories is the UV finiteness of some models.\nThis feature might provide an ambiguity free description of Lorentz violation, thus allowing close contact between the violating effects and physical planar phenomena.\nIn particular, it is worth recalling the example of the description of quantum Hall fluids in terms of noncommutative geometry \\cite{Susskind:2001fb,Douglas:2001ba}.\n\nOver the past two decades, field theories defined in a noncommutative (NC) geometry have been considered as one of the most prominent candidates presenting Lorentz violation to make contact with quantum gravity phenomenology \\cite{Douglas:2001ba,AmelinoCamelia:2008qg}.\nWithin this description, the parameter measuring the noncommutativity is related to a length scale $\\ell_{\\rm nc} \\sim \\sqrt{\\theta}$.\nOn the one hand, this length scale can be seen as a manifestation of the discreteness of the spacetime, presenting a smooth profile in the UV region \\cite{Arzano:2017uuh}.\nOn the other hand, this same scale is responsible for introducing instabilities in the dispersion relations of the fields, the so-called UV\/IR mixing \\cite{Matusis:2000jf}.\n\nNC field theories have been studied through the effective action 
approach, in which the behavior of the new couplings was analyzed in depth \\cite{Vassilevich:2005vk}; the presence of UV\/IR mixing in the 1PI functions signals that one should be careful when applying the usual Wilsonian field theory notions and techniques to NC QFTs.\nThis type of analysis was also extended to two and three-dimensional NC models\n\\cite{Ghasemkhani:2013bs, Ghasemkhani:2017bjp, Chu:2000bz, Banerjee:2007ua, Bufalo:2014ooa}.\nThese studies of the effective action in 3D models were restricted to the coupling of gauge and fermion fields; not much attention has been paid to the case involving scalar fields, in particular the case of spinless charged fields interacting with photons.\n\n\nOn the one hand, it is of physical significance to study scalars in 3D field theories independently of fermions, since condensed matter systems such as quantum Hall systems feature scalar quasiparticle excitations.\nOn the other hand, recently 3D versions of fermionization\/bosonization have also been introduced \\cite{Hsin:2016blu,Benini:2017dus}.\nThese studies discussed the duality between a nonspin\nChern-Simons theory and a spin Chern-Simons theory, exploring precisely the spin structure of the given models.\nIn this sense, the present work could be the first step in extending such analyses to the NC case.\nMotivated by these facts, we will analyze throughout the paper to what extent the spin of the matter fields can change the effective action when charged scalar and fermion fields are considered in the presence of the spacetime noncommutativity.\nA straightforward result is that in the case of the 3D scalar quantum electrodynamics (scalar QED$_{3}$), it is not possible to generate the parity odd Chern-Simons terms, thus showing that the dynamics of the 3D gauge field is significantly different in the presence of either charged scalar or fermion fields. 
It is well known that the presence of the degree of freedom associated with the spin in most cases changes only the magnitude of physical quantities, e.g. the beta function \\cite{Ghasemkhani:2016zjy} and the electron's magnetic moment \\cite{Panigrahi:2004cf}.\n\nIn this paper we discuss the effective action for the photon in scalar QED in a noncommutative three-dimensional spacetime.\nIn Sec.~\\ref{sec2} we present an overview of scalar QED, where the charged scalar fields are minimally coupled to the photons.\nThere we define the main aspects regarding the Moyal product used in our analysis,\\footnote{The noncommutativity we will be using in the paper is\ndefined by the algebra $[\\hat{x}_\\mu,\\hat{x}_\\nu] = i \\theta_{\\mu\\nu}$. So in order to construct a noncommutative field theory, using the Weyl-Moyal (symbol)\ncorrespondence, the ordinary product is replaced\nby the Moyal star product as defined below.}\nand we also discuss the content of discrete symmetries in the NC 3D spacetime.\nIn addition, all the Feynman rules are presented for the propagators and 1PI vertices.\nSection \\ref{sec4} is focused on the perturbative computation of the relevant graphs corresponding to the one-loop effective action for the photon gauge field.\nWe also discuss the generation of higher-derivative terms, analogous to the\nAlekseev-Arbuzov-Baikov effective Lagrangian for non-Abelian fields.\nIn Sec.~\\ref{sec5} we establish a comparison of the obtained results for the effective action in the NC-scalar QED to those of ordinary NC-QED, exploring the part played by the spin in these cases.\nWe present our final remarks in Sec.~\\ref{conc}.\n\n\n\n\\section{The model}\n\\label{sec2}\nIn this section, we introduce the model and fix our notation.\nThe noncommutative extension of bosonic electrodynamics is described by the following action\n\\begin{equation}\nS=\\int d^{3}x\\Big[\\left(D_{\\mu}\\phi\\right)^{\\dagger}\\star 
D^{\\mu}\\phi-m^{2}\\phi^{\\dagger}\\star \\phi\\Big],\n\\label{eq:a1}\n\\end{equation}\nwhich describes charged scalar fields minimally coupled to an external gauge field.\nThe covariant derivative is taken in the fundamental representation, $D_{\\mu}\\phi=\\partial_{\\mu}\\phi+ieA_{\\mu}\\star\\phi$.\nThis action is invariant under the infinitesimal gauge transformation\n\\begin{equation}\n\\delta A_{\\mu}=\\partial_{\\mu}\\lambda+ie[A_{\\mu},\\lambda]_{\\star}~,\\quad \\delta\\phi=ie\\lambda\\star\\phi,\n\\label{eq:a2}\n\\end{equation}\nwhere $[\\,\\, ,\\,]_{\\star}$ is the Moyal bracket.\nMoreover, the Moyal star product between the functions $f$ and $g$ is defined as\n\\begin{equation}\nf\\left( x\\right)\\star g\\left(x\\right) = f\\left( x\\right) \\exp \\left(\\frac{i}{2}\\theta ^{\\mu \\nu}\n\\overleftarrow{\\partial_\\mu}\n \\overrightarrow{\\partial_\\nu}\\right) g\\left( x\\right),\n \\label{eq:a3}\n\\end{equation}\nwhere $\\theta^{\\mu\\nu}=-\\theta^{\\nu\\mu}$ are constant parameters that measure the noncommutative structure of the space-time.\nIn order to avoid unitarity violation, we assume that $\\theta^{0i}=0$, hence we have only one nonzero independent component $\\theta^{12}$ in our model.\n\nIt is worth mentioning that although the couplings \\eqref{eq:a1} are simply modified by the presence of a nonplanar phase due to the Moyal product, the noncommutativity of spacetime coordinates shows its importance in the computation of the one-loop effective action for the gauge field, where nonlinear self-couplings are present solely due to the NC framework.\nThe one-loop effective action for the gauge field\ncan be readily obtained by integrating out the charged scalar fields of \\eqref{eq:a1}\n\\begin{equation}\ne^{i\\Gamma_{\\rm eff}[A]}=\\int D\\phi^{\\dagger} D\\phi~e^{-i\\int d^{3}x~\\phi^{\\dagger}\\star(D^{2}+m^{2})\\star\\phi}.\n\\end{equation}\nUsing the Gaussian functional integration formulas for 
the case of interacting charged scalar fields, we can write the noncommutative 1PI effective action as\n\\begin{equation}\ni\\Gamma_{\\rm eff}[A]={\\rm Tr}\\ln\\left[\\frac{(i\\partial_{\\mu}-eA_{\\mu})\\star(i\\partial^{\\mu}-eA^{\\mu})\\star-m^{2}}{-\\partial^{2}-m^{2}}\\right],\n\\end{equation}\nwhere ${\\rm Tr}$ denotes a sum over the eigenvalues of the operator inside the bracket, which can also be evaluated in momentum space.\nAs in the description of the one-loop effective action for the gauge field in NC-QED \\cite{Bufalo:2014ooa}, one can show that $\\Gamma_{\\rm eff}[A]$ has a convergent series expansion in the coupling constant $e$. From a diagrammatic point of view, it comprises the one-loop graphs contributing to the gauge field $n$-point functions, organized as\n\\begin{equation}\n\\Gamma_{\\rm eff}[A]={\\cal{S}}_{\\rm eff}[AA]+{\\cal{S}}_{\\rm eff}[AAA]+{\\cal{S}}_{\\rm eff}[AAAA]+\\cdots.\n\\label{eq:aa}\n\\end{equation}\nHowever, the functional $\\Gamma_{\\rm eff}[A]$, in comparison to the NC-QED case, has more graphs due to the presence of an additional interaction vertex.\nMoreover, it is important to emphasize that, as we will show in our model,\nsimilarly to the case of NC-QED \\cite{Ghasemkhani:2017bjp}, the one-loop effective action for the photons is completely planar.\nExplicitly, in the evaluation of the one-loop diagrams with an arbitrary number of external photon legs, for energies below the mass scale $m$, only planar diagrams contribute. 
This implies the absence of UV\/IR mixing.\n\n\\subsection{Discrete symmetries}\n\\label{sec2.1}\n\nSince we are interested in computing the one-loop effective action for the photon, it is useful to analyze the behavior of the original action \\eqref{eq:a1} under discrete symmetries: parity, charge conjugation and time reversal.\nThis study will allow us to determine which of them may be anomalous in the obtained results for the one-loop order effective action.\n\n\\begin{itemize}\n\\item{\\emph{Parity}}\n\nThe parity transformation in $d=2+1$ is defined as $x_{1} \\rightarrow -x_{1}$ and $x_{2}\\rightarrow x_{2}$. In this case the field $\\phi$ is even under parity, and the components of the gauge field $A_{\\mu}$ behave as $A_{0}\\rightarrow A_{0}$, $A_{1}\\rightarrow -A_{1}$ and $A_{2}\\rightarrow A_{2}$.\nMoreover, we observe from the NC algebra that the $\\theta$ parameter changes under this transformation as $\\theta^{12}\\rightarrow -\\theta^{12}$.\nWith these considerations, it is easy to show that the whole action \\eqref{eq:a1} is parity invariant.\n\\item{\\emph{Time Reversal}}\n\nUnder time reversal, we have that $x_{0} \\rightarrow -x_{0}$. In this case, the components of the gauge field behave as $\\left( A_{0}, A_i\\right) \\rightarrow \\left( A_0,-A_i\\right)$.\nBy demanding that the scalar field does not change, $\\phi \\to \\phi$, and that necessarily the NC parameter transforms as $\\theta^{12}\\rightarrow -\\theta^{12}$ under time reversal, we are left with a $T$-invariant action.\n\\item{\\emph{Charge Conjugation}}\n\nAs we know, the behavior of the gauge field under charge conjugation is given by $A_{\\mu}\\rightarrow -A_{\\mu}$ for any space-time dimensionality. 
Taking the scalar field to be unchanged under $C$, and the transformation for the NC parameter $\\theta\\rightarrow-\\theta$, we conclude that the action \\eqref{eq:a1} is $C$-invariant.\n\\end{itemize}\n\n\n\\subsection{Propagators and vertex functions}\n\\label{sec3}\n\nIn order to discuss the computation of the perturbative effective action, we must determine the basic propagators and 1PI vertex functions.\nFrom the functional action described in \\eqref{eq:a1}, we can obtain the bosonic propagator\n\\begin{equation}\n{\\cal{D}}(p) = \\frac{i}{{p^{2}-m^{2}}},\n \\label{eq:b1}\n\\end{equation}\nthe cubic vertex $\\left \\langle A \\,\\phi \\, \\phi^\\dagger \\right \\rangle$\n\\begin{align}\n\\Gamma^\\mu(p,q)= -ie\\left(p+q\\right)^{\\mu}\\exp\\Big(\\frac{i}{2} p \\wedge q\\Big),\n \\label{eq:b2}\n\\end{align}\nand the quartic vertex $\\left \\langle AA\\,\\phi \\,\\phi^\\dagger \\right \\rangle$\n\\begin{align}\n\\Lambda^{\\mu\\nu}(p,q,s)=2ie^{2}\\eta^{\\mu\\nu}\\exp{\\Big(\\frac{i}{2}k \\wedge s\\Big)}\\cos\\Big(\\frac{p \\wedge q}{2}\\Big),\n \\label{eq:b3}\n\\end{align}\nwhere we have introduced the notation $p \\wedge q = p_\\mu \\theta^{\\mu\\nu}q_\\nu$.\nA straightforward difference between the scalar and fermionic electrodynamics is the presence of the quartic vertex $\\left \\langle AA\\phi \\phi^\\dagger \\right \\rangle$, which significantly increases the number of one-loop graphs.\nMoreover, the scalar vertices are simpler due to the absence of Dirac $\\gamma$ matrices, which considerably lightens the algebraic analysis.\n\n\n\\section{Perturbative Effective Action}\n \\label{sec4}\n\nNow that we have determined the basic Feynman rules for the 1PI functions, we shall proceed to the computation of the one-loop diagrams related to the effective action for the gauge field.\nFor this purpose, we shall compute throughout this section the respective contributions: the free part of the effective action $\\left\\langle AA \\right\\rangle$, and the 
interacting parts for the cubic vertex $\\left\\langle AAA \\right\\rangle$ and quartic vertex $\\left\\langle AAAA \\right\\rangle$.\nIn general, the final results of our analysis related to the graphs contributing to $\\left\\langle AA \\right\\rangle$, $\\left\\langle AAA \\right\\rangle$ and $\\left\\langle AAAA \\right\\rangle$ vertices shall be a function of $e$, $\\tilde{p}_\\mu = \\theta_{\\mu \\nu } p^\\nu$ and $p^2\/m^2$.\n\nTo highlight the effects of noncommutativity in the low energy effective action and the photon two, three and four-point functions, we take the external momenta $p$ such that $p^2\/m^2\\ll 1$ and work in the highly noncommutative limit, i.e., the low-energy regime $p^2\/m^2 \\to 0$ while $\\tilde{p}$ is kept finite.\nIn this limit the noncommutative (planar) phase factors, which are a function of $\\tilde{p}$, remain finite.\nMoreover, we shall focus our attention on those terms of order $m^{-1}$.\nFor completeness, we also present, at next-to-leading order, the terms of order $m^{-3}$ that correspond to higher-derivative corrections.\n \\subsection{One-loop $\\left\\langle AA \\right\\rangle$ part}\n\nFrom the Feynman rules we can compute the one-loop contribution to the $AA$-term corresponding to the free part of the photon effective action.\nThe two diagrams contributing at this order are depicted in Fig.~\\ref{oneloop1}, whose respective expressions have the form\n\\begin{align}\n\\Pi^{\\mu\\nu}_{(a)}(p) &= e^{2} \\int \\frac{d^{d}k}{(2\\pi)^{d}} \\frac{(p+2k)^{\\mu}(p+2k)^{\\nu}}{[(p+k)^{2}-m^{2}][k^{2}-m^{2}]}, \\nonumber \\\\\n\\Pi^{\\mu\\nu}_{(b)}(p) &= -e^{2} \\int \\frac{d^{d}k}{(2\\pi)^{d}} \\frac{2\\eta^{\\mu\\nu}[(p+k)^{2}-m^{2}]}{[(p+k)^{2}-m^{2}][k^{2}-m^{2}]},\n \\label{eq:c1}\n\\end{align}\nso that the full contribution is written as\n\\begin{align}\n\\Pi^{\\mu\\nu}(p) &= e^{2}\\int \\frac{d^{d}k}{(2\\pi)^{d}} 
\\frac{(p+2k)^{\\mu}(p+2k)^{\\nu}-2\\eta^{\\mu\\nu}[(p+k)^{2}-m^{2}]}{[(p+k)^{2}-m^{2}][k^{2}-m^{2}]}.\n \\label{eq:c2}\n\\end{align}\n\nA first comment is that this piece is completely planar, carrying no noncommutative effects.\nThe explicit computation is straightforward using dimensional regularization.\nAfter some algebraic calculation, we can consider the low-energy limit, $p^2\/m^2 \\to 0$, resulting in\n\\begin{align}\n\\Pi^{\\mu\\nu}(p) =\\frac{ie^{2}}{48\\pi m}\\left(p^{\\mu}p^{\\nu}-\\eta^{\\mu\\nu}p^{2}\\right).\n \\label{eq:c3}\n\\end{align}\nMoreover, for the next-to-leading-order contribution, ${\\cal{O}}(m^{-3})$, we find that\n\\begin{equation}\n\\Pi^{\\mu\\nu}_{\\rm hd}(p)=\\frac{ie^{2}}{960\\pi m^{3}}\\left(p^{\\mu}p^{\\nu}-\\eta^{\\mu\\nu}p^{2}\\right)p^{2}.\n \\label{eq:c4}\n\\end{equation}\nThe two terms, Eqs.~\\eqref{eq:c3} and \\eqref{eq:c4}, straightforwardly satisfy the Ward identity $p_\\mu \\Pi^{\\mu\\nu}=0$, as expected.\nWe can determine the respective contribution to the effective action by means of\n\\begin{equation}\ni{\\cal{S}}_{\\rm eff}[AA]=\\int\\int d^{3}x_{1}d^{3}x_{2}~A_{\\mu}(x_{1})\\Gamma^{\\mu\\nu}(x_{1},x_{2})A_{\\nu}(x_{2}),\n \\label{eq:c5}\n\\end{equation}\nwhere ${\\cal{S}}_{\\rm eff}[AA]$ is the quadratic part of the effective action $\\Gamma[A]$ in \\eqref{eq:aa}. 
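The transversality of the tensor structure appearing in Eqs.~\eqref{eq:c3} and \eqref{eq:c4} can also be verified symbolically. The following is a minimal sketch (the sympy library and the metric convention $\eta_{\mu\nu}={\rm diag}(1,-1,-1)$ are assumptions, and the overall factor $ie^{2}\/(48\pi m)$ is dropped):

```python
import sympy as sp

# contravariant momentum p^mu in d = 3
p0, p1, p2 = sp.symbols('p0 p1 p2', real=True)
p = sp.Matrix([p0, p1, p2])
eta = sp.diag(1, -1, -1)          # Minkowski metric, signature (+,-,-) (assumed convention)
p_sq = (p.T * eta * p)[0, 0]      # p^2 = p_mu p^mu

# transverse structure of eqs. (c3)/(c4): Pi^{mu nu} ~ p^mu p^nu - eta^{mu nu} p^2
Pi = p * p.T - eta * p_sq

# Ward identity p_mu Pi^{mu nu} = 0: lower the first index with eta and contract
contraction = (p.T * eta * Pi).expand()
assert contraction == sp.zeros(1, 3)
```

The same contraction applied to the structure of \eqref{eq:c4}, which differs only by an overall factor of $p^{2}$, vanishes identically as well.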
Here, for simplicity, we have defined\n\\begin{equation}\n\\Gamma^{\\mu\\nu}(x_{1},x_{2})=\\int\\frac{d^{3}p}{(2\\pi)^{3}} e^{-i p\\cdot (x_{1}-x_{2})}\\Pi^{\\mu\\nu}(p ).\n \\label{eq:c6}\n\\end{equation}\n\\begin{figure}[t]\n\\vspace{-1.2cm}\n\\includegraphics[height=6\\baselineskip]{2-point.eps}\n \\centering\\caption{Relevant graphs for the induced $AA$-term.}\n\\label{oneloop1}\n\\end{figure}\n\nAfter some algebra, the quadratic part of the induced effective action for the photon, considering \\eqref{eq:c3} and \\eqref{eq:c4}, is given by\n\\begin{align}\ni{\\cal{S}}_{\\rm eff}[AA]&=-\\frac{ie^{2}}{48\\pi m}\\int d^{3}x\\Big(\\partial_{\\mu}A_{\\nu}\\partial^{\\mu}A^{\\nu}-\\partial^{\\mu}A_{\\nu}\\partial^{\\nu}A_{\\mu}\\Big) \\nonumber \\\\\n&+\\frac{ie^{2}}{960\\pi m^{3}}\\int d^{3}x\\Big(\\partial_{\\mu}A_{\\nu}\\Box\\partial^{\\mu}A^{\\nu}-\\partial^{\\mu}A_{\\nu}\\Box\\partial^{\\nu}A_{\\mu}\\Big).\n \\label{eq:c7}\n\\end{align}\nAs we have previously mentioned, the first term of the expression \\eqref{eq:c7} corresponds to the kinetic part of the noncommutative Maxwell action, ${\\cal O}(m^{-1})$, while the second term is the higher-derivative correction to the kinetic term, of order ${\\cal O}(m^{-3})$.\nMoreover, the obtained result does not contain any noncommutativity effect, since the produced phase factors cancel for $n=2$.\nIt is worth noticing the absence of the parity odd Chern-Simons term in the scalar QED$_{3}$, which in turn is generated in the fermionic electrodynamics due to the algebraic structure of the two-dimensional realization of $\\gamma$ matrices.\n\n\n\\subsection{One-loop $\\left\\langle A AA\\right\\rangle$ vertex}\n\nThe relevant graphs for the $\\left\\langle A AA\\right\\rangle$ part of the effective action are shown in Fig.~\\ref{oneloop2}.\nHowever, in order to determine correctly the full contribution to the effective action, it is necessary to consider all different permutations of the external bosonic lines of the given 
graphs.\nIt is easy to see that diagram (a) has an additional contribution (b), corresponding to a permutation of the external photon legs: it has an equivalent structure but a reversed momentum flow, and it comes directly from the S-matrix expansion at order $e^{3}$.\nWith the help of the Feynman rules, we can easily write the relevant expression for the sum of the graphs (a) and (b)\n\\begin{equation}\n\\Pi^{\\mu\\nu\\rho}_{(a+b)}(p,q)=2i e^{3}\\int \\frac{d^{d}k}{(2\\pi)^{d}} \\frac{(p+2k)^{\\mu}(2p+2k+q)^{\\nu}(p+q+2k)^{\\rho}}{[(p+k)^{2}-m^{2}][(p+q+k)^{2}-m^{2}][k^{2}-m^{2}]} \\sin\\big(\\frac{p \\wedge q}{2}\\big),\n \\label{eq:c8}\n\\end{equation}\nwhich is a planar quantity: its integrand is independent of the noncommutativity, the whole phase dependence having factored out.\nThe contribution from the graph (c) also has a simple planar structure, which is given by the expression\n\\begin{equation}\n\\Pi^{\\mu\\nu\\rho}_{(c)}(p,q)= -e^{3}\\int \\frac{d^{d}k}{(2\\pi)^{d}} \\frac{\\eta^{\\mu\\nu}(p+q+2k)^{\\rho}}{[(p+q+k)^{2}-m^{2}][k^{2}-m^{2}]} \\cos\\big(\\frac{p \\wedge q}{2}\\big).\n \\label{eq:c9}\n\\end{equation}\n\nSince the graph (c) is planar, one can perform straightforward manipulations to show that this contribution is identically zero, i.e. $\\Pi^{\\mu\\nu\\rho}_{(c)}=0$, for any value of the external momenta. 
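\nExplicitly, the vanishing of \\eqref{eq:c9} can be made manifest, for instance, by the shift $k\\rightarrow k-\\frac{p+q}{2}$, allowed within dimensional regularization: the denominator becomes even under $k\\rightarrow -k$, while the numerator reduces to $2k^{\\rho}$, so that the integrand is odd,\n\\begin{equation}\n\\Pi^{\\mu\\nu\\rho}_{(c)}(p,q)\\propto\\int \\frac{d^{d}k}{(2\\pi)^{d}}\\, \\frac{2k^{\\rho}}{[(k+\\frac{p+q}{2})^{2}-m^{2}][(k-\\frac{p+q}{2})^{2}-m^{2}]}\\,\\cos\\big(\\frac{p \\wedge q}{2}\\big)=0.\n\\end{equation}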
Hence, defining $s=p+q$, the full contribution to the $\\left\\langle A AA\\right\\rangle$ vertex reads\n\\begin{align}\n\\Pi^{\\mu\\nu\\rho}(p,q)=2i e^{3}\\int \\frac{d^{d}k}{(2\\pi)^{d}} \\frac{(2k+p)^{\\mu}(2k+p+s)^{\\nu}(s+2k)^{\\rho}}{[(p+k)^{2}-m^{2}][(s+k)^{2}-m^{2}][k^{2}-m^{2}]} \\sin\\big(\\frac{p \\wedge q}{2}\\big).\n \\label{eq:c10}\n\\end{align}\n\\begin{figure}[t]\n\\vspace{-1.2cm}\n\\includegraphics[height=8\\baselineskip]{3-point.eps}\n \\centering\\caption{Relevant graphs for the induced $AAA$-term.}\n\\label{oneloop2}\n\\end{figure}\n\nThe computation of the loop integral is lengthy but straightforward using dimensional regularization, and in the low-energy limit $p^{2},q^{2}\\ll m^{2}$, we find that\n\\begin{equation}\n\\Pi^{\\mu\\nu\\rho}(p,q)=\\frac{e^{3}}{12\\pi m}\\bigg[\\big(p-q\\big)^{\\rho}\\eta^{\\mu\\nu}-\\big(2p+ q\\big)^{\\nu}\\eta^{\\mu\\rho} +\\big(p+2 q\\big)^{\\mu}\\eta^{\\nu\\rho}\\bigg] \\sin\\big(\\frac{p \\wedge q}{2}\\big).\n \\label{eq:c11}\n\\end{equation}\nHere, we notice that Eq.~\\eqref{eq:c11} corresponds exactly to the standard Feynman vertex of the 3-photon interaction term in the NC spacetime.\nMoreover, at next-to-leading order, ${\\cal{O}}(m^{-3})$, we have the contribution from the higher-derivative terms\n\\begin{align}\n\\Pi^{\\mu\\nu\\rho}_{\\rm hd}(p,q)=-\\frac{e^3}{240\\pi m^3}\n\\bigg\\{&\\eta^{\\mu\\nu}\\Big[ p^2(2q-p)^{\\rho}+q^2(q-2p)^{\\rho}+(p\\cdot q)(q-p)^{\\rho}\\Big]\\nonumber\\\\\n+&\\eta^{\\mu\\rho}\\Big[p^2(4p+2q)^{\\nu}+q^2(3p+q)^{\\nu}+(p\\cdot q)(4p+q)^{\\nu}\\Big]\\nonumber\\\\\n-& \\eta^{\\nu\\rho}\\Big[p^2(p+3q)^{\\mu}+q^2(2p+4q)^{\\mu}+(p\\cdot q)(p+4q)^{\\mu}\\Big]\\nonumber\\\\\n+&p^{\\mu}q^{\\rho}(q-p)^{\\nu}+p^{\\rho}q^{\\mu}(q-p)^{\\nu}-p^{\\mu}\np^{\\rho}(2p+q)^{\\nu}+q^{\\mu}q^{\\rho}(p+2q)^{\\nu}\\bigg\\}\\sin\\big(\\frac{p\\wedge q}{2}\\big).\n \\label{eq:c12}\n\\end{align}\nThis expression corresponds to the higher-derivative correction to the 3-photon vertex.\n\nIt is important to observe that in the 
commutative limit, the graphs (a) and (b) cancel each other, so that the induced 3-photon vertex is completely removed in scalar QED.\nWe can understand this result from the charge conjugation invariance of scalar QED in any space-time dimension: Furry's theorem forbids the presence of an odd number of external photon lines in the commutative theory.\nAnother important aspect of our analysis is the absence of the Chern-Simons self-coupling $\\epsilon_{\\mu\\nu\\lambda}A^{\\mu}\\star A^{\\nu}\\star A^{\\lambda}$ in the noncommutative scalar QED$_{3}$ effective action \\eqref{eq:c11}, which is only generated in the case of fermionic electrodynamics \\cite{Bufalo:2014ooa}.\n\n\n\\subsection{One-loop $\\left\\langle AA AA \\right\\rangle$ vertex}\n\nThe full contribution to the $\\left\\langle AA AA \\right\\rangle$ part is determined by considering three different types of diagrams that are depicted in Fig.~\\ref{oneloop3}. Since all of these graphs have 4 external bosonic legs, 24 different permutations for each graph must be considered in order to obtain the fully symmetrized contribution.\nHence, the full contribution can be formally written as\n\\begin{align}\n \\Gamma^{\\mu\\nu\\rho\\sigma}_{\\rm total}= \\Gamma^{\\mu\\nu\\rho\\sigma}_{(a)} +\n \\Gamma^{\\mu\\nu\\rho\\sigma}_{(b)} +\\Gamma^{\\mu\\nu\\rho\\sigma}_{(c)} =\\sum_{i=1}^{24}\\Gamma^{\\mu\\nu\\rho\\sigma}_{(a,i)}+\n \\sum_{i=1}^{24}\\Gamma^{\\mu\\nu\\rho\\sigma}_{(b,i)}+\n \\sum_{i=1}^{24}\\Gamma^{\\mu\\nu\\rho\\sigma}_{(c,i)}.\n \\label{eq:c17}\n\\end{align}\nWe shall present next the explicit discussion for the first contribution of each graph, whereas the remaining graphs are obtained by a direct permutation of momenta and spacetime indices.\n\\begin{figure}[t]\n\\vspace{-1.2cm}\n\\includegraphics[height=8\\baselineskip]{4-point.eps}\n \\centering\\caption{Relevant graphs for the induced $AAAA$-term.}\n\\label{oneloop3}\n\\end{figure}\n\nThe box diagram contribution 
represented in graph (a) has the following expression\n\\begin{align}\n\\Pi^{\\mu\\nu\\rho\\sigma}_{(a,1)}&=e^{4}\\int \\frac{d^{d}k}{(2\\pi)^{d}} \\frac{(p+2k)^{\\mu}(2p+q+2k)^{\\nu}(2p+2q+s+2k)^{\\rho}(p+q+s+2k)^{\\sigma}}{[(p+q+s+k)^{2} -m^{2}][(p+q+k)^{2}-m^{2}][(p+k)^{2}-m^{2}][k^{2}-m^{2}]}\ne^{\\frac{i}{2}p \\wedge q}e^{\\frac{i}{2}(p+q)\\wedge s},\n \\label{eq:c13}\n\\end{align}\nwhere we have labeled the momenta $(p,q,s,r)$ according to the spacetime indices of the external legs $(\\mu,\\nu,\\rho,\\sigma)$.\nMoreover, energy-momentum conservation fixes the momentum flow through the relation $r = p+q+s$.\nThe remaining contributions from the other 23 box diagrams, coming from the S-matrix expansion, can easily be obtained from Eq.~\\eqref{eq:c13} by considering the respective permutations.\nNext, we have the contribution from the bubble diagram represented in (b), which is given by\n\\begin{align}\n\\Pi^{\\mu\\nu\\rho\\sigma}_{(b,1)} =e^{4} \\int \\frac{d^{d}k}{(2\\pi)^{d}} \\frac{\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}}{[(p+q+k)^{2}-m^{2}][k^{2}-m^{2}]}\\cos\\big(\\frac{p \\wedge q}{2}\\big)\\cos\\big(\\frac{s\\wedge (p+q)}{2}\\big).\n \\label{eq:c14}\n\\end{align}\nFinally, the triangle contribution shown in graph (c) is written as\n\\begin{align}\n\\Pi^{\\mu\\nu\\rho\\sigma}_{(c,1)} =-e^{4}\\int \\frac{d^{d}k}{(2\\pi)^{d}} \\frac{\\eta^{\\mu\\nu}(2p+2q+s+2k)^{\\rho}(p+q+s+2k)^{\\sigma}}{[(p+q+s+k)^{2}-m^{2}][(p+q+k)^{2}-m^{2}][k^{2}-m^{2}]}\\cos\\big( \\frac{p \\wedge q}{2}\\big) e^{\\frac{i}{2} (p+q) \\wedge s}.\n \\label{eq:c15}\n\\end{align}\nThe Feynman expressions of the graphs (a), (b) and (c) show that all of them are planar, making the momentum integration easier to evaluate by dimensional regularization.\nHence the resulting expressions from the contributions \\eqref{eq:c13} to \\eqref{eq:c15}, evaluated in the highly noncommutative limit, where $p^2,q^2,s^2 \\ll m^{2}$, are 
written as follows\n\\begin{align}\n\\Gamma^{\\mu\\nu\\rho\\sigma}_{(a,1)}&=\\frac{1}{4}\\times \\frac{ie^{4}}{12\\pi m}\\Big(\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}+\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}+\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\Big)\n~e^{\\frac{i}{2}p\\wedge q}e^{\\frac{i}{2}r\\wedge s},\\nonumber\\\\\n\\Gamma^{\\mu\\nu\\rho\\sigma}_{(b,1)}&= \\frac{1}{2}\\times\\frac{ie^{4}}{8\\pi m}\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}\n\\cos\\big(\\frac{p\\wedge q}{2}\\big)\\cos\\big(\\frac{r\\wedge s}{2}\\big),\\nonumber\\\\\n\\Gamma^{\\mu\\nu\\rho\\sigma}_{(c,1)}&=1\\times \\frac{-ie^{4}}{8\\pi m} ~\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}\\cos\\big(\\frac{p\\wedge q}{2}\\big)\\cos\\big(\\frac{r\\wedge s}{2}\\big),\n \\label{eq:c16}\n\\end{align}\nwhere the coefficients $\\frac{1}{4}$, $\\frac{1}{2}$ and $1$ are the symmetry factors for the graphs (a), (b) and (c) of Fig.~\\ref{oneloop3}, respectively.\nWe then apply to the results \\eqref{eq:c16} all the 24 permutations necessary to evaluate \\eqref{eq:c17}, yielding\n\\begin{align}\n\\Gamma^{\\mu\\nu\\rho\\sigma}_{(a)}&=\\frac{ie^{4}}{6\\pi m}\\Big(\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}+\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}+\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\Big)\n\\Big[\\cos\\big(12\\big)\\cos\\big(34\\big)+\\cos\\big(13\\big)\\cos\\big(24\\big)+\\cos\\big(14\\big)\\cos\\big(23\\big)\\Big],\n\\nonumber\\\\\n\\Gamma^{\\mu\\nu\\rho\\sigma}_{(b)}&=\\frac{ie^{4}}{2\\pi m}\n\\Big[\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}\\cos\\big(12\\big)\\cos\\big(34\\big)+\\eta^{\\mu\\rho}\n\\eta^{\\nu\\sigma}\\cos\\big(13\\big)\\cos\\big(24\\big)+\n\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\cos\\big(14\\big)\\cos\\big(23\\big)\\Big],\n\\nonumber\\\\\n\\Gamma^{\\mu\\nu\\rho\\sigma}_{(c)}&=- \\frac{ie^{4}}{\\pi 
m}\n\\Big[\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}\\cos\\big(12\\big)\\cos\\big(34\\big)+\\eta^{\\mu\\rho}\n\\eta^{\\nu\\sigma}\\cos\\big(13\\big)\\cos\\big(24\\big)+\n\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\cos\\big(14\\big)\\cos\\big(23\\big)\\Big],\n \\label{eq:c18}\n\\end{align}\nwhere we have introduced, for simplicity of the upcoming analysis, the following notation for the NC momenta products: $\\left(12 \\right) \\equiv \\left(\\frac{p\\wedge q}{2}\\right)$, $\\left(13 \\right) \\equiv \\left(\\frac{p\\wedge s}{2}\\right)$, $\\left(14 \\right) \\equiv \\left(\\frac{p\\wedge r}{2}\\right)$, $\\left(23 \\right) \\equiv \\left(\\frac{q\\wedge s}{2}\\right)$, $\\left(24 \\right) \\equiv \\left(\\frac{q\\wedge r}{2}\\right)$, and $ \\left(34\\right) \\equiv \\left(\\frac{s\\wedge r}{2}\\right)$.\nFinally, we substitute the results \\eqref{eq:c18} into Eq.~\\eqref{eq:c17} to obtain the total one-loop contribution to the photon 4-point function\n\\begin{align}\n\\Gamma^{\\mu\\nu\\rho\\sigma}_{\\rm total}=\\frac{ie^{4}}{\\pi m}\\Bigg\\{\n&\\frac{1}{6}\\Big(\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}+\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}+\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\Big) \\Big[\\cos(12)\\cos(34)+\\cos(14)\\cos(23)+\\cos(13)\\cos(24)\\Big]\\nonumber\\\\\n-&\\frac{1}{2} \\Big[\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}\\cos(12)\n\\cos(34)+\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}\\cos(13)\\cos(24)+\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\cos(14)\\cos(23)\\Big]\\Bigg\\}.\n \\label{eq:c19}\n\\end{align}\nWe can verify whether the quartic vertex \\eqref{eq:c19} satisfies the Ward identity in scalar QED. First, we consider the commutative limit, i.e. 
$\\theta\\rightarrow 0$, so that the one-loop contribution \\eqref{eq:c19} is reduced to\n\\begin{equation}\n\\lim_{\\theta\\rightarrow 0}\\Gamma^{\\mu\\nu\\rho\\sigma}_{\\rm total}=\\frac{ie^{4}}{\\pi m}\n\\Big(\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}+\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}+\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\Big) \\Big(\\frac{1}{2}+\\frac{1}{2}-1\\Big)=0,\n \\label{eq:c20}\n\\end{equation}\nshowing that the photon quartic self-coupling at order ${\\cal{O}}(m^{-1})$ is absent in the Abelian theory; however, higher-order contributions could be nonvanishing, corresponding to nonlinear Euler-Heisenberg-like terms.\nMoreover, had the Ward identity been violated, a nonvanishing result for the contribution \\eqref{eq:c20} would have generated a four-photon interaction term in the effective action of the type\n\\begin{align}\n\\lim_{\\theta\\rightarrow 0}{\\cal{S}}_{\\rm eff}[AAAA]\\sim\\int d^{3}x~\\Big(\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}+\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}+\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\Big)A_{\\mu}(x) A_{\\nu}(x) A_{\\rho}(x)A_{\\sigma}(x),\n \\label{eq:c21}\n\\end{align}\nwhich is not gauge invariant. 
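Both the cancellation in \eqref{eq:c20} and the trigonometric reduction of \eqref{eq:c22} into \eqref{eq:c24} carried out below can be spot-checked numerically. The following is a minimal sketch (numpy and an arbitrary test value of $\theta^{12}$ are assumptions; random external momenta are used, with only the $\theta^{12}$ component nonzero, as in Sec.~\ref{sec2}):

```python
import numpy as np

rng = np.random.default_rng(7)
theta12 = 0.83  # arbitrary test value for the single independent component theta^{12}

def wedge(a, b):
    # p ^ q = p_mu theta^{mu nu} q_nu, with theta^{12} = -theta^{21} the only nonzero entry
    return theta12 * (a[1] * b[2] - a[2] * b[1])

# random external momenta, with overall conservation r = p + q + s ("4 = 1 + 2 + 3")
p, q, s = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
r = p + q + s

w12, w13, w14 = wedge(p, q) / 2, wedge(p, s) / 2, wedge(p, r) / 2
w23, w24, w34 = wedge(q, s) / 2, wedge(q, r) / 2, wedge(s, r) / 2

# bracket of eq. (c22) versus its reduced form, eq. (c24)
lhs = -2 * np.cos(w12) * np.cos(w34) + np.cos(w14) * np.cos(w23) + np.cos(w13) * np.cos(w24)
rhs = np.sin(w14) * np.sin(w23) + np.sin(w13) * np.sin(w24)
assert abs(lhs - rhs) < 1e-12
```

In the commutative limit all wedge products vanish, and the bracket collapses to $-2+1+1=0$, in agreement with \eqref{eq:c20}.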
Hence, with the result \\eqref{eq:c20} we conclude that gauge invariance is satisfied in our analysis of ${\\cal{O}}(m^{-1})$ terms at the one-loop approximation.\n\nOn the other hand, in the noncommutative case, as is well known, we expect to find the relevant Feynman rule corresponding to the 4-photon interaction term at this order.\nTo accomplish that, we work out separately each of the tensor structures present in the function $\\Gamma^{\\mu\\nu\\rho\\sigma}_{\\rm total}$, Eq.~\\eqref{eq:c19}.\nWe shall illustrate the analysis for the terms proportional to $\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}$; the remaining terms can be evaluated in the same fashion.\nHence, by picking the pieces that are proportional to $\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}$ in \\eqref{eq:c19}, we have that\n\\begin{equation}\n\\mathcal{I}^{\\mu\\nu,\\rho\\sigma}\\equiv\n\\frac{ie^{4}}{6\\pi m}\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}\\Big[-2\\cos(12)\\cos(34)+ \\cos(14)\\cos(23)+\\cos(13)\\cos(24)\\Big]. \\label{eq:c22}\n\\end{equation}\nThe main work here consists in simplifying the trigonometric part of this function by making use of the energy-momentum conservation $r = p+q+s$, which in the new notation reads $4=1+2+3$, together with the manipulation of some trigonometric identities, e.g. 
$\\cos\\alpha\\cos\\beta =\\cos(\\alpha+\\beta)+\\sin\\alpha\\sin\\beta$.\nAfter some laborious but straightforward calculation, we arrive at the desired expression\n\\begin{equation}\n\\mathcal{I}^{\\mu\\nu,\\rho\\sigma}=\\frac{ie^{4}}{6\\pi m}\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}\\Big[\\sin(14)\\sin(23)+\\sin(13)\\sin(24)\\Big].\n \\label{eq:c24}\n\\end{equation}\nSimilarly, we can apply the same procedure to simplify the remaining terms, proportional to $\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}$ and $\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}$, which yields\n\\begin{align}\n\\mathcal{I}^{\\mu\\rho,\\nu\\sigma}&=\\frac{ie^{4}}{6\\pi m}\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}\\Big[\\sin(12)\\sin(34)-\\sin(14)\\sin(23)\\Big],\n\\nonumber\\\\\n\\mathcal{I}^{\\mu\\sigma,\\nu\\rho}&=- \\frac{ie^{4}}{6\\pi m}\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\Big[\\sin(12)\\sin(34)+\\sin(13)\\sin(24)\\Big].\n \\label{eq:c25}\n\\end{align}\n\nHence, by considering the results from our manipulations, Eqs.~\\eqref{eq:c24} and \\eqref{eq:c25}, we can rewrite \\eqref{eq:c19} in the following convenient form\n\\begin{align}\n\\Gamma^{\\mu\\nu\\rho\\sigma}_{\\rm total}=\\frac{ie^{4}}{6\\pi m}\\bigg[&\\Big(\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}-\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\Big)\\sin\\big(\\frac{p\\wedge s}{2}\\big)\\sin\\big(\\frac{q\\wedge r}{2}\\big)\\nonumber\\\\\n+&\\Big(\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}-\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}\\Big)\\sin\\big(\\frac{p\\wedge r}{2}\\big)\\sin\\big(\\frac{q\\wedge s}{2}\\big)\\nonumber\\\\\n+&\\Big(\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}-\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\Big)\\sin\\big(\\frac{p\\wedge q}{2}\\big)\n\\sin\\big(\\frac{s\\wedge r}{2}\\big)\\bigg],\n \\label{eq:c27}\n\\end{align}\nwhere we have reintroduced the notation in terms of the external momenta $p,q,s,r$.\nAs we can observe, the expression inside the bracket corresponds exactly to the Feynman vertex of the 4-photon interaction within the noncommutative 
$U_{\\star}(1)$ gauge theory.\n\nFinally, we can gather the leading ${\\cal{O}}(m^{-1})$ contributions from the one-loop order parts related to the two, three and four-point functions, Eqs.~\\eqref{eq:c3}, \\eqref{eq:c11} and \\eqref{eq:c27}, respectively, so that we can write the complete expression of the NC Maxwell action as\n\\begin{equation}\ni{\\cal{S}}_{\\rm eff}\\bigg|_{{\\cal{O}}(m^{-1})}=-\\frac{ie^{2}}{96\\pi m}\\int~d^{3}x~F_{\\mu\\nu}\\star F^{\\mu\\nu},\n\\label{eq:c28}\n\\end{equation}\nin which the field strength tensor in the NC framework is defined as $F_{\\mu\\nu}=\\partial_{\\mu}A_{\\nu}-\\partial_{\\nu}A_{\\mu}+ie[A_{\\mu},A_{\\nu}]_{\\star}$.\nAs we have previously discussed, this action is manifestly $U_{\\star}(1)$ gauge invariant under the transformation $U=e^{ie\\lambda}_{\\star}$, where the field strength has the following transformation law $F_{\\mu\\nu}\\rightarrow U\\star F_{\\mu\\nu}\\star U^{-1}$.\n\nRegarding the higher-derivative corrections to the 4-photon vertex \\eqref{eq:c27}, corresponding to the next to leading order ${\\cal{O}}(m^{-3})$ terms, we arrive at a result involving a long expression which can be found in the Appendix \\ref{sec-appA}.\nThis ${\\cal{O}}(m^{-3})$ result can be seen as the 3D version of the Euler-Heisenberg Lagrangian.\nWe notice that in the commutative limit, the gauge invariant field strength is defined as $f_{\\mu\\nu}=\\partial_{\\mu}A_{\\nu}-\\partial_{\\nu}A_{\\mu}$, so that the commutative version of the effective action \\eqref{eq:c28} only receives contribution from the one-loop $\\langle AA\\rangle$ part, the remaining contributions are vanishing.\nThus, the commutative one-loop effective action in the presence of the higher-derivative term, at the next to leading order, is described as\n \\begin{equation}\n \\lim\\limits_{\\theta\\rightarrow 0}i{\\cal{S}}_{\\rm eff}\\bigg|_{{\\cal{O}}(m^{-3})}=\n -\\frac{ie^{2}}{96\\pi m}\\int d^{3}x~f_{\\mu\\nu}f^{\\mu\\nu}+\\frac{ie^{2}}{1920\\pi 
m^{3}}\\int d^{3}x~\nf_{\\mu\\nu}\\Box f^{\\mu\\nu}.\n\\end{equation}\n\nSome comments about the result \\eqref{eq:c28} are now in order.\nRegarding discrete symmetries, following the aforementioned discussion in Sec. \\ref{sec2.1}, it is easy to show that the above one-loop effective action is also invariant under all of the discrete symmetries, and therefore no anomalous symmetry appears at this order.\n\nSince the effective action \\eqref{eq:c28}, arising from the 2-, 3- and 4-point functions at the order ${\\cal{O}}(m^{-1})$, is exactly gauge invariant, it is possible to conclude that no further ${\\cal{O}}(m^{-1})$ terms are generated from higher-order graphs with $n>4$ external photon legs.\nFollowing this reasoning, we can also discuss the gauge invariance of the higher-derivative terms generated at the next-to-leading order ${\\cal{O}}(m^{-3})$ of our expansion.\nActually, it is possible to make use of a dimensional analysis, based on arguments of gauge invariance, to establish the perturbative generation of all possible gauge invariant higher-derivative terms in the one-loop effective action \\cite{Bufalo:2014ooa}.\nAs an example, if we consider all of the ${\\cal{O}}(m^{-3})$ contributions up to the diagrams with $n=6$ external photon legs, we can generate the following effective higher-derivative Lagrangian\n\\begin{equation}\n\\mathcal{L}_{\\rm hd} = \\frac{1}{6\\mu^2} \\nabla _\\mu F^{\\mu \\nu }\\star \\nabla^\\lambda F_{\\lambda \\nu }\n+\\frac{1}{6\\mu^2} \\nabla _\\lambda F^{\\mu \\nu }\\star \\nabla^\\lambda F_{\\mu \\nu }\n- \\frac{e}{18\\mu^2} F^{\\mu \\nu }\\star F_{\\nu \\lambda}\\star F^{\\lambda } _{\\;\\;\\; \\mu},\n\\end{equation}\nwhere $\\nabla _\\mu = \\partial_\\mu +i e \\left[A_\\mu, \\,\\right]_{\\star} $ is the covariant derivative in the adjoint representation and $\\mu \\sim m$ is the mass scale of the theory.\nThis expression can be seen as a noncommutative extension of the Alekseev-Arbuzov-Baikov 
effective Lagrangian \\cite{Quevillon:2018mfl,Alekseev:1981fu}.\n\nOne last comment about the photon effective action concerns some of the nonlinear contributions.\nIt is well known that both fermionic and scalar electrodynamics generate nonlinear corrections of quantum character to the photon dynamics \\cite{Quevillon:2018mfl}.\nIn the case of a $(2+1)$ spacetime the effective Euler-Heisenberg Lagrangian density exhibits fractional powers of the field strength \\cite{Redlich:1983dv}\n\\begin{equation}\n\\mathcal{L}_{\\rm EH} \\sim \\left( e \\sqrt{B^2-E^2} \\right)^{\\frac{3}{2}}.\n\\end{equation}\nHence, it is reasonable to expect that the coordinate noncommutativity would also induce corrections to this nonlinear coupling term.\n\n \\section{Comparison with NC-QED$_{3}$}\n\\label{sec5}\n\nIn this section, we shall present a comparative discussion of the 2-, 3- and 4-point functions in the case of fermionic and bosonic matter fields coupled to the photon.\nIt is notable that the presence of the trace of $\\gamma$ matrices in fermionic QED$_{3}$ leads to two sectors: odd and even with respect to parity symmetry.\nThe former is related to the induced Chern-Simons (CS) terms, appearing with odd powers of the fermion mass $m_e$, while the latter sector contributes to the induced Maxwell (M) terms, with even powers of $m_e$.\nNow in the scalar framework, odd parity terms are absent, and only parity preserving terms are present in the induced effective action, which can be understood as the main difference between these two matter fields.\nThese results can be briefly described in terms of a mass expansion as follows:\n\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n{\\color{blue}Order} & {\\color{blue}Induced action} & {\\color{blue}NC-QED$_{3}$}& {\\color{blue}NC-scalar QED$_{3}$} \\\\\n\\hline\n${\\cal{O}}(m^{0})$ & ordinary~NC-CS & $\\checkmark$ & $\\times$ \\\\\n\\hline\n~${\\cal{O}}(m^{-1})$ & ordinary~NC-M~ 
& $\\checkmark$ & $\\checkmark$ \\\\\n\\hline\n~${\\cal{O}}(m^{-2})$& higher-derivative~NC-CS & $\\checkmark$ & $\\times$ \\\\\n\\hline\n~${\\cal{O}}(m^{-3})$& higher-derivative~NC-M~ & $\\checkmark$ & $\\checkmark$\\\\\n\\hline\n\\vdots& \\vdots& \\vdots& \\vdots\\\\\n\\hline\n~~${\\cal{O}}(m^{-2\\ell})$& higher-derivative~NC-CS & $\\checkmark$ & $\\times$\\\\\n\\hline\n~~~~~${\\cal{O}}(m^{-2\\ell-1})$& higher-derivative~NC-M~ & $\\checkmark$ & $\\checkmark$\\\\\n\\hline\n\\end{tabular}\n\\end{center}\nHere some important comments are in order.\nIn NC-QED$_{3}$, the structure of the kinetic part, coming from the analysis of graphs with $n=2$ external photon legs, receives contributions from two types of terms:\na CS-type parity violating term $e^{2}A \\partial\\Box^{\\ell} A$, at the order ${\\cal{O}}(m^{-2\\ell})$, and an M-type parity preserving term $e^{2}A\\partial\\partial \\Box^{\\ell} A$, at the order ${\\cal{O}}(m^{-2\\ell-1})$.\nWe can observe that the mass dimension of the CS and M-type terms is given by $3+2\\ell$ and $4+2\\ell$, respectively.\nThus, we can conclude that in order to have a gauge invariant CS-type action at the order ${\\cal{O}}(m^{-2\\ell})$, it is necessary to consider all contributions originating from the graphs with $n=2,3,\\ldots, 3+2\\ell$ photon legs.\nOn the other hand, to obtain a gauge invariant M-type action at the order ${\\cal{O}}(m^{-2\\ell-1})$, it is necessary to consider all contributions arising from the relevant graphs with $n=2,3,\\ldots, 4+2\\ell$ photon legs.\n\n\nFurthermore, in $(2+1)$ dimensions, the number of degrees of freedom of the charged boson and the Dirac fermion (in the two-dimensional representation) is equal, and hence it is easy to see that the numerical coefficient appearing in the 2-point function \\eqref{eq:c7} would be the same as in fermionic QED$_{3}$.\nNow for the $n=3$ graphs, in the case of NC-QED$_{3}$, only two triangle graphs contribute at the one-loop order.\nThese contributions are the same 
as in the NC-scalar QED$_{3}$ because the additional graph is identically zero.\nThus, it is easy to realize that the final result in the parity preserving sector is the same in both cases \\cite{Bufalo:2014ooa}.\nThe last type of diagram is for $n=4$ legs, where in the fermionic case only the box diagram, i.e. type (a), contributes.\nThe expression for this diagram at the leading order ${\\cal{O}}(m^{-1})$ is given by\n\\begin{equation}\n\\Gamma^{\\mu\\nu\\rho\\sigma}_{(a,1)}\\bigg|_{\\tiny\\mbox{NC-QED}}=-\\frac{ie^{4}}{3\\pi m}\\Big(\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}-2\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}+ \\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}\\Big)e^{\\frac{i}{2}p\\wedge q}e^{\\frac{i}{2}r\\wedge s}.\n\\end{equation}\nBy considering all of the 24 permutations and after some manipulations we obtain\n\\begin{align}\n\\Gamma^{\\mu\\nu\\rho\\sigma}_{\\rm total}\\bigg|_{\\tiny\\mbox{NC-QED}}=\\frac{4ie^4}{3\\pi m}\\bigg[ \\Big(&\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}-\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}\\Big)\\sin{\\big(\\frac{p\\wedge r}{2}\\big)}\\sin{\\big(\\frac{q\\wedge s}{2}\\big)}\\nonumber\\\\\n+\\Big(&\\eta^{\\mu\\nu}\\eta^{\\rho\\sigma}-\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\Big)\\sin{\\big(\\frac{q\\wedge r}{2}\\big)}\\sin{\\big(\\frac{p\\wedge s}{2}\\big)}\\nonumber\\\\\n+\\Big(&\\eta^{\\mu\\rho}\\eta^{\\nu\\sigma}-\\eta^{\\mu\\sigma}\\eta^{\\nu\\rho}\\Big)\\sin{\\big(\\frac{p\\wedge q}{2}\\big)}\\sin{\\big(\\frac{s\\wedge r}{2}\\big)}\\bigg],\n\\end{align}\nwhich has the same tensor and momentum structure as the standard 4-photon vertex \\eqref{eq:c27}, but with a different numerical coefficient.\nIn this case, we can conclude that the commutative limit of this function is also satisfied, vanishing as we would expect.\n\\section{Final remarks}\n\\label{conc}\n\nIn this paper we have considered the perturbative evaluation of the effective action for the photon in the context of scalar QED in the $(2+1)$ noncommutative 
spacetime.\nOur main interest was to determine to what extent the spin of the matter fields can change the effective action in the presence of the spacetime noncommutativity.\nSince the number of degrees of freedom of the charged boson and the Dirac fermion (in the two-dimensional representation) is equal, the main difference between these fields is solely due to the well-known presence of different couplings in the case of scalar QED, implying new types of graphs.\nAn important drawback of scalar QED in terms of the induced effective action for the photon is the absence of parity violating terms, showing that no Chern-Simons terms are generated when charged scalar fields are considered.\n\nThe perturbative analysis followed the computation of the $\\left\\langle AA \\right\\rangle$, $\\left\\langle AA A \\right\\rangle$ and $\\left\\langle AA AA \\right\\rangle$ vertex functions.\nA more detailed and careful analysis was necessary for the computation of the 4-point vertex, where the process of symmetrization is rather intricate.\nThe evaluation of the noncommutative Maxwell action $\\int F_{\\mu\\nu}\\star F^{\\mu\\nu} $ was done by considering the highly noncommutative limit of these $1PI$ functions at order ${\\cal{O}}(m^{-1})$.\n\nIn addition, we have considered the generation of ${\\cal{O}}(m^{-3})$ terms, which are higher-derivative terms for the photon fields, and can be seen as the noncommutative generalization of the phenomenological Alekseev-Arbuzov-Baikov effective Lagrangian.\nOther possible terms in the gauge invariant photon effective action are nonlinear couplings, e.g. analogous to the Euler-Heisenberg action; thus, within our study of NC-scalar QED, we would find noncommutative corrections to the fractional powers of the field strength that appear in the $(2+1)$ dimensional Euler-Heisenberg action.\n\n \\subsection*{Acknowledgements}\n\n We are grateful to M.M. 
Sheikh-Jabbari for fruitful discussions and comments on the manuscript. R.B. acknowledges partial support from Conselho Nacional de Desenvolvimento Cient\\'ifico e Tecnol\\'ogico (CNPq Projects No. 304241\/2016-4 and 421886\/2018-8 ) and Funda\\c{c}\\~{a}o de Amparo \\`a Pesquisa do Estado de Minas Gerais (FAPEMIG Project No. APQ-01142-17).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAn important part in the design of non-player characters (NPCs) in video-games has to do \nwith the artificial intelligence (AI) of the characters in terms of their actions in the \ngame. This amounts to handling a variety of problems that also involve low-level aspects \nof the game, including for example pathfinding, detection of conditions, execution \nmonitoring for the actions of the characters, and many more. \n\nWhile many of these aspects have been studied extensively separately, for instance \npathfinding has been traditionally a very active research topic with significant impact on \nthe video-game industry, others such as execution monitoring are typically treated in per \ncase basis as part of the implementation of the underlying game engine or the particular\ncharacter in question. As NPCs become more of real autonomous entities in the game, \nhandling such components in a more principled way becomes very important. This is relevant\nboth from the designers and developers point of view that benefit from an architecture \nthat is easier to maintain and reuse, but also from the point of view of the more refined \ninteractions with the game-world that NPCs can achieve.\n\nIn this paper we propose an architecture for specifying the interaction of NPCs in the \ngame-world in a way that abstracts common tasks in four main conceptual components, \nnamely \\emph{perception, deliberation, control, action}. 
The architecture is inspired by \nAI research on autonomous agents and robots, in particular the notion of \\emph{high-level\ncontrol} for \\emph{cognitive robotics} as it is used in the context of deliberative\nagents and robots such as \\cite{levesque98highlevel,Shanahan01Highlevel}. \nThe motivation is that by adopting a clear role for each component and specifying a simple \nand clean interface between them, we can have several benefits as we discuss next. \n\nFirst, as there are many techniques for specifying how an NPC decides on the next action \nto pursue in the game world, an architecture that abstracts this part in an appropriate \nway could allow one to easily switch between approaches. For example, it could facilitate \nexperimentation with a finite-state machine \\cite{Rabin02FSM}, a behavior tree\n\\cite{Isla05BehaviorTrees} or a goal-oriented action planning approach \\cite{orkin06fear} \nfor deciding on NPC actions, in a way that keeps all other parts of the architecture \nagnostic to the actual method for deliberation.\n\nSecond, this can provide the ground for a thorough investigation of the different ways\nof combining the available methodologies for different components, possibly leading \nto novel ways of using existing approaches. Also, this clear-cut separation of roles can \nencourage the development of modules that encapsulate existing approaches that have not \nbeen abstracted out of their application setting before, increasing re-usability of \ncomponents.\n\nWe also believe that this type of organization is a necessary prerequisite for enabling \nmore advanced behaviors that rely on each NPC holding a \\emph{personalized view} of the \ngame-world that is separated from the current (updated and completely specified) state \nof affairs. In particular, we argue that it is important for believable NPCs to adopt a \nhigh-level view of relevant aspects of the game-world including the topology and \nconnectivity of available areas in the world. 
In such cases then, when the deliberation \nis more tightly connected with low-level perception and action, we believe that a \nwell-principled AI architecture becomes important for maintaining and debugging the \ndevelopment process.\n\nFor example, typically the game engine includes a pathfinding module that is used by all\nNPCs in order to find their way in the game world. But what happens when one NPC knows \nthat one path to the target destination is blocked while another NPC does not possess this\ninformation? The approach of handling all requests with a single pathfinding module cannot \nhandle this differentiation unless the pathfinder takes into account the personalized \nknowledge of each NPC. This type of mixing of perception, deliberation, and action can\nbe handled at the low level of pathfinding using various straightforward tricks, but \nmaking a separation between the high-level knowledge of each NPC and the low-level game-world\nstate can be very useful.\n\nAdopting this separation, each NPC may keep a personalized high-level view of the \ngame-world in terms of available areas or zones in the game and their connectivity, and\nuse the low-level pathfinding module as a service only for the purpose of finding paths \nbetween areas that are \\emph{known to the NPC} to be connected or points in the same area. \nA simple coarse-grained break-down of the game-world into large areas can ensure that the \ndeliberation needed at the high level is very simple and does not require effort that is\nat all similar to the low-level pathfinding. Alternatively, a more sophisticated approach\nwould be to model this representation based on the actual one used by a hierarchical \npathfinding approach such as \\cite{Botea04HPAstar}, so that the high-level personalized \ndeliberation is also facilitated by the pathfinding module but using the NPC's version of\nthe higher level of the map.\n\nThe rest of the paper is organized as follows. 
We continue with the \ndescription of our proposed \\emph{CogBot} architecture. We then introduce a motivating \nexample that shows the need for personalized knowledge of the game-world for NPCs, and\nreport on an implementation of CogBots in the popular Unity game \nengine.\\footnote{\\url{www.unity3d.com}} \nNext, we discuss the \nstate-of-the-art for approaches that deal with AI for NPCs with respect to actions in the \ngame-world, and finally, we conclude with our view of \ninteresting future directions to investigate. \n\n\\section{The CogBot NPC AI architecture}\n\nThe proposed architecture formalizes the behavior of NPCs, as far as their actions in the \ngame-world are concerned, in terms of four basic components as follows. \n\n\\medskip\n\\centerline{\\includegraphics[width=0.95\\linewidth]{img\/GeneralArchitecture.png}}\n\\smallskip\n\n\\begin{itemize}\n\\item The \\emph{perception} component (\\emph{CogBotPerception}) is responsible for \nidentifying objects and features \nof the game-world in the field of view of the NPC, including conditions or events that \noccur, which can be useful for the deliberation component. \n\\item The \\emph{deliberation} component (\\emph{CogBotDeliberation}) is responsible for \ndeciding the immediate action that should be performed by the NPC by taking into account\nthe input from the perception component as well as internal representations. This component \nmay be used to abstract the logic or strategy that the NPC should follow, which could be \nfor instance expressed in terms of reactive or proactive behavior following any of \nthe existing approaches.\n\\item The \\emph{control} component (\\emph{CogBotControl}) is responsible for going \nover a loop that passes information between the perception and deliberation components,\nand handling the execution of actions as they are decided. 
In particular, the controller is \nagnostic of the way that perception, deliberation and action are implemented, but is responsible\nfor coordinating the information between the components while handling exceptions, monitoring \nconditions and actions, and allocating resources to the deliberator accordingly.\n\\item The \\emph{action} component (\\emph{CogBotAction}) is responsible for realizing the \nconceptual actions that are decided by the deliberator in the game-world and providing \ninformation about the state of action execution, e.g., success or failure.\n\\end{itemize}\n\nEssentially, the control component acts as a mediator that distributes information between\nthe other components. Note that more than one instance of each component may be used in an\nNPC architecture. This particular specification of components should be seen as a means for\nstructuring the various processes that need to operate and coordinate so that an \nNPC can perform challenging tasks in a game-world environment. One may think of cases where\na single controller is used as a hub that manages many perception, deliberation, and action\ncomponents, or other cases where networks of one instance of each of these four components are\nused to manage different aspects of the NPC behavior.\n\n\n\\subsection{Perception}\n\nThe perception component is the main information source for the NPC control component. In \nthe typical case it is attached to a mesh object surrounding the NPC and it provides instant\ninformation about all the game objects that lie in the area of the mesh object, e.g., a \nsight cone positioned on the head of the NPC. Also, the perception component may be \nmonitoring the field of view for conditions or events which are also propagated to the\ncontrol component.\n\nThe communication with the control component is asynchronous as the perception component\npushes information to the control component by calling appropriate callback functions as\nfollows. 
\n\n\\begin{itemize}\n\\item An ``Object Entering FoV'' and ``Object Leaving FoV'' callback function is \ncalled when an object enters or leaves the field of view of the NPC.\n\\item A ``Notify Object Status'' is called when the internal state of an object in the \nfield of view is changed. \n\\item A ``Notify Event'' callback function is called whenever an implemented monitoring\ncondition check is triggered.\n\\end{itemize}\n\n\\medskip\n\\centerline{\\includegraphics[width=.95\\linewidth]{img\/PerceptionControl.png}}\n\\medskip\n\nMoreover, the perception component provides a filtering mechanism based on the type of\nobjects and conditions so that only the ones that the NPC registers for will be reported \nand others will be ignored. In general an NPC may have more than one instance of a perception \ncomponent, each of which may have a different range and can be used to track different \nobject types. A simple example is an NPC having one perception mesh for sight and one \nfor hearing, which are both set up to communicate with the same control component.\n\nFinally, note that conditions that may be triggered in the environment are propagated as\nnotifications of events. This type of information is also used in other parts of the \ncommunication between components. This is adopted as a means to provide a simple uniform\nmechanism for communication between components.\n\n\\subsection{Deliberation}\n\nThe deliberation component is the bridge between the low-level space of perception and\naction in the game-world and the high-level space of conceptual representation of the\nstate of affairs as far as the NPC is concerned. The deliberation component exposes an\ninterface for the control component to asynchronously invoke communication as follows. 
\n\\begin{itemize}\n\\item A ``Get Next Action'' function abstracts the decision by the deliberation component\nwith respect to the next immediate action to be performed by the NPC.\n\\item A ``Notify Object'' function notifies the deliberation component about relevant \nobjects that become visible or have changed their state.\n\\item A ``Notify Event'' function notifies the deliberation component about relevant \nconditions received by the perception component that may affect the internal \nrepresentation of the state or knowledge of the NPC.\n\\end{itemize}\n\n\\medskip\n\\centerline{\\includegraphics[width=0.95\\linewidth]{img\/DeliberatorControl.png}}\n\\medskip\n\nThe deliberation component is responsible for modeling the decision making of the \nNPC with respect to action execution in the game-world, using the information that\nis provided by the control component and other internal representations that are\nappropriate for each approach or specific implementation.\n\nIn particular, one useful view is to think of the deliberation component as maintaining\nan internal model of the game-world that is initiated and updated by the low-level input \ncoming from the perception component, and using this model along with other models of \naction-based behavior in order to specify the course of action of the NPC. The actual\nimplementation or method for the model of the game-world and the model of desired \nbehavior is not constrained in any way other than the type of input that is provided \nand the type of action descriptions that can be passed to the action component.\n\nUnder this abstraction an NPC may for example just keep track of \nsimple conditions in the game-world and a representation of its internal state in\nterms of health and inventory, and use a finite-state machine approach to specify\nthe immediate next action to be performed at any time. 
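To make the finite-state machine option concrete, the following is a minimal sketch of a deliberator sitting behind the ``Get Next Action'' and ``Notify Event'' interface. It is written in Python for brevity (the actual components are Unity scripts), and all state, event, and action names are illustrative assumptions rather than part of the CogBot implementation.

```python
# Illustrative FSM deliberator; all names are hypothetical, not CogBot code.
class FSMDeliberator:
    def __init__(self):
        self.state = "patrol"
        # transition table: (current state, event) -> next state
        self.transitions = {
            ("patrol", "enemy_seen"): "attack",
            ("attack", "low_health"): "flee",
            ("flee", "healed"): "patrol",
        }
        # each state emits one immediate action when polled
        self.state_action = {
            "patrol": "move-to-waypoint",
            "attack": "attack-target",
            "flee": "move-to-cover",
        }

    def notify_event(self, event):
        # called by the control component when perception reports an event;
        # unknown (state, event) pairs leave the state unchanged
        self.state = self.transitions.get((self.state, event), self.state)

    def get_next_action(self):
        # polled by the control component on each decision cycle
        return self.state_action[self.state]

deliberator = FSMDeliberator()  # starts in the "patrol" state
```

The control component stays agnostic to the fact that an FSM is used: it only sees the two interface functions, so the class could be swapped for a planner with the same signature.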
Similarly, but following a\ntotally different methodology, an NPC may keep track of a propositional logic\nliteral-based model of the current state of the game-world, and use an automated planning\ndecision making process for deciding on the next immediate action to be performed as\npart of a longer plan that achieves a desired goal for the NPC.\n\nObserve that in the general case this level of abstraction allows the decision making of\nthe NPC to possess information that is different from the true state of affairs in the \ngame-world. This may be either because some change happened for which the NPC was not\ninformed by the perception component, for example because the NPC was simply not able\nto perceive this information, or even because the perception component itself is \nimplemented in such a way as to provide filtered or altered information, serving some \naspect of the game design and game-play.\n\n\n\\subsection{Control}\n\nAs we mentioned earlier, the control component acts as a mediator that distributes \ninformation between the other components, including notifications about the state of \nobjects, notifications about conditions in the game-world, as well as feedback about \naction execution. In order to do so it propagates a callback invocation from one \ncomponent to the other (as for example in the case of the perception and deliberation \ncomponent). The way in which this is performed depends on the particular\ncase and implementation choices. For example different types of information may be \ndelivered to different instances of the same component as we discussed earlier in \nthis section.\n\nAlso, the controller component goes over a loop that handles action execution. 
In its\nsimplest form this could be just repeatedly calling in sequence the corresponding\nfunction of the deliberation component that informs about the next action to be\nperformed, and then passing on this information to the action component so that the\naction is actually executed in the game-world. The architecture does not limit or \nprescribe the way that this loop should be implemented, and indeed it should be done in\na way that varies depending on the characteristics of the game. Nonetheless, the intention\nis that under this abstraction the control component can encourage more principled \napproaches for action execution monitoring, handling exceptions, and recovering from \nerrors.\n\n\n\\subsection{Action}\n\nThe action component abstracts the actions of the NPC in the game-world, allowing the \nrest of the architecture to work at a symbolic level and the other components to be\nagnostic about the implementation details for each action. Note that the architecture\nview we adopt does not prescribe the level of detail that these actions should be \nabstracted to. In some cases an action could be an atomic low-level task in the game-world,\nas for example the task of turning the face of the NPC toward some target, while in other\ncases a more high-level view would be appropriate, essentially structuring NPC action \nbehavior in terms of strategies or macro-actions. 
In the latter case, the action component\nmay be used for example to connect a conceptual high-level view to low-level \nimplementations of behaviors, as would typically be done when a finite-state machine\nis used for reactive behavior.\n\n\nIn terms of the communication of the action component with the control component, again\na very simple interface is adopted for asynchronous interaction as follows.\n\\begin{itemize}\n\\item An ``Invoke Action'' function abstracts the action execution from the point of view\nof the architecture, initiating an implemented internal function that operates in the \ngame-world.\n\\item A ``Notify Event'' callback function is called to inform the control component about\ninformation related to action execution, such as that the action has finished with success\nor that some error occurred, etc.\n\\end{itemize}\n \n\\medskip\n\\centerline{\\includegraphics[width=0.95\\linewidth]{img\/ActionControl.png}}\n\\medskip\n\nAs in the deliberation component an internal representation is assumed to model the\npersonalized view of the game-world for the NPC, in the action component a similar \nconceptual representation needs to be utilized in order to capture the available actions\nand their characteristics. As in this case the representation can be more straightforward,\nwe also assume the following simple schema for registering actions in the architecture.\nA new action can be registered by means of calling an internal ``Register Action'' \nfunction which requires i) a string representing the action name (for instance move-to, \npick-up, etc.), and ii) a reference to a function that implements the action. 
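As an illustration, the registration schema just described could look as follows. This is a Python sketch (the actual component is a Unity script), and every class, function, and action name in it is a hypothetical stand-in, not taken from the CogBot implementation.

```python
# Hypothetical sketch of the action component's registration schema.
class ActionComponent:
    def __init__(self, notify_event):
        self.actions = {}                 # action name -> implementing function
        self.notify_event = notify_event  # "Notify Event" callback into the controller

    def register_action(self, name, fn):
        # i) a string naming the action, ii) a reference to its implementation
        self.actions[name] = fn

    def invoke_action(self, name, *args):
        # "Invoke Action": run the implementation and report the outcome
        try:
            self.actions[name](*args)
            self.notify_event(name, "success")
        except Exception:
            self.notify_event(name, "failure")

events = []
component = ActionComponent(lambda name, status: events.append((name, status)))
component.register_action("move-to", lambda target: None)  # stub implementation
component.invoke_action("move-to", (3, 4))
# the controller receives ("move-to", "success") via the callback
```

Invoking an unregistered action, or an implementation that raises, results in a failure notification, which is how the controller could drive its monitoring and error-recovery loop.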
An appropriate\naccount for additional parameters for actions is needed but this is an implementation \ndetail.\n\n\nWe now proceed to discuss a concrete scenario that shows how the CogBot architecture and an\nappropriate separation between the low-level state of affairs in the game-world and a personalized\nconceptual view of the game-world can enable novel behaviors for NPCs.\n \n\n\\section{A motivating example}\n\nAs a motivating running example we will consider a simple case of a room-based \ngame-world in which the human player and some NPCs can move between rooms, pick up and \ndrop objects, and use some of them to block pathways or set traps. Suppose also that the \ngoal of the game is to eventually get hold of a special item and deliver it at a \ndesignated spot. \n\nNow consider the scenario according to which the player decides to block a passage that \nworks as a shortcut to some room, by putting some obstacle in the way after he passes \nthrough. This means that the characters will have to go round using a different route in\norder to get close to the human player. This is an important piece of information that\nshould greatly affect how the NPCs will decide to move in the game-world. \n\nFor example, at first, no NPC knows that the passage is blocked, so perhaps an NPC that \nwants to go to the room in question will have to go through the normal (shortest) route \nthat goes through the shortcut, see that it is blocked, and then go round using the \nalternative route. From then on and until the NPC sees or assumes that the passage is \ncleared, this is what the NPC would be expected to do in order to act in a believable way, \nand this line of reasoning should be adopted for each one of the NPCs separately.\n\nHow would this behavior be realized in a game though? 
This simple scenario suggests that\nthe pathfinding module of the game-engine should somehow keep track of which obstacles \neach NPC is aware of and return a different \\emph{personalized path} for each one that is \nconsistent with their knowledge. To see why simpler ways to handle this would hurt the \nbelievability of NPCs, consider that the pathfinding module follows an NPC-agnostic \napproach for finding a route that either i) does not take into account object obstacles \nor ii) does take into account object obstacles. In both cases, NPCs are assumed to \nreplan if the planned route turns out to be unrealizable.\n\nIt is easy to see that both approaches are problematic. In the first case, an NPC may\ntry to go through the shortcut more than once, in a way that exposes that they have no\nway of remembering that the shortcut is blocked. In the second case, an NPC that did \nnot first try to use the shortcut will nonetheless immediately choose to follow the \nalternative route even though they did not observe that the shortcut is blocked, probably\nruining the player's plan to slow down other NPCs. \n\nEssentially, in order to maintain \nthe believability of NPCs, each one should be performing pathfinding based on the \ninformation they possess, which needs to be updated accordingly from their observations.\nNote how this account of knowledge of topology sets the ground for other more advanced\nfeatures for NPCs, as for example the capability of exchanging information, in the sense\nthat if a player sees that NPCs gathered together and talked, his little trick is no\nlonger able to slow down the other NPCs.\n\nWe now continue to discuss an application of the CogBot architecture in an implemented\nversion of this scenario, which is able to handle the intended behavior for NPCs \nmaking use of an appropriate separation of the actual state of the game-world and\nthe personalized view of the NPCs. 
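The scenario above can be sketched in a few lines. The snippet below is a Python toy, not the GridWorld prototype itself (which is a Unity project): each NPC keeps its own beliefs about which doors are open and searches only the area graph it believes to be traversable, so two NPCs with different observations plan different routes. All names and the three-room layout are illustrative assumptions.

```python
# Toy per-NPC "personalized" pathfinding over a high-level area graph.
from collections import deque

class NPCBeliefs:
    def __init__(self, doors):
        # each NPC starts out believing every known door is open
        self.door_open = {door: True for door in doors}

    def observe(self, door, is_open):
        # perception callback: beliefs change only when the door is actually seen
        self.door_open[door] = is_open

    def find_route(self, start, goal, door_areas):
        # BFS over areas, crossing only doors this NPC believes are open
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for door, (a, b) in door_areas.items():
                if not self.door_open[door] or path[-1] not in (a, b):
                    continue
                nxt = b if path[-1] == a else a
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None  # no route consistent with current beliefs

# "shortcut" links area A directly to C; "long1"/"long2" go the long way via B
door_areas = {"shortcut": ("A", "C"), "long1": ("A", "B"), "long2": ("B", "C")}
npc = NPCBeliefs(door_areas)
route = npc.find_route("A", "C", door_areas)   # tries the shortcut first
npc.observe("shortcut", False)                 # sees that it is blocked
route = npc.find_route("A", "C", door_areas)   # now reroutes via area B
```

An NPC that never observed the blocked shortcut would keep planning through it, which is exactly the believable behavior the example calls for; the low-level A*-style pathfinder is then only consulted within and between areas the NPC believes to be connected.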
\n\n\\section{CogBots in GridWorld}\n\n\\emph{GridWorld} is a grid-based environment developed in the Unity game engine and modeled after the \nmotivating example we introduced in the previous section. It was specifically designed to be used\nfor performing automated tests with respect to NPC action-based behavior, therefore all of the\ncomponents of the game-world are procedurally generated from an input text file that specifies\nthe topology as well as the available objects and their state. For example, we can design a map \nconsisting of rooms interconnected through doors and use it to observe the behavior of one (or \nmore) NPCs navigating in the environment while the user opens or closes the doors. In particular,\nwe will experiment with a prototype implementation of our proposed architecture.\n\n\n\n\\subsection{GridWorld}\n\nThe main component of GridWorld is the \\emph{map} generated from an ASCII file that is taken as\ninput. Each element in the grid, including wall sections, floor tiles, and doors, is a low-level\nUnity game-object with a component containing some basic information about the object, such as\nits type and a flag that indicates whether the object is static or not. Non-static objects, that \nis, objects that may be in more than one state, have an additional component for handling the\ninternal state representation and the way it changes by means of interactions in the game-world.\n\nA simple A* heuristic search path-finding procedure is implemented based on the grid representation \nin order to provide a low-level navigation system. Moreover, in the initialization phase the map is \nprocessed and decomposed into interconnected areas using the standard method of \nconnected-component labeling \\cite{Dillencourt92Component} from the area of image processing. The \nresulting high-level topology is represented as:\n\\begin{itemize} \n\\item A list of \\emph{areas}. 
An area is a fully connected set of tiles in which the movement from \nany tile to any tile is guaranteed to succeed.\n\\item A list of \\emph{way-points} between areas. In our case these way-points are explicitly\nrepresented by \\emph{doors}. Each door connects two adjacent areas and has an internal state that\ncan be open or closed.\n\\item A list of \\emph{points of interest} that can be used to model the tiles in the map that are\npotential target destinations for the NPC with respect to the scenario in question.\n\\end{itemize}\n\nThis type of information will be maintained by our prototype CogBot NPC and, as we will see \nshortly, combined with the low-level pathfinding procedure it can address the challenges raised by\nthe motivating example we introduced earlier.\nThe ground-truth map data is stored in a global object in the root of the game scene graph. This \nobject can be accessed by any other object in the game to ask for world information such as the\nsize of the map, the object type in some $(i,j)$ position, etc.\n\n\n\\subsection{CogBots}\n\nA prototype NPC in GridWorld is implemented following the CogBot architecture described in\nthe previous section. The CogBot NPC consists of a standard game character object with a\nmesh, colliders, and animations. In addition to these, the NPC has the following components:\n\n\\begin{itemize}\n\\item A \\emph{conic collider} representing the field of view of the NPC. This collider is attached \nto the main body of the NPC, positioned in such a way as to simulate its sight cone.\n\\item A \\emph{CogBotPerception} component attached to the conic collider. 
The perception component\nis implemented so as to pass information about all visible objects in the field of view of the NPC.\nIn this example, the perception component raises no conditions by means of events.\n\\item A \\emph{CogBotController} component that simply acts as a bridge for communication between\nperception, deliberation, and action.\n\\item A \\emph{PlayerAction} component. This component is a collection of actions of the human\nplayer\/spectator in GridWorld that instruct the NPC to perform some activity. \nIn this simple example we only have actions from the PlayerAction component that invoke\nmoving actions for the NPC in order to reach a target destination tile.\n\\item A \\emph{CogBotAction} component that implements the moving actions of the NPC.\n\\item A \\emph{ManualDeliberator} component. This is an implementation\nof the CogBotDeliberator interface that provides the immediate action to be performed by the NPC\nbased on an internal representation and a model of activity that we will describe with an example\nnext.\n\\end{itemize}\n\n\\subsection{A simple example}\n\nConsider the following setting for the map of GridWorld.\\footnote{\nOur prototype implementation of the GridWorld test-bed and the CogBots architecture are \navailable at \\url{https:\/\/github.com\/THeK3nger\/gridworld} and\n\\url{https:\/\/github.com\/THeK3nger\/unity-cogbot}. The simple example reported here can \nbe found in the ``KBExample'' folder in the GridWorld repository.}\n\n\\medskip\n\\centerline{\\includegraphics[width=0.8\\linewidth]{img\/GridWorldShortcutMap.png}}\n\\medskip\n\nAfter the decomposition of the map we have three areas: area 1, where the NPC is located; area 2, which is\nsymmetrical to area 1 and connected to it through a door; and area 3, a little room that lies between areas \n1 and 2 and connects to both with a door. 
In particular, we call $D_{1,2}$ the door connecting areas 1 and 2, and\n$D_{1,3}$, $D_{3,2}$ the doors that connect the little room depicted as area 3 with the other two areas.\n\nIn this example the human player\/observer can instruct the NPC to move to particular tiles by clicking on\nthem. Suppose for example that the player asks the NPC to go from the initial position in area 1 to a \nsymmetrical position in area 2. The actual path to follow would differ depending on which doors are open\nfor the NPC to go through. \n\nGiven an internal representation of the connectivity of \n areas, and a personalized internal representation of the state of the doors, the NPC can\n\\emph{deliberate at the level of areas} about the path to take. In other words, the NPC can first build a\ncoarse-grained high-level plan of the form $\\{$``move to area X'', ``move to area Y''$\\}$, and then use\nthe low-level pathfinding procedure to generate paths for each part of the plan. Apart from achieving a\nhierarchical form of pathfinding that could be beneficial for various reasons, this approach actually \nhandles the type of believability property that we discussed in the motivating example.\n\nAt the beginning of the simulation the internal knowledge of the NPC assumes all doors to be closed.\nIn this situation, if we instruct the NPC to move to area 2, the deliberation component returns no plan: \naccording to the NPC's knowledge all doors are closed, so it is impossible to get there.\nIf we open the doors between areas 1 and 2 by clicking on them, the deliberation component is still unable to\nfind a plan, because the NPC was not able to get this information through the perception component and its\nfield of view. 
Even though the pathfinding procedure would return a valid path, this is irrelevant for the\npersonalized view of the NPC.\n\nNow assume that we move the NPC close to the doors, so that its internal representation is updated to reflect\nthe real state of the world, and then take it back to the starting point. If we instruct the NPC to move to area 2, the \ndeliberation component produces the straightforward plan, which is executed using the pathfinding procedure. \nSimilarly, if we open the door $D_{1,3}$, the deliberation component would still return the same plan, as in\norder to take the shortest path that goes through area 3 the NPC needs to see that the door is open. \n\n\n \n\n\\section{Challenges and related work}\n\nThere is a variety of work that aims for action-based AI for NPCs.\nTraditionally, a combination of scripts and finite state machines is used in interactive games for controlling NPCs. These methods, even if fairly limited, allow the game designer to control every aspect of the NPCs' actions. This approach has been employed in different types of successful videogames such as the Role-Playing Game (RPG) \\textit{Neverwinter Nights} or the First-Person Shooter (FPS) \\textit{Unreal Tournament}. Scripts are written off-line in a high-level language and are used to define simple behaviors for NPCs. Procedural script generation has been proposed in \\cite{Mcnaughton04scriptease:generative} by using simple pattern templates which are tuned and combined by hand. Complex behaviors can be developed in a short amount of time, but many of the intricacies of the classical scripting approach are not solved, and it remains difficult to manage the NPCs as the complexity of the virtual world increases.\n\nState machines are still the preferred way to control NPCs in modern games, from FPSs like the \\textit{Quake} series to Real-Time Strategy (RTS) games like Blizzard's \\textit{Warcraft III}. 
A problem with state machines is that they allow little reusability and they must often be rebuilt for every different case \\cite{Orkin2003}. Furthermore, the number of states grows exponentially as the behavior of the character becomes even slightly more sophisticated, which is problematic for the design, maintenance, and debugging of NPC behavior.\nIn Bungie's \\textit{Halo 2}, a form of hierarchical state machines, or behavior trees, is used \\cite{Isla05BehaviorTrees}.\nIn the FPSs \\textit{F.E.A.R.} from Monolith and \\textit{Unreal Tournament} from Epic, STRIPS and HTN planning have been used to define complex behaviors, such as NPCs able to coordinate as squads and perform advanced strategies such as sending for backup or flanking \\cite{orkin06fear}.\nIn Bethesda's \\textit{The Elder Scrolls IV: Oblivion}, NPCs are controlled by \ngoals that involve scheduling and that are given for the NPCs to achieve. \nThis allows the designer to define behaviors and tasks that depend on preconditions and scheduled times. \nNonetheless, there has been much less effort on standardizing approaches for action-based\nAI in a framework that would allow better comparison of, or collaboration between, the existing\ntechniques. To that end, our proposed architecture makes it possible to:\n\\begin{itemize}\n\\item Try out different existing approaches and decision algorithms by abstracting them \nas different deliberation components that can easily be switched, allowing a comparative AI \nperformance analysis.\n\\item Build NPCs that are able to dynamically change the underlying AI method depending on\nthe state of the game or the vicinity of the player.\nFor example, a simple finite state machine (FSM) can be used while an NPC is idle or far away from the player,\nswitching to a behavior tree (BT) or goal-oriented action planning (GOAP) when it must defend a location or attack the player.\n\\item Build NPCs that are able to use a combination of existing decision algorithms. For example,\nwe could think of a system that combines BTs for low-level decisions and GOAP for high-level \ntactical reasoning. 
The high-level planner would then return a sequence of actions, each of \nwhich would invoke a corresponding BT.\n\\item Develop NPCs with a \\emph{personalized} conceptual representation of the game-world that\nenables rich and novel behaviors. The motivating example we examined is just a very simple \ncase which, to the best of our knowledge, has not been handled by approaches in the literature.\nMoreover, this view can lead to more interesting cases also involving communication between NPCs \nand their personalized knowledge.\n\\end{itemize}\n\n\n\n\n\n \n\\section{Conclusions}\n\nIn this paper we have introduced a robotics-inspired architecture for\nhandling non-player character artificial intelligence for the purpose of specifying\naction in the game-world. We demonstrated that certain\nbenefits can come out of a principled approach to decomposing the artificial \nintelligence\neffort into components, in particular with respect to a separation between the \nlow-level ground truth for the \nstate of the game-world and a personalized conceptual representation of the world for each NPC.\n\nOur proposed architecture provides modularity, allowing each of the four main \ncomponents of the architecture, namely perception, deliberation, control, and\naction, to encapsulate a self-contained, independent functionality through a clear\ninterface. 
We expect that this way of developing characters can enable better code \nreusability and speed up prototyping, testing, and debugging.\nMoreover, our proposed architecture provides the ground for revisiting problems and\ntechniques that have been extensively studied, such as pathfinding, and arriving at\nfeasible methods for developing believable characters with their own view of the\ntopology and connectivity of the game-world areas.\n\n \n\n\\clearpage\n\n\\bibliographystyle{aaai}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Overview}\n\nThe concept of a $G$-\\emph{principal bundle} for a topological or\nLie group $G$ is fundamental in \nclassical topology and differential\ngeometry, e.g. \\cite{Husemoeller}. More generally, for $G$ a \\emph{geometric}\ngroup in the sense of a \\emph{sheaf of groups} over some site, the notion of \n$G$-principal bundle or \\emph{$G$-torsor} is fundamental in \\emph{topos theory}\n\\cite{Johnstone,Moerdijk}.\nIts relevance rests in the fact that \n$G$-principal bundles constitute natural geometric representatives \nof cocycles in degree 1 nonabelian cohomology\n$H^1(-,G)$ and that general fiber bundles are \n\\emph{associated} to principal bundles.\n\nIn recent years it has become clear that\nvarious applications, notably in \n``\\emph{String-geometry}'' \\cite{SSS,Schreiber}, involve a notion of principal bundles where\ngeometric groups $G$ are generalized to \\emph{geometric grouplike $A_\\infty$-spaces},\nin other words\n\\emph{geometric $\\infty$-groups}: geometric objects that are equipped with a group\nstructure up to higher coherent homotopy. The resulting \\emph{principal $\\infty$-bundles}\nshould be natural geometric representatives of geometric nonabelian \\emph{hypercohomology}:\n{\\v C}ech cohomology with coefficients in arbitrary positive degree. 
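To fix intuition for the hypercohomology just mentioned, it may help to recall the familiar degree-1 cocycle data in the ordinary (1-categorical) case; the following illustration is ours and only restates standard material.

```latex
% Ordinary case: a degree-1 nonabelian Cech cocycle on a cover {U_i -> X}
% consists of transition functions subject to a strict cocycle condition:
\[
  g_{ij} \colon U_i \cap U_j \longrightarrow G\,,
  \qquad
  g_{ij}\, g_{jk} \;=\; g_{ik}
  \quad \text{on } U_i \cap U_j \cap U_k\,.
\]
% For a geometric $\infty$-group the identity g_{ij} g_{jk} = g_{ik} is
% instead required to hold only up to specified homotopies on triple
% overlaps, themselves coherent on all higher overlaps -- this is the
% Cech hypercohomology data referred to above.
```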
\n\nIn the \\emph{absence} of geometry, \nthese principal $\\infty$-bundles are essentially just the classical \\emph{simplicial} principal bundles \nof simplicial sets \\cite{May} (this we discuss in Section 4.1 of \\cite{NSSb}). However,\nin the presence of non-trivial geometry the situation is both more subtle and richer, and plain \nsimplicial principal bundles can only serve as a specific \\emph{presentation}\nfor the general notion (section 3.7.2 of \\cite{NSSb}).\n\nFor the case of \\emph{principal 2-bundles}, \nwhich is the first step after ordinary principal bundles,\naspects of a geometric definition and theory have been proposed and developed by\nvarious authors, see section 1 of \\cite{NSSb} for references and see \n\\cite{NikolausWaldorf} for a comprehensive discussion.\nNotably the notion of a \\emph{bundle gerbe} \\cite{Mur}\nis, when regarded as an extension of a {\\v C}ech-groupoid, \nalmost manifestly that of a principal 2-bundle, \neven though this perspective is not prominent in the respective literature.\nWe discuss these relations in detail in \\cite{NSSc}.\nThe oldest definition of geometric 2-bundles is conceptually different, but closely related:\nGiraud's \\emph{$G$-gerbes} \\cite{Giraud} are by definition not principal 2-bundles but are \nfiber 2-bundles \\emph{associated} to $\\mathbf{Aut}(\\mathbf{B}G)$-principal 2-bundles,\nwhere $\\mathbf{B}G$ is the \\emph{geometric moduli stack} of $G$-principal bundles.\nThis means that $G$-gerbes provide the universal \\emph{local coefficients}, in the sense\nof \\emph{twisted cohomology}, for $G$-principal bundles.\n\nFrom the definition of principal 2-bundles\/bundle gerbes it is fairly clear that these\nought to be just the first step (or second step) in an infinite tower of higher analogs. 
Accordingly, \ndefinitions of \\emph{principal 3-bundles} have been considered in the literature, \nmostly in the guise of\n\\emph{bundle 2-gerbes} \\cite{Stevenson} (we discuss the relation in \\cite{NSSc}). \nThe older notion\nof Breen's \\emph{$G$-2-gerbes} \\cite{Breen} (also discussed by Brylinski-MacLaughlin), \nis, as before, not that of a principal \n3-bundle, but that of a fiber 3-bundle which is\n\\emph{associated} to an $\\mathbf{Aut}(\\mathbf{B}G)$-principal 3-bundle, \nwhere now\n$\\mathbf{B}G$ is the \\emph{geometric moduli 2-stack} of $G$-principal 2-bundles\n(details are also in \\cite{NSSc}).\n\nGenerally, for every $n \\in \\mathbb{N}$ and every geometric $n$-group $G$, \nit is natural to consider the theory of \n$G$-principal $n$-bundles \\emph{twisted} by an $\\mathbf{Aut}(\\mathbf{B}G)$-principal\n$(n+1)$-bundle, hence by the associated \\emph{$G$-$n$-gerbe}. \nA complete theory of principal bundles therefore needs to involve \nthe notion of principal $n$-bundles \nand also that of twisted principal $n$-bundles\nin the limit as $n \\to \\infty$. \n\nAs $n$ increases, the piecemeal \nconceptualization of principal $n$-bundles quickly becomes tedious \nand their structure opaque, without a general theory of higher geometric\nstructures. In recent years such a theory -- long conjectured and with many precursors -- \nhas materialized in a comprehensive and elegant form, now known as \\emph{$\\infty$-topos theory} \n\\cite{ToenVezzosiToposb, Rezk, Lurie}. 
\nWhereas an ordinary topos is a category of \\emph{sheaves} over \nsome site\\footnote{Throughout \\emph{topos} here stands for \\emph{Grothendieck topos},\nas opposed to the more general notion of \\emph{elementary topos}.}, an \n$\\infty$-topos is an \\emph{$\\infty$-category} of \\emph{$\\infty$-sheaves} or equivalently\nof \\emph{$\\infty$-stacks} over some \\emph{$\\infty$-site}, where the prefix ``$\\infty-$''\nindicates that all these notions are generalized to structures up to \n\\emph{coherent higher homotopy} (as in the older terminology of \n$A_\\infty$-, $C_\\infty$-, $E_\\infty$- and $L_\\infty$-algebras, all of which re-appear\nas algebraic structures in $\\infty$-topos theory). \nIn as far as an ordinary topos is a context\nfor general \\emph{geometry}, an $\\infty$-topos is a context for what is called\n\\emph{higher geometry} or \\emph{derived geometry}: the pairing of the notion of\n\\emph{geometry} with that of \\emph{homotopy}.\n(Here ``derived'' alludes to \n``derived category'' and ``derived functor'' in homological algebra, \nbut refers in fact to a nonabelian generalization of these concepts.) \n\nAs a simple instance of this pairing, \none observes that\nfor any geometric abelian group (sheaf of abelian groups) \n$A$, the higher degree (sheaf) cohomology $H^{n+1}(-,A)$ in ordinary geometry\nmay equivalently be understood as the degree-1 cohomology\n$H^1(-,\\mathbf{B}^{n}A)$ in higher geometry, where \n$\\mathbf{B}^{n}A$ is the geometric $\\infty$-group obtained by\nsuccessively \\emph{delooping} $A$ geometrically. \nMore generally, there are geometric $\\infty$-groups $G$ not of this abelian form.\nThe general degree-1 geometric cohomology\n$H^1(X,G)$ is a nonabelian and simplicial generalization of \\emph{sheaf hypercohomology},\nwhose cocycles are morphisms $X \\to \\mathbf{B}G$ into the geometric delooping of $G$. 
\nIndeed, delooping plays a central role \nin $\\infty$-topos theory; \na fundamental fact of $\\infty$-topos theory (recalled as \nTheorem \\ref{DeloopingTheorem} below) says that, quite generally, under internal \nlooping and delooping, $\\infty$-groups $G$ in an $\\infty$-topos $\\mathbf{H}$ are equivalent\nto connected and \\emph{pointed} objects in $\\mathbf{H}$:\n$$\n \\xymatrix{\n \\left\\{\n\t \\mbox{\n\t groups in $\\mathbf{H}$\n\t }\n\t\\right\\}\n\t\t\\ar@{<-}@<+5pt>[rrr]^<<<<<<<<<<<<<{\\mbox{\\small looping}\\; \\Omega}\n\t\\ar@<-5pt>[rrr]_<<<<<<<<<<<<<{\\mbox{\\small delooping}\\; \\mathbf{B}}^<<<<<<<<<<<<<\\simeq\n &&&\n\t\\left\\{\n\t \\mbox{\\begin{tabular}{c} pointed connected \\\\ objects in $\\mathbf{H}$ \\end{tabular}}\n\t\\right\\}\n }\n \\,.\n$$\nWe will see that this fact plays a key role in the theory of \nprincipal $\\infty$-bundles. \n \nTopos theory is renowned for providing a general convenient context for the development\nof geometric structures. In some sense, $\\infty$-topos theory \nprovides an even more convenient context, due to the fact that\n\\emph{$\\infty$-(co)limits} or \\emph{homotopy (co)limits} in an $\\infty$-topos\nexist, and refine or \\emph{correct} the corresponding naive (co)limits. \nThis convenience manifests itself in the central definition of \nprincipal $\\infty$-bundles (Definition~\\ref{principalbundle} below): whereas the \ntraditional definition of a $G$-principal bundle over $X$ as a quotient map\n$P \\to P\/G \\simeq X$ requires the additional clause that the quotient be\n\\emph{locally trivial}, $\\infty$-topos theory comes pre-installed with the correct homotopy quotient for\nhigher geometry, and as a result the local triviality of $P \\to P\/\\!\/G =: X$ is automatic;\nwe discuss this in more detail in Section \\ref{PrincBund-Intro} below.\nHence \nconceptually, $G$-principal $\\infty$-bundles are in fact simpler than their\ntraditional analogs, and so their theory is stronger. 
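As a low-degree illustration of the looping/delooping equivalence just recalled, consider the following standard facts (our example, stated for $X$ a paracompact smooth manifold, and not specific to the formal development here).

```latex
% For the circle group A = U(1), regarded as a sheaf of smooth groups:
\[
  \Omega\, \mathbf{B}U(1) \;\simeq\; U(1)\,,
  \qquad
  H^1\big(X,\, \mathbf{B}U(1)\big)
  \;\simeq\;
  H^2\big(X,\, U(1)\big)
  \;\simeq\;
  H^3(X,\, \mathbb{Z})\,,
\]
% so cocycles with coefficients in the delooping B U(1) classify
% U(1)-principal 2-bundles (bundle gerbes), just as
% H^1(X, U(1)) = H^2(X, Z) classifies circle bundles.
```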
\n\n\n\nA central theorem of topos theory is \\emph{Giraud's theorem}, which \nintrinsically characterizes \ntoposes as those presentable categories that satisfy three simple \nconditions: 1.\\ coproducts are disjoint, 2.\\ colimits are preserved by pullback, and\n3.\\ quotients are effective. The analog of this characterization turns out\nto remain true essentially verbatim in $\\infty$-topos theory: \nthis is the \\emph{Giraud-To{\\\"e}n-Vezzosi-Rezk-Lurie} characterization\nof $\\infty$-toposes, recalled as Definition \\ref{GiraudRezkLurieAxioms} below. \nWe will show that given an $\\infty$-topos $\\mathbf{H}$, the second\nand the third of these axioms \nlead directly to the {\\em classification theorem} for principal $\\infty$-bundles \n(Theorem~\\ref{PrincipalInfinityBundleClassification} below) which states that \nthere is an equivalence of $\\infty$-groupoids \n\\[\n\\begin{array}{cccc}\nG \\mathrm{Bund}(X)\n\t &\n\t \\simeq\n\t &\n\t \\mathbf{H}(X, \\mathbf{B}G)\n\\\\\n\\\\\n \t\\left\\{\n\t \\begin{array}{c}\n\t \\mbox{$G$-principal $\\infty$-bundles}\n\t \\\\\n\t \\mbox{over $X$}\n\t \\end{array}\n\t \\raisebox{20pt}{\n\t \\xymatrix{\n\t P \\ar[d] \\\\ X\n\t }}\n\t\\right\\}\n &\\simeq&\n \t\\left\\{\n\t \\mbox{cocycles}\\;\\;\\;\n\t g : X \\to \\mathbf{B}G\n\t\\right\\}\n\t \t\\end{array}\n\\]\t\nbetween the $\\infty$-groupoid of $G$-principal $\\infty$-bundles on $X$, \nand the mapping space $\\mathbf{H}(X,\\mathbf{B}G)$. 
\n\nThe mechanism underlying the proof of this theorem is summarized \nin the following diagram, which is supposed to indicate that the geometric $G$-principal $\\infty$-bundle corresponding \nto a cocycle is nothing but the\ncorresponding homotopy fiber:\n $$\n \\raisebox{50pt}{\n \\xymatrix@C=-7pt@R=2pt{\n \\vdots && \\vdots\n \\\\\n P \\times G \\times G \n\t \\ar[rr] \\ar@<-4pt>[ddd] \\ar@<+0pt>[ddd] \\ar@<+4pt>[ddd] \n\t &&\n\tG \\times G\n \\ar@<-4pt>[ddd] \\ar@<+0pt>[ddd] \\ar@<+4pt>[ddd]\t\n \\\\\n\t\\\\\n \\\\\n P \\times G \\ar[rr] \\ar@<-3pt>[ddd]_{p_1} \\ar@<+3pt>[ddd]^\\rho \n\t&& G\n\t\\ar@<-3pt>[ddd]_{} \\ar@<+3pt>[ddd]\n \\\\\n\t\\\\ && & \\mbox{\\small $G$-$\\infty$-actions}\n\t\\\\\n P \\ar[rr] \\ar[ddd] && {*} \\ar[ddd] & \\mbox{\\small total objects}\n\t\\\\\n\t\\\\ & \\mbox{\\small $\\infty$-pullback} && \n\t\\\\\n\tX \\ar[rr]_{g } && \\mathbf{B}G & \\mbox{\\small quotient objects}\n\t\\\\\n\t\\mbox{\\small \\begin{tabular}{c} $G$-principal \\\\ $\\infty$-bundle \\end{tabular}}\n\t\\ar@{}[rr]_{\\mbox{\\small cocycle}}\n\t&& \\mbox{\\small \\begin{tabular}{c} universal \\\\ $\\infty$-bundle \\end{tabular}}\n }\n }\n$$\nThe fact that all geometric $G$-principal $\\infty$-bundles arise this\nway, up to equivalence, is quite useful in applications, and also sheds helpful light \non various existing constructions and provides more examples; we discuss this in \\cite{NSSc}. 
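For orientation, in the simplest case this classification mechanism recovers the classical statement (our paraphrase of standard facts): for $G$ an ordinary Lie group regarded in an $\infty$-topos of smooth $\infty$-stacks, connected components of the mapping space reproduce {\v C}ech cohomology.

```latex
\[
  \pi_0\, \mathbf{H}(X, \mathbf{B}G)
  \;\simeq\;
  \check{H}^1(X, G)
  \;\simeq\;
  \big\{\, \text{$G$-principal bundles over $X$} \,\big\}_{/\!\sim}\,,
\]
% so the homotopy-fiber description specializes to the usual construction
% of a principal bundle from its transition functions.
```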
\n\nNotably, the implication\nthat \\emph{every} geometric $\\infty$-action $\\rho : V \\times G \\to V$ \nof an $\\infty$-group $G$ on an object $V$ \nhas a classifying morphism $\\mathbf{c} : V\/\\!\/G \\to \\mathbf{B}G$,\ntightly connects the theory of associated $\\infty$-bundles with that of\nprincipal $\\infty$-bundles (Section~\\ref{StrucRepresentations} below):\nthe fiber sequence\n$$\n \\raisebox{20pt}{\n \\xymatrix{\n V \\ar[r] & V\/\\!\/G \n\t\\ar[d]^{\\mathbf{c}}\n\t\\\\\n\t& \\mathbf{B}G\n }\n }\n$$\nis found to be the $V$-fiber $\\infty$-bundle which is \\emph{$\\rho$-associated} \nto the universal $G$-principal $\\infty$-bundle $* \\to \\mathbf{B}G$. \nAgain, using the $\\infty$-Giraud axioms, \nan $\\infty$-pullback of $\\mathbf{c}$ along a cocycle $g_X : X \\to \\mathbf{B}G$ \nis identified with the $\\infty$-bundle $P \\times_G V$ that is \\emph{$\\rho$-associated}\nto the principal $\\infty$-bundle $P \\to X$ classified by $g_X$ \n(Proposition \\ref{RhoAssociatedToUniversalIsUniversalVBundle}) and \nevery $V$-fiber $\\infty$-bundle arises this way, associated to an \n$\\mathbf{Aut}(V)$-principal $\\infty$-bundle (Theorem \\ref{VBundleClassification}).\n\nUsing this, we may observe that the space $\\Gamma_X(P \\times_G V)$ \nof \\emph{sections} of $P \\times_G V$ is equivalently\nthe space $\\mathbf{H}_{\/\\mathbf{B}G}(g_X, \\mathbf{c})$ \nof cocycles $\\sigma : g_X \\to \\mathbf{c}$ in the slice $\\infty$-topos \n$\\mathbf{H}_{\/\\mathbf{B}G}$:\n$$\n \\begin{array}{cccc}\n \\Gamma_X(P \\times_G V)\n\t &\n\t \\simeq\n\t &\n\t \\mathbf{H}_{\/\\mathbf{B}G}(g_X, \\mathbf{c})\n\t \\\\\n\t \\\\\n \t\\left\\{\n\t \\raisebox{20pt}{\n\t \\xymatrix{\n\t P \\times_G V \\ar[r] \\ar[d] & V \/\\! \/ G\n\t\t \\ar[d]^\\rho\n\t \\\\\n\t X \\ar[r]_-{g_X} \n\t\t\\ar@\/^1pc\/@{-->}[u]^{\\sigma}\n\t\t& \\mathbf{B}G\n\t }\n\t }\n\t\\right\\}\n &\\simeq&\n \t\\left\\{\n\t \\raisebox{20pt}{\n\t \\xymatrix{\n\t & V \/\\! 
\/ G\n\t\t \\ar[d]^\\rho\n\t \\\\\n\t X \\ar[r]_-{g_X} \n\t\t\\ar@{-->}[ur]^{\\sigma}\n\t\t& \\mathbf{B}G\n\t }\n\t }\n\t\\right\\}\n\t\\end{array}\n$$\nMoreover, by the above classification theorem of $G$-principal $\\infty$-bundles, \n$g_X$ trivializes over some\ncover $\\xymatrix{U \\ar@{->>}[r] & X}$, \nand so the universal property of the $\\infty$-pullback implies that\n\\emph{locally} a section $\\sigma$ is a $V$-valued function\n$$\n \\raisebox{20pt}{\n \\xymatrix{\n & V \\ar[r] & V \/\\! \/ G\n\t \\ar[d]^{\\mathbf{c}}\n \\\\\n U \\ar@{->>}[r]_<<<{\\mathrm{cover}}\\ar@{-->}[ur]^{\\sigma|_U} & X \\ar[r]^{g_X} \n\t& \\mathbf{B}G\\, .\n }\n }\n$$\nFor $V$ an ordinary space, hence a \\emph{0-truncated} object in the $\\infty$-topos,\nthis is simply the familiar statement about sections of associated bundles. \nBut in higher geometry $V$ may more generally itself be a higher \nmoduli $\\infty$-stack, which makes the general theory of sections more\ninteresting.\nSpecifically, if $V$ is a pointed connected object, this\nmeans that it is locally a cocycle for an $\\Omega V$-principal $\\infty$-bundle, \nand so globally is a \\emph{twisted $\\Omega V$-principal $\\infty$-bundle}.\nThis identifies $\\mathbf{H}_{\/\\mathbf{B}G}(-, \\mathbf{c})$ as the \n\\emph{twisted cohomology} induced by the \\emph{local coefficient bundle $\\mathbf{c}$}\nwith \\emph{local coefficients} $V$.\nThis yields a geometric and unstable analogue \nof the \npicture of twisted cohomology discussed in \\cite{AndoBlumbergGepner}.\n\nGiven $V$, the most general \ntwisting group is the \\emph{automorphism $\\infty$-group} \n$\\mathbf{Aut}(V) \\hookrightarrow [V,V]_{\\mathbf{H}}$, \nformed in the $\\infty$-topos (Definition \\ref{InternalAutomorphismGroup}). 
If \n$V$ is pointed connected and hence of the form $V = \\mathbf{B}G$, this means that the\nmost general universal local coefficient bundle is\n$$\n \\raisebox{20pt}{\n \\xymatrix{\n \\mathbf{B}G \\ar[r] \n\t & \n\t (\\mathbf{B}G)\/\\!\/\\mathbf{Aut}(\\mathbf{B}G) \n\t \\ar[d]^{\\mathbf{c}_{\\mathbf{B}G}}\n\t \\\\\n\t & \\mathbf{B}\\mathbf{Aut}(\\mathbf{B}G).\n }}\n$$\nThe corresponding associated twisting $\\infty$-bundles are \n\\emph{$G$-$\\infty$-gerbes}: fiber $\\infty$-bundles with typical fiber \nthe moduli $\\infty$-stack $\\mathbf{B}G$. These are the\nuniversal local coefficients for twists of $G$-principal $\\infty$-bundles.\n\nWhile twisted cohomology in $\\mathbf{H}$ is hence identified simply with ordinary cohomology \nin a slice of $\\mathbf{H}$, the corresponding geometric representatives, the \n$\\infty$-bundles, do not translate to the \nslice quite as directly. The reason is that a universal local coefficient bundle\n$\\mathbf{c}$ as above is rarely a pointed connected object in the slice\n(if it is, then it is essentially trivial) and so the theory of principal \n$\\infty$-bundles does not directly apply to these coefficients. 
\nIn Section~\\ref{ExtensionsOfCohesiveInfinityGroups} we show that what \ndoes translate is a notion of\n\\emph{twisted $\\infty$-bundles}, a generalization of the twisted bundles known from \ntwisted K-theory: given a section $\\sigma : g_X \\to \\mathbf{c}$ as above, \nthe following pasting diagram of $\\infty$-pullbacks\n$$\n \\xymatrix{\n Q \n\t\\ar[d] \\ar[r] & {*} \\ar[d]\n\t&&\n\t\\mbox{$P$-twisted $\\Omega V$-principal $\\infty$-bundle}\n \\\\\n P \\ar[d] \\ar[r] & V \\ar[r] \\ar[d] & {*} \\ar[d] & \\mbox{$G$-principal $\\infty$-bundle}\n \\\\\n X \\ar[r]^\\sigma \\ar@\/_2pc\/[rr]_{g_X} & V\/\\!\/G \n \\ar[r]^{\\mathbf{c}} & \\mathbf{B}G & \\mbox{section of $\\rho$-associated $V$-$\\infty$-bundle}\n }\n$$\nnaturally identifies an $\\Omega V$-principal $\\infty$-bundle $Q$ on the total space $P$\nof the twisting $G$-principal $\\infty$-bundle, and since this is classified by \na $G$-equivariant morphism $P \\to V$ it enjoys itself a certain twisted $G$-equivariance\nwith respect to the defining $G$-action on $P$. 
We call such $Q \\to P$ the\n\\emph{$[g_X]$-twisted $\\Omega V$-principal bundle} classified by $\\sigma$.\nAgain, a special case of special importance is that where \n$V = \\mathbf{B}A$ is pointed connected, \nwhich identifies the universal $V$-coefficient bundle with an \n\\emph{extension of $\\infty$-groups}\n$$\n \\raisebox{20pt}{\n \\xymatrix{\n \\mathbf{B}A \\ar[r] & \\mathbf{B}\\hat G\n\t\\ar[d]\n\t\\\\\n\t& \\mathbf{B}G.\n }\n }\n$$\nAccordingly, $P$-twisted $A$-principal $\\infty$-bundles are equivalently\n\\emph{extensions} of $P$ to $\\hat G$-principal $\\infty$-bundles.\n\nA direct generalization of the previous theorem yields the \nclassification Theorem \\ref{ClassificationOfTwistedGEquivariantBundles}, \nwhich identifies $[g_X]$-twisted $A$-principal $\\infty$-bundles with cocycles in twisted cohomology\n$$\n \\begin{array}{cccc}\n \t A \\mathrm{Bund}^{[g_X]}(X)\n\t &\n\t \\simeq\n\t &\n\t \\mathbf{H}_{\/\\mathbf{B}G}(g_X, \\mathbf{c}_{\\mathbf{B}G})\n \\\\\n \\\\\n \t\\left\\{\n\t \\begin{array}{c}\n\t \\mbox{$[g_X]$-twisted}\n\t \\\\\n\t \\mbox{$A$-principal $\\infty$-bundles}\n\t \\\\\n\t \\mbox{over $X$}\n\t \\end{array}\n\t \\raisebox{40pt}{\n\t \\xymatrix{\n\t Q \\ar[d] \\\\ P = (g_X)^* {*} \\ar[d] \\\\ X\n\t }}\n\t\\right\\}\n &\\simeq&\n \t\\left\\{\n\t \\mbox{twisted cocycles}\\;\\;\\;\n\t \\sigma : g_X \\to \\mathbf{c}_{\\mathbf{B}G}\n\t\\right\\}\n\t\\end{array}\n$$\nFor instance if $\\mathbf{c}$ is the connecting homomorphism\n$$\n \\xymatrix{\n \\mathbf{B}\\hat G \\ar[r] & \\mathbf{B}G \\ar[d]^{\\mathbf{c}}\n\t\\\\\n\t& \\mathbf{B}^2 A\n }\n$$\nof a central \nextension of ordinary groups $A \\to \\hat G \\to G$, then\nthe corresponding twisted $\\hat G$-bundles are those known from geometric models of\ntwisted K-theory (discussed in \\cite{NSSc}).\n\nWhen the internal \\emph{Postnikov tower} of a coefficient object is regarded as \na sequence of local coefficient bundles as above, the induced \ntwisted $\\infty$-bundles are 
decompositions of nonabelian principal $\\infty$-bundles\ninto ordinary principal bundles together with\nequivariant abelian hypercohomology cocycles on their total spaces. \nThis construction identifies much of equivariant cohomology theory\nas a special case of higher nonabelian cohomology. \nSpecifically, when applied to a Postnikov stage of the delooping of an\n$\\infty$-group of internal automorphisms, the corresponding \ntwisted cohomology reproduces\nthe notion of Breen \\emph{$G$-gerbes with band} (Giraud's \\emph{lien}s); \nand the corresponding \ntwisted $\\infty$-bundles are their incarnation as \nequivariant \\emph{bundle gerbes} over principal bundles. \n\n\\medskip\n\n\nThe classification statements for principal and fiber $\\infty$-bundles in this article, \nTheorems \\ref{PrincipalInfinityBundleClassification} \nand \\ref{VBundleClassification}, are not surprising: they say exactly what one would\nhope for. It is however useful to see how they flow naturally from the abstract\naxioms of $\\infty$-topos theory, and to observe that they immediately imply \na series of classical as well as recent theorems as special cases; \nsee Remark \\ref{ReferencesOnClassificationOfVBundles}. 
\nAlso the corresponding long exact sequences in (nonabelian) cohomology,\nTheorem \\ref{LongExactSequenceInCohomology}, reproduce classical theorems; \nsee Remark \\ref{ReferencesOnLongSequences}.\nSimilarly the\ndefinition and classification of liftings of principal $\\infty$-bundles, \nTheorem \\ref{ExtensionsAndTwistedCohomology}, and of twisted principal $\\infty$-bundles \nin Theorem \\ref{ClassificationOfTwistedGEquivariantBundles} flow naturally\nfrom $\\infty$-topos theory, and yet they immediately imply various\nconstructions and results in the literature as special cases; see \nRemark \\ref{ReferencesOnLiftings} and Remark \\ref{ReferencesTwistedBundles}, respectively.\nIn particular the notion of nonabelian twisted cohomology itself is elementary\nin $\\infty$-topos theory,\nSection \\ref{TwistedCohomology}, and yet it sheds light on a wealth of \napplications; see Remark \\ref{ReferencesOnTwisted}.\n\nThis should serve to indicate that the theory of (twisted) principal $\\infty$-bundles\nis rich and interesting. \nThe present article is intentionally written in general abstraction only, aiming to present\nthe general theory of (twisted) principal $\\infty$-bundles as elegantly as possible, \ntrue to its roots in \nabstract higher topos theory. We believe that this makes the overall \npicture most transparent. In the companion article \n\\cite{NSSb} we give a complementary discussion and construct \nexplicit presentations of the structures appearing here that lend themselves\nto concrete computations. 
\nFinally in \\cite{NSSc} we use the combination of the general abstract \nformulation and its explicit presentations to discuss a list of \ninteresting examples and applications.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Preliminaries}\n\\label{structures}\n\nThe discussion of principal $\\infty$-bundles in \nSection~\\ref{Principal infinity-bundles general abstract} \nbelow builds on \nthe concept of an \\emph{$\\infty$-topos} and on a handful\nof basic structures and notions that are present in any $\\infty$-topos,\nin particular the notion of \\emph{group objects} and of \\emph{cohomology} with coefficients\nin these group objects. The relevant theory has been\ndeveloped in \\cite{ToenVezzosiToposb, Rezk, Lurie, LurieAlgebra}. \nWhile we assume the reader to be familiar with basic ideas of this theory, \nthe purpose of this section\nis to recall the main aspects that we need, to establish our notation, and to \nhighlight some aspects of the general theory \nthat are relevant to our discussion and which have perhaps \nnot been highlighted in this way in the\nexisting literature. \n\n\\medskip\n\nFor many purposes the notion of \\emph{$\\infty$-topos} is best thought of as \na generalization of the notion of a sheaf topos --- the category \nof sheaves over some site is replaced by an \n\\emph{$\\infty$-category of $\\infty$-stacks\/$\\infty$-sheaves}\nover some $\\infty$-site (there is also supposed to be a more general notion of an \n\\emph{elementary $\\infty$-topos}, which however we do not consider here). \nIn this context the $\\infty$-topos $\\mathrm{Gpd}_\\infty$ of $\\infty$-groupoids\nis the natural generalization of the punctual topos $\\mathrm{Set}$ to \n$\\infty$-topos theory. A major achievement\nof \\cite{ToenVezzosiToposb}, \\cite{Rezk} and \\cite{Lurie} was to provide a more intrinsic characterization of\n$\\infty$-toposes, which generalizes the classical characterization of \nsheaf toposes (Grothendieck toposes) originally given by Giraud. 
\nWe will show that the theory of \nprincipal $\\infty$-bundles is naturally expressed in terms of these intrinsic\nproperties, and therefore we here take these \\emph{Giraud-To{\\\"e}n-Vezzosi-Rezk-Lurie axioms}\nto be the very definition of an $\\infty$-topos \n(\\cite{Lurie}, Theorem 6.1.0.6; the main ingredients will be recalled below):\n\\begin{definition}[$\\infty$-Giraud axioms]\n \\label{GiraudRezkLurieAxioms}\n An \\emph{$\\infty$-topos} is a presentable $\\infty$-category $\\mathbf{H}$ that satisfies the\n following properties.\n \\begin{enumerate}\n \\item {\\bf Coproducts are disjoint.} For every two objects \n $A, B \\in \\mathbf{H}$, the intersection of $A$ and \n $B$ in their coproduct is the initial object: in other words the diagram\n\t$$\n\t \\xymatrix{\n\t \\emptyset \\ar[r] \\ar[d] & B \\ar[d]\n\t\t\\\\\n\t\tA \\ar[r] & A \\coprod B\n\t }\n\t$$\n\tis a pullback.\n\t\n\t\\item {\\bf Colimits are preserved by pullback.} \n\t For all morphisms $f\\colon X\\to B$ in $\\mathbf{H}$ and all \n\t small diagrams $A\\colon I\\to \\mathbf{H}_{\/B}$, there is an \n\t equivalence \n\t $$\n\t \\varinjlim_i f^*A_i \\simeq f^*(\\varinjlim_i A_i)\n\t $$\t \n\t between the pullback of the colimit and the colimit over the pullbacks of its\n\t components.\n\t\\item \n\t {\\bf Quotient maps are effective epimorphisms.} Every simplicial object\n\t $A_\\bullet : \\Delta^{\\mathrm{op}} \\to \\mathbf{H}$ that satisfies the\n\t groupoidal Segal property (Definition~\\ref{GroupoidObject}) is the {\\v C}ech nerve of its quotient projection:\n\t $$\n\t A_n \\simeq \n\t\tA_0 \\times_{\\varinjlim A_\\bullet} A_0 \\times_{\\varinjlim A_\\bullet} \\cdots \\times_{\\varinjlim A_\\bullet} A_0\n\t\t\\;\\;\\;\n\t\t\\mbox{($n+1$ factors)}\n\t\t\\,.\n\t $$\n \\end{enumerate}\n\\end{definition}\nRepeated application of the second and third axioms provides the proof of \nthe classification of principal $\\infty$-bundles, \nTheorem \\ref{PrincipalInfinityBundleClassification}, and the\nuniversality of the
universal associated $\\infty$-bundle, \nProposition \\ref{UniversalAssociatedBundle}.\n\nAn ordinary topos is famously characterized by the existence of a classifier object\nfor monomorphisms, the \\emph{subobject classifier}. \nWith hindsight, this statement already carries in it the seed \nof the close relation between topos theory and bundle theory, for we may think of \na monomorphism $E \\hookrightarrow X$ as being a \\emph{bundle of $(-1)$-truncated fibers}\nover $X$. The following definition axiomatizes the existence of arbitrary universal bundles:\n\\begin{definition}\n An \\emph{$\\infty$-topos} $\\mathbf{H}$ is a presentable $\\infty$-category with the following\n properties.\n \\begin{enumerate}\n\t\\item {\\bf Colimits are preserved by pullback.} \n\t\\item {\\bf There are universal $\\kappa$-small bundles.}\n\t For every sufficiently large regular cardinal $\\kappa$, there exists \n\t a morphism $\\widehat {\\mathrm{Obj}}_\\kappa \\to \\mathrm{Obj}_\\kappa$ in $\\mathbf{H}$, \n\t such that for every other object $X$, pullback along morphisms $X \\to \\mathrm{Obj}_\\kappa$\n\t constitutes an equivalence\n\t $$\n\t \\mathrm{Core}(\\mathbf{H}_{\/_{\\kappa}X})\n\t\t\\simeq\n\t\t\\mathbf{H}(X, \\mathrm{Obj}_\\kappa)\n\t $$\n\t between the $\\infty$-groupoid of bundles (morphisms) $E \\to X$ which are $\\kappa$-small over $X$\n\t and the $\\infty$-groupoid of morphisms from $X$ into $\\mathrm{Obj}_\\kappa$.\t \n \\end{enumerate}\n \\label{RezkCharacterization}\n\\end{definition}\nThese two characterizations of $\\infty$-toposes, Definition \\ref{GiraudRezkLurieAxioms}\nand Definition \\ref{RezkCharacterization}, are equivalent;\nthis is due to Rezk and Lurie, appearing as Theorem 6.1.6.8 in \\cite{Lurie}.\nWe find that the second of these axioms \ngives the equivalence between $V$-fiber bundles and \n$\\mathbf{Aut}(V)$-principal $\\infty$-bundles\nin Proposition \\ref{VBundleIsAssociated}.\n\n\nIn addition to these axioms, a basic property of $\\infty$-toposes\n(and generally
of $\\infty$-categories with pullbacks) which we will \nrepeatedly invoke, is the following.\n\\begin{proposition}[pasting law for pullbacks]\n Let $\\mathbf{H}$ be an $\\infty$-category with pullbacks. If \n $$\n \\xymatrix{\n\t A \\ar[r]\\ar[d] & B \\ar[r]\\ar[d] & C \\ar[d]\n\t \\\\\n\t D \\ar[r] & E \\ar[r] & F\n\t}\n $$\n is a diagram in $\\mathbf{H}$ such that the right \n square is an $\\infty$-pullback, then the left square is an $\\infty$-pullback precisely\n if the outer rectangle is.\n \\label{PastingLawForPullbacks}\n\\end{proposition}\nNotice that here and in all of the following\n\\begin{itemize}\n \\item all square diagrams are filled by a 2-cell, even if we do not indicate this\n notationally;\n \\item\n all limits are $\\infty$-limits\/homotopy limits\n\t(hence all pullbacks are $\\infty$-pullbacks\/homotopy pullbacks), and so on;\n\\end{itemize}\nthis is the only consistent way of speaking about $\\mathbf{H}$ in generality.\nOnly in the followup article \\cite{NSSb} do we consider presentations of \n$\\mathbf{H}$ by 1-categorical data; there we will draw a careful \ndistinction between 1-categorical\nlimits and $\\infty$-categorical\/homotopy limits.\n\n\n\\subsection{Epimorphisms and monomorphisms}\n\\label{StrucEpi}\n\nIn an $\\infty$-topos there is an infinite tower of notions of epimorphisms and monomorphisms:\nthe $n$-connected and $n$-truncated morphisms for all $-2 \\leq n \\leq \\infty$\n\\cite{Rezk, Lurie}. \nThe case when $n = -1$ is the most direct generalization of the 1-categorical notion, and this\nis what we need in the following. Here we briefly recall the main definitions and properties.\n\n\\medskip\n\n\n\\begin{definition}\nLet $\\mathbf{H}$ be an $\\infty$-topos. 
\nFor $X \\to Y$ any morphism in $\\mathbf{H}$, there is a\n simplicial object $\\check{C}(X \\to Y)$ in $\\mathbf{H}$ (the {\\em \\v{C}ech \n nerve} of $f\\colon X\\to Y$) which in degree $n$ is the \n $(n+1)$-fold $\\infty$-fiber product of $X$ over $Y$ with itself\n $$\n \\check{C}(X \\to Y) : [n] \\mapsto X^{\\times^{n+1}_Y}\n $$\n A morphism $f : X \\to Y$ in $\\mathbf{H}$ is an \\emph{effective epimorphism}\n if it is the colimiting cocone under its own {\\v C}ech nerve:\n $$\n f : X \\to \\varinjlim \\check{C}(X\\to Y)\n\t\\,.\n $$ \n Write $\\mathrm{Epi}(\\mathbf{H}) \\subset \\mathbf{H}^I$ for the collection of \n effective epimorphisms.\n \\label{EffectiveEpi}\n\\end{definition}\n\\begin{proposition}\n A morphism $f : X \\to Y$ in the $\\infty$-topos $\\mathbf{H}$\n is an effective epimorphism precisely if its 0-truncation\n $\\tau_0 f : \\tau_0 X \\to \\tau_0 Y$ is an epimorphism (necessarily effective)\n in the 1-topos $\\tau_{\\leq 0} \\mathbf{H}$.\n \\label{EffectiveEpiIsEpiOn0Truncation}\n\\end{proposition}\nThis is Proposition 7.2.1.14 in \\cite{Lurie}.\n\\begin{proposition}\n The classes $( \\mathrm{Epi}(\\mathbf{H}), \\mathrm{Mono}(\\mathbf{H}) )$\n constitute an orthogonal factorization system.\n \\label{EpiMonoFactorizationSystem}\n\\end{proposition}\nThis is Proposition 8.5 in \\cite{Rezk} and Example 5.2.8.16 in \\cite{Lurie}.\n\\begin{definition}\n For $f : X \\to Y$ a morphism in $\\mathbf{H}$, we write its\n epi\/mono factorization given by Proposition \\ref{EpiMonoFactorizationSystem}\n as\n $$\n f : \n \\xymatrix{\n\t X \\ar@{->>}[r] & \\mathrm{im}(f)\\ \\ar@{^{(}->}[r] & Y \n\t}\n $$\n and we call $\\xymatrix{\\mathrm{im}(f)\\ \\ar@{^{(}->}[r] & Y}$ the \\emph{$\\infty$-image}\n of $f$.\n \\label{image}\n\\end{definition}\n\n\n\n\\subsection{Groupoids}\n\\label{StrucInftyGroupoids}\n\nIn any $\\infty$-topos $\\mathbf{H}$ we may consider groupoids \\emph{internal}\nto $\\mathbf{H}$, in the sense of internal category theory \n(as exposed 
for instance in the introduction of \\cite{Lurie2}). \n\nSuch a \\emph{groupoid object} \n$\\mathcal{G}$ in $\\mathbf{H}$ is an $\\mathbf{H}$-object $\\mathcal{G}_0$ ``of $\\mathcal{G}$-objects''\ntogether with an $\\mathbf{H}$-object $\\mathcal{G}_1$ ``of $\\mathcal{G}$-morphisms''\nequipped with source and target assigning morphisms $s,t : \\mathcal{G}_1 \\to \\mathcal{G}_0$,\nan identity-assigning morphism $i : \\mathcal{G}_0 \\to \\mathcal{G}_1$ and a composition\nmorphism $\\mathcal{G}_1 \\times_{\\mathcal{G}_0} \\mathcal{G}_1 \\to \\mathcal{G}_1$\nwhich together satisfy all the axioms of a groupoid (unitality, associativity, existence of\ninverses) up to coherent homotopy in $\\mathbf{H}$. One way to formalize what it\nmeans for these axioms to hold up to coherent homotopy is as follows.\n\nOne notes that\nordinary groupoids, i.e.\\ groupoid objects internal to $\\mathrm{Set}$, are \ncharacterized by the fact that their nerves are simplicial sets \n$\\mathcal{G}_\\bullet : \\Delta^{\\mathrm{op}} \\to \\mathrm{Set}$ \nwith the property that the groupoidal Segal maps \n\\[\n\\mathcal{G}_n\\to \\mathcal{G}_1\\times_{\\mathcal{G}_0} \n\\mathcal{G}_1\\times_{\\mathcal{G}_0} \\cdots \\times_{\\mathcal{G}_0} \n\\mathcal{G}_1 \n\\]\nare isomorphisms for all $n\\geq 2$. 
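\nFor instance, for an ordinary group $G$ regarded as a groupoid with a single object, the nerve is the\nsimplicial set with $\\mathcal{G}_0 = *$ and $\\mathcal{G}_n = G^{\\times n}$, and the groupoidal Segal map\n\\[\nG^{\\times n}\n\\to\nG\\times_{*} G\\times_{*} \\cdots \\times_{*} G\n\\cong\nG^{\\times n}\n\\]\nis the identity, so that the condition is trivially satisfied in this case.\n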
This last condition \nis stated precisely in Definition~\\ref{GroupoidObject} below, \nand clearly gives a characterization of groupoids that makes sense more generally, in\nparticular it makes sense internally to higher categories: \na groupoid object in $\\mathbf{H}$ is an $\\infty$-functor \n$\\mathcal{G} : \\Delta^{\\mathrm{op}} \\to \\mathbf{H}$ such that all groupoidal \nSegal morphisms are equivalences in $\\mathbf{H}$.\nThese $\\infty$-functors $\\mathcal{G}$ form the \nobjects of an $\\infty$-category $\\mathrm{Grpd}(\\mathbf{H})$\nof groupoid objects in $\\mathbf{H}$.\n\nHere a subtlety arises that is the source of a lot of interesting structure\nin higher topos theory: the objects of $\\mathbf{H}$ are themselves\n ``structured $\\infty$-groupoids''. Indeed, there is a \nfull embedding $\\mathrm{const} : \\mathbf{H} \\hookrightarrow \\mathrm{Grpd}(\\mathbf{H})$\nthat forms constant simplicial objects and thus regards every object $X \\in \\mathbf{H}$\nas a groupoid object which, even though it has a trivial object of morphisms, already\nhas a structured $\\infty$-groupoid of objects. This embedding is in fact \nreflective, with the reflector given by forming the $\\infty$-colimit\nover a simplicial diagram, the ``geometric realization''\n$$\n \\xymatrix{\n \\mathbf{H}\n \\ar@{<-}@<+1.25ex>[rr]^-{\\varinjlim}\n\t \\ar@{^{(}->}@<-1.25ex>[rr]_-{\\mathrm{const}}^-{\\perp}\n\t &&\n\t \\mathrm{Grpd}(\\mathbf{H})\n }\n \\,.\n$$\nFor $\\mathcal{G}$ a groupoid object in $\\mathbf{H}$, the object \n$\\varinjlim \\mathcal{G}_\\bullet$ in $\\mathbf{H}$ \nmay be thought of as the \n$\\infty$-groupoid obtained by ``gluing together the object of objects of\n$\\mathcal{G}$ along the object of morphisms of $\\mathcal{G}$''. 
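\nFor instance, for $U \\to X$ an effective epimorphism (Definition~\\ref{EffectiveEpi}), such as a cover in an\n$\\infty$-topos of $\\infty$-sheaves over a site of spaces, the {\\v C}ech nerve $\\check{C}(U \\to X)$ is such a\ngroupoid object, and by the very definition of effective epimorphism its geometric realization recovers the base:\n$$\n \\varinjlim \\check{C}(U \\to X)_\\bullet \\simeq X\n \\,.\n$$\n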
\nThis idea that groupoid objects in an $\\infty$-topos are \nlike structured $\\infty$-groupoids together with gluing information \nis formalized by the statement recalled as \nTheorem~\\ref{NaturalThirdGiraud} below, which says that groupoid objects in \n$\\mathbf{H}$ are equivalent to the \\emph{effective epimorphisms} \n$\\xymatrix{Y \\ar@{->>}[r] & X}$ in $\\mathbf{H}$, the intrinsic notion of\n\\emph{cover} (of $X$ by $Y$) in $\\mathbf{H}$. The effective epimorphism\/cover\ncorresponding to a groupoid object $\\mathcal{G}$ is the colimiting cocone\n$\\xymatrix{\\mathcal{G}_0 \\ar@{->>}[r] & \\varinjlim \\mathcal{G}_\\bullet}$.\n\n\\medskip\n\nAfter this preliminary discussion we state the \nfollowing definition of groupoid object in \nan $\\infty$-topos (this definition appears in \\cite{Lurie}\nas Definition 6.1.2.7, using Proposition 6.1.2.6).\n\n\\begin{definition}[\\cite{Lurie}, Definition 6.1.2.7] \n \\label{GroupoidObject}\n A \\emph{groupoid object} in an $\\infty$-topos $\\mathbf{H}$ is \n a simplicial object\n $$\n \\mathcal{G} : \\Delta^{\\mathrm{op}} \\to \\mathbf{H}\n $$\n all of whose groupoidal Segal maps are equivalences: \n in other words, for every\n $n \\in \\mathbb{N}$\n and every partition $[k] \\cup [k'] = [n]$ into two subsets \n such that $[k] \\cap [k'] = \\{*\\}$, the canonical diagram\n \\[\n \\xymatrix{\n \\mathcal{G}_n \\ar[r] \\ar[d] & \\mathcal{G}_k \\ar[d]\n \\\\\n \\mathcal{G}_{k'} \\ar[r] & \\mathcal{G}_{0}\n }\n \\]\n is an $\\infty$-pullback diagram. We write\n \\[\n \\mathrm{Grpd}(\\mathbf{H}) \\subset \\mathrm{Func}(\\Delta^{\\mathrm{op}}, \\mathbf{H})\n \\]\n for the full subcategory of the $\\infty$-category of simplicial\n objects in $\\mathbf{H}$ on the groupoid objects. \n\\end{definition}\nThe following example is fundamental. In fact the third $\\infty$-Giraud axiom\nsays that up to equivalence, all groupoid objects are of this form. 
\n\\begin{example}\n For $X \\to Y$ any morphism in $\\mathbf{H}$, the \\v{C}ech \n nerve $\\check{C}(X\\to Y)$ of $X\\to Y$ (Definition~\\ref{EffectiveEpi}) is a\n groupoid object. This example appears in \\cite{Lurie} as Proposition 6.1.2.11.\n\\end{example}\n \n\nThe following statement refines\nthe third $\\infty$-Giraud axiom, Definition \\ref{GiraudRezkLurieAxioms}.\n\\begin{theorem}\n \\label{NaturalThirdGiraud}\n There is a natural equivalence of $\\infty$-categories\n $$\n \\mathrm{Grpd}(\\mathbf{H})\n \\simeq\n (\\mathbf{H}^{\\Delta[1]})_{\\mathrm{eff}}\n \\,,\n $$ \n where $(\\mathbf{H}^{\\Delta[1]})_{\\mathrm{eff}}$ \n is the full sub-$\\infty$-category of the \n arrow category $\\mathbf{H}^{\\Delta[1]}$ \n of $\\mathbf{H}$ on the effective epimorphisms, Definition \\ref{EffectiveEpi}.\n\\end{theorem}\nThis appears below Corollary 6.2.3.5 in \\cite{Lurie}.\n\n\n\\subsection{Groups}\n\\label{StrucInftyGroups}\n\nEvery $\\infty$-topos \ncomes with a notion of \\emph{$\\infty$-group objects} that generalizes both the\nordinary notion of group objects in a topos and that of\ngrouplike $A_\\infty$-spaces in $\\mathrm{Top} \\simeq \\mathrm{Grpd}_{\\infty}$.\n\n\\medskip\n\nThroughout the following, let $\\mathbf{H}$ be an $\\infty$-topos.\nAn explicit definition of group objects in $\\mathbf{H}$ is the following \n(this appears as Definition 5.1.3.2 together with\nRemark 5.1.3.3 in \\cite{LurieAlgebra}). \n\\begin{definition}[Lurie \\cite{LurieAlgebra}] \n\\label{inftygroupinootopos}\n An \\emph{$\\infty$-group} in $\\mathbf{H}$\n is an $A_\\infty$-algebra $G$ in $\\mathbf{H}$ such that\n the sheaf of connected components\n $\\pi_0(G)$ is a group object in $\\tau_{\\leq 0} \\mathbf{H}$. 
\n Write\n $\\mathrm{Grp}(\\mathbf{H})$ for the $\\infty$-category\n of $\\infty$-groups in $\\mathbf{H}$.\n\\end{definition}\n\nWe will mostly conceive group objects in $\\mathbf{H}$ as loop space\nobjects of connected objects.\n\\begin{definition}\n Write \n \\begin{itemize}\n \\item $\\mathbf{H}^{*\/}$ for the\n $\\infty$-category of pointed objects in $\\mathbf{H}$;\n \\item $\\mathbf{H}_{\\geq 1}$ \n for the full sub-$\\infty$-category of $\\mathbf{H}$ on the \n connected objects;\n \\item\n $\\mathbf{H}^{*\/}_{\\geq 1}$ for the full sub-$\\infty$-category\n of the pointed objects on the connected objects.\n \\end{itemize}\n\\end{definition}\n\\begin{definition}\n \\label{loop space object}\n Write \n $$\n \\Omega : \\mathbf{H}^{*\/} \\to \\mathbf{H}\n $$\n for the $\\infty$-functor that sends a pointed object $* \\to X$\n to its \\emph{loop space object}, i.e.\\ the $\\infty$-pullback\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n \\Omega X \\ar[r]\\ar[d] & {*} \\ar[d]\n \\\\\n {*} \\ar[r] & X\\,.\n }\n }\n $$\n\\end{definition}\n\\begin{theorem}[Lurie]\n \\label{DeloopingTheorem}\n \\label{delooping}\n Every loop space object canonically has the structure of an \n $\\infty$-group, and this construction extends to an \n $\\infty$-functor\n $$\n \\Omega : \\mathbf{H}^{*\/} \\to \\mathrm{Grp}(\\mathbf{H})\n \\,.\n $$\n This $\\infty$-functor constitutes part of an equivalence of $\\infty$-categories\n $$\n (\\Omega \\dashv \\mathbf{B})\n :\n \\xymatrix{\n \\mathrm{Grp}(\\mathbf{H})\n \\ar@{<-}@<+5pt>[r]^<<<<<<{\\Omega}\n \\ar@<-5pt>[r]_<<<<<<{\\mathbf{B}}^<<<<<<\\simeq\n &\n \\mathbf{H}^{*\/}_{\\geq 1}\\, .\n } \n $$ \n\\end{theorem}\nThis is Lemma 7.2.2.1 in \\cite{Lurie}. \n(See also Theorem 5.1.3.6 of \\cite{LurieAlgebra} \nwhere this is the equivalence denoted $\\phi_0$ in the proof.) 
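\nIn particular, the unit of this equivalence exhibits every $\\infty$-group as the loop space object of its\ndelooping:\n$$\n G \\simeq \\Omega \\mathbf{B}G = {*} \\times_{\\mathbf{B}G} {*}\n \\,.\n$$\n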
For \n$\\mathbf{H} = \\mathrm{Grpd}_{\\infty} \\simeq \\mathrm{Top}$ this \nreduces to various classical\ntheorems in homotopy theory, for instance the \nconstruction of classifying spaces (Kan and Milnor) and de-looping theorems (May and Segal).\n\\begin{definition}\nWe call the inverse \n$\\mathbf{B} : \\mathrm{Grp}(\\mathbf{H}) \\to \\mathbf{H}^{*\/}_{\\geq 1}$\nin Theorem~\\ref{DeloopingTheorem} above the \n\\emph{delooping} functor of $\\mathbf{H}$. By convenient abuse\nof notation we write $\\mathbf{B}$ also for the composite\n$\\mathbf{B} : \\mathrm{Grp}(\\mathbf{H}) \\to \\mathbf{H}^{*\/}_{\\geq 1} \n\\to \\mathbf{H}$ with the functor that forgets the basepoint and the\nconnectivity.\n\\end{definition}\n\\begin{remark}\n Even if the connected objects involved admit an essentially\n unique point,\n the homotopy type of the full hom-$\\infty$-groupoid \n $\\mathbf{H}^{*\/}(\\mathbf{B}G, \\mathbf{B}H)$ \n of pointed objects in general differs\n from the hom-$\\infty$-groupoid $\\mathbf{H}(\\mathbf{B}G, \\mathbf{B}H)$\n of the underlying unpointed objects. \n For instance let $\\mathbf{H} := \\mathrm{Grpd}_{\\infty}$ and let $G$ be \n an ordinary group, regarded as a group object in $\\mathrm{Grpd}_{\\infty}$.\n Then $\\mathbf{H}^{*\/}(\\mathbf{B}G, \\mathbf{B}G) \\simeq \\mathrm{Aut}(G)$\n is the ordinary automorphism group of $G$, but\n $\\mathbf{H}(\\mathbf{B}G, \\mathbf{B}G) = \\mathrm{Aut}(\\mathbf{B}G)$ is the \n automorphism 2-group of $G$; we discuss this \n further around Example~\\ref{automorphism2GroupAbstractly} below.\n\\end{remark}\n\\begin{proposition}[Lurie]\n \\label{InfinityGroupObjectsAsGroupoidObjects}\n $\\infty$-groups $G$ in $\\mathbf{H}$ are equivalently \n those groupoid objects $\\mathcal{G}$ in $\\mathbf{H}$ (Definition~\\ref{GroupoidObject}) \n for which $\\mathcal{G}_0 \\simeq *$. 
\n\\end{proposition}\nThis is the statement of the compound equivalence\n$\\phi_3\\phi_2\\phi_1$ in the proof of Theorem 5.1.3.6 in \n\\cite{LurieAlgebra}.\n\\begin{remark}\n \\label{PointIntoBGIsEffectiveEpimorphism}\n \\label{Cech nerve of * -> BG}\n This means that for $G$ an $\\infty$-group object, \n the {\\v C}ech nerve extension of its delooping fiber\n sequence $G \\to * \\to \\mathbf{B}G$ is the simplicial \n object\n $$\n \\xymatrix{\n \\cdots\n \\ar@<+6pt>[r] \\ar@<+2pt>[r] \\ar@<-2pt>[r] \\ar@<-6pt>[r]\n &\n G \\times G\n \\ar@<+4pt>[r] \n \\ar[r] \n \\ar@<-4pt>[r] \n & \n G \n \\ar@<+2pt>[r] \\ar@<-2pt>[r] \n & \n {*}\n \\ar@{->>}[r]\n &\n \\mathbf{B}G\n }\n $$\n that exhibits $G$ as a groupoid object over $*$.\n In particular it means that for $G$ an $\\infty$-group, the\n essentially unique morphism $* \\to \\mathbf{B}G$\n is an effective epimorphism.\n \\end{remark}\n\n\n\n\n\\subsection{Cohomology}\n\\label{StrucCohomology}\n\\label{section.Cohomology}\n\nThere is an intrinsic notion of \\emph{cohomology} \nin every $\\infty$-topos $\\mathbf{H}$: it is simply given by the \nconnected components of mapping spaces. Of course such mapping spaces\nexist in every $\\infty$-category, but we need some extra conditions on \n$\\mathbf{H}$ in order for them to behave like cohomology sets. For\ninstance, if $\\mathbf{H}$ has pullbacks then there is a notion of\nlong exact sequences in cohomology. \nOur main theorem (Theorem~\\ref{PrincipalInfinityBundleClassification} below) will show that the second\nand third $\\infty$-Giraud axioms imply that this intrinsic\nnotion of cohomology has the property that it \\emph{classifies} \ncertain geometric structures in the $\\infty$-topos.\n\n\\begin{definition}\n \\label{cohomology}\nFor $X,A \\in \\mathbf{H}$ two objects, we say that\n$$\n H^0(X,A) := \\pi_0 \\mathbf{H}(X,A)\n$$\nis the \\emph{cohomology set}\\index{cohomology!general abstract} of $X$ with coefficients in $A$. 
\nIn particular if $G$ is an $\\infty$-group we write\n$$\n H^1(X,G) := H^0(X,\\mathbf{B}G) = \\pi_0 \\mathbf{H}(X, \\mathbf{B}G)\n$$\nfor cohomology with coefficients in the delooping \n$\\mathbf{B}G$ of $G$. \nGenerally, if $K \\in \\mathbf{H}$ has an $n$-fold delooping \n$\\mathbf{B}^nK$ for some non-negative integer $n$, we write\n$$\n H^n(X,K) := H^0(X,\\mathbf{B}^n K) = \\pi_0 \\mathbf{H}(X, \\mathbf{B}^n K)\n \\,.\n$$\n\\end{definition}\nIn the context of cohomology on $X$ with coefficients in $A$ we say that\n\\begin{itemize}\n\\item\n the hom-space $\\mathbf{H}(X,A)$ is the \\emph{cocycle $\\infty$-groupoid}\\index{cocycle};\n\\item\n an object $g : X \\to A$ in $\\mathbf{H}(X,A)$ is a \\emph{cocycle};\n\\item \n a morphism $g \\Rightarrow h$ in $\\mathbf{H}(X,A)$ is a \\emph{coboundary} between cocycles;\n\\item \n a morphism $c : A \\to B$ in $\\mathbf{H}$ \n represents the \\emph{characteristic class}\\index{characteristic class!general abstract}\n $$\n [c] : H^0(-,A) \\to H^0(-,B)\n \\,.\n $$\n\\end{itemize}\nIf $X\\simeq Y\/\\!\/G$ is a homotopy quotient, then the cohomology of $X$ is \nthe $G$-equivariant cohomology of $Y$. Similarly, for general $X$ this notion of cohomology\nincorporates various local notions of equivariance.\n\\begin{remark}\n \\label{CohomologyOverX}\nOf special interest is the cohomology defined by a slice \n$\\infty$-topos \n$$\n \\mathcal{X} := \\mathbf{H}_{\/X}\n$$ \nover some $X \\in \\mathbf{H}$.\nSuch a slice is canonically equipped with the \n{\\'e}tale geometric morphism (\\cite{Lurie}, Remark 6.3.5.10)\n$$\n (X_! \\dashv X^* \\dashv X_*)\n :\n \\xymatrix{\n \\mathbf{H}_{\/X}\n\t \\ar@<1.3ex>[rr]^-{X_!}\n\t \\ar@{<-}[rr]|-{X^*}\n\t \\ar@<-1.3ex>[rr]_-{X_*}\n\t &&\n\t \\mathbf{H}\n } \n \\,,\n$$\nwhere $X_!$ simply forgets the morphism to $X$ and \nwhere $X^* = X \\times (-)$ forms the product with $X$. \nAccordingly $X^* (*_{\\mathbf{H}}) \\simeq *_{\\mathcal{X}} =: X$ \nand $X_! (*_{\\mathcal{X}}) = X \\in \\mathbf{H}$. 
Therefore \ncohomology over $X$ with coefficients of the form $X^* A$ is \nequivalently the cohomology in $\\mathbf{H}$ of $X$ with coefficients in $A$:\n$$\n \\mathcal{X}(X, X^* A) \\simeq \\mathbf{H}(X,A)\n \\,.\n$$ \nBut for a general coefficient object $A \\in \\mathcal{X}$ the \n$A$-cohomology over $X$ in $\\mathcal{X}$ is a \n\\emph{twisted} cohomology of $X$ in $\\mathbf{H}$.\nThis we discuss below in Section~\\ref{TwistedCohomology}.\n\\end{remark}\nTypically one thinks of a morphism $A \\to B$ in \n$\\mathbf{H}$ as presenting a \\emph{characteristic class} of $A$ if\n$B$ is ``simpler'' than $A$, notably if $B$ is an Eilenberg-MacLane object $B = \\mathbf{B}^n K$ for\n$K$ a 0-truncated abelian group in $\\mathbf{H}$. In this case the characteristic class may\nbe regarded as being in the degree-$n$ $K$-cohomology of $A$\n$$\n [c] \\in H^n(A,K)\n \\,.\n$$\n\n\\begin{definition}\n For $f : Y \\to Z$ any morphism in $\\mathbf{H}$\n and $z : * \\to Z$ a point, the \\emph{$\\infty$-fiber} or\n \\emph{homotopy fiber} of $f$ over this point is the $\\infty$-pullback \n $ X := {*} \\times_Z Y$\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n X \\ar[r] \\ar[d]& {*} \\ar[d]\n \\\\\n Y \\ar[r]^f & Z\\,.\n }\n\t}\n $$\n\\end{definition}\n\\begin{observation}\n Let $f\\colon Y\\to Z$ in $\\mathbf{H}$ \n be as above. Suppose that $Y$ is pointed and $f$ is a morphism of pointed objects.\n Then the $\\infty$-fiber of an $\\infty$-fiber is the loop object of the base.\n\\end{observation}\nThis means that we have a diagram\n $$\n \\xymatrix{\n \\Omega Z \\ar[d] \\ar[r] & X \\ar[r] \\ar[d]& {*} \\ar[d]\n \\\\\n {*} \\ar[r] & Y \\ar[r]^f & Z\n }\n $$\nwhere the outer rectangle is an $\\infty$-pullback if the left square is an \n$\\infty$-pullback. 
This follows from the pasting law, Proposition~\\ref{PastingLawForPullbacks}.\n\\begin{definition}\n \\label{fiber sequence}\n \\label{LongFiberSequence}\nFor every morphism $c : \\mathbf{B}G \\to \\mathbf{B}H \\in \\mathbf{H}$ define the \n\\emph{long fiber sequence to the left}\n$$\n \\cdots \n \\to \n \\Omega G\n \\to \n \\Omega H \n \\to \n F\n \\to \n G \n \\to \n H \n \\to \n \\mathbf{B} F \n \\to \n \\mathbf{B}G\n \\stackrel{c}{\\to} \n \\mathbf{B}H\n$$\nby the consecutive pasting diagrams of $\\infty$-pullbacks\n$$\n \\xymatrix{\n & \\ar@{..}[d] & \\ar@{..}[d]\n \\\\\n \\ar@{..}[r] & F \\ar[d]\\ar[r]& G \\ar[r] \\ar[d] & {*} \\ar[d]\n \\\\\n & {*} \\ar[r] & H \\ar[r] \\ar[d] & \\mathbf{B}F \\ar[r] \\ar[d] & {*} \\ar[d]\n \\\\\n & & {*} \\ar[r] & \\mathbf{B}G \\ar[r]^c & \\mathbf{B}H\n }\n$$\n\\end{definition}\n\\begin{theorem}\n\\begin{enumerate}\n\\item The long fiber sequence to the left of \n$c : \\mathbf{B}G \\to \\mathbf{B}H$ becomes constant on the point after $n$ iterations if $H$ is $n$-truncated. \n\\item For every object $X \\in \\mathbf{H}$ we have a long exact sequence of pointed cohomology sets\n $$ \n \\cdots \\to H^0(X,G) \\to H^0(X,H) \\to H^1(X,F) \\to H^1(X,G) \\to H^1(X,H)\n \\,.\n $$\n\\end{enumerate}\n \\label{LongExactSequenceInCohomology}\n\\end{theorem}\n\\proof\nThe first statement follows from the observation that a \nloop space object $\\Omega_x A$ is a fiber of the free loop space object $\\mathcal{L} A$\nand that this may equivalently be computed by the \n$\\infty$-powering $A^{S^1}$, where $S^1 \\in \\mathrm{Top} \\simeq \\mathrm{Grpd}_{\\infty}$ is the circle. 
\n\nThe second statement follows by observing that the \n$\\infty$-hom-functor \n$$\n \\mathbf{H}(X,-) : \\mathbf{H} \\to \\mathrm{Grpd}_\\infty\n$$\npreserves all $\\infty$-limits, so that we have $\\infty$-pullbacks in $\\mathrm{Grpd}_{\\infty}$ of the form \n$$ \n \\xymatrix{\n \\mathbf{H}(X,F) \\ar[r] \\ar[d] & {*} \\ar[d]\n \\\\\n \\mathbf{H}(X,G) \\ar[r] & \\mathbf{H}(X,H)\n }\n$$\nat each stage of the fiber sequence. \nThe statement then follows from the familiar long exact sequence for homotopy groups \nin $\\mathrm{Top} \\simeq \\mathrm{Grpd}_{\\infty}$.\n\\hfill{$\\square$}\\\\\n\\begin{remark}\n For the special case that $G$ is a 1-truncated $\\infty$-group (or\n \\emph{2-group}) Theorem \\ref{LongExactSequenceInCohomology} is \n a classical result due to \\cite{BreenBitorseurs}. The first and\n only nontrivial\n stage of the internal Postnikov tower \n $$\n \\xymatrix{\n\t \\mathbf{B}^2 A \\ar[r] & \\mathbf{B}G \\ar[d]\n\t \\\\\n\t & \\mathbf{B} H\n\t}\n $$ \n of the delooped 2-group (with $H := \\tau_0 G\\in \\tau_{\\leq 0} \\mathrm{Grp}(\\mathbf{H})$ \n an ordinary group object and $A := \\pi_1 G \\in \\tau_{\\leq 0} \\mathrm{Grp}(\\mathbf{H})$\n an ordinary abelian group object) yields the long exact sequence of pointed cohomology\n sets\n $$\n 0 \\to H^1(-,A) \\to H^0(-,G) \\to H^0(-,H) \\to H^2(-,A) \\to H^1(-,G) \\to \n\tH^1(-,H) \\to H^3(-,A)\n $$\n (see also \\cite{NikolausWaldorf2}.)\n Notably, the last morphism gives the obstructions against lifting traditional\n nonabelian cohomology $H^1(-,H)$ to nonabelian cohomology $H^1(-,G)$ with values\n in the 2-group. 
This we discuss further in Section \\ref{ExtensionsOfCohesiveInfinityGroups}.\n \\label{ReferencesOnLongSequences}\n\\end{remark}\n\nGenerally, to every cocycle $g : X \\to \\mathbf{B}G$ is \ncanonically associated its $\\infty$-fiber \n$P \\to X$ in $\\mathbf{H}$, the $\\infty$-pullback\n$$\n \\raisebox{20pt}{\n \\xymatrix{\n P \\ar[r] \\ar[d]& {*} \\ar[d]\n \\\\\n X \\ar[r] ^g & \\mathbf{B}G\n \\,.\n }\n }\n$$\nWe now show that each such $P$ canonically has the structure of a \n\\emph{$G$-principal $\\infty$-bundle} and that $\\mathbf{B}G$ is the \n\\emph{fine moduli object} (the \\emph{moduli $\\infty$-stack}) for \n$G$-principal $\\infty$-bundles.\n\n\n\n\\section{Principal bundles}\n\\label{Principal infinity-bundles general abstract}\n\\label{PrincipalInfBundle}\n\n\nWe define here $G$-principal $\\infty$-bundles in any $\\infty$-topos\n$\\mathbf{H}$, discuss their basic properties and show that they are classified\nby the intrinsic $G$-cohomology in $\\mathbf{H}$, as discussed in \nDefinition~\\ref{cohomology}.\n\n\\subsection{Introduction and survey}\n\\label{PrincBund-Intro}\n\nLet $G$ be a topological group, a Lie group, or \nsome other similar object. The traditional \ndefinition of \\emph{$G$-principal bundle} is the following: \nthere is a map \n$$\n P \\to X := P\/G\n$$ \nwhich is the quotient projection\ninduced by a \\emph{free} action \n$$\n \\rho : P \\times G \\to P\n$$ \nof $G$ on a space (or manifold, depending on context) $P$,\nsuch that there is a cover $U \\to X$ over which the quotient projection is isomorphic\nto the trivial one $U \\times G \\to U$.\n \nIn higher geometry, if $G$ is a topological or smooth \n$\\infty$-group, the quotient projection must be \nreplaced by the $\\infty$-quotient (homotopy quotient) \nprojection \n\\[\nP\\to X := P\/\\!\/ G\n\\]\nfor the action of $G$ on a topological or smooth $\\infty$-groupoid \n(or $\\infty$-stack) $P$. 
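\nSchematically (anticipating the formal definitions below), the homotopy quotient is the colimit of the\nsimplicial object which in degree $n$ is $P \\times G^{\\times n}$ (the action groupoid object), so the\ncondition on $P \\to X$ is that it exhibits $X$ as this colimit:\n$$\n X \\simeq \\varinjlim \\big(\\, [n] \\mapsto P \\times G^{\\times n} \\,\\big)\n \\,.\n$$\n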
It is a remarkable fact that this \nsingle condition on the map $P\\to X$ \nalready implies that $G$ acts freely on $P$ and that $P\\to X$ \nis locally trivial, when the latter notions are understood in the \ncontext of higher geometry. We will therefore define \na $G$-principal $\\infty$-bundle to be such a map $P\\to X$. \n\n \n\nAs motivation for this, notice that if a Lie group $G$ acts properly, \nbut not freely, then the quotient $P \\to X := P\/G$ differs from the homotopy quotient.\nSpecifically, if precisely the subgroup $G_{\\mathrm{stab}} \\hookrightarrow G$ acts trivially, then \nthe homotopy quotient is \ninstead the \\emph{quotient stack} $X \/\\!\/ G_{\\mathrm{stab}}$ \n(sometimes written $[X\/\\!\/G_{\\mathrm{stab}}]$, \nwhich is an orbifold if $G_{\\mathrm{stab}}$ is finite). The ordinary \nquotient coincides with the homotopy quotient if and only if the stabilizer subgroup\n$G_{\\mathrm{stab}}$ is trivial, and hence if and only if the action of $G$ is free.\n\nConversely this means that in the context of higher geometry a non-free action\nmay also be principal: with respect not to a base space, but with respect to a base groupoid\/stack.\nIn the example just discussed, we have that the projection $P \\to X\/\\!\/ G_{\\mathrm{stab}}$\nexhibits $P$ as a $G$-principal bundle over the action groupoid \n$P \/\\!\/ G \\simeq X\/\\!\/ G_{\\mathrm{stab}}$. 
For instance if $P = V$ is a\nvector space equipped with a $G$-representation, then $V \\to V\/\\!\/ G$ is a\n$G$-principal bundle over a groupoid\/stack.\nIn other words, the traditional requirement of freeness in a principal action is not so much\na characterization of principality as such, as rather a condition that ensures that the\nbase of a principal action is a 0-truncated object in higher geometry.\n\nBeyond this specific class of 0-truncated examples, this means that we have the following \nnoteworthy general statement: in higher geometry \\emph{every} $\\infty$-action \nis principal with respect to\n\\emph{some} base, namely with respect to its $\\infty$-quotient. In this sense\nthe notion of principal bundles is (even) more fundamental to higher geometry than\nit is to ordinary geometry. \nAlso, several constructions in ordinary geometry that are\ntraditionally thought of as conceptually different from the notion of principality\nturn out to be special cases of principality in higher geometry. For instance\na central extension of groups $A\\to \\hat G \\to G$ turns out to be \nequivalently a higher principal bundle, namely a $\\mathbf{B}A$-principal 2-bundle\nof moduli stacks $\\mathbf{B}\\hat G \\to \\mathbf{B}G$. Following this\nthrough, one finds that the topics \nof principal $\\infty$-bundles, of $\\infty$-group extensions (\\ref{ExtensionsOfCohesiveInfinityGroups}), \nof $\\infty$-representations (\\ref{StrucRepresentations}), \nand of $\\infty$-group cohomology\nare all different aspects of just one single concept in higher geometry.\n\nMore is true: in the context of an $\\infty$-topos \nevery $\\infty$-quotient projection of an $\\infty$-group action \nis locally trivial, with respect to\nthe canonical intrinsic notion of cover, hence of locality. Therefore\nalso the condition of local triviality in the classical definition of principality\nbecomes automatic. 
This is a direct consequence of the third\n$\\infty$-Giraud axiom, Definition~\\ref{GiraudRezkLurieAxioms}\nthat ``all $\\infty$-quotients are effective''.\nThis means that the projection map $P \\to P \/\\!\/ G$ is always a cover\n(an \\emph{effective epimorphism}) and so, since every $G$-principal $\\infty$-bundle\ntrivializes over itself, it exhibits a local trivialization of itself; \neven without explicitly requiring it to be locally trivial.\n\nAs before, this means that the local triviality clause appearing in the\ntraditional definition of principal bundles is not so much a characteristic of\nprincipality as such, as rather a condition that ensures that a given quotient \ntaken in a category of geometric spaces coincides with the ``correct'' quotient\nobtained when regarding the situation in the ambient $\\infty$-topos. \n\nAnother direct consequence of the $\\infty$-Giraud axioms\nis the equivalence of the definition of principal bundles as quotient maps, which \nwe discussed so far, with the other main definition of principality: the condition \nthat the ``shear map'' $ (\\mathrm{id}, \\rho) : P \\times G \\to P \\times_X P$ is an equivalence. \nIt is immediate to verify in traditional 1-categorical contexts that this is \nequivalent to the action being properly free and exhibiting $X$ as its quotient\n(we discuss this in detail in \\cite{NSSc}). \nSimple as this is, one may observe, in view of the above discussion, \nthat the shear map being an equivalence is much more fundamental even: notice\nthat $P \\times G$ is the first stage of the \\emph{action groupoid object}\n$P\/\\!\/G$, and that $P \\times_X P$ is the first stage of the \\emph{{\\v C}ech nerve groupoid object}\n$\\check{C}(P \\to X)$ of the corresponding quotient map. 
Accordingly, the shear map equivalence\nis the first stage in the equivalence of groupoid objects in the $\\infty$-topos\n$$\n P \/\\!\/G \\simeq \\check{C}(P \\to X)\n \\,.\n$$\nThis equivalence is just the explicit statement of the fact mentioned before: the groupoid object\n$P\/\\!\/G$ is effective -- as is any groupoid object in an $\\infty$-topos -- and, equivalently,\nits principal $\\infty$-bundle map $P \\to X$ is an effective epimorphism.\n\nFairly directly from this fact, finally, springs the classification theorem of \nprincipal $\\infty$-bundles. For we have a canonical morphism of groupoid objects\n$P \/\\!\/G \\to * \/ \\!\/G$ induced by the terminal map $P \\to *$. By the $\\infty$-Giraud\ntheorem the $\\infty$-colimit over this sequence of morphisms of groupoid objects\nis a $G$-cocycle on $X$ (Definition~\\ref{cohomology}) canonically induced by $P$:\n$$\n \\varinjlim \\left(\\check{C}(P \\to X)_\\bullet \\simeq (P \/\\!\/G)_\\bullet \\to (* \/\\!\/G)_\\bullet \\right) \n = \n (X \\to \\mathbf{B}G) \n \\;\\;\\;\n \\in \\mathbf{H}(X, \\mathbf{B}G)\n \\,.\n$$\nConversely, from any such $G$-cocycle one finds that one obtains a $G$-principal \n$\\infty$-bundle simply by forming its $\\infty$-fiber: the $\\infty$-pullback of\nthe point inclusion ${*} \\to \\mathbf{B}G$. We show in \\cite{NSSb} that in presentations\nof the $\\infty$-topos theory by 1-categorical tools, the computation of this homotopy\nfiber is \\emph{presented} by the ordinary pullback of a big resolution of the point, \nwhich turns out to be nothing but the universal $G$-principal bundle. \nThis appearance of the universal $\\infty$-bundle as just \na resolution of the point inclusion may be understood in light of the above discussion \nas follows. \nThe classical characterization of the \nuniversal $G$-principal bundle $\\mathbf{E}G$ is as a space that is homotopy equivalent\nto the point and equipped with a \\emph{free} $G$-action. 
But by the above, freeness of the\naction is an artefact of 0-truncation and not a characteristic of principality in higher\ngeometry. Accordingly, in higher geometry the universal $G$-principal $\\infty$-bundle\nfor any $\\infty$-group $G$ may be taken to \\emph{be} the point, equipped with the \ntrivial (maximally non-free) $G$-action. As such, it is a bundle not over the \nclassifying \\emph{space} $B G$ of $G$, but over the full moduli $\\infty$-stack $\\mathbf{B}G$.\n\nThis way we have natural assignments of $G$-principal $\\infty$-bundles to \ncocycles in $G$-nonabelian cohomology, and vice versa. We find (see \nTheorem~\\ref{PrincipalInfinityBundleClassification} below) that\nprecisely the second $\\infty$-Giraud axiom of Definition~\\ref{GiraudRezkLurieAxioms}, \nnamely the fact that in an $\\infty$-topos $\\infty$-colimits are preserved by\n$\\infty$-pullback, \nimplies that these \nconstructions constitute an equivalence of $\\infty$-groupoids, hence \nthat $G$-principal $\\infty$-bundles are classified by $G$-cohomology.\n\nThe following table summarizes the relation between\n$\\infty$-bundle theory and the $\\infty$-Giraud axioms as indicated above, and as \nproven in the following section.\n\n\\medskip\n\\begin{center}\n\\begin{tabular}{c|c}\n {\\bf $\\infty$-Giraud axioms} & {\\bf principal $\\infty$-bundle theory}\n \\\\\n \\hline\n \\hline\n quotients are effective & \n \\begin{tabular}{c} \\\\ every $\\infty$-quotient $P \\to X := P\/\\!\/ G$ \\\\is principal \\\\ \\ \\end{tabular}\n \\\\\n \\hline\n colimits are preserved by pullback & \n \\begin{tabular}{c}\\\\ $G$-principal $\\infty$-bundles \\\\ are classified by $\\mathbf{H}(X,\\mathbf{B}G)$\\\\\n \\end{tabular}\n\\end{tabular}\n\\end{center}\n\n\\subsection{Definition and classification}\n\\label{DefintionAndClassification}\n\n\n\\begin{definition} \n \\label{ActionInPrincipal}\n For $G \\in \\mathrm{Grp}(\\mathbf{H})$ a group object,\n we say a \\emph{$G$-action} on an object $P \\in 
\\mathbf{H}$\n is a groupoid object $P\/\\!\/G$ (Definition~\\ref{GroupoidObject}) of the form\n $$\n \\xymatrix{\n \\cdots\n \\ar@<+6pt>[r] \\ar@<+2pt>[r] \\ar@<-2pt>[r] \\ar@<-6pt>[r]\n &\n P \\times G \\times G\n \\ar@<+4pt>[r] \n \\ar[r] \n \\ar@<-4pt>[r] \n & \n P \\times G \n \\ar@<+2pt>[r]^<<<<<{\\rho := d_0 } \\ar@<-2pt>[r]_{d_1} \n & P\n }\n $$\n such that $d_1 : P \\times G \\to P$ is the projection, and such that \n the degreewise projections \n $P \\times G^n \\to G^n $ constitute a morphism of groupoid\n objects\n $$\n \\xymatrix{\n \\cdots\n \\ar@<+6pt>[r] \\ar@<+2pt>[r] \\ar@<-2pt>[r] \\ar@<-6pt>[r]\n &\n P \\times G \\times G\n \\ar[d]\n \\ar@<+4pt>[r] \n \\ar[r] \n \\ar@<-4pt>[r] \n & \n P \\times G \n \\ar[d]\n \\ar@<+2pt>[r] \\ar@<-2pt>[r] \n & P\n \\ar[d]\n \\\\\n \\cdots\n \\ar@<+6pt>[r] \\ar@<+2pt>[r] \\ar@<-2pt>[r] \\ar@<-6pt>[r]\n &\n G \\times G\n \\ar@<+4pt>[r] \n \\ar[r] \n \\ar@<-4pt>[r] \n & \n G \n \\ar@<+2pt>[r] \\ar@<-2pt>[r] \n & {*} \n }\n $$\nwhere the lower simplicial object exhibits $G$ as a groupoid \nobject over $\\ast$ (see Remark~\\ref{Cech nerve of * -> BG}). \n \n With convenient abuse of notation we also write\n $$ \n P\/\\!\/G := \\varinjlim (P \\times G^{\\times^\\bullet})\\;\\; \n \\in \\mathbf{H}\n $$\n for the corresponding $\\infty$-colimit object, the \\emph{$\\infty$-quotient} of this \n action. \n \n Write\n $$\n G \\mathrm{Action}(\\mathbf{H}) \\hookrightarrow \\mathrm{Grpd}(\\mathbf{H})_{\/({*}\/\\!\/G)}\n $$\n for the full sub-$\\infty$-category of groupoid objects over $*\/\\!\/G$\n on those that are $G$-actions. 
\n\\end{definition}\n\\begin{remark}\n \\label{ActionMapEncodedInGBundle}\n The remaining face map $d_0$\n $$\n \\rho := d_0 : P \\times G \\to P\n $$\n is the action itself.\n\\end{remark}\n\\begin{remark}\n Using this notation in Proposition~\\ref{InfinityGroupObjectsAsGroupoidObjects}\n we have\n $$\n \\mathbf{B}G \\simeq *\/\\!\/G\n \\,.\n $$\n\\end{remark}\nWe list examples of $\\infty$-actions below as Example \\ref{ExamplesOfActions}.\nThis is most conveniently done after astablishing the theory of \nprincipal $\\infty$-actions, to which we now turn.\n\n\\begin{definition}\n \\label{principalbundle}\n Let $G \\in \\infty \\mathrm{Grp}(\\mathbf{H})$ be an \n $\\infty$-group and let $X$ be an object of $\\mathbf{H}$. \n A {\\em $G$-principal $\\infty$-bundle} over $X$ \n (or \\emph{$G$-torsor over $X$}) \n is \n \\begin{enumerate}\n \\item a morphism $P \\to X$ in $\\mathbf{H}$;\n\t\\item together with a $G$-action on $P$;\n \\end{enumerate} \n such that $P \\to X$ is the colimiting cocone exhibiting the quotient map\n $X \\simeq P\/\\!\/G$ (Definition \\ref{ActionInPrincipal}).\n \n A \\emph{morphism} of $G$-principal $\\infty$-bundles over $X$ is a morphism of $G$-actions \n that fixes $X$; the $\\infty$-category of $G$-principal $\\infty$-bundles over $X$\n is the homotopy fiber of $\\infty$-categories\n $$\n G \\mathrm{Bund}(X) := G \\mathrm{Action}(\\mathbf{H}) \\times_{\\mathbf{H}} \\{X\\}\n $$\n over $X$ of the quotient map\n $$\n \\xymatrix{\n G \\mathrm{Action}(\\mathbf{H}) \n\t \\ar@{^{(}->}[r] & \\mathrm{Grpd}(\\mathbf{H})_{\/(*\/\\!\/G)} \n\t \\ar[r] &\n\t \\mathrm{Grpd}(\\mathbf{H}) \n\t \\ar[r]^-{\\varinjlim}\n\t &\n\t \\mathbf{H}\n\t}\n\t\\,.\n $$\n\\end{definition}\n\\begin{remark}\n \\label{GBundlesAreEffectiveEpimorphisms}\n By the third $\\infty$-Giraud axiom (Definition~\\ref{GiraudRezkLurieAxioms})\n this means in particular that a \n $G$-principal $\\infty$-bundle $P \\to X$ is an \n effective epimorphism in 
$\\mathbf{H}$.\n\\end{remark}\n\\begin{remark}\n Even though $G \\mathrm{Bund}(X)$ is by definition a priori an $\\infty$-category,\n Proposition \\ref{MorphismsOfInfinityBundlesAreEquivalences} below says\n that in fact it happens to be $\\infty$-groupoid: all\n its morphisms are invertible.\n\\end{remark}\n\\begin{proposition}\n \\label{PrincipalityCondition}\n A $G$-principal $\\infty$-bundle $P \\to X$ satisfies the\n \\emph{principality condition}: the canonical morphism\n $$\n (\\rho, p_1)\n\t:\n \\xymatrix{\n\t P \\times G \n\t \\ar[r]^{\\simeq}\n\t &\n\t P \\times_X P\n\t}\n $$\n is an equivalence, where $\\rho$ is the $G$-action.\n\\end{proposition}\n\\proof\n By the third $\\infty$-Giraud axiom (Definition~\\ref{GiraudRezkLurieAxioms}) the groupoid object\n $P\/\\!\/G$ is effective, which means that it is equivalent\n to the {\\v C}ech nerve of $P \\to X$. In first degree this implies\n a canonical equivalence $P \\times G \\to P \\times_X P$. Since\n the two face maps $d_0, d_1 : P \\times_X P \\to P$ in the\n {\\v C}ech nerve are simply the projections out of the fiber product, \n it follows that the two components of this canonical equivalence\n are the two face maps $d_0, d_1 : P \\times G \\to P$ of $P\/\\!\/G$.\n By definition, these are the projection onto the first factor\n and the action itself. 
\n\\hfill{$\\square$}\\\\\n\\begin{proposition}\n \\index{principal $\\infty$-bundle!construction from cocycle}\n \\label{BundleStructureOnInfinityFiber}\n \\label{PrincipalBundleAsHomotopyFiber}\n For $g : X \\to \\mathbf{B}G$ any morphism, its homotopy fiber\n $P \\to X$ canonically carries the structure of a \n $G$-principal $\\infty$-bundle over $X$.\n\\end{proposition}\n\\proof\n That $P \\to X$ is the fiber of $g : X \\to \\mathbf{B}G$\n means that we have an $\\infty$-pullback diagram\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n P \\ar[d]\\ar[r] & {*}\\ar[d]\n \\\\\n X \\ar[r]^g & \\mathbf{B}G.\n }\n\t}\n $$\n By the pasting law for $\\infty$-pullbacks, Proposition~\\ref{PastingLawForPullbacks},\n this induces a compound diagram\n $$\n \\xymatrix{\n \\cdots\n \\ar@<+6pt>[r] \\ar@<+2pt>[r] \\ar@<-2pt>[r] \\ar@<-6pt>[r]\n &\n P \\times G \\times G\n \\ar[d]\n \\ar@<+4pt>[r] \n \\ar[r] \n \\ar@<-4pt>[r] \n & \n P \\times G \n \\ar[d]\n \\ar@<+2pt>[r] \\ar@<-2pt>[r] \n & \n P\n \\ar[d]\n \\ar@{->>}[r]\n &\n X\n \\ar[d]^g\n \\\\\n \\cdots\n \\ar@<+6pt>[r] \\ar@<+2pt>[r] \\ar@<-2pt>[r] \\ar@<-6pt>[r]\n &\n G \\times G\n \\ar@<+4pt>[r] \n \\ar[r] \n \\ar@<-4pt>[r] \n & \n G \n \\ar@<+2pt>[r] \\ar@<-2pt>[r] \n & \n {*} \n \\ar@{->>}[r]\n &\n \\mathbf{B}G\n }\n $$\n where each square and each composite rectangle is an \n $\\infty$-pullback. \n This exhibits the $G$-action on $P$. \n Since $* \\to \\mathbf{B}G$\n is an effective epimorphism, so is its $\\infty$-pullback\n $P \\to X$. Since, by the $\\infty$-Giraud theorem, $\\infty$-colimits are preserved\n by $\\infty$-pullbacks we have that $P \\to X$ exhibits the \n $\\infty$-colimit $X \\simeq P\/\\!\/G$. 
\n\\hfill{$\\square$}\\\\\n\\begin{lemma}\n For $P \\to X$ a $G$-principal $\\infty$-bundle\n obtained as in Proposition~\\ref{BundleStructureOnInfinityFiber}, and for\n $x : * \\to X$ any point of $X$, we have \n a canonical equivalence\n $$\n \\xymatrix{\n\t x^* P \\ar[r]^{\\simeq} & G\n\t}\n $$\n between the fiber $x^*P$ and the $\\infty$-group object $G$.\n\\end{lemma}\n\\proof\n This follows from the pasting law for $\\infty$-pullbacks, which \n gives the diagram\n $$\n \\xymatrix{\n G \\ar[d] \\ar[r] & P \\ar[d]\\ar[r] & {*} \\ar[d]\n \\\\\n {*} \\ar[r]^x & X \\ar[r]^g & \\mathbf{B}G\n }\n $$\n in which both squares as well as the total rectangle are\n $\\infty$-pullbacks.\n\\hfill{$\\square$}\\\\\n\\begin{definition}\n \\label{TrivialGBundle}\n The \\emph{trivial} $G$-principal $\\infty$-bundle $(P \\to X) \\simeq (X \\times G \\to X)$\n is, up to equivalence, the one obtained via Proposition~\\ref{BundleStructureOnInfinityFiber}\n from the morphism $X \\to * \\to \\mathbf{B}G$. \n\\end{definition}\n\\begin{observation}\n \\label{PullbackOfInfinityTorsors}\n For $P \\to X$ a $G$-principal $\\infty$-bundle and $Y \\to X$ any morphism, the\n $\\infty$-pullback $Y \\times_X P$ naturally inherits the structure of \n a $G$-principal $\\infty$-bundle.\n\\end{observation}\n\\proof\n This uses the same kind of argument as in Proposition~\\ref{BundleStructureOnInfinityFiber}\n (which is the special case of the pullback of what we will see is the\n universal $G$-principal $\\infty$-bundle $*\\to \\mathbf{B}G$ below in \n Proposition~\\ref{LocalTrivialityImpliesCocycle}).\n\\hfill{$\\square$}\\\\\n\\begin{definition}\n \\label{GInfinityBundleAndMorphism}\n \\label{GPrincipalInfinityBundle}\n A $G$-principal $\\infty$-bundle $P \\to X$ is called\n \\emph{locally trivial}\n if there exists an effective epimorphism $\\xymatrix{U \\ar@{->>}[r] & X}$\n and an equivalence of $G$-principal $\\infty$-bundles\n $$\n U \\times_X P \\simeq U \\times G\n $$\n from the pullback of 
$P$ (Observation~\\ref{PullbackOfInfinityTorsors})\n to the trivial $G$-principal $\\infty$-bundle over $U$ (Definition~\\ref{TrivialGBundle}).\n\\end{definition}\n\\begin{proposition}\n \\label{EveryGBundleIsLocallyTrivial}\n Every $G$-principal $\\infty$-bundle is locally trivial.\n\\end{proposition}\n\\proof\n For $P \\to X$ a $G$-principal $\\infty$-bundle, it is, by \n Remark~\\ref{GBundlesAreEffectiveEpimorphisms}, itself an effective\n epimorphism. The pullback of the $G$-bundle to its \n own total space along this morphism is trivial,\n by the principality condition\n (Proposition~\\ref{PrincipalityCondition}). Hence setting $U := P$ \n proves the claim.\n\\hfill{$\\square$}\\\\\n\\begin{remark}\n This means that every $G$-principal $\\infty$-bundle is in particular a\n $G$-fiber $\\infty$-bundle (in the evident sense of Definition~\\ref{FiberBundle} below).\n But not every $G$-fiber bundle is $G$-principal, since the local trivialization\n of a fiber bundle need not respect the $G$-action. \n\\end{remark}\n\\begin{proposition}\n \\label{LocalTrivialityImpliesCocycle}\n For every $G$-principal $\\infty$-bundle $P \\to X$ the square\n $$\n \\xymatrix{\n & P \\ar[d] \\ar[r] & {*} \\ar[d]\n \\\\\n X \\ar@{}[r]|<<<\\simeq\n & \\varinjlim_n (P \\times G^{\\times_n})\n \\ar[r]\n &\n \\varinjlim_n G^{\\times_n}\n \\ar@{}[r]|\\simeq\n &\n \\mathbf{B}G\n }\n $$\n is an $\\infty$-pullback diagram.\n\\end{proposition}\n\\proof\n Let $U \\to X$ be an effective epimorphism\n such that $P \\to X$ pulled back to $U$ becomes the trivial $G$-principal\n $\\infty$-bundle. 
By Proposition~\\ref{EveryGBundleIsLocallyTrivial} this exists.\n By definition of morphism of $G$-actions and \n by functoriality of the $\\infty$-colimit, this induces \n a morphism in ${\\mathbf{H}^{\\Delta[1]}}_{\/(* \\to \\mathbf{B}G)}$ \n corresponding to the diagram\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n\t U \\times G \\ar@{->>}[r] \\ar@{->>}[d] & P \\ar[r] \\ar@{->>}[d] & {*} \\ar@{->>}[d]^{\\mathrm{pt}}\n\t \\\\\n\t U \\ar@{->>}[r] & X \\ar[r] & \\mathbf{B}G\n\t}\n\t}\n\t\\;\\;\n\t\\simeq\n\t\\;\\;\n \\raisebox{20pt}{\n \\xymatrix{\n\t U \\times G \\ar@{->>}[rr] \\ar@{->>}[d] & & {*} \\ar@{->>}[d]^{\\mathrm{pt}}\n\t \\\\\n\t U \\ar[r] & {*} \\ar[r]^{\\mathrm{pt}} & \\mathbf{B}G\n\t}\n\t}\n $$\n in $\\mathbf{H}$. \n By assumption, in this diagram the outer rectangles and the square on the very left \n are $\\infty$-pullbacks. We need to show that\n the right square on the left is also an $\\infty$-pullback.\n \n Since $U \\to X$ is an effective epimorphism by assumption, and since these are\n stable under $\\infty$-pullback, $U \\times G \\to P$ is also an effective epimorphism,\n as indicated. This means that \n $$\n P \\simeq {\\varinjlim_n}\\, (U \\times G)^{\\times^{n+1}_P}\n\t\\,.\n $$\n We claim that for all $n \\in \\mathbb{N}$ the fiber products in the colimit on the right\n are naturally equivalent to $(U^{\\times^{n+1}_X}) \\times G$. For $n = 0$ this is\n clearly true. Assume then by induction that \n it holds for some $n \\in \\mathbb{N}$. 
Then\n with the pasting law (Proposition~\\ref{PastingLawForPullbacks}) we find an \n $\\infty$-pullback diagram of the form\n $$\n \\raisebox{30pt}{\n \\xymatrix{\n\t (U^{\\times^{n+1}_X}) \\times G \t \n\t \\ar@{}[r]|\\simeq\n\t & \n\t (U \\times G)^{\\times^{n+1}_P}\n\t \\ar[r] \\ar[d] \n\t & \n\t (U \\times G)^{\\times^n_P}\n \\ar[d]\n\t \\ar@{}[r]|{\\simeq}\n\t &\n\t (U^{\\times^{n}_X}) \\times G \n\t \\\\\n\t & U \\times G \\ar[r] \\ar[d] & P \\ar[d]\n\t \\\\\n\t & U \\ar[r] & X.\n\t}\n\t}\n $$\n This completes the induction.\n With this the above expression for $P$ becomes\n $$\n \\begin{aligned}\n P & \\simeq {\\varinjlim_n}\\, (U^{\\times^{n+1}_X}) \\times G\n\t \\\\\n\t & \\simeq {\\varinjlim_n} \\,\\mathrm{pt}^* \\, (U^{\\times^{n+1}_X}) \n\t \\\\\n\t & \\simeq \\mathrm{pt}^* \\, {\\varinjlim_n}\\, (U^{\\times^{n+1}_X}) \t \n\t \\\\\n\t & \\simeq \\mathrm{pt}^* \\, X,\n\t\\end{aligned}\n $$\n where we have used that by the second $\\infty$-Giraud axiom (Definition~\\ref{GiraudRezkLurieAxioms})\n we may take the $\\infty$-pullback out of the $\\infty$-colimit and \n where in the last step we \n used again the assumption that $U \\to X$ is an effective epimorphism.\n\\hfill{$\\square$}\\\\\n\\begin{example}\n The fiber sequence \n $$\n \\xymatrix{\n\t G \\ar[r] & {*} \\ar[d]\n\t \\\\\n\t & \\mathbf{B}G\n\t}\n $$\n which exhibits the delooping $\\mathbf{B}G$ of $G$ according to \n Theorem \\ref{DeloopingTheorem} is a $G$-principal $\\infty$-bundle\n over $\\mathbf{B}G$, with \\emph{trivial} $G$-action on its total space\n $*$. Proposition \\ref{LocalTrivialityImpliesCocycle} says that this is \n the \\emph{universal $G$-principal $\\infty$-bundle} in that every\n other one arises as an $\\infty$-pullback of this one. \n In particular, $\\mathbf{B}G$ is a classifying object for $G$-principal\n $\\infty$-bundles. 
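For $\mathbf{H} = \mathrm{Grpd}_\infty \simeq \mathrm{Top}$ and $G$ an ordinary topological group this recovers, up to the usual comparison of topological spaces with homotopy types, the classical classification of $G$-principal bundles by homotopy classes of maps to the classifying space $B G$.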
\n \n Below in Theorem \\ref{ClassificationOfTwistedGEquivariantBundles} \n this relation is strengthened:\n every \\emph{automorphism} of a $G$-principal $\\infty$-bundle, and in fact\n its full automorphism $\\infty$-group arises from pullback of the above\n universal $G$-principal $\\infty$-bundle: $\\mathbf{B}G$ is the fine\n \\emph{moduli $\\infty$-stack} of $G$-principal $\\infty$-bundles.\n \n The traditional definition of universal $G$-principal bundles in terms of \n contractible objects equipped with a free $G$-action has no intrinsic\n meaning in higher topos theory. Instead this appears in \n \\emph{presentations} of the general theory in model categories\n (or categories of fibrant objects)\n as \\emph{fibrant representatives}\n $\\mathbf{E}G \\to \\mathbf{B}G$ of the above point inclusion.\n This we discuss in \\cite{NSSb}. \n \\label{UniversalPrincipal}\n\\end{example}\nThe main classification Theorem \\ref{PrincipalInfinityBundleClassification} below\nimplies in particular that every morphism in $G\\mathrm{Bund}(X)$ is an equivalence.\nFor emphasis we note how this also follows directly:\n\\begin{lemma}\n \\label{EquivalencesAreDetectedOverEffectiveEpimorphisms}\n Let $\\mathbf{H}$ be an $\\infty$-topos and let $X$ be an \n object of $\\mathbf{H}$. A morphism \n $f\\colon A\\to B$ in $\\mathbf{H}_{\/X}$ \n is an equivalence if and only if $p^*f$ is an equivalence in \n $\\mathbf{H}_{\/Y}$ for any effective epimorphism $p\\colon Y\\to X$ in \n $\\mathbf{H}$. \n\n \n\\end{lemma}\n\\proof\n It is clear, by functoriality, that $p^* f$ is a weak equivalence if $f$ is. \n Conversely, assume that $p^* f$ is a weak equivalence. 
\n Since effective epimorphisms as well as \n equivalences are preserved by pullback, \n we get a simplicial diagram of the form\n $$\n \raisebox{20pt}{\n \xymatrix{\n \cdots\n \ar@<+4pt>[r]\n \ar@<+0pt>[r]\n \ar@<-4pt>[r]\n &\n p^* A \times_A p^* A\n \ar@<+2pt>[r]\n \ar@<-2pt>[r]\n \ar[d]^\simeq\n &\n p^* A\n \ar[d]^\simeq\n \ar@{->>}[r]\n & \n A\n \ar[d]^f\n \\\n \cdots\n \ar@<+4pt>[r]\n \ar@<+0pt>[r]\n \ar@<-4pt>[r]\n &\n p^* B \times_B p^* B\n \ar@<+2pt>[r]\n \ar@<-2pt>[r]\n &\n p^* B\n \ar@{->>}[r]\n & \n B\n }\n\t}\n $$\n where the rightmost horizontal morphisms are effective epimorphisms, as indicated.\n By definition of effective epimorphisms this exhibits\n $f$ as an $\infty$-colimit over equivalences, hence as\n an equivalence.\n\hfill{$\square$}\\\n\begin{proposition}\n \label{MorphismsOfInfinityBundlesAreEquivalences}\n Every morphism between $G$-actions over $X$ that are \n $G$-principal $\infty$-bundles over $X$ is an equivalence.\n\end{proposition}\n\proof\n Since a morphism of $G$-principal bundles\n $P_1 \to P_2$ is a morphism of {\v C}ech nerves that fixes \n their $\infty$-colimit $X$, up to equivalence, \n and since $* \to \mathbf{B}G$ is an effective \n epimorphism,\n we are, by Proposition~\ref{LocalTrivialityImpliesCocycle}, in the situation of \n Lemma~\ref{EquivalencesAreDetectedOverEffectiveEpimorphisms}.\n\hfill{$\square$}\\\n\begin{theorem}\n \label{PrincipalInfinityBundleClassification}\n For all $X, \mathbf{B}G \in \mathbf{H}$ there is a natural\n equivalence of $\infty$-groupoids\n $$\n G \mathrm{Bund}(X)\n \simeq\n \mathbf{H}(X, \mathbf{B}G)\n $$ \n which on vertices is the construction of Proposition~\ref{BundleStructureOnInfinityFiber}:\n a bundle $P \to X$ is mapped to a morphism\n $X \to \mathbf{B}G$ such that $P \to X \to \mathbf{B}G$ is a fiber\n sequence.\n\end{theorem}\nWe therefore say\n\begin{itemize}\n \item $\mathbf{B}G$ is the \emph{classifying object} \n 
or \\emph{moduli $\\infty$-stack} for \n $G$-principal $\\infty$-bundles;\n \\item a morphism $c : X \\to \\mathbf{B}G$ is a \\emph{cocycle}\n for the corresponding $G$-principal $\\infty$-bundle and its class \n $[c] \\in \\mathrm{H}^1(X,G)$ is its \n \\emph{characteristic class}.\n\\end{itemize}\n\\proof\n By Definitions~\\ref{ActionInPrincipal} \n and~\\ref{principalbundle} and using \n the refined statement of the third $\\infty$-Giraud axiom \n (Theorem~\\ref{NaturalThirdGiraud}), the\n $\\infty$-groupoid of $G$-principal $\\infty$-bundles over $X$ is \n equivalent to the fiber over $X$\n of the\n sub-$\\infty$-category \n of the slice\n of the arrow $\\infty$-topos \n on those squares\n $$\n \\xymatrix{\n\t P \\ar[r] \\ar@{->>}[d] & {*} \\ar@{->>}[d]\n\t \\\\\n\t X \\ar[r] & \\mathbf{B}G\n\t}\n $$\n that exhibit $P \\to X$ as a $G$-principal $\\infty$-bundle. By \n Proposition~\\ref{BundleStructureOnInfinityFiber} and\n Proposition~\\ref{LocalTrivialityImpliesCocycle} these are \n the $\\infty$-pullback squares \n $\n \\mathrm{Cart}({\\mathbf{H}^{\\Delta[1]}}_{\/{(* \\to \\mathbf{B}G)}})\n \\hookrightarrow {\\mathbf{H}^{\\Delta[1]}}_{\/{(* \\to \\mathbf{B}G)}}\n $, hence\n $$\n G \\mathrm{Bund}(X) \\simeq \n\t \\mathrm{Cart}({\\mathbf{H}^{\\Delta[1]}}_{\/{(* \\to \\mathbf{B}G)}}) \\times_{\\mathbf{H}} \\{X\\}\n\t \\,.\n $$\n By the universality of the $\\infty$-pullback\n the morphisms between these are fully determined by their value on $X$,\n so that the above is equivalent to \n $$\n \\mathbf{H}_{\/\\mathbf{B}G} \\times_{\\mathbf{H}} \\{X\\}\n\t\\,.\n $$\n (For instance in terms of model categories: choose a model structure for\n $\\mathbf{H}$ in which all objects are cofibrant, choose a fibrant representative\n for $\\mathbf{B}G$ and a fibration resolution $\\mathbf{E}G \\to \\mathbf{B}G$\n of the universal $G$-bundle. 
Then the slice model structure of the arrow model structure\n over this presents the slice in question and the statement follows from the analogous\n 1-categorical statement.)\n This finally is equivalent to \n $$\n\t\mathbf{H}(X, \mathbf{B}G)\n\t\,.\n $$\n (For instance in terms of quasi-categories: the projection \n $\mathbf{H}_{\/\mathbf{B}G} \to \mathbf{H}$ is a fibration by \n Propositions 2.1.2.1 and 4.2.1.6 in \cite{Lurie}, hence the homotopy fiber\n $\mathbf{H}_{\/\mathbf{B}G} \times_{\mathbf{H}} \{X\}$ \n is the ordinary fiber of quasi-categories. This is manifestly \n the\n $\mathrm{Hom}^R_{\mathbf{H}}(X, \mathbf{B}G)$ from Proposition 1.2.2.3 of \cite{Lurie}.\n Finally, by Proposition 2.2.4.1 there, this is equivalent to $\mathbf{H}(X,\mathbf{B}G)$.)\n\hfill{$\square$}\\\n\begin{corollary}\n Equivalence classes of $G$-principal $\infty$-bundles over $X$ are\n in natural bijection with the degree-1 $G$-cohomology of $X$:\n $$\n G \mathrm{Bund}(X)_{\/\sim} \simeq H^1(X, G)\n\t\,.\n $$\n\end{corollary}\n\proof\n By Definition \ref{cohomology} this is the restriction of \n the equivalence $G \mathrm{Bund}(X) \simeq \mathbf{H}(X, \mathbf{B}G)$ to\n connected components.\n\hfill{$\square$}\\\n\n\section{Twisted bundles and twisted cohomology}\n\label{StrucTwistedCohomology}\n\nWe show here how the general notion of cohomology in an \n$\infty$-topos, considered above in Section~\ref{StrucCohomology}, subsumes the notion of\n\emph{twisted cohomology}, and we discuss the corresponding\ngeometric structures classified by twisted cohomology: \n\emph{extensions} of principal $\infty$-bundles and \emph{twisted $\infty$-bundles}.\n \nWhereas ordinary cohomology is given by a derived hom-$\infty$-groupoid, \ntwisted cohomology is given by the $\infty$-groupoid of \n\emph{sections of \na local coefficient bundle} in an $\infty$-topos, \nwhich in turn is \nan \emph{associated $\infty$-bundle} induced via a 
representation \nof an $\\infty$-group $G$ from a $G$-principal $\\infty$-bundle\n(this is a geometric and unstable variant of the picture \nof twisted cohomology developed in \\cite{AndoBlumbergGepner,MaySigurdsson}).\n\nIt is fairly immediate that, given a \\emph{universal} local coefficient bundle\nassociated to a universal principal $\\infty$-bundle,\nthe induced twisted cohomology is equivalently ordinary\ncohomology in the corresponding slice $\\infty$-topos. This\nidentification provides a clean formulation of the contravariance\nof twisted cocycles. However, a universal coefficient bundle\nis a pointed connected object in the slice $\\infty$-topos only\nwhen it is a trivial bundle, so that twisted cohomology does not classify\nprincipal $\\infty$-bundles in the slice. We show below that instead\nit classifies \\emph{twisted principal $\\infty$-bundles}, which are\nnatural structures that generalize the twisted bundles familiar from\ntwisted K-theory.\nFinally, we observe that twisted cohomology in an $\\infty$-topos \nequivalently classifies extensions of structure groups\nof principal $\\infty$-bundles. 
\n\nA wealth of structures turn out to be special cases of \nnonabelian twisted cohomology and of twisted\nprincipal $\infty$-bundles, and to be usefully informed by the\ngeneral theory of twisted cohomology; we discuss some of these structures in \cite{NSSc}.\n\n\n\n\subsection{Actions and associated $\infty$-bundles}\n\label{StrucRepresentations}\n\nLet $\mathbf{H}$ be an $\infty$-topos, $G \in \mathrm{Grp}(\mathbf{H})$\nan $\infty$-group.\nFix an action $\rho : V \times G \to V$ (Definition \ref{ActionInPrincipal}) on an object $V\in \mathbf{H}$.\nWe discuss the induced notion of \emph{$\rho$-associated $V$-fiber $\infty$-bundles}.\nWe show that there is a \emph{universal} $\rho$-associated $V$-fiber bundle over \n$\mathbf{B}G$ and observe that under Theorem \ref{PrincipalInfinityBundleClassification}\nthis is effectively identified with the action itself. Accordingly, we also further discuss\n$\infty$-actions as such.\n\n\medskip\n\n\begin{definition}\n For $V,X \in \mathbf{H}$ any two objects, \na \emph{$V$-fiber $\infty$-bundle} over $X$ is a morphism $E \to X$, \nsuch that there is an effective epimorphism\n$\xymatrix{U \ar@{->>}[r] & X}$ and an $\infty$-pullback of the form\n$$\n \raisebox{20pt}{\n \xymatrix{\n U \times V \ar[r] \ar[d] & E \ar[d]\n\t\\\n\tU \ar@{->>}[r] & X\, .\n }\n }\n$$ \n \label{FiberBundle}\n\end{definition}\nWe say that $E \to X$ locally trivializes with respect to $U$.\nAs usual, we often say \emph{$V$-bundle} for short.\n\n\begin{definition}\n For $P \to X$ a $G$-principal $\infty$-bundle, we write\n $$ \n P \times_G V := (P\times V)\/\!\/G \n $$ \n for the $\infty$-quotient of the diagonal $\infty$-action of $G$ on $P \times V$.\n Equipped with the canonical morphism\n $P \times_G V \to X$ we call this the $\infty$-bundle \emph{$\rho$-associated} to $P$.\n \label{AssociatedBundle}\n\end{definition}\n\begin{remark}\n The diagonal $G$-action on $P 
\\times V$ is the product in \n $G \\mathrm{Action}(\\mathbf{H})$ of the given actions on $P$ and on $V$.\n Since $G\\mathrm{Action}(\\mathbf{H})$ is a full sub-$\\infty$-category of a slice\n category of a functor category, the product is given by a degreewise\n pullback in $\\mathbf{H}$:\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n\t P \\times V \\times G^{\\times_n} \n\t \\ar[r]\n\t \\ar[d]\n\t &\n\t V \\times G^{\\times_n}\n\t \\ar[d]\n\t \\\\\n\t P \\times G^{\\times_n}\n\t \\ar[r]\n\t &\n\t G^{\\times_n}\\,.\n\t}\n\t}\n $$\n and so\n $$\n P \\times_G V \\simeq \\varinjlim_n (P \\times V \\times G^{\\times_n})\n\t\\,.\n $$\n The canonical bundle morphism of the corresponding $\\rho$-associated\n $\\infty$-bundle is the realization of the left morphism of this diagram:\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n\t P \\times_G V\n\t \\ar@{}[r]|<<<<{:=}\n\t \\ar[d]\n\t &\n\t \\varinjlim_n (P \\times V \\times G^{\\times_n})\n\t \\ar[d]\n\t \\\\\n\t X \\ar@{}[r]|<<<<<<<<<{\\simeq} & \n\t \\varinjlim_n (P \\times G^{\\times_n})\\,.\n\t}\n\t}\n $$\n \\label{ProductActionByPullback}\n\\end{remark}\n\\begin{example}\nBy Theorem \\ref{PrincipalInfinityBundleClassification} every $\\infty$-group action\n$\\rho : V \\times G \\to V$ has a classifying morphism $\\mathbf{c}$ defined on its homotopy\nquotient, which fits into a fiber sequence of the form\n$$\n \\raisebox{20pt}{\n \\xymatrix{\n V \\ar[r] & V\/\\!\/G \\ar[d]^{\\mathbf{c}} \n \\\\\t \n\t & \\mathbf{B}G\\,.\n }\n }\n$$\n\n Regarded as an \n $\\infty$-bundle, this is\n $\\rho$-associated to the universal $G$-principal $\\infty$-bundle\n $\\xymatrix{{*} \\ar[r] & \\mathbf{B}G}$ from Example \\ref{UniversalPrincipal}:\n $$\n V\/\\!\/G \\simeq {*} \\times_G V\n\t\\,.\n $$\n \\label{ActionGroupoidIsRhoAssociated}\n\\end{example}\n\\begin{lemma}\n The realization functor $\\varinjlim : \\mathrm{Grpd}(\\mathbf{H}) \\to \\mathbf{H}$\n preserves the $\\infty$-pullback of Remark \\ref{ProductActionByPullback}:\n $$\n P 
\\times_G V \\simeq \\varinjlim_n (P \\times V \\times G^{\\times_n})\n\t\\simeq\n\t(\\varinjlim_n P \\times G^{\\times_n}) \\times_{(\\varinjlim_n G^{\\times_n})} (\\varinjlim_n V \\times G^{\\times_n})\n\t\\,.\n $$\n \\label{RealizationPreservesProductOfRepresentations}\n\\end{lemma}\n\\proof\n Generally, let $X \\to Y \\leftarrow Z \\in \\mathrm{Grpd}(\\mathbf{H})$ be a\n diagram of groupoid objects, such that in the induced diagram\n $$\n \\xymatrix{\n\t X_0 \\ar[r] \\ar@{->>}[d] & Y_0 \\ar@{<-}[r] \\ar@{->>}[d] & Z_0 \\ar@{->>}[d]\n\t \\\\\n\t \\varinjlim_n X_n \\ar[r] & \\varinjlim_n Y_n \\ar@{<-}[r] & \\varinjlim_n Z_n\n\t}\n $$\n the left square is an $\\infty$-pullback. By the third \n $\\infty$-Giraud axiom (Definition~\\ref{GiraudRezkLurieAxioms}) the vertical \n morphisms are effective epi, as indicated. \n By assumption we have a pasting of $\\infty$-pullbacks as shown on the\n left of the following diagram, and by\n the pasting law (Proposition \\ref{PastingLawForPullbacks}) this is equivalent to\n the pasting shown on the right:\n $$\n \\raisebox{38pt}{\n \\xymatrix{\n\t X_0 \\times_{Y_0} Z_0 \\ar[r] \\ar[d] & Z_0 \\ar[d]\n\t \\\\\n\t X_0 \\ar[r] \\ar[d] & Y_0 \\ar[d]\n\t \\\\\n\t \\varinjlim_n X_n \\ar[r] & \\varinjlim_n Y_n\n\t}\n\t}\n\t\\;\\;\\;\n\t\\simeq\n\t\\;\\;\\;\n \\raisebox{38pt}{\n \\xymatrix{\n\t X_0 \\times_{Y_0} Z_0 \\ar[r] \\ar@{->>}[d] & Z_0 \\ar@{->>}[d]\n\t \\\\\n\t (\\varinjlim_n X_n) \\times_{(\\varinjlim_n Y_n)} (\\varinjlim_n Z_n) \\ar[r] \\ar[d] & \n\t \\varinjlim_n Z_n \\ar[d]\n\t \\\\\n\t \\varinjlim_n X_n \\ar[r] & \\varinjlim_n Y_n.\n\t}\n\t}\n $$\nSince effective epimorphisms are stable under $\\infty$-pullback, this identifies \nthe canonical morphism \n$$\n X_0 \\times_{Y_0} Z_0\n \\to \n (\\varinjlim_n X_n) \\times_{(\\varinjlim_n Y_n)} (\\varinjlim_n Z_n)\n$$\nas an effective epimorphism, as indicated. 
\n\nSince $\\infty$-limits commute over each other, the {\\v C}ech nerve of this morphism \nis the groupoid object $[n] \\mapsto X_n \\times_{Y_n} Z_n$.\nTherefore the third $\\infty$-Giraud axiom now says that $\\varinjlim$ preserves the\n$\\infty$-pullback of groupoid objects:\n$$\n \\varinjlim (X \\times_Y Z) \n \\simeq \n \\varinjlim_n (X_n \\times_{Y_n} Z_n )\n \\simeq\n (\\varinjlim_n X_n) \\times_{(\\varinjlim_n Y_n)} (\\varinjlim_n Z_n)\n \\,.\n$$\n\nConsider this now in the special case that $X \\to Y \\leftarrow Z$ is \n$(P \\times G^{\\times_\\bullet}) \\to G^{\\times_\\bullet} \\leftarrow (V \\times G^{\\times_\\bullet})$.\nTheorem \\ref{PrincipalInfinityBundleClassification} implies that the initial assumption above is \nmet, in that $P \\simeq (P\/\\!\/G) \\times_{*\/\\!\/G} {*} \\simeq X \\times_{\\mathbf{B}G} {*}$, \nand so the claim follows.\n\\hfill{$\\square$}\\\\\n\\begin{proposition}\n For $g_X : X \\to \\mathbf{B}G$ a morphism and $P \\to X$\n the corresponding $G$-principal $\\infty$-bundle according to Theorem \n \\ref{PrincipalInfinityBundleClassification}, \n there is a natural equivalence\n $$\n g_X^*(V\/\\!\/G) \\simeq P \\times_G V\n $$\n over $X$, between the pullback of the\n $\\rho$-associated $\\infty$-bundle\n $\\xymatrix{V\/\\!\/G \\ar[r]^{\\mathbf{c}} & \\mathbf{B}G}$\n of Example \\ref{ActionGroupoidIsRhoAssociated}\n and the $\\infty$-bundle $\\rho$-associated to $P$ by Definition \\ref{AssociatedBundle}.\n \\label{UniversalAssociatedBundle}\n\\end{proposition}\n\\proof\n By Remark \\ref{ProductActionByPullback} the product action is given by the \n pullback \n $$\n \\xymatrix{\n P \\times V \\times G^{\\times_\\bullet}\n\t \\ar[r]\n\t \\ar[d]\n\t &\n\t V \\times G^{\\times_\\bullet}\n\t \\ar[d]\n\t \\\\\n\t P \\times G^{\\times_\\bullet} \\ar[r] & G^{\\times_\\bullet}\n }\n $$\n in $\\mathbf{H}^{\\Delta^{\\mathrm{op}}}$. 
\n By Lemma \\ref{RealizationPreservesProductOfRepresentations} the realization functor\n preserves this $\\infty$-pullback. By \n Remark \\ref{ProductActionByPullback} it sends the left morphism to the \n associated bundle, and by Theorem \\ref{PrincipalInfinityBundleClassification}\n it sends the bottom morphism to $g_X$. Therefore it produces an $\\infty$-pullback\n diagram of the form\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n P \\times_G V \\ar[r] \\ar[d] & V\/\!\/G \\ar[d]^{\\mathbf{c}}\n\t \\\\\n\t X \\ar[r]^{g_X} & \\mathbf{B}G\\,.\n }\n }\n $$ \n\\hfill{$\\square$}\\\\\n\\begin{remark}\n This says that $\\xymatrix{V\/\!\/G \\ar[r]^{\\mathbf{c}} & \\mathbf{B}G}$ is both \n the $V$-fiber $\\infty$-bundle \n $\\rho$-associated to the universal $G$-principal $\\infty$-bundle, Example\n \\ref{ActionGroupoidIsRhoAssociated}, \n and the universal $\\infty$-bundle for $\\rho$-associated $\\infty$-bundles.\n \\label{RhoAssociatedToUniversalIsUniversalVBundle}\n\\end{remark}\n\\begin{proposition}\n Every $\\rho$-associated $\\infty$-bundle is a $V$-fiber $\\infty$-bundle, \n Definition \\ref{FiberBundle}.\n \\label{AssociatedIsFiberBundle}\n\\end{proposition}\n\\proof\n Let $P \\times_G V \\to X$ be a $\\rho$-associated $\\infty$-bundle.\n By the previous Proposition \\ref{UniversalAssociatedBundle} it is \n the pullback $g_X^* (V\/\!\/G)$ of the universal $\\rho$-associated bundle.\n By Proposition \\ref{EveryGBundleIsLocallyTrivial} there exists an \n effective epimorphism $\\xymatrix{U \\ar@{->>}[r] & X}$ over which\n $P$ trivializes, hence such that $g_X|_U$ factors through the point, up\n to equivalence.
In summary and by the pasting law, Proposition \\ref{PastingLawForPullbacks},\n this gives a pasting of $\\infty$-pullbacks of the form\n $$\n \\raisebox{20pt}{\n \\xymatrix@R=8pt{\n\t U \\times V\n\t \\ar[dd]\n\t \\ar[r]\n\t &\n\t P \\times_G V\n\t \\ar[r]\n\t \\ar[dd]\n\t &\n\t V\/\\!\/G\n\t \\ar[dd]\n\t \\\\\n\t \\\\\n\t U \\ar@{->>}[r] \n\t \\ar[dr]\n\t & \n\t X\n\t \\ar[r]^{g_X}\n\t &\n\t \\mathbf{B}G\n\t \\\\\n\t & {*}\n\t \\ar[ur]\n\t}\n\t}\n $$\n which exhibits $P \\times_G V \\to X$ as a $V$-fiber bundle by a local trivialization\n over $U$.\n\\hfill{$\\square$}\\\\\n\nSo far this shows that every $\\rho$-associated $\\infty$-bundle is a\n$V$-fiber bundle. We want to show that, conversely, every $V$-fiber bundle\nis associated to a principal $\\infty$-bundle.\n\\begin{definition}\n Let $V \\in \\mathbf{H}$ be a $\\kappa$-compact object, for some regular cardinal $\\kappa$. \n By the characterization of Definition \\ref{RezkCharacterization}, there exists\n an $\\infty$-pullback square in $\\mathbf{H}$ of the form\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n\t V \\ar[r] \\ar[d] & \\widehat {\\mathrm{Obj}}_\\kappa \\ar[d]\n\t \\\\\n\t {*} \\ar[r]^<<<<<<{\\vdash V} & \\mathrm{Obj}_\\kappa\n\t}\n\t}\n $$\n Write\n $$\n \\mathbf{B}\\mathbf{Aut}(V) := \\mathrm{im}(\\vdash V)\n $$\n for the $\\infty$-image, Definition \\ref{image}, \n of the classifying morphism $\\vdash V$ of $V$. \n By definition this comes with an effective epimorphism \n $$\n \\xymatrix{\n\t {*} \\ar@{->>}[r] & \\mathbf{B}\\mathbf{Aut}(V)\n\t \\ar@{^{(}->}[r] & \\mathrm{Obj}_\\kappa\n\t}\n\t\\,,\n $$\n and hence, by Proposition \\ref{InfinityGroupObjectsAsGroupoidObjects},\n it is the delooping of an $\\infty$-group \n $$\n \\mathbf{Aut}(V) \\in \\mathrm{Grp}(\\mathbf{H})\n $$\n as indicated. 
We call this the \\emph{internal automorphism $\\infty$-group} of $V$.\n \n By the pasting law, Proposition \\ref{PastingLawForPullbacks}, \n the image factorization gives a pasting\n of $\\infty$-pullback diagrams of the form\n $$\n \\xymatrix{\n\t V \\ar[r] \\ar[d] & V\/\!\/\\mathbf{Aut}(V) \\ar[r] \\ar[d]^{\\mathbf{c}_V} & \n\t \\widehat {\\mathrm{Obj}}_\\kappa \\ar[d]\n\t \\\\\n\t {*} \\ar@{->>}[r]^<<<<<<{\\vdash V} & \\mathbf{B}\\mathbf{Aut}(V) \\ar@{^{(}->}[r] & \n\t \\mathrm{Obj}_\\kappa\n\t}\n $$\n By Theorem~\\ref{PrincipalInfinityBundleClassification} this defines a canonical \n $\\infty$-action \n $$\n \\rho_{\\mathbf{Aut}(V)} : V \\times \\mathbf{Aut}(V) \\to V\n $$\n of $\\mathbf{Aut}(V)$ on $V$ with homotopy quotient $V\/\!\/\\mathbf{Aut}(V)$\n as indicated. \n \\label{InternalAutomorphismGroup}\n\\end{definition}\n\\begin{proposition}\n Every $V$-fiber $\\infty$-bundle is $\\rho_{\\mathbf{Aut}(V)}$-associated to an \n $\\mathbf{Aut}(V)$-principal $\\infty$-bundle.\n \\label{VBundleIsAssociated}\n\\end{proposition}\n\\proof\n Let $E \\to X$ be a $V$-fiber $\\infty$-bundle.
\n By Definition \\ref{FiberBundle} there exists an effective epimorphism\n $\\xymatrix{U \\ar@{->>}[r] & X}$ along which the bundle trivializes locally.\n It follows \n by the second Axiom in Definition \\ref{RezkCharacterization}\n that on $U$ the morphism $\\xymatrix{X \\ar[r]^<<<<<{\\vdash E} & \\mathrm{Obj}_\\kappa}$\n which classifies $E \\to X$ factors through the point\n $$\n \\raisebox{20pt}{\n \\xymatrix@R=8pt{\n\t U \\times V \\ar[r]\\ar[dd] & E \\ar [r] \\ar[dd] & \\widehat{\\mathrm{Obj}}_\\kappa \\ar[dd]\n\t \\\\\n\t \\\\\n\t U \\ar@{->>}[r] \\ar[dr] & X \\ar[r]^<<<<<{\\vdash E} & \\mathrm{Obj}_\\kappa.\n\t \\\\\n\t & {*} \\ar[ur]_<<<<<{\\vdash V}\n\t}\n\t}\n $$\n Since the point inclusion, in turn, factors through its $\\infty$-image \n $\\mathbf{B}\\mathbf{Aut}(V)$, Definition \\ref{InternalAutomorphismGroup},\n this yields the outer commuting diagram of the following form\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n\t U \\ar[r] \\ar@{->>}[d] & {*} \\ar[r] & \\mathbf{B}\\mathbf{Aut}(V) \\ar@{^{(}->}[d]\n\t \\\\\n\t X \\ar[rr]_{\\vdash E} \n\t \\ar@{-->}[urr]^{g}\n\t && \\mathrm{Obj}_\\kappa\n\t}}\n $$\n By the epi\/mono factorization system of Proposition \\ref{EpiMonoFactorizationSystem}\n there is a diagonal lift $g$ as indicated. 
Using again the \n pasting law and by Definition \\ref{InternalAutomorphismGroup},\n this factorization induces a pasting of $\\infty$-pullbacks of the form\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n\t E \\ar[r] \\ar[d] & V\/\!\/\\mathbf{Aut}(V) \\ar[r] \\ar[d]^{\\mathbf{c}_V} & \n\t \\widehat {\\mathrm{Obj}}_\\kappa \\ar[d]\n\t \\\\\n\t X \\ar[r]^<<<<<{g} & \\mathbf{B}\\mathbf{Aut}(V) \\ar@{^{(}->}[r] & \\mathrm{Obj}_\\kappa\n\t}\n\t}\n $$\n Finally, by Proposition \\ref{UniversalAssociatedBundle}, this \n exhibits $E \\to X$ as being $\\rho_{\\mathbf{Aut}(V)}$-associated to the \n $\\mathbf{Aut}(V)$-principal $\\infty$-bundle with class $[g] \\in H^1(X,\\mathbf{Aut}(V))$.\n\\hfill{$\\square$}\\\\\n\\begin{theorem}\n $V$-fiber $\\infty$-bundles over $X \\in \\mathbf{H}$ are classified by \n $H^1(X, \\mathbf{Aut}(V))$.\n \n Under this classification, the $V$-fiber $\\infty$-bundle corresponding to \n $[g] \\in H^1(X, \\mathbf{Aut}(V))$ is identified,\n up to equivalence, with the $\\rho_{\\mathbf{Aut}(V)}$-associated $\\infty$-bundle \n (Definition~\\ref{AssociatedBundle}) to the $\\mathbf{Aut}(V)$-principal $\\infty$-bundle\n corresponding to $[g]$ by Theorem~\\ref{PrincipalInfinityBundleClassification}.\n \\label{VBundleClassification}\n\\end{theorem}\n\\proof\n By Proposition~\\ref{VBundleIsAssociated} every morphism \n $\\xymatrix{\n X \\ar[r]^<<<<<{\\vdash E} & \\mathrm{Obj}_\\kappa\n }$\n that classifies a small $\\infty$-bundle $E \\to X$ \n which happens to be a $V$-fiber $\\infty$-bundle factors via some $g$\n through \n the moduli for $\\mathbf{Aut}(V)$-principal $\\infty$-bundles\n $$ \n \\xymatrix{\n\t X\n\t \\ar[r]^<<<<<{g}\n\t \\ar@\/_1pc\/[rr]_{\\vdash E}\n\t &\n \\mathbf{B}\\mathbf{Aut}(V) \n\t \\ar@{^{(}->}[r]\n\t & \\mathrm{Obj}_\\kappa\n }\n\t \\,.\n $$ \n Therefore it only remains to show that every homotopy \n $(\\vdash E_1) \\Rightarrow (\\vdash E_2)$ also factors \n through a homotopy $g_1 \\Rightarrow g_2$.
\n This follows by applying the epi\/mono lifting property of \n Proposition \\ref{EpiMonoFactorizationSystem} to the diagram\n $$\n \\xymatrix{\n\t X \\coprod X \\ar[r]^<<<<<{(g_1, g_2)} \\ar@{->>}[d] & \\mathbf{B}\\mathbf{Aut}(V) \n\t \\ar@{^{(}->}[d]\n\t \\\\\n\t X \\ar[r] \\ar@{-->}[ur] & \\mathrm{Obj}_\\kappa\n\t }\n $$\n The outer diagram exhibits the original homotopy. The left morphism is \n an effective epi (for instance immediately by Proposition \\ref{EffectiveEpiIsEpiOn0Truncation}), \n the right morphism is a monomorphism by construction. Therefore the dashed\n lift exists as indicated and so the top left triangular diagram exhibits\n the desired factorizing homotopy.\n\\hfill{$\\square$}\\\\\n\\begin{remark}\n In the special case that $\\mathbf{H} = \\mathrm{Grpd}_{\\infty}$, the classification \n Theorem \\ref{VBundleClassification}\n is classical \\cite{Stasheff,May}, traditionally\n stated in (what in modern terminology is) the \n presentation of $\\mathrm{Grpd}_{\\infty}$ by simplicial sets\n or by topological spaces. Recent discussions include \\cite{BlomgrenChacholski}.\n For $\\mathbf{H}$ a general 1-localic $\\infty$-topos (meaning: with a 1-site of definition), \n the statement of Theorem \\ref{VBundleClassification} appears in \\cite{Wendt}, \n formulated there in terms of the presentation of $\\mathbf{H}$ by simplicial presheaves.\n (We discuss the relation of these presentations to the above general abstract result \n in \\cite{NSSb}.)\n Finally, one finds that the classification of \\emph{$G$-gerbes} \\cite{Giraud} and \n \\emph{$G$-2-gerbes} in \\cite{Breen} is the special case of the general statement,\n for $V = \\mathbf{B}G$ and $G$ a 1-truncated $\\infty$-group. 
\n This we discuss below in Section~\\ref{StrucInftyGerbes}.\n \\label{ReferencesOnClassificationOfVBundles}\n\\end{remark}\n\nWe close this section with a list of some fundamental classes \nof examples of $\\infty$-actions, or equivalently, \nby Remark \\ref{RhoAssociatedToUniversalIsUniversalVBundle}, \nof universal associated $\\infty$-bundles. \nFor doing so we use again that, \nby Theorem \\ref{PrincipalInfinityBundleClassification}, to give an \n$\\infty$-action of $G$ on $V$ is equivalent to giving a fiber\nsequence of the form $V \\to V\/\!\/G \\to \\mathbf{B}G$.\nTherefore the following list mainly serves to associate a traditional\n\\emph{name} with a given $\\infty$-action.\n\\begin{example}\n \\label{ExamplesOfActions}\n The following are $\\infty$-actions.\n \\begin{enumerate}\n \\item\n\t For every $G \\in \\mathrm{Grp}(\\mathbf{H})$, the fiber sequence\n\t $$\n\t \\raisebox{20pt}{\n\t \\xymatrix{\n\t G \\ar[d]\n\t\t \\\\\n\t\t {*} \\ar[r] & \\mathbf{B}G\n\t }\n\t }\n\t $$\n\t which defines $\\mathbf{B}G$ by Theorem \\ref{DeloopingTheorem}\n\t induces the \\emph{right action of $G$ on itself}\n\t $$\n\t * \\simeq G\/\!\/G\n\t \\,.\n\t $$\n\t At the same time\n\t this sequence, but now regarded as a bundle over $\\mathbf{B}G$, \n\t is the universal $G$-principal $\\infty$-bundle, Example \\ref{UniversalPrincipal}.\n \\item\n For every object $X \\in \\mathbf{H}$ write \n\t $$\n\t \\mathbf{L}X := X \\times_{X \\times X} X\n\t $$ \n\t for its \\emph{free loop space} object, the $\\infty$-fiber product of the \n\t diagonal on $X$ along itself\n\t $$\n\t \\raisebox{20pt}{\n\t \\xymatrix{\n\t \\mathbf{L}X \\ar[r] \\ar[d]_{\\mathrm{ev}_{*}} & X \\ar[d]\n\t\t \\\\\n\t\t X \\ar[r] & X \\times X\n\t }\n\t }.\n\t $$\n For every $G \\in \\mathrm{Grp}(\\mathbf{H})$ there is a fiber sequence\n\t $$\n\t \\raisebox{20pt}{\n\t \\xymatrix{\n\t G \n\t\t \\ar[d]\n\t\t \\\\\n\t\t \\mathbf{L}\\mathbf{B}G\n\t\t \\ar[r]^{\\mathrm{ev}_{*}}\n\t\t &\n\t\t \\mathbf{B}G\n\t }\n\t 
}\n\t \\,.\n\t $$\n\t This exhibits the \\emph{adjoint action of $G$ on itself}\n\t $$\n\t \\mathbf{L}\\mathbf{B}G \\simeq G\/\!\/_{\\mathrm{ad}} G\n\t \\,.\n\t $$\n\t\\item\n\t For every $V \\in \\mathbf{H}$ there is the canonical \n\t $\\infty$-action of the \\emph{automorphism $\\infty$-group}\n\t $$\n\t \\raisebox{20pt}{\n\t \\xymatrix{\n\t\t V\n\t\t \\ar[d]\n\t\t \\\\\n\t\t V\/\!\/\\mathbf{Aut}(V)\n\t\t \\ar[r]\n\t\t &\n\t\t \\mathbf{B}\\mathbf{Aut}(V)\n\t\t}\n\t\t}\n\t\t\\,,\n\t $$\n\t introduced in Definition \\ref{InternalAutomorphismGroup}; this exhibits the\n\t \\emph{automorphism action}.\n \\end{enumerate}\n\\end{example}\n\n\\subsection{Sections and twisted cohomology}\n\\label{TwistedCohomology}\n\nWe discuss a general notion of \\emph{twisted cohomology} or \n\\emph{cohomology with local coefficients} in any $\\infty$-topos $\\mathbf{H}$, \nwhere the \\emph{local coefficient $\\infty$-bundles} are \nassociated $\\infty$-bundles as discussed above, and where the cocycles are\n\\emph{sections} of these local coefficient bundles.\n\n\\medskip\n\n\\begin{definition}\n Let $p : E \\to X$ be any morphism in $\\mathbf{H}$, to be regarded as an \n $\\infty$-bundle over $X$.
A \\emph{section} of $E$ is a diagram\n $$\n \\raisebox{20pt}{\n \\xymatrix@!C=40pt@!R=30pt{\n\t & E \\ar[d]^p\n\t \\\\\n\t X \\ar[r]_{\\mathrm{id}}^{\\ }=\"t\" \n\t \\ar[ur]^{\\sigma}_{\\ }=\"s\"\n\t & X\n\t %\n\t \\ar@{=>}^{\\simeq} \"s\"; \"t\"\n\t}\n\t}\n $$\n (where for emphasis we display the presence of the homotopy filling the diagram).\n The \\emph{$\\infty$-groupoid of sections} of $E \\stackrel{p}{\\to} X$ is the homotopy fiber\n $$\n \\Gamma_X(E) := \\mathbf{H}(X,E) \\times_{\\mathbf{H}(X,X)} \\{\\mathrm{id}_X\\}\n $$\n of the space of all morphisms $X \\to E$ on those that cover the identity on $X$.\n \\label{Sections}\n\\end{definition}\nWe record two elementary but important observations about spaces of sections.\n\\begin{observation}\n There is a canonical identification\n $$\n \\Gamma_X(E) \\simeq \\mathbf{H}_{\/X}(\\mathrm{id}_X, p)\n $$\n of the space of sections of $E \\to X$ with the hom-$\\infty$-groupoid in the \n slice $\\infty$-topos $\\mathbf{H}_{\/X}$ between the identity on $X$ and the bundle map $p$.\n \\label{SectionBySliceMaps}\n\\end{observation}\n\\proof\n For instance by Proposition 5.5.5.12 in \\cite{Lurie}.\n\\hfill{$\\square$}\\\\\n\\begin{lemma}\n Let \n $$\n \\xymatrix{\n\t E_1 \\ar[r] \\ar[d]^{p_1} & E_2 \\ar[d]^{p_2}\n\t \\\\\n\t B_1 \\ar[r]^{f} & B_2\n\t}\n $$ \n be an $\\infty$-pullback diagram in $\\mathbf{H}$\n and let $\\xymatrix{X \\ar[r]^{g_X} & B_1}$ be any morphism. 
Then \n post-composition with $f$ induces\n a natural equivalence of hom-$\\infty$-groupoids\n $$\n \\mathbf{H}_{\/B_1}(g_X, p_1)\n\t\\simeq\n\t\\mathbf{H}_{\/B_2}(f\\circ g_X, p_2)\n\t\\,.\n $$\n \\label{SliceMapsIntoPullbacks}\n\\end{lemma}\n\\proof\n By Proposition 5.5.5.12 in \\cite{Lurie}, the left hand side is given by the homotopy pullback\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n\t \\mathbf{H}_{\/B_1}(g_X, p_1) \\ar[r] \\ar[d] & \\mathbf{H}(X, E_1) \n\t \\ar[d]^{\\mathbf{H}(X,p_1)}\n\t \\\\\n\t \\{g_X\\} \\ar[r] & \\mathbf{H}(X,B_1)\\,.\n\t}\n\t}\n $$\n Since the hom-$\\infty$-functor\n $\\mathbf{H}(X,-) : \\mathbf{H} \\to \\mathrm{Grpd}_{\\infty}$ \n preserves the $\\infty$-pullback $E_1 \\simeq f^* E_2$, this\n extends to a pasting of $\\infty$-pullbacks, which by the pasting law \n (Proposition \\ref{PastingLawForPullbacks}) is\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n\t \\mathbf{H}_{\/B_1}(g_X, p_1) \\ar[r] \\ar[d] \n\t & \n\t \\mathbf{H}(X, E_1) \\ar[d]^{\\mathbf{H}(X,p_1)} \\ar[r]\n\t &\n\t \\mathbf{H}(X, E_2)\n\t \\ar[d]^{\\mathbf{H}(X, p_2)}\n\t \\\\\n\t \\{g_X\\} \\ar[r] & \\mathbf{H}(X,B_1)\n\t \\ar[r]_-{\\mathbf{H}(X, f)} &\n \\mathbf{H}(X, B_2)\t \n\t}\n\t}\n\t\\;\\;\\;\n\t\\simeq\n\t\\;\\;\\;\n\t\\raisebox{20pt}{\n\t\\xymatrix{\n\t \\mathbf{H}_{\/B_2}(f \\circ g_X, p_2)\n\t \\ar[r]\n\t \\ar[d]\n\t &\n\t \\mathbf{H}(X, E_2)\n\t \\ar[d]^{\\mathbf{H}(X, p_2)}\n\t \\\\\n\t \\{f\\circ g_X\\}\n\t \\ar[r]\n\t &\n\t \\mathbf{H}(X, B_2).\n\t}\n\t}\n $$\n\\hfill{$\\square$}\\\\\nFix now an $\\infty$-group $G \\in \\mathrm{Grp}(\\mathbf{H})$ and an \n$\\infty$-action $\\rho : V \\times G \\to V$. 
Write \n$$\n \\xymatrix{\n V \\ar[r] & V\/\\!\/G \\ar[d]^{\\mathbf{c}}\n\t\\\\\n\t& \\mathbf{B}G\n }\n$$\nfor the corresponding \\emph{universal $\\rho$-associated $\\infty$-bundle} \nas discussed in Section~\\ref{StrucRepresentations}.\n\\begin{proposition}\n For $g_X : X \\to \\mathbf{B}G$ a cocycle and $P \\to X$ the corresponding\n$G$-principal $\\infty$-bundle according to Theorem \\ref{PrincipalInfinityBundleClassification},\nthere is a natural equivalence\n$$\n \\Gamma_X(P \\times_G V) \\simeq \\mathbf{H}_{\/\\mathbf{B}G}(g_X, \\mathbf{c})\n$$\nbetween the space of sections of the corresponding $\\rho$-associated $V$-bundle \n(Definition~\\ref{AssociatedBundle}) and the hom-$\\infty$-groupoid of the slice\n$\\infty$-topos of $\\mathbf{H}$ over $\\mathbf{B}G$, between $g_X$ and $\\mathbf{c}$. \nSchematically:\n\\[\n\\left\\{ \n\t\\begin{xy} \n\t\t(10,10)*+{E}=\"1\";\n\t\t(-10,-10)*+{X}=\"2\";\n\t\t(10,-10)*+{X}=\"3\"; \n\t\t(5,0)*+{}=\"4\";\n\t\t(3,-5)*+{}=\"5\"; \n\t\t{\\ar^-{p} \"1\";\"3\"};\n\t\t{\\ar^-{\\sigma} \"2\";\"1\"}; \n\t\t{\\ar_-{\\mathrm{id}} \"2\";\"3\"};\t\n\t\t{\\ar@{=>}^-{\\simeq} \"4\";\"5\"};\n\t\\end{xy} \n\\right\\} \n \\;\\;\n \\simeq\n \\;\\;\n\\left\\{\n\\begin{xy} \n\t\t(10,10)*+{V\/\\!\/G}=\"1\";\n\t\t(-10,-10)*+{X}=\"2\";\n\t\t(10,-10)*+{\\mathbf{B}G}=\"3\"; \n\t\t(5,0)*+{}=\"4\";\n\t\t(3,-5)*+{}=\"5\"; \n\t\t{\\ar^-{\\mathbf{c}} \"1\";\"3\"};\n\t\t{\\ar^-{\\sigma} \"2\";\"1\"}; \n\t\t{\\ar_-{\\mathrm{g_X}} \"2\";\"3\"};\t\n\t\t{\\ar@{=>}^-{\\simeq} \"4\";\"5\"};\n\t\\end{xy} \n\\right\\} \n\\]\n\\label{SectionsAndSliceHoms}\n\\end{proposition}\n\\proof\n By Observation \\ref{SectionBySliceMaps} and Lemma \\ref{SliceMapsIntoPullbacks}.\n\\hfill{$\\square$}\\\\\n\\begin{observation}\n If in the above the cocycle $g_X$ is trivializable, in the sense that it factors through the\n point $* \\to \\mathbf{B}G$ (equivalently if its class $[g_X] \\in H^1(X,G)$ is\n trivial) then there is an equivalence\n $$\n 
\\mathbf{H}_{\/\\mathbf{B}G}(g_X, \\mathbf{c})\n\t\\simeq\n\t\\mathbf{H}(X,V)\n\t\\,.\n $$\n \\label{TwistedCohomologyIsLocallyVCohomology}\n\\end{observation}\n\\proof\n In this case the homotopy pullback on the right in the proof of\n Proposition \\ref{SectionsAndSliceHoms} is\n $$\n\t\\raisebox{20pt}{\n\t\\xymatrix{\n\t \\mathbf{H}_{\/\\mathbf{B}G}(g_X, \\mathbf{c})\n\t \\ar@{}[r]|{\\simeq}\n\t &\n\t \\mathbf{H}(X, V)\n\t \\ar[r]\n\t \\ar[d]\n\t &\n\t \\mathbf{H}(X, V\/\\!\/G)\n\t \\ar[d]^{\\mathbf{H}(X, \\mathbf{c})}\n\t \\\\\n\t \\{g_X\\}\n\t \\ar@{}[r]|\\simeq\n\t &\n\t \\mathbf{H}(X, {*})\n\t \\ar[r]\n\t &\n\t \\mathbf{H}(X, \\mathbf{B}G)\n\t}\n\t}\n $$\n using that $V \\to V\/\\!\/G \\stackrel{\\mathbf{c}}{\\to} \\mathbf{B}G$ is a fiber\n sequence by definition, and that $\\mathbf{H}(X,-)$ preserves this fiber sequence.\n\\hfill{$\\square$}\\\\\n\\begin{remark}\n Since by Proposition~\\ref{EveryGBundleIsLocallyTrivial} \n every cocycle $g_X$ trivializes locally over some cover \n $\\xymatrix{U \\ar@{->>}[r] & X}$\n and equivalently, by Proposition \\ref{AssociatedIsFiberBundle}, every\n $\\infty$-bundle $P \\times_G V$ trivializes locally, \n Observation \\ref{TwistedCohomologyIsLocallyVCohomology} says that\n elements $ \\sigma \\in \\Gamma_X(P \\times_G V) \\simeq \\mathbf{H}_{\/\\mathbf{B}G}(g_X, \\mathbf{c})$\n \\emph{locally} are morphisms $\\sigma|_U : U \\to V$ with values in $V$.\n They fail to be so \\emph{globally} to the extent that $[g_X] \\in H^1(X, G)$ is non-trivial,\n hence to the extent that $P \\times_G V \\to X$ is non-trivial.\n\\end{remark}\nThis motivates the following definition.\n\\begin{definition}\n We say that \n the $\\infty$-groupoid\n $\\Gamma_X(P \\times_G V) \\simeq \\mathbf{H}_{\/\\mathbf{B}G}(g_X, \\mathbf{c})$\n from Proposition \\ref{SectionsAndSliceHoms}\n is the $\\infty$-groupoid of \\emph{$[g_X]$-twisted cocycles} with values in $V$, with respect to the\n \\emph{local coefficient $\\infty$-bundle} $V\/\\!\/G 
\\stackrel{\\mathbf{c}}{\\to} \\mathbf{B}G$.\n \n Accordingly, its set of connected components we call the \n \\emph{$[g_X]$-twisted $V$-cohomology} with respect to the local coefficient bundle $\\mathbf{c}$\n and write:\n $$\n H^{[g_X]}(X, V) := \\pi_0 \\mathbf{H}_{\/\\mathbf{B}G}(g_X, \\mathbf{c})\n\t\\,.\n $$\n \\label{TwistedCohomologyInOvertopos}\n\\end{definition}\n\\begin{remark}\n The perspective that twisted cohomology is the theory of sections of \n associated bundles whose fibers are classifying spaces is maybe most famous\n for the case of twisted K-theory, where it was described in this form in \n \\cite{Rosenberg}. But already the old theory of \n \\emph{ordinary cohomology with local coefficients} is of this form, \n as is made manifest in \\cite{BFG} (we discuss this in detail in \\cite{NSSc}).\n \n A proposal for a comprehensive theory in terms of bundles of topological spaces is\n in \\cite{MaySigurdsson} and a systematic formulation \n in $\\infty$-category theory and\n for the case of multiplicative generalized cohomology theories is in \n \\cite{AndoBlumbergGepner}. The formulation above refines this, unstably, \n to geometric cohomology\n theories\/(nonabelian) sheaf hypercohomology, \n hence from bundles of classifying spaces to \n $\\infty$-bundles of moduli $\\infty$-stacks. \n \n A wealth of examples and applications of such geometric nonabelian twisted cohomology\n of relevance in quantum field theory and in string theory is discussed in \n \\cite{SSS,Schreiber}. 
We discuss further examples in \\cite{NSSc}.\n \\label{ReferencesOnTwisted}\n\\end{remark}\n\\begin{remark}\n Of special interest is the case where \n $V$ is pointed connected, hence (by Theorem \\ref{DeloopingTheorem}) \n of the form $V = \\mathbf{B}A$ for some $\\infty$-group $A$, and so \n (by Definition \\ref{cohomology}) the coefficient for degree-1 $A$-cohomology, \n and hence itself (by Theorem \\ref{PrincipalInfinityBundleClassification}) the moduli $\\infty$-stack for\n $A$-principal $\\infty$-bundles. \n In this case $H^{[g_X]}(X, \\mathbf{B}A)$ is\n \\emph{degree-1 twisted $A$-cohomology}. Generally, if $V = \\mathbf{B}^n A$\n it is \\emph{degree-$n$ twisted $A$-cohomology}. In analogy with Definition \n \\ref{cohomology} this is sometimes written\n $$\n H^{n+[g_X]}(X,A) := H^{[g_X]}(X,\\mathbf{B}^n A)\n\t\\,.\n $$\n \n Moreover, in this case $V\/\!\/G$ is itself pointed connected, hence of the form\n $\\mathbf{B}\\hat G$ for some $\\infty$-group $\\hat G$, and so the universal local \n coefficient bundle\n $$\n \\xymatrix{\n\t \\mathbf{B}A \\ar[r] & \\mathbf{B}\\hat G \\ar[d]^{\\mathbf{c}}\n\t \\\\\n\t & \\mathbf{B}G\n\t}\n $$\n exhibits $\\hat G$ as an \\emph{extension of $\\infty$-groups} of $G$ by $A$.\n This case we discuss below in Section \\ref{ExtensionsOfCohesiveInfinityGroups}.\n \\label{PointedConnectedLocalCoefficients}\n\\end{remark}\nIn this notation the local coefficient bundle $\\mathbf{c}$ is left implicit.\nThis convenient abuse of notation is justified to some extent by the fact that there\nis a \\emph{universal local coefficient bundle}:\n\\begin{example}\n The classifying morphism of the $\\mathbf{Aut}(V)$-action on some \n $V \\in \\mathbf{H}$ from Definition \\ref{InternalAutomorphismGroup} according to \n Theorem \\ref{PrincipalInfinityBundleClassification} yields a local coefficient \n $\\infty$-bundle of the form\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n\t V \\ar[r] & V\/\!\/\\mathbf{Aut}(V)\\ar[d]\n\t \\\\\n\t & 
\\mathbf{B}\\mathbf{Aut}(V)\n\t}\n\t}\n $$\n which we may call the \\emph{universal local $V$-coefficient bundle}.\n In the case that $V$ is pointed connected and hence of the form\n $V = \\mathbf{B}G$\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n\t \\mathbf{B}G \\ar[r] & (\\mathbf{B}G)\/\\!\/\\mathbf{Aut}(\\mathbf{B}G)\\ar[d]\n\t \\\\\n\t & \\mathbf{B}\\mathbf{Aut}(\\mathbf{B}G)\n\t}\n\t}\n $$\n the universal twists of the corresponding twisted $G$-cohomology are \n the \\emph{$G$-$\\infty$-gerbes}. These we discuss below in section\n \\ref{StrucInftyGerbes}.\n \\label{UniversalTwistByAutomorphisms}\n\\end{example}\n\n\n\n\n\\subsection{Extensions and twisted bundles}\n\\label{ExtensionsOfCohesiveInfinityGroups}\n\n\nWe discuss the notion of \\emph{extensions} of $\\infty$-groups\n(see Section~\\ref{StrucInftyGroups}), generalizing the traditional notion of \ngroup extensions. \nThis is in fact a special case of the notion of \nprincipal $\\infty$-bundle, Definition~\\ref{principalbundle}, for base space objects that \nare themselves deloopings of $\\infty$-groups. \nFor every extension of $\\infty$-groups, there is the \ncorresponding notion of \n\\emph{lifts of structure $\\infty$-groups} of principal $\\infty$-bundles. 
\nThese are classified equivalently by trivializations of an \\emph{obstruction class}\nand by the twisted cohomology with coefficients in the extension itself, regarded\nas a local coefficient $\\infty$-bundle.\n\nMoreover, we show\nthat principal $\\infty$-bundles with an extended structure $\\infty$-group\nare equivalent to principal $\\infty$-bundles with unextended structure $\\infty$-group\nbut carrying a principal $\\infty$-bundle for the \\emph{extending} $\\infty$-group on their\ntotal space, which on fibers restricts to the given $\\infty$-group extension.\nWe formalize these \\emph{twisted (principal) $\\infty$-bundles}\nand observe that they are classified by twisted cohomology, \nDefinition~\\ref{TwistedCohomologyInOvertopos}.\n\n\\medskip \n\n\\begin{definition}\n \\label{ExtensionOfInfinityGroups}\nWe say that a sequence of $\\infty$-groups (Definition~\\ref{inftygroupinootopos})\n\\[\n A \\to \\hat G \\to G\n\\]\nin $\\mathrm{Grp}(\\mathbf{H})$\n\\emph{exhibits $\\hat G$ as an extension of $G$ by $A$} if $\\hat G \\to G$ \nin $\\mathbf{H}$ is the\nquotient map $\\hat G \\to \\hat G \/\!\/A$, such that the cocycle \n$G \\to \\mathbf{B}A$ in $\\mathbf{H}$ corresponding\nto this by Theorem \\ref{PrincipalInfinityBundleClassification} is once deloopable.\n\nHence, by \nTheorem~\\ref{DeloopingTheorem}, the extension corresponds to a \nfiber sequence in $\\mathbf{H}$, Definition~\\ref{fiber sequence}, of the form\n$$\n \\xymatrix{\n \\mathbf{B}A \\ar[r] & \\mathbf{B}\\hat G \\ar[r]^{\\mathbf{p}} & \\mathbf{B}G\n\t\\ar[r]^{\\mathbf{c}} & \\mathbf{B}^2 A\n\t\\,.\n }\n$$\nWe write \n$$\n \\mathrm{Ext}(G,A) := \\mathbf{H}(\\mathbf{B}G, \\mathbf{B}^2 A)\n \\simeq (\\mathbf{B}A)\\mathrm{Bund}(\\mathbf{B}G)\n$$\nfor the \\emph{$\\infty$-groupoid of extensions} of $G$ by $A$.\n\\end{definition}\n\\begin{definition}\nGiven an $\\infty$-group extension \n$\\xymatrix{A \\ar[r] & \\hat G \\ar[r]^{\\Omega \\mathbf{c}} & G}$ \nand given a $G$-principal
$\\infty$-bundle $P \\to X$ in $\\mathbf{H}$, we say that a \n\\emph{lift} $\\hat P$ of $P$ to a $\\hat G$-principal $\\infty$-bundle is a lift \n$\\hat g_X$ of its classifying \ncocycle $g_X : X \\to \\mathbf{B}G$, under the equivalence of Theorem\n\\ref{PrincipalInfinityBundleClassification}, through the extension:\n$$\n \\raisebox{20pt}{\n \\xymatrix{\n & \\mathbf{B} {\\hat G}\n \\ar[d]^{\\mathbf{p}}\n \\\\\n X \\ar@{-->}[ur]^{\\hat g_X} \\ar[r]_-{g_X} & \\mathbf{B}G.\n }\n }\n$$\nAccordingly, the \\emph{$\\infty$-groupoid of lifts} of $P$ with respect to \n$\\mathbf{p}$ is \n$$\n \\mathrm{Lift}(P,\\mathbf{p}) := \\mathbf{H}_{\/\\mathbf{B}G}(g_X, \\mathbf{p})\n \\,.\n$$\n\\label{PrincipalInfinityBundleExtension}\n\\end{definition}\n\\begin{observation}\n By the universal property of the $\\infty$-pullback, a lift exists precisely if the \n cohomology class\n $$\n [\\mathbf{c}(g_X)] := [\\mathbf{c}\\circ g_X] \\in H^2(X, A)\n $$\n is trivial. \n \\label{ObstructionClassObstructs}\n\\end{observation}\nThis is implied by Theorem \\ref{ExtensionsAndTwistedCohomology}, to which we turn after\nintroducing the following terminology.\n\\begin{definition}\n In the above situation, \n we call $[\\mathbf{c}(g_X)]$ the \\emph{obstruction class} to the extension;\n and we call $[\\mathbf{c}] \\in H^2(\\mathbf{B}G, A)$ the\n \\emph{universal obstruction class} of extensions through $\\mathbf{p}$.\n \n We say that a \\emph{trivialization} of the obstruction cocycle\n $\\mathbf{c}(g_X)$ is a morphism $\\mathbf{c}(g_X) \\to *_X$ in $\\mathbf{H}(X, \\mathbf{B}^2 A)$,\n where ${*}_X : X \\to * \\to \\mathbf{B}^2 A$ is the trivial cocycle. 
Accordingly, the\n \\emph{$\\infty$-groupoid of trivializations of the obstruction} is\n $$\n \\mathrm{Triv}(\\mathbf{c}(g_X)) := \\mathbf{H}_{\/\\mathbf{B}^2 A}(\\mathbf{c}\\circ g_X, *_X)\n\t\\,.\n $$\n \\label{Obstructions}\n\\end{definition}\nWe now give three different characterizations of spaces of extensions of $\\infty$-bundles.\nThe first two, by spaces of twisted cocycles and by spaces of trivializations \nof the obstruction class, are immediate consequences of the previous discussion:\n\\begin{theorem}\n Let $P \\to X$ be a $G$-principal $\\infty$-bundle corresponding by\n Theorem \\ref{PrincipalInfinityBundleClassification} to a cocycle $g_X : X \\to \\mathbf{B}G$.\n \\begin{enumerate}\n \\item \n\t There is a natural equivalence \n\t $$\n\t \\mathrm{Lift}(P, \\mathbf{p}) \\simeq \\mathrm{Triv}(\\mathbf{c}(g_X))\n\t $$\n\t between the $\\infty$-groupoid of lifts of $P$ through $\\mathbf{p}$, \n Definition \\ref{PrincipalInfinityBundleExtension},\t \n\t and the $\\infty$-groupoid of trivializations \n\t of the obstruction class, Definition \\ref{Obstructions}.\n\t \\item \n\t There is a natural equivalence\n\t $\\mathrm{Lift}(P, \\mathbf{p}) \\simeq \\mathbf{H}_{\/\\mathbf{B}G}(g_X, \\mathbf{p})$\n\t between the $\\infty$-groupoid of lifts and the $\\infty$-groupoid of \n\t $g_X$-twisted cocycles relative to $\\mathbf{p}$, Definition \\ref{TwistedCohomologyInOvertopos},\n\t hence a classification\n\t $$\n\t \\pi_0 \\mathrm{Lift}(P, \\mathbf{p}) \\simeq H^{1+[g_X]}(X, A)\n\t $$\n\t of equivalence classes of lifts by the $[g_X]$-twisted $A$-cohomology of $X$\n\t relative to the local coefficient bundle\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n \\mathbf{B}A \\ar[r] & \\mathbf{B}\\hat G \\ar[d]^{\\mathbf{p}} \n \t \\\\\n\t & \\mathbf{B}G\\,.\n }\n }\n $$ \n\\end{enumerate}\n \\label{ExtensionsAndTwistedCohomology}\n\\end{theorem}\n\\proof\n The first statement is the special case of Lemma \\ref{SliceMapsIntoPullbacks}\n where the $\\infty$-pullback $E_1 
\\simeq f^* E_2$ in the notation there is identified\n with $\\mathbf{B}\\hat G \\simeq \\mathbf{c}^* {*}$.\n The second is evident after unwinding the definitions.\n\\hfill{$\\square$}\\\\\n\\begin{remark}\n For the special case that $A$ is 0-truncated, we may, by the discussion in \n \\cite{NikolausWaldorf, NSSc}, identify \n $\\mathbf{B}A$-principal $\\infty$-bundles with $A$-\\emph{bundle gerbes},\n \\cite{Mur}.\n Under this identification the $\\infty$-bundle classified by the \n obstruction class $[\\mathbf{c}(g_X)]$ above is what is called the\n \\emph{lifting bundle gerbe} of the lifting problem, see for instance \n \\cite{CBMMS} for a review. \n In this case \n the first item of Theorem \\ref{ExtensionsAndTwistedCohomology} reduces to\n Theorem 2.1 in \\cite{Waldorf} and Theorem A (5.2.3) in \\cite{NikolausWaldorf2}. \n The reduction of this statement to \n connected components, \n hence the special case of Observation \\ref{ObstructionClassObstructs},\n was shown in \\cite{BreenBitorseurs}.\n \\label{ReferencesOnLiftings}\n\\end{remark}\nWhile, therefore, the discussion of extensions of $\\infty$-groups and of \nlifts of structure $\\infty$-groups\nis just a special case of the discussion in the previous sections, this special case\nadmits geometric representatives of cocycles in the corresponding twisted cohomology by\ntwisted principal $\\infty$-bundles. This we turn to now. 
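\nBefore giving the definition, a classical special case may serve as orientation (an illustration only, phrased in an ambient $\\infty$-topos of smooth stacks which we do not make explicit here): for the central extension of Lie groups $\\mathbb{Z}\/2 \\to \\mathrm{Spin}(n) \\to \\mathrm{SO}(n)$ the fiber sequence of Definition \\ref{ExtensionOfInfinityGroups} reads\n$$\n \\xymatrix{\n \\mathbf{B}\\mathbb{Z}\/2 \\ar[r] & \\mathbf{B}\\mathrm{Spin}(n) \\ar[r]^{\\mathbf{p}} & \\mathbf{B}\\mathrm{SO}(n)\n\t\\ar[r]^-{w_2} & \\mathbf{B}^2 \\mathbb{Z}\/2\n\t\\,.\n }\n$$\nFor $g_X : X \\to \\mathbf{B}\\mathrm{SO}(n)$ the cocycle of the frame bundle of an oriented Riemannian manifold $X$, the obstruction class of Observation \\ref{ObstructionClassObstructs} is the second Stiefel-Whitney class $[\\mathbf{c}(g_X)] = w_2(X) \\in H^2(X, \\mathbb{Z}\/2)$; lifts through $\\mathbf{p}$, hence spin structures on $X$, exist precisely if this class vanishes, and the $\\infty$-groupoid of such lifts is identified by Theorem \\ref{ExtensionsAndTwistedCohomology} with the $[g_X]$-twisted $\\mathbb{Z}\/2$-cohomology of $X$.\n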
\n\\begin{definition}\n \\label{TwistedBundle}\n Given an extension of $\\infty$-groups $A \\to \\hat G \\xrightarrow{\\Omega \\mathbf{c}} G$\n and given a $G$-principal $\\infty$-bundle $P \\to X$, with class $[g_X] \\in H^1(X,G)$, \n a \\emph{$[g_X]$-twisted $A$-principal $\\infty$-bundle} on $X$ is an $A$-principal\n $\\infty$-bundle $\\hat P \\to P$ such that the cocycle $q : P \\to\\mathbf{B}A$\n corresponding to it under Theorem \\ref{PrincipalInfinityBundleClassification}\n is a morphism of $G$-$\\infty$-actions.\n \n The \\emph{$\\infty$-groupoid of $[g_X]$-twisted $A$-principal $\\infty$-bundles on $X$} is \n $$\n A\\mathrm{Bund}^{[g_X]}(X)\n\t:=\n\tG \\mathrm{Action}( P, \\mathbf{B}A )\n\t\\subset\n\t\\mathbf{H}(P, \\mathbf{B}A)\n\t\\,.\n $$\n\\end{definition}\n\\begin{observation}\n \\label{BundleOnTotalSpaceFromExtensionOfBundles}\n Given an $\\infty$-group extension $A \\to \\hat G \\stackrel{\\Omega \\mathbf{c}}{\\to} G$,\n an extension of a $G$-principal $\\infty$-bundle $P \\to X$\n to a $\\hat G$-principal $\\infty$-bundle, Definition \\ref{PrincipalInfinityBundleExtension}, induces\n an $A$-principal $\\infty$-bundle $\\hat P \\to P$ \n fitting into a pasting diagram of $\\infty$-pullbacks of the form\n$$\n \\raisebox{30pt}{\n \\xymatrix{\n \\hat G \\ar[r] \\ar[d]^{\\Omega \\mathbf{c}} & \\hat P \\ar[r] \\ar[d] & {*} \\ar[d]\n \\\\\n G \\ar[r] \\ar[d] & P \\ar[r]^{q} \\ar[d]& \\mathbf{B}A \\ar[r] \\ar[d] & {*} \\ar[d]\n \\\\\n {*} \\ar[r]^{x}&\n X \\ar@\/_1pc\/[rr]_g \\ar[r]^{\\hat g} & \\mathbf{B}\\hat G\n \\ar[r]^{\\mathbf{c}}& \\mathbf{B}G.\n }\n }\n$$\n In particular, it has the following properties: \n \\begin{enumerate}\n \\item $\\hat P \\to P$ is a $[g_X]$-twisted $A$-principal bundle, Definition \\ref{TwistedBundle};\n\t\\item for all points $x : * \\to X$ the restriction of $\\hat P \\to P$ to the \n\tfiber $P_x$ is equivalent to the $\\infty$-group extension $\\hat G \\to G$.\n \\end{enumerate} \n\\end{observation}\n\\proof\nThis 
follows from repeated application of the pasting law for $\\infty$-pullbacks, \nProposition~\\ref{PastingLawForPullbacks}.\n\nThe bottom composite $g : X \\to \\mathbf{B}G$ is a cocycle for the given $G$-principal $\\infty$-bundle \n$P \\to X$ and it factors through $\\hat g : X \\to \\mathbf{B}\\hat G$ by assumption of the existence of \nthe extension $\\hat P \\to P$. \n\nSince also the bottom right square is an $\\infty$-pullback by the given $\\infty$-group extension, \nthe pasting law asserts that the square over $\\hat g$ is also an $\\infty$-pullback, and then that so \nis the square over $q$. This exhibits $\\hat P$ as an $A$-principal $\\infty$-bundle over $P$ classified\nby the cocycle $q$ on $P$. By Theorem~\\ref{ClassificationOfTwistedGEquivariantBundles}\nthis $\\hat P \\to P$ is twisted $G$-equivariant.\n\nNow choose any point $x : {*} \\to X$ of the base space \nas on the left of the diagram. Pulling this back upwards \nthrough the diagram and using the pasting law and the \ndefinition of loop space objects $G \\simeq \\Omega \n\\mathbf{B}G \\simeq * \\times_{\\mathbf{B}G} *$, the diagram \ncompletes by $\\infty$-pullback squares on the left as indicated, which proves the claim.\n\\hfill{$\\square$}\\\\\n\n\\begin{theorem}\n The construction of Observation \\ref{BundleOnTotalSpaceFromExtensionOfBundles}\n extends to an equivalence of $\\infty$-groupoids\n $$\n A\\mathrm{Bund}^{[g_X]}(X)\n\t\\simeq\n\t\\mathbf{H}_{\/\\mathbf{B}G}(g_X, \\mathbf{c})\n $$\n between that of $[g_X]$-twisted $A$-principal bundles on $X$, \n Definition \\ref{TwistedBundle}, and the cocycle $\\infty$-groupoid of\n degree-1 $[g_X]$-twisted $A$-cohomology, Definition \\ref{TwistedCohomologyInOvertopos}.\n \n In particular the classification of $[g_X]$-twisted $A$-principal bundles is\n $$\n A\\mathrm{Bund}^{[g_X]}(X)_{\/\\sim}\n\t\\simeq\n\tH^{1+[g_X]}(X, A)\n\t\\,.\n $$\n \\label{ClassificationOfTwistedGEquivariantBundles}\n\\end{theorem}\n\\proof\n For $G = *$ the trivial 
group, the statement reduces to \n Theorem~\\ref{PrincipalInfinityBundleClassification}. The general proof\n works along the same lines as the proof of that theorem.\n The key step is the generalization of the proof of Proposition~\\ref{LocalTrivialityImpliesCocycle}.\n This proceeds verbatim as there, only with $\\mathrm{pt} : * \\to \\mathbf{B}G$ generalized to\n $i : \\mathbf{B}A \\to \\mathbf{B}\\hat G$. \n The morphism of $G$-actions\n $P \\to \\mathbf{B}A$ and a choice of effective epimorphism $U \\to X$ over\n which $P \\to X$ trivializes gives rise to a morphism in \n $\\mathbf{H}^{\\Delta[1]}_{\/(* \\to \\mathbf{B}G)}$ which involves the diagram\n $$\n \\raisebox{20pt}{\n \\xymatrix{\n\t U \\times G \\ar@{->>}[r] \\ar[d] & P \\ar[r] \\ar[d] & \\mathbf{B}A \\ar[d]^{i}\n\t \\\\\n\t U \\ar@{->>}[r] & X \\ar[r] & \\mathbf{B}\\hat G\n\t}\n\t}\n\t\\;\\;\n\t\\simeq\n\t\\;\\;\n \\raisebox{20pt}{\n \\xymatrix{\n\t U \\times G \\ar@{->>}[rr] \\ar[d] & & \\mathbf{B}A \\ar[d]^{i}\n\t \\\\\n\t U \\ar[r] & {*} \\ar[r]^{\\mathrm{pt}} & \\mathbf{B}\\hat G\n\t}\n\t}\n $$\n in $\\mathbf{H}$. (We are using that for the 0-connected object $\\mathbf{B}\\hat G$\n every morphism $* \\to \\mathbf{B}G$ factors through $\\mathbf{B}\\hat G \\to \\mathbf{B}G$.)\n Here the total rectangle and the left square on the left are $\\infty$-pullbacks,\n and we need to show that the right square on the left is then also an \n $\\infty$-pullback. Notice that by the pasting law the rectangle on the\n right is indeed equivalent to the pasting of $\\infty$-pullbacks\n $$ \n \\raisebox{20pt}{\n \\xymatrix{\n\t U \\times G \\ar@{->}[r] \\ar[d] & G \\ar[r] \\ar[d] & \\mathbf{B}A \\ar[d]^{i}\n\t \\\\\n\t U \\ar[r] & {*} \\ar[r]^{\\mathrm{pt}} & \\mathbf{B}\\hat G\n\t}\n\t}\n $$\n so that the relation\n $$\n U^{\\times^{n+1}_X} \\times G\n\t\\simeq\n\ti^* (U^{\\times^{n+1}_X})\n $$\n holds. 
With this the proof finishes as in the proof of \n Proposition~\\ref{LocalTrivialityImpliesCocycle}, with $\\mathrm{pt}^*$\n generalized to $i^*$. \n\\hfill{$\\square$}\\\\\n\n\\begin{remark}\n Aspects of special cases of this theorem can be identified in the literature.\n For the special case of ordinary extensions of ordinary Lie groups,\n the equivalence of the corresponding extensions of a principal bundle with \n certain equivariant structures on its total space is essentially the content\n of \\cite{Mackenzie, Androulidakis}. \n In particular the twisted unitary bundles or \\emph{gerbe modules}\n of twisted K-theory \\cite{CBMMS}\n are equivalent to such structures.\n\n For the case of $\\mathbf{B}U(1)$-extensions of Lie groups, such as the\n $\\mathrm{String}$-2-group, the equivalence of the corresponding \n $\\mathrm{String}$-principal 2-bundles, \n by the above theorem, to certain bundle gerbes on the total spaces of \n principal bundles underlies constructions such as in \\cite{Redden}.\n Similarly the bundle gerbes on double covers considered in \n \\cite{SSW} are $\\mathbf{B}U(1)$-principal 2-bundles on $\\mathbb{Z}_2$-principal\n bundles arising by the above theorem from the extension \n $\\mathbf{B}U(1) \\to \\mathbf{Aut}(\\mathbf{B}U(1)) \\to \\mathbb{Z}_2$,\n a special case of the extensions that we consider in the next Section\n \\ref{StrucInftyGerbes}.\n \n These and more examples we discuss in detail in \\cite{NSSc}.\n \\label{ReferencesTwistedBundles}\n\\end{remark}\n\n\n\n\\subsection{Gerbes}\n\\label{section.InfinityGerbes}\n\\label{StrucInftyGerbes}\n\nRemark \\ref{PointedConnectedLocalCoefficients} above indicates that of special relevance are those\n$V$-fiber $\\infty$-bundles $E \\to X$ in an $\\infty$-topos $\\mathbf{H}$ \nwhose typical fiber $V$ is pointed connected,\nand hence is the moduli $\\infty$-stack $V = \\mathbf{B}G$ of $G$-principal \n$\\infty$-bundles for some $\\infty$-group $G$. 
\nDue to their local triviality, \nwhen regarded as objects in the slice $\\infty$-topos $\\mathbf{H}_{\/X}$,\nthese $\\mathbf{B}G$-fiber $\\infty$-bundles are themselves \\emph{connected objects}.\nGenerally, for $\\mathcal{X}$ an $\\infty$-topos regarded as an $\\infty$-topos\nof $\\infty$-stacks over a given space $X$, it makes sense to consider\nits connected objects as $\\infty$-bundles over $X$. Here we discuss these\n\\emph{$\\infty$-gerbes}.\n\n\\medskip\n\nIn the following discussion it is useful to consider two $\\infty$-toposes:\n\\begin{enumerate}\n \\item an ``ambient'' $\\infty$-topos $\\mathbf{H}$ as before, to be thought of \n as an $\\infty$-topos ``of all geometric homotopy types'' for a given notion of geometry, \n\tin which $\\infty$-bundles are given by \\emph{morphisms} and the terminal \n\tobject plays the role of the geometric point $*$;\n \\item an $\\infty$-topos $\\mathcal{X}$, to be thought of as the topos-theoretic\n incarnation of a single geometric homotopy type (space) $X$, hence as \n an $\\infty$-topos of\n ``geometric homotopy types {\\'e}tale over $X$'', in which an $\\infty$-bundle over $X$\n is given by an \\emph{object} and the terminal object plays the role of the base space $X$. \n \n In practice, $\\mathcal{X}$ is the slice\n $\\mathbf{H}_{\/X}$ of the previous ambient $\\infty$-topos over $X \\in \\mathbf{H}$, or\n the smaller $\\infty$-topos $\\mathcal{X} = \\mathrm{Sh}_\\infty(X)$ of (internal)\n $\\infty$-stacks over $X$. \n\\end{enumerate}\nIn topos-theory literature the role of $\\mathbf{H}$ above is sometimes referred to as that of a \n\\emph{gros} topos and then the role of $\\mathcal{X}$ is referred to as that of a \\emph{petit} topos. \nThe reader should beware that much of the classical literature on gerbes is written\nfrom the point of view of only the \\emph{petit} topos $\\mathcal{X}$. 
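\nThis dichotomy can be made precise as follows (a standard fact, recalled only for orientation): for any object $X \\in \\mathbf{H}$, base change relates the slice to the ambient $\\infty$-topos by an adjoint triple\n$$\n (X_! \\dashv X^* \\dashv X_*)\n :\n \\xymatrix{\n \\mathbf{H}_{\/X}\n\t \\ar@<+8pt>[r]^-{X_!}\n\t \\ar@{<-}[r]|-{X^*}\n\t \\ar@<-8pt>[r]_-{X_*}\n\t &\n \\mathbf{H}\n }\n\t\\,,\n$$\nwhere $X_!$ forgets the morphism to $X$ and $X^*$ is pullback along $X \\to *$. The existence of the extra left adjoint $X_!$ exhibits the geometric morphism $(X^* \\dashv X_*) : \\mathbf{H}_{\/X} \\to \\mathbf{H}$ as {\\'e}tale, so that the petit-style $\\infty$-topos $\\mathbf{H}_{\/X}$ sits {\\'e}tale over the gros $\\infty$-topos $\\mathbf{H}$.\n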
\n\n\n\\medskip\n\nThe original definition of a \\emph{gerbe} \non $X$ \\cite{Giraud} is: a stack $E$ (i.e.\\ a \n1-truncated $\\infty$-stack) over $X$ that is \n1. \\emph{locally non-empty} and 2. \\emph{locally connected}. \nIn the more intrinsic language of higher topos theory, these \ntwo conditions simply say that $E$ is \na \\emph{connected object} (Definition 6.5.1.10 in \\cite{Lurie}): \n1. the terminal morphism $E \\to *$ is an \neffective epimorphism and 2. the 0th homotopy sheaf is trivial, \n$\\pi_0(E) \\simeq *$. This reformulation is \nmade explicit in the literature for instance in \nSection 5 of \\cite{JardineLuo} and in Section 7.2.2 of \\cite{Lurie}. Therefore:\n\\begin{definition}\n For $\\mathcal{X}$ an $\\infty$-topos, a \\emph{gerbe} in $\\mathcal{X}$ is an object $E \\in \\mathcal{X}$ which is\n \\begin{enumerate}\n \\item connected;\n\t\\item 1-truncated.\n \\end{enumerate}\n For $X \\in \\mathbf{H}$ an object, a \\emph{gerbe $E$ over $X$} is a gerbe in the slice\n $\\mathbf{H}_{\/X}$. This is an object $E \\in \\mathbf{H}$ together with an effective epimorphism $E \\to X$ such that $\\pi_i(E) = X$ for all $i \\neq 1$. \n \\label{Gerbe}\n\\end{definition}\n\\begin{remark}\n Notice that conceptually this is different from the\n notion of \\emph{bundle gerbe} introduced in \\cite{Mur} \n (see \\cite{NikolausWaldorf} for a review). \n We discuss in \\cite{NSSc} that \n bundle gerbes are presentations of \\emph{principal} $\\infty$-bundles \n (Definition \\ref{principalbundle}).\n But gerbes -- at least the \\emph{$G$-gerbes}\n considered in a moment in Definition \\ref{G-Gerbe} -- \n are $V$-fiber $\\infty$-bundles (Definition \\ref{FiberBundle})\n hence \\emph{associated} to principal $\\infty$-bundles \n (Proposition \\ref{VBundleIsAssociated}) with the special \n property of having pointed connected fibers. 
\n By Theorem \\ref{VBundleClassification} \n $V$-fiber $\\infty$-bundles may be identified with their underlying $\\mathbf{Aut}(V)$-principal \n $\\infty$-bundles and so one may identify $G$-gerbes with nonabelian \n $\\mathrm{Aut}(\\mathbf{B}G)$-bundle gerbes\n (see also around Proposition~\\ref{ClassificationOfGGerbes} below), \n but considered generally, neither of these\n two notions is a special case of the other. Therefore the terminology is \n slightly unfortunate, but it is standard.\n\\end{remark}\nDefinition \\ref{Gerbe} has various obvious generalizations. The following is considered in \\cite{Lurie}.\n\\begin{definition}\n \\index{$\\infty$-gerbe!EM $n$-gerbe}\n For $n \\in \\mathbb{N}$, an \\emph{EM $n$-gerbe} is an object $E \\in \\mathcal{X}$ which is\n \\begin{enumerate}\n \\item $(n-1)$-connected;\n \\item $n$-truncated.\n \\end{enumerate}\n\\end{definition}\n\\begin{remark}\nThis is almost the definition of an \n\\emph{Eilenberg-Mac Lane object} in $\\mathcal{X}$, \nonly that the condition requiring a global section \n$* \\to E$ (hence $X \\to E$) is missing. Indeed, the \nEilenberg-Mac Lane objects of degree $n$ in \n$\\mathcal{X}$ are precisely the EM $n$-gerbes of \n\\emph{trivial class}, according to Proposition~\\ref{ClassificationOfGGerbes} below.\n\\end{remark}\nThere is also an earlier established definition of \n\\emph{2-gerbes} in the literature \\cite{Breen}, which is \nmore general than EM 2-gerbes. 
Stated in the above fashion it reads as follows.\n\\begin{definition}[Breen \\cite{Breen}]\n \\index{$\\infty$-gerbe!2-gerbe}\n \\index{2-gerbe}\n A \\emph{2-gerbe} in $\\mathcal{X}$ is an object $E \\in \\mathcal{X}$ which is\n \\begin{enumerate}\n \\item connected;\n\t\\item 2-truncated.\n \\end{enumerate}\n\\end{definition}\nThis definition has an evident generalization to arbitrary degree, which we adopt here.\n\\begin{definition}\n \\label{nGerbe}\n An \\emph{$n$-gerbe} in $\\mathcal{X}$ is an object $E \\in \\mathcal{X}$ which is \n \\begin{enumerate}\n \\item connected;\n\t \\item $n$-truncated.\n \\end{enumerate}\nIn particular an \\emph{$\\infty$-gerbe} is a connected object.\n\\end{definition}\nThe real interest is in those $\\infty$-gerbes which have a prescribed\ntypical fiber:\n\\begin{remark}\nBy the above, $\\infty$-gerbes (and hence EM $n$-gerbes \nand 2-gerbes and hence gerbes) are much like deloopings \nof $\\infty$-groups (Theorem \\ref{DeloopingTheorem}) \nonly that there is no requirement that there \nexists a global section. An $\\infty$-gerbe for which there exists a \nglobal section $X \\to E$ is called \\emph{trivializable}. By \nTheorem~\\ref{DeloopingTheorem} trivializable $\\infty$-gerbes are equivalent to $\\infty$-group objects in $\\mathcal{X}$ \n(and the $\\infty$-groupoids of all of these are equivalent \nwhen transformations are required to preserve the canonical global section).\n\\end{remark}\nBut \\emph{locally} every $\\infty$-gerbe $E$ is of this form. For let\n$$\n (x^* \\dashv x_*) :\n \\xymatrix{\n \\mathrm{Grpd}_{\\infty}\n\t \\ar@{<-}@<+3pt>[r]^-{x^*}\n\t \\ar@<-3pt>[r]_-{x_*}\n\t\t&\n \\mathcal{X}\t \n }\n$$\nbe a topos point. Then the stalk $x^* E \\in \\mathrm{Grpd}_{\\infty}$ \nof the $\\infty$-gerbe is connected: because inverse images \npreserve the finite $\\infty$-limits involved in the definition of homotopy sheaves, and preserve the terminal object. 
Therefore\n$$\n \\pi_0\\, x^* E \\simeq x^* \\pi_0 E \\simeq x^* * \\simeq *\n \\,.\n$$\nHence for every point $x$ we have a stalk $\\infty$-group $G_x$ and an equivalence \n$$\n x^* E \\simeq B G_x \n \\,.\n$$\nTherefore one is interested in the following notion.\n\\begin{definition}\n For $G \\in \\mathrm{Grp}(\\mathcal{X})$ an $\\infty$-group \n object, a \\emph{$G$-$\\infty$-gerbe} is an $\\infty$-gerbe $E$ such that there exist \n \\begin{enumerate}\n \\item an effective epimorphism $\\xymatrix{U \\ar@{->>}[r] & X}$;\n\t\\item an equivalence $E|_U \\simeq \\mathbf{B} G|_U$.\n \\end{enumerate}\n Equivalently: a $G$-$\\infty$-gerbe is a $\\mathbf{B}G$-fiber $\\infty$-bundle,\n according to Definition \\ref{FiberBundle}.\n \\label{G-Gerbe}\n\\end{definition}\nIn words this says that a $G$-$\\infty$-gerbe is one that locally looks like the \nmoduli $\\infty$-stack of $G$-principal $\\infty$-bundles.\n\\begin{example}\n For $X$ a topological space and $\\mathcal{X} = \n \\mathrm{Sh}_\\infty(X)$ the $\\infty$-topos of $\\infty$-sheaves over it, these notions reduce to the following.\n \\begin{itemize}\n \\item a 0-group object $G \\in \\tau_{0}\\mathrm{Grp}(\\mathcal{X}) \n \\subset \\mathrm{Grp}(\\mathcal{X})$ is a sheaf of groups on $X$ \n (here $\\tau_0\\mathrm{Grp}(\\mathcal{X})$ denotes the 0-truncation of \n $\\mathrm{Grp}(\\mathcal{X})$);\n\t\\item for $\\{U_i \\to X\\}$ any open cover, the canonical \n\tmorphism $\\coprod_i U_i \\to X$ is an effective epimorphism to the terminal object;\n\t\\item $(\\mathbf{B}G)|_{U_i}$ is the stack of \n\t $G|_{U_i}$-principal bundles ($G|_{U_i}$-torsors).\n \\end{itemize}\n\\end{example}\nIt is clear that one way to construct a $G$-$\\infty$-gerbe \nshould be to start with an $\\mathbf{Aut}(\\mathbf{B}G)$-principal \n$\\infty$-bundle, Remark \\ref{UniversalTwistByAutomorphisms},\nand then canonically \\emph{associate} a fiber $\\infty$-bundle to it.\n\\begin{example}\n \\label{automorphism2GroupAbstractly}\n For $G \\in 
\\tau_{0}\\mathrm{Grp}(\\mathrm{Grpd}_{\\infty})$ an \n ordinary group, $\\mathbf{Aut}(\\mathbf{B}G)$ is usually called the \n \\emph{automorphism 2-group} of $G$. Its underlying groupoid is equivalent to\n \\[\n \\mathbf{Aut}(G) \\times G\\rightrightarrows \n \\mathbf{Aut}(G), \n \\]\n the action groupoid for the action of $G$ on $\\mathbf{Aut}(G)$ \n via the homomorphism $\\mathrm{Ad}\\colon G\\to \\mathbf{Aut}(G)$. \n\\end{example}\n\\begin{corollary}\n Let $\\mathcal{X}$ be a 1-localic $\\infty$-topos \n (i.e.\\ one that has a 1-site of definition). \n Then for $G \\in \\mathrm{Grp}(\\mathcal{X})$ any \n $\\infty$-group object, $G$-$\\infty$-gerbes are classified by $\\mathbf{Aut}(\\mathbf{B}G)$-cohomology:\n $$\n \\pi_0 G \\mathrm{Gerbe}\n\t\\simeq\n\t\\pi_0 \\mathcal{X}(X,\\mathbf{B}\\mathbf{Aut}(\\mathbf{B}G))\n\t=:\n\tH^1_{\\mathcal{X}}(X,\\mathbf{Aut}(\\mathbf{B}G))\n\t\\,.\n $$\n \\label{ClassificationOfGGerbes}\n\\end{corollary}\n\\proof\n This is the special case of Theorem \\ref{VBundleClassification}\n for $V = \\mathbf{B}G$.\n\\hfill{$\\square$}\\\\\nFor the case that $G$ is 0-truncated (an ordinary group object) \nthis is the content of Theorem 23 in \\cite{JardineLuo}.\n\\begin{example}\n For $G \\in \\tau_{\\leq 0}\\mathrm{Grp}(\\mathcal{X}) \\subset \n \\mathrm{Grp}(\\mathcal{X})$ an ordinary 1-group object, \n this reproduces the classical result of \\cite{Giraud}, \n which originally motivated the whole subject: \n by Example~\\ref{automorphism2GroupAbstractly}, in this case $\\mathbf{Aut}(\\mathbf{B}G)$ is the traditional automorphism 2-group and \n $H^1_{\\mathcal{X}}(X, \\mathbf{Aut}(\\mathbf{B}G))$\n is Giraud's nonabelian $G$-cohomology that classifies $G$-gerbes\n (for arbitrary \\emph{band}, see Definition \\ref{BandOfInfinityGerbe} below).\n \n For $G \\in \\tau_{\\leq 1}\\mathrm{Grp}(\\mathcal{X}) \\subset \n \\mathrm{Grp}(\\mathcal{X})$ a 2-group, we recover the classification of 2-gerbes\n as in 
\\cite{Breen,BreenNotes}.\n\\end{example}\n\\begin{remark}\n In Section 7.2.2 of \\cite{Lurie} the special case of what \n we called \\emph{EM $n$-gerbes} above is considered. Beware \n that there are further differences: for instance the notion \n of morphisms between $n$-gerbes as defined in \\cite{Lurie} is more \n restrictive than the notion considered here. In particular, with our definition \n (and hence also that in \\cite{Breen}) each group automorphism of \n an abelian group object $A$ induces an automorphism of the \n trivial $A$-2-gerbe $\\mathbf{B}^2 A$. But, except for the identity, \n this is not admitted in \\cite{Lurie} (manifestly so by the diagram \n above Lemma 7.2.2.24 there). Accordingly, the classification \n result in \\cite{Lurie} is different: it involves the cohomology group \n $H^{n+1}_{\\mathcal{X}}(X, A)$. Notice that there is a canonical morphism\n $$\n H^{n+1}_{\\mathcal{X}}(X, A) \\to H^1_{\\mathcal{X}}(X, \\mathbf{Aut}(\\mathbf{B}^n A))\n $$ \n induced from the morphism $\\mathbf{B}^{n+1}A \\to \\mathbf{Aut}(\\mathbf{B}^n A)$.\n\\end{remark}\n\nWe now discuss how the $\\infty$-group extensions, Definition \\ref{ExtensionOfInfinityGroups},\ngiven by the Postnikov stages of $\\mathbf{Aut}(\\mathbf{B}G)$ induce the notion of\n\\emph{band} of a gerbe, and how the corresponding twisted cohomology,\naccording to Theorem \\ref{ExtensionsAndTwistedCohomology}, reproduces \nthe original definition of nonabelian cohomology in \\cite{Giraud} and generalizes\nit to higher degree.\n\\begin{definition}\n \\label{outerAutomorphismInfinityGroup}\n Fix $k \\in \\mathbb{N}$. For $G \\in \\infty\\mathrm{Grp}(\\mathcal{X})$ a \n $k$-truncated $\\infty$-group object (a $(k+1)$-group), write\n $$\n \\mathbf{Out}(G) := \\tau_{k}\\mathbf{Aut}(\\mathbf{B}G)\n $$\n for the $k$-truncation of $\\mathbf{Aut}(\\mathbf{B}G)$. 
(Notice \n that this is still an $\\infty$-group, since by Lemma 6.5.1.2 in \n \\cite{Lurie} $\\tau_n$ preserves all $\\infty$-colimits and additionally \n all products.) We call this the \\emph{outer automorphism $(k+1)$-group} of $G$.\n \n In other words, we write\n $$\n \\mathbf{c} : \\mathbf{B}\\mathbf{Aut}(\\mathbf{B}G) \\to \\mathbf{B}\\mathbf{Out}(G)\n $$\n for the top Postnikov stage of $\\mathbf{B}\\mathbf{Aut}(\\mathbf{B}G)$.\n\\end{definition}\n\\begin{example}\n Let $G \\in \\tau_0\\mathrm{Grp}(\\mathrm{Grpd}_{\\infty})$ be \n a 0-truncated group object, an \n ordinary group.\n Then by Example \\ref{automorphism2GroupAbstractly}, \n $\\mathbf{Out}(G)$ is the coimage of \n $\\mathrm{Ad} : G \\to \\mathrm{Aut}(G)$, which is the traditional group of outer automorphisms of $G$.\n\\end{example}\n\\begin{definition}\n Write $\\mathbf{B}^2 \\mathbf{Z}(G)$ for the $\\infty$-fiber of \n the morphism $\\mathbf{c}$ from Definition \\ref{outerAutomorphismInfinityGroup}, \n fitting into a fiber sequence\n $$\n \\xymatrix{\n \\mathbf{B}^2 \\mathbf{Z}(G) \\ar[r] & \n\t \\mathbf{B}\\mathbf{Aut}(\\mathbf{B}G) \\ar[d]^{\\mathbf{c}}\n\t \\\\\n\t & \\mathbf{B}\\mathbf{Out}(G)\n\t}\n\t\\,.\n $$\n We call $\\mathbf{Z}(G)$ the \\emph{center} of the $\\infty$-group $G$.\n \\label{Center}\n\\end{definition}\n\\begin{example}\n For $G$ an ordinary group, so that $\\mathbf{Aut}(\\mathbf{B}G)$ \n is the automorphism 2-group from Example \\ref{automorphism2GroupAbstractly},\n $\\mathbf{Z}(G)$ is the center of $G$ in the traditional sense.\n\\end{example}\nBy Corollary \\ref{ClassificationOfGGerbes} there is an induced morphism\n$$\n \\mathrm{Band} \n : \n \\pi_0 G \\mathrm{Gerbe} \\to H^1(X, \\mathbf{Out}(G))\n \\,.\n$$\n\\begin{definition}\n \\label{BandOfInfinityGerbe}\n For $E \\in G \\mathrm{Gerbe}$ we call $\\mathrm{Band}(E)$ the \\emph{band} of $E$.\n\n By using Definition \\ref{Center} in Definition \\ref{TwistedCohomologyInOvertopos}, \n given a band 
$[\\phi_X] \\in H^1(X, \\mathbf{Out}(G))$, \n we may regard it as a twist for twisted $\\mathbf{Z}(G)$-cohomology,\n classifying $G$-gerbes with this band:\n $$\n \\pi_0 G \\mathrm{Gerbe}^{[\\phi_X]}(X) \\simeq H^{2+[\\phi_X]}(X, \\mathbf{Z}(G))\n\t\\,.\n $$\n\\end{definition}\n\\begin{remark}\n The original definition of \\emph{gerbe with band} in \\cite{Giraud} is slightly\n more general than that of \\emph{$G$-gerbe} (with band) in \\cite{Breen}: in the former\n the local sheaf of groups whose delooping is locally equivalent to the gerbe need not\n descend to the base. These more general Giraud gerbes are 1-gerbes in the sense of\n Definition \\ref{nGerbe}, but only the slightly more restrictive $G$-gerbes of Breen\n have the good property of being connected fiber $\\infty$-bundles. From our\n perspective this is the decisive property of gerbes, and the notion of band is \n relevant only in this case.\n\\end{remark}\n\\begin{example}\n For $G$ a 0-group this reduces to the notion of band as introduced in \\cite{Giraud},\n for the case of $G$-gerbes as in \\cite{Breen}.\n\\end{example}\n\n\\medskip\n\n\\noindent{\\bf Acknowledgements.}\nThe writeup of this article and the companions \\cite{NSSb, NSSc} was\ninitiated during a visit by the first two authors to the third author's institution, \nUniversity of Glasgow, in summer 2011. It was completed in summer 2012\nwhen all three authors were guests at the \nErwin Schr\\\"{o}dinger Institute in Vienna. \nThe authors gratefully acknowledge the support of \nthe Engineering and Physical Sciences Research Council \ngrant number EP\/I010610\/1 and the support of the ESI.\nU.S. 
thanks Domenico Fiorenza for inspiring discussion about\ntwisted cohomology.\n\n\n\n\n\\addcontentsline{toc}{section}{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzfgak b/data_all_eng_slimpj/shuffled/split2/finalzzfgak new file mode 100644 index 0000000000000000000000000000000000000000..3d208703d3a5b8267c11b61dfc61ab78b0c1dedc --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzfgak @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe \\texttt{iopart-num} Bib\\TeX{} style is intended for use in\npreparing manuscripts for Institute of Physics Publishing journals,\nincluding Journal of Physics. It provides numeric citation with\nHarvard-like formatting, based upon the specification in ``How to\nprepare and submit an article for publication in an IOP journal using\n\\LaTeXe'' by Graham Douglas (2005).\n\nThe \\texttt{iopart-num} package is available on the Comprehensive\n\\TeX{} Archive Network (CTAN) as \\texttt{\/biblio\/bibtex\/contrib\/iopart-num}.\n\n\\section{General instructions}\n\nTo use the \\texttt{iopart-num} style, include the command\n\\verb+\\bibliographystyle{iopart-num}+ in the\ndocument preamble. The reference section is then inserted into the\ndocument with the command \\verb+\n\\section{Introduction}\n\nThe Compact Muon Solenoid (CMS) Experiment is a general purpose particle detector experiment located at the Large Hadron Collider (LHC) at CERN, in Geneva, Switzerland.\nThe design of the CMS detector centers around a 3.8T iron-core superconducting solenoid. Within the solenoid, at the very heart of the detector, is an all silicon tracker, consisting of 66M pixel and 10M strip detectors. 
\nCalorimetry is performed by a high-resolution, high-granularity electromagnetic calorimeter (ECAL), comprised of more than 70k lead-tungstate crystals, and a brass-scintillator hadronic calorimeter (HCAL).\nJust outside the solenoid is a scintillator-based tail-catching hadronic outer calorimeter (HO). In addition, a steel and quartz fibre forward hadronic calorimeter system (HF) is placed on both sides of the detector endcaps to ensure hermiticity.\nThe iron yoke of the magnet system is instrumented with a muon spectrometer, composed of 3 detector technologies: drift tubes (DT) in the central region and cathode strip chambers (CSC) in the endcaps, both complemented by resistive plate chambers (RPC).\nConsidering the late availability of the underground cavern, a modular detector design was chosen to allow for construction, installation, and testing before lowering underground. \nFigure~\\ref{fig:cms} (left) shows a schematic drawing of the CMS Detector.\nFurther detail on the design and performance of the CMS detector can be found elsewhere~\\cite{cmsjinst,tdr}.\n\\begin{figure}[bt!]\n\\begin{center}\n\\includegraphics[height=5.5cm]{cms_complete_labelled.eps}\\hspace{1pc}\n\\includegraphics[height=5.0cm]{cms_photo.eps}\n\\caption{\\label{fig:cms}(left) Schematic of the Compact Muon Solenoid detector. (right) The CMS detector during installation of the silicon strip tracker, Dec 2007. }\n\\end{center}\n\\end{figure}\n\nThe construction phase of the CMS detector is now complete, and the detector has been fully installed and commissioned in the underground cavern at LHC Point 5. 
\nFigure~\\ref{fig:cms} (right) shows a photo of the CMS detector during installation of the tracker.\nSince May 2007, CMS has undergone frequent ``global runs'' devoted to global commissioning, exercising all of the installed detector subsystems together.\nIn addition to testing the global readout, global runs have also helped exercise event selection and reconstruction, data analysis, alignment and calibration, data quality monitoring, computing operations and data transfer.\nThe complexity and scope have increased with each global run, with a larger fraction of subdetectors joining at each interval.\nThe global runs culminated in data-taking for over a week with first LHC beam in September 2008, and a two-week Cosmic Run at Full Tesla (CRAFT) in October-November 2008.\n\n\n\n\\section{Performance with First LHC Beam Data}\\label{sec:beam}\n\nIn September 2008, the CMS detector collected first beam data from the LHC. The first proton beams were circulated successfully on 10 September 2008. Single beams were circulated through the LHC for several days, at the injection energy of 450 GeV, until 19 September. During this time, CMS triggered and recorded data successfully.\n\nDuring beam running, CMS triggered on beam halo muons, muons outside of the beam-pipe arising from decays of pions created when off-axis protons scrape collimators or other beam-line elements. \nBeam halo events were particularly useful for testing the triggering and reconstruction of events in the endcap muon system. \nFigure~\\ref{fig:csc} (left) shows the beam halo trigger rate as measured by one sector of the muon endcap CSCs. The shaded region denotes the period of first successful LHC beam RF capture, which lasted for approximately 10 minutes and ended with a beam abort. The halo rate measured by the CSCs was clearly reduced during and after the RF capture of the beam. 
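The halo/cosmic separation exploited in these trigger studies follows from simple geometry: beam halo muons travel nearly parallel to the beam axis, while cosmic muons cross the detector closer to the vertical. A minimal sketch of such an angular discriminant (the direction vectors and the 30-degree threshold are illustrative assumptions, not CMS reconstruction code):

```python
import math

def polar_angle_to_beamline(dx, dy, dz):
    """Angle (degrees) between a track direction and the z (beam) axis.

    Directions are treated as axial (a track and its reverse count as the
    same line), so the angle is folded into [0, 90] degrees.
    """
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return math.degrees(math.acos(abs(dz) / norm))

# Beam-halo-like track: almost parallel to the beam line (small angle).
halo_like = polar_angle_to_beamline(0.02, 0.05, 1.0)   # ~3 degrees
# Cosmic-like track: roughly vertical (large angle to the beam line).
cosmic_like = polar_angle_to_beamline(0.1, 1.0, 0.2)   # ~79 degrees

ANGLE_CUT_DEG = 30.0  # illustrative threshold
is_halo = halo_like < ANGLE_CUT_DEG
```

The distributions in Figure~\ref{fig:csc} (right) correspond to exactly this angle, accumulated over many reconstructed tracks.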
The HCAL endcap system also saw a ``cleaner'' beam with less energy deposition along the beam-line after the RF capture of the beam.\n\nFigure~\\ref{fig:csc} (right) shows the distribution of the angle with respect to the beam-line of muon tracks passing through the endcap CSC system.\nSince beam halo muons pass through the detector collinear with the beam-line, we expect this angle to be small for pure beam halo events (beam halo simulation, in blue). Cosmic muons pass through the detector more orthogonally to the beam-line, and thus we expect this angle to be larger for cosmic muon ``beam off'' events (in black). For ``beam on'' events, we expect a combination of beam halo and cosmic muon events (in orange filled). Figure~\\ref{fig:csc} shows this trend validated with data and simulation events.\n\n\\begin{figure}[bt!]\n\\begin{center}\n\\includegraphics[height=4.5cm]{beamhalorate.eps}\n\\includegraphics[height=5.0cm]{cms_csc_halo_cosmics.eps}\n\\caption{\\label{fig:csc}(left) Beam halo rate (Hz) as a function of time (hour:min). The shaded region denotes the ${\\sim}10$~min period when the LHC beam was first RF captured. (right) Angle of muon tracks with respect to beam line, as measured by the endcap muon cathode strip chambers, with beam data (orange filled), cosmics data (black), and beam halo simulation (blue). }\n\\end{center}\n\\end{figure}\n\nIn the days preceding September 10th, single beam shots were sent onto closed collimators near the CMS detector. The interactions of the beam with the collimator and surrounding material created a large number of secondary particles which passed through the CMS detector. These beam-on-collimator events are called ``beam splash'' events because of the spectacular signature they left in the detectors. 
In particular, these events deposited an enormous amount of energy in the calorimeters; in a single beam splash event, $\\sim100$ and $\\sim1000$ TeV of energy were deposited in the hadronic and electromagnetic calorimeters, respectively. Figure~\\ref{fig:evtdisplay} shows an event display for such a ``beam splash'' event. In addition to leaving a spectacular signature in the detector, these events have been used to internally synchronize ECAL and HCAL, and also to validate the inter-calibration of the ECAL endcap channels.\n\n\\begin{figure}[b!]\n\\begin{center}\n\\includegraphics[width=0.49\\textwidth]{splash_longitudinal_ecal.eps}\n\\includegraphics[width=0.49\\textwidth]{splash_longitudinal_hcal.eps}\n\\caption{Event display images from a single ``beam splash'' event. \nThe left image shows the energy deposited in the electromagnetic calorimeter (pink). The right image shows the energy deposited in the hadronic calorimeter (blue).\nIn a single beam splash event, $\\sim100$ and $\\sim1000$ TeV of energy were deposited in the hadronic and electromagnetic calorimeters, respectively. \n\\label{fig:evtdisplay}}\n\\end{center}\n\\end{figure}\n\n\n\n\\section{Performance with Data from the Cosmic Run at Full Tesla (CRAFT)}\\label{sec:craft}\n\nThe aim of the Cosmic Run at Full Tesla (CRAFT) was to run CMS for four weeks, with all subsystems participating, collecting data continuously to further gain operational experience before data-taking with $p$-$p$ collisions. In addition, a major goal of CRAFT was to operate CMS at full field (3.8T) for as much of the running period as possible and to study the effects of the magnetic field on the detector components. Prior to CRAFT, the most extensive test of the CMS detector at full magnetic field was during the Magnet Test and Cosmic Challenge in 2006.\n\nDuring CRAFT, we collected more than 370M cosmic events over four weeks, from 13 October to 11 November 2008. 
In this period, CMS was operated with B=3.8T for 19 days. 290M of the 370M events had B=3.8T, with the silicon strip tracker and the muon drift tube system in the readout. 194M of the total had all subdetector components in the readout. Figure~\\ref{fig:craft} (left) shows an event display for a CRAFT cosmic ray muon passing through the CMS detector.\nThe large CRAFT data set has given rise to many analyses, from subsystem and trigger performance studies, calibration and alignment, validation of the magnetic field map, and measurements of cosmic muon flux and charge ratio, to cosmic muon background studies for LHC running. Figure~\\ref{fig:craft} (right) shows the results from one such analysis, the study of energy deposition in the ECAL crystals. We see good agreement between the measured $dE\/\\rho dx$ in ECAL and the expected stopping power for lead-tungstate crystals.\n\n\\begin{figure}[bt!]\n\\begin{center}\n\\includegraphics[height=14pc]{cms_craft.eps}\n\\includegraphics[height=14pc]{cms_ecal_dedx.eps}\n\\caption{\\label{fig:craft}(left) Event display for a CRAFT cosmic muon passing through the CMS Detector. In blue is the reconstructed muon track passing through the muon system, the hadronic and electromagnetic calorimeter, as well as the tracker. (right) The $dE\/\\rho dx$ stopping power of the CMS electromagnetic crystal calorimeter, as a function of muon momentum. The blue points correspond to data from CRAFT cosmic muon events; the black curve is the expected stopping power of lead-tungstate crystals. The red dashed line shows the contribution from collision loss; the blue dashed line shows the contribution from bremsstrahlung.}\n\\end{center}\n\\end{figure}\n\n\n\\section{Conclusions}\n\nAfter many years of design and construction, the CMS detector has been commissioned and has collected first data with LHC beams. 
The detector performance has been proven with beam splash, beam halo, and cosmic ray muon events with and without B field.\nSince the beginning of September 2008, all installed subsystems of the CMS detector have routinely been in global readout, and the stability of running with all CMS components has been demonstrated. \nSince the CRAFT run at the end of 2008, CMS has continued global commissioning with cosmic ray muons, in preparation for upcoming LHC beams in 2009.\n\n\\section*{References}\n\n\\section{Introduction}\n\nDuring the 1960s and 1970s, with the advent of powerful numerical techniques for\nperforming non-LTE radiative transfer calculations, several groups vigorously\npursued modeling of critical spectral lines in an effort to understand the\nstructure and energy balance of the solar chromosphere.\nTwo striking problems quickly emerged.\nThe first concerned the wings of strong resonance lines\\footnote{Here we define line ``cores'' and ``wings'' in terms of observed profiles, and\nFigure~\\ref{fig:problem} shows the \\ion{Mg}{2} lines.\nThe core region lies roughly between the maxima in intensity marked historically\nas ``$k_2$'' in the figure; the wings lie outside of these maxima.},\n\\new{which arose under} the simplifying\napproximation of complete redistribution (CRD): the wings were factors of\nseveral too bright\n\\citep{1967AnAp...30..861D, 1973A&A....22...61L}.\nThe wing problem was resolved using the more realistic approach of partial\nredistribution\n\\citep[PRD:][]{1973ApJ...185..709M,1974ApJ...192..769M,1976ApJ...205..874A}.\nIn this more realistic description, the emission and absorption processes in\nspectral lines are coupled frequency by frequency\nin the line wings, where scattering in the atomic reference frame is coherent.\nThe resolution of the wing discrepancy is illustrated by the qualitative\nagreement between observations and the 
standard VAL3-C 1D model\n\\citep[][``VAL\"]{Vernazza+Avrett+Loeser1981}\nin Fig.~\\ref{fig:problem}.\nIn this figure, the variations of the wing intensities with wavelength are\ncaptured by these traditional models.\n(The offset in intensity is within the calibration uncertainties of the rocket\nspectra).\n\n\\figproblem\n\nIn the second problem, the centers of line cores ($k_3$ in the figure),\ncontrolled by CRD, were computed to be too dim by factors of several, relative\nto the $k_2$ peaks.\nThese discrepancies remain today for some of the strongest lines in solar and\nstellar spectra, including H~L$\\alpha$, and to a lesser extent L$\\beta$\n\\citep{1978ApJ...225..655G,1979ApJ...230..924B};\nthe \\ion{Mg}{2} $h$ and $k$ lines\n\\citep{Vernazza+Avrett+Loeser1981};\nthe \\ion{Ca}{2} $H$ and $K$ lines\n\\citep{1981A&A...103..160L}.\n\nObserved profiles of these strong lines were recognized to be asymmetric.\nThe explanation proposed early on concerns the relative bulk motion between the\nmiddle and upper chromosphere\n\\citep{1970SoPh...11..347A},\na result compatible with the upwardly-propagating radiating shocks calculated in\n1D models by\n\\citet{Carlsson+Stein1995}.\nThis remains a related problem of interest, but we do not address it\ndirectly here.\n\nMore sophisticated calculations based upon 3D radiation-MHD (R-MHD) models\ncomputed with the Bifrost code\n\\citep{2011A&A...531A.154G}\nhave reduced core $k_3\/k_2$ \\ion{Mg}{2} intensity ratio discrepancies somewhat\n\\citep{2017A&A...597A..46S},\nand also for the $K_3\/K_2$ \\ion{Ca}{2} ratios\n\\citep{2018A&A...611A..62B}.\nBut issues in line cores \\new{remained; in particular, the observed peak separations implied that something was missing in their models.}\n\nPhysically, the cores of the strong lines generally correspond to wavelengths\nwhere Doppler-shifted thermal and other motions control the opacity and source function.\nTherefore in this ``Doppler core'', CRD is a reasonable 
approximation.\nWhile the Doppler broadening widths are not known \\textit{a priori}, the cores\nare considered to span about 3~times the r.m.s. Doppler width, before\nboth opacity and source function become controlled by coherent\nscattering.\nIn models and observations, this 3$\\times$Doppler width varies systematically\nfrom $\\pm 25$~$\\,$km$\\,$s$^{-1}${} for heavy elements such as calcium, to $\\pm 50$~$\\,$km$\\,$s$^{-1}${}\nfor hydrogen.\n\nGiven the importance of strong lines to the chromospheric energy balance, the\nonset of the solar corona, the irradiance effects in the solar system and the\nrecent work on the Hanle effect in the cores of these lines\n\\citep[see, for example,][]{2018cosp...42E1564I},\nwe re-analyse various models and data-sets to understand better the meaning of\nthe remaining discrepancies between observation and models.\n\n\\figval\n\nThe early 1D models remain a reasonable first physical approximation because of\nsteep hydrostatic stratification, where the dominant direction of transfer of\nradiative energy lies in the vertical direction\n\\citep[e.g.][]{2017ApJ...851....5J}.\nHowever, even though in the 3D R-MHD models the stratification is steep (isobars\nare more horizontal than vertical), there are many conditions where, owing to\nmagnetic and Reynolds stresses evolving in response to convection beneath, the\neffects of horizontal radiative transfer are important. 
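The core extents quoted above (about ${\pm}25$ km/s for heavy elements up to ${\pm}50$ km/s for hydrogen) can be recovered from the Doppler-width formula. A rough numerical check, assuming a representative chromospheric temperature of 10,000 K and a microturbulent speed of 8 km/s (both values are illustrative assumptions, not taken from the models discussed):

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
AMU = 1.66053906660e-27   # atomic mass unit, kg

def core_half_width_kms(mass_amu, temperature_k=1.0e4, xi_kms=8.0):
    """Roughly 3x the r.m.s. Doppler width, 3*sqrt(2kT/m + xi^2), in km/s."""
    v_th_sq = 2.0 * K_B * temperature_k / (mass_amu * AMU) / 1.0e6  # (km/s)^2
    return 3.0 * math.sqrt(v_th_sq + xi_kms ** 2)

ca = core_half_width_kms(40.08)   # calcium: ~25 km/s
mg = core_half_width_kms(24.31)   # magnesium: ~25 km/s
h = core_half_width_kms(1.008)    # hydrogen: ~45 km/s
```

The systematic trend from ~25 km/s for Ca and Mg to ~45-50 km/s for H follows directly from the atomic masses, since the microturbulent term is mass-independent.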
\\new{Thus we study 1D and 3D~models, focussing on the differences arising between 1D~formal versus 3D~formal solutions.}\nOur attention is focused on the resonance lines of \\ion{H}{1}, \\ion{Mg}{2} and\n\\ion{Ca}{2}.\n\nLastly, a unique study by \\citet{1994A&A...287..233B} reported simultaneous spectra\nacquired with various $1\\arcsec$-wide image-plane slits and different spectrograph\nexit slits, in the resonance lines of \\ion{H}{1}, \\ion{Mg}{2} and \\ion{Ca}{2},\nfrom the LPSP instrument on the OSO-8 satellite.\nThey found that, while \\ion{Mg}{2} and \\ion{Ca}{2} data were similar, the\nL$\\alpha$ data appeared qualitatively different.\n\n\\section{Statement of the problem} \n\nFigures~\\ref{fig:problem} and \\ref{fig:val3c} illustrate our core problem of\ninterest: \\textit{the calculated profiles have ratios between peak and core\nbrightness that systematically exceed observations, even those with spectral\nresolutions of 200,000, by a factor of 1.5--2}.\nBelow we will examine in detail data of \\ion{Mg}{2} from the Interface Region\nImaging Spectrograph \\citep[IRIS,][]{2014SoPh..289.2733D} and from calculations\nwhich highlight spatially-resolved spectra in contrast to the comparison of\nspatial averages shown.\n\nArmed only with 1-dimensional calculations, early workers could propose only\nlimited resolutions to this problem.\nWithin each calculation, the only option is to examine the assumed velocity\nfields within the chromosphere.\nIf included as a formal ``microturbulence'' (random fluid velocities on\nscales below the photon mean free path), the effect is to increase the\nwavelength differences between the intensity peaks.\nAs shown in Fig.~\\ref{fig:problem}, the peak separations for the $h$ and\n$k$ lines in standard 1D calculations are close to spatially-averaged observed\nvalues.\nThe well-known effect of variations in micro-turbulence is recalled in\nFigure~\\ref{fig:val3c}.\n(Curiously, using the distribution of 
micro-turbulence of VAL3-C, the peak\nintensities are split by similar amounts in wavelength, yet the line center\nwavelengths are in the ratio $1 : 2.3 : 3.3$).\n\nAnother option in 1D modeling is to include line broadening as macro-turbulence;\nthen, within each 1D calculation, the velocity would be introduced as a\nmacroscopic, not microscopic, flow.\nIn this case, the emergent profiles, for changes small in magnitude compared\nwith micro-scale motions, introduce shifts (and asymmetries if velocity\ngradients exist) of the entire core profiles.\nThis resolution, proposed by earlier authors\n\\citep[e.g.][]{1978ApJ...225..655G},\nsuggests that each observation sums spatially over intensities from several such\nadjacent atmospheres in each pixel.\nFigure~\\ref{fig:cartoon} shows a schematic picture of this proposed solution.\nEach atmosphere is a 1D solution to the non-LTE equations.\nIf each observation corresponds to a superposition of a set of 1D calculations\nthat are randomly distributed in line of sight velocity, to some degree the\ndifferences can be reconciled.\nEach observation then corresponds to the simple sum of individual theoretical\nprofiles, each shifted in wavelength according to a bulk velocity shift.\n\n\\cartoon\n\nSpectroscopic turbulence was studied in the context of 1D~models by\n\\citet{1985cdm..proc..137C}.\nThey found the regimes of applicability of the micro- and macro-turbulent\nlimits to be ${<}25$~km and ${>}3000$~km respectively, using velocity\nfields with a variety of correlation lengths, for lines of \\ion{Ca}{2}.\nThese lengths can be compared with a photon mean free path of 120~km and\nthe 1500~km thickness of the entire stratified chromosphere.\nThey showed that typical profiles could not be explained by a linear combination\nof micro- and macro-turbulent flows. 
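The superposition picture above can be mimicked numerically: summing copies of a double-peaked core profile shifted by a Gaussian distribution of bulk velocities (a macro-turbulent convolution) fills in the central reversal. The profile shape and the 12 km/s r.m.s. velocity below are illustrative assumptions, not fits to any observation:

```python
import numpy as np

v = np.linspace(-60.0, 60.0, 601)  # velocity scale, km/s (0.2 km/s steps)

# Synthetic double-peaked core: emission peaks near +/-15 km/s over a deep
# central reversal.
profile = (np.exp(-((v - 15.0) / 8.0) ** 2)
           + np.exp(-((v + 15.0) / 8.0) ** 2)
           + 0.1 * np.exp(-(v / 8.0) ** 2))

def peak_to_core_ratio(p):
    """Maximum intensity divided by the line-center (v = 0) intensity."""
    return p.max() / p[len(p) // 2]

# Macro-turbulent smearing: convolve with a Gaussian distribution of bulk
# velocity shifts (r.m.s. 12 km/s, an assumed value).
kernel = np.exp(-0.5 * (v / 12.0) ** 2)
kernel /= kernel.sum()
smeared = np.convolve(profile, kernel, mode="same")

ratio_before = peak_to_core_ratio(profile)  # deep reversal
ratio_after = peak_to_core_ratio(smeared)   # core largely filled in
```

The smearing needed to fill in the core is comparable to the peak separation itself, which is the essential difficulty with this explanation quantified in the following sections.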
\n\nThree-dimensional radiation-MHD (R-MHD) models\n\\citep{2017A&A...597A..46S,2018A&A...611A..62B}\nare based not on \\textit{ad-hoc} explorations, but are solutions to the dynamic equations of motion and radiative transfer.\nThey are performed on grids fine enough to avoid too much dissipation, but for\nwhich solutions can be obtained in reasonable computing time.\nThe calculations examined here have grids with spacings of 49 and 34~km\nhorizontally and vertically respectively, and are therefore capable of exploring\nradiative transfer in different regimes of spectroscopic turbulence to the\nextent that their intrinsic numerical smoothing permits significant changes in\nvelocities to occur between neighboring slices through the 3D~atmosphere which\ndetermine the opacities and source functions.\nThese calculations were advanced in time using radiation transfer solutions based\nupon the short-characteristics method, which is encumbered with a large,\nunphysical diffusion.\n\\new{The source functions for strong lines are dominated by scattering.\nAs such, the modeled profiles, even computed with long characteristics,} must be treated with some caution.\nOur final goal is to explore how profiles of strong lines\ncomputed with the best 3D~MHD models compare with these observations.\n\n\\section{Observations and their analyses}\n\n\\subsection{Typical quiet Sun line profiles}\n\nFirst, we examine the highest dispersion spectra of the three lines from the\nliterature, of the quiet Sun, but with low spatial resolution.\nFigure~\\ref{fig:problem} shows data of both the $h$ and $k$ lines from a\n$7\\arcsec\\times130\\arcsec$ area of the quiet Sun\n\\citep{1977STIN...7831029A}.\nFigure~\\ref{fig:val3c} shows profiles from regions of quiet Sun, highlighting\nthe cores of \\ion{Ca}{2}, \\ion{Mg}{2} and \\ion{H}{1} resonance lines, from\nseveral instruments.\nThe figure compares the observations with calculations from the 1D VAL-3C model.\nSpectra of \\ion{Ca}{2} lines were 
taken from the atlas of\n\\citet{1973apds.book.....D}.\nThe spectrometer used had a native spectral resolution of\n${\\cal R}= 600,000$, so that instrumental broadening is negligible.\nThe \\ion{Mg}{2} spectra are taken from the ${\\cal R}= 200,000$ quiet Sun spectra\nshown in Fig.~\\ref{fig:problem}\n\\citep{1977STIN...7831029A}.\nSolar data for \\ion{H}{1}~L$\\alpha$ are all of a significantly lower spectral\nresolution.\nThose obtained from rocket flights or low Earth orbit are also affected by\nabsorption by neutral hydrogen in the Earth's upper atmosphere in the line core.\nThese include data from the HRTS-II experiment from\n\\citet{1991ApJS...75.1337B}\nwith ${\\cal R}= 24,000$ and the LPSP instrument on OSO-8, which obtained quiet\nSun spectra with ${\\cal R}= 60,000$.\nData from the SUMER instrument on SoHO have a lower spectral resolution of\n15,000, even though the instrument, on the SoHO spacecraft at the L1~Lagrange\npoint, is unaffected by the geocorona.\nL$\\alpha$ SUMER data also have been obtained, mostly behind a mesh at the edge\nof the detector, to avoid saturation.\n\\citet{2008A&A...492L...9C} obtained SUMER L$\\alpha$ data behind the\npartially-shut door of the telescope, of six quiet-Sun regions on\nJune 24--25, 2008.\nOwing to the apodization of the pupil plane by the partly-closed door,\nthe angular resolution is difficult to assess, but they found the characteristic\nasymmetry (blue peak brighter than red) which increased with brightness.\nThe central core was found to be typically 75\\%{} of the brightness of\nthe blue peak, independent of limb distance, at a spectral resolution of 15,000.\n\nQuiet Sun L$\\alpha$ profiles from all three instruments are compiled in the\nlower middle panel of Fig.~\\ref{fig:val3c}.\nThe SUMER data shown there are from \\citet{2001A&A...375..591C}.\nAll three lines show similar qualitative differences with 1D~calculations.\n1D~calculations tend to predict brighter peaks and a darker central intensity.\n\nA 
closer look at Figures~\\ref{fig:val3c} and \\ref{fig:cartoon}\nimmediately yields tight constraints on any macro-turbulence.\nThe right-most panels of Fig.~\\ref{fig:val3c}, plotted against Doppler velocity,\nshow that the \\ion{Ca}{2} $K$, \\ion{Mg}{2} $k$ and \\ion{H}{1} L$\\alpha$ line cores \nwould need systematically different macroturbulent velocity distributions to\nexplain the observed core-peak intensity ratios.\n\\textit{The \\ion{Ca}{2} $K$ line would require r.m.s. speeds of\n10~$\\,$km$\\,$s$^{-1}$, but \\ion{Mg}{2} $k$ and \\ion{H}{1} L$\\alpha$ would need\nr.m.s. values of 30 and 50~$\\,$km$\\,$s$^{-1}${} for the superposition shown in\nFig.~\\ref{fig:cartoon} to work.}\nIn stratified atmospheres, these three line cores all form within a region\ntermed the ``upper chromosphere'' (see, for example, Fig.~1 of\n\\citealp{2019A&A...623A..74D}\nand Fig.~3 of\n\\citealp{2015ApJ...803...65S}).\nWe note that the area coverage of spicules inferred from data above the limb is\ntoo small to contribute significantly to the spatially-averaged profiles\n\\citep[e.g.][]{2010ApJ...719..469J,2016ARep...60..848M}.\nHowever, models do show that the $k_2$ and $K_2$ peaks form substantially lower\nin the atmosphere\n\\citep{Vernazza+Avrett+Loeser1981,2013ApJ...772...90L,2018A&A...611A..62B}.\n\nThe 3D~models have revealed that the peak separations of these lines can\nincrease in locations where the chromospheric temperature rise is located deep\nin the atmosphere (see Fig.~20 of \\citealp{2018A&A...611A..62B}).\nBut, no matter the details of the complex non-LTE formation of these\nlines, we know of no first-principles reasons nor data that are compatible with\nthe idea of a systematic gradient of macro-scale motions that can rescue the\nexplanation that the peak-to-core ratios are small because of\nmacro-turbulence.\nIndeed, below we show that the line profiles of \\ion{Mg}{2} lines from\nthe IRIS instrument, obtained with a high cadence and at the highest 
angular\nresolution ever achieved, also can be used to reject this hypothesis.\n\n\\figsji\n\n\\subsection{Quiet Sun \\ion{Mg}{2} line profiles in time and space}\n\nWe examine detailed profiles for the \\ion{Mg}{2} $h$ and $k$ lines from the IRIS\ninstrument\n\\citep{2014SoPh..289.2733D}.\nWhile IRIS does not measure the UV~spectra at the highest spectral resolution,\nthe linear properties of the detector combined with the high angular resolution\nmake it uniquely suited to address the problem of interest.\nMeasurements in the \\ion{Mg}{2} $h$ and $k$ lines have a spatial step along the\nslit of $0.168\\arcsec$, a critical sampling of the angular resolution of\n$0.33\\arcsec$.\nInspection of typical data (e.g. Figure~5 of \\citealp{2014SoPh..289.2733D})\nsuggests that quiet regions have only modest spatial\nvariations of the $k_2$ and $h_2$ peaks observed along the projected spectrograph\nslit, as seen at the nominal angular resolution of IRIS.\nThe center-limb behavior shown in Fig.~6 of\n\\citet{2014SoPh..289.2733D}\nreveals that peak separations first increase, and finally disappear some\n$5\\arcsec$ above the limb seen in the neighboring continuum.\nThe line profiles become a narrow single peak $5\\arcsec$ higher.\n\nAll of the IRIS data were reduced to photometrically-calibrated spectra\nusing the IDL SolarSoft packages as well as deconvolved with the point spread\nfunction (PSF) of the IRIS telescope, which has been measured during a Mercury\ntransit by \\citet{2018SoPh..293..125C}.\n\nQuiet Sun \\ion{Mg}{2} data were obtained with IRIS close to disk center\nbeginning on February 27, 2014, at 5:39:28.850, in a ``sit and stare'' mode.\nThe series comprises 290 frames, each of 774 pixels of width 0.16635$\\arcsec$ (solar $y$) by\n554 wavelength pixels of width 0.0254~\\AA{}.\nWe will focus on spectra, but simultaneous IRIS slitjaw images were obtained in\nthe 1400~\\AA{} channel.\nFigure~\\ref{fig:sji} shows examples of these images obtained 1000 and\n3500 seconds into the 
time-series.\nEach has been median-filtered over time (90~seconds) to remove internetwork\noscillations and fast dynamics in the low-density transition region.\n\\figyt\nThe thick lines show three regions we highlight below: network, haze, and\ninter-network.\nThe color table saturates the highest DN to reveal UV~continuum emission that\nforms in the lower chromosphere.\n\n\\figirismacrotk\n\nFigure~\\ref{fig:irisyt} shows wavelength-integrated intensities of the\n\\ion{Mg}{2}~$k$ line core, spanning 1~\\AA{} (roughly the wavelength\nseparation between the $k_1$ minima), as a function of position along\nthe slit and time.\nThe data were acquired with a cadence of 16.8051 seconds, for a total duration\nof 81 minutes.\nThe total solar area rotating under the fixed slit was therefore roughly\n$13\\arcsec\\times 120\\arcsec$ (solar $x$ and $y$ respectively).\nThis area covers 1~part in 2000 of the area of the solar disk; below\n(Section~\\ref{subsec:lucia}) we therefore examine a larger region.\n\n\\figiris\n\nThe data, when deconvolved, have a sufficiently high angular resolution\n($0\\farcs33$) that the macro-turbulent picture can be directly addressed.\nEach resolution element (two pixels) corresponds to about 240~km on the solar\nsurface.\nIf a superposition such as that shown in Fig.~\\ref{fig:cartoon} were responsible\nfor filling in the line cores, then some profiles adjacent in space should\nexhibit large variations between separate resolution elements, i.e. 
on scales\ndown to 240~km.\nBut such variations are absent, as shown in the typical, randomly chosen samples\nof line profiles of network, inter-network, and a hazy intermediate region.\nThese profiles are typical of the entire data-set, including the $h$-line.\n\n\\subsection{Spatial auto-correlations}\n\n\nThe leftmost panel of Figure~\\ref{fig:irismacrotk} has not been spatially\nde-convolved; the others have.\nThe tick-marked line in the rightmost panel shows ten spatial pixels, \\new{each of} which\ncritically samples the convolved data.\nVery little structure below scales of about 5 pixels ($\\equiv 600$ km) is visible.\nThis figure prompted us to measure quantitatively the spatial structure along\nthe slit of the entire dataset, using auto-correlations.\nThus, we computed the spatial auto-correlation lengths along the IRIS slit at\neach wavelength, and time, for those columns in Fig.~\\ref{fig:irisyt}\ncorresponding to all three marked regions.\n\nThe figures below show autocorrelation lengths of the intensity images\n$I_\\lambda(y)$, which are the characteristic full-widths at half-maximum (FWHM)\nof the functions\n\\begin{equation}\n c_\\lambda(\\ell) = \n \\langle I_\\lambda(y) \\, I_\\lambda(y+\\ell) \\rangle_y,\n\\end{equation}\ncentered on the maximum ($\\ell=0$), where $\\langle\\cdot\\rangle_y$ denotes an\naverage over position $y$ along the slit.\nSince $c_\\lambda(\\ell)$ depends on each wavelength $\\lambda$ across the line\nprofiles, any variations of line-of-sight velocity on resolvable scales\nwill be reflected by changes in the monochromatic intensities on the same length\nscales, thereby imprinting the velocities onto these correlation lengths.\nFigure~\\ref{fig:iris} shows these $c_\\lambda(\\ell)$ values.\nOver-plotted is the average line profile in blue, and the autocorrelation FWHM\nlengths averaged over space and time, in white, along the network, haze, and\ninter-network slit regions.\n\nSmall values of the FWHM of $c_\\lambda(\\ell)$ correspond to rapid changes in\nspace, and \\textit{vice versa}.\nAn obvious feature of 
Fig.~\\ref{fig:iris} is that in the wings of the\n$k$ line ($\\lambda$ outside of the $k_1$ minima) the three regions are not\neasily distinguishable, unlike the line cores, which are radically different.\nThere are some extended periods where the structure in the wings of the network\ncontains consistently small FWHM values (between 1400 and 1600, or 1800 to 2200\nseconds, for example); during these periods the upper photospheric signatures\nbehave, statistically at least, in a similar fashion.\nThe average autocorrelation lengths outside of line cores are all close to\n400~km (the averages are shown as white lines in the figure).\n\nBut within the cores, the network and other regions behave quite differently.\nThe average intensity profiles in the haze and internetwork regions are similar\nin shape to the average correlation length profiles (white and blue lines).\nBut in the network, the very cores of the autocorrelation profiles possess a\npeak at the $k_3$ core that is absent in the intensity profiles.\nThe spatial autocorrelations are also larger farther from line center\n($\\Delta\\lambda > 0.1$~\\AA{}) in the network region, with the exception of a short\nperiod around $t=2600$ seconds (left panel of Fig.~\\ref{fig:iris}).\nDuring the first half of the time series autocorrelation lengths can exceed\n${\\approx}2500$~km near the $k_2$ peaks, and are almost symmetric about line center.\nLater, after about 3000 seconds, the region of network shows an asymmetric\nautocorrelation profile in wavelength, with line-core lengths exceeding 2000~km, but\nwith a coherent darker feature, i.e. 
smaller lengths between 0.1--0.2~\\AA{} to\nthe red of line center (Doppler redshifts of 10 and 20~$\\,$km$\\,$s$^{-1}$) between 3200 and\n4300 seconds.\n\nAll three regions have average spatial scales exceeding 800~km at all\nwavelengths which have significant core emission (within about ${\\pm}0.3$, 0.2,\nand 0.1~\\AA{} respectively for the network, haze, and internetwork).\nThe spatial auto-correlations within the cores (${\\pm}0.5$~\\AA) have structures\nwith a median of 1060~km, mean of 1140~km, and r.m.s. variation of\n440~km, for the haze and internetwork data, with 20\\%\\ higher typical values for\nthe network.\n\nIn contrast, the haze and internetwork regions possess a significant and\npersistent dark streak within 5--8~$\\,$km$\\,$s$^{-1}${} (Doppler shift) of $k_3$.\nThere, the spatial FWHM correlation lengths fall to 400~km, which is only about\ntwice the spatial resolution of IRIS.\nWe will speculate on the origin of these small structures below.\n\nIn summary, these IRIS data reveal that:\n\\begin{enumerate}\n\\item Away from the line cores (outside $k_1$ minima), all regions have\n autocorrelation FWHM values close to 400~km, roughly twice the IRIS\n resolution.\n\\item At and near wavelengths of intensity maxima ($k_2$), FWHM \nmeasurements \nexceed\n 1000~km, increasing to over 1500~km in network regions.\n\\item Outside of network regions, $k_3$ has a persistent small spatial FWHM of\n $\\approx 400$~km, seen only in line center pixels and those two immediately\n to the red side (${\\le}10$~$\\,$km$\\,$s$^{-1}${} from line center).\n\\item Over network regions, autocorrelation lengths are systematically larger,\n extend further in $\\Delta\\lambda$, and have a broad minimum near 800--900~km\n across the central ${\\pm}0.2$~\\AA.\n Within ${\\pm}0.1$~\\AA{} the lengths increase producing a third, central small\n peak in average autocorrelation profiles.\n\\end{enumerate}\nTo explain the small $k_3\/k_2$ ratios, the macro-turbulent picture 
requires\nlarge changes in line-of-sight velocities on scales down to the 200~km\nresolution limit of IRIS.\nOur analysis of typical IRIS data for \\ion{Mg}{2} rejects this hypothesis.\n\n\\subsection{Other relevant observations}\n\nOther spectral lines have been observed at a high angular resolution\nwhich can also shed light on conditions where the cores of strong solar\nresonance lines form.\nSpin-forbidden lines of \\ion{O}{1} at 1356 and 1358~\\AA{} have been observed\nrepeatedly with UV spectrometers.\nThe lines are optically thin across most of the chromosphere; the shared upper\nlevel is 9.15~eV above the ground level of \\ion{O}{1}.\nThe \\ion{O}{1} ionization is strongly tied to the \\ion{H}{1} level populations\nthrough a well-known resonant charge-transfer reaction, and so excitation by\ncollisions with electrons will occur under conditions common to these lines of\n\\ion{O}{1} and L$\\alpha$, whose upper level lies at 10.19~eV.\nNaturally, the L$\\alpha$ line source function and emergent intensity are\ndominated by photon scattering, unlike the spin-forbidden lines of \\ion{O}{1}.\nBut the two lines are otherwise excited under similar conditions within the\nchromosphere.\n\nThe HRTS instrument has provided spectra near 1350~\\AA{} \\new{at a spectral resolution \n(50~m\\AA) similar to IRIS, but a lower spatial (1$\\arcsec$) resolution.}\nHRTS data analyzed by\n\\citet{1989ApJ...346..514A}\nshow that this line's kinematics is unspectacular, with r.m.s. Doppler\nshifts of 2~$\\,$km$\\,$s$^{-1}$, and linewidths close to that of the instrument.\nThe unresolved motions are at most 40\\%{} of the sound speed, which is\n10--15~$\\,$km$\\,$s$^{-1}$.\nIt seems there is little power in motions, resolved or unresolved, across the\nquiet Sun's upper chromosphere.\n\\citet{2015ApJ...809L..30C}\nreported widths of ${\\sim}7$~km$\\,$s$^{-1}$ of the \\ion{O}{1} 1356~\\AA{} line in IRIS data\nin quiet and plage regions. 
\n\\new{Optically thin lines such as the 1356 line have contributions from many scale heights across the chromosphere, so their line widths reflect\nthe line-of-sight sum of many different heights.}\n\n\\subsection{A broader sample of \\ion{Mg}{2} IRIS data}\n\\label{subsec:lucia}\n\nThe above IRIS dataset was selected to represent a typical quiet Sun\nregion at disk center.\nBut these data sampled just 1 part in 2000 of the solar surface.\nWe therefore looked at statistical properties of the IRIS \\ion{Mg}{2} $h$ and\n$k$ profiles from a spatial (raster) scan, that of April 15, 2014,\nbetween 05:25:24 and 05:43:28 UT.\nThis spanned an area of $127\\arcsec \\times 119\\arcsec$, which is ten\ntimes larger than the sit-and-stare observation.\n\nThis observation included a little more active Sun, lying at the periphery of a\nsubstantial active region (NOAA 12036).\nWe found that only about 10\\%{} of the line profiles had $k_2\/k_3$ intensity\nratios between 4 and 6, spanning the value of ${\\approx}5$ for the VAL3-C\ncalculation.\nThe bright chromospheric network showed no more spatial correlation with these\ndeep line ratios than other pixels in the instrument's field of view.\nOn balance, these data from a larger area confirm that large $k_2\/k_3$ intensity\nratios above 4 are present at a 10\\%{} level.\nThe median ratio is 2.5. 
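The autocorrelation lengths used throughout the analysis above can be reproduced with a simple procedure: remove the mean along the slit, autocorrelate, and measure the FWHM of the central peak. The sketch below is our own illustration (not the authors' analysis code), exercised on a synthetic Gaussian structure; the 120~km pixel scale is an assumed value of roughly IRIS order.

```python
import numpy as np

def autocorr_fwhm_km(intensity, pixel_km):
    """FWHM (km) of the normalized spatial autocorrelation of a 1D slice.

    The mean is removed, the autocorrelation is normalized to its zero-lag
    value, and each half-power crossing is located by linear interpolation.
    """
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")
    ac = ac / ac.max()                 # zero lag sits at the maximum
    mid = len(ac) // 2                 # index of zero lag

    def half_width(side):
        for k in range(1, mid + 1):
            i = mid + side * k
            if ac[i] < 0.5:
                prev = ac[i - side]    # last point still above half power
                return k - 1 + (prev - 0.5) / (prev - ac[i])
        return float(mid)

    return (half_width(+1) + half_width(-1)) * pixel_km

# Synthetic check: the autocorrelation of a Gaussian of sigma s is a
# Gaussian of sigma s*sqrt(2), i.e. FWHM = 2.355 * s * sqrt(2).
pix = 120.0                                            # km per pixel (assumed)
xx = np.arange(400) * pix
blob = np.exp(-0.5 * ((xx - 24000.0) / 400.0) ** 2)    # sigma = 400 km
fwhm = autocorr_fwhm_km(blob, pix)   # ~1300 km (ideal 1332 km, slightly
                                     # reduced by the mean removal)
```

On real rasters the same function would be applied row by row at each wavelength, and the statistics (median, mean, r.m.s.) accumulated as in the text.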
\n\n\\section{Re-analysis of recent 3D calculations}\n\nWe have examined model data from \\citet{2017A&A...597A..46S} and\n\\citet{2018A&A...611A..62B}.\nThe atmospheric structure in the models is the same in both articles and is\ndetermined by R-MHD calculations, using the short\ncharacteristics method to solve for radiation gains and losses in the energy\nequation.\nAfter the calculations have been evolved, the authors then studied solutions\nto the transfer equation using long characteristics for the 1D and 3D\ncalculations.\nHere we focus on the computations for the $k$ line by\n\\citet{2017A&A...597A..46S},\nand also comment on those for the \\ion{Ca}{2} $K$ line at 393.4~nm.\nOur purpose here is to examine the hypothesis that the line cores of\nchromospheric resonance lines are filled in by horizontal radiative transfer.\n\nThe horizontal component of the numerical grid has pixels of width 47~km.\nWhile the MHD calculations upon which these radiative transfer calculations are\nbased are themselves highly diffusive compared with the real Sun, the thermal\nstructure in their calculations nevertheless has features down to these scales.\nThe computational data analyzed here have been re-binned to half of the native\nresolution, i.e., to 98~km pixels in the horizontal direction.\nThis scale is comparable to or smaller than typical photon mean free paths through\nthe stratified chromosphere.\nWhile a finer grid is desirable, these calculations are fine enough to reveal\nthe effects explored below.\n\nThe modeled region is similar to enhanced network on the Sun, with\nan unsigned magnetic field strength of 50~G passing through the photosphere\n\\citep{2016A&A...585A...4C,2017A&A...597A..46S,2018A&A...611A..62B}.\nA typical snapshot of a time-dependent 3D calculation is taken.\nFrom the source functions in each voxel, the long-characteristic method is used\nto produce the emergent intensities, including PRD.\nTherefore the solutions shown are a kind of hybrid: the source 
functions and\nthe optical depth scales are computed using the diffusive\nshort-characteristic method, but the final emergent intensities, given these\nparameters, are far less diffusive.\n\nFigure~\\ref{fig:mgii} shows emergent intensities from a small part\n($6\\times6$ Mm$^2$) of the calculated area ($24\\times24$ Mm$^2$), using the long\ncharacteristic method, for a region at disk center ($\\mu=1$).\nWhen compared with observations, we note firstly, that the computed features\n$k_{2V}$ and $k_{2R}$ are factors of between one and two brighter than the\naverage quiet Sun data shown.\nThis is to be expected given the higher concentration of the magnetic\nfield used in the computations.\nBut the $k_2\/k_3$ contrasts are also significantly higher, both in 3D and 1D,\nand the $k_{2V}$ and $k_{2R}$ separations are smaller than observed.\nQualitatively similar results (from \\citealp{2018A&A...611A..62B} but not shown\nhere) are seen for the \\ion{Ca}{2} $K$ line, where the average $K_{2R}$\nand $K_{2V}$ peaks are separated by 0.33~\\AA{} in both quiet and plage regions\n\\citep[compare Figures~3 and 5 of][]{Linsky+Avrett1970},\nwhereas the average over the computed enhanced network region is about half of\nthis (0.17~\\AA).\n\n\\figmgii\n\nSignificantly, in comparison with plage observations, these $k_3$ and $K_3$\ncomponents are far deeper in the computations.\nThis suggests that the source function is on average larger where this feature\nis formed than is captured in the calculations.\n\nIn Figure~\\ref{fig:autoc} we show spatial auto-correlation data computed from\nthe 3D and 1D intensities of the $k$ line, together with spatially-averaged\nintensity profiles.\nSolid lines show observations, dashed lines computations.\nThe auto-correlation characteristic lengths show three kinds of behavior\ndepending on the wavelengths relative to line center.\nIn the line wings, the images have the smallest spatial scales, with FWHM values\nof about 750 and 610~km.\nIn 3D, these 
scales steadily increase towards the line cores, peaking near the\nobserved peaks $k_{2V}$ and $k_{2R}$.\n(In 1D these lengths decrease towards the line center before showing a small\nincrease at $k_{2V}$ wavelengths).\nWithin the line core, the widths computed in 3D drop dramatically to below\n700~km.\n\n\\figautoc\n\nThe comparison of 3D and 1D images in Fig.~\\ref{fig:mgii} is instructive in ways\ncomplementary to the interesting points made by\n\\citet{2018A&A...611A..62B}.\nFigure~\\ref{fig:autoc} shows clearly that 3D~transfer effects are\nimportant within $\\pm 1$~\\AA{} of line center: the FWHM length scales are larger\non average by a factor of two for the $k$ line (15--20\\%\\ for the $K$ line) in\n3D than 1D.\nThe 3D features all appear ``fuzzier'' to the eye than their 1D counterparts, on\nscales of a few hundred kilometers in Fig.~\\ref{fig:mgii}.\nWe will speculate further on what is missing in these calculations in\nSection~\\ref{sec:discussion}.\n\nThe second and fourth rows of Fig.~\\ref{fig:mgii} show how the calculated model\nmight be observed through filter instruments, with FWHM values of 0.1, 1, and\n3~\\AA.\nWider FWHM instruments have been used in solar physics for many years to record\nthe \\ion{Ca}{2} lines, and other, narrower filter widths have been developed%\n\\footnote{see, e.g. 
\n\\url{https:\/\/www.su.se\/isf\/}}.\n\nThese panels suggest that 3D smearing effects are biggest at wavelengths within\nthe emission cores.\nThe differences are probably lower limits, given that the source functions for\nstrongly scattering lines are smeared by the use of the short characteristics,\nwhich will tend to underestimate differences between the 1D and 3D formal\nsolutions to the transfer equation.\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\nWe have re-visited an old problem concerning the upper chromosphere of the Sun:\nis there a clear discrepancy between observed and computed profiles in the\nDoppler cores of the strongest lines?\nIf so, can we determine its origin?\nFirstly, we compiled data to re-examine literature from the 1970s when this\nproblem was first addressed.\nUsing these data, including optically thin lines generated in the same regions\nas the strong lines, we confirmed earlier work showing that micro-turbulence\ncannot account for the systematic behavior of the data and argued that\nmacro-turbulence also fails.\nOn this basis, we then examined the $k$ line of \\ion{Mg}{2} in detail, taking\nadvantage of the quality and angular resolution of the IRIS instrument.\nWe then explored radiation MHD simulations of an enhanced network\nregion of the $K$ line of \\ion{Ca}{2}.\nIn particular, we compared the line cores in 1D and 3D to explore the effects of\n horizontal radiative transfer on the emergent spectra generated\nthroughout the chromosphere.\n\nOur first conclusion is that \\textit{the small peak-core ratios in these\nresonance lines cannot be due to turbulence on any scale}.\nGiven the well-known result that micro-turbulence serves mostly to increase the\nseparation of peaks (see Fig.~\\ref{fig:val3c}), our conclusion rests on the\nrefutation of macro-turbulence.\nThree lines of argument all point to this result.\n\n\\begin{itemize}\n\\item The macro-turbulence limit requires the spatial averaging of profiles of\n fluid 
elements, which are Doppler-shifted by a velocity distribution,\n which spans the computed 1D separation of peaks.\n If the distribution were narrower, little difference would be seen between\n the 1D and average profiles.\n If larger, then we would see multiple peaks for a small number of averaged\n elements, and\/or a severely broadened feature.\n But observations clearly show a steady increase of separation of peaks from\n \\ion{Ca}{2} to \\ion{H}{1} by a factor of three in Doppler shift.\n Yet the \\new{deep} cores of these features all form within the confines of the upper\n chromosphere and higher. \\new{Whether or not the chromosphere is thermally isolated from the corona above by mostly horizontal magnetic fields, these line cores likely form in regions of very steep gradients in electron temperature where ionization causes them to become optically thin.}\n\\item Optically thin UV lines observed with the HRTS instrument since the 1970s\n also \\new{have contributions from} the upper chromosphere.\n Spin-forbidden lines of \\ion{O}{1} near 1356~\\AA{} require almost the same\n population of electron energies to be excited as H~L$\\alpha$, yet they are\n spectrally unresolved by HRTS and show small subsonic deviations of just a\n few $\\,$km$\\,$s$^{-1}${} along the instrument's slit\n \\citep[e.g.,][]{1989ApJ...346..514A}.\n\\item Spatially de-convolved profiles of the \\ion{Mg}{2} resonance lines\n observed by IRIS show very few features smaller than 600~km along the slit,\n yet the spatial resolution is close to 240~km (Figures~\\ref{fig:irismacrotk}\n and \\ref{fig:iris}).\n Between the $k_1$ minima, average autocorrelation lengths vary between\n 800--1200~km, larger values being seen in brighter regions of the network.\n\\end{itemize}\n\nFor our second conclusion, based upon differences between the 1D and 3D\nautocorrelation calculations (Figure~\\ref{fig:mgii}), \\textit{at least some of\nthe peak-to-core ratio discrepancy is due to horizontal 
components of\nradiation transfer}.\nFigures~\\ref{fig:mgii} and \\ref{fig:autoc} unambiguously reveal the small-scale\nsmearing effects of horizontal radiative transfer in the emergent intensities.\nHorizontal radiative transport in space is the \\new{most likely way to explain the difference} between the\n1D and 3D calculations.\nThese differences lower the contrast across the line profiles in\nboth space and frequency, and increase the autocorrelation lengths in space.\n\n\n\nWe can also draw a third conclusion, namely that the upper chromosphere may well\nharbor \\textit{less ``turbulence'' than previously thought}.\nThe profiles of the three strongest lines formed in this region simply do not\nsupport supersonic motions within the uppermost layers of the stratified\natmosphere for the vast majority of the time.\nIn this picture, supersonic spicules and other features simply are not abundant\nenough and do not cover enough area of the surface to be significant, on\naverage, in affecting these line profiles.\n\n\\subsection{Further speculations}\n\nBeyond these conclusions, the confrontation between computed and observed\nprofiles necessarily becomes more speculative.\nArguably, modern MHD calculations capture the lower chromosphere better than the\nupper chromosphere and higher layers.\nAfter all, the energy in higher layers is modified and filtered by its\npropagation and dissipation through the lower layers, which themselves remain a\nsubject of active research.\nThe interface between the cool chromospheric and hot coronal plasma is not only\ndifficult to model accurately, but observations over the past century have\nconsistently revealed the complex thermal nature of the fine structure\nof the upper chromosphere.\nEven the latest radiation MHD calculations attempting to understand why the Sun\nmust produce spicules are based not on 3D but ``2.5D'' calculations in which\nspecial symmetries are imposed \\citep{2018ApJ...860..116M}.\nIt remains 
premature to state that we really know what spicules, and other\nproducts of the plasma dynamics of the chromosphere extending higher up, really\nare.\n\nThese and other structures entirely absent in 1D~calculations and\n3D~calculations may well influence our analysis.\nAs made explicit by\n\\citet{2015SoPh..290..979J},\nsuch structures can have optical depths and source functions \\textit{radiatively\nde-coupled} from the chromosphere beneath.\nThese structures will simply absorb or emit radiation along the line of sight on\ntheir physical scales, essentially disconnected from the non-locally coupled\nradiative transfer solutions leading to the bulk of the line profiles beneath.\nThey can have structure on scales much smaller than the photon path\nlengths.\nOwing to the magnetic control of these plasmas, a consequence of their lower densities and\npressures, they might appear at almost any Doppler shift up to the (high)\nAlfv\\'en speed.\nThus we might expect to see absorbing and\/or emitting structures perhaps with\nlength scales down to the diffraction limit if such structures cover much of the\nchromosphere much of the time.\nThis kind of picture might explain the peculiar structures seen in the network\npanel in the upper part of Fig.~\\ref{fig:iris}.\nIn the internetwork and haze regions, we see the smallest structures in the line\ncores, mostly within $\\pm 0.05~\\text{\\AA} \\equiv \\pm 5$~$\\,$km$\\,$s$^{-1}${} Doppler shifts\n(Figure~\\ref{fig:iris}, seen in the images and the column-averaged white lines).\nThese are sub-sonic speeds.\nSuper-sonic motions associated with spicules and some fibrils are notable by\ntheir absence.\nFor a typical length of a spicule of 5--10$\\arcsec$, and assuming that $10^5$\nsuch spicules are present at any time on the Sun\n\\citep{2016ARep...60..848M},\nwe would expect to see one spicule cross a randomly oriented slit of this length\nat any time.\nThe small-scale features at line-center observed almost everywhere are therefore\nnot 
related to spicules.\nInterestingly, similar features are present in the numerical models for the\n\\ion{Ca}{2} $K$ line, outside of the chromospheric network.\n\nFinally, as noted earlier, if the location of the chromospheric temperature rise\nis deeper in the atmosphere than typical models suggest (see Fig.~20 of\n\\citealt{2018A&A...611A..62B}),\nthen this problem should be re-visited.\nHints of a potential contribution of such a regime to a new understanding are\npresent in line widths computed from A--F models in\n\\citet{Vernazza+Avrett+Loeser1981},\nand the calculations of\n\\citet{2015ApJ...809L..30C}.\nThis is beyond the scope of the present paper but will be addressed in future\nwork.\n\n\\section{Conclusions}\n\nOur results listed above suggest that the amount of ``turbulence'' present in the\nupper chromosphere has been over-estimated in prior work.\nThis may have significant consequences for the energy budget that can be used to\nheat the overlying regions of the solar atmosphere.\n\nOur analysis is incomplete in many ways: for example, we have no pure solar\nspectra for H~L$\\alpha$ with sufficient spectral resolution and photometric\nquality to make comparisons with computations; neither have we any images with\nsufficiently narrow passbands (Doppler widths ${\\lta}10$~$\\,$km$\\,$s$^{-1}${}) to be able to\ncompare with predictions from computations including scattering such as\n\\andrii{in} Fig.~\\ref{fig:mgii}.\nEven so, it is clear that the various models in 3D as well as 1D are missing\nessential physical processes affecting the typical conditions in the upper solar\nchromosphere.\n\nThe consequences of revisiting this early work remain to be fully understood.\nThere are many observations relevant to this study, such as the remarkable\nimages of broad-band L$\\alpha$ light from the VAULT instrument\n\\citep[e.g.][]{Patsourakos+Gouttebroze+Vourlidas2007},\nof chromospheric fine structure in very \\andrii{narrowband} images in H$\\alpha$,\nthe \\ion{Ca}{2} 
infrared triplet lines and others\n\\citep[e.g.][]{2014ApJ...785..109L}.\nOur work is almost certainly important in terms of the amount of energy stored\nin and transported by the small-scale mass motions that we need to invoke, as\nwell as 3D transfer effects, to help explain the problem we have addressed.\n\n\\acknowledgements\n\nJ.L.\\ acknowledges support through the CHROMATIC project (2016.0019) funded by\nthe Knut and Alice Wallenberg foundation.\nThe Institute for Solar Physics is supported by a grant for research\ninfrastructures of national importance from the Swedish Research Council\n(registration number 2017-00625).\nA.V.S.\\ acknowledges financial support from the European Research Council (ERC)\nunder the European Union's Horizon 2020 research and innovation programme (ERC\nAdvanced Grant agreement No~742265) and from the Swiss National Science\nFoundation (SNSF) through Grant CRSII5\\_180238.\n\nNCAR is sponsored by the National Science Foundation.\n\\emph{IRIS} is a NASA small explorer mission developed and operated by LMSAL\nwith mission operations executed at NASA Ames Research center and major\ncontributions to downlink communications funded by ESA and the Norwegian Space\nCentre.\nComputations presented in this paper were performed on resources provided by the\nSwedish National Infrastructure for Computing (SNIC) at the National\nSupercomputer Centre (NSC) at Link\\\"oping University, at the PDC Centre for High\nPerformance Computing at the Royal Institute of Technology in Stockholm, and at\nthe High Performance Computing Center North (HPC2N) at Ume{\\aa} University.\n\n\\bibliographystyle{apj}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\nAn important question on the mechanism of high-temperature superconductivity in iron pnictides concerns the relationship between the quantum criticality and superconductivity. 
In the vicinity of the superconducting phase, antiferromagnetic order with a nearby tetragonal-orthorhombic structural transition is frequently found, and anomalous normal-state properties near the endpoint of the order have been detected in BaFe$_2$As$_2$-based superconductors, implying the importance of quantum critical fluctuations \\cite{Shibauchi14,Kasahara10,Walmsley13,Nakai13,Yoshizawa12,Chu12,Gallais13}. These anomalies are mostly associated with the enhanced electron correlations, which manifest themselves as a divergently increasing electron effective mass $m^*$ approaching a quantum critical point (QCP) \\cite{Walmsley13}. 
It has also been found that the isovalent substitution for K by larger alkali-ions Rb or Cs results in even larger $\\gamma$ values up to $\\sim 180$\\,mJ\/mol\\,K$^2$ \\cite{Wang13,Zhang15}. This suggests that the negative chemical pressure effect in $A$Fe$_2$As$_2$ ($A=$ K, Rb, and Cs) [see Fig.\\:\\ref{lambda}(a)] effectively reduces the band width with increasing alkali-ion radius, which leads to an approach to a QCP of another antiferromagnetically ordered phase \\cite{Zocco15}. \n\nOne of the key issues is how the quantum critical fluctuations affect the superconducting properties\n\\cite{Hashimoto12,Hashimoto13,Chowdhury13,Levchenko13,Nomoto13,Putzke14,Lamhot15}. It has been demonstrated both experimentally \\cite{Hashimoto12,Lamhot15} and theoretically \\cite{Chowdhury13,Levchenko13,Nomoto13} that the enhanced electron correlations lead to an enhancement of the London penetration depth $\\lambda(0)$ in the superconducting phase. It has also been proposed \\cite{Hashimoto13,Nomoto13} that the low-energy quasiparticle excitations in nodal superconductors near an antiferromagnetic QCP are significantly affected in a non-trivial way, giving rise to an unusual power-law temperature dependence of $\\lambda(T)$ with a non-integer exponent close to 1.5, which differs distinctly from the $T$-linear dependence expected for line nodes of the superconducting energy gap $\\Delta(\\bm{k})$. In this viewpoint, clarifying the effect of the possible QCP on $\\lambda(T)$ in $A$Fe$_2$As$_2$ ($A=$ K, Rb, and Cs) is fundamentally important. Here, we report on such measurements of $\\lambda(T)$ down to low temperatures by using very clean single crystals. We find a systematic change in the exponent $\\alpha$ of its power-law temperature dependence with increasing alkali-ion radius, from $\\alpha\\sim 1$ for K \\cite{Hashimoto10} to $\\alpha\\sim 1.5$ for Cs. 
This supports the presence of quantum critical fluctuations in CsFe$_2$As$_2$, strongly affecting quasiparticle excitations as observed in other quantum critical superconductors.\n\n\n\nSingle crystals of $A$Fe$_2$As$_2$ were synthesized in alumina crucibles by a self-flux method \\cite{Zocco13}. In these crystals, quantum oscillations have been observed in magnetostriction measurements for $A=$ K \\cite{Zocco14}, Rb, and Cs \\cite{Zocco15}, indicating that the samples are very clean. In this study, the penetration depth measurements were performed for $A=$ Rb, and Cs using a self-resonant tunnel-diode oscillator with the resonant frequency $f$ of $\\sim 13$\\,MHz in a dilution refrigerator, and we compare the results with the previously reported data for $A=$ K taken in a similar setup \\cite{Hashimoto10}. The shift of the resonant frequency $\\Delta f$ is directly proportional to the change in the magnetic penetration depth, $\\Delta\\lambda(T)$ = $G\\Delta f(T)$, in the superconducting state below $T_c$. The geometric factor $G$ is determined from the geometry of samples and the coil. The samples are cleaved on all sides to avoid degradation due to the reaction with air and coated by Apiezon grease. The samples were mounted in the cryostat within 15 minutes after cleavage to minimize the exposure to air. Typical lateral size of crystals is $\\sim 500 \\times 500$\\,$\\mu$m$^2$ and the thickness is less than 50\\,$\\mu$m. \n\n\n\\begin{figure}[tbp]\n\\includegraphics[width=1\\linewidth]{fig1.eps}\n\\vspace{-3mm}\n\\caption{(Color online). (a) Schematic phase diagrams of BaFe$_2$As$_2$-based superconductors with Co, K, and P substitutions \\cite{Shibauchi14}, combined with that of $A$Fe$_2$As$_2$ ($A=$ K, Rb, and Cs). Only antiferromagnetic $T_N$ (blue circles) and superconducting transition temperatures $T_c$ (red circles) are shown for clarity. 
The isovalent $A$Fe$_2$As$_2$ and BaFe$_2$(As$_{1-z}$P$_z$)$_2$ systems correspond to $3d$ electron numbers per Fe atom of $N=5.5$ and 6.0, respectively (green lines). (b) Temperature dependence of the frequency shift multiplied by the geometric factor $G$ in $A$Fe$_2$As$_2$ single crystals. The data for $A=$ K are taken from Ref.\\,\\cite{Hashimoto10}. (c)-(e) Low-temperature change of the penetration depth $\\Delta\\lambda$ below $T\/T_c=0.20$ plotted against $T\/T_c$ (c), $(T\/T_c)^{1.5}$ (d), and $(T\/T_c)^{2}$ (e). The data are vertically shifted for clarity. Dashed lines are the guides to the eyes.\n}\n\\label{lambda}\n\\end{figure}\n\n\nFigure\\:\\ref{lambda}(b) shows the temperature dependence of the resonant frequency shift $\\Delta f$ multiplied by $G$. The superconducting transition temperatures $T_c$ defined by the midpoint of the transition are 3.4, 2.5, and 2.2\\,K for $A=$ K, Rb, and Cs, respectively, consistent with the previous studies \\cite{Shermadini10, Hong13, Wang13, Tafti15, Wu15}. In the normal state above $T_c$, $G\\Delta f$ is determined by the skin depth $\\delta(T)$, which is related to the dc resistivity $\\rho(T)$ through $\\delta(T) = \\sqrt{2\\rho(T)\/\\mu\\omega}$. Here, $\\mu$ is the permeability and $\\omega=2\\pi f$ is the angular frequency of the oscillator. From these relations we estimate $\\rho$ values just above $T_c$ as $\\sim 0.8$, $\\sim 1.0$, and $\\sim 1.2$\\,$\\mu\\Omega$\\,cm for the K, Rb, and Cs cases, respectively. These low resistivity values indicate the high quality of our samples.\n\nIn Figs.\\:\\ref{lambda}(c)-(e), the change in the penetration depth $\\Delta\\lambda(T) = \\lambda(T)-\\lambda(0)$ at low temperatures is plotted against $T\/T_c$, $(T\/T_c)^{1.5}$ and $(T\/T_c)^2$ for all samples. Two crystals are measured for both Rb and Cs systems and they exhibit almost identical $T$-dependence in each case. 
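The resistivity values quoted above follow from inverting the classical skin-depth relation $\delta = \sqrt{2\rho/\mu\omega}$. As a minimal numerical check (our sketch, not the authors' analysis code, taking $\mu = \mu_0$):

```python
import math

MU_0 = 4.0e-7 * math.pi              # vacuum permeability, H/m

def skin_depth_m(rho_ohm_m, f_hz):
    """Classical normal-state skin depth: delta = sqrt(2*rho/(mu*omega))."""
    omega = 2.0 * math.pi * f_hz
    return math.sqrt(2.0 * rho_ohm_m / (MU_0 * omega))

def resistivity_ohm_m(delta_m, f_hz):
    """Inverted relation: rho = mu * omega * delta^2 / 2."""
    omega = 2.0 * math.pi * f_hz
    return MU_0 * omega * delta_m ** 2 / 2.0

# rho ~ 1 micro-ohm-cm (1e-8 ohm-m) probed at the f ~ 13 MHz oscillator
# frequency corresponds to a skin depth of ~14 micrometres.
delta = skin_depth_m(1.0e-8, 13.0e6)
```

In practice the measurement runs the other way: the normal-state frequency shift gives $\delta(T)$, and the second function then yields $\rho(T)$.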
For all systems $\\Delta\\lambda(T)$ clearly exhibits a non-exponential $T$-dependence in a wide temperature range, which can be approximated by a power law. The exponent of the power-law $T$-dependence is less than 2, which is clearly demonstrated by the convex curvature against $(T\/T_c)^2$ [Fig.\\:\\ref{lambda}(e)]. Such small exponents $\\alpha<2$ found for all three compounds, which can hardly be explained by the pair-breaking scattering effect for fully opened gaps \\cite{Prozorov11}, immediately indicates the robust presence of low-energy quasiparticle excitations. This is consistent with nodal gap structure inferred from the observation of a residual term $\\kappa\/T$ for $T\\to 0$\\,K of the thermal conductivity $\\kappa$ in $A$Fe$_2$As$_2$ \\cite{Dong10,Reid12,Watanabe14,Zhang15,Hong13}, although measurements at lower temperature below $\\sim0.1$\\,K would be required for the ultimate determination of the presence of nodes or very small minima in $\\Delta(\\bm{k})$ \\cite{Hardy14,Carrington15}. \n\n\\begin{figure}[tbp]\n\\includegraphics[width=1.0\\linewidth]{fig2.eps}\n\\vspace{-3mm}\n\\caption{(Color online). (a) Band structures of $A$Fe$_2$As$_2$ ($A=$ K, Rb, and Cs) calculated within GGA-PBE. (b) Density of states as a function of energy near the Fermi level.\n}\n\\label{band}\n\\end{figure}\n\nA closer look at the data shows a systematic increase of the exponent in the order of K, Rb, and Cs. To discuss the origin of this change, we first examine the evolution of electronic structure in this series. The band-structure calculations are performed within the generalized gradient approximation (GGA-PBE \\cite{Perdew96}) for the experimentally obtained lattice parameters by using the Wien2k package \\cite{Blaha}. As shown in Fig.\\:\\ref{band}(a), the results indicate very similar structures for all three systems, which are consistent with the local density approximation results for $A=$ K \\cite{Terashima13} and Cs \\cite{Kong14}. 
The Fermi surfaces of the K, Rb, and Cs systems all have the same topology, which is confirmed by the quantum oscillation experiments mentioned above \\cite{Zocco15}. Furthermore, $T_c$ of these materials shows a universal V-shaped dependence on hydrostatic pressure \\cite{Tafti15}, suggesting that the superconducting gap structure is also essentially similar. \n\n\n\\begin{figure}[tbp]\n\\includegraphics[width=1.0\\linewidth]{fig3.eps}\n\\vspace{-3mm}\n\\caption{(Color online). (a) Superfluid stiffness $\\rho_s(T) = \\lambda^2(0)\/\\lambda^2(T)$ obtained by using the reported values of $\\lambda(0)=203$\\,nm \\cite{Kawano11}, 267\\,nm \\cite{Shermadini10}, and the estimate of $\\lambda(0)=305$\\,nm (see text) for $A=$ K, Rb, and Cs, respectively. (b) Comparisons of $\\rho_s(T)$ for $T\/T_c<0.2$ with multigap calculations with and without considering the momentum-dependent effective Fermi velocity $v_F^*\\propto|\\Delta(\\bm{k})|^{1\/2}$. (c)-(e) Low-$T$ part of $\\rho_s(T)$ below $T\/T_c=0.15$ plotted against $T\/T_c$ for K (c), $(T\/T_c)^{1.4}$ for Rb (d), and $(T\/T_c)^{1.5}$ for Cs (e). The data are vertically shifted for clarity. Dashed lines are guides for the eye.\n}\n\\label{rho_s}\n\\end{figure}\n\n\nTo make a more quantitative analysis, we use the value of $\\lambda(0) \\sim 203(10)$\\,nm reported in the small-angle neutron scattering measurements for K \\cite{Kawano11} and $\\lambda(0) \\sim 267(5)$\\,nm obtained in the $\\mu$SR measurement for Rb \\cite{Shermadini10}. As there is no report on $\\lambda(0)$ for Cs so far, we roughly estimate $\\lambda(0) \\sim 305(30)$\\,nm on the assumption that $\\lambda^2(0)$ is proportional to $\\gamma$ \\cite{Walmsley13}. By using these values, we obtain the full temperature dependence of $\\lambda(T)$ and the normalized superfluid stiffness $\\rho_s(T) = \\lambda^2(0)\/\\lambda^2(T)$ [Fig.\\:\\ref{rho_s}(a)]. 
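The conversion from $\Delta\lambda(T)$ to $\rho_s(T)$, and the extraction of a low-temperature power-law exponent, can be sketched as follows. This is our illustration on synthetic data with a built-in exponent of 1.5, not the fitting code used for the figures; the $\lambda(0)$ value is the Cs estimate quoted above.

```python
import numpy as np

def stiffness(lam0_nm, dlam_nm):
    """Normalized superfluid stiffness rho_s = lambda(0)^2 / lambda(T)^2."""
    return (lam0_nm / (lam0_nm + dlam_nm)) ** 2

def powerlaw_exponent(t, one_minus_rho_s):
    """Exponent alpha from a log-log linear fit of 1 - rho_s = A * t^alpha."""
    slope, _intercept = np.polyfit(np.log(t), np.log(one_minus_rho_s), 1)
    return slope

# Synthetic data: choose Delta-lambda so that 1 - rho_s ~ 0.2 * t^1.5 at
# small t, then verify that the fit recovers alpha ~ 1.5.
t = np.linspace(0.02, 0.15, 50)      # T / Tc
lam0 = 305.0                         # nm, the lambda(0) estimate for Cs
dlam = 0.1 * lam0 * t ** 1.5         # nm
alpha = powerlaw_exponent(t, 1.0 - stiffness(lam0, dlam))   # ~1.5
```

The same fit applied to $1-\rho_s(T)$ in a window up to $0.15\,T_c$ is what the exponents quoted below refer to.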
The overall $T$ dependence of $\\rho_s$ is quite similar in all samples, consistent with the very similar electronic and gap structures in these materials except for the effective mass, which is mainly reflected by the difference of the $\\lambda(0)$ values. \nIn contrast, $\\rho_s(T)$ at low temperatures shows a drastic change; compared with the data for K, the Rb and Cs data show more convex curvatures [Fig.\\:\\ref{rho_s}(b)], indicating a significant change in the amount of quasiparticle excitations at low energies. \n\nThe low-temperature part of $\\rho_s(T)$ plotted against $(T\/T_c)^\\alpha$ with different exponents $\\alpha$ for different systems [Figs.\\:\\ref{rho_s}(c)-(e)] confirms a power-law $T$-dependence with a systematic change in $\\alpha$. In particular, we find $1-\\rho_s(T)\\propto T^{\\alpha}$ with $\\alpha \\sim 1.5$ for Cs in a wide $T$ range, which is clearly different from $\\alpha \\sim 1$ for K. It should be noted that there is a small deviation from the power-law dependence at the lowest temperatures below $\\sim 0.15$\\,K, which may be due to impurity scattering. Such disorder effects can induce $T^2$ dependence of penetration depth at sufficiently low temperatures \\cite{Hirschfeld93,Bonn94}, and sometimes open a small gap in $s$-wave superconductors with accidental nodes \\cite{Mizukami14}, but these effects are limited to the correspondingly low temperature scale and will not affect the higher-$T$ behavior, which is determined by the intrinsic spectrum of quasiparticle excitations. To describe the disorder effect in superconductors with line nodes, the empirical formula $\\Delta\\lambda(T) \\propto (T\/T_c)^2\/\\{(T\/T_c)+(T^*\\!\/T_c)\\}$ that interpolates $T$-linear and $T^2$ dependences has been widely used, where $T^*$ is a measure of the impurity scattering rate \\cite{Hirschfeld93}. 
However, this fitting procedure to our data in a wide temperature range up to $T\/T_c=0.2$ gives considerably large values of $T^*\\!\/T_c$; the power-law dependence with $\\alpha\\sim1.5$ in CsFe$_{2}$As$_{2}$ can be approximated by the above equation with $T^*\\!\/T_c\\sim 0.9$ which is unphysically large for clean samples with low residual resistivity \\cite{Hashimoto13}. Such a large $T^*\\!\/T_c$ is also incompatible with the nonlocality effect, in which $T^*\\!\/T_c \\sim \\xi(0)\/\\lambda(0) \\ll 1$ where $\\xi$ is the coherence length \\cite{Kosztin97}. Therefore we conclude that the anomalous power $\\alpha\\sim 1.5$ observed for the Cs system is an intrinsic property of the low-energy quasiparticles excited near the nodes or minima of $\\Delta(\\bm{k})$. \n\nThe most pronounced change with the alkali-ion radius $R_{\\rm ion}$ in this series is the strong enhancement of the experimentally determined Sommerfeld coefficient $\\gamma_{\\rm exp}$, as shown in Fig.\\:\\ref{trends}(a). This $R_{\\rm ion}$ dependence of $\\gamma_{\\rm exp}$ is much more rapid than that of the band-structure calculations $\\gamma_{\\rm calc}$, which changes by only $\\sim 15$\\% as can be seen in the small increase in the density of states at the Fermi level [Fig.\\:\\ref{band}(b)]. This indicates that the mass renormalization factor averaged over the Fermi surface, $1\/Z=\\gamma_{\\rm exp}\/\\gamma_{\\rm calc}$, is strongly enhanced with increasing $R_{\\rm ion}$ [Fig.\\:\\ref{trends}(b)], which is consistent with the analysis of quantum oscillations \\cite{Zocco15}. The value of $1\/Z$ in CsFe$_2$As$_2$ reaches $\\sim13$, which is larger than the estimated value of $\\sim10$ for the quantum critical concentration $z=0.3$ in the BaFe$_2$(As$_{1-z}$P$_z$)$_2$ system \\cite{Walmsley13}. This result suggests that negative chemical pressure brings the $A$Fe$_2$As$_2$ system toward a QCP. 
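The mass renormalization factor used here is simply the ratio of Sommerfeld coefficients. A trivial numerical check, using the $\gamma_{\rm exp}$ scale from the text; the band-structure value below is an illustrative placeholder of the right order, not the paper's computed $\gamma_{\rm calc}$:

```python
def mass_renormalization(gamma_exp, gamma_calc):
    """Fermi-surface-averaged mass enhancement 1/Z = gamma_exp / gamma_calc."""
    return gamma_exp / gamma_calc

# gamma_exp ~ 180 mJ/(mol K^2) for CsFe2As2 (from the text); a band-scale
# value of ~14 mJ/(mol K^2) (illustrative placeholder) reproduces 1/Z ~ 13.
inv_z_cs = mass_renormalization(180.0, 14.0)
```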
The exponent $\\alpha$ obtained from the power-law fit to the $1-\\rho_s(T)$ data in the $T$ range up to $0.15 T_c$ also shows a systematic change with $R_{\\rm ion}$ as shown in Fig.\\:\\ref{trends}(b). This highlights a close link between the unconventional exponent $\\sim1.5$ and quantum criticality. Considering the fact that $\\alpha\\sim1.5$ has been reported for several unconventional superconductors in the vicinity of QCPs \\cite{Hashimoto13}, we infer that this anomaly might have a common origin associated with quantum critical fluctuations. \n\n\n\\begin{figure}[tbp]\n\\includegraphics[width=0.7\\linewidth]{fig4.eps}\n\\vspace{-3mm}\n\\caption{(Color online). (a) Sommerfeld coefficients $\\gamma_{\\rm exp}$ in single crystals of $A$Fe$_2$As$_2$ ($A=$ K, Rb, and Cs) \\cite{Fukazawa11,Hardy13,Wang13,Zhang15}, compared with the estimated $\\gamma_{\\rm calc}$ from the band-structure calculations, as a function of alkali-ion radius $R_{\\rm ion}$ in the eight-fold coordination. (b) Mass renormalization factor $1\/Z$ extracted from the comparisons of experimental and calculated $\\gamma$ values, and the exponent $\\alpha$ of the power-law $T$ dependence of $\\rho_s(T)$ in a temperature range up to $T\/T_c=0.15$.\n}\n\\label{trends}\n\\end{figure}\n\nIt has been proposed \\cite{Hashimoto13} that an unusual power-law dependence of $\\rho_s(T)$ with a non-integer exponent might arise from a strong momentum dependence of the effective Fermi velocity $v_F^*(\\bm{k})$ near the nodes of the superconducting gap $\\Delta(\\bm{k})$. Such a $\\bm{k}$ dependence of $v_F^*\\propto 1\/m^*$ requires that quantum critical fluctuations responsible for the mass renormalization also have strong momentum dependence associated with $\\Delta(\\bm{k})$. This can be naturally expected when the quantum critical fluctuations are quenched by opening the superconducting gap, the degree of which is determined by the gap magnitude $|\\Delta(\\bm{k})|$ \\cite{Hashimoto13}. 
Recent theoretical calculations for quantum critical superconductors with line nodes \\cite{Nomoto13} indicate that the current vertex corrections to the $T$-linear penetration depth can be significant if the antiferromagnetic hot spots are located near the nodal points, as is the case in electron-doped cuprates. \n\nNow we extend this consideration to the multigap systems, where $\\Delta(\\bm{k})$ is very different for different bands. The strong curvature found in $\\rho_s(T)$ near $T_c$ indicates the importance of the multiband effect \\cite{Prozorov11}. Indeed, recent ARPES and specific-heat measurements have reported highly band-dependent gap structures in KFe$_{2}$As$_{2}$ \\cite{Okazaki12,Hardy14}. As shown in Fig.\\:\\ref{rho_s}(b), we find that a set of multigap functions similar to that accounting for the specific-heat data \\cite{Hardy14} can reproduce the low-temperature $\\rho_s(T)$ of KFe$_2$As$_2$ fairly well. Here we calculate $\\rho_s(T)=\\sum_i{w^i\\rho_s^i}(T)$ by assuming an extended $s$-wave state with a nodal gap in one band (the middle hole band $\\zeta$ around $\\Gamma$) and different constant gaps in other bands. The weight $w^i$ of each band $i$ is estimated from the quantum oscillation results \\cite{Terashima13} (see Table\\:\\ref{multigap}). Apparently, these weights do not change significantly for Rb and Cs \\cite{Zocco15}, and thus we need another ingredient to explain the enhanced exponent $\\alpha$. To include the effect of quantum criticality discussed above, we then calculate $\\rho_s(T)$ by applying the $\\bm{k}$-dependent effective velocity in the superconducting state of the form $v_F^*(\\bm{k})\\propto|\\Delta(\\bm{k})|^{1\/2}$ \\cite{Hashimoto13} to all bands, which affects the weight that includes the effective mass \\cite{SI}. For the $\\zeta$ band, this $\\bm{k}$-dependence increases the low-$T$ power-law exponent of $\\rho_s^{\\zeta}(T)$. 
For the other bands, the $T$ dependence of each $\\rho_s^i$ is unchanged but the relative weights are modified by their gap magnitudes; for the small-gap band the weight is reduced because of the suppressed $v_F^*$. This simple analysis captures salient features of the systematic change in $\\rho_s(T)$ with $A$ [Fig.\\:\\ref{rho_s}(b)], although at higher temperatures there are some deviations which are possibly due to the interband coupling effect. The essence of this $|\\Delta(\\bm{k})|$-dependent mass renormalization is that it can naturally reduce the contribution of quasiparticles excited at the positions where the gap is small. Thus the observed positive correlation between the mass enhancement and the exponent of the superfluid stiffness supports this non-trivial effect of quantum critical fluctuations in the superconducting state. \n\n\n\n\n\n\n\\begin{table}[tbp]\n \\begin{center}\n \\caption{Multigap parameters used in the calculations of $\\rho_s(T)$. The weight $w^i$ to $\\rho_s$ of band $i$ is estimated from the number of holes $n$ and the effective mass $m^*$ in units of the electron mass $m_e$ \\cite{Hashimoto10}, which are determined by the quantum oscillations in KFe$_2$As$_2$ \\cite{Terashima13}. 
The gap values of each band and the angle dependence of $\\Delta(\\phi)$ of the $\\zeta$ band are assumed following Ref.\\,\\cite{Hardy14}.}\n \\begin{tabular}{ccccc} \n\\hline \n\\hline\nband & $n$ & ${m^*}\/{m_e}$ & $w^i$ & $\\Delta(\\phi)\/k_BT_c$ \\\\\n\\hline\n$\\alpha$ (inner) & 0.17 & 6 & 0.31 & 0.61 \\\\\n$\\zeta$ (middle) & 0.26 & 13 & 0.23 & $0.35+0.45\\cos(4\\phi)$ \\\\\n$\\beta$ (outer) & 0.48 & 18 & 0.31 & 0.25 \\\\\n$\\varepsilon$ (corner) & 0.09 & 7 & 0.15 & 1.8 \\\\\ntotal & 1.0 & & 1.0 & \\\\\n\\hline\n\\hline\n \\end{tabular}\n \\label{multigap}\n \\end{center}\n\\end{table}\n\n\n\n\n\nIn summary, from the penetration-depth measurements of $A$Fe$_2$As$_2$ ($A=$ K, Rb, and Cs) in the heavily hole-doped regime with a $3d$ electron number of $N=5.5$, we show that the superconducting gaps have robust strong anisotropies, consistent with the previous reports of thermal conductivity measurements \\cite{Dong10,Reid12,Watanabe14,Zhang15,Hong13}. The exponent of the approximate power-law dependence of the superconducting stiffness changes from $\\alpha\\sim 1$ toward $\\alpha\\sim 1.5$ with negative chemical pressure, indicating a systematic decrease of the low-energy quasiparticle excitations, which correlates with the enhancement of the effective mass toward the possible QCP in $A$Fe$_2$As$_2$. This observation entails a new relation between the momentum dependences of quantum critical antiferromagnetic fluctuations and the quasiparticle excitations of the superconducting condensate.\n\n\n\n\nWe thank A. Carrington, F. Eilers, F. Hardy, R. Heid, P.\\,J. Hirschfeld, and C. Meingast for valuable discussions. This work has been supported by \nthe Bilateral Joint Research Project program and KAKENHI from JSPS.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\n\n\nEfficient photon-to-electron energy conversion is an important property of a nanomaterial and has been receiving a \nlot of attention. 
One mechanism of increasing the conversion efficiency is called multiple-exciton generation (MEG), where \nabsorption of a high-energy photon results in the generation of multiple charge carriers. Fig. \\ref{MEGprocess} illustrates \nthe MEG process: absorption of a high-energy photon creates an electron-hole pair (exciton) with the energy exceeding twice \nthe bandgap. In MEG, this excess photon energy is diverted into the generation of additional charge carriers instead of being \nlost to generating atomic vibrations. \n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{process3.pdf} \n \\caption{Schematic illustration of the MEG process. Absorption of a photon with energy $\\hbar \\omega \\ge 2E_{gap}$ creates an exciton, which can decay into a biexciton. \nPictured here, a hot electron loses some of its energy to the creation of another electron-hole pair. } \n\\label{MEGprocess} \n\\end{figure}\n\n\nIt has been shown that biexciton bound states play an important role in the excited-state properties of SWCNTs and that exciton-exciton binding energies are strongly dependent on the nanotube chirality \\cite{Matsunaga, Santos}. Experimental work aimed at addressing specifically the importance of exciton-exciton interactions in chiral SWCNTs has been performed by Colombier \\textit{et al.}, where a biexciton binding energy of 106 meV in the (9,7) SWCNT has been reported \\cite{Colombier}. 
Previous theoretical work on exciton and biexciton binding energies in carbon nanotubes has been reported by Kammerlander \\textit{et al.}, where the quantum Monte Carlo method has been combined with the tight-binding approximation and binding energies of $\\approx 120-150$ meV were reported \\cite{Kammerlander}.\nTheoretical efforts based on MBPT for calculating exciton-to-biexciton rates in MEG calculations in carbon nanotubes have been reported by Rabani \\textit{et al.}, although in their work, the final biexciton state is treated as a pair of two non-interacting excitons \\cite{Rabani}. {Additionally, there has been extensive theoretical work based on perturbation theory aimed at predicting carrier multiplication (CM) rates in semiconductor nanostructures \\cite{Voros, Marri, Marri2}, but exciton-exciton interactions in the final biexciton state are not included. \nExplicit, first-principles treatment of exciton-exciton interactions in semiconductor nanostructures is reported by Piryatinski \\textit{et al.} \\cite{Piryatinski}, \nwhere first-order perturbation theory is applied to biexciton states in type II core\/shell nanocrystals, where the spatial separation of opposite charges and the enhanced \nconfinement of like charges lead to large ($\\sim$ 100 meV) positive shifts to the biexciton energies. However, the methodology developed in \\cite{Piryatinski} is not \napplicable to biexciton states where the exciton separation distance is comparable to the electron-hole separation distance, as can be expected in CNTs.} \n\nRecently, Kryjevski \\textit{et al.} have developed several methods for a comprehensive description of MEG in a nanostructure using \nDFT-based MBPT, including exciton effects \\cite{KryjevskiMEG2,KryjevskiMEG3,KryjevskiCNT,KryjevskiSF,KryjevskiMEG,KryjevskiCT}. 
\nFirst, one uses the DFT-based MBPT technique to compute exciton-to-biexciton decay and recombination rates, {\\it i.e.}, the rates of \nthe inverse and direct Auger processes, respectively \\cite{KryjevskiMEG2, KryjevskiMEG3}. Next, one utilizes DFT software to compute \nphonon frequencies and normal modes, \nwhich are then used to compute one- and two-phonon exciton emission rates \\cite{KryjevskiMEG}. Finally, all these rates are incorporated \ninto the Boltzmann transport equation (BE), which provides a comprehensive nonperturbative description of the time evolution of the excited state, \nincluding the ``competition\" between different relaxation channels, such as MEG, phonon-mediated relaxation, {\\it etc.} \\cite{KryjevskiMEG}. \nIn particular, one can compute the number of excitons generated from a single high-energy photon, which is the internal quantum efficiency (QE). \nThis method has been successfully applied to several chiral SWCNTs and efficient low-energy MEG was predicted, in good agreement with the \navailable experimental data \\cite{MEGExperiment, MEGExperiment2}. Further, when augmented with the exciton transfer terms, \nthis BE technique has been applied to the doped SWCNT-$Si$ nanocrystal system and the formation of a long-lived charge transfer state was predicted \\cite{KryjevskiCT}. \nAdditionally, in \\cite{KryjevskiSF} a method for the MEG rates in the singlet fission (SF) process, where a singlet exciton decays into two spin-one (triplet) excitons in \nthe overall singlet state, has been developed and applied to SWCNTs. \n\nIn all the MEG work so far \nthe final biexciton state has been approximated as a non-interacting exciton pair. However, knowledge of the low-energy biexciton energy levels is essential \nfor the accurate prediction of the MEG threshold. So, here we investigate this issue by developing a DFT-based MBPT method for biexciton state energies by including \nresidual electrostatic (dipole-dipole) interactions between the excitons in the biexciton state. 
The technique is then applied to chiral SWCNTs (6,2), (6,5) and (10,5). Here, only biexciton states composed of two singlet excitons are included.\n\nThe paper is organized as follows. Section \\ref{theory} contains a description of the methods and\napproximations employed in this work. Section \\ref{CompDetail} contains a description of the atomistic\nmodels studied in this work and of the DFT simulation details. Section \\ref{Results} contains a discussion\nof the results obtained. Conclusions and Outlook are presented in Section \\ref{Conclusion}. \n\n\\section{Theoretical Methods and Approximations} \\label{theory}\n\n\\subsection{Microscopic Hamiltonian} \\label{sec:Hamiltonian}\n\nFor completeness, let us review the basics of the DFT-based MBPT approach that includes electron-exciton terms \\cite{KryjevskiSF}. \n\nThe electron field operator $\\psi_{\\sigma}(\\textbf{x})$ is related to the annihilation operator of the $i^{th}$ Kohn-Sham (KS) state $a_{i\\sigma}$ via\n\\begin{eqnarray}\n \\psi_{\\sigma}(\\textbf{x}) = \\sum_{i} \\phi_{i\\sigma}(\\textbf{x})a_{i\\sigma}\n\\end{eqnarray} \nwhere $\\phi_{i\\sigma}(\\textbf{x})$ is the $i^{th}$ KS orbital with spin $\\sigma$ and $a_{i\\sigma}, a_{i\\sigma}^{\\dagger} $ obey the fermion anti-commutation relations $\\{a_{i\\sigma}, a_{j\\nu}^{\\dagger}\\} = \\delta_{ij}\\delta_{\\sigma\\nu}, \\{a_{i\\sigma}, a_{j\\nu}\\} = 0$ \\cite{Fetter,Mahan}. In the spin nonpolarized case we consider here, \n$\\phi_{i\\uparrow}(\\textbf{x})=\\phi_{i\\downarrow}(\\textbf{x})\\equiv \\phi_{i}(\\textbf{x}).$ In terms of $a_{i\\sigma}, a_{i\\sigma}^{\\dagger}$, the electron Hamiltonian is\n\n\\begin{equation}\n {H}= \\sum_{i\\sigma} \\varepsilon_i a_{i\\sigma}^{\\dagger} a_{i\\sigma} + H_C - H_V + H_{e-exciton} \\label{Hamiltonian} \n\\end{equation} \nwhere $\\varepsilon_{i\\uparrow} = \\varepsilon_{i\\downarrow} \\equiv \\varepsilon_{i}$ is the energy of the $i^{\\text{th}}$ KS orbital. 
In a periodic structure KS energies and orbitals \nare labeled by the band number and lattice wavevector but here, as explained in Sec. \\ref{CompDetail}, the state label is just an integer. The $H_C$ term is the microscopic Coulomb interaction operator \n\\begin{eqnarray}\n & H_C = \\frac{1}{2} \\sum_{ijkl \\sigma, \\nu} V_{ijkl} a^{\\dagger}_{i \\sigma} a^{\\dagger}_{j \\nu} a_{l \\nu} a_{k \\sigma} , \\\\\n & V_{ijkl} = \\int d\\textbf{x} d\\textbf{y} \\phi_i^*(\\textbf{x})\\phi_j^*(\\textbf{y}) \\frac{e^2}{| \\textbf{x} - \\textbf{y} |} \\phi_l(\\textbf{y})\\phi_k(\\textbf{x}), \n\\label{COULOMB}\n\\end{eqnarray}\nwhere KS indices $i,j,k,l,...$ can refer to both occupied and unoccupied states within the included range. The $H_V$ term prevents double-counting of electron interactions \n\\begin{equation}\n H_V = \\sum_{ij, \\sigma} a^{\\dagger}_{i \\sigma} \\bigg( \\int d\\textbf{x} d\\textbf{y} \\phi_i^*(\\textbf{x}) V_{KS}(\\textbf{x}, \\textbf{y}) \\phi_j(\\textbf{y}) \\bigg) a_{j \\sigma} ,\n\\label{HV}\n\\end{equation}\nwhere $V_{KS}(\\textbf{x}, \\textbf{y})$ is the Kohn-Sham potential used in the DFT simulation \\cite{Onida,Kummel}. \n\nThe electron-exciton coupling term $H_{e-exciton}$ appearing in Eq. \\eqref{Hamiltonian} is\n\\begin{eqnarray}\n& H_{e-exciton} = \\sum_{e h \\alpha} \\sum_{\\sigma} \\frac{1}{\\sqrt{2}} \\bigg( [\\varepsilon_{e} - \\varepsilon_h - E^{\\alpha}]\\Psi_{eh}^{\\alpha} a_{h \\sigma} a^{\\dagger}_{e \\sigma}({\\rm B}^{\\alpha} + {\\rm B}^{\\alpha \\dagger}) + h.c. \\bigg) + \\sum_{\\alpha} E^{\\alpha} {\\rm B}^{\\alpha \\dagger} {\\rm B}^{\\alpha} \\label{EXCFERMSF} \n\\end{eqnarray}\nwhere $e \\ge$ LU (the lowest unoccupied KS state) and $h \\le HO$ (the highest occupied KS state), ${\\rm B}^{\\alpha \\dagger}$, $\\Psi_{eh}^{\\alpha}$ and $E^{\\alpha}$ \nare the singlet exciton creation operator, wave function and energies respectively. 
Technically, the full electron Hamiltonian comprises only the first three \nterms in Eq. \\eqref{Hamiltonian}; however, adding $H_{e-exciton}$ along with the rules to avoid double counting allows for the perturbative treatment of excitonic \neffects and is a standard approach to describing the coupling between excitons and electrons and holes \\cite{Beane, Spataru1, Perebeinos, Spataru2}. The $H_{e-exciton}$ term arises from a resummation of the perturbative Coulomb corrections, generated by the Coulomb interaction operator $H_C$ (Eq. \\eqref{COULOMB}), to the electron-hole state, \nwhich is an implementation of a standard method to include bound states into the MBPT framework \\cite{Berestetskii, Beane, KryjevskiSF}. \n\nExciton wave functions, $\\Psi_{eh}^{\\alpha},$ and energies, $E^{\\alpha},$ are solutions to the Bethe-Salpeter equation (BSE), which is (see, {\\it e.g.}, \\cite{Benedict}) \n\\begin{equation}\n [\\varepsilon_{e} - \\varepsilon_h - E^{\\alpha}]\\Psi_{eh}^{\\alpha} + \\sum_{e' h'} (K_{C} + K_{D})(e,h;e',h')\\Psi_{e'h'}^{\\alpha} = 0, \\label{BSE}\n\\end{equation}\nwhere\n\\begin{equation}\n K_{C} =\\frac{8 \\pi e^2}{V} \\sum_{\\textbf{k} \\neq 0} \\frac{\\rho_{eh}(\\textbf{k})\\rho_{e'h'}^*(\\textbf{k})}{k^2}, \\label{KCOUL}\n\\end{equation}\n\\begin{equation}\n K_{D} = - \\frac{4 \\pi e^2}{V} \\sum_{\\textbf{k} \\neq 0} \\frac{\\rho_{ee'}(\\textbf{k})\\rho_{hh'}^*(\\textbf{k})}{|\\textbf{k}|^2 - \\Pi(0, -\\textbf{k}, \\textbf{k})}, \\label{KDIR}\n\\end{equation}\nwhere\n\\begin{equation}\n \\rho_{ij}(\\textbf{k}) = \\sum_{\\textbf{p}} \\phi_j^*(\\textbf{p} - \\textbf{k}) \\phi_i(\\textbf{p})\n\\end{equation}\n\n{Medium screening is taken into account via the polarization function (Eq. \\eqref{PI}) which appears only in the direct term (Eq. 
\\eqref{KDIR}) of the BSE.} In the random-phase approximation (RPA) \n\\begin{equation}\n \\Pi(\\omega,\\textbf{k},\\textbf{p}) = \\frac{8 \\pi e^2}{\\hbar V} \\sum_{ij} \\rho_{ij}(\\textbf{k})\\rho_{ji}(\\textbf{p})\\bigg(\\frac{\\theta_{-j}\\theta_{i}}{\\omega - \\omega_{ij} + i\\delta} - \\frac{\\theta_{j}\\theta_{-i}}{\\omega - \\omega_{ij} - i\\delta} \\bigg), \\;\\;\\; \\omega_{ij} = \\frac{\\varepsilon_i - \\varepsilon_j}{\\hbar} \\label{PI}\n\\end{equation}\nwhere $V$ is the simulation cell volume and $\\delta$ is the width parameter, which will be set to $0.025~eV$, corresponding to the room-temperature scale. \n\nHere, we use the static approximation, taking $\\Pi(\\omega=0,\\textbf{k},\\textbf{p}).$ This is a widely-used approach for semiconductor nanostructures ({\\it e.g.}, \\cite{PhysRevLett.90.127401,PhysRevB.68.085310,PhysRevB.79.245106}), which is justified by the cancellations that appear when the electron-hole screening and the single-particle Green's functions are \nboth treated dynamically \\cite{Bechstedt}. Next, the main simplifying approximation employed in this work is to retain only the diagonal elements of the polarization matrix, \n\\textit{i.e.}, $\\Pi(0,\\textbf{k},\\textbf{p}) \\simeq \\Pi(0,\\textbf{k},-\\textbf{k})\\delta_{\\textbf{k}, -\\textbf{p}} $, or $\\Pi(0,\\textbf{x},\\textbf{x}') \\simeq \\Pi(0,\\textbf{x} - \\textbf{x}')$ in position space, {\\it i.e.}, the system \nis approximated as a uniform medium. This is a valid approximation for quasi-one-dimensional systems, such as CNTs, where one can expect $\\Pi(\\textbf{x},\\textbf{x}') \\simeq \\Pi(z-z')$, with $z,z'$ being the axial positions. This diagonal polarization matrix approximation, which is an improvement on previous studies of CNTs in which screening has been approximated by a dielectric constant \\cite{Perebeinos2}, has been employed for the time being, as it significantly reduces computational costs. 
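As a minimal numerical sketch of how this static screening enters the kernels (toy values, not taken from any DFT output): for a gapped system the static polarization $\\Pi(0,\\textbf{k},-\\textbf{k})$ is negative, so it adds to $|\\textbf{k}|^2$ in the denominator of Eq. \\eqref{KDIR} and weakens the bare interaction:

```python
import math

def w_bare(k2, e2=1.0, volume=1.0):
    """Bare Coulomb kernel 4*pi*e^2/V / k^2 (toy units)."""
    return 4.0 * math.pi * e2 / volume / k2

def w_screened(k2, pi_static, e2=1.0, volume=1.0):
    """Statically screened kernel 4*pi*e^2/V / (k^2 - Pi(0,k,-k))."""
    return 4.0 * math.pi * e2 / volume / (k2 - pi_static)

k2, pi0 = 0.5, -2.0  # hypothetical |k|^2 and (negative) static polarization
assert w_screened(k2, pi0) < w_bare(k2)               # screening weakens the interaction
assert math.isclose(w_screened(k2, 0.0), w_bare(k2))  # vanishing polarization: bare limit
```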
Calculations including the full treatment of the polarization matrix, although unlikely to change the qualitative conclusions, are left for future work. Also, in the DFT simulations one uses the hybrid Heyd-Scuseria-Ernzerhof (HSE06) exchange-correlation functional \\cite{HSE06}, \nwhich substitutes for the $G_0W_0$ calculation of single-particle energies - the second step in the standard three-step process in the electronic structure \ncalculation \\cite{Rohlfing2,Hybertsen}. The HSE06 functional, which is significantly less computationally expensive than $G_0W_0$, has been shown to produce somewhat reasonable \nresults for bandgaps in semiconducting nanostructures \\cite{Kummel, HSE06Muscat, LouieHSE06}. So, single-particle energies and orbitals are approximated by the KS $\\varepsilon_i$ and \n$\\phi_i({\\bf x})$ from the HSE06 DFT output. Therefore, in our approximation, using HSE06 replaces the ``dressing'' of fermion \nlines in the Feynman diagrams, including the subtraction of the compensating term (\\ref{HV}). \n\nFor SWCNTs, the set of approximations stated above has been checked and shown to be reasonable by reproducing the experimental results for the low-energy absorption peaks in (6,2), (6,5) and \n(10,5) SWCNTs within 5-13\\% error \\cite{KryjevskiMEG}.\n \n\n\\section{Expressions for the Biexciton Self-Energy} \\label{Expressions}\n\nIn order to account for the exciton-exciton interactions, one computes the self-energy of the biexciton state $\\hbar\\Sigma_{\\alpha\\beta} (E)$. \nThen the biexciton energy is approximated as \n\\begin{eqnarray}\n E_{\\alpha\\beta}=E^{\\alpha} + {E}^{\\beta}+\\hbar\\Sigma_{\\alpha\\beta} (E = E^{\\alpha} + E^{\\beta}) \\label{self_energy_corr}\n\\end{eqnarray}\nwhere $E^{\\alpha}$ and $E^{\\beta}$ are the energies of the two excitons in the biexciton state. $E^{\\alpha}$ and $E^{\\beta}$ are obtained by solving the Bethe-Salpeter equation (Eq. \\eqref{BSE}). 
$\\hbar\\Sigma_{\\alpha\\beta} (E = E^{\\alpha} + E^{\\beta})$ is the biexciton self-energy evaluated at the total energy of a state composed of two non-interacting excitons $\\ket{\\alpha}$ and $\\ket{\\beta}$. \n\nTo leading order in the Coulomb interaction, the relevant Feynman diagrams are shown in Fig. \\ref{BIEXCfig}. One only retains \ncorrections to the bare biexciton state where a particle or a hole from the exciton $\\ket{\\alpha}$ interacts with a particle or a hole \nfrom the other exciton $\\ket{\\beta}$; the interactions between particles and holes within the same exciton are already included by the BSE. \nNote that including contributions with all possible arrow directions in each fermion loop is needed to include\nall the interactions between electrons and holes within the biexciton state. \n\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.97\\textwidth]{biexciton_self_ab_alphabeta_2.pdf}\n \\caption{Leading order Feynman diagrams representing contributions to the biexciton self-energy. Thin (fermion) lines are Kohn-Sham particle\/hole propagators, thick lines - excitons (see Fig. \\ref{EXCbound}), zigzag lines - the {screened} Coulomb interaction. Not shown for brevity are similar diagrams where arrow directions in one of the fermion loops are reversed, and diagrams where all arrows are reversed. \n } \\label{BIEXCfig}\n\\end{figure}\n\\noindent\nNote that for each diagram in Fig. \\ref{BIEXCfig} there is an additional contribution upon exchange of the $\\alpha$ and $\\beta$ excitons on the right-hand side of the fermion loop. These contributions are left for future work. The expressions resulting from each diagram in Fig. \\ref{BIEXCfig} are similar in general form, so for brevity, only one of them will be quoted here and the rest will be presented in the appendix. For instance, the diagram from Fig. 
\\ref{BIEXCfig} - \\textit{a)} results in the following expression\n\\begin{eqnarray}\n & \\hbar\\Sigma_{\\alpha\\beta} (E = E^{\\alpha} + E^{\\beta}) = \\frac{4}{5} \\sum_{ijklmn} \\frac{\\Theta_{i}\\Theta_n\\Theta_{l}\\Theta_{-k}\\Theta_{-j}\\Theta_{-m}}{(E^{\\alpha}-\\epsilon_{nk} - i\\delta)(E^{\\beta}-\\epsilon_{lm}+ i\\delta)(\\epsilon_{kj} - i\\delta)(\\epsilon_{li} + i\\delta)} \\times \\nonumber \\\\ \n& \\times (\\Psi_{im}^{\\beta})^*(E^{\\beta} - \\epsilon_{im})(\\Psi_{nk}^{\\alpha})^*(E^{\\alpha} - \\epsilon_{nk}) \\Psi_{lm}^{\\beta} (E^{\\beta} - \\epsilon_{lm}) \\Psi_{nj}^{\\alpha} (E^{\\alpha} - \\epsilon_{nj}) \\times W_{ijkl} \\label{expression}\n\\end{eqnarray}\nwhere $\\Psi_{ij}^{\\alpha}$ and $E^{\\alpha}$ are the exciton wave function and exciton energies respectively obtained by solving the Bethe-Salpeter equation (Eq. \\eqref{BSE}). $\\epsilon_{ij} = \\epsilon_i - \\epsilon_j$; $W_{ijkl}$ are the RPA-screened Coulomb matrix elements \n\\begin{equation}\n W_{ijkl} = \\frac{4 \\pi e^2}{V} \\sum_{\\textbf{k} \\neq 0} \\frac{\\rho_{il}^*(\\textbf{k})\\rho_{jk}(\\textbf{k})}{|\\textbf{k}|^2 - \\Pi(0, -\\textbf{k}, \\textbf{k})} \\label{W}\n\\end{equation}\nwhere $\\Pi(0, -\\textbf{k}, \\textbf{k})$ is defined in Eq. \\eqref{PI}. The theta-functions determine whether the KS indices $i,j,k,l$ are particles or holes\n\\begin{equation}\n\\Theta_i = \\sum_{i > \\text{HO} } ~~ , ~~ \\Theta_{-i} = \\sum_{i \\le \\text{HO}}.\n\\end{equation}\n In this work we use \n\\begin{eqnarray}\n\\frac{1}{x \\pm i \\delta}=\\frac{x}{x^2+\\delta^2} \\mp i \\frac{\\delta}{x^2+\\delta^2},\n\\label{1xidelta}\n\\end{eqnarray}\ni.e., both the principal value and delta function parts of $1\/(x \\pm i \\delta)$ factors are included.\n\nThe other three diagrams - \\ref{BIEXCfig}-$b),~c),~d)$ - produce similar expressions not shown for the sake of brevity. \n\nNext, the possibility of including higher order perturbative corrections was explored. 
Naively including terms of second (or higher) order in the Coulomb interaction, i.e., \ndecorating diagrams in Fig. \\ref{BIEXCfig} with additional zigzag lines, would lead to prohibitively expensive calculations, even for a small system. However, certain classes \nof perturbative contributions can be summed to all orders, {such as the Coulomb interactions between electron and hole resulting in formation of an exciton bound state (see Fig. \\ref{EXCbound}). \nEq. \\ref{EXCFERMSF} describes the resulting exciton-electron coupling term. } \n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.89\\textwidth]{exc_bound_3.pdf}\n \\caption{ {Summing perturbative contributions from Coulomb interactions (zigzag lines) between electron and hole (thin lines) is equivalent to including intermediate exciton bound state (thick line).} } \\label{EXCbound}\n\\end{figure}\n\\noindent\nThis summation of Coulomb corrections to all orders can be applied to modify the diagrams shown in Fig. \\ref{BIEXCfig}. In Fig. \\ref{BIEXCresum} it is illustrated how an intermediate exciton state \n$\\gamma$ appears from decorating the electron-hole lines in Fig. \\ref{BIEXCfig} with zigzag lines in all possible ways. Here, we work to the leading order in the electron-exciton coupling - Eq. \\ref{EXCFERMSF} - and, so, only decorate one pair of the electron-hole lines. \n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.99\\textwidth]{biexciton_self_resummation_4.pdf}\n \\includegraphics[width=0.99\\textwidth]{biexciton_self_resummation_5.pdf}\n \\caption{ Summing perturbative contributions from Coulomb interactions (zigzag lines) between electron and hole (thin lines) from different excitons - $\\alpha$ and $\\beta$ - results in appearance of the intermediate exciton $\\gamma$.} \\label{BIEXCresum} \n\\end{figure}\n\\noindent\nThe two distinct diagrams resulting from modifying Fig. \\ref{BIEXCfig} are shown in Fig. \\ref{BIEXCfig2}. 
\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.99\\textwidth]{biexciton_self_with_excitons.pdf}\n \\caption{ \\textit{a)} - diagram resulting from modification of diagrams \\ref{BIEXCfig}-\\textit{a)} and \\textit{b)}. \\textit{b)} - diagram resulting from modification of diagrams \\ref{BIEXCfig}-\\textit{c)} and \\textit{d)}. } \\label{BIEXCfig2}\n\\end{figure}\n\\noindent\nThe contributions to the biexciton self-energy \nfrom the diagrams in Fig. \\ref{BIEXCfig2} are \n\\begin{eqnarray}\n & \\hbar\\Sigma_{\\alpha\\beta}^+ = \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{a}} + \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{b}} \\label{expression+} \\\\\n & \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{a}}(E = E^{\\alpha} + E^{\\beta}) = \\frac{8}{45} \\sum_{ijklmn\\gamma} \\frac{\\Theta_{j}\\Theta_k\\Theta_{n}\\Theta_{-i}\\Theta_{-l}\\Theta_{-m}}{(E^{\\alpha} - \\epsilon_{ki} + i\\delta)(E^{\\beta} - \\epsilon_{nm}- i\\delta)(E^{\\gamma} - \\epsilon_{km} + i\\delta)(E^{\\gamma} - \\epsilon_{jl} + i\\delta)(\\epsilon_{jk} + i\\delta)} \\times \\label{expression2} \\\\ \n& \\times (\\Psi_{ji}^{\\alpha})^*(E^{\\alpha} - \\epsilon_{ji})\\Psi_{ki}^{\\alpha}(E^{\\alpha} - \\epsilon_{ki}) \\Psi_{nl}^{\\beta} (E^{\\beta} - \\epsilon_{nl}) (\\Psi_{nm}^{\\beta})^* (E^{\\beta} - \\epsilon_{nm}) \\Psi_{jl}^{\\gamma} (E^{\\gamma} - \\epsilon_{jl}) (\\Psi_{km}^{\\gamma})^* (E^{\\gamma} - \\epsilon_{km}) \\nonumber \n\\end{eqnarray}\n\\begin{eqnarray}\n & \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{b}}(E^{\\alpha} + E^{\\beta}) = \\frac{8}{45} \\sum_{ijklmn\\gamma} \\frac{\\Theta_{j}\\Theta_k\\Theta_{n}\\Theta_{-i}\\Theta_{-l}\\Theta_{-m}}{(E^{\\alpha} - \\epsilon_{kl} + i\\delta)(E^{\\beta} - \\epsilon_{km}- i\\delta)(E^{\\gamma} - \\epsilon_{jl} + i\\delta)(E^{\\gamma} - \\epsilon_{nm} + i\\delta)(\\epsilon_{jn} + i\\delta)} \\times \\label{expression3} \\\\ \n& \\times (\\Psi_{ji}^{\\alpha})^*(E^{\\alpha} - \\epsilon_{ji})\\Psi_{kl}^{\\alpha}(E^{\\alpha} - \\epsilon_{kl}) 
\\Psi_{ni}^{\\beta} (E^{\\beta} - \\epsilon_{ni}) (\\Psi_{km}^{\\beta})^* (E^{\\beta} - \\epsilon_{km}) \\Psi_{jl}^{\\gamma} (E^{\\gamma} - \\epsilon_{jl}) (\\Psi_{nm}^{\\gamma})^* (E^{\\gamma} - \\epsilon_{nm}) \\nonumber \n\\end{eqnarray}\n\\noindent\nwhere the additional summation label $\\gamma$ is over the intermediate exciton state. Note that here the screened Coulomb interaction only appears implicitly through the exciton wavefunctions and energies. \n\nAdditionally, there are self-energy contributions from the repulsive electron-electron and hole-hole interactions. These come from, e.g., diagrams in Fig. \\ref{BIEXCfig} a) and d)\nbut with arrows in one of the fermion loops reversed. These corrections are treated to the leading order in the Coulomb interaction. There are eight distinct expressions contributing to the biexciton self-energy due to the inter-exciton electron-electron or hole-hole interactions, two from each of the 4 diagrams in Fig. \\ref{BIEXCfig} with the appropriate fermion arrow flips. 
The expressions are \n\\begin{eqnarray}\n & \\hbar\\Sigma_{\\alpha\\beta}^- = \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{ee},a} + \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{hh},a}+\\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{ee},b} + \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{hh},b}+\\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{ee},c} + \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{hh},c} +\\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{ee},d} + \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{hh},d} \\label{expression-}\\\\ \n & \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{ee},a} (E^{\\alpha} + E^{\\beta}) = -\\frac{4}{5} \\sum_{ijklmn} \\frac{\\Theta_{i}\\Theta_{l}\\Theta_{k}\\Theta_{j}\\Theta_{-m}\\Theta_{-n}}{(E^{\\alpha}+\\epsilon_{kn} - i\\delta)(E^{\\beta}-\\epsilon_{lm}+ i\\delta)(\\epsilon_{kj} - i\\delta)(\\epsilon_{li} + i\\delta)} \\times \\nonumber \\\\ \n& \\times (\\Psi_{im}^{\\beta})^*(E^{\\beta} - \\epsilon_{im})(\\Psi_{kn}^{\\alpha})^*(E^{\\alpha} - \\epsilon_{kn}) \\Psi_{lm}^{\\beta} (E^{\\beta} - \\epsilon_{lm}) \\Psi_{jn}^{\\alpha} (E^{\\alpha} - \\epsilon_{jn}) \\times W_{ijkl} \\label{expression4} \\\\\n& \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{hh},a} (E^{\\alpha} + E^{\\beta}) = -\\frac{4}{5} \\sum_{ijklmn} \\frac{\\Theta_{-i}\\Theta_{-l}\\Theta_{-k}\\Theta_{-j}\\Theta_{m}\\Theta_{n}}{(E^{\\alpha}+\\epsilon_{nk} - i\\delta)(E^{\\beta}-\\epsilon_{ml} + i\\delta)(\\epsilon_{kj} - i\\delta)(\\epsilon_{li} + i\\delta)} \\times \\nonumber \\\\ \n& \\times (\\Psi_{mi}^{\\beta})^*(E^{\\beta} - \\epsilon_{mi})(\\Psi_{nk}^{\\alpha})^*(E^{\\alpha} - \\epsilon_{nk}) \\Psi_{ml}^{\\beta} (E^{\\beta} - \\epsilon_{ml}) \\Psi_{nj}^{\\alpha} (E^{\\alpha} - \\epsilon_{nj}) \\times W_{ijkl} \\label{expression5} \\\\\n& \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{ee},b} (E^{\\alpha} + E^{\\beta}) = -\\frac{4}{5} \\sum_{ijklmn} \\frac{\\Theta_{i}\\Theta_{l}\\Theta_{k}\\Theta_{j}\\Theta_{-m}\\Theta_{-n}}{(E^{\\alpha}+\\epsilon_{im} - i\\delta)(E^{\\beta}-\\epsilon_{ln}+ i\\delta)(\\epsilon_{il} - 
i\\delta)(\\epsilon_{jk} + i\\delta)} \\times \\nonumber \\\\\n& \\times (\\Psi_{kn}^{\\beta})^*(E^{\\beta} - \\epsilon_{kn})(\\Psi_{jm}^{\\alpha})^*(E^{\\alpha} - \\epsilon_{jm}) \\Psi_{ln}^{\\beta} (E^{\\beta} - \\epsilon_{ln}) \\Psi_{im}^{\\alpha} (E^{\\alpha} - \\epsilon_{im}) \\times W_{ijkl} \\label{expression6} \\\\\n& \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{hh},b} (E^{\\alpha} + E^{\\beta}) = -\\frac{4}{5} \\sum_{ijklmn} \\frac{\\Theta_{-i}\\Theta_{-l}\\Theta_{-k}\\Theta_{-j}\\Theta_{m}\\Theta_{n}}{(E^{\\alpha}-\\epsilon_{mi} - i\\delta)(E^{\\beta}-\\epsilon_{nl} + i\\delta)(\\epsilon_{il} - i\\delta)(\\epsilon_{jk} + i\\delta)} \\times \\nonumber \\\\ \n& \\times (\\Psi_{nk}^{\\beta})^*(E^{\\beta} - \\epsilon_{nk})(\\Psi_{mj}^{\\alpha})^*(E^{\\alpha} - \\epsilon_{mj}) \\Psi_{nl}^{\\beta} (E^{\\beta} - \\epsilon_{nl}) \\Psi_{mi}^{\\alpha} (E^{\\alpha} - \\epsilon_{mi}) \\times W_{ijkl} \\label{expression7} \\\\\n& \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{ee},c} (E^{\\alpha} + E^{\\beta}) = -\\frac{4}{5} \\sum_{ijklmn} \\frac{\\Theta_{i}\\Theta_{l}\\Theta_{k}\\Theta_{j}\\Theta_{-m}\\Theta_{-n}}{(E^{\\alpha}+\\epsilon_{im} - i\\delta)(E^{\\beta}-\\epsilon_{jm}+ i\\delta)(\\epsilon_{il} - i\\delta)(\\epsilon_{jk} + i\\delta)} \\times \\nonumber \\\\\n& \\times (\\Psi_{kn}^{\\beta})^*(E^{\\beta} - \\epsilon_{kn})(\\Psi_{ln}^{\\alpha})^*(E^{\\alpha} - \\epsilon_{ln}) \\Psi_{jm}^{\\beta} (E^{\\beta} - \\epsilon_{jm}) \\Psi_{im}^{\\alpha} (E^{\\alpha} - \\epsilon_{im}) \\times W_{ijkl} \\label{expression8} \\\\\n& \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{hh},c} (E^{\\alpha} + E^{\\beta}) = -\\frac{4}{5} \\sum_{ijklmn} \\frac{\\Theta_{-i}\\Theta_{-l}\\Theta_{-k}\\Theta_{-j}\\Theta_{m}\\Theta_{n}}{(E^{\\alpha}-\\epsilon_{mi} - i\\delta)(E^{\\beta}+\\epsilon_{mj}+ i\\delta)(\\epsilon_{il} - i\\delta)(\\epsilon_{jk} + i\\delta)} \\times \\nonumber \\\\\n& \\times (\\Psi_{nk}^{\\beta})^*(E^{\\beta} - \\epsilon_{nk})(\\Psi_{nl}^{\\alpha})^*(E^{\\alpha} - \\epsilon_{nl}) 
\\Psi_{mj}^{\\beta} (E^{\\beta} - \\epsilon_{mj}) \\Psi_{mi}^{\\alpha} (E^{\\alpha} - \\epsilon_{mi}) \\times W_{ijkl} \\label{expression9} \\\\\n& \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{ee},d} (E^{\\alpha} + E^{\\beta}) = -\\frac{4}{5} \\sum_{ijklmn} \\frac{\\Theta_{i}\\Theta_{l}\\Theta_{k}\\Theta_{j}\\Theta_{-m}\\Theta_{-n}}{(E^{\\alpha}-\\epsilon_{im} - i\\delta)(E^{\\beta}+\\epsilon_{lm}+ i\\delta)(\\epsilon_{il} + i\\delta)(\\epsilon_{kj} - i\\delta)} \\times \\nonumber \\\\\n& \\times (\\Psi_{kn}^{\\beta})^*(E^{\\beta} - \\epsilon_{kn})(\\Psi_{jn}^{\\alpha})^*(E^{\\alpha} - \\epsilon_{jn}) \\Psi_{lm}^{\\beta} (E^{\\beta} - \\epsilon_{lm}) \\Psi_{im}^{\\alpha} (E^{\\alpha} - \\epsilon_{im}) \\times W_{ijkl} \\label{expression10} \\\\\n& \\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{hh},d} (E^{\\alpha} + E^{\\beta}) = -\\frac{4}{5} \\sum_{ijklmn} \\frac{\\Theta_{-i}\\Theta_{-l}\\Theta_{-k}\\Theta_{-j}\\Theta_{m}\\Theta_{n}}{(E^{\\alpha}+\\epsilon_{mi} - i\\delta)(E^{\\beta}-\\epsilon_{ml}+ i\\delta)(\\epsilon_{il} + i\\delta)(\\epsilon_{kj} - i\\delta)} \\times \\nonumber \\\\\n& \\times (\\Psi_{nk}^{\\beta})^*(E^{\\beta} - \\epsilon_{nk})(\\Psi_{nj}^{\\alpha})^*(E^{\\alpha} - \\epsilon_{nj}) \\Psi_{ml}^{\\beta} (E^{\\beta} - \\epsilon_{ml}) \\Psi_{mi}^{\\alpha} (E^{\\alpha} - \\epsilon_{mi}) \\times W_{ijkl}, \\label{expression11}\n\\end{eqnarray}\n{ where, for example, $\\hbar\\Sigma_{\\alpha\\beta}^{\\textrm{ee},a}$ stands for a self-energy contribution from a diagram representing electron-electron interactions obtained by flipping the arrows in one of the fermion loops in Fig. \\ref{BIEXCfig}-\\textit{a)}. Other symbols hold the same meaning as in Eq. \\eqref{expression}. Note that here, Coulomb matrix elements $W_{ijkl}$ have either all-electron or all-hole indices. 
Thus, the complete expression for the biexciton self-energy is\n\\begin{eqnarray}\n\\hbar\\Sigma_{\\alpha\\beta} = \\hbar\\Sigma_{\\alpha\\beta}^+ + \\hbar\\Sigma_{\\alpha\\beta}^-\n\\end{eqnarray}\nwhere $\\hbar\\Sigma_{\\alpha\\beta}^+$ is defined in \\eqref{expression+} and $\\hbar\\Sigma_{\\alpha\\beta}^-$ is defined in \\eqref{expression-}. } \n\n\n\n\n\\section{Atomistic Models}\\label{CompDetail}\n\nCalculations of biexciton self-energies have been performed for the chiral SWCNTs (6,5), (6,2) and (10,5), which are shown in Fig. \\ref{CNTall}. DFT with the HSE06 functional, as implemented in VASP (Vienna \\textit{ab-initio} simulation package) \\cite{VASP}, was used to optimize the geometries and obtain the KS orbitals $\\phi_{i\\sigma}(\\textbf{x})$ and energies $\\varepsilon_i$. The momentum cutoff is defined as\n\\begin{equation}\n \\frac{\\hbar^2 \\textbf{k}^2}{2m} \\le E_{max} , \\;\\;\\;\n \\textbf{k} = 2\\pi \\bigg( \\frac{n_x}{L_x}, \\frac{n_y}{L_y}, \\frac{n_z}{L_z} \\bigg), \\;\\;\\; n_x, n_y, n_z = 0, \\pm 1, \\pm 2, \\cdots \\label{cutoff}\n\\end{equation}\nwhere $m$ is the electron mass; we used $E_{max} = 300~eV$. The number of orbitals used in the calculations was determined by the condition \n$E_{i_{max}} - E_{HO} \\simeq E_{LU} - E_{i_{min}} \\ge 3.5~eV$, where $i_{max}$\/$i_{min}$ label the highest\/lowest orbital. \n\nPeriodic boundary conditions were used in the DFT simulations. In the axial direction the simulation cell length was chosen to accommodate \nan integer number of unit cells, while in the other two directions the SWCNT surfaces were separated by about 1 $nm$ of vacuum in order to avoid spurious interactions \nbetween their periodic images. For (6,2) and (10,5) the simulations have been performed including three unit cells. The rationale for this is that \nfuture work will involve SWCNTs with functionalized surfaces, so including several unit cells will allow us to keep the dopant concentration reasonably low. 
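As a concrete illustration of the cutoff condition in Eq. \eqref{cutoff}, the following sketch counts the plane waves admitted for a given simulation cell. This is a minimal illustration, not part of the VASP workflow: the cell dimensions in the usage note are hypothetical, and the kinetic prefactor $\hbar^2/2m_e \approx 3.81$ eV$\cdot$\AA$^2$ is assumed.

```python
import math

HBAR2_OVER_2ME = 3.8099821  # hbar^2 / (2 m_e) in eV * Angstrom^2

def count_plane_waves(e_max, cell):
    """Count reciprocal vectors k = 2*pi*(n_x/L_x, n_y/L_y, n_z/L_z)
    admitted by the kinetic-energy cutoff hbar^2 k^2 / (2m) <= E_max
    (Eq. (cutoff)); e_max in eV, cell lengths in Angstrom."""
    lx, ly, lz = cell
    k_max = math.sqrt(e_max / HBAR2_OVER_2ME)  # 1/Angstrom
    # largest |n_i| that can still satisfy the cutoff along each axis
    nx_max = int(k_max * lx / (2.0 * math.pi)) + 1
    ny_max = int(k_max * ly / (2.0 * math.pi)) + 1
    nz_max = int(k_max * lz / (2.0 * math.pi)) + 1
    count = 0
    for nx in range(-nx_max, nx_max + 1):
        for ny in range(-ny_max, ny_max + 1):
            for nz in range(-nz_max, nz_max + 1):
                k2 = (2.0 * math.pi) ** 2 * (
                    (nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2
                )
                if HBAR2_OVER_2ME * k2 <= e_max:
                    count += 1
    return count
```

For a hypothetical 10 \AA\ cubic cell, `count_plane_waves(300.0, (10.0, 10.0, 10.0))` enumerates the basis; the count grows with the cell volume and with $E_{max}$, which is why the cutoff controls the basis-set size.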
\nAlso, including three unit cells instead of one substitutes for the Brillouin zone sampling, so here we perform calculations at the $\\Gamma$ point only. \nPreviously, it has been shown that the variations in the single-particle energies over the Brillouin zone are reasonably small ($\\approx 10\\%$) when three \nunit cells are included in the simulation instead of one \\cite{KryjevskiMEG}. For the (6,5) SWCNT only one unit cell was included due to the high computational cost. \nHowever, the absorption spectrum for (6,5) was reproduced with the same accuracy as for the other two SWCNTs \\cite{KryjevskiMEG}. \n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{CNT_all.pdf} \n \\caption{ Atomistic models of the three chiral SWCNTs: a) - (6,2), b) - (6,5) and c) - (10,5) } \\label{CNTall} \n\\end{figure}\n\n\n\n\\section{Results and Discussion} \\label{Results} \nTable \\ref{table1} shows the results for the three SWCNTs. In all cases the resulting $\\Sigma_{\\alpha\\beta}({E}_{\\alpha} + {E}_{\\beta})$ come out to be real, with vanishingly small imaginary parts. This is as expected, since in our approximation we only include the possibility of the biexciton state decaying into an exciton and an electron-hole pair, which is suppressed due to energy conservation. \nThe biexciton self-energy correction to the biexciton gap was obtained by evaluating Eqs. \\eqref{expression} and \\eqref{expression2} for a biexciton state made of two lowest-energy excitons, \\textit{i.e.}, \n$\\hbar\\Sigma(E = 2 E_{\\rm gap}^{\\rm exc}),~E_{\\rm gap}^{\\rm exc}= {E}_1$. For reference, the quasiparticle gap $E_{\\rm gap}=\\varepsilon_{LU}-\\varepsilon_{HO}$ obtained from the DFT simulation is included. Figure \\ref{plots1} shows plots of the density of states (DOS) for the excitons and biexcitons, including the shift due to exciton-exciton interactions. 
{The DOS was calculated as ${\\rm DOS}(E) = \\sum_i \\delta( E - E_i)$, where the delta function was approximated by a Gaussian with a width corresponding to room temperature. For the exciton DOS, the $E_i$ are the solutions $E_{\\alpha}$ of the Bethe-Salpeter equation (Eq. \\eqref{BSE}). For the biexciton DOS, $E_i = E_{\\alpha} + E_{\\beta}$ for a pair of non-interacting excitons and $E_i = E_{\\alpha\\beta}$ (Eq. \\eqref{self_energy_corr}) for the self-energy corrected pair of excitons. }\n\\begin{table}[H]\n \\centering\n {\\begin{tabular}{|c|c|c|c| } \\hline\n Chirality & \\textbf{$E_{gap}$} & $E_{\\rm gap}^{\\rm exc}$ & $\\Sigma(2 E_{\\rm gap}^{\\rm exc})$ \\\\ \\hline\n$(6,2)$ & 1.33 & 0.98 & -0.045 \\\\ [1ex] \\hline\n$(6,5)$ & 1.22 & 1.09 & -0.041 \\\\ [1ex] \\hline\n$(10,5)$ & 0.91 & 0.835 & -0.036 \\\\ [1ex]\\hline \n \\end{tabular}}\n \\caption{ $E_{gap}$ is the HO-LU gap and $E_{\\rm gap}^{\\rm exc}={E}_1$ is the lowest exciton energy obtained from the BSE. $\\Sigma(2 E_{\\rm gap}^{\\rm exc})$ is the biexciton self-energy for the lowest-energy biexciton. All energies are in $eV$.} \n \\label{table1}\n\\end{table}\n\n\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{multiplot.pdf} \n \\caption{ Exciton and biexciton DOS for the three nanotubes in units of the exciton gap $E_{\\rm gap}^{\\rm exc}$. The blue line is the exciton DOS, the red line is the biexciton DOS without self-energy corrections, and the green line is the biexciton DOS with self-energy corrections. } \\label{plots1} \n\\end{figure}\n\\noindent\nThe results show a small redshift in the biexciton density of states for all the nanostructures under consideration. As a percentage of the non-interacting gap, these shifts are: -4.59\\% for (6,2), -4.47\\% for (6,5) and -4.31\\% for the (10,5) SWCNT. This is as expected, since dipole-dipole electrostatic interactions are attractive. 
\n\n\nIn our approach it has been possible to sum perturbative contributions from the attractive interactions between the excitons in the biexciton state. However, the repulsive \ncontributions are only included to the leading order. A calculation where both types of interactions are included to the leading order, {\\it i.e.}, only including the contributions from \nthe four diagrams in Fig. \\ref{BIEXCfig}, has been performed. In this case the corrections to the biexciton gap as a fraction of \nthe non-interacting gap are: -1.84\\% for (6,2), -1.93\\% for (6,5) and -2.27\\% for the (10,5) SWCNT. This suggests that a better treatment of the repulsive interactions by including \nhigher-order perturbative corrections, which is prohibitively expensive, would only reduce the negative shift in the biexciton gap. \n \n\n\n\n\\section{Conclusions and Outlook} \\label{Conclusion} \nWe have developed a first-principles DFT-based MBPT method to compute biexciton state energies in a semiconductor nanostructure. \nFor that, we have computed the self-energy of the biexciton state including (residual) electrostatic exciton-exciton interactions. In \nthis work we have only included spin-zero excitons. These biexciton energies are relevant for, {\\it e.g.}, the accurate determination \nof the MEG threshold in a nanostructure. \n\n\nTo the first order in the Coulomb interaction there are four distinct Feynman diagrams contributing to the biexciton self-energy, shown in Fig. \\ref{BIEXCfig}. \nHowever, it has been possible to perform a partial resummation of the perturbative corrections, such as those included in the exciton-electron coupling term \nin Eq. \\ref{EXCFERMSF}. As a result, to the leading order in the electron-exciton coupling, just two distinct contributions that include an intermediate exciton \nstate appear, as shown in Fig. \\ref{BIEXCfig2}. 
Additionally, there are repulsive interactions between the like charges in the two excitons (electron-electron and hole-hole), \nwhich have been included to the leading order, resulting in eight distinct contributions.\n \nCalculations have been performed for the chiral SWCNTs (6,2), (6,5) and (10,5). \nWe have found small negative corrections to the biexciton state gaps: -0.045 $eV$ in (6,2), which is 4.59\\% of the non-interacting biexciton gap; -0.041 $eV$ in \n(6,5), which is 4.47\\% of the non-interacting gap and -0.039 $eV$ in (10,5), which is 4.31\\%.\nThe small magnitude of the energy shifts confirms the validity of the perturbative approach for the biexciton states, at least for the SWCNTs,\nand justifies the simplified treatment of the biexcitons in SWCNTs as a pair of non-interacting excitons.\n\nAn important extension of the approach left to future work is to include the effects of triplet excitons, which would allow computing the energy of a biexciton \nmade of a pair of triplets in the overall singlet state. This is needed for a precise determination of the MEG threshold in the SF channel. \n\nAnother extension of this work would be to decorate both pairs of electron-hole lines in the diagrams in Fig. \\ref{BIEXCfig2} with interactions (zigzag lines) and \nperform a resummation. This would result in processes where two intermediate exciton states appear. But the small magnitude of the first-order biexciton \nenergy corrections suggests that this modification of the technique is not likely to change the results significantly, at least for the SWCNTs. \n\nImproving the overall precision of the calculations performed here can be done by the use of $G_0W_0$ calculations for the single-particle energies instead of using the HSE06 functional. \nThis is expected to introduce an overall blueshift in both the exciton and biexciton DOS, but not to alter our overall qualitative conclusions. 
Another step would be \ninclusion of the full, dynamically screened polarization function $\\Pi(\\omega,\\textbf{k},\\textbf{p})$ instead of the static, diagonal approximation $\\Pi(0,-\\textbf{k},\\textbf{k})$. This would greatly \nincrease computational cost, but it is not expected to alter the conclusions reached in this work. \n\n\n\\section{Acknowledgements} \n Authors acknowledge financial support from the NSF grant CHE-1413614. \nThe authors acknowledge the use of computational resources at\nthe Center for Computationally Assisted Science and\nTechnology (CCAST) at North Dakota State University. The diagrams shown in Figs. 1-5 have been created with JaxoDraw \\cite{Jaxodraw}. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nConvolutional Neural Networks (CNN) have become a widely used architecture since they have proven to be an accurate universal function estimator in many domains, such as Image Classification and Natural Language Processing \\cite{CNN-lecun}. As a particular type of Neural Network (NN), a CNN consists of a collection of parameters, which represent weights and biases, that are tuned depending on the problem that is faced. These parameters, put together after long training sessions, have proven to be able to solve complex problems even outperforming humans \\cite{GO-deepmind}. The tuning is performed automatically and the values that the parameters take are outside of the human comprehension, turning the models into black boxes. \n\nThe use of CNNs in custom applications demands high computational power, since there is no physical limitation to the complexity of the designed models. Simple augmentation of the size of the collection of trainable parameters has proven to work as long as new data keeps being fed to the model \\cite{hestness2017deep}. 
Furthermore, following the exponential evolution of the availability of computational power, the tendency to solve complex tasks with complex models is increasing \\cite{anthes2017artificial}. Language models are good examples of this, e.g. the description by Brown et al. of an NN with 175 billion parameters \\cite{gpt3}.\n\nAccording to this tendency, the most common deployment platforms used historically for DL-based software are servers that enable cloud computing. For the purpose of making use of a trained NN, a data pipeline is developed manually so that it links the pre-processing steps, computation of results, post-processing and eventually the display of the predictions. When all these steps are implemented in a server\/cloud, an internet connection is required to use the model. However, with the growth of the mobile applications market, approaches to the integration of NNs in lightweight devices are becoming popular given their ability to provide great services to the user. Within this setting, DL-based software consists of software-reliant systems that \\textbf{include data and components} that implement algorithms mimicking learning and problem solving \\cite{arpteg2018software}.\n\nIn this work, we aim to perform an exploratory and descriptive analysis on: {\\it (i)} what are the current challenges regarding the deployment of DL-based software as mobile applications; and {\\it (ii)} how can complexity be controlled while keeping a good level of accuracy. The subject is analysed by means of a practical study in which we perform all the required cycles from data acquisition, DL modelling, classification and application, and operation in a real context. In this way, we study the relation between DL-based software development and the accuracy obtained when operating with it in the production settings \\cite{9238323}. 
Ensuring the generalization of the DL-based software in the operation environment is key and can be especially concerning in safety-critical applications \\cite{martinez-fernandez}.\n\nOur work is performed over the German Traffic Sign Recognition Benchmark dataset \\cite{Benchmarking-tsr}, created in order to standardize the traffic sign recognition literature. In the DL literature, this faced task is an instance of the Image Classification problem \\cite{doi:10.1080\/01431160600746456}. Each image in the data is from one and only one class so the models will be trained using classification accuracy as the success criteria. CNNs are the standard approach to Image Classification problems for their ability to fit in image processing. The CNN architecture has already shown that can achieve very good results in the literature but also for this specific dataset \\cite{li2014medical} \\cite{Benchmarking-tsr}.\n\nThis work has four main contributions: \n\\begin{itemize}\n \\item Show experiences of the end-to-end DL lifecycle from a practical study on traffic sign recognition.\n \\item Discuss the challenges encountered in the deployment of CNNs.\n \\item Discuss the trade-offs between accuracy and complexity in environments with limited computational power (e.g., mobile applications).\n \\item An open science package of the developed DL-based software freely available on GitHub, following the \"Cookiecutter Data Science\" project structure \\footnote{https:\/\/drivendata.github.io\/cookiecutter-data-science\/} to foster correctness and reproducibility.\n\\end{itemize} \n\nThe document is structured as follows. In Section 2 we describe Related Work on both the challenges of integrating DL models in mobile applications and the focuses on optimizing the performance trade-off. In Section 3 we describe the Study Goal and Research Questions. In Section 4 we describe the study design and the end-to-end DL software lifecycle. 
In Section 5 we show the results of practical study that allow to answer the Research Questions. In Section 6 we discuss what is strictly analysed in our study and what specific parts of the whole topic are addressed.\nTo end with, in Section 7 we draw conclusions from the research performed and motivate future work.\n\n\\section{Related Work}\nIn this section, we respectively describe the challenges of deploying CNNs in DL-based software, and related work on the optimization of the performance trade-off in systems with limited computation power.\n\n\\subsection{Challenges in deployment}\\label{sec:sota-challenges}\nRegarding the recent interests in the deployment of DL-based software, Chen et al. state that mobile applications come with the strongest popularity trend over other platforms like servers\/clouds or browsers for integrating DL models \\cite{deployment-DL}. They also show that this increasing trend in the deployment of DL models in mobile applications comes with an inverse proportional relation with the knowledge that the community has about the subject. This way, they demonstrate that the deployment of DL-based software becomes the most challenging part in the life cycle of DL, and that this effect is even more critical when the target are mobile devices.\n\nWhen analysing the challenges of deployment of DL-based software, several studies focus on the quality attributes that are more relevant to this type of software. The importance of each quality attribute varies depending on the application but it is always the case that DL-based software implies taking care of measures that might not be considered when building traditional software systems. Remarkably, DL-based software effectiveness strongly depends on the structure of the data.\n\nIndeed, Pons and Ozkaya clearly state that, compared to software systems that do not integrate DL-based components, the deployment of DL-based software increases the risk in many quality attributes \\cite{quality-AI}. 
They identify Data centricity as the cause that affects the robustness, security, privacy and sustainability of these systems. Additionally, they analyse the methodology for architecting in Artificial Intelligence (AI) engineering. With the focus put on the software-data dependency, Ozkaya distinguishes between the development of software systems and DL-based software in the processes of building and sustaining \\cite{engineering-AI}. Also, Lwakatare et al. provide a list of challenges and solutions regarding the life cycle of DL-based software within industry settings. The challenges and solutions are synthesized into four quality attributes: adaptability (e.g. unstable data dependencies and quality problems), scalability (e.g. balancing efficiency and effectiveness in DL workflow), safety (e.g. explainability of DL models) and privacy (e.g. difficult data exploration in private datasets) \\cite{large-scale-AI}.\n\nThe identified challenges in the above related work that are objects of study in our work are: \n\\begin{itemize}\n \\item (C1) \\textbf{Frameworks in an early stage} of their development to support DL-based mobile applications.\n \\item (C2) \\textbf{Software--Data dependency}.\n \\item (C3) \\textbf{Explainability} of the models.\n \\item (C4) \\textbf{Sustainability} of the models when deployed.\n\\end{itemize}\n \n\\subsection{In the quest for optimizing the performance}\nIn the pursuit of optimizing the performance of DL-based software, there exist several key points that affect the overall system implementation. Important bottlenecks that involve limitations to the efficiency are found in both the modelling stage and in the usage of different frameworks to deploy the models. On the one hand, recent contributions \\cite{mobile-nets} show an optimized CNN architecture for enabling its usage in devices with less energy capacity. This makes it possible to reduce the model's complexity (e.g. 
storage weight, computing power) while keeping the desired accuracy, hence optimizing the performance. In this case, the increase in the performance is due to the increase in the efficiency. \nOn the other hand, the availability of different frameworks supporting the implementation of DL-based software in all its stages has a strong influence on the accuracy of the integrated DL models, as shown in \\cite{empirical-frameworks}.\n\nCompared to the aforementioned studies, in our work we focus on analysing the challenges when deploying a specific type of NN, namely CNN, into mobile devices. Furthermore, we do so by means of a practical study. We relate the challenges found with the analysis of the performance, defined as the trade-off between two quality attributes: accuracy and complexity. We provide a comparative analysis between different configurations for each of the models in order to gather evidence of the implications of the accuracy and complexity. Moreover, the sustainability of the models once deployed is also studied to enhance the capabilities of the applications during the DL-based software lifecycle. Our work is developed under the PyTorch framework and Android operating system.\n\n\\section{Research Questions}\nIn this Section, we define the Research Questions (RQs), following the GQM guidelines \\cite{GQM}.\n\n\\begin{itemize}\n\\item RQ1 - What are the challenges of the creation and integration of complex CNNs in DL-based mobile applications?\n\n\\item RQ2 - What criteria are needed to reason about the trade-off between accuracy and complexity of DL models in mobile applications?\n\n\\end{itemize}\n\nThe motivation of RQ1 within this project is to study the \\textbf{challenges of the creation, training and integration of a CNN in a mobile application}, while identifying, exploring and documenting as many challenges as possible. 
The obtained conclusions aim to be generic for anyone whose desire is deploying complex models in quotidian software applications.\n\nFurthermore, RQ2 motivates a \\textbf{comparative analysis between different solutions to the problem of optimizing the performance trade-off}, since balancing the accuracy of the models and their complexity can be studied from different viewpoints (e.g. design of optimized operations or limitation of architecture's complexity).\n\n\\section{Study design}\\label{study-design}\nIn this section we describe the proposed pipeline for achieving a functional implementation of the traffic sign recognition mobile application. \n\nThere are four main cycles in the DL-based software lifecycle \\cite{9238323}. First, the \\textit{Data Management} which consists of the collection and processing of raw data. Second, the \\textit{DL Modelling} which consists of adjusting different models to obtain the best-performing possible solution. Third, the \\textit{Development} of the environment (mobile application in our work) and the integration of the DL model in it. Fourth, the \\textit{DL-based System Operation} with the application that integrates the DL model. This last phase enables sustainability of the model and experiments the performance trade-off within the production settings. \n\nA global overview of the study design and DL-based software lifecycle is shown in Figure \\ref{global_overview}. The goals required over the system are shown in the following list:\n\\begin{enumerate}\n \\item Use of the device integrated camera\n \\item Real-time response for a call to the model\n \\item High accuracy in traffic sign recognition\n \\item Data augmentation by the community\n \\item Integration of updated models \n\\end{enumerate}\nWe implement these goals through iterations along the aforementioned DL-based software lifecycle phases. We also provide a solution for the \\textit{Send Data} operation in Figure \\ref{global_overview}. 
This allows the user to collect the generated data at run-time during the DL-based software system operation; the data is then manually annotated and used for re-training the models on the performant computer. Then, these updated models can be integrated again for inference in the mobile application.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=\\textwidth]{static\/GlobalProject.png}\n \\caption{Software development lifecycle in our DL-based software}\n \\label{global_overview}\n\\end{figure*}\n\n\\subsection{Platforms and technologies used}\nFor joining the DL modelling and the development of DL-based software, there exist recent DL frameworks, all of them under continuous development. These frameworks offer up to two different approaches to model transferring operations. On the one hand, \\emph{model conversion}: CoreML for iOS software and TensorFlow Lite for Android arguably are the most famous examples that offer this capability. On the other hand, \\emph{model export} using the Open Neural Network Exchange (.ONNX) file format, which was presented in 2017 by Microsoft and Facebook. This file format allows storing serialized versions of trained networks in compressed files. \n\nSince the .ONNX file format is widely used in the current state of the art, it is defined as a key part of the proposed project pipeline for the practical study.\n\nWith all this, the focus is put on finding two different platforms that allow development-sided and operation-sided implementation of both the DL component and the application that integrates it. The development-sided platform has to allow training and writing of a CNN into a .ONNX file. The operation-sided one has to be capable of reading and using the model and also of supporting the use of the camera in a mobile device. In this work, the development-sided platform is set to be PyTorch and the operation-sided platform is set to be Unity. 
We make use of the Unity Barracuda libraries for connecting the two platforms. Also, as observed in Figure \\ref{global_overview} the models built on PyTorch are trained on the Kaggle GPU, which is available freely for 40 hours a week.\n \nMoreover, it is also seen in Figure \\ref{global_overview} that only two of the five models that are trained are later deployed to the application. For an appropriate analysis of the performance of these two models when deployed, we built two applications that only vary in the integrated models.\n \n\\subsection{The Dataset}\nThe GTSRB traffic sign data consists of 144,769 labelled images of 2,416 traffic sign instances of 70 classes belonging to a portion of the real traffic signs that can be found in the roads of Germany. However, after following several criteria for guaranteeing the quality of data, explained in \\cite{Benchmarking-tsr}, the final dataset consists of a collection of 51,840 images from 43 classes fulfilling: {\\it (i)} images are of sizes from 15\u00d715 to 222\u00d7193 pixels, stored in PPM format; {\\it (ii)} it contains the definitions of the region of interest with a separating margin of the size equal to the 10\\% of the pixels; {\\it (iii)} it records the temporal order of the creation of the images; {\\it (iv)} it provides post-processed features like HOG descriptors, Haar-like features and Color Histograms. No usage of the post-processed features is made for the purpose of the deployment of a full-stack convolutional-based architecture for solving the image classification problem. 
\n\\subsubsection{The Pre-processing}\nWe applied several pre-processing steps that can be considered critical in the context of image processing model training: {\\it (i)} the images are resized to a fixed and squared size; {\\it (ii)} a center crop is applied to the images; {\\it (iii)} The images are encoded as tensors of the form \\textit{[B, C, H, W]}, where \\textit{B} is the batch size, \\textit{C} is the number of channels, \\textit{H} is height and \\textit{W} is width. The values of these tensors are normalized into the 0-1 range; {\\it (iv)} the tensors are standardized in the \\textit{H}x\\textit{W} channels with the pre-computed means and standard deviations of the 3 RGB channels of the images in the dataset.\n\n\\subsubsection{The Augmentation}\nFor the purpose of supporting a comparative analysis of the performance of different models, we implemented two variants for applying the technique of data augmentation to the pre-processed dataset: {\\it (i)} rotation in 90\u00ba degrees plus the vertical and horizontal flips to the images; {\\it (ii)} addition of manually tagged images by the users of the application during the system operation phase. \n\n\\subsection{The DL\/CNN models}\nThe models that are applied for solving the traffic sign recognition task are the standard CNN, the MobileNet v2 (MBv2) \\cite{mobile-nets} and the ResNet34 (RN34) \\cite{resnet}. For the first one, we test and evaluate different configurations in order to choose an optimal one that balances accuracy and complexity. For the last two, we test and compare two different ways of training the model: pre-training with feature extraction and fine-tuning, and full-training from scratch. The CNN works with standard convolutional blocks that consist of convolutions followed by batch normalization, an activation function, a max-pooling layer and a dropout layer. 
It can learn representations of images from scratch, which are then passed to a classifier built on top of the convolutional blocks. The MBv2 is similar but works with depthwise separable convolutions. This is a form of factorized convolution that splits a standard convolution into a depthwise convolution and a 1\u00d71 convolution called a pointwise convolution. The depthwise convolution applies a single filter to each input channel, and the pointwise convolution then applies a 1\u00d71 convolution to combine the outputs of the depthwise convolution \\cite{mobile-nets}. Finally, the RN34 is a more complex CNN that has been successfully applied in the image classification domain and has become a benchmark model for related tasks in the DL literature \\cite{resnet}. Both the MBv2 and the RN34 used in the practical study are downloaded from PyTorch and are pre-trained on ImageNet data \\cite{deng2009imagenet}. \n\nThe results for the CNN models are shown in Table \\ref{tab:cnn-results}, where we compare the number of convolutional layers, the sizes of the two implemented fully connected layers (FC1 and FC2), the total number of parameters, and the accuracy achieved on the test set after 10 epochs of training. Furthermore, we record whether the augmented dataset is used, the fixed image size chosen and the model weight in megabytes (MB). Note that the input size of FC1 is equal to the number of output features of the convolutional blocks. Also note that the output size of FC1 is the input size of FC2, and the output size of FC2 is 43, which is the number of unique classes of traffic signs in the data. 
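As a side note on the depthwise-separable factorization described above, the parameter saving can be computed directly. A small sketch with illustrative channel sizes (not the actual MBv2 configuration):

```python
# Parameter counts (ignoring biases) for a K x K convolution mapping
# C_in channels to C_out channels, as in the MobileNet factorization.
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in   # one K x K filter per input channel
    pointwise = c_in * c_out   # 1x1 convolution combining the channels
    return depthwise + pointwise

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)    # 73728 parameters
sep = separable_conv_params(k, c_in, c_out)   # 8768 parameters
# The reduction factor is 1/c_out + 1/k^2, as derived in the MobileNet paper.
assert abs(sep / std - (1 / c_out + 1 / k**2)) < 1e-12
```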
The resulting optimal configuration from Table \\ref{tab:cnn-results} is promising because of its low complexity, small storage weight and strong performance.\n\n\\begin{table*}[t]\n\\begin{center}\n\\caption{\\label{tab:cnn-results}Performance analysis of the different CNN configurations.}\n\\scalebox{1}{%\n \\begin{tabular}{|c|c|r|c|c|c|r|r|r|}\n \\hline\n \\textbf{Data Aug.} & \\textbf{Image Size} & \\textbf{Batch Size} & \\textbf{Conv. Layers} & \\textbf{FC1 input} & \\textbf{FC2 input} \n & \\textbf{Num. Parameters} & \\textbf{Test accuracy} & \\textbf{Model Weight} \\\\\n \\hline\n NO & 256x256 & 256 & 5 & 8x8x256 & 4x4x64 & 17.21M & 88.02\\% & 65.68MB \\\\\n NO & 512x512 & 64 & 6 & 8x8x512 & 4x4x128 & 67.13M & 95.00\\% & 259.70MB \\\\\n YES & 256x256 & 64 & 5 & 8x8x256 & 4x4x64 & 17.21M & 89.31\\% & 65.68MB \\\\\n YES & 512x512 & 64 & 6 & 8x8x512 & 4x4x128 & 67.13M & \\textbf{95.52\\%} & 259.70MB \\\\\n NO & 256x256 & 64 & 6 & 4x4x512 & 2x2x128 & \\textbf{5.10M} & \\textbf{95.00\\%} & \\textbf{22.11MB} \\\\\n \\hline\n \\end{tabular}}\n\\end{center}\n\\end{table*}\n\nAs can be seen in Table \\ref{tab:cnn-results}, no increase in performance appears when the augmented set of data is used. This might be because training converges very quickly, thanks to the non-saturating ReLU function in combination with the convolutional operations and the use of minibatches. More importantly, it can be seen that the highest classification accuracy can be achieved with the lightest of the tested configurations.\n\nRegarding the MBv2, the experiments do not change its architecture but only its training methodology. Three techniques are distinguished: Feature Extraction (FE), which uses a pre-trained version of the model and only tunes the classifier built on top of it, training its weights and biases. Fine-Tuning (FT), which also uses the pre-trained version of the model but tunes the weights of all its layers. 
And Training from Scratch (TfS), which loads the architecture with a random initialization and fully trains it. The results of the training stage with the MBv2 for 10 epochs are presented in Table \\ref{tab:mobilenets-test}.\n\n\\begin{table}[h!]\n\\centering\n\\caption{Performance analysis of the Mobile Net v2}\n\\label{tab:mobilenets-test}\n\\scalebox{1}{%\n \\begin{tabular}{|c|c|r|}\n \\hline\n \\textbf{Data Aug.} & \\textbf{Training method} & \n \\textbf{Test accuracy} \\\\\n \\hline\n NO & FE & 4.00\\% \\\\\n NO & FT & 4.00\\% \\\\\n NO & TfS & 4.00\\% \\\\\n YES & FE & 68.00\\% \\\\\n YES & FT & \\textbf{95.20\\%} \\\\\n YES & TfS & 92.50\\% \\\\\n \\hline\n \\end{tabular}\n}\n\\end{table}\n\nAn important drawback of the MBv2 is revealed in Table \\ref{tab:mobilenets-test}, where it can be seen that without the augmented set of data the network is not capable of learning the traffic sign recognition task. However, when the augmented set of data is used, the same performance as the best CNN is achieved with a much smaller number of parameters, showing that this architecture performs very efficiently. \n\nFinally, the results for the benchmark RN34 are shown in Table \\ref{tab:resnet-test}. Training experiments follow the same logic as with the MBv2 but without trying the augmented set of data, since it quickly becomes clear that it is not needed for this architecture.\n\n\\begin{table}[h!]\n\\centering\n\\caption{Performance analysis of the RN34}\\label{tab:resnet-test}\n\\scalebox{1}{%\n \\begin{tabular}{|c|c|r|}\n \\hline\n \\textbf{Data Aug.} & \\textbf{Training method} & \\textbf{Test accuracy} \\\\\n \\hline\n NO & FE & 20.00\\% \\\\\n NO & FT & \\textbf{97.80\\%} \\\\\n NO & TfS & 94.10\\% \\\\\n \\hline\n \\end{tabular}\n}\n\\end{table}\n\nAs can be seen in Table \\ref{tab:resnet-test}, the FT training method achieves the highest accuracy observed on the test split of the training set during the practical study. 
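The difference between FE, FT and TfS reduces to which parameters are left trainable. A minimal PyTorch sketch with a toy backbone and classifier head (not the actual MBv2 or RN34 code):

```python
import torch.nn as nn

# Toy model: a "backbone" standing in for the pre-trained layers and a
# "classifier" head with 43 outputs, one per traffic sign class.
model = nn.Sequential(
    nn.Linear(512, 256),   # plays the role of the pre-trained backbone
    nn.ReLU(),
    nn.Linear(256, 43),    # classifier head built on top
)

def trainable(m):
    return sum(p.numel() for p in m.parameters() if p.requires_grad)

full = trainable(model)   # FT / TfS: every parameter is updated

# Feature Extraction: freeze everything, then unfreeze the head only.
for p in model.parameters():
    p.requires_grad = False
for p in model[2].parameters():
    p.requires_grad = True

assert trainable(model) == 256 * 43 + 43   # only the head remains trainable
assert full == 512 * 256 + 256 + 256 * 43 + 43
```

FT and TfS differ only in the initialization: FT starts from pre-trained weights, TfS from random ones.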
The output model of this training stage is the one deployed in the application.\n\n\\subsection{The DL-based software}\nThe architecture of the DL-based software system operation is shown in Figure \\ref{app_DL}, following design patterns from \\cite{washizaki2020machine}. We build two applications that have the same architecture but differ only in the DL model they integrate. The application that loads the SmallCNN has a total weight of 68.84MB, and the one that loads the RN34 a total weight of 131MB. The baseline weight of the application bundle generated by Unity3D, without the models, is 46.73MB.\n\nThe two built applications implement goals 1, 2 and 3 defined in Section \\ref{study-design}. However, to ensure the sustainability of the models when deployed, and to satisfy application goals 4 and 5, an external server\/computer is needed to perform the re-training of the model with the new tagged data added by the community. This implementation still lets the user obtain recognition predictions without external calls: the application client only uses the server\/computer to download new updates of the model and to manually submit new tagged data in a personally controlled way, whenever desired. We consider allowing users to annotate the data they generate at run-time an efficient solution to the sustainability of the model when deployed. This could be done either by verifying correct model recognition predictions or by correcting wrong ones. However, the architecture that we implement in this practical study makes use of the mobile device storage to save the taken pictures, which are then manually annotated and sent to the local machine, where they are fed to the model again on the Kaggle GPU. 
This is not a scalable implementation, but it is releasable for research purposes.\n\n\\subsection{Threats to validity}\nRegarding construct validity, we build two DL models, a small CNN and the ResNet34 (RN34) \\cite{resnet}, in order to mitigate mono-operation bias. As for conclusion validity, due to the exploratory nature of the study we do not execute statistical tests. Regarding internal validity, the DL models are executed in the same context (e.g., computational power, mobile technologies), mitigating the risk that the efficiency results are caused by accidental factors rather than by the DL models themselves. Finally, regarding external validity, our results are tied to the context of traffic sign recognition in mobile applications. Furthermore, the analysis of the performance in the operation-sided environment is carried out on a mobile device with the Qualcomm Adreno 618 GPU running over Android, which determines the extent to which the complexity-related results can be generalized.\n\n\\begin{figure*}[h]\n \\centering\n \\scalebox{0.65}{%\n \\includegraphics[width=\\linewidth]{static\/sysops.png}\n }\n \\caption{DL-based Software System Operation Architecture}\n \\label{app_DL}\n\\end{figure*}\n\n\\section{Results}\nIn this section we present and discuss the results of our practical study for each RQ. \n\n\\subsection{RQ1: Challenges of the creation and integration of complex CNNs in DL-based mobile applications}\nTo answer RQ1, we analyze both the challenges identified in the state of the art (see Section \\ref{sec:sota-challenges}) and our results. 
This analysis is shown in Table \\ref{results-challenges}, where: \\textbf{v} indicates that the challenge has been verified to be real and concerning; \\textbf{\u00b1} indicates that the challenge has been identified but not faced in the practical study; \\textbf{new} marks challenges emerging from our study that, to the best of our knowledge, have not been previously reported.\n\n\\begin{table}[h]\n\\begin{center}\n\\caption{\\label{results-challenges}Diagnoses of identified challenges}\n\\scalebox{1.15}{\n \\begin{tabular}{|c|c|}\n \\hline\n \\textbf{Challenges} & \\textbf{Diagnostic}\\\\\n \\hline\n Frameworks in early stage (C1) & v \\\\\n Software-Data dependency (C2) & v \\\\\n Explainability (C3) & \u00b1 \\\\\n Sustainability (C4) & v \\\\\n Software dependencies (C5) & new \\\\\n Model performance (C6) & new \\\\\n \\hline\n \\end{tabular}\n}\n\\end{center}\n\\end{table}\n\nWe verify C1 because we find few alternatives for developing DL-based mobile applications. Furthermore, we encounter difficulties in the support of DL architectures in these frameworks. In our work we experience limitations in the ONNX file format and in the Unity Barracuda package, so we motivate further development of these. From C1 we uncover C5, stating that \\textbf{there is too sharp a dividing line between the development-sided frameworks and the operation-sided ones}. C5 reveals a constraint we found in DL-based software, defined as follows. \\textbf{No matter what framework one wishes to use, it will always need to wrap at least two different technologies}: one for the data management and model training outside the device, and the other for the model operation inside the user's device. This implies challenges regarding the mutual support between the two technologies. 
In the following paragraphs, the challenges faced regarding C5 are listed.\n\nFirst, \\textbf{the pre-processing steps applied to the initial dataset in the development phase and those applied in the operation phase (in real time, to the images taken by the user) have to be the same despite being implemented in different frameworks}. DL-based software developers must ensure that the network receives the same inputs on both platforms, so that it gives the same predictions for the same images.\nSecondly, \\textbf{the operation-sided framework must support model importing and must provide functional libraries that enable the appropriate usage of the model in its exported format}. In Unity Barracuda, a challenging limitation we found is the lack of support for many of the basic operations in the treatment of DL models (a reason for which we verify C1). Consider, for example, the \\textit{view()} function from PyTorch. For the hand-made CNNs, this operation has been replaced by \\textit{Flatten()}, which does exactly the same, but the pre-trained MBv2 uses \\textit{view()} internally and hence its architecture cannot be deployed directly to Unity. In conclusion, due to the limitations in the used operation-sided framework, a model architecture that is built to ensure efficiency on mobile devices cannot be deployed to the mobile application.\n\nRegarding C2, \\textbf{we have experienced high volatility in the applications' effectiveness when testing them with real-world data outside of the training set}. We verify C2 because we experience that the effectiveness of DL-based mobile applications strongly depends on the data that is fed to them. This fact makes sustainability (C4) \\textbf{become a critical quality attribute for reducing the data dependency and ensuring high performance of the models when deployed}. 
In the practical study, we implement a solution to the challenge of ensuring sustainability in DL-based software that consists of data augmentation with real-world context data. By doing this, we smooth the differences between the development and operation phases of the DL-based software lifecycle by adapting the models to the real-world environment. Hence, we also smooth the dependency between the applications' effectiveness and the data that is fed to them, which makes the DL-based mobile applications more sustainable and makes them generalize better. Table \\ref{tab:operation-performance} shows the performance of the two deployed models in the two iterations of the operation phase, where the data augmentation technique applied in Iteration 2 reduces the software-data dependency (C2) and hence increases the sustainability (C4) of the applications. \n\n\\begin{table}[ht]\n\\centering\n\\caption{Performance of the deployed models when receiving operation-sided data}\n\\label{tab:operation-performance}\n\\scalebox{1}{%\n \\begin{tabular}{|c|c|r|r|}\n \\hline\n \\textbf{Model} & \\textbf{Iteration} & \n \\textbf{Accuracy} & \\textbf{Storage Weight} \\\\\n \\hline\n Small CNN & 1 & 45.07\\% & 22.11MB \\\\\n RN34 & 1 & 64.78\\% & 83.24MB \\\\\n Small CNN & 2 & \\textbf{92.95\\%} & 22.11MB \\\\\n RN34 & 2 & \\textbf{98.59\\%} & 83.24MB \\\\\n \\hline\n \\end{tabular}\n}\n\\end{table}\n\nRegarding C3, our approach for providing explainability in the built applications is based on providing standard confidence indicators for the models' predictions. These indicators consist of the output probabilities of class membership, scaled to become percentages of confidence for a given input. \\textbf{These simple indicators provide a level of explainability of the models} that is acceptable for the applications in the practical study, meaning that \\textbf{they allow some level of interpretation to the user}. 
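The confidence indicators just described are a direct rescaling of the softmax output. A minimal sketch (the logit values are made up):

```python
import math

# Turn raw class scores into percentages of confidence via a softmax.
def confidence_percentages(logits):
    exps = [math.exp(v - max(logits)) for v in logits]  # stabilized softmax
    total = sum(exps)
    return [100.0 * e / total for e in exps]

scores = [2.0, 0.5, -1.0]          # hypothetical output for three classes
pct = confidence_percentages(scores)
assert abs(sum(pct) - 100.0) < 1e-9   # percentages sum to 100
assert pct[0] == max(pct)             # highest score -> highest confidence
```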
We do not consider ensuring this quality attribute a critical challenge, because it might not generalize to many applications that integrate a DL-based component. However, \\textbf{we recognize the importance of the presence of any measure that provides the user with information about why the model has given a result}.\n\nOur practical study uncovered C6. Compared to the usage of a server, the model exporting operation and its integration into the mobile application \\textbf{has the drawback of including the model's storage weight in the application, and hence in the mobile storage}. For this reason, the \\textbf{optimization of the performance trade-off is a must when deploying DL-based software to mobile devices}. We have shown in the practical study that not limiting the complexity of the model's architecture, as in the case of the RN34, yields a cost in the mobile application in terms of real-time response and storage weight. For the deployment of DL-based software to mobile devices, ensuring efficiency is not an option but a must, and facing it becomes a critical challenge.\n\n\\subsection{RQ2: Criteria needed to reason about the trade-off between accuracy and complexity of DL models in mobile applications}\nMost of the criteria applied to optimize the performance trade-off are approaches based on the reduction of the models' complexity. In an environment where the developer has few alternatives in the choice of both development-sided and operation-sided frameworks, the deployment of these systems becomes a bottleneck of the DL pipeline. This way, \\textbf{the evolution of the performance of DL-based software is strictly linked to the evolution of the current model conversion and exporting technologies}, a field still in its very early stages, where few file formats can be used and several limitations remain. 
For this reason, the deployment of DL-based software to the server cloud and its remote usage through internet connections is still a more flexible choice in terms of the DL-based services that can be provided to a mobile user. However, this is not fault-tolerant, due to the dependency on hosting and internet connections. \n\n\\textbf{A fact that makes the deployment of DL-based software to mobile devices not always suitable is that it collides with the benefits of transfer learning}. The major capabilities of transfer learning include great results with little task-specific data, the ease of fine-tuning pre-built architectures and the omission of the model design stage. The commonly available models for transfer learning are of massive weight, and the revolution of transfer learning has created a motivation to use these large-scale models for solving any task before designing task-specific architectures \\cite{transfer-learn}. This was never a problem until now, when one wants to deploy the massive model in an environment with limited computational power.\n\nWe show in Table \\ref{tab:operation-performance} that the most famous architecture among all those tested, the RN34 \\cite{resnet}, is the one carrying the biggest drawbacks in terms of storage weight. In this way, the simple CNN architecture with the appropriate configuration outperforms the massive model, since it provides almost the same accuracy on the traffic sign recognition task with a much less complex architecture.\n\nFinally, the MBv2 looks very promising in this field for its low complexity and high accuracy. 
Although it has not been deployed for the proposed practical study due to framework incompatibilities, we conclude that the modifications in its architecture provide the best capabilities for the purpose of deploying DL-based software into environments with limited computational power, in this case mobile devices. \n\nThus, when leaning towards a specific architecture for DL models that are going to be deployed in mobile applications, we look for those that keep complexity low (e.g., fast real-time response, a limited number of parameters, reasonable storage weight) while accomplishing high accuracy and generalization. \n\n\\section{Discussion}\nIn the following we review our findings and discuss their implications. First, we have verified that the identified challenges regarding the software-data dependency and the sustainability of the models deployed in the software are of major concern. To solve them, we have proposed the implementation of a data augmentation technique that adapts the models to the real-world environment, hence providing sustainability. We have shown significant increases in accuracy when sustainability is ensured in the two systems by means of the data augmentation technique (see Table \\ref{tab:operation-performance}). This step is represented by the \"Send Data\" operation in Figure \\ref{global_overview}. We have also verified the need for the evolution of frameworks that support the development of DL-based software, which is another challenge identified in related work. Additionally, we have uncovered a new challenge related to the mutual dependencies of these frameworks, which enriches the identification of the framework capabilities challenge. Furthermore, we have identified and described the criteria that can be applied to approach efficiency in the deployment of DL models in mobile applications. 
Also, we have related the reasoning about how to achieve efficiency to a newly identified challenge.\n\nTaking into account the differences between the performance of the two DL-based mobile applications according to our criteria, we now discuss a possible functional and efficient implementation. This implementation could consist of the integration of the two deployed models in the same DL-based mobile application. The default model could be set to the most efficient one, and in case the results given by this one were not satisfactory, the user would have the option to switch to the more robust but less efficient model.\n\n\n\\section{Conclusions}\nIn this work we have studied the DL-based software lifecycle in a practical manner. We have provided an analysis of the challenges found when developing this type of software. Specifically, we have put the focus on the deployment of DL-based software in mobile applications. Concretely, we have tested the performance of different CNN architectures under the PyTorch and Android frameworks. With all this, we have experienced, solved and documented many of the challenges previously identified in related work and have also highlighted new ones. Furthermore, we have analysed the roles of accuracy and complexity in DL-based software performance, relating this quality attribute to a newly identified challenge of the deployment of DL-based software.\n\nThis study motivates several key points of future work. First, the search for optimized model architectures that allow more efficient solutions to the deployment of DL-based components in mobile applications. These can either implement more efficient operations in their architecture or have their complexity limited by design. The study also encourages the community to develop alternative solutions for the conversion and export of models, together with software technologies that support mobile application development with DL-based components. 
The latter would reduce the severity of the challenges that are met while implementing this type of software. \nMoreover, the study motivates the design of more sophisticated confidence indicators that describe more precisely the uncertainty in the models' predictions. The uncertainty of a model prediction given the input conditions is a measure that is hard to determine but useful for influencing the decisions of autonomous DL-based systems. These can be key in many applications related to safety-critical environments. Also, sophisticated confidence indicators increase the quality of the systems, since they enhance explainability for the user. Last, we expect continued evolution in the design of more powerful GPUs for mobile devices, which can relax the complexity limitations on architectures for achieving efficiency within the mobile device setting. \n\n\\section{Data Availability}\nWe provide a demo video of the developed and evolved DL-based software, available at \\url{https:\/\/www.youtube.com\/watch?v=yFsp6kxO5kI}. 
Additionally, we provide an open source code repository of the whole project at \\url{https:\/\/github.com\/yuyecreus\/CNN-in-mobile-device}.\n\n\\section*{Acknowledgment}\nWe thank Lisa J{\\\"o}ckel for reviewing our work and for her supportive and useful feedback.\nThe research presented in this paper has been developed in the context of the CBI course at the GCED@FIB.\n\n{\\small\n\\bibliographystyle{IEEE\/IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzfoki b/data_all_eng_slimpj/shuffled/split2/finalzzfoki new file mode 100644 index 0000000000000000000000000000000000000000..02fe7bfd7533260f1239ec20a26f01c989500e5c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzfoki @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\n\n\n\n\nDyson introduced his model of $N$ Brownian particles evolving in a constraining potential in 1962 \\cite{Dyson62}. The particle evolution described by Dyson with quadratic potential gives the evolution of the eigenvalues of a Hermitian matrix Brownian motion $M_t$. For fixed time $t>0$ the matrix $M_t$ has the Gaussian Unitary Ensemble (GUE) distribution. Dyson also formulated a more general particle flow which corresponds to a larger class of point processes called $\\beta$-ensembles. The $\\beta$-ensembles are finite point processes with $N$ points and joint density\n\\begin{equation}\n\\label{eq:betaensemble}\n \\rho^{\\beta}_N(\\lambda) = \\frac{1}{Z^{\\beta}_N} e^{-\\frac{\\beta}{2}\n \\left( -\\sum_{1 \\leq i \\neq j \\leq N} \\log|\\lambda_i - \\lambda_j| + \\sum_{i=1}^N V(\\lambda_i) \\right)\n},\n\\end{equation}\nwith $Z_N^{\\beta}$ a scaling factor, $V(x)$ the potential, and $\\beta>0$. The case $V(x)=x^2\/2$ with $\\beta=1,2,$ and $4$ corresponds to the original Gaussian ensembles and their Dyson evolution. These could be further generalized by changing the interaction term, but we restrict ourselves to this case. 
This joint density leads to the generalization of the Dyson Brownian motion. Let \n\\begin{equation}\n\\label{eq:dlambda}\n d\\lambda_{i,t} = \\sqrt{\\frac{2}{\\beta}}dZ_{i,t} - \\biggl( \\frac{V'(\\lambda_i)}{2} - \\sum_{j: j \\neq i} \\frac{1}{\\lambda_i - \\lambda_j} \\biggr)\\,dt\n\\end{equation}\nwhere $\\left\\{ Z_{i,t} \\right\\}_{i=1}^N$ are independent standard Brownian motions. Then (\\ref{eq:betaensemble}) is the equilibrium measure for this process, and the flow is stationary with respect to this measure. \n\nThere is significant existing work on Dyson's model. Rogers and Shi showed convergence to the Wigner law in the $N\\to \\infty$ limit for the original quadratic potential \\cite{RogersShi}. Israelsson showed that the fluctuations converge to a Gaussian process, with later work by Bender giving the covariance structure \\cite{Israelsson, Bender}. Other work on the fluctuations of the trace moments was done by Perez-Abreu and Tudor \\cite{AbreuTudor}. Later work by Unterberger extended this to the general $V,\\beta$ case \\cite{Unterberger}. This work, done in the asymptotic setting, rests heavily on convergence of the Stieltjes transform but gives little information on local interactions.\n\nIn the case of the Gaussian ensembles, where $V(x)=x^2\/2$ and $\\beta=1, 2,$ or $4$, the eigenvalues of the matrices form a finite determinantal or Pfaffian point process. The Dyson flow for $\\beta=2$ can also be described as a determinantal point process with the extended Hermite kernel $K(\\overline x,\\overline y)$, now with points in space-time. These descriptions give exact formulas for correlation functions and may be used to prove local limits. Forrester and Nagao give local point process limits at multiple times via determinantal methods for the $\\beta=2$ case \\cite{FN2}. Under the appropriate substitutions the extended Hermite kernel converges to the extended sine kernel. Similar results hold at the edge of the spectrum. 
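For concreteness, specializing (\ref{eq:dlambda}) to the quadratic potential recovers Dyson's original model:

```latex
% Substituting V(x) = x^2/2, so that V'(\lambda_i)/2 = \lambda_i/2, into
% (\ref{eq:dlambda}) gives
\begin{equation*}
 d\lambda_{i,t} = \sqrt{\frac{2}{\beta}}\,dZ_{i,t}
 + \biggl( -\frac{\lambda_i}{2}
 + \sum_{j : j \neq i} \frac{1}{\lambda_i - \lambda_j} \biggr)\,dt ,
\end{equation*}
% which for \beta = 1, 2, 4 is the eigenvalue flow of the classical
% Gaussian ensembles.
```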
\n\nWork in recent years has been focused on the more general Dyson model introduced in (\\ref{eq:dlambda}), but much remains to be done. Landon, Sosoe, and Yau give universality results relating eigenvalue statistics for a class of potentials $V$ to the classical Dyson Brownian motion \\cite{LSY}. Huang and Landon give results on rigidity and a mesoscopic CLT \\cite{HuangLandon}. In particular the question of local limits for the general $\\beta$ remain open even in the case of the quadratic potential. These questions are the motivation behind the current work. To see the connection we give a brief review of the work done for non-dynamic model.\n\n\nIn the case of the $\\beta$-Hermite ensembles (not time-evolving) for general $\\beta>0$ the proof of local limits rests on a tridiagonal matrix models introduced by Dumitriu and Edelman \\cite{DE}. The tridiagonal (Jacobi) model for the case $V(x) = 2 x^2$ is given by \n\\begin{equation}\n\\label{eq:tridiagonal}\nA_{n} = \\left[ \\begin{array}{ccccc}\nb_1 & a_1 & && \\\\\na_1 & b_2 & a_2 & &\\\\\n& \\ddots & \\ddots & \\ddots \\\\\n& & a_{n-2} & b_{n-1} & a_{n-1}\\\\\n&&& a_{n-1} & b_n\n\\end{array}\\right]\n\\end{equation}\nwith $a_i \\sim \\chi_{\\beta(n-k)\/4}$ and $b_i \\sim \\mathcal{N}(0,2)$ all independent. Notice that the potential in this case is off by a factor of 4 from the potential mentioned previously. This will be a convenient choice later. Dumitriu and Edelman showed that the eigenvalues of $A_n$ have joint density given by (\\ref{eq:betaensemble}) \\cite{DE}. They in fact go further, using a bijection between the entries of a Jacobi matrix and the eigenvalues together with the square of their spectral weights. \n\\begin{proposition}[Dumitriu-Edelman, \\cite{DE}]\nWe take $q_i$ to be the first entry of the vector $v_i$ that satisfies $A_n v_i = \\lambda_i v_i$ with $\\|v_i \\|= 1$, and $\\lambda_1< \\lambda_2<\\cdots < \\lambda_n$ to be the ordered eigenvalues of $A_n$. 
Then the map $(\\overline a, \\overline b) \\leftrightarrow (\\overline \\lambda, \\overline q^2)$ is a bijection, and moreover for $A_n$ defined as above we get \n\\begin{equation}\n\\label{eq:dirichlet}\n(q_1^2,q_2^2,...,q_n^2) \\sim \\operatorname{Dirichlet}(\\tfrac{\\beta}{2},...,\\tfrac{\\beta}{2}).\n\\end{equation}\n\\end{proposition}\nThis tridiagonal model was used by Valk\\'o and Vir\\'ag to give a point process limit in the bulk \\cite{BVBV}, and by Ram\\'irez, Rider and Vir\\'ag to give a point process limit at the edge \\cite{RRV}. \n\n\n\\subsection{Notation}\n\n \nWe deal with matrix-valued random processes. The subscripts of such processes are used both for referring to matrix and vector entries and for the time parameter. So, we adopt the following convention. A process $\\MP{X} = \\left( \\MP{X}{t}, t\\geq 0\\right)$ will be shown in bold whenever its time parameter is not shown. Otherwise, it will be shown in normal weight font. Matrix indexing subscripts will also appear in the subscript, but always first. So for example, $\\MP{X}{t}[i,j]$ refers to the $i,j$ entry of the matrix process $\\MP{X}$ at time $t,$ while $\\MP{X}[i,j]$ refers to the scalar-valued process of the $i,j$ entry. \n\nThere are many instances in this paper where $[A,B]=AB-BA$ denotes the commutator. In order to avoid confusion, all quadratic variation and covariation terms will be written in the form $dX_t dY_t$ or $(dX_t)^2$.\n\n\n\n\\subsection{Goals and Results}\n\nThe goal of this paper is to give a description of the tridiagonal matrix evolution associated to the $\\beta$-Dyson Brownian motion flow. These models can in fact be completely characterized, but require a substantial amount of machinery to describe. In this paper we begin by showing the existence of such a tridiagonal flow and then develop the linear algebra tools needed to characterize the model. We then move on to characterizing the model and asymptotics for the case $V(x)= 2x^2$. 
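For reference, the tridiagonal model (\ref{eq:tridiagonal}) is straightforward to sample. A numpy sketch, taking the stated distributions at face value (the $\chi$ parameter convention is the one from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample the Dumitriu-Edelman tridiagonal model: diagonal entries N(0,2),
# off-diagonal entries chi with parameter beta*(n-k)/4, all independent.
def sample_tridiagonal(n, beta):
    b = rng.normal(0.0, np.sqrt(2.0), size=n)        # variance 2
    df = beta * np.arange(n - 1, 0, -1) / 4.0        # beta*(n-k)/4, k=1..n-1
    a = np.sqrt(rng.chisquare(df))                   # chi_df = sqrt(chi^2_df)
    return np.diag(b) + np.diag(a, 1) + np.diag(a, -1)

A = sample_tridiagonal(8, beta=2.0)
assert np.allclose(A, A.T)           # Jacobi matrices are symmetric
assert np.all(np.diag(A, 1) > 0)     # strictly positive off-diagonals
eig = np.linalg.eigvalsh(A)
assert np.all(np.diff(eig) > 0)      # the spectrum is almost surely simple
```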
\n\nRecall that the tridiagonal model is in bijection with the eigenvalues and the spectral weights. We have an explicit description of the eigenvalue flow, but we still have the freedom to choose the process by which the spectral weights evolve. The first partial description of the tridiagonal model is in Theorem \\ref{thm:martingales}, which gives a description of the martingale terms for the general model. We then specialize to the case where the spectral weights remain fixed. The model here is given in its entirety by Theorem \\ref{thm:tridiagonalfrozen}. This model is particularly nice in the case where the eigenvalue flow is stationary; in this case we get that the tridiagonal model is also stationary. We describe two more cases. The first is the case where the spectral weights evolve according to some finite variation process; the description is given in Theorem \\ref{thm:tridiagonalFV}. The second is a model where the martingale terms in the evolution vanish on the first half of the matrix. \n\n \n\n\nIn addition to explicit descriptions of the evolution of the finite matrices, we get the following asymptotic result on the evolution of the entries of $\\MP{A}$ as $n \\to \\infty$. 
Define the operator $\\mathcal{F}$ on Jacobi matrices by\n \\[\n \\mathcal{F}(A)_{k,\\ell}\n =\n \\begin{cases}\n -b_\\ell(4\\ell-2)+\n 4\\sum_{j=1}^{\\ell-1} b_j, & \\text{ if } k = \\ell , \\\\\n -2 (\\ell+1)a_\\ell- 2\\ell a_{\\ell-1} \n + 4\\sum_{j=1}^{\\ell-2} a_j, \n & \\text{ if } k = \\ell + 1 ,\\\\\n 0, & \\text{ otherwise.}\n \\end{cases}\n \\]\n and let $D$ be the infinite Jacobi matrix\n\\[\n D\n =\n \\begin{bmatrix}\n 0 & \\frac{1}{2} & & & \\\\\n \\frac{1}{2} & 0 & \\frac{1}{2} & & \\\\\n & \\frac{1}{2} & 0 & \\frac{1}{2} & \\\\\n & & \\frac{1}{2} & 0 & \\ddots \\\\\n & & & \\ddots & \\ddots \\\\\n \\end{bmatrix}.\n\\]\n\nLet $\\mathfrak{s}$ denote the semicircle density on $[-1,1],$\n\\[\n \\mathfrak{s}(x) = \\frac{2}{\\pi}\\sqrt{1-x^2}\\,\\one[|x| \\leq 1],\n\\]\nand define for $z \\in {\\mathbb C} \\setminus [-1,1]$\n\\[\n s^{\\mathfrak{s}}(z) =\n \\int_{-1}^1 \\frac{\\mathfrak{s}(x)\\,dx}{x-z}\n =\n 2(-z+\\sqrt{z^2-1})\n .\n\\]\n\n\n \\begin{theorem}\n \\label{thm:asymptotics}\n Let $\\MP{A}$ satisfy the evolution for the case of frozen spectral weights in Theorem \\ref{thm:tridiagonalfrozen}, with $V(x)=2x^2.$ Suppose that at time $0,$ the eigenvalue distribution of $\\MP{A}{0}^{(n)}\/\\sqrt{n}$ satisfies\n \\[\n n^{-1\/2}\\operatorname{tr}\\left( z - \\MP{A}{0}^{(n)}\/\\sqrt{n} \\right)^{-1}\n - n^{1\/2} s^{\\mathfrak{s}}(z)\n \\overset{\\Pr}{\\to} \\int_{-1}^1 \\frac{\\nu(dx)}{x-z}\n \\]\n for some signed measure $\\nu$\n uniformly on compact sets of $\\mathbb{C}\\setminus [-1,1].$ Suppose that the spectral weights of $\\MP{A}$ satisfy \\eqref{eq:dirichlet} and are independent of the eigenvalues of $\\MP{A}$. 
If $\\MP{A}{0}$ is the tridiagonal in (\\ref{eq:tridiagonal}), then this is satisfied with $\\nu = 0.$ \n \nThen any bounded order principal submatrix of $\\MP{A}-D\\sqrt{n}$ converges to the solution of\n\\[\ndA_t = \\left( \\mathcal{F}(A_t) - { \\sqrt{\\beta}} \\mathcal{G} \\right) dt\n\\]\nwhere $\\mathcal{G}$ is a fixed (non-time dependent) Gaussian vector with an explicitly computable correlation structure. If we choose $\\overline q$ to be flat (that is $q_i^2 = \\frac{1}{n}$ for all $i$) then we get\n\\[\ndA_t = \\mathcal{F}(A_t)\\,dt.\n\\]\n\\end{theorem}\nIn the stationary case, or more generally when $\\nu$ is $0,$ the limiting evolution of the tridiagonal matrix is identically $0.$\n\n\n\\subsubsection{Derivatives of the Lanczos algorithm}\n\nFinding an explicit description for the tridiagonal evolution requires us to solve for the tridiagonalization of a symmetric or Hermitian matrix after a small perturbation. This may be used to give an explicit derivative of the Lanczos algorithm. \n\nWe will use $\\mathscr{T}$ to denote the tridiagonalization operator on Hermitian matrices. Let $M$ be any $n \\times n$ Hermitian matrix so that $\\mathscr{T}(M)=A$ where the off diagonal entries are strictly positive. We will take $p_0(x),...,p_{k-1}(x)$ to be the orthogonal polynomials that are orthonormal with respect to the weights $\\{q_i\\}_{i=1}^n$. See Section \\ref{sec:OPs} for further information about the construction. We further take $p_{k}^{(\\ell)}(x)$ to denote the $\\ell$-th minor polynomials. We use a notation that may not be consistent with other notational choices for the minor polynomials; see Section \\ref{sec:minors} for details. \n\nWe make one further definition. We use $D_{G, \\mathscr{T}}$ to denote the derivative of the Lanczos algorithm in the direction of $G$ in the tridiagonal basis. 
In particular \n\\[\nD_{G,\\mathscr{T}} \\mathscr{T}(M) = \\lim_{\\varepsilon \\to 0} \\frac{1}{\\varepsilon} \\left( \\mathscr{T}(A+\\varepsilon G) - A \\right).\n\\]\nNotice that on the right hand side we are taking the tridiagonalization of a perturbed tridiagonal matrix rather than in the original basis for $M$.\n\n\\begin{theorem}\n\\label{thm:lanczos}\nLet $M$ be any $n \\times n$ Hermitian matrix so that $\\mathscr{T}(M)=A$ where the off diagonal entries are strictly positive, and let $W^{k,\\ell}$ be the matrix that is 0 everywhere except $(k,\\ell)$ and $(\\ell,k)$ where it is 1. Let $G$ be a matrix and $D_{G,\\mathscr{T}}$ denote the derivative in the direction of $G$ in the tridiagonal basis. Then \n\\[\nD_{W^{k,\\ell},\\mathscr{T}} \\mathscr{T}(M) = G^{k,\\ell}(A)\n\\]\nwhere \n \\[\n G^{k,\\ell}_{u,r}\n =\n \\frac{1}{2}\\sum_{i=1}^n\n q_i^2\n p_{k-1}(\\lambda_i)\n \\begin{cases}\n a_{r-1} p_{r-2}(\\lambda_i)p_{r-1}^{(\\ell-1)}(\\lambda_i) - a_r p_{r-1}(\\lambda_i) p_{r}^{(\\ell-1)}(\\lambda_i)& \\\\\n +a_{r-1} p_{r-1}(\\lambda_i)p_{r-2}^{(\\ell-1)}(\\lambda_i) - a_r p_{r}(\\lambda_i) p_{r-1}^{(\\ell-1)}(\\lambda_i), & \\text{ if } u = r < n, \\\\\n a_r (p_{r-1}(\\lambda_i)p_{r-1}^{(\\ell-1)}(\\lambda_i) - p_r(\\lambda_i)p_r^{(\\ell-1)}(\\lambda_i)), & \\text{ if } u = r + 1 < n,\\\\\n 0, & \\text{ otherwise.}\n \\end{cases}\n\\]\n\\end{theorem}\n\n\\begin{remark}\nThe previous theorem is enough to give the derivative for an arbitrary $G$. We can also consider the derivative in the original basis for $M$; this is equivalent to $D_{O^{*}GO,\\mathscr{T}}$ where $O$ is the change of basis matrix $O^*MO=A$.\n\\end{remark}\n\n\n\n\\subsection{Organization}\n\nThe paper is organized in the following way: section \\ref{sec:iso} characterizes matrix diffusions that have the same spectral process, and section \\ref{sec:dysoniso} specializes these matrix diffusions to the case of the $\\beta$-Dyson Brownian motion flow. 
These sections will essentially give existence of the tridiagonal model, but the description of the evolution itself will be implicit. Section \\ref{sec:commutator} will develop the linear algebra and orthogonal polynomial tools that will be needed to write down an explicit representation of the tridiagonal matrix evolution. In this section we also prove Theorem \\ref{thm:lanczos}. The proof is not flagged explicitly; the theorem follows as a consequence of remark \\ref{rem:Gkl}. \n\nSection \\ref{sec:commutator} gives a brief introduction to discrete orthogonal polynomials (see section \\ref{sec:OPs}) and later defines the corresponding minor polynomials (see section \\ref{sec:minors}). The remainder of the section is split into the derivation of several identities related to the commutator $[A,M]$ for a tridiagonal $A$, and ends with a summary. This final section, \\ref{sec:summary}, collects the relations needed to describe the tridiagonal model explicitly, along with the pieces necessary for Theorem \\ref{thm:lanczos}.\n\n\nThis explicit representation will be written down in several cases in section \\ref{sec:models}. The organization of this section is evident from the section headings, but it is worth noting that the section on the tridiagonal model with frozen weights, \\ref{sec:tridiagonalfrozen}, will be necessary for reading the section on finite variation spectral weights, \\ref{sec:tridiagonalfv}. Additionally, section \\ref{sec:tridiagonalfrozen} contains a subsection where the finite variation terms are worked out and somewhat simplified. This explicit computation will be necessary for the asymptotic results in the next section, but is not essential otherwise. \n\nThe final section gives the proof of Theorem \\ref{thm:asymptotics}. 
A more detailed description of the organization may be found at the start of this section.\n\n\n\n\n\n\\section{Isospectral processes}\n\\label{sec:iso}\n\nWe begin by characterizing real matrix diffusions that carry the same eigenvalue distributions. Suppose that $Z_t$ is a real symmetric semimartingale and $O_t$ is a real orthogonal semimartingale that are adapted to a common filtration. Real orthogonal processes can be characterized by an associated process on the Lie algebra of the orthogonal group, the real antisymmetric matrices. Specifically, if $M_t$ is any real antisymmetric continuous semimartingale, the process\n\\[\n dO_t^t = O_t^t( dM_t + \\frac{1}{2} (dM_t)^2)\n\\]\nis a real orthogonal process. Moreover, any real orthogonal continuous semimartingale arises in this way (see the proof sketch in Theorem~\\ref{thm:iso}).\n\nLet $Y_t=O_t Z_t O_t^t.$ Applying the stochastic product rule to $Y_t,$ we get\n\\begin{align*}\n dY_t &= \n dO_t Z_t O_t^t\n + O_t dZ_t O_t^t\n + O_t Z_t dO_t^t \\\\\n &+ dO_t dZ_t O_t^t\n + dO_t Z_t dO_t^t\n + O_t dZ_t dO_t^t.\n\\end{align*}\nLet $W_t = \\int_0^t O_s dZ_s O_s^t.$ Then, we can write\n\\begin{align*}\n dY_t &= \n (dM_t + \\frac{1}{2} (dM_t)^2)^t Y_t \n + dW_t + Y_t( dM_t + \\frac{1}{2} (dM_t)^2) \\\\\n &+ dM_t^t dW_t\n +dM_t^t Y_tdM_t\n +dW_t dM_t,\n\\end{align*}\nnoting that higher order terms disappear. 
This can be rewritten as\n\\[\n dY_t = dW_t + [Y_t, dM_t] + \\frac{1}{2}[2dW_t + [Y_t,dM_t], dM_t].\n\\]\nAlternatively, using that $[Y_t,dM_t] = dY_t - dW_t,$ up to finite variation terms,\n\\[\n dY_t = dW_t + [Y_t, dM_t] + \\frac{1}{2}[dW_t + dY_t, dM_t].\n\\]\nWe have, in effect, proven the following.\n\\begin{theorem}\n Suppose $\\MP{Z}$ is a real symmetric process adapted to a filtration $\\mathscr{F}$ and $\\MP{Y}=\\MP{O}\\MP{Z}\\MP{O}^t$ for some continuous orthogonal process $\\MP{O}$ adapted to $\\mathscr{F},$ then\n \\begin{equation}\n \\label{eq:prog}\n \\begin{aligned}\n dY_t &= dW_t + [Y_t, dM_t] + \\frac{1}{2}[dW_t + dY_t, dM_t] \\\\\n dO_t^t &= O_t^t( dM_t + \\frac{1}{2} (dM_t)^2)\n \\end{aligned}\n \\end{equation}\n for some real anti-symmetric process $\\MP{M}$ adapted to $\\mathscr{F}$ and where $\\MP{W}$ solves\n $dW_t = O_t dZ_t O_t^t$ for all $t \\geq 0.$\n \\label{thm:iso}\n\\end{theorem}\n\\begin{proof}\n We need to demonstrate the existence of the process $\\MP{M}$ solving\n \\[\n dO_t^t = O_t^t( dM_t + \\frac{1}{2} (dM_t)^2).\n \\]\n Having done so, the product rule computations preceding the theorem statement complete the proof.\n Let $T > 0$ be arbitrary.\n For any fixed time $t_0 \\in [0,T],$ we have that \n \\(\n \\lim_{t \\to t_0} O_t^t O_{t_0} = \\operatorname{Id}.\n \\)\n Hence, we can define the matrix logarithm $\\tilde M_{t_0,t} = \\log(O_{t_{0}} O_t^t)$ for time $t$ sufficiently close to $t_0,$ which as an analytic function of $O_t$ is adapted to $(\\mathscr{F}_t, t \\geq t_0).$ Further, we have that \n \\(\n O_t^t = O_{t_0}^t e^{\\tilde M_{t_0,t}}\n \\)\n for a sufficiently small window of time around $t_0.$ Hence, for a mesh $\\left\\{ t_i \\right\\}_0^d$ of $[0,T]$ of sufficiently fine spacing we may define for $t_i \\leq t < t_{i+1}$\n \\(\n M_t^{(d)} = \\tilde M_{t_i,t} + \\sum_{j=1}^i \\tilde M_{t_{j-1},t_{j}}.\n \\)\n On sending the mesh size to $0$ it is routine to check that $(M_t^{(d)}, 0 \\leq t \\leq T)$ converges 
to a solution of \n \\(\n dO_t^t = O_t^t( dM_t + \\frac{1}{2} (dM_t)^2).\n \\)\n\n\\end{proof}\n\n\n\\section{The time evolving tridiagonal model for general $\\beta$}\n\\label{sec:dysoniso}\n\nSo, any time evolving tridiagonal model for which the eigenvector process is adapted will follow \\eqref{eq:prog}. In particular, any adapted conjugation of $\\beta$-Dyson Brownian motion, which is a real diagonal semimartingale, has this form. Since the eigenvalues and eigenvectors form locally differentiable processes in terms of the matrix entries, it follows that any tridiagonal model for $\\beta$-Dyson Brownian motion satisfies \\eqref{eq:prog}.\n\\begin{theorem}\n\\label{thm:dysoniso}\n If $\\MP{A}$ is a tridiagonal model for $\\beta$-Dyson Brownian motion with $\\beta \\geq 1$ whose starting configuration is almost surely simple, i.e.\\;there is an orthogonal matrix process $\\MP{O}$ so that $\\MP{A} = \\MP{O} \\MP{\\Lambda} \\MP{O}^t$ for diagonal $\\MP{\\Lambda}$ satisfying for each $1 \\leq i \\leq n,$\n \\[\n d\\Lambda_{i,t} = \\sqrt{\\frac{2}{\\beta}}dB_{i,t} - \\biggl( \\frac{V'(\\lambda_i)}{2} - \\sum_{j: j \\neq i} \\frac{1}{\\lambda_i - \\lambda_j} \\biggr)\\,dt\n \\]\n then with respect to the filtration $(\\mathscr{F}_t, t\\geq 0) = (\\sigma(A_t),t\\geq 0),$\n \\[\n \\begin{aligned}\n dA_t &= dW_t + [A_t, dM_t] + \\frac{1}{2}[dW_t + dA_t, dM_t] \\\\\n dO_t^t &= O_t^t( dM_t + \\frac{1}{2} (dM_t)^2)\n \\end{aligned}\n \\]\n for some real antisymmetric process $\\MP{M}$ where $\\MP{W}$ solves $dW_t = O_t d\\Lambda_t O_t^t.$ \n \\label{thm:tri}\n\\end{theorem}\n\\begin{proof}\n Since the eigenvalues of $\\MP{A}$ are always distinct almost surely, the eigenvector matrix $\\MP{O}$ is continuous and in fact differentiable as a function of the entries of $\\MP{A}.$ Hence $\\MP{O}$ is a continuous semimartingale adapted to $\\mathscr{F}$ and by Theorem~\\ref{thm:iso} the result follows. 
\n \\end{proof}\n\nThe presence of the Dyson Brownian motion generator inside the theorem may at first sight appear disconcerting. For example, we may wish to find a representation for the driving term $dW_t$ so that it is a local martingale. Indeed this is possible:\n\\begin{theorem}\n Suppose that $\\MP{W}$ is a symmetric matrix diffusion that solves\n \\[\n dW_t = \\sqrt{\\tfrac{2}{\\beta}} O_t dB_t O_t^t + O_t dZ_t O_t^t,\n \\]\n where $\\beta \\geq 1,$ $\\MP{B}$ is a diagonal matrix Brownian motion, $\\MP{Z}$ is a symmetric matrix Brownian motion with $0$-diagonal and $\\MP{O}$ is a continuous version of the eigenvector matrix of $\\MP{W}.$ Then, $\\MP{\\Lambda} = \\MP{O}^t \\MP{W} \\MP{O}$ has entries satisfying\n \\begin{equation}\n \\label{eq:dl1}\n d\\Lambda_{i,t} = \\sqrt{\\frac{2}{\\beta}}dB_{i,t} +\\sum_{j: j \\neq i} \\frac{1}{\\lambda_i - \\lambda_j}\\,dt,\n \\end{equation}\n that is, the eigenvalues of $\\MP{W}$ follow $\\beta$-Dyson Brownian motion with no potential.\n Hence if $\\MP{A} = \\MP{U}\\MP{W}\\MP{U}^t$ is any tridiagonal matrix diffusion satisfying\n \\[\n \\begin{aligned}\n dA_t &= U_tdW_tU_t^t + [A_t, dM_t] + \\frac{1}{2}[dW_t + dA_t, dM_t] \\\\\n dU_t^t &= U_t^t( dM_t + \\frac{1}{2} (dM_t)^2),\n \\end{aligned}\n \\]\n then the eigenvalues of $\\MP{A}$ evolve according to \\eqref{eq:dl1}.\n\n \\label{thm:tri2}\n\\end{theorem}\nWe require some linear algebra before proving this theorem, which we give in Section~\\ref{sec:commutator}.\nHowever, we note that in the case $\\beta=1,$ this driving noise simplifies:\n\\begin{corollary}\n In the case $\\beta=1,$ the driving noise $\\MP{W} \\overset{\\mathscr{L}}{=} \\frac{\\MP{G} + \\MP{G}^t}{\\sqrt{2}}$ where $\\MP{G}$ is a matrix of i.i.d.\\;standard Brownian motions.\n\\end{corollary}\n\\noindent The previous Theorem and Corollary are related to work of Allez and Guionnet; see \\cite{AllezGuionnet}.\n\n\\begin{remark}\n Similar simplifications occur for $\\beta=2$ and $\\beta=4$ if we replace 
the process $\\MP{O}$ on the orthogonal group by appropriate unitary or symplectic processes, respectively. \n\\end{remark}\n\n\n\n\\section{Solving the commutator equation}\n\\label{sec:commutator}\n\nIntroduce the space $\\mathbb{T}$ of symmetric tridiagonal matrices. \nIn this section, we consider the linear algebra problem of solving for antisymmetric $M$ so that \n\\begin{equation}\n [A,M] + W \\in \\mathbb{T}.\n \\label{eq:comm}\n\\end{equation}\nBy dimension counting, we can see that this problem does not have unique solutions. Indeed, there is generically an $(n-1)$-dimensional space of matrices $M$ that produce the desired answer (see Theorem \\ref{thm:span}).\nThis is locally the problem that must be solved to describe the rotation differentials $dM_t$ that tridiagonalize $A_t$ when perturbed by $dW_t$ in \\eqref{eq:prog}. Further, we will need to understand the resulting space of tridiagonal matrices that solve this equation for a given $W.$\nUnderstanding this algebra leads us to the theory of discrete orthogonal polynomials. \n\n\n\n\\subsection{Discrete orthogonal polynomials}\n\\label{sec:OPs}\n\nLet\n\\begin{equation}\nA = \\left[ \\begin{array}{ccccc}\nb_1 & a_1 & && \\\\\na_1 & b_2 & a_2 & &\\\\\n& \\ddots & \\ddots & \\ddots \\\\\n& & a_{n-2} & b_{n-1} & a_{n-1}\\\\\n&&& a_{n-1} & b_n\n\\end{array}\\right]\n\\end{equation}\nbe a Jacobi matrix. We can define a sequence of polynomials $p_0(x), p_1(x),...,p_n(x)$ to be solutions to the three-term recurrence generated by the equation $A\\underline{p}(x)=x\\underline{p}(x)$ with $\\underline{p}^T(x)=[p_0(x),p_1(x),...,p_{n-1}(x)]$. Taking $p_0(x)=1$ we get \n\\begin{align*}\nb_1 p_0(x)+ a_1 p_1(x) & =x p_0(x)\\\\\na_{k-1}p_{k-2}(x) + b_k p_{k-1}(x) + a_k p_k(x) & =x p_{k-1}(x)\\\\\na_{n-1} p_{n-2}(x) + (b_n-x)p_{n-1}(x) & = - p_n(x).\n\\end{align*}\nFor convenience we have completed the recurrence by taking $a_{n} =1$. 
This makes $p_n(x)$ a multiple of $\\det(x-A).$ If $\\lambda$ is an eigenvalue of $A$, the recurrence closes exactly and so $p_n(\\lambda) = 0$. In other words, the zeros $\\lambda_1,...,\\lambda_n$ of $p_n(x)$ are the eigenvalues of $A$, and moreover they have associated eigenvectors $\\underline{p}(\\lambda_k)$. Because $A$ is self-adjoint it follows that the eigenvectors are orthogonal. Take $q_k^{-2} = \\| \\underline{p}(\\lambda_k)\\|^2$; then the matrix \n\\begin{equation}\n \\label{eq:COB}\nO^t= \\left[ \\begin{array}{cccc}\nq_1 & q_1 p_1(\\lambda_1) & \\cdots & q_1 p_{n-1}(\\lambda_1)\\\\\nq_2 & q_2 p_1(\\lambda_2) & \\cdots & q_2 p_{n-1}(\\lambda_2)\\\\\n\\vdots & & & \\vdots \\\\\nq_n & q_n p_1(\\lambda_n) & \\cdots & q_n p_{n-1}(\\lambda_n)\n\\end{array}\n\\right]\n\\end{equation}\nis the orthogonal change of basis matrix that diagonalizes $A$ with $O^t A O = \\Lambda$. Moreover because $O$ is orthogonal we have the standard orthonormality conditions on both rows and columns. Therefore we get the following relationships:\n\\begin{align}\n\\sum_{i=1}^n q_i^2 p_{k}(\\lambda_i) p_{\\ell}(\\lambda_i) = \\delta_{k,\\ell},\\\\\n\\sum_{i=1}^n q_k q_\\ell p_{i-1}(\\lambda_k)p_{i-1}(\\lambda_\\ell) = \\delta_{k,\\ell}. \n\\end{align}\n\n\n\\subsection{Christoffel-Darboux Formula}\n\nThe Christoffel-Darboux formula is an automatic consequence of the three-term recurrence for $\\left\\{ p_k \\right\\}_{k=0}^n.$ In its symmetric form, the formula reads\n\\begin{equation}\n \\sum_{k=0}^m p_k(x)p_k(y) = a_{m+1}\\frac{p_{m+1}(x)p_m(y) - p_m(x)p_{m+1}(y)}{x-y}.\n \\label{eq:CD}\n\\end{equation}\nMultiplying the left-hand side through by $(x-y)$ and applying the recurrence, the sum telescopes and only the terms on the right survive. 
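As a quick numerical sanity check (ours, not the paper's), the three-term recurrence and the Christoffel-Darboux formula \\eqref{eq:CD} can be verified for a randomly generated Jacobi matrix; the helper `op_values` and the parameter choices below are our own:

```python
import numpy as np

def op_values(a, b, x):
    """Evaluate p_0(x), ..., p_n(x) for the Jacobi matrix with diagonal b
    and off-diagonal a, via the three-term recurrence with a_n = 1."""
    n = len(b)
    a_ext = np.append(a, 1.0)       # a_1, ..., a_{n-1}, and a_n = 1
    p = np.zeros(n + 1)
    p[0] = 1.0
    p[1] = (x - b[0]) * p[0] / a_ext[0]
    for k in range(2, n + 1):
        p[k] = ((x - b[k - 1]) * p[k - 1] - a_ext[k - 2] * p[k - 2]) / a_ext[k - 1]
    return p

rng = np.random.default_rng(1)
n = 5
b = rng.normal(size=n)
a = rng.uniform(0.5, 1.5, size=n - 1)
x, y, m = 0.3, -0.7, 3

p_x, p_y = op_values(a, b, x), op_values(a, b, y)
a_ext = np.append(a, 1.0)
# Christoffel-Darboux: sum_{k=0}^m p_k(x)p_k(y)
lhs = np.dot(p_x[: m + 1], p_y[: m + 1])
rhs = a_ext[m] * (p_x[m + 1] * p_y[m] - p_x[m] * p_y[m + 1]) / (x - y)
print(np.isclose(lhs, rhs))
```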
We can also take the limit in this formula as $y \\to x$ to get the \\emph{confluent} form of this formula:\n\\begin{equation}\n \\sum_{k=0}^m p_k(x)^2 = a_{m+1}(p_{m+1}'(x)p_m(x) - p_m'(x)p_{m+1}(x) ).\n \\label{eq:cCD}\n\\end{equation}\nThis allows us to represent the spectral weights $q_i^2$ by the following special case:\n\\begin{equation}\n \\frac{1}{q_i^2} = \\sum_{k=0}^{n-1} p_k(\\lambda_i)^2 = p_{n}'(\\lambda_i)p_{n-1}(\\lambda_i).\n \\label{eq:qipn}\n\\end{equation}\nWe will also use the summation\n\\begin{align}\n \\sum_{k=0}^m p_k(x)p'_k(y) \n = &a_{m+1}\\partial_y\\biggl(\\frac{p_{m+1}(x)p_m(y) - p_m(x)p_{m+1}(y)}{x-y}\\biggr) \\nonumber\\\\\n = \n &a_{m+1}\\frac{p_{m+1}(x)p'_m(y) - p_m(x)p'_{m+1}(y)}{x-y} \\nonumber \\\\\n + &a_{m+1}\\frac{p_{m+1}(x)p_m(y) - p_m(x)p_{m+1}(y)}{(x-y)^2}. \n \\label{eq:dCD}\n\\end{align}\nSending $x$ to $y$ (or more directly, differentiating \\eqref{eq:cCD}), we also get\n\\begin{equation}\n \\sum_{k=0}^m p_k(x)p_k'(x) = \\frac{a_{m+1}}{2}(p_{m+1}''(x)p_m(x) - p_m''(x)p_{m+1}(x) ).\n \\label{eq:dcCD}\n\\end{equation}\n\n\n\n\n\n\n\\subsection{Fundamental identity}\n\nThe commutator at entry $k,\\ell$ is given by \n\\begin{equation}\n[A,M]_{k,\\ell} = a_{k-1} m_{k-1,\\ell} + b_k m_{k,\\ell} +a_k m_{k+1,\\ell} - (a_{\\ell-1} m_{k,\\ell-1} + b_\\ell m_{k,\\ell}+ a_\\ell m_{k,\\ell+1}).\n\\label{eq:AM}\n\\end{equation}\nTerms for which $k$ or $\\ell$ exceed the allowable indices are taken to be $0.$\nThis suggests a natural choice of $M$ built out of the orthogonal polynomials associated to $A$, with entries of the form $p_{k-1}(x)p_{\\ell-1}(y).$\nTo this end, define the matrix\n\\[\n E(x,y)_{k,\\ell} = \n p_{k-1}(x) p_{\\ell-1}(y) \\one[ k > \\ell]\n +\\frac{1}{2}p_{k-1}(x) p_{k-1}(y) \\one[ k = \\ell].\n\\]\nIn terms of $E,$ let $E^+$ be the symmetric analogue and let $E^{-}$ be the antisymmetric analogue \n\\begin{equation}\n\\label{eq:E}\nE^{+}(x,y) = E(x,y) + E(x,y)^t \\qquad \\text{ and } \\qquad\nE^{-}(x,y) = E(x,y) - 
E(x,y)^t.\n\\end{equation}\n\nSome of these matrices, when evaluated at the eigenvalues of $A,$ have a nice expression when conjugated by the eigenvector change of basis matrix $O.$ These changes of basis are simple to check, and we do not verify them. In what follows, we let $\\delta_{i,j}$ be the matrix which is zero everywhere except in the $(i,j)$ position, where it is $1.$\n\\begin{align}\n &E^{+}(\\lambda_i,\\lambda_j) \n + \n E^{+}(\\lambda_j,\\lambda_i) \n =\n O \\biggl( \\frac{\\delta_{i,j} + \\delta_{j,i}}{q_iq_j} \\biggr) O^t,\n \\label{eq:spec1} \\\\\n &E^{+}(\\lambda_i,\\lambda_i) \n =\n O\\biggl(\n \\frac{\\delta_{i,i}}{q_i^2}\n \\biggr) O^t,\n \\label{eq:spec2} \\\\\n &E^{-}(\\lambda_i,\\lambda_j) \n -\n E^{-}(\\lambda_j,\\lambda_i) \n =\n O \\biggl( \\frac{\\delta_{i,j} - \\delta_{j,i}}{q_iq_j} \\biggr) O^t.\n \\label{eq:spec3}\n\\end{align}\nWe note that \\eqref{eq:spec2} is really just a special case of \\eqref{eq:spec1}, but we include it for emphasis. Note that, despite the importance of $E^{-}(\\lambda_i,\\lambda_i),$ which will become clear, it does not have any such simple representation.\n\n\nCommutation of $A$ with $E^{-}$ nearly gives a multiple of $E^+.$ In effect, save for boundary terms resulting from the finite size of the problem and from the necessary antisymmetry conditions, $[A,E^{-}(x,y)] = (x-y)E^{+}:$\n\\begin{proposition}\n Define the matrix $\\tilde B(x,y) = [A,E^{-}(x,y)] - (x-y)E^{+}(x,y).$ Then, for any $k \\geq \\ell,$\n \\[\n \\tilde B(x,y)_{k,\\ell}\n =\\begin{cases}\n -a_{\\ell-1} p_{\\ell-2}(x)p_{\\ell-1}(y) + a_\\ell p_{\\ell-1}(x) p_{\\ell}(y)& \\\\\n -a_{\\ell-1} p_{\\ell-1}(x)p_{\\ell-2}(y) + a_\\ell p_{\\ell}(x) p_{\\ell-1}(y), & \\text{ if } k = \\ell < n, \\\\\n -a_\\ell (p_{\\ell-1}(x)p_{\\ell-1}(y) - p_\\ell(x)p_\\ell(y)), & \\text{ if } k = \\ell + 1 < n,\\\\\n -p_n(x)p_{\\ell-1}(y), & \\text{ if } k = n, \\ell < n-1, \\\\\n -p_n(x)p_{\\ell-1}(y) \n 
-a_\\ell(p_{\\ell-1}(x)p_{\\ell-1}(y) - p_\\ell(x)p_\\ell(y))\n , & \\text{ if } k = n, {\\ell = n-1},\\\\\n -a_{\\ell-1} p_{\\ell-2}(x)p_{\\ell-1}(y) + a_\\ell p_{\\ell-1}(x) p_{\\ell}(y)& \\\\\n -a_{\\ell-1} p_{\\ell-1}(x)p_{\\ell-2}(y) - a_\\ell p_{\\ell}(x) p_{\\ell-1}(y),\n & \\text{ if } k = n, {\\ell = n}, \\text{ or } \\\\\n 0, & \\text{ otherwise.}\n \\end{cases}\n \\]\n Observe that the matrix $[A,E^{-}(x,y)] - (x-y)E^{+}(x,y)$ is symmetric, which determines the $k > \\ell$ terms. \n \\label{prop:commutator}\n\\end{proposition}\n\\begin{proof}\n \\noindent If $1 < k < n$ and $\\ell + 1 < k$ there are no boundary effects. Hence \\eqref{eq:AM} becomes\n \\begin{align*}\n [A,E^{-}(x,y)]_{k,\\ell} = \n &(a_{k-1} p_{k-2}(x) + b_k p_{k-1}(x) + a_k p_{k}(x))p_{\\ell-1}(y) - \\\\\n &p_{k-1}(x)(a_{\\ell-1} p_{\\ell-2}(y) + b_\\ell p_{\\ell-1}(y) + a_\\ell p_{\\ell}(y)) \\\\\n =&(x-y)p_{k-1}(x)p_{\\ell-1}(y).\n \\end{align*}\n The formula additionally holds when $k = 1,$ as the orthogonal polynomial recurrence for this case holds with $a_0 = 0.$\n\n \\vspace{1cm}\n \\noindent $k = n:$ \n When $k = n$ and $\\ell + 1 < n,$ due to the border at the bottom of the matrix, we have\n \\begin{align*}\n [A,E^{-}(x,y)]_{n,\\ell} = \n &(a_{n-1} p_{n-2}(x) + b_n p_{n-1}(x) + 0 \\cdot p_{n}(x))p_{\\ell-1}(y) - \\\\\n &p_{n-1}(x)(a_{\\ell-1} p_{\\ell-2}(y) + b_\\ell p_{\\ell-1}(y) + a_\\ell p_{\\ell}(y)) \\\\\n =&(x-y)p_{n-1}(x)p_{\\ell-1}(y) - p_n(x) p_{\\ell-1}(y),\n \\end{align*}\n using that $a_n$ is taken to be $1.$ \n\n \\vspace{1cm}\n \\noindent $k = \\ell+1:$ \n When $k = \\ell+1$ and $\\ell + 1 < n,$ due to the diagonal of $E^{-}$ being $0,$ we have\n \\begin{align*}\n [A,E^{-}(x,y)]_{\\ell+1,\\ell} = \n &(0 \\cdot p_{\\ell-1}(x) + b_{\\ell+1} p_{\\ell}(x) + a_{\\ell+1} p_{\\ell+1}(x))p_{\\ell-1}(y) - \\\\\n &p_{\\ell}(x)(a_{\\ell-1} p_{\\ell-2}(y) + b_\\ell p_{\\ell-1}(y) + 0 \\cdot p_{\\ell}(y)) \\\\\n =&(x-y)p_{\\ell}(x)p_{\\ell-1}(y) - a_\\ell 
(p_{\\ell-1}(x)p_{\\ell-1}(y) - p_\\ell(x)p_\\ell(y)).\n \\end{align*}\n If $\\ell+1 = n,$ then we must subtract from the expression above for $[A,E^{-}(x,y)]_{n,n-1}$ any terms containing $a_n.$\n\n \\vspace{1cm}\n \\noindent $k = \\ell:$ \n When $k=\\ell,$ antisymmetry of $E^{-}$ plays a larger role. Taking care that the terms above the diagonal are properly accounted for, we have for $k=\\ell < n,$ \n \\begin{align*}\n [A,E^{-}(x,y)]_{\\ell,\\ell} = \n &(0 \\cdot p_{\\ell-2}(x) + 0 \\cdot p_{\\ell-1}(x) + 2 a_\\ell p_{\\ell}(x))p_{\\ell-1}(y) - \\\\\n &p_{\\ell-1}(x)(2a_{\\ell-1} p_{\\ell-2}(y) + 0 \\cdot p_{\\ell-1}(y) + 0 \\cdot p_{\\ell}(y)) \\\\\n =&(x-y)p_{\\ell-1}(x)p_{\\ell-1}(y) \n - a_{\\ell-1} p_{\\ell-2}(x)p_{\\ell-1}(y) \n + a_{\\ell} p_{\\ell}(x)p_{\\ell-1}(y) \\\\\n -&a_{\\ell-1} p_{\\ell-1}(x) p_{\\ell-2}(y)\n + a_\\ell p_{\\ell-1}(x) p_{\\ell}(y). \n \\end{align*}\n If $k=\\ell = n,$ then we must subtract from the expression above for $[A,E^{-}(x,y)]_{n,n}$ any terms containing $a_n.$\n\\end{proof}\n\nIn the case that we pick $x=\\lambda_i$ and $y=\\lambda_j$ to be eigenvalues of $A,$ the error term $\\tilde B(\\lambda_i,\\lambda_j)$ will be tridiagonal. We begin by characterizing the space of matrices whose commutator with $A$ is tridiagonal. Recall that $\\left\\{ \\lambda_i : 1 \\leq i \\leq n \\right\\}$ are the eigenvalues of $A.$ Essentially everything we do here is predicated on working with tridiagonal matrices $A$ which have nonzero offdiagonal entries. 
We call these tridiagonal matrices \\emph{nondegenerate}.\n\n\\begin{theorem}\n Suppose that $A$ is \\emph{nondegenerate} and all $\\left\\{ \\lambda_i \\right\\}$ are distinct.\n Let $V \\subset \\mathbb{M}_{n}$ be the space of antisymmetric matrices so that for all $v \\in V,$\n \\[\n [A,v] \\in \\mathbb{T}.\n \\]\n The space $V$ is spanned by $\\left\\{ E^{-}(\\lambda_i,\\lambda_i) : 1 \\leq i \\leq n\\right\\}.$ \n \\label{thm:span}\n\\end{theorem}\n\\begin{proof}\n The argument is by dimension counting. We start by showing that the dimension of $V$ is $n-1.$ Observe first that there is a linear dependence among the matrices $E^{-}(\\lambda_i,\\lambda_i)$ that follows from the orthogonality of the polynomials. For any $\\ell > r$ in $[n],$\n \\[\n \\sum_{i=1}^n q_i^2 E^{-}(\\lambda_i,\\lambda_i)_{\\ell,r} \n = \n \\sum_{i=1}^n q_i^2 p_{\\ell-1}(\\lambda_i) p_{r-1}(\\lambda_i) = 0. \n \\]\n As the diagonal of $E^{-}$ is identically $0,$ this linear combination of these matrices is identically $0.$ On the other hand, looking at the first columns of $E^{-}(\\lambda_i,\\lambda_i),$ the span of these first columns is at least $n-1$ dimensional from the linear independence of $\\left\\{ p_k : k=1,2,\\dots,n-1 \\right\\}.$\n\n Conversely, by the lattice path solution, after picking the first column of $v,$ the constraint that $[A,v] \\in \\mathbb{T}$ determines the remainder of $v$ when $A$ has nonzero offdiagonal entries. Hence, this space is at most $(n-1)$-dimensional, which completes the proof.\n\\end{proof}\n\n\\begin{remark}\n The requirement that $A$ have nonzero offdiagonal entries is necessary for the previous theorem. 
When $A$ is diagonal, for example, the commutator of any tridiagonal matrix with $A$ will be tridiagonal.\n\\end{remark}\n\n\n\\subsection{Symmetrized form}\n\nDefine the symmetrized matrix\n\\[\n H^{-}(x,y)\n = \\frac{E^{-}(x,y) - E^{-}(y,x)}{x-y}.\n\\]\nDefine $L(x,y) = [A,H^{-}(x,y)] - E^{+}(x,y) - E^{+}(y,x).$\nBy Proposition~\\ref{prop:commutator} we have that\n\\begin{equation}\n L(x,y)_{k,\\ell}\n =\\begin{cases}\n \\frac{-p_n(x)p_{\\ell-1}(y) + p_{\\ell-1}(x)p_n(y)}{x-y}, & \\text{ if } k = n, \\ell {\\leq} n-1, \\\\\n \\frac{-2p_n(x)p_{n-1}(y) + 2p_{n-1}(x)p_n(y)}{x-y}, & \\text{ if } k = n, \\ell = n, \\\\\n 0, & \\text{ otherwise.}\n \\end{cases}\n \\label{eq:L}\n\\end{equation}\nObserve that this error vanishes entirely when $x$ and $y$ are chosen to be eigenvalues of $A.$ In short, this is because the matrix $H^{-}$ has a simple expression when conjugated by $O$ (recall \\eqref{eq:COB} and \\eqref{eq:spec3})\n\\begin{equation*}\n O^t H^{-}(\\lambda_i, \\lambda_j) O = \\frac{\\delta_{i,j} - \\delta_{j,i}}{q_iq_j(\\lambda_i - \\lambda_j)},\n\\end{equation*}\nwith $\\delta_{i,j}$ the matrix with a single nonzero entry, equal to $1,$ in position $(i,j).$\nThis gives an alternative derivation of \\eqref{eq:L} for the case that $x$ and $y$ are eigenvalues by writing (and recalling \\eqref{eq:spec1})\n\\begin{equation}\n \\label{eq:H-}\n [A,H^{-}(\\lambda_i,\\lambda_j)]\n = O\\biggl[\\Lambda, \\frac{\\delta_{i,j} - \\delta_{j,i}}{q_iq_j(\\lambda_i - \\lambda_j)}\\biggr]O^t\n = O\\biggl(\\frac{\\delta_{i,j} + \\delta_{j,i}}{q_iq_j}\\biggr)O^{t}\n = {E^{+}(\\lambda_i,\\lambda_j) + E^{+}(\\lambda_j,\\lambda_i)}.\n\\end{equation}\n\nThis will prove useful in what follows, but it will also be helpful to define another preimage of $E^{+}(\\lambda_i,\\lambda_j) + E^{+}(\\lambda_j,\\lambda_i) + \\mathbb{T}$ that vanishes in the first column.\nThe matrix $H^{-}$ in its first column has $k$-th entry\n \\[\n H^{-}(\\lambda_i, \\lambda_j)_{k,1}\n 
=\\frac{p_{k-1}(\\lambda_i) - p_{k-1}(\\lambda_j)}{\\lambda_i - \\lambda_j}.\n \\]\n This is equal to the $k$-th entry of the first column of the matrix \n \\begin{equation*}\n \\frac{E^{-}(\\lambda_i,\\lambda_i) - E^{-}(\\lambda_j,\\lambda_j)}{\\lambda_i-\\lambda_j}.\n \\end{equation*}\nHence, we define\n\\begin{equation}\nH_0^{-}(x,y)\n= \\frac{E^{-}(x,y) - E^{-}(y,x)-E^{-}(x,x) + E^{-}(y,y)}{x-y},\n\\label{eq:H0}\n\\end{equation}\nwhich satisfies $[A,H_0^{-}(\\lambda_i,\\lambda_j)] - E^{+}(\\lambda_i,\\lambda_j) - E^{+}(\\lambda_j,\\lambda_i) \\in \\mathbb{T}$ and $H_0^{-} e_1=0,$ with $e_1$ the first standard basis vector.\n\nRecapitulating: we would like to solve the equation \n\\[\n [A,M] + W \\in \\mathbb{T}\n\\]\nfor $M$, where $W$ is an arbitrary symmetric matrix. If $W$ is in the span of \n\\[\n\\left\\{ {E^{+}(\\lambda_i,\\lambda_j) + E^{+}(\\lambda_j,\\lambda_i), i < j} \\right\\},\n\\]\nthen we can do this exactly without the need for a tridiagonal error. Thus it essentially remains to determine how to solve for \n\\[\n [A,M] + E^{+}(\\lambda_i,\\lambda_i) \\in \\mathbb{T}\n\\]\nfor each $1\\leq i \\leq n.$\nNoting that $O^t E^{+}(\\lambda_i,\\lambda_i) O = q_i^{-2} \\delta_{i,i},$ the combination of these matrices and \n\\[\n \\left\\{ {E^{+}(\\lambda_i,\\lambda_j) + E^{+}(\\lambda_j,\\lambda_i)}, i < j \\right\\},\n\\]\nclearly span all symmetric matrices, and so we will have a complete solution.\n\n\nUsing this discussion, we can give a quick proof of Theorem \\ref{thm:tri2}.\n\\begin{proof}[Proof of Theorem~\\ref{thm:tri2}]\n We recall the setup for the theorem. Suppose that $\\MP{W}$ is a symmetric matrix diffusion that satisfies \n \\[\n d W_t = \\sqrt{\\tfrac{2}{\\beta}} O_t dB_t O_t^t + O_t dZ_t O_t^t,\n \\] \n where $\\MP{O}$ is the eigenvector matrix of $\\MP{W}$. 
Let $\\MP{\\Lambda}$ be the eigenvalue matrix of $\\MP{W}$; we check that \n \\begin{align*}\n d \\Lambda_t &= O_t^t dW_t O_t + [\\Lambda_t, dN_t] + \\frac{1}{2} [ O_t^t dW_t O_t + d\\Lambda_t, dN_t]\\\\\n & =\\sqrt{\\tfrac{2}{\\beta}}dB_t+dZ_t + [\\Lambda_t, dN_t] + \\frac{1}{2} [ \\sqrt{\\tfrac{2}{\\beta}}dB_t+dZ_t+ d\\Lambda_t, dN_t]\n \\end{align*}\n where $N_t$ is chosen so that $dN_{i,j,t}(\\lambda_i-\\lambda_j) = -dZ_{i,j,t},$ which gives perfect cancellation of the $dZ_t$ term. This induces the evolution of $\\MP{O}$ and so the evolution of $\\MP{W}$. By independence the previous equation simplifies further to \n \\[\n d \\Lambda_t =\\sqrt{\\tfrac{2}{\\beta}}dB_t+ \\frac{1}{2} [ dZ_t, dN_t].\n \\]\n Further\n \\[\n [dZ_t, dN_t ]_{i,i}\n = \\sum_{j \\neq i} \\frac{\n dZ_{i,j,t}dZ_{j,i,t}\n }{\\lambda_{i,t}-\\lambda_{j,t}}\n +\\sum_{j \\neq i} \\frac{\n dZ_{i,j,t}dZ_{j,i,t}\n }{\\lambda_{i,t}-\\lambda_{j,t}}\n = \\sum_{j \\neq i} \\frac{\n 2dt}{\\lambda_{i,t}-\\lambda_{j,t}}.\n \\]\n This gives us that $\\Lambda$ evolves according to a Dyson Brownian motion with zero potential. To get the conclusion of the theorem we can take any tridiagonal evolution of $\\MP{W}$ and the diffusion equation follows from Theorem \\ref{thm:dysoniso}.\n\n\\end{proof}\n\n\\subsection{Confluent form}\n\nGoing forward, we will essentially always specialize $x$ and $y$ to be eigenvalues. For eigenvalues the matrix $\\tilde B$ simplifies. 
So, we define\n\\begin{equation}\n B(x,y)_{k,\\ell}\n =\\begin{cases}\n -a_{\\ell-1} p_{\\ell-2}(x)p_{\\ell-1}(y) + a_\\ell p_{\\ell-1}(x) p_{\\ell}(y)& \\\\\n -a_{\\ell-1} p_{\\ell-1}(x)p_{\\ell-2}(y) + a_\\ell p_{\\ell}(x) p_{\\ell-1}(y), & \\text{ if } k = \\ell, \\\\\n -a_\\ell (p_{\\ell-1}(x)p_{\\ell-1}(y) - p_\\ell(x)p_\\ell(y)), & \\text{ if } k = \\ell + 1,\\\\\n 0, & \\text{ otherwise,}\n \\end{cases}\n \\label{eq:B}\n\\end{equation}\nand observe that for eigenvalues $\\lambda_1$ and $\\lambda_2,$ $B(\\lambda_1,\\lambda_2) = \\tilde B(\\lambda_1,\\lambda_2).$\n\nOne possible choice for $x$ and $y$ is to take them equal. Indeed, by symmetrizing the $E^{-},$ dividing by $x-y$ and sending $x \\to y$ we are also led to the existence of an approximate preimage to $E^{+}(y,y)$ that involves derivatives of $E^{-}.$ A more direct identity is possible simply by differentiating $E^{-}.$ Observe that\n\\[\n \\partial_y [A,E^{-}(x,y)] = [A, \\partial_y E^{-}(x,y)],\n\\]\nby virtue of $A$ having no $y$ dependence. 
Hence,\n\\[\n [A, \\partial_y E^-(x,y)] \n = \\partial_y [A, E^-(x,y)]\n = -E^+(x,y) + (x-y)\\partial_y E^+(x,y) + \\partial_y\\tilde B(x,y).\n\\]\nThis operation can be iterated to produce a basis of derivatives, by using\n\\[\n [A, \\partial_x^k \\partial_y^\\ell E^{-}(x,y)]\n =\\partial_x^k \\partial_y^\\ell \\left( (x-y)E^{+}(x,y) + \\tilde B(x,y) \\right),\n\\]\nbut we do not pursue this further.\nIf we define $F(y) = -\\partial_y E^{-}(x,y) \\vert_{x=y},$ then\n\\begin{equation}\n [A,F(y)] = E^{+}(y,y) - \\partial_y \\tilde B(x,y) \\vert_{x=y}.\n \\label{eq:F}\n\\end{equation}\nWe write some entries of this error term:\n\\[\n -\\partial_y \\tilde B(x,y) \\vert_{x=y}\n =\\begin{cases}\n a_{\\ell-1} p_{\\ell-2}(y)p_{\\ell-1}'(y) - a_\\ell p_{\\ell-1}(y) p_{\\ell}'(y)& \\\\\n +a_{\\ell-1} p_{\\ell-1}(y)p_{\\ell-2}'(y) - a_\\ell p_{\\ell}(y) p_{\\ell-1}'(y), & \\text{ if } k = \\ell < n, \\\\\n a_\\ell (p_{\\ell-1}(y)p_{\\ell-1}'(y) - p_\\ell(y)p_\\ell'(y)), & \\text{ if } k = \\ell + 1 < n,\\\\\n p_n(y)p_{\\ell-1}'(y), & \\text{ if } k = n, \\ell < n-1, \\\\\n \\text{other}, & \\text{ if } k = n, {\\ell \\geq n-1}, \\text{ or } \\\\\n 0, & \\text{ otherwise.}\n \\end{cases}\n\\]\n\nSpecializing to $x=y=\\lambda_i$ for a zero $\\lambda_i $ of $p_n$ will be especially useful here, as in this case the boundary term above is tridiagonal. In addition, in this case, the formulas for the tridiagonal that hold in the middle of the matrix also hold in the final entries. 
Otherwise said, at an eigenvalue $\\lambda_i$ we define\n\\begin{equation}\n \\label{eq:G}\n G(\\lambda_i)\n =\n -\\partial_y \\tilde B(x,y) \\vert_{x=y=\\lambda_i}\n = \n -\\partial_y B(x,y)\\vert_{x=y=\\lambda_i} \n = \n -\\frac{1}{2}\\partial_y B(y,y)\\vert_{y=\\lambda_i}, \n\\end{equation}\nand we define $G(y) = -\\frac{1}{2}\\partial_y B(y,y)$ for general $y.$\n\n\n\n\\subsection{Difference quotients and minor polynomials}\n\\label{sec:minors}\n\nDefine the difference quotient operator of a polynomial as\n\\[\n (\\Delta^y f)(x) = \\frac{f(x)-f(y)}{x-y}.\n\\]\nWe always consider $\\Delta^y$ as a map on polynomials. In particular, for $x=y,$ the previous definition extends by continuity.\nFor a multivariate polynomial, let $\\Delta^y_x$ denote the corresponding partial difference quotient in the $x$ variable at $y.$\n\nFor any polynomial $p,$ the function \n\\[\n \\Delta^y p(x) = \\frac{p(x)-p(y)}{x-y}\n\\]\nis a polynomial in two variables, which as a polynomial in $x$ has degree one less than $p.$ By continuity, it takes the value $p'(x)$ at $y=x.$\nWith this in mind, we define the polynomials for any $0 \\leq k,\\ell \\leq n-1,$\n\\begin{equation}\n p^{(k)}_{\\ell}(x) = \\sum_{j=1}^n q_j^2 \n p_k(\\lambda_j)\n \\frac{p_\\ell(x)-p_\\ell(\\lambda_j)}{x-\\lambda_j},\n \\label{eq:minor}\n\\end{equation}\nfor $x$ not in the spectrum. 
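The definition \eqref{eq:minor} lends itself to a direct numerical sanity check. The sketch below is illustrative only: the small random Jacobi matrix, the tolerances, and all variable names are our own choices. It evaluates the $p_\ell$ by the three-term recurrence, recovers the weights $q_j^2$ from the normalization of the eigenvectors, and confirms the quadrature orthogonality $\sum_j q_j^2 p_k(\lambda_j) p_\ell(\lambda_j) = \delta_{k\ell}$, the vanishing of $p^{(k)}_\ell$ for $\ell \leq k$, and $p^{(\ell)}_{\ell+1} = 1/a_{\ell+1}$ (facts derived in the text just below).

```python
import numpy as np

# Arbitrary small Jacobi (symmetric tridiagonal) matrix: b_k on the
# diagonal, a_k on the off-diagonal (1-based labels, as in the text).
n = 6
rng = np.random.default_rng(1)
b = rng.normal(size=n)
a = rng.uniform(0.5, 1.5, size=n - 1)
A = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
lam = np.linalg.eigvalsh(A)

aa = np.append(a, 1.0)  # convention: a_n = 1

def p(x, m):
    """[p_0(x), ..., p_m(x)] via x p_{k-1} = a_{k-1} p_{k-2} + b_k p_{k-1} + a_k p_k."""
    vals = np.empty(m + 1)
    vals[0] = 1.0
    if m >= 1:
        vals[1] = (x - b[0]) / aa[0]
    for k in range(2, m + 1):
        vals[k] = ((x - b[k - 1]) * vals[k - 1] - aa[k - 2] * vals[k - 2]) / aa[k - 1]
    return vals

P = np.array([p(l, n - 1) for l in lam])   # P[j, k] = p_k(lambda_j)
q2 = 1.0 / (P ** 2).sum(axis=1)            # q_j^2 from eigenvector normalization

# Gaussian-quadrature orthogonality: sum_j q_j^2 p_k(lambda_j) p_l(lambda_j) = delta_{kl}.
assert np.allclose(P.T @ np.diag(q2) @ P, np.eye(n), atol=1e-8)

def minor(k, ell, x):
    """p_ell^{(k)}(x) from the spectral formula (eq:minor)."""
    px = p(x, ell)[-1]
    return sum(q2[j] * P[j, k] * (px - P[j, ell]) / (x - lam[j]) for j in range(n))

x0 = lam.max() + 0.7                       # a point off the spectrum
for k in range(n):
    for ell in range(k + 1):               # p_ell^{(k)} = 0 whenever ell <= k
        assert abs(minor(k, ell, x0)) < 1e-8
for ell in range(n - 1):                   # p_{ell+1}^{(ell)} = 1/a_{ell+1}
    assert abs(minor(ell, ell + 1, x0) - 1.0 / aa[ell]) < 1e-8
```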
Observe that if $\\ell \\leq k,$ then by orthogonality, this polynomial is $0.$ On the other hand, observe that\n\\begin{equation*}\n \\begin{aligned}\n xp^{(k)}_{\\ell}(x) \n &= \n \\sum_{j=1}^n q_j^2 \n p_k(\\lambda_j)\n \\frac{xp_\\ell(x)-xp_\\ell(\\lambda_j)}{x-\\lambda_j} \\\\\n &= \n \\sum_{j=1}^n q_j^2 \n p_k(\\lambda_j)\n \\biggl\\{\n \\frac{xp_\\ell(x)-\\lambda_j p_\\ell(\\lambda_j)}{x-\\lambda_j} \n -p_\\ell(\\lambda_j)\n \\biggr\\}\n \\\\\n &=\n a_{\\ell+1}p^{(k)}_{\\ell+1}(x) \n +b_{\\ell+1}p^{(k)}_{\\ell}(x) \n +a_{\\ell}p^{(k)}_{\\ell-1}(x) \n -\\delta_{k,\\ell}.\n \\end{aligned}\n\\end{equation*}\nThis is to say that these polynomials satisfy the same $3$-term recurrence as $\\left\\{ p_\\ell \\right\\}_{\\ell}$ but started from different initial conditions. This also implies that \n\\[\n \\deg(p^{(k)}_{\\ell}) = (\\ell-k-1)_{+} \\quad \\text{and} \\quad p_{\\ell+1}^{(\\ell)}(x) = \\frac{1}{a_{\\ell+1}}.\n\\]\nOtherwise stated, they are multiples of the orthogonal polynomials associated to the lower-right principal submatrix of $A$ of size $n-k-1.$ \nFurthermore, we have the identity\n\\[\n \\Delta^y p_\\ell(x)\n =\n \\frac{p_\\ell(x)-p_\\ell(y)}{x-y}\n =\n \\sum_{k = 1}^\\ell p^{(k-1)}_{\\ell}(x) p_{k-1}(y),\n\\]\nwhich by continuity specializes to \n\\[\n p_\\ell'(x) = \\sum_{k = 1}^\\ell p^{(k-1)}_{\\ell}(x) p_{k-1}(x).\n\\]\n\nWe also observe that\n\\begin{equation}\n \\sum_{k = 1}^\\ell p^{(k-1)}_{\\ell}(x) p_{k-1}'(y)\n =\n \\partial_y \\Delta^y p_\\ell(x)\n =\n \\frac{(p_\\ell(x)-p_\\ell(y))-(x-y)p_\\ell'(y)}{(x-y)^2}\n = \\Delta^y \\Delta^y p_\\ell(x),\n \\label{eq:secondderiv}\n\\end{equation}\nwhich by continuity specializes to $\\frac{1}{2}p_\\ell''(y)$ when $x=y.$\n\n\\subsubsection*{Approximating the minor polynomials}\n\nBecause the minor polynomials satisfy the same recurrence as the original polynomials, it is possible to formulate a mixed Christoffel--Darboux formula using the two sets of polynomials, namely for $r < 
\\ell$\n\\begin{equation}\n \\sum_{k=1}^{\\ell} p_{k-1}(x) p_{k-1}^{(r)}(y) (x-y) = \n a_{\\ell} \\left( \n p_{\\ell}(x) p_{\\ell-1}^{(r)}(y)\n -p_{\\ell-1}(x) p_{\\ell}^{(r)}(y) \\right) + p_{r}(x),\n \\label{eq:CDmixed}\n\\end{equation}\nwhere the extra $p_r(x)$ results from the defect in the recurrence for the minor polynomial in the bottom term. Note that on taking $x=y$ in this formula, we are left with\n\\begin{equation}\np_{\\ell-1}(x) p_{\\ell}^{(r)}(x)\n-\np_{\\ell}(x) p_{\\ell-1}^{(r)}(x)\n=\\frac{p_{r}(x)}{a_\\ell},\n \\label{eq:Wronskian}\n\\end{equation}\nwhich shows that the pair of sequences $(p_\\ell)_{\\ell}$ and $(p_\\ell^{(0)})_{\\ell}$ forms a fundamental set of solutions to the three--term recurrence, from which any initial conditions can be matched. In particular, \n\\begin{equation}\n \\label{eq:MinorIdentity}\n p^{(j)}_\\ell(x) = \n \\begin{cases}\n p_j(x) p_\\ell^{(0)}(x)\n - \n p_\\ell(x)p_j^{(0)}(x), & j \\leq \\ell, \\\\\n 0, & \\text{otherwise,}\n\\end{cases}\n\\end{equation}\nobserving that the $3$--term--recurrence is trivially satisfied, and that the initial conditions at $\\ell=j$ and $\\ell=j+1$ are satisfied. \nThis identity also shows that $\\{p_{\\ell}^{(j)}\\}$ satisfies a recurrence in $j$ as well, specifically that for $j < \\ell$\n\\begin{equation}\n \\label{eq:DualRecurrence}\n x p^{(j)}_\\ell(x) = \n a_{j+1}p^{(j+1)}_\\ell(x) \n +b_{j+1}p^{(j)}_\\ell(x) \n +a_{j}p^{(j-1)}_\\ell(x). \n\\end{equation}\nThis formula is also true for $j=0,$ taking $a_0 = 1$ and $p^{(-1)}_\\ell(x) = p_\\ell(x),$ as it is equivalent to \\eqref{eq:MinorIdentity} with $j=1.$\n\n\\subsubsection*{Perturbation theory for orthogonal polynomials}\n\nThe dual recurrence \\eqref{eq:DualRecurrence} is used in \\cite{Geronimo} to create a perturbation theory for orthogonal polynomials, one which shows that when two families of orthogonal polynomials have similar recurrence coefficients, they can be compared. We present this perturbation theory here. 
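Before doing so, we note that \eqref{eq:Wronskian} and \eqref{eq:MinorIdentity} admit a direct numerical check. The sketch below is an illustration only (an arbitrary small Jacobi matrix; all names are ours): it evaluates the minor polynomials from the spectral formula \eqref{eq:minor} and tests both identities at a point off the spectrum.

```python
import numpy as np

# Arbitrary small Jacobi matrix (b_k diagonal, a_k off-diagonal, 1-based labels).
n = 6
rng = np.random.default_rng(1)
b = rng.normal(size=n)
a = rng.uniform(0.5, 1.5, size=n - 1)
lam = np.linalg.eigvalsh(np.diag(b) + np.diag(a, 1) + np.diag(a, -1))
aa = np.append(a, 1.0)  # convention: a_n = 1

def p(x, m):
    """[p_0(x), ..., p_m(x)] by the three-term recurrence."""
    vals = [1.0, (x - b[0]) / aa[0]]
    for k in range(2, m + 1):
        vals.append(((x - b[k - 1]) * vals[-1] - aa[k - 2] * vals[-2]) / aa[k - 1])
    return vals[: m + 1]

P = np.array([p(l, n - 1) for l in lam])   # P[j, k] = p_k(lambda_j)
q2 = 1.0 / (P ** 2).sum(axis=1)            # spectral weights q_j^2

def minor(k, ell, x):
    """p_ell^{(k)}(x) from the spectral formula (eq:minor)."""
    px = p(x, ell)[-1]
    return sum(q2[j] * P[j, k] * (px - P[j, ell]) / (x - lam[j]) for j in range(n))

x0 = lam.max() + 0.7                       # a point off the spectrum
px = p(x0, n - 1)

# Wronskian relation (eq:Wronskian): p_{l-1} p_l^{(r)} - p_l p_{l-1}^{(r)} = p_r / a_l.
for ell in range(1, n):
    for r in range(ell):
        w = px[ell - 1] * minor(r, ell, x0) - px[ell] * minor(r, ell - 1, x0)
        assert abs(w - px[r] / aa[ell - 1]) < 1e-6

# (eq:MinorIdentity): p_l^{(j)} = p_j p_l^{(0)} - p_l p_j^{(0)} for j <= l.
for j in range(n):
    for ell in range(j, n):
        lhs = minor(j, ell, x0)
        rhs = px[j] * minor(0, ell, x0) - px[ell] * minor(0, j, x0)
        assert abs(lhs - rhs) < 1e-6
```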
\nSuppose that $\\{\\hat p_j(x)\\}$ are defined by\n\\begin{align*}\n\\hat b_1 \\hat p_0(x)+ \\hat a_1 \\hat p_1(x) & =x \\hat p_0(x)\\\\\n\\hat a_{j} \\hat p_{j-1}(x) + \\hat b_{j+1} \\hat p_{j}(x) + \\hat a_{j+1} \\hat p_{j+1}(x) & =x \\hat p_{j}(x),\n\\end{align*}\nfor $j = 1,2,3,\\dots.$ If we multiply \\eqref{eq:DualRecurrence} through by $\\hat p_j$ and subtract the previous equation multiplied by $p^{(j)}_\\ell(x),$ we get\n\\[\n 0 =\n (a_{j+1}p^{(j+1)}_\\ell(x) \n +b_{j+1}p^{(j)}_\\ell(x) \n +a_{j}p^{(j-1)}_\\ell(x))\\hat p_j(x)\n -\n (\\hat a_{j} \\hat p_{j-1}(x) + \\hat b_{j+1} \\hat p_{j}(x) + \\hat a_{j+1} \\hat p_{j+1}(x)\n )p^{(j)}_\\ell(x). \n\\]\nHence, if we sum this identity for $j=0,1,\\dots,\\ell-1,$ we get\n\\begin{align*}\n 0\n &=\n \\sum_{j=0}^{\\ell-1} (b_{j+1} - \\hat b_{j+1})p^{(j)}_\\ell(x)\\hat p_j(x) \n +\n \\sum_{j=0}^{\\ell-2} (a_{j+1} - \\hat a_{j+1})\n (\n p^{(j+1)}_\\ell(x)\\hat p_j(x) \n +p^{(j)}_\\ell(x)\\hat p_{j+1}(x) \n ) \\\\\n &+a_0 p^{(-1)}_\\ell(x)\\hat p_0(x)-\\hat a_\\ell \\hat p_\\ell(x) p_\\ell^{(\\ell-1)}(x).\n\\end{align*}\nIn summary,\n\\begin{equation}\n \\begin{aligned}\n \\frac{\\hat a_\\ell}{a_\\ell} \\hat p_\\ell(x)\n &=\n p_\\ell(x)\\hat p_0(x) \\\\\n &+\n \\sum_{j=1}^{\\ell} (b_{j} - \\hat b_{j})p^{(j-1)}_\\ell(x)\\hat p_{j-1}(x) \n \\\\\n &+\n \\sum_{j=1}^{\\ell-1} (a_{j} - \\hat a_{j})\n (\n p^{(j)}_\\ell(x)\\hat p_{j-1}(x) \n +p^{(j-1)}_\\ell(x)\\hat p_{j}(x) \n ). 
\n\\end{aligned}\n \\label{eq:Perturb}\n\\end{equation}\nHere we take $\\hat{p}_j(x) = p_{j+1}^{(0)}(x),$ so that $\\hat a_j = a_{j+1},$ $\\hat b_j = b_{j+1},$ and $\\hat p_0(x) = \\frac{1}{a_1}.$ Specializing these formulas, we get\n\\begin{equation}\n \\begin{aligned}\n p^{(0)}_{\\ell+1}(x)\n &=\n \\frac{a_\\ell}{a_{\\ell+1}a_1}p_\\ell(x) \\\\\n &+\n \\frac{a_\\ell}{a_{\\ell+1}}\\sum_{j=1}^{\\ell} (b_{j} - b_{j+1})p^{(j-1)}_\\ell(x)p_{j}^{(0)}(x) \n \\\\\n &+\n \\frac{a_\\ell}{a_{\\ell+1}}\\sum_{j=1}^{\\ell-1} (a_{j} - a_{j+1})\n (\n p^{(j)}_\\ell(x)p_{j}^{(0)}(x) \n +p^{(j-1)}_\\ell(x)p_{j+1}^{(0)}(x) \n ). \n\\end{aligned}\n \\label{eq:MinorPerturb}\n\\end{equation}\n\n\n\n\\subsection{Summary}\n\\label{sec:summary}\n\nThe summary of the previous discussion is that we have a complete description of the solutions of the commutator equation \\eqref{eq:comm} in terms of the orthogonal polynomials associated to $A.$\n\\begin{theorem}\n A solution to the commutator equation\n \\[\n [A,M] - W \\in \\mathbb{T}\n \\]\n for antisymmetric $M$ in terms of a symmetric $W$ is given by $M = M_1 + M_2$ where\n \\begin{align*}\n M_1 &= \\sum_{i=1}^n (O^t W O)_{i,i} q_i^2 F(\\lambda_i), \\text{ and} \\\\\n M_2 &= \\sum_{i > j}^n (O^t W O)_{i,j}q_iq_j H_0^{-}(\\lambda_i,\\lambda_j).\n \\end{align*}\n Moreover, this is the unique $M$ that vanishes in the first column.\n The tridiagonal error is given by\n \\begin{equation*}\n \\begin{aligned}\n [A,M]-W &= \n \\sum_{i=1}^n (O^t W O)_{i,i} q_i^2 G(\\lambda_i)\n -\\sum_{i > j}^n (O^t W O)_{i,j} q_iq_j\n \\frac{B(\\lambda_i,\\lambda_i) - B(\\lambda_j,\\lambda_j)}{\\lambda_i-\\lambda_j} \\\\\n &=\n -\\frac{1}{2} \n \\sum_{i,j=1}^n\n (O^t W O)_{i,j} q_iq_j\n \\Delta^{\\lambda_j}_{\\lambda_i} B(\\lambda_i,\\lambda_i).\n \\end{aligned}\n \\end{equation*}\n All other solutions to the equation differ from this one by elements of\n \\[\n \\operatorname{Span}\\left\\{ E^{-}(\\lambda_i,\\lambda_i), i=1,2,\\dots,n \\right\\}.\n \\]\n 
\\label{thm:comm}\n\\end{theorem}\n\\begin{proof}\n From \\eqref{eq:spec1} and \\eqref{eq:spec2} we can expand $W$ as\n \\begin{align*}\n W &= \\sum_{i=1}^n (O^t W O)_{i,i} \\cdot q_i^2 E^{+}(\\lambda_i,\\lambda_i)\\\\\n &+ \\sum_{i > j}^n (O^t W O)_{i,j} \\cdot q_iq_j \\cdot\n \\left\\{ \n E^{+}(\\lambda_i,\\lambda_j) + E^{+}(\\lambda_j,\\lambda_i)\n \\right\\}.\n \\end{align*}\n Applying \\eqref{eq:H-} and \\eqref{eq:F}, the theorem now follows. Theorem~\\ref{thm:span} shows that any other solution to the equation differs from this one as desired.\n The uniqueness of $M$ follows from the invertibility of $O,$ i.e.\\,the linear independence of $\\left\\{ (p_{k-1}(\\lambda_i))_k, i=1,\\dots,n \\right\\},$ which is what appears in the first column of $E^{-}(\\lambda_i,\\lambda_i)$ for $i=1,\\dots,n.$ \n\\end{proof}\n\nIf we restrict to the case that $W$ is tridiagonal itself, this leads to a family of orthogonality rules which have quartic dependence on the orthogonal polynomials. \n\\begin{corollary}\n \\label{cor:orth}\n For a symmetric tridiagonal matrix $W,$\n \\[\n W + \n \\sum_{i=1}^n (O^t W O)_{i,i} q_i^2 G(\\lambda_i)\n =\n \\sum_{i > j}^n (O^t W O)_{i,j} q_iq_j\n \\frac{B(\\lambda_i,\\lambda_i) - B(\\lambda_j,\\lambda_j)}{\\lambda_i-\\lambda_j}.\n \\]\n\\end{corollary}\n\\begin{proof}\n Suppose that $W$ is tridiagonal. Then on the one hand, $M=M_1+M_2,$ given in the theorem, is one solution to\n \\[\n [A,M] - W \\in \\mathbb{T}.\n \\]\n A second solution is given by $M=0.$ As both vanish in the first column, we have that $M_1+M_2 = 0.$ \n\\end{proof}\n\nWe can also use Theorem~\\ref{thm:comm} to make a connection to the minor polynomials. Suppose we wish to solve the equation for a matrix $W$ that is supported in a single entry.\n\\begin{corollary}\n Let $W^{k,\\ell}$ be the matrix that is $1$ in the $(k,\\ell)$ and $(\\ell,k)$ entries, for $k > \\ell$, and $0$ elsewhere. 
\n \n \n The matrix $M^{k,\\ell}$ with $0$ in the first column so that $[A,M^{k,\\ell}] \\in W^{k,\\ell} + \\mathbb{T}$ is given by\n \\[\n \\begin{aligned}\n M\n &=\n -\\frac{1}{2}\\sum_{i,j=1}^n\n q_i^2\n q_j^2\n p_{k-1}(\\lambda_i)\n p_{\\ell-1}(\\lambda_j)\n \\Delta_{\\lambda_j}^{\\lambda_i} E^{-}(\\lambda_i,\\lambda_j), \\\\\n M_{u,r}\n &=\n -\\frac{1}{2}\\sum_{i}\n q_i^2\n p_{u-1}(\\lambda_i)\n p_{k-1}(\\lambda_i)p^{(\\ell-1)}_{r-1}(\\lambda_i),\n \\end{aligned}\n \\]\n for any $u > r.$\n When $k =\\ell+1,$ this matrix is identically $0.$ We also define $M = 0$ if $k = \\ell.$ \n For other $k > \\ell+1,$ the entry $M_{u,r}$ vanishes when $u-r \\geq k-\\ell,$ and it vanishes when $u+r \\leq k+\\ell.$\n The tridiagonal matrix $G^{k,\\ell} = [A,M^{k,\\ell}] - W^{k,\\ell}$ is given by \n \n \\[\n {G}^{k,\\ell}\n =\n -\n \\frac{1}{2}\\sum_{i,j=1}^n\n q_i^2\n q_j^2\n p_{k-1}(\\lambda_i)\n p_{\\ell-1}(\\lambda_j)\n \\Delta_{\\lambda_j}^{\\lambda_i} B(\\lambda_i,\\lambda_j).\n \\]\n When $k = \\ell+1,$ this matrix is equal to $W^{k,\\ell},$ and when $k=\\ell,$ this matrix is $0$ everywhere except in the $(k,k)$ entry where it is $1.$\n \\label{cor:entry}\n\\end{corollary}\n\\begin{proof}\n We expand the $(u,r)$ entry of $M=M_1+M_2$ for $u\\geq r$ from Theorem~\\ref{thm:comm}, which gives\n \\[\n M_{u,r}\n =\n -\\frac{1}{2}\\sum_{i,j}\n p_{k-1}(\\lambda_i)\n p_{\\ell-1}(\\lambda_j)\n q_i^2 q_j^2\n \\left( p_{u-1}(\\lambda_i) + p_{u-1}(\\lambda_j) \\right)\n \\left\\{ \n \\frac{p_{r-1}(\\lambda_i) - p_{r-1}(\\lambda_j)}{\\lambda_i-\\lambda_j}\n \\right\\},\n \\]\n where the $i=j$ terms are defined by continuity.\n In terms of the minor polynomials, this can be written as\n \\[\n \\begin{aligned}\n M_{u,r}=\n &-\\frac{1}{2}\\sum_{i}\n q_i^2\n p_{k-1}(\\lambda_i) \n p_{u-1}(\\lambda_i)\n p^{(\\ell-1)}_{r-1}(\\lambda_i) \\\\\n &\n -\\frac{1}{2}\\sum_{j}\n q_j^2\n p_{\\ell-1}(\\lambda_j) \n p_{u-1}(\\lambda_j)\n p^{(k-1)}_{r-1}(\\lambda_j).\n \\end{aligned}\n \\]\n As $k > \\ell$ and $u
> r,$ the second sum vanishes in all cases as $p_{\\ell-1} p^{(k-1)}_{r-1}$ has degree at most $r-1 < u - 1.$\n To see the other vanishing conditions, if $u$ is too large, then $p_{u-1}$ is orthogonal to all polynomials of strictly smaller degree. On the other hand, if $u+r$ is too small, then by the orthogonality of $p_{k-1}$ to lower degree polynomials, the first sum vanishes. \n\\end{proof}\n\n\\begin{remark}\n It is also possible to express the matrix $G^{k,\\ell}$ in terms of minor polynomials:\n \\[\n G^{k,\\ell}_{u,r}\n =\n \\frac{1}{2}\\sum_{i=1}^n\n q_i^2\n p_{k-1}(\\lambda_i)\n \\begin{cases}\n a_{r-1} p_{r-2}(\\lambda_i)p_{r-1}^{(\\ell-1)}(\\lambda_i) - a_r p_{r-1}(\\lambda_i) p_{r}^{(\\ell-1)}(\\lambda_i)& \\\\\n +a_{r-1} p_{r-1}(\\lambda_i)p_{r-2}^{(\\ell-1)}(\\lambda_i) - a_r p_{r}(\\lambda_i) p_{r-1}^{(\\ell-1)}(\\lambda_i), & \\text{ if } u = r < n, \\\\\n a_r (p_{r-1}(\\lambda_i)p_{r-1}^{(\\ell-1)}(\\lambda_i) - p_r(\\lambda_i)p_r^{(\\ell-1)}(\\lambda_i)), & \\text{ if } u = r + 1 < n,\\\\\n 0, & \\text{ otherwise.}\n \\end{cases}\n\\]\n\\label{rem:Gkl}\nThis characterization gives the result in Theorem \\ref{thm:lanczos}.\n\\end{remark}\n\nThis leads us directly to a characterization of all differentials $dM_t$ that produce tridiagonal models.\n\n\n\\section{The Tridiagonal Models}\n\\label{sec:models}\n\nThe first characterization of the tridiagonal models associated to Dyson Brownian motion is a direct consequence of the work in the previous section. The following theorem gives the martingale structure of the model. 
\n\n\\begin{theorem}\n\\label{thm:martingales}\n The rotation differentials $dM_t$ of Dyson Brownian Motion $(\\lambda_{i,t}, 1 \\leq i \\leq n)$ that produce \\emph{nondegenerate} tridiagonal models are all of the form\n \\[\n dM_t = \\sum_{i=1}^n \\left\\{q_{i,t}^2 d\\lambda_{i,t} F(\\lambda_{i,t}) + c_i E^{-}(\\lambda_{i,t},\\lambda_{i,t})\\right\\} + dM^{(2)}_t\n \\]\n for some choice of coefficients $c_i$ and some finite variation process $dM^{(2)}_t.$ In particular, the martingale portion of $dA_t$ has the form\n \\[\n \\sum_{i=1}^n \n \n \\biggl\\{\n -\n dZ_{i,t}\n \\cdot q_{i,t}^2 \\cdot G(\\lambda_{i,t})\n +\n dX_{i,t}\n B(\\lambda_{i,t},\\lambda_{i,t})\n \\biggr\\}\n \\]\n where $Z_{i,t}$ are the Brownian motions driving the Dyson Brownian motion and $X_{i,t}$ are some martingales. For the definitions of the matrices $F, E^{-}, G,$ and $B$ see the equations \\textup{(\\ref{eq:F}), (\\ref{eq:E}), (\\ref{eq:G})}, and \\textup{(\\ref{eq:B})} respectively.\n \\label{cor:trim}\n\\end{theorem}\nThere is a great deal of freedom in the general model. We specialize to two cases before writing down explicit finite variation processes, though this can be done for the general model. The two cases we study are the model where the spectral weights $q_i$ are frozen, and the model where they evolve according to a finite variation process. \n\n\n\n\n\\subsection{The frozen spectral weight model}\n\\label{sec:tridiagonalfrozen}\n\nHaving the martingale portions of the SDE, we can then turn to solving for the finite variation portions of $dA_t.$ These second--order terms depend on the choices made in the martingale terms. 
One possible choice is to make $dM_t$ have $0$ in the first column, which will lead to the frozen spectral weight model (where the spectral weights are constant in time).\n\n\n\n\\begin{theorem}\n\\label{thm:tridiagonalfrozen}\nLet the $q_i$ be fixed; then the tridiagonal model associated to the $\\beta$-Dyson flow satisfies \n\\begin{equation}\n \\begin{aligned}\n dA_t &= \\sum_{i=1}^n -\\biggl\\{\\sqrt{\\frac{2}{\\beta}} dZ_{i,t} + dt \\cdot \\biggl\\{-\\frac{V'(\\lambda_{i,t})}{2} + \\sum_{j \\neq i}\\frac{1}{\\lambda_{i,t} - \\lambda_{j,t}} \\biggr\\} \\biggr\\} \\cdot q_i^2 \\cdot G(\\lambda_{i,t})\n + \\sum_{k \\geq \\ell} dP_{k,\\ell,t} \\cdot G^{k,\\ell} \\\\\n dP_t &= \\sum_{i=1}^n -\\frac{q_{i}^4}{2} \\cdot [ -E^{+}(\\lambda_{i,t},\\lambda_{i,t}) + G(\\lambda_{i,t}), F(\\lambda_{i,t})] \\cdot dt.\n \\end{aligned}\n \\label{eq:frozen}\n\\end{equation}\nMoreover, if $\\Lambda$ is a stationary process, then $A$ will be stationary in the frozen spectral weight model. The definitions of $G, E^+, F$ and $G^{k,\\ell}$ may be found in \\textup{(\\ref{eq:G}), (\\ref{eq:E}), (\\ref{eq:F})} and Remark \\ref{rem:Gkl} respectively.\n\\end{theorem}\n\n\\begin{corollary}\nIf we take $A_0$ to have the same distribution as the tridiagonal model in (\\ref{eq:tridiagonal}) and $\\Lambda$ satisfies the $\\beta$-Dyson Brownian motion flow with $V(x)= 2x^2,$ then $A_t$ will have the same distribution as (\\ref{eq:tridiagonal}).\n\\end{corollary}\n\nRecall that $dA_t$ satisfies\n\\[\n dA_t = dW_t + [A_t, dM_t] + \\frac{1}{2}[2dW_t + [A_t,dM_t], dM_t]\n\\]\nwhere $dW_t = \\sum_{i=1}^n d\\lambda_{i,t} q_i^2 E^{+}(\\lambda_i,\\lambda_i).$ \nDefine\n\\begin{equation}\n \\label{eq:frozenM}\n \\begin{aligned}\n dM^{(1)}_t &= \\sum_{i=1}^n -\\sqrt{\\frac{2}{\\beta}}dZ_{i,t} \\cdot q_i^2 \\cdot F(\\lambda_{i,t}), \\\\\n dM^{(2)}_t &= \\sum_{i=1}^n -dt \\cdot q_i^2 \\cdot F(\\lambda_i)\\cdot \\biggl\\{-\\frac{V'(\\lambda_{i,t})}{2} + \\sum_{j \\neq i}\\frac{1}{\\lambda_{i,t} - 
\\lambda_{j,t}} \\biggr\\}, \\\\\n dM^{(3)}_t &= \\sum_{k \\geq \\ell} -\\frac{1}{2}[2dW_t+[A_t,dM^{(1)}_t], dM^{(1)}_t]_{k,\\ell} \\cdot M^{k,\\ell},\n \\end{aligned}\n\\end{equation}\nand let $dM_t$ be the sum of these three components. Here $M^{k,\\ell}$ is the matrix defined in Corollary~\\ref{cor:entry}. Observe that the first column of $dM_t$ is $0.$ Recalling the definitions of $F$ and $M^{k,\\ell},$ we have that\n\\[\n \\begin{aligned}\n &[A_t, dM^{(1)}_t+ dM^{(2)}_t] \\in \\sum_{i=1}^n -d\\lambda_{i,t} \\cdot q_i^2 \\cdot E^{+}(\\lambda_i,\\lambda_i) + \\mathbb{T} \\\\\n &[A_t, dM^{(3)}_t] \\in -\\frac{1}{2}[ 2dW_t + [A_t,dM^{(1)}_t],dM^{(1)}_t] + \\mathbb{T} \\\\ \n &[A_t, dM^{(3)}_t] \\in -\\frac{1}{2}[ 2dW_t + [A_t,dM_t],dM_t] + \\mathbb{T}. \n \\end{aligned}\n\\]\nThe equivalence of the last two terms comes from the fact that only products of two stochastic differentials survive.\nHence, this choice of $dM_t$ produces a tridiagonal diffusion; moreover, it produces the tridiagonal model in Theorem \\ref{thm:tridiagonalfrozen}. To arrive at the expression we have for $dP_t,$ we have used the independence of the Brownian motions $dZ_{i,t}.$\n\n\\subsubsection*{Computation of $dP_t$}\n\nThis computation reduces to that of the commutators $[G(\\lambda_i),F(\\lambda_i)]$ and $[E^{+}(\\lambda_i,\\lambda_i),F(\\lambda_i)]$ for an eigenvalue $\\lambda_i.$ We further identify the dominant terms in this expansion, at least for a sufficiently small principal submatrix.\nSpecifically, we show that\n \\[\n \\sum_{k \\geq \\ell} dP_{k,\\ell,t} \\cdot G^{k,\\ell}\n =\\sum_{i=1}^n \\frac{q_i^2}{4} \\partial_{xy} B(x,y) \\vert_{x=y=\\lambda_{i,t}}\n + dR_{t},\n \\]\n where $dR_t$ is a lower order term, at least for sufficiently small windows of the upper left corner. 
In the asymptotic regime considered in Section~\\ref{sec:asymptotics}, the magnitude of the displayed terms in principal submatrix of size $M$ will be $M^2\/n.$ In contrast, the $dR_t$ terms will be $M^4\/n^2.$\n\nWe begin with $[G(\\lambda_i), F(\\lambda_i)].$ For notational simplicity, we suppress $\\lambda_i$ in what follows. All polynomials are evaluated at $\\lambda_i.$\nAs $G$ is tridiagonal, we recall for convenience the commutator equation \\eqref{eq:AM}\n\\begin{equation*}\n[A,M]_{k,\\ell} = a_{k-1} m_{k-1,\\ell} + b_k m_{k,\\ell} +a_k m_{k+1,\\ell} - (a_{\\ell-1} m_{k,\\ell-1} + b_\\ell m_{k,\\ell}+ a_\\ell m_{k,\\ell+1}).\n\\end{equation*}\nFor $k > \\ell +1,$ we get\n\\begin{align*}\n [G,F]_{k,\\ell}\n =\n &-a_{k-1} (p_{k-2}p_{k-2}' - p_{k-1}p_{k-1}')\n p_{k-2}p'_{\\ell-1} \\\\\n &-\\biggl\\{a_{k-1} p_{k-2}p_{k-1}' - a_k p_{k-1} p_{k}'\n +a_{k-1} p_{k-1}p_{k-2}' - a_k p_{k} p_{k-1}'\\biggr\\}\n p_{k-1}p'_{\\ell-1} \\\\\n &-a_k(p_{k-1}p_{k-1}' - p_k p_k')\n p_{k}p'_{\\ell-1} \\\\\n &+a_{\\ell-1} (p_{\\ell-2}p_{\\ell-2}' - p_{\\ell-1}p_{\\ell-1}')\n p_{k-1}p'_{\\ell-2} \\\\\n &+\\biggl\\{a_{\\ell-1} p_{\\ell-2}p_{\\ell-1}' - a_\\ell p_{\\ell-1} p_{\\ell}'\n +a_{\\ell-1} p_{\\ell-1}p_{\\ell-2}' - a_\\ell p_{\\ell} p_{\\ell-1}'\\biggr\\}\n p_{k-1}p'_{\\ell-1} \\\\\n &+a_\\ell(p_{\\ell-1}p_{\\ell-1}' - p_\\ell p_\\ell')\n p_{k-1}p'_{\\ell}. \n\\end{align*}\nThere is cancellation in this formula, which results in \n\\begin{equation}\n \\label{eq:GF}\n \\begin{aligned}\n ~[G,F]_{k,\\ell}\n = \n \\mathcal{H}_{k,\\ell}\n :=\n &-p_{\\ell-1}'a_{k-1}p_{k-2}'(p_{k-2}^2 + p_{k-1}^2) \\\\\n &+p_{\\ell-1}'a_{k}p_{k}'(p_{k-1}^2 + p_{k}^2) \\\\\n &+p_{k-1}a_{\\ell-1}p_{\\ell-2}((p_{\\ell-2}')^2 + (p_{\\ell-1}')^2) \\\\\n &-p_{k-1}a_{\\ell}p_{\\ell}((p_{\\ell-1}')^2 + (p_{\\ell}')^2). 
\n\\end{aligned}\n\\end{equation}\nHere, the formula for $[G,F]$ holds for $k > \\ell+1.$ We let $\\mathcal{H}_{k,\\ell}$ be given by this formula for all $k \\geq \\ell.$ \nFrom the symmetry of $G$ and the antisymmetry of $F,$ the commutator $[G,F]$ is symmetric, which allows its entries above the diagonal to be completed. \nHence we extend the definition of $\\mathcal{H}$ to be $0$ on the diagonal and symmetric.\nOn the tridiagonal, there are correction terms coming from the behavior of $F$ near the diagonal. \n\nThe correction to \\eqref{eq:GF} when $k = \\ell+1$ stems from the $0$ diagonal of $F.$ This $0$-diagonal implies that \\[\n G_{\\ell+1,\\ell} F_{\\ell,\\ell} - F_{\\ell+1,\\ell+1} G_{\\ell+1,\\ell} = 0.\n\\]\nHence, the commutator for $k=\\ell+1$ is\n\\begin{equation}\n \\label{eq:GFe1}\n \\begin{aligned}\n ~[G,F]_{\\ell+1,\\ell}\n -\\mathcal{H}_{\\ell+1,\\ell}\n &=\n -G_{\\ell+1,\\ell} p_{\\ell-1} p_{\\ell-1}' + p_{\\ell}p_{\\ell}' G_{\\ell+1,\\ell} \\\\\n &=\n -a_\\ell( p_{\\ell-1} p_{\\ell-1}' - p_{\\ell}p_{\\ell}')^2. \n \\end{aligned}\n\\end{equation}\nAs for the diagonal, we get\n\\begin{equation}\n \\label{eq:GFe2}\n \\begin{aligned}\n ~[G,F]_{\\ell,\\ell}\n =&\n 2G_{\\ell,\\ell+1} F_{\\ell+1,\\ell}\n -2G_{\\ell-1,\\ell} F_{\\ell,\\ell-1} \\\\\n =&-p_{\\ell-1}'a_{\\ell-1}p_{\\ell-2}'(0 + 2p_{\\ell-1}^2) \\\\\n &+p_{\\ell-1}'a_{\\ell}p_{\\ell}'(0 + 2p_{\\ell}^2) \\\\\n &+p_{\\ell-1}a_{\\ell-1}p_{\\ell-2}(2(p_{\\ell-2}')^2 + 0) \\\\\n &-p_{\\ell-1}a_{\\ell}p_{\\ell}(2(p_{\\ell-1}')^2 + 0). \\\\\n =&\\mathcal{H}_{\\ell,\\ell} \\\\\n &-p_{\\ell-1}'a_{\\ell-1}p_{\\ell-2}'(-p_{\\ell-2}^2 + p_{\\ell-1}^2) \\\\\n &+p_{\\ell-1}'a_{\\ell}p_{\\ell}'(-p_{\\ell-1}^2 + p_{\\ell}^2) \\\\\n &+p_{\\ell-1}a_{\\ell-1}p_{\\ell-2}((p_{\\ell-2}')^2 - (p_{\\ell-1}')^2) \\\\\n &-p_{\\ell-1}a_{\\ell}p_{\\ell}((p_{\\ell-1}')^2 - (p_{\\ell}')^2). 
\\\\\n \\end{aligned}\n\\end{equation}\n\nTurning to $[E^{+},F],$ once again evaluated at an eigenvalue $\\lambda_i$ which we suppress, for any $k \\geq \\ell$\n\\begin{align*}\n [E^{+},F]_{k,\\ell}\n =&\n \\sum_{m=1}^n E^{+}_{k,m} F_{m,\\ell} - F_{k,m} E^{+}_{m,\\ell} \\\\\n =&\n \\sum_{m=\\ell+1}^n -p_{k-1} p_{m-1} p_{m-1} p_{\\ell-1}' \n -\\sum_{m=k+1}^n p_{k-1}' p_{m-1} p_{m-1} p_{\\ell-1} \\\\\n +&\\sum_{m=1}^{\\ell-1} p_{k-1} p_{m-1} p_{m-1}' p_{\\ell-1}\n +\\sum_{m=1}^{k-1} p_{k-1} p_{m-1}' p_{m-1} p_{\\ell-1}. \\\\\n\\end{align*}\nApplying the Christoffel--Darboux identities to these sums, we arrive at\n\\begin{align*}\n [E^{+},F]_{k,\\ell}\n =\n &-\\{p_{k-1}p_{\\ell-1}'+p_{k-1}'p_{\\ell-1}\\}q_i^{-2} \\\\\n &-p_{k-1}p_{\\ell-1}'\\left\\{ - a_\\ell p_\\ell'p_{\\ell-1}+a_\\ell p_\\ell p_{\\ell-1}' \\right\\} \\\\\n &-p_{k-1}'p_{\\ell-1}\\left\\{ - a_k p_k'p_{k-1}+ a_k p_k p_{k-1}' \\right\\} \\\\\n &+p_{k-1}p_{\\ell-1}a_{\\ell-1} \\left\\{ p_{\\ell-1}''p_{\\ell-2} - p_{\\ell-1}p_{\\ell-2}'' \\right\\} \\\\\n &+p_{k-1}p_{\\ell-1}a_{k-1} \\left\\{ p_{k-1}''p_{k-2} - p_{k-1}p_{k-2}'' \\right\\}.\n\\end{align*}\nWe set $\\mathcal{E}_{k,\\ell}$ to be the sum of the final four lines from this equation.\n\n\\subsubsection*{Some simplifications of the FV terms}\n\nThere is some cancellation in the finite variation expressions which assists in estimating its true magnitude when passing to asymptotics. 
Much of the simplification is possible on account of the observation that for $k < \\ell,$\n\\[\n \\frac{1}{2}\\sum_{i,j=1}^n\n q_i^2\n q_j^2\n p_{k-1}(\\lambda_i)\n p_{\\ell-1}(\\lambda_j)\n \\Delta_{\\lambda_j}^{\\lambda_i} B(\\lambda_i,\\lambda_j)\n =0.\n\\]\nThe left hand side of this expression gives $-G^{k,\\ell}$ for $k \\geq \\ell.$ Hence if we extend the definition of $G^{k,\\ell}$ to be $0$ for $k < \\ell,$ we can express the finite variation terms as four sums\n\\begin{equation}\n \\sum_{k \\geq \\ell} dP_{k,\\ell,t} \\cdot G^{k,\\ell}\n = \n \\begin{aligned}\n &\\sum_{i,k,\\ell=1}^n -\\frac{q_i^2}{2} \\cdot (p_{k-1}(\\lambda_i)p_{\\ell-1}'(\\lambda_i)+p_{k-1}(\\lambda_i)'p_{\\ell-1}(\\lambda_i) )\\cdot G^{k,\\ell} \\cdot dt &\\bigg\\} =: dS_t \\\\\n +&\\sum_{i,k,\\ell=1}^n -\\frac{q_i^4}{2} \\cdot \\mathcal{H}_{k,\\ell}(\\lambda_i)\\cdot G^{k,\\ell} \\cdot dt \\qquad &\\bigg\\} =: dR^1_t \\\\\n +&\\sum_{i,k,\\ell=1}^n -\\frac{q_i^4}{2} \\cdot ( [G(\\lambda_i),F(\\lambda_i)]_{k,\\ell} - \\mathcal{H}_{k,\\ell}(\\lambda_i))\\cdot G^{k,\\ell} \\cdot dt \\qquad &\\bigg\\} =: dR^2_t \\\\\n +&\\sum_{i,k,\\ell=1}^n \\frac{q_i^4}{2} \\cdot \\mathcal{E}_{k,\\ell}(\\lambda_i)\\cdot G^{k,\\ell} \\cdot dt. 
&\\bigg\\} =: dR^3_t \\\\\n\\end{aligned}\n \\label{eq:FVterms}\n\\end{equation}\n\nDefine for any eigenvalues $\\lambda_i$ and $\\lambda_u$\n\\[\n \\mathfrak{D}(\\lambda_u,\\lambda_i)\n =\\sum_{\\ell=1}^n p_{\\ell-1}(\\lambda_u) p_{\\ell-1}'(\\lambda_i).\n\\]\nObserve that for any $1 \\leq r \\leq n,$\n\\[\n \\sum_{u=1}^n \n \\mathfrak{D}(\\lambda_u,\\lambda_i)\n p_{r-1}(\\lambda_u) q_u^2 = p_{r-1}'(\\lambda_i),\n\\]\nand hence we have that for any polynomial $p$ of degree less than $n,$\n\\begin{equation}\n \\label{eq:Did}\n \\sum_{u=1}^n \n \\mathfrak{D}(\\lambda_u,\\lambda_i)\n p(\\lambda_u) q_u^2 = p'(\\lambda_i).\n\\end{equation}\n\n\\subsubsection*{Simplifying $dS_t:$}\n\nWe begin this case by observing\nthat since $G^{k,\\ell} = 0$ for $k < \\ell,$\n\\[\n\\sum_{i,k,\\ell=1}^n -\\frac{q_i^2}{2} \\cdot (p_{k-1}(\\lambda_i)p_{\\ell-1}'(\\lambda_i))\\cdot G^{k,\\ell} = 0,\n\\]\nfrom the orthogonality of $p_{k-1}$ to lower degree polynomials. Hence we only need consider the other term. Expanding the definition of $G^{k,\\ell},$\n\\begin{align*}\n dS_t\n &=\n \\sum_{i,k,\\ell=1}^n -\\frac{q_i^2}{2} \\cdot (p_{k-1}(\\lambda_i)'p_{\\ell-1}(\\lambda_i) )\\cdot G^{k,\\ell} \\cdot dt \\\\\n &=\n \\sum_{i,k,\\ell,u,j=1}^n \n \\frac{q_i^2q_u^2q_j^2}{4} \\cdot \n p_{k-1}(\\lambda_i)'p_{\\ell-1}(\\lambda_i) \\cdot \n p_{k-1}(\\lambda_u)\n p_{\\ell-1}(\\lambda_j)\n \\Delta_{\\lambda_j}^{\\lambda_u} B(\\lambda_u,\\lambda_j)\n \\cdot dt \\\\\n &=\n \\sum_{i,k,u=1}^n \n \\frac{q_i^2q_u^2}{4} \\cdot \n p_{k-1}(\\lambda_i)' \\cdot \n p_{k-1}(\\lambda_u)\n \\Delta_{\\lambda_i}^{\\lambda_u} B(\\lambda_u,\\lambda_i)\\cdot dt \\\\\n &=\n \\sum_{i,u=1}^n \n \\frac{q_i^2q_u^2}{4} \\cdot \n \\mathfrak{D}(\\lambda_u,\\lambda_i)\n \\Delta_{\\lambda_i}^{\\lambda_u} B(\\lambda_u,\\lambda_i)\\cdot dt. 
\n\\end{align*}\nIn the third equality, we have used the orthogonality of the polynomials in $\\ell.$ In the fourth, we have used the definition of $\\mathfrak{D}.$ Hence, we conclude\n\\begin{lemma}\n For $k,\\ell$ satisfying $2\\max(k,\\ell) \\leq n+1,$ we have\n \\[\n dQ_{k,\\ell,t} = \n \\sum_{i=1}^n \n \\frac{q_i^2}{4} \\partial_{xy}B_{k,\\ell}(x,y) \\vert_{x=y=\\lambda_{i,t}}\n .\n \\]\n \\label{lem:R3}\n\\end{lemma}\n\\begin{proof}\n For $k,\\ell$ as stated,\n \\(\n \\Delta_{y}^{x}B_{k,\\ell}(x,y)\n \\)\n is a polynomial in $x$ of degree at most $n-1.$\n Applying \\eqref{eq:Did} to $dS_t,$ it follows that \n \\[\n dQ_{k,\\ell,t} = \n \\sum_{i=1}^n \n \\frac{q_i^2}{4} (\\partial_{y} \\Delta_{\\lambda_i}^{y}B_{k,\\ell}(y,\\lambda_i))\\vert_{y=\\lambda_i}.\n \\]\n Expanding,\n \\[\n \\partial_{y} \\Delta_{\\lambda_i}^{y}B(y,\\lambda_i)\n =\n \\partial_y\n \\frac{B(y,y) - B(y,\\lambda_i)}{y-\\lambda_i}.\n \\]\n Hence, on setting $y=\\lambda_i,$ we get\n \\[\n (\\partial_{y} \\Delta_{\\lambda_i}^{y}B(y,\\lambda_i))\\vert_{y=\\lambda_i}\n =(\\partial_{xy} + \\frac{1}{2}\\partial_{yy})B(x,y) \\vert_{x=y=\\lambda_i}.\n \\]\n By orthogonality, and recalling \\eqref{eq:B}, we have\n \\[\n \\sum_{i=1}^n \n \\frac{q_i^2}{4}\n (\\partial_{yy})B(x,y) \\vert_{x=y=\\lambda_i} = 0,\n \\]\n which completes the proof.\n\\end{proof}\n\n\\subsubsection*{Simplifications for quadratic constraining potential}\nFor $V'(x) = cx,$ some additional simplification is possible. Specifically, we may take advantage of the identity\n\\begin{equation}\n \\sum_{i=1}^n q_i^2 \\lambda_i G(\\lambda_i) = -A.\n \\label{eq:AG}\n\\end{equation}\nWe prove this as follows. 
By \\eqref{eq:F}, for an eigenvalue $\\lambda$ of $A,$\n\\[\n [A,F(\\lambda)] = E^{+}(\\lambda,\\lambda) + G(\\lambda).\n\\]\nRecall that $F_{k,\\ell}(\\lambda)=p_{k-1}(\\lambda)p_{\\ell-1}'(\\lambda)$ for $k > \\ell.$ Hence, by orthogonality of $p_{k-1}$ to lower degree polynomials,\n\\[\n \\sum_{i=1}^n q_i^2 \\lambda_i F(\\lambda_i) = 0.\n\\]\nOn the other hand, recalling \\eqref{eq:spec2},\n\\[\n \\sum_{i=1}^n E^{+}(\\lambda_i,\\lambda_i)q_i^2 \\lambda_i = A,\n\\]\nwhich completes the proof.\n\n\\subsection{Smooth spectral weight model}\n\\label{sec:tridiagonalfv}\n\nA relatively tame perturbation of the frozen spectral weight model is one in which the spectral weights are finite variation processes, which may or may not depend on the eigenvalues. \n\\begin{theorem}\n\\label{thm:tridiagonalFV}\nSuppose that $dR_{i,t},$ for $1 \\leq i \\leq n,$ are any finite variation processes. Define\n\\[\n dM^R_t = \n dM^{F}_t\n + \\sum_{i=1}^n dR_{i,t} q_{i,t} E^{-}(\\lambda_{i,t},\\lambda_{i,t}),\n\\]\nwhere $dM^{F}_t$ is the rotation differential from \\eqref{eq:frozenM}. The associated tridiagonal model is given by \n\\begin{equation}\n \\begin{aligned}\n dA_t &= dA^{F}_t + \\sum_{i=1}^n dR_{i,t} q_{i,t} B(\\lambda_{i,t},\\lambda_{i,t}).\n \\end{aligned}\n \\label{eq:smooth}\n\\end{equation}\n\\end{theorem}\n\n\\begin{proof}\nThe eigenvector matrix $O_t$ of $A_t$ evolves (by Theorem~\\ref{thm:tri}) as\n\\begin{equation*}\n dO_t^t = O_t^t (dM_t +\\frac{1}{2} dM_t^2).\n\\end{equation*}\nWhen $dM_t = dM^R_t,$ the first column of $dM_t^2$ vanishes, as $dM^F_t$ is identically $0$ in this column and $dM^R_t-dM^F_t$ is finite variation. 
Hence, the first row of $dO_t$ evolves according to\n\\begin{align*}\n dq_{i,t} = dO_{1,i,t} = \n \\sum_{\\ell=1}^n O_{\\ell,i} dM^R_{\\ell,1,t} \n &=\n \\sum_{\\ell=2}^n q_{i,t} p_{\\ell-1}(\\lambda_{i,t}) \n \\biggl\\{ \n \\sum_{j=1}^n dR_{j,t} q_{j,t} p_{\\ell-1}(\\lambda_{j,t})\n \\biggr\\} \n \\\\\n &=\n -q_i \\sum_{j=1}^n dR_{j,t} q_{j,t} \n +\n \\sum_{j=1}^n \n \\sum_{\\ell=1}^n q_{i,t} p_{\\ell-1}(\\lambda_{i,t}) \n dR_{j,t} q_{j,t} p_{\\ell-1}(\\lambda_{j,t}) \\\\\n &=\n -q_{i,t} \\sum_{j=1}^n dR_{j,t} q_{j,t}\n + dR_{i,t}.\n\\end{align*}\nIn other words, the spectral weights evolve exactly according to the projection of $( dR_{i,t} )_{i=1}^n$ in the direction orthogonal to $(q_{i,t})_{i=1}^n.$ Note that\n\\[\n \\sum_{i=1}^n q_{i,t}^2 E^{-}(\\lambda_{i,t},\\lambda_{i,t}) = 0,\n\\]\nand hence one may as well assume that $dR_{i,t}$ already had this orthogonality.\n\\end{proof}\n\n\\subsection{The half finite variation model}\n\nThe frozen spectral weight and smooth spectral weight models featured in some sense the tamest choices for the evolution of the spectral weights. However, it is also possible to choose evolutions of the spectral weights that lead to substantially different tridiagonal evolutions. For example, one might hope to choose a spectral weight evolution that cancels some of the rough parts of the tridiagonal. 
Here, we show it is possible to choose these weights such that the first half of the tridiagonal model will be finite variation.\n\n\nIn effect, it suffices to show the following lemma.\n\\begin{lemma}\nThere is a choice of coefficients $\\{c^j_i\\}$ so that\n\\[\n F(\\lambda_i) = \\sum_{j=1}^n c^j_i E^{-}(\\lambda_j,\\lambda_j) + T,\n\\]\nfor some $T$ that vanishes for entries $(k,\\ell)$ with $k+\\ell \\leq n.$\n \\label{lem:Fsolve}\n\\end{lemma}\n\\begin{proof}\n Suppose that $p$ is a polynomial of degree at most $n-1.$ Then\n \\[\n \\sum_{i=1}^n \\Delta_{\\lambda_i}^{\\lambda_j} p(\\lambda_i) p_{n-1}(\\lambda_i) q_i^2 = 0\n \\]\n by orthogonality.\n It follows that on rearranging the sum\n \\[\n \\sum_{i \\neq j} \\Delta_{\\lambda_i}^{\\lambda_j} p(\\lambda_i) p_{n-1}(\\lambda_i) q_i^2\n =\n p'(\\lambda_j) p_{n-1}(\\lambda_j) q_j^2.\n \\]\n In particular, we have for $k+\\ell \\leq n,$ when $E^{-}_{k,\\ell}(x,x)$ has degree at most $n-1,$ \n \\[\n F_{k,\\ell}(\\lambda_i) \n = -\\frac{1}{2}\\partial_{\\lambda_i} E^{-}(\\lambda_i,\\lambda_i)\n = \\frac{-1}{2p_{n-1}(\\lambda_i)q_i^2}\\sum_{j \\neq i} \\Delta_{\\lambda_j}^{\\lambda_i} E^{-}_{k,\\ell}(\\lambda_j,\\lambda_j) p_{n-1}(\\lambda_j) q_j^2.\n \\]\n Hence choosing $c^j_i$ to be these coefficients, the lemma follows.\n\\end{proof}\n\n\\begin{corollary}\n There is a tridiagonal model $A_t$ for Dyson Brownian motion so that its entries $(k,\\ell)$ with $k+\\ell \\leq n$ are finite variation.\n\\end{corollary}\n\n\n\\section{Asymptotics}\n\\label{sec:asymptotics}\n\nIn this section we will prove the large--$n$ asymptotics given in Theorem \\ref{thm:asymptotics} of the bounded--order principal submatrices of the frozen spectral weight model with external potential $V(x)=2x^2$ run from the Dumitriu--Edelman distribution. \nLet $\\MP{A}$ be this tridiagonal matrix process.\nLet $M$ be a fixed natural number, which will be the order of the principal submatrix that we consider. 
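Before turning to the analysis, the setup can be illustrated numerically. The following sketch samples a Dumitriu--Edelman-type tridiagonal $\beta$-ensemble and checks that the spectrum, scaled by $\sqrt{n}$, concentrates on $[-1,1]$ with a semicircle profile. The normalization constants below are our assumption for illustration and may differ from the paper's $V(x)=2x^2$ convention.

```python
import numpy as np

# Sketch: sample a Dumitriu--Edelman-type tridiagonal beta-ensemble.  The
# normalization here (off-diagonal a_k ~ chi_{beta(n-k)} / (2 sqrt(beta)),
# diagonal ~ N(0, 1/(2 beta))) is an illustrative assumption chosen so that
# a_k concentrates near sqrt(n)/2, matching sqrt(n) D; the paper's V(x) = 2x^2
# convention may differ by constants.
def sample_tridiagonal(n, beta, rng):
    diag = rng.standard_normal(n) / np.sqrt(2.0 * beta)
    offdiag = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1))) / (2.0 * np.sqrt(beta))
    return np.diag(diag) + np.diag(offdiag, 1) + np.diag(offdiag, -1)

n, beta = 400, 2.0
A = sample_tridiagonal(n, beta, np.random.default_rng(0))
# Eigenvalues scaled by sqrt(n) should fill out (approximately) [-1, 1].
scaled = np.sort(np.linalg.eigvalsh(A)) / np.sqrt(n)
```

With this normalization the scaled edges sit near $\pm 1$ and roughly $61\%$ of the mass lies in $[-1/2,1/2]$, as the semicircle density $\mathfrak{s}$ predicts.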
Recall that the model is given by \\eqref{eq:frozen}, which after applying \\eqref{eq:AG} is given by\n\\begin{equation}\n \\begin{aligned}\n dA_t &= -2A_t\\,dt + \\sum_{i=1}^n -dt \\cdot \\biggl\\{\\sum_{j \\neq i}\\frac{1}{\\lambda_{i,t} - \\lambda_{j,t}} \\biggr\\} \\cdot q_i^2 \\cdot G(\\lambda_{i,t}) + dA_t' \\\\\n dA_t' &= \\sum_{i=1}^n -\\sqrt{\\frac{2}{\\beta}} dZ_{i,t}\\cdot q_i^2 \\cdot G(\\lambda_{i,t})\n + \\sum_{k \\geq \\ell} dP_{k,\\ell,t} \\cdot G^{k,\\ell} \\\\\n dP_t &= \\sum_{i=1}^n -\\frac{q_i^4}{2} \\cdot [ -E^{+}(\\lambda_{i,t},\\lambda_{i,t}) + G(\\lambda_{i,t}), F(\\lambda_{i,t})] \\cdot dt\n \\end{aligned}\n \\label{eq:frozen2}\n\\end{equation}\nTo prove the convergence statement, we will show that the contribution from $A'$ is negligible, and we will derive the contribution from $dA_t-dA'_t$ using the observation that the orthogonal polynomials are approximately Chebyshev, together with the convergence of Stieltjes transforms.\n\nThis section is organized in the following way: first, we show that the observation that the polynomials are approximately Chebyshev holds in a precise way. Second, we prove convergence of the appropriate Stieltjes transforms. Third, we show that the contributions from $dA'_t$ are negligible. Fourth, we give the asymptotics for the main terms in $dA_t- dA'_t$. Fifth, we show the cancellation of the leading order terms. 
Lastly, we collect everything to obtain the appropriate convergence statement.\n\n\n\n\\subsection{Convergence to Chebyshev polynomials}\n\nThe asymptotics are based on the observation that the orthogonal polynomials in this upper corner are nearly Chebyshev polynomials.\nDefine the semi--infinite tridiagonal matrix\n\\[\n D\n =\n \\begin{bmatrix}\n 0 & \\frac{1}{2} & & & & \\\\\n \\frac{1}{2} & 0 & \\frac{1}{2} & & & \\\\\n & \\frac{1}{2} & 0 & \\frac{1}{2}& & \\\\\n & & \\frac{1}{2} & 0 & \\ddots & \\\\\n & & & \\ddots & \\ddots & \\\\\n \\end{bmatrix}.\n\\]\nThe orthogonal polynomials with this Jacobi matrix are the Chebyshev polynomials of the second kind, $\\left\\{ U_k \\right\\}_{k=0}^{\\infty}.$\n\nDefine stopping times, for any $R > 0,$\n\\[\n \\tau_R = \\inf\\left\\{ t \\geq 0 ~:~ \\max_{1 \\leq i,j \\leq M} |\\MP{A}{t}[i,j]-\\sqrt{n}D_{i,j}| > R \\right\\}.\n\\]\nFor time before $\\tau_R,$ we have quantitative control on the proximity of $\\MP{A}$ to $D,$ and this implies the closeness of the scaled orthogonal polynomials associated to $\\MP{A}$ to the Chebyshev polynomials of the second kind.\n\\begin{lemma}\n Uniformly in $0 \\leq k \\leq M,$ $0 \\leq j < k,$ $t \\leq \\tau_R,$ and locally uniformly in $x\\in{\\mathbb C},$\n \\begin{align*}\n &\\lim_{n\\to\\infty} p_{k,t}(\\sqrt{n} x) = U_k(x), \\\\\n &\\lim_{n\\to\\infty} a_{j+1,t}p^{(j)}_{k,t}(\\sqrt{n} x) = U_{k-j-1}(x), \\\\\n &\\lim_{n\\to\\infty} \\sqrt{n} p_{k,t}'(\\sqrt{n} x) = U_k'(x).\n \\end{align*}\n \\label{lem:cheb}\n\\end{lemma}\n\\begin{proof}\n The three-term recurrence for $q_{k,t}(x) = p_{k,t}(\\sqrt{n}x)$ is given by\n \\begin{align*}\n x q_{k,t}(x) \n =\n x p_{k,t}(\\sqrt{n} x)\n &=\n \\frac{a_{k+1,t}}{\\sqrt{n}} p_{k+1,t}(\\sqrt{n} x)\n +\\frac{b_{k+1,t}}{\\sqrt{n}} p_{k,t}(\\sqrt{n} x)\n +\\frac{a_{k,t}}{\\sqrt{n}} p_{k-1,t}(\\sqrt{n} x) \\\\\n &=\n \\frac{a_{k+1,t}}{\\sqrt{n}} q_{k+1,t}(x)\n +\\frac{b_{k+1,t}}{\\sqrt{n}} q_{k,t}(x)\n +\\frac{a_{k,t}}{\\sqrt{n}} q_{k-1,t}(x).\n \\end{align*}\n 
From the uniform convergence of $n^{-1\/2} A_t,$ the $3$-term recurrence converges to that of the Chebyshev polynomials. Hence by induction on $k,$ the result follows. \n\\end{proof}\n\nWe will need to find a uniform left--tail bound for these stopping times in $n$ which improves as $R \\to \\infty.$ To do so, define stopping times, for $K >0,$ \n\\[\n \\sigma_K = \\inf\\left\\{ t \\geq 0 ~:~ \\max_{1 \\leq i \\leq N} |\\lambda_{i,t}| > K\\sqrt{n} \\right\\}.\n\\]\n\\begin{lemma}\n There is a $K >0$ so that for all $T >0,$\n \\[\n \\lim_{n \\to \\infty} \\Pr\\left[ \\sigma_K \\leq T \\right] = 0.\n \\]\n \\label{lem:sigmaK}\n\\end{lemma}\n\\begin{proof}\n See \\cite[Theorem 5.1]{Unterberger}.\n\\end{proof}\n\n\\subsection{Stieltjes transforms and their limits}\n\nThe Stieltjes transform of a measure $\\mu$ on the real line is given by \n\\[\ns^\\mu(z) =\\int_\\mathbb{R} \\frac{1}{x-z}d\\mu.\n\\]\nWe can also define the Stieltjes transform of a matrix by taking the measure to be the associated spectral measure. We make use of both the unweighted and weighted spectral measures and so define the Stieltjes transforms of $A_t$ by\n\\[\n s_t(z) = \\frac{1}{n}\\sum_{i=1}^n \\frac{1}{\\lambda_{i,t}\/\\sqrt{n}-z}\n \\quad\n \\text{and}\n \\quad\n s_t^A(z) = \\sum_{i=1}^n \\frac{q_i^2}{\\lambda_{i,t}\/\\sqrt{n}-z},\n\\]\nfor $z$ in the complement of the spectrum with respect to the complex plane.\nRecall that $\\mathfrak{s}$ denotes the semicircle density on $[-1,1],$\n\\[\n \\mathfrak{s}(x) = \\frac{2}{\\pi}\\sqrt{1-x^2}\\,\\one[|x| \\leq 1],\n\\]\nand that we define for $z \\in {\\mathbb C} \\setminus [-1,1]$\n\\[\n s^{\\mathfrak{s}}(z) =\n \\int_{-1}^1 \\frac{\\mathfrak{s}(x)\\,dx}{x-z}\n =\n 2(-z+\\sqrt{z^2-1})\n .\n\\]\n\\begin{proposition}\n \\label{prop:stieltjes}\n Let $K,T > 0$ and let $F \\subset {\\mathbb C} \\setminus [-K,K]$ be compact. 
Then \n \\[\n \\max_{0 \\leq t \\leq \\sigma_K \\wedge T} \\max_{z \\in F} |s_t(z) - s^{\\mathfrak{s}}(z)| \\to 0\n \\]\n pointwise. This is essentially a version of uniform convergence, for $0 \\le t \\le T,$ of $\\mu_{n,t} = \\frac{1}{n} \\sum_{i=1}^n \\delta_{\\lambda_i\/\\sqrt{n}}$ weakly to $\\mathfrak{s}$.\n\\end{proposition}\n\n\\begin{proof}\nThe first statement is a consequence of Theorem 1 in Rogers and Shi \\cite{RogersShi}. To see that this implies the uniform weak convergence, first observe that the measures are compactly supported, so $\\int x^k d\\mu_{n,t} \\to \\int x^k d\\mathfrak{s}(x)$ by contour integration, which then implies the weak convergence.\n\\end{proof}\n\nLet $Q_t$ be a standard Brownian motion defined for all positive and negative times, and set\n\\[\n s^{Q}(z) = \n s^{\\mathfrak{s}}(z)c_\\beta \\int_{-1}^{1}1\\ dQ_x+ c_\\beta \\int_{-1}^1 \\frac{\\sqrt{\\mathfrak{s}(x)}dQ_x}{x-z},\n\\]\nwith $c_\\beta = \\sqrt{\\beta}.$\n\\begin{proposition}\n \\label{prop:stieltjes2}\n Let $K,T > 0$ and let $F \\subset {\\mathbb C} \\setminus [-K,K]$ be compact. Then there is a probability space so that\n \\[\n \\max_{0 \\leq t \\leq \\sigma_K \\wedge T} \\max_{z \\in F} |\\sqrt{n}(s_t^A(z)-s_t(z)) - s^{Q}(z)| \\overset{\\Pr}{\\to} 0.\n \\]\n\\end{proposition}\n\\begin{proof}\nWe make use of the martingale central limit theorem to show convergence directly. Recall that the vector $(q_1^2,...,q_n^2)$ has Dirichlet$(\\frac{\\beta}{2},...,\\frac{\\beta}{2})$ distribution. We couple the $q_i$'s to an independent family of i.i.d. random variables $Y_1,Y_2,...$ in the following way. 
Let $Y_i \\sim \\mathrm{Gamma}(\\frac{\\beta}{2},\\theta)$; then, for $V_n = \\sum_{i=1}^n Y_i,$ we define \n\\[\n\\left( \\frac{Y_1}{V_n},...,\\frac{Y_n}{V_n}\\right) = (q_1^2,...,q_n^2).\n\\]\nUsing this, we have the decomposition $\\sqrt {n} (s_t^A(z)-s_t(z)) = X_n^{(t)}(z)+M_n^{(t)}(1,z)$ where \n \\[\n X^{(t)}_n(z) = \\sqrt n \\sum_{i=1}^{n} \\left( \\frac{Y_i}{\\theta\\frac{\\beta}{2}n} - q_i^2\\right) \\frac{1}{\\lambda_{i,t}\/\\sqrt n-z} = \\frac{V_n- \\theta\\frac{\\beta}{2}n}{\\theta \\frac{\\beta}{2}\\sqrt n} s_t^A,\n\\]\nand $M_n^{(t)}$ is a family of continuous time martingales defined by \n\\[\nM_n^{(t)}(s,z) = \\sqrt{n} \\sum_{i=1}^{\\lfloor sn\\rfloor} \\left( \\frac{1}{n} - \\frac{Y_i}{\\theta \\frac{\\beta}{2} n}\\right) \\frac{1}{\\lambda_{i,t}\/\\sqrt n-z}.\n\\]\n\nTo show convergence we begin by defining\n\\[\nS_n(s) =\\sqrt{\\frac{2}{\\beta}}\\sqrt{n} \\sum_{i=1}^{\\lfloor sn\\rfloor} \\left(\\frac{Y_i}{\\theta \\frac{\\beta}{2} n}- \\frac{1}{n}\\right).\n\\]\nA strong approximation theorem by Koml\\'os, Major, and Tusn\\'ady \\cite{KMT} gives us that there exists a Brownian motion $B(s)$ so that $\\sup_{s\\in[0,T] }|S_n(s) - B(s)| \\overset{\\Pr}{\\to} 0.$ Let $Q(x) = B((x+1)\/2)$. Notice that $X_n^{(t)}$ is bounded and so $|s_t(z)-s_t^A(z)|\\to 0$ for all $t$. In particular, we get that $\\sup_{0\\le t \\le T} \\sup_{z\\in F}| s_t^A(z)- s^{\\mathfrak{s}}(z)|\\to 0$ in probability. From this we get \n\\begin{equation}\n \\max_{0 \\leq t \\leq \\sigma_K \\wedge T} \\max_{z \\in F}\\left| X^{(t)}_n(z) - s^{\\mathfrak{s}}(z)c_\\beta \\int_{-1}^{1}1\\ dQ_x \\right| \\to 0 \\quad \\text{ in probability.}\n \\label{eq:Xlimit}\n\\end{equation}\nWe now turn to the martingale term. In order to prove convergence we will separate the $\\lambda_{i,t}$ from the main term. Let $\\gamma_i$ be the $i\/n$-th quantile of $\\mathfrak{s}$. 
That is $\\gamma_i$ satisfies\n\\[\n\\int_{-1}^{\\gamma_i} \\mathfrak{s}(x)dx = \\frac{i}{n}.\n\\]\nNow observe that $M_n^{(t)}(s)$ may be rewritten as\n\\[\nM_n^{(t)}(s) = \\sqrt{n} \\sum_{i=1}^{\\lfloor sn\\rfloor} \\left( \\frac{1}{n} - \\frac{Y_i}{\\theta \\frac{\\beta}{2} n}\\right) \\frac{1}{\\gamma_i-z}\n+\\sqrt{n} \\sum_{i=1}^{\\lfloor sn\\rfloor} \\left( \\frac{1}{n} - \\frac{Y_i}{\\theta \\frac{\\beta}{2} n}\\right)\\left(\\frac{1}{\\lambda_{i,t}\/\\sqrt n-z}- \\frac{1}{\\gamma_i-z}\\right).\n\\]\nThe second moment of the second term, conditional on the eigenvalue evolution, then may be bounded by \n\\[\nn \\max_{1\\le i \\le n}\\left\\{ \\left(\\frac{1}{\\lambda_{i,t}\/\\sqrt n-z}- \\frac{1}{\\gamma_i-z}\\right)^2 \\right\\} \\sum_{i=1}^{\\lfloor sn \\rfloor} \\mathbb{E}\\left( \\frac{1}{n} - \\frac{Y_i}{\\theta \\frac{\\beta}{2} n}\\right)^2.\n\\]\nIf we can show that the maximum converges to $0$ in probability that will show that the second term converges to 0 in probability. Recall that $\\sup_{0\\le t \\le T} \\max_{1\\le i \\le n}|\\frac{1}{\\lambda_{i,t}\/\\sqrt n-z}- \\frac{1}{\\gamma_i-z}| \\to 0$ in probability. This follows from convergence of the quantile functions which can be obtained from the weak convergence in Proposition \\ref{prop:stieltjes}. \nWe now show the first sum appearing in $M_n^{(t)}$ converges to a stochastic integral.\nWe make use of the semicircle convergence again to get that \n\\[\n \\frac{1}{n}\n\\sum_{i=1}^{\\lfloor sn\\rfloor} \\frac{1}{\\gamma_i-z} \\to \\int_{-1}^{2s -1} \\frac{ \\mathfrak{s}(x)}{x-z} dx.\n\\]\nThis convergence is uniform in $s$. 
By Theorem 4.6 in Kurtz and Protter \\cite{KP}, we get that \n\\[ \n\\max_{0 \\leq t \\leq \\sigma_K \\wedge T} \\max_{z \\in F} \\left|M_n^{(t)} - c_\\beta \\int_{-1}^1 \\frac{\\sqrt{\\mathfrak{s}(x)}dQ_x}{x-z}\\right| \\overset{\\Pr}{\\to} 0.\n\\]\nThis together with (\\ref{eq:Xlimit}) completes the proof.\n\\end{proof}\n\nHaving shown that the Stieltjes transforms converge, we can show that the stopping times $\\tau_R$ advance to infinity for large $n$ and large $R.$\n\\begin{proposition}\n Let $T > 0.$ Then \n \\[\n \\limsup_{R \\to \\infty} \\limsup_{n \\to \\infty} \\Pr\\left[ \\tau_R \\leq T \\right] = 0.\n \\]\n \\label{prop:tightness}\n\\end{proposition}\n\\begin{proof}\n Let $K > 1$ be a constant as in Lemma~\\ref{lem:sigmaK}. Let $\\gamma$ be a smooth contour winding once in the positive orientation around $[-K,K]$ in the complex plane. \n Define the $k$-th moment\n \\[\n c_{k,t} \n = \\sum_{i=1}^n q_i^2 (\\lambda_{i,t}\/\\sqrt{n})^k \n = \\frac{1}{2\\pi i} \\oint_\\gamma s^A_t(z) z^k\\,dz.\n \\]\n From \\cite[(2.2.6)]{Szego}, it is possible to express for all $1 \\leq k \\leq n-1$\n \\[\n p_k(\\sqrt{n} x) = \\frac{1}{\\sqrt{D_kD_{k-1}}}\n \\left|\n \\begin{matrix}\n c_0 & c_1 & c_2 & \\dots & c_k \\\\\n c_1 & c_2 & c_3 & \\dots & c_{k+1} \\\\\n \\dots & \\dots & \\dots & \\dots & \\dots \\\\\n c_{k-1} & c_k & c_{k+1} & \\dots & c_{2k-1} \\\\\n 1 & x & x^2 & \\dots & x^{k} \\\\\n \\end{matrix}\n \\right|,\n \\]\n where $D_k$ is the Hankel determinant $\\det[c_{i+j}]_{i,j=0,\\dots, k}.$\n By Proposition~\\ref{prop:stieltjes}, each coefficient of $p_k(\\sqrt{n}x)$ converges with rate $1\/\\sqrt{n}$ to the corresponding coefficient of $U_k(x),$ the orthogonal polynomial associated to the semicircle, uniformly for time up to $T \\wedge \\sigma_K.$ That is to say, for any natural numbers $i,j,$\n \\[\n \\max_{0 \\leq t \\leq T \\wedge \\sigma_K} \\sqrt{n}\\,|\\MP{A}{t}[i,j]\/\\sqrt{n} - D_{i,j}| \n \\]\nis tight as a family of variables in $n.$ Hence \n\\[\n \\limsup_{R 
\\to \\infty} \\limsup_{n \\to \\infty} \\Pr\\left[ \\tau_R \\leq T \\right] \n \\leq \n \\limsup_{n \\to \\infty}\n \\Pr\\left[ \\sigma_K \\leq T \\right],\n\\]\nwhich is equal to $0$ by Lemma~\\ref{lem:sigmaK}.\n\\end{proof}\n\n\\subsection{Negligible terms}\nRecall the decomposition given in (\\ref{eq:frozen2}). In this section we show that the $A'_t$ terms will be negligible. Recall\n\\begin{equation*}\n \\begin{aligned}\n dA_t' &= \\sum_{i=1}^n -\\sqrt{\\frac{2}{\\beta}} dZ_{i,t}\\cdot q_i^2 \\cdot G(\\lambda_i)\n + \\sum_{k \\geq \\ell} dP_{k,\\ell,t} \\cdot G^{k,\\ell} \\\\\n dP_t &= \\sum_{i=1}^n -\\frac{q_i^4}{2} \\cdot [ -E^{+}(\\lambda_i,\\lambda_i) + G(\\lambda_i), F(\\lambda_i)] \\cdot dt\n \\end{aligned}\n\\end{equation*}\nHere we will also need that the spectral weights are relatively flat. \n\\begin{lemma}\n There is a constant $C_\\beta > 0$ so that with probability going to $1$\n \\[\n \\max_{1 \\leq i \\leq n} q_i^2 \\leq \\frac{C_\\beta \\log n}{n},\n \\]\n as $n \\to \\infty.$\n \\label{lem:qibound}\n\\end{lemma}\n\n\\begin{proposition}\n For any $T,R,K > 0$ as $n\\to\\infty,$\n \\[\n \\max_{1\\leq i,j \\leq M}\n \\biggl|\n \\int_0^{T \\wedge \\tau_R \\wedge \\sigma_K}\n dA_{i,j,t}'\n \\biggr|\n \\overset{\\Pr}{\\to}\n 0.\n \\]\n \\label{prop:dAt}\n\\end{proposition}\n\\begin{proof}\n Let $\\zeta$ be the stopping time $T \\wedge \\tau_R \\wedge \\sigma_K.$\n We consider the local martingale and finite variation portions separately. 
Recall that entries of $G$ are given by\n \\begin{equation}\n G_{k,\\ell}(\\lambda)\n =\\begin{cases}\n a_{\\ell-1} p_{\\ell-2}(\\lambda)p_{\\ell-1}'(\\lambda) - a_\\ell p_{\\ell-1}(\\lambda) p_{\\ell}'(\\lambda)& \\\\\n +a_{\\ell-1} p_{\\ell-1}(\\lambda)p_{\\ell-2}'(\\lambda) - a_\\ell p_{\\ell}(\\lambda) p_{\\ell-1}'(\\lambda), & \\text{ if } k = \\ell , \\\\\n a_\\ell (p_{\\ell-1}(\\lambda)p_{\\ell-1}'(\\lambda) - p_\\ell(\\lambda)p_\\ell'(\\lambda)), & \\text{ if } k = \\ell + 1 ,\\\\\n 0, & \\text{ otherwise.}\n \\end{cases}\n \\label{eq:GG}\n\\end{equation}\nHence in light of Lemma~\\ref{lem:cheb}, we have that\n\\[\n G^* = \\max_{1 \\leq i \\leq n} \\max_{0 \\leq t \\leq \\zeta} \\max_{1\\leq k,\\ell \\leq M}\n |G_{k,\\ell}(\\lambda_{i,t})|\n\\]\nis tight as a family of random variables in $n.$ Employing this bound, the quadratic variation of the local martingale is dominated by \n\\[\n \\biggl\\langle\n \\sum_{i=1}^n -\\sqrt{\\frac{2}{\\beta}} dZ_{i,t}\\cdot q_i^2 \\cdot G_{k,\\ell}(\\lambda_i)\n\\biggr\\rangle\n\\leq \n\\sum_{i=1}^n \\frac{2}{\\beta}q_i^4 (G^*)^2\\,dt.\n\\]\nBy Lemma~\\ref{lem:qibound}, it follows that\n\\[\n \\max_{1\\leq k,\\ell \\leq M}\n \\biggl|\n \\int_0^{\\zeta}\n \\sum_{i=1}^n dZ_{i,t}\\cdot q_i^2 \\cdot G_{k,\\ell}(\\lambda_i)\n \\biggr|\n \\overset{\\Pr}{\\to}\n 0.\n\\]\n\nAs for the finite variation terms, we consider each of the four classes of terms broken out in \\eqref{eq:FVterms}. The estimates for $dS_t,$ $dR^1_t, dR^2_t,$ and $dR^3_t$ are similar\nand so we explain the estimates for $dR^1_t$ and do not discuss the others.\n\nRecall that\n\\(\ndR^1_t=\\sum_{i,k,\\ell=1}^n -\\frac{q_i^4}{2} \\mathcal{H}_{k,\\ell} \\cdot G^{k,\\ell} \\cdot dt,\n\\)\nwhere $\\mathcal{H}_{k,\\ell}$ is given in \\eqref{eq:GF} and $G^{k,\\ell}$ is given in Corollary~\\ref{cor:entry}. 
The entries $G^{k,\\ell}_{u,r}$ vanish when $u+r < k+\\ell.$ In particular, this implies that for considering $dR^{1}_{u,r,t}$ for $1 \\leq u,r \\leq M$ it suffices to estimate $k,\\ell$ with $1 \\leq k,\\ell \\leq 2M.$ Using Remark~\\ref{rem:Gkl} and Lemma~\\ref{lem:cheb}, we have that\n\\[\n G^{**}=\n \\max_{0 \\leq t \\leq \\zeta} \n \\max_{1 \\leq k,\\ell \\leq 2M} \n \\max_{1 \\leq u,r \\leq M} \n |G^{k,\\ell}_{u,r,t}|\n\\]\nis tight as a family of random variables in $n$, where we have used that $\\sum_{i=1}^n q_i^2=1.$ In a similar way,\n\\[\n H^*\n =\n \\max_{1 \\leq i \\leq n}\n \\max_{0 \\leq t \\leq \\zeta} \n \\max_{1 \\leq k,\\ell \\leq 2M} \n |\\mathcal{H}_{k,\\ell,t}(\\lambda_{i,t})|\\sqrt{n}\n\\]\nis tight. Hence, \n\\[\n \\biggl| \\int_0^\\zeta dR^1_t \\biggr| \\leq \n \\zeta \\cdot \n (\\max_{1 \\leq i \\leq n} q_i^4) \\cdot\n \\sqrt{n} \\cdot H^* \\cdot G^{**} \\overset{\\Pr}{\\to} 0,\n\\]\nby Lemma~\\ref{lem:qibound}.\n\n\n\\end{proof}\n\n\\subsection{Principal terms}\nThe remaining terms will produce a nonvanishing contribution to the limit in the regime we consider. Recalling \\eqref{eq:frozen2}, these remaining terms are given by\n\\[\n dA_t - dA_t' = -2A_t\\,dt + \\sum_{i=1}^n -dt \\cdot \\biggl\\{\\sum_{j \\neq i}\\frac{1}{\\lambda_i - \\lambda_j} \\biggr\\} \\cdot q_i^2 \\cdot G(\\lambda_i).\n\\]\nThere are large cancellations between the $2A_t$ term and the sum. Both are order $\\sqrt{n},$ but their leading order behavior cancels. \n\nTo identify an exact leading term, we make a connection to the minor polynomials.\nRecall from \\eqref{eq:G} that $G(\\lambda) = -\\partial_\\lambda B(x,\\lambda) \\vert_{x=\\lambda}.$ By orthogonality, we have that \n\\[\n \\sum_{i=1}^n q_i^2\\biggl\\{\\partial_{\\lambda \\lambda} B(\\lambda_i,\\lambda)\\vert_{\\lambda=\\lambda_i} \\biggr\\}=0,\n\\]\non account of the orthogonality of $p_{k}$ to lower degree polynomials. 
Hence, we have\n\\[\n \\sum_{i=1}^n q_i^2\\biggl\\{\\sum_{j \\neq i} \\Delta_{\\lambda_j}^{\\lambda_i} \\partial_{\\lambda_j} B(\\lambda_i,\\lambda_j) \\biggr\\}\n =\n \\sum_{j=1}^n \n \\sum_{i=1}^n \n q_i^2\\biggl\\{\\Delta_{\\lambda_j}^{\\lambda_i} \\partial_{\\lambda_j} B(\\lambda_i,\\lambda_j) \\biggr\\}\n =0,\n\\]\nwhere we have extended the formula to $i = j$ by continuity, and the second equality follows as the inner sum is identically $0,$ again by degree considerations. This formula allows us to write\n\\[\n\\sum_{i=1}^n \\biggl\\{\\sum_{j \\neq i}\\frac{1}{\\lambda_i - \\lambda_j} \\biggr\\} \\cdot q_i^2 \\cdot G(\\lambda_i) \n =\\frac{-1}{2}\n\\sum_{i=1}^n \\biggl\\{\\sum_{j \\neq i}\\frac{ \n \\partial_\\lambda B(\\lambda_i,\\lambda) \\vert_{\\lambda=\\lambda_i}\n +\\partial_\\lambda B(\\lambda_i,\\lambda) \\vert_{\\lambda=\\lambda_j}\n}{\\lambda_i - \\lambda_j} \\biggr\\} \\cdot q_i^2 \n\\]\nSo, adding and subtracting $\\partial_\\lambda B(\\lambda_j,\\lambda) \\vert_{\\lambda=\\lambda_j},$ we can write\n\\begin{equation}\n \\sum_{i=1}^n \\biggl\\{\\sum_{j \\neq i}\\frac{1}{\\lambda_i - \\lambda_j} \\biggr\\} \\cdot q_i^2 \\cdot G(\\lambda_i) \n =\\frac{-1}{2}\n \\sum_{i,j:i\\neq j}^n\n \\Delta_{\\lambda_i}^{\\lambda_j} \\partial_{\\lambda_j} B(\\lambda_i,\\lambda_j) \\cdot q_i^2\n +\\frac{1}{2}\n \\sum_{i,j:i\\neq j}^n \\frac{G(\\lambda_j)+G(\\lambda_i)}{\\lambda_i-\\lambda_j} \\cdot q_i^2\n \\label{eq:S2}\n\\end{equation}\nRewriting the model, we have\n\\begin{equation}\n \\begin{aligned}\n dA_t - dA_t' &= -2A_t\\,dt + \\frac{n}{2}\n \\sum_{i,j=1}^n\n \\Delta_{\\lambda_i}^{\\lambda_j} \\partial_{\\lambda_j} B(\\lambda_i,\\lambda_j) \\cdot q_i^2 \\cdot q_j^2 \\\\\n &- \\frac{n}{2}\n \\sum_{i,j=1}^n\n \\Delta_{\\lambda_i}^{\\lambda_j} \\partial_{\\lambda_j} B(\\lambda_i,\\lambda_j) \\cdot q_i^2 \\cdot \\biggl\\{ q_j^2 - \\frac{1}{n} \\biggr\\} \\\\\n &+\\frac{1}{2}\n \\sum_{i,j:i\\neq j}^n \\frac{G(\\lambda_j)+G(\\lambda_i)}{\\lambda_i-\\lambda_j} 
\\cdot q_i^2 \\\\\n &-\n \\frac{1}{2}\n \\sum_{i=1}^n\n \\partial_{\\lambda_i\\lambda_j} B(\\lambda_i,\\lambda_j) \\vert_{\\lambda_j=\\lambda_i} \\cdot q_i^2 \\\\\n \\label{eq:S3}\n \\end{aligned}\n\\end{equation}\nThe first line can be expressed in terms of the minor polynomials. Specifically, we define \n\\(\nG^{(0)}(\\lambda)\n=\n-\n \\sum_{i=1}^n\n \\Delta_{\\lambda_i}^{\\lambda} \\partial_{\\lambda} B(\\lambda_i,\\lambda) \\cdot q_i^2,\n\\)\nin terms of which\n\\begin{equation}\n G_{k,\\ell}^{(0)}(\\lambda)\n =-\\begin{cases}\n -a_{\\ell-1} p^{(0)}_{\\ell-2}(\\lambda)p_{\\ell-1}'(\\lambda) + a_\\ell p^{(0)}_{\\ell-1}(\\lambda) p_{\\ell}'(\\lambda)& \\\\\n -a_{\\ell-1} p^{(0)}_{\\ell-1}(\\lambda)p_{\\ell-2}'(\\lambda) + a_\\ell p^{(0)}_{\\ell}(\\lambda) p_{\\ell-1}'(\\lambda), & \\text{ if } k = \\ell , \\\\\n -a_\\ell (p^{(0)}_{\\ell-1}(\\lambda)p_{\\ell-1}'(\\lambda) - p^{(0)}_\\ell(\\lambda)p_\\ell'(\\lambda)), & \\text{ if } k = \\ell + 1 ,\\\\\n 0, & \\text{ otherwise.}\n \\end{cases}\n \\label{eq:BB}\n\\end{equation}\nWe will make a comparison between the minor polynomials and the original polynomials to expose the leading order behavior (c.f.\\,\\eqref{eq:MinorPerturb}). The last line is an absolute constant multiple of $dR^3,$ which is negligible in the limit (c.f.\\,Lemma~\\ref{lem:R3}). 
\n\nAs for the middle two lines, they can be expressed in terms of the fluctuations of the spectral weights: \n\\begin{lemma}\n Let $\\gamma$ be a smooth contour enclosing all the eigenvalues once in the positive orientation.\n Then\n \\[\n \\begin{aligned}\n -\\sum_{i,j=1}^n\n \\Delta_{\\lambda_i}^{\\lambda_j} \\partial_{\\lambda_j} B(\\lambda_i,\\lambda_j) \\cdot q_i^2 \\cdot \\biggl\\{ q_j^2 - \\frac{1}{n} \\biggr\\} \n &= \\frac{1}{2\\pi i}\\oint (s^A_t(z) - s_t(z))G^{(0)}(\\sqrt{n} z)\\,dz \\\\ \n \\end{aligned}\n \\]\n and\n \\[\n \\begin{aligned}\n \\sum_{i,j:i\\neq j}^n \\frac{G(\\lambda_j)+G(\\lambda_i)}{\\lambda_i-\\lambda_j} \\cdot \n q_i^2\n \n &=\\frac{-\\sqrt{n}}{(2\\pi i)^2}\\oint\\oint (s^A_t(z) - s_t(z))s_t(y) \\frac{G(\\sqrt{n} z)-G(\\sqrt{n} y)}{z-y}\\,dzdy \\\\\n &+\\frac{2\\sqrt{n}}{2\\pi i}\\oint (s^A_t(z) - s_t(z))s_t(z) G(\\sqrt{n} z)\\,dz,\n \\end{aligned}\n \\]\n where the contour integrals are over $\\gamma.$\n \\label{lem:second}\n\\end{lemma}\n\\begin{proof}\n By antisymmetry, we have that\n \\[\n \\sum_{i,j:i\\neq j}^n \\frac{G(\\lambda_j)+G(\\lambda_i)}{\\lambda_i-\\lambda_j} \\cdot q_i^2\n =\n \\sum_{i,j:i\\neq j}^n \\frac{G(\\lambda_j)+G(\\lambda_i)}{\\lambda_i-\\lambda_j} \\cdot \n \\biggl(q_i^2-\\frac{1}{n}\\biggr).\n \\]\n This we now break into two parts\n \\[\n \\sum_{i,j:i\\neq j}^n \\frac{G(\\lambda_j)+G(\\lambda_i)}{\\lambda_i-\\lambda_j} \\cdot \n \\biggl(q_i^2-\\frac{1}{n}\\biggr)\n =\n -\\sum_{i,j:i\\neq j}^n \\frac{G(\\lambda_i)-G(\\lambda_j)}{\\lambda_i-\\lambda_j} \\cdot \n \\biggl(q_i^2-\\frac{1}{n}\\biggr)\n +\\sum_{i,j:i\\neq j}^n \\frac{2G(\\lambda_i)}{\\lambda_i-\\lambda_j} \\cdot \n \\biggl(q_i^2-\\frac{1}{n}\\biggr)\n \\]\n For entire functions $f,$ by a residue computation\n \\begin{equation}\n \\label{eq:residues}\n \\begin{aligned}\n \\frac{1}{2\\pi i} \\oint (s^A_t(z) - s_t(z))f(z)\\,dz\n =\n &\\sum_{i=1}^n \\biggl\\{q_i^2-\\frac{1}{n}\\biggr\\}f(\\lambda_{i,t}\/\\sqrt{n}), \\text{ and } \\\\\n 
\\frac{1}{2\\pi i} \\oint (s^A_t(z) - s_t(z)) s_t(z) f(z)\\,dz\n =\n &\\frac{1}{\\sqrt{n}}\n \\sum_{i=1}^n \\biggl\\{q_i^2-\\frac{1}{n}\\biggr\\}\\cdot \\biggl\\{\\sum_{j \\neq i}\\frac{f(\\lambda_{i,t}\/\\sqrt{n})}{\\lambda_{i,t}-\\lambda_{j,t}}\\biggr\\}\n \\\\\n +&\\frac{1}{2n}\n \\sum_{i=1}^n \\biggl\\{q_i^2-\\frac{1}{n}\\biggr\\}f'(\\lambda_{i,t}\/\\sqrt{n}),\n \\end{aligned}\n \\end{equation}\n for a contour enclosing all the eigenvalues once in the positive orientation. Hence, appropriately accounting for $G'$ terms, we get the desired formula.\n\\end{proof}\n\n\\subsection{Cancellation at leading order}\n\nWe consider the leading order behavior of the sum\n\\[\n \\sum_{i,j=1}^n\n \\Delta_{\\lambda_i}^{\\lambda_j} \\partial_{\\lambda_j} B(\\lambda_i,\\lambda_j) \\cdot q_i^2\\cdot q_j^2.\n\\]\nOn account of \\eqref{eq:BB}, we are led to consider the following asymptotics. \n\\begin{lemma}\n\n \\begin{align*}\n &\\sum_{j=1}^n p_k^{(0)}(\\lambda_j) p_{k}'(\\lambda_j)\\cdot q_j^2 = \n \\frac{k}{a_1 a_k} \n + n^{-3\/2}\\sum_{j=1}^{k-1} 16 (a_j-\\tfrac{\\sqrt{n}}{2}) (k-2j) \n + n^{-3\/2} 8k(a_1-a_k)\n + o(n^{-3\/2}),\n \\\\\n &\\sum_{j=1}^n p_k^{(0)}(\\lambda_j) p_{k+1}'(\\lambda_j)\\cdot q_j^2 \n = \\frac{(b_1+\\cdots + b_k-kb_{k+1}) }{a_1a_k a_{k+1}}\n + n^{-3\/2}\\sum_{j=1}^{k} 8b_j(k+1-2j)\n +o(n^{-3\/2}),\\\\\n &\\sum_{j=1}^n p_k^{(0)}(\\lambda_j) p_{k-1}'(\\lambda_j)\\cdot q_j^2\n = n^{-3\/2}\\sum_{j=1}^{k} 8b_j(k+1-2j)\n + o(n^{-3\/2}),\n \\end{align*}\n with the errors uniform in $0 \\leq t \\leq T \\wedge \\sigma_K \\wedge \\tau_R.$ \n \\label{lem:mpeval}\n\\end{lemma}\n\\begin{proof}\n We begin with the first asymptotic.\n Equation \\eqref{eq:MinorPerturb} states that to leading order,\n \\[\n p_{k}^{(0)}(x) = \\frac{p_{k-1}{(x)}}{a_1}.\n \\]\n Hence, for the first identity, we have\n \\begin{align*}\n \\sum_{j=1}^n \n \\frac{p_{k-1}{(\\lambda_j)}}{a_1}\n p_k'(\\lambda_j)\n \\cdot q_j^2\n &=\n \\frac{k }{a_1a_k}. 
\n \\end{align*}\n\n On the other hand, we also have the subleading behavior of $p_{k}^{(0)}$ given by\n \\begin{equation*}\n \\begin{aligned}\n p^{(0)}_{k}(x) - \\frac{1}{a_1}p_{k-1}(x)\n &=\n \\frac{a_{k-1}-a_k}{a_{k}a_1}p_{k-1}(x) \\\\\n &+\n \\frac{a_{k-1}}{a_{k}}\\sum_{j=1}^{k-1} (b_{j} - b_{j+1})p^{(j-1)}_{k-1}(x)p_{j}^{(0)}(x) \n \\\\\n &+\n \\frac{a_{k-1}}{a_{k}}\\sum_{j=1}^{k-2} (a_{j} - a_{j+1})\n (\n p^{(j)}_{k-1}(x)p_{j}^{(0)}(x) \n +p^{(j-1)}_{k-1}(x)p_{j+1}^{(0)}(x) \n ). \n\\end{aligned}\n\\end{equation*}\nHence, on scaling up by $n,$ we get locally uniformly\n \\begin{equation}\n \\label{eq:perturbationlimit}\n \\begin{aligned}\n \\frac{n}{4}(p^{(0)}_{k}(x) - \\frac{1}{a_1}p_{k-1}(x))\n &\\to\n (a_{k-1}-a_k)U_{k-1}(x) \\\\\n &+\n \\sum_{j=1}^{k-1} (b_{j} - b_{j+1})U_{k-j-1}(x)U_{j-1}(x)\n \\\\\n &+\n \\sum_{j=1}^{k-2} (a_{j} - a_{j+1})\n (\n U_{k-j-2}(x)U_{j-1}(x)\n +U_{k-j-1}(x)U_{j}(x)\n ). \n\\end{aligned}\n\\end{equation}\nWe then need to evaluate the integral of this asymptotic against $U_k'(x)\\mathfrak{s}(x)\\,dx,$ for which purpose, we will need the following Chebyshev polynomial identities\n\\begin{align}\n U_k'(x) &= \\sum_{j=0}^{k\/2} 2(k-2j) U_{k-2j-1}(x) \\label{eq:dU} \\\\\n U_k(x)U_\\ell(x) &= \\sum_{j=0}^{\\ell} U_{k-\\ell+2j}(x), \\text{ if } k \\geq \\ell. \\label{eq:UU}\n\\end{align}\nBy parity considerations, some terms cancel. 
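As a quick numerical sanity check (not part of the argument), the identities \eqref{eq:dU} and \eqref{eq:UU} can be verified by generating the $U_k$ from their three-term recurrence:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Chebyshev polynomials of the second kind via the recurrence
# U_0 = 1, U_1 = 2x, U_{k+1}(x) = 2x U_k(x) - U_{k-1}(x).
def cheb_U(kmax):
    U = [Polynomial([1]), Polynomial([0, 2])]
    for _ in range(2, kmax + 1):
        U.append(Polynomial([0, 2]) * U[-1] - U[-2])
    return U

U = cheb_U(12)
x = np.linspace(-0.9, 0.9, 7)  # sample points in (-1, 1)

# (eq:dU): U_k'(x) = sum_j 2(k-2j) U_{k-2j-1}(x), over j with k-2j-1 >= 0.
k = 5
lhs = U[k].deriv()(x)
rhs = sum(2 * (k - 2 * j) * U[k - 2 * j - 1](x) for j in range((k + 1) // 2))
ok_dU = np.allclose(lhs, rhs)

# (eq:UU): U_k(x) U_l(x) = sum_{j=0}^{l} U_{k-l+2j}(x), for k >= l.
k, l = 7, 3
ok_UU = np.allclose(U[k](x) * U[l](x), sum(U[k - l + 2 * j](x) for j in range(l + 1)))
```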
For any $1 \\leq j \\leq k-1,$ $U_{k-j-1}(x)U_{j-1}(x)$ is a sum over $U_{p}(x)$ where $p$ has the same parity as $k.$ In particular, it follows that\n\\[\n \\int_{-1}^{1} U_k'(x)\n \\left\\{\n \\sum_{j=1}^{k-1} (b_{j} - b_{j+1})U_{k-j-1}(x)U_{j-1}(x)\n\\right\\}\n \\mathfrak{s}(x)\\,dx\n =0.\n\\]\n\n\nOn the other hand, for $1 \\leq j \\leq k-1$\n\\begin{align*}\n &\\int_{-1}^{1} U_k'(x)\n (\n U_{k-j-2}(x)U_{j-1}(x)\n +U_{k-j-1}(x)U_{j}(x)\n )\n \\mathfrak{s}(x)\\,dx \\\\\n &= \n \\sum_{\\ell=1}^{ j \\wedge (k-j-1) } 2(k-2\\ell)\n +\\sum_{\\ell=0}^{ j \\wedge (k-j-1) } 2(k-2\\ell).\n\\end{align*}\nDefine the array, for $1 \\leq j < k-1$\n \\[\n H_{k,j} = \\sum_{\\ell=1}^{j \\wedge (k-j-1)} 2(k-2\\ell),\n \\]\n and $H_{k,j} = 0$ for $j \\geq k-1.$\n Then we can express\n\\[\n n^{3\/2}\\sum_{j=1}^n p_k'(\\lambda_j)(p^{(0)}_{k}(\\lambda_j) - \\frac{1}{a_1}p_{k-1}(\\lambda_j))q_j^2 \\to \n \\sum_{j=1}^{k-1} 4(a_j - a_{j+1})(2H_{k,j}+2k).\n\\]\n\nFor the second term we begin with the observation that for any $k \\in {\\mathbb N}$\n\\begin{equation}\n\\label{eq:pkcoeffiecients}\np_k(x) = \\frac{1}{a_1\\cdots a_k} x^k - \\frac{b_1+\\cdots +b_k}{a_1\\cdots a_k}x^{k-1} + \\cdots\n\\end{equation}\nTherefore,\n\\[\n p_{k+1}'(x) - \\frac{(k+1)p_k(x)}{a_{k+1}}\n = \n -k\\frac{b_1+\\cdots +b_{k+1}}{a_1\\cdots a_{k+1}}x^{k-1}\n +(k+1)\\frac{b_1+\\cdots +b_{k}}{a_1\\cdots a_{k+1}}x^{k-1}\n +O(x^{k-2}).\n\\]\nHence,\n\\begin{align*}\n n^{3\/2}\n\\sum_{j=1}^n \n \\frac{p_{k-1}{(\\lambda_j)}}{a_1}\n p_{k+1}'(\\lambda_j)\\cdot q_j^2\n =\n \\frac{n^{3\/2}(b_1+\\cdots + b_k-kb_{k+1}) }{a_1a_k a_{k+1}}.\n \\end{align*}\nWe again use equation \\eqref{eq:perturbationlimit} but observe that the parity is the opposite of the previous case, which leaves us with the terms of the form\n \\[\n \\begin{aligned}\n \\frac{n^{3\/2}}{4}\n &\\sum_{j=1}^n \n \\biggl(p^{(0)}_{k}(\\lambda_j) - \\frac{p_{k-1}(\\lambda_j)}{a_1}\\biggr)\n p_{k+1}'(\\lambda_j)\\cdot q_j^2 \\\\\n &\\to\n \\int_{-1}^{1} 
U_{k+1}'(x)\n \\left\\{\n \\sum_{j=1}^{k-1} (b_{j} - b_{j+1})U_{k-j-1}(x)U_{j-1}(x)\n\\right\\}\n \\mathfrak{s}(x)\\,dx \\\\\n &=\\sum_{j=1}^{k-1}(b_j-b_{j+1})\\sum_{m=1}^{(k-j)\\wedge(j)}2(k+1-2m).\n \\end{aligned}\n \\]\n\nFor the final term, we have by degree considerations that\n\\[\n n^{3\/2}\n\\sum_{j=1}^n \n \\frac{p_{k-1}{(\\lambda_j)}}{a_1}\n p_{k-1}'(\\lambda_j)\\cdot q_j^2\n =\n 0.\n\\]\nAnd we have that\n \\[\n \\begin{aligned}\n \\frac{n^{3\/2}}{4}\n &\\sum_{j=1}^n \n \\biggl(p^{(0)}_{k}(\\lambda_i) - \\frac{p_{k-1}(\\lambda_i)}{a_1}\\biggr)\n p_{k-1}'(\\lambda_j)\\cdot q_j^2 \\\\\n &\\to\n \\int_{-1}^{1} U_{k-1}'(x)\n \\left\\{\n \\sum_{j=1}^{k-1} (b_{j} - b_{j+1})U_{k-j-1}(x)U_{j-1}(x)\n\\right\\}\n \\mathfrak{s}(x)\\,dx \\\\\n &=\\sum_{j=1}^{k-1}(b_j-b_{j+1})\\sum_{m=1}^{(k-j)\\wedge(j)}2(k+1-2m),\n \\end{aligned}\n \\]\n where we observe the limit agrees with that of the previous case.\n \n Lastly we use the identity\n \\begin{equation}\n \\label{eq:Hdiff}\n H_{k,j}-H_{k,j-1} = 2(k-2j)\n \\end{equation}\n for $j0$ is the learning rate. $U$ and $V$ are the user profile matrix and item profile matrix with each row as a profile, and $\\nabla_{u_i}F(U,V)$ and $\\nabla_{v_j}F(U,V)$ can be computed as follows:\n\\begin{align}\n \\nabla_{u_i}F(U,V) = - \\frac{2}{M_{u_i}} \\sum_{j:(i,j)\\in \\mathcal{M}} v_j (r_{ij} - \\langle{u_i}, v_j\\rangle) \\label{gradient_U}\\\\\n \\nabla_{v_j}F(U,V) = - \\frac{2}{M_{v_j}} \\sum_{i:(i,j)\\in \\mathcal{M}} u_i (r_{ij} - \\langle{u_i}, v_j\\rangle) \\label{gradient_V}\n\\end{align}\n\n\\subsection{Security Model}\nWe assume all participants as well as the server if there is any are \\textit{honest-but-curious} (a.k.a. semi-honest). An honest-but-curious participant follows the protocol honestly but tries to infer private information from the intermediate information it knows. 
\n\n\\section{Federated MF and Privacy Threats} \\label{fedmf_privacy_threats}\n\n\\begin{figure*}[!htb]\n\\centering \n\\subfigure[Horizontal Federated MF]{\n\\label{Fig.1.1}\n\\includegraphics[width=0.55\\linewidth]{fig\/horizontal_federated_MF_data.pdf}}\n\\subfigure[Vertical Federated MF]{\n\\label{Fig.1.2}\n\\includegraphics[width=0.34\\linewidth]{fig\/vertical_FedMF.pdf}}\n\\subfigure[Federated Transfer MF (Common item set)]{\n\\label{Fig.1.3}\n\\includegraphics[width=0.55\\linewidth]{fig\/federated_transfer_MF_data.pdf}}\n\\caption{Data partition in horizontal federated MF, vertical federated MF, and federated transfer MF (assuming participants share a common item set). Orange, blue, and green rectangles denote the profile matrices of the item, user, and auxiliary data, respectively. The small rectangles in the user-item interaction matrix represent user ratings.}\n\\label{Fig.data_partition}\n\\end{figure*}\n\n\\begin{figure*}[htb]\n\\centering \n\\subfigure[Horizontal Federated MF]{\n\\label{Fig.2.1}\n\\includegraphics[width=0.49\\linewidth]{fig\/feature_space_horizontal_FedMF_2.pdf}}\n\\subfigure[Federated Transfer MF (Common item set)]{\n\\label{Fig.2.2}\n\\includegraphics[width=0.49\\linewidth]{fig\/feature_space_Fed_Trans_MF_2.pdf}}\n\\subfigure[Vertical Federated MF]{\n\\label{Fig.2.3}\n\\includegraphics[width=0.48\\linewidth]{fig\/feature_space_vertical_FedMF_2.pdf}}\n\\caption{The sparse representation of partitioned data and feature space in horizontal federated MF, vertical federated MF, and federated transfer MF (assuming participants share a common item set). Orange, blue, and green rectangles denote the profile matrices of the item, user, and auxiliary data, respectively. The small rectangles represent non-zero values.}\n\\label{Fig.sparse_data_partition}\n\\end{figure*}\n\n\n\nIn this section, we discuss federated MF in three settings that differ in data partition. 
Then we investigate how privacy can be breached by adversarial participants in each setting. For simplicity and without loss of generality, we consider FL systems consisting of two participants, $P^{A}$ and ${P^B}$. \nFig. \\ref{Fig.data_partition} compares the data partition in each setting of federated MF discussed in this paper. \nWe adopt a sparse representation of partitioned data in Fig.\\ref{Fig.sparse_data_partition} to demonstrate the nature of horizontal, vertical, and transfer federated learning. In the horizontal FL setting (Fig. \\ref{Fig.2.1}), participants share the same feature space. In the vertical FL setting (Fig.\\ref{Fig.2.3}), participants hold heterogeneous feature spaces, and only $P^A$ holds the ratings. In federated transfer MF (Fig.\\ref{Fig.2.2}), by contrast, participants share partial models (e.g., item profiles) for knowledge transfer.\n\n\\subsection{Horizontal Federated MF}\nIn horizontal federated MF, $P^A$ and $P^B$ share the same user-item interaction matrix (i.e., the same user and item feature space), as shown in Fig. \\ref{Fig.1.1} and Fig. \\ref{Fig.2.1}. Therefore, each participant holds the profiles of all users and items, and can locally compute gradients of the whole MF model. Only model aggregation requires communication between $P^A$ and $P^B$~\\cite{mcmahan2016communication}. In model aggregation, the global user profiles matrix is computed by $U_{Global} = \\frac{1}{2} (U_{P^A} + U_{P^B})$, and the item profiles matrix by $V_{Global} = \\frac{1}{2} (V_{P^A} + V_{P^B})$. $P^A$ can compute the gradient $\\nabla_{U}F_B(U,V)$ of $P^B$ following\n\\begin{align*}\n & \\nabla_{U}F_B(U^{T-1},V^{T-1}) \\\\\n & = \\frac{1}{\\gamma} (2 \\cdot U_{Global}^{T} - U_{P^A}^{T} - U^{T-1}) - 2 \\lambda_u U^{T-1}\n\\end{align*}\nwhere $T$ is the round index. The user-item interaction matrix is sparse; that is, $M = \\Theta(n+m)$, which is much smaller than the number of potential ratings $n \\cdot m$~\\cite{nikolaenko2013privacy}. 
As a result, one SGD update is very likely to involve a single rating record for each item or user. Therefore, according to Equation \\ref{gradient_U}, $P^A$ can easily find the $(i,j)$ pair and the corresponding $v_j$, by checking the gradient and comparing $\\nabla_{u_i}F_B(U,V)$ to each of $\\{v_j\\}_{j=1}^{m}$ as well as comparing $\\nabla_{v_j}F_B(U,V)$ to each of $\\{u_i\\}_{i=1}^n$. Then, $P^A$ further infers the private rating score by\n\\begin{equation*}\n \\hat{r}_{ij} = - \\frac{\\nabla_{u_i}F_B(U^{T-1},V^{T-1})}{2\\cdot v_j^{T-1}} + \\langle {u_i^{T-1}}, v_j^{T-1} \\rangle.\n\\end{equation*}\nThis way, $P^A$ may complete the inference attack and extract the raw private user preference data $(i, j, \\hat{r}_{ij})$ of $P^B$ from the plaintext global model in horizontal federated MF. \n\n\\subsection{Vertical Federated MF}\nIn vertical federated MF, as shown in Fig. \\ref{Fig.1.2} and Fig. \\ref{Fig.2.3}, $P^A$ holds the user-item interaction matrix, and $P^B$ holds some auxiliary data of users (or items). We adopt the model presented in~\\cite{5197422} to leverage auxiliary data provided by $P^B$ in vertical federated MF. \nFor each user $i$, $P^B$ holds a distinct factor vector $y_a \\in \\mathbb{R}^d$ corresponding to each attribute. The user $i$ can thus be described through the set of user-associated attributes $A(u)$ as $\\sum_{a\\in A(u)} y_a$.\nFor the vertical federated MF model, Equation \\ref{mf_optimization} can be modified as follows:\n\\begin{align*}\n & \\min_{U, V} \\frac{1}{M} \\sum_{(i,j) \\in \\mathcal{M}} (r_{ij} - \\langle (u_i + k_i), {v_j}\\rangle)^2 + \\\\\n & \\lambda_u \\sum_{i\\in [n]} ||u_i||_2^2 + \\lambda_v \\sum_{j\\in [m]} ||v_j||_2^2,\n\\end{align*}\nwhere $k_i = |N(u)|^{-0.5}\\sum_{l\\in N(u)} x_l + \\sum_{a\\in A(u)} y_a$ is the auxiliary information of user $i$. 
Here $x$ encodes the implicitly preferred item set $N(u)$, and $y$ encodes $i$'s attributes $A(u)$ (e.g., demographic info).\n\nTo conduct vertical federated MF, $P^B$ locally computes and sends $k_i$ to $P^A$, and $P^A$ sends nothing to $P^B$. Therefore, $P^A$ has no privacy leakage to $P^B$, while $P^B$ leaks $k_i$ to $P^A$. In such a setting, user ID leakage during the user alignment stage causes a major privacy threat.\n\n\\subsection{Federated Transfer MF}\nWithout loss of generality, in federated transfer MF, we assume $P^A$ and $P^B$ hold ratings given by different users on the same set of items, and $P^A$ tries to infer private data of $P^B$. That is, both $P^A$ and $P^B$ hold the item profiles matrix $V$. $P^A$ holds $U^A$, and $P^B$ holds $U^B$, for their respective users. To train the model, each participant locally conducts SGD to update local models. For model aggregation, only $V$ is aggregated by the participants. \n$P^A$ can learn the gradient $\\nabla_V F_B(U^B,V)$ of $P^B$ as follows:\n\\begin{align*}\n & \\nabla_{V}F_B({U^B}^{T-1},V^{T-1}) \\\\\n & = \\frac{1}{\\gamma} (2 \\cdot V_{Global}^{T} - V_{P^A}^{T} - V^{T-1}) - 2 \\lambda_v V^{T-1}\n\\end{align*}\nAs the interaction matrix is sparse for each round, it is reasonable to assume $\\nabla_{V}F_B({U^B}^{T-1},V^{T-1}) = -2 {u^B_i}^{T-1} (r_{ij} - \\langle {u^B_i}^{T-1}, {v_j}^{T-1}\\rangle)$ for a user $i$. \nBy collecting the gradients of several steps and assuming that $u_i^B$ does not change, which is reasonable when the model is nearly converged, we can use iterative methods such as Newton's method to approximate the numeric value of $u^B_i$, as shown in~\\cite{chai2019secure}. 
After computing $u^B_i$, the reconstructed rating score $\\hat{r^B}_{ij}$ of $P^B$ can easily be computed as follows:\n\\begin{equation*}\n \\hat{r^B}_{ij} = - \\frac{\\nabla_{v_j}F_B({U^B}^{T-1},V^{T-1})}{2 \\cdot {u^B}_i^{T-1}} + \\langle {{u^B_i}^{T-1}}, v_j^{T-1} \\rangle.\n\\end{equation*}\nThus, $P^A$ completes the inference attack and extracts user profiles and the corresponding ratings of $P^B$ from the plaintext global model in federated transfer MF. However, as the users are not aligned between participants, the user ID information cannot be inferred by $P^A$. \n\nIt is worth noting that some works denote participants sharing the same set of items and a different set of users as \\textit{horizontal federated recommender systems}, and participants sharing the same set of users and a different set of items as \\textit{vertical federated recommender systems}, based on whether the users should be aligned before FL training. Both settings, however, demonstrate the nature of federated transfer learning, where neither the global model is shared among participants, nor the feature space is fully partitioned without intersection. Privacy attacks on both settings are the same during FL training.\nTherefore, we denote both settings as federated transfer MF in this paper. 
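The rating-reconstruction step that concludes both the horizontal and the transfer attack can be sketched as follows. This is an illustrative NumPy snippet assuming the attacker has already matched the leaked gradient to its $(i,j)$ pair and knows the relevant profile vectors; `reconstruct_rating` is a hypothetical helper name, not from the paper.

```python
import numpy as np

def reconstruct_rating(grad_u, u, v):
    """Invert a single-record gradient grad_u = -2 * v * (r - <u, v>)
    to recover the private rating r, coordinate-wise as in the formula
    for \\hat{r}_{ij}: r = -grad_u / (2 v) + <u, v>.
    Any nonzero coordinate of v works; we average over them."""
    mask = np.abs(v) > 1e-12
    candidates = -grad_u[mask] / (2.0 * v[mask]) + u @ v
    return float(np.mean(candidates))

# Simulate the victim's leaked single-record gradient for a secret rating.
rng = np.random.default_rng(1)
u, v = rng.normal(size=4), rng.normal(size=4)
r_secret = 3.5
grad_u = -2.0 * v * (r_secret - u @ v)
print(round(reconstruct_rating(grad_u, u, v), 6))  # prints 3.5
```

The inversion is exact because each coordinate of the gradient determines the same residual $r - \langle u, v\rangle$.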
\n\n\\subsection{Comparison}\\label{Comparison}\n\\begin{table*}[!htb]\n\\centering\n\\footnotesize\n \\begin{tabular}{| c || c | c | c |}\n \\hline \n Problem setting & Parameter partition in each party & Gradient computation & Resilience against inference attack \\\\\n \\hline \n \\hline\n Horizontal FedMF & The whole model params & Locally & Weak \\\\\n \\hline\n Vertical FedMF & Partial params without shared params & Collaboratively & P$^A$ strong, P$^B$ weak \\\\\n \\hline\n Federated Transfer MF & Partial params with shared params & Locally & Medium \\\\\n \\hline\n\\end{tabular}\n\\caption{Comparison of different problem settings including horizontal and vertical federated MF as well as federated transfer MF. FedMF denotes federated MF.}\\label{table_comparison}\n\\end{table*}\n\nTab. \\ref{table_comparison} compares the three settings in terms of how the model is updated, how the model parameters are partitioned, and how resilient the FL system is against inference attacks. \nIn horizontal federated MF, all participants share ratings from the same set of users and items. Therefore, each participant locally holds the whole user profiles matrix and item profiles matrix for local SGD. For federated transfer MF, participants only share the same set of users (or items); each participant locally holds its item (or user) profiles sub-matrix and the global user (or item) profiles matrix for local SGD. For vertical federated MF, one party holds the rating data and the other holds auxiliary data; each party holds partial parameters, with shared parameters such as the user profiles matrix.\nFor both horizontal federated MF and federated transfer MF, clients can locally conduct SGD optimization without the need for communication. Participants only need to exchange parameters during the model aggregation process. 
For vertical federated MF, two participants need to collaboratively compute the estimated rating for each update, which dramatically increases the communication cost. The resilience of each setting against the inference attack is also shown. Horizontal federated MF leaks the most private information, including user ID and user preference data. For vertical federated MF, recommender $P^A$ leaks no information to data provider $P^B$, while the data provider sends intermediate data to the recommender. The user ID is breached for both participants. For federated transfer MF, only private rating data and user profiles are leaked, and no user ID is breached.\n\n\n\\section{Privacy Preservation in Federated MF}\\label{priv_prev}\nAccording to the privacy threats investigated in section \\ref{fedmf_privacy_threats}, we give some advice on privacy preservation in federated MF.\nFor horizontal federated MF, the global user and item profile matrices computed by aggregation should be protected against each participant. For vertical federated MF, the auxiliary data provider should protect the computed features it sends to the recommender. For federated transfer MF, the shared user or item profile matrix should be kept secret from any honest-but-curious participant throughout the FL training, as the rating score and private profile can potentially be inferred from it.\n\nTo keep intermediate parameters private, there are mainly three types of approaches: \\textit{cryptography-based}, \\textit{obfuscation-based}, and \\textit{hardware-based} approaches. Cryptography-based approaches generally use HE and MPC to keep intermediate transactions private. Obfuscation-based approaches such as DP obfuscate private data by randomization, generalization, or suppression. 
Hardware-based approaches rely on a trusted execution environment (TEE) to conduct FL training in a trusted enclave.\nAmong cryptography-based approaches, fully homomorphic encryption can be introduced to avoid decryption during training~\\cite{kim2016efficient}. Secret sharing schemes can also be introduced following a two-server architecture~\\cite{cryptoeprint:2011:535}. \nSince the user-item interaction matrix is sparse, applying DP may introduce too much noise and make the model unusable. TEE can also be applied by encrypting private data and conducting private training inside the TEE~\\cite{CHEN202069}.\n\n\\section{Conclusion}\\label{conclusion}\nWe identify and formulate three types of federated MF problems based on the partition of feature space. Then, we demonstrate the privacy threats against each type of federated MF. We show how the private user preference data, the private user\/item profiles matrix, and user IDs can potentially be leaked to honest-but-curious participants. Finally, we discuss privacy-preserving approaches to protect privacy in federated MF. \nFor future work, we will experimentally study the power of the proposed privacy attacks by measuring the portion and accuracy of the inferred private data. Privacy threats against alternating least squares-based MF and other recommender systems also require further comprehensive study.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{named}\n\n\\section{Introduction}\nFederated learning (FL) enables multiple parties to collaboratively train machine learning models without exposing one's own data. It addresses the data-silo and privacy problems together. \nIn this paper, we consider a typical federated learning scenario in recommender systems. \nTaking the collaboration between a movie recommender and a data provider as an example, the intersection of their user spaces is large. 
The recommender records the users' rating, browsing, and watching history of the recommended movies, while the data provider owns the demographic information. Their feature spaces are very different. We propose a method to integrate the features of these two parties in a secure and privacy-preserving way and improve the recommendation performance.\n\n\n\nFeature interaction is a widely used method to integrate features and increase a model's expressiveness. For example, the movie recommender has features about the movie genres, and the data provider has user gender information. Through modeling the interaction between ``gender-male'', ``gender-female'', ``genres-romance'' and ``genres-adventure'' features, we can capture the preferences of different user groups, such as ``female on romance'' and ``male on adventure''. Learning interactions among features is crucial for improving model performance. In the federated learning setting, these features are stored in the movie recommender and the data provider, respectively. They cannot leave their own data repositories and are never transmitted, due to privacy concerns. Previous learning algorithms, which build a model on centrally collected data, are not able to model the interactions between features distributed on these two parties. Besides, although we can model the feature interaction through the weight parameters of feature pairs and learn the model by exchanging weight parameters as previous FL algorithms do, the communication cost could be extremely high. Taking the second-order feature interaction as an example, the cost is $O(d_A\\times d_B)$, where $d_A$ and $d_B$ are the feature dimensions. This is unacceptable for iterative optimization algorithms, which are widely adopted in FL. Furthermore, to avoid information leakage, the weight parameters are usually encrypted. The bit lengths of the parameters increase in the encrypted space. 
Therefore, the communication cost further increases. \n\nIn this paper, we propose the \\textbf{Federated Collective Matrix Factorization} (FedCMF) algorithm. As shown in Figure~\\ref{fig_intro_overview}, FedCMF explicitly captures the feature interactions inside each party as well as between two parties in a privacy-preserving way. Only the model parameters and intermediate results are transmitted between parties instead of the raw data. As we prove in the following sections, the communication cost between two parties is at most $O(d\\times d')$, where $d'$ is a hyper-parameter and much smaller than the original dimensions $d\\in \\{d_A, d_B\\}$. FedCMF is based on traditional collective matrix factorization (CMF), proposed in~\\cite{rendle2010factorization}. Traditional CMF parameterizes the weight of the feature interaction as the inner product of the embedding vectors of the constituent features. By learning an embedding vector for each feature, traditional CMF can estimate the weight for any interacted features. However, traditional CMF is not designed for decentralized data. In other words, all the interacted features should be collected together on a central server before the traditional CMF can start to learn the model. For the federated learning setting, we propose a federated collective matrix factorization by reformulating the objective function of the traditional CMF. FedCMF exchanges embedding vectors and their partial gradients between parties to capture the feature interactions, without collecting raw data from isolated parties. \n\nNonetheless, exchanging clear-text intermediate results in FedCMF incurs high privacy risks and is vulnerable to model inversion and data reconstruction attacks. Theoretically, we prove that the model and raw data could be fully reconstructed by inserting a few known instances into the other party's data. 
To enhance the model security, we propose a secure computation protocol, where the intermediate results are encrypted by secret sharing and exchanged via an MPC protocol~\\cite{7958569}. With the secure protocol, no bit of information will be leaked between parties. For tasks that have high-dimensional sparse features, to protect the positional information of the non-null features, the secure computation protocol has to be conducted over all features, including the zero-value (and blank) features. This is not practical. Therefore, we further propose a secure sparse inner product protocol based on homomorphic encryption (HE). The secure computation is only conducted on non-null features, and the intermediate results updated by the non-null features are encrypted by homomorphic encryption. The communication cost further decreases to $O(\\hat{d}\\times d')$, where $\\hat{d}\\in \\{\\hat{d}_A, \\hat{d}_B\\}$ is the number of non-null features in the instance. In most high-dimensional sparse cases, $\\hat{d}$ is much smaller than the original dimension $d$ in each instance. \n\n\nIn summary, our contributions are threefold:\n\\begin{itemize}\n \\item We propose a new FedCMF algorithm that models feature interactions between parties with small communication cost and is secure against model inversion and data reconstruction attacks.\n \\item We also propose a secure sparse inner product protocol to handle high-dimensional sparse features, which commonly exist in recommender systems and have not been studied in federated learning so far.\n\t\\item We evaluate FedCMF over three real-world datasets, showing fast convergence and promising performance improvement.\n\\end{itemize}\n\nIn the following parts of this article, section~\\ref{problemdefinition} gives a formal definition of federated learning with feature interactions. In section~\\ref{vfCMF}, we introduce the traditional CMF and the federated CMF. 
In section~\\ref{securevfCMF}, we first theoretically prove the information leakage when FedCMF transmits clear-text intermediate results. Then, we introduce secure computation protocols. In section~\\ref{exp}, we conduct experiments to evaluate the proposed FedCMF. We summarize the related work in section~\\ref{relatedwork} and conclude the paper in section~\\ref{conclusion}.\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzhcxa b/data_all_eng_slimpj/shuffled/split2/finalzzhcxa new file mode 100644 index 0000000000000000000000000000000000000000..04d85200d5739c97cfef8d140d2f3ca2b8d9f4ab --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzhcxa @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nGraph Neural Networks (GNNs) have emerged as a powerful tool for extracting information from graph-structured data and have achieved state-of-the-art performance on a variety of machine learning tasks. However, compared to research on constructing GNNs for unsigned and undirected graphs, and graphs with multiple types of edges, GNNs for graphs where the edges have a natural notion of sign, direction, or both, have received relatively little attention. \n\nThere is a demand for such tools because many important and interesting phenomena are naturally modeled as signed and\/or directed graphs, i.e., graphs in which objects may have either positive or negative relationships, and\/or in which such relationships are not necessarily symmetric~\\cite{he2022pytorch}. For example, in the analysis of social networks, positive and negative edges could model friendship or enmity, and directional information could model the influence of one person on another~\\cite{konect:kumar2016wsn,huang2021sdgnn}. 
Signed\/directed networks also arise when analyzing time-series data with lead-lag relationships~\\cite{bennett2021detection}, detecting influential groups in social networks~\\cite{he2021digrac}, and computing rankings from pairwise comparisons~\\cite{he2022gnnrank}. Additionally, signed and directed networks are a natural model for group conflict analysis~\\cite{zheng2015signed}, modeling the interaction network of the agents during a rumor spreading process~\\cite{hu2013bipartite}, and maximizing positive influence while formulating opinions~\\cite{ju2020new}. \nBroadly speaking, most GNNs may be classified as either spectral or spatial. Spatial methods typically define convolution on graphs as a localized aggregation operator whereas spectral methods rely on the eigen-decomposition of a suitable graph Laplacian. The goal of this paper is to introduce a novel Laplacian and an associated spectral GNN for signed directed graphs. While spatial GNNs exist, such as SDGNN \\cite{huang2021sdgnn}, SiGAT \\cite{huang2019signed}, SNEA \\cite{li2020learning}, and SSSNET \\cite{he2022sssnet} proposed for signed (and possibly directed) networks, this is one of the first works to propose a spectral GNN for such networks.\n \nA principal challenge in extending traditional spectral GNNs to this setting is to define a proper notion of the signed, directed graph Laplacian. Such a Laplacian should be positive semidefinite, have a bounded spectrum when properly normalized, and encode information about both the sign and direction of each edge. Here, we unify the magnetic Laplacian, which has been used in \\cite{zhang2021magnet} to construct a GNN on an (unsigned) directed graph, with a signed Laplacian which has been used for a variety of data science tasks on (undirected) signed graphs~\\cite{kunegis2010spectral,zheng2015spectral, mercado2016clustering,Cucuringu2020Reg}. 
Importantly, our proposed matrix, which we refer to as the \\emph{magnetic signed \nLaplacian}, reduces to either the magnetic Laplacian or the signed Laplacian when the graph is directed, but not signed, or signed, but not directed.\n \nAlthough this magnetic signed Laplacian is fairly straightforward to obtain, it is novel and surprisingly powerful: We show that our proposed \\emph{\\textbf{M}agnetic \\textbf{S}igned \\textbf{GNN} ({MSGNN})} is effective for a variety of node clustering and link prediction tasks. Specifically, we consider several variations of the link prediction task, some of which prioritize signed information over directional information, some of which prioritize directional information over signed information, while others emphasize the method's ability to extract both signed and directional information simultaneously. \n\n\\looseness=-1 In addition to testing {MSGNN} on established data sets, we also devise a novel synthetic model which we call the \\emph{Signed Directed Stochastic Block Model (SDSBM)}, which generalizes both the (undirected) Signed Stochastic Block Model from \\cite{he2022sssnet} and the (unsigned) Directed Stochastic Block Model from \\cite{he2021digrac}. Analogous to these previous models, our SDSBM can be defined by a meta-graph structure and additional parameters describing density and noise levels. We also introduce a number of signed directed networks for link prediction tasks using lead-lag relationships in real-world financial time series. \n\n\\textbf{Main Contributions.} The main contributions of our work are: (1) We devise a novel matrix called the magnetic signed Laplacian, which can naturally be applied to signed and directed networks. The magnetic signed Laplacian is Hermitian, positive semidefinite, and the eigenvalues of its normalized counterpart lie in $[0,2]$. They reduce to existing Laplacians when the network is unsigned and\/or undirected. 
\n(2) We propose an efficient spectral graph neural network architecture, {MSGNN}, based on this magnetic signed Laplacian, which attains leading performance on extensive node clustering and link prediction tasks, including novel tasks that consider edge sign and directionality jointly. To the best of our knowledge, this is the first work to evaluate GNNs\\footnote{Some previous work, such as \\cite{huang2021sdgnn}, evaluates GNNs on signed and directed graphs. However, they focus on tasks where either only signed information is important, or where only directional information is important.} on tasks that are related to both edge sign and directionality. \n(3) We introduce a novel synthetic model for signed and directed networks, called Signed Directed Stochastic Block Model (SDSBM), and also contribute a number of new real-world data sets constructed from lead-lag relationships of financial time series data. \n\\section{Related Work} \\label{sec:related} \nIn this section, we review related work constructing neural networks for directed graphs and signed graphs. We refer the reader to \\cite{he2022pytorch} for more background information.\n\nSeveral works have aimed to define neural networks on directed graphs by constructing various directed graph Laplacians and defining convolution as multiplication in the associated eigenbasis. \\cite{ma:spectralDGCN2019} defines a directed graph Laplacian by generalizing identities involving the undirected graph Laplacian and the stationary distribution of a random walk. \\cite{Tong2020DigraphIC} uses a similar idea, but with PageRank in place of a random walk. \\cite{tong:directedGCN2020} constructs three different first- and second-order symmetric adjacency matrices and uses these adjacency matrices to define associated Laplacians. Similarly, \\cite{monti:MotifNet2018} uses several different graph Laplacians based on various graph motifs. 
\n\n\nQuite closely related to our work, \\cite{zhang2021magnet} constructs a graph neural network using the magnetic Laplacian. Indeed, in the case where all links are positive, our GNN exactly reduces to the one proposed in \\cite{zhang2021magnet}. Importantly, unlike the other directed graph Laplacians mentioned here, the magnetic Laplacian is a complex, Hermitian matrix rather than a real, symmetric matrix. We also note\n\\cite{he2021digrac}, which constructs a GNN for node clustering on directed graphs based on flow imbalance. \n\nAll of the above works are restricted to unsigned graphs, i.e., graphs with positive edge weights. However, there are also a number of neural networks introduced for signed (and possibly also directed) graphs, mostly focusing on the task of link sign prediction, i.e., predicting whether a link between two nodes will be positive or negative. SGCN by \\cite{derr2018signed} is one of the first graph neural network methods to be applicable to signed networks, using an approach based on balance theory~\\cite{harary1953notion}. However, its design is mainly aimed at undirected graphs. SiGAT~\\cite{huang2019signed} utilizes a graph attention mechanism based on \\cite{velivckovic2017graph} to learn node embeddings for signed, directed graphs, using a novel motif-based GNN architecture based on balance theory and status theory~\\cite{leskovec2010signed}. Subsequently, SDGNN by \\cite{huang2021sdgnn} builds upon this work by increasing its efficiency and proposing a new objective function. In a similar vein, SNEA \\cite{li2020learning} proposes a signed graph neural network for link sign prediction based on a novel objective function. In a different line of work,\n\\cite{he2022sssnet} proposes SSSNET, a GNN not based on balance theory designed for semi-supervised node clustering in signed (and possibly directed) graphs. 
\n\nAdditionally, we note that several GNNs~\\cite{schlichtkrull2018modeling, vashishth2019composition, zhang2020relational} have been introduced for multi-relational graphs, i.e., graphs with different types of edges. In such networks, the number of learnable parameters typically increases linearly with the number of edge types. Signed graphs, at least if the graph is unweighted or the weighting function $w$ only takes finitely many values, can be thought of as special cases of multi-relational graphs. However, in the context of (possibly weighted) signed graphs, there is an implicit relationship between the different edge-types, namely that a negative edge is interpreted as the opposite of a positive edge and that edges with large weights are deemed more important than edges with small weights. These relationships will allow us to construct a network with significantly fewer trainable parameters than if we were considering an arbitrary multi-relational graph.\n\nWe are aware of a concurrent preprint~\\cite{fiorini2022sigmanet} which also constructs a GNN, SigMaNet, based on a signed magnetic Laplacian. Notably, the signed magnetic Laplacian considered in \\cite{fiorini2022sigmanet} is different from the magnetic signed Laplacian proposed here. The claimed advantage of SigMaNet is that it does not require the tuning of a charge parameter $q$ and is invariant to, e.g., doubling the weight of every edge. In our work, for the sake of simplicity, we usually set $q=0.25$, except for when the graph is undirected (in which case we set $q=0$). However, a user may choose to also tune $q$ through a standard cross-validation procedure as in \\cite{zhang2021magnet}. 
Moreover, one can readily address the latter issue by normalizing the adjacency matrix via a preprocessing step (e.g., \\cite{CLONINGER2017370}).\nIn contrast to our magnetic signed Laplacian, in the case where the graph is not signed but is weighted and directed, the matrix proposed in \\cite{fiorini2022sigmanet} does not reduce to the magnetic Laplacian considered in \\cite{zhang2021magnet}. For example, denoting the graph adjacency matrix by $\\mathbf{A}$, consider the case where $0< \\mathbf{A}_{j,i}< \\mathbf{A}_{i,j}$. Let $m=\\frac{1}{2}(\\mathbf{A}_{i,j}+\\mathbf{A}_{j,i})$, $\\delta=\\mathbf{A}_{i,j}-\\mathbf{A}_{j,i}$, and let $\\mathbbm{i}$ denote the imaginary unit. Then the $(i,j)$-th entry of the matrix $\\mathbf{L}^\\sigma$ proposed in \\cite{fiorini2022sigmanet} is given by \n$ \\mathbf{L}^\\sigma_{i,j}=m\\mathbbm{i}$, whereas the corresponding entry of the unnormalized magnetic Laplacian is given by\n$(\\mathbf{L}^{(q)}_U)_{i,j}=m\\exp(2\\pi \\mathbbm{i} q \\delta).$ Moreover, while SigMaNet is in principle well-defined on signed and directed graphs, the experiments in \\cite{fiorini2022sigmanet} are restricted to tasks where only signed or directional information is important (but not both). \nIn our experiments, we find that our proposed method outperforms SigMaNet on a variety of tasks on signed and\/or directed networks. Moreover, we observe that the signed magnetic Laplacian $\\mathbf{L}^\\sigma$ proposed in \\cite{fiorini2022sigmanet} has an undesirable property when the graph is unweighted --- a node is assigned to have degree zero if it has an equal number of positive and negative connections. 
Our proposed Laplacian does not suffer from this issue.\n\\section{Proposed Method}\n\\subsection{Problem Formulation}\n\\label{sec:formulation}\n\\looseness=-1 Let $\\mathcal{G}=(\\mathcal{V}, \\mathcal{E}, w, \\mathbf{X}_\\mathcal{V})$ denote\na signed, and possibly directed, weighted graph with node attributes, \nwhere $\\mathcal{V}$ is the set of nodes (or vertices), $\\mathcal{E}$ is the set of (directed) edges (or links), and $w: \n\\mathcal{E}\\rightarrow (-\\infty, \\infty)\\setminus\\{0\\}$ is the weighting function. Let $\\mathcal{E}^+=\\{e\\in\\mathcal{E}: w(e)>0\\}$ denote the set of positive edges and let $\\mathcal{E}^-=\\{e\\in\\mathcal{E}: w(e)<0\\}$ denote the set of negative edges so that $\\mathcal{E}=\\mathcal{E}^+\\cup\\mathcal{E}^-.$\nHere, we do allow self-loops but not multiple edges; \nif $v_i, v_j\\in \\mathcal{V},$ there is at most one edge $e\\in\\mathcal{E}$ from $v_i$ to $v_j$. \nLet $n=|\\mathcal{V}|$, and let $d_{\\text{in}}$ be the number of attributes at each node, so that $\\mathbf{X}_\\mathcal{V}$ is an $n\\times d_\\text{in}$ matrix whose rows are the attributes of each node. \nWe let $\\mathbf{A} = (\\mathbf{A}_{i,j})_{1\\leq i,j\\leq n} $ denote the weighted, signed adjacency matrix, where $\\mathbf{A}_{i,j}=w((v_i,v_j))$ if $(v_i,v_j)\\in\\mathcal{E}$, and $\\mathbf{A}_{i,j}=0$ otherwise. \n\\subsection{Magnetic Signed Laplacian}\n\\label{sec:mag}\nIn this section, we define Hermitian matrices $\\mathbf{L}_U^{(q)}$ and $\\mathbf{L}_N^{(q)}$ which we refer to as the unnormalized and normalized magnetic signed Laplacian matrices, respectively. We first define a symmetrized adjacency matrix and an absolute degree matrix by\n$$\\Tilde{\\mathbf{A}}_{i,j} \\coloneqq \\frac{1}{2} ( \\mathbf{A}_{i,j} + \\mathbf{A}_{j,i} ), \\,\\:\\:1\\leq i,j\\leq n ,\\:\\:\n \\Tilde{\\mathbf{D}}_{i,i} \\coloneqq \\frac{1}{2}\\sum_{j=1}^n (|\\mathbf{A}_{i,j}|+|\\mathbf{A}_{j,i}|), \\:\\:1\\leq i\\leq n, $$\nwith $\\Tilde{\\mathbf{D}}_{i,j}=0$ for $i\\neq j$. 
Importantly, the use of absolute values ensures that the entries of $\\Tilde{\\mathbf{D}}$ are non-negative. Furthermore, it ensures that all $\\Tilde{\\mathbf{D}}_{i,i}$ will be strictly positive if the graph is connected. This is in contrast to the construction in \\cite{fiorini2022sigmanet} which will give a node degree zero if it has an equal number of positive and negative neighbors (for unweighted networks). To capture directional information, we next define a phase matrix $\\mathbf{\\Theta}^{(q)}$ by\n$\\mathbf{\\Theta}^{(q)}_{i,j} \\coloneqq 2\\pi q (\\mathbf{A}_{i,j} - \\mathbf{A}_{j,i}),$ where $q\\in\\mathbb{R}$ is the so-called ``charge parameter.\"\nIn our experiments, for simplicity, we set $q=0$ when the task at hand is unrelated to directionality, or when the underlying graph is undirected, and we set $q=0.25$ for all the other tasks (except in an ablation study on the role of $q$). \nWith $\\odot$ denoting elementwise multiplication, and $\\mathbbm{i}$ denoting the imaginary unit, we now construct a complex Hermitian matrix $\\mathbf{H}^{(q)}$ by\n$$\n \\mathbf{H}^{(q)} \\coloneqq \\Tilde{\\mathbf{A}} \\odot \\exp (\\mathbbm{i} \\mathbf{\\Theta}^{(q)}) \\, \n$$\nwhere \n$\\exp (\\mathbbm{i} \\mathbf{\\Theta}^{(q)})$ is defined elementwise by\n $ \\exp (\\mathbbm{i} \\mathbf{\\Theta}^{(q)})_{i,j} \\coloneqq \\exp(\\mathbbm{i} \\mathbf{\\Theta}^{(q)}_{i,j})$. \n \nNote that $\\mathbf{H}^{(q)}$ is Hermitian, as $\\Tilde{\\mathbf{A}}$ is symmetric and $\\mathbf{\\Theta}^{(q)}$ is skew-symmetric. \nIn particular, when $q=0$, we have $\\mathbf{H}^{(0)}=\\Tilde{\\mathbf{A}}$. Therefore, setting $q=0$ is equivalent to making the input graph symmetric and discarding directional information. In general, however, $\\mathbf{H}^{(q)}$ captures information about a link's sign, through $\\tilde{\\mathbf{A}}$, and about its direction, through $\\boldsymbol{\\Theta}^{(q)}$. 
\n\nWe observe that flipping the direction of an edge, i.e., replacing a positive or negative link from $v_i$ to $v_j$ with a link of the same sign from $v_j$ to $v_i$, corresponds to complex conjugation of $\\mathbf{H}^{(q)}_{i,j}$ (assuming either that there is not already a link from $v_j$ to $v_i$ or that we also flip the direction of that link if there is one). \nWe also note that if $q=0.25,$ $\\mathbf{A}_{i,j}=\\pm1$, and $\\mathbf{A}_{j,i}=0$, we have \n\\begin{equation*}\n \\mathbf{H}^{(0.25)}_{i,j}=\\frac{\\mathbbm{i}}{2}=-\\mathbf{H}^{(0.25)}_{j,i},\n\\end{equation*}\nregardless of the sign of $\\mathbf{A}_{i,j}$, since the sign of $\\Tilde{\\mathbf{A}}_{i,j}=\\pm\\frac{1}{2}$ cancels against that of the phase $\\exp(\\mathbbm{i}\\mathbf{\\Theta}^{(0.25)}_{i,j})=\\pm\\mathbbm{i}$. \nThus, a unit-weight edge from $v_i$ to $v_j$, of either sign, is treated as the opposite of a unit-weight edge from $v_j$ to $v_i$. \n\nGiven $\\mathbf{H}^{(q)}$, we next define the unnormalized magnetic signed Laplacian by \n\\begin{equation}\n \\mathbf{L}_{U}^{(q)} \\coloneqq \\Tilde{\\mathbf{D}} - \\mathbf{H}^{(q)} = \\Tilde{\\mathbf{D}} - \\Tilde{\\mathbf{A}} \\odot \\exp (\\mathbbm{i} \\mathbf{\\Theta}^{(q)}),\n\\end{equation}\nand also define the normalized magnetic signed Laplacian by\n\\begin{equation} \\mathbf{L}_{N}^{(q)} \\coloneqq \\mathbf{I} - \\left(\\Tilde{\\mathbf{D}}^{-1\/2}\\Tilde{\\mathbf{A}}\\Tilde{\\mathbf{D}}^{-1\/2}\\right) \\odot \\exp (\\mathbbm{i} \\mathbf{\\Theta}^{(q)}) \\, .\n\\end{equation}\n\nWhen the graph $\\mathcal{G}$ is directed, but not signed, $\\mathbf{L}_U^{(q)}$ and $\\mathbf{L}_N^{(q)}$ reduce to the magnetic Laplacians utilized in works such as \\cite{zhang2021magnet,fanuel2017magnetic, fanuel:MagneticEigenmaps2018} and \\cite{f2020characterization}. Similarly, when $\\mathcal{G}$ is signed, but not directed, $\\mathbf{L}_U^{(q)}$ and $\\mathbf{L}_N^{(q)}$ reduce to the signed Laplacian matrices considered in e.g., \\cite{kunegis2010spectral, Cucuringu2020Reg} and \\cite{ATAY2014165}. Additionally, when the graph is neither signed nor directed, they reduce to the standard normalized and unnormalized graph Laplacians~\\cite{defferrard2016convolutional}. 
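For readers who prefer code, the construction of $\mathbf{L}_U^{(q)}$ and $\mathbf{L}_N^{(q)}$ can be sketched in a few lines of NumPy. This is a minimal dense-matrix illustration of the formulas above, not a released implementation; the function name is ours, and we assume every node has at least one incident edge so that $\Tilde{\mathbf{D}}^{-1/2}$ is well-defined.

```python
import numpy as np

def magnetic_signed_laplacian(A, q=0.25, normalized=True):
    """Sketch of the magnetic signed Laplacian for a dense signed,
    directed adjacency matrix A (our illustrative helper)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    A_sym = 0.5 * (A + A.T)                          # symmetrized adjacency A~
    d = 0.5 * (np.abs(A) + np.abs(A).T).sum(axis=1)  # absolute degrees D~_{ii}
    Theta = 2.0 * np.pi * q * (A - A.T)              # skew-symmetric phase matrix
    phase = np.exp(1j * Theta)                       # elementwise exp(i Theta)
    if normalized:
        inv_sqrt_d = 1.0 / np.sqrt(d)                # assumes all degrees > 0
        A_norm = A_sym * np.outer(inv_sqrt_d, inv_sqrt_d)
        return np.eye(n) - A_norm * phase            # L_N^{(q)}
    return np.diag(d) - A_sym * phase                # L_U^{(q)} = D~ - H^{(q)}
```

On a toy directed 3-cycle with one negative edge, both matrices come out Hermitian and positive semidefinite, with the eigenvalues of the normalized version inside $[0,2]$, consistent with the theorems stated next.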
The following theorems show that $\\mathbf{L}_U^{(q)}$ and $\\mathbf{L}_N^{(q)}$ satisfy properties analogous to the traditional graph Laplacians. The proofs are in Appendix \\ref{app:proofs}.\n\n\n\\begin{theorem}\\label{thm: posdef} \nFor any signed directed graph $\\mathcal{G}$ as defined in Sec.~\\ref{sec:formulation}, $\\forall q\\in \\mathbb{R},$ both the unnormalized magnetic signed Laplacian $\\mathbf{L}_{U}^{(q)}$ and its normalized counterpart \n$\\mathbf{L}_{N}^{(q)}$ are positive semidefinite.\n\\end{theorem}\n\n\n\\begin{theorem}\\label{thm: normal02} For any signed directed graph $\\mathcal{G}$ as defined in Sec.~\\ref{sec:formulation}, $\\forall q\\in \\mathbb{R},$ the eigenvalues of the normalized magnetic signed Laplacian $\\mathbf{L}_N^{(q)}$ are contained in the interval $[0,2]$.\n\\end{theorem}\n\nBy construction, $\\mathbf{L}_U^{(q)}$ and $\\mathbf{L}_N^{(q)}$ are Hermitian, and Theorem~\\ref{thm: posdef} shows they are positive semidefinite. In particular, they are diagonalizable by an orthonormal basis of complex eigenvectors $\\mathbf{u}_1, \\ldots, \\mathbf{u}_n$ associated with real, nonnegative eigenvalues $\\lambda_1\\leq \\ldots\\leq \\lambda_n\\hspace{-.03in}=\\hspace{-.03in}\\lambda_{\\max}$. Thus, \nsimilar to the traditional normalized Laplacian, we may factor \n $\\mathbf{L}_N^{(q)}= \\mathbf{U} \\bm{\\Lambda} \\mathbf{U}^\\dagger$, \nwhere $\\mathbf{U}$ is an $n\\times n$ matrix whose $k$-th column is $\\mathbf{u}_k$, for $1\\leq k\\leq n$, $\\bm{\\Lambda}$ is a diagonal matrix with $\\bm{\\Lambda}_{k,k}=\\lambda_k$, and $\\mathbf{U}^{\\dagger}$ is the conjugate transpose of $\\mathbf{U}$. 
A similar formula holds for $\\mathbf{L}^{(q)}_U$.\n\n\\subsection{Spectral Convolution via the Magnetic Signed Laplacian}\n\nIn this section, we show how to use a Hermitian, positive semidefinite matrix $\\mathbf{L}$, such as the normalized or unnormalized magnetic signed Laplacian introduced in Sec.~\\ref{sec:mag}, to define convolution on a signed directed graph. This method is similar to the ones proposed for unsigned (possibly directed) graphs in, e.g., \\cite{kipf2016semi} and \\cite{zhang2021magnet}, but we provide details in order to keep our work reasonably self-contained.\n\nGiven $\\mathbf{L}$, let $\\mathbf{u}_1,\\ldots,\\mathbf{u}_n$ be an orthonormal basis of eigenvectors such that $\\mathbf{L}\\mathbf{u}_k=\\lambda_k\\mathbf{u}_k$, and let $\\mathbf{U}$ be an $n\\times n$ matrix whose $k$-th column is $\\mathbf{u}_k$, for $1\\leq k \\leq n$. For a signal $\\mathbf{x}:\\mathcal{V}\\rightarrow\\mathbb{C},$ we define its Fourier transform $\\widehat{\\mathbf{x}}\\in\\mathbb{C}^n$ by $ \\widehat{\\mathbf{x}}(k)= \\langle \\mathbf{x},\\mathbf{u}_k\\rangle\\coloneqq\\mathbf{u}_k^\\dagger\\mathbf{x},$ or, equivalently, $\\widehat{\\mathbf{x}}=\\mathbf{U}^\\dagger \\mathbf{x}$. \nSince $\\mathbf{U}$ is unitary, we readily obtain the Fourier inversion formula\n\\begin{equation}\\label{eqn: inversion}\n \\mathbf{x}=\\mathbf{U}\\widehat{\\mathbf{x}} = \\sum_{k=1}^n\\widehat{\\mathbf{x}}(k)\\mathbf{u}_k \\, .\n\\end{equation}\nAnalogous to the well-known convolution theorem in Euclidean domains, we define the \nconvolution of $\\mathbf{x}$ with a filter $\\mathbf{y}$ as multiplication in the Fourier domain, i.e., $\\widehat{\\mathbf{y}\\ast\\mathbf{x}}(k)=\\widehat{\\mathbf{y}}(k)\\widehat{\\mathbf{x}}(k)$. 
\nBy \\eqref{eqn: inversion}, this implies\n $\\mathbf{y}\\ast\\mathbf{x} = \\mathbf{U}\\text{Diag}(\\widehat{\\mathbf{y}})\\widehat{\\mathbf{x}} = (\\mathbf{U}\\text{Diag}(\\widehat{\\mathbf{y}})\\mathbf{U}^\\dagger)\\mathbf{x}$, where $\\text{Diag}(\\mathbf{z})$ denotes a diagonal matrix with the vector $\\mathbf{z}$ on its diagonal.\nTherefore, we say that $\\mathbf{Y}$ is a \\emph{generalized convolution matrix} if \n\\begin{equation}\\label{eqn: conv}\n \\mathbf{Y} = \\mathbf{U} \\bm{\\Sigma} \\mathbf{U}^\\dagger \\, ,\n\\end{equation}\nfor a diagonal matrix $\\bm{\\Sigma}.$ This is a natural generalization of the class of convolutions used in \\cite{bruna:spectralNN2014}. \n\nSome potential drawbacks exist when defining a convolution via \\eqref{eqn: conv}. First, it requires one to compute the eigen-decomposition of $\\mathbf{L}$, which is expensive for large graphs. Second, the number of trainable parameters equals the size of the graph (the number of nodes), rendering GNNs constructed via \\eqref{eqn: conv} prone to overfitting. To remedy these issues, we follow \\cite{Defferrard2018} (see also \\cite{hammond2011wavelets}) and observe that spectral convolution may also be implemented in the spatial domain via polynomials of $\\mathbf{L}$ by setting $\\bm{\\Sigma}$ equal to a polynomial of $\\bm{\\Lambda}$. 
This reduces the number of trainable parameters from the size of the graph to the degree of the polynomial and also enhances robustness \nto perturbations \\cite{levie2019transferability}.\nAs in \\cite{Defferrard2018}, we let $\\widetilde{\\bm{\\Lambda}}=\\frac{2}{\\lambda_{\\max}} \\bm{\\Lambda}-\\mathbf{I}$ denote the normalized eigenvalue matrix (with entries in $[-1,1]$) and choose \n$\\bm{\\Sigma} = \\sum_{k=0}^K\\theta_k T_k(\\widetilde{\\bm{\\Lambda}}),$\nfor some $\\theta_0,\\ldots,\\theta_K\\in\\mathbb{R}$, where for $0\\leq k \\leq K$, \n$T_k$ is the Chebyshev polynomial defined by $T_0(x)=1, T_1(x)=x,$ and $T_{k}(x)=2xT_{k-1}(x)-T_{k-2}(x)$ for $k\\geq 2.$ \nSince $\\mathbf{U}$ is unitary, we have $(\\mathbf{U} \\widetilde{\\bm{\\Lambda}} \\mathbf{U}^\\dagger)^k = \\mathbf{U}\\widetilde{\\bm{\\Lambda}}^k\\mathbf{U}^\\dagger$, and thus, letting $\\widetilde{\\mathbf{L}} \\coloneqq \\frac{2}{\\lambda_{\\text{max}}}\\mathbf{L}-\\mathbf{I}$, we have\n\\begin{equation}\\label{eqn: ChebNet}\n \\mathbf{Y}\\mathbf{x}=\\mathbf{U}\\sum_{k=0}^K\\theta_k T_k(\\widetilde{\\bm{\\Lambda}})\\mathbf{U}^\\dagger\\mathbf{x} = \\sum_{k=0}^K\\theta_k T_k(\\widetilde{\\mathbf{L}}) \\mathbf{x} \\, .\n\\end{equation}\nThis is the class of convolutional filters we will use in our experiments. However, one could also imitate Sec.~3.1 of \\cite{zhang2021magnet} to produce a class of filters based on \\cite{kipf2016semi} rather than \\cite{Defferrard2018}.\n\nIt is important to note that $\\widetilde{\\mathbf{L}}$ is constructed so that, in \\eqref{eqn: ChebNet}, $(\\mathbf{Y}\\mathbf{x})_i$ depends on all nodes within $K$ hops from $v_i$ on the undirected, unsigned counterpart of $\\mathcal{G}$, i.e., the graph whose adjacency matrix is given by $\\mathbf{A}'_{i,j}=\\frac{1}{2}{(|\\mathbf{A}_{i,j}|+|\\mathbf{A}_{j,i}|})$. 
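To make the filtering step concrete, the following NumPy sketch (our own illustration, with hypothetical names) applies $\sum_{k=0}^K\theta_k T_k(\widetilde{\mathbf{L}})\mathbf{x}$ via the three-term Chebyshev recurrence, so that no full eigendecomposition is needed beyond $\lambda_{\max}$ (which in practice can also be upper-bounded rather than computed exactly).

```python
import numpy as np

def cheb_filter(L, x, theta):
    """Apply y = sum_{k=0}^K theta[k] T_k(L_tilde) x, where
    L_tilde = (2 / lambda_max) L - I, using the recurrence
    T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x).  A sketch for dense
    Hermitian L; real implementations use sparse matvecs."""
    n = L.shape[0]
    lam_max = np.linalg.eigvalsh(L).max()
    Lt = (2.0 / lam_max) * L - np.eye(n)
    t_prev, t_curr = x, Lt @ x               # T_0(Lt) x and T_1(Lt) x
    out = theta[0] * t_prev
    if len(theta) > 1:
        out = out + theta[1] * t_curr
    for k in range(2, len(theta)):
        t_prev, t_curr = t_curr, 2.0 * (Lt @ t_curr) - t_prev
        out = out + theta[k] * t_curr
    return out
```

For small Hermitian matrices one can check numerically that this agrees with the explicit spectral form $\mathbf{U}\bm{\Sigma}\mathbf{U}^\dagger\mathbf{x}$.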
Therefore, this notion of convolution does not favor ``outgoing neighbors\" $\\{v_j\\in \\mathcal{V}: (v_i,v_j)\\in\\mathcal{E}\\}$ over ``incoming neighbors\" $\\{v_j\\in \\mathcal{V}: (v_j,v_i)\\in\\mathcal{E}\\}$ (or vice versa). This is important since for a given node $v_i$, both sets may contain different, useful information. Furthermore, since the phase matrix $\\mathbf{\\Theta}^{(q)}$ encodes an outgoing edge and an incoming edge differently, the filter matrix $\\mathbf{Y}$ is also able to aggregate information from these two sets in different ways. \nRegarding computational complexity, we note that while the matrix $\\exp (\\mathbbm{i} \\mathbf{\\Theta}^{(q)})$ is dense in theory, in practice, one only needs to compute a small fraction of its entries corresponding to the nonzero entries of $\\Tilde{\\mathbf{A}}$ (which is sparse for most real-world data sets). Thus, the computational complexity of the convolution proposed here is equivalent to that of its undirected, unsigned counterparts.\n\n\\subsection{The {MSGNN} architecture}\n\nWe now define our network, {MSGNN}. Let $\\mathbf{X}^{(0)}$ be an $n\\times F_0$ input matrix with columns $\\mathbf{x}_1^{(0)},\\ldots,\\mathbf{x}_{F_0}^{(0)}$, and let\n$L$ denote the number of convolution layers.\nAs in \\cite{zhang2021magnet}, we use a complex version of the Rectified Linear Unit \ndefined by $\\sigma(z)=z$ if $-\\pi\/2\\leq \\arg(z)<\\pi\/2$, and $\\sigma(z)=0$ otherwise, where $\\arg(z)$ denotes the complex argument of $z\\in\\mathbb{C}$. Let $F_\\ell$ be the number of channels in the $\\ell$-th layer. For $1\\leq\\ell\\leq L$, $1 \\leq i \\leq F_{\\ell-1}$, and $1\\leq j\\leq F_{\\ell},$ let $\\mathbf{Y}_{ij}^{(\\ell)}$ be a convolution matrix defined by \\eqref{eqn: conv} or \\eqref{eqn: ChebNet}. 
Given the $(\\ell-1)$-st layer hidden representation matrix $\\mathbf{X}^{(\\ell-1)}$, we define $\\mathbf{X}^{(\\ell)}$ columnwise by\n\\begin{equation}\\label{eqn: single hidden layer}\n\\vspace{-1mm}\n \\mathbf{x}_j^{(\\ell)} = \\sigma \\left(\\sum_{i=1}^{F_{\\ell-1}} \\mathbf{Y}_{ij}^{(\\ell)} \\mathbf{x}_i^{(\\ell-1)} + \\mathbf{b}_j^{(\\ell)}\\right), \n\\end{equation}\nwhere $\\mathbf{b}_j^{(\\ell)}$ is a bias vector with equal real and imaginary parts, $\\text{Real}(\\mathbf{b}_j^{(\\ell)}) = \\text{Imag}(\\mathbf{b}_j^{(\\ell)})$.\nIn matrix form we write $\\mathbf{X}^{(\\ell)} = \\mathbf{Z}^{(\\ell)} \\left(\\mathbf{X}^{(\\ell-1)}\\right) $, where $\\mathbf{Z}^{(\\ell)}$ is a hidden layer of the form \\eqref{eqn: single hidden layer}. In our experiments, we utilize convolutions of the form \\eqref{eqn: ChebNet} with $\\mathbf{L} = \\mathbf{L}_N^{(q)}$ and set $K=1$, \nin which case we obtain\n\\begin{equation*}\n \\mathbf{X}^{(\\ell)} = \\sigma\\left( \\mathbf{X}^{(\\ell-1)} \\mathbf{W}_{\\text{self}}^{(\\ell)} + \\widetilde{\\mathbf{L}}_N^{(q)} \\mathbf{X}^{(\\ell-1)} \\mathbf{W}_{\\text{neigh}}^{(\\ell)} + \\mathbf{B}^{(\\ell)}\\right) \\, ,\n\\end{equation*}\nwhere $\\mathbf{W}_{\\text{self}}^{(\\ell)}$ and $\\mathbf{W}_{\\text{neigh}}^{(\\ell)}$ are learned weight matrices corresponding to the filter weights of different channels and $\\mathbf{B}^{(\\ell)} = (\\mathbf{b}_1^{(\\ell)}, \\ldots, \\mathbf{b}_{F_{\\ell}}^{(\\ell)})$. After the convolutional layers, we unwind the complex matrix $\\mathbf{X}^{(L)}$ into a real-valued $n\\times 2F_L$ matrix. For node clustering, we then apply a fully connected layer followed by the softmax function. 
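A single hidden layer of this form can be sketched as follows. This is a simplified NumPy illustration with our own names (the trainable implementation, optimizer, and initialization are not shown); the bias here is formed from a real vector with equal real and imaginary parts, as described above.

```python
import numpy as np

def complex_relu(Z):
    """sigma(z) = z if -pi/2 <= arg(z) < pi/2, else 0 (elementwise)."""
    keep = (np.angle(Z) >= -np.pi / 2) & (np.angle(Z) < np.pi / 2)
    return np.where(keep, Z, 0.0)

def msgnn_layer(X, L_tilde, W_self, W_neigh, b):
    """One layer with K = 1 Chebyshev filters:
    sigma(X W_self + L_tilde X W_neigh + B), where B broadcasts the
    real vector b into a bias with equal real and imaginary parts."""
    return complex_relu(X @ W_self + L_tilde @ X @ W_neigh + b * (1.0 + 1.0j))
```

Stacking two such layers, unwinding real and imaginary parts, and applying a linear layer plus softmax recovers the default architecture described in the text.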
\nBy default, we set $L=2$, in which case, our network is given by \n\\begin{equation*}\n \\text{softmax}(\\text{unwind}(\\mathbf{Z}^{(2)}(\\mathbf{Z}^{(1)}(\\mathbf{X}^{(0)}))) \\mathbf{W}^{\\mathrm{(3)}}) \\, .\n\\end{equation*}\nFor link prediction, we apply the same method, except we concatenate rows corresponding to pairs of nodes after the unwind layer before applying the linear layer and softmax.\n\\section{Experiments}\n\\subsection{Tasks and Evaluation Metrics}\\label{subsec:tasks} \n\\paragraph{Node Clustering}\nIn the node clustering task, one aims to partition the nodes of the graph into the disjoint union of $C$ sets $\\mathcal{C}_0,\\ldots,\\mathcal{C}_{C-1}$. \nTypically in an unsigned, undirected network, one aims to choose the $\\mathcal{C}_i$'s so that there are many links within each cluster and comparably few links between clusters, in which case nodes within each cluster are \\emph{similar} due to dense connections. In general, however, similarity could be defined differently \\cite{he2022gnns}. \nIn a signed graph, clusters can be formed by grouping together nodes with positive links and separating nodes with negative links (see \\cite{he2022sssnet}). \nIn a directed graph, clusters can be determined by a directed flow on the network (see \\cite{he2021digrac}). More generally, we can define clusters based on an underlying meta-graph, where meta-nodes, each of which corresponds to a cluster in the network, can be distinguished based on either signed or directional information (e.g., flow imbalance~\\cite{he2021digrac}). This general meta-graph idea motivates our introduction of a novel synthetic network model, which we will define in Sec.~\\ref{sec: novel sdsbm}, driven by both link sign and directionality. \n\nAll of our node clustering experiments are done in the semi-supervised setting, where one selects a fraction of the nodes in each cluster as seed nodes, with known cluster membership labels. 
In all of our node clustering tasks, we measure our performance using the Adjusted Rand Index (ARI) \\cite{hubert1985comparing}.\n\\paragraph{Link Prediction}\n\\label{subsec:link_pred}\n\\looseness=-1 On undirected, unsigned graphs, link prediction is simply the task of predicting whether or not there is a link between a pair of nodes. Here, we consider five different variations of the link prediction task for \\emph{signed and\/or directed} networks. In our first task, link sign prediction (SP), one assumes that there is a link from $v_i$ to $v_j$ and aims to predict whether that link is positive or negative, i.e., whether $(v_i, v_j) \\in \\mathcal{E}^+$ or $(v_i, v_j) \\in \\mathcal{E}^-$. In our second task, direction prediction (DP), one aims to predict whether $(v_i, v_j)\\in\\mathcal{E}$ or $(v_j, v_i)\\in\\mathcal{E}$ under the assumption that exactly one of these two conditions holds. We also consider three-, four-, and five-class prediction problems. In the three-class problem (3C), the possibilities are $(v_i, v_j)\\in\\mathcal{E},$ $(v_j, v_i)\\in\\mathcal{E},$ or that neither $(v_i, v_j)$ nor $(v_j, v_i)$ is in $\\mathcal{E}$. For the four-class problem (4C), the possibilities are $(v_i, v_j)\\in\\mathcal{E}^+$, $(v_i, v_j)\\in\\mathcal{E}^-$, $(v_j, v_i)\\in\\mathcal{E}^+$, and $(v_j, v_i)\\in\\mathcal{E}^-$. For the five-class problem (5C), we add the possibility that neither $(v_i, v_j)$ nor $(v_j, v_i)$ is in $\\mathcal{E}$. For all tasks, we evaluate the performance with classification accuracy. \nNotably, while (SP), (DP), and (3C) only require a method to be able to extract signed \\emph{or} directed information, the tasks (4C) and (5C) require it to be able to effectively process both sign \\emph{and} directional information. 
Also, edges that satisfy more than one of the conditions above are discarded from the training and evaluation pairs, but they are kept in the input network observed during training.\n\\subsection{Synthetic Data for Node Clustering}\n\\paragraph{Established Synthetic Models}\nWe conduct experiments on the Signed Stochastic Block Models (SSBMs) and polarized SSBMs (\\textsc{POL-SSBM}s) introduced in \\cite{he2022sssnet}. Notably, both of these models are signed, but undirected. In the SSBM($n, C, p, \\rho, \\eta$) model, $n$ represents the number of nodes, $C$ is the number of clusters, $p$ is the probability that there is a link (of either sign) between two nodes, $\\rho$ is the approximate ratio between the largest cluster size and the smallest cluster size, and $\\eta$ is the probability that an edge will have the ``wrong\" sign, i.e., that an intra-cluster edge will be negative or an inter-cluster edge will be positive. {\\textsc{Pol-SSBM}} ($n,r,p,\\rho,\\eta,N$) is a hierarchical variation of the SSBM model consisting of $r$ communities, each of which is itself an SSBM. We refer the reader to \\cite{he2022sssnet} for details of both models. \n\\paragraph{A novel Synthetic Model: Signed Directed Stochastic Block Model (SDSBM)}\\label{sec: novel sdsbm}\nGiven a meta-graph adjacency matrix $\\mathbf{F} = (\\mathbf{F}_{k,l})_{k,l = 0, \\ldots, C-1}\n$, an edge sparsity level $p$, a number of nodes $n$, a block-size ratio parameter $\\rho \\geq 1$, and a \nsign-flip noise level parameter $0 \\leq \\eta \\leq 0.5$, we define an \n SDSBM model, denoted by \nSDSBM ($\\mathbf{F}, n, p, \\rho, \\eta$), \n as follows: \n 1) Assign block sizes $n_0 \\le n_1 \\le \\cdots \\le n_{C-1}$ based on $\\rho$, which approximately represents the ratio between the size of the largest block and the size of the smallest block, using the same method as in \\cite{he2022sssnet}. \n 2) \nAssign each node to one of the $C$ blocks, so that each block $\\mathcal{C}_i$ has size $n_i$. 
\n 3) For nodes $v_i\\in\\mathcal{C}_k$ and $v_j\\in\\mathcal{C}_l$, independently sample an edge from $v_i$ to $v_j$ with probability $ p \\cdot |\\mathbf{F}_{k,l}|$. Give this edge weight $1$ if $\\mathbf{F}_{k,l}\\geq 0$ and weight $-1$ if $\\mathbf{F}_{k,l}<0$. \n 4) Independently flip the sign of each edge in the generated graph with probability $\\eta$.\n\n\\looseness=-1 In our experiments, we use two sets of specific meta-graph structures $\\{\\mathbf{F}_1(\\gamma)\\},\\{\\mathbf{F}_2(\\gamma)\\}$, with three and four clusters, respectively, where $0 \\leq \\gamma \\leq 0.5$ is the directional noise level. Specifically, we are interested in SDSBM ($\\mathbf{F}_1(\\gamma), n, p, \\rho, \\eta$) and SDSBM ($\\mathbf{F}_2(\\gamma), n, p, \\rho, \\eta$) models with varying $\\gamma$, where \n$$\\mathbf{F}_1(\\gamma)=\\begin{bmatrix}\n0.5 & \\gamma & -\\gamma\\\\\n1-\\gamma & 0.5 & -0.5\\\\\n-1+\\gamma&-0.5&0.5\n\\end{bmatrix},\\mathbf{F}_2(\\gamma)=\\begin{bmatrix}\n0.5 & \\gamma & -\\gamma&-\\gamma\\\\\n1-\\gamma & 0.5 &-0.5& -\\gamma\\\\\n-1+\\gamma&-0.5&0.5&-\\gamma\\\\\n-1+\\gamma&-1+\\gamma&-1+\\gamma&0.5\n\\end{bmatrix}.$$ \n\nTo better understand the above SDSBM models, toy examples are provided in Appendix~\\ref{appendix:examples}. We also note that the SDSBM model proposed here is a generalization of both the SSBM model from \\cite{he2022sssnet} and the Directed Stochastic Block Model from \\cite{he2021digrac} when we have suitable meta-graph structures.\n\n\\subsection{Real-World Data for Link Prediction}\n\\paragraph{Standard Real-World Data Sets}\nWe consider four standard real-world signed and directed data sets. \\textit{BitCoin-Alpha} and \\textit{BitCoin-OTC}~\\cite{konect:kumar2016wsn} describe bitcoin trading. \\textit{Slashdot}~\\cite{slashdot} is related to a technology news website, and \\textit{Epinions}~\\cite{epinions} describes consumer reviews. These networks range in size from 3783 to 131580 nodes. 
Only \\textit{Slashdot} and \\textit{Epinions} are unweighted. \n\\paragraph{Novel Financial Data Sets from Stock Returns}\n\\looseness=-1 Using financial time series data, we build signed directed networks where the weighted edges encode lead-lag relationships inherent in the financial market, for each year in the interval 2000-2020. The lead-lag matrices are built from time series of daily price returns\\footnote{Raw CRSP data accessed through \\url{https:\/\/wrds-www.wharton.upenn.edu\/}.}. We refer to these networks as our \\textbf{Fi}nancial \\textbf{L}ead-\\textbf{L}ag (FiLL) data sets. For each year in the data set, we build a signed directed graph (\\textit{FiLL-pvCLCL}) based on the price return of 444 stocks at market close times on consecutive days. We also build another graph (\\textit{FiLL-OPCL}), based on the price return of 430 stocks from market open to close. The difference between 444 and 430 stems from the non-availability of certain open and close prices on some days for certain stocks. The entry $\\mathbf{A}_{i,j}$ in each network encodes a lead-lag measure that quantifies the extent to which stock $v_i$ leads stock $v_j$, and is obtained by computing the linear regression coefficient when regressing the time series (of length 245) of daily returns of stock $v_i$ against the lag-one version of the time series (of length 245) of the daily returns of stock $v_j$. Specifically, we use the beta coefficient of the corresponding simple linear regression to serve as the one-day lead-lag metric. The resulting matrix is asymmetric and signed, rendering it amenable to a signed, directed network interpretation. The initial matrix is dense, with nonzero entries everywhere outside the main diagonal, since we do not consider the auto-correlation of each stock. 
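A minimal version of this lead-lag computation might look as follows. This is our own sketch: `lead_lag_matrix` is a name we introduce, the preprocessing of the raw CRSP returns is omitted, and the alignment convention simply follows the regression described above.

```python
import numpy as np

def lead_lag_matrix(R):
    """Given a T x N matrix R of daily returns (one column per stock),
    return the N x N matrix whose (i, j) entry is the OLS slope (beta)
    from regressing stock i's returns on the one-day-lagged returns of
    stock j.  Diagonal entries (own auto-correlations) are zeroed."""
    X = R[:-1]                                 # lag-one returns (regressors)
    Y = R[1:]                                  # next-day returns (responses)
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    A = (Yc.T @ Xc) / (Xc ** 2).sum(axis=0)    # A[i, j] = cov(y_i, x_j) / var(x_j)
    np.fill_diagonal(A, 0.0)
    return A
```

Sparsifying such a matrix by keeping the largest-magnitude entries then yields a signed, directed network of the kind used in our experiments.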
Note that an alternative approach to building the directed network could be based on Granger causality, or other measures that quantify the lead-lag between a pair of time series, potentially while accounting for nonlinearity, such as second-order log signatures from rough paths theory as in \\cite{bennett2021detection}.\n\nNext, we sparsify each network, \nkeeping only 20\\% of the edges with the largest magnitudes. We also report the average results across all the yearly data sets (a total of 42 networks), where the data set is denoted by \\textit{FiLL (avg.)}. To facilitate future research using these data sets as benchmarks, both the dense lead-lag matrices and their sparsified counterparts will be made publicly available. \n\\subsection{Experimental Results}\nWe compare {MSGNN} against representative GNNs, which are described in Section \\ref{sec:related}.\nThe six methods we consider \nare\n1)~SGCN \\cite{derr2018signed}, \n2)~SDGNN~\\cite{huang2021sdgnn}, \n3)~SiGAT~\\cite{huang2019signed}, \n4)~SNEA~\\cite{li2020learning}, \n5)~SSSNET~\\cite{he2022sssnet}, \nand 6)~SigMaNet~\\cite{fiorini2022sigmanet}. For all link prediction tasks, comparisons are carried out on all baselines; for the node clustering tasks, we only compare {MSGNN} against SSSNET and SigMaNet, as adapting the other methods to this task is nontrivial. \nImplementation details are provided in Appendix~\\ref{appendix:implementation}, along with a runtime comparison which shows that {MSGNN} is the fastest method; see Table \\ref{tab:runtime} in Appendix \\ref{appendix:implementation}. Extended results are in Appendices~\\ref{appendix_sec:ablation} and \\ref{appendix:full_results}. 
Anonymous codes and some preprocessed data are available at \\url{https:\/\/anonymous.4open.science\/r\/MSGNN}.\n\\subsubsection{Node Clustering}\n\\begin{figure*}[hbt]\n \\centering\n \\begin{subfigure}[ht]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth,trim=0cm 0cm 0cm 1.8cm,clip]{figures\/signed_clustering\/SSBM\/1000p10K5100eta4010rho15N10000ARI.pdf}\n \\vspace{-5mm}\n \\caption{SSBM($n=10000,$\\\\$ C=5, p=0.01, \\rho=1.5$)}\n \\end{subfigure}\n \\begin{subfigure}[ht]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth,trim=0cm 0cm 0cm 1.8cm,clip]{figures\/signed_clustering\/polarized_SSBM\/1000p100K2100eta4010rho15N500total_n5000num_com5ARI.pdf}\n \\vspace{-5mm}\n \\caption{\\textsc{Pol-SSBM}($n=5000,$\\\\$ r=5, p=0.1, \\rho=1.5$) }\n \\end{subfigure}\n \\begin{subfigure}[ht]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth,trim=0cm 0cm 0cm 1.8cm,clip]{figures\/signed_clustering\/SDSBM\/3c\/1000p100K3100eta25100gamma010rho15N1000ARI.pdf}\n \\vspace{-5mm}\n \\caption{SDSBM($\\mathbf{F}_1(\\gamma=0), n=1000,$\\\\$ p=0.1, \\rho=1.5$)}\n \\end{subfigure}\n \\begin{subfigure}[ht]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth,trim=0cm 0cm 0cm 1.8cm,clip]{figures\/signed_clustering\/SDSBM\/3c\/1000p100K3100eta0100gamma2510rho15N1000_change_gammaARI.pdf}\n \\vspace{-5mm}\n \\caption{SDSBM($\\mathbf{F}_1(\\gamma),n=1000,$\\\\$ p=0.1, \\rho=1.5, \\eta=0$)}\n \\end{subfigure}\n \\begin{subfigure}[ht]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth,trim=0cm 0cm 0cm 1.8cm,clip]{figures\/signed_clustering\/SDSBM\/3c\/1000p1000K3100eta0100gamma2510rho15N1000_change_gammaARI.pdf}\n \\vspace{-5mm}\n \\caption{SDSBM($\\mathbf{F}_1(\\gamma),n=1000,$\\\\$ p=1, \\rho=1.5, \\eta=0$)}\n \\end{subfigure}\n \\begin{subfigure}[ht]{0.24\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth,trim=0cm 0cm 0cm 
1.8cm,clip]{figures\/signed_clustering\/SDSBM\/4c\/1000p100K4100eta25100gamma010rho15N1000ARI.pdf}\n \vspace{-5mm}\n \caption{SDSBM($\mathbf{F}_2(\gamma=0), n=1000,$\\\\$ p=0.1, \rho=1.5$)}\n \end{subfigure}\n \begin{subfigure}[ht]{0.24\linewidth}\n \centering\n \includegraphics[width=\linewidth,trim=0cm 0cm 0cm 1.8cm,clip]{figures\/signed_clustering\/SDSBM\/4c\/1000p100K4100eta0100gamma2510rho15N1000_change_gammaARI.pdf}\n \vspace{-5mm}\n \caption{SDSBM($\mathbf{F}_2(\gamma),n=1000,$\\\\$ p=0.1, \rho=1.5, \eta=0$)}\n \end{subfigure}\n \begin{subfigure}[ht]{0.24\linewidth}\n \centering\n \includegraphics[width=\linewidth,trim=0cm 0cm 0cm 1.8cm,clip]{figures\/signed_clustering\/SDSBM\/4c\/1000p1000K4100eta0100gamma2510rho15N1000_change_gammaARI.pdf}\n \vspace{-5mm}\n \caption{SDSBM($\mathbf{F}_2(\gamma),n=1000,$\\\\$ p=1, \rho=1.5, \eta=0$)}\n \end{subfigure}\n \begin{subfigure}[ht]{\linewidth}\n \centering\n \includegraphics[width=\linewidth,trim=0cm 0cm 0cm 0cm,clip]{figures\/signed_clustering\/legend_SDSBM.jpg}\n \end{subfigure}\n \caption{Node clustering test ARI comparison on synthetic data. Dashed lines highlight {MSGNN}'s performance with $q=0.25$. \n Error bars indicate one standard error. Results are averaged over ten runs --- five different networks, each with two distinct data splits.\n }\n \label{fig:signed_clustering_synthetic_compare}\n\end{figure*}\nFigure~\ref{fig:signed_clustering_synthetic_compare} \ncompares the node clustering performance of {MSGNN} with two other signed GNNs on synthetic data, and against variants of {MSGNN} on SDSBMs that take different $q$ values. For signed, undirected networks $q$ does not have an effect, and hence we only report one {MSGNN} variant. Error bars are given by one standard error. We conclude that {MSGNN} outperforms SigMaNet on all data sets and is competitive with SSSNET. 
On the majority of data sets, {MSGNN} achieves leading performance, whereas on the others it is slightly outperformed by \nSSSNET, depending on the network density, noise level, network size, and the underlying meta-graph structure. On these relatively small data sets, MSGNN and SSSNET have comparable runtime and are faster than SigMaNet. Comparing the {MSGNN} variants, we conclude that the directional information in these SDSBM models plays a vital role, since {MSGNN} with $q=0$ is usually outperformed by {MSGNN} with nonzero $q$.\n\\subsubsection{Link Prediction}\n\\begin{table*}\n\\vspace{-1mm}\n\\caption{Test accuracy (\\%) comparison for the signed and directed link prediction tasks introduced in\nSec.~\\ref{subsec:tasks}.\nThe best is marked in \\red{bold red} and the second best is marked in \\blue{underline blue}. }\n\\vspace{-2mm}\n \\label{tab:link_sign_direction_prediction_performance}\n \\centering\n \\small\n \\resizebox{0.8\\linewidth}{!}{\\begin{tabular}{c c c c c cccccc}\n \\toprule\n Data Set& Link Task&SGCN&SDGNN&SiGAT&SNEA&SSSNET&SigMaNet&{MSGNN} \\\\\n 
\\midrule\n\\multirow{4}{*}{\\textit{BitCoin-Alpha}}&SP&64.7$\\pm$0.9&64.5$\\pm$1.1&62.9$\\pm$0.9&64.1$\\pm$1.3&\\blue{67.4$\\pm$1.1}&47.8$\\pm$3.9&\\red{69.8$\\pm$2.5}\\\\\n&DP&60.4$\\pm$1.7&61.5$\\pm$1.0&61.9$\\pm$1.9&60.9$\\pm$1.7&\\blue{68.1$\\pm$2.3}&49.4$\\pm$3.1&\\red{68.3$\\pm$3.7}\\\\\n&3C&81.4$\\pm$0.5&79.2$\\pm$0.9&77.1$\\pm$0.7&\\red{83.2$\\pm$0.5}&78.3$\\pm$4.7&37.4$\\pm$16.7&\\blue{82.7$\\pm$1.2}\\\\\n&4C&51.1$\\pm$0.8&52.5$\\pm$1.1&49.3$\\pm$0.7&52.4$\\pm$1.8&\\blue{54.3$\\pm$2.9}&20.6$\\pm$6.3&\\red{55.9$\\pm$4.0}\\\\\n&5C&79.5$\\pm$0.3&78.2$\\pm$0.5&76.5$\\pm$0.3&\\blue{81.1$\\pm$0.3}&77.9$\\pm$0.3&34.2$\\pm$6.5&\\red{81.4$\\pm$0.6}\\\\\n\\midrule\n\\multirow{4}{*}{\\textit{BitCoin-OTC}}&SP&65.6$\\pm$0.9&65.3$\\pm$1.2&62.8$\\pm$1.3&\\blue{67.7$\\pm$0.5}&\\red{70.1$\\pm$1.2}&50.0$\\pm$2.3&65.5$\\pm$10.1\\\\\n&DP&63.8$\\pm$1.2&63.2$\\pm$1.5&64.0$\\pm$2.0&65.3$\\pm$1.2&\\blue{69.6$\\pm$1.0}&48.4$\\pm$4.9&\\red{73.1$\\pm$1.2}\\\\\n&3C&79.0$\\pm$0.7&77.3$\\pm$0.7&73.6$\\pm$0.7&\\red{82.2$\\pm$0.4}&76.9$\\pm$1.1&26.8$\\pm$10.9&\\blue{81.3$\\pm$1.6}\\\\\n&4C&51.5$\\pm$0.4&55.3$\\pm$0.8&51.2$\\pm$1.8&\\blue{56.9$\\pm$0.7}&\\red{57.0$\\pm$2.0}&23.3$\\pm$7.4&55.5$\\pm$3.3\\\\\n&5C&77.4$\\pm$0.7&77.3$\\pm$0.8&74.1$\\pm$0.5&\\red{80.5$\\pm$0.5}&74.0$\\pm$1.6&25.9$\\pm$6.2&\\blue{79.0$\\pm$2.1}\\\\\n\\midrule\n\\multirow{4}{*}{\\textit{Slashdot}}&SP&74.7$\\pm$0.5&74.1$\\pm$0.7&64.0$\\pm$1.3&70.6$\\pm$1.0&\\blue{86.6$\\pm$2.2}&57.9$\\pm$5.3&\\red{92.4$\\pm$0.6}\\\\\n&DP&74.8$\\pm$0.9&74.2$\\pm$1.4&62.8$\\pm$0.9&71.1$\\pm$1.1&\\red{87.8$\\pm$1.0}&53.0$\\pm$4.0&\\blue{84.3$\\pm$17.2}\\\\\n&3C&69.7$\\pm$0.3&66.3$\\pm$1.8&49.1$\\pm$1.2&72.5$\\pm$0.7&\\blue{79.3$\\pm$1.2}&42.0$\\pm$7.9&\\red{84.7$\\pm$0.6}\\\\\n&4C&63.2$\\pm$0.3&64.0$\\pm$0.7&53.4$\\pm$0.2&60.5$\\pm$0.6&\\blue{72.7$\\pm$0.6}&25.7$\\pm$8.9&\\red{73.5$\\pm$3.7}\\\\\n&5C&64.4$\\pm$0.3&62.6$\\pm$2.0&44.4$\\pm$1.4&66.4$\\pm$0.5&\\blue{70.4$\\pm$0.7}&19.3$\\pm$8.6&\\red{74.2$\\pm$0.9}\\\\\n\\midrule\n\\mult
irow{4}{*}{\\textit{Epinions}}&SP&62.9$\\pm$0.5&67.7$\\pm$0.8&63.6$\\pm$0.5&66.5$\\pm$1.0&\\red{78.5$\\pm$2.1}&53.3$\\pm$10.6&\\blue{69.7$\\pm$16.2}\\\\\n&DP&61.7$\\pm$0.5&67.9$\\pm$0.6&63.6$\\pm$0.8&66.4$\\pm$1.2&\\blue{73.9$\\pm$6.2}&49.0$\\pm$3.2&\\red{81.0$\\pm$1.5}\\\\\n&3C&70.3$\\pm$0.8&\\blue{73.2$\\pm$0.8}&52.3$\\pm$1.3&72.8$\\pm$0.2&72.7$\\pm$2.0&30.5$\\pm$8.3&\\red{81.5$\\pm$1.3}\\\\\n&4C&66.7$\\pm$1.2&\\blue{71.0$\\pm$0.6}&62.3$\\pm$0.5&69.5$\\pm$0.7&70.2$\\pm$5.2&29.9$\\pm$6.4&\\red{74.9$\\pm$3.6}\\\\\n&5C&73.5$\\pm$0.8&\\blue{76.6$\\pm$0.7}&52.9$\\pm$0.7&74.2$\\pm$0.1&70.3$\\pm$4.6&22.1$\\pm$6.1&\\red{77.9$\\pm$1.4}\\\\\n\\midrule\n\\multirow{4}{*}{\\textit{FiLL (avg.)}}&SP&88.4$\\pm$0.0&82.0$\\pm$0.3&76.9$\\pm$0.1&\\red{90.0$\\pm$0.0}&\\blue{88.7$\\pm$0.3}&50.4$\\pm$1.8&88.4$\\pm$1.3\\\\\n&DP&88.5$\\pm$0.1&82.0$\\pm$0.2&76.9$\\pm$0.1&\\red{90.0$\\pm$0.0}&\\blue{88.8$\\pm$0.3}&48.0$\\pm$2.7&88.6$\\pm$2.1\\\\\n&3C&63.0$\\pm$0.1&59.3$\\pm$0.0&55.3$\\pm$0.1&\\red{64.3$\\pm$0.1}&62.2$\\pm$0.3&33.7$\\pm$1.3&\\blue{64.2$\\pm$0.8}\\\\\n&4C&\\blue{81.7$\\pm$0.0}&78.8$\\pm$0.1&70.5$\\pm$0.1&\\red{83.2$\\pm$0.1}&80.0$\\pm$0.3&24.9$\\pm$0.9&78.8$\\pm$1.2\\\\\n&5C&\\blue{63.8$\\pm$0.0}&61.1$\\pm$0.1&55.5$\\pm$0.1&\\red{64.8$\\pm$0.1}&60.4$\\pm$0.4&19.8$\\pm$1.1&62.1$\\pm$0.8\\\\\n\\bottomrule\n\\vspace{-4mm} \n\\end{tabular}}\n\\vspace{-2mm}\n\\end{table*}\nOur results for link prediction in Table~\\ref{tab:link_sign_direction_prediction_performance} indicate that {MSGNN} is the top performing method, achieving the highest level of accuracy in 13 out of 25 total cases and being among the leading two in 19 out of 25 total cases. 
SNEA is the second best performing method, but it is the least efficient in terms of speed due to its use of graph attention; see the runtime comparison in Appendix~\\ref{appendix:implementation}.\n\nSpecifically, the ``avg.'' results for the novel financial data sets are obtained by first averaging the accuracy values across all individual networks (a total of 42 networks), and then reporting the mean and standard deviation over the five runs. Results for individual \\textit{FiLL} networks are reported in Appendix~\\ref{appendix:full_results}. Note that \\textit{$\\pm0.0$} in the result tables indicates that the standard deviation is less than 0.05\\%.\n\n\\vspace{-2mm}\n\\subsubsection{Ablation Study and Discussion}\n\\looseness=-1 We now discuss the influence of the charge parameter $q$ in the Laplacian matrix, and how input features affect the performance. \nTable~\\ref{tab:link_prediction_ablation} in Appendix~\\ref{appendix_sec:ablation} compares different variants of {MSGNN} on the link prediction tasks, with respect to (1) whether we use a traditional signed Laplacian that is initially designed for undirected networks (for which $q=0$) or a magnetic signed Laplacian that strongly emphasizes directionality (for which $q=0.25$); (2) whether to include signs in the input node features (if False, then only in- and out-degrees are computed like in \\cite{zhang2021magnet} regardless of edge signs; otherwise, features are constructed based on the positive and negative subgraphs separately); and (3) whether we take edge weights into account (if False, we view all edge weights as having magnitude one). Taking the standard errors into account, we find that incorporating directionality into the Laplacian matrix (i.e., having nonzero $q$) typically leads to slightly better performance in the directionality-related tasks (DP, 3C, 4C, 5C). 
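The feature-construction options in points (2) and (3) can be made concrete. The sketch below is a hypothetical helper, not the MSGNN implementation; assuming a SciPy sparse signed, weighted, directed adjacency matrix, it builds in- and out-degree node features either from the positive and negative subgraphs separately or from the unsigned graph, with or without edge weights:

```python
import numpy as np
import scipy.sparse as sp

def degree_features(A, signed=True, weighted=True):
    """In-/out-degree node features from a signed, weighted, directed adjacency A."""
    A = sp.csr_matrix(A, dtype=float)
    if not weighted:  # view all edge weights as having magnitude one
        A = sp.csr_matrix((np.sign(A.data), A.indices, A.indptr), shape=A.shape)
    if signed:        # separate positive and negative subgraphs
        parts = [A.maximum(0), (-A).maximum(0)]
    else:             # ignore signs, use |A| only
        parts = [abs(A)]
    cols = []
    for M in parts:
        cols.append(np.asarray(M.sum(axis=1)).ravel())  # out-degrees
        cols.append(np.asarray(M.sum(axis=0)).ravel())  # in-degrees
    return np.stack(cols, axis=1)

# toy signed, weighted, directed graph on 3 nodes
A = sp.csr_matrix(np.array([[0., 2., 0.], [-1., 0., 0.], [0., 3., 0.]]))
X = degree_features(A)  # shape (3, 4): pos out/in, neg out/in
```

Here `degree_features(A)` corresponds to the signed, weighted setting, while `signed=False` and `weighted=False` reproduce the other ablation toggles.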
Drilling deeper, a comparison using different $q$ values in the magnetic part shown in Table~\\ref{tab:link_prediction_q_ablation} provides further evidence that nonzero $q$ values usually boost performance compared with no emphasis on directionality at all ($q=0$), in addition to the evidence from\nFigure~\\ref{fig:signed_clustering_synthetic_compare} for node clustering.\n\nMoreover, signed features are in general helpful for tasks involving sign prediction. \nFor constructing weighted features, we see no significant difference between simply summing up entries in the adjacency matrix and summing the absolute values of the entries. In addition, it seems that calculating degrees based on unit-magnitude weights is generally not best: even when it leads in performance, it is within one standard error of another method. Treating negative edge weights as the negation of positive ones (i.e., not constructing separate degree features for the positive and negative subgraphs) is also not helpful, which may explain why SigMaNet, which has this undesirable property, performs poorly in most scenarios. Surprisingly often, including only signed information but not weighted features does well. To conclude, constructing features based on the positive and negative subgraphs separately is helpful, and including directional information is generally beneficial.\n\n\\vspace{-2mm}\n\\section{Conclusion and Outlook}\nIn this paper, we propose a spectral graph neural network based on a novel magnetic signed Laplacian matrix, introduce a novel synthetic network model and new real-world data sets, and conduct experiments on node clustering and link prediction tasks that are not restricted to considering either link sign or directionality alone. MSGNN performs as well as or better than leading GNNs, while being considerably faster on real-world data sets. 
Future plans include investigating an extension to temporal\/dynamic graphs, where node features and\/or edge information could evolve over time~\\cite{dang2018link,rozemberczki2021pytorch}. We are also interested in extending our work to being able to encode nontrivial edge features, to develop objectives which explicitly handle heterogeneous edge densities throughout the graph, and to extend our approach to hypergraphs and other complex network structures.\n\n\\clearpage\n\\bibliographystyle{unsrtnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe fundamentals of fast magnetisation switching constitute one of the important problems in magnetism, related to technological applications \\cite{Hillebrands}. Experimentally microwave-assisted switching has been reported as requiring a smaller applied field thus opening new possibilities for applications in magnetic recording and communication devices \\cite{Thirion, Nembach, Woltersdorf,Moriyama, Pimentel}. The microwave-assisted (mw) switching process with linearly-polarized excitation can be realized based, for example, on a fast magneto-optic Kerr effect set-up, applying simultaneously static and microwave fields \\cite{Nembach,Woltersdorf, Pimentel, Adam}. Another possibility to observe mw-assisted switching is given by measuring the magnetoresistance as has been done, for example, in Co stripes \\cite{Grollier} or in NiFe magnetic tunnel junctions \\cite{Moriyama}. These experiments use relatively large (up to tens of microns) lithographically prepared magnetic elements whose dimensions do not allow the occurrence of homogeneous magnetisation processes.\n\n Differently from this, the micro-SQUID experiment \\cite{Thirion}, performed on Co nanoparticles with 20 nm diameter gives an example of a macrospin magnetisation behavior. 
In this system nonlinear effects are clearly observed experimentally in agreement with a simple macrospin dynamical model.\n The behavior found in the macrospin magnetisation dynamics has stimulated the technologically important proposal of micro-wave assisted (or ferromagnetic resonance (FMR) assisted) magnetic recording \\cite{Zhu, Rivkin, Scholtz}.\n\n The process of understanding the underlying physics is still going on and several ideas are found in the literature. From the theoretical point of view, probably the most comprehensive understanding has been achieved in the case of one single magnetic moment \\cite{Thirion, Rivkin, Scholtz, Sun, Zhang, Lee, Horley}.\n In what follows we would like to mention some of the proposed mechanisms, since most of them can also be partially encountered in the complex situation of non-homogeneous magnetisation reversal processes discussed in the present article. The most frequently used one is based on a simple idea of the efficient energy transfer during excitation with frequencies close to the FMR one \\cite{Woltersdorf, Sun}. It has been also argued that excitation with a circular polarised field would be much more efficient due to the possibility to synchronize with the rotation sense of the magnetic moment \\cite{Sun}. Although it seems to be less efficient, synchronization is also possible with a linearly polarised field \\cite{Horley}. This idea remains certainly valid in a general case, however it should be noted that the notion of the FMR frequency, as corresponding to excitation of mode with zero wave vector, is not exact for micron-sized magnetic elements. 
Due to the minimization of the magnetostatic charges at the surfaces, even small angle magnetisation vibrations are not homogeneous \\cite{Adam} and the frequency spectrum depends strongly on the size and shape of the magnetic nanoelement \\cite{Kruglyak1, Kruglyak}.\n Moreover, in magnetic nanoelements the frequency spectrum becomes quantized \\cite{Bayer,Guslienko}. The second idea found in the literature is also simple:\n The microwave field provides a constant energy input which raises the magnetic energy of the particle up to a level higher than the saddle point of the magnetic energy landscape \\cite{Thirion, Sun, Moriyama, Horley}. Note that here also a thermally-activated process becomes possible \\cite{Lee,Nozaki}. In this sense, it can be said that the microwave field can act as an energy source which effectively decreases the energy barrier. It should be noticed here that this idea relies on a simple picture of a two-level system which is generally not valid for a non-homogeneous magnetisation reversal scenario.\n\nThe fast mw-assisted magnetisation switching process in magnetic elements is closely related to the phenomenon of precessional switching. Indeed, a small perpendicular field helps the switching to take place via the excitation of the precessional motion \\cite{Back, BackScience, Hiebert} due to the torque acting on the magnetisation. The main problem is that with, for example, a perpendicularly applied field the magnetisation continues to precess (\"ringing\" phenomenon) and could switch back \\cite{Schumacker}. Furthermore, it has been suggested to use a pulsed field and to tune the pulse duration to avoid multiple switching. Even the pulse shape could be optimized \\cite{Bauer}, together with the use of two pulses with an optimised delay time \\cite{Gerrits, Schumacker}. 
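Much of the single-moment intuition above can be reproduced with a few lines of numerics. The following sketch is illustrative only (normalized units with $\\gamma = 1$ and unit anisotropy field; parameter values chosen for demonstration, not taken from the cited works): it integrates the Landau-Lifshitz-Gilbert equation for one macrospin in a uniaxial anisotropy field, a reversed dc field, and a linearly polarized transverse ac field.

```python
import numpy as np

def llg_step(m, H, alpha, dt):
    """One explicit step of dm/dt = -[m x H + alpha m x (m x H)]/(1 + alpha^2), gamma = 1."""
    mxH = np.cross(m, H)
    m = m + dt * (-(mxH + alpha * np.cross(m, mxH)) / (1.0 + alpha**2))
    return m / np.linalg.norm(m)  # renormalize to keep |m| = 1

def run(h_dc=-0.5, h_ac=0.3, omega=0.5, alpha=0.05, dt=1e-3, steps=50_000):
    m = np.array([0.0, 0.0, 1.0])  # moment initially along the +z easy axis
    for n in range(steps):
        t = n * dt
        # uniaxial anisotropy field m_z along z, reversed dc field, ac field along x
        H = np.array([h_ac * np.cos(omega * t), 0.0, m[2] + h_dc])
        m = llg_step(m, H, alpha, dt)
    return m

m_final = run()  # whether the moment switches depends on the ac amplitude and frequency
```

Sweeping the ac amplitude and frequency in such a toy model is the usual way to explore microwave-assisted switching of a single moment.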
This case again has mostly been studied for one magnetic moment, although micromagnetic simulations have also been widely performed in nanoelements with different shapes \\cite{Viena}.\n\nThe final idea which we would like to mention is that one magnetic moment under a microwave field represents a classical example of a parametrically excited nonlinear oscillator. Thus the problem is similar to the classical parametrically excited pendulum, where parametric instabilities occur when varying the strength and the frequency of the external oscillation. Such instabilities may be considered as precursors of the switching process. Next, in these systems nonlinear phenomena such as bifurcations and chaos \\cite{Pla, Serpico, Horley} occur. A classical example of this is the appearance of an additional large-amplitude stationary orbit via the folding bifurcation, again studied in the case of one macrospin only \\cite{Pla, Serpico}. The nonlinear phenomena lead to a complex structure of the trajectories near the separatrix, a special trajectory separating the basins of attraction of the two minima. Under the influence of an external periodic force, the separatrix may become fractal so that closely situated initial conditions may lead either to switching or not \\cite{Horley}. This produces, for example, fractal regions of switching beyond the Stoner-Wohlfarth astroid observed experimentally in micro-SQUID experiments \\cite{Thirion} and numerically in Refs.\\onlinecite{Thirion, Zhang, Horley, Scholtz}. 
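The pendulum analogy is easy to demonstrate numerically. The sketch below uses illustrative parameters only (not taken from the cited works): it integrates a weakly damped pendulum whose frequency is modulated at twice its natural frequency. With the modulation on, a tiny initial angle grows by more than an order of magnitude before the $\\sin\\theta$ nonlinearity saturates it; without modulation, the oscillation simply decays.

```python
import numpy as np

def pendulum_max_angle(h, beta=0.01, w0=1.0, t_end=100.0, dt=0.01, theta0=1e-2):
    """Max |theta| for theta'' + 2 beta theta' + w0^2 (1 + h cos(2 w0 t)) sin(theta) = 0."""
    def f(t, y):
        th, v = y
        return np.array([v, -2*beta*v - w0**2 * (1.0 + h*np.cos(2*w0*t)) * np.sin(th)])
    y = np.array([theta0, 0.0])
    peak = abs(theta0)
    for n in range(int(t_end / dt)):  # classical fixed-step RK4
        t = n * dt
        k1 = f(t, y)
        k2 = f(t + dt/2, y + dt/2*k1)
        k3 = f(t + dt/2, y + dt/2*k2)
        k4 = f(t + dt, y + dt*k3)
        y = y + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)
        peak = max(peak, abs(y[0]))
    return peak

grown = pendulum_max_angle(h=0.5)  # modulation at 2*w0: parametric instability
quiet = pendulum_max_angle(h=0.0)  # no modulation: the small oscillation just decays
```

The instability threshold set by the damping, and the saturation by nonlinear frequency detuning, are the same ingredients that appear in the macrospin switching problem.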
However, since these phenomena are complex even in the case of one magnetic moment, in a large system nonlinear phenomena are more difficult to interpret and may not lead to a clear pattern.\n\t\n\nThe reversal under oscillating fields of large-sized magnetic nanoelements was mostly studied in cases with simple magnetisation reversal modes, such as domain wall\\cite{Grollier,Martinez}, $C$ or $S$ magnetisation states \\cite{Fidler}, the vortex to onion state in magnetic rings \\cite{Podbielski} or vortex core reversal \\cite{Wayenberge, LeeK}. The enhanced mobility of domain walls with some frequencies was numerically found \\cite{Martinez}. Experimentally, using the magnetic circular dichroism technique in a magnetic element having a Landau pattern, the asymmetry of the domain wall motion under a linearly-polarised mw field with the drift in one direction was reported \\cite{Krasyuk}. This has been explained by the entropy maximization involving the precessional excitation (the excitation of a so-called \"self-trapping spin-wave mode\").\n\n\nAs we mentioned above, most of the previous theoretical studies involve either fast switching in the macrospin case (or effectively conditions of coherent rotation) or the switching of individual objects as vortices or domain wall structures. However, the spatial resolution of the fast Kerr experiments \\cite{Nembach, Pimentel} requires quite large magnetic elements with not so simple and unique magnetisation reversal modes. Namely, the mw-assisted switching in large systems produces a sequence of nucleation - propagation - relaxation phenomena which is the subject of the present study. 
We show that the overall dynamics is a complex phenomenon where some of the above mentioned processes could occur simultaneously and could play a role at different stages of the magnetisation reversal.\n\n\n\n\n\\section{Model}\nIn the present work, with the aim of understanding the mw-assisted switching processes in large magnetic elements \\cite{Nembach,Pimentel}, we use a micromagnetic model to simulate the magnetisation dynamics in a permalloy ellipsoid. Due to computational size limitations we used reduced dimensions for the ellipsoid: the total magnetic volume is $V = 4 \\mu m \\times 2 \\mu m \\times 25 nm$. It has the same aspect ratio as in the experiment. We used the publicly available code Magpar \\cite{Magpar} with a finite element discretisation, in which we implemented the possibility to simultaneously apply dc and ac fields.\nThe permalloy easy axis was directed parallel to the long ellipsoid axis (y-axis, see Fig.~\\ref{geometry}). Initially the ellipsoid was magnetised along this axis. The dc applied field is directed antiparallel to it. The linearly polarized ac field with large amplitude is directed perpendicular to it (x-axis). The following micromagnetic parameters were used in the simulations: anisotropy value $K=3.31\\times 10^3 erg\/cm^3$, saturation magnetisation value $M_{s}=860 \\;emu\/cm^3$ (the anisotropy field $H_{K}=7.7 Oe$), exchange parameter $A=1.05 \\times 10^{-6} erg\/cm$, damping parameter $\\alpha=0.012$.\n\n\\begin{figure}[h!]\n\\includegraphics[height=6cm]{sistema.eps}\n\\caption{Geometry of the simulated system.}\n\\label{geometry}\n\\end{figure}\n\nIn Fig. \\ref{Hyst} we present a hysteresis cycle for the permalloy ellipsoidal element with the field applied parallel to the easy axis direction. 
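As a consistency check on the parameters just listed, the quoted anisotropy field follows from the standard uniaxial relation $H_K = 2K/M_s$:

```python
K = 3.31e3            # erg/cm^3, uniaxial anisotropy constant
Ms = 860.0            # emu/cm^3, saturation magnetisation
H_K = 2.0 * K / Ms    # Oe, uniaxial anisotropy field
print(round(H_K, 1))  # -> 7.7, matching the value quoted above
```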
The coercive field is much larger than the anisotropy field value due to the large magnetostatic shape anisotropy.\n\n\n\\begin{figure}[h!]\n\\includegraphics[height=6cm]{Hyst1.EPS}\n\\caption{Simulated descending branch of the hysteresis cycle for the permalloy ellipsoid with static applied field only (full circles) and with simultaneously applied static and mw field with $H_{ac}=25.1 Oe$, and two frequencies (open symbols). The dc-field is applied along the long ellipsoid axis $Y$. The average ellipsoid magnetisation $\\langle M_y \\rangle$ is normalized to the total saturation value.}\n\\label{Hyst}\n\\end{figure}\n\n\n\\section{Results}\nFirst, we model the hysteresis cycle with simultaneously applied dc and ac fields (see Fig.~\\ref{Hyst}) and note the reduction of coercivity in the case of the mw-assisted switching process. The hysteresis cycle is square-like in the former case and rounded in the latter one. To understand the coercivity reduction we model the minimum strength of the applied field required to produce the magnetisation reversal in our system for different mw-field frequencies.\n\nWe notice that one of the large contributions to the coercivity reduction comes from the deviation of the magnetisation direction from the y-axis, following the ac-field. Thus, the coercivity is reduced simply due to the fact that most of the time the resulting field is applied at some angle to the easy axis. Consequently, we compare the mw-assisted switching process with the situation when a static field is applied at an angle equal to that formed during the mw-assisted case with the maximum amplitude $H_{ac}$ of the ac field. The maximum amplitude of the critical field $H_{cr}=\\sqrt{H_{ac}^2+H_{dc}^2}$, necessary to switch the magnetisation, is presented in Fig.~\\ref{DC_AC} as a function of the maximum applied field angle. We use the following condition for the magnetisation switching: $\\langle M_y \\rangle < 0$, where $\\langle M_y \\rangle$ is the average magnetisation along the $y$ direction. 
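The comparison above reduces to simple geometry: at the moment of maximum ac amplitude, the total field has magnitude $H_{cr}=\\sqrt{H_{ac}^2+H_{dc}^2}$ and makes an angle $\\arctan(H_{ac}/H_{dc})$ with the easy axis. A numerical check for field values used later in the paper (not part of the simulation code):

```python
import math

H_dc = 31.41                                  # Oe, magnitude of the reversed dc field
H_ac = 25.13                                  # Oe, maximum amplitude of the ac field
H_cr = math.hypot(H_ac, H_dc)                 # maximum total field amplitude, Oe
theta = math.degrees(math.atan2(H_ac, H_dc))  # maximum field angle from the easy axis
```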
Our results show that even with this fairer comparison, mw-assisted switching always requires a smaller field, except for the case of very large deviation angles and some frequencies. Therefore, the first important contribution to decreasing the coercivity during the mw-assisted switching process is the deviation of the field from the easy axis, which leads to precessional switching. Moreover, for small field angles the results are almost independent of the ac-field frequency, whose role is just to induce the precession. The results are different when strong deviations of the magnetisation occur, which stresses the importance of the nonlinear effects.\n\n\n\n\n\n\n\\begin{figure}[h!]\n\\includegraphics[height=6cm]{Hcr_vs_teta2.EPS}\n\\caption{Amplitude of the critical field necessary for magnetisation switching as a function of the maximum field angle with the anisotropy axis for static field and microwave-assisted switching. }\n\\label{DC_AC}\n\\end{figure}\n\n\nTo illustrate the differences between the static and mw-assisted switching processes we present the dynamical configurations during the magnetisation reversal in Figs.~\\ref{Static} and~\\ref{MW}. For the static process (Fig.~\\ref{Static}) the magnetisation nucleation starts with two vortices in the opposite upper and lower sides of the ellipsoid. During the irreversible jump these vortices propagate in opposite directions creating two domain walls. The propagation of these domain walls toward the left and the right sides of the ellipsoid completes the magnetisation reversal.\n\n\n\\begin{figure}[h!]\n\\includegraphics[height=5cm]{Fig4.eps}\n\\caption{Magnetisation configurations during the static field hysteresis process at different time moments: (a) the initial state at the coercive field, (b) an intermediate state, showing the vortex propagation, and\n(c) the final state at the coercive field. 
The field is applied at $45^\\circ$ with the $Y$ axis.}\n\n\\label{Static}\n\\end{figure}\n\n\nFig.~\\ref{MW} shows the magnetisation configurations during the mw-assisted switching process. The complementary video material (see Ref.~\\onlinecite{Videos}) illustrates the temporal dynamical evolution of the magnetisation reversal. We observe that the ellipsoid is dynamically divided into domain structures. The number of domains depends strongly on the mw frequency and may be related to the wavelength of the spinwave mode excited in the system. In Fig.~\\ref{ripple} we represent the approximate number of magnetisation domains nucleated during the first two nanoseconds as a function of the ac-field frequency and in the magnetisation range $0.6 < \\langle M_y \\rangle < 0.8$ (normalized to the total saturation magnetisation). Note that the domain size is larger in the central part of the ellipsoid than in the upper and lower sides, due to the ellipsoidal form, and initially grows with time (see Fig.~\\ref{size}).\nThe division of the ellipsoid into domains during mw-assisted switching is experimentally confirmed by Kerr images in Ref.~\\onlinecite{Pimentel}. Therefore, the mw-assisted case is characterized by a reversal mode completely different from the one appearing during the static hysteresis.\n\n\\begin{figure}[h!]\n\\includegraphics[height=6cm]{Fig5.eps}\n\\includegraphics[height=6cm]{Fig5_cd.eps}\n\\caption{Dynamical configurations: $M_x$ (upper) and $M_z$ (lower) components (grey scale) during the mw-assisted magnetisation reversal for applied fields $H_{dc}=-31.41 Oe$, $H_{ac}=18.84 Oe$ and for two different frequencies (left) $\\nu=1 GHz$ and (right) $\\nu=6 GHz$. 
For the dynamics of the reversal process, see the supplemental material \\cite{Videos}.}\n\\label{MW}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\includegraphics[height=6cm]{ripple1.eps}\n\\caption{ Number of domains in the ellipsoid nucleated during the first two nanoseconds as a function of mw frequency for applied field $H_{dc}=-31.41 Oe$, $H_{ac}=25.13 Oe$.}\n\\label{ripple}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\includegraphics[height=6cm]{ripples_vs_time1.EPS}\n\\caption{Domain size in the ellipsoid along the $y$ coordinate as a function of the position of its center for two time snapshots, corresponding to the first nucleation-expansion stage of the mw-assisted reversal processes and for $H_{dc}=-31.41 Oe$, $H_{ac}=25.13Oe$. Note that after approximately $t> 2.5 ns$ the domain size expansion is suppressed.}\n\\label{size}\n\\end{figure}\n\nGenerally speaking, the ac-driven process is complicated and involves different objects. The domains are separated by domain walls (of N\\'eel or cross-tie types) and the magnetic moments in the center of these domain walls are constantly precessing (see Fig.~\\ref{MW} (c) and (d)). This precession is important for the domain wall mobility. In the junctions between the domains vortices are created (see Fig.~\\ref{Vortex}).\nNext, during the process we observe constant spinwave generation and their reflection from the ellipsoid boundaries and domain walls. Therefore, the overall process contains an interplay of several important effects, discussed earlier in the literature in more simple situations (see the Introduction). Each of them has been reported previously to be influenced by the mw-field.\n\n\\begin{figure}[h!]\n\\includegraphics[height=5cm]{Fig7.eps}\n\\caption{More detailed magnetisation configuration in the upper part of the ellipsoid showing the occurrence of domain walls and vortices. 
$H_{dc}=-31.41Oe$, $H_{ac}= 25.13Oe$, $\\nu=0.1 GHz$.}\n\\label{Vortex}\n\\end{figure}\n\nFig.~\\ref{time} shows the dependence of the magnetisation switching time in the ellipsoid on the mw frequency for several amplitudes of the ac field. We clearly observe oscillations in the switching time. Moreover there exists a region of the parameters (see Fig.~\\ref{contur}) where the switching required more than $16 \\;ns$. To understand the phenomenon we first plot in Fig. \\ref{My} the average $M_y$ magnetisation component as a function of time. The small-period magnetisation oscillations are related to the ac-field oscillations. During the switching process we can distinguish two main processes: The first one (occurring in the first 2 ns) is related to the efficiency of the magnetisation nucleation in the system while the second one is related to the magnetisation relaxation. Fig.~\\ref{My} shows that at several frequencies the magnetisation reversal process started faster in its nucleation part but proceeded slower in the relaxation part.\n\n\n\n The nucleation process is associated to efficient spin wave generation and then to spinwave instability phenomena which is a precursor of the magnetisation reversal \\cite{Chubykalo, Dobin, Kashuba}. The role of the ac-field is to excite a spinwave with the external frequency. Because of this, the data of Fig.~\\ref{ripple} resembles the spinwave dispersion relation and the ripple configurations - the spinwave modes reported in Ref.~\\onlinecite{Gubbiotti}. However, the excited mode may be unstable for a given condition and, thus, produce instabilities and magnetisation reversal. For example, we have seen that at these values of the applied field the main FMR mode is completely unstable. When we tried to excite the homogeneous mode, with the magnetisation anti-parallel to the applied field direction, the energy was immediately redistributed into inhomogeneous spinwaves provoking later the magnetisation reversal. 
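For orientation, the FMR scale invoked here can be estimated from the thin-film Kittel formula $f=(\\gamma/2\\pi)\\sqrt{H(H+4\\pi M_s)}$ with the parameters of the Model section. This macrospin estimate ignores the shape anisotropy and mode quantization of the finite ellipsoid, so it fixes only the order of magnitude:

```python
import math

gamma_over_2pi = 2.8e6  # Hz/Oe, gyromagnetic ratio over 2*pi for g ~ 2
Ms = 860.0              # emu/cm^3, saturation magnetisation from the Model section
H = 7.7 + 31.41         # Oe: anisotropy field plus the applied dc field magnitude
f_fmr = gamma_over_2pi * math.sqrt(H * (H + 4.0 * math.pi * Ms))
# f_fmr comes out at roughly 2 GHz, the same order as the ~3 GHz discussed in the text
```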
The excitation of these spinwaves acted as an additional damping source leading to magnetisation reversal, similar to the case of Ref.~\\onlinecite{Safonov}. In this case, excitation with frequencies close to the FMR one (around $3 GHz$ in Fig.~\\ref{time}) is the most efficient. The first stage (several ns) is also characterized by an initial expansion of the domain size, especially in the center, as seen in Fig.~\\ref{size}.\n\n\n\n\n\\begin{figure}[h!]\n\\includegraphics[height=6cm]{tsw1.EPS}\n\\caption{Switching time of the magnetisation as a function of the mw frequency for various values of the mw-field amplitude $H_{ac}$ and $H_{dc}= -31.41 Oe$. }\n\\label{time}\n\\end{figure}\n\n\n\\begin{figure}[h!]\n\\includegraphics[height=6.5cm]{contour1.eps}\n\\caption{Contour plot for the switching time of the ellipsoid for applied field $H_{dc}=- 31.41 Oe$. }\n\\label{contur}\n\\end{figure}\n\n\n\\begin{figure}[h!]\n\\includegraphics[height=6cm]{My_time1.EPS}\n\\caption{Temporal evolution of the average $\\langle M_y \\rangle$ magnetisation component (normalized to the saturation value) during the microwave-assisted switching process at $H_{dc}=- 31.41 \\; Oe$ and $H_{ac}= 25.13\\; Oe$. }\n\\label{My}\n\\end{figure}\n\n\\begin{figure}[h!]\n\\includegraphics[height=6cm]{desv1.EPS}\n\\caption{Temporal evolution of the average transverse magnetisation component (normalized to the saturation value), characterizing the precession, during the microwave-assisted switching process at $H_{dc}=- 31.41 \\; Oe$ and $H_{ac}= 25.13 \\; Oe$.}\n\\label{desv}\n\\end{figure}\n\n\n\nTo illustrate the role of precession, we present in Fig.~\\ref{desv} the temporal evolution of the average value of the transverse magnetisation component (this value is proportional to the energy put into precession by the microwave). The first part of these curves characterizes the initial nucleation-expansion process (first 2 ns). It is clear that the energy is put efficiently into the precession with the frequency close to the main FMR mode. 
This is confirmed by the fast decay of the $\\langle M_y \\rangle$ magnetisation component in Fig.~\\ref{My} and the fast growth of the transverse component in Fig.~\\ref{desv}. This fact is consistent with the hypothesis introduced in many papers stating that for efficient switching the mw frequency should coincide with the FMR one \\cite{Woltersdorf, Sun}. In the case of magnetic elements behaving as one macrospin magnetic moment this would determine the overall switching. However, a fast nucleation is insufficient for fast magnetisation switching of larger elements. Fig.~\\ref{desv} demonstrates that for frequencies close to $6 GHz$ (double the FMR frequency in the opposite well) the precessional energy remains constant. We note that only the magnetic moments in the centers of domain walls separating domains with opposite $M_x$ signs (and in the vortex centers) are precessing (see Fig.~\\ref{MW}). Thus the expansion of domain walls necessary for magnetisation relaxation is dependent on the relaxation of these magnetic moments. For frequencies and timescales at which this relaxation is very slow, the energy is transferred efficiently into the precessional motion and not to the relaxation process. Consequently, the further domain expansion is extremely slow. We would like to note that this idea is in the spirit of the hypothesis introduced in Ref.~\\onlinecite{Krasyuk} (self-trapping of magnetic oscillations). At the long-time scale the magnetisation reversal at this stage proceeds again via additional spin-wave generation and reflection destabilizing the domain structure. Note also that the nucleation process occurs when the magnetisation is anti-parallel to the field direction while in the relaxation part it is parallel. Thus, the relevant frequencies are different in both cases.\n\n\nFig.~\\ref{Twofields} shows the magnetisation switching time as a function of the mw frequency for two dc applied field values. 
As the applied dc field increases, the angle of the magnetisation precession increases in the nucleation part of the process but decreases in the relaxation part. Larger precessional angles lead to nonlinear phenomena. Associated with larger precessional angles there is a nonlinear shift of the frequencies to smaller values \\cite{Usatenko}. This is in agreement with the results presented in Fig.~\\ref{Twofields}, provided that the relevant frequencies are determined by the relaxation process. However, in Fig.~\\ref{time} the frequencies are shifted to larger values with larger ac-field amplitude. This shows that the overall process is much more complicated and cannot be analysed in terms of the FMR frequencies relevant to the switching of one magnetic moment only. In fact, different ac-frequencies excite modes with different spin wave vector and the instabilities of them occur at different threshold amplitudes.\n\n\\begin{figure}[h!]\n\\includegraphics[height=6cm]{Twofields2.EPS}\n\\caption{Switching time of the magnetisation as a function of the mw frequency for two values of the stationary field $H_{dc}$ and the mw-field amplitude $H_{ac}= 25.13 \\; Oe$.}\n\\label{Twofields}\n\\end{figure}\n\n\n\\section{Conclusions}\n\nUsing the micromagnetic model, we have investigated the mechanism of fast magnetisation switching assisted by a linearly polarised microwave field in micron-sized magnetic elements. Magnetisation dynamics in these magnetic patterns is governed by nucleation-propagation-relaxation processes and is different to those of nanoscale elements behaving as one macrospin. Our simulations confirm that the microwave-assisted magnetisation reversal requires a smaller field than in the process under a static applied field. The first contribution to this is the deviation of the field from the easy-axis during the mw-assisted switching process. 
However, even when we compared the situations when the maximum applied field angle is the same in both cases, mw-assisted switching appeared to require less field. Moreover, our results show that at small intensity\nof the mw-field its action is not described by the effective field tilt.\n This happens due to the fact that the static and the mw-assisted switching processes involve different reversal modes. In the first case this mode consists of the nucleation of two vortices in opposite corners, their subsequent merging and creation of two domain walls which later span the whole ellipsoid. In the second case, the reversal mode consists of a ripple structure, the external frequency being responsible for the excitation of a spinwave mode with the corresponding ripple size. For fields below the static coercivity field, the magnetisation configuration corresponds to almost homogeneous background plus excited spin wave mode and is metastable. The amplitude of the excited spin wave mode grows with applied field and becomes unstable, leading to magnetisation reversal. Thus, the spin-wave instability mechanism \\cite{Chubykalo, Dobin,Kashuba} plays a major role for the fast magnetisation switching process in nano-size magnetic elements.\n\nWe have shown that the most efficient nucleation is at the FMR frequency (and its multiples). However, the magnetisation reversal process after the nucleation requires also an efficient magnetisation relaxation. For example, the domain growth is stimulated by the ac field. This happens due to the relaxation of precessing magnetic moments in the domain wall center. In the spirit of the mechanism suggested in Ref.~\\onlinecite{Krasyuk}, for some frequencies, the relaxation process is not efficient, this happens when the mw field is coupled to the precessional motion. 
In this case the microwave field efficiently puts energy into precession and not into the propagation-relaxation processes, which are slowed down.\nAs a consequence of the interplay of several mechanisms with different relevant frequencies, the switching time of magnetic elements is a complicated function of the external frequency. The above results show that the magnetisation dynamics in micron-sized magnetic elements cannot be analysed as a simple FMR-related phenomenon, nor does it follow a direct correspondence with the spinwave spectrum.\n\n\nFinally, we would like to discuss our results in the context of available experimental data.\n Our results are in qualitative agreement with experimental observations. The Kerr images in Ref.~\\onlinecite{Pimentel} show the creation of ripple structures during the mw-assisted switching process in a permalloy ellipsoid. Currently, no data are available for the switching time of the magnetisation in ellipsoids that could confirm the existence of the oscillations presented in Fig.~\\ref{time}. However, in several experimental papers on different magnetic elements we have found qualitative similarities with our results. For example, in Co bars the relaxation time observed by the fast Kerr technique showed a strong nonmonotonic behavior as a function of applied field \\cite{Adam}. Also, in Co bars measured via the anisotropic magnetoresistance effect \\cite{Grollier}, the authors observed several resonance peaks when sweeping the frequency. The microwave power input necessary for the maximum coercivity reduction was reported to be frequency-dependent in measurements of mw-assisted switching in NiFe magnetic tunneling junctions \\cite{Moriyama}. 
The critical field necessary for the microwave assistance was shown to be a complicated function of the mw frequency, with several minima, for the mw-assisted process going from the vortex to the onion state in magnetic nanorings \\cite{Podbielski}.\n\n\n\\section*{Acknowledgements}\nThe results of the German group were obtained with research funding from the European Commission under the EU-RTN ULTRASWITCH (HPRN-CT-2002-00318) project. The Spanish authors acknowledge financial support from the European COST-P19 Action and the Spanish National projects MAT2007-66719-C03-01 and\n Consolider CS2008-023. They are also grateful to J.M. Rodriguez Puerta for his help with computer facilities.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{APPROACH}\n\\label{sec::approach}\n\nIn this section, we introduce \\textsc{apple}, which has two novel features: (1) compared to learning from demonstrations or interventions, which require human driving expertise in order to take full control of the moving robot, \\textsc{apple} instead requires just evaluative feedback that can be provided even by non-expert users; (2) in contrast to previous work \\cite{xiao2020appld, wang2021appli} that selects the planner parameter set based on how similar the deployment environment is to the demonstrated environment, \\textsc{apple} is based on the expected evaluative feedback, i.e., the actual navigation performance. \n\\textsc{apple}'s performance-based parameter policy has the potential to outperform previous approaches that are based on similarity.\n\n\\subsection{Problem Definition}\n \nWe denote a classical parameterized navigation system as $G: \\mathcal{X} \\times \\Theta \\rightarrow \\mathcal{A}$, where $\\mathcal{X}$ is the state space of the robot (e.g., goal, sensor observations), $\\Theta$ is the parameter space for $G$ (e.g., max speed, sampling rate, inflation radius), and $\\mathcal{A}$ is the action space (e.g., linear and angular velocities). 
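As a purely illustrative reading of this interface, the sketch below implements a toy $G$ in Python. The planner logic, state fields, and parameter names here are our own stand-ins for exposition, not the actual \\textsc{dwa} planner used later in the paper.

```python
from dataclasses import dataclass

@dataclass
class State:
    goal_angle: float          # relative local goal direction (rad)
    min_obstacle_dist: float   # closest range reading (m)

@dataclass
class Params:
    max_vel: float             # linear-speed cap (a stand-in for max_vel_x)
    turn_gain: float           # angular-velocity gain (hypothetical)

def G(x: State, theta: Params):
    """Toy parameterized planner G: X x Theta -> A, returning (v, omega)."""
    v = min(theta.max_vel, x.min_obstacle_dist)   # slow down near obstacles
    omega = theta.turn_gain * x.goal_angle        # steer toward the local goal
    return v, omega

print(G(State(goal_angle=0.5, min_obstacle_dist=2.0), Params(1.0, 1.5)))
```

The point is only that the same state maps to different actions under different $\theta$, which is the degree of freedom \textsc{apple} learns to exploit.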
During deployment, the navigation system repeatedly estimates state $x$ and takes action $a$ calculated as $a = G(x; \\theta)$. Typically, a default parameter set $\\bar{\\theta}$ is tuned by a human designer trying to achieve good performance in most environments. However, being good at everything often means being great at nothing: $\\bar{\\theta}$ usually exhibits suboptimal performance in some situations and may even fail (is unable to find feasible motions, or crashes into obstacles) in particularly challenging ones~\\cite{xiao2020appld}. \n\nTo mitigate this problem, \\textsc{apple} learns a parameter policy from human evaluative feedback with the goal of selecting the appropriate parameter set $\\theta$ (from either a discrete parameter set library or from a continuous full parameter space) for the current deployment environment.\nIn detail, a human can supervise the navigation system's performance at state $x$ by observing its action $a$ and giving corresponding evaluative feedback $e$.\nHere, the evaluative feedback can be either discrete (e.g., ``good\/bad job\") or continuous (e.g., a score ranging in $[0, 1]$).\nDuring feedback collection, \\textsc{apple} finds (1) a parameterized predictor $F_\\phi: \\mathcal{X} \\times \\Theta \\rightarrow \\mathcal{E}$ that predicts human evaluative feedback for each state-parameter pair $(x, \\theta)$, and (2) a parameterized parameter policy $\\pi_\\psi: \\mathcal{X} \\rightarrow \\Theta$ that determines the appropriate planner parameter set for the current state.\n\nBased on whether \\textsc{apple} chooses the parameter set from a library or the parameter space, we introduce the discrete and continuous parameter policies in the following two sections, respectively.\n\n\\subsection{Discrete Parameter Policy}\n\nIn some situations, the user may already have $K$ candidate parameter sets (e.g., the default set or sets tuned for special environments like narrow corridors, open spaces, etc.) 
which together make up a parameter library $\\mathcal{L} = \\{\\theta^i\\}_{i=1}^K$ (superscript $i$ denotes the index in the library). In this case, \\textsc{apple} uses the provided evaluative feedback $e$ in order to learn a policy that selects the most appropriate of these parameters given the state observation $x$.\n\nTo do so, we parameterize the feedback predictor $F_\\phi$ in a way similar to the value network in DQN \\cite{mnih2015humanlevel}, where the input is the observation $x$ and the output is $K$ predicted feedback values $\\{\\hat{e}^i\\}_{i=1}^K$, one for each parameter set $\\theta^i$ in the library $\\mathcal{L}$, as a prediction of the evaluative feedback a human user would give if the planner were using the respective parameter set at state $x$. We form a dataset for supervised learning, $\\mathcal{D} := \\{x_j, \\theta_j, e_j\\}_{j=1}^N$ ($\\theta_j \\in \\mathcal{L}$, subscript $j$ denotes the time step), \nusing the evaluative feedback collected so far, and $F_\\phi$ is learned via supervised learning to minimize the difference between the predicted feedback and the label,\n\\begin{equation}\n \\phi^* = \\argmin_\\phi \\mathop{\\mathbb{E}}_{(x_j, \\theta_j, e_j) \\sim \\mathcal{D}} \\ell(F_\\phi(x_j, \\theta_j), e_j),\n \\label{eq:critic_loss}\n\\end{equation}\nwhere $\\ell(\\cdot, \\cdot)$ is the categorical cross-entropy loss if the feedback $e_j$ is discrete (e.g., ``good job\/bad job'' or an integer score between 1 and 5), or the mean squared error given continuous feedback.\n\nTo achieve the best possible performance, the parameter policy $\\pi(\\cdot|x)$ chooses the parameter set that maximizes the expected human feedback (the discrete parameter policy does not require any additional parameters beyond $\\phi$ for $F$, so $\\psi$ is omitted here for simplicity). 
More specifically, \n\\begin{equation}\n \\pi(\\cdot|x) = \\argmax_{\\theta \\in \\mathcal{L}} F_{\\phi^*}(x, \\theta).\n \\label{eqn::discrete}\n\\end{equation}\n\nCompared to RL, especially DQN, discrete \\textsc{apple} has a similar architecture and training objective. However, an important difference is that while RL optimizes future (discounted) cumulative reward, \\textsc{apple} greedily maximizes the current feedback. The reason is that we assume that, while supervising the robot's actions, the human considers not only the current results but also future consequences and gives feedback accordingly. This assumption is consistent with past systems such as \\textsc{tamer}~\\cite{knox2009interactively}. Under this interpretation of feedback, \\textsc{apple} can also be thought of as trying to maximize some notion of future performance.\n\n\\subsection{Continuous Parameter Policy}\n\nIf a discrete parameter library is not available or desired, \\textsc{apple} can also be used over continuous parameter spaces (e.g., deciding the max speed from $[0.1, 2]\\ \\mathrm{m\/s}$). \nIn this scenario, \\textsc{apple} can still learn from either discrete or continuous feedback. However, learning from discrete feedback at finer resolutions, or even continuous feedback, should lead to better performance.\n\nIn this setting, we parameterize the parameter policy $\\pi_\\psi$ and the feedback predictor $F_\\phi$ in the actor-critic style.\nWith the collected evaluative feedback $ \\mathcal{D} := \\{x_j, \\theta_j, e_j\\}_{j=1}^N$, the training objective of $F_\\phi$ is still to minimize the difference between predicted and collected feedback, as specified by Eqn. (\\ref{eq:critic_loss}). For the parameter policy $\\pi_\\psi$, beyond choosing the action that maximizes expected feedback, its training objective is augmented by maximizing the entropy of the policy $\\mathcal{H}(\\pi_\\psi(\\cdot|x))$ at state $x$. 
Using the same entropy regularization as Soft Actor Critic (SAC)~\\cite{haarnoja2018soft}, $\\pi_\\psi$ favors more stochastic policies, leading to better exploration during training:\n\\begin{equation}\n \\psi^* = \\argmin_\\psi \\mathop{\\mathbb{E}}_{\\substack{x_j \\in \\mathcal{D} \\\\ \\tilde{\\theta}_j \\sim \\pi_\\psi(\\cdot | x_j)}} \\left[-F_\\phi(x_j, \\tilde{\\theta}_j) + \\alpha \\log \\pi_\\psi(\\tilde{\\theta}_j | x_j) \\right],\n \\label{eqn::continuous}\n\\end{equation}\nwhere $\\alpha$ is the temperature controlling the importance of the entropy bonus and is automatically tuned as in SAC~\\cite{haarnoja2018soft}.\n\n\n\\subsection{Deployment}\nDuring deployment, we measure the state $x_t$, use the parameter policy to obtain a set of parameters $\\theta_t \\sim \\pi_{\\psi^*}(\\cdot | x_t)$ at each time step, and apply that parameter set to the navigation planner $G$.\n\n\\section{CONCLUSIONS}\n\\label{sec::conclusions}\n\nIn this work, we introduce \\textsc{apple}, \\emph{Adaptive Planner Parameter Learning from Evaluative Feedback}. In contrast to most existing end-to-end machine learning for navigation approaches, \\textsc{apple} utilizes existing classical navigation systems and inherits all their benefits, such as safety and explainability. Furthermore, instead of requiring a full expert demonstration or a few corrective interventions that need the user to take full control of the robot, \\textsc{apple} just needs evaluative feedback as simple as ``good job\" or ``bad job\" that can be easily collected from non-expert users. Moreover, comparing with \\textsc{appli} which selects the parameter set based on the similarity with demonstrated environments, \\textsc{apple} achieves better generalization by selecting the parameter set with a performance-based criterion, i.e., the expected evaluative feedback. 
We show \\textsc{apple}'s performance improvement with simulated and real human feedback, as well as its generalizability in both 50 unseen simulated environments and an unseen physical environment.\nIn this paper, we use relatively dense feedback signals from the human user in the physical experiments (and different resolutions of simulated feedback signals in the simulated experiments) to reduce the amount of time needed to train a good \\textsc{apple} policy. These dense feedback signals may not always be practical, for example, the user may not always be paying attention. Therefore an important direction for future investigation is to study how little feedback is needed to yield good performance. Another important direction is to evaluate APPLE's generality with human subjects with different expertise levels and feedback criteria using an extensive user study.\n\n\n\n\n\\section{EXPERIMENTS}\n\\label{sec::experiments}\nIn our experiments, we aim to show that \\textsc{apple} can improve navigation performance by learning from evaluative feedback, in contrast to a teleoperated demonstration or a few corrective interventions, both of which require the non-expert user to take control of the moving robot. We also show \\textsc{apple}'s generalizability to unseen environments. We implement \\textsc{apple} on a ClearPath Jackal ground robot in \\textsc{barn}~\\cite{perille2020benchmarking} with 300 navigation environments randomly generated using Cellular Automata, and in two physical obstacle courses. 
\n\n\\subsection{Implementation}\n\nThe Jackal is a differential-drive robot equipped with a Velodyne LiDAR that we use to obtain a 720-dimensional planar laser scan with a 270$^\\circ$ field of view, denoted as $l_t$.\nThe robot uses the Robot Operating System \\texttt{move\\textunderscore base} navigation stack with Dijkstra's global planner and the default \\textsc{dwa} local planner~\\cite{fox1997dynamic}.\nFrom the global planner, we query the relative local goal direction $g_t$ (in angle) as the averaged tangential direction of the first 0.5m global path. The state space of the robot is the combination of the laser scan and local goal $x_t = (l_t, g_t)$. The parameter space consists of the 8 parameters of the \\textsc{dwa} local planner as described in Tab. \\ref{tab::jackal_parameters}, and the action space is the linear and angular velocity of the robot, $a_t = (v_t, \\omega_t)$.\n\nFor discrete \\textsc{apple}, we construct the parameter library shown in Tab. \\ref{tab::jackal_parameters} with the default \\textsc{dwa} parameter set $\\theta_1$ and parameter sets $\\theta_{2 \\sim 7}$ learned in the \\textsc{appli} work~\\cite{wang2021appli}. Here, we use parameter sets learned in previous work, and we have found that this is important\u2014without a reasonably good set to start with, \\textsc{apple} is typically unable to learn, e.g., smaller \\emph{inflation\\_radius} and larger \\emph{vtheta\\_samples} in $\\theta_2$ achieve good navigation in tight spaces, while larger \\emph{max\\_vel\\_x} and \\emph{pdist\\_scale} in $\\theta_4$ perform well in open ones. Without prelearned parameter sets, one can obtain a library via coarse tuning to create various driving modes (e.g. increase \\emph{max\\_vel\\_x} to create an aggressive mode). 
For continuous \\textsc{apple}, the parameter ranges for the parameter policy $\\pi_\\psi$ to select from are listed in the same table.\n\n\\begin{table}[h]\n \\caption{Parameter Library and Range: \\\\ \\emph{max\\_vel\\_x} \\textnormal{(v)}, \\emph{max\\_vel\\_theta} \\textnormal{(w)}, \\emph{vx\\_samples} \\textnormal{(s)}, \\emph{vtheta\\_samples} \\textnormal{(t)}, \\emph{occdist\\_scale} \\textnormal{(o)}, \\emph{pdist\\_scale} \\textnormal{(p)}, \\emph{gdist\\_scale \\textnormal{(g)}}, \\emph{inflation\\_radius \\textnormal{(i)}}}\n \\label{tab::jackal_parameters}\n \\centering\n \\small\n \\begin{tabular}{lrrrrrrrr}\n \\toprule\n & v & w & s & t & o & p & g & i \\\\\n \\midrule\n $\\theta_1$ & 0.50 & 1.57 & 6 & 20 & 0.10 & 0.75 & 1.00 & 0.30 \\\\\n $\\theta_2$ & 0.26 & 2.00 & 13 & 44 & 0.57 & 0.76 & 0.94 & 0.02 \\\\\n $\\theta_3$ & 0.22 & 0.87 & 13 & 31 & 0.30 & 0.36 & 0.71 & 0.30 \\\\\n $\\theta_4$ & 1.91 & 1.70 & 10 & 47 & 0.08 & 0.71 & 0.35 & 0.23 \\\\\n $\\theta_5$ & 0.72 & 0.73 & 19 & 59 & 0.62 & 1.00 & 0.32 & 0.24 \\\\\n $\\theta_6$ & 0.37 & 1.33 & 9 & 6 & 0.95 & 0.83 & 0.93 & 0.01 \\\\\n $\\theta_7$ & 0.31 & 1.05 & 17 & 20 & 0.45 & 0.61 & 0.22 & 0.23 \\\\\n \\midrule\n $\\min$ & 0.2 & 0.31 & 4 & 8 & 0.10 & 0.10 & 0.01 & 0.10 \\\\\n $\\max$ & 2.0 & 3.14 & 20 & 40 & 1.50 & 2.00 & 1.00 & 0.60 \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\nImplementation-wise, for discrete \\textsc{apple}, $F_\\phi(x, \\theta)$ is a fully-connected neural network with 2 hidden layers of 128 neurons, taking the 721 dimensional $x_t$ as input and outputting the 7 predicted feedback signals $\\hat{e}_{t, 1 \\sim 7}$ for $\\theta_{1 \\sim 7}$ respectively. Parameter policy $\\pi_\\psi$ uses $\\epsilon$-greedy exploration with $\\epsilon$ decreasing from $0.3$ to $0.02$ during the first half of the training. 
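The exploration schedule just described can be sketched as follows. The linear decay shape and the greedy tie-breaking are our own assumptions: the text only fixes the endpoints $0.3$ and $0.02$ and the half-of-training horizon.

```python
import random

def epsilon_at(step, total_steps, eps_hi=0.3, eps_lo=0.02):
    """Epsilon decays (here: linearly, an assumption) from eps_hi to eps_lo
    over the first half of training, then stays at eps_lo."""
    half = total_steps // 2
    if step >= half:
        return eps_lo
    return eps_hi + (eps_lo - eps_hi) * step / half

def select(step, total_steps, predicted_feedback, rng=random):
    """Epsilon-greedy choice over the K library entries, given the K
    predicted feedback values for the current state."""
    if rng.random() < epsilon_at(step, total_steps):
        return rng.randrange(len(predicted_feedback))
    return max(range(len(predicted_feedback)),
               key=lambda i: predicted_feedback[i])

print(epsilon_at(0, 1000), epsilon_at(250, 1000), epsilon_at(900, 1000))
```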
For continuous \\textsc{apple}, $F_\\phi(x, \\theta)$ shares the same architecture as discrete \\textsc{apple}, except for different input (concatenation of $x_t$ and $\\theta_t$) and output (scalar $\\hat{e}_t$). The parameter policy $\\pi_\\psi$ also uses the same architecture, mapping $x_t$ to $\\theta_t$.\n\nTo evaluate the performance of \\textsc{apple}, we use \\textsc{appli} with the same parameter library in Tab. \\ref{tab::jackal_parameters} upper part, \\textsc{applr} with the same parameter ranges in Tab. \\ref{tab::jackal_parameters} lower part, and the Default \\textsc{dwa} planner as three baselines. Since \\textsc{appld} does not generalize well without a confidence-based context predictor~\\cite{wang2021appli} and \\textsc{appli} can therefore outperform \\textsc{appld}, we do not include \\textsc{appld} as one of the baselines. Despite using the same library, \\textsc{appli} chooses the parameter set based on the similarity between the current observation and the demonstrated environments, while discrete \\textsc{apple} uses the expected feedback.\n\n\\subsection{Simulated Experiments}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{contents\/figures\/gazebo.jpg}\n \\vspace{-15pt}\n \\caption{Simulated Environments in \\textsc{barn} Dataset with Low, Medium, and High Difficulty Levels}\n \\vspace{-15pt}\n \\label{fig::simulated_envs}\n\\end{figure}\n\nWe begin by testing \\textsc{apple} on the \\textsc{barn} dataset, with simulated evaluative feedback generated by an oracle (proxy human). Note that although the proxy human for the simulated experiments appears to be similar to the \\textsc{applr} work \\cite{xu2020applr}, the simulated experiments aim to validate different \\textsc{apple} setups with easily accessible feedback before physical experiments. The intended use case for \\textsc{apple} is still during \\emph{physical} deployments with \\emph{real} humans. 
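To make the discrete variant concrete, here is a minimal sketch of the predictor-plus-argmax rule of Eqn.~(\ref{eqn::discrete}), with one least-squares linear regressor per library entry standing in for the neural network described above; the synthetic states, library size, and oracle feedback are invented for illustration.

```python
import numpy as np

K = 3                      # library size (the paper uses K = 7)
rng = np.random.default_rng(0)

def fit_predictor(states, thetas, feedback):
    """Fit one linear model per library index on its (x, e) pairs,
    mimicking the supervised objective of Eqn. (1) with an MSE loss."""
    weights = []
    for i in range(K):
        mask = thetas == i
        X = np.c_[states[mask], np.ones(mask.sum())]    # bias column
        w, *_ = np.linalg.lstsq(X, feedback[mask], rcond=None)
        weights.append(w)
    return np.array(weights)                            # shape (K, d+1)

def policy(weights, x):
    """Discrete parameter policy: argmax of predicted feedback (Eqn. 2)."""
    preds = weights @ np.r_[x, 1.0]
    return int(np.argmax(preds))

# Synthetic data: parameter set i earns high feedback when x is near i.
states = rng.uniform(0, 2, size=(600, 1))
thetas = rng.integers(0, K, size=600)
feedback = 1.0 - np.abs(states[:, 0] - thetas)          # toy oracle

W = fit_predictor(states, thetas, feedback)
print(policy(W, np.array([0.1])), policy(W, np.array([1.9])))
```

The selected index switches across the state space, which is exactly the "different parameters for different regions" behavior the method targets.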
The benchmark dataset consists of 300 simulated navigation environments ranging from easy ones with a lot of open space to challenging ones where the robot needs to get through dense obstacles. Navigation trials in three example environments with low, medium, and high difficulty levels are shown in Fig. \\ref{fig::simulated_envs}. We randomly select 250 environments for training \\textsc{apple} and hold out the remaining 50 environments as the test set.\n\nFor the simulated feedback, we use the projection of the robot's linear velocity along the local goal direction, i.e., $e_t = v_t \\cdot \\cos(g_t)$, which greedily encourages the robot to move along the global path as fast as possible. We then discretize it into different numbers of levels ($\\infty$ levels means using continuous feedback) to study the effect of feedback resolution. Note that this simulated evaluative feedback is suboptimal, as it does not consider future states; we expect actual human evaluative feedback to be more accurate. For example, when the robot is leaving an open space and is about to enter a narrow exit, a human would expect the robot to slow down to get through the exit smoothly, but the simulated oracle still encourages the robot to drive fast. The oracle provides its evaluative feedback at 1 Hz, while \\textsc{apple}, \\textsc{appli} and \\textsc{applr} dynamically adjust the parameter set for the \\textsc{dwa} planner at the same frequency.\n\nAfter training in 250 environments with a total of 2.5M feedback signals collected, we evaluate \\textsc{apple} with discrete and continuous parameter policies (denoted as \\textsc{apple} (disc.) and \\textsc{apple} (cont.)), as well as three baselines, on the 50 test environments by measuring the traversal time for 20 runs per environment. 
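The simulated oracle described above, $e_t = v_t \cdot \cos(g_t)$, and its discretization can be sketched as below. The feedback range $[-2, 2]$ (matching the maximum speed in the parameter table) and the uniform bin edges are our own assumptions; the text does not specify how the levels are spaced.

```python
import numpy as np

def oracle_feedback(v, g):
    """Greedy proxy-human feedback: linear speed projected onto the
    local goal direction, e_t = v_t * cos(g_t)."""
    return v * np.cos(g)

def discretize(e, levels, e_min=-2.0, e_max=2.0):
    """Map continuous feedback onto `levels` bins, uniformly spaced over
    [e_min, e_max] (an illustrative choice). Returns a bin index 0..levels-1."""
    edges = np.linspace(e_min, e_max, levels + 1)[1:-1]   # interior edges
    return np.digitize(e, edges)

e = oracle_feedback(v=1.5, g=np.pi / 6)    # fast, roughly on course
print(e, discretize(e, levels=3))
```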
The proxy human aims at improving navigation efficiency and thus reducing traversal time.\nWe then conduct t-tests to compute the percentage of environments in which \\textsc{apple} with different feedback resolutions is significantly better\/worse ($p<0.05$) than the baselines, as shown in Fig. \\ref{fig::apple_vs_baselines}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{contents\/figures\/APPLE_different_feedback_levels.png}\n \\vspace{-15pt}\n \\caption{\\textsc{apple} Learning from Different Feedback Resolutions.}\n \\label{fig::apple_vs_baselines}\n \\vspace{-10pt}\n\\end{figure}\n\n\n\nDespite learning from suboptimal evaluative feedback, \\textsc{apple} (disc.) and \\textsc{apple} (cont.) still outperform the Default \\textsc{dwa} and \\textsc{appli} at all feedback resolutions.\nThese results demonstrate the advantage of \\textsc{apple}, which selects the parameter set with a performance-based predictor (Eqns. \\ref{eqn::discrete} and \\ref{eqn::continuous}), over \\textsc{appli}, which relies on a similarity-based predictor.\nAmong all feedback resolutions, there is a significant improvement from 2 to 3 feedback levels, for both discrete and continuous policies. Further increasing the resolution does not improve performance much (the slight decreases are likely due to stochasticity in learning), except when using continuous feedback (i.e., $\\infty$ levels) for the continuous policy. These results suggest that as few as three discrete feedback levels are needed to improve navigation performance. \nMost of the cases where \\textsc{apple} achieves worse performance are due to a corner case for the global planner, where it keeps switching between two global paths and thus confuses the local planner. \nSurprisingly, although \\textsc{applr} is in theory expected to perform best, \\textsc{apple} (disc.) performs only slightly worse than \\textsc{applr}, and \\textsc{apple} (cont.) 
outperforms \\textsc{applr} in $29\\%$ of test environments with continuous feedback. The worse performance of \\textsc{applr} may come from challenging optimization of cumulative rewards or reward design. Lastly, comparing the left and right parts of Fig. \\ref{fig::apple_vs_baselines}, \\textsc{apple} (cont.) is comparable with \\textsc{apple} (disc.) with low feedback resolutions and performs much better with continuous feedback, because of its larger model capacity in the parameter space. \n\n\\subsection{Physical Experiments}\nWe also apply \\textsc{apple} on a physical Jackal robot. In a highly constrained obstacle course (Fig. \\ref{fig::apple}), we first apply \\textsc{appli} which provides 5 sets of parameters (1 default $\\theta_1$ and 4 learned $\\theta_{2\\sim5}$). Then to train a discrete \\textsc{apple} policy which uses the same parameter library, one of the authors follows the robot autonomously navigating and uses an Xbox joystick to give binary evaluative feedback (instead of 3 feedback levels for easier collection) at 2Hz. The author aims at teaching \\textsc{apple} to reduce traversal time. \nTo reduce the burden of giving a large amount of feedback, the user is only requested to give negative feedback by pressing a button on the joystick when he thinks the robot's navigation performance is bad, while for other instances, positive feedback is automatically given.\nIn other words, we interpret the absence of human feedback to be the same as if the human had provided positive feedback.\nWhile this interpretation is not standard in the literature (and even undesirable at times \\cite{faulkner2018policy}), we found that it yielded good results for the application studied here. 
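The sparse-feedback convention above (button presses are negative; silence counts as positive) can be sketched as follows. The per-tick matching window is our own assumption; the text only fixes the 2 Hz feedback rate.

```python
def label_ticks(tick_times, press_times, window=0.25):
    """Return one binary label per feedback tick: 0 if a 'bad job' button
    press falls within `window` seconds of the tick, else 1 (absence of
    feedback interpreted as implicit positive feedback)."""
    labels = []
    for t in tick_times:
        negative = any(abs(t - p) <= window for p in press_times)
        labels.append(0 if negative else 1)
    return labels

ticks = [0.0, 0.5, 1.0, 1.5, 2.0]          # 2 Hz feedback ticks
presses = [0.9, 1.6]                        # user pressed "bad job" twice
print(label_ticks(ticks, presses))          # -> [1, 1, 0, 0, 1]
```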
Because \\textsc{applr} \\cite{xu2020applr} requires an infeasible amount of trial and error in the real world, it is not included as a baseline in the physical experiments.\n\nThe entire \\textsc{apple} training session lasts roughly 30 minutes, in which the robot navigates 10 trials in the environment shown in Fig. \\ref{fig::apple}. \\textsc{apple} learns in an online fashion with a $30\\%$ probability of random exploration. \nAfter the training, the learned \\textsc{apple} model is deployed in the same training environment. We compare \\textsc{apple} to \\textsc{appli} with the same sets of parameters and the confidence measure of context prediction~\\cite{wang2021appli}, and to the \\textsc{dwa} planner with static default parameters. Each experiment is repeated five times. The results are shown in Tab. \\ref{tab::physical}. \\textsc{apple} achieves the fastest average traversal time with the smallest variance in the training environment.\n\nTo test \\textsc{apple}'s generalizability, we also test \\textsc{apple} in an unseen environment (Fig. \\ref{fig::unseen}) and show the results in Tab. \\ref{tab::physical}. 
In the unseen environment, \\textsc{apple} has slightly increased variance, but still has the fastest average traversal time compared to the other two baselines.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{contents\/figures\/unseen.jpg}\n \\vspace{-10pt}\n \\caption{\\textsc{apple} Running in an Unseen Physical Environment}\n \\vspace{-10pt}\n \\label{fig::unseen}\n\\end{figure}\n\n\\begin{table}\n\\centering\n\\caption{Traversal Time in Training and Unseen Environment}\n\\begin{tabular}{cccc}\n\\toprule\n & \\textbf{Default} & \\textbf{\\textsc{appli}} & \\textbf{\\textsc{apple} (disc.)} \\\\ \n\\midrule\n\\textbf{Training} & 143.1$\\pm$20.0s & 79.8$\\pm$8.1s & 75.2$\\pm$4.1s \\\\\n\\textbf{Unseen} & 150.5$\\pm$24.0s & 86.4$\\pm$1.1s & 83.9$\\pm$4.6s \\\\\n\\bottomrule\n\\end{tabular}\n\\vspace{-15pt}\n\\label{tab::physical}\n\\end{table}\n\n\\section{INTRODUCTION}\n\\label{sec::intro}\n\n\\IEEEPARstart{M}{obile}\nrobot navigation is a well-studied problem in the robotics community. Many classical approaches have been developed over the last several decades and several of them have been robustly deployed on physical robot platforms moving in the real world~\\cite{quinlan1993elastic, fox1997dynamic}, with verifiable guarantees of safety and explainability.\n\nHowever, prior to deployment in a new environment, these approaches typically require parameter re-tuning in order to achieve robust navigation performance. For example, in cluttered environments, a low velocity and high sampling rate are necessary in order for the system to be able to generate safe and smooth motions, whereas in relatively open spaces, a large maximum velocity and relatively low sampling rate are needed in order to achieve optimal navigation performance. This parameter re-tuning requires robotics knowledge from experts who are familiar with the inner workings of the underlying navigation system, and may not be intuitive for non-expert users~\\cite{zheng2021ros}. 
Furthermore, using a single set of parameters assumes the same set will work well on average in different regions of a complex environment, which is often not the case. \n\nTo address these problems, Adaptive Planner Parameter Learning (\\textsc{appl}) is a recently-proposed paradigm that opens up the possibility of dynamically adjusting parameters to adapt to different regions, and enables non-expert users to fine-tune navigation systems through modalities such as teleoperated demonstration~\\cite{xiao2020appld} or corrective interventions~\\cite{wang2021appli}.\nThese interaction modalities require non-expert users to take full control of the moving robot during the entire navigation task, or, at least, when the robot suffers from poor performance. However, non-expert users who are inexperienced at controlling the robot may be unwilling or unable to take such responsibility due to the perceived risk of human error causing collisions.\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{contents\/figures\/apple.jpg}\n \\vspace{-15pt}\n \\caption{For non-expert users who are unable or unwilling to take control of the robot, evaluative feedback, e.g. {\\em good job} (green thumbs up) or {\\em bad job} (red thumbs down), is a more accessible human interaction modality, but still valuable for improving navigation systems during deployment. }\n \\label{fig::apple}\n \\vspace{-15pt}\n\\end{figure}\n\nFortunately, even non-expert users who are not willing to take control of the robot are typically still able to observe the robot navigating and provide real-time positive or negative assessments of the observed navigation behavior through {\\em evaluative feedback}. For example, even non-expert users know to provide negative feedback when a robot is performing poorly, e.g., getting stuck in highly constrained spaces~\\cite{xiao2020toward, liu2020lifelong} or driving unnecessarily slowly in open spaces~\\cite{xiao2020agile}. 
This more-accessible modality provides an interaction channel for a larger community of non-expert users with mobile robots (Fig. \\ref{fig::apple}).\n\nIn this work, we introduce a machine learning method that can leverage evaluative feedback in the context of autonomous navigation called \\emph{Adaptive Planner Parameter Learning from Evaluative Feedback} (\\textsc{apple}). Based on a parameter library~\\cite{xiao2020appld, wang2021appli} or a parameter policy~\\cite{xu2020applr}, \\textsc{apple} learns how to choose appropriate navigation planner parameters at each time step in order to adapt to different parts of the deployment environment. \nSpecifically, \\textsc{apple} treats the scalar human feedback as the value for the state-action pair in the Reinforcement Learning framework during training, in which the action is the parameter set to be used by the underlying navigation system. During deployment, \\textsc{apple} selects the parameters to maximize the expected human feedback value. \nWe implement \\textsc{apple} both in the Benchmarking Autonomous Robot Navigation (\\textsc{barn}) ~\\cite{perille2020benchmarking} environments and also in real-world, highly constrained obstacle courses. In both training and unseen environments, \\textsc{apple} is able to outperform the planner with default parameters, and even to improve over \\textsc{appl} variants learned from richer interaction modalities, such as teleoperated interventions. Our experimental results indicate that evaluative feedback is a particularly valuable form of human interaction modality for improving navigation systems during deployment.\n\n\n\\section{RELATED WORK}\n\\label{sec::related}\n\nIn this section, we review existing work on machine learning for mobile robot navigation, adaptive planner parameters, and learning from human evaluative feedback. 
\n\n\\subsection{Learning for Navigation}\nWhile autonomous navigation has been studied by the robotics community for decades, machine learning approaches have recently been extensively applied to this problem as well. Xiao, et al.~\\cite{xiao2020motion} presented a survey on using machine learning for motion control in mobile robot navigation: while the majority of learning approaches tackle navigation in an end-to-end manner~\\cite{bojarski2016end, pfeiffer2017perception}, it was found that approaches using learning in conjunction with other classical navigation components were more likely to have achieved better navigation performance.\nThese methods included those that learned sub-goals~\\cite{stein2018learning}, local planners~\\cite{gao2017intention, chiang2019learning, xiao2020toward, xiao2020agile, liu2020lifelong}, or planner parameters~\\cite{teso2019predictive, bhardwaj2020differentiable, binch2020context, xiao2020appld, wang2021appli, xu2020applr}. Learning methods have also enabled navigation capabilities that complement those provided in the classical navigation literature, including terrain-aware~\\cite{wigness2018robot, siva2019robot, kahn2021badgr} and social~\\cite{hart2020using, liang2020crowd, everett2018motion} navigation. \n\n\\textsc{apple} leverages the aforementioned hybrid learning and classical architecture, where the learning component only learns to select appropriate set of planner parameters, and interacts with the underlying classical navigation system. \n\n\\subsection{Adaptive Parameters for Classical Navigation} \nConsidering classical navigation systems' verifiable safety, explainability, and stable generalization to new environments, and the difficulty in fine-tuning those systems, learning adaptive planner parameters is an emerging paradigm of combining learning and planning. 
Examples include finding trajectory optimization coefficients using Artificial Neural Fuzzy Inference Improvement~\\cite{teso2019predictive}, optimizing two different sets of parameters for straight-line and U-turn scenarios with genetic algorithms~\\cite{binch2020context}, or designing novel systems that can leverage gradient descent to match expert demonstrations~\\cite{bhardwaj2020differentiable}. Recently, the \\textsc{appl} paradigm~\\cite{xiao2020appld, wang2021appli, xu2020applr} has been proposed, which further allows parameters to be appropriately adjusted ``on-the-fly'' during deployment, in order to adapt to different regions of a complex environment. \\textsc{appl} also learns from non-expert users using teleoperated demonstration~\\cite{xiao2020appld}, corrective interventions~\\cite{wang2021appli}, or trial-and-error in simulation~\\cite{xu2020applr}. \n\n\\textsc{apple} utilizes an accessible but sparse human interaction modality, evaluative feedback, which is suitable for non-expert users who are not able to take control of the robot. It is also suitable for scenarios where the extensive trial-and-error and handcrafted reward function required by Reinforcement Learning (RL)~\\cite{xu2020applr} are not feasible. 
Using \\textsc{apple}, navigation performance similar to, or even better than, that learned from richer interaction modalities can be achieved.\n\n\\subsection{Learning from Human Feedback}\nThe method we propose in this paper uses evaluative feedback from a human to drive a machine learning process that seeks to increase the performance of an autonomous navigation system.\nBecause evaluative feedback is a relatively easy signal for humans to provide, several methods have been proposed over the past several decades to allow machines to learn from such signals.\nBroadly speaking, most of these methods can be understood as trying to interpret the feedback signal in the context of the classical RL framework.\nFor example, the \\textsc{coach} framework \\cite{macglashan2017interactive} interprets evaluative feedback as the policy-dependent {\\em advantage}, i.e., it is assumed that the feedback indicates how much better or worse the agent's current behavior is compared to what the human currently expects the agent to do.\nThe \\textsc{tamer} framework \\cite{knox2009interactively}, on the other hand, can be thought of as interpreting evaluative feedback to be the {\\em value}, or expected payoff, of the current behavior if the agent were to act in the future in the way the human desires.\nYet other approaches interpret evaluative feedback directly as reward or some related statistic \\cite{isbell2001social, thomaz2006reinforcement, pilarski2011online}.\n\n\\textsc{apple} adopts a similar learning-from-feedback paradigm, but instead of taking actions as raw motor commands, \\textsc{apple}'s action space is the parameters used by the underlying navigation system. During training, \\textsc{apple} learns the value of state-action pairs based on scalar human feedback. During deployment, \\textsc{apple} selects the parameters that maximize the expected human feedback. 
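As a purely illustrative sketch of this value-style interpretation (our own simplification, not any of the cited implementations: the class name and the tabular representation are hypothetical), a table can treat each scalar feedback signal as a sample of the value of a (state, parameter-set) pair and act greedily on the running mean:

```python
import numpy as np

class FeedbackValueTable:
    """Toy value-style interpreter of evaluative feedback (TAMER-flavored):
    the scalar feedback for a (state, action) pair is treated directly as a
    sample of that pair's value, tracked as a running mean."""

    def __init__(self, n_states, n_actions):
        self.value = np.zeros((n_states, n_actions))
        self.count = np.zeros((n_states, n_actions))

    def give_feedback(self, state, action, e):
        # incremental running mean of the human's scalar feedback
        self.count[state, action] += 1
        self.value[state, action] += (e - self.value[state, action]) / self.count[state, action]

    def select(self, state):
        # greedy with respect to expected feedback, as in APPLE's deployment phase
        return int(np.argmax(self.value[state]))
```

After a few thumbs-up signals for one action in a given state, `select` returns that action for the state, mirroring the greedy parameter selection described above.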
\n\n\\section{APPROACH}\n\\label{sec::approach}\n\nIn this section, we introduce \\textsc{apple}, which has two novel features: (1) compared to learning from demonstration or interventions, which require human driving expertise in order to take full control of the moving robot, \\textsc{apple} requires instead just evaluative feedback that can be provided even by non-expert users; (2) in contrast to previous work \\cite{xiao2020appld, wang2020appli} that selects the planner parameter set based on how similar the deployment environment is to the demonstrated environment, \\textsc{apple} is based on the expected evaluative feedback, i.e., the actual navigation performance. \n\\textsc{apple}'s performance-based parameter policy has the potential to outperform previous approaches that are based on similarity.\n\n\\subsection{Problem Definition}\n\nWe denote a classical parameterized navigation system as $G: \\mathcal{X} \\times \\Theta \\rightarrow \\mathcal{A}$, where $\\mathcal{X}$ is the state space of the robot (e.g., goal, sensor observations), $\\Theta$ is the parameter space for $G$ (e.g., max speed, sampling rate, inflation radius), and $\\mathcal{A}$ is the action space (e.g., linear and angular velocities). During deployment, the navigation system repeatedly estimates state $x$ and takes action $a$ calculated as $a = G(x; \\theta)$. Typically, a default parameter set $\\bar{\\theta}$ is tuned by a human designer trying to achieve good performance in most environments. However, being good at everything often means being great at nothing: $\\bar{\\theta}$ usually exhibits suboptimal performance in some situations and may even fail (is unable to find feasible motions, or crashes into obstacles) in particularly challenging ones~\\cite{xiao2020appld}. 
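To make the notation concrete, the interface $a = G(x; \theta)$ can be sketched as below. This is a toy stand-in under our own assumptions, not the actual planner: the parameter names echo \textsc{dwa}-style knobs, and the controller body is a crude heading controller chosen only to show how the parameter set shapes the action:

```python
from dataclasses import dataclass

@dataclass
class PlannerParams:
    # a few DWA-style knobs; the values here are illustrative defaults
    max_vel_x: float = 0.5
    vx_samples: int = 6
    inflation_radius: float = 0.3

def G(x, theta: PlannerParams):
    """Stand-in for a parameterized planner G: X x Theta -> A.
    A crude proportional heading controller whose speed is capped by the
    current parameter set (purely illustrative)."""
    goal_angle = x["goal_angle"]  # relative local goal direction (rad)
    v = theta.max_vel_x * max(0.0, 1.0 - abs(goal_angle))  # slow down when turning
    w = 1.5 * goal_angle                                   # turn toward the goal
    return v, w
```

Swapping in a different `PlannerParams` instance changes the produced action for the same state, which is exactly the degree of freedom \textsc{apple} learns to exploit.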
\n\nTo mitigate this problem, \\textsc{apple} learns a parameter policy from human evaluative feedback with the goal of selecting the appropriate parameter set $\\theta$ (from either a discrete parameter set library or from a continuous full parameter space) for the current deployment environment.\nIn detail, a human can supervise the navigation system's performance at state $x$ by observing its action $a$ and giving corresponding evaluative feedback $e$.\nHere, the evaluative feedback can be either discrete (e.g., ``good job\"\/``bad job\") or continuous (e.g., a score ranging in $[0, 1]$).\nDuring feedback collection, \\textsc{apple} finds (1) a parameterized predictor $F_\\phi: \\mathcal{X} \\times \\Theta \\rightarrow \\mathcal{E}$ that predicts human evaluative feedback for each state-parameter pair $(x, \\theta)$, and (2) a parameterized parameter policy $\\pi_\\psi: \\mathcal{X} \\rightarrow \\Theta$ that determines the appropriate planner parameter set for the current state.\n\nBased on whether \\textsc{apple} chooses the parameter set from a library or the parameter space, we introduce the discrete and continuous parameter policies in the following two sections, respectively.\n\n\\subsection{Discrete Parameter Policy}\n\nIn some situations, the user may already have $K$ candidate parameter sets (e.g., the default set or sets tuned for special 
environments like narrow corridors, open spaces, etc.) which together make up a parameter library $\\mathcal{L} = \\{\\theta^i\\}_{i=1}^K$ (superscript $i$ denotes the index in the library). In this case, \\textsc{apple} uses the provided evaluative feedback $e$ in order to learn a policy that selects the most appropriate of these parameters given the state observation $x$.\n\nTo do so, we parameterize the feedback predictor $F_\\phi$ in a way similar to the value network in DQN \\cite{mnih2015humanlevel}, where the input is the observation $x$ and the output is $K$ predicted feedback values $\\{\\hat{e}^i\\}_{i=1}^K$, one for each parameter set $\\theta^i$ in the library $\\mathcal{L}$, as a prediction of the evaluative feedback a human user would give if the planner were using the respective parameter set at state $x$. We form a dataset for supervised learning, $\\mathcal{D} := \\{x_j, \\theta_j, e_j\\}_{j=1}^N$ ($\\theta_j \\in \\mathcal{L}$, subscript $j$ denotes the time step) using the evaluative feedback collected so far, and $F_\\phi$ is learned via supervised learning to minimize the difference between predicted feedback and the label,\n\\begin{equation}\n \\phi^* = \\argmin_\\phi \\mathop{\\mathbb{E}}_{(x_j, \\theta_j, e_j) \\sim \\mathcal{D}} \\ell(F_\\phi(x_j, \\theta_j), e_j)\n \\label{eq:critic_loss}\n\\end{equation}\nwhere $\\ell(\\cdot, \\cdot)$ is the binary cross entropy loss if the feedback $e_j$ is discrete (e.g., $e_j = 1$ for ``good job'', and $e_j = 0$ for ``bad job''), or the mean square error given continuous feedback.\n\nTo achieve the best possible performance, the parameter policy $\\pi(\\cdot|x)$ chooses the parameter set that maximizes the expected human feedback (the discrete parameter policy doesn't require any additional parameters beyond $\\phi$ for $F$, so the $\\psi$ is omitted here for simplicity). 
More specifically, \n\\begin{equation}\n \\pi(\\cdot|x) = \\argmax_{\\theta \\in \\mathcal{L}} F_{\\phi^*}(x, \\theta).\n \\label{eqn::discrete}\n\\end{equation}\n\nWhen comparing with RL, especially DQN, discrete \\textsc{apple} has a similar architecture and training objective. However, an important difference is that while RL optimizes future (discounted) cumulative reward, \\textsc{apple} greedily maximizes the current feedback. The reason is that we assume, while supervising the robot's actions, the human will not only consider the current results but also future consequences and give the feedback accordingly. This assumption is consistent with past systems such as \\textsc{tamer}~\\cite{knox2009interactively}. Hence, \\textsc{apple}'s greedy objective for human feedback still considers the goal of maximizing current and future performance.\n\n\\subsection{Continuous Parameter Policy}\n\nIf a discrete parameter library is not available or desired, \\textsc{apple} can also be used over continuous parameter spaces (e.g., deciding the max speed from $[0.1, 2]\\ \\mathrm{m\/s}$). 
In this scenario, we assume the human will provide continuous evaluative feedback, as discrete feedback is not informative enough to learn a continuous policy efficiently.\n\nIn this setting, we parameterize the parameter policy $\\pi_\\psi$ and the feedback predictor $F_\\phi$ in the actor-critic style.\nWith the collected evaluative feedback $ \\mathcal{D} := \\{x_j, \\theta_j, e_j\\}_{j=1}^N$, the training objective of $F_\\phi$ is still to minimize the difference between predicted and collected feedback, as specified by Eqn. (\\ref{eq:critic_loss}). For the parameter policy $\\pi_\\psi$, beyond choosing the action that maximizes expected feedback, its training objective is augmented by maximizing the entropy of policy $\\mathcal{H}(\\pi_\\psi(\\cdot|x))$ at state $x$. Using the same entropy regularization as Soft Actor Critic (SAC)~\\cite{haarnoja2018soft}, $\\pi_\\psi$ favors more stochastic policies, leading to better exploration during training:\n\\begin{equation}\n \\psi^* = \\argmin_\\psi \\mathop{\\mathbb{E}}_{\\substack{x_j \\in \\mathcal{D} \\\\ \\tilde{\\theta}_j \\sim \\pi_\\psi(\\cdot | x_j)}} \\left[-F_\\phi(x_j, \\tilde{\\theta}_j) + \\alpha \\log \\pi_\\psi(\\tilde{\\theta}_j | x_j) \\right],\n \\label{eqn::continuous}\n\\end{equation}\nwhere $\\alpha$ is the temperature controlling the importance of the entropy bonus and is automatically tuned as in SAC~
\\cite{haarnoja2018soft_application}.\n\n\n\\subsection{Deployment}\nDuring deployment, we measure the state $x_t$, use the parameter policy to obtain a set of parameters $\\theta_t \\sim \\pi_\\psi^*(\\cdot | x_t)$ at each time step, and apply it to the navigation planner $G$.\n\n\\section{EXPERIMENTS}\n\\label{sec::experiments}\nIn our experiments, we aim to show that \\textsc{apple} can improve navigation performance by learning from evaluative feedback, in contrast to a teleoperated demonstration or a few corrective interventions, both of which require the non-expert user to take control of the moving robot. We also show \\textsc{apple}'s generalizability to unseen environments. We implement \\textsc{apple} on a ClearPath Jackal ground robot in BARN~\\cite{perille2020benchmarking} with 300 navigation environments randomly generated using Cellular Automata, and in two physical obstacle courses. \n\n\\subsection{Implementation}\n\nThe Jackal is a differential-drive robot equipped with a Velodyne LiDAR that we use to obtain a 720-dimensional planar laser scan with a 270$^\\circ$ field of view, denoted as $l_t$.\nThe robot uses the Robot Operating System \\texttt{move\\textunderscore base} navigation stack with Dijkstra's global planner and the default \\textsc{dwa} local planner~\\cite{fox1997dynamic}.\nFrom the global planner, we query the relative local goal direction $g_t$ (in angle) as the averaged tangential direction of the first 0.5m global path. 
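An averaged tangential direction of this kind might be computed as follows; this is our own reconstruction under the assumption that the global path is available as an ordered list of waypoints in the robot frame (the actual query code is not shown in the paper):

```python
import numpy as np

def local_goal_direction(path, horizon=0.5):
    """Averaged tangential direction (radians, robot frame) of the first
    `horizon` meters of a global path given as an (N, 2) array of
    waypoints starting at the robot; a sketch of the g_t computation."""
    path = np.asarray(path, dtype=float)
    seg = np.diff(path, axis=0)                # per-segment displacement
    lengths = np.linalg.norm(seg, axis=1)
    cum = np.cumsum(lengths)
    keep = cum <= horizon                      # segments within the horizon
    if not keep.any():
        keep[0] = True                         # always use at least one segment
    headings = np.arctan2(seg[keep, 1], seg[keep, 0])
    # average angles via unit vectors to avoid wrap-around issues
    return float(np.arctan2(np.sin(headings).mean(), np.cos(headings).mean()))
```

Averaging unit vectors rather than raw angles keeps the result stable when segment headings straddle the $\pm\pi$ boundary.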
The state space of the robot is the combination of the laser scan and local goal $x_t = (l_t, g_t)$. The parameter space consists of the 8 parameters of the \\textsc{dwa} local planner as described in Tab. \\ref{tab::jackal_parameters}, and the action space is the linear and angular velocity of the robot, $a_t = (v_t, \\omega_t)$.\n\nFor discrete \\textsc{apple}, we construct the parameter library shown in Tab. \\ref{tab::jackal_parameters} with the default \\textsc{dwa} parameter set $\\theta_1$ and parameter sets $\\theta_{2 \\sim 7}$ learned in the \\textsc{appli} work~\\cite{wang2020appli}. Here we use parameter sets learned in previous work for simplicity. Without prelearned parameter sets, one can obtain a library via coarse tuning to create various driving modes (e.g. increase \\emph{max\\_vel\\_x} to create an aggressive mode). 
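The discrete training and selection rules (predictor trained with binary cross entropy, then greedy selection over the library) can be sketched end-to-end. This toy reconstruction uses one linear logistic model per library entry in place of the paper's two-hidden-layer MLP, and the class and method names are ours:

```python
import numpy as np

class DiscreteApple:
    """Sketch of discrete APPLE: a feedback predictor F_phi maps a state x
    to K predicted feedback values (one per library entry), trained with
    binary cross entropy on (x, theta_index, e) tuples; the parameter
    policy greedily picks argmax_i F_phi(x)[i]."""

    def __init__(self, state_dim, k, lr=0.5):
        self.W = np.zeros((k, state_dim))
        self.b = np.zeros(k)
        self.lr = lr

    def predict(self, x):
        z = self.W @ x + self.b
        return 1.0 / (1.0 + np.exp(-z))        # predicted feedback in (0, 1)

    def update(self, x, i, e):
        # gradient of BCE w.r.t. the logit is (prediction - label)
        p = self.predict(x)[i]
        self.W[i] -= self.lr * (p - e) * x
        self.b[i] -= self.lr * (p - e)

    def select(self, x):
        # greedy parameter-set index, as in the discrete policy equation
        return int(np.argmax(self.predict(x)))
```

After repeated positive feedback for one library entry in a given state and negative feedback for another, `select` converges to the positively rated entry.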
For continuous \\textsc{apple}, the parameter ranges for the parameter policy $\\pi_\\psi$ to select from are listed in the same table.\n\n\\begin{table}[h]\n \\caption{Parameter Library and Range: \\\\ \\emph{max\\_vel\\_x} \\textnormal{(v)}, \\emph{max\\_vel\\_theta} \\textnormal{(w)}, \\emph{vx\\_samples} \\textnormal{(s)}, \\emph{vtheta\\_samples} \\textnormal{(t)}, \\emph{occdist\\_scale} \\textnormal{(o)}, \\emph{pdist\\_scale} \\textnormal{(p)}, \\emph{gdist\\_scale \\textnormal{(g)}}, \\emph{inflation\\_radius \\textnormal{(i)}}}\n \\label{tab::jackal_parameters}\n \\centering\n \\small\n \\begin{tabular}{lrrrrrrrr}\n \\toprule\n & v & w & s & t & o & p & g & i \\\\\n \\midrule\n $\\theta_1$ & 0.50 & 1.57 & 6 & 20 & 0.10 & 0.75 & 1.00 & 0.30 \\\\\n $\\theta_2$ & 0.26 & 2.00 & 13 & 44 & 0.57 & 0.76 & 0.94 & 0.02 \\\\\n $\\theta_3$ & 0.22 & 0.87 & 13 & 31 & 0.30 & 0.36 & 0.71 & 0.30 \\\\\n $\\theta_4$ & 1.91 & 1.70 & 10 & 47 & 0.08 & 0.71 & 0.35 & 0.23 \\\\\n $\\theta_5$ & 0.72 & 0.73 & 19 & 59 & 0.62 & 1.00 & 0.32 & 0.24 \\\\\n $\\theta_6$ & 0.37 & 1.33 & 9 & 6 & 0.95 & 0.83 & 0.93 & 0.01 \\\\\n $\\theta_7$ & 0.31 & 1.05 & 17 & 20 & 0.45 & 0.61 & 0.22 & 0.23 \\\\\n \\midrule\n $\\min$ & 0.2 & 0.31 & 4 & 8 & 0.10 & 0.10 & 0.01 & 0.10 \\\\\n $\\max$ & 2.0 & 3.14 & 20 & 40 & 1.50 & 2.00 & 1.00 & 0.60 \\\\\n \\bottomrule\n \\end{tabular}\n\\end{table}\n\nImplementation-wise, for discrete \\textsc{apple}, $F_\\phi(x, \\theta)$ is a fully-connected neural network with 2 hidden layers of 128 neurons, taking the 721-dimensional $x_t$ as input and outputting the 7 predicted feedback signals $\\hat{e}_{t, 1 \\sim 7}$ for $\\theta_{1 \\sim 7}$ respectively. 
\\DIFdelbegin \\DIFdel{For parameter }\\DIFdelend \\DIFaddbegin \\DIFadd{Parameter }\\DIFaddend policy $\\pi_\\psi$ \\DIFdelbegin \\DIFdel{, it }\\DIFdelend uses $\\epsilon$-greedy exploration with $\\epsilon$ decreasing from $0.3$ to $0.02$ during the first half of the training. For continuous \\textsc{apple}, $F_\\phi(x, \\theta)$ shares the same architecture as discrete \\textsc{apple}, except for different input (concatenation of $x_t$ and $\\theta_t$) and output (scalar $\\hat{e}_t$). The parameter policy $\\pi_\\psi$ also uses the same architecture, mapping $x_t$ to $\\theta_t$.\n\nTo evaluate the performance of \\textsc{apple}, we use \\DIFdelbegin \\textbf{\\textsc{\\DIFdel{appli}}%\n} %\n\\DIFdelend \\DIFaddbegin \\textsc{\\DIFadd{appli}} \\DIFaddend with the same parameter library in \\DIFdelbegin \\DIFdel{Table }\\DIFdelend \\DIFaddbegin \\DIFadd{Tab. }\\DIFaddend \\ref{tab::jackal_parameters} and the \\DIFdelbegin \\textbf{\\DIFdel{Default}} %\n\\DIFdelend \\DIFaddbegin \\DIFadd{Default }\\DIFaddend \\textsc{dwa} planner as two baselines. Since \\DIFdelbegin \\DIFdel{it is pointed out that }\\DIFdelend \\textsc{appld} does not generalize well without a confidence-based context predictor~\\cite{wang2020appli} and \\textsc{appli} can therefore outperform \\textsc{\\DIFdelbegin \\DIFdel{appli}\\DIFdelend \\DIFaddbegin \\DIFadd{appld}\\DIFaddend }, we do not include \\textsc{appld} as one of the baselines. 
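The exploration schedule can be sketched as follows. The text states only the endpoints ($\epsilon$ from $0.3$ to $0.02$ over the first half of training), so the linear decay shape below is an assumption.

```python
def epsilon(step, total_steps, eps_start=0.3, eps_end=0.02):
    """Assumed linear epsilon decay over the first half of training,
    held constant at eps_end afterwards."""
    half = total_steps / 2
    if step >= half:
        return eps_end
    return eps_start + (eps_end - eps_start) * step / half
```

With probability `epsilon(step, total_steps)` the policy samples a random parameter set; otherwise it acts greedily on the predicted feedback.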
Despite \\DIFaddbegin \\DIFadd{using }\\DIFaddend the same library, \\textsc{appli} chooses the parameter set based on the similarity between \\DIFaddbegin \\DIFadd{the }\\DIFaddend current observation and \\DIFaddbegin \\DIFadd{the }\\DIFaddend demonstrated environments, while discrete \\textsc{apple} uses the expected feedback.\n\n\\subsection{Simulated Experiments}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{contents\/figures\/gazebo.jpg}\n \\caption{Simulated Environments in BARN Dataset with Low, Medium, and High Difficulty Levels\n }\n \\label{fig::simulated_envs}\n\\end{figure}\n\nWe begin \\DIFdelbegin \\DIFdel{with }\\DIFdelend \\DIFaddbegin \\DIFadd{by }\\DIFaddend testing \\textsc{apple} on the BARN dataset, with simulated evaluative feedback generated by an oracle (proxy human). The benchmark dataset consists of 300 simulated navigation environments ranging from easy ones with plenty of open space to challenging ones where the robot needs to get through dense obstacles. Navigation trials in three example environments with low, medium, and high difficulty levels are shown in Fig. \\ref{fig::simulated_envs}. We randomly select 250 environments for training \\textsc{apple} and hold out the remaining 50 environments as the test set.\n\nFor the simulated feedback, we use the projection of the robot's linear velocity along the local goal direction, i.e., $e_t = v_t \\cdot \\cos(g_t)$, which greedily encourages the robot to move along the global path as fast as possible. Note that this simulated evaluative feedback is suboptimal, as it does not consider future states, and we expect actual human evaluative feedback would be more accurate. For example, when the robot is leaving an open space and is about to enter a narrow exit, \\DIFaddbegin \\DIFadd{a }\\DIFaddend human would expect the robot to slow down to get through the exit smoothly, but the simulated oracle still encourages the robot to drive fast. 
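The simulated oracle is simple enough to state directly in code, following $e_t = v_t \cdot \cos(g_t)$ with $g_t$ the local goal angle in radians:

```python
import math

def oracle_feedback(v_t, g_t):
    """Simulated evaluative feedback: the projection of the robot's
    linear velocity onto the local goal direction."""
    return v_t * math.cos(g_t)
```

Driving fast toward the goal yields high feedback, while moving perpendicular to (or away from) the goal yields zero or negative feedback, which is exactly the greedy behaviour discussed above.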
The oracle provides its evaluative feedback at 4Hz, while \\textsc{apple} and \\textsc{appli} dynamically adjust the parameter set for the \\textsc{dwa} planner at the same frequency.\n\nAfter training in 250 environments with a total of 12.5M feedback \\DIFaddbegin \\DIFadd{signals }\\DIFaddend collected, we evaluate \\textsc{apple} with discrete and continuous parameter \\DIFdelbegin \\DIFdel{policy }\\DIFdelend \\DIFaddbegin \\DIFadd{policies }\\DIFaddend (denoted as \\DIFdelbegin \\textbf{\\textsc{\\DIFdel{apple}} %\n\\DIFdel{(disc.)}} %\n\\DIFdel{and }\\textbf{\\textsc{\\DIFdel{apple}} %\n\\DIFdel{(cont.)}}%\n\\DIFdelend \\DIFaddbegin \\textsc{\\DIFadd{apple}} \\DIFadd{(disc.) and }\\textsc{\\DIFadd{apple}} \\DIFadd{(cont.)}\\DIFaddend ), as well as two baselines, on the 50 test environments by measuring the traversal time for 20 runs per environment. We then conduct a pairwise t-test for all methods in order to compute the percentage of environments in which one method (denoted as Method 1) is significantly worse ($p<0.05$) than another (denoted as Method 2). For better illustration, we order the methods by their performance and show the pairwise comparison in Tab. \\ref{tab::similation_results}.\n\n\\begin{table}\n\\vspace{0.25cm}\n\\centering\n\\caption{Percentage of Test Simulation Environments that Method 1 is Significantly Worse than Method 2 in Terms of Traversal Time\\\\\n\\scriptsize (Methods are listed in order of increasing performance. 
The best method when compared with Method 1 is shown in bold)}\n\\begin{tabular}{c | c c c c}\n\\hline\n \\diagbox{Method 1}{Method 2} & \\textbf{Default} & \\textbf{\\textsc{appli}} & \\thead{\\textbf{\\textsc{apple}} \\\\ \\textbf{(disc.)}} & \\thead{\\textbf{\\textsc{apple}} \\\\ \\textbf{(cont.)}}\\\\\n\\hline\n\\textbf{Default} & 0 & 23 & \\textbf{33} & \\textbf{33} \\\\\n\\textbf{\\textsc{appli}} & 6 & 0 & \\textbf{29} & \\textbf{29} \\\\\n\\textbf{\\textsc{apple} (disc.)} & 2 & 4 & 0 & \\textbf{27} \\\\\n\\textbf{\\textsc{apple} (cont.)} & 4 & 4 & 10 & 0 \\\\\n\\hline\n\\end{tabular}\n\\label{tab::similation_results}\n\\end{table}\n\nDespite learning from suboptimal evaluative feedback, \\textsc{apple} (disc.) and \\textsc{apple} (cont.) still outperform the Default \\textsc{dwa} and \\textsc{appli} in $33\\%$ and $29\\%$ of test environments respectively, and they only perform significantly worse in a few cases. \\DIFdelbegin \\DIFdel{This proves }\\DIFdelend \\DIFaddbegin \\DIFadd{These results demonstrate }\\DIFaddend the advantage of \\textsc{apple} over \\textsc{appli}, which \\DIFdelbegin \\DIFdel{is selecting }\\DIFdelend \\DIFaddbegin \\DIFadd{selects }\\DIFaddend the parameter set with a performance-based predictor \\DIFaddbegin \\DIFadd{(Eqns. \\ref{eqn::discrete} and \\ref{eqn::continuous}) }\\DIFaddend rather than a similarity-based predictor. \\DIFdelbegin \\DIFdel{For self-comparison, }\\DIFdelend \\textsc{apple} (cont.) is much better than \\textsc{apple} (disc.), because of its larger model capacity in the parameter space.\n\n\n\\subsection{Physical Experiments}\nWe also apply \\textsc{apple} on a physical Jackal robot. \\DIFdelbegin \\DIFdel{One of the authors follows the robot autonomously navigating }\\DIFdelend \\DIFaddbegin \\DIFadd{In }\\DIFaddend a highly-constrained obstacle course (Fig. \\ref{fig::apple})\\DIFaddbegin \\DIFadd{, we first apply }\\textsc{\\DIFadd{appli}} \\DIFadd{which provides 5 sets of parameters (1 default and 4 learned). 
Then to train }\\textsc{\\DIFadd{apple}} \\DIFadd{which uses the same parameter library, one of the authors follows the robot autonomously navigating }\\DIFaddend and uses an Xbox joystick to give evaluative feedback at 2Hz. To reduce the burden of giving a large amount of feedback, the user is only requested to give negative feedback by pressing a button on the joystick when he thinks the robot's navigation performance is bad, while for other instances, positive feedback is automatically given.\nIn other words, we interpret the absence of human feedback to be the same as if the human had provided positive feedback.\nWhile this interpretation is not standard in the literature (and even undesirable at times \\cite{faulkner2018policy}), we found that it yielded good results for the application studied here.\n\nThe entire \\DIFaddbegin \\textsc{\\DIFadd{apple}} \\DIFaddend training session lasts roughly 30 minutes, in which the robot navigates 10 trials in the environment shown in Fig. \\ref{fig::apple}. \\textsc{apple} learns in an online fashion with $30\\%$ probability of random exploration. \n\nAfter the training, the learned \\textsc{apple} model is deployed in the same training environment. We compare \\textsc{apple} to \\textsc{appli} with the same \\DIFdelbegin \\DIFdel{four sets of planner }\\DIFdelend \\DIFaddbegin \\DIFadd{sets of }\\DIFaddend parameters and the confidence measure \\DIFaddbegin \\DIFadd{of context prediction}\\DIFaddend ~\\cite{wang2020appli}, and the DWA planner with static default parameters. Each experiment is repeated five times. The results are shown in Tab. \\ref{tab::physical}. \\textsc{apple} achieves the fastest average traversal time with \\DIFaddbegin \\DIFadd{the }\\DIFaddend smallest variance in the training environment.\n\nTo evaluate \\textsc{apple}'s generalizability, we also test it in an unseen environment (Fig. 
\\ref{fig::unseen}) and show the results in \\DIFdelbegin \\DIFdel{Table }\\DIFdelend \\DIFaddbegin \\DIFadd{Tab. }\\DIFaddend \\ref{tab::physical}. In the unseen environment, \\textsc{apple} has slightly increased variance, but still has the fastest average traversal time compared to the other two baselines.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{contents\/figures\/unseen.jpg}\n \\caption{\\textsc{apple} Running in an Unseen Physical Environment}\n \\label{fig::unseen}\n\\end{figure}\n\n\\begin{table}\n\\centering\n\\caption{Traversal Time in Training and Unseen Environment}\n\\begin{tabular}{cccc}\n\\toprule\n & \\textbf{Default} & \\textbf{\\textsc{appli}} & \\textbf{\\textsc{apple} (disc.)} \\\\ \n\\midrule\n\\textbf{Training} & 143.1$\\pm$20.0s & 79.8$\\pm$8.1s & 75.2$\\pm$4.1s \\\\\n\\textbf{Unseen} & 150.5$\\pm$24.0s & 86.4$\\pm$1.1s & 83.9$\\pm$4.6s \\\\\n\\bottomrule\n\\end{tabular}\n\\label{tab::physical}\n\\end{table}\n\n\\section{CONCLUSIONS}\n\\label{sec::conclusions}\n\nIn this work, we introduce \\textsc{apple}, \\emph{Adaptive Planner Parameter Learning from Evaluative Feedback}. In contrast to most existing end-to-end machine learning for navigation approaches, \\textsc{apple} utilizes existing classical navigation systems and inherits all their benefits, such as safety and explainability. Furthermore, instead of requiring a full expert demonstration or a few corrective interventions that \\DIFdelbegin \\DIFdel{needs the users }\\DIFdelend \\DIFaddbegin \\DIFadd{need the user }\\DIFaddend to take full control of the robot, \\textsc{apple} just needs evaluative feedback as simple as ``good job\" or ``bad job\" that can be easily collected from non-expert users. 
Moreover, compared with \\textsc{appli} which \\DIFdelbegin \\DIFdel{select }\\DIFdelend \\DIFaddbegin \\DIFadd{selects }\\DIFaddend the parameter set based on the similarity with demonstrated environments, \\textsc{apple} \\DIFdelbegin \\DIFdel{achieve }\\DIFdelend \\DIFaddbegin \\DIFadd{achieves }\\DIFaddend better generalization by selecting the parameter set with a performance-based criterion, i.e., the expected evaluative feedback. We show \\textsc{apple}'s performance improvement with simulated and real human feedback, as well as its generalizability in both 50 unseen simulated environments and an unseen physical environment. While \\DIFdelbegin \\DIFdel{we can }\\DIFdelend \\DIFaddbegin \\DIFadd{in this paper we }\\DIFaddend only learn a continuous parameter policy from continuous evaluative feedback\\DIFdelbegin \\DIFdel{now}\\DIFdelend , an interesting direction for future work is to learn \\DIFdelbegin \\DIFdel{it }\\DIFdelend \\DIFaddbegin \\DIFadd{one }\\DIFaddend from discrete feedback, which can be augmented with instructive guidance (e.g. ``bad job\" augmented with ``should increase the velocity\"). \\DIFaddbegin \\DIFadd{In this paper, we use relatively dense feedback signals from the human user in the physical experiments and much denser simulated feedback signals in the simulated experiments to reduce the amount of time needed to train a good }\\textsc{\\DIFadd{apple}} \\DIFadd{policy. These dense feedback signals may not always be practical; for example, the user may not always be paying attention. Therefore another important direction for future investigation is to study how little feedback is needed to yield good performance.\n}\n\n\\DIFaddend \n\n\n\n\\section*{\\DIFdelbegin \\DIFdel{ACKNOWLEDGMENT}\\DIFdelend \\DIFaddbegin \\DIFadd{ACKNOWLEDGMENTS}\\DIFaddend }\nThis work has taken place in the Learning Agents Research\nGroup (LARG) at the Artificial Intelligence Laboratory, The University\nof Texas at Austin. 
LARG research is supported in part by grants from\nthe National Science Foundation (CPS-1739964, IIS-1724157,\nNRI-1925082), the Office of Naval Research (N00014-18-2243), Future of\nLife Institute (RFP2-000), Army Research Office (W911NF-19-2-0333),\nDARPA, Lockheed Martin, General Motors, and Bosch. The views and\nconclusions contained in this document are those of the authors alone.\nPeter Stone serves as the Executive Director of Sony AI America and\nreceives financial compensation for this work. The terms of this\narrangement have been reviewed and approved by the University of Texas\nat Austin in accordance with its policy on objectivity in research.\n\n\\bibliographystyle{IEEEtran}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nA central task in NLP is the generation of word embeddings to encode the meaning of words and their relationships in a vector space. This task is usually performed by a self-supervised Machine Learning (ML) model such as Word2Vec~\\cite{w2v}, ELMo~\\cite{elmo} or BERT~\\cite{bert}, with access to a large corpus of documents as input. These representations can then be used to perform advanced analytics on textual data. The larger and more complete the corpus is, the more accurate the representations will be.\n\nAs such, it can be useful for multiple organizations to collaborate with each other, each providing access to their corpora, in order to obtain the best results. However, different organizations typically cannot easily share their data, as they have to protect the privacy of their users and the details of their internal operations, or might be bound by external laws preventing the sharing of the data. One way organizations could overcome these issues is by employing the Federated Learning protocol~\\cite{fl} to generate a global model without sharing any data. 
It is therefore fundamental to assess the performance, quality and privacy preservation characteristics of such an approach.\n\n\\subsection{Distributed vs data-private massively-distributed approaches}\n\nDatacenter-scale ML algorithms work with vast amounts of data, allowing the training of large models, exploiting multiple machines and multiple GPUs per machine. Currently, GPUs offer enough power to satisfy the needs of state-of-the-art models. However, this exposes another important aspect for consideration: data privacy. In recent years, users and governments have started to be aware of this issue, publishing new and stricter regulations, such as the GDPR~\\cite{EUdataregulations2018}. Companies started making efforts to shield themselves from any security leak that could happen in a centralised system. This created a need to move the research towards distributed architectures where the data is not gathered by a central entity. \n\nThe fast development of smart devices, the growth of their computational power and of fast Internet connections, like 4G and 5G, enable new approaches that exploit them to train distributed models. This solution does not yet offer the same scale of resources as a datacenter, but the research and development of edge devices is making it feasible.\n\nFor these reasons, researchers are exploring the possibilities of different massively-distributed training designs. These new designs should offer scalability, ensure data privacy and reduce the amount of data transferred over the network. The main massively-distributed approach to large-scale training that fulfills all these requirements is Federated Learning \\cite{fl}.\n\n\\subsection{Federated Learning in a small collaborative NLP scenario}\n\nFederated learning is frequently applied in a commercial, global-scale setting as an on-device learning framework. 
In this common scenario, many small devices, like smartphones, collaborate and grant access to their data, typically small due to storage limitations, to train a higher-quality model. These models are usually NLP applications that make suggestions \\cite{fl_1} or predict user behaviour \\cite{fl_2}. \n\nHowever, Federated Learning could be applied in different contexts. This paper shifts the focus to address a new scenario where the users are a small number of large organizations with access to large corpora, which cannot be shared or centralised. These organizations are willing to cooperate in order to have access to a larger corpus with diverse topics while complying with very strict data privacy policies. A practical example could be a group of government agencies, each of which has access only to sensitive documents in a specific domain (e.g. taxes), which alone would not be sufficient for high-quality training.\n\n\\section{Federated Word2Vec}\n\nFederated Learning addresses the privacy concerns as the data is not shared between the organizations. It stays in the node of the owner and the information transferred through the network is only the gradient of the model. It also avoids the expensive transfer of the training data, replacing it with the repeated transfer of gradients, the total size of which is, generally, less than that of the dataset. Even if repeated gradient exchanges were to surpass the size of the dataset, their transfer would be spread over a long time and divided into smaller batches, so it would not delay the training as much as having to send a huge dataset before starting. The only common point that all the nodes share in this architecture is the existence of a central node which oversees the training process, directing the data transfers and merging the contributions of each node. Having the central node can facilitate the inclusion of additional safety measures to shield the training process from malicious attacks \\cite{secure_fl}. 
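One round of the gradient exchange described above can be sketched as follows. This is an illustrative sketch of gradient averaging in the spirit of FederatedSGD \cite{fl}; the function and variable names are ours, not from the paper.

```python
import numpy as np

def federated_sgd_step(weights, node_gradients, lr=0.05):
    """One round: the central node averages the gradients reported by
    the participant nodes and applies a single update to the shared
    model, which is then broadcast back to the nodes."""
    avg_grad = np.mean(node_gradients, axis=0)
    return weights - lr * avg_grad

# Two nodes report gradients computed on their private data.
w = np.zeros(3)
grads = [np.array([1.0, 0.0, 2.0]), np.array([3.0, 0.0, 0.0])]
w = federated_sgd_step(w, grads, lr=0.1)   # average gradient is [2, 0, 1]
```

Only gradients cross the network; the raw training text never leaves each node.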
\n\nIn Federated Word2Vec, each organization owns a private dataset, which means that words that appear in the corpus of one organization may not be present in the one of another. This is an issue, as the input vocabulary must be common to all local models so that the gradients can be aggregated. Preserving the privacy of the content of the text is very important, so we overcome this issue through a strategy in which all participants agree on a common global vocabulary. All must agree on a fixed vocabulary size $N$ and a minimum threshold of occurrences $T$. Each participant must provide a list of their top $N$ words that appear in their respective texts surpassing $T$ occurrences. The privacy is preserved as the organizations only share a set of isolated, unsorted words. However, the question arises of how to merge these sets. There are two operations that can be applied to produce the final vocabulary: intersection and union. We use, and recommend to use, the union operation. The final vocabulary is larger than the initial size $N$, but all organizations keep their relevant words. Although this approach requires more time to converge, since many words appear only in certain datasets, the meaning captured for these words and the knowledge returned to the participants are enriched.\n\nOnce the participants receive the commonly agreed vocabulary, the training process can start, following the FederatedSGD algorithm from \\cite{fl}. The gradient is transferred from all the external nodes to the main node in each iteration. The average gradient is calculated and transferred back to perform the updating process.\n\n\\section{Experiments}\n\n\\subsection{Data collection}\n\nWe generated topic-related datasets of different sizes, collected from Wikipedia articles. 
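The vocabulary-agreement step described above can be sketched as follows. The helper names are ours, and whether the threshold $T$ is strict (more than $T$) or inclusive (at least $T$) is an assumption; the sketch uses an inclusive threshold.

```python
from collections import Counter

def local_vocab(corpus_tokens, n, t):
    """One organization's contribution: its top-n words that occur at
    least t times in its private corpus (only the word set is shared)."""
    counts = Counter(corpus_tokens)
    frequent = [w for w, c in counts.most_common() if c >= t]
    return set(frequent[:n])

def global_vocab(local_vocabs):
    """Union of the per-organization word sets; no counts or text are
    exchanged, only isolated, unsorted words."""
    out = set()
    for v in local_vocabs:
        out |= v
    return sorted(out)
```

With the union operation, words unique to one organization (e.g. domain-specific terms) survive into the global vocabulary, at the cost of a vocabulary larger than $N$.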
Two Wikipedia dumps were downloaded to satisfy the aforementioned dataset characteristics: a partial dump with a compressed size of 180 MB to simulate small organizations with short corpora; and a 16 GB compressed file with all the text content published in Wikipedia. From the whole Wikipedia dump, the Wikipedia Extractor script \\cite{extractor} is used with a tag to filter categories and prepare 5 different datasets divided by topic. The chosen topics are: biology, history, finance, geography, and sports. Although the themes are quite specific, some articles can appear in more than one dataset because of the distribution of the Wikipedia tree of categories. So, if an article is tagged with the biology category, it is included in the dataset of biological content. In order to simulate a larger number of organizations, every topic is split between two organizations.\n\n\\subsection{Setup}\n\nThe simulation is performed sequentially on a single machine and with a single GPU, an Nvidia Quadro RTX 5000. This limits the possibility to study the influence of the network, which is thus not covered in this work. The hyperparameters were set to sensible values based on existing literature and should provide a good compromise between training speed and quality. We use a \\textbf{fixed batch size} of 2,048 samples. It is important to note that one iteration of Federated Word2Vec processes as much data as $N$ iterations of centralized Word2Vec, because each of the $N$ nodes processes one batch in parallel during each iteration; in our study $N$ is equal to 10 nodes. The \\textbf{embeddings size} is fixed to 200. The number of \\textbf{negative samples} per batch is 64, a small amount of negative samples, compared to the total batch size, but sufficient to achieve good results \\cite{w2v}. 
The \\textbf{vocabulary size} is 200,000 unique words, with a minimum threshold of 10 occurrences.\n\n\\section{Results}\n\n\\subsection{Proving convergence of the model with small datasets}\n\nFigure \\ref{fig:val_base} compares the validation loss of Federated Word2Vec with that of a baseline, centralized implementation. In order to compare the two models when both have processed the same amount of data, Federated Word2Vec is stopped at epoch 70, which is iteration 500,000.\n\n\\begin{figure}[!ht]\n \\begin{center}\n \\includegraphics[width=1\\linewidth]{base_together_scale.png}\n \\end{center}\n \\caption{On the left, validation loss per epoch in a full execution of Word2Vec. On the right, validation loss per epoch of the first 500,000 iterations of Federated Word2Vec. The red lines represent the average of the validation loss calculated by aggregating all previous values from each epoch. Y-axis is in logarithmic scale.}\n \\label{fig:val_base}\n\\end{figure}\n\n\\begin{figure}[!ht]\n \\begin{center}\n \\includegraphics[width=1\\linewidth]{bse_iterations.JPG}\n \\end{center}\n \\caption{On the left, validation loss per iteration in a full execution of Word2Vec. On the right, validation loss per iteration in a full execution of Federated Word2Vec. The red lines represent the average of the validation loss calculated by aggregating all previous values from each epoch. Y-axis is in logarithmic scale.}\n \\label{fig:val_base_1}\n\\end{figure}\n\nThe loss of Word2Vec presents a value of ${\\small\\sim} 10^{4}$. It is stable, but with a small descending trend. On the other hand, although Federated Word2Vec does not reach the same loss (it is 10 times greater), its trend is clearly decreasing. To check if the trend continues, Figure \\ref{fig:val_base_1} illustrates the validation loss in terms of iterations for the full execution, 2 million iterations. The loss keeps decreasing until 1 million iterations, when it stabilises. 
Overall, the two models provide very similar results. Centralized Word2Vec has a faster initial convergence; however, this might be overcome by adapting the hyperparameters to the distributed setting, for example with learning rate scaling \\cite{learning_rate_scaling}.\n\n\\subsection{Proving convergence of the model with large datasets}\n\nThe training of Federated Word2Vec with a large dataset presents improved results compared to the previous graphs. Figure \\ref{fig:large_loss} stops the training at iteration 500,000, as was done in Figure \\ref{fig:val_base}. The number of epochs is smaller than in the former experiment, but the loss presents a clear downward trend with a steeper slope. In Figure \\ref{fig:last}, where the execution continues until iteration 2 million, the loss keeps decreasing, reaching values of $10^{3}$, something that did not happen in Figure \\ref{fig:val_base_1}.\n\n\\begin{figure}[!h]\n \\begin{center}\n \\includegraphics[scale=0.49]{validation_loss_balanced_vs_baseline.JPG}\n \\end{center}\n \\caption{Validation loss of the first 500,000 iterations of Federated Word2Vec with a larger dataset, divided by epoch. The red lines represent the average of the validation loss calculated by aggregating all previous values from each epoch. Y-axis is in logarithmic scale.}\n \\label{fig:large_loss}\n\\end{figure}\n\n\n\\begin{figure}[!h]\n \\begin{center}\n \\includegraphics[scale=0.49]{validation_loss_balanced_all_iterations.JPG}\n \\end{center}\n \\caption{Validation loss of a full execution of Federated Word2Vec with a larger dataset, represented in blue. The red lines represent the average of the validation loss calculated by aggregating all previous values from each epoch. Y-axis is in logarithmic scale.}\n \\label{fig:last}\n\\end{figure}\n\nConsequently, Federated Word2Vec seems to work better with larger datasets, as it benefits from learning from multiple sources at the same time. 
The results show that Federated Word2Vec is not better, and might perform slightly worse, than Word2Vec under the same settings. However, it is proven that Federated Word2Vec has a similar convergence pattern to Word2Vec and easily scales to a large dataset. \n\n\\subsection{How categorised data influence the results}\n\nWe then compare collaborative training with Federated Word2Vec to local training by a single organization, which only has access to the \\textit{finance} dataset. We analyse the organization of the words in the embedding space, using their cosine distance, by identifying the top-5 closest neighbours for a number of target words, as shown in Table \\ref{font-table}.\n\nThe most striking finding in this analysis is that clusters are populated with more meaningful words in Federated Word2Vec. This behaviour was expected for the target word \\textit{bacteria}, as it does not appear frequently in the \\textit{finance} dataset. However, the same situation happens with \\textit{market}, presenting meaningless words as the closest neighbours in its community, while the execution of Federated Word2Vec shows more specific context words. \n\nMoreover, \\textit{market} is not an outlier. Most words that are relevant to the \\textit{finance} dataset present similar results. The resultant neighbourhood of the word \\textit{money} trained with baseline Word2Vec on the financial dataset still presents generic words such as \\textit{\\{stated, said, there\\}}. In contrast, the community generated during the federated training clearly gathers meaning from the finance topic.\n\nThese results show the importance of having a full picture of the language to produce high-quality embeddings, even for domain-specific tasks. 
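The nearest-neighbour analysis in Table \ref{font-table} ranks words by cosine distance in the embedding space; a minimal sketch with toy embeddings (the vectors below are illustrative stand-ins, not trained values):

```python
import numpy as np

def top_k_neighbours(word, embeddings, k=5):
    """Top-k nearest neighbours of `word` by cosine distance.
    `embeddings` maps word -> vector."""
    v = embeddings[word]
    v = v / np.linalg.norm(v)
    dists = {}
    for w, u in embeddings.items():
        if w == word:
            continue
        dists[w] = 1.0 - float(v @ (u / np.linalg.norm(u)))
    return sorted(dists, key=dists.get)[:k]
```

Running this over the trained embedding matrix for each target word reproduces the kind of ranking shown in the table.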
This, in turn, underscores the need for collaboration among organizations.\n\n\\begin{table}\n\\centering\n\\small\n\\begin{tabular}{l l l l l}\n\n\\hline \\textbf{Word} & \\multicolumn{2}{c}{\\textbf{W2V}} & \\multicolumn{2}{c}{\\textbf{Fed W2V}}\\\\ \\hline\n\\hline\n\n\\textbf{Market} & \\textbf{Top-5} & \\textbf{Dist} & \\textbf{Top-5} & \\textbf{Dist} \\\\ \\hline\n& this & 0.023 & markets & 0.029 \\\\\n& proposed & 0.024 & company & 0.035 \\\\\n& some & 0.024 & share & 0.042 \\\\\n& all & 0.025 & trading & 0.048 \\\\\n& other & 0.025 & assets & 0.049 \\\\ \\hline \\hline\n\n\\textbf{Bacteria} & \\textbf{Top-5} & \\textbf{Dist} & \\textbf{Top-5} & \\textbf{Dist} \\\\ \\hline\n\n& rare & 0.026 & organism & 0.070 \\\\\n& animals & 0.026 & toxic & 0.075 \\\\\n& applied & 0.026 & tissue & 0.077 \\\\\n& result & 0.027 & cells & 0.081 \\\\\n& plants & 0.027 & humans & 0.083 \\\\ \\hline \\hline\n\n\\textbf{Money} & \\textbf{Top-5} & \\textbf{Dist} & \\textbf{Top-5} & \\textbf{Dist} \\\\ \\hline\n& stated & 0.028 & paid & 0.045 \\\\\n& said & 0.028 & offer & 0.053 \\\\\n& there & 0.028 & sell & 0.062 \\\\\n& take & 0.029 & cash & 0.071 \\\\\n& help & 0.031 & interest & 0.073 \\\\\n\n\\end{tabular}\n\\caption{\\label{font-table} Top-5 nearest neighbours of each central word, using the cosine distance in the training of W2V with finance dataset and Fed W2V with all 5 datasets.}\n\\end{table}\n\n\\section{Conclusions}\n\nThe purpose of this paper was to implement and test the viability of a distributed, efficient, data-private approach that allows a small number of organizations, each owning a large private text corpus, to train global word representations. The results indicate the potential for applicability to real scenarios of collaborative training. 
The main contributions of this work are \n\\begin{enumerate*}\n\\item the viability of training NLP models like Word2Vec under the Federated Learning protocol, with convergence times at least on par with those of the widely tested Word2Vec;\n\\item the importance for organizations to cooperate, as cooperation provides models that are not only globally good, but also locally better than locally-trained models; and\n\\item the demonstration that the quality of the vector representations is not affected by the size of the corpora.\n\\end{enumerate*}\n\n\\noindent\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\uppercase{Introduction}}\n\\label{sec:introduction}\n\t\\IEEEPARstart{P}{oint} cloud registration is the problem of estimating the rigid relative pose transformation that aligns a pair of point clouds into the same coordinate system.\n\tThis is a key problem in many downstream applications including 3D scene reconstruction \\cite{li20183d}, localisation \\cite{Lu_2019_CVPR} and SLAM \\cite{ramezani2020online}.\n\tRecent applications such as Augmented Reality (AR) \\cite{gao2017stable}, cooperative (multi-agent) perception for autonomous vehicles \\cite{arnold2020cooperative} and multi-agent SLAM \\cite{dube2017online} introduce new challenges to this problem.\n\tSpecifically, these applications require registration methods that are robust to point clouds with low overlap, \\textit{e.g.} when sensors are far apart, and capable of operating in real-time.\n\t\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\input{images\/qualitative\/qualitative.pgf}\n\t\t\\caption{Qualitative results of the proposed method. 
Each row represents a different sample from the CODD test set and the vertical label indicates the relative translation between point clouds in meters, where $t^g$ indicates the ground-truth relative translation vector.}\n\t\t\\label{fig:qualitative}\n\t\\end{figure}\n\t\n\tExisting registration methods are often designed and evaluated assuming a significant overlap between the input point clouds.\n\tThis assumption is valid for applications such as SLAM \\cite{ramezani2020online} and lidar odometry \\cite{graeter2018limo}, where pairs of point clouds are obtained sequentially in adjacent time steps by a single vehicle navigating in a driving environment.\n\tOn the other hand, applications such as cooperative perception \\cite{arnold2020cooperative} and multi-agent SLAM \\cite{dube2017online} require registering point clouds obtained simultaneously from a pair of sensors on two different vehicles that are potentially far apart, and thus, may have low field-of-view overlap, \\textit{e.g.} Figure \\ref{fig:qualitative}.\n\tAs the relative translation between the sensors increases, the number of identifiable correspondences decreases, which poses challenges in registering the point clouds accurately.\n\t\n\tThe majority of existing point cloud registration methods cannot guarantee real-time execution.\n\tTraditional local registration methods such as Iterative Closest Point (ICP) \\cite{arun1987least} solve the problem iteratively assuming an initial relative pose.\n\tHowever, the iterative nature of such methods renders them unfeasible for real-time applications, particularly considering large scale point clouds.\n\tThese methods are also prone to non-optimal solutions when the initial pose estimate is poor, which may be addressed with global-optimisation variants \\cite{olsson2008branch,yang2013go} at the cost of higher computational complexity.\n\tAnother category of methods identify correspondences between point clouds using a distance metric between 
hand-engineered features \\cite{rusu2009fpfh} or learned point-wise features \\cite{choy2019fully}.\n\tThese correspondences are often contaminated by a large number outliers and must be filtered using RANSAC \\cite{fischler1981random,bustos2017guaranteed} or learned models \\cite{choy2020deep}, which further increases the registration execution time.\n\tFurthermore, state-of-the-art learning-based models \\cite{choy2019fully,choy2020deep} require computationally demanding 3D convolutions and generate numerous putative correspondences, introducing a bottleneck on the RANSAC loop and rendering real-time execution unfeasible.\n\n\tTo mitigate the aforementioned limitations, we propose a novel point cloud registration method capable of operating in real-time and robust to low-overlapping point clouds.\n\tThe proposed method identifies correspondences between the source and target point clouds by learning point-wise features.\n\tA novel encoder hierarchically subsamples the point clouds to reduce the number of key points and improve the run-time performance.\n\tThe resulting features are refined using self- and cross-attention based on a graph neural network.\n\tThe attention network leverages geometrical relationships between key points and their features across to improve the correspondence accuracy, particularly in regions of low overlap.\n\tThe relative pose parameters are obtained by fitting the learned correspondences using Random Sample Consensus (RANSAC) to robustly reject outliers.\t\n\tDuring inference, the RANSAC fitting is done efficiently considering a small number of correspondences, which allows end-to-end inference times below 410ms.\n\tThe model is trained and evaluated separately on the KITTI odometry dataset and a novel Cooperative Driving Dataset (CODD).\n\tThe relative translation between sensors in CODD ranges up to 30m, introducing challenging pairs of point clouds with low overlap, which we hope will create a new research benchmark.\n\tOur 
contributions are summarised as:\n\t\\begin{itemize}\n\t\t\\item A computationally efficient point-wise feature encoder that allows identifying correspondences between point clouds;\n\t\t\\item A graph neural network that provides self- and cross-attention between point clouds and improves the quality of correspondences;\n\t\t\\item A novel registration method for point clouds that is robust to partially-overlapping point clouds and capable of operating in real-time;\n\t\t\\item A new synthetic lidar dataset containing low overlapping point clouds in a wide range of driving scenarios;\n\t\\end{itemize}\n\n\n\\section{\\uppercase{Related Works}}\n\\label{sec:relatedwork}\n\tThis section reviews existing point cloud registration methods in the literature and highlights how the method proposed in this paper differs from these existing works.\n\tWe divide existing methods in the literature into two categories: traditional registration methods and learning-based methods.\n\n\t\\subsection{Traditional Registration Methods}\n\t\tIterative Closest Point (ICP) \\cite{arun1987least} is a local registration method that assumes an initial relative pose and iteratively computes the transformation parameters that minimise the distance between each point in the source point cloud and its closest neighbour in the target point cloud.\n\t\tThis method is highly sensitive to the initial pose estimate, and converges to non-optimal local-minima results when the initial pose estimate is poor.\n\t\tTo mitigate this, \\cite{olsson2008branch,yang2013go} estimate global optimum solutions for ICP considering branch-and-bound search over the transformation space.\n\t\tHowever, such global methods have significantly higher execution times, which prevents their usage in real-time applications.\n\t\t\t\t\n\t\tOther traditional approaches use handcrafted features to find correspondences between the point clouds.\n\t\tFast Point Feature Histogram (FPFH) \\cite{rusu2009fpfh} encodes the local 
geometry of 3D points using multi-dimensional feature vectors.\n\t\tBut the correspondences obtained by comparing FPFH features are often contaminated by a large number of outliers, which prevents accurate registration.\n\t\tFor this reason, RANSAC methods are used to filter out the outlier correspondences.\n\t\tMore recently, TEASER \\cite{Yang20tro-teaser} reformulates the registration problem using a truncated least-squares cost, which results in improved registration accuracy compared to RANSAC when considering a high number of outliers.\n\n\t\\subsection{Learning-based Methods}\n\t\tOne group of learning-based registration methods focus on learning accurate correspondences between the point clouds.\n\t\tGenerally, these methods learn a mapping from the original Euclidean space to a latent feature space and optimise the mapping such that corresponding points have a small distance in the latent space.\n\t\tDeng \\textit{et al.} \\cite{deng2018ppfnet} uses a PointNet \\cite{qi2017pointnet} model to learn point-wise features and trains the model using an $N$-tuple loss.\n\t\tVCR-Net \\cite{wei2020endtoend} learns point-wise feature vectors using multi-layer-perceptrons (MLPs) to extract local features, which are refined using global attention and used to identify correspondences between point clouds.\n\t\tIn contrast, \\cite{choy2019fully} uses sparse fully convolutional networks to obtain voxel-wise features and trains the model using variations of triplet loss with hard negative mining.\n\t\tThe resulting correspondences are often contaminated with outliers and need to be pruned using RANSAC \\cite{fischler1981random} or further learning-based filtering \\cite{choy2020deep} methods before estimating the pose transformation parameters.\n\t\t\n\t\tAnother group of methods solve the problem end-to-end by learning the relative pose transformation directly.\n\t\tFor example, PCRNet \\cite{vsarode2019pcrnet} uses a PointNet \\cite{qi2017pointnet} model to encode a 
global feature vector for both source and target point clouds and directly regress the transformation parameters.\n\t\tDeepVCP \\cite{lu2019deepvcp} uses PointNet++ \\cite{qi2017pointnetpp} to create point-wise features, then selects the top-$K$ salient points using a learned weighting network to generate a deep feature embedding.\n\t\tThe embeddings are fed to a 3D CNN to obtain soft-correspondences, which are finally used to compute the transformation parameters in closed form.\n\t\t\n\t\tThe proposed method differs from previous works in the following ways.\n\t\tFirst, our method considers a novel and computationally efficient point-wise feature encoder based on Set Abstraction (SA) and Feature Propagation (FP) layers \\cite{qi2017pointnetpp}.\n\t\tWhile previous works \\cite{lu2019deepvcp} have used PointNet++ feature encoders, we distinguish our encoder by adopting an architecture that hierarchically subsamples points at each layer, resulting in improved computational performance.\n\t\tSecondly, we improve the quality of the correspondences using a novel graph-based attention network that efficiently combines self- and cross-attention information across point clouds.\n\t\tUnlike the feature-based attention in \\cite{wei2020endtoend}, the proposed graph attention leverages both spatial and feature dimensions of local neighbourhoods to refine point-wise features.\n\t\tFinally, we extend our analysis beyond existing datasets and evaluate our model performance on challenging low-overlapping point clouds using a novel dataset where the translation between sensor poses varies uniformly up to 30 meters.\n\n\\section{\\uppercase{Problem Formulation}}\n\\label{sec:problem}\n\tGiven two input point clouds $P_X \\subset \\R^3$ and $P_Y \\subset \\R^3$, the registration problem is to estimate the rigid relative pose transformation that aligns $P_X$ into the coordinate system of $P_Y$.\n\tThis transformation is parametrised by a rotation matrix $R \\in SO(3)$ and a 
translation vector $t \\in \\R^3$.\n\tThe problem can be solved by identifying pairs of correspondences between $P_X$ and $P_Y$.\n\tGiven a set of correspondences, $X = \\{x_1,\\dots,x_N \\} \\subset P_X, Y = \\{y_1,\\dots,y_N \\} \\subset P_Y$, where $(x_i, y_i), i=1,\\dots,N$ are correspondence pairs, the transformation parameters are obtained by the minimisation of the least-squares error:\n\t\\begin{equation}\n\t\t\\label{eq:ls-error}\n\t\tE(R,t) = \\frac{1}{N} \\sum_{i=1}^N \\left\\| Rx_i + t - y_i \\right\\|^2.\n\t\\end{equation}\n\tThis error is a form of the Orthogonal Procrustes problem \\cite{1996weightedProcrustes} and admits the closed-form solution described below.\n\tFirst, the centroids are computed as\n\t\\begin{equation}\n\t\t\\bar{x} = \\frac{1}{N}\\sum_{i=1}^{N} x_i,\\quad\\quad \\bar{y} = \\frac{1}{N}\\sum_{i=1}^{N} y_i,\n\t\\end{equation}\n\tand the covariance matrix, denoted by $H$, is obtained using\n\t\\begin{equation}\n\t\tH = \\sum_{i=1}^{N} (x_i - \\bar{x})(y_i - \\bar{y})^\\T.\n\t\\end{equation}\n\tFinally, the rotation matrix and translation vector $R,t$ that minimise Eq. 
\\ref{eq:ls-error} are computed in closed-form as\n\t\\begin{align}\n\t\t\\label{eq:procrustes-closed-form}\n\t\tR &= V \\left[\\begin{smallmatrix} 1 & & \\\\ & 1 & \\\\ & & \\det(V^\\T U) \\end{smallmatrix}\\right] U^\\T, \\\\\n\t\tt &= -R\\bar{x} + \\bar{y}, \\nonumber\n\t\\end{align}\n\tconsidering the Singular Value Decomposition (SVD) $H = U S V^\\T$.\n\tThe next section proposes a novel method to efficiently obtain correspondences between pairs of point clouds.\n\t\n\\section{\\uppercase{Proposed Method}}\n\\label{sec:method}\n\tThis section presents a novel method for robust point cloud registration using learned correspondences targeting efficient, real-time inference.\n\tFigure \\ref{fig:diagram} describes the components and data flow of the proposed method.\n\tThe proposed method can be summarised as follows:\n\t\\begin{enumerate}[label=(\\Alph*)]\n\t\t\\item Both point clouds are fed to an encoder to obtain a subset of key points and associated point-wise features.\n\t\t\\item A graph neural network refines the point-wise features considering self- and cross-attention.\n\t\t\\item The resulting features are used to identify correspondences between the source and target key points.\n\t\t\\item The relative transformation parameters $R,t$ are robustly estimated using a RANSAC formulation of the problem defined in Section \\ref{sec:problem}. 
\n\t\\end{enumerate}\n\tThe aforementioned components and the training process are described in the following subsections.\n\t\n\t\\begin{figure*}\n\t\t\\centering\n\t\t\\includegraphics[width=\\linewidth]{images\/diagram.pdf}\n\t\t\\caption{Pipeline and data flow of the proposed point cloud registration method.}\n\t\t\\label{fig:diagram}\n\t\\end{figure*}\n\n\t\\subsection{Point-wise Feature Encoder}\n\t\tThe encoder is a core component of the pipeline, as it computes point-wise features that will be used to identify correspondences.\n\t\tWe propose a novel and computationally efficient encoder network based on Set Abstraction (SA) and Feature Propagation (FP) layers \\cite{qi2017pointnetpp}.\n\t\tThe proposed encoder architecture, including the hyper-parameters of each layer, is depicted in Figure \\ref{fig:encoder}.\n\t\tThe encoder outputs a subset of sampled coordinates (key points) from the source and target point clouds, denoted by $X$ and $Y$, and their respective feature vectors, $f_X$ and $f_Y$.\n\t\tThe input to the encoder consists of 3D point coordinates and corresponding features, \\textit{e.g.} lidar return intensity.\n\t\tNote that the input features are optional, but in this work they consist of a single scalar per point representing the lidar intensity.\n\t\tThe source and target point clouds are fed to the encoder independently.\n\t\t\n\t\tThe first four encoder layers are SA layers.\n\t\tAn SA layer consists of four operations:\n\t\t\\begin{enumerate}\n\t\t\t\\item $n$ coordinates are sampled from the previous layer using Farthest Point Sampling (FPS) \\cite{moenning2003fps}.\n\t\t\t\\item A local neighbourhood of each sampled coordinate is established by selecting all points within radius $r$ of the respective coordinate.\n\t\t\t\\item The features of the points in each neighbourhood are fed to a shared Multi Layer Perceptron (MLP), denoted as a list $L$ containing the number of intermediate nodes per layer.\n\t\t\t\\item The resulting feature 
vectors of the $n$ sampled coordinates are computed using an aggregation function (\\textit{max-pooling}) over the MLP output of the points in the respective neighbourhoods.\n\t\t\\end{enumerate}\n\t\tEach SA layer hierarchically subsamples and aggregates information from the previous layer with progressively larger receptive volumes, which is a fundamental step in reducing the computational cost of our pipeline.\n\t\tAt the same time, it is also important not to discard valuable information, \\textit{i.e.} ensuring that points from one layer fall within the neighbourhood of sampled points in the next layer.\n\t\tThis trade-off is achieved by tuning the layers' hyper-parameters, namely $n,r,L$, such that the sampled points' neighbourhoods include most points from the previous layer.\n\t\tThese hyper-parameters were optimised for large outdoor driving environments and would require fine-tuning for indoor scenarios.\n\t\t\n\t\tThe last encoder layer is an FP layer.\n\t\tIt propagates high-level information from SA4 to the points in the previous layer (SA3) as illustrated in Figure \\ref{fig:encoder}.\n\t\tThis is achieved by interpolating the feature vectors in the SA3 layer using the features from the three nearest-neighbour coordinates in SA4.\n\t\tThe final features are obtained by fusing the original SA3 features with the interpolated SA4 features using a shared MLP, represented by a list $L$ of intermediate nodes.\n\t\tMore details about the interpolation can be found in \\cite{qi2017pointnetpp}.\n\t\t\t\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\linewidth]{images\/encoder.pdf}\n\t\t\t\\caption{Point-wise feature encoder model architecture. The model consists of four Set-Abstraction (SA) layers and one Feature Propagation (FP) layer. Point coordinates and features are represented in yellow and green, respectively. The brackets indicate the dimensionality of each matrix. 
The number of input points, denoted by $M$, is arbitrary, and the model consistently outputs $N=512$ point coordinates and their respective feature vectors with $D=128$ dimensions.}\n\t\t\t\\label{fig:encoder}\n\t\t\\end{figure}\n\t\t\n\t\\subsection{Graph-based Attention}\n\t\tThe feature vectors obtained with the encoder network represent local point cloud information.\n\t\tHowever, these features are agnostic to the global context of the point cloud.\n\t\tFor example, if a point cloud contains multiple objects, \\textit{e.g.} trees, it would be difficult to distinguish between individual trees.\n\t\tAnother problem, most critical for low overlapping point clouds, is that regions of overlap generally have different point densities in each point cloud, challenging the correct identification of correspondences since a point's features change with the density of points in its neighbourhood.\n\t\tTo mitigate both problems, we propose a graph-based attention module which transforms points' feature vectors considering the wider point cloud context (self-attention) and the context of both source and target point-clouds (cross-attention).\n\t\tSelf-attention increases the distinctiveness of key points by attending to their surrounding context.\n\t\tThe cross-attention layer learns to refine point-wise features by attending to the most similar features across point clouds. \n\t\tBoth layers, illustrated in Figure \\ref{fig:graphatt}, increase the likelihood of finding accurate correspondences, even in cases of low overlap.\n\t\t\n\t\tThe self-attention layer introduces attention between points within the same point cloud.\n\t\tA graph connecting the points (nodes) is created for each point cloud using the $k$-Nearest-Neighbours of the points' spatial coordinates.\n\t\tLet $f_i$ be a feature vector from either $f_X$ or $f_Y$. 
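Before defining the feature update, the spatial $k$-NN graph construction described above can be sketched as follows (a minimal brute-force NumPy sketch under our own naming, not the paper's implementation; a practical pipeline would use a spatial index or GPU kernel for large point clouds):

```python
import numpy as np

def knn_graph_edges(coords, k=32):
    """Build the self-attention graph: connect each point to its k nearest
    neighbours in Euclidean space (k=32 in the paper). Returns an (N*k, 2)
    array of directed edges (i, j). Brute-force O(N^2) distance computation."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]  # indices of k nearest neighbours
    src = np.repeat(np.arange(len(coords)), k)
    return np.stack([src, nbrs.ravel()], axis=1)
```

The cross-attention bi-partite graph follows the same pattern, but with the feature dot product in place of the Euclidean distance and with neighbours drawn from the other point cloud.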
\n\t\tThe self-attention layer computes a residual term for $f_i$ using the Crystal Graph Convolution Operation \\cite{xie2018cgconv}:\n\t\t\\begin{equation}\n\t\t\t\\label{eq:cgconv}\n\t\t\t\\hat{f}_i = f_i + \\max_{j \\in \\mathcal{N}(i)} \\sigma(z_{i,j} W_f) \\odot \\text{Softplus}(z_{i,j} W_s),\n\t\t\\end{equation}\n\t\twhere $\\mathcal{N}(i)$ indicates the $k$ neighbour nodes of $i$ and $z_{i,j} = [f_i, f_j]$ denotes the concatenated features of nodes $i$ and $j$.\n\t\t$\\sigma(\\cdot)$ indicates the sigmoid function, and the \\textit{Softplus} function is defined as $\\text{Softplus}(z) = \\log(1+e^z)$.\n\t\tThe attention matrices $W_f, W_s$ are parameters to be learned and the operation $\\odot$ represents element-wise multiplication.\n\t\tWe set the number of nearest neighbours to $k=32$, which provides a good trade-off between accuracy and computational efficiency.\n\t\tThis process is performed independently with shared parameters for both source and target point clouds.\n\t\t\n\t\tFollowing the self-attention layer, the cross-attention layer allows interaction between the source and target point cloud features.\n\t\tThis layer creates a bi-partite graph between source and target points.\n\t\tEach source point is connected to its $k$ nearest-neighbour nodes in the target point cloud, where the distance metric is the dot product between the feature vectors of the respective points.\n\t\tThis layer uses the same residual update rule from Eq. \\ref{eq:cgconv}, considering the different underlying graph and independent attention matrices $W_f', W_s'$.\n\t\tThe graph-attention network output is given by the updated feature vectors from source and target point clouds, denoted by $\\hat{f}_X$ and $\\hat{f}_Y$, respectively.\n\t\t\n\t\t\\begin{figure}\n\t\t\t\\centering\n\t\t\t\\includegraphics[width=\\linewidth]{images\/graphatt.pdf}\n\t\t\t\\caption{Graph-based attention representation. Points are represented as graph nodes. 
The nodes are defined by their position, \\textit{i.e.} 3D coordinates, and their feature vector (not represented in the image). The graphs connecting the points are created based on the $k$-NN ($k=32$) between the points. In the self-attention layer, the $k$-NN distance is the Euclidean distance between points' spatial coordinates. In the cross-attention graph, the $k$-NN distance is the dot product between points' feature vectors.}\n\t\t\t\\label{fig:graphatt}\n\t\t\\end{figure}\n\t\n\t\\subsection{Identifying Correspondences}\n\t\\label{sec:soft-corr}\n\t\tThe correspondences between the point clouds can be obtained by comparing the point-wise features of the source and target key points, denoted respectively as $\\hat{f}_X, \\hat{f}_Y \\in \\R^{N \\times D}$.\n\t\tThese feature vectors are normalised to unit-length $D$-dimensional vectors using the Euclidean norm.\n\t\tA matching probability map indicating the probability of correspondences between $X$ and $Y$ is computed as\n\t\t\\begin{equation}\n\t\t\t\\label{eq:softmap}\n\t\t\t\\phi = \\text{Softmax}\\left( \\frac{\\hat{f}_X \\cdot \\hat{f}_Y^\\T}{T} \\right) \\in \\R^{N \\times N},\n\t\t\\end{equation}\n\t\twhere $T$ is a temperature hyper-parameter and the \\textit{Softmax} function is applied row-wise.\n\t\tEach element $\\phi_{ij}$ represents the probability that the $i$-th key point in $X$ matches the $j$-th key point in $Y$.\n\t\tThe \\textit{Softmax} function scales the coefficients of each row $\\phi_i$, ensuring a probability distribution over the points in $Y$.\n\t\tThe temperature parameter, denoted by $T$, controls the entropy of the distribution across points in $Y$.\n\t\tIn the limit, when $T \\to 0^+$, the coefficients become the one-hot encoding of the point in $Y$ with the highest similarity (\\textit{i.e.} dot product).\n\t\tFinally, each key point $x_i \\in X$ is matched to the key point in $Y$ with the highest correspondence probability, resulting in the set of correspondence pairs $\\{ (x_i, 
y_{\\argmax_j \\phi_{ij}}), i=1,\\dots,N \\}$.\n\t\tThe ordered set of correspondences in $Y$ is denoted as $\\hat{Y} = \\{ y_{\\argmax_j \\phi_{1j}}, \\dots, y_{\\argmax_j \\phi_{Nj}} \\}$.\t\n\t\t\n\t\\subsection{Estimating Transformation Parameters}\n\t\tThe previous step computes a correspondence for every point in $X$.\n\t\tIn practice, only a fraction of points in $X$ will have correspondences in $Y$, particularly in the case of partially overlapping point clouds.\n\t\tSensor noise and varying point densities can also lead to encoding errors and erroneous correspondences.\n\t\tTo mitigate the effect of correspondence outliers, a common practice is to use sample consensus algorithms such as RANSAC \\cite{fischler1981random,rusu2009fpfh}.\n\t\tA general version of this algorithm applied to this problem consists of three steps:\n\t\t\\begin{enumerate}\n\t\t\t\\item Create a hypothesis: Sample a minimal set of three correspondences from the set of correspondences and compute the transformation parameters using Eq. 
\\ref{eq:procrustes-closed-form}.\n\t\t\t\\item Score the hypothesis based on consensus: Compute the number of inlier correspondences, where a correspondence $(x_i,\\hat{y}_i), x_i \\in X, \\hat{y}_i \\in \\hat{Y}$ is an inlier if $\\left\\| Rx_i + t - \\hat{y}_i \\right\\| \\leq \\kappa$, where $\\kappa$ is the inlier threshold and $R, t$ are the hypothesis parameters computed in the first step.\n\t\t\t\\item Repeat the previous steps $H$ times and select the hypothesis with the highest number of inliers.\n\t\t\\end{enumerate}\n\t\tThe number of tested hypotheses, denoted by $H$, offers a trade-off between computational performance and robustness to outliers.\n\t\t$H$ can also be derived to achieve a desired confidence of the selected hypothesis \\cite{fischler1981random}, \\textit{i.e.} sampling an outlier-free set of correspondences.\n\t\tWhile previous works used RANSAC with a large set of putative correspondences \\cite{rusu2009fpfh}, this may be unfeasible for real-time systems.\n\t\tIn this work, we achieve negligible RANSAC computational cost by learning a small set of correspondences ($N=512$), which allows us to reduce the number of hypotheses being tested.\n\t\t\n\t\\subsection{Training Process}\n\t\tThe training process consists of optimising the encoder and attention networks to find accurate correspondences where they exist.\n\t\tWe maximise the matching probabilities of ground-truth correspondences and minimise the matching probability of non-corresponding points.\n\t\tThis is achieved by directly minimising the following loss function:\n\t\t\\begin{equation}\n\t\t\t\\label{eq:loss}\n\t\t\t\\mathcal{L} = \\frac{1}{N_c} \\sum_{i=1}^N \\delta_i \\left[ -\\phi_{i\\hat{j}} + \\frac{\\lambda}{N-1} \\sum_{j=1,j \\neq \\hat{j}}^N \\phi_{ij} \\right],\n\t\t\\end{equation}\n\t\twhere the binary variable $\\delta_i$ indicates whether the source point $x_i$ has a correspondence in $Y$ and $\\hat{j}$ represents the index of the corresponding point in $Y$. 
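In code, this loss can be sketched as follows (a minimal NumPy sketch under our own naming; `phi` is the $N \times N$ probability map $\phi$, `match_idx[i]` gives $\hat{j}$, `delta` is the mask $\delta$ and `lam` the scaling hyper-parameter $\lambda$):

```python
import numpy as np

def correspondence_loss(phi, match_idx, delta, lam=10.0):
    """Reward the ground-truth match probability phi[i, j_hat] and penalise
    the average probability mass on all other points, for every source
    point i that has a ground-truth correspondence (delta[i] = 1)."""
    N = phi.shape[0]
    pos = phi[np.arange(N), match_idx]       # phi_{i j_hat}
    neg = (phi.sum(axis=1) - pos) / (N - 1)  # mean of phi_{ij} over j != j_hat
    n_c = delta.sum()                        # number of ground-truth matches
    return float((delta * (-pos + lam * neg)).sum() / n_c)
```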
\n\t\tAdditionally, $N_c=\\sum_{i=1}^N \\delta_i$ is the number of ground-truth correspondences and $\\lambda$ is a hyper-parameter scaling the contributions of incorrect matches into the loss function.\n\t\tA point in $X$ is considered to have a correspondence in $Y$ if, under the ground-truth transformation, it is within a distance smaller or equal to $1.6$ meters from a point in $Y$.\n\t\tThis inlier distance is arbitrary and was chosen based on the smallest encoder radius.\n\t\tData augmentation is employed by applying random rotation transformations to both input point clouds and adjusting the ground-truth rotation matrix accordingly.\n\t\tThe optimisation details are described in Section \\ref{sec:exp:implementation}.\n\t\t\n\\section{\\uppercase{Performance Evaluation}}\n\\label{sec:exp}\n\tIn this section, we first describe the datasets and the evaluation metrics, followed by the implementation details.\n\tWe then compare the performance of the proposed method with traditional baselines, including ICP \\cite{arun1987least}, FPFH RANSAC \\cite{rusu2009fpfh} and TEASER \\cite{Yang20tro-teaser}; and two state-of-the-art learning-based methods: FCGF \\cite{choy2019fully} and DGR \\cite{choy2020deep}.\n\tWe also provide an ablation study identifying the impact of the proposed attention network into the registration performance.\n\n\t\\subsection{Dataset}\n\t\\label{sec:exp:dataset}\n\t\tThe KITTI Odometry dataset \\cite{Geiger2012CVPR} is traditionally used to evaluate point cloud registration methods in outdoor environments.\n\t\tWe follow the evaluation protocol of recent methods \\cite{FGR2016,choy2019fully,choy2020deep,Bai_2020_CVPR}, which adopt sequences 0 to 5 for training, 6 to 8 for validation and 9 to 10 for testing.\n\t\tIn each sequence, the samples are created by selecting pairs of points clouds obtained sequentially by a single vehicle such that the translation between the poses is less than 10m.\n\t\tThe ground-truth pose is provided by GPS and 
refined using ICP to reduce misalignment.\n\t\t\n\t\tThe distribution of poses in the KITTI dataset is limited to the trajectory of a single vehicle as it navigates the environment.\n\t\tIn practice, registration methods must be resilient to point clouds with arbitrary relative pose, where the overlap between point clouds may vary significantly across samples.\n\t\tTo this end, we introduce the Cooperative Driving Dataset (CODD) \\cite{codd}, an open-source synthetic dataset containing lidar point clouds collected simultaneously from multiple vehicles.\n\t\tThis dataset is created using CARLA \\cite{Dosovitskiy17} and features a diverse range of driving environments, including rural areas, suburbs, and dense urban centres.\n\t\tThe dataset consists of 108 sequences, which are split into three independent subsets for training, validation and testing, as detailed in Table \\ref{tab:dataset}.\n\t\tThe samples are created by selecting all pair-wise combinations of point clouds obtained from vehicles driving simultaneously within a maximum distance of 30m of each other.\n\t\tFigure \\ref{fig:plot:DatasetStats} presents the cumulative density plots of the relative distance (translation vector norm), rotation angle and overlap ratio of the pairs of point clouds in each dataset.\n\t\tThe overlap ratio measures the overlap between point clouds as the percentage of points in the source point cloud that, when aligned, are within a distance smaller than $\\gamma$ to any point in the target point cloud.\n\t\tOur CODD dataset has a significantly broader distribution of relative distance, rotation angles and overlap ratio between the point cloud pairs, which provides representative scenarios for cooperative perception and multi-agent SLAM.\n\n\t\t\\begin{table}[]\t\n\t\t\t\\centering\n\t\t\t\\caption{Dataset Details}\n\t\t\t\\label{tab:dataset}\t\t\t\n\t\t\t\\begin{tabular}{@{}llll@{}}\n\t\t\t\t\\toprule\n\t\t\t\t\\textbf{KITTI Odometry} & \\textbf{Train} & 
\\textbf{Validation} & \\textbf{Test} \\\\ \\midrule\n\t\t\t\t\\# sequences & 5 & 2 & 2 \\\\\n\t\t\t\t\\# samples (pairs of point clouds) & 1358 & 180 & 555 \\\\ \t\\midrule\t\t\n\t\t\t\t\\textbf{CODD} & \\textbf{Train} & \\textbf{Validation} & \\textbf{Test} \\\\ \\midrule\n\t\t\t\t\\# exclusive maps & 6 & 1 & 1 \\\\\n\t\t\t\t\\# sequences & 78 & 14 & 16 \\\\\n\t\t\t\t\\# samples (pairs of point clouds) & 6129 & 1339 & 1315 \\\\ \\bottomrule\t\n\t\t\t\\end{tabular}\n\t\t\\end{table}\n\t\n\t\t\\begin{figure*}\n\t\t\t\\centering\n\t\t\t\\input{images\/datasetStats.pgf}\n\t\t\t\\caption{Cumulative density of the relative distance and rotation angle between coordinate systems of the pairs of point clouds in each subset.}\n\t\t\t\\label{fig:plot:DatasetStats}\n\t\t\\end{figure*}\n\n\t\\subsection{Evaluation Metrics}\n\t\\label{sec:exp:metrics}\n\t\tFollowing previous studies \\cite{choy2019fully,choy2020deep}, the registration performance is evaluated in terms of the translation and rotation errors given by\n\t\t\\begin{align}\n\t\t\t\\label{eq:metrics}\n\t\t\t\\text{TE} &= \\left\\|\\hat{t} - t^g\\right\\|_2, \\\\\n\t\t\t\\text{RE} &= \\arccos \\frac{\\Tr(\\hat{R}^\\T R^g) - 1}{2},\n\t\t\\end{align}\n\t\twhere $R^g,t^g$ denotes the ground-truth rotation matrix and translation vector, respectively.\n\t\tThese metrics are reported considering their mean value over all dataset samples, denoted as Mean Translation Error (MTE) and Mean Rotation Error (MRE), respectively.\n\t\tWe also consider the recall rate, measured as the ratio of successful registrations to the total number of samples, where the success criteria is $\\text{TE} < 0.6$m and $\\text{RE} < 5$deg following \\cite{choy2020deep}.\n\t\tThe runtime performance is evaluated as the average inference time for the registration of a pair of point clouds disregarding the data loading time.\n\t\n\t\\subsection{Implementation Details}\n\t\\label{sec:exp:implementation}\n\t\tOur proposed method is implemented using 
PyTorch \\cite{paszke2019pytorch}, PyTorch Geometric \\cite{feyPytorchGeometric}, the CUDA implementation of SA and FP layers from \\cite{qi2017pointnetpp} and the Open3D \\cite{Zhou2018} Procrustes RANSAC implementation.\n\t\tThe model is trained independently for each dataset using the Adam \\cite{kingma2014adam} optimiser with a learning rate of $0.1$, $\\epsilon=10^{-4}$, $\\beta_1=0.9, \\beta_2=0.999$ and a batch size of 6 (pairs of point clouds).\n\t\tThe model is trained for twenty epochs and the learning rate is reduced in half after every five epochs.\n\t\tFor evaluation, the model with the lowest validation loss is selected.\n\t\tThe temperature hyper-parameter, described in Section \\ref{sec:soft-corr}, is set to $T=10^{-2}$, and the loss scaling hyper-parameter is set to $\\lambda=10$.\n\t\tDuring inference, the RANSAC inlier threshold, $\\kappa$, is set to 0.5m, and the maximum number of RANSAC iterations, denoted by $H$, is computed to achieve $0.999$ confidence in the selected hypothesis within a limit of $10^5$ iterations.\n\t\tThe point clouds from both datasets are downsampled using voxel sizes of 0.3m following previous methods \\cite{choy2019fully,choy2020deep}.\n\t\tThe overlap ratio distance threshold, $\\gamma$, is set to 0.3m, following the down-sampling voxel size. 
\n\t\tThe experiments are carried out on a Xeon ES-1630 CPU and Quadro M4000 GPU with 8 GB of memory.\n\t\tFor a fair comparison, all baselines are also evaluated on the same hardware.\n\t\tThe official implementation and pre-trained models (30cm voxel) are used for the evaluation of \\cite{choy2019fully,choy2020deep} in the KITTI dataset; likewise, the official TEASER \\cite{Yang20tro-teaser} implementation is adopted; and the Open3D \\cite{Zhou2018} implementation of FPFH, RANSAC and ICP is used for the evaluation of the respective methods.\n\t\t\n\t\\subsection{Performance on the KITTI dataset}\n\t\\label{sec:exp:results-kitti}\n\t\tThe evaluation results, presented in Table \\ref{tab:results-kitti}, show that our proposed method achieves competitive registration errors compared to other methods at a significantly lower inference time -- more than five times faster than the fastest baseline.\n\t\tWhile the proposed method shows a marginal increase in mean translation error, its recall rate is on par with that of the baseline methods.\n\t\tThe mean translation and rotation errors of our method can be further reduced using ICP for refinement (Ours + ICP), at the cost of increased inference time.\n\t\t\n\t\t\\begin{table}[]\n\t\t\t\\centering\n\t\t\t\\caption{Evaluation Results on KITTI test set}\n\t\t\t\\label{tab:results-kitti}\n\t\t\t\\resizebox{\\linewidth}{!}{%\n\t\t\t\\begin{tabular}{@{}lllll@{}}\n\t\t\t\t\\toprule\n\t\t\t\t\\textbf{Model} & \\textbf{MTE [cm]} & \\textbf{MRE [deg]} & \\textbf{Recall} & \\textbf{Time\/sample [s]} \\\\ \\midrule\n\t\t\t\tFPFH TEASER \\cite{Yang20tro-teaser} & 18.8 & 0.76 & 0.984 & 2.26 \\\\\t\t\t\t\n\t\t\t\tFCGF RANSAC \\cite{choy2019fully} & 23.7 & 0.034 & 0.975 & 26.29 \\\\\n\t\t\t\tDGR \\cite{choy2020deep} & 15.6 & 1.43 & 0.982 & 7.60 \\\\ \\midrule\n\t\t\t\tOurs & 26.1 & 0.74 & 0.949 & 0.41 \\\\\n\t\t\t\tOurs + ICP & 8.2 & 0.23 & 0.985 & 3.68 \\\\ 
\\bottomrule\n\t\t\t\\end{tabular}\n\t\t\t}\n\t\t\\end{table}\n\t\n\t\\subsection{Performance on the CODD dataset}\n\t\\label{sec:exp:results-codd}\n\t\tWe aim to evaluate the performance of the proposed method and baselines on challenging low-overlapping pairs of point clouds.\n\t\tFor a fair comparison, learning-based methods \\cite{choy2019fully,choy2020deep} are also trained on the CODD dataset.\n\t\tTable \\ref{tab:results} presents the results on the CODD test set, aggregated by the overlap ratio between point clouds in four progressively larger intervals -- the last interval contains all samples.\n\t\tTraditional methods are not resilient to low-overlapping point clouds: their registration errors increase significantly at lower overlap ratios, as shown in the first three rows of Table \\ref{tab:results}.\n\t\tIn contrast, the learning-based baselines are reasonably robust to low-overlapping point clouds and achieve high recall rates on all intervals.\n\t\tHowever, the latter methods demand substantial running times due to their complex encoders and the filtering of a high number of putative correspondences.\n\t\tOur proposed method achieves recall rates similar to or better than the learning-based baselines with inference more than 35 times faster.\n\t\tThis is achieved by our efficient encoder design, which outputs a small number of correspondences and thereby reduces the RANSAC inference time.\n\t\tOur efficient encoder strategy comes at the cost of a slight increase in the MTE and MRE metrics compared to DGR \\cite{choy2020deep}.\n\t\tTo mitigate this, we apply ICP refinement to our model's output (Ours + ICP), which achieves similar MTE and MRE for highly overlapping point clouds and outperforms all baselines on low-overlapping ones.\n\t\tAlthough the ICP refinement comes with an additional computational cost, we still achieve a nine-fold speed-up compared to the learning-based baselines.\n\t\tQualitative 
results are presented in Figure \\ref{fig:qualitative}.\n\n\t\tFigure \\ref{fig:plot:results-ecdf} shows the Empirical Cumulative Distribution Function (ECDF) of the translation errors, rotation errors, and inference times for different methods.\n\t\tThe distributions indicate that the proposed method with ICP refinement has the best translation error across samples, closely matched by DGR \\cite{choy2020deep}, but with an inference time one order of magnitude smaller.\n\t\tThe inference time distributions show that the proposed method is the fastest of all evaluated methods, with an average inference time of 320ms and negligible standard deviation (17ms).\n\t\tThe proposed method can operate in real time for data input frequencies of up to 3Hz.\n\t\t\n\t\t\\begin{table*}[]\n\t\t\t\\centering\n\t\t\t\\caption{Evaluation Results on CODD test set}\n\t\t\t\\label{tab:results}\n\t\t\t\\resizebox{\\linewidth}{!}{%\n\t\t\t\\begin{tabular}{@{}lllllllllllllll@{}}\n\t\t\t\\cmidrule(l){2-15}\n\t\t\t\\textbf{} &\n\t\t\t\\multicolumn{3}{c}{$\\text{Overlap Ratio} > 0.6$} &\n\t\t\t\\multicolumn{3}{c}{$\\text{Overlap Ratio} > 0.5$} &\n\t\t\t\\multicolumn{3}{c}{$\\text{Overlap Ratio} > 0.4$} &\n\t\t\t\\multicolumn{3}{c}{$\\text{Overlap Ratio} > 0$} &\n\t\t\t\\multicolumn{2}{c}{\\textbf{Time\/sample [s]}} \\\\ \\cmidrule(r){1-1} \\cmidrule(r){2-4} \\cmidrule(r){5-7} \\cmidrule(r){8-10} \\cmidrule(r){11-13} \\cmidrule(r){14-15}\n\t\t\t\\textbf{Method} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{MTE [m]}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{MRE [deg]}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{Recall}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{MTE [m]}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{MRE [deg]}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{Recall}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{MTE [m]}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{MRE [deg]}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{Recall}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{MTE [m]}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{MRE [deg]}} 
&\n\t\t\t\\multicolumn{1}{c}{\\textbf{Recall}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{Mean}} &\n\t\t\t\\multicolumn{1}{c}{\\textbf{Std}} \\\\ \\cmidrule(r){1-1} \\cmidrule(r){2-4} \\cmidrule(r){5-7} \\cmidrule(r){8-10} \\cmidrule(r){11-13} \\cmidrule(r){14-15}\n\t\t\tICP \\cite{arun1987least} & 1.69 & 36.11 & 0.67 & 4.81 & 68.67 & 0.38 & 9.11 & 66.42 & 0.17 & 16.14 & 74.84 & 0.07 & 0.38 & 0.048 \\\\\n\t\t\tRANSAC FPFH \\cite{rusu2009fpfh} & 1.59 & 1.42 & 0.47 & 1.25 & 1.58 & 0.28 & 2.64 & 2.42 & 0.18 & 9.55 & 9.49 & 0.09 & 71.69 & 15.37 \\\\\n\t\t\tTEASER FPFH \\cite{Yang20tro-teaser} & 0.04 & 0.10 & 1.00 & 1.61 & 23.33 & 0.86 & 4.29 & 36.42 & 0.69 & 12.87 & 69.74 & 0.39 & 1.13 & 0.24 \\\\ \\midrule\n\t\t\tRANSAC FCGF \\cite{choy2019fully} & 0.09 & 0.01 & 1.00 & 0.10 & 0.01 & 1.00 & 0.12 & 0.01 & 1.00 & 1.70 & 0.11 & 0.91 & 16.5 & 22.4 \\\\\n\t\t\tDGR \\cite{choy2020deep} & 0.02 & 0.07 & 1.00 & 0.02 & 0.06 & 1.00 & 0.02 & 0.05 & 1.00 & 0.39 & 1.52 & 0.94 & 11.89 & 3.92 \\\\ \\midrule\n\t\t\tOurs & 0.14 & 0.21 & 1.00 & 0.19 & 0.25 & 0.99 & 0.22 & 0.29 & 0.98 & 0.28 & 0.41 & 0.94 & 0.32 & 0.017 \\\\\n\t\t\tOurs + ICP & 0.03 & 0.09 & 1.00 & 0.03 & 0.09 & 0.99 & 0.04 & 0.09 & 0.99 & 0.09 & 0.13 & 0.97 & 1.28 & 0.031 \\\\ \n\t\t\tOurs - Att & 0.29 & 0.48 & 0.87 & 0.39 & 0.56 & 0.86 & 0.59 & 0.86 & 0.76 & 2.22 & 5.69 & 0.57 & 0.30 & 0.003 \\\\ \\bottomrule\n\t\t\t\\end{tabular}\n\t\t}\n\t\t\\end{table*}\n\n\t\t\\begin{figure*}\n\t\t\t\\centering\n\t\t\t\\input{images\/error-time-ecdf.pgf}\n\t\t\t\\caption{ECDF of the Translation Error, Rotation Error and Sample Execution Time for different methods on the CODD test set.}\n\t\t\t\\label{fig:plot:results-ecdf}\n\t\t\\end{figure*}\n\n\n\t\\subsection{Ablation Study}\n\t\\label{sec:exp:results-ablation}\n\t\tWe assess the impact of the proposed attention network on the registration performance, measured in terms of translation and rotation errors.\n\t\tThis is achieved by removing the graph attention module, retraining 
the model and evaluating its performance on the CODD test set.\n\t\tThe results, indicated in Table \\ref{tab:results} ``Ours - Att'', show that the graph-attention network plays a key role in improving the accuracy of the correspondences, resulting in lower translation and rotation errors.\n\t\tThe benefits of the graph attention network are most significant for low-overlapping point clouds, as indicated by the last range group in Table \\ref{tab:results}, where the removal of the attention module results in a 40\\% reduction in registration recall and a significant increase in the mean translation and rotation errors.\n\n\n\\section{\\uppercase{Conclusion}}\n\\label{sec:conclusion}\n\tWe proposed a novel point cloud registration method focusing on fast inference for partially overlapping lidar point clouds. \n\tThe performance evaluation on the KITTI and CODD datasets indicates that the proposed model can operate with a latency lower than 410ms and 320ms, respectively.\n\tThe results show that the proposed model outperforms baseline methods in terms of rotation and translation errors for pairs of point clouds with low overlap.\n\tFurthermore, we show that the proposed graph attention module plays a key role in improving the quality of the correspondences in low-overlapping point clouds, which results in higher registration performance.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzidot b/data_all_eng_slimpj/shuffled/split2/finalzzidot new file mode 100644 index 0000000000000000000000000000000000000000..6ae103211d713e3fbe649c9be0d773441165a27f --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzidot @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n{\\it Magnetic resonance imaging} (MRI) is a widely used imaging method in clinical applications and research. It is based on measuring the magnetic signal resulting from {\\it nuclear magnetic resonance} (NMR) of $\\rm ^1_1H$ nuclei (protons). 
In NMR, the magnetization rotates around an applied magnetic field $\\vec{B}$ at the proton Larmor frequency $f_{\\rm L}$, which is proportional to $B$ \\cite{Abragam}. This behavior of the magnetization is often referred to as {\\it precession} due to the direct connection to the quantum mechanical precession of nuclear spin angular momentum. \n\nConventionally, the magnetic precession signal has been detected using induction coils. The voltage induced in a coil by an oscillating magnetic field is proportional to the frequency of the oscillation, leading to vanishing signal amplitudes as $f_{\\rm L}$ approaches zero. Today, clinical MRI scanners indeed use a high main static field $\\vec B_0$; typically $B_0 = 3$\\unit{T}, corresponding to a frequency $f_0 = 128$\\unit{MHz}. However, when the signal is detected using magnetic field (or flux) sensors with a frequency-independent response, this need for high frequencies disappears. Combined with the so-called prepolarization technique for signal enhancement, highly sensitive magnetic field detectors, typically those based on {\\it superconducting quantum-interference devices} (SQUIDs), provide an NMR signal-to-noise ratio (SNR) that is independent of $B_0$ \\cite{Clarke2007}. In recent years, there has been growing interest in ultra-low-field (ULF) MRI, usually measured in a field on the order of Earth's magnetic field ($B_0 \\sim 10$--$100$\\unit{\\textmu T}). \n\n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=.98\\columnwidth]{array_schem_3d.pdf}\n \\vspace{-2mm}\n\t\\caption{Helmet-type sensor array geometries consisting of (a) triple-sensor modules at 102 positions similar to standard Elekta\/Neuromag MEG configurations and (b) an array with larger overlapping pickup coils for increased performance. Magnetometers are marked in green and gradiometers in red or blue; see Sec.~\\ref{ssPickups} for descriptions of pickup coils. 
(The sample head shape is from MNE-Python \\cite{MNESoftware}.)}\n\t\\label{figGradArrays}\n\\end{figure}\n\nA number of ULF-MRI-specific imaging techniques have emerged, including rotary-scanning acquisition (RSA) \\cite{Hsu2016}, temperature mapping \\cite{VesanenTemperature2013}, signal-enhancing dynamic nuclear polarization \\cite{KRISSLee2010, Buckenmaier2018}, imaging of electric current density (CDI) \\cite{Vesanen2014, Nieminen2014, Hommen2019}, and making use of significant differences in NMR relaxation mechanisms at ULF compared to tesla-range fields \\cite{Lee2005, Hartwig2011, Vesanen2013Temperature}. Several groups have also investigated possibilities to directly detect changes in the NMR signal due to neural currents in the brain \\cite{KrausJr2008, Korber2013, Xue2006, KRISSKim2014} and electrical activation of the heart \\cite{KRISSKim2012}. A further notable field of research now focuses on combining ULF MRI with magnetoencephalography (MEG). In MEG, an array of typically $\\sim 100$ sensors \\cite{Lounasmaa2004, Vrba2002, Pizzella2001} is arranged in a helmet-shaped configuration around the head (see Fig.~\\ref{figGradArrays}) to measure the weak magnetic fields produced by electrical activity in the brain \\cite{Hamalainen1993, DelGratta2001}. SQUID sensors tailored for ULF MRI can typically also be used for MEG, and performing MEG and MRI with the same device can significantly improve the precision of localizing brain activity \\cite{MEGMRI2013,Magnelind2011co, Luomahaara2018, Roadmap2016, Makinen2019}. \n\nIn typical early ULF-MRI setups \\cite{Clarke2007}, the signal was detected by a single dc SQUID coupled to a superconducting pickup coil wound in a gradiometric configuration that rejects noise from distant sources. In this case, the maximum size of the imaging field of view (FOV) is roughly given by the diameter of the pickup coil. 
With large diameters such as 60\\unit{mm}, field sensitivities better than 1\\unit{fT$\/\\sqrt{\\rm Hz}$} have been achieved with a reasonable FOV. A large coil size, however, does have its drawbacks, including issues such as high inductance and increased requirements in dynamic range. Therefore, the most straightforward way to increase the available FOV and the SNR is to use an array of sensors. In addition, as is well known in the context of MEG \\cite{Uusitalo1997ssp, Vrba2002, Taulu2005}, a multi-channel measurement allows forming so-called software gradiometers and more advanced signal processing techniques to reduce noise that can be optimized separately for different noise environments. In ULF-MRI, this can even be done individually for each voxel (volume element) position within the imaging target, as will be shown later. While single-channel systems are still common, several groups have already been using arrays of sensors.\n\n\nAlso in conventional MRI, so-called parallel MRI is performed using an array of tens of induction coils, allowing full reconstruction of images from a reduced number of data acquisitions \\cite{Pruessmann1999, Larkman2007}. There are studies on designing arrays of induction coils for parallel MRI \\cite{Ohliger2006} with an emphasis on minimizing artefacts caused by the reduced number of acquisitions. At the kHz frequencies of ULF MRI, the dominant noise mechanisms are significantly different, and one needs to consider, for instance, electromagnetic interference from power lines and electrical equipment, thermal noise from the radiation shield of the cryostat required for operating the superconducting sensors, as well as noise and transients from other parts of the ULF MRI system structure and electronics \\cite{Zevenhoven2014amp}. Studies on the design of arrays for MEG \\cite{Vrba2002,Ahonen1993,Nurminen2014thesis},\nwhich mainly focus on the accuracy of localizing brain activity, are also not applicable to ULF MRI. 
In terms of single-sensor ULF-MRI signals, there are existing studies of the depth sensitivity \\cite{Burmistrov2013} and SNR as a function of frequency with different detector types \\cite{Myers2007}. \n\nPreviously, in Ref.~\\cite{Zevenhoven2011}, we presented approaches for quantitative comparison of sensor arrays in terms of the combined performance of the sensors, the results indicating that the optimum sensor for ULF MRI of the brain would be somewhat larger than typical MEG sensors.\nExtending and refining those studies, we aim to provide a fairly general study of the optimization of ULF-MRI array performance, with special attention to SNR and imaging the human head.\n\nWe begin by defining relevant quantities and reviewing basic principles of ULF MRI in Sec.~\\ref{sBasics}. Then, we analyze the effects of sensor geometry and size with different noise mechanisms (Sec.~\\ref{sSingleSensor}), advancing to sensor arrays (Sec.~\\ref{sArrays}). Finally, we show computed estimates of array SNR as functions of pickup-coil size and number, and provide a more detailed comparison of spatial SNR profiles with different array designs (Secs.~\\ref{sMethods} and \\ref{sResults}).\n\n\\section{SQUID-detected MRI} \\label{sBasics}\n\n\\subsection{Signal model and single-channel SNR} \\label{ssULFMRI}\n\nIn contrast to conventional MRI, where the tesla-range main field is static and serves both to polarize the sample and as the main readout field, ULF MRI employs switchable fields. Dedicated electronics \\cite{Zevenhoven2014amp} are able to ramp on and off even the main field $\\vec B_0$ with an ultra-high effective dynamic range. An additional pulsed prepolarizing field $\\vec{B}_{\\rm p}$ magnetizes the target before signal acquisition. 
Typically, a dedicated coil is used to generate $\\vec{B}_{\\rm p}$ ($B_{\\rm p} \\sim 10$--$100$\\unit{mT}) in some direction to cause the proton bulk magnetization $\\vec{M}(\\vec r\\,)$ to relax with a longitudinal relaxation time constant $T_1$ towards its equilibrium value corresponding to $\\vec B_{\\rm p}$. After a polarizing time on the order of seconds or less, $\\vec B_{\\rm p}$ is switched off---adiabatically, in terms of spin dynamics---so that $\\vec M$ turns to the direction of the remaining magnetic field, typically $\\vec B_0$, while keeping most of its magnitude. \n\nNext, say at time $t=0$, a short excitation pulse $\\vec B_1$ is applied which flips $\\vec M$ away from $\\vec B_0$, typically by 90$^\\circ$, bringing $\\vec M$ into precession around the magnetic field at positions $\\vec r$ throughout the sample. While rotating, $\\vec M(\\vec r\\,)$ decays towards its equilibrium value corresponding to the applied magnetic field in which the magnetization precesses. This field, $\\vec B_\\mathrm L$, may sometimes simply be a uniform $\\vec B_0$, but for spatial encoding and other purposes, different non-uniform magnetic fields $\\mathrm\\Delta \\vec B(\\vec r, t)$ are additionally applied to affect the precession before or during acquisitions. The encoding is taken into account in the subsequent image reconstruction. \n\n\nThe ULF MRI signal can be modeled to a high accuracy given the absence of unstable distortions common at high frequencies and high field strengths. To obtain a model for image formation, we begin by examining $\\vec M$ at a single point. If the $z$ axis is set parallel to the total precession field $\\vec B_\\mathrm{L}$, then the $xy$ (transverse) components of $\\vec M$ account for the precession. 
Assuming, for now, a static $\\vec B_\\mathrm L$, and omitting the decay for simplicity, the transverse magnetization $\\vec M_{xy} = \\vec M_{xy}(t)$ can be written as \n\\begin{align}\n\\vec M_{xy}(t) = M_{xy}&\\left[\\widehat{e}_x\\cos(\\omega t+\\phi_0)\n - \\widehat{e}_y\\sin(\\omega t+\\phi_0)\\right]\\,,\n\\end{align}\nwhere $\\omega = 2\\pi f_\\mathrm{L}$ is the precession angular frequency, $\\widehat{e}_\\heartsuit$ is the unit vector along the $\\heartsuit$ axis ($\\heartsuit = x, y, z$), and $\\phi_0$ is the initial phase, which sometimes contains useful information.\n\nIn an infinitesimal volume $dV$ at position $\\vec r$ in the sample, the magnetic dipole moment of protons in the volume is $\\vec M(\\vec r\\,)\\, dV$. It is straightforward to show that the rotating components of this magnetic dipole are seen by any magnetic field or flux sensor as a sinusoidal signal $d\\psi_{\\rm s} = |\\beta|\\cos(\\omega t+\\phi_0 + \\phi_\\mathrm s) M_{xy}\\,dV$. Here $|\\beta| = |\\beta(\\vec r\\,)|$ is the peak sensitivity of the sensor to a unit dipole at $\\vec r$ that precesses in the $xy$ plane, and $\\phi_{\\mathrm s} = \\phi_{\\mathrm s}(\\vec r\\,)$ is a phase shift depending on the relative positioning of the sensor and the dipole. 
To obtain the total sensor signal $\\psi_{\\rm s}$, $d\\psi_{\\rm s}$ is integrated over all space:\n\\begin{align}\\label{eqRSignal}\n&\\psi_{\\rm s}(t) = \\int |\\beta(\\vec r\\,)|M_{xy}(\\vec r\\,)\\cos \\phi(\\vec r, t)\\, d^3\\vec r\\,,\\\\\\nonumber\n&\\text{where }~\\phi(\\vec r, t) = \\int_0^t\\omega(\\vec r, t^\\prime) \\,dt^\\prime +\\phi_0(\\vec r\\,)+\\phi_{\\mathrm s}(\\vec r\\,)\\,.\n\\end{align}\nHere, we have noted that the magnetic field can vary in both space and time and therefore $\\omega = \\omega(\\vec r, t) = \\gamma B(\\vec r, t)$, where $\\gamma$ is the gyromagnetic ratio; $\\gamma \/2\\pi = 42.58$\\unit{MHz\/T} for a proton.\n\nFor convenience, the signal given by Eq.~\\eqref{eqRSignal} can be demodulated at the angular Larmor frequency $\\omega_0=2\\pi f_0$ corresponding to $B_0$; using the quadrature component of the phase sensitive detection as the imaginary part, one obtains a complex-valued signal\n\\begin{align}\\nonumber\n\\Psi(t) &= \\int |\\beta(\\vec r\\,)|M_{xy}(\\vec r\\,)e^{-i[\\phi(\\vec r, t)-\\omega_0 t]}\\,d^3\\vec r\\\\\\label{eqSignal}\n&= \\int \\beta^*(\\vec r\\,) m(\\vec r\\,)e^{-i\\int_0^t\\mathrm\\Delta\\omega(\\vec r,t^\\prime)\\, dt^\\prime}\\,d^3\\vec r\\,,\n\\end{align}\nwhere $^*$ denotes the complex conjugate, $m(\\vec r\\,) = M_{xy}(\\vec r\\,)e^{-i\\phi_0(\\vec r\\,)}$ is the \\emph{uniform-sensitivity image}, $\\mathrm\\Delta\\omega = \\omega-\\omega_0$, and we define\n\\begin{equation}\n\\beta(\\vec r\\,) = |\\beta(\\vec r\\,)|e^{i\\phi_{\\mathrm s}(\\vec r\\,)}\n\\end{equation} \nas the single-channel \\emph{complex sensitivity profile}. Besides geometry, $\\beta$ generally also depends on the direction of the precession field; $\\beta = \\beta_{\\vec B_{\\mathrm L}}(\\vec r\\,)$. \n\nAfter acquiring enough data of the form of Eq.~\\eqref{eqSignal}, the image can be reconstructed---in the simplest case using only one sensor, or using multiple sensors, each having its own sensitivity profile $\\beta$. 
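The demodulation step leading to Eq.~\eqref{eqSignal} can be sketched numerically. Below is a minimal 1-D illustration with invented frequencies, amplitudes and phases; the moving-average low-pass filter is our choice for the sketch, not a prescription from the text:

```python
import numpy as np

fs = 100_000.0            # sampling rate [Hz] (invented)
f0 = 4_000.0              # Larmor frequency corresponding to B0 (invented)
df = 35.0                 # encoding offset: the signal precesses at f0 + df
t = np.arange(0.0, 0.2, 1.0 / fs)

beta_abs, phi_s, m_abs, phi0 = 0.8, 0.6, 1.0, 0.25
phase = 2 * np.pi * (f0 + df) * t + phi0 + phi_s
psi = beta_abs * m_abs * np.cos(phase)   # real sensor signal psi_s(t)

# Demodulate at f0: shift to baseband, then low-pass with a moving
# average spanning an integer number of carrier periods to remove
# the component near 2*f0.
z = 2.0 * psi * np.exp(1j * 2 * np.pi * f0 * t)
w = int(fs / f0) * 10                    # 10 carrier periods
Psi = np.convolve(z, np.ones(w) / w, mode="same")
```

Away from the filter edges, `Psi` follows the complex signal of the text, $|\beta|\,m\,e^{-i(2\pi\,\mathrm{d}f\,t + \phi_0 + \phi_{\mathrm s})}$, up to the small passband attenuation of the moving average.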
As a simplified model for understanding image formation, ideal Fourier encoding turns Eq.~\\eqref{eqSignal} into the 3-D Fourier transform of the sensitivity-weighted complex image $\\beta^*m = (\\beta^*m)(\\vec r\\,)$. In reality, however, the inverse Fourier transform only provides an approximate reconstruction, and more sophisticated techniques should be used instead \\cite{Hsu2014}. \n\nHere, we do not assume a specific spatial encoding scheme. Notably, however, the sensitivity profile is indistinguishable from $m$ based on the signal [Eq.~\\eqref{eqSignal}]. In other words, the spatial variation of $\\beta^*$ affects the acquired data in the same way as a similar variation of the actual image would, regardless of the spatial encoding sequence in $\\mathrm\\Delta \\omega$.\n\nConsider a small voxel centered at $\\vec r$. The contribution of the voxel to the signal in Eq.~\\eqref{eqSignal} is proportional to an effective voxel volume $V$. Due to measurement noise, the voxel value becomes $V\\beta^* m + \\xi$, where $\\xi$ is a random complex noise term. If $\\beta$ is known, the intensity-corrected voxel of a real-valued image from a single sensor is given by\n\\begin{equation} \\label{eqVoxelIntensity}\n{\\rm Re}\\left(m(\\vec r\\,) + \\frac{\\xi}{V\\beta^*(\\vec r\\,)}\\right) = \n m(\\vec r\\,) + \\frac{{\\rm Re}\\left(\\xi e^{i\\phi_{\\mathrm s}}\\right)}{|s(\\vec r\\,)|}\\, ,\n \\end{equation}\nwhere $s(\\vec r\\,)=V\\beta^*(\\vec r\\,)$ is the sensitivity of the sensor to $m$ in the given voxel.\nAssuming that the distribution of $\\xi=|\\xi|e^{i\\phi_\\xi}$ is independent of the phase $\\phi_\\xi$, the standard deviation $\\sigma$ of ${\\rm Re}\\left(\\xi e^{i\\phi_{\\mathrm s}}\\right)$ is independent of $\\phi_{\\mathrm s}$ and proportional to $\\sigma_{\\rm s}$, the standard deviation of the noise in the relevant frequency band of the original sensor signal. 
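A small Monte Carlo sketch can illustrate both the intensity correction of Eq.~\eqref{eqVoxelIntensity} and the phase-independence of the resulting noise, assuming ideal Fourier encoding in 1-D; the image, sensitivity profile and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 64, 4000
x = np.arange(n)

# Invented 1-D image m and smooth complex sensitivity profile beta.
m = np.zeros(n)
m[20:40] = 1.0
beta = (1.0 + 0.5 * np.cos(2 * np.pi * x / n)) \
    * np.exp(1j * 0.3 * np.sin(2 * np.pi * x / n))

# Ideal Fourier encoding of the sensitivity-weighted image beta* m,
# plus circularly symmetric complex noise on each acquired sample.
clean = np.fft.fft(np.conj(beta) * m)
sigma = 0.5
noise = sigma * (rng.standard_normal((trials, n))
                 + 1j * rng.standard_normal((trials, n))) / np.sqrt(2)

# Reconstruct and intensity-correct: divide by beta*, take real part.
recon = (np.fft.ifft(clean + noise, axis=1) / np.conj(beta)).real

bias = recon.mean(axis=0) - m    # ~0: the correction recovers m
voxel_std = recon.std(axis=0)    # noise scales as 1/|beta|
```

Up to Monte Carlo error, `voxel_std * abs(beta)` is flat across voxels: after intensity correction, the noise depends only on the sensitivity magnitude $|s(\vec r\,)| \propto |\beta(\vec r\,)|$, not on the sensor phase $\phi_{\mathrm s}$.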
\n\nThe precision of a voxel value can be described by its (amplitude) SNR. The voxel SNR is defined as the correct voxel value $m(\\vec r\\,)$ divided by the standard deviation of the random error and can be written as\n\\begin{equation} \\label{eqSNR0}\n{\\rm SNR} = \\frac{m(\\vec r\\,)V|\\beta(\\vec r\\,)|}{\\sigma}\n\\propto\\frac{B_{\\rm p}V|\\beta(\\vec r\\,)|\\sqrt{T_{\\rm tot}}}{\\sigma_{\\rm s}}\\, ,\n\\end{equation}\nwhere the last expression incorporates that $m \\propto B_{\\rm p}$, and that $\\sigma$ is inversely proportional to the square root of the total signal acquisition time, which is proportional to the total MRI scanning time $T_{\\rm tot}$. It should be recognized, however, that $\\sigma$ also depends heavily on factors not visible in Eq.~\\eqref{eqSNR0}, such as the imaging sequence.\n\nUltimately, the ability to distinguish between different types of tissue depends on the {\\it contrast-to-noise ratio} (CNR), which can be defined as the SNR of the difference between image values corresponding to two tissues. A better CNR can be achieved by improving either the SNR or the contrast, both of which also depend strongly on the imaging sequence.\n\n\\subsection{SQUIDs, pickup coils and detection} \\label{ssPickups}\n\nSQUIDs are based on {\\it superconductivity}, the phenomenon where the electrical resistivity of a material completely vanishes below a critical temperature $T_{\\rm c}$ \\cite{SQUID-HB}. A commonly used material is niobium (Nb), which has $T_{\\rm c}=9.2\\,$K. It is usually cooled by immersion in a liquid helium bath that boils at $4.2\\,$K at atmospheric pressure. \n\nSQUIDs can be divided into two categories, rf and dc SQUIDs, of which the latter is typically used for biomagnetic signals as well as for ULF MRI \\cite{Lounasmaa2004, Roadmap2016}. The dc SQUID is a superconducting loop interrupted by two weak links, or Josephson junctions; see Fig.~\\ref{figSQUID}(a). 
With suitable shunting and biasing to set the electrical operating point, the current or voltage across the SQUID can be configured to exhibit an oscillatory dependence on the magnetic flux going through the loop---analogously to the well known double-slit interference of waves.\n\nA linear response to magnetic flux is obtained by operating the SQUID in a flux-locked loop (FLL), where an electronic control circuit aims to keep the flux constant by applying negative flux feedback via an additional feedback coil.\n\n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=0.95\\columnwidth]{squid.pdf}\n\t\\caption{Schematic (a) of a simple SQUID sensor and the flux-locked loop (more detail in Secs.~\\ref{ssIntrinsic} and \\ref{ssCorrEffect}), and (b--f) of different types of pickup coils. Pickup coil types are (b) magnetometer (M0), (c) planar first-order gradiometer (PG1), (d) axial first-order gradiometer (AG1), (e) axial second-order gradiometer (AG2), (f) planar gradiometer with a long baseline, and (g) a magnetometer and two planar gradiometers in a triple-sensor unit (M0, PG1$x$, PG1$y$).}\n\t\\label{figSQUID}\n\\end{figure}\n\nTo avoid harmful resonances and to achieve low noise, the SQUID loop itself is usually made small. The signal is coupled to it using a larger pickup coil connected to the SQUID via an input circuit to achieve high sensitivity. An input circuit may simply consist of a {\\it pickup coil} and an {\\it input coil} in series, forming a continuous superconducting path which, by physical nature, conserves the flux through itself, and feeds the SQUID according to the signal received by the pickup coil, as explained in Sec.~\\ref{ssIntrinsic} along with more sophisticated input circuits.\n\nDifferent types of responses to magnetic fields can be achieved by varying the pickup coil geometry. Fig.~\\ref{figSQUID}(b--g) schematically depicts some popular types. 
The simplest case is just a single loop, a {\\it magnetometer}, which in a homogeneous field responds linearly to the field component perpendicular to the plane of the loop (b). Two loops of the same size and orientation, but wound in opposite directions, can be used to form a {\\it gradiometer}. The resulting signal is that of one loop subtracted from that of the other. It can be used to approximate a derivative of the field component with respect to the direction in which the loops are displaced (by distance $b$, called the baseline). Typical examples are the planar gradiometer (c) and the axial gradiometer (d). By using more loops, one can measure higher-order derivatives. Some ULF-MRI implementations \\cite{Clarke2007,Zotev2007} use second-order axial gradiometers (e). If a source is close to one loop of a long-baseline gradiometer, that `pickup loop' can be thought of as a magnetometer, while the additional loops suppress noise from MRI coils or distant sources. However, adding loops also increases the inductance $L_\\mathrm p$. Before a more detailed theoretical discussion regarding $L_\\mathrm p$ and SQUID noise scaling, we study the detection of the MRI signal by the pickup coils. \n\n\\subsection{Sensitivity patterns and signal scaling}\n\nThe magnetic flux $\\Phi$ picked up by a coil made of a thin superconductor is given by the integral of the magnetic field $\\vec{B}$ over a surface $S$ bound by the coil path $\\partial S$,\n\\begin{equation} \\label{eqFlux}\n\\Phi = \\int_S \\vec{B}\\cdot d_{\\mathrm n}^2\\vec r = \\oint_{\\partial S} \\vec{A}\\cdot d\\vec r\\,.\n\\end{equation}\nHere, the line integral form was obtained by writing $\\vec{B}$ in terms of the vector potential $\\vec{A}$ as $\\vec{B} = \\nabla \\times \\vec{A}$, and applying Stokes's theorem.\n\nAs explained in Sec.~\\ref{ssULFMRI}, the signal in MRI arises from spinning magnetic dipoles. 
The quasi-static approximation holds well at signal frequencies, providing a vector potential for a dipole $\\vec{m}$ positioned at $\\vec r\\,'$ as $\\vec{A}(\\vec r\\,) = \\frac{\\mu}{4\\pi}\\frac{\\vec{m}\\times(\\vec r-\\vec r\\,^\\prime)}{|\\vec r-\\vec r\\,^\\prime|^3},$\nwhere $\\mu$ is the permeability of the medium, assumed to be that of vacuum; $\\mu = \\mu_0$. Substituting this into Eq.~\\eqref{eqFlux} and rearranging the resulting scalar triple product leads to\n\\begin{equation} \\label{eqLeadField}\n\\Phi = \\vec{m}\\cdot \\vec{B}_{\\rm s}(\\vec r\\,')\\,, \\;\\; \\vec{B}_{\\rm s}(\\vec r\\,') = \\frac{\\mu}{4\\pi}\\oint_{\\partial S} \\frac{d\\vec r \\times (\\vec r\\,'-\\vec r\\,)}{|\\vec r\\,'-\\vec r\\,|^3}\\,,\n\\end{equation}\nwhere the expression for the \\emph{sensor field} $\\vec{B}_{\\rm s}$ is the Biot--Savart formula for the magnetic field at $\\vec r\\,'$ caused by a hypothetical unit current in the pickup coil, as required by reciprocity.\n\nThe sensor field $\\vec B_\\mathrm s$ is closely related to the complex sensitivity pattern $\\beta$ introduced in Sec.~\\ref{ssULFMRI}. In an applied field $\\vec{B}_\\mathrm L = B_\\mathrm L\\widehat e_z$, the magnetization precesses in the $xy$ plane, and $\\beta$ can in fact be written as\n\\begin{equation} \\label{eqBeta0}\n\t\\beta(\\vec r\\,) = \\vec B_\\mathrm s(\\vec r\\,) \\cdot \\left(\\widehat e_x + i \\,\\widehat e_y\\right)\\,.\n\\end{equation}\nFor arbitrary $\\vec B = B_\\mathrm L\\widehat e_\\mathrm L$, we have\n\\begin{equation} \\label{eqBetaNorm}\n |\\beta_{\\vec B} (\\vec r\\,)| = \\sqrt{|\\vec B_\\mathrm s(\\vec r\\,)|^2 - [\\vec B_\\mathrm s (\\vec r\\,) \\cdot\\widehat e_\\mathrm L]^2}\\,.\n\\end{equation}\n\n\nWe choose to define the measured signal as the \\emph{flux} through the pickup coil---a convention that appears throughout this paper. The measurement noise is considered accordingly, as flux noise. 
This contrasts with looking at magnetic-field signals and noise, as is often seen in the literature. Working with magnetic flux signals allows for direct comparison of different pickup coil types. Moreover, the approximation that magnetometer and gradiometer pickups respond to the field and its derivatives, respectively, is not always valid.\n\nThe signal often scales as simple power laws $R^\\alpha$ with the pickup coil size $R$ (or radius, for circular coils). When the distance $l$ from the coil to the signal source is large compared to $R$, a magnetometer sees a flux $\\Phi\\propto BR^2$, giving an \\emph{amplitude scaling exponent} $\\alpha=2$. When scaling a gradiometer, however, the baseline $b$ also scales in proportion to $R$. This leads to $\\alpha=3$ for a first-order gradiometer, or $\\alpha=2+k$ for one of $k^{\\rm th}$ order. Conversely, the signal scales with the distance as $l^{-\\alpha-1}$, as is verified by writing the explicit forms of the field and its derivatives. The additional $-1$ in the exponent reflects the dipolar nature of the measured field ($-2$ for quadrupoles etc.).\n\nFor some cases, the detected flux can be calculated analytically using Eq.~\\eqref{eqLeadField}. First, as a simple example, consider a dipole at the origin, and a circular magnetometer pickup loop of radius $R$ parallel to the $xy$ plane at $z=l$, centered on the $z$ axis. The integral in Eq.~\\eqref{eqLeadField} is easily evaluated in cylindrical coordinates to give\n\\begin{equation}\\label{eqCircleLead}\n\\vec{B}_{\\rm s} = B_{\\rm s}\\widehat{e}_z \n= \\frac{\\mu R^2}{2(R^2+l^2)^\\frac{3}{2}}\\widehat{e}_z\\,.\n\\end{equation}\nIf the dipole precesses in, for instance, the $xz$ plane, the corresponding sensitivity is $|\\beta| = B_{\\rm s}$. Instead, if precession takes place in the $xy$ plane, the sensitivity vanishes; $|\\beta|=0$, and no signal is received. In this case, moving the pickup loop away from the $z$ axis would cause a signal to appear. 
These extreme cases show that even the absolute value of a single-channel sensitivity is strongly dependent on the sensor orientation with respect to the source and the magnetic field, as is also seen in Fig.~\\ref{figSensContours}.\n\n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=0.90\\columnwidth]{contours3d2.png}\n\t\\caption{Isosurfaces of sensitivity patterns $|\\beta(\\vec r\\,)|$ inside a helmet array for two of the magnetometer loops marked in red. The arrow depicts the direction of the precession field $\\vec B_\\mathrm L$ during readout ({\\it e.g.}\\ $\\vec{B}_0$). Note that, because of the precession plane, there are insensitive directions (``blind angles'') in the profiles, depending on the relative orientation of $\\vec B_\\mathrm L$.}\n\t\\label{figSensContours}\n\\end{figure}\n\n\nAnother notable property of the sensitivity $|\\beta|=B_{\\rm s}$ from Eq.~\\eqref{eqCircleLead} is that if $l$ is fixed, there is a value of $R$ above which the sensitivity starts to decrease, {\\it i.e.}, part of the flux going through the loop comes back at the edges canceling a portion of the signal. By requiring $\\partial B_{\\rm s}\/\\partial R$ to vanish, one obtains $R=l\\sqrt{2}$, the loop radius that gives the maximum signal. Interestingly, however, if instead of the perpendicular ($z$) distance, $l$ is taken as the closest distance to the pickup-coil winding, then the coil is on a spherical surface of radius $R_\\mathrm a = l$. Now, based on Pythagoras's theorem, $R^2 + l^2$ in Eq.~\\eqref{eqCircleLead} is replaced with $l^2$. In other words, the sensor field is simply $\\vec B_\\mathrm{s} = \\widehat e_z \\,\\mu R^2\/2l^3$, so the scaling of $\\alpha = 2$ happens to be the same as for distant sources in this simple case. \n\nImportantly, however, the \\emph{noise} mechanisms also depend on $R$, and moreover, the situation is complicated by the presence of multiple sensors. 
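The optimal radius $R = l\\sqrt{2}$ obtained above from the stationarity condition can also be confirmed by a brute-force numerical maximization of Eq.~\\eqref{eqCircleLead}; a small sketch, with an arbitrary illustrative $l$:

```python
import numpy as np

# Maximize B_s(R) = mu R^2 / (2 (R^2 + l^2)^(3/2)) over the loop radius R
# at fixed perpendicular distance l, as in the text.
MU0 = 4e-7 * np.pi
l = 0.05                                   # illustrative distance [m]
R = np.linspace(1e-4, 10 * l, 200001)
Bs = MU0 * R**2 / (2.0 * (R**2 + l**2) ** 1.5)
R_opt = R[np.argmax(Bs)]
assert abs(R_opt - l * np.sqrt(2.0)) < 1e-3 * l

# With the closest winding distance fixed instead (coil on a sphere of
# radius l), B_s = mu R^2 / (2 l^3) simply grows as R^2 (alpha = 2),
# with no interior maximum.
```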
These matters are discussed in Secs.~\\ref{sSingleSensor}--\\ref{sArrays}.\n\n\n\n\n\\section{Noise mechanisms and scaling} \\label{sSingleSensor}\n\nThe signal from each measurement channel, corresponding to a pickup coil in the sensor array, contains flux noise that can originate from various sources. Examples of noise sources are the sensor itself, noise in electronics that drives MRI coils, cryostat noise, magnetic noise due to thermal motion of particles in other parts of the measurement device and in the sample, noise from other sensors, as well as environmental noise. This section is devoted to examining the various noise mechanisms and how the noise can be dealt with. Unless stated otherwise, noise is considered a random signal with zero average. We use amplitude scaling exponents $\\alpha$ to characterize the dependence of noise on pickup-coil size and type. \n\n\\subsection{Flux coupling and SQUID noise} \\label{ssIntrinsic}\n\nFor estimates of SQUID sensor noise as a function of pickup coil size, a model for the sensor is needed. As explained in Sec.~\\ref{ssPickups}, the signal is coupled into the SQUID loop via an input circuit. \nIn general, the input circuit may consist of a sequence of one or more all-superconductor closed circuits connected by intermediate transformers. Via inductance matching and coupling optimization, these circuits are designed to efficiently couple the flux signal into the SQUID loop. \n\n\n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=\\columnwidth]{inputcircuit.pdf}\n\t\\caption{Simplified schematic of a superconducting SQUID input circuit. Zero or more intermediate transformers (dashed box) may be present.}\n\t\\label{figInputCircuit}\n\\end{figure}\nIntermediate transformers can be useful for optimal coupling of a large pickup coil to a SQUID-coupled input coil, as analyzed {\\it e.g.} in Ref.~\\cite{Mates2014}. 
To further understand the concept, consider a two-stage input circuit where a pickup coil ($L_\\mathrm p$) is connected to a transmitting inductor $L_1$ to form a closed superconducting path; see Fig.~\\ref{figInputCircuit}. Ideally, the distance between the two coils is fairly small in order to avoid signal loss due to parasitic inductances of the connecting traces or wiring. The total inductance of this flux-coupling circuit by itself is $L_\\mathrm p + L_1$. The primary is coupled to a secondary inductor $L_2$ with mutual inductance $M_{12}$. As the magnetic flux picked up in $L_\\mathrm p$ changes by $\\mathrm\\Delta\\Phi_{\\mathrm p}$, there is a corresponding change $\\mathrm\\Delta J_1$ in the supercurrent flowing in the circuit such that the flux through the closed path remains constant. This passes the flux signal onwards to $L_2$ which forms another flux-transfer circuit together with the input coil $L_\\mathrm i$, which couples inductively into the SQUID.\n\nSuperconductivity has two important effects on the transmission of flux into the next circuit. First, the presence of superconducting material close to a coil tends to reduce the coil inductance because of the Meissner effect: the magnetic flux is expelled and the material acts as a perfect diamagnet. This effect is included in the given inductances $L_\\mathrm{p}$ and $L_1$. The other effect emerges when the flux is transmitted into another closed superconducting circuit, such as via $M_{12}$. This is because the transmitting coil is subject to the counteracting flux $M_{12}^2 \\mathrm\\Delta J_1\/(L_2 + L_\\mathrm{i})$ from the receiving coil of the other circuit. Now current $\\mathrm\\Delta J_1$ only generates a flux $[L_1 - M_{12}^2\/(L_2 + L_\\mathrm{i})]\\mathrm\\Delta J_1$ in $L_1$. 
Closing the secondary circuit thus changes the inductance from $L_1$ to \n\\begin{equation}\nL_1^\\prime = L_1 - \\frac{M_{12}^2}{L_2 + L_\\mathrm{i}} = L_1\\left(1 - \\frac{k_{12}^2}{1 + L_\\mathrm{i}\/L_2}\\right)\\,, \n\\end{equation}\nwhere the last form is obtained by expressing the mutual inductance in terms of the coupling constant $k_{12}$ ($|k_{12}|<1$) as $M_{12} = k_{12}\\sqrt{L_1L_2}$. Note that we do not include a counteracting flux from the SQUID inductance $L_\\mathrm{S}$ back into $L_\\mathrm{i}$, {\\it i.e.}, no screening from the biased SQUID loop. However, like other inductances, $L_\\mathrm{i}$ does include the effect of the presence of the nearby superconductors through the Meissner effect.\n\n\nThe change of flux through the dc SQUID loop is now obtained as\n\\begin{align}\n\\mathrm\\Delta \\Phi_\\mathrm{S} &= M_\\mathrm{iS}\\mathrm\\Delta J_2 = \\frac{M_\\mathrm{iS}M_{12}}{L_2 + L_\\mathrm{i}}\\mathrm\\Delta J_1 \\\\&= \\frac{M_\\mathrm{iS}M_{12}}{(L_2 + L_\\mathrm{i})(L_\\mathrm{p} + L_1) - M_{12}^2} \\mathrm\\Delta\\Phi_\\mathrm{p} \\, ,\n\\end{align} \nor, with $M_\\mathrm{iS} = k_\\mathrm{iS}\\sqrt{L_\\mathrm{i}L_\\mathrm{S}}$ and defining $\\chi_1$ and $\\chi_2$ such that $L_1 = \\chi_1 L_\\mathrm{p}$ and $L_2 = \\chi_2 L_\\mathrm{i}$, we have\n\\begin{equation}\\label{eqSQUIDFluxChi}\n\\frac{\\mathrm\\Delta \\Phi_\\mathrm{S}}{\\mathrm\\Delta \\Phi_\\mathrm{p}} = \\frac{k_\\mathrm{iS}\\sqrt{L_\\mathrm{S}}}{\\sqrt{L_\\mathrm{p}}}\\times \\frac{k_{12}\\sqrt{\\chi_1\\chi_2} }{\\chi_1\\chi_2(1 - k_{12}^2) + \\chi_1 + \\chi_2 + 1} \\, .\n\\end{equation}\n\nFor a given pickup coil, $\\chi_1$ and $\\chi_2$ can usually be chosen to maximize the flux seen by the SQUID. While the function in Eq.~\\eqref{eqSQUIDFluxChi} is monotonic in $k_{12}$, there is a single maximum with respect to parameters $\\chi_1,\\chi_2 > 0$. 
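The existence and location of this single maximum can be illustrated with a brute-force grid search over the second factor of Eq.~\\eqref{eqSQUIDFluxChi}; a sketch, with $k_{12} = 0.8$ as an arbitrary example value:

```python
import numpy as np

# Grid-search the flux-transfer factor of Eq. (eqSQUIDFluxChi),
#   g(chi1, chi2) = k12*sqrt(chi1*chi2)
#                   / (chi1*chi2*(1 - k12^2) + chi1 + chi2 + 1),
# over chi1, chi2 > 0 to locate its single maximum numerically.
k12 = 0.8
chi = np.logspace(-2, 2, 1200)
c1, c2 = np.meshgrid(chi, chi, indexing="ij")
g = k12 * np.sqrt(c1 * c2) / (c1 * c2 * (1.0 - k12**2) + c1 + c2 + 1.0)

i, j = np.unravel_index(np.argmax(g), g.shape)
chi_opt = 1.0 / np.sqrt(1.0 - k12**2)     # symmetric optimum chi1 = chi2
assert abs(c1[i, j] - chi_opt) < 0.05 * chi_opt
assert abs(c2[i, j] - chi_opt) < 0.05 * chi_opt

# Optimal factor k12 / (2*(1 + sqrt(1 - k12^2))): at k12 = 0.8 this is
# exactly 0.25, i.e. half of the k12 -> 1 limit of 1/2.
g_opt = k12 / (2.0 * (1.0 + np.sqrt(1.0 - k12**2)))
assert abs(g.max() - g_opt) < 1e-3 * g_opt
assert abs(g_opt - 0.25) < 1e-12
```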
Noting the symmetry, we must have $\\chi_1 = \\chi_2 =: \\chi$, and the factor in Eq.~\\eqref{eqSQUIDFluxChi} becomes $k_{12}\\chi\/[\\chi^2(1-k_{12}^2) + 2\\chi + 1]$, which is maximized at $\\chi = 1\/\\sqrt{1-k_{12}^2}$. At the optimum, the coupled flux is given by\n\\begin{equation}\n\\frac{\\mathrm\\Delta \\Phi_\\mathrm{S}}{\\mathrm\\Delta \\Phi_\\mathrm{p}} = \n\\frac{k_\\mathrm{iS}k_{12}\\sqrt{L_\\mathrm{S}}}{2\\sqrt{L_\\mathrm{p}}\\left(1 + \\sqrt{1-k_{12}^2}\\right)} \\underset{k_{12} \\rightarrow 1^-}{\\longrightarrow}\n\\frac{k_\\mathrm{iS}}{2}\\sqrt\\frac{L_\\mathrm{S}}{L_\\mathrm{p}}\n\\,.\n\\end{equation}\nNotably, for $k_{12} \\approx 1$, the coupling corresponds to a perfectly matched single flux-coupling circuit \\cite{SQUID-HB}. Already at $k_{12} = 0.8$, 50\\% of the theoretical maximum is achieved, while matching without an intermediate transformer may cause practical difficulties or parasitic resonances.\n\nWhen referred to SQUID flux $\\Phi_\\mathrm{S}$, the noise in the measured SQUID voltage in the flux-locked loop corresponds to a noise spectral density $S_{\\Phi_{\\rm S}}(f)$ at frequency $f$. As the signal transfer from the pickup coil to the SQUID is given by Eq.~\\eqref{eqSQUIDFluxChi}, the equivalent flux resolution referred to the signal through the pickup coil can be written as\n\\begin{equation} \\label{eqNoiseSDens1}\nS_{\\Phi_{\\rm p}}^{1\/2}(f) = \\frac{2\\sqrt{L_{\\rm p}}\\left(1 + \\sqrt{1-k_{12}^2}\\right)}{k_\\mathrm{iS}k_{12}\\sqrt{L_\\mathrm{S}}}S_{\\Phi_{\\rm S}}^{1\/2}(f)\\,.\n\\end{equation}\nDue to resonance effects and thermal flux jumps, $L_\\mathrm{S}$ needs to be kept small \\cite{SQUID-HB}. The flexibility of intermediate transformers allows the same model to estimate noise levels with a wide range of pickup coil inductances $L_\\mathrm{p}$.\n\nIn general, the inductance of a coil with a given shape scales as the linear dimensions, or radius $R$, of the coil. 
If the wire thickness is not scaled accordingly, there will be an extra logarithmic term \\cite{Grover1973}. Even then, over a sufficiently small range, the dependence is roughly $S_{\\Phi_{\\rm p}}^{1\/2} \\propto R^\\alpha$ with $\\alpha = 1\/2$. For a magnetometer loop in a homogeneous field, the field resolution $S_B^{1\/2}(f)$ is then still proportional to $R^{-3\/2}$.\n\n\n\\subsection{Thermal magnetic noise from conductors} \\label{ssThermalNoise}\n\nElectric noise due to the thermal motion of charge carriers in a conducting medium is called Johnson--Nyquist noise \\cite{Nyquist1928, Johnson1928}. According to Amp\\`ere's law $\\nabla \\times \\vec B = \\mu_0 \\vec J$, the noise currents in the current density $\\vec J$ also produce a magnetic field which may interfere with the measurement. In this view, devices should be designed in such a way that the amount of conducting materials in the vicinity of the sensors is small. However, there is a lower limit set by the conducting sample---the head. Estimates of the sample noise \\cite{Myers2007} have given noise levels below $0.1\\,{\\rm fT}\/\\sqrt{\\rm Hz}$, consistent with a recent experimental result of $55\\,{\\rm aT}\/\\sqrt{\\rm Hz}$ \\cite{Storm2019}. Other noise sources still exceed those values by more than an order of magnitude. More restrictively, it is difficult to avoid metals in most applications. \n\n\nTo keep the SQUID sensors in the superconducting state, the array is kept in a helmet-bottom cryostat filled with liquid helium at $4.2\\,$K. The thermal superinsulation of a cryostat usually involves a vacuum as well as layers of aluminized film to suppress heat transfer by radiation \\cite{SQUID-HB}. The magnetic noise from the superinsulation can be reduced by breaking the conducting materials into small isolated patches. Seton {\\it et al.} \\cite{Seton2005} used aluminium-coated polyester textile, which efficiently breaks up current paths in all directions. 
By using very small patches, one can decrease the field noise at the sensors by orders of magnitude, although with increased He boil-off \\cite{Tervo2016MSc}.\n\n\nTo look at the thermal noise from the insulation layers in some more detail, consider first a thin slab with conductivity $\\sigma$ on the $xy$ plane at temperature $T$. Johnson--Nyquist currents in the conductor produce a magnetic field $\\vec{B}(x,y,z,t)$ outside the film. For an infinite (large) slab, the magnitude of the resulting field noise depends, besides the frequency, only on $z$, the distance from the slab (assume $z>0$). At low frequencies, the spectral densities $S_{B_\\alpha}$ ($\\alpha = x,y,z$) corresponding to Cartesian field noise components are then given by \\cite{Varpula1984}\n\\begin{equation} \\label{eqBz1}\nS_{B_z}^{1\/2} = \\sqrt{2} S_{B_x}^{1\/2} = \\sqrt{2} S_{B_y}^{1\/2} =\\frac{\\mu}{2}\\sqrt{ \\frac{k_{\\rm B}T}{2\\pi} \\frac{\\sigma d}{z(z+d)}}\\,, \n\\end{equation}\nwhere $d$ \nis the thickness of the slab and $k_{\\rm B}$ the Boltzmann constant.\n\nThe infinite slab is a good approximation when using a flat-bottom cryostat or when the radius of curvature of the cryostat wall is large compared to individual pickup loops. Consider a magnetometer pickup loop with area $A$ placed parallel to the conducting films in the insulation---to measure the $z$ component of the magnetic field, $B_z$. The coupled noise flux is the integral of $B_z$ over the loop area. If the loop is small, the noise couples to the pickup circuit as $S_{\\Phi}^{1\/2} = S_{B_z}^{1\/2}A$. A coil of size $R$ then sees a flux noise proportional to $S_{B_z}^{1\/2}R^2$, that is, $\\alpha = 2$. \n\nInstead, if the pickup coil is large, the situation is quite different. The instantaneous magnetic field depends on all coordinates and varies significantly over the large coil area. Consider the noise field at two points in the plane of the coil. 
The fields at the two points are nearly equal if the points are close to each other. However, if the points are separated by a distance larger than a correlation length $\\lambda_{\\rm c}(z)$, the fields are uncorrelated. Therefore, if $R \\gg \\lambda_{\\rm c}$, the coupled flux is roughly a sum of $A\/\\lambda_{\\rm c}^2$ uncorrelated terms from regions in which the field is correlated. Each term has a standard deviation of order $S_{B_z}^{1\/2}\\lambda_{\\rm c}^2$. The spectral density of the cryostat noise is then\n\\begin{equation} \\label{eqSPhiSlab}\nS_{\\Phi,\\rm c}(f) \\approx A S_{B_z}(\\vec r, f)\\lambda_{\\rm c}^2(\\vec r\\,)\\,.\n\\end{equation}\nMost importantly, the flux noise amplitude $S_{\\Phi,\\rm c}^{1\/2}$ is now directly proportional to the coil size $R$, and we have $\\alpha=1$. Still, this noise grows with a higher power of $R$ than the sensor noise, which according to Sec.~\\ref{ssIntrinsic} scales as $\\sqrt{R}$ and hence dominates for small pickup coils.\n\nFor a continuous film, the correlation length $\\lambda_{\\rm c}$ can be estimated from data in Ref.~\\cite{Nenonen96} to be around several times $z$. The correlation at distances smaller than $\\lambda_{\\rm c}$ has two causes. First, the magnetic field due to a small current element in the conductor is spread in space according to the Biot--Savart law. Second, the noise currents in elements close to each other are themselves correlated. The latter effect is suppressed when the film is divided into small patches; only very small current loops can occur, and the noise field starts to resemble that of Gaussian uncorrelated magnetic point dipoles throughout the surface. In this case, Eq.~\\eqref{eqBz1} is no longer valid, but the approximate relation of Eq.~\\eqref{eqSPhiSlab} still holds---now with a smaller $\\lambda_\\mathrm{c}$. 
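The crossover between the two scaling regimes behind Eq.~\\eqref{eqSPhiSlab} can be checked deterministically. For a noise field with an assumed Gaussian spatial correlation function, the flux variance over a disc is a double sum of pair correlations, which an FFT convolution evaluates efficiently; a sketch, with grid units and correlation length chosen arbitrarily:

```python
import numpy as np

# Flux variance over a disc of radius R for a noise field with Gaussian
# correlation C(d) = exp(-d^2 / (2 lc^2)):
#   Var(Phi) = sum_{i,j in disc} C(|r_i - r_j|),
# evaluated via an FFT-based convolution on a grid.
N, lc = 512, 12.0
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
corr = np.exp(-(x**2 + y**2) / (2.0 * lc**2))

def flux_std(R):
    mask = (x**2 + y**2 <= R**2).astype(float)
    conv = np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(corr)) *
                        np.fft.fft2(mask)).real
    return np.sqrt((mask * conv).sum())

def alpha(R1, R2):
    """Effective scaling exponent of flux noise between radii R1 and R2."""
    return np.log(flux_std(R2) / flux_std(R1)) / np.log(R2 / R1)

# Small coils (R << lc): fully correlated field, flux noise ~ R^2.
# Large coils (R >> lc): ~ A/lc^2 uncorrelated patches, flux noise ~ R.
assert 1.6 < alpha(3, 6) < 2.05
assert 0.8 < alpha(60, 120) < 1.25
```

The fitted exponents approach $\\alpha = 2$ well below the correlation length and $\\alpha = 1$ well above it, matching the argument in the text.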
\n\nThe magnetometer case is easily extended to first-order planar gradiometers parallel to the superinsulation layers [Fig.~\\ref{figSQUID}($\\mathrm b, \\mathrm f$)]. For a very small baseline, $b\\ll\\lambda_c$, the field noise is effectively homogeneous and thus cancels out. However, when $b\\gg\\lambda_c$, the spectral density of the noise power is twice that of a single loop. \n\n\n\n\n\\subsection{MRI electronics, coils and other noise sources}\n\nAs explained in Sec.~\\ref{ssULFMRI}, MRI makes heavy use of applied magnetic fields. The fields are generated with dedicated current sources, or amplifiers, to feed currents into coils wound in different geometries. As opposed to applying static fields, a major challenge arises from the need for oscillating pulses and the desire to quickly switch on and off all fields, including not only readout gradients but also the main field $\\vec B_0$, which requires an ultra-high dynamic range to avoid excess noise. Switching of $\\vec B_0$ enables full 3-D field mapping for imaging of small electric currents in volume \\cite{Zevenhoven2014amp}. Noise in the coil currents can be a major concern in the instrumentation. The contribution from $\\vec B_0$ ideally scales with pickup coil size as $R^\\alpha$, $\\alpha=2$ for a magnetometer, and noise in linear gradients essentially scales as $\\alpha=2$ in magnetometers as well as fixed-baseline gradiometers. With $b \\propto R$, first-order gradiometers experience noise from linear gradient coils according to $\\alpha=3$.\n\nMRI coils themselves also produce Johnson--Nyquist noise. In particular, the polarizing coil is often close to the sensors and made of thick wires as it should be able to produce relatively high fields. This allows thermal electrons to form current loops that generate field noise with complicated spatial characteristics, which is detrimental to image quality and should be eliminated. 
Another approach is to use litz wire, which is composed of thin wires individually coated with an insulating layer. This prevents significant noise currents perpendicular to the wire and eliminates large current loops. However, efficient uniform cooling of litz wire is problematic, leading to larger coil diameters. Increasing the coil size, in turn, significantly increases harmful transients in the system as well as the power and cooling requirements \\cite{Zevenhoven2011MSc}. Instead, we have had promising results with thin custom-made superconducting filament wire and DynaCAN (Dynamical Coupling for Additional dimeNsions) in-sequence degaussing waveforms to solve the problem of trapped flux \\cite{Zevenhoven2011MSc, Zevenhoven2013degauss}; optimized oscillations at the end of a pulse can expel the flux from the superconductor. Such coils contain much less metal, and significantly reduce the size of current loops that can generate magnetic noise.\n\nA significant amount of noise also originates from more distant locations. Power lines and electric devices, for instance, are sources that often cannot be removed. For this reason, magnetically shielded rooms (MSRs) are used to attenuate such magnetic interference. However, pulsed magnetic fields inside the shielded room induce eddy currents exceeding $1\\,$kA in conductive MSR walls \\cite{Zevenhoven2014eddy}, leading to strong magnetic field transients that not only saturate the SQUID readout, but also seriously interfere with the nuclear spin dynamics in the imaging field of view. Even a serious eddy current problem can again be solved with a DynaCAN approach where optimized current waveforms are applied in additional coil windings to couple to the complexity of the transient \\cite{Zevenhoven2015}.\n\nNoise from distant sources typically scales with the pickup coil size with an exponent at least as large as the signal from far-away sources: $\\alpha=2+k$ for a $k^{\\rm th}$-order gradiometer (see Sec.~\\ref{ssPickups}). 
Although the noise detected by gradiometers scales with a higher power of $R$ than for magnetometers ($k=0$), gradiometers have the advantage that they, in principle, do not respond to a uniform field. For a higher-order gradiometer that is not too large, the environmental noise is nearly uniform in space, and therefore effectively suppressed by the pickup coil geometry. Gradiometers with relatively long baselines can also be seen as magnetometers when the source is close to one of the loops. Still, they function as gradiometers from the perspective of distant noise sources. A similar result applies to so-called software gradiometers, which can, for example, be formed by taking the difference of the signals of two parallel magnetometers in post-processing. However, in Sec.~\\ref{ssArrayNoise}, a more sophisticated technique is described for minimizing noise in the combination of multiple channels.\n\nAt very low system noise levels, other significant noise mechanisms include noise due to dielectric losses. Electrical activity in the brain can also be seen as a source of noise. This noise, however, is strongest at frequencies well below $1\\,$kHz. Using Larmor frequencies in the kHz range may therefore be sufficient for spectral separation of brain noise from MRI.\n\nThe amplitude scaling exponents $\\alpha$ for signal and noise are summarized in Table \\ref{tabExponents}. Later sections refer to the scaling of the flux signal and noise in terms of $\\alpha_\\mathrm s$ and $\\alpha_\\mathrm n$, respectively. 
For a single sensor, the SNR scaling $R^\\delta$ is given by $\\delta = \\alpha_\\mathrm s - \\alpha_\\mathrm n$.\n\n\n\n\n\n\\begin{table}\n\\caption{Amplitude scaling exponents $\\alpha$ for the flux noise standard deviation $\\sigma \\propto R^\\alpha$ as well as the signal, given different pickup-coil geometries and noise mechanisms.}\\label{tabExponents}\\vspace{5mm}\n\t\\centering\n\t\\begin{tabular}{l@{$\\;\\;$}c@{$\\;\\;$}c@{$\\;\\;$}c}\n\tPickup type (see Fig. 1) $\\rightarrow$ & M0 & AG$k$ & PG$k$\\\\\n\t\\hline\n\tSensor noise (optimally matched) & 1\/2 & 1\/2 & 1\/2 \\\\\n Sensor noise (unmatched, large $L_\\mathrm p$) & 1 & 1 & 1 \\\\\n\tDistant source, $b \\propto R$ & 2 & $2 + k$ & $2 + k$ \\\\\n\tDistant source, $b$ fixed & 2 & 2 & -- \\\\\n $\\vec B_0$ amplifier & 2 & $0^*$ & $0^*$ \\\\\n Gradient amplifiers, $b \\propto R$, $k \\le 1$ & 2 & 3 & 3 \\\\\n Gradient amplifiers, $b$ fixed & 2 & 2 & -- \\\\\n\tCryostat noise, small $R$ & 2 & 2 & $2 + k$ \\\\\n\tCryostat noise, large $R$ & 1 & 1 & 1 \\\\\n\t\\hline\n\t\\end{tabular}\n \\\\\\vspace{1mm}$^*$ Larger in practice, because of gradiometer \\\\imbalance and field inhomogeneities.\n\\end{table}\n\n\n\n\\section{Sensor arrays} \\label{sArrays}\n\n\n\n\n\n\n\\subsection{Combining data from multiple channels} \\label{ssArrayNoise}\n\nIt is common to work with absolute values of the complex images to eliminate phase shifts. Images from multiple channels can then be combined by summing the squares and taking the square root. This procedure, however, causes asymmetry in the noise distribution and loses information that can be used for improved combination of the data. If the sensor array and the correlations of noise between different sensors are known, the multi-channel data can be combined more effectively. 
\n\nIn the following, we show that, just as multiple sensors can form a software gradiometer, an array of $N$ sensors can form an $N^\\text{th}$-order combination optimized to give the best SNR for each voxel.\n\nFollowing the derivation in Ref.~\\cite{Zevenhoven2011}, consider a voxel centered at $\\vec r$, and $N$ sensors indexed by $j = 1,2, ...,N$. Based on Sec.~\\ref{ssULFMRI}, each sensor has a unit magnetization image $s_j(\\vec r\\,) = \\beta_j^*(\\vec r\\,)V$, where $\\beta_j$ and $V$ are the sensitivity profile and voxel volume, respectively. The absolute value $|s_j|$ gives the sensed signal amplitude caused by a unit magnetization in the voxel, precessing perpendicular to $\\vec B_\\mathrm L$. The complex phase represents the phase shift in the signal due to the geometry. To study the performance of the array only, we set $V$ to unity.\n\nFor a voxel centered at $\\vec r$, we have a vector of reconstructed image values ${\\bf v} = [v_1,v_2, ...,v_N]^\\top $ corresponding to the $N$ sensors. At this point, the values $v_j$ have not been corrected according to the sensitivity. The linear combination that determines the final voxel value $u$ can be written in the form\n\\begin{equation}\\label{eqVoxelLinComb}\nu = \\sum_{j=1}^{N} a_j^*v_j = {\\bf a}^\\dagger {\\bf v}\\,,\n\\end{equation}\nwhere $^\\dagger $ denotes the conjugate transpose. Requiring that the outcome is sensitivity-corrected sets a condition on the coefficient vector ${\\bf a} = [ a_1, ...,a_N]^\\top $. In the absence of noise, a unit source magnetization gives $v_j = s_j(\\vec r\\,)$. 
The final voxel value $u$ should represent the source, which leads to the condition\n\\begin{equation} \\label{eqConstr}\n{\\bf a}^\\dagger {\\bf s} = 1\\,.\n\\end{equation}\nBelow, we show how ${\\bf a} = [ a_1, ..., a_N]^\\top $ should be chosen in order to maximize the SNR in the final image given the sensor array and noise properties.\n\nThe single-sensor image values $v_j$ can be written in the form $v_j = w_j + \\xi_j$ where $w_j$ is the `pure' signal and $\\xi_j$ is the noise. The noise terms $\\xi_j$ can be modeled as random variables, which, for unbiased data, have zero expectation: ${\\rm E}(\\xi_j)=0$. If there is a bias, it can be measured and subtracted from the signals before this step. The expectation of the final value of this voxel is then\n\\begin{equation}\\label{eqExpectU}\n{\\rm E}(u) = {\\rm E}\\left[{\\bf a}^\\dagger ({\\bf w + \\boldsymbol \\xi})\\right] = {\\bf a}^\\dagger {\\bf w}\\,.\n\\end{equation}\nThe noise in the voxel is quantified by the variance of $u$. Eqs. \\eqref{eqVoxelLinComb} and \\eqref{eqExpectU} yield $u = {\\rm E}(u) + {\\bf a}^\\dagger \\boldsymbol\\xi$, leading to\n\\begin{equation} \\label{eqVaru}\n{\\rm Var}(u) = {\\rm E}\\left[|u-{\\rm E}(u)|^2\\right] = {\\rm E}\\left[{\\bf a}^\\dagger \\boldsymbol{\\xi}\\boldsymbol{\\xi}^\\dagger {\\bf a}\\right] = {\\bf a}^\\dagger {\\mathbf\\Sigma}{\\bf a}\\,,\n\\end{equation}\nwhere ${\\mathbf\\Sigma} = {\\rm E}(\\boldsymbol \\xi \\boldsymbol\\xi^\\dagger )$ is identified as the noise covariance matrix.\nFor simple cases, ${\\mathbf\\Sigma}$ is the same for all voxels. However, it may vary between voxels if, for instance, the voxels are of different sizes.\n\nNow, the task is to minimize the noise ${\\bf a}^\\dagger {\\mathbf\\Sigma}{\\bf a}$ subject to the constraint in Eq.~\\eqref{eqConstr}. 
The Lagrange multiplier method turns the problem into finding the minimum of\n\\begin{equation} \\label{eqLagrange}\nL = {\\bf a}^\\dagger {\\mathbf\\Sigma}{\\bf a} - \\lambda(1-{\\bf a}^\\dagger {\\bf s})\n\\end{equation}\nwith respect to ${\\bf a}$, while still requiring that Eq.~\\eqref{eqConstr} holds. From the constraint it follows that ${\\bf a}^\\dagger {\\bf s}$ is real, so it may be replaced by $({\\bf a}^\\dagger {\\bf s}+{\\bf s}^\\dagger {\\bf a})\/2$ in Eq.~\\eqref{eqLagrange}. By `completing the square' in Eq.~\\eqref{eqLagrange}, one obtains\n\\begin{equation}\nL = {(\\bf a - {\\bf \\tilde a})}^\\dagger {\\mathbf\\Sigma}{(\\bf a- {\\bf\\tilde a})} - \\lambda + {\\rm constant}\\,,\n\\end{equation}\nwhere ${\\bf \\tilde a}$ satisfies \n\\begin{equation}\\label{eqLagrange2}\n2{\\mathbf\\Sigma}{\\bf\\tilde a} = -\\lambda {\\bf s}\\,.\n\\end{equation}\nSince $\\mathbf\\Sigma$, being a covariance matrix, is positive (semi)definite, the minimum of $L$ is found at ${\\bf a} = {\\bf\\tilde a}$. \n\nFurther, ${\\mathbf\\Sigma}$ is always invertible, as the contrary would imply that some non-trivial linear combination of the signals would contain zero noise. Multiplying Eq.~\\eqref{eqLagrange2} by ${\\bf s}^\\dagger {\\mathbf\\Sigma}^{\\text{-1}}$ from the left and using Eq.~\\eqref{eqConstr} leads to $\\lambda = -2\/{\\bf s}^\\dagger {\\mathbf\\Sigma}^{\\text{-1}}{\\bf s}$. When this expression for $\\lambda$ is put back into Eq.~\\eqref{eqLagrange2}, the optimal choice for the coefficient vector ${\\bf a} = \\tilde{\\bf a}$ is obtained as\n\\begin{equation} \\label{eqOptimalCoeff}\n{\\bf a} = \\frac{{\\mathbf\\Sigma}^{\\text{-1}}{\\bf s}}{{\\bf s}^\\dagger {\\mathbf\\Sigma}^{\\text{-1}}{\\bf s}}\\,.\n\\end{equation}\nSimilar to Eq.~(7) of Ref.~\\cite{Capon1970}, Eqs. 
\\eqref{eqVaru} and \\eqref{eqOptimalCoeff} reveal the final noise variance $\\sigma_{\\rm fin}^2$ for the given voxel position,\n\\begin{equation}\\label{eqNoiseVar}\n\\sigma_{\\rm fin}^2 = {\\bf a}^\\dagger {\\mathbf\\Sigma}{\\bf a} = \\frac{1}{{\\bf s}^\\dagger \n{\\mathbf\\Sigma^{\\text{-1}}}{\\bf s}}\\,.\n\\end{equation}\n\nIn the above derivation, we assumed little about how the individual single-sensor data were acquired. In fact, the only significant requirement was that the sensitivities $s_j$ are well defined and accessible. As discussed previously, the signal can be modeled to high accuracy at ULF (see Sec.~\\ref{ssULFMRI}).\n\n\\subsection{Figures of merit and scaling for arrays} \\label{ssFigures}\n\nGiven the $N^\\mathrm{th}$-order combination from Eqs.~\\eqref{eqVoxelLinComb} and \\eqref{eqOptimalCoeff}, the contribution of the sensor array to the voxel-wise image SNR is given by Eq.~\\eqref{eqNoiseVar}. We define the \\emph{array-sensitivity-to-noise ratio} aSNR as\n\\begin{equation} \\label{eqaSNR}\n\\text{aSNR} = \\sqrt{\\mathbf s^\\dagger \\mathbf \\Sigma^{\\text{-1}} \\mathbf s}\\,.\n\\end{equation}\nWhen each sensor in the array sees an equal flux noise level $\\sigma$, the aSNR takes the form\n\\begin{equation}\n\\text{aSNR} = \\frac{\\sqrt{\\mathbf s^\\dagger \\mathbf X^{\\text{-1}} \\mathbf s}}{\\sigma} = \\frac{\\text{array sensitivity}}{\\text{noise level}}\\,,\n\\end{equation}\nwhere $\\mathbf X = \\mathbf\\Sigma\/\\sigma^2$ is the dimensionless noise \\emph{correlation} matrix. We refer to the quantity $\\sqrt{\\mathbf s^\\dagger \\mathbf X^{\\text{-1}} \\mathbf s}$ as the \\emph{array sensitivity}, which for weak correlation is given approximately as $||\\mathbf s||_2$. 
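The optimal weights of Eq.~\\eqref{eqOptimalCoeff}, the constraint of Eq.~\\eqref{eqConstr}, and the noise floor of Eq.~\\eqref{eqNoiseVar} can be verified numerically; a minimal sketch with synthetic sensitivities and a synthetic noise covariance (illustration only, not measured data):

```python
import numpy as np

# Optimal voxel-wise combination for an N-sensor array with complex
# sensitivities s and Hermitian positive-definite noise covariance Sigma.
rng = np.random.default_rng(7)
N = 8
s = rng.normal(size=N) + 1j * rng.normal(size=N)        # sensitivities s_j
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Sigma = A @ A.conj().T + N * np.eye(N)                  # Hermitian, pos. def.

Sinv_s = np.linalg.solve(Sigma, s)
aSNR2 = np.real(s.conj() @ Sinv_s)                      # s^dag Sigma^-1 s
a = Sinv_s / aSNR2                                      # Eq. (eqOptimalCoeff)

assert abs(a.conj() @ s - 1.0) < 1e-12                  # Eq. (eqConstr)
var_u = np.real(a.conj() @ (Sigma @ a))
assert abs(var_u - 1.0 / aSNR2) < 1e-12                 # Eq. (eqNoiseVar)

# Any other sensitivity-corrected combination is noisier; e.g. weights
# proportional to the sensitivities themselves:
a_naive = s / np.real(s.conj() @ s)
assert abs(a_naive.conj() @ s - 1.0) < 1e-12
var_naive = np.real(a_naive.conj() @ (Sigma @ a_naive))
assert var_naive >= var_u
```

For uncorrelated equal-variance noise, $\\mathbf\\Sigma = \\sigma^2\\mathbf I$, the two weightings coincide and the aSNR reduces to $||\\mathbf s||_2\/\\sigma$, as stated above.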
Scaling law exponents for the array sensitivity are denoted by $\\alpha_\\mathrm a$, and for the aSNR by $\\delta = \\alpha_\\mathrm a - \\alpha_\\mathrm n$.\n\n\n\\subsection{Correlation of noise between sensors} \\label{ssCorrEffect}\n\nAs already seen in Secs.~\\ref{ssArrayNoise} and \\ref{ssFigures}, the aSNR is affected by the correlation of random noise between different single-sensor channels. There are two main reasons for such correlations. First, a noise source that is not an intrinsic part of a sensor can directly couple to many sensors. For instance, thermal noise in conductors close to the sensors may result in such correlated noise (see Sec.~\\ref{ssThermalNoise}). Second, the pickups of the sensors themselves are coupled to each other through their mutual inductances. This cross-coupling increases noise correlation and may also affect the sensitivity profiles via signal cross-talk.\n\nTo see the effect of noise correlation on the image SNR, consider a noise covariance matrix of the form\n\\begin{equation} \\label{eqCovSimplified}\n{\\mathbf\\Sigma} = \\sigma^2({\\bf I} + {\\bf C})\\,,\n\\end{equation}\nwhere ${\\bf I}$ is the identity matrix and $\\bf C$ contains the correlations between channels (the off-diagonal elements of $\\mathbf X$). In words, each channel has a noise variance of $\\sigma^2$ and channels $p$ and $q$ have correlation $C_{pq}={\\rm E}(\\xi_p\\xi_q^*)\/\\sigma^2$. Assume further that absolute values of the correlations $C_{pq}$ are substantially smaller than one. \n\nTo first order in $\\mathbf C$, the inverse of $\\mathbf\\Sigma$ is obtained as $\\mathbf\\Sigma^{-1} \\approx \\sigma^{-2}({\\bf I} - {\\bf C})$. 
The SNR in the final image, according to Eq.~\\eqref{eqNoiseVar}, is then proportional to $\\sigma_{\\rm fin}^{-1}$, with\n\\begin{align}\n \\sigma_{\\rm fin}^{-2} &\\approx \\sigma^{-2}\\left({\\bf s}^\\dagger {\\bf s} - {\\bf s}^\\dagger {\\bf C}{\\bf s}\\right)\\nonumber\\\\\n &= \\sigma^{-2}\\left\\|{\\bf s}\\right\\|_2^2 - 2\\sigma^{-2}\\sum_{p<q}{\\rm Re}\\left(C_{pq}s_p^*s_q\\right)\\,.\n\\end{align}\nFor adjacent sensors, the sensitivities $s_p$ and $s_q$ are typically similar, so that ${\\rm Re}(s_p^*s_q) > 0$, while noise coupling into nearby channels typically gives $C_{pq} > 0$. This leads to the conclusion that the noise correlation tends to decrease the image SNR.\n\nWhile the assumptions made in the above discussion may not always be exactly correct, the result is an indication that the correlation of noise between adjacent sensors is usually harmful---even if it is taken into account in reconstruction. Moreover, the actions taken in order to reduce noise correlation are often such that the noise variances decrease as well. For instance, eliminating a noise source from the vicinity of the sensor array does exactly that. \n\nCorrelation can also be reduced by minimizing the inter-sensor cross-talk, for instance by designing a sensor array with low mutual inductances between pickup coils. If the mutual inductances are non-zero, the cross-talk can be dramatically reduced by coupling the feedback of the SQUID flux-locked loop to the pickup circuit instead of more directly into the SQUID loop \\cite{SQUID-HB}. This way, the supercurrent in the pickup coil stays close to zero at all times. In theory, the cross-talk of the \\emph{flux signals} can be completely eliminated by this method.\n\nCorrelated noise originating from sources far from the subject's head and the sensor array can also be attenuated by signal processing methods prior to image reconstruction. The {\\it signal space separation} method (SSS) was developed at Elekta Neuromag Oy \\cite{Taulu2005} (now MEGIN) for use with `whole-head' MEG sensor arrays. The SSS method can distinguish between signals from inside the sensor helmet and those produced by distant sources. 
Now, the strong noise correlation is in fact exploited to significantly improve the SNR. Similar methods may be applicable to ULF MRI as well. To help such methods, additional sensors can be placed outside the helmet arrangement to provide an improved noise reference.\n\nFor sensor array comparisons, we assume that all measures have been taken to reduce correlated noise before image reconstruction. The details of the remaining noise correlation depend on many, generally unknown aspects. Therefore, we set ${\\bf C} = 0$ in Eq.~\\eqref{eqCovSimplified} for a slightly optimistic estimate, {\\it i.e.}, sensor noises are uncorrelated, each having variance $\\sigma^2$.\n\n\\subsection{Filling the array} \\label{ssSizeInfluence}\n\nIn this section, we use general scaling arguments to provide estimates of how the whole sensor array performs as a function of the pickup coil size. Consider a surface, for instance, of the shape of a helmet, and a voxel at a distance $l$ from the surface.\nThe surface is filled with $N$ pickup coils of radius $R$ to measure the field perpendicular to the surface. We assume the pickup coils are positioned either next to each other or in such a way that their areas overlap by a given fraction (see Fig.~\\ref{figGradArrays}). The number of sensors that fit the surface is then proportional to $R^{-2}$.\n\nTake, at first, a voxel far from the sensors; $l \\gg R$. Now, the signal from the voxel is spread over many sensors. For $\\mathbf\\Sigma = \\sigma^2{\\bf I}$, the aSNR is proportional to $\\|{\\bf s}\\|_2\/\\sigma$. 
Assume that $s_j\\propto R^{\\alpha_\\mathrm s}$ and $\\sigma \\propto R^{\\alpha_\\mathrm n}$, which leads to $\\|{\\bf s}\\|_2 \\propto \\sqrt{N}R^{\\alpha_\\mathrm s}\\propto R^{\\alpha_\\mathrm s-1}$, and finally, \n\\begin{equation}\n{\\rm aSNR} \\propto R^\\delta,\\quad \\delta = \\alpha_\\mathrm a - \\alpha_\\mathrm n = \\alpha_\\mathrm s - \\alpha_\\mathrm n-1\\, .\n\\end{equation}\nHere we thus have array sensitivity scaling according to $\\alpha_\\mathrm a = \\alpha_\\mathrm s - 1$, as opposed to $\\alpha_\\mathrm a = \\alpha_\\mathrm s$ when $N$ is fixed. \nRecall from Sec.~\\ref{ssPickups} that the flux sensitivities scale as $R^{\\alpha_\\mathrm s}$ with $\\alpha_\\mathrm s=2$ for magnetometers and $\\alpha_\\mathrm s=3$ for first-order planar gradiometers, given that $l\\gg R$. \nAssuming, for instance, optimally matched input circuits, the intrinsic flux noise of the sensor in both cases has a power law behavior with exponent $\\alpha_\\mathrm n=1\/2$ (see Sec.~\\ref{ssThermalNoise}), which yields $\\delta=0.5$ and $\\delta=1.5$. This is clearly in favor of using larger pickup coils. Especially for larger $R$, however, the cryostat noise may become dominant, and one has $\\alpha_\\mathrm n\\approx 1$. Now, magnetometer arrays have $\\delta\\approx 0$, {\\it i.e.}, the coil size does not affect the SNR. Still, gradiometer arrays perform better with larger $R$ ($\\delta\\approx 1$).\n\nIn the perhaps unfortunate case that noise sources far from the sensors are dominant, the noise behaves like the signal, that is, $\\alpha_\\mathrm s=\\alpha_\\mathrm n$ and $\\delta=-1$. Unlike in the other cases, a higher SNR would be reached by decreasing the pickup coil size. However, such noise conditions are not realistic in the low-correlation limit. Instead, one should aim to suppress the external noise by improving the system design or by signal processing.\n\nThe breakdown of the assumption of $l \\gg R$ needs some attention. 
If the voxel of interest is close to the sensor array, the image value is formed almost exclusively by the closest pickup loop. Now, for non-overlapping pickups, the results for single sensors ($\\alpha_\\mathrm a = \\alpha_\\mathrm s$) are applicable, and the optimum magnetometer size is $R\\approx l$. But then, if the voxel is far from the array (deep in the head), and $R$ is increased to the order of $l$, it is more difficult to draw conclusions. We therefore extend this discussion in Secs.~\\ref{sMethods} and \\ref{sResults} by a computational study.\n\n\n\n\n\n\n\n\n\\section{Methods for numerical study} \\label{sMethods}\n\nIn order to be able to compare the performance of different sensor configurations, we used 3-D computer models of sensor arrays and calculated their sensitivities to signals from different locations in the sample.\n\nThe sensitivities of single pickup coils were calculated using $\\vec B_\\textrm s$ from Eq.~\\eqref{eqLeadField}. Evaluating the line integral required the coil path $\\partial S$ to be discretized. The number of discretization points could be kept small by analytically integrating Eq.~\\eqref{eqLeadMethod0} over $n$ straight line segments between consecutive discretization points $\\vec r_j$ and $\\vec r_{j+1}$ (the end point $\\vec r_n = \\vec r_0$):\n\\begin{equation}\\label{eqLeadMethod0}\n \\vec B_\\textrm s(\\vec r\\,) = \\frac{\\mu}{4\\pi}\\sum_{j=0}^{n - 1} \\int_{\\vec r\\,^\\prime = \\vec r_j}^{\\vec r_{j+1}} \\frac{d\\vec r\\,^\\prime \\times (\\vec r-\\vec r\\,^\\prime)}{|\\vec r-\\vec r\\,^\\prime|^3}\\,.\n\\end{equation}\n As shown in Appendix A, this integrates exactly to\n\\begin{equation}\\label{eqLeadMethod1}\n \\vec B_\\mathrm s(\\vec r\\,) = \\frac{\\mu}{4\\pi}\\sum_{j=0}^{n - 1} \\frac{ a_j+a_{j+1}}{a_ja_{j+1}}\\,\\frac{\\vec a_j\\times \\vec a_{j+1}}{a_j a_{j+1} + \\vec a_j\\cdot\\vec a_{j+1}}\\,,\n\\end{equation}\nwhere $\\vec a_j = \\vec r_j-\\vec r$. 
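This closed-form sum transcribes directly into a few lines of code. The sketch below (with $\mu/4\pi$ set to the vacuum value $10^{-7}\,$T$\cdot$m/A, an illustrative choice) evaluates the field of a unit-current square loop at its center, where the exact value $2\sqrt{2}\,\mu_0 I/(\pi s)$ is known:

```python
import numpy as np

MU_OVER_4PI = 1e-7  # mu0/(4*pi) in SI units; vacuum permeability assumed here

def b_field(r, vertices):
    """Field of a unit-current polygonal loop at point r (closed-form segment sum)."""
    r = np.asarray(r, dtype=float)
    a = np.asarray(vertices, dtype=float) - r      # vectors a_j = r_j - r
    a_next = np.roll(a, -1, axis=0)                # a_{j+1}, with a_n = a_0
    norm = np.linalg.norm(a, axis=1)
    norm_next = np.linalg.norm(a_next, axis=1)
    num = ((norm + norm_next) / (norm * norm_next))[:, None] * np.cross(a, a_next)
    den = norm * norm_next + np.einsum('ij,ij->i', a, a_next)
    return MU_OVER_4PI * np.sum(num / den[:, None], axis=0)

# Square loop of side s = 2 in the z = 0 plane; the center field is
# 2*sqrt(2)*mu0*I/(pi*s) = 4*sqrt(2)e-7 T for I = 1 A.
square = [(1, 1, 0), (-1, 1, 0), (-1, -1, 0), (1, -1, 0)]
Bz = b_field((0, 0, 0), square)[2]
print(Bz)  # ~5.657e-07
```

Because the per-segment expression is exact, a square coil needs only its four vertices as "discretization points", which is the computational advantage noted in the text.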
Besides reducing computational complexity and increasing accuracy, this result allowed exact computation for polygonal coils.\n\nFor a precession field $\\vec B_\\mathrm L = B_\\mathrm L\\widehat e_\\mathrm L$, the single-sensor sensitivities were obtained from Eq.~\\eqref{eqBetaNorm} and the array-sensitivity and aSNR maps were computed according to Sec.~\\ref{ssFigures}. The normalization of the values computed here is somewhat arbitrary; the real image SNR depends on a host of details that are not known at this point (see Sec.~\\ref{ssULFMRI}). However, the results can be used for studying array sensitivity patterns and---with noise levels scaled according to estimated coil inductances---for comparing different possible array setups. \n\n\\section{Results} \\label{sResults}\n\nNumerical calculations were performed for simple spherical sensor arrays (Sec.~\\ref{ssSphereResults}) as well as for realistic configurations (Sec.~\\ref{ssHelmetResults}), {\\it e.g.}, of the shape of a helmet. The former were used for studying scaling behavior of array sensitivities with sensor size and number, extending the discussion in Sec.~\\ref{ssSizeInfluence}. The latter were used for comparing array sensitivity patterns of different potential designs. \n\n\\subsection{Effects of size and number} \\label{ssSphereResults}\n\nA sensor array model was built by filling the surface of a sphere of radius $10\\,$cm (see Fig.~\\ref{figSphere}) with $N$ magnetometers or $N\/2$ planar units of two orthogonal planar first-order gradiometers. Combining one of the magnetometers with one of the gradiometer units would thus give a sensing unit similar to those of the Elekta\/Neuromag MEG system, though circular (radius $R$). All sensors were oriented to measure the radial component of the field. A spherical surface of radius $6\\,$cm was chosen to represent the cerebral cortex. The cortex surface was thus at distance $4\\,$cm from the sensor shell. 
In addition, the center of the sphere was considered to represent deep parts of the brain.\n\\begin{figure}\n\t\\centering\n \\includegraphics{sphere.pdf}\\\\\n\t\\caption{Geometry used in the numerical analysis of the dependence of the array sensitivity on sensor size $R$ and number $N$ at different points inside the imaging volume. Sensors are on a spherical surface of radius $10\\,$cm. A shell with radius $6\\,$cm is representative of points on the cerebral cortex.}\n\t\\label{figSphere}\n\\end{figure}\n\nThe data in Fig.~\\ref{figRDep} show the dependence of the array sensitivity on $R$. Note that the number of sensors is approximately proportional to $R^{-2}$. The largest coil size $R=10\\,$cm corresponds to one magnetometer or gradiometer unit on each of the six faces of a cube. The solid lines correspond to the scaling of the sensitivity as $R^{\\alpha_\\mathrm a}$, $\\alpha_\\mathrm a = \\alpha_\\mathrm s - 1$. For smaller $R$, the scaling laws from Sec.~\\ref{ssSizeInfluence} hold in all cases, and particularly well for gradiometers and deep sources. The scaling law fails most notably with the magnetometer array at the cortex. Indeed, the sensitivity starts to \\emph{decrease} with $R$ when $R$ is very large, as was shown for a special case in Sec.~\\ref{ssPickups}. \n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=1.00\\columnwidth]{array_snr_rdep.pdf}\n\t\\caption{Scaling of array sensitivity at the center and on the cortex as depicted in Fig.~\\ref{figSphere}: sphere filled with magnetometer loops and with planar units of two orthogonal gradiometers arranged side by side. Error bars correspond to the minimum and maximum values. Noise scaling with size is included in the figure, illustrating a potential cross-over from sensor noise with $\\alpha_\\mathrm n = 1\/2$ to cryostat noise or suboptimal input circuit matching with $\\alpha_\\mathrm n = 1$. 
With fixed $N$, array sensitivity scaling is steeper and given by $\\alpha_\\mathrm a = 2, 3$ for magnetometers and planar gradiometers.}\n\t\\label{figRDep}\n\\end{figure}\n\nThe error bars in Fig.~\\ref{figRDep} correspond to the minimum and maximum value of the sensitivity at the cortex while the data symbols correspond to the average value. Despite the strong orientational dependence of single sensors (see Sec.~\\ref{ssPickups}), the array sensitivities are fairly uniform at the cortex. Only at large $R$ do the orientational effects emerge. \n\n\\begin{figure}\n\t\\centering\n\t\t\\includegraphics[width=\\columnwidth]{array_snr_ndep.pdf}\n\t\\caption{Scaling of array sensitivity as $\\sqrt{N}$ at the center and on the cortex as depicted in Fig.~\\ref{figSphere}, when the pickup coil radius is fixed at $R=1.44\\,$cm: (left) $N$ magnetometers, (right) $N\/2$ planar units of two orthogonal gradiometers. Error bars correspond to the minimum and maximum values.}\n\t\\label{figSparse}\n\\end{figure}\n\nFigure \\ref{figSparse} shows a different dataset: how the array sensitivity changes with the packing density of the sensors in the array. In this case, a varying number of magnetometer coils or gradiometer units with fixed radius $R=1.44\\,$cm was distributed on the spherical shell. The aSNR of voxels at the center scales as $\\sqrt{N}$ to an excellent accuracy. While the average sensitivity at points on the cortex also obeys $\\sqrt{N}$ scaling remarkably well, the uniformity drops dramatically when $N$ is lowered below roughly 30 sensors. Closer to the sensors, {\\it e.g.}\\ on the scalp, this effect is even more pronounced.\n\n\\subsection{Realistic sensor configurations} \\label{ssHelmetResults}\n\nFigure~\\ref{fig:snrfigures} presents several possible sensor configurations and provides maps of $\\log_{10}(\\text{aSNR})$ for their comparison. 
The data shown are sagittal slices of the 3-D maps, {\\it i.e.}, on the symmetry plane of the sensor array. Other slices, however, displayed similar performance at the cortex. Also changing the direction of the precession field $\\vec B_\\mathrm L$ had only a minor effect on the SNR in the region of interest. In all cases shown here, $\\vec B_\\mathrm L$ was parallel to the $y$ axis, which is perpendicular to the visualization plane. Note that this contrasts with the MRI convention, where the $\\vec B_\\mathrm L$ direction is considered fixed and always along the $z$ axis.\n\nIn most cases, the sensors are arranged on a helmet surface at 102 positions as in the Elekta\/Neuromag system. Again, magnetometers and planar double-gradiometer units are considered separately (here, $R=1.25\\,$cm, resembling conventional MEG sensors). The same flux noise level was assumed for magnetometers and planar gradiometers of the same size. In addition, we consider arrays with axial gradiometers as well as radially oriented planar gradiometers, both cases having $k=1$, $b=4\\,$cm and $R=1.25\\,$cm. Configurations with 102 overlapping units with $R=2.5\\,$cm are also considered, as well as the existing Los Alamos 7-channel coil geometry \\cite{Zotev2007} and the single large second-order gradiometer at UC Berkeley \\cite{Clarke2007} (see figure caption). For long-baseline gradiometers with $k=1$, $L_\\mathrm p$ was estimated to be twice that of a single loop, and six times for $k=2$.\n\nWith planar sensor units of $R=1.25\\,$cm [Fig.~\\ref{fig:snrfigures}(a--b)], the aSNR for 102 magnetometers is three times that of 204 gradiometers at the cerebral cortex. At the center of the head, the difference is almost a whole order of magnitude in favor of the magnetometers. Therefore, the small gradiometers bring little improvement to the image SNR if the magnetometers are in use. However, as shown previously, gradiometer performance in particular improves steeply with coil size. 
Allowing the coils to overlap with $R = 2.5\\,$cm [Fig.~\\ref{fig:snrfigures}(g--h)] leads to a vastly improved aSNR, especially with gradiometers, but also with magnetometers.\n\n\nGradiometers with long baselines provide somewhat magnetometer-like sensitivity patterns while rejecting external noise. However, their aSNR performance is inferior to magnetometers because of their larger inductance, yielding higher flux noise when the sensor noise dominates; see Sec.~\\ref{ssIntrinsic}. Helmet arrays of magnetometers can provide a similar aSNR in the deepest parts of the brain as the Berkeley gradiometer provides at a small area on the scalp. \n\n\\begin{figure*}\n\t\\centering\n\t\t\\includegraphics[width=0.95\\textwidth]{aSNR_maps.pdf}\n\t\\caption{Base-10 logarithms of aSNR for different sensor-array geometries. To allow comparison of different arrays, we assumed SQUID noise scaling according to optimally matched input circuits. (a) Magnetometers: $R=1.25\\,$cm, (b) double-gradiometer units: $R=1.25\\,$cm, (c) axial gradiometers: $b=4\\,$cm, $R=1.25\\,$cm, (d) 7 Los Alamos second-order axial gradiometers: $b = 6\\,$cm, $R=1.85\\,$cm, \n (e) Berkeley single second-order axial gradiometer: $b=7.5\\,$cm, $R=3.15\\,$cm, (f) radially oriented planar gradiometers [Fig.~\\ref{figSQUID}(f)]: $b=4\\,$cm, $R=1.25\\,$cm, (g) overlapping double-gradiometer units: $R=2.5\\,$cm, (h) overlapping magnetometers: $R=2.5\\,$cm. The data rate of the acquisition is proportional to the square of the aSNR.}\n\t\\label{fig:snrfigures}\n\\end{figure*}\n\n\\section{Conclusions and outlook}\n\nExtending Ref.~\\cite{Zevenhoven2011}, we analyzed a variety of factors that affect the noise and sensitivity of a SQUID-based sensor array for ULF MRI of the brain. Many of the principles, however, apply to non-SQUID arrays as well. 
We also derived numerical means for studying and comparing the SNR performances of any given sensor array designs.\n\nSignal- and noise-scaling arguments and calculations showed that filling a sensor array with a huge number of tiny sensors is usually not advantageous. Larger pickup coil sizes give a better image SNR at the center of the head and, up to some point, also at closer sources such as the cerebral cortex. This is true even if the number of sensors needs to be decreased due to the limited area available for the array. However, the average voxel SNR is proportional to the square root of the number of sensors.\n\n\n\\sloppy Several possible array designs were compared, including existing arrays designed for MEG and ULF MRI. The results are mostly in favor of magnetometers and large first-order gradiometers. While typically having inferior SNR, gradiometers do have the advantage of rejecting external fields, reducing also transient issues due to pulsed fields \\cite{Zevenhoven2011MSc}. An especially dramatic difference was found when comparing a magnetometer-filled helmet with a single larger gradiometer.\n\nIn general, using an array of sensors relaxes the dynamic range requirements for sensor readout. Splitting a large loop into smaller ones further allows interference rejection based on correlation, while also increasing the SNR close to the center of the loop. An array of many sensors also solves the single-sensor problem of `blind angles'.\n\nOur initial analysis of \\emph{overlapping} magnetometer and gradiometer coils gave promising results. Implementing such arrays, however, poses challenges. Practical considerations include how to fabricate such an array and what materials to use. 
For instance, wire-wound Type-I superconducting pickup coils have shown some favorable properties \\cite{Luomahaara2011,Hwang2014} in pulsed systems, and exploiting the dynamics of superconductor-penetrating flux \\cite{Zevenhoven2011MSc,Zevenhoven2013degauss,Al-Dabbagh2018} has been promising. However, existing techniques are not suitable for helmet configurations with overlapping coils. In addition, careful design work should be conducted to minimize mutual inductances and other coupling issues. Further significant improvements could be achieved by placing the sensors closer to the scalp, but that would require dramatic advancements in cryostat technology, and was not studied here.\n\nHere, we only considered the contribution of the sensor array to the imaging performance. Other things to consider are the polarizing technique as well as the ability of the instrumentation to apply more sophisticated sequences and reconstruction techniques, while preserving low system noise. A class of techniques enabled by multichannel magnetometers is accelerated parallel MRI \\cite{Larkman2007}. However, the so-called geometry factor should be taken into account \\cite{Lin2013} if large parallel acceleration factors are pursued.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAs cloud services and distributed data storage become increasingly prevalent, growing concerns about users' privacy have sparked much recent interest in the problem of Private Information Retrieval (PIR). Originally introduced in \\cite{PIRfirst,PIRfirstjournal}, the goal of PIR is to allow a user to efficiently retrieve a desired message from a server or a set of servers where multiple messages are stored, without revealing any information about which message is desired. 
In the information theoretic framework, which requires perfect privacy and assumes long messages, the capacity of PIR is the maximum number of bits of desired information that can be retrieved per bit of download from the server(s) \\cite{Sun_Jafar_PIR}. Capacity characterizations have recently been obtained for various forms of PIR, especially for the multi-server setting \\cite{Shah_Rashmi_Kannan, Sun_Jafar_PIR, Tajeddine_Gnilke_Karpuk_Hollanti, Sun_Jafar_TPIR, Sun_Jafar_SPIR, \nBanawan_Ulukus_BPIR, \nWang_Skoglund_PIRSPIRAd, \nWang_Skoglund_TSPIR, \nFREIJ_HOLLANTI, Sun_Jafar_MDSTPIR, Wang_Skoglund_MDS, Jia_Sun_Jafar_XSTPIR, Jia_Jafar_MDSXSTPIR, Yang_Shin_Lee,\nJia_Jafar_GXSTPIR, Tandon_CachePIR, Wei_Banawan_Ulukus, Tajeddine_Gnilke_Karpuk, Yao_Liu_Kang_Collusion_Pattern, PIR_survey}. \n\nPIR in the basic single server setting would be most valuable if it could be made efficient. However, it was already shown in the earliest works on PIR \\cite{PIRfirst, PIRfirstjournal} that in the single server case there is no better alternative to the trivial solution of downloading everything, which is prohibitively expensive. Since the optimal solution turns out to be trivial, single server PIR generally received less attention from the information theoretic perspective, until recently. \nInterest in the capacity of single-server PIR was revived by the seminal contribution of Kadhe et al. in \\cite{Kadhe_Garcia_Heidarzadeh_ElRouayheb_Sprintson_PIR_SI} which showed that the presence of \\emph{side information} at the user can significantly improve the efficiency of PIR, and that capacity characterizations under side information are far from trivial. 
This crucial observation inspired much work on understanding the role of side-information in PIR \\cite{Chen_Wang_Jafar_Side, heidarzadeh2018oncapacity, li2018single, li2020single, kazemi2019single, heidarzadeh2019single, heidarzadeh2018capacity, heidarzadeh2019capacity, Wei_Banawan_Ulukus_Side, PIR_PCSI}, which remains an active topic of research. Among the recent advances in this area is the study of single-server PIR with private coded side information (PIR-PCSI) that was initiated by Heidarzadeh, Kazemi and Sprintson in \\cite{PIR_PCSI}. Heidarzadeh et al. obtain sharp capacity characterizations for PIR-PCSI in many cases, and also note an open problem, along with an intriguing conjecture that motivates our work in this paper.\n\nIn the PIR-PCSI problem, a single server stores $K$ independent messages $\\bm{W}_1, \\cdots, \\bm{W}_K$, each represented by $L$ i.i.d. uniform symbols from a finite field $\\mathbb{F}_q$. A user wishes to efficiently retrieve a desired message $\\bm{W}_{\\bm{\\theta}}$, while utilizing private side information $(\\bm{\\mathcal{S}}, \\bm{\\Lambda}, \\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]})$ that is unknown to the server, comprised of a linear combination $ \\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]}=\\sum_{m=1}^M\\bm{\\lambda}_m\\bm{W}_{i_m}$ of a uniformly chosen size-$M$ subset of messages, $\\bm{\\mathcal{S}}=\\{\\bm{i}_1,\\bm{i}_2,\\cdots,\\bm{i}_M\\}\\subset[K], \\bm{i}_1<\\bm{i}_2<\\cdots<\\bm{i}_M$, with the coefficient vector $\\bm{\\Lambda}=(\\bm{\\lambda}_1, \\cdots, \\bm{\\lambda}_M)$ whose elements are chosen i.i.d. uniform from $\\mathbb{F}_q^\\times$. Depending on whether $\\bm{\\theta}$ is drawn uniformly from $[K]\\setminus\\bm{\\mathcal{S}}$ or uniformly from $\\bm{\\mathcal{S}}$, there are two settings, known as PIR-PCSI-I and PIR-PCSI-II, respectively. In each case, $(\\bm{\\theta}, \\bm{\\mathcal{S}})$ must be kept private. 
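To make the setup concrete, the following sketch samples one instance of the PIR-PCSI-II side information over a small prime field (all parameter values here are illustrative, not from the paper):

```python
import random

q, K, M, L = 5, 6, 3, 4  # illustrative parameters; F_q with q prime

rng = random.Random(0)

# K messages, each consisting of L i.i.d. uniform symbols of F_q.
W = [[rng.randrange(q) for _ in range(L)] for _ in range(K)]

# Support set S: uniformly chosen size-M subset of message indices.
S = sorted(rng.sample(range(K), M))

# Coefficients lambda_m drawn i.i.d. uniform from F_q^x (non-zero elements).
Lam = [rng.randrange(1, q) for _ in range(M)]

# Coded side information Y = sum_m lambda_m * W_{i_m}, computed over F_q.
Y = [sum(lam * W[i][l] for lam, i in zip(Lam, S)) % q for l in range(L)]

# In PIR-PCSI-II, the desired index theta is drawn uniformly from S.
theta = rng.choice(S)
print(S, Lam, Y, theta)
```

The privacy requirement is then that the server's view of the queries is independent of $(\theta, S)$; the code above only illustrates how the side information ensemble is generated.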
Capacity of PIR is typically defined as the maximum number of bits of desired message that can be retrieved per bit of download from the server(s), and includes a supremum over message size $L$. Since the side-information formulation specifies a finite field $\\mathbb{F}_q$, the capacity of PIR-PCSI can potentially depend on the field. A field-independent notion of capacity is introduced in \\cite{PIR_PCSI} by allowing a supremum over all finite fields. For PIR-PCSI-I, where $\\bm{\\theta}\\notin \\bm{\\mathcal{S}}$, Heidarzadeh et al. fully characterize the capacity as $(K-M)^{-1}$ for $1 \\leq M \\leq K-1$. For PIR-PCSI-II, the capacity is characterized as $(K-M+1)^{-1}$ for $\\frac{K+1}{2} < M \\leq K$. Capacity characterization for the remaining case of $2 \\leq M \\leq \\frac{K+1}{2}$ is noted as an open problem in \\cite{PIR_PCSI}, and it is conjectured that the capacity in this case is also $(K-M+1)^{-1}$. \n\nThe main motivation of our work is to settle this conjecture and obtain the capacity characterization for PIR-PCSI-II when $2 \\leq M \\leq \\frac{K+1}{2}$. Given the importance of better understanding the role of side information for single-server PIR, additional motivation comes from the following questions: What is the infimum capacity (infimum over all finite fields instead of supremum)? What if the coefficient vector $\\bm{\\Lambda}$ (whose privacy is not required in \\cite{PIR_PCSI}) is also required to be private? Can the side-information be reduced, e.g., to save storage, without reducing capacity? 
\n\n\n\\begin{table*}[!t]\n \\caption{Capacity results for PIR-PCSI-I, PIR-PCSI-II and PIR-PCSI}\n \\label{tab:capacity}\n \\centering\n \\scalebox{0.76}{\n \\begin{tabular}{|c|c|c|}\n \\hline\n PIR-PCSI-I ($1 \\leq M \\leq K-1$) & PIR-PCSI-II ($2 \\leq M \\leq K$) & PIR-PCSI ($1 \\leq M \\leq K$)\\\\ \\hline\n $C_{\\mbox{\\tiny PCSI-I}}^{\\sup} = \\frac{1}{K-M}$, \\cite{PIR_PCSI} \n & $C_{\\mbox{\\tiny PCSI-II}}^{\\sup} =\n \\begin{cases}\n \\frac{2}{K}, & 2 \\leq M \\leq \\frac{K+1}{2}, \\text{Thm. \\ref{thm:cap_PCSI2_sup}}\\\\\t\n \\frac{1}{K-M+1}, & \\frac{K+1}{2} < M \\leq K,\\text{\\cite{PIR_PCSI}}\n \\end{cases}$ \n & $C_{\\mbox{\\tiny PCSI}}^{\\sup} = \n \\begin{cases}\n \\frac{1}{K-1}, & M=1,\\\\\n \\frac{1}{K-M+1}, & 2 \\leq M \\leq K,\n \\end{cases}$, Thm. \\ref{thm:cap_PCSI_sup}\\\\ \\hline\n $C_{\\mbox{\\tiny PCSI-I}}^{\\inf} =\n \\begin{cases}\n \\frac{1}{K-1}, & 1 \\leq M \\leq \\frac{K}{2},\\\\\n \\big(K - \\frac{M}{K-M}\\big)^{-1}, & \\frac{K}{2} < M \\leq K-1,\n \\end{cases}$, Thm. \\ref{thm:cap_PCSI1_inf}\n & $C_{\\mbox{\\tiny PCSI-II}}^{\\inf} = \\frac{M}{(M-1)K}$, Thm. \\ref{thm:cap_PCSI2_inf} & $C_{\\mbox{\\tiny PCSI}}^{\\inf} = \\frac{1}{K-1}$, Thm. \\ref{thm:cap_PCSI_inf} \\\\ \\hline\n \n \\begin{tabular}{c}$ C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\sup} = C_{\\mbox{\\tiny PCSI-I}}^{\\inf}$\\\\\n $\\frac{1}{K-1} \\leq C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\inf} \\leq \\min\\bigg(C_{\\mbox{\\tiny PCSI-I}}^{\\inf}, \\frac{1}{K-2}\\bigg)$\\end{tabular}, Thm. \\ref{thm:pcsi1_pub_pri} & $ C_{\\mbox{\\tiny PCSI-II}}^{\\mbox{\\tiny pri}}(q) = C_{\\mbox{\\tiny PCSI-II}}^{\\inf}$, Thm. \\ref{thm:pcsi2_pub_pri} & $ C_{\\mbox{\\tiny PCSI}}^{\\mbox{\\tiny pri}}(q) = C_{\\mbox{\\tiny PCSI}}^{\\inf}$, Thm. \\ref{thm:pcsi_pub_pri} \\\\ \\hline\n\n \n \\end{tabular}\n }\n\\end{table*}\n\n\nThe contributions of this work are summarized in Table \\ref{tab:capacity}, along with prior results from \\cite{PIR_PCSI}. 
As our main contribution we show that the capacity of PIR-PCSI-II for $2 \\leq M \\leq \\frac{K+1}{2}$ is equal to $2\/K$, which is strictly higher than the conjectured value in this parameter regime. The result reveals two surprising aspects of this parameter regime. First, whereas previously known capacity characterizations of PIR-PCSI-II (and PIR-PCSI-I) in \\cite{PIR_PCSI} are all strictly increasing with $M$ (the size of the support set of side information), here the capacity does not depend on $M$. Second, in this parameter regime (and also when $M=\\left\\lfloor (K+1)\/2\\right\\rfloor +1$), half of the side information turns out to be redundant, i.e., the supremum capacity remains the same even if the user discards half of the side information. Half of the side information is also the minimum necessary: we show that the same capacity is not achievable with less than half of the side information. By contrast, in other regimes no redundancy exists in the side information, i.e., any reduction in side information would lead to a loss in supremum capacity.\n\nThe optimal rate $2\/K$ is shown to be achievable for any finite field $\\mathbb{F}_q$ where $q$ is an \\emph{even} power of a prime. The achievable scheme requires downloads that are ostensibly non-linear in $\\mathbb{F}_q$, but in its essence the scheme is linear, as can be seen by interpreting $\\mathbb{F}_q$ as a $2$ dimensional vector space over the base field $\\mathbb{F}_{\\sqrt{q}}$, over which the downloads are indeed linear. Intuitively, the scheme may be understood as follows. A rate of $2\/K$ means a download of $K\/2$, which is achieved by downloading \\emph{half} of every message (one of the two dimensions in the $2$ dimensional vector space over $\\mathbb{F}_{\\sqrt{q}}$). 
The key idea is \\emph{interference alignment} -- for the undesired messages that appear in the side information, the halves that are downloaded are perfectly \\emph{aligned} with each other, whereas for the desired message, the half that is downloaded is not aligned with the downloaded halves of the undesired messages. For messages that are not included in the side information, any random half can be downloaded to preserve privacy. \n\nWith a bit of oversimplification for the sake of intuition, suppose there are $K=4$ messages, that can be represented as $2$-dimensional vectors $\\bm{A}=[\\bm{a}_1~~ \\bm{a}_2], \\bm{B}=[\\bm{b}_1~~ \\bm{b}_2], \\bm{C}=[\\bm{c}_1~~ \\bm{c}_2], \\bm{D}=[\\bm{d}_1~~ \\bm{d}_2]$, the side information is comprised of $M=3$ messages, say at first $\\bm{A}+\\bm{B}+\\bm{C}=[\\bm{a}_1+\\bm{b}_1+\\bm{c}_1~~ \\bm{a}_2+\\bm{b}_2+\\bm{c}_2]$, and the desired message is $\\bm{A}$. Then the user could recover $\\bm{A}$ by downloading $\\bm{a}_1, \\bm{b}_2, \\bm{c}_2$ and either $\\bm{d}_1$ or $\\bm{d}_2$, i.e., half of each message for a total download of $K\/2=2$ (normalized by message size). We may also note that half of the side information is redundant, i.e., the user only needs $\\bm{a}_2+\\bm{b}_2+\\bm{c}_2$, and can discard the rest. But there is a problem with this oversimplification -- this toy example seemingly loses privacy because the matching indices reveal that $\\bm{b}_2$ aligns with $\\bm{c}_2$ but not $\\bm{a}_1$. This issue is resolved by noting that the side information is in fact $\\bm{\\lambda}_1\\bm{A}+\\bm{\\lambda}_2\\bm{B}+\\bm{\\lambda}_3\\bm{C}=\\bm{A}'+\\bm{B}'+\\bm{C}'$. 
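The recovery step in this oversimplified toy example is plain arithmetic, sketched below over a small prime field ($q=5$, an illustrative choice) with unit coefficients:

```python
import random

q = 5
rng = random.Random(1)

# K = 4 messages, each a 2-dimensional vector over F_q.
A, B, C, D = ([rng.randrange(q), rng.randrange(q)] for _ in range(4))

# Retained half of the side information: second component of A + B + C.
y2 = (A[1] + B[1] + C[1]) % q

# User downloads a_1, b_2, c_2 and (say) d_2 -- half of each message.
a1, b2, c2, d2 = A[0], B[1], C[1], D[1]

# The aligned halves b_2, c_2 cancel out of y2, exposing a_2; a_1 completes A.
a2 = (y2 - b2 - c2) % q
recovered_A = [a1, a2]
print(recovered_A == A)  # True
```

As the text goes on to explain, this version leaks alignment information; the actual scheme hides it behind the random coefficients $\bm{\lambda}_m$.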
Suppose $\\bm{\\lambda}_1, \\bm{\\lambda}_2, \\bm{\\lambda}_3$ are random (unknown to the server) independent linear transformations (\\emph{matrices}) that independently `\\emph{rotate}' $\\bm{A}, \\bm{B}, \\bm{C}$ vectors into $\\bm{A}',\\bm{B}',\\bm{C}'$ vectors, respectively, such that the projections (combining coefficients) of each along any particular dimension become independent of each other. In other words, $\\bm{a}_i', \\bm{b}_i', \\bm{c}_i'$ are independent projections of $\\bm{A}, \\bm{B}, \\bm{C}$, and downloading, say $(\\bm{a}_1', \\bm{b}_2', \\bm{c}_2', \\bm{d}_2')$ reveals to the server no information about their relative alignments in the side information. From the server's perspective, each downloaded symbol is simply an independent random linear combination of the two components of each message. Intuitively, since the random rotation is needed to maintain privacy, it is important that $\\bm{\\lambda}_i$ are matrices, not scalars (because scalars only scale, they do not rotate vectors). This is not directly the case in $\\mathbb{F}_q$ because $\\bm{\\lambda}_i$ are scalars in $\\mathbb{F}_q$. However, viewed as a $2$ dimensional vector space over $\\mathbb{F}_{\\sqrt{q}}$, the $\\bm{\\lambda}_i$ indeed act as invertible $2\\times 2$ matrices on the vectors $\\bm{A}, \\bm{B}, \\bm{C}, \\bm{D}$, rotating each vector randomly and independently, thus ensuring privacy. \n\nIn order for $\\mathbb{F}_{\\sqrt{q}}$ to be a valid finite field we need $q$ to be an \\emph{even} power of a prime. This suffices to characterize the capacity because the capacity definition in \\cite{PIR_PCSI} allows a supremum over all fields. However, the question remains about whether the rate $2\/K$ is achievable over every finite field. To understand this better, we explore an alternative definition of capacity (called infimum capacity in this work) which considers the infimum (instead of supremum) over all $\\mathbb{F}_q$. 
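The smallest instance of this scalar-as-matrix view is $q=4$: writing elements of $\mathbb{F}_4$ as $a_0+a_1x$ over $\mathbb{F}_2$ with $x^2=x+1$, multiplication by any non-zero scalar acts as an invertible $2\times2$ binary matrix. A sketch (the polynomial basis is our own illustrative choice):

```python
import numpy as np

# F_4 = F_2[x]/(x^2 + x + 1); a pair (a0, a1) represents a0 + a1*x.
def mul_f4(u, v):
    a0, a1 = u
    b0, b1 = v
    # (a0 + a1 x)(b0 + b1 x) = a0 b0 + (a0 b1 + a1 b0) x + a1 b1 x^2,
    # reduced with x^2 = x + 1 over F_2.
    c0 = (a0 * b0 + a1 * b1) % 2
    c1 = (a0 * b1 + a1 * b0 + a1 * b1) % 2
    return (c0, c1)

def matrix_of(lam):
    """2x2 matrix over F_2 representing multiplication by the scalar lam."""
    cols = [mul_f4(lam, e) for e in [(1, 0), (0, 1)]]
    return np.array(cols).T % 2

nonzero = [(1, 0), (0, 1), (1, 1)]  # 1, x, and 1 + x
for lam in nonzero:
    Mlam = matrix_of(lam)
    det = int(round(np.linalg.det(Mlam))) % 2
    print(lam, det)  # det = 1 mod 2: each non-zero scalar is invertible
```

Multiplication by $x$ or $1+x$ mixes both components, which is the "rotation" used above; over a prime field no such two-dimensional structure exists, matching the requirement that $q$ be an even prime power.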
We find that the infimum capacity of PIR-PCSI-II is always equal to $M\/((M-1)K)$. Evidently, for $M=2$ the capacity is field independent because the infimum and supremum over fields produce the same capacity result. In general however, the infimum capacity can be strictly smaller, thus confirming field-dependence. The worst case corresponds to the binary field $\\mathbb{F}_2$. Intuitively, the reason that the infimum capacity corresponds to the binary field is that over $\\mathbb{F}_2$ the non-zero coefficients $\\bm{\\lambda}_m$ must all be equal to one, and thus the coefficients are essentially known to the server. On the other hand, we also present an example with $q=3$ (and $M=3, K=4$) where $2\/K$ is achievable (and optimal), to show that the achievability of $2\/K$ for $M>2$ is not limited to just field sizes that are even powers of a prime number. We also show that for PIR-PCSI-II, the infimum capacity with private $(\\bm{\\theta},\\bm{\\mathcal{S}})$ is the same as the (supremum or infimum) capacity with private $(\\bm{\\theta},\\bm{\\mathcal{S}},\\bm{\\Lambda})$, i.e., when the coefficients $\\bm{\\Lambda}$ must also be kept private from the server. \n\nNext we consider PIR-PCSI-I where $\\bm{\\theta}$ is drawn from $[K]\\setminus\\bm{\\mathcal{S}}$. The supremum capacity of PIR-PCSI-I is fully characterized in \\cite{PIR_PCSI}. In this case, we show that there is no redundancy in the CSI. As in PIR-PCSI-II, we find that the infimum capacity of PIR-PCSI-I is strictly smaller than the supremum capacity in general, and the binary field $\\mathbb{F}_2$ yields the worst case. Unlike PIR-PCSI-II, however, the infimum capacity of PIR-PCSI-I with private $(\\bm{\\theta},\\bm{\\mathcal{S}})$ does not always match the infimum capacity with private $(\\bm{\\theta},\\bm{\\mathcal{S}},\\bm{\\Lambda})$. 
For example, if $M=K-1$, then both the supremum and infimum capacities of PIR-PCSI-I are equal to $1$ for private $(\\bm{\\theta},\\bm{\\mathcal{S}})$, but if the coefficient vector $\\bm{\\Lambda}$ must also be kept private then the infimum capacity is no more than $1\/(K-2)$. Thus, the loss in capacity from requiring privacy of coefficients can be quite significant.\n\nTo complete the picture, we finally consider the capacity of PIR-PCSI where $\\bm\\theta$ is drawn uniformly from $[K]$. In PIR-PCSI the server is not allowed to learn anything about whether or not $\\bm{\\theta}\\in\\bm{\\mathcal{S}}$. The supremum capacity of PIR-PCSI is found to be $(K-M+1)^{-1}$ for $2 \\leq M \\leq K$. Remarkably, this can be strictly smaller than both the capacity of PIR-PCSI-I and that of PIR-PCSI-II, so there is an additional cost to be paid for hiding from the server whether $\\bm{\\theta} \\in \\bm{\\mathcal{S}}$ or $\\bm{\\theta} \\notin \\bm{\\mathcal{S}}$. Depending on the relative values of $M$ and $K$, in this case we find that the redundancy in CSI can be as high as $1\/2$ or as low as $0$. The infimum capacity of PIR-PCSI is smaller than the supremum capacity, the binary field $\\mathbb{F}_2$ yields the worst case, and as in PIR-PCSI-II, the infimum capacity with private $(\\bm{\\theta},\\bm{\\mathcal{S}})$ is the same as the (supremum or infimum) capacity with private $(\\bm{\\theta},\\bm{\\mathcal{S}},\\bm{\\Lambda})$.\n\nThis paper is organized as follows: Section \\ref{sec:state} presents the PIR-PCSI, PIR-PCSI-I, and PIR-PCSI-II problem statements from \\cite{PIR_PCSI}. Section \\ref{sec:main} states our capacity and CSI-redundancy results for PIR-PCSI-II, PIR-PCSI-I, and PIR-PCSI in fourteen theorems, which are proved in Sections \\ref{sec:cap_PCSI2_sup} through \\ref{proof:pcsi_pub_pri}. Section \\ref{sec:con} concludes the paper and discusses possible future directions.\n\n\\emph{Notation}: For a positive integer $a$, let $[a]$ denote the set $\\{1,2,\\cdots,a\\}$. 
For two integers $a, b$ where $a < b$, $[a:b]$ denotes the set $\\{a, a+1, \\cdots, b\\}$. For a set $\\mathcal{S} = \\{i_1, i_2, \\cdots, i_n\\}$, $|\\mathcal{S}|$ denotes the cardinality of $\\mathcal{S}$. $\\mathbf{I}_{M}$ denotes the $M \\times M$ identity matrix, and $\\mathbf{0}_{M}$ denotes the $M \\times M$ all-zero matrix. For a matrix $\\mathbf{A}$, let $\\mathbf{A}(i,:)$ be the $i^{th}$ row of $\\mathbf{A}$. For a set $\\mathcal{A}$ whose elements are integers, let $\\mathcal{A}(i)$ denote the $i^{th}$ element of $\\mathcal{A}$ in ascending order. Let $\\mathbb{F}_{q}$ denote the finite field of order $q$ and $\\mathbb{F}_{q}^{\\times}$ contain all the non-zero elements of $\\mathbb{F}_{q}$. The notation $\\mathbb{F}_q^{a\\times b}$ represents the set of all $a\\times b$ matrices with elements in $\\mathbb{F}_q$. Let $\\mathfrak{S}$ be the set of all subsets of $[K]$ with cardinality $M$, i.e., $|\\mathfrak{S}| = \\tbinom{K}{M}$, and let $\\mathfrak{C}$ be the set of all length-$M$ sequences with elements in $\\mathbb{F}_{q}^{\\times}$, i.e., $|\\mathfrak{C}| = (q-1)^M$. For an index set $S\\subset[K]$, define the subscript notation $X_S=\\{X_s\\mid s\\in S\\}$. All entropies are in $q$-ary units.\n\n\\section{Problem Statement}\\label{sec:state}\n\\subsection{Capacity of PIR-PCSI-I, PIR-PCSI-II, PIR-PCSI}\nA single server stores $K$ independent messages $\\bm{W}_1, \\bm{W}_2, \\cdots, \\bm{W}_{K}\\in\\mathbb{F}_q^L$, each comprised of $L$ i.i.d. uniform symbols from $\\mathbb{F}_{q}$, where we refer to $\\mathbb{F}_{q}$ as the \\emph{base field}. In terms of entropies,\n\\begin{align}\n &H(\\bm{W}_{1}) = H(\\bm{W}_{2}) = \\cdots = H(\\bm{W}_{K}) = L,\\\\\n &H(\\bm{W}_{[K]}) = \\sum_{k \\in [K]}H(\\bm{W}_{k}) = KL.\n\\end{align}\n\n A user wishes to retrieve a message $\\bm{W}_{\\bm{\\theta}}$ for a privately generated index $\\bm{\\theta}$. The user has a linear combination of $M$ messages available as coded side information (CSI). 
$M$ is globally known. The CSI is comprised of $(\\bm{\\mathcal{S}}, \\bm{\\Lambda}, \\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]})$, defined as follows. The \\emph{support index set} $\\bm{\\mathcal{S}}$, drawn uniformly from $\\mathfrak{S}$, is a subset of $[K]$, of cardinality $M$. The vector of coefficients $\\bm{\\Lambda}=(\\bm{\\lambda}_1,\\bm{\\lambda}_2,\\cdots,\\bm{\\lambda}_M)$ is drawn uniformly from $\\mathfrak{C}$. \nThe linear combination available to the user is\n\\begin{align}\n \\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]}\\triangleq \\bm{\\lambda}_1\\bm{W}_{\\bm{\\mathcal{S}}(1)} + \\bm{\\lambda}_2\\bm{W}_{\\bm{\\mathcal{S}}(2)} + \\cdots + \\bm{\\lambda}_M\\bm{W}_{\\bm{\\mathcal{S}}(M)},\\label{eq:sideinfo_CSI}\n\\end{align}\nwhere we recall the notation that $\\bm{\\mathcal{S}}(m)$ denotes the $m^{th}$ element of $\\bm{\\mathcal{S}}$, in ascending order, i.e., $\\bm{\\mathcal{S}}(1)<\\bm{\\mathcal{S}}(2)<\\cdots<\\bm{\\mathcal{S}}(M)$. \nWe assume that $(\\bm{\\theta}, \\bm{\\mathcal{S}})$, $\\bm{\\Lambda}$, $\\bm{W}_{[K]}$ are independent.\n\\begin{align}\n H(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda}, \\bm{W}_{[K]}) = H(\\bm{\\theta}, \\bm{\\mathcal{S}}) + H(\\bm{\\Lambda}) + H(\\bm{W}_{[K]}).\n\\end{align}\n\nThere are three formulations of the problem depending on how $\\bm{\\theta}$ is chosen by the user.\n\\begin{enumerate}\n\\item{\\bf PIR-PCSI-I}: $\\bm{\\theta}$ is chosen uniformly from $[K]\\setminus\\bm{\\mathcal{S}}$.\n\\item{\\bf PIR-PCSI-II}: $\\bm{\\theta}$ is chosen uniformly from $\\bm{\\mathcal{S}}$.\n\\item{\\bf PIR-PCSI}: $\\bm{\\theta}$ is chosen uniformly from $[K]$.\n\\end{enumerate}\nWhen referring to all three formulations, we will refer to the problem as {\\bf PIR-PCSI*} for brevity. 
In such statements, PCSI* can be replaced with PCSI-I, PCSI-II, or PCSI to obtain corresponding statements for each of the three formulations.\n\nThe server knows the distributions but not the realizations of $\\bm\\theta, \\bm{\\mathcal{S}}, \\bm{\\Lambda}, \\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]}$.\nIt is required that $(\\bm{\\theta},\\bm{\\mathcal{S}})$ be kept jointly private from the server. Note that the privacy of $\\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]}$ or the coefficient vector $\\bm{\\Lambda}$ is not required. While the server initially knows nothing about the realization of $\\bm{\\Lambda}$, a PIR-PCSI* scheme may reveal some information about the coefficients, especially if it allows for efficient retrieval without leaking any information about $(\\bm{\\theta},\\bm{\\mathcal{S}})$. Leaking information about $\\bm{\\Lambda}$ has implications for reusability of side-information, an issue that has recently been explored in \\cite{Anoosheh_reusable}.\n\nIn order to retrieve $\\bm{W}_{\\bm{\\theta}}$, we assume as in \\cite{PIR_PCSI} that the user generates a random query $\\bm{Q}$ that is independent of the messages. Specifically,\n\\begin{align}\n I(\\bm{W}_{[K]}; \\bm{Q}, \\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda}) = 0.\\label{eq:indQ}\n\\end{align}\nLet $\\mathcal{Q}$ denote the alphabet of $\\bm{Q}$. \n\nBecause the messages are i.i.d. uniform and the coefficients are non-zero, it follows from the construction of $\\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]}$ that\n\\begin{align}\nL&=H(\\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]})\\\\\n& = H(\\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]} \\mid \\bm{Q}, \\bm{\\mathcal{S}}, \\bm{\\Lambda}, \\bm{W}_{{[K]}\\setminus\\{\\bm{\\mathcal{S}}(m)\\}}), \\forall m \\in [M].\\label{eq:indY} \n\\end{align}\n\nThe user uploads $\\bm{Q}$ to the server. 
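As a concrete illustration of the setup so far (an illustrative toy, not part of the model in \\cite{PIR_PCSI}; the helper name \texttt{make\_csi} is ours, and $q$ is taken prime so that symbols are integers modulo $q$), the CSI $\\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]}$ of \\eqref{eq:sideinfo_CSI} can be generated as follows.

```python
# Toy sketch: construct Y = lambda_1 W_{S(1)} + ... + lambda_M W_{S(M)}
# over a prime field Z_q, with S a uniform M-subset of [K] (ascending)
# and nonzero coefficients Lambda.
import random

def make_csi(W, M, q, rng):
    """W: list of K messages, each a length-L list of symbols in {0,...,q-1}."""
    K, L = len(W), len(W[0])
    S = sorted(rng.sample(range(K), M))            # support set, ascending order
    Lam = [rng.randrange(1, q) for _ in range(M)]  # coefficients in F_q^x
    Y = [sum(Lam[m] * W[S[m]][i] for m in range(M)) % q
         for i in range(L)]                        # componentwise combination
    return S, Lam, Y

rng = random.Random(0)
q, K, M, L = 5, 4, 2, 3
W = [[rng.randrange(q) for _ in range(L)] for _ in range(K)]
S, Lam, Y = make_csi(W, M, q, rng)
```

The user stores only $(\\mathcal{S},\\Lambda,Y)$; the server, which knows the distributions but not these realizations, must answer queries without learning $(\\bm{\\theta},\\bm{\\mathcal{S}})$.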
Mathematically, the privacy constraint is expressed as,\n\\begin{align}\n &\\text{[$(\\bm{\\theta}, \\bm{\\mathcal{S}})$ Privacy]} &&I\\left(\\bm{\\theta}, \\bm{\\mathcal{S}}; \\bm{Q}, \\bm{W}_{[K]}\\right) = 0.\\label{eq:tsprivacy}\n\\end{align}\nThe server returns an answer $\\bm{\\Delta}$ as a function of $\\bm{Q}$ and the messages, i.e.,\n\\begin{align}\n H\\left(\\bm{\\Delta} \\mid \\bm{Q}, \\bm{W}_{[K]}\\right) = 0.\n\\end{align}\nUpon receiving the answer, the user must be able to decode the desired message $\\bm{W}_{\\bm\\theta}$. \n\\begin{align}\n &\\text{[Correctness]} &&H(\\bm{W}_{\\bm{\\theta}} \\mid \\bm{\\Delta}, \\bm{Q}, \\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]}, \\bm{\\mathcal{S}}, \\bm{\\Lambda},\\bm{\\theta}) = 0.\n\\end{align}\nThe rate of an achievable scheme is the ratio of the number of bits of the desired message to the total number of bits downloaded on average. If the average download is $D$ $q$-ary symbols, from which the $L$ $q$-ary symbols of the desired message are retrieved, then the rate achieved is,\n\\begin{align}\n R = \\frac{L}{D}\n\\end{align}\nThe capacity is the supremum of achievable rates over all message sizes $L$,\n\\begin{align}\n C_{\\mbox{\\tiny PCSI*}}(q) = \\sup_{L, \\mbox{\\tiny achievable $R$}}R.\n\\end{align}\nThe capacity can depend on the field $\\mathbb{F}_q$ which affects the nature of side information. Field-independent measures of capacity may be obtained by taking a supremum (as in \\cite{PIR_PCSI}) or infimum over all finite fields. 
These are called supremum and infimum capacity, respectively.\n\\begin{align}\n C_{\\mbox{\\tiny PCSI*}}^{\\sup} &= \\sup_{q}C_{\\mbox{\\tiny PCSI*}}(q),\\\\\n C_{\\mbox{\\tiny PCSI*}}^{\\inf} &= \\inf_{q}C_{\\mbox{\\tiny PCSI*}}(q).\n\\end{align}\n\n\\subsection{Capacity of PIR-PCSI* with Private Coefficients}\nRecall that in the formulation of PIR-PCSI* as presented above, while $(\\bm{\\theta},\\bm{\\mathcal{S}})$ must be kept private, the privacy of the coefficient vector $\\bm{\\Lambda}$ is not required. As an important benchmark, we consider the setting where the privacy of coefficients must also be preserved. In this setting, the privacy constraint is modified so that instead of \\eqref{eq:tsprivacy} we require the following.\n\\begin{align}\n &\\text{[$(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$ Privacy]} &&I\\left(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda}; \\bm{Q}, \\bm{W}_{[K]}\\right) = 0.\\label{eq:tscprivacy}\n\\end{align}\nThe capacity under this privacy constraint is referred to as the capacity with private coefficients and is denoted as $C_{\\mbox{\\tiny PCSI*}}^{\\mbox{\\tiny pri}}(q)$, which is potentially a function of the field size $q$. The supremum and infimum (over $q$) of $C_{\\mbox{\\tiny PCSI*}}^{\\mbox{\\tiny pri}}(q)$ are denoted as $C_{\\mbox{\\tiny PCSI*}}^{\\mbox{\\tiny pri},\\sup}, C_{\\mbox{\\tiny PCSI*}}^{\\mbox{\\tiny pri},\\inf}$, respectively.\n\n\\subsection{Redundancy of CSI}\nIn addition to the capacity of PIR-PCSI*, we also wish to determine how much (if any) of the side information is redundant, i.e., can be discarded without any loss in the \\emph{supremum capacity}. 
\n\nFor all $\\mathcal{S}\\in\\mathfrak{S}, \\Lambda\\in\\mathfrak{C}$, let $f_{\\mathcal{S},\\Lambda}$ be functions that produce\n\\begin{align}\n\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]} = f_{\\mathcal{S}, \\Lambda}(\\bm{Y}^{[{\\mathcal{S}}, {\\Lambda}]}).\n\\end{align}\nLet us refer to all these functions collectively as $\\mathcal{F}=(f_{\\mathcal{S}, \\Lambda})_{\\mathcal{S}\\in\\mathfrak{S}, \\Lambda\\in\\mathfrak{C}}$. \nDefine $\\overline{C}_{\\mbox{\\tiny PCSI*}}(q,\\mathcal{F})$ as the capacity (supremum of achievable rates) if the decoding must be based on $\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]}$ instead of $\\bm{Y}^{[{\\mathcal{S}}, {\\Lambda}]}$, i.e., the correctness condition is modified to\n\\begin{align}\nH(\\bm{W}_{\\bm{\\theta}} \\mid \\bm{\\Delta}, \\bm{Q}, \\overline{\\bm{Y}}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]}, \\bm{\\mathcal{S}}, \\bm{\\Lambda},\\bm{\\theta}) = 0.\n\\end{align}\nWe say that $\\mathcal{F}$ uses $\\alpha$-CSI, where\n\\begin{align}\n\\alpha=\\max_{\\mathcal{S}\\in\\mathfrak{S}, \\Lambda\\in\\mathfrak{C}} H(\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]})\/L.\n\\end{align}\nWhereas storing $\\bm{Y}^{[{\\mathcal{S}}, {\\Lambda}]}$ requires $L$ $q$-ary symbols, storing $\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]}$ requires only $\\alpha L$ $q$-ary symbols, i.e., only a fraction $\\alpha$ of the original storage. Define the $\\alpha$-CSI constrained capacity as\n\\begin{align}\n\\overline{C}_{\\mbox{\\tiny PCSI*}}(q,\\alpha)&=\\sup_{\n\\mathcal{F}: ~\\mbox{\\footnotesize uses no more than $\\alpha$-CSI}} \\overline{C}_{\\mbox{\\tiny PCSI*}}(q,\\mathcal{F}).\n\\end{align}\nIn other words, $\\overline{C}_{\\mbox{\\tiny PCSI*}}(q,\\alpha)$ is the capacity when the user is allowed to retain no more than a fraction $\\alpha$ of the CSI $\\bm{Y}^{[{\\mathcal{S}}, {\\Lambda}]}$.\nThe notion of $\\alpha$-CSI constrained capacity is of broader interest on its own. 
However, in this work we will explore only the redundancy of CSI with regard to the supremum capacity. We say that `$\\alpha$-CSI is sufficient' if \n\\begin{align}\n\\sup_q\\overline{C}_{\\mbox{\\tiny PCSI*}}(q,\\alpha)&={C}_{\\mbox{\\tiny PCSI*}}^{\\sup}.\n\\end{align}\nDefine $\\alpha^*$ as the smallest value of $\\alpha$ such that $\\alpha$-CSI is sufficient.\nThe redundancy of PCSI* is defined as $\\rho_{\\mbox{\\tiny PCSI*}}=1-\\alpha^*$.\nNote that the opposite extremes of $\\rho_{\\mbox{\\tiny PCSI*}}=1$ and $\\rho_{\\mbox{\\tiny PCSI*}}=0$ correspond to situations where all of the side information is redundant, and where none of the side information is redundant, respectively.\n\nFor later use, it is worthwhile to note that for any scheme that uses no more than $\\alpha$-CSI, because $\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]}$ is a function of ${\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]}$, it follows from \\eqref{eq:indY} that for all\\footnote{We say $(Q,\\mathcal{S},\\Lambda)$ is feasible if $\\Pr((\\bm{Q}, \\bm{\\mathcal{S}}, \\bm{\\Lambda}) = (Q,\\mathcal{S},\\Lambda))>0$.} feasible $(Q,\\mathcal{S},\\Lambda)$,\n{\\small\n\\begin{align}\nH\\bigg(\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]} \\mid (\\bm{Q},\\bm{\\mathcal{S}},\\bm{\\Lambda})=(Q,\\mathcal{S},\\Lambda)\\bigg)=H(\\overline{\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]} )\\leq\\alpha L.\\label{eq:invaYR}\n\\end{align}\n}\nThis is because of the property that if $A$ is independent of $B$, then any function of $A$ is also independent of $B$. In this case, \\eqref{eq:indY} tells us that ${\\bm{Y}}^{[{\\mathcal{S}}, {\\Lambda}]}$ is independent of $(\\bm{Q},\\bm{\\mathcal{S}},\\bm{\\Lambda})$, therefore so is $\\overline{{\\bm{Y}}}^{[{\\mathcal{S}}, {\\Lambda}]}$.\n\n\\section{Main Results}\\label{sec:main}\nWe start with the setting of PIR-PCSI-II (where $\\bm{\\theta}\\in\\bm{\\mathcal{S}}$), which is the main motivation for this work. 
Note that the case $M=1$ is trivial, because in that case the user already has the desired message. Therefore, for PIR-PCSI-II we will always assume that $M>1$.\n\\subsection{PIR-PCSI-II (where $\\bm{\\theta}$ is drawn uniformly from $\\bm{\\mathcal{S}}$)}\n\\begin{theorem}\\label{thm:cap_PCSI2_sup}\n The supremum capacity of PIR-PCSI-II is\n \\begin{align}\n C_{\\mbox{\\tiny PCSI-II}}^{\\sup} &=\\max\\left(\\frac{2}{K},\\frac{1}{K-M+1}\\right)\\\\\n &=\n \\begin{cases}\n \\frac{2}{K}, & 1 < M \\leq \\frac{K+1}{2},\\\\\n \\frac{1}{K-M+1}, & \\frac{K+1}{2} < M \\leq K.~\\text{\\cite{PIR_PCSI}}\n \\end{cases}\n \\end{align}\n\\end{theorem}\n\n\\begin{lemma}\\label{lem:alpha_min_1}\n For $1 < M \\leq \\frac{K+2}{2}$, the redundancy $\\rho_{\\mbox{\\tiny PCSI-II}}\\leq 1\/2$.\n\\end{lemma}\n\n\\proof Recall that the capacity for this case is $2\/K$, i.e., the optimal average download cost is $D\/L=K\/2$. Fix $\\epsilon>0$, and consider an achievable scheme such that $\\alpha$-CSI is sufficient and the average download cost $D\/L\\leq K\/2+\\epsilon$ for some $L$. Since $D\/L\\leq K\/2+\\epsilon$, we have\n\\begin{align}\n&LK\/2+\\epsilon L\\notag\\\\\n&\\geq D\\\\\n &\\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\\\\n &\\geq I(\\bm{\\Delta}; \\bm{W}_{[K]} \\mid \\bm{Q}) \\notag\\\\\n &= \\sum_{k \\in [K]}I(\\bm{\\Delta}; \\bm{W}_k \\mid \\bm{Q}, \\bm{W}_{[k-1]})\\\\\n &= \\sum_{k \\in [K]}\\bigg(H(\\bm{W}_k \\mid \\bm{Q}, \\bm{W}_{[k-1]})-H(\\bm{W}_k \\mid \\bm{\\Delta}, \\bm{Q}, \\bm{W}_{[k-1]})\\bigg)\\\\\n &= \\sum_{k \\in [K]}\\bigg(H(\\bm{W}_k)-H(\\bm{W}_k \\mid \\bm{\\Delta}, \\bm{Q}, \\bm{W}_{[k-1]})\\bigg)\\label{eq:independent}\\\\\n &\\geq \\sum_{k \\in [K]}\\bigg(H(\\bm{W}_k)-H(\\bm{W}_k \\mid \\bm{\\Delta}, \\bm{Q})\\bigg)\\\\\n &= \\sum_{k \\in [K]}I(\\bm{W}_k; \\bm{\\Delta}, \\bm{Q})\\\\\n &\\geq K I(\\bm{W}_{k^*}; \\bm{\\Delta}, \\bm{Q}),\n \\label{eq:alpha_half}\n\\end{align}\nwhere \\eqref{eq:independent} holds since all the messages and the query are mutually independent, and\n\\begin{align}\nk^*=\\arg\\min_{k\\in[K]}I(\\bm{W}_k; \\bm{\\Delta}, \\bm{Q}).\n\\end{align}\nFrom \\eqref{eq:alpha_half} we have\n\\begin{align}\n H(\\bm{W}_{k^*} \\mid \\bm{\\Delta}, \\bm{Q}) \\geq L\/2 -\\epsilon L\/K.\n\\end{align}\nThus, 
there must exist a feasible query $Q$ such that \n\\begin{align}\n H(\\bm{W}_{k^*} \\mid \\bm{\\Delta}, \\bm{Q}=Q) \\geq L\/2-\\epsilon L\/K. \\label{eq:smallest}\n\\end{align}\nLet $\\mathcal{S} =\\{i_1, \\cdots, i_{M-1}, k^{*}\\}\\subset[K]$, where $i_1, \\cdots, i_{M-1}$ are any distinct indices in $[K]\\setminus\\{k^{*}\\}$, so that $|\\mathcal{S}|=M$. Then according to Lemma \\ref{lem:privacy} and \\eqref{eq:invaYR}, there must exist $\\Lambda \\in \\mathfrak{C}$ such that \n\\begin{align}\n &H(\\bm{W}_{k^*} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}^{[\\mathcal{S},\\Lambda]}, \\bm{Q}=Q) = 0,\\label{eq:dec_smallest}\\\\\n &H(\\overline{\\bm{Y}}^{[\\mathcal{S},\\Lambda]} \\mid \\bm{Q} = Q) = H(\\overline{\\bm{Y}}^{[\\mathcal{S},\\Lambda]}) \\leq \\alpha L.\n\\end{align}\nCombining \\eqref{eq:smallest} and \\eqref{eq:dec_smallest}, we have\n\\begin{align}\n I(\\overline{\\bm{Y}}^{[\\mathcal{S},\\Lambda]}; \\bm{W}_{k^*} \\mid \\bm{\\Delta}, \\bm{Q}=Q) \\geq L\/2-\\epsilon L\/K.\n\\end{align}\nThus \n\\begin{align}\n \\alpha L &\\geq H(\\overline{\\bm{Y}}^{[\\mathcal{S},\\Lambda]} \\mid \\bm{Q} = Q)\\notag\\label{eq:indYQR}\\\\\n &\\geq I(\\overline{\\bm{Y}}^{[\\mathcal{S},\\Lambda]}; \\bm{W}_{k^*} \\mid \\bm{\\Delta}, \\bm{Q}=Q) \\geq \\frac{L}{2}-\\epsilon L\/K,\n\\end{align}\nwhich implies that $\\alpha \\geq 1\/2 - \\epsilon\/K$. In order to approach capacity, we must have $\\epsilon\\rightarrow 0$, therefore we need $\\alpha\\geq 1\/2$. Since this is true for any $\\alpha$ such that $\\alpha$-CSI is sufficient, it is also true for $\\alpha^*$, and therefore the redundancy is $\\rho_{\\mbox{\\tiny PCSI-II}}\\leq 1\/2$. $\\hfill\\square$\n\n\\begin{lemma}\\label{lem:alpha_min_2}\n For $\\frac{K+2}{2} < M \\leq K$, the redundancy $\\rho_{\\mbox{\\tiny PCSI-II}}\\leq 0$.\n \\end{lemma}\n\n\\proof Recall that the capacity for this case is $(K-M+1)^{-1}$, i.e., the optimal average download cost is $D\/L=K-M+1$. Consider an achievable scheme such that $\\alpha$-CSI is sufficient and the average download cost $D\/L\\leq K-M+1+\\epsilon$ for some $L$. 
Since $D\/L\\leq K-M+1+\\epsilon$, we have $L(K-M+1)+\\epsilon L\\geq D\\geq H(\\bm{\\Delta} \\mid \\bm{Q})$. Thus, there exists a feasible $Q$ such that \n\\begin{align}\n H(\\bm{\\Delta} \\mid \\bm{Q} = Q) \\leq (K-M+1)L+\\epsilon L.\n\\end{align}\nFor all $i \\in [K-M+1]$, let $\\mathcal{S}_{i} = [i:i+M-1]$. Also, let $\\mathcal{S}_{K-M+2} = \\{1\\} \\cup [K-M+2:K]$. For all $i \\in [K-M+2]$, let $\\Lambda_i \\in \\mathfrak{C}$ satisfy\n\\begin{align}\n H(\\bm{W}_i \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}^{[\\mathcal{S}_i,\\Lambda_i]}, \\bm{Q} = Q) = 0.\n\\end{align}\nSuch $\\Lambda_i$'s must exist according to Lemma \\ref{lem:privacy}. \n\nWriting $\\overline{\\bm{Y}}^{[\\mathcal{S}_i,\\Lambda_i]}$ as $\\overline{\\bm{Y}}_{i}$ for compact notation, we have \n\\begin{align}\n H(\\bm{W}_{[K-M+2]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[K-M+2]}, \\bm{Q}=Q) = 0. \\label{eq:redundancy2_1}\n\\end{align}\nAccording to \\eqref{eq:invaYR}, \n\\begin{align}\n H(\\overline{\\bm{Y}}_i \\mid \\bm{Q} = Q) \\leq \\alpha L,\n\\end{align}\nso we have\n\\begin{align}\n &(K-M+1)L+\\epsilon L + H(\\overline{\\bm{Y}}_{[K-M+1]} \\mid \\bm{Q} = Q)+\\alpha L\\notag\\\\\n &\\geq H(\\bm{\\Delta}, \\overline{\\bm{Y}}_{[K-M+2]} \\mid \\bm{Q} = Q)\\\\\n &\\geq I(\\bm{\\Delta}, \\overline{\\bm{Y}}_{[K-M+2]}; \\bm{W}_{[K-M+2]},\\overline{\\bm{Y}}_{[K-M+2]}\\mid \\bm{Q}=Q)\\\\\n &= H(\\bm{W}_{[K-M+2]},\\overline{\\bm{Y}}_{[K-M+2]} \\mid \\bm{Q}=Q)\\label{eq:redundancy2_2}\\\\\n &\\geq H(\\bm{W}_{[K-M+2]},\\overline{\\bm{Y}}_{[K-M+1]} \\mid \\bm{Q}=Q)\\\\\n &= H(\\bm{W}_{[K-M+2]} \\mid \\bm{Q}=Q)\\notag\\\\\n &\\quad\\quad + H(\\overline{\\bm{Y}}_{[K-M+1]} \\mid \\bm{W}_{[K-M+2]}, \\bm{Q}=Q)\\\\\n &\\geq (K-M+2)L\\notag\\\\\n &\\quad\\quad + H(\\overline{\\bm{Y}}_{[K-M+1]} \\mid \\bm{W}_{[M-1]}, \\bm{Q}=Q),\\label{eq:region}\n\\end{align}\nwhere \\eqref{eq:redundancy2_2} follows from \\eqref{eq:redundancy2_1}. 
Step \\eqref{eq:region} uses the independence of messages and queries according to \\eqref{eq:indQ} and the fact that $M-1 \\geq K-M+2$, because we require $M>(K+2)\/2$. We further bound\n\\begin{align}\n &H(\\overline{\\bm{Y}}_{[K-M+1]} \\mid \\bm{W}_{[M-1]}, \\bm{Q}=Q)\\notag\\\\\n &=H(\\overline{\\bm{Y}}_{1} \\mid\\bm{W}_{[M-1]}, \\bm{Q}=Q) + \\cdots \\notag\\\\\n &\\quad\\quad + H(\\overline{\\bm{Y}}_{K-M+1} \\mid \\bm{W}_{[M-1]}, \\overline{\\bm{Y}}_{[K-M]}, \\bm{Q}=Q)\\\\\n &\\geq \\sum_{i=1}^{K-M+1}H(\\overline{\\bm{Y}}_i\\mid \\bm{W}_{[i+M-2]}, \\bm{Q}=Q)\\label{eq:linearfunc}\\\\\n &= \\sum_{i=1}^{K-M+1}H(\\overline{\\bm{Y}}_i\\mid \\bm{Q}=Q)\\label{eq:pcsi2_red_indYO}\\\\\n &\\geq H(\\overline{\\bm{Y}}_{[K-M+1]}\\mid \\bm{Q}=Q).\\label{eq:plugin}\n\\end{align}\nStep \\eqref{eq:linearfunc} holds because $\\overline{\\bm{Y}}_{[i-1]}$ is a function of $\\bm{W}_{[i+M-2]}$ for all $i \\in [2:K-M+1]$. Step \\eqref{eq:pcsi2_red_indYO} follows from \\eqref{eq:invaYR}. Substituting from \\eqref{eq:plugin} into \\eqref{eq:region}, we have \n\\begin{align}\n &(K-M+1)L + \\epsilon L+ \\alpha L \\geq (K-M+2)L,\n\\end{align}\nwhich gives $\\alpha \\geq 1-\\epsilon$. In order to approach capacity, we must have $\\epsilon\\rightarrow 0$, so we need $\\alpha\\geq 1$, and since this is true for any $\\alpha$ such that $\\alpha$-CSI is sufficient, it is also true for $\\alpha^*$. Thus, the redundancy is bounded as $\\rho_{\\mbox{\\tiny PCSI-II}}\\leq 0$. $\\hfill\\square$\n\nAccording to Remarks \\ref{rmk:PCSI2_margin} and \\ref{rmk:half_CSI}, $\\alpha = 1\/2$ is sufficient for $2 \\leq M \\leq \\frac{K+2}{2}$, and by the construction of the CSI (a linear combination of messages), $\\alpha = 1$ is always sufficient. 
Theorem \\ref{thm:red} is thus proved.\n\n\\section{Proof of Theorem \\ref{thm:cap_PCSI2_inf}}\\label{sec:cap_PCSI2_inf}\nWe prove Theorem \\ref{thm:cap_PCSI2_inf} by first showing that $C_{\\mbox{\\tiny PCSI-II}}(q=2)\\leq M\/((M-1)K)$ and then presenting a PIR-PCSI-II scheme with rate $M\/((M-1)K)$ that works for any $\\mathbb{F}_{q}$.\n\n\\subsection{Converse for $C_{\\mbox{\\tiny PCSI-II}}(q=2)$}\nNote that Lemma \\ref{lem:privacy} is true for arbitrary $\\mathbb{F}_q$. In $\\mathbb{F}_{2}$, we can only have $\\bm{\\Lambda}=(1,1,\\cdots,1)=1_M$, i.e., the length $M$ vector whose elements are all equal to $1$. As a direct result of Lemma \\ref{lem:privacy}, for PIR-PCSI-II in $\\mathbb{F}_2$,\n\\begin{align}\n H(\\bm{W}_{\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},1_{M}]}, \\bm{Q}=Q) = 0, ~\\forall(Q,\\mathcal{S}) \\in \\mathcal{Q}\\times\\mathfrak{S}.\\label{eq:pcsi2_inf_dec}\n\\end{align}\nThus, $\\forall (Q,\\mathcal{S}) \\in \\mathcal{Q}\\times\\mathfrak{S}$,\n\\begin{align}\n &H(\\bm{W}_{\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\notag\\\\\n &= H(\\bm{W}_{\\mathcal{S}}, \\bm{Y}^{[\\mathcal{S},1_M]} \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\label{eq:F2CSI}\\\\\n &= H(\\bm{Y}^{[\\mathcal{S},1_M]} \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\notag\\\\\n &\\quad + H(\\bm{W}_{\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},1_M]}, \\bm{Q}=Q)\\\\\n &\\leq L.\n\\end{align}\n\\eqref{eq:F2CSI} holds because $\\bm{Y}^{[\\mathcal{S},1_M]}$ is simply the summation of $\\bm{W}_{\\mathcal{S}}$. Averaging over $\\bm{Q}$, we have $H(\\bm{W}_{\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Q}) \\leq L, \\forall \\mathcal{S} \\in \\mathfrak{S}$. 
By submodularity,\n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\leq KL\/M.\n\\end{align}\nThe download cost can now be lower bounded as,\n\\begin{align}\n D\\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq KL - H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\geq \\frac{(M-1)KL}{M}.\n\\end{align}\nThus, we have shown that $C_{\\mbox{\\tiny PCSI-II}}(q=2) \\leq \\frac{M}{(M-1)K}$.\n\n\\subsection{A PIR-PCSI-II Scheme for Arbitrary $q$}\\label{sec:PCSI2_inf_ach}\nIn this section, we prove $C_{\\mbox{\\tiny PCSI-II}}(q) \\geq \\frac{M}{(M-1)K}$ for all $q$ by proposing a scheme, namely the \\emph{Generic Linear Combination Based Scheme}, that achieves the rate $\\frac{M}{(M-1)K}$ for any $\\mathbb{F}_{q}$.\n\nLet us choose $L = Ml$ where $M$ is the size of the support index set and $l$ is a positive integer which can be arbitrarily large. Thus, any message $\\bm{W}_k, k \\in [K]$ can be represented as a length-$M$ column vector $V_{\\bm{W}_k} \\in \\mathbb{F}_{q^l}^{M\\times 1}$. Let \n\\begin{align}\n V_{\\bm{W}_{\\bm{\\mathcal{S}}}} = \n \\begin{bmatrix}\n V_{\\bm{W}_{\\bm{i}_1}}^{\\mathrm{T}} & \\cdots & V_{\\bm{W}_{\\bm{i}_M}}^{\\mathrm{T}}\n \\end{bmatrix}^{\\mathrm{T}} \\in \\mathbb{F}_{q^l}^{M^2\\times 1}\n\\end{align}\nwhere $\\bm{\\mathcal{S}} = \\{\\bm{i}_1, \\cdots, \\bm{i}_M\\}$ is the support index set. The CSI $\\bm{Y}$ can be represented as $V_{\\bm{Y}} \\in \\mathbb{F}_{q^l}^{M\\times 1}$ such that\n\\begin{align}\n V_{\\bm{Y}} = \\underbrace{\n \\begin{bmatrix}\n \\bm{\\lambda}_{1}\\mathbf{I}_{M} & \\bm{\\lambda}_{2}\\mathbf{I}_{M} & \\cdots & \\bm{\\lambda}_{M}\\mathbf{I}_{M}\n \\end{bmatrix}}_{M \\text{ blocks}}V_{\\bm{W}_{\\bm{\\mathcal{S}}}},\n\\end{align}\nwhere $\\mathbf{I}_{M} \\in \\mathbb{F}_{q^l}^{M\\times M}$ is the $M \\times M$ identity matrix. 
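As a quick sanity check of this vectorized representation (a sketch under simplifying assumptions: a prime field $\\mathbb{Z}_p$ stands in for $\\mathbb{F}_{q^l}$, and the particular numbers are ours), the componentwise combination $\\sum_m \\bm{\\lambda}_m V_{\\bm{W}_{\\bm{\\mathcal{S}}(m)}}$ agrees with multiplying the block matrix $[\\bm{\\lambda}_1\\mathbf{I}_M ~ \\cdots ~ \\bm{\\lambda}_M\\mathbf{I}_M]$ by the stacked message vector:

```python
# Sketch of V_Y = [l1*I | l2*I | l3*I] V_WS over a prime field Z_p
# (p plays the role of q^l; M = 3 here).
p, M = 7, 3
V_WS = [[1, 2, 3], [4, 5, 6], [0, 1, 2]]     # M message vectors, each length M
Lam  = [2, 3, 5]                              # nonzero coefficients

# Componentwise form: V_Y(i) = sum_m Lam[m] * V_WS[m][i]  (mod p)
V_Y = [sum(Lam[m] * V_WS[m][i] for m in range(M)) % p for i in range(M)]

# Block-matrix form: row i of [Lam[0]*I | Lam[1]*I | Lam[2]*I] times the stack
stack = [x for v in V_WS for x in v]          # V_WS stacked into length M^2
row = lambda i: [Lam[m] if j == i else 0 for m in range(M) for j in range(M)]
V_Y_mat = [sum(row(i)[t] * stack[t] for t in range(M * M)) % p
           for i in range(M)]
assert V_Y == V_Y_mat                         # the two forms coincide
```

The same block structure is what makes $\\mathbf{G}_{\\mathcal{S}}$ below act on the stacked vector $V_{\\bm{W}_{\\bm{\\mathcal{S}}}}$.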
\n\nThe download is specified as,\n\\begin{align}\n \\bm{\\Delta} = \\{&\\mathbf{L}_1^{(1)}V_{\\bm{W}_{1}}, \\cdots, \\mathbf{L}_1^{(M-1)}V_{\\bm{W}_{1}}, \\notag\\\\ \n &\\cdots, \\mathbf{L}_K^{(1)}V_{\\bm{W}_{K}}, \\cdots, \\mathbf{L}_K^{(M-1)}V_{\\bm{W}_{K}}\\},\n\\end{align}\nwhere $\\forall k \\in [K], m \\in [M-1], \\mathbf{L}_k^{(m)} \\in \\mathbb{F}_{q^l}^{1\\times M}$ is a length-$M$ row vector, i.e., for any message vector $V_{\\bm{W}_{k}} \\in \\mathbb{F}_{q^l}^{M\\times 1}$, $\\bm{\\Delta}$ contains $M-1$ linear combinations of that message vector.\n\nSuppose the vectors $\\mathbf{L}_k^{(m)}$ are chosen such that $\\forall \\mathcal{S}=\\{j_1, \\cdots, j_{M}\\} \\in \\mathfrak{S}$ the following $M^2\\times M^2$ square matrix has full rank:\n\\begin{align}\n \\mathbf{G}_{\\mathcal{S}} = \n \\begin{bmatrix}\n \\lambda_{1}\\mathbf{I}_{M} & \\cdots & \\lambda_{M}\\mathbf{I}_{M}\\\\\n & \\mathbf{e}_{1}\\otimes\\mathbf{L}_{j_1}^{(1)} &\\\\\n & \\vdots &\\\\\n & \\mathbf{e}_{1}\\otimes\\mathbf{L}_{j_1}^{(M-1)} &\\\\\n & \\vdots &\\\\\n & \\mathbf{e}_{M}\\otimes\\mathbf{L}_{j_M}^{(1)} &\\\\\n & \\vdots &\\\\\n & \\mathbf{e}_{M}\\otimes\\mathbf{L}_{j_M}^{(M-1)} &\n \\end{bmatrix}.\\label{eq:F2_inv}\n\\end{align}\nNote that $(\\lambda_1, \\cdots, \\lambda_M) \\in \\mathfrak{C}$ is the realization of $\\bm{\\Lambda}$, $\\mathbf{e}_{m}, m\\in [M]$ is the $m^{th}$ row of the $M\\times M$ identity matrix and ``$\\otimes$'' is the Kronecker product.\n\nThe correctness constraint is satisfied because the side-information and the downloads allow the user to obtain $\\mathbf{G}_{\\bm{\\mathcal{S}}}V_{\\bm{W}_{\\bm{\\mathcal{S}}}}$, which can then be multiplied by the inverse of $\\mathbf{G}_{\\bm{\\mathcal{S}}}$ to obtain $V_{\\bm{W}_{\\bm{\\mathcal{S}}}}$, i.e., $\\bm{W}_{\\bm{\\mathcal{S}}}$ which contains $\\bm{W}_{\\bm{\\theta}}$. 
Specifically, the side-information corresponds to the first $M$ rows of $\\mathbf{G}_{\\bm{\\mathcal{S}}}V_{\\bm{W}_{\\bm{\\mathcal{S}}}}$, the downloads $\\mathbf{L}_{\\bm{j}_1}^{(1)}V_{\\bm{W}_{\\bm{j}_1}},\\cdots,\\mathbf{L}_{\\bm{j}_1}^{(M-1)}V_{\\bm{W}_{\\bm{j}_1}}$ correspond to the next $M-1$ rows of $\\mathbf{G}_{\\bm{\\mathcal{S}}}V_{\\bm{W}_{\\bm{\\mathcal{S}}}}$, and so on.\n\nOn the other hand, the privacy constraint is satisfied because the query does not depend on $(\\bm{\\theta}, \\bm{\\mathcal{S}})$; indeed, the construction is such that for every feasible $\\mathcal{S}$, the user is able to decode all $M$ messages $\\bm{W}_{\\mathcal{S}}$. \n\nFinally, let us evaluate the rate achieved by this scheme. Since the user downloads a fraction $\\frac{M-1}{M}$ of every message, the download cost is $D=LK(M-1)\/M$, and the rate achieved is $M\/((M-1)K)$. Since this rate is achieved for any $\\mathbb{F}_q$, we have the lower bound $C_{\\mbox{\\tiny PCSI-II}}(q) \\geq M\/((M-1)K)$.\n\nIt remains to show the existence of such $\\mathbf{L}_k^{(m)}$, for which we need the following lemma.\n\\begin{lemma}\\label{lem:existence}\n There exist $\\{\\mathbf{L}_{k}^{(m)}\\}_{k \\in [K], m \\in [M-1]}$ such that for every $\\mathcal{S} = \\{j_1, \\cdots, j_{M}\\} \\in \\mathfrak{S}$, the matrix $ \\mathbf{G}_{\\mathcal{S}} $ in \\eqref{eq:F2_inv} has full rank, provided \n \\begin{align}\n q^l > \\tbinom{K}{M}M(M-1).\n \\end{align}\n\\end{lemma}\n\\proof The proof is in Appendix \\ref{app:existence}. $\\hfill\\square$\n\nWith the help of Lemma \\ref{lem:existence}, Theorem \\ref{thm:cap_PCSI2_inf} is proved. Let us illustrate the scheme with an example.\n\n\\begin{example}\nConsider $M=2, K=4, L=2l, q = 2$. The $4$ messages are $\\bm{A},\\bm{B},\\bm{C},\\bm{D}$. The user has $\\bm{A}+\\bm{B}$ as the side information and wants to retrieve $\\bm{A}$.\n\n$\\bm{A}$ can be represented as a $2\\times 1$ vector $V_{\\bm{A}} = [V_{\\bm{A}}(1) \\quad V_{\\bm{A}}(2)]^{\\mathrm{T}}$ where $V_{\\bm{A}}(1), V_{\\bm{A}}(2) \\in \\mathbb{F}_{2^l}$. 
Similarly, $\\bm{B}, \\bm{C}, \\bm{D}$ can be represented as $V_{\\bm{B}}$, $V_{\\bm{C}}$, $V_{\\bm{D}}$, respectively. Thus, \n\\begin{align}\n V_{\\bm{W}_{\\bm{\\mathcal{S}}}} = [V_{\\bm{A}}(1) \\quad V_{\\bm{A}}(2) \\quad V_{\\bm{B}}(1) \\quad V_{\\bm{B}}(2)]^{\\mathrm{T}},\n\\end{align}\n\\begin{align}\n V_{\\bm{Y}} = \n \\begin{bmatrix}\n 1&0&1&0\\\\\n 0&1&0&1\n \\end{bmatrix}V_{\\bm{W}_{\\bm{\\mathcal{S}}}} =\n \\begin{bmatrix}\n V_{\\bm{A}}(1)+V_{\\bm{B}}(1)\\\\\n V_{\\bm{A}}(2)+V_{\\bm{B}}(2)\n \\end{bmatrix}.\n\\end{align}\n\nThe download from the server is \n\\begin{align}\n\\bm{\\Delta} = \\{&V_{\\bm{A}}(1)+\\alpha_1 V_{\\bm{A}}(2), V_{\\bm{B}}(1)+\\alpha_2 V_{\\bm{B}}(2),\\notag\\\\\n&V_{\\bm{C}}(1)+\\alpha_3 V_{\\bm{C}}(2), V_{\\bm{D}}(1)+\\alpha_4 V_{\\bm{D}}(2)\\},\n\\end{align}\nwhere $\\alpha_1, \\cdots, \\alpha_4$ are elements of $\\mathbb{F}_{2^l}$ \nsuch that the $4\\times 4$ matrix over $\\mathbb{F}_{2^l}$, \n\\begin{align}\n\\begin{bmatrix}\n 1&0&1&0\\\\\n 0&1&0&1\\\\\n 1&\\alpha_i&0&0\\\\\n 0&0&1&\\alpha_j\n\\end{bmatrix}\n\\end{align}\nhas full rank for all $\\{i,j\\} \\subset [4]$ with $i < j$, which holds if $\\alpha_1, \\alpha_2, \\alpha_3, \\alpha_4,1,0$ are distinct. Thus any $2^l\\geq 6$ works, i.e., it suffices to choose $l=3$.\n\\end{example}\n\n\n\\section{Proof of Theorem \\ref{thm:MK}}\\label{proof:MK}\nFor the case $q=2$, it suffices to download any $K-1$ of the $K$ messages to achieve the capacity $\\frac{1}{K-1}$, since the desired message is either directly downloaded or can be recovered by subtracting the $K-1$ downloaded messages from the CSI.\n\nFor $q \\neq 2$, to achieve the capacity $1$, it suffices to download a linear combination of all $K$ messages with non-zero coefficients. 
Specifically, \n\\begin{align}\n \\bm{\\Delta} = \\bm{Y} + \\bm{\\lambda}^{\\prime}\\bm{W}_{\\bm{\\theta}},\n\\end{align}\nwhere $\\bm{Y}$ is the CSI and $\\bm{\\lambda}^{\\prime} \\in \\mathbb{F}_{q}^{\\times}$ is chosen such that $\\bm{\\lambda}_{\\bm{t}} + \\bm{\\lambda}^{\\prime} \\neq 0$, where $\\bm{\\lambda}_{\\bm{t}}$ denotes the coefficient of $\\bm{W}_{\\bm{\\theta}}$ in the CSI $\\bm{Y}$. Such $\\bm{\\lambda}^{\\prime}$ always exists for $q \\neq 2$. From the server's perspective, the user is downloading a random linear combination of $K$ messages, so the privacy constraint is satisfied. The user is able to decode $\\bm{W}_{\\bm{\\theta}}$ by subtracting $\\bm{Y}$ from $\\bm{\\Delta}$ and scaling by $(\\bm{\\lambda}^{\\prime})^{-1}$, so the correctness constraint is satisfied. \n\n\\section{Proof of Theorem \\ref{thm:M3K4}}\\label{proof:M3K4}\nLet us denote the $K=4$ messages as $\\bm{W}_1=\\bm{A},\\bm{W}_2=\\bm{B},\\bm{W}_3=\\bm{C},\\bm{W}_4=\\bm{D}$ for simpler notation. We have $M = 3$, the base field is $\\mathbb{F}_{3}$ and the length of each message is $L=1$. Our goal is to prove the achievability of rate $1\/2$, i.e., download cost $D=2$ for $L=1$. The user downloads \n\\begin{align}\n \\bm{\\Delta} = \\{&\\bm{\\Delta}_1 = \\bm{A} + \\bm{\\eta}_{b}\\bm{B} + \\bm{\\eta}_{c}\\bm{C}, \\notag\\\\\n &\\bm{\\Delta}_2 = 2\\bm{\\eta}_{b}\\bm{B} + \\bm{\\eta}_{c}\\bm{C} + \\bm{\\eta}_{d}\\bm{D}\\}.\\label{eq:queryfixed}\n\\end{align}\nFrom $\\bm{\\Delta}$, the user can also compute \n\\begin{align}\n \\bm{L}_1 = \\bm{\\Delta}_1 + \\bm{\\Delta}_2 &= \\bm{A} + 2\\bm{\\eta}_{c}\\bm{C} + \\bm{\\eta}_{d}\\bm{D},\\\\\n \\bm{L}_2 = \\bm{\\Delta}_1 + 2\\bm{\\Delta}_2 &= \\bm{A} + 2\\bm{\\eta}_{b}\\bm{B} + 2\\bm{\\eta}_{d}\\bm{D}.\n\\end{align}\nLet $\\bm{W}_{\\bm{\\theta}}$ denote the desired message. Let us normalize $\\bm{\\lambda}_1=1$ without loss of generality.\nThe $\\bm{\\eta}_b, \\bm{\\eta}_c, \\bm{\\eta}_d$ values are specified as follows. 
\n\\begin{enumerate}\n \\item When $\\bm{\\mathcal{S}}=\\{1,2,3\\}$ and $\\bm{Y} = \\bm{A} + \\bm{\\lambda}_2\\bm{B} + \\bm{\\lambda}_3\\bm{C}$, then $\\bm{\\eta}_d$ is randomly chosen from $\\mathbb{F}_{3}^{\\times}=\\{1,2\\}$ and $\\bm{\\eta}_b,\\bm{\\eta}_c$ are chosen so that the desired message $\\bm{W}_{\\bm\\theta}$ can be recovered from $\\bm{Y}$ and $\\bm{\\Delta}_1$ as follows.\n {\\small\n \\begin{align}\n \\bm{W}_{\\bm{\\theta}} = \\bm{A}:& ~(\\bm{\\eta}_b,\\bm{\\eta}_c) =( 2 \\bm{\\lambda}_2, 2 \\bm{\\lambda}_3), 2\\bm{A}=\\bm{Y}+\\bm{\\Delta}_1 \\notag\\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{B}:&~(\\bm{\\eta}_b,\\bm{\\eta}_c) =( 2 \\bm{\\lambda}_2, \\bm{\\lambda}_3), \\bm{\\lambda}_2\\bm{B}=2\\bm{Y}+\\bm{\\Delta}_1 \\notag\\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{C}:&~ (\\bm{\\eta}_b,\\bm{\\eta}_c) =( \\bm{\\lambda}_2, 2\\bm{\\lambda}_3),\\bm{\\lambda}_3\\bm{C}=2\\bm{Y}+\\bm{\\Delta}_1 \\notag\n \\end{align} \n }\n \n \\item When $\\bm{\\mathcal{S}}=\\{2,3,4\\}$ and $\\bm{Y} = \\bm{B} + \\bm{\\lambda}_2\\bm{C} + \\bm{\\lambda}_3\\bm{D}$, then $\\bm{\\eta}_b$ is randomly chosen from $\\mathbb{F}_{q}^{\\times}=\\{1,2\\}$ and $\\bm{\\eta}_c,\\bm{\\eta}_d$ are chosen so that the desired message $\\bm{W}_{\\bm\\theta}$ can be recovered from $\\bm{Y}$ and $\\bm{\\Delta}_2$ as follows.\n {\\small\n \\begin{align}\n \\bm{W}_{\\bm{\\theta}} = \\bm{B}:& ~(\\bm{\\eta}_c,\\bm{\\eta}_d) =( \\bm{\\eta}_b \\bm{\\lambda}_2, \\bm{\\eta}_b \\bm{\\lambda}_3), \\bm{B}=2\\bm{Y}+\\bm{\\Delta}_2\/\\bm{\\eta}_b \\notag \\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{C}:&~(\\bm{\\eta}_c,\\bm{\\eta}_d) =( \\bm{\\eta}_b \\bm{\\lambda}_2, 2\\bm{\\eta}_b \\bm{\\lambda}_3), 2\\bm{\\lambda}_2 \\bm{C}=\\bm{Y}+\\bm{\\Delta}_2\/\\bm{\\eta}_b \\notag \\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{D}:&~(\\bm{\\eta}_c,\\bm{\\eta}_d) =(2 \\bm{\\eta}_b \\bm{\\lambda}_2, \\bm{\\eta}_b \\bm{\\lambda}_3), 2\\bm{\\lambda}_3 \\bm{D}=\\bm{Y}+\\bm{\\Delta}_2\/\\bm{\\eta}_b \\notag \n \\end{align} \n }\n \\item When 
$\\bm{\\mathcal{S}}=\\{1,3,4\\}$ and $\\bm{Y} = \\bm{A} + \\bm{\\lambda}_2\\bm{C} + \\bm{\\lambda}_3\\bm{D}$, then $\\bm{\\eta}_b$ is randomly chosen from $\\mathbb{F}_{q}^{\\times}$ and $\\bm{\\eta}_c,\\bm{\\eta}_d$ are chosen so that the desired message $\\bm{W}_{\\bm\\theta}$ can be recovered from $\\bm{Y}$ and $\\bm{L}_1$ as follows.\n {\\small\n \\begin{align}\n \\bm{W}_{\\bm{\\theta}} = \\bm{A}:& ~(\\bm{\\eta}_c,\\bm{\\eta}_d) =( \\bm{\\lambda}_2, 2\\bm{\\lambda}_3), 2\\bm{A}=\\bm{Y}+\\bm{L}_1 \\notag \\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{C}:&~(\\bm{\\eta}_c,\\bm{\\eta}_d) =( \\bm{\\lambda}_2, \\bm{\\lambda}_3), \\bm{\\lambda}_2\\bm{C}=2\\bm{Y}+\\bm{L}_1 \\notag \\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{D}:&~(\\bm{\\eta}_c,\\bm{\\eta}_d) =( 2\\bm{\\lambda}_2, 2\\bm{\\lambda}_3), \\bm{\\lambda}_3 \\bm{D}=2\\bm{Y}+\\bm{L}_1 \\notag \n \\end{align} \n }\n\n \\item When $\\bm{\\mathcal{S}}=\\{1,2,4\\}$ and $\\bm{Y} = \\bm{A} + \\bm{\\lambda}_2\\bm{B} + \\bm{\\lambda}_3\\bm{D}$, then $\\bm{\\eta}_c$ is randomly chosen from $\\mathbb{F}_{q}^{\\times}$ and $\\bm{\\eta}_b,\\bm{\\eta}_d$ are chosen so that the desired message $\\bm{W}_{\\bm\\theta}$ can be recovered from $\\bm{Y}$ and $\\bm{L}_2$ as follows. \n {\\small\n \\begin{align}\n \\bm{W}_{\\bm{\\theta}} = \\bm{A}:& ~(\\bm{\\eta}_b,\\bm{\\eta}_d) =( \\bm{\\lambda}_2, \\bm{\\lambda}_3), 2\\bm{A}=\\bm{Y}+\\bm{L}_2 \\notag \\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{B}:&~(\\bm{\\eta}_b,\\bm{\\eta}_d) =( \\bm{\\lambda}_2, 2\\bm{\\lambda}_3), \\bm{\\lambda}_2\\bm{B}=2\\bm{Y}+\\bm{L}_2 \\notag \\\\\n \\bm{W}_{\\bm{\\theta}} = \\bm{D}:&~(\\bm{\\eta}_b,\\bm{\\eta}_d) =( 2\\bm{\\lambda}_2, \\bm{\\lambda}_3), \\bm{\\lambda}_3 \\bm{D}=2\\bm{Y}+\\bm{L}_2 \\notag \n \\end{align} \n }\n\\end{enumerate}\nCorrectness is already shown. For privacy, note that the form of the query is fixed as in \\eqref{eq:queryfixed} so the user only needs to specify $\\bm{\\eta}_b,\\bm{\\eta}_c,\\bm{\\eta}_d$, and those are i.i.d. 
uniform over $\\mathbb{F}_3^{\\times}=\\{1,2\\}$, regardless of $(\\bm{\\mathcal{S}},\\bm{\\theta})$. Thus, the scheme is private, and the rate achieved is $1\/2$, which completes the proof of Theorem \\ref{thm:M3K4}.\n\n\\section{Proof of Theorem \\ref{thm:pcsi2_pub_pri}}\\label{proof:pcsi2_pub_pri}\n\\subsection{Converse}\nHere we prove that \n\\begin{align}\n C_{\\mbox{\\tiny PCSI-II}}^{\\mbox{\\tiny pri}}(q) \\leq C_{\\mbox{\\tiny PCSI-II}}(q=2) = C_{\\mbox{\\tiny PCSI-II}}^{\\inf}.\n\\end{align}\n\nThe following lemma states that for PIR-PCSI and its variants, for every feasible $Q$ and $(\\theta, \\mathcal{S})$ value, all possible coefficient vectors must allow successful decoding.\n\\begin{lemma}\\label{lem:fullypri} Under the constraint of $(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$ privacy, \n \\begin{align}\n &\\mbox{PIR-PCSI: } \\forall (Q,\\mathcal{S},\\theta,\\Lambda)\\in\\mathcal{Q}\\times \\mathfrak{S}\\times[K]\\times\\mathfrak{C},\\notag\\\\\n &\\hspace{0.2cm} H(\\bm{W}_{\\theta} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},\\Lambda]}, \\bm{Q}=Q) = 0.\\label{eq:pcsi_pri}\\\\\n &\\mbox{PIR-PCSI-I: }\\forall (Q,\\mathcal{S},\\theta,\\Lambda)\\in\\mathcal{Q}\\times \\mathfrak{S}\\times([K]\\setminus\\mathcal{S})\\times\\mathfrak{C},\\notag\\\\\n &\\hspace{0.2cm} H(\\bm{W}_{\\theta} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},\\Lambda]}, \\bm{Q}=Q) = 0.\\label{eq:pcsi1_pri}\\\\\n &\\mbox{PIR-PCSI-II: }\\forall (Q,\\mathcal{S},\\theta,\\Lambda)\\in\\mathcal{Q}\\times \\mathfrak{S}\\times\\mathcal{S}\\times\\mathfrak{C},\\notag\\\\\n &\\hspace{0.2cm} H(\\bm{W}_{\\theta} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},\\Lambda]}, \\bm{Q}=Q) = 0.\\label{eq:pcsi2_pri}\n \\end{align}\n\\end{lemma}\n\\proof The server knows $\\bm{\\Delta}, \\bm{Q}$ and can test all possible realizations of $\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda}$ for decodability. 
If there exists $(\\theta, \\mathcal{S}, \\Lambda)$ such that $\\bm{W}_{\\theta}$ cannot be decoded, then that $(\\theta, \\mathcal{S}, \\Lambda)$ can be ruled out by the server. This contradicts the joint $(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$ privacy constraint.$\\hfill\\square$\n\nAs a direct result of \\eqref{eq:pcsi2_pri}, for any PIR-PCSI-II scheme that preserves joint $(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$ privacy, \n\\begin{align}\n H(\\bm{W}_{\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},\\Lambda]}, \\bm{Q}=Q) = 0,\\notag\\\\\n \\forall (\\mathcal{S}, \\Lambda, Q) \\in \\mathfrak{S}\\times\\mathfrak{C}\\times\\mathcal{Q}.\\label{eq:pcsi2_pri_dec}\n\\end{align}\nNote that \\eqref{eq:pcsi2_pri_dec} is a \\emph{stronger} version of \\eqref{eq:pcsi2_inf_dec}, which is sufficient to bound $C_{\\mbox{\\tiny PCSI-II}}(q=2)$. Thus, we have $C_{\\mbox{\\tiny PCSI-II}}^{\\mbox{\\tiny pri}}(q) \\leq C_{\\mbox{\\tiny PCSI-II}}(q=2) = C_{\\mbox{\\tiny PCSI-II}}^{\\inf}$.\n\n\\subsection{Achievability}\nThe \\emph{Generic Linear Combination Based Scheme} in Section \\ref{sec:PCSI2_inf_ach}, where $M-1$ linear combinations of each message (represented in the extended field $\\mathbb{F}_{q^l}$ where $L = Ml$) are downloaded, also works under $(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$ privacy, but with a slight modification. 
The only difference between the modified scheme and the infimum capacity achieving scheme of PIR-PCSI-II in Section \\ref{sec:PCSI2_inf_ach} is that, instead of the matrix in \\eqref{eq:F2_inv}, the following matrix \n\\begin{align}\n \\mathbf{G}_{\\mathcal{S}}^{(\\gamma_1, \\gamma_2, \\cdots,\\gamma_M)} = \n \\begin{bmatrix}\n \\gamma_{1}\\mathbf{I}_{M} & \\cdots & \\gamma_{M}\\mathbf{I}_{M}\\\\\n & \\mathbf{e}_{1}\\otimes\\mathbf{L}_{j_1}^{(1)} &\\\\\n & \\cdots &\\\\\n & \\mathbf{e}_{1}\\otimes\\mathbf{L}_{j_1}^{(M-1)} &\\\\\n & \\cdots &\\\\\n & \\mathbf{e}_{M}\\otimes\\mathbf{L}_{j_M}^{(1)} &\\\\\n & \\cdots &\\\\\n & \\mathbf{e}_{M}\\otimes\\mathbf{L}_{j_M}^{(M-1)} &\n \\end{bmatrix},\\label{eq:Fq_inv_arb}\n\\end{align}\nmust have full rank for every $\\mathcal{S} = \\{j_1, \\cdots, j_{M}\\} \\in \\mathfrak{S}$ and every realization of $(\\gamma_1, \\gamma_2, \\cdots,\\gamma_M) \\in \\mathfrak{C}$. Let us prove that the scheme is correct and jointly private, and that such $\\mathbf{L}_{\\cdot}^{(\\cdot)}$ vectors exist when $l$ is large enough that\n\\begin{align}\n q^l > (q-1)^{M}\\tbinom{K}{M}M(M-1).\n\\end{align}\n\n\n\\proof For a particular realization of $(\\gamma_1, \\gamma_2, \\cdots, \\gamma_M)$, e.g., $(\\gamma_1, \\gamma_2, \\cdots, \\gamma_M) = (1, 1, \\cdots, 1)$, \\eqref{eq:Fq_inv_arb} yields a set of $\\tbinom{K}{M}$ matrices \n\\begin{align}\n \\mathcal{G}^{(1,1,\\cdots,1)} = \\{\\mathbf{G}_{\\mathcal{S}_{1}}^{(1,1,\\cdots,1)}, \\mathbf{G}_{\\mathcal{S}_{2}}^{(1,1,\\cdots,1)}, \\cdots, \\mathbf{G}_{\\mathcal{S}_{\\tbinom{K}{M}}}^{(1,1,\\cdots,1)}\\}\\notag\n\\end{align}\ncorresponding to all possible $\\{j_1, j_2, \\cdots, j_M\\} \\in \\mathfrak{S}$. 
If all the $\\tbinom{K}{M}$ matrices in $\\mathcal{G}^{(1,1,\\cdots,1)}$ are invertible, this scheme preserves the joint privacy of $(\\bm{\\theta}, \\bm{\\mathcal{S}})$ and enables the user to decode all the $M$ messages in the support set, when all the coefficients in the CSI are $1$, according to Appendix \\ref{app:existence}. \n\nGoing over all the possible realizations of $(\\gamma_1, \\cdots, \\gamma_M) \\in \\mathfrak{C}$ and $\\{j_1, j_2, \\cdots, j_M\\} \\in \\mathfrak{S}$, \\eqref{eq:Fq_inv_arb} yields $(q-1)^{M}$ sets of matrices \n\\begin{align}\n \\mathcal{G}^{(1,\\cdots,1)}, \\mathcal{G}^{(1,\\cdots,1,2)}, \\cdots, \\mathcal{G}^{(q-1,\\cdots,q-1)},\n\\end{align}\neach of which contains $\\tbinom{K}{M}$ matrices, i.e., there are in total $(q-1)^{M}\\tbinom{K}{M}$ matrices. If all the $(q-1)^{M}\\tbinom{K}{M}$ matrices are invertible, then for an arbitrary realization of $(\\gamma_1, \\gamma_2, \\cdots, \\gamma_M)$, i.e., arbitrary $M$ coefficients in the CSI, this scheme enables the user to decode all the $M$ messages in the support set and preserves the joint $(\\bm{\\theta}, \\bm{\\mathcal{S}})$ privacy. Since this scheme works for arbitrary coefficients, from the server's perspective, all the realizations of the $M$ coefficients are equally likely. Thus, the joint privacy of the coefficients $\\bm{\\Lambda}$, the index $\\bm{\\theta}$, and the support set $\\bm{\\mathcal{S}}$ is preserved.\n\nTo prove the existence of such linear combinations, note that the determinant of each one of the $(q-1)^{M}\\tbinom{K}{M}$ matrices yields a degree $M(M-1)$ multi-variate polynomial as proved in Appendix \\ref{app:existence}. Thus, the product $F$ of the determinants of all the matrices is a multi-variate polynomial of degree $(q-1)^{M}\\tbinom{K}{M}M(M-1)$. 
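As a quick arithmetic sanity check of this counting, the following sketch (with sample parameters $q=3$, $K=5$, $M=3$, chosen here only for illustration) computes the number of matrices, the degree of $F$, and the smallest extension degree $l$ satisfying the field-size condition:

```python
from math import comb

# Count the matrices and the degree of the product polynomial F,
# then find the smallest l with q^l exceeding that degree.
q, K, M = 3, 5, 3                          # sample parameters (illustration only)
num_matrices = (q - 1) ** M * comb(K, M)   # (q-1)^M * C(K,M) matrices in total
deg_F = num_matrices * M * (M - 1)         # each determinant has degree M(M-1)

l = 1
while q ** l <= deg_F:
    l += 1

assert num_matrices == 80 and deg_F == 480 and l == 6  # 3^6 = 729 > 480
```

So for these sample parameters, 80 matrices must be simultaneously invertible and an extension degree of $l=6$ already suffices for the argument below.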
Again, as in Appendix \\ref{app:existence}, according to the Schwartz-Zippel Lemma, when $q^l > (q-1)^{M}\\tbinom{K}{M}M(M-1)$, there exist elements in $\\mathbb{F}_{q^l}$ such that the polynomial $F$ does not evaluate to $0$, i.e., all the $(q-1)^{M}\\tbinom{K}{M}$ matrices are invertible. \n\n\n\\section{Proof of Theorem \\ref{thm:redundancy1}}\\label{proof:redundancy1}\nHere we bound the redundancy $\\rho_{\\mbox{\\tiny PCSI-I}}$ from above (equivalently, lower-bound $\\alpha^{*}$) for $1 \\leq M \\leq K-1$.\n\nRecall that the supremum capacity for PIR-PCSI-I is $(K-M)^{-1}$, i.e., the optimal average download cost is $D\/L = K-M$. Consider an achievable scheme such that $\\alpha$ PCSI is sufficient and the average download cost satisfies $D\/L \\leq K-M+\\epsilon$ for some $L$. Since $D\/L \\leq K-M+\\epsilon$, we have $L(K-M) + \\epsilon L \\geq D \\geq H(\\bm{\\Delta} \\mid \\bm{Q})$. Thus, there exists a feasible $Q$ such that \n\\begin{align}\n H(\\bm{\\Delta} \\mid \\bm{Q}=Q) \\leq (K-M)L + \\epsilon L.\n\\end{align}\nFor all $i \\in [M]$, let $\\mathcal{S}_{i} = [M+1] \\setminus \\{i\\}$. Also, for all $i \\in [M+1:K]$, let $\\mathcal{S}_{i} = [M]$. 
For all $i \\in [K]$, let $\\Lambda_i \\in \\mathfrak{C}$ satisfy \n\\begin{align}\n H(\\bm{W}_{i} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}^{[\\mathcal{S}_i, \\Lambda_i]}, \\bm{Q}=Q) = 0.\\label{eq:redundancy1_1}\n\\end{align}\nSuch $\\Lambda_i$'s must exist according to \\eqref{eq:lemma1pcsi1} in Lemma \\ref{lem:privacy}.\n\nWriting $\\overline{\\bm{Y}}^{[\\mathcal{S}_i, \\Lambda_i]}$ as $\\overline{\\bm{Y}}_{i}$ for compact notation, we have \n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}, \\bm{Q}=Q)\\\\\n &= H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}, \\bm{W}_{[M]}, \\bm{Q}=Q)\\label{eq:redundancy1_2}\\\\\n &= H(\\bm{W}_{[M+1:K]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[K]}, \\bm{W}_{[M]}, \\bm{Q}=Q)\\label{eq:redundancy1_3}\\\\\n &= 0\\label{eq:redundancy1_4},\n\\end{align}\nwhere \\eqref{eq:redundancy1_2} follows from \\eqref{eq:redundancy1_1}. \\eqref{eq:redundancy1_3} is correct since $\\overline{\\bm{Y}}_{[M+1:K]}$ are functions of $\\bm{W}_{[M]}$. \\eqref{eq:redundancy1_4} follows from \\eqref{eq:redundancy1_1}. Since we are considering the case where the supremum capacity is achieved, we have \n\\begin{align}\n &(K-M)L + \\epsilon L + M\\alpha L\\notag\\\\\n &\\geq H(\\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]} \\mid \\bm{Q}=Q)\\label{eq:redundancy1_5}\\\\\n &\\geq I(\\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}; \\bm{W}_{[K]} \\mid \\bm{Q}=Q)\\notag\\\\\n &= H(\\bm{W}_{[K]} \\mid \\bm{Q}=Q) = KL.\\label{eq:redundancy1_6}\n\\end{align}\n\\eqref{eq:redundancy1_5} follows from \\eqref{eq:invaYR}. Step \\eqref{eq:redundancy1_6} follows from \\eqref{eq:redundancy1_4} and the fact that the query and the messages are mutually independent according to \\eqref{eq:indQ}. Thus we have $\\alpha \\geq 1 - \\frac{\\epsilon}{M}$. 
In order to approach capacity, we must have $\\epsilon\\rightarrow 0$, so we need $\\alpha\\geq 1$, and since this is true for any $\\alpha$ such that $\\alpha$ PCSI is sufficient, it is also true for $\\alpha^*$. Thus, the redundancy is bounded as $\\rho_{\\mbox{\\tiny PCSI-I}}\\leq 0$.\n\n\\section{Proof of Theorem \\ref{thm:cap_PCSI1_inf}}\\label{sec:cap_PCSI1_inf}\n\\subsection{Converse for $C_{\\mbox{\\tiny PCSI-I}}(q=2)$}\nAgain, \\eqref{eq:lemma1pcsi1} is true for arbitrary $\\mathbb{F}_{q}$. The only thing different in $\\mathbb{F}_{2}$ is that $\\bm{\\Lambda}$ must be the vector of all ones. As a direct result of \\eqref{eq:lemma1pcsi1}, for PIR-PCSI-I in $\\mathbb{F}_{2}$,\n\\begin{align}\n H(\\bm{W}_{[K]\\setminus\\mathcal{S}} \\mid \\bm{\\Delta}, {\\bm{Y}}^{[\\mathcal{S}, 1_{M}]}, \\bm{Q} = Q) = 0, \\forall (Q,\\mathcal{S}) \\in \\mathcal{Q} \\times \\mathfrak{S}\\label{eq:dec_inf_PCSI1_1}\n\\end{align}\nand thus \n\\begin{align}\n &H(\\bm{W}_{[K]\\setminus\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Q} = Q) \\\\\n &=I(\\bm{W}_{[K]\\setminus\\mathcal{S}}; {\\bm{Y}}^{[\\mathcal{S}, 1_{M}]}\\mid \\bm{\\Delta}, \\bm{Q} = Q)\\\\\n &\\leq H({\\bm{Y}}^{[\\mathcal{S}, 1_{M}]}\\mid \\bm{\\Delta}, \\bm{Q} = Q)\\\\\n &\\leq L, ~~\\forall (Q,\\mathcal{S}) \\in \\mathcal{Q} \\times \\mathfrak{S}.\n\\end{align}\nAveraging over $\\bm{Q}$ gives \n\\begin{align}\n H(\\bm{W}_{[K]\\setminus\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Q}) \\leq L, \\forall \\mathcal{S} \\in\\mathfrak{S}.\\label{eq:dec_inf_PCSI1_2}\n\\end{align}\nAlso, for all $\\mathcal{S} \\in \\mathfrak{S}$ and $Q \\in \\mathcal{Q}$, \n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\\\ \n &= H(\\bm{W}_{\\mathcal{S}}\\mid \\bm{\\Delta}, \\bm{Q}=Q)\\notag\\\\\n &~~ + H(\\bm{W}_{[K]\\setminus\\mathcal{S}}\\mid \\bm{\\Delta}, \\bm{W}_{\\mathcal{S}}, \\bm{Q}=Q)\\\\\n &= H(\\bm{W}_{\\mathcal{S}}\\mid \\bm{\\Delta}, \\bm{Q}=Q)\\notag\\\\ \n &~~ + H(\\bm{W}_{[K]\\setminus\\mathcal{S}}\\mid 
\\bm{\\Delta}, \\bm{W}_{\\mathcal{S}}, {\\bm{Y}}^{[\\mathcal{S}, 1_{M}]}, \\bm{Q}=Q)\\label{eq:pcsi1_inf_Ysum}\\\\\n &= H(\\bm{W}_{\\mathcal{S}}\\mid \\bm{\\Delta}, \\bm{Q}=Q),\n\\end{align}\nwhere \\eqref{eq:pcsi1_inf_Ysum} results from the fact that $\\overline{\\bm{Y}}^{[\\mathcal{S}, 1_{M}]} = \\sum_{s \\in \\mathcal{S}}\\bm{W}_{s}$, and the last step follows from \\eqref{eq:dec_inf_PCSI1_1}. Averaging over $\\bm{Q}$, it follows that \n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) = H(\\bm{W}_{\\mathcal{S}}\\mid \\bm{\\Delta}, \\bm{Q}), &&\\forall \\mathcal{S} \\in \\mathfrak{S}.\\label{eq:pcsi1_inf_equ}\n\\end{align}\n\nLet us first prove $C_{\\mbox{\\tiny PCSI-I}}(q=2) \\leq (K-1)^{-1}$ in the regime where $1 \\leq M \\leq \\frac{K}{2}$. \n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\notag\\\\\n &=H(\\bm{W}_{[M]} \\mid \\bm{\\Delta}, \\bm{Q})\\label{eq:dec_inf_PCSI1_r1_1}\\\\\n &\\leq H(\\bm{W}_{[K-M]} \\mid \\bm{\\Delta}, \\bm{Q})\\label{eq:MtoK-M}\\\\\n & \\leq L \\label{eq:K-MtoL},\n\\end{align}\nwhere \\eqref{eq:dec_inf_PCSI1_r1_1} is true according to \\eqref{eq:pcsi1_inf_equ}, \\eqref{eq:MtoK-M} follows from $(K-M \\geq M)$ and \\eqref{eq:dec_inf_PCSI1_1}, and \\eqref{eq:K-MtoL} follows from \\eqref{eq:dec_inf_PCSI1_2}. Thus\n\\begin{align}\n H(\\bm{\\Delta} \\mid \\bm{Q}) &\\geq I(\\bm{\\Delta}; \\bm{W}_{[K]} \\mid \\bm{Q})\\\\\n &= H(\\bm{W}_{[K]} \\mid \\bm{Q}) - H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q})\\\\\n &\\geq KL - L.\n\\end{align} \nThus $D \\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq KL-L$ and since the rate $L\/D\\leq (K-1)^{-1}$ for every achievable scheme, we have shown that $C_{\\mbox{\\tiny PCSI-I}}(q=2) \\leq (K-1)^{-1}$ when $K-M\\geq M\\geq 1$, i.e., $1\\leq M\\leq K\/2$.\n\nNext let us prove that $C_{\\mbox{\\tiny PCSI-I}}(q=2) \\leq \\big(K - \\frac{M}{K-M}\\big)^{-1}$ for the regime $\\frac{K}{2} < M \\leq K-1$. 
It suffices to prove $H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq KL - \\frac{ML}{K-M}$. Define\n\\begin{align}\n H_{m}^{K} = \\frac{1}{\\tbinom{K}{m}}\\sum_{\\mathcal{M}:\\mathcal{M}\\subset[K], |\\mathcal{M}|=m}\\frac{H(\\bm{W}_{\\mathcal{M}} \\mid \\bm{\\Delta}, \\bm{Q})}{m}.\n\\end{align}\nThen we have\n\\begin{align}\n H_{K-M}^{K}&\\geq H_{M}^{K}\\label{eq:dec_inf_PCSI1_r2_1}\\\\\n &= \\frac{H(\\bm{W}_{[K]}\\mid \\bm{\\Delta}, \\bm{Q})}{M},\\label{eq:dec_inf_PCSI1_r2_2}\n\\end{align}\nwhere \\eqref{eq:dec_inf_PCSI1_r2_1} follows from Han's inequality \\cite{Cover_Thomas}, and \\eqref{eq:dec_inf_PCSI1_r2_2} follows from \\eqref{eq:pcsi1_inf_equ}. Note that according to \\eqref{eq:dec_inf_PCSI1_2},\n\\begin{align}\n \\frac{L}{K-M} \\geq H_{K-M}^{K},\n\\end{align}\nand therefore,\n\\begin{align}\n H(\\bm{W}_{[K]}\\mid \\bm{\\Delta}, \\bm{Q}) \\leq \\frac{ML}{K-M}.\n\\end{align}\nThus, $H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq KL - \\frac{ML}{K-M}$, which completes the converse proof for Theorem \\ref{thm:cap_PCSI1_inf}. We next prove achievability.\n\n\\subsection{Two PIR-PCSI-I Schemes for Arbitrary $q$}\\label{sec:PCSI1_inf_ach}\n\\subsubsection{Achieving rate $\\frac{1}{K-1}$ when $1 \\leq M \\leq \\frac{K}{2}$}\\label{sec:PCSI1_inf_ach1}\nThe goal here is to download $K-1$ generic linear combinations so that, along with the one linear combination already available as side-information, the user has enough information to retrieve all $K$ messages. Let $L$ be large enough that $q^L > \\tbinom{K}{M}(K-1)$. For all $k \\in [K]$, message $\\bm{W}_k \\in \\mathbb{F}_{q}^{L\\times 1}$ can be represented as a scalar $\\bm{w}_k \\in \\mathbb{F}_{q^L}$. Let \n\\begin{align}\n \\bm{w}_{[K]} = \n \\begin{bmatrix}\n \\bm{w}_1 & \\bm{w}_2 & \\cdots & \\bm{w}_K\n \\end{bmatrix}^{\\mathrm{T}} \\in \\mathbb{F}_{q^L}^{K\\times 1},\n\\end{align}\nbe the length $K$ column vector whose entries are the messages represented in $\\mathbb{F}_{q^{L}}$. 
Let $\\Psi \\in\\mathbb{F}_{q^L}^{K\\times (K-1)}$ be a $K\\times (K-1)$ matrix whose elements are the variables $\\psi_{ij}$. The user downloads \n\\begin{align}\n \\bm{\\Delta} =\\Psi^T \\bm{w}_{[K]} \\in\\mathbb{F}_{q^L}^{(K-1)\\times 1}.\n\\end{align}\nSuppose the realization of the coefficient vector is $\\bm{\\Lambda}=\\Lambda$. The linear combination available to the user can be expressed as $\\bm{Y}^{[\\bm{\\mathcal{S}},{\\Lambda}]}=U_{{{\\Lambda}},\\bm{\\mathcal{S}}}^T\\bm{w}_{[K]}$ for some $K\\times 1$ vector $U_{{\\Lambda},\\bm{\\mathcal{S}}}$ that depends on $({\\Lambda},\\bm{\\mathcal{S}})$. Combined with the download, the user has \n\\begin{align}\n[U_{{\\Lambda},\\bm{\\mathcal{S}}}, \\Psi]^T\\bm{w}_{[K]},\n\\end{align}\nso if the $K\\times K$ matrix $G_{{\\Lambda},\\bm{\\mathcal{S}}}=[U_{{\\Lambda},\\bm{\\mathcal{S}}}, \\Psi]$ is invertible (full rank) then the user can decode all $K$ messages. For all $\\mathcal{S}\\in\\mathfrak{S}$, let $f_{\\Lambda,\\mathcal{S}}(\\cdot)$ be the multi-variate polynomial of degree $K-1$ in variables $\\psi_{ij}$, representing the determinant of $G_{{\\Lambda},{\\mathcal{S}}}$. This is not the zero polynomial because the $K-1$ columns of $\\Psi$ can always be chosen to be linearly independent of the vector $U_{{\\Lambda},{\\mathcal{S}}}$ in a $K$-dimensional vector space. The product of all such polynomials, $f_\\Lambda=\\prod_{\\mathcal{S}\\in\\mathfrak{S}}f_{\\Lambda,\\mathcal{S}}$, is itself a multi-variate non-zero polynomial of degree $(K-1)\\binom{K}{M}$ in the variables $\\psi_{ij}$. By the Schwartz-Zippel Lemma, if the $\\psi_{ij}$ are chosen randomly from $\\mathbb{F}_{q^L}$ then the probability that the corresponding evaluation of $f_\\Lambda$ is zero is no more than $(K-1)\\binom{K}{M}\/q^L<1$, so there exists a choice of $\\psi_{ij}$ for which all $f_{\\Lambda,\\mathcal{S}}$ evaluate to non-zero values, i.e., $G_{\\Lambda, \\mathcal{S}}$ is invertible for every $\\mathcal{S}\\in\\mathfrak{S}$. 
Thus, with this choice of $\\Psi$, we have a scheme with rate $1\/(K-1)$ that is correct and private and allows the user to retrieve all $K$ messages. To verify privacy, note that the user constructs the query based on the realization of $\\bm\\Lambda$ alone, and does not need to know $(\\bm{\\mathcal{S}},\\bm{\\theta})$ before it sends the query, so the query is independent of $(\\bm{\\mathcal{S}},\\bm{\\theta})$. \n\n\\begin{remark}\\label{rmk:pcsi1_inf_pcsi_inf}\nSince the scheme allows the user to decode all messages, the scheme also works if $\\bm{\\theta}$ is uniformly drawn from $[K]$, i.e., in the PIR-PCSI setting.\n\\end{remark}\n\n\\subsubsection{Achieving rate $(K-\\frac{M}{K-M})^{-1}$ when $K\/2 < M \\leq K-1$}\nNow let us present a scheme with rate $(K-\\frac{M}{K-M})^{-1}$, which is optimal for the regime $\\frac{K}{2} < M \\leq K-1$. The scheme consists of two steps.\n\n\\emph{Step 1}: The user converts the $(M,K)$ PIR-PCSI-I problem to a $(K-M,K)$ PIR-PCSI-II problem as follows.\n\nThe user first downloads\n\\begin{align}\n \\bm{\\Delta}_{1} = \\sum_{k \\in [K]}\\bm{a}_{k}\\bm{W}_{k},\n\\end{align}\nwhere $\\bm{a}_{\\bm{i}_m} = \\bm{\\lambda}_m$ for $\\bm{i}_{m} \\in \\bm{\\mathcal{S}}$, while for $k \\notin \\bm{\\mathcal{S}}$, the $\\bm{a}_k$'s are independently and uniformly drawn from $\\mathbb{F}_{q}^{\\times}$.\nThe user then computes\n\\begin{align}\n \\bm{Y}^{\\prime} = \\bm{\\Delta}_1 - \\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]} = \\sum_{k \\in [K]\\setminus \\bm{\\mathcal{S}}}\\bm{a}_{k}\\bm{W}_{k}.\n\\end{align} \nIn this step, from the server's perspective, $\\bm{a}_1, \\cdots, \\bm{a}_K$ are i.i.d. uniform over $\\mathbb{F}_{q}^{\\times}$, thus there is no loss of privacy. 
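The conversion in \emph{Step 1} can be checked numerically; the following sketch uses our own toy values over $\mathbb{F}_5$ (hypothetical messages, support set, and coefficients, not part of the formal scheme) to confirm that $\bm{Y}^{\prime}$ involves only the messages outside the support set:

```python
import random

# Toy check of Step 1 over F_q with q = 5, K = 5, M = 3 (hypothetical values).
q, K = 5, 5
W = [2, 4, 1, 3, 0]        # messages W_1..W_5 as scalars in F_5 (L = 1)
S = [0, 1, 2]              # support set S = {1, 2, 3}, 0-indexed
lam = [1, 3, 2]            # CSI coefficients lambda_1..lambda_3 (non-zero)

# CSI: Y = sum over the support set of lambda_m * W_{i_m}.
Y = sum(c * W[i] for c, i in zip(lam, S)) % q

# Download Delta_1: coefficients agree with lambda on S, uniform non-zero off S.
a = [0] * K
for c, i in zip(lam, S):
    a[i] = c
for k in range(K):
    if k not in S:
        a[k] = random.randrange(1, q)
Delta1 = sum(a[k] * W[k] for k in range(K)) % q

# Y' = Delta_1 - Y depends only on the messages outside S.
Yp = (Delta1 - Y) % q
assert Yp == sum(a[k] * W[k] for k in range(K) if k not in S) % q
```

Because the on-support coefficients of $\bm{\Delta}_1$ cancel exactly against the CSI, the residue $\bm{Y}^{\prime}$ is a valid coded side information for the $K-M$ remaining messages, as the proof asserts.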
The download cost of this step is $H(\\bm{\\Delta}_1) = L$.\n\n\\emph{Step 2}: The user has $\\bm{Y}^{\\prime}$ as coded side information and applies the fully private PIR-PCSI-II scheme described in Section \\ref{proof:pcsi2_pub_pri} that protects the privacy of all the coefficients. \n\nThe reason to apply the PIR-PCSI-II scheme that maintains the privacy of coefficients is that in \\emph{Step 1}, the server knows $\\bm{a}_1, \\cdots, \\bm{a}_K$. If, in the second step, the query is not independent of $\\bm{a}_i, i \\in [K]\\setminus \\bm{\\mathcal{S}}$, then the server may be able to rule out some realizations of $\\bm{\\mathcal{S}}$. The download cost of this step is $\\frac{K(K-M-1)L}{K-M}$. Thus, the total download cost of this scheme is $KL - \\frac{ML}{K-M}$ and the rate is $\\big(K - \\frac{M}{K-M}\\big)^{-1}$.\n\n\\section{Proof of Theorem \\ref{thm:pcsi1_pub_pri}}\\label{proof:pcsi1_pub_pri}\n\\subsection{Proof of $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\sup}=C_{\\mbox{\\tiny PCSI-I}}^{\\inf}$}\nFirst let us prove the converse. As a direct result of \\eqref{eq:pcsi1_pri} in Lemma \\ref{lem:fullypri}, for any PIR-PCSI-I scheme that preserves joint $(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$ privacy, \n\\begin{align}\n H(\\bm{W}_{[K]\\setminus\\mathcal{S}} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},\\Lambda]}, \\bm{Q}=Q) = 0, \\notag\\\\\n \\forall (\\mathcal{S}, \\Lambda, Q) \\in \\mathfrak{S}\\times\\mathfrak{C}\\times\\mathcal{Q}.\\label{eq:pcsi1_pri_dec}\n\\end{align}\nNote that \\eqref{eq:pcsi1_pri_dec} is a stronger version of \\eqref{eq:dec_inf_PCSI1_1}, which is sufficient to bound $C_{\\mbox{\\tiny PCSI-I}}(q=2)$. 
Thus, we have $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q) \\leq C_{\\mbox{\\tiny PCSI-I}}(q=2) = C_{\\mbox{\\tiny PCSI-I}}^{\\inf}$, which completes the proof of converse.\n\nFor achievability, let us note that $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\sup}\\geq C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q=2)=C_{\\mbox{\\tiny PCSI-I}}(q=2)=C_{\\mbox{\\tiny PCSI-I}}^{\\inf}$, because over $\\mathbb{F}_2$, the $\\bm{\\Lambda}$ vector is constant (all ones) and therefore trivially private.\n\n\\subsection{Proof of the bound: $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\inf} \\leq \\min\\bigg(C_{\\mbox{\\tiny PCSI-I}}^{\\inf}, \\frac{1}{K-2}\\bigg)$}\nSince privacy of $\\bm\\Lambda$ only further constrains PIR-PCSI, it is trivial that $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\inf} \\leq C_{\\mbox{\\tiny PCSI-I}}^{\\inf}$. For the remaining bound, $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\inf}\\leq \\frac{1}{K-2}$, it suffices to show that $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q\\geq M) \\leq \\frac{1}{K-2}$, because $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\inf}\\leq C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q\\geq M)$. 
Note that by $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q\\geq M)$ we mean $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q)$ for all $q\\geq M$.\n\nLet \n\\begin{align}\n \\bm{Y}_1 &= \\bm{W}_2 + \\alpha_3 \\bm{W}_3 + \\cdots + \\alpha_{M+1} \\bm{W}_{M+1},\\\\\n \\bm{Y}_2 &= \\bm{W}_1 + \\bm{W}_3 + \\bm{W}_4 + \\cdots + \\bm{W}_{M+1},\n\\end{align}\nwhere $\\alpha_3, \\alpha_4, \\cdots, \\alpha_{M+1}$ are $M-1$ distinct elements in $\\mathbb{F}_{q}^\\times$.\n\nLet $\\beta_3, \\beta_4, \\dots, \\beta_{M+1}$ be $M-1$ distinct elements in $\\mathbb{F}_{q}^\\times$ such that $\\beta_{m}\\alpha_{m} + 1 = 0$ in $\\mathbb{F}_{q}$ for all $m \\in [3:M+1]$.\n\nNote that such $\\alpha$'s and $\\beta$'s exist since $q \\geq M$.\n\nThen let \n\\begin{align}\n \\bm{Y}_{m} &= \\beta_m \\bm{Y}_1 + \\bm{Y}_2 \\notag\\\\\n &= \\bm{W}_1 + \\beta_m \\bm{W}_2 + (\\beta_m \\alpha_3 + 1)\\bm{W}_3 + \\cdots \\notag\\\\\n &\\quad +(\\beta_m \\alpha_i + 1) \\bm{W}_i + \\cdots + (\\beta_m \\alpha_{M+1} + 1)\\bm{W}_{M+1}, \\notag\\\\\n &\\forall m \\in [3:M+1],\n\\end{align}\nbe $M-1$ linear combinations of the first $M+1$ messages $\\bm{W}_{[M+1]}$. Note that for any $m \\in [3:M+1]$, the coefficient for $\\bm{W}_m$ in $\\bm{Y}_m$ (i.e., $\\beta_{m}\\alpha_{m} + 1$) is $0$ while the coefficient for any $\\bm{W}_i, i\\in[M+1], i\\neq m$ (i.e., $\\beta_{m}\\alpha_{i} + 1$) is non-zero\\footnote{Since $\\beta_{m}\\alpha_{m}+1=0$ and the $\\alpha$'s are distinct, $\\beta_{m}\\alpha_{i}+1\\neq 0$ for $i \\neq m$; the coefficients of $\\bm{W}_1$ and $\\bm{W}_2$, namely $1$ and $\\beta_m$, are also non-zero.}. For example, \n\\begin{align}\n \\bm{Y}_3 &= \\bm{W}_1 + \\beta_3 \\bm{W}_2 + 0\\bm{W}_3 + (\\beta_3\\alpha_4 + 1)\\bm{W}_4\\notag\\\\\n &\\quad + \\cdots + (\\beta_3\\alpha_{M+1} + 1)\\bm{W}_{M+1}.\n\\end{align}\nThus, for any $m \\in [M+1]$, $\\bm{Y}_m$ is a linear combination of $M$ messages $\\bm{W}_{[M+1]\\setminus\\{m\\}}$ with non-zero coefficients. 
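The defining properties of this construction ($\beta_m\alpha_m + 1 = 0$ while every other coefficient is non-zero) can be verified numerically; a minimal sketch over a prime field, with sample parameters $q = 7$ and $M = 5$ of our own choosing, taking $\beta_m = -\alpha_m^{-1}$:

```python
# Verify the alpha/beta construction over F_q for prime q = 7 >= M = 5.
q, M = 7, 5
alpha = {m: m - 2 for m in range(3, M + 2)}   # M-1 distinct non-zero elements: 1..M-1
# beta_m = -alpha_m^{-1} mod q, so that beta_m * alpha_m + 1 = 0 in F_q.
beta = {m: (-pow(a, q - 2, q)) % q for m, a in alpha.items()}

assert len(set(beta.values())) == M - 1        # betas are distinct
for m in range(3, M + 2):
    assert beta[m] != 0                        # coefficient of W_2 is non-zero
    assert (beta[m] * alpha[m] + 1) % q == 0   # coefficient of W_m in Y_m vanishes
    for i in range(3, M + 2):
        if i != m:
            assert (beta[m] * alpha[i] + 1) % q != 0  # other coefficients non-zero
```

Since inversion is a bijection on $\mathbb{F}_q^\times$, distinctness of the $\alpha$'s automatically yields distinct $\beta$'s, matching the existence claim for $q \geq M$.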
For $\\mathcal{S}_m=[M+1]\\setminus\\{m\\}$ and $\\Lambda_m$ as the vector of coefficients that appear in $\\bm{Y}_m$, we have $\\bm{Y}^{[\\mathcal{S}_m,\\Lambda_m]}=\\bm{Y}_m$.\n\nAccording to \\eqref{eq:pcsi1_pri_dec}, \n\\begin{align}\n H(\\bm{W}_m, \\bm{W}_{[M+2:K]} \\mid \\bm{\\Delta}, \\bm{Y}_m, \\bm{Q} = Q) = 0,\\notag\\\\\n \\forall m \\in [M+1], Q \\in \\mathcal{Q}. \\label{eq:PCSI1_pri_dec1}\n\\end{align}\nThus, for all $Q\\in\\mathcal{Q}$,\n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q} = Q) \\notag\\\\\n &\\leq H(\\bm{W}_{[K]}, \\bm{Y}_{[M+1]} \\mid \\bm{\\Delta}, \\bm{Q}=Q) \\\\\n &= H(\\bm{Y}_{[M+1]} \\mid \\bm{\\Delta}, \\bm{Q}=Q) + H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}_{[M+1]}, \\bm{Q}=Q)\\\\\n &= H(\\bm{Y}_1, \\bm{Y}_2 \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\label{eq:PCSI1_pri_dec2}\\\\\n &\\leq 2L,\n\\end{align}\nwhere \\eqref{eq:PCSI1_pri_dec2} follows from \\eqref{eq:PCSI1_pri_dec1} and the fact that $\\bm{Y}_{[3:M+1]}$ are functions of $\\bm{Y}_{1}, \\bm{Y}_{2}$. Averaging over $\\bm{Q}$, we have \n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\leq 2L.\n\\end{align}\n\n\\noindent Therefore, the average download cost is bounded as\n\\begin{align}\n D&\\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq H(\\bm{W}_{[K]}\\mid\\bm{Q}) - H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\\\\n & \\geq (K-2)L.\n\\end{align}\nThus, for $q\\geq M$, we have $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q) \\leq \\frac{1}{K-2}$.\n\n\\subsection{Proof of $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}, \\inf} \\geq \\frac{1}{K-1}$}\\label{sec:pcsi1_pri_ach}\nWe need to show that $C_{\\mbox{\\tiny PCSI-I}}^{\\mbox{\\tiny pri}}(q)\\geq \\frac{1}{K-1}$ for all $\\mathbb{F}_q$. The scheme is identical to the scheme with rate $(K-1)^{-1}$ in Section \\ref{sec:PCSI1_inf_ach1} with a slight modification. 
Instead of fixing a realization $\\bm{\\Lambda}=\\Lambda$, we will consider all possible realizations $\\Lambda\\in\\mathfrak{C}$, and consider the product polynomial $f=\\prod_{\\Lambda\\in\\mathfrak{C}}f_{\\Lambda}$, which is a multi-variate polynomial of degree $(K-1)\\binom{K}{M}(q-1)^M$ in variables $\\psi_{ij}$. Following the same argument based on the Schwartz-Zippel Lemma, we find that there exists a $\\Psi$ for which all $G_{\\Lambda,\\mathcal{S}}$ are invertible matrices, provided that $L$ is large enough that $q^L>(q-1)^M(K-1)\\binom{K}{M}$. Thus, with this choice of $\\Psi$ we have a scheme that allows the user to retrieve all $K$ messages. The scheme is also $(\\bm{\\mathcal{S}},\\bm{\\theta},\\bm{\\Lambda})$ private because the user does not need to know the realization of $(\\bm{\\mathcal{S}},\\bm{\\theta},\\bm{\\Lambda})$ before it sends the query, so the query is independent of $(\\bm{\\mathcal{S}},\\bm{\\theta},\\bm{\\Lambda})$. \n\n\\begin{remark}\\label{rmk:pcsi1_pri_pcsi_pri}\nSince the scheme allows the user to decode all messages, and the query does not depend on $(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$, the scheme also works if $\\bm{\\theta}$ is uniformly drawn from $[K]$, i.e., in the PIR-PCSI setting.\n\\end{remark}\n\n\\section{Proof of Theorem \\ref{thm:cap_PCSI_sup}}\\label{sec:cap_PCSI_sup}\n\\subsection{Converse}\nThe converse is divided into two regimes.\n\n\\textbf{Regime 1}: $2 \\leq M \\leq K$. The proof relies on \\eqref{eq:lemma1pcsi} in Lemma \\ref{lem:privacy}.\nConsider any particular realization $Q \\in \\mathcal{Q}$ of $\\bm{Q}$. 
For all $i \\in [K]$, consider $\\mathcal{S} = [M], \\theta = i$, and let $\\Lambda_i$ be a coefficient vector that satisfies \\eqref{eq:lemma1pcsi} according to Lemma \\ref{lem:privacy}, so that \n\\begin{align}\n H(\\bm{W}_i \\mid \\bm{\\Delta}, \\bm{Y}^{[[M],\\Lambda_i]}, \\bm{Q} = Q) = 0.\\label{eq:con_PCSI_0}\n\\end{align}\nWriting $\\bm{Y}^{[[M],\\Lambda_i]}$ as $\\bm{Y}_{i}$ for compact notation, we have\n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}_{[M-1]}, \\bm{Q} = Q)\\notag\\\\\n &= H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}_{[M-1]}, \\bm{W}_{[M-1]}, \\bm{Q} = Q)\\label{eq:con_PCSI_1}\\\\\n &= H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{W}_{[M]}, \\bm{Q} = Q)\\label{eq:con_PCSI_2}\\\\\n &= H(\\bm{W}_{[M+1:K]} \\mid \\bm{\\Delta}, \\bm{W}_{[M]}, \\bm{Y}_{[M+1:K]}, \\bm{Q} = Q)\\label{eq:con_PCSI_3}\\\\\n &= 0,\\label{eq:con_PCSI_3a}\n\\end{align}\nwhere \\eqref{eq:con_PCSI_1} holds according to \\eqref{eq:con_PCSI_0}, and \\eqref{eq:con_PCSI_2} follows from the fact that $\\bm{W}_M$ is decodable by subtracting $\\bm{W}_{[M-1]}$ terms from $\\bm{Y}_1$. Then, \\eqref{eq:con_PCSI_3} uses the fact that $\\bm{Y}_{[M+1:K]}$ are functions of $\\bm{W}_{[M]}$. Finally, \\eqref{eq:con_PCSI_3a} follows from \\eqref{eq:con_PCSI_0}. \n\nAveraging over $\\bm{Q}$, \n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}_{[M-1]}, \\bm{Q}) = 0.\\label{eq:con_PCSI_4}\n\\end{align}\nThen we have \n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q})\\\\\n &= H(\\bm{W}_{[K]}, \\bm{Y}_{[M-1]} \\mid \\bm{\\Delta}, \\bm{Q})\\label{eq:con_PCSI_5}\\\\\n &= H(\\bm{Y}_{[M-1]} \\mid \\bm{\\Delta}, \\bm{Q}) + H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}, \\bm{Y}_{[M-1]})\\\\\n &\\leq H(\\bm{Y}_{[M-1]})\\label{eq:con_PCSI_6}\\\\\n &\\leq (M-1)L,\n\\end{align}\nwhere \\eqref{eq:con_PCSI_5} follows from the fact that $\\bm{Y}_{[M-1]}$ are linear combinations of $\\bm{W}_{[M]}$. 
Step \\eqref{eq:con_PCSI_6} holds because of \\eqref{eq:con_PCSI_4}, and because conditioning reduces entropy.\n\nThus $D \\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq H(\\bm{W}_{[K]}) - H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\geq (K-M+1)L$, which implies that $C_{\\mbox{\\tiny PCSI}}^{\\sup} \\leq (K-M+1)^{-1}$ for $2 \\leq M \\leq K$.\n\n\\textbf{Regime 2}: $M=1$.\n\nConsider any particular realization $Q \\in \\mathcal{Q}$ of $\\bm{Q}$. Since $M=1$, $\\bm\\Lambda$ is irrelevant, e.g., we may assume $\\bm{\\Lambda}=\\Lambda=1$ without loss of generality. For all $j \\in [2:K]$, consider $\\mathcal{S} = \\{1\\}, \\theta = j$, and apply \\eqref{eq:lemma1pcsi} according to Lemma \\ref{lem:privacy} so that \n\\begin{align}\n H(\\bm{W}_j \\mid \\bm{\\Delta}, \\bm{Y}^{[\\{1\\},1]}, \\bm{Q}=Q) = 0\\\\\n\\implies H(\\bm{W}_{[2:K]} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\{1\\},1]}, \\bm{Q}=Q) = 0\\label{eq:dec_PCSI_corner}\n\\end{align}\n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\\\\n &\\leq H(\\bm{W}_1, \\bm{Y}^{[\\{1\\},1]} \\mid \\bm{\\Delta}, \\bm{Q} = Q)\\label{eq:corner_PCSI_1}\\\\\n &= H(\\bm{W}_1 \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\label{eq:corner_PCSI_2}\\\\\n &\\leq L,\n\\end{align}\nwhere \\eqref{eq:corner_PCSI_1} holds since \\eqref{eq:dec_PCSI_corner} holds. Averaging over $\\bm{Q}$, $H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\leq L$. 
Thus $D \\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq H(\\bm{W}_{[K]}) - H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\geq KL-L$, which implies that $C_{\\mbox{\\tiny PCSI}}(q) \\leq (K-1)^{-1}$ for $M=1$.\n\n\\subsection{Achievability}\nFor $2 \\leq M \\leq K$, the achievable scheme will be a combination of \\emph{Specialized GRS Codes} and \\emph{Modified Specialized GRS Codes} which are schemes in \\cite{PIR_PCSI} for PIR-PCSI-I and PIR-PCSI-II setting, respectively.\n\nThe rate $(K-M)^{-1}$ is achievable by \\emph{Specialized GRS Codes} for PIR-PCSI-I setting and the rate $(K-M+1)^{-1}$ is achievable by \\emph{Modified Specialized GRS Codes} for the PIR-PCSI-II setting. Both schemes work for $L=1$, so let us say $L=1$ here. Intuitively, these two achievable schemes have the same structures as explained below. \n\nFor the PIR-PCSI-I setting, the desired message is not contained in the support set. The download will be $K-M$ linear equations of $K$ unknowns ($K$ messages). These $K-M$ linear equations are independent by design, so they allow the user to eliminate any $K-M-1$ unknowns and get an equation in the remaining $K-(K-M-1) = M+1$ unknowns (messages). Let these $M+1$ unknowns be the $M$ messages in the support set and the desired message. With careful design, the equation will be equal to $\\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]} + \\bm{\\lambda}^{\\prime}\\bm{W}_{\\bm{\\theta}}$ for some non-zero $\\bm{\\lambda}^\\prime$. Thus by subtracting CSI from the equation the user is able to recover $\\bm{W}_{\\bm{\\theta}}$.\n\nFor the PIR-PCSI-II setting the desired message is contained in the support set. The download will be $K-M+1$ linear equations in $K$ unknowns (messages). These $K-M+1$ linear equations are independent by design, so they allow the user to eliminate any $K-M$ unknowns and get an equation in the remaining $K-(K-M) = M$ unknowns (messages). Let these $M$ unknowns be the $M$ messages in the support set. 
With careful design, the equation will be equal to $\\bm{Y}^{[\\bm{\\mathcal{S}},\\bm{\\Lambda}]} + \\bm{\\lambda}^{\\prime}\\bm{W}_{\\bm{\\theta}}$ for some $\\bm{\\lambda}^{\\prime} \\neq 0$. Thus, by subtracting the CSI from the equation, the user is able to recover $\\bm{W}_{\\bm{\\theta}}$.\n\nConsider a scheme where the user applies \\emph{Specialized GRS Codes} when $\\bm{\\theta} \\notin \\bm{\\mathcal{S}}$ and applies \\emph{Modified Specialized GRS Codes} when $\\bm{\\theta} \\in \\bm{\\mathcal{S}}$. This scheme is obviously correct but not private, because the server can tell whether $\\bm{\\theta} \\in \\bm{\\mathcal{S}}$ from the download cost, since the download costs of the two schemes are different. However, if the user always downloads one more redundant equation when applying \\emph{Specialized GRS Codes}, then there is no difference in the download cost. This is essentially the idea for the achievable scheme.\n\nLet us first present the \\emph{Specialized GRS Codes} in \\cite{PIR_PCSI} here for ease of understanding. There are $K$ distinct evaluation points in $\\mathbb{F}_{q}$, namely $\\omega_{1}, \\cdots, \\omega_{K}$. A polynomial $\\bm{p}(x)$ is constructed as \n\\begin{align}\n \\bm{p}(x) &\\triangleq \\prod_{k \\in [K]\\setminus(\\bm{\\mathcal{S}} \\cup \\{\\bm{\\theta}\\})}(x - \\omega_{k})\\\\\n & = \\sum_{i=1}^{K-M}\\bm{p}_i x^{i-1}.\\label{eq:polyGRS}\n\\end{align}\nThe query $\\bm{Q}$ is comprised of $K-M$ row vectors, each $1\\times K$, namely $\\bm{Q}_{1}, \\cdots, \\bm{Q}_{K-M}$ such that \n\\begin{align}\n \\bm{Q}_i = [\\bm{v}_1\\omega_{1}^{i-1}~~ \\cdots~~ \\bm{v}_K\\omega_{K}^{i-1}], \\forall i \\in [K-M],\n\\end{align}\nwhere for $\\bm{i}_m \\in \\bm{\\mathcal{S}}, m \\in [M]$, $\\bm{v}_{\\bm{i}_m} = \\frac{\\bm{\\lambda}_m}{p(\\omega_{\\bm{i}_m})}$ ($\\bm{\\lambda}_m$ is the $m^{th}$ coefficient in the CSI), while for $k \\notin \\bm{\\mathcal{S}}$, $\\bm{v}_{k}$ is randomly drawn from $\\mathbb{F}_{q}^{\\times}$. 
Upon receiving $\\bm{Q}$, the server sends \n\\begin{align}\n \\bm{\\Delta} = \n \\begin{bmatrix}\n \\bm{\\Delta}_1\\\\\n \\vdots\\\\\n \\bm{\\Delta}_{K-M}\n \\end{bmatrix}\n =\n \\begin{bmatrix}\n \\bm{Q}_1\\\\\n \\vdots\\\\\n \\bm{Q}_{K-M}\n \\end{bmatrix}\n \\begin{bmatrix}\n \\bm{W}_1\\\\\n \\bm{W}_2\\\\\n \\vdots\\\\\n \\bm{W}_K\n \\end{bmatrix}\n\\end{align}\nto the user. Let us call $[\\bm{Q}_1^{\\mathrm{T}} ~ \\cdots ~ \\bm{Q}_{K-M}^{\\mathrm{T}}]^{\\mathrm{T}}$ the \\emph{Specialized GRS Matrix} and $[\\bm{\\Delta}_1 ~ \\cdots ~ \\bm{\\Delta}_{K-M}]^{\\mathrm{T}}$ \\emph{Specialized GRS Codes} of $\\bm{W}_{[K]}$ for ease of reference. Note that the \\emph{Specialized GRS Matrix} is uniquely defined by $\\bm{v}_{1}, \\cdots, \\bm{v}_{K}$ as $\\omega$'s are constants.\n\nThe user gets $\\bm{W}_{\\bm{\\theta}}$ by subtracting $\\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]}$ from \n\\begin{align}\n \\sum_{i=1}^{K-M}\\bm{p}_i\\bm{\\Delta}_{i} = \\bm{Y}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]} + \\bm{v}_{\\bm{\\theta}}\\bm{p}(\\omega_{\\bm{\\theta}})\\bm{W}_{\\bm{\\theta}}.\n\\end{align}\n\nOur PIR-PCSI scheme is as follows.\nFor any realization $(\\theta, \\mathcal{S})$ of $(\\bm{\\theta}, \\bm{\\mathcal{S}})$, \n\\emph{1)} When $\\theta \\in [K]\\setminus\\mathcal{S}$, first apply the Specialized GRS Codes in \\cite{PIR_PCSI}. Besides $Q_1, Q_2, \\cdots, Q_{K-M}$ as specified in the \\emph{Specialized GRS Codes} of \\cite{PIR_PCSI}, the user also has \n\\begin{align}\n Q_{K-M+1} = [v_1\\omega_{1}^{K-M}, \\cdots, v_K\\omega_{K}^{K-M}]\n\\end{align}\nas part of the query. And the answer $\\bm{\\Delta}_{K-M+1} = \\sum_{j=1}^{K}v_j \\omega_j^{K-M} \\bm{W}_j$ will be generated for $Q_{K-M+1}$ and downloaded by the user as a redundant equation. 
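To make the construction concrete, the following is a small numeric sketch of the \emph{Specialized GRS Codes} over a prime field. All parameters here ($q=13$, $K=5$, $M=2$, the support set, and the index of the desired message) are illustrative choices, not values from the paper; the final check confirms that combining the downloads with the coefficients of $\bm{p}(x)$ and subtracting the CSI recovers the desired message.

```python
# Sketch of the Specialized GRS construction over a small prime field.
# All concrete parameters below are assumptions for illustration only.
import random

q = 13          # prime field size (illustrative)
K, M = 5, 2     # number of messages, support size
S = [0, 1]      # support set (0-indexed); theta is not in S here
theta = 3       # index of the desired message
omega = list(range(1, K + 1))      # K distinct evaluation points

inv = lambda x: pow(x, q - 2, q)   # inverse in F_q (q prime)

random.seed(0)
W = [random.randrange(q) for _ in range(K)]       # messages
lam = [random.randrange(1, q) for _ in range(M)]  # CSI coefficients
Y = sum(l * W[i] for l, i in zip(lam, S)) % q     # side information

# p(x) = prod over k not in S union {theta} of (x - omega_k),
# stored as ascending coefficients p[0..K-M-1]
p = [1]
for k in range(K):
    if k not in S and k != theta:
        p = [(b - omega[k] * a) % q for a, b in zip(p + [0], [0] + p)]
p_at = lambda x: sum(c * pow(x, i, q) for i, c in enumerate(p)) % q

# query coefficients v_k: matched to the CSI on S, random elsewhere
v = [0] * K
for m, i in enumerate(S):
    v[i] = lam[m] * inv(p_at(omega[i])) % q
for k in range(K):
    if k not in S:
        v[k] = random.randrange(1, q)

# server computes Delta_i = sum_j v_j * omega_j^{i-1} * W_j, i = 1..K-M
Delta = [sum(v[j] * pow(omega[j], i, q) * W[j] for j in range(K)) % q
         for i in range(K - M)]

# user combines downloads with p's coefficients; all terms with
# p(omega_k) = 0 vanish, leaving Y plus a scaled copy of W_theta
combo = sum(pi * d for pi, d in zip(p, Delta)) % q
W_theta = (combo - Y) * inv(v[theta] * p_at(omega[theta])) % q
assert W_theta == W[theta]
```

The key property exercised here is that $\bm{p}(\omega_k)=0$ for every $k$ outside the support set and the desired index, so the weighted sum of the $K-M$ downloads collapses to the CSI plus a nonzero multiple of the desired message.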
Note that the matrix $[Q_1^{\\mathrm{T}}, Q_2^{\\mathrm{T}}, \\cdots, Q_{K-M+1}^{\\mathrm{T}}]^{\\mathrm{T}}$ is the generator matrix of a $(K, K-M+1)$ GRS code \\cite{Coding_Theory}.\n\n\\emph{2)} When $\\theta \\in \\mathcal{S}$, the user will directly apply \\emph{Modified Specialized GRS Codes} where the queries also form a generator matrix of a $(K, K-M+1)$ GRS code as specified in \\cite{PIR_PCSI}.\n\nSuch a scheme is private since the queries in both cases form a generator matrix of a $(K,K-M+1)$ GRS code, and the $v_1, \\cdots, v_{K}$ in both cases are identically uniform over $\\mathbb{F}_{q}^{\\times}$ for any realization of $\\bm{\\theta}, \\bm{\\mathcal{S}}$.\n\nFor the corner case $M=1$, it suffices to download $K-1$ generic linear combinations of all the $K$ messages such that from the $K-1$ downloaded linear combinations and the CSI, all the $K$ messages are decodable, as noted in Remark \\ref{rmk:pcsi1_inf_pcsi_inf}.\n\n\\section{Proof of Theorem \\ref{thm:redundancy}}\\label{proof:redundancy}\nHere we bound the redundancy $\\rho_{\\mbox{\\tiny PCSI}}$ from above (equivalently, lower-bound $\\alpha^{*}$) for $1 \\leq M \\leq K$. For $\\frac{K+2}{2} < M \\leq K$, the proof for $\\rho_{\\mbox{\\tiny PCSI}} = 0$ is the same as in Section \\ref{proof:red}, so it will not be repeated. \n\n\nConsider an achievable scheme such that $\\alpha$ PCSI is sufficient and the average download cost satisfies $D \\leq \\frac{1}{C_{\\mbox{\\tiny PCSI}}^{\\sup}}L+\\epsilon L$ for some $L$. 
Note that $D\\geq H(\\bm{\\Delta}\\mid \\bm{Q})$, therefore,\n\\begin{align}\nH(\\bm{\\Delta}\\mid \\bm{Q})\\leq \\frac{1}{C_{\\mbox{\\tiny PCSI}}^{\\sup}}L+\\epsilon L.\\label{eq:deltabound}\n\\end{align}\n\nIt follows from \\eqref{eq:deltabound} that there exists a feasible $Q \\in \\mathcal{Q}$ such that \n\\begin{align}\n H(\\bm{\\Delta} \\mid \\bm{Q}=Q) \\leq \\frac{1}{C_{\\mbox{\\tiny PCSI}}^{\\sup}}L+\\epsilon L.\n\\end{align}\nFor all $i \\in [K]$, let $\\Lambda_{i} \\in \\mathfrak{C}$ satisfy \n\\begin{align}\n H(\\bm{W}_{i} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}^{[[M], \\Lambda_i]}, \\bm{Q} = Q) = 0.\\label{eq:red_M1_dec}\n\\end{align}\nThe argument that such $\\Lambda_i$'s must exist is identical to the proof of Lemma \\ref{lem:privacy}. \nWriting $\\overline{\\bm{Y}}^{[[M], \\Lambda_i]}$ as $\\overline{\\bm{Y}}_{i}$ for compact notation,\n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}, \\bm{Q} = Q)\\\\\n &= H(\\bm{W}_{[M]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}, \\bm{Q} = Q)\\notag\\\\\n &~~~+ H(\\bm{W}_{[M+1:K]} \\mid \\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}, \\bm{W}_{[M]}, \\bm{Q} = Q)\\\\\n &= 0 + H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{W}_{[M]}, \\overline{\\bm{Y}}_{[K]}, \\bm{Q} = Q)\\label{eq:red_M1_funcW}\\\\\n &=0,\n\\end{align}\nwhere \\eqref{eq:red_M1_funcW} follows from \\eqref{eq:red_M1_dec} and the fact that $\\overline{\\bm{Y}}_{[K]}$ are functions of $\\bm{W}_{[M]}$. The last step also follows from \\eqref{eq:red_M1_dec}. Thus,\n\\begin{align}\n &\\frac{1}{C_{\\mbox{\\tiny PCSI}}^{\\sup}}L + \\epsilon L + M\\alpha L\\notag\\\\\n &\\geq H(\\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]} \\mid \\bm{Q}=Q)\\label{eq:red_M1_indY}\\\\\n &\\geq I(\\bm{\\Delta}, \\overline{\\bm{Y}}_{[M]}; \\bm{W}_{[K]} \\mid \\bm{Q}=Q)\\\\\n &=H(\\bm{W}_{[K]} \\mid \\bm{Q}=Q) = KL.\\label{eq:red_M1_indW}\n\\end{align}\n\\eqref{eq:red_M1_indY} is true because \\eqref{eq:indQ} and \\eqref{eq:invaYR} hold. 
Step \\eqref{eq:red_M1_indW} follows from \\eqref{eq:red_M1_dec} and the fact that the query and messages are mutually independent according to \\eqref{eq:indQ}. Thus, $\\alpha \\geq (K-\\frac{1}{C_{\\mbox{\\tiny PCSI}}^{\\sup}})\/M - \\epsilon\/M$. In order to achieve capacity, we must have $\\epsilon \\rightarrow 0$, so we must have $\\alpha \\geq (K-\\frac{1}{C_{\\mbox{\\tiny PCSI}}^{\\sup}})\/M$, for all $1\\leq M\\leq K$.\n\nNow note that for $M=1$, since $C_{\\mbox{\\tiny PCSI}}^{\\sup} = (K-1)^{-1}$, we have shown that $\\alpha \\geq 1$, which implies $\\rho_{\\mbox{\\tiny PCSI}} = 0$ in this case. \n\nFor $2 \\leq M \\leq \\frac{K+2}{2}$, since $C_{\\mbox{\\tiny PCSI}}^{\\sup} = (K-M+1)^{-1}$, we have shown that $\\alpha \\geq \\frac{M-1}{M}$, which implies $\\rho_{\\mbox{\\tiny PCSI}} \\leq \\frac{1}{M}$ in this case.\n\nIt only remains to show that for $M=2$, $\\rho_{\\mbox{\\tiny PCSI}} = \\frac{1}{2}$ is achievable, or equivalently, $\\alpha^{*} = \\frac{1}{2}$. For this case, let us present a PIR-PCSI scheme that achieves the rate $(K-M\/2)^{-1}$ for arbitrary $1 \\leq M \\leq K$. Note that $K-M\/2 = K-M+1$ when $M=2$, which is the only case where the supremum capacity is achieved by this scheme. The rate of this scheme is strictly smaller than $C_{\\mbox{\\tiny PCSI}}^{\\sup}$ for other $M \\neq 2$.\n\n\nLet the size of the base field $q$ be an even power of a prime number such that $\\sqrt{q}$ is a prime power and $\\sqrt{q} \\geq K$. For arbitrary realization $(\\theta, \\mathcal{S}) \\in [K]\\times\\mathfrak{S}$ of $(\\bm{\\theta},\\bm{\\mathcal{S}})$, if $\\theta \\in \\mathcal{S}$, the user can apply the \\emph{Interference Alignment} based PIR-PCSI-II scheme where half of each message is downloaded. 
If $\\theta \\in [K]\\setminus\\mathcal{S}$, then the user can apply the \\emph{Specialized GRS Codes} based scheme for the halves of the messages corresponding to the CSI dimension that is retained (while the other half of the CSI dimensions is discarded as redundant) and download the other half dimension of all the messages directly. Note that in both cases, a half-dimension of each of the $K$ messages is directly downloaded. The other halves are involved in the download corresponding to the \\emph{Specialized GRS Codes}, which is not needed for decodability\/correctness if $\\theta \\in \\mathcal{S}$, but is still included for privacy, i.e., to hide whether or not $\\bm\\theta\\in\\bm{\\mathcal{S}}$. The download cost required is $K\\left(\\frac{L}{2}\\right)$ for the direct downloads of half of every message, plus $(K-M)\\frac{L}{2}$ for the \\emph{Specialized GRS Codes} based scheme, which normally requires $K-M$ downloads per message symbol but is applied here to only half the symbols from each message, for a total download cost of $(K-M\/2)L$, which achieves the supremum capacity of PIR-PCSI for $M=2$. The details of the scheme are presented next.\n\nFor all $k \\in [K]$, let $V_{\\bm{W}_k} \\in \\mathbb{F}_{\\sqrt{q}}^{2\\times 1}$ be the length $2$ vector representation of $\\bm{W}_k \\in \\mathbb{F}_{q}$. For all $m \\in [M]$, let $M_{\\bm{\\lambda}_m} \\in \\mathbb{F}_{\\sqrt{q}}^{2\\times 2}$ be the matrix representation of $\\bm{\\lambda}_m \\in \\mathbb{F}_{q}^{\\times}$, where $\\bm{\\lambda}_m$ is the $m^{th}$ entry of the coefficient vector $\\bm{\\Lambda}$. 
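The vector and matrix representations used here are the standard regular representation of $\mathbb{F}_{q}$ over its subfield $\mathbb{F}_{\sqrt{q}}$: each field element acts on length-$2$ vectors by multiplication. A minimal sketch, with the assumed (illustrative) choice $\sqrt{q}=3$ and $\mathbb{F}_9$ built as $\mathbb{F}_3[x]/(x^2+1)$ (the modulus $x^2+1$ is irreducible mod $3$; the paper only requires $\sqrt{q}$ to be a prime power with $\sqrt{q} \geq K$):

```python
# Regular (matrix) representation of F_9 = F_3[x]/(x^2 + 1) over F_3.
# The modulus x^2 + 1 is an illustrative assumption, not from the paper.
p = 3

def mul(a, b):
    # multiply a = a0 + a1*x and b = b0 + b1*x in F_9, using x^2 = -1
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 - a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

def mat(l):
    # 2x2 matrix over F_3 whose action on the column vector (b0, b1)
    # equals multiplication by l = l0 + l1*x in F_9
    l0, l1 = l
    return [[l0, (-l1) % p], [l1, l0]]

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) % p for i in range(2))

# check: the matrix action agrees with field multiplication for all pairs
for a0 in range(p):
    for a1 in range(p):
        for b0 in range(p):
            for b1 in range(p):
                assert matvec(mat((a0, a1)), (b0, b1)) == mul((a0, a1), (b0, b1))
```

In the scheme's notation, `mat(l)[0]` corresponds to the first row $M_{\lambda}(1,:)$ that is used as a projection of the length-$2$ vector representation of a message.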
Let \n\\begin{align}\n \\overline{\\bm{Y}}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]} = M_{\\bm{\\lambda}_1}(1,:)V_{\\bm{W}_{\\bm{i}_1}} +\\cdots+ M_{\\bm{\\lambda}_M}(1,:)V_{\\bm{W}_{\\bm{i}_M}},\n\\end{align}\nwhere $\\bm{\\mathcal{S}} = \\{\\bm{i}_1, \\bm{i}_2, \\cdots, \\bm{i}_{M}\\}$ is the support index set, be the processed CSI where $H(\\overline{\\bm{Y}}^{[\\bm{\\mathcal{S}}, \\bm{\\Lambda}]}) = \\frac{1}{2}H(\\bm{W}_k)$. Note that $\\forall m \\in [M], M_{\\bm{\\lambda}_m}(1,:)$ is uniform over $\\mathbb{F}_{\\sqrt{q}}^{1\\times 2} \\setminus \\{[0~~0]\\}$ according to Lemma \\ref{lem:uniform12}.\n\nThe query $\\bm{Q} = \\{\\bm{Q}_1, \\bm{Q}_2, \\bm{Q}_3\\}$,\n\\begin{align}\n \\bm{Q}_1 &= \\{\\mathbf{L}_1, \\mathbf{L}_2, \\cdots, \\mathbf{L}_{K}\\},\\\\\n \\bm{Q}_2 &= \\{\\mathbf{L}_1^{\\prime}, \\mathbf{L}_2^{\\prime}, \\cdots, \\mathbf{L}_{K}^{\\prime}\\},\\\\\n \\bm{Q}_3 &= \\{\\bm{v}_1, \\bm{v}_2, \\cdots, \\bm{v}_{K}\\}.\n\\end{align}\nwhere $\\mathbf{L}_{k}, \\mathbf{L}_{k}^{\\prime} \\in \\mathbb{F}_{\\sqrt{q}}^{1\\times 2} \\setminus \\{[0~~0]\\}$. $\\mathbf{L}_{k}, \\mathbf{L}_{k}^{\\prime}$ serve as two linearly independent projections that ask the server to split $\\bm{W}_k$ into two halves \n\\begin{align}\n \\bm{w}_{k}(1) = \\mathbf{L}_{k}V_{\\bm{W}_k} \\in \\mathbb{F}_{\\sqrt{q}},\\\\\n \\bm{w}_{k}(2) = \\mathbf{L}_{k}^{\\prime}V_{\\bm{W}_k} \\in \\mathbb{F}_{\\sqrt{q}}.\n\\end{align}\n$\\bm{Q}_3$ uniquely defines a \\emph{Specialized GRS Matrix} whose elements are in $\\mathbb{F}_{\\sqrt{q}}$.\n\nThe user will download the first halves of all the $K$ messages after projection, i.e., $\\bm{w}_{[K]}(1)$ and apply the \\emph{Specialized GRS Matrix} to download a \\emph{Specialized GRS Codes} of the second halves of all the $K$ messages after projection, i.e., $\\bm{w}_{[K]}(2)$. \n\nLet us specify $\\mathbf{L}_{k}, \\mathbf{L}_{k}^{\\prime}, \\bm{v}_{k}$. 
Consider any realization $(\\theta, \\mathcal{S}) \\in [K]\\times\\mathfrak{S}$ of $(\\bm{\\theta},\\bm{\\mathcal{S}})$. Let us say $\\mathcal{S} = \\{i_1, i_2, \\cdots, i_M\\}$. For the messages not involved in the CSI, they are randomly projected to two linearly independent directions, i.e., for any $k \\in [K] \\setminus \\mathcal{S}$, $\\mathbf{L}_{k}, \\mathbf{L}_{k}^{\\prime}$ are linearly independent and are randomly drawn from $\\mathbb{F}_{\\sqrt{q}}^{1 \\times 2} \\setminus \\{[0~~0]\\}$. Also, for any $k \\in [K] \\setminus \\mathcal{S}$, $\\bm{v}_{k}$ is uniformly distributed in $\\mathbb{F}_{\\sqrt{q}}^{\\times}$. \n\nFor messages involved in the CSI, the construction of projections and $\\bm{v}$'s depends on whether $\\theta$ is in $\\mathcal{S}$ or not.\n\\begin{enumerate}\n \\item When $\\theta \\in \\mathcal{S}$, for any $m \\in [M]$,\n \\begin{align}\n \\mathbf{L}_{i_m} = \n \\begin{cases}\n M_{\\bm{\\lambda}_m}(2,:), i_m = \\theta,\\\\\n M_{\\bm{\\lambda}_m}(1,:), i_m \\neq \\theta.\n \\end{cases}\n \\end{align}\n $\\mathbf{L}_{i_m}^{\\prime}$ is then chosen randomly from $\\mathbb{F}_{\\sqrt{q}}^{1 \\times 2} \\setminus \\{[0~~0]\\}$ such that it is linearly independent with $\\mathbf{L}_{i_m}$. Meanwhile, $\\bm{v}_{i_m}$ is randomly drawn from $\\mathbb{F}_{\\sqrt{q}}^{\\times}$. Under this case, the user has \n \\begin{align}\n \\overline{\\bm{Y}}^{[\\mathcal{S}, \\bm{\\Lambda}]} = \\sum_{i_m \\in \\mathcal{S}\\setminus\\{\\theta\\}}\\bm{w}_{i_m}(1) + \\bm{w}_{\\theta}(2)\n \\end{align} \n according to the construction of $\\mathbf{L}_{i_m}$. $\\bm{w}_{\\theta}(1)$ is directly downloaded and $\\bm{w}_{\\theta}(2)$ can be recovered by subtracting $\\{\\bm{w}_{i_m}(1)\\}_{i_m \\neq \\theta}$ from $\\overline{\\bm{Y}}^{[\\mathcal{S}, \\bm{\\Lambda}]}$. The user is then able to recover $\\bm{W}_{\\theta}$ as the two projections are linearly independent. 
$\\bm{Q}_3$ uniquely defines a \\emph{Specialized GRS Matrix}, and applying $\\bm{Q}_3$ to download a \\emph{Specialized GRS Codes} of $\\bm{w}_{[K]}(2)$ is just for privacy.\n\n \\item When $\\theta \\in [K]\\setminus\\mathcal{S}$, for any $m \\in [M]$, \n \\begin{align}\n \\mathbf{L}_{i_m}^{\\prime} = \\frac{1}{\\bm{a}_m}M_{\\bm{\\lambda}_m}(1,:),\n \\end{align}\n where $\\bm{a}_m$ is randomly drawn from $\\mathbb{F}_{\\sqrt{q}}^{\\times}$. $\\mathbf{L}_{i_m}$ is then chosen randomly from $\\mathbb{F}_{\\sqrt{q}}^{1 \\times 2} \\setminus \\{[0~~0]\\}$ such that it is linearly independent of $\\mathbf{L}_{i_m}^{\\prime}$. Under this case, the user has \n \\begin{align}\n \\sum_{m\\in[M]}\\bm{a}_{m}\\bm{w}_{i_m}(2) = \\overline{\\bm{Y}}^{[\\mathcal{S}, \\bm{\\Lambda}]},\n \\end{align}\n and sets \n \\begin{align}\n \\bm{v}_{i_m} = \\frac{\\bm{a}_m}{p(\\omega_{i_m})}, \\forall m \\in [M],\n \\end{align}\n where $p(\\omega_{i_m})$ is the evaluation of the polynomial specified in \\eqref{eq:polyGRS} (when $(\\bm{\\theta},\\bm{\\mathcal{S}}) = (\\theta, \\mathcal{S})$) at $\\omega_{i_m}$, which is a non-zero constant given $(\\theta, \\mathcal{S})$. Thus, given $(\\theta, \\mathcal{S})$, $\\bm{v}_{i_m}$ is still uniform over $\\mathbb{F}_{\\sqrt{q}}^{\\times}$. $\\bm{Q}_{3}$ uniquely defines a \\emph{Specialized GRS Matrix}. Applying $\\bm{Q}_3$ to download a \\emph{Specialized GRS Codes} of $\\bm{w}_{[K]}(2)$, together with $\\sum_{m\\in[M]}\\bm{a}_{m}\\bm{w}_{i_m}(2)$ as the side information, enables the user to recover $\\bm{w}_{\\theta}(2)$. 
Since the first halves of all the projected messages are also downloaded, the user also has $\\bm{w}_{\\theta}(1)$ and is thus able to decode $\\bm{W}_{\\theta}$.\n\\end{enumerate}\n\nNote that for an arbitrary realization $(\\theta, \\mathcal{S})$ of $(\\bm{\\theta}, \\bm{\\mathcal{S}})$, regardless of whether $\\theta \\in \\mathcal{S}$ or not, $\\mathbf{L}_1, \\cdots, \\mathbf{L}_{K}$, $\\mathbf{L}_1^{\\prime}, \\cdots, \\mathbf{L}_{K}^{\\prime}$, $\\bm{v}_1, \\cdots, \\bm{v}_{K}$ are independent; for any $k \\in [K]$, the matrix whose first row is $\\mathbf{L}_{k}$ and second row is $\\mathbf{L}_{k}^{\\prime}$ is uniform over the set of all full-rank matrices in $\\mathbb{F}_{\\sqrt{q}}^{2\\times 2}$, and $\\bm{v}_{k}$ is uniform over $\\mathbb{F}_{\\sqrt{q}}^{\\times}$. Thus, the scheme is private.\n\n\n\\section{Proof of Theorem \\ref{thm:cap_PCSI_inf}}\\label{sec:cap_PCSI_inf}\nThe rate $\\frac{1}{K-1}$ PIR-PCSI-I scheme in Section \\ref{sec:PCSI1_inf_ach} is also the infimum capacity achieving PIR-PCSI scheme as noted in Remark \\ref{rmk:pcsi1_inf_pcsi_inf}, so we just prove the converse here.\n\nAs a result of \\eqref{eq:lemma1pcsi} and the fact that in $\\mathbb{F}_{2}$ we can only have $\\bm{\\Lambda} = 1_M$, i.e., the length-$M$ vector all of whose elements are equal to $1$, we have\n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S},1_M]}, \\bm{Q}=Q) = 0, \\notag\\\\\n \\forall (Q,\\mathcal{S}) \\in \\mathcal{Q}\\times\\mathfrak{S}. 
\\label{eq:pcsi_inf_dec}\n\\end{align}\nWriting $\\bm{Y}^{[[M],1_M]}$ as $\\bm{Y}$ for compact notation, for any $Q \\in \\mathcal{Q}$, we have\n\\begin{align}\n &H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}=Q)\\notag\\\\\n &= H(\\bm{W}_{[K]}, \\bm{Y} \\mid \\bm{\\Delta}, \\bm{Q}=Q) \\label{eq:dec_K_1}\\\\\n &= H(\\bm{Y} \\mid \\bm{\\Delta}, \\bm{Q}=Q) + H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}, \\bm{Q} = Q)\\label{eq:dec_K_2}\\\\\n &\\leq H(\\bm{Y}) = L.\n\\end{align}\n\\eqref{eq:dec_K_1} is true since $\\bm{Y}$ is a summation of the first $M$ messages, and \\eqref{eq:dec_K_2} follows from \\eqref{eq:pcsi_inf_dec}. Averaging over $\\bm{Q}$ we have,\n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\leq L.\n\\end{align}\nThus, $D \\geq H(\\bm{\\Delta} \\mid \\bm{Q}) \\geq H(\\bm{W}_{[K]}) - H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Q}) \\geq KL-L$ which implies that $C_{\\mbox{\\tiny PCSI}}^{\\inf}(q = 2) \\leq (K-1)^{-1}$.\n\n\n\\section{Proof of Theorem \\ref{thm:pcsi_pub_pri}}\\label{proof:pcsi_pub_pri}\nThe rate $\\frac{1}{K-1}$ PIR-PCSI-I scheme which preserves $(\\bm{\\theta}, \\bm{\\mathcal{S}}, \\bm{\\Lambda})$ in Section \\ref{sec:pcsi1_pri_ach} is also the capacity achieving PIR-PCSI scheme with private coefficients as noted in Remark \\ref{rmk:pcsi1_pri_pcsi_pri}, so we just prove the converse here. 
Specifically, we prove that $C_{\\mbox{\\tiny PCSI}}^{\\mbox{\\tiny pri}}(q)\\leq C_{\\mbox{\\tiny PCSI}}(q=2) = C_{\\mbox{\\tiny PCSI}}^{\\inf}$.\n\nAccording to \\eqref{eq:pcsi_pri} in Lemma \\ref{lem:fullypri}, for a fully private PIR-PCSI scheme,\n\\begin{align}\n H(\\bm{W}_{[K]} \\mid \\bm{\\Delta}, \\bm{Y}^{[\\mathcal{S}, \\Lambda]}, \\bm{Q} = Q) = 0, \\notag\\\\\n \\forall (Q,\\mathcal{S},\\Lambda) \\in \\mathcal{Q} \\times \\mathfrak{S} \\times \\mathfrak{C}.\\label{eq:pcsi_pri_dec}\n\\end{align}\nNote that \\eqref{eq:pcsi_pri_dec} is a \\emph{stronger} version of \\eqref{eq:pcsi_inf_dec} which is sufficient to bound $C_{\\mbox{\\tiny PCSI}}(q=2) = C_{\\mbox{\\tiny PCSI}}^{\\inf}$. Thus, $C_{\\mbox{\\tiny PCSI}}^{\\mbox{\\tiny pri}}(q) \\leq C_{\\mbox{\\tiny PCSI}}^{\\inf}$.\n\n\\section{Conclusion} \\label{sec:con}\nSide-information is a highly valuable resource for PIR in general, and for single-server PIR in particular. Building on the foundation laid by Heidarzadeh et al. in \\cite{PIR_PCSI}, this work presents a more complete picture, as encapsulated in Table \\ref{tab:capacity}, revealing new insights that are described in the introduction. The redundancy of side-information is particularly noteworthy, because it allows the user to save storage cost, which may be used to store additional non-redundant side-information, e.g., multiple linear combinations instead of just one, as assumed in this work and in \\cite{PIR_PCSI}. An interesting direction for future work is to understand the trade-off between the size of side information and the efficiency of single-server PIR, e.g., by characterizing the $\\alpha$-CSI constrained capacity of PIR-PCSI-I, PIR-PCSI-II, PIR-PCSI. Other questions that remain open include issues that are field-specific. For example, is the supremum capacity of PIR-PCSI-II for $M>2$ achievable for all fields except $\\mathbb{F}_2$? 
Are there other fields besides $\\mathbb{F}_2$ over which the capacity is equal to the infimum capacity? Can the capacity over certain fields take values other than the supremum and infimum capacities? Progress on these issues may require field-dependent constructions of interference alignment schemes for achievability, and combinatorial arguments for converse bounds, both of which may be of broader interest.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nQuestion answering (QA) relates to the building of systems capable of automatically answering questions posed by humans in natural language. Various frameworks have been proposed for question answering, ranging from simple information-retrieval techniques for finding relevant knowledge articles or webpages, through methods for identifying the most relevant sentence in a text regarding a posed question, to methods for querying structured knowledge-bases or databases to produce an answer~\\cite{burke1997question,voorhees1999trec,kwok2001scaling,hirschman2001natural,ravichandran2002learning}\n\nA popular QA task is {\\em answer selection}, where, given a question, the system must pick correct answers from a pool of candidate answers~\\cite{xu2002trec,jijkoun2004answer,ko2007probabilistic,lee2009model,severyn2013automatic}.\n\nAnswer selection has many commercial applications. Virtual assistants such as Amazon Alexa and Google Assistant are designed to respond to natural language questions posed by users. In some cases such systems simply use a search engine to find relevant webpages; however, for many kinds of queries, such systems are capable of providing a concise specific answer to the posed question. \n\nSimilarly, various AI companies are attempting to improve customer service by automatically replying to customer queries. 
One way to design such a system is to curate a dataset of historical questions posed by customers and the responses given to these queries by human customer service agents. Given a previously unobserved query, the system can then locate the best matching answer in the curated dataset. \n\nAnswer selection is a difficult task, as typically there is a large number of possible answers which need to be examined. Furthermore, although in many cases the correct answer is lexically similar to the question, in other cases semantic similarities between words must be learned in order to find the correct answer~\\cite{kolomiyets2011survey,allam2012question}. Additionally, many of the words in the answer may not be relevant to the question. \n\nConsider, for example, the following question-answer pair:\n\n\\begin{displayquote}\n\\textbf{How do I freeze my account?}\n\nHello, hope you are having a great day. You can freeze your account by logging into our site and pressing the freeze account button. Let me know if you have any further questions regarding the management of your account with us. \n\\end{displayquote}\n\n\\noindent Intuitively, the key section which identifies the above answer as correct is ``[...] you can freeze your account by [...]'', which represents a small fraction of the entire answer.\n\nEarlier work on answer selection used various techniques, ranging from information retrieval methods~\\cite{clarke2001exploiting} to machine learning methods relying on hand-crafted features~\\cite{parsetreeManning,wang2007jeopardy}. 
Deep learning methods, which have recently shown great success in many domains including image classification and annotation~\\cite{krizhevsky2012imagenet,zhou2014learning,lewenberg2016predicting}, multi-annotator data fusion~\\cite{albarqouni2016aggnet,gaunt2016training}, NLP and conversational models~\\cite{graves2013speech,bahdanau2014ntm,li2015diversity,kandasamy2017batch,shao2017generating} and speech recognition~\\cite{graves2013speech,albarqouni2016aggnet}, have also been successfully applied to question answering~\\cite{fengCNN}. Current state-of-the-art methods use recurrent neural network (RNN) architectures which incorporate attention mechanisms~\\cite{tan2016}. These allow such models to better focus on relevant sections of the input~\\cite{bahdanau2014ntm}.\n\n{\\bf Our contribution: } We propose a new architecture for question answering. Our high-level approach is similar to recently proposed QA systems~\\cite{fengCNN,tan2016}, but we augment this design with a more sophisticated attention mechanism, combining the {\\em local} information in a specific part of the answer with a {\\em global} representation of the entire question and answer. \n\nWe evaluate the performance of our model using the recently released {\\em InsuranceQA dataset}~\\cite{fengCNN}, a large open dataset for answer selection comprised of insurance related questions such as: ``what can you claim on Medicare?''. \\footnote{As opposed to other QA tasks such as answers extraction or machine text comprehension and reasoning~\\cite{weston2015towards,rajpurkar2016squad}, the InsuranceQA dataset questions do not generally require logical reasoning.}\n\nWe beat state-of-the-art approaches ~\\cite{fengCNN,tan2016}, and achieve good performance even when using a relatively small network. \n\n\n\n\\section{Previous Work}\n\nAnswer selection systems can be evaluated using various datasets consisting of questions and answers. 
Early answer selection models were commonly evaluated against the QASent dataset \\cite{wang2007jeopardy}; however, this dataset is very small and thus less similar to real-world applications. Further, its candidate answer pools are created by finding sentences with at least one similar (non-stopword) word as compared to the question, which may create a bias in the dataset. \n\nWiki-QA~\\cite{yang2015wikiqa} is a dataset that contains several orders of magnitude more examples than QASent, where the candidate answer pools were created from the sentences in the relevant Wikipedia page for a question, reducing the amount of keyword bias in the dataset compared to QASent. \n\nOur analysis is based on the InsuranceQA~\\cite{fengCNN} dataset, which is much larger, and similar to real-world QA applications. The answers in InsuranceQA are relatively long (see details in Section~\\ref{sec:setup}), so the candidate answers are likely to contain content that does not relate directly to the question; thus, a good QA model for InsuranceQA must be capable of identifying the most important words in a candidate answer. \n\n\nEarly work on answer selection was based on finding the semantic similarity between question and answer parse trees using hand-crafted features \\cite{parsetreeManning, wang2007jeopardy}. Often, lexical databases such as WordNet were used to augment such models \\cite{ChangWordnet}. Not only did these models suffer from using hand-crafted features, those using lexical databases were also often language-dependent. \n\nRecent attempts at answer selection aim to map questions and candidate answers into n-dimensional vectors, and use a vector similarity measure such as cosine similarity to judge a candidate answer's affinity to a question. In other words, the similarity between a question and a candidate is high if the candidate answers the question well, low if the candidate is not a good match for the question. 
\n\nSuch models are similar to Siamese models, a good review of which can be found in the paper by Mueller et al.~\\cite{mueller2016siamese}. Feng et al.~\\cite{fengCNN} propose using convolutional neural networks to vectorize both questions and answers before comparing them using cosine similarity. Similarly, Tan et al.~\\cite{tan2016} use a recurrent neural network \nto vectorize questions and answers. \nAttention mechanisms have proven to greatly improve the performance of recurrent networks in many tasks \\cite{bahdanau2014ntm, tan2016, rocktaschel2015entailment,rush2015neural,luong2015effective}, and indeed Tan et al.~\\cite{tan2016} incorporate a simple attention mechanism in their system.\n\n\n\\section{Preliminaries}\n\\label{l_sect_prelim}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.65\\textwidth]{attention0}\n\\caption{Model architecture using answer-localized attention \\cite{tan2016}. The left hand side is used for the question. The right side of the architecture is used for both the answer and distractor.}\n\\label{fig:oldModel}\n\\end{figure*}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.8\\textwidth]{attention3}\n\\caption{Our proposed architecture with augmented attention. As in Figure~\\ref{fig:oldModel}, the right side of the model is used to embed answers and distractors.}\n\\label{fig:newModel}\n\\end{figure*}\n\nOur approach is similar to the {\\em Answer Selection Framework} of Tan et al.~\\cite{tan2016}, but we propose a different network architecture and a new attention mechanism. We first provide a high-level description of this framework (see the original paper for a more detailed discussion), then discuss our proposed attention mechanism. \n\n The framework is based on a neural network with parameters $\\theta$ which can embed either a question $q$ or a candidate answer $a$ into low-dimensional vectors $r \\in \\mathbb{R}^k$. 
The network can embed a question with no attention, which we denote as $f_{\\theta}(q)$, and embed a candidate answer with attention to the question, denoted as $g_{\\theta}(a, q)$. We denote the similarity function used as $s(x,y)$ ($s$ may be the dot product, the cosine similarity or some other similarity function).\n\nGiven a trained network, we compute the similarity between question and answer embeddings:\n\n$$s_i = s(f_{\\theta}(q), g_{\\theta}(A_i, q))$$\n\\noindent for any $i \\in \\{1,2,\\ldots,k\\}$ with $A_i$ being the $i$th candidate answer in the pool. We then select the answer yielding the highest similarity, $\\arg \\max_i s_i$. \n\nThe embedding functions, $f_{\\theta}$ and $g_{\\theta}$, depend on the architecture used and the parameters $\\theta$. The network is trained by choosing a loss function $\\mathcal{L}$, and using stochastic gradient descent to tune the parameters given the training data. Each training item consists of a question $q$, the correct answer $a^*$ and a distractor $d$ (an incorrect answer). A prominent choice is a shifted hinge loss, requiring that the correct answer have a higher score than the distractor by at least a certain margin $M$, where the score is based on the similarity to the question. \n\n$$ \\mathcal{L} =\\max \\Big\\{ 0, M - \\sigma_{a^*} + \\sigma_{d} \\Big\\} $$ \n\nwhere:\n$$\n\\sigma_{a^*} = s \\Big(f_{\\theta}(q), g_{\\theta}(a^*, q) \\Big) \n$$ \n$$\n\\sigma_{d} = s\\Big( f_{\\theta}(q), g_{\\theta}(d, q) \\Big)\n$$\n\nThe above expression yields zero loss if the correct answer scores higher than the distractor by at least the margin $M$, and the loss increases linearly with the score difference between the distractor and the correct answer. \n\nAny reasonable neural network design for $f_{\\theta}$ can be used to build a working answer-selection system using the above approach; however, the network design can have a big impact on the system's accuracy. 
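The scoring and loss above can be sketched in a few lines of NumPy (an illustrative sketch, not the authors' implementation; the function names and the margin value are our own):

```python
import numpy as np

def cosine_sim(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def shifted_hinge_loss(q_emb, ans_emb, dist_emb, margin=0.2):
    # L = max(0, M - sigma_{a*} + sigma_d): zero once the correct
    # answer outscores the distractor by at least the margin M.
    s_correct = cosine_sim(q_emb, ans_emb)
    s_distractor = cosine_sim(q_emb, dist_emb)
    return max(0.0, margin - s_correct + s_distractor)
```

Here the embeddings would be produced by $f_{\\theta}$ and $g_{\\theta}$; any differentiable similarity function could replace the cosine.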
\n\n\\subsection{Embedding Questions and Answers}\n\nEarlier work examined multiple approaches for embedding questions and answers, including convolutional neural networks, recurrent neural networks (RNNs) (sometimes augmented with an attention mechanism) and hybrid designs \\cite{fengCNN,tan2016}. \n\nAn RNN design ``digests'' the input sequence, one element at a time, changing its internal state at every timestep. The RNN is based on a cell, a parametrized function mapping a current state and an input element to the new state~\\cite{werbos1990backpropagation}. A popular choice for the RNN's cell is the Long Short Term Memory (LSTM) cell~\\cite{hochreiter1997long}.\n\nGiven a question consisting of words $q=(x_1,x_2,\\ldots,x_m)$, we denote the $i$'th output of an LSTM RNN digesting the question as $q_i$; similarly, given an answer $a=(y_1,y_2,\\ldots,y_n)$ we denote the $j$'th output of an LSTM RNN digesting the answer as $a_j$. \n\nOne simple approach is to have the embeddings of the question and answer be the last LSTM output, i.e. $f_{\\theta}(q) = q_m$ and $f_{\\theta}(a) = a_n$. Note that $q_i,a_i$ are vectors whose dimensionality depends on the dimensionality of the LSTM cell; we denote by $q_{i,j}$ the $j$'th coordinate of the LSTM output at timestep $i$.\n\nAnother alternative is to aggregate the LSTM outputs across the different timesteps by taking their coordinate-wise mean (mean-pooling):\n$$f_{\\theta}(q)_r = \\frac{1}{m} \\sum_{i=1}^m q_{i,r}$$\n\\noindent Alternatively, one may aggregate by taking the coordinate-wise max (max-pooling):\n$$f_{\\theta}(q)_r = \\max_{i=1}^m q_{i,r}$$\n\nWe use another simple way of embedding the question and answer, which is based on term-frequency (TF) features. Given a vocabulary of words $V=(w_1,\\ldots,w_v)$, and a text $p$ we denote the TF representation of $p$ as $p^{\\text{tf}} = (d_1,\\ldots,d_v)$ where $d_j=1$ if the word $w_j$ occurs in $p$ and otherwise $d_j=0$. 
\\footnote{Another alternative is setting $d_j$ to the {\\em number} of times the word $w_j$ appears in $p$. A slightly more complex option is using TF-IDF features~\\cite{ramos2003using} or an alternative hand-crafted feature scheme; however, we opt for the simpler TF representation, letting the neural network learn how to use the raw information.}\n\nA simple overall embedding of a text $p$ is $p' = W p^{\\text{tf}}$ where $W$ is a $d \\times v$ matrix, and where $d$ determines the final embedding's dimensionality; the weights of $W$ are typically part of the neural network parameters, to be learned during the training of the network. Instead of a single matrix multiplication, one may use the slightly more elaborate alternative of applying a feedforward network, in order to allow for non-linear embeddings.\n\nWe note that a TF representation loses information regarding the {\\em order} of the words in the text, but can provide a good global view of key topics discussed in the text. \n\nOur main contribution is a new design for the neural network that ranks candidate answers for a given question. Our design uses a TF-based representation of the question and answer, and includes a new attention mechanism which uses this global representation when computing the attention weights (in addition to the local information used in existing approaches). We describe existing attention designs (based on local information) in Section~\\ref{l_sect_loc_attn}, before proceeding to describe our approach in Section~\\ref{l_sect_glob_loc_attn}. \n\n\\subsection{Local Attention}\n\\label{l_sect_loc_attn}\n\nEarly RNN designs were based on applying a deep feedforward network at every timestep, but struggled to cope with longer sequences due to exploding and vanishing gradients \\cite{lstm}. 
Other recurrent cells such as the LSTM and GRU cells \\cite{lstm,gru} have been proposed as they alleviate this issue; however, even with such cells, tackling large sequences remains hard~\\cite{lstmsSUCK}. Consider using an LSTM to digest a sequence, and taking the final LSTM state to represent the entire sequence; such a design forces the system to represent the entire sequence using a single LSTM state, which is a very narrow channel, making it difficult for the network to represent all the intricacies of a long sequence~\\cite{bahdanau2014ntm}. \n\nAttention mechanisms allow placing varying amounts of emphasis across the entire sequence~\\cite{bahdanau2014ntm}, making it easier to process long sequences; in QA, we can give different weights to different parts of the answer while aggregating the LSTM outputs along the different timesteps: \n$$f_{\\theta}(a)_r = \\sum_{i=1}^m \\alpha_i a_{i,r}$$\n\\noindent where $\\alpha_i$ denotes the weight (importance) placed on timestep $i$ and $a_{i,r}$ is the $r$th value of the $i$th embedding vector. \n\nTan et al.~\\cite{tan2016} proposed a very simple attention mechanism for QA, shown in Figure~\\ref{fig:oldModel}:\n$$ m_{a,q}(i) = W_{ad} a_i + W_{qd} f_{\\theta}(q) $$\n$$ \\alpha_i \\propto \\exp (w_{ms}^T \\tanh(m_{a,q}(i))) $$\n$$ \\hat{a} = \\sum_{i=1}^m \\alpha_i a_i $$ \n\\noindent where $\\alpha_i a_i$ is the weighted hidden output, $W_{ad}$ and $W_{qd}$ are matrix parameters to be learned, and $w_{ms}$ is a vector parameter to be learned.\n\n\n\\section{Global-Local Attention}\n\\label{l_sect_glob_loc_attn}\n\nA limitation of the attention mechanism of Tan et al.~\\cite{tan2016} is that it only looks at the embedded question vector and one candidate answer word embedding at a time. Our proposed attention mechanism adds a {\\em global} view of the candidate, incorporating information from {\\em all} words in the answer. 
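For concreteness, the attention step of Tan et al.~\\cite{tan2016} shown above can be sketched as follows (illustrative NumPy; matrix shapes are arbitrary here, and the softmax shift is a standard numerical-stability trick not spelled out in the equations):

```python
import numpy as np

def local_attention(answer_outputs, q_emb, W_ad, W_qd, w_ms):
    # answer_outputs: (m, h) array of LSTM outputs a_i; q_emb: f(q).
    # m_i = W_ad a_i + W_qd f(q);  alpha_i proportional to
    # exp(w_ms^T tanh(m_i)).
    m = answer_outputs @ W_ad.T + q_emb @ W_qd.T
    scores = np.tanh(m) @ w_ms
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()
    # a_hat: attention-weighted average of the LSTM outputs.
    return alpha, (alpha[:, None] * answer_outputs).sum(axis=0)
```

With all projection parameters set to zero the weights reduce to a uniform average, which makes the degenerate case easy to check.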
\n\n\\subsection{Creating Global Representations}\n\nOne possibility for constructing a global embedding is an RNN design. However, RNN cells tend to focus on the more recent parts of an examined sequence~\\cite{lstmsSUCK}. We thus opted for using a term-frequency vector representing the entire answer, as shown in Figure~\\ref{fig:newModel}. We denote this representation as:\n$$a^{\\text{tf}} = (d_1,d_2,\\ldots,d_v) $$ \n\\noindent where $d_i$ relates to the $i$'th word in our chosen vocabulary, and $d_i = 1$ if this word appears in the candidate answer, and $d_i = 0$ otherwise. \n\nConsider a candidate answer $a = (y_1,\\ldots,y_n)$, and let $(a_1,\\ldots,a_n)$ denote its sequence of RNN LSTM outputs, i.e. $a_i$ denotes the $i$'th output of an RNN LSTM processing this sequence (so $a_i$ is a vector whose dimensionality equals the hidden size of the LSTM cell). We refer to $a_i$ as the local embedding at time $i$. \\footnote{Note that although we call $a_i$ a local embedding, the $i$'th LSTM state does of course take into account other words in the sequence (and not only the $i$'th word). By referring to it as ``local'' we simply mean to say that it is more heavily influenced by the $i$'th word or words close to it in the sequence.}\n\n\\subsection{Combining Local and Global Representations to Determine Attention Weights}\n\nThe goal of an attention mechanism is to construct an overall representation of the candidate answer $a$, which is later compared to the question representation to determine how well the candidate answers the question; this is achieved by obtaining a set of weights $w_1,\\ldots,w_n$ (where $w_i \\in \\mathbb{R}^+$), and constructing the final answer representation as a weighted average of the LSTM outputs, with these weights. \n\nGiven a candidate answer $a$, we compute the attention coefficient $w_i$ for timestep $i$ as follows. 
\n\nFirst, we combine the local view (the LSTM output, more heavily influenced by the words around timestep $i$) with the global view (based on TF features of all the words in the answer). We begin by taking linear combinations of the TF features and then passing them through a $\\tanh$ nonlinearity (so that the range of each dimension is bounded in $[-1,1]$):\n$$ b^{\\text{tf}} = \\tanh (W_{1} a^{\\text{tf}}) $$\n\\noindent The weights of the matrix $W_{1}$ are model parameters to be learned, and its dimensions are set so as to map the sparse TF vector $a^{\\text{tf}}$ to a dense low-dimensional vector (in our implementation $b^{\\text{tf}}$ is a 50-dimensional vector). \n\nSimilarly, we take a linear combination of the different dimensions of the local representation $a_i$ (in this case there is no need for the $\\tanh$ operation, as the LSTM output is already bounded):\n$$ b_i^{\\text{loc}} = W_{2} a_i $$\n\\noindent where the weights of $W_{2}$ are model parameters to be learned (and with dimensions set so that $b_i^{\\text{loc}}$ would be a 140-dimensional vector). \n\nGiven a TF representation of a text $x^{\\text{tf}}$, whose dimensionality is the size of the vocabulary, and an RNN representation of the text $x^{\\text{rnn}}$, with a certain dimensionality $h$, we may wish to construct a normalized representation of the text. As the norms of these two parts may differ, simply concatenating these parts may result in a vector dominated by one side. We thus define a joint representation \n$h(x^{\\text{tf}}, x^{\\text{rnn}})$ as follows. 
\n\nWe normalize each part so as to have a desired ratio of norms $\\frac{\\alpha}{\\beta}$ between the TF and RNN representations; this ratio reflects the relative importance of the TF and RNN embeddings in the combined representation (for instance, when setting both $\\alpha, \\beta$ to $1$, both parts would have a unit norm, giving them equal importance): \n$$ c^{\\text{tf}} = \\frac{\\alpha}{||x^{\\text{tf}}||} \\cdot x^{\\text{tf}} $$\n$$ c^{\\text{rnn}} = \\frac{\\beta} {||x^{\\text{rnn}}||} \\cdot x^{\\text{rnn}} $$ \n\\noindent We then concatenate the normalized TF and RNN representations to generate the joint representation:\n$$h(x^{\\text{tf}}, x^{\\text{rnn}}) = c^{\\text{tf}} \\| c^{\\text{rnn}} $$\n\\noindent where $\\|$ represents vector concatenation. \n\nWe construct the global-local representation at the $i$'th word of the answer as:\n$$ a_i^{\\text{glob-loc}} = h(b^{\\text{tf}}, b_i^{\\text{loc}}) $$ \n\\noindent using values of $\\alpha=0.5, \\beta=1$.\n\nThe raw attention coefficient of the $i$'th word in the answer is computed by measuring the similarity of a vector representing the question, and a global-local representation of the answer at word $i$. We build these representations, of matching dimensions, by taking the same number of linear combinations of $a_i^{\\text{glob-loc}}$ (the raw global-local representation of the answer at word $i$) and of the question embedding. 
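The norm-balancing join $h(\\cdot,\\cdot)$ just defined is easy to state in code (an illustrative NumPy sketch; the function name is our own):

```python
import numpy as np

def join(x_tf, x_rnn, alpha=0.5, beta=1.0):
    # Rescale each part to norms alpha (TF) and beta (RNN) before
    # concatenating, so neither part dominates the joint vector.
    c_tf = alpha * x_tf / np.linalg.norm(x_tf)
    c_rnn = beta * x_rnn / np.linalg.norm(x_rnn)
    return np.concatenate([c_tf, c_rnn])
```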
Thus the raw attention coefficient for the $i$'th word is:\n\n$$\n\\alpha'_i = sim\\Big( W_{3} a_i^{\\text{glob-loc}}, W_{4} f_{\\theta}(q) \\Big)\n$$\n\\noindent where $W_{3}$, $W_{4}$ are matrices whose weights are parameters to be learned (and whose dimensions are set so that $ W_{3} a_i^{\\text{glob-loc}} $ and $W_{4} f_{\\theta}(q)$ would be vectors of identical dimensionality, 140 in our implementation), and where $sim$ denotes the cosine similarity between vectors:\n$$ sim(u,v) = \\frac{u \\cdot v}{||u|| \\cdot ||v|| } $$ \n\\noindent with the $\\cdot$ symbol in the numerator denoting the dot product between two vectors.\n\n\nFinally, we obtain the final attention weights by applying the softmax operator to the raw attention coefficients. Given the raw attention coefficients $\\alpha' = (\\alpha'_1, \\alpha'_2, \\ldots, \\alpha'_m)$, the final attention weights $\\alpha = (\\alpha_1, \\alpha_2, \\ldots, \\alpha_m)$ satisfy $\\alpha_i \\propto \\exp(\\alpha'_i)$, i.e.\n$$ \\alpha_i = \\frac{\\exp{(\\alpha'_i)}}{\\sum_{j=1}^{m} \\exp{(\\alpha'_j)}} $$\n\n\\subsection{Building the Final Attention Based Representation}\n\nThe role of the attention weights is to build a final representation of a candidate answer; different answers are ranked based on the similarity of their final representation and a final question representation. \nSimilarly to the TF representation of the answer, we denote the TF representation of the question as $q^{\\text{tf}} = (r_1,r_2,\\ldots,r_v)$, where $r_i$ relates to the $i$'th word in our chosen vocabulary, and $r_i = 1$ if this word appears in the question, and $r_i = 0$ otherwise. 
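The attention-weight computation described above can be summarized in a short sketch (illustrative NumPy; `W3` and `W4` stand in for the learned projection matrices):

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity: dot product over the product of norms.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def attention_weights(glob_loc_reps, q_emb, W3, W4):
    # Raw coefficient: cosine similarity between the projected
    # global-local answer view at word i and the projected question.
    q_proj = W4 @ q_emb
    raw = np.array([cosine(W3 @ g, q_proj) for g in glob_loc_reps])
    # Softmax over the raw coefficients gives the final weights.
    e = np.exp(raw - raw.max())
    return e / e.sum()
```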
Our final representation of the question joins the TF representation of the question with the mean-pooled RNN question representation (somewhat similarly to how we join the TF and RNN representations when determining the attention weights):\n$$ f'_{\\theta}(q) = h(q^{\\text{tf}}, f_{\\theta}(q)) $$ \n\nOur final representation of the answer also joins two parts, a TF part $a^{\\text{tf}}$ (as defined earlier) and an attention-weighted RNN part $\\hat{a}$. We construct $\\hat{a}$ as the weighted average of the LSTM outputs, where the weights are the attention weights defined above:\n$$ \\hat{a} = \\sum_{i=1}^m \\alpha_i a_i $$ \n\nThe final representation of the answer is thus:\n$$ f'_{\\theta}(a) = h(a^{\\text{tf}}, \\hat{a}) $$ \n\nFigure \\ref{fig:newModel} describes the final architecture of our model, showing how we use a TF-based global embedding both in determining the attention weights and in the overall representation of the questions and answers. The dotted lines in the figures indicate that our model's attention weights depend not only on the local embedding but also on the global embedding. \n\n\\subsection{Tuning Parameters to Minimize the Loss}\n\nThe loss function $\\mathcal{L}$ we use is the shifted hinge loss defined in Section~\\ref{l_sect_prelim}. We compute the score of an answer candidate $a$ as the similarity between its final representation $f'_{\\theta}(a)$ and the final representation of the question $f'_{\\theta}(q)$:\\footnote{We use the cosine similarity as our similarity function for the loss, though other similarity functions can also be used.} \n$$sim(f'_{\\theta}(q),f'_{\\theta}(a))$$ \n\\noindent Given the score of the correct answer candidate $\\sigma_{a^*} = sim(f'_{\\theta}(q),f'_{\\theta}(a^*))$ and the score of a distractor (incorrect) candidate $d$, $\\sigma_d = sim(f'_{\\theta}(q),f'_{\\theta}(d))$, our loss is \n$\\mathcal{L} =\\max \\Big\\{ 0, M - \\sigma_{a^*} + \\sigma_{d} \\Big\\}$. 
\n\nThe above loss relates to a single training item (consisting of a single question, its correct answer and an incorrect candidate answer). Training the neural network parameters involves iteratively examining the training items (each containing a question, its correct answer and a distractor) and modifying the current network parameters. We train our system using a variant of stochastic gradient descent (SGD), the Adam optimizer~\\cite{kingma2014adam}.\n\n\\section{Empirical Evaluation}\n\nWe evaluate our proposed neural network design in a similar manner to earlier evaluations of Siamese neural network designs~\\cite{yang2015wikiqa,severyn2015learning}, where a neural network is trained to embed both questions and candidate answers as low-dimensional vectors. \n\n\\subsection{Experiment Setup} \\label{sec:setup}\n\n\\begin{figure*}[h!t]\n\\includegraphics[width=\\textwidth]{att3_example5}\n\\includegraphics[width=0.8\\textwidth]{att3_example92}\n\\caption{A visualization of the attention weights for each word in a correct answer to a question. These examples show how the attention mechanism is focusing on relevant parts of the correct answer (although the attention is still quite noisy).}\n\\label{fig-example-attn-weights}\n\\end{figure*}\n\n\\begin{figure}[h!t]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{ModelPerf}\n\\caption{Performance of our system on InsuranceQA for various model sizes $h$ (both the LSTM hidden layer size and embedding size)}\n\\label{fig-size-to-perf}\n\\end{figure}\n\nWe use the InsuranceQA dataset and its evaluation framework~\\cite{fengCNN}. \nThe InsuranceQA dataset contains question and answer pairs from the insurance domain, with roughly 25,000 unique answers, and is already partitioned into a training set and two test sets, called test 1 and test 2. \n\nThe InsuranceQA dataset has relatively short questions (a mean length of 7 words). 
However, the answers are typically very long (a mean length of 94 words). \n\nAt test time the system takes as input a question $q$ and a pool of candidate answers $P=(a_1,a_2,\\ldots,a_k)$ and is asked to select the best matching answer $a^*$ to the question from the pool. InsuranceQA comes with answer pools of size $k=500$, consisting of the correct answer and random distractors chosen from the set of answers to other questions. \n\nState-of-the-art results for InsuranceQA were achieved by Tan et al.~\\cite{tan2016}, who also provide a comparison with several baselines: bag-of-words (with an IDF-weighted sum of word vectors and cosine-similarity-based ranking), the Metzler-Bendersky IR model~\\cite{bendersky2010}, and the CNN-based Architecture-II of~\\cite{fengCNN}, as well as Architecture-II with the Geometric mean of Euclidean and Sigmoid Dot product (GESD).\n\nWe implemented our model in TensorFlow~\\cite{abadi2016tensorflow} and conducted experiments on our GPU cluster. \n\nWe use the same hidden layer sizes and embedding size as Tan et al.~\\cite{tan2016}: $h=141$ for the bidirectional LSTM size and an embedding size of $e=100$; this allows us to investigate the impact of our proposed attention mechanism. \\footnote{As is the case with many neural networks, increasing the hidden layer size or embedding size can improve the performance of our InsuranceQA models; we compare our performance to the work of Tan et al.~\\cite{tan2016} with the same hidden and embedding sizes; similarly to them, we use embeddings pre-trained using Word2Vec~\\cite{mikolov2013} and avoid overfitting by applying early stopping (we also apply Dropout~\\cite{dropout,zaremba2014recurrent}). 
} \n\n\\begin{table}[h!t]\n\\small\n\\centering\n \\begin{tabular}{ | l | c | c | }\n \\hline\n Model & Test1 & Test2 \\\\\n \\hline\n \\hline\n Bag-of-words & 32.1 & 32.2 \\\\\n \\hline\n Metzler-Bendersky & 55.1 & 50.8 \\\\\n \\hline\n Arch-II~\\cite{fengCNN} & 62.8 & 59.2 \\\\\n \\hline\n Arch-II GESD~\\cite{fengCNN} & 65.3 & 61.0 \\\\\n \\hline \n Attention LSTM~\\cite{tan2016} & 69.0 & 64.8 \\\\ \n \\hline\n \\hline\n TF-LSTM Concatenation & 62.1 & {61.5} \\\\\n \n \\hline \n Local-Global Attention & {\\bf 70.1} & {\\bf 67.4} \\\\\n \n \\hline \n \\end{tabular}\n \\vspace{0.2cm}\n\\caption{Performance of various models on InsuranceQA}\n\\label{table:perf_insqa}\n\\end{table}\n\n\\subsection{Results}\n\nTable~\\ref{table:perf_insqa} presents the results of our model and the various baselines for InsuranceQA. The performance metric used here is P@1, the proportion of instances where a correct answer was ranked higher than all other distractors in the pool. The table shows that our model outperforms the previous baselines. \n\nWe have also examined the performance of our model as a function of its size (which determines the system's runtime and memory consumption). We used different values $h \\in \\{10,20,30,40,50\\}$ for both the LSTM hidden layer size and the embedding size, and examined the performance of the resulting QA system on InsuranceQA. Our results are given in Figure~\\ref{fig-size-to-perf}, which shows both the P@1 metric and the mean reciprocal rank (MRR)~\\cite{craswell2009mean,chapelle2009expected}.\\footnote{The MRR metric assigns the model partial credit even in cases where the highest ranking candidate is an incorrect answer, with the score depending on the highest rank of a correct answer. }\n\nFigure~\\ref{fig-size-to-perf} shows that performance improves as the model gets larger, but the returns on extending the model size quickly diminish. Interestingly, even relatively small models achieve a reasonable question answering performance. 
\n\nTo show our attention mechanism is necessary to achieve good performance, we also construct a model that simply concatenates the output of the feedforward network (on TF features) and the output of the bidirectional LSTM, called TF-LSTM concatenation. While this model does make use of TF-based features in addition to the LSTM state of the RNN, it does not use an attention mechanism to allow it to focus on the more relevant parts of the text. As the table shows, the performance of the TF-LSTM model is significantly lower than that of our model with the global-local attention mechanism. This indicates that the improved performance stems from the model's improved ability to focus on the relevant parts of the answer (and not simply from having a larger capacity and including TF features).\n\nFinally, we examine the attention model's weights to evaluate it qualitatively. Figure~\\ref{fig-example-attn-weights} visualizes the weights for two question-answer pairs, where the color intensity reflects the relative weight placed on the word (the $\\alpha_i$ coefficients discussed earlier). The figure shows that our attention model can focus on the parts of the candidate answer that are most relevant for the given question. \n\n\n\n\\section{Conclusion}\n\nWe proposed a new neural design for answer selection, using\nan augmented attention mechanism, which combines both local and global information when determining the attention weight to place at a given timestep. Our analysis shows that our design outperforms earlier designs based on a simpler attention mechanism which only considers the local view. \n\nSeveral questions remain open for future research. First, the TF-based global view of our design was extremely simple; could a more elaborate design, possibly using convolutional neural networks, achieve better performance? 
\n\nSecond, our attention mechanism joins the local and global information in a very simple manner, by normalizing each vector and concatenating the normalized vectors. Could a more sophisticated joining of this information, perhaps allowing for more interaction between the parts, help further improve the performance of our mechanism?\n\nFinally, can the underlying principles of our global-local attention design improve the performance of other systems, such as machine translation or image processing systems? \n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe duality between symplectic and orthogonal groups has a long standing history,\nand has been noted in physics literature in various settings, see e.g.\n \\cite{Jungling-Oppermann-80,Mkrtchyan-81,Wegner-83,Witten-98}. Informally, the duality asserts that\n averages such as moments or partition functions\nfor the symplectic case of ``dimension'' $N$, can be derived from the respective\nformulas for the orthogonal case of dimension $N$ by inserting $-N$\n into these expressions and by simple scaling.\nThe detailed study of the moments of one-matrix Wishart ensembles, with duality explicitly noted,\nappears in \\cite{Hanlon-Stanley-Stembridge-92}, see \\cite[Corollary 4.2]{Hanlon-Stanley-Stembridge-92}.\nThe duality for one matrix Gaussian Symplectic Ensemble was noted\nby Mulase and Waldron \\cite{Mulase-Waldron-03} who introduced M\\\"obius graphs to write the\nexpansion for traces of powers of\nGOE\/GUE\/GSE expansions in a unified way. The duality appears also in\n \\cite[Theorem 6]{Ledoux-07} as a by-product of differential equations\n for the generating functions of moments.\nRef. 
\\cite{Goulden-Jackson-96,Goulden-Jackson-96b,Goulden-Jackson-97}\nanalyze the related ``genus series'' over locally orientable surfaces.\n\n\n\nThe purpose of this paper is to prove that the duality between moments of the Gaussian Symplectic Ensemble\nand the Gaussian Orthogonal Ensemble, and between real Wishart and quaternionic Wishart ensembles, extends\nto several independent\nmatrices. Our technique consists of elementary combinatorics; our proofs\n differ from \\cite{Mulase-Waldron-03} in the one-matrix case,\n and provide a more geometric interpretation for the duality; in the one-matrix Wishart case,\n our proof completes the combinatorial approach initiated in\n \\cite[Section 6]{Hanlon-Stanley-Stembridge-92}.\nThe technique limits the scope of our results to moments, but the\nrelations between moments suggest similar relations between other analytic objects,\nsuch as partition functions, see \\cite{Mulase-Waldron-03}, \\cite{Kodama-Pierce}. The asymptotic expansion of the partition function and an analytic description of\n the coefficients of this expansion\n for the $\\beta=2$ case appear in \\cite{Ercolani-McLaughlin-03,Guionnet-Maurel-Segala-0503064,Maurel-Segala-0608192}.\n\nThe paper is organized as follows. In Section \\ref{Sect1} we review basic properties of quaternionic\nGaussian random variables. 
In Section \\ref{Sect2} we introduce M\\\"obius graphs; Theorems \\ref{T quaternion moments} and \\ref{thm2.1}\n give formulae\nfor the expected values of products of quaternionic Gaussian random\nvariables in terms of the Euler characteristics of sub-families of\nM\\\"obius graphs or of bipartite M\\\"obius graphs.\nIn Section \\ref{duality} we apply the formulae to the quaternionic Wigner and Wishart families.\n\nIn this paper, we do not address the question of whether the duality\n can be extended to more general functions, or to more general\n $\\beta$-Hermite and $\\beta$-Laguerre ensembles introduced in \\cite{Dumitru-Edelman-06}.\n\n\n\\section{ Moments of quaternion-valued Gaussian random variables}\\label{Sect1}\n\n\\subsection{Quaternion Gaussian law}\nRecall that a quaternion $q\\in \\mathbb{H}$ can be represented as $q=x_0+i x_1+j x_2+ k x_3$\nwith\n$i^2=j^2=k^2=ijk=-1$ and with real coefficients $x_0,\\dots,x_3$.\nThe conjugate quaternion is $\\overline{q}=x_0-i x_1-j x_2- k x_3$,\nso $|q|^2:=q\\overline{q}\\geq 0$. 
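The defining relations $i^2=j^2=k^2=ijk=-1$ can be checked concretely in the standard identification of quaternions with $2\\times 2$ complex matrices (used in \\eqref{H2C} below); the following NumPy check is purely illustrative and not part of the paper:

```python
import numpy as np

def quat_to_mat(x0, x1, x2, x3):
    # 2x2 complex matrix representing x0 + i x1 + j x2 + k x3.
    return np.array([[x0 + 1j * x1, x2 + 1j * x3],
                     [-x2 + 1j * x3, x0 - 1j * x1]])

I2 = np.eye(2)
qi = quat_to_mat(0, 1, 0, 0)  # matrix for i
qj = quat_to_mat(0, 0, 1, 0)  # matrix for j
qk = quat_to_mat(0, 0, 0, 1)  # matrix for k
# Defining relations: i^2 = j^2 = k^2 = ijk = -1.
assert np.allclose(qi @ qi, -I2)
assert np.allclose(qj @ qj, -I2)
assert np.allclose(qk @ qk, -I2)
assert np.allclose(qi @ qj @ qk, -I2)
# The real coefficient x0 is half the trace of the matrix model.
q = quat_to_mat(2, 3, 4, 5)
assert np.isclose(np.trace(q).real / 2, 2.0)
```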
Quaternions with $x_1=x_2=x_3=0$ are usually identified with real numbers;\nthe real part of a quaternion is $\\Re(q)=(q+\\bar{q})\/2$.\n\nIt is well known that quaternions can be identified with the set of certain $2\\times2$ complex matrices:\n\\begin{equation}\n \\label{H2C}\n \\mathbb{H}\\ni x_0+ix_1+jx_2+kx_3\\sim \\left[\\begin{matrix}\n x_0+ix_1 & x_2+i x_3 \\\\\n -x_2+ix_3&x_0-ix_1\n\\end{matrix}\\right]\\in \\mathcal{M}_{2\\times 2}(\\mathbb{C}),\n\\end{equation}\nwhere on the right hand side $i$ is the usual imaginary unit of $\\mathbb{C}$.\nNote that since $\\Re(q)$ is half the trace of the matrix\nrepresentation in \\eqref{H2C}, this implies the cyclic property\n\\begin{equation}\n \\label{tmp**}\\Re(q_1q_2\\dots q_n)=\\Re(q_2q_3\\dots q_nq_1).\n\\end{equation}\n\n\nThe (standard) quaternion Gaussian random variable is an $\\mathbb{H}$-valued random variable\nwhich can be represented as\n\\begin{equation}\n \\label{HH}\n Z=\\xi_0+i \\xi_1+ j\\xi_2+k\\xi_3\n\\end{equation} with independent real\nnormal $N(0,1)$ random variables $\\xi_0,\\xi_1,\\xi_2,\\xi_3$.\nDue to symmetry of the centered normal laws on $\\mathbb{R}$,\nthe law of $(Z,\\overline{Z})$ is the same as the law of $(\\overline{Z},Z)$.\nA calculation shows that if $Z$ is quaternion Gaussian then\nfor fixed $q_1, q_2 \\in\\mathbb{H}$,\n$$\n\\mathbb{E}(Z q_1 Zq_2)=\\mathbb{E}(Z^2)\\bar{q}_1q_2,\\;\n\\mathbb{E}(Z q_1 \\overline{Z}q_2)=\\mathbb{E}(Z\\bar{Z})\\Re(q_1)q_2\\,.\n$$\nFor future reference, we insert explicitly the moments:\n\\begin{eqnarray}\n \\label{q1}\n\\mathbb{E}(Z q_1 Zq_2)&=&-2\\bar{q}_1q_2,\\\\\n\\label{q2}\n\\mathbb{E}(Z q_1 \\overline{Z}q_2)&=& 2 (q_1+\\bar{q}_1)q_2.\n\\end{eqnarray}\nBy linearity, these formulas imply\n\\begin{eqnarray}\n\\label{q3}\n\\mathbb{E}( \\Re(Z q_1) \\Re( \\overline{Z} q_2) ) &=& \\Re( q_1 q_2 ) ,\\\\\n\\label{q4}\n\\mathbb{E}( \\Re(Z q_1) \\Re(Z q_2 ) ) &=& \\Re( \\bar{q}_1 q_2 ).\n\\end{eqnarray}\n\n\\subsection{Moments}\nThe following is known 
as Wick's theorem \\cite{Wick-50}.\n\\begin{theoremA}[Isserlis \\cite{Isserlis-1918}]\n\\label{WickTHMR} If $(X_1,\\dots,X_{2n})$ is\nan $\\mathbb{R}^{2n}$-valued Gaussian random vector with mean zero, then\n\\begin{equation}\\label{R-Wick}\n E(X_1X_2\\dots X_{2n})=\\sum_{V}\\prod_{\\{j,k\\}\\in V} E(X_jX_k),\n\\end{equation}\nwhere the sum is taken over all pair partitions $V$ of\n$\\{1,2,\\dots,2n\\}$, i.e.,\npartitions into two-element sets, so each $V$ has the form\n$$V=\\left\\{\\{j_1,k_1\\},\\{j_2,k_2\\},\\dots,\\{j_n, k_n\\}\\right\\}.$$\n\\end{theoremA}\n\n\n Theorem \\ref{WickTHMR} is a consequence of the moments-cumulants relation \\cite{Leonov-Shirjaev-59};\n the connection is best visible in the partition formulation of \\cite{Speed-83}. For another proof,\n see \\cite[page 12]{Janson-97}.\n\n Our first goal is to extend this formula to certain quaternion Gaussian random variables.\nThe general multivariate quaternion Gaussian law is discussed in\n\\cite{Vakhania-99}. Here we will only consider a special setting of\nsequences that are drawn with repetition from a sequence of\nindependent standard Gaussian quaternion random variables. 
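Formula \\eqref{R-Wick} can be checked mechanically by enumerating pair partitions (an illustrative Python sketch, not part of the paper; for a single standard normal it reproduces the double-factorial moments $E(X^{2n})=(2n-1)!!$):

```python
def pair_partitions(elems):
    # Recursively enumerate all partitions of elems into pairs.
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in pair_partitions(remaining):
            yield [(first, partner)] + tail

def wick_moment(cov, indices):
    # E(X_{i_1} ... X_{i_2n}) = sum over pairings of covariance products.
    total = 0.0
    for pairing in pair_partitions(list(range(len(indices)))):
        prod = 1.0
        for a, b in pairing:
            prod *= cov[indices[a]][indices[b]]
        total += prod
    return total
```

For $\\mathrm{cov}=[[1]]$ this gives $E(X^4)=3$ and $E(X^6)=15$, matching the pairing counts $3$ and $15$.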
In\nSection \\ref{duality}\n we apply this result to a multi-matrix version of the\nduality between GOE and GSE ensembles of random matrices.\n\n\n\nIn view of the Wick formula \\eqref{R-Wick} for real-valued jointly Gaussian random\nvariables, formulas \\eqref{q1} and \\eqref{q2} allow us to compute\nmoments of certain products of quaternion Gaussian random variables.\nSuppose the $n$-tuple $(X_1,X_2,\\dots,X_{n})$\nconsists of random variables taken, possibly with repetition, from\nthe set\n $$\\{Z_1,\\bar{Z}_1,Z_2,\\bar{Z}_2,\\dots\\},$$\n where\n$Z_1,Z_2,\\dots$ are independent quaternion Gaussian random variables.\n Consider an auxiliary family of independent pairs $\\{(Y_{j}^{(r)},Y_{k}^{(r)}): r=1,2,\\dots\\}$ which have the same laws as\n $(X_j,X_k)$, $1\\leq j,k\\leq n$, and are independent for different $r$.\n Then the Wick formula for real-valued Gaussian variables implies $\\mathbb{E}(X_1X_2\\dots X_{n})=0$ for odd $n$, and\n \\begin{equation}\\label{Wick0}\n\\mathbb{E}(X_1X_2\\dots X_{n})=\\sum_{f}\\mathbb{E}(Y_1^{(f(1))}Y_2^{(f(2))}\\dots Y_{n}^{(f(n))}),\n \\end{equation}\n where the sum is over the pair partitions $V$ that appear under the sum in Theorem \\ref{WickTHMR}, each represented by the level sets of a two-to-one valued function\n $f:\\{1,\\dots,n\\}\\to\\{1,\\dots,m\\}$ for $n=2m$. 
(Thus the sum is over equivalence classes of $f$, each of whose $m!$ representatives contributes the same value.)\n\n\n For example, if $Z$ is quaternion Gaussian then\napplying \\eqref{Wick0} with $f_1$ that is constant, say $1$, on $\\{1,2\\}$,\n $f_2$ that is constant on $\\{1,3\\}$, and $f_3$ that is constant on $\\{1,4\\}$\nwe get\n$$\n\\mathbb{E}(Z^4)=\\mathbb{E}(Z^2) \\left(\\mathbb{E}(Z^2) + \\mathbb{E}(\\bar{Z}Z)+ \\mathbb{E}(\\bar{Z}^2)\\right)=0.\n$$\n\n\nFormulas \\eqref{q1} and \\eqref{q2} then show that the Wick reduction step takes the\nfollowing form:\n\\begin{equation}\\label{Wick1}\n \\mathbb{E}(X_1X_2\\dots X_n)=\\sum_{j=2}^n \\mathbb{E}(X_1X_j)\\mathbb{E}(U_j X_{j+1}\\dots X_n),\n\\end{equation}\nwhere\n$$\nU_j=\\begin{cases}\n \\Re(X_2\\dots X_{j-1}) & \\mbox{ if $X_j=\\bar{X}_1$ }\\\\\n \\bar{X}_{j-1}\\dots \\bar{X}_2 &\\mbox{ if $X_j={X}_1$}\\\\\n 0 &\\mbox{ otherwise} \\,.\n\\end{cases}\n$$\nThis allows the calculations to be carried out inductively, but due to noncommutativity\narriving at explicit answers may still require significant work.\n\n\n\nFormula \\eqref{Wick1} implies that $\\mathbb{E}(X_1X_2\\dots X_n)$ is real, so on the left hand side of \\eqref{Wick1} we can write\n$\\mathbb{E}(\\Re(X_1X_2\\dots X_n))$; this form of the formula will be associated with one-vertex M\\\"obius graphs.\n\nFurthermore, we have a Wick reduction which will correspond to multiple-vertex M\\\"obius graphs:\n\\begin{multline}\\label{Wick2}\n\\mathbb{E}( \\Re( X_1 ) \\Re( X_2 X_3 \\dots X_n ) ) \\\\=\n\\sum_{j=2}^n \\mathbb{E}(\\Re(X_1)\\Re(X_j))\\mathbb{E}( \\Re(X_2 \\dots X_{j-1} X_{j+1} \\dots X_n )).\n\\end{multline}\n(This is just a consequence of Theorem \\ref{WickTHMR}.)\n\n\n\n\nIn the next section we will show that formulae (\\ref{Wick1}) and\n(\\ref{Wick2}) give a method of computing the expected values of products of\nquaternionic Gaussian random variables by the enumeration of M\\\"obius\ngraphs partitioned by their Euler characteristic. 
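The moments above can also be verified exactly in software by writing $Z=a+b\,\mathbf{i}+c\,\mathbf{j}+d\,\mathbf{k}$ with commuting real components and evaluating expectations monomial by monomial. The sketch below is our own code; the component model $a,b,c,d$ i.i.d.\ $N(0,1)$ is an assumption chosen to match the moments $\mathbb{E}(\bar ZZ)=4$ and $\mathbb{E}(Z^2)=-2$ used in this paper:

```python
import math
from collections import defaultdict

def qmul(p, q):
    """Hamilton product of quaternions stored as (a, b, c, d)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(p):
    a, b, c, d = p
    return (a, -b, -c, -d)

def qadd(p, q):
    return tuple(x + y for x, y in zip(p, q))

# A quaternion-valued polynomial in commuting real Gaussian components,
# stored as {monomial: quaternion coefficient}; a monomial is a sorted
# tuple of component labels (i, 0..3) belonging to the variable Z_i.
def Z(i):
    """The quaternion Gaussian Z_i = a + b i + c j + d k, components N(0,1)."""
    return {((i, k),): tuple(1 if l == k else 0 for l in range(4))
            for k in range(4)}

def pmul(P, Q):
    R = defaultdict(lambda: (0, 0, 0, 0))
    for m1, c1 in P.items():
        for m2, c2 in Q.items():
            R[tuple(sorted(m1 + m2))] = qadd(R[tuple(sorted(m1 + m2))],
                                             qmul(c1, c2))
    return dict(R)

def pconj(P):
    return {m: qconj(c) for m, c in P.items()}

def expect(P):
    """E[P]: each component x ~ N(0,1) has E[x^k] = (k-1)!! (k even), 0 (odd)."""
    total = (0, 0, 0, 0)
    for mono, coeff in P.items():
        counts = defaultdict(int)
        for v in mono:
            counts[v] += 1
        w = 1
        for k in counts.values():
            if k % 2:
                w = 0
                break
            w *= math.prod(range(k - 1, 0, -2))
        total = qadd(total, tuple(w * x for x in coeff))
    return total
```

With these definitions, `expect(pmul(Z(1), pconj(Z(1))))` returns $(4,0,0,0)$, `expect(pmul(Z(1), Z(1)))` returns $(-2,0,0,0)$, and squaring `pmul(Z(1), Z(1))` before taking `expect` returns $(0,0,0,0)$, confirming the value $\mathbb{E}(Z^4)=0$ displayed above.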
This is analogous\nto similar results for complex Gaussian random variables and ribbon\ngraphs, and for real Gaussian random variables and M\\\"obius graphs.\nIn Section \\ref{duality} we will show that this\nresult implies the duality of the GOE and GSE\nensembles of Wigner random matrices and the duality of real and\nquaternionic Wishart random matrices.\n\n\n\n\\section{M\\\"obius graphs and quaternionic Gaussian moments}\\label{Sect2}\n\nIn this section we introduce M\\\"obius graphs and then give formulae\nfor the expected values of products of quaternionic Gaussian random\nvariables in terms of the Euler characteristics of sub-families of\nM\\\"obius graphs. This is an analogue of the method of 't Hooft\n\\cite{Bessis-Itzykson-Zuber-80,Goulden-Harer-Jackson-01, Goulden-Jackson-97,harer-zagier-86, Jackson-94, tHooft}. M\\\"obius graphs have also been used to give combinatorial\ninterpretations of the expected values of traces of matrices from the Gaussian\northogonal and Gaussian symplectic ensembles; see the articles\n\\cite{Goulden-Jackson-97} and \\cite{Mulase-Waldron-03}. The connection between M\\\"obius graphs and\nquaternionic Gaussian random variables is at the center of the work of Mulase\nand Waldron \\cite{Mulase-Waldron-03}.\n\n\n\n\n\\subsection{M\\\"obius graphs}\nM\\\"obius graphs are ribbon graphs where the edges (ribbons) are\nallowed to twist, that is, they either preserve or reverse the local\norientations of the vertices. As the convention is that the ribbons\nin \\textit{ribbon graphs} are not twisted, we follow \\cite{Mulase-Waldron-03} and call the unoriented\nvariety M\\\"obius graphs. The vertices of a M\\\"obius graph are\nrepresented as disks together with a local orientation; the edges are represented as\nribbons, which preserve or reverse the local orientations of the\nvertices connected by that edge. 
Next we identify the collection of\ndisjoint cycles of the sides of the ribbons found by following the\nsides of the ribbons and obeying the local orientations at each\nvertex. We then attach disks to each of these cycles by gluing the\nboundaries of the disks to the sides of the ribbons in the cycle.\nThese disks are called the faces of the M\\\"obius graph, and the\nresulting surface is the surface of maximal Euler\ncharacteristic on which the M\\\"obius graph may be drawn so that edges do not cross.\n\n\nDenote by $v(\\Gamma)$, $e(\\Gamma)$, and $f(\\Gamma)$ the number of vertices, edges, and faces of $\\Gamma$.\nWe say that the Euler characteristic of $\\Gamma$ is\n$$\n\\chi(\\Gamma)=v(\\Gamma)-e(\\Gamma)+f(\\Gamma).\n$$\nFor connected $\\Gamma$, this is also the maximal Euler characteristic of a connected surface into which\n$\\Gamma$ is embedded.\nFor example, in Fig. \\ref{F1}, the Euler characteristics are\n$\\chi_1=1-1+2=2$ and $\\chi_2=1-1+1=1$. The two graphs may be embedded\ninto the sphere or the projective sphere, respectively.\n\n\nIf $\\Gamma$ decomposes into connected components $\\Gamma_1$, $\\Gamma_2$, then\n$\\chi(\\Gamma)=\\chi(\\Gamma_1)+\\chi(\\Gamma_2)$.\n\nThroughout the paper our M\\\"obius graphs will have the following\nlabels attached to them: the vertices are labeled to make them\ndistinct; in addition, the edges emanating from each vertex are also\nlabeled, so that rotating any vertex produces a distinct graph. 
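The number of faces, and hence $\chi(\Gamma)$, can be computed mechanically. The sketch below (our own encoding, for the one-vertex case only) represents a M\"obius graph by chords pairing the darts $0,\dots,2n-1$ around the vertex, each with a twist flag, and traces the boundary components as orbits of (dart, side) flags:

```python
def faces(chords):
    """Count faces of a one-vertex Möbius graph.

    chords: list of (i, j, twisted) pairing the darts 0..2n-1.
    Boundary components are traced as orbits of flags (dart, side);
    each face is visited once per side, hence the division by 2.
    """
    size = 2 * len(chords)
    mate, twist = {}, {}
    for i, j, t in chords:
        mate[i], mate[j] = j, i
        twist[i] = twist[j] = t

    def step(flag):
        d, s = flag
        s2 = -s if twist[d] else s      # crossing a twisted ribbon flips sides
        return ((mate[d] + s2) % size, s2)  # then turn at the vertex

    todo = {(d, s) for d in range(size) for s in (1, -1)}
    orbits = 0
    while todo:
        start = todo.pop()
        cur = step(start)
        while cur != start:
            todo.remove(cur)
            cur = step(cur)
        orbits += 1
    return orbits // 2

def euler_char(chords):
    # chi = v - e + f with a single vertex, v = 1
    return 1 - len(chords) + faces(chords)
```

This reproduces $\chi_1=2$ and $\chi_2=1$ for the two graphs of Fig.~\ref{F1}, and gives $\chi=0$ for two crossing untwisted chords (the torus).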
These\nlabels may be removed if one wishes by rescaling all of our\nquantities by the number of automorphisms the unlabeled graph would\nhave.\n\n\\subsection{Quaternion version of Wick's theorem}\n\n\\tolerance=2000\nSuppose the $2n$-tuple $$(X_1,X_2,\\dots,X_{2n})$$\nconsists of random variables taken, possibly with repetition, from\nthe set $\\{Z_1,\\bar{Z_1},Z_2,\\bar{Z_2},\\dots\\}$ where\n$Z_1,Z_2,\\dots$ are independent quaternionic Gaussian.\nFix a sequence $j_1,j_2,\\dots,j_m$ of natural numbers such that $j_1+\\dots+j_m=2n$.\n\nConsider the family $\\mathcal{M}=\\mathcal{M}_{j_1,\\dots,j_m}(X_1,X_2,\\dots,X_{2n})$, possibly empty, of M\\\"obius graphs with $m$ vertices of degrees $j_1,j_2,\\dots,j_m$ with edges labeled by $X_1,X_2,\\dots,X_{2n}$,\nwhose regular edges correspond to pairs $X_i=\\bar{X}_j$ and flipped edges correspond to pairs $X_i=X_j$. No edges of $\\Gamma\\in \\mathcal{M}$ can join random variables $X_i,X_j$ that are independent.\n\\tolerance=1000\n\n\\begin{theorem}\n \\label{T quaternion moments}\nLet $\\left\\{ X_1, X_2, \\dots, X_{2n} \\right\\}$ be chosen, possibly\nwith repetition, from the set $\\{ Z_1, \\bar{Z_1}, Z_2, \\bar{Z_2},\n\\dots \\}$ where $Z_j$ are independent quaternionic Gaussian random\nvariables, then\n\\begin{multline}\n\\mathbb{E}\\big(\\Re(X_1X_2 \\dots X_{j_1})\\Re(X_{j_1+1}^{}\\dots X_{j_1+j_2}^{})\\times\\dots \\\\\\times\\Re(X_{j_1+j_2+\\ldots+j_{m-1}+1}^{}\\dots\n X_{2n}^{})\\big)=4^{n-m}\\sum_{\\Gamma\\in \\mathcal{M}} (-2)^{\\chi(\\Gamma)}.\n\\end{multline}\n(The right hand side is interpreted as $0$ when $\\mathcal{M}=\\emptyset$.)\n\\end{theorem}\n\n\\begin{remark}\n We would like to emphasize that in computing the Euler characteristic one must first break the graph into\n connected components. 
For example, if $j_1=\\dots=j_m=1$ so that $m=2n$ is even,\nand $X_1, X_2, \\dots, X_{2n}$ are $n$ independent pairs, as the real\nparts are commutative we may assume that $X_{2k} = X_{2k-1}$, and\nthe moment is\n$$ 1=\\mathbb{E}( \\Re (X_1) \\Re (X_2) \\dots \\Re (X_{2n}) ) = 4^{-n} (-2)^{\\chi(\\Gamma)}. $$\nWe see that graphically $\\Gamma$\n is a collection of $2n$ degree one vertices connected together forming $n$\n dipoles (an edge with a vertex on either end).\n Hence there are $n$ connected components each of Euler characteristic $2$,\n therefore the total Euler characteristic is $\\chi = 2 n $ giving\n$$ 4^{-n} (-2)^\\chi= 4^{-n} 4^{n} = 1. $$\n\\end{remark}\n\n\\begin{proof}[Proof of Theorem \\ref{T quaternion moments}]\nIn view of \\eqref{Wick0} and \\eqref{Wick1}, it suffices to show that if $X_1,\\dots,X_{2n}$ consists of $n$ independent pairs, and each pair is either of the form $(X,X)$ or $(X,\\bar{X})$, then\n\\begin{multline}\n\\label{star2}\n \\mathbb{E}\\big(\\Re(X_1X_2 \\dots X_{j_1})\\Re(X_{j_1+1}^{}\\dots X_{j_1+j_2}^{})\\times\\dots \\\\\\times\\Re(X_{j_1+j_2+ \\dots+\n j_{m-1}+1}^{}\\dots\n X_{2n}^{})\\big)= 4^{n-m}(-2)^{\\chi(\\Gamma)}\\;,\n\\end{multline}\nwhere $\\Gamma$ is the M\\\"obius graph that describes the pairings of the sequence.\n\nFirst we check the two M\\\"obius graphs for\n$n=1$, $m=1$:\n\\[ \\mathbb{E}( \\Re( X \\bar{X}) ) = (-2)^2 \\,, \\quad \\mbox{and}\\quad \\mathbb{E}( \\Re(X X))\n= (-2)^1 \\,.\\]\nOne checks that these correspond to the M\\\"obius graphs in Figure \\ref{F1},\nwhich gives a sphere ($\\chi=2$) and projective sphere ($\\chi=1$) respectively.\n\\begin{figure}[htb]\n \\includegraphics[width=4in]{fig1}\n \\caption{The two possible M\\\"obius graphs with a single degree 2 vertex. The left hand one is a ribbon which is untwisted and the graph embeds into a copy of the Riemann sphere, while the\n right hand one is a ribbon which is twisted and the graph embeds into a copy of the projective sphere. 
\n \\label{F1}}\n\\end{figure}\n\n\nWe now proceed with the induction step.\n One notes that by independence of the pairs at different edges, the\nleft hand side of \\eqref{star2} factors into the product corresponding to\nconnected components of $\\Gamma$. It is therefore enough to consider\nconnected $\\Gamma$. \\label{tmp1}\n\nIf $\\Gamma$ has two vertices that are joined by an edge, we can use cyclicity\nof $\\Re$ to move the variables that label the edge to the first positions in\ntheir cycles, say $X_1$ and $X_{j_1+1}$, and use \\eqref{q3} or \\eqref{q4} to\neliminate this pair from the product. The use of relation (\\ref{q3})\namounts to gluing the two vertices together, removing the edge $x$\nwhich is labeled by the two appearances of $Z$. Relation (\\ref{q4}) glues\ntogether the two vertices, removing\nthe edge $x$, and the reversal of orientation across the edge is\ngiven by the conjugate (see Figure \\ref{F4}).\nThese geometric operations reduce $n$ and $m$ by one without changing the\nEuler characteristic:\nthe number of edges and the number of vertices are reduced by 1; the faces are\npreserved -- in the case of the edge flip\nin Fig. \\ref{F4}, the edges of the face from which we remove the edge\nfollow the same order after the reduction.\n\n\n\n Therefore we will only need to prove the\nresult for the single vertex case of the induction step.\n\n\\begin{figure}[htb]\n\\includegraphics[height=4in]{fig2}\n \\caption{A M\\\"obius graph with two vertices connected by a ribbon may be reduced to a M\\\"obius graph with one less vertex and one less edge.\nIn these graphs the ``$\\dots$'' indicate that there are arbitrary\nnumbers of other edges at the vertex, and the edges drawn are\nconnected to other vertices.\n The top graph is an example of this reduction when the connecting ribbon is untwisted; in this case the two vertices are glued together with no other changes in the ribbons. 
The bottom graph is an example of this reduction when the connecting ribbon is twisted; in this case the two vertices are glued together and the order and orientation (twisted or untwisted) of the ribbons on one side are reversed.\n\\label{F4}}\n\\end{figure}\n\n\nWe wish to show that\n\\begin{equation} \\label{star3}\n \\mathbb{E}( X_1 X_2 X_3 X_4 \\dots X_{2n}) =\n(-2)^{\\chi({\\Gamma})} 4^{n-1} \\,,\n\\end{equation}\nwhere $\\Gamma$ is a one-vertex M\\\"obius graph with arrows (half edges)\n labeled by\n$X_k$.\nWe will do this by induction; there are two cases:\n\\begin{enumerate}\n\n\\item[\\bf Case 1:] $X_1 = \\bar{X}_j$ for $1< j \\leq 2n $,\n\\begin{align} \\nonumber\n\\mathbb{E}( X_1 X_2 \\dots X_{j-1} \\bar{X}_1 X_{j+1} \\dots X_{2n}) &=\n\\mathbb{E}( X_1 \\bar{X}_1 ) \\mathbb{E}( \\Re( X_2 \\dots X_{j-1} ) \\Re( X_{j+1} \\dots X_{2n}) ) \\\\\n\\label{oriented_reduction}\n&= 4 \\mathbb{E}( \\Re( X_2 \\dots X_{j-1}) \\Re( X_{j+1} \\dots X_{2n}) ) \\,.\n\\end{align}\nThis corresponds to the reduction of the M\\\"obius graph pictured\nin Figure \\ref{orient-reduct}, which splits the single vertex\ninto two vertices. The Euler characteristic becomes $\\chi_2 = 2 - (n-1) + f_1\n= \\chi_1 +2 $.\nBy the induction assumption we find\n\\begin{equation*}\n\\mathbb{E}( X_1 \\dots X_{2n}) = 4 \\left[ 4^{(n-1) - 2} (-2)^{\\chi_2} \\right]\n= 4^{n-2} (-2)^{\\chi_1 +2} = 4^{n-1} (-2)^{\\chi_1} \\,.\n\\end{equation*}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=8cm]{orient-reduct}\n\\end{center}\n\\caption{ Here we have an untwisted ribbon of the M\\\"obius graph\nreturning to the same vertex; this edge is removed in our reduction\nprocedure, giving a M\\\"obius graph with one more vertex and one\nless edge. 
\\label{orient-reduct}}\n\\end{figure}\n\n\\item[\\bf Case 2:] $X_1 = X_j$ for $1 < j \\leq 2n$,\n\\begin{align}\\nonumber\n\\mathbb{E}( X_1 X_2 \\dots X_{j-1} X_1 X_{j+1} \\dots X_{2n}) &=\n\\mathbb{E}( X_1 X_1 ) \\mathbb{E}( \\bar{X}_{j-1} \\dots \\bar{X}_2 X_{j+1} \\dots X_{2n} )\n\\\\ \\label{unoriented-reduction}\n&= (-2) \\mathbb{E}( \\bar{X}_{j-1} \\dots \\bar{X}_2 X_{j+1} \\dots X_{2n} ) \\,.\n\\end{align}\nThis corresponds to the reduction of the M\\\"obius graph pictured\nin Figure \\ref{unorient-reduct}, which keeps the single vertex\nand flips the order and orientation of the edges between $X_1$ and $X_j$.\nThe Euler characteristic becomes $\\chi_2 = 1 - (n-1) + f_1 = \\chi_1 + 1$.\nBy the induction assumption we find\n\\begin{equation*}\n\\mathbb{E}( X_1 \\dots X_{2n}) = (-2) \\left[ 4^{(n-1) -1} (-2)^{\\chi_2} \\right]\n= (-2)^{-1} 4^{n-1} (-2)^{\\chi_1 + 1} = 4^{n-1} (-2)^{\\chi_1}\\,.\n\\end{equation*}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=8cm]{unorient-reduct}\n\\end{center}\n\\caption{ Here we have a twisted ribbon of the M\\\"obius graph\nreturning to the same vertex, this edge is removed in our reduction\nprocedure giving us a M\\\"obius graph with one less edge and giving a\nreverse in both the order and orientation (twisted or untwisted) of\nthe ribbons on one side of the removed ribbon.\n\\label{unorient-reduct}}\n\\end{figure}\n\n\n\n\\end{enumerate}\n\n\nNote that taking away an oriented ribbon creates a new vertex. The\nremaining graph might still be connected, or it may split into two\ncomponents. If taking away a loop makes the graph disconnected, then\nthe counts of changes to edges and vertices are still the same. But\nthe faces need to be counted as follows: the inner face of the\nremoved edge becomes the outside face of one component, and the\nouter face at the removed edge becomes the outer face of the other\ncomponent. 
Thus the counting of faces is not affected by\nwhether the graph is connected.\n\nWith these two cases checked, by the induction hypothesis, the\nproof is completed.\n\n\\end{proof}\n\n\n\n\\subsection{Bipartite M\\\"obius graphs and quaternionic Gaussian moments} \\label{Sect bipartite}\nTo deal with quaternionic Wishart random matrices, we need to consider a special subclass of quaternionic\nGaussian variables from Theorem \\ref{T quaternion moments}.\nSuppose the $2n$-tuple $(X_{\\pm 1},X_{\\pm 2},\\dots,X_{\\pm n})$\nconsists of $n$ pairs of random variables taken with repetition, from\nthe set $\\{Z_1,Z_2,\\dots,Z_n\\}$ of independent quaternionic Gaussian random variables. Note that in contrast to the setup for Theorem \\ref{T quaternion moments}, here all $Z$'s are without conjugation.\nFix a sequence $j_1,j_2,\\dots,j_m$ of natural numbers such that $j_1+\\dots+j_m=n$.\nTheorem \\ref{T quaternion moments} then says that\n\\begin{align*}\n \\mathbb{E}\\big( \\Re( \\bar{X}_{-1} X_1 \\cdots \\bar{X}_{-j_1} X_{j_1})\n\\times \\Re( \\bar{X}_{-j_1-1} X_{j_1 +1} \\cdots \\bar{X}_{-j_1-j_2} X_{j_1+j_2} )\n\\times \\cdots\n\\\\ \\dots \\times\n\\Re( \\bar{X}_{-j_1-j_2-\\dots -j_{m-1}-1} \\cdots \\bar{X}_{-n} X_{n} )\\big)\n\\\\ = 4^{n-m} (-2)^{\\chi(\\Gamma)},\n\\end{align*}\nwhere $\\Gamma$ is the M\\\"obius graph with $m$ vertices and with edges labeled by $X_{\\pm 1},\\dots,X_{\\pm n}$ that describes the\npairings between the variables under the expectation. Our goal is to show that the same formula holds true for another graph, a bipartite M\\\"obius graph whose edges are labeled by $n$ pairs $(X_{-j},X_j)$, $1\\leq j \\leq n$.\n\nThe bipartite M\\\"obius graph has two types of vertices: black vertices and white (or later, colored) vertices with ribbons that can only connect a black vertex to a white vertex. (As previously, the ribbons may carry a ``flip\" of orientation which we represent graphically as a twist.) 
To define this graph, we need to introduce three pair partitions on the set $\\{\\pm 1,\\dots,\\pm n\\}$.\nThe first partition, $\\delta$, pairs $j$ with $-j$. The second partition, $\\sigma$, describes the placement of $\\Re$: its pairs are\n\\begin{align*}\n\\{1,-2\\},\\{2,-3\\},\\dots,\\{j_1,-1\\},\\\\\n\\{j_1+1, -(j_1+2)\\},\\dots, \\{j_1+j_2,-(j_1+1)\\},\\\\\n\\vdots\\\\\n\\{1+\\sum_{k=1}^{m-1}j_k,-(2+\\sum_{k=1}^{m-1}j_k)\\},\\dots, \\{n,-(1+\\sum_{k=1}^{m-1}j_k)\\}.\n\\end{align*}\nThe third partition, $\\gamma$, describes the choices of pairs from $Z_1,\\dots,Z_n$. Thus\n$\n\\{j,k\\}\\in\\gamma\n$\nif $X_j=X_k$ when $jk>0$ or $X_j=\\bar{X}_k$ if $jk<0$.\n\nWe will also represent these pair partitions as graphs with vertices arranged in\ntwo rows, and\nwith the edges drawn between the vertices in each pair of a partition.\nThus\n\\begin{equation}\n \\label{eq:delta}\n \\delta=\\begin{matrix}\n \\xymatrix @-1pc{\n {^1_ \\bullet} \\ar@{-}[d]& {^2_\\bullet}\n \\ar@{-}[d]& \\dots & {^n_\\bullet} \\ar@{-}[d]\\\\\n {^{\\hspace{1.5mm}\\bullet}_{-1}} &{^{\\hspace{1.5mm}\\bullet}_{-2}} &\\dots &\n{^{\\hspace{1.5mm}\\bullet}_{-n}} \\\\\n\n}\n\\end{matrix}\n\\end{equation}\nand\n{\\small\n$$\n\\sigma=\\begin{matrix}\n \\xymatrix @-1pc{\n {^1_ \\bullet} \\ar@{-}[dr]& {^2_\\bullet} \\ar@{-}[dr]& {^3_\\bullet}\n & \\dots & {^{j_1}_\\bullet}\\ar@{-}[dllll] & {^{j_1+1}_{\\hspace{2.5mm}\\bullet}}\\ar@{-}[dr] & {^{j_1+2}_{\\hspace{3mm}\\bullet}}&\\dots & {^{j_1+j_2}_{\\hspace{3.7mm}\\bullet}}\\ar@{-}[dlll]&\n {^{j_1+j_2+1}_{\\hspace{5mm}\\bullet}}&\\dots \\\\\n {^{\\hspace{1.5mm}\\bullet}_{-1}} &{^{\\hspace{1.5mm}\\bullet}_{-2}} &{^{\\hspace{1.5mm}\\bullet}_{-3}}&\\dots &\n{^{}_\\bullet}& {^{}_\\bullet} & {^{}_\\bullet}&\\dots&{^{}_\\bullet}&{^{}_\\bullet}&\\dots\n}\n\\end{matrix}\n$$\n}\n\nConsider the 2-regular graphs $\\delta\\cup \\gamma$ and $\\delta\\cup \\sigma$. 
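Since black vertices will be assigned to the cycles of $\delta\cup\sigma$, one expects exactly $m$ such cycles, one per block $j_k$; this is easy to confirm in code. The sketch below (our own encoding of the pair partitions as involutions of $\{\pm1,\dots,\pm n\}$) counts the cycles of the union of two such partitions:

```python
def delta(n):
    """The pairing j <-> -j on {±1, ..., ±n}."""
    return {k: -k for k in range(-n, n + 1) if k != 0}

def sigma(js):
    """The pairing {a, -(a+1)}, cyclic within each block of sizes js."""
    pairing, start = {}, 0
    for j in js:
        block = list(range(start + 1, start + j + 1))
        for idx, a in enumerate(block):
            b = -block[(idx + 1) % j]
            pairing[a], pairing[b] = b, a
        start += j
    return pairing

def cycles(p, q):
    """Number of cycles of the 2-regular graph p ∪ q (alternating edges)."""
    seen, total = set(), 0
    for x in p:
        if x in seen:
            continue
        total += 1
        y, use_p = x, True
        while y not in seen:
            seen.add(y)
            y = p[y] if use_p else q[y]
            use_p = not use_p
    return total
```

For example, `cycles(sigma([3, 2]), delta(5))` returns $2$, matching the $m=2$ blocks of sizes $j_1=3$, $j_2=2$.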
We orient the cycles of these graphs by ordering $(-j,j)$ on the left-most vertical edge of the cycle.\nFor example,\n\n{\\small $$\n\\sigma\\cup\\delta=\\begin{matrix}\n \\xymatrix @-1pc{\n {^1_ \\bullet} \\ar@{-}[dr]& {^2_\\bullet} \\ar@{-}[dr]& {^3_\\bullet}\n & \\dots & {^{j_1}_\\bullet}\\ar@{-}[dllll] & {^{j_1+1}_{\\hspace{2.5mm}\\bullet}}\\ar@{-}[dr] & {^{j_1+2}_{\\hspace{3mm}\\bullet}}&\\dots & {^{j_1+j_2}_{\\hspace{3.7mm}\\bullet}}\\ar@{-}[dlll]& {^{j_1+j_2+1}_{\\hspace{5mm}\\bullet}}&\\dots \\\\\n {^{\\hspace{1.5mm}\\bullet}_{-1}}\\ar@{->}[u] &{^{\\hspace{1.5mm}\\bullet}_{-2}}\\ar@{-}[u] &{^{\\hspace{1.5mm}\\bullet}_{-3}}\\ar@{-}[u]&\\dots &\n{^{}_\\bullet}\\ar@{-}[u]& {^{}_\\bullet} \\ar@{->}[u]& {^{}_\\bullet}\\ar@{-}[u]&\\dots&{^{}_\\bullet}\\ar@{-}[u]& {^{}_\\bullet} \\ar@{->}[u]&\\dots\n}\n\\end{matrix}\n$$\n}\nWe now define the bipartite M\\\"obius graph by assigning black vertices to the $m$ cycles of $\\delta\\cup \\sigma$, and white vertices to the cycles of $\\delta\\cup \\gamma$.\n\nEach black vertex is oriented counter-clockwise. For each black vertex we follow the cycle of $\\delta\\cup \\sigma$, drawing a labeled line for each element of the partition. The lines corresponding to $-j,j$ are adjacent, and will eventually become two edges of a ribbon.\n\nEach white vertex is oriented, say, clockwise; we identify the graphs that differ only by\na choice of orientation at some of the white vertices.\nFor each white vertex, we follow the corresponding cycle of $\\delta\\cup \\gamma$, drawing a labeled line for each element of the partition. The lines corresponding to $-j,j$ are adjacent, but may appear in two different orders depending on the orientation of the corresponding edge of $\\delta$ on the cycle.\n\nThe final step is to connect pairs $(j,-j)$ on the black vertices with the same pairs on the white vertices. 
This creates the ribbons, which carry a flip if the orientation of the two lines\non the black vertex does not match the orientation of the same edges on the white vertex.\n\n\n\\begin{figure}\n\\begin{center}\\includegraphics[width=6cm]{black-white}\n\\end{center}\n\\caption{\\label{black_white} Representation of a bipartite M\\\"obius\ngraph; the edges drawn are ribbons that are either twisted or\nuntwisted. }\n\\end{figure}\nThe individual edges pictured in Figure \\ref{black_white} are ribbons and are\nlabeled as in Figure \\ref{black-white-2}.\n\\begin{figure}\n\\begin{center}\\includegraphics[width=5cm]{black-white-2}\\end{center}\n\\caption{\\label{black-white-2} Example of the labeling we use for\nthe edges emanating from a black vertex in a bipartite M\\\"obius\ngraph. }\n\\end{figure}\n We allow twists of ribbons to\npropagate through a white vertex, and we call the two resulting bipartite M\\\"obius graphs\nequivalent.\n\n\n\nSuppose the $2n$-tuple $(X_{\\pm 1},X_{\\pm 2},\\dots,X_{\\pm n})$\nconsists of random variables taken, possibly with repetition, from\nthe set $\\{Z_1,Z_2,\\dots,Z_n\\}$ of independent quaternionic Gaussian random variables.\nLet $\\mathcal{M}=\\mathcal{M}(X_{\\pm 1},X_{\\pm 2},\\dots,X_{\\pm n})$ denote the set of all bipartite M\\\"obius graphs $\\Gamma$\nthat correspond to various ways of pairing all repeated $Z$'s in the sequence\n$(X_{\\pm 1},X_{\\pm 2},\\dots,X_{\\pm n})$; the pairs are given by adjacent half edges at each white\nvertex. (See the preceding construction.) 
$\\mathcal{M}=\\emptyset$ if there is a $Z_j$ that is repeated\nan odd number of times.\n\\begin{theorem} \\label{thm2.1}\n\\begin{align*}\n \\mathbb{E}\\big( \\Re( \\bar{X}_{-1} X_1 \\cdots \\bar{X}_{-j_1} X_{j_1})\n\\times \\Re( \\bar{X}_{-j_1-1} X_{j_1 +1} \\cdots \\bar{X}_{-j_1-j_2} X_{j_1+j_2} )\n\\times \\cdots\n\\\\ \\dots \\times\n\\Re( \\bar{X}_{-j_1-j_2-\\dots -j_{m-1}-1} \\cdots \\bar{X}_{-n} X_{n} )\\big)\n\\\\ = 4^{n-m}\\sum_{\\Gamma\\in\\mathcal{M}} (-2)^{\\chi(\\Gamma)}\\;,\n\\end{align*}\n(The right hand side is interpreted as $0$ when $\\mathcal{M}=\\emptyset$.)\n\\end{theorem}\n\n\n\n\\begin{proof}\nThe proof is fundamentally the same as that of the Wigner version of this\ntheorem.\nIn view of \\eqref{Wick0} and \\eqref{Wick1}, it suffices\n to consider $\\{ X_{\\pm 1}, \\dots X_{\\pm n} \\}$ that form $n$ independent pairs, and show that\n\\begin{multline}\\label{WWW314}\n \\mathbb{E}\\big( \\Re( \\bar{X}_{-1} X_1 \\cdots \\bar{X}_{-j_1} X_{j_1})\n\\times \\Re( \\bar{X}_{-j_1-1} X_{j_1 +1} \\cdots \\bar{X}_{-j_1-j_2} X_{j_1+j_2} )\n\\times \\cdots\n\\\\ \\dots \\times\n\\Re( \\bar{X}_{-j_1-j_2-\\dots -j_{m-1}-1} \\cdots \\bar{X}_{-n} X_{n} )\\big)\n = 4^{n-m} (-2)^{\\chi(\\Gamma)}\\;,\n\\end{multline}\nwhere $\\Gamma$ is the bipartite M\\\"obius graph that describes the\npairings.\n\nWe will prove \\eqref{WWW314} by induction; to that end we first check\nthat with $n=1, m=1$\nwe have $\\mathbb{E}(\\bar{X}X)=(-2)^2$ in agreement with \\eqref{WWW314}.\n\nIf $\\Gamma$ has two black vertices connected together by edges adjacent at a white vertex,\nwe can use the cyclicity of $\\Re$ to move the variables that label the\nrespective edges and share the same face to\nthe first position in their cycles, so that we may call them $\\bar{X}_{-1}$\nand either $X_j$ or $\\bar{X}_{-j}$.\nWe now use relations (\\ref{q3}) and (\\ref{q4}) to eliminate the pair from the\nproduct:\n\\begin{equation*}\n\\mathbb{E}\\left( \\Re( \\bar{X}_{-1} \\cdots X_{j_1}) \\Re( X_j 
\\dots X_{j_1+j_2} \\bar{X}_{-j} )\\right)\n= \\mathbb{E}( X_1 \\cdots X_{j_1+j_2} \\bar{X}_{-j})\\,,\n\\end{equation*}\nor\n\\begin{equation*}\n\\mathbb{E}\\left( \\Re( \\bar{X}_{-1} \\cdots X_{j_1}) \\Re( \\bar{X}_j \\dots X_{j_1+j_2} )\\right)\n= \\mathbb{E}( \\bar{X}_{j_1} X_{-j_1} \\cdots \\bar{X}_1 X_j \\cdots X_{j_1+j_2})\\,.\n\\end{equation*}\nThe use of relation (\\ref{q3}) corresponds to that of gluing together the two\nribbons along the halves adjacent at the white vertex, and gluing together the\ncorresponding black vertices (see Figure \\ref{two-gluing-orient}).\nThe use of relation (\\ref{q4}) corresponds to the same gluing, but in this\ncase one of the ribbons has an orientation reversal in it, resulting in an\norientation reversal for the remaining sides (see Figure\n\\ref{two-gluing-unorient}).\nThese geometric operations reduce $n$ and $m$ by one without changing the\nEuler characteristic: both the number of edges and the number of vertices are\nreduced by one while the number of faces is preserved.\n\n\\begin{figure}\n\\includegraphics[width=12cm]{two-gluing-orient}\n\\caption{ Here two black vertices are connected together through\nuntwisted edges adjacent at a white vertex. This bipartite M\\\"obius\ngraph reduces to one with one less vertex and one less edge. The\nreduction is found by gluing the two black vertices together and\ngluing the two ribbons together along their adjacent sides, here\nlabeled by $\\bar{X}_{-1}$ and $X_j$. The same reduction would apply\nif the two edges were twisted as we could pass this twist through\nthe white vertex. \\label{two-gluing-orient}}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=12cm]{two-unorient}\n\\caption{ Here two black vertices are connected together through one\ntwisted edge and one untwisted edge adjacent at a white vertex.\nThis bipartite M\\\"obius graph reduces to one with one less vertex\nand one less edge. 
The reduction is found by gluing the two black\nvertices together, gluing the two ribbons together along the sides\nadjacent at the white vertex, and reversing both the order and\norientations of the remaining ribbons on the second black vertex.\n\\label{two-gluing-unorient}}\n\\end{figure}\n\nTherefore we will only need to prove the result for the single black vertex case of\nthe induction step.\nWe wish to show that\n\\begin{equation}\n\\mathbb{E}( \\bar{X}_{-1} X_1 \\bar{X}_{-2} X_2 \\cdots \\bar{X}_{-n} X_n ) = 4^{n-1}\n(-2)^{\\chi(\\Gamma)}\\;,\n\\end{equation}\nwhere $\\Gamma$ is a bipartite M\\\"obius graph with a single black vertex and\nhalf ribbons labeled by $X_{\\pm k}$. We will\ndo this by induction; there are two cases:\n\\begin{enumerate}\n\n\\item[\\bf Case 1:] $X_{-1} = X_j$ for $1 \\leq j \\leq n$,\n\\begin{align*}\n\\mathbb{E}( \\bar{X}_{-1} X_1 \\cdots \\bar{X}_{-j} X_j \\cdots X_n ) &=\n\\mathbb{E}( \\bar{X}_{-1} X_j ) \\mathbb{E}( \\Re( X_1 \\cdots \\bar{X}_{-j} ) \\Re( \\bar{X}_{-j-1}\nX_{j+1} \\cdots X_n)) \\\\\n&= 4 \\mathbb{E}(\\Re( \\bar{X}_{-j} X_1 \\cdots X_{j-1} ) \\Re( \\bar{X}_{-j-1} X_{j+1}\n\\cdots X_n) ) \\,.\n\\end{align*}\nThis corresponds to the reduction of the bipartite M\\\"obius graph pictured in Figure\n\\ref{reduce-1}, which for $j>1$ splits the single black vertex into two black vertices,\nand glues the two edges labeled as $\\bar{X}_{-1}$ and $X_j$ together.\n\nThe edges $(1,-1,j,-j)$ are adjacent at the white vertex and appear either in this order, or in the reverse order. 
Thus after the removal of $\\{-1,j\\}$ we get an ordered pair of labels $(1,-j)$ or $(-j,1)$ to glue back into a ribbon.\nOn the black vertex, due to our conventions the edges of the ribbons appear in the following order\n $$((-1,1), (-2, 2),\\dots, (-j,j), (-k_1,k_1),\\dots,(-k_r,k_r)).$$\n Once we split the black vertex into two vertices with the edges of ribbons given by\n$((-1,1), (-2, 2),\\dots, (-j,j))$ and $((-k_1,k_1),\\dots,(-k_r,k_r))$, the removal of $\\{-1,j\\}$ creates a new pair $(1,-j)$ which we use to create the ribbon to the white vertex.\n[After this step, we relabel all edges to use again $\\pm 1,\\pm 2,\\dots$ consecutively.]\n\n\nWe note that the number of faces of the new graph is the same as the previous graph -- the face with the sequence of edges $$(\\dots, X_{k_r},\\bar{X}_{-1},X_j,\\bar{X}_{-k_1},\\dots)$$ becomes the face with edges $(\\dots, X_{k_r},\\bar{X}_{-k_1},\\dots)$ on the new graph.\n The\nEuler characteristic becomes $\\chi_2 = (v_1 + 1) - (e_1 - 1) + f_1 = \\chi_1 +\n2$ where $v_1, e_1$ and $f_1$ are the number of vertices, edges, and faces of\n$\\Gamma$.\nBy the induction assumption we then find\n\\begin{equation*}\n\\mathbb{E}( \\bar{X}_{-1} X_1 \\cdots X_n) = 4 \\left[ 4^{(n-1) - 2} (-2)^{\\chi_2}\n\\right] = 4^{n-2} (-2)^{\\chi_1 + 2} = 4^{n-1} (-2)^{\\chi_1} \\,.\n\\end{equation*}\n\n\n\\begin{figure}\n\\includegraphics[width=9cm]{reduce-1}\n\\caption{ Here we have a black vertex with two ribbons, both twisted\nor both untwisted, adjacent at the same white vertex. The reduction\nglues these two ribbons together along their common side. The\nresult is a bipartite M\\\"obius graph with one more vertex and one\nless edge. The resulting graph may or may not be disconnected at\nthis point. 
\\label{reduce-1}}\n\\end{figure}\n\n\n\\item[\\bf Case 2:] $X_{-1} = X_{-j}$ for $1 < j \\leq n$,\n\\begin{multline*}\n\\mathbb{E}( \\bar{X}_{-1} X_1 \\cdots \\bar{X}_{-j} X_j \\cdots X_n ) \\\\=\n\\mathbb{E}( \\bar{X}_{-1} \\bar{X}_{-j} ) \\mathbb{E}( \\bar{X}_{j-1} X_{-j+1} \\cdots \\bar{X}_1\nX_j \\bar{X}_{-j-1} X_{j+1} \\cdots X_n) \\\\\n= (-2) \\mathbb{E}( \\bar{X}_{j-1} X_{-j+1} \\cdots \\bar{X}_1\nX_j \\bar{X}_{-j-1} X_{j+1} \\cdots X_n) \\,.\n\\end{multline*}\nThis corresponds to the reduction of the M\\\"obius graph pictured in Figure\n\\ref{reduce-2}, which switches the order and the orientations of the\nribbons on one side of the\nblack vertex from the $\\pm 1$ and $\\pm j$ ribbons and glues the two\nedges adjacent at the white vertex together as shown.\nAs previously, the removed edges are adjacent at the white vertex. At the black vertex, the labeled lines for the construction of the bipartite graph change from the sequence\n$$\n(-1,1),(-2,2),\\dots,(-j+1,j-1),(-j,j),(-k_1,k_1),\\dots,(-k_r,k_r)\n$$\nto the sequence\n$$\n(j-1,-j+1),\\dots,(2,-2),(1,j),(-k_1,k_1),\\dots,(-k_r,k_r)\n$$\nwhich then needs to be relabeled to use $\\pm 1,\\dots,\\pm n$. Again the number of faces on the bipartite graphs is preserved:\nthe face with edges\n$$(\\dots, 2, -1, -j,k_1,\\dots)$$\nbecomes the face\n$$\n(\\dots,2,1,j,k_1,\\dots).\n$$\nThe Euler\ncharacteristic becomes $\\chi_2 = v_1 - (e_1 -1) + f_1 = \\chi_1 + 1$. By the\ninduction assumption we then find\n\\begin{equation*}\n\\mathbb{E}( \\bar{X}_{-1} X_1 \\cdots X_n) = (-2) \\left[ 4^{(n-1) - 1} (-2)^{\\chi_2}\n\\right] = (-2) 4^{n-2} (-2)^{\\chi_1 + 1} = 4^{n-1} (-2)^{\\chi_1} \\,.\n\\end{equation*}\n\n\n\\begin{figure}\n\\includegraphics[width=9cm]{reduce-2}\n\\caption{ Here we have a black vertex with two ribbons, one twisted\nand the other untwisted, adjacent at the same white vertex. 
The\nreduction glues these two ribbons together along their common side.\nThe result is a bipartite M\\\"obius graph with one less edge, and\nwith the ribbons on one side of the removed ribbon now with reversed\norder and orientations. \\label{reduce-2}}\n\\end{figure}\n\n\\end{enumerate}\n\nWith these two cases checked, by the induction hypothesis, the proof is\ncompleted.\n\n\\end{proof}\n\nOne should note that this is fundamentally the same proof as in the Wigner\ncase; however, here the geometric reduction is given by gluing together\ntwo ribbons, while in the Wigner case the geometric reduction is the\nelimination of one ribbon at a time. The inductive steps remain\nthe same.\n\n\n\n\n\n\n\n\n\n\\section{\\label{duality} Duality between real and symplectic ensembles}\nBy $\\mathcal{M}_{M\\times N}(\\mathbb{H})$ we denote the set of all $M\\times N$ matrices with entries from $\\mathbb{H}$.\nFor $\\mathbf A\\in \\mathcal{M}_{M\\times N}(\\mathbb{H})$, the adjoint matrix is $A^*_{i,j}:=\\overline{A}_{j,i}$. 
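The duality formulas of this section can be spot-checked numerically. As a self-contained sketch (our own code and naming; it assumes the GOE normalization used here, off-diagonal variance $1$ and diagonal variance $2$, i.e.\ ${\rm Cov}(Z_{ab},Z_{cd})=\delta_{ac}\delta_{bd}+\delta_{ad}\delta_{bc}$), the moment $\mathbb{E}\,{\rm tr}(\mathbf Z^4)$ computed entrywise by the real Wick formula matches $N\sum_\Gamma N^{\chi(\Gamma)}$ over the twelve labeled one-vertex M\"obius graphs of degree $4$:

```python
from itertools import product

def faces(chords):
    """Faces of a one-vertex Möbius graph: trace orbits of (dart, side) flags."""
    size = 2 * len(chords)
    mate, twist = {}, {}
    for i, j, t in chords:
        mate[i], mate[j] = j, i
        twist[i] = twist[j] = t
    def step(flag):
        d, s = flag
        s2 = -s if twist[d] else s
        return ((mate[d] + s2) % size, s2)
    todo = {(d, s) for d in range(size) for s in (1, -1)}
    orbits = 0
    while todo:
        start = todo.pop()
        cur = step(start)
        while cur != start:
            todo.remove(cur)
            cur = step(cur)
        orbits += 1
    return orbits // 2

def graph_sum(N):
    """N^{n-m} * sum of N^chi over labeled one-vertex Möbius graphs, degree 4."""
    total = 0
    pairings = [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(0, 3), (1, 2)]]
    for pairing in pairings:
        for t1, t2 in product([False, True], repeat=2):
            chords = [pairing[0] + (t1,), pairing[1] + (t2,)]
            chi = 1 - 2 + faces(chords)   # one vertex, two edges
            total += N ** chi
    return N ** (2 - 1) * total           # here n = 2, m = 1

def goe_tr4(N):
    """E tr(Z^4) for the N x N GOE, by the real Wick formula on entries."""
    def cov(a, b, c, d):                  # Cov(Z_ab, Z_cd)
        return (a == c) * (b == d) + (a == d) * (b == c)
    total = 0
    for i, j, k, l in product(range(N), repeat=4):
        e = [(i, j), (j, k), (k, l), (l, i)]
        # the three pair partitions of the four Gaussian factors
        total += (cov(*e[0], *e[1]) * cov(*e[2], *e[3])
                  + cov(*e[0], *e[2]) * cov(*e[1], *e[3])
                  + cov(*e[0], *e[3]) * cov(*e[1], *e[2]))
    return total
```

For small $N$ the two computations agree, e.g.\ both give $46$ at $N=2$ and $114$ at $N=3$, consistent with $\mathbb{E}\,{\rm tr}(\mathbf Z^4)=2N^3+5N^2+5N$.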
The trace is\n${\\rm tr}(\\mathbf A)=\n \\sum_{j=1}^NA_{jj}\n$.\nSince the traces ${\\rm tr}(\\mathbf A)$ may fail to commute, in the formulas we will use $\\Re({\\rm tr}(\\mathbf A))$, compare to\n \\cite{Hanlon-Stanley-Stembridge-92}.\n\n\\subsection{Duality between GOE and GSE ensembles}\n\nThe Gaussian orthogonal ensembles consist of square symmetric matrices, $\\mathbf Z$,\nwhose entries on and above the diagonal are independent (real) Gaussian random\nvariables; the off-diagonal entries have variance $1$ while the\ndiagonal entries have variance\n$2$.\nOne may show the following.\n\\begin{theoremA} \\label{thm3.1} \\tolerance=2000\nFor $\\mathbf Z$ from the $N\\times N$ Gaussian orthogonal ensemble:\n\\begin{equation*}\n\\frac{1}{N^{n-m}} \\mathbb{E}( {\\rm tr}(\\mathbf Z^{j_1}) {\\rm tr}(\\mathbf Z^{j_2}) \\dots\n{\\rm tr}(\\mathbf Z^{j_m}) ) = \\sum_{\\Gamma} N^{\\chi(\\Gamma)} \\,,\n\\end{equation*}\nwhere the sum is over labeled M\\\"obius graphs $\\Gamma$ with $m$\nvertices of degree $j_1, j_2, \\dots, j_m$, $\\chi(\\Gamma)$ is the Euler\ncharacteristic and $j_1 + j_2 + \\dots + j_m = 2n$.\nMore generally, if $\\mathbf Z_1, \\dots, \\mathbf Z_s$ are independent $N\\times N$ GOE\nensembles and $t: \\{ 1, 2, \\dots, 2n\\} \\to \\{ 1, \\dots, s\\}$ is fixed,\nthen\n\\begin{multline*}\n\\frac{1}{N^{n-m}} \\mathbb{E}( {\\rm tr}( \\mathbf Z_{t(1)} \\dots \\mathbf Z_{t(\\beta_1)} )\n{\\rm tr}( \\mathbf Z_{t(\\alpha_2)} \\dots \\mathbf Z_{t(\\beta_2)} ) \\times \\dots \\\\ \\dots\\times\n{\\rm tr}( \\mathbf Z_{t(\\alpha_m)} \\dots \\mathbf Z_{t(\\beta_m)}) )\n= \\sum_{\\Gamma} N^{\\chi(\\Gamma)} \\,,\n\\end{multline*}\nwhere $\\alpha_1 = 1$, $\\alpha_k = j_1 + j_2 + \\dots +\n j_{k-1} + 1 $, and\n$\\beta_k = j_1 + j_2 + \\dots + j_k $ denote the ranges under the\ntraces, and where the sum is over labeled color-preserving M\\\"obius\ngraphs $\\Gamma$ with vertices of degree $j_1, j_2,\\dots, j_m$ whose edges are\ncolored by the mapping $t$.\nIf there are no $\\Gamma$ that are consistent 
with the coloring we\ninterpret the sum as being $0$.\n\\end{theoremA}\nThe single-color version of this theorem was given in Ref.~\\cite{Goulden-Jackson-97}.\n\\tolerance=1000\n\nThe Gaussian symplectic ensembles (GSE)\nconsist of square self-adjoint matrices\n\\begin{equation}\n \\label{GUE} \\mathbf Z=\\left[Z_{i,j}\\right],\n\\end{equation}\nwhere $\\{Z_{i,j}:i M(x) = 2$ for $k = 2, n = 3, x = (2,3,3)$.\n\nFor the case of $n = k+1$ Jenkyns and Mayberry \\cite{JM80}\nprovided a formula for the SG function of {\\sc Nim}$^\\leq_{k+1, k}$.\nAn alternative proof for a slightly more general game\nwas given recently in \\cite{BGHM15}.\n\nIn this paper we introduce another generalization of {\\sc Nim}.\nGiven positive integers $n$ and $k$ such that $1 \\leq k \\leq n$,\nwe define {\\sc Exact $k$-Nim}, denoted by {\\sc Nim$^=_{n,k}$}, as follows.\nGiven $n$ piles of tokens, by one move a player chooses exactly $k$ piles and\nremoves an arbitrary positive number of tokens from each of them.\nThe game terminates when there are fewer than $k$ nonempty piles.\n{\\sc Nim}$^=_{n, k}$ turns into the standard {\\sc Nim} when $k=1$ and\nit is the trivial one-pile {\\sc Nim} when $k=n$.\n\n\\subsection*{Main results}\n\nGiven a position $x\\in \\Z_\\geq^n$ of {\\sc Nim$^=_{n,k}$},\nwe denote by $T_{n,k}(x)$ the maximum number of consecutive moves\none can make starting with $x$.\nWe call $T_{n,k}$ the \\emph{Tetris} function of the game.\n\nThe following two theorems characterize the SG function of {\\sc Nim$^=_{n,k}$} for $2k\\geq n$.\n\\begin{theorem} \\label{thm.n<2k}\nIf $n < 2k$ then the SG function of {\\sc Nim$^=_{n,k}$} is\nequal to its Tetris function, $\\G(x) = T_{n,k}(x)$.\n\\end{theorem}\n\n\\begin{theorem} \\label{thm.n=2k}\nLet $k \\geq 2$, $n = 2k$, and\nlet $x = (x_1, \\ldots, x_n)$ be a position of {\\sc Nim}$^=_{2k, k}$.\nSet\n\\begin{align}\nu(x) &= T_{2k,k}(x), \\label{eq:u}\\\\\nm(x) &= \\min_{1\\leq i\\leq 2k} x_i,\\label{eq:m}\\\\\ny(x) &= T_{2k,k}\\big(x_1 - 
m(x), \\ldots, x_{2k} - m(x)\\big),\\label{eq:y}\\\\\nz(x) &= 1+\\binom{y(x)+1}{2}, \\text{ and}\\label{eq:z}\\\\\nv(x) &= \\big(z(x)-1\\big) +\\big[\\big(m(x)-z(x)\\big)\\mod \\big(y(x)+1\\big)\\big].\\label{eq:v}\n\\end{align}\nThen the SG function of {\\sc Nim}$^=_{2k,k}$ is given by the formula\n\\begin{equation}\\label{eq-binomial;n=2k-Formula}\n\\G(x) ~=~ \\begin{cases}\nu(x), & \\text{if~~} m(x) < z(x);\\\\\nv(x), & \\text{if~~} m(x) \\geq z(x).\n\\end{cases}\n\\end{equation}\n\\end{theorem}\n\nNote that this formula fails for $k = 1$.\nIn this case {\\sc Nim}$^=_{2, 1}$ is the standard $2$-pile {\\sc Nim} and\nits SG function is the bitwise binary sum without carry (XOR) of the two coordinates of a position,\nas described by Bouton.\nThis function is different from the one described by the above formula.\n\nWe would also like to remark that the above formula is surprisingly\nsimilar to the one given by Jenkyns and Mayberry\nin \\cite{JM80} for {\\sc Nim}$^\\leq_{k+1, k}$.\n\n\\bigskip\n\nThe above result implies a simple characterization of\nthe $0$- and $1$-positions of {\\sc Nim}$^=_{2k, k}$.\nA position $x = (x_1, \\ldots, x_n)$ is said to be\n\\emph{nondecreasing} if $x_1 \\leq x_2 \\leq \\cdots \\leq x_n$.\n\n\n\\begin{corollary} \\label{cor-0,1-exact:n=2k}\nGiven a nondecreasing position $x = (x_1, \\dots, x_{2k})$ of the game {\\sc Nim}$^=_{2k, k}$,\n\\begin{enumerate} \\itemsep0em\n\\item [\\rm{(i)}] $x$ is a $\\P$-position if and only if\nmore than half of its smallest coordinates are equal, that is, $x_1 = \\cdots = x_k = x_{k+1}$.\n\\item [\\rm{(ii)}] $x$ is a $1$-position if and only if $x_1 = \\cdots = x_{k - \\ell} = 2c$ and\n$x_{k-\\ell+1} = \\cdots = x_{k+ \\ell + 1} = 2c+1$\nfor some integer $c \\in \\ZZ_\\geq$ and $\\ell \\in \\{0,1, \\ldots, k-1\\}$.\n\\end{enumerate}\n\\end{corollary}\n\nBoth statements \\rm{(i)} and \\rm{(ii)} follow from Theorem \\ref{thm.n=2k},\nbut can also be derived much more simply, directly from the definitions.\n\n\\medskip\n\nThe case of $2k 
< n$ looks much more difficult and it is still open.\nMoreover, we have not even been able to characterize\nthe $\\P$-positions of {\\sc Exact $k$-Nim} for $1 < k < n\/2$,\ne.g., for {\\sc Nim}$^=_{5,2}$.\n\n\\bigskip\n\nThe rest of the paper is organized as follows.\nIn Section \\ref{Ss.n<2k} we characterize the SG function of {\\sc Nim$^=_{n,k}$} for the case $2k>n$.\nIn Section \\ref{Ss.n=2k} we characterize the SG function of {\\sc Nim$^=_{n,k}$} for the case $2k=n$.\nIn Section \\ref{Ss.Moore01} we provide an alternative proof for the above stated result of Jenkyns and Mayberry \\cite{JM80}.\nFinally in Section \\ref{Ss.Tetris} we show that for a given position we can compute efficiently the corresponding SG value.\n\n\n\n\n\\section{SG function in the case of $n < 2k$} \\label{Ss.n<2k}\n\n\nFor our proof we need the following basic properties of the Tetris function.\nGiven positions $x,x'\\in\\Z^n_\\geq$ we write $x\\leq x'$ if $x_i\\leq x'_i$ holds for $i=1,...,n$.\n\n\\begin{lemma}\\label{tbasic}\nConsider two positions $x, x'\\in\\ZZ^n_\\geq$.\n\\begin{itemize}\n\\item [\\rm{(i)}] If $x'\\leq x$ then $T_{n,k}(x')\\leq T_{n,k}(x)$.\n\\item [\\rm{(ii)}] If in addition we have $\\sum_{i=1}^n(x_i-x_i')=1$, then $T_{n,k}(x)-1\\leq T_{n,k}(x')\\leq T_{n,k}(x)$.\n\\end{itemize}\n\\end{lemma}\n\\proof\nIt is immediate by the definition.\n\\qed\n\nA move in {\\sc Nim}$^=_{n, k}$ is called {\\it slow} if exactly one token is taken from each of the $k$ chosen piles.\n\n\n\n\\begin{lemma}\\label{Tetrisnotdecrease}\nConsider a position $x=(x_1, \\ldots, x_n) \\in \\Z^n_\\geq$ with some indices $i, j$ such that\n$x_i < x_j$. Let $x'=(x'_1, \\ldots, x'_n)$ be defined by\n\\begin{equation}\\label{eq-l2}\nx_l'=\n\\begin{cases}\nx_i+1, & \\text{ if } l=i, \\\\\nx_j-1, & \\text{ if } l=j, \\\\\nx_l, & \\text{ otherwise. 
}\\\\\n\\end{cases}\n\\end{equation}\nThen we have $T_{n,k}(x) \\leq T_{n,k}(x')$.\n\\end{lemma}\nIn other words, the Tetris function is nondecreasing when\nwe move a token from a larger pile to a smaller one.\n\n\n\\proof\nConsider any sequence of slow moves $x \\to \\cdots \\to x''$. If $x''_j>0$ then the same sequence of slow moves can be made from $x'$ since $x'_l \\geq x_l$ for $l \\neq j$.\n\nIf $x''_j=0$ then since $x_j > x_i$, this sequence contains a slow move reducing $x_j$ but not $x_i$.\nLet us modify this move so that it reduces $x_i$ rather than $x_j$, keeping all other moves of the sequence unchanged.\nThe obtained sequence has the same length and consists of slow moves from $x'$. \\qed\n\nNotice that we can generalize Lemma \\ref{Tetrisnotdecrease}\nby replacing $\\pm 1$ in \\raf{eq-l2} by $\\pm \\Delta$ for any integral $\\Delta \\in [0,x_j-x_i]$.\n\n\n\\begin{lemma}\\label{bestslowmove}\nThe slow move that reduces the $k$ largest piles of $x$ reduces the Tetris value $T_{n,k}(x)$ by exactly one.\n\\end{lemma}\n\n\\proof\nLet $x'$ be the position obtained from $x$ by reducing the $k$ largest piles of $x$ by exactly one each. Let $x''$ be another position obtained by some slow move.\nBy applying \\raf{eq-l2} repeatedly, we can obtain $x'$ from $x''$ with $T_{n,k}(x'') \\leq T_{n,k}(x')$ by Lemma \\ref{Tetrisnotdecrease}.\nThis implies that $x'$ has the highest Tetris value among all positions reachable from $x$ by a slow move. By Lemma \\ref{tbasic},\neach slow move reduces the Tetris value by at least one and there exists a slow move reducing it by exactly one.\nHence, $T_{n,k}(x')=T_{n,k}(x)-1$. 
\\qed\n\n\\bigskip\n\\noindent\\textbf{Proof of Theorem }\\ref{thm.n<2k}:\nThe Tetris value is the largest number of moves one can take from a position $x$,\nimplying that $\\G(x)$ is at most $T_{n,k}(x)$.\nTherefore, it is enough to show that for all integral $g$ such that\n$0 \\leq g < T_{n,k}(x)$ there exists a move $x\\to x'$ such that $T_{n,k}(x')= g$.\n\nConsider the move $x \\to x'$ that reduces the largest $k$ piles to $0$.\nFor the resulting position we have $T_{n,k}(x')=0$ because $2k>n$.\nLet us also consider the move $x \\to x''$ that reduces the $k$ largest piles each by only $1$.\nThen we have $T_{n,k}(x'')=T_{n,k}(x)-1$ according to Lemma \\ref{bestslowmove}.\nAny position between $x'$ and $x''$ is reachable from $x$. Thus, by Lemma \\ref{tbasic} the claim follows. \\qed\n\n\\section{SG function in the case of $n=2k$} \\label{Ss.n=2k}\n\nLet us consider the game {\\sc Nim}$^=_{2k, k}$, where $k\\geq 2$, and let $x$ be a position of this game.\nRecall that to $x$ we associated several parameters in \\eqref{eq:u}--\\eqref{eq:v}.\nBased on these parameters, we classify the positions into the following two types:\n\\begin{itemize} \\itemsep0em\n\\item [\\rm{(i)}] \\emph{type I}, if $m(x) m(x)$.\nObviously, any move $x \\rightarrow x'$ reduces the Tetris value by at least 1, implying $u(x) > u(x')$.\nUsing this and the definitions, we get $g(x)=u(x) > u(x')$. If $g(x') = u(x')$ then by the above inequality, we get $g(x) \\neq g(x')$.\nOn the other hand, if $g(x') = v(x')$ then we must have $m(x') \\geq z(x')$ and thus $v(x')=z(x')-1 + \\left((m(x')-z(x')) \\mod (y(x')+1)\\right)\\leq z(x')-1 + (m(x')-z(x'))=m(x')-1$.\nThus, we get $g(x) = u(x) > u(x') > m(x')-1 \\geq v(x') = g(x')$.\n\nIt remains to consider the case $z(x) \\leq m(x)$, in which case $v(x)\\leq m(x)-1$ follows by the definitions. 
\n\t\\begin{enumerate} \\itemsep0em\n\t\\item Suppose $z(x') > m(x')$.\n We can estimate $u(x') \\geq m(x)+m(x')$ since $2k = n$.\n Furthermore, $g(x)=v(x) \\leq m(x)-1$ and thus $g(x')=u(x') \\geq m(x)+m(x') > m(x)-1 \\geq g(x)$.\n \\item Suppose $z(x') \\leq m(x')$. Then $g(x')=v(x')$. Note that $z(x) \\leq m(x)$ and thus $g(x)=v(x)$.\n Also note that $m(x') \\leq m(x)$. We examine the last inequality.\n \t\\begin{enumerate} \\itemsep0em\n\t \\item Suppose $m(x')=m(x)$.\n \n Then $x-m(x)\\to x'-m(x')$ is a move, and hence decreases the Tetris value implying $y(x)>y(x')$.\n\t By Lemma \\ref{interval}, we have\n $z(x')+y(x')-1 < z(x)-1$\n and so\n $z(x) > z(x') + y(x')$\n implying\n $v(x) \\geq z(x)-1> z(x')-1+y(x') \\geq v(x')$,\n since\n $y(x') \\geq \\bigg(\\big(m(x') - z(x)\\big) \\mod\\big(y(x) + 1\\big)\\bigg)$,\n regardless of the value of $m(x') - z(x)$.\n\t \\item Suppose $m(x') < m(x)$. We compare $y(x)$ with $y(x')$.\n \\begin{enumerate} \\itemsep0em\n\t \\item If $y(x)= y(x')$ then $z(x)=z(x')$. By the definition of a legal move, $x'$ has at least $k$ piles not smaller than $m(x)$. 
Therefore, $1 \\leq\n m(x)-m(x') \\leq y(x')=y(x)$, which implies that $m(x) \\mod (y(x)+1) \\neq m(x') \\mod (y(x)+1)$ and thus $v(x) \\neq v(x')$.\n\t \\item If $y(x) \\neq y(x')$, since the $y$ values are different, by Lemma \\ref{interval} we have that $v(x)$ and $v(x')$ are in different intervals,\n therefore $v(x)\\neq v(x')$.\n \\end{enumerate}\n\t \\end{enumerate}\n\t\\end{enumerate}\n\n\\subsection{Proof of (II)}\n\nWe prove property (II) by considering type I and type II positions, separately.\n\n\\subsubsection{Type I positions: $m(x)\\nu(m-1)\\geq 0.\n\\end{equation}\nwith $\\nu$ being defined as in Corollary \\ref{delta-epsilon}.\n\nLet us next define a set $Q=Q(x)$ of pairs of integers by setting\n\\[\nQ^1(x)=\\left\\{(\\mu,\\ge)\\left| \\begin{array}{c}m-\\ge~\\leq~ \\mu~\\leq~ m\\\\\n\\ge ~\\leq \\nu(m-1)-1\n\\end{array}\\right.\\right\\}\n\\]\n\\[\nQ^2(x)=\\left\\{(\\mu,\\ge) \\left| \\begin{array}{c}m-\\eps(m-1)~\\leq~ \\mu~\\leq~ m\\\\\n\\ge ~= \\nu(m-1)\n\\end{array}\\right. 
\\right\\} ,\n\\]\nand defining $Q=Q^1(x)\\cup Q^2(x)$.\n\nWe show that if for a position $x^*$ we have $(m(x^*),y(x^*))\\in Q$, then $x^*$ is of type II.\nTo see this consider a pair $(\\mu,\\ge)\\in Q^1(x)$.\nThen we have by the definition of $Q^1(x)$ that $\\mu\\geq m-\\ge$ and that $\\nu(m-1)-1\\geq \\ge$ from which $z(\\nu(m-1)-1)\\geq z(\\ge)$ follows by the definition of $z$ in \\eqref{eq:z}.\nWe also have the inequality $m-\\nu(m-1)+1\\geq z(\\nu(m-1)-1)$ since $m\\geq \\binom{\\nu(m-1)+1}{2}$ by the definition of $\\nu$ in Corollary \\ref{delta-epsilon}.\nPutting these together, we obtain $\\mu\\geq z(\\ge)$ as stated.\nFor $(\\mu,\\ge)\\in Q^2(x)$ we have $\\mu\\geq m-\\epsilon(m-1)$ and $\\ge=\\nu(m-1)$ by the definition of $Q^2(x)$.\nSince $m-1=\\binom{\\nu(m-1)+1}{2}+\\epsilon(m-1)$ by Corollary \\ref{delta-epsilon},\nthe inequality $\\mu\\geq m-\\epsilon(m-1)=z(\\ge)$ follows again.\n\nWe show next that for all pairs $(\\mu,\\ge)\\in Q$ there exists a move $x\\to x^*$ such that $\\mu=m(x^*)$ and $\\ge=y(x^*)$ (and $x^*$ is of type II, as we argued in the previous paragraph.)\nFor this let us consider first $\\mu\\leq m-1$ and note that if $(\\mu,\\ge)\\in Q$ then $m-\\mu\\leq \\ge\\leq \\nu(m-1)$.\nOur plan is to use Lemma \\ref{continuity} and to cover this range of $\\ge$ values by two constructions.\n\nLet us define a pair of positions $x'\\geq x''$ by\n\\begin{align*}\nx'_i &=\n\\begin{cases}\nx_i, &\\text{ for } i=1 \\text{ and } i\\geq k+2,\\\\\n\\mu, &\\text{ for } i=2, \\\\\nx_i-1, &\\text{ for } i=3,...,k+1\n\\end{cases} \\\\\n\\intertext{and}\nx''_i &=\n\\begin{cases}\nx_i, &\\text{ for } i=1 \\text{ and } i\\geq k+2,\\\\\n\\mu, &\\text{ for } i=2,...,k+1.\n\\end{cases}\n\\end{align*}\nNote that since $\\mu\\delta\\geq x_1$ which implies $x'_n=\\delta-x_10$ for $i=k+1,...,n-1$,\nand therefore we indeed decrease exactly $k$ piles of $x$ to obtain $x'$. 
Thus $x'$ is reachable from $x$.\n\nFor $x'$ we have\n\\begin{align*}\n\\sum_{i=1}^{2k}\\min(x_i',\\delta) &\\geq x_1' + (k-1)\\delta + (\\delta-x_1') =k\\delta \\\\\n\\intertext{and}\n\\sum_{i=1}^{2k}\\min(x_i',\\delta+1) &\\leq x_1' + (k-1)(\\delta+1) + (\\delta - x_1)\\\\\n &= \\delta + (k-1)(\\delta +1) < k(\\delta +1).\n\\end{align*}\nTherefore, by Lemma \\ref{getmeTetris}, we have $T_{2k,k}(x')=\\delta$.\n\nSince $k\\geq 2$, $x'_{k+1}=0$ and therefore $m(x')=0$. Thus, $m(x')=0<1\\leq z(x')$ implying\nthat $x'$ is of type I, from which $g(x')=u(x')=\\delta$ follows by the above.\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[height=7cm]{Doc2}\n\\caption{$x'$ is obtained by removing the gray area.} \\label{F1}\n\\end{figure}\n\n\\item $x_2 \\leq \\delta < u(x)-m(x)$ (Figure \\ref{F2}). Let $A=\\sum_{i=2}^{k+1} \\max(0,\\delta-x_i)$.\nFor $i=k+2, \\ldots ,n$ choose $a_i$ such that $0 \\leq a_i \\leq \\min(x_i-1,\\delta)$ and $\\sum_{i=k+2}^n a_i=A$.\nFirst let us prove that this is possible, or equivalently that $A \\leq \\sum_{i=k+2}^n \\min(\\delta,x_i-1)$.\nTo see this let us consider two cases.\nIf $x_{k+1}>\\delta$ then $\\sum_{i=k+2}^n \\min(\\delta,x_i-1)\\geq (k-1)\\delta \\geq A$.\nIf $x_{k+1}\\leq \\delta$ then let us define\n\\begin{equation} \\label{B.def}\nB=\\sum_{i=2}^{k+1}x_i = k\\delta - A,\n\\end{equation}\nand observe that for any integer $t\\geq\\delta >x_{k+1}$ we have\n\\[\n\\sum_{i=1}^n\\min(t,x_i) = m + B +\\sum_{i=k+2}^n\\min(t,x_i).\n\\]\nConsequently, since we have $T_{2k,k}(x)=u(x)$, by Lemma \\ref{getmeTetris} we can write for $t=u(x)$ that\n\\[\nkt \\leq \\sum_{i=1}^n\\min(t,x_i) = m(x)+B+\\sum_{i=k+2}^n\\min(t,x_i).\n\\]\nNote that if we decrease $t=u(x)$ by $1$,\nthen the left hand side decreases by $k$ while the right hand side decreases by at most $k-1$,\nhence the inequality remains valid. 
Let us repeat this $m(x)$ times, obtaining the inequality\n\\[\nk(u(x)-m(x))+km(x) \\leq m(x)+B+\\sum_{i=k+2}^n\\min(u(x)-m(x),x_i)+(k-1)m(x)\n\\]\nfrom which\n\\[\nk(u(x)-m(x))\\leq B+\\sum_{i=k+2}^n\\min(u(x)-m(x),x_i)\n\\]\nfollows. Let us now decrease $t=u(x)-m(x)$ further by $1$, as well as replace $x_i$ by $x_i-1$.\nThen the left hand side decreases by exactly $k$, while the right hand side decreases by at most $k$, yielding the valid inequality\n\\[\nk(u(x)-m(x)-1)\\leq B+\\sum_{i=k+2}^n\\min(u(x)-m(x)-1,x_i-1).\n\\]\nFinally, we can decrease $t=u(x)-m(x)-1$ further on both sides to $t=\\delta$ and similarly to the above argument obtain\n\\[\nk\\delta\\leq B+\\sum_{i=k+2}^n\\min(\\delta,x_i-1).\n\\]\nBy \\eqref{B.def} we obtain the claimed inequality, and hence the proof of the existence of the $a_i$ values for $i=k+2,...,n$\nthat satisfy the desired inequalities.\n\nLet us now consider the position $x'$ defined by\n\\[\nx'_i =\n\\begin{cases}\n0, & \\textrm{ for } i=1; \\\\\nx_i, &\\textrm{ for }i=2, \\ldots, k+1; \\\\\na_i, &\\textrm{ for }i=k+2, \\ldots ,n.\n\\end{cases}\n\\]\nBy the above arguments $x\\to x'$ is a move in the game.\nThe equality $g(x')=u(x')=\\delta$ now follows by the above analysis and Lemma \\ref{getmeTetris}, completing our proof in this case.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[height=9cm]{Doc3}\n\\caption{$x'$ is obtained by removing the gray area.} \\label{F2}\n\\end{figure}\n\n\n\\item $u(x)-m(x) \\leq \\delta < u(x)$ (Figure \\ref{F3}).\nLet us define the position $x'$ as follows:\n$$\nx_i' =\n\\begin{cases}\nx_i-u(x)+\\delta, &\\textrm{ for } i \\in I_1=\\{i \\mid i\\leq k, x_{i+k} \\leq \\delta\\}; \\\\\nx_i-u(x)+\\delta, &\\textrm{ for } i \\in I_2=\\{i \\mid i > k, \\delta < x_i \\leq u(x)\\}; \\\\\n\\delta , &\\textrm{ for }i \\in I_3=\\{i \\mid i > k, x_i > u(x) \\}; \\\\\nx_i, &\\textrm{ otherwise.}\n\\end{cases}\n$$\nNote that $x_i \\geq m(x)\\geq u(x)-\\delta$, therefore the $x_i'$ are all nonnegative.\nIt is 
easy to see that $I_1+k,I_2,I_3$ form a partition of $\\{ k+1, \\ldots , n \\}$.\nWe have reduced $x_i$ for all $i \\in I_1 \\cup I_2 \\cup I_3$, therefore $x\\to x'$ is a move.\n\nNext we note that the above construction implies that\n\\[\n\\sum_{i=1}^n \\min(x_i',\\delta) \\geq \\sum_{i=1}^n\\min(x_i,u(x)) - k (u(x)-\\delta)\\geq k \\delta\n\\]\nwhere the last inequality follows by the fact that $u(x)=T_{2k,k}(x)$, and hence $\\sum_{i=1}^n\\min(x_i,u(x))\\geq ku(x)$ by Lemma \\ref{getmeTetris}.\nSimilarly, we get\n\\[\n\\sum_{i=1}^n \\min(x_i',\\delta+1) \\leq \\sum_{i=1}^n\\min(x_i,u(x)+1) - k (u(x)-\\delta)< k (\\delta+1)\n\\]\nsince $\\sum_{i=1}^n\\min(x_i,u(x)+1)< k(u(x)+1)$.\nTherefore $T_{n,k}(x')=u(x')= \\delta$ follows by Lemma \\ref{getmeTetris}.\n\nNote that $x_{k+1}\\leq \\delta$, since otherwise $u(x)\\geq \\delta+m(x)$ would follow, contradicting our choice of $\\delta$.\nIt follows that $x_1 \\in I_1$ and thus $m(x')=m(x)-u(x)+\\deltam(x)$.\nWe consider two positions $\\bar{x}$ and $\\hat{x}$ reachable from $x$ defined as\n$$\\bar{x}_i =\n\\begin{cases}\nx_i, & \\textrm{ if } i=1, \\ldots ,k;\\\\\nm(x), & \\textrm{ if } i\\geq k+1; \\\\\n\\end{cases}\n~~~~\\hat{x}_i =\n\\begin{cases}\nx_i, & \\textrm{ if } i=1, \\ldots ,k; \\\\\nx_i-1, & \\textrm{ if } i\\geq k+1. 
\\\\\n\\end{cases}\n$$\nSimilarly to the previous cases, we have $y(\\bar{x})=0$ and\n$y(\\hat{x}) = y(x)-1$, and thus by Lemma \\ref{continuity} it follows that\nfor all $y \\in[0,y(x)-1]$ there exists an $x' \\in [\\bar{x},\\hat{x}]$ such that $y=y(x')$.\n\nIf we put all three cases together we cover all values $(m,y) \\in D$.\n\\qed\n\n\\medskip\n\nLet us set\n\\begin{equation} \\label{v(m,y)}\nv(m,y)= {y+1 \\choose 2}+ \\Big[ \\Big(m-1-{y+1 \\choose 2}\\Big)\\mod (y+1)\\Big].\n\\end{equation}\nNote that if $m=m(x)$ and $y=y(x)$ then $v(m,y)=v(x)$.\nFurthermore, we have\n$$V(y):=\\{v(m,y) | m \\in \\ZZ_\\geq\\}=\\left[ {y+1 \\choose 2},{y+2 \\choose 2} \\right).$$\nTherefore the sets $V(y)$, $y \\in \\ZZ_\\geq$ form a partition of $\\ZZ_\\geq$, as shown in Lemma \\ref{interval}.\n\n\\begin{lemma}\\label{Dstar}\nIf $m(x) \\geq z(x)$, then every $(m,y) \\in D(x)$ satisfies the following relations\n$$m\\geq {y+1 \\choose 2}+1~~\\textrm{ and }~~\\{v(m,y) \\mid (m,y) \\in D(x) \\} = [0,v(x)).$$\n\\end{lemma}\n\n\\proof\nLet us first consider $(m,y) \\in D_1(x)$.\nBy the definition of $D_1$ and the assumption of $m(x) \\geq z(x)={y(x)+1 \\choose 2}+1$ we get\n$$m\\geq m(x)-y \\geq {y(x)+1 \\choose 2}+1-y \\geq {y+2 \\choose 2}+1-y \\geq {y+1 \\choose 2}+1.$$\nBy the definition of $D_1(x)$, for all $y \\in [0,y(x))$ we have $(m,y) \\in D_1(x)$ for all $m \\in [m(x)-y,m(x)]$. Hence, by (\\ref{v(m,y)}) we have $\\{v(m,y) \\mid m \\in [m(x)-y,m(x)]\\}=V(y)$.\nThus, by Lemma~\\ref{interval} we get $$\\{v(m,y) \\mid (m,y) \\in D_1(x) \\}=\\bigcup_{y \\in [0,y(x))}V(y)=[0,z(x)-1).$$\n\n\\smallskip\n\nLet us next consider $(m,y) \\in D_2(x)$.\nBy the definition of $ v(x)$ we can write $v(x)=z(x)-1+r $, where $r=m(x)-z(x)-\\lambda(y(x)+1)$ for some $\\lambda \\in \\ZZ_\\geq$. This implies $m(x)-r\\geq z(x)$. 
By the definition of $D_2(x)$ we have $m\\geq m(x)-r$.\nThese two inequalities imply $m\\geq {y+1 \\choose 2} +1$.\nSince $m \\in [m(x)-r, m(x))$ takes $r$ consecutive values, we have\n$\\{v(m,y) \\mid (m,y) \\in D_2(x) \\}=[z(x)-1,v(x))$.\n\\qed\n\nThe above lemma implies that any position $x'$ with $m(x')=m$, $y(x')=y$ for some $(m,y)\\in D$\nis a type II position. Hence $g(x')=v(x')=v(m,y)$.\nThus the second claim in the above lemma together with Lemma \\ref{existsmoveinD} implies that for any $0\\leq \\delta x_{ij}$, then we must have a $j'>j$ such that $x'_{ij'} 0$, there exists a move $x \\to x'$ such that $M(x') = 0$.\n\\end{itemize}\n\nTo show \\rm{(i0)}, let us consider a move $x \\to x'$\nfrom a position $x$ with $M(x)=0$.\n\nLet $j$ be the highest binary bit such that\n$x_{ij}$ and $x'_{ij}$ differ for some $i$. Such a $j$ must exist since in a move we must change at least one component.\nBy Lemma \\ref{moore1} we have $x'_{ij}\\leq x_{ij}$ for all $i=1,...,n$,\nimplying $1\\leq \\sum_{i=1}^n(x_{ij}-x'_{ij}) \\leq k$ because in a move we can change at most $k$ components. Therefore,\n$\\sum_{i}x'_{ij} \\not\\equiv 0 \\pmod{k+1}$ and, thus, $M(x') > 0$.\n\n\\medskip\n\nTo show \\rm{(a0)}, let us consider a position $x$ with $M(x)>0$. We will construct a move $x \\to x'$ such that $M(x') = 0$.\n\n\\begin{notation} \\label{T.N}\nLet $t_1, \\dots , t_p$ denote the bits $j$ such that $y_j\\not=0$, assuming $t_1 > \\dots > t_p$.\nSet $N = \\{1,2,\\ldots,n\\}$.\n\\end{notation}\n\n\nThe following algorithm defines index sets $\\emptyset=I_0\\subseteq I_1\\subseteq \\cdots \\subseteq I_p \\subseteq N$\nsuch that we can compute a move $x \\to x'$ with $M(x')=0$, by decreasing components $i\\in I_p$. Define $O_j=\\{i\\in I_j\\mid x_{it_{j+1}}=1\\}$, and set $\\ga_j = |I_j|$ and $\\gb_j=|O_j|$.\n\n\n\n\\medskip\n\\noindent\n{\\bf Step 0}. 
Initialize $I_0=\\emptyset$\nand hence, $\\ga_0 = \\gb_0 = 0$, and set $x'_i:=x_i$ for all $i \\in N$.\n\n\\smallskip\n\\noindent\n{\\bf Step 1}. For $j=1, \\dots , p$, construct $I_j$ and update $x'$ as follows.\n\n\\smallskip\n\n{\\bf Case 1.} If $y_{t_j} \\leq \\beta_{j-1}$, then let $I_j:=I_{j-1}$,\nchoose $y_{t_j}$ many indices $i\\in O_{j-1}$,\nand update $x'_{it_j}:=0$.\n\n\\smallskip\n\n{\\bf Case 2.} If $y_{t_j} > \\beta_{j-1}$ and $(k+1) -y_{t_j} \\leq \\ga_{j-1}-\\gb_{j-1}$, then\nlet $I_j:=I_{j-1}$, choose $(k+1)-y_{t_j}$ many indices $i\\in I_{j-1}\\setminus O_{j-1}$, and update $x'_{i{t_j}}:=1$.\n\n\\smallskip\n\n{\\bf Case 3.} If $y_{t_j} > \\beta_{j-1}$ and $y_{t_j}-\\beta_{j-1} \\leq k-\\ga_{j-1}$, then\nlet $I_j$ be the index set obtained from $I_{j-1}$ by adding\n$y_{t_j} - \\beta_{j-1}$ many indices $i$ from $N \\setminus I_{j-1}$ such that\n$x_{i{t_j}}=1$. Update $x'_{i{t_j}}:=0$ for $i\\in (I_j\\setminus I_{j-1})\\cup O_{j-1}$.\n\n\n\\medskip\n\\noindent\nWe first note that the three cases above are exclusive and\ncover all possible $y_{t_j}$, $\\ga_{j-1}$ and $\\gb_{j-1}$ values.\nMoreover, it is easily seen that\na position $x'$ after the execution of the algorithm satisfies $M(x')=0$, and\n$x'_i=x_i$ holds for $i \\not\\in I_p$.\n\nNote that we increase the set $I_j$ only in Case 3, in which case we have\n\\[\n|I_j|=\\alpha_j=(y_{t_j}-\\beta_{j-1})+\\alpha_{j-1} \\leq k,\n\\]\nimplying $|I_p|\\leq k$.\n\nNote also that in Case 3 we must have at least $y_{t_j} - \\beta_{j-1}$ many indices $i\\in N\\setminus I_{j-1}$ with $x_{it_j}=1$ by the definition of $y_{t_j}$ in \\eqref{e0001}.\n\nIt remains to show that $x' < x$.\nAssume that $i$ is an index such that $x'_i \\not= x_i$.\nThen some $j$ satisfies $i \\not\\in I_{j-1}$ and $i \\in I_j$.\nThis implies that $x'_i$ was first updated during the $j$th iteration of Step 1.\nNamely, the $t_j$th bit of $x'_i$ is modified from $1$ to $0$.\nSince $x'_{it}=x_{it}$ holds for all $t$ with $t > 
t_j$,\nwe have $x'_i < x_i$, which completes the proof.\n\\qed\n\n\\subsection{Proof of Theorem \\ref{t-Moore-m=0,1} for $m=1$}\n\\label{ss-p-t1-1}\n\nNow, let us prove that\n$\\cG(x)=1$ if and only if $M(x)=1$.\n\n\\smallskip\nThe proof in the previous subsection implies that for a position $x$ with $M(x)=1$ there exists a move $x\\to x'$ such that $M(x')=0$.\nBy the properties of the SG function, it remains to show that\n\\begin{itemize}\n\\item [\\rm{(i1)}] for any position $x$ with $M(x) = 1$, there exists\nno move $x \\to x'$ such that $M(x') = 1$;\n\\item [\\rm{(a1)}] for any position $x$ with $M(x) > 1$, there\nexists a move $x \\to x'$ such that $M(x')=1$.\n\\end{itemize}\n\nWe prove \\rm{(i1)} similarly to \\rm{(i0)}.\nLet us assume that $M(x)=1$ holds for a position $x$\nand consider a move $x \\to x'$.\nLet $j$ be the highest binary bit such that\n$x_{ij}$ and $x'_{ij}$ differ for some $i$.\nThen by Lemma \\ref{moore1} we have $x'_{ij} \\leq x_{ij}$ for all $i$, and\n$1 \\leq \\sum_{i}(x_{ij}-x'_{ij}) \\leq k$.\nHence, $\\sum_{i}x'_{ij} \\not\\equiv \\sum_{i}x_{ij} \\pmod{k+1}$ and, thus, $M(x') \\not= 1$.\n\n\nTo show \\rm{(a1)}, let us consider a position $x$ with $M(x) > 1$.\nSimilarly to \\rm{(a0)}, we will algorithmically construct a move $x \\to x'$ such that $M(x')=1$.\n\nLet again $t_1, \\dots , t_{p-1}$ denote the bits $j>0$ such that $y_j>0$,\nwhere we assume that $t_1 > \\dots > t_{p-1}$, and add $t_p=0$.\n\nThe algorithm remains the same as for \\rm{(a0)},\nexcept for $j=p$, when $t_p=0$. 
We detail below the computation of $I_p$ from $I_{p-1}$:\n\n\n\\medskip\n\\noindent\n\n{\\bf Case 1.} If $y_0> 1$ and $y_0-1 \\leq \\gb_{p-1}$,\nthen let $I_p:=I_{p-1}$,\nchoose $y_0-1$ many indices ${i}$ from $I_p$ such that\n$x'_{i0}=1$, and update $x'_{i0}:=0$ for such indices.\n\n\\smallskip\n\n{\\bf Case 2.} If $y_0> 1$ and $(k+2)-y_0 \\leq \\ga_{p-1}-\\beta_{p-1}$,\nlet $I_p:=I_{p-1}$, choose $(k+2)-y_{0}$ many indices ${i}$ from $I_p$ such\nthat $x'_{i0}=0$ and update $x'_{i0}:= 1$ for such indices.\n\n\\smallskip\n\n{\\bf Case 3.} If $y_0> 1$ and $0< (y_0-1) -\\gb_{p-1} \\leq k-\\ga_{p-1}$, then\nlet $I_p$ be an index set obtained from $I_{p-1}$ by adding $(y_{0}-1)-\\beta_{p-1}$ many\nindices $i$ from $N \\setminus I_{p-1}$ such that $x_{i{0}}=1$, and\nupdate $x'_{i0}:=0$ for all $i \\in I_p$ with $x'_{i0}=1$.\n\n\\smallskip\n\n{\\bf Case 4.} If $y_0=0$ and $\\ga_{p-1} > \\gb_{p-1}$,\nthen let $I_p:=I_{p-1}$,\nchoose an index ${i}$ from $I_p$ such that $x'_{i0}=0$, and update $x'_{i0}:=1$.\n\n\\smallskip\n\n{\\bf Case 5.} If $y_0=0$ and $\\ga_{p-1} = \\gb_{p-1}=k$,\nthen let $I_p:=I_{p-1}$, and update $x'_{i0}:=0$ for all $i \\in I_p$.\n\n\\smallskip\n\n{\\bf Case 6.} If $y_0=0$ and $\\ga_{p-1} = \\gb_{p-1} 0$ since otherwise $M(x)=0$, giving a contradiction.\nThus, in Case 6, we can choose $k - \\ga_{p-1}$ many\nindices $i$ from $N \\setminus I_{p-1}$ such that $x_{i{0}}=1$.\nLet $x'$ be a position obtained by the algorithm.\nThen, clearly $M(x')=1$ and $x \\to x'$ is a move. This completes the proof of (a1).\n\\qed\n\n\\section{More on the Tetris function} \\label{Ss.Tetris}\n\nIn this section we show that the SG function described in Theorems \\ref{thm.n<2k}\nand \\ref{thm.n=2k} can in fact be computed efficiently, in polynomial time. 
For this we need to show that the Tetris function can be computed in polynomial time for these games.\nWe also prove that for a given position $x$ and integer $0\\leq g < T_{n,k}(x)$\nwe can compute in polynomial time a move $x\\to x'$ such that $T_{n,k}(x')=g$.\nFinally, in subsection \\ref{tetris-degree} we recall\nsome relations to degree sequences of graphs and hypergraphs.\n\n\\subsection{Computing the Tetris function in polynomial time} \\label{comp-tet-in-p}\n\nLet us recall that to a position $x$ we associated a shifted position $\\bar x$ after Corollary \\ref{delta-epsilon} with the property that $T_{n,k}(x)=\\bar x_{n-k+1}$.\nThe procedure described there is a non-polynomial algorithm.\nHowever $\\bar x$ and consequently $T_{n,k}(x)=\\bar x_{n-k+1}$ can be computed in a more efficient way.\n\n\\begin{theorem}\\label{Tetrispolynomial}\nGiven a position $x$ we can compute $T_{n,k}(x)$ in linear time in $n$.\n\\end{theorem}\n\n\\proof\nWe can assume without loss of generality that $x$ is a nondecreasing position. We show that the corresponding $\\bar x$ can be constructed in linear time in $n$, and thus the claim follows by the equality $T_{n,k}(x)=\\bar x_{n-k+1}$.\n\nRecall that the input size is $\\log(\\prod_{i = 1}^n x_i)$. 
Let $s=\\sum_{i=1}^{n-k} x_i$ be the number of tokens we shift on top of the largest $k$ piles; see Figure \\ref{ADD(1)}.\nWe know that for some $\\ell n\/2$.\nWe examine the question of how to move to some position of given Tetris value $g$.\n\nWe denote by\n\\begin{enumerate} \\itemsep0em\n\\item [\\rm{(i)}] $x^\\ell$ the position obtained from $x$ by removing all tokens from the largest $k$ piles of $x$, and by\n\\item [\\rm{(ii)}] $x^u$ the position obtained from $x$ by decreasing the largest $k$ piles by one unit each.\n\\end{enumerate}\n\nConsider the set\n$W = \\{T_{n,k}(x') \\mid x\\to x' \\text{ is a move}\\}$.\nBy Lemma \\ref{tbasic} (i) we have $T_{n,k}(x^\\ell) = \\min(W)$.\nBy Lemma \\ref{bestslowmove} we have $T_{n,k}(x^u)=T_{n,k}(x)-1$. As in the proof of\nTheorem \\ref{thm.n<2k}, we can argue that for\nevery value $g$ such that $T_{n,k}(x^\\ell)\\leq g \\leq T_{n,k}(x)-1$ there exists a move $x\\to x'$ such that $x^\\ell \\leq x' \\leq x$ and $T_{n,k}(x')=g$.\n\n\n\\begin{theorem}\\label{achieveTetrisvalue} \nGiven $g \\in W$, computing a position $x'$ such that $x^\\ell \\leq x' \\leq x$ and $T_{n,k}(x')=g$ can be done in $O\\big(n\\log(\\sum_{i=n-k+1}^nx_i)\\big)$ time.\n\\end{theorem}\n\\proof\nWe have $T_{n,k}(x^\\ell)\\leq g \\leq T_{n,k}(x^u)$.\nUsing the monotonicity of the Tetris function we perform a binary search in the space of positions between $x^\\ell$ and $x^u$.\nIn a general step we compute $L=\\sum_{i=n-k+1}^nx^\\ell_i$ and $U=\\sum_{i=n-k+1}^nx^u_i$, set $M=\\lfloor\\frac{L+U}{2}\\rfloor$ and compute $y_i=int(\\frac{x^\\ell_i+x^u_i}{2})$ for $i=n-k+1,\\dots,n$,\nwhere $int(\\cdot)$ is a rounding to a nearest integer value in such a way that $\\sum_{i=n-k+1}^ny_i=M$.\nFinally, we set $y_i=x_i$ for $i0}$ (see, for instance, \\cite[Theorem\n II.6.1]{Te15}). There are two downsides of this method. First, the\nerror term is suboptimal under the assumption of the GRH. Second, it would take some effort to make the constant $c$ actually explicit. 
\n\nHowever, as $|\\Delta|<10^{15}$, we are only interested in the case\nwhere $[a,b]$ is a subinterval of ${[1, 2 \\cdot 10^7]}$. In this\nrange, a simple \\textsf{SAGE} script using the \\textsf{MPFI} library\n\\cite{MPFI,sagemath} can be used to improve on\n\\eqref{equation::selbergdelange} computationally (see Proposition\n\\ref{proposition::selbergdelange}). As a consequence, we obtain\n$|\\Delta|<10^{10}$ for any singular unit of discriminant $\\Delta$ in Theorem \\ref{thmidrange}.\n\nThis is still not sufficient to check all remaining cases, at least\nwith modest computational means. The range is nevertheless small\nenough to use a counting algorithm in order to bound\n${\\mathcal{C}}_{10^{-3}}(\\Delta)$ for all discriminants $\\Delta$ satisfying\n$|\\Delta|< 10^{10}$, see Lemma \\ref{lem:lowrangeCebound}. {This still\nneeds an appropriate counting strategy, as determining\n${\\mathcal{C}}_{10^{-3}}(\\Delta)$ for each discriminant is rather slow,\ncomparable to computing separately each class number ${\\mathcal{C}}(\\Delta)$ in the\nsame range. Our trick is to bound all ${\\mathcal{C}}_{10^{-3}}(\\Delta)$\nsimultaneously by running through a set containing all imaginary quadratic\n$\\tau \\in \\mathcal{F}$ satisfying ${|\\tau-\\zeta_3|< \\varepsilon}$ or\n${|\\tau-\\zeta_6|< \\varepsilon}$ and such that $j(\\tau)$ is of discriminant\n$\\Delta$ with $|\\Delta|<10^{10}$. For each $\\tau$ encountered, we\ncompute its discriminant $\\Delta(\\tau)$ after the fact and increment\nour counter for ${\\mathcal{C}}_{10^{-3}}(\\Delta(\\tau))$. The thus obtained\nbounds for ${\\mathcal{C}}_{10^{-3}}(\\Delta)$ refine once again our previous\ninequalities, and allow us to conclude that $|\\Delta|<10^7$. 
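The simultaneous counting strategy can be sketched as follows (a toy version with tiny bounds; the reduction and primitivity conditions on the forms $(a,b,c)$ are our implementation choices, while the actual computation runs over $|\Delta|<10^{10}$ with $\varepsilon=10^{-3}$):

```python
from cmath import sqrt
from collections import defaultdict
from math import gcd

ZETA3 = complex(-0.5, 3 ** 0.5 / 2)   # exp(2*pi*i/3)
ZETA6 = complex(0.5, 3 ** 0.5 / 2)    # exp(pi*i/3)

def count_near_corners(max_disc, eps):
    """For each discriminant Delta with |Delta| <= max_disc, count the
    points tau in the fundamental domain lying within eps of zeta_3 or
    zeta_6; tau runs over primitive reduced forms (a, b, c) with
    b^2 - 4ac = Delta, via tau = (-b + sqrt(Delta)) / (2a)."""
    counts = defaultdict(int)
    a = 1
    while 3 * a * a <= max_disc:              # |Delta| = 4ac - b^2 >= 3a^2
        for b in range(-a + 1, a + 1):        # -a < b <= a
            c = a
            while 4 * a * c - b * b <= max_disc:
                reduced = not (b < 0 and a == c)     # b >= 0 when a == c
                primitive = gcd(gcd(a, abs(b)), c) == 1
                if reduced and primitive:
                    disc = b * b - 4 * a * c
                    tau = (-b + sqrt(disc)) / (2 * a)
                    if min(abs(tau - ZETA3), abs(tau - ZETA6)) < eps:
                        counts[disc] += 1
                c += 1
        a += 1
    return dict(counts)

print(count_near_corners(50, 0.1))   # only tau = zeta_3 itself: {-3: 1}
```

Each encountered $\tau$ contributes to the counter of its own discriminant, exactly as in the strategy described above; no per-discriminant class-group computation is needed.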
Repeating this\nprocedure once again, with a slightly changed $\\varepsilon$, we\nachieve even $|\\Delta|<3 \\cdot 10^5$ in Theorem \\ref{thlowrange}.\nThese remaining cases can now be dealt with directly, for which we use\na \\textsf{PARI}~\\cite{PARI} program to prove Theorem \\ref{thm:habeggerspari}, completing thereby the proof of Theorem~\\ref{thmain}. \n\n\\bigskip\n\nIt is very probable that our argument can be adapted to solve a more general problem: given an algebraic integer~$\\beta$, determine the singular moduli~$\\alpha$ such that ${\\alpha-\\beta}$ is a unit; or at least bound effectively the discriminants of such~$\\alpha$. For instance, one may ask whether~$0$ is the only singular modulus~$\\alpha$ such that ${\\alpha-1}$ is a unit. In the general case, as explained in~\\cite{hab:junit}, this would require lower bounds for elliptic logarithmic forms, but when~$\\beta$ itself is a singular modulus, our argument extends almost without changes. One may go further and obtain an effective version of Theorem~2 from~\\cite{hab:junit}, which is an analogue of Siegel's Finiteness Theorem for\nspecial points.\n\n\nThe famous work of Gross-Zagier and Dorman \\cite{Dorman1988,Gross1985} inspires the following problem: determine all couples $(\\alpha,\\beta)$ of singular moduli such that ${\\alpha-\\beta}$ is a unit; presumably, there is none. As indicated above, when~$\\beta$ is fixed and~$\\alpha$ varying, a version of our argument does the job, but if we let both~$\\alpha$ and~$\\beta$ vary, the problem seems more intricate. Very recently Yingkun Li~\\cite{Li18} made important progress: he proved that ${\\alpha-\\beta}$ is not a unit if the discriminants of~$\\alpha$ and~$\\beta$ are fundamental and coprime. In particular, his result implies the following partial version of our Theorem~\\ref{thmain}: the discriminant of a singular unit must be either non-fundamental or divisible by~$3$. \n\nAnother natural problem is extending our work to $S$-units. 
Recall that, given a finite set~$S$ of prime numbers, a non-zero algebraic number is called an $S$-unit if both its denominator and numerator are composed of prime ideals dividing primes from~$S$. Recently\nHerrero,\nMenares and\nRivera-Letelier announced the proof of finiteness of the set of singular $S$-units (that is, singular moduli that are $S$-units) for any finite set of primes~$S$. However, to the best of our knowledge, their argument is not effective as of now. \n\n\n\\bigskip\n\nFinally, let us discuss an application of Theorem~\\ref{thmain} to\neffective results of Andr\\'e-Oort type. A point\n$(\\alpha_1,\\ldots,\\alpha_n) \\in \\mathbb{C}^n$ is called special if\neach $\\alpha_i$, $i \\in \\{ 1, \\dots, n\\}$, is a singular modulus. Since singular moduli are algebraic integers, the following statement is an immediate consequence of our main result. \n\n\n\n\n\\begin{corollary} For each polynomial ${P}$ in unknowns $X_2,\\ldots,\n X_n$ and coefficients that are algebraic integers in~$\\C$,\n \n the hypersurface defined by \n\\begin{equation*}\nX_1 P(X_1, \\dots, X_n) = 1\n\\end{equation*}\n contains no special points.\n\\end{corollary}\n\nIn particular, $\\alpha_1^{a_1}\\cdots \\alpha_n^{a_n}\\not=1$\nfor all special points $(\\alpha_1,\\ldots,\\alpha_n)$\nand all integers $a_1\\ge 1,\\ldots,a_n \\ge 1$. \nThis corollary exhibits a rather general class of algebraic varieties\nof arbitrary dimension and degree for which the celebrated theorem of\nPila~\\cite{Pila2011a} can be proved effectively and even explicitly.\nIt is complementary to other recent effective results of Andr\\'e-Oort type~\\cite{Bilu2017,Bi18}.\n\n\\paragraph{Plan of the article}\nIn Section~\\ref{sceps} we obtain an explicit version of the\nestimate~\\eqref{ecepsrough}. In Section~\\ref{supper} we obtain an\nupper estimate for the height of a singular unit. In\nSection~\\ref{slower} we obtain explicit versions of the lower estimates~\\eqref{ecolm} and~\\eqref{etriv}. 
In Section~\\ref{stenfourteen} we use all previous results to bound the discriminant of a singular unit as ${|\\Delta|<10^{15}}$. This bound is reduced to $10^{10}$ in Section~\\ref{sec:midrange} and to $3\\cdot10^5$ in Section~\\ref{slowrange}. Finally, in Section~\\ref{sfinal} we show that the discriminant of a singular unit satisfies ${|\\Delta|>3\\cdot10^5}$. \n\n\\paragraph{Convention}\nIn this article we fix, once and for all, an embedding ${\\bar\\Q\\hookrightarrow \\C}$; this means that all algebraic numbers in this article are viewed as elements of~$\\C$.\n\n\\paragraph{Acknowledgments}\nYuri Bilu was partially supported by the University of Basel, the Fields Institute (Toronto), and the Xiamen University. Lars K\u00fchne was supported by the Max-Planck Institute for Mathematics, the Fields Institute, and the Swiss National Science Foundation through an Ambizione grant. We thank Ricardo Menares and Amalia Pizarro for many useful conversations, Florian Luca and Aleksandar Ivic for helpful suggestions, Bill Allombert and Karim Belabas for a \\textsf{PARI} tutorial, and Jean-Louis Nicolas and Cyril Mauvillain for helping to access Robin's thesis~\\cite{Ro83a}. Finally, we thank both anonymous referees for encouraging reports and helpful suggestions. \n\n\n\\section{An estimate for \\texorpdfstring{${\\mathcal{C}}_\\varepsilon(\\Delta)$}{Ceps(Delta)}}\n\\label{sceps}\nLet~$\\Delta$ be a negative integer satisfying ${\\Delta\\equiv 0,1\\bmod 4}$ and \n$$\n{\\mathcal{O}}_\\Delta=\\Z[(\\Delta+\\sqrt\\Delta)\/2] \n$$\nthe imaginary quadratic order of discriminant~$\\Delta$. Then ${\\Delta=Df^2}$, where~$D$ is the discriminant of the imaginary quadratic field ${\\Q(\\sqrt\\Delta)}$ (the ``fundamental discriminant'') and ${f=[{\\mathcal{O}}_D:{\\mathcal{O}}_\\Delta]}$ is the conductor. We denote by ${\\mathcal{C}}(\\Delta)$ the class number of the order~${\\mathcal{O}}_\\Delta$. 
\n\n\n\nUp to $\\C$-isomorphism there exist ${\\mathcal{C}}(\\Delta)$ elliptic curves with CM by~${\\mathcal{O}}_\\Delta$. The $j$-invariants of these curves are called \\textsl{singular moduli} of discriminant~$\\Delta$.\nThe singular moduli of discriminant~$\\Delta$ form a full Galois orbit over~$\\Q$\nof cardinality ${\\mathcal{C}}(\\Delta)$, see \\cite[Proposition~13.2]{Cox}. \n\n\nLet~${\\mathcal{F}}$ be the standard fundamental domain in the Poincar\u00e9 plane, that is, the open hyperbolic triangle with vertices ${\\zeta_3,\\zeta_6,i\\infty}$, together with the geodesics $[i,\\zeta_6]$ and ${[\\zeta_6,i\\infty)}$; here \n$$\n\\zeta_3=e^{2\\pi i\/3}=\\frac{-1+\\sqrt{-3}}2, \\quad \\zeta_6=e^{\\pi i\/3}=\\frac{1+\\sqrt{-3}}2. \n$$\nEvery singular modulus can be uniquely presented as ${j(\\tau)}$, where ${\\tau\\in {\\mathcal{F}}}$. \n\n\nNow fix ${\\varepsilon\\in (0,1\/3]}$ and denote by ${\\mathcal{C}}_\\varepsilon(\\Delta)$ the number of singular moduli of discriminant~$\\Delta$ that can be presented as ${j(\\tau)}$ where ${\\tau\\in {\\mathcal{F}}}$ satisfies\n\\begin{equation}\n\\label{etauzeze}\n\\min \\{|\\tau-\\zeta_3|,|\\tau-\\zeta_6|\\}< \\varepsilon. \n\\end{equation}\nIn this section we bound this quantity.\n\n\nDefine the \\textsl{modified conductor}~${\\tilde f}$ by\n\\begin{equation}\n\\label{etilf}\n{\\tilde f}=\n\\begin{cases}\nf,& D\\equiv 1\\bmod 4,\\\\\n2f, &D\\equiv 0\\bmod 4. \n\\end{cases}\n\\end{equation}\nThen ${\\Delta\/{\\tilde f}^2}$ is a square-free integer. 
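Both decompositions are easy to compute; the sketch below (function names are ours) recovers $D$, $f$ and $\tilde f$ from $\Delta$ and confirms that $\Delta/\tilde f^2$ is square-free:

```python
def squarefree(n):
    """True if no prime square divides n."""
    n = abs(n)
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def decompose(delta):
    """Write delta = D * f**2 with D a fundamental discriminant and
    return (D, f, f_tilde), f_tilde being the modified conductor."""
    assert delta < 0 and delta % 4 in (0, 1)
    f = 1
    for g in range(1, int(abs(delta) ** 0.5) + 2):
        if delta % (g * g) == 0 and (delta // (g * g)) % 4 in (0, 1):
            f = g                      # keep the largest admissible g
    D = delta // (f * f)
    f_tilde = f if D % 4 == 1 else 2 * f
    return D, f, f_tilde

for delta in (-3, -4, -12, -16, -27, -75):
    D, f, ft = decompose(delta)
    assert delta == D * f * f and delta % (ft * ft) == 0
    assert squarefree(delta // (ft * ft))
print(decompose(-12))   # D = -3, f = 2, f_tilde = 2 since -3 = 1 mod 4
```

Taking the largest $g$ with $g^2\mid\Delta$ and $\Delta/g^2\equiv 0,1\bmod 4$ is what forces the quotient $D$ to be fundamental.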
\n\n\n\n\n\\begin{theorem}\n\\label{thceps}\nFor ${\\varepsilon\\in (0,1\/3]}$ we have\n\\begin{equation}\n\\label{enewbound}\n{\\mathcal{C}}_\\varepsilon(\\Delta) \\le F\\left( \\frac{16}3\\frac{\\sigma_1({\\tilde f})}{{\\tilde f}}|\\Delta|^{1\/2}\\varepsilon^2+\\frac83|\\Delta|^{1\/2}\\varepsilon+ 8|\\Delta\/3|^{1\/4}\\sigma_0({\\tilde f})\\varepsilon+4\\right),\n\\end{equation}\nwhere \n\\begin{equation}\n\\label{ecapitalf}\nF=F(\\Delta) = \\max \\bigl\\{2^{\\omega(a)}: a\\le |\\Delta|^{1\/2}\\bigr\\}.\n\\end{equation}\n\\end{theorem}\n\\begin{corollary}\n\\label{cseps}\nIn the set-up of Theorem~\\ref{thceps} assume that ${|\\Delta|\\ge 10^{14}}$. Then \n\\begin{equation}\n\\label{enewboundsimple}\n{\\mathcal{C}}_\\varepsilon(\\Delta) \\le F\\left( 9.83|\\Delta|^{1\/2} \\varepsilon^2\\log\\log(|\\Delta|^{1\/2})+3.605|\\Delta|^{1\/2}\\varepsilon+ 4\\right).\n\\end{equation}\n\\end{corollary}\n\n\n\\subsection{Some lemmas}\nWe need some lemmas. For a prime number~$\\ell$ and a non-zero integer~$n$ we denote by ${\\mathrm{ord}}_\\ell(n)$ the $\\ell$-adic order of~$n$; that is, ${\\ell^{{\\mathrm{ord}}_\\ell(n)}\\,\\|\\,n}$. \n\n\\begin{lemma}\n\\label{lrootsmodle}\nLet~$\\ell$ be a prime number, ${e \\ge 1}$ an integer, and~$\\Delta$ a\nnon-zero integer with ${\\nu={\\mathrm{ord}}_\\ell\\Delta}$.\nThen the set of ${b\\in \\Z}$ satisfying \n${b^2\\equiv \\Delta\\bmod \\ell^e}$ \nis a union of at most~$2$ residue classes modulo ${\\ell^{e-\\lfloor\n \\min\\{e,\\nu\\}\/2\\rfloor}}$ in all cases except when ${\\ell=2}$ and\n${e\\ge 3}$; in this latter case it is a union of most~$4$ such\nclasses. Finally, the set of $b$ equals a single residue class modulo ${\\ell^{e-\\lfloor\n \\min\\{e,\\nu\\}\/2\\rfloor}}$\nif $\\nu\\ge e$. \n\\end{lemma}\n\\begin{proof}\nWe suppose first that ${\\nu=0}$, that is, ${\\ell\\nmid \\Delta}$. 
In this case we have to count the number of elements in the multiplicative group $(\\Z\/\\ell^e\\Z)^\\times$ whose\nsquare is represented by~$\\Delta$. If ${\\ell\\ge 3}$ or ${\\ell^e\\in \\{2,4\\}}$, then $(\\Z\/\\ell^e\\Z)^\\times$ is a cyclic group.\nThen there are at most~$2$ square roots and this implies our claim. If ${\\ell=2}$ and ${e \\ge 3}$, then ${(\\Z\/2^e\\Z)^\\times\\cong \\Z\/2\\Z\\times \\Z\/2^{e-2}\\Z}$, and there are at most~$4$ square roots, as desired.\n\nNow assume that ${\\nu {|\\Delta\/3|^{1\/4}}. \n\\end{cases}\n$$\nHence\n\\begin{align}\n\\sum_{d^2\\mid \\Delta}d\\cdot \\#(I\\cap d^2\\Z) &\\le \\sum_{\\genfrac{}{}{0pt}{}{d\\mid{\\tilde f}}{d\\le|\\Delta\/3|^{1\/4}}}d\\left(\\frac23\\frac{|\\Delta|^{1\/2}}{d^2}\\varepsilon+1\\right)\\nonumber\\\\\n&\\le \\frac23|\\Delta|^{1\/2}\\varepsilon\\sum_{d\\mid{\\tilde f}}d^{-1}+ \\sum_{\\genfrac{}{}{0pt}{}{d\\mid{\\tilde f}}{d\\le |\\Delta\/3|^{1\/4}}}d\\nonumber\\\\\n\\label{estim3}\n&\\le \\frac23\\frac{\\sigma_1({\\tilde f})}{{\\tilde f}}|\\Delta|^{1\/2}\\varepsilon+ |\\Delta\/3|^{1\/4}\\sigma_0({\\tilde f}). \n\\end{align}\nFinally, Lemma~\\ref{ltrivi} implies that\n\\begin{equation}\n\\label{eintini}\n\\#(I\\cap\\Z)\\le \\frac23|\\Delta|^{1\/2}\\varepsilon+1. \n\\end{equation}\nPutting the estimates~\\eqref{estim1},~\\eqref{estim2},~\\eqref{estim3} and~\\eqref{eintini} together, we obtain~\\eqref{enewbound}. \\qed\n\n\\subsection{Proof of Corollary~\\ref{cseps}}\n\nWe need to estimate $\\sigma_0({\\tilde f})$ and $\\sigma_1({\\tilde f})$ in terms of~$|\\Delta|$. The following lemma uses a simple estimate for $\\sigma_0(n)$ due to Nicolas and Robin~\\cite{NR83}. Much sharper estimates can be found in Robin's thesis~\\cite{Ro83a}. 
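The Nicolas--Robin estimate in question, ${\log\sigma_0(n)/\log 2 \le 1.538\,\log n/\log\log n}$ for ${n\ge 3}$, is easy to probe numerically in a small range; the divisor sieve below is only a sanity check, of course, not a proof:

```python
from math import log

def sigma0_table(limit):
    """table[n] = sigma_0(n), the number of divisors of n."""
    table = [0] * (limit + 1)
    for d in range(1, limit + 1):
        for m in range(d, limit + 1, d):
            table[m] += 1
    return table

LIMIT = 20000
tab = sigma0_table(LIMIT)

def ratio(n):
    # (log_2 sigma_0(n)) / (log n / loglog n); the claim is <= 1.538
    return (log(tab[n]) / log(2)) * log(log(n)) / log(n)

worst = max(range(3, LIMIT + 1), key=ratio)
assert ratio(worst) <= 1.538           # the bound holds throughout the range
print(worst, round(ratio(worst), 3))
```

The extremal $n$ in such a range are, as expected, highly composite numbers; the constant $1.538$ is approached only much further out.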
\n\n\\begin{lemma}\n\\label{lsigzersimple}\nFor ${|\\Delta|\\ge 10^{14}}$ we have \n\\begin{align}\n\\label{euppersigzer}\n\\sigma_0({\\tilde f})&\\le |\\Delta|^{0.192},\\\\\n\\label{euppersigone}\n{\\sigma_1({\\tilde f})}\/{\\tilde f} &\\le 1.842\\log\\log(|\\Delta|^{1\/2}). \n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nFor proving~\\eqref{euppersigzer} may assume that ${{\\tilde f}\\ge 16}$, otherwise there is nothing to prove. In~\\cite{NR83} it is proved that for ${n\\ge 3}$ we have\n$$\n\\frac{\\log\\sigma_0(n)}{\\log2} \\le 1.538\\frac{\\log n}{\\log\\log n}.\n$$\nThe function ${x\\mapsto (\\log x)\/(\\log\\log x)}$ is increasing for ${x\\ge 16}$. Since \n$$\n|\\Delta|\\ge 10^{14}, \\qquad 16\\le {\\tilde f}\\le |\\Delta|^{1\/2},\n$$ \nthis gives\n\\begin{align*}\n\\log\\sigma_0({\\tilde f}) \n&\\le 1.538\\log 2\\frac{\\log(|\\Delta|^{1\/2})}{\\log\\log(|\\Delta|^{1\/2})}\\\\\n&\\le \\frac{1.538}2\\log 2\\frac{\\log|\\Delta|}{\\log\\log(10^{7})}\\\\\n&<0.192\\log|\\Delta|,\n\\end{align*}\nas wanted. \n\nFor proving~\\eqref{euppersigone} we use the estimate \n${\\sigma_1(n)\\le 1.842n\\log\\log n}$\nwhich holds for ${n\\ge 121}$, see \\cite[Theorem~1.3]{AFJ07}. This\nproves~\\eqref{euppersigone} for ${{\\tilde f}\\ge 121}$. 
For ${{\\tilde f}\\le 120}$\none can check directly that ${\\sigma_1({\\tilde f})\/{\\tilde f}\\le 3}$ so that inequality~\\eqref{euppersigone} is also\ntrue in this case.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Corollary~\\ref{cseps}]\nIf ${|\\Delta|\\ge 10^{14}}$ then Lemma~\\ref{lsigzersimple} implies that \n\\begin{align*}\n&8\\left|\\frac\\Delta3\\right|^{1\/4}\\sigma_0({\\tilde f}) \\le \\frac8{3^{1\/4}}|\\Delta|^{0.442}\\le \\frac8{3^{1\/4}\\cdot10^{0.812}}|\\Delta|^{1\/2}\\le 0.938|\\Delta|^{1\/2},\\\\\n&\\frac{16}3\\frac{\\sigma_1({\\tilde f})}{{\\tilde f}} \\le \\frac{16}3\\cdot 1.842 \\log\\log(|\\Delta|^{1\/2})\\le9.83 \\log\\log(|\\Delta|^{1\/2}).\n\\end{align*}\nSubstituting all this to~\\eqref{enewbound}, we obtain~\\eqref{enewboundsimple}. \n\\end{proof}\n\n\n\n\\section{An upper bound for the height of a singular unit}\n\\label{supper}\n\nIn this section we obtain a fully explicit version of estimate~\\eqref{eupperheight}.\nWe use the notation ${\\mathcal{C}}(\\Delta)$, ${\\mathcal{C}}_\\varepsilon(\\Delta)$, ${\\mathcal{F}}$, $\\zeta_3$, $\\zeta_6$ introduced in Section~\\ref{sceps}. \n\nLet~$\\alpha$ be a complex algebraic number of degree~$m$ whose minimal polynomial over~$\\Z$ is \n$$\nP(x)=a_mx^m+\\cdots+a_0=a_m(x-\\alpha_1)\\cdots (x-\\alpha_m) \\in \\Z[x].\n$$\nHere ${\\gcd(a_0,a_1, \\ldots, a_m)=1}$ and ${\\alpha_1, \\ldots, \\alpha_m\\in \\C}$ are the conjugates of~$\\alpha$ over~$\\Q$. Then the height of~$\\alpha$ is defined by\n$$\n{\\mathrm{h}}(\\alpha) =\\frac1m\\left(\\log|a_m|+\\sum_{k=1}^m \\log^+|\\alpha_k|\\right), \n$$ \nwhere ${\\log^+(\\cdot)=\\log\\max\\{1,\\cdot\\}}$. \nIf~$\\alpha$ is an algebraic integer then \n$$\n{\\mathrm{h}}(\\alpha) =\\frac1m\\sum_{k=1}^m \\log^+|\\alpha_k|. \n$$ \nIt is known that ${{\\mathrm{h}}(\\alpha)={\\mathrm{h}}(\\alpha^{-1})}$ when ${\\alpha\\ne 0}$. 
\n\n\\begin{theorem}\n\\label{theceps}\nLet~$\\alpha$ be a singular unit of discriminant~$\\Delta$, and~$\\varepsilon$ a real number satisfying ${0<\\varepsilon \\le 4\\cdot10^{-3}}$. \nThen \n\\begin{equation}\n\\label{eupperheightbis}\n{\\mathrm{h}}(\\alpha)\\le 3\\frac{{\\mathcal{C}}_\\varepsilon(\\Delta)}{{\\mathcal{C}}(\\Delta)}\\log|\\Delta|+3\\log(\\varepsilon^{-1})-10.66. \n\\end{equation}\n\\end{theorem}\n\nCombining this with Corollary~\\ref{cseps} and optimizing~$\\varepsilon$, we obtain the following consequence. \n\n\\begin{corollary}\n\\label{cupper}\nIn the set-up of Theorem~\\ref{theceps} assume that ${|\\Delta|\\ge 10^{14}}$. Then\n\\begin{align}\n\\label{eupdel}\n{\\mathrm{h}}(\\alpha)&\\le \\frac{12A}{{\\mathcal{C}}(\\Delta)}+3\\log\\frac{A|\\Delta|^{1\/2}}{{\\mathcal{C}}(\\Delta)}-3.77,\n\\end{align}\nwhere ${A= F\\log|\\Delta|}$ and~$F$ is defined in~\\eqref{ecapitalf}. \n\\end{corollary}\n\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{theceps}}\n\nWe start from some simple lemmas.\n\n\n\\begin{lemma}\n\\label{lanal}\nFor ${z\\in {\\mathcal{F}}}$ we have \n$$\n|j(z)|\\ge 42700\\bigl(\\min\\{|z-\\zeta_3|,|z-\\zeta_6|,4\\cdot10^{-3}\\}\\bigr)^3. \n$$\n\\end{lemma}\n\n\\begin{proof}\nThis is an easy modification of Proposition~2.2 from~\\cite{BLP16}; just replace therein $10^{-3}$ by ${4\\cdot10^{-3}}$. \n\\end{proof}\n\nIn the next lemma we use the notation $T_\\Delta$ and ${\\tau(a,b,c)}$ introduced before Lemma~\\ref{lgauss}. \n\n\\begin{lemma}\n\\label{lliouv}\nAssume that ${\\Delta\\ne -3}$. Let ${\\tau=\\tau(a,b,c)}$, where ${(a,b,c)\\in T_\\Delta}$. Let~$\\zeta$ be one of the numbers~$\\zeta_3$ or~$\\zeta_6$. \nThen\n$$\n|\\tau-\\zeta|\\ge \\frac{\\sqrt3}{4|\\Delta|}. 
\n$$\n\\end{lemma}\n\n\\begin{proof}\nWe have\n$$\n|\\tau-\\zeta|\\ge|\\Im\\tau-\\Im\\zeta|= \\left|\\frac{\\sqrt{|\\Delta|}}{2a}-\\frac{\\sqrt3}{2}\\right|= \\frac{\\bigl||\\Delta|-3a^2\\bigr|}{2a(\\sqrt{|\\Delta|}+a\\sqrt3)}.\n$$\nSince ${\\Delta\\ne -3}$ we have ${\\Delta\\ne -3a^2}$, see item~\\ref{iadelta} of Lemma~\\ref{lgauss}. Hence\n$$\n|\\tau-\\zeta|\\ge \\frac{1}{2a(\\sqrt{|\\Delta|}+a\\sqrt3)}\\ge \\frac{\\sqrt3}{4|\\Delta|}, \n$$\nthe last inequality being again by item~\\ref{iadelta} of Lemma~\\ref{lgauss}. \n\\end{proof}\n\nNow we are ready to prove Theorem~\\ref{theceps}. \n\n\\begin{proof}[Proof of Theorem~\\ref{theceps}]\nLet ${\\alpha=\\alpha_1, \\alpha_2, \\ldots, \\alpha_m\\in \\C}$ be the conjugates of~$\\alpha$ over~$\\Q$. Then ${m={\\mathcal{C}}(\\Delta)}$ and ${\\alpha_1, \\ldots, \\alpha_m}$ is the full list of singular moduli of discriminant~$\\Delta$. Write them as ${j(\\tau_1), \\ldots, j(\\tau_m)}$, where ${\\tau_1, \\ldots, \\tau_m\\in {\\mathcal{F}}}$. \n\nSince~$\\alpha$ is a unit, we have \n$$\n{\\mathrm{h}}(\\alpha)={\\mathrm{h}}(\\alpha^{-1}) = \\frac1m \\sum_{k=1}^m\\log^+|\\alpha_k^{-1}|,\n$$\nHence \n{\\footnotesize\n\\begin{align}\n{\\mathrm{h}}(\\alpha) &= \\frac{1}{{\\mathcal{C}}(\\Delta)}\\sum_{k=1}^m\\log^+|j(\\tau_k)^{-1}|\\nonumber\\\\\n\\label{etwosums}\n&= \\frac{1}{{\\mathcal{C}}(\\Delta)}\\left(\\sum_{\\genfrac{}{}{0pt}{}{1\\le k\\le m}{\\min\\{|\\tau_k-\\zeta_3|, |\\tau_k-\\zeta_6|\\}< \\varepsilon}}+\\sum_{\\genfrac{}{}{0pt}{}{1\\le k\\le m}{\\min\\{|\\tau_k-\\zeta_3|, |\\tau_k-\\zeta_6|\\}\\ge \\varepsilon}}\\right)\\log^+|j(\\tau_k)^{-1}|.\n\\end{align}\n}%\nWe estimate each of the two sums separately.\n\nSince ${\\varepsilon\\le 4\\cdot10^{-3}}$, Lemma~\\ref{lanal} implies that each term in the second sum satisfies \n$$\n\\log^+|j(\\tau_k)^{-1}| \\le 3\\log(\\varepsilon^{-1})-\\log42700 \\le 3\\log(\\varepsilon^{-1})-10.66. 
\n$$\nHence\n$$\n\\sum_{\\genfrac{}{}{0pt}{}{1\\le k\\le m}{\\min\\{|\\tau_k-\\zeta_3|, |\\tau_k-\\zeta_6|\\}\\ge \\varepsilon}}\\log^+|j(\\tau_k)^{-1}|\\le ({\\mathcal{C}}(\\Delta)-{\\mathcal{C}}_\\varepsilon(\\Delta))\\bigl(3\\log(\\varepsilon^{-1})-10.66\\bigr). \n$$\nSince ${\\varepsilon\\le 4\\cdot10^{-3}}$ we have\n ${3\\log(\\varepsilon^{-1})>10.66}$, which implies that \n\\begin{equation}\n\\label{esecondsum}\n\\sum_{\\genfrac{}{}{0pt}{}{1\\le k\\le m}{\\min\\{|\\tau_k-\\zeta_3|, |\\tau_k-\\zeta_6|\\}\\ge \\varepsilon}}\\log^+|j(\\tau_k)^{-1}|\\le {\\mathcal{C}}(\\Delta)\\bigl(3\\log(\\varepsilon^{-1})-10.66\\bigr). \n\\end{equation}\nAs for the first sum, Lemmas~\\ref{lanal} and~\\ref{lliouv} imply that each term in this sum satisfies\n$$\n\\log^+|j(\\tau_k)^{-1}| \\le \\max\\left\\{0, 3\\log\\frac{4|\\Delta|}{\\sqrt3}-\\log42700\\right\\} \\le 3\\log|\\Delta|. \n$$ \nNote that we may use here Lemma~3.4 because the only singular modulus of discriminant $-3$ is~$0$, which is not a unit. \n\nSince the first sum has ${\\mathcal{C}}_\\varepsilon(\\Delta)$ terms, this implies the estimate \n\\begin{equation}\n\\label{efirstsum}\n\\sum_{\\genfrac{}{}{0pt}{}{1\\le k\\le m}{\\min\\{|\\tau_k-\\zeta_3|, |\\tau_k-\\zeta_6|\\}< \\varepsilon}}\\log^+|j(\\tau_k)^{-1}|\\le 3{\\mathcal{C}}_\\varepsilon(\\Delta)\\log|\\Delta|. \n\\end{equation}\nSubstituting~\\eqref{esecondsum} and~\\eqref{efirstsum} into~\\eqref{etwosums}, we obtain~\\eqref{eupperheightbis}.\n\\end{proof}\n\n\n\\subsection{Proof of Corollary~\\ref{cupper}}\n\nTo prove the corollary we need a lower bound for the quantity~$F$ defined in Theorem~\\ref{thceps} and an upper bound for the class number ${\\mathcal{C}}(\\Delta)$. \n\n\\begin{lemma}\n\\label{lf}\nAssume that ${|\\Delta|\\ge 10^{14}}$. Then ${F\\ge |\\Delta|^{0.34\/\\log\\log(|\\Delta|^{1\/2})}}$ and ${F\\ge 18.54\\log\\log(|\\Delta|^{1\/2})}$. 
\n\\end{lemma}\n\n\\begin{proof}\nDefine, as usual \n\\begin{equation}\n\\label{ethetapi}\n\\vartheta(x)=\\sum_{p\\le x}\\log p, \\qquad \\pi(x)=\\sum_{p\\le x}1. \n\\end{equation}\nThen \n\\begin{align}\n\\vartheta(x) & \\le 1.017 x &&(x>0), \\nonumber\\\\\n\\label{elowerpi}\n\\pi(x) &\\ge \\frac{x}{\\log x} && (x\\ge 17),\n\\end{align} \nsee \\cite{RS62}, Theorem~9 on page~71 and Corollary~1 after Theorem~2 on page~69. Estimate~\\eqref{elowerpi} implies that\n\\begin{equation}\n\\label{elowerpibis}\n\\pi(x) \\ge 0.99995\\frac{x}{\\log x} \\qquad (x\\ge 13).\n\\end{equation}\nSetting here \n$$\nx=\\frac{\\log(|\\Delta|^{1\/2})}{1.017} , \\qquad N =\\prod_{p\\le x}p,\n$$\nwe obtain ${N\\le |\\Delta|^{1\/2}}$ and \n$$\n\\omega(N) =\\pi(x) \\ge \\frac{0.99995\\log(|\\Delta|^{1\/2})}{1.017\\log\\log(|\\Delta|^{1\/2})}. \n$$\nNote that ${x\\ge \\bigl(\\log(10^{7})\\bigr)\/1.017>15}$, so we are allowed to use~\\eqref{elowerpibis}. \nWe obtain \n$$\nF\\ge 2^{\\omega(N)} \\ge |\\Delta|^{\\frac{0.99995\\log2}{2\\cdot1.017\\log\\log(|\\Delta|^{1\/2})}}\\ge |\\Delta|^{0.34\/\\log\\log(|\\Delta|^{1\/2})},\n$$\nproving the first estimate.\n\n\nTo prove the second estimate, we deduce from the first estimate that \n\\begin{equation}\n\\label{elog-log}\n\\log F-\\log\\log\\log(|\\Delta|^{1\/2})\\ge 0.68 \\frac{u}{\\log u}-\\log\\log u,\n\\end{equation} \nwhere we set ${u=\\log(|\\Delta|^{1\/2})}$. The right-hand side of~\\eqref{elog-log}, viewed as a function in~$u$, is increasing for ${u\\ge \\log(10^{7})}$. Hence \n$$\n\\log F-\\log\\log\\log(|\\Delta|^{1\/2})\\ge 0.68 \\frac{\\log(10^{7})}{\\log \\log(10^{7})}-\\log\\log \\log(10^{7}) \\ge 2.92,\n$$\nand ${F\\ge e^{2.92}\\log\\log(|\\Delta|^{1\/2}) \\ge 18.54 \\log\\log(|\\Delta|^{1\/2})}$. 
\n\\end{proof}\n\n\\begin{lemma}\n\\label{lcd}\nFor ${\\Delta\\ne -3,-4}$ we have \n$$\n{\\mathcal{C}}(\\Delta) \\le \\pi^{-1}|\\Delta|^{1\/2}(2+\\log|\\Delta|).\n$$ \n\\end{lemma}\n\n\\begin{proof}\nThis follows from Theorems~10.1 and~14.3 in \\cite[Chapter~12]{Hu82}. Note that in~\\cite{Hu82} the right-hand side has an extra factor $\\omega\/2$, where~$\\omega$ is the number of roots of unity in the imaginary quadratic order of discriminant~$\\Delta$. Since we assume that ${\\Delta \\ne -3,-4 }$, we have ${\\omega=2}$, so we may omit this factor. \n\\end{proof}\n\n\n\\begin{proof}[Proof of Corollary~\\ref{cupper}]\nSubstituting the estimate for ${\\mathcal{C}}_\\varepsilon(\\Delta)$ from~\\eqref{enewboundsimple} into~\\eqref{eupperheightbis}, we obtain the estimate\n\\begin{align*}\n{\\mathrm{h}}(\\alpha)&\\le 3A\\frac{9.83|\\Delta|^{1\/2} \\varepsilon^2\\log\\log(|\\Delta|^{1\/2})+3.605|\\Delta|^{1\/2}\\varepsilon+ 4}{{\\mathcal{C}}(\\Delta)}\n\\\\&\\hphantom{\\le}\n+3\\log(\\varepsilon^{-1})-10.66\n\\end{align*}\nwith ${A=F\\log|\\Delta|}$. 
Specifying \n$$\n\\varepsilon=0.27\\frac{{\\mathcal{C}}(\\Delta)}{A|\\Delta|^{1\/2}}\n$$\n(this is a nearly optimal value, and it satisfies ${\\varepsilon\\le 4\\cdot10^{-3}}$ as verified below), we obtain, using Lemmas~\\ref{lf} and~\\ref{lcd},\n\\begin{align*}\n{\\mathrm{h}}(\\alpha)&\\le 3\\cdot 9.83 \\cdot (0.27)^2\\frac{\\log\\log(|\\Delta|^{1\/2})}F\\frac{{\\mathcal{C}}(\\Delta)}{|\\Delta|^{1\/2}\\log|\\Delta|}+3\\cdot3.605\\cdot0.27\\\\ \n&\\hphantom{\\le}+\\frac{12A}{{\\mathcal{C}}(\\Delta)} +3\\log\\frac{A|\\Delta|^{1\/2}}{{\\mathcal{C}}(\\Delta)}-3\\log0.27-10.66\\\\\n&\\le \\frac{12A}{{\\mathcal{C}}(\\Delta)} +3\\log\\frac{A|\\Delta|^{1\/2}}{{\\mathcal{C}}(\\Delta)}+ \\frac{3\\cdot 9.83 \\cdot (0.27)^2\\cdot0.34}{18.54}+3\\cdot3.605\\cdot0.27\\\\ \n&\\hphantom{\\le}-3\\log0.27-10.66\\\\\n&\\le \\frac{12A}{{\\mathcal{C}}(\\Delta)} +3\\log\\frac{A|\\Delta|^{1\/2}}{{\\mathcal{C}}(\\Delta)}-3.77,\n\\end{align*}\nas wanted. \n\n \nWe only have to verify that ${\\varepsilon\\le 4\\cdot10^{-3}}$. We have\n${F\\ge 256}$ when ${|\\Delta|\\ge 10^{14}}$. Using Lemma~\\ref{lcd}, we obtain \n\\begin{align*}\n\\varepsilon=\\frac{0.27{\\mathcal{C}}(\\Delta)}{|\\Delta|^{1\/2}\\log |\\Delta|}\\frac1F\n\\le 0.27\\pi^{-1}\\frac{2+\\log(10^{14})}{\\log(10^{14})}\\cdot \\frac1{256}\n<4\\cdot 10^{-4}.\n\\end{align*}\nThe proof is complete.\n\\end{proof}\n\n\n\n\n\\section{Lower bounds for the height of a singular modulus}\n\\label{slower}\n\nNow we establish explicit lower bounds of the form~\\eqref{ecolm} and~\\eqref{etriv}. \n\n\\subsection{The ``easy'' bound}\n\nWe start by proving a bound of the form~\\eqref{etriv}. \n\n\\begin{proposition}\n\\label{ptriv}\nLet~$\\alpha$ be a singular modulus of discriminant~$\\Delta$. \nAssume that ${|\\Delta|\\ge 16}$. Then \n\\begin{equation}\n\\label{etrivbis}\n{\\mathrm{h}}(\\alpha) \\ge \\frac{\\pi|\\Delta|^{1\/2}-0.01}{{\\mathcal{C}}(\\Delta)}.\n\\end{equation}\n\\end{proposition}\n\n\nWe need a simple lemma. 
\n\n\\begin{lemma}\n\\label{lttsn}\nFor ${z\\in {\\mathcal{F}}}$ with imaginary part~$y$ we have \n$$\n\\bigl||j(z)|-e^{2\\pi y}\\bigr|\\le 2079.\n$$ \nIf ${y\\ge 2}$ then we also have ${|j(z)|\\ge 0.992 e^{2\\pi y}}$. \n\\end{lemma}\n\n\\begin{proof}\nThe first statement is Lemma~1 of~\\cite{BMZ13}, and the second one is an immediate consequence. \n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Proposition~\\ref{ptriv}]\nOne of the conjugates of~$\\alpha$ over~$\\Q$ is equal to ${j((b+\\sqrt\\Delta)\/2)}$, with ${b=1}$ for~$\\Delta$ odd, and ${b=0}$ for~$\\Delta$ even; it corresponds to the element ${(1,b,(-\\Delta+b^2)\/4)}$ of the set $T_\\Delta$. \nHence\n$$\n{\\mathrm{h}}(\\alpha) \\ge \\frac{\\log|j((b+\\sqrt\\Delta)\/2)|}{{\\mathcal{C}}(\\Delta)}.\n$$\nUsing Lemma~\\ref{lttsn}, we obtain\n$$\n\\log|j((b+\\sqrt\\Delta)\/2)|\\ge \\pi|\\Delta|^{1\/2}+\\log 0.992 \\ge \\pi|\\Delta|^{1\/2}-0.01.\n$$\nWhence the result. \n\\end{proof}\n\n\\subsection{The ``hard'' bound}\n\n\\label{sshard}\n\nWe are left with bound~\\eqref{ecolm}. We are going to prove the following. \n\n\\begin{proposition}\n \\label{phlb2}\nLet~$\\alpha$ be a singular modulus of discriminant~$\\Delta$.\nThen\n\\begin{equation}\n\\label{ecolmbis}\n{\\mathrm{h}}(\\alpha) \\ge \\frac{3}{\\sqrt{5}}\\log|\\Delta| -9.79. \n\\end{equation}\n\\end{proposition}\n\n\nThe proof of Proposition~\\ref{phlb2} relies on the\nfact that it is possible to evaluate the Faltings height of an elliptic curve\nwith complex multiplication precisely, due to the work of Colmez~\\cite{Colmez} and Nakkajima-Taguchi~\\cite{NT}; for an exact statement see \\cite[Lemma 4.1]{Ha10}. \n\nLet~$E$ be an elliptic curve with CM by an order of discriminant~$\\Delta$. \nWe let ${\\mathrm{h}}_F(E)$ denote the stable Faltings height of~$E$ (using Deligne's normalization~\\cite{De85}). 
\nThe above-mentioned explicit formula for ${\\mathrm{h}}_F(E)$ is used in~\\cite{HJM:6} to obtain the lower bound \n\\begin{equation}\n\\label{efaltlowerold}\n{\\mathrm{h}}_F(E) \\ge \\frac{1}{4\\sqrt5} \\log|\\Delta| - 5.93,\n\\end{equation}\nsee Lemma 14(ii) therein. Unfortunately, this bound is numerically too weak for our purposes.\n\nProposition~\\ref{phlb2} will be deduced from the following numerical refinement of~\\eqref{efaltlowerold}.\n\n\\begin{proposition}\n\\label{pfaltlower}\nLet~$E$ be an elliptic curve with CM by an order of discriminant~$\\Delta$.\nThen \n\\begin{equation}\n\\label{efaltlowernew}\n{\\mathrm{h}}_F(E) \\ge \\frac{1}{4\\sqrt5} \\log|\\Delta| -\\gamma -\\frac{\\log(2\\pi)}{2}-\\left(\\frac1{2\\sqrt5}-\\frac16\\right)\\log2,\n\\end{equation}\nwhere ${\\gamma=0.57721\\ldots}$ is the Euler constant. \n\\end{proposition}\n\n\nLet us first show how Proposition~\\ref{pfaltlower} implies Proposition~\\ref{phlb2}. \n\n\n\\begin{proof}[Proof of Proposition~\\ref{phlb2} (assuming Proposition~\\ref{pfaltlower})]\nLet~$E$ be an elliptic curve with ${j(E)=\\alpha}$. \nWe only need to relate ${\\mathrm{h}}_F(E)$ to ${\\mathrm{h}}(j(E))$. For this purpose we use Lemma~7.9 of Gaudron and R\\'emond~\\cite{GaudronRemond:periodes}\\footnote{The reader should be warned that our ${\\mathrm{h}}_F(E)$ is denoted ${\\mathrm{h}}(E)$ in~\\cite{GaudronRemond:periodes}.}.\nIn our\nnotation they\nshow that \n\\begin{equation}\n\\label{egr}\n{\\mathrm{h}}_F(E)\\le {\\mathrm{h}}(j(E))\/12 - 0.72 \n\\end{equation}\nA quick calculation yields our\nclaim. 
\n\\end{proof}\n\n\n\nTo prove Proposition~\\ref{pfaltlower} we need a technical lemma.\nSet \n\\begin{equation}\n\\label{elambda}\n\\lambda=\\frac12-\\frac1{2\\sqrt5},\n\\end{equation}\nand define the additive arithmetical functions $\\beta(n)$ and $\\delta(n)$ by \n\\begin{equation}\n\\label{ebetadelta}\n\\beta(p^k) =\\frac{\\log p}{p+1}\\frac{1-p^{-k}}{1-p^{-1}}, \\quad \\beta(n)=\\sum_{p^k\\|n}\\beta(p^k), \\quad \\delta(n)= \\lambda\\log n -\\beta(n).\n\\end{equation}\n\n\\begin{lemma}\n\\label{ltechn} \nFor every positive integer~$n$ we have \n$$\n\\delta(n) \\ge\\delta(2)= \\left(\\frac16-\\frac1{2\\sqrt5}\\right)\\log2. \n$$\n\\end{lemma}\n\n\\begin{proof}\nSince ${1\/3>\\lambda>1\/4}$, we have ${\\delta(2)<0}$ and ${\\delta(p)>0}$ for all primes ${p\\ge 3}$. Also, for ${k\\ge 1}$ and any prime~$p$ we have \n$$\n\\delta(p^{k+1})-\\delta(p^k)= \\left(\\lambda-\\frac1{p^k(p+1)}\\right)\\log p >0.\n$$\nSince ${\\delta(4)>0}$, this proves that ${\\delta(p^k)>0}$ for every prime power ${p^k\\ne 2}$, whence the result. \n\\end{proof}\n\nProposition~\\ref{pfaltlower} is an immediate consequence of Lemma~\\ref{ltechn} and the following statement. \n\n\\begin{proposition}\n\\label{pfaltlowerf}\nIn the set-up of Proposition~\\ref{pfaltlower} we have\n\\begin{equation}\n\\label{efaltlowernewf}\n{\\mathrm{h}}_F(E) \\ge \\frac{1}{4\\sqrt5} \\log|\\Delta| + \\lambda \\log f- \\beta(f) -\\gamma -\\frac{\\log(2\\pi)}{2}.\n\\end{equation}\n\\end{proposition}\nSince \n$$\n\\lambda \\log f- \\beta(f)\\ge-\\left(\\frac1{2\\sqrt5}-\\frac16\\right)\\log2\n$$\nby Lemma~\\ref{ltechn}, this implies Proposition~\\ref{pfaltlower}. \n\n\\begin{proof}[Proof of Proposition~\\ref{pfaltlowerf}]\nWrite ${\\Delta=Df^2}$ with~$D$ the fundamental discriminant and~$f$ the conductor. 
Define \n$$\ne_f(p) = \\frac{1-\\chi(p)}{p-\\chi(p)}\\frac{1-p^{-{\\mathrm{ord}}_p(f)}}{1-p^{-1}}, \\qquad c(f) = \\frac 12 \\left(\\sum_{p\\mid f} e_f(p)\\log p\\right),\n$$\nwhere ${\\chi(\\cdot)=(D\/\\cdot)}$ is Kronecker's symbol.\n\n\nIn the proof of Lemma~14 of~\\cite{HJM:6}\\footnote{Note that our~$D$ is written~$\\Delta$ in~\\cite{HJM:6}.},\nthe stable Faltings height of~$E$ is estimated as \n\\begin{align*}\n{\\mathrm{h}}_F(E) &\\ge \\frac{1}{4\\sqrt5}\\log|D| + \\frac 12 \\log f\n-c(f)\n-\\gamma -\\frac{\\log(2\\pi)}{2},\\\\\n&=\\frac{1}{4\\sqrt5}\\log|\\Delta| + \\lambda \\log f\n-c(f)\n-\\gamma -\\frac{\\log(2\\pi)}{2}.\n\\end{align*}\nThus, to establish~\\eqref{efaltlowernewf}, we only have to prove that ${c(f)\\le \\beta(f)}$. \nWe have \n$$\n\\frac{1-\\chi(p)}{p-\\chi(p)} = \n\\begin{cases}\n0, &\\chi(p)=1,\\\\\n1\/p, &\\chi(p)=0,\\\\\n 2\/(p+1), &\\chi(p)=-1. \n\\end{cases}\n$$\nHence\n$$\n\\frac{1-\\chi(p)}{p-\\chi(p)} \\le \\frac2{p+1}\n$$\nin any case. This implies that ${c(f)\\le \\beta(f)}$.\nThe proposition is proved. \n\\end{proof}\n\n\n\\section{The estimate \\texorpdfstring{${|\\Delta|< 10^{15}}$}{|Delta|<10to15}}\n\\label{stenfourteen}\n\n\n\nIn this section we obtain the first explicit upper bound for the\ndiscriminant of a singular unit. \n\n\n\\begin{theorem}\n \\label{thhighrange}\nLet~$\\Delta$ be the discriminant of a singular unit. Then\n ${|\\Delta|<10^{15}}$. \n\\end{theorem}\n\n{\\sloppy\n\nThroughout this section~$\\Delta$ is the discriminant of a singular unit~$\\alpha$, and we assume that ${X=|\\Delta|\\ge 10^{15}}$, as otherwise there is nothing to prove. \nOur principal tools will be the upper estimate~\\eqref{eupdel}\nand the lower estimates~\\eqref{etrivbis},~\\eqref{ecolmbis}. 
We reproduce them here for convenience:\n\\begin{align}\n\\label{eupx}\n{\\mathrm{h}}(\\alpha)&\\le \\frac{12A}{{\\mathcal{C}}(\\Delta)}+3\\log\\frac{AX^{1\/2}}{{\\mathcal{C}}(\\Delta)}-3.77,\\\\\n\\label{etrivx}\n{\\mathrm{h}}(\\alpha)&\\ge \\frac{\\pi X^{1\/2}-0.01}{{\\mathcal{C}}(\\Delta)},\\\\\n\\label{ecolmx}\n{\\mathrm{h}}(\\alpha)&\\ge \\frac{3}{\\sqrt5}\\log X-9.79.\n\\end{align}\nNote that our assumption ${X\\ge 10^{15}}$ implies that the right-hand side of~\\eqref{ecolmx} is positive.\n\n}\n\n\n\\subsection{The main inequality}\n\\label{ssinequality}\n\nRecall that ${A= F\\log X}$. \nMinding~$0.01$ in~\\eqref{etrivx} we deduce from~\\eqref{eupx},~\\eqref{etrivx} and~\\eqref{ecolmx} the inequality\n$$\n\\frac{12A}{{\\mathcal{C}}(\\Delta)}+3\\log\\frac{AX^{1\/2}}{{\\mathcal{C}}(\\Delta)}-3.76 \\ge \\max\\left\\{\\frac{\\pi X^{1\/2}}{{\\mathcal{C}}(\\Delta)}, \\frac{3}{\\sqrt5}\\log X-9.78\\right\\}. \n$$\nDenoting\n\\begin{equation}\n\\label{ey}\nY=\\max\\left\\{\\frac{\\pi X^{1\/2}}{{\\mathcal{C}}(\\Delta)}, \\frac{3}{\\sqrt5}\\log X-9.78\\right\\}, \n\\end{equation}\nwe re-write this as \n\\begin{equation}\n\\label{enotyet}\n\\frac{12A\/{\\mathcal{C}}(\\Delta)}{Y}+\\frac{3\\log A-3.76}Y+ \\frac{\\log(X^{1\/2}\/{\\mathcal{C}}(\\Delta))}{Y} \\ge 1. \n\\end{equation}\nNote that ${3\\log A-3.76>0}$, because ${A\\ge \\log X\\ge \\log(10^{15}) >30}$. Hence we may replace~$Y$ by ${\\frac{3}{\\sqrt5}\\log X-9.78}$ in the middle term of the left-hand side in~\\eqref{enotyet}. Similarly, in the first term we may replace~$Y$ by ${\\pi X^{1\/2}\/{\\mathcal{C}}(\\Delta)}$, and in the third term we may replace ${X^{1\/2}\/{\\mathcal{C}}(\\Delta)}$ by ${\\pi^{-1}Y}$. We obtain \n\\begin{equation}\n\\label{einequality}\n12\\pi^{-1} AX^{-1\/2}+ \\frac{3\\log A-3.76}{\\frac{3}{\\sqrt5}\\log X-9.78} + 3\\frac{\\log(\\pi^{-1}Y)}{Y}\\ge 1. 
\n\\end{equation}\n\n\nTo show that~\\eqref{einequality} is not possible for ${X\\ge 10^{15}}$, we will bound from above each of the three terms in its left-hand side. To begin with, we bound~$A$. \n\n\n\\subsection{Bounding~$F$ and~$A$}\n\nRecall that ${F=\\max\\{2^{\\omega(a)}: a\\le X^{1\/2}\\}}$ and ${A=F\\log X}$. \n\nLet \n${N_1 = 2\\cdot3\\cdot5\\cdots 1129}$ be the product of the first $189$ prime numbers. Define the real number~$c_1$ from \n\\begin{align*}\n\\omega(N_1)&=\\frac{\\log N_1}{\\log\\log N_1 -c_1}.\n\\end{align*}\nA calculation shows that ${c_1 < 1.1713142}$. \nRobin \\cite[Th\u00e9or\u00e8me~13]{Ro83} proved that\n\\begin{equation*}\n\\omega(n) \\le \\frac{\\log n}{\\log\\log n-c_1} \n\\end{equation*}\nfor ${n\\ge 26}$. \nThis implies that \n\\begin{align}\n\\label{eboundf}\n\\frac{\\log F}{\\log 2} &\\le \\frac12\\frac{\\log X}{\\log\\log X-c_1-\\log2} ,\\\\\n\\label{ebounda}\n\\log A &\\le \\frac{\\log2}2\\frac{\\log X}{\\log\\log X-c_1-\\log2}+\\log\\log X. \n\\end{align}\nIndeed, the function \n$$\ng(x)= \\frac{\\log x}{\\log\\log x-c_1}\n$$\nis strictly increasing for ${x\\ge 6500}$ and ${g(6500)>8}$. \nIf ${a\\le X^{1\/2}}$ then either ${a\\le 6500}$ in which case \n${\\omega(a) \\le 5A>0$ we remark that\n\\begin{equation}\n\\label{eremark}\nB\\log B - A \\log A \\le (B-A)(1+\\log B).\n\\end{equation}\nWhen ${A\\ge 2}$ estimate~\\eqref{esqrtab} follows immediately\nfrom~\\eqref{esqrt} and~\\eqref{eremark}.\n\nLet us assume that ${A<2}$, hence $\\lfloor A\\rfloor$ is $0$ or $1$\nand $S(\\lfloor A\\rfloor)=0$ or $1$, respectively. 
For $B\\ge 2$ we find\n$$\n\\sum_{A 0}$, as ${X\\ge 10^{5}}$, \nto get\n\\begin{equation}\n \\label{eq:1lowerbound}\\begin{aligned}\n 1&\\le \\frac{3}{\\pi}(8\\varepsilon^2 + 0.811\\varepsilon) (\\log X)^2\n +\\frac{3}{\\pi}(28\\varepsilon^2 + 2.829\\varepsilon) \\log X\\\\\n &\\hphantom{\\le}+ \\frac{267}{\\pi}\\varepsilon\\frac{\\log X}{X^{1\/8}} \n +\\frac{93.18}{\\pi X^{1\/4}}\n +\n \\frac{3\\log(\\varepsilon^{-1})-10.65}{\\frac{3}{\\sqrt 5}\\log X - 9.78}.\n\\end{aligned}\n\\end{equation}\nOur choice is ${\\varepsilon = \n10^{-4}}$.\nThe first two terms in the right-hand side of~\\eqref{eq:1lowerbound} are \nmonotonously increasing, and the remaining three terms are decreasing for ${X\\in[10^{10}, 10^{15})}$; note that ${x\\mapsto(\\log x)\/x^{1\/8}}$ is\ndecreasing for ${x \\ge 3000 > e^8}$. Using ${X < 10^{15}}$ for the first two terms and ${X\\ge 2\\cdot 10^{10}}$ for the\n remaining three terms, we see that the right-hand side of~\\eqref{eq:1lowerbound} is strictly smaller than $0.962$ if\n${X \\in [2\\cdot10^{10}, 10^{15})}$. Similarly, we infer that it is strictly smaller than $0.960$ if ${X \\in [10^{10}, 2\\cdot10^{10})}$. This completes the proof of Theorem~\\ref{thmidrange}. \n\\qed\n\n}\n\n\\section{Handling the low-range \\texorpdfstring{$3\\cdot 10^{5}\\le |\\Delta|< 10^{10}$}{3.10to5<|Delta|<10to10}}\n\\label{slowrange}\nWe now deal with the low-range ${|\\Delta|\\in [3\\cdot10^5, 10^{10})}$. For this range the upper\nbound on ${\\mathcal{C}}_\\varepsilon(\\Delta)$ arises from a computer-assisted search algorithm.\n\n \n\nWe prove the following. \n\\begin{theorem}\n \\label{thlowrange}\n Let~$\\Delta$ be the discriminant of a singular unit. Then\n ${|\\Delta|\\notin [3\\cdot10^5, 10^{10})}$. \n\\end{theorem}\n\nThe proof relies on the following lemma. 
\n\n\\begin{lemma}\n \\label{lem:lowrangeCebound}\n Let $\\Delta$ be the discriminant of a singular modulus.\n \\begin{enumerate}[label={(\\roman*)}]\n \\item \n \\label{itenten} If ${10^7\\le|\\Delta|< 10^{10}}$ and ${\\varepsilon = 10^{-3}}$, then\n ${{\\mathcal{C}}_\\varepsilon(\\Delta)\\le 16}$.\n \\item \n \\label{itenseven}\n If ${3\\cdot10^5\\le|\\Delta|< 10^7}$ and ${\\varepsilon = 4\\cdot 10^{-3}}$, then\n ${{\\mathcal{C}}_\\varepsilon(\\Delta)\\le 6}$. \n \\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nLet $X_{\\min}$ and $X_{\\max}$ be positive integers satisfying ${ X_{\\min} 0$}\n \n \n \\Output{an upper bound for ${\\mathcal{C}}_\\varepsilon(\\Delta)$ for all\n discriminants $\\Delta\\in [-X_{\\mathrm{max}},-X_{\\mathrm{min}}]$}\n \\BlankLine\n\n $counter \\leftarrow$ pointer to array of length\n $X_{\\mathrm{max}}-X_{\\mathrm{min}}+1$ initialized to $0$\\;\n\n $bound \\leftarrow 0$\\;\n \n \\For{$c\\leftarrow \\lfloor X_{\\mathrm{min}}^{1\/2}\/2\\rfloor$ \\KwTo\n $\\lfloor X_{\\mathrm{max}}^{1\/2}\\rfloor$}{\n \\For{$a\\leftarrow \\lfloor c\/(1+\\sqrt 3 \\varepsilon+\\varepsilon^2) \\rfloor$ to $c$}{\n \\For{$b\\leftarrow \\lfloor (1-2\\varepsilon)a\\rfloor$ to $a$}{\n $X\\leftarrow 4ac-b^2$\\;\n \\If{$X\\ge X_{\\mathrm{min}}$ and $X \\le X_{\\mathrm{max}}$\n }{$pos \\leftarrow X-X_{\\mathrm{min}}$\\;\n $counter[pos] \\leftarrow counter[pos]+2$\\;\n \\lIf{$counter[pos] > bound$}{$bound\\leftarrow counter[pos]$}\n }\n }\n }}\n \\Return{$bound$}\\;\n \\caption{Compute an upper bound for ${\\mathcal{C}}_{\\epsilon}(\\Delta)$ in the\n range ${\\Delta\\in [-X_{\\mathrm{max}},-X_{\\mathrm{min}}]}$}\\label{algo:CCDelta}\n }%\n \\end{algorithm}\n \n\n\n\\begin{proof}[Proof of Theorem~\\ref{thlowrange}]\n Assume that~$\\alpha$ is a singular unit of discriminant ${\\Delta\\in (-10^{10}, -3\\cdot10^5]}$.\nLet ${0<\\varepsilon\\le 4\\cdot 10^{-3}}$. 
We again set~$Y$ as in (\\ref{ey}).\n As in the proof of Theorem~\\ref{thmidrange} we find (\\ref{eq:midrangeYbound}). \n \n \nWe infer that\n \\begin{alignat*}1\n 1 &\\le 3\\frac{{\\mathcal{C}}_\\varepsilon(\\Delta)}{{\\mathcal{C}}(\\Delta) Y} \\log X +\n \\frac{3\\log(\\varepsilon^{-1}) - 10.65}{Y}\\\\\n & \\le \\frac {3{\\mathcal{C}}_\\varepsilon(\\Delta)}{\\pi} \\frac{\\log X}{X^{1\/2}} +\n \\frac{3\\log(\\varepsilon^{-1}) - 10.65}{\\frac{3}{\\sqrt{5}}\\log X - 9.78}\n \\end{alignat*}\nwhere we use ${{\\mathcal{C}}(\\Delta)Y \\ge \\pi X^{1\/2}}$\n and ${Y\\ge \\frac{3}{\\sqrt{5}} \\log X - 9.78}$. \n\n If ${X\\in [10^7,10^{10})}$ then we set ${\\varepsilon = 10^{-3}}$ and use the estimate\n ${{\\mathcal{C}}_{\\varepsilon}(\\Delta) \\le 16}$ from Lemma~\\ref{lem:lowrangeCebound}\\ref{itenten}. \n Recall that ${x\\mapsto (\\log x)\/x^{1\/2}}$\n is decreasing for ${x\\ge e^2}$. So we find\n \\begin{equation*}\n 1 \\le \\frac{3\\cdot 16}{\\pi}\\frac{\\log(10^7)}{10^{7\/2}} +\n \\frac{3\\log(1000) - 10.65}{\\frac{3}{\\sqrt{5}} \\log(10^7)- 9.78}< 0.929,\n \\end{equation*}\n a contradiction.\n\nWhen ${X\\in [3\\cdot10^5,10^7)}$ we set ${\\varepsilon = 4\\cdot10^{-3}}$. Then\n ${{\\mathcal{C}}_{\\varepsilon}(\\Delta)\\le 6}$ by Lem\\-ma~\\ref{lem:lowrangeCebound}\\ref{itenseven}.\nUsing \n ${X\\ge 3\\cdot 10^5}$ we find as before \n \\begin{equation*}\n 1 \\le \\frac{3\\cdot 6}{\\pi}\\frac{\\log(3\\cdot 10^5)}{(3\\cdot 10^{5})^{1\/2}} +\n \\frac{3\\log(250) - 10.65}{\\frac{3}{\\sqrt{5}} \\log(3\\cdot 10^{5})- 9.78}< 0.961,\n \\end{equation*}\n another contradiction, which completes the proof. \n\\end{proof}\n\n\n\\section{The extra low-range}\n\n\\label{sfinal}\n\nThe results of the three previous sections reduce the proof of Theorem~\\ref{thmain} to the\nfollowing assertion.\n\n\n\\begin{theorem}\n\\label{thm:habeggerspari}\n Let~$\\Delta$ be the discriminant of a singular unit. Then \n ${|\\Delta|\\ge 3\\cdot 10^5}$. 
\n\\end{theorem}\n\\begin{proof}\n Let $\\alpha$ be a singular unit of discriminant $\\Delta$. We write\n ${X=|\\Delta|}$. \nWe may assume that ${X\\ge 4}$ because the only singular modulus of discriminant $-3$ is\n${j(\\zeta_3)=0}$, \nwhich is not an algebraic unit.\n\n\nRecall from Section~\\ref{sceps} that the Galois conjugates of~$\\alpha$ are precisely the singular moduli\n$j(\\tau)$, where ${\\tau = \\tau(a,b,c)}$ with $(a,b,c)$ as in~\\eqref{ekuh}. The imaginary part of such~$\\tau$ is\n$X^{1\/2}\/(2a)$ and ${a\\le (X\/3)^{1\/2}}$\nby Lemma~\\ref{lgauss}\\ref{iadelta}. \n Lemma~\\ref{lttsn} implies that\n$$\n|j(\\tau)|\\ge e^{2\\pi X^{1\/2}\/(2a)} -2079\n =e^{\\pi X^{1\/2}\/a}-2079 > 23^{X^{1\/2}\/a}-2079\n$$\n as ${e^\\pi > 23}$. Using Lemmas~\\ref{lanal} and~\\ref{lliouv}, we find that\n$$\n|j(\\tau)| \\ge 42700 \\min \\left\\{ \\frac{\\sqrt3}{4X},4\\cdot 10^{-3}\\right\\}^3.\n$$\nThese\n bounds together show that\n \\begin{equation}\n\\label{eq:finaljlb}\n |j(\\tau)|\\ge \\max\\left\\{ 23^{\\lfloor X^{1\/2}\/a\\rfloor}-2079,\n 42700\\min\\Bigl\\{\\frac2{5X},\\frac1{250}\\Bigr\\}^3\\right\\}. \n \\end{equation}\nBased on this observation, Algorithm~\\ref{algo:exclude} prints a list of discriminants of potential singular\nunits in the range ${[-X_{\\max},-4]}$. For this purpose, it computes a \nrational lower bound~$P$ for the absolute value of the\n$\\Q(\\alpha)\/\\Q$-norm of each singular modulus in this range. Those singular\nmoduli where ${P \\le 1}$ are then flagged as potential singular units.\n\n We have implemented this algorithm as a \\textsf{PARI}\n script\\footnote{A link to our \\textsf{PARI} script \\textsf{algorithm2.gp}\nis on the second-named author's homepage. The running time is about 23 minutes on a regular desktop computer (Intel Xeon CPU E5-1620 v3, 3.50GHz, 32GB RAM). 
The only floating point operation used approximates $X^{1\/2}$\n which leads to ${n=\\lfloor X^{1\/2}\/a\\rfloor}$.\nTo rule out a rounding error in the floating point arithmetic we\ncompare $(an)^2$ with~$X$ in our implementation.}. \nThe script flags only $-4$, $-7$ and $-8$ as discriminants of potential singular units. The\nsingular moduli of these discriminants are well-known \\cite[(12.20)]{Cox}: they are $12^3$, $-15^3$ and $20^3$, respectively. None of them is a unit, which concludes the proof. \n\\end{proof} \n \n \\begin{algorithm}[t]{\\footnotesize\n \\SetKwInOut{Input}{Input}\\SetKwInOut{Output}{Output}\n \\Input{An integer $X_{\\mathrm{max}}\\ge 1$}\n \\Output{Print a list containing all discriminants\n in $[-X_{\\mathrm{max}},-4]$ that are attached to a potential\n singular unit.}\n\n \\For{$X\\leftarrow 4$ \\KwTo $X_{\\mathrm{max}}$}{\n $\\Delta \\leftarrow -X$\\;\n \n \\lIf{$\\Delta\\equiv 2 \\text{ or }3 \\mod 4$}{\n next $X$\n }\n \n\n\n$P\\leftarrow 1$\\;\n \\For{$a\\leftarrow 1$ \\KwTo $\\lfloor \\sqrt{X\/3}\\rfloor$} {\n\n $n\\leftarrow \\lfloor X^{1\/2}\/a\\rfloor$\\;\n \\For{$b\\leftarrow -a+1$ \\KwTo $a$}{\n \\lIf{$b^2 \\not\\equiv \\Delta \\mod 4a$}{\n next $b$\n }\n $c\\leftarrow (b^2-\\Delta)\/(4a)$\\;\n \\lIf{$a>c$}{next $b$}\n \\lIf{$a=c \\text{ and } b<0$}{next $b$}\n \\lIf{$\\mathrm{gcd}(a,b,c)\\not=1$}{next $b$}\n \n \n \n \n $P\\leftarrow P \\cdot \\max\\{23^{n\n }-2079,42700\\min\\{2\/(5X),1\/250\\}^3\\}$\\;\n}}\n\n\\lIf{$P\\le 1$}{print $\\Delta$}}\n\\caption{Exclude singular units} \\label{algo:exclude}\n}%\n \\end{algorithm}\n \n\n \n\n\nAs indicated in the introduction, Theorem~\\ref{thmain} is the combination of Theorems~\\ref{thhighrange},~\\ref{thmidrange},~\\ref{thlowrange} and~\\ref{thm:habeggerspari}. 
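The exclusion logic of Algorithm~\ref{algo:exclude} can also be sketched in ordinary code. The following Python sketch is our own re-implementation (not the authors' \textsf{PARI} script; the function name is ours): it enumerates the reduced primitive forms $(a,b,c)$ of each discriminant and accumulates the lower bound $P$ from~\eqref{eq:finaljlb}, using exact rational arithmetic so that the comparison ${P\le 1}$ involves no rounding.

```python
from fractions import Fraction
from math import gcd, isqrt

def potential_singular_units(x_max):
    """Sketch of Algorithm 2: list all discriminants Delta in [-x_max, -4]
    that survive the test P <= 1 (potential singular units)."""
    flagged = []
    for x in range(4, x_max + 1):
        delta = -x
        if delta % 4 not in (0, 1):            # discriminants are 0 or 1 mod 4
            continue
        p = Fraction(1)                        # exact rational lower bound P
        for a in range(1, isqrt(x // 3) + 1):  # a <= (X/3)^(1/2)
            n = isqrt(x) // a                  # n = floor(X^(1/2)/a), computed exactly
            for b in range(-a + 1, a + 1):
                if (b * b - delta) % (4 * a) != 0:
                    continue
                c = (b * b - delta) // (4 * a)
                # keep only reduced, primitive forms (a, b, c)
                if a > c or (a == c and b < 0) or gcd(gcd(a, b), c) != 1:
                    continue
                # per-conjugate lower bound on |j(tau)| from (eq:finaljlb)
                p *= max(Fraction(23) ** n - 2079,
                         42700 * min(Fraction(2, 5 * x), Fraction(1, 250)) ** 3)
        if p <= 1:
            flagged.append(delta)
    return flagged
```

Already on a toy range this reproduces the outcome of the full run reported above: `potential_singular_units(100)` returns `[-4, -7, -8]`.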
\n\n\n{\\footnotesize\n\n\\def$'${$'$}\n\\providecommand{\\bysame}{\\leavevmode\\hbox to3em{\\hrulefill}\\thinspace}\n\\providecommand{\\MR}{\\relax\\ifhmode\\unskip\\space\\fi MR }\n\\providecommand{\\MRhref}[2]{%\n \\href{http:\/\/www.ams.org\/mathscinet-getitem?mr=#1}{#2}\n}\n\\providecommand{\\href}[2]{#2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\ \\ \\ \\ The time evolution of coherent states (CS) has attracted a great\ndeal of attention since the introduction of Glauber's CS of the harmonic\noscillator {\\normalsize \\cite{Glauber63}}. Of\nparticular interest has been the determination of the Hamiltonian operator\nfor which an initial coherent state remains coherent under time evolution.\nIt is established that this Hamiltonian has the form of the nonstationary\nbosonic forced oscillator Hamiltonian {\\normalsize \\cite{Glauber66, Mehta66,\n Stoler75, Kano76}: \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ } \\\n\\begin{equation}\nH_{\\mathrm{cs}}=\\omega (t)a^{\\dagger }a+f(t)a^{\\dagger }+f^{\\ast }(t)a+\\beta\n(t), \\label{Hcs}\n\\end{equation}%\nwhere $\\omega (t)$ and $\\beta (t)$ are arbitrary real functions of time $t$,\nand $f(t)$ is arbitrary complex function. \n$H_{\\mathrm{cs}}$ is a particular case of the nonstationary forced oscillator\nHamiltonian for which the exact time evolution of CS has been obtained in\n\\cite{MMT70,Holz70} by first constructing boson ladder operator dynamical %\ninvariants according to the Lewis and Riesenfeld scheme of time\ndependent invariants \\cite{Lewis69}.\n\nOur purpose in the present article is to study the dynamical invariants and\ntime evolution of CS for the {\\it fermionic} forced oscillator (FFO), which %\nin fact is the general (one mode) Hamiltonian.\n\nThe organization of the article is as follows. In Sec. 
2 we construct fermionic %\nladder operator dynamical invariants and the corresponding Lewis-Riesenfeld\nHermitian invariant {\\normalsize \\cite{Lewis69}}, following the scheme related %\nto the boson system {\\normalsize \\cite{MMT70}}. Using these invariants, we\nconstruct in Sec. 3 fermionic CS and Fock states of the FFO system as eigenstates %\nof the constructed invariant fermionic annihilation operator $B(t)$ and %\n$B^\\dagger(t)B(t)$, respectively. These CS can\nrepresent (under appropriate initial conditions) the exact time-evolution of\ninitial canonical fermionic CS. Finally, the relation of the invariant ladder %\noperators method {\\normalsize \\cite{MMT70, Holz70}} to the Lewis-Riesenfeld method\n{\\normalsize \\cite{Lewis69}} is briefly described using the example of the FFO. The\npaper ends with concluding remarks.\\\n\n\\medskip\n\n\\section{FFO and invariant ladder operators}\n\nWe consider the single nonstationary fermionic forced oscillator (FFO)\ndescribed by the following Hamiltonian,%\n\\begin{equation}\nH_{\\!f}=\\omega (t)b^{\\dagger }b+f(t)b^{\\dagger }+f^{\\ast }(t)b+g(t),\n\\label{Hf}\n\\end{equation}%\nwhere $\\omega (t)$ and $g(t)$ are arbitrary real functions of time, $f(t)$\nis an arbitrary complex function, $b$ and $b^{\\dagger }$ are fermion\nannihilation and creation operators respectively, which obey the fermion\nalgebra:\n\\begin{equation}\n\\{b,b^{\\dagger }\\} = 1,\\text{ \\ }%\nb^{2}=b^{\\dagger }{}^{2}=0, \\label{013}\n\\end{equation}%\nwhere $\\{b,b^{\\dagger }\\}\\equiv bb^{\\dagger }+b^{\\dagger }b$.\nDue to the nilpotency of the fermionic operators $b$, $b^\\dagger$\nthe operator $H_{\\!f}$ represents the most general (one mode) fermionic %\nHamiltonian.\n\nThe Hilbert space $\\mathcal{H}$ of the single-fermion system is spanned by\nthe two eigenstates $\\left\\{ \\left\\vert 0\\right\\rangle ,\\left\\vert\n1\\right\\rangle \\right\\} $ of the number operator $b^{\\dagger }b$:\\, %\n$b^{\\dagger }b\\left\\vert n\\right\\rangle 
=n\\left\\vert n\\right\\rangle ,\\text{ \\ }n=0,1$.\nThe operators $b$ and $b^{\\dagger }$\\ allow transitions between number states,\n\\begin{equation}\nb\\left\\vert 0\\right\\rangle =0,\\text{\\ }b\\left\\vert 1\\right\\rangle\n=\\left\\vert 0\\right\\rangle \\,,\\text{ \\ }b^{\\dagger }\\left\\vert\n1\\right\\rangle =0,\\text{ }b^{\\dagger }|0\\rangle =|1\\rangle .\\,\n\\end{equation}%\nThe form of the Hamiltonian ({\\normalsize \\ref{Hf}}) is a Hermitian linear\ncombination of $b$, $b^{\\dagger }$ and $N = b^{\\dagger }b$. The fermion number\noperator $N$ obeys the relation $N^{2}=N$, and the three operators $b$,\n$b^{\\dagger }$ and $N$ close the following algebra under commutation:\n\\begin{equation}\n\\left[ b,N\\right] =b,\\text{ }\\left[ b^{\\dagger },N\\right] =-b^{\\dagger },%\n\\text{ }\\left[ b,b^{\\dagger }\\right] =1-2N.\n\\end{equation}%\nLet us note that linear combinations of $b^{\\dagger }$, $b$ and $N$ produce\nthe half-spin operators $J_{i}$,\n\\begin{equation}\nJ_{1}=\\tfrac{1}{2}(b^{\\dagger }+b),\\quad J_{2}=\\tfrac{1}{2\\ri}(b^{\\dagger\n}-b),\\quad J_{3}=b^{\\dagger }b-\\tfrac{1}{2}, \\label{30a}\n\\end{equation}%\nclosing the \\textit{su}(2) algebra: $\\left[ J_{k},J_{l}\\right] =i\\epsilon\n_{klm}J_{m}.$\n\nIt is convenient to use the raising and lowering operators $J_{\\pm }=J_{1}\\pm\niJ_{2}$, which satisfy the following commutation relations: $\\left[ J_{+},J_{-}%\n\\right] =2J_{3},\\text{ }\\left[ J_{3},J_{\\pm }\\right] =\\pm J_{\\pm }\\text{\\ }$%\n, where $J_{+}=b^{\\dagger },$ $J_{-}=b.$ Thus, in terms of these half-spin\noperators the Hamiltonian ({\\normalsize \\ref{Hf}}) takes the form\n\\begin{equation}\nH_{\\!f}=\\omega (t)J_{3}+f(t)J_{+}+f^{\\ast }(t)J_{-}+g(t)+\\tfrac{\\omega (t)}{2%\n}. \\label{Hf 2}\n\\end{equation}%\nOur task is the construction of the time-dependent invariants for the system\n{\\normalsize (\\ref{Hf})}, {\\normalsize (\\ref{Hf 2})}. 
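Since the whole single-fermion problem lives in a two-dimensional Hilbert space, all of the algebraic relations above can be checked concretely in the $2\times 2$ matrix representation on the basis $\{|0\rangle, |1\rangle\}$. The following sketch (using NumPy; all variable names are our own) verifies the fermion algebra (\ref{013}), the commutators closing the algebra, and the $su(2)$ relations for the half-spin operators (\ref{30a}):

```python
import numpy as np

# Basis convention: column vector (1,0) = |0>, (0,1) = |1>.
b  = np.array([[0, 1], [0, 0]], dtype=complex)  # annihilation: b|1> = |0>, b|0> = 0
bd = b.conj().T                                 # creation b^dagger
N  = bd @ b                                     # number operator
I2 = np.eye(2)

def comm(x, y):  return x @ y - y @ x           # commutator [x, y]
def acomm(x, y): return x @ y + y @ x           # anticommutator {x, y}

# fermion algebra: {b, b+} = 1, b^2 = (b+)^2 = 0, N^2 = N
assert np.allclose(acomm(b, bd), I2)
assert np.allclose(b @ b, 0) and np.allclose(bd @ bd, 0)
assert np.allclose(N @ N, N)

# commutation relations: [b, N] = b, [b+, N] = -b+, [b, b+] = 1 - 2N
assert np.allclose(comm(b, N), b)
assert np.allclose(comm(bd, N), -bd)
assert np.allclose(comm(b, bd), I2 - 2 * N)

# half-spin operators closing su(2): [J_k, J_l] = i eps_klm J_m
J1, J2, J3 = (bd + b) / 2, (bd - b) / 2j, N - I2 / 2
assert np.allclose(comm(J1, J2), 1j * J3)
assert np.allclose(comm(J2, J3), 1j * J1)
assert np.allclose(comm(J3, J1), 1j * J2)
```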
The defining equation\nof the invariant operator $B(t)$ for a quantum system with Hamiltonian $H(t)$\nis%\n\\begin{equation}\n\\frac{\\partial }{\\partial t}B(t)-\\ri[B(t),H]=0. \\label{19}\n\\end{equation}\nFormal solutions to Eq. {\\normalsize (\\ref{19})} are operators $%\nB(t)=U(t)B(0)U^{\\dagger }(t)$, where $U(t)$ is the evolution operator of the\nsystem, $U =T\\exp [-\\ri\\tint_{0}^{t}H(t^{\\prime })\\,{\\rm d} t^{\\prime }]$. \nIn our case of the FFO {\\normalsize (\\ref{Hf})}, {\\normalsize (\\ref{Hf 2})} we look\nfor non-Hermitian invariants $B(t)$, $B^\\dagger(t)$ in the form of a linear %\ncombination of the $SU(2)$ generators {\\normalsize (\\ref{30a})}, \\\n\\begin{equation}\n\\begin{aligned} B = \\nu_{-}(t)J_{-} + \\nu_{+}(t) J_{+} + \\nu_3 (t) J_{3}\n,\\\\ B^\\dagger = \\nu_{-}^* (t)J_{+} + \\nu_{+}^*(t) J_{-} + \\nu_3^* (t) J_{3},\n\\end{aligned} \\label{B}\n\\end{equation}%\nwhere $\\nu _{\\pm }(t),\\,\\,\\nu _{3}(t)$ may be complex functions of the time.\nHermitian invariants can then be easily built up as Hermitian combinations\nof $B$ and $B^{\\dagger }$. In particular, if $B$ is a non-Hermitian invariant,\nthe operator $I=B^{\\dagger }B- 1\/2$ \nis a Hermitian invariant, the fermion analog of the Lewis-Riesenfeld quadratic\ninvariant {\\normalsize \\cite{Lewis69}}.\n\nLet us note at this point that we look for FFO invariants as elements of the\nsame algebra $su(2)$ to which the Hamiltonian belongs. A similar approach was\nused in \\cite{Chumakov86} in the construction of invariants for\nthe nonstationary singular oscillator, where the related algebra is $su(1,1)$.\nThis is to be compared with the case of the nonsingular oscillator, for which invariant\nladder operators have been built up as elements of the Heisenberg-Weyl algebra %\n$h_w$ (i.e. as linear combinations of the coordinate and momentum operators $x$ and %\n$p$ \\cite{MMT70,Holz70}), while the related nonstationary Hamiltonian belongs to \n$su(1,1)$. 
%\nFor the {\\it forced} boson oscillator the Hamiltonian belongs to the larger algebra of the \nsemi-direct sum $su(1,1)\\dot{+}h_w$, but the ladder operator invariants %\n are again elements of the invariant subalgebra $h_w$ %\n\\cite{Prants86,Dattoli86,Trif93}.\n\n\nTo proceed with the construction of FFO invariants we substitute {\\normalsize (\\ref{B}) and \n(\\ref{Hf 2}) } into {\\normalsize (\\ref{19})}, and find the following system of\ndifferential equations for the parameter functions $\\nu_\\pm,\\,\\nu_3$: \\\n\\begin{eqnarray}\n\\dot{\\nu}_{3} &=&2\\ri(\\nu _{+}f^{\\ast }-\\nu _{-}f), \\label{(a)} \\\\\n\\dot{\\nu}_{+} &=&\\ri(\\nu _{3}f-\\nu _{+}\\omega ), \\label{(b)} \\\\\n\\dot{\\nu}_{-} &=&\\ri(\\nu _{-}\\omega -\\nu _{3}f^{\\ast }). \\label{(c)}\n\\end{eqnarray}%\nSolutions to the above linear system of first order equations are\nuniquely determined by the initial conditions $\\nu _{\\pm }(0)=\\nu _{0,\\pm }$%\n, $\\nu _{3}(0)=\\nu _{0,3}$. If we want the invariants $B(t)$ and $B^{\\dagger\n}(t)$ to be again fermion ladder operators, i.e. to obey the conditions\n\\begin{equation}\nB^{2}=0, \\quad \\{B,B^{\\dagger}\\}=1, \\label{constr}\n\\end{equation}%\nwe have to take $\\nu _{0,\\pm }$ and $\\nu _{0,3}$ satisfying\n\\begin{equation}\n \\nu _{0,3}^{2} = -4\\nu _{0,+}\\nu _{0,-},\\quad\n |\\nu _{0,-}|+|\\nu _{0,+}| = 1\\,. 
\\label{0}\n\\end{equation}%\nIndeed, for $B^{2}(t)$ and $\\{B(t),B^{\\dagger }(t)\\}$ we find\n\\begin{equation}\n\\begin{tabular}{l}\n$\\displaystyle B^{2}=\\nu _{+}\\nu _{-}+\\tfrac{1}{4}\\nu _{3}^{2}\\equiv\n\\lambda _{1}$, \\\\[2mm]\n$\\displaystyle\\{B,B^{\\dagger}\\}=|\\nu _{-}|^{2}+|\\nu _{+}|^{2}+\\tfrac{1%\n}{2}|\\nu _{3}|^{2}\\equiv \\lambda _{2}$.%\n\\end{tabular}\n\\label{lam_i}\n\\end{equation}%\nThe quantities $\\lambda _{1}(\\nu _{\\pm },\\nu _{3})$, $\\lambda _{2}(\\nu _{\\pm\n},\\nu _{3})$ turn out to be two different '\\textit{constants of motion}'\nfor the system {\\normalsize (\\ref{(a)})}-{\\normalsize (\\ref{(c)})}, since their\ntime derivatives vanish:\n\\begin{equation}\n\\begin{tabular}{l}\n$\\displaystyle \n\\frac{{\\rm d}}{{\\rm d} t}\\lambda _{1} = \\frac{{\\rm d}}{{\\rm d} t}\\left( \\nu _{+}\\nu _{-}+\\tfrac{1%\n}{4}\\nu _{3}^{2}\\right) =0$, \\\\[3mm]\n$\\displaystyle\n\\frac{{\\rm d}}{{\\rm d} t}\\lambda _{2} = \\frac{{\\rm d}}{{\\rm d} t}\\left( |\\nu _{-}|^{2}+|\\nu\n_{+}|^{2}+\\tfrac{1}{2}|\\nu _{3}|^{2}\\right) =0$.%\n\\end{tabular}\n\\label{dotlam_i}\n\\end{equation}%\nTherefore we can fix the values of these constants as $\\lambda _{1}=0$, $%\n\\lambda _{2}=1$, i.e.\n\\begin{equation}\n\\begin{tabular}{l}\n$ \\displaystyle \\nu _{+}\\nu _{-}+\\tfrac{1}{4}\\nu _{3}^{2}=0,$\\\\[3mm]\n$\\displaystyle |\\nu _{-}|^{2}+|\\nu_{+}|^{2}+\\tfrac{1}{2}|\\nu _{3}|^{2}=1$, \n\\end{tabular}\n\\label{lam 0}\n\\end{equation}%\nand then the conditions {\\normalsize (\\ref{constr})} are satisfied. 
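The conservation laws (\ref{dotlam_i}) are easy to confirm numerically: integrating the system (\ref{(a)})-(\ref{(c)}) with any smooth profiles $\omega(t)$, $f(t)$ leaves $\lambda_1$ and $\lambda_2$ at their initial values. A minimal sketch (the toy profiles and all names are our own choice; a fixed-step RK4 integrator stands in for any ODE solver):

```python
import numpy as np

omega = lambda t: 1.0 + 0.3 * np.sin(t)            # toy real frequency profile
f     = lambda t: (0.5 + 0.2j) * np.exp(0.7j * t)  # toy complex force profile

def rhs(t, y):
    """Right-hand side of the system (a)-(c) for y = (nu3, nu+, nu-)."""
    nu3, nu_p, nu_m = y
    return np.array([
        2j * (nu_p * np.conj(f(t)) - nu_m * f(t)),    # d(nu3)/dt
        1j * (nu3 * f(t) - nu_p * omega(t)),          # d(nu+)/dt
        1j * (nu_m * omega(t) - nu3 * np.conj(f(t)))  # d(nu-)/dt
    ])

# initial data nu-(0) = 1, nu+(0) = nu3(0) = 0, i.e. B(0) = b
y, t, h = np.array([0j, 0j, 1 + 0j]), 0.0, 1e-3
for _ in range(5000):                                # integrate up to t = 5
    k1 = rhs(t, y)
    k2 = rhs(t + h / 2, y + h / 2 * k1)
    k3 = rhs(t + h / 2, y + h / 2 * k2)
    k4 = rhs(t + h, y + h * k3)
    y, t = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), t + h

nu3, nu_p, nu_m = y
lam1 = nu_p * nu_m + nu3 ** 2 / 4                          # stays at its initial value 0
lam2 = abs(nu_m) ** 2 + abs(nu_p) ** 2 + abs(nu3) ** 2 / 2  # stays at its initial value 1
assert abs(lam1) < 1e-9 and abs(lam2 - 1) < 1e-9
```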
If furthermore the initial\nconditions are taken as\n\\begin{equation}\n\\nu _{-}(0)=1,\\,\\,\\,\\nu _{+}(0)=0=\\nu _{3}(0), \\label{nu_i0}\n\\end{equation}%\nthen $B(0)=b$. From now on we work with these fermionic ladder operator invariants, %\ni.e. we consider the conditions (\\ref{lam 0}) to be satisfied.\n\nLet us now recall that in the case of the nonstationary boson oscillator the ladder %\noperator invariants, constructed first in \\cite{MMT70, Holz70} (see also %\n\\cite{Chumakov86, Prants86}), are expressed in terms of only one parameter function %\n$\\epsilon(t)$, which obeys a simple second order equation, namely that of the %\nclassical oscillator with varying frequency. It turns out that this can be done %\nin the case of the fermionic oscillator as well.\nTo this end we first express all three parameter functions $\\nu _{\\pm }(t)$,%\n $\\nu _{3}(t)$ in terms of one of them, which has to obey a second order %\ndifferential equation.\nLet us, for example, express $\\nu _{3}(t)$ and $\\nu _{-}(t)$ in terms of $\\nu _{+}(t)$%\n and its derivatives. We have %\n\\begin{equation} \\label{nu3}\n\\nu _{3} = - \\tfrac{\\ri}{f}(\\dot{\\nu}_{+} + \\ri \\nu _{+}\\omega ),\n\\end{equation}%\n\\begin{equation}\n\\nu _{-}=\\tfrac{1}{2f^{2}}\\left[ \\ddot{\\nu}_{+}+\\left( \\ri \\omega -\\tfrac{\\dot{f%\n}}{f}\\right) \\dot{\\nu}_{+}+\\left( 2ff^{\\ast }+\\ri\\dot{\\omega}-\\ri \\frac{\\omega }{f%\n}\\dot{f}\\right) \\nu _{+}\\right] . \\label{nu-}\n\\end{equation}%\nSubstituting these expressions into the formula for $\\lambda_1$ in terms of $\\nu_\\pm,\n\\,\\nu_3$, and taking into account that $\\lambda_1$ is fixed to $0$, we find that $\\nu_+$ %\nshould satisfy the following second order equation,\n\n\\begin{equation}\n2\\nu _{+}\\ddot{\\nu}_{+}-\\dot{\\nu}_{+}^{2} - 2\\nu_{+}\\dot{\\nu}_{+}\\tfrac{\\dot{f}}{f} +\n4\\nu _{+}^{2}\\left(\n|f|^{2}+\\tfrac{\\omega ^{2}}{4}+ \\ri\\tfrac{\\dot{\\omega}}{2}-\\ri \\tfrac{\\omega %\n\\dot{f }}{2f}\\right) = 0 . 
\\label{lam}\n\\end{equation}%\n Using this, and supposing that $\\nu_{+}\\neq 0$, we obtain for $\\nu _{-}$ a more %\ncompact expression in terms of $\\nu _{+}$ and $\\dot{\\nu}_{+}$, %\n\\begin{equation}\n\\nu _{-}= -\\nu _{3}^{2}\/4\\nu _{+}, \\label{nu- 2}\n\\end{equation}%\nwhere $\\nu_3$ is given again by eq. (\\ref{nu3}).\n\nThus the operators $B(t),\\,B^{\\dagger}(t)$, eq. (\\ref{B}), are fermionic %\nladder operator invariants for the forced oscillator (\\ref{Hf}), (\\ref{Hf 2})\nif $\\nu _{3}$ and $\\nu _{-}$ are given by eqs. (\\ref{nu3}) and (\\ref{nu- 2}),%\nand $\\nu _{+}(t)$ is a nonvanishing solution of the second order equation (\\ref{lam}).\n\nNext we try to linearize the auxiliary eq. (\\ref{lam}). For this purpose we put\n\\begin{equation}\n\\nu _{+}(t) = \\tfrac{1}{2}{\\epsilon'} ^{2}(t) \\label{nu+ eps'}\n\\end{equation}%\nand obtain that $\\epsilon' (t)$ satisfies the linear equation\n\\begin{equation} \\label{eps' eq}\n\\ddot{\\epsilon}' - \\tfrac{\\dot{f}}{f}\\dot{\\epsilon}'+\\Omega' (t)\\epsilon' =0,\n\\end{equation}%\nwhere\n\\begin{equation}\n\\Omega' (t)=|f(t)|^{2}+\\tfrac{1}{4}\\omega ^{2}(t)+\\tfrac{\\ri}{2}\\dot{\\omega}-%\n\\tfrac{\\ri}{2}\\omega \\tfrac{\\dot{f}}{f}.\n\\end{equation}\n\nIn terms of $\\epsilon'$ the formulas (\\ref{nu-}) and (\\ref{nu3}) for $\\nu_-,\\, \\nu_3$ read\n\\begin{equation}\n\\begin{tabular}{l}\n$\\displaystyle\\nu _{-}=-\\frac{1}{2{\\epsilon'}^{2}}\\nu_3^{2}$, \\\\[2mm]\n$\\displaystyle\\nu _{3}=\\frac{1}{f}\\left( \\tfrac{\\omega }{2}{\\epsilon'}\n^{2}-\\ri\\epsilon' \\dot{\\epsilon}'\\right) $.%\n\\end{tabular}%\n\\end{equation}%\nThe term in (\\ref{eps' eq}) proportional to the first derivative can be eliminated %\nby the substitution %\n\\begin{equation}\\label{eps}\n\\epsilon' = \\epsilon \\exp \\left( \\tfrac{1}{2}\\tint_{0}^{t}{\\rm d}\\tau \\,\\dot{f}(\\tau )\/f(\\tau )\\right) .\n\\end{equation}%\nThis leads to the desired simple equation for
$\\epsilon$,\n\\begin{equation}\\label{eps eq}\n\\ddot{\\epsilon}+\\Omega (t)\\epsilon = 0,\n\\end{equation}%\nwhere $ \\Omega(t)= \\Omega^{\\prime }(t) + \\ddot{f}\/2f - 3\\dot{f}\\,^{2}\/4f^{2}$. %\nEquation (\\ref{eps eq}) is of the same type as the auxiliary equation used in the %\ncase of the nonstationary boson oscillator \\cite{MMT70, Holz70}. Here, however, the %\n'squared frequency' $\\Omega$ is complex and depends in a different manner %\non the corresponding Hamiltonian parameters. And the solutions are subject to %\ndifferent constraints, stemming from the different commutation relations: \nin terms of our $\\epsilon$, eq. (\\ref{eps eq}), the constraint $\\lambda_2=1$ reads %\n($\\epsilon'$ is related to $\\epsilon$ according to (\\ref{eps})),\n\\begin{equation}\n\\tfrac{1}{4}\\left( |\\epsilon' |^{2}+\\tfrac{1}{|f|^{2}}\\left\\vert\n\\tfrac{\\omega }{2}\\epsilon' -\\ri\\dot{\\epsilon}'\\right\\vert ^{2}\\right) ^{2}=1 ,\n\\end{equation}%\n while in the boson case the constraint is ${\\rm Im}\\left({\\epsilon}^*\\dot{\\epsilon}\\right) = 1$ \n\\cite{MMT70, Holz70}.\n\nTo finalize this section, let us note that in the particular case of the %\n\\textit{free} fermion oscillator, $f(t)\\equiv 0$, the explicit solutions of the problem can %\n be easily found in the form %\n\\begin{equation}\n\\begin{tabular}{l}\n$\\displaystyle \n\\nu _{\\pm }(t)=\\nu _{0,\\pm }\\re^{\\mp \\ri\\tint^{t}\\omega (\\tau )\\,{\\rm d}\\tau },$\\\\\n$ \\displaystyle \n\\nu_{3}=\\nu _{0,3},$\n\\end{tabular}\n\\label{solutions1}\n\\end{equation}%\nwhere $\\nu _{0,\\pm }$, $\\nu _{0,3}$ are constants. 
To ensure the fermionic %\ncommutation relations of $B(t),\\,B^\\dagger(t)$ they have to obey the relations\n$\\nu _{0,-}\\,\\nu _{0,+} + \\nu _{0,3}^{2}\/4 =0$ \\,\\, and \\,\\, $ |\\nu\n_{0,-}|^{2} + |\\nu _{0,+}|^{2} + |\\nu _{0,3}|^{2}\/2 = 1.$\n\\medskip\n\n\\section{CS for the fermion forced oscillator}\n\nWe define coherent states (CS) for a given fermion system as eigenstates of\nthe corresponding invariant fermion annihilation (or creation) operator $B(t)\n$. Since the most general one-mode fermion Hamiltonian operator is of the\nform of the (nonstationary) forced oscillator (\\ref{Hf 2}), the one-mode fermion\nCS are defined as eigenstates of the invariant ladder operator $B(t)$\n(eqs. (\\ref{B}), (\\ref{constr})):\n\\begin{equation}\nB(t)|\\zeta ;t\\rangle =\\zeta |\\zeta ;t\\rangle . \\label{|z;t> 1}\n\\end{equation}%\nSince $B(t)$ is an invariant operator, the eigenvalue $\\zeta $ does\nnot depend on the time $t$. In terms of $\\zeta $, $B(t)$, $B^{\\dagger }(t)$\nand the $B(t)$-vacuum $|0;t\\rangle $ we have for $|\\zeta ;t\\rangle $ the\nsame formulas as for the canonical fermion CS $|\\zeta \\rangle $, which are\ndefined {\\normalsize \\cite{Klauder, Abe89, Maam92, Junker98, Cahill99}} as\n\\begin{equation}\n\\left\\vert \\zeta \\right\\rangle =e^{-\\frac{1}{2}\\zeta ^{\\ast }\\zeta }\\left(\n\\left\\vert 0\\right\\rangle -\\text{ }\\zeta \\left\\vert 1\\right\\rangle \\right)\n\\,, \\label{|z>}\n\\end{equation}%\nwhere the eigenvalue $\\zeta $ is a Grassmannian variable: $\\zeta ^{2}=0,\\\n\\zeta \\zeta ^{\\ast }+\\zeta ^{\\ast }\\zeta =0$, $\\left\\vert 0\\right\\rangle $\nis the fermionic vacuum, $b\\left\\vert 0\\right\\rangle =0$, and $\\left\\vert\n1\\right\\rangle $ is the one-fermion state, $\\left\\vert 1\\right\\rangle\n=b^{\\dagger }\\left\\vert 0\\right\\rangle $. In particular,\n\\begin{equation}\n|\\zeta ;t\\rangle = \\re^{-\\tfrac{1}{2}\\zeta ^{\\ast }\\zeta }\\left( |0;t\\rangle\n-\\zeta B^{\\dagger }(t)|0;t\\rangle \\right) . 
\\label{|z;t> 2}\n\\end{equation}%\nIt remains therefore to construct the (normalized) new ground state $%\n|0;t\\rangle $ according to its defining equations\n\\begin{equation}\n\\begin{aligned} B(t) |0;t\\rangle = 0 ,\\quad \n\\ri\\frac{{\\rm d}}{{\\rm d} t} |0;t\\rangle = H_{f}|0;t\\rangle. \\end{aligned} \\label{|0;t> 1}\n\\end{equation}%\nWe put\n\\begin{equation}\n|0;t\\rangle =\\alpha _{0}(t)|0\\rangle +\\alpha _{1}(t)|1\\rangle ,\n\\label{|0;t> 2}\n\\end{equation}%\nsubstitute this into (\\ref{|0;t> 1}) and, after some tedious calculations, find%\n\\begin{eqnarray}\n\\alpha _{1}(t) &=&\\alpha _{0}(t)\\frac{\\nu _{3}^{\\ast }(t)}{2\\nu _{+}^{\\ast\n}(t)}, \\\\\n\\alpha _{0}(t) &=&\\sqrt{|\\nu _{+}(t)|}\\exp \\left[ -\\tfrac{\\ri}{2}\\left(\n\\varphi _{\\nu _{+}}(t)+\\tint_{0}^{t}(2g(\\tau )+\\omega (\\tau ))\\,{\\rm d}\\tau \\right) %\n\\right] ,\n\\end{eqnarray}%\nwhere $\\varphi _{\\nu _{+}}$ is the phase of $\\nu _{+}(t)$. The state $|\\zeta\n;t\\rangle $ will represent the exact time evolution of an initial canonical\nCS $|\\zeta \\rangle $ if the initial conditions (\\ref{nu_i0}) are imposed: %\n$|\\zeta ;0\\rangle = |\\zeta \\rangle $. In this case, the time evolved state %\n$ |\\zeta ;t\\rangle $ could again be an eigenstate of $b$ if the oscillator %\nis not 'forced', i.e. if $f(t)=0$.\nLet us note that the time-dependence of the constructed states is obtained %\nin terms of solutions to the system of auxiliary equations (\\ref{(a)})-(\\ref{(c)}),%\nor, equivalently, to the 'classical oscillator' equation (\\ref{eps eq}).\n\nOur method of construction of dynamical invariants differs slightly from the\nLewis-Riesenfeld method \\cite{Lewis69} (developed for bosonic oscillators).\nLewis and Riesenfeld first construct a Hermitian invariant, which\nis then represented as a product of normally ordered ladder operators. 
To make\nconnection to their approach let us suppose that we first succeeded to\nconstruct the Hermitian invariant $N(t)$ and to find some ladder operators $%\n\\tilde{B}(t)$, $\\tilde{B}^{\\dagger }(t)$ that factorize it: $N(t)=\\tilde{B}%\n^{\\dagger }(t)\\tilde{B}(t)$. It is clear that $\\tilde{B}(t)$ may differ from\nour non-Hermitian invariant $B(t)$ in a phase factor:\\thinspace\\ $\\tilde{B}%\n(t) = {\\rm e}^{\\ri\\varphi (t)}B(t)$.\\newline\nWe can then in a standard way construct normalized eigenstates of $N(t)$,\n\\begin{equation}\nN(t)\\widetilde{|0;t\\rangle }=0,\\quad N(t)\\widetilde{%\n|1;t\\rangle }=\\widetilde{|1;t\\rangle }, \\label{tld 2}\n\\end{equation}%\nand of $\\tilde{B}(t)$, \\\n\\begin{equation}\n\\tilde{B}(t)\\widetilde{|\\zeta ;t\\rangle }=\\zeta \\widetilde{|\\zeta ;t\\rangle },\n\\end{equation}%\n\\begin{equation}\n\\widetilde{|\\zeta ;t\\rangle }=\\left( 1-\\tfrac{1}{2}\\zeta ^{\\ast }\\zeta\n\\right) \\left[ \\widetilde{|0;t\\rangle }-\\zeta \\widetilde{|1;t\\rangle }\\right]\n\\label{tld 3}\n\\end{equation}%\nwhich however do not obey the Schr\\\"odinger equation since, in general $ \\tilde{B}(t)$\nmay not be invariant. 
To obtain solutions $|n;t\\rangle $ and $ |\\zeta ;t\\rangle $ the\nabove eigenstates $\\widetilde{|n;t\\rangle }$, $n=0,1$, should also be multiplied by\nphase factors,\n\\begin{equation}\n|n;t\\rangle = {\\rm e}^{\\ri\\phi _{n}(t)}\\widetilde{|n;t\\rangle },\\quad n=0,1,\n\\label{tld 4}\n\\end{equation}%\n\\begin{equation}\n|\\zeta ;t\\rangle =\\left( 1-\\tfrac{1}{2}\\zeta ^{\\ast }\\zeta \\right) \\left[\n{\\rm e}^{\\ri\\phi _{0}(t)}\\widetilde{|0;t\\rangle }-\\zeta {\\rm e}^{\\ri\\phi _{1}(t)}%\n\\widetilde{|1;t\\rangle }\\right] \\label{tld 5}\n\\end{equation}%\nwhich should obey the equations \\\n\\begin{equation}\n\\tfrac{{\\rm d}} \\def\\ri{{\\rm i}} \\def\\re{{\\rm e}}{{\\rm d}} \\def\\ri{{\\rm i}} \\def\\re{{\\rm e} t}\\phi _{n}=\\widetilde{\\langle n;t|}\\ri\\tfrac{\\partial }{\\partial t}%\n-H\\widetilde{|n;t\\rangle }. \\label{tld 6}\n\\end{equation}%\nEvidently the state (\\ref{tld 5}) is an eigenstate of $\\tilde{B}(t)$ with time\ndependent eigenvalue $\\zeta (t)=\\zeta \\exp (\\ri\\varphi (t))$, $\\varphi\n(t)=\\phi _{1}(t)-\\phi _{0}(t)$.\\newline\nThe phase $\\varphi (t)=\\phi _{1}(t)-\\phi _{0}(t)$ consists of two parts -\ngeometrical one $\\varphi ^{G}$, and dynamical one $\\varphi ^{D}=\\varphi\n-\\varphi ^{G}$ \\cite{Maam99},\n\\begin{eqnarray}\n\\varphi ^{G}(t) &=&\\varphi (t)+\\int_{0}^{t}\\left( \\widetilde{\\langle 1;t^{\\prime }|}H%\n\\widetilde{|1;t^{\\prime }\\rangle }-\\widetilde{\\langle 0;t^{\\prime }|}H%\n\\widetilde{|0;t^{\\prime }\\rangle }\\right) {\\rm d}} \\def\\ri{{\\rm i}} \\def\\re{{\\rm e} t^{\\prime }.\n\\end{eqnarray}\n\n\\subsection*{Concluding Remarks}\n\n\\ \\ \\ In this article, we have studied fermionic system of nonstationary\nforced oscillator and we have constructed invariant ladder operators and the\nrelated Fock and coherent states. 
We succeeded in expressing these invariants\nand the time evolution of the corresponding states in terms of the same\nclassical equation that describes the evolution of coherent states of the\nboson nonstationary (forced) oscillator \\cite{MMT70, Holz70}. The relation of\nthe invariant ladder operators method to the Lewis-Riesenfeld method \\cite{Lewis69}\nwas briefly described using the example of nonstationary fermion systems. \\\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:level1}INTRODUCTION}\n\nIt is well recognized that the $\\alpha$ particle interaction with atomic nuclei is important in astrophysics \\cite{1}. Even if astrophysical reactions involving helium do not proceed through the strong $\\alpha$-cluster states (because of their high excitation energy), these states can provide $\\alpha$ width to the states that are closer to the region of astrophysical interest through configuration mixing.\n\nFor a long time, the surprising alpha cluster structure has been a stimulus for the development of classical shell model approaches (see \\cite{2} for new results). Additionally, work by the authors of Ref. \\cite{3} recently ``strengthened the theoretical motivation for experimental searches of alpha cluster states in alpha-like nuclei\" \\cite{3}. The authors of Ref. \\cite{3} related the nuclear structure in light systems of even and equal numbers of protons and neutrons with the first-order transition at zero temperature from a Bose-condensed gas of alpha particles (\\textsuperscript{4}He nuclei) to a nuclear liquid.\n\nThe \\textsuperscript{20}Ne nucleus presents a famous example of the manifestation of the alpha-cluster structure, which makes this nucleus a touchstone for \\textit{ab initio} approaches \\cite{4}. The nucleus \\textsuperscript{20}Ne is a benchmark case for the traditional shell model and its extension into algebraic and clustering domains. 
The well-established effective interaction Hamiltonians such as \\cite{5} not only show outstanding agreement with experimental data for \\textit{sd}-shell nuclei but also generate configuration mixing that shows the transition to deformation and clustering.\n\nA remarkable feature of the \\textsuperscript{20}Ne nucleus is that almost all the observed states below 10 MeV can be classified into several overlapping rotational-like bands, with the first one based on the ground 0$^+_1$ state. There are three other bands based on 0$^+$ levels: on 0$^+_2$ at 6.725 MeV, on a very narrow 0$^+_3$ at 7.191 MeV, and on a very broad 0$^+_4$ at $\\sim$ 8.7 MeV \\cite{6}, which are of evident cluster structure. The 0$^+_2$ and 0$^+_4$ bands have $\\alpha$+\\textsuperscript{16}O core structure, as can be seen from their reduced $\\alpha$-particle widths, and the 0$^+_3$ band probably has predominant \\textsuperscript{12}C+\\textsuperscript{8}Be structure, which manifests itself in the selectivity of the \\textsuperscript{8}Be transfer reactions \\cite{7}. As the ground state band and the 0$^+_2$ band in \\textsuperscript{20}Ne can be related with similar structures in \\textsuperscript{16}O and \\textsuperscript{12}C, the ``additional\" structure of 0$^+_4$ states is not understood \\cite{4}. The cluster approaches \\cite{8} related the 0$^+_4$ band with large $\\alpha$-widths, starting with the so-called ``\\textsuperscript{16}O+$\\alpha$\" higher nodal band, which has one more nodal point in the ``\\textsuperscript{16}O+$\\alpha$\" relative wave function than the lower bands have. However, it appeared that there are too many bands with a similar structure.\n\nThe $\\alpha$ particle decay threshold in \\textsuperscript{20}Ne is 4.73 MeV, while the threshold for proton decay is at 12.8 MeV (the neutron decay threshold is even higher). 
Therefore, resonance $\\alpha$ particle scattering should be considered as an evident way to obtain data on the natural parity levels in \\textsuperscript{20}Ne up to 13 MeV excitation energy. Indeed, the majority of the adopted data \\cite{6} on level properties in the region in question are based on resonance work and an analysis made in 1960. More recently, the $\\alpha$+\\textsuperscript{16}O resonance scattering experiments were extended to backward angles and the data were reanalyzed using the \\textit{R} matrix code Multi 6 \\cite{9}. The authors \\cite{9} obtained results quite different from the adopted ones for many levels (see Table 1); in particular, the broad 0$^+_4$ and 2$^+_4$ levels appeared even much broader. The authors \\cite{9} noted difficulties of the fit in the region of 6-8 MeV excitation energies, a region mainly free from narrow resonances. Evidently, strong states of over 1 MeV width should influence a very broad excitation region (see for instance \\cite{10}).\n\nT. Fortune et al. \\cite{11,12} were the first to recognize the importance of the fact that the single-particle structure of the broad states is in drastic contradiction with the shell model predictions. They \\cite{11,12} also proposed the idea of mixing different configurations to explain the effect. The same idea was used in \\cite{13}. However, the authors of Refs. \\cite{11,12} used old data for the 0$^+_2$ state and used some estimates for the properties of the broad states proposed in Ref. \\cite{14} to support the idea. Later measurements \\cite{6} gave a width of 19 keV for the 6.72 MeV state (which is about 25\\% larger than that used in work \\cite{11}).\n\n\\begin{figure}[!t]\n \\begin{center}\n \\include{fig1}\n \\end{center}\n \\caption{E-T spectrum for the zero-degree detector. 
Alpha particles dominate; one can see also a weaker proton locus below the alpha particles.}\n\\end{figure}\n\nThe experimental aim of this work is to obtain new information on the structure of \\textsuperscript{20}Ne states, especially the broad 0$^+$, 2$^+$ states. Unlike other experimentalists, we used the Thick Target Inverse Kinematics method (TTIK) (see \\cite{15,16,17,18,19} and references therein) to study the excitation functions for the \\textsuperscript{16}O($\\alpha$, $\\alpha$)\\textsuperscript{16}O elastic scattering in the \\textsuperscript{20}Ne excitation region of 5.5-9.6 MeV and in a broad angular interval. On the theoretical side, we also used multi configuration shell model calculation to understand the limits of this approach in a description of the cluster states.\n\n\\section{\\label{sec:level2}EXPERIMENT}\n\n\\begin{figure}[!t]\n \\begin{center}\n \\include{fig2}\n \\end{center}\n \\caption{The \\textsuperscript{16}O($\\alpha$, $\\alpha$)\\textsuperscript{16}O elastic scattering excitation function at 180$^\\circ$ cm. The excitation energies in \\textsuperscript{20}Ne, E$_x$ in Table 1, are related with cm energy, E$_{cm}$, by expression, E$_x$=E$_{cm}$+4.73 MeV. The bold (red) line is the \\textit{R} matrix fit with the parameters of the present work. The dot (cyan) line is a fit with the 0$^+_4$ excitation energy of 8.3 MeV \\cite{11}, and dot (black) line is a fit with the 0$^+_4$ energy excitation energy of 8.62 MeV and the width of 1.472MeV \\cite{9}.}\n\\end{figure}\n\nThe experiment was performed at the DC-60 cyclotron (Astana) \\cite{17} which can accelerate heavy ions up to the 1.9 MeV\/A energy. While the TTIK method can't compete with a classical approach in terms of energy resolution, the possibility of observing excitation functions at and close to 180 degrees, where the resonance scattering dominates over the potential scattering, enables one to obtain more reliable information on the broad states. 
In the TTIK technique the inverse kinematics is used, and the incoming ions are slowed in a helium target gas. The light recoils, $\\alpha$ particles, are detected from a scattering event. These recoils emerge from the interaction with the beam ions and hit a Si detector array located at forward angles while the beam ions are stopped in the gas, as $\\alpha$ particles have smaller energy losses than the scattered ions. The TTIK approach provides a continuous excitation function as a result of the slowing down of the beam.\n\nFor the present experiment, the scattering chamber was filled with helium of 99.99\\% purity. The 30 MeV \\textsuperscript{16}O beam entered the scattering chamber through a thin entrance window made of 2.0 $\\mu$m Ti foil. Eight monitor Si detectors were placed in the chamber to detect \\textsuperscript{16}O ions elastically scattered from the Ti foil at a 21$^\\circ$ angle. This array monitors the intensity of the beam with a precision better than 4\\%. Fifteen 10x10 mm$^2$ Si detectors were placed at a distance of $\\sim$ 500 mm from the entrance window in the forward hemisphere at different laboratory angles starting from zero degrees. The gas pressure was chosen to stop the beam at a distance of 40 mm from the zero-degree detector. The detector energy calibration and resolution ($\\sim$ 30 keV) were tested with a \\textsuperscript{226}Ra, \\textsuperscript{222}Rn, \\textsuperscript{218}Po and \\textsuperscript{214}Po $\\alpha$-source. The experimental setup was similar to that used before \\cite{19}, and more details can be found in Refs. \\cite{17,19}. The main errors in the present experimental approach are related to the uncertainties of the beam energy loss in the gas. To test the energy loss, we placed a thin Ti foil (2.0 $\\mu$m) at different distances from the entrance window. This can be done during the experiment without cycling the vacuum. We found that the data \\cite{20} for the energy loss of \\textsuperscript{16}O in helium are correct. 
The details of these tests will be published elsewhere. As a result, we estimated that the uncertainties in the absolute cross section are less than 6\\%. This conclusion was tested by comparison with the Rutherford cross sections at low energies. The agreement with the Rutherford scattering is within 5\\% error bars at all angles (see Fig.3).\n\nTogether with the amplitude signal, the Si detectors provided a fast signal. This signal, together with a ``start\" signal from the RF of the cyclotron, was used for the Time-of-Flight measurements. This E-T combination is used for particle identification in the TTIK approach \\cite{16,17,18}. Of course, only $\\alpha$ particles should be detected as a result of the interaction of \\textsuperscript{16}O with helium at the chosen conditions. However, protons can be created in the Ti window, and protons can appear due to hydrogen admixtures in the gas. Indeed, we have observed a weak proton banana, likely as a result of reactions in the window. These protons were easily identified by time of flight and separated from the $\\alpha$ particles, as seen in Fig.1.\n\n\\section{\\label{sec:level3}Experimental Results and Discussion}\n\n\\begin{figure}[!t]\n \\begin{center}\n \\include{fig3}\n \\caption{\\textit{R} matrix fit (bold red curve) of the excitation functions for the $\\alpha$+\\textsuperscript{16}O elastic scattering. (a) The dashed (cyan) curve presents the data of the 0$^+$ level at the 8.3 MeV excitation energy \\cite{11} and (e) the dotted (black) line is a fit with the 2$^+$ level at the excitation energy of 9.0 MeV \\cite{6}.}\n \\end{center}\n\\end{figure} \n\n\\begin{table*}[!t]\n\\label{tab:1}\n\\caption{\\textsuperscript{20}Ne levels}\n\\begin{center}\n\\begin{tabular}{lccccccccccclccl}\n\\hline\n\\hline\n\\multirow{1}{*}{N} & \\multicolumn{3}{c}{TUNL data \\cite{6}} & \\multirow{1}{*}{} & %\n \\multicolumn{2}{c}{H. Shen et al. 
\\cite{9}} & \\multirow{1}{*}{} & %\n \\multicolumn{3}{c}{This work} & \\multirow{1}{*}{} & \\multicolumn{4}{c}{CNCIM} \\\\\n\\cline{2-4}\\cline{6-7}\\cline{9-11}\\cline{13-16}\n & E$_x$ & J$^\\pi$ & $\\Gamma_\\alpha$ & & E$_x$ & $\\Gamma_\\alpha$ & & E$_x$ &\n $\\Gamma_\\alpha$ & $\\gamma_\\alpha$ & & E$_x$ & J$^\\pi$ & SF$_p$ & SF$_\\alpha$\\\\\n & (MeV) & & (keV) & & (MeV) & (keV) & & (MeV) & (keV) & & & (MeV) & & & \\\\\n\\hline\n\n1 &\t0 & 0$^+_1$ & & & - & - & & 0 & & Large & & 0 & 0$^+$ & 0.36 & 0.73\\\\\n2 &\t1.63 & 2$^+_1$ & & & - & - & & 1.63 & & Large & & 2.242 & 2$^+$ & 0.41 & 0.67\\\\\n3 &\t4.25 & 4$^+_1$ & & & - & - & & 4.25 & & Large & & 4.58 & 4$^+$ & & 0.62\\\\\n\n4 &\t5.78 & 1$^-$ & (28$\\pm$3)x10$^{-3}$ & & - & - & & 4.45 & 0.03 & 1.4\\\\\t\t\t\t\n5 &\t6.73 & 0$^+_2$ & 19$\\pm$0.9 & & 6.72 & 11 & & 6.78 & 20.6 & 0.47 & & 6.94 & 0$^+_3$ & 0.55 & 0.46\\\\\n6 &\t7.16 & 3$^-$ & 8.2$\\pm$0.3 & & 7.16 & 10 & & 7.18 & 8.3 & 1.37\\\\\t\t\t\t\n7 &\t7.19 & 0$^+_3$ & 3.4$\\pm$0.2 & & 7.19 & 5 & & 7.20 & 3\t& 0.019 & & 6.27** & 0$^+_2$ & 0.055 & 0.44**\\\\\n8 &\t7.42 & 2$^+_2$ & 15.1$\\pm$0.7 & & 7.43 & 7 & & 7.44 & 14.3 & 0.19 & & 7.39 & 2$^+_3$ & 0.01 & 0.12\\\\\n9 &\t7.83 & 2$^+_3$ & 2 & & 7.83 & 1 & & 7.85 & 3.68 & 0.01 & & 7.15** & 2$^+_2$ & 0.12 & 0.18**\\\\\n\n10 & 8.45 & 5$^-$ & 0.013$\\pm$0.004 & & 8.45 & 0.02 & & 8.45 & 0.013\\\\\t\t\t\t\t\n11 & 8.71 & 1$^-$ & 2.1$\\pm$0.8 & & & & & 8.71 & 3.5\\\\\t\t\t\t\t\n12 & $\\approx$8.7 & 0$^+_4$ & $>$800 & & 8.62 & 1470 & & 8.77$\\pm$0.15 & 750$\\pm$220 & $\\sim$0.25 & & 9.66** & 0$^+_4$ & 0.002 & 0.18**\\\\\n13 & 8.78 & 6$^+_1$ & 0.11$\\pm$0.02 & & - & - & & 8.78 & 0.14 & 0.5 & & 9.49 & 6$^+$ & & 0.51\\\\\n14 & 8.85 & 1$^-$ & 19 & & 8.84 & 27 & & 8.85 & 18.0\\\\\t\t\t\t\t\n15 & 9.00 & 2$^+_4$ & $\\approx$800 & & 8.87 & 1250 & & 8.79$\\pm$0.10 & 695$\\pm$120 & 0.86 & & 8.36** & 2$^+_4$ & 0.02 & 0.02**\\\\\n16 & 9.03 & 4$^+_3$ & 3 & & 9.02 & & & 9.03 & 1.9 & 0.03 & & 9.0 & 4$^+$ & & 0.09\\\\\n\n17 & 9.12 & 
3$^-$ & 3.2 & & 9.09 & 4 & & 9.13 & 4.1\\\\\n18 & 9.19 & 2$^+$ & & & - & - & & (9.29) & $\\leq$10\t\\\\\t\t\t\t\n19 & 9.48 & 2$^+$ & 29$\\pm$15 & & 9.48 & 46 & & 9.48 & 65$\\pm$20 & 0.02?\\\\\t\t\t\t\n20 & 9.99 & 4$^+_4$ & 155$\\pm$30 & & 10.02 & 150 & & 9.97 & 157 & 0.38 & & 9.5 & 4$^+$ & & 0.009\\\\\n21* & 10.26 & 5$^-$ & 145$\\pm$40 & & 10.26 & 190 & & 10.26 &\t& 1.9\t\\\\\t\t\t\n22 & 10.41 & 3$^-$ & 80 & & 10.40 & 101 & & 10.41 & &\t\\\\\n\n23 & 10.58 & 2$^+$ & 24 & & 10.56 & 15 & & 10.58 & & & & 10.2 & 2+ & 0.005 & 0.04\\\\\n24 & 10.80 & 4$^+_4$ & 350 & & 10.75 & 400 & & 10.80 & & & & 10.7 & 4$^+$ & & 0.04\\\\\n25 & 10.97 & 0$^+_5$ & 580 & & 10.99 & 700 & & 10.97 & & & & 11.9 & 0$^+$\t\\\\\t\n26 & 11.24 & 1$^-$ & 175 & & 11.19 & 85 & & 11.24 & &\\\\\t\t\t\t\t\n27 & 11.95 & 8$^+$ & (3.5$\\pm$1.0)x10$^{-2}$ & & & & & 11.95 & & 0.35 & & 11.50 & 8$^+$ & & 0.40\\\\\n\\hline\n\\hline\n\\multicolumn{16}{l}{ * For the levels with numbers 21-27 the parameters of the present fit were fixed as in \\cite{6} } \\\\\n\\multicolumn{16}{l}{ ** Calculated in \\textit{psd} space. SF is to the first excited state in \\textsuperscript{16}O; SFs for the ground state in \\textsuperscript{16}O are $\\leq$ 0.1 } \\\\\n\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\nThe experimental excitation functions were analyzed using multilevel multichannel \\textit{R} matrix code \\cite{21}. The calculated curves were convoluted with the experimental energy resolution. The experimental energy resolution was $\\sim$ 30 keV at zero degrees and deteriorated up to $\\sim$100 keV with angles estranging from zero degrees. We did not notice a deterioration of the energy resolution with the energy loss of the beam in the chamber. As it seen in Table 1 the excitation energies of the resonances of the present work agree with the adopted ones \\cite{6} within 10-15 keV. 
This agreement is evidence of our overall good energy calibrations and the correct account of the ion energy loss in helium in the present work. Fig.2 and Fig.3 give the experimental excitation functions together with the present \\textit{R} matrix fit. Fig.2 demonstrates the data at 180$^\\circ$ and illustrates the differences in the fits due to different parameters of the broad 0$^+_4$ resonance. The data on the resonances used in the present \\textit{R}-matrix fit are summarized in Table 1 together with the adopted data \\cite{6}. Data of the last \\textit{R} matrix analysis \\cite{9} are also given in Table 1. The analysis \\cite{9} resulted in level parameters which are often different from the adopted ones.\n\nOur analysis (Table 1) resulted in small discrepancies with the data \\cite{6} in the detailed description of the narrow states (widths less than 10 keV). The \\textit{R}-matrix code \\cite{21} is tuned for the TTIK measurements and for the analyses of states with a width of over 10 keV to accelerate automatic fit calculations. Therefore, the small disagreements for the narrow states are not significant. We focused on the broader states and on the part of the excitation function changing slowly with energy and angle.\n\nOur analysis indicates that all strong alpha cluster states within 5-6 MeV below or above the investigated excitation energy region can influence the \\textit{R}-matrix fit. Therefore, we included in the fit the \\textsuperscript{20}Ne ground state, the first 2$^+$ and 4$^+$ states, and the 1$^-$ (5.67 MeV) state, even though they lie below the investigated region (see Table 1). Among these states, only the 1$^-$ state is above the $\\alpha$ particle decay threshold in \\textsuperscript{20}Ne; the reduced width of this state is known, and it is large. Shell model calculations also give very large spectroscopic factors for all members of the ground state band. 
Indeed, a good agreement needs large values of the corresponding amplitudes (over 0.7). Above the investigated excitation energy region, high spin $\\alpha$-cluster resonances are mainly known. Each of these resonances (see Table 1) considered separately influences the fit, especially at 180$^\\circ$. However, their joint influence is much weaker. This cancellation is due to different parities of the spins. Only the influence of the closest to the investigated region, the 4$^+$ (9.99 MeV) resonance, can be noticed. A somewhat better fit needs the width of this resonance to be slightly larger $\\sim$160 keV (well within the quoted uncertainties, see Table 1). Parameters of all other resonances above the investigated region were fixed according to the data of Ref. \\cite{6}. A good general fit ($\\chi^2$=1.1) was reached in this way without any backward resonance inclusion.\n\n\\begin{table*}[!t]\n \\caption{$\\alpha$+\\textsuperscript{12}C levels in \\textsuperscript{16}O}\n \\centering\n \\begin{tabular}{ccccccccc}\n \\hline\n \\hline \n \\multicolumn{1}{c}{\\textsuperscript{16}O level} & \\multirow{1}{*}{} & \\multicolumn{1}{c}{$\\Gamma_\\alpha$ $_{exp}$ keV \\cite{6} } & \\multirow{1}{*}{} & \\multicolumn{1}{c}{$\\gamma_\\alpha$(1); -V$_0$ MeV} & \\multirow{1}{*}{} & \\multicolumn{1}{c}{$\\gamma_\\alpha$(2); -V$_0$ MeV} & \\multirow{1}{*}{} & \\multicolumn{1}{c}{$\\gamma_\\alpha$(3); -V$_0$ MeV}\\\\ \n \\cline{1-1}\\cline{3-3}\\cline{5-5}\\cline{7-7}\\cline{9-9}\n 1$^-$; 9.58 MeV & & 420$\\pm$20 & & 0.70; 138.2 & & 0.72; 150.0 & & 0.84; 158.5 \\\\\n 4$^+$; 10.36 MeV & & 26$\\pm$3 & & 0.68; 125.3 & & 0.88; 139,6 & & 1.21; 143.2 \\\\\n \\hline\n \\hline \n \\end{tabular}\n \\label{tab:2}\n\\end{table*}\n\nAll resonances are at the maximum at 180$^{\\circ}$. The broad hump at this angle at cm energy of 4 MeV (Fig. 2) is a clear indication for the presence of low spin states, 0$^+$ and 2$^+$. Higher spin states are narrower. 
A single level (2$^+$) cannot produce the strong peak at different angles, and levels of different parity, such as 2$^+$ and 1$^-$, interfere destructively at 180$^{\\circ}$. The present analysis resulted in two practically degenerate states at $\\sim$ 8.8 MeV with the same width (see Table 1). Our results for the broad 2$^+$ are rather close to the adopted values. However, if this level is moved to 9.0 MeV excitation energy (as in \\cite{6}) then the fit becomes worse, especially in the vicinity of the dip due to the presence of another 2$^+$ level at 9.48 MeV (Fig. 3(e)). The fit becomes even worse with a very broad 2$^+$ level resulting from the parameters of Ref. \\cite{9}.\n\nT. Fortune et al. \\cite{11} observed a broad distribution centered at 8.3 MeV excitation energy in \\textsuperscript{20}Ne and related it to a 0$^+$ level. Fig. 2 presents \\textit{R} matrix calculations with the 0$^+$ level moved to 8.3 MeV excitation energy. This move destroyed the good fit. Fig. 2 also shows that the very broad 0$^+$ level of the fit of Ref. \\cite{9} destroys the agreement.\n\nThe 18$^{th}$ (2$^+$) level of \\cite{6} is in the energy region of the present investigation. It was observed in a single work, in a study of \\textsuperscript{20}Na $\\beta^+$ decay. We have not found any reliable evidence for the presence of this state. If it exists, its width should be less than 10 keV. We observe a fluctuation of points which might be associated with a narrow 2$^+$ level at an excitation energy of 9.29 MeV.\n\nWe noted that the 19$^{th}$ level, 2$^+$, has the adopted \\cite{6} excitation energy in our fit but a different width of 65$\\pm$20 keV. While the factor-of-two difference in the widths with respect to Ref. \\cite{6} only marginally exceeds the error bars, the resulting improvement of the fit is evident. The adopted data \\cite{6} for this level are based on the results of a single older work \\cite{22}. 
The authors \\cite{22} observed a weak $\\gamma$ decay of this state in the presence of a large background. A broader level than in Ref. \\cite{22} was also found in work \\cite{9}, as shown in Table 1.\n\nWe characterized the alpha-cluster properties of the states above the alpha particle decay threshold by SF=$\\gamma_\\alpha$=$\\Gamma_\\alpha$ $_{exp}$\/$\\Gamma_\\alpha$ $_{calc}$, where $\\Gamma_\\alpha$ $_{calc}$ is the single alpha particle width calculated in the $\\alpha$-core potential. To normalize the SFs we calculated these values for the well-known alpha-cluster states in \\textsuperscript{16}O.\n\nThe Woods-Saxon potential was used to calculate the limit ($\\Gamma_\\alpha$ $_{cal}$) as the widths of single particle states in the potential. First, we tried to fit the widths of the known alpha-cluster states, 1$^-$ and 4$^+$, in \\textsuperscript{16}O so that $\\gamma_\\alpha$ = $\\Gamma_\\alpha$ $_{exp}$\/$\\Gamma_\\alpha$ $_{cal}$ $\\sim$ 1.0. The real part of the potential was changed to fit the binding energy of the states. The radius of the potential was chosen to be R = r$_0$ $\\times$12$^{1\/3}$; the Coulomb potential was taken into account as a charged-sphere potential with R$_{coul}$ = R. We made the first calculation (1) with r$_0$ = 1.31 fm and the diffuseness a = 0.65 fm, then we set r$_0$ = 1.23 fm (2), and we finally performed the third calculation (3) with r$_0$ = 1.23 fm and a = 0.6 fm. The results are summarized in Table 2. The $\\gamma_\\alpha$ calculations for the \\textsuperscript{20}Ne states were made with the final parameter set (3) and should be compared with the SFs given by theory.\n\n\\section{\\label{sec:level4}Theoretical description of the $\\alpha$ cluster states in \\textsuperscript{20}$\\textbf{Ne}$}\n\nCNCIM \\cite{2} is among the latest developments of the classical shell model approaches towards clustering. 
This model targets a combination of classical configuration interaction techniques with algebraic methods that emerge in the description of clustering. The ability to construct a fully normalized set of orthogonal cluster channels is at the core of this approach; the overlaps of the shell model states with these channels are associated with spectroscopic factors and compared in Table 1 with the reduced widths obtained earlier. The CNCIM allows us to study clustering features that emerge in models with well-established traditional shell model Hamiltonians. These effective model Hamiltonians are built from fundamental nucleon-nucleon interactions followed by phenomenological adjustments to selected observables; thus, they generally describe a broad scope of experimental data with high accuracy. Apart from using these phenomenological shell model Hamiltonians, our study does not involve any adjustable parameters. In order to fully explore the problem, we considered several different model spaces and corresponding Hamiltonians: the \\textit{sd} model space with the USDB interaction \\cite{5}; the unrestricted \\textit{p-sd} shell model Hamiltonian \\cite{23} (the same Hamiltonian has been used in Refs. \\cite{2}); the WBP Hamiltonian \\cite{24} allowing 0$\\hbar\\omega$, 1$\\hbar\\omega$ and 2$\\hbar\\omega$ excitations in the \\textit{p-sd-pf} valence space; and the \\textit{sd-pf} Hamiltonian \\cite{25}. This sequence of Hamiltonians represents an expansion of the valence space from \\textit{sd} to \\textit{p-sd}, to \\textit{p-sd-pf}. All models are in good agreement for the \\textit{sd}-states; the low-lying negative parity states as well as positive 2ph excitations are dominated by the \\textit{p-sd} configurations. Thus, in Table 1 we only include the results from the \\textit{p-sd} Hamiltonian, which turned out to be the most representative, although the following discussion and conclusions are largely based on comparisons. 
The lowest states associated with significant \\textit{fp} shell component appear at excitation energies above 15 MeV.\n\nShell model calculations for \\textsuperscript{20}Ne with open \\textit{2s-1d (sd)} shells predict well the ground state band. The structure of this band is based on the dominating SU(3) configuration (about 75\\%) with quantum numbers (8,0). The model predicts (Table 1) large SF for all members of the band based on the ground state in \\textsuperscript{20}Ne. The 0$^+$, 2$^+$, and 4$^+$ members of the band are below the $\\alpha$ particle decay threshold and do not have observable $\\alpha$ particle widths. However, large $\\alpha$ cluster SFs for these states provide for a better \\textit{R} matrix fit. While uncertainties for the \\textit{R} matrix amplitudes for these states are large, the fit (Fig. 2 and 3) requires these amplitudes to be close to those of the negative parity states with the known extreme $\\alpha$ cluster structure. The $\\alpha$ particle widths of the highest 6$^+$ and 8$^+$ members of this band are known. There is a long history of attempts and ideas to describe these widths using shell model approaches (see, for instance \\cite{5}). The most calculations predicted large clustering for the band but could not explain the decrease of the reduced width for the 6$^+$ and 8$^+$ members. As one sees in Table 1, the CNCIM calculations are in fine agreement with the experimental data for these states. All members of this band have significant clustering that diminishes at higher energies due to configuration mixing. The second 0$^+$ state within \\textit{sd} space appears at around 6.7 MeV of excitation (in \\textit{psd} model in Table 1, this is a third 0$^+$ state at 6.9 MeV). This level also has a substantial clustering component and absorbs nearly all 15\\% of the remaining strength of the SU(3) (8,0) component. 
The following 2$^+_2$ (in experiment and in the \\textit{sd} model, but third in the \\textit{p-sd} model as discussed in what follows) at about 7.4 MeV of excitation energy, being a member of the 0$^+_2$ band, can be described in a similar way.\n\nUnfreezing the \\textit{1p} shell (which is filled in \\textsuperscript{16}O) results in the doubling of the levels, as is observed; as shown in Table 1, new 0$^+$ and 2$^+$ levels, marked with asterisks, appear. In our calculations, the ordering in energy is reversed for both doublets. Structurally, the members of each doublet are very different, which allows them to be so close in energy and inhibits configuration mixing and Wigner repulsion. One of the doublet levels (the lower 0$^+$ and 2$^+$ levels) has a large $\\alpha$ cluster SF relative to the first excited state in \\textsuperscript{16}O and a much smaller SF relative to the ground state in \\textsuperscript{16}O (note that the theory gives the wrong order for the levels in question, see Table 1). Indeed, the predicted difference in the structure is supported by population selectivity in different nuclear reactions. The 6.72 MeV 0$^+$ and 7.42 MeV 2$^+$ are populated much more strongly than the neighboring 7.20 and 7.83 MeV levels in the \\textsuperscript{16}O(\\textsuperscript{6}Li, d) reaction \\cite{26}. The opposite is the case in the \\textsuperscript{12}C(\\textsuperscript{9}Be, n) or \\textsuperscript{12}C(\\textsuperscript{12}C,$\\alpha$) \\cite{27} reactions. \n\nThe theory gives large single particle spectroscopic factors for the \\textit{sd} states and smaller ones for the 7.20 and 7.83 MeV states. Indeed, one expects that states in \\textsuperscript{20}Ne with a hole in the \\textit{p1\/2} shell will be weakly excited in the single nucleon transfer \\textsuperscript{19}F(\\textsuperscript{3}He, d) reaction, in accordance with the experimental data \\cite{28}. 
The experiments \\cite{11,12,28} also support detailed single particle SF calculations, giving a smaller SF for the ground state than for the 0$^+$ 6.73 MeV state and a much higher SF for the first excited 2$^+$ than for the 2$^+$ member of the band based on the 6.73 MeV state.\n\nIn the \\textit{sd} valence space (USDB) the third 0$^+$ state appears only at 11.9 MeV, and it has a relatively small alpha spectroscopic factor; opening of the \\textit{p}-shell, in addition to the 0$^+$ at 6.27 MeV, leads to a 0$^+$ state at 9.7 MeV. However, the predicted state has a low proton spectroscopic factor, which is not well supported by observations. A similar serious discrepancy is observed with the broad 4$^{th}$ 2$^+$ state at around 9 MeV: both the \\textit{sd} and \\textit{p-sd} shell models produce candidates, but with very low alpha SFs. As is evident from our studies, strong coupling to alpha channels could only come from the \\textit{fp} shell and higher shells. The lowest two bands saturate the alpha strength within the \\textit{sd} configurations; holes in the \\textit{p}-shell do not lead to a significant contribution due to the low level of core excitation in the ground state of \\textsuperscript{16}O. While our models predict high excitation energies for states with significant \\textit{fp} components, we can speculate that strong configuration mixing, collective effects such as deformation, and coupling to the continuum via the super-radiance mechanism \\cite{29,30} can enhance the admixtures needed to reproduce the broad resonances observed. There is a similar problem with the $\\alpha$ cluster negative parity states 1$^-$ and 3$^-$: the \\textit{p-sd} Hamiltonian produces an acceptable spectrum but the alpha spectroscopic factors are low (see also \\cite{31}).\n\n\\section{\\label{sec:level5}Conclusions}\n\nIn this work we study $\\alpha$-clustering in \\textsuperscript{20}Ne. This nucleus is a benchmark example for many theoretical techniques targeting clustering in light nuclei. 
Our \\textit{R}-matrix analysis of TTIK experimental data confirms previously known results and establishes new constraints on the positions and widths of the resonances. We compared our findings with those obtained theoretically using the cluster-nucleon configuration interaction approach developed in several previous works (see \\cite{2} and references therein). There is good overall agreement between the theoretically predicted and observed spectra. Our theoretical approach describes very well the ground-state band and the band built on the first 0$^+$ state. By allowing cross-shell excitations from the \\textit{p}-shell, it was possible to reproduce the band built on the second 0$^+$ state. For these states, all spectroscopic factors for alpha transitions to the ground state of \\textsuperscript{16}O and to the first excited state in \\textsuperscript{16}O, as well as the proton spectroscopic factors to the ground state of \\textsuperscript{19}F, are well reproduced. The situation is not as good when it comes to the 1$^-$ and 3$^-$ resonances and the 4$^{th}$ 0$^+$ and 4$^{th}$ 2$^+$ states: all of these are broad and have exceptionally large alpha spectroscopic factors. \n\nIn order to describe such strong clustering features, these states must include configurations from the \\textit{fp} shell and from higher oscillator shells; however, the Hamiltonians that we explored predict these contributions to be negligible below 15 MeV of excitation. Thus, the inability of the theoretical models to describe only the broad states, while working well elsewhere, suggests an additional coupling mechanism unaccounted for in traditional shell-model Hamiltonians. The super-radiance mechanism suggested in Refs. \\cite{29,30} could provide this coupling. Alternatively, the problem could be associated with the relatively poorly known cross-shell interactions. 
Therefore, this work shows that the experimental study of alpha clustering represents an outstanding tool for exploring cross-shell excitations, especially those of multi-particle multi-hole nature.\n\n\\begin{acknowledgments}\nThis work was supported by the Ministry of Education and Science of the Republic of Kazakhstan (grant number \\#0115\u0420\u041a03029 ``NU-Berkeley\", 2014-2018; grant number \\#0115\u0420\u041a022465, 2015-2017). This material is also based upon work supported by the U.S. Department of Energy Office of Science, Office of Nuclear Physics under Grants No. DE-FG02-93ER40773 and No. DE-SC0009883. G.V.R. is also grateful to the Welch Foundation (Grant No. A-1853).\n\\end{acknowledgments}\n\n\\nocite{*}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzigyu b/data_all_eng_slimpj/shuffled/split2/finalzzigyu new file mode 100644 index 0000000000000000000000000000000000000000..7f530f79e5c73c32ca9be5ac9345ed7927d6c0fd --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzigyu @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{Introduction}\n\nLiDAR point clouds, compared with other sensors such as cameras and radar in autonomous driving perception, have the advantages of both accurate distance measurements and fine semantic descriptions. Studies on point clouds have gained increasing popularity in the computer vision area. Typical research topics include 3D shape recognition, part segmentation, indoor scenario parsing, and outdoor large-scale scene understanding. Several benchmark datasets such as ModelNet40 \\cite{wu20153d}, ShapeNet \\cite{chang2015shapenet}, S3DIS \\cite{armeni20163d}, and Semantic3D \\cite{hackel2017semantic3d} have been established for these topics. However, there exists a spatial-property disparity between these datasets and the outdoor 360$^\\circ$ sweep scans typically produced by an on-board vehicle LiDAR. 
Per-sweep LiDAR point clouds are much sparser, and their sparsity generally increases with the reflection distance. Examples in Figure \\ref{fig2} demonstrate the difference between dense and sparse point clouds in outdoor areas.\n\nPoint cloud processing approaches can generally be classified into three categories. First, projection into 2D representations \\cite{lawin2017deep, wu2018squeezeseg, caltagirone2017fast, meyer2019lasernet}. The basic idea of this approach is to transform unstructured point clouds into 2D images, which can be directly plugged into image-based Convolutional Neural Networks (CNN) \\cite{krizhevsky2012imagenet}. Consequently, the 2D projection enables an easier fusion strategy with images \\cite{chen2017multi, meyer2019sensor} or multi-views \\cite{su2015multi, qi2016volumetric}. However, the drawback is that a large number of points are inevitably occluded from the projection perspective, which causes massive information loss. Second, voxelization into 3D volumetric grid cells \\cite{maturana2015voxnet, huang2016point, li20173d, tchapmi2017segcloud, jiang2018pointsift} or their structural variations (e.g., Octree \\cite{riegler2017octnet} or Spherical \\cite{rao2019spherical}). Even though this approach is more effective at maintaining points in 3D space, the data-loss problem is not eliminated, due to grid quantization; 3D convolutions are also computationally expensive. Third, directly processing raw point clouds. A fundamental model, PointNet \\cite{qi2017pointnet}, has been specifically designed to process raw unstructured point clouds, and it has inspired many other studies \\cite{qi2017pointnet++, wang2018dynamic, wu2019pointconv, wang2019graph, wang2019associatively} to follow this idea. 
Many existing methods have reported their performance on several dense-data classification and segmentation benchmarks; however, their effectiveness on sparse data is unknown.\n\nTransfer learning \\cite{pan2009survey} and domain adaptation \\cite{tzeng2017adversarial} techniques have recently been discussed to bridge the gap between source data and target data. They either extend the training procedure with new labeled data \\cite{rist2019cross}, or re-design the network architecture by adopting an adversarial generator-discriminator \\cite{lai2010object, wu2019squeezesegv2} for semi-supervised or unsupervised learning. In this particular study, however, we focus on the specific problem of semantic segmentation for sparse point clouds. We leave cross-domain adaptation to our continued work. \n\nTo the best of our knowledge, at the time of our work, VirtualKITTI \\cite{3dsemseg_ICCVW17}, SemanticKITTI \\cite{behley2019dataset}, and DeepenaiKITTI\\footnote{https:\/\/www.deepen.ai\/kitti-labels-download\/} were the available public datasets that provide semantic labels for sparse point clouds. We conduct a thorough comparison of more than 10 state-of-the-art methods that directly process raw point clouds. By evaluating their effectiveness on sparse data, we reveal that network architecture, neighborhood selection, and local sparsity are essential factors that affect segmentation performance. This investigation exposes the advantages and shortcomings of previous methods, and it motivates us to propose our method, Multi-domain Neighborhood Embedding and Weighting (MNEW). The key idea is illustrated in Figure \\ref{fig1}. In MNEW, we collect multi-scale neighborhood points in both the static geometry domain and the dynamic feature domain. 
Given a query point, its geometry neighbors are based on the Euclidean distance, and its feature neighbors are based on a similarity that varies dynamically across network layers. For each neighborhood point, we first assign attention weights according to its location distance and feature similarity. We also compute the geometry\/feature sparsity at each neighbor point, which is then transformed into adaptive weighting factors. The embedded neighborhood feature is a combination of the weighted convolution outputs in the two domains. The overall network structure inherits from PointNet, which is able to capture both pointwise details and global semantics. In addition, MNEW extends this capability to embody local contextual information. Experiments in Section \\ref{Experiments} demonstrate the effectiveness of our method on sparse data.\n\nThe major contributions of this paper are summarized as follows:\n\\begin{itemize}\n\t\\itemsep 0em\n\t\\item We introduce MNEW, a novel semantic segmentation model that is effective for per-sweep LiDAR data, which is crucial for the application of autonomous driving perception.\n\t\\item We design a neighborhood embedding method in both the static geometry domain and the dynamic feature space, which embodies an attention mechanism based on location distance and feature similarity, as well as a sparsity-adapted weighting mechanism.\n\t\\item We conduct a thorough comparison of a number of recent methods, evaluating their effectiveness on sparse point clouds. 
We claim that network architecture, neighborhood selection, and weighting mechanism are essential factors.\n\t\\item We achieve state-of-the-art performance on sparse point clouds, and we observe that performance varies with distance and local sparsity.\n\\end{itemize}\n\n\n\n\n\n\t\n\n\n\n\\begin{table*}[!htb]\n\t\\begin{center}\n\t\t\\footnotesize\n\t\n\t\t\\begin{tabular}{c c c *3c c c}\n\t\t\t\\toprule\n\t\t\t\\multirow{2}{*}{Method}\t\t\t\t& \\multirow{2}{*}{Architecture} \t& \\multirow{2}{*}{\\makecell{Feature \\\\Extractor}}\t& \\multicolumn{3}{c}{Neighborhood}\t\t\t\t\t& \\multirow{2}{*}{Weighting}\t& \\multirow{2}{*}{Loss}\t\t\t\t\t\t\t\\\\ \\cline{4-6}\n\t\t\t&\t\t\t\t\t\t\t\t\t&\t\t\t\t\t\t\t\t\t\t\t\t\t& Domain\t\t\t& Selection\t\t& Embedding \t&\t\t\t\t\t\t\t\t&\t\t\t\t\t\t\t\t\t\t\t\t\\\\ \n\t\t\t\\midrule\n\t\t\tPointNet \\cite{qi2017pointnet} \t\t& Dilated \t\t\t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& -\t\t\t\t\t& - \t\t\t& Points\t\t& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE}$\t\t\t\t\t\t\t\\\\\n\t\t\tPointNet++ \\cite{qi2017pointnet++}\t& Encoder-Decoder \t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& Multi-radius\t& Points \t\t& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE}$\t\t\t\t\t\t\t\\\\\n\t\t\tA-CNN \\cite{komarichev2019cnn}\t\t& Encoder-Decoder \t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& Ring-shaped \t& Points\t\t& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE}$\t\t\t\t\t\t\t\\\\ \n\t\t\tKP-FCNN \\cite{thomas2019kpconv}\t\t& Encoder-Decoder\t\t\t\t\t& KP-Conv\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& kNN in Radius\t& Points\t\t& Geometry Distance\t\t\t\t& $\\mathcal{L}_{CE} + \\mathcal{L}_{Reg}$ \t\t\\\\\n\t\t\tDGCNN \\cite{wang2018dynamic}\t\t& Dilated\t\t\t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Feature\t\t\t& kNN\t\t\t& Query-Edges\t& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE}$ \t\t\t\t\t\t\t\\\\\n\t\t\tRS-CNN \\cite{liu2019relation}\t\t& Encoder-Decoder\t\t\t\t\t& RS-Conv\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& Random-pick\t& Query-Edges\t& 
-\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE}$ \t\t\t\t\t\t\t\\\\\n\t\t\tPointWeb \\cite{zhao2019pointweb}\t& Encoder-Decoder\t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& kNN\t\t\t& Pairwise-Edges& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE}$ \t\t\t\t\t\t\t\\\\\n\t\t\tGACNet \\cite{wang2019graph}\t\t\t& Encoder-Decoder\t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& Radius\t\t& Points\t\t& Feature Similarity\t\t\t& $\\mathcal{L}_{CE}$ \t\t\t\t\t\t\t\\\\\n\t\t\tPointConv \\cite{wu2019pointconv}\t& Encoder-Decoder\t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& Radius\t\t& Points\t\t& Local Density\t\t\t\t\t& $\\mathcal{L}_{CE}$ \t\t\t\t\t\t\t\\\\\n\t\t\tASIS \\cite{wang2019associatively}\t& Encoder-Decoder\t\t\t\t\t& MLP\t\t\t\t\t\t\t\t\t\t\t\t& Geometry\t\t\t& Radius\t\t& Points\t\t& -\t\t\t\t\t\t\t\t& $\\mathcal{L}_{CE} + \\mathcal{L}_{Disc}$\t\t\\\\ \n\t\t\t\\hline\n\t\t\t\\multirow{3}{*}{Ours (MNEW)}\t& \\multirow{3}{*}{\\makecell{Dilated \\\\ (improved)}}\t& \\multirow{3}{*}{MLP}\t& \\multirow{3}{*}{\\makecell{Geometry \\\\+ Feature}}\t& \\multirow{3}{*}{\\makecell{Multi-radius \\\\+ Multi-kNN}}\t& \\multirow{3}{*}{Query-Edges}\t& \\multirow{3}{*}{\\makecell{Geometry Distance \\\\+ Feature Similarity \\\\+ Neighbor Sparsity}}\t& \\multirow{3}{*}{$\\mathcal{L}_{CE} + \\mathcal{L}_{Reg}$} \\\\ \n\t\t\n\t\t\t&\t\t\t\t\t\t\t\t\t&\t\t\t\t\t\t\t\t\t\t\t\t\t&\t\t\t\t\t&\t\t\t\t&\t\t\t\t&\t\t\t\t\t\t\t\t&\t\t\t\t\t\t\t\t\t\t\t\t\\\\\n\t\t\t&\t\t\t\t\t\t\t\t\t&\t\t\t\t\t\t\t\t\t\t\t\t\t&\t\t\t\t\t&\t\t\t\t&\t\t\t\t&\t\t\t\t\t\t\t\t&\t\t\t\t\t\t\t\t\t\t\t\t\\\\ \n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-0.5cm}\n\t\\caption{Comparison of methods that directly process raw point clouds.}\n\n\t\\label{tab1}\n\\end{table*}\n\n\n\t\n\\section{Related Work} \\label{Related Work}\n\n\\subsection{Methods} \\label{sec2.1}\nSince conversion-based approaches like 2D projection or 3D voxelization inevitably suffer the problem of losing points, 
in this section we focus on related semantic segmentation methods that directly process raw point clouds, which are well suited to exploit the capability of 3D data and are close to our work.\n\nPointNet \\cite{qi2017pointnet} is considered a milestone method that takes raw point clouds as input without any format transformation. This method adopts shared Multi-Layer Perceptrons (MLP) \\cite{haykin1994neural} as the key component to learn pointwise features, and a pooling operation follows to obtain a global feature representing the maximal response over all points. The limitation of PointNet is that it does not consider local spatial relationships with neighborhood points. To address this issue, PointNet++ \\cite{qi2017pointnet++} was proposed with a hierarchical encoder-decoder structure. Analogous to the Fully Convolutional Networks (FCN) \\cite{long2015fully} used in image segmentation, PointNet++ extracts local features by grouping and subsampling points at increasing contextual scales, and propagates the subsampled features to their original points by interpolation.\n\nSeveral subsequent studies improve PointNet and PointNet++ with upgraded network designs. A-CNN \\cite{komarichev2019cnn} introduces annular convolution with ring-shaped neighborhoods to reduce the duplicated computation that exists in PointNet++ multi-scale grouping. Inspired by kernel pixels in image-based convolution, KP-FCNN \\cite{thomas2019kpconv} creates local 3D spatial filters using a set of kernel points. A function KP-Conv between kernel points and input points is defined, which replaces the MLP operation in PointNet\/PointNet++. As an alternative to processing points independently, DGCNN \\cite{wang2018dynamic} and RS-CNN \\cite{liu2019relation} employ the idea of a graph, which embeds edges between a query point and its neighborhood points. The difference is that DGCNN follows the PointNet pipeline whereas RS-CNN follows the encoder-decoder structure of PointNet++. 
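To make the query-edge idea concrete, here is a minimal NumPy sketch of DGCNN-style edge construction (our own simplified, single-scale illustration with hypothetical names, not the authors' implementation): each query point gathers its k nearest neighbors and forms an edge feature from the neighbor itself and its offset to the query.

```python
import numpy as np

def knn_edge_features(points, k=4):
    """For each query point x_i, gather its k nearest neighbors x_j and
    build edge features (x_j, x_j - x_i), as in a DGCNN-style graph."""
    n = points.shape[0]
    # pairwise squared Euclidean distances, shape [n, n]
    diff = points[:, None, :] - points[None, :, :]
    dist2 = np.sum(diff ** 2, axis=-1)
    np.fill_diagonal(dist2, np.inf)          # exclude the query point itself
    idx = np.argsort(dist2, axis=1)[:, :k]   # [n, k] neighbor indices
    neighbors = points[idx]                  # [n, k, 3] neighbor coordinates
    offsets = neighbors - points[:, None, :] # [n, k, 3] relative edges
    return np.concatenate([neighbors, offsets], axis=-1)  # [n, k, 6]
```

In DGCNN the same construction is re-applied on intermediate feature maps, so the graph changes dynamically from layer to layer.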
PointWeb \\cite{zhao2019pointweb} extends this idea and proposes a pairwise edge embedding between every two points within the selected local region. Instead of the typical approach, which collects neighborhood points based on their location distance, DGCNN collects neighbors in the dynamic feature space based on their similarity. Likewise, GACNet \\cite{wang2019graph} assigns attention weights to different geometry neighbor points according to their feature attributes, so as to focus on the most relevant part of the neighborhood. A weighting mechanism is also utilized in PointConv \\cite{wu2019pointconv}, which estimates the kernel density to re-weight the continuous function learned by the MLP. SPG \\cite{landrieu2018large} and its continuation \\cite{landrieu2019point} partition point clouds into superpoint graphs and perform Edge-Conditioned Convolution (ECC) \\cite{simonovsky2017dynamic} to assign a label to each superpoint. The difference lies in the graph construction approach, which is solved as an unsupervised minimal geometric partition problem in \\cite{landrieu2018large} and by a learning-based method that minimizes a graph contrastive loss in \\cite{landrieu2019point}. However, neither of these two methods is end-to-end. The purpose of the graph contrastive loss is to detect the borders between adjacent objects: it pulls points belonging to the same object towards their centroid, while repelling those belonging to different objects. The same intuition underlies the discriminative loss ($\\mathcal{L}_{Disc}$) \\cite{de2017semantic} used in ASIS \\cite{wang2019associatively}, which is combined with the cross-entropy loss ($\\mathcal{L}_{CE}$) and the regularization loss ($\\mathcal{L}_{Reg}$) for joint semantic and instance segmentation.\n\nTable \\ref{tab1} summarizes an extensive comparison of selected methods, which vary in network architecture, feature extractor, neighborhood selection\/embedding, weighting mechanism, and loss function. 
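The pull\/push behavior of the discriminative loss mentioned among the loss functions above can be sketched as a NumPy toy version (margin values and function name are illustrative assumptions; the actual loss operates on learned embeddings inside the network):

```python
import numpy as np

def discriminative_loss(embeddings, instance_ids, delta_v=0.5, delta_d=1.5):
    """Toy pull/push loss: a variance term pulls each point toward its
    instance centroid, a distance term pushes different centroids apart."""
    centroids = []
    l_var = 0.0
    for i in np.unique(instance_ids):
        emb = embeddings[instance_ids == i]   # points of one instance
        mu = emb.mean(axis=0)
        centroids.append(mu)
        # hinged pull: only points farther than delta_v are penalized
        d = np.linalg.norm(emb - mu, axis=1)
        l_var += np.mean(np.maximum(d - delta_v, 0.0) ** 2)
    l_var /= len(centroids)
    # hinged push: centroid pairs closer than 2 * delta_d are penalized
    l_dist, pairs = 0.0, 0
    for a in range(len(centroids)):
        for b in range(a + 1, len(centroids)):
            d = np.linalg.norm(centroids[a] - centroids[b])
            l_dist += np.maximum(2 * delta_d - d, 0.0) ** 2
            pairs += 1
    if pairs:
        l_dist /= pairs
    return l_var + l_dist
```

Tight, well-separated instances drive both terms to zero, which is exactly the border-detecting behavior described above.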
Our proposed method is also listed to give an overview of the relations and distinctions. More details are discussed in Section \\ref{Method}.\n\n\n\n\n\\begin{table*}[!htp]\n\t\\begin{center}\n\t\t\\footnotesize\n\t\n\t\t\\begin{tabular}{c c c c c c c c}\n\t\t\t\\toprule\n\t\t\tDataset\t\t\t\t\t\t\t\t\t& Type \t\t\t\t& Attributes\t& Size (Train + Test)\t\t\t\t\t& Classes\t\t\t\t& Instance\t& Sequential\t& Train\/Valid \t\t\t\\\\\n\t\t\t\\midrule\n\t\t\tS3DIS \\cite{armeni20163d}\t\t\t\t& indoor dense\t\t& XYZ + RGB\t\t& 6 indoor areas, 273M points\t\t\t& 13\t\t\t\t\t& Yes\t\t& No\t\t\t& -\t\t\t\t\t\t\\\\\n\t\t\tScanNet \\cite{dai2017scannet}\t\t\t& indoor dense\t\t& XYZ + RGB\t\t& 1.5K scans, 2.5M frames\t\t\t\t& 21\t\t\t\t\t& Yes\t\t& Yes\t\t\t& -\t\t\t\t\t\t\\\\\n\t\t\tSemantic3D \\cite{hackel2017semantic3d}\t& outdoor dense\t\t& XYZ + RGB\t\t& 30 scenarios, 4009M points\t\t\t& 9\t\t\t\t\t\t& No\t\t& No\t\t\t& -\t\t\t\t\t\t\\\\\n\t\t\tNPM3D \\cite{roynard2018paris}\t\t\t& outdoor dense\t\t& XYZ\t\t\t& 6 scenarios, 143M points\t\t\t\t& 50 (10)\t\t\t\t& Yes\t\t& No\t\t\t& -\t\t\t\t\t\t\\\\\n\t\t\t\\midrule\n\t\t\tVirtualKITTI \\cite{3dsemseg_ICCVW17}\t& outdoor sparse\t& XYZ + RGB\t\t& 4 simulated scenes, 90 frames\t\t\t& 14\t\t\t\t\t& No\t\t& No\t\t\t& 80\\%\/20\\%\trandom\t\t\\\\\n\t\t\tDeepenaiKITTI\t\t\t\t\t\t\t& outdoor sparse\t& XYZ\t\t\t& 1 sequence, 100 frames\t\t\t\t& 17\t\t\t\t\t& No\t\t& Yes\t\t\t& 80\\%\/20\\% random\t\t\\\\\n\t\t\tSemanticKITTI \\cite{behley2019dataset}\t& outdoor sparse\t& XYZ\t\t\t& 22 sequences, 43.5K frames\t\t\t& 28 (20)\t\t\t\t& Yes\t\t& Yes\t\t \t& 10\/1 sequence\t\t\t\\\\\n\t\t\n\t\t\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-0.5cm}\n\t\\caption[Caption for LOF]{Comparison of selected datasets with dense and sparse point clouds.}\n\n\t\\label{tab2}\n\\end{table*}\n\t\n\t\n\\begin{table*}[!htb]\n\t\\begin{center}\n\t\t\\footnotesize\n\t\n\t\t\\begin{tabular}{c| *2c *2c *2c| *2c *2c 
*2c}\n\t\t\t\\toprule\n\t\t\t\\multirow{2}{*}{Method}\t\t\t\t& \\multicolumn{2}{c}{S3DIS} \t& \\multicolumn{2}{c}{ScanNet}\t& \\multicolumn{2}{c|}{Semantic3D}\t\t& \\multicolumn{2}{c}{VirtualKITTI}\t& \\multicolumn{2}{c}{DeepenaiKITTI}\t& \\multicolumn{2}{c}{SemanticKITTI}\t\t\\\\ \\cline{2-13}\n\t\t\t\t\t\t\t\t\t\t\t\t& OA\t\t\t& mIoU\t\t\t& OA\t\t\t& mIoU\t\t\t& OA\t\t\t& mIoU\t\t\t\t\t& OA\t\t\t& mIoU\t\t\t\t& OA\t\t\t& mIoU\t\t\t\t& OA\t\t\t& mIoU\t\t\t\t\t\\\\\n\t\t\t\\midrule\n\t\t\tPointNet \\cite{qi2017pointnet}\t\t& 78.62\t\t\t& 47.71\t\t\t& 73.9\t\t\t& -\t\t\t\t& -\t\t\t\t& -\t\t\t\t\t\t& 88.07\t\t\t& 50.36\t\t\t\t& 98.41\t\t\t& 64.85\t\t\t\t& 66.12\t\t\t& 19.74\t\t\t\t\t\\\\\n\t\t\tPointNet++ \\cite{qi2017pointnet++}\t& -\t\t\t\t& -\t\t\t\t& 84.5\t\t\t& 33.9\t\t\t& 82.5\t\t\t& 52.1\t\t\t\t\t& 81.99\t\t\t& 44.74\t\t\t\t& 96.66\t\t\t& 54.40\t\t\t\t& 72.35\t\t\t& 22.90\t\t\t\t\t\\\\\n\t\t\tA-CNN \\cite{komarichev2019cnn}\t\t& 87.3\t\t\t& 62.9\t\t\t& \\textbf{85.4}\t& -\t\t\t\t& -\t\t\t\t& -\t\t\t\t\t\t& 42.80\t\t\t& 18.75\t\t\t\t& 43.15\t\t\t& 7.31\t\t\t\t& 33.35\t\t\t& 7.85\t\t\t\t\t\\\\\n\t\t\tKP-FCNN \\cite{thomas2019kpconv}\t\t& -\t\t\t\t&\\textbf{65.4}\t& -\t\t\t\t& \\textbf{68.6}\t& \\textbf{92.9}\t& \\textbf{74.6}\t\t\t& 75.02\t\t\t& 30.49\t\t\t\t& 36.75\t\t\t& 4.54\t\t\t\t& 78.05\t\t\t& 26.71\t\t\t\t\t\\\\\n\t\t\tDGCNN \\cite{wang2018dynamic}\t\t& 84.1\t\t\t& 56.1\t\t\t& -\t\t\t\t& -\t\t\t\t& -\t\t\t\t& -\t\t\t\t\t\t& 92.04\t\t\t& 60.19\t\t\t\t& 98.28\t\t\t& 64.54\t\t\t\t&\\textbf{80.64}\t&\\textbf{30.51}\t\t\t\\\\\n\t\t\tPointWeb \\cite{zhao2019pointweb}\t& 86.97\t\t\t& 60.28\t\t\t& 85.9\t\t\t& -\t\t\t\t& -\t\t\t\t& -\t\t\t\t\t\t& 57.06\t\t\t& 18.94\t\t\t\t& 67.98\t\t\t& 16.67\t\t\t\t& 32.17\t\t\t& 6.84\t\t\t\t\t\\\\\n\t\t\tGACNet \\cite{wang2019graph}\t\t\t&\\textbf{87.79}\t& 62.85\t\t\t& -\t\t\t\t& -\t\t\t\t& 91.9\t\t\t& 70.8\t\t\t\t\t&\\textbf{92.57}\t&\\textbf{60.58}\t\t& 95.56\t\t\t& 51.38\t\t\t\t& 76.51\t\t\t& 
26.06\t\t\t\t\t\\\\\n\t\t\tPointConv \\cite{wu2019pointconv}\t& -\t\t\t\t& -\t\t\t\t& -\t\t\t\t& 55.6\t\t\t& -\t\t\t\t& -\t\t\t\t\t\t& 85.26\t\t\t& 47.60\t\t\t\t&\\textbf{98.50}\t&\\textbf{65.74}\t\t& 72.51\t\t\t& 23.24\t\t\t\t\t\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace{-0.5cm}\n\t\\caption{Comparison of existing methods on dense and sparse point clouds.}\n\n\t\\label{tab3}\n\\end{table*}\n\t\n\t\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\begin{center}\n\t\t\\includegraphics[width=0.9\\linewidth]{img\/Fig3_MNEWdesign.png}\n\t\\end{center}\n\t\\caption{Design of our proposed network. The key module MNEW is zoomed in for a detailed illustration. In MNEW, the upper branch embeds static geometry neighbors based on their location distance (by multi-radius), and the lower branch embeds dynamic feature neighbors based on their similarity (by multi-kNN). Local sparsity is computed in both the geometry and feature domains, and transformed to weight the convolution output. After concatenation, a pooling operation aggregates neighborhood features for each query point.}\n\t\\label{fig3}\n\\end{figure*}\n\n\n\n\\subsection{Datasets} \\label{sec2.2}\n\nFor the task of 3D scene semantic segmentation, publicly available datasets such as S3DIS \\cite{armeni20163d} and ScanNet \\cite{dai2017scannet} are indoor dense data, whereas Semantic3D \\cite{hackel2017semantic3d} and NPM3D \\cite{roynard2018paris} are outdoor dense data. For all four benchmark datasets, point clouds are collected by accumulating multiple scans in stationary environments to obtain finely detailed measurements. However, sparse point clouds for the application of autonomous driving perception, like the example shown in Figure \\ref{fig2b}, are much different. A sweeping LiDAR sensor is mounted on a moving vehicle; correspondingly, the scanning environment is also changing. In a single frame, point clouds are generally denser in close areas, and much sparser far away from the sensor. 
This is determined by the hardware characteristics of the rotating LiDAR sensor. At the time of our work, we found three public datasets with sparse point clouds. VirtualKITTI \\cite{3dsemseg_ICCVW17} is simulated virtual data, while DeepenaiKITTI and SemanticKITTI \\cite{behley2019dataset} provide small\/large-sized semantic labels on the real-world KITTI \\cite{geiger2012we} sequential data.\n\nTable \\ref{tab2} summarizes the selected datasets. Other popular benchmarks such as ModelNet40 \\cite{wu20153d} or ShapeNet \\cite{chang2015shapenet} are focused on small 3D CAD objects, which are beyond the scope of our interest. Regarding the number of classes, NPM3D has 50-class fine annotations and 10-class coarse annotations; SemanticKITTI has 28-class labels that separate moving\/stationary objects, which are mapped into 20 classes of closest equivalents. Since the labels of their test subsets are not visible before submission, we split the annotated training data into training\/validation subsets in our experiments for verification.\n\n\n\n\\subsection{Analysis} \\label{sec2.3}\n\nTable \\ref{tab3} compares the performance of selected methods on selected datasets. For dense point clouds, results are borrowed from their original reports. For sparse data, we use the train\/valid splits in Table \\ref{tab2}, and reproduce the experiments with their official implementations. Evaluation metrics are the overall accuracy (OA) over all points and the mean intersection-over-union (mIoU), i.e., the IoU averaged across all classes. From Table \\ref{tab3}, we summarize our findings as follows.\n\n{\\bf Architecture.} \nPointNet and DGCNN are networks that keep the number of points unchanged, while the other methods are hierarchical encoder-decoders in which points are down-sampled and then up-sampled. Although encoder-decoder networks perform better on dense-data benchmarks, their effectiveness degrades on sparse point clouds. 
One possible explanation is that 3D interpolation for up-sampling might be suitable for near-uniformly distributed dense point clouds, but not for irregular sparse data. Since no down-sampling\/up-sampling exists in PointNet and DGCNN, they are similar to the dilated convolution \\cite{yu2015multi} in image segmentation, which keeps the resolution unchanged end-to-end for element-wise feature learning. \n\n{\\bf Neighborhood.} \nDGCNN selects neighboring points in the dynamic feature domain, which differs from other methods whose neighbors are selected by the static geometry distance. GACNet collects geometry neighbors but assigns attention weights based on their feature similarity. Since the performance of DGCNN and GACNet is promising on SemanticKITTI and VirtualKITTI, we infer that dynamic feature-based neighborhoods are critical. This is interpretable: for example, sparse points on the road at far distances are isolated in geometric location, but similar in their feature representations. In contrast, traffic signs hidden in trees are close to leaves, but they should be distinguished from the surrounding vegetation.\n\n{\\bf Weighting.} \nSimilar to the attention schema in GACNet, PointConv employs density estimation as a weighting function, which is learned to adapt to different local densities. This method directly considers the data density and therefore obtains encouraging performance on DeepenaiKITTI. We infer that the weighting mechanisms in GACNet and PointConv can compensate for the effectiveness degradation of their encoder-decoder architectures. \n\n{\\bf Dataset Discrepancy.} \nAccording to the experimental results in Table \\ref{tab3}, DGCNN, GACNet, and PointConv are the preferred methods on sparse point clouds. However, their performance is inconsistent across the three selected sparse datasets. The major reason is essentially the intrinsic discrepancy among the datasets. 
VirtualKITTI is generated by a simulator, and it is the only sparse dataset with RGB colors. SemanticKITTI is a large-scale sequential dataset, which suggests 10 sequences for training and 1 other sequence for validation. DeepenaiKITTI is a small-sized probe dataset, and all its currently available frames are extracted from the same sequence. Due to the small size and high correlation of DeepenaiKITTI, we exclude it in Section \\ref{Experiments} and evaluate our method on VirtualKITTI and SemanticKITTI.\n\t\n\t\n\n\n\n\\begin{table*}[!htb]\n\n\t\\scriptsize\n\t\\centering\n\t\\newcolumntype{M}{ >{\\arraybackslash} p{0.27cm} }\n\n\t\\begin{subtable}[!htb]{\\textwidth}\n\t\t\\centering\n\t\t\\begin{tabular}{c| M M M M M M M M M M M M M M| M M}\n\t\t\t\\toprule\n\t\t\tMethod\t&\\rotatebox[origin=c]{90}{Terrain}\t&\\rotatebox[origin=c]{90}{Tree}\t&\\rotatebox[origin=c]{90}{Vegetation}\t&\\rotatebox[origin=c]{90}{Building}\t&\\rotatebox[origin=c]{90}{Road}\t&\\rotatebox[origin=c]{90}{Guardrail}\t&\\rotatebox[origin=c]{90}{Traffic Sign}\t&\\rotatebox[origin=c]{90}{Traffic Light}\t&\\rotatebox[origin=c]{90}{Pole}\t&\\rotatebox[origin=c]{90}{Misc}\t&\\rotatebox[origin=c]{90}{Truck}\t&\\rotatebox[origin=c]{90}{Car}\t&\\rotatebox[origin=c]{90}{Van}\t&\\rotatebox[origin=c]{90}{Unlabeled} \t& OA\t& mIoU\t\\\\\n\t\t\n\t\t\t\\midrule\n\t\t\n\t\t\n\t\t\tDGCNN \\cite{wang2018dynamic}\t\t& 86.7\t\t\t& 92.5\t\t\t& 70.8\t\t\t& 81.2\t\t\t& 94.8\t\t\t& 93.9\t\t\t& 38.0\t\t\t& 78.0\t\t\t& 65.5\t\t\t& 27.3\t\t\t& 29.2\t\t\t& 76.3\t\t\t& 8.6\t\t\t& 0.0\t& 92.0\t\t\t& 60.2\t\\\\\n\t\t\tGACNet \\cite{wang2019graph}\t\t\t& 82.0\t\t\t& 95.2\t\t\t& 72.5\t\t\t& 86.6\t\t\t& 92.1\t\t\t& 90.6\t\t\t& 51.4\t\t\t& 48.1\t\t\t& 42.6\t\t\t& 31.1\t\t\t& 46.6\t\t\t& 81.6\t\t\t& \\textbf{27.8}\t& 0.0\t& 92.6\t\t\t& 60.6\t\\\\\n\t\t\tPointConv \\cite{wu2019pointconv}\t& 58.1\t\t\t& 89.4\t\t\t& 57.0\t\t\t& 76.0\t\t\t& 80.6\t\t\t& 66.9\t\t\t& 25.2\t\t\t& 59.3\t\t\t& 25.1\t\t\t& 35.6\t\t\t& 9.14\t\t\t& 72.4\t\t\t& 
12.0\t\t\t& 0.0\t& 85.3\t\t\t& 47.6\t\\\\\n\t\t\t\\midrule\n\t\t\tMNEW-4096\t\t\t\t\t\t\t& 92.6\t\t\t& \\textbf{97.7}\t& 84.5\t\t\t& 90.7\t\t\t& 97.6\t\t\t& 97.3 \t\t\t& 68.8\t\t\t& 71.9\t\t\t& 62.6\t\t\t& 52.9\t\t\t& 11.0\t\t\t& 85.9\t\t\t& 23.5\t\t\t& 0.0\t& 95.9\t\t\t& 67.0\t\\\\\n\t\t\tMNEW-2048\t\t\t\t\t\t\t& \\textbf{97.0}\t& \\textbf{97.7} & \\textbf{91.2} & \\textbf{92.4} & \\textbf{98.8} & \\textbf{98.2} & \\textbf{70.6} & \\textbf{83.8} & \\textbf{72.8} & \\textbf{64.9} & \\textbf{58.4} & \\textbf{88.3} & 12.7\t\t\t& 0.0\t& \\textbf{97.1} & \\textbf{73.3}\t\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\vspace{-0.1cm}\n\t\t\\caption{Validation results on VirtualKITTI dataset}\n\t\t\\label{tab4a}\n\t\t\\vspace{0.1cm}\n\t\\end{subtable}\n\t\t\n\t\t\n\t\\begin{subtable}[!htb]{\\textwidth}\n\t\t\\centering\n\t\t\\begin{tabular}{c| M M M M M M M M M M M M M M M M M M M M| M M}\n\t\t\t\\toprule\n\t\t\tMethod\t&\\rotatebox[origin=c]{90}{Car}\t&\\rotatebox[origin=c]{90}{Bicycle}\t&\\rotatebox[origin=c]{90}{Motorcyclist}\t&\\rotatebox[origin=c]{90}{Truck}\t&\\rotatebox[origin=c]{90}{Other Vehicle}\t&\\rotatebox[origin=c]{90}{Person}\t&\\rotatebox[origin=c]{90}{Bicyclist}\t&\\rotatebox[origin=c]{90}{Motorcyclist}\t&\\rotatebox[origin=c]{90}{Road}\t&\\rotatebox[origin=c]{90}{Parking}\t&\\rotatebox[origin=c]{90}{Sidewalk}\t&\\rotatebox[origin=c]{90}{Other Ground}\t&\\rotatebox[origin=c]{90}{Building}\t&\\rotatebox[origin=c]{90}{Fence}\t&\\rotatebox[origin=c]{90}{Vegetation}\t&\\rotatebox[origin=c]{90}{Trunk}\t&\\rotatebox[origin=c]{90}{Terrain}\t&\\rotatebox[origin=c]{90}{Pole}\t&\\rotatebox[origin=c]{90}{Traffic Sign}\t&\\rotatebox[origin=c]{90}{Unlabeled}\t& OA\t& mIoU \\\\\n\t\t\n\t\t\t\\midrule\n\t\t\n\t\t\n\t\t\tDGCNN \\cite{wang2018dynamic}\t\t& 78.3\t\t\t& 0.0\t\t\t& 1.1\t\t\t& \\textbf{17.3}\t& 1.4\t\t\t& 1.9\t\t\t& 3.9\t\t\t& 0.0\t& 88.7\t\t\t& 10.2\t\t\t& 65.1\t\t\t& 0.1\t\t\t& 74.1\t\t\t& 18.8\t\t\t& 71.6\t\t\t& 25.0\t\t\t& 62.1\t\t\t& 28.5\t\t\t& 
8.8\t\t\t& 47.8\t\t\t& 80.8\t\t\t& 30.2\t\t\t\\\\\n\t\t\tGACNet \\cite{wang2019graph}\t\t\t& 71.5\t\t\t& 0.0\t\t\t& 0.0 \t\t\t& 12.2\t\t\t& 1.4\t\t\t& 0.0\t\t\t& 0.0\t\t\t& 0.0\t& 80.3\t\t\t& 13.4\t\t\t& 55.3\t\t\t& 0.2\t\t\t& 63.1\t\t\t& 16.7\t\t\t& 67.8\t\t\t& 15.7\t\t\t& 56.4\t\t\t& 12.1\t\t\t& \\textbf{22.9}\t& 38.8\t\t\t& 76.0 \t\t\t& 26.4 \t\t\t\\\\\n\t\t\tPointConv \\cite{wu2019pointconv}\t& 60.5\t\t\t& 0.1\t\t\t& 0.2\t\t\t& 0.6\t\t\t& 3.3\t\t\t& 1.0\t\t\t& 0.9\t\t\t& 0.0\t& 82.1\t\t\t& 3.8\t\t\t& 55.4\t\t\t& \\textbf{0.4}\t& 63.6\t\t\t& 10.8\t\t\t& 59.6\t\t\t& 14.2 \t\t\t& 52.1\t\t\t& 14.1\t\t\t& 8.0\t\t\t& 34.4\t\t\t& 72.5\t\t\t& 23.2\t\t\t\\\\\n\t\t\tRangeNet \\cite{milioto2019rangenet++}\t& 74.1\t\t& \\textbf{14.3}\t& 2.6\t\t\t& 9.4\t\t\t& 10.5\t\t\t& \\textbf{7.2}\t& 21.9\t\t\t& 0.0\t& \\textbf{90.7}\t& \\textbf{36.2}\t& \\textbf{74.2}\t& 0.2\t\t\t& 67.8\t\t\t& \\textbf{33.4}\t& 71.9\t\t\t& 30.7\t\t\t& \\textbf{68.5}\t& 23.0\t\t\t& 22.2\t\t\t& 34.1\t\t\t& 81.4\t\t\t& 34.6 \t\t\t\\\\\n\t\t\t\\midrule\n\t\t\tMNEW-4096\t\t\t\t\t\t\t& \\textbf{81.3}\t& 0.0\t\t\t& \\textbf{13.3}\t& 8.8\t\t\t& \\textbf{12.9}\t& 6.2\t\t\t& \\textbf{31.7}\t& 0.0\t& 88.7\t\t\t& 22.3\t\t\t& 70.4\t\t\t& 0.1\t\t\t& \\textbf{79.3}\t& 30.0\t\t\t& \\textbf{76.9}\t& \\textbf{34.2}\t& 66.4\t\t\t& \\textbf{33.1}\t& 1.1\t\t\t& \\textbf{49.3}\t& \\textbf{84.1}\t& \\textbf{35.3} \\\\\n\t\t\tMNEW-2048\t\t\t\t\t\t\t& 79.8\t\t\t& 0.0\t\t\t& 10.5\t\t\t& 6.5\t\t\t& 7.8\t\t\t& 5.5\t\t\t& 25.5\t\t\t& 0.0\t& 88.8\t\t\t& 22.7\t\t\t& 67.4\t\t\t& 0.0\t\t\t& 77.2\t\t\t& 29.1\t\t\t& 75.0\t\t\t& 29.6\t\t\t& 61.9\t\t\t& 27.3\t\t\t& 1.4 \t\t\t& 47.5\t\t\t& 82.5\t\t\t& 33.2 \t\t\t\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\t\\vspace{-0.1cm}\n\t\t\\caption{Validation results on SemanticKITTI dataset}\n\t\t\\label{tab4c}\n\t\t\\vspace{0.1cm}\n\t\\end{subtable}\n\t\n\t\\caption{Semantic segmentation results on sparse point clouds. 
Metrics are OA(\\%), mIoU(\\%), and per class IoU(\\%).}\n\t\\label{tab4}\n\t\\end{table*}\n\t\n\t\n\t\n\n\\section{Methodology} \\label{Method}\n\n\\subsection{Network Architecture} \\label{sec3.1}\n\nInspired by the findings in Section \\ref{sec2.3}, we propose the network design illustrated in Figure \\ref{fig3}. The overall architecture inherits a dilated structure like PointNet \\cite{qi2017pointnet} or DGCNN \\cite{wang2018dynamic}, which eliminates re-sampling operations end-to-end. The model takes batches of $N_p$ input points and passes them through a sequence of three MNEW modules to extract pointwise features $L_1$, $L_2$, and $L_3$ in hierarchical order. Note that local neighborhood information is carried inside MNEW, which constitutes the upgrade over PointNet. We increase the number of neighbors in $L_1$, $L_2$, and $L_3$, which correspond to hierarchical-scale feature encoders, but keep the number of query points fixed. A global 2D convolution, max pooling, and two fully connected layers follow to aggregate the global feature $G_3$. The hierarchical pointwise features and the tiled global feature are concatenated into a descriptor for each point, which is passed through three regular 1D convolutions to obtain the segmentation score, i.e., the category-level probability. \n\n\\subsection{Multi-domain Neighborhood Embedding and Weighting} \\label{sec3.2}\n\nThe key component of our network design is the multi-domain neighborhood embedding and weighting (MNEW) module. As shown in Figure \\ref{fig3}, the input of MNEW is batches of points with their xyz coordinates and original features, shaped as $[B, N_p, D_{xyz+fea}]$. We compute pairwise distances in both the geometry and feature domains, resulting in two matrices of shape $[B, N_p, N_p]$ representing the geometry distance and feature similarity between every pair of points. 
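As a concrete illustration, the pairwise matrices described above can be computed with standard broadcasting tricks. This is a minimal NumPy sketch (the function name and example shapes are illustrative assumptions, not from the paper's code):

```python
import numpy as np

def pairwise_sq_dist(x):
    """Batched pairwise squared Euclidean distances.

    x: [B, Np, D] array of per-point coordinates or features.
    Returns a [B, Np, Np] matrix via |a-b|^2 = |a|^2 - 2<a,b> + |b|^2.
    """
    inner = np.matmul(x, x.transpose(0, 2, 1))        # [B, Np, Np] inner products
    sq = np.sum(x ** 2, axis=-1, keepdims=True)       # [B, Np, 1] squared norms
    return sq - 2.0 * inner + sq.transpose(0, 2, 1)

# Geometry-domain distances from xyz; feature-domain "similarity" here is
# also a squared distance, where smaller means more similar.
xyz = np.random.rand(2, 16, 3)       # [B, Np, 3]
fea = np.random.rand(2, 16, 8)       # [B, Np, 8]
geo_dist = pairwise_sq_dist(xyz)     # [B, Np, Np]
fea_dist = pairwise_sq_dist(fea)     # [B, Np, Np]
```

The expansion avoids materializing the $[B, N_p, N_p, D]$ difference tensor, which matters when $N_p$ is in the thousands.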
\n\nAs discussed in KP-Conv \\cite{thomas2019kpconv} and RS-Conv \\cite{liu2019relation}, radius-based geometry neighborhood selection is more robust than k-NN \\cite{weinberger2006distance} in non-uniform sampling settings. However, feature neighborhoods shift dynamically and are hard to encircle. Therefore, we use multiple radii in the geometry domain and multiple k-NN values in the feature domain to collect multi-scale neighbor indices. We gather their original features to compose two initial neighborhood embedding matrices $\\mathbf{X}_g^0$ and $\\mathbf{X}_f^0$ with shape $[B, N_p, N_{ng}, D_{embed}]$ and $[B, N_p, N_{nf}, D_{embed}]$ respectively, where $ng=\\sum r_i$ represents the number of accumulated neighbors in multi-radius geometry space, and $nf=\\sum k_i$ represents the number of accumulated neighbors in multi-kNN feature space. Similar to the graph embedding $G(V,E)$ in DGCNN and RS-Conv which includes vertices and edges, we embed each element in $\\mathbf{X}_g^0$ and $\\mathbf{X}_f^0$ as,\n\\begin{equation} \\label{eq1}\n\\mathbf{f}(x_{i,n_j}) = \\mathbf{f}(x_{n_j}, x_{n_j}-x_i), \\quad x_{i,n_j} \\in (\\mathbf{X}_g^0 \\cup \\mathbf{X}_f^0)\n\\end{equation}\nwhere $x_i$ denotes the $i$-th query point, and $x_{n_j}$ denotes the $j$-th neighbor point. Given the indices of selected neighbors, we also gather their geometry distance matrix $\\mathbf{D}_g$ (shape $[B, N_p, N_{ng}, 1]$) and feature similarity matrix $\\mathbf{D}_f$ (shape $[B, N_p, N_{nf}, 1]$). Next, a transformation function $\\mathbf{T}(d_{i,n_j})=\\mathbf{w}_{i,n_j}\\cdot\\mathbf{f}(d_{i,n_j})$ is utilized to obtain adaptive attention weights. 
The attended embedding $\\mathbf{X}_g^a$ and $\\mathbf{X}_f^a$ are computed as,\n\\begin{equation} \\label{eq2}\n\\begin{split}\n\\mathbf{X}_g^a = \\mathbf{T}(\\mathbf{D}_g) \\cdot \\mathbf{X}_g^0 \\\\\n\\mathbf{X}_f^a = \\mathbf{T}(\\mathbf{D}_f) \\cdot \\mathbf{X}_f^0\n\\end{split}\n\\end{equation}\n\nMotivated by the density estimation in PointConv \\cite{wu2019pointconv}, we calculate the neighborhood sparsity using,\n\\begin{equation} \\label{eq3}\n\\mathbb{P}(x_{i,n_j}|\\mu, \\sigma^2) = \\frac{1}{\\sqrt{2\\pi\\sigma^2}} \\exp{[-\\frac{(x_{n_j}-x_i)^2}{2\\sigma^2}]}\n\\end{equation}\n\\begin{equation} \\label{eq4}\n\\mathbb{S}(x_i|\\mu, \\sigma^2) = (\\frac{1}{N_n}\\log[\\sum_{n_j \\in N_n} \\mathbb{P}(x_{i,n_j}|\\mu, \\sigma^2)])^{-1}\n\\end{equation}\n$\\mathbb{P}(x_{i,n_j}|\\mu, \\sigma^2)$ in Equation (\\ref{eq3}) is equivalent to the Gaussian probability density function computed for every neighbor $x_{n_j}$ with respect to the query point $x_{i}$. $\\mathbb{S}(x_i|\\mu, \\sigma^2)$ in Equation (\\ref{eq4}) is the estimated sparsity, which is the inverse of the averaged density. We also take the log-scale value to obtain a better sparsity distribution (see Figure \\ref{fig4b}). Geometry sparsity $\\mathbb{S}_g$ and feature sparsity $\\mathbb{S}_f$ are computed individually and transformed into weighting factors for the 2D convolution activation $\\mathbf{h}(x)$. 
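A minimal NumPy sketch of the sparsity estimate in Equations (\ref{eq3})--(\ref{eq4}); the function name, array shapes, and the reading of $(x_{n_j}-x_i)^2$ as a squared Euclidean norm are illustrative assumptions:

```python
import numpy as np

def neighborhood_sparsity(query, neighbors, sigma=1.0):
    """Per-point sparsity as the inverse of the log-scaled average density.

    query:     [Np, D]      query points x_i
    neighbors: [Np, Nn, D]  gathered neighbors x_{n_j} for each query point
    """
    # Eq. (3): Gaussian density of each neighbor w.r.t. its query point,
    # with (x_{n_j} - x_i)^2 read as the squared Euclidean norm.
    sq = np.sum((neighbors - query[:, None, :]) ** 2, axis=-1)            # [Np, Nn]
    density = np.exp(-sq / (2.0 * sigma ** 2)) / np.sqrt(2.0 * np.pi * sigma ** 2)
    # Eq. (4): sparsity inverts the (1/Nn)-scaled log of the summed density.
    n_n = neighbors.shape[1]
    return 1.0 / (np.log(np.sum(density, axis=-1)) / n_n)                 # [Np]
```

Under this reading, a tighter (denser) neighborhood yields a larger summed density and hence a smaller sparsity value.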
The weighted outputs $\\mathbf{X}_g^w$ (shape $[B, N_p, N_{ng}, D_{conv}]$) and $\\mathbf{X}_f^w$ (shape $[B, N_p, N_{nf}, D_{conv}]$) are computed as,\n\\begin{equation} \\label{eq5}\n\\begin{split}\n\\mathbf{X}_g^w = \\mathbf{T}(\\mathbb{S}_g) \\cdot \\mathbf{h}(\\mathbf{X}_g^a) \\\\\n\\mathbf{X}_f^w = \\mathbf{T}(\\mathbb{S}_f) \\cdot \\mathbf{h}(\\mathbf{X}_f^a)\n\\end{split}\n\\end{equation}\nAfter concatenating the neighborhood information from geometry and feature domain, an average pooling operation is followed to aggregate a feature vector for each query point, yielding the output of MNEW module $\\mathbf{X}_{mnew}^{out}$ with shape $[B, N_p, D_{conv}]$.\n\\begin{equation} \\label{eq6}\n\\mathbf{X}_{mnew}^{out} = \\frac{1}{N_p} \\sum_{i \\in N_p} (\\mathbf{X}_g^w \\oplus \\mathbf{X}_f^w)\n\\end{equation}\n\n\n\\subsection{Loss Function} \\label{sec3.3}\n\nThe loss function is a combination of softmax cross-entropy loss $\\mathcal{L}_{CE}$ and regularization loss $\\mathcal{L}_{Reg}$ adjusted by $\\lambda$. Since the task is semantic segmentation only (i.e., no instance-level labels), the discriminitive loss suggested by ASIS \\cite{wang2019associatively} is not applicable.\n\\begin{equation} \\label{eq7}\n\\mathcal{L}_{Total} = \\mathcal{L}_{CE} + \\lambda \\mathcal{L}_{Reg}\n\\end{equation}\n\n\n\n\n\\subsection{Comparison to Existing Methods} \\label{sec3.4}\n\nReferring to Table \\ref{tab1}, we compare existing methods and summarize the essential distinctions of MNEW as follows.\n\nThe dilated network architecture in our work excludes downsample grouping and upsample interpolation, which differs from all recent works that are based on hierarchical encoder-decoder structures. Compared with PointNet \\cite{qi2017pointnet} whose feature contains pointwise and global information, we also include local neighborhood features. 
Compared with DGCNN \\cite{wang2018dynamic}, which collects neighbors in the feature space only, we embed neighbor points in both the geometry and feature domains.\n\nIn terms of the neighborhood embedding, our method adopts multi-scale neighborhoods in multiple domains. This differs from all existing methods, where neighbors are collected in only a single domain (i.e., either geometry or feature). Hierarchical PointNet++ \\cite{qi2017pointnet++} and A-CNN \\cite{komarichev2019cnn} use multi-radius ball-shaped or ring-shaped scales in the geometry domain, while DGCNN uses single-scale kNN in the feature domain. In our method, there may exist overlapping points selected in the geometry and feature neighborhoods. However, since we compute adaptive attention \\& weighting factors in each domain separately, their impacts are learned individually.\n\nFor the attention\/weighting mechanism, KP-FCNN \\cite{thomas2019kpconv} and GACNet \\cite{wang2019graph} compute the geometry distance or feature similarity as fixed weighting factors, while PointConv \\cite{wu2019pointconv} transforms the local sparsity into a learning-based flexible variable. In our method, all these factors are trainable.\n\n\n\n\n\n\n\\section{Experiments} \\label{Experiments}\n\n\\subsection{Sparse Point Cloud Segmentation} \\label{sec4.1}\n\nExperimental results on VirtualKITTI and SemanticKITTI are shown in Table \\ref{tab4}, using the train\/valid splits from Table \\ref{tab2}. Evaluation metrics include OA, mIoU, and per class IoU. We select DGCNN, GACNet, and PointConv as baselines since their accuracies are higher than those of other related methods (see Table \\ref{tab3}). Our proposed method, MNEW, achieves outstanding performance on both VirtualKITTI and SemanticKITTI. We experimentally set the number of points (i.e., $N_p$) to 4096 or 2048, since it affects the global feature extraction and neighborhood selection. 
Table \\ref{tab4} indicates that MNEW-2048 performs better on VirtualKITTI, while MNEW-4096 is superior on SemanticKITTI. Compared against the top-performing baseline method, MNEW achieves 4.5\\% OA and 12.7\\% mIoU improvements on VirtualKITTI, a greater margin than those on SemanticKITTI (3.3\\% OA and 5.1\\% mIoU higher than DGCNN). This is because VirtualKITTI provides RGB information as its original feature, which is independent of the XYZ geometry coordinates. For SemanticKITTI, we use intensity, and compute 2D\\_range and 3D\\_distance to fill the input feature slots. Therefore, the multi-domain neighborhood combination is more effective for VirtualKITTI. \n\nSemanticKITTI contains more categorical labels, but the percentage of unlabeled points (class-0) is also higher than in VirtualKITTI (4.49\\% vs. $<$0.01\\%). For SemanticKITTI, our experimental results differ slightly from those reported in \\cite{behley2019dataset}. One critical reason is that unlabeled points, despite occupying a considerably large percentage, are ignored in \\cite{behley2019dataset}. With class-0 included, we reproduce RangeNet \\cite{milioto2019rangenet++}, an upgraded DarkNet from \\cite{behley2019dataset}, and provide a consistent comparison as listed in Table \\ref{tab4c}. In addition to DarkNet\/RangeNet, \\cite{behley2019dataset} and \\cite{milioto2019rangenet++} also claim that projection-based methods such as SqueezeSeg \\cite{wu2018squeezeseg} or TangentConv \\cite{tatarchenko2018tangent} are more effective than those directly processing raw data. Since per-sweep point clouds are collected by a rotational LiDAR, a cylinder-view projection looks at those points exactly from the perspective of the LiDAR sensor. This could ensure its effectiveness for small objects, such as bicycles and traffic signs, which are better viewed from the sensor's perspective. Although this is reasonable, MNEW still obtains higher mIoU (35.3\\% vs. 34.6\\%) and OA (84.1\\% vs. 
81.4\\%) than RangeNet.\n\n\n\n\n\n\n\\begin{table}[t]\n\t\\begin{center}\n\t\n\t\t\\scriptsize\n\t\t\\begin{tabular}{c c c c| c c}\n\t\t\t\\toprule\n\t\t\tGeometry\t\t& Feature\t\t\t& Distance\/Similarity\t& Sparsity\t\t& OA\t\t& mIoU\t\t\\\\\n\t\t\t\\midrule\n\t\t\t\\checkmark\t\t&\t\t\t\t\t&\t\t\t\t\t\t&\t\t\t\t& 92.1\t\t& 57.0\t\t\\\\\n\t\t\t\t\t\t\t& \\checkmark\t\t&\t\t\t \t\t\t&\t\t\t\t& 95.8\t\t& 65.9\t\t\\\\\n\t\t\t\\checkmark\t\t& \\checkmark\t\t&\t\t\t \t\t\t&\t\t\t\t& 95.9\t\t& 69.7\t\t\\\\\n\t\t\t\\midrule\n\t\t\t\\checkmark\t\t& \\checkmark\t\t& \\checkmark \t\t\t&\t\t\t\t& 96.3\t\t& 72.6\t\t\\\\\n\t\t\t\\checkmark\t\t& \\checkmark\t\t& \t\t\t\t\t\t& \\checkmark\t& 96.9\t\t& 70.6\t\t\\\\\n\t\t\t\\checkmark\t\t& \\checkmark\t\t& \\checkmark \t\t\t& \\checkmark\t& 97.1\t\t& 73.3\t\t\\\\\n\t\t\t\\bottomrule\n\t\t\\end{tabular}\n\t\\end{center}\n\t\\vspace*{-1mm}\n\t\\caption{Effect of neighborhood selection, distance\/similarity attention, and sparsity weighting. Experimented MNEW-2048 on VirtualKITTI}\n\t\\label{tab5}\n\\end{table}\n\n\n\\begin{figure*}[t!]\n\t\\centering\n\t\\begin{subfigure}[t]{\\textwidth}\n\t\t\\centering\n\t\t\\begin{subfigure}[t]{0.4\\textwidth}\n\t\t\t\\includegraphics[width=\\textwidth, height=3.8cm]{img\/Fig4-1_DistanceDistribution.png}\n\t\t\\end{subfigure}\n\t\n\t\n\t\n\t\t\\begin{subfigure}[t]{0.5\\textwidth}\n\t\t\t\\includegraphics[width=\\textwidth, height=3.8cm]{img\/Fig4-2_DistanceEffects.png}\n\t\t\\end{subfigure}\n\t\t\\vspace*{-1mm}\n\t\t\\caption{Distance (m) Distribution and OA (\\%) by Distance Effects}\n\t\t\\label{fig4a}\n\t\\end{subfigure}\n\t\\begin{subfigure}[t]{\\textwidth}\n\t\t\\centering\n\t\t\\begin{subfigure}[t]{0.4\\textwidth}\n\t\t\t\\includegraphics[width=\\textwidth, height=3.8cm]{img\/Fig4-3_SparsityDistribution.png}\n\t\t\\end{subfigure}\n\t\n\t\n\t\n\t\t\\begin{subfigure}[t]{0.5\\textwidth}\n\t\t\t\\includegraphics[width=\\textwidth, 
height=3.8cm]{img\/Fig4-4_SparsityEffects.png}\n\t\t\\end{subfigure}\n\t\t\\vspace*{-1mm}\n\t\t\\caption{Sparsity (normalized 0-1) Distribution and OA (\\%) by Sparsity Effects}\n\t\t\\label{fig4b}\n\t\\end{subfigure}\n\t\\caption{Points distribution and performance variation by distance and sparsity. Experimented MNEW-4096 on SemanticKITTI.}\n\t\\label{fig4}\n\\end{figure*}\n\n\n\n\n\\subsection{Ablation Studies} \\label{sec4.2}\n\n{\\bf Neighborhood Embedding and Weighting.}\nIn Table \\ref{tab5}, we compare several variations of the model design. For the neighborhood selection, we compare neighbors collected by geometry location, by feature similarity, and by both. The experiment reveals that the accuracy increases significantly with the addition of the feature-domain neighborhood. Next, we optionally enable the neighbor attention of geometry distance and feature similarity ($\\mathbf{T}(\\mathbf{D}_g)$+$\\mathbf{T}(\\mathbf{D}_f)$), as well as the weighting of local sparsity ($\\mathbf{T}(\\mathbb{S}_g)$+$\\mathbf{T}(\\mathbb{S}_f)$). Both attention\/weighting settings contribute to the performance improvement, which validates the effectiveness of our MNEW design.\n\n\n{\\bf Performance Varied by Distance and Sparsity.}\nSince we target sparse point cloud segmentation for autonomous driving perception, it is interesting to know the performance for points at different distances (with respect to the LiDAR sensor) or sparsity (with respect to nearby points). Euclidean distances are computed in the geometry space, and sparsity is calculated using the normalized value of Equation (\\ref{eq4}). Investigated on SemanticKITTI, we show distribution histograms of distance and sparsity, as well as result variations, in Figure \\ref{fig4}. We observe in Figure \\ref{fig4a} that the performance exhibits an abrupt elevation at a distance of $\\approx$50 meters. 
We note that points in faraway areas (e.g., $>$50m) are limited in quantity (refer to the distance distribution), but are generally involved in well-learned categories (e.g., road, building, terrain) which contain a relatively large percentage of samples. In Figure \\ref{fig4b}, as the sparsity increases, the performance decreases until a sparsity of 0.7$\\sim$0.8 and then increases. This corresponds with the sparsity distribution, implying that the amount of samples affects the performance. Since sparsity distributions for dense datasets are relatively uniform, we infer that this is an essential reason for the effectiveness disparity of existing methods. It is also observed from Figure \\ref{fig4} that MNEW outperforms other methods across the distance and sparsity distributions.\n\n\n\n\\section{Conclusion} \\label{Conclusion}\n\nIn this work, we propose MNEW for sparse point cloud segmentation. MNEW inherits a dilation architecture to capture pointwise and global features, and involves multi-scale local semantics adopted from the hierarchical encoder-decoder structure. Neighborhood information is embedded in both the static geometry and dynamic feature domains. The geometry distance, feature similarity, and local sparsity are computed and transformed into adaptive weighting factors. We obtain outstanding performance on both sparse point cloud datasets. We believe this study will contribute to the application of LiDAR-based autonomous driving perception. \n\nIn continued work, one direction is to extend the model for joint semantic and instance segmentation, i.e., panoptic segmentation \\cite{kirillov2019panoptic} for point clouds. The second ongoing direction is domain adaptation to resolve the issue of cross-sensor disparity. 
The individual model could also be light-weighted and exported version as a LiDAR feature extractor, which is useful to fuse with other sensors such as radar and camera \\cite{zheng2019gfd}.\n\n\n\n\n{\\small\n\t\\bibliographystyle{ieee_fullname}\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:Introduction}\nIn the field of illumination optics, optical engineers design optical elements to transport the light from a source, which can be an LED, laser, or incandescent lamp, to obtain a desired irradiance (spatial density of the luminous flux) or intensity (angular density of the luminous flux) \\citep{Grant2011}.\nTo transport the light from the source to the target, the optical engineer can construct a system consisting of various optical elements such as lenses, mirrors, diffusers, and light guides \\citep{John2013}.\nOne particular type of optic used in automotive and road lighting applications is the freeform lens, a lens without any form of symmetry \\citep{Falaggis2022, Mohedano2016}.\nThe design of these lenses is a complex problem. It is currently solved by numerically solving system-specific differential equations or through optimization, with every step validated using a (non-differentiable) non-sequential ray tracer \\citep{Wu2018}.\nGreat effort is involved in generalizing these methods to account for varying amounts of optical surfaces \\citep{Anthonissen2021}, their optical surface and volume properties \\citep{Kronberg2022, lippman_prescribed_2020}, or the source model~\\citep{Muschaweck2022, tukker_efficient_2007, sorgato_design_2019}. \n\nThe performance of an optical system is evaluated using ray tracing, which is the process of calculating the path of a ray originating from a source through the optical system. 
Sequential ray tracers such as Zemax~\\citep{zemax} and Code V~\\citep{codev}, primarily used in the design of imaging optics, trace a small number of rays to determine the quality of the image. \nNon-sequential ray tracers such as LightTools~\\citep{LightTools} and Photopia~\\citep{photopia} use many rays to simulate the optical flux through the system and share similarities with the rendering procedures in computer graphics, with the main difference being that the rays are traced from source to camera.\n\nAlgorithmically differentiable ray tracing, a generalization of differential ray tracing \\citep{feder_differentiation_1968, stone_differential_1997, oertmann_differential_1989, chen_second-order_2012}, is a tool that is being developed for both sequential~\\citep{Sun2021, Volatier2017} and non-sequential \\citep{Mitsuba2} ray tracing.\n\\emph{Differential ray tracing} obtains system parameter gradients using numerical or algebraic differentiation. The gradient can be calculated numerically using numerical differentiation or the adjoint method \\citep{givoli_tutorial_2021}, requiring the system to be ray traced twice, once for its current state and once with perturbed system parameters. Analytic expressions for the gradient can be obtained by tracing the rays analytically through the system, calculating where the ray intersects the system's surfaces and how the ray's trajectory is altered. However, these expressions can become long and complicated depending on the system. In addition, the method is limited to optics described by conics as finding analytic ray surface intersection with surfaces of higher degrees becomes complicated or even impossible. Algorithmic differentiable ray tracing can handle these issues by obtaining the gradients with one single forward simulation for an almost arbitrary system. In addition, it can be seamlessly integrated into gradient-descent-based optimization pipelines. 
A modern framework for this is \\emph{Physics Informed Machine Learning} \\citep{Karniadakis2021}, where a neural network is trained to approximate the solution to a physics problem formulated using data, a set of differential equations, or an implemented physics simulation (or a combination of these). \n\nWe investigate the reliability of designing freeform lenses with B-spline surfaces \\citep{Piegl1995} using algorithmically differentiable non-sequential ray tracing and gradient-based optimization to redirect the light of a light source into a prescribed irradiance distribution. The source models will be the collimated light source, point source, and finally, sources with a finite extent. The results are validated using the commercial ray trace program LightTools \\citep{LightTools}. In addition, we investigate the effectiveness of optimizing a network to determine the optimal B-spline control points as proposed in \\citep{Moller2021PIML} and \\citep{GASICK2023115839}, and compare it to optimizing the control points directly and seeing the possible speed-up.\n\n\\section{Gradient-based freeform design}\nThe overall structure of our pipeline is depicted in Fig.~\\ref{fig:optimizationloop}. \nA freeform surface is defined by the parameters $P \\in \\mathscr{P}$, where $\\mathscr{P}$ is the set of permissible parameter values. This surface is combined with a flat surface to create a lens, and an irradiance distribution $\\mathcal{I}$ is produced by tracing rays through the lens onto a screen. The irradiance distribution is compared to a target $\\mathcal{I}_\\text{ref}$ yielding a loss $\\mathscr{L}(\\mathbf{P};\\mathcal{I}_\\text{ref})$. 
The optimization problem we are trying to solve can then be formulated as\n\\begin{equation}\n \\min_{\\mathbf{P} \\in \\mathscr{P}} \\; \\mathscr{L}(\\mathbf{P};\\mathcal{I}_\\text{ref}),\n\\end{equation}\nwhich we solve by using gradient descent.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = \\textwidth]{figures\/Caustic_design_optimization.png}\n \\caption{Overview of our learning-based freeform design pipeline.}\n \\label{fig:optimizationloop}\n\\end{figure}\n\nThe freeform surface of the lens is defined in terms of a B-spline surface. From a manufacturing standpoint, this is convenient since B-spline surfaces can be chosen to be $C^1$ smooth (in fact, B-spline surfaces can be $C^n$ smooth for arbitrarily large $n$). From an optimization perspective, B-spline surfaces have the property that the control points that govern the shape of the surface, and which will be optimized, have a local influence on the surface geometry, which in turn has a local influence on the resulting irradiance distribution.\n\n\\subsection{The lens model using a B-spline surface}\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.2\\textwidth]{figures\/lens_schematic.png}\n \\caption{The used lens type: a volume enclosed between a flat surface and a freeform surface with a uniform refractive index.}\n \\label{fig:lenschematic}\n\\end{figure}\n\nWe define a lens as in Fig.~\\ref{fig:lenschematic} as the volume between a flat surface and a B-spline surface, with a uniform refractive index.\n\nA B-spline surface $\\mathbf{S}$ in $\\mathbb{R}^3$ is a parametric surface, see Fig.~\\ref{fig:Bsplinesurface}. It has rectangular support $[a,b]\\times[c,d]$ where $a<b$ and $c<d$. To prevent the freeform surface from intersecting the flat entrance surface at $z = z_\\text{in}$, the control points must satisfy\n\\begin{equation}\n P^z_{i,j} > z_\\text{in} \\quad \\forall (i,j). 
\\label{eq:nosurfintersect}\n\\end{equation}\nManufacturing can require that the lens has some minimal thickness $\\delta$, so that the constraint is stronger:\n\\begin{equation}\n P^z_{i,j} \\ge \\delta + z_\\text{in} \\quad \\forall (i,j).\n\\end{equation}\n\n\n\\subsection{Differentiable ray tracer} \\label{subsec:raytracer}\nOur implementation traces rays from a source through the flat lens surface and the freeform lens surface to the detector screen as depicted in Figs.~\\ref{fig:planetraceschematic} and \\ref{fig:pointtraceschematic}. Other ray paths, e.g., total internal reflection at lens surfaces, are not considered since it is assumed that the contribution of these to the resulting irradiance distribution is negligible.\n\n\\subsubsection{Sources and ray-sampling}\nNon-sequential ray tracing is a Monte-Carlo approximation method of the solution to the continuous integration formulation of light transport through an optical system. For a detailed discussion of this topic, see \\cite[ch. 14]{pharr2016physically}. Thus to perform ray tracing, the light emitted by a source must be discretized into a finite set of rays\n\\begin{equation}\n l: t \\rightarrow \\mathbf{o} + \\hat{\\mathbf{d}}t,\n\\end{equation}\nwhere $\\mathbf{o}$ is the origin of the ray and $\\hat{\\mathbf{d}}$ its normalized direction vector. Both collimated ray bundle and point sources will be considered, see Figs.~\\ref{fig:planetraceschematic} and \\ref{fig:pointtraceschematic}, respectively. 
\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.6\\textwidth]{figures\/plane_trace_schematic.png}\n \\caption{Schematic of the ray tracing with a collimated ray bundle source.}\n \\label{fig:planetraceschematic}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.6\\textwidth]{figures\/point_trace_schematic.png}\n \\caption{Schematic of the ray tracing with a point source.}\n \\label{fig:pointtraceschematic}\n\\end{figure}\n\nTracing rays from a collimated ray bundle can be understood from Fig.~\\ref{fig:planetraceschematic}. The path of all rays from the source plane to the B-spline surface is a line segment parallel to the $z$-axis. Therefore, we can sample the incoming rays directly on the B-spline surface, with $\\hat{\\mathbf{d}} = (0,0,1)^\\top$. By the linearity of $X$ and $Y$ sampling on the B-spline domain $[0,1]^2$ is analogous to sampling on the lens extent $[-r_x,r_x]\\times [-r_y,r_y]$ in terms of distribution. Rays are sampled in a (deterministic) square grid on $[0,1]^2$. \n\nFor a point source, each ray starts at the location of the source, and the direction vector $\\hat{\\mathbf{d}}$ is sampled over the unit sphere $\\mathbb{S}^2$. More precisely, $\\hat{\\mathbf{d}}$ is given by\n\\begin{equation}\n \\hat{\\mathbf{d}} = \\left(\\cos\\theta\\sin\\phi,\\sin\\theta\\sin\\phi,\\cos\\phi\\right)^\\top,\n\\end{equation}\nwith $\\theta \\in [0,2\\pi)$ and $\\phi \\in [0,\\phi_\\text{max}]$ for some $0\\le \\phi_\\text{max} < \\frac{\\pi}{2}$, see Fig.~\\ref{fig:pointtraceschematic}. $\\phi_\\text{max}$ is chosen as small enough to minimize the number of rays that miss the lens entrance surface but large enough such that the whole surface is illuminated. 
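The point-source direction sampling can be sketched as follows (a minimal NumPy sketch; the function name is an illustrative assumption). $\theta$ is drawn uniformly in $[0,2\pi)$, and $\phi$ by inverse-CDF sampling so that the resulting directions are uniformly distributed on the spherical cap $\phi \in [0,\phi_\text{max}]$:

```python
import numpy as np

def sample_cap_directions(n_rays, phi_max, rng):
    """Uniformly sample unit direction vectors on the spherical cap
    phi in [0, phi_max], theta in [0, 2*pi)."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n_rays)
    a = rng.uniform(0.0, 1.0, n_rays)
    # Inverse-CDF sampling: uniform w.r.t. the area element on the cap.
    phi = np.arccos(1.0 - (1.0 - np.cos(phi_max)) * a)
    return np.stack([np.cos(theta) * np.sin(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(phi)], axis=-1)          # [n_rays, 3]

d = sample_cap_directions(1000, np.pi / 6, np.random.default_rng(0))
```

Sampling $\phi$ uniformly instead of through the inverse CDF would over-concentrate rays near the cap apex; the $\arccos$ mapping compensates for the $\sin\phi$ area factor.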
For instance, if the source is on the $z$-axis, then $\\phi_\\text{max} = \\arctan\\left(\\frac{\\sqrt{r_x^2 + r_y^2}}{z_\\text{in}-z_s}\\right)$ where $z_\\text{in}$ is the $z$-coordinate location of the entrance surface and $z_s$ the $z$-coordinate of the source. To uniformly sample points on this sphere segment, $\\theta$ is sampled (non-deterministically) uniformly in $[0,2\\pi)$ and $\\phi$ is given by\n\\begin{equation}\n \\phi = \\arccos\\left(1-(1-\\cos\\phi_\\text{max})a\\right)\n\\end{equation}\nwhere $a$ is sampled (non-deterministically) uniformly in $[0,1]$. This sampling is used to produce the results in Section~\\ref{sec:results}.\n\nFor the point source, the calculation of the intersection of a ray with the B-spline surface is non-trivial. This calculation comes down to finding the smallest positive root of the $p+q$ degree piece-wise polynomial function\n\\begin{equation}\n f(t) = \n Z\\left(\\begin{pmatrix}o_u \\\\ o_v\\end{pmatrix} + \n \\begin{pmatrix}d_u \\\\ d_v\\end{pmatrix}t\n \\right)\n - d_zt - o_z, \\label{eq:surfaceintersect}\n\\end{equation}\nif such a root exists and yields a point in the domain of $Z$. Here the subscripts $u$ and $v$ denote that the ray is considered in $(u,v,z)$ space instead of $(x,y,z)$ space, so for instance\n\\begin{equation}\n o_u = X^{-1}(o_x) = \\frac{1}{2}\\left(\\frac{o_x}{r_y}+1\\right), \\quad d_v = \\frac{d_y}{2r_y}.\n\\end{equation}\nThe roots of eq. 
\\ref{eq:surfaceintersect} cannot generally be found analytically for $p+q>4$, and thus an intersection algorithm is implemented, which is explained in the next section.\n\n\\subsubsection{B-spline surface intersection algorithm}\nThe intersection algorithm is based on constructing a triangle mesh approximation of the B-spline surface and computing intersections with that mesh.\n\n\\paragraph{Triangle mesh intersection phase 1: bounding boxes}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{figures\/bb_per_knotspanproduct.png}\n \\caption{Triangles and corresponding bounding box for a few knot span products of a spherical surface.}\n \\label{fig:bb_per_knotspanproduct}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{figures\/uv_triangle_check.png}\n \\caption{Example of which triangles are candidates for a ray-surface intersection with the ray plotted in red, based on their $u,v$-domain.}\n \\label{fig:uvtriangleint}\n\\end{figure}\nChecking every ray against every triangle for intersection is computationally expensive, so it is helpful to have bounding box tests that provide rough information about whether the ray is even near some section of the B-spline surface. B-spline theory provides a tool for this: the strong convex hull property, which yields the bounding box \n\n\\begin{equation}\n B_{i_0,j_0} = \\left[u_{i_0},u_{i_0 + 1}\\right) \\times \\left[v_{j_0},v_{j_0 + 1}\\right) \\times \\left[z^{\\min}_{i_0,j_0}, z^{\\max}_{i_0,j_0}\\right]\n\\end{equation}\nwhere $z^{\\min}_{i,j}$ and $z^{\\max}_{i,j}$ are the minimum and maximum $z$-values of the control points that affect the B-spline surface on the knot span product $\\left[u_{i_0},u_{i_0 + 1}\\right) \\times \\left[v_{j_0},v_{j_0 + 1}\\right)$, hence those with indices $i_0-p\\le i\\le i_0, j_0-q \\le j\\le j_0$. 
Formulated in terms of $Z(u,v)$ this yields\n\\begin{equation}\n z^{\\min}_{i_0,j_0} \\le Z(u,v) \\le z^{\\max}_{i_0,j_0}, \\quad\n (u,v) \\in \\left[u_{i_0},u_{i_0 + 1}\\right) \\times \\left[v_{j_0},v_{j_0 + 1}\\right).\n\\end{equation}\nExamples of such bounding boxes are shown in Fig.~\\ref{fig:bb_per_knotspanproduct}.\n\nThere are two steps in applying the bounding boxes in the intersection algorithm. First, a test for the entire surface (in $(u,v,z)$-space):\n\\begin{equation}\n [0,1]^2 \\times \\left[\\min_{i,j}P^z_{i,j},\\max_{i,j}P^z_{i,j}\\right].\n\\end{equation}\nSecond, a recursive method where, starting with all knot span products, each rectangle of knot span products is divided into at most 4 sub-rectangles for a new bounding box test until individual knot span products are reached.\n\n\n\\paragraph{Triangle mesh intersection phase 2: $(u,v)$-space triangle intersection}\nEach non-trivial knot span product $[u_{i_0},u_{i_0+1}) \\times [v_{j_0},v_{j_0+1})$ is divided into a grid of $n_u$ by $n_v$ rectangles. Thus we can define the boundary points\n\\begin{subequations}\n \\begin{align}\n u_{i_0,k} =& u_{i_0} + k\\Delta u_{i_0},\n \\quad \\Delta u_{i_0} = \\frac{u_{i_0+1}-u_{i_0}}{n_u}, \n \\quad k = 0, \\ldots, n_u, \\\\\n v_{i_0,\\ell} =& v_{j_0}+ \\ell \\Delta v_{j_0}, \\quad \\Delta v_{j_0} = \\frac{v_{j_0+1}-v_{j_0}}{n_v},\n \\quad \\ell = 0,\\ldots, n_v.\n \\end{align}\n\\end{subequations}\n\nEach rectangle is divided into a lower left and an upper right triangle, as demonstrated in Fig.~\\ref{fig:uvtriangleint}. In this figure it is shown for a ray projected onto the $(u,v)$-plane in some knot span which triangles are candidates for an intersection in $(u,v,z)$-space. 
This is determined by the following rules:\n\\begin{itemize}\n \\item A lower left triangle is intersected in the $(u,v)$-plane if either its left or lower boundary is intersected by the ray;\n \\item an upper right triangle is intersected in the $(u,v)$-plane if either its right or upper boundary is intersected by the ray.\n\\end{itemize}\n\nThe intersection of these boundaries can be determined by finding the indices of the horizontal lines at which the vertical lines are intersected:\n\\begin{equation}\n \\ell_k = \\left\\lfloor\\frac{ o_v+(u_{i_0,k}-o_u)\\frac{d_v}{d_u} - v_{j_0}}{\\Delta v_{j_0}}\\right\\rfloor,\n\\end{equation}\nand analogously $k_\\ell$.\n\n\\paragraph{Triangle mesh intersection phase 3: $u,v,z$-space triangle intersection}\nA lower left triangle can be expressed by a plane\n\\begin{equation}\n T(u,v) = Au + Bv + C\n\\end{equation}\n defined by the following linear system:\n\\begin{equation}\n \\begin{pmatrix}\n u_{i_0,k} & v_{j_0,\\ell} & 1 \\\\\n u_{i_0,k+1} & v_{j_0,\\ell} & 1 \\\\\n u_{i_0,k} & v_{j_0,\\ell+1} & 1\n \\end{pmatrix}\n \\begin{pmatrix}\n A \\\\ B \\\\ C\n \\end{pmatrix}\n =\n \\begin{pmatrix}[1.75]\n z_{i_0,k}^{j_0,\\ell} \\\\ z_{i_0,k+1}^{j_0,\\ell} \\\\ z_{i_0,k}^{j_0,\\ell+1}\n \\end{pmatrix}.\n\\end{equation}\nHere we use the following definition:\n\\begin{equation}\n z_{i_0,k}^{j_0,\\ell} = Z(u_{i_0,k},v_{j_0,\\ell}).\n\\end{equation}\nThis yields the plane\n\\begin{align}\n T(u,v) =&& z_{i_0,k}^{j_0,\\ell} + n_u\\left(z_{i_0,k+1}^{j_0,\\ell}-z_{i_0,k}^{j_0,\\ell}\\right)\\frac{u-u_{i_0,k}}{u_{i_0+1}-u_{i_0}} \\\\ \n && +n_v\\left(z_{i_0,k}^{j_0,\\ell+1}-z_{i_0,k}^{j_0,\\ell}\\right)\\frac{v-v_{j_0,\\ell}}{v_{j_0+1}-v_{j_0}}. \\label{eq:trianglefuncdetermined}\n\\end{align}\nNote that to define this triangle, the B-spline basis functions are evaluated at fixed points in $[0,1]^2$ independent of the rays or the $P^z_{i,j}$. 
This means that, for a lens that is being optimized, these basis function values can be evaluated and stored once rather than recomputed in every iteration, improving computational efficiency.\n\nComputing the intersection with the ray $\tilde{\mathbf{r}}(t) = \tilde{\mathbf{o}} + \tilde{\hat{\mathbf{d}}}t$ is now straightforward, and yields\n\begin{equation}\n t_\text{int} = - \frac{C+\langle\tilde{\mathbf{o}}, \mathbf{n}\rangle}{\langle\tilde{\hat{\mathbf{d}}},\mathbf{n}\rangle}, \quad \mathbf{n} = \n \begin{pmatrix} 0 \\ 1 \\ \partial_v T\end{pmatrix} \times\n \begin{pmatrix} 1 \\ 0 \\ \partial_u T\end{pmatrix} = \n \begin{pmatrix}A \\ B \\ -1\end{pmatrix},\n\end{equation}\nwhere $\mathbf{n}$ is a normal vector to the triangle, computed using the cross product. This also explains why $\langle\tilde{\hat{\mathbf{d}}},\mathbf{n}\rangle=0$ does not yield a well-defined result: in this situation the ray is parallel to the triangle.\n\nThe last thing to check is whether $\tilde{\mathbf{r}}(t_\text{int})$ lies in the $(u,v)$-domain of the triangle, which can be checked by three inequalities for the three boundaries of the triangle:\n\n\begin{subequations}\n \begin{align}\n o_u + d_u t_\text{int} &\ge u_{i_0,k}, \\\n 0 \leq o_v + d_v t_\text{int} - v_{j_0,\ell} &< \frac{n_u}{n_v}\frac{v_{j_0+1}-v_{j_0}}{u_{i_0+1}-u_{i_0}}(u_{i_0,k+1}-(o_u + d_u t_\text{int})).\n \end{align}\n\end{subequations}\n\nThe computation for an upper right triangle is completely analogous. The edges that are closed for a lower left triangle are open for an upper right triangle and vice versa, which means that the $(u,v)$ domains of the triangles form an exact partition of $[0,1]^2$.
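As a concrete illustration, the plane construction, ray intersection, and domain check above can be sketched as follows (a minimal NumPy sketch for a single lower left triangle; function and variable names are illustrative, not taken from the actual implementation):

```python
import numpy as np

def intersect_lower_left_triangle(o, d, u_k, u_k1, v_l, v_l1, z_kl, z_k1l, z_kl1):
    """Intersect the ray r(t) = o + d*t with the plane T(u, v) = A*u + B*v + C
    through the corners (u_k, v_l), (u_k1, v_l), (u_k, v_l1) of a lower left
    triangle in (u, v, z)-space; returns t_int or None."""
    # Plane coefficients: the solution of the 3x3 linear system in the text.
    A = (z_k1l - z_kl) / (u_k1 - u_k)
    B = (z_kl1 - z_kl) / (v_l1 - v_l)
    C = z_kl - A * u_k - B * v_l
    n = np.array([A, B, -1.0])          # normal vector of the triangle plane
    denom = np.dot(d, n)
    if abs(denom) < 1e-12:              # ray parallel to the triangle plane
        return None
    t_int = -(C + np.dot(o, n)) / denom
    u = o[0] + d[0] * t_int
    v = o[1] + d[1] * t_int
    # Domain check: the three boundary inequalities of the lower left triangle.
    slope = (v_l1 - v_l) / (u_k1 - u_k)
    if u >= u_k and 0.0 <= v - v_l < slope * (u_k1 - u):
        return t_int
    return None
```

For example, for the triangle with corners $(0,0,0)$, $(1,0,1)$, $(0,1,2)$ (the plane $z = u + 2v$), a vertical ray from $(0.25, 0.25, 10)$ hits the plane at $t_\text{int} = 9.25$ inside the triangle's domain.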
Thus the triangle mesh is `water-tight', meaning that no ray intersection should be lost by rays passing in between triangles.\n\n\subsection{Image reconstruction}\nThe ray tracing produces an irradiance distribution in the form of an image matrix $\mathcal{I} \in \mathbb{R}^{n_x \times n_y}_{\ge 0}$, where the elements correspond to a grid of rectangles called pixels that partition the detector screen positioned at $z=z_\text{screen} > \max_{i,j} P_{i,j}^z$. The screen resolution $(n_x,n_y)$ and the screen radii $(R_x,R_y)$ together yield the pixel size\n\begin{equation}\n (w_x,w_y) = \left(\frac{2R_x}{n_x},\frac{2R_y}{n_y}\right).\n\end{equation}\nFor reasons explained later in this section, sometimes a few `ghost pixels' are added, so the effective screen radii are\n\begin{equation}\n R_x^* := R_x + \frac{\nu_x - 1}{2}w_x, \quad\n R_y^* := R_y + \frac{\nu_y - 1}{2}w_y,\n\end{equation}\nand the effective screen resolution is $(n_x + \nu_x -1, n_y + \nu_y - 1)$, where $\nu_x$ and $\nu_y$ are odd positive integers.\n\n\nProducing the irradiance distribution from the rays that intersect the detector screen is called image reconstruction \cite[sec. 7.8]{pharr2016physically}. The way that a ray contributes to a pixel with indices $i,j$ is governed by a reconstruction filter\n\begin{equation}\n F_{i,j} : [-R_x,R_x] \times [-R_y,R_y] \rightarrow \mathbb{R}_{\ge 0},\n\end{equation}\nyielding for the irradiance distribution\n\begin{equation}\n \mathcal{I}_{i,j} = \sum_{k=1}^N \omega_k F_{i,j}(\mathbf{x}_k),\n\end{equation}\nfor a set of ray intersections $\{\mathbf{x}_k\}_{k=1}^N$ with corresponding final ray weights $\{\omega_k\}_{k=1}^N$. The ray weights are initialized when the rays are sampled at the source. They are slightly modified by the lens boundary interactions, as a small portion of the light is reflected rather than refracted.
The amount by which the ray weights are modified is governed by the Fresnel equations \cite[sec. 2.7.1]{Fowles1975}. In our implementation, the Fresnel equations are approximated by Schlick's approximation \cite[eq. 24]{Schlick1994}. In the current implementation, all ray weights are initialized equally. The precise value does not matter, since the relationship between the initial and final weights is linear, and the loss function (section \ref{lossfunc}) compares scaled versions of the produced and target irradiance distributions.\n\nIn the simplest reconstruction case, the value of a pixel is given by the sum of the weights of the rays that intersect the detector screen at that pixel (called box reconstruction in \cite[sec. 7.8.1]{pharr2016physically}). In this case the reconstruction filter of pixel $i,j$ is simply the indicator function of the pixel $\left[iw_x - R_x,(i+1)w_x - R_x\right) \times \left[jw_y - R_y,(j+1)w_y - R_y\right)$.\n\nTo obtain a ray tracing implementation where the irradiance $\mathcal{I}$ is differentiable with respect to a geometry parameter of the lens, say $\theta$, the irradiance distribution must vary smoothly with this parameter. The dependency on this parameter is carried from the lens to the screen by the rays through the screen intersections $\mathbf{x}_k = \mathbf{x}_k(\theta)$. Thus, to obtain a useful gradient $\frac{\partial \mathcal{I}}{\partial \theta}$, the filter function $F_{i,j}$ should be at least $C^1$ (see Fig.~\ref{fig:reconstructiondiffb}). This is achieved by introducing a filter function that spreads out the contribution of a ray over a kernel of pixels of size $(\nu_x,\nu_y)$ centered at the intersection location.
For the conservation of light, we require that $\sum_{i,j}F_{i,j}(\mathbf{x}) \equiv 1$.\n\begin{figure}\n \centering\n \includegraphics[width = 0.7\textwidth]{figures\/reconstruction_diffb.png}\n \caption{$\mathbf{x}(\theta)$ in the left plot shows the intersection location of a ray with the screen, dependent on a lens geometry parameter $\theta$. The right plot then shows the reconstruction filter value for the green pixel in the left plot dependent on $\theta$. In order to obtain a useful gradient of the pixel value with respect to $\theta$, a smooth reconstruction filter is needed.}\n \label{fig:reconstructiondiffb}\n\end{figure}\n\nTherefore, the Gaussian reconstruction filter is introduced, based on the filter of the same name described in \cite[sec. 7.8.1]{pharr2016physically}. This filter function is based on the product \n\begin{equation}\n \tilde{F}_{i,j}(x,y;\alpha,\nu_x,\nu_y) := f_{i}^x(x;\alpha,\nu_x)f_{j}^y(y;\alpha,\nu_y),\n\end{equation}\nwhere\n\begin{equation}\n f_{i_0}^x(x;\alpha,\nu_x) = \n \begin{cases}\n e^{-\alpha\left(x-c^x_{i_0}\right)^2} - e^{-\alpha\left(\frac{\nu_x w_x}{2}\right)^2} &\text{ if } \lvert x-c^x_{i_0}\rvert < \frac{\nu_x w_x}{2},\\\n 0 & \text{otherwise.}\n \end{cases} \label{eq:filter1dim}\n\end{equation}\nThe centers of the pixels are given by\n\begin{equation}\n (c_i^x,c_j^y) := \left(\left(i + \textstyle\frac{1}{2}\right)w_x - R_x, \left(j + \textstyle\frac{1}{2}\right)w_y - R_y\right).\n\end{equation}\nNote that the support of $\tilde{F}_{i,j}$ is of size $\nu_xw_x$ by $\nu_yw_y$, the size of the kernel on the detector screen. The normalized reconstruction filter is then given by\n\begin{equation}\n F_{i,j}(x,y;\alpha,\nu_x,\nu_y) = \frac{\tilde{F}_{i,j}(x,y;\alpha,\nu_x,\nu_y)}{\sum_{i',j'}\tilde{F}_{i',j'}(x,y;\alpha,\nu_x,\nu_y)}.\n\end{equation}\nThe function $F_{i,j}$ is plotted in Fig.~\ref{fig:recfilter3d}.
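To make the construction concrete, splatting a single ray with this normalized filter can be sketched as follows (a minimal NumPy sketch assuming square pixels of width `w` and screen radius `R`; the ghost-pixel bookkeeping is omitted and out-of-range kernel pixels are simply clipped; all names are illustrative):

```python
import numpy as np

def splat_ray(image, x, y, weight, alpha, nu, w, R):
    """Add one ray hit at (x, y) to `image` using the normalized separable
    truncated-Gaussian reconstruction filter with odd kernel size nu."""
    r = nu * w / 2.0                        # half-width of the filter support

    def f(t, c):                            # 1-D truncated Gaussian around center c
        if abs(t - c) >= r:
            return 0.0
        return np.exp(-alpha * (t - c) ** 2) - np.exp(-alpha * r ** 2)

    i0 = int(np.floor((x + R) / w))         # pixel containing the hit
    j0 = int(np.floor((y + R) / w))
    half = nu // 2
    contrib = {}
    for i in range(i0 - half, i0 + half + 1):
        for j in range(j0 - half, j0 + half + 1):
            c_i = (i + 0.5) * w - R         # pixel centers as defined in the text
            c_j = (j + 0.5) * w - R
            contrib[(i, j)] = f(x, c_i) * f(y, c_j)
    total = sum(contrib.values())           # normalization conserves light
    for (i, j), value in contrib.items():
        if 0 <= i < image.shape[0] and 0 <= j < image.shape[1]:
            image[i, j] += weight * value / total
```

When the whole kernel lies on the screen, the added pixel values sum exactly to the ray weight, matching the requirement $\sum_{i,j}F_{i,j}(\mathbf{x}) \equiv 1$.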
Note that the function is not differentiable at the boundary of its support, but this causes no problems in the optimization.\n\begin{figure}\n \centering\n \includegraphics{figures\/Gaussianfilter.png}\n \caption{Gaussian reconstruction filter $F_{i_0,j_0}$ for $\alpha = 1$ and $(\nu_x,\nu_y)=(3,3)$.}\n \label{fig:recfilter3d}\n\end{figure}\n\n\begin{figure}\n \centering\n \includegraphics[width = \textwidth]{figures\/image_reconstr_examples.png}\n \caption{Image reconstruction based on a small set of ray-screen intersections, for box reconstruction (bincount) and various reconstruction filter sizes, with $\alpha = 1$.}\n \label{fig:imagereconstr}\n\end{figure}\n\nGaussian image reconstruction is shown in Fig.~\ref{fig:imagereconstr} for various values of $\nu_x = \nu_y$. There is a trade-off here: the larger $\nu_x$ and $\nu_y$ are, the blurrier the resulting image and the larger the computational graph become, but also the larger the section of the image that is aware of a particular ray, which yields more informative gradients. \n\nUp to this point, this section has discussed the ray tracing part of the pipeline; the next subsections will discuss the role of the neural network and the optimization.\n\n\subsection{Multi-layer perceptron as optimization accelerator} \label{subsec:nnarchitectures}\n\begin{figure}\n \centering\n \includegraphics[width = 0.6\textwidth]{figures\/nn_architecture_dense.png}\n \caption{The dense multi-layer perceptron architecture based on the size of the control net $(n_1+1)\times(n_2+1)$.}\n \label{fig:densenn}\n\end{figure}\n\nSeveral neural network architectures are considered, all with a trivial input of 1, meaning that the neural networks will not, strictly speaking, be used to approximate a function, since the considered domain is trivial.
Non-trivial network inputs of system parameters, like the source location, will probably be part of follow-up research.\n\nIn this configuration, the neural network can be considered a transformation of the space over which is optimized: from the space of trainable neural network parameters to the space of control point $z$-coordinate values.\nThe goal of choosing the network architecture is that optimizing the trainable parameters of this architecture yields better training behavior than optimizing the control point $z$-coordinate values directly. The used networks are multi-layer perceptrons (MLPs), feed-forward networks consisting of several layers of neurons, as depicted in Fig.~\ref{fig:densenn}. The considered architectures are:\n\begin{enumerate}\n \item No network at all. \label{item:nonetcase}\n \item A sparse MLP where the sparsity structure is informed by the overlap of the B-spline basis function supports on the knot spans. In other words: this architecture aims to let precisely those control points `communicate' within the network that share influence on some knot span product on the B-spline surface, yielding a layer with the same connectivity as a convolutional layer with kernel size $(2p+1,2q+1)$. However, each connection has its own weight and each kernel its own bias, instead of only having a weight per element of the convolution kernel and one single bias for all kernels.\n \item Larger fully connected architectures are also considered, with 3 layers of the size of the control net.
Note that two consecutive such layers yield many weight parameters: $n^4$ for a square control net with `side length' $n$.\n\end{enumerate}\nThe activation function used for all neurons is the hyperbolic tangent, which is motivated below.\n\n\subsubsection{Control point freedom} \label{subsec:controlpointfreedom}\nControl over the range of values that can be assumed by the control point $z$-coordinates is essential to make sure that the system stays physical (as mentioned in Section~\ref{subsubsec:lensconstraints}), but also to be able to take into account restrictions imposed on the lens as part of mechanical construction in a real-world application. Note that the restriction $P_{i,j}^z > z_\text{in}$ for the control points being above the lens entrance surface is not critical for a collimated ray bundle simulation, since the entrance surface can be moved arbitrarily far in the $-z$ direction without affecting the ray tracing. \n\nSince the final activation function $\tanh$ has finite range $(-1,1)$, this range can easily be mapped to a desired interval $(z_{\min},z_{\max})$:\n\begin{equation}\n y_{i,j} \mapsto z_{\min} + \textstyle\frac{1}{2} (y_{i,j} + 1)(z_{\max} - z_{\min}), \label{eq:outputcorrection}\n\end{equation}\nwhich can even vary per control point if desired. Here $y_{i,j}$ denotes an element of the total output $Y$ of the network.\nThe above can also be used as an offset from certain fixed values:\n\begin{equation}\n y_{i,j} \mapsto f\left(P^x_{i,j},P^y_{i,j}\right) + z_{\min} + \textstyle\frac{1}{2} (y_{i,j} + 1)(z_{\max} - z_{\min}). \label{eq:outputcorrectionwfunc}\n\end{equation}\nThe resulting B-spline surface approximates the surface given by $f(x,y) + \textstyle\frac{1}{2}(z_{\max}+z_{\min})$ if $Y \approx 0$, which can be used to optimize a lens that is globally at least approximately convex\/concave.
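The output correction can be sketched as follows (a minimal sketch; the function name is illustrative, and the per-control-point bounds are taken to be constant here):

```python
def correct_output(y, z_min, z_max, f_xy=0.0):
    """Map a raw network output y (after tanh, so y in (-1, 1)) to a control
    point z-coordinate in (z_min, z_max), optionally offset by a base
    surface value f_xy as in the second correction formula."""
    return f_xy + z_min + 0.5 * (y + 1.0) * (z_max - z_min)
```

For $y = 0$, i.e. for near-zero network parameters, this returns $f(P^x_{i,j},P^y_{i,j}) + \frac{1}{2}(z_{\min}+z_{\max})$: the base surface offset by the midpoint of the allowed interval.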
The choice of the hyperbolic tangent activation function accommodates this: since this activation function is smooth around its fixed point $0$, initializing the weights and biases of the network close to $0$ produces no cumulative value-increasing effect in a forward pass through the network, so that indeed $Y\approx 0$ in this case.\n\nFor comparability, in the case without a network, the optimization is not performed directly on the control point $z$-coordinates. Instead, for each control point, a new variable for optimization is created, which is passed through the activation function and the correction as in Eq.~\ref{eq:outputcorrection} or \ref{eq:outputcorrectionwfunc} before being assigned to the control point.\n\n\subsection{The optimization}\n\n\label{lossfunc}\nThe lens is optimized such that the irradiance distribution $\mathcal{I}$ projected by the lens approximates a reference image $\mathcal{I}_\text{ref}$, where $\mathcal{I} ,\mathcal{I}_\text{ref}\in \mathbb{R}^{n_x\times n_y}_{\ge 0 }$.
The loss function used to calculate the difference between the two uses the normalized matrices: \n\begin{equation}\n \widehat{\mathcal{I}} = \frac{\mathcal{I}}{\sum_{i,j}^{n_x,n_y} \mathcal{I}_{i,j}} \n \quad\n \mathrm{and}\n \quad\n \widehat{\mathcal{I}}_\text{ref} = \frac{\mathcal{I}_\text{ref}}{\sum_{i,j}^{n_x,n_y} \mathcal{I}_{\mathrm{ref},i,j}}.\n\end{equation}\n\n\noindent\nThe loss function is given by\n\begin{equation}\n \mathcal{L}(\mathcal{I};\mathcal{I}_\text{ref}) = \frac{1}{\sqrt{n_x n_y}}\n \left\| \widehat{\mathcal{I}}-\widehat{\mathcal{I}}_\text{ref} \right\|_F \n \label{eq:pipelineLoss},\n\end{equation}\nwhere $\| \cdot \|_F$ is the Frobenius norm, calculated as\n\begin{equation}\n \| \mathcal{A} \|_F = \sqrt{\sum_{i=1}^{n_x}\sum_{j=1}^{n_y} \lvert a_{i,j} \rvert ^2},\n\end{equation}\nwith $a_{i,j}$ the elements of $\mathcal{A}$. Fig.~\ref{fig:optimizationloop} shows the conventional stopping criterion of the loss value being smaller than some $\varepsilon > 0$, but in our experiments, we use a fixed number of iterations.\n\nThe neural network parameters (weights and biases) are updated using the Adam optimizer \citep{Kingma2014} by back-propagation of the loss to these parameters.\n\n\section{Results} \label{sec:results}\n\nSeveral results produced with the optimization pipeline discussed in the previous sections are displayed and discussed in this section. The implementation mainly uses PyTorch, a Python library based on Torch \citep{Collobert2002Torch}.\n\nNone of the optimizations performed for this section took more than a few hours to complete on a \verb|HP ZBook Power G7 Mobile Workstation| with a \verb|NVIDIA Quadro T1000 with Max-Q Design| GPU.\n\nMost of the results have been validated with \emph{LightTools} \citep{LightTools}, an established ray tracing software package in the optics community.
Lens designs were imported to LightTools as a point cloud, then interpolated to obtain a continuous surface, and all simulations were conducted using $10^6$ rays.\n\nUnits of length are mostly unspecified, since the obtained irradiance distributions are invariant under uniform scaling of the optical system. This invariance to scaling is reasonable as long as the lens details are orders of magnitude larger than the wavelength of the incident light, such that diffraction effects do not play a role. Furthermore, the irradiance distributions are directly proportional to the scaling of all ray weights and thus the source power, so the source and screen power also need no unit specification. Note that relative changes do have a non-trivial effect, like changes to the power proportion between sources or the distance proportions of the optical system.\n\n\newpage\n\subsection{Irradiance derivatives with respect to a control point}\n\begin{figure}\n \centering\n \includegraphics[width=0.75\textwidth]{figures\/results\/controlpoint_derivatives.png}\n \caption{Gradients of an irradiance distribution of a collimated ray bundle through a flat lens (parallel sides), with respect to the $z$-coordinate of one control point. The zeros are masked with white to show the extent of the influence of the control point. 
The three cases differ by: (a) degrees $(3,3)$, reconstruction filter size $(3,3)$; (b) degrees $(3,3)$, reconstruction filter size $(11,11)$; (c) degrees $(5,3)$, reconstruction filter size $(3,3)$.}\n \label{fig:controlpointderivs}\n\end{figure}\n\n\begin{figure}\n \centering\n \includegraphics[width=0.9\textwidth]{figures\/results\/renderderivsexplanation.png}\n \caption{Demonstration of how one control point influences the irradiance distribution in the case of a flat lens with B-spline degrees $(3,3)$ and a collimated ray bundle source.}\n \label{fig:renderderivsexplanation}\n\end{figure}\nThis section gives a simple first look at the capabilities of the implemented differentiable ray tracer: computing the derivative of an irradiance distribution with respect to a single control point. Obtaining this data is inefficient in the current PyTorch implementation, as a forward mode automatic differentiation pass is required, which is not currently (entirely) supported by PyTorch. Therefore these derivatives are computed with pixel-wise back-propagation.\n\nFig.~\ref{fig:controlpointderivs} shows the derivative of an irradiance distribution produced by a collimated ray bundle through a flat lens for various B-spline degrees and reconstruction filter sizes, and Fig.~\ref{fig:renderderivsexplanation} shows what one of these systems looks like. The overall `mountain with a surrounding valley' structure can be understood as follows: as one of the control points rises, it creates a local convexity in the otherwise flat surface. This convexity has a focusing effect, redirecting light from the negative valley region toward the positive mountain region.\n\nAlso noteworthy is the total sum of these irradiance derivatives: (a) $\num{-1.8161e-08}$, (b) $\num{3.4459e-08}$, (c) $\num{9.7095e-05}$.
These numbers are small with respect to the total irradiance of about $93$ and therefore indicate conservation of light; as the control point moves out of the flat configuration, at first the total amount of power received by the screen will not change much. This is expected for cases (a) and (b), where the control point does not affect rays that reach the screen on the boundary pixels. However, in all cases, all rays intersect the flat lens at right angles. Around $\theta = 0$, the slope of Schlick's approximation is very shallow, indicating a small decrease in refraction in favor of reflection.\n\n\subsection{Sensitivity of the optimization to initial state and neural network architecture} \label{subsec:results_nnsensitivity}\nAs with almost any iterative optimization procedure, choosing a reasonable initial guess of the solution is crucial for reaching a good local\/global minimum. For training neural networks, this comes down to how the network weights and biases are initialized. In this section, we look at three target illuminations: the circular top hat distribution (Fig.~\ref{fig:circtophat}), the TU Delft logo (Fig.~\ref{fig:TUDflameinv}), and an image of a faceted ball (Fig.~\ref{fig:facball}). For some experiments, black padding or Gaussian blurring is applied to these images.
We design lenses to produce these distributions from a collimated ray bundle, given various neural network architectures (introduced in section \ref{subsec:nnarchitectures}) and parameter initializations.\n\begin{figure}\n \centering\n \includegraphics[width=0.4\textwidth]{figures\/results\/circular_tophat.png}\n \caption{The circular tophat target illumination.}\n \label{fig:circtophat}\n\end{figure}\n\n\begin{figure}\n \centering\n \includegraphics[width=0.4\textwidth]{figures\/TUD_400_inverse.png}\n \caption{The TU Delft flame target illumination.}\n \label{fig:TUDflameinv}\n\end{figure}\n\n\begin{figure}\n \centering\n \includegraphics[width = 0.4\textwidth]{figures\/results\/shaded_faceted_ball.png}\n \caption{The faceted ball target illumination.}\n \label{fig:facball}\n\end{figure}\n\n\paragraph{Circular top hat distribution from collimated ray bundle} \label{subsec:circtophat}\nFig.~\ref{fig:tophatloss} shows the progress of the loss over 1000 iterations, with each iteration taking $2.5$ seconds, for various combinations of neural network architecture and parameter initialization. For the other parameters in these simulations, see the supplementary information. Snapshots of the resulting freeform surfaces and irradiance distributions at several stages of the training are shown in Figs.~\ref{fig:lensshapes}, \ref{fig:tophatrandomsparse}, \ref{fig:tophatunifsparse}, \ref{fig:tophatunifnonet} and \ref{fig:tophatunifdense}. Uniform here means that the initial trainable parameter values are sampled from a small interval, $U\left(\left[-10^{-4},10^{-4}\right]\right)$, except for the no-network case, which is initialized with all zeros.\n\nThe first notable difference is between the randomly and uniformly initialized sparse neural networks. The uniformly initialized neural network performs much better, and the case without a network performs better still.
This is probably because the uniformly initialized cases converge to a better (local) minimum than the randomly initialized case. Of course, it could happen that the random initialization lands in a very favorable spot in the design landscape, but intuitively this seems very unlikely.\n\nAnother property of the uniformly initialized cases is their preservation of symmetry in these setups. As Fig.~\ref{fig:lensshapes} shows, this leads to much simpler lenses, which are probably much less sensitive to manufacturing errors due to their relative lack of small detail. What is interesting to note here is that if the sparse network is initialized with all parameters set to $0$, then its optimization is identical to the no-network case, as only the biases in the last layer achieve non-zero gradients.\n\nNo rigorous investigation has been conducted into whether this increased convergence speed carries over to other target distributions and system configurations, or into what the optimal hyper-parameters are. A thorough investigation of the hyper-parameter space that defines a family of network architectures could reveal at what point in the increase of architecture complexity diminishing returns arise for optimizing these lenses.
However, based on these initial findings, the fully connected network is used for all the following optimizations in the results.\n\n\n\n\n\begin{figure}[h]\n \centering\n \includegraphics[width=0.7\textwidth]{figures\/results\/tophat_loss_progress.png}\n \caption{Loss progress over the iterations for various pipeline setups for forming a tophat distribution from a collimated ray bundle.}\n \label{fig:tophatloss}\n\end{figure}\n\n\begin{figure}\n \centering\n \includegraphics[width=0.9\textwidth]{figures\/results\/tophat_lens.png}\n \caption{The lens height field after initialization ($n=0$), and after $n=50,100$ and $1000$ iterations respectively, for different network architectures (Section~\ref{subsec:nnarchitectures}) and network parameter initializations (Section~\ref{subsec:circtophat}).}\n \label{fig:lensshapes}\n\end{figure}\n\n\begin{figure}[p]\n \centering\n \includegraphics[width=0.9\textwidth]{figures\/results\/tophat_randomsparse.png}\n \caption{Irradiance distributions and pixel-wise errors in the optimization progress of a randomly initialized lens with a sparse network towards a circular tophat illumination.}\n \label{fig:tophatrandomsparse}\n\end{figure}\n\n\begin{figure}[p]\n \centering\n \includegraphics[width=0.9\textwidth]{figures\/results\/tophat_unifsparse.png}\n \caption{Irradiance distributions and pixel-wise errors in the optimization progress of a flat lens with a sparse network towards a circular tophat illumination.}\n \label{fig:tophatunifsparse}\n\end{figure}\n\n\begin{figure}[p]\n \centering\n \includegraphics[width=0.9\textwidth]{figures\/results\/tophat_unifnonet.png}\n \caption{Irradiance distributions and pixel-wise errors in the optimization progress of a flat lens without a network towards a circular tophat illumination.}\n \label{fig:tophatunifnonet}\n\end{figure}\n\n\begin{figure}[p]\n \centering\n \includegraphics[width=0.9\textwidth]{figures\/results\/tophat_unifdense.png}\n \caption{Irradiance 
distributions and pixel-wise errors in the optimization progress of a flat lens with a dense network towards a circular tophat illumination.}\n \label{fig:tophatunifdense}\n\end{figure}\n\n\clearpage\n\paragraph{TU flame and faceted ball from collimated ray bundle}\nIn what follows, we consider more complex target distributions: the TU Delft flame (for a complex shape) and a faceted ball (for a target with various brightness levels). Here we still use the collimated ray bundle illumination, but lenses are now optimized for various magnifications; see Table~\ref{tab:magndata}. These magnifications are defined as the scaling of the screen size with respect to the smallest screen size $(0.64,0.64)$. The other parameters of these optimizations are shown in the supplementary information. Each of these iterations took about $4$ seconds.\n\nThe final irradiance distributions and corresponding LightTools results are shown in Figs.~\ref{fig:flamerenders} and \ref{fig:facetedballrenders}, respectively. These figures show that the optimization pipeline can handle these more complex target illuminations well. The LightTools results predict some artifacts within the irradiance distribution, which the implemented ray tracer does not, especially in the TU flame magnification 1 case. By visual inspection of the LightTools results, one would probably rate these results in exactly the opposite order to that indicated by the losses shown in Fig.~\ref{fig:ManufacturingLoss}.\n\nA potential explanation of the increase in loss with the magnification factor in Fig.~\ref{fig:ManufacturingLoss} is that the bigger the screen is, the larger the angles the rays require to reach its edges, as is apparent for magnifications 3 and 5 in Fig.~\ref{fig:ManufacturingRays}. This results in a larger sensitivity of the irradiance to the angle with which a ray leaves the lens. This in turn gives larger gradients of the irradiance with respect to the control points.
Therefore the optimization takes larger steps in the neural network parameter space, possibly overshooting points that result in a lower loss.\n\nFor magnifications $3$ and $5$, the irradiance distributions from LightTools show artifacts at the screen boundaries. A possible explanation for this is that the way the B-spline surfaces are transferred to LightTools is inaccurate at the surface boundaries.\footnote{Assuming only rays from the B-spline surface boundaries reach the screen boundary area.} This is because surface normals are inferred by LightTools from fewer points on the B-spline surface at the boundary than in the middle of the surface.\n\nFurthermore, a significant number of rays are lost during optimization, because the target illuminations are black at the borders, so rays near the screen boundary will be forced off the screen by the optimization. Once a ray misses the screen, it no longer contributes to the loss function, and the patch on the B-spline surface it originates from does not influence the irradiance and, thus, the loss function. \nHowever, this does not mean that this patch is idle for the rest of the optimization, as the patch can be in the support of a basis function that corresponds to a control point that still affects rays that hit the screen. Therefore, the probability of getting idle lens patches with this setup decreases with the B-spline degrees, since these determine the size of the support of the B-spline basis functions. This might, however, in some cases lead to oscillatory behavior, with rays alternating between hitting and missing the screen. \n\nFig.~\ref{fig:ManufacturingSurfaces} shows the optimized B-spline lens surface height field.
A densely varying color map is chosen since the deviations from a flat or smoothly concave shape are quite subtle; this subtlety arises because the ray-screen intersections are highly sensitive to the lens exit angles, since the ratio of lens size to screen size is large with respect to the ratio of lens size to screen distance.\n\n\begin{table}[h]\n \centering\n {\n \renewcommand{\arraystretch}{1.5}\n \begin{tabular}{c|c|c|c}\n \textbf{Magnification} & \textbf{screen size} & $f(x,y)$ & \textbf{starting shape type}\\\n \hline\n $1$ & $(0.64,0.64)$ & $\textstyle\frac{1}{2}$ & flat\\\n $3$ & $(1.92,1.92)$ & $\textstyle\frac{1}{2} + 8 - \sqrt{8^2-x^2-y^2}$ & concave\\\n $5$ & $(3.20,3.20)$ & $\textstyle\frac{1}{2} + 4 - \sqrt{4^2 - x^2 - y^2}$ & concave\n \end{tabular}\n }\n \caption{The screen size and control point offset function $f$ used per magnification in the TU flame and faceted ball optimizations (distances in centimeters).}\n \label{tab:magndata}\n\end{table}\n\n\n\begin{figure}[h]\n \centering\n \includegraphics[width=0.7\textwidth]{figures\/results\/Manufacturing_Loss.png}\n \caption{Loss progress for the various magnifications and target distributions.}\n \label{fig:ManufacturingLoss}\n\end{figure}\n\n\clearpage\n\n\begin{figure}[p]\n \centering\n \includegraphics[width=\textwidth]{figures\/results\/Manufacturing_flames.png}\n \caption{Implementation and LightTools irradiance distributions of the TU flame target from the final lens design of the optimization.}\n \label{fig:flamerenders}\n\end{figure}\n\n\begin{figure}[p]\n \centering\n \includegraphics[width=\textwidth]{figures\/results\/Manufacturing_balls.png}\n \caption{Implementation and LightTools irradiance distributions of the faceted ball target from the final lens design of the optimization.}\n \label{fig:facetedballrenders}\n\end{figure}\n\n\begin{figure}[p]\n \centering\n \includegraphics[width=\textwidth]{figures\/results\/Manufacturing_surfaces.png}\n \caption{The lens designs for the 
different magnifications and two target distributions.}\n \label{fig:ManufacturingSurfaces}\n\end{figure}\n\n\begin{figure}[p]\n \centering\n \includegraphics[width=\textwidth]{figures\/results\/Manufacturing_rays.png}\n \caption{$25\times25$ traced rays through the final lens designs for the different magnifications and two target distributions.}\n \label{fig:ManufacturingRays}\n\end{figure}\n\n\n\n\n\clearpage\n\subsection{Optimization with a point source and a grid of point sources}\nWe now consider an optimization that uses the B-spline intersection algorithm. First, we design a lens with one point source at $(0,0,-5)$ with $5\times 10^5$ rays to again form the TU flame. Then, after $\sim 200$ iterations, we change the source to an equispaced grid of $25\times 25$ point sources with $10^3$ rays each on $[-1,1]\times[-1,1]\times\{-5\}$, approximating a source of non-negligible size. The other (hyper)parameters of this optimization are shown in the supplementary information. Due to the additional B-spline intersection procedures, each iteration takes approximately $50$ seconds.\nThe resulting final irradiance distribution and LightTools verifications can be seen in Fig.~\ref{fig:point_source_renders}. The final irradiance distribution is similar to that obtained by LightTools, indicating that ray tracing with the implemented B-spline intersection algorithm works correctly. The irradiance distributions are blurred due to the reconstruction filter.\nThe single point source optimization performs well, although the illumination is less uniform than in the collimated ray bundle case (Figs.~\ref{fig:flamerenders} and \ref{fig:facetedballrenders}). \nThe non-uniformity can be attributed to the Gaussian reconstruction filter used during optimization, as it smoothes out the small non-uniformities.
\n\n\nAs seen in Fig.~\\ref{fig:point_source_renders}, the irradiance distribution obtained with a grid of point sources approximates the extended source illumination distribution quite well in the unoptimized case. \nFinding a lens design that redirects light from a source of non-negligible size into a desired irradiance distribution is a complex problem for which it is hard to predict how close to the target the optimal irradiance distribution can come.\nThe progress of the loss, as seen in Fig.~\\ref{fig:point_source_loss}, shows that the optimization can still improve the loss, even after the transition to the grid of point sources. Interestingly, looking at Fig.~\\ref{fig:point_source_renders} again, the optimization seems to adopt the coarse strategy of filling up the target distribution with images of the source square, as shown in Fig.~\\ref{fig:sourceimages}. This strategy limits the attainable quality of the final irradiance distribution, as the image of the source on the target is larger than the fine details in the desired irradiance. Optimizing both the front and back surfaces of the freeform could resolve this issue, as this would allow the image of the source to change shape depending on where it lands on the target screen.\n\n\\begin{figure}\n \\centering\n \\includegraphics{figures\/results\/source_images_indication.png}\n \\caption{Indication of images of the source square in the irradiance distribution obtained by LightTools using the point source grid.}\n \\label{fig:sourceimages}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/results\/point_source_renders.png}\n \\caption{The final irradiance distribution of the lens optimizations with point sources and the corresponding LightTools verifications. 
The extended source is not implemented in our ray tracer, but is approximated by the point source grid.}\n \\label{fig:point_source_renders}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.75\\textwidth]{figures\/results\/point_source_surfaces.png}\n \\caption{Height fields of the lenses optimized for the TU flame with point sources.}\n \\label{fig:point_source_surfaces}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width= 0.6\\textwidth]{figures\/results\/point_source_loss.png}\n \\caption{Loss over the iterations optimizing for the TU flame. The system is initialized with a point source, and after $\\sim 200$ iterations the point source is replaced by an equispaced grid of $25\\times 25$ point sources.}\n \\label{fig:point_source_loss}\n\\end{figure}\n\n\\section{Conclusion} \\label{sec:conclusion}\n\nWe demonstrated that non-sequential differentiable ray tracing is a viable tool for designing freeform lenses for collimated ray bundles, point sources, and extended sources.\nUsing a B-spline allows for the design of a continuous surface, which is desirable for manufacturing, and its control points allow for locally altering the irradiance distribution. For both the collimated and the point source case, lens designs were found that accurately project the desired irradiance distribution in both the differentiable ray tracer and in the commercial software LightTools. Some artifacts still exist, and resolving them will be part of further research.\n\nFor the source with a finite extent, the optimizer improved upon the design obtained for a point source. However, the final irradiance distribution was made up of images of the source, which limits the minimum loss that can be obtained, as the source image is larger than the details in the desired irradiance distribution. 
This issue can be resolved by optimizing multiple surfaces simultaneously, as the image of the source on the target plane can then be optimized to vary with location.\n\nUsing a neural network to remap the optimization space provides an interesting way to increase the convergence speed of the optimization. However, further investigation is required to see whether this generally holds and what the effect of other network architectures is.\n\nThe developed ray tracing implementation is currently a proof of concept and needs to be optimized for speed. The B-spline intersection algorithm, in particular, adds roughly a factor of $10$ to the computation time. A significant speedup can be achieved here by leveraging efficient lower-level GPU programming languages, such as CUDA. \n\n\n\\section{Acknowledgements}\nWe acknowledge support by the NWO-TTW Perspectief program (P15-36) ``Free-form scattering optics''.\n\n\n\\section{Introduction}\\label{sec:Introduction}\nIn the field of illumination optics, optical engineers design optical elements to transport the light from a source, which can be an LED, laser, or incandescent lamp, to obtain a desired irradiance (spatial density of the luminous flux) or intensity (angular density of the luminous flux) \\citep{Grant2011}.\nTo transport the light from the source to the target, the optical engineer can construct a system consisting of various optical elements such as lenses, mirrors, diffusers, and light guides \\citep{John2013}.\nOne particular type of optic used in automotive and road lighting applications is the freeform lens, a lens without any form of symmetry \\citep{Falaggis2022, Mohedano2016}.\nThe design of these lenses is a complex problem. 
It is currently solved by numerically solving system-specific differential equations or through optimization, with every step validated using a (non-differentiable) non-sequential ray tracer \\citep{Wu2018}.\nGreat effort is involved in generalizing these methods to account for varying numbers of optical surfaces \\citep{Anthonissen2021}, their optical surface and volume properties \\citep{Kronberg2022, lippman_prescribed_2020}, or the source model~\\citep{Muschaweck2022, tukker_efficient_2007, sorgato_design_2019}. \n\nThe performance of an optical system is evaluated using ray tracing, which is the process of calculating the path of a ray originating from a source through the optical system. Sequential ray tracers such as Zemax~\\citep{zemax} and Code V~\\citep{codev}, primarily used in the design of imaging optics, trace a small number of rays to determine the quality of the image. \nNon-sequential ray tracers such as LightTools~\\citep{LightTools} and Photopia~\\citep{photopia} use many rays to simulate the optical flux through the system and share similarities with the rendering procedures in computer graphics, with the main difference being that the rays are traced from source to camera.\n\nAlgorithmically differentiable ray tracing, a generalization of differential ray tracing \\citep{feder_differentiation_1968, stone_differential_1997, oertmann_differential_1989, chen_second-order_2012}, is a tool that is being developed for both sequential~\\citep{Sun2021, Volatier2017} and non-sequential \\citep{Mitsuba2} ray tracing.\n\\emph{Differential ray tracing} obtains system parameter gradients using numerical or algebraic differentiation. The gradient can be calculated numerically using numerical differentiation or the adjoint method \\citep{givoli_tutorial_2021}, requiring the system to be ray traced twice, once for its current state and once with perturbed system parameters. 
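The cost difference between the two approaches can be illustrated with a toy one-parameter example (the refraction toy model and all names here are purely illustrative, not part of the ray tracer described in this paper): central finite differences evaluate the system twice, while forward-mode algorithmic differentiation propagates value and derivative together in a single pass.

```python
import math

# Toy one-parameter "system": the screen x-position of a ray refracted at a
# flat interface, as a function of the refractive index n (illustrative only).
def screen_x(n, theta_i=0.3, z=10.0):
    theta_t = math.asin(math.sin(theta_i) / n)  # Snell's law
    return z * math.tan(theta_t)

# Numerical differentiation: the system must be evaluated twice.
def fd_grad(f, n, h=1e-6):
    return (f(n + h) - f(n - h)) / (2.0 * h)

# Forward-mode algorithmic differentiation: one evaluation that carries
# (value, derivative) pairs through the same elementary operations.
def screen_x_dual(n, theta_i=0.3, z=10.0):
    s = math.sin(theta_i)
    u, du = s / n, -s / n**2                             # u = s / n
    t, dt = math.asin(u), du / math.sqrt(1.0 - u * u)    # t = asin(u)
    return z * math.tan(t), z * dt / math.cos(t) ** 2    # x = z * tan(t)

value, grad_ad = screen_x_dual(1.5)   # single forward simulation
grad_fd = fd_grad(screen_x, 1.5)      # two perturbed simulations
```

In a full ray tracer the same principle applies, with frameworks typically using reverse-mode differentiation so that one backward pass yields the gradient with respect to all parameters at once.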
Analytic expressions for the gradient can be obtained by tracing the rays analytically through the system, calculating where the ray intersects the system's surfaces and how the ray's trajectory is altered. However, these expressions can become long and complicated depending on the system. In addition, the method is limited to optics described by conics, as finding analytic ray-surface intersections with higher-degree surfaces becomes complicated or even impossible. Algorithmically differentiable ray tracing can handle these issues by obtaining the gradients with a single forward simulation for an almost arbitrary system. In addition, it can be seamlessly integrated into gradient-descent-based optimization pipelines. A modern framework for this is \\emph{Physics Informed Machine Learning} \\citep{Karniadakis2021}, where a neural network is trained to approximate the solution to a physics problem formulated using data, a set of differential equations, or an implemented physics simulation (or a combination of these). \n\nWe investigate the reliability of designing freeform lenses with B-spline surfaces \\citep{Piegl1995} using algorithmically differentiable non-sequential ray tracing and gradient-based optimization to redirect the light of a light source into a prescribed irradiance distribution. The source models will be the collimated light source, point source, and finally, sources with a finite extent. The results are validated using the commercial ray trace program LightTools \\citep{LightTools}. In addition, we investigate the effectiveness of optimizing a network to determine the optimal B-spline control points as proposed in \\citep{Moller2021PIML} and \\citep{GASICK2023115839}, and compare it to optimizing the control points directly to assess the possible speed-up.\n\n\\section{Gradient-based freeform design}\nThe overall structure of our pipeline is depicted in Fig.~\\ref{fig:optimizationloop}. 
\nA freeform surface is defined by the parameters $\\mathbf{P} \\in \\mathscr{P}$, where $\\mathscr{P}$ is the set of permissible parameter values. This surface is combined with a flat surface to create a lens, and an irradiance distribution $\\mathcal{I}$ is produced by tracing rays through the lens onto a screen. The irradiance distribution is compared to a target $\\mathcal{I}_\\text{ref}$, yielding a loss $\\mathscr{L}(\\mathbf{P};\\mathcal{I}_\\text{ref})$. The optimization problem we are trying to solve can then be formulated as\n\\begin{equation}\n \\min_{\\mathbf{P} \\in \\mathscr{P}} \\; \\mathscr{L}(\\mathbf{P};\\mathcal{I}_\\text{ref}),\n\\end{equation}\nwhich we solve using gradient descent.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = \\textwidth]{figures\/Caustic_design_optimization.png}\n \\caption{Overview of our learning-based freeform design pipeline.}\n \\label{fig:optimizationloop}\n\\end{figure}\n\nThe freeform surface of the lens is defined in terms of a B-spline surface. From a manufacturing standpoint, this is convenient since B-spline surfaces can be chosen to be $C^1$ smooth (in fact, B-spline surfaces can be $C^n$ smooth for arbitrarily large $n$). 
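The local-support property that makes B-splines attractive for gradient-based optimization can be sketched in a few lines of pure Python (a hypothetical 1D cubic curve with a clamped knot vector, rather than a surface; the Cox-de Boor recursion is standard, the specific knot vector is only illustrative): moving a single control point changes the curve only on the knot spans supported by its basis function.

```python
# Cox-de Boor recursion for the B-spline basis functions N_{i,k}.
def basis(i, k, t, x):
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * basis(i, k - 1, t, x)
    if t[i + k + 1] != t[i + 1]:
        right = (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * basis(i + 1, k - 1, t, x)
    return left + right

def curve(ctrl, t, k, x):
    return sum(c * basis(i, k, t, x) for i, c in enumerate(ctrl))

k = 3                                                      # cubic
t = [0.0] * k + [j / 7.0 for j in range(8)] + [1.0] * k    # clamped knot vector
ctrl = [0.0] * (len(t) - k - 1)                            # 10 control values

bumped = ctrl.copy()
bumped[5] += 1.0                                           # move one control point

xs = [j / 500.0 for j in range(501)]
changed = [x for x in xs
           if abs(curve(bumped, t, k, x) - curve(ctrl, t, k, x)) > 1e-12]
# All changed points lie inside [t[5], t[5+k+1]], the support of N_{5,3}.
```

The same locality holds per axis for the tensor-product surface, which is why a control point only affects a small patch of the lens and hence a limited region of the irradiance distribution.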
From an optimization perspective, B-spline surfaces have the property that the control points that govern the shape of the surface and which will be optimized have a local influence on the surface geometry, which in turn has a local influence on the resulting irradiance distribution.\n\n\\subsection{The lens model using a B-spline surface}\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.2\\textwidth]{figures\/lens_schematic.png}\n \\caption{The used lens type: a volume enclosed between a flat surface and a freeform surface with a uniform refractive index.}\n \\label{fig:lenschematic}\n\\end{figure}\n\nWe define a lens as in Fig.~\\ref{fig:lenschematic} as the volume between a flat surface and a B-spline surface, with a uniform refractive index.\n\nA B-spline surface $\\mathbf{S}$ in $\\mathbb{R}^3$ is a parametric surface, see Fig.~\\ref{fig:Bsplinesurface}. It has rectangular support $[a,b]\\times[c,d]$ where $a<b$ and $c<d$. The freeform surface must lie above the flat entrance surface of the lens at $z = z_\\text{in}$, which is ensured by requiring the control point $z$-coordinates to satisfy\n\\begin{equation}\n P^z_{i,j} > z_\\text{in} \\quad \\forall (i,j). \\label{eq:nosurfintersect}\n\\end{equation}\nManufacturing can require that the lens has some minimal thickness $\\delta$, so that the constraint is stronger:\n\\begin{equation}\n P^z_{i,j} \\ge \\delta + z_\\text{in} \\quad \\forall (i,j).\n\\end{equation}\n\n\n\\subsection{Differentiable ray tracer} \\label{subsec:raytracer}\nOur implementation traces rays from a source through the flat lens surface and the freeform lens surface to the detector screen as depicted in Figs.~\\ref{fig:planetraceschematic} and \\ref{fig:pointtraceschematic}. Other ray paths, e.g., total internal reflection at lens surfaces, are not considered since it is assumed that the contribution of these to the resulting irradiance distribution is negligible.\n\n\\subsubsection{Sources and ray-sampling}\nNon-sequential ray tracing is a Monte Carlo method that approximates the solution to the continuous integral formulation of light transport through an optical system. For a detailed discussion of this topic, see \\cite[ch. 
14]{pharr2016physically}. Thus, to perform ray tracing, the light emitted by a source must be discretized into a finite set of rays\n\\begin{equation}\n l: t \\mapsto \\mathbf{o} + \\hat{\\mathbf{d}}t,\n\\end{equation}\nwhere $\\mathbf{o}$ is the origin of the ray and $\\hat{\\mathbf{d}}$ its normalized direction vector. Both collimated ray bundle and point sources will be considered, see Figs.~\\ref{fig:planetraceschematic} and \\ref{fig:pointtraceschematic}, respectively. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.6\\textwidth]{figures\/plane_trace_schematic.png}\n \\caption{Schematic of the ray tracing with a collimated ray bundle source.}\n \\label{fig:planetraceschematic}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.6\\textwidth]{figures\/point_trace_schematic.png}\n \\caption{Schematic of the ray tracing with a point source.}\n \\label{fig:pointtraceschematic}\n\\end{figure}\n\nTracing rays from a collimated ray bundle can be understood from Fig.~\\ref{fig:planetraceschematic}. The path of all rays from the source plane to the B-spline surface is a line segment parallel to the $z$-axis. Therefore, we can sample the incoming rays directly on the B-spline surface, with $\\hat{\\mathbf{d}} = (0,0,1)^\\top$. By the linearity of $X$ and $Y$, sampling on the B-spline domain $[0,1]^2$ is equivalent, in terms of distribution, to sampling on the lens extent $[-r_x,r_x]\\times [-r_y,r_y]$. Rays are sampled in a (deterministic) square grid on $[0,1]^2$. \n\nFor a point source, each ray starts at the location of the source, and the direction vector $\\hat{\\mathbf{d}}$ is sampled over the unit sphere $\\mathbb{S}^2$. 
More precisely, $\\hat{\\mathbf{d}}$ is given by\n\\begin{equation}\n \\hat{\\mathbf{d}} = \\left(\\cos\\theta\\sin\\phi,\\sin\\theta\\sin\\phi,\\cos\\phi\\right)^\\top,\n\\end{equation}\nwith $\\theta \\in [0,2\\pi)$ and $\\phi \\in [0,\\phi_\\text{max}]$ for some $0\\le \\phi_\\text{max} < \\frac{\\pi}{2}$, see Fig.~\\ref{fig:pointtraceschematic}. $\\phi_\\text{max}$ is chosen small enough to minimize the number of rays that miss the lens entrance surface, but large enough that the whole surface is illuminated. For instance, if the source is on the $z$-axis, then $\\phi_\\text{max} = \\arctan\\left(\\frac{\\sqrt{r_x^2 + r_y^2}}{z_\\text{in}-z_s}\\right)$ where $z_\\text{in}$ is the $z$-coordinate location of the entrance surface and $z_s$ the $z$-coordinate of the source. To uniformly sample points on this sphere segment, $\\theta$ is sampled (non-deterministically) uniformly in $[0,2\\pi)$ and $\\phi$ is given by\n\\begin{equation}\n \\phi = \\arccos\\left(1-(1-\\cos\\phi_\\text{max})a\\right)\n\\end{equation}\nwhere $a$ is sampled (non-deterministically) uniformly in $[0,1]$. This sampling is used to produce the results in Section~\\ref{sec:results}.\n\nFor the point source, the calculation of the intersection of a ray with the B-spline surface is non-trivial. This calculation comes down to finding the smallest positive root of the degree-$(p+q)$ piecewise polynomial function\n\\begin{equation}\n f(t) = \n Z\\left(\\begin{pmatrix}o_u \\\\ o_v\\end{pmatrix} + \n \\begin{pmatrix}d_u \\\\ d_v\\end{pmatrix}t\n \\right)\n - d_zt - o_z, \\label{eq:surfaceintersect}\n\\end{equation}\nif such a root exists and yields a point in the domain of $Z$. Here the subscripts $u$ and $v$ denote that the ray is considered in $(u,v,z)$ space instead of $(x,y,z)$ space, so for instance\n\\begin{equation}\n o_u = X^{-1}(o_x) = \\frac{1}{2}\\left(\\frac{o_x}{r_x}+1\\right), \\quad d_v = \\frac{d_y}{2r_y}.\n\\end{equation}\nThe roots of Eq.~
\\ref{eq:surfaceintersect} cannot generally be found analytically for $p+q>4$, and thus an intersection algorithm is implemented, which is explained in the next section.\n\n\\subsubsection{B-spline surface intersection algorithm}\nThe intersection algorithm is based on constructing a triangle mesh approximation of the B-spline surface and computing intersections with that mesh.\n\n\\paragraph{Triangle mesh intersection phase 1: bounding boxes}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{figures\/bb_per_knotspanproduct.png}\n \\caption{Triangles and corresponding bounding box for a few knot span products of a spherical surface.}\n \\label{fig:bb_per_knotspanproduct}\n\\end{figure}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.6\\textwidth]{figures\/uv_triangle_check.png}\n \\caption{Example of which triangles are candidates for a ray-surface intersection with the ray plotted in red, based on their $(u,v)$-domain.}\n \\label{fig:uvtriangleint}\n\\end{figure}\nChecking every ray against every triangle for intersection is computationally expensive, so it is helpful to have bounding box tests that provide rough information about whether the ray is even near some section of the B-spline surface. B-spline theory provides a tool for this: the strong convex hull property, which yields the bounding box \n\n\\begin{equation}\n B_{i_0,j_0} = \\left[u_{i_0},u_{i_0 + 1}\\right) \\times \\left[v_{j_0},v_{j_0 + 1}\\right) \\times \\left[z^{\\min}_{i_0,j_0}, z^{\\max}_{i_0,j_0}\\right]\n\\end{equation}\nwhere $z^{\\min}_{i_0,j_0}$ and $z^{\\max}_{i_0,j_0}$ are the minimum and maximum $z$-values of the control points that affect the B-spline surface on the knot span product $\\left[u_{i_0},u_{i_0 + 1}\\right) \\times \\left[v_{j_0},v_{j_0 + 1}\\right)$, hence those with indices $i_0-p\\le i\\le i_0, j_0-q \\le j\\le j_0$. 
Formulated in terms of $Z(u,v)$ this yields\n\\begin{equation}\n z^{\\min}_{i_0,j_0} \\le Z(u,v) \\le z^{\\max}_{i_0,j_0}, \\quad\n (u,v) \\in \\left[u_{i_0},u_{i_0 + 1}\\right) \\times \\left[v_{j_0},v_{j_0 + 1}\\right).\n\\end{equation}\nExamples of such bounding boxes are shown in Fig.~\\ref{fig:bb_per_knotspanproduct}.\n\nThere are two steps in applying the bounding boxes in the intersection algorithm. First, a test for the entire surface (in $(u,v,z)$-space):\n\\begin{equation}\n [0,1]^2 \\times \\left[\\min_{i,j}P^z_{i,j},\\max_{i,j}P^z_{i,j}\\right].\n\\end{equation}\nSecond, a recursive method where, starting with all knot span products, each rectangle of knot span products is divided into at most 4 sub-rectangles for a new bounding box test until individual knot span products are reached.\n\n\n\\paragraph{Triangle mesh intersection phase 2: $(u,v)$-space triangle intersection}\nEach non-trivial knot span product $[u_{i_0},u_{i_0+1}) \\times [v_{j_0},v_{j_0+1})$ is divided into a grid of $n_u$ by $n_v$ rectangles. Thus we can define the boundary points\n\\begin{subequations}\n \\begin{align}\n u_{i_0,k} =& u_{i_0} + k\\Delta u_{i_0},\n \\quad \\Delta u_{i_0} = \\frac{u_{i_0+1}-u_{i_0}}{n_u}, \n \\quad k = 0, \\ldots, n_u, \\\\\n v_{j_0,\\ell} =& v_{j_0}+ \\ell \\Delta v_{j_0}, \\quad \\Delta v_{j_0} = \\frac{v_{j_0+1}-v_{j_0}}{n_v},\n \\quad \\ell = 0,\\ldots, n_v.\n \\end{align}\n\\end{subequations}\n\nEach rectangle is divided into a lower left and an upper right triangle, as demonstrated in Fig.~\\ref{fig:uvtriangleint}. This figure shows, for a ray projected onto the $(u,v)$-plane in some knot span, which triangles are candidates for an intersection in $(u,v,z)$-space. 
This is determined by the following rules:\n\\begin{itemize}\n \\item A lower left triangle is intersected in the $(u,v)$-plane if either its left or lower boundary is intersected by the ray;\n \\item an upper right triangle is intersected in the $(u,v)$-plane if either its right or upper boundary is intersected by the ray.\n\\end{itemize}\n\nThe intersection of these boundaries can be determined by finding the indices of the horizontal lines at which the vertical lines are intersected:\n\\begin{equation}\n \\ell_k = \\left\\lfloor\\frac{ o_v+(u_{i_0,k}-o_u)\\frac{d_v}{d_u} - v_{j_0}}{\\Delta v_{j_0}}\\right\\rfloor,\n\\end{equation}\nand analogously $k_\\ell$.\n\n\\paragraph{Triangle mesh intersection phase 3: $(u,v,z)$-space triangle intersection}\nA lower left triangle can be expressed by a plane\n\\begin{equation}\n T(u,v) = Au + Bv + C\n\\end{equation}\n defined by the following linear system:\n\\begin{equation}\n \\begin{pmatrix}\n u_{i_0,k} & v_{j_0,\\ell} & 1 \\\\\n u_{i_0,k+1} & v_{j_0,\\ell} & 1 \\\\\n u_{i_0,k} & v_{j_0,\\ell+1} & 1\n \\end{pmatrix}\n \\begin{pmatrix}\n A \\\\ B \\\\ C\n \\end{pmatrix}\n =\n \\begin{pmatrix}[1.75]\n z_{i_0,k}^{j_0,\\ell} \\\\ z_{i_0,k+1}^{j_0,\\ell} \\\\ z_{i_0,k}^{j_0,\\ell+1}\n \\end{pmatrix}.\n\\end{equation}\nHere we use the following definition:\n\\begin{equation}\n z_{i_0,k}^{j_0,\\ell} = Z(u_{i_0,k},v_{j_0,\\ell}).\n\\end{equation}\nThis yields the plane\n\\begin{align}\n T(u,v) ={}& z_{i_0,k}^{j_0,\\ell} + n_u\\left(z_{i_0,k+1}^{j_0,\\ell}-z_{i_0,k}^{j_0,\\ell}\\right)\\frac{u-u_{i_0,k}}{u_{i_0+1}-u_{i_0}} \\nonumber\\\\ \n &+n_v\\left(z_{i_0,k}^{j_0,\\ell+1}-z_{i_0,k}^{j_0,\\ell}\\right)\\frac{v-v_{j_0,\\ell}}{v_{j_0+1}-v_{j_0}}. \\label{eq:trianglefuncdetermined}\n\\end{align}\nNote that to define this triangle, the B-spline basis functions are evaluated at fixed points in $[0,1]^2$ independent of the rays or the $P^z_{i,j}$. 
This means that, for a lens that will be optimized, these basis function values can be evaluated and stored once rather than in every iteration, for computational efficiency.\n\nComputing the intersection with the ray $\\tilde{\\mathbf{r}}(t) = \\tilde{\\mathbf{o}} + \\tilde{\\hat{\\mathbf{d}}}t$ is now straightforward, and yields\n\\begin{equation}\n t_\\text{int} = - \\frac{C+\\langle\\tilde{\\mathbf{o}}, \\mathbf{n}\\rangle}{\\langle\\tilde{\\hat{\\mathbf{d}}},\\mathbf{n}\\rangle}, \\quad \\mathbf{n} = \n \\begin{pmatrix} 0 \\\\ 1 \\\\ \\partial_v T\\end{pmatrix} \\times\n \\begin{pmatrix} 1 \\\\ 0 \\\\ \\partial_u T\\end{pmatrix} = \n \\begin{pmatrix}A \\\\ B \\\\ -1\\end{pmatrix},\n\\end{equation}\nwhere $\\mathbf{n}$ is a normal vector to the triangle, computed using the cross product. This also explains why $\\langle\\tilde{\\hat{\\mathbf{d}}},\\mathbf{n}\\rangle=0$ does not yield a well-defined result: in this situation the ray is parallel to the triangle.\n\nThe last thing to check is whether $\\tilde{\\mathbf{r}}(t_\\text{int})$ lies in the $(u,v)$-domain of the triangle, which can be checked by three inequalities for the three boundaries of the triangle:\n\n\\begin{subequations}\n \\begin{align}\n o_u + d_u t_\\text{int} \\ge u_{i_0,k}, \\\\\n 0 \\leq o_v + d_v t_\\text{int} - v_{j_0,\\ell}< \\frac{n_u}{n_v}\\frac{v_{j_0+1}-v_{j_0}}{u_{i_0+1}-u_{i_0}}(u_{i_0,k+1}-(o_u + d_u t_\\text{int})).\n \\end{align}\n\\end{subequations}\n\nThe computation for an upper right triangle is completely analogous. The upper triangle has a closed boundary where the lower triangle has an open one and vice versa, which means that the $(u,v)$-domains of the triangles form an exact partition of $[0,1]^2$. 
Thus the triangle mesh is `water-tight', meaning that no ray intersection should be lost by rays passing in between triangles.\n\n\\subsection{Image reconstruction}\nThe ray tracing produces an irradiance distribution in the form of an image matrix $\\mathcal{I} \\in \\mathbb{R}^{n_x \\times n_y}_{\\ge 0}$, where the elements correspond to a grid of rectangles called pixels that partition the detector screen positioned at $z=z_\\text{screen} > \\max_{i,j} P_{i,j}^z$. The screen resolution $(n_x,n_y)$ and the screen radii $(R_x,R_y)$ together yield the pixel size\n\\begin{equation}\n (w_x,w_y) = \\left(\\frac{2R_x}{n_x},\\frac{2R_y}{n_y}\\right).\n\\end{equation}\nFor reasons explained later in this section, sometimes a few `ghost pixels' are added, so the effective screen radii are\n\\begin{equation}\n R_x^* := R_x + \\frac{\\nu_x - 1}{2}w_x, \\quad\n R_y^* := R_y + \\frac{\\nu_y - 1}{2}w_y,\n\\end{equation}\nand the effective screen resolution is $(n_x + \\nu_x -1, n_y + \\nu_y - 1)$, where $\\nu_x$ and $\\nu_y$ are odd positive integers that specify the size of the reconstruction kernel introduced below.\n\n\nProducing the irradiance distribution from the rays that intersect the detector screen is called image reconstruction \\cite[sec. 7.8]{pharr2016physically}. The way that a ray contributes to a pixel with indices $i,j$ is governed by a reconstruction filter\n\\begin{equation}\n F_{i,j} : [-R_x,R_x] \\times [-R_y,R_y] \\rightarrow \\mathbb{R}_{\\ge 0},\n\\end{equation}\nyielding for the irradiance distribution\n\\begin{equation}\n \\mathcal{I}_{i,j} = \\sum_{k=1}^N \\omega_k F_{i,j}(\\mathbf{x}_k),\n\\end{equation}\nfor a set of ray intersections $\\{\\mathbf{x}_k\\}_{k=1}^N$ with corresponding final ray weights $\\{\\omega_k\\}_{k=1}^N$. The ray weights are initialized when the rays are sampled at the source. They are slightly modified by the lens boundary interactions as a small portion of the light is reflected rather than refracted. 
The amount by which the ray weights are modified is governed by the Fresnel equations \\cite[sec. 2.7.1]{Fowles1975}. In our implementation, the Fresnel equations are approximated by Schlick's approximation \\cite[eq. 24]{Schlick1994}. In the current implementation, all ray weights are initialized equally. The precise value does not matter since the relationship between the initial and final weights is linear. The loss function (Section~\\ref{lossfunc}) compares scaled versions of the produced and target irradiance distribution.\n\nIn the simplest reconstruction case, the value of a pixel is given by the sum of the weights of the rays that intersect the detector screen at that pixel (called box reconstruction in \\cite[sec. 7.8.1]{pharr2016physically}). In this case the reconstruction filter of pixel $i,j$ is simply the indicator function of the pixel $\\left[i w_x - R_x,(i+1)w_x - R_x\\right) \\times \\left[j w_y - R_y,(j+1)w_y - R_y\\right)$.\n\nTo obtain a ray tracing implementation where the irradiance $\\mathcal{I}$ is differentiable with respect to geometry parameters of the lens, say, the parameter $\\theta$, the irradiance distribution must vary smoothly with this parameter. The dependency on this parameter is carried from the lens to the screen by the rays through the screen intersections $\\mathbf{x}_k = \\mathbf{x}_k(\\theta)$. Thus, to obtain a useful gradient $\\frac{\\partial \\mathcal{I}}{\\partial \\theta}$, the filter function $F_{i,j}$ should be at least $C^1$ (see Fig.~\\ref{fig:reconstructiondiffb}). This is achieved by introducing a filter function that spreads out the contribution of a ray over a kernel of pixels of size $(\\nu_x,\\nu_y)$ centered at the intersection location. 
For the conservation of light, we require that $\\sum_{i,j}F_{i,j}(\\mathbf{x}) \\equiv 1$.\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.7\\textwidth]{figures\/reconstruction_diffb.png}\n \\caption{$\\mathbf{x}(\\theta)$ in the left plot shows the intersection location of a ray with the screen, dependent on a lens geometry parameter $\\theta$. The right plot then shows the reconstruction filter value for the green pixel in the left plot dependent on $\\theta$. In order to obtain a useful gradient of the pixel value with respect to $\\theta$, a smooth reconstruction filter is needed.}\n \\label{fig:reconstructiondiffb}\n\\end{figure}\n\nTherefore, the Gaussian reconstruction function is introduced, based on the identically named one described in \\cite[sec. 7.8.1]{pharr2016physically}. This filter function is based on the product \n\\begin{equation}\n \\tilde{F}_{i,j}(x,y;\\alpha,\\nu_x,\\nu_y) := f_{i}^x(x;\\alpha,\\nu_x)f_{j}^y(y;\\alpha,\\nu_y),\n\\end{equation}\nwhere\n\\begin{equation}\n f_{i_0}^x(x;\\alpha,\\nu_x) = \n \\begin{cases}\n e^{-\\alpha\\left(x-c^x_{i_0}\\right)^2} - e^{-\\alpha\\left(\\frac{\\nu_x w_x}{2}\\right)^2} &\\text{ if } \\lvert x-c^x_{i_0}\\rvert < \\frac{\\nu_x w_x}{2},\\\\\n 0 & \\text{otherwise.}\n \\end{cases} \\label{eq:filter1dim}\n\\end{equation}\nThe centers of the pixels are given by\n\\begin{equation}\n (c_i^x,c_j^y) := \\left(\\left(i + \\textstyle\\frac{1}{2}\\right)w_x - R_x, \\left(j + \\textstyle\\frac{1}{2}\\right)w_y - R_y\\right).\n\\end{equation}\nNote that the support of $\\tilde{F}_{i,j}$ is of size $\\nu_xw_x$ by $\\nu_yw_y$, the size of the kernel on the detector screen. The normalized reconstruction filter is then given by\n\\begin{equation}\n F_{i,j}(x,y;\\alpha,\\nu_x,\\nu_y) = \\frac{\\tilde{F}_{i,j}(x,y;\\alpha,\\nu_x,\\nu_y)}{\\sum_{i',j'}\\tilde{F}_{i',j'}(x,y;\\alpha,\\nu_x,\\nu_y)}.\n\\end{equation}\nThe function $F_{i,j}$ is plotted in Fig.~\\ref{fig:recfilter3d}. 
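A minimal numpy sketch of this normalized Gaussian reconstruction (function and parameter names are ours, not the paper's implementation; `nu` plays the role of $\nu_x=\nu_y$ and the padding realizes the ghost pixels):

```python
import numpy as np

def gaussian_splat(points, weights, n=(64, 64), R=(1.0, 1.0), nu=3, alpha=1.0):
    """Splat ray-screen intersections into an irradiance image using the
    truncated-Gaussian reconstruction filter, normalized per ray so that
    each ray deposits exactly its weight (conservation of light)."""
    nx, ny = n
    wx, wy = 2 * R[0] / nx, 2 * R[1] / ny
    pad = (nu - 1) // 2                                  # ghost pixels per side
    cx = (np.arange(-pad, nx + pad) + 0.5) * wx - R[0]   # pixel centers, x
    cy = (np.arange(-pad, ny + pad) + 0.5) * wy - R[1]   # pixel centers, y
    img = np.zeros((nx + 2 * pad, ny + 2 * pad))

    def f1d(x, c, w):                                    # one 1D factor of the filter
        d = x - c
        g = np.exp(-alpha * d ** 2) - np.exp(-alpha * (nu * w / 2) ** 2)
        return np.where(np.abs(d) < nu * w / 2, g, 0.0)

    for (x, y), om in zip(points, weights):
        F = np.outer(f1d(x, cx, wx), f1d(y, cy, wy))     # separable 2D filter
        img += om * F / F.sum()                          # normalized contribution
    return img
```

Because the filter value varies smoothly with the intersection location, a small shift of a ray moves weight gradually between neighboring pixels, which is exactly what makes the pixel values usefully differentiable.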
Note that the function is not differentiable at the boundary of its support, but this yields no problems in the optimization.\n\\begin{figure}\n \\centering\n \\includegraphics{figures\/Gaussianfilter.png}\n \\caption{Gaussian reconstruction filter $F_{i_0,j_0}$ for $\\alpha = 1$ and $(\\nu_x,\\nu_y)=(3,3)$.}\n \\label{fig:recfilter3d}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = \\textwidth]{figures\/image_reconstr_examples.png}\n \\caption{Image reconstruction based on a small set of ray-screen intersections, for bincount and various reconstruction filter sizes, with $\\alpha = 1$.}\n \\label{fig:imagereconstr}\n\\end{figure}\n\nGaussian image reconstruction is shown in Fig.~\\ref{fig:imagereconstr} for various values of $\\nu_x = \\nu_y$. There is a trade-off here: the larger $\\nu_x$ and $\\nu_y$ are, the blurrier the resulting image is and the larger the computational graph becomes, but also the larger the section of the image that is aware of a particular ray, which yields more informative gradients. \n\nUp to this point, this section has discussed the ray tracing part of the pipeline; the next subsections discuss the role of the neural network and the optimization.\n\n\\subsection{Multi-layer perceptron as optimization accelerator} \\label{subsec:nnarchitectures}\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.6\\textwidth]{figures\/nn_architecture_dense.png}\n \\caption{The dense multi-layer perceptron architecture based on the size of the control net $(n_1+1)\\times(n_2+1)$.}\n \\label{fig:densenn}\n\\end{figure}\n\nSeveral neural network architectures are considered, all with a trivial input of 1, meaning that the neural networks will not, strictly speaking, be used to approximate a function since the considered domain is trivial. 
Non-trivial network inputs of system parameters like the source location will probably be part of follow-up research.\n\nIn this configuration, the neural network can be considered a transformation of the space over which we optimize: from the space of trainable neural network parameters to the space of control point $z$-coordinate values.\nThe network architecture is chosen such that optimizing its trainable parameters yields better training behavior than optimizing the control point $z$-coordinate values directly. The used networks are multi-layer perceptrons (MLPs), feed-forward networks consisting of several layers of neurons, as depicted in Fig.~\\ref{fig:densenn}. The considered architectures are:\n\\begin{enumerate}\n \\item No network at all. \\label{item:nonetcase}\n \\item A sparse MLP where the sparsity structure is informed by the overlap of the B-spline basis function supports on the knot spans. In other words: this architecture aims to let precisely those control points `communicate' within the network that share influence on some knot span product on the B-spline surface, yielding a layer with the same connectivity as a convolutional layer with kernel size $(2p+1,2q+1)$. However, each connection has its own weight and each kernel its own bias, instead of only having a weight per element of the convolution kernel and a single bias for all kernels.\n \\item Larger fully connected architectures are also considered, with 3 layers of control net size. 
Note that two consecutive such layers yield many weight parameters: $n^4$ for a square control net with `side length' $n$.\n\\end{enumerate}\nThe activation function used for all neurons is the hyperbolic tangent, which is motivated below.\n\n\\subsubsection{Control point freedom} \\label{subsec:controlpointfreedom}\nControl over the range of values that can be assumed by the control point $z$-coordinates is essential to make sure that the system stays physical (as mentioned in Section~\\ref{subsubsec:lensconstraints}), but also to be able to take into account restrictions imposed on the lens as part of mechanical construction in a real-world application. Note that the restriction $P_{i,j}^z > z_\\text{in}$ for the control points being above the lens entrance surface is not critical for a collimated ray bundle simulation, since the entrance surface can be moved arbitrarily in the $-z$ direction without affecting the ray tracing. \n\nSince the final activation function $\\tanh$ has finite range $(-1,1)$, this can easily be mapped to a desired interval $(z_{\\min},z_{\\max})$:\n\\begin{equation}\n y_{i,j} \\mapsto z_{\\min} + \\textstyle\\frac{1}{2} (y_{i,j} + 1)(z_{\\max} - z_{\\min}), \\label{eq:outputcorrection}\n\\end{equation}\nwhich can even vary per control point if desired. Here $y_{i,j}$ denotes an element of the total output $Y$ of the network.\nThe above can also be used as an offset from certain fixed values:\n\\begin{equation}\n y_{i,j} \\mapsto f\\left(P^x_{i,j},P^y_{i,j}\\right) + z_{\\min} + \\textstyle\\frac{1}{2} (y_{i,j} + 1)(z_{\\max} - z_{\\min}). \\label{eq:outputcorrectionwfunc}\n\\end{equation}\nIf $Y \\approx 0$, the resulting B-spline surface approximates the surface given by $f(x,y) + \\textstyle\\frac{1}{2}(z_{\\max}+z_{\\min})$; this can be used to optimize a lens that is globally at least approximately convex\/concave. 
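The mappings of Eqs.~\ref{eq:outputcorrection} and \ref{eq:outputcorrectionwfunc} can be sketched in plain Python as follows (the offset function used here is a hypothetical paraboloid, purely for illustration):

```python
def correct_output(y, z_min, z_max, f=None, x=0.0, y_pos=0.0):
    """Map a tanh output y in (-1, 1) to the interval (z_min, z_max),
    optionally offset by a fixed height function f(x, y)."""
    z = z_min + 0.5 * (y + 1.0) * (z_max - z_min)
    if f is not None:
        z += f(x, y_pos)
    return z

# With y = 0 (network output ~0 at initialization), the control point
# lands at the interval midpoint (z_min + z_max) / 2, on top of the offset.
paraboloid = lambda x, y: 0.1 * (x * x + y * y)   # hypothetical f
z = correct_output(0.0, z_min=0.2, z_max=0.8, f=paraboloid, x=1.0, y_pos=0.0)
print(z)  # ~0.6: the midpoint 0.5 plus the offset 0.1
```

Because the tanh output is strictly inside $(-1,1)$, the corrected $z$-coordinate can never leave $(z_{\min}, z_{\max})$ around the offset surface.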
The choice of the hyperbolic tangent activation function accommodates this: since $\\tanh$ is smooth around its fixed point $0$, initializing the weights and biases of the network close to $0$ yields no cumulative value-increasing effect in a forward pass through the network, so that indeed $Y\\approx 0$ in this case.\n\nFor comparability, in the case without a network, the optimization is not performed directly on the control point $z$-coordinates. Instead, for each control point, a new variable for optimization is created, which is passed through the activation function and the correction as in Eq.~\\ref{eq:outputcorrection} or \\ref{eq:outputcorrectionwfunc} before being assigned to the control point.\n\n\\subsection{The optimization}\n\n\\label{lossfunc}\nThe lens is optimized such that the irradiance distribution $\\mathcal{I}$ projected by the lens approximates a reference image $\\mathcal{I}_\\text{ref}$, where $\\mathcal{I},\\mathcal{I}_\\text{ref}\\in \\mathbb{R}^{n_x\\times n_y}_{\\ge 0}$. 
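A plain-Python sketch of this comparison (both distributions normalized to unit total power, then a scaled Frobenius distance, as formalized in Eq.~\ref{eq:pipelineLoss} below; the actual pipeline operates on PyTorch tensors):

```python
import math

def normalized_frobenius_loss(I, I_ref):
    """Compare two non-negative irradiance matrices after normalizing
    each to unit total power; returns the scaled Frobenius distance."""
    nx, ny = len(I), len(I[0])
    s, s_ref = sum(map(sum, I)), sum(map(sum, I_ref))
    sq = sum((I[i][j] / s - I_ref[i][j] / s_ref) ** 2
             for i in range(nx) for j in range(ny))
    return math.sqrt(sq) / math.sqrt(nx * ny)

# Identical distributions up to overall power give zero loss.
A = [[1.0, 2.0], [3.0, 4.0]]
B = [[2.0, 4.0], [6.0, 8.0]]       # same shape, twice the power
print(normalized_frobenius_loss(A, B))  # 0.0
```

The normalization makes the loss insensitive to the absolute source power, so only the shape of the irradiance distribution is penalized.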
The loss function used to calculate the difference between the two uses the normalized matrices: \n\\begin{equation}\n \\widehat{\\mathcal{I}} = \\frac{\\mathcal{I}}{\\sum_{i,j}^{n_x,n_y} \\mathcal{I}_{i,j}} \n \\quad\n \\mathrm{and}\n \\quad\n \\widehat{\\mathcal{I}}_\\text{ref} = \\frac{\\mathcal{I}_\\text{ref}}{\\sum_{i,j}^{n_x,n_y} \\mathcal{I}_{\\mathrm{ref},i,j}}.\n\\end{equation}\n\n\\noindent\nThe loss function is given by\n\\begin{equation}\n \\mathcal{L}(\\mathcal{I};\\mathcal{I}_\\text{ref}) = \\frac{1}{\\sqrt{n_x n_y}}\n \\left\\| \\widehat{\\mathcal{I}}-\\widehat{\\mathcal{I}}_\\text{ref} \\right\\|_F \n \\label{eq:pipelineLoss},\n\\end{equation}\nwhere $\\| \\cdot \\|_F$ is the Frobenius or matrix norm, which is calculated as follows:\n\\begin{equation}\n \\| \\mathcal{A} \\|_F = \\sqrt{\\sum_{i}^{n_x}\\sum_{j}^{n_y} \\lvert a_{i,j} \\rvert ^2}.\n\\end{equation}\nFig.~\\ref{fig:optimizationloop} shows the conventional stopping criterion of the loss value being smaller than some $\\varepsilon > 0$, but in our experiments, we use a fixed number of iterations.\n\nThe neural network parameters (weights and biases) are updated using the Adam optimizer \\citep{Kingma2014} by back-propagation of the loss to these parameters.\n\n\\section{Results} \\label{sec:results}\n\nSeveral results produced with the optimization pipeline discussed in the previous sections are displayed and discussed in this section. The implementation mainly uses PyTorch, a Python wrapper of Torch \\citep{Collobert2002Torch}.\n\nNone of the optimizations performed for this section took more than a few hours to complete, on a \\verb|HP ZBook Power G7 Mobile Workstation| with a \\verb|NVIDIA Quadro T1000 with Max-Q Design| GPU.\n\nMost of the results have been validated with \\emph{LightTools} \\citep{LightTools}, an established ray tracing software package in the optics community. 
Lens designs were imported to LightTools as a point cloud, then interpolated to obtain a continuous surface, and all simulations were conducted using $10^6$ rays.\n\nUnits of length are mostly unspecified since the obtained irradiance distributions are invariant under uniform scaling of the optical system. This invariance to scaling is reasonable as long as the lens details are orders of magnitude larger than the wavelength of the incident light such that diffraction effects do not play a role. Furthermore, the irradiance distributions are directly proportional to the scaling of all ray weights and thus the source power, so the source and screen power also need no unit specification. Note that relative changes have a non-trivial effect, like changes to the power proportion between sources or the distance proportions of the optical system.\n\n\\newpage\n\\subsection{Irradiance derivatives with respect to a control point}\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.75\\textwidth]{figures\/results\/controlpoint_derivatives.png}\n \\caption{Gradients of an irradiance distribution of a collimated ray bundle through a flat lens (parallel sides), with respect to the $z$-coordinate of one control point. The zeros are masked with white to show the extent of the influence of the control point. 
These irradiance distributions differ by: (a): degrees $(3,3)$, reconstruction filter size $(3,3)$, (b): degrees $(3,3)$, reconstruction filter size $(11,11)$, (c): degrees $(5,3)$, reconstruction filter size $(3,3)$.}\n \\label{fig:controlpointderivs}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/results\/renderderivsexplanation.png}\n \\caption{Demonstration of how one control point influences the irradiance distribution in the case of a flat lens with B-spline degrees $(3,3)$ and a collimated ray bundle source.}\n \\label{fig:renderderivsexplanation}\n\\end{figure}\nThis section gives a simple first look at the capabilities of the implemented differentiable ray tracer: computing the derivative of an irradiance distribution with respect to a single control point. Obtaining this data is inefficient in the current PyTorch implementation as a forward mode automatic differentiation pass is required, which is not currently (entirely) supported by PyTorch. Therefore these derivatives are computed with pixel-wise back-propagation.\n\nFig.~\\ref{fig:controlpointderivs} shows the derivative of an irradiance distribution produced by a collimated ray bundle through a flat lens for various B-spline degrees and reconstruction filter sizes, and Fig.~\\ref{fig:renderderivsexplanation} shows what one of these systems looks like. The overall `mountain with a surrounding valley' structure can be understood as follows: as one of the control points rises, it creates a local convexity in the otherwise flat surface. This convexity has a focusing effect, redirecting light from the negative valley region toward the positive mountain region.\n\nAlso noteworthy about these irradiance derivatives is their total sum: (a) $\\SI{-1.8161e-08}{}$, (b) $\\SI{3.4459e-08}{}$, (c) $\\SI{9.7095e-05}{}$. 
These numbers are small with respect to the total irradiance of about $93$ and therefore indicate conservation of light; as the control point moves out of the flat configuration, at first, the total amount of power received by the screen will not change much. This is expected from cases (a) and (b), where the control point does not affect rays that reach the screen on the boundary pixels. However, in all cases, the rays intersect the lens at right angles. Around $\\theta = 0$, the slope of Schlick's approximation is very shallow, indicating a small decrease in refraction in favor of reflection.\n\n\\subsection{Sensitivity of the optimization to initial state and neural network architecture} \\label{subsec:results_nnsensitivity}\nAs with almost any iterative optimization procedure, choosing a reasonable initial guess of the solution is crucial for reaching a good local\/global minimum. For training neural networks, this comes down to how the network weights and biases are initialized. In this section, we look at three target illuminations: the circular top hat distribution (Fig.~\\ref{fig:circtophat}), the TU Delft logo (Fig.~\\ref{fig:TUDflameinv}), and an image of a faceted ball (Fig.~\\ref{fig:facball}). For some experiments, black padding or Gaussian blurring is applied to these images. 
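The near-flatness of Schlick's approximation around normal incidence, used in the argument above, is easy to verify numerically (illustrative sketch; the refractive index $n = 1.5$ is a typical glass value, not necessarily the one used in the experiments):

```python
import math

def schlick_reflectance(theta, n1=1.0, n2=1.5):
    """Schlick's approximation of the Fresnel reflectance for light
    hitting an n1 -> n2 interface at incidence angle theta (radians)."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - math.cos(theta)) ** 5

# Near theta = 0 the curve is extremely flat: tilting the surface by a
# few degrees barely changes the reflected (and thus refracted) fraction.
print(round(schlick_reflectance(0.0), 4))  # 0.04
print(schlick_reflectance(math.radians(5)) - schlick_reflectance(0.0))  # ~1e-12
```

The $(1-\cos\theta)^5$ term vanishes to fifth order at normal incidence, which is why a small perturbation of the flat lens hardly changes the total power reaching the screen.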
We design lenses to produce these distributions from a collimated ray bundle, given various neural network architectures (introduced in Section~\\ref{subsec:nnarchitectures}) and parameter initializations.\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/results\/circular_tophat.png}\n \\caption{The circular tophat target illumination.}\n \\label{fig:circtophat}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/TUD_400_inverse.png}\n \\caption{The TU Delft flame target illumination.}\n \\label{fig:TUDflameinv}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.4\\textwidth]{figures\/results\/shaded_faceted_ball.png}\n \\caption{The faceted ball target illumination.}\n \\label{fig:facball}\n\\end{figure}\n\n\\paragraph{Circular top hat distribution from collimated ray bundle} \\label{subsec:circtophat}\nFig.~\\ref{fig:tophatloss} shows the progress of the loss over 1000 iterations, with each iteration taking $2.5$ seconds, for various neural network architectures and parameter initialization combinations. For the other parameters in these simulations, see the supplementary information. The resulting freeform surfaces and irradiance distributions at a few moments during training are shown in Figs.~\\ref{fig:lensshapes}, \\ref{fig:tophatrandomsparse}, \\ref{fig:tophatunifsparse}, \\ref{fig:tophatunifnonet} and \\ref{fig:tophatunifdense}. Uniform here means that the initial trainable parameter values are sampled from a small interval: $U\\left(\\left[-10^{-4},10^{-4}\\right]\\right)$, except for the no-network case; this is initialized with all zeros.\n\nThe first notable difference is between the random and uniformly initialized sparse neural networks. The uniformly initialized network performs much better, and the no-network case performs better still. 
This is probably because the uniformly initialized cases converge to a better (local) minimum than the randomly initialized case. Of course, it could happen that the random initialization lands in a very favorable spot in the design landscape, but intuitively this seems very unlikely.\n\nAnother property of the uniformly initialized cases is their preservation of symmetry in these setups. As Fig.~\\ref{fig:lensshapes} shows, this leads to much simpler lenses, which are probably much less sensitive to manufacturing errors due to their relative lack of small detail. What is interesting to note here is that if the sparse network is initialized with all parameters set to $0$, then its optimization is identical to the no-network case, as only the biases in the last layer achieve non-zero gradients.\n\nNo rigorous investigation has been conducted into whether this increased convergence speed carries over to other target distributions and system configurations, or into what the optimal hyper-parameters are. A thorough investigation of the hyper-parameter space that defines a family of network architectures could reveal at which point of increasing architecture complexity diminishing returns arise for optimizing these lenses. 
However, based on these initial findings, the fully connected network is used for all the following optimizations in the results.\n\n\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.7\\textwidth]{figures\/results\/tophat_loss_progress.png}\n \\caption{Loss progress over the iterations for various pipeline-setups for forming a tophat distribution from a collimated ray bundle.}\n \\label{fig:tophatloss}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/results\/tophat_lens.png}\n \\caption{The lens height field after initialization ($n=0$), and $n=50,100$ and $1000$ iterations respectively, for different network architectures (Section~\\ref{subsec:nnarchitectures}) and network parameter initializations (Section~\\ref{subsec:circtophat}).}\n \\label{fig:lensshapes}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/results\/tophat_randomsparse.png}\n \\caption{Irradiance distributions and pixel-wise errors in the optimization progress of a random lens with a sparse network towards a circular tophat illumination.}\n \\label{fig:tophatrandomsparse}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/results\/tophat_unifsparse.png}\n \\caption{Irradiance distributions and pixel-wise errors in the optimization progress of a flat lens with a sparse network towards a circular tophat illumination.}\n \\label{fig:tophatunifsparse}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/results\/tophat_unifnonet.png}\n \\caption{Irradiance distributions and pixel-wise errors in the optimization progress of a flat lens without a network towards a circular tophat illumination.}\n \\label{fig:tophatunifnonet}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{figures\/results\/tophat_unifdense.png}\n \\caption{Irradiance 
distributions and pixel-wise errors in the optimization progress of a flat lens with a dense network towards a circular tophat illumination.}\n \\label{fig:tophatunifdense}\n\\end{figure}\n\n\\clearpage\n\\paragraph{TU flame and faceted ball from collimated ray bundle}\nIn what follows, we consider complex target distributions: the TU Delft flame (for a complex shape) and a faceted ball (for a target with various brightness levels). Here we still use the collimated ray bundle illumination, but lenses are now optimized for various magnifications; see Table~\\ref{tab:magndata}. These magnifications are defined as the scaling of the screen size with respect to the smallest screen size $(0.64,0.64)$. The other parameters of these optimizations are shown in the supplementary information. Each of these iterations took about $4$ seconds.\n\nThe final irradiance distributions and corresponding LightTools results are shown in Figs.~\\ref{fig:flamerenders} and \\ref{fig:facetedballrenders}, respectively. These figures show that the optimization pipeline can handle these more complex target illuminations well. The LightTools results predict some artifacts within the irradiance distribution, which the implemented ray tracer does not, especially in the TU flame magnification 1 case. By visual inspection, based on the LightTools results, one would probably rate these results in exactly the opposite order to that indicated by the losses shown in Fig.~\\ref{fig:ManufacturingLoss}.\n\nA potential explanation of the increase in loss with the magnification factor in Fig.~\\ref{fig:ManufacturingLoss} is that the bigger the screen is, the higher the angles the rays require to reach its edges, which is apparent for magnifications 3 and 5 in Fig.~\\ref{fig:ManufacturingRays}. This results in a larger sensitivity of the irradiance to the angle with which a ray leaves the lens. This in turn gives larger gradients of the irradiance with respect to the control points. 
Therefore the optimization takes larger steps in the neural network parameter space, possibly overshooting points that result in a lower loss.\n\nFor magnifications $3$ and $5$, the irradiance distributions from LightTools show artifacts at the screen boundaries. A possible explanation for this is that the way the B-spline surfaces are transferred to LightTools is inaccurate at the surface boundaries.\\footnote{Assuming only rays from the B-spline surface boundaries reach the screen boundary area.} This is because LightTools infers surface normals from fewer points on the B-spline surface at the boundary than in the middle of the surface.\n\nFurthermore, a significant number of rays is lost during optimization because the target illuminations are black at the borders, so rays near the screen boundary will be forced off the screen by the optimization. Once a ray misses the screen, it no longer contributes to the loss function, and the patch on the B-spline surface it originates from no longer influences the irradiance and, thus, the loss function. \nHowever, this does not mean that this patch is idle for the rest of the optimization, as this patch can be in the support of a basis function that corresponds to a control point that still affects rays that hit the screen. Therefore, the probability of getting idle lens patches with this setup decreases with the B-spline degrees, since these determine the size of the support of the B-spline basis functions; this might, in some cases, lead to oscillatory behavior, with rays alternating between hitting and missing the screen. \n\nFig.~\\ref{fig:ManufacturingSurfaces} shows the optimized B-spline lens surface height field. 
A densely varying color map is chosen since the deviations from a flat or smooth concave shape are quite subtle; this is due to the large sensitivity of the ray-screen intersections to the lens exit angle, since the ratio of lens size to screen size is large compared to the ratio of lens size to screen distance.\n\n\\begin{table}[h]\n \\centering\n {\n \\renewcommand{\\arraystretch}{1.5}\n \\begin{tabular}{c|c|c|c}\n \\textbf{Magnification} & \\textbf{Screen size} & $f(x,y)$ & \\textbf{Starting shape type}\\\\\n \\hline\n $1$ & $(0.64,0.64)$ & $\\textstyle\\frac{1}{2}$ & flat\\\\\n $3$ & $(1.92,1.92)$ & $\\textstyle\\frac{1}{2} + 8 - \\sqrt{8^2-x^2-y^2}$ & concave\\\\\n $5$ & $(3.20,3.20)$ & $\\textstyle\\frac{1}{2} + 4 - \\sqrt{4^2 - x^2 - y^2}$ & concave\n \\end{tabular}\n }\n \\caption{The screen size and control point offset function $f$ used per magnification in the TU flame and faceted ball optimizations (distances in centimeters).}\n \\label{tab:magndata}\n\\end{table}\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.7\\textwidth]{figures\/results\/Manufacturing_Loss.png}\n \\caption{Loss progress for the various magnifications and target distributions.}\n \\label{fig:ManufacturingLoss}\n\\end{figure}\n\n\\clearpage\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/results\/Manufacturing_flames.png}\n \\caption{Implementation and LightTools irradiance distributions of the TU flame target from the final lens design of the optimization.}\n \\label{fig:flamerenders}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/results\/Manufacturing_balls.png}\n \\caption{Implementation and LightTools irradiance distributions of the faceted ball target from the final lens design of the optimization.}\n \\label{fig:facetedballrenders}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/results\/Manufacturing_surfaces.png}\n \\caption{The lens designs for the 
different magnifications and two target distributions.}\n \\label{fig:ManufacturingSurfaces}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/results\/Manufacturing_rays.png}\n \\caption{$25\\times25$ traced rays through the final lens designs for the different magnifications and two target distributions.}\n \\label{fig:ManufacturingRays}\n\\end{figure}\n\n\n\n\n\\clearpage\n\\subsection{Optimization with a point source and a grid of point sources}\nWe now consider an optimization that uses the B-spline intersection algorithm. First, we design a lens with one point source at $(0,0,-5)$ with $5\\times 10^5$ rays to again form the TU flame. Then after $\\sim 200$ iterations, we change the source to an equispaced grid of $25\\times 25$ point sources with $10^3$ rays each on $[-1,1]\\times[-1,1]\\times\\{-5\\}$, approximating a source of non-negligible size. The other (hyper-)~parameters of this optimization are shown in the supplementary information. Due to the additional B-spline intersection procedures, each iteration takes approximately $50$ seconds.\nThe resulting final irradiance distribution and LightTools verifications can be seen in Fig.~\\ref{fig:point_source_renders}. The final irradiance distribution is similar to that obtained by LightTools, indicating that ray tracing with the implemented B-spline intersection algorithm works correctly. The irradiance distributions are blurred due to the reconstruction filter.\nThe single point source optimization performs well, although the illumination is less uniform than in the collimated ray bundle case (Figs.~\\ref{fig:flamerenders} and \\ref{fig:facetedballrenders}). \nThe non-uniformity can be attributed to the Gaussian reconstruction filter used during optimization, as it smooths out the small non-uniformities. 
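The equispaced grid of point sources described above can be generated straightforwardly; a plain-Python sketch (positions only, ray sampling omitted):

```python
def point_source_grid(n=25, half_width=1.0, z=-5.0):
    """Equispaced n x n grid of point-source positions on
    [-half_width, half_width]^2 x {z}, approximating an extended source."""
    step = 2.0 * half_width / (n - 1)
    return [(-half_width + i * step, -half_width + j * step, z)
            for i in range(n) for j in range(n)]

sources = point_source_grid()
print(len(sources))   # 625 point sources, each traced with 10^3 rays
print(sources[0])     # (-1.0, -1.0, -5.0)
```

With $10^3$ rays per grid point, the total ray count ($6.25\times 10^5$) stays comparable to the single point source case, while emulating a source of non-negligible size.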
\n\n\nAs seen in Fig.~\\ref{fig:point_source_renders}, the irradiance distribution obtained with a grid of point sources approximates the extended source illumination distribution quite well for the unoptimized case. \nFinding a lens design that redirects light from a source of non-negligible size into a desired irradiance distribution is a complex problem for which it is hard to indicate how good the optimal irradiance distribution can become.\nThe progress of the loss, as seen in Fig.~\\ref{fig:point_source_loss}, shows that the optimization can still improve the loss, even after the transition to the grid of point sources. Interestingly, looking at Fig.~\\ref{fig:point_source_renders} again, the optimization seems to adopt the coarse strategy of filling up the target distribution with images of the source square, as shown in Fig.~\\ref{fig:sourceimages}. This strategy does hinder the possible quality of the final irradiance distribution as the image of the source on the target is larger than the fine details in the desired irradiance. Optimizing both the front and back surfaces of the freeform could resolve this issue, as this will cause the image of the source to change shape depending on where it ends up on the target screen.\n\n\\begin{figure}\n \\centering\n \\includegraphics{figures\/results\/source_images_indication.png}\n \\caption{Indication of images of the source square in the irradiance distribution obtained by LightTools using the point source grid.}\n \\label{fig:sourceimages}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/results\/point_source_renders.png}\n \\caption{The final irradiance distribution of the lens optimizations with point sources and the corresponding LightTools verifications. 
The extended source is not implemented in our ray tracer, but is approximated by the point source grid.}\n \\label{fig:point_source_renders}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width=0.75\\textwidth]{figures\/results\/point_source_surfaces.png}\n \\caption{Height fields of the lenses optimized for the TU flame with point sources.}\n \\label{fig:point_source_surfaces}\n\\end{figure}\n\n\\begin{figure}[p]\n \\centering\n \\includegraphics[width= 0.6\\textwidth]{figures\/results\/point_source_loss.png}\n \\caption{Loss over the iterations optimizing for the TU flame. The system is initialized with a point source, and after $\\sim 200$ iterations the point source is replaced by an equispaced grid of $25\\times 25$ point sources.}\n \\label{fig:point_source_loss}\n\\end{figure}\n\n\\section{Conclusion} \\label{sec:conclusion}\n\nWe demonstrated that non-sequential differentiable ray tracing is a viable tool for designing freeform lenses for collimated ray bundles, point sources, and extended sources.\nUsing a B-spline allows for the design of a continuous surface, which is desirable for manufacturing, and its control points allow for locally altering the irradiance distribution. For both cases, collimated and point source, lens designs were found that could accurately project the desired irradiance distribution in both the differentiable ray tracer and in the commercial software LightTools. Some artifacts still exist, and resolving them will be part of further research.\n\nFor the source with a finite extent, the optimizer improved upon the design obtained for a point source. However, the final irradiance distribution was made up of images of the source, which hinders the minimum that can be obtained as the source image is larger than the details in the desired irradiance distribution. 
This issue can be resolved by optimizing multiple surfaces simultaneously, as the image of the source on the target plane can then be optimized to vary with location.\n\nUsing a neural network to remap the optimization space provides an interesting way to increase the convergence speed of the optimization. However, further investigation is required to see whether this generally holds and what the effect is on other network architectures.\n\nThe developed ray tracing implementation is currently a proof of concept and needs to be optimized for speed. The B-spline intersection algorithm, in particular, adds roughly a factor of $10$ to the computation time. A significant speedup can be achieved here by leveraging efficient lower-level GPU programming languages, such as CUDA. \n\n\n\\section{Acknowledgements}\nWe acknowledge support by the NWO-TTW Perspectief program (P15-36) ``Free-form scattering optics''.\n\n\\section{Introduction}\n\nHigh-redshift star-forming galaxies are becoming an important probe of galaxy formation, reionization and cosmology (Robertson et al. 2010; Shapley 2011). A popular method for finding high-redshift star-forming galaxies is to target their often bright Ly$\\alpha$ emission (Partridge \\& Peebles 1967). This emission can be easily detected in narrow-band imaging surveys, and can be further confirmed by spectroscopic observations (Hu et al. 1998; Ouchi et al. 2008; Yamada et al. 2012a,b). In addition to discovering numerous Ly$\\alpha$ emitters (LAEs), a particular class of objects, known as ``Ly$\\alpha$ blobs'' (LABs), has most commonly been found in the dense environments of star-forming galaxies at high redshift; these objects are very extended (30 to 200 kpc) and Ly$\\alpha$-luminous (10$^{43}$ to 10$^{44}$ erg~s$^{-1}$) (see, e.g., Francis et al. 1996; Steidel et al. 2000; Palunas et al. 2004; Matsuda et al. 2004, 2009, 2011; Dey et al. 2005; Saito et al. 
2006; Yang et al. 2009, 2010; Erb et al. 2011; Prescott et al. 2012a, 2013; Bridge et al. 2013). In contrast to the large Ly$\\alpha$ nebulae surrounding some high-redshift radio galaxies (e.g., Reuland et al. 2003; Venemans et al. 2007), these objects do not always have obvious sources of the energy responsible for their strong emission.\n\nWhile the LABs' preferential location in overdense environments indicates an association with massive galaxy formation, the origin of Ly$\\alpha$ emission in the LABs is still unclear and under debate (Faucher-Giguere et al. 2010; Cen \\& Zheng 2013; Yajima et al. 2013). Proposed sources have generally fallen into two categories: cooling radiation from cold streams of gas accreting onto galaxies (e.g., Haiman et al. 2000; Dijkstra \\& Loeb 2009; Goerdt et al. 2010) and photoionization\/recombination from starbursts or active galactic nuclei (AGNs) (e.g., Taniguchi \\& Shioya 2000; Furlanetto et al. 2005; Mori \\& Umemura 2006; Zheng et al. 2011). Supporting evidence for the cooling flow scenario comes from those LABs lacking any visible power source (e.g., Nilsson et al. 2006; Smith \\& Jarvis 2007). Ionizing photons from young stars in star-forming galaxies and\/or AGNs can ionize neutral hydrogen atoms, and the subsequent recombination gives off Ly$\\alpha$\\, emission. The resonant scattering of Ly$\\alpha$\\, photons in the circumgalactic medium makes the emission extended (Geach et al. 2005, 2009; Colbert et al. 2006, 2011; Beelen et al. 2008; Webb et al. 2009; Zheng et al. 2011; Cen \\& Zheng 2013; Overzier et al. 2013). \n\nBesides cooling flows and photoionization from star-forming galaxies and\/or AGNs, other possible mechanisms, such as galactic super-winds and obscured AGNs, have also been proposed to explain the nature of the LABs (e.g., Ohyama et al. 2003; Wilman et al. 2005; Colbert et al. 2006; Matsuda et al. 2007). 
All these sources of energy may be activated in an environment where violent interactions between gas-rich galaxies are frequent, as expected in over-dense regions at high redshift (Matsuda et al. 2009, 2011; Prescott et al. 2012b; Kubo et al. 2013).\n\nThe 110 Mpc filament with 37 LAEs related to the protocluster J2143-4423 at $z$=2.38 (Francis et al. 1996, 2004; Palunas et al. 2004) is one of the largest known structures at high redshift, and this field also includes four large extended LABs with extensions of $\\sim$ 50 kpc and above, named {B1}, {B5}, {B6}\\, and {B7}. In this paper, we present our deep radio observations and archival {\\it Herschel}\\, far-infrared (FIR) data in J2143-4423\\, to study the powering source of these LABs. Throughout this paper, we use a $\\Lambda$ cosmology with $\\rm H_{\\rm0}$ = 67.3~$\\rm \\ifmmode{{\\rm \\ts km\\ts s}^{-1}}\\else{\\ts km\\ts s$^{-1}$}\\fi\\ Mpc^{-1}$, $\\rm \\Omega_\\Lambda$ = 0.685 and $\\rm \\Omega_{\\rm m}$ = 0.315 (Planck Collaboration XVI 2013), and 1$\\arcsec$ corresponds to 8.37~kpc at $z$=2.38.\n\n\n\\section{Observations}\n\\subsection{ATCA observations}\nWe observed J2143-4423\\, with the Australia Telescope Compact Array (ATCA)\\footnote{The Australia Telescope Compact Array is part of the Australia Telescope, which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.} in its extended configuration 6A. During the observations from 2009 June 14 to 17, only five out of six antennas were available. The observations were performed at a central frequency of 1.75 GHz. We used the Compact Array Broadband Backend (Wilson et al. 2011) in a wide-band mode, with a total bandwidth of 2~GHz and a channel width of 1~MHz. The nearby source PKS~2134-470 served as a gain calibrator. Absolute fluxes were calibrated with the ATCA standard PKS~1934-638. The total observing time was about 70 hours. \n\nThe data were reduced with the MIRIAD software package. 
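As an aside, the angular scale of 8.37~kpc per arcsecond quoted in the Introduction follows directly from the adopted cosmology; a self-contained numerical check (pure-Python sketch; flat $\Lambda$CDM with the stated parameters, radiation neglected):

```python
import math

def kpc_per_arcsec(z, h0=67.3, om=0.315, n=10000):
    """Proper transverse scale at redshift z for a flat LCDM cosmology,
    via trapezoidal integration of the comoving distance."""
    c = 299792.458                      # speed of light [km/s]
    e = lambda zp: math.sqrt(om * (1.0 + zp) ** 3 + (1.0 - om))
    h = z / n
    integral = sum((1.0 / e(i * h) + 1.0 / e((i + 1) * h)) * 0.5 * h
                   for i in range(n))
    d_c = c / h0 * integral             # comoving distance [Mpc]
    d_a = d_c / (1.0 + z)               # angular diameter distance [Mpc]
    return d_a * 1000.0 * math.pi / (180.0 * 3600.0)  # kpc per arcsec

print(round(kpc_per_arcsec(2.38), 2))   # ~8.37, consistent with the text
```

Dedicated packages (e.g. astropy's cosmology module) give the same value to the quoted precision.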
Although the observations were carried out with a total bandwidth of 2~GHz, the effective bandwidth was about 489 MHz with a central frequency of 1.51~GHz. We carefully flagged the channels affected by radio frequency interference (RFI) by checking the visibility data sorted by time, channels and baselines. The image was deconvolved with the MIRIAD task MFCLEAN, and task SELFCAL was used to reduce the noise from strong radio continuum sources. We first created cleaned images following the normal procedure and made model images for the strong sources. The models were used as inputs for task SELFCAL to perform self-calibration of the visibility data. We ran this cycle three times, and then used the resulting model images to create the self-calibrated visibility data, which were used to make the final images. The noise of the images after applying self-calibration was about one order of magnitude lower than that without self-calibration. The field of view was about 31 arcmin and the synthesized beam size was 7.8$\\arcsec$$\\times$4.8$\\arcsec$. The noise was about 15~$\\mu$Jy\/beam before applying primary beam correction.\n\n\\begin{figure*}[t]\n\\vspace{-0.0cm}\n\\centering\n\\includegraphics[angle=0,width=1.0\\textwidth]{fig1.pdf}\n\\vspace{-0.0cm}\n\\caption{ATCA 20~cm, {\\it Spitzer} MIPS 24$\\mu$m and {\\it Herschel}\\, PACS and SPIRE data for the four Ly$\\alpha$\\, blobs (LABs) in J2143-4423. {\\bf a) }Contours and gray scale maps of ATCA radio emission. The contours are -2, 2, 3, 4, 5 and 6~$\\times$ 15~$\\mu$Jy (1~$\\sigma$), with a synthesized beam of 7.8$\\arcsec$$\\times$4.8$\\arcsec$, which is shown in the lower left corner of each panel. {\\bf b) }Gray maps of {\\it Spitzer} MIPS 24~$\\mu$m emission (Colbert et al. 2006). {\\bf c-g) }Contours and gray scale maps of {\\it Herschel}\\, FIR emission. 
The contours are\n-2$\\sigma$, 2$\\sigma$, 3$\\sigma$, 4$\\sigma$, 5$\\sigma$ and 6$\\sigma$ (see\n$\\S$~\\ref{obsher} for the noise level of each band). A circle with a diameter\nof 40$\\arcsec$ is shown in each panel. The circles in\n{B7}\\, are on an off-center position (5$\\arcsec$, 0$\\arcsec$) to cover most FIR\nemission. All sources are centered on the positions of the four LABs (see\nColbert et al. 2006) as shown with plus signs in each panel. All offsets are\nrelative to the positions of the LABs.}\n\\label{map} \n\\end{figure*}\n\n\\subsection{Archival {\\it Herschel}\\, observations}\\label{obsher}\n{\\it Herschel}\\, observations towards J2143-4423\\, were carried out with PACS (Poglitsch et\nal. 2010) at 100 and 160~$\\mu$m and SPIRE (Griffin et al. 2010) at 250, 350 and\n500~$\\mu$m in 2010 to 2011. J2143-4423\\, was imaged in a field size of\n15$^\\prime$$\\times$15$^\\prime$ for each band, and the observing time was\n$\\sim$2.9 hours for PACS ({\\it Herschel}\\, OD: 686) and $\\sim$0.6 hours for SPIRE ({\\it Herschel}\\,\nOD: 558). The level 2.5 product for PACS and the level 2 product for SPIRE from\nthe pipeline procedures are used for our data analysis. Source photometry\nis carried out using DAOphot algorithm in the Herschel Interactive Processing\nEnvironment (HIPE). We apply beam correction, colour correction, aperture\ncorrection for a spectral index of $-$2 and adopt a flux calibration error of 5\\%\nat PACS bands and 7\\% at SPIRE bands as recommended in the PACS and SPIRE\nObserver's Manual. The full width at half power (FWHP) beam sizes are\n6.8$\\arcsec$ at 100~$\\mu$m, 11.4$\\arcsec$ at 160~$\\mu$m, 17.6$\\arcsec$ at\n250~$\\mu$m, 23.9$\\arcsec$ at 350~$\\mu$m and 35.2$\\arcsec$ at 500~$\\mu$m,\nrespectively.\n\n\n\\section{Results}\n\\subsection{Radio emission from ATCA observations}\nIn Fig.~\\ref{map}(a) we present the radio continuum emission images at 20~cm\nfrom the ATCA. 
Among the four LABs, {B6}\\, and {B7}\\, are detected with fluxes\nof 67\\ppm17~$\\mu$Jy and 77\\ppm16~$\\mu$Jy, respectively, and {B5}\\, is marginally\ndetected at 3~$\\sigma$ (51\\ppm16~$\\mu$Jy). For all detected sources, their\npositions are consistent with the central positions of the LABs. Only {B1}\\, is\nnot detected by the observations. \n\n\\subsection{FIR emission from {\\it Herschel}\\, observations}\nAll four LABs are observed with {\\it Herschel}\\, PACS at 100 and 160~$\\mu$m and SPIRE at\n250, 350 and 500~$\\mu$m, and the images are shown in Fig.~\\ref{map}(c-g). The\nobserved flux densities are calculated for the areas within the blue circles as\nshown in Fig.~\\ref{map} and are listed in Table~\\ref{tab1}. {B1}\\, is not\ndetected but contaminated by a nearby strong source about 20$\\arcsec$ in the\nnorth-west, which is the background QSO LBQS2138-4427 at $z$\\,=\\,3.2 (Francis \\&\nHewett 1993), and its emission features at different FIR bands appear to reach\nout to {B1}\\, from this location. 
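As an aside on the error budget: $\S$~\ref{obsher} quotes flux calibration errors of 5\% for PACS and 7\% for SPIRE. A common way to fold these into the statistical photometric errors is a quadrature sum, sketched below. The combination rule and the example numbers are illustrative assumptions, not a statement of how the uncertainties in Table~\ref{tab1} were actually derived.

```python
import math

# Fractional calibration errors quoted in the text. The quadrature
# combination below is a common convention, assumed for illustration.
CAL_ERR = {"PACS": 0.05, "SPIRE": 0.07}

def total_flux_error(flux_mjy, stat_err_mjy, instrument):
    """Combine statistical and calibration uncertainties in quadrature."""
    return math.hypot(stat_err_mjy, CAL_ERR[instrument] * flux_mjy)

# e.g. a 50 mJy SPIRE flux with a 9 mJy statistical error
print(round(total_flux_error(50.0, 9.0, "SPIRE"), 1))  # -> 9.7
```

For the bright sources here the statistical error dominates, so the calibration term changes the total uncertainty only marginally.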
There is no FIR counterpart for {B5}\\, in any\n{\\it Herschel}\\, band.\n\n\\begin{center}\n\\begin{table*}[t]\n\\centering\n\\caption{Observational and derived parameters towards the four LABs$^a$}\\label{tab1}\n\\begin{tabular}{ccccccccc}\n\\hline\nSource & 20~cm$^{b}$ & 100~$\\mu$m & 160~$\\mu$m & 250~$\\mu$m & 350~$\\mu$m & 500~$\\mu$m \n& \\ifmmode{L_{\\rm FIR}}\\else{$L_{\\rm FIR}$}\\fi$^c$ & M$\\rm_{dust}$\\\\ \n & [$\\mu$Jy] & [mJy] & [mJy] & [mJy] & [mJy] & [mJy] & [10$^{12}$\\ifmmode{L_{\\odot}}\\else{$L_{\\odot}$}\\fi] & [$10^8$\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi] \\\\\n\\hline\nB1 & $<$51 & $<$4.2 & $<$9.0 & $<$17.9 & $<$19.6 & $<$22.5 & $<$2.8 & \\\\\nB5 & 51\\ppm16 & $<$2.1 & $<$11.1 & $<$17.5 & $<$18.7 & $<$19.8 & $<$2.5 & \\\\\nB6 & 67\\ppm17 & 13.2\\ppm3.2 & 53.9\\ppm8.0 & 49.7\\ppm9.0 & 53.7\\ppm10.7 & 36.7\\ppm10.3 & 10.0\\ppm1.9 & 3.2\\ppm0.6 \\\\\nB7 & 77\\ppm16 & 12.9\\ppm4.0 & 33.5\\ppm10.0 & 41.6\\ppm7.8 & 48.0\\ppm10.6 & 39.2\\ppm8.6 & 8.6\\ppm2.3 & 5.0\\ppm1.0\\\\\n\\hline\n\\end{tabular}\n\\begin{list}{}{}\n\\item{$^{\\mathrm{a}}$ The wavelengths shown in this table are the redshifted values.}\n\\item{$^{\\mathrm{b}}$ Measured fluxes have been modified by a primary beam correction (less than 15\\%).}\n\\item{$^{\\mathrm{c}}$ The total luminosities are calculated between rest frame\nwavelengths of 40~$\\mu$m to 200~$\\mu$m from the dust models (see\n$\\S$~\\ref{dust} for details). The 3~$\\sigma$ upper limits are given for undetected sources.} \n\\end{list}\n\\end{table*}\n\\end{center}\n\n\\subsection{Redshifts of the FIR sources}\\label{red}\nTo estimate the redshift of the FIR sources associated with {B6}\\, and {B7}, we\ntry to fit the data with the SEDs of different templates (Polletta et al.\n2007) at different redshifts and find that the starburst templates can well\nreproduce the data. 
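The template-fitting idea just described (shift a template through a grid of trial redshifts, refit its normalisation at each step, and record the minimum $\chi^2$) can be sketched as follows. A simple greybody is used as a stand-in template and the photometry is mock data; both are assumptions for illustration, since the actual analysis uses the empirical Polletta et al. (2007) templates.

```python
import numpy as np

def greybody(nu_rest, T=45.0, beta=1.5):
    """Stand-in dust template (arbitrary normalisation); the real
    analysis uses empirical starburst templates instead."""
    h, k = 6.626e-34, 1.381e-23
    return nu_rest ** (3 + beta) / (np.exp(h * nu_rest / (k * T)) - 1.0)

def chi2_vs_z(nu_obs, flux, err, zgrid):
    """Minimum chi^2 at each trial redshift; the template scale is
    fitted analytically by weighted least squares."""
    out = []
    for z in zgrid:
        model = greybody(nu_obs * (1.0 + z))
        w = 1.0 / err ** 2
        scale = np.sum(w * flux * model) / np.sum(w * model ** 2)
        out.append(np.sum(w * (flux - scale * model) ** 2))
    return np.array(out)

# Mock photometry in the five Herschel bands for a source at z = 2.4
lam_um = np.array([100.0, 160.0, 250.0, 350.0, 500.0])
nu_obs = 3e8 / (lam_um * 1e-6)
mock = greybody(nu_obs * (1.0 + 2.4))
flux = mock / mock.max()
err = np.full_like(flux, 0.1)
zgrid = np.arange(0.5, 4.01, 0.05)
zbest = zgrid[np.argmin(chi2_vs_z(nu_obs, flux, err, zgrid))]
print(round(zbest, 2))  # -> 2.4
```

For noiseless mock data the scan recovers the input redshift exactly; with real photometry the $\chi^2$ curve is broad, which is why the derived redshifts carry uncertainties of several tenths in $z$.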
With the observational data and the SEDs of the templates,\nthe minimum reduced $\\chi^2$ value for each redshift can be calculated and the\ncorresponding probability can be estimated. In this analysis, we include\nthe five {\\it Herschel}\\, bands, the APEX 870~$\\mu$m data (Beelen et al. 2008), and the {\\it Spitzer}\nMIPS 24~$\\mu$m data (Colbert et al. 2006).\n\nAmong four typical templates, Arp~220, M~82, Mrk~231 and NGC~6240, we find\nthat the spectral energy distributions of the starburst galaxies NGC~6240 and Arp~220\nfit the data best, whereas Mrk~231 fits poorly because the warm IR emission\nfrom its AGN is inconsistent with the data.\nFig.~\\ref{redshift} shows the probability distribution\nagainst redshift for both LABs. The estimated redshifts are\n2.20$^{+0.30}_{-0.35}$ for {B6}\\, and 2.20$^{+0.45}_{-0.30}$ for {B7},\nrespectively. Considering the uncertainty of this redshift determination\nmethod, both values are consistent with the Ly$\\alpha$\\, redshift of 2.38 of the\nLABs. Based on the number count study of {\\it Herschel}\\, sources in Clements et al.\n(2010), the probability of finding a 350 $\\mu$m source with a flux greater than 40 mJy\nwithin 20 arcsec is 2\\%. Given\nsuch a low number density of strong FIR sources and their positional coincidence\nwith the LABs, the FIR sources are very likely associated\nwith the LABs. Nevertheless, future spectroscopic observations of molecular\nlines at millimeter wavelengths or forbidden lines in the near-infrared will be\nimportant to confirm this association. In the following sections, the Ly$\\alpha$\\, redshift of 2.38\nwill be adopted for the LABs.\n\n\n\\begin{figure*}[t]\n\\vspace{-0.0cm}\n\\centering\n\\includegraphics[angle=0,width=1.0\\textwidth]{fig2.pdf}\n\\vspace{-0.0cm}\n\\caption{Probability as a function of redshift for {B6}\\, and {B7}. 
NGC~6240 and\nArp~220 are adopted as the most appropriate starburst templates for B6 and B7, respectively.\nA red vertical line denotes a redshift of 2.38 for\nLy$\\alpha$\\, emission.}\n\\label{redshift} \n\\end{figure*}\n\n\\subsection{Dust properties}\\label{dust}\nFor {B6}\\,and {B7}, we have included the measurements from the five {\\it Herschel}\\, bands\nas well as the 870~$\\mu$m data taken from Beelen et al. (2008)\nin the dust continuum analysis using a single-component dust model as described in\nWei\\ss\\ et al. (2007). \n{\\it Spitzer} MIPS 24~$\\mu$m data (Colbert et al. 2006) are not used in the\nmodel fitting because they are strongly affected by PAH features, but are shown\nin Fig.~\\ref{sed} to allow for a better comparison with overlaid templates.\nWe find a dust temperature, $T\\rm_{dust}$, of\n70\\ppm5 K and a dust mass, M$\\rm_{dust}$, of (3.2\\ppm0.8)$\\times$10$^8$ M$\\rm\n_\\odot$ for {B6}, and $T\\rm_{dust}$\\,=\\,70\\ppm5 K and\nM$\\rm_{dust}$\\,=\\,(5.0\\ppm1.0)$\\times$10$^8$ M$\\rm _\\odot$ for {B7}, respectively. \nThe implied FIR luminosities are \\ifmmode{L_{\\rm FIR}}\\else{$L_{\\rm FIR}$}\\fi\\, = (10.0\\ppm1.9)$\\times$10$^{12}$\\,\\ifmmode{L_{\\odot}}\\else{$L_{\\odot}$}\\fi\\,\nfor {B6}, and \\ifmmode{L_{\\rm FIR}}\\else{$L_{\\rm FIR}$}\\fi\\, = (8.6\\ppm2.3)$\\times$10$^{12}$\\,\\ifmmode{L_{\\odot}}\\else{$L_{\\odot}$}\\fi\\, for {B7},\nrespectively, where \\ifmmode{L_{\\rm FIR}}\\else{$L_{\\rm FIR}$}\\fi\\, is integrated from 40~$\\mu$m to 200~$\\mu$m in the\nrest frame. The upper \\ifmmode{L_{\\rm FIR}}\\else{$L_{\\rm FIR}$}\\fi\\,limits for both {B1}\\, and {B5}\\, are\n$\\sim$2.5$-$2.8$\\times$10$^{12}$\\,\\ifmmode{L_{\\odot}}\\else{$L_{\\odot}$}\\fi.\n\n\n\\begin{figure*}[t]\n\\vspace{-0.0cm}\n\\centering\n\\includegraphics[angle=0,width=1.0\\textwidth]{fig3.pdf}\n\\vspace{-0.0cm}\n\\caption{Single-component dust models for B6 and B7 (a redshift of 2.38 is\nadopted). 
The black solid lines show the thermal dust continuum emission\nof the 70~K dust components for both {B6} and {B7}. The open circles represent\nthe measurements at five {\\it Herschel}\\, bands in this paper and the filled circles\nindicate the flux densities at 24~$\\mu$m (Colbert et al. 2006). The filled\nsquare denotes the flux density (or its upper limit) at 870~$\\mu$m, taken from\nBeelen et al. (2008). The wavelengths at the rest frame are labelled on the\ntop. For the single-component dust models adopted in the figure (see\n$\\S~\\ref{dust}$ for details of the dust models.), the $\\chi^2$ values are 1.1\nfor {B6}\\, and 1.0 for {B7}, respectively. In $\\S~\\ref{red}$, four typical\nstarburst templates, NGC~6240, M~82, Mrk~231 and Arp~220 (Polletta et al.\n2007), are adopted to estimate the redshifts for {B6}\\, and {B7}, and their best\nfits are overlaid in colored lines.\n}\n\\label{sed} \n\\end{figure*}\n\n\\subsection{Star formation rates}\nHere we derive the star formation rates from the Ly$\\alpha$, far-infrared and\nradio luminosities. To estimate the star formation rate (SFR) from the Ly$\\alpha$\\,\nluminosity, we first assume that star formation (SF) powers the observed\nLy$\\alpha$\\, flux. We use an unreddened Ly$\\alpha$\/H$\\alpha$ ratio of 8:1 and the\nconversion factor between H$\\alpha$ luminosity and SFR (Kennicutt 1998),\nyielding SFR($\\rm{Ly\\alpha}$)\/(\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi\/yr)\\,=\\,$L_{\\rm Ly\\alpha}$\/(10$^{42}$ erg\ns$^{-1}$). This provides a lower limit because the extinction of Ly$\\alpha$\\, emission\ncaused by dust will largely reduce the observed Ly$\\alpha$\\, luminosity. With the FIR\nluminosity derived from {\\it Herschel}\\,data, we can estimate the SFR by using the\nrelation SFR($L_{\\rm FIR}$)\/(\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi\/yr)\\,=\\,$1.7\\times$$L_{\\rm FIR}$\/(10$^{10}$\n\\ifmmode{L_{\\odot}}\\else{$L_{\\odot}$}\\fi) (Kennicutt 1998). 
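The two conversions just quoted are simple enough to evaluate directly; the short sketch below reproduces the Ly$\alpha$- and FIR-based SFRs for {B6}\, from the luminosities listed in Tables~\ref{tab1} and \ref{tab2}.

```python
# SFR calibrations quoted in the text (Kennicutt 1998):
#   SFR(Lya)/(Msun/yr) = L_Lya / 1e42 erg/s   (via Lya/Ha = 8)
#   SFR(FIR)/(Msun/yr) = 1.7 * L_FIR / 1e10 Lsun

def sfr_lya(log_L_lya_erg_s):
    """SFR from the Ly-alpha luminosity, given as log10(L / erg s^-1)."""
    return 10.0 ** log_L_lya_erg_s / 1e42

def sfr_fir(L_fir_1e12_Lsun):
    """SFR from the FIR luminosity, given in units of 1e12 Lsun."""
    return 1.7 * (L_fir_1e12_Lsun * 1e12) / 1e10

# B6: log L_Lya = 43.8 and L_FIR = 10.0e12 Lsun
print(round(sfr_lya(43.8)), round(sfr_fir(10.0)))  # -> 63 1700
```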
If the observed radio emission, with a rest wavelength\nof 6~cm, is dominated by free-free emission in H{\\small II} regions, one can\nalso estimate the SFR via the relation SFR($L_{\\rm\n1.4~GHz}$)\/(\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi\/yr)\\,=\\,5.52$\\times$10$^{-22}$~$L_{\\rm 1.4~GHz}$\/(W Hz$^{-1}$)\n(Bell 2003). The rest-frame radio luminosity at 1.4~GHz can be estimated\nfrom the observed flux at 1.51 GHz by assuming a relation\n$S \\propto \\nu^\\alpha$, where $S$ is the flux density; a typical spectral\nindex $\\alpha$ of $-$0.8, as commonly adopted for SMGs, is assumed (e.g., Ivison et al.\n2010). These values are listed in Table~\\ref{tab2}.\n\n\n\\begin{center}\n\\begin{table}[h]\n\\centering\n\\caption{Derived star formation rates towards the four LABs}\\label{tab2}\n\\begin{tabular}{ccccc}\n\\hline\nSource & SFR(${L_{\\rm FIR}}$) & SFR($L_{\\rm 1.4GHz}$) & log $L_{\\rm Ly\\alpha}$$^a$ & SFR($\\rm{Ly\\alpha}$) \\\\ \n & [\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi\/yr] & [\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi\/yr] & [ergs~s$^{-1}$] & [\\ifmmode{{\\rm M}_{\\odot} }\\else{M$_{\\odot}$}\\fi\/yr] \\\\\n\\hline\nB1 & $<$480 & $<$1090 & 43.9 & 79 \\\\\nB5 & $<$430 & 1090\\ppm340 & 43.8 & 63 \\\\\nB6 & 1700\\ppm320 & 1430\\ppm360 & 43.8 & 63 \\\\\nB7 & 1460\\ppm390 & 1650\\ppm340 & 43.5 & 32 \\\\\n\\hline\n\\end{tabular}\n\\begin{list}{}{}\n\\item{$^{\\mathrm{a}}$ The Ly$\\alpha$\\,luminosities are adopted from Colbert et al. (2006).}\n\\end{list}\n\\end{table}\n\\end{center}\n\n\n\n\\section{Discussion and Conclusions}\n\nThe high detection rate of radio emission (three out of four LABs)\nsuggests that most LABs do not originate from cooling radiation. Instead,\nphotoionization from starbursts and\/or AGNs may power the LABs in most cases.\nThe high rate of FIR detections (two out of four) points to a star-formation\norigin of the LABs. 
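The radio-based SFR involves one extra step, namely K-correcting the observed 1.51 GHz flux density to a rest-frame 1.4 GHz luminosity. The sketch below follows the relations stated in the text ($S \propto \nu^{\alpha}$ with $\alpha=-0.8$, the Bell 2003 calibration, and the adopted Planck cosmology); the simple trapezoidal integration for the luminosity distance is an implementation choice, not taken from the paper.

```python
import math

H0, OM, C_KMS = 67.3, 0.315, 299792.458   # adopted flat cosmology
Z, ALPHA, NU_OBS = 2.38, -0.8, 1.51e9     # redshift, spectral index, Hz

def lum_dist_mpc(z, n=20000):
    """Luminosity distance via trapezoidal integration of 1/E(z)."""
    E = lambda zz: math.sqrt(OM * (1 + zz) ** 3 + (1 - OM))
    dz = z / n
    dc = sum(0.5 * (1 / E(i * dz) + 1 / E((i + 1) * dz)) * dz
             for i in range(n))
    return (1 + z) * (C_KMS / H0) * dc

def sfr_radio(S_uJy):
    """SFR from the observed 1.51 GHz flux density in micro-Jy."""
    dl_m = lum_dist_mpc(Z) * 3.0857e22                # Mpc -> m
    S = S_uJy * 1e-32                                 # uJy -> W m^-2 Hz^-1
    # K-correction: luminosity at rest-frame 1.4 GHz for S ~ nu^alpha
    L = (4 * math.pi * dl_m ** 2 * S / (1 + Z)
         * (1.4e9 / (NU_OBS * (1 + Z))) ** ALPHA)
    return 5.52e-22 * L                               # Bell (2003)

print(round(sfr_radio(67)))  # close to the ~1430 Msun/yr listed for B6
```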
\nThe SEDs of {B6}\\, and {B7}\\, can be well described by starburst dominated\ntemplates, as shown in Fig.~\\ref{sed}, further supporting Ly$\\alpha$\\, emission\nrelated to the SF in the LABs. In {B6}\\, and {B7}, the SFRs derived from Ly$\\alpha$\\,\nfluxes are far below those estimated from FIR luminosities (Table~\\ref{tab2}).\nThis suggests that the dust indeed greatly reduces the measured Ly$\\alpha$\\, flux.\nComparing the different SFRs, the dust absorption optical depth of the Ly$\\alpha$\\,\nemission becomes $\\sim$3.1$-$3.6. The SFRs estimated from the FIR and radio\nluminosities are comparable, indicating that the radio emission is dominated by\nSF, not by AGNs. The energetic starbursts can provide enough ionizing photons\nto ionize neutral hydrogen atoms in the interstellar medium (ISM), and each\nsubsequent recombination has a probability of $\\sim$ 2\/3 of ending up as a\nLy$\\alpha$ photon (Partridge \\& Peebles 1967). After escaping the galaxy's\nISM, these Ly$\\alpha$ photons can be resonantly scattered by neutral hydrogen\natoms in the intergalactic medium (IGM), which tends to make the Ly$\\alpha$\nemission extended (Zheng et al. 2011). \n\nCen \\& Zheng (2013) propose an SF-based model and predict that LABs at high\nredshift correspond to protoclusters containing the most massive galaxies and\ncluster halos in the early universe as well as ubiquitous strong infrared\nsources undergoing extreme starbursts. \nThis may be supported by the multiple Spitzer\/MIPS sources detected in both\nLABs (see Fig~\\ref{map}(b), Colbert et al. 2006, 2011). Indeed, Prescott et\nal. (2012b) suggest that LABs may be the seeds of galaxy clusters by resolving\nthe galaxies within a LAB at $z$\\,=\\,2.7. The strong FIR emission and the\ninferred high SFRs support the presence of a strong starburst in both {B6}\\, and\n{B7}. 
However, AGN-dominated templates like Mrk~231 cannot reproduce\nthe data well (see $\\S$~\\ref{red}), suggesting that SF rather than an AGN may power\nthe Ly$\\alpha$\\, emission in both LABs. The model also predicts that the most\nluminous FIR source in each LAB likely represents the gravitational center\nof the protocluster. Fig.~\\ref{map}(c-g) shows that the FIR emission indeed\npeaks in the centers of {B6}\\, and {B7}. The radio continuum emission is detected\nexclusively in the centers, which suggests that the source with the most luminous\nFIR emission (and therefore the highest SFR) lies at the gravitational center of each\nLAB. Another important prediction of this model is that the Ly$\\alpha$\\, photons\nthat escape the galaxy are expected to be significantly polarized,\nwhich was confirmed for the first time towards LAB1 in the SSA22 field by\nHayes et al. (2011), supporting models with central power sources. Adopting a\ngas-to-dust mass ratio of 150 and the SFRs estimated above, the gas depletion timescales of\n{B6}\\, and {B7}\\, are relatively short ($\\sim$100 Myr), which is much shorter\nthan the galaxy building timescale. Note that this timescale is a lower limit\nbecause (1) the LABs may already have existed for some time, and (2) additional\ngas may be continuously accreted. In any case, the LABs are visible only during\na short interval in the lifetime of their parent clusters.\n\nNote that the so-called ``SF-based model'' proposed by Cen \\& Zheng (2013) also\nincludes AGN powering or any central powering. The morphologies of the\nLy$\\alpha$\\,emission of the four LABs are quite different (Palunas et al. 2004):\n{B1}\\,and {B5}\\, have core-like structures, while {B6}\\,and {B7}\\, are\ncharacterized by diffuse and extended emission with physical sizes of\n$\\sim$60-70~kpc. The latter may be driven by multiple sources as suggested\nby the MIPS data and are consistent with the SF-based model. 
There is no\nclear FIR emission detected around {B1}\\,and {B5}. Therefore, the Ly$\\alpha$\\, emission\nin both LABs is unlikely to be predominantly powered by SF. Overzier et al. (2013)\nconclude that in {B1}\\, the photoionization from an AGN is the main driver of\nLy$\\alpha$\\, emission. However, Francis et al. (2013) show that the observed Ly$\\alpha$\\,\nemission in {B1}\\, is of complex origin, dominated by the sum of the emission\nfrom the sub-haloes where the cold gas is being lit up, most likely by a\ncombination of tidally triggered star formation, bow shocks, resonant\nscattering of Ly$\\alpha$\\, from the filament collisions and tidal stripping of the\ngas. In {B5}\\, radio emission is tentatively detected, and therefore an AGN may\nalso power the Ly$\\alpha$\\, emission. Among the four LABs in J2143-4423, two of them, {B6}\\,\nand {B7}, are mainly driven by SF. However, the other two LABs, {B1}\\, and {B5},\nwithout clear FIR detection, are likely predominantly driven by AGNs or other\nenergy sources still to be identified, rather than mainly by star formation. We\nthus conclude that LABs must be powered by quite diverse sources of energy.\n\nWith its high angular resolution and superb sensitivity, the Atacama Large\nMillimeter\/submillimeter Array (ALMA) will reveal more details about\nthe nature of LABs, such as testing the predictions of models in which the\nionization is provided by intense star formation and confirming the\nsignificantly polarized dust emission at mm\/submm wavelengths.\n\n\\begin{acknowledgements}\nWe thank the anonymous referee for valuable comments that improved this manuscript. \nY.A. acknowledges partial support by NSFC grant 11373007 and Youth Innovation Promotion Association CAS.\nR.C. is supported in part by NASA grant NNX11AI23G.\nY.M. acknowledges support from JSPS KAKENHI Grant Number 20647268. 
\nZZ was partially supported by NSF grant AST-1208891 and NASA grant NNX14AC89G.\nThis research has made use of NASA's Astrophysical Data System (ADS).\n\nPACS has been developed by a consortium of institutes led by MPE (Germany) and\nincluding UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France);\nMPIA (Germany); INAF-IFSI\/OAA\/OAP\/OAT, LENS, SISSA (Italy); IAC (Spain). This\ndevelopment has been supported by the funding agencies BMVIT (Austria),\nESA-PRODEX (Belgium), CEA\/CNES (France), DLR (Germany), ASI\/INAF (Italy), and\nCICYT\/MCYT (Spain).\n\nSPIRE has been developed by a consortium of institutes led by Cardiff\nUniversity (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM\n(France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory\n(Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and\nCaltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported\nby national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS\n(France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC, UKSA (UK); and NASA\n(USA).\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{PACS and its view}\nThe PACS spectrometer (\\citeauthor{pog2010}~\\citeyear{pog2010}) on-board the Herschel Space Telescope (\\citeauthor{pil2010}~\\citeyear{pil2010}), covers the wavelength range between $50\\ \\mu$m and $200\\ \\mu$m at a resolution of typically 1000. In comparison to the ISO LWS instrument, PACS offers a higher resolution and a better sensitivity. 
With PACS, we can observe a wide range of molecular emission lines in the innermost regions of circumstellar envelopes (CSE) of AGB stars, including CO and H$_2$O, which are major coolants and thus important for determining the thermodynamical structure of these environments.\n\n\\section{Methodology and preliminary results}\nKinematical, thermodynamical and chemical information about the circumstellar shell is provided by molecular emission lines and dust features. This information is derived through the use of two radiative transfer codes. The non-LTE line radiative transfer code, \\emph{GASTRoNOoM} (\\citeauthor{dec2006}~\\citeyear{dec2006}), calculates the velocity, temperature and density profiles of the gas envelope, the level populations of the molecules accounted for and the emergent line profiles for the different transitions of each molecule. The continuum radiative transfer code, \\emph{MCMax} (\\citeauthor{min2009a}~\\citeyear{min2009a}), calculates the temperature structure of the dust envelope and the final SED. In order to get a full understanding of the entire envelope around an AGB source, both modelling approaches need to be used while maintaining consistency between dust and gas (see Lombaert et al., in prep).\n\nA consistent treatment of both the gas and dust components is important because of the high sensitivity of the water emission models to the dust-to-gas ratio. Modelling has shown that CO emission lines do not share this sensitivity. This behaviour is shown in Figure~1, which gives an excerpt of the PACS data of V669 Cas (full black), overlaid with two models differing only in the dust-to-gas ratio. This indicates that CO can be safely used to determine the gas temperature profile and the mass loss rate and that the water abundance profile can then be derived if the dust-to-gas ratio is constrained. Therefore, we suggest determining the dust-to-gas ratio empirically from both gas emission (i.e. CO lines) and SED (i.e. 
dust continuum) modelling, with a consistent iterative treatment of both gas and dust. Consequently, one can improve the constraints on the water abundance profile.\n\n\\begin{figure}\\centering\n\\includegraphics[height=5.7cm]{lombaert_fig1.ps}\n\\caption{An excerpt of the PACS spectrum of V669 Cas (full black). Two models are overplotted, as well as the molecular transitions for CO, $^{13}$CO, o-H$_2$O, p-H$_2$O, o-H$_2^{18}$O, p-H$_2^{18}$O and SiO at their expected frequencies. Model 1 (full gray) has a dust-to-gas ratio $\\psi = 0.001$, whereas Model 2 (dashed) has $\\psi = 0.005$.}\n\\end{figure}\n\\acknowledgements\nRL acknowledges support from the KULeuven under grant number GOA-B6995, BdV and LD from the Fund for Scientific Research of Flanders (FWO), EDB from FWO under grant number G.0470.07 and JB and PR from the Belgian Federal Science Policy Office via the PRODEX Programme of ESA.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\vspace{-0.3em}\n\n\n Ultra-reliable and low-latency communications (URLLC) have attracted a lot of attention in 5G and the upcoming 6G for mission-critical services\\cite{3GPPRelease16,Mahyar2019ShortCode,popovski2019wireless}. URLLC is renowned by its requirements for significantly lower latency compared to 4G long term evolution (4G-LTE) and high transmission reliability requiring the block error rate (BLER) lower than $10^{-4}$. These stringent requirements necessitate the use of low-complexity decoders and short block-length codes ($\\leq 150$ bits) in coding scheme design \\cite{R1-1608770}. Additionally, URLLC requires bit-level granularity in the block lengths and coding rates, to accommodate scenarios with varying latency and bandwidth constraints \\cite{Mahyar2019ShortCode}, which complicates the coding system further.\n \n Main candidates of short block-length codes for URLLC have been thoroughly reviewed in \\cite{Mahyar2019ShortCode,liva2016codeSurvey}. 
It has been identified that short Bose-Chaudhuri-Hocquenghem (BCH) codes and cyclic-redundancy-check-aided Polar (CRC-Polar) codes have superior BLER performance that is close to the normal approximation (NA) bound\\cite{erseghe2016coding}. However, it is challenging for BCH and CRC-Polar to provide bit-level granularity with the optimal error-correction capability, as they are originally available at certain block-lengths and rates \\cite{lin2004ECC,Designofpolar}, while the best known linear codes of different lengths and rates have different structures \\cite{Grassl:codetables}. Recently, \\cite{papadopoulou2021short} demonstrated that when using universal decoders, high-performance rate-compatible short codes can also be conveniently constructed by selecting good random codes. Therefore, universal decoders are favored in URLLC, as they can decode any linear block codes, including BCH and CRC-Polar codes, as well as codes that do not adhere to specific code structures.\n \n Ordered-statistics decoding (OSD) \\cite{Fossorier1995OSD} is a near-maximum-likelihood (near-ML) universal decoder that rekindles interests recently. It can decode any linear block code with near-ML BLER performance. OSD has two main phases, namely, \\textit{preprocessing} and \\textit{reprocessing}. In preprocessing, it sorts the received codeword bits according to their reliabilities (by which high reliable bits are distinguished), and permutes the columns of the code generator matrix accordingly. The permuted matrix is then transformed into the systematic form by Gaussian elimination (GE), where the information set is associated with high reliable bits. In reprocessing, a number of test error patterns (TEPs) are attempted to flip the high reliable bits. The remaining low reliable bits are recovered by \\textit{re-encoding} the flipped high reliable bits with the systematic permuted generator matrix.\n \n Let $C$ be the notation of complexity. 
In general, the average complexity of OSD is roughly characterized as\n \\begin{equation} \\label{equ::CmpOSD}\n C_{\\mathrm{OSD}} = C_{\\mathrm{Preprocessing}} + N_a C_{\\mathrm{Re-encoding}},\n \\end{equation}\n where $N_a$ is the average number of TEPs attempted in single decoding. Recent years have seen many works towards reducing $C_{\\mathrm{OSD}}$ by reducing the number $N_a$ \\cite{yue2021probability,Chentao2019SDD,NewOSD-5GNR,yue2021linear,Wu2007OSDMRB,FossorierBoxandMatch,improvedTEPwithMean,choi2019fast, jin2006probabilisticConditions,WJin2007MultipleBiases}. For example, \\cite{yue2021probability,Chentao2019SDD,Wu2007OSDMRB,choi2019fast} proposed techniques to identify unpromising TEPs that can be discarded without processing, and \\cite{yue2021probability,Wu2007OSDMRB, jin2006probabilisticConditions} designed approaches to terminate OSD early rather than processing all possible TEPs. These approaches can usually reduce $N_a$ to a very low level at high signal-to-noise ratios (SNRs). In this situation, $C_{\\mathrm{OSD}}$ will be dominated by preprocessing rather than reprocessing. Specifically, GE has the complexity as high as $O(nk^2)$ for a $k\\times n$ generator matrix \\cite{Fossorier1995OSD}, where $O(\\cdot)$ is the big-O operand. As a result, when $N_a$ is not large, GE introduces a \\textit{complexity floor} to OSD. That is, a complexity component hardly reducible. This complexity floor hinders the application of OSD in URLLC, especially for scenarios operating at high SNRs. Recently, \\cite{choi2021fast} proposed using multiple offline produced generator matrices to replace GE in OSD. Nevertheless, this approach could introduce extra overheads at low-to-moderate SNRs, as it performs several reprocessings over multiple generator matrices.\n \n In this paper, we design an OSD decoder with adaptive GE reduction. Specifically, the decoder will skip GE if doing so is still very likely to produce the correct decoding result. 
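For concreteness, the GE step whose $O(nk^2)$ cost motivates this work can be sketched as below. This is a minimal GF(2) reduction of a generator matrix to systematic form, including the column swaps that realise the extra permutation when a pivot is missing; it is an illustration only, not the implementation used in any of the decoders cited above.

```python
import numpy as np

def ge_systematic(G):
    """Reduce a k x n binary generator matrix to systematic form
    [I_k | P] over GF(2); returns the result and the column permutation
    applied when a pivot must be fetched from the right. Cost: O(n k^2)."""
    G = G.copy() % 2
    k, n = G.shape
    perm = np.arange(n)
    for i in range(k):
        if not G[i:, i].any():           # no pivot: swap in a later column
            j = i + 1 + int(np.nonzero(G[i:, i + 1:].any(axis=0))[0][0])
            G[:, [i, j]] = G[:, [j, i]]
            perm[[i, j]] = perm[[j, i]]
        t = i + int(np.argmax(G[i:, i])) # first row with a 1 in column i
        G[[i, t]] = G[[t, i]]
        for u in range(k):               # clear column i in all other rows
            if u != i and G[u, i]:
                G[u] ^= G[i]
    return G, perm

# Demo: scramble the columns of a (7,4) Hamming generator, then reduce
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
Gs, perm = ge_systematic(G[:, ::-1])
print((Gs[:, :4] == np.eye(4, dtype=int)).all())  # -> True
```

The three nested loops (rows, elimination rows, columns touched by the XOR) give the $O(nk^2)$ scaling quoted above.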
Two conditions are introduced in the proposed decoder. The first condition decides whether to skip GE based on an evaluation of the BLER performance with and without GE. If GE is skipped, the re-encoding process will be performed with the original generator matrix of the code. The second condition determines whether the correct decoding result has been found in the decoding process without GE. If so, decoding is terminated early to reduce the complexity, and if not, the standard OSD is performed as a supplement to prevent the BLER from degrading. Our verification shows that at high SNRs, the proposed decoder can avoid GE in almost all decoding attempts with nearly the same BLER performance as the standard OSD. Owing to the effective GE reduction, the proposed decoder can achieve a significantly lowered complexity at high SNRs compared to the latest OSD approaches from the literature \\cite{choi2021fast,yue2021probability}.\n \n The rest of this paper is organized as follows. Section \\ref{sec::Preliminaries} presents the preliminaries. Section \\ref{sec::Algorithm} describes the proposed decoding algorithm. Section \\ref{sec::Performance and Complexity} discusses the decoding error performance and complexity. Section \\ref{Sec::Simulation} verifies the BLER and complexity of the proposed decoder via simulations. Finally, Section \\ref{sec::Conclusion} concludes the paper.\n \n \\emph{Notation}: We use $[a]_u^v = [a_u,\\ldots,a_v]$ to denote a row vector containing element $a_{\\ell}$ for $u\\le \\ell\\le v$. For simplicity, we do not distinguish random variables and their samples throughout the paper, and any abuse of notation will be specified.\n\n\\vspace{-0.1em} \n\\section{Preliminaries} \\label{sec::Preliminaries}\n\\vspace{-0.1em} \nLet $\\mathcal{C}(n,k)$ denote a binary linear block code, where $n$ and $k$ are the lengths of the codeword and the information block, respectively. 
$\\mathcal{C}(n,k)$ is defined by its generator matrix $\\mathbf{G}$; that is, an information sequence $\\mathbf{b} = [b]_1^k$ is uniquely encoded to a codeword $\\mathbf{c} = [c]_1^n$ by $\\mathbf{c} = \\mathbf{b}\\mathbf{G}$. In this paper, we assume that $\\mathbf{G}$ is systematic, i.e., $\\mathbf{G} = [\\mathbf{I}_k \\ \\mathbf{P}]$, where $\\mathbf{I}_k$ is a $k\\times k$ identity matrix and $\\mathbf{P}$ is the parity sub-matrix.\n\nWe consider an additive white Gaussian Noise (AWGN) channel and binary phase shift keying (BPSK) modulation. Let $\\mathbf{s} = [s]_1^n$ denote the modulated signals, where $s_{i} = (-1)^{c_{i}}\\in \\{\\pm 1\\}$. At the channel output, the received signal is given by $\\mathbf{r} = \\mathbf{s} + \\mathbf{w}$, where $\\mathbf{w}$ is the AWGN vector with zero mean and variance $N_{0}\/2$, for $N_0$ being the single-band noise power spectrum density. SNR is accordingly defined as $\\mathrm{SNR} = 2\/N_0$. Without loss of generality, approaches provided in this paper can be extended to other modulation schemes.\n\nAt the receiver, the bitwise hard-decision estimate $\\mathbf{y}= [y]_{1}^n$ of codeword $\\mathbf{c}$ is obtained according to: $y_{i} = 1 $ for $r_{i}<0$ and $y_{i} = 0$ for $r_{i}\\geq 0$. If codewords in $\\mathcal{C}(n,k)$ have equal transmission probabilities, the log-likelihood ratio (LLR) of the $i$-th received symbol is defined as ${\\ell}_{i} \\triangleq \\ln \\frac{\\mathrm{Pr}(c_{i}=0|r_{i})}{\\mathrm{Pr}(c_{i}=1|r_{i})}$, which is further simplified to ${\\ell}_{i} = \\frac{4r_{i}}{N_{0}}$ if employing BPSK \\cite{lin2004ECC}. Thus, we define $\\alpha_{i} \\triangleq |r_{i}|$ (the scaled magnitude of LLR) as the reliability of $y_i$, where $|\\cdot|$ is the absolute operation.\n\nOSD preprocesses received signals according to reliabilities. 
First, a permutation $\\pi_{1}$ is performed to sort the reliabilities $\\bm{\\alpha} = [\\alpha]_1^n$ in descending order, and the ordered reliabilities are obtained as $\\pi_1(\\bm{\\alpha})$. Then, the generator matrix is accordingly permuted (in terms of columns) to $\\pi_1(\\mathbf{G})$. Next, OSD obtains the systematic form of $\\pi_1(\\mathbf{G})$ as $\\widetilde{\\mathbf{G}} = [\\mathbf{I}_k \\ \\widetilde{\\mathbf{P}}]$ by performing GE. We represent the GE operation as $\\widetilde{\\mathbf{G}} = \\mathbf{E}(\\pi_2(\\pi_1(\\mathbf{G})))$, where $\\mathbf{E}$ (dimension $k\\times k$) represents row operations, and $\\pi_{2}$ represents an additional column permutation. $\\pi_{2}$ is required to ensure that the first $k$ columns of $\\pi_1(\\mathbf{G})$ are linearly independent. Accordingly, $\\mathbf{r}$, $\\mathbf{y}$, and $\\bm{\\alpha}$ are permuted into $\\widetilde{\\mathbf{r}} = \\pi_2(\\pi_1(\\mathbf{r}))$, $\\widetilde{\\mathbf{y}} = \\pi_2(\\pi_1(\\mathbf{y}))$, and $\\widetilde{\\bm{\\alpha}} = \\pi_2(\\pi_1(\\bm{\\alpha}))$, respectively.\n\nSince $\\pi_2$ only marginally disrupts the descending order of $\\widetilde{\\bm{\\alpha}}$ \\cite[Eq. (59)]{Fossorier1995OSD}, the first $k$ positions of $\\widetilde{\\mathbf{y}}$, denoted by $\\widetilde{\\mathbf{y}}_{\\mathrm{B}} =[\\widetilde{y}]_1^k$, are referred to as the most reliable basis (MRB) \\cite{Fossorier1995OSD}. To eliminate errors in the MRB, a length-$k$ TEP $\\mathbf{e} = [e]_1^k$ is added to $\\widetilde{\\mathbf{y}}_{\\mathrm{B}}$ to obtain a codeword estimate by re-encoding according to $\\widetilde{\\mathbf{c}}_{\\mathbf{e}} = \\left(\\widetilde{\\mathbf{y}}_{\\mathrm{B}}\\oplus \\mathbf{e}\\right)\\widetilde{\\mathbf{G}}$, where $\\widetilde{\\mathbf{c}}_{\\mathbf{e}}$ is the ordered codeword estimate with respect to the TEP $\\mathbf{e}$. In reprocessing, a list of TEPs is re-encoded to generate multiple codeword candidates. 
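The preprocessing and reprocessing steps just described can be condensed into a short, self-contained sketch of plain order-$m$ OSD. Candidates are ranked by the weighted Hamming distance between each re-encoded candidate and the ordered hard decision (formally defined below); none of the complexity-reduction techniques considered in this paper are included, so this is an illustration of the baseline decoder only.

```python
import itertools
import numpy as np

def osd_decode(r, G, order=1):
    """Plain order-m OSD sketch: reliability sort (pi_1), GE over GF(2)
    with column swaps (pi_2), TEP re-encoding on the MRB, and selection
    by weighted Hamming distance. Assumes G has full row rank."""
    k, n = G.shape
    pi1 = np.argsort(-np.abs(r))              # descending reliability
    Gp, rp, perm = G[:, pi1].copy(), r[pi1], np.arange(n)
    for i in range(k):                         # bring Gp to [I_k | P]
        if not Gp[i:, i].any():                # realise pi_2 if needed
            j = i + 1 + int(np.nonzero(Gp[i:, i + 1:].any(axis=0))[0][0])
            Gp[:, [i, j]] = Gp[:, [j, i]]
            rp[[i, j]] = rp[[j, i]]
            perm[[i, j]] = perm[[j, i]]
        t = i + int(np.argmax(Gp[i:, i]))
        Gp[[i, t]] = Gp[[t, i]]
        for u in range(k):
            if u != i and Gp[u, i]:
                Gp[u] ^= Gp[i]
    y, alpha = (rp < 0).astype(int), np.abs(rp)
    best, best_d = None, np.inf
    for w in range(order + 1):                 # all TEPs of weight <= order
        for pos in itertools.combinations(range(k), w):
            e = np.zeros(k, dtype=int)
            e[list(pos)] = 1
            c = (y[:k] ^ e) @ Gp % 2           # re-encode the flipped MRB
            d = np.sum((c ^ y) * alpha)        # weighted Hamming distance
            if d < best_d:
                best, best_d = c, d
    out = np.empty(n, dtype=int)
    out[pi1[perm]] = best                      # undo pi_2 then pi_1
    return out

# Demo: (7,4) Hamming code, all-zero codeword, one weak bit flipped
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
r = np.array([0.9, 0.8, 1.1, 1.2, -0.1, 0.7, 1.0])
print(osd_decode(r, G, order=1))  # -> [0 0 0 0 0 0 0]
```

In the demo the hard-decision error sits on the least reliable position, so even the weight-zero TEP recovers the transmitted codeword; higher orders become necessary only when errors fall inside the MRB.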
The maximum Hamming weight of TEPs attempted is limited by a parameter $m$, namely, the decoding order. Thus, the number of TEPs processed in an order-$m$ OSD can be up to $\\sum_{i=0}^{m}\\binom{k}{i}$. For a code with the minimum Hamming weight $d_{\\mathrm{H}}$, OSD with order $m = \\lceil d_{\\mathrm{H}}\/4-1\\rceil$ can achieve the ML decoding performance \\cite{Fossorier1995OSD}.\n\nWith BPSK, the best ordered codeword estimate $\\widetilde{\\mathbf{c}}_{\\mathrm{best}}$ is found by minimizing the weighted Hamming distance between the codeword estimate $\\widetilde{\\mathbf{c}}_{\\mathbf{e}}$ and $\\widetilde{\\mathbf{y}}$, which is defined as \\cite{valembois2002comparison}\n \t\\begin{equation} \\small \\label{equ::Prelim::WHD_define}\n \t\t \\mathcal{D}(\\widetilde{\\mathbf{c}}_{\\mathbf{e}},\\widetilde{\\mathbf{y}}) \\triangleq \\sum_{1 \\leq i \\leq n } (\\widetilde{c}_{\\mathbf{e},i} \\oplus \\widetilde{y}_{i}) \\widetilde{\\alpha}_{i}.\n \t\\end{equation}\nFinally, the decoding result $\\hat{\\mathbf{c}}_{\\mathrm{best}}$ is output by performing inverse permutations over $\\widetilde{\\mathbf{c}}_{\\mathrm{best}}$, i.e., $\\hat{\\mathbf{c}}_{\\mathrm{best}} = \\pi_1^{-1}(\\pi_2^{-1}(\\widetilde{\\mathbf{c}}_{\\mathrm{best}}))$.\n\n\\vspace{-0.25em} \n\\section{OSD with Adaptive GE Reduction} \\label{sec::Algorithm}\n\\vspace{-0.25em} \n\\subsection{Overall Description}\n\\vspace{-0.25em} \n \\begin{figure} \n \\centering\n \\definecolor{mycolor1}{rgb}{0.00000,0.44706,0.74118}%\n \\definecolor{mycolor2}{rgb}{0.00000,0.44700,0.74100}%\n \\tikzstyle{terminator} = [rectangle, draw, text centered, rounded corners, minimum height=2em]\n \\tikzstyle{process} = [rectangle, draw, text centered, minimum height=2em]\n \\tikzstyle{decision} = [diamond, draw, text centered, minimum height=1em,aspect=2.5]\n \\tikzstyle{connector} = [draw, -latex']\n %\n \\begin{tikzpicture}[node distance=2cm]\n \\node at (-2,0) [terminator] (start) {\\footnotesize Start Decoding};\n \\node 
[decision] at (-2,-1.3) (con1) {\\footnotesize Condition 1};\n \\node [process] at (2,-1.3) (NonGE) {\\footnotesize Non-GE OSD (order $m\\!-\\!1$)};\n \\node [decision] at (2,-2.6) (con2) {\\footnotesize Condition 2};\n \\node [process] at (-2,-2.6) (GE) {\\footnotesize Standard OSD (order $m$)};\n \\node at (0,-3.9) [terminator] (end) {\\footnotesize Finish Decoding};\n \\path [connector] (start) -- (con1);\n \\path [connector] (con1) -- (NonGE);\n \\path [connector] (con1) -- (GE);\n \\path [connector] (NonGE) -- (con2);\n \\path [connector] (con2) -- (GE);\n \\path [connector] (con2) |- (end);\n \\path [connector] (GE) |- (end);\n \n \\node[draw=none] at (-1.6, -2.0) (No) {\\footnotesize No};\n \\node[draw=none] at (-0.4, -1.1) (Yes) {\\footnotesize Yes};\n \\node[draw=none] at (1.6, -3.3) (yes) {\\footnotesize Yes};\n \\node[draw=none] at (0.4, -2.4) (No) {\\footnotesize No};\n \n \\end{tikzpicture}\n \t\\vspace{-0em}\n \\caption{The structure of the proposed decoder.}\n \t\\vspace{-0em}\n \t\\label{Fig::structure}\n \n\t\\end{figure}\n\nWe propose an OSD algorithm that can adaptively skip its reprocessing stage (including GE). When reprocessing is skipped, the original hard-decision estimate $\\mathbf{y}$ and generator matrix $\\mathbf{G}$ will be used for re-encoding (instead of using $\\widetilde{\\mathbf{y}}$ and $\\widetilde{{\\mathbf{G}}}$). Specifically, let $\\mathbf{y}_{\\mathrm{B}}$ denote the first $k$ positions of $\\mathbf{y}$, i.e., $\\mathbf{y}_{\\mathrm{B}} = [y]_1^k$. Then, a codeword estimate is directly recovered by $\\mathbf{c}_{\\mathbf{e}} = \\left(\\mathbf{y}_{\\mathrm{B}}\\oplus \\mathbf{e}\\right)\\mathbf{G}$ with respect to a TEP $\\mathbf{e}$. Similar to the standard OSD, a list of TEPs will be used in re-encoding, and the maximum allowed Hamming weight of TEPs is limited by $m'$. 
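The re-encoding of a TEP list together with the weighted Hamming distance metric defined earlier can be sketched as follows (an illustrative helper with exhaustive TEP enumeration for clarity; it applies equally to the ordered $\widetilde{\mathbf{G}}$ in standard OSD and to the original $\mathbf{G}$ in the Non-GE case):

```python
import itertools
import numpy as np

# Hedged sketch of order-m reprocessing: re-encode every TEP of weight <= m
# and keep the candidate minimizing the weighted Hamming distance. G is a
# systematic generator matrix, y the hard decisions, alpha the reliabilities.
def reprocess(G, y, alpha, m):
    k, _ = G.shape
    yB = y[:k]
    best, best_whd = None, np.inf
    for w in range(m + 1):                       # TEP weights 0..m
        for pos in itertools.combinations(range(k), w):
            e = np.zeros(k, dtype=np.uint8)
            e[list(pos)] = 1
            cand = (yB ^ e) @ G % 2              # re-encode y_B + e
            whd = float(np.sum((cand ^ y) * alpha))  # weighted Hamming dist.
            if whd < best_whd:
                best, best_whd = cand, whd
    return best, best_whd

G = np.array([[1, 0, 1], [0, 1, 1]], dtype=np.uint8)
c = np.array([1, 0, 1], dtype=np.uint8)          # a codeword of G
best, whd = reprocess(G, c, np.ones(3), m=1)     # noiseless: distance 0
```

In the noiseless example the weight-0 TEP already reproduces the transmitted codeword, so the search returns it with distance zero.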
Finally, if $\\mathbf{c}_{\\mathbf{e}}$ is identified as the best codeword estimate, it will be directly output as the decoding result with no inverse permutation required. We refer to such a decoding process without GE as the Non-GE OSD, and $m'$ is its decoding order. \n\nThe structure of the proposed decoder is illustrated in Fig. \\ref{Fig::structure}. At the start of decoding, the decoder decides whether to conduct the Non-GE OSD or the standard OSD according to ``Condition 1''. If the Non-GE OSD is performed, ``Condition 2'' will determine whether the Non-GE OSD has found the correct decoding result. If not, the standard OSD will be conducted following the Non-GE OSD to avoid degraded decoding performance. We set $m'=\\max(m-1,0)$ in the proposed decoder, i.e., the Non-GE OSD is one order lower than the standard OSD. The reasons are that 1) if $m'\\geq m$, the Non-GE OSD has a higher complexity than the standard OSD, which negates the need for GE reduction and worsens the worst-case decoding complexity, and 2) if $m'$ is too small, the Non-GE OSD may easily fail to find the correct result.\n\nAs seen, the design of ``Condition 1'' and ``Condition 2'' is of importance for the structure illustrated in Fig. \\ref{Fig::structure}. We note that the standard OSD in Fig. \\ref{Fig::structure} can be implemented by any improved variant of OSD; for example, the efficient probability-based OSD (PB-OSD) proposed recently \\cite{yue2021probability}.\n\n\\vspace{-0.25em} \n\\subsection{The First Condition}\n\\vspace{-0.25em} \nTo derive the first condition, let us first consider the BLER performance of OSD, which is represented as \\cite{Fossorier1995OSD}\n\\begin{equation}\n \\mathrm{P_e} \\leq (1 - \\mathrm{P_{list}}) + \\mathrm{P_{ML}},\n\\end{equation}\nwhere $\\mathrm{P_{ML}}$ is the ML BLER of $\\mathcal{C}(n,k)$. $\\mathrm{P_{ML}}$ is mainly characterized by the structure of $\\mathcal{C}(n,k)$ and, in particular, the minimum Hamming weight $d_{\\mathrm{H}}$. 
$\\mathrm{P_{list}}$ is the probability that some TEP can eliminate the errors over the MRB, whose value depends on the decoding order $m$ \\cite[Eq. (24)]{dhakal2016error}.\n\n\n\nLet $\\mathrm{P'_{list}}$ denote the probability that the Non-GE OSD can eliminate the errors over $\\mathbf{y}_{\\mathrm{B}}$ with some TEP. Thus, the BLER of the Non-GE OSD is upper bounded by $\\mathrm{P_e} \\leq (1 - \\mathrm{P'_{list}}) + \\mathrm{P_{ML}}$. Therefore, if $\\mathrm{P'_{list}} =\n \\mathrm{P_{list}}$, the Non-GE OSD will deliver the same BLER as OSD. In other words, the Non-GE OSD is sufficient to find the correct decoding result and thus GE is not necessary. $\\mathrm{P'_{list}} =\n \\mathrm{P_{list}}$ can be satisfied when the SNR is asymptotically large (i.e., $N_0\\to 0$). To see this, consider $\\mathrm{P_{list}}$ derived from \\cite[Lemma 1]{yue2021revisit}, i.e.,\n \\begin{equation} \\label{equ::Ana::Plist}\n \\mathrm{P_{list}} = \\sum_{i=0}^{m}\\int_{x = 0}^{\\infty} \\binom{k}{i} (p(x))^{i}(1-p(x))^{k-i} f_{\\widetilde{\\alpha}_{k+1}}(x) dx,\n \\end{equation}\n where $f_{\\widetilde{\\alpha}_{k+1}}(x)$ is the probability density function ($\\mathrm{pdf}$) of the $(k+1)$-th ordered reliability, $\\widetilde{\\alpha}_{k+1}$ (as a random variable). $p(x)$ is the average bitwise error probability of $\\widetilde{\\mathbf{y}}_{\\mathrm{B}}$ conditioned on $\\{\\widetilde{\\alpha}_{k+1} = x\\}$, which is given by \\cite[Eq. (13)]{yue2021revisit}\n \\begin{equation} \\label{equ::Ana::Pe::px}\n p(x) = \\frac{Q(\\frac{2x+2}{\\sqrt{2N_0}}) }{ Q(\\frac{2x+2}{\\sqrt{2N_0}}) + Q(\\frac{2x-2}{\\sqrt{2N_0}}) } .\n \\end{equation}\nThen, when $N_0\\to 0$, we have \\cite[Eq. 
(28)]{yue2021linear}\n \\begin{equation} \n \\lim_{N_0 \\to 0}p(x) = \\frac{1}{1 + \\lim\\limits_{N_0 \\to 0}\\exp\\left(\\frac{4x}{N_0}\\right)}.\n \\end{equation}\nWhen $N_0 \\to 0$, we have $\\widetilde{\\alpha}_{k+1} \\to 1$ and $f_{\\widetilde{\\alpha}_{k+1}}(x) \\to \\delta(x-1)$, where $\\delta(x)$ is the Dirac delta function. We then obtain \n\\begin{equation} \\label{equ::Plist::N0to0}\n \\lim_{N_0 \\to 0}\\mathrm{P_{list}} = \\lim_{N_0 \\to 0}\\sum_{i=0}^{m} \\binom{k}{i} \\left(\\frac{1}{1 + e^{4\/N_0}}\\right)^{\\!i\\!}\\left(\\frac{e^{4\/N_0}}{1 + e^{4\/N_0}}\\right)^{\\!\\!k-i}\\!\\!\\!.\n\\end{equation}\n\nOn the other hand, we can derive $\\mathrm{P'_{list}}$ as\n \\begin{equation} \\label{equ::Ana::Plist'}\n \\mathrm{P'_{list}} = \\sum_{i=0}^{m'} \\binom{k}{i} (p')^{i}(1-p')^{k-i},\n \\end{equation}\nwhere $p'$ is the bitwise error probability of $\\mathbf{y}_{\\mathrm{B}}$. Under AWGN and BPSK, $p'$ is readily given by $p' = Q(\\sqrt{2\/N_0})$, which also has the following asymptotic property,\n\\begin{equation} \\label{equ::Ana::P'::app}\n \\lim_{N_0 \\to 0}p' = \\frac{1}{1 + \\lim\\limits_{N_0 \\to 0}\\exp\\left(\\frac{4}{N_0}\\right)}.\n\\end{equation}\nTherefore, substituting (\\ref{equ::Ana::P'::app}) into (\\ref{equ::Ana::Plist'}) and taking $m'=m-1$, we observe that \n\\begin{equation} \\label{equ::ana::same}\n\\begin{split}\n \\lim_{N_0 \\to 0}&\\mathrm{P_{list}}- \\mathrm{P'_{list}} \\\\\n &= \\lim_{N_0 \\to 0}\\binom{k}{m} \\left(\\frac{1}{1 + e^{4\/N_0}}\\right)^{\\!m\\!}\\left(\\frac{e^{4\/N_0}}{1 + e^{4\/N_0}}\\right)^{\\!\\!k-m} \\\\\n &\\to 0, \n\\end{split}\n\\end{equation}\nwhich indicates that when $N_0\\to 0$, the Non-GE OSD can completely replace the standard OSD. 
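As a quick numerical illustration of this asymptotic behaviour (values of $k$, $m'$, and $N_0$ are assumed for illustration only, not taken from the paper), one can evaluate $\mathrm{P'_{list}}$ with $p' = Q(\sqrt{2/N_0})$ and watch the miss probability $1-\mathrm{P'_{list}}$ vanish as $N_0$ decreases:

```python
import math

# Illustrative only: P'_list is the probability that some TEP of weight
# <= m' removes all errors in y_B, with p' the bitwise error probability.
def Q(x):
    # Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def plist_no_ge(k, m_prime, N0):
    p = Q(math.sqrt(2 / N0))          # p' under AWGN and BPSK
    return sum(math.comb(k, i) * p ** i * (1 - p) ** (k - i)
               for i in range(m_prime + 1))

# miss probability 1 - P'_list shrinks quickly as N0 -> 0 (SNR grows)
misses = [1 - plist_no_ge(36, 2, N0) for N0 in (1.0, 0.5, 0.2)]
```

The monotone decrease of `misses` mirrors the limit above: at high SNR the order-$(m-1)$ Non-GE list already captures essentially all correctable error patterns.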
However, when $N_0$ is not negligible, there is $\\mathrm{P'_{list}} < \\mathrm{P_{list}}$ for $m' < m$.\n\n[Figure (label Fig::64-36-PARA): plot data omitted. Subfigure caption: Average number of TEPs, $N_a'$ (label Fig::64-36-N'; curves include PB-OSD). Overall caption: The impacts of values of $\\lambda$ in decoding the $(64,36)$ eBCH code with order 3.]\n\n\\vspace{-0.3em}\n\\section{Conclusion} \\label{sec::Conclusion}\n\\vspace{-0.3em}\n\nIn this paper, we designed an efficient ordered-statistics decoding (OSD) algorithm with adaptive GE reduction. The proposed decoder employs two conditions. The first condition decides whether to conduct the OSD decoding without Gaussian elimination (GE). If the OSD without GE is performed, the second condition identifies whether the correct decoding result has been found in the Non-GE decoding process. If so, the standard OSD with GE can be avoided. The proposed decoding algorithm is an effective solution to the ``complexity floor'' owing to the overhead of GE in OSD decoders. 
Simulation results indicated that, compared to approaches from the literature, the proposed decoder can significantly reduce the decoding complexity at high SNRs while maintaining the error performance of the original OSD.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\t\\vspace{-0.3em}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzittr b/data_all_eng_slimpj/shuffled/split2/finalzzittr new file mode 100644 index 0000000000000000000000000000000000000000..3d7baa09b80deba241a7de75d53f43b9a5834821 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzittr @@ -0,0 +1,5 @@ +{"text":"\\section{Resonances}\nResonances are unstable particles usually observed as bell-shaped structures in scattering cross sections of their decay products. For a simple narrow resonance, its fundamental properties correspond to the visible cross-section features: mass $M$ is at the peak position, and decay width $\\Gamma$ is the width of the bell shape. These parameters, along with the branching fraction $x$, are known as the Breit-Wigner parameters \\cite{BW}. In reality, resonance peaks may be very broad, and the shape so deformed that it is not at all clear where exactly the mass is, or what the width of that resonance would be. In such cases resonance parameters are treated as energy dependent functions. These functions are often defined differently for different resonances. For example, in the case of the $\\rho(770)$ resonance in the $\\pi \\pi$ channel, modern analyses include the pion-pion P-wave potential barrier (momentum to three halves) in the energy dependent width \\cite{GS}, while in the case of the $Z$ boson the width function is \nproportional to the energy squared \\cite{Sir91}. \n\nWith such model dependent parameterizations, the simple connection between physical properties of a resonance and its model parameters is lost, and the choice of the ``proper'' resonance parameters becomes a matter of preference. 
There are many definitions for the Breit-Wigner mass, which is assumed by some to be the proper resonance physical property. Others will prefer the real part of the pole position in the complex energy plane. Some will even define the resonance mass to be something unrelated to these two most common definitions, as we will soon see, or assume that there is no difference between poles and Breit-Wigner parameters whatsoever. All that makes the comparison between cited resonance parameters quite confusing and potentially hinders the direct comparison between microscopic theoretical predictions (such as \\cite{Dur08}) and experimentally obtained resonance properties \\cite{PDG}. \n\nTo clarify this situation we try to devise a simple model-independent formula for resonant scattering, with well defined resonance physical properties, which will be capable of successfully fitting the realistic data for broad resonances.\n\nIn this letter we show how to dramatically improve the simple Breit-Wigner formula by incorporating in it just one additional (phase) parameter. This new formula has two equivalent forms that can be used to estimate either pole or Breit-Wigner parameters in a model independent way. \n\nWe begin our analysis by noting that the resonant cross section is commonly parameterized by a simple Breit-Wigner formula \\cite{BW}\n\\begin{equation}\n\\sigma =\\frac{4\\pi}{q^2}\\frac{2J+1}{(2s_1+1)(2s_2+1)}\\, |A|^2,\n\\end{equation}\nwhere $q$ is the c.m.~momentum, $J$ is the spin of the resonance, while $s_1$ and $s_2$ are the spins of the two incoming particles. The resonant scattering amplitude $A$ is given by\n\\begin{equation}\\label{eq:PDGparameterization}\nA=\\frac{x\\, \\Gamma\/2}{M-W-i\\,\\Gamma\/2},\n\\end{equation}\nwhere $M$ is the resonant mass, $\\Gamma$ is the total decay width, $x$ is the branching fraction to a particular channel (for inelastic scattering it is $\\sqrt{x_{\\mathrm{in}}\\,x_{\\mathrm{out}}}$), and $W$ is the c.m.~energy. 
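As a sanity check on this simple amplitude (with illustrative, $\Delta(1232)$-like numbers assumed for $M$ and $\Gamma$), the squared amplitude indeed peaks at $W=M$ and falls to half maximum at $W = M \pm \Gamma/2$, so $\Gamma$ is the full width at half maximum:

```python
# Minimal numerical rendering of the simple Breit-Wigner amplitude above;
# the mass/width values are illustrative assumptions, not fit results.
def bw_amplitude(W, M, Gamma, x):
    return (x * Gamma / 2) / complex(M - W, -Gamma / 2)

M, Gamma, x = 1232.0, 117.0, 1.0
peak = abs(bw_amplitude(M, M, Gamma, x)) ** 2              # = x^2 at the peak
half = abs(bw_amplitude(M + Gamma / 2, M, Gamma, x)) ** 2  # = peak / 2
```

At the peak the denominator reduces to $-i\Gamma/2$, so $|A|^2 = x^2$; shifting $W$ by $\pm\Gamma/2$ doubles $|M-W-i\Gamma/2|^2$ and halves the cross section.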
\n\nThis simple parameterization cannot describe most of the realistic cross sections since the resonance shapes are seldom symmetric. To fix this, a background contribution is usually added. Unfortunately, there is no standard way to add background, but polynomials in $W^2$ (i.e., Mandelstam $s$) are commonly used in the literature (see e.g.~\\cite{BES06}) \n\\begin{equation}\\label{eq:polybg}\n|A|^2\\rightarrow|A|^2+\\sum_{k=0}^n B_k\\,W^{2k}.\n\\end{equation}\n\nTo extract the resonance parameters, we do local fits (in energy) of this parameterization to a broad range of data points in the vicinity of the resonance peak. To estimate the proper order $n$ of the polynomial background, we vary the endpoints of the data range and check the convergence of the physical fit parameters: $M$, $\\Gamma$, and $x$. The goodness of convergence is estimated by calculating $c_{n,l}$ parameters for each data range and for all polynomial orders $n$ and $l$ \n\\begin{equation}\\label{eq:crit1}\nc_{n,l}=\\sum_{y=M,\\Gamma,x}(y_l-y_n)^2\/y_n^2.\n\\end{equation}\nSmaller $c_{n,l}$ means better convergence. \n\nTo avoid false positive convergence signals as much as possible, we demand good convergence not just for two, but for three consecutive polynomial orders by using \n\\begin{equation}\\label{eq:crit2}\nc_{n}=c_{n,n+1}+c_{n,n+2}.\n\\end{equation}\nThe final result is the one having the smallest reduced $\\chi^2_R$ among several fits (we use ten) with the lowest convergence parameters $c_n$. When statistical errors turn out to be unrealistically small due to dataset issues, the spread in extracted pole parameter values is used to estimate parameter errors. \n\nTo test this extraction approach, we analyze five broad resonances with well known properties, and masses ranging from less than 1 GeV to almost 100 GeV. For $\\Delta(1232)$ and $N(1440)$, we analyze GWU \\cite{GWU} $\\pi N$ elastic partial-wave amplitudes squared. 
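The convergence bookkeeping just defined can be sketched as follows; the parameter tuples below are invented for illustration (roughly $\Delta(1232)$-like numbers), not actual fit results:

```python
# Hedged sketch of the convergence criteria above: compare fitted
# (M, Gamma, x) tuples across polynomial background orders n.
def c_pair(fit_n, fit_l):
    # c_{n,l} = sum over (M, Gamma, x) of (y_l - y_n)^2 / y_n^2
    return sum((yl - yn) ** 2 / yn ** 2 for yn, yl in zip(fit_n, fit_l))

def c_n(fits, n):
    # demand agreement over three consecutive orders: c_n = c_{n,n+1} + c_{n,n+2}
    return c_pair(fits[n], fits[n + 1]) + c_pair(fits[n], fits[n + 2])

# invented fit results for background orders 0..3 (illustration only)
fits = {0: (1210.0, 100.0, 1.04), 1: (1211.0, 101.0, 1.03),
        2: (1211.5, 101.5, 1.03), 3: (1211.4, 101.4, 1.03)}
```

Here `c_n(fits, 1) < c_n(fits, 0)`, i.e., orders 1 through 3 agree better with each other than orders 0 through 2 do, so order 1 would be preferred by this criterion.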
For $\\rho(770)$ and the $Z$ boson we analyze the $e^+e^-$ scattering ratio R (between hadronic and muonic channels) from the PDG compilation \\cite{PDG}, and for $\\Upsilon(11020)$ the new {\\sc BaBar} data \\cite{Aub09}.\n\n\nUsing the Breit-Wigner parameterization (\\ref{eq:PDGparameterization}) on broad resonances does not produce very good results. Therefore, in the advanced approaches, the resonance width $\\Gamma$ (and other parameters) are considered to be energy dependent, which drastically improves the fit. However, the parameterization then becomes model dependent, obfuscating the connection between the model parameters and the physical properties of the resonance. We want to find a simple model independent form, as close to the original Breit-Wigner parameterization as possible, that will be capable of successfully fitting the realistic data for broad resonances. To do so, we assume that the numerator and the denominator in Rel.~(\\ref{eq:PDGparameterization}) are functions of energy, expand them and keep only the linear terms. The amplitude $A$ becomes\n\\begin{equation}\\label{eq:res.amplitude}\nA = \\frac{x_p\\,\\Gamma_p\/2\\, \\,e^{i\\theta_p}}{M_p-W-i\\,\\Gamma_p\/2}+|A_{B}|\\,e^{i\\theta_{B}},\n\\end{equation}\nwhich turns out to be the lowest order Laurent expansion of the amplitude $A$ about its pole position at \\mbox{$W= M_p-i\\,\\Gamma_p\/2$}. Therefore, $M_p$ and $\\Gamma_p$ are the pole mass and width, while $x_p\\,\\Gamma_p\/2$ and $\\theta_p$ are the complex residue magnitude and phase, respectively. (Note that we use the standard convention for the residue phase $\\theta_p$, as used in the PDG \\cite{PDG}, which differs from the mathematical residue phase by $\\pm\\pi$.) The three additional fit parameters are the residue phase $\\theta_p$, the (coherently added) background magnitude $|A_B|$, and the background phase $\\theta_{B}$. We can extract only the relative phase $\\delta_{p}=\\theta_{p}-\\theta_{B}$, since the absolute square of this amplitude will be compared to the data. 
In order to ease the numerical analysis, we rewrite the new parameterization in a compact form\n\\begin{equation}\\label{eq:betterparameterization}\n|A|^2 = |A_B|^2\\frac{(\\mu-W)^2+\\lambda^2}{(M_p-W)^2+\\Gamma_p^2\/4},\n\\end{equation}\nwhere $\\mu$ and $\\lambda$ are simple fit parameters related to the pole parameters through \n\\begin{align}\nx_p\\,\\sin\\delta_p&=|A_B|\\,\\frac{\\Gamma_p\/2-|\\lambda|}{\\Gamma_p\/2},\\\\\nx_p\\,\\cos\\delta_p&=-|A_B|\\,\\frac{M_p-\\mu}{\\Gamma_p\/2}.\n\\end{align}\n\nUsing parameterization (\\ref{eq:betterparameterization}), we should be able to extract the pole mass, width, branching fraction, magnitude of the background amplitude, and the relative phase from the data. We again use the same polynomial background from relation (\\ref{eq:polybg}) and the convergence criteria from relations (\\ref{eq:crit1}) and (\\ref{eq:crit2}). However, at the very beginning of this analysis we stumbled upon a problem with our fits. When we fitted the $\\Delta(1232)$ resonance, the parameter $\\lambda$ was rather unstable, ranging from zero to several thousand MeV. In addition, the fits often did not converge, even for carefully chosen initial values. \n\nWe looked into it more closely and realised that since $\\Delta(1232)$ is an almost elastic resonance (decaying by more than 99 percent to the $\\pi N$ channel), $\\lambda$ should be zero due to the elastic two-body unitarity condition \\mbox{$A^\\dag A=\\mathrm{Im}\\,A$}. When $\\lambda$ was set to zero, everything worked almost perfectly. It is important to note that $x_p$ should be 1 for elastic resonances, again due to unitarity, but setting $\\lambda$ to zero does not imply that $x_p$ is 1.\n\nThings became really interesting when we tried to extract the Z boson parameters from the $e^+e^-$ scattering data. The unstable $\\lambda$ behavior seen in the case of $\\Delta(1232)$ was observed again, even though the Z boson is definitely not an elastic resonance. 
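For reference, the mapping from the compact fit parameters $(|A_B|, \mu, \lambda)$ and the pole position to $(x_p, \delta_p)$ given by the two relations above can be written out directly (illustrative values; `pole_from_fit` is a hypothetical helper, not the analysis code):

```python
import math

# Hedged sketch: invert the two relations above to recover x_p and delta_p
# from the compact fit parameters and the pole position (M_p, Gamma_p/2).
def pole_from_fit(AB, mu, lam, Mp, Gp):
    s = AB * (Gp / 2 - abs(lam)) / (Gp / 2)    # x_p * sin(delta_p)
    c = -AB * (Mp - mu) / (Gp / 2)             # x_p * cos(delta_p)
    return math.hypot(s, c), math.atan2(s, c)  # (x_p, delta_p)

# illustrative numbers loosely inspired by the Delta(1232) scale
x_p, delta_p = pole_from_fit(AB=0.5, mu=1200.0, lam=0.0, Mp=1211.0, Gp=102.0)
```

Note that setting $\lambda = 0$ with $\mu = M_p$ forces $\delta_p = \pi/2$ and $x_p = |A_B|$, which makes the instability of $\lambda$ in near-elastic fits easy to see: the data constrain only the combination entering $x_p\sin\delta_p$.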
The fit could not be stabilized, and eventually we tried $\\lambda=0$ again ($x_p$ can still take care of inelasticity). This choice smoothed the fitting procedure, and the extracted resonance parameters were in excellent agreement with the PDG (pole) estimates \\cite{PDG}. Assuming that $\\lambda=0$ for other processes as well, we rewrite the amplitude defined in Eq.~(\\ref{eq:res.amplitude}) as:\n \\begin{equation}\\label{eq:newamplitude}\nA = x_p\\,e^{i\\eta}\\left(\\frac{\\Gamma_p\/2 \\,\\, e^{2i\\delta_p}} {M_p-W-i\\,\\Gamma_p\/2} + e^{i\\delta_p}\\sin\\delta_p\\right),\n\\end{equation}\nwith the unmeasurable overall phase $\\eta$ equal to $2\\theta_{B}-\\theta_p$. The square of this amplitude is then\n \\begin{equation}\\label{eq:pole}\n |A|^2 = x_p^2\\,\\frac{\\left[(M_p-W)\\,\\sin\\delta_p+\\Gamma_p\/2\\,\\cos\\delta_p\\right]^2}{(M_p-W)^2+\\Gamma_p^2\/4}.\n\\end{equation}\n\nWe know that for $\\Delta(1232)$, the overall phase $\\eta$ is zero due to unitarity, which means that $\\theta_{B} = \\delta_p$ and $\\theta_p=2\\delta_p$. We compare our results for $2\\delta_p$ to published results for $\\theta_p$ for the other analyzed resonances to check whether the same relation is valid for them as well. The Roper resonance $N(1440)$ is a $\\pi N$ resonance with $\\pi N$ branching fraction $x$ estimated at 65\\%, and the $2\\delta_p$ value of $-81^\\circ$ is surprisingly close to the newest residue phase estimate $-85^\\circ$ from \\cite{PDG}. For the Z boson, these two values are even closer: $2\\delta_p$ is $-2.2^\\circ$, while $\\theta_p$ is $-2.35^\\circ$. The extracted masses and widths are much closer to the pole parameters listed in the literature than to the Breit-Wigner ones, as can be seen in Table \\ref{tab:poles}. The best fits for all analyzed resonances are shown in Figures \\ref{fig:results5}-\\ref{fig:results11}.\n\n\n\n\\begin{table}[h!]\n\\caption{\\label{tab:poles} Resonance pole parameters extracted by using the pole formula (\\ref{eq:pole}). 
Our $2\\delta_p$ is compared to the residue phase $\\theta_p$ from the literature. PDG pole estimates are from Ref.~\\cite{PDG}. The $\\rho$ meson pole and the Z boson residue phase are estimated by analytic continuation of the Gounaris-Sakurai \\cite{GS} and Breit-Wigner \\cite{PDG} parameterizations, respectively. }\n\\begin{ruledtabular}\n\\begin{tabular}{lllll}\nResonance & $M_p$ \/ MeV & $\\Gamma_p$ \/ MeV & $x_p$ \/ \\% & $2\\delta_p$ \/ $^o$\\\\\\hline\n$\\rho(770)$ & 762 $\\pm$ 1 & 138 $\\pm$ 2 & 0.71 & 1 $\\pm$ 1 \\\\\n\\scriptsize POLE & 763 & 144 & \\scriptsize N\/A & \\scriptsize N\/A \\\\\\hline \n$\\Delta(1232)$ & 1211 $\\pm$ 1 & 102 $\\pm$ 1 & 103 $\\pm$ 1 & $-$47 $\\pm$ 1 \\\\\n\\scriptsize PDG\/POLE & 1210 $\\pm$ 1 & 100 $\\pm$ 2 & 104 $\\pm$ 2 & $-$47 $\\pm$ 1 \\\\\\hline\n$N(1440)$ & 1362 $\\pm$ 5 & 191 $\\pm$ 10 & 61 $\\pm$ 4 & $-$81 $\\pm$ 10 \\\\\n\\scriptsize PDG\/POLE & 1365 $\\pm$ 15 & 190 $\\pm$ 30 & 65 $\\pm$ 10 & $-$85 $\\pm$ $^{15}_{10}$ \\\\\\hline\n$\\Upsilon(11020)$ & 11000 $\\pm$ 2 & 43 $\\pm$ 6 & 0.10 & $-$52 $\\pm$ 8 \\\\\n {\\scriptsize\\sc BaBar} \\cite{Aub09} & 10996 $\\pm$ 2 & 37 $\\pm$ 3 & \\scriptsize N\/A & \\scriptsize N\/A \\\\\\hline\n\n$Z(91188)$ & 91167 $\\pm$ 6 & 2493 $\\pm$ 5 & 15.4 & $-$2.2 $\\pm$ 0.2 \\\\\n\\scriptsize PDG\/POLE & 91162 $\\pm$ 2 & 2494 $\\pm$ 2 & \\scriptsize N\/A & $-2.35$ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n \nThe new pole parameterization (\\ref{eq:pole}) may be used instead of (\\ref{eq:PDGparameterization}), since it works much better and adds only one quite important parameter $\\delta_p$. This parameter is the main ingredient determining the shape of the resonance contribution to the cross section. When $\\delta_p$ is equal to zero, the new pole and the old simple Breit-Wigner parameterization are exactly the same. \n\n\nWe began this study in the first place to find an improved Breit-Wigner parameterization, but ended up with a pole parameterization instead. 
Following the original notion of Breit and Wigner that the resonance mass is at the peak position \\cite{BW}, and noting that our pole parameterization (\\ref{eq:newamplitude}) has the convenient form of a single-channel elastic amplitude (apart from $x_p\\neq1$ and $\\eta\\neq0$), we now define the new Breit-Wigner parameters as the single-channel K-matrix pole $M_b$, residue $\\Gamma_b$, branching fraction $x_b$, and background phase $\\delta_b$\n\\begin{align}\nK&= \\frac{\\Gamma_{b}\/2}{M_{b}-W} + \\tan\\delta_{b},\\\\\nA&=x_{b}\\,\\frac{K}{1-iK},\\\\\n|A|^2 &=x_{b}^2\\frac{\\left[\\Gamma_{b}\/2+(M_{b}-W)\\tan{\\delta_{b}}\\right]^2}{(M_{b}-W)^2+\\left[\\Gamma_{b}\/2+(M_{b}-W)\\tan{\\delta_{b}}\\right]^2}. \\label{eq:BW}\n\\end{align}\n\nIn this form, $x_{b}$ and $\\delta_{b}$ will be mathematically equal to $x_p$ and $\\delta_p$, respectively. When we fit parameterization (\\ref{eq:BW}) to the data, the extracted fit parameters $M_{b}$, $\\Gamma_{b}$, and $x_{b}$ are consistent with the Breit-Wigner parameters in the PDG \\cite{PDG}, as is clearly visible from Table \\ref{tab:breitwigner}. As expected, $x_{b}$ and $\\delta_{b}$ have almost exactly the same values as their pole counterparts $x_p$ and $\\delta_p$ in Table \\ref{tab:poles}. \nFurthermore, the extracted pole and Breit-Wigner parameters are interrelated through the Manley relations \\cite{Man95}\n\\begin{align}\nM_{b}&=M_p-\\Gamma_p\/2\\,\\,\\tan\\delta_p,\\\\\n\\Gamma_{b}&=\\Gamma_p\/\\cos^2\\delta_p. \n\\end{align}\n\n\n\\begin{table}[h!]\n\\caption{\\label{tab:breitwigner} Resonance parameters extracted by using the new Breit-Wigner formula (\\ref{eq:BW}). 
PDG estimates are from Ref.~\\cite{PDG}.}\n\\begin{ruledtabular}\n\\begin{tabular}{lllll}\nResonance & $M_{b}$ \/ MeV & $\\Gamma_{b}$ \/ MeV & $x_{b}$ \/ \\% & $2\\delta_{b}$ \/ $^o$ \\\\\\hline\n$\\rho(770)$ & 761 $\\pm$ 1 & 139 $\\pm$ 2 & 0.71 & 0 $\\pm$ 1 \\\\\n\\scriptsize PDG & \\scriptsize 775.5 $\\pm$ 0.3 & \\scriptsize 146.2 $\\pm$ 0.7 & 0.69 & \\scriptsize N\/A \\\\\n\\hline\n$\\Delta(1232)$ & 1233 $\\pm$ 1 & 120 $\\pm$ 1 & 102 $\\pm$ 1 & $-$46 $\\pm$ 1 \\\\\n\\scriptsize PDG\/BW & 1232 $\\pm$ 2 & 117 $\\pm$ 3 & 100 & \\scriptsize N\/A \\\\\\hline\n$N(1440) $ & 1443 $\\pm$ 2 & 325 $\\pm$ 11 & 61 $\\pm$ 4 & $-$80 $\\pm$ 2 \\\\\n\\scriptsize PDG\/BW & 1440 $\\pm$ $^{30}_{20}$ & 300 $\\pm$ $^{150}_{100}$ & 65 $\\pm$ 10 & \\scriptsize N\/A \\\\\\hline\n$\\Upsilon(11020)$ & 11010 $\\pm$ 2 & 53 $\\pm$ 8 & 0.10 & $-$52 $\\pm$ 8 \\\\\n\\scriptsize PDG & 11019 $\\pm$ 8 & 79 $\\pm$ 16 & \\scriptsize N\/A & \\scriptsize N\/A \\\\\\hline\n$Z(91188)$ & 91191 $\\pm$ 5 & 2494 $\\pm$ 5 & 15.4 & $-$2.2 $\\pm$ 0.2 \\\\\n\\scriptsize PDG\/BW & 91188 $\\pm$ 2 & 2495 $\\pm$ 2 & 15.3 & \\scriptsize N\/A \n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\\begin{figure}[h!]\n\\includegraphics[width=8.5cm]{fig5.eps}\n\\caption{Fitting pole parameterization (\\ref{eq:pole}) to the data \\cite{PDG}. The pole and BW masses have almost the same value, quite different from PDG estimate (dotted line). (We removed the data from the peak to eliminate the influence of the $\\omega(782)$ resonance.) \\label{fig:results5}}\n\\end{figure}\n\n \\begin{figure}[h!]\n\\includegraphics[width=8.5cm]{fig1.eps}\n\\caption{Fitting pole parameterization (\\ref{eq:pole}) to the SAID data \\cite{GWU}. Pole (solid) and BW mass (dashed) are clearly distinct, while the PDG estimates (dotted lines) are indistinguishable from them. All the data in this figure are analyzed, but only the black data points are used in the best fit. 
\\label{fig:results1}}\n\\end{figure}\n \\begin{figure}[h!]\n\\includegraphics[width=8.5cm]{fig2.eps}\n\\caption{Fitting pole parameterization (\\ref{eq:pole}) to the SAID data \\cite{GWU}. This resonance has the largest difference between the pole and BW mass. PDG estimates (dotted lines) are consistent with pole (solid) and BW (dashed) parameters. \\label{fig:results2}}\n\n\\end{figure}\n \\begin{figure}[h!]\n\\includegraphics[width=8.5cm]{fig14.eps}\n\\caption{Fitting pole parameterization (\\ref{eq:pole}) to the {\\sc BaBar} data \\cite{Aub09}. Pole and BW masses are clearly distinct, but the PDG estimate coincides with neither of them. \\label{fig:results14}}\n\\end{figure}\n \\begin{figure}[h!]\n\\includegraphics[width=8.5cm]{fig11.eps}\n\\caption{Fitting pole parameterization (\\ref{eq:pole}) to the data \\cite{PDG}. The pole and BW parameters are consistent with their PDG estimates (dotted lines). \\label{fig:results11}}\n\\end{figure}\n\nThese Breit-Wigner parameters are uniquely defined and model independent, with the directly observable mass as the peak of the squared amplitude $|A|^2$. However, they strongly depend on the phase $\\delta_p$, which may change from reaction to reaction. That means that for the same pole position, there will be different Breit-Wigner masses and widths in different channels. Therefore, it is more practical to have one (pole) mass and width for each resonance in the particle data tables \\cite{PDG} than to stockpile different (Breit-Wigner) masses and widths for each process in which the resonance contributes.\n\nIn our study, there are two resonances that show a systematic discrepancy between the PDG mass estimates \\cite{PDG} and our results presented here: the $\\rho(770)$ and the $\\Upsilon(11020)$. For the $\\rho$ meson an alternative parameterization by Gounaris and Sakurai \\cite{GS} is used, where the mass and width are defined somewhat unconventionally to take into account its mixing with $\\omega(782)$ and $\\rho(1450)$. 
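As a numerical cross-check of the Manley relations quoted above, the $\Delta(1232)$ pole values from Table \ref{tab:poles} indeed map close to the Breit-Wigner values of Table \ref{tab:breitwigner}:

```python
import math

# Check (illustrative) that the Manley relations carry the Delta(1232)
# pole parameters (M_p, Gamma_p, 2*delta_p) = (1211, 102, -47 deg) close
# to the Breit-Wigner values (M_b, Gamma_b) ~ (1233, 120) from Table II.
def manley(Mp, Gp, delta_p):
    Mb = Mp - (Gp / 2) * math.tan(delta_p)   # M_b = M_p - Gamma_p/2 * tan(delta_p)
    Gb = Gp / math.cos(delta_p) ** 2         # Gamma_b = Gamma_p / cos^2(delta_p)
    return Mb, Gb

Mp, Gp, delta_p = 1211.0, 102.0, math.radians(-47.0 / 2)
Mb, Gb = manley(Mp, Gp, delta_p)
```

With these inputs the relations give $M_b \approx 1233$ MeV and $\Gamma_b \approx 121$ MeV, in good agreement with the directly fitted Breit-Wigner values.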
The $\\Upsilon(11020)$ mass and width were obtained as fit parameters of a Gaussian with a relativistic tail \\cite{CUSBCLEO}. For both resonances, the values cited in the PDG tables are neither masses consistent with the original Breit-Wigner idea, being at the peak of the resonance, nor the pole positions. Since the resonance parameters are collected in the PDG tables to be used as an input for various models and for comparison between theory and experiment, placing all these resonance parameters in a single table may generally create considerable confusion. This confusion is evident in the $\\rho$ meson case where in the table \nwith predominantly Gounaris-Sakurai masses (about 775 MeV) one can find pole masses (roughly 760 MeV). \n\nIn conclusion, we have shown here that the original Breit-Wigner formula may be drastically improved by including a single additional (phase) parameter $\\delta_p$. Our results suggest that the parameter $\\delta_p$ is equal to half of the resonance residue phase $\\theta_p$, regardless of the resonance inelasticity. This new formula has two equivalent forms that can be used to estimate either the pole or the Breit-Wigner parameters in a model independent way. Having both forms enabled us to learn that the PDG tables \\cite{PDG} contain values that correspond neither to pole nor to Breit-Wigner parameters. Such an outcome undermines the proper matching between microscopic theories (e.g., lattice QCD \\cite{Dur08}) and experiment.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFrom the time of Connes's 1995 paper~\\cite{Connes95}, spectral triples with finite-di{\\-}men{\\-}sion{\\-}al \\ensuremath{\\ast}-algebra and Hilbert space, or \\term{finite spectral triples}, have been central to the noncommutative-geometric (NCG) approach to the Standard Model of elementary particle physics, where they are used to encode the fermionic physics. 
As a result, they have been the focus of considerable research activity.\n\nThe study of finite spectral triples began in earnest with papers by Paschke and Sitarz~\\cite{PS98} and by Krajewski~\\cite{Kraj98}, first released nearly simultaneously in late 1996 and early 1997, respectively, which gave detailed accounts of the structure of finite spin geometries, {\\it i.e.\\\/}\\ of finite real spectral triples of $KO$-dimension $0 \\bmod 8$ satisfying orientability and Poincar{\\'e} duality. In their approach, the study of finite spectral triples is reduced, for the most part, to the study of \\term{multiplicity matrices}, integer-valued matrices that explicitly encode the underlying representation-theoretic structure. Krajewski, in particular, defined what are now called \\term{Krajewski diagrams} to facilitate the classification of such spectral triples. Iochum, Jureit, Sch{\\\"u}cker, and Stephan have since undertaken a programme of classifying Krajewski diagrams for finite spectral triples satisfying certain additional physically desirable assumptions~\\cites{ACG1,ACG2,ACG3,Sch05} using combinatorial computations~\\cite{JS08}, with the aim of fixing the finite spectral triple of the Standard Model amongst all other such triples.\n\nHowever, there were certain issues with the then-current version of the NCG Standard Model, including difficulty with accommodating massive neutrinos and the so-called fermion doubling problem, that were only to be resolved in the 2006 papers by Connes~\\cite{Connes06} and by Chamseddine, Connes and Marcolli~\\cite{CCM07}, which use the Euclidean signature of earlier papers, and by Barrett~\\cite{Bar07}, which instead uses Lorentzian signature; we restrict our attention to the Euclidean signature approach of~\\cite{Connes06} and~\\cite{CCM07}, which has more recently been set forth in the monograph~\\cite{CM08} of Connes and Marcolli. 
The finite spectral triple of the current version has $KO$-dimension $6 \\bmod 8$ instead of $0 \\bmod 8$, fails to be orientable, and only satisfies a certain modified version of Poincar{\\'e} duality. It also no longer satisfies $S^0$-reality, another condition that holds for the earlier finite geometry of~\\cite{Connes95}, though only because of the Dirac operator. Jureit and Stephan~\\cites{ACG4,ACG5} have since adopted the new value for the $KO$-dimension, but further assume orientability and Poincar{\\'e} duality. In addition, Stephan~\\cite{St06} has proposed an alternative finite spectral triple for the current NCG Standard Model with the same physical content but satisfying Poincar{\\'e} duality; it also just fails to be $S^0$-real in the same manner as the finite geometry of~\\cite{CCM07}; in the same paper, Stephan also discusses non-orientable finite spectral triples.\n\nMore recently, Chamseddine and Connes~\\cites{CC08a,CC08b} have sought a purely algebraic method of isolating the finite spectral triple of the NCG Standard Model, by which they have obtained the correct \\ensuremath{\\ast}-algebra, Hilbert space, grading and real structure using a small number of fairly elementary assumptions. In light of these successes, it would seem reasonable to try to view this new approach of Chamseddine and Connes through the lens of the structure theory of Krajewski and Paschke--Sitarz, at least in order to understand better their method and the assumptions involved. 
This, however, would require adapting that structure theory to handle the failure of orientability and Poincar{\\'e} duality, yielding the initial motivation of this work.\n\nTo that end, we provide, for the first time, a comprehensive account of the structure theory of Krajewski and Paschke--Sitarz for finite real spectral triples of arbitrary $KO$-dimension, without the assumptions of orientability or Poincar{\\'e} duality; this consists primarily of straightforward generalisations of the results and techniques of~\\cite{PS98} and~\\cite{Kraj98}. In this light, the main features of the approach presented here are the following:\n\\begin{enumerate}\n \\item A finite real spectral triple with algebra $\\alg{A}$ is to be viewed as an $\\alg{A}$-bimodule with some additional structure, together with a choice of Dirac operator compatible with that structure.\n \\item For fixed algebra $\\alg{A}$, an $\\alg{A}$-bimodule is entirely characterised by its multiplicity matrix (in the ungraded case) or matrices (in the graded case), which also completely determine(s) what sort of additional structure the bimodule can admit; this additional structure is then unique up to unitary equivalence.\n \\item The form of suitable Dirac operators for an $\\alg{A}$-bimodule with real structure is likewise determined completely by the multiplicity matrix or matrices of the bimodule and the choice of additional structure.\n\\end{enumerate}\nHowever, we do not discuss Krajewski diagrams, though suitable generalisation thereof should follow readily from the generalised structure theory for Dirac operators.\n\nOnce we view a real spectral triple as a certain type of bimodule together with a \\emph{choice} of suitable Dirac operator, it then becomes natural to consider moduli spaces of suitable Dirac operators, up to unitary equivalence, for a bimodule with fixed additional structure, yielding finite real spectral triples of the appropriate $KO$-dimension. 
The construction and study of such moduli spaces of Dirac operators first appear in~\\cite{CCM07}, though the focus there is on the sub-moduli space of Dirac operators commuting with a certain fixed subalgebra of the relevant \\ensuremath{\\ast}-algebra. Our last point above almost immediately leads us to relatively concrete expressions for general moduli spaces of Dirac operators, which also appear here for the first time. Multiplicity matrices and moduli spaces of Dirac operators are then worked out for the bimodules appearing in the Chamseddine--Connes--Marcolli formulation of the NCG Standard Model~\\cites{CCM07,CM08} as examples.\n\nFinally, we apply these methods to the work of Chamseddine and Connes~\\cites{CC08a,CC08b}, offering concrete proofs and some generalisations of their results. In particular, the choices determining the finite geometry of the current NCG Standard Model within their framework are made explicit.\n\nThis work, a revision of the author's qualifying year project (master's thesis equivalent) at the Bonn International Graduate School in Mathematics (BIGS) at the University of Bonn, is intended as a first step towards a larger project of investigating in generality the underlying noncommutative-geometric formalism for field theories found in the NCG Standard Model, with the aim of both better understanding current versions of the NCG Standard Model and facilitating the further development of the formalism itself.\n\nThe author would like to thank his supervisor, Matilde Marcolli, for her extensive comments and for her advice, support, and patience, Tobias Fritz for useful comments and corrections, and George Elliott for helpful conversations. 
The author also gratefully acknowledges the financial and administrative support of BIGS and of the Max Planck Institute for Mathematics, as well as the hospitality and support of the Department of Mathematics at the California Institute of Technology and of the Fields Institute.\n\n\\section{Preliminaries and Definitions}\n\n\\subsection{Real {\\ensuremath{C^*}}-algebras}\n\nIn light of their relative unfamiliarity compared to their complex counterparts, we begin with some basic facts concerning real {\\ensuremath{C^*}}-algebras.\n\nFirst, recall that a \\term{real \\ensuremath{\\ast}-algebra} is a real associative algebra $\\alg{A}$ together with an \\term{involution} on $\\alg{A}$, namely an antihomomorphism $\\ast$ satisfying $\\ast^2 = \\Id$, and that the \\term{unitalisation} of a real \\ensuremath{\\ast}-algebra $\\alg{A}$ is the unital real \\ensuremath{\\ast}-algebra $\\unit{\\alg{A}}$ defined to be $\\alg{A} \\oplus \\field{R}$ as a real vector space, together with the multiplication $(a,\\alpha)(b,\\beta) := (ab + \\alpha b + \\beta a, \\alpha\\beta)$ for $a$, $b \\in \\alg{A}$, $\\alpha$, $\\beta \\in \\field{R}$, and the involution $\\ast \\oplus \\Id_{\\field{R}}$. Note that if $\\alg{A}$ is already unital, then $\\unit{\\alg{A}}$ is simply the direct sum $\\alg{A}\\oplus\\field{R}$ of algebras.\n\n\\begin{definition}\n A \\term{real {\\ensuremath{C^*}}-algebra} is a real \\ensuremath{\\ast}-algebra $\\alg{A}$ endowed with a norm $\\norm{\\cdot}$ making $\\alg{A}$ a real Banach algebra, such that the following two conditions hold:\n\\begin{enumerate}\n \\item $\\forall a \\in \\alg{A}$, $\\norm{a^* a} = \\norm{a}^2$ (\\term{{\\ensuremath{C^*}}-identity});\n \\item $\\forall a \\in \\unit{\\alg{A}}$, $1 + a^* a$ is invertible in $\\unit{\\alg{A}}$ (\\term{symmetry}).\n\\end{enumerate}\n\\end{definition}\n\nThe symmetry condition is redundant for complex {\\ensuremath{C^*}}-algebras, but not for real {\\ensuremath{C^*}}-algebras. 
Indeed, consider $\\field{C}$ as a real algebra together with the trivial involution $\\ensuremath{\\ast} = \\Id$ and the usual norm $\\norm{\\zeta} = |\\zeta|$, $\\zeta \\in \\field{C}$. Then $\\field{C}$ with this choice of involution and norm yields a real Banach \\ensuremath{\\ast}-algebra satisfying the {\\ensuremath{C^*}}-identity but not symmetry, for $1 + i^* i = 0$ is certainly not invertible in $\\unit{\\field{C}} = \\field{C}\\oplus\\field{R}$.\n\nNow, in the finite-dimensional case, one can give a complete description of real {\\ensuremath{C^*}}-algebras, which we shall use extensively in what follows:\n\n\\begin{theorem}[Wedderburn's theorem for real {\\ensuremath{C^*}}-algebras \\cite{Fare}]\n Let $\\alg{A}$ be a finite-dimensional real {\\ensuremath{C^*}}-algebra. Then\n\\begin{equation}\n \\alg{A} \\cong \\bigoplus_{i=1}^N M_{n_i}(\\field{K}_i),\n\\end{equation}\nwhere $\\field{K}_i = \\field{R}$, $\\field{C}$, or $\\field{H}$, and $n_i \\in \\semiring{N}$. Moreover, this decomposition is unique up to permutation of the direct summands.\n\\end{theorem}\n\nNote, in particular, that a finite-dimensional real {\\ensuremath{C^*}}-algebra is necessarily unital.\n\nGiven a finite-dimensional real {\\ensuremath{C^*}}-algebra $\\alg{A}$ with fixed \\term{Wedderburn decomposition} $\\oplus_{i=1}^N M_{n_i}(\\field{K}_i)$, we can associate to $\\alg{A}$ a finite-dimensional complex {\\ensuremath{C^*}}-algebra $\\alg{A}_{\\field{C}}$, the \\term{complex form} of $\\alg{A}$, by setting\n\\begin{equation}\n \\alg{A}_{\\field{C}} := \\bigoplus_{i=1}^N M_{m_i}(\\field{C}),\n\\end{equation}\nwhere $m_i = 2n_i$ if $\\field{K}_i = \\field{H}$, and $m_i = n_i$ otherwise. Then $\\alg{A}$ can be viewed as a real \\ensuremath{\\ast}-subalgebra of $\\alg{A}_\\field{C}$ such that $\\alg{A}_{\\field{C}} = \\alg{A} + i \\alg{A}$, that is, as a \\term{real form} of $\\alg{A}_{\\field{C}}$. 
Here, $\\field{H}$ is considered as embedded in $M_2(\\field{C})$ by\n\\[\n \\zeta_1 + j \\zeta_2 \\mapsto\n \\begin{pmatrix}\n \\zeta_1 & \\zeta_2 \\\\\n -\\overline{\\zeta_2} & \\overline{\\zeta_1}\n \\end{pmatrix},\n\\]\nfor $\\zeta_1$, $\\zeta_2 \\in \\field{C}$.\n\nIn what follows, we will consider only finite-dimensional real {\\ensuremath{C^*}}-algebras with fixed Wedderburn decomposition.\n\n\n\\subsection{Representation theory}\n\nIn keeping with the conventions of noncommutative differential geometry, we shall consider \\ensuremath{\\ast}-representations of real {\\ensuremath{C^*}}-algebras on complex Hilbert spaces. Recall that such a (left) representation of a real {\\ensuremath{C^*}}-algebra $\\alg{A}$ consists of a complex Hilbert space $\\hs{H}$ together with a \\ensuremath{\\ast}-homomorphism $\\lambda : \\alg{A} \\to \\mathcal{L}(\\hs{H})$ between real {\\ensuremath{C^*}}-algebras. Similarly, a \\term{right representation} of $\\alg{A}$ is defined to be a complex Hilbert space $\\hs{H}$ together with a \\ensuremath{\\ast}-\\emph{antihomomorphism} $\\rho : \\alg{A} \\to \\mathcal{L}(\\hs{H})$ between real {\\ensuremath{C^*}}-algebras. For our purposes, then, an \\term{$\\alg{A}$-bimodule} consists of a complex Hilbert space $\\hs{H}$ together with a left \\ensuremath{\\ast}-representation $\\lambda$ and a right \\ensuremath{\\ast}-representation $\\rho$ that commute, {\\it i.e.\\\/}\\ such that $[\\lambda(a),\\rho(b)]=0$ for all $a$, $b \\in \\alg{A}$. 
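The commutation requirement $[\\lambda(a),\\rho(b)]=0$ defining a bimodule can be checked in a toy example. The following is a minimal numerical sketch (the names `lam` and `rho` and the choice $\\alg{A} = M_2(\\field{R})$ acting on $\\hs{H} = \\field{C}^2 \\otimes \\field{C}^2$ are ours, purely for illustration): taking $\\lambda(a) = a \\otimes 1$ and $\\rho(b) = 1 \\otimes b^T$, the transpose makes $\\rho$ a \\ensuremath{\\ast}-antihomomorphism, and the two actions commute.

```python
import numpy as np

# Toy A-bimodule for A = M_2(R): H = C^2 (x) C^2, with left action
# lam(a) = a (x) 1 and right action rho(b) = 1 (x) b^T.
# (Illustrative names; not notation from the text.)
def lam(a):
    return np.kron(a, np.eye(2))

def rho(b):
    return np.kron(np.eye(2), b.T)

rng = np.random.default_rng(0)
a, b = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

# rho reverses products, i.e. it is an antihomomorphism: rho(ab) = rho(b) rho(a)
assert np.allclose(rho(a @ b), rho(b) @ rho(a))
# the left and right actions commute: [lam(a), rho(b)] = 0
assert np.allclose(lam(a) @ rho(b), rho(b) @ lam(a))
```

The same pattern (left multiplication on one tensor factor, transposed right multiplication on the other) underlies the multiplicity-matrix description developed below.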
In what follows, we will consider only finite-dimensional representations and hence only finite-dimensional bimodules; since finite-dimensional {\\ensuremath{C^*}}-algebras are always unital, we shall require all representations to be unital as well.\n\nNow, given a left [right] representation $\\alpha = (\\hs{H},\\pi)$ of an algebra $\\alg{A}$, one can define its \\term{transpose} to be the right [left] representation $\\alpha^T = (\\hs{H}^*,\\pi^T)$, where $\\pi^T(a) := \\pi(a)^T$ for all $a \\in \\alg{A}$. Note that for any left or right representation $\\alpha$, $(\\alpha^T)^T$ can naturally be identified with $\\alpha$ itself. In the case that $\\hs{H} = \\field{C}^N$, we shall identify $\\hs{H}^*$ with $\\hs{H}$ by identifying the standard ordered basis on $\\hs{H}$ with the corresponding dual basis on $\\hs{H}^*$. The notion of the transpose of a representation allows us to reduce discussion of right representations to that of left representations.\n\nSince real {\\ensuremath{C^*}}-algebras are semisimple, any left representation can be written as a direct sum of irreducible representations, unique up to permutation of the direct summands, and hence any right representation can be written as a direct sum of transposes of irreducible representations, again unique up to permutation of the direct summands.\n\n\\begin{definition}\n The \\term{spectrum} $\\spec{\\alg{A}}$ of a real {\\ensuremath{C^*}}-algebra $\\alg{A}$ is the set of unitary equivalence classes of irreducible representations of $\\alg{A}$.\n\\end{definition}\n\nNow, let $\\alg{A}$ be a real {\\ensuremath{C^*}}-algebra with Wedderburn decomposition $\\oplus_{i=1}^N M_{k_i}(\\field{K}_i)$. 
Then\n\\begin{equation}\n \\spec{\\alg{A}} = \\bigsqcup_{i=1}^N \\spec{M_{k_i}(\\field{K}_i)},\n\\end{equation}\nwhere the embedding of $\\spec{M_{k_i}(\\field{K}_i)}$ in $\\spec{\\alg{A}}$ is given by composing the representation maps with the projection of $\\alg{A}$ onto the direct summand $M_{k_i}(\\field{K}_i)$. The building blocks for $\\spec{\\alg{A}}$ are as follows:\n\\begin{enumerate}\n \\item $\\spec{M_{n}(\\field{R})} = \\{[(\\field{C}^n,\\lambda)]\\}$,\n \\item $\\spec{M_{n}(\\field{C})} = \\{[(\\field{C}^n,\\lambda)],[(\\field{C}^n,\\overline{\\lambda})]\\}$,\n \\item $\\spec{M_{n}(\\field{H})} = \\{[(\\field{C}^{2n},\\lambda)]\\}$,\n\\end{enumerate}\nwhere $\\lambda(a)$ denotes left multiplication by $a$ and $\\overline{\\lambda}(a)$ denotes left multiplication by $\\overline{a}$.\n\n\\begin{definition}\n Let $\\alg{A}$ be a real {\\ensuremath{C^*}}-algebra, and let $\\alpha \\in \\spec{\\alg{A}}$. We shall call $\\alpha$ \\term{conjugate-linear} if it arises from the conjugate-linear irreducible representation $(\\field{C}^{n_i}, a \\mapsto \\overline{a})$ of a direct summand of $\\alg{A}$ of the form $M_{n_i}(\\field{C})$; otherwise we shall call it \\term{complex-linear}.\n\\end{definition}\n\nThus, a representation $\\alpha$ of the real {\\ensuremath{C^*}}-algebra $\\alg{A}$ extends to a $\\field{C}$-linear \\ensuremath{\\ast}-representation of $\\alg{A}_{\\field{C}}$ if and only if $\\alpha$ is the sum of complex-linear irreducible representations of $\\alg{A}$.\n\nFinally, for an individual direct summand $M_{k_i}(\\field{K}_i)$ of $\\alg{A}$, let $e_i$ denote its unit, $n_i$ the dimension of its irreducible representations (which is therefore equal to $2 k_i$ if $\\field{K}_i = \\field{H}$, and to $k_i$ itself otherwise), $\\rep{n}_i$ its complex-linear irreducible representation, and, if $\\field{K}_i = \\field{C}$, $\\crep{n}_i$ its conjugate-linear irreducible representation. 
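As a concrete illustration of these building blocks, $\\spec{\\alg{A}}$ can be enumerated mechanically from a Wedderburn decomposition. The following is a minimal sketch in Python (the helper `spectrum` and its labels are our own, not from the text): each summand $M_n(\\field{R})$ or $M_n(\\field{H})$ contributes one point, while each $M_n(\\field{C})$ contributes both a complex-linear and a conjugate-linear point.

```python
# Sketch: enumerate spec(A) from a Wedderburn decomposition, following the
# three building blocks above.  Summands are given as (n, K) pairs with
# K in {'R', 'C', 'H'}; each point of the spectrum is returned as a pair
# (label, dimension of the irreducible representation).
def spectrum(summands):
    points = []
    for i, (n, K) in enumerate(summands):
        if K == 'H':
            # M_n(H) has a single irreducible representation, on C^{2n}
            points.append((f'summand {i}: complex-linear', 2 * n))
        else:
            points.append((f'summand {i}: complex-linear', n))
            if K == 'C':
                # M_n(C) additionally has the conjugate-linear class
                points.append((f'summand {i}: conjugate-linear', n))
    return points

# Example: A = R (+) C (+) M_2(H) has four points in its spectrum, with
# irreducible representations of dimensions 1, 1, 1 and 4.
assert [d for _, d in spectrum([(1, 'R'), (1, 'C'), (2, 'H')])] == [1, 1, 1, 4]
```

The count $S$ used below is then simply `len(spectrum(...))`.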
We define a strict ordering $<$ on $\\spec{\\alg{A}}$ by setting $\\alpha < \\beta$ whenever $\\alpha \\in \\spec{M_{k_i}(\\field{K}_i)}$, $\\beta \\in \\spec{M_{k_j}(\\field{K}_j)}$ for $i < j$, and by setting $\\rep{n}_i < \\crep{n}_i$ in the case that $\\field{K}_i = \\field{C}$. Note that the ordering depends on the choice of Wedderburn decomposition, {\\it i.e.\\\/}\\ on the choice of ordering of the direct summands. Let $S$ denote the cardinality of $\\spec{\\alg{A}}$. We shall identify $M_S(\\field{R})$ with the real algebra of functions $\\spec{\\alg{A}}^2 \\to \\field{R}$, and hence index the standard basis $\\{E_{\\alpha\\beta}\\}$ of $M_S(\\field{R})$ by $\\spec{\\alg{A}}^2$.\n\n\\subsection{Bimodules and spectral triples}\n\nLet us now turn to spectral triples. Recall that we are considering only finite-dimensional algebras and representations ({\\it i.e.\\\/}\\ Hilbert spaces), so that we are dealing only with what are termed \\term{finite} or \\term{discrete} spectral triples.\n\nLet $\\hs{H}$ and $\\hs{H}^\\prime$ be $\\alg{A}$-bimodules. We shall denote by $\\bdd^{\\textup{L}}_\\alg{A}(\\hs{H},\\hs{H}^\\prime)$, $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H},\\hs{H}^\\prime)$, and $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H},\\hs{H}^\\prime)$ the subspaces of $\\mathcal{L}(\\hs{H},\\hs{H}^\\prime)$ consisting of left $\\alg{A}$-linear, right $\\alg{A}$-linear, and left and right $\\alg{A}$-linear operators, respectively. In the case that $\\hs{H}^\\prime = \\hs{H}$, we shall write simply $\\bdd^{\\textup{L}}_\\alg{A}(\\hs{H})$, $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H})$ and $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H})$. If $N$ is a subalgebra or linear subspace of a real or complex {\\ensuremath{C^*}}-algebra, we shall denote by $N_\\textup{sa}$ the real linear subspace of $N$ consisting of the self-adjoint elements of $N$, and we shall denote by $\\U(N)$ the set of unitary elements of $N$. 
Finally, for operators $A$ and $B$ on a Hilbert space, we shall denote their anticommutator $AB + BA$ by $\\{ A,B \\}$.\n\n\\subsubsection{Conventional definitions}\n\nWe begin by recalling the standard definitions for spectral triples of various forms. Since we are working with the finite case, all analytical requirements become redundant, leaving behind only the algebraic aspects of the definitions.\n\nThe following definition first appeared in a 1995 paper~\\cite{Connes95a} by Connes:\n\n\\begin{definition}\n A \\term{spectral triple} is a triple $(\\alg{A},\\hs{H},D)$, where:\n\\begin{itemize}\n \\item $\\alg{A}$ is a unital real or complex \\ensuremath{\\ast}-algebra;\n \\item $\\hs{H}$ is a complex Hilbert space on which $\\alg{A}$ has a left representation $\\lambda : \\alg{A} \\to \\mathcal{L}(\\hs{H})$;\n \\item $D$, the \\term{Dirac operator}, is a self-adjoint operator on $\\hs{H}$.\n\\end{itemize}\n\nMoreover, if there exists a $\\ring{Z}\/2\\ring{Z}$-grading $\\gamma$ on $\\hs{H}$ ({\\it i.e.\\\/}\\ a self-adjoint unitary on $\\hs{H}$) such that:\n\\begin{enumerate}\n \\item $[\\gamma, \\lambda(a)]=0$ for all $a \\in \\alg{A}$,\n \\item $\\{\\gamma,D\\}=0$;\n\\end{enumerate}\nthen the spectral triple is said to be \\term{even}. 
Otherwise, it is said to be \\term{odd}.\n\\end{definition}\n\nIn the context of the general definition for spectral triples, a finite spectral triple necessarily has metric dimension $0$.\n\nIn a slightly later paper~\\cite{Connes95}, Connes defines the additional structure on spectral triples necessary for defining the noncommutative spacetime of the NCG Standard Model; indeed, the same paper also contains the first version of the NCG Standard Model to use the language of spectral triples, in the form of a reformulation of the so-called Connes-Lott model.\n\n\\begin{definition}\\label{realdef}\n A spectral triple $(\\alg{A},\\hs{H},D)$ is called a \\term{real spectral triple of $KO$-dimension $n \\bmod 8$} if, in the case of $n$ even, it is an even spectral triple, and if there exists an antiunitary $J: \\hs{H} \\to \\hs{H}$ such that:\n\\begin{enumerate}\n \\item $J$ satisfies $J^2 = \\varepsilon$, $JD = \\varepsilon^\\prime DJ$ and $J\\gamma = \\varepsilon^{\\prime\\prime} \\gamma J$ (in the case of even $n$), where $\\varepsilon$, $\\varepsilon^\\prime$, $\\varepsilon^{\\prime\\prime} \\in \\{-1,1\\}$ depend on $n \\bmod 8$ as follows:\n \\begin{center}\n \\begin{tabular}{crrrrrrrr}\n \\toprule\n $n$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ \\\\\n \\midrule\n $\\varepsilon$ & $1$ & $1$ & $-1$ & $-1$ & $-1$ & $-1$ & $1$ & $1$ \\\\\n $\\varepsilon^\\prime$ & $1$ & $-1$ & $1$ & $1$ & $1$ & $-1$ & $1$ & $1$ \\\\\n $\\varepsilon^{\\prime\\prime}$ & $1$ & & $-1$ & & $1$ & & $-1$ & \\\\\n \\bottomrule\n \\end{tabular}\n \\end{center}\n \\item The \\term{order zero condition} is satisfied, namely $[\\lambda(a),J\\lambda(b)J^*]=0$ for all $a$, $b \\in \\alg{A}$;\n \\item The \\term{order one condition} is satisfied, namely $[[D,\\lambda(a)],J\\lambda(b)J^*]=0$ for all $a$, $b \\in \\alg{A}$.\n\\end{enumerate}\n\nMoreover, if there exists a self-adjoint unitary $\\epsilon$ on $\\hs{H}$ such that:\n\\begin{enumerate}\n \\item $[\\epsilon,\\lambda(a)]=0$ for all $a 
\\in \\alg{A}$;\n \\item $[\\epsilon, D]=0$;\n \\item $\\{\\epsilon, J\\}=0$;\n \\item $[\\epsilon, \\gamma]=0$ (even case);\n\\end{enumerate}\nthen the real spectral triple is said to be \\term{$S^0$-real}.\n\\end{definition}\n\n\\begin{remark}[Krajewski~\\cite{Kraj98}*{\\S 2.2}, Paschke--Sitarz~\\cite{PS98}*{Obs.~1}]\n If $(\\alg{A},\\hs{H},D)$ is a real spectral triple, then the order zero condition is equivalent to the statement that $\\hs{H}$ is an $\\alg{A}$-bimodule for the usual left action $\\lambda$ and the right action $\\rho: a \\mapsto J\\lambda(a^*)J^*$.\n\\end{remark}\n\nIt was commonly assumed until fairly recently that the finite geometry of the NCG Standard Model should be $S^0$-real. Though the current version of the NCG Standard Model no longer makes such an assumption~\\cites{Connes06,CCM07}, we shall later see that its finite geometry can still be seen as satisfying a weaker version of $S^0$-reality.\n\n\\subsubsection{Structures on bimodules}\n\nIn light of the above remark, the order one condition, the strongest algebraic condition placed on Dirac operators for real spectral triples, should be viewed more generally as a condition applicable to operators on bimodules~\\cite{Kraj98}*{\\S 2.4}. This then motivates our point of view that a finite real spectral triple $(\\alg{A},\\hs{H},D)$ should be viewed rather as an $\\alg{A}$-bimodule with additional structure, together with a Dirac operator satisfying the order one condition that is compatible with that additional structure. 
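The remark above, that the order zero condition makes $\\hs{H}$ an $\\alg{A}$-bimodule with right action $\\rho(a) = J\\lambda(a^*)J^*$, can be checked in a classical toy case. The sketch below (all names and the choice of $\\hs{H} = M_2(\\field{C})$ with the Hilbert--Schmidt inner product are ours, for illustration only) takes $\\lambda(a)x = ax$ and $Jx = x^*$, so that $J^2 = 1$ and $J^* = J$; the induced $\\rho(a)$ is then right multiplication by $a$, which visibly commutes with the left action.

```python
import numpy as np

# Toy check: H = M_2(C) with the Hilbert-Schmidt inner product,
# lam(a) x = a x, and the antiunitary J x = x^* (conjugate transpose),
# so J^2 = 1 and hence J^* = J.  Then rho(a) := J lam(a^*) J^* turns out
# to be right multiplication by a.  (All names here are illustrative.)
rng = np.random.default_rng(1)
def rand2():
    return rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

a, b, x = rand2(), rand2(), rand2()

J = lambda y: y.conj().T                      # antiunitary on H
lam = lambda c, y: c @ y                      # left action
rho = lambda c, y: J(lam(c.conj().T, J(y)))   # rho(a) = J lam(a^*) J^*

# rho(a) is right multiplication by a:
assert np.allclose(rho(a, x), x @ a)
# order zero condition: lam(a) and rho(b) commute as operators on H
assert np.allclose(lam(a, rho(b, x)), rho(b, lam(a, x)))
```

This is the usual picture of a C*-algebra acting on itself from both sides, with $J$ interchanging the two actions.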
We therefore begin by defining a suitable notion of ``additional structure'' for bimodules.\n\n\\begin{definition}\n A \\term{bimodule structure} $P$ consists of the following data:\n\\begin{itemize}\n \\item A set $\\alg{P} = \\alg{P}_\\gamma \\sqcup \\alg{P}_J \\sqcup \\alg{P}_\\epsilon$, where each set $\\alg{P}_X$ is either empty or the singleton $\\{X\\}$, and where $\\alg{P}_\\epsilon$ is non-empty only if $\\alg{P}_J$ is non-empty;\n \\item If $\\alg{P}_J$ is non-empty, a choice of \\term{$KO$-dimension} $n \\bmod 8$, where $n$ is even if and only if $\\alg{P}_\\gamma$ is non-empty.\n\\end{itemize}\n\nIn particular, we call a structure $P$:\n\\begin{itemize}\n \\item \\term{odd} if $\\alg{P}$ is empty;\n \\item \\term{even} if $\\alg{P} = \\alg{P}_\\gamma = \\{\\gamma\\}$;\n \\item \\term{real} if $\\alg{P}_J$ is non-empty and $\\alg{P}_\\epsilon$ is empty;\n \\item \\term{$S^0$-real} if $\\alg{P}_\\epsilon$ is non-empty.\n\\end{itemize}\n\nFinally, if $\\alg{P}_\\gamma$ is non-empty, we call $\\gamma$ the \\term{grading}, and if $P$ is real or $S^0$-real, we call $J$ the \\term{charge conjugation}.\n\\end{definition}\n\nSince this notion of $KO$-dimension is meant to correspond with the usual $KO$-dimension of a real spectral triple, we assign to each real or $S^0$-real structure $P$ of $KO$-dimension $n \\bmod 8$ constants $\\varepsilon$, $\\varepsilon^\\prime$ and, in the case of even $n$, $\\varepsilon^{\\prime\\prime}$, according to the table in Definition~\\ref{realdef}.\n\nWe now define the \\term{structure algebra} of a structure $P$ to be the real associative algebra with generators $\\alg{P}$ and relations, as applicable,\n\\[\n \\gamma^2 = 1, \\quad J^2 = \\varepsilon, \\quad \\epsilon^2 = 1; \\quad \\gamma J = \\varepsilon^{\\prime\\prime} J \\gamma, \\quad [\\gamma,\\epsilon] = 0, \\quad \\{\\epsilon,J\\} = 0.\n\\]\n\n\\begin{definition}\n An $\\alg{A}$-bimodule $\\hs{H}$ is said to have structure $P$ whenever it admits a faithful representation of the 
structure algebra of $P$ such that, when applicable, $\\gamma$ and $\\epsilon$ are represented by self-adjoint unitaries in $\\bdd^{\\textup{LR}}_{\\alg{A}}(\\hs{H})$, and $J$ is represented by an antiunitary on $\\hs{H}$ such that\n\\begin{equation}\\label{realintertwine}\n \\forall a \\in \\alg{A}, \\quad \\rho(a) = J \\lambda(a^*) J^*.\n\\end{equation}\n\\end{definition}\n\nNote that an $S^0$-real bimodule can always be considered as a real bimodule, and a real bimodule of even [odd] $KO$-dimension can always be considered as an even [odd] bimodule. Note also that an even bimodule is simply a graded bimodule such that the algebra acts from both left and right by degree $0$ operators, and the grading itself respects the Hilbert space structure; an odd bimodule is then simply an ungraded bimodule. We use the terms ``even'' and ``odd'' so as to keep the terminology consistent with that for spectral triples.\n\nNote also that for a real or $S^0$-real structure $P$, the structure algebra of $P$ is independent of the value of $\\varepsilon^\\prime$. Thus the notions of real [$S^0$-real] $\\alg{A}$-bimodule with $KO$-dimension $1 \\bmod 8$ and $7 \\bmod 8$ are identical, as are the notions of real [$S^0$-real] $\\alg{A}$-bimodule with $KO$-dimension $3 \\bmod 8$ and $5 \\bmod 8$; again, we make the distinction with an eye to the discussion of Dirac operators (and hence of spectral triples) later on.\n\nNow, a \\term{unitary equivalence} of $\\alg{A}$-bimodules $\\hs{H}$ and $\\hs{H}^\\prime$ with structure $P$ is a unitary equivalence of $\\alg{A}$-bimodules ({\\it i.e.\\\/}\\ a unitary element of $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H},\\hs{H}^\\prime)$) that effects unitary equivalence of the representations of the structure algebra of $P$. We denote the set of all such unitary equivalences $\\hs{H} \\to \\hs{H}^\\prime$ by $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\hs{H}^\\prime;\\alg{P})$. 
In particular, $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\hs{H};\\alg{P})$, which we denote by $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H};\\alg{P})$, is a subgroup of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}) := \\U(\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}))$. In all such notation, we suppress the argument $\\alg{P}$ whenever $\\alg{P}$ is empty.\n\n\\begin{definition}\n Let $\\alg{A}$ be a real {\\ensuremath{C^*}}-algebra, and let $P$ be a bimodule structure. The abelian monoid $(\\Bimod(\\alg{A},P),+)$ of $\\alg{A}$-bimodules with structure $P$ is defined as follows:\n\\begin{itemize}\n \\item $\\Bimod(\\alg{A},P)$ is the set of unitary equivalence classes of $\\alg{A}$-bimodules with structure $P$;\n \\item For $[\\hs{H}]$, $[\\hs{H}^\\prime] \\in \\Bimod(\\alg{A},P)$, $[\\hs{H}] + [\\hs{H}^\\prime] := [\\hs{H} \\oplus \\hs{H}^\\prime]$.\n\\end{itemize}\n\\end{definition}\n\nFor convenience, we shall denote $\\Bimod(\\alg{A},P)$ by:\n\\begin{itemize}\n \\item $\\Bimod(\\alg{A})$ if $P$ is the odd structure;\n \\item $\\Bimod^\\textup{even}(\\alg{A})$ if $P$ is the even structure;\n \\item $\\Bimod(\\alg{A},n)$ if $P$ is the real structure of $KO$-dimension $n \\bmod 8$;\n \\item $\\Bimod^0(\\alg{A},n)$ if $P$ is the $S^0$-real structure of $KO$-dimension $n \\bmod 8$.\n\\end{itemize}\nThese monoids will be studied in depth in the next section. 
In light of our earlier comment, we therefore have that\n\\[\n \\Bimod(\\alg{A},1) = \\Bimod(\\alg{A},7), \\quad \\Bimod(\\alg{A},3) = \\Bimod(\\alg{A},5)\n\\]\nand\n\\[\n \\Bimod^0(\\alg{A},1) = \\Bimod^0(\\alg{A},7), \\quad \\Bimod^0(\\alg{A},3) = \\Bimod^0(\\alg{A},5).\n\\]\n\nFinally, for the sake of completeness, we now define the notions of orientability and Poincar{\\'e} duality in this more general context; in the case of a real spectral triple $(\\alg{A},\\hs{H},D,\\gamma, J)$ of even $KO$-dimension, where the right action is given by $\\rho(a) := J\\lambda(a^*)J^*$, these definitions yield precisely the usual ones ({\\it cf.\\\/}~\\cite{Kraj98}*{\\S\\S 2.2, 2.3}).\n\n\\begin{definition}\n We call an even $\\alg{A}$-bimodule $(\\hs{H},\\gamma)$ \\term{orientable} if there exist $a_1, \\dotsc, a_k$, $b_1, \\dotsc, b_k \\in \\alg{A}$ such that\n\\begin{equation}\n \\gamma = \\sum_{i=1}^k \\lambda(a_i)\\rho(b_i).\n\\end{equation}\n\\end{definition}\n\n\\begin{definition}\n Let $\\alg{A}$ be a real ${\\ensuremath{C^*}}$-algebra, and let $(\\hs{H},\\gamma)$ be an even $\\alg{A}$-bimodule. 
Then the \\term{intersection form} $\\left\\langle \\cdot, \\cdot \\right\\rangle : KO_0(\\alg{A}) \\times KO_0(\\alg{A}) \\to \\ring{Z}$ associated with $(\\hs{H},\\gamma)$ is defined by setting\n\\begin{equation}\n \\left\\langle \\left[e\\right], \\left[f\\right] \\right\\rangle := \\tr(\\gamma\\lambda(e)\\rho(f))\n\\end{equation}\nfor projections $e$, $f \\in \\alg{A}$.\n\nIn the case that the intersection form is non-degenerate, we shall say that $(\\hs{H},\\gamma)$ satisfies \\term{Poincar{\\'e} duality}.\n\\end{definition}\n\nThe orientability assumption was used extensively in \\cite{PS98} and \\cite{Kraj98}, as it leads to considerable algebraic simplifications; we shall later define a weakened version of orientability that will yield precisely those simplifications.\n\n\\subsubsection{Bilateral spectral triples}\n\nWe now turn to Dirac operators on bimodules satisfying a generalised order one condition, and define the appropriate notion of compatibility with additional structure on the bimodule.\n\n\\begin{definition}\n A \\term{Dirac operator} for an $\\alg{A}$-bimodule $\\hs{H}$ with structure $P$ is a self-adjoint operator $D$ on $\\hs{H}$ satisfying the \\term{order one condition}:\n\\begin{equation}\n \\forall a, b \\in \\alg{A}, \\quad [[D,\\lambda(a)],\\rho(b)] = 0,\n\\end{equation}\ntogether with the following relations, as applicable:\n\\[\n \\{D,\\gamma\\} = 0, \\quad DJ = \\varepsilon^\\prime JD, \\quad [D,\\epsilon]=0.\n\\]\n\\end{definition}\n\nWe denote the finite-dimensional real vector space of Dirac operators for an $\\alg{A}$-bimodule $\\hs{H}$ with structure $P$ by $\\ms{D}_0(\\alg{A},\\hs{H},\\alg{P})$.\n\n\\begin{definition}\n A \\term{bilateral spectral triple} with structure $P$ is a triple of the form $(\\alg{A},\\hs{H},D)$, where $\\alg{A}$ is a real {\\ensuremath{C^*}}-algebra, $\\hs{H}$ is an $\\alg{A}$-bimodule with structure $P$, and $D$ is a Dirac operator for $(\\hs{H},P)$.\n\\end{definition}\n\nWe shall generally denote 
such a spectral triple by $(\\alg{A},\\hs{H},D;\\alg{P})$, where $\\alg{P}$ is the set of generators of the structure algebra; in cases where the presence or absence of a grading $\\gamma$ is immaterial, we will suppress the generator $\\gamma$ in this notation.\n\n\\begin{remark}\n In the case that $P$ is a real [$S^0$-real] structure of $KO$-dimension $n \\bmod 8$, a bilateral spectral triple with structure $P$ is precisely a real [$S^0$-real] spectral triple of $KO$-dimension $n \\bmod 8$.\n\nMore generally, an odd [even] bilateral spectral triple $(\\alg{A},\\hs{H},D)$ is equivalent to an odd [even] spectral triple $(\\alg{A} \\otimes \\alg{A}^\\textnormal{op},\\hs{H},D)$ such that $[[D,\\alg{A}\\otimes 1],1\\otimes\\alg{A}^\\textnormal{op}]=\\{0\\}$, an object that first appears in connection with $S^0$-real spectral triples~\\cite{Connes95}.\n\\end{remark}\n\nA \\term{unitary equivalence} of spectral triples $(\\alg{A},\\hs{H},D)$ and $(\\alg{A},\\hs{H}^\\prime, D^\\prime)$ is then a unitary $U \\in \\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\hs{H}^\\prime)$ such that $D^\\prime = U D U^*$. This concept leads us to the following definition:\n\n\\begin{definition}\n Let $\\alg{A}$ be a real {\\ensuremath{C^*}}-algebra, and let $\\hs{H}$ be an $\\alg{A}$-bimodule with structure $P$. 
The \\term{moduli space of Dirac operators} for $\\hs{H}$ is defined by\n\\begin{equation}\n \\ms{D}(\\alg{A},\\hs{H},\\alg{P}) := \\ms{D}_0(\\alg{A},\\hs{H},\\alg{P})\/\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\alg{P}),\n\\end{equation}\nwhere $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\alg{P})$ acts on $\\ms{D}_0(\\alg{A},\\hs{H},\\alg{P})$ by conjugation.\n\\end{definition}\n\nIf $\\alg{C}$ is a central subalgebra of $\\alg{A}$, we can form the subspace\n\\begin{equation}\n \\ms{D}_0(\\alg{A},\\hs{H},\\alg{P};\\alg{C}) := \\{D \\in \\ms{D}_0(\\alg{A},\\hs{H},\\alg{P}) \\mid [D,\\lambda(\\alg{C})] = [D,\\rho(\\alg{C})] = \\{0\\}\\},\n\\end{equation}\nand hence the sub-moduli space\n\\begin{equation}\n \\ms{D}(\\alg{A},\\hs{H},\\alg{P};\\alg{C}) := \\ms{D}_0(\\alg{A},\\hs{H},\\alg{P};\\alg{C})\/\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\alg{P}),\n\\end{equation}\nof $\\ms{D}(\\alg{A},\\hs{H},\\alg{P})$; the moduli space of Dirac operators studied by Chamseddine, Connes and Marcolli~\\cite{CCM07}*{\\S 2.7}, \\cite{CM08}*{\\S 13.4} is in fact a sub-moduli space of this form.\n\nSince $\\ms{D}(\\alg{A},\\hs{H},\\alg{P})$ [$\\ms{D}(\\alg{A},\\hs{H},\\alg{P};\\alg{C})$] is the orbit space of a smooth finite-di\\-men\\-sion\\-al representation of a compact Lie group, it is \\latin{a priori} locally compact Hausdorff, and is thus homeomorphic to a semialgebraic subset of $\\field{R}^d$ for some $d$~\\cite{Schwarz75}. The dimension of $\\ms{D}(\\alg{A},\\hs{H},\\alg{P})$ [$\\ms{D}(\\alg{A},\\hs{H},\\alg{P};\\alg{C})$] can then be defined as the dimension of this semialgebraic set. Such moduli spaces will be discussed in some detail.\n\n\\subsubsection{$S^0$-reality}\n\nFollowing Connes~\\cite{Connes95}, we now describe how to reduce the study of $S^0$-real bimodules of even [odd] $KO$-dimension to the study of even [odd] bimodules.\n\nLet $(\\hs{H},J,\\epsilon)$ be an $S^0$-real $\\alg{A}$-bimodule of even [odd] $KO$-dimension. 
Define mutually orthogonal projections $P_i$, $P_{-i}$ in $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H})$ by $P_{\\pm i} = \\frac{1}{2}(1 \\pm \\epsilon)$. Then, at the level of even [odd] bimodules, $\\hs{H} = \\hs{H}_i \\oplus \\hs{H}_{-i}$ for $\\hs{H}_{\\pm i} := P_{\\pm i} \\hs{H}$, where the left and right actions on $\\hs{H}_{\\pm i}$ are given by\n\\[\n \\lambda_{\\pm i}(a) := P_{\\pm i}\\lambda(a)P_{\\pm i}, \\quad \\rho_{\\pm i}(a) := P_{\\pm i}\\rho(a)P_{\\pm i},\n\\]\nfor $a \\in \\alg{A}$, and, in the case of even $KO$-dimension, the grading on $\\hs{H}_{\\pm i}$ is given by $\\gamma_{\\pm i} := P_{\\pm i} \\gamma P_{\\pm i}$. Moreover,\n\\[\n J =\n\\begin{pmatrix}\n 0 & \\varepsilon \\tilde{J}^*\\\\\n \\tilde{J} & 0\n\\end{pmatrix},\n\\]\nwhere $\\tilde{J} := P_{-i} J P_i$ is an antiunitary $\\hs{H}_i \\to \\hs{H}_{-i}$, so that for $a \\in \\alg{A}$,\n\\[\n \\lambda_{-i}(a) = \\tilde{J} \\rho_i(a^*) \\tilde{J}^*, \\quad \\rho_{-i}(a) = \\tilde{J} \\lambda_i(a^*) \\tilde{J}^*,\n\\]\nand in the case of even $KO$-dimension, $\\gamma_{-i} = \\varepsilon^{\\prime\\prime} \\tilde{J} \\gamma_i \\tilde{J}^*$. Finally, note that $\\tilde{J}$ can also be viewed as a unitary $\\overline{\\hs{H}_i} \\to \\hs{H}_{-i}$, where $\\overline{\\hs{H}_i}$ denotes the conjugate space of $\\hs{H}_i$. 
Hence, for fixed $KO$-dimension, an $S^0$-real $\\alg{A}$-bimodule $\\hs{H}$ is determined, up to unitary equivalence, by the bimodule $\\hs{H}_i$.\n\nOn the other hand, if $\\hs{V}$ is an even [odd] $\\alg{A}$-bimodule, we can construct an $S^0$-real $\\alg{A}$-bimodule $\\hs{H}$ for any even [odd] $KO$-dimension $n \\bmod 8$ such that $\\hs{H}_i = \\hs{V}$, by setting $\\hs{H} := \\hs{H}_i \\oplus \\hs{H}_{-i}$ for $\\hs{H}_i := \\hs{V}$, $\\hs{H}_{-i} := \\overline{\\hs{V}}$, defining $\\tilde{J} : \\hs{H}_i \\to \\hs{H}_{-i}$ as the identity map on $\\hs{V}$ viewed as an antiunitary $\\hs{V} \\to \\overline{\\hs{V}}$, then using the above formulas to define $J$, $\\gamma$ (as necessary), $\\lambda$, $\\rho$, and finally setting $\\epsilon = 1_\\hs{V} \\oplus (-1_{\\overline{\\hs{V}}})$. In the case that $\\hs{V}$ is already $\\hs{H}_i$ for some $S^0$-real bimodule $\\hs{H}$, this procedure reproduces $\\hs{H}$ up to unitary equivalence. We have therefore proved the following:\n\n\\begin{proposition}\\label{s0reduction}\n Let $\\alg{A}$ be a real {\\ensuremath{C^*}}-algebra, and let $n \\in \\ring{Z}_8$. Then the map\n\\[\n \\Bimod^0(\\alg{A},n) \\to\n \\begin{cases}\n \\Bimod(\\alg{A}), &\\text{if $n$ is odd,}\\\\\n \\Bimod^\\textup{even}(\\alg{A}), &\\text{if $n$ is even,}\n \\end{cases}\n\\]\ndefined by $[\\hs{H}] \\mapsto [\\hs{H}_i]$ is an isomorphism of monoids.\n\\end{proposition}\n\nNow, let $\\hs{H}$ be an $S^0$-real $\\alg{A}$-bimodule, and suppose that $D$ is a Dirac operator for $\\hs{H}$. We can define Dirac operators $D_i$ and $D_{-i}$ on $\\hs{H}_i$ and $\\hs{H}_{-i}$, respectively, by $D_{\\pm i} := P_{\\pm i} D P_{\\pm i}$; then $D = D_i \\oplus D_{-i}$ and, in fact, $D_{-i} = \\varepsilon^\\prime \\tilde{J} D_i \\tilde{J}^*$. Thus, a Dirac operator $D$ on $\\hs{H}$ is completely determined by $D_i$; indeed, the map $D \\mapsto D_i$ defines an isomorphism $\\ms{D}_0(\\alg{A},\\hs{H},J,\\epsilon) \\cong \\ms{D}_0(\\alg{A},\\hs{H}_i)$. 
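As a toy numerical illustration of this reduction (a hypothetical example, not part of the paper's formalism), one can take $\hs{H}_i = \field{C}^2$, let $\tilde{J}$ act by entrywise complex conjugation, and check that the block-diagonal operator $D = D_i \oplus \varepsilon^\prime \tilde{J} D_i \tilde{J}^*$ built from a self-adjoint $D_i$ is again self-adjoint, so that $D$ really is determined by $D_i$ alone; here the sign $\varepsilon^\prime$ is assumed to be $+1$:

```python
# Toy check of D = D_i (+) D_{-i} with D_{-i} = eps' * J~ D_i J~^*,
# for H_i = C^2 and J~ = entrywise complex conjugation (hypothetical data).

EPS_PRIME = 1  # the sign eps' for the chosen KO-dimension (assumed +1 here)

def conj(A):
    """Entrywise complex conjugate, i.e. J~ A J~^* for J~ = conjugation."""
    return [[z.conjugate() for z in row] for row in A]

# A self-adjoint "Dirac operator" on H_i = C^2.
D_i = [[1.0 + 0j, 2.0 - 1.0j],
       [2.0 + 1.0j, -3.0 + 0j]]

# D_{-i} is forced by the S^0-real structure.
D_minus_i = [[EPS_PRIME * z for z in row] for row in conj(D_i)]

# Assemble the block-diagonal D on H = H_i (+) H_{-i}.
D = [[D_i[0][0], D_i[0][1], 0, 0],
     [D_i[1][0], D_i[1][1], 0, 0],
     [0, 0, D_minus_i[0][0], D_minus_i[0][1]],
     [0, 0, D_minus_i[1][0], D_minus_i[1][1]]]

# D remains self-adjoint: D = D^*.
assert all(D[i][j] == D[j][i].conjugate()
           for i in range(4) for j in range(4))
```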
\n\nAlong similar lines, one can show that $\\U^{\\textup{LR}}_{\\alg{A}}(\\hs{H},J) \\cong \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}_i)$ by means of the map $U \\mapsto U_i := P_i U P_i$; this isomorphism is compatible with the isomorphism $\\ms{D}_0(\\alg{A},\\hs{H},J,\\epsilon) \\cong \\ms{D}_0(\\alg{A},\\hs{H}_i)$. Hence, the equivalence between $\\hs{H}$ and $\\hs{H}_i$ persists at the level of moduli spaces of Dirac operators:\n\n\\begin{proposition}\\label{diracs0reduction}\n Let $\\hs{H}$ be an $S^0$-real $\\alg{A}$-bimodule. Then\n\\begin{equation}\n \\ms{D}(\\alg{A},\\hs{H},J,\\epsilon) \\cong \\ms{D}(\\alg{A},\\hs{H}_i).\n\\end{equation}\n\\end{proposition}\n\nOne can similarly show that for a central subalgebra $\\alg{C}$ of $\\alg{A}$,\n\\[\n \\ms{D}(\\alg{A},\\hs{H},J,\\epsilon;\\alg{C}) \\cong \\ms{D}(\\alg{A},\\hs{H}_i;\\alg{C}).\n\\]\n\nLet us conclude by considering the relation between orientability and Poincar{\\'e} duality for an $S^0$-real bimodule $\\hs{H}$ of even $KO$-dimension and orientability and Poincar{\\'e} duality, respectively, for the associated even bimodule $\\hs{H}_i$.\n\n\\begin{proposition}\\label{s0orient}\n Let $\\hs{H}$ be an $S^0$-real $\\alg{A}$-bimodule of even $KO$-dimension. Then $\\hs{H}$ is orientable if and only if there exist $a_1, \\dotsc, a_k, b_1, \\dotsc, b_k \\in \\alg{A}$ such that\n\\begin{equation}\n \\gamma_i = \\sum_{j=1}^k \\lambda_i(a_j)\\rho_i(b_j) = \\varepsilon^{\\prime\\prime} \\sum_{j=1}^k \\lambda_i(b_j^*)\\rho_i(a_j^*).\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\n Let $a_1, \\dotsc, a_k$, $b_1, \\dotsc, b_k \\in \\alg{A}$, and set $T = \\sum_{j=1}^k \\lambda(a_j)\\rho(b_j)$. 
Then\n\\[\n T_i := P_i T P_i = \\sum_{j=1}^k \\lambda_i(a_j)\\rho_i(b_j),\n\\]\nwhile\n\\[\n \\quad T_{-i} := P_{-i} T P_{-i} = \\sum_{j=1}^k \\lambda_{-i}(a_j)\\rho_{-i}(b_j) = \\tilde{J} \\biggl( \\sum_{j=1}^k \\lambda_i(b_j^*)\\rho_i(a_j^*) \\biggr) \\tilde{J}^*.\n\\]\nHence, $T_{-i} = \\varepsilon^{\\prime\\prime} \\tilde{J} T_i \\tilde{J}^*$ if and only if\n\\[\n \\varepsilon^{\\prime\\prime} \\sum_{j=1}^k \\lambda_i(b_j^*)\\rho_i(a_j^*) = T_i = \\sum_{j=1}^k \\lambda_i(a_j)\\rho_i(b_j).\n\\]\nApplying this intermediate result to $a_j$ and $b_j$ such that $\\gamma = \\sum_{j=1}^k \\lambda(a_j)\\rho(b_j)$, in the case that $\\hs{H}$ is orientable, and then to $a_j$ and $b_j$ such that $\\gamma_i = \\sum_{j=1}^k \\lambda_i(a_j)\\rho_i(b_j)$, in the case that $\\hs{H}_i$ is orientable, yields the desired result.\n\\end{proof}\n\nThus, orientability of an $S^0$-real bimodule $\\hs{H}$ is equivalent to a stronger version of orientability on the bimodule $\\hs{H}_i$.\n\nTurning to Poincar{\\'e} duality, we can obtain the following result:\n\n\\begin{proposition}\\label{s0poincare}\n Let $\\hs{H}$ be an $S^0$-real $\\alg{A}$-bimodule of even $KO$-dimension with intersection form $\\form{\\cdot,\\cdot}$, and let $\\form{\\cdot,\\cdot}_i$ be the intersection form for $\\hs{H}_i$. Then for any $p$, $q \\in KO_0(\\alg{A})$,\n\\[\n \\form{p,q} = \\form{p,q}_i + \\varepsilon^{\\prime\\prime} \\form{q,p}_i.\n\\]\n\\end{proposition}\n\n\\begin{proof}\n Let $e$, $f \\in \\alg{A}$ be projections. 
Then\n\\begin{align*}\n \\form{[e],[f]} &= \\tr(\\gamma\\lambda(e)\\rho(f))\\\\\n &= \\tr(\\gamma_i \\lambda_i(e) \\rho_i(f)) + \\tr(\\gamma_{-i}\\lambda_{-i}(e)\\rho_{-i}(f))\\\\\n &= \\tr(\\gamma_i \\lambda_i(e) \\rho_i(f)) + \\varepsilon^{\\prime\\prime} \\tr(\\tilde{J}\\gamma_i \\lambda_i(f) \\rho_i(e) \\tilde{J}^*)\\\\\n &= \\tr(\\gamma_i \\lambda_i(e) \\rho_i(f)) + \\varepsilon^{\\prime\\prime} \\overline{\\tr(\\gamma_i \\lambda_i(f) \\rho_i(e))}\\\\\n &= \\form{[e],[f]}_i + \\varepsilon^{\\prime\\prime} \\form{[f],[e]}_i,\n\\end{align*}\nwhere we have used the fact that the intersection forms are integer-valued.\n\\end{proof}\n\nThus, Poincar{\\'e} duality on an $S^0$-real bimodule $\\hs{H}$ is equivalent to nondegeneracy of either the symmetrisation or antisymmetrisation of the intersection form on $\\hs{H}_i$, as the case may be.\n\n\\section{Bimodules and Multiplicity Matrices}\n\nWe now turn to the study of bimodules, and in particular, to their characterisation by multiplicity matrices. 
We shall find that a bimodule admits, up to unitary equivalence, at most one real structure of any given $KO$-dimension, and that the multiplicity matrix or matrices of a bimodule will determine entirely which real structures, if any, it does admit.\n\nIn what follows, $\\alg{A}$ will be a fixed real {\\ensuremath{C^*}}-algebra.\n\n\\subsection{Odd bimodules}\n\nLet us begin with the study of odd bimodules.\n\nFor $m \\in M_S(\\semiring{Z}_{\\geq 0})$, we define an $\\alg{A}$-bimodule $\\hs{H}_m$ by setting\n\\begin{align*}\n \\hs{H}_m &:= \\bigoplus_{\\alpha,\\beta \\in \\spec{\\alg{A}}} \\field{C}^{n_\\alpha} \\otimes \\field{C}^{m_{\\alpha\\beta}} \\otimes \\field{C}^{n_\\beta},\\\\\n \\lambda_m(a) &:= \\bigoplus_{\\alpha,\\beta \\in \\spec{\\alg{A}}} \\lambda_\\alpha(a) \\otimes 1_{m_{\\alpha\\beta}} \\otimes 1_{n_\\beta}, \\quad a \\in \\alg{A},\\\\\n \\rho_m(a) &:= \\bigoplus_{\\alpha, \\beta \\in \\spec{\\alg{A}}} 1_{n_\\alpha} \\otimes 1_{m_{\\alpha\\beta}} \\otimes \\lambda_\\beta(a)^T, \\quad a \\in \\alg{A}.\n\\end{align*}\nHere we use the convention that $1_n$ is the identity on $\\field{C}^n$, with $\\field{C}^0 := \\{0\\}$ and hence $1_0 := 0$. \n\n\\begin{proposition}[Krajewski~\\cite{Kraj98}*{\\S 3.1}, Paschke--Sitarz~\\cite{PS98}*{Lemmas 1, 2}] \\label{oddmult}\nThe map $\\bimod : M_S(\\semiring{Z}_{\\geq 0}) \\to \\Bimod(\\alg{A})$ given by $m \\mapsto [\\hs{H}_m]$ is an isomorphism of monoids.\n\\end{proposition}\n\n\\begin{proof}\nBy construction, $\\bimod$ is an injective morphism of monoids. It therefore suffices to show that $\\bimod$ is also surjective.\n\nNow, let $\\hs{H}$ be an $\\alg{A}$-bimodule. 
For $\\alpha \\in \\spec{\\alg{A}}$ define projections $P^L_\\alpha$ and $P^R_\\alpha$ by\n\\[\n P^L_\\alpha :=\n \\begin{cases}\n \\lambda(e_i) &\\text{if $\\alpha = \\rep{n}_i$ for $\\field{K}_i \\neq \\field{C}$,}\\\\\n \\frac{1}{2}\\left(\\lambda(e_i) - i \\lambda(i e_i) \\right) &\\text{if $\\alpha = \\rep{n}_i$ for $\\field{K}_i = \\field{C}$,}\\\\\n \\frac{1}{2}\\left(\\lambda(e_i) + i \\lambda(i e_i) \\right) &\\text{if $\\alpha = \\crep{n}_i$ for $\\field{K}_i = \\field{C}$,}\n \\end{cases}\n\\]\nand\n\\[\n P^R_\\alpha :=\n \\begin{cases}\n \\rho(e_i) &\\text{if $\\alpha = \\rep{n}_i$ for $\\field{K}_i \\neq \\field{C}$,}\\\\\n \\frac{1}{2}\\left(\\rho(e_i) - i \\rho(i e_i) \\right) &\\text{if $\\alpha = \\rep{n}_i$ for $\\field{K}_i = \\field{C}$,}\\\\\n \\frac{1}{2}\\left(\\rho(e_i) + i \\rho(i e_i) \\right) &\\text{if $\\alpha = \\crep{n}_i$ for $\\field{K}_i = \\field{C}$,}\n \\end{cases}\n\\]\nrespectively; by construction, $P^L_\\alpha \\in \\lambda(\\alg{A}) + i \\lambda(\\alg{A})$ and $P^R_\\alpha \\in \\rho(\\alg{A}) + i \\rho(\\alg{A})$, so that for $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$, $P^L_\\alpha$ and $P^R_\\beta$ commute. We can therefore define projections $P_{\\alpha\\beta} := P^L_\\alpha P^R_\\beta$ for each $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$; it is then easy to see that each $\\hs{H}_{\\alpha\\beta} := P_{\\alpha\\beta}\\hs{H}$ is a sub-$\\alg{A}$-bimodule of $\\hs{H}$, and that $\\hs{H} = \\oplus_{\\alpha,\\beta \\in \\spec{\\alg{A}}} \\hs{H}_{\\alpha\\beta}$.\n\nLet $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$. As noted before, the left action of $\\alg{A}$ on $\\hs{H}_{\\alpha\\beta}$ must decompose as a direct sum of irreducible representations, but by construction of $\\hs{H}_{\\alpha\\beta}$, those irreducible representations must all be $\\alpha$. Similarly, the right action on $\\hs{H}_{\\alpha\\beta}$ must be a direct sum of copies of $\\beta$. 
Since the left action and right action commute, we must therefore have that $\\hs{H}_{\\alpha\\beta} \\cong \\hs{H}_{m_{\\alpha\\beta} E_{\\alpha\\beta}}$ for some $m_{\\alpha\\beta} \\in \\semiring{Z}_{\\geq 0}$. Taking the direct sum of the $\\hs{H}_{\\alpha\\beta}$, we therefore see that $\\hs{H}$ is unitarily equivalent to $\\hs{H}_m$ for $m = (m_{\\alpha\\beta}) \\in M_S(\\semiring{Z}_{\\geq 0})$, that is, $[\\hs{H}] = \\bimod(m)$.\n\\end{proof}\n\nWe denote the inverse map $\\bimod^{-1} : \\Bimod(\\alg{A}) \\to M_S(\\semiring{Z}_{\\geq 0})$ by $\\mult$.\n\n\\begin{definition}\n Let $\\hs{H}$ be an $\\alg{A}$-bimodule. Then the \\term{multiplicity matrix} of $\\hs{H}$ is the matrix $\\mult[\\hs{H}] \\in M_S(\\semiring{Z}_{\\geq 0})$.\n\\end{definition}\n\nFrom now on, without any loss of generality, we shall assume that an $\\alg{A}$-bimodule $\\hs{H}$ with multiplicity matrix $m$ is $\\hs{H}_m$ itself.\n\n\\begin{remark}\n Multiplicity matrices readily admit a $K$-theoretic interpretation~\\cite{Ell08}. For simplicity, suppose that $\\alg{A}$ is a complex {\\ensuremath{C^*}}-algebra and consider only complex-linear representations. Then for $\\hs{H}$ an $\\alg{A}$-bimodule, $\\mult[\\hs{H}]$ is essentially the \\term{Bratteli matrix} of the inclusion $\\lambda(\\alg{A}) \\hookrightarrow \\rho(\\alg{A})^\\prime \\subset \\mathcal{L}(\\hs{H})$ (cf.~\\cite{Ell07}*{\\S 2}), and can thus be interpreted as representing the induced map $K_0(\\lambda(\\alg{A})) \\to K_0(\\rho(\\alg{A})^\\prime)$ in complex $K$-theory. Likewise, $\\mult[\\hs{H}]^T$ can be interpreted as representing the map $K_0(\\rho(\\alg{A})) \\to K_0(\\lambda(\\alg{A})^\\prime)$ induced by the inclusion $\\rho(\\alg{A}) \\hookrightarrow \\lambda(\\alg{A})^\\prime \\subset \\mathcal{L}(\\hs{H})$. 
Similar interpretations can be made in the more general context of real {\\ensuremath{C^*}}-algebras and $KO$-theory.\n\\end{remark}\n\nWe shall now characterise left, right, and left and right $\\alg{A}$-linear maps between $\\alg{A}$-bimodules. Let $\\hs{H}$ and $\\hs{H}^\\prime$ be $\\alg{A}$-bimodules with multiplicity matrices $m$ and $m^\\prime$, respectively, let $P_{\\alpha\\beta}$ be the projections on $\\hs{H}$ defined as in the proof of Proposition~\\ref{oddmult}, and let $P_{\\alpha\\beta}^\\prime$ be the analogous projections on $\\hs{H}^\\prime$. Then any linear map $T : \\hs{H} \\to \\hs{H}^\\prime$ is characterised by the components\n\\begin{equation}\n T_{\\alpha\\beta}^{\\gamma\\delta} := P^\\prime_{\\gamma\\delta} T P_{\\alpha\\beta},\n\\end{equation}\nwhich we view as maps $T_{\\alpha\\beta}^{\\gamma\\delta} : \\field{C}^{n_\\alpha} \\otimes \\field{C}^{m_{\\alpha\\beta}} \\otimes \\field{C}^{n_\\beta} \\to \\field{C}^{n_\\gamma} \\otimes \\field{C}^{m_{\\gamma\\delta}^\\prime} \\otimes \\field{C}^{n_\\delta}$, or equivalently, as elements $\n T_{\\alpha\\beta}^{\\gamma\\delta} \\in M_{n_\\gamma \\times n_\\alpha}(\\field{C}) \\otimes M_{m_{\\gamma\\delta}^\\prime \\times m_{\\alpha\\beta}}(\\field{C}) \\otimes M_{n_\\delta \\times n_\\beta}(\\field{C})$. Thus we have an isomorphism\n\\[\n \\comp : \\mathcal{L}(\\hs{H},\\hs{H}^\\prime) \\to \\bigoplus_{\\alpha,\\beta,\\gamma,\\delta \\in \\spec{\\alg{A}}} M_{n_\\gamma \\times n_\\alpha}(\\field{C}) \\otimes M_{m_{\\gamma\\delta}^\\prime \\times m_{\\alpha\\beta}}(\\field{C}) \\otimes M_{n_\\delta \\times n_\\beta}(\\field{C})\n\\]\ngiven by $\\comp(T) := (T_{\\alpha\\beta}^{\\gamma\\delta})_{\\alpha,\\beta,\\gamma,\\delta \\in \\spec{\\alg{A}}}$. 
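To make the component decomposition concrete, here is a small Python sketch (with hypothetical dimensions, not drawn from the paper) of how an operator on a direct sum of two summands is characterised by its blocks $P^\prime_{\gamma\delta} T P_{\alpha\beta}$, and how the blocks reassemble to recover the operator, mirroring the isomorphism $\comp$:

```python
# An operator on C^5 = C^2 (+) C^3, represented as a list of rows, is
# determined by its four blocks between the summands (hypothetical example).

dims = [2, 3]
offsets = [0, 2, 5]  # cumulative offsets of the summands within C^5

# An arbitrary 5x5 matrix, viewed as an operator on the direct sum.
T = [[5 * i + j for j in range(5)] for i in range(5)]

def component(T, c, b):
    """The block of T mapping summand b into summand c (i.e. P_c T P_b)."""
    return [row[offsets[b]:offsets[b + 1]]
            for row in T[offsets[c]:offsets[c + 1]]]

# Reassembling the components recovers T: the map T -> (blocks) is injective.
reassembled = [component(T, c, 0)[r] + component(T, c, 1)[r]
               for c in range(2) for r in range(dims[c])]
assert reassembled == T
```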
Note that when $\\hs{H} = \\hs{H}^\\prime$, $T$ is self-adjoint if and only if $T_{\\gamma\\delta}^{\\alpha\\beta} = (T_{\\alpha\\beta}^{\\gamma\\delta})^*$ for all $\\alpha$, $\\beta$, $\\gamma$, $\\delta \\in \\spec{\\alg{A}}$.\n\n\\begin{proposition}[Krajewski~\\cite{Kraj98}*{\\S3.4}]\\label{linear}\n Let $\\hs{H}$ and $\\hs{H}^\\prime$ be $\\alg{A}$-bimodules with multiplicity matrices $m$ and $m^\\prime$, respectively. Then\n\\begin{align}\n \\comp(\\bdd^{\\textup{L}}_{\\alg{A}}(\\hs{H},\\hs{H}^\\prime)) &= \\bigoplus_{\\alpha,\\beta,\\delta \\in \\spec{\\alg{A}}} 1_{n_\\alpha} \\otimes M_{m_{\\alpha\\delta}^\\prime \\times m_{\\alpha\\beta}}(\\field{C}) \\otimes M_{n_\\delta \\times n_\\beta}(\\field{C}),\\\\\n \\comp(\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H},\\hs{H}^\\prime)) &= \\bigoplus_{\\alpha,\\beta,\\gamma \\in \\spec{\\alg{A}}} M_{n_\\gamma \\times n_\\alpha}(\\field{C}) \\otimes M_{m_{\\gamma\\beta}^\\prime \\times m_{\\alpha\\beta}}(\\field{C}) \\otimes 1_{n_\\beta},\\\\\n \\comp(\\bdd^{\\textup{LR}}_{\\alg{A}}(\\hs{H},\\hs{H}^\\prime)) &= \\bigoplus_{\\alpha,\\beta \\in \\spec{\\alg{A}}} 1_{n_\\alpha} \\otimes M_{m_{\\alpha\\beta}^\\prime \\times m_{\\alpha\\beta}}(\\field{C}) \\otimes 1_{n_\\beta}.\n\\end{align} \n\\end{proposition}\n\n\\begin{proof}\nObserve that $T \\in \\mathcal{L}(\\hs{H},\\hs{H}^\\prime)$ is left, right, or left and right $\\alg{A}$-linear if and only if each $T_{\\alpha\\beta}^{\\gamma\\delta}$ is left, right, or left and right $\\alg{A}$-linear. 
Thus, let $\\alpha$, $\\beta$, $\\gamma$ and $\\delta \\in \\spec{\\alg{A}}$ be fixed, and let $T \\in M_{n_\\gamma \\times n_\\alpha}(\\field{C}) \\otimes M_{m_{\\gamma\\delta}^\\prime \\times m_{\\alpha\\beta}}(\\field{C}) \\otimes M_{n_\\delta \\times n_\\beta}(\\field{C})$.\n\nFirst, write $T = \\sum_{i=1}^k A_i \\otimes B_i$ for $A_i \\in M_{n_\\gamma \\times n_\\alpha}(\\field{C})$ and for linearly independent $B_i \\in M_{m_{\\gamma\\delta}^\\prime \\times m_{\\alpha\\beta}}(\\field{C}) \\otimes M_{n_\\delta \\times n_\\beta}(\\field{C})$. Then, for $a \\in \\alg{A}$, \n\\[\n (\\lambda_\\gamma(a) \\otimes 1_{m_{\\gamma\\delta}^\\prime} \\otimes 1_{n_\\delta}) T - T (\\lambda_\\alpha(a) \\otimes 1_{m_{\\alpha\\beta}} \\otimes 1_{n_\\beta}) = \\sum_{i=1}^k (\\lambda_\\gamma(a) A_i - A_i \\lambda_\\alpha(a)) \\otimes B_i, \n\\]\nso that by linear independence of the $B_i$, $T$ is left $\\alg{A}$-linear if and only if each $A_i$ intertwines the irreducible representations $\\alpha$ and $\\gamma$, and hence, by Schur's lemma, if and only if either $\\alpha = \\gamma$ and each $A_i$ is a scalar multiple of $1_{n_\\alpha}$, or each $A_i = 0$. 
Thus,\n\\begin{multline*}\n \\bdd^{\\textup{L}}_{\\alg{A}}(\\field{C}^{n_\\alpha} \\otimes \\field{C}^{m_{\\alpha\\beta}} \\otimes \\field{C}^{n_\\beta}, \\field{C}^{n_\\gamma} \\otimes \\field{C}^{m_{\\gamma\\delta}^\\prime} \\otimes \\field{C}^{n_\\delta})\\\\ =\n \\begin{cases}\n 1_{n_\\alpha} \\otimes M_{m_{\\alpha\\delta}^\\prime \\times m_{\\alpha\\beta}}(\\field{C}) \\otimes M_{n_\\delta \\times n_\\beta}(\\field{C}) &\\text{if $\\alpha = \\gamma$,}\\\\\n \\{0\\} &\\text{otherwise.}\n \\end{cases}\n\\end{multline*}\n\nAnalogously, one can show that\n\\begin{multline*}\n \\bdd^{\\textup{R}}_{\\alg{A}}(\\field{C}^{n_\\alpha} \\otimes \\field{C}^{m_{\\alpha\\beta}} \\otimes \\field{C}^{n_\\beta}, \\field{C}^{n_\\gamma} \\otimes \\field{C}^{m_{\\gamma\\delta}^\\prime} \\otimes \\field{C}^{n_\\delta})\\\\ = \n \\begin{cases}\n M_{n_\\gamma \\times n_\\alpha}(\\field{C}) \\otimes M_{m_{\\gamma\\beta}^\\prime \\times m_{\\alpha\\beta}}(\\field{C}) \\otimes 1_{n_\\beta} &\\text{if $\\beta = \\delta$,}\\\\\n \\{0\\} &\\text{otherwise},\n \\end{cases}\n\\end{multline*}\nand then these first two results together imply that\n\\begin{multline*}\n \\bdd^{\\textup{LR}}_{\\alg{A}}(\\field{C}^{n_\\alpha} \\otimes \\field{C}^{m_{\\alpha\\beta}} \\otimes \\field{C}^{n_\\beta}, \\field{C}^{n_\\gamma} \\otimes \\field{C}^{m_{\\gamma\\delta}^\\prime} \\otimes \\field{C}^{n_\\delta})\\\\ = \n \\begin{cases}\n 1_{n_\\alpha} \\otimes M_{m_{\\alpha\\beta}^\\prime \\times m_{\\alpha\\beta}}(\\field{C}) \\otimes 1_{n_\\beta} &\\text{if $(\\alpha,\\beta)=(\\gamma,\\delta)$,}\\\\\n \\{0\\} &\\text{otherwise,}\n \\end{cases}\n\\end{multline*}\nas was claimed.\n\\end{proof}\n\nAn immediate consequence is the following description of the group $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H})$:\n\n\\begin{corollary}\\label{oddunitary}\n Let $\\hs{H}$ be an $\\alg{A}$-bimodule. 
Then\n\\[\n \\comp(\\U^{\\textup{LR}}_\\alg{A}(\\hs{H})) = \\bigoplus_{\\alpha,\\beta \\in \\spec{\\alg{A}}} 1_{n_\\alpha} \\otimes \\U(m_{\\alpha\\beta}) \\otimes 1_{n_\\beta} \\cong \\prod_{\\alpha,\\beta \\in \\spec{\\alg{A}}} \\U(m_{\\alpha\\beta}),\n\\]\nwith the convention that $\\U(0) = \\{0\\}$ is the trivial group.\n\\end{corollary}\n\n\\subsection{Even bimodules}\n\nWe now turn to the study of even bimodules; let us begin by considering the decomposition of an even bimodule into its even and odd sub-bimodules.\n\nLet $(\\hs{H},\\gamma)$ be an even $\\alg{A}$-bimodule. Define mutually orthogonal projections $P^\\textup{even}$ and $P^\\textup{odd}$ by \n\\[\n P^\\textup{even} = \\frac{1}{2}(1+\\gamma), \\quad P^\\textup{odd} = \\frac{1}{2}(1-\\gamma).\n\\]\nWe can then define sub-bimodules $\\hs{H}^\\textup{even}$ and $\\hs{H}^\\textup{odd}$ of $\\hs{H}$ by $\\hs{H}^\\textup{even} = P^\\textup{even} \\hs{H}$, $\\hs{H}^\\textup{odd} = P^\\textup{odd} \\hs{H}$; one has that $\\hs{H} = \\hs{H}^\\textup{even} \\oplus \\hs{H}^\\textup{odd}$ at the level of bimodules.\n\nOn the other hand, given $\\alg{A}$-bimodules $\\hs{H}_1$ and $\\hs{H}_2$, we can construct an even $\\alg{A}$-bimodule $(\\hs{H},\\gamma)$ such that $\\hs{H}^\\textup{even} = \\hs{H}_1$ and $\\hs{H}^\\textup{odd} = \\hs{H}_2$ by setting $\\hs{H} = \\hs{H}_1 \\oplus \\hs{H}_2$ and $\\gamma = 1_{\\hs{H}_1} \\oplus (-1_{\\hs{H}_2})$. If $\\hs{H}_1$ and $\\hs{H}_2$ are already $\\hs{H}^\\textup{even}$ and $\\hs{H}^\\textup{odd}$ for some $(\\hs{H},\\gamma)$, then this procedure precisely reconstructs $(\\hs{H},\\gamma)$. Since this procedure manifestly respects direct summation and unitary equivalence at either end, we have therefore proved the following:\n\n\\begin{proposition}\\label{evensplit}\n Let $\\alg{A}$ be a real {\\ensuremath{C^*}}-algebra. 
The map \n\\[\n C : \\Bimod^\\textup{even}(\\alg{A}) \\to \\Bimod(\\alg{A}) \\times \\Bimod(\\alg{A})\n\\]\ngiven by\n\\[\n C([\\hs{H}]) := ([\\hs{H}^\\textup{even}],[\\hs{H}^\\textup{odd}])\n\\]\nis an isomorphism of monoids.\n\\end{proposition}\n\nOne readily obtains a similar decomposition at the level of unitary groups:\n\n\\begin{corollary}\nLet $(\\hs{H},\\gamma)$ be an even $\\alg{A}$-bimodule. Then\n\\[\n \\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\gamma) = \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even}) \\oplus \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd}).\n\\]\n\\end{corollary}\n\nAnother immediate consequence is the following analogue of Proposition~\\ref{oddmult}:\n\n\\begin{proposition}\\label{evenmult}\n Let $\\alg{A}$ be a real {\\ensuremath{C^*}}-algebra. The map \n\\[\n \\bimod^\\textup{even} : M_S(\\semiring{Z}_{\\geq 0}) \\times M_S(\\semiring{Z}_{\\geq 0}) \\to \\Bimod^\\textup{even}(\\alg{A})\n\\]\ndefined by $\\bimod^\\textup{even} := C^{-1} \\circ (\\bimod \\times \\bimod)$ is an isomorphism of monoids.\n\\end{proposition}\nJust as in the odd case, we will find it convenient to denote $(\\bimod^\\textup{even})^{-1} : \\Bimod^\\textup{even}(\\alg{A}) \\to M_S(\\semiring{Z}_{\\geq 0}) \\times M_S(\\semiring{Z}_{\\geq 0})$ by $\\mult^\\textup{even}$. It then follows that $\\mult^\\textup{even} = (\\mult \\times \\mult) \\circ C$.\n\n\\begin{definition}\n Let $(\\hs{H},\\gamma)$ be an even $\\alg{A}$-bimodule. Then the \\term{multiplicity matrices} of $(\\hs{H},\\gamma)$ are the pair of matrices\n\\[\n (\\mult[\\hs{H}^\\textup{even}],\\mult[\\hs{H}^\\textup{odd}]) = \\mult^\\textup{even}[(\\hs{H},\\gamma)] \\in M_S(\\semiring{Z}_{\\geq 0}) \\times M_S(\\semiring{Z}_{\\geq 0}).\n\\]\n\\end{definition}\n\nLet us now consider orientability of even bimodules.\n\n\\begin{lemma}[Krajewski~\\cite{Kraj98}*{\\S3.4}]\\label{orientable}\n Let $(\\hs{H},\\gamma)$ be an even $\\alg{A}$-bimodule. 
Then $(\\hs{H},\\gamma)$ is orientable only if $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})=\\{0\\}$.\n\\end{lemma}\n\n\\begin{proof}\n Suppose that $(\\hs{H},\\gamma)$ is orientable, so that $\\gamma = \\sum_{i=1}^k \\lambda(a_i)\\rho(b_i)$ for some $a_1,\\dotsc,a_k$, $b_1,\\dotsc,b_k \\in \\alg{A}$. Now, let $T \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$, and define $\\tilde{T} \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H})$ by\n\\[\n \\tilde{T} =\n \\begin{pmatrix}\n 0 & T^*\\\\\n T & 0\n \\end{pmatrix}.\n\\]\nThen, on the one hand, since $\\gamma = 1_{\\hs{H}^\\textup{even}} \\oplus (-1_{\\hs{H}^\\textup{odd}})$, $\\tilde{T}$ anticommutes with $\\gamma$, and on the other, since $\\gamma = \\sum_{i=1}^k \\lambda(a_i)\\rho(b_i)$, $\\tilde{T}$ commutes with $\\gamma$, so that $\\tilde{T} = 0$. Hence, $T = 0$.\n\\end{proof}\n\nThis last result motivates the following weaker notion of orientability:\n\n\\begin{definition}\n An even $\\alg{A}$-bimodule $(\\hs{H},\\gamma)$ shall be called \\term{quasi-orientable} whenever $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) = \\{0\\}$.\n\\end{definition}\n\nThe subset of $\\Bimod^\\textup{even}(\\alg{A})$ consisting of the unitary equivalence classes of the quasi-orientable even $\\alg{A}$-bimodules will be denoted by $\\Bimod^\\textup{even}_q(\\alg{A})$. \n\nWe define the \\term{support} of a real $p \\times q$ matrix $A$ to be the set\n\\[\n \\supp(A) := \\{(i,j) \\in \\{1,\\dotsc,p\\} \\times \\{1,\\dotsc,q\\} \\mid A_{ij} \\neq 0\\}.\n\\]\nFor $A \\in M_S(\\field{R})$, we shall view $\\supp(A)$ as a subset of $\\spec{\\alg{A}}^2$ by means of the identification of $\\{1,\\dotsc,S\\}$ with $\\spec{\\alg{A}}$ as ordered sets. 
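The support of a matrix is immediate to compute; the following Python sketch (with hypothetical matrices, and 0-based indices in place of the paper's 1-based ones) also checks disjointness of two supports, the condition that will characterise the pairs of multiplicity matrices of quasi-orientable bimodules:

```python
def supp(A):
    """Set of index pairs (i, j) at which the matrix entry A[i][j] is non-zero."""
    return {(i, j)
            for i, row in enumerate(A)
            for j, entry in enumerate(row)
            if entry != 0}

# Two hypothetical multiplicity-style matrices.
m1 = [[1, 0, 0],
      [0, 2, 0],
      [0, 0, 0]]
m2 = [[0, 0, 3],
      [0, 0, 0],
      [1, 0, 0]]

assert supp(m1) == {(0, 0), (1, 1)}
assert supp(m1) & supp(m2) == set()  # the supports are disjoint
```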
We shall also find it convenient to associate to each matrix $m \\in M_S(\\ring{Z})$ a matrix $\\widehat{m} \\in M_N(\\ring{Z})$ by\n\\begin{equation}\n \\widehat{m}_{ij} := \\sum_{\\alpha \\in \\spec{M_{n_i}(\\field{K}_i)}} \\sum_{\\beta \\in \\spec{M_{n_j}(\\field{K}_j)}} m_{\\alpha\\beta}.\n\\end{equation}\nOne can check that the map $M_S(\\ring{Z}) \\to M_N(\\ring{Z})$ defined by $m \\mapsto \\widehat{m}$ is linear and respects transposes.\n\nWe can now offer the following characterisation of quasi-orientable bimodules:\n\n\\begin{proposition}[Krajewski~\\cite{Kraj98}*{\\S3.3}, Paschke--Sitarz~\\cite{PS98}*{Lemma 3}]\n Let $\\alg{A}$ be a real {\\ensuremath{C^*}}-algebra. Then\n\\begin{multline}\n \\mult^\\textup{even}(\\Bimod^\\textup{even}_q(\\alg{A})) \\\\ = \\{(m^\\textup{even},m^\\textup{odd}) \\in M_S(\\semiring{Z}_{\\geq 0})^2 \\mid \\supp(m^\\textup{even}) \\cap \\supp(m^\\textup{odd}) = \\emptyset\\}.\n\\end{multline}\n\\end{proposition}\n\n\\begin{proof}\nLet $(\\hs{H},\\gamma)$ be an even $\\alg{A}$-bimodule and let $(m^\\textup{even},m^\\textup{odd})$ be its multiplicity matrices. 
Then by Proposition~\\ref{linear},\n\\[\n \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) \\cong \\bigoplus_{\\alpha,\\beta \\in \\spec{\\alg{A}}} M_{m_{\\alpha\\beta}^\\textup{odd} \\times m_{\\alpha\\beta}^\\textup{even}}(\\field{C}),\n\\]\nwhence the result follows immediately.\n\\end{proof}\n\nWe therefore define the \\term{signed multiplicity matrix} of a quasi-orientable even $\\alg{A}$-bimodule $(\\hs{H},\\gamma)$, or rather, the unitary equivalence class thereof, to be the matrix\n\\[\n \\mult_q[(\\hs{H},\\gamma)] := \\mult[\\hs{H}^\\textup{even}] - \\mult[\\hs{H}^\\textup{odd}] \\in M_S(\\ring{Z}).\n\\]\nThe map $\\Bimod^\\textup{even}_q(\\alg{A}) \\to M_S(\\ring{Z})$ defined by \n\\[\n [(\\hs{H},\\gamma)] \\mapsto \\mult_q[(\\hs{H},\\gamma)]\n\\]\nis then bijective, and $\\mult^\\textup{even}[(\\hs{H},\\gamma)]$ is readily recovered from $\\mult_q[(\\hs{H},\\gamma)]$. Indeed, if $(\\hs{H},\\gamma)$ is a quasi-orientable even $\\alg{A}$-bimodule with signed multiplicity matrix $\\mu$, then (cf.~\\cite{PS98}*{Lemma 3},\\cite{Kraj98}*{3.3})\n\\begin{equation}\n \\gamma = \\bigoplus_{\\alpha,\\beta \\in \\spec{\\alg{A}}} \\mu_{\\alpha\\beta} 1_{\\hs{H}_{\\alpha\\beta}}.\n\\end{equation}\nThese algebraic consequences of quasi-orientability, which were derived from the stronger condition of orientability in the original papers \\cite{PS98} and \\cite{Kraj98}, are key to the formalism developed by Krajewski and Paschke--Sitarz, and hence to the later work by Iochum, Jureit, Sch{\\\"u}cker, and Stephan~\\cites{ACG1,ACG2,ACG3,Sch05}.\n\nWe can now characterise orientable bimodules amongst quasi-orientable bimodules:\n\n\\begin{proposition}[Krajewski~\\cite{Kraj98}*{\\S 3.3}]\\label{converseorientable}\n Let $(\\hs{H},\\gamma)$ be a quasi-orientable $\\alg{A}$-bimodule with signed multiplicity matrix $\\mu$. 
Then $(\\hs{H},\\gamma)$ is orientable if and only if the following conditions all hold:\n\\begin{enumerate}\n \\item For each $i \\in \\{1,\\dotsc,N\\}$ such that $\\field{K}_i = \\field{C}$ and all $\\beta \\in \\spec{\\alg{A}}$,\n\\[\n \\mu_{\\rep{n}_i \\beta}\\mu_{\\crep{n}_i \\beta} \\geq 0;\n\\]\n \\item For all $\\alpha \\in \\spec{\\alg{A}}$ and each $j \\in \\{1,\\dotsc,N\\}$ such that $\\field{K}_j = \\field{C}$, \n\\[\n \\mu_{\\alpha \\rep{n}_j}\\mu_{\\alpha \\crep{n}_j} \\geq 0;\n\\]\n \\item For all $i$, $j \\in \\{1,\\dotsc,N\\}$ such that $\\field{K}_i = \\field{K}_j = \\field{C}$,\n\\[\n \\mu_{\\rep{n}_i \\crep{n}_j}\\mu_{\\crep{n}_i \\rep{n}_j} \\geq 0.\n\\]\n\\end{enumerate}\nIn particular, if $(\\hs{H},\\gamma)$ is orientable, then\n\\begin{equation}\n \\gamma = \\sum_{i,j=1}^N \\lambda(\\sgn(\\widehat{\\mu}_{ij})e_i)\\rho(e_j).\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\n First, suppose that $(\\hs{H},\\gamma)$ is indeed orientable, so that there exist $a_1,\\dotsc,a_n$, $b_1,\\dotsc,b_n \\in \\alg{A}$ such that $\\gamma = \\sum_{l=1}^n \\lambda(a_l)\\rho(b_l)$; in particular, then, for each $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$,\n\\[\n \\sgn(\\mu_{\\alpha\\beta}) 1_{n_\\alpha} \\otimes 1_{|\\mu_{\\alpha\\beta}|} \\otimes 1_{n_\\beta} = \\gamma_{\\alpha\\beta}^{\\alpha\\beta} = \\sum_{l=1}^n \\lambda_\\alpha(a_l) \\otimes 1_{|\\mu_{\\alpha\\beta}|} \\otimes \\lambda_\\beta(b_l)^T.\n\\]\n\nNow, let $i \\in \\{1,\\dotsc,N\\}$ be such that $\\field{K}_i = \\field{C}$, and let $\\beta \\in \\spec{\\alg{A}}$, and suppose that $\\mu_{\\rep{n}_i \\beta}$ and $\\mu_{\\crep{n}_i \\beta}$ are both non-zero. 
It then follows that\n\\[\n \\sgn(\\mu_{\\rep{n}_i \\beta}) 1_{n_i} \\otimes 1_{n_\\beta} = \\sum_{l=1}^n (a_l)_i \\otimes \\lambda_\\beta(b_l)^T, \\quad\n \\sgn(\\mu_{\\crep{n}_i \\beta}) 1_{n_i} \\otimes 1_{n_\\beta} = \\sum_{l=1}^n \\overline{(a_l)_i} \\otimes \\lambda_\\beta(b_l)^T,\n\\]\nwhere $(a_l)_i$ denotes the component of $a_l$ in the direct summand $M_{n_i}(\\field{C})$ of $\\alg{A}$. If $X$ denotes complex conjugation on $\\field{C}^{n_i}$, it then follows from this that\n\\[\n \\sgn(\\mu_{\\rep{n}_i \\beta}) 1_{n_i} \\otimes 1_{n_\\beta} = (X \\otimes 1_{n_\\beta}) (\\sgn(\\mu_{\\rep{n}_i \\beta}) 1_{n_i} \\otimes 1_{n_\\beta}) (X \\otimes 1_{n_\\beta}) = \\sgn(\\mu_{\\crep{n}_i \\beta}) 1_{n_i} \\otimes 1_{n_\\beta},\n\\]\nso that $\\sgn(\\mu_{\\rep{n}_i \\beta}) = \\sgn(\\mu_{\\crep{n}_i \\beta})$, or equivalently $\\mu_{\\rep{n}_i \\beta}\\mu_{\\crep{n}_i \\beta} > 0$. One can similarly show that the other two conditions hold.\n\nNow, suppose instead that the three conditions on $\\mu$ hold. Then for all $i$, $j \\in \\{1,\\dotsc,N\\}$, all non-zero entries $\\mu_{\\alpha\\beta}$ for $\\alpha \\in \\spec{M_{n_i}(\\field{K}_i)}$, $\\beta \\in \\spec{M_{n_j}(\\field{K}_j)}$, have the same sign, so set $\\gamma_{ij}$ equal to this common sign if at least one such $\\mu_{\\alpha\\beta}$ is non-zero, and set $\\gamma_{ij} = 0$ otherwise. One can then easily check that $\\gamma = \\sum_{i,j=1}^{N} \\lambda(\\gamma_{ij}e_i)\\rho(e_j)$, so that $(\\hs{H},\\gamma)$ is indeed orientable. Moreover, using the same three conditions, one can readily check that $\\gamma_{ij} = \\sgn(\\widehat{\\mu}_{ij})$, which yields the last part of the claim.\n\\end{proof}\n\nLet us now turn to intersection forms and Poincar{\\'e} duality. 
In particular, we are now able to provide explicit expressions for intersection forms in terms of multiplicity matrices.\n\nRecall that for $\\field{K} = \\field{R}$, $\\field{C}$ or $\\field{H}$, $KO_0(M_k(\\field{K}))$ is the infinite cyclic group generated by $[p]$ for $p \\in M_k(\\field{K})$ a minimal projection, so that for $\\alg{A}$ a real {\\ensuremath{C^*}}-algebra with Wedderburn decomposition $\\oplus_{i=1}^N M_{n_i}(\\field{K}_i)$,\n\\[\n KO_0(\\alg{A}) \\cong \\prod_{i=1}^N KO_0(M_{n_i}(\\field{K}_i)) \\cong \\ring{Z}^N,\n\\]\nwhich can be viewed as the free abelian group generated by $\\{[p_i]\\}_{i=1}^N$ for $p_i$ a minimal projection in $M_{n_i}(\\field{K}_i)$. Since\n\\[\n \\tau_i := \\tr(p_i) =\n \\begin{cases}\n 2 &\\text{if $\\field{K}_i = \\field{H}$,}\\\\\n 1 &\\text{otherwise,}\n \\end{cases}\n\\]\nit follows that for $\\alpha \\in \\spec{\\alg{A}}$,\n\\begin{equation}\n \\tr(\\lambda_\\alpha(p_i)) =\n \\begin{cases}\n \\tau_i &\\text{if $\\alpha \\in \\spec{M_{n_i}(\\field{K}_i)}$,}\\\\\n 0 &\\text{otherwise.}\\\\\n \\end{cases}\n\\end{equation}\n\nNow, if $(\\hs{H},\\gamma)$ is an even $\\alg{A}$-bimodule with intersection form $\\form{\\cdot,\\cdot}$, we can define a matrix $\\cap \\in M_N(\\ring{Z})$ by\n\\begin{equation}\n \\cap_{ij} := \\form{[p_i],[p_j]}.\n\\end{equation}\nThe intersection form $\\form{\\cdot,\\cdot}$ is completely determined by the matrix $\\cap$, and in particular, $\\form{\\cdot,\\cdot}$ is non-degenerate ({\\it i.e.\\\/}\\ $(\\hs{H},\\gamma)$ satisfies Poincar{\\'e} duality) if and only if $\\cap$ is non-degenerate.\n\n\\begin{proposition}[Krajewski~\\cite{Kraj98}*{\\S 3.3}, Paschke--Sitarz~\\cite{PS98}*{\\S 2.4}]\\label{intform}\n Let $(\\hs{H},\\gamma)$ be an even $\\alg{A}$-bimodule with pair of multiplicity matrices $(m^\\textup{even},m^\\textup{odd})$. 
Then\n\\begin{equation}\n \\cap_{ij} = \\tau_i \\tau_j \\left(\\widehat{m^\\textup{even}}_{ij} - \\widehat{m^\\textup{odd}}_{ij}\\right),\n\\end{equation}\nso that $(\\hs{H},\\gamma)$ satisfies Poincar{\\'e} duality if and only if the matrix $\\widehat{m^\\textup{even}} - \\widehat{m^\\textup{odd}}$ is non-degenerate.\n\\end{proposition}\n\n\\begin{proof}\nFirst, since $\\hs{H} = \\hs{H}^\\textup{even} \\oplus \\hs{H}^\\textup{odd}$, we can write\n\\[\n \\gamma = \\bigoplus_{\\alpha,\\beta\\in\\spec{\\alg{A}}} 1_{n_\\alpha} \\otimes \\gamma_{\\alpha\\beta} \\otimes 1_{n_\\beta},\n\\]\nwhere $\\gamma_{\\alpha\\beta} = 1_{m_{\\alpha\\beta}^\\textup{even}} \\oplus (-1_{m_{\\alpha\\beta}^\\textup{odd}})$. Then,\n\\begin{align*}\n\\cap_{ij} &= \\form{[p_i],[p_j]}\\\\\n&= \\tr(\\gamma\\lambda(p_i)\\rho(p_j))\\\\ \n&= \\tr\\left(\\bigoplus_{\\alpha,\\beta\\in\\spec{\\alg{A}}} \\lambda_\\alpha(p_i) \\otimes \\gamma_{\\alpha\\beta} \\otimes \\lambda_\\beta(p_j)\\right)\\\\\n&= \\sum_{\\alpha,\\beta \\in \\spec{\\alg{A}}} \\tr(\\lambda_\\alpha(p_i))\\tr(\\lambda_\\beta(p_j))(m_{\\alpha\\beta}^\\textup{even} - m_{\\alpha\\beta}^\\textup{odd})\\\\\n&= \\tau_i \\tau_j (\\widehat{m^\\textup{even}}_{ij} - \\widehat{m^\\textup{odd}}_{ij}),\n\\end{align*}\nsince $\\tr(\\lambda_\\alpha(p_i))$ vanishes unless $\\alpha \\in \\spec{M_{n_i}(\\field{K}_i)}$. This calculation implies, in particular, that $\\cap$ is obtained from $\\widehat{m^\\textup{even}} - \\widehat{m^\\textup{odd}}$ by multiplying row $i$ and column $i$ by $\\tau_i$ for each $i$, a finite sequence of elementary row and column operations, so that $\\cap$ is indeed non-degenerate if and only if $\\widehat{m^\\textup{even}} - \\widehat{m^\\textup{odd}}$ is.\n\\end{proof}\n\n\\begin{corollary}\n Let $(\\hs{H},\\gamma)$ be a quasi-orientable $\\alg{A}$-bimodule with signed multiplicity matrix $\\mu$. 
Then $(\\hs{H},\\gamma)$ satisfies Poincar{\\'e} duality if and only if $\\widehat{\\mu}$ is non-degenerate.\n\\end{corollary}\n\nIn particular, if we restrict ourselves to complex {\\ensuremath{C^*}}-algebras and complex-linear representations, a quasi-orientable bimodule is completely characterised by the $K$-theoretic datum of its intersection form.\n\n\\subsection{Real bimodules of odd $KO$-dimension}\n\nLet us now consider real bimodules of odd $KO$-dimension. Before continuing, recall that\n\\[\n \\Bimod(\\alg{A},1) = \\Bimod(\\alg{A},7), \\quad \\Bimod(\\alg{A},3) = \\Bimod(\\alg{A},5).\n\\]\n\nFor $m \\in \\Sym_S(\\semiring{Z}_{\\geq 0})$, we define an antilinear operator $X_m$ on $\\hs{H}_m$ by defining $(X_m)_{\\alpha\\beta}^{\\gamma\\delta} : \\field{C}^{n_\\alpha} \\otimes \\field{C}^{m_{\\alpha\\beta}} \\otimes \\field{C}^{n_\\beta} \\to \\field{C}^{n_\\gamma} \\otimes \\field{C}^{m_{\\gamma\\delta}} \\otimes \\field{C}^{n_\\delta}$ by\n\\begin{equation}\n (X_m)_{\\alpha\\beta}^{\\beta\\alpha}: \\xi_1 \\otimes \\xi_2 \\otimes \\xi_3 \\mapsto \\overline{\\xi_3} \\otimes \\overline{\\xi_2} \\otimes \\overline{\\xi_1},\n\\end{equation}\nand by setting $(X_m)_{\\alpha\\beta}^{\\gamma\\delta} = 0$ whenever $(\\gamma,\\delta) \\neq (\\beta,\\alpha)$.\n\n\\subsubsection{$KO$-dimension $1$ or $7 \\bmod 8$}\n\nWe begin by determining the form of the multiplicity matrix for a real bimodule of $KO$-dimension $1$ or $7 \\bmod 8$.\n\n\\begin{lemma}[Krajewski~\\cite{Kraj98}*{\\S3.2}, Paschke--Sitarz~\\cite{PS98}*{Lemma 4}]\\label{real17a}\n Let $(\\hs{H},J)$ be a real $\\alg{A}$-bimodule of $KO$-dimension $1$ or $7 \\bmod 8$ with multiplicity matrix $m$. 
Then $m$ is symmetric, and the only non-zero components of $J$ are of the form $J_{\\alpha\\beta}^{\\beta\\alpha}$ for $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$, which are anti-unitaries $\\hs{H}_{\\alpha\\beta} \\to \\hs{H}_{\\beta\\alpha}$ satisfying the relations $J_{\\beta\\alpha}^{\\alpha\\beta} = (J_{\\alpha\\beta}^{\\beta\\alpha})^*$.\n\\end{lemma}\n\n\\begin{proof}\nLet the projections $P_{\\alpha}^L$, $P_{\\beta}^R$ and $P_{\\alpha\\beta}$ be defined as in the proof of Proposition~\\ref{oddmult}, and recall that $P_{\\alpha\\beta} = P_{\\alpha}^L P_{\\beta}^R$. By Equation~\\ref{realintertwine}, it follows that for all $\\alpha \\in \\spec{\\alg{A}}$, $J P_{\\alpha}^L = P_{\\alpha}^R J$ and $J P_{\\alpha}^R = P_{\\alpha}^L J$, and hence that for all $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$, $J P_{\\alpha\\beta} = J P_\\alpha^L P_\\beta^R = P_\\alpha^R P_\\beta^L J = P_{\\beta\\alpha} J$. Thus, the only non-zero components of $J$ are the anti-unitaries $J_{\\alpha\\beta}^{\\beta\\alpha} : \\hs{H}_{\\alpha\\beta} \\to \\hs{H}_{\\beta\\alpha}$ which satisfy $J_{\\beta\\alpha}^{\\alpha\\beta} = (J_{\\alpha\\beta}^{\\beta\\alpha})^*$; this, in turn, implies that $m$ is indeed symmetric.\n\\end{proof}\n\nNext, we show that for every $m \\in \\Sym_S(\\semiring{Z}_{\\geq 0})$, the bimodule $\\hs{H}_m$ not only admits a real structure of $KO$-dimension $1$ or $7 \\bmod 8$, but that this structure is also unique up to unitary equivalence.\n\n\\begin{lemma}[Krajewski~\\cite{Kraj98}*{\\S3.2}, Paschke--Sitarz~\\cite{PS98}*{Lemma 5}]\\label{real17b}\n Let $m \\in \\Sym_S(\\semiring{Z}_{\\geq 0})$. Then, up to unitary equivalence, $J_m := X_m$ is the unique real structure on $\\hs{H}_m$ of $KO$-dimension $1$ or $7 \\bmod 8$.\n\\end{lemma}\n\n\\begin{proof}\n First, $X_m$ is indeed by construction a real structure on $\\hs{H}_m$ of $KO$-dimension $1$ or $7 \\bmod 8$.\n \n Now, let $J$ be another real structure on $\\hs{H}_m$ of $KO$-dimension $1$ or $7 \\bmod 8$. 
Define a unitary $K$ on $\\hs{H}$ by $K = J X_m$; thus, $J = K X_m$. Since the intertwining condition of Equation~\\ref{realintertwine} applies to both $J$ and $X_m$, we have, in fact, that $K \\in \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}_m)$, and hence\n\\[\n K = \\bigoplus_{\\alpha,\\beta \\in \\spec{\\alg{A}}} 1_{n_\\alpha} \\otimes K_{\\alpha\\beta} \\otimes 1_{n_\\beta},\n\\]\nfor $K_{\\alpha\\beta} \\in \\U(m_{\\alpha\\beta})$. In particular, since $K^* = X_m J = X_m K X_m$, we have that $K_{\\beta\\alpha} = K_{\\alpha\\beta}^T$.\n\nLet $(\\alpha,\\beta) \\in \\supp(m)$, and suppose that $\\alpha < \\beta$. Let $K_{\\alpha\\beta} = V_{\\alpha\\beta} \\tilde{K}_{\\alpha\\beta} V_{\\alpha\\beta}^*$ be a unitary diagonalisation of $K_{\\alpha\\beta}$, and let $L_{\\alpha\\beta}$ be a diagonal square root of $\\tilde{K}_{\\alpha\\beta}$. Then $K_{\\alpha\\beta} = V_{\\alpha\\beta} L_{\\alpha\\beta} L_{\\alpha\\beta} V_{\\alpha\\beta}^* = (V_{\\alpha\\beta}L_{\\alpha\\beta})(\\overline{V_{\\alpha\\beta}} L_{\\alpha\\beta})^T$, and hence $K_{\\beta\\alpha} = (\\overline{V_{\\alpha\\beta}} L_{\\alpha\\beta})(V_{\\alpha\\beta}L_{\\alpha\\beta})^T$. If, instead, $\\alpha = \\beta$, then $K_{\\alpha\\alpha}$ is unitary and complex symmetric, so that there exists a unitary $W_{\\alpha\\alpha}$ such that $K_{\\alpha\\alpha} = W_{\\alpha\\alpha}W_{\\alpha\\alpha}^T$. 
We can now define a unitary $U \\in \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}_m)$ by\n\\[\n U = \\bigoplus_{\\alpha,\\beta \\in \\spec{\\alg{A}}} 1_{n_\\alpha} \\otimes U_{\\alpha\\beta} \\otimes 1_{n_\\beta},\n\\]\nwhere $U_{\\alpha\\beta} = 0$ if $m_{\\alpha\\beta} = 0$, and for $(\\alpha,\\beta) \\in \\supp(m)$,\n\\[\n U_{\\alpha\\beta} = \n\\begin{cases}\n V_{\\alpha\\beta}L_{\\alpha\\beta}, &\\text{if $\\alpha < \\beta$,}\\\\\n \\overline{V_{\\beta\\alpha}}L_{\\beta\\alpha}, &\\text{if $\\alpha > \\beta$,}\\\\\n W_{\\alpha\\alpha}, &\\text{if $\\alpha = \\beta$.}\n\\end{cases}\n\\]\nThen, by construction, $K = U X_m U^* X_m$, and hence, $J = U X_m U^*$, so that $U$ is the required unitary equivalence between $(\\hs{H}_m,X_m)$ and $(\\hs{H}_m,J)$.\n\\end{proof}\n\nWe can now give our characterisation of real bimodules of $KO$-dimension $1$ or $7 \\bmod 8$:\n\n\\begin{proposition}[Krajewski~\\cite{Kraj98}*{\\S3.2}]\\label{real17mult}\nLet $n = 1$ or $7 \\bmod 8$. Then the map $\\iota_n : \\Bimod(\\alg{A},n) \\to \\Bimod(\\alg{A})$ defined by $\\iota_n : [(\\hs{H},J)] \\mapsto [\\hs{H}]$ is injective, and\n\\begin{equation}\n (\\mult \\circ \\iota_n)(\\Bimod(\\alg{A},n)) = \\Sym_S(\\semiring{Z}_{\\geq 0}).\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nFirst, since a unitary equivalence of real $\\alg{A}$-bimodules of $KO$-di\\-men\\-sion $n \\bmod 8$ is, in particular, a unitary equivalence of odd $\\alg{A}$-bimodules, the map $\\iota_n$ is well defined.\n\nNext, let $(\\hs{H},J)$ and $(\\hs{H}^\\prime,J^\\prime)$ be real $\\alg{A}$-bimodules of $KO$-dimension $n \\bmod 8$, and suppose that $\\hs{H}$ and $\\hs{H}^\\prime$ are unitarily equivalent as bimodules; let $U \\in \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\prime,\\hs{H})$. Now, if $m$ is the multiplicity matrix of $\\hs{H}$, then $\\hs{H}$ and $\\hs{H}_m$ are unitarily equivalent, so let $V \\in \\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\hs{H}_m)$. 
Then $V J V^*$ and $V U J^\\prime U^* V^*$ are both real structures of $KO$-dimension $n \\bmod 8$, so by Lemma~\\ref{real17b}, they are both unitarily equivalent to $J_m$. This implies that $J$ and $U J^\\prime U^*$ are unitarily equivalent as real structures on $\\hs{H}$, and hence that $(\\hs{H},J)$ and $(\\hs{H}^\\prime,J^\\prime)$ are unitarily equivalent. Thus, $\\iota_n$ is injective.\n\nFinally, Lemma~\\ref{real17a} implies that $(\\mult \\circ \\iota_n)(\\Bimod(\\alg{A},n)) \\subseteq \\Sym_S(\\semiring{Z}_{\\geq 0})$, while Lemma~\\ref{real17b} implies the reverse inclusion.\n\\end{proof}\n\nThus, without any loss of generality, a real bimodule $\\hs{H}$ of $KO$-dimension $1$ or $7 \\bmod 8$ with multiplicity matrix $m$ can be assumed to be simply $(\\hs{H}_m,J_m)$.\n\nThe following characterisation of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},J)$ now follows by direct calculation:\n\n\\begin{proposition}\\label{real17unitary}\nLet $(\\hs{H},J)$ be a real $\\alg{A}$-bimodule of $KO$-dimension $1$ or $7 \\bmod 8$ with multiplicity matrix $m$. Then\n\\begin{equation}\\begin{split}\n \\comp(\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},J)) &= \\{(1_{n_\\alpha} \\otimes U_{\\alpha\\beta} \\otimes 1_{n_\\beta})_{\\alpha,\\beta \\in \\spec{\\alg{A}}} \\in \\comp(\\U^{\\textup{LR}}_\\alg{A}(\\hs{H})) \\mid U_{\\beta\\alpha} = \\overline{U_{\\alpha\\beta}}\\}\\\\ &\\cong \\prod_{\\alpha \\in \\spec{\\alg{A}}} \\biggl( \\Orth(m_{\\alpha\\alpha}) \\times \\prod_{\\substack{\\beta \\in \\spec{\\alg{A}}\\\\ \\beta > \\alpha}} \\U(m_{\\alpha\\beta}) \\biggr).\n\\end{split}\\end{equation}\n\\end{proposition}\n\n\\subsubsection{$KO$-dimension $3$ or $5 \\bmod 8$}\n\nLet us now turn to real bimodules of $KO$-dimension $3$ or $5 \\bmod 8$. We begin with the relevant analogue of Lemma~\\ref{real17a}.\n\n\\begin{lemma}\\label{real35a}\nLet $(\\hs{H},J)$ be a real $\\alg{A}$-bimodule of $KO$-dimension $3$ or $5 \\bmod 8$ with multiplicity matrix $m$. 
Then $m$ is symmetric with even diagonal entries, and the only non-zero components of $J$ are of the form $J_{\\alpha\\beta}^{\\beta\\alpha}$ for $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$, which are anti-unitaries $\\hs{H}_{\\alpha\\beta} \\to \\hs{H}_{\\beta\\alpha}$ satisfying the relations $J_{\\beta\\alpha}^{\\alpha\\beta} = -(J_{\\alpha\\beta}^{\\beta\\alpha})^*$.\n\\end{lemma}\n\n\\begin{proof}\nThe proof follows just as for Lemma~\\ref{real17a}, except that the equation $J^2 = -1$ forces the relations $J_{\\beta\\alpha}^{\\alpha\\beta} = -(J_{\\alpha\\beta}^{\\beta\\alpha})^*$, which imply, in particular, that for each $\\alpha \\in \\spec{\\alg{A}}$, $(J_{\\alpha\\alpha}^{\\alpha\\alpha})^2 = -1$, so that $m_{\\alpha\\alpha}$ must be even.\n\\end{proof}\n\nLet us denote by $\\Sym_S^0(\\semiring{Z}_{\\geq 0})$ the set of all matrices in $\\Sym_S(\\semiring{Z}_{\\geq 0})$ with even diagonal entries. For $n = 2k$, let\n\\[\n \\Omega_n =\n \\begin{pmatrix}\n 0 & -1_k\\\\\n 1_k & 0\n \\end{pmatrix}.\n\\]\n\n\\begin{lemma}~\\label{real35b}\nLet $m \\in \\Sym_S^0(\\semiring{Z}_{\\geq 0})$. 
Define an antiunitary $J_m$ on $\\hs{H}_m$ by\n\\[\n (J_m)_{\\alpha\\beta}^{\\gamma\\delta} =\n \\begin{cases}\n (X_m)_{\\alpha\\beta}^{\\beta\\alpha} &\\text{if $(\\gamma,\\delta)=(\\beta,\\alpha)$ and $\\alpha < \\beta$,}\\\\\n -(X_m)_{\\alpha\\beta}^{\\beta\\alpha} &\\text{if $(\\gamma,\\delta)=(\\beta,\\alpha)$ and $\\alpha > \\beta$,}\\\\\n \\Omega_{m_{\\alpha\\alpha}}(X_m)_{\\alpha\\alpha}^{\\alpha\\alpha} &\\text{if $\\alpha = \\beta = \\gamma = \\delta$,}\\\\\n 0 &\\text{otherwise.}\n \\end{cases}\n\\]\nThen, up to unitary equivalence, $J_m$ is the unique real structure on $\\hs{H}_m$ of $KO$-dimension $3$ or $5 \\bmod 8$.\n\\end{lemma}\n\n\\begin{proof}\nThe proof follows that of Lemma~\\ref{real17b}, except we now have that\n$K_{\\alpha\\alpha}^T = \\Omega_{m_{\\alpha\\alpha}} K_{\\alpha\\alpha} \\Omega_{m_{\\alpha\\alpha}}^T$ instead of $K_{\\alpha\\alpha}^T = K_{\\alpha\\alpha}$; each $K_{\\alpha\\alpha} \\Omega_{m_{\\alpha\\alpha}}$ is therefore unitary and complex skew-symmetric, so that we choose $W_{\\alpha\\alpha}$ unitary such that\n\\[\n K_{\\alpha\\alpha}\\Omega_{m_{\\alpha\\alpha}} = W_{\\alpha\\alpha}\\Omega_{m_{\\alpha\\alpha}}W_{\\alpha\\alpha}^T,\n\\]\nor equivalently, $K_{\\alpha\\alpha} = W_{\\alpha\\alpha}\\Omega_{m_{\\alpha\\alpha}}W_{\\alpha\\alpha}^T\\Omega_{m_{\\alpha\\alpha}}^T$. One can then construct the unitary equivalence $U$ between $(\\hs{H}_m,J)$ and $(\\hs{H}_m,J_m)$ as before.\n\\end{proof}\n\nMuch as in the analogous case of $KO$-dimension $1$ or $7 \\bmod 8$, Lemmas~\\ref{real35a} and~\\ref{real35b} together imply the following characterisation of real bimodules of $KO$-dimension $3$ or $5 \\bmod 8$:\n\n\\begin{proposition}\\label{real35mult}\nLet $n=3$ or $5 \\bmod 8$. 
Then the map $\\iota_n : \\Bimod(\\alg{A},n) \\to \\Bimod(\\alg{A})$ defined by $\\iota_n : [(\\hs{H},J)] \\mapsto [\\hs{H}]$ is injective, and\n\\begin{equation}\n (\\mult \\circ \\iota_n)(\\Bimod(\\alg{A},n)) = \\Sym_S^0(\\semiring{Z}_{\\geq 0}).\n\\end{equation}\n\\end{proposition}\n\nFinally, these results immediately imply the following description of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},J)$:\n\n\\begin{proposition}\\label{real35unitary}\nLet $(\\hs{H},J)$ be a real $\\alg{A}$-bimodule of $KO$-dimension $3$ or $5 \\bmod 8$ with multiplicity matrix $m$. Then\n\\begin{equation}\\begin{split}\n \\comp(\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},J)) &= \\left\\{(1_{n_\\alpha} \\otimes U_{\\alpha\\beta} \\otimes 1_{n_\\beta})_{\\alpha,\\beta \\in \\spec{\\alg{A}}} \\in \\comp(\\U^{\\textup{LR}}_\\alg{A}(\\hs{H})) \\mid \\substack{U_{\\alpha\\alpha} \\in \\Sp(m_{\\alpha\\alpha}), \\\\ U_{\\beta\\alpha} = \\overline{U_{\\alpha\\beta}}, \\: \\alpha \\neq \\beta}\\right\\}\\\\\n &\\cong \\prod_{\\alpha \\in \\spec{\\alg{A}}} \\biggl(\\Sp(m_{\\alpha\\alpha}) \\times \\prod_{\\substack{\\beta \\in \\spec{\\alg{A}}\\\\ \\beta > \\alpha}} \\U(m_{\\alpha\\beta})\\biggr).\n\\end{split}\\end{equation}\n\\end{proposition}\n\n\\subsection{Real bimodules of even $KO$-dimension}\n\nWe now come to the case of even $KO$-dimension. Before continuing, note that for $(\\hs{H},\\gamma,J)$ a real bimodule of even $KO$-dimension,\n\\[\n \\forall p,q \\in KO_0(\\alg{A}), \\: \\form{q,p} = \\varepsilon^{\\prime\\prime} \\form{p,q},\n\\]\nas a direct result of the relation $J \\gamma = \\varepsilon^{\\prime\\prime} \\gamma J$; this is then equivalent to the condition\n\\begin{equation}\n \\cap = \\varepsilon^{\\prime\\prime} \\cap^T,\n\\end{equation}\nwhere $\\cap$ is the matrix of the intersection form. Thus, for $KO$-dimension $0$ or $4 \\bmod 8$, the intersection form is symmetric, whilst for $KO$-dimension $2$ or $6 \\bmod 8$, it is anti-symmetric. 
It then follows, in particular, that a real $\\alg{A}$-bimodule of $KO$-dimension $2$ or $6 \\bmod 8$ satisfies Poincar{\\'e} duality only if $\\alg{A}$ has an even number of direct summands in its Wedderburn decomposition, as an anti-symmetric $k \\times k$ matrix for $k$ odd is necessarily degenerate.\n\n\\subsubsection{$KO$-dimension $0$ or $4 \\bmod 8$}\n\nWe begin with the case where $\\varepsilon^{\\prime\\prime} = 1$ and hence $[\\gamma,J]=0$, {\\it i.e.\\\/}\\ of $KO$-dimension $0$ or $4 \\bmod 8$.\n\nLet $(\\hs{H},\\gamma,J)$ be a real $\\alg{A}$-bimodule of $KO$-dimension $n \\bmod 8$, for $n = 0$ or $4$; let the mutually orthogonal projections $P^\\textup{even}$ and $P^\\textup{odd}$ on $\\hs{H}$ be defined as before. Then, since $[J,\\gamma]=0$, we have that $J = J^\\textup{even} \\oplus J^\\textup{odd}$, where $J^\\textup{even} = P^\\textup{even} J P^\\textup{even}$ and $J^\\textup{odd} = P^\\textup{odd} J P^\\textup{odd}$. One can then check that $(\\hs{H}^\\textup{even},J^\\textup{even})$ and $(\\hs{H}^\\textup{odd},J^\\textup{odd})$ are real $\\alg{A}$-bimodules of $KO$-dimension $1$ or $7 \\bmod 8$ if $n = 0$, and $3$ or $5 \\bmod 8$ if $n=4$. On the other hand, given $(\\hs{H}^\\textup{even},J^\\textup{even})$ and $(\\hs{H}^\\textup{odd},J^\\textup{odd})$, one can immediately reconstruct $(\\hs{H},\\gamma,J)$ by setting $\\gamma = 1_{\\hs{H}^\\textup{even}} \\oplus (-1_{\\hs{H}^\\textup{odd}})$ and $J = J^\\textup{even} \\oplus J^\\textup{odd}$. Thus we have proved the following analogue of Proposition~\\ref{evensplit}:\n\n\\begin{proposition}~\\label{real04split}\n Let $\\alg{A}$ be a real {\\ensuremath{C^*}}-algebra. Let $k_0$ denote $1$ or $7 \\bmod 8$, and let $k_4$ denote $3$ or $5 \\bmod 8$. 
Then for $n = 0, 4 \\bmod 8$, the map\n \\[\n C_n : \\Bimod(\\alg{A},n) \\to \\Bimod(\\alg{A},k_n) \\times \\Bimod(\\alg{A},k_n)\n \\]\n given by $C_n([(\\hs{H},\\gamma,J)]) := ([(\\hs{H}^\\textup{even},J^\\textup{even})],[(\\hs{H}^\\textup{odd},J^\\textup{odd})])$ is an isomorphism of monoids.\n\\end{proposition}\n\nOne can then apply this decomposition to the group $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\gamma,J)$ to find:\n\n\\begin{corollary}\\label{real04unitary}\n Let $(\\hs{H},\\gamma,J)$ be a real $\\alg{A}$-bimodule of $KO$-dimension $0$ or $4 \\bmod 8$. Then\n \\begin{equation}\n \\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\gamma,J) = \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},J^\\textup{even}) \\oplus \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd},J^\\textup{odd}).\n \\end{equation}\n\\end{corollary}\n\nCombining Proposition~\\ref{real04split} with our earlier characterisations of real bimodules of odd $KO$-dimension, we immediately obtain the following:\n\n\\begin{proposition}\\label{real04mult}\n Let $n=0$ or $4 \\bmod 8$. Then the map $\\iota_n : \\Bimod(\\alg{A},n) \\to \\Bimod^\\textup{even}(\\alg{A})$ defined by\n $[(\\hs{H},\\gamma,J)] \\mapsto ([(\\hs{H},\\gamma)])$ is injective, and\n \\[\n (\\mult^\\textup{even} \\circ \\iota_n)(\\Bimod(\\alg{A},n)) =\n \\begin{cases}\n \\Sym_S(\\semiring{Z}_{\\geq 0}) \\times \\Sym_S(\\semiring{Z}_{\\geq 0}) &\\text{if $n = 0 \\bmod 8$,}\\\\\n \\Sym_S^0(\\semiring{Z}_{\\geq 0}) \\times \\Sym_S^0(\\semiring{Z}_{\\geq 0}) &\\text{if $n = 4 \\bmod 8$.}\n \\end{cases}\n \\]\n\\end{proposition}\n\nIn particular,\n\\[\n \\Bimod_q(\\alg{A},n) := \\iota_n^{-1}(\\Bimod^\\textup{even}_q(\\alg{A}))\n\\]\nis thus the set of all equivalence classes of quasi-orientable real $\\alg{A}$-bimodules of $KO$-dimension $n \\bmod 8$; the last Proposition then implies the following:\n\n\\begin{corollary}\n Let $n=0$ or $4 \\bmod 8$. 
Then\n\\begin{equation}\n (\\mult_q \\circ \\iota_n)(\\Bimod_q(\\alg{A},n)) = \\Sym_S(\\ring{Z}).\n\\end{equation}\n\\end{corollary}\n\n\\subsubsection{$KO$-dimension $2$ or $6 \\bmod 8$}\n\nFinally, let us consider the remaining case where $\\varepsilon^{\\prime\\prime} = -1$ and hence $\\{\\gamma,J\\}=0$, {\\it i.e.\\\/}\\ of $KO$-dimensions $2$ and $6 \\bmod 8$.\n\nLet $(\\hs{H},\\gamma,J)$ be a real $\\alg{A}$-bimodule of $KO$-dimension $n \\bmod 8$ for $n = 2$ or $6$. Since $\\{J,\\gamma\\} = 0$, we have that\n\\[\n J =\n \\begin{pmatrix}\n 0 & \\varepsilon \\tilde{J}^*\\\\\n \\tilde{J} & 0\n \\end{pmatrix},\n\\]\nwhere $\\tilde{J} := P^\\textup{odd} J P^\\textup{even}$ is an antiunitary $\\hs{H}^\\textup{even} \\to \\hs{H}^\\textup{odd}$, so that for $a \\in \\alg{A}$,\n\\[\n \\lambda^\\textup{odd}(a) = \\tilde{J} \\rho^\\textup{even}(a^*) \\tilde{J}^*, \\quad \\rho^\\textup{odd}(a) = \\tilde{J} \\lambda^\\textup{even}(a^*) \\tilde{J}^*.\n\\]\nIt then follows, in particular, that $\\mult[\\hs{H}^\\textup{odd}] = \\mult[\\hs{H}^\\textup{even}]^T$. \n\nNow, let $J^\\prime$ be another real structure on $(\\hs{H},\\gamma)$ of $KO$-dimension $n \\bmod 8$, and let $\\tilde{J^\\prime} = P^\\textup{odd} J^\\prime P^\\textup{even}$. Define $K \\in \\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\gamma)$ by $K = 1_{\\hs{H}^\\textup{even}} \\oplus (\\tilde{J^\\prime} \\tilde{J}^*)$. Then, by construction, $J^\\prime = KJK^*$, {\\it i.e.\\\/}\\ $K$ is a unitary equivalence of real structures between $J$ and $J^\\prime$. Thus, real structures of $KO$-dimension $2$ or $6 \\bmod 8$ are unique up to unitary equivalence. As a result, we have proved the following analogue of Proposition~\\ref{s0reduction}:\n\n\\begin{proposition}~\\label{real26reduction}\nLet $\\alg{A}$ be a real {\\ensuremath{C^*}}-algebra, and let $n = 2$ or $6 \\bmod 8$. 
Then the map\n\\[\n C_n : \\Bimod(\\alg{A},n) \\to \\Bimod(\\alg{A})\n\\]\ngiven by $C_n([(\\hs{H},\\gamma,J)]) := ([\\hs{H}^\\textup{even}])$ is an isomorphism of monoids.\n\\end{proposition}\n\nAgain, as an immediate consequence, we obtain the following characterisation of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\gamma,J)$:\n\n\\begin{corollary}\\label{real26unitary}\nLet $(\\hs{H},\\gamma,J)$ be a real $\\alg{A}$-bimodule of $KO$-dimension $2$ or $6 \\bmod 8$. Then\n\\begin{equation}\\begin{split}\n \\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\gamma,J) &= \\{ U^\\textup{even} \\oplus U^\\textup{odd} \\in \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even}) \\oplus \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd}) \\mid U^\\textup{odd} = \\tilde{J}U^\\textup{even}\\tilde{J}^*\\}\\\\ &\\cong \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even}).\n\\end{split}\\end{equation}\n\\end{corollary}\n\nFinally, one can combine Proposition~\\ref{real26reduction} with our observation concerning the uniqueness up to unitary equivalence of real structures of $KO$-dimension $2$ or $6 \\bmod 8$ and earlier results on multiplicity matrices to obtain the following characterisation:\n\n\\begin{proposition}\n Let $n=2$ or $6 \\bmod 8$. 
Then the map $\\iota_n : \\Bimod(\\alg{A},n) \\to \\Bimod^\\textup{even}(\\alg{A})$ defined by $[(\\hs{H},\\gamma,J)] \\mapsto ([\\hs{H},\\gamma])$ is injective, and\n\\begin{equation}\\begin{split}\n (\\mult^\\textup{even} \\circ \\iota_n)(\\Bimod(\\alg{A},n)) &= \\{(m^\\textup{even},m^\\textup{odd}) \\in M_S(\\semiring{Z}_{\\geq 0})^2 \\mid m^\\textup{odd} = (m^\\textup{even})^T\\}\\\\ &\\cong M_S(\\semiring{Z}_{\\geq 0}).\n\\end{split}\\end{equation}\n\\end{proposition}\n\nOnce more, it follows that\n\\[\n \\Bimod_q(\\alg{A},n) := \\iota_n^{-1}(\\Bimod^\\textup{even}_q(\\alg{A}))\n\\]\nis the set of all equivalence classes of quasi-orientable real $\\alg{A}$-bimodules of $KO$-dimension $n \\bmod 8$, for which we can again obtain a characterisation in terms of signed multiplicity matrices:\n\n\\begin{corollary}\n Let $n=2$ or $6 \\bmod 8$. Then\n\\begin{equation}\n (\\mult_q \\circ \\iota_n)(\\Bimod_q(\\alg{A},n)) = \\{m \\in M_S(\\ring{Z}) \\mid m^T = -m\\}.\n\\end{equation}\n\\end{corollary}\n\n\\subsubsection{$S^0$-real bimodules of even $KO$-dimension}\n\nLet us now characterise quasi-orientability, orientability and Poincar{\\'e} duality for an even $KO$-dimensional $S^0$-real $\\alg{A}$-bimodule $(\\hs{H},\\gamma,J,\\epsilon)$ by means of suitable conditions on $(\\hs{H}_i,\\gamma_i)$ expressible entirely in terms of the pair of multiplicity matrices of $(\\hs{H}_i,\\gamma_i)$.\n\nWe begin by considering quasi-orientability:\n\n\\begin{proposition}\\label{s0quasiorient}\n Let $(\\hs{H},\\gamma,J,\\epsilon)$ be an $S^0$-real $\\alg{A}$-bimodule of even $KO$-dimension $n \\bmod 8$. 
Then $(\\hs{H},\\gamma)$ is quasi-orientable if and only if $(\\hs{H}_i,\\gamma_i)$ is quasi-orientable and\n\\[\n \\begin{cases}\n \\supp(m^\\textup{even}_i) \\cap \\supp((m^\\textup{odd}_i)^T) = \\emptyset &\\text{if $n=0$, $4$,}\\\\\n \\supp(m^\\textup{even}_i) \\cap \\supp((m^\\textup{even}_i)^T) = \\supp(m^\\textup{odd}_i) \\cap \\supp((m^\\textup{odd}_i)^T) = \\emptyset &\\text{if $n=2$, $6$,}\n \\end{cases}\n\\]\nfor $(m^\\textup{even}_i,m^\\textup{odd}_i)$ the multiplicity matrices of $(\\hs{H}_i,\\gamma_i)$, in which case, if $\\mu$ and $\\mu_i = m^\\textup{even}_i - m^\\textup{odd}_i$ are the signed multiplicity matrices of $(\\hs{H},\\gamma)$ and $(\\hs{H}_i,\\gamma_i)$, respectively, then\n\\begin{equation}\n \\mu = \\mu_i + \\varepsilon^{\\prime\\prime} \\mu_i^T.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\n First, let $(m^\\textup{even},m^\\textup{odd})$ and $(m^\\textup{even}_i,m^\\textup{odd}_i)$ denote the pairs of multiplicity matrices of $(\\hs{H},\\gamma)$ and $(\\hs{H}_i,\\gamma_i)$, respectively. 
It then follows that \n\\begin{align*}\n m^\\textup{even} &=\n \\begin{cases}\n m^\\textup{even}_i + (m^\\textup{even}_i)^T &\\text{if $n = 0$, $4$,}\\\\\n m^\\textup{even}_i + (m^\\textup{odd}_i)^T &\\text{if $n = 2$, $6$;}\n \\end{cases}\\\\\n m^\\textup{odd} &=\n \\begin{cases}\n m^\\textup{odd}_i + (m^\\textup{odd}_i)^T &\\text{if $n = 0$, $4$,}\\\\\n m^\\textup{odd}_i + (m^\\textup{even}_i)^T &\\text{if $n=2$, $6$.}\n \\end{cases}\n\\end{align*}\n\nThus $\\supp(m^\\textup{even}) = \\supp(m^\\textup{even}_i) \\cup S^\\textup{even}, \\quad \\supp(m^\\textup{odd}) = \\supp(m^\\textup{odd}_i) \\cup S^\\textup{odd}$, where\n\\[\n S^\\textup{even} =\n\\begin{cases}\n \\supp((m^\\textup{even}_i)^T) &\\text{if $n = 0$, $4$,}\\\\\n \\supp((m^\\textup{odd}_i)^T) &\\text{if $n = 2$, $6$;}\n\\end{cases} \\quad\n S^\\textup{odd} =\n\\begin{cases}\n \\supp((m^\\textup{odd}_i)^T) &\\text{if $n = 0$, $4$,}\\\\\n \\supp((m^\\textup{even}_i)^T) &\\text{if $n = 2$, $6$.}\n\\end{cases}\n\\]\n\nThen,\n\\begin{multline*}\n \\supp(m^\\textup{even}) \\cap \\supp(m^\\textup{odd}) = (\\supp(m^\\textup{even}_i) \\cap \\supp(m^\\textup{odd}_i)) \\cup (S^\\textup{even} \\cap \\supp(m^\\textup{odd}_i)) \\\\ \\cup (\\supp(m^\\textup{even}_i) \\cap S^\\textup{odd}) \\cup (S^\\textup{even} \\cap S^\\textup{odd}),\n\\end{multline*}\nso that $(\\hs{H},\\gamma)$ is quasi-orientable if and only if $(\\hs{H}_i,\\gamma_i)$ is quasi-orientable and\n\\[\n (S^\\textup{even} \\cap \\supp(m^\\textup{odd}_i)) \\cup (\\supp(m^\\textup{even}_i) \\cap S^\\textup{odd}) = \\emptyset,\n\\]\nas required. 
\n\nFinally, if $\\mu = m^\\textup{even} - m^\\textup{odd}$ and $\\mu_i = m^\\textup{even}_i - m^\\textup{odd}_i$ are the signed multiplicity matrices of $(\\hs{H},\\gamma)$ and $(\\hs{H}_i,\\gamma_i)$, respectively, then the relations amongst $m^\\textup{even}$, $m^\\textup{odd}$, $m^\\textup{even}_i$, and $m^\\textup{odd}_i$ given at the beginning immediately yield the equation $\\mu = \\mu_i + \\varepsilon^{\\prime\\prime} \\mu_i^T$.\n\\end{proof}\n\nLet us now turn to orientability:\n\n\\begin{proposition}\n Let $(\\hs{H},\\gamma,J,\\epsilon)$ be a quasi-orientable $S^0$-real $\\alg{A}$-bimodule of even $KO$-dimension $n \\bmod 8$. Then $(\\hs{H},\\gamma)$ is orientable if and only if $(\\hs{H}_i,\\gamma_i)$ is orientable and, if $n = 2$ or $6 \\bmod 8$, for all $j \\in \\{1,\\dotsc,N\\}$ such that $\\field{K}_j = \\field{C}$, \n\\begin{equation}\n (\\mu_i)_{\\rep{n}_j \\crep{n}_j} = (\\mu_i)_{\\crep{n}_j \\rep{n}_j},\n\\end{equation}\nwhere $\\mu_i$ is the signed multiplicity matrix of $(\\hs{H}_i,\\gamma_i)$.\n\\end{proposition}\n\n\\begin{proof}\n Let $\\mu$ be the signed multiplicity matrix of $(\\hs{H},\\gamma)$. 
Propositions~\\ref{s0orient} and~\\ref{converseorientable} together imply that $(\\hs{H},\\gamma,J,\\epsilon)$ is orientable if and only if\n\\[\n \\gamma_i = \\sum_{k,l=1}^N \\lambda_i(\\sgn(\\widehat{\\mu}_{kl})e_k)\\rho_i(e_l) = \\varepsilon^{\\prime\\prime} \\sum_{k,l=1}^N \\lambda_i(e_l)\\rho_i(\\sgn(\\widehat{\\mu}_{kl})e_k),\n\\]\nand by considering individual components $(\\gamma_i)_{\\alpha\\beta}$, one can easily check that this in turn holds if and only if $(\\hs{H}_i,\\gamma_i)$ is orientable and for all $k \\in \\{1,\\dotsc,N\\}$,\n\\[\n \\sgn(\\widehat{\\mu}_{kk}) = \\varepsilon^{\\prime\\prime} \\sgn(\\widehat{\\mu}_{kk}).\n\\]\n\nThis last condition is trivial when $\\varepsilon^{\\prime\\prime} = 1$, {\\it i.e.\\\/}\\ when $n = 0$ or $4 \\bmod 8$, so let us suppose instead that $n = 2$ or $6 \\bmod 8$, so that $\\varepsilon^{\\prime\\prime} = -1$. If $(\\hs{H},\\gamma)$ is orientable, then, by the above discussion, $(\\hs{H}_i,\\gamma_i)$ is orientable and the diagonal entries of $\\widehat{\\mu}$ vanish, which in turn implies by Proposition~\\ref{converseorientable} that for each $l \\in \\{1,\\dotsc,N\\}$ and all $\\alpha$, $\\beta \\in \\spec{M_{k_l}(\\field{K}_l)}$, $\\mu_{\\alpha\\beta} = 0$. By antisymmetry of $\\mu$, this is equivalent to having, for all $l \\in \\{1,\\dotsc,N\\}$ such that $\\field{K}_l = \\field{C}$, $\\mu_{\\rep{n}_l \\crep{n}_l} = 0$, or equivalently,\n\\[\n (\\mu_i)_{\\rep{n}_l \\crep{n}_l} = (\\mu_i)_{\\crep{n}_l \\rep{n}_l},\n\\]\nwhere $\\mu_i$ is the signed multiplicity matrix of $(\\hs{H}_i,\\gamma_i)$. 
On the other hand, if $(\\hs{H}_i,\\gamma_i)$ is orientable and this condition on $\\mu_i$ holds, then $\\mu$ certainly satisfies the above condition, so that $(\\hs{H},\\gamma)$ is indeed orientable.\n\\end{proof}\n\nFinally, let us consider Poincar{\\'e} duality.\n\n\\begin{proposition}\n Let $(\\hs{H},\\gamma,J,\\epsilon)$ be an $S^0$-real $\\alg{A}$-bimodule of even $KO$-dimension $n \\bmod 8$, let $(m^\\textup{even}_i,m^\\textup{odd}_i)$ denote the multiplicity matrices of $(\\hs{H}_i,\\gamma_i)$, and let $\\cap$ denote the matrix of the intersection form of $(\\hs{H},\\gamma)$. Finally, let $\\mu_i = m^\\textup{even}_i - m^\\textup{odd}_i$. Then\n\\begin{equation}\n \\cap_{kl} = \\tau_k \\tau_l (\\widehat{\\mu_i} + \\varepsilon^{\\prime\\prime} \\widehat{\\mu_i}^T)_{kl},\n\\end{equation}\nso that $(\\hs{H},\\gamma)$ satisfies Poincar{\\'e} duality if and only if $\\widehat{\\mu_i} + \\varepsilon^{\\prime\\prime} \\widehat{\\mu_i}^T$ is non-degenerate.\n\\end{proposition}\n\n\\begin{proof}\n By Proposition~\\ref{s0poincare}, $\\cap = \\cap_i + \\varepsilon^{\\prime\\prime} \\cap_i^T$ for $\\cap_i$ the matrix of the intersection form of $(\\hs{H}_i,\\gamma_i)$, which, together with Proposition~\\ref{intform}, yields the desired result.\n\\end{proof}\n\n\\subsection{Bimodules in the Chamseddine--Connes--Marcolli model}\n\nTo illustrate the structure theory outlined thus far, let us apply it to the construction of the finite spectral triple of the NCG Standard Model given by Chamseddine, Connes and Marcolli~\\cite{CCM07}*{\\S\\S 2.1, 2.2, 2.4} (cf.~also~\\cite{CM08}*{\\S 1.13}).\n\nLet $\\alg{A}_{LR} = \\field{C} \\oplus \\field{H}_L \\oplus \\field{H}_R \\oplus M_3(\\field{C})$, where the labels $L$ and $R$ serve to distinguish the two copies of $\\field{H}$; we can therefore write $\\spec{\\alg{A}_{LR}} = \\{\\rep{1},\\crep{1},\\rep{2}_L,\\rep{2}_R,\\rep{3},\\crep{3}\\}$ without ambiguity. 
Now, let $(\\hs{M}_F,\\gamma_F,J_F)$ be the orientable real $\\alg{A}_{LR}$-bimodule of $KO$-dimension $6 \\bmod 8$ with signed multiplicity matrix\n\\[\n \\mu =\n\\begin{pmatrix}\n 0 & 0 & -1 & 1 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0\\\\\n 1 & 0 & 0 & 0 & 1 & 0\\\\\n -1 & 0 & 0 & 0 & -1 & 0\\\\\n 0 & 0 & -1 & 1 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0\n\\end{pmatrix}.\n\\]\nThis bimodule is, in fact, an $S^0$-real bimodule for $\\epsilon_F = \\lambda(-1,1,1,-1)$; $\\hs{E} = (\\hs{M}_F)_i$ is then the orientable even $\\alg{A}_{LR}$-bimodule with signed multiplicity matrix\n\\[\n \\mu_\\hs{E} = \n\\begin{pmatrix}\n 0 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0\\\\\n 1 & 0 & 0 & 0 & 1 & 0\\\\\n -1 & 0 & 0 & 0 & -1 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0\n\\end{pmatrix}.\n\\]\nNote, however, that neither $\\hs{M}_F$ nor $\\hs{E}$ satisfies Poincar{\\'e} duality, as\n\\[\n \\hat{\\mu} =\n\\begin{pmatrix}\n 0 & -1 & 1 & 0\\\\\n 1 & 0 & 0 & 1\\\\\n -1 & 0 & 0 & -1\\\\\n 0 & -1 & 1 & 0\n\\end{pmatrix}, \\quad\n \\widehat{\\mu_\\hs{E}} =\n\\begin{pmatrix}\n 0 & 0 & 0 & 0\\\\\n 1 & 0 & 0 & 1\\\\\n -1 & 0 & 0 & -1\\\\\n 0 & 0 & 0 & 0\n\\end{pmatrix}\n\\]\nare both clearly degenerate; the intersection forms of $\\hs{M}_F$ and $\\hs{E}$ are given by the matrices $\\cap = 2 \\hat{\\mu}$ and $\\cap_\\hs{E} = 2 \\widehat{\\mu_\\hs{E}}$, respectively. \n\nIn order to introduce $N$ generations of fermions and anti-fermions, one now considers the real $\\alg{A}$-bimodule $\\hs{H}_F := (\\hs{M}_F)^{\\oplus N}$; by abuse of notation, $\\gamma_F$, $J_F$ and $\\epsilon_F$ now also denote the relevant structure operators on $\\hs{H}_F$. 
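The degeneracy of the contracted matrices above, and the relation between them and the intersection forms, are easy to confirm numerically. The following sketch is a hypothetical check, not part of the original construction; it assumes only `numpy`, with the grouping of $\spec{\alg{A}_{LR}}$ into the four Wedderburn summands of $\field{C} \oplus \field{H}_L \oplus \field{H}_R \oplus M_3(\field{C})$ and the trace weights $\tau = (1,2,2,1)$ read off from the preceding discussion of minimal projections:

```python
import numpy as np

# Signed multiplicity matrix mu of M_F over
# spec(A_LR) = (1, 1bar, 2_L, 2_R, 3, 3bar), as given above.
mu = np.array([
    [ 0, 0, -1, 1,  0, 0],
    [ 0, 0,  0, 0,  0, 0],
    [ 1, 0,  0, 0,  1, 0],
    [-1, 0,  0, 0, -1, 0],
    [ 0, 0, -1, 1,  0, 0],
    [ 0, 0,  0, 0,  0, 0],
])

# Rows/columns of mu grouped by the Wedderburn summands of
# A_LR = C + H_L + H_R + M_3(C), with trace weights tau_i (2 for H).
summands = [[0, 1], [2], [3], [4, 5]]
tau = np.array([1, 2, 2, 1])

def hat(m):
    """Contract a matrix indexed by spec(A) down to the summands."""
    return np.array([[m[np.ix_(r, c)].sum() for c in summands]
                     for r in summands])

mu_hat = hat(mu)
cap = np.outer(tau, tau) * mu_hat      # entrywise tau_k tau_l mu_hat_{kl}
assert np.array_equal(cap, 2 * mu_hat)        # cap = 2 mu_hat, as stated
assert abs(np.linalg.det(mu_hat)) < 1e-9      # degenerate: no Poincare duality
```

The same `hat` contraction applied to $\mu_\hs{E}$ reproduces the second displayed matrix, whose vanishing first and last rows make its degeneracy immediate.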
In terms of multiplicity matrices and intersection forms, the sole difference from our discussion of $\\hs{M}_F$ is that all matrices are now multiplied by $N$.\n\nNow, let $\\alg{A}_F = \\field{C} \\oplus \\field{H} \\oplus M_3(\\field{C})$, which we consider as a subalgebra of $\\alg{A}_{LR}$ by means of the embedding\n\\[\n (\\zeta,q,m) \\mapsto \\left(\\zeta,q,\\begin{pmatrix}\\zeta & 0\\\\0 & \\overline{\\zeta}\\end{pmatrix},m\\right);\n\\]\njust as we could for $\\alg{A}_{LR}$, we can write $\\spec{\\alg{A}_F} = \\{\\rep{1},\\crep{1},\\rep{2},\\rep{3},\\crep{3}\\}$ without ambiguity. We can therefore view $\\hs{H}_F$ as a real $\\alg{A}_F$-bimodule of $KO$-dimension $6 \\bmod 8$, whose pair of multiplicity matrices $(m^\\textup{even},m^\\textup{odd})$ is then given by\n\\[\n m^\\textup{even} = N\n\\begin{pmatrix}\n1 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0\\\\\n1 & 0 & 0 & 1 & 0\\\\\n1 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0\n\\end{pmatrix}, \\quad\nm^\\textup{odd} = N\n\\begin{pmatrix}\n1 & 0 & 1 & 1 & 0\\\\\n1 & 0 & 0 & 1 & 0\\\\\n0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 1 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0\n\\end{pmatrix};\n\\]\nthe essential observation is that the irreducible representation $\\rep{2}_R$ of $\\alg{A}_{LR}$ corresponds to the representation $\\rep{1}\\oplus\\crep{1}$ of $\\alg{A}_F$, whilst $\\rep{2}_L$, $\\rep{3}$ and $\\crep{3}$ correspond to $\\rep{2}$, $\\rep{3}$ and $\\crep{3}$, respectively. \n\nNote that $\\hs{H}_F$ now fails even to be quasi-orientable, let alone orientable, with the sub-bimodule $(\\hs{H}_F)_{\\rep{1}\\rep{1}}$ providing the obstruction, and even if we were to restore quasi-orientability by setting $(\\hs{H}_F)_{\\rep{1}\\rep{1}} = 0$, $(\\hs{H}_F)_{\\rep{1}\\crep{1}}$ and $(\\hs{H}_F)_{\\crep{1}\\rep{1}}$ would still present an obstruction to orientability by Proposition~\\ref{converseorientable}. 
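Under the same class-summing convention for the hat operation (an assumption on our part: entries are summed over the conjugate pairs $\{\rep{1},\crep{1}\}$ and $\{\rep{3},\crep{3}\}$), the hatted multiplicity matrices of $\hs{H}_F$ as an $\alg{A}_F$-bimodule can be checked numerically:

```python
import numpy as np

N = 3  # number of generations; any positive integer behaves the same

# Multiplicity matrices of H_F as an A_F-bimodule, rows/columns
# ordered as (1, 1bar, 2, 3, 3bar).
m_even = N * np.array([
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
])
m_odd = N * np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
])

# Assumption: hat sums entries over the classes {1, 1bar}, {2}, {3, 3bar}.
classes = [[0, 1], [2], [3, 4]]

def hat(m):
    return np.array([[m[np.ix_(a, b)].sum() for b in classes]
                     for a in classes])

diff = hat(m_even) - hat(m_odd)
print(diff)  # a 3x3 anti-symmetric matrix
# Any odd-dimensional anti-symmetric matrix is automatically degenerate:
print(abs(np.linalg.det(diff)) < 1e-9)  # True
```

The difference of the hatted matrices comes out anti-symmetric, and being $3 \times 3$ anti-symmetric it has vanishing determinant, in line with the failure of Poincar{\'e} duality noted in the text.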
Note also that $\\hs{H}_F$ must necessarily fail to satisfy Poincar{\\'e} duality, as the matrix $\\cap_F$ of its intersection form is a $3 \\times 3$ anti-symmetric matrix, and thus \\latin{a priori} degenerate. Let us nonetheless compute $\\cap_F$:\n\\[\n \\widehat{m^\\textup{even}} - \\widehat{m^\\textup{odd}} = \n N \\begin{pmatrix}\n 2 & 0 & 0\\\\\n 1 & 0 & 1\\\\\n 2 & 0 & 0\n\\end{pmatrix} -\nN \\begin{pmatrix}\n 2 & 1 & 2\\\\\n 0 & 0 & 0\\\\\n 0 & 1 & 0 \n\\end{pmatrix} =\nN \\begin{pmatrix}\n 0 & -1 & -2\\\\\n 1 & 0 & 1\\\\\n 2 & -1 & 0\n\\end{pmatrix},\n\\]\nand hence, by Proposition~\\ref{intform},\n\\[\n \\cap_F =\n2N \\begin{pmatrix}\n 0 & -1 & -1\\\\\n 1 & 0 & 1\\\\\n 1 & -1 & 0\n\\end{pmatrix}.\n\\]\n\nFinally, let us consider the $S^0$-real structure on $\\hs{H}_F$ as an $\\alg{A}_F$-bimodule, inherited from its structure as an $\\alg{A}_{LR}$-bimodule; we now denote $(\\hs{H}_F)_i$ by $\\hs{H}_f$. One still has that $\\hs{H}_f = \\hs{E}^{\\oplus N}$, which is still orientable and thus specified by the signed multiplicity matrix\n\\[\n \\mu_{f} = N\n\\begin{pmatrix}\n -1 & 0 & 0 & -1 & 0\\\\\n -1 & 0 & 0 & -1 & 0\\\\\n 1 & 0 & 0 & 1 & 0\\\\\n 0 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0\n\\end{pmatrix};\n\\]\nthe intersection form is then given by the matrix\n\\[\n \\cap_{f} = 2N\n\\begin{pmatrix}\n -1 & 0 & -1\\\\\n 1 & 0 & 1\\\\\n 0 & 0 & 0\n\\end{pmatrix},\n\\]\nso that $\\hs{H}_f$ fails to satisfy Poincar{\\'e} duality as an $\\alg{A}_F$-bimodule.\n\n\n\\section{Dirac Operators and their Structure}\n\n\\subsection{The order one condition}\n\nWe now examine the structure of Dirac operators in detail. We will find it useful to begin with the study of operators between $\\alg{A}$-bimodules (for fixed $\\alg{A}$) satisfying a further generalisation of the order one condition. 
Thus, let $\\alg{A}$ be a fixed real {\\ensuremath{C^*}}-algebra, and let $\\hs{H}_1$ and $\\hs{H}_2$ be fixed $\\alg{A}$-bimodules with multiplicity matrices $m_1$ and $m_2$, respectively. \n\n\\begin{definition}\n We shall say that a map $T \\in \\mathcal{L}(\\hs{H}_1,\\hs{H}_2)$ satisfies the \\term{generalised order one condition} if\n\\begin{equation}\\label{genorderone}\n \\forall a, b \\in \\alg{A}, \\: (\\lambda_2(a)T - T\\lambda_1(a))\\rho_1(b) = \\rho_2(b)(\\lambda_2(a)T - T\\lambda_1(a)).\n\\end{equation}\n\\end{definition}\n\nNote that if $\\hs{H}_1 = \\hs{H}_2$, then the generalised order one condition reduces to the usual order one condition on Dirac operators.\n\nIt is easy to check that the generalised order one condition is, in fact, equivalent to the following alternative condition:\n\\begin{equation}\n \\forall a, b \\in \\alg{A}, \\: (\\rho_2(a)T - T\\rho_1(a))\\lambda_1(b) = \\lambda_2(b)(\\rho_2(a)T - T\\rho_1(a)).\n\\end{equation}\nThus, the following are equivalent for $T \\in \\mathcal{L}(\\hs{H}_1,\\hs{H}_2)$:\n\\begin{enumerate}\n \\item $T$ satisfies the generalised order one condition;\n \\item For all $a \\in \\alg{A}$, $\\lambda_2(a)T - T\\lambda_1(a)$ is right $\\alg{A}$-linear;\n \\item For all $a \\in \\alg{A}$, $\\rho_2(a)T - T\\rho_1(a)$ is left $\\alg{A}$-linear.\n\\end{enumerate}\n\nNow, since the unitary group $\\U(\\alg{A})$ of $\\alg{A}$ is a compact Lie group, let $\\mu$ be the normalised bi-invariant Haar measure on $\\U(\\alg{A})$. \n\n\\begin{lemma}~\\label{decompproj}\nLet $\\hs{H}_1$ and $\\hs{H}_2$ be $\\alg{A}$-bimodules. 
Define operators $E_\\lambda$ and $E_\\rho$ on $\\mathcal{L}^1_\\alg{A}(\\hs{H}_1,\\hs{H}_2)$ by\n\\begin{equation}\n E_\\lambda(T) := \\int_{\\U(\\alg{A})} \\mathop{}\\!\\mathrm{d}\\mu(u) \\lambda_2(u)T\\lambda_1(u^{-1}), \\quad E_\\rho(T) := \\int_{\\U(\\alg{A})} \\mathop{}\\!\\mathrm{d}\\mu(u) \\rho_2(u^{-1})T\\rho_1(u).\n\\end{equation}\nThen $E_\\lambda$ and $E_\\rho$ are commuting idempotents such that\n\\[\n \\im(E_\\lambda) = \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2), \\quad \\im(E_\\rho) = \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2),\n\\]\nand\n\\[\n \\ker(E_\\lambda) = \\im(\\Id - E_\\lambda) \\subseteq \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2), \\quad \\ker(E_\\rho) = \\im(\\Id - E_\\rho) \\subseteq \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2),\n\\]\nwhile\n\\[\n \\im(E_\\lambda E_\\rho) = \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2).\n\\]\n\\end{lemma}\n\n\\begin{proof}\nFirst, the fact that $E_\\lambda$ and $E_\\rho$ are idempotents follows immediately from the Fubini-Tonelli theorem together with translation invariance of the Haar measure $\\mu$, whilst commutation of $E_\\lambda$ and $E_\\rho$ follows from the Fubini-Tonelli theorem together with the commutation of left and right actions on $\\hs{H}_1$ and on $\\hs{H}_2$. Moreover, by construction, $E_\\lambda$ and $E_\\rho$ act as the identity on $\\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)$ and $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)$, respectively, so that\n\\[\n \\im(E_\\lambda) \\supseteq \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2), \\quad \\im(E_\\rho) \\supseteq \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2).\n\\]\n\nNow, let $T \\in \\mathcal{L}^1_\\alg{A}(\\hs{H}_1,\\hs{H}_2)$. 
Then, by translation invariance of the Haar measure, it follows that for any $u \\in \\U(\\alg{A})$,\n\\[\n E_\\lambda(T) = \\lambda_2(u) E_\\lambda(T) \\lambda_1(u)^*, \\quad E_\\rho(T) = \\rho_2(u) E_\\rho(T) \\rho_1(u)^*,\n\\]\nor equivalently,\n\\[\n \\lambda_2(u) E_\\lambda(T) = E_\\lambda(T) \\lambda_1(u), \\quad \\rho_2(u)E_\\rho(T) = E_\\rho(T) \\rho_1(u).\n\\]\nBy the real analogue of the Russo-Dye theorem~\\cite{Li}*{Lemma 2.15.16}, the convex hull of $\\U(\\alg{A})$ is weakly dense in the unit ball of $\\alg{A}$, so that\n\\[\n \\lambda_2(a) E_\\lambda(T) = E_\\lambda(T) \\lambda_1(a), \\quad \\rho_2(a)E_\\rho(T) = E_\\rho(T) \\rho_1(a)\n\\]\nfor all $a \\in \\alg{A}$, {\\it i.e.\\\/}\\ $E_\\lambda(T) \\in \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)$ and $E_\\rho(T) \\in \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)$.\n\nOn the other hand,\n\\begin{align*}\n (\\Id - E_\\lambda)(T) &= \\int_{\\U(\\alg{A})} \\mathop{}\\!\\mathrm{d}\\mu(u) (T\\lambda_1(u) - \\lambda_2(u)T)\\lambda_1(u^{-1}),\\\\ (\\Id - E_\\rho)(T) &= \\int_{\\U(\\alg{A})} \\mathop{}\\!\\mathrm{d}\\mu(u) (T\\rho_1(u^{-1}) - \\rho_2(u^{-1})T)\\rho_1(u),\n\\end{align*}\nso that by the generalised order one condition, $(\\Id - E_\\lambda)(T) \\in \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)$ and $(\\Id - E_\\rho)(T) \\in \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)$.\n\nFinally, the commutation of $E_\\lambda$ and $E_\\rho$ together with our identification of $\\im(E_\\lambda)$ and of $\\im(E_\\rho)$ imply the desired result about $\\im(E_\\lambda E_\\rho)$.\n\\end{proof}\n\nNow, since\n\\[\n \\im(\\Id - E_\\lambda) \\subseteq \\im(E_\\rho), \\quad \\im(\\Id - E_\\rho) \\subseteq \\im(E_\\lambda),\n\\]\none has that\n\\[\n (\\Id - E_\\lambda)E_\\rho = \\Id - E_\\lambda, \\quad (\\Id - E_\\rho)E_\\lambda = \\Id - E_\\rho,\n\\]\nwhich implies in turn that $\\Id - E_\\rho$, $E_\\lambda E_\\rho$ and $\\Id - E_\\lambda$ are mutually orthogonal idempotents such that\n\\[\n (\\Id 
- E_\\rho) + E_\\lambda E_\\rho + (\\Id - E_\\lambda) = \\Id.\n\\]\nWe have therefore proved the following:\n\n\\begin{proposition}[Krajewski~\\cite{Kraj98}*{\\S 3.4}]\\label{order1decomp}\nLet $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)^0$ denote $\\ker(E_\\lambda)$, and let $\\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)^0$ denote $\\ker(E_\\rho)$. Then\n\\begin{equation}\n \\mathcal{L}^1_\\alg{A}(\\hs{H}_1,\\hs{H}_2) = \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)^0 \\oplus \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2) \\oplus \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)^0,\n\\end{equation}\nwhere\n\\begin{equation}\n \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)^0 \\oplus \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2) = \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)\n\\end{equation}\nand\n\\begin{equation}\n \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2) \\oplus \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)^0 = \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2).\n\\end{equation}\n\\end{proposition}\n\nThus, elements of $\\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)^0$ can be interpreted as the ``purely'' left $\\alg{A}$-linear maps $\\hs{H}_1 \\to \\hs{H}_2$, whilst elements of $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)^0$ can be interpreted as the ``purely'' right $\\alg{A}$-linear maps $\\hs{H}_1 \\to \\hs{H}_2$.\n\nOne can readily check that the decomposition of Proposition~\\ref{order1decomp} is respected by left multiplication by elements of $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}_2)$ and right multiplication by elements of $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}_1)$:\n\n\\begin{proposition}~\\label{lrorder1decomp}\nFor any $T \\in \\mathcal{L}^1_\\alg{A}(\\hs{H}_1,\\hs{H}_2)$, $A \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}_1)$, $B \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}_2)$,\n\\[\n E_\\lambda(A T) = A E_\\lambda(T), \\quad E_\\rho(T B) = E_\\rho(T) B.\n\\]\n\\end{proposition}\n\nNow, if $T \\in 
\\mathcal{L}(\\hs{H}_1,\\hs{H}_2)$, it is easy to see that $T$ satisfies the generalised order one condition if and only if each $T_{\\alpha\\beta}^{\\gamma\\delta}$ satisfies the generalised order one condition within $\\mathcal{L}((\\hs{H}_1)_{\\alpha\\beta},(\\hs{H}_2)_{\\gamma\\delta})$; by abuse of notation, we will also denote by $E_\\lambda$ and $E_\\rho$ the appropriate idempotents on each $\\mathcal{L}((\\hs{H}_1)_{\\alpha\\beta},(\\hs{H}_2)_{\\gamma\\delta})$. It then follows that\n\\[\n E_\\lambda(T)_{\\alpha\\beta}^{\\gamma\\delta} = E_\\lambda(T_{\\alpha\\beta}^{\\gamma\\delta}), \\quad E_\\rho(T)_{\\alpha\\beta}^{\\gamma\\delta} = E_\\rho(T_{\\alpha\\beta}^{\\gamma\\delta}).\n\\]\n\nFinally, let us turn to characterising $\\ker(E_\\lambda)$ and $\\ker(E_\\rho)$; before proceeding, we first need a technical lemma:\n\n\\begin{lemma}\\label{schur}\n Let $G$ be a compact Lie group, and let $\\mu$ be the bi-invariant Haar measure on $G$. Let $(\\hs{H},\\pi)$ and $(\\hs{H}^\\prime,\\pi^\\prime)$ be finite-dimensional irreducible unitary matrix representations of $G$. Then for any $T \\in \\mathcal{L}(\\hs{H}^\\prime,\\hs{H})$, if $\\pi \\ncong \\pi^\\prime$ then\n\\begin{equation}\n \\int_G \\mathop{}\\!\\mathrm{d}\\mu(g) \\pi(g)T\\pi^\\prime(g^{-1}) = 0,\n\\end{equation}\nand if $\\pi \\cong \\pi^\\prime$, then for any unitary $G$-isomorphism $U: \\hs{H}^\\prime \\to \\hs{H}$,\n\\begin{equation}\n \\int_G \\mathop{}\\!\\mathrm{d}\\mu(g) \\pi(g)T\\pi^\\prime(g^{-1}) = \\frac{1}{\\dim \\hs{H}} \\tr(T U^*)U.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\n Let\n\\[\n \\tilde{T} = \\int_G \\mathop{}\\!\\mathrm{d}\\mu(g) \\pi(g)T\\pi^\\prime(g^{-1}),\n\\]\nwhich, by translation invariance of the Haar measure $\\mu$, is a $G$-invariant map. If $\\pi \\ncong \\pi^\\prime$, then Schur's Lemma forces $\\tilde{T}$ to vanish. If instead $\\pi \\cong \\pi^\\prime$, let $U: \\hs{H}^\\prime \\to \\hs{H}$ be a unitary $G$-isomorphism. 
Then by Schur's Lemma there exists some $\\alpha \\in \\field{C}$ such that $\\tilde{T} = \\alpha U$; in fact,\n\\[\n \\alpha = \\alpha \\frac{1}{\\dim \\hs{H}}\\tr(U U^*) = \\frac{1}{\\dim \\hs{H}} \\tr(\\tilde{T} U^*).\n\\]\nOne can then show that $\\tr(\\tilde{T} U^*) = \\tr(T U^*)$ by introducing an orthonormal basis of $\\hs{H}$ and then calculating directly.\n\\end{proof}\n\nWe now arrive at the desired characterisation:\n\n\\begin{proposition}~\\label{zeromean}\nIf $T \\in \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)$, then $E_\\lambda(T) = 0$ if and only if for all $\\alpha$, $\\beta \\in \\supp(m_1) \\cap \\supp(m_2)$,\n\\[\n T_{\\alpha\\beta}^{\\alpha\\beta} \\in \\mathfrak{sl}(n_\\alpha) \\otimes M_{(m_2)_{\\alpha\\beta} \\times (m_1)_{\\alpha\\beta}}(\\field{C}) \\otimes 1_{n_\\beta},\n\\]\nand if $T \\in \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)$, then $E_\\rho(T) = 0$ if and only if for all $\\alpha$, $\\beta \\in \\supp(m_1) \\cap \\supp(m_2)$,\n\\[\n T_{\\alpha\\beta}^{\\alpha\\beta} \\in 1_{n_\\alpha} \\otimes M_{(m_2)_{\\alpha\\beta} \\times (m_1)_{\\alpha\\beta}}(\\field{C}) \\otimes \\mathfrak{sl}(n_\\beta).\n\\]\n\\end{proposition}\n\n\\begin{proof}\nLet $T \\in \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}_1,\\hs{H}_2)$. 
Then, by Proposition~\\ref{linear}, it suffices to consider components $T_{\\alpha\\beta}^{\\gamma\\beta}$ for $\\alpha$, $\\beta$, $\\gamma \\in \\spec{\\alg{A}}$, which take the form\n\\[\n T_{\\alpha\\beta}^{\\gamma\\beta} = M_{\\alpha\\beta}^{\\gamma} \\otimes 1_{n_\\beta}\n\\]\nfor $M_{\\alpha\\beta}^\\gamma \\in M_{n_\\gamma \\times n_\\alpha}(\\field{C}) \\otimes M_{(m_2)_{\\gamma\\beta} \\times (m_1)_{\\alpha\\beta}}(\\field{C})$.\n\nNow fix $\\alpha$, $\\beta$, $\\gamma \\in \\spec{\\alg{A}}$, and write\n\\[\n M_{\\alpha\\beta}^\\gamma = \\sum_{i=1}^k A_i \\otimes B_i\n\\]\nfor $A_i \\in M_{n_\\gamma \\times n_\\alpha}(\\field{C})$ and for $B_i \\in M_{(m_2)_{\\gamma\\beta} \\times (m_1)_{\\alpha\\beta}}(\\field{C})$ linearly independent. It then follows by direct computation together with Lemma~\\ref{schur} that\n\\[\n E_\\lambda(T_{\\alpha\\beta}^{\\gamma\\beta}) = \n \\begin{cases}\n \\frac{1}{n_\\alpha} \\left(\\sum_{i=1}^k \\tr(A_i) 1_{n_\\alpha} \\otimes B_i \\right) \\otimes 1_{n_\\beta} &\\text{if $\\alpha = \\gamma$,}\\\\\n 0 &\\text{otherwise},\n \\end{cases} \n\\]\nso that by linear independence of the $B_i$, $E_\\lambda(T_{\\alpha\\beta}^{\\gamma\\beta})$ vanishes if and only if either $\\alpha \\neq \\gamma$, or $\\alpha = \\gamma$ and each $A_i$ is traceless, and hence, if and only if either $\\alpha \\neq \\gamma$, or $\\alpha = \\gamma$ and $M_{\\alpha\\beta}^\\alpha \\in \\mathfrak{sl}(n_\\alpha) \\otimes M_{(m_2)_{\\alpha\\beta} \\times (m_1)_{\\alpha\\beta}}(\\field{C})$, as required.\n\n\\latin{Mutatis mutandis}, this argument also establishes the desired characterisation of $\\ker(E_\\rho)$.\n\\end{proof}\n\n\\subsection{Odd bilateral spectral triples}\n\nLet us now take $\\hs{H}_1 = \\hs{H}_2 = \\hs{H}$. 
By construction of $E_\\lambda$ and $E_\\rho$, the following conditions are readily seen to be equivalent for $T \\in \\mathcal{L}_\\alg{A}^1(\\hs{H})$:\n\\begin{enumerate}\n \\item $T$ is self-adjoint;\n \\item $E_\\lambda(T)$ and $(\\Id - E_\\lambda)(T)$ are self-adjoint;\n \\item $(\\Id - E_\\rho)(T)$ and $E_\\rho(T)$ are self-adjoint;\n \\item $(\\Id - E_\\rho)(T)$, $(E_\\lambda E_\\rho)(T)$ and $(\\Id - E_\\lambda)(T)$ are self-adjoint.\n\\end{enumerate}\nThus, in particular,\n\\begin{equation}\n \\ms{D}_0(\\alg{A},\\hs{H}) = \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H})^0_\\textup{sa} \\oplus \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H})_\\textup{sa} \\oplus \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H})^0_\\textup{sa}.\n\\end{equation}\nIn light of Proposition~\\ref{lrorder1decomp}, we therefore have the following description of $\\ms{D}(\\alg{A},\\hs{H})$:\n\n\\begin{proposition}\n Let $\\hs{H}$ be an $\\alg{A}$-bimodule. Then\n\\begin{equation}\n \\ms{D}(\\alg{A},\\hs{H}) = \\left( \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H})^0_\\textup{sa} \\times \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H})_\\textup{sa} \\times \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H})^0_\\textup{sa} \\right) \/ \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}),\n\\end{equation}\nwhere $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H})$ acts diagonally by conjugation. 
\n\\end{proposition}\n\nNow, in light of Propositions~\\ref{linear},~\\ref{order1decomp} and~\\ref{zeromean}, we can describe how to construct an arbitrary Dirac operator on an odd $\\alg{A}$-bimodule $\\hs{H}$ with multiplicity matrix $m$:\n\\begin{enumerate}\n \\item For $\\alpha$, $\\beta$, $\\gamma \\in \\spec{\\alg{A}}$ such that $\\alpha < \\gamma$, choose $M_{\\alpha\\beta}^\\gamma \\in M_{n_\\gamma m_{\\gamma\\beta} \\times n_\\alpha m_{\\alpha\\beta}}(\\field{C})$;\n \\item For $\\alpha$, $\\beta$, $\\delta \\in \\spec{\\alg{A}}$ such that $\\beta < \\delta$, choose $N_{\\alpha\\beta}^\\delta \\in M_{m_{\\alpha\\delta} n_\\delta \\times m_{\\alpha\\beta} n_\\beta}(\\field{C})$;\n \\item For $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$, choose $M_{\\alpha\\beta}^\\alpha \\in M_{n_\\alpha m_{\\alpha\\beta}}(\\field{C})_\\textup{sa}$ and $N_{\\alpha\\beta}^\\beta \\in M_{m_{\\alpha\\beta} n_\\beta}(\\field{C})_\\textup{sa}$;\n \\item Finally, for $\\alpha$, $\\beta$, $\\gamma$, $\\delta \\in \\spec{\\alg{A}}$, set\n \\begin{equation}\n D_{\\alpha\\beta}^{\\gamma\\delta} = \n \\begin{cases}\n M_{\\alpha\\beta}^\\gamma \\otimes 1_{n_\\beta} &\\text{if $\\alpha < \\gamma$ and $\\beta = \\delta$,}\\\\\n (M_{\\gamma\\beta}^\\alpha)^* \\otimes 1_{n_\\beta} &\\text{if $\\alpha > \\gamma$ and $\\beta = \\delta$,}\\\\\n 1_{n_\\alpha} \\otimes N_{\\alpha\\beta}^\\delta &\\text{if $\\alpha = \\gamma$ and $\\beta < \\delta$,}\\\\\n 1_{n_\\alpha} \\otimes (N_{\\alpha\\delta}^\\beta)^* &\\text{if $\\alpha = \\gamma$ and $\\beta > \\delta$,}\\\\\n M_{\\alpha\\beta}^\\alpha \\otimes 1_{n_\\beta} + 1_{n_\\alpha} \\otimes N_{\\alpha\\beta}^\\beta &\\text{if $(\\alpha,\\beta) = (\\gamma,\\delta)$,}\\\\\n 0 &\\text{otherwise}.\n \\end{cases}\n \\end{equation}\n\\end{enumerate}\n\nNote that for any $K = (1_{n_\\alpha} \\otimes K_{\\alpha\\beta} \\otimes 1_{n_\\beta})_{\\alpha,\\beta\\in\\spec{\\alg{A}}} \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H})_\\textup{sa}$ (so that each 
$K_{\\alpha\\beta}$ is self-adjoint), we can make the replacements\n\\[\n M_{\\alpha\\beta}^\\alpha \\mapsto M_{\\alpha\\beta}^\\alpha + 1_{n_\\alpha} \\otimes K_{\\alpha\\beta}, \\quad N_{\\alpha\\beta}^\\beta \\mapsto N_{\\alpha\\beta}^\\beta - K_{\\alpha\\beta} \\otimes 1_{n_\\beta},\n\\]\nand still obtain the same Dirac operator $D$; by Proposition~\\ref{order1decomp}, this freedom is removed by requiring either that $M_{\\alpha\\beta}^\\alpha \\in \\mathfrak{sl}(n_\\alpha) \\otimes M_{m_{\\alpha\\beta}}(\\field{C})$ or that $N_{\\alpha\\beta}^\\beta \\in M_{m_{\\alpha\\beta}}(\\field{C}) \\otimes \\mathfrak{sl}(n_\\beta)$.\n\nWe now turn to the moduli space $\\ms{D}(\\alg{A},\\hs{H})$ itself. By the above discussion and Corollary~\\ref{oddunitary}, we can identify the space $\\ms{D}_0(\\alg{A},\\hs{H})$ with\n\\begin{multline}\n \\ms{D}_0(\\alg{A},m) := \\prod_{\\alpha,\\beta \\in \\spec{\\alg{A}}} \\prod_{\\substack{\\gamma \\in \\spec{\\alg{A}} \\\\ \\gamma > \\alpha}} M_{n_\\gamma m_{\\gamma\\beta} \\times n_\\alpha m_{\\alpha\\beta}}(\\field{C}) \\times \\left(\\mathfrak{sl}(n_\\alpha) \\otimes M_{m_{\\alpha\\beta}}(\\field{C})\\right)_\\textup{sa} \\\\ \\times \\prod_{\\substack{\\delta \\in \\spec{\\alg{A}} \\\\ \\delta > \\beta}} M_{m_{\\alpha\\delta} n_\\delta \\times m_{\\alpha\\beta} n_\\beta}(\\field{C}) \\times M_{m_{\\alpha\\beta} n_\\beta}(\\field{C})_\\textup{sa},\n\\end{multline}\nand identify $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H})$ with\n\\begin{equation}\\label{modunit}\n \\U(\\alg{A},m) := \\prod_{\\alpha,\\beta \\in \\spec{\\alg{A}}} \\U(m_{\\alpha\\beta}).\n\\end{equation}\nBy checking at the level of components, one sees that the action of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H})$ on the space $\\ms{D}_0(\\alg{A},\\hs{H})$ corresponds under these identifications to the action of $\\U(\\alg{A},m)$ on $\\ms{D}_0(\\alg{A},m)$ defined by having $(U_{\\alpha\\beta}) \\in \\U(\\alg{A},m)$ act on\n\\[\n 
(M_{\\alpha\\beta}^\\gamma;M_{\\alpha\\beta}^\\alpha;N_{\\alpha\\beta}^\\delta;N_{\\alpha\\beta}^\\beta) \\in \\ms{D}_0(\\alg{A},m)\n\\]\nby\n\\[\n M_{\\alpha\\beta}^\\gamma \\mapsto (1_{n_\\gamma} \\otimes U_{\\gamma\\beta}) M_{\\alpha\\beta}^\\gamma (1_{n_\\alpha} \\otimes U_{\\alpha\\beta}^*), \\quad N_{\\alpha\\beta}^\\delta \\mapsto (U_{\\alpha\\delta} \\otimes 1_{n_\\delta}) N_{\\alpha\\beta}^\\delta (U_{\\alpha\\beta}^* \\otimes 1_{n_\\beta}).\n\\]\nWe have therefore proved the following:\n\n\\begin{proposition}\n Let $\\hs{H}$ be an odd $\\alg{A}$-bimodule with multiplicity matrix $m$. Then\n \\begin{equation}\n \\ms{D}(\\alg{A},\\hs{H}) \\cong \\ms{D}_0(\\alg{A},m) \/ \\U(\\alg{A},m).\n \\end{equation}\n\\end{proposition}\n\n\\subsection{Even bilateral spectral triples}\\label{evenbilateral}\n\nFor this section, let $(\\hs{H},\\gamma)$ be a fixed even $\\alg{A}$-bimodule with pair of multiplicity matrices $(m^\\textup{even},m^\\textup{odd})$.\n\nNow, let $D$ be a self-adjoint operator on $\\hs{H}$ anticommuting with $\\gamma$. Then, with respect to the decomposition $\\hs{H} = \\hs{H}^\\textup{even} \\oplus \\hs{H}^\\textup{odd}$ we can write\n\\[\n D =\n \\begin{pmatrix}\n 0 & \\Delta^*\\\\\n \\Delta & 0\\\\\n \\end{pmatrix},\n\\]\nwhere $\\Delta = P^\\textup{odd} D P^\\textup{even}$, viewed as a map $\\hs{H}^\\textup{even} \\to \\hs{H}^\\textup{odd}$. Thus, $D$ is uniquely determined by $\\Delta$ and \\latin{vice versa}. Moreover, one can check that $D$ satisfies the order one condition if and only if $\\Delta$ satisfies the generalised order one condition as a map $\\hs{H}^\\textup{even} \\to \\hs{H}^\\textup{odd}$. We therefore have the following:\n\n\\begin{lemma}\nLet $(\\hs{H},\\gamma)$ be an even $\\alg{A}$-bimodule. 
Then the map $\\ms{D}_0(\\alg{A},\\hs{H},\\gamma) \\to \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ defined by $D \\mapsto P^\\textup{odd} D P^\\textup{even}$ is an isomorphism.\n\\end{lemma}\n\nWe now apply this Lemma to obtain our first result regarding the form of $\\ms{D}(\\alg{A},\\hs{H},\\gamma)$:\n\n\\begin{proposition}\\label{evendirac}\nThe map \n\\[\n \\ms{D}(\\alg{A},\\hs{H},\\gamma) \\to \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd}) \\backslash \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) \/ \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})\n\\] \ndefined by $[D] \\mapsto [P^\\textup{odd} D P^\\textup{even}]$ is a homeomorphism.\n\\end{proposition}\n\n\\begin{proof}\nRecall that $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\gamma) = \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$. We therefore have for $D \\in \\ms{D}_0(\\alg{A},\\hs{H},\\gamma)$ and $U = U^\\textup{even} \\oplus U^\\textup{odd} \\in \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ that\n\\[\n P^\\textup{odd} U D U^* P^\\textup{even} = U^\\textup{odd} P^\\textup{odd} D P^\\textup{even} (U^\\textup{even})^*.\n\\]\nThus, under the correspondence $\\ms{D}_0(\\alg{A},\\hs{H},\\gamma) \\cong \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$, the action of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},\\gamma)$ decouples into an action of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd})$ by multiplication on the left and an action of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$ by multiplication by the inverse on the right. Thus, the map $[D] \\mapsto [P^\\textup{odd} D P^\\textup{even}]$ is not only well-defined but manifestly a homeomorphism.\n\\end{proof}\n\nCombining this last Proposition with Proposition~\\ref{order1decomp}, we immediately obtain the following:\n\n\\begin{corollary}\n Let $(\\hs{H},\\gamma)$ be an even $\\alg{A}$-bimodule. 
Then\n \\begin{multline}\n \\ms{D}(\\alg{A},\\hs{H},\\gamma) \\cong \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd}) \\backslash ( \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})^0 \\\\ \\times \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) \\times \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})^0 ) \/ \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even}),\n \\end{multline}\n where $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd})$ acts diagonally by multiplication on the left, and $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$ acts diagonally by multiplication on the right by the inverse.\n\\end{corollary}\n\nNow, just as we did in the odd case, let us describe the construction of an arbitrary Dirac operator $D$ on $(\\hs{H},\\gamma)$:\n\\begin{enumerate}\n \\item For $\\alpha$, $\\beta$, $\\gamma \\in \\spec{\\alg{A}}$, choose $M_{\\alpha\\beta}^\\gamma \\in M_{n_\\gamma m^\\textup{odd}_{\\gamma\\beta} \\times n_\\alpha m^\\textup{even}_{\\alpha\\beta}}(\\field{C})$;\n \\item For $\\alpha$, $\\beta$, $\\delta \\in \\spec{\\alg{A}}$, choose $N_{\\alpha\\beta}^\\delta \\in M_{m^\\textup{odd}_{\\alpha\\delta} n_\\delta \\times m^\\textup{even}_{\\alpha\\beta} n_\\beta}(\\field{C})$;\n \\item Construct $\\Delta \\in \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ by setting, for $\\alpha$, $\\beta$, $\\gamma$, $\\delta \\in \\spec{\\alg{A}}$,\n \\begin{equation}\n \\Delta_{\\alpha\\beta}^{\\gamma\\delta} =\n \\begin{cases}\n M_{\\alpha\\beta}^\\gamma \\otimes 1_{n_\\beta} &\\text{if $\\alpha \\neq \\gamma$ and $\\beta = \\delta$,}\\\\\n 1_{n_\\alpha} \\otimes N_{\\alpha\\beta}^\\delta &\\text{if $\\alpha = \\gamma$ and $\\beta \\neq \\delta$,}\\\\\n M_{\\alpha\\beta}^\\alpha \\otimes 1_{n_\\beta} + 1_{n_\\alpha} \\otimes N_{\\alpha\\beta}^\\beta &\\text{if $(\\alpha,\\beta) = (\\gamma,\\delta)$,}\\\\\n 0 &\\text{otherwise;}\\\\\n \\end{cases}\n 
\\end{equation}\n \\item Finally, set $D = \\bigl( \\begin{smallmatrix} 0 & \\Delta^* \\\\ \\Delta & 0 \\end{smallmatrix} \\bigr)$.\n\\end{enumerate}\n\nAgain, note that for any $K = (1_{n_\\alpha} \\otimes K_{\\alpha\\beta} \\otimes 1_{n_\\beta})_{\\alpha,\\beta\\in\\spec{\\alg{A}}} \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ , we can make the replacements\n\\[\n M_{\\alpha\\beta}^\\alpha \\mapsto M_{\\alpha\\beta}^\\alpha + 1_{n_\\alpha} \\otimes K_{\\alpha\\beta}, \\quad N_{\\alpha\\beta}^\\beta \\mapsto N_{\\alpha\\beta}^\\beta - K_{\\alpha\\beta} \\otimes 1_{n_\\beta},\n\\]\nand still obtain the same Dirac operator $D$; by Proposition~\\ref{order1decomp}, this freedom is removed by requiring either that\n\\[\n M_{\\alpha\\beta}^\\alpha \\in \\mathfrak{sl}(n_\\alpha) \\otimes M_{m^\\textup{odd}_{\\alpha\\beta} \\times m^\\textup{even}_{\\alpha\\beta}}(\\field{C})\n\\]\nor that\n\\[\nN_{\\alpha\\beta}^\\beta \\in M_{m^\\textup{odd}_{\\alpha\\beta} \\times m^\\textup{even}_{\\alpha\\beta}}(\\field{C}) \\otimes \\mathfrak{sl}(n_\\beta).\n\\]\n\nJust as in the odd case, the above discussion and Corollary~\\ref{oddunitary} imply that we can identify $\\ms{D}_0(\\alg{A},\\hs{H},\\gamma)$ with\n\\begin{multline}\n \\ms{D}_0(\\alg{A},m^\\textup{even},m^\\textup{odd}) := \\prod_{\\alpha,\\beta \\in \\spec{\\alg{A}}} \\prod_{\\substack{\\gamma \\in \\spec{\\alg{A}} \\\\ \\gamma \\neq \\alpha}} M_{n_\\gamma m^\\textup{odd}_{\\gamma\\beta} \\times n_\\alpha m^\\textup{even}_{\\alpha\\beta}}(\\field{C}) \\\\ \\times \\left(\\mathfrak{sl}(n_\\alpha) \\otimes M_{m^\\textup{odd}_{\\alpha\\beta} \\times m^\\textup{even}_{\\alpha\\beta}}(\\field{C})\\right) \\times \\prod_{\\delta \\in \\spec{\\alg{A}}} M_{m^\\textup{odd}_{\\alpha\\delta} n_\\delta \\times m^\\textup{even}_{\\alpha\\beta} n_\\beta}(\\field{C}),\n\\end{multline}\nand identify $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$ and 
$\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd})$ with $\\U(\\alg{A},m^\\textup{even})$ and $\\U(\\alg{A},m^\\textup{odd})$, respectively, which are defined according to Equation~\\ref{modunit}. The actions of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$ and $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd})$ on $\\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ therefore correspond under these identifications to the actions of $\\U(\\alg{A},m^\\textup{even})$ and $\\U(\\alg{A},m^\\textup{odd})$, respectively, on $\\ms{D}_0(\\alg{A},m^\\textup{even},m^\\textup{odd})$ defined by having $(U_{\\alpha\\beta}^\\textup{odd}) \\in \\U(\\alg{A},m^\\textup{odd})$ and $(U_{\\alpha\\beta}^\\textup{even}) \\in \\U(\\alg{A},m^\\textup{even})$ act on\n\\[\n (M_{\\alpha\\beta}^\\gamma;M_{\\alpha\\beta}^\\alpha;N_{\\alpha\\beta}^\\delta) \\in \\ms{D}_0(\\alg{A},m^\\textup{even},m^\\textup{odd})\n\\]\nby\n\\[\n M_{\\alpha\\beta}^\\gamma \\mapsto (1_{n_\\gamma} \\otimes U_{\\gamma\\beta}^\\textup{odd})M_{\\alpha\\beta}^\\gamma, \\quad N_{\\alpha\\beta}^\\delta \\mapsto (U_{\\alpha\\delta}^\\textup{odd} \\otimes 1_{n_\\delta}) N_{\\alpha\\beta}^\\delta,\n\\]\nand\n\\[\n M_{\\alpha\\beta}^\\gamma \\mapsto M_{\\alpha\\beta}^\\gamma (1_{n_\\alpha} \\otimes (U_{\\alpha\\beta}^\\textup{even})^*), \\quad N_{\\alpha\\beta}^\\delta \\mapsto N_{\\alpha\\beta}^\\delta ((U_{\\alpha\\beta}^\\textup{even})^* \\otimes 1_{n_\\beta}),\n\\]\nrespectively. Thus we have proved the following:\n\n\\begin{proposition}\nLet $(\\hs{H},\\gamma)$ be an even $\\alg{A}$-bimodule with multiplicity matrices $(m^\\textup{even},m^\\textup{odd})$. 
Then\n\\begin{equation}\n \\ms{D}(\\alg{A},\\hs{H},\\gamma) \\cong \\U(\\alg{A},m^\\textup{odd}) \\backslash \\ms{D}_0(\\alg{A},m^\\textup{even},m^\\textup{odd}) \/ \\U(\\alg{A},m^\\textup{even}).\n\\end{equation}\n\\end{proposition}\n\nIn the quasi-orientable case, the picture simplifies considerably, as all components $\\Delta_{\\alpha\\beta}^{\\alpha\\beta}$ necessarily vanish. One is then left, essentially, with the situation described by Krajewski~\\cite{Kraj98}*{\\S 3.4} and Paschke--Sitarz~\\cite{PS98}*{\\S 2.II} ; as mentioned before, one can find in the former the original definition of what are now called \\term{Krajewski diagrams}. These diagrams, used extensively by Iochum, Jureit, Sch{\\\"u}cker and Stephan~\\cites{ACG1,ACG2,ACG3,ACG4,ACG5,Sch05}, offer a concise, diagrammatic approach to the study of quasi-orientable even bilateral spectral triples that strongly emphasizes the underlying combinatorics. Though they do admit ready generalisation to the non-quasi-orientable case, we will not discuss them here.\n\nWe conclude our discussion of even bilateral spectral triples by recalling a result of Paschke and Sitarz of particular interest in relation to the NCG Standard Model.\n\n\\begin{proposition}[Paschke--Sitarz~\\cite{PS98}*{Lemma 7}]\nLet $(\\hs{H},\\gamma)$ be an orientable $\\alg{A}$-bimodule. 
Then for all $D \\in \\ms{D}_0(\\alg{A},\\hs{H},\\gamma)$,\n\\begin{equation}\n D = \\sum_{\\substack{i,j = 1\\\\ i \\neq j}}^N \\lambda(e_i)[D,\\lambda(e_j)] + \\sum_{\\substack{k,l = 1\\\\ k \\neq l}}^N \\rho(e_k)[D,\\rho(e_l)].\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\n Fix $D \\in \\ms{D}_0(\\alg{A},\\hs{H},\\gamma)$, and let\n\\begin{align*}\n T &:= D - \\sum_{\\substack{i,j = 1\\\\ i \\neq j}}^N \\lambda(e_i)[D,\\lambda(e_j)] - \\sum_{\\substack{k,l = 1\\\\ k \\neq l}}^N \\rho(e_k)[D,\\rho(e_l)]\\\\\n &= D - \\sum_{\\substack{i,j = 1\\\\ i \\neq j}}^N \\lambda(e_i)D\\lambda(e_j) - \\sum_{\\substack{k,l = 1\\\\ k \\neq l}}^N \\rho(e_k)D\\rho(e_l).\n\\end{align*}\nThen for all $\\alpha$, $\\beta$, $\\gamma$, $\\delta \\in \\spec{\\alg{A}}$,\n\\[\n T_{\\alpha\\beta}^{\\gamma\\delta} = \n\\begin{cases}\n D_{\\alpha\\beta}^{\\gamma\\delta} &\\text{if $r(\\alpha) = r(\\gamma)$, $r(\\beta) = r(\\delta)$,}\\\\\n -D_{\\alpha\\beta}^{\\gamma\\delta} &\\text{if $r(\\alpha) \\neq r(\\gamma)$, $r(\\beta) \\neq r(\\delta)$,}\\\\\n 0 &\\text{otherwise,}\n\\end{cases}\n\\]\nwhere for $\\alpha \\in \\spec{\\alg{A}}$, $r(\\alpha)$ is the value of $j \\in \\{1,\\dotsc,N\\}$ such that $\\alpha \\in \\spec{M_{k_j}(\\field{K}_j)}$. However, by Proposition~\\ref{order1decomp}, $D_{\\alpha\\beta}^{\\gamma\\delta}$ must vanish in the second case, whilst by Proposition~\\ref{converseorientable}, $D_{\\alpha\\beta}^{\\gamma\\delta}$ must vanish in the first, so that $T = 0$.\n\\end{proof}\n\nNow, let $(\\alg{A},\\hs{H},D,J,\\gamma)$ be a real spectral triple of even $KO$-dimension. 
A \\term{gauge potential} for the triple is then a self-adjoint operator on $\\hs{H}$ of the form\n\\[\n \\sum_{k=1}^n \\lambda(a_k) [D,\\lambda(b_k)],\n\\]\nwhere $a_1,\\dotsc,a_n$, $b_1,\\dotsc,b_n \\in \\alg{A}$, and an \\term{inner fluctuation of the metric} is a Dirac operator $D_A \\in \\ms{D}_0(\\alg{A},\\hs{H},J,\\gamma)$ of the form\n\\[\n D_A := D + A + \\varepsilon^\\prime J A J^* = D + A + J A J^*,\n\\]\nwhere $A$ is a gauge potential. One then has that for any gauge potential $A$, $(\\alg{A},\\hs{H},D,J,\\gamma)$ and $(\\alg{A},\\hs{H},D_A,J,\\gamma)$ are Morita equivalent. In this light, the last Proposition admits the following interpretation:\n\n\\begin{corollary}\\label{orientmorita}\n Let $(\\hs{H},J,\\gamma)$ be an orientable real $\\alg{A}$-bimodule of even $KO$-dimen\\-sion. Then for all $D \\in \\ms{D}_0(\\alg{A},\\hs{H},\\gamma,J)$, \n\\begin{equation}\n A = - \\sum_{\\substack{i,j = 1\\\\ i \\neq j}}^N \\lambda(e_i)[D,\\lambda(e_j)]\n\\end{equation}\nis a gauge potential for the real spectral triple $(\\alg{A},\\hs{H},D,J,\\gamma)$ such that $D_A = 0$.\n\\end{corollary}\n\nThus, every finite orientable real spectral triple $(\\alg{A},\\hs{H},D,J,\\gamma)$ of even $KO$-dimension is Morita equivalent to the dynamically trivial triple $(\\alg{A},\\hs{H},0,J,\\gamma)$.\n\n\\subsection{Real spectral triples of odd $KO$-dimension}\n\nFor this section, let $(\\hs{H},J)$ be a real $\\alg{A}$-bimodule of odd $KO$-dimension $n \\bmod 8$ with multiplicity matrix $m$. We begin by reducing the study of Dirac operators on $(\\hs{H},J)$ to that of self-adjoint right $\\alg{A}$-linear operators on $\\hs{H}$.\n\n\\begin{proposition}[Krajewski~\\cite{Kraj98}*{\\S 3.4}]\\label{oddrealdirac}\nLet $(\\hs{H},J)$ be a real $\\alg{A}$-bimodule of odd $KO$-dimension $n \\bmod 8$. 
Then the map $R_n : \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H})_\\textup{sa} \\to \\ms{D}_0(\\alg{A},\\hs{H},J)$ defined by $R_n(M) := M + \\varepsilon^\\prime J M J^*$ is a surjection intertwining the action of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},J)$ on $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H})_\\textup{sa}$ by conjugation with the action on $\\ms{D}_0(\\alg{A},\\hs{H},J)$ by conjugation, and $\\ker(R_n) \\subseteq \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H})_\\textup{sa}$.\n\\end{proposition}\n\n\\begin{proof}\nFirst, note that $R_n$ is indeed well-defined, since by Equation~\\ref{realintertwine}, for any $M \\in \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H})_\\textup{sa}$, $J M J^* \\in \\bdd^{\\textup{L}}_\\alg{A}(\\hs{H})_\\textup{sa}$, and hence $R_n(M) \\in \\ms{D}_0(\\alg{A},\\hs{H},J)$.\n\nNow, let $E_\\lambda$ and $E_\\rho$ be defined as in Lemma~\\ref{decompproj}, and let $E_\\lambda^\\prime = \\Id - E_\\lambda$, $E_\\rho^\\prime = \\Id - E_\\rho$. Then, by construction of $E_\\lambda$ and $E_\\rho$ and Equation~\\ref{realintertwine}, for any $T \\in \\mathcal{L}^1_\\alg{A}(\\hs{H})$,\n\\[\n E_\\lambda(J T J^*) = J E_\\rho(T) J^*,\\quad E_\\rho(J T J^*) = J E_\\lambda(T) J^*.\n\\]\nHence, in particular, for $D \\in \\ms{D}_0(\\alg{A},\\hs{H},J)$, since $J D J^* = \\varepsilon^\\prime D$,\n\\begin{align*}\n D &= \\frac{1}{2}(E_\\lambda^\\prime + E_\\rho)(D) + \\frac{1}{2}(E_\\lambda + E_\\rho^\\prime)(D) \\\\ &= \\frac{1}{2}(E_\\lambda^\\prime + E_\\rho)(D) + \\varepsilon^\\prime J \\frac{1}{2}(E_\\lambda^\\prime + E_\\rho)(D) J^*\\\\ &= R_n\\left(\\frac{1}{2}(E_\\lambda^\\prime + E_\\rho)(D)\\right),\n\\end{align*}\nwhere $\\frac{1}{2}(E_\\lambda^\\prime + E_\\rho)(D) \\in \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H})_\\textup{sa}$.\n\nFinally, that $R_n$ intertwines the actions of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},J)$ follows from Proposition~\\ref{lrorder1decomp} together with the fact that elements of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},J)$, by definition, commute with $J$, 
whilst the fact that $R_n(M) = 0$ if and only if $M = -\\varepsilon^\\prime J M J^*$ implies that $\\ker(R_n) \\subseteq \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H})_\\textup{sa}$.\n\\end{proof}\n\nIt follows, in particular, that $\\ker(R_n)$ is invariant under the action of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},J)$ by conjugation, so that the action of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},J)$ on $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H})_\\textup{sa}$ induces an action on the quotient $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H})_\\textup{sa} \/ \\ker(R_n)$, and hence $R_n$ induces an isomorphism\n\\begin{equation}\n \\ms{D}_0(\\alg{A},\\hs{H},J) \\cong \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H})_\\textup{sa} \/ \\ker(R_n)\n\\end{equation}\nof $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},J)$-representations. Thus we have proved the following:\n\n\\begin{corollary}\nLet $(\\hs{H},J)$ be a real $\\alg{A}$-bimodule of odd $KO$-dimension $n \\bmod 8$. Then\n\\begin{equation}\n \\ms{D}(\\alg{A},\\hs{H},J) \\cong \\left(\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H})_\\textup{sa} \/ \\ker(R_n)\\right) \/ \\U^{\\textup{LR}}_\\alg{A}(\\hs{H},J).\n\\end{equation}\n\\end{corollary}\n\nDiscussion of $\\ms{D}(\\alg{A},\\hs{H},J)$ thus requires discussion first of $\\ker(R_n)$:\n\n\\begin{lemma}\\label{oddker}\nIf $K = (1_{n_\\alpha} \\otimes K_{\\alpha\\beta} \\otimes 1_{n_\\beta})_{\\alpha,\\beta\\in\\spec{\\alg{A}}} \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H})_\\textup{sa}$, then $K \\in \\ker(R_n)$ if and only if for each $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$ such that $\\alpha \\neq \\beta$,\n\\begin{equation}\n K_{\\beta\\alpha} = -\\varepsilon^\\prime K_{\\alpha\\beta}^T,\n\\end{equation}\nand for each $\\alpha \\in \\spec{\\alg{A}}$,\n\\begin{equation}\n K_{\\alpha\\alpha} \\in \\hs{R}_\\alpha(n) = \n \\begin{cases}\n \\Sym_{m_{\\alpha\\alpha}}(\\field{R}) &\\text{if $n=1$,}\\\\\n i \\mathfrak{sp}(m_{\\alpha\\alpha}) &\\text{if $n=3$,}\\\\\n M_{m_{\\alpha\\alpha}\/2}(\\field{H})_\\textup{sa} &\\text{if 
$n=5$,}\\\\\n i \\mathfrak{so}(m_{\\alpha\\alpha}) &\\text{if $n=7$.}\n \\end{cases}\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nBy definition of $R_n$, $K \\in \\ker(R_n)$ if and only if $K = -\\varepsilon^\\prime J K J^* = -\\varepsilon\\varepsilon^\\prime J K J$, and this in turn holds if and only if, for $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$ such that $\\alpha \\neq \\beta$, \n\\[\n K_{\\alpha\\beta} = -\\varepsilon^\\prime K_{\\beta\\alpha}^T,\n\\]\nwhile for $\\alpha \\in \\spec{\\alg{A}}$,\n\\[\n K_{\\alpha\\alpha} =\n \\begin{cases}\n -\\varepsilon^\\prime \\overline{K_{\\alpha\\alpha}}, &\\text{if $n=1$ or $7$,}\\\\\n \\varepsilon^\\prime I_{\\alpha} K_{\\alpha\\alpha} I_{\\alpha}^* &\\text{if $n=3$ or $5$,}\n \\end{cases}\n\\]\nwhere $I_\\alpha = \\Omega_{m_{\\alpha\\alpha}} \\circ \\text{complex conjugation}$. In the case that $n =3$ or $5$, however, by construction, $M_{m_{\\alpha\\alpha}\/2}(\\field{H})$, viewed in the usual way as a real form of $M_{m_{\\alpha\\alpha}}(\\field{C})$, is precisely the set of matrices in $M_{m_{\\alpha\\alpha}}(\\field{C})$ commuting with $I_\\alpha$. 
This, together with the hypothesis that $K$ is self-adjoint, so that each $K_{\\alpha\\beta}$ is self-adjoint, yields the desired result.\n\\end{proof}\n\nWe can now describe the construction of an arbitrary Dirac operator $D$ on $(\\hs{H},J)$:\n\\begin{enumerate}\n \\item For $\\alpha$, $\\beta$, $\\gamma \\in \\spec{\\alg{A}}$ such that $\\alpha < \\gamma$, choose $M_{\\alpha\\beta}^\\gamma \\in M_{n_\\gamma m_{\\gamma\\beta} \\times n_\\alpha m_{\\alpha\\beta}}(\\field{C})$;\n \\item For $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$, choose $M_{\\alpha\\beta}^\\alpha \\in M_{n_\\alpha m_{\\alpha\\beta}}(\\field{C})_\\textup{sa}$;\n \\item For $\\alpha$, $\\beta$, $\\gamma$, $\\delta \\in \\spec{\\alg{A}}$, set\n \\begin{equation}\n M_{\\alpha\\beta}^{\\gamma\\delta} = \n \\begin{cases}\n M_{\\alpha\\beta}^\\gamma \\otimes 1_{n_\\beta} &\\text{if $\\alpha < \\gamma$ and $\\beta = \\delta$,}\\\\\n (M_{\\gamma\\beta}^\\alpha)^* \\otimes 1_{n_\\beta} &\\text{if $\\alpha > \\gamma$ and $\\beta = \\delta$,}\\\\\n M_{\\alpha\\beta}^\\alpha \\otimes 1_{n_\\beta} &\\text{if $(\\alpha,\\beta) = (\\gamma,\\delta)$,}\\\\\n 0 &\\text{otherwise}.\n \\end{cases}\n \\end{equation}\n \\item Finally, set $D = R_n(M)$.\n\\end{enumerate}\n\nNow, let $K = (1_{n_\\alpha} \\otimes K_{\\alpha\\beta} \\otimes 1_{n_\\beta})_{\\alpha,\\beta\\in\\spec{\\alg{A}}} \\in \\ker(R_n)$, so that each $K_{\\alpha\\beta}$ is self-adjoint, and for $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$ such that $\\alpha \\neq \\beta$, $K_{\\beta\\alpha} = -\\varepsilon^\\prime K_{\\alpha\\beta}^T$ and $K_{\\alpha\\alpha} \\in \\hs{R}_\\alpha(n)$. Thus, $K$ is uniquely specified by the matrices $K_{\\alpha\\beta} \\in M_{m_{\\alpha\\beta}}(\\field{C})_\\textup{sa}$ for $\\alpha < \\beta$ and by the $K_{\\alpha\\alpha} \\in \\hs{R}_\\alpha(n)$. 
Then, we can replace $M$ by $M + K$, {\\it i.e.\\\/}\\ make the replacements, for $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$ such that $\\alpha < \\beta$,\n\\begin{gather*}\n M_{\\alpha\\beta}^\\alpha \\mapsto M_{\\alpha\\beta}^\\alpha + 1_{n_\\alpha} \\otimes K_{\\alpha\\beta}, \\quad M_{\\beta\\alpha}^\\beta \\mapsto M_{\\beta\\alpha}^\\beta + 1_{n_\\beta} \\otimes (-\\varepsilon^\\prime K_{\\alpha\\beta}^T), \\\\ M_{\\alpha\\alpha}^\\alpha \\mapsto M_{\\alpha\\alpha}^\\alpha + 1_{n_\\alpha} \\otimes K_{\\alpha\\alpha}\n\\end{gather*}\nand obtain the same Dirac operator $D$. However, this is a freedom that cannot generally be removed as we did in earlier cases, as it reflects precisely the non-injectivity of $R_n$. \n\nBy the above discussion and Propositions~\\ref{real17unitary} and~\\ref{real35unitary}, we can identify the space $\\ms{D}_0(\\alg{A},\\hs{H},J)$ with\n\\begin{multline}\n \\ms{D}_0(\\alg{A},m,n) := \\prod_{\\alpha \\in \\spec{\\alg{A}}} \\biggl[ M_{n_\\alpha m_{\\alpha\\alpha}}(\\field{C})_\\textup{sa} \/ (1_{n_\\alpha} \\otimes \\hs{R}_\\alpha(n)) \\\\ \\times \\prod_{\\substack{\\beta \\in \\spec{\\alg{A}} \\\\ \\beta > \\alpha}} (M_{n_\\alpha m_{\\alpha\\beta}}(\\field{C})_\\textup{sa} \\oplus M_{n_\\beta m_{\\alpha\\beta}}(\\field{C})_\\textup{sa})\/M_{m_{\\alpha\\beta}}(\\field{C})_\\textup{sa} \\times \\prod_{\\substack{\\beta,\\gamma \\in \\spec{\\alg{A}}\\\\ \\gamma > \\alpha}} M_{n_\\gamma m_{\\gamma\\beta} \\times n_\\alpha m_{\\alpha\\beta}}(\\field{C}) \\biggr],\n\\end{multline}\nwhere $M_{m_{\\alpha\\beta}}(\\field{C})_\\textup{sa}$ is viewed as embedded in $M_{n_\\alpha m_{\\alpha\\beta}}(\\field{C})_\\textup{sa} \\oplus M_{n_\\beta m_{\\alpha\\beta}}(\\field{C})_\\textup{sa}$ via the map\n\\[\n K \\mapsto (1_{n_\\alpha} \\otimes K) \\oplus (-\\varepsilon^\\prime 1_{n_\\beta} \\otimes K^T),\n\\]\nand $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H},J)$ with\n\\begin{equation}\n \\U(\\alg{A},m,n) := \\prod_{\\alpha \\in \\spec{\\alg{A}}} \\biggl( 
\\hs{U}_\\alpha(n) \\times \\prod_{\\substack{\\beta \\in \\spec{\\alg{A}}\\\\ \\beta > \\alpha}} \\U(m_{\\alpha\\beta}) \\biggr),\n\\end{equation}\nwhere\n\\[\n \\hs{U}_\\alpha(n) :=\n\\begin{cases}\n \\Orth(m_{\\alpha\\alpha}) &\\text{if $n = 1$ or $7$,}\\\\\n \\Sp(m_{\\alpha\\alpha}) &\\text{if $n=3$ or $5$.}\n\\end{cases}\n\\]\nThen the action of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H};J)$ on $\\ms{D}_0(\\alg{A},\\hs{H};J)$ corresponds under these identifications to the action of $\\U(\\alg{A},m,n)$ on $\\ms{D}_0(\\alg{A},m,n)$ defined by having the element $(U_{\\alpha\\alpha};U_{\\alpha\\beta}) \\in \\U(\\alg{A},m,n)$ act on $\\bigl([M_{\\alpha\\alpha}^\\alpha];[(M_{\\alpha\\beta}^\\alpha,M_{\\beta\\alpha}^\\beta)];M_{\\alpha\\beta}^\\gamma\\bigr) \\in \\ms{D}_0(\\alg{A},m,n)$ by\n\\begin{align*}\n [M_{\\alpha\\alpha}^\\alpha] &\\mapsto \\bigl[(1_{n_\\alpha} \\otimes U_{\\alpha\\alpha}) M_{\\alpha\\alpha}^\\alpha (1_{n_\\alpha} \\otimes U_{\\alpha\\alpha}^*)\\bigr];\\\\\n [(M_{\\alpha\\beta}^\\alpha,M_{\\beta\\alpha}^\\beta)] &\\mapsto \\biggl[\\bigl((1_{n_\\alpha} \\otimes U_{\\alpha\\beta}) M_{\\alpha\\beta}^\\alpha (1_{n_\\alpha} \\otimes U_{\\alpha\\beta}^*),(1_{n_\\beta} \\otimes \\overline{U_{\\alpha\\beta}})M_{\\beta\\alpha}^\\beta(1_{n_\\beta} \\otimes U_{\\alpha\\beta}^T)\\bigr)\\biggr];\\\\\n M_{\\alpha\\beta}^\\gamma &\\mapsto\n \\begin{cases}\n (1_{n_\\gamma} \\otimes U_{\\gamma\\beta}) M_{\\alpha\\beta}^\\gamma (1_{n_\\alpha} \\otimes U_{\\alpha\\beta}^*) &\\text{if $\\alpha < \\beta$, $\\gamma < \\beta$,}\\\\\n (1_{n_\\gamma} \\otimes U_{\\gamma\\beta}) M_{\\alpha\\beta}^\\gamma (1_{n_\\alpha} \\otimes U_{\\beta\\alpha}^T) &\\text{if $\\alpha > \\beta$, $\\gamma < \\beta$,}\\\\\n (1_{n_\\gamma} \\otimes \\overline{U_{\\beta\\gamma}}) M_{\\alpha\\beta}^\\gamma (1_{n_\\alpha} \\otimes U_{\\alpha\\beta}^*) &\\text{if $\\alpha < \\beta$, $\\gamma > \\beta$,}\\\\\n (1_{n_\\gamma} \\otimes \\overline{U_{\\beta\\gamma}}) M_{\\alpha\\beta}^\\gamma 
(1_{n_\\alpha} \\otimes U_{\\beta\\alpha}^T) &\\text{if $\\alpha > \\beta$, $\\gamma > \\beta$.}\n \\end{cases}\n\\end{align*}\nWe have therefore proved the following:\n\n\\begin{proposition}\n Let $(\\hs{H},J)$ be a real $\\alg{A}$-bimodule of odd $KO$-dimension $n \\bmod 8$ with multiplicity matrix $m$. Then\n \\begin{equation}\n \\ms{D}(\\alg{A},\\hs{H},J) \\cong \\ms{D}_0(\\alg{A},m,n) \/ \\U(\\alg{A},m,n).\n \\end{equation}\n\\end{proposition}\n\n\\subsection{Real spectral triples of even $KO$-dimension}\n\nWe now turn to real spectral triples of even $KO$-dimension. Because of the considerable qualitative differences between the two cases, we consider separately the cases of $KO$-dimension $0$ or $4 \\bmod 8$ and of $KO$-dimension $2$ or $6 \\bmod 8$. \n\nIn what follows, $(\\hs{H},\\gamma,J)$ is a fixed real $\\alg{A}$-bimodule of even $KO$-dimension $n \\bmod 8$ with multiplicity matrices $(m^\\textup{even},m^\\textup{odd})$; we denote by $\\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J)$ the subspace of $\\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ consisting of $\\Delta$ such that \n\\[\n \\begin{pmatrix}\n 0 & \\Delta^*\\\\\n\\Delta & 0\n\\end{pmatrix} \\in \\ms{D}_0(\\alg{A},\\hs{H};\\gamma,J).\n\\]\nIt then follows that\n\\begin{equation}\n \\ms{D}_0(\\alg{A},\\hs{H},\\gamma,J) \\cong \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J)\n\\end{equation}\nvia the map $D \\mapsto P^\\textup{odd} D P^\\textup{even}$.\n\n\\subsubsection{$KO$-dimension $0$ or $4 \\bmod 8$}\n\nLet us first consider the case where $n = 0$ or $4 \\bmod 8$, {\\it i.e.\\\/}\\ where $\\varepsilon^\\prime = 1$. 
Then $J = J^\\textup{even} \\oplus J^\\textup{odd}$ for anti-unitaries $J^\\textup{even}$ and $J^\\textup{odd}$ on $\\hs{H}^\\textup{even}$ and $\\hs{H}^\\textup{odd}$, respectively, such that $(\\hs{H}^\\textup{even},J^\\textup{even})$ and $(\\hs{H}^\\textup{odd},J^\\textup{odd})$ are real $\\alg{A}$-bimodules of $KO$-dimension $n^\\prime \\bmod 8$, where $n^\\prime = 1$ or $7$ if $n=0$, $3$ or $5$ if $n=4$. In light of Corollary~\\ref{real04unitary}, one can readily check the following analogue of Proposition~\\ref{evendirac}:\n\n\\begin{proposition}\\label{real04dirac}\n The map \n\\[\n \\ms{D}(\\alg{A},\\hs{H},\\gamma,J) \\to \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd},J^\\textup{odd}) \\backslash \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J) \/ \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},J^\\textup{even})\n\\] \ndefined by $[D] \\mapsto [P^\\textup{odd} D P^\\textup{even}]$ is a homeomorphism.\n\\end{proposition}\n\nHere, as before, $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd},J^\\textup{odd})$ acts by multiplication on the left, whilst the group $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},J^\\textup{even})$ acts by multiplication on the right by the inverse.\n\nWe now prove the relevant analogue of Proposition~\\ref{oddrealdirac}:\n\n\\begin{proposition}\\label{even04dirac}\n The map $R_n : \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) \\to \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd},J)$ defined by $R_n(M) := M + J^\\textup{odd} M (J^\\textup{even})^*$ is a surjection intertwining the actions of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd},J^\\textup{odd})$ by multiplication on the left and of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},J^\\textup{even})$ by multiplication on the right by the inverse on $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ and 
$\\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd},J)$, and $\\ker(R_n) \\subseteq \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$.\n\\end{proposition}\n\n\\begin{proof}\n First note that\n\\[\n \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd},J) = \\{\\Delta \\in \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) \\mid \\Delta = J^\\textup{odd} \\Delta (J^\\textup{even})^*\\},\n\\]\nso that $R_n$ is indeed well-defined by construction. Moreover, since $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},J^\\textup{even})$ and $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd},J^\\textup{odd})$ commute by definition with $J^\\textup{even}$ and $J^\\textup{odd}$, respectively, it then follows by construction of $R_n$ that $R_n$ does indeed have the desired intertwining properties.\n\nNext, for $M \\in \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$, we have that $R_n(M) = 0$ if and only if $M = -J^\\textup{odd} M (J^\\textup{even})^*$, but $M$ is right $\\alg{A}$-linear if and only if $J^\\textup{odd} M (J^\\textup{even})^* = \\varepsilon J^\\textup{odd} M J^\\textup{even}$ is left $\\alg{A}$-linear, so that $M \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ as claimed.\n\nFinally, it is easy to check, just as in the proof of Proposition~\\ref{oddrealdirac}, that for $\\Delta \\in \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd},J)$,\n\\[\n \\Delta = R_n\\left(\\frac{1}{2}(E_\\lambda^\\prime + E_\\rho)(\\Delta)\\right),\n\\]\nwhere $\\frac{1}{2}(E_\\lambda^\\prime + E_\\rho)(\\Delta) \\in \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$.\n\\end{proof}\n\nAgain, just as in the case of odd $KO$-dimension, this last result not only implies that the actions of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},J^\\textup{even})$ and 
$\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd},J^\\textup{odd})$ on $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ descend to actions on $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) \/ \\ker(R_n)$, but that $R_n$ descends to an isomorphism $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) \/ \\ker(R_n) \\cong \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J)$ intertwining the actions of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},J^\\textup{even})$ and $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd},J^\\textup{odd})$, thereby yielding the following:\n\n\\begin{corollary}\n Let $(\\hs{H},\\gamma,J)$ be a real $\\alg{A}$-bimodule of $KO$-dimension $n \\bmod 8$ for $n = 0$ or $4$. Then \n\\begin{equation}\n \\ms{D}(\\alg{A},\\hs{H},\\gamma,J) \\cong \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd},J^\\textup{odd}) \\backslash \\left(\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) \/ \\ker(R_n)\\right) \/ \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},J^\\textup{even}).\n\\end{equation}\n\\end{corollary}\n\n\\latin{Mutatis mutandis}, the proof of Lemma~\\ref{oddker} yields the following characterisation of $\\ker(R_n)$:\n\n\\begin{lemma}\\label{even04ker}\nIf $K = (1_{n_\\alpha} \\otimes K_{\\alpha\\beta} \\otimes 1_{n_\\beta})_{\\alpha,\\beta\\in\\spec{\\alg{A}}} \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$, then $K \\in \\ker(R_n)$ if and only if for each $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$ such that $\\alpha \\neq \\beta$,\n\\begin{equation}\n K_{\\beta\\alpha} = -\\overline{K_{\\alpha\\beta}},\n\\end{equation}\nand for each $\\alpha \\in \\spec{\\alg{A}}$,\n\\begin{equation}\n K_{\\alpha\\alpha} \\in \\hs{R}_\\alpha(n) = \n \\begin{cases}\n i M_{m^\\textup{odd}_{\\alpha\\alpha} \\times m^\\textup{even}_{\\alpha\\alpha}}(\\field{R}) &\\text{if $n=0$,}\\\\\n i 
M_{m^\\textup{odd}_{\\alpha\\alpha}\/2 \\times m^\\textup{even}_{\\alpha\\alpha}\/2}(\\field{H}) &\\text{if $n=4$.}\\\\\n \\end{cases}\n\\end{equation}\n\\end{lemma}\n\nNote that such a map $K \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ is therefore entirely specified by the $K_{\\alpha\\beta} \\in M_{m^\\textup{odd}_{\\alpha\\beta} \\times m^\\textup{even}_{\\alpha\\beta}}(\\field{C})$ for $\\alpha < \\beta$ and by the $K_{\\alpha\\alpha} \\in \\hs{R}_\\alpha(n)$.\n\nLet us now describe the construction of an arbitrary Dirac operator $D$ on the real $\\alg{A}$-bimodule $(\\hs{H},\\gamma,J)$ of $KO$-dimension $n = 0$ or $4 \\bmod 8$:\n\\begin{enumerate}\n \\item For $\\alpha$, $\\beta$, $\\gamma \\in \\spec{\\alg{A}}$, choose $M_{\\alpha\\beta}^\\gamma \\in M_{n_\\gamma m^\\textup{odd}_{\\gamma\\beta} \\times n_\\alpha m^\\textup{even}_{\\alpha\\beta}}(\\field{C})$;\n \\item Construct $M \\in \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ by setting for $\\alpha$, $\\beta$, $\\gamma$, $\\delta \\in \\spec{\\alg{A}}$,\n \\begin{equation}\n M_{\\alpha\\beta}^{\\gamma\\delta} =\n \\begin{cases}\n M_{\\alpha\\beta}^\\gamma \\otimes 1_{n_\\beta} &\\text{if $\\beta = \\delta$,}\\\\\n 0 &\\text{otherwise;}\\\\\n \\end{cases}\n \\end{equation}\n \\item Finally, set \n \\begin{equation}\n D =\n \\begin{pmatrix}\n 0 & R_n(M)^*\\\\\n R_n(M) & 0\n \\end{pmatrix}.\n \\end{equation}\n\\end{enumerate}\n\nJust as before, if $R_n$ is non-injective, we can make the substitution $M \\mapsto M + K$ for any $K \\in \\ker(R_n)$ and obtain the same Dirac operator $D$; at the level of components, we have for $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$ such that $\\alpha < \\beta$,\n\\begin{gather*}\n M_{\\alpha\\beta}^\\alpha \\mapsto M_{\\alpha\\beta}^\\alpha + 1_{n_\\alpha} \\otimes K_{\\alpha\\beta}, \\quad M_{\\beta\\alpha}^\\beta \\mapsto M_{\\beta\\alpha}^\\beta + 1_{n_\\beta} \\otimes 
(-\\overline{K_{\\alpha\\beta}})\\\\\n M_{\\alpha\\alpha}^\\alpha \\mapsto M_{\\alpha\\alpha}^\\alpha + 1_{n_\\alpha} \\otimes K_{\\alpha\\alpha}.\n\\end{gather*}\nWith these observations in hand, we can revisit the moduli space $\\ms{D}(\\alg{A},\\hs{H},\\gamma,J)$.\n\nBy the discussion above and Corollaries~\\ref{real17unitary} and~\\ref{real35unitary}, we can identify the space $\\ms{D}_0(\\alg{A},\\hs{H},\\gamma,J)$ with\n\\begin{multline}\n \\ms{D}_0(\\alg{A},m^\\textup{even},m^\\textup{odd},n) := \\prod_{\\alpha \\in \\spec{\\alg{A}}} \\biggl[ M_{n_\\alpha m^\\textup{odd}_{\\alpha\\alpha} \\times n_\\alpha m^\\textup{even}_{\\alpha\\alpha}}(\\field{C}) \/ (1_{n_\\alpha} \\otimes \\hs{R}_\\alpha(n)) \\\\ \\times \\prod_{\\substack{\\beta \\in \\spec{\\alg{A}} \\\\ \\beta > \\alpha}} (M_{n_\\alpha m^\\textup{odd}_{\\alpha\\beta} \\times n_\\alpha m^\\textup{even}_{\\alpha\\beta}}(\\field{C}) \\oplus M_{n_\\beta m^\\textup{odd}_{\\alpha\\beta} \\times n_\\beta m^\\textup{even}_{\\alpha\\beta}}(\\field{C}))\/M_{m^\\textup{odd}_{\\alpha\\beta} \\times m^\\textup{even}_{\\alpha\\beta}}(\\field{C}) \\\\ \\times \\prod_{\\substack{\\beta,\\gamma\\in\\spec{\\alg{A}} \\\\ \\gamma \\neq \\alpha}} M_{n_\\gamma m^\\textup{odd}_{\\gamma\\beta} \\times n_{\\alpha} m^\\textup{even}_{\\alpha\\beta}}(\\field{C})\\biggr],\n\\end{multline}\nwhere $M_{m^\\textup{odd}_{\\alpha\\beta} \\times m^\\textup{even}_{\\alpha\\beta}}(\\field{C})$ is viewed as embedded in the space\n\\[\n M_{n_\\alpha m^\\textup{odd}_{\\alpha\\beta} \\times n_\\alpha m^\\textup{even}_{\\alpha\\beta}}(\\field{C}) \\oplus M_{n_\\beta m^\\textup{odd}_{\\alpha\\beta} \\times n_\\beta m^\\textup{even}_{\\alpha\\beta}}(\\field{C})\n\\]\nvia the map $K \\mapsto (1_{n_\\alpha} \\otimes K) \\oplus (-1_{n_\\beta} \\otimes \\overline{K})$, and identify the groups $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even};J^\\textup{even})$ and $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd};J^\\textup{odd})$ with 
$\\U(\\alg{A},m^\\textup{even},n^\\prime)$ and $\\U(\\alg{A},m^\\textup{odd},n^\\prime)$, respectively. Then the actions of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even};J^\\textup{even})$ and $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{odd};J^\\textup{odd})$ on $\\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J)$ correspond under these identifications to the actions of the groups $\\U(\\alg{A},m^\\textup{even},n^\\prime)$ and $\\U(\\alg{A},m^\\textup{odd},n^\\prime)$, respectively, on $\\ms{D}_0(\\alg{A},m^\\textup{even},m^\\textup{odd},n)$ defined by having \n\\[\n (U_{\\alpha\\alpha}^\\textup{odd};U_{\\alpha\\beta}^\\textup{odd}) \\in \\U(\\alg{A},m^\\textup{odd};n^\\prime), \\quad (U_{\\alpha\\alpha}^\\textup{even};U_{\\alpha\\beta}^\\textup{even}) \\in \\U(\\alg{A},m^\\textup{even};n^\\prime)\n\\]\nact on $\\bigl([M_{\\alpha\\alpha}^\\alpha];[(M_{\\alpha\\beta}^\\alpha,M_{\\beta\\alpha}^\\beta)];M_{\\alpha\\beta}^\\gamma\\bigr) \\in \\ms{D}_0(\\alg{A},m^\\textup{even},m^\\textup{odd},n)$ by\n\\begin{align*}\n [M_{\\alpha\\alpha}^\\alpha] &\\mapsto \\bigl[(1_{n_\\alpha} \\otimes U_{\\alpha\\alpha}^\\textup{odd}) M_{\\alpha\\alpha}^\\alpha\\bigr];\\\\\n [(M_{\\alpha\\beta}^\\alpha,M_{\\beta\\alpha}^\\beta)] &\\mapsto \\biggl[\\bigl((1_{n_\\alpha} \\otimes U_{\\alpha\\beta}^\\textup{odd}) M_{\\alpha\\beta}^\\alpha,(1_{n_\\beta} \\otimes \\overline{U_{\\alpha\\beta}^\\textup{odd}})M_{\\beta\\alpha}^\\beta\\bigr)\\biggr];\\\\\n M_{\\alpha\\beta}^\\gamma &\\mapsto\n \\begin{cases}\n (1_{n_\\gamma} \\otimes U_{\\gamma\\beta}^\\textup{odd}) M_{\\alpha\\beta}^\\gamma &\\text{if $\\gamma < \\beta$,}\\\\\n (1_{n_\\gamma} \\otimes \\overline{U_{\\beta\\gamma}^\\textup{odd}}) M_{\\alpha\\beta}^\\gamma &\\text{if $\\gamma > \\beta$;}\n \\end{cases}\n\\end{align*}\nand\n\\begin{align*}\n [M_{\\alpha\\alpha}^\\alpha] &\\mapsto \\bigl[M_{\\alpha\\alpha}^\\alpha (1_{n_\\alpha} \\otimes (U_{\\alpha\\alpha}^\\textup{even})^*)\\bigr];\\\\\n [(M_{\\alpha\\beta}^\\alpha,M_{\\beta\\alpha}^\\beta)] 
&\\mapsto \\biggl[\\bigl(M_{\\alpha\\beta}^\\alpha (1_{n_\\alpha} \\otimes (U_{\\alpha\\beta}^\\textup{even})^*), M_{\\beta\\alpha}^\\beta(1_{n_\\beta} \\otimes (U_{\\alpha\\beta}^\\textup{even})^T)\\bigr)\\biggr];\\\\\n M_{\\alpha\\beta}^\\gamma &\\mapsto\n \\begin{cases}\n M_{\\alpha\\beta}^\\gamma (1_{n_\\alpha} \\otimes (U_{\\alpha\\beta}^\\textup{even})^*) &\\text{if $\\alpha < \\beta$,}\\\\\n M_{\\alpha\\beta}^\\gamma (1_{n_\\alpha} \\otimes (U_{\\beta\\alpha}^\\textup{even})^T) &\\text{if $\\alpha > \\beta$;}\n \\end{cases}\n\\end{align*}\nrespectively. We have therefore proved the following:\n\n\\begin{proposition}\n Let $(\\hs{H},\\gamma,J)$ be a real $\\alg{A}$-bimodule of even $KO$-dimension $n \\bmod 8$ for $n = 0$ or $4$, with multiplicity matrices $(m^\\textup{even},m^\\textup{odd})$. Then\n \\begin{equation}\n \\ms{D}(\\alg{A},\\hs{H},\\gamma,J) \\cong \\U(\\alg{A},m^\\textup{odd},n^\\prime) \\backslash \\ms{D}_0(\\alg{A},m^\\textup{even},m^\\textup{odd},n) \/ \\U(\\alg{A},m^\\textup{even},n^\\prime).\n \\end{equation}\n\\end{proposition}\n\nIt is worth noting that considerable simplifications are obtained in the quasi-orientable case, as all components of the form $M_{\\alpha\\beta}^\\alpha \\otimes 1_{n_\\beta}$ of $M \\in \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ must necessarily vanish, as must $\\ker(R_n)$ itself. In particular, then, one is left with\n\\[\n \\ms{D}_0(\\alg{A},m^\\textup{even},m^\\textup{odd},n) = \\prod_{\\alpha,\\beta,\\gamma\\in\\spec{\\alg{A}}} M_{n_\\gamma m^\\textup{odd}_{\\gamma\\beta} \\times n_{\\alpha} m^\\textup{even}_{\\alpha\\beta}}(\\field{C}).\n\\]\n\n\\subsubsection{$KO$-dimension $2$ or $6 \\bmod 8$}\\label{ko6}\n\nLet us now consider the case where $n = 2$ or $n=6 \\bmod 8$, {\\it i.e.\\\/}\\ where $\\varepsilon^\\prime = -1$. 
Then\n\\[\n J =\n\\begin{pmatrix}\n 0 & \\varepsilon \\tilde{J}^*\\\\\n \\tilde{J} & 0\n\\end{pmatrix}\n\\]\nfor $\\tilde{J} : \\hs{H}^\\textup{even} \\to \\hs{H}^\\textup{odd}$ anti-unitary, and $m^\\textup{odd} = (m^\\textup{even})^T$. In light of Corollary~\\ref{real26unitary}, one can easily establish, along the lines of Propositions~\\ref{evendirac} and~\\ref{even04dirac}, the following result:\n\n\\begin{proposition}\n Let $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$ act on $\\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J)$ by\n\\[\n (U,\\Delta) \\mapsto \\tilde{J}U\\tilde{J}^* \\Delta U^*\n\\]\nfor $U \\in \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$ and $\\Delta \\in \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J)$. Then the map\n\\[\n \\ms{D}(\\alg{A},\\hs{H},\\gamma,J) \\to \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J)\/\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})\n\\]\ndefined by $[D] \\mapsto [P^\\textup{odd} D P^\\textup{even}]$ is a homeomorphism.\n\\end{proposition}\n\nIn the same way, we can define an action of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$ on $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$. 
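\n\nTo fix ideas, consider a toy example (an illustration only, under the stated simplifying assumptions, and not needed in the sequel): let $\\alg{A} = \\field{C}$ act centrally on $\\hs{H} = \\field{C}^p \\oplus \\field{C}^p$ in $KO$-dimension $6 \\bmod 8$, so that $\\varepsilon = 1$, and let $\\tilde{J}$ be componentwise complex conjugation. The condition $\\Delta = \\varepsilon \\tilde{J} \\Delta^* \\tilde{J}$ then reads $\\Delta = \\Delta^T$, so that $\\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J)$ consists of the complex symmetric $p \\times p$ matrices, on which $U \\in \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even}) = \\U(p)$ acts by\n\\[\n \\Delta \\mapsto \\tilde{J} U \\tilde{J}^* \\Delta U^* = \\overline{U} \\Delta \\overline{U}^T.\n\\]\nBy the Takagi factorisation, every complex symmetric matrix is unitarily congruent to a diagonal matrix with non-negative entries, so in this toy example the moduli space is parametrised by the ordered singular values of $\\Delta$.\n\n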
We now give the relevant analogue of Propositions~\\ref{oddrealdirac} and~\\ref{even04dirac}:\n\n\\begin{proposition}\\label{even26dirac}\n The map $R_n : \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) \\to \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J)$ defined by $R_n(M) := M + \\varepsilon \\tilde{J} M^* \\tilde{J}$ is a surjection intertwining the actions of the group $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$ on $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ and $\\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J)$, and $\\ker(R_n) \\subset \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$.\n\\end{proposition}\n\n\\begin{proof}\n First note that\n\\[\n \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J) = \\{\\Delta \\in \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) \\mid \\Delta = \\varepsilon \\tilde{J} \\Delta^* \\tilde{J}\\},\n\\]\nas can be checked by direct calculation, so that $R_n$ is indeed well-defined. 
It also readily follows by construction of $R_n$ and the definition of the actions of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$ that $R_n$ has the desired intertwining properties.\n\nNow, for $M \\in \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$, one has that $R_n(M) = 0$ if and only if $M = -\\varepsilon \\tilde{J} M^* \\tilde{J}$, but $\\tilde{J} M^* \\tilde{J}$ is manifestly left $\\alg{A}$-linear, so that $M \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$, as claimed.\n\nFinally, just as in the proof of Propositions~\\ref{oddrealdirac} and~\\ref{even04dirac}, one can easily check that for $\\Delta \\in \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd},J)$,\n\\[\n \\Delta = R_n\\bigl(\\frac{1}{2}(E^\\prime_\\lambda + E_\\rho)(\\Delta)\\bigr),\n\\]\nwhere $\\frac{1}{2}(E^\\prime_\\lambda + E_\\rho)(\\Delta)$ is right $\\alg{A}$-linear.\n\\end{proof}\n\nJust as in the earlier cases, the action of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$ on $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$ descends to an action on the quotient $\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) \/ \\ker(R_n)$, so that $R_n$ descends to an $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$-isomorphism\n\\begin{equation}\n \\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd}) \/ \\ker(R_n) \\cong \\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J),\n\\end{equation}\nthereby yielding the following:\n\n\\begin{corollary}\n Let $(\\hs{H},\\gamma,J)$ be a real $\\alg{A}$-bimodule of $KO$-dimension $n \\bmod 8$ for $n = 2$ or $6$. 
Then\n\\begin{equation}\n \\ms{D}(\\alg{A},\\hs{H},\\gamma,J) \\cong (\\bdd^{\\textup{R}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})\/\\ker(R_n)) \/ \\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even}).\n\\end{equation}\n\\end{corollary}\n\nAgain, \\latin{mutatis mutandis}, the proof of Lemma~\\ref{oddker} yields the following characterisation of $\\ker(R_n)$:\n\n\\begin{lemma}\n If $K = (1_{n_\\alpha} \\otimes K_{\\alpha\\beta} \\otimes 1_{n_\\beta})_{\\alpha,\\beta\\in\\spec{\\alg{A}}} \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd})$, then $K \\in \\ker(R_n)$ if and only if for each $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$ such that $\\alpha \\neq \\beta$,\n\\begin{equation}\n K_{\\beta\\alpha} = -\\varepsilon K_{\\alpha\\beta}^T,\n\\end{equation}\nand for each $\\alpha \\in \\spec{\\alg{A}}$,\n\\begin{equation}\n K_{\\alpha\\alpha} \\in \\hs{R}_\\alpha(n) =\n\\begin{cases}\n \\Sym_{m^\\textup{even}_{\\alpha\\alpha}}(\\field{C}) &\\text{if $n = 2$,}\\\\\n \\mathfrak{so}(m^\\textup{even}_{\\alpha\\alpha},\\field{C}) &\\text{if $n = 6$.}\n\\end{cases}\n\\end{equation}\n\\end{lemma}\n\nThus, such a map $K \\in \\ker(R_n)$ is entirely specified by the components $K_{\\alpha\\beta} \\in M_{m^\\textup{even}_{\\beta\\alpha} \\times m^\\textup{even}_{\\alpha\\beta}}(\\field{C})$ for $\\alpha < \\beta$ and by the $K_{\\alpha\\alpha} \\in \\hs{R}_\\alpha(n)$.\n\nNote that the discussion of the construction of Dirac operators and of the freedom in the construction provided by $\\ker(R_n)$ in the case of $KO$-dimension $0$ or $4 \\bmod 8$ holds also in this case. 
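\n\nTo make the diagonal constraint concrete, note that it is just the off-diagonal relation specialised to $\\alpha = \\beta$, namely $K_{\\alpha\\alpha} = -\\varepsilon K_{\\alpha\\alpha}^T$; assuming the standard sign table, with $\\varepsilon = -1$ for $n = 2$ and $\\varepsilon = +1$ for $n = 6$, this reads\n\\[\n K_{\\alpha\\alpha} =\n\\begin{cases}\n +K_{\\alpha\\alpha}^T &\\text{if $n = 2$,}\\\\\n -K_{\\alpha\\alpha}^T &\\text{if $n = 6$,}\n\\end{cases}\n\\]\nwhich is precisely membership in $\\Sym_{m^\\textup{even}_{\\alpha\\alpha}}(\\field{C})$ or $\\mathfrak{so}(m^\\textup{even}_{\\alpha\\alpha},\\field{C})$, respectively.\n\n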
Thus we can identify $\\ms{D}(\\alg{A},\\hs{H},\\gamma,J)$ with\n\\begin{multline}\n \\ms{D}_0(\\alg{A},m^\\textup{even},n) := \\prod_{\\alpha \\in \\spec{\\alg{A}}} \\biggl[ M_{n_\\alpha m^\\textup{even}_{\\alpha\\alpha}}(\\field{C}) \/ \\bigl(1_{n_\\alpha} \\otimes \\hs{R}_\\alpha(n)\\bigr) \\\\\n\\times \\prod_{\\substack{\\beta \\in \\spec{\\alg{A}} \\\\ \\beta > \\alpha}} \\bigl(M_{n_\\alpha m^\\textup{even}_{\\beta\\alpha} \\times n_\\alpha m^\\textup{even}_{\\alpha\\beta}}(\\field{C}) \\oplus M_{n_\\beta m^\\textup{even}_{\\alpha\\beta} \\times n_\\beta m^\\textup{even}_{\\beta\\alpha}}(\\field{C})\\bigr) \/ M_{m^\\textup{even}_{\\beta\\alpha} \\times m^\\textup{even}_{\\alpha\\beta}}(\\field{C}) \\\\\n\\times \\prod_{\\substack{\\beta,\\gamma\\in\\spec{\\alg{A}}\\\\\\gamma \\neq \\alpha}} M_{n_\\gamma m^\\textup{even}_{\\beta\\gamma} \\times n_\\alpha m^\\textup{even}_{\\alpha\\beta}}(\\field{C})\\biggr],\n\\end{multline}\nwhere $M_{m^\\textup{even}_{\\beta\\alpha} \\times m^\\textup{even}_{\\alpha\\beta}}(\\field{C})$ is viewed as embedded in the space\n\\[\n M_{n_\\alpha m^\\textup{even}_{\\beta\\alpha} \\times n_\\alpha m^\\textup{even}_{\\alpha\\beta}}(\\field{C}) \\oplus M_{n_\\beta m^\\textup{even}_{\\alpha\\beta} \\times n_\\beta m^\\textup{even}_{\\beta\\alpha}}(\\field{C})\n\\]\nvia the map $K \\mapsto (1_{n_\\alpha} \\otimes K) \\oplus (-\\varepsilon 1_{n_\\beta} \\otimes K^T)$, and identify $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$ with $\\U(\\alg{A},m^\\textup{even})$. 
Then the action of $\\U^{\\textup{LR}}_\\alg{A}(\\hs{H}^\\textup{even})$ on $\\mathcal{L}^1_\\alg{A}(\\hs{H}^\\textup{even},\\hs{H}^\\textup{odd};J)$ corresponds under these identifications with the action of $\\U(\\alg{A},m^\\textup{even})$ on $\\ms{D}_0(\\alg{A},m^\\textup{even},n)$ defined by having $(U_{\\alpha\\beta}) \\in \\U(\\alg{A},m^\\textup{even})$ act on $\\bigl([M_{\\alpha\\alpha}^\\alpha];[(M_{\\alpha\\beta}^\\alpha,M_{\\beta\\alpha}^\\beta)];M_{\\alpha\\beta}^\\gamma\\bigr) \\in \\ms{D}_0(\\alg{A},m^\\textup{even},n)$ by\n\\begin{align*}\n [M_{\\alpha\\alpha}^\\alpha] &\\mapsto \\bigl[(1_{n_\\alpha} \\otimes \\overline{U_{\\alpha\\alpha}}) M_{\\alpha\\alpha}^\\alpha (1_{n_\\alpha} \\otimes U_{\\alpha\\alpha}^*)\\bigr];\\\\\n \\bigl[(M_{\\alpha\\beta}^\\alpha,M_{\\beta\\alpha}^\\beta)\\bigr] &\\mapsto \\biggl[\\bigl((1_{n_\\alpha} \\otimes \\overline{U_{\\beta\\alpha}}) M_{\\alpha\\beta}^\\alpha (1_{n_\\alpha} \\otimes U_{\\alpha\\beta}^*), (1_{n_\\beta} \\otimes \\overline{U_{\\alpha\\beta}}) M_{\\beta\\alpha}^\\beta (1_{n_\\beta} \\otimes U_{\\beta\\alpha}^*)\\bigr)\\biggr];\\\\\n M_{\\alpha\\beta}^\\gamma &\\mapsto (1_{n_\\gamma} \\otimes \\overline{U_{\\beta\\gamma}}) M_{\\alpha\\beta}^\\gamma (1_{n_\\alpha} \\otimes U_{\\alpha\\beta}^*).\n\\end{align*}\nThis, then, proves the following:\n\n\\begin{proposition}\\label{moduli26}\n Let $(\\hs{H},\\gamma,J)$ be a real $\\alg{A}$-bimodule of even $KO$-dimension $n \\bmod 8$ for $n = 2$ or $6$, with multiplicity matrices $(m^\\textup{even},(m^\\textup{even})^T)$. 
Then\n\\begin{equation}\n \\ms{D}(\\alg{A},\\hs{H},\\gamma,J) \\cong \\ms{D}_0(\\alg{A},m^\\textup{even},n) \/ \\U(\\alg{A},m^\\textup{even}).\n\\end{equation}\n\\end{proposition}\n\nAgain, considerable simplifications are obtained in the quasi-orientable case, just as for $KO$-dimension $0$ or $4 \\bmod 8$.\n\n\n\\subsection{Dirac operators in the Chamseddine--Connes--Marcolli model}\n\nLet us now apply the above results on Dirac operators and moduli spaces thereof to the bimodules appearing in the Chamseddine--Connes--Marcolli model. \n\nWe begin with $(\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F)$ as an $S^0$-real $\\alg{A}_{LR}$-bimodule of $KO$-dimension $6 \\bmod 8$, which, as we shall now see, is essentially $S^0$-real in structure:\n\n\\begin{proposition}\n For the $S^0$-real $\\alg{A}_{LR}$-bimodule $(\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F)$ of $KO$-dimen\\-sion $6 \\bmod 8$,\n\\[\n \\ms{D}_0(\\alg{A}_{LR},\\hs{H}_F,\\gamma_F,J_F) = \\ms{D}_0(\\alg{A}_{LR},\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F),\n\\]\nand\n\\[\n \\U^{\\textup{LR}}_{\\alg{A}_{LR}}(\\hs{H}_F,\\gamma_F,J_F) = \\U^{\\textup{LR}}_{\\alg{A}_{LR}}(\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F),\n\\]\nso that\n\\[\n \\ms{D}(\\alg{A}_{LR},\\hs{H}_F,\\gamma_F,J_F) = \\ms{D}(\\alg{A}_{LR},\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F).\n\\]\n\n\\end{proposition}\n\n\\begin{proof}\n To prove the first part of the claim, by Proposition~\\ref{even26dirac}, it suffices to show that any right $\\alg{A}_{LR}$-linear operator $\\hs{H}_F^\\textup{even} \\to \\hs{H}_F^\\textup{odd}$ commutes with $\\epsilon_F$. Thus, let $T \\in \\bdd^{\\textup{R}}_{\\alg{A}_{LR}}(\\hs{H}_F^\\textup{even},\\hs{H}_F^\\textup{odd})$. 
Then, since the signed multiplicity matrix $\\mu$ of $(\\hs{H}_F,\\gamma_F)$ as an orientable even $\\alg{A}_{LR}$-bimodule is given by\n\\[\n \\mu =\n N \\begin{pmatrix}\n 0 & 0 & -1 & +1 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0\\\\\n +1 & 0 & 0 & 0 & +1 & 0\\\\\n -1 & 0 & 0 & 0 & -1 & 0\\\\\n 0 & 0 & -1 & +1 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0\n\\end{pmatrix},\n\\]\nit follows from Proposition~\\ref{linear} that the only non-zero components of $T$ are $T_{\\rep{2}_R \\rep{1}}^{\\rep{2}_L \\rep{1}}$ and $T_{\\rep{2}_R \\rep{3}}^{\\rep{2}_L \\rep{3}}$, which both have domain and range within $\\hs{H}_f = (\\hs{H}_F)_i$, where $\\epsilon_F$ acts as the identity. Thus, $T$ commutes with $\\epsilon_F$.\n\nTo prove the next part of the claim, it suffices to show that any left and right $\\alg{A}_{LR}$-linear operator on $\\hs{H}_F$ commutes with $\\epsilon_F$. But again, if $K \\in \\bdd^{\\textup{LR}}_{\\alg{A}_{LR}}(\\hs{H}_F)$, then the only non-zero components of $K$ are of the form $K_{\\alpha\\beta}^{\\alpha\\beta}$, each of which therefore has both domain and range either within $\\hs{H}_f$ or $\\hs{H}_{\\overline{f}} = J_F \\hs{H}_f$, so that $K$ commutes with $\\epsilon_F$. 
The last part of the claim is then an immediate consequence of the first two parts.\n\\end{proof}\n\nThus, by Proposition~\\ref{diracs0reduction}, we have that\n\\begin{equation}\n \\ms{D}_0(\\alg{A}_{LR},\\hs{H}_F,\\gamma_F,J_F) = \\ms{D}_0(\\alg{A}_{LR},\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F) \\cong \\ms{D}_0(\\alg{A}_{LR},\\hs{H}_f,\\gamma_f)\n\\end{equation}\nand\n\\begin{equation}\n \\ms{D}(\\alg{A}_{LR},\\hs{H}_F,\\gamma_F,J_F) = \\ms{D}(\\alg{A}_{LR},\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F) \\cong \\ms{D}(\\alg{A}_{LR},\\hs{H}_f,\\gamma_f),\n\\end{equation}\nwhere $(\\hs{H}_f,\\gamma_f) = ((\\hs{H}_F)_i,(\\gamma_F)_i)$ is the orientable even $\\alg{A}_{LR}$-bimodule with signed multiplicity matrix\n\\[\n \\mu_f =\n N \\begin{pmatrix}\n 0 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0\\\\\n +1 & 0 & 0 & 0 & +1 & 0\\\\\n -1 & 0 & 0 & 0 & -1 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0\n\\end{pmatrix}.\n\\]\nIn particular, then, $(\\hs{H}_F,\\gamma_F,J_F)$ as a real $\\alg{A}_{LR}$-bimodule admits no \\term{off-diagonal} Dirac operators, that is, Dirac operators with non-zero $P_{-i} D P_i : \\hs{H}_f \\to \\hs{H}_{\\overline{f}}$, or equivalently, that have non-vanishing commutator with $\\epsilon_F$. 
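\n\nIn other words, with respect to the decomposition $\\hs{H}_F = \\hs{H}_f \\oplus \\hs{H}_{\\overline{f}}$, every Dirac operator for $(\\hs{H}_F,\\gamma_F,J_F)$ takes the block-diagonal form (a schematic sketch; recall that $\\varepsilon = +1$ in $KO$-dimension $6 \\bmod 8$, so that $J_F^{-1} = J_F$)\n\\[\n D =\n\\begin{pmatrix}\n D_f & 0\\\\\n 0 & J_F D_f J_F\n\\end{pmatrix},\n\\]\nwhere $D_f := P_i D P_i$, and the lower block is determined by $D_f$ via the commutation of $D$ with $J_F$.\n\n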
Let us now examine $\\ms{D}_0(\\alg{A}_{LR},\\hs{H}_F,\\gamma_F,J_F)$ and $\\ms{D}(\\alg{A}_{LR},\\hs{H}_F,\\gamma_F,J_F)$, or rather, $\\ms{D}_0(\\alg{A}_{LR},\\hs{H}_f,\\gamma_f)$ and $\\ms{D}(\\alg{A}_{LR},\\hs{H}_f,\\gamma_f)$, in more detail.\n\nFirst, it follows from the form of $\\mu_f$ and Proposition~\\ref{linear} that $\\bdd^{\\textup{L}}_{\\alg{A}_{LR}}(\\hs{H}_f^\\textup{even},\\hs{H}_f^\\textup{odd})$ vanishes, whilst\n\\[\n \\bdd^{\\textup{R}}_{\\alg{A}_{LR}}(\\hs{H}_f^\\textup{even},\\hs{H}_f^\\textup{odd}) = M_{2N}(\\field{C}) \\oplus (M_{2N}(\\field{C}) \\otimes 1_3) \\cong M_{2N}(\\field{C}) \\oplus M_{2N}(\\field{C}),\n\\]\nso that any Dirac operator on $\\hs{H}_f$ (and hence on $\\hs{H}_F$) is completely specified by a choice of $M_{\\rep{2}_L \\rep{1}}^{\\rep{2}_R}$, $M_{\\rep{2}_L \\rep{3}}^{\\rep{2}_R} \\in M_{2N}(\\field{C})$. Indeed, if $(m^\\textup{even},m^\\textup{odd})$ denotes the pair of multiplicity matrices of $(\\hs{H}_f,\\gamma_f)$, then, in the notation of subsection~\\ref{ko6},\n\\[\n \\ms{D}_0(\\alg{A}_{LR},m^\\textup{even},6) = M_{2N}(\\field{C}) \\oplus M_{2N}(\\field{C}).\n\\]\nAt the same time,\n\\[\n \\U^{\\textup{LR}}_{\\alg{A}_{LR}}(\\hs{H}_f^\\textup{even}) = (1_2 \\otimes \\U(N)) \\oplus (1_2 \\otimes \\U(N) \\otimes 1_3) \\cong \\U(N) \\times \\U(N) =: \\U(\\alg{A}_{LR},m^\\textup{even})\n\\]\nand\n\\[\n \\U^{\\textup{LR}}_{\\alg{A}_{LR}}(\\hs{H}_f^\\textup{odd}) = (1_2 \\otimes \\U(N)) \\oplus (1_2 \\otimes \\U(N) \\otimes 1_3) \\cong \\U(N) \\times \\U(N) =: \\U(\\alg{A}_{LR},m^\\textup{odd}).\n\\]\nIt then follows that\n\\begin{align}\n \\ms{D}(\\alg{A}_{LR},\\hs{H}_f,\\gamma_f) &\\cong \\U(\\alg{A}_{LR},m^\\textup{odd}) \\backslash \\ms{D}_0(\\alg{A}_{LR},m^\\textup{even},6) \/ \\U(\\alg{A}_{LR},m^\\textup{even}) \\\\\n &= \\bigl(\\U(N) \\backslash M_{2N}(\\field{C}) \/ \\U(N)\\bigr)^2,\n\\end{align}\nwhere $\\U(N)$ acts on the left by multiplication and on the right by multiplication by the inverse as $1_2 
\\otimes U(N)$. The two factors of the form $\\U(N) \\backslash M_{2N}(\\field{C}) \/ \\U(N)$ can thus be viewed as the parameter spaces of the components $M_{\\rep{2}_L \\rep{1}}^{\\rep{2}_R}$ and $M_{\\rep{2}_L \\rep{3}}^{\\rep{2}_R}$, respectively. \n\nLet us now consider $(\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F)$ as an $S^0$-real $\\alg{A}_F$-bimodule, so that the multiplicity matrices $(m^\\textup{even},m^\\textup{odd})$ of $(\\hs{H}_F,\\gamma_F)$ are given by\n\\[\n m^\\textup{even} = N\n\\begin{pmatrix}\n1 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0\\\\\n1 & 0 & 0 & 1 & 0\\\\\n1 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0\n\\end{pmatrix}, \\quad\nm^\\textup{odd} = N\n\\begin{pmatrix}\n1 & 0 & 1 & 1 & 0\\\\\n1 & 0 & 0 & 1 & 0\\\\\n0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 1 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0\n\\end{pmatrix} = (m^\\textup{even})^T.\n\\]\n\nNow it follows from the form of $(m^\\textup{even},m^\\textup{odd})$ that\n\\begin{multline*}\n \\bdd^{\\textup{R}}_{\\alg{A}_F}(\\hs{H}_F^\\textup{even},\\hs{H}_F^\\textup{odd}) = M_N(\\field{C})^{\\oplus 2} \\oplus M_{N \\times 2N}(\\field{C})^{\\oplus 2} \\oplus M_{N \\times 3N}(\\field{C})^{\\oplus 2} \\\\ \\oplus (M_{N \\times 2N}(\\field{C}) \\otimes 1_3)^{\\oplus 2},\n\\end{multline*}\nwhilst\n\\[\n \\ker(R_6) = \\mathfrak{so}(N,\\field{C}) \\subseteq M_N(\\field{C})\n\\]\nfor the copy of $M_N(\\field{C})$ corresponding to $\\bdd^{\\textup{R}}_{\\alg{A}_F}((\\hs{H}_F^\\textup{even})_{\\rep{1}\\rep{1}},(\\hs{H}_F^\\textup{odd})_{\\rep{1}\\rep{1}})$. Since\n$M_N(\\field{C}) = \\Sym_N(\\field{C}) \\oplus \\mathfrak{so}(N,\\field{C})$ is just the decomposition of a matrix into its symmetric and antisymmetric parts, $M_N(\\field{C})\/\\mathfrak{so}(N,\\field{C})$ can be identified with $\\Sym_N(\\field{C})$, so that\n\\begin{align*}\n &\\ms{D}_0(\\alg{A}_F,\\hs{H}_F,\\gamma_F,J_F)\\\\ \\cong &\\bdd^{\\textup{R}}_{\\alg{A}_F}(\\hs{H}_F^\\textup{even},\\hs{H}_F^\\textup{odd}) \/ \\ker(R_6)\\\\\n = &\\Sym_N(\\field{C}) \\oplus M_N(\\field{C}) \\oplus M_{N \\times 2N}(\\field{C})^{\\oplus 2} \\oplus M_{N \\times 3N}(\\field{C})^{\\oplus 2} \\oplus (M_{N \\times 2N}(\\field{C}) \\otimes 1_3)^{\\oplus 2}.\n\\end{align*}\nThus, a Dirac operator $D$, which is specified by a choice of class \n\\[\n [M] \\in \\bdd^{\\textup{R}}_{\\alg{A}_F}(\\hs{H}_F^\\textup{even},\\hs{H}_F^\\textup{odd}) \/ \\ker(R_6),\n\\]\nis therefore specified in turn by the choice of the following matrices:\n\\begin{itemize}\n \\item $M_{\\rep{1}\\rep{1}}^{\\rep{1}} \\in \\Sym_N(\\field{C})$, $M_{\\rep{1}\\rep{1}}^{\\crep{1}} \\in M_N(\\field{C})$;\n \\item $M_{\\rep{2}\\rep{1}}^{\\rep{1}}$, $M_{\\rep{2}\\rep{1}}^{\\crep{1}} \\in M_{N \\times 2N}(\\field{C})$;\n \\item $M_{\\rep{3}\\rep{1}}^{\\rep{1}}$, $M_{\\rep{3}\\rep{1}}^{\\crep{1}} \\in M_{N \\times 3N}(\\field{C})$;\n \\item $M_{\\rep{2}\\rep{3}}^{\\rep{1}}$, $M_{\\rep{2}\\rep{3}}^{\\crep{1}} \\in M_{N \\times 2N}(\\field{C})$.\n\\end{itemize}\nIndeed, it follows that\n\\begin{multline}\n \\ms{D}_0(\\alg{A}_F,m^\\textup{even},6) = \\Sym_N(\\field{C}) \\oplus M_N(\\field{C}) \\oplus M_{N \\times 2N}(\\field{C})^{\\oplus 2} \\oplus M_{N \\times 3N}(\\field{C})^{\\oplus 2} \\\\ \\oplus M_{N \\times 2N}(\\field{C})^{\\oplus 2}.\n\\end{multline}\n\nNext, we have that $\\U(\\alg{A}_F,m^\\textup{even}) = \\U(N)^6$, with a copy of $\\U(N)$ corresponding to each of $(\\hs{H}_F^\\textup{even})_{\\rep{1}\\rep{1}}$, $(\\hs{H}_F^\\textup{even})_{\\rep{1}\\crep{1}}$, $(\\hs{H}_F^\\textup{even})_{\\rep{2}\\rep{1}}$, $(\\hs{H}_F^\\textup{even})_{\\rep{2}\\rep{3}}$, 
$(\\hs{H}_F^\\textup{even})_{\\rep{3}\\rep{1}}$, and $(\\hs{H}_F^\\textup{even})_{\\rep{3}\\crep{1}}$. Then, by Proposition~\\ref{moduli26},\n\\begin{equation}\n \\ms{D}(\\alg{A}_F,\\hs{H}_F,\\gamma_F,J_F) \\cong \\ms{D}_0(\\alg{A}_F,m^\\textup{even},6) \/ \\U(\\alg{A}_F,m^\\textup{even})\n\\end{equation}\nfor the action of $\\U(\\alg{A}_F,m^\\textup{even})$ on $\\ms{D}_0(\\alg{A}_F,m^\\textup{even},6)$ given by having the element $(U_{\\alpha\\beta}) \\in \\U(\\alg{A}_F,m^\\textup{even})$ act on $(M_{\\alpha\\beta}^\\gamma) \\in \\ms{D}_0(\\alg{A}_F,m^\\textup{even},6)$ by\n\\[\n M_{\\alpha\\beta}^\\gamma \\mapsto (1_{n_\\gamma} \\otimes \\overline{U_{\\beta\\gamma}}) M_{\\alpha\\beta}^\\gamma (1_{n_\\alpha} \\otimes U_{\\alpha\\beta}^*).\n\\]\nNote that in the notation of~\\cite{CM08}*{\\S\\S 13.4, 13.5}, for $(M_{\\alpha\\beta}^\\gamma) \\in \\ms{D}_0(\\alg{A}_F,m^\\textup{even},6)$,\n\\[\n M_{\\rep{1}\\rep{1}}^{\\rep{1}} = \\frac{1}{2}\\Upsilon_R,\n\\]\nso that the so-called Majorana mass term is already present in its final form, whilst for $U \\in \\U(\\alg{A}_F,m^\\textup{even})$,\n\\[\n U = (U_{\\rep{1}\\rep{1}},U_{\\rep{1}\\crep{1}},U_{\\rep{2}\\rep{1}},U_{\\rep{2}\\rep{3}},U_{\\rep{3}\\rep{1}},U_{\\rep{3}\\crep{1}}) = (\\overline{V_2},\\overline{V_1},V_3,W_3,\\overline{W_2},\\overline{W_1}).\n\\]\n\nFinally, let us compute the sub-moduli space $\\ms{D}(\\alg{A}_F,\\hs{H}_F,\\gamma_F,J_F;\\field{C}_F)$ for \n\\[\n \\field{C}_F = \\{(\\zeta,\\diag(\\zeta,\\overline{\\zeta}),0) \\in \\alg{A}_F \\mid \\zeta \\in \\field{C}\\} \\cong \\field{C}.\n\\]\nIt is easy to see that $[M] \\in \\bdd^{\\textup{R}}_{\\alg{A}_F}(\\hs{H}_F^\\textup{even},\\hs{H}_F^\\textup{odd}) \/ \\ker(R_6)$ yields an element of the subspace $\\ms{D}_0(\\alg{A}_F,\\hs{H}_F,\\gamma_F,J_F;\\field{C}_F)$ if and only if $M$ commutes with $\\lambda(\\field{C}_F)$, but this holds if and only if for all $\\zeta \\in \\field{C}$ and $\\beta \\in \\spec{\\alg{A}_F}$,\n\\begin{align*}\n \\zeta 
M_{\\rep{1}\\beta}^{\\rep{1}} &= M_{\\rep{1}\\beta}^{\\rep{1}} \\zeta, \n &\\overline{\\zeta} M_{\\rep{1}\\beta}^{\\crep{1}} &= M_{\\rep{1}\\beta}^{\\crep{1}} \\zeta,\\\\\n \\zeta M_{\\rep{2}\\beta}^{\\rep{1}} &= M_{\\rep{2}\\beta}^{\\rep{1}} (\\diag(\\zeta,\\overline{\\zeta}) \\otimes 1_N),\n &\\overline{\\zeta} M_{\\rep{2}\\beta}^{\\crep{1}} &= M_{\\rep{2}\\beta}^{\\crep{1}} (\\diag(\\zeta,\\overline{\\zeta}) \\otimes 1_N),\\\\\n 0 M_{\\rep{3}\\beta}^{\\rep{1}} &= M_{\\rep{3}\\beta}^{\\rep{1}} \\zeta,\n &0 M_{\\rep{3}\\beta}^{\\crep{1}} &= M_{\\rep{3}\\beta}^{\\crep{1}} \\zeta,\n\\end{align*}\nwhich is in turn equivalent to having $M_{\\rep{1}\\rep{1}}^{\\crep{1}}$, $M_{\\rep{3}\\rep{1}}^{\\rep{1}}$ and $M_{\\rep{3}\\rep{1}}^{\\crep{1}}$ all vanish, and\n\\[\n M_{\\rep{2}\\rep{1}}^{\\rep{1}} = \\begin{pmatrix} \\Upsilon_\\nu & 0 \\end{pmatrix}, \\quad M_{\\rep{2}\\rep{1}}^{\\crep{1}} = \\begin{pmatrix} 0 & \\Upsilon_e \\end{pmatrix}, \\quad M_{\\rep{2}\\rep{3}}^{\\rep{1}} = \\begin{pmatrix} \\Upsilon_u & 0 \\end{pmatrix}, \\quad M_{\\rep{2}\\rep{3}}^{\\crep{1}} = \\begin{pmatrix} 0 & \\Upsilon_d \\end{pmatrix},\n\\]\nfor $\\Upsilon_\\nu$, $\\Upsilon_e$, $\\Upsilon_u$, $\\Upsilon_d \\in M_N(\\field{C})$. One can check that our notation is consistent with that of~\\cite{CM08}*{\\S\\S 13.4, 13.5}. 
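\n\nTo see how these relations produce the stated block forms, consider for instance $M_{\\rep{2}\\rep{1}}^{\\rep{1}} \\in M_{N \\times 2N}(\\field{C})$ and write it as $\\begin{pmatrix} A & B \\end{pmatrix}$ with $A$, $B \\in M_N(\\field{C})$; here $A$ and $B$ are merely labels for the two blocks. The relevant relation then becomes\n\\[\n \\begin{pmatrix} \\zeta A & \\zeta B \\end{pmatrix} = \\begin{pmatrix} \\zeta A & \\overline{\\zeta} B \\end{pmatrix} \\quad \\text{for all $\\zeta \\in \\field{C}$,}\n\\]\nwhich forces $B = 0$, so that $M_{\\rep{2}\\rep{1}}^{\\rep{1}} = \\begin{pmatrix} \\Upsilon_\\nu & 0 \\end{pmatrix}$ with $\\Upsilon_\\nu := A$; the remaining block forms follow by the same computation.\n\n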
Indeed, if $\\ms{D}_0(\\alg{A}_F,m^\\textup{even},6;\\field{C}_F)$ denotes the subspace of $\\ms{D}_0(\\alg{A}_F,m^\\textup{even},6)$ corresponding to $\\ms{D}_0(\\alg{A}_F,\\hs{H}_F,\\gamma_F,J_F;\\field{C}_F)$, then\n\\begin{equation}\n \\ms{D}(\\alg{A}_F,\\hs{H}_F,\\gamma_F,J_F;\\field{C}_F) \\cong \\ms{D}_0(\\alg{A}_F,m^\\textup{even},6;\\field{C}_F) \/ \\U(\\alg{A}_F,m^\\textup{even}) \\cong \\ms{C}_q \\times \\ms{C}_l\n\\end{equation}\nfor\n\\begin{equation}\n \\ms{C}_q := \\bigl(\\U(N) \\times \\U(N)\\bigr) \\backslash \\bigl(M_N(\\field{C}) \\times M_N(\\field{C})\\bigr) \/ \\U(N),\n\\end{equation}\nwhere $\\U(N)$ acts diagonally by multiplication on the right, and\n\\[\n \\ms{C}_l := \\bigl(\\U(N) \\times \\U(N)\\bigr) \\backslash \\bigl(M_N(\\field{C}) \\times M_N(\\field{C}) \\times \\Sym_N(\\field{C})\\bigr) \/ \\U(N),\n\\]\nwhere $\\U(N) \\times \\U(N)$ acts trivially on $\\Sym_N(\\field{C})$ and $\\U(N)$ acts on $\\Sym_N(\\field{C})$ by\n\\[\n (V_2,\\Upsilon_R) \\mapsto V_2 \\Upsilon_R V_2^T;\n\\]\nnote that $\\ms{C}_q$ is the parameter space for the matrices $(\\Upsilon_u,\\Upsilon_d)$, whilst $\\ms{C}_l$ is the parameter space for the matrices $(\\Upsilon_\\nu,\\Upsilon_e,\\Upsilon_R)$. 
Thus we have recovered the sub-moduli space of Dirac operators considered by Chamseddine--Connes--Marcolli~\\cite{CCM07}*{\\S\\S 2.6, 2.7} ({\\it cf.\\\/}\\ also~\\cite{CM08}*{\\S\\S 13.4, 13.5}).\n\n\\section{Applications to the Recent Work of Chamseddine and Connes}\n\nIn this section, we reformulate the results of Chamseddine and Connes in~\\cites{CC08a,CC08b} and give new proofs thereof using the theory of bimodules and bilateral triples developed above.\n\nBefore continuing, recall that, up to automorphisms, the only real forms of $M_n(\\field{C})$ are $M_n(\\field{C})$, $M_n(\\field{R})$, and, if $n$ is even, $M_{n\/2}(\\field{H})$.\n\n\\subsection{Admissible real bimodules}\n\nWe begin by studying what Chamseddine and Connes call \\term{irreducible triplets}, namely, real $\\alg{A}$-bimodules satisfying certain representation-theoretic conditions, along the lines of~\\cite{CC08b}*{\\S 2}. However, we shall progress by adding Chamseddine and Connes's various requirements for irreducible triplets one by one, bringing us gradually to their classification of irreducible triplets. \n\nIn what follows, $\\alg{A}$ will once more denote a fixed real {\\ensuremath{C^*}}-algebra, and for $(\\hs{H},J)$ a real $\\alg{A}$-bimodule of odd $KO$-dimension, $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H};J)$ will denote the real \\ensuremath{\\ast}-subalgebra of $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H})$ consisting of elements commuting with $J$.\n\nLet us now introduce the first explicit requirement for irreducible triplets.\n\n\\begin{definition}\n Let $(\\hs{H},J)$ be a real $\\alg{A}$-bimodule of odd $KO$-dimension. We shall say that $(\\hs{H},J)$ is \\term{irreducible} if $0$ and $1$ are the only projections in $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H};J)$.\n\\end{definition}\n\nTo proceed, we shall need the following:\n\n\\begin{lemma}\\label{irreduciblelemma}\n Let $(\\hs{H},J)$ be a real $\\alg{A}$-bimodule of odd $KO$-dimension $n \\bmod 8$ with multiplicity matrix $m$. 
Then\n\\begin{equation}\n \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H};J) \\cong\n\\begin{cases}\n \\biggl(\\bigoplus_{\\alpha \\in \\spec{\\alg{A}}} M_{m_{\\alpha\\alpha}}(\\field{R}) \\biggr) \\oplus \\bigoplus_{\\substack{\\alpha,\\beta \\in \\spec{\\alg{A}}\\\\ \\alpha < \\beta}} M_{m_{\\alpha\\beta}}(\\field{C}), &\\text{if $n = 1$ or $7 \\bmod 8$,}\\\\\n \\biggl(\\bigoplus_{\\alpha \\in \\spec{\\alg{A}}} M_{m_{\\alpha\\alpha}\/2}(\\field{H}) \\biggr) \\oplus \\bigoplus_{\\substack{\\alpha,\\beta \\in \\spec{\\alg{A}}\\\\ \\alpha < \\beta}} M_{m_{\\alpha\\beta}}(\\field{C}), &\\text{if $n = 3$ or $5 \\bmod 8$.}\n\\end{cases}\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\n Let $T = (1_{n_\\alpha} \\otimes T_{\\alpha\\beta} \\otimes 1_{n_\\beta}) \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H})$. Just as for Propositions~\\ref{real17unitary} and~\\ref{real35unitary}, one can show that $[T,J] = 0$ if and only if for all $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$, $T_{\\beta\\alpha} = \\overline{T_{\\alpha\\beta}}$ if $\\alpha \\neq \\beta$ and\n\\[\n T_{\\alpha\\alpha} \\in\n\\begin{cases}\n M_{m_{\\alpha\\alpha}}(\\field{R}), &\\text{if $n = 1$ or $7 \\bmod 8$,}\\\\\n M_{m_{\\alpha\\alpha}\/2}(\\field{H}), &\\text{if $n = 3$ or $5 \\bmod 8$.}\n\\end{cases}\n\\]\nThus, $T \\in \\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H};J)$ is completely specified by the matrices $T_{\\alpha\\alpha}$ and $T_{\\alpha\\beta}$ for $\\alpha > \\beta$, giving rise to the isomorphisms of the claim.\n\\end{proof}\n\nWe can now formulate the part of the results of~\\cite{CC08b}*{\\S 2} that depends only on this notion of irreducibility.\n\n\\begin{proposition}\n Let $(\\hs{H},J)$ be a real $\\alg{A}$-bimodule of odd $KO$-dimension $n \\bmod 8$ with multiplicity matrix $m$. 
Then $(\\hs{H},J)$ is irreducible if and only if one of the following holds:\n\\begin{enumerate}\n \\item There exists $\\alpha \\in \\spec{\\alg{A}}$ such that $m = 2^{(1-\\varepsilon)\/2} E_{\\alpha\\alpha}$;\n\\item There exist $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$, $\\alpha \\neq \\beta$, such that $m = E_{\\alpha\\beta} + E_{\\beta\\alpha}$.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nBy definition, $(\\hs{H},J)$ is irreducible if and only if the only projections in the real {\\ensuremath{C^*}}-algebra $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H},J)$ are $0$ and $1$, but by Lemma~\\ref{irreduciblelemma}, this in turn holds if and only if one of the following holds:\n\\begin{enumerate}\n \\item $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H};J) \\cong \\field{R}$, so that $n = 1$ or $7 \\bmod 8$, and $m = E_{\\alpha\\alpha}$ for some $\\alpha \\in \\spec{\\alg{A}}$,\n \\item $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H};J) \\cong \\field{H}$, so that $n = 3$ or $5 \\bmod 8$, and $m = 2 E_{\\alpha\\alpha}$ for some $\\alpha \\in \\spec{\\alg{A}}$,\n \\item $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H};J) \\cong \\field{C}$, so that $m = E_{\\alpha\\beta} + E_{\\beta\\alpha}$ for some $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$, $\\alpha \\neq \\beta$,\n\\end{enumerate}\nwhich yields in turn the desired result.\n\\end{proof}\n\nWe shall call an irreducible odd $KO$-dimensional real $\\alg{A}$-bimod\\-ule $(\\hs{H},J)$ \\term{type A} if the first case holds, and \\term{type B} if the second case holds; Chamseddine and Connes's first and second case for irreducible triplets~\\cite{CC08b}*{Lemma 2.2} correspond to the type A and type B case, respectively. 
We shall also find it convenient to define the \\term{skeleton} $\\skel(\\hs{H},J)$ of such a bimodule as follows:\n\\begin{enumerate}\n \\item if $(\\hs{H},J)$ is type A, then $\\skel(\\hs{H},J) := \\{\\alpha\\}$, where $\\alpha \\in \\spec{\\alg{A}}$ is such that $\\mult[\\hs{H}] = 2^{\\frac{1-\\varepsilon}{2}} E_{\\alpha\\alpha}$;\n \\item if $(\\hs{H},J)$ is type B, then $\\skel(\\hs{H},J) := \\{\\alpha,\\beta\\}$, where $\\alpha$, $\\beta \\in \\spec{\\alg{A}}$, $\\alpha \\neq \\beta$, are such that $\\mult[\\hs{H}] = E_{\\alpha\\beta} + E_{\\beta\\alpha}$.\n\\end{enumerate}\n\nLet us now introduce the second explicit requirement for irreducible triplets.\n\n\\begin{definition}\n An $\\alg{A}$-bimodule $\\hs{H}$ is \\term{(left) separating} if there exists some $\\xi \\in \\hs{H}$ such that $\\lambda(\\alg{A})^\\prime \\xi = \\hs{H}$. Such a vector $\\xi$ is then called a \\term{separating vector} for $\\alg{A}$.\n\\end{definition}\n\nRecall that for a representation $\\hs{X}$ of a complex {\\ensuremath{C^*}}-algebra $\\alg{C}$, $\\xi \\in \\hs{X}$ is a separating vector if and only if the map $\\alg{C} \\to \\hs{X}$ given by $c \\mapsto c \\xi$ is injective.\n\n\\begin{lemma}\\label{separatinglemma}\n Let $p$, $q \\in \\semiring{N}$. There exists a separating vector $\\xi$ for the usual action of $M_p(\\field{C})$ on $\\field{C}^p \\otimes \\field{C}^q$ as $M_p(\\field{C}) \\otimes 1_q$ if and only if $p \\leq q$.\n\\end{lemma}\n\n\\begin{proof}\n Let $\\{e_i\\}_{i=1}^p$ be a basis for $\\field{C}^p$, and let $\\{f_j\\}_{j=1}^q$ be a basis for $\\field{C}^q$. \n\nFirst suppose that $p \\leq q$. Let $\\xi \\in \\field{C}^p \\otimes \\field{C}^q$ be given by $\\xi = \\sum_{i=1}^p e_i \\otimes f_i$. 
Then for any $a$, $b \\in M_p(\\field{C})$,\n\\[\n \\left(a \\otimes 1_q\\right) \\xi - \\left(b \\otimes 1_q\\right) \\xi = \\sum_{i=1}^p \\left(\\sum_{l=1}^p (a_i^l - b_i^l) e_l \\right) \\otimes f_i\n\\]\nso that by linear independence of the $e_i$ and $f_j$, the left-hand side vanishes if and only if for each $i$ and $l$, $a_i^l - b_i^l = 0$, {\\it i.e.\\\/}\\ $a = b$. Hence, $\\xi$ is indeed a separating vector.\n\nNow suppose that $p > q$. Then $\\dim_{\\field{C}} M_p(\\field{C}) - \\dim_{\\field{C}} \\field{C}^p \\otimes \\field{C}^q = p(p-q) > 0$, so that for any $\\xi \\in \\field{C}^p \\otimes \\field{C}^q$, the map $M_p(\\field{C}) \\to \\field{C}^p \\otimes \\field{C}^q$ given by $a \\mapsto \\left(a \\otimes 1_q \\right) \\xi$ cannot possibly be injective, and hence $\\xi$ cannot possibly be separating.\n\\end{proof}\n\nWe can now reformulate that part of the results in~\\cite{CC08b}*{\\S 2} that depends only on irreducibility and the existence of a separating vector.\n\n\\begin{proposition}\n Let $(\\hs{H},J)$ be an irreducible real $\\alg{A}$-bimodule of odd $KO$-dimen\\-sion $n \\bmod 8$.\n\\begin{enumerate}\n \\item If $(\\hs{H},J)$ is type A, then it is separating;\n \\item If $(\\hs{H},J)$ is type B with skeleton $\\{\\alpha,\\beta\\}$, then $(\\hs{H},J)$ is separating if and only if $n_{\\alpha} = n_{\\beta}$.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nFirst suppose that $(\\hs{H},J)$ is type A. Let $\\{\\alpha\\} = \\skel(\\hs{H},J)$, and let $m_n = 2^{(1-\\varepsilon)\/2}$. Then $\\hs{H} = \\field{C}^{n_\\alpha} \\otimes \\field{C}^{m_n} \\otimes \\field{C}^{n_\\alpha} = \\field{C}^{n_\\alpha} \\otimes \\field{C}^{m_n n_\\alpha}$, and the left action $\\lambda$ of $\\alg{A}$ on $\\hs{H}$ is thus given by $\\lambda_\\alpha \\otimes 1_{m_n n_{\\alpha}}$. 
Now\n\\[\n \\lambda(\\alg{A})^\\prime = \\left(\\lambda_{\\alpha}(\\alg{A}) \\otimes 1_{m_n n_{\\alpha}}\\right)^\\prime = \\left(M_{n_\\alpha}(\\field{C}) \\otimes 1_{m_n n_\\alpha}\\right)^\\prime,\n\\]\nso that the action $\\lambda$ of $\\alg{A}$ admits a separating vector if and only if the action of $M_{n_\\alpha}(\\field{C})$ as $M_{n_\\alpha}(\\field{C}) \\otimes 1_{m_n n_\\alpha}$ admits a separating vector, but by Lemma~\\ref{separatinglemma} this is indeed the case, as $n_\\alpha \\leq m_n n_\\alpha$.\n\nNow, suppose that $(\\hs{H},J)$ is type B. Let $\\{\\alpha,\\beta\\} = \\skel(\\hs{H},J)$. Then\n\\[\n \\hs{H} = (\\field{C}^{n_\\alpha} \\otimes \\field{C}^{n_\\beta}) \\oplus (\\field{C}^{n_\\beta} \\otimes \\field{C}^{n_\\alpha}),\n\\]\nand the left action $\\lambda$ of $\\alg{A}$ on $\\hs{H}$ is given by $\\lambda = (\\lambda_{\\alpha} \\otimes 1_{n_\\beta}) \\oplus (\\lambda_{\\beta} \\otimes 1_{n_\\alpha})$. Since $\\alpha \\neq \\beta$,\n\\begin{align*}\n \\lambda(\\alg{A})^\\prime &= \\left((\\lambda_{\\alpha}(\\alg{A}) \\otimes 1_{n_\\beta}) \\oplus (\\lambda_\\beta(\\alg{A}) \\otimes 1_{n_\\alpha})\\right)^\\prime\\\\ &= \\left((M_{n_\\alpha}(\\field{C}) \\otimes 1_{n_\\beta}) \\oplus (M_{n_\\beta}(\\field{C}) \\otimes 1_{n_\\alpha})\\right)^\\prime,\n\\end{align*}\nso that the action $\\lambda$ of $\\alg{A}$ admits a separating vector if and only if the action of $M_{n_\\alpha}(\\field{C}) \\oplus M_{n_\\beta}(\\field{C})$ as $(M_{n_\\alpha}(\\field{C}) \\otimes 1_{n_\\beta}) \\oplus (M_{n_\\beta}(\\field{C}) \\otimes 1_{n_\\alpha})$ admits a separating vector. 
Since $\\dim_{\\field{C}}M_{n_\\alpha}(\\field{C}) \\oplus M_{n_\\beta}(\\field{C}) - \\dim_{\\field{C}}\\hs{H} = (n_\\alpha - n_\\beta)^2$, if $n_\\alpha \\neq n_\\beta$ then no injective linear maps $M_{n_\\alpha}(\\field{C}) \\oplus M_{n_\\beta}(\\field{C}) \\to \\hs{H}$ can exist, and in particular, there exist no separating vectors for the action of $M_{n_\\alpha}(\\field{C}) \\oplus M_{n_\\beta}(\\field{C})$, and hence for $\\lambda$. Suppose instead that $n_\\alpha = n_\\beta = n$. Then\n\\[\n \\hs{H} = (\\field{C}^n \\otimes \\field{C}^n) \\oplus (\\field{C}^n \\otimes \\field{C}^n)\n\\]\nso that, since $\\alpha \\neq \\beta$, $\\lambda(\\alg{A})^\\prime = (M_n(\\field{C}) \\otimes 1_n)^\\prime \\oplus (M_n(\\field{C}) \\otimes 1_n)^\\prime$. Thus, if $\\xi$ is the separating vector for the action of $M_n(\\field{C})$ on $\\field{C}^n \\otimes \\field{C}^n$ given by the proof of Lemma~\\ref{separatinglemma}, then $\\xi\\oplus\\xi$ is also a separating vector for the action $\\lambda$ of $\\alg{A}$, and hence $(\\hs{H},J)$ is indeed separating.\n\\end{proof}\n\nLet us now introduce the final requirement for irreducible triplets; recall that the complex form of a real {\\ensuremath{C^*}}-algebra $\\alg{A}$ is denoted by $\\alg{A}_\\field{C}$.\n\n\\begin{definition}\n We shall call an $\\alg{A}$-bimodule $\\hs{H}$ \\term{complex-linear} if both left and right actions of $\\alg{A}$ on $\\hs{H}$ extend to $\\field{C}$-linear actions of $\\alg{A}_{\\field{C}}$, making $\\hs{H}$ into a complex $\\alg{A}_{\\field{C}}$-bimodule.\n\\end{definition}\n\nIt follows immediately that an $\\alg{A}$-bimodule $\\hs{H}$ is complex-linear if and only if for $m = \\mult[\\hs{H}]$, $m_{\\alpha\\beta} = 0$ whenever $\\alpha$ or $\\beta$ is conjugate-linear. 
In particular, by Proposition~\\ref{converseorientable}, it follows that a complex-linear quasi-orientable graded bimodule is always orientable.\n\nWe can now reformulate Chamseddine and Connes's definition for irreducible triplets:\n\n\\begin{definition}\n An \\term{irreducible triplet} is a triplet $(\\alg{A},\\hs{H},J)$, where $\\alg{A}$ is a finite-dimensional real {\\ensuremath{C^*}}-algebra and $(\\hs{H},J)$ is a complex-linear, separating, irreducible real $\\alg{A}$-bimodule of odd $KO$-dimension such that the left action of $\\alg{A}$ on $\\hs{H}$ is faithful.\n\\end{definition}\n\nNote that for $\\hs{H}$ a real $\\alg{A}$-bimodule, the left action of $\\alg{A}$ is faithful if and only if the right action is faithful.\n\nBy combining the above results, we immediately obtain Cham\\-sed\\-dine and Connes's classification of irreducible triplets:\n\n\\begin{proposition}[Chamseddine--Connes~\\cite{CC08b}*{Propositions 2.5, 2.8}]\n Let $\\alg{A}$ be a finite-dimensional real {\\ensuremath{C^*}}-algebra, and let $(\\hs{H},J)$ be a real $\\alg{A}$-bimodule of odd $KO$-dimension $n \\bmod 8$. 
Then $(\\alg{A},\\hs{H},J)$ is an irreducible triplet if and only if one of the following cases holds:\n\\begin{enumerate}\n \\item There exists $n \\in \\semiring{N}$ such that $\\alg{A} = M_{k}(\\field{K})$ for a real form $M_{k}(\\field{K})$ of $M_n(\\field{C})$, and\n\\begin{equation}\n \\mult[\\hs{H}] = 2^{(1-\\varepsilon)\/2} E_{\\rep{n}\\rep{n}};\n\\end{equation}\n \\item There exists $n \\in \\semiring{N}$ such that $\\alg{A} = M_{k_1}(\\field{K}_1) \\oplus M_{k_2}(\\field{K}_2)$ for real forms $M_{k_1}(\\field{K}_1)$ and $M_{k_2}(\\field{K}_2)$ of $M_n(\\field{C})$, and \n\\begin{equation}\n \\mult[\\hs{H}] = E_{\\rep{n}_1 \\rep{n}_2} + E_{\\rep{n}_2 \\rep{n}_1}.\n\\end{equation}\n\\end{enumerate}\n\\end{proposition}\n\n\\subsection{Gradings}\n\nWe now seek a classification of gradings inducing even $KO$-dimen\\-sional real bimodules from irreducible triplets.\n\n\\begin{definition}\n Let $(\\alg{A},\\hs{H},J)$ be an irreducible triplet. We shall call a $\\ring{Z}_2$-grading $\\gamma$ on $\\hs{H}$ as a Hilbert space \\term{compatible} with $(\\alg{A},\\hs{H},J)$ if and only if both of the following conditions hold:\n\\begin{enumerate}\n \\item For every $a \\in \\alg{A}$, $\\gamma \\lambda(a) \\gamma \\in \\lambda(\\alg{A})$;\n \\item The operator $\\gamma$ either commutes or anticommutes with $J$.\n\\end{enumerate}\n\\end{definition}\n\nGiven a compatible grading $\\gamma$ for an irreducible triplet $(\\alg{A},\\hs{H},J)$, one can view $(\\hs{H},\\gamma,J)$ as a real $\\alg{A}^\\textup{even}$-bimodule of even $KO$-dimension, for $\\alg{A}^\\textup{even} = \\{a \\in \\alg{A} \\mid [\\lambda(a),\\gamma] = 0\\}$, with $KO$-dimension specified by the values of $\\varepsilon$ and $\\varepsilon^{\\prime\\prime}$ such that $J^2 = \\varepsilon$, $\\gamma J = \\varepsilon^{\\prime\\prime} J \\gamma$.\n\nNow, recall that a $\\ring{Z}_2$-grading on a real {\\ensuremath{C^*}}-algebra $\\alg{A}$ is simply an automorphism $\\Gamma$ on $\\alg{A}$ satisfying $\\Gamma^2 = 
\\Id$; we call such a grading \\term{admissible} if and only if $\\Gamma$ extends to a $\\field{C}$-linear grading on $\\alg{A}_\\field{C}$. Thus, if $(\\alg{A},\\hs{H},J)$ is an irreducible triplet and $\\gamma$ is a grading on $\\hs{H}$, then $\\gamma$ satisfies the first condition for compatibility if and only if there exists some admissible grading $\\Gamma$ on $\\alg{A}$ such that $\\Ad_\\gamma \\circ \\lambda = \\lambda \\circ \\Gamma$, where $\\Ad_x$ denotes conjugation by $x$.\n\n\\begin{lemma}\\label{admissiblegrading}\n Let $M_{k}(\\field{K})$ be a real form of $M_n(\\field{C})$, and let $\\alpha \\in \\Aut(M_n(\\field{C}))$. Then $\\alpha$ is an admissible grading on $M_{k}(\\field{K})$ if and only if there exists a self-adjoint unitary $\\gamma$ in $M_{k}(\\field{K})$ or $i M_{k}(\\field{K})$, such that $\\alpha = \\Ad_\\gamma$.\n\\end{lemma}\n\n\\begin{proof}\nSuppose that $\\alpha$ is an admissible grading. Let $\\field{K}_0$ be $\\field{C}$ if $\\field{K}=\\field{C}$, and $\\field{R}$ otherwise. Then $M_k(\\field{K})$ is central simple over $\\field{K}_0$, so that there exists some invertible element $S$ of $M_k(\\field{K})$ such that $\\alpha = \\Ad_S$. Since $\\alpha$ respects the involution, for any $A \\in M_k(\\field{K})$ we must have\n\\[\n (S^{-1})^*A^*S^* = (SAS^{-1})^* = \\alpha(A)^* = \\alpha(A^*) = SA^*S^{-1},\n\\]\n{\\it i.e.\\\/}\\ $[A,S^*S]=0$, so that $S^*S$ is a positive central element of $M_k(\\field{K})$, and hence $S^*S = c1$ for some $c > 0$. Thus, $U = c^{-1\/2}S$ is a unitary element of $M_k(\\field{K})$ such that $\\alpha = \\Ad_U$. Now, recall that $\\alpha^2 = \\Id$, so that $\\Ad_{U^2} = \\Id$, and hence $U^2 = \\zeta 1$ for some $\\zeta \\in \\mathbb{T} \\cap \\field{K}_0$. If $\\field{K} = \\field{C}$, then one can simply set $\\gamma = \\overline{\\lambda}U$, where $\\lambda$ is a square root of $\\zeta$. 
Otherwise, $U^2 = \\pm 1$, so that if $U^2 = 1$, set $\\gamma = U \\in M_k(\\field{K})$, and if $U^2 = -1$, set $\\gamma = iU \\in i M_k(\\field{K})$.\n\nOn the other hand, if $\\gamma$ is a self-adjoint unitary in either $M_k(\\field{K})$ or $i M_k(\\field{K})$, then $\\Ad_\\gamma$ is readily seen to be an admissible grading on $M_k(\\field{K})$. \n\\end{proof}\n\nLet us now give the classification of compatible gradings for a type A irreducible triplet; it is essentially a generalisation of \\cite{CC08b}*{Lemma 3.1}.\n\n\\begin{proposition}\n Let $(\\alg{A},\\hs{H},J)$ be a type A irreducible triplet of odd $KO$-dimen\\-sion $n \\bmod 8$, so that $\\alg{A}$ is a real form $M_k(\\field{K})$ of $M_n(\\field{C})$ for some $n$, and let $\\gamma$ be a grading on $\\hs{H}$ as a Hilbert space. Then $\\gamma$ is compatible if and only if there exists a self-adjoint unitary $g$ in $M_k(\\field{K})$ or $i M_k(\\field{K})$ such that\n\\begin{equation}\n \\gamma = \\pm g \\otimes 1_{m_n} \\otimes g^T,\n\\end{equation}\nin which case $\\gamma$ necessarily commutes with $J$.\n\\end{proposition}\n\n\\begin{proof}\nLet $m_n = 2^{(1-\\varepsilon)\/2}$. Then $\\hs{H} = \\field{C}^n \\otimes \\field{C}^{m_n} \\otimes \\field{C}^n$, and for all $a \\in \\alg{A}$,\n\\[\n \\lambda(a) = \\lambda_\\alpha(a) \\otimes 1_{m_n} \\otimes 1_n = a \\otimes 1_{m_n} \\otimes 1_n, \\quad \\rho(a) = 1_n \\otimes 1_{m_n} \\otimes \\lambda_\\alpha(a)^T = 1_n \\otimes 1_{m_n} \\otimes a^T.\n\\]\n\nSuppose that $\\gamma$ is compatible. Then by Lemma~\\ref{admissiblegrading} there exists some self-adjoint unitary $g$ in either $M_k(\\field{K})$ or $i M_k(\\field{K})$ such that for all $a \\in \\alg{A}$,\n\\[\n \\gamma(a \\otimes 1_{m_n} \\otimes 1_n)\\gamma = (gag) \\otimes 1_{m_n} \\otimes 1_n.\n\\]\nNow, let $\\gamma_0 = g \\otimes 1_{m_n} \\otimes g^T$. 
Then, by construction, $\\gamma_0$ is a compatible grading for $(\\alg{A},\\hs{H},J)$ that induces the same admissible grading on $\\alg{A}$ as $\\gamma$, and moreover commutes with $J$. Then $\\nu := \\gamma \\gamma_0 \\in \\U^{\\textup{LR}}_\\alg{A}(\\hs{H};J)$, so that $\\nu = 1_n \\otimes \\nu_{\\rep{n}\\rep{n}} \\otimes 1_n$ for some\n\\[\n \\nu_{\\rep{n}\\rep{n}} \\in\n\\begin{cases}\n \\{\\pm1\\}, &\\text{if $n = 1$ or $7 \\bmod 8$,}\\\\\n \\SU(2), &\\text{if $n = 3$ or $5 \\bmod 8$.}\n\\end{cases}\n\\]\nThus $\\gamma = g \\otimes \\nu_{\\rep{n}\\rep{n}} \\otimes g^T$, and hence, since $\\gamma$ is self-adjoint, $\\nu_{\\rep{n}\\rep{n}}$ must also be self-adjoint. Therefore $\\nu_{\\rep{n}\\rep{n}} = \\pm 1_{m_n}$, or equivalently, $\\gamma = \\pm \\gamma_0$.\n\nOn the other hand, if $g$ is a self-adjoint unitary in either $M_k(\\field{K})$ or $i M_k(\\field{K})$, then $\\gamma = g \\otimes 1_{m_n} \\otimes g^T$ is certainly a compatible grading that commutes with $J$.\n\\end{proof}\n\nThus, type A irreducible triplets can only give rise to real $\\alg{A}^\\textup{even}$-bimodules of $KO$-dimension $0$ or $4 \\bmod 8$.\n\nLet us now turn to the type B case.\n\n\\begin{proposition}\n Let $(\\alg{A},\\hs{H},J)$ be a type B irreducible triplet of odd $KO$-dimen\\-sion $n \\bmod 8$, so that for some $n \\in \\semiring{N}$, $\\alg{A} = M_{k_1}(\\field{K}_1) \\oplus M_{k_2}(\\field{K}_2)$ for real forms $M_{k_1}(\\field{K}_1)$ and $M_{k_2}(\\field{K}_2)$ of $M_n(\\field{C})$, and let $\\gamma$ be a grading on $\\hs{H}$ as a Hilbert space. 
Then $\\gamma$ is compatible if and only if one of the following holds:\n\\begin{enumerate}\n \\item There exist gradings $\\gamma_1$ and $\\gamma_2$ on $\\field{C}^n$, with $\\gamma_j \\in M_{k_j}(\\field{K}_j)$ or $i M_{k_j}(\\field{K}_j)$, such that\n \\begin{equation}\n \\gamma =\n \\begin{pmatrix}\n \\gamma_1 \\otimes \\gamma_2^T & 0\\\\\n 0 & \\varepsilon^{\\prime\\prime} \\gamma_2 \\otimes \\gamma_1^T\n \\end{pmatrix},\n \\end{equation}\n in which case $\\gamma J = \\varepsilon^{\\prime\\prime} J \\gamma$, and if $\\gamma^\\prime$ is any other compatible grading, $\\Ad_{\\gamma^\\prime} = \\Ad_\\gamma$ if and only if $\\gamma^\\prime = \\pm \\gamma$.\n \\item One has that $\\field{K}_1 = \\field{K}_2 = \\field{K}$ and $k_1 = k_2 = k$, and there exist a unitary $u \\in M_k(\\field{K})$ and $\\eta \\in \\mathbb{T}$ such that\n \\begin{equation}\n \\gamma =\n \\begin{pmatrix}\n 0 & \\overline{\\eta} u^* \\otimes \\overline{u}\\\\\n \\eta u \\otimes u^T & 0\n \\end{pmatrix},\n \\end{equation}\n in which case $\\gamma$ necessarily commutes with $J$, and if $\\gamma^\\prime$ is any other compatible grading, $\\Ad_{\\gamma^\\prime} = \\Ad_\\gamma$ if and only if $\\gamma^\\prime = (\\zeta 1_{n^2} \\oplus \\overline{\\zeta} 1_{n^2})\\gamma$ for some $\\zeta \\in \\mathbb{T}$.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nLet $\\gamma$ be a compatible grading. Then, with respect to the decomposition $\\hs{H} = (\\field{C}^n \\otimes \\field{C}^n) \\oplus (\\field{C}^n \\otimes \\field{C}^n)$, let us write\n\\[\n \\gamma = \n \\begin{pmatrix}\n A & B\\\\\n C & D\n \\end{pmatrix}\n\\]\nfor $A$, $B$, $C$ and $D \\in M_n(\\field{C})\\otimes M_n(\\field{C})$. 
Applying self-adjointness of $\\gamma$, we find that $A$ and $D$ must be self-adjoint, and that $B = C^*$, and then applying the fact that $\\gamma^2 = 1$, we find that\n\\[\n A^2 + C^*C = 1, \\qquad CA + DC = 0, \\qquad CC^* + D^2 = 1.\n\\]\nFinally, applying the condition that $\\gamma$ commutes or anticommutes with $J$, {\\it i.e.\\\/}\\ that $\\gamma J = \\varepsilon^{\\prime\\prime} J \\gamma$ for $\\varepsilon^{\\prime\\prime} = \\pm 1$, we find that\n\\[\n D = \\varepsilon^{\\prime\\prime} X A X, \\qquad C^* = \\varepsilon^{\\prime\\prime} X C X,\n\\]\nwhere $X$ is the antiunitary on $\\field{C}^n \\otimes \\field{C}^n$ given by $X : \\xi_1 \\otimes \\xi_2 \\mapsto \\overline{\\xi_2} \\otimes \\overline{\\xi_1}$.\n\nNow, since $\\gamma$ is compatible, and since $(1,0)$ and $(0,1)$ are projections in $\\alg{A}$ satisfying $(1,0) + (0,1) = 1$, there exist projections $P \\in M_{k_1}(\\field{K}_1)$ and $Q \\in M_{k_2}(\\field{K}_2)$ such that\n\\[\n \\Ad_\\gamma \\lambda(1,0) = \\lambda(P,1-Q), \\qquad \\Ad_\\gamma \\lambda(0,1) = \\lambda(1-P,Q),\n\\]\nthat is,\n\\[\n \\begin{pmatrix}\n P \\otimes 1_n & 0\\\\\n 0 & (1-Q) \\otimes 1_n\n \\end{pmatrix} =\n \\gamma\n \\begin{pmatrix}\n 1 & 0\\\\\n 0 & 0\n \\end{pmatrix}\n \\gamma =\n \\begin{pmatrix}\n A^2 & AC^*\\\\\n CA & CC^*\n \\end{pmatrix}\n\\]\nand\n\\[\n \\begin{pmatrix}\n (1-P) \\otimes 1_n & 0\\\\\n 0 & Q \\otimes 1_n\n \\end{pmatrix} =\n \\gamma\n \\begin{pmatrix}\n 0 & 0\\\\\n 0 & 1\n \\end{pmatrix}\n \\gamma =\n \\begin{pmatrix}\n C^*C & C^*D\\\\\n DC & D^2\n \\end{pmatrix}.\n\\]\nThus, $A$ is a self-adjoint partial isometry with support and range projection $P \\otimes 1_n$, $D$ is a self-adjoint partial isometry with support and range projection $Q \\otimes 1_n$, and $C$ is a partial isometry with support projection $(1-P)\\otimes 1_n$ and range projection $(1-Q)\\otimes 1_n$.\n\nNow, recalling that $D = \\varepsilon^{\\prime\\prime} X A X$, we see that\n\\[\n Q \\otimes 1_n = D^2 = X A^2 X = X (P \\otimes 1_n) X = 1_n \\otimes 
\\overline{P}.\n\\]\nIf $Q = 0$, then certainly $P = 0$. Suppose instead that $Q \\neq 0$, and let $\\xi \\in Q\\field{C}^n$ be non-zero. Then\n\\[\n \\Id_{\\xi \\otimes \\field{C}^n} = (Q \\otimes 1_n)|_{\\xi \\otimes \\field{C}^n} = (1 \\otimes \\overline{P})|_{\\xi \\otimes \\field{C}^n},\n\\]\nso that $P = 1$ and hence $Q = 1$ also. We therefore have two possible cases:\n\\begin{enumerate}\n \\item We have\n \\[\n \\gamma =\n \\begin{pmatrix}\n A & 0\\\\\n 0 & \\varepsilon^{\\prime\\prime} X A X\n \\end{pmatrix}\n \\]\n for $A$ a grading on $\\field{C}^n \\otimes \\field{C}^n$;\n \\item We have\n \\[\n \\gamma =\n \\begin{pmatrix}\n 0 & C^*\\\\\n C & 0\n \\end{pmatrix}\n \\]\n for $C$ a unitary on $\\field{C}^n \\otimes \\field{C}^n$ such that $C^* = \\varepsilon^{\\prime\\prime} X C X$.\n\\end{enumerate}\n \n First suppose that the first case holds. Then, on the one hand, $\\Ad_A|_{M_n(\\field{C}) \\otimes 1_n}$ induces an admissible grading for $M_{k_1}(\\field{K}_1)$, so that there exists a self-adjoint unitary $\\gamma_1$ in either $M_{k_1}(\\field{K}_1)$ or $i M_{k_1}(\\field{K}_1)$ such that $\\Ad_A|_{M_n(\\field{C}) \\otimes 1_n} = \\Ad_{\\gamma_1 \\otimes 1_n}$, and on the other hand, $\\Ad_{\\varepsilon^{\\prime\\prime} X A X}|_{M_n(\\field{C}) \\otimes 1_n}$ induces an admissible grading for $M_{k_2}(\\field{K}_2)$, so that there exists a self-adjoint unitary $\\gamma_2$ in $M_{k_2}(\\field{K}_2)$ or $i M_{k_2}(\\field{K}_2)$ such that $\\Ad_{\\varepsilon^{\\prime\\prime} X A X}|_{M_n(\\field{C}) \\otimes 1_n} = \\Ad_{\\gamma_2 \\otimes 1_n}$. Since for $a \\otimes b \\in M_n(\\field{C}) \\otimes M_n(\\field{C})$ we can write\n\\[\n a \\otimes b = (a \\otimes 1_n) X (\\overline{b} \\otimes 1_n) X,\n\\]\nit therefore follows that $\\Ad_A = \\Ad_{\\gamma_1 \\otimes \\gamma_2^T}$ on the central simple algebra $M_n(\\field{C}) \\otimes M_n(\\field{C}) \\cong M_{n^2}(\\field{C})$ over $\\field{C}$. 
Hence, there exists some non-zero $\\eta \\in \\field{C}$ such that $A = \\eta \\gamma_1 \\otimes \\gamma_2^T$, and since both $A$ and $\\gamma_1 \\otimes \\gamma_2^T$ are self-adjoint and unitary, it follows that $\\eta = \\pm 1$. Absorbing $\\pm 1$ into $\\gamma_1$ or $\\gamma_2$, we therefore find that\n\\[\n \\gamma =\n \\begin{pmatrix}\n \\gamma_1 \\otimes \\gamma_2^T & 0\\\\\n 0 & \\varepsilon^{\\prime\\prime} \\gamma_2 \\otimes \\gamma_1^T\n \\end{pmatrix}.\n\\]\nOn the other hand, $\\gamma$ so constructed is readily seen to be a compatible grading satisfying $\\gamma J = \\varepsilon^{\\prime\\prime} J \\gamma$.\n\nNow suppose that the second case holds. Then, since $\\gamma$ is compatible, it is clear that the automorphisms $\\alpha$, $\\beta$ of $M_n(\\field{C})$ specified by\n\\[\n \\alpha(a) \\otimes 1_n = C (a \\otimes 1_n) C^*, \\qquad \\beta(a) \\otimes 1_n = C^*(a \\otimes 1_n) C,\n\\]\nare inverses of each other, and that $\\alpha$, in particular, induces an isomorphism $M_{k_1}(\\field{K}_1) \\to M_{k_2}(\\field{K}_2)$, so that $\\field{K}_1 = \\field{K}_2 = \\field{K}$ and $k_1 = k_2 = k$. Next, by the proof of Lemma~\\ref{admissiblegrading}, there exists some unitary $u$ in $M_k(\\field{K})$ such that $\\alpha = \\Ad_u$, from which it follows that $\\beta = \\Ad_{u^*}$. By the same trick as above, we then find that $\\Ad_C = \\Ad_{u \\otimes u^T}$ on the central simple algebra $M_n(\\field{C}) \\otimes M_n(\\field{C}) \\cong M_{n^2}(\\field{C})$ over $\\field{C}$. Hence, there exists some non-zero $\\eta \\in \\field{C}$ such that $C = \\eta u \\otimes u^T$, and since both $C$ and $u \\otimes u^T$ are unitary, it follows that $\\eta \\in \\mathbb{T}$. 
Thus,\n\\[\n \\gamma =\n \\begin{pmatrix}\n 0 & \\overline{\\eta} u^* \\otimes \\overline{u}\\\\\n \\eta u \\otimes u^T & 0\n \\end{pmatrix}.\n\\]\nOn the other hand, $\\gamma$ so constructed is readily seen to be a compatible grading satisfying $[\\gamma,J]=0$.\n\nFinally, let $\\gamma$ and $\\gamma^\\prime$ be two compatible gradings. Suppose that $\\Ad_\\gamma = \\Ad_{\\gamma^\\prime}$, and set $U = \\gamma^\\prime \\gamma$. Then, by construction, $U$ is a unitary element of $\\bdd^{\\textup{LR}}_\\alg{A}(\\hs{H};J)$, so that there exists some $\\zeta \\in \\mathbb{T}$ such that\n\\[\n U = \\zeta 1_{n^2} \\oplus \\overline{\\zeta} 1_{n^2}.\n\\]\nIf the second case holds, then nothing more can be said, but if the first case holds, so that\n\\[\n \\gamma =\n \\begin{pmatrix}\n \\gamma_1 \\otimes \\gamma_2^T & 0\\\\\n 0 & \\varepsilon^{\\prime\\prime} \\gamma_2 \\otimes \\gamma_1^T\n \\end{pmatrix}\n\\]\nfor suitable $\\gamma_1$ and $\\gamma_2$, then\n\\[\n \\gamma^\\prime =\n \\begin{pmatrix}\n \\zeta \\gamma_1 \\otimes \\gamma_2^T & 0\\\\\n 0 & \\varepsilon^{\\prime\\prime} \\overline{\\zeta} \\gamma_2 \\otimes \\gamma_1^T\n \\end{pmatrix},\n\\]\nso that by self-adjointness of $\\gamma^\\prime$, $\\gamma_1$ and $\\gamma_2$, we must have $\\zeta = \\pm 1$, as required.\n\\end{proof}\n\nThus, we can obtain a real bimodule of $KO$-dimension $6 \\bmod 8$ only from a type B irreducible triplet together with a compatible grading satisfying the first case of the last result.\n\n\\subsection{Even subalgebras and even $KO$-dimensional bimodules}\n\nWe now consider real bimodules of $KO$-di\\-men\\-sion $6 \\bmod 8$ obtained from irreducible triplets. 
Thus, let $(\\alg{A},\\hs{H},J)$ be a fixed type B irreducible triplet of $KO$-dimension $1$ or $7 \\bmod 8$, and let $\\gamma$ be a fixed compatible grading for $(\\alg{A},\\hs{H},J)$ anticommuting with $J$, so that for some $n \\in \\semiring{N}$,\n\\begin{itemize}\n \\item $\\alg{A} = M_{k_1}(\\field{K}_1) \\oplus M_{k_2}(\\field{K}_2)$ for real forms $M_{k_j}(\\field{K}_j)$ of $M_n(\\field{C})$;\n \\item $\\mult[\\hs{H}] = E_{\\rep{n}_1 \\rep{n}_2} + E_{\\rep{n}_2 \\rep{n}_1}$;\n \\item There exist self-adjoint unitaries $\\gamma_j \\in M_{k_j}(\\field{K}_j)$ or $i M_{k_j}(\\field{K}_j)$ with signature $(r_j,n - r_j)$ such that\n\\[\n \\gamma = \n\\begin{pmatrix}\n \\gamma_1 \\otimes \\gamma_2^T & 0\\\\\n 0 & -\\gamma_2 \\otimes \\gamma_1^T\n\\end{pmatrix}.\n\\]\n\\end{itemize}\nIt is worth noting that $(\\hs{H},J)$ admits, up to sign, a unique $S^0$-real structure, given by $\\epsilon = 1_{n^2} \\oplus -1_{n^2}$, which certainly commutes with $\\gamma$. We can exploit the symmetries present to simplify our discussion by taking, without loss of generality, $r_j > 0$, and requiring that $\\gamma_1 \\in i M_{k_1}(\\field{K}_1)$ only if $\\gamma_2 \\in i M_{k_2}(\\field{K}_2)$, and that $\\gamma_1 = 1_n$ only if $\\gamma_2 = 1_n$.\n\nOur main goal in this section is to give an explicit description of $\\alg{A}^\\textup{even}$ and of $(\\hs{H},\\gamma,J)$ as a real $\\alg{A}^\\textup{even}$-bimodule. To do so, however, we first need the following:\n\n\\begin{lemma}\n Let $M_k(\\field{K})$ be a real form of $M_n(\\field{C})$, let $g$ be a self-adjoint unitary in $M_k(\\field{K})$ or $i M_k(\\field{K})$, and let $r = \\nullity(g-1)$. 
Set $M_k(\\field{K})^g := \\{a \\in M_k(\\field{K}) \\mid [a,g] = 0\\}$.\n\\begin{itemize}\n \\item If $g \\in M_k(\\field{K})$, then $M_k(\\field{K})^g \\cong M_{kr\/n}(\\field{K}) \\oplus M_{k(n-r)\/n}(\\field{K})$;\n \\item If $g \\in i M_k(\\field{K})$, then $r = n\/2$ and\n \\[\n \tM_k(\\field{K})^g \\cong \\{(a,b) \\in M_{k\/2}(\\field{C})^2 \\mid b = \\overline{a}\\} \\cong M_{k\/2}(\\field{C}).\n \\]\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof}\nLet $P^+ := \\frac{1}{2}(1 + g)$ and $P^- := \\frac{1}{2}(1-g)$, which are thus projections in $M_n(\\field{C})$ of rank $r$ and $n-r$, respectively. Define an injection $\\phi : M_k(\\field{K})^g \\to M_{r}(\\field{C}) \\oplus M_{n-r}(\\field{C})$ by $\\phi(A) := (P^+ A P^+, P^- A P^-)$.\n\nFirst, suppose that $g \\in M_k(\\field{K})$. Then $P^+$ and $P^-$ are also in $M_k(\\field{K})$, from which it immediately follows that $\\phi(M_k(\\field{K})^g) = M_{kr\/n}(\\field{K}) \\oplus M_{k(n-r)\/n}(\\field{K})$.\n\nSuppose instead that $g \\in i M_k(\\field{K})$ and $\\field{K} \\neq \\field{C}$. Then $M_k(\\field{K}) = \\{A \\in M_n(\\field{C}) \\mid [A,I]=0\\}$ for a suitable antiunitary $I$ on $\\field{C}^n$ satisfying $I^2 = \\alpha 1$, where $\\alpha = 1$ if $\\field{K} = \\field{R}$ and $\\alpha = -1$ if $\\field{K} = \\field{H}$. Then $\\{g,I\\} = 0$, and hence, with respect to the decomposition $\\field{C}^n = P^+ \\field{C}^n \\oplus P^- \\field{C}^n \\cong \\field{C}^r \\oplus \\field{C}^{n-r}$,\n\\[\n I =\n\\begin{pmatrix}\n 0 & \\alpha \\tilde{I}^*\\\\\n \\tilde{I} & 0\n\\end{pmatrix},\n\\]\nwhere $\\tilde{I} = P^- I P^+$ is an antiunitary $\\field{C}^r \\to \\field{C}^{n-r}$. 
Thus, $n$ is even and $r = n\/2$, and taking $\\tilde{I}$, without loss of generality, to be complex conjugation on $\\field{C}^r$, for all $A \\in M_n(\\field{C})$ commuting with $g$, $[A,I] = 0$ if and only if $P^- A P^- = \\overline{P^+ A P^+}$, and hence $\\phi(M_k(\\field{K})^g) = \\{(a,\\overline{a}) \\mid a \\in M_{n\/2}(\\field{C})\\} \\cong M_{n\/2}(\\field{C})$.\n\\end{proof}\n\nIn light of the form of $\\gamma$, this last Lemma immediately implies the aforementioned explicit description of $\\alg{A}^\\textup{even}$ and $(\\hs{H},\\gamma,J)$:\n\n\\begin{proposition}\n Let $(m^\\textup{even},m^\\textup{odd}) = (m^\\textup{even},(m^\\textup{even})^T)$ be the pair of multiplicity matrices of $(\\hs{H},\\gamma,J)$ as an even $KO$-dimensional real $\\alg{A}^\\textup{even}$-bimodule. Let $r_i^\\prime = n - r_i$, and, when $n$ is even, let $c = n\/2$. Then:\n\\begin{enumerate}\n \\item If $\\gamma_1 \\in i M_{k_1}(\\field{K}_1)$, $\\gamma_2 \\in i M_{k_2}(\\field{K}_2)$, then\n\\begin{equation}\n \\alg{A}^\\textup{even} = M_{c}(\\field{C}) \\oplus M_{c}(\\field{C}),\n\\end{equation}\nand\n\\begin{equation}\n m^\\textup{even} = E_{\\rep{c}_1 \\rep{c}_2} + E_{\\crep{c}_1 \\crep{c}_2} + E_{\\rep{c}_2 \\crep{c}_1} + E_{\\crep{c}_2 \\rep{c}_1};\n\\end{equation}\n \\item If $\\gamma_1 \\in i M_{k_1}(\\field{K}_1)$, $\\gamma_2 \\in M_{k_2}(\\field{K}_2) \\setminus \\{1_n\\}$, then\n\\begin{equation}\n \\alg{A}^\\textup{even} = M_{c}(\\field{C}) \\oplus M_{k_2 r_2\/n}(\\field{K}_2) \\oplus M_{k_2 r_2^\\prime \/n}(\\field{K}_2),\n\\end{equation}\nand\n\\begin{equation}\n m^\\textup{even} = E_{\\rep{c} \\rep{r}_2} + E_{\\crep{c} \\rep{r}_2^\\prime} + E_{\\rep{r}_2 \\crep{c}} + E_{\\rep{r}_2^\\prime \\rep{c}};\n\\end{equation}\n \\item If $\\gamma_1 \\in i M_{k_1}(\\field{K}_1)$, $\\gamma_2 = 1_n$, then\n\\begin{equation}\n \\alg{A}^\\textup{even} = M_{c}(\\field{C}) \\oplus M_{k_2}(\\field{K}_2),\n\\end{equation}\nand\n\\begin{equation}\n m^\\textup{even} = E_{\\rep{c} 
\\rep{n}} + E_{\\rep{n} \\crep{c}};\n\\end{equation}\n \\item If $\\gamma_1 \\in M_{k_1}(\\field{K}_1) \\setminus \\{1_n\\}$, $\\gamma_2 \\in M_{k_2}(\\field{K}_2) \\setminus \\{1_n\\}$, then\n\\begin{equation}\n \\alg{A}^\\textup{even} = M_{k_1 r_1 \/n}(\\field{K}_1) \\oplus M_{k_1 r_1^\\prime \/n}(\\field{K}_1) \\oplus M_{k_2 r_2 \/ n}(\\field{K}_2) \\oplus M_{k_2 r_2^\\prime \/n}(\\field{K}_2),\n\\end{equation}\nand\n\\begin{equation}\n m^\\textup{even} = E_{\\rep{r}_1 \\rep{r}_2} + E_{\\rep{r}_1^\\prime \\rep{r}_2^\\prime} + E_{\\rep{r}_2 \\rep{r}_1^\\prime} + E_{\\rep{r}_2^\\prime \\rep{r}_1};\n\\end{equation}\n \\item If $\\gamma_1 \\in M_{k_1}(\\field{K}_1) \\setminus \\{1_n\\}$, $\\gamma_2 = 1_n$, then\n\\begin{equation}\n \\alg{A}^\\textup{even} = M_{k_1 r_1 \/n}(\\field{K}_1) \\oplus M_{k_1 r_1^\\prime \/n}(\\field{K}_1) \\oplus M_{k_2}(\\field{K}_2),\n\\end{equation}\nand\n\\begin{equation}\n m^\\textup{even} = E_{\\rep{r}_1 \\rep{n}} + E_{\\rep{n} \\rep{r}_1^\\prime};\n\\end{equation}\n \\item If $\\gamma_1 = \\gamma_2 = 1_n$, then\n\\begin{equation}\n \\alg{A}^\\textup{even} = M_{k_1}(\\field{K}_1) \\oplus M_{k_2}(\\field{K}_2),\n\\end{equation}\nand\n\\begin{equation}\n m^\\textup{even} = E_{\\rep{n}_1 \\rep{n}_2}.\n\\end{equation}\n\\end{enumerate}\n\\end{proposition}\n\nOne can check in each case that $(\\hs{H},\\gamma)$ is quasi-orientable as an even $\\alg{A}^\\textup{even}$-bimodule. 
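To illustrate item (4) with a concrete choice of data: take $n = 3$ and $\\field{K}_1 = \\field{K}_2 = \\field{R}$, so that $\\alg{A} = M_3(\\field{R}) \\oplus M_3(\\field{R})$, and choose $\\gamma_1 = \\operatorname{diag}(1,1,-1)$ and $\\gamma_2 = \\operatorname{diag}(1,-1,-1)$, so that $(r_1,r_1^\\prime) = (2,1)$ and $(r_2,r_2^\\prime) = (1,2)$. Then\n\\[\n \\alg{A}^\\textup{even} \\cong M_2(\\field{R}) \\oplus \\field{R} \\oplus \\field{R} \\oplus M_2(\\field{R}),\n\\]\nand $m^\\textup{even}$ is given by the formula of item (4) with these values of $r_j$ and $r_j^\\prime$. 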
However, Propositions~\\ref{converseorientable} and~\\ref{intform} immediately imply the following:\n\n\\begin{corollary}\n The following are equivalent for $(\\hs{H},\\gamma)$ as an even $\\alg{A}^\\textup{even}$-bimodule:\n\\begin{enumerate}\n \\item $\\gamma_1 \\in M_{k_1}(\\field{K}_1)$ and $\\gamma_2 \\in M_{k_2}(\\field{K}_2)$;\n \\item $(\\hs{H},\\gamma)$ is orientable;\n \\item $(\\hs{H},\\gamma)$ has non-vanishing intersection form;\n \\item $(\\hs{H},\\gamma)$ is complex-linear.\n\\end{enumerate}\n\\end{corollary}\n\nThis then motivates us to restrict ourselves to the case where $\\gamma_1 \\in M_{k_1}(\\field{K}_1)$ and $\\gamma_2 \\in M_{k_2}(\\field{K}_2)$. Note, however, that in no case is Poincar{\\'e} duality possible.\n\n\\subsection{Off-diagonal Dirac operators}\n\nLet us now consider the slightly more general $S^0$-real $\\alg{A}^\\textup{even}$-bimodule $(\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F)$ of $KO$-dimension $6 \\bmod 8$ given by taking the direct sum of $N$ copies of $(\\hs{H},\\gamma,J,\\epsilon)$, where $N \\in \\semiring{N}$. 
If we modify our earlier conventions slightly to allow for the summand $0$ in Wedderburn decompositions, we can therefore write\n\\begin{equation}\n \\alg{A}^\\textup{even} = M_{k_1 r_1 \/n}(\\field{K}_1) \\oplus M_{k_1 r_1^\\prime \/n}(\\field{K}_1) \\oplus M_{k_2 r_2 \/ n}(\\field{K}_2) \\oplus M_{k_2 r_2^\\prime \/n}(\\field{K}_2),\n\\end{equation}\nso that $(\\hs{H}_F,\\gamma_F,J_F)$ is the real $\\alg{A}^\\textup{even}$-bimodule of $KO$-dimension $6 \\bmod 8$ with signed multiplicity matrix\n\\begin{equation}\n \\mu_F = N(E_{\\rep{r}_1 \\rep{r}_2} - E_{\\rep{r}_1 \\rep{r}_2^\\prime} - E_{\\rep{r}_1^\\prime \\rep{r}_2} + E_{\\rep{r}_1^\\prime \\rep{r}_2^\\prime} - E_{\\rep{r}_2 \\rep{r}_1} + E_{\\rep{r}_2 \\rep{r}_1^\\prime} + E_{\\rep{r}_2^\\prime \\rep{r}_1} - E_{\\rep{r}_2^\\prime \\rep{r}_1^\\prime}),\n\\end{equation}\nwhilst $(\\hs{H}_f,\\gamma_f) := ((\\hs{H}_F)_i,(\\gamma_F)_i)$ is the even $\\alg{A}^\\textup{even}$-bimodule with signed multiplicity matrix\n\\begin{equation}\n \\mu_f = N(E_{\\rep{r}_1 \\rep{r}_2} - E_{\\rep{r}_1 \\rep{r}_2^\\prime} - E_{\\rep{r}_1^\\prime \\rep{r}_2} + E_{\\rep{r}_1^\\prime \\rep{r}_2^\\prime}).\n\\end{equation}\nIt then follows also that $(\\hs{H}_{\\overline{f}},\\gamma_{\\overline{f}}) := (J_F \\hs{H}_f, - (J_F \\gamma_f J_F)|_{J_F \\hs{H}_f})$ is the even $\\alg{A}^\\textup{even}$-bimodule with signed multiplicity matrix\n\\[\n \\mu_{\\overline{f}} = -\\mu_f^T = N(- E_{\\rep{r}_2 \\rep{r}_1} + E_{\\rep{r}_2 \\rep{r}_1^\\prime} + E_{\\rep{r}_2^\\prime \\rep{r}_1} - E_{\\rep{r}_2^\\prime \\rep{r}_1^\\prime}).\n\\]\n\nNow, for $\\alg{C}$ a unital \\ensuremath{\\ast}-subalgebra of $\\alg{A}^\\textup{even}$, let us call a Dirac operator $D \\in \\ms{D}_0(\\alg{C},\\hs{H}_F,\\gamma_F,J_F)$ \\term{off-diagonal} if it does not commute with $\\epsilon_F$, or equivalently~\\cite{CC08b}*{\\S 4} if $[D,\\mathcal{Z}(\\alg{A})] \\neq \\{0\\}$. 
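Concretely, with respect to the eigenspace decomposition $\\hs{H}_F = \\ker(\\epsilon_F - 1) \\oplus \\ker(\\epsilon_F + 1)$, a Dirac operator $D$ takes the block form\n\\[\n D =\n\\begin{pmatrix}\n D_{++} & D_{+-}\\\\\n D_{-+} & D_{--}\n\\end{pmatrix},\n\\]\nand, since $D$ is self-adjoint, $D_{-+} = D_{+-}^*$; thus $D$ is off-diagonal precisely when the off-diagonal blocks $D_{+-}$ and $D_{-+}$ are non-zero. 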
If $\\ms{D}_1(\\alg{C},\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F) \\subseteq \\ms{D}_0(\\alg{C},\\hs{H}_F,\\gamma_F,J_F)$ is the subspace consisting of Dirac operators anticommuting with $\\epsilon_F$, then, in fact,\n\\begin{equation}\n \\ms{D}_0(\\alg{C},\\hs{H}_F,\\gamma_F,J_F) = \\ms{D}_0(\\alg{C},\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F) \\oplus \\ms{D}_1(\\alg{C},\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F),\n\\end{equation}\nas can be seen from writing \n\\[\n D = \\frac{1}{2}\\{D,\\epsilon_F\\}\\epsilon_F + \\frac{1}{2}[D,\\epsilon_F]\\epsilon_F\n\\]\nfor $D \\in \\ms{D}_0(\\alg{C},\\hs{H}_F,\\gamma_F,J_F)$. Thus, non-zero off-diagonal Dirac operators exist for $(\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F)$ as an $S^0$-real $\\alg{C}$-bimodule if and only if \n\\[\n \\ms{D}_1(\\alg{C},\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F) \\neq \\{0\\}.\n\\]\nOur goal is to generalise Theorem 4.1 in~\\cite{CC08b}*{\\S 4} and characterise subalgebras of $\\alg{A}^\\textup{even}$ of maximal dimension admitting off-diagonal Dirac operators.\n\nThe following result is the first step in this direction:\n\n\\begin{proposition}[\\cite{CC08b}*{Lemma 4.2}]\n A unital \\ensuremath{\\ast}-subalgebra $\\alg{C} \\subseteq \\alg{A}^\\textup{even}$ admits off-diagonal Dirac operators if and only if there exists some partial isometry $T \\in \\mathcal{L}(\\field{C}^{r_1} \\oplus \\field{C}^{r_1^\\prime} \\oplus \\field{C}^{r_2} \\oplus \\field{C}^{r_2^\\prime})$ with support contained in one of $\\field{C}^{r_1}$ or $\\field{C}^{r_1^\\prime}$ and range contained in one of $\\field{C}^{r_2}$ or $\\field{C}^{r_2^\\prime}$, such that\n\\[\n \\alg{C} \\subseteq \\alg{A}(T) := \\{a \\in \\alg{A}^\\textup{even} \\mid [a,T] = [a^*,T] = 0\\}.\n\\]\n\\end{proposition}\n\n\\begin{proof}\n First note that the map $\\ms{D}_1(\\alg{C},\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F) \\to \\mathcal{L}_\\alg{C}^1(\\hs{H}_f,\\hs{H}_{\\overline{f}})$ given by $D \\mapsto P_{-i} D P_i$ is an isomorphism, so that $\\alg{C}$ admits off-diagonal 
Dirac operators if and only if $\\mathcal{L}_\\alg{C}^1(\\hs{H}_f,\\hs{H}_{\\overline{f}}) \\neq \\{0\\}$. Since a map $S \\in \\mathcal{L}(\\hs{H}_f,\\hs{H}_{\\overline{f}})$ satisfies the generalised order one condition for $\\alg{C}$ if and only if $\\rho_{\\overline{f}}(c) S - S \\rho_f(c)$ is left $\\alg{C}$-linear for all $c \\in \\alg{C}$, $\\alg{C}$ admits off-diagonal Dirac operators only if\n\\[\n \\{S \\in \\bdd^{\\textup{L}}_\\alg{C}(\\hs{H}_f,\\hs{H}_{\\overline{f}}) \\mid -\\gamma_{\\overline{f}}S = S\\gamma_f\\} \\neq \\{0\\},\n\\]\nor equivalently,\n\\[\n \\alg{C} \\subseteq \\alg{A}_S := \\{a \\in \\alg{A}^\\textup{even} \\mid \\lambda_{\\overline{f}}(a)S = S\\lambda_f(a), \\; \\lambda_{\\overline{f}}(a^*)S = S\\lambda_f(a^*)\\}\n\\]\nfor some non-zero $S \\in \\mathcal{L}(\\hs{H}_f,\\hs{H}_{\\overline{f}})$ such that $-\\gamma_{\\overline{f}}S = S\\gamma_f$. \n\nNow, let $S \\in \\mathcal{L}(\\hs{H}_f,\\hs{H}_{\\overline{f}})$ be non-zero and such that $-\\gamma_{\\overline{f}}S = S\\gamma_f$. Then, the support of $S$ must have non-zero intersection with one of $(\\hs{H}_F)_{\\rep{r}_1\\rep{r}_2}$ or $(\\hs{H}_F)_{\\rep{r}_1^\\prime \\rep{r}_2^\\prime}$, and the range of $S$ must have non-zero intersection with one of $(\\hs{H}_F)_{\\rep{r}_2\\rep{r}_1}$ or $(\\hs{H}_F)_{\\rep{r}_2^\\prime \\rep{r}_1^\\prime}$. Thus, $S_{\\alpha\\beta}^{\\gamma\\delta} \\neq 0$ for some $(\\alpha,\\beta) \\in \\{(\\rep{r}_1,\\rep{r}_2),(\\rep{r}_1^\\prime,\\rep{r}_2^\\prime)\\}$ and $(\\gamma,\\delta) \\in \\{(\\rep{r}_2,\\rep{r}_1),(\\rep{r}_2^\\prime, \\rep{r}_1^\\prime)\\}$, so that $\\alg{A}_S \\subseteq \\alg{A}_{S_{\\alpha\\beta}^{\\gamma\\delta}}$. Let us now write\n\\[\n S_{\\alpha\\beta}^{\\gamma\\delta} = \\sum_{i} A_i \\otimes B_i\n\\]\nfor non-zero $A_i \\in M_{n_\\gamma \\times n_\\alpha}(\\field{C})$ and for linearly independent $B_i \\in M_{N n_\\delta \\times N n_\\beta}(\\field{C})$. 
Then for all $a \\in \\alg{A}^\\textup{even}$,\n\\[\n \\lambda_{\\overline{f}}(a)S - S\\lambda_f(a) = \\sum_i \\bigl(\\lambda_\\gamma(a) A_i - A_i \\lambda_\\alpha(a)\\bigr) \\otimes B_i,\n\\]\nso by linear independence of the $B_i$, $a \\in \\alg{A}_{S_{\\alpha\\beta}^{\\gamma\\delta}}$ if and only if for each $i$,\n\\[\n \\lambda_\\gamma(a) A_i = A_i \\lambda_\\alpha(a), \\quad \\lambda_\\gamma(a^*) A_i = A_i \\lambda_\\alpha(a^*),\n\\]\nand hence\n\\[\n \\alg{A}_S \\subseteq \\alg{A}_{S_{\\alpha\\beta}^{\\gamma\\delta}} \\subseteq \\alg{A}(T_0) := \\{a \\in \\alg{A}^\\textup{even} \\mid [a,T_0] = 0, [a^*,T_0]=0\\}\n\\]\nfor $T_0 = A_1$, say, viewing $T_0$ and the elements of $\\alg{A}^\\textup{even}$ as operators on $\\field{C}^{r_1} \\oplus \\field{C}^{r_1^\\prime} \\oplus \\field{C}^{r_2} \\oplus \\field{C}^{r_2^\\prime}$. However, if $T_0 = P T$ is the polar decomposition of $T_0$ into a positive operator $P$ on $\\field{C}^{n_\\gamma}$ and a partial isometry $T : \\field{C}^{n_\\alpha} \\to \\field{C}^{n_\\gamma}$, it follows that $a \\in \\alg{A}^\\textup{even}$ commutes with $T_0$ only if it commutes with $T$, and hence $\\alg{A}_S \\subseteq \\alg{A}(T_0) \\subseteq \\alg{A}(T)$, proving one direction of the claim.\n\nNow suppose that $\\alg{C} = \\alg{A}(T)$ for a suitable partial isometry $T$, which we view as a partial isometry $\\field{C}^{n_{\\alpha_0}} \\to \\field{C}^{n_{\\gamma_0}}$ for some $\\alpha_0 \\in \\{\\rep{r}_1,\\rep{r}_1^\\prime\\}$, $\\gamma_0 \\in \\{\\rep{r}_2,\\rep{r}_2^\\prime\\}$. 
Then for any non-zero $\\Upsilon \\in M_N(\\field{C})$, we can define an element $S(\\Upsilon) \\in \\bdd^{\\textup{LR}}_\\alg{C}(\\hs{H}_f,\\hs{H}_{\\overline{f}})$ by setting\n\\[\n S(\\Upsilon)_{\\alpha\\beta}^{\\gamma\\delta} =\n\\begin{cases}\n T \\otimes \\Upsilon \\otimes T^* &\\text{if $\\alpha = \\delta = \\alpha_0$, $\\beta = \\gamma = \\gamma_0$,}\\\\\n 0 &\\text{otherwise,}\n\\end{cases}\n\\]\nwhich, as noted above, corresponds to a unique non-zero element of the space $\\ms{D}_1(\\alg{C},\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F)$, so that $\\alg{C}$ does indeed admit off-diagonal Dirac operators.\n\\end{proof}\n\nIn light of the above characterisation, it suffices to consider subalgebras $\\alg{A}(T)$ for partial isometries $T : \\field{C}^{r_1} \\to \\field{C}^{r_2}$, so that\n\\begin{align}\n \\alg{A}(T) &= \\{(a_1,a_2,b_1,b_2) \\in \\alg{A}^\\textup{even} \\mid b_1 T = T a_1, \\; b_1^* T = T a_1^*\\}\\\\\n &\\cong \\alg{A}_0(T) \\oplus M_{k_1 r_1^\\prime \/ n}(\\field{K}_1) \\oplus M_{k_2 r_2^\\prime \/n}(\\field{K}_2),\n\\end{align}\nwhere\n\\begin{equation}\n \\alg{A}_0(T) := \\{(a,b) \\in M_{k_1 r_1\/n}(\\field{K}_1) \\oplus M_{k_2 r_2 \/n}(\\field{K}_2) \\mid b T = T a, \\; b^* T = T a^*\\},\n\\end{equation}\nso that our problem is reduced to that of maximising the dimension of $\\alg{A}_0(T)$.\n\nIt is reasonable to assume that $T$ is, in some sense, compatible with the algebraic structures of $M_{k_1 r_1\/n}(\\field{K}_1)$ and $M_{k_2 r_2\/n}(\\field{K}_2)$, so as to minimise the restrictiveness of the defining condition on $\\alg{A}_0(T)$, and hence maximise the dimension of $\\alg{A}_0(T)$. 
It turns out that this notion of compatibility takes the form of the following conditions on $T$:\n\\begin{enumerate}\n \\item The subspace $\\supp(T)$ of $\\field{C}^{r_1}$ is either a $\\field{K}_1$-linear subspace of $\\field{C}^{r_1} = \\field{K}_1^{k_1 r_1\/n}$ or, if $\\field{K}_1 = \\field{H}$, $\\supp(T) = E \\oplus \\field{C}$ for $E$ an $\\field{H}$-linear subspace of $\\field{C}^{r_1} = \\field{H}^{r_1\/2}$;\n \\item The subspace $\\im(T)$ of $\\field{C}^{r_2}$ is either a $\\field{K}_2$-linear subspace of $\\field{C}^{r_2} = \\field{K}_2^{k_2 r_2\/n}$ or, if $\\field{K}_2 = \\field{H}$, $\\im(T) = E \\oplus \\field{C}$ for $E$ an $\\field{H}$-linear subspace of $\\field{C}^{r_2} = \\field{H}^{r_2\/2}$.\n\\end{enumerate}\nNow, let $r = \\rank(T)$, let $d(r) = \\dim_{\\field{R}}(\\alg{A}_0(T))$, and let\n\\[\n d_i = \n\\begin{cases}\n 1 &\\text{if $\\field{K}_i = \\field{R}$,}\\\\\n 2 &\\text{if $\\field{K}_i = \\field{C}$,}\\\\\n \\frac{1}{2} &\\text{if $\\field{K}_i = \\field{H}$.}\n\\end{cases}\n\\]\nUnder these assumptions, then, one can show that\n\\begin{enumerate}\n \\item If $\\field{K}_1 = \\field{K}_2$ or $\\field{K}_2 = \\field{C}$, and, if $\\field{K}_1 = \\field{H}$, $r$ is even, then\n\\begin{equation}\n \\alg{A}_0(T) \\cong M_{k_1 r \/n}(\\field{K}_1) \\oplus M_{k_1 (r_1 - r)\/n}(\\field{K}_1) \\oplus M_{k_2 (r_2 - r)\/n}(\\field{K}_2),\n\\end{equation}\nand hence\n\\[\n d(r) = d_1 r^2 + d_1 (r - r_1)^2 + d_2 (r - r_2)^2;\n\\]\n \\item If $(\\field{K}_1,\\field{K}_2) = (\\field{H},\\field{R})$ and $r$ is odd, then\n\\begin{equation}\n \\alg{A}_0(T) \\cong \\bigl(M_{(r-1)\/2}(\\field{H}) \\cap M_{r-1}(\\field{R})\\bigr) \\oplus \\field{R} \\oplus M_{(r_2-r-1)\/2}(\\field{H}) \\oplus M_{r_1 - r}(\\field{R}),\n\\end{equation}\nand hence\n\\[\n d(r) = (r-1)^2 + 1 + \\frac{1}{2}(r - r_2 + 1)^2 + (r - r_1)^2;\n\\]\n \\item If $(\\field{K}_1,\\field{K}_2) = (\\field{H},\\field{C})$ and $r$ is odd, then\n\\begin{equation}\n \\alg{A}_0(T) \\cong 
M_{(r-1)\/2}(\\field{H}) \\oplus \\field{C} \\oplus M_{(r_2 - r - 1)\/2}(\\field{H}) \\oplus M_{r_1 - r}(\\field{C}),\n\\end{equation}\nand hence\n\\[\n d(r) = \\frac{1}{2}(r-1)^2 + 2 + \\frac{1}{2}(r-r_2+1)^2 + 2(r-r_1)^2;\n\\]\n \\item If $\\field{K}_1 = \\field{K}_2 = \\field{H}$ and $r$ is odd, then\n\\begin{equation}\n \\alg{A}_0(T) \\cong M_{(r-1)\/2}(\\field{H}) \\oplus \\field{C} \\oplus M_{(r_1 - r - 1)\/2}(\\field{H}) \\oplus M_{(r_2-r-1)\/2}(\\field{H}),\n\\end{equation}\nand hence\n\\begin{equation}\n d(r) = \\frac{1}{2}(r-1)^2 + 2 + \\frac{1}{2}(r-r_1+1)^2 + \\frac{1}{2}(r-r_2+1)^2.\n\\end{equation}\n\\end{enumerate}\nThe other cases are obtained easily, by symmetry, from the ones listed above.\n\nNow, let $R_{\\textup{max}}$ be the set of all $r \\in \\{1, \\dotsc, \\min(r_1,r_2)\\}$ maximising the value of $d(r)$. By checking case by case, one can arrive at the following generalisation of Theorem 4.1 in~\\cite{CC08b}:\n\n\\begin{proposition}\n Let $T : \\field{C}^{r_1} \\to \\field{C}^{r_2}$ be a partial isometry. 
Then $\\alg{A}(T)$ attains maximal dimension only if $\\rank(T) \\in R_{\\textup{max}}$, where $R_{\\textup{max}} = \\{1\\}$ except in the following cases:\n\\begin{enumerate}\n \\item $(\\field{K}_1,\\field{K}_2)=(\\field{C},\\field{C})$ and $(r_1,r_2)=(2,2)$, in which case $R_{\\textup{max}} = \\{2\\}$;\n \\item $(\\field{K}_1,\\field{K}_2)=(\\field{C},\\field{C})$ and $(r_1,r_2)=(3,3)$, in which case $R_{\\textup{max}} = \\{1,2\\}$;\n \\item $(\\field{K}_1,\\field{K}_2)=(\\field{C},\\field{R})$ and $(r_1,r_2)=(2,2)$, in which case $R_{\\textup{max}} = \\{1,2\\}$;\n \\item $(\\field{K}_1,\\field{K}_2)=(\\field{C},\\field{H})$ and $(r_1,r_2)=(2,2)$, in which case $R_{\\textup{max}} = \\{1,2\\}$;\n \\item $(\\field{K}_1,\\field{K}_2)=(\\field{R},\\field{C})$ and $(r_1,r_2)=(2,2)$, in which case $R_{\\textup{max}} = \\{1,2\\}$;\n \\item $(\\field{K}_1,\\field{K}_2)=(\\field{R},\\field{R})$ and $(r_1,r_2)=(2,2)$, in which case $R_{\\textup{max}} = \\{2\\}$;\n \\item $(\\field{K}_1,\\field{K}_2)=(\\field{R},\\field{R})$ and $(r_1,r_2)=(3,3)$, in which case $R_{\\textup{max}} = \\{1,2\\}$;\n \\item $(\\field{K}_1,\\field{K}_2)=(\\field{R},\\field{H})$ and $r_1=2$, in which case $R_{\\textup{max}} = \\{1,2\\}$;\n \\item $(\\field{K}_1,\\field{K}_2)=(\\field{H},\\field{C})$ and $(r_1,r_2)=(2,2)$, in which case $R_{\\textup{max}} = \\{1,2\\}$;\n \\item $(\\field{K}_1,\\field{K}_2)=(\\field{H},\\field{R})$ and $r_2=2$, in which case $R_{\\textup{max}} = \\{1,2\\}$;\n \\item $(\\field{K}_1,\\field{K}_2)=(\\field{H},\\field{H})$ and $(r_1,r_2)=(4,4)$, in which case $R_{\\textup{max}} = \\{4\\}$;\n \\item $(\\field{K}_1,\\field{K}_2)=(\\field{H},\\field{H})$ and $(r_1,r_2)\\neq(4,4)$, in which case $R_{\\textup{max}} = \\{2\\}$.\n\\end{enumerate}\nMoreover, if $T$ satisfies the aforementioned compatibility conditions, then $\\alg{A}(T)$ does indeed attain maximal dimension whenever $\\rank(T) \\in R_{\\textup{max}}$.\n\\end{proposition}\n\nOne must carry out the same calculations for the 
other possibilities for the domain and range of $T$, but this can be done simply by replacing $(r_1,r_2)$ in the above equations and claims with $(r_1,r_2^\\prime)$, $(r_1^\\prime,r_2)$ and $(r_1^\\prime,r_2^\\prime)$. Thus, one can determine the maximal dimension of a subalgebra of $\\alg{A}^\\textup{even}$ admitting off-diagonal operators by comparing the maximal values of $\\dim_{\\field{R}}(\\alg{A}(T))$ for $T : \\field{C}^{r_1} \\to \\field{C}^{r_2}$, $T : \\field{C}^{r_1} \\to \\field{C}^{r_2^\\prime}$, $T : \\field{C}^{r_1^\\prime} \\to \\field{C}^{r_2}$, and $T : \\field{C}^{r_1^\\prime} \\to \\field{C}^{r_2^\\prime}$.\n\nFinally, by means of the discussion above and the fact that $\\Sp(n)$ acts transitively on $1$-dimensional subspaces of $\\field{C}^n$, one can readily check that the real {\\ensuremath{C^*}}-algebra $\\alg{A}_F$ and the $S^0$-real $\\alg{A}_F$-bimodule $(\\hs{H}_F,\\gamma_F,J_F,\\epsilon_F)$ of $KO$-dimension $6 \\bmod 8$ of the NCG Standard Model are uniquely determined, up to inner automorphisms of $\\alg{A}^\\textup{even}$ and unitary equivalence, by the following choice of inputs:\n\\begin{itemize}\n \\item $n = 4$;\n \\item $(\\field{K}_1,\\field{K}_2) = (\\field{H},\\field{C})$;\n \\item $g_1 \\in M_2(\\field{H})$, $g_2 \\in M_4(\\field{C})$;\n \\item $(r_1,r_2) = (2,4)$;\n \\item $N = 3$.\n\\end{itemize}\nThe value of $N$, by construction, corresponds to the number of generations of fermions, whilst the values of $n$, $r_1$ and $r_2$ give rise to the number of species of fermion of each chirality per generation. 
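To make the case analysis concrete, here is a small Python sketch that evaluates the dimension formulas $d(r)$ listed above and enumerates the maximising ranks. The weights $d_i$ and the four cases are transcribed from the text; handling the remaining field pairs via the symmetry $(\field{K}_1, r_1) \leftrightarrow (\field{K}_2, r_2)$ is our reading of "obtained easily, by symmetry", so treat the code as illustrative rather than authoritative.

```python
# Dimension weights d_i as defined in the text.
D = {"R": 1.0, "C": 2.0, "H": 0.5}

def d_of_r(r, K1, K2, r1, r2):
    """d(r) = dim_R(A_0(T)) for rank(T) = r, per the four cases above."""
    d1, d2 = D[K1], D[K2]
    if r % 2 == 0 or "H" not in (K1, K2):          # case (i)
        return d1 * r**2 + d1 * (r - r1)**2 + d2 * (r - r2)**2
    if (K1, K2) == ("H", "R"):                     # case (ii), r odd
        return (r - 1)**2 + 1 + 0.5 * (r - r2 + 1)**2 + (r - r1)**2
    if (K1, K2) == ("H", "C"):                     # case (iii), r odd
        return 0.5 * (r - 1)**2 + 2 + 0.5 * (r - r2 + 1)**2 + 2 * (r - r1)**2
    if (K1, K2) == ("H", "H"):                     # case (iv), r odd
        return 0.5 * (r - 1)**2 + 2 + 0.5 * (r - r1 + 1)**2 + 0.5 * (r - r2 + 1)**2
    return d_of_r(r, K2, K1, r2, r1)               # remaining pairs by symmetry

def R_max(K1, K2, r1, r2):
    """Ranks in {1, ..., min(r1, r2)} maximising d(r)."""
    vals = {r: d_of_r(r, K1, K2, r1, r2) for r in range(1, min(r1, r2) + 1)}
    best = max(vals.values())
    return {r for r, v in vals.items() if v == best}

# The Standard Model inputs (K1, K2) = (H, C), (r1, r2) = (2, 4):
print(d_of_r(1, "H", "C", 2, 4), d_of_r(2, "H", "C", 2, 4))  # 6.0 10.0
```

The enumeration reproduces, for instance, $R_{\textup{max}} = \{2\}$ in the $(\field{C},\field{C})$, $(r_1,r_2)=(2,2)$ and $(\field{R},\field{R})$, $(r_1,r_2)=(2,2)$ cases of the proposition.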
The significance of the other inputs remains to be seen.\n\n\\section{Conclusion}\n\nAs we have seen, the structure theory first developed by Paschke and Sitarz~\\cite{PS98} and by Krajewski~\\cite{Kraj98} for finite real spectral triples of $KO$-dimension $0 \\bmod 8$ and satisfying orientability and Poincar{\\'e} duality can be extended quite fully to the case of arbitrary $KO$-dimension and without the assumptions of orientability and Poincar{\\'e} duality. In particular, once a suitable ordering is fixed on the spectrum of a finite-dimensional real {\\ensuremath{C^*}}-algebra $\\alg{A}$, the study of finite real spectral triples with algebra $\\alg{A}$ reduces completely to the study of the appropriate multiplicity matrices and of certain moduli spaces constructed using those matrices. This reduction is what has allowed for the success of Krajewski's diagrammatic approach~\\cite{Kraj98}*{\\S 4} in the cases dealt with by Iochum, Jureit, Sch{\\\"u}cker, and Stephan~\\cites{ACG1,ACG2,ACG3,ACG4,ACG5,JS08,Sch05}. We have also seen how to apply this theory both to the ``finite geometries'' of the current version of the NCG Standard Model~\\cites{Connes06,CCM07,CM08} and to Chamseddine and Connes's framework~\\cites{CC08a,CC08b} for deriving the same finite geometries.\n\nDropping the orientability requirement comes at a fairly steep cost, as even bimodules of various sorts generally have fairly intricate moduli spaces of Dirac operators. It would therefore be useful to characterise the precise nature of the failure of orientability (and of Poincar{\\'e} duality) for the finite spectral triple of the current noncommutative-geometric Standard Model. It would also be useful to generalise and study the physically-desirable conditions identified in the extant literature on finite spectral triples, such as dynamical non-degeneracy~\\cite{Sch05} and anomaly cancellation~\\cite{Kraj98}. 
Indeed, it would be natural to generalise Krajewski diagrams~\\cite{Kraj98} and the combinatorial analysis they facilitate~\\cite{JS08} to bilateral spectral triples of all types. The paper by Paschke and Sitarz~\\cite{PS98} also contains further material for generalisation, namely discussion of the noncommutative differential calculus of a finite spectral triple and of quantum group symmetries. In particular, one might hope to characterise finite spectral triples equivariant under the action or coaction of a suitable Hopf algebra~\\cites{PS00,Sit03}.\n\nFinally, as was mentioned earlier, the finite geometry of the current NCG Standard Model fails to be $S^0$-real. However, this failure is specifically the failure of the Dirac operator $D$ to commute with the $S^0$-real structure $\\epsilon$. The ``off-diagonal'' part of $D$ does, however, take a very special form; we hope to provide in future work a more geometrical interpretation of this term, which provides for Majorana fermions and for the so-called see-saw mechanism~\\cite{CCM07}.\n\n\n\\begin{bibdiv}\n\\begin{biblist}\n\\bib{Bar07}{article}{\n\ttitle={A Lorentzian version of the non-commutative geometry of the standard model of particle physics},\n\tauthor={Barrett, John W.},\n\tjournal={J. Math. Phys.},\n\tvolume={48},\n\tdate={2007},\n\tnumber={012303},\n\n}\n\\bib{CC08a}{article}{\n\ttitle={Conceptual explanation for the algebra in the noncommutative approach to the Standard Model},\n\tauthor={Chamseddine, Ali H.},\n\tauthor={Connes, Alain},\n\tjournal={Phys. Rev. Lett.},\n\tvolume={99},\n\tdate={2007},\n\tnumber={191601},\n\n}\n\\bib{CC08b}{article}{\n\ttitle={Why the standard model},\n\tauthor={Chamseddine, Ali H.},\n\tauthor={Connes, Alain},\n\tjournal={J. Geom. 
Phys.},\n\tvolume={58},\n\tdate={2008},\n\tpages={38--47},\n\n}\n\\bib{CCM07}{article}{\n\ttitle={Gravity and the Standard Model with neutrino mixing},\n\tauthor={Chamseddine, Ali H.},\n\tauthor={Connes, Alain},\n\tauthor={Marcolli, Matilde},\n\tjournal={Adv. Theor. Math. Phys.},\n\tvolume={11},\n\tdate={2007},\n\tpages={991--1089},\n\n}\n\\bib{Connes95a}{article}{\n\ttitle={Geometry from the spectral point of view},\n\tauthor={Connes, Alain},\n\tjournal={Lett. Math. Phys.},\n\tvolume={34},\n\tdate={1995},\n\tnumber={3},\n\tpages={203--238},\n}\n\\bib{Connes95}{article}{\n\ttitle={Noncommutative geometry and reality},\n\tauthor={Connes, Alain},\n\tjournal={J. Math. Phys.},\n\tvolume={36},\n\tdate={1995},\n\tpages={6194--6231},\n}\n\\bib{Connes06}{article}{\n\ttitle={Noncommutative geometry and the Standard Model with neutrino mixing},\n\tauthor={Connes, Alain},\n\tjournal={JHEP},\n\tvolume={11},\n\tdate={2006},\n\tnumber={81},\n\n}\n\\bib{CM08}{book}{\n\ttitle={Noncommutative Geometry, Quantum Fields and Motives},\n\tauthor={Connes, Alain},\n\tauthor={Marcolli, Matilde},\n\tseries={Colloquium Publications},\n\tvolume={55},\n\tpublisher={American Mathematical Society},\n\taddress={Providence, RI},\n\tdate={2007},\n}\n\\bib{Ell07}{article}{\n\ttitle={Towards a theory of classification},\n\tauthor={Elliott, George A.},\n\tjournal={Adv. in Math.},\n\tvolume={223},\n\tdate={2010},\n\tnumber={1},\n\tpages={30--48},\n\n}\n\\bib{Ell08}{misc}{\n\ttitle={private conversation},\n\tauthor={Elliott, George A.},\n\tdate={2008},\n}\n\\bib{Fare}{book}{\n\ttitle={Algebras of Linear Transformations},\n\tauthor={Farenick, Douglas R.},\n\tpublisher={Springer},\n\taddress={New York},\n\tdate={2000},\n}\n\\bib{ACG1}{article}{\n\ttitle={On a classification of irreducible almost commutative geometries},\n\tauthor={Iochum, Bruno},\n\tauthor={Sch{\\\"u}cker, Thomas},\n\tauthor={Stephan, Christoph},\n\tjournal={J. Math. 
Phys.},\n\tvolume={45},\n\tdate={2004},\n\tpages={5003--5041},\n\n}\n\\bib{ACG2}{article}{\n\ttitle={On a classification of irreducible almost commutative geometries, a second helping},\n\tauthor={Jureit, Jan-H.},\n\tauthor={Stephan, Christoph A.},\n\tjournal={J. Math. Phys.},\n\tvolume={46},\n\tdate={2005},\n\tnumber={043512},\n\n}\n\\bib{ACG3}{article}{\n\ttitle={On a classification of irreducible almost commutative geometries III},\n\tauthor={Jureit, Jan-Hendrik},\n\tauthor={Sch{\\\"u}cker, Thomas},\n\tauthor={Stephan, Christoph},\n\tjournal={J. Math. Phys.},\n\tvolume={46},\n\tdate={2005},\n\tnumber={072303},\n\n}\n\\bib{ACG4}{article}{\n\ttitle={On a classification of irreducible almost commutative geometries IV},\n\tauthor={Jureit, Jan-Hendrik},\n\tauthor={Stephan, Christoph A.},\n\tjournal={J. Math. Phys.},\n\tvolume={49},\n\tdate={2008},\n\tpages={033502},\n\n}\n\\bib{ACG5}{article}{\n\ttitle={On a classification of irreducible almost commutative geometries, V},\n\tauthor = {Jureit, Jan-Hendrik},\n\tauthor={Stephan, Christoph A.},\n\tdate={2009},\n\n}\n\\bib{JS08}{article}{\n\ttitle={Finding the standard model of particle physics, a combinatorial problem},\n\tauthor={Jureit, Jan-H.},\n\tauthor={Stephan, Christoph A.},\n\tjournal={Comp. Phys. Comm.},\n\tvolume={178},\n\tdate={2008},\n\tpages={230--247},\n\n}\n\\bib{Kraj98}{article}{\n\ttitle={Classification of finite spectral triples},\n\tauthor={Krajewski, Thomas},\n\tjournal={J. Geom. Phys.},\n\tvolume={28},\n\tdate={1998},\n\tpages={1--30},\n\n}\n\\bib{Li}{book}{\n\ttitle={Introduction to Operator Algebras},\n\tauthor={Li, Bing-Ren},\n\tpublisher={World Scientific},\n\taddress={Singapore},\n\tdate={1992},\n}\n\\bib{PS98}{article}{\n\ttitle={Discrete spectral triples and their symmetries},\n\tauthor={Paschke, Mario},\n\tauthor={Sitarz, Andrzej},\n\tjournal={J. Math. 
Phys.},\n\tvolume={39},\n\tdate={1998},\n\tpages={6191--6205},\n\n}\n\\bib{PS00}{article}{\n\ttitle={The geometry of noncommutative symmetries},\n\tauthor={Paschke, Mario},\n\tauthor={Sitarz, Andrzej},\n\tjournal={Acta Physica Polonica B},\n\tvolume={31},\n\tdate={2000},\n\tpages={1897--1911},\n}\n\\bib{Sch05}{article}{\n\ttitle={Krajewski diagrams and spin lifts},\n\tauthor={Sch{\\\"u}cker, Thomas},\n\tdate={2005},\n\teprint={arXiv:hep-th\/0501181v2},\n}\n\\bib{Sit03}{article}{\n\ttitle={Equivariant spectral triples},\n\tauthor={Sitarz, Andrzej},\n\tbook={\n\t\ttitle={Noncommutative Geometry and Quantum Groups},\n\t\teditor={Hajac, Piotr M.},\n\t\teditor={Pusz, Wies{\\l}aw},\n\t\tseries={Banach Center Publ.},\n\t\tvolume={61},\t\n\t\tpublisher={Polish Acad. Sci.},\n\t\taddress={Warsaw},\n\t\tdate={2003},\n\t},\n\tpages={231--268},\n}\n\\bib{Schwarz75}{article}{\n\ttitle={Smooth functions invariant under the action of a compact Lie group},\n\tauthor={Schwarz, Gerald W.},\n\tjournal={Topology},\n\tvolume={14},\n\tdate={1975},\n\tpages={63--68},\n}\n\\bib{St06}{article}{\n\ttitle={Almost-commutative geometry, massive neutrinos and the orientability axiom in $KO$-dimension $6$},\n\tauthor={Stephan, Christoph A.},\n\tdate={2006},\n\teprint={arXiv:hep-th\/0610097v1},\n}\n\\end{biblist}\n\\end{bibdiv}\n\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe variety of cuprate compounds that exhibit superconductivity is remarkable; however, the set of cuprate families for which large superconducting crystals can readily be grown is much more limited. Such large crystals are needed for neutron scattering studies of the magnetic excitations, where it is of interest to establish universal behaviors across multiple families. 
While suitable crystals of La$_{2-x}$Sr$_x$CaCu$_2$O$_6$ (La-Sr-2126) \\cite{ulri02} as well as La$_{2-x}$Ca$_{1+x}$Cu$_2$O$_6$ (La-Ca-2126) \\cite{wang03} have been grown previously and studied by neutron diffraction \\cite{ulri02,huck05}, those samples exhibited little to no superconductivity. Obtaining crystals with a large superconducting volume fraction has been a challenge, because synthesis of superconducting samples requires annealing in high-pressure oxygen \\cite{cava90b}. We have finally been able to achieve this last step, and in this paper we describe the synthesis conditions and magnetic and structural characterizations of the resulting crystals.\n\nThe La-Sr\/Ca-2126 system is certainly not new. The first report of La$_{2-x}$$A_{1+x}$Cu$_2$O$_{6-x\/2}$ ($A = $Ca, Sr) was made by Raveau and coworkers \\cite{nguy80} almost four decades ago. Following the discovery of superconductivity in La$_{2-x}$Ba$_x$CuO$_4$\\ \\cite{bedn86}, Torrance {\\it et al.}\\ \\cite{torr88} reported on the metallic, but non-superconducting, character of La$_2$SrCu$_2$O$_{6.2}$, followed by crystallographic studies of La-Ca-2126 \\cite{izum89} and La$_2$SrCu$_2$O$_6$ \\cite{caig90}. It was not long before Cava {\\it et al.}\\ \\cite{cava90b} announced the discovery of superconductivity with a transition temperature, $T_c$, of $\\sim60$~K in La-Sr-2126; the key here was to anneal in high-pressure oxygen. Numerous studies of synthesis conditions and superconductivity in La-Sr-2126 \\cite{saku91,liu91} and La-Ca-2126 \\cite{fuer90,kino90,kino92a,kino92b} quickly followed.\n\nMillimeter-size crystals of La-Ca-2126 were initially grown from a CuO flux; these were rendered superconducting, with $T_c$ as high as 40~K by annealing in an O$_2$ partial pressure of 300 atm at 1080$^\\circ$C \\cite{ishi91}. 
Larger crystals (4 mm $\\phi \\times 30$ mm) were grown by the travelling-solvent floating-zone (TSFZ) method, where annealing in 400 atm of O$_2$ at 1080$^\\circ$C yielded $T_c\\approx45$~K \\cite{okuy94}. One of us (GDG) was able to grow large crystals of La-Sr-2126 in 11 bar O$_2$, which showed onset $T_c$'s over 40~K, but with a very small superconducting volume fraction ($<7$\\%\\ as measured by magnetic shielding) \\cite{gu06b}.\n\nHere we focus on crystals of La$_{2-x}$Ca$_{1+x}$Cu$_2$O$_6$ with $x=0.10$ and 0.15. Annealing in $\\gtrsim0.11$~GPa (1100 atm) partial pressure of O$_2$ yields sharp superconducting transitions with $T_c$ up to 60~K (after post-annealing). The magnetic shielding fraction is essentially 100\\%; however, there are substantial volume fractions of two other phases present, as we will demonstrate.\n\nThe rest of the paper is organized as follows. The next section describes the crystal growth, annealing treatments, and the characterization methods [magnetization, neutron diffraction, and muon-spin-rotation ($\\mu$SR)]. In Sec.~III, we present and analyze the diffraction and $\\mu$SR data, followed by a description of the post-annealing study in Sec.~IV. The results are discussed in Sec.~V and summarized in Sec.~VI. We will present studies of the spin fluctuations by inelastic neutron scattering and anisotropic resistivity as a function of magnetic field in separate papers.\n\n\n\\section{Experimental Methods}\n\nSingle crystals of La-Ca-2126 were grown by the TSFZ method \\cite{gu06a}. High purity powders of La$_{2}$O$_{3}$, CaCO$_{3}$, and CuO (99.99\\%) were mixed in their metal ratio, ground well, and then sintered in a crucible. This grind-sinter procedure was repeated three times in order to achieve a homogeneous mixture, before the final feed-rod sintering, performed with the rod hung vertically from a Pt wire in a box furnace at 1100$^\\circ$C\\ for 72 hours in air. 
The Cu-rich solvent material with lower melting point was prepared through the same procedures, and then sintered at 950 $^\\circ$C\\ for 48 hours. Single-crystal growth was performed in flowing oxygen gas ($P_{\\rm O_{2}}=1$~atm) in the floating-zone furnace. During the crystal growth, the feed and seed rods rotated in opposite directions at 30 rpm, to stir the liquid zone, and simultaneously translated through the heating zone at a velocity of 0.4 mm\/h. After growth, the resulting rod was cut into sections, with polished sections checked with an optical polarization microscope to identify regions of single-crystal domain, allowing large single crystals to be selected for the high-pressure annealing experiments. \n\nThe as-grown crystals of La-Ca-2126 are non-superconducting. We induced superconductivity in large crystals with both $x=0.10$ and 0.15 by annealing under a high-pressure mixture of 20\\% oxygen and 80\\% argon in a hot isostatic press (HIP). The annealing was performed at temperatures within 1130--1180$^\\circ$C\\ and pressures within 0.55--0.69 GPa in two separate runs, with run A lasting 31 hours and run B lasting 8 days {\\color{black} \\footnote{{\\color{black} Note that the annealing temperature was selected to test the performance of the HIP. Typical annealing temperatures in previous studies have been somewhat lower, in the range of 970$^\\circ$C to 1080$^\\circ$C \\cite{cava90b,kino90,ishi91}.}}}.\n\nMagnetization measurements were performed using a commercial SQUID (superconducting quantum interference device) magnetometer. The magnetic susceptibility $\\chi$ was calculated assuming a density of 6.244 g\/cm$^{3}$, which was calculated from the lattice constants listed in Ref.\\ \\cite{huck05} assuming a composition of La$_{1.9}$Ca$_{1.1}$Cu$_{2}$O$_{6}$. The susceptibility of two of the annealed crystals measured in a field of 1 mT (with uncertain crystal orientation) is shown in Fig.~\\ref{fg:susc1}. 
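As a cross-check of the density quoted above, $\rho$ follows from the molar mass of La$_{1.9}$Ca$_{1.1}$Cu$_2$O$_6$ and the tetragonal cell volume with $Z=2$ formula units per $I4/mmm$ cell. The sketch below uses the lattice parameters $a=3.83$~\AA\ and $c=19.37$~\AA\ (those of Ref.~\cite{huck05} differ slightly, which accounts for the small discrepancy with 6.244 g/cm$^3$).

```python
# Recompute the crystal density rho = Z*M / (N_A * V) in g/cm^3.
N_A = 6.02214e23                                                   # 1/mol
masses = {"La": 138.905, "Ca": 40.078, "Cu": 63.546, "O": 15.999}  # g/mol
formula = {"La": 1.9, "Ca": 1.1, "Cu": 2.0, "O": 6.0}
M = sum(masses[el] * n for el, n in formula.items())  # molar mass, g/mol
a, c, Z = 3.83e-8, 19.37e-8, 2                        # cell edges in cm
rho = Z * M / (N_A * a * a * c)                       # g/cm^3
print(round(rho, 2))                                  # -> 6.21
```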
While the Meissner fraction, measured while field cooling, is small, the shielding fraction, obtained after zero-field cooling, is large. \n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=1.0\\columnwidth]{fig_diamagnetism.pdf}}\n\\caption{Volume magnetic susceptibility for samples SC45 (blue circles) and SC55 (red squares). Open symbols: Meissner fraction (measured on field cooling); filled symbols: shielding fraction (zero-field cooling).}\n\\label{fg:susc1} \n\\end{figure}\n\nInitial neutron scattering experiments were performed on the SEQUOIA time-of-flight spectrometer at the Spallation Neutron Source, Oak Ridge National Laboratory \\cite{tof_sns14}. Three crystals were used for neutron scattering, which we label NSC, SC45, and SC55. NSC is an as-grown, non-superconducting crystal with $x=0.10$ and mass 7.5g. SC45 is a 6.3~g crystal of $x=0.10$ annealed in run A. SC55 is a 7.4-g crystal of $x=0.15$ annealed in run B. As one can see from Fig.~\\ref{fg:susc1}, SC45 has an onset of diamagnetism at $T_c=45$~K, while SC55 has $T_c=55$~K. \n\nWhile the inelastic scattering results will be reported separately, the data in the elastic channel provided interesting clues as to the structural phases in the sample, motivating further measurements.\nNeutron diffraction measurements were then performed on triple-axis spectrometers HB-1A and HB-1 at the High Flux Isotope Reactor, Oak Ridge National Laboratory. Sample SC55 was studied at HB-1A, where the incident energy was $E_i=14.6$~meV and horizontal collimations of $40'$-$40'$-S-$40'$-$80'$ were used. The elastic scans of SC45 were done on a 1.2-g piece at HB-1 with $E_i=13.5$~meV and horizontal collimations $48'$-$40'$-S-$40'$-$120'$. \n\n\\begin{figure}[b]\n\\centerline{\\includegraphics[width=0.9\\columnwidth]{fig_chi_LCCO10.pdf}}\n\\caption{Normalized susceptibility data for pieces of the SC55$\\mu$ sample. 
Red line: zero-field-cooled bulk susceptibility; blue circles: diamagnetic shift of the internal field observed by muons in a field of 30~mT.}\n\\label{fg:susc2} \n\\end{figure}\n\n\\begin{figure*}[t]\n\\centerline{\\includegraphics[width=1.8\\columnwidth]{fig_fitted00LPlots.pdf}}\n\\caption{Neutron diffraction intensities measured along ${\\bf Q}=(0,0,L)$ for the SC55 (red circles, measured on HB-1A) and SC45 (blue circles, measured on HB-1) crystals at $T=4$~K. Intensity scale is in arbitrary units; data for the two samples have been normalized at the (004) peak. Counting time was $\\sim5$~s\/pt, with an attenuator in the beam (to avoid detector saturation). Top (bottom) panel uses a linear (logarithmic) intensity scale. The solid lines are model calculations as described in the text.}\n\\label{fg:00L} \n\\end{figure*}\n\nComplementary $\\mu$SR measurements were performed at the $\\pi$M3 beam line of the Paul Scherrer Institut (Switzerland), using the general purpose instrument (GPS), on an NSC crystal and a piece of $x=0.10$ annealed in run B, which we will label SC55$\\mu$. Experiments were performed both with the sample in zero field (ZF), to test for a finite magnetic hyperfine field, and in a transverse field (TF) of either 3 mT, to probe the paramagnetic fraction in the normal state, or 30 mT, to probe the superconducting state. The $\\mu$SR spectra were analyzed in the time domain using least-squares optimization routines from the {\\tt musrfit} software suite \\cite{sute12}.\n\nFigure~\\ref{fg:susc2} shows measurements on the SC55$\\mu$ sample, comparing the normalized diamagnetic response obtained by weak-TF $\\mu$SR and from a bulk susceptibility measurement. Both measurements are quite consistent with $T_c\\approx55$~K.\n\n\\section{Results and Analysis}\n\\label{sc:results}\n\nA number of studies, largely on polycrystalline samples of La-Sr\/Ca-2126, have demonstrated that high-pressure annealing can lead to secondary phases \\cite{liu91,saku93,hu14b}. 
This behavior can depend on the concentration of Ca or Sr. We have chosen to focus on $x=0.10$ and 0.15 because previous work \\cite{kino90} has indicated that these concentrations span the composition range for which homogeneous, single-phase samples can be prepared with the available oxygen partial pressure during crystal growth. The high-pressure annealing of the crystals, essential for achieving bulk superconductivity, can result in some inhomogeneity. To identify specific phases and relative volumes of secondary phases in our superconducting crystals, we have combined neutron diffraction and $\\mu$SR measurements.\n\n\\subsection{La-Ca-2126 and La-214}\n\nIn an earlier study of a high-pressure annealed La-Ca-2126 with $x=0.10$, imaging with transmission electron microscopy demonstrated the presence of intergrowth-like thin layers of La$_{2-x}$Ca$_x$CuO$_4$ (La-214) within the La-Ca-2126 matrix \\cite{hu14b}. (Note that such layers were not observed in an as-grown crystal.) From neutron diffraction measurements, such as those shown in Fig.~\\ref{fg:00L}, we again find evidence for both phases. The sharp Bragg peaks at $(00L)$ with $L$ an even integer come from the La-Ca-2126 domains, while the La-214 phase shows up as broad diffuse scattering with prominent peaks at $L\\sim3.2$ and 8.9. (Other peaks, such as those at $L\\approx2.5$, 5.1, 7.7, 10.2, and 12.8, come from a third phase that will be discussed in the following subsection.)\n\nTo model these data, we created a random stacking along the $c$ axis of two structural units: 0.5 unit cell of La$_2$CaCu$_2$O$_6$ and 1.5 unit cells of La$_2$CuO$_4$. These units are terminated by LaO layers, and there is a shift of $(0.5,0.5,0)$ applied between neighboring units. To calculate the structure factor, structural parameters for La-Ca-2126 were taken from \\cite{ulri02} and for La-214 from \\cite{jorg88}, where we take the space group to be $I4\/mmm$ in both cases. 
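The random-stacking construction just described can be caricatured in one dimension. In the sketch below each block contributes a single unit-amplitude point scatterer at its base (a hypothetical simplification, not the actual structure factors of the two units), which is enough to see sharp peaks only at even $L$ for a pure La-Ca-2126 stacking, and diffuse weight appearing once La-214 blocks are mixed in.

```python
# Toy 1D random-stacking structure factor along (0,0,L).
import cmath
import random

c2126, c214 = 19.37, 12.91                  # c-axis parameters in Angstrom
thick = {"A": c2126 / 2, "B": 1.5 * c214}   # A: half 2126 cell, B: 1.5 La-214 cells

def intensity(seq, L):
    """|F|^2 along (0,0,L), with L indexed on the La-Ca-2126 cell."""
    z, F = 0.0, 0j
    for unit in seq:
        F += cmath.exp(2j * cmath.pi * L * z / c2126)
        z += thick[unit]
    return abs(F) ** 2

pure = ["A"] * 100                          # pure 2126 stacking
print(intensity(pure, 4) > 1e3, intensity(pure, 3) < 1e-6)  # True True

random.seed(0)                              # ~11% B blocks, roughly 3500:29500
mixed = [("B" if random.random() < 0.11 else "A") for _ in range(3000)]
profile = [intensity(mixed, L / 50) for L in range(100, 250)]  # diffuse scattering
```

The real calculation, of course, uses the full structure factors of the two structural units, 29,500/3,500 block counts, and the $L$-dependent resolution correction described in the text.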
The lattice parameters used for La-Ca-2126 are $a=3.83$~\\AA\\ and $c=19.37$~\\AA. For La-214, we started with $c=13.1$~\\AA, corresponding to the undoped bulk compound; however, we found that the fit was considerably improved by using $c_{214}=\\frac23 c_{2126} = 12.91$~\\AA. \n\nIf we allow that the La-214 layers have their $a$ lattice parameter epitaxially-constrained to match that of La-Ca-2126, an expansion of 1.007, then the reduction in $c$ corresponds to a uniaxial strain of the unit cell that does not change the unit cell volume. Given that Ca solubility in La$_2$CuO$_4$ is limited and its presence tends to change the lattice parameters in directions opposite to the apparent strain \\cite{mood92}, it seems likely that the La-214 phase contains minimal Ca ($x\\lesssim0.1$).\n\nThe model calculations shown in Fig.~\\ref{fg:00L} correspond to the square of the structure factor calculated with 29,500 La-Ca-2126 units and 3,500 La-214 units. These parameters are a compromise that is close to the best fit for both samples. The intensity has been multiplied by an $L$-dependent correction for spectrometer resolution volume; the only differences between the two calculated curves are due to the differences in spectrometer configurations for the measurements of the two samples. We find that this model gives a very good description of the measured scattering. Taking into account the relative thicknesses of the structural units, the calculation shown corresponds to an 80.8\\%\\ volume fraction of La-Ca-2126 and 19.2\\%\\ of La-214. The actual best fits to each data set correspond to a La-214 volume fraction of 17.7\\%\\ for SC45 and 19.9\\%\\ for SC55. 
The difference between these values is comparable to the uncertainty, but it suggests that the volume of La-214 increases with annealing time and $T_c$.\n\n\\subsection{La-8-8-20}\n\n\\begin{figure}[b]\n\\centerline{\\includegraphics[width=1.0\\columnwidth]{fig_4.png}}\n\\caption{(a) Elastic scattering obtained at SEQUOIA on the SC55 sample at $T=4$~K. Rings of scattering correspond to powder diffraction from the Al sample holder and from the sample itself. (b) Calculated diffraction pattern for the La-8-8-20 phase using the structural parameters from \\cite{erra88}.}\n\\label{fg:tof} \n\\end{figure}\n\nAnother phase that has been identified in La-Sr-2126 samples is La$_{8-x}$Sr$_x$Cu$_8$O$_{20}$ (La-8-8-20) \\cite{liu91,saku93,erra88}. This phase was first reported in 1987 as La$_5$SrCu$_6$O$_{15}$ \\cite{toku87,torr88}; the proper formula per unit cell was later determined in a neutron powder diffraction study \\cite{erra88}. Given the absence of Sr in our samples and evidence (discussed in the next subsection) that the relevant phase is an antiferromagnetic insulator, we believe that our case corresponds to $x=0$. The phase is essentially a version of the perovskite LaCuO$_{3-\\delta}$ with an ordered arrangement of oxygen vacancies. Taking $a_0$ as the average Cu-O-Cu distance, the unit cell is tetragonal with $a'\\approx2\\sqrt{2}a_0$ and $c'\\approx a_0$. Extrapolating reported lattice parameters for finite $x$ to $x=0$ gives $a'\\approx10.89$~\\AA\\ and $c'\\approx3.85$~\\AA, with $a_0\\approx3.85$~\\AA.\n\nWe can fully explain the extra $(00L)$ peaks in Fig.~\\ref{fg:00L} if the La-8-8-20 phase is oriented coherently with the La-Ca-2126 phase, such that a [110] axis of the former is parallel to the [001] axis of the latter. This identification also allows us to explain an array of superlattice peaks observed in the elastic channel of time-of-flight measurements performed at SEQUOIA. 
An example of peaks in the $(H0L)$ zone for the SC55 sample is shown in Fig.~\\ref{fg:tof}, together with peak positions and intensities calculated from the reported structural parameters \\cite{erra88}. Based on the analysis of Bragg peak intensities, we estimate that the La-8-8-20 phase corresponds to $\\sim15\\%$ of the sample volume. The presence of this third phase then renormalizes the fractions of the other phases to 69\\%\\ La-Ca-2126 and 16\\%\\ La-214.\n\n\n\n\\subsection{Antiferromagnetism}\n\n\\begin{figure}[b]\n\\centerline{\\includegraphics[width=0.9\\columnwidth]{fig_mag_vol_frac.pdf}}\n\\caption{Magnetic volume fraction in the NSC and SC55$\\mu$ samples as determined from $\\mu$SR measured in a weak transverse field of 3 mT. }\n\\label{fg:mvol} \n\\end{figure}\n\nZero-field $\\mu$SR data provide clear evidence for long-range antiferromagnetic ordering in the NSC sample, with two distinct precession frequencies in the $\\mu$SR signal, corresponding (at low temperature) to the local magnetic fields 37.7(1)~mT (70\\%\\ of the signal) and 102.6(3)~mT (30\\%\\ of the signal). The hyperfine fields (sampled at only a few temperatures) grow substantially on cooling, especially between 100~K and 5~K. Similar measurements on the SC55$\\mu$ sample provide evidence for ordering, but without any coherent precession signal; instead, a fast decaying ${\\mu}$SR signal is found, which could be due to a wide distribution of static fields. Figure~\\ref{fg:mvol} shows the temperature-dependent magnetic volume fractions for the two samples determined from weak TF $\\mu$SR. Both samples show a significant enhancement of the magnetic volume fraction at low temperature.\n\nThe rise in magnetic volume and hyperfine fields at low $T$ in the NSC sample is reminiscent of related behavior in lightly-doped La$_{2-x}$Sr$_x$CuO$_4$, with $00$. 
Any $(\\delta,\\gamma)$-distribution query of $G$ can be simulated with probability at least $1-\\eta$ by\n \\[\\max\\left\\{\\frac{1}{\\gamma\\delta^2}\\log\\left(\\frac{8n}{\\eta}\\right),\\frac{8}{\\gamma}\\log\\left(\\frac{4n}{\\eta}\\right)\\right\\}\\]\n profile queries.\n\\end{theorem}\n\n\\begin{corollary}\\label{cor:GR16}\n Take $G\\in\\G n21,\\eta>0$. Any $(\\delta,\\gamma)$-distribution query of $G$ can be simulated with probability at least $1-\\eta$ by\n \\[\\frac{8}{\\gamma^2\\delta^2}\\log^2\\left(\\frac{8n}{\\eta}\\right)\\]\n profile queries. Furthermore, any algorithm making $q$ $(\\delta,\\gamma)$-distribution queries of $G$ can be simulated with probability at least $1-\\eta$ by\n \\[\\frac{8q}{\\gamma^2\\delta^2}\\log^2\\left(\\frac{8nq}{\\eta}\\right)=\\textnormal{poly}\\left(n,\\frac{1}{\\gamma},\\frac{1}{\\delta},\\log\\frac{1}{\\eta}\\right)\\cdot q\\log q\\]\n profile queries.\n\\end{corollary}\n\n\\begin{proof}\nThe first claim is a weaker but simpler version of the upper bound of Theorem \\ref{thm:GR16}. The second claim follows from the first by a union bound.\\qed\n\\end{proof}\n\n\n\\subsection{The Induced Population Game}\n\nFinally, this section introduces a reduction utilized by \\cite{AS13} in an alternative proof of Nash's Theorem, and by \\cite{Bab13a} to upper bound the support size of $\\epsilon$-ANEs.\n\n\\begin{definition}\\label{def:gG}\n Given a game $G$ with payoff function $\\textbf{\\textup{u}}$, we define the {\\em population game} induced by $G$, $G'=g_G(L)$ with payoff function $\\textbf{\\textup{u}}'$ as follows. Every player $i$ is replaced by a population of $L$ players ($v^i_\\ell$ for $\\ell\\in[L]$), each playing $G$ against the aggregate behavior of the other $n-1$ populations. 
More precisely,\\ $u'_{v^i_\ell}(\textbf{\textup{p}}')=u_i\left(p'_{v^i_\ell},\textbf{\textup{p}}_{-i}\right)$ where $p_{i'}=\frac{1}{L}\sum_{\ell=1}^Lp'_{v^{i'}_{\ell}}$ for all $i'\neq i$.\n\end{definition}\n\nPopulation games date back even to Nash's thesis \cite{N50}, in which he uses them to justify the consideration of mixed equilibria. To date, the reduction to the induced population game has been used only in proofs of existence. We show that the reduction can be made query-efficient: an equilibrium of $g_G(L)$ induces an equilibrium of $G$ \emph{which can be found with few additional queries}. This technique is the foundation for the main results of this work.\n\n\begin{lemma}[Appendix \ref{app:Nfactor}]\label{lem:Nfactor}\n Given an $n$-player, $m$-action game $G$ and a population game $G'=g_G(L)$ induced by $G$, if an $\epsilon$-PNE of $G'$ can be found by an algorithm making $q$ $(\delta,\gamma)$-distribution queries of $G'$, then an $\epsilon$-WSNE of $G$ can be found by an algorithm making $n\cdot m\cdot q$ $(\delta,\gamma\/L)$-distribution queries of $G$.\n\end{lemma}\n\n\section{Results}\label{sec:results}\n\nIn this section, we present our three main results:\n\n\begin{itemize}\n \item In Section \ref{subsec:pure}, Theorem \ref{thm:pure} shows a lower bound exponential in $\frac{n\lambda}{\epsilon}$ on the randomized query complexity of finding $\epsilon$-approximate \emph{pure} Nash equilibria of games in $\G n2\lambda$.\n \item In Section \ref{subsec:multi}, we generalize the concept of Lipschitz games. 
Theorem \ref{thm:multi} provides a reduction from finding approximate equilibria in our new class of ``Multi-Lipschitz'' games to finding approximate equilibria of Lipschitz games.\n \item In Section \ref{subsec:det}, Theorem \ref{thm:det} and Proposition \ref{prop:det} provide a complete dichotomy of the query complexity of deterministic algorithms finding $\epsilon$-approximate correlated equilibria of $n$-player, $m$-action games. Corollary \ref{cor:det} scales the lower bound to apply to Lipschitz games, and motivates the consideration of explicitly randomized algorithms for the above results.\n\end{itemize}\n\nThese results also use the following simple lemma (which holds for all types of queries and equilibria mentioned in Section \ref{sec:prelim}).\n\n\begin{lemma}\label{lem:scale}\n For any constants $\lambda'<\lambda\leq1,\epsilon>0$, there is a query-free reduction from finding $\epsilon$-approximate equilibria of games in $\G nm\lambda$ to finding $\frac{\lambda'}{\lambda}\epsilon$-approximate equilibria of games in $\G nm{\lambda'}$.\n\end{lemma}\n\nIn other words, query complexity upper bounds hold as $\lambda$ and $\epsilon$ are scaled up together, and query complexity lower bounds hold as they are scaled down. The proof is simple: the reduction multiplies every payoff by $\frac{\lambda'}{\lambda}$ (making no additional queries) and outputs the result. 
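The scaling step can be checked mechanically. Below is a minimal Python sketch (the two-player, binary-action payoff tables are invented purely for illustration) verifying that multiplying every payoff by a constant $c$ scales every pure-deviation regret by exactly $c$, which is all Lemma \ref{lem:scale} needs:

```python
# Toy check of the payoff-scaling reduction: scaling all payoffs by a
# constant c scales every pure-deviation regret by exactly c.
# The 2-player, 2-action payoff tables below are invented for illustration.

def regret(payoffs, profile):
    """Max gain any player can obtain by a unilateral pure deviation."""
    worst = 0.0
    for i in range(len(profile)):
        current = payoffs[i][profile]
        for a in range(2):  # binary actions
            deviation = profile[:i] + (a,) + profile[i + 1:]
            worst = max(worst, payoffs[i][deviation] - current)
    return worst

# Player 0 wants to match, player 1 wants to mismatch.
u0 = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}
u1 = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}
game = [u0, u1]

c = 0.25  # plays the role of lambda'/lambda
scaled = [{a: c * v for a, v in u.items()} for u in game]

for profile in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert abs(regret(scaled, profile) - c * regret(game, profile)) < 1e-12
```

Since regrets scale linearly, any $\frac{\lambda'}{\lambda}\epsilon$-approximate equilibrium of the scaled game is an $\epsilon$-approximate equilibrium of the original game.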
Note that the lemma does not hold for $\\lambda'>\\lambda$, as the reduction could introduce payoffs that are larger than $1$.\n\n\\subsection{Hardness of Approximate Pure Equilibria}\\label{subsec:pure}\n\nIn this section we will rely heavily on the following result of Babichenko.\n\n\\begin{theorem}[\\cite{Bab16}]\\label{thm:Bab16}\n There is a constant $\\epsilon_0>0$ such that, for any $\\beta=2^{-o(n)}$, the randomized $\\delta$-distribution query complexity of finding an $\\epsilon_0$-WSNE of $n$-player binary-action games with probability at least $\\beta$ is $\\delta^22^{\\Omega(n)}$.\n\\end{theorem}\n\nFor the remainder of this work, the symbol $\\epsilon_0$ refers to this specific constant. A simple application of Lemma \\ref{lem:scale} yields\n\n\\begin{corollary}\\label{cor:Bab16}\n There is a constant $\\epsilon_0>0$ such that, for any $\\beta=2^{-o(n)}$, the randomized $\\delta$-distribution query complexity of finding an $\\epsilon_0\\lambda$-WSNE of games in $\\G n2\\lambda$ with probability at least $\\beta$ is $\\delta^22^{\\Omega(n)}$.\n\\end{corollary}\n\nWe are now ready to state our main result -- an exponential lower bound on the randomized query complexity of finding $\\epsilon$-PNEs of $\\lambda$-Lipschitz games.\n\n\\begin{theorem}[Main Result]\\label{thm:pure}\n There exists some constant $\\epsilon_0$ such that, for any $n\\in\\mathbb{N},\\epsilon<\\epsilon_0,\\lambda\\leq\\frac{\\epsilon}{\\sqrt{8n\\log4n}}$, while every game in $\\G n2\\lambda$ has an $\\epsilon$-PNE, any randomized algorithm finding such equilibria with probability at least $\\beta=1\/\\textnormal{poly}(n)$ must make $\\lambda^22^{\\Omega(n\\lambda\/\\epsilon)}$ profile queries.\n\\end{theorem}\n\nThe proof follows by contradiction. 
Assume such an algorithm $A$ exists making $\lambda^22^{o(n\lambda\/\epsilon)}$ profile queries, convert it to an algorithm $B$ making $\lambda^22^{o(n\lambda\/\epsilon)}$ $\delta$-distribution queries, then use Lemma \ref{lem:Nfactor} to derive an algorithm $C$ finding $\epsilon_0\lambda$-WSNE in $\lambda$-Lipschitz games, contradicting the lower bound of Corollary \ref{cor:Bab16}.\n\n\begin{proof}\nAssume that some such algorithm $A$ exists finding $\epsilon$-PNEs of games in $\G n2\lambda$ making at most $\lambda^22^{o(n\lambda\/\epsilon)}$ profile queries. Consider any $\epsilon<\epsilon_0,\lambda'<\frac{\epsilon}{\sqrt{8n\log4n}}$, and define $\lambda=\frac{\epsilon}{\epsilon_0},L=\frac{\lambda}{\lambda'},N=Ln$. We derive an algorithm $C$ (with an intermediate algorithm $B$) that contradicts Corollary \ref{cor:Bab16}.\n\n\begin{enumerate}[$A$]\n \item Note that $A$ finds $\frac{\epsilon}{2}$-PNEs of games in $\G N2{\frac{3\lambda'}{2}}$ with probability at least $\beta$ making at most $\lambda^{\prime2}2^{o\left(N\lambda'\/\epsilon\right)}$ profile queries ($\beta$ can be amplified to constant).\n \item Let $\delta=\frac{\epsilon_0\lambda'}{4}$. For any game $G'\in\G N2{\lambda'}$, consider an algorithm making $\delta$-distribution queries of \emph{pure action profiles} of $G'$ (introducing the uncertainty without querying mixed strategies).\n\n\begin{claim}[Appendix \ref{app:G''}]\n\tThere is a game $G''\in\G N2{\frac{3\lambda'}{2}}$ that is consistent with all $\delta$-distribution queries (i.e. $\textbf{\textup{u}}''(\textbf{\textup{a}})=\tilde{\textbf{\textup{u}}}'(\textbf{\textup{a}})$ for all queried $\textbf{\textup{a}}$) in which no payoff differs from $G'$ by more than an additive $\delta$. Furthermore, any $\frac{\epsilon}{2}$-PNE of $G''$ is an $\epsilon$-PNE of $G'$. 
Figure \\ref{fig:dist} visually depicts this observation.\n\\end{claim}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=0.3]{deltadistvexact.jpeg}\n \\caption{Taking $G'$ to be the Coordination Game for fixed values of $\\epsilon$ and $\\delta$, the blue region shows the set of $\\epsilon$-approximate equilibria of $G'$ (the acceptable outputs of algorithm $B$) while the orange region shows the set of all $\\frac{\\epsilon}{2}$-approximate equilibria of any possible game $G''$ in which each payoff may be perturbed by at most $\\delta$ (the possible outputs of algorithm $B$).}\n \\label{fig:dist}\n\\end{figure}\n\nSo define the algorithm $B$ that takes input $G'$ and proceeds as though it is algorithm $A$ (but makes $\\delta$-distribution queries instead). By the claim above, after at most $\\lambda^{\\prime2}2^{o\\left(N\\lambda'\/\\epsilon\\right)}$ queries, it has found an $\\frac{\\epsilon}{2}$-PNE of some $G''\\in\\G N2{\\frac{3\\lambda'}{2}}$ that it believes it has learned, which is also an $\\epsilon$-PNE of $G'$.\n\n \\item Consider any game $G\\in\\G n2\\lambda$, and let $G'=g_G(L)$ be the population game induced by $G$. There is an algorithm $C$ described by Lemma \\ref{lem:Nfactor} that takes input $G$ and simulates $B$ on $G'$ (making $2n\\cdot\\lambda^{\\prime2}2^{o\\left(N\\lambda'\/\\epsilon\\right)}=\\delta^22^{o\\left(n\\lambda\/\\epsilon\\right)}$ $\\delta$-distribution queries) and correctly outputs an $\\epsilon$-WSNE (i.e. 
an $\epsilon_0\lambda$-WSNE) of $G$ with constant probability (so certainly $2^{-o(n)}$).\n\end{enumerate}\n\nThe existence of algorithm $C$ directly contradicts the result of Corollary \ref{cor:Bab16}, proving that algorithm $A$ cannot exist.\qed\n\end{proof}\n\n\begin{remark}\n Note that, if we instead start the proof with the assumption of such an algorithm $B$, we can also show a $\delta^22^{o(n\lambda\/\epsilon)}$ lower bound for the $\delta$-distribution query complexity of finding $\epsilon$-PNEs of $\lambda$-Lipschitz games.\n\end{remark}\n\n\subsection{Multi-Lipschitz Games}\label{subsec:multi}\n\nIn this section, we consider a generalization of Lipschitz games in which each player $i\in[n]$ has a ``player-specific'' Lipschitz value $\lambda_i$ in the sense that, if player $i$ changes actions, the payoffs of all other players are changed by at most $\lambda_i$.\n\n\begin{definition}\n A $\Lambda$-Multi-Lipschitz game is an $n$-player, $m$-action game $G$ in which each player $i\in[n]$ is associated with a constant $\lambda_i\leq1$ such that $\sum_{i'=1}^n\lambda_{i'}=\Lambda$\n and, for any player $i'\neq i$ and action profiles $\textbf{\textup{a}}^{(1)},\textbf{\textup{a}}^{(2)}$ with $\textbf{\textup{a}}^{(1)}_{-i}=\textbf{\textup{a}}^{(2)}_{-i}$,\n $\left|u_{i'}\left(\textbf{\textup{a}}^{(1)}\right)-u_{i'}\left(\textbf{\textup{a}}^{(2)}\right)\right|\leq\lambda_i$.\n The class of such games is denoted $\GL\Lambda nm$, and for simplicity it is assumed that $\lambda_1\leq\ldots\leq\lambda_n$.\n\end{definition}\n\nThe consideration of this generalized type of game allows real-world situations to be more accurately modeled. Geopolitical circumstances, for example, naturally take the form of Multi-Lipschitz games, since individual countries have different limits on how much their actions can affect the rest of the world. 
Financial markets present another instance of such games; they not only consist of individual traders who have little impact on each other, but also include a number of institutions that might each have a much greater impact on the market as a whole. This consideration is further motivated by the recent GameStop frenzy; the institutions still wield immense power, but so do the aggregate actions of millions of individuals \\cite{PL21}.\n\nNotice that a $\\lambda$-Lipschitz game is a $\\Lambda$-Multi-Lipschitz game, for $\\Lambda=n\\lambda$. Any algorithm that finds $\\epsilon$-ANEs of $\\Lambda$-Multi-Lipschitz games is also applicable to $\\Lambda\/n$-Lipschitz games. Theorem \\ref{thm:multi} shows a kind of converse for query complexity, reducing from finding $\\epsilon$-ANE of $\\Lambda$-Multi-Lipschitz games to finding $\\epsilon$-ANE of $\\lambda$-Lipschitz games, for $\\lambda$ a constant multiple of $\\Lambda\/n$.\n\n\\begin{theorem}\\label{thm:multi}\n There is a reduction from computing $\\epsilon$-ANEs of games in $\\GL\\Lambda n2$ with probability at least $1-\\eta$ to computing $\\frac{\\epsilon}{2}$-ANEs of games in $\\G{2n}2{\\frac{3\\Lambda}{2n}}$ with probability at least $1-\\frac{\\eta}{2}$ introducing at most a multiplicative $\\textnormal{poly}(n,\\frac{1}{\\epsilon},\\log\\frac{1}{\\eta})$ query blowup.\n\\end{theorem}\n\nAs we now consider $\\epsilon$-ANEs, existence is no longer a question: such equilibria are \\emph{always} guaranteed to exist by Nash's Theorem \\cite{N51}. This proof will also utilize a more general population game $G'=g_G(L_1,\\ldots,L_n)$ in which player $i$ is replaced by a population of size $L_i$ (where the $L_i$ may differ from each other), and the queries in Lemma \\ref{lem:Nfactor} become $(\\delta,\\min_{i\\in[n]}\\{\\gamma\/L_i\\})$-distribution queries (this will now be relevant, as we need to apply Corollary \\ref{cor:GR16}). 
Otherwise, the proof follows along the same lines as that of Theorem \\ref{thm:pure}.\n\n\\begin{proof}\n Consider some $\\epsilon>0$ and a game $G\\in\\GL\\Lambda n2$ (WLOG take $\\lambda_1\\leq\\ldots\\leq\\lambda_n)$. First, if $\\Lambda<\\frac{\\epsilon}{n}$, finding an $\\epsilon$-ANE is trivial (each player can play their best-response to the uniform mixed strategy, found in $2n$ queries). So assume $\\Lambda\\geq\\frac{\\epsilon}{n}$. Define $L_i=\\max\\{\\frac{n\\lambda_i}{\\Lambda},1\\}$ and, taking $i'=\\max_{i\\in[n]}\\{i:L_i=1\\}$, note that\n \\[\\sum_{i=1}^nL_i=\\sum_{i=1}^{i'}1+\\sum_{i=i'+1}^n\\frac{n\\lambda_i}{\\Lambda}=\\sum_{i=1}^{i'}1+\\frac{n}{\\Lambda}\\sum_{i=i'+1}^n\\lambda_i\\leq\\sum_{i=1}^{i'}1+\\frac{n}{\\Lambda}\\Lambda\\leq2n.\\]\n Thus the population game $G'=g_G(L_1,\\ldots,L_n)\\in\\G{2n}2{\\frac{\\Lambda}{n}}$.\n \\begin{enumerate}[$A$]\n \\item Consider an algorithm $A$ that finds $\\frac{\\epsilon}{2}$-ANEs of games in $\\G{2n}2{\\frac{3\\Lambda}{2n}}$, with probability at least $1-\\frac{\\eta}{2}$ making $q$ profile queries.\n \\item Taking $\\delta=\\frac{\\epsilon^2}{4n^2}<\\frac{\\epsilon\\Lambda}{4n}$, the algorithm $B$ from the proof of Theorem \\ref{thm:pure} that simulates $A$ but makes $(\\delta,1)$-distribution queries finds an $\\epsilon$-ANE of $G'$ (the proof in Appendix \\ref{app:G''} also holds for these parameters with this choice of $\\delta$).\n \\item By Lemma \\ref{lem:Nfactor}, there is an algorithm $C$ on input $G\\in\\GL\\Lambda n2$ that simulates $B$ (replacing each $(\\delta,1)$-distribution query of $G'$ with $2n$ $(\\delta,\\frac{1}{n})$-distribution queries of $G$ since $\\frac{1}{L_n}\\geq\\frac{1}{n}$) finding an $\\epsilon$-ANE with probability at least $1-\\eta$.\n \\end{enumerate}\n Applying Corollary \\ref{cor:GR16} (using $\\delta=\\frac{\\epsilon^2}{4n^2},\\gamma=\\frac{1}{n}$) to create a profile-query algorithm from $C$ completes the proof.\\qed\n\\end{proof}\n\nAs an example application of 
Theorem \\ref{thm:multi}, an algorithm of \\cite{GCW19} finds $\\left(\\frac{1}{8}+\\alpha\\right)$-approximate Nash equilibria of games in $\\G n2{\\frac{1}{n}}$; Theorem \\ref{thm:GCW19} states that result in detail, and Corollary \\ref{cor:GCW} extends it to Multi-Lipschitz games.\n\n\\begin{theorem}[\\cite{GCW19}]\\label{thm:GCW19}\n Given constants $\\alpha,\\eta>0$, there is a randomized algorithm that, with probability at least $1-\\eta$, finds $\\left(\\frac{1}{8}+\\alpha\\right)$-approximate Nash equilibria of games in $\\G n2{\\frac{1}{n}}$ making $O\\left(\\frac{1}{\\alpha^4}\\log\\left(\\frac{n}{\\alpha\\eta}\\right)\\right)$ profile queries. \n\\end{theorem}\n\nWe now have some ability to apply this to Multi-Lipschitz games; if $1\\leq\\Lambda<4$ we can improve upon the trivial $\\frac{1}{2}$-approximate equilibrium of Proposition \\ref{prop:det}.\n\n\\begin{corollary}\\label{cor:GCW}\n For $\\alpha,\\eta>0,\\Lambda\\geq1,\\epsilon\\geq\\frac{\\Lambda}{8}+\\alpha$, there is an algorithm finding $\\epsilon$-ANEs of games in $\\GL\\Lambda n2$ with probability at least $1-\\eta$ making at most $\\textnormal{poly}(n,\\frac{1}{\\alpha},\\log\\frac{1}{\\eta})$\n profile queries.\n\\end{corollary}\n\n\\begin{remark}\n This is actually a slight improvement over just combining Theorems \\ref{thm:multi} and \\ref{thm:GCW19}, since the choice of $\\delta$ can be made slightly smaller to shrink $\\alpha$ as necessary.\n\\end{remark}\n\n\\subsection{A Deterministic Lower Bound}\\label{subsec:det}\n\nWe complete this work by generalizing the following result of Hart and Nisan.\n\n\\begin{theorem}[\\cite{HN18}]\\label{thm:HN18}\n For any $\\epsilon<\\frac{1}{2}$, the deterministic profile query complexity of finding $\\epsilon$-ACEs of $n$-player games is $2^{\\Omega(n)}$.\n\\end{theorem}\n\nWhile the proof of Theorem \\ref{thm:HN18} utilizes a reduction from $\\texttt{ApproximateSink}$, we employ a more streamlined approach, presenting an explicit family of ``hard'' 
games that allows us to uncover the optimal value of $\\epsilon$ as a function of the number of actions:\n\n\\begin{theorem}\\label{thm:det}\n Given some $m\\in\\mathbb{N}$, for any $\\epsilon<\\frac{m-1}{m}$, the deterministic profile query complexity of finding $\\epsilon$-ACEs of $n$-player, $m$-action games is $2^{\\Omega(n)}$.\n\\end{theorem}\n\nFurthermore, this value of $\\epsilon$ cannot be improved:\n\n\\begin{proposition}\\label{prop:det}\n Given some $n,m\\in\\mathbb{N}$, for any $\\epsilon\\geq\\frac{m-1}{m}$, an $\\epsilon$-ANE of an $n$-player, $m$-action game can be found making no profile queries.\n\\end{proposition}\n\nThe upper bound of Proposition \\ref{prop:det} can be met if every player plays the uniform mixed strategy over their actions. Finally, we can apply Lemma \\ref{lem:scale} to scale Theorem \\ref{thm:det} and obtain our intended result:\n\n\\begin{corollary}\\label{cor:det}\n Given some $m\\in\\mathbb{N},\\lambda\\in(0,1]$, for any $\\epsilon<\\frac{m-1}{m}\\lambda$, the deterministic profile query complexity of finding $\\epsilon$-ACEs of $n$-player, $m$-action, $\\lambda$-Lipschitz games is $2^{\\Omega(n)}$.\n\\end{corollary}\n\nIn order to prove these results, we introduce a family of games $\\{G_{k,m}\\}$. For any $k,m\\in\\mathbb{N}$, $G_{k,m}$ is a $2k$-player, $m$-action generalization of $k$ Matching Pennies games in which every odd player $i$ wants to match the even player $i+1$ and every even player $i+1$ wants to mismatch with the odd player $i$.\n\n\\begin{definition}\\label{def:gk}\n Define $G_{1,2}$ to be the generalized Matching Pennies game, as described in Figure \\ref{fig:MP}(a). 
Define the generalization $G_{k,m}$ to be the $2k$-player $m$-action game such that, for any $i\\in[k]$, player $2i-1$ has a payoff $1$ for matching player $2i$ and $0$ otherwise (and vice versa for player $2i$) ignoring all other players.\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\begin{tabular}{cc|c|c|}\n & \\multicolumn{1}{c}{} & \\multicolumn{2}{c}{Player $2$} \\\\\n & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{$1$} & \\multicolumn{1}{c}{$2$} \\\\\\cline{3-4}\n \\multirow{2}*{Player $1$} & $1$ & $1,0$ & $0,1$ \\\\\\cline{3-4}\n & $2$ & $0,1$ & $1,0$ \\\\\\cline{3-4}\n \\end{tabular}\n \\caption{The payoff matrix of $G_{1,2}$, the Matching Pennies game.}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.54\\textwidth}\n \\centering\n \\begin{tabular}{cc|c|c|c|}\n & \\multicolumn{1}{c}{} & \\multicolumn{3}{c}{Player 2} \\\\\n & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{$1$} & \\multicolumn{1}{c}{$2$} &\\multicolumn{1}{c}{$3$} \\\\\\cline{3-5}\n \\multirow{3}*{Player 1} & $1$ & $1,0$ & $0,1$ & $0,1$ \\\\\\cline{3-5}\n & $2$ & $0,1$ & $1,0$ & $0,1$ \\\\\\cline{3-5}\n & $3$ & $0,1$ & $0,1$ & $1,0$ \\\\\\cline{3-5}\n \\end{tabular}\n \\caption{The payoff matrix of $G_{1,3}$, the generalized Matching Pennies game.}\n \\end{subfigure}\n \\caption{The payoff matrices of $G_{1,2}$ and $G_{1,3}$.}\n \\label{fig:MP}\n\\end{figure}\n\\end{definition}\n\nThe critical property of the generalized Matching Pennies game is that we can bound the probability that any given action profile is played in any $\\epsilon$-ACE of $G_{k,m}$. If too much probability is jointly placed on matching actions, player $2$ will have high regret. Conversely, if too much probability is jointly placed on mismatched actions, player $1$ will have high regret.\n\n\\begin{lemma}\\label{lem:game}\n For any $k,m\\in\\mathbb{N},\\alpha>0$, take $\\epsilon=\\frac{m-1}{m}-\\alpha$. 
In any $\epsilon$-ACE $\textbf{\textup{X}}^*$ of $G_{k,m}$, every action profile $\textbf{\textup{a}}'\in[m]^n$ satisfies $\Pr_{\textbf{\textup{a}}\sim\textbf{\textup{X}}^*}(\textbf{\textup{a}}=\textbf{\textup{a}}')<\rho^{\frac{n}{2}}$ where\n \[\rho=\frac{(2-\alpha)m-1}{2m}.\]\n\end{lemma}\n\nThis phenomenon can be seen in Figure \ref{fig:cregion}.\n\n\begin{figure}[t]\n \centering\n \begin{subfigure}[b]{0.3\textwidth}\n \centering\n \includegraphics[scale=0.3]{alphac13.jpeg}\n \caption{$\alpha=\frac{1}{3}$}\n \end{subfigure}\n \begin{subfigure}[b]{0.3\textwidth}\n \centering\n \includegraphics[scale=0.3]{alphac16.jpeg}\n \caption{$\alpha=\frac{1}{6}$}\n \end{subfigure}\n \begin{subfigure}[b]{0.3\textwidth}\n \centering\n \includegraphics[scale=0.3]{alphac0.jpeg}\n \caption{$\alpha\rightarrow0$}\n \end{subfigure}\n \caption{The region of possible values for $(\Pr_{\textbf{\textup{a}}\sim\textbf{\textup{X}}^*}(a_1=1,a_2=1),\Pr_{\textbf{\textup{a}}\sim\textbf{\textup{X}}^*}(a_1=2,a_2=2))$ in any $\left(\frac{1}{2}-\alpha\right)$-approximate correlated equilibrium of $G_{k,2}$. The only exact correlated equilibrium is shown by the red point, and the corresponding values of $\rho$ are displayed as the orange lines.}\n \label{fig:cregion}\n\end{figure}\n\n\begin{proof}\n Define $n=2k$ to be the number of players in $G_{k,m}$, and consider some $\epsilon$-ACE $\textbf{\textup{X}}^*$. Now WLOG consider players $1$ and $2$ and assume, for the sake of contradiction, that there exist some actions $j_1,j_2$ such that $\Pr_{\textbf{\textup{a}}\sim\textbf{\textup{X}}^*}(a_1=j_1,a_2=j_2)>\rho$. We will need to consider the two cases $j_1=j_2$ and $j_1\neq j_2$.\n \paragraph{Matching Actions} In this case, WLOG assume $j_1=j_2=1$. We show that player $2$ can improve her payoff by more than $\epsilon$. 
Under $\textbf{\textup{X}}^*$, with probability $>\rho$, any random realization $\textbf{\textup{a}}\sim\textbf{\textup{X}}^*$ will yield player $2$ a payoff of $0$. In other words, $u_2(\textbf{\textup{X}}^*)<1-\rho$. Furthermore, considering the marginal distribution over player $1$'s action, we are guaranteed that\n \[\sum_{j=2}^m\Pr_{\textbf{\textup{a}}\sim\textbf{\textup{X}}^*}(a_1=j)<1-\rho\]\n so there must exist some action (WLOG action $2$) for which $\Pr_{\textbf{\textup{a}}\sim\textbf{\textup{X}}^*}(a_1=2)<\frac{1-\rho}{m-1}$. As such, define $\phi(j)=2$. Then $u_2^{(\phi)}(\textbf{\textup{X}}^*)>1-\frac{1-\rho}{m-1}$, so\n \[\textnormal{reg}_2^{(\phi)}(\textbf{\textup{X}}^*)>\underbrace{\left(1-\frac{1-\rho}{m-1}\right)}_{u_2^{(\phi)}(\textbf{\textup{X}}^*)}-\underbrace{(1-\rho)}_{u_2(\textbf{\textup{X}}^*)}=\frac{\rho m-1}{m-1}\geq\frac{m-1}{m}-\alpha\]\n for all $m\geq2$. This contradicts our assumption of an $\epsilon$-ACE.\n \paragraph{Mismatched Actions}\n In this case, WLOG assume $j_1=1,j_2=2$. The situation is simpler: taking $\phi(j)=2$, we have $u_1(\textbf{\textup{X}}^*)<1-\rho$ and $u_1^{(\phi)}(\textbf{\textup{X}}^*)>\rho$, so $\textnormal{reg}_1^{(\phi)}(\textbf{\textup{X}}^*)>\frac{m-1}{m}-\alpha$. This too contradicts our assumption of an $\epsilon$-ACE, and thus completes the proof of Lemma \ref{lem:game}.\qed\n\end{proof}\n\nWe can now prove Theorem \ref{thm:det}. The general idea is that, should an efficient algorithm exist, because any equilibrium of $G_{k,m}$ must have large support by Lemma \ref{lem:game}, there is significant probability assigned to action profiles that are not queried by the algorithm. We show there is a game that the algorithm cannot distinguish from $G_{k,m}$ that shares no approximate equilibria with $G_{k,m}$.\n\n\begin{proof}[Proof of Theorem \ref{thm:det}]\n Consider any $\alpha>0$ and let $\epsilon=\frac{m-1}{m}-\alpha$. 
Taking $\rho$ as in the statement of Lemma \ref{lem:game}, assume there exists some deterministic algorithm $A$ that takes an $n$-player, $m$-action game $G$ as input and finds an $\epsilon$-ACE of $G$ querying the payoffs of $q<\frac{\alpha}{2}\rho^{-\frac{n}{2}}$ action profiles. Fix some $k\in\mathbb{N}$ and consider input $G_{k,m}$ as defined in Definition \ref{def:gk}. Then $\textbf{\textup{X}}^*=A\left(G_{k,m}\right)$ is an $\epsilon$-ACE of $G_{k,m}$. Note that, for some $j$, $\Pr_{\textbf{\textup{a}}\sim\textbf{\textup{X}}^*}(a_1=j)\leq\frac{1}{m}$ (WLOG assume $j=1$).\n \n Now define the perturbation $G'_{k,m}$ of $G_{k,m}$ with payoffs defined to be equal to $G_{k,m}$ for every action profile queried by $A$, $1$ for every remaining action profile in which player $1$ plays action $1$ (chosen because it is assigned low probability by $\textbf{\textup{X}}^*$ by assumption), and $0$ otherwise. Note that, by definition, $A$ cannot distinguish between $G_{k,m}$ and $G'_{k,m}$, so $A(G'_{k,m})=\textbf{\textup{X}}^*$.\n \n Taking the function $\phi(j)=1$, the quantity we need to bound is $\textnormal{reg}_1^{\prime(\phi)}(\textbf{\textup{X}}^*)\geq u_1^{\prime(\phi)}(\textbf{\textup{X}}^*)-u'_1(\textbf{\textup{X}}^*)$.\n We must bound the components of this expression as follows:\n\begin{claim}[Appendix \ref{app:Gkm}]\n$u^{\prime(\phi)}_1(\textbf{\textup{X}}^*)>(1-q\rho^{\frac{n}{2}})$ and $u'_1(\textbf{\textup{X}}^*)<\left(\frac{1}{m}+q\rho^{\frac{n}{2}}\right)$.\n\end{claim}\n Using the claim and once again recalling the assumption that $q<\frac{\alpha}{2}\rho^{-\frac{n}{2}}$, we see\n \[\textnormal{reg}^{\prime(\phi)}_1(\textbf{\textup{X}}^*)>\left(1-\frac{1}{m}-2q\rho^{\frac{n}{2}}\right)>\frac{m-1}{m}-\alpha=\epsilon.\]\n So $\textbf{\textup{X}}^*$ cannot actually be an $\epsilon$-ACE of $G'_{k,m}$, contradicting the correctness of $A$. 
This completes the proof of Theorem \ref{thm:det}.\qed\n\end{proof}\n\n\section{Further Directions}\n\nAn important additional question is the query complexity of finding $\epsilon$-PNEs of $n$-player, $\lambda$-Lipschitz games in which $\epsilon=\Omega(n\lambda)$. Theorem \ref{thm:pure} says nothing in this parameter range, yet Theorem \ref{thm:GCW19} provides a logarithmic upper bound in this regime. The tightness of this bound is of continuing interest. Furthermore, the query- and computationally-efficient reduction discussed in Lemma \ref{lem:Nfactor} provides a hopeful avenue for further results bounding the query and computational complexities of finding equilibria in many other classes of games.\n\n\paragraph{Acknowledgements}{We thank Francisco Marmalejo Coss\'{i}o and Rahul Santhanam for their support in the development of this work, and the reviewers of an earlier version for helpful comments. Matthew Katzman was supported by an Oxford-DeepMind Studentship for this work.}\n\n\bibliographystyle{splncs04}\n\n\section{Introduction}\n\nA Lipschitz game is a multi-player game in which there is an additive limit $\lambda$ (called the Lipschitz constant of the game) on how much any player's payoffs can change due to a deviation by any other player. Thus, any player's payoff function is $\lambda$-Lipschitz continuous as a function of the other players' mixed strategies. Lipschitz games were introduced about ten years ago by Azrieli and Shmaya \cite{AS13}. A key feature of Lipschitz games is that they are guaranteed to have approximate Nash equilibria \emph{in pure strategies}, where the quality of the approximation depends on the number of players $n$, the number of actions $m$, and the Lipschitz constant $\lambda$. 
In particular, \cite{AS13} showed that this guarantee holds (keeping the number of actions constant) for Lipschitz constants of size $o(1\/\sqrt{n \log n})$ (existence of pure approximate equilibria is trivial for Lipschitz constants of size $o(1\/n)$, since players then have so little effect on each other's payoffs that they can best-respond independently to get a pure approximate equilibrium). The general idea of the existence proof is to take a mixed Nash equilibrium (guaranteed to exist by Nash's theorem \cite{N51}), and prove that there is a positive probability that a pure profile sampled from it will constitute an approximate equilibrium.\n\nAs noted in \cite{AS13} (and elsewhere), solutions in pure-strategy profiles are a more plausible and satisfying model of a game's outcome than solutions in mixed-strategy profiles. On the other hand, the existence guarantee raises the question of how to \emph{compute} an approximate equilibrium. In contrast with potential games, in which pure-strategy equilibria can often be found via best- and better-response dynamics, there is no obvious natural approach in the context of Lipschitz games, despite the existence guarantee. 
The general algorithmic question (of interest in the present paper) is:\n\begin{quote}\n\emph{Given a Lipschitz game, how hard is it to find a pure-strategy profile that constitutes an approximate equilibrium?}\n\end{quote}\nRecent work \cite{DFS20,GCW19} has identified algorithms achieving additive constant approximation guarantees, but as noted by Babichenko \cite{Bab19}, the extent to which we can achieve the pure approximate equilibria that are guaranteed by \cite{AS13} (or alternatively, potential lower bounds on query or computational complexity) is unknown.\n\nVariants and special cases of this question include classes of Lipschitz games having a concise representation, as opposed to unrestricted Lipschitz games for which an algorithm has query access to the payoff function (as we consider in this paper). In the latter case, the question subdivides into what we can say about the query complexity, and about the computational complexity (for concisely-represented games the query complexity is low, by Theorem 3.3 of \cite{GR16}). Moreover, if equilibria can be easily computed, does that remain the case if we ask for this to be achievable via some kind of natural-looking decentralized process? Positive results for these questions help us to believe in ``approximate pure Nash equilibrium'' as a solution concept for Lipschitz games. Alternatively, it is of interest to identify computational obstacles to the search for a Nash equilibrium.\n\n\subsection{Prior Work}\n\nIn this paper we apply various important lower bounds on the query complexity of computing approximate Nash equilibria of \emph{unrestricted} $n$-player games. In general, lower bounds on the query complexity are known that are exponential in $n$ (which motivates a focus on subclasses of games, such as Lipschitz games, among others). 
Hart and Mansour \cite{HM10} showed that the communication (and thus query) complexity of computing an exact Nash equilibrium (pure or mixed) in a game with $n$ players is $2^{\Omega(n)}$. Subsequent results have iteratively strengthened this lower bound. First, Babichenko \cite{Bab16} showed an exponential lower bound on the \emph{randomized} query complexity of computing an $\epsilon$-well supported Nash equilibrium (an approximate equilibrium in which every action in the support of a given player's mixed strategy is an $\epsilon$-best response) for a constant value of $\epsilon$, even when considering $\delta$-distributional query complexity, as defined in Definition \ref{def:query}. Shortly after, Chen, Cheng, and Tang \cite{CCT17} showed a $2^{\Omega\left(n\/\log n\right)}$ lower bound on the randomized query complexity of computing an $\epsilon$-approximate Nash equilibrium for a constant value of $\epsilon$, which Rubinstein \cite{Rub16} improved to a $2^{\Omega(n)}$ lower bound, even allowing a constant fraction of players to have regret greater than $\epsilon$ (taking regret as defined in Definition \ref{def:reg}). These intractability results motivate us to consider a restricted class of games (Lipschitz, or large, games) which contain significantly more structure than do general games.\n\nLipschitz games were initially considered by Azrieli and Shmaya \cite{AS13}, who showed that any $\lambda$-Lipschitz game (as defined in Section \ref{subsec:game}) with $n$ players and $m$ actions admits an $\epsilon$-approximate \emph{pure} Nash equilibrium for any $\epsilon\geq\lambda\sqrt{8n\log 2mn}$\footnote{General Lipschitz games cannot be written down concisely, so we assume black-box access to the payoff function of a Lipschitz game. This emphasizes the importance of considering \emph{query complexity} in this context. Note that a pure approximate equilibrium can still be checked using $mn$ queries.}. 
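For intuition about the scale of this guarantee, the threshold can be evaluated numerically. A small Python sketch (the natural logarithm is an assumption here; the constant is immaterial to the asymptotics):

```python
import math

def as13_threshold(n, m, lam):
    # Smallest epsilon covered by the existence guarantee
    # epsilon >= lam * sqrt(8 * n * log(2 * m * n)).
    return lam * math.sqrt(8 * n * math.log(2 * m * n))

# For 1/n-Lipschitz binary-action games the guaranteed approximation
# quality improves as the number of players grows:
for n in (10, 100, 1000):
    print(n, round(as13_threshold(n, m=2, lam=1.0 / n), 3))
```

Only for $\lambda=o(1\/\sqrt{n\log n})$ does this threshold tend to zero, matching the regime discussed above.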
In Section \\ref{sec:results} we provide a lower bound on the query complexity of finding such an equilibrium.\n\nPositive algorithmic results have been found for classes of games that combine the Lipschitz property with others, such as \\emph{anonymous} games \\cite{DP15} and \\emph{aggregative} games \\cite{Bab18}. For anonymous games (in which each player's payoffs depend only on the \\emph{number} of other players playing each action, and not \\emph{which} players), Daskalakis and Papadimitriou \\cite{DP15} improved upon the upper bound of \\cite{AS13} to guarantee the existence of $\\epsilon$-approximate pure Nash equilibria for $\\epsilon=\\Omega(\\lambda)$ (the only dependence on $n$ coming from $\\lambda$ itself). Peretz et al.\\ \\cite{PSS20} analyze Lipschitz values that result from $\\delta$-perturbing anonymous games, in the sense that every player is assumed to randomize uniformly with probability $\\delta$. Goldberg and Turchetta \\cite{GT17} showed that a $3\\lambda$-approximate pure Nash equilibrium of a $\\lambda$-Lipschitz anonymous game can be found querying $O(n\\log n)$ individual payoffs.\n\nGoldberg et al.\\ \\cite{GCW19} showed a logarithmic upper bound on the randomized query complexity of computing $\\frac{1}{8}$-approximate Nash equilibria in binary-action $\\frac{1}{n}$-Lipschitz games. They also presented a randomized algorithm finding a $(\\frac{3}{4}+\\alpha)$-approximate Nash equilibrium when the number of actions is unbounded.\n\n\\subsection{Our Contributions}\n\nThe primary contribution of this work is the development and application of a query-efficient version of a reduction technique used in \\cite{AS13,Bab13a} in which an algorithm finds an equilibrium in one game by reducing it to a population game with a smaller Lipschitz parameter. 
As the former is a known hard problem, we prove hardness for the latter.\n\nIn Section \\ref{sec:prelim} we introduce notation and relevant concepts, and describe the query model assumed for our results. Section \\ref{sec:results} contains our main contributions. In particular, Theorem \\ref{thm:pure} utilizes a query-efficient reduction to a population game with a small Lipschitz parameter while preserving the equilibrium. Hence, selecting the parameters appropriately, the hardness of finding well-supported equilibria in general games proven in \\cite{Bab16} translates to finding approximate pure equilibria in Lipschitz games. Whilst several papers have discussed both this problem and this technique, none has put forward this observation.\n\nIn Section \\ref{subsec:multi} we introduce ``Multi-Lipschitz'' games, a generalization of Lipschitz games that allows player-specific Lipschitz values (the amount of influence the player has on others). We show that certain results for Lipschitz games extend to these, and that the measure of interest is the sum of the individual Lipschitz values (in a standard Lipschitz game, they are all equal). Theorem \\ref{thm:multi} provides a query-efficient reduction from finding equilibria in Multi-Lipschitz games to finding equilibria in Lipschitz games. In particular, if there is a query-efficient approximation algorithm for the latter, there is one for the former as well.\n\nFinally, Section \\ref{subsec:det} provides a simpler proof of the result of \\cite{HN18} showing exponential query lower bounds on finding correlated equilibria with approximation constants better than $\\frac{1}{2}$. Theorem \\ref{thm:det} provides a more general result for games with more than $2$ actions, and Corollary \\ref{cor:det} extends this idea further to apply to Lipschitz games. 
While \\cite{HN18} relies on a reduction from the $\\texttt{ApproximateSink}$ problem, we explicitly describe a class of games with vastly different equilibria between which no algorithm making a subexponential number of queries can distinguish. To any weak deterministic algorithm, these games look like pairs of players playing Matching Pennies against each other; however, the equilibria are far from those of the Matching Pennies game.\n\nFor the sake of brevity, some technical details are omitted from this work, and can be found in the appendix.\n\n\\section{Preliminaries}\\label{sec:prelim}\n\nThroughout, we use the following notation.\n\\begin{itemize}\n\\item Boldface capital letters denote matrices, and boldface lowercase letters denote vectors.\n\\item The symbol $\\textbf{\\textup{a}}$ is used to denote a pure action profile, and $\\textbf{\\textup{p}}$ is used when the strategy profiles may be mixed. Furthermore, $\\textbf{\\textup{X}}$ is used to denote \\emph{correlated} strategies.\n\\item $[n]$ and $[m]$ denote the sets $\\{1,\\ldots,n\\}$ of players and $\\{1,\\ldots,m\\}$ of actions, respectively. Furthermore, $i\\in[n]$ will always refer to a player, and $j\\in[m]$ will always refer to an action.\n\\item Whenever a query returns an approximate answer, the payoff vector $\\tilde{\\textbf{\\textup{u}}}$ will be used to represent the approximation and $\\textbf{\\textup{u}}$ will represent the true value.\n\\end{itemize}\n\n\n\\subsection{The Game Model}\\label{subsec:game}\n\nWe introduce standard concepts of strategy profiles, payoffs, regret, and equilibria for pure, mixed, and correlated strategies.\n\n\\begin{paragraph}{Types of strategy profile; notation:}\n \\begin{itemize}\n \\item A \\emph{pure} action profile $\\textbf{\\textup{a}}=(a_1,\\ldots,a_n)\\in[m]^n$ is an assignment of one action to each player. 
We use $\\textbf{\\textup{a}}_{-i}=(a_1,\\ldots,a_{i-1},a_{i+1},\\ldots,a_n)\\in[m]^{n-1}$ to denote the set of actions played by players in $[n]\\setminus\\{i\\}$.\n \\item A (possibly \\emph{mixed}) strategy profile $\\textbf{\\textup{p}}=(p_1,\\ldots,p_n)\\in(\\Delta[m])^n$ (where $\\Delta(S)$ is the probability simplex over $S$) is a collection of $n$ independent probability distributions, each taken over the action set of a player, where $p_{ij}$ is the probability with which player $i$ plays action $j$. The set of distributions for players in $[n]\\setminus\\{i\\}$ is denoted $\\textbf{\\textup{p}}_{-i}=(p_1,\\ldots,p_{i-1},p_{i+1},\\ldots,p_n)$. When $\\textbf{\\textup{p}}$ contains just $0$-$1$ values, $\\textbf{\\textup{p}}$ is equivalent to some action profile $\\textbf{\\textup{a}}\\in[m]^n$.\n \n Furthermore, when considering binary-action games with action set $\\{1,2\\}$, we instead describe strategy profiles by $\\textbf{\\textup{p}}=\\left(p_1,\\ldots,p_n\\right)$, where $p_i$ is the probability that player $i$ plays action $1$.\n \\item A \\emph{correlated} strategy profile $\\textbf{\\textup{X}}\\in\\Delta([m]^n)$ is a single joint probability distribution taken over the space of all pure action profiles $\\textbf{\\textup{a}}$.\n \\end{itemize}\n\\end{paragraph}\n\n\\begin{paragraph}{Notation for payoffs:}\n Given player $i$, action $j$, and pure action profile $\\textbf{\\textup{a}}$,\n \\begin{itemize}\n \\item $u_i(j,\\textbf{\\textup{a}}_{-i})$ is the payoff that player $i$ obtains for playing action $j$ when all other players play the actions given in $\\textbf{\\textup{a}}_{-i}$.\n \\item $u_i(\\textbf{\\textup{a}})=u_i(a_i,\\textbf{\\textup{a}}_{-i})$ is the payoff that player $i$ obtains when all players play the actions given in $\\textbf{\\textup{a}}$.\n \\item Similarly for mixed-strategy 
profiles:\\newline$u_i(j,\\textbf{\\textup{p}}_{-i})=\\mathbb{E}_{\\textbf{\\textup{a}}_{-i}\\sim\\textbf{\\textup{p}}_{-i}}[u_i(j,\\textbf{\\textup{a}}_{-i})]$ and $u_i(\\textbf{\\textup{p}})=\\mathbb{E}_{\\textbf{\\textup{a}}\\sim\\textbf{\\textup{p}}}[u_i(\\textbf{\\textup{a}})]$.\n \\item For a given player $i\\in[n]$, consider a deviation function $\\phi:[m]\\rightarrow[m]$. Then, similarly, $u_i^{(\\phi)}(\\textbf{\\textup{X}})=\\mathbb{E}_{\\textbf{\\textup{a}}\\sim\\textbf{\\textup{X}}}[u_i(\\phi(a_i),\\textbf{\\textup{a}}_{-i})]$ and $u_i(\\textbf{\\textup{X}})=\\mathbb{E}_{\\textbf{\\textup{a}}\\sim\\textbf{\\textup{X}}}[u_i(\\textbf{\\textup{a}})]$. Furthermore, given an event $E$, $u_i(\\textbf{\\textup{X}}\\mid E)=\\mathbb{E}_{\\textbf{\\textup{a}}\\sim\\textbf{\\textup{X}}}[u_i(\\textbf{\\textup{a}})\\mid E]$.\n \\end{itemize}\n\\end{paragraph}\n\n\\begin{definition}[Regret]\\label{def:reg}\n \\begin{itemize}\n \\item Given a player $i$ and a strategy profile $\\textbf{\\textup{p}}$, define the regret\n \\[\\textnormal{reg}_i(\\textbf{\\textup{p}})=\\max_{j\\in[m]}u_i(j,\\textbf{\\textup{p}}_{-i})-u_i(\\textbf{\\textup{p}})\\]\n to be the difference between the payoffs of player $i$'s best response to $\\textbf{\\textup{p}}_{-i}$ and $i$'s strategy $p_i$.\n \\item Given a player $i$ and a correlated strategy profile $\\textbf{\\textup{X}}$, define\n \\[\\textnormal{reg}_i^{(\\phi)}(\\textbf{\\textup{X}})=u_i^{(\\phi)}(\\textbf{\\textup{X}})-u_i(\\textbf{\\textup{X}}),\\qquad\\qquad\\textnormal{reg}_i(\\textbf{\\textup{X}})=\\max_{\\phi:[m]\\rightarrow[m]}\\textnormal{reg}_i^{(\\phi)}(\\textbf{\\textup{X}}),\\]\n the regret $\\textnormal{reg}_i(\\textbf{\\textup{X}})$ being the difference between the payoffs of player $i$'s best deviation from $\\textbf{\\textup{X}}$, and $\\textbf{\\textup{X}}$.\n \\end{itemize}\n\\end{definition}\n\n\\begin{definition}[Equilibria]\n \\begin{itemize}\n \\item An {\\em $\\epsilon$-approximate Nash equilibrium} 
($\\epsilon$-ANE) is a strategy profile $\\textbf{\\textup{p}}^*$ such that, for every player $i\\in[n]$, $\\textnormal{reg}_i(\\textbf{\\textup{p}}^*)\\leq\\epsilon$.\n \\item An {\\em $\\epsilon$-well supported Nash equilibrium} ($\\epsilon$-WSNE) is an $\\epsilon$-ANE $\\textbf{\\textup{p}}^*$ for which every action $j$ in the support of $p^*_i$ is an $\\epsilon$-best response to $\\textbf{\\textup{p}}^*_{-i}$. \n \\item An {\\em $\\epsilon$-approximate pure Nash equilibrium} ($\\epsilon$-PNE) is a pure action profile $\\textbf{\\textup{a}}$ such that, for every player $i\\in[n]$, $\\textnormal{reg}_i(\\textbf{\\textup{a}})\\leq\\epsilon$.\n \\item An {\\em $\\epsilon$-approximate correlated equilibrium} ($\\epsilon$-ACE) is a correlated strategy profile $\\textbf{\\textup{X}}^*$ such that, for every player $i\\in[n]$, $\\textnormal{reg}_i(\\textbf{\\textup{X}}^*)\\leq\\epsilon$.\n \\end{itemize}\n\\end{definition}\n\nNote that any $\\epsilon$-PNE is an $\\epsilon$-WSNE, and any $\\epsilon$-ANE constitutes an $\\epsilon$-ACE. Consequently, any algorithmic lower bounds on correlated equilibria also apply to Nash equilibria.\n\nFinally, we introduce the class of games that is the focus of this work.\n\n\\begin{definition}[Lipschitz Games]\n For any value $\\lambda\\in(0,1]$, a {\\em $\\lambda$-Lipschitz game} is a game in which a change in strategy of any given player can affect the payoffs of any other player by at most an additive $\\lambda$; that is, for every player $i$ and pair of action profiles $\\textbf{\\textup{a}},\\textbf{\\textup{a}}'$, $|u_i(\\textbf{\\textup{a}})-u_i(\\textbf{\\textup{a}}')|\\leq\\lambda||\\textbf{\\textup{a}}_{-i}-\\textbf{\\textup{a}}'_{-i}||_1$. 
Here we consider games in which all payoffs are in the range $[0,1]$ (in particular, note that any general game is, by definition, $1$-Lipschitz).\n\\end{definition}\nThe set of $n$-player, $m$-action, $\\lambda$-Lipschitz games will be denoted $\\G nm\\lambda$.\n\n\\subsection{The Query Model}\n\nThis section introduces the model of queries we consider.\n\n\\begin{definition}[Queries]\\label{def:query}\n\\begin{itemize}\n \\item A {\\em profile query} of a pure action profile $\\textbf{\\textup{a}}$ of a game $G$, denoted $\\mathcal{Q}^G(\\textbf{\\textup{a}})$, returns a vector $\\textbf{\\textup{u}}$ of payoffs $u_i(\\textbf{\\textup{a}})$ for each player $i\\in[n]$.\n \\item A {\\em $\\delta$-distribution query} of a strategy profile $\\textbf{\\textup{p}}$ of a game $G$, denoted $\\mathcal{Q}^G_{\\delta}(\\textbf{\\textup{p}})$, returns a vector $\\tilde{\\textbf{\\textup{u}}}$ of $n$ values such that $\\left|\\left|\\tilde{\\textbf{\\textup{u}}}-\\textbf{\\textup{u}}\\right|\\right|_{\\infty}\\leq\\delta$, where $\\textbf{\\textup{u}}$ is the players' expected utilities from $\\textbf{\\textup{p}}$. We also define a {\\em $(\\delta,\\gamma)$-distribution query} to be a $\\delta$-distribution query of a strategy profile $\\textbf{\\textup{p}}$ in which every action $j$ in the support of $p_i$ is allocated probability at least $\\gamma$ for every player $i\\in[n]$.\n \\item The {\\em (profile) query complexity} of an algorithm $A$ on input game $G$ is the number of calls $A$ makes to $\\mathcal{Q}^G$. The {\\em $\\delta$-distribution query complexity} of $A$ is the number of calls $A$ makes to $\\mathcal{Q}^G_\\delta$.\n\\end{itemize}\n\\end{definition}\n\nBabichenko \\cite{Bab16} points out that it is uninteresting to consider $0$-distribution queries, as any game in which every payoff is a multiple of $\\frac{1}{M}$ for some $M\\in\\mathbb{N}$ can be completely learned by a single $0$-distribution query. 
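As a concrete illustration of the interplay between the two query types, for $\delta>0$ a distribution query can be emulated by sampling pure profiles from $\textbf{\textup{p}}$ and averaging the returned payoff vectors (a minimal sketch under our own naming assumptions; the oracle \texttt{profile\_query} and the sample count are illustrative, and concentration bounds govern how many samples suffice):

```python
import random

def simulate_distribution_query(profile_query, p, num_samples, rng=random):
    """Approximate a delta-distribution query of mixed profile `p` by sampling.

    `p[i][j]` is the probability that player i plays action j, and
    `profile_query(a)` returns the payoff vector of the pure profile `a`
    (a stand-in for the black-box oracle).  Averaging num_samples profile
    queries estimates each player's expected payoff; by standard Chernoff
    bounds, on the order of log(n)/delta**2 samples give additive error
    delta for all n players with high probability.
    """
    n = len(p)
    m = len(p[0])
    totals = [0.0] * n
    for _ in range(num_samples):
        # Draw one pure profile, each player sampling independently from p_i.
        a = tuple(rng.choices(range(m), weights=p[i])[0] for i in range(n))
        u = profile_query(a)
        totals = [t + ui for t, ui in zip(totals, u)]
    return [t / num_samples for t in totals]
```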
On the other hand, additive approximations to the expected payoffs can be computed via sampling from $\\textbf{\\textup{p}}$. Indeed, for general binary-action games we have from \\cite{GR16}:\n\n\\begin{theorem}[\\cite{GR16}]\\label{thm:GR16}\n Take $G\\in\\G n21,\\eta>0$. Any $(\\delta,\\gamma)$-distribution query of $G$ can be simulated with probability at least $1-\\eta$ by\n \\[\\max\\left\\{\\frac{1}{\\gamma\\delta^2}\\log\\left(\\frac{8n}{\\eta}\\right),\\frac{8}{\\gamma}\\log\\left(\\frac{4n}{\\eta}\\right)\\right\\}\\]\n profile queries.\n\\end{theorem}\n\n\\begin{corollary}\\label{cor:GR16}\n Take $G\\in\\G n21,\\eta>0$. Any $(\\delta,\\gamma)$-distribution query of $G$ can be simulated with probability at least $1-\\eta$ by\n \\[\\frac{8}{\\gamma^2\\delta^2}\\log^2\\left(\\frac{8n}{\\eta}\\right)\\]\n profile queries. Furthermore, any algorithm making $q$ $(\\delta,\\gamma)$-distribution queries of $G$ can be simulated with probability at least $1-\\eta$ by\n \\[\\frac{8q}{\\gamma^2\\delta^2}\\log^2\\left(\\frac{8nq}{\\eta}\\right)=\\textnormal{poly}\\left(n,\\frac{1}{\\gamma},\\frac{1}{\\delta},\\log\\frac{1}{\\eta}\\right)\\cdot q\\log q\\]\n profile queries.\n\\end{corollary}\n\n\\begin{proof}\nThe first claim is a weaker but simpler version of the upper bound of Theorem \\ref{thm:GR16}. The second claim follows from the first by a union bound.\\qed\n\\end{proof}\n\n\n\\subsection{The Induced Population Game}\n\nFinally, this section introduces a reduction utilized by \\cite{AS13} in an alternative proof of Nash's Theorem, and by \\cite{Bab13a} to upper bound the support size of $\\epsilon$-ANEs.\n\n\\begin{definition}\\label{def:gG}\n Given a game $G$ with payoff function $\\textbf{\\textup{u}}$, we define the {\\em population game} induced by $G$, $G'=g_G(L)$ with payoff function $\\textbf{\\textup{u}}'$ as follows. 
Every player $i$ is replaced by a population of $L$ players ($v^i_\\ell$ for $\\ell\\in[L]$), each playing $G$ against the aggregate behavior of the other $n-1$ populations. More precisely,\\\\ $u'_{v^i_\\ell}(\\textbf{\\textup{p}}')=u_i\\left(p'_{v^i_\\ell},\\textbf{\\textup{p}}_{-i}\\right)$ where $p_{i'}=\\frac{1}{L}\\sum_{\\ell=1}^Lp'_{v^{i'}_{\\ell}}$ for all $i'\\neq i$.\n\\end{definition}\n\nPopulation games date back even to Nash's thesis \\cite{N50}, in which he uses them to justify the consideration of mixed equilibria. To date, the reduction to the induced population game has focused on proofs of existence. We show that the reduction can be made query-efficient: an equilibrium of $g_G(L)$ induces an equilibrium of $G$ \\emph{which can be found with few additional queries}. This technique is the foundation for the main results of this work.\n\n\\begin{lemma}[Appendix \\ref{app:Nfactor}]\\label{lem:Nfactor}\n Given an $n$-player, $m$-action game $G$ and a population game $G'=g_G(L)$ induced by $G$, if an $\\epsilon$-PNE of $G'$ can be found by an algorithm making $q$ $(\\delta,\\gamma)$-distribution queries of $G'$, then an $\\epsilon$-WSNE of $G$ can be found by an algorithm making $n\\cdot m\\cdot q$ $(\\delta,\\gamma\/L)$-distribution queries of $G$.\n\\end{lemma}\n\n\\section{Results}\\label{sec:results}\n\nIn this section, we present our three main results:\n\n\\begin{itemize}\n \\item In Section \\ref{subsec:pure}, Theorem \\ref{thm:pure} shows a lower bound exponential in $\\frac{n\\lambda}{\\epsilon}$ on the randomized query complexity of finding $\\epsilon$-approximate \\emph{pure} Nash equilibria of games in $\\G n2\\lambda$.\n \\item In Section \\ref{subsec:multi}, we generalize the concept of Lipschitz games. 
Theorem \\ref{thm:multi} provides a reduction from finding approximate equilibria in our new class of ``Multi-Lipschitz'' games to finding approximate equilibria of Lipschitz games.\n \\item In Section \\ref{subsec:det}, Theorem \\ref{thm:det} and Proposition \\ref{prop:det} provide a complete dichotomy of the query complexity of deterministic algorithms finding $\\epsilon$-approximate correlated equilibria of $n$-player, $m$-action games. Corollary \\ref{cor:det} scales the lower bound to apply to Lipschitz games, and motivates the consideration of explicitly randomized algorithms for the above results.\n\\end{itemize}\n\nThese results also use the following simple lemma (which holds for all types of queries and equilibria mentioned in Section \\ref{sec:prelim}).\n\n\\begin{lemma}\\label{lem:scale}\n For any constants $\\lambda'<\\lambda\\leq1,\\epsilon>0$, there is a query-free reduction from finding $\\epsilon$-approximate equilibria of games in $\\G nm\\lambda$ to finding $\\frac{\\lambda'}{\\lambda}\\epsilon$-approximate equilibria of games in $\\G nm{\\lambda'}$.\n\\end{lemma}\n\nIn other words, query complexity upper bounds hold as $\\lambda$ and $\\epsilon$ are scaled up together, and query complexity lower bounds hold as they are scaled down. The proof is simple: the reduction multiplies every payoff by $\\frac{\\lambda'}{\\lambda}$ (making no additional queries) and outputs the result. 
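This scaling reduction can be sketched as a query-free wrapper around the payoff oracle (a minimal sketch; the oracle name \texttt{profile\_query} is our own assumption):

```python
def scale_game(profile_query, lam_prime, lam):
    """Wrap the payoff oracle of a lambda-Lipschitz game to obtain an oracle
    for the scaled game, whose payoffs are multiplied by lam_prime/lam and
    which is therefore lambda'-Lipschitz.  Since scaling all payoffs scales
    every player's regret by the same factor, a (lam_prime/lam)*eps-approximate
    equilibrium of the scaled game is an eps-approximate equilibrium of the
    original game.  Each query to the scaled game costs exactly one query
    to the original, so no extra queries are introduced.
    """
    factor = lam_prime / lam
    def scaled_query(a):
        return [factor * u for u in profile_query(a)]
    return scaled_query
```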
Note that the lemma does not hold for $\\lambda'>\\lambda$, as the reduction could introduce payoffs that are larger than $1$.\n\n\\subsection{Hardness of Approximate Pure Equilibria}\\label{subsec:pure}\n\nIn this section we will rely heavily on the following result of Babichenko.\n\n\\begin{theorem}[\\cite{Bab16}]\\label{thm:Bab16}\n There is a constant $\\epsilon_0>0$ such that, for any $\\beta=2^{-o(n)}$, the randomized $\\delta$-distribution query complexity of finding an $\\epsilon_0$-WSNE of $n$-player binary-action games with probability at least $\\beta$ is $\\delta^22^{\\Omega(n)}$.\n\\end{theorem}\n\nFor the remainder of this work, the symbol $\\epsilon_0$ refers to this specific constant. A simple application of Lemma \\ref{lem:scale} yields\n\n\\begin{corollary}\\label{cor:Bab16}\n There is a constant $\\epsilon_0>0$ such that, for any $\\beta=2^{-o(n)}$, the randomized $\\delta$-distribution query complexity of finding an $\\epsilon_0\\lambda$-WSNE of games in $\\G n2\\lambda$ with probability at least $\\beta$ is $\\delta^22^{\\Omega(n)}$.\n\\end{corollary}\n\nWe are now ready to state our main result -- an exponential lower bound on the randomized query complexity of finding $\\epsilon$-PNEs of $\\lambda$-Lipschitz games.\n\n\\begin{theorem}[Main Result]\\label{thm:pure}\n There exists some constant $\\epsilon_0$ such that, for any $n\\in\\mathbb{N},\\epsilon<\\epsilon_0,\\lambda\\leq\\frac{\\epsilon}{\\sqrt{8n\\log4n}}$, while every game in $\\G n2\\lambda$ has an $\\epsilon$-PNE, any randomized algorithm finding such equilibria with probability at least $\\beta=1\/\\textnormal{poly}(n)$ must make $\\lambda^22^{\\Omega(n\\lambda\/\\epsilon)}$ profile queries.\n\\end{theorem}\n\nThe proof follows by contradiction. 
Assume such an algorithm $A$ exists making $\\lambda^22^{o(n\\lambda\/\\epsilon)}$ profile queries, convert it to an algorithm $B$ making $\\lambda^22^{o(n\\lambda\/\\epsilon)}$ $\\delta$-distribution queries, then use Lemma \\ref{lem:Nfactor} to derive an algorithm $C$ finding $\\epsilon_0\\lambda$-WSNE in $\\lambda$-Lipschitz games contradicting the lower bound of Corollary \\ref{cor:Bab16}.\n\n\\begin{proof}\nAssume that some such algorithm $A$ exists finding $\\epsilon$-PNEs of games in $\\G n2\\lambda$ making at most $\\lambda^22^{o(n\\lambda\/\\epsilon)}$ profile queries. Consider any $\\epsilon<\\epsilon_0,\\lambda'<\\frac{\\epsilon}{\\sqrt{8n\\log4n}}$, and define $\\lambda=\\frac{\\epsilon}{\\epsilon_0},L=\\frac{\\lambda}{\\lambda'},N=Ln$. We derive an algorithm $C$ (with an intermediate algorithm $B$) that contradicts Corollary \\ref{cor:Bab16}.\n\n\\begin{enumerate}[$A$]\n \\item Note that $A$ finds $\\frac{\\epsilon}{2}$-PNEs of games in $\\G N2{\\frac{3\\lambda'}{2}}$ with probability at least $\\beta$ making at most $\\lambda^{\\prime2}2^{o\\left(N\\lambda'\/\\epsilon\\right)}$ profile queries ($\\beta$ can be amplified to a constant).\n \\item Let $\\delta=\\frac{\\epsilon_0\\lambda'}{4}$. For any game $G'\\in\\G N2{\\lambda'}$, consider an algorithm making $\\delta$-distribution queries of \\emph{pure action profiles} of $G'$ (introducing the uncertainty without querying mixed strategies).\n\n\\begin{claim}[Appendix \\ref{app:G''}]\n\tThere is a game $G''\\in\\G N2{\\frac{3\\lambda'}{2}}$ that is consistent with all $\\delta$-distribution queries (i.e. $\\textbf{\\textup{u}}''(\\textbf{\\textup{a}})=\\tilde{\\textbf{\\textup{u}}}'(\\textbf{\\textup{a}})$ for all queried $\\textbf{\\textup{a}}$) in which no payoff differs from $G'$ by more than an additive $\\delta$. Furthermore, any $\\frac{\\epsilon}{2}$-PNE of $G''$ is an $\\epsilon$-PNE of $G'$. 
Figure \\ref{fig:dist} visually depicts this observation.\n\\end{claim}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=0.3]{deltadistvexact.jpeg}\n \\caption{Taking $G'$ to be the Coordination Game for fixed values of $\\epsilon$ and $\\delta$, the blue region shows the set of $\\epsilon$-approximate equilibria of $G'$ (the acceptable outputs of algorithm $B$) while the orange region shows the set of all $\\frac{\\epsilon}{2}$-approximate equilibria of any possible game $G''$ in which each payoff may be perturbed by at most $\\delta$ (the possible outputs of algorithm $B$).}\n \\label{fig:dist}\n\\end{figure}\n\nSo define the algorithm $B$ that takes input $G'$ and proceeds as though it is algorithm $A$ (but makes $\\delta$-distribution queries instead). By the claim above, after at most $\\lambda^{\\prime2}2^{o\\left(N\\lambda'\/\\epsilon\\right)}$ queries, it has found an $\\frac{\\epsilon}{2}$-PNE of some $G''\\in\\G N2{\\frac{3\\lambda'}{2}}$ consistent with the query answers it has received, which is also an $\\epsilon$-PNE of $G'$.\n\n \\item Consider any game $G\\in\\G n2\\lambda$, and let $G'=g_G(L)$ be the population game induced by $G$. There is an algorithm $C$ described by Lemma \\ref{lem:Nfactor} that takes input $G$ and simulates $B$ on $G'$ (making $2n\\cdot\\lambda^{\\prime2}2^{o\\left(N\\lambda'\/\\epsilon\\right)}=\\delta^22^{o\\left(n\\lambda\/\\epsilon\\right)}$ $\\delta$-distribution queries) and correctly outputs an $\\epsilon$-WSNE (i.e. 
an $\\epsilon_0\\lambda$-WSNE) of $G$ with constant probability (so certainly $2^{-o(n)}$).\n\\end{enumerate}\n\nThe existence of algorithm $C$ directly contradicts the result of Corollary \\ref{cor:Bab16}, proving that algorithm $A$ cannot exist.\\qed\n\\end{proof}\n\n\\begin{remark}\n Note that, if we instead start the proof with the assumption of such an algorithm $B$, we can also show a $\\delta^22^{o(n\\lambda\/\\epsilon)}$ lower bound for the $\\delta$-distribution query complexity of finding $\\epsilon$-PNEs of $\\lambda$-Lipschitz games.\n\\end{remark}\n\n\\subsection{Multi-Lipschitz Games}\\label{subsec:multi}\n\nIn this section, we consider a generalization of Lipschitz games in which each player $i\\in[n]$ has a ``player-specific'' Lipschitz value $\\lambda_i$ in the sense that, if player $i$ changes actions, the payoffs of all other players are changed by at most $\\lambda_i$.\n\n\\begin{definition}\n A $\\Lambda$-Multi-Lipschitz game is an $n$-player, $m$-action game $G$ in which each player $i\\in[n]$ is associated with a constant $\\lambda_i\\leq1$ such that $\\sum_{i'=1}^n\\lambda_{i'}=\\Lambda$\n and, for any player $i'\\neq i$ and action profiles $\\textbf{\\textup{a}}^{(1)},\\textbf{\\textup{a}}^{(2)}$ with $\\textbf{\\textup{a}}^{(1)}_{-i}=\\textbf{\\textup{a}}^{(2)}_{-i}$,\n $\\left|u_{i'}\\left(\\textbf{\\textup{a}}^{(1)}\\right)-u_{i'}\\left(\\textbf{\\textup{a}}^{(2)}\\right)\\right|\\leq\\lambda_i$.\n The class of such games is denoted $\\GL\\Lambda nm$, and for simplicity it is assumed that $\\lambda_1\\leq\\ldots\\leq\\lambda_n$.\n\\end{definition}\n\nThe consideration of this generalized type of game allows real-world situations to be more accurately modeled. Geopolitical circumstances, for example, naturally take the form of Multi-Lipschitz games, since individual countries have different limits on how much their actions can affect the rest of the world. 
Financial markets present another instance of such games; they not only consist of individual traders who have little impact on each other, but also include a number of institutions that might each have a much greater impact on the market as a whole. This consideration is further motivated by the recent GameStop frenzy; the institutions still wield immense power, but so do the aggregate actions of millions of individuals \\cite{PL21}.\n\nNotice that a $\\lambda$-Lipschitz game is a $\\Lambda$-Multi-Lipschitz game, for $\\Lambda=n\\lambda$. Any algorithm that finds $\\epsilon$-ANEs of $\\Lambda$-Multi-Lipschitz games is also applicable to $\\Lambda\/n$-Lipschitz games. Theorem \\ref{thm:multi} shows a kind of converse for query complexity, reducing from finding $\\epsilon$-ANE of $\\Lambda$-Multi-Lipschitz games to finding $\\epsilon$-ANE of $\\lambda$-Lipschitz games, for $\\lambda$ a constant multiple of $\\Lambda\/n$.\n\n\\begin{theorem}\\label{thm:multi}\n There is a reduction from computing $\\epsilon$-ANEs of games in $\\GL\\Lambda n2$ with probability at least $1-\\eta$ to computing $\\frac{\\epsilon}{2}$-ANEs of games in $\\G{2n}2{\\frac{3\\Lambda}{2n}}$ with probability at least $1-\\frac{\\eta}{2}$ introducing at most a multiplicative $\\textnormal{poly}(n,\\frac{1}{\\epsilon},\\log\\frac{1}{\\eta})$ query blowup.\n\\end{theorem}\n\nAs we now consider $\\epsilon$-ANEs, existence is no longer a question: such equilibria are \\emph{always} guaranteed to exist by Nash's Theorem \\cite{N51}. This proof will also utilize a more general population game $G'=g_G(L_1,\\ldots,L_n)$ in which player $i$ is replaced by a population of size $L_i$ (where the $L_i$ may differ from each other), and the queries in Lemma \\ref{lem:Nfactor} become $(\\delta,\\min_{i\\in[n]}\\{\\gamma\/L_i\\})$-distribution queries (this will now be relevant, as we need to apply Corollary \\ref{cor:GR16}). 
Otherwise, the proof follows along the same lines as that of Theorem \\ref{thm:pure}.\n\n\\begin{proof}\n Consider some $\\epsilon>0$ and a game $G\\in\\GL\\Lambda n2$ (WLOG take $\\lambda_1\\leq\\ldots\\leq\\lambda_n$). First, if $\\Lambda<\\frac{\\epsilon}{n}$, finding an $\\epsilon$-ANE is trivial (each player can play their best response to the uniform mixed strategy, found in $2n$ queries). So assume $\\Lambda\\geq\\frac{\\epsilon}{n}$. Define $L_i=\\max\\{\\frac{n\\lambda_i}{\\Lambda},1\\}$ and, taking $i'=\\max_{i\\in[n]}\\{i:L_i=1\\}$, note that\n \\[\\sum_{i=1}^nL_i=\\sum_{i=1}^{i'}1+\\sum_{i=i'+1}^n\\frac{n\\lambda_i}{\\Lambda}=\\sum_{i=1}^{i'}1+\\frac{n}{\\Lambda}\\sum_{i=i'+1}^n\\lambda_i\\leq\\sum_{i=1}^{i'}1+\\frac{n}{\\Lambda}\\Lambda\\leq2n.\\]\n Thus the population game $G'=g_G(L_1,\\ldots,L_n)\\in\\G{2n}2{\\frac{\\Lambda}{n}}$.\n \\begin{enumerate}[$A$]\n \\item Consider an algorithm $A$ that finds $\\frac{\\epsilon}{2}$-ANEs of games in $\\G{2n}2{\\frac{3\\Lambda}{2n}}$, with probability at least $1-\\frac{\\eta}{2}$ making $q$ profile queries.\n \\item Taking $\\delta=\\frac{\\epsilon^2}{4n^2}\\leq\\frac{\\epsilon\\Lambda}{4n}$, the algorithm $B$ from the proof of Theorem \\ref{thm:pure} that simulates $A$ but makes $(\\delta,1)$-distribution queries finds an $\\epsilon$-ANE of $G'$ (the proof in Appendix \\ref{app:G''} also holds for these parameters with this choice of $\\delta$).\n \\item By Lemma \\ref{lem:Nfactor}, there is an algorithm $C$ on input $G\\in\\GL\\Lambda n2$ that simulates $B$ (replacing each $(\\delta,1)$-distribution query of $G'$ with $2n$ $(\\delta,\\frac{1}{n})$-distribution queries of $G$ since $\\frac{1}{L_n}\\geq\\frac{1}{n}$) finding an $\\epsilon$-ANE with probability at least $1-\\eta$.\n \\end{enumerate}\n Applying Corollary \\ref{cor:GR16} (using $\\delta=\\frac{\\epsilon^2}{4n^2},\\gamma=\\frac{1}{n}$) to create a profile-query algorithm from $C$ completes the proof.\\qed\n\\end{proof}\n\nAs an example application of 
Theorem \\ref{thm:multi}, an algorithm of \\cite{GCW19} finds $\\left(\\frac{1}{8}+\\alpha\\right)$-approximate Nash equilibria of games in $\\G n2{\\frac{1}{n}}$; Theorem \\ref{thm:GCW19} states that result in detail, and Corollary \\ref{cor:GCW} extends it to Multi-Lipschitz games.\n\n\\begin{theorem}[\\cite{GCW19}]\\label{thm:GCW19}\n Given constants $\\alpha,\\eta>0$, there is a randomized algorithm that, with probability at least $1-\\eta$, finds $\\left(\\frac{1}{8}+\\alpha\\right)$-approximate Nash equilibria of games in $\\G n2{\\frac{1}{n}}$ making $O\\left(\\frac{1}{\\alpha^4}\\log\\left(\\frac{n}{\\alpha\\eta}\\right)\\right)$ profile queries. \n\\end{theorem}\n\nWe now have some ability to apply this to Multi-Lipschitz games; if $1\\leq\\Lambda<4$ we can improve upon the trivial $\\frac{1}{2}$-approximate equilibrium of Proposition \\ref{prop:det}.\n\n\\begin{corollary}\\label{cor:GCW}\n For $\\alpha,\\eta>0,\\Lambda\\geq1,\\epsilon\\geq\\frac{\\Lambda}{8}+\\alpha$, there is an algorithm finding $\\epsilon$-ANEs of games in $\\GL\\Lambda n2$ with probability at least $1-\\eta$ making at most $\\textnormal{poly}(n,\\frac{1}{\\alpha},\\log\\frac{1}{\\eta})$\n profile queries.\n\\end{corollary}\n\n\\begin{remark}\n This is actually a slight improvement over just combining Theorems \\ref{thm:multi} and \\ref{thm:GCW19}, since the choice of $\\delta$ can be made slightly smaller to shrink $\\alpha$ as necessary.\n\\end{remark}\n\n\\subsection{A Deterministic Lower Bound}\\label{subsec:det}\n\nWe complete this work by generalizing the following result of Hart and Nisan.\n\n\\begin{theorem}[\\cite{HN18}]\\label{thm:HN18}\n For any $\\epsilon<\\frac{1}{2}$, the deterministic profile query complexity of finding $\\epsilon$-ACEs of $n$-player games is $2^{\\Omega(n)}$.\n\\end{theorem}\n\nWhile the proof of Theorem \\ref{thm:HN18} utilizes a reduction from $\\texttt{ApproximateSink}$, we employ a more streamlined approach, presenting an explicit family of ``hard'' 
games that allows us to uncover the optimal value of $\\epsilon$ as a function of the number of actions:\n\n\\begin{theorem}\\label{thm:det}\n Given some $m\\in\\mathbb{N}$, for any $\\epsilon<\\frac{m-1}{m}$, the deterministic profile query complexity of finding $\\epsilon$-ACEs of $n$-player, $m$-action games is $2^{\\Omega(n)}$.\n\\end{theorem}\n\nFurthermore, this value of $\\epsilon$ cannot be improved:\n\n\\begin{proposition}\\label{prop:det}\n Given some $n,m\\in\\mathbb{N}$, for any $\\epsilon\\geq\\frac{m-1}{m}$, an $\\epsilon$-ANE of an $n$-player, $m$-action game can be found making no profile queries.\n\\end{proposition}\n\nThe upper bound of Proposition \\ref{prop:det} can be met if every player plays the uniform mixed strategy over their actions. Finally, we can apply Lemma \\ref{lem:scale} to scale Theorem \\ref{thm:det} and obtain our intended result:\n\n\\begin{corollary}\\label{cor:det}\n Given some $m\\in\\mathbb{N},\\lambda\\in(0,1]$, for any $\\epsilon<\\frac{m-1}{m}\\lambda$, the deterministic profile query complexity of finding $\\epsilon$-ACEs of $n$-player, $m$-action, $\\lambda$-Lipschitz games is $2^{\\Omega(n)}$.\n\\end{corollary}\n\nIn order to prove these results, we introduce a family of games $\\{G_{k,m}\\}$. For any $k,m\\in\\mathbb{N}$, $G_{k,m}$ is a $2k$-player, $m$-action generalization of $k$ Matching Pennies games in which every odd player $i$ wants to match the even player $i+1$ and every even player $i+1$ wants to mismatch with the odd player $i$.\n\n\\begin{definition}\\label{def:gk}\n Define $G_{1,2}$ to be the generalized Matching Pennies game, as described in Figure \\ref{fig:MP}(a). 
Define the generalization $G_{k,m}$ to be the $2k$-player $m$-action game such that, for any $i\\in[k]$, player $2i-1$ has a payoff $1$ for matching player $2i$ and $0$ otherwise (and vice versa for player $2i$) ignoring all other players.\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\begin{tabular}{cc|c|c|}\n & \\multicolumn{1}{c}{} & \\multicolumn{2}{c}{Player $2$} \\\\\n & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{$1$} & \\multicolumn{1}{c}{$2$} \\\\\\cline{3-4}\n \\multirow{2}*{Player $1$} & $1$ & $1,0$ & $0,1$ \\\\\\cline{3-4}\n & $2$ & $0,1$ & $1,0$ \\\\\\cline{3-4}\n \\end{tabular}\n \\caption{The payoff matrix of $G_{1,2}$, the Matching Pennies game.}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.54\\textwidth}\n \\centering\n \\begin{tabular}{cc|c|c|c|}\n & \\multicolumn{1}{c}{} & \\multicolumn{3}{c}{Player 2} \\\\\n & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{$1$} & \\multicolumn{1}{c}{$2$} &\\multicolumn{1}{c}{$3$} \\\\\\cline{3-5}\n \\multirow{3}*{Player 1} & $1$ & $1,0$ & $0,1$ & $0,1$ \\\\\\cline{3-5}\n & $2$ & $0,1$ & $1,0$ & $0,1$ \\\\\\cline{3-5}\n & $3$ & $0,1$ & $0,1$ & $1,0$ \\\\\\cline{3-5}\n \\end{tabular}\n \\caption{The payoff matrix of $G_{1,3}$, the generalized Matching Pennies game.}\n \\end{subfigure}\n \\caption{The payoff matrices of $G_{1,2}$ and $G_{1,3}$.}\n \\label{fig:MP}\n\\end{figure}\n\\end{definition}\n\nThe critical property of the generalized Matching Pennies game is that we can bound the probability that any given action profile is played in any $\\epsilon$-ACE of $G_{k,m}$. If too much probability is jointly placed on matching actions, player $2$ will have high regret. Conversely, if too much probability is jointly placed on mismatched actions, player $1$ will have high regret.\n\n\\begin{lemma}\\label{lem:game}\n For any $k,m\\in\\mathbb{N},\\alpha>0$, take $\\epsilon=\\frac{m-1}{m}-\\alpha$. 
In any $\\epsilon$-ACE $\\textbf{\\textup{X}}^*$ of $G_{k,m}$, every action profile $\\textbf{\\textup{a}}'\\in[m]^n$ satisfies $\\Pr_{\\textbf{\\textup{a}}\\sim\\textbf{\\textup{X}}^*}(\\textbf{\\textup{a}}=\\textbf{\\textup{a}}')<\\rho^{\\frac{n}{2}}$ where\n \\[\\rho=\\frac{(2-\\alpha)m-1}{2m}.\\]\n\\end{lemma}\n\nThis phenomenon can be seen in Figure \\ref{fig:cregion}.\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\centering\n \\includegraphics[scale=0.3]{alphac13.jpeg}\n \\caption{$\\alpha=\\frac{1}{3}$}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\centering\n \\includegraphics[scale=0.3]{alphac16.jpeg}\n \\caption{$\\alpha=\\frac{1}{6}$}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.3\\textwidth}\n \\centering\n \\includegraphics[scale=0.3]{alphac0.jpeg}\n \\caption{$\\alpha\\rightarrow0$}\n \\end{subfigure}\n \\caption{The region of possible values for $(\\Pr_{\\textbf{\\textup{a}}\\sim\\textbf{\\textup{X}}^*}(a_1=1,a_2=1),\\Pr_{\\textbf{\\textup{a}}\\sim\\textbf{\\textup{X}}^*}(a_1=2,a_2=2))$ in any $\\left(\\frac{1}{2}-\\alpha\\right)$-approximate correlated equilibrium of $G_{k,2}$. The only exact correlated equilibrium is shown by the red point, and the corresponding values of $\\rho$ are displayed as the orange lines.}\n \\label{fig:cregion}\n\\end{figure}\n\n\\begin{proof}\n Define $n=2k$ to be the number of players in $G_{k,m}$, and consider some $\\epsilon$-ACE $\\textbf{\\textup{X}}^*$. Now WLOG consider players $1$ and $2$ and assume, for the sake of contradiction, that there exist some actions $j_1,j_2$ such that $\\Pr_{\\textbf{\\textup{a}}\\sim\\textbf{\\textup{X}}^*}(a_1=j_1,a_2=j_2)>\\rho$. We will need to consider the two cases $j_1=j_2$ and $j_1\\neq j_2$.\n \\paragraph{Matching Actions} In this case, WLOG assume $j_1=j_2=1$. We show that player $2$ can improve her payoff by more than $\\epsilon$. 
Under $\\textbf{\\textup{X}}^*$, with probability $>\\rho$, any random realization $\\textbf{\\textup{a}}\\sim\\textbf{\\textup{X}}^*$ will yield player $2$ a payoff of $0$. In other words, $u_2(\\textbf{\\textup{X}}^*)<1-\\rho$. Furthermore, considering the marginal distribution over player $1$'s action, we are guaranteed that\n \\[\\sum_{j=2}^m\\Pr_{\\textbf{\\textup{a}}\\sim\\textbf{\\textup{X}}^*}(a_1=j)<1-\\rho\\]\n so there must exist some action (WLOG action $2$) for which $\\Pr_{\\textbf{\\textup{a}}\\sim\\textbf{\\textup{X}}^*}(a_1=2)<\\frac{1-\\rho}{m-1}$. As such, define $\\phi(j)=2$. Then $u_2^{(\\phi)}(\\textbf{\\textup{X}}^*)>1-\\frac{1-\\rho}{m-1}$, so\n \\[\\textnormal{reg}_2^{(\\phi)}(\\textbf{\\textup{X}}^*)>\\underbrace{\\left(1-\\frac{1-\\rho}{m-1}\\right)}_{u_2^{(\\phi)}(\\textbf{\\textup{X}}^*)}-\\underbrace{(1-\\rho)}_{u_2(\\textbf{\\textup{X}}^*)}=\\frac{\\rho m-1}{m-1}\\geq\\frac{m-1}{m}-\\alpha\\]\n for all $m\\geq2$. This contradicts our assumption of an $\\epsilon$-ACE.\n \\paragraph{Mismatched Actions}\n In this case, WLOG assume $j_1=1,j_2=2$. The situation is simpler: taking $\\phi(j)=2$, we have $u_1(\\textbf{\\textup{X}}^*)<1-\\rho$ and $u_1^{(\\phi)}(\\textbf{\\textup{X}}^*)>\\rho$, so $\\textnormal{reg}_1^{(\\phi)}(\\textbf{\\textup{X}}^*)>\\frac{m-1}{m}-\\alpha$. This too contradicts our assumption of an $\\epsilon$-ACE, and thus completes the proof of Lemma \\ref{lem:game}.\\qed\n\\end{proof}\n\nWe can now prove Theorem \\ref{thm:det}. The general idea is that, should an efficient algorithm exist, then because any equilibrium of $G_{k,m}$ must have large support by Lemma \\ref{lem:game}, significant probability is assigned to action profiles that are not queried by the algorithm. We show there is a game that the algorithm cannot distinguish from $G_{k,m}$ that shares no approximate equilibria with $G_{k,m}$.\n\n\\begin{proof}[Theorem \\ref{thm:det}]\n Consider any $\\alpha>0$ and let $\\epsilon=\\frac{m-1}{m}-\\alpha$. 
Taking $\\rho$ as in the statement of Lemma \\ref{lem:game}, assume there exists some deterministic algorithm $A$ that takes an $n$-player, $m$-action game $G$ as input and finds an $\\epsilon$-ACE of $G$ querying the payoffs of $q<\\frac{\\alpha}{2}\\rho^{-\\frac{n}{2}}$ action profiles. Fix some $k\\in\\mathbb{N}$ and consider input $G_{k,m}$ as defined in Definition \\ref{def:gk}. Then $\\textbf{\\textup{X}}^*=A\\left(G_{k,m}\\right)$ is an $\\epsilon$-ACE of $G_{k,m}$. Note that, for some $j$, $\\Pr_{\\textbf{\\textup{a}}\\sim\\textbf{\\textup{X}}^*}(a_1=j)\\leq\\frac{1}{m}$ (WLOG assume $j=1$).\n \n Now define the perturbation $G'_{k,m}$ of $G_{k,m}$ with payoffs defined to be equal to $G_{k,m}$ for every action profile queried by $A$, $1$ for every remaining action profile in which player $1$ plays action $1$ (chosen because it is assigned low probability by $\\textbf{\\textup{X}}^*$ by assumption), and $0$ otherwise. Note that, by definition, $A$ cannot distinguish between $G_{k,m}$ and $G'_{k,m}$, so $A(G'_{k,m})=\\textbf{\\textup{X}}^*$.\n \n Taking the function $\\phi(j)=1$, the quantity we need to bound is $\\textnormal{reg}_i^{\\prime(\\phi)}(\\textbf{\\textup{X}}^*)\\geq u_i^{\\prime(\\phi)}(\\textbf{\\textup{X}}^*)-u'_i(\\textbf{\\textup{X}}^*)$.\n We must bound the components of this expression as follows:\n\\begin{claim}[Appendix \\ref{app:Gkm}]\n$u^{\\prime(\\phi)}_1(\\textbf{\\textup{X}}^*)>(1-q\\rho^{\\frac{n}{2}})$ and $u'_1(\\textbf{\\textup{X}}^*)<\\left(\\frac{1}{m}+q\\rho^{\\frac{n}{2}}\\right)$.\n\\end{claim}\n Using the claim and once again recalling the assumption that $q<\\frac{\\alpha}{2}\\rho^{-\\frac{n}{2}}$, we see\n \\[\\textnormal{reg}^{\\prime(\\phi)}_1(\\textbf{\\textup{X}}^*)>\\left(1-\\frac{1}{m}-2q\\rho^{\\frac{n}{2}}\\right)>\\frac{m-1}{m}-\\alpha=\\epsilon.\\]\n So $\\textbf{\\textup{X}}^*$ cannot actually be an $\\epsilon$-ACE of $G'_{k,m}$, contradicting the correctness of $A$. 
This completes the proof of Theorem \\ref{thm:det}.\\qed\n\\end{proof}\n\n\\section{Further Directions}\n\nAn important additional question is the query complexity of finding $\\epsilon$-PNEs of $n$-player, $\\lambda$-Lipschitz games in which $\\epsilon=\\Omega(n\\lambda)$. Theorem \\ref{thm:pure} says nothing in this parameter range, yet Theorem \\ref{thm:GCW19} provides a logarithmic upper bound in this regime. The tightness of this bound is of continuing interest. Furthermore, the query- and computationally-efficient reduction discussed in Lemma \\ref{lem:Nfactor} provides a hopeful avenue for further results bounding the query and computational complexities of finding equilibria in many other classes of games.\n\n\\paragraph{Acknowledgements}{We thank Francisco Marmalejo Coss\\'{i}o and Rahul Santhanam for their support in the development of this work, and the reviewers of an earlier version for helpful comments. Matthew Katzman was supported by an Oxford-DeepMind Studentship for this work.}\n\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nIn biomolecular simulations, electrostatic interactions are of paramount importance \ndue to their ubiquitous existence and significant contribution to the force fields \nthat govern the dynamics of molecular simulation. \nHowever, computing non-bonded interactions is challenging \nsince these pairwise interactions are long-range with $O(N^2)$ computational cost, \nwhich could be prohibitively expensive for large systems. \nTo reduce the degrees of freedom of the system in terms of electrostatic interactions, \nan implicit solvent Poisson-Boltzmann (PB) model is used \\cite{Baker:2005}. \nIn this model, the explicit water molecules are treated as a continuum \nand the dissolved electrolytes are approximated using the statistical Boltzmann distribution. 
\nThe PB model has broad applications in biomolecular simulations such as \nprotein structure \\cite{Cherezov:2007}, \nprotein-protein interaction \\cite{Huang:2012}, \nchromatin packing \\cite{Beard:2001}, \npKa \\cite{Alexov:2011,Hu:2018,Chen:2021a}, \nmembrane \\cite{Zhou:2010}, \nbinding energy \\cite{Nguyen:2017}, \nsolvation free energy \\cite{Wagoner:2006}, \nion channel profiling \\cite{Unwin:2005}, etc.\n\nThe PB model is an elliptic interface problem \nwith several numerical difficulties \nsuch as \ndiscontinuous dielectric coefficients, \nsingular sources, \na complex interface, \nand unbounded domains. \nGrid-based finite difference or finite volume discretizations that discretize the entire\nvolumetric domain have been developed in, e.g.,\n\\cite{\nBaker:2004, \nIm:1998, \nHonig:1995,\nBaker:2001,\nLuo:2002,\nDeng:2015,\nYing:2015}.\nGrid-based discretizations are efficient and robust and are \ntherefore popular. \nHowever, solvers that are based on discretizing the partial\ndifferential equation may suffer from accuracy reduction due to \ndiscontinuity of the coefficients, \nnon-smoothness of the solution, \nsingularity of the sources, \nand truncation of the domains, \nunless special interface \\cite{Qiao:2006, Yu:2007} \nand singularity \\cite{Geng:2007, Cai:2009,Geng:2017, Lee:2020} treatments are applied. \nThese treatments come at the price of a more complicated discretization scheme \nand possibly reduced convergence speed of the iterative solver for the\nlinear system. 
\n\n\nAn alternative approach is to reformulate the PB equation as a\nboundary integral equation and use the boundary elements \nto discretize the molecular surface,\ne.g.~\\cite{\nJuffer:1991,\nBoschitsch:2002,\nLu:2007,\nGreengard:2009,\nBajaj:2011,\nZhang:2013,\nGeng:2013b,\nZhong:2018,\nQuan:2019}.\nBesides the reduction from\nthree dimensional space to the two dimensional molecular boundary,\nthis approach has the advantage that singular charges, interface\nconditions, and far-field condition are incorporated analytically in\nthe formulation, and hence do not impose additional approximation\nerrors.\n\nIn addition, due to the structures hidden in the linear algebraic system \nafter the discretization of the boundary integral and molecular surface, \nthe matrix-vector product in each iteration can be accelerated by fast methods \nsuch as fast multipole methods (FMM) \\cite{Greengard:2002,Tausch:2004,\nBoschitsch:2002,Lu:2007,Bajaj:2011}\nand\ntreecodes~\\cite{Barnes:1986, Li:2009, Wang:2020a}.\nOur recently developed treecode-accelerated boundary integral (TABI) Poisson-Boltzmann solver \n\\cite{Geng:2013b} is an example of a code that combines the advantages of \nboth boundary integral equation and multipole methods. \nThe TABI solver uses the well-posed derivative form of the Fredholm second kind integral equation \\cite{Juffer:1991}\nand the $O(N\\log N)$ treecode \\cite{Li:2009}. \nIt also has advantages in memory use and parallelization\n\\cite{Geng:2013b, Chen:2021}. 
The TABI solver has been\nused by many computational biophysics\/biochemistry groups and it has\nbeen disseminated as a standalone code \\cite{Geng:2013b} or as a contributing module of the popular APBS software package \\cite{Baker:2001a,Jurrus:2018}.\n\nRecently, based on feedback from TABI solver users \nand the experience we gained in theoretical development and practical applications, \nwe realized that we could still improve the TABI solver in the following aspects.\nFirst, the $O(N \\log N)$ treecode can be replaced by the $O(N)$ FMM \nwith manageable extra costs in memory usage and complexity of the algorithms. \nSecond, the singularity that occurs \nwhen the Poisson or Yukawa kernel is evaluated, \nwhich was previously handled by simply removing the singular triangle \\cite{Geng:2013b, Lu:2007}, \ncan in fact be treated analytically using the Duffy transformation \\cite{Duffy:1982}, \nachieving improved accuracy. \nThird, the collocation scheme used in the TABI solver \ncan be replaced by a Galerkin discretization, \nwith the further advantage of maintaining the desired accuracy. \nFourth, the treecode-based preconditioning scheme that was used\nin the TABI solver \\cite{Chen:2018a} can be similarly developed and used within the FMM framework, \nyielding a significant improvement in convergence and robustness. \nBy combining all these new features, we developed a Cartesian fast multipole method (FMM) \naccelerated Galerkin boundary integral (FAGBI) Poisson--Boltzmann\nsolver. In the remainder of this article, we provide more detail\nabout the theoretical background of the numerical algorithms related\nto the FAGBI solver. We conclude with a discussion of the numerical\nresults obtained with our implementation. \n\n\n\n\n\n \n\n \n\n\n\\section{Theory and Algorithms}\n\\label{theory}\n\nIn this section we briefly describe \nthe Poisson-Boltzmann (PB) implicit solvent model and review the\nboundary integral form of the PB equation and its\nGalerkin discretization. 
\nBased on this background information, \nwe then provide details of our recently developed FMM-accelerated Galerkin\nboundary integral (FAGBI) Poisson-Boltzmann solver, which involves\nthe boundary integral form, multipole expansion scheme, and a block diagonal preconditioning scheme. \n\n\n\\begin{figure}[htb]\n\\setlength{\\unitlength}{1cm}\n\\begin{picture}(5,1)\n\\put( 5.2, -3.2){\\Large $\\Omega_1$}\n\\put( 2.4, -0.9){\\Large $\\Omega_2$}\n\\put( 4.8, -5.2){\\Large $\\Gamma$}\n\\end{picture}\n\\begin{center}\n\\includegraphics[width=2.5in]{figures\/pbe1layer}~~~~~\n\\includegraphics[width=2.5in]{figures\/1b2s_barstar_d5}\\\\\n\\hskip -0.7in (a) \\hskip 3in (b)\n\\caption{\nSchematic models;\n(a) the PB implicit solvent model, in which\nthe molecular surface $\\Gamma$ separates space into the solute region $\\Omega_1$ \nand solvent region $\\Omega_2$; \n(b) the triangulation of molecular surface of protein Barstar at MSMS \\cite{Sanner:1996} density $d=5$ (\\# of vertices per \\AA$^2$). \n}\n\\label{fig_1}\n\\end{center}\n\\end{figure}\n\n\n\\subsection{The Poisson-Boltzmann (PB) model for a solvated biomolecule}\n\nThe PB model for a solvated biomolecule is depicted in Fig.~\\ref{fig_1}(a) in which\nthe molecular surface $\\Gamma$ \nseparates the solute domain $\\Omega_1$ from the solvent domain $\\Omega_2$.\nFigure~\\ref{fig_1}(b) is an example of the molecular surface $\\Gamma$ \nas the triangulated surface of protein barstar \\cite{Dong:2003}. \nIn domain $\\Omega_1$, \nthe solute is represented by $N_c$ partial charges $q_k$ located at atomic centers ${\\bf r}_k$ for $k=1,\\cdots,N_c$,\nwhile in domain $\\Omega_2$,\na distribution of ions is described by a Boltzmann distribution \nand we consider a linearized version in this study. 
\nThe solute domain has a low dielectric constant $\\epsilon_1$\nand\nthe solvent domain has a high dielectric constant $\\epsilon_2$.\nThe modified inverse Debye length $\\bar{\\kappa}$ is given as $\\bar{\\kappa}^2 = \\epsilon_2\\kappa^2$,\nwhere $\\kappa$ is the inverse Debye length measuring the ionic strength;\n$\\bar{\\kappa} = 0$ in $\\Omega_1$ and is nonzero only in $\\Omega_2$. \nThe electrostatic potential $\\phi({\\bf x})$ satisfies the linear PB equation,\n\\begin{equation}\n-\\nabla \\cdot \\epsilon({\\bf x}) \\nabla \\phi({\\bf x}) + \\bar{\\kappa}^2({\\bf x})\\phi({\\bf x}) =\n\\sum_{k=1}^{N_c} q_k \\delta({\\bf x}-{\\bf x}_k),\n\\label{eqNPBE}\n\\end{equation}\nsubject to continuity conditions for the potential and electric flux density on $\\Gamma$,\n\\begin{equation}\n[\\phi] = 0, \\quad [\\epsilon \\phi_\\nu] = 0,\n\\label{eqJump}\n\\end{equation}\nwhere \n$[f] = f_1 - f_2$ is the difference of the quantity $f$ across the interface,\nand\n$\\phi_\\nu = \\partial\\phi\/\\partial{\\nu}$ is the partial derivative in the outward normal direction $\\nu$.\nThe model also incorporates the far-field condition,\n\\begin{equation}\n\\label{eqfar-field}\n\\lim_{{\\bf x} \\rightarrow \\infty} \\phi({\\bf x }) = 0.\n\\end{equation}\nNote that Eqs.~(\\ref{eqNPBE})-(\\ref{eqfar-field}) \ndefine a boundary value problem for the potential $\\phi({\\bf x})$\nwhich in general must be solved numerically.\n\n\n\n\n\n\n\n\n\n\n\\subsection{Boundary integral form of PB model}\n\nThis section summarizes the well-conditioned boundary integral form of the \nPB implicit solvent model we employ~\\cite{Juffer:1991,Geng:2013b}.\nApplying Green's second identity and properties of fundamental solutions \nto Eq.~(\\ref{eqNPBE}) yields the electrostatic potential in each domain,\n\\begin{subequations}\n\\begin{align}\n\\label{eqbim_1}\n\\phi({\\bf x}) = \n&\\int_\\Gamma \\left[G_0({\\bf x},\n{\\bf y})\\frac{\\partial\\phi({\\bf y})}{\\partial{\\nu}} -\\frac{\\partial G_0({\\bf 
x}, {\\bf y})}{\\partial{\\nu}_{{\\bf y}}} \\phi({\\bf y})\n\\right]dS_{{\\bf y}} + \\sum_{k=1}^{N_{c}}q_k G_0({\\bf x}, {\\bf y}_k), \\quad {\\bf x} \\in \\Omega_1, \\\\\n\\label{eqbim_2}\n\\phi({\\bf x}) = \n&\\int_\\Gamma\\left[-G_\\kappa({\\bf x},\n{\\bf y})\\frac{\\partial\\phi({\\bf y})}{\\partial{\\nu}} + \\frac{\\partial G_\\kappa({\\bf x}, {\\bf y})}{\\partial{\\nu}_{{\\bf y}}}\n\\phi({\\bf y})\\right]dS_{{\\bf y}}, \\quad {\\bf x} \\in \\Omega_2,\n\\end{align}\n\\end{subequations}\nwhere $G_0({\\bf x},{\\bf y})$ and $G_\\kappa({\\bf x}, {\\bf y})$ are the \nCoulomb and screened Coulomb potentials,\n\\begin{equation}\nG_0({\\bf x}, {\\bf y}) = \\frac{1}{4\\pi|{\\bf x}-{\\bf y}|},\n\\quad\n\\label{eq_potential}\nG_\\kappa({\\bf x}, {\\bf y}) = \\frac{e^{-\\kappa|{\\bf x}-{\\bf y}|}}{4\\pi|{\\bf x}-{\\bf y}|},\n\\end{equation}\nand ${\\bf y}_k \\in \\Omega_1$ are the location of the atomic centers.\n\nApplying the interface conditions in Eq.~(\\ref{eqJump}) \nwith the differentiation of electrostatic potential in each domain\nyield \na set of boundary integral equations relating the\nsurface potential $\\phi_1$ (the subscript 1 denotes the inside domain)\nand\nits normal derivative $\\partial\\phi_1\/\\partial{\\nu}$ on $\\Gamma$, \\cite{Juffer:1991, Geng:2013b},\n\\begin{subequations}\n\\begin{align}\n\\label{eqbim_3}\n\\frac{1}{2}\\left(1+\\varepsilon\\right)\\phi_1({\\bf x}) & =\n\\int_\\Gamma \\left[K_1({\\bf x}, {\\bf y})\\frac{\\partial\\phi_1({\\bf y})}{\\partial{\\nu}} +\nK_2({\\bf x}, {\\bf y})\\phi_1({\\bf y})\\right]dS_{{\\bf y}}+S_{1}({\\bf x}),\n\\quad {\\bf x}\\in\\Gamma, \\\\\n\\label{eqbim_4}\n\\frac{1}{2}\\left(1+\\frac{1}{\\varepsilon}\\right)\\frac{\\partial\\phi_1({\\bf x})}{\\partial{\\nu}} & =\n\\int_\\Gamma \\left[K_3({\\bf x}, {\\bf y})\\frac{\\partial\\phi_1({\\bf y})}{\\partial{\\nu}} +\nK_4({\\bf x}, {\\bf y})\\phi_1({\\bf y})\\right]dS_{{\\bf y}}\n+S_{2}({\\bf x}),\n\\quad {\\bf x} \\in 
\\Gamma,\n\\end{align}\n\\end{subequations}\nwhere $\\varepsilon = \\varepsilon_2\/\\varepsilon_1$, \nand\nthe kernels $K_{1,2,3,4}$ and source terms $S_{1,2}$ are\n\\begin{subequations}\n\\begin{align} \n\\label{Eq_K1}\nK_1({\\bf x}, {\\bf y}) = \n&\\,{G_{0}({\\bf x},{\\bf y})}-{G_{\\kappa}({\\bf x},{\\bf y})},\n\\quad\nK_2({\\bf x}, {\\bf y}) = \\varepsilon\\frac{\\partial G_{\\kappa}({\\bf x},{\\bf y})}{\\partial\\nu_{{\\bf y}}}-\\frac{\\partial\nG_{0}({\\bf x},{\\bf y})}{\\partial\\nu_{{\\bf y}}}, \\\\\n\\label{Eq_K2}\nK_3({\\bf x}, {\\bf y}) = \n&\\,\\frac{\\partial G_{0}({\\bf x},{\\bf y})}{\\partial\\nu_{{\\bf x}}}-\\frac{1}{\\varepsilon}\\frac{\\partial G_{\\kappa}({\\bf x},{\\bf y})}{\\partial\\nu_{{\\bf x}}},\n\\quad\nK_4({\\bf x}, {\\bf y}) =\n\\frac{\\partial^2 G_\\kappa({\\bf x},{\\bf y})}{\\partial\\nu_{{\\bf x}}\\partial\\nu_{{\\bf y}}}-\\frac{\\partial^2G_{0}({\\bf x},{\\bf y})}{\\partial\\nu_{{\\bf x}}\\partial\\nu_{\\bf y}},\n\\end{align}\n\\end{subequations}\nand \n\\begin{equation}\nS_{1}({\\bf x}) = \\frac{1}{\\varepsilon_1}\\sum_{k=1}^{N_{c}}q_kG_{0}({\\bf x}, {\\bf y}_k),\n\\quad\nS_{2}({\\bf x}) = \\frac{1}{\\varepsilon_1}\\sum_{k=1}^{N_{c}}q_k\n\\frac{\\partial G_{0}({\\bf x},{\\bf y}_k)}{\\partial\\nu_{{\\bf x}}}.\n\\label{source_terms}\n\\end{equation}\nAs given in Eqs.~(\\ref{Eq_K1}-\\ref{Eq_K2}) and (\\ref{source_terms}), the kernels $K_{1,2,3,4}$ and source terms $S_{1,2}$ are linear combinations of $G_0$, $G_k$, \nand their first and second order normal\nderivatives~\\cite{Juffer:1991,Geng:2013b}. Since the Coulomb potential\nis singular, the kernels have the following behavior\n\\begin{equation*}\n K_1({\\bf x}, {\\bf y}) = O(1),\\;\n K_{2,3,4}({\\bf x}, {\\bf y}) = O\\left(\\frac{1}{|{\\bf x}-{\\bf y}|}\\right),\\;\n \n \n\\end{equation*}\nas $ {\\bf y} \\to {\\bf x}$. 
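As a quick numerical illustration of this behavior, the following Python sketch evaluates $K_1 = G_0 - G_\kappa$ near the singularity and shows that the difference stays bounded (approaching $\kappa/4\pi$ as $r \to 0$) even though $G_0$ itself diverges like $1/r$; the value $\kappa = 0.5$ is an arbitrary choice for illustration only.

```python
import math

def G0(r):
    """Coulomb kernel 1/(4*pi*r), singular as r -> 0."""
    return 1.0 / (4.0 * math.pi * r)

def Gk(r, kappa):
    """Screened Coulomb kernel exp(-kappa*r)/(4*pi*r)."""
    return math.exp(-kappa * r) / (4.0 * math.pi * r)

def K1(r, kappa):
    """Difference kernel G0 - Gkappa = (1 - exp(-kappa*r))/(4*pi*r),
    which remains bounded and tends to kappa/(4*pi) as r -> 0."""
    return G0(r) - Gk(r, kappa)

kappa = 0.5  # illustrative value, not a parameter from the text
for r in (1e-2, 1e-4, 1e-6):
    print(r, G0(r), K1(r, kappa))
print("limit kappa/(4*pi) =", kappa / (4.0 * math.pi))
```

Printing the values for decreasing $r$ shows $G_0$ growing without bound while $K_1$ settles at $\kappa/4\pi \approx 0.0398$, which is why only $K_{2,3,4}$ require special singular quadrature.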
\n\n\n\n\n\n\nAfter the potentials $\\phi_1$ and $\\partial \\phi_1\/\\partial \\nu$ have\nbeen found by solving the boundary integral equation, the electrostatic\nsolvation energy can be obtained by\n\\begin{equation}\nE_{\\rm sol} = \\frac{1}{2}\\sum_{k=1}^{N_c}q_k\\phi_{\\rm reac}({\\bf y}_k) =\n\\frac{1}{2}\\displaystyle \\sum\\limits_{k=1}^{N_c} q_k\n\\int_\\Gamma \\left[K_1({\\bf y}_k, {\\bf y})\\frac{\\partial\\phi_1({\\bf y})}\n{\\partial\\nu}+K_2({\\bf y}_k, {\\bf y})\\phi_1({\\bf y})\\right]dS_{{\\bf y}},\n\\label{solvation_energy}\n\\end{equation}\nwhere $\\phi_{\\rm reac}({\\bf x}) = \\phi_1({\\bf x})-S_1({\\bf x})$ is the\nreaction potential \\cite{Juffer:1991,Geng:2013b}.\n\n\n\\subsection{Galerkin Discretization}\n\nIn solving the boundary integral PB equation, \nboth the molecular surface and the solution function need to be discretized. \nThe molecular surface $\\Gamma$ is usually approximated by\na collection of triangles\n\\begin{equation}\n\\label{eqbemesh}\n\\Gamma_{N}=\\bigcup\\limits^{N}_{i=1} \\tau_{i},\n\\end{equation}\nwhere $N$ is number of elements \nand $\\tau_i$ for $i=1,\\dots,N$ is a planar triangular boundary element with mid-point $\\textbf{x}^c_i$.\nThis triangulation must be conforming, i.e., the intersection of two\ndifferent triangles is either empty, or a common vertex or edge.\nFortunately, surface generators are available, and\nour choice for our computations is MSMS~\\cite{Sanner:1996}, though\nother packages could also be used.\nHere, the resolution of the surface can be controlled by the parameter\n$d$ that controls the number of vertices per \\AA$^2$.\nFor example, Fig.~\\ref{fig_1} (b)\nshows the triangulated molecule surface\nof the protein barstar, which will bind another protein barnase to form a\nbiomolecular complex (PDB: 1b2s) \\cite{Dong:2003}.\n\n\nEach triangle $\\tau_i$ of $\\Gamma_N$ is the parametric image of the reference triangle $\\tau$\n\\begin{equation}\n\\label{eqreftri}\n\\tau = \\big\\{ 
\\eta=(\\eta_1, \\eta_2) \\in \\mathbb{R}^2 : 0 \\le \\eta_1 \\le 1 , 0 \\le \\eta_2 \\le \\eta_1 \\big\\}.\n\\end{equation}\nIf $\\textbf{u}_i$, $\\textbf{v}_i$ and $\\textbf{w}_i$ are the vertices\nthen the parameterization is given by\n\\begin{equation}\n\\label{eqparameter}\n\\textbf{x}(\\eta) = \\textbf{u}_i + \\eta_1(\\textbf{v}_i-\\textbf{u}_i) + \\eta_2(\\textbf{w}_i-\\textbf{u}_i) \\in \\tau_i \\quad \\text{for } \\eta=(\\eta_1,\\eta_2)\\in\\tau.\n\\end{equation}\nThe area of the element, the local mesh size, and the global mesh size of the boundary elements $\\tau_i$ are given as\n$A_i=\\frac{1}{2}|(\\textbf{v}_i-\\textbf{u}_i)\\times (\\textbf{w}_i-\\textbf{u}_i)|$, $h_i=\\sqrt{A_i}$, and \n$h=\\max\\limits_{1\\le i \\le N}h_i$.\n\nSince a function $f$ defined on $\\tau_i$ can be interpreted as a function $g(\\eta)$ with respect to the reference element $\\tau$,\n\\begin{equation}\n\\label{eqf2g}\nf(\\textbf{x}) = f(\\textbf{x}(\\eta)) = g(\\eta) \\quad \\text{for } \\eta\\in\\tau, \\quad {\\bf x}\\in\\tau_i.\n\\end{equation}\nwe can define a finite element space by functions on $\\Gamma_N$ whose\npullbacks to the reference triangle are polynomials in $\\eta$. The\nsimplest example is the space of piecewise constant functions, which\nare polynomials of order zero on each triangle, which will be denoted\nby $S^0_h(\\Gamma_N)$. Obviously, the dimension of this space is $N$\nand the basis is given by the box functions\n\\begin{equation}\n\\label{eqpiecewiseconstant}\n\\psi^0_i(\\bf{x}) =\\begin{cases}\n1 & \\text{if $\\bf{x}\\in\\tau_i$},\\\\\n0 & \\text{otherwise},\n\\end{cases}\n\\end{equation}\nwhere $i$ is an index of a triangle.\n\nThe next step up are piecewise linear\nfunctions. 
Since there are three independent linear functions on\n$\\tau$, namely,\n\\begin{equation}\n\\label{eqlinearfcn}\n\\psi^{1}_1(\\eta) = 1-\\eta_1, \\quad \\psi^{1}_2(\\eta) = \\eta_1-\\eta_2, \\quad \\psi^{1}_3(\\eta) = \\eta_2\n\\quad \\text{for }\\eta=(\\eta_1, \\eta_2)\\in\\tau.\n\\end{equation}\nthe dimension is $3N$. Usually, one works with the space of\ncontinuous linear functions, denoted by $S^1_h(\\Gamma_N)$. It is not hard to see that the dimension\nof this space is the number of vertices and that the basis is given by\n\\begin{equation}\n\\label{eqglinearcfcn}\n\\psi^c_{i}(\\textbf{x}) = \\begin{cases}\n1 & \\text{for $\\textbf{x}=\\textbf{v}_i$,}\\\\\n0 & \\text{for $\\textbf{x}=\\textbf{v}_j\\neq\\textbf{v}_i$,}\\\\\n\\text{piecewise linear}& \\text{elsewhere,}\n\\end{cases}\n\\end{equation}\nwhere ${v}_i$ is the $i$-th vertex. \n\nThe approximation powers of piecewise polynomial spaces are well\nknown, see, e.g., \\cite{Rjasanow:2006}. For a function\n$w\\in H^1_{pw}(\\Gamma_N)$ we denote by $w_h^0$ the $L_2$-orthogonal\nprojection of $w$ into the space of piecewise constant functions, then\n\\begin{equation}\n\\label{eqconstanterr}\n\\|w-w_h^0\\|_{L_2(\\Gamma_N)} \\le\nc\\left(\\sum_{i=1}^{N}h^2_i|w|^2_{H^1(\\tau_i)}\\right)^{\\frac{1}{2}}\n \\le c h |w|_{H^1_{pw}(\\Gamma_N)},\n\\end{equation}\nwhere $c$ is the upper bound of the mesh ratio $h_{\\max}\/h_{\\min}$,\n$h$ is the maximal diameter of a triangle\nand\n\\begin{equation}\n\\label{eqHpw}\n|w|_{H^1_{pw}(\\Gamma_N)} = \\Bigg(\\sum_{i=1}^{N}|w|^2_{H^1(\\tau_i)}\\Bigg)^{1\/2}.\n\\end{equation}\nThus, the constant piecewise basis function can give a convergence rate of maximum $O(h)$.\n\nLikewise, the error for the $L_2$-orthogonal projection $w_h^1$ of $w\\in\nH^2_{pw}(\\Gamma_N)$ into $S^1_h(\\Gamma_N)$ is\n\\begin{equation}\n\\label{lineerr}\n\\|w-w_h^1\\|_{L_2(\\Gamma_N)} \\le\nc\\left(\\sum_{i=1}^{N}h^2_i|w|^2_{H^2(\\tau_i)}\\right)^{\\frac{1}{2}}\n \\le c h^2 
|w|_{H^2_{pw}(\\Gamma_N)},\n\\end{equation}\n\n\nFinite element spaces with higher order polynomials could also be\nconsidered, however their practical value for surfaces with low\nregularity and complicated geometries is limited.\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThe Galerkin discretization is based on a variational formulation of\nintegral equations (\\ref{eqbim_3}) and (\\ref{eqbim_4}). That is,\ninstead of understanding the equations pointwise for ${\\bf x} \\in \\Gamma_N$\nthe equations are multiplied by test functions $\\psi$ and\n$\\psi^\\nu$ and integrated again over $\\Gamma_N$. Solving the\nvariational form amounts to finding $\\phi_1$ and\n$\\partial\\phi_1\/\\partial \\nu$ such that \n{\\small\n\\begin{subequations}\n\\begin{align}\n\\label{eqbim_5}\n\\int_{\\Gamma_N} \\bigg\\{ \\frac{1}{2}\\left(1+\\varepsilon\\right)\\phi_1({\\bf x}) & -\n\\int_{\\Gamma_N} \\left[K_1({\\bf x}, {\\bf y})\\frac{\\partial\\phi_1({\\bf y})} {\\partial{\\nu}} +\nK_2({\\bf x}, {\\bf y})\\phi_1({\\bf y})\\right]dS_{{\\bf y}} \\bigg\\} \\psi({\\bf x}) dS_{\\bf{x}}= \\int_{\\Gamma_N} S_{1}({\\bf x}) \\psi({\\bf x}) dS_{\\bf{x}},\\\\\n\\label{eqbim_6}\n\\int_{\\Gamma_N} \\bigg\\{ \\frac{1}{2}\\left(1+\\frac{1}{\\varepsilon}\\right)\\frac{\\partial\\phi_1({\\bf x})}{\\partial{\\nu}} & -\n\\int_{\\Gamma_N} \\left[K_3({\\bf x}, {\\bf y})\\frac{\\partial\\phi_1({\\bf y})}{\\partial{\\nu}} +\nK_4({\\bf x}, {\\bf y})\\phi_1({\\bf y})\\right]dS_{{\\bf y}} \\bigg\\} \\psi^\\nu({\\bf x}) dS_{\\bf{x}}\n= \\int_{\\Gamma_N} S_{2}({\\bf x}) \\psi^\\nu({\\bf x}) dS_{\\bf{x}},\n\\end{align}\n\\end{subequations}\n}\nholds for all test functions $\\psi, \\psi^\\nu$. In the Galerkin method the solution and\nthe test functions are formally replaced by functions in the finite element\nspace. 
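The first-order estimate for the piecewise constant projection can be checked numerically. As a simplified illustration only (a one-dimensional analogue on $[0,1]$ rather than the surface setting above), the $L_2$ error of the cell-average projection of a smooth function halves when $h$ is halved:

```python
import math

def l2_projection_error(f, n, samples=64):
    """L2 error of the piecewise-constant L2 projection (cell averages)
    of f on [0,1] with n uniform cells, estimated by midpoint subsampling."""
    h = 1.0 / n
    err2 = 0.0
    for i in range(n):
        pts = [f(i * h + (s + 0.5) * h / samples) for s in range(samples)]
        avg = sum(pts) / samples            # the L2 projection is the cell average
        err2 += sum((p - avg) ** 2 for p in pts) * (h / samples)
    return math.sqrt(err2)

e1 = l2_projection_error(math.sin, 16)
e2 = l2_projection_error(math.sin, 32)
print(e1, e2, e1 / e2)  # the ratio is close to 2, i.e. O(h) convergence
```

Doubling the number of cells roughly halves the error, consistent with the $O(h)$ rate stated above; the analogous experiment with continuous linear elements would show the ratio approaching $4$, i.e. $O(h^2)$.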
To that end, the unknowns are expanded by basis functions\n$\\psi_i$ (which\ncould be either box or hat functions)\n\\begin{equation}\n \\label{eqprojectphi}\n \\phi \\approx \\sum\\limits_{i=1}^{N} \\phi_{i}\\psi_{i}, \\quad\\mbox{and}\\quad\n \\frac{\\partial \\phi}{\\partial \\nu} \\approx \\sum\\limits_{i=1}^{N} \\phi^\\nu_{i}\\psi_{i}\n\\end{equation}\nand integral equations are tested against the basis functions. This\nleads to the linear system $Ax=b$ where $x$ contains the coefficients\nin \\eqref{eqprojectphi} and\n\\begin{equation}\\label{def:linsys}\nA=\\left[\\begin{array}{cc}A_{11} & A_{12} \\\\A_{21} & A_{22}\\end{array}\\right]\n\\quad \\mbox{and} \\quad \nb = \\left[\\begin{array}{c} b_1 \\\\ b_2 \\end{array}\\right] .\n\\end{equation}\nThe entries of these block matrices are given as\n\\begin{eqnarray}\n\\label{eq_linsysEntry}\n\\nonumber\n A_{11}(i,j) &=& \\int_{\\Gamma_N}\n \\frac{1}{2}\\left(1+{\\varepsilon}\\right) \\psi_i({\\bf x}) \\psi_j({\\bf x}) \\text{d}S_{\\bf x}\n+ \\int_{\\Gamma_N} \\int_{\\Gamma_N} K_2({\\bf x}, {\\bf y}) \n \\psi_i({\\bf x}) \\psi_j({\\bf y}) \\text{d} S_{\\bf y} \\text{d}S_{\\bf x}\\\\\\nonumber\n A_{12}(i,j) &=& \\int_{\\Gamma_N} \\int_{\\Gamma_N} K_1({\\bf x}, {\\bf y}) \n \\psi_i({\\bf x}) \\psi_j({\\bf y}) \\text{d} S_{\\bf y} \\text{d}S_{\\bf x}\\\\\\nonumber\n A_{21}(i,j) &=& \\int_{\\Gamma_N} \\int_{\\Gamma_N} K_4({\\bf x}, {\\bf y}) \n \\psi_i({\\bf x}) \\psi_j({\\bf y}) \\text{d} S_{\\bf y} \\text{d}S_{\\bf x}\\\\\n A_{22}(i,j) &=& \\int_{\\Gamma_N}\n \\frac{1}{2}\\left(1+\\frac{1}{\\varepsilon}\\right) \\psi_i({\\bf x}) \\psi_j({\\bf x}) \\text{d}S_{\\bf x}\n+ \\int_{\\Gamma_N} \\int_{\\Gamma_N} K_3({\\bf x}, {\\bf y}) \n \\psi_i({\\bf x}) \\psi_j({\\bf y}) \\text{d} S_{\\bf y} \\text{d}S_{\\bf x} \n\\end{eqnarray}\nand the right hand side is\n\\begin{equation}\n b_1(i) = \\int_{\\Gamma_N} S_{1}({\\bf x}) \\psi_i({\\bf x}) dS_{\\bf{x}}\n \\quad \\mbox{and} \\quad\n b_2(i) = \\int_{\\Gamma_N} S_{2}({\\bf 
x}) \\psi_i({\\bf x}) dS_{\\bf{x}}.\n\\end{equation}\nSince the basis functions vanish on most triangles, the integrations for\nthe coefficients are only local. For instance, for piecewise constant\nelements, the integral $\\int_{\\Gamma}\\dots\\psi_i dS_{\\bf{x}}$ reduces\nto $\\int_{\\tau_i}\\dots dS_{\\bf{x}}$.\nSince the coefficients cannot be expressed in analytical form they\nhave to be calculated by a suitable choice of quadrature rule.\nHowever, singularities appear \nif the triangles $\\tau_i$ and $\\tau_j$ are identical or share common edges or vertices. \nTo overcome this issue,\nwe apply the singularity removing transformation of\n\\cite{Sauter:1996}. This results in smooth integrals over a four\ndimensional cube. The latter integrals are then approximated by\ntensor product Gauss-Legendre quadrature.\n\n\n\n\n\nAfter the solution of the linear system has been obtained, \nthe electrostatic free solvation energy can be calculated using the\napproximations for the surface potential and its normal derivative\n\\begin{equation}\n\\label{eqsolengdis}\nE_{sol} = \\frac{1}{2}\\sum_{n=1}^{N_c}q_n\\sum_{i=1}^{N}\n\\int_{\\tau_{i}}\\Big[K_1(\\textbf{x}_n,\\textbf{x}) \\phi_{1i}^{\\nu} + K_2(\\textbf{x}_n,\\textbf{x})\\phi_{1i} \\Big] dS_{\\textbf{x}}.\n\\end{equation}\n\n\nSince the matrix $A$ is dense and non-symmetric, our choice of solver\nis the GMRES method.\nIn each GMRES iteration, a matrix-vector product is calculated, and a direct summation for this requires $O(N^2)$ complexity.\nBelow we will introduce the $O(N)$ Cartesian fast multipole method (FMM) to accelerate the matrix-vector product. Calculating the electrostatic solvation free energy $E_{sol}$ in Eq.~(\\ref{eqsolengdis}) is $O(N_cN)$ and we use a Cartesian treecode to reduce the cost to $O(N_c\\log(N))$. 
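The effect of such singularity-removing substitutions can be seen already in a lower-dimensional analogue: the Duffy transformation $\eta_1 = \xi_1$, $\eta_2 = \xi_1 \xi_2$ with Jacobian $\xi_1$ cancels a Coulomb-type $1/|\eta|$ vertex singularity on the reference triangle exactly. A minimal Python sketch (the kernel $1/|\eta|$ and the midpoint rule are illustrative stand-ins, not the scheme used in the solver):

```python
import math

def duffy_integral(n=200):
    """Integrate 1/|eta| over the reference triangle {0 <= eta2 <= eta1 <= 1}
    via the Duffy substitution eta1 = xi1, eta2 = xi1*xi2 (Jacobian xi1).
    The transformed integrand xi1 / (xi1*sqrt(1 + xi2**2)) = 1/sqrt(1 + xi2**2)
    is smooth on the unit square, so a plain midpoint rule converges."""
    total = 0.0
    h = 1.0 / n
    for i in range(n):        # midpoint rule in xi1 (integrand is constant in xi1)
        for j in range(n):    # midpoint rule in xi2
            xi2 = (j + 0.5) * h
            total += h * h / math.sqrt(1.0 + xi2 * xi2)
    return total

exact = math.log(1.0 + math.sqrt(2.0))  # closed form: asinh(1)
print(duffy_integral(), exact)
```

Without the substitution, the $1/|\eta|$ singularity at the origin would ruin the accuracy of any standard product rule; after it, even this crude midpoint rule recovers the exact value $\ln(1+\sqrt{2})$ to several digits.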
Both the FMM and the treecode algorithm are described next, for comparison and because both are used to accelerate $N$-body particle-particle interactions.\n\n\\subsection{Cartesian fast multipole method (FMM)}\nIn this section,\nwe introduce the Cartesian FMM to evaluate matrix-vector products with\nthe matrix in (\\ref{def:linsys}) efficiently. This is a\nkernel-independent version of the FMM, where instead of the multipole\nexpansion, truncated Taylor series are used to approximate the Coulomb ($\\kappa=0$) and the screened Coulomb ($\\kappa \\ne 0$) potentials.\n\nThis considerably simplifies the translation operators for the kernels $K_{1-4}$ because\nthey involve different values of $\\kappa$. It was shown in\n\\cite{Tausch:2003} how the derivatives of the Coulomb kernel can be\ncomputed by simple recurrence formulas.\nFurthermore, the moment-to-moment (MtM) and local-to-local (LtL) translations\nare easily derived using the binomial formula.\n\nWhen the mesh is refined for a larger $N$,\nincreasing the expansion order is essential to control the accuracy.\nThe FMM error analysis implies that the truncated Taylor expansion error\nhas the same magnitude as the discretization error if the expansion\norder is adjusted to the level according to the formula\n\\begin{equation}\\label{variable:order}\n p_l = p_L + L - l,\n\\end{equation}\nwhere $l = 0,1,\\cdots,L$, with $l=0$ the coarsest level and $l=L$ the finest level. That\nis, the finest level uses a low-order expansion $p_L$, and the order\nis incremented on each coarser level; see \\cite{Tausch:2004}.\n\nNote that the multipole series is more efficient, as it contains $(p+1)^2$\nterms, while the Taylor series has $p(p+1)(p+2)\/6$ terms. This\ndifference becomes significant for larger values of $p$. 
However,\nwith the variable order scheme the advantage of the multipole series\nis much smaller, because most translation operators act on the fine levels,\nwhere the numbers of terms in the two series are comparable.\n\nThe matrix-vector product can be considered as a generalized $N$-body\nproblem of the form\n\\begin{equation}\n\\label{eqgeneralKernel}\nV_i = \\int_{\\Gamma_N} \\int_{\\Gamma_N} \\psi_i({\\bf x})\\frac{\\partial^l}{\\partial\\nu_{\\bf x}^l} \\frac{\\partial^k}{\\partial\\nu_{\\bf y}^k} G({\\bf x},{\\bf y}) f_h({\\bf y}) dS_{\\bf y} dS_{\\bf x},\n\\end{equation}\nwhere $k,l \\in \\{0,1\\}$ and $f_h$ is a linear combination of the basis\nfunctions $\\psi_j$.\n\nNext we show how the Cartesian FMM is used within the framework of the boundary element method.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width=3.3in]{figures\/precinterlist}\n\\includegraphics[width=3.3in]{figures\/particle_cluster}\n\\caption{FMM vs treecode structure.\nLeft: FMM cluster-cluster interaction list;\nRight: treecode particle-cluster interaction ($R$ is the distance from the charge to the blue cluster's center;\n$r_c$ is the radius of the blue cluster $c$, i.e.\\ the distance from the farthest particle inside $c$ to the center of $c$).}\n\\label{fig_2}\n\\end{center}\n\\end{figure}\n\nFigure~\\ref{fig_2}a is a 2-D illustration of a discretized molecular surface $\\Gamma_N$\nembedded in a hierarchy of cubes (squares in the image).\nEach black solid dot represents a boundary triangle $\\tau_{i}$ for $i=1,\\cdots,N$.\n\nA cluster $c$ in any level $l$ is defined as the union of triangles\nwhose centroids are located in a cube of that level. $C_l$ is the set\nof all clusters in level $l$.\n\nThe level-0 cube is the smallest axiparallel cube that contains\n$\\Gamma_N$, and\nthus $C_0 = \\{\\Gamma_N\\}$. The refinement of coarser cubes into finer\ncubes stops when clusters in the finest level contain at most a\npredetermined (small) number of triangles. 
For a cluster $c$ we denote\nby $B_c$ the smallest axiparallel rectangular box that contains $c$,\nand write ${\\bf x}_c$ for its center and $\\rho_c$ for the half-length\nof its longest diagonal. Note that $B_c$ can be\nconsiderably smaller than its cube; this is why we call this process\nthe shrink scheme. For two clusters $c$ and $c'$ in the same level we\ndenote by\n\\begin{equation}\n \\eta(c,c') = \\frac{ \\rho_c + \\rho_{c'}}{|{\\bf x}_c - {\\bf x}_{c'}|}\\label{def:eta}\n\\end{equation}\nthe separation ratio of the two clusters. This number determines the\nconvergence rate of the Taylor series\nexpansion, see~\\cite{Tausch:2004}. Two clusters in the same level are neighbors if\ntheir separation ratio is larger than a predetermined constant.\nFor a given cluster $c$, $\\mathcal{N}(c)$ denotes the set of its neighbors.\nThe set of nonempty children of $c$ that are generated in the refinement\nprocess is denoted by $\\mathcal{K}(c)$.\nFinally, we use $\\mathcal{I}(c)$ to denote the interaction list of a cluster $c$,\nwhich consists of clusters at the same level such that\nfor any $c'\\in \\mathcal{I}(c)$,\nthe parent of $c'$ is a neighbor of the parent of $c$, but $c'$ itself\nis not a neighbor of $c$.\n\nWithin the FMM framework,\nthe evaluation of Eq.~(\\ref{eqgeneralKernel}) consists of near-field direct summation\nand Taylor expansion approximation of the well-separated far field.\nThe near-field direct summation takes place between elements in neighboring clusters\nin the finest level. The far-field summation is done by multipole or\nTaylor expansions between clusters in the interaction lists at all levels. This\nprocess is described in many papers, so we do not give details of\nthe derivation.\n\nTo emphasize the distinctive features of the Cartesian FMM, we\nconsider a cluster-cluster interaction between two clusters $c$ and\n$c' \\in \\mathcal{I}(c)$. 
Let $u$ be the potential due to sources in\n$c'$ which is evaluated in $c$. Then,\nby Taylor expansion of the kernel with center ${\\bf x} = {\\bf x}_c$ and ${\\bf y} =\n{\\bf x}_{c'}$, one easily finds that\n\\begin{equation}\n\\label{eqapproximation}\nu_{c,c'}({\\bf x}) = \\int_{c'} \\frac{\\partial^l}{\\partial\\nu_{\\bf x}^l} \\frac{\\partial^k}{\\partial\\nu_{\\bf y}^k} G({\\bf x}, {\\bf y})f_h({\\bf y}) dS_{\\bf y} \\approx \\sum_{|\\alpha| \\leq p}\\lambda ^{\\alpha}_{c} \\frac{\\partial^l}{\\partial\\nu_{\\bf x}^l} ({\\bf x}-{\\bf x}_{c})^{\\alpha},\n\\end{equation}\nwhere $\\alpha = (\\alpha_1, \\alpha_2, \\alpha_3) \\in \\mathbb{N}^3$\nis a multi-index.\nThe expansion coefficients are given by\n\\begin{equation}\n\\label{eqcoeff}\n\\lambda^{\\alpha}_{c} = \\sum\\limits^{p-|\\alpha|}_{|\\beta|=0}\\frac{D^{\\alpha+\\beta} G({\\bf x}_{c}, {\\bf x}_{c'})}{\\alpha!\\beta!}(-1)^{|\\beta|} m^{\\beta}_{c'}(f_h), \\quad |\\alpha| \\leq p,\n\\end{equation}\nwhere $\\alpha!=\\alpha_1!\\alpha_2!\\alpha_3!$ and $m^{\\beta}_{c'}(f_h)$ is the\nmoment of $f_h$, given by\n\\begin{equation}\n\\label{eqmoment}\nm^{\\beta}_{c'}(f_h) = \\int_{c'} \\frac{\\partial^k}{\\partial\\nu_{\\bf x}^k} ({\\bf x} - {\\bf x}_{c'})^{\\beta}f_h({\\bf x})dS_{\\bf x}, \\quad |\\beta| \\leq p.\n\\end{equation}\nEquation~(\\ref{eqcoeff}) translates the moments of $c'$ to the local expansion coefficients of the cluster $c$,\nand it is therefore called the moment-to-local (MtL) translation. 
Since $f_h$ is a linear\ncombination of basis functions, we obtain from linearity that\n\\begin{equation}\\label{eqmomentL}\nm^{\\beta}_{c'}(f_h) = \\sum_{i\\in c'} m^{\\beta}_{c'}(\\psi_i) f_i,\n\\end{equation}\nwhere $f_i$ are the coefficients of $f_h$ with respect to the\n$\\psi_i$-basis and the summation is taken over basis functions\nwhose support overlaps with $c'$.\nSince we consider a Galerkin discretization, we have to integrate the\nfunction $u_{c,c'}({\\bf x})$ against the test functions to obtain the\ncontribution $g_i$ of the two clusters to the matrix-vector product. Thus we\nget from (\\ref{eqapproximation})\n\\begin{equation}\\label{LtP}\ng_i = \\int_{c} \\psi_i({\\bf x}) u_{c,c'}({\\bf x}) dS_{\\bf x} =\n\\sum_{|\\alpha| \\leq p}\\lambda ^{\\alpha}_{c} \\int_{c}\n\\frac{\\partial^l}{\\partial\\nu_{\\bf x}^l} ({\\bf x}-{\\bf\n x}_{c})^{\\alpha} \\psi_i({\\bf x}) dS_{\\bf x} =\n\\sum_{|\\alpha| \\leq p}\\lambda^{\\alpha}_{c} m_c^{\\alpha}(\\psi_i).\n\\end{equation}\nThis operation converts expansion coefficients to potentials and is\ndenoted as the local-to-potential (LtP) translation.\n\nTo move moments and local expansion coefficients between levels, we also\nneed the moment-to-moment (MtM) and local-to-local (LtL)\ntranslations. They can be derived easily from the multivariate\nbinomial formula. We obtain\n\\begin{equation}\\label{eqMtM}\nm^{\\alpha}_{c}(f_h)\n= \\sum_{c'\\in\\mathcal{K}(c)}\\sum_{\\beta \\leq \\alpha} \\binom {\\alpha}{\\beta} ({\\bf x}_{c'} - {\\bf x}_{c})^{\\alpha-\\beta} m^{\\beta}_{c'}(f_h),\n\\end{equation}\nand\n\\begin{equation}\\label{eqLtL}\n\\lambda ^{\\beta}_{c'} = \\sum_{\\substack{\\beta\\le\\alpha \\\\ |\\alpha|\\le p}} \\binom {\\alpha}{\\beta} ( {\\bf x}_{c'} - {\\bf x}_{c})^{\\alpha-\\beta} \\lambda ^{\\alpha}_{c}, \\quad |\\beta|\\le p,\n\\end{equation}\nwhere $c' \\in \\mathcal{K}(c)$.\n\nWe see that moments and expansion coefficients are computed by\nrecurrence from the previous level. 
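For point sources the MtM shift is exact, which makes the binomial translation easy to sanity-check numerically. The following Python sketch is our own illustration: it uses monomial moments of a weighted point cloud (rather than the paper's panel integrals) and verifies that shifting child moments to a parent center reproduces the moments computed directly at the parent center.

```python
import numpy as np
from math import comb
from itertools import product

def multi_indices(p):
    # all alpha in N^3 with |alpha| <= p
    return [a for a in product(range(p + 1), repeat=3) if sum(a) <= p]

def moments(pts, w, center, p):
    # m^alpha_c = sum_i w_i (x_i - c)^alpha  (point-source analogue of the panel moments)
    d = pts - center
    return {a: np.sum(w * np.prod(d ** np.array(a), axis=1)) for a in multi_indices(p)}

def mtm(child_moms, child_center, parent_center, p):
    # MtM: shift child moments to the parent center via the multivariate binomial formula
    shift = np.array(child_center) - np.array(parent_center)
    out = {}
    for a in multi_indices(p):
        s = 0.0
        for b in multi_indices(p):
            if all(bi <= ai for ai, bi in zip(a, b)):   # only beta <= alpha contributes
                binom = np.prod([comb(ai, bi) for ai, bi in zip(a, b)])
                s += binom * np.prod(shift ** (np.array(a) - np.array(b))) * child_moms[b]
        out[a] = s
    return out

rng = np.random.default_rng(2)
pts, w = rng.random((50, 3)), rng.random(50)
p = 4
child_c, parent_c = np.array([0.5, 0.5, 0.5]), np.array([0.0, 0.0, 0.0])
shifted = mtm(moments(pts, w, child_c, p), child_c, parent_c, p)
direct = moments(pts, w, parent_c, p)
err = max(abs(shifted[a] - direct[a]) for a in multi_indices(p))   # ~ machine precision
```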
In the finest level the moments of\nthe basis functions $m_c(\\psi_i)$ can be computed either by numerical\nquadrature or even analytically, because we consider flat panels and\npolynomial ansatz functions. We skip the details, as these formulas are\nstraightforward applications of the binomial formula.\n\nIn summary, the Cartesian FMM within the framework of the boundary element method proceeds as follows.\n\n\\begin{enumerate}\n\\item \\emph{Nearfield Calculation.}\\\\\nfor $c \\in C_L$\\\\\n\\hspace*{1.5em} for $c' \\in \\mathcal{N}(c)$ \\\\\n\\hspace*{3em} multiply the matrix block of $c$ and $c'$ directly.\n\n\\item \\emph{Moment Calculation.} \\\\\nfor $c \\in C_L$\\\\\n\\hspace*{1.5em} Compute the moments $m^{\\beta}_{c}(f_h)$ in (\\ref{eqmomentL}).\n\n\\item \\emph{Upward Pass.} \\\\\nfor $l=L-1,\\dots,l_{min}$\\\\\n\\hspace*{1.5em}for $c \\in C_l$\\\\\n\\hspace*{3em}for $c' \\in \\mathcal{K}(c)$\\\\\n\\hspace*{4.5em} Compute the MtM translation (\\ref{eqMtM})\n\n\\item \\emph{Interaction Phase.} \\\\\nfor $l=L,\\dots,l_{min}$\\\\\n\\hspace*{1.5em} for $c \\in C_l$\\\\\n\\hspace*{3em} for $c' \\in \\mathcal{I}(c)$\\\\\n\\hspace*{4.5em} Compute the MtL translation (\\ref{eqcoeff})\n\n\\item \\emph{Downward Pass.} \\\\\nfor $l=l_{min},\\dots,L-1$\\\\\n\\hspace*{1.5em} for $c \\in C_l$\\\\\n\\hspace*{3em} for $c' \\in \\mathcal{K}(c)$\\\\\n\\hspace*{4.5em} Compute the LtL translation (\\ref{eqLtL})\n\n\\item \\emph{Evaluation Phase.}\\\\\nfor $c \\in C_L$\\\\\n\\hspace*{1.5em} Compute the LtP translation (\\ref{LtP})\n\\end{enumerate}\n\nIn this algorithm $l_{min}$ is the coarsest level that contains\nclusters with non-empty interaction lists.\n\nSince the clusters in the finest level contain a bounded number of triangles, the number\nof levels grows logarithmically with $N$ as the mesh is refined.\nWith a geometric series argument, one can show that the total number\nof interaction-list entries over all levels is $O(N)$. 
If the translations in\nall levels are computed with the same order $p$, then the complexity of\nall translations is $O(N p^3)$. If the variable order method is used,\nwhere the order is given by (\\ref{variable:order}), then the\ncomplexity reduces to $O(N)$. More details can be found in~\\cite{Tausch:2004}.\n\n\\subsection{Cartesian treecode}\nThe Cartesian treecode can be considered as a fast multipole method\nwithout the downward pass.\nThe computational cost of the treecode is $O(N\\log N)$, as opposed to the $O(N)$ cost of the FMM.\nHowever, the constants in this complexity estimate are smaller, and we\nfound it to be useful for the computation of the free solvation energy\n(\\ref{eqsolengdis}), where\nthe source and evaluation points are different and no or only limited near-field\ncalculations are required.\nThe direct computation of the solvation energy\nas interactions between $N$ boundary elements and $N_c$ atomic centers\nhas $O(N_cN)$ complexity.\nThis is shown in Fig.~\\ref{fig_2}b,\nin which a charge located at ${\\bf x}_n$ interacts with induced charges ($\\phi_1$ or $\\frac{\\partial \\phi_1}{\\partial \\nu}$)\nlocated at the center of each panel.\nThese interactions consist of near-field particle-particle interactions by direct summation\nand far-field particle-cluster interactions controlled by the multipole acceptance criterion (MAC) as specified below. 
\nFor simplicity, we write the involved calculations as\n\\begin{equation}\n\\label{eqsolengform}\nE = \\sum_{n=1}^{N_c} q_n V_n\n = \\sum_{n=1}^{N_c} q_n\\sum_{l=1}^{N}\n\\int_{\\tau_{l}} \\frac{\\partial^k}{\\partial\\nu_\\textbf{x}^k} G(\\textbf{x}_n,\\textbf{x})f(\\textbf{x})dS_\\textbf{x},\n\\end{equation}\nwhere $G$ is the Coulomb or screened Coulomb kernel, $k \\in\n\\{0,1\\}$,\nthe $q_n$ are partial charges,\nand $f$ is either $\\phi_1$ or $\\partial\\phi_1\/\\partial\\nu$.\n\n\\begin{figure}\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/treecode}\n\\caption{}\n\\label{fig_3_1}\n\\end{subfigure}\n\\begin{subfigure}{.5\\textwidth}\n\\centering\n\\includegraphics[width=\\linewidth]{figures\/pc_interaction}\n\\caption{}\n\\label{fig_3_2}\n\\end{subfigure}\n\\caption{The hierarchical tree structure for the treecode. Left: the tree is formed by recursively dividing a parent cluster into eight (in 2-D, four) child clusters, using $N_0=8$; Right: the adaptive shrink scheme improves the treecode efficiency.}\n\\label{fig_3}\n\\end{figure}\n\nIn our implementation, we use the same clustering scheme for\n$\\Gamma_N$ as in the FMM. Instead of using the interaction lists to calculate the far-field\ninteraction, the treecode uses the following multipole acceptance criterion (MAC) to determine whether a particle and a cluster are well separated, in which case a far-field particle-cluster interaction is used.\nThis is similar to the separation ratio in the FMM. 
The MAC is given as\n\\begin{equation}\n\\label{eqMAC}\n\\frac{r_c}{R} \\le \\theta,\n\\end{equation}\nwhere $r_c=\\text{max}_{\\textbf{x}_j\\in c}|\\textbf{x}_j-\\textbf{x}_c|$ is the cluster radius,\n$R=|\\textbf{x}_n-\\textbf{x}_c|$ is the particle-cluster distance,\nand $\\theta<1$ is a user-specified parameter.\nIf the criterion is not satisfied,\nthe program checks the children of the cluster recursively\nuntil either the MAC is satisfied\nor the leaves (the finest-level clusters) are reached,\nat which point direct summation is applied.\nOverall, the treecode evaluates the potentials~(\\ref{eqsolengform})\nas a combination of particle-cluster interactions and direct summations.\nThus, when $\\textbf{x}_n$ and $c$ are well separated,\nthe potential can be evaluated as\n\\begin{equation}\n\\label{eqsolengformTree}\n\\int_{c} \\frac{\\partial^{k}}{\\partial\\nu_{\\bf x}^{k}} G({\\bf x}_n,{\\bf x}) f({\\bf x})dS_{\\bf x}\n\\approx\n\\sum\\limits^{p}_{|\\beta|=0} D^{\\beta}G({\\bf x}_{n}, {\\bf x}_{c}) (-1)^{|\\beta|} {m'}^{\\beta}_{c}(f),\n\\end{equation}\nwhere the moment ${m'}^{\\beta}_{c}(f)$ is calculated with the same MtM operator as in the FMM.\n\nThe treecode algorithm can therefore be summarized as\n\n\\begin{enumerate}\n\\item \\emph{Moment Calculation.} \\\\\nfor $c \\in C_L$\\\\\n\\hspace*{1.5em} Compute the moments $m^{\\beta}_{c}(f_h)$ in (\\ref{eqmomentL}).\n\n\\item \\emph{Upward Pass.} \\\\\nfor $l=L-1,\\dots,l_{min}$\\\\\n\\hspace*{1.5em}for $c \\in C_l$\\\\\n\\hspace*{3em}for $c' \\in \\mathcal{K}(c)$\\\\\n\\hspace*{4.5em} Compute the MtM translation (\\ref{eqMtM})\n\n\\item \\emph{Interaction Phase}\\\\\n for $n=1,...,N_c$\\\\\n\\hspace*{1.5em}$E_n = 0$\\\\\n\\hspace*{1.5em}for $c \\in C_0$\\\\\n\\hspace*{3em} addCluster(c,$\\textbf{x}_n$,$E_n$)\n\\end{enumerate}\nwhere addCluster(c,$\\textbf{x}_n$,$E_n$), as shown below, is a routine that\nrecurses from the coarse clusters to the finer clusters until the\nseparation is sufficient to use the Taylor series approximation\\\\\n\\begin{itemize}\n \\item[]if $\\textbf{x}_n$ and $\\textbf{x}_c$ satisfy the MAC for $c$\\\\\n\t$~~~~~~~~~~~~~~E_n\\mathrel{+}=\\sum\\limits^{p}_{|\\beta|=0} D^{\\beta}G({\\bf x}_{n}, {\\bf x}_{c}) (-1)^{|\\beta|} {m'}^{\\beta}_{c}(f)$\n\t\\item[] else if $\\mathcal{K}(c) \\ne \\emptyset$\\\\\n $~~~~~~$for $c'\\in \\mathcal{K}(c)$\\\\\n $~~~~~~~~~~~~~~$ addCluster($c'$,$\\textbf{x}_n$,$E_n$)\n\t\\item[] else\\\\\n\t$~~~~~~~~~~~~~~E_n\\mathrel{+}=\\displaystyle\\int_{c} \\frac{\\partial^{k}}{\\partial\\nu_{\\bf x}^{k}} G({\\bf x}_n,{\\bf x})f({\\bf x})dS_{\\bf x}$\n\\end{itemize}\n\nNote that steps $1$ and $2$ are analogous to the corresponding steps in the FMM,\nhence the addition of the treecode to the FMM code requires little\nextra work.\n\n\\subsection{Preconditioning}\nThe results in our previous work \\cite{Geng:2013b, Geng:2013a} show\nthat the PB boundary integral formulation in Eqs.~(\\ref{eqbim_3}) and (\\ref{eqbim_4})\nis well conditioned and thus requires only a small number of GMRES iterations if\nthe triangulation quality is satisfactory (e.g.\\ nearly quasi-uniform).\nHowever, due to the complexity of the molecular surface,\nthe triangulation unavoidably has a few triangles with defects (e.g.\\ narrow or tiny triangles),\nwhich deteriorate the condition number of the linear algebraic matrix and thus\nincrease the number of GMRES iterations required to reach the desired convergence accuracy.\n\nRecently, we designed a block-diagonal preconditioning scheme\nto improve the matrix condition for the treecode-accelerated boundary integral (TABI) Poisson-Boltzmann solver \\cite{Chen:2018a}.\nThe essential idea of this preconditioning scheme is\nto use the short-range interactions within the leaves of the tree to form the preconditioning matrix $M$.\nThis preconditioning matrix $M$ can be permuted into block-diagonal form, so that $Mx=y$\ncan be solved by efficient and accurate direct methods. 
\nIn the current study of the FAGBI solver,\nthe same conditioning issue arises, and it can be resolved\nby a similar preconditioning scheme, adapted to and controlled by the FMM structure.\n\nThe key idea is to find an approximating matrix $M$ for $A$ such that\n$M$ is close to $A$ and the linear system $My=z$ is easy to solve.\nTo this end, our choice for $M$ is the matrix\ninvolving only the direct-sum interactions within cubes\/clusters at a designated level\n(an optimal choice considering both cost and efficiency),\nas opposed to $A$,\nwhich involves all interactions.\n\nThe definition of $M$ is essentially that of $A$ in\n(\\ref{def:linsys}), except that the entries of $M$ are zero\nif $\\tau_i$ and $\\tau_j$ are not in the same cube at the designated level of the tree, i.e.\n\\begin{equation}\nM_{mn}(i,j) = \\left\\{\\begin{array}{l}\nA_{mn}(i,j) \\text{~~~if~$\\tau_i, \\tau_j$ are in the same cube at the designated level of the tree} \\\\\n0 \\text{~~~~~~~~~~~~~otherwise.}\\end{array}\\right. \n\\end{equation}\n\n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[width=2.18in]{figures\/fig_A4}\n\t\t\\includegraphics[width=2.18in]{figures\/fig_M4}\n\t\t\\includegraphics[width=2.18in]{figures\/fig_M4d}\n\t\t\\hskip 0.8in (a) \\hskip 2.0in (b) \\hskip 2.0in (c)\n\t\n\t\t\\caption{A schematic illustration of the boundary element dense matrix $A$ and its preconditioning matrix $M$: (a) matrix $A$ for the case of $N=20$ elements (the size of a matrix entry shows the strength of the interaction; the four color-coded regions relate to $K_{1-4}$ in Eqs.~(\\ref{eqbim_3})-(\\ref{eqbim_4})); (b) the ``block diagonal block\" preconditioning matrix $M$ (assuming the cube at the designated level contains at most 3 panels); (c) the ``block diagonal\" preconditioning matrix $M$, which is obtained by permuting $M$ in (b) after switching the order of the unknowns. 
\n\t\t}\n\t\t\\label{fig_matrices}\n\t\\end{center}\n\\end{figure}\n\nHere we use Fig.~\\ref{fig_matrices} to illustrate\nthe design of our preconditioning scheme and its advantages.\nFigure~\\ref{fig_matrices}(a) illustrates the dense boundary element matrix $A$\nfor the discretized system~(\\ref{def:linsys}) with 20 boundary elements.\nThe four different colors represent the entries of the linear algebraic matrix $A$ in Eq.~(\\ref{eq_linsysEntry}) related to the four kernels $K_{1-4}$.\nNote that the unknowns are ordered as the potentials $\\phi_1$ on all elements,\nfollowed by the normal derivatives of the potential $\\frac{\\partial \\phi_1}{\\partial \\nu}$.\nThe size of a matrix entry in Fig.~\\ref{fig_matrices} indicates the magnitude of the interaction\nbetween a target element and a source element, which decays from the main diagonal toward its two wings.\nBy including only the interactions between elements in the same cube at the designated level,\nwe obtain our preconditioning matrix $M$, as illustrated in\nFig.~\\ref{fig_matrices}(b). This preconditioning matrix $M$ has four blocks, and each block is\nitself block diagonal.\nFollowing the procedure detailed in \\cite{Chen:2018a},\nby rearranging the order of the unknowns, a block-diagonal matrix $M$\nis obtained, as illustrated in Fig.~\\ref{fig_matrices}(c).\nSince $M=\\text{diag}\\{M_1, M_2, \\cdots, M_{N_l}\\}$, as shown in Fig.~\\ref{fig_matrices}(c), is block diagonal,\n$My=z$ can be solved by a direct method, e.g.\\ LU factorization, by solving each individual system $M_iy_i=z_i$. Here each $M_i$ is a square nonsingular matrix, which represents the interactions between particles\/elements in the $i$th cube of the tree at the designated level. As shown in \\cite{Chen:2018a}, the total cost of solving $My=z$ is essentially $O(N)$ and thus very efficient. Results for the preconditioning performance will be shown in the next section. 
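The blockwise solve $M_i y_i = z_i$ just described can be sketched as follows; an illustrative Python sketch with random diagonally-shifted blocks standing in for the leaf-level blocks (not the actual BEM entries), confirming that solving block by block reproduces the full solve of the assembled block-diagonal $M$:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, block_diag

rng = np.random.default_rng(4)
# leaf-level blocks M_i: small, square, nonsingular (diagonally shifted here)
blocks = [rng.standard_normal((n, n)) + (n + 4) * np.eye(n) for n in (3, 5, 4, 6)]
M = block_diag(*blocks)
z = rng.standard_normal(M.shape[0])

# solve My = z block by block with LU factorization -- cost is linear in
# the number of blocks for bounded block size
y, start = np.empty_like(z), 0
for Mi in blocks:
    n = Mi.shape[0]
    y[start:start + n] = lu_solve(lu_factor(Mi), z[start:start + n])
    start += n

# agrees with a solve of the assembled block-diagonal matrix
err = np.linalg.norm(M @ y - z)
```

In a preconditioned GMRES run, this routine would be applied once per iteration in place of a full inverse of $A$.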
\n\n\\section{Results}\nOur numerical results are mostly produced\non a desktop with an i5 7500 CPU and 16\\,GB memory,\nusing the GNU Fortran 7.2.0 compiler with compiling option ``-O2''.\nA few results for the long-running direct summation are obtained from the SMU high performance computing cluster, ManeFrame II (M2), with Intel Xeon Phi 7230 processors, using openmpi\/3.1.3 with compiling option ``-O2''.\nNote that these direct-sum results are needed only for the evaluation of accuracy,\nand low-resolution results are checked on different machines to ensure the consistency of the accuracy.\nAll protein structures are obtained from the Protein Data Bank (\\url{https:\/\/www.wwpdb.org}) and partial charges are assigned by the CHARMM22 force field \\cite{Brooks:1983} using the PDB2PQR software \\cite{Dolinsky:2007}.\n\nThe physical quantity computed in this manuscript is the electrostatic solvation free energy, in units of kcal\/mol. The electrostatic potential $\\phi$ or $\\phi_1$ governed by Eq.~(\\ref{eqNPBE}) or Eqs.~(\\ref{eqbim_3}-\\ref{eqbim_4}) is expressed in units of $e_c\/(4\\pi$\\AA), where $e_c$ is the elementary charge. By doing this, we can directly use the partial charges obtained from PDB2PQR \\cite{Dolinsky:2007} for solving the PB equation.\nAfter obtaining the potential, we can convert the unit $e_c\/(4\\pi$\\AA) to kcal\/mol\/$e_c$ by multiplying by the constant $4\\pi \\times 332.0716$ at room temperature $T=300\\,$K. To go from potential to energy, only a further multiplication by $e_c$ is needed.\n\nWe solve the PB equation first on the Kirkwood sphere \\cite{Kirkwood:1934}, where the analytic solution is available, to validate the accuracy and efficiency of the FAGBI solver; then on a typical protein, 1a63, to demonstrate the overall performance; and finally on a series of proteins to demonstrate the preconditioning scheme and the broad applicability of the FAGBI solver. 
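The unit conventions above can be illustrated with the closed-form solvation energy of a centered charge in a sphere. The Python sketch below is our own illustration using the standard Born/Kirkwood expression for a centered charge; the inverse Debye length $\kappa$ used in the paper's sphere tests is not restated here, so it is left as a parameter, and with $\kappa=0$ the formula gives the salt-free Born energy rather than the screened value quoted later.

```python
KCAL = 332.0716  # converts e_c^2 / Angstrom to kcal/mol (room temperature)

def kirkwood_centered_charge(q, R, eps1, eps2, kappa=0.0):
    # Solvation free energy (kcal/mol) of a charge q (in e_c) at the center of
    # a sphere of radius R (Angstrom); eps1/eps2 are the inner/outer dielectric
    # constants and kappa (1/Angstrom) is the solvent's inverse Debye length.
    # kappa = 0 reduces to the Born formula.
    return 0.5 * KCAL * q * q / R * (1.0 / (eps2 * (1.0 + kappa * R)) - 1.0 / eps1)

# Born limit: q = 1 e_c, R = 1 Angstrom, vacuum -> conductor gives about -166.04 kcal/mol
born = kirkwood_centered_charge(1.0, 1.0, 1.0, float("inf"))
```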
\n\n\\subsection{The Kirkwood sphere}\nOur first test case is the Kirkwood sphere of radius 50\\AA~with an atomic charge $q=50e_c$ at the center of the sphere.\nThe dielectric constant is $\\varepsilon_1=1$ inside the sphere and $\\varepsilon_2=40$ outside the sphere.\nThis test case consists of three parts,\nwhich show first the discretization error,\nthen the impact of the quadrature order on the convergence of the accuracy,\nand finally the comparison between the Cartesian FMM and direct summation\nin terms of error, CPU time, and memory usage. \\\\\n\n\\subsubsection{Overall discretization error}\nWe first solve the boundary integral PB equation on the Kirkwood sphere\nusing direct summation for the matrix-vector product instead of the FMM acceleration.\nThe linear algebraic system is solved using the GMRES iterative solver with relative $L_2$ tolerance $\\tau = 10^{-6}$.\nThe Galerkin method is applied to form the matrix, combined with a one-point Gauss quadrature rule.\nCubature methods \\cite{Sauter:1996} are applied to treat the singularities\narising from the Galerkin discretization of the boundary integral equations.\n\n\\begin{table}[htp!]\n\\caption{\\small Discretization error from solving the PB equation on a Kirkwood sphere with a centered charge. 
\nResults include the electrostatic solvation free energy $E^{ds}_{sol}$ with error $e^{ds}_{sol}$ and convergence rate $r^{ds}_{sol}$, and the discretization errors in the surface potential, $e^{ds}_{\\phi}$, and its normal derivative, $e^{ds}_{\\partial_n \\phi}$, with their convergence rates $r^{ds}_{\\phi}$ and $r^{ds}_{\\partial_n \\phi}$.}\n{\n\\small\n\\begin{center}\n\\begin{tabular}{lccccccccc}\n\\hline\n\\rule{0pt}{12pt}\n$N^1$ &$h$& $E^{ds}_{sol}$ (kcal\/mol) & $e^{ds}_{sol}$ (\\%) & $r^{ds}_{sol}$ & $e^{ds}_{\\phi}$ (\\%) & $r^{ds}_{\\phi}$ & $e^{ds}_{\\partial_n \\phi}$ (\\%) & $r^{ds}_{\\partial_n \\phi}$ & $\\text{Iters}^2$ \\\\\\hline\n320 &9.90&-8413.28&1.692&3.6&17.925&4.1&0.652&1.9&3\\\\\n1280 &4.95&-8328.18&0.663&2.6& 4.419&4.1&0.212&3.1&3\\\\\n5120 &2.48&-8293.42&0.243&2.7& 1.112&4.0&0.092&2.3&3\\\\\n20,480 &1.24&-8280.70&0.089&2.7& 0.285&3.9&0.046&2.0&3\\\\\n81,920 &0.62&-8276.33&0.036&2.5& 0.077&3.7&0.023&2.0&3\\\\\n327,680 &0.31&-8274.64&0.016&2.3& 0.022&3.5&0.011&2.0&3\\\\\n1,310,720&0.15&-8273.91&0.007&2.2& 0.007&3.2&0.006&2.0&4\\\\\n$\\infty^3$ &&-8273.31&&&&&&&\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n${}^1$ $N$ is the number of triangles in the triangulation; $h$ is the average of the largest edge lengths of all triangles; $h\\approx O(N^{-1\/2})$\\\\\n${}^2$ Number of GMRES iterations.\\\\\n${}^3$ This row displays the exact electrostatic solvation energy $E^{ex}_{sol}$,\nwhich is known analytically.\n}\n\\label{tb_discretization}\n\\end{table}%\n\nTable~\\ref{tb_discretization} shows the total discretization errors,\nwhich are related to the triangulation, the quadrature, and the basis functions.\nIn this table, Column 1 is the number of triangles $N$ for the sphere under mesh refinement, and\nColumn 2 is the average largest edge length $h$ of all triangles. Note that $h \\approx O(N^{-1\/2})$, which can be seen from the comparison of the values in the two columns. 
Since using $N$ is more convenient for specifying mesh refinement in our numerical simulations, we use it to quantify mesh refinement in the rest of the paper.\n\nColumns 3-4 show the electrostatic solvation energy $E^{ds}_{sol}$\nand its error $e^{ds}_{sol}$ compared with the true value in the last row of the table.\nThe convergence rate $r^{ds}_{sol}$, defined as the ratio of successive errors, is shown in column 5 and follows an $O(N^{-1\/2})$ pattern.\nThe relative $L_\\infty$ errors of the surface potential $\\phi$, $e^{ds}_{\\phi}$,\nand of its normal derivative $\\partial_n \\phi$, $e^{ds}_{\\partial_n \\phi}$,\nare shown in columns 6 and 8.\nThe surface potential converges with an $O(N^{-1})$ pattern, as shown in column 7,\nwhich is faster than its normal derivative with an $O(N^{-1\/2})$ pattern, as shown in column 9.\nWe believe that this is due to the continuity of the surface potential\nand the discontinuity of its normal derivative across the interface.\nWe also observe that the number of GMRES iterations, shown in column 10, is\nless than or equal to four in all tests,\nwhich verifies that the boundary integral formulation is well conditioned.\n\nNote that a back-to-back comparison between Table~\\ref{tb_discretization} in this manuscript\nand Table 2 in our previous work \\cite{Geng:2013b} shows improvements in the convergence of\n$E^{ds}_{sol}$, $\\phi$, and $\\partial_n \\phi$ for the present work.\nThis is due to the Galerkin scheme with Duffy's trick and the cubature method for treating the singularity,\nas opposed to the collocation scheme, which simply removes the\nsingular integral whenever it occurs on an element \\cite{Geng:2013b}.\n\n\\subsubsection{Quadrature error}\nIn part 1, we noticed that the convergence rate for $E^{ds}_{sol}$ is about $O(N^{-1\/2})$,\nthe same as the rate of $\\partial_n \\phi$,\nbut slower than the $O(N^{-1})$ rate of $\\phi$. 
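The reported rates are ratios of successive errors under fourfold mesh refinement; the implied order of convergence in $N$ can be recovered from them directly. The following small Python snippet is our own post-processing of the $e^{ds}_{sol}$ column of Table~\ref{tb_discretization}:

```python
import math

# errors e^{ds}_{sol} (%) and mesh sizes N from Table (tb_discretization)
N = [320, 1280, 5120, 20480, 81920, 327680, 1310720]
e = [1.692, 0.663, 0.243, 0.089, 0.036, 0.016, 0.007]

# the tables' "rate" is e_{k-1}/e_k; the implied order satisfies e = O(N^{-order})
for (N1, e1), (N2, e2) in zip(zip(N, e), zip(N[1:], e[1:])):
    ratio = e1 / e2
    order = math.log(ratio) / math.log(N2 / N1)
    print(f"N={N2:8d}  ratio={ratio:4.2f}  order={order:4.2f}")
```

The computed orders fall between $1/2$ (ratio 2) and $1$ (ratio 4), consistent with the observation above that the energy converges faster than $\partial_n\phi$ but slower than $\phi$.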
\nTo investigate the possible reason,\nwe next study the influence of the quadrature rule.\n\nWe increase the order of the tensor product Gauss-Legendre rule to $2$, $3$, and $4$\nand test the effect on the discretization error of $E^{ds}_{sol}$,\nas shown in Table~\\ref{tb_quadrature}.\nCompared with the results in Table~\\ref{tb_discretization},\nusing higher-order quadrature improves both the convergence rate of $E^{ds}_{sol}$\nand the required number of GMRES iterations.\nWhen the quadrature order is $4$, the electrostatic solvation energy $E^{ds}_{sol}$\nconverges to the exact energy at approximately the rate $O(N^{-1})$.\n\nIncreasing the quadrature order further\nwill not significantly improve the convergence rate of $E^{ds}_{sol}$,\nbecause then the discretization error will be greater than the\nquadrature error.\nSince higher-order quadrature requires more computational cost,\nin practice, due to the large size of the protein solvation problem,\nwe will use quadrature order 1, as it offers the optimal combination of accuracy and efficiency.\n\n\\begin{table}[htp]\n\\caption{\\small Discretization error of the solvation free energy $E^{ds}_{sol}$\nfor solving the same problem as in part 1, using Gaussian quadrature orders 2-4}\n{\n\\small\n\\begin{center}\n\\begin{tabular}{lccccccccc}\n\\hline\n & \\multicolumn{3}{c}{Quad. Order 2} & \\multicolumn{3}{c}{Quad. Order 3} & \\multicolumn{3}{c}{Quad. 
Order 4}\n\\\\\n\\rule{0pt}{12pt}\n$N$ & $E^{ds}_{sol}$ & $r^{ds}_{sol}$ & Iters & $E^{ds}_{sol}$ & $r^{ds}_{sol}$ & Iters & $E^{ds}_{sol}$ & $r^{ds}_{sol}$ & Iters\\\\\\hline\n320 &-8378.82&3.7&2 &-8369.13&3.9&2 &-8369.21&3.9&2\\\\\n1280 &-8305.24&3.3&3 &-8298.57&3.8&2 &-8297.67&4.0&2\\\\\n5120 &-8284.01&3.0&3 &-8280.12&3.7&3 &-8279.40&3.9&3\\\\\n20,480 &-8277.27&2.7&3 &-8275.21&3.6&3 &-8274.81&4.0&3\\\\\n81,920 &-8274.94&2.4&3 &-8273.89&3.3&3 &-8273.68&4.1&3\\\\\n327,680 &-8274.03&2.3&3 &-8273.51&3.0&3 &-8273.40&4.0&3\\\\\n1,310,720&-8273.64&2.2&3 &-8273.38&2.7&3 &-8273.33&4.7&3\\\\\n$\\infty$ &-8273.31& & &-8273.31& & &-8273.31& & \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n}\n\\label{tb_quadrature}\n\\end{table}%\n\n\\subsubsection{FMM}\nThis part of the test studies the role of the Cartesian FMM\nwith regard to the accuracy and efficiency of the algorithm.\nWe apply the FMM in place of direct summation to accelerate the matrix-vector product in GMRES.\nHere we use the first-order quadrature rule for simplicity.\nWe set the separation ratio $\\eta = 0.8$, defined in (\\ref{def:eta}), and adjust\nthe number of levels $L$ in the FMM algorithm for different $N$.\nFigure~\\ref{fig_sphere_err} shows (a) the error in the electrostatic solvation energy, (b) the CPU time, and (c) the memory usage\nversus the number of triangles $N$.\nHere the error is computed with respect to the exact value $E_{sol}^{ex} = -8273.31$ kcal\/mol.\nWe provide results using fixed Taylor expansion orders $p=1,3,5,7$\nand adaptive orders starting from $p=1$ and $p=3$.\nHere the adaptive order represents the idea that the\nexpansion order should be adjusted to the level (i.e.\\ a higher expansion order at coarser levels)\nin order to match the discretization error \\cite{Tausch:2003}. 
\nIn this figure, the solid blue line with square marks shows the results of direct summation with one-point quadrature,\nwhich exhibit in (a) an $O(N^{-1\/2})$ order of convergence in accuracy, as observed in Table~\\ref{tb_discretization},\nin (b) $O(N^2)$ CPU time, and in (c) $O(N)$ memory usage.\n\nAs seen in Fig.~\\ref{fig_sphere_err}(a),\nthe use of the FMM introduces a truncation error in addition to the discretization error.\nThe truncation error is more significant than the discretization error when the order $p$ is small and less significant when $p$ is large.\n\nFurthermore, we observe that when expansion orders $p=5,7$ of the FMM are used,\nthe errors are even smaller than those obtained with the direct sum.\nThis is because the truncation error of the\nTaylor approximation is smaller than the quadrature error of the far-field\ncoefficients in the direct sum.\n\n\\begin{figure}[htb!]\n\t\\begin{center}\t\n\t\t\\includegraphics[width=2.188in]{figures\/FMM_sphere_err}\n\t\t\\includegraphics[width=2.188in]{figures\/FMM_sphere_time}\n\t\t\\includegraphics[width=2.188in]{figures\/FMM_sphere_mem}\\\\\n\t\t(a) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (b) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ (c)\n\t\t\\caption{Computing the electrostatic solvation energy on a Kirkwood sphere as the number of triangles\/particles $N$ increases: (a) Error, (b) CPU time, and (c) Memory usage; discretization error $e^{ds}_{sol}$ (solid line), Cartesian FMM approximation error $e^{cf}_{sol}$ (dashed line); Taylor expansion orders $p = 1, 3, 5, 7$ and adaptive Taylor orders $p=1,3$.}\n\t\t\\label{fig_sphere_err}\n\t\\end{center}\n\\end{figure}\n\nAs seen in Fig.~\\ref{fig_sphere_err}(b), the use of the FMM significantly reduces the CPU time,\nwhich shows an $O(N)$ pattern as opposed to the $O(N^2)$ pattern of the direct sum.\nFigures~\\ref{fig_sphere_err}(a-b) combined also justify the use of adaptive order. 
\nAdaptive orders 1 and 3 use about the same amount of CPU time as regular orders 1 and 3 but achieve significant improvements in accuracy. \nMeanwhile, Figure~\\ref{fig_sphere_err}(c) shows that the FMM uses additional memory in exchange for efficiency. \nHowever, the $O(N)$ pattern of memory usage \nis well preserved at different orders, with only an adjustment in the constant factor. \n\nIn summary, from Tables~\\ref{tb_discretization} and \\ref{tb_quadrature}, \nwe observed that the Galerkin discretization with piecewise constant basis functions \nachieves an $O(N^{-1\/2})$ convergence rate with low quadrature order (e.g. 1 or 2)\nand an $O(N^{-1})$ convergence rate with high quadrature order (e.g. 4). \nApplying the FMM algorithm for acceleration reduces the $O(N^2)$ CPU time to $O(N)$ \nwhile maintaining the desired accuracy and $O(N)$ memory usage. \nFor later tests, we apply the adaptive FMM with starting order $1$ \nand $\\eta=0.8$ as an optimal choice considering both efficiency and accuracy.\n\n\\subsection{The protein 1A63}\nIn this section, we use the FAGBI solver to compute the solvation energy for protein 1A63, \nwhich has 2065 atoms. 
In computation involving proteins,\nthe molecular surface is triangulated by MSMS \\cite{Sanner:1996},\nwith atom locations from the Protein Data Bank \\cite{PDB}\nand partial charges from the CHARMM22 force field \\cite{Brooks:1983}.\nMSMS has a user-specified density parameter $d$\ncontrolling the number of vertices per $\\text{\\AA}^2$ in the triangulation.\nMSMS constructs an irregular triangulation\nwhich becomes smoother as $d$ increases.\nThe tree structure level is adjusted according to the number of particles.\nThe GMRES tolerance is $\\tau=10^{-4}$.\nThese are representative parameter values chosen \nto ensure that the FMM approximation error and GMRES iteration error are smaller\nthan the direct sum discretization error,\nand to maintain efficient performance in CPU time and memory based on the previous sphere tests.\n\n\\begin{table}[htp]\n\\caption{(Protein 1A63) FAGBI results for the PB equation, showing electrostatic solvation energy $E_{sol}$, error, CPU time, and memory usage; columns show MSMS density ($d$~in $\\text{\\AA}^{-2}$), number of triangles $N$, $E_{sol}$ values computed by direct sum (ds) and Cartesian FMM (cf), discretization error $e^{ds}_{sol}$, Cartesian FMM approximation error $e^{cf}_{sol}$, and their convergence rates $r^{ds}_{sol}$ and $r^{cf}_{sol}$; adaptive Taylor expansion order $p=1$, separation ratio $\\eta = 0.8$.}\n{\n\\small\n\\begin{center}\n\\begin{tabular}{lrrrcrrcrrcrrcrr}\n\\hline\n$d$ & \\multicolumn{1}{l}{$N^a$} & \\multicolumn{2}{l}{$E_{sol}$ (kcal\/mol)} & & \\multicolumn{2}{l}{Error $(\\%)$} & & \\multicolumn{2}{l}{Rate} & & \\multicolumn{2}{l}{CPU (s)} & & \\multicolumn{2}{l}{Mem. 
(MB)} \n\\\\\n\\cline{3-4}\\cline{6-7}\\cline{9-10}\\cline{12-13}\\cline{15-16}\n\\rule{0pt}{13pt}\n & & $ds$ & $cf$ & & \\multicolumn{1}{r}{$e^{ds}_{sol}$} & \\multicolumn{1}{r}{$e^{cf}_{sol}$} & & \\multicolumn{1}{r}{$r^{ds}_{sol}$} & \\multicolumn{1}{r}{$r^{cf}_{sol}$} & & \\multicolumn{1}{r}{$ds$} & \\multicolumn{1}{r}{$cf$} & & \\multicolumn{1}{r}{$ds$} & \\multicolumn{1}{r}{$cf$}\n\\\\\n\\hline\n1 & 20,227&-2755.05&-2756.82&&16.12&16.20&& & && 632& 6&& 18& 30\\\\\n2 & 30,321&-2498.20&-2499.20&& 5.30& 5.34&&5.5&5.4&& 1135& 8&& 23& 40\\\\\n5 & 69,969&-2412.40&-2413.02&& 1.68& 1.71&&2.7&2.7&& 5912& 17&& 66&111\\\\\n10&132,133&-2383.09&-2382.50&& 0.45& 0.42&&4.1&4.4&& 36,530& 37&& 92&165\\\\\n20&264,927&-2375.21&-2376.52&& 0.11& 0.17&&3.9&2.6&&149,651& 69&&249&423\\\\\n40&536,781&-2371.52&-2372.77&& 0.04& 0.01&&2.9&7.5&&618,879&141&&359&654\\\\\n &${\\infty}^b$&-2372.48&-2372.48&&&&&&&&&\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n${}^a$ Number of elements in triangulation.\\\\\n${}^b$ This row shows the estimate of the exact energy $E^{ex}_{sol}$ obtained by parallel computation with a high-order quadrature method.\n}\n\\label{tb_1a63}\n\\end{table}%\n\nIn Table~\\ref{tb_1a63}, the first two columns give the MSMS density ($d$) and number of faces $N$ in the triangulation.\nThe next two columns give the electrostatic solvation energy $E_{sol}$ computed by\ndirect sum $(ds)$ and Cartesian FMM $(cf)$.\nWe use a parallel version of the direct sum\nto compute an estimate of the exact energy \nwith high-order quadrature methods.\nWe report the discretization errors $e^{ds}_{sol}$ and $e^{cf}_{sol}$\nin the fifth and sixth columns, \nwhich show a convergence rate faster than the $O(N^{-1})$ rate\nobserved for the geodesic grid triangulation of the Kirkwood\nsphere in Case 1. 
\nThe faster convergence seen here is due to\nthe non-uniform adaptive treatment of the MSMS triangulation \\cite{Geng:2013b}.\n\nA back-to-back comparison of the direct sum ($ds$) and Cartesian FMM ($cf$) results\nin error, rate, CPU time, and memory in Table~\\ref{tb_1a63} leads to the following conclusions. \n(1) The adoption of FMM only slightly modifies the error and its convergence rate, and not necessarily for the worse. \n(2) Cartesian FMM dramatically reduces the $O(N^2)$ direct sum CPU time to $O(N)$. \nFor example, the simulation with $d = 10\\text{\\AA}^{-2}$\nand $N=132,133$ took $36,530 \\text{s} \\approx 10 \\text{h}$ by direct sum\nand $37 \\text{s} \\approx 1\/2 \\text{min}$ by FMM. \n(3) The memory usage of\nboth the direct sum and the FMM is $O(N)$.\nFor the FMM, more memory is used for the moment and local coefficient storage,\nbut this only adds a pre-factor rather than increasing the growth rate.\n\n\\subsection{27 proteins}\n\\begin{table}[t!]\n{\\small\n\\caption{Convergence comparison using diagonal preconditioning (d) and block diagonal preconditioning (bd) on a set of 27 proteins; MSMS density $d=10$.}\n\\begin{center}\n\\begin{tabular}{rrr|rrr|rr|rrr}\n\\hline\nInd. & PDB & \\# of ele. 
& \\multicolumn{3}{c|}{$E_\\text{sol}$ (kcal\/mol)} &\\multicolumn{2}{c|}{\\# of it.} & \\multicolumn{3}{c}{CPU time (s)} \\\\\n&&& d & bd & \\text{diff.} (\\%) & d & bd & d & bd & ratio \\\\\\hline\n1&1ajj&40496 &-1141.17&-1141.15&0.00&22&14&12.5&9.7&1.28\\\\\n2&2erl&43214 & -953.43& -953.42&0.00&15&10&9.2&7.8&1.18\\\\\n3&1cbn&44367 & -305.94& -305.94&0.00&12&11&7.4&8.3&0.88\\\\\n4&1vii&47070 & -906.11& -906.11&0.00&16&14&10.6&11.5&0.92\\\\\n5&1fca&47461 &-1206.46&-1206.48&0.00&16&11&10.2&8.8&1.16\\\\\n6&1bbl&49071 & -991.21& -991.22&0.00&19&13&13.3&11.2&1.18\\\\\n7&2pde&50518 & -829.49& -829.46&0.00&{\\bf 75}&23&50.7&19.5&{\\bf 2.60}\\\\\n8&1sh1&51186 & -756.64& -756.63&0.00&{\\bf \\underline{100}}&21&70.7&18.2&{\\bf 3.89}\\\\\n9&1vjw&52536 &-1242.55&-1242.56&0.00&11&10&8.2&9.3&0.87\\\\\n10&1uxc&53602 &-1145.38&-1145.38&0.00&20&13&14.7&11.9&1.23\\\\\n11&1ptq&54256 & -877.83& -877.84&0.00&16&13&11.9&12.2&0.97\\\\\n12&1bor&54628 & -857.28& -857.27&0.00&14&13&10.9&12.5&0.87\\\\\n13&1fxd&54692 &-3318.18&-3318.14&0.00&10&10&7.8&9.9&0.79\\\\\n14&1r69&57646 &-1094.86&-1094.86&0.00&13&12&10.6&12.6&0.84\\\\\n15&1mbg&58473 &-1357.32&-1357.33&0.00&18&13&14.8&13.6&1.09\\\\\n16&1bpi&60600 &-1309.61&-1310.02&0.03&18&12&16.2&14.5&1.11\\\\\n17&1hpt&61164 & -816.47& -817.34&0.11&15&13&12.8&14.0&0.92\\\\\n18&451c&79202 &-1031.74&-1031.91&0.02&27&20&30.3&28.8&1.05\\\\\n19&1svr&88198 &-1718.97&-1718.97&0.00&15&12&21.4&21.3&1.01\\\\\n20&1frd&81792 &-2868.29&-2867.32&0.00&14&12&18.1&17.2&1.05\\\\\n21&1a2s&84527 &-1925.23&-1925.24&0.00&20&17&26.4&24.8&1.06\\\\\n22&1neq&89457 &-1740.50&-1740.49&0.00&19&15&26.7&22.8&1.17\\\\\n23&1a63&132133&-2382.50&-2382.50&0.00&21&16&41.3&36.8&1.12\\\\\n24&1a7m&147121&-2171.13&-2172.12&0.00&{\\bf 55}&21&111.2&51.4&{\\bf 2.16}\\\\\\hline\n25&2go0&111615&-1968.61&-1968.65&0.00&{\\bf 44}&24&67.6&43.0&1.57\\\\\n26&1uv0&128497&-2296.43&-2296.43&0.00&{\\bf 73}&25&130.7&52.6&{\\bf 2.48}\\\\\n27&4mth&123737&-2479.62&-2479.61&0.00&{\\bf 
36}&18&64.3&37.0&1.74\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\label{tb_proteinSets}\n}\n\\end{table}%\n\n\nWe finally provide test results on a set of 27 proteins \nto demonstrate the general applicability of the FAGBI solver to broader classes of macromolecules \nand the efficiency of the preconditioning scheme. \nTable~\\ref{tb_proteinSets} shows the convergence tests using diagonal preconditioning (d)\nand block diagonal preconditioning (bd) for a set of 27 proteins.\nAfter applying the block diagonal preconditioning scheme,\nthe cases with slow convergence under diagonal preconditioning \nare well resolved.\nIn this table, the first column is the protein index,\nfollowed by the PDB ID in the second column, \nand the number of elements generated by MSMS with density $d=10$ in the third column.\nColumns 4 and 5 are the solvation energies of the proteins \nobtained with both preconditioning schemes,\nand column 6 is the relative difference between the two methods, \nwhich shows no significant discrepancy.\nA significant reduction in the number of iterations using block diagonal preconditioning (bd)\nis shown in column 8 compared with the results in column 7 using diagonal preconditioning (d).\nOne can see that the worse the diagonal preconditioning result is, \nthe larger the improvement block diagonal preconditioning achieves.\nFor example, proteins 2pde, 1sh1, 1a7m, 2go0, 1uv0 and 4mth \nrequire 75, 100, 55, 44, 73, and 36 iterations with diagonal preconditioning, as highlighted in column 7, \nbut only 23, 21, 21, 24, 25, and 18 iterations with block diagonal preconditioning.\nThe CPU time comparison in columns 9 and 10, as well as their ratio in column 11,\nfurther confirms the results in columns 7 and 8, \nas the CPU time is related to the number of iterations.\nThe CPU time reduction for some proteins is more than a factor of 2, as highlighted in the last column.\nWe plot the results of columns 7, 8, 9 and 10 in Fig.~\\ref{fig_proteinSets},\nwhich shows the 
improvements in both the number of iterations\nand the CPU time when block diagonal preconditioning replaces diagonal preconditioning.\nIt shows that block diagonal preconditioning does not impair\nthe originally well-conditioned cases\nbut significantly improves the slowly converging cases, \nwhich suggests that we can uniformly use block diagonal preconditioning \nin place of the original diagonal preconditioning.\nFigures~\\ref{fig_proteinSets}(a) and~\\ref{fig_proteinSets}(b) show a similar pattern,\nas the CPU time and the number of iterations are highly correlated. \n\n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[width=3.3in]{figures\/fig_fmm27ite}~~~~~\n\t\t\\includegraphics[width=3.3in]{figures\/fig_fmm27cpu}\\\\\n\t\t\\hskip 0.3in (a) \\hskip 3.25in (b)\n\t\n\t\t\\caption{Convergence comparison using diagonal preconditioning and block diagonal preconditioning.\n\t\t\t(a) number of iterations; \n\t\t\t(b) CPU time (s). \n\t\t}\n\t\t\\label{fig_proteinSets}\n\t\\end{center}\n\\end{figure}\n\n\\section{Conclusion}\nIn this paper, we report recent work in developing \nan FMM-accelerated Galerkin boundary integral (FAGBI) method \nfor solving the Poisson-Boltzmann equation. \nThe solver combines advantages in accuracy, efficiency, and memory,\nas it applies a well-posed boundary integral formulation \nto circumvent many numerical difficulties \nand \nuses an $O(N)$ Cartesian FMM to accelerate the GMRES iterative solver. \nSpecial treatments such as\nadaptive FMM order,\nblock diagonal preconditioning, \nGalerkin discretization, \nand Duffy's transformation are combined to improve the performance,\nwhich is validated on the benchmark Kirkwood sphere and a series of test proteins. 
\nWith its attractive $O(N^{-1})$ convergence rate in accuracy, $O(N)$ CPU run time, and $O(N)$ memory usage, the FAGBI solver can contribute significantly to the broader computational biophysics\/biochemistry community as a powerful tool for studying the electrostatics of solvated biomolecules. \n\n\\section*{Acknowledgments}\nThe work of W.G. and J.C. was supported by NSF grants DMS-1418957 and\nDMS-1819193, the SMU new faculty startup fund, and the SMU center for\nscientific computing. The work of J.T. is in part funded by NSF grant\nDMS-1720431. \n\n\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzjgtw b/data_all_eng_slimpj/shuffled/split2/finalzzjgtw new file mode 100644 index 0000000000000000000000000000000000000000..3d42e71e8bfcd514375fceaf0e665b077a59f02d --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzjgtw @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nSubmodular functions arise in a wide range of applications: graph theory, optimization, economics, game theory, to name a few. \nA function $f: 2^V\\to \\mathbb{R}$ on a ground set $V$ is {\\em submodular} if\n$f(X)+f(Y)\\geq f(X\\cap Y)+f(X\\cup Y)$ for all sets $X,Y\\subseteq\nV$. Submodularity can also be interpreted as a decreasing marginals property.\n\nThere has been significant interest in submodular optimization in the machine learning and computer vision communities. The {\\em submodular function minimization} (SFM) problem arises\nin image segmentation and MAP inference tasks in Markov Random Fields. Landmark results in combinatorial optimization give polynomial-time exact algorithms for SFM. However, the high-degree polynomial dependence in the running time is prohibitive for large-scale problem instances. 
The main objective in this context is to develop fast and scalable SFM algorithms.\n\nInstead of minimizing arbitrary submodular functions, several recent papers aim to exploit special structural properties of submodular functions arising in practical applications. A popular model is {\\em decomposable submodular functions}: these can be written as sums of several ``simple'' submodular functions defined on small supports.\n\nSome definitions are in order. Let $f:2^V\\to\\mathbb{R}$ be a submodular function, and let $n:=|V|$. We can assume w.l.o.g. that $f(\\emptyset)=0$. We are interested in solving the {\\em submodular function minimization problem}\n\\begin{equation}\\label{prob:sfm}\n\\min_{S\\subseteq V} f(S).\\tag{SFM}\n\\end{equation}\nThe base polytope of a submodular function is defined as\n\\begin{align*}\n B(f) := \\{ x \\in \\mathbb{R}^V \\colon x(S) \\leq f(S) \\; \\forall S \\subseteq V, x(V) = f(V) \\}.\n\\end{align*}\nOne can optimize linear functions over $B(f)$ using the greedy algorithm.\nThe problem \\eqref{prob:sfm} can be reduced to finding the minimum norm point of the base polytope $B(f)$ \\cite{fujishige80}.\n\\begin{equation}\\label{prob:fuji}\n\\min\\left\\{ \\frac12 \\|y\\|_2^2\\colon y \\in B(f)\\right\\}.\\tag{Min-Norm}\n\\end{equation}\nThis reduction is the starting point of convex optimization approaches for \\eqref{prob:sfm}.\nWe refer the reader to Sections 44--45 in \\cite{Schrijver03} for concepts and results in submodular optimization, and to \\cite{Bach-monograph} on machine learning applications.\n\nWe assume that $f$ is given in the decomposition \n\\[f(S) = \\sum_{i = 1}^r f_i(S), \\]\nwhere each $f_i: 2^{V} \\rightarrow \\mathbb{R}$ is a submodular function. Such functions are called \\emph{decomposable} or\n\\emph{Sum-of-Submodular (SoS)} in the literature. 
In this paper, we will use the abbreviation DSFM.\n\nFor each $i\\in [r]$, the function $f_i$ has an effective support $C_i$ such that $f_i(S)=f_i(S\\cap C_i)$ for every $S\\subseteq V$. For each $i\\in [r]$, we assume that two oracles are provided: {\\em (i)} a value oracle that returns $f_i(S)$ for any set $S\\subseteq V$ in time $\\mathrm{EO}_i$; and {\\em (ii)} a quadratic minimization oracle ${\\cal O}_i(w)$.\nFor any input vector $w \\in \\mathbb{R}^n$, this oracle returns an optimal solution to \\eqref{prob:fuji} for the function $f_i+w$, or equivalently, an optimal solution to $\\min_{y \\in B(f_i)} \\|y + w\\|^2_2$. We let $\\Theta_i$ denote the running time of a single call to the oracle ${\\cal O}_i$, $\\Theta_{\\max}:=\\max_{i\\in[r]}\\Theta_i$ denote the maximum time of an oracle call, $\\Theta_{\\mathrm{avg}} := {1 \\over r} \\sum_{i \\in [r]} \\Theta_i$ denote the average time of an oracle call.\\footnote{For flow-type algorithms for DSFM, a slightly weaker oracle assumption suffices, returning a minimizer of $\\min_{S \\subseteq C_i} f_i(S) + w(S)$ for any given $w \\in \\mathbb{R}^{C_i}$. This oracle and the quadratic minimization oracle are reducible to each other: the former reduces to a single call to the latter, and one can implement the latter using $O(|C_i|)$ calls to the former (see e.g. \\cite{Bach-monograph}).} We let $F_{i,\\max} := \\max_{S \\subseteq V} |f_i(S)|$, $F_{\\max}:= \\max_{S \\subseteq V} |f(S)|$ denote the maximum function values.\n\nDecomposable SFM thus requires algorithms on two levels. The \\emph{level-0} algorithms\nare the subroutines used to evaluate the oracles ${\\cal O}_i$ for every $i\\in [r]$. The \\emph{level-1} algorithm minimizes the function $f$ using the level-0 algorithms as black boxes.\n\n\\subsection{Prior work}\nSFM has had a long history in combinatorial optimization since the early 1970s, following the influential work of Edmonds \\cite{Edmonds70}. 
The first polynomial-time algorithm was obtained via the ellipsoid method \\cite{Grotschel1981}; recent work presented substantial improvements using this approach \\cite{LeeSW15}.\nSubstantial work focused on designing strongly polynomial combinatorial algorithms \\cite{Schrijver00,iwata2001,fleischer03,Iwata03,Orlin09,IwataO09}. \nStill, designing practical algorithms for SFM that can be applied to large-scale problem instances remains an open problem.\n\nLet us now turn to decomposable SFM.\nPrevious work mainly focused on level-1 algorithms. These can be classified as \\emph{discrete} and \\emph{continuous} optimization methods. The discrete approach builds on techniques of classical discrete algorithms for network flows and for submodular flows. Kolmogorov \\cite{kolmogorov12} showed that the problem can be reduced to submodular flow maximization, and also presented a more efficient augmenting path algorithm. Subsequent discrete approaches were given in \\cite{Arora12,Fix13,Fix14}.\nContinuous approaches start with convex programming formulation \\eqref{prob:fuji}. Gradient methods were \napplied for the decomposable setting in \\cite{StobbeK10,NishiharaJJ14,EneN15}. \n\nLess attention has been given to the level-0 algorithms. Some papers mainly focus on theoretical guarantees on the running time of level-1 algorithms, and treat the level-0 subroutines as black-boxes (e.g. \\cite{kolmogorov12,NishiharaJJ14,EneN15}). In other papers (e.g. \\cite{StobbeK10,JegelkaBS13}), the model is restricted to functions $f_i$ of a simple specific type that are easy to minimize. An alternative assumption is that all $C_i$'s are small, of size at most $k$; and thus these oracles can be evaluated by exhaustive search, in $2^k$ value oracle calls (e.g. \\cite{Arora12,Fix13}). \n\nShanu {\\em et al.}\\xspace \\cite{Shanu16} use a block coordinate descent method for level-1, and allow arbitrary functions $f_i$. 
The oracles are evaluated via the Fujishige-Wolfe minimum norm point algorithm \\cite{Fujishige11, Wolfe76} for level-0. \n\n\\subsection{Our contributions}\nOur paper establishes connections between discrete and continuous methods for decomposable SFM, as well as provides a systematic experimental comparison of these approaches.\nOur main theoretical contribution improves the worst-case complexity bound of the most recent continuous optimization methods \\cite{NishiharaJJ14,EneN15} by a factor of $r$, the number of functions in the decomposition. This is achieved by improving the bounds on the relevant condition numbers. Our proof exploits ideas from the discrete optimization approach. This provides not only better, but also considerably simpler arguments than the algebraic proof in \\cite{NishiharaJJ14}. \n\nThe guiding principle of our experimental work is the clean conceptual distinction between the level-0 and level-1 algorithms. Previous experimental studies considered the level-0 and level-1 algorithms as a single ``package''. For example, Shanu {\\em et al.}\\xspace \\cite{Shanu16} compare the performance of their \\emph{SoS Min-Norm algorithm} to the continuous approach of Jegelka {\\em et al.}\\xspace \\cite{JegelkaBS13} and the combinatorial approach of Arora {\\em et al.}\\xspace \\cite{Arora12}. However, these implementations are difficult to compare since they use three different level-0 algorithms: Fujishige-Wolfe in SoS Min-Norm, a general QP solver for the algorithm of \n\\cite{JegelkaBS13}, and exhaustive search for \\cite{Arora12}. For potentials of large support, Fujishige-Wolfe outperforms these other level-0 subroutines, hence the algorithms in \\cite{JegelkaBS13,Arora12} could have compared more favorably using the same Fujishige-Wolfe subroutine.\n\nIn our experimental setup, we compare level-1 algorithms by using the same level-0 subroutines. 
We compare the state-of-the-art continuous and discrete algorithms: RCDM and ACDM from \\cite{EneN15}, and Submodular IBFS from \\cite{Fix13}. We consider multiple options for the level-0 subroutines. For certain potential types, we use tailored subroutines exploiting the specific form of the problem. We also consider a variant of the Fujishige-Wolfe algorithm as a subroutine applicable to arbitrary potentials. \nOur experimental results reveal the following tradeoff. Discrete algorithms on level-1 require more calls to the level-0 oracle, but less overhead computation. Hence using algorithms such as IBFS on level-1 can be significantly faster than gradient descent as long as the potentials have fairly small supports. However, as the size of the potentials grows, or if we need to work with a generic level-0 algorithm, gradient methods become the better choice. Gradient methods can perform better for larger potentials also due to weaker requirements on the level-0 subroutines: approximate level-0 subroutines suffice for them, whereas discrete algorithms require exact optimal solutions on level-0. \n\n{\\bf Paper outline.} The rest of the paper is structured as follows.\nSection~\\ref{sec:flow} describes the level-1 algorithmic framework for DSFM that is based on network flows, and outlines the IBFS algorithm. Section~\\ref{sec:conv-opt} describes the level-1 algorithmic framework for DSFM that is based on convex optimization, and outlines the gradient descent algorithms. Section~\\ref{sec:kappa} gives improved convergence guarantees for the gradient descent algorithms outlined in Section~\\ref{sec:conv-opt}. Section~\\ref{sec:level-0} discusses the different types of level-0 algorithms and how they can be used together with the level-1 frameworks. 
Section~\\ref{sec:experiments} presents our experimental results.\n\n\\section{Level-1 algorithms based on network flow}\n\\label{sec:flow}\n\nIn this section, we outline a level-1 algorithmic framework for DSFM that is based on a combinatorial framework first studied in \\cite{FujishigeZhang92}.\\footnote{The framework was introduced in a slightly different context, for the submodular intersection problem. The dual of this problem is minimizing a submodular function of the form $f=f_1+f_2$, with access to oracles minimizing $f_1$ and $f_2$.} \n\nFor a decomposable function $f$, every $x\\in B(f)$ can be written as $x = \\sum_{i = 1}^r x_i$, where\n$\\text{supp}(x_i)\\subseteq C_i$ and $x_i\\in B(f_i)$ (see e.g. Theorem 44.6 in \\cite{Schrijver03}). A natural algorithmic approach is to maintain an $x\\in B(f)$ in such a representation, and iteratively update it using the combinatorial framework described below. \nDSFM can be casted as a maximum network flow instance in a network that is suitably defined based on the current point $x$. This can be viewed as an analogue of the residual graph in the maxflow\/mincut setting, and it is precisely the residual graph if the DSFM instance was a mincut instance.\n\n{\\bf The auxiliary graph.} For an $x\\in B(f)$ of the form $x=\\sum_{i = 1}^r x_i$, we construct the following directed auxiliary graph $G = (V, E)$, with $E = \\bigcup_{i = 1}^r E_i$ and capacities $c:E\\to {\\mathbb{R}}_+$. The arc sets $E_i$ are complete directed graphs (cliques) on $C_i$, and for an arc $(u,v)\\in E_i$, we define $c(u,v):=\\min\\{ f_i(S)-x_i(S)\\colon S\\subseteq C_i, u\\in S, v\\notin S\\}$. This is the maximum value $\\varepsilon$ such that $x'_i\\in B(f_i)$, where $x'_i(u)=x_i(u)+\\varepsilon$, $x'_i(v)=x_i(v)-\\varepsilon$, $x'_i(z)=x_i(z)$ for $z\\notin\\{u,v\\}$. \n\nLet \n $N := \\{ v \\in V \\colon x(v) < 0 \\}$ and $P := \\{ v \\in V \\colon x(v) > 0\\}$. 
\nThe algorithm aims to improve the current $x$ by updating along shortest directed paths from $N$ to $P$ with positive capacity; there are several ways to update the solution, and we discuss specific approaches later in the section. If there exists no such directed path, then we let $S$ denote the set reachable from $N$ on directed paths\nwith positive capacity; thus, $S\\cap P=\\emptyset$. It is easy to show that $S$ is a minimizer of the function $f$. \n\nUpdating along a shortest path ${\\cal Q}$ from $N$ to $P$ amounts to the following. Let $\\varepsilon$ denote the minimum capacity of an arc on $\\cal Q$. If $(u,v)\\in {\\cal Q}\\cap E_i$, then we increase $x_i(u)$ by $\\varepsilon$ and decrease $x_i(v)$ by $\\varepsilon$. The crucial technical claim \\cite{FujishigeZhang92} is the following. Let $d(u)$ denote the shortest path distance of positive capacity arcs from $u$ to the set $P$. Then, an update along a shortest directed path from $N$ to $P$ results in a feasible $x\\in B(f)$, and further, all distance labels $d(u)$ are non-decreasing.\n\n{\\bf Level-1 algorithms based on the network flow approach.} Using this auxiliary graph, and updating on shortest augmenting paths, one can generalize several maximum flow algorithms to level-1 algorithms for DSFM.\nThese algorithms include: the Edmonds-Karp-Dinitz maximum flow algorithm, the preflow-push algorithm \\cite{GoldbergTarjan88}, the incremental breadth first search algorithm (IBFS) \\cite{goldberg11}, and the excesses incremental breadth first search algorithm \\cite{goldberg15}. Our experiments will use an implementation of IBFS, following~\\cite{Fix13}. \n\n{\\bf Submodular incremental breadth first search (IBFS).} Fix {\\em et al.}\\xspace\n\\cite{Fix13} adapt the IBFS algorithm to the submodular\nframework described above, using the aforementioned\nclaims by Fujishige \\& Zhang \\cite{FujishigeZhang92}. IBFS is an augmenting path algorithm\nfor the maximum flow problem. 
It identifies a shortest path from\nthe source set $N$ to the sink set $P$ by \ngrowing shortest path trees simultaneously forwards from $N$ and\nbackwards from $P$. \n\nThe submodular IBFS algorithm provides us with a level-1 algorithm for DSFM. Each step of the algorithm involves determining the capacity of an arc in the auxiliary graph; as we explain in Section~\\ref{sec:level-0}, each of these capacities can be computed using a single call to a level-0 subroutine ${\\cal O}_i$.\n\nBy combining the level-1 IBFS algorithm with appropriate level-0 subroutines, we obtain an algorithm for DSFM whose running time can be upper bounded as follows. On a directed graph with $n$ nodes and $m$ arcs, IBFS runs in time $O(n^2m)$. In the DSFM setting, we have $m=O(\\sum_{i\\in [r]} |C_i|^2 )$. Every step involves determining an auxiliary capacity, which can be implemented using a single call to a level-0 subroutine ${\\cal O}_i$ (see Section~\\ref{sec:level-0}); the maximum time of such an oracle call is $\\Theta_{\\max}$. Hence, the running time bound for submodular IBFS can be given as $O(n^2\\Theta_{\\max} \\sum_{i\\in [r]}|C_i|^2)$. If all $C_i$'s are small, $O(1)$, then this gives $O(n^2r\\Theta_{\\max})$.\n\n\\section{Level-1 algorithms based on convex optimization}\n\\label{sec:conv-opt}\nIn this section, we outline the level-1 algorithms for DSFM that are based on gradient descent. \nRecall the convex quadratic program (\\ref{prob:fuji}) from the Introduction. This program has a unique optimal solution $s^*$; the set $S=\\{v \\in V \\colon s^*(v) < 0\\}$ is the unique smallest minimizer of \\eqref{prob:sfm}. We will refer to this optimal solution $s^*$ throughout the section. \n\nIn the DSFM setting, one can write (\\ref{prob:fuji}) in multiple equivalent forms \\cite{JegelkaBS13}. 
For the first formulation, we let $\\mathcal{P} := \\prod_{i = 1}^r B(f_i)\\subseteq {\\mathbb{R}}^{rn},$ and let $A\\in {\\mathbb{R}}^{n\\times (rn)}$ denote the following matrix:\n\t$$A := \\underbrace{[I_n I_n \\dots I_n]}_\\text{$r$ times}.$$\nNote that, for every $y \\in \\mathcal{P}$, $Ay = \\sum_{i = 1}^r y_i$, where $y_i$ is the $i$-th block of $y$, and thus $Ay \\in B(f)$. The problem \\eqref{prob:fuji} can be reformulated for DSFM as follows.\n\\begin{equation}\n\\label{Prox-DSFM}\n\t\\min\\left\\{ {1 \\over 2} \\left\\|Ay \\right\\|^2_2 \\colon \\tag{Prox-DSFM} y\\in \\mathcal{P}\\right\\}.\n\\end{equation}\nThe second formulation is the following. Let us define the subspace $\\mathcal{A} := \\{ a \\in \\mathbb{R}^{nr} \\colon Aa= 0\\}$, and minimize its distance from $\\mathcal{P}$:\n\\begin{equation}\n\\label{Best-Approx}\n\t\\min\\left\\{ \\|a - y\\|^2_2 \\tag{Best-Approx} \\colon a \\in \\mathcal{A}, y \\in \\mathcal{P}\\right\\}.\n\\end{equation}\nThe set of optimal solutions for both formulations (\\ref{Prox-DSFM}) and (\\ref{Best-Approx}) is the set \n$\\mathcal{E} := \\{y \\in \\mathcal{P} \\colon Ay = s^*\\}$,\n where $s^*$ is the optimum of (\\ref{prob:fuji}). We note that, even though the set of solutions to (\\ref{Best-Approx}) consists of pairs of points $(a, y) \\in \\mathcal{A} \\times \\mathcal{P}$, the optimal solutions are uniquely determined by $y \\in \\mathcal{P}$, since the corresponding $a$ is the projection of $y$ to $\\mathcal{A}$.\n\n\\begin{lemma}[{\\cite{JegelkaBS13}, Lemma 2}]\n\\label{lem:decomp-opt-set}\n The set $\\mathcal{E}$ is non-empty and it coincides with the set of optimal solutions of (\\ref{Prox-DSFM}) and (\\ref{Best-Approx}).\n\\end{lemma}\n\n{\\bf Gradient methods.} The gradient descent algorithms of \\cite{NishiharaJJ14,EneN15} provide level-1 algorithms for DSFM. In the following, we provide a brief overview of these algorithms and we refer the reader to the respective papers for more details. 
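The common primitive in these methods is to improve one block $y_i$ of $y \in \mathcal{P}$ using the level-0 oracle for $B(f_i)$. Below is a minimal sketch on a made-up three-node instance with modular unary terms and pairwise cut terms (all values are illustrative, and this plain exact block coordinate descent is a simplification, not the RCDM/ACDM implementations): for a pairwise cut potential, $B(f_{uv}) = \{(t,-t) : |t| \le w_{uv}\}$ is a segment, so the block update is a closed-form clamp.

```python
import random
random.seed(0)

# Toy decomposable instance (illustrative values, not from the paper):
# f(S) = sum of modular terms c_v*[v in S] plus cut terms w_uv*[S splits {u,v}].
n = 3
unary = {0: -2.0, 1: 1.0, 2: 0.5}
edges = {(0, 1): 1.0, (1, 2): 1.0}

# One block y_i per function. A modular block's base polytope is the single
# point (c_v); an edge block is the segment {(t, -t): |t| <= w}, tracked via t.
t_val = {e: 0.0 for e in edges}

def x_of():
    # x = A y = sum of all blocks
    x = [unary.get(v, 0.0) for v in range(n)]
    for (u, v), t in t_val.items():
        x[u] += t
        x[v] -= t
    return x

for _ in range(200):
    (u, v) = e = random.choice(list(edges))
    x = x_of()
    # Exact minimization of ||x||^2 over this block: the unconstrained optimum
    # shifts t by (x_v - x_u)/2, then we project (clamp) onto [-w, w].
    w = edges[e]
    t_val[e] = max(-w, min(w, t_val[e] + (x[v] - x[u]) / 2.0))

x = x_of()                       # converges to the min-norm point s*
S = {v for v in range(n) if x[v] < 0}
assert S == {0}                  # the minimizer of f in this toy instance
```

After convergence, thresholding $x = Ay$ at zero recovers a minimizer of $f$, exactly as the reduction of (SFM) to (Min-Norm) prescribes.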
\n\n{\\bf The alternating projections algorithm.} Nishihara {\\em et al.}\\xspace minimize (\\ref{Best-Approx}) using \\emph{alternating projections} \\cite{NishiharaJJ14}. The algorithm starts with a point $a^{(0)} \\in \\mathcal{A}$ and it iteratively constructs a sequence $\\set{(a^{(k)}, x^{(k)})}_{k \\geq 0}$ by projecting onto $\\mathcal{A}$ and $\\mathcal{P}$: $x^{(k)} = {\\operatorname{argmin}}_{x \\in \\mathcal{P}}\\|a^{(k)} - x\\|_2$, $a^{(k + 1)} = {\\operatorname{argmin}}_{a \\in \\mathcal{A}}\\|a - x^{(k)}\\|_2$.\n\n{\\bf Random coordinate descent algorithms.} Ene and Nguyen minimize (\\ref{Prox-DSFM}) using \\emph{random coordinate descent} \\cite{EneN15}. The RCDM algorithm adapts the random coordinate descent algorithm of Nesterov \\cite{Nesterov10} to (\\ref{Prox-DSFM}). In each iteration, the algorithm samples a block $i \\in [r]$ uniformly at random and it updates $x_i$ via a standard gradient descent step for smooth functions. ACDM, the accelerated version of the algorithm, presents a further enhancement using techniques from \\cite{FR13}.\n\n{\\bf Rate of convergence.} The algorithms mentioned above enjoy a \\emph{linear convergence rate} despite the fact that the objective functions of (\\ref{Best-Approx}) and (\\ref{Prox-DSFM}) are not strongly convex. Instead, these works show that there are certain parameters that one can associate with the objective functions such that the convergence is at the rate $(1 - \\alpha)^k$, where $\\alpha \\in (0, 1)$ is a quantity that depends on the appropriate parameter. 
Let us now precisely define these parameters and state the convergence guarantees as a function of these parameters.\n\n Let $\\mathcal{A}'$ be the affine subspace \n$\\mathcal{A}':= \\{ a \\in \\mathbb{R}^{nr} \\colon Aa = s^*\\}.$ \nNote that $\\mathcal{E} = \\mathcal{P} \\cap \\mathcal{A}'$.\nFor $y\\in {\\mathbb{R}}^{nr}$ and a closed set $K\\subseteq {\\mathbb{R}}^{nr}$, we let $d(y,K)=\\min\\set{\\|y-z\\|_2 \\colon z\\in K}$ denote the distance between $y$ and $K$.\n The relevant parameter for the Alternating Projections algorithm is defined as follows.\n\\begin{definition}[\\cite{NishiharaJJ14}]\nFor every $y\\in (\\mathcal{P} \\cup \\mathcal{A}') \\setminus {\\cal E}$, let\n\t\\begin{align*}\n\t\t\\kappa(y) &\\coloneqq \\frac{d(y, {\\cal E})}{\\max\\set{d(y, \\mathcal{P}), d(y, \\mathcal{A}')}},\\quad \\mbox{ and }\\\\\n\t\t\\kappa_* &\\coloneqq \\sup\\left\\{ \\kappa(y) \\colon y \\in (\\mathcal{P} \\cup \\mathcal{A}') \\setminus {\\cal E} \\right\\}.\n\t\\end{align*}\n\\end{definition}\n\nThe relevant parameter for the random coordinate descent algorithms is the following.\n\n\\begin{definition}[\\cite{EneN15}]\n\tFor every $y\\in\\mathcal{P}$, let $y^* \\coloneqq {\\operatorname{argmin}}_{p} \\{\\|p-y\\|_2 \\colon Ap=s^*\\}$ be the optimal solution to (\\ref{Prox-DSFM}) that is closest to $y$. 
We say that the objective function ${1 \\over 2} \\|Ay\\|^2_2$ of (\\ref{Prox-DSFM}) is \\emph{restricted $\\ell$-strongly convex} if, for all $y \\in \\mathcal{P}$, we have\n\t\\[\\|A(y - y^*)\\|^2_2 \\geq \\ell \\|y - y^*\\|^2_2.\\]\n\tWe then let\n\t\\[\\ell_* \\coloneqq \\sup\\left\\{\\ell \\colon {1 \\over 2} \\|Ay\\|^2_2 \\text{ is restricted $\\ell$-strongly convex}\\right\\}.\\]\n\\end{definition}\n\nThe running time dependence of the algorithms on these parameters is given in the following theorems.\n\\begin{theorem}[\\cite{NishiharaJJ14}]\n\tLet $(a^{(0)}, x^{(0)} = {\\operatorname{argmin}}_{x \\in \\mathcal{P}}\\|a^{(0)} - x\\|_2)$ be the initial solution and let $(a^*, x^*)$ be an optimal solution to (\\ref{Best-Approx}). The alternating projection algorithm produces in\n\t\t\\[k = \\Theta\\left(\\kappa_*^2 \\ln\\left( {\\|x^{(0)} - x^*\\|_2 \\over \\epsilon} \\right) \\right) \\]\n\titerations a pair of points $a^{(k)} \\in \\mathcal{A}$ and $x^{(k)} \\in \\mathcal{P}$ that is $\\epsilon$-optimal, i.e.,\n\t\t\\[\\|a^{(k)} - x^{(k)}\\|_2^2 \\leq \\|a^* - x^*\\|_2^2 + \\epsilon. \\]\n\\end{theorem}\n\n\\begin{theorem}[\\cite{EneN15}]\n\tLet $x^{(0)} \\in \\mathcal{P}$ be the initial solution and let $x^*$ be an optimal solution to (\\ref{Prox-DSFM}) that minimizes $\\|x^{(0)} - x^*\\|_2$. 
The random coordinate descent algorithm produces in\n\t\\[ k = \\Theta\\left( {r \\over \\ell_*} \\ln\\left( {\\|x^{(0)} - x^* \\|_2 \\over \\epsilon} \\right)\\right)\\]\n\titerations a solution $x^{(k)}$ that is $\\epsilon$-optimal in expectation, i.e., $\\mathbb{E}\\left[{1 \\over 2} \\|A x^{(k)} \\|^2_2 \\right] \\leq {1 \\over 2} \\|Ax^*\\|^2_2 + \\epsilon$.\n\t\n\tThe accelerated coordinate descent algorithm produces in \n\t\\[k = \\Theta \\left (r \\sqrt{{1 \\over \\ell_*}} \\ln\\left( { \\|x^{(0)} - x^* \\|_2 \\over \\epsilon} \\right) \\right) \\]\n\titerations (specifically, $\\Theta\\left( \\ln\\left( { \\|x^{(0)} - x^*\\|_2 \\over \\epsilon} \\right) \\right)$ epochs with $\\Theta\\left(r \\sqrt{{1 \\over \\ell_*}}\\right)$ iterations in each epoch) a solution $x^{(k)}$ that is $\\epsilon$-optimal in expectation, i.e., $\\mathbb{E}\\left[{1 \\over 2} \\|A x^{(k)} \\|^2_2 \\right] \\leq {1 \\over 2} \\|Ax^*\\|^2_2 + \\epsilon$.\n\\end{theorem}\n\nNishihara {\\em et al.}\\xspace show that $\\kappa_* \\leq nr$, and a family of instances (in fact, minimum cut instances) is given for which $\\kappa_* \\geq \\Omega(n \\sqrt{r})$. Ene and Nguyen show that $\\ell_* \\geq {r \/\\kappa^2_*}$. In Theorem~\\ref{thm:kappa-bound}, we close the remaining gap and show that $\\kappa_* = \\Theta(n \\sqrt{r})$ and $\\ell_* = \\Theta(1 \/ n^2)$, and thus we obtain tight analyses for the running times of the above mentioned algorithms.\n\nBy combining the level-1 gradient descent algorithms with appropriate level-0 subroutines, we obtain algorithms for DSFM whose running times can be upper bounded as follows. Using our improved convergence guarantees, it follows that RCDM obtains in time $O\\left(n^2 r\\Theta_{\\mathrm{avg}}\\ln\\left( { \\|x^{(0)} - x^* \\|_2 \\over \\epsilon}\\right)\\right)$ a solution that is $\\varepsilon$-approximate in expectation. 
For ACDM, the improved time bound is $O\\left(nr \\Theta_{\\mathrm{avg}} \\ln\\left( { \\|x^{(0)} - x^* \\|_2 \\over \\epsilon} \\right) \\right)$. We can upper bound the diameter of the base polytope by $O(\\sqrt{n} F_{\\max})$ \\cite{Jegelka11}. For integer-valued functions, an $\\varepsilon$-approximate solution can be converted to an exact optimum if $\\varepsilon=O(1\/n)$ \\cite{Bach-monograph}.\n\n\\section{Tight convergence bounds for the continuous algorithms}\n\\label{sec:kappa}\n\nIn this section, we show that the combinatorial approach introduced in Section~\\ref{sec:flow} can be applied to obtain better bounds on the parameters $\\kappa_*$ and $\\ell_*$ defined in Section~\\ref{sec:conv-opt}. Besides giving a stronger bound, our proof is considerably simpler than the algebraic one using Cheeger's inequality in \\cite{NishiharaJJ14}.\nThe key is the following lemma.\n\\begin{lemma}\n\\label{lem:decompose}\n\tLet $y \\in \\mathcal{P}$ and $s^* \\in B(f)$. Then there exists a point $x \\in \\mathcal{P}$ such that $Ax = s^*$ and $\\|x - y\\|_2 \\leq \\frac{\\sqrt{n}}{2} \\|Ay - s^*\\|_1$.\n\\end{lemma}\nBefore proving this lemma, we show how it can be used to derive the bounds.\n\\begin{theorem}\\label{thm:kappa-bound}\n\tWe have $\\kappa_* \\leq n \\sqrt{r}\/2 + 1$ and $\\ell_* \\geq {4 \/ n^2}$.\n\\end{theorem}\n\\begin{proof}\n\tWe start with the bound on $\\kappa_*$, for which we need to upper bound $\\kappa(y)$ for any $y\\in (\\mathcal{P}\\cup\\mathcal{A}')\\setminus \\mathcal{E}$. We distinguish between two cases: $y \\in \\mathcal{P} \\setminus\\mathcal{E}$ and $y \\in \\mathcal{A}' \\setminus\\mathcal{E}$.\n\n\t\\noindent{\\bf Case I: $y \\in \\mathcal{P}\\setminus\\mathcal{E}$.} The denominator in the definition of $\\kappa(y)$ is equal to $d(y, \\mathcal{A}') = {\\|Ay - s^*\\|_2}\/{\\sqrt{r}}$. 
This follows since the closest point $a=(a_1,\\ldots,a_r)$ to $y$ in $\\mathcal{A}'$ is given by setting $a_i=y_i+(s^*-Ay)\/r$ for each $i\\in [r]$. \nLemma~\\ref{lem:decompose} gives an $x \\in \\mathcal{P}$ such that $Ax = s^*$ and $\\|x - y\\|_2 \\leq \\frac{\\sqrt{n}}{2} \\|Ay - s^*\\|_1\\le \\frac{n}2 \\|Ay - s^*\\|_2$. Since $Ax = s^*$, we have $x \\in {\\cal E}$ and thus the numerator of $\\kappa(y)$ is at most $\\|x - y\\|_2$. Therefore $\\kappa(y) \\leq {\\|x - y\\|_2 \/(\\|Ay - s^*\\|_2\/\\sqrt{r})} \\leq n\\sqrt{r}\/2$.\n\n\t\\noindent{\\bf Case II: $y \\in \\mathcal{A}'\\setminus\\mathcal{E}$.}\nThis means that $Ay = s^*$. The denominator of $\\kappa(y)$ is equal to $d(y, \\mathcal{P})$. For each $i \\in [r]$, let $q_i \\in B(f_i)$ be the point that minimizes $\\|y_i - q_i\\|_2$. Let $q = (q_1, \\dots, q_r)\\in {\\cal P}$. Then $d(y, \\mathcal{P}) = \\|y - q\\|_2$. Lemma~\\ref{lem:decompose} with $q$ in place of $y$ gives a point $x\\in {\\cal E}$ such that $\\|q - x\\|_2 \\leq \\frac{\\sqrt{n}}2 \\|Aq - s^*\\|_1$. We have\n\t $\\|Aq - s^*\\|_1 = \\|Aq - Ay\\|_1 \\leq \\sum_{i = 1}^r \\|q_i - y_i\\|_1 = \\|q - y\\|_1 \\leq {\\sqrt{nr}} \\|q - y\\|_2.$\n\tThus $\\|q - x\\|_2 \\leq \\frac{{n \\sqrt{r}} }2\\|q - y\\|_2$. \n\tSince $x \\in {\\cal E}$, we have \n\t$d(y, {\\cal E}) \\leq \\|x - y\\|_2 \\leq \\|x - q\\|_2 + \\|q - y\\|_2 \\leq \\left(1 + \\frac{n \\sqrt{r}}2\\right) \\|q - y\\|_2 = \\left(1 + \\frac{n \\sqrt{r}}2\\right) d(y, \\mathcal{P}).$\n\tTherefore $\\kappa(y) \\leq 1 + \\frac{n \\sqrt{r}}2$, as desired.\n\nLet us now prove the bound on $\\ell_*$. Let $y \\in \\mathcal{P}$ and let $y^* \\coloneqq {\\operatorname{argmin}}_{p} \\{\\|p-y\\|_2\\colon Ap=s^*\\}$. 
We need to verify that $\\|A(y - y^*)\\|^2_2 \\geq {4 \\over n^2} \\|y - y^*\\|^2_2$.\nAgain, we apply Lemma~\\ref{lem:decompose} to obtain a point $x\\in \\mathcal{P}$ such that $Ax = s^*$ and \n$\\|x - y\\|_2^2 \\leq \\frac{n}4\\|Ax-Ay\\|_1^2 \\le \\frac{n^2}4 \\|Ax - Ay\\|_2^2$.\nSince $Ax= s^*$, the definition of $y^*$ gives $\\|y - y^*\\|_2^2 \\leq \\|x - y \\|_2^2$. Using that $Ax = Ay^* = s^*$, we have $\\|Ax - Ay\\|_2 =\\|Ay-Ay^*\\|_2$. Combining these inequalities yields the required $\\|y - y^*\\|_2^2 \\leq \\frac{n^2}4 \\|A(y - y^*)\\|_2^2$.\n\\end{proof}\n\n\\begin{proofof}{Lemma~\\ref{lem:decompose}}\nWe give an algorithm that transforms $y$ to a vector $x\\in {\\cal P}$ as in the statement through a sequence of path augmentations in the auxiliary graph defined in Section~\\ref{sec:flow}.\nWe initialize $x=y$ and maintain $x\\in {\\cal P}$ (and thus $Ax\\in B(f)$) throughout. We now define the set of source and sink nodes as \n$N := \\{ v \\in V \\colon (Ax)(v) < s^*(v) \\}$ and $P := \\{ v \\in V \\colon (Ax)(v) > s^*(v)\\}$. Once $N=P=\\emptyset$, we have $Ax=s^*$ and terminate. Note that since $Ax,s^*\\in B(f)$, we have $\\sum_v (Ax)(v)=\\sum_v s^*(v)=f(V)$, and therefore $N=\\emptyset$ is equivalent to $P=\\emptyset$.\nThe blocks of $x$ are denoted as $x=(x_1,x_2,\\ldots,x_r)$, with $x_i\\in B(f_i)$.\n\\begin{claim} If $N\\neq\\emptyset$, then there exists a directed path of positive capacity in the auxiliary graph from $N$ to $P$.\n\\end{claim}\n\\begin{proof}\nWe say that a set $T$ is $i$-tight if $x_i(T)=f_i(T)$. It is a simple consequence of submodularity that the intersection and union of two $i$-tight sets are also $i$-tight sets.\nFor every $i\\in [r]$ and every $u\\in V$, we define $T_i(u)$ as the unique minimal $i$-tight set containing $u$. It is easy to see that for an arc $(u,v)\\in E_i$, \n$c(u,v)>0$ if and only if $v\\in T_i(u)$. 
We note that if $u\\notin C_i$, then $x_i(u)=f_i(\\{u\\})=0$ and thus $T_i(u)=\\{u\\}$.\n\nLet $S$ be the set of vertices reachable from $N$ on a directed path of positive capacity in the auxiliary graph. For a contradiction, assume $S\\cap P=\\emptyset$. \nBy the definition of $S$, we must have $T_i(u)\\subseteq S$ for every $u\\in S$ and every $i\\in[r]$. Since the union of $i$-tight sets is also $i$-tight, we see that $S$ is $i$-tight for every $i\\in[r]$, and consequently, $(Ax)(S)=f(S)$. On the other hand, since $N \\subseteq S$, $S \\cap P = \\emptyset$, and $N \\neq \\emptyset$, we have $(Ax)(S) < s^*(S)$. Since $s^* \\in B(f)$, we have $(Ax)(S) < s^*(S) \\leq f(S)$, which is a contradiction. We conclude that $S \\cap P \\neq \\emptyset$.\n\\end{proof}\n\nIn every step of the algorithm, we take a shortest directed path ${\\cal Q}$ of positive capacity from $N$ to $P$, and update $x$ along this path. That is, \nif $(u,v)\\in {\\cal Q}\\cap E_i$, then we increase $x_i(u)$ by\n$\\varepsilon$ and decrease $x_i(v)$ by $\\varepsilon$, where\n$\\varepsilon$ is the minimum capacity of an arc on\n$\\cal Q$. Note that this is the same as running the Edmonds-Karp-Dinitz algorithm in the submodular auxiliary graph. Using the analysis in \\cite{FujishigeZhang92}, one can show that this change maintains $x\\in {\\cal P}$, and that the algorithm terminates in finite (in fact, strongly polynomial) time.\n\nIt remains to bound $\\|x-y\\|_2$. At every path update, the change in $\\ell_{\\infty}$-norm of $x$ is at most $\\varepsilon$, and the change in $\\ell_1$-norm is at most $n\\varepsilon$, since the length of the path is $\\le n$. At the same time, $\\sum_{v\\in N} (s^*(v)-(Ax)(v))$ decreases by $\\varepsilon$.\nThus, $\\|x-y\\|_\\infty\\le \\|Ay-s^*\\|_1 \/2$ and $\\|x-y\\|_1\\le n\\|Ay-s^*\\|_1 \/2$. 
Using the inequality $\\|p\\|_2\\le \\sqrt{\\|p\\|_1\\|p\\|_{\\infty}}$, we obtain $\\|x-y\\|_2\\le \\frac{\\sqrt{n}}2 \\|Ay-s^*\\|_1$, completing the proof.\n\\end{proofof}\n\n\\section{The level-0 algorithms}\n\\label{sec:level-0}\n\nIn this section, we briefly discuss the level-0 algorithms and the interface between the level-1 and level-0 algorithms.\n\n{\\bf Two-level frameworks via quadratic minimization oracles.}\nRecall from the Introduction the assumption on the subroutines ${\\cal O}_i(w)$ that find the minimum norm point in $B(f_i+w)$ for the input vector $w\\in{\\mathbb{R}}^n$. \nThe continuous methods in Section~\\ref{sec:conv-opt} directly use the subroutines ${\\cal O}_i(w)$ for the alternating projection or coordinate descent steps. For the flow-based algorithms in Section~\\ref{sec:flow}, the main oracle query is to find the auxiliary graph capacity $c(u,v)$ of an arc $(u,v)\\in E_i$ for some $i\\in[r]$. This can be easily formulated as minimizing the function $f_i+w$ for an appropriate $w$ with $\\text{supp}(w)\\subseteq C_i$; the details are given in Lemma~\\ref{lem:exchange-cap-oracle}. As explained at the beginning of Section~\\ref{sec:conv-opt}, an optimal solution to \\eqref{prob:fuji} immediately gives an optimal solution to \\eqref{prob:sfm} for the same submodular function. Hence, the auxiliary graph capacity queries can be implemented via the subroutines ${\\cal O}_i(w)$. 
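For background (this is classical material, not a subroutine introduced in this paper): linear optimization over a base polytope $B(f)$ has a closed-form solution via Edmonds' greedy algorithm, and such a linear-optimization oracle is the building block inside general-purpose quadratic minimization methods like Fujishige-Wolfe. A minimal sketch, using the count-based potential $f(S) = |S|\,|C\setminus S|$ that appears later in the experiments:

```python
import itertools

def greedy_vertex(f, V, w):
    """Edmonds' greedy algorithm: the vertex of the base polytope B(f)
    maximizing <w, x>. f maps frozensets to reals with f(frozenset()) = 0."""
    order = sorted(V, key=lambda v: -w[v])   # sort ground set by decreasing weight
    x, prefix = {}, []
    for v in order:
        # marginal value of v with respect to the current prefix
        x[v] = f(frozenset(prefix + [v])) - f(frozenset(prefix))
        prefix.append(v)
    return x

# Count-based region potential f(S) = |S| * |C \ S| (submodular).
C = [0, 1, 2, 3]
f = lambda S: len(S) * (len(C) - len(S))

x = greedy_vertex(f, C, w={0: 3.0, 1: 1.0, 2: 2.0, 3: 0.0})

# x lies in B(f): x(C) = f(C) and x(S) <= f(S) for every S.
assert sum(x.values()) == f(frozenset(C))
assert all(sum(x[v] for v in S) <= f(frozenset(S))
           for k in range(len(C) + 1)
           for S in itertools.combinations(C, k))
```

The final asserts check the defining inequalities of the base polytope on this toy instance.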
\nLet us also remark that, while the functions $f_i$ are formally defined on the entire ground set $V$, their effective support is $C_i$, and thus it suffices to solve the quadratic minimization problems on the ground set $C_i$.\n\n\\begin{lemma}\n\\label{lem:exchange-cap-oracle}\n The capacity $c(u, v) := \\min\\{f_i(S) - x_i(S) \\colon S \\subseteq\n C_i, u \\in S, v \\notin S\\}$ can be computed from the minimum value\n $\\min_{S\\subseteq C_i} f_i(S)+w(S)$ for an appropriately chosen vector $w\\in{\\mathbb{R}}^n$,\n $\\text{supp}(w)\\subseteq C_i$.\n\\end{lemma}\n\\begin{proof}\n We define a weight vector $w \\in \\mathbb{R}^n$ as follows: $w(u)\n = - (f_i(\\{u\\}) + 1)$; $w(v) = - (f_i(C_i) - f_i(C_i \\setminus\n \\{v\\}) - 1)$; $w(a) = - x_i(a)$ for all $a \\in C_i \\setminus \\{u,\n v\\}$, and $w(a)=0$ for all $a\\notin C_i$.\nLet $A\\subseteq C_i$ be a minimizer of $\\min_{S \\subseteq C_i} f_i(S)\n+ w(S)$.\nIt suffices to show that $u \\in A$ and $v \\notin A$. Note that\n$f_i(\\{u\\}) = f_i(\\{u\\}) - f_i(\\emptyset)$ is the maximum marginal value\nof $u$, i.e., $\\max_{S} (f_i(S \\cup \\{u\\}) - f_i(S))$. Moreover,\n$f_i(C_i) - f_i(C_i \\setminus \\{v\\})$ is the minimum marginal value of\n$v$. To show $u\\in A$, let us assume for a contradiction that $u\\notin A$. Then\n \\begin{align*}\n f_i(A \\cup \\{u\\}) + w(A \\cup \\{u\\}) &= (f_i(A) + w(A)) + (f_i(A \\cup \\{u\\}) - f_i(A)) + w(u)\\\\\n &= (f_i(A) + w(A)) + (f_i(A \\cup \\{u\\}) - f_i(A)) - f_i(\\{u\\}) - 1\\\\\n &\\leq f_i(A) + w(A) - 1,\n \\end{align*}\n contradicting the minimality of $A$. Similarly, to show that $v\\notin A$, suppose for a contradiction that $v \\in A$, and consider the set $A \\setminus \\{v\\}$. 
Since $f_i(C_i) - f_i(C_i \\setminus \\{v\\}) \\leq f_i(A) - f_i(A \\setminus \\{v\\})$, we have\n \\begin{align*}\n f_i(A \\setminus \\{v\\}) + w(A \\setminus \\{v\\}) &= (f_i(A) + w(A)) - (f_i(A) - f_i(A \\setminus \\{v\\})) - w(v)\\\\\n &= (f_i(A) + w(A)) - (f_i(A) - f_i(A \\setminus \\{v\\})) + (f_i(C_i) - f_i(C_i \\setminus \\{v\\})) - 1\\\\\n &\\leq f_i(A) + w(A) - 1.\n \\end{align*}\n Therefore $u \\in A$ and $v \\notin A$, and hence $A \\in {\\operatorname{argmin}}\\{f_i(S) - x_i(S) \\colon S \\subseteq C_i, u \\in S, v \\notin S\\}$. \n\\end{proof}\n\n\n\nWhereas discrete and continuous algorithms require the same type of oracles, there is an important difference between the two approaches in terms of exactness for the oracle solutions. The discrete algorithms require exact values of the auxiliary graph capacities $c(u,v)$, as they must maintain $x_i\\in B(f_i)$ throughout. Thus, the oracle must always return an optimal solution. The continuous algorithms are more robust: they can reach the required accuracy even if the oracle only returns approximate solutions. As discussed in Section~\\ref{sec:experiments}, this difference leads to the continuous methods being applicable in settings where the combinatorial algorithms are prohibitively slow.\n\n{\\bf Level-0 algorithms.}\nWe now discuss specific algorithms for quadratic minimization over the base polytopes of the functions $f_i$. Several functions that arise in applications are ``simple'', meaning that there is a function-specific quadratic minimization subroutine that is very efficient. If a function-specific subroutine is not available, one can use a general-purpose submodular minimization algorithm. The works \\cite{Arora12,Fix13} use a {\\em brute force search} as the subroutine for each $f_i$, whose running time is $2^{|C_i|} \\mathrm{EO}_i$. However, this is applicable only for small $C_i$'s and is not suitable for our experiments where the maximum clique size is quite large. 
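The brute force subroutine is easy to state precisely: it enumerates every candidate set in the definition of the exchange capacity $c(u,v)$ directly. The sketch below is illustrative only; the function and the point are toy data, and a real implementation could instead go through the weight vector of Lemma~\ref{lem:exchange-cap-oracle}:

```python
from itertools import combinations

def exchange_capacity_bruteforce(f, x, C, u, v):
    """Brute force: c(u, v) = min{ f(S) - x(S) : S subseteq C, u in S, v not in S }.
    Enumerates 2^(|C|-2) candidate sets, matching the 2^{|C_i|} EO_i bound
    up to constants."""
    rest = [a for a in C if a not in (u, v)]
    best = float('inf')
    for k in range(len(rest) + 1):
        for T in combinations(rest, k):
            S = frozenset(T) | {u}           # u forced in, v forced out
            best = min(best, f(S) - sum(x[a] for a in S))
    return best

# Toy data: count-based potential f(S) = |S| |C \ S| and x = 0.
C = [0, 1, 2, 3]
f = lambda S: len(S) * (len(C) - len(S))
x = {a: 0.0 for a in C}

assert exchange_capacity_bruteforce(f, x, C, u=0, v=1) == 3  # attained by S = {0}
```

This exponential enumeration is exactly why the brute force subroutine is limited to small cliques.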
\nAs a general-purpose algorithm, we used the {\\em Fujishige-Wolfe minimum norm point algorithm} \\cite{Fujishige11, Wolfe76}. This provides an $\\varepsilon$-approximate solution in $O(|C_i| F^2_{i,\\max}\/\\varepsilon)$ iterations, with overall running time bound $O((|C_i|^4 + |C_i|^2 \\mathrm{EO}_i) F^2_{i, \\max} \/ \\varepsilon)$ \\cite{Chakrabarty14}. \nThe experimental running time of the Fujishige-Wolfe algorithm can be prohibitively large \\cite{jegelka11fast}. As we discuss in Section~\\ref{sec:experiments}, by warm-starting the algorithm and performing only a small number of iterations, we were able to use the algorithm in conjunction with the gradient descent level-1 algorithms. \n\n\\section{Experiments}\n\\label{sec:experiments}\n\n\n\\begin{table}\n\\caption{Instance sizes}\n\\label{tb:instance-sizes}\n\\centering\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n{\\bf image} & {\\bf \\# pixels} & {\\bf \\# edges} & {\\bf \\# squares}\\\\\n\\hline\nbee & 273280 & 1089921 & 68160 \\\\\n\\hline\noctopus & 273280 & 1089921 & 68160\\\\\n\\hline\npenguin & 154401 & 615200 & 38400\\\\\n\\hline\nplant & 273280 & 1089921 & 68160\\\\\n\\hline\nplane & 154401 & 615200 & 38400 \\\\\n\\hline\n\\end{tabular}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n{\\bf \\# regions} & \\multicolumn{3}{|c|}{{\\bf min, max, and average region size}}\\\\\n\\hline\n50 & 298 & 299 & 298.02\\\\\n\\hline\n49 & 7 & 299 & 237.306\\\\\n\\hline\n50 & 5 & 299 & 279.02\\\\\n\\hline\n50 & 8 & 298 & 275.22\\\\\n\\hline\n50 & 10 & 299 & 291.48\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}\n\\caption{Minimum cut experiments}\n\\label{tb:mincut}\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n{\\bf image} & {\\bf \\# functions ($r$)} & {\\bf IBFS time (sec)}\\\\\n\\hline\nbee & 1363201 & 1.70942\\\\\n\\hline\noctopus & 1363201 & 1.09101 \\\\\n\\hline\npenguin & 769601 & 0.684413 \\\\\n\\hline\nplant & 1363201 & 1.30977 \\\\\n\\hline\nplane & 769601 & 0.745521 
\\\\\n\\hline\n\\end{tabular}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{{\\bf UCDM time (sec)}}\\\\\n\\hline\n{\\bf \\# iter = $5r$} & {\\bf \\# iter = $10r$} & {\\bf \\# iter = $100r$} & {\\bf \\# iter = $1000r$}\\\\\n\\hline\n0.951421 & 1.6234 & 13.4594 & 134.719 \\\\\n\\hline\n0.937317 & 1.6279 & 13.9887 & 137.969 \\\\\n\\hline\n0.492372 & 0.836147 & 7.1069 & 70.1742\\\\\n\\hline\n0.943306 & 1.63492 & 13.9559 & 137.865 \\\\\n\\hline\n0.521685 & 0.850145 & 7.31664 & 71.8874 \\\\\n\\hline\n\\end{tabular}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{{\\bf ACDM time (sec)}}\\\\\n\\hline\n{\\bf \\# iter = $5r$} & {\\bf \\# iter = $10r$} & {\\bf \\# iter = $100r$} & {\\bf \\# iter = $1000r$}\\\\\n\\hline\n1.3769 & 2.2696 & 18.4351 & 182.069 \\\\\n\\hline\n1.40884 & 2.33431 & 19.0471 & 188.887 \\\\\n\\hline\n0.757929 & 1.24094 & 9.99443 & 98.5717 \\\\\n\\hline\n1.39893 & 2.29446 & 18.6846 & 185.274 \\\\\n\\hline\n0.766455 & 1.26081 & 10.1244 & 99.0298 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\begin{table}\n\\caption{Small cliques experiments}\n\\label{tb:smallcliques}\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n{\\bf image} & {\\bf \\# functions ($r$)} & {\\bf IBFS time (sec)}\\\\\n\\hline\nbee & 1431361 & 14.5125 \\\\\n\\hline\noctopus & 1431361 & 12.9877 \\\\\n\\hline\npenguin & 808001 & 7.58177 \\\\\n\\hline\nplant & 1431361 & 13.7403 \\\\\n\\hline\nplane & 808001 & 7.67518 \\\\\n\\hline\n\\end{tabular}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{{\\bf RCDM time (sec)}}\\\\\n\\hline\n{\\bf \\# iter = $5r$} & {\\bf \\# iter = $10r$} & {\\bf \\# iter = $100r$} & {\\bf \\# iter = $1000r$}\\\\\n\\hline\n4.14091 & 7.57959 & 66.0576 & 660.496 \\\\\n\\hline\n4.29358 & 7.80816 & 68.5862 & 675.23 \\\\\n\\hline\n2.16441 & 4.08777 & 37.8157 & 372.733 \\\\\n\\hline\n4.6404 & 8.21702 & 69.059 & 672.753 \\\\\n\\hline\n2.182 & 4.12521 & 37.8602 & 373.825 
\\\\\n\\hline\n\\end{tabular}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{{\\bf ACDM time (sec)}}\\\\\n\\hline\n{\\bf \\# iter = $5r$} & {\\bf \\# iter = $10r$} & {\\bf \\# iter = $100r$} & {\\bf \\# iter = $1000r$}\\\\\n\\hline\n5.24474 & 10.0951 & 98.7737 & 932.954 \\\\\n\\hline\n5.5891 & 10.7124 & 99.4081 & 924.076 \\\\\n\\hline\n2.95226 & 5.71215 & 52.9766 & 512.665 \\\\\n\\hline\n5.8395 & 11.0806 & 102.023 & 900.979 \\\\\n\\hline\n2.95003 & 5.70771 & 53.7524 & 486.294 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\\begin{table}\n\\caption{Large cliques experiments with potential specific quadratic minimization for the region potentials. In order to be able to run IBFS, we used smaller regions: $50$ regions with an average size between $45$ and $50$.}\n\\label{tb:largecliques}\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n{\\bf image} & {\\bf \\# functions ($r$)} & {\\bf IBFS time (sec)}\\\\\n\\hline\nbee & 1431411 & 14.7271\\\\\n\\hline\noctopus & 1431411 & 12.698 \\\\\n\\hline\npenguin & 808051 & 7.51067 \\\\\n\\hline\nplant & 1431411 & 13.6282 \\\\\n\\hline\nplane & 808051 & 7.64527 \\\\\n\\hline\n\\end{tabular}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{{\\bf RCDM time (sec)}}\\\\\n\\hline\n{\\bf \\# iter = $5r$} & {\\bf \\# iter = $10r$} & {\\bf \\# iter = $100r$} & {\\bf \\# iter = $1000r$}\\\\\n\\hline\n 4.29954 & 7.87555 & 67.8876 & 664.816\\\\\n\\hline\n 4.18879 & 7.61576 & 66.7 & 656.71\\\\\n\\hline\n 2.132 & 4.01926 & 36.9896 & 364.694\\\\\n\\hline\n 4.55894 & 8.06429 & 67.72 & 659.685\\\\\n\\hline\n 2.16248 & 4.0713 & 37.1917 & 366.272\\\\\n\\hline\n\\end{tabular}\n\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{{\\bf ACDM time (sec)}}\\\\\n\\hline\n{\\bf \\# iter = $5r$} & {\\bf \\# iter = $10r$} & {\\bf \\# iter = $100r$} & {\\bf \\# iter = $1000r$}\\\\\n\\hline\n 5.34726 & 10.3231 & 100.24 & 912.477\\\\\n\\hline\n 5.44726 & 10.4446 & 96.2384 & 898.579\\\\\n\\hline\n 2.90223 & 5.60117 & 51.9775 & 
500.083\\\\\n\\hline\n 5.72946 & 10.8512 & 99.6597 & 879.872\\\\\n\\hline\n 2.89726 & 5.61102 & 52.5439 & 475.967\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\nWe evaluate the algorithms on energy minimization problems that arise\nin image segmentation.\n We follow the standard approach and model the task of segmenting an object from the background as finding a minimum cost $0\/1$ labeling of the pixels. The total labeling cost is the sum of labeling costs corresponding to \\emph{cliques}, where a clique is a set of pixels. We refer to the labeling cost functions as clique potentials.\n\nThe main focus of our experimental analysis is to compare the running times of the decomposable submodular minimization algorithms. Therefore we have chosen to use the simple hand-tuned potentials that were used in previous work~\\cite{Shanu16,Arora12,StobbeK10}: the \\emph{edge-based costs} defined by \\cite{Arora12} and the \\emph{count-based costs} defined by \\cite{StobbeK10}. Specifically, we used the following clique potentials in our experiments, all of which are submodular:\n\\begin{compactitem}\n\\item {\\bf Unary potentials} for each pixel. The unary potentials are derived from Gaussian Mixture Models of color features \\cite{RotherKB04}.\n\\item {\\bf Pairwise potentials} for each edge of the $8$-neighbor grid graph. Each graph edge $(i, j)$ between pixels $i$ and $j$ is assigned a weight that is a function of $\\exp(- \\|v_i - v_j\\|^2)$, where $v_i$ is the RGB color vector of pixel $i$. The clique potential for the edge is the cut function of the edge: the cost of a labeling is equal to zero if the two pixels have the same label and it is equal to the weight of the edge otherwise.\n\\item {\\bf Square potentials} for each $2 \\times 2$ square of pixels. We view a $2 \\times 2$ square as a graph on $4$ nodes connected with $4$ edges (two horizontal and two vertical edges). 
The cost of a labeling is the square root of the number of edges of the square that have different labels. This is the basic edge-based potential defined by \\cite{Arora12}.\n\\item {\\bf Region potentials} for a set of regions of the image. We compute a set of regions of the image using the region growing algorithm suggested by \\cite{StobbeK10}. For each region $C_i$, we define a count-based clique potential as in \\cite{StobbeK10,Shanu16}: for each set $S \\subseteq C_i$ of pixels, $f_i(S) = |S| |C_i \\setminus S|$.\n\\end{compactitem}\nWe used five image segmentation instances to evaluate the algorithms\\footnote{The data is available at \\url{http:\/\/melodi.ee.washington.edu\/~jegelka\/cc\/index.html} and \\url{http:\/\/research.microsoft.com\/en-us\/um\/cambridge\/projects\/visionimagevideoediting\/segmentation\/grabcut.htm}}. Table~\\ref{tb:instance-sizes} provides the sizes of the resulting instances. The experiments were carried out on a single computer with a 3.3 GHz Intel Core i5 processor and 8 GB of memory. The reported times are averaged over 10 trials.\n\n\\begin{table}\n\\caption{Large cliques experiments with Fujishige-Wolfe quadratic minimization algorithm for the region potentials. The Fujishige-Wolfe algorithm was run for $10$ iterations starting from the current gradient descent solution. 
The region sizes are given in Table~\\ref{tb:instance-sizes}.}\n\\label{tb:largecliquesmnp}\n\\centering\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{{\\bf RCDM time (sec)}}\\\\\n\\hline\n{\\bf \\# iter = $5r$} & {\\bf \\# iter = $10r$} & {\\bf \\# iter = $100r$} & {\\bf \\# iter = $1000r$}\\\\\n\\hline\n4.4422 & 8.18077 & 69.0444 & 674.526 \\\\\n\\hline\n4.30835 & 7.86231 & 68.1428 & 665.57\\\\\n\\hline\n2.2724 & 4.28243 & 38.1329 & 366.549\\\\\n\\hline\n4.61008 & 8.20094 & 68.8351 & 660.469\\\\\n\\hline\n2.28484 & 4.30316 & 38.0435 & 366.825\\\\\n\\hline\n\\end{tabular}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{{\\bf ACDM time (sec)}}\\\\\n\\hline\n{\\bf \\# iter = $5r$} & {\\bf \\# iter = $10r$} & {\\bf \\# iter = $100r$} & {\\bf \\# iter = $1000r$}\\\\\n\\hline\n5.29305 & 10.2853 & 103.452 & 936.613\\\\\n\\hline\n5.55511 & 10.6411 & 97.955 & 901.875\\\\\n\\hline\n2.95909 & 5.74585 & 54.3808 & 505.977\\\\\n\\hline\n5.71402 & 10.8467 & 99.6515 & 873.694\\\\\n\\hline\n2.9556 & 5.73271 & 54.0599 & 482.496\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n{\\bf Number of iterations for the coordinate methods.} We have run the coordinate descent algorithms for $1000 r$ iterations, where $r$ is the number of functions in the decomposition. Our choice is based on the empirical results of Jegelka {\\em et al.}\\xspace \\cite{JegelkaBS13} that showed that this number of iterations suffices to obtain good results.\n\n{\\bf Minimum cut experiments.} We evaluated the algorithms on instances containing only the unary potentials and the pairwise potentials. Table~\\ref{tb:mincut} gives the running times in seconds.\n\n{\\bf Small cliques experiments.} We evaluated the algorithms on instances containing the unary potentials, the pairwise potentials, and the square potentials. 
Table~\\ref{tb:smallcliques} gives the running times in seconds.\n\n{\\bf Large cliques experiments.} We evaluated the algorithms on instances containing all of the potentials: the unary potentials, the pairwise potentials, the square potentials, and the region potentials. For the region potentials, we used a potential-specific level-$0$ algorithm that performs quadratic minimization over the base polytope in time $O(|C_i| \\log(|C_i|) + |C_i| \\mathrm{EO}_i)$. Additionally, due to the slow running time of IBFS, we used smaller regions: $50$ regions with an average size between $45$ and $50$.\n\n{\\bf Large cliques experiments with Fujishige-Wolfe algorithm.} We also ran a version of the large cliques experiments with the Fujishige-Wolfe algorithm as the level-$0$ algorithm for the region potentials. The Fujishige-Wolfe algorithm was significantly slower than the potential-specific quadratic minimization algorithm and in our experiments it was prohibitive to run the Fujishige-Wolfe algorithm to near-convergence. Since the IBFS algorithm requires almost exact quadratic minimization in order to compute exchange capacities, it was prohibitive to run the IBFS algorithm with the Fujishige-Wolfe algorithm. In contrast, the coordinate descent methods can potentially make progress even if the level-$0$ solution is far from being converged.\n\nIn order to empirically evaluate this hypothesis, we made a simple but crucial change to the Fujishige-Wolfe algorithm: we \\emph{warm-started} the algorithm with the current solution. Recall that the coordinate descent algorithms maintain a solution $x_i \\in B(f_i)$ for each function $f_i$ in the decomposition. We warm-started the Fujishige-Wolfe algorithm with the current solution $x_i$, and we ran the algorithm for a small number of iterations. 
In our experiments, we ran the Fujishige-Wolfe algorithm for $10$ iterations. These changes made the level-$0$ running time considerably smaller, allowing us to run the level-$1$ coordinate descent algorithms for as many as $1000r$ iterations. At the same time, performing $10$ iterations starting from the current solution seemed enough to provide an improvement over it. Table~\\ref{tb:largecliquesmnp} gives the running times.\n\n{\\bf Conclusions.} The combinatorial level-$1$ algorithms such as IBFS are exact and can be significantly faster than the gradient descent algorithms provided that the sizes of the cliques are fairly small. For instances with larger cliques, the combinatorial algorithms are no longer suitable if the only choice for the level-$0$ algorithm is a generic method such as the Fujishige-Wolfe algorithm. The experimental results suggest that in such cases, the coordinate descent methods together with a suitably modified Fujishige-Wolfe algorithm provide an approach for obtaining an approximate solution.\n\n\n\\newpage \\clearpage\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe largest globular cluster associated with our Galaxy, $\\omega$\nCentauri, shows substantial ranges in abundance of all elements that\nare studied (e.g. \\citealt{nd95a, scl95, smi00, pan02}). The iron\nabundance ranges from \\mbox{[Fe\/H]$\\approx$--2.0} up to\n\\mbox{[Fe\/H]$\\approx$--0.4}. A peak at [Fe\/H]=--1.7 exists that accounts for\napproximately 70\\% of the population, and there is a long tail to\nhigher metallicities \\citep{nfm96, sk96, lee99, pan00, sta06a}.\n\n\nStudies of individual elements have also shown a range in their\nabundances within the cluster members. Carbon, nitrogen and oxygen\nshow large scatters in their abundance ratios for a given\n[Fe\/H]~\\citep{per80, bw93, nd95a}. 
The $\\alpha$ elements (Mg, Si, Ca\nand Ti) show a constant value of \\mbox{[$\\alpha$\/Fe]{$\\approx$}0.3} below\n\\mbox{[Fe\/H]$<$--1.0} \\citep{bw93, scl95, nd95a, smi00} but decrease\nto \\mbox{[$\\alpha$\/Fe]=0.0} at higher metallicities\n\\mbox{([Fe\/H]$>$--1.0)} \\citep{pan02}. The constancy with iron at lower\nabundances indicates they are produced by the same source, most likely\nsupernovae Type II. At the higher metallicities, the [$\\alpha$\/Fe]\ndecreases, consistent with supernovae Type Ia contributions.\n\n\nSodium and aluminium abundance ratios are correlated, and both are\nanticorrelated with [O\/Fe] \\citep{bw93, nd95a, nd95b, smi00}.\n\\citet{smi00} also report [Al\/Fe] being anticorrelated with [Mg\/Fe].\n[Cu\/Fe] has been found to be constant for \\mbox{[Fe\/H]$<$--0.8}\n\\citep{smi00, cun02}, but \\citet{pan02} reported a trend of increasing\n[Cu\/Fe] as the metallicity increased from \\mbox{[Fe\/H]=--1.2} to~--0.5.\nStudies of the iron peak elements (Cr, Ni, Ti) and metals (Sc, V) have\nshown that there is no trend with respect to iron (up to\n\\mbox{[Fe\/H]$\\approx$--0.8)}, consistent with primordial enrichment from Type\nII supernovae. The heavy neutron capture elements have been shown by\n\\citet{nd95a} and \\citet{scl95} to increase sharply as a function of\niron abundance. These results are in contrast to normal globular\nclusters, and suggest that the stellar winds from AGB stars (sources\nof s-process elements) were involved with the enrichment of {$\\omega$ Cen}.\n\n\nThe cluster also has several members that are of distinct classes,\nsuch as CH stars, Ba stars, S stars \\citep{le83} and stars with strong\nCO \\citep{per80} to name a few. Here we present another object that\nhas unusual abundance ratios, the formation of which is difficult to\nexplain.\n\n\nWe have obtained observations of a sample of main sequence and turnoff\n(MSTO) stars within the cluster for several purposes. 
Firstly, the\npossibility of an age-metallicity relation within the cluster's member\nstars was investigated \\citep{sta06a}. We found an age range of\n\\mbox{2--4} Gyrs exists between the most metal-poor and metal-rich\npopulations within the cluster, similar to that found by other studies\n\\citep{hil04, rey04, sol05}. Positions, photometry, metallicities and\nages of the stars in the catalogue can be found in the electronic\nversion of \\citet{sta06a}. The second goal was to determine abundance\nratios of several elements --- carbon, nitrogen, strontium and barium\nas functions of [Fe\/H] \\citep{sta06b}. In that analysis a metal-rich\nmain sequence (MS) member, 2015448, was found that exhibited unusual\ns-process abundance ratios. The purpose of this Letter is to detail\nthe abundance analysis for star 2015448, and to propose possible\nformation scenarios. \\S \\ref{OR_sect} briefly describes the\nobservation and reduction process for our sample of main sequence and\nturnoff members. The stellar parameters and analysis are detailed in\n\\S \\ref{SPA_sect}, while \\S \\ref{D_sect} discusses the possible\nformation scenarios for this object.\n\n\n\\section{Observations} \\label{OR_sect}\n\nThe observations and reduction of our data are described in detail in\n\\citet{sta06a} to which we refer the reader. Briefly, photometry for\nthe cluster was obtained in the V and B bands, and samples were chosen\nwithin an annulus \\mbox{15'---25'} from the cluster center. Two\nregions were defined near the MSTO, shown in Figure \\ref{cmd}, and\nspectra were obtained for objects within these CMD regions using the\nTwo Degree Field Spectrograph (2dF) on the Anglo-Australian Telescope\n\\citep{lew02}. Figure \\ref{cmd} shows the color-magnitude diagram of\nall objects with no membership information as small dots, the sample\nof 420 radial velocity members as large dots, and the object that is\nthe topic of this Letter as a large star. 
This object, 2015448, has\n$V=18.22$, and $B-V=0.69$. This star immediately stood out in a\nvisual inspection of the spectra, due to the anomalously strong Sr\nlines, seen in Figure \\ref{spec}. Its position is\n\\mbox{RA=13$^h$24$^m$49.10$^s$} and\n\\mbox{Dec=--47$\\degr$40$'$34.7$''$} (J2000). With a radial velocity\nof \\mbox{228$\\pm$11~kms$^{-1}$}, star 2015448 is likely to be a member of\nthe cluster as {$\\omega$ Cen} has a radial velocity of\n\\mbox{232$\\pm$0.7~kms$^{-1}$} \\citep{din99}.\n\n\nThe observations were carried out in half-hour exposures, and, as\nseveral such exposures were needed to obtain the required\nsignal-to-noise of $\\sim$30 for each star, there are several\nindividual observations for star 2015448. Also, this object was observed\nin observing sessions in 1998, 1999 and 2002, in different fibres and\nspectrographs. This gives an independent check on the relative\nstrengths of the Sr and Ba features. It was found that the two Sr\nfeatures were present in all observations, while Ba was not clearly\ndetected in any of our spectra.\n\nA spectroscopic abundance analysis of the full sample is detailed in\n\\citet{sta06b}. Abundances relative to iron of C, N and Sr were\ndetermined using spectrum synthesis techniques. The CH feature at\n$\\sim$4300{\\AA} was used to determine the [C\/Fe] abundance for each\nstar. This [C\/Fe] was then adopted when the CN feature at 3883{\\AA}\nwas analyzed to find [N\/Fe]. The Sr abundance ratio was determined\nusing the two Sr lines at 4077.71{\\AA} and 4215.52{\\AA}. When\npossible, [Ba\/Fe] was also determined from the feature at\n4554.03{\\AA}.\n\n\n\\section{Stellar Parameters and Analysis} \\label{SPA_sect}\n\nThe process for calculating metallicities is described in detail in\n\\citet{sta06a}. The Sr-rich star has a metallicity of\n\\mbox{[Fe\/H]=--0.74$\\pm$0.23~dex}, determined solely from the\nAuto-Correlation Function method, and an age of\n\\mbox{14.5$\\pm$2.2~Gyrs}. 
This age is consistent with the mean age of\nthe bulk of the stars in the cluster. The ages for the individual\nstars in the sample were calculated by fitting an isochrone to each\nobject using the magnitude, color, metallicity and [$\\alpha$\/Fe]\nabundance (Y$^2$, \\citet{yi01}, Green Table). This also enabled an\nindividual temperature and gravity to be assigned to each star. A\ntemperature of 5820 (K), and \\mbox{log$g$=4.2~(cgs)} were adopted for\nthis object. A reddening \\mbox{$E(B-V)$=0.11} \\citep{lub02} and\ndistance modulus \\mbox{(m--M)$_{V}$=14.10} were assumed. Although the\nstar's color is redder than the bulk of the population in the cluster,\nit is also more metal-rich than that population. The hydrogen line\nstrengths in the observed spectrum are consistent with those expected\non the basis of the star's color.\n\nSynthetic spectra were obtained using the stellar models of Kurucz\n(1993), atomic line lists of Bell (2000, private communication) and\nmolecular line lists of Kurucz. The spectrum synthesis code was\ndeveloped by \\citet{cn78}. More details of this process are given in\n\\citet{sta06b}.\n\n\n\n\\subsection{Abundance Ratios} \\label{A_sect}\n\nFigure \\ref{spec1} shows the synthetic spectrum fits to the CH, CN, Sr\nand Ba features. The G band at $\\sim$4300{\\AA} was analyzed first,\nand gave a C abundance of \\mbox{[C\/Fe]=--0.5$\\pm$0.3~dex}. This\nfeature loses its sensitivity for abundances less than\n\\mbox{[C\/Fe]=--0.5}, shown by the close proximity of the\n\\mbox{[C\/Fe]=--0.8} and --0.5~dex synthetic lines. As we have no\ninformation on O, which affects the C abundance obtained from the CH\nfeature, a value of \\mbox{[O\/Fe]=0.18~dex} was assumed. This star was\nassumed to have a \\mbox{[C\/Fe]=--0.5}, which was then used when\ndetermining [N\/Fe]. Synthesis of the CN feature at $\\sim$3883{\\AA} led\nto a N abundance ratio of \\mbox{[N\/Fe]$<$0.5~dex}. 
A range of N\nabundance ratios is shown in the synthetic spectra, enabling an upper\nlimit to be placed on the N abundance ratio. The CN feature in\nFigure \\ref{spec1} shows that abundance ratios of \\mbox{[N\/Fe]=1.5}\nand 1.0 are too high. However, this feature loses its sensitivity as\nsmaller N abundance ratios are considered and only an upper limit can\nbe determined.\n\n\n\nThe Sr and Ba lines were analyzed together. There were two Sr lines\nused, Sr {\\sc II} at 4077.71{\\AA} and Sr {\\sc II} at 4215.52{\\AA}.\nThe Ba {\\sc II} 4554.03{\\AA} line was used to constrain the Ba\nabundance ratio. The two Sr lines are in agreement and yield\n\\mbox{[Sr\/Fe]=1.6$\\pm$0.1~dex}. It can be seen clearly that a solar\nSr abundance ratio does not fit the observed spectrum. The CN\nbandhead at $\\sim$4216{\\AA} can affect the abundance ratio obtained\nfor the Sr 4215{\\AA} feature. However, with an assumed solar N\nabundance ratio, there is little effect on the Sr line. While the N\nabundance ratio could be as high as \\mbox{[N\/Fe]=0.5~dex}, spectrum\nsynthesis calculations show this does not affect the Sr 4215{\\AA}\nline significantly, and does not alter the abundance ratio obtained.\nUsing the same enhancement for Ba that was found for Sr, one finds the\npredicted strength to be too high to fit the observed Ba line. The\nsensitivity of the Ba feature is low up to abundance ratios of\n\\mbox{[Ba\/Fe]=0.6~dex}, and an upper limit can be placed at this\nvalue.\n\n\n\\subsection{Errors} \\label{E_sect}\n\nThe stellar parameters, temperature, gravity and metallicity, for\nstar 2015448 were varied individually by their uncertainties to give an\nestimate of the error in our determined abundance ratios. These were\nthen added in quadrature. The error in temperature comes from\n$\\Delta(B-V)$ and the reddening, equating to a $\\pm$100K uncertainty in\ntemperature. This propagated to errors of $\\pm$0.2~dex in C and\n$\\pm$0.1~dex in Sr. 
Gravity was changed by $\\pm$0.2~dex, and did not\nlead to any significant errors in the final abundance ratios. The\nuncertainty in metallicity propagated to errors in abundance ratios\nfor \\mbox{$\\Delta$C$\\pm$0.2~dex} and \\mbox{$\\Delta$Sr$\\pm$0.1~dex}.\nThe CN feature used to determine N, adopting the previously determined\nC abundance ratio for 2015448, did not show any change with the above\nchanges in temperature, gravity, metallicity, or the determined error\nin the C abundance ratio, due to the low sensitivity of this feature.\nTo determine the effect of the assumed oxygen abundance,\n\\mbox{[O\/Fe]=0.0} and +0.3 were used and the CH feature reanalyzed.\nIt was found to have no noticeable effect on the [C\/Fe] abundance\ndetermined.\n\n\n\\section{Discussion} \\label{D_sect}\n\nThis peculiar star in {$\\omega$ Cen} shows \\mbox{[C\/Fe]=--0.5},\n\\mbox{[N\/Fe]$<$0.5}, enhanced \\mbox{[Sr\/Fe]=1.6}, and\n\\mbox{[Ba\/Fe]$<$0.6}. The formation process that created it was\nunusual as there are no similar objects yet found within the cluster.\nWe do find MS objects with high Sr, but these also have similar\nenhancements in Ba. This star was one out of 420 stars studied on the\nmain sequence turnoff. Given that it is a metal-rich star, and we\nonly found 25 such objects, a more extensive search of other areas of\nthe center of the cluster may prove fruitful in determining if there\nare more of these objects. On the RGB, at a metallicity of\n\\mbox{[Fe\/H]=--0.8}, most stars have \\mbox{[Ba\/Fe]$\\approx$+0.6}\n\\citep{nd95a}, which is consistent with the upper limit found for\nstar 2015448. \\citet{nd95a} did not observe Sr, and a direct comparison\ncannot be made. However, they did investigate other light s-process\nelements such as Y and Zr. 
Both of these elements have abundance\nratios equal to or less than 0.6, and no stars on the RGB exhibit\ns-process abundances as high as is found here for [Sr\/Fe].\n\nThere are at least two possibilities that lead to the formation of\nthis star. Firstly, it may have been in a binary system with a\ncompanion that underwent unusual s-process enrichment and transferred\nmass to the star we see today. At present we have no evidence that\nthis star is in a binary system: the individual velocities from the\n1998, 1999 and 2002 observations are consistent to within the\nmeasurement errors of \\mbox{$\\sim$10kms$^{-1}$}. A second possibility\nis that the Sr-rich star formed out of already enriched material.\n\nAGB stars of low and intermediate mass dredge up $^{12}$C to the\nsurface via helium shell thermal pulses. In general, if the star is\nmore massive than $\\sim$3M$_{\\odot}$, the $^{12}$C is processed to\n$^{14}$N via the CN cycle \\citep{vdm02}. The main s-process is\nthought to occur within the $^{13}$C pocket where the neutron source\nis the \\mbox{$^{13}$C($\\alpha$,n)$^{16}$O} reaction\n(e.g. \\citealt{lk01}). To generate the abundance ratio seen here it\nwould seem necessary that the s-processing occurs only for the light\ns-process elements, perhaps up to the peak at Zr, rather than\nprogressing through to heavy s-process elements which include Ba.\nThis may be due to the neutron density not being high enough, as\nhigher neutron densities, all other parameters being equal,\nwill produce larger amounts of heavier nuclei such as Ba or La,\nrelative to the lighter ones \\citep{smi05}.\n\nAnother source of s-processing, known as the weak s-process, occurs in\nmassive stars \\citep{phn90}. The \\mbox{$^{22}$Ne($\\alpha$,n)$^{25}$Mg}\nreaction is the source of the neutrons. 
Most of the neutrons produced\nfrom this reaction are captured by the light elements, and only a\nsmall fraction are captured by the $^{56}$Fe seed nucleus, a process\nknown as ``self-poisoning''. This is the reason for the limited\nefficiency of the \\mbox{$^{22}$Ne($\\alpha$,n)$^{25}$Mg} source for\nthe s-process. It allows for the production of only light s-process nuclei\nwith mass numbers \\mbox{65$<$A$<$90}, and Sr, with a mass number of 87,\nfalls into this range. Ba, on the other hand, with mass number 137,\ndoes not, and little of this element is produced by this process\n\\citep{phn90}. It is unclear whether massive stars could also produce\nthe necessary C and N abundance pattern found here, or whether star\n2015448 contains other elemental abundance patterns that may have\noriginated from these objects.\n\n\nThe r-process occurs in an environment that is rich in neutrons, and\nthe mean time between successive neutron captures is very short\ncompared with the time to undergo $\\beta$-decay. This scenario as the\nenrichment source also requires the presence of SNe products such as\ncalcium and iron, or else requires the Sr-enriched material to have been\ntransferred without any SNe products.\n\nUsing the abundance yields for the r and (main) s-process (occurring\nin low-to-intermediate mass AGB stars) from \\citet{cam82}, and\nnormalizing to [Sr\/Fe], one finds a difference greater than one dex\nbetween predicted and observed abundances for Ba. For the main\ns-process, the predicted abundance is \\mbox{[Ba\/Fe]=1.68}, while the\nr-process predicts \\mbox{[Ba\/Fe]=1.84}. Both of these values are\nobviously too large, confirming the unusual nature of the source of\nthe abundance patterns found in star 2015448. However, it should be\nnoted that AGB star yields do not explain the abundance anomalies of\nNa, Mg, Al, and O found in other stars of {$\\omega$ Cen} or globular clusters\n(see e.g. \\citealt{fen04}). 
This may be due to the inadequacy of the\nmodels used to calculate yields for these objects.\n\nFrom weak s-process yields in massive stars \\citep{phn90}, using an\ninitial metallicity of \\mbox{Z\/Z$_{\\odot}$=10$^{-1}$} and mass of\n16M$_{\\odot}$, one obtains \\mbox{[Sr\/Fe]=1.8} and \\mbox{[Ba\/Fe]=0.7}, in good\nagreement with the Sr abundance and the upper limit on Ba obtained\nhere. This indicates the weak s-process is the more favorable option\nfor the source of enrichment in star 2015448.\n\n\nThis star is also highly unusual when compared to observations of\nfield stars. Studies of n-capture elements in samples of field stars\nfor a range of metallicities show a spread in abundance ratios of Sr\nand Ba at low metallicities \\mbox{([Fe\/H]$<$-2.0)}, with most stars\nhaving [Sr\/Fe] or [Ba\/Fe] $\\leq$0.0~\\citep{mcw98, bur00, hon04}. At\nhigher metallicities, the abundances are within $\\sim$0.5 of the solar\nabundance.\n\nA cool field giant, U Aquarii, has enhanced [Sr\/Fe] and [Y\/Fe]\nabundances, and low [Ba\/Fe] \\citep{bln79}. This star is a faint R CrB\ntype variable star and shows no CH features, but strong $^{12}$C$_{2}$\nbands. \\citet{bln79} concluded that U Aqr is a hydrogen deficient\ncarbon star with enhanced abundances of the light s-process elements\nSr and Y (by a factor of $\\sim$100) and little or no Ba. It is now a\nHe-C core of an evolved star of $\\sim$1M$_{\\odot}$ that ejected its H-rich\nenvelope at the He core flash. \\citet{bln79} postulated that a\nsingle neutron exposure occurred at the flash, resulting in a brief\nneutron irradiation producing only the light s-process elements. A\nsimilar giant, known as Sakurai's Object \\citep{asp97}, shows similarly\nenhanced C and light s-process elements, and is H deficient. These\ntypes of stars may be responsible for the abundance pattern found in\nstar 2015448. 
However, a significant difference is the C-rich nature of\nthe giants compared with the carbon depleted nature of star 2015448.\n\nOur results provide a challenging puzzle to determine the source of\nthe abundance patterns found for this star. That said, the resolution\nof our data is inadequate to address this question. Higher resolution\nspectra with high S\/N are needed to analyze as many elements as\npossible, in particular the s-process ones, to be able to obtain an\naccurate history for the evolution of star 2015448.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{#1}}\n\\newtheorem{vdef}{Definition}[section]\n\\newtheorem{prop}[vdef]{Proposition}\n\\newtheorem{lemma}[vdef]{Lemma}\n\\newtheorem{cor}[vdef]{Corollary}\n\\newtheorem{thm}{Theorem}[section]\n\\newcommand\\qedSymbol{\\ensuremath{\\Box}}\n\\renewenvironment{proof}[1][]{\\par\\noindent\\textbf{Proof#1: }\\newline\\defenumi}\\noindent\\ignorespaces}{\\hfill\\qedSymbol\\par\\medskip{enumi}\\noindent\\ignorespaces}{\\hfill\\qedSymbol\\par\\medskip}\n\\theoremstyle{definition}\n\\newtheorem{rem}[vdef]{Remark}\n\\newenvironment{Rem}{\\begin{rem}\\par\\vspace{-4ex}\\nopagebreak[2]\\begin{enumerate}%\n }{\\end{enumerate}\\vskip-2ex\\end{rem}}\n\\newtheorem{example}[vdef]{Example}\n\\newenvironment{Examples}{\\begin{example}\\ \\par\\vspace{-1.5ex}%\n \\nopagebreak[2]\\begin{description}}{\\end{description}\\end{example}}\n\\newtheorem{exerc}{Exercise}\n\\newcommand\\defbb[2]{\\newcommand#1{{\\mathbb{#2}}}}\n\\newcommand\\deffrak[2]{\\newcommand#1{{\\mathfrak{#2}}}}\n\\newcommand\\defcal[2]{\\newcommand#1{{\\mathcal{#2}}}}\n\\newcommand\\redefcal[2]{\\renewcommand#1{{\\mathcal{#2}}}}\n\\newcommand\\defrm[2]{\\newcommand#1{{\\mathrm{#2}}}}\n\n\n\\def\\[{\\begin{equation}}\n\\def\\]{\\end{equation}}\n\\def\\beq{\\begin{equation}} \n\\def\\eeq{\\end{equation}} \n\\def\\begin{eqnarray}{\\begin{eqnarray}}\n\\def\\end{eqnarray}{\\end{eqnarray}}\n\\def\\begin{align}} % if you feel 
{eqnarray} is buggy try to convert to {align{\\begin{align}} \n\\def\\end{align}{\\end{align}}\n\\def\\begin{equation}{\\begin{equation}}\n\\def\\end{equation}{\\end{equation}}\n\\def\\<{(}\n\\def\\>{)}\n\\def\\mathbb{1}{\\mathbb{1}}\n\\def\\frac12{\\frac12}\n\\def\\:{\\colon}\n\\def\\mathcal{A}{\\mathcal{A}}\n\\DeclareMathOperator\\Ad{Ad}\n\\DeclareMathOperator\\ad{ad}\n\\def{\\text{alt.}}{{\\text{alt.}}}\n\\def\\beta{\\beta}\n\\def\\gamma{\\gamma}\n\\redefcal\\C{C}\n\\DeclareMathOperator\\CDO{CDO}\n\\def\\conn_#1#2{\\nabla_{\\!#1\\,}#2}\n\\def{C^\\infty}{\\mathcal{C}^\\infty}\n\\def\\:{\\colon}\n\\def{\\text{cycl.}}{{\\text{cycl.}}}\n\\defcal\\CD{D}\n\\defrm\\ud{d}\n\\def\\udCE{{\\ud_{\\mathrm{CE}}}} \n\\defrm\\uD{D}\n\\def\\uD{\\uD}\n\\def{{}^E\\!{}\\mathrm{D}}{{{}^E\\!{}\\mathrm{D}}}\n\\DeclareMathOperator\\Der{Der}\n\\let\\Euler=\\epsilon\n\\def\\varepsilon{\\varepsilon}\n\\def\\epsilon{\\epsilon}\n\\def{\\mathcal{E}}{{\\mathcal{E}}}\n\\def\\hookrightarrow{\\hookrightarrow}\n\\DeclareMathOperator\\End{End}\n\\DeclareMathOperator\\ev{ev}\n\\defcal\\CF{F}\n\\def\\CF{\\CF}\n\\def\\g{\\g}\n\\def\\Gamma{\\Gamma}\n\\def\\Econn_#1{{{}^E{}\\nabla}_{\\!#1\\,}}\n\\deffrak\\g{g}\n\\deffrak\\h{h}\n\\DeclareMathOperator\\Hom{Hom}\n\\defcal\\I{I}\n\\let\\ti=\\i\n\\DeclareMathOperator\\id{Id}\n\\DeclareMathOperator\\im{im}\n\\let\\iso=\\cong\n\\def\\Koszul{\\Euler} \n\\DeclareMathOperator\\Lie{Lie}\n\\def\\L_#1{\\mathcal{L}_{#1\\,}}\n\\def\\circlearrowright{\\circlearrowright}\n\\let\\lra=\\leftrightarrow\n\\let\\LRA=\\Leftrightarrow\n\\def\\mu{\\mu}\n\\defcal\\CM{M} \n\\def{\\CM_1}{{\\CM_1}}\n\\def{\\CM_2}{{\\CM_2}}\n\\def\\CM{\\CM}\n\\defbb\\N{N}\n\\def{{}^E{}\\nabla}{{{}^E{}\\nabla}}\n\\def\\Econn_#1{{{}^E{}\\nabla}_{\\!#1\\,}}\n\\def\\Wconn_#1{{{}^W\\nabla}_{\\!#1\\,}}\n\\def{{}^W\\nabla}{{{}^W\\nabla}}\n\\def{{}^E{}\\Omega}{{{}^E{}\\Omega}}\n\\DeclareMathOperator\\Part{Part}\n\\newcommand{\\pfrac}[2]{\\ensuremath{\\frac{\\partial #1}{\\partial 
#2}}}\n\\renewcommand\\pmatrix[2]{\\left(\\begin{array}{#1}#2\\end{array}\\right)}\n\\def\\mathrm{pt}{\\mathrm{pt}}\n\\def\\Qo{{Q_1}} \n\\def{Q_2}{{Q_2}}\n\\def\\Real{\\Real}\n\\defbb\\Real{R}\n\\def{{}^E\\!{}R}{{{}^E\\!{}R}}\n\\DeclareMathOperator\\rk{rk}\n\\providecommand\\eqref[1]{(\\ref{#1})}\n\\def\\sigma{\\sigma}\n\\def\\x{\\sigma} \n\\def\\Sigma{\\Sigma}\n\\def{C^\\infty}{{C^\\infty}}\n\\def\\twoheadrightarrow{\\twoheadrightarrow}\n\\def\\longrightarrow{\\longrightarrow}\n\\def\\xrightarrow{\\xrightarrow}\n\\def\\trace{\\trace}\n\\DeclareMathOperator\\Unsh{Un}\n\\defbb\\V{V} \n\\defbb\\VV{V} \n\\def\\wedge{\\wedge}\n\\defW{W}\n\\deffrak\\X{X}\n\\def{\\X_\\bullet(\\CM)}{{\\X_\\bullet(\\CM)}}\n\\def{\\X_\\bullet^{\\mathrm{vert}}(\\CM)}{{\\X_\\bullet^{\\mathrm{vert}}(\\CM)}}\n\\def\\X_0^{\\mathrm{vert}}(\\CM){\\X_0^{\\mathrm{vert}}(\\CM)}\n\\def\\X_{-1}^{\\mathrm{vert}}(\\CM){\\X_{-1}^{\\mathrm{vert}}(\\CM)}\n\\defbb\\Z{Z}\n\\newcommand\\zero{\\item[0.]}\n\n\n\\newcommand\\newpassage{4mm}\n\n\n\\begin{document}\n\\maketitle\n\\vskip-2ex\n\\centerline{\\def\\thefootnote{\\fnsymbol{footnote}}%\n \\quad \\\\\n}\n\\medskip\n\\begin{abstract}\n Starting with minimal requirements from the physical experience with higher gauge theories, i.e.~gauge theories for a tower of differential forms of different form degrees, we discover that all the structural identities governing such theories can be concisely recombined into a so-called Q-structure or, equivalently, an $L_\\infty$-algebroid. This has many technical and conceptual advantages: Complicated higher bundles become just bundles in the category of Q-manifolds in this approach (the many structural identities being encoded in the one operator $Q$ squaring to zero), gauge transformations are generated by internal vertical automorphisms in these bundles and even for a relatively intricate field content the gauge algebra can be determined in some lines only and is given by the so-called derived bracket construction. 
\n\nThis article aims equally at mathematicians and theoretical physicists; each more physical section is followed by a purely mathematical one. While the considerations are valid for arbitrary highest form-degree $p$, we pay particular attention to $p=2$, i.e.~1-~and 2-form gauge fields coupled non-linearly to scalar fields (0-form fields). The structural identities of the coupled system correspond to a Lie 2-algebroid in this case and we provide different axiomatic descriptions of those, inspired by the application, including e.g.~one as a particular kind of a vector-bundle twisted Courant algebroid. \n\\end{abstract}\n\\newpage\n\\tableofcontents\n\\section{Introduction and Results}\n\\subsection{Motivation}\nThese are basic and general considerations on higher gauge theories, i.e.~gauge theories where the standard connection 1-forms of Yang-Mills (YM) gauge theories are replaced by a whole tower of such gauge fields of different form degrees. Standard Yang-Mills theories are governed by some (finite-dimensional, quadratic) Lie algebra $\\g = \\Lie(G)$, where $G$ is the structure group of the underlying principal bundle. On the other hand, when one attempts to construct an interacting theory of 1-form gauge fields, various consistency conditions would lead one to a constraint that has to be satisfied by the interaction constants $C^a_{bc}$, namely\\footnote{There is a second condition that one obtains: The constants $\\eta_{ab}$ entering the ``free'' kinetic term in a functional is also constrained; it has to correspond to an ad-invariant scalar product.}\n\\begin{equation} \\label{Jacobi}\nC^a_{eb} C^e_{cd} + {\\text{cycl.}}(bcd) = 0 \\, .\n\\end{equation}\nThis constraint is easily recognized as the Jacobi identity of a Lie algebra. Thus, also in the other direction, one is led by itself to a Lie algebra $\\g$ which, in a second step, then can be related to a principal bundle as mentioned before. 
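As an aside (this illustration is ours, not part of the original argument), the constraint \\eqref{Jacobi} is straightforward to verify numerically for any candidate set of interaction constants $C^a_{bc}$; the sketch below assumes the su(2) structure constants $C^a_{bc}=\\epsilon_{abc}$ purely as an example:

```python
import numpy as np

# Illustrative check of the consistency condition (the Jacobi identity)
#   C^a_{eb} C^e_{cd} + cycl.(bcd) = 0
# for a candidate set of interaction constants.  The su(2) choice
# C^a_{bc} = eps_{abc} is our example, not taken from the text.

def levi_civita():
    """Return the rank-3 Levi-Civita symbol eps_{abc}."""
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    return eps

def jacobi_violation(C):
    """Max entry of C^a_{eb} C^e_{cd} + cycl.(b,c,d); zero iff Jacobi holds."""
    J = np.einsum('aeb,ecd->abcd', C, C)   # C^a_{eb} C^e_{cd}
    J += np.einsum('aec,edb->abcd', C, C)  # cyclic: (b,c,d) -> (c,d,b)
    J += np.einsum('aed,ebc->abcd', C, C)  # cyclic: (b,c,d) -> (d,b,c)
    return np.abs(J).max()

print(jacobi_violation(levi_civita()))  # 0.0: eps_{abc} defines a Lie algebra
```

A generic, non-antisymmetric choice of constants fails this check, which is precisely why the deformation procedure forces a Lie-algebra structure.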
In this paper we want to follow this strategy for the much more involved context of gauge fields of form degrees up to some $p \\in \\N$, including 0-forms for completeness. \n\nIn place of \\eqref{Jacobi} one then obtains a much more involved set of equations for the unknown parameters introduced in an ansatz for the multiple interactions. Since we also permit the 0-forms, these are even differential equations. It is one of the goals of this paper to identify the underlying mathematical structure to which this coupled system of structural equations corresponds, in generalization of the incomparably simpler relation of \\eqref{Jacobi} with a structural Lie algebra $\\g$, needed for the construction of an ordinary Yang-Mills theory. \n\nAnother goal is, however, to provide one mathematical formulation of what ``consistency conditions'', often imposed in more physically oriented constructions, can mean. In particular, we aim at isolating a basic or minimal set of such requirements, which we believe the physics community would usually want to have and which we found to be fulfilled in all the examples we are aware of. \n\nA few remarks are in order here: First, there exists a very general and elegant approach to the \\emph{deformation} of free gauge theories to interacting ones, given by a BRST-BV-formalism, cf.~\\cite{BBH00} for a review. There one starts with just the kinetic terms of the fields that one wants to consider and the related, simple-to-find free gauge transformations. 
The result of the procedure provides the most general possible action that contains the original terms to lowest order and that preserves the ``number'' of the gauge symmetries, although not their form; e.g.~starting with $r$ 1-form fields $A^a$ and the free (abelian) action functional \n\\begin{equation}\\label{Skin}\nS[A] = \\int_\\Sigma \\eta_{ab} \\, \\ud A^a \\wedge *\\ud A^b \\, ,\n\\end{equation}\none is led to the most general Yang-Mills functional for a quadratic Lie algebra $\\g$ of dimension $r$ (cf.~the above discussion including the previous footnote), the abelian curvature or field strength $\\ud A^a$ being effectively replaced by its non-abelian generalization $F^a = \\ud A^a + \\frac{1}{2}C^a_{bc}A^b A^c$ in the above functional. There are several advantages of this procedure: except for possible problems of global nature or of convergence (one works in the setting of formal power series), the approach is completely systematic and extensive, providing, moreover, the most general (formal) deformation already up to trivial ones, i.e.~those that can be induced by changes of coordinates on the space of fields and that are captured by BRST-exact terms vanishing in the cohomological treatment; furthermore, the resulting theory already comes in its BV-form, thus ready for a subsequent quantization. There is, however, one decisive drawback of this formalism: The calculations are generically very technical and lengthy, and the resulting constraints on the parameters of the deformation do not come with a mathematical interpretation. This does not pose a problem if they are of a comparatively simple form like in Equation \\eqref{Jacobi} (accompanied by a second condition expressing the ad-invariance of $\\eta$ in this case), where one retrieves a more mathematical interpretation rather easily. 
But it does pose one in a context like the present one, where the structural equations that one finds can easily fill a page (cf., e.g., results like those in \\cite{Bizdadea09,Bizdadea09b,Bizdadea10}).\n\nSecond, as is also already transparent from the above YM example, when the number of equations blows up upon adding another permitted form degree, it may be useful, and will prove so, to separate the equations that replace the defining condition of the Lie algebra, Eq.~\\eqref{Jacobi}, from objects defined on it, like the ad-invariant tensor $\\eta$, which is needed for the functional (cf.~the discussion above) and thus the dynamics of the physical fields, but not yet for the underlying geometry of a principal bundle. Note that a metric $h$ on $\\Sigma$ entered the functional (in terms of the Hodge duality operation $*$) and in more complicated theories the gauge transformations leaving a functional invariant can and will also depend on $h$ (cf.~\\cite{DSM} for an example), while some important part of the gauge transformations, to be specified below, will prove to be independent of this additional structure. Both these observations lead us to focus on generalizations of the curvature or field strength in a first step, where neither $\\eta$ nor $h$ enters. \n\nThis leads us also to postpone questions of the global bundle structure to a separate investigation: one may want to find the generalization of the structural Lie algebra, that governs the theory already on a local level, in a first instance. 
\nThis restricted focus permits one to work in a local chart on the spacetime or base manifold $\\Sigma$, to replace a connection by a set of 1-forms (in the ordinary YM case, but generalized to include higher form degrees here) and to deal with the question of how to represent an (adequately generalized) connection on a possibly non-trivial (higher) bundle in a second step only.\\footnote{In fact, this investigation appeared in the meantime already, at least on the arXiv, cf.~\\cite{KS07}. The present article thus can be seen, not solely but also, as a possible introduction or motivation from the perspective of physics to \\cite{KS07},\nwhich has an essentially mathematical focus. We will describe the resulting global picture in subsection \\ref{sec:phys} below, cf., e.g., Fig.~\\ref{fig1}.} \n\n\\subsection{Structure of the article}\nThe paper is organized as follows. In the two subsections \\ref{sec:phys} and \\ref{sec:math} we summarize the main results of the present paper. \n\nIn Section~\\ref{s:Bian} we expand in detail on the first part of what is explained in subsection \\ref{sec:phys} below. In particular, we draw the necessary mathematical conclusions from the physical requirements, which we believe to be minimal or mandatory for any higher gauge theory. The main result of these considerations is summarized in Theorem~\\ref{thm:phys1} below. In Section~\\ref{s:examQ} we unravel the geometric structure of Q2-manifolds, as they appear as what replaces the structural Lie algebra of an ordinary YM gauge theory when considering a higher gauge theory with 0-form, 1-form, and 2-form gauge fields. They are in equivalence with Lie 2-algebroids or 2-term $L_\\infty$-algebroids. 
In Section~\\ref{s:examQ} we provide, in particular, a description generalizing the one of a Lie 2-algebra as a crossed module. In the Appendix~\\ref{s:Linf} we complement this by a description in terms of multiple brackets as usual for $L_\\infty$-algebras (cf., also, subsection \\ref{sec:math} below).\n\nIn Section~\\ref{s:gauge} we take another step in the construction of a ``physically acceptable'' higher gauge theory, focusing on the gauge symmetries in this context. The main result of this investigation is summarized in Theorems~\\ref{thm:phys2} and \\ref{theo:Lie} below. Having been led to regard derived brackets in this context, we take another view on Q2-manifolds or Lie 2-algebras from this perspective in the subsequent Section~\\ref{sec:derived}. This motivates the definition of what we call a V-twisted Courant algebroid. We derive its elementary properties in that section and provide sufficient conditions under which it is equivalent to a Lie 2-algebroid again. \n\nAppendix~\\ref{s:Linf} reviews briefly the relation between Q-structures and $L_\\infty$-algebras\/algebroids in general. \n\n\\subsection{Chronology of the results}\\label{s:hist}\n\nSections \\ref{s:Bian}, \\ref{s:examQ}, and \\ref{s:gauge} were written already in 2005, Section \\ref{sec:derived} between 2005 and 2008 (as visible also by several talks of TS and a partial distribution of preliminary versions). The Introduction and the Appendix, as well as some fine-tuning of the remainder, are from the end of 2013\/beginning of 2014, when we finally finished the article. For TS the results of this work as of 2005 were among the main motivations for starting his work with Alexei Kotov on characteristic classes and Q-bundles, the first results of which appeared as a preprint in 2007 on the arXiv and are now published right behind this article in the present journal \\cite{KS07}. 
We attempted to leave the Sections \\ref{s:Bian}, \\ref{s:examQ}, \\ref{s:gauge}, and \\ref{sec:derived} as much as possible in their original form from the time of 2005 to 2008, including an updated perspective only in the two subsections to follow (to which we then also moved important formulas and statements from those older sections, like Equation \\eqref{ideal} and Theorem \\ref{thm:phys1}, respectively).\n\nFor what concerns our work summarized in Section \\ref{sec:derived}, we found out later that there were parallel developments arriving at a similar notion of a vector-bundle twisted Courant algebroid to the one given in the present paper. It is interesting to see that quite different motivations and starting points led to the same or similar objects of interest at about the same time. In the following, we comment on the relation in more detail: Chen, Liu and Sheng have introduced $E$-Courant algebroids in \\cite{CLS08}. An $E$-Courant algebroid as defined by them is a $V$-twisted Courant algebroid as defined in this paper after identifying their vector bundle $E$ with our $V$ iff in addition their inner product is surjective onto this vector bundle. Conversely, a $V$-twisted Courant algebroid is a $V$-Courant algebroid in their sense iff the anchor $\\rho$ factors through $\\mathfrak{D}V$, the derived module of $V$. Similarly, Li-Bland introduced an $AV$-Courant algebroid in \\cite{LiB11}. This is a $V$-twisted Courant algebroid in our sense and, conversely, every $V$-twisted Courant algebroid is an $AV$-Courant algebroid iff the anchor map factors through a Lie algebroid $A$ and the action of the twisted Courant algebroid on $V$ induces an action of $A$ on $V$.\n \n\n\n\\subsection{Acknowledgments}\nWe thank the TPI in Jena and in particular A.~Wipf for the support of our work in 2005 and 2013, the ESI in Vienna for support in 2007, as well as the ICJ in Lyon for similar support in 2008 and 2013. 
\nWe also thank the anonymous referee for helpful suggestions, such as a change of the subtitle. T.S.~thanks A.~Kotov for many valuable discussions during our stimulating parallel work. \n\n\\subsection{Mathematical framework for physical theories} \\label{sec:phys}\nLet us now describe our approach in some detail and summarize the main results of the paper. As mentioned above, we consider a tower of differential forms. Let us denote them collectively by $A^\\alpha$ and call them the ``gauge fields'' of the theory under investigation. So $A^\\alpha$ collects differential forms of various form degrees, in general a different number at each level: $A^\\alpha = (X^i, A^a, B^C, \\ldots)$, where $(X^i)_{i=1}^n$ are $n$ 0-forms (or scalar fields), $(A^a)_{a=1}^r$ are $r$ 1-forms (called, in a somewhat misleading way, ``vector fields'' in the physics literature), $(B^C)_{C=1}^s$ are $s$ 2-forms etc., up to some highest form degree $p$. At this stage of the consideration we do not want to attach any geometrical interpretation to these differential forms yet, treating them in the most elementary way possible in a first step and trusting that ``physically oriented'' argumentation will lead us to a geometrically interesting picture more or less by itself in the end. \n\nNext we make the most general ansatz for the ``field strengths'' $F^\\alpha$ that is compatible with the form degrees and does not use any additional structures (like a metric $h$ on $\\Sigma$), since we believe that these objects, which in the end should represent in one way or another some generalized curvature(s), should not depend on anything but the data specified up to here. 
Most physicists would agree, moreover, that $F^\\alpha$ should start with the exterior derivative acting on the gauge fields, $F^\\alpha = \\ud A^\\alpha + \\ldots$, and that this should be complemented by terms, represented by the dots, that are polynomial in the gauge fields of form degree 1 and higher, but, to stay as general as possible, with a priori unrestricted coefficient functions of the scalar fields $X^i$. This ansatz might still be considered too restrictive, however. First, one may possibly want to replace the first term in $F^\\alpha$ by a coefficient matrix $M^\\alpha_\\beta$---depending on the $X^i$s---times $\\ud A^\\beta$; second, one may want to permit also $A^\\beta \\wedge \\ud A^\\gamma$-contributions to the remaining terms indicated by the dots above. However, as we will argue in the main text, both of these apparent generalizations of our ansatz can be mapped by a first transformation to a situation of the simpler form above, provided only that the matrix $M^\\alpha_\\beta$ is invertible.\\footnote{Correspondingly, theories whose field strengths contain Chern-Simons-type contributions can also be tackled by the present formalism and, e.g., Theorem \\ref{thm:phys1} below is applicable also in such situations. This is exemplified, for instance, in \\cite{SLS14}.} Thus, anticipating this argument, we can assume without loss of generality that the field strengths have the form\n\\begin{equation} \\label{curv}\nF^\\alpha = \\ud A^\\alpha + t^\\alpha_\\beta(X) A^\\beta + \\tfrac{1}{2} C^\\alpha_{\\beta \\gamma}(X) A^\\beta \\wedge A^\\gamma +\\tfrac{1}{6} H^\\alpha_{\\beta \\gamma\\delta}(X) A^\\beta \\wedge A^\\gamma \\wedge A^\\delta + \\ldots ,\n\\end{equation}\nwhere the coefficients or coefficient functions $t^\\alpha_\\beta$, $C^\\alpha_{\\beta \\gamma}$, $H^\\alpha_{\\beta \\gamma\\delta}$, $\\ldots$ are assumed to be fixed for defining the theory (on a local level and at this stage of constructing the theory). 
Their choice is now assumed to be constrained by some kind of ``physical consistency condition'', like one that would lead to \\eqref{Jacobi} in the case of only 1-form gauge fields $A^a$. It is thus now our task and declared goal to specify one or several possible consistency requirements, motivated by a more or less physical argument, to display the resulting constraints on the coefficients in some explicit form, and, in a final step, to give them some algebraic-geometrical meaning so as to unravel what replaces the structural Lie algebra $\\g$ of a standard YM theory in this much more general setting. \n\nBefore turning to these issues, however, we add two remarks on the nomenclature and the setting. We chose to include the scalar fields (i.e.~0-forms) into what we call gauge fields. In some sense this is unconventional, since they normally describe matter, while the conventional gauge fields (the 1-forms) represent the interaction forces. There are two reasons for this: First, it is suggested by the study of the Poisson sigma model (PSM) \\cite{Ikeda94, Str94}: As argued in \\cite{Str04b} and \\cite{KS07}, the PSM can be seen as a ``Chern Simons (CS) theory'' of the Lie algebroid $T^*M$ associated to a Poisson manifold $M$; just as the CS theory corresponds to the Pontryagin class, the PSM, and more generally all AKSZ sigma models \\cite{AKSZ}, correspond to characteristic classes (cf.~also \\cite{FRS13}). In this context, the whole tower of differential forms corresponds to a single map or section of an adequate higher bundle, which in particular also includes the 0-forms. \n\nThe second reason for including the 0-forms is that even if one wants to consider the 0-forms $X^i$ as matter fields, it is advantageous to include them right away.\\footnote{In this case the ``1-form field strength'' $F^i \\equiv \\ud X^i - \\rho^i_a(X) A^a$ should rather be interpreted as a (generalized) covariant derivative. 
All the considerations concerning the ideal $\\I$ below will apply also to those.} Suppose one considers an ordinary YM-connection with scalar fields being a section of an associated (vector) bundle induced by a representation $\\rho$ of $\\g$\non a vector space or, more generally, by a $\\g$-action on a manifold $M$ that serves as a fiber of the associated bundle. In a local chart, the connection and the section correspond to a collection of 0-forms and 1-forms, $X^i$ and $A^a$, respectively. On the other hand, $E:=\\g \\times M\\to M$ becomes a Lie algebroid in a canonical way and the data of the fields $X^i$ and $A^a$ are the same as a vector bundle morphism $T\\Sigma \\to E$; moreover, the covariant derivative of the scalar fields and the curvature of the connection combine into a field strength map, cf.~\\cite{Str04b,Str09,KS07}. There are many more Lie algebroids\n(cf., e.g., \\cite{Mack87,CaWe99})\n than those coming from a Lie algebra action $\\rho$; if one intends to treat the 0-forms in a second step only, one misses all these additional possibilities. From a more physical perspective, scalar fields are sometimes seen to deform the structural identities underlying the gauge fields of degrees at least one in a highly nontrivial way (cf., e.g., \\cite{Str04b} or some supergravity theories). Thus also from this perspective it seems reasonable not to exclude a geometrical explanation of such structures (in terms of some Lie algebroids, for example) from the outset by restricting to a two-step procedure.\n\nThe second remark concerns the nomenclature. We have chosen to call the $A^\\alpha$-fields \\emph{gauge fields} and the combinations $F^\\alpha$ as in \\eqref{curv} \\emph{field strengths}, following a ``physics oriented'' wording. In particular, we avoid calling them generalized connections and curvatures, as might seem more adequate from a (superficial) mathematical perspective. The reason for this is the following. 
As is already obvious from the example of the PSM given previously, what replaces the structural Lie algebra $\\g$ (as well as its integration $G$) of an ordinary principal bundle can, and in general will, be a bundle itself, in fact even some kind of higher bundle. So the fibers of what replaces principal and associated bundles will be bundles themselves (and it may be useful to choose auxiliary connections for technical reasons, which must be strictly distinguished from the ``dynamical'' gauge fields $A^\\alpha$). But even worse, a Poisson manifold, for example, does not always give rise to an integrating Lie groupoid; there can be obstructions in the integration process \\cite{CrF01}, while the above-mentioned characteristic classes as well as the PSM still exist as meaningful objects for any Poisson manifold $M$. To cover all these examples in a common mathematical framework it is thus advisable to work with bundles (over the base $\\Sigma$ or its tangent bundle $T\\Sigma$) for which one does not need the analogue of the structural Lie group $G$, but where it is sufficient to work with its Lie algebra $\\g$. In the ordinary setting this is the so-called Atiyah algebroid $A\\to T\\Sigma$ associated to a principal bundle $P\\to \\Sigma$, where $A$ can be identified with $TP\/G$ and, as a bundle over $T\\Sigma$, its fiber is isomorphic to $\\g$ only (cf.~\\cite{KS07} for further details). Now a connection in $P$, i.e.~an ordinary gauge field in physics language, corresponds to a \\emph{section} in $A$, to be distinguished from connections in $A$, which exist on it as on any bundle. (The generalization of $A$ will be what was called a $Q$-bundle in \\cite{KS07}.) These potential sources of confusion with other natural connections led us to avoid calling $A^\\alpha$ a generalized connection.\n\nWe now come back to our principal goal, finding conditions for the restriction of the coefficients appearing in \\eqref{curv}. 
The approach nearest at hand may seem to be to look at gauge transformations and to require some generalized form of invariance of the field strengths. This, however, has a decisive disadvantage: There is a lot of freedom in defining gauge transformations, and, as examples show (cf., e.g., the Dirac sigma model (DSM) \\cite{DSM}), they may depend on several additional structures not needed for the definition of the above field strengths. \n\n\nThe main idea for our point of departure is, therefore, to use a generalization of the Bianchi identities. Normally, these would contain a covariant derivative, which we have not yet defined either, since we do not yet know the adequate algebraic-geometric setting to use. Not wanting to use any additional new parameters\/coefficients, we thus formulate a minimalistic version of the Bianchi identities that most people in the field should agree with. Regarding the standard Bianchi identity for an ordinary YM theory, $\\ud F^a + C^a_{bc}A^b\\wedge F^c = 0$, we note that the exterior derivative applied to the field strength yields an expression ``proportional'' to the field strength itself, albeit with a field-dependent ``prefactor''. \n\nMathematically we express this as follows: We require that the exterior derivative $\\ud$ applied to any monomial containing a field strength $F^\\alpha$, i.e.~terms of the form $\\ldots \\wedge A^\\alpha \\wedge F^\\beta \\wedge A^\\gamma \\wedge \\ldots$, yields expressions that again contain at least one field strength. In other words, we consider the ideal $\\I$ generated by expressions of the form \\eqref{curv} within the (graded commutative, associative) algebra $\\mathcal{A}$ generated by abstract elements $A^\\alpha$ and $\\ud A^\\alpha$. The condition we require is then simply\n\\beq \\boxed{\\ud \\I \\subset \\I } \\, ,\\label{ideal} \\eeq\ni.e.~$\\I$ has to be what is called a differential ideal. 
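To illustrate condition \\eqref{ideal} in the simplest instance, consider only 1-form gauge fields $A^a$ with constant coefficients $C^a_{bc}$, so that $F^a = \\ud A^a + \\tfrac{1}{2} C^a_{bc}\\, A^b \\wedge A^c$. A direct computation gives
\\beq
\\ud F^a = C^a_{bc}\\, \\ud A^b \\wedge A^c = C^a_{bc}\\, F^b \\wedge A^c - \\tfrac{1}{2}\\, C^a_{bc} C^b_{de}\\, A^d \\wedge A^e \\wedge A^c \\, .
\\eeq
The first term lies in $\\I$, while the last term contains no field strength; since the 1-forms anticommute, it vanishes identically iff the totally antisymmetrized combination $C^a_{b[c} C^b_{de]}$ vanishes, i.e.~iff the Jacobi identity \\eqref{Jacobi} holds. Thus in this simplest case $\\ud \\I \\subset \\I$ reproduces precisely \\eqref{Jacobi}, together with the standard Bianchi identity displayed above.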
As innocent and weak as this condition may seem, it turns out to restrict the coefficient functions of \\eqref{curv} already very drastically, at least if the dimension $d$ of spacetime $\\Sigma$ exceeds the highest permitted form degree $p$ of $A^\\alpha$ by at least two, i.e.~$d \\geq p+2$. In that case, we will find that \\eqref{ideal} holds true iff the following vector field squares to zero:\n\\begin{equation} \\label{Q1}\nQ= \\left(t^\\alpha_\\beta(x) q^\\beta + \\frac{1}{2} C^\\alpha_{\\beta \\gamma}(x) q^\\beta q^\\gamma +\\frac{1}{6} H^\\alpha_{\\beta \\gamma\\delta}(x) q^\\beta q^\\gamma q^\\delta + \\ldots\\right) \\dfrac{\\partial}{\\partial q^\\alpha} .\n\\end{equation}\nHere we introduced for each gauge field $A^\\alpha$ an abstract coordinate $q^\\alpha$, carrying a degree that equals the form degree of $A^\\alpha$ (for the degree zero variables we also use $q^i \\equiv x^i$ to distinguish them from the others, since they can appear non-polynomially).\nThese coordinates commute (anticommute) iff the respective differential forms commute (anticommute). Also, since, by its definition, $F^\\alpha = \\ud A^\\alpha + \\ldots$ has a form degree that exceeds the one of $A^\\alpha$ by precisely one, the vector field $Q$ is an odd vector field of degree $+1$. In general, the condition\n\\beq\\label{Qsquare}\nQ^2 \\equiv \\tfrac{1}{2} [Q,Q] = 0\n\\eeq\nsubsumes a whole list of equations and in particular reduces precisely to \\eqref{Jacobi} in the case of coordinates of degree one only. 
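This last reduction is easy to check explicitly, even numerically: for coordinates of degree one only, the coefficient of $q^b q^d q^e$ in $Q^2(q^a)$ is, up to normalization, the cyclic sum $C^a_{bc}C^c_{de} + C^a_{dc}C^c_{eb} + C^a_{ec}C^c_{bd}$. The following small script (an illustrative sketch only, not part of the formalism of the paper) verifies that this sum vanishes for the structure constants $\\epsilon_{abc}$ of $\\mathfrak{su}(2)$ and fails for antisymmetric coefficients that violate the Jacobi identity:

```python
import itertools

# Structure constants C^a_{bc} of su(2): C[a][b][c] = epsilon_{abc},
# the totally antisymmetric symbol with epsilon_{012} = 1.
def eps(a, b, c):
    perms = {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
             (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}
    return perms.get((a, b, c), 0)

C = [[[eps(a, b, c) for c in range(3)] for b in range(3)] for a in range(3)]

def jacobiator(C, a, b, d, e):
    """Cyclic sum C^a_{bc} C^c_{de} + C^a_{dc} C^c_{eb} + C^a_{ec} C^c_{bd}:
    the coefficient (up to normalization) of q^b q^d q^e in Q^2(q^a)."""
    n = len(C)
    return sum(C[a][b][c] * C[c][d][e]
               + C[a][d][c] * C[c][e][b]
               + C[a][e][c] * C[c][b][d] for c in range(n))

# Q^2 = 0 holds iff every such component vanishes -- the Jacobi identity:
assert all(jacobiator(C, a, b, d, e) == 0
           for a, b, d, e in itertools.product(range(3), repeat=4))

# Antisymmetric constants that do NOT define a Lie algebra
# ([e1,e2] = e0, [e0,e1] = e1) fail the same test, i.e. Q^2 != 0:
bad = [[[0] * 3 for _ in range(3)] for _ in range(3)]
bad[0][1][2], bad[0][2][1] = 1, -1
bad[1][0][1], bad[1][1][0] = 1, -1
assert jacobiator(bad, 0, 0, 1, 2) != 0
```

The two assertions realize precisely the statement above: $Q^2=0$ holds iff the Jacobi identity \\eqref{Jacobi} does.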
\n\nThis observation turns out to be rather easy to prove, but so general that we consider it worth being called a fundamental theorem in the context of higher gauge theories:\n\\begin{thm} \\label{thm:phys1} If $\\dim \\Sigma \\geq p+2$, the generalized Bianchi identities \\eqref{ideal} hold true if and only if the vector field \\eqref{Q1} squares to zero, $Q^2=0$.\n\\end{thm}\nThus any higher gauge theory is associated to what is called a differential graded manifold or a Q-manifold \n$\\CM$. \n(In the case of an ordinary YM theory this corresponds to a reformulation of the Lie algebra $\\g$ in terms of its Chevalley-Eilenberg complex; in this case, simply $ (C^\\infty(\\CM), Q) \\cong (\\Lambda^\\cdot \\g^*, \\udCE)$). The most important part of the above theorem is not that a Q-manifold, as a generalization of a Lie algebra, gives rise to some generalized Bianchi identities, but the reverse direction: Even if the Bianchi identities are formulated in as weak a version as (\\ref{bian}), one \\emph{necessarily}\ndeals with a Q-manifold on the target, whether one observes it or not. This is the strength of the above observation. \n\nTogether with \\cite{BKS,KS07} (and the considerations on the gauge symmetries to follow) this paves the way for an elegant, general, and simple picture of higher gauge theories. 
Their field content (and symmetries) is compactly described by a section $a$ in a Q-bundle (cf.~Fig.~\\ref{fig1} below):\n\\begin{figure}\n$$\\begin{xy}\/r0.15pc\/:\n (-50,-70)*{}; (50,-70)*{} **\\crv{(0,-60)}+(0,-7)*\n (T[1]\\Sigma,\\ud)};\n (-50,0)*{}; (50,0)*{} **\\crv{(-20,20) & (20,-20)}; {\\ar_{\\textstyle a}(-10,-65)*{};(-10,2)*{}};\n (-50,-50)*{}=\"A\"; (50,-50)*{} **\\crv{(0,-40)}; (72,30)*{(\\CM_{tot},Q_{tot})};\n (50,-50)*{}; (50,50)*{} **\\crv{(40,-25)&(60,25)};\n (50,50)*{}; (-50,50)*{} **\\crv{(0,60)};\n \"A\"; (-50,50)*{} **\\crv{(-60,-25)&(-40,25)};\n (-35,-47.5)*{};(-35,52.5)*{} **\\dir{-};\n {\\ar(-60,10)*{(\\CM,Q)};(-35,20)*{}};\n\n\n\n (-48,20)*{}; (52,20)*{} **\\crv{~*=<4pt>{.} (-20,30) & (20,10)}; {\\ar_{\\textstyle \\delta a}(0,0)*{};(0,20)*{}};\n\\end{xy}\n$$\n\\caption{\\label{fig1} The resulting geometry governing all higher gauge theories: The generalization of a connection, called a \\emph{gauge field} in this work, corresponds to a section $a$ of a graded bundle $\\CM_{tot}$. The bundle is a Q-bundle and (the principal part of) gauge transformations correspond to vertical inner automorphisms. The section is a Q-morphism iff the generalization of curvature, called the \\emph{field strength} here, vanishes. In physical theories, however, it is almost exclusively those gauge fields which have a non-vanishing field strength that, up to gauge invariance, are of interest.}\n\\end{figure}\nThe tangent bundle $T\\Sigma$ of spacetime $\\Sigma$ can be viewed as a graded manifold when fiber-linear coordinates are regarded as carrying degree 1, functions on this graded manifold $T[1]\\Sigma$ correspond to differential forms, and the nilpotent, degree 1 de Rham differential $\\ud$ equips it with a Q-structure. 
The Q-manifold $(\\CM,Q)$ that we introduced above in the context of (\\ref{Q1}) can be viewed as a target if we interpret the tower of gauge fields $A^\\alpha$ as corresponding to a degree-preserving map from $T[1]\\Sigma$ to \n$\\CM$.\\footnote{We will explain all this in more detail in the main text, but we want to give an overview of the resulting picture already at this point as a guide for what we are going to obtain or derive as a final framework.} As a straightforward generalization of the considerations in \\cite{BKS}, this can be reinterpreted as a section $a$ in a trivial Q-bundle, $(\\CM_{tot},Q_{tot}) := (\\CM \\times T[1]\\Sigma, Q+\\ud)$, an observation that proves essential in the context of describing the gauge symmetries (in \\cite{BKS} for the PSM, here for general higher gauge theories). One of the observations fundamental to \\cite{KS07} is, on the other hand, that this picture generalizes also to the twisted case, i.e.~that a connection in an ordinary principal bundle corresponds just to a section of a (possibly non-trivial) Q-bundle. \n\nWe add a remark in parentheses: The section $a \\colon T[1]\\Sigma \\to \\CM_{tot}$, corresponding to our gauge fields, is a section in the category of ($\\N_0$-)graded manifolds, and not in the sense of the category augmented by a Q-structure. It becomes an ``honest'' section in the category of Q-bundles, i.e.~a Q-morphism, iff all the field strengths $F^\\alpha$ vanish. As nice as this may seem from a purely mathematical perspective, it is equally inadequate from the one of physics: It would exclude all the particles (like photons, gluons, etc.) for which gauge fields were introduced in physics in the first place. 
In fact, even in topological models, such as the PSM, the DSM, or the AKSZ sigma models, the gauge fields become Q-morphisms only ``on-shell'', i.e.~on the level of the Euler-Lagrange equations of the corresponding action functionals, cf.~\\cite{BKS,DSM,KS08} for the respective proofs.\n\nLet us now address the gauge symmetries. Similarly to the general ansatz (\\ref{curv}), we make one for the gauge symmetries\n\\begin{equation}\\label{gaugedelta0}\n\\delta^{(0)}_{\\epsilon} A^\\alpha = \\ud \\epsilon^\\alpha + \\left(s^\\alpha_\\beta(X) + D^\\alpha_{\\gamma \\beta}(X) A^\\gamma +\\tfrac{1}{2} G^\\alpha_{ \\gamma\\delta\\beta}(X) A^\\gamma \\wedge A^\\delta + \\ldots \\right)\\wedge \\epsilon^\\beta,\n\\end{equation}\nfor some collection $\\epsilon^\\alpha$ of differential forms of degree up to $p-1$, where we may allow the total gauge transformations to also \ncontain a part proportional to $F^\\alpha$ again:\n\\begin{equation} \\label{gaugedelta}\n\\delta_{\\epsilon} A^\\alpha = \\delta^{(0)}_\\epsilon A^\\alpha + O(F^\\alpha) \\, . \n\\end{equation} \nIt is important to remark that the terms $O(F^\\alpha)$ have to be in the ideal $\\I$ for the following to remain true. This still includes higher gauge theories as they appear, for example, as part of gauged supergravity, cf.~\\cite{SSW11,SLS14,SKS14}, where they play an important role, but it excludes terms containing e.g.~$*F^\\alpha$ (as they appear in the DSM \\cite{DSM}) or terms containing ``contractions'' of $F$ (with the inverse of the part of $A$ that may describe a vielbein), as they typically appear in the gravitational part of supergravity theories. 
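For orientation: in an ordinary YM theory, where only 1-form gauge fields $A^a$ and 0-form parameters $\\epsilon^a$ are present, the ansatz \\eqref{gaugedelta0} reduces, with $D^a_{bc} = C^a_{bc}$ and all other coefficients absent, to the familiar
\\beq
\\delta_\\epsilon A^a = \\ud \\epsilon^a + C^a_{bc}\\, A^b \\epsilon^c \\, ,
\\eeq
and a one-line computation using \\eqref{Jacobi} yields $\\delta_\\epsilon F^a = C^a_{bc}\\, F^b \\epsilon^c$, which manifestly lies in the ideal $\\I$ generated by the field strengths.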
In analogy to (\\ref{ideal}), we now require that our gauge transformations also preserve the ideal \n$\\I$ generated by the field strengths,\n\\beq \\boxed{\\delta_\\varepsilon \\I \\subset \\I } \\, .\\label{deltaI} \\eeq\nThe result we obtain, illustrated and proven in detail in the main text, is the following (cf.~also Fig.~\\ref{fig1}):\n\\begin{thm} \\label{thm:phys2}\nProvided $\\dim \\Sigma \\geq p+1$ and $Q^2=0$, infinitesimal gauge transformations of the form (\\ref{gaugedelta}) leave the field strength ideal $\\I$ invariant, Eq.~\\eqref{deltaI}, iff their principal part $\\delta^{(0)}_\\varepsilon$ corresponds to vertical inner automorphisms of the Q-bundle $(\\CM_{tot},Q_{tot})$. Moreover, if the infinitesimal gauge transformations are of the more restricted form (\\ref{gaugedelta0}), $\\delta_\\varepsilon \\equiv \\delta^{(0)}_\\varepsilon$, one does not need to require $Q^2=0$, but one can conclude it together with the generalized Bianchi identities (\\ref{bian}). \n\\end{thm}\nLet us briefly explain what, to our mind, inner automorphisms of a Q-manifold should be. An automorphism should be a degree-preserving diffeomorphism that leaves the vector field $Q$ invariant. Infinitesimally this corresponds to a degree zero vector field annihilated by $\\ad_Q$. It is an immediate consequence of (\\ref{Qsquare}) that $\\ad_Q$ squares to zero as well. We call an inner automorphism of a Q-manifold one that is generated by the image of $\\ad_Q$, i.e.~by vector fields of the form $\\ad_Q(\\epsilon)$, where $\\epsilon$ is a vector field of degree $-1$ on the Q-manifold. If the Q-manifold is an ordinary Lie algebra (shifted in degree by one) $\\g[1]$, this corresponds to the usual definition of inner automorphisms, i.e.~to the adjoint action of the integrating group $G$ on $\\g=\\Lie(G)$ (generated infinitesimally by the left action on itself). 
For $(T[1]\\Sigma, \\ud)$ this definition of inner automorphism gives infinitesimally precisely the Lie derivatives along vector fields on $\\Sigma$, so these automorphisms correspond to the canonical lift of a diffeomorphism of $\\Sigma$ to its tangent bundle. \n\nGauge transformations of an ordinary YM-theory are generated by vertical automorphisms of the underlying principal bundle $P$. They correspond precisely to inner vertical automorphisms in the above sense for the associated Q-bundle $T[1]P\/G$. The advantage of the present formulation is that, in this form, it generalizes in a straightforward way to \\emph{all} higher gauge theories. \n\nSome further explanations are in order: First, additional structures entering a particular theory can certainly constrain the symmetries permitted in the respective case. For example, in the Poisson sigma model or, more generally, the AKSZ sigma models, the inner automorphisms also have to preserve the symplectic form defined on the fibers, which is compatible with the Q-structure there. For the PSM, for example, this implies that its gauge symmetries are generated infinitesimally by the Lie subalgebra $\\Omega^1_{closed}(M)$ of $\\Gamma(T^*M)$ ($T^*M$ viewed as the Lie algebroid of the Poisson manifold $M$ in this case). Second, there are cases where the gauge transformations including the additional terms of the form $O(F)$ in (\\ref{gaugedelta}) can be given a similarly good geometrical interpretation. For example, as will be shown in \\cite{SLS14}, for the higher gauge theory sector of the (1,0) superconformal model in six spacetime dimensions \\cite{SSW11}, the $O(F)$-part is essential for an off-shell closed algebra of gauge transformations on the one hand. 
On the other hand, in this case the gauge transformations (\\ref{gaugedelta}) correspond to nothing but inner automorphisms of another Q-bundle associated to the one in Fig.~\\ref{fig1}, where the typical fiber $(\\CM,Q)$ is replaced by $(T[1]\\CM, \\ud_{\\CM} \n+ \\L_Q)$ (cf.~also \\cite{KS07,SaS13}).\\footnote{Any Q-bundle $(\\CM_{tot},Q_{tot}) \\to (\\CM_1,Q_1)$ can be lifted to a tangent Q-bundle $(T[1]\\CM_{tot}, \\ud_{\\CM_{tot}} \n+ \\L_{Q_{tot}}) \\to (T[1]\\CM_1, \\ud_{\\CM_1} + \\L_{Q_1})$. Using the canonical 0-section $\\sigma \\colon \\CM_1\\to T[1]\\CM_1$, we can pull back $T[1]\\CM_{tot}$ to become a bundle over $\\CM_1$, which turns out to be compatible with the respective Q-structures.}\n\nThird, it is important to be aware that $Q_{tot}$ projects to $\\ud$ on the base of the bundle, and it is in this way that $\\ud \\varepsilon^\\alpha$ is generated in (\\ref{gaugedelta0}), (\\ref{gaugedelta}); this becomes possible only by going to the picture of a bundle, even in the case of a local treatment, where the bundle (at least when defined as in \\cite{KS07}) is trivial. \n\nIn the present description, the vector fields generating the gauge transformations always form a Lie algebra. They are vertical vector fields of degree zero of the form $\\ad_{Q_{tot}}(\\epsilon)$ and they close with respect to the usual commutator of vector fields. They are parameterized by vertical vector fields $\\epsilon$ of degree minus one \\emph{up to $\\ad_{Q_{tot}}$-closed} contributions. On the vector fields $\\epsilon$ one then discovers an induced bracket of the form \n\\begin{equation}\\label{derQtot}\n[\\epsilon,\\epsilon']_{Q_{tot}} \\equiv [\\ad_{Q_{tot}}(\\epsilon), \\epsilon'] \\, ,\n\\end{equation} called the ``derived bracket'' (following Cartan and \\cite{YKS03en}). For example, if one considers the Q-manifold $(\\g[1], \\udCE)$, the derived bracket just reproduces the Lie algebra bracket. 
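The last statement follows from a one-line computation. With $Q = -\\tfrac{1}{2} C^a_{bc}\\, q^b q^c \\frac{\\partial}{\\partial q^a}$ on $\\g[1]$ (the Chevalley-Eilenberg differential; signs depend on conventions) and constant vector fields $\\epsilon = \\epsilon^a \\frac{\\partial}{\\partial q^a}$, $\\epsilon' = \\epsilon'^a \\frac{\\partial}{\\partial q^a}$ of degree $-1$, one finds
\\beq
\\ad_Q(\\epsilon) = [Q,\\epsilon] = - C^a_{bc}\\, \\epsilon^b q^c\\, \\frac{\\partial}{\\partial q^a} \\, , \\qquad [\\ad_Q(\\epsilon), \\epsilon'] = C^a_{bc}\\, \\epsilon^b \\epsilon'^c\\, \\frac{\\partial}{\\partial q^a} \\, ,
\\eeq
i.e.~identifying the constant degree $-1$ vector fields with elements of $\\g$, the derived bracket reproduces precisely the Lie bracket $[\\epsilon,\\epsilon']^a = C^a_{bc}\\, \\epsilon^b \\epsilon'^c$.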
In general, however, this bracket is only a Leibniz--Loday bracket and not necessarily antisymmetric; it becomes antisymmetric on the coexact elements, i.e.~for those $\\epsilon$ where one considers the quotient by closed contributions (which do not change the gauge transformation, as argued above). In this way one obtains Theorem \\ref{theo:Lie} in the main text.\\footnote{To simplify the notation in that section, we called the total Q-structure simply $Q$ and renamed the Q-structure on the fiber as $Q_2$.}\n\nIn a conventional treatment of gauge symmetries (i.e.~along the lines described e.g.~in \\cite{HT}), they form a so-called \\emph{open} algebra for a generic higher gauge theory. This is not in contradiction with the present, to our mind more elegant, formulation of its gauge symmetries. To resolve the apparent paradox, it is useful to consider the PSM as a simpler toy model, which already shows all the essential features of this behavior. In \\cite{BKS} it is shown, in particular, that one can obtain the usual, only on-shell closed algebra of gauge symmetries of the PSM from an incomparably shorter calculation in the above-mentioned framework. This generalizes to the context of any higher gauge theory. For instance, for a higher gauge theory containing 0-forms $X^i$, 1-forms $A^a$, and 2-forms $B^C$, one obtains in this way the result of Proposition \\ref{prop:onshellclosed} below. We will also comment further on this issue in \\cite{SLS14,GSS14}. 
\n\nWhile one can still discuss the usefulness or necessity of using the exterior derivative $\\ud$ in vector analysis or $ \\udCE$ for Lie algebras, there will be no doubt in the case of $p \\ge 2$; already for $p=2$ there is, in general, a whole list of coupled differential equations satisfied by the structural quantities, cf.\\ Eqs.~\\eqref{B1}--\\eqref{B7} in the main text below, which are to be used in almost every calculation related to such a gauge theory and which can be replaced and subsumed in a single operator $Q$ squaring to zero. The technical usefulness becomes particularly obvious in the above-mentioned context of the calculation of the commutator of gauge symmetries, but also already for the concrete form of the Bianchi identities (cf., e.g., the derivation of Eqs.~\\eqref{BFi}--\\eqref{BFB} in the main text below).\n\nLet us briefly come back to Fig.~\\ref{fig1} and the corresponding two theorems, Thm.~\\ref{thm:phys1} and \\ref{thm:phys2}. Not only is the base of the Q-bundle in fact a bundle in ordinary differential-geometric terms; this holds even more so for the fibers: the Q-manifold $(\\CM,Q)$ isomorphic to the typical fiber is generically a rather intricate (higher) bundle structure itself. We will come back to this issue in more detail in subsection \\ref{sec:math} below. What we consider important in this context is that the infinite-dimensional space of gauge fields is described as sections in \\emph{finite-dimensional} (graded) bundles. This gives additional mathematical control. The situation can be compared with the interest in Lie algebroids (for a definition cf., e.g., Definition~\\ref{def:Loid}): While for infinite-dimensional Lie algebras topological questions usually play an important role and are intricate to deal with, the situation simplifies greatly when such an infinite-dimensional Lie algebra comes from the sections of a Lie algebroid. 
\n\nLikewise, the above operator $Q$ (or also $Q_{tot}$) must not be confused with a BV- or BRST-operator either. Although the BV-BRST operator $Q_{BV}$ also squares to zero, $Q_{BV}^2=0$, and is of degree +1, it is defined over the \\emph{infinite-dimensional} graded manifold of (gauge) fields, ghosts, and their antifields, and, more importantly, it in general needs many more structures to be defined, namely those needed for an action functional, which we have not addressed much up to here (and for which many options are still open).\\footnote{Correspondingly, also the discovery of the role of the derived bracket in the context of the gauge symmetries, cf.~Thm.~\\ref{theo:Lie}, is not, or at most vaguely, related to the role of the derived bracket in the BV-context as observed in \\cite{YKS97}.} \n\n\nThere are very particular models of topological nature, like the Poisson sigma model or, more generally, the AKSZ-sigma models \\cite{AKSZ}, on the other hand, where there \\emph{is} a close relation between the two Q-structures $Q$ and $Q_{BV}$: $Q$ corresponds to a defining Q-structure on the target of the sigma model, which is also equipped with a compatible symplectic form $\\omega$, and $\\ud$ is a Q-structure on its source $T[1]\\Sigma$, all very much like in Fig.~\\ref{fig1}. Then $a \\colon T[1]\\Sigma \\to \\CM$ is no longer degree preserving (so as to capture ghosts and antifields), but a general supermap, and $Q_{BV}$ is, roughly speaking, the ``difference'' of $Q$ and $\\ud$ induced on this space of supermaps $\\CM_{BV}$. Using $\\omega$ and a canonical measure on $T[1]\\Sigma$, one then induces also a (weakly) symplectic form $\\omega_{BV}$ on $\\CM_{BV}$ and shows that $Q_{BV}$ is Hamiltonian, i.e.~$Q_{BV}=\\{ S_{BV}, \\cdot \\}_{BV}$, where $S_{BV}$ is the BV-action, extending the classical action of the PSM and AKSZ-sigma model, respectively (cf., e.g., \n\\cite{AKSZ,Catt01,Royt06}). 
\n\n\nThis changes quite drastically, however, in the case of non-topological, physically relevant theories; already for an ordinary YM gauge theory the BV operator is not constructed in such a simple way (unfortunately). Even for \\emph{relatively} simple generalizations like (for the PSM) the topological DSM \\cite{DSM} or (for YM) the non-topological Lie algebroid Yang-Mills theories \\cite{Str04b}, the BV operator is not even known at the present time; in both cases, however, the target Q-structures are very well known, corresponding to Lie algebroids. And, again, the field content of any such theory, and, more generally, of any higher gauge theory, is described compactly by sections in the finite-dimensional Q-bundle sketched in Fig.~\\ref{fig1}. \n\nAlthough we originally intended to do so, we will not address the construction of action functionals within this paper, leaving it for possible later work (but cf.~also \\cite{Str04b,KS08,SLS14}). \n\n\n\\subsection{Further mathematical results} \\label{sec:math}\nHere we briefly summarize the ideas and results contained in the more mathematical Sections \\ref{s:examQ} and \\ref{sec:derived}. This mainly consists in reformulating supergeometrically defined objects, appearing in the Q-bundles as potential fibers, in \nmore conventional algebraic-geometrical terms. \n\nOur graded or supermanifolds, on which the homological vector fields $Q$ are defined, will always be $\\N_0$-graded manifolds, which are simultaneously supermanifolds by means of the induced $\\Z\/2\\Z$-grading. 
These are conventionally called simply\n N-manifolds and are defined as follows (for further details see also \\cite{Royt02,Gru09}): An N-manifold is a ringed second countable Hausdorff topological space $\\CM=(M,{C^\\infty})$ where the structure sheaf, which by abuse of notation we also denote by ${C^\\infty}$, is locally of the form of smooth functions tensored with the exterior algebra in the odd coordinates and with the polynomial algebra in the even coordinates of nonzero degree. This implies that it \n consists of a tower of affine fibrations. In degree 0 we just have a smooth manifold $M$. In degree 1, also after truncation of a higher degree N-manifold, we have an odd vector bundle $E[1]$, i.e.\\ the fiber-linear coordinates of $E$ are declared as functions of degree 1. Their parity is odd and thus the function algebra consists of the $E$-forms. The topological structure of such an N-manifold comes from the smooth base only. \n\nMorphisms of N-manifolds are morphisms of ringed spaces that preserve the $\\N_0$-grading of the structure sheaf. Coordinate changes are particular local isomorphisms. In this way one finds, e.g., that the coordinates of degree 1 transform like the fiber coordinates of a vector bundle, so that any N-manifold $\\CM$ with degrees up to 1, i.e.~an N1-manifold,\\footnote{In general, we call an N-manifold whose highest coordinate degree is $k$ an N$k$-manifold. \\label{fn:Nk}} can be canonically identified with a vector bundle $E\\to M$, $\\CM \\cong E[1]$. \n\nIn degree 2, however, coordinate changes also contain affine terms besides the terms one would expect for a vector bundle. Globally an N2-manifold $\\CM$ is an affine bundle over the base $E[1]$, modeled after the pullback $\\pi_1^*V[2]$ of a vector bundle $V\\to M$, where $\\pi_1\\colon E[1]\\to M$ is the projection. 
Since the fibers $V_x$ for $x\\in M$ of this bundle are contractible, there exists a (non-canonical) global section (splitting) that identifies the affine bundle $\\CM$ with the graded vector bundle $E[1]\\oplus V[2]$ over the base $M$. In all of our theorems on Q2-manifolds, i.e.~N2-manifolds equipped with a degree +1 homological vector field, we will need such a splitting permitting us to identify the underlying graded manifold with the direct sum of two vector bundles $E\\to M$ and $V\\to M$, $\\CM \\cong E[1]\\oplus V[2]$. \n\nAs already mentioned in the previous subsection, an ordinary Lie algebra $\\g$ is equivalent to a Q1-manifold over a point, i.e.~one with coordinates of degree 1 only. The corresponding N1-manifold gives the vector space (vector bundle $E$ over a point), $\\CM \\cong \\g[1]$ (as graded vector spaces), the functions $C^\\infty(\\CM)$ are identified with elements of $\\Lambda^\\cdot \\g^*$, and the degree +1 vector field $Q$ corresponds precisely to the Chevalley-Eilenberg differential, $Q\\cong \\udCE$, from which one can certainly also reconstruct the Lie bracket on the vector space. \n\nA general Q1-manifold can be seen to be equivalent to a Lie algebroid, as was first observed in \\cite{Vai97}. For completeness, we briefly provide a definition:\n\\begin{vdef} \\label{def:Loid} A \\emph{Lie algebroid} consists of a vector bundle $E\\to M$ together with a bundle map $\\rho \\colon E \\to TM$ (over the identity) and a Lie algebra structure $(\\Gamma(E),[\\cdot,\\cdot])$ on the sections of $E$ such that for all $\\psi_1$, $\\psi_2 \\in \\Gamma(E)$ and $f \\in C^\\infty(M)$ one has the following Leibniz rule\n\\begin{equation} \\label{Leib}\n[\\psi_1, f \\psi_2] = f [\\psi_1, \\psi_2] + \\rho(\\psi_1)f \\,\\cdot \\psi_2 \\;.\n\\end{equation}\n\\end{vdef}\nThese data can be recovered from a homological vector field of degree 1 on an N1-manifold. 
In local coordinates $x^i$ and $\\xi^a$ of degree 0 and 1, respectively, it is always of the form \n\\beq \\label{Q0} Q = \\rho^i_a(x) \\xi^a \\frac{\\partial}{\\partial x^i}\n- \\frac12 C^a_{bc}(x) \\xi^b \\xi^c \\frac{\\partial}{\\partial \\xi^a} \\, . \\eeq\nThe axioms can be reconstructed\nby using $Q^2=0$ \\emph{and} the behavior of the coefficient functions $ \\rho^i_a$ and \n$C^a_{bc}$ with respect to (degree preserving) coordinate changes on $\\CM \\cong E[1]$ (so as to recover also \\eqref{Leib}). The details of this process can be found in the main text. It is also true, moreover, that the Q-morphisms (morphisms of N-manifolds such that their pullbacks intertwine the two Q-structures) correspond precisely to Lie algebroid morphisms in the sense of \\cite{MaH90} (cf.~\\cite{BKS} for an explicit proof). \nWe summarize this in \n\\begin{thm}[Vaintrob 97] \\label{Vain}\nA Q1-manifold is in 1:1-correspondence with a Lie algebroid. This is an equivalence of categories.\n\\end{thm}\nIn the present paper we treat the analogous question one degree higher, i.e.~we address the description of a Q2-manifold. Instead of two coefficient functions $\\rho$ and $C$ satisfying two equations resulting from $Q^2=0$, one ends up with five coefficient functions satisfying seven coupled differential equations, cf.~Eqs.~\\eqref{Q} and \\eqref{B1}--\\eqref{B7}, respectively, below. For orientation we first collect related results, corresponding to special cases.\n\nIf the Q2-manifold is one over a point, it corresponds to a (semistrict) Lie 2-algebra or, equivalently, to a 2-term $L_\\infty$-algebra. A special case of these are strict Lie 2-algebras, which, according to \\cite{Baez03vi}, are equivalent to Lie algebra crossed modules. 
We thus define a strict Lie 2-algebra here by means of this description:\n\\begin{vdef} A \\emph{strict Lie 2-algebra} \\label{strictL2}\nis a pair $(\\g,\\h)$ of Lie algebras \ntogether with a homomorphism $t \\colon \\h\\to \\g$ and a representation\n$\\alpha \\colon \\g\\to\\Der(\\h)$ such that $\\, t(\\alpha (x)v) =[x,t(v)]$ \nand $ \\alpha(t(v))w =[v,w]$ hold true for all $x \\in \\g$, $v,w \\in \\h$. \n\\end{vdef}\nWe remark in passing that this definition can be shortened:\n\\begin{lemma} A strict Lie 2-algebra $(\\g,\\h)$ is the same as an intertwiner, $t \\colon \\h \\to \\g$, from a $\\g$-representation $\\h$ to the adjoint representation on $\\g$ such that $t(v)\\cdot v = 0$ for all $v\\in \\h$.\n\\end{lemma}\nThe data of Definition \\ref{strictL2} can be retrieved easily with all their properties; e.g.~the Lie algebra structure on $\\h$ can then be \\emph{defined} by means of $[v,w]_\\h := t(v) \\cdot w$.\n\nThe definition of a semistrict Lie 2-algebra or simply \\emph{a Lie 2-algebra} is analogous, \nbut contains an additional map $H \\colon \\Lambda^3 \\g \\to \\h$, satisfying a ``coherence-type'' consistency relation. $H$ turns out to control the Jacobiators of the brackets on $\\g$ and $\\h$ and enters a modification of the definition above in several places. For $H=0$ one recovers a strict Lie 2-algebra. (In fact, one may extract its precise definition from specializing Definition \\ref{def:Lie2d} below to the case of $M$ being a point.)\n\n\nAlternatively, one can describe a Lie 2-algebra also by means of a 2-term $L_\\infty$-algebra. We recapitulate its definition in detail in Appendix \\ref{s:Linf1} (cf.~also \\cite{SHLA,SHLA2} for the original definitions). Essentially this consists of a 2-term complex $\\h[-1] \\xrightarrow{t} \\g$,\\footnote{Note that in the context of N-manifolds, $\\h[-1]$ means that the linear coordinates are of degree -1, which implies that the \\emph{elements} of this vector space are of degree +1. 
The shifting is slightly changed with respect to the previous occurrence of $\\g$ and $\\h$, but this is due to different but equivalent conventions, cf.~also Appendix \\ref{s:Linf1}.} a unary bracket $[\\cdot]_1=t$ of degree -1, a binary bracket $[\\cdot,\\cdot]_2$ of degree 0 and a ternary bracket $[\\cdot,\\cdot,\\cdot]_3$ of degree 1 \nthat have to fulfill several quadratic relations which can be understood as generalized Jacobi identities. One of those is, e.g., for $x,y,z\\in\\g$\n\\begin{equation}\n [x,[y,z]_2]_2+[z,[x,y]_2]_2 +[y,[z,x]_2]_2 = [\\,[x,y,z]_3]_1 \\, ,\n\\end{equation}\nshowing that the 2-bracket forms a Lie algebra iff the image of the 3-bracket lies in the kernel of $t$. It is thus not surprising that this 3-bracket is the above-mentioned map $H$. \nAgain it is a non-vanishing $H$ that obstructs the appearance of ordinary Lie algebras and that, on the other hand, controls their deviation from being such. In particular, $H$ itself has to fulfill a Jacobi-type quadratic relation. Further details on this perspective of a Q2-manifold over a point and $L_\\infty$-algebras in general are deferred to Appendix \\ref{s:Linf1}.\n\nFinally, concerning Q2-manifolds with an honest base manifold $M$, there is the following result (see also \\cite{Royt02}). \n\\begin{thm}[Roytenberg]$\\quad$\\\\ \\label{thm:Roy}\nA QP2-manifold is in 1:1-correspondence with a Courant algebroid.\n\\end{thm}\nSome comments are in order: A QP-manifold is a Q-manifold (in the above sense) equipped with a compatible symplectic form $\\omega$. 
QP2 signifies that the underlying N-manifold is an N2-manifold in our sense (cf.~footnote \\ref{fn:Nk}) or, equivalently, that the 2-form $\\omega$ has total degree $4=2+2$.\nA Courant algebroid can be defined as follows:\n\\begin{vdef}\n\\label{def:Cour} A \\emph{Courant algebroid} consists of a vector bundle $E\\to M$ together with a bundle map $\\rho \\colon E \\to TM$ (over the identity), a Leibniz--Loday algebra $(\\Gamma(E),[\\cdot,\\cdot])$ on the sections of $E$, and a fiber metric $\\<.,.\\>$ on $E$\nsuch that for all $\\psi_1$, $\\psi_2 \\in \\Gamma(E)$ one has the following rules\n\\begin{align} \\label{Coura}\n \\< \\psi_1,[\\psi_2,\\psi_2]\\>&= \\tfrac{1}{2}\\rho(\\psi_1)\\<\\psi_2,\\psi_2\\> = \\<[\\psi_1,\\psi_2],\\psi_2\\> \\, .\n\\end{align}\n\\end{vdef}\nHere some further explanations may be in order: First, by a Leibniz--Loday algebra we mean an algebra in which left-multiplication acts as a derivation, i.e.\n\\begin{equation}\\label{JaCou}\n[\\psi_1,[\\psi_2,\\psi_3]] = [[\\psi_1,\\psi_2],\\psi_3] +[\\psi_2,[\\psi_1,\\psi_3]]\n\\end{equation}\n holds true $\\forall \\psi_k \\in \\Gamma(E)$. Moreover, one can deduce the Leibniz rule \\eqref{Leib} from the axioms. So, \\emph{if} the bracket $[\\cdot , \\cdot]$ were antisymmetric, this would be a Lie algebroid (with an additional fiber metric). However, the first equation of \\eqref{Coura} prohibits this: Since the inner product is non-degenerate, the symmetric part of the bracket can vanish only in the relatively uninteresting case of an identically vanishing map $\\rho$ \n(in which case one obtains a bundle of quadratic Lie algebras over $M$---ad-invariance following from the remainder of \\eqref{Coura} then). So, it is precisely the inner product (together with $\\rho$) that governs the symmetric part of the bracket. 
The second equation of \\eqref{Coura} expresses a (for non-vanishing $\\rho$ not just pointwise) ad-invariance of the fiber-metric.\n\nThis completes the explanations of the terms appearing in the above theorem. As an aside we mention that one may also rephrase the axioms of a Courant algebroid in terms of the anti-symmetrization of the above bracket, \n\\begin{equation}\\label{anti}\n[\\phi,\\psi]_2=\\tfrac12\\left([\\phi,\\psi] -[\\psi,\\phi]\\right) \\, .\n\\end{equation}\nIn that case, however, (\\ref{JaCou}) does not yield a Jacobi identity for this new bracket. It thus does not come as a surprise that this can be rephrased in terms of an (infinite-dimensional) 2-term $L_\\infty$-algebra $[.]_1\\:V_1\\to V_0$ \\cite{Royt98}: Here the vector spaces are $V_0=\\Gamma(E)$ and $V_1={C^\\infty}(M)$, respectively, the map $[.]_1$ is provided by means of the de Rham differential $\\ud$ followed by the transpose of the anchor map and a subsequent use of the inner product (to identify $E^*$ with $E$ and likewise so for the sections). Between two elements of $V_0$ the degree 0 binary bracket is given by \\eqref{anti}, while for $f\\in V_1$ and $\\phi \\in V_0$ one has $[\\phi,f]_2:=\\rho(\\phi)f$.\nThe degree 1 ternary bracket, finally, can be non-vanishing only on $[.,.,.]_3:\\Lambda^3V_0\\to V_1$, where it is given by $[\\psi_1,\\psi_2,\\psi_3]_3=\\tfrac13\\<[\\psi_1,\\psi_2]_2,\\psi_3\\>+{\\text{cycl.}}$ All higher brackets vanish for degree reasons. \n\nWe now turn to describing our main result of section \\ref{s:examQ}. As in the previously mentioned results on Q1-manifolds, Lie 2-algebras, and QP2-manifolds, an essential ingredient of it is a definition (specifying to what precisely the Q2-manifold is equivalent). 
To obtain this definition, we proceeded as sketched above for Q1-manifolds, i.e.~we analyzed the behavior of the coefficient functions with respect to coordinate changes on the N2-manifold, which we identified with the sum of two vector bundles (in a non-canonical way, different choices being related by an isomorphism). We first provide the generalization of Definition \\ref{strictL2} to the case of a general base $M$ of $\\CM$, trying to stay as close as possible to the known one for $M=\\mathrm{pt}$: \n\\begin{vdef}\\label{d:sLie2}\nA \\emph{strict Lie 2-algebroid} or a \\emph{crossed module of Lie algebroids} is a pair \n $(E,V)$ of Lie algebroids over the same base $M$, where $\\rho_V \\colon V \\to TM$ vanishes,\ntogether with a homomorphism $t \\colon V\\to E$ and a representation ${{}^E{}\\nabla}$ of $E$ on $V$\nsuch that $t\\left(\\Econn_\\psi v\\right) = [\\psi,t(v)]$\n and $\\Econn_{t(v)}w = [v,w]$ hold true for all $\\psi\\in\\Gamma(E)$, $v,w\\in\\Gamma(V)$.\n\\end{vdef}\nThe generalization of a representation of a Lie algebra $\\g$ on a vector space to a representation of a Lie algebroid $E$ on a vector bundle $V$ (over the same base) uses the notion of an $E$-covariant derivative: This is a map ${{}^E{}\\nabla}\\:\\Gamma(E)\\times\\Gamma(V)\\to\\Gamma(V)$ such that for all $\\psi\\in\\Gamma(E)$, $v\\in\\Gamma(V)$, and $f\\in{C^\\infty}(M)$\n\\[\\label{EConn}\n {{}^E{}\\nabla}_{f\\psi} v = f {{}^E{}\\nabla}_\\psi v, \\quad\\quad \n {{}^E{}\\nabla}_\\psi (f\\cdot v) = \\rho(\\psi)[f]\\cdot v +f{{}^E{}\\nabla}_\\psi v.\n\\]\nThis becomes a representation iff its $E$-curvature, defined by means of ${{}^E\\!{}R}(\\phi,\\psi)v:=[\\Econn_\\phi,\\Econn_\\psi]v -\\Econn_{[\\phi,\\psi]}v$, vanishes, ${{}^E\\!{}R}=0$. 
This flatness condition also implies that $\\Econn_\\psi$ is a derivation of the (bundle of) Lie algebra(s) $(\\Gamma(V),[.,.])$.\n\n\n\\begin{vdef}\\label{def:Lie2d} A (semi-strict) \\emph{Lie 2-algebroid} is defined as in Definition \\ref{d:sLie2} except that the (still antisymmetric) brackets on $\\Gamma(E)$ and $\\Gamma(V)$ are no longer required to satisfy the Jacobi identity and\n ${{}^E{}\\nabla}$ is no longer flat in general. Instead, all these deviations are governed by a single $E$-3-form $H\\in\\Omega^3(E,V)$ with values in $V$: For all $\\psi_k\\in\\Gamma(E)$, $v,w\\in\\Gamma(V)$ and $f\\in{C^\\infty}(M)$ one has\n\\begin{align}\n [\\psi_1,[\\psi_2,\\psi_3]] +{\\text{cycl.}} &= t\\left(H(\\psi_1,\\psi_2,\\psi_3)\\right), \\label{HJacobi} \\\\\n {{}^E\\!{}R}(\\psi_1,\\psi_2)v &= H(\\psi_1,\\psi_2,t(v)), \\label{HRep}\\\\\n {{}^E\\!{}\\mathrm{D}} H &= 0 \\, . \\label{DH}\n\\end{align}\n\\end{vdef}\nHere ${{}^E\\!{}\\mathrm{D}}$ is the exterior $E$-covariant derivative associated to ${{}^E{}\\nabla}$. We did not need to specify the Jacobiator of the bracket on $\\Gamma(V)$, since it follows at once from \\eqref{HRep} and $\\Econn_{t(v)}w = [v,w]$: \n\\begin{equation}\n[u,[v,w]] + {\\text{cycl.}} = H(t(u),t(v),t(w)) \\qquad \\forall u,v,w \\in \\Gamma(V) \\, .\n\\end{equation} \nIt is noteworthy that in this formulation, the coherence condition for $H$ takes the simple form \\eqref{DH}. \nIn this paper we will then prove the following:\n\\begin{thm}\\label{thm1.5} $\\quad$ \\\\ A Q2-manifold is in 1:1-correspondence with a (semi-strict) Lie 2-algebroid \\\\ up to isomorphism.\n\\end{thm}\nSimilarly to Theorem \\ref{Vain}, this statement can also be turned into an equivalence of categories.\n\n\nIn this context it is important to remark that the notion of a morphism of Lie 2-algebroids (as defined above in Definition \\ref{def:Lie2d}) is less restrictive than the one of strict Lie 2-algebroids (Definition \\ref{d:sLie2}). 
This can be seen as follows: A definition of a morphism of a Lie 2-algebroid can be tailored from the corresponding Q-morphisms and the identification given in the above theorem (as one can do for ordinary Lie algebroids and Q1-manifolds). On the one hand, the change of a splitting of the Q-manifold corresponds to a Q-morphism.\nOn the other hand, it corresponds to an element $B\\in\\Omega^2(E,V)$, and $H$ changes under this $B$ (in a slightly tricky way, cf.~Equation \\eqref{split3} below); in particular, this can be used to generate a non-zero $H$ from a vanishing one. \n\nA remark on the nomenclature seems important. One of the goals here is to give a definition of a mathematical structure equivalent to a Q2-manifold on the one hand and generalizing the definition of a Lie 2-algebra as given in \\cite{Baez03vi} on the other hand. In other words, we strive for a definition equivalent to the \\emph{super}-geometrical notion of a Q2-manifold in terms of notions used only in \\emph{ordinary} differential geometry. This can be viewed as the same philosophy underlying Theorems \\ref{Vain} and \\ref{thm:Roy}---with the difference that in these two cases Definitions \\ref{def:Loid} and \\ref{def:Cour} were already established as geometrical notions before the equivalence (expressed in the two theorems mentioned above) to the (simpler) super- or graded-geometrical structure was observed. Some also call Q$k$-manifolds simply Lie $k$-algebroids \\emph{by definition}. To emphasize the non-tautological nature of Theorem \\ref{thm1.5}, it may thus be useful to add the specification ``semi-strict'' in the equivalence, with \\emph{semi-strict} Lie 2-algebroids being then \\emph{introduced} (to our knowledge for the first time) in \nDefinition \\ref{def:Lie2d} above. 
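\n\nTo illustrate Definition \\ref{def:Lie2d}, let us spell out, as announced above, its specialization to the case of $M$ being a point: $E$ and $V$ become vector spaces $\\g$ and $\\h$ carrying antisymmetric brackets, ${{}^E{}\\nabla}$ becomes a bilinear map $\\alpha \\colon \\g \\times \\h \\to \\h$, and, with $H \\colon \\Lambda^3 \\g \\to \\h$, the conditions \\eqref{HJacobi} and \\eqref{HRep} turn into\n\\begin{align*}\n [x,[y,z]] +{\\text{cycl.}} &= t\\left(H(x,y,z)\\right), \\\\\n \\alpha(x)\\alpha(y)v - \\alpha(y)\\alpha(x)v - \\alpha([x,y])v &= H(x,y,t(v))\n\\end{align*}\nfor all $x,y,z \\in \\g$ and $v \\in \\h$, while \\eqref{DH} becomes the coherence condition on $H$; for $H=0$ one recovers Definition \\ref{strictL2}.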
\n\nNote that as a simple corollary of the above theorem, we find that any Q2-manifold gives rise to an infinite-dimensional Lie 2-algebra or, equivalently, 2-term $L_\\infty$-algebra, $\\Gamma(V) \\xrightarrow{t} \\Gamma(E)$. In fact, a Lie 2-algebroid provides the additional control over such an infinite-dimensional Lie 2-algebra in the same way as a Lie algebroid provides one for its infinite-dimensional Lie algebra of sections. \n\nCertainly, a QP2-manifold is also a Q2-manifold. The infinite-dimensional Lie 2-algebra found in the above way is, however, \\emph{not} the same as the one constructed in \\cite{Royt98}: There the 2-term complex is $\\Omega^0(M) \\xrightarrow{t} \\Gamma(E)$ and, containing the de Rham differential $\\ud$, neither $t$ nor the Jacobiator is $C^\\infty(M)$-linear, \nwhereas in our construction one has $\\Omega^1(M) \\xrightarrow{t} \\Gamma(E)$ rendering the map $t$ and the Jacobiator $C^\\infty(M)$-linear.\n\nFurther perspectives on Q$2$-manifolds in terms of particular $L_\\infty$-algebras or algebroids are deferred to Appendix \\ref{s:Linf}.\n\n\nThere is also another way than the one sketched above (a study of the components of $Q$ with respect to changes of coordinates) of obtaining the axioms of a Lie algebroid from a Q1-manifold. This is provided by considering the Q-derived bracket on the space of degree $-1$ vector fields on the Q-manifold $\\CM$: Let $\\psi$, $\\psi'$ be two such vector fields, then\n\\begin{equation}\n[\\psi,\\psi']_Q := [[\\psi,Q],\\psi'] \\label{QderLie}\n\\end{equation}\nis another vector field of degree -1 on $\\CM$. In the case of a Q1-manifold one easily identifies vector fields of degree -1 on $\\CM = E[1]$ with sections of the Lie algebroid $E$ and the bracket \\eqref{QderLie} with the Lie bracket between them, so that in the end one rediscovers the usual axioms of a Lie algebroid also by this procedure. For a Q2-manifold, however, one obtains a \\emph{different} picture. 
First of all, the derived bracket no longer needs to be antisymmetric, since $[\\psi,\\psi]$ can now be non-zero: in contrast to a Q1-manifold, a Q2-manifold carries non-trivial degree -2 vector fields (for more details and explanations we refer to section \\ref{sec:derived}). The derived bracket, however, always gives a Leibniz--Loday algebra, satisfying a condition of the form \\eqref{JaCou}. These observations, which are motivated moreover by our study of the gauge transformations (cf., in particular, Equation \\eqref{derQtot})\\footnote{Note that $\\ad_Q(\\psi) \\equiv [Q,\\psi] = [\\psi,Q]$ since both vector fields are odd.}, lead to an attempt at finding another description of a Q2-manifold $\\CM$. In fact, since $[\\cdot,\\cdot]$, restricted to degree -1 (and thus odd) vector fields, yields a $C^\\infty(M)$-linear symmetric map from the degree -1 vector fields on $\\CM$ to the degree -2 vector fields, and since vector fields of negative degrees on an N-manifold can always be identified with sections of vector bundles, one swiftly extracts from the above an axiomatic formulation very close to the one of a Courant algebroid, with the difference that now the inner product can take values in another bundle. 
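\n\nA short computation makes this explicit: for two vector fields $\\psi$, $\\psi'$ of degree -1 (and thus odd), the graded Jacobi identity applied to the symmetrization of \\eqref{QderLie} yields\n\\begin{equation*}\n[\\psi,\\psi']_Q + [\\psi',\\psi]_Q = [Q,[\\psi,\\psi']] \\, ,\n\\end{equation*}\nwhere $[\\psi,\\psi']$ is a symmetric combination of degree -2. On a Q1-manifold $\\CM=E[1]$ there are no non-zero vector fields of degree -2, so the derived bracket is antisymmetric there, while on a Q2-manifold the right-hand side survives in general.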
\n\n\\begin{vdef}\n\\label{def:VCour} A \\emph{$V$-twisted Courant algebroid} consists of a vector bundle $W\\to M$, a bundle map $\\rho \\colon W \\to TM$, a Leibniz--Loday algebra $(\\Gamma(W),[\\cdot,\\cdot])$, a non-degenerate surjective symmetric product $\\<.,.\\>$ taking values in another vector bundle $V \\to M$, and a $W$-connection ${{}^W\\nabla}$ on $V$\nsuch that for all $\\psi_1$, $\\psi_2 \\in \\Gamma(W)$ one has the following axioms\n\\begin{align} \\label{VCoura}\n \\< \\psi_1,[\\psi_2,\\psi_2]\\>&= \\tfrac{1}{2}{{}^W\\nabla}_{\\psi_1}\\<\\psi_2,\\psi_2\\> = \\<[\\psi_1,\\psi_2],\\psi_2\\> \\, .\n\\end{align}\n\\end{vdef}\nIt is easy to see that this reduces to an ordinary Courant algebroid if $V=M \\times \\Real$ and ${{}^W\\nabla}(1)=0$ (for this trivialized bundle we can identify its sections with functions, and if the unit section is parallel, ${{}^W\\nabla}$ reduces to $\\rho$ acting on these functions). More generally, a $V$-twisted Courant algebroid with $\\rk V=1$ admitting a non-vanishing, $W$-constant section $v\\in\\Gamma(V)$ (i.e.~$v(x)\\neq 0 \\; \\forall x\\in M$ and $\\Wconn_\\psi v=0 \\; \\forall \\psi \\in \\Gamma(W)$) is in a canonical correspondence with a Courant algebroid. However, line-bundle twisted Courant algebroids are a strictly more general notion than the one of a Courant algebroid.\n\nA nontrivial example of a $V$-twisted Courant algebroid can be provided as follows: Let $A$ be a Lie algebroid together with a flat $A$-connection ${}^A\\nabla$ on a vector bundle $V$. 
Then the pair $(W,V)$ becomes a $V$-twisted Courant algebroid if one takes $W:=A\\oplus \\Hom(A,V) \\cong A\\oplus A^*\\otimes V$ and equips it with the bracket\n\\begin{equation}\\label{ACourant}\n[X\\oplus\\alpha,Y\\oplus\\beta] = [X,Y] \\oplus [\\imath_X,\\uD]\\beta -\\imath_Y\\uD\\alpha +H(X,Y,\\bullet)\n\\end{equation}\nwhere $\\uD\\:\\Omega^k(A,V)\\to\\Omega^{k+1}(A,V)$ is the exterior covariant derivative constructed from ${}^A\\nabla$, $[.,.]$ the Lie algebroid bracket on $A$, and $H\\in\\Omega^3(A,V)$ is any $A$-3-form with values in $V$ satisfying $\\uD H=0$. The other maps are obvious, for example $\\Wconn_{X\\oplus\\alpha} v={}^A\\conn_X v$. Certainly, this reduces to the standard Courant algebroid $W\\cong TM \\oplus T^*M$ twisted by a closed 3-form $H$ if $A$ is chosen to be the standard Lie algebroid $TM$ (and $V$ and ${{}^W\\nabla}$ as mentioned above), in which case \\eqref{ACourant} reduces to the well-known $H$-twisted Courant--Dorfman bracket \n\\begin{equation} \\label{standardtwistCour}\n[X\\oplus\\alpha,Y\\oplus\\beta] = [X,Y] \\oplus \\L_X \\beta -\\imath_Y\\ud\\alpha + \\imath_Y \\imath_X H \\, .\n\\end{equation}\nWhile an ordinary Courant algebroid reduces to a quadratic Lie algebra when $M$ is a point, the ``decoupling'' of $\\rho$ and ${{}^W\\nabla}$ has the advantage that now we can also host (non-Lie) Leibniz--Loday algebras in this framework. For $M=\\mathrm{pt}$, the definition reduces to a Leibniz--Loday algebra together with an invariant inner product with values in a vector space carrying a representation. 
Conversely, one can also promote \\emph{any} Leibniz--Loday algebra $W$ to a $V$-twisted Courant algebra in a \\emph{canonical} way, cf.~Proposition \\ref{canonical} below.\n\n\n\nThe main result of section \\ref{sec:derived} is then the following (for further details, cf.~Proposition~\\ref{p:FWder} and Theorem~\\ref{thm:der}):\n\\begin{thm}$\\,$ \\\\ Up to isomorphism and provided that $\\rk V>1$, there is a 1:1 correspondence between Lie 2-algebroids $V \\xrightarrow{t} E$ and $V$-twisted Courant algebroids $W$ fitting into the following exact sequence \\begin{equation} \\label{VEW}\n0\\longrightarrow\\Hom(E,V) \\longrightarrow W\\xrightarrow{\\;\\;\\pi\\; \\;} E\\longrightarrow 0\n\\end{equation} with $\\rho_W = \\rho_E \\circ \\pi$. In particular, if $\\rk V >1$ and after the choice of a splitting of \\eqref{VEW},\nthe bracket on a $V$-twisted Courant algebroid $W$ as above has the form\n\\begin{equation}\\label{VCourant2}\n[X\\oplus a,Y\\oplus b]_W = [X,Y]_E - t(a(Y)) \\oplus [\\imath_X,{{}^E\\!{}\\mathrm{D}}]b -\\imath_Y{{}^E\\!{}\\mathrm{D}} a +\\imath_Y \\imath_X H+ b \\circ t \\circ a - a\\circ t \\circ b ,\n\\end{equation}\nwhere all the operations on the r.h.s.~are the induced ones on the corresponding Lie 2-algebroid.\\label{thm1.6}\n\\end{thm}\nIn the above formula, $X$, $Y \\in \\Gamma(E)$ and $a$, $b$ are either viewed as elements of $\\Omega^1(E,V)$ (in the first two terms following the $\\oplus$) or as homomorphisms from $E$ to $V$ so that they can be composed with the fixed map $t \\in \\Hom(V,E)$ of the Lie 2-algebroid. \n\nThe similarity of formula \\eqref{VCourant2} with Equations \\eqref{standardtwistCour} and \\eqref{ACourant} is striking, in particular in view of the fact that, in contrast to those examples, now the (anti-symmetric) Lie 2-algebroid bracket $[ \\cdot , \\cdot ]_E$ no longer satisfies a Jacobi condition in general, while \\eqref{VCourant2} still obeys the left-derivation property \\eqref{JaCou}. 
Similarly, ${{}^E\\!{}\\mathrm{D}}$ no longer squares to zero in general (in contrast to $\\uD$ and $\\ud$ in the analogous formulas \\eqref{ACourant} and \\eqref{standardtwistCour}, respectively),\nbut rather yields the $E$-curvature \\eqref{HRep}. It is here that the simple $t$-contributions to the new bracket become essential.\n\n\n\nTheorems \\ref{thm1.5} and \\ref{thm1.6} certainly also induce a 1:1 correspondence between the above mentioned ``exact V-twisted Courant algebroids'' (with $\\rk V >1$) and Q2-manifolds $\\CM$ (with at least two independent generators of $C^\\infty(\\CM)$ of degree 2). Exact $V$-twisted Courant algebroids thus provide an alternative, ordinary differential geometric description of the graded-geometrical notion of a Q2-manifold---with respective restrictions provided in the parentheses of the previous sentence.\n\nThe condition of $\\rk V \\neq 1$ seems necessary, since there exist examples of line bundle twisted Courant algebroids not corresponding to Q2-manifolds (in the above way). In fact, one problem left open in the analysis of the present paper, and an interesting one to our mind, is the classification of line bundle twisted Courant algebroids, i.e.~$V$-twisted Courant algebroids $W$ with $\\rk V = 1$. \n\nOn the technical level this restriction on the rank can be traced back to the following simple fact of linear algebra (proven in Lemma \\ref{p:Jh=0}): Let $V_1$ and $V_2$ be two vector spaces. Then any linear operator $\\Delta \\colon \\Hom(V_1,V_2) \\to V_1$ satisfying $a(\\Delta(a))=0$ for any $a \\in \\Hom(V_1,V_2)$ necessarily has to vanish \\emph{iff} $\\dim V_2 >1$, while, e.g., for $V_2 = \\Real$ one may identify $\\Delta$ with an element of $V_1 \\otimes V_1$ and then any $\\Delta \\in \\Lambda^2 V_1$ obviously satisfies the condition for symmetry reasons. 
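\n\nThe rank-one case of this lemma is easily made concrete: for $V_2 = \\Real$, view $\\Delta \\in \\Lambda^2 V_1 \\subset V_1 \\otimes V_1$ as the operator sending $a \\in V_1^* = \\Hom(V_1,\\Real)$ to the contraction $\\Delta(a,\\cdot) \\in V_1$; then\n\\begin{equation*}\na\\left(\\Delta(a)\\right) = \\Delta(a,a) = 0\n\\end{equation*}\nholds for every $a$ by antisymmetry, without $\\Delta$ having to vanish.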
\n\nIn view of the different descriptions of QP2-manifolds as infinite-dimensional Lie 2-algebras sketched before, it is at this point no longer astonishing that if the QP2-manifold is regarded as a Q2-manifold (by the forgetful functor), one does not obtain the usual Courant algebroid in the above fashion. Instead, as before, $M \\times \\Real$ is replaced by $T^*M$, so one obtains a particular $T^*M$-twisted Courant algebroid in the above identification (when applied to a QP2-manifold viewed as a Q2-manifold). \n\nThe bracket \\eqref{standardtwistCour} is also well-defined for sections of $TM \\oplus \\Lambda^k T^*M$ if $H \\in \\Omega^{k+2}_{closed}(M)$ (without the $H$-twist, this bracket goes back to Vinogradov, cf.~\\cite{Vino90}). One obtains similar properties to those of the standard Courant algebroid, but, for example, the symmetrized contraction of its elements provides an inner product which now takes values in $V=\\Lambda^{k-1} T^*M$. In section \\ref{sec:derived} we propose an axiomatization of these properties: Very much as \\cite{Xu97} does for the case $k=1$, leading (together with its later reformulations\/simplifications) to Definition \\ref{def:Cour} above, we propose the definition of a \\emph{Vinogradov algebroid} (cf.~Definition \\ref{def:Vino} below). We also show that $V$-twisted Courant algebroids are particular Vinogradov algebroids. \n\nSection \\ref{sec:derived} is relatively long and contains further potentially interesting subjects (like the axiomatization of the bracket \\eqref{standardtwistCour} when applied to higher form degrees into a Vinogradov algebroid and a simple non-standard Q-description of $H$-twisted standard Courant and Vinogradov algebroids). We do not want to summarize all this here, so as to keep some suspense for the interested reader. 
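\n\nOne small ingredient of this example we do display already here, though (with one common choice of normalization; the conventions are fixed in section \\ref{sec:derived}): on sections of $TM \\oplus \\Lambda^k T^*M$ the symmetrized contraction mentioned above takes the form\n\\begin{equation*}\n\\< X\\oplus\\alpha, Y\\oplus\\beta \\> = \\tfrac{1}{2} \\left( \\imath_X \\beta + \\imath_Y \\alpha \\right) \\, ,\n\\end{equation*}\nwhich indeed takes values in $V=\\Lambda^{k-1} T^*M$ and for $k=1$ reduces to the standard fiber metric on $TM \\oplus T^*M$.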
\n\n\n\n\\vskip\\newpassage\n\n\nFinally, in Appendix \\ref{s:Linf} we provide a further reformulation of Q$k$-manifolds in terms of $k$-term $L_\\infty$-algebroids, which we also define there so as to generalize the more standard relation between Q-manifolds over a point and $L_\\infty$-algebras \\cite{SHLA}.\n\n\\newpage\n\\section{From Bianchi identities to Q-structures} \\label{s:Bian}\nWe want to construct a gauge theory for some $p$-form gauge field. As examples\nshow, at least if we want to go beyond a trivial abelian version of such a theory with $p\\ge 1$, \nthis necessarily involves a tower of gauge fields, where the highest form\ndegree is $p$. For definiteness we take $p=2$, but the generalization to\narbitrary $p$ is immediate. We thus consider a collection of 2-forms $B^D$,\ntogether with 1-form fields $A^a$, and in general also 0-form fields\n$X^i$. The scalar fields are permitted to take values in some internal\nmanifold $M$, which is not necessarily a vector space---so in general the\ntheory is a sigma model. Denoting spacetime by $\\Sigma$, one has\n $X \\colon \\Sigma \\to M$, and the particular collection of scalar fields\n $X^i=X^i(\\x)$ ($\\x \\in \\Sigma$) results after a (local) choice of\n coordinates $x^i$ \non $M$, $X^i = X^*(x^i)$.\\footnote{It is\ntempting to exchange the notation of $M$ and\n$\\Sigma$; then $M$ would be spacetime, and if $\\Sigma$ is nontrivial, one has a\nsigma model. However, here we conform to previous publications.} In\nmany examples there are no scalar fields and the tower of gauge fields starts\nwith 1-forms only. This just corresponds to choosing $M$ to be a point, in which\ncase one may drop the map $X$ as it contains no information. However, even in this case, scalar fields would enter the theory in a second step, representing matter; intending to stay as general as possible, we thus permit their presence right from the beginning. \n\nAmong the main ingredients of a gauge theory are the field strengths. 
We\nalso know how they should start, namely with the exterior derivative acting\non the respective gauge field, possibly then modified by adding wedge\nproducts of gauge fields to this expression. We thus make the most\ngeneral ansatz of this kind:\n\\begin{eqnarray} F^i &:=& \\ud X^i - \\rho^i_a A^a \\, , \\label{Fi}\\\\\n F^a&:=& \\ud A^a + \\frac12 C^a_{bc} A^b A^c - t^a_D B^D\\, , \\label{Fa}\\\\\n F^D&:=& \\ud B^D + \\Gamma^D_{aC} A^a B^C - \\frac{1}{6} \n H^D_{abc} A^a A^b A^c \\, ,\n\\label{FB}\n\\end{eqnarray}\nwhere wedge products between differential forms are understood \nand\nall the yet unrestricted coefficients can be functions of $X$.\nFor some purposes it will prove convenient to use a more condensed\nnotation: We then denote the field strengths \ncollectively by $F^\\alpha$ and likewise \nthe gauge fields by $A^\\alpha$ (so $A^i \\equiv X^i$, $A^D \\equiv B^D$).\n\n\nWe now want to require a very mild form of Bianchi identities. In\nordinary YM gauge theories, Bianchi identities tell us that $\\ud\nF^a\\equiv -C^a_{bc} A^b F^c$. We want to relax\/generalize this\ncondition on the complete set of field strengths $F^\\alpha$ by requiring\nthat the exterior derivative of them produces terms that all \n\\emph{contain} field strengths again, $\\ud F^\\alpha = \\lambda^\\alpha_\\beta F^\\beta$,\nwhere $\\lambda^\\alpha_\\beta$ may be arbitrary, field dependent factors.\nInterpreted in this sense, we require that one has:\n\\beq \\boxed{\\ud F^\\alpha |_{F^\\beta=0} \\equiv 0}\\, . \\label{bian}\\end{equation}\nThis equation has to be understood in such a way that for \\emph{every} choice of the gauge fields $A^\\alpha$ satisfying $F^\\alpha =0$ one has $\\ud F^\\alpha =0$.\nWe remark right away that this is satisfied also for generalizations\nsuch as the higher YM theories discussed in \\cite{Baez02jn}\nor the Algebroid YM theories of \\cite{Str04b}. 
Moreover, there are\noccasions where\nthe $X$-fields above, considered as part of the gauge field here, can\nbe identified with some Higgs type scalar fields and $F^i$ then\ncorresponds to the covariant\nderivative of this scalar field $X^i$. In standard\nYM-theory e.g.~we may use the fact that the square of the\nexterior covariant derivative\nis proportional to the curvature to show\nthat also in this enlarged setting \n\eqref{bian} is satisfied.\n\nLet us rephrase the\nrequirement \eqref{bian}\nin more mathematical terms: Within\nthe differential graded commutative algebra $(\mathcal{A},\ud)$\ngenerated by abstract elements $A^\alpha$ and $\ud A^\alpha$ there is an ideal $\I$\ngenerated by $F^\alpha$ as \ndefined in Eqs.~\eqref{Fi}--\eqref{FB}, i.e.~by the choice of\nstructural functions $\rho^i_a(X)$, $\ldots$, $H^D_{abc}(X)$. The\ngeneralized Bianchi identities hold iff (by definition) this ideal is\nan invariant subspace with respect to the exterior derivative,\n\beq \ud \I \subset \I \, , \label{ideal}\end{equation}\ni.e.~iff Eq.~\eqref{ideal} holds true.\n\nThere are some further possible reformulations of the above\nconditions.\nOne may rephrase them as saying e.g.~that the\nstructural functions must be such that the quotient\nalgebra $\mathcal{A}\/\I$ forms a so-called free differential algebra\n(cf.~\cite{Sull77b} or, for a physical\nexample in low dimensions, e.g.~\cite{Izq99}). Since this quotient\ncorresponds to looking at vanishing field strengths only, which will\n\emph{not} be our field equations when we are interested in constructing\na (non-topological) generalization of Yang-Mills theories, we avoid\nthis perspective, just as we avoid the notation $\ud F^\alpha\n\approx 0$ instead of \eqref{bian} since for non-topological\ntheories the ``on-shell'' equality would not coincide with the on-shell\nnotion dictated by field equations.
Finally, Eq.~\eqref{bian} may\nalso be viewed as an integrability condition in the following sense:\nFor all fields $A^\alpha$ which satisfy $F^\alpha(\x)=0$ at some point $\x\n\in \Sigma$, it follows automatically (by the choice of structural\nfunctions) that also $\left(\ud F^\alpha\right)(\x)=0$. \n\nThe main purpose of this section is twofold: First, we want to show\nthat the simple ``physical'' and rather general requirement \n\eqref{bian}\nautomatically and naturally leads to\nQ-structures.\footnote{By definition (but cf.~also the more detailed\n explanation below), Q-structures, introduced in the \n context of topological field theories in \cite{AKSZ},\label{ct:Scha}\n are homological vector fields of degree one on a\nsupermanifold.}\nSecondly, when it comes to obtaining the explicit form\nof the generalized Bianchi identities (cf., e.g.,\nEqs.~\eqref{BFi}--\eqref{BFB}\nbelow), the use of this formalism shortens calculations dramatically. \n\n\nFor this purpose we first unify the gauge fields into a single map.\nThe scalar fields $X^i(\x)$ describe some map $a_0 \colon \Sigma \to\nM$ ($a_0 \equiv X$ in previous notation); as already mentioned, they\nare the pullback\nof coordinate functions \non $M$ to functions on $\Sigma$ by precisely this map (possibly\nrestricted to local charts). To imitate\nthis for the other components $A^a, \nB^D, \ldots$ of the gauge \nfield, we introduce auxiliary spaces spanned by coordinates \n$\xi^a, b^D, \ldots$, taking these spaces to be linear and thus \ndefined by the respective linear coordinates: $W = \langle \xi_a \rangle$,\n$\V = \langle b_D \rangle$, where $\xi_a$ is a basis \nof vectors \ndual to $\xi^a$ (linear coordinates on a vector space are a basis of the\ndual vector space) etc.
\n\nSince every 1-form on $\Sigma$ may be viewed as a (fiberwise linear)\nfunction on $T\Sigma$, we may\nregard $A^a$ as the pullback of the coordinate function $\xi^a$ by a map from\n$T\Sigma$ to $W$. A local coordinate system on $T\Sigma$ can be chosen in the form\n$\x^\mu, \theta^\mu$, where $\theta^\mu=\ud \x^\mu$. Since the natural\nmultiplication of 1-forms is the antisymmetric wedge product, it is natural\nto declare the fiber coordinates $\theta^\mu$ of $T\Sigma$ as anti-commuting;\nthis is taken into account by the notation $T[1]\Sigma$: the coordinates $\x^\mu$\nof the base are of degree zero (and thus even and commuting), while\n$\theta^\mu$ is of degree one (and in particular odd). In this way we may interpret $A^a$ as\nthe pullback of $\xi^a$ to functions on $T[1]\Sigma$ (i.e.~differential forms) by\na map $a_1 \colon T[1]\Sigma \to W[1]$, $A^a = (a_1)^* \xi^a$; here we declared\nthe coordinates $\xi^a$ to be of degree one, too, so that the map $a_1$ becomes\ndegree preserving, and $(a_1)^*$ a morphism of algebras. Likewise,\n $B^D = (a_2)^* b^D$, where $a_2 \colon T[1]\Sigma \to \V[2]$, and $b^D$ are declared to\nbe of degree two, since $B^D$ is a 2-form (that is a function on $T[1]\Sigma$ of\ndegree two) and we want $a_2$ to be degree preserving again.\footnote{If one considers supersymmetric or supergravity theories, the form degree does not determine if a homogeneous element is commuting or anti-commuting. If one has bosonic and fermionic 0-forms, for example, then already $M$ will be a corresponding supermanifold. Likewise, there may be anticommuting and commuting 1-forms in a supergravity theory; to describe such objects one would introduce a bidegree on the target, so that the first degree corresponds to the form degree and the other one to the commutativity property. 
We will not go into further details about this possible generalization here; as a simple prototype of such a theory one may look at supergravity theories reformulated as Poisson sigma models with a super-Poisson structure on the target as given in \\cite{Strobl:1999zz}.}\n\nIf $X^i$ denote local coordinates on $U \\subset M$, we can assemble the above\nmaps into a single map $a$ from (open domains of) $T[1]\\Sigma$ to $U \\times W[1] \\times\n\\V[2]$. It is tempting to require this direct product form of the target\nmanifold only on local charts $U$. Globally we may permit a twist (nontrivial\ntransition functions) in changing the vector spaces when changing charts in\n$M$. In fact, what we permit is a super-manifold $\\CM_2$ of degree 2: Locally\nit is described by degree 0 coordinates $x^i$, degree 1 coordinates $\\xi^a$,\nand degree 2 coordinates $b^D$.\\footnote{\\label{fn:pos} \nIn this and the following section we will \nconsider only non-negatively graded supermanifolds. (We will often call an integer graded manifold with its induced superalgebra of functions determined by the parity of the degrees simply a supermanifold, understanding implicitly from the context that we consider the full $\\Z$-grading (and not just modulo 2)). Speaking of the degree of it we will \nthen always mean the maximal degree of ``local coordinate functions'' (local generators). \nLater on also supermanifolds with (coordinate) functions of negative degrees will play \nsome role, cf in particular Section~\\ref{sec:derived}.} \nThe transition functions need to be degree\npreserving (by definition). This implies that forgetting about the degree two\ncoordinates one has a vector bundle $E \\to M$ with typical fiber\n$W$. Similarly, $x^i, b^D$ define a vector bundle $V \\to M$ with typical fiber\n$\\V$. 
(Since the transition functions of $b^D$s may also contain a part\nquadratic in the degree one coordinates $\\xi^a$, the total space\n$\\CM_2$ does not correspond to the direct sum vector bundle $E \\oplus\nV$ canonically, but does so only after an additional choice of\nsection; we will come back to this subtlety in the subsequent section).\n\nThus in sum we obtain a degree preserving map $a \\colon \\CM_1 \\to \\CM_2$,\nwhere $\\CM_1 = T[1]\\Sigma$ is a degree one super-manifold.\\footnote{More generally,\nwe may also permit a nontrivial fibration over $\\CM_1$, such that $\\CM_2$ is\nonly a typical fiber, leading to the notion of ``$Q$-bundles''. This\nis dealt with in \\cite{KS07}, while in this paper we restrict to\ntrivial bundles (except for the target bundle, encoded in the\nsupermanifold $\\CM_2$). Still, the present \nformulas remain correct also in the more general setting if they are\nreinterpreted as local ones.}\nChoosing local coordinates $q^\\alpha : = (x^i, \\xi^a, b^D)$\non $\\CM_2$ this gives rise to \nfunctions on $\\CM_1$ by means of the pullback $a^*$, which then \njust yields the set of local 0-forms, 1-forms, and 2-forms,\nrespectively, with which we started this section, $A^\\alpha = a^*(q^\\alpha)$. \n\nThe de Rham differential $\\ud$ on $\\Sigma$ corresponds to a degree one\nvector field \non $T[1]\\Sigma \\equiv \\CM_1$, namely $\\Qo = \\theta^\\mu \n\\partial\/\\partial \\x^\\mu$. It squares to zero, $(\\Qo)^2 \\equiv 0$, and thus defines \nwhat is called a $Q$-structure on $\\CM_1$. We now define a likewise vector field on $\\CM_2$ \nusing the above data. 
It is sufficient to know its action on coordinates $q^\\alpha$: \n\\begin{eqnarray} {Q_2} x^i &:=& \\rho^i_a \\xi^a \\, , \\quad \\label{Qi}\\\\\n {Q_2} \\xi^a& := &- \\frac12 C^a_{bc} \\xi^b \\xi^c +t^a_D b^D\\, , \n \\label{Qa}\\\\\n {Q_2} b^D&:= & -\\Gamma^D_{aC} \\xi^a b^C + \\frac{1}{6} \n H^D_{abc} \\xi^a \\xi^b \\xi^c \\, .\\label{QB}\n\\end{eqnarray}\nThis vector field is obviously also of degree one, i.e.~it raises the degree\nof the respective coordinate by precisely one unit. Furthermore we\nintroduce (cf.~also \\cite{BKS})\n\\beq \\CF := \\Qo \\circ a^* - a^* \\circ {Q_2} \\qquad \\Rightarrow \\qquad \\CF (q^\\alpha) \n\\equiv F^\\alpha \\, . \\label{CF}\\end{equation}\n\nUp to here we only rewrote the initial data contained in\nEqs.~\\eqref{Fi} -- \\eqref{FB}. We now come to analyze\nCondition~\\eqref{bian}. Denoting by $\\ev_\\x$ evaluation of a\nfunction on $\\CM_1=T[1]\\Sigma$ at a point $\\x \\in \\Sigma$, in the present\nlanguage the generalized Bianchi identities (in their formulation as\nintegrability conditions for $F^\\alpha=0$ equations) can be translated into the \nrequirement that for every $\\x \\in \\Sigma$\nthe composed operator $\\ev_\\x \\circ \\Qo \\circ\n\\CF$ vanishes whenever \n$a\\colon\\CM_1 \\to \\CM_2$ is such that the operator $\\ev_\\x\n\\circ \\CF$ vanishes.\n\nNow, using \n$(\\Qo)^2 \\equiv 0$, we find the identity\n\\begin{equation}\n\\ev_\\x \\circ \\Qo \\circ \\CF =- \\ev_\\x \\circ \\left( \\CF \\circ {Q_2}\n +a^* \\circ ({Q_2})^2\\right). \\label{preBian}\n\\end{equation}\nThus, for \\eqref{bian} to hold true\nit is obviously sufficient that the vector field ${Q_2}$ \nsquares to zero. 
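The nilpotency $(\Qo)^2 \equiv 0$ used in this identity is easy to model concretely: encoding a differential form on $\Sigma$ by its coefficient functions, indexed by the set of odd coordinates $\theta^\mu$ it multiplies, the de Rham differential becomes the odd vector field $\theta^\mu \partial\/\partial \x^\mu$. The following minimal sketch (our own illustration in Python\/sympy, not part of the original text; here $\Sigma = \mathbb{R}^3$) verifies $\ud^2 = 0$ on a sample form.

```python
import sympy as sp

# A form on Sigma = R^3 as a function on T[1]Sigma: a dict mapping the sorted
# tuple of theta-indices to the coefficient function of x^0, x^1, x^2.
xs = sp.symbols('x0:3')

def d(form):
    """The odd vector field Q_1 = theta^mu d/dx^mu (the de Rham differential)."""
    out = {}
    for idx, coeff in form.items():
        for mu in range(3):
            if mu in idx:
                continue                          # theta^mu theta^mu = 0
            pos = sum(1 for j in idx if j < mu)   # sign from reordering thetas
            new = tuple(sorted(idx + (mu,)))
            out[new] = out.get(new, 0) + (-1) ** pos * sp.diff(coeff, xs[mu])
    return {k: v for k, v in out.items() if sp.simplify(v) != 0}

# sample 1-form A = x0*x1 dx2 + x2^2 dx0
A = {(2,): xs[0] * xs[1], (0,): xs[2] ** 2}
assert d(d(A)) == {}                              # (Q_1)^2 = 0
```

The sign bookkeeping lives entirely in `pos`: inserting $\theta^\mu$ into the ordered product costs one minus sign per odd coordinate it has to pass.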
But also the converse is true, provided only that the\ndimension of spacetime $\Sigma$ is at least $p+2$ ($p$ is the degree of\nthe supermanifold $\CM_2$, in our discussion above $p=2$): Assume that\n$({Q_2})^2 \neq 0$, that $a\colon \CM_1 \to \CM_2$ is such that\n$\ev_\x \circ \CF \equiv 0$, and that the integrability\ncondition holds true. Then there exists at least one\nbasic coordinate function $q^{\alpha_0} \in C^\infty(\CM_2)$ for which\n$({Q_2})^2 q^{\alpha_0}=: \varphi$ is a non-vanishing function on $\CM_2$.\nNote that this function has at most degree $p+2$, so $a^* \varphi$ is a\ndifferential form on $\Sigma$ of at most degree $p+2$. \nApplying \eqref{preBian} to $q^{\alpha_0}$ under the above assumptions,\nwe obtain $\ev_\x a^* \varphi =0$, valid for every $\x \in \Sigma$. \nIf we are able to show that\nthere always exists some $a$ fulfilling $(F^\alpha)_\x=0$ for which this\nequation is violated, we obtain a contradiction, proving that ${Q_2}$\nmust square to zero. For this to be the case, we need the condition\n$\dim \Sigma \ge p+2$, because, otherwise, \emph{any} violation\n$\varphi$ of $({Q_2})^2=0$ in a degree higher than $\dim \Sigma$ will not contribute\nto $\ev_\x a^* \varphi$. On the other hand, it is precisely the\nintegrability of \n the equations $F^\alpha =0$ that ensures that for \emph{every} choice of\n $\ev_\x \circ a$ (i.e.~for every choice of $(A^\alpha)_\x$)\nwe can \nfind a map $a$, defined at least in a neighborhood of $\x \in \Sigma$, such\nthat also $(F^\alpha)_\x=0$. This concludes the proof. \n\nWe thus managed to translate the relatively weak form of Bianchi identities \n\eqref{bian}, $\ud F^\alpha \sim \sum F^\beta$ (where the sum contains\narbitrary field dependent coefficients), \ninto a $Q$-structure on a target supermanifold $\CM_2$. 
This gives the first part of Theorem \\ref{thm:phys1} mentioned in the introduction, or, in the\nparticular case $p=2$, it means in detail:\n\\begin{thm} \\label{theo:nilpot}\nFor a spacetime dimension of at least 4, the nilpotency of the\nsuper vector field ${Q_2}$\ndefined in Eqs.~\\eqref{Qi} -- \\eqref{QB} is necessary and\nsufficient for the validity of \\eqref{bian}. \n\\end{thm}\n\nThe relation $({Q_2})^2=0$ contains \\emph{all} the conditions we\nwant to place on the \nstructural functions in \\eqref{Fi} -- \\eqref{FB}. They contain the \ngeneralization of the Jacobi identity of the structure constants in ordinary \nYang-Mills gauge theories. In the subsequent section we will display all \nthe conditions (in the example $p=2$), cf.~Eqs.~\\eqref{B1} --\n\\eqref{B7} below, discuss\ntheir general structure and present examples of it\nsuch as Lie 2-algebras or Lie algebroids (an alternative perspective, containing Courant \nalgebroids and their generalizations will be discussed separately in Section~\\ref{sec:derived}).\n\nIn the present section we still want to display the explicit form of\nthe Bianchi identities that we obtain when \\emph{using} the nilpotency \nof ${Q_2}$. 
One finds\n\begin{align} \ud F^i &\equiv \rho^i_{a,j}A^a F^j - \rho^i_a F^a \, ,\n \label{BFi}\\\n \ud F^a&\equiv \frac12C^a_{bc,i}A^b A^c F^i -t^a_{D,i}B^D F^i\n -C^a_{bc} A^b F^c - t^a_D F^D\, , \label{BFa}\\\n \ud F^D&\equiv -\Gamma^D_{aC,i} A^aB^C F^i +\Gamma^D_{aC} B^C F^a\n +\frac16H^D_{abc,i} A^aA^bA^c F^i \nonumber\\\n &\quad -\frac12H^D_{abc} A^aA^b F^c -\Gamma^D_{aC} A^a F^C \, .\n \label{BFB}\n\end{align}\nHere the power of the employed super language becomes particularly transparent: \nWhereas in every explicit calculation one needs to make heavy use \nof Eqs.~\eqref{B1} -- \eqref{B7} below, one may derive the above \nconditions within a few lines by use of the following two obvious relations\n(which\nfollow directly from the defining Equation~\eqref{CF} and the fact\nthat $\Qo$, ${Q_2}$ are nilpotent vector fields, cf.~also \cite{BKS,Str04b})\n\begin{eqnarray} \Qo \circ \CF &=& - \CF \circ {Q_2} \label{dF} \\\n \CF(\phi) &=& F^\alpha \, a^*(\partial_\alpha \phi) \label{FLeibniz} \n\end{eqnarray}\nwhere the second equation holds for all functions $\phi$ on the target\nand $\partial_\alpha$ denotes a left derivation w.r.t.~$q^\alpha$. \nWe illustrate this for $\alpha=i$: \nApplying the left hand side of \eqref{dF} to $x^i$, we get $\ud F^i$. On the \nother hand $\CF ({Q_2} x^i) = \CF(\rho^i_a\xi^a) = \CF(\rho^i_a) a^*(\xi^a)+\na^*(\rho^i_a) \CF(\xi^a) = \rho^i_{a,j} \CF(x^j) A^a + \rho^i_a F^a$.\n(Following \nusual physics notation,\nwe did not explicitly display $a^*$ when acting on \nfunctions over $M$ such as $\rho^i_{a,j}\equiv \partial_j \rho^i_a$,\nwhich are then understood as\nfunctions over $\Sigma$ via the ``scalar fields'' $X^i$).\n\nThe result above is readily specialized to particular cases of interest. \nE.g., for $M$ a point, we merely drop all terms containing $F^i$.
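As a further illustration of the calculus, the case $\alpha=a$ is treated in the same way (our own spelled-out version of the calculation, reproducing \eqref{BFa}): applying \eqref{dF} to $\xi^a$ and using \eqref{FLeibniz} gives

```latex
\begin{align*}
 \ud F^a &= -\CF\big({Q_2}\,\xi^a\big)
   = \tfrac12\,\CF\big(C^a_{bc}\,\xi^b\xi^c\big) - \CF\big(t^a_D\, b^D\big) \\
  &= \tfrac12 C^a_{bc,i}\,F^i A^b A^c
   + \tfrac12 C^a_{bc}\big(F^b A^c - A^b F^c\big)
   - t^a_{D,i}\,F^i B^D - t^a_D F^D \\
  &= \tfrac12 C^a_{bc,i}\,A^b A^c F^i - t^a_{D,i}\,B^D F^i
   - C^a_{bc}\,A^b F^c - t^a_D F^D \, ,
\end{align*}
```

where the last step uses the antisymmetry of $C^a_{bc}$ and the fact that the 2-form $F^b$ commutes with the 1-form $A^c$.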
If there is \nno $B$-field (i.e.~there are no degree two generators on $\CM_2$), we \nput $F^D$ and $B^D$ to zero wherever they occur. Combining both steps\nwe regain \nthe Bianchi identities of ordinary Yang-Mills theory. \n\n\vskip\newpassage\n\nBefore continuing we briefly comment on some further\ngeneralization. In the ansatz \eqref{Fi}--\eqref{FB} we could\nhave permitted also terms proportional to $\ud A^\alpha$, adding e.g.~a\nterm $\nu^a_i \ud X^i$ to the r.h.s.~of \eqref{Fa} etc. Since\nsuch terms enter field strengths $F^\beta$ only for\nvalues of $\beta$ corresponding to a higher form degree than the one\nof $A^\alpha$, it is always possible to replace $\ud A^\alpha$ by $F^\alpha$ by\nmerely renaming coefficient functions in $F^\beta$. Thus, in the above\nexample of Eq.~\eqref{Fa}, we may instead equally well add $\nu^a_i\nF^i$ to the ansatz, merely by\nreplacing $C^a_{bc}$ with new structure functions. Constructing\nout of these new structural functions---except for the additional\ncoefficients $\nu^\beta_\alpha$, which we may just ignore at this point---the \nvector field $Q_2$ as before, Eqs.~\eqref{Qi}--\eqref{QB}, \nTheorem~\ref{theo:nilpot}\nstill holds true: While the ideal $\I$ generated by field strengths changes with \nthe change of structural functions in the first step of the rewriting, it does not change by adding or dropping the terms with lower degree field strengths on the right hand sides in their definition and thus $\ud \I \subset \I$ gives the\nsame conditions.\n\nMore generally, the ideal $\I$ remains unaltered by every\nredefinition of the form $\widetilde F^\alpha = M^\alpha_\beta F^\beta$ if the field\ndependent matrix $M^\alpha_\beta$ is invertible. If the form degree\nof $A^\alpha$ is denoted by $|\alpha|$, then $M^\alpha_\beta$ has form\ndegree $|\alpha| - |\beta|\ge 0$.
It thus is a lower triangular matrix, where\nthe non-vanishing off-diagonal components correspond to $\\nu^\\alpha_\\beta$\nintroduced above. The (block) diagonal pieces, those where $|\\alpha| =\n|\\beta|$, correspond to $X$-dependent coefficient matrices in front of \n$\\ud A^\\alpha$ in the definition of the new field strength $\\widetilde\nF^\\alpha$---the matrix $M^\\alpha_\\beta$ is thus invertible, if and only if each of its \ndiagonal components is nonzero everywhere. For example, instead of \n\\eqref{Fi} we would consider\n$\\widetilde F^i = M^i_j \\ud X^j - \\widetilde \\rho^i_a A^a$. Now, if the\nmatrix $M^i_j(X)$ is invertible (for every point in the target $M$),\nwe may introduce $\\rho^i_a := (M^{-1})^i_j \\widetilde \\rho^j_a$, and\nthis is the first component of the vector field $Q_2$ of Theorem\n\\ref{theo:nilpot}, \nand so on. Only\nredefinitions of the above form with a non maximal and possibly even\nvarying rank of the\ndiagonal pieces of $M^\\alpha_\\beta$ would require a new and in general more\nintricate analysis. \n\nAlso coordinate transformations on $M$, or, more\ngenerally, on the supermanifold $\\CM_2$, will induce\nredefinitions of field strengths. In an active interpretation of coordinate transformations, i.e.~viewing them as (local expressions of) diffeomorphisms instead of a \nchange of a chart (passive interpretation in physics terminology), the ideal $\\I$ changes in general, but in a covariant way. In this active interpretation $\\I$ remains unchanged only under very particular super-diffeomorphisms, namely those that leave the $Q$-structure invariant; we will come back to them in the context of gauge symmetries in section \\ref{s:gauge} below. \n\nIf $M$ is not a point, \nthe field strength components $F^\\alpha$ themselves do, however, \\emph{not} transform covariantly, except for the lowest ones with $|\\alpha| =0$. (Only the complete set\nof the whole ideal $\\I$ does). 
\n This can be corrected by means of the choice of connections in\nthe vector bundles $E$ and\n $V$.\\footnote{The bundles $E$ and $V$ and these connections are not to\nbe confused with the (total) bundle of the gauge theory and the gauge fields,\nrespectively. The first one, the total bundle, is assumed to be trivial\nwithin this paper---for a generalization cf.~\\cite{KS07}---and we\non purpose did not call $A^\\alpha$ a connection since this would require\nfurther geometrical justification, and in some way is even the wrong\nnotion. The choice of $M$, $E$, $V$, and possibly connections on the\nlatter two, are fixed background data; they replace the choice of a\nLie algebra in ordinary Yang-Mills theories. Only the gauge fields $A^\\alpha$ \nare dynamical.} One way of seeing this is to first determine the\ntransformation behavior of all the structural coefficient functions \nunder a change of coordinates on ${\\CM_2}$---we will do this within the\nsubsequent section---and then to correct for the overall behavior of\n$F^a$ and $F^D$ by appropriate additions. There is a much simpler route,\nhowever, using the fact\nthat $\\CF$ is a (graded) Leibniz-type operator\nover $a^*$, cf.~Eq.~\\eqref{FLeibniz}:\nA change of local frame in the vector bundle $E$, e.g.,\ncorresponds to a change of coordinates $\\xi^a \\mapsto\n\\widetilde{\\xi^a} = M^{\\tilde a}_b \\xi^b$ on ${\\CM_2}$, where $M^{\\tilde a}_b$ are\nfunctions on the base\\footnote{which is not to be confused with\nspacetime $\\Sigma$---both $M$ and $E$ are data of the target.}\n $M$ of $E$. With \\eqref{FLeibniz} this implies $F^{\\widetilde{a}}= M^{\\tilde\n a}_b(X) F^b + M^{\\tilde a}_{b,i}(X) F^i A^b$, \n where $M^{\\tilde a}_{b,i} \\equiv \\partial\n M^{\\tilde a}_{b}\/\\partial x^i$. Only the first term of the two\n corresponds to a tensorial behavior. 
From this it is obvious\n that if $\Gamma^a_{ib}$ and\n $\Gamma^D_{iC}$ denote connection coefficients in local frames in $E$\n and $V$, respectively, the redefined field strengths\n \begin{eqnarray} F^a_{(\Gamma)} &:=& F^a + \Gamma^a_{ib} F^i A^b \label{FaG}\n \\\n F^D_{(\Gamma)} &:=& F^D + \Gamma^D_{iC} F^i B^C\n \label{FBG} \end{eqnarray}\n(together with the unaltered $F^i$) are tensorial. In some gauge invariant \naction functionals $S$ there may be\na contribution of the form $\Lambda_i F^i$, where $\Lambda_i$ is a\nLagrange multiplier field;\footnote{Cf., e.g., \cite{Str04b,KS08}.} in such a case one may replace $F^a_{(\Gamma)},\nF^D_{(\Gamma)}$ by the simpler expressions\n$F^a, F^D$ in the remaining part of the action $S$---at\nthe mere expense that the new, redefined Lagrange multipliers lose\ntheir tensorial behavior. For other functionals these contributions\nmay be mandatory, however. \n\nFinally, an interesting alternative to the use of $\CF$ or the auxiliary connections needed for a tensorial behavior of the field strengths like in Eqs.~\eqref{FaG} and \eqref{FBG} is to introduce a map $f \colon \CM_1 \to T[1]\CM_2$ defined by means of $f^*(q^\alpha)=a^*(q^\alpha)$ and $f^*(\ud q^\alpha)=\CF (q^\alpha)$;\footnote{This was found later in the paper \cite{KS07}.} \n equipping $T[1]\CM_2$ with an appropriate canonical Q-structure, $f$ becomes a Q-morphism for \emph{every} map $a \colon \CM_1 \to \CM_2$ and generalizes the Chern-Weil map in the case of non-trivial Q-bundles as is useful for constructing corresponding characteristic classes. In particular, $f^* \colon C^\infty(T[1]\CM_2) \to \Omega^\bullet(\Sigma)$ is then an algebra morphism, $f^*(x^i\xi^a \ud b^D) = X^i A^a \wedge F^D$ etc. \n\n\newpage\n\n\section{Q-structures for $p=2$ --- Lie 2-algebroids}\n\label{s:examQ}\nIn this section we study general vector fields of the form\n\eqref{Qi}--\eqref{QB}.
Dropping the index 2 for notational\nconvenience within \emph{this} section,\n\beq \label{Q} Q = \rho^i_a \xi^a \frac{\partial}{\partial x^i}\n- \frac12 C^a_{bc} \xi^b \xi^c \frac{\partial}{\partial \xi^a} +t^a_D b^D\n\frac{\partial}{\partial \xi^a} -\Gamma^D_{aC} \xi^a b^C\n\frac{\partial}{\partial b^D}\n+ \frac{1}{6} \n H^D_{abc} \xi^a \xi^b \xi^c \frac{\partial}{\partial b^D} \eeq\nis the most general degree one vector field on a degree two\nsupermanifold (cf.~also footnote \ref{fn:pos}). \nThe range of indices is $i = 1, \ldots, n$, $a = 1,\n\ldots, r$, $D=1, \ldots, s$, respectively,\nwhere, in previous notation, $n = \dim M$, $r = \dim W$ (the rank of $E$), and $s = \dim \V$ (the rank of $V$).\nNote that $C^c_{ab}$ and $H^D_{abc}$ are completely\nantisymmetric in their lower indices by construction. \nRequiring $Q$ to be homological, that is to square to\nzero and thus to define a Q-structure, one obtains the following identities:\n\begin{align}\n \rho^j_{[a}\rho^i_{b],j} -\frac12\rho^i_c C^c_{ab} &= 0 \label{B1}\\\n \rho^i_a t^a_D &= 0 \label{B2}\\\n 3C^e_{[ab} C^d_{c]e} +3\rho^i_{[c} C^d_{ab],i} - t^d_D H^D_{abc} &= 0\n \label{B3}\\\n \rho^i_c t^a_{D,i} -\Gamma^F_{cD} t^a_F -C^a_{bc}t^b_D &= 0 \label{B4}\\\n \rho^i_{[b}\Gamma^D_{a]F,i} +\frac12 \Gamma^D_{eF}C^e_{ab}\n -\Gamma^D_{[aC}\Gamma^C_{b]F} +\frac12H^D_{abc}t^c_F &= 0 \label{B5}\\\n t^a_{(A} \Gamma^D_{aC)} &= 0 \label{B6}\\\n \Gamma^D_{[aF}H^F_{bcd]} +\rho^i_{[a}H^D_{bcd],i} -\frac32H^D_{e[ab}C^e_{cd]} &= 0 \label{B7}\n\end{align}\nThe first two equations result upon twofold application of $Q$ to\ncoordinates $x^i$, the next three to $\xi^a$, and the last two when\n$Q^2$ is applied to $b^D$ (the result being put to zero\nin all cases).
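These identities can be spot-checked in components. As a concrete illustration (our own choice of data, not an example from the text) take $M = \mathbb{R}^3$, $t^a_D = 0$, $\Gamma^D_{aC} = 0$, $H^D_{abc} = 0$, and for $\rho^i_a$, $C^c_{ab}$ the anchor and structure functions of the action Lie algebroid of $\mathfrak{so}(3)$ acting on $\mathbb{R}^3$ by rotations; then \eqref{B1} and \eqref{B3} are the only nontrivial conditions, and both can be verified with sympy:

```python
import sympy as sp

x = sp.symbols('x0:3')
# Levi-Civita symbol on the index set {0, 1, 2}
eps = lambda a, b, c: (a - b) * (b - c) * (c - a) // 2

# anchor rho^i_a: the rotation vector fields v_a = eps_{abi} x^b d/dx^i
rho = [[sum(eps(a, b, i) * x[b] for b in range(3)) for a in range(3)]
       for i in range(3)]                    # rho[i][a]
# bracket [xi_a, xi_b] = C^c_{ab} xi_c, here C^c_{ab} = -eps_{abc}
C = [[[-eps(a, b, c) for b in range(3)] for a in range(3)]
     for c in range(3)]                      # C[c][a][b]

def b1(i, a, b):
    """rho^j_{[a} rho^i_{b],j} - (1/2) rho^i_c C^c_{ab}, cf. (B1)"""
    antis = sp.Rational(1, 2) * sum(
        rho[j][a] * sp.diff(rho[i][b], x[j])
        - rho[j][b] * sp.diff(rho[i][a], x[j]) for j in range(3))
    return sp.expand(antis - sp.Rational(1, 2)
                     * sum(rho[i][c] * C[c][a][b] for c in range(3)))

def b3(d, a, b, c):
    """3 C^e_{[ab} C^d_{c]e} as a cyclic sum (C^e_{ab} is antisymmetric);
    the rho C_{,i} term of (B3) drops since C is constant, and t = 0."""
    return sum(C[e][p][q] * C[d][r][e]
               for (p, q, r) in [(a, b, c), (b, c, a), (c, a, b)]
               for e in range(3))

assert all(b1(i, a, b) == 0 for i in range(3)
           for a in range(3) for b in range(3))
assert all(b3(d, a, b, c) == 0 for d in range(3) for a in range(3)
           for b in range(3) for c in range(3))
```

With this $\rho$ one finds $[v_a, v_b] = -\epsilon_{abc}\, v_c$, which fixes the sign $C^c_{ab} = -\epsilon_{abc}$ used above; the remaining identities \eqref{B2}, \eqref{B4}--\eqref{B7} hold trivially for this choice of vanishing $t$, $\Gamma$, $H$.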
Round\/square brackets indicate symmetrization\/antisymmetrization\nof enclosed\nindices of equal type (in Eq.~\\eqref{B5}, e.g., this concerns the indices $a$\nand $b$ in the respective two terms, and only them).\n\nWe now discuss the meaning of these equations in more standard,\nindex-free terms. We start with $M =\\mathrm{pt}$ and $s=0$, i.e.~there are no $X^i$\nand $b^D$ coordinates. Correspondingly, in \\eqref{Q} only the term\ncontaining $C^c_{ab}$ survives. From the above set of equations, the\nonly nontrivial one is Eq.~\\eqref{B3}, reducing to $C^e_{[ab}\nC^d_{c]e}=0$, which is the Jacobi identity for the coefficients\n$C^a_{bc}$. Thus in this simple case, in which the supermanifold\n$\\CM$ has a\ntrivial base and is actually only of degree one,\none finds the Q-manifold\n$(\\CM,Q)= (\\g[1],\\ud_{CE})$, where $\\g$ is some Lie algebra and\n$\\ud_{CE}$ the corresponding Chevalley-Eilenberg\ndifferential. Super vector spaces of degree one\nwith Q-structure are equivalent to\nLie algebras. \n\nLet us in a next step drop the restriction that $M$ is a point,\ni.e.~we deal with a general degree one supermanifold (which,\ncertainly, may also be viewed as a particular degree two\nsupermanifold) with Q-structure. This leaves the first two\nterms in \\eqref{Q}. The corresponding equations, \\eqref{B1} and\n\\eqref{B3} with $t^d_D \\equiv 0$, are recognized as the\nstructural equations for a Lie algebroid: A Lie algebroid is a vector\nbundle $E$ over $M$ together with a bundle map $\\rho \\colon E \\to TM$,\ncalled the anchor map, \nand a Lie algebra defined on the sections of $E$ satisfying the\nLeibniz rule $[\\psi, f\n\\varphi] = f [\\psi, \\varphi] + \\rho(\\psi) f \\, \\varphi$.\\footnote{For\nlater use we mention that we call $E$ an \\emph{almost Lie algebroid} if it\nis defined as above with the mere difference that the bracket $[\n\\cdot, \\cdot ]$ needs to\nbe antisymmetric only, but not necessarily to satisfy the Jacobi\nidentity. 
If $\\rho \\equiv 0$ in an (almost) Lie algebroid, the bracket\ndefines a fiberwise\nproduct and $E$ becomes a bundle of (almost) Lie algebras.}\n{} From these\ndata one can infer that $\\rho$ is also a morphism of Lie algebras, $\\rho([\\psi, \n\\varphi]) = [\\rho(\\psi), \\rho(\\varphi)]$. Choosing a local frame\n$\\xi_a$ in $E$ and local coordinates $x^i$ on $M$, the anchor gives\nrise to structural functions $\\rho^i_a=\\rho^i_a(x)$ \nvia $\\rho(\\xi_a) = \\rho^i_a \\partial_i$. Likewise, the bracket between\ntwo sections from the basis induces $C^c_{ab}=C^c_{ab}(x)$:\n$[\\xi_a,\\xi_b]= C^c_{ab} \\xi_c$. Now it is easy to see that the\nmorphism property of $\\rho$ implies Eq.~\\eqref{B1} and the Jacobi\nidentity for the Lie bracket on $E$, together with the Leibniz rule,\nyields (the respective remainder of) Eq.~\\eqref{B3}. \n\nFor the reverse direction, i.e.~to recover the index free geometrical\nstructure from the above formulas, one needs to address coordinate\nchanges on the Q-manifold $\\CM$. By definition, coordinate changes on\na graded manifold need to be degree preserving; therefore, \n$\\widetilde{x^i}= \\widetilde{x^i}(x)$,\n$\\widetilde{\\xi^a} = M^{\\tilde a}_b(x) \\xi^b$. Thus one deals with a\nvector bundle $E$ over the base $M$ of $\\CM$ and $\\CM = E[1]$ (by\ndefinition, for a vector bundle the shift in degree\nconcerns only the fiber coordinates). Implementing this coordinate\nchange on $Q$, we see that $\\rho^i_a$ transforms like a tensor, but\n$C_{ab}^c$ does not. Indeed,\n$\\partial_i=\\frac{\\partial \\widetilde{x^j}}{\\partial x^i}\n\\widetilde{\\partial_j} + \\frac{\\partial \\widetilde{\\xi^a}}{\\partial\nx^i} \\widetilde{\\partial_a}$, where \n$\\partial_i = \\partial\/\\partial x^i$, $\\partial_a = \\partial\/\\partial \\xi^a$\netc. So, there is a contribution proportional to $\\rho^i_a$ and the\n$x$-derivative of $M^{\\tilde a}_b$ to the transformation law of\n$C_{ab}^c$. 
Since $C_{ab}^c \\xi_c = [\\xi_a, \\xi_b]$ is to hold true\nin every frame---after all we do not want to have frame-dependent\ndefinitions of a bracket---this results in the Leibniz property of the\nbracket. The rest is then obvious.\n\nSummarizing, a degree one supermanifold with Q-structure, \ncalled Q1-manifold for simplicity, is \\emph{tantamount}\nto a Lie algebroid, $(\\CM,Q)=(E[1],{}^E\\ud)$.\\footnote{This was observed first by Vaintrob \\cite{Vai97}, including also the compact formulation of Lie algebroid morphisms (a proof of equivalence of this definition with the standard, more complicated one of \\cite{MaH93} can be found in \\cite{BKS}). --- Recall \\label{f:8} that we \nrestricted to non-negatively graded supermanifolds; Q-manifolds with \nthis restriction are also called NQ-manifolds in the literature, cf., e.g., \n\\cite{Sch93,AKSZ}.} Here ${}^E\\ud$ is the\ncanonical differential that generalizes $\\ud_{CE}$ from above for\n$M=\\mathrm{pt}$ and the de Rham differential for $E=TM$, $\\rho = \\id$,\nthe standard Lie algebroid (cf., e.g., \\cite{CaWe99} for\n${}^E\\ud$ or just\nuse the above formulas for $Q$, reinterpreted correspondingly, as a\npossible definition). \n \nIn order to reveal the differential geometry contained in a general\n``Q2-manifold'', i.e.~a degree two graded manifold with Q-structure, \nEq.~\\eqref{Q} together\nwith Eqs.~\\eqref{B1}--\\eqref{B7}, we follow the same strategy: We\nfirst look at degree preserving coordinate changes on $\\CM$,\n\\beq \\widetilde{x^i}= \\widetilde{x^i}(x), \\qquad \n\\widetilde{\\xi^a} = M^{\\tilde a}_b(x) \\xi^b , \\qquad\n \\widetilde{b^D} = N^{\\tilde D}_C(x) b^C +\\frac12 L^{\\tilde D}_\n{ab}(x)\\xi^a\\xi^b \\, . \\label{trafo} \\eeq\n If there were no $\\xi^a$ coordinates,\ni.e.~$r=0$, similarly to before we would obtain $\\CM = V[2]$ for some\nvector bundle $V$ over $M$. 
The above $L$-terms imply that in general\n$\\CM$ is not just the direct sum of two vector bundles $E$ and $V$,\nbut instead one has the sequence of supermanifolds\n\\beq V[2] \\to \\CM \\to E[1] \\, . \\label{sequence} \\eeq\n(The first map is an embedding, characterized \nby $\\xi^a =0$, and the second map a (surjective)\nprojection, dropping the $b^D$ coordinates). \nOnly after the choice of a\nsection $\\imath \\colon E[1] \\to \\CM$, characterized locally by $b^D = L^D_{ab}(x) \\xi^a \\xi^b$, \nthis gives rise to $\\CM = E[1] \\oplus V[2]$. The \\emph{difference} between two \nsections is a \nsection $B$ of $\\Lambda^2 E^* \\otimes V$---similarly to the fact that the\ndifference between two connections in a vector bundle\ncorresponds to a tensorial object.\nIn what follows the choice of a section \nshall be understood, so that in terms of ordinary differential geometry \nthere is a total space corresponding to the Whitney sum \n$E \\oplus V$ of two vector bundles over $M$. However, let us mention that when the Q-manifold under discussion carries \nadditional structures, like a compatible symplectic form, this may not be the best adapted description.\n\n\n\n\nSo, under the above assumption, a (non-negatively) graded manifold of (maximal) degree two gives rise to\na vector bundle $E \\oplus V$ \nover $M$. Next we\ndetermine the transformation properties of the coefficients in\n\\eqref{Q}. For later purposes we keep also $L$-contributions,\ndisregarding them only afterwards within the rest of this\nsection. 
With \\eqref{trafo} one\nobtains:\n\\begin{eqnarray}\n \\widetilde{\\rho^i_a} &=& \\frac{\\partial\\tilde x^i}{\\partial\nx^j} \\rho^j_b (M^{-1})^b_{\\tilde a} \\label{trafo1} \\\\\n \\widetilde{C_{ab}^c} M^{\\tilde a}_d M^{\\tilde b}_e \n &=& M^{\\tilde c}_f C_{de}^f\n -2M^{\\tilde c}_{d],i}\\rho^i_{[e}\n +M^{\\tilde c}_f t^f_D (N^{-1})^D_{\\tilde E} L^{\\tilde E}_{de}\n \\label{trafo2} \\\\\n \\widetilde {t^a_D} &=& M^{\\tilde a}_b t^b_C (N^{-1})^C_{\\tilde D}\n \\label{trafo3}\n \\\\\n \\widetilde {\\Gamma^C_{aE}} M^{\\tilde a}_b N^{\\tilde E}_D \n &=& N^{\\tilde C}_E \\Gamma^E_{bD}\n -N^{\\tilde C}_{D,i} \\rho^i_b\n -L^{\\tilde C}_{db} t^d_D \\label{trafo4} \\\\\n \\widetilde {H^D_{def} } M^{\\tilde d}_a M^{\\tilde e}_b M^{\\tilde f}_c\n &=& N^{\\tilde D}_C H^C_{abc} \n +3N^{\\tilde D}_F \\Gamma_{a]C}^F (N^{-1})^C_{\\tilde E} L^{\\tilde E}_{[bc}\n +3\\rho^i_{[a} L^{\\tilde D}_{bc],i}\n -3L^{\\tilde D}_{d[a}C^d_{bc]}\n \\nonumber\\\\ &&\n -3N^{\\tilde D}_{C,i}\\rho^i_{[a} \\big(N^{-1}\\big)^C_{\\tilde E}\n L^{\\tilde E}_{bc]}\n -3L^{\\tilde D}_{d[a} t^d_C (N^{-1})^C_{\\tilde E} L^{\\tilde E}_{bc]}\n \\label{trafo5}\n\\end{eqnarray}\nNote that $C_{ab}^c$, $H^D_{abc}$, and $\\Gamma_{aD}^C$ are affected by the affine \n$L$-contributions. Only the two quantities $\\rho^i_a$ and $t_D^a$ are\nnot; they both correspond to bundle maps, namely $t \\colon V\n\\to E$ and $\\rho \\colon E \\to TM$ or, equivalently, to sections of\n$V^* \\otimes E$ and $E^* \\otimes TM$, respectively.\n If $L$-terms are suppressed, $H$ is likewise just a \nsection of $\\Lambda^3 E^* \\otimes V$. But the nature of all three\nobjects, $H$,\nthe bracket on $E$ induced by $C^c_{ab}$, and the object corresponding\nto $\\Gamma_{aD}^C$ discussed further below,\ncan change if $L^D_{ab}$ is not forced to vanish. 
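For instance, for a pure change of the section, i.e.~\\eqref{trafo} with $M^{\\tilde a}_b = \\delta^a_b$ and $N^{\\tilde D}_C = \\delta^D_C$, Eqs.~\\eqref{trafo2} and \\eqref{trafo4} reduce to\n\\beq \\widetilde{C^c_{ab}} = C^c_{ab} + t^c_D L^D_{ab} \\, , \\qquad \\widetilde{\\Gamma^C_{aD}} = \\Gamma^C_{aD} - L^C_{da} t^d_D \\, ,\n\\eeq\nso the bracket induced by $C^c_{ab}$ changes by $t \\circ B$, where $B \\in \\Gamma(\\Lambda^2 E^* \\otimes V)$ is the difference of the two sections. 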
\nFor the next steps we will first restrict to vanishing $L$-contributions, i.e.~to\na fixed choice of a section of \\eqref{sequence}.\n\nEven under this assumption, the nature of $\\Gamma_{aD}^C$ is more\nintricate than that of $H$. \nAs anticipated by the notation, $\\Gamma_{aD}^C$ are\nthe coefficients of a kind of connection. More precisely, \nintroduce an\n$E$-connection ${{}^E{}\\nabla}$ in $V$, i.e.\\ for every $\\psi \\in \\Gamma(E)$ an\noperator $\\Econn_\\psi \\colon \\Gamma(V)\n\\to \\Gamma(V)$ satisfying the Leibniz rule \\mbox{\\( \\Econn_\\psi (fv) =\n\\rho(\\psi)f \\cdot v +f \\Econn_\\psi v \\)} such that the assignment $\\psi\n\\mapsto \\Econn_\\psi$ is $C^\\infty(M)$-linear in\n$\\psi$.\\footnote{More abstractly, $\\Econn_\\cdot$ corresponds to a map from $E$\n to $A(V)$, the Atiyah algebroid of $V$. $A(V)$ is a vector bundle over $M$\n itself, namely the Lie algebroid\n corresponding to the canonical Lie groupoid of automorphisms of the\n vector bundle $V$.}\n This definition reduces to the one of an ordinary covariant\nderivative in the case of $E=TM$. Then $\\Gamma_{aD}^C$ are just the\n$E$-connection coefficients in a local frame, $\\Econn_{\\xi_a} b_D =\n\\Gamma_{aD}^C b_C$; now Eq.~\\eqref{trafo4} follows from\n\\eqref{trafo} (both without the respective\n$L$-contribution, \\eqref{trafo} interpreted as a change of frames in\nthe two vector bundles) by means of the properties of ${{}^E{}\\nabla}$ and vice versa. \n\nFinally, the coefficients $C_{ab}^c$ induce a\nbracket $[ \\cdot , \\cdot ]$ on sections of $E$ via $[\\xi_a,\\xi_b] :=\nC_{ab}^c \\xi_c$, which is antisymmetric by construction and, \n\\emph{according to} \\eqref{trafo2}\n without $L$-terms, satisfies a Leibniz\nrule. \n\nNow, using the above data, we can reinterpret the conditions\nensuring $Q^2=0$. 
In this way we arrive at:\n\\begin{prop} \\label{prop:double}\nA Q-structure on a supermanifold of degree two (a Q2-manifold) \nwith a section of the sequence \\eqref{sequence} as described above \nis equivalent to a Lie 2-algebroid.\n\\end{prop}\n\\begin{vdef} \\label{Lie2def} A \\emph{Lie 2-algebroid} is a complex of vector bundles\n\\begingroup\\makeatletter\\ifx\\SetFigFont\\undefined%\n\\gdef\\SetFigFont#1#2#3#4#5{%\n \\reset@font\\fontsize{#1}{#2pt}%\n \\fontfamily{#3}\\fontseries{#4}\\fontshape{#5}%\n \\selectfont}%\n\\fi\\endgroup%\n\\beq\n\\setlength{\\unitlength}{3947sp}%\n\\begin{picture}(1297,885)(601,-361)\n\\thicklines\n{\\put(1351,314){\\vector( 1, 0){375}}\n}%\n{\\put(1876,164){\\vector(-2,-1){540}}\n}%\n{\\put(751,314){\\vector( 1, 0){375}}\n}%\n{\\put(1276,164){\\vector( 0,-1){300}}\n}%\n{\\put(676,164){\\vector( 2,-1){540}}\n}%\n\\put(601,239){\\makebox(0,0)[lb]{\\smash{\\SetFigFont{12}{14.4}{\\rmdefault}{\\mddefault}{\\updefault}{V}%\n}}}\n\\put(1201,239){\\makebox(0,0)[lb]{\\smash{\\SetFigFont{12}{14.4}{\\rmdefault}{\\mddefault}{\\updefault}{E}%\n}}}\n\\put(1801,239){\\makebox(0,0)[lb]{\\smash{\\SetFigFont{12}{14.4}{\\rmdefault}{\\mddefault}{\\updefault}{TM}%\n}}}\n\\put(1201,-361){\\makebox(0,0)[lb]{\\smash{\\SetFigFont{12}{14.4}{\\rmdefault}{\\mddefault}{\\updefault}{M}%\n}}}\n\\put(826,389){\\makebox(0,0)[lb]{\\smash{\\SetFigFont{12}{14.4}{\\rmdefault}{\\mddefault}{\\updefault}{$t$}%\n}}}\n\\put(1426,389){\\makebox(0,0)[lb]{\\smash{\\SetFigFont{12}{14.4}{\\rmdefault}{\\mddefault}{\\updefault}{$\\rho$}%\n}}}\n\\end{picture}%\n \\label{compl} \n\\eeq \ntogether with an antisymmetric bracket $[ \\cdot , \\cdot ]$ on $E$, an $E$-connection ${{}^E{}\\nabla}$ \non $V$, and an ``anomaly'' $H \\in \\Gamma(\\Lambda^3 E^* \\otimes V)$ satisfying\nthe following compatibility conditions: \n\\begin{align} [\\psi_1,f\\psi_2] &= \\rho(\\psi_1)f \\, \\psi_2 + \nf[\\psi_1,\\psi_2] \\, , \\label{Leibn}\\\\\n {}[\\psi_1,[\\psi_2,\\psi_3]\\,] 
+\\textup{cycl(123)} &= t \\big(H(\\psi_1,\n \\psi_2, \\psi_3)\\big) \\, , \n \\label{Jac}\\\\{}\n[\\Econn_{\\psi_1},\\Econn_{\\psi_2}]v - \\Econn_{[\\psi_1,\\psi_2]}v &= \nH\\big(\\psi_1,\\psi_2,t(v)\\big) \\, , \\label{repr1}\n\\end{align}\nas well as $t(\\Econn_\\psi v) = [\\psi, t(v)]$ and $\\Econn_{t(v)} w = -\n\\Econn_{t(w)} v$. In addition, $H$ has to satisfy\n\\beq\n \\Econn_{\\psi_1} H(\\psi_2,\n \\psi_3, \\psi_4) -\\frac32 H([\\psi_1,\\psi_2],\\psi_3,\\psi_4)\n +\\textup{Alt}(1234)= 0 \\label{strange} \\, .\n\\eeq\n\\end{vdef}\nWe will now make further remarks on the statement of the above\nproposition and some parts of the above definition. In this way we\nwill also arrive at another, somewhat simpler form of the description of Q2-manifolds (or of what we just called Lie 2-algebroids), provided by Theorem\n\\ref{theo1} below.\\footnote{In this part of the paper we provide a \nclassical differential geometry description of a Q2-manifold. The relatively simple notion of a Q2-manifold in graded geometry encodes relatively involved data in the classical geometry setting. These are, however, chosen so as to mimic the standard definition of a Lie algebroid (corresponding to Q1-manifolds) and, simultaneously, to generalize existing definitions of strict and semi-strict Lie 2-algebras. It may be reasonable to call the above or any equivalent data in \\emph{classical} differential geometry \\emph{semi-strict} Lie 2-algebroids, so as to clearly distinguish them from definitions like the ones given in the Appendix or from increasingly popular conventions where Q$k$-manifolds are considered as a definition of Lie $k$-algebroids (cf.~also the corresponding discussion in the Introduction following Theorem \\ref{thm1.5}).}\n\n\nIn the above, $\\psi_i \\in \\Gamma(E)$, $f \\in\nC^\\infty(M)$, and $v,w \\in \\Gamma(V)$. That \\eqref{compl} defines a\ncomplex just means that $ \\rho \\circ t = 0$, as corresponds to\nEq.~\\eqref{B2}. 
The three projections going to one and the same\ncopy of the base $M$ mean that the bundle maps $t$ and $\\rho$ cover\nthe identity map on $M$. Eqs.~\\eqref{Jac}, \\eqref{repr1}, and\n\\eqref{strange} correspond to Eqs.~\\eqref{B3}, \\eqref{B5},\nand \\eqref{B7}, respectively; the two equations in the text are\n\\eqref{B4} and \\eqref{B6}. From Eqs.~\\eqref{Leibn} and\n\\eqref{Jac} together with $ \\rho \\circ t = 0$ one may show that\n$\\rho$ is a morphism of brackets:\n\\beq \\rho\\big([\\psi_1,\\psi_2]\\big) = [\\rho (\\psi_1), \\rho (\\psi_2)] \\, , \n\\label{rhomor} \\eeq\nwhere the bracket on the r.h.s.~denotes the commutator of \nvector fields (i.e.~the bracket in the standard Lie algebroid $TM$). \nThis equation, finally, reduces to \\eqref{B1} in components. \n\n\n\n\nCondition \\eqref{Leibn} is the Leibniz rule for the bracket on\n$E$, $t \\circ H$ defining its Jacobiator according to\n\\eqref{Jac}. So, for $H=0$, or even for $H$ living only in the\nkernel of $t$, $E$ becomes a Lie algebroid. As follows from\n\\eqref{repr1}, $H$ also governs the $E$-curvature ${{}^E\\!{}R}$ of ${{}^E{}\\nabla}$; indeed,\n$$ {{}^E\\!{}R}(\\psi_1,\\psi_2) \\equiv [\\Econn_{\\psi_1},\\Econn_{\\psi_2}] -\n\\Econn_{[\\psi_1,\\psi_2]} \\quad , \\qquad \n{{}^E\\!{}R} \\in \\Gamma(\\Lambda^2E^* \\otimes \\End(V)) \\, , \n$$ where $[\\Econn_{\\psi_1},\\Econn_{\\psi_2}]$ denotes the ordinary commutator of derivations, \nis nothing but the generalization of\ncurvature to the context of $E$-connections in $V$ for the \nalmost Lie algebroid $E$. Denoting by ${{}^E{}\\Omega}^p(M,V)$ the sections of\n$\\Lambda^p E^* \\otimes V$, the $E$-$p$-forms with values in $V$, where $V$ is some\nvector bundle over the base $M$ of $E$, ${{}^E\\!{}R}$ becomes an $E$-2-form\nwith values in the endomorphisms of $V$. Note that we used\nEq.~\\eqref{rhomor} to show \nthat the operator ${{}^E\\!{}R}(\\psi_1,\\psi_2)$ is\n$C^\\infty(M)$-linear and thus indeed defines an \nendomorphism of $V$. 
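Indeed, a short computation using the Leibniz rule of ${{}^E{}\\nabla}$ twice gives\n\\beq {{}^E\\!{}R}(\\psi_1,\\psi_2)(fv) = \\big([\\rho(\\psi_1),\\rho(\\psi_2)] - \\rho([\\psi_1,\\psi_2])\\big)f \\cdot v + f \\, {{}^E\\!{}R}(\\psi_1,\\psi_2) v \\, , \\eeq\nand the first term vanishes precisely by \\eqref{rhomor}. 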
$H=0$ implies a flat $E$-connection, $ {{}^E\\!{}R}=0$, which \nis then a \\emph{representation} of $E$ on $V$. The condition\n\\eqref{strange} implies that the Jacobi condition for $[ \\cdot ,\n\\cdot ]$ and the\nrepresentation property of ${{}^E{}\\nabla}$ are violated in a relatively mild,\ncontrolled way.\n\nEq.~\\eqref{strange} can be rewritten in a more compact form.\nIntroduce the generalization of an exterior covariant derivative\n${{}^E\\!{}\\mathrm{D}} \\colon {{}^E{}\\Omega}^\\cdot (M,V) \\to {{}^E{}\\Omega}^{\\cdot \\, +1} (M,V)$ \nby means of a generalized Cartan\nformula\n\\begin{align} \\label{genCar} {{}^E\\!{}\\mathrm{D}} \\omega (\\psi_0, \\ldots , \\psi_p) = \n & \\sum_i (-1)^i \\Econn_{\\psi_i}\n \\omega(\\psi_0,\\ldots,\\widehat{\\psi_i},\\ldots,\\psi_p) \\nonumber \\\\\n & +\\sum_{i<j} (-1)^{i+j}\n \\omega([\\psi_i,\\psi_j],\\psi_0,\\ldots,\\widehat{\\psi_i},\\ldots,\\widehat{\\psi_j},\\ldots,\\psi_p)\n \\, .\n\\end{align}\nIn terms of this operator, Eq.~\\eqref{strange} is nothing but the condition ${{}^E\\!{}\\mathrm{D}} H = 0$.\n\nWe next recall the notion of a Courant algebroid, to which we will return below.\n\\begin{vdef} \\label{def:Cour} A \\emph{Courant algebroid} is a vector bundle $W \\to M$ equipped\nwith an anchor map $\\rho \\colon W \\to TM$, a bracket $[ \\cdot , \\cdot ]_W$ on its sections,\nand a non-degenerate inner product $\\< \\cdot , \\cdot \\>$ on its fibers\nsuch that the following three identities hold true \n\\begin{align}\n [\\psi,[\\phi_1,\\phi_2]_W]_W &= [[\\psi,\\phi_1]_W,\\phi_2]_W +[\\phi_1,[\\psi,\\phi_2]_W]_W\n \\label{cJacobi} \\\\\n [\\psi,\\psi]_W &= \\frac12\\uD\\<\\psi,\\psi\\> \\label{cnSkew}\\\\\n \\rho(\\psi)\\<\\phi_1,\\phi_2\\> &=\n \\<[\\psi,\\phi_1]_W,\\phi_2\\> + \\<\\phi_1,[\\psi,\\phi_2]_W\\> \n\\, , \\label{cInvar}\n\\end{align}\nwhere $\\uD$ is induced by means of $\\rho$ and the inner product via $\\<\\uD f , \\phi\\> =\n\\rho(\\phi) f$. \n \\end{vdef}\nThe first axiom is the (left-)Leibniz rule of the bracket with respect\nto itself. From \\eqref{cInvar} one can also conclude \\cite{LiB11} a\nLeibniz rule w.r.t.~multiplication of sections by functions\n\\begin{equation} [\\psi,f\\phi]_W = \\rho(\\psi)f\\phi +f[\\psi,\\phi]_W\n\\label{cLeibniz} \\, . \\end{equation}\nThe properties \\eqref{cJacobi} and \\eqref{cLeibniz} render \n$(W,\\rho,[ \\cdot , \\cdot ]_W)$\nwhat we want to call a Leibniz--Loday algebroid (cf.~also \\cite{KS08})\\footnote{In fact, in \\cite{KS08} we called it Loday algebroid. 
We finally decided for the present paper to call an algebra with the property \\eqref{cJacobi} a Leibniz--Loday algebra and, if it is an algebra of sections and the condition \\eqref{cLeibniz} is satisfied, a Leibniz--Loday algebroid.}. \nThe second line brings in the fiber metric so as\nto control the violation of the antisymmetry of the bracket, while the last line is the\nad-invariance of the inner product.\nEqs.~\\eqref{cJacobi} and\n\\eqref{cLeibniz} permit one to conclude that $\\rho$ is a morphism of\nbrackets, cf.~e.g.~the corresponding generalization in\nProp.~\\ref{p:Vinogrelem} below; together with \\eqref{cLeibniz} this\ngives the five axioms often demanded in the literature for the\ndefinition of a Courant algebroid. \n\nFor a general Courant algebroid one always has the complex\n\\beq 0 \\to T^*M \\to W \\to TM \\to 0 \\, , \\label{seq}\\eeq\ninduced by $\\rho$ and its adjoint. Exact Courant algebroids, for which, by\ndefinition, the above sequence is exact, are classified by\nan element $[H] \\in H^3_{{\\mathrm{dR}}}(M)$ \\cite{SevLett}; up to a\ncontribution $H(v,w,\\cdot)$ added to the r.h.s.~of\n\\eqref{Cour}, this reproduces the explicit formulas on $W=TM\\oplus\nT^*M$ introduced above (with $\\rho$ being the projection onto the first\nfactor and a change of splitting in \\eqref{seq} corresponding to a\nchange $H \\mapsto H + \\ud B$).\n\nThe construction in \\eqref{Anton} can be translated easily into\nsuper-language. As before we may use $\\CM=T[1]M$ and consider\n$\\imath_v$ as a vertical vector field and $\\alpha \\wedge$ as\nmultiplication with a function of degree one. If we want to have\n$\\alpha$'s enter as vector fields as well, thus getting a model for what\nhappens in the previous section, we can simply extend $T[1]M$ by one\n(graded) copy of $\\Real$. 
Denoting this coordinate by $b$, and\nchoosing local $x^i$ on $M$ with their induced odd coordinates $\\xi^i =\n\\ud x^i$ on $T[1]M$, we then associate to every $\\psi = v \\oplus \\alpha$\nthe vector field\n\\begin{equation} \\psi = v^i \\frac{\\partial}{\\partial \\xi^i} + \\alpha_i \\xi^i \\frac{\\partial}{\\partial b} \\, ; \\end{equation}\ndeclaring $b$ to have degree two, this becomes a homogeneous vector\nfield on the graded manifold $\\CM = T[1]M \\times \\Real [2]$ of degree\nminus one (and, moreover, it has the form of the most general vector\nfield of this degree). The bracket \\eqref{Cour} now follows as a\nderived bracket of (graded) vector fields on $\\CM$, $[ v \\oplus \\alpha , w\n\\oplus \\beta]_W = [\\psi,\\psi']_Q$, where $\\psi'$ denotes the vector field associated to $w \\oplus \\beta$ and the derived bracket is defined by \n\\begin{equation} [ \\psi, \\psi']_Q := [[ \\psi , Q ], \\psi'] , \\label{Cour2} \\end{equation}\nwith $Q$ being the\ncanonical vector field on $T[1]M$ corresponding to the de Rham\ndifferential on $M$ extended trivially to the product and the bracket\ndenoting the graded Lie bracket of vector fields.\n\nObviously, this\ncan be ``twisted'' by a closed 3-form $H$ by replacing the above $Q$\nby \\begin{equation} Q = \\xi^i \\frac{\\partial}{\\partial x^i} + \\frac{1}{3!} H_{ijk}\n\\xi^i \\xi^j \\xi^k \\frac{\\partial}{\\partial b} \\label{Qtwist1} \\end{equation}\nwhich again yields a Q-structure on $\\CM$. This reproduces the general\nsituation of an exact Courant algebroid mentioned above. 
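Indeed, the nilpotency of \\eqref{Qtwist1} can be checked directly: the de Rham part squares to zero, $\\partial\/\\partial b$ annihilates all coordinates appearing in $Q$, and so the only surviving term in $Q^2 = \\frac12 [Q,Q]$ is\n\\beq Q^2 = \\frac{1}{3!} \\, H_{jkl,i} \\, \\xi^i \\xi^j \\xi^k \\xi^l \\, \\frac{\\partial}{\\partial b} \\, ,\n\\eeq\nwhich is the function corresponding to $\\ud H$ multiplying $\\partial\/\\partial b$; thus $Q^2=0$ iff $\\ud H = 0$. 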
Note also\nthat in this language the inner product on $W=TM \\oplus T^*M$ results\nfrom the (graded) bracket of the corresponding vector fields on\n$\\CM$: \n\\begin{equation} [\\psi,\\psi'] = \\<\\psi,\\psi'\\> \\frac{\\partial}{\\partial b} \\, .\n\\label{inner2}\\end{equation} \n\n\nTo have vector fields $v$ and 1-forms $\\alpha$ enter more symmetrically,\none may alternatively lift the graded construction on $\\CM = T[1]M$\ncanonically to the (super)symplectic manifold $T^*[2]\\CM$: The local\ncoordinates $x^i$ and $\\xi^i$ of degree 0 and 1 on $\\CM$ are then\naccompanied by canonically conjugate momenta $p_i$ and $\\xi_i$ of\ndegree 2 and 1, respectively. $\\psi= v + \\alpha = v^i \\xi_i + \\alpha_i \\xi^i$\nthen describes the most general function of (total) degree\none\\footnote{A vector field on $\\CM$ is always a fiber linear function\non its cotangent bundle and a function can be pulled back canonically\nby the projection. The shift in grading by two has been chosen so as\nto have them both of the same degree.}, and the canonical Hamiltonian $\\tilde Q$ for\nlifting the vector field $Q=\\xi^i \\partial\/\\partial x^i$ to the\ncotangent bundle, $\\tilde Q = \\xi^i p_i$, then reproduces\n\\eqref{Anton} and the inner product by means of the (graded)\nPoisson brackets\n\\begin{align}\n\\{ \\, \\{ \\psi , \\tilde Q \\} , \\psi' \\} &= [ \\psi , \\psi' ]_W \n\\label{dBrack1}\\\\\n\\{ \\psi , \\psi' \\} &= \\< \\psi , \\psi' \\> \\, .\n\\label{dInner}\\end{align}\nIn fact, such a construction can be extended to the general setting of\na Courant algebroid as defined above. 
One then finds that Courant\nalgebroids are in {\\em one-to-one} correspondence with {\\em symplectic}\nQ2-manifolds \\cite{Royt02} (where, by definition, the symplectic form\nis required to be of degree two and to also be compatible with the\nQ-structure, i.e.\\ to be preserved by the vector field $Q$).\n\nOne observes the similarity of the formulas \\eqref{Cour2},\n\\eqref{inner2} with \\eqref{dBrack1}, \\eqref{dInner}. In both\ncases the original bracket is a (graded) Lie bracket, in the second\ncase of degree minus two, in the first case of degree\nzero.\\footnote{\\label{f:24} In fact, the second bracket is even a\n(graded) Poisson bracket, i.e.~there exists a second compatible graded\ncommutative multiplication $\\cdot$, usually taken to be of degree zero. (Here\ncompatibility means a graded Leibniz condition: $\\{F,G \\cdot H \\} = \\{F,G\n\\} \\cdot H + (-1)^{(|F|+d)|G|} G \\cdot \\{F, H \\}$, where $d$ is the degree of the bracket, cf.~also the subsequent footnote). If one\nprefers, also the bracket of vector fields can be embedded into a\nPoisson algebra, by viewing vector fields as fiber linear functions on\nthe cotangent bundle and thus replacing $\\CM$ by the graded manifold\n$T^*\\CM$, possibly shifted also by some degree (by degree one\nif the canonical Poisson bracket is to reproduce the\nSchouten--Nijenhuis bracket of (graded)\nvector fields on $\\CM$). We will come back to this perspective below.}\nIn the previous section, moreover, we viewed the left-adjoint action\nof $Q$ as a (compatible) differential $\\ud_Q$. 
Thus, to cover all\nrelevant cases simultaneously, one is led to regard the derived\nbracket construction of a differential graded Lie algebra (DGLA),\ni.e.\\ a graded Lie algebra with a differential (nilpotent odd\noperator) compatible with the bracket:\\footnote{We call\n$(A^\\bullet,[.,.])$ a (graded) \\emph{Leibniz--Loday algebra} of degree $d$, if\n$[A^k,A^l]\\subset A^{k+l+d}$ and $[a,[b,c]]=[[a,b],c]\n+(-1)^{(|a|+d)(|b|+d)}[b,[a,c]]$. It is a \\emph{differential graded\nLeibniz--Loday algebra} (DGLoA) if in addition there is an operation $D \\colon\nA^\\bullet \\to A^{\\bullet+1}$ squaring to zero and satisfying\n$D([a,b])=[Da,b] + (-1)^{(|a|+d)} [a,Db]$. If the bracket $[.,.]$ is\nalso graded antisymmetric, i.e.~$[b,a]=-(-1)^{(|a|+d)(|b|+d)}[a,b]$, we\ncan replace ``Leibniz--Loday'' by ``Lie'' in these definitions.}\n\n\\begin{prop}[Kosmann-Schwarzbach \\cite{YKS96}]\\label{p:YKSder} Let\n$(A^\\bullet,[.,.],\\uD)$ be a DGLA (with bracket\nof degree $d$) and define its derived bracket as:\n\\begin{equation}\\label{Dder} [\\phi,\\psi]_D = (-1)^{|\\phi|+d+1}[\\uD\\phi,\\psi] \\;,\n\\end{equation}\nwhere $|\\phi|$ denotes the degree of $\\phi$ (assumed to be\nhomogeneous). \n\n\\noindent Then the following facts are true: \n\\begin{enumerate}\n\\item This new bracket has degree $d+1$ and is a graded\nLeibniz--Loday bracket,\n\\begin{equation}\\label{Loday} [\\phi,[\\psi_1,\\psi_2]_D]_D = [[\\phi,\\psi_1]_D,\\psi_2]_D\n +(-1)^{(|\\phi|+d+1)(|\\psi_1|+d+1)}[\\psi_1,[\\phi,\\psi_2]_D]_D \\;.\n\\end{equation}\n\\item If $|\\phi|$, $|\\psi_1|$, and $|\\psi_2|$ all have the opposite parity of $d$, one has \n\\begin{eqnarray}\\label{dSkew} [\\phi,\\phi]_D &=&\\frac12 \\uD[\\phi,\\phi] \\;, \\\\\n {}[\\phi,[\\psi_1,\\psi_2]]_D &=& [[\\phi,\\psi_1]_D,\\psi_2]+ \n[\\psi_1,[\\phi,\\psi_2]_D]\\label{Dcomp} \\; . \\label{adInvar}\n\\end{eqnarray}\n\\item The quotient of $(A^\\bullet,[.,.]_D)$ by $\\ker D$ is a graded Lie\nalgebra. 
\n\\end{enumerate} \n\\end{prop}\nThese facts are easy to verify and are recommended to the reader as an\nexercise. (To see that $A^\\bullet\/\\ker D$ is well-defined one observes\nthat $[\\phi,\\psi]_D$ equals $[\\phi,D\\psi]$ modulo a $D$-exact term,\nwhich can also be used to establish the graded antisymmetry of the\ninduced bracket). \n\nIn fact, the Leibniz--Loday property~\\eqref{Loday} also follows in the more\ngeneral context of $(A^\\bullet,[.,.],\\uD)$ being a DGLoA only, cf.~the\npreceding footnote and Lemma \\ref{p:der2} below. The choice of the\nsign in \\eqref{Dder} is essential and can be motivated also from\nexamples where $\\uD$ is an inner derivation, $\\uD=[Q,.]$ with\n$[Q,Q]=0$, and the brackets arise as in \\eqref{Cour2}. This\ndefinition also agrees with (the first part of) Eq.~\\eqref{induced}\nwith the identification $\\uD=\\ud_Q$, since there the bracket has\ndegree zero and the elements are odd.\n\n\\bigskip\n\nWe now turn to the alternative description of Q-manifolds obtained when\nstudying the vector fields on them and their derived bracket. To\nexplain this procedure, we start with a Q1-manifold, $\\CM = E[1]$:\nSections $\\psi$ in $E$ can be identified with degree minus one vector\nfields on $\\CM$, $\\psi = \\psi^a\n\\partial\/\\partial \\xi^a$ in our notation. Replacing $\\ud$ in Cartan's\nformula \\eqref{derbasic} with the differential of the Lie algebroid (acting\n on ${{}^E{}\\Omega}^\\bullet(M)$), one obtains every Lie algebroid bracket naturally as a\nderived bracket. 
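For $E=TM$ this is just the classical Cartan calculus: from $[\\imath_v,\\ud]=\\mathrm{L}_v$ one finds\n\\beq [[\\imath_v,\\ud],\\imath_w] = [\\mathrm{L}_v,\\imath_w] = \\imath_{[v,w]} \\, ,\n\\eeq\nreproducing the Lie bracket of vector fields as a derived bracket. 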
In super-language this means explicitly that the\nLie algebroid bracket on $E$, which for clarity we now denote by $[\n\\cdot , \\cdot ]_E$ is a derived bracket w.r.t.~$Q$,\n\\begin{equation} [\\phi,\\psi]_E = [[\\phi,Q],\\psi] \\, , \\label{Eder}\n\\end{equation}\nwhere, for notational simplicity, we did not introduce a symbol for\nthe map identifying sections in $E$ (as they appear on the l.h.s.~of\nthis equation) with the respective degree minus one vector field (as\nunderstood on the r.h.s.)---in \\eqref{derbasic} the symbol\n$\\imath_\\cdot$ could be interpreted as such an isomorphism. We will\nkeep such a notational convention also further on. From the previous\ngeneral considerations about DGLAs and their derived brackets, we can\nnow at once conclude about two of the three defining properties for\n$[\\cdot , \\cdot ]_E$ to be a Lie algebroid bracket: Since there are\n\\emph{no} degree minus two vector fields on $\\CM = E[1]$, the right\nhand side of \\eqref{dSkew} vanishes identically. With the fact that\n$\\phi$, $\\psi$ have odd degree and the natural bracket of super vector\nfields degree $d=0$, Eqs.~\\eqref{dSkew} and \\eqref{Loday} yield\n$[\\cdot , \\cdot ]_E$ to be (ungraded) antisymmetric and to satisfy an\n(ungraded) Jacobi identity, respectively. To retrieve the Leibniz \nproperty w.r.t.~an or the anchor map of $E$,\nwe also need to make use of the module structure of vector fields\nw.r.t.~functions on $\\CM$, where we may restrict to degree zero\nfunctions $C^\\infty_0(\\CM) \\cong C^\\infty(M)$ so as to preserve the\ndegree of our vector fields. This will be provided by\nEq.~\\eqref{eq:Leibniz} below, then establishing that we obtain a Lie\nalgebroid structure on $E$ by this procedure. \n\nSince we will need such type of anchor maps also in the general\ncontext of higher degree $Q$-manifolds and their compatibility with\nthe derived bracket construction, we will extend the\nconsiderations on DGL(o)As to this setting. 
A rather minimalistic way of\ndoing so is \n\\begin{vdef}\nWe call $(A^\\bullet,[.,.],\\uD, R, \\rho_0)$ a \\emph{weakly anchored} DGL(o)A\nover the ring $R$ (commutative, with unity) if $(A^\\bullet,[.,.],\\uD)$\nis a DGL(o)A of degree $d$ and $\\cdot \\colon R \\times A^k\n\\to A^k$ and $\\rho_0 \\colon A^{-d} \\to \\End_{\\Real} (R)$\nare maps such that \n\\begin{equation} [\\phi, f \\cdot \\psi] = \\left(\\rho_0(\\phi)\\right)(f) \\cdot \\psi + f\n\\cdot [\\phi, \\psi] \\label{ring}\n\\end{equation}\nholds true for every $\\phi \\in A^{-d}$, $\\psi \\in A$, and\n$f\\in R$.\n\nWe call it merely \\emph{anchored} iff in addition $\\rho_0$ is $R$-linear and\nso is its composition with $\\uD$, i.e.~for all $\\phi \\in A^{-d-1}$\none has\n\\beq \\label{rhoBund} \\rho_0(\\uD(f\\phi))=f\\rho_0(\\uD\\phi) \\,. \\eeq\nWe call it \\emph{strongly anchored} if also ($\\forall f,g \\in R, \\psi \\in\nA$) \n\\beq \\uD(fg\\psi) +fg\\uD \\psi= f\\uD(g\\psi) +g\\uD(f\\psi) \\,. \\label{1stder} \\eeq\n\\end{vdef} \n\nIn fact, \\eqref{1stder} is \\emph{equivalent} to the\nexistence of an operator $\\CD \\colon R\\to \\Hom_R(A^\\bullet,A^{\\bullet+1})$ such that\n\\begin{align}\n \\uD(f\\phi) &= (\\CD f)(\\phi) +f\\uD \\phi \\label{DLeib} \\,, \\\\\n \\CD(fg) &= f\\CD g +g\\CD f \\label{D0Leib}\n\\end{align} holds true. \\eqref{D0Leib} implies that $\\CD$ is a first \norder differential operator and \\eqref{DLeib} that $\\uD$ likewise is one\nw.r.t.~$R$. In fact, Eq.~\\eqref{DLeib} also shows constructively how to\ndefine $\\CD$ once \\eqref{1stder} is fulfilled. We have chosen the\nformulation as in \\eqref{1stder} since it is intrinsic and does not\nneed the introduction of new operations such as the above map $\\CD$. \n\nA graded Lie algebroid over an ungraded manifold $M$ gives an example\nof a strongly anchored DGLA with $\\uD=0$; here $A^\\bullet$ are the\nsections in the graded vector bundle and $R=C^\\infty(M)$; as an\n$R$-module, $A^\\bullet$ is projective in this example. 
Obviously the\nabove definition can be easily extended to graded commutative rings\n$R$, in which case general graded Lie algebroids can be covered as\nwell; this is however not needed for the applications we have in\nmind here (cf.~Proposition \\ref{p:tangentLie} below). An example with\nnontrivial $\\uD$ can be constructed from Lie bialgebroids satisfying\nEq.~\\eqref{rhoBund}; here $A^\\bullet$ consists of the sections of the\nexterior powers of the (ungraded) vector bundle. But more important\nfor our context is the following observation:\\footnote{Cf.~also\nfootnote\n\\ref{f:8} concerning our simplified nomenclature and footnote \\ref{f:24} for\na possible relation of the two parts of the proposition. A differential\ngraded Poisson algebra $(A^\\bullet, \\{ . , . \\}, \\cdot , \\uD)$ of degree $d$ is\na DGLA $(A^\\bullet, \\{ . , . \\}, \\uD)$ and a graded Poisson algebra\n$(A^\\bullet, \\{ . , . \\}, \\cdot)$, both of degree $d$, such that $\\uD(F \\cdot\nG) = \\uD F \\cdot G + (-1)^{|F|} F \\cdot \\uD G$ for all $F\n\\in A^\\bullet$, $G \\in A$.}\n\\begin{prop}\\label{p:tangentLie}\n$\\,$\n\\begin{enumerate}\n\\item $(A^\\bullet,[.,.])=\\X(\\CM)$ with $\\uD=\\ud_Q$\nof a Q-manifold is a strongly anchored DGLA with bracket of degree zero \nw.r.t.\\ $R=C^\\infty_0(\\CM)\\cong{C^\\infty}(M)$, where $\\rho_0(\\phi)[f]=\\phi(f)$.\n\\item \nA differential graded Poisson algebra (DGPA) on $C^\\infty(\\CM)$ is a weakly\nanchored DGLA w.r.t.\\ $R=C^\\infty_0(\\CM)$ and $\\rho_0(\\phi)f=\\{\\phi,f\\}$. It\nis strongly anchored iff $\\{f,g\\}=0=\\{\\phi \\cdot \\uD f,g\\}$ for all $f,g\\in\nR$ and $\\phi \\in C^\\infty_{-d-1}(\\CM)$.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof} For the case of vector fields use $\\uD \\phi = [Q,\\phi]$ and find\n\\eqref{DLeib} with \n$(\\CD f)(\\phi)$ being the multiplication of $\\phi$ by $Q(f)$. $Q$ being a vector\nfield, this proves also \\eqref{D0Leib} and thus \\eqref{1stder}. 
From its\ndefinition it is immediate that $\\rho_0$ is $R$-linear in $\\phi\n\\in \\X_0(\\CM)$. Moreover, \\eqref{rhoBund} follows by using \\eqref{DLeib} and observing that $\\rho_0\n\\left(Q(f) \\phi\\right)=0$, which holds since here $\\phi$ has degree minus\none and thus vanishes when applied to degree zero functions. \n \n\nIn the Poisson case the two additional conditions are found by directly\nchecking the desired properties using the definition of\n$\\rho_0$. E.g.~$\\{f,g\\}=0$ follows immediately from $R$-linearity of\n$\\rho_0$. \n\\end{proof}\n\n\\pagebreak[2]These additional structures are compatible with the\nderived bracket construction in the following sense:\\nopagebreak[3]\n\\begin{lemma}\\label{p:der2}\n Let $(A^\\bullet,[.,.],\\uD,R,\\rho_0)$ be a (weakly\/strongly) anchored DGLoA, \n then $(A^\\bullet,[.,.]_\\uD,\\uD,R,\\rho)$ is also a (weakly\/strongly)\n anchored DGLoA with \n\\begin{equation} \\rho(\\phi) = \\rho_0(\\uD\\phi) \\;. \\label{eq:rho1}\n\\end{equation}\n\\end{lemma}\nThis can be proven by simple, straightforward calculations. The lemma contains\nthree different statements, one for each notion of anchoring. \n\nWe now apply these considerations, first by returning to Q1-manifolds. As a\nconsequence of Proposition \\ref{p:tangentLie} and the preceding lemma,\nwe notice that the derived bracket \\eqref{Eder} satisfies a Leibniz\nrule \\begin{equation} [\\phi, f \\cdot \\psi]_E =\\rho(\\phi)(f) \\cdot \\psi + f\n\\cdot [\\phi, \\psi]_E \\label{eq:Leibniz} \n\\end{equation} with an $R$- or $C^\\infty(M)$-linear anchor map \n$\\rho$ given by $\\rho(\\phi)f = [Q,\\phi] f$. $C^\\infty(M)$-linearity\nimplies that $\\rho$ indeed comes from a vector bundle map $E\\to TM$. \n\n\n\nThus, indeed, the bracket on $\\Gamma(E)$ defined via the derived\nbracket construction\n\\eqref{Eder} induces a Lie algebroid structure on $E$. 
Moreover, this is the \n\\emph{same} Lie algebroid structure as found before in Section~\\ref{s:examQ}, \nwhich one easily verifies on a basis: Using the above formulas as definitions \nfor the bracket and the anchor, one finds that $[\\partial_a,\n\\partial_b]_E = C^c_{ab} \\partial_c$ and $\\rho(\\partial_a)=\\rho^i_a \n\\partial_i$ (with $\\partial_a \\equiv \\frac{\\partial}{\\partial \\xi^a}$ and \n$\\partial_i \\equiv \\frac{\\partial}{\\partial x^i}$ for the parameterization used \nin \\eqref{Q}).\n\nWe now want to imitate this second procedure, illustrated above with the\nexample of Q1-manifolds and in spirit close to the considerations of\nthe previous section on gauge symmetries, so as to find an alternative\ncharacterization of degree two Q-manifolds. In fact, we want to\nperform some of the considerations common to all choices of degree $p$\nfirst and only later specialize the discussion to $p=2$. Courant algebroids\nshould be particular examples of the latter case, moreover, and we\nwill find a description of Q2-manifolds or Lie 2-algebroids\ncomplementary to the one given in Section~\\ref{s:examQ} and much closer to\nthe usual formulation of Courant algebroids.\n\n\n\\subsection{Vinogradov and $V$-twisted Courant algebroids}\n\\label{subsec:Vino}\nTo obtain a prototype of a bracket for $p$ higher than two, we\nmay look at a generalization of the Courant-Dorfman bracket, the\nso-called Vinogradov\nbracket. It is in fact the bracket \\eqref{Cour} with $\\alpha,\\beta\n\\in \\Omega^k(M)$ for some arbitrary $k \\in \\N$, and it can be \nobtained precisely as for the case $k=1$. Accordingly, we again have\nat least two options for translating this scenario into super-language. In\nthe $\\CM=T[1]M \\times \\Real[p]$ picture\\footnote{We found this description in 2005. 
In the meantime it has appeared also in \\cite{Uri13}.}, sections $v\\oplus\\alpha$ correspond\nto \\begin{equation} \\label{psi}\n\\psi = \\imath_v + \\alpha \\frac{\\partial}{\\partial b} \\, , \\end{equation} \nwhere $b$ obviously needs to have degree $p=k+1$ for $\\psi$ to be\nhomogeneous of degree minus one. The Vinogradov bracket then follows\nagain as a derived bracket \\eqref{Cour2} with $Q$ being the de Rham\ndifferential. In generalization of Eq.~\\eqref{Qtwist1} this may\ncertainly again be twisted, \\begin{equation} Q = \\ud + H \\frac{\\partial}{\\partial\nb} \\, , \\label{Qtwist2} \\end{equation} where now $H$ obviously needs to be a closed\n$(k+2)$-form. As in the Courant algebroid case, also here the\noriginal (graded) Lie bracket plays an important role,\ncf.~Eq.~\\eqref{inner2} (or \\eqref{dInner}), generating a\n$C^\\infty(M)$-bilinear pairing; it comes from the contraction of the\nvector fields with the $k$-forms, as one easily verifies using\n\\eqref{psi}. Thus, this time the pairing maps two elements in\n$W=TM \\oplus \\Lambda^k T^*M$ to an element of another bundle, namely\nof $V=\\Lambda^{k-1} T^*M$. This appearance of a second relevant bundle $V$\nwill be one feature we want to keep also for the more\ngeneral setting in what follows.\n\nBefore axiomatizing these data, in analogy to the step from\n\\eqref{Cour} to Definition \\ref{def:Cour}, into what one may want\nto call a Vinogradov algebroid, we briefly translate the above picture\nalso into a graded Poisson language. We therefore regard \n$\\CM=T^*[p]T[1]M$ with its canonical Poisson bracket $\\{ \\cdot , \\cdot\n\\}$ of degree $-p$. Local coordinates $x^i$ and $\\xi^i$ of degree 0\nand 1, respectively, are then accompanied by momenta $p_i$ and $\\xi_i$\nof degree $p$ and $p-1$, respectively. 
Sections of $W$ correspond to\nfunctions on this $\\CM$ of degree $k=p-1$, $\\psi= v^i \\xi_i +\n\\frac{1}{k!}\\alpha_{i_1\n\\ldots i_k} \\xi^{i_1} \\ldots \\xi^{i_k}$, and the bracket and pairing \non $W$ result from formulas \\eqref{dBrack1} and \\eqref{dInner},\nrespectively, where $\\tilde{Q}=\\xi^i p_i +H$. \n\n\nWe now return to the task of axiomatization. In particular, one may\nwant to check properties of the bracket and pairing, finding\nappropriate generalizations of those written in definition\n\\ref{def:Cour}. There are natural candidates for the analogues of the\nobjects appearing in \\eqref{cLeibniz} and \\eqref{cnSkew}, which will\nalso turn out to be realized for what one may call the Standard\nVinogradov algebroid ($W=TM \\oplus \\Lambda^k T^*M$, $V=\\Lambda^{k-1}\nT^*M$), possibly twisted by some $H\\in \\Omega_{cl}^{k+2}(M)$. The\nanchor map $\\rho$ will be the projection to $TM$; for $\\uD$ one can choose\nthe de Rham differential (followed by the natural embedding of\n$k$-forms into sections of $W$). For the ad-invariance of the pairing\n(cf.~Eq.~\\eqref{cInvar}), however, we can no longer use solely the\nanchor map, since the pairing lands in sections of $V$. The operator\non the l.h.s.~of\n\\eqref{cInvar} will then be found to be replaced by the Lie derivative\nw.r.t.~the anchor of the section $\\psi$. Note that for $k>1$\n(or, likewise, $p>2$) this is no longer $C^\\infty(M)$-linear in $\\psi$. So,\nbesides $\\rho$ and $\\uD$, the axioms will contain one more new object,\nwhich we will call $\\rho_D$, not necessarily coming from a bundle map,\nbut being restricted appropriately otherwise. Also, the relation\nbetween $\\rho$ and $\\uD$ being adjoint to one another no longer makes\nany sense.\\footnote{Only the generalization that $\\rho_D$\nwould be the adjoint of $\\uD$ w.r.t.~the pairing can be meaningful. 
In\nfact, although it is not satisfied in the Standard Vinogradov\nalgebroid for $k>1$, posing this additional condition will be studied\nin detail in the context of $p=2$ below, cf., e.g., definition\n\\ref{def:VCourant} and Proposition \\ref{p:FW}.} $\\uD$ then needs to be\nrestricted by something like Equation~\\eqref{1stder} so as to\nensure that it is a first order differential operator. Likewise for\n$\\rho_D$ we require it to take values in the module $\\Gamma(\\CDO)(V)\\equiv \\Gamma(\\CDO(V))$ of covariant differential operators on $V$: Keeping in\nmind that the $\\Real$-linear first order differential operators on a\nvector bundle $V\\to M$ are themselves a projective module over $M$, we\ncall the generating bundle $\\CDO(V)$, the bundle of covariant\ndifferential operators on $V$; elements in $\\Gamma(\\CDO)(V)$ are then sections of this bundle. There is a canonical map from $\\CDO(V)$\nto $TM$ which we may use to require $\\rho_D$ to cover the\nmap from $\\Gamma(W)$ to $\\Gamma(TM)$ induced by $\\rho$ (and\nconventionally denoted by the same letter), i.e.~in formulas we require \n\\begin{equation} \n \\rho_D(\\phi)[fv] = \\rho(\\phi)[f]v +f\\rho_D(\\phi)[v] \\, ,\n\\label{vLeibn3} \n\\end{equation} where $\\phi \\in \\Gamma(W)$, $f \\in C^\\infty(M)$, and $v \\in \\Gamma(V)$.\nWe are thus led to the following\n\n\\newpage\n\\begin{vdef} \\label{def:Vino} A \\emph{Vinogradov algebroid} consists of two vector bundles $W$ and $V$\ntogether with a bracket $[.,.]_W:\\Gamma(W)\\times\\Gamma(W)\\to \\Gamma(W)$,\na map $\\rho\\:W\\to TM$, a non-degenerate\\footnote{A (skew-)symmetric bilinear form $\\<.,.\\>$ is non-degenerate if for every $\\phi\\in E_x$, $\\phi\\ne0$, there is a $\\psi\\in E_x$ such that $\\<\\phi,\\psi\\>\\ne0$.} surjective inner product\n$\\<.,.\\>\\: W\\otimes_M W\\to V$, an $\\Real$-linear map\n$\\uD:\\Gamma(V)\\to \\Gamma(W)$,\nand a map $\\rho_D: \\Gamma(W)\\to \\Gamma(\\CDO)(V)$ covering the map\n$\\rho \\colon \\Gamma(W)\\to \\Gamma(TM)$ subject 
to the\nfollowing axioms.\n\\begin{align}\n [\\psi,[\\phi_1,\\phi_2]_W]_W &= [[\\psi,\\phi_1]_W,\\phi_2]_W +\n[\\phi_1,[\\psi,\\phi_2]_W]_W\n \\label{vJacobi} \\\\\n [\\psi,\\psi]_W &= \\frac12\\uD\\<\\psi,\\psi\\> \\label{vSkew}\\\\\n \\rho_D(\\psi)\\<\\phi,\\phi\\> &=\n 2\\<[\\psi,\\phi]_W,\\phi\\> \\label{vInvar}\\\\\n \\uD(fgv) +fg\\uD(v) &= f\\uD(gv) +g\\uD(fv) \\label{vLeibn2}\n\\end{align} where $\\phi_i,\\psi \\in \\Gamma(W)$, $f,g \\in {C^\\infty}(M)$,\nand $v \\in \\Gamma(V)$.\n\\end{vdef}\nIt is easy to see that \\eqref{vInvar} and \\eqref{vLeibn3} imply \n\\begin{equation}\n[\\psi,f\\phi]_W = \\left(\\rho(\\psi)f\\right)\\phi +f[\\psi,\\phi]_W \\label{vLeibniz} \\;\n\\end{equation} and, conversely,\none may also conclude \\eqref{vLeibn3} from \\eqref{vInvar}\nand \\eqref{vLeibniz}. One may consider adding $[\\uD v,\\phi]_W =\n0\\label{brwExact}$ to the axioms, as this rule is suggested by\nseveral perspectives (e.g.~it is satisfied for the Vinogradov\nbracket). For the case of\n $\\rho_D(\\phi)[v]=\\<\\phi,\\uD v\\>$, on the other hand,\nthis follows automatically, cf.~Proposition \\ref{p:Royt} below.\n\nRequiring surjectivity of the pairing comes almost without loss of\ngenerality, since $V$ enters the axioms only via the image of $\\<\n\\cdot , \\cdot \\>$; only if this image changes rank can one not\nconsistently restrict to it as a vector bundle. On the other hand, some\nperspectives suggest dropping the condition of\nnon-degeneracy;\nthis, however, would require a partially\ndifferent formulation of the axioms, and we\nwill not pursue this idea in the present article. 
Let us mention that\nin the application to Q-manifolds the inner product is automatically\nsurjective and non-degenerate, cf.~Proposition \\ref{p:genStdVinogr}\nbelow.\n\n\nNote that Equation~\\eqref{vInvar} is nothing but the invariance of the\ninner product under sections of $W$, being equivalent to\n\\begin{align*}\n \\rho_D(\\psi)\\<\\phi_1,\\phi_2\\> &=\n \\<[\\psi,\\phi_1]_W,\\phi_2\\> + \\<\\phi_1,[\\psi,\\phi_2]_W\\> \\,\n\\end{align*}\nby a standard polarization argument. It is an instructive exercise to\ncompute the analogue of the Leibniz rule (Eq.~\\eqref{vLeibniz}) for\nthe left-hand argument of the bracket, which shows that the bracket\nis also a first order linear partial differential operator in that argument.\nOne also has the following \n\\begin{prop}\\label{p:Vinogrelem} The maps $\\rho$ and $\\rho_D$ are morphisms of\nbrackets.\\footnote{In the case of Courant algebroids it was apparently\nK. Uchino \\cite{Uchi02} who first observed that the morphism property\nof $\\rho$ can be deduced from the other axioms.} Furthermore, their\ncomposition with $\\uD$ vanishes ($\\rho\\circ\\uD =0=\\rho_D\\circ\\uD$).\n\\end{prop}\n\n\\begin{proof}\n \nThe morphism property of $\\rho$ can be concluded easily from\n\\eqref{vJacobi} and \\eqref{vLeibniz}, cf., e.g., \\cite{HS08}. \nFor the morphism property of $\\rho_D$, write a section of\n$V$ locally as a scalar product of two sections of $W$. Then use \\eqref{vInvar}\nand \\eqref{vJacobi} to proceed in a similar manner.\nThe last two properties follow from \\eqref{vSkew} and the first or second property, respectively.\n\\end{proof}\n\n\n\n\nLet us now check that the axioms are indeed all fulfilled for the\n(twisted standard) Vinogradov bracket described above. 
Since we\nalready introduced some super language for the bracket, let us start\nfrom there:\n\\begin{prop}\\label{p:derVino} Given a strongly anchored DGLA structure\n $(A^\\bullet,[.,.],\\uD,\\rho_0)$ with bracket of degree $d$ over\n $R={C^\\infty}(M)$ of a smooth manifold $M$, where $A^{-d-1}$ and\n $A^{-d-2}$ are projective modules over $M$, there exist vector\n bundles $W\\to M$ and $V\\to M$ such that $A^{-d-1}$ is (canonically\n isomorphic to) the space of sections of $W$, denoted\n $\\phi,\\psi,\\ldots$ in what follows, and $A^{-d-2}$ is the space of\n sections of $V$, a typical element of which is denoted by $v$ below.\n Thus $\\uD \\colon \\Gamma(V)\\to \\Gamma(W)$. Now the following operations\n can be defined by their right-hand sides:\n\\begin{align}\n [.,.]_W &: \\Gamma(W)\\otimes\\Gamma(W)\\to \\Gamma(W),& [\\phi,\\psi]_W&:=[\\uD\\phi,\\psi] \\label{dBrack2}\\\\\n \\rho &: W\\to TM,&\n \\rho(\\phi)[f]&:= \\rho_0(\\uD\\phi)[f] \\label{dAnchor}\\\\\n \\<.,.\\> &: \\Gamma(W)\\otimes\\Gamma(W)\\to \\Gamma(V),&\n \\<\\phi,\\psi\\>&:=[\\phi,\\psi] \\label{dInner2}\\\\\n \\rho_D &: \\Gamma(W)\\to \\Gamma(\\CDO)(V), \\;& \\rho_D(\\phi)[v]&:= [\\uD\\phi,v] \\,. \\label{rhoQ}\n\\end{align}\n $\\<.,.\\>$ is ${C^\\infty}(M)$-linear iff $[\\psi,f\\phi]=f[\\psi,\\phi]$ for\n all functions $f$ on $M$. Requiring this and that the induced inner product\n $\\<.,.\\>\\:W\\otimes_M W\\to V$ is non-degenerate and surjective, these\n operations form a Vinogradov algebroid.\n\\end{prop}\n\\begin{proof} Follows from Proposition \\ref{p:YKSder}\n and Lemma \\ref{p:der2} with the interpretation that \\eqref{vLeibn3} is just\n the Leibniz rule for the derived bracket restricted to $A^{-d-2}$ in the\n right-hand argument.\n\\end{proof}\n\nDue to Proposition \\ref{p:tangentLie} the vector fields on our super manifold\n$\\CM=T[1]M\\times\\Real[k+1]$ together with the above $Q$-structure fulfill the\nfirst condition. 
${C^\\infty}(M)$-linearity of the inner product is\nautomatically fulfilled for Q-manifolds (since $d=0$ and thus $\\Gamma(W)$\nis identified with degree minus one vector fields in this case). \nNon-degeneracy follows by direct inspection, noting that \n$W \\cong TM\\oplus \\Lambda^kT^*M$ for\n$1\\le k\\le\\dim M$. This shows that the (twisted) Vinogradov\nbracket and the canonical pairing indeed provide an example of a\nVinogradov algebroid; we will call this the ($H$-twisted) standard \nVinogradov algebroid.\n\nAn immediate generalization of this is to replace the tangent bundle\nin the above construction by a Lie algebroid $A$. Starting with a Lie\nalgebroid $(A,[.,.]_A,\\rho_A)$ and a positive integer $k\\le\\rk A$ we\nconsider the bundle $W:=A\\oplus\\Lambda^kA^*$ with the canonical\nsymmetric (non-degenerate) pairing $\\<X\\oplus\\alpha,Y\\oplus\\beta\\>\n:=\\imath_{X}\\beta+\\imath_{Y}\\alpha$. The choice for $V$ is therefore\n$\\Lambda^{k-1}A^*$. The choice of the super manifold $\\CM$ is analogous\nto the tangent case, $\\CM=T^*[k+1]A[1]$. $A[1]$ also has a canonical\n$Q$-structure, the differential $\\ud_A$ of the algebroid $A$. Its\nHamiltonian lift to $\\CM$ is the $Q$-structure we use. Since the\nHamiltonian is itself of degree $k+2$, we can add an $A$-$(k+2)$-form\n$H$ satisfying $\\ud_AH=0$, so as to keep the total charge\nnilpotent. This leads us to\n\\begin{prop}\\label{p:genStdVinogr} Let $(A,[.,.]_A,\\rho_A)$ be a Lie algebroid over\n$M$. Let $W=A\\oplus\\Lambda^kA^*$ and $V= \\Lambda^{k-1}A^*$. 
Then \n\\begin{align}\n \\<X\\oplus\\alpha,Y\\oplus\\beta\\> &:= \\imath_X\\beta+\\imath_Y\\alpha \\\\\n [X\\oplus\\alpha,Y\\oplus\\beta]_W &:= [X,Y]_A\\oplus \\L_X\\beta\n -\\imath_Y\\ud_A\\alpha +\\imath_X\\imath_Y H \\label{stdVbrack}\\\\\n \\rho(X\\oplus\\alpha) &:= \\rho_A(X) \\\\\n \\uD(v) &:= 0\\oplus\\ud_Av \\label{stdVder}\\\\\n \\rho_D(X\\oplus\\alpha) &:= \\L_X \\, , \\label{stdVrhoQ}\n\\end{align} where $X,Y \\in \\Gamma(A)$, $\\alpha,\\beta \\in\n\\Gamma(\\Lambda^kA^*)$, and $v \\in \\Gamma(V)$, equips $(W,V)$ with the\nstructure of a Vinogradov algebroid.\n\\end{prop}\n\\begin{proof} Follows from Proposition \\ref{p:derVino} with the\nrealization as a Q-structure. \n\\end{proof}\nWe will call this example an ($H$-twisted) generalized standard\nVinogradov algebroid, reducing to an ($H$-twisted) standard Vinogradov\nalgebroid for the choice of $A$ being a standard Lie algebroid, $A\n=TM$. Note that for every generalized standard Vinogradov algebroid we\nhave two quite distinct super-geometric descriptions, one as described\nabove where $\\CM = T^*[k+1]A[1]$ and another one on the supermanifold\n$\\CM' = A[1] \\times\n\\Real[k+1]$ with $Q'=\\ud_A + H \\partial/\\partial b$, $b$ being the\ncanonical coordinate on $\\Real[k+1]$.\\footnote{These two pictures\nare not just related by the Hamiltonian reformulation of $(\\CM',Q')$\naccording to footnote \\ref{f:24}: $T^*[\\bullet]\\CM'$ does not agree\nwith $\\CM$.}\n\nAn important class of Vinogradov algebroids is provided by those where\nthe map $\\rho_D$ is $C^\\infty(M)$-linear (note that for a generalized\nstandard Vinogradov algebroid this is the case only for $\\rk\nV=1$). A special case of this is provided by the following definition\n(cf.~Prop.~\\ref{p:FW} below):\n\\begin{vdef}[{$V$}-twisted Courant algebroid]\\label{def:VCourant}\nLet $V,W$ be vector bundles over $M$, $W$ equipped with an anchor map $\\rho\n\\colon W\\to TM$, a bracket $[ . , . 
]_W$ on its sections, and a non-degenerate\nsurjective symmetric product $\\<.,.\\>$ taking values in $V$. Let\n ${{}^W\\nabla}$ be a $W$-connection on $V$. We call this a vector bundle twisted (or\nV-twisted) Courant algebroid if these data are subject to the\nfollowing axioms:\n\\begin{align}\n [\\psi,[\\phi_1,\\phi_2]_W]_W &= [[\\psi,\\phi_1]_W,\\phi_2]_W +[\\phi_1,[\\psi,\\phi_2]_W]_W\n \\label{fwJacobi} \\\\\n \\<[\\psi,\\phi]_W,\\phi\\> &=\\frac12\\Wconn_\\psi \\<\\phi,\\phi\\> = \\<\\psi,[\\phi,\\phi]_W\\>\n \\label{fwInvar} \\;.\n\\end{align}\n\\end{vdef}\n\nWe remark that the usual Leibniz rule \\eqref{vLeibniz} follows from the polarization of \\eqref{fwInvar}, $\\<[\\psi,\\phi_1],\\phi_2\\>+\\<\\phi_1,[\\psi,\\phi_2]\\> = \\Wconn_\\psi\\<\\phi_1,\\phi_2\\>$, together with the Leibniz rule for ${{}^W\\nabla}$.\n\n\n\\begin{prop} \\label{p:FW} A V-twisted Courant algebroid\n is a Vinogradov algebroid with $\\rho_D(\\phi):=\\Wconn_\\phi$ and $\\uD$\nits adjoint, i.e.~$\\uD$ being defined via\n\\begin{align} \n \\label{FW} \\<\\phi,\\uD v\\> =\\Wconn_\\phi v \\;.\n\\end{align}\nConversely, a Vinogradov algebroid is a V-twisted Courant algebroid if\n\\begin{align} \n \\label{FW'} \\<\\phi,\\uD v\\> =\\rho_D(\\phi) v \\;.\n\\end{align}\n\\end{prop}\n\\begin{proof} Given \\eqref{FW}, \\eqref{fwInvar} is equivalent to \n \\eqref{vSkew}--\\eqref{vInvar}.\n It remains to check that $\\uD$ fulfills the Leibniz rule \\eqref{vLeibn2},\n but this follows since \\eqref{vLeibn3} shows that the adjoint\n ${{}^W\\nabla}$ of $\\uD$ is a first order linear PDO. For the other direction, note that \\eqref{FW'} replaces \\eqref{FW}, thus \\eqref{vSkew}--\\eqref{vInvar} are fulfilled. 
Further, ${{}^W\\nabla}=\\rho_D$ being the adjoint of $\\uD$ implies that it is ${C^\\infty}$-linear in $\\phi$.\n\\end{proof}\n\nNote that the l.h.s.~of \\eqref{FW} also permits the interpretation that\n$\\uD\\:\\Gamma(V)\\to{}^W{}\\Omega^1(M,V)$ and this 1-form is then applied to a vector\n$\\phi\\in W$, i.e.\\ $\\uD$ is an exterior $W$-covariant derivative. From\nProposition \\ref{p:Vinogrelem} we learn that ${{}^W\\nabla}$ is a\nmorphism of brackets,\ni.e.~$$\\Wconn_{[\\phi,\\psi]_W}=[\\Wconn_\\phi,\\Wconn_\\psi] \\, .$$ In other words,\nthe $W$-connection of a V-twisted Courant algebroid is always\nflat and thus also $\\uD^2=0$. Furthermore, one has \n\\begin{prop}\\label{p:Royt} In a twisted generalized standard Vinogradov \n algebroid, as well as in a V-twisted Courant algebroid,\n the brackets of $\\uD$-exact sections satisfy \\\\(for all $v,v' \\in \\Gamma(V)$ and\n$\\phi \\in \\Gamma(W)$):\n\\begin{align*} \\<\\uD v,\\uD v'\\>&=0 \\\\ [\\uD v,\\phi]_W &=0\\\\\n [\\phi,\\uD v]_W &= \\uD\\<\\phi,\\uD v\\> \n\\end{align*}\n\\end{prop}\n\\begin{proof} The first equality follows from $ \\<\\uD v,\\uD\nv'\\>=\\Wconn_{\\uD v}v' $ and Proposition \\ref{p:Vinogrelem}. For the\nremaining two equations one can use the explicit formula\n\\eqref{stdVbrack} for the derived bracket \nin the case of twisted generalized standard Vinogradov algebroids and\ngeneralize the proof in \\cite[p.~20, Lemma 2.6.2]{Royt99} to\n$V$-twisted Courant algebroids in a straightforward way, respectively.\n\\end{proof}\n\nAs the terminology suggests, ordinary Courant algebroids\n$(W,[.,.],\\rho,\\<.,.\\>)$ provide examples of $V$-twisted ones: one just\ntakes $V$ to be the trivial $\\Real$ bundle over the base $M$, so that its\nsections can be identified with functions on $M$, and identifies ${{}^W\\nabla}$\nwith $\\rho$ in this case. For the reverse direction we need an\nadditional condition: Consider a V-twisted Courant algebroid with $V$\nof rank one. 
Suppose we can find a section $v$ of $V$ which vanishes\nnowhere (consequently $V$ has to be trivial) and which is annihilated\nby $\\uD$, $\\uD v=0$. Then we can define a non-degenerate symmetric\nbilinear form $\\<.,.\\>_C$ on $W$ by means of\n$\\<\\phi,\\psi\\>=\\<\\phi,\\psi\\>_Cv$. Moreover, $\\<\\uD (fv),\\phi\\>=\n\\left(\\rho(\\phi) f\\right) v$. It is now easy to see that $([.,.]_W,\n\\rho, \\<.,.\\>_C)$ defines a Courant algebroid.\n\nLet us remark, however, that a line-bundle twisted Courant algebroid, i.e.~a $V$-twisted Courant algebroid with $\\rk V = 1$, is a strictly more general notion than the one of an ordinary Courant algebroid. We intend to come back to this elsewhere.\n\n\nOrdinary Courant algebroids over a point reduce to quadratic Lie\nalgebras. This is no longer the case for the above $V$-twisted\ngeneralizations, where even the bracket need not be antisymmetric. In\nthis case $W$ is what one calls a Leibniz--Loday algebra and $V$ becomes a\n$W$-module by means of $\\Wconn_\\cdot \\colon W \\to \\End(V)$. Whereas the\nr.h.s.~of \\eqref{cnSkew} vanishes identically for $M$ being a point,\nthe analogous second equality in \\eqref{fwInvar} does \\emph{not} lead\nto $[\\phi,\\phi]_W=0$ for a general representation $\\Wconn_\\cdot$. An explicit\nexample with a bracket having a symmetric part is given by the\nfollowing construction, which was motivated by the example in \\cite[chap.\\ 1]{San07}:\n\n\\begin{prop} Let $W:=\\End(\\Real^2)$ and $V:=\\Real^3$, with\n\\begin{align*}\n P\\: W&\\to W: (x_{ij})\\mapsto\\pmatrix{cc}{x_{11}&0\\\\0&0} \\\\\n [X,Y]_W &:= P(X)Y -YP(X) \\\\\n \\<X,Y\\> &:= (x_{11}y_{12}+x_{12}y_{11}, -x_{11}y_{21}-x_{21}y_{12},x_{22}y_{22}) \\\\\n \\Wconn_Y(p,q,r) &:= (2y_{11}p,2y_{11}q,0)\n\\end{align*} Then these five structures form a V-twisted Courant algebroid over a point.\n\\end{prop}\n\\begin{proof} Straightforward calculations show that\n $$P(P(X)Y)=P(X)P(Y)=P(XP(Y)) \\;. $$ Due to \\cite{San07}, $(W,[.,.]_W)$ forms a Leibniz--Loday algebra. 
By inspection the given inner product is non-degenerate. It is then a straightforward calculation to check that the given connection fulfills \\eqref{fwInvar}. Thus the given structure is a V-twisted Courant algebroid.\n\\end{proof}\n\nIn fact, even the following statement is true: \n\\begin{prop} \\label{canonical}\n\\emph{Any} Leibniz--Loday algebra $(W,[ \\cdot , \\cdot ]_W)$ becomes a $V$-twisted Courant algebroid over a point in a \\emph{canonical} way. \n\\end{prop}\n\\begin{proof} For this purpose one takes $V$ to be the two-sided ideal generated by quadratic elements, $V=\\langle [w,w]_W| w \\in W \\rangle$, and the inner product to be nothing but the symmetrization of the original bracket, $(w_1, w_2) := \\frac{1}{2} [w_1 , w_2 ]_W + \\frac{1}{2}[w_2 , w_1 ]_W$, mapping elements from $S^2W$ into $V\\subset W$. With the evident choice $\\Wconn_{w_1} (w_2) := [w_1,w_2]_W$, the conditions \\eqref{fwInvar} follow directly from \\eqref{fwJacobi}. \n\\end{proof}\n\nIn the present context the most important examples of $V$-twisted Courant algebroids are, however, provided by Q2-manifolds. This is the subject of the\nfollowing subsection.\n\n\n\\subsection{Lie-2-algebroids and V-twisted Courant algebroids}\n\\label{subsec:Lie2}\nEvery Q2-manifold gives rise to a V-twisted Courant algebroid as follows:\n\\begin{prop}\\label{p:derVcour} For the strongly anchored DGLA $\\X_\\bullet (\\CM)$ of\na Q2-manifold (cf.~Prop.~\\ref{p:tangentLie}) the conditions in \nProp.~\\ref{p:derVino} are satisfied and the resulting Vinogradov\nalgebroid is even a V-twisted Courant algebroid.\n\\end{prop}\n\n\\begin{proof} First we recall that $d=0$ and thus vector fields of\ndegree -2 are sections of $V\\to M$ while those of degree -1 are\nsections of the vector bundle $W\\to M$\n(cf.~Prop.~\\ref{p:tangentLie}). 
We have \n $[\\psi,f\\phi]=f[\\psi,\\phi]$ for all $\\psi, \\phi \\in \\Gamma(W) \\cong \\X_{-1}(\\CM)$ and\n $f \\in {C^\\infty}(M)\\cong C^\\infty_0(\\CM)$ since there are no negative\ndegree functions on $\\CM$. In every (local) splitting of\n\\eqref{sequence} we find $W=E\\oplus E^*\\otimes V$ and the bracket \nbetween tangent vector fields of degree -1 is just the canonical pairing\non this direct sum, \\begin{equation} \\<X\\oplus a, Y\\oplus b\\> = a(Y)+b(X)\n\\label{pair} \\, , \\end{equation}\nwhere $X,Y \\in \\Gamma(E)$ and $a,b \\in \\Gamma(E^* \\otimes V)\\cong {}^E\\Omega^1(M,V)$.\nThis is\nobviously non-degenerate and surjective. Thus by Prop.~\\ref{p:derVino}\nwe conclude that we have a Vinogradov algebroid. According to\nProp.~\\ref{p:FW} it now suffices to verify Eq.~\\eqref{FW'}: Indeed,\nthis follows from the compatibility of the Lie bracket with $\\uD$ and\nthe fact that $[\\psi,v]\\equiv 0$ $\\forall \\psi \\in \\Gamma(W) \\cong \\X_{-1}(\\CM)$ and $v \\in \n\\Gamma(V) \\cong \\X_{-2}(\\CM)$, since there are no vector fields of\ndegree -3.\\end{proof} In Section~\\ref{s:examQ} we found an\ninterpretation of the components of the vector field $Q$,\nEq.~\\eqref{Q}, on a (split) Q2-manifold in terms of a Lie\n2-algebroid. Using the same $Q$ to calculate explicitly the derived\nbracket given in the previous proposition, one arrives at the\nfollowing\n\\begin{prop}\\label{p:FWder} Let $(E,V, [.,.]_E,\\rho_E,t,{{}^E{}\\nabla},H)$ be a Lie\n2-algebroid as defined in Def.~\\ref{Lie2def}. 
Then $W:=E\\oplus E^*\\otimes V$ with \n\\begin{align} \\rho_W(X\\oplus a):=&\\; \\rho_E(X) \\, , \\label{Rho}\\\\\n \\Wconn_{X\\oplus a}v := &\\Econn_Xv -\\imath_{t(v)}a \\label{rhoQCont} \\\\\n \\<X\\oplus a, Y\\oplus b\\> :=&\\; \\imath_Y a + \\imath_X b \\, , \\label{Paar}\\\\\n [X\\oplus a, Y\\oplus b]_W :=&\\; \\left([X,Y]_E \\oplus\n [\\imath_X,{{}^E\\!{}\\mathrm{D}}]b -\\imath_Y{{}^E\\!{}\\mathrm{D}} a + \\imath_X \\imath_Y H \\right) \\nonumber\\\\\n &-\\left(t(a(Y))\\oplus T(a,b)\\right)\\, , \\label{brCont}\n\\end{align} where $X,Y \\in \\Gamma(E)$, $a,b \\in \n{{}^E{}\\Omega}^1(M,V)$, and $v \\in \\Gamma(V)$, is a $V$-twisted Courant\nalgebroid. Here ${{}^E\\!{}\\mathrm{D}}$ has been defined in Eq.~\\eqref{genCar} and the\noperator $T\\colon (E^*\\otimes V)^{\\otimes2} \\to E^*\\otimes V$ is induced by $t$ according to the composition of maps \n\\begin{equation} \nT(a,b)\\equiv a\\circ t\\circ b - b\\circ t\\circ a \\, . \\label{Tdefn}\n\\end{equation}\n \n\\end{prop}\n\\begin{proof} Instead of checking all axioms separately, one shows that the given algebroid is the V-twisted Courant algebroid from Proposition \\ref{p:derVcour}.\n\nThe Lie 2-algebroid induces a Q-structure via Theorem \\ref{theo1}. To prove the above formulas one can e.g.\\ use local coordinates on the Q-manifold.\n\\end{proof}\n\nIt turns out that under certain circumstances Proposition \\ref{p:FWder} can be reversed. For that purpose we now consider a generalization of exact Courant algebroids.\nLet $W$ be an ordinary Courant algebroid. Then the anchor map $\\rho\\colon W\\to TM$ together with the inner product induces a map $\\rho^* \\colon T^*M\\to W$, where we used the inner product to identify $W^*$ with $W$. Due to Proposition \\ref{p:Vinogrelem} the composition $\\rho\\circ\\rho^*$ vanishes. 
Thus for \\emph{any} Courant algebroid $W$ one has the complex\n \\begin{equation}\n 0\\to T^*M\\xrightarrow{\\rho^*} W\\xrightarrow{\\rho} TM\\to 0 \\, ; \\label{Wexact}\n\\end{equation} \n$W$ is said to be exact iff the above short sequence is exact.\nExamples of exact Courant algebroids occur on $W=TM\\oplus T^*M$ via the choice of a closed 3-form $H$ with the bracket \\eqref{Cour}. Conversely, given an exact Courant algebroid, we can choose an isotropic splitting $j\\:TM\\to W$\\footnote{We will be more explicit on this in the more general context below.} and define a 3-form $H$ as\n $$ H(X,Y,Z):= \\<[j(X),j(Y)],j(Z)\\> \\;. $$ Equation \\eqref{cnSkew} shows that $H$ is skew-symmetric in the arguments $X$ and $Y$, while \\eqref{cInvar} implies that it is invariant under cyclic permutations of $X$, $Y$, and $Z$. Thus $H$ is a 3-form. The remaining identity on the bracket implies that this 3-form has to be closed under the de Rham differential. Finally, a change of splitting, which is necessarily induced by a skew-symmetric 2-form $B$, changes $H$ by an exact term, $H \\mapsto H+\\ud B$. Thus, exact Courant algebroids are in 1:1 correspondence with classes in the third real cohomology of $M$, an observation going back to \\v{S}evera \\cite{SevLett}.\n\nWe wish to generalize this result to certain V-twisted Courant algebroids $W$. Since the inner product on $W$ has values in $V$, the map $\\rho \\equiv \\rho_W \\colon W \\to TM$ induces a map $\\rho^* \\colon T^*M \\otimes V\\to W$ by means of $\\< \\rho^*(\\alpha \\otimes v), \\psi \\> = \\alpha(\\rho(\\psi))\\, v$ for all $\\psi \\in W$. Thus, \\emph{any} $V$-twisted Courant algebroid $W$ gives rise to the complex\n\\begin{equation}\n 0\\to T^*M\\otimes V \\xrightarrow{\\rho^*} W\\xrightarrow{\\rho} TM\\to 0 \\, . 
\\label{VWexact}\n\\end{equation}\nLet us call a $V$-twisted Courant algebroid exact if this complex is acyclic, in generalization of ordinary exact Courant algebroids (which fit into this definition since for $V=M\\times \\Real$ \\eqref{VWexact} reduces to \\eqref{Wexact}). More generally, consider a map $\\pi\\colon W\\to E$. Together with the V-valued inner product it induces a map $\\pi^*\\colon E^*\\otimes V\\to W$. Here $E^*\\otimes V$ is the $V$-dual of $E$ and $W$ the $V$-dual of itself. We are thus led to \n\\begin{vdef}\nSuppose that in a V-twisted \nCourant algebroid $(V,W)$ the anchor map $\\rho_W \\colon W \\to TM$ factors through a bundle map $\\pi\\colon W\\to E$ for some vector bundle $E$. If the induced complex \n\\[ 0 \\rightarrow E^*\\otimes V \\xrightarrow{\\pi^*} W \\xrightarrow{\\pi} E \\to 0\n \\label{Vexact}\\]\nis acyclic, we call $(V,W)$ an \\emph{exact $V$-twisted (or $V$-exact) Courant algebroid over $E$} and, for $E=TM$, $\\pi=\\rho_W$, simply a \\emph{$V$-exact Courant algebroid}.\n\\end{vdef}\nThere then is the following remarkable fact:\n\\begin{thm}\\label{thm:der} Let $(V,W)$ be a $V$-exact Courant algebroid over some $E$ with $\\rk V>1$ and let $j\\: E\\to W$ be an isotropic \nsplitting of \\eqref{Vexact}.\n\nThen $E$ and $V$ canonically carry the structure of a semi-strict Lie 2-algebroid, $(E,V,[\\cdot, \\cdot]_E,\\rho,t,{{}^E{}\\nabla},H)$, and the structural data of the $V$-twisted Courant algebroid $W$ are necessarily of the form of Proposition \\ref{p:FWder} above. \n\nFurthermore, a change of splitting $j(X)\\mapsto j(X)-B^\\#(X)$ results in a change of the underlying data $[.,.]_E$, $H$, and ${{}^E{}\\nabla}$ according to \\eqref{split1}--\\eqref{split3}.\n\\end{thm}\n\nWe first remark that an isotropic splitting of \\eqref{Vexact} always exists. This is seen as follows: First, as for any exact sequence of vector bundles, we can choose \\emph{some} splitting $j_0\\:E\\to W$ of the short exact sequence \\eqref{Vexact}. 
This permits us to identify $E\\oplus E^*\\otimes V\\xrightarrow\\sim W$ via $(X,a)\\mapsto j_0(X)+\\pi^*(a)$ for $X,Y\\in E_x$, $a,b\\in (E^*\\otimes V)_x$ and $x\\in M$. Computing the symmetric 2-form $L(X,Y):=\\<j_0(X),j_0(Y)\\>$, $L\\:E\\otimes E\\to V$, we find that it may or may not vanish. If it does, $j_0$ provides an isotropic splitting. If it does not, it induces a map $L^\\#\\:E\\to E^*\\otimes V,\\, X\\mapsto L(X,\\bullet)$. If we change $j_0$ by $-\\frac12\\pi^*\\circ L^\\#$, we obtain an isotropic splitting $W\\cong E\\oplus E^*\\otimes V$ and the inner product on $W$ is identified with the standard product on $E\\oplus E^*\\otimes V$, $\\<X\\oplus a,Y\\oplus b\\>=a(Y)+b(X)$.\n\nWe henceforth no longer write $j$ and $\\pi^*$ explicitly, but use the identification $W\\cong E\\oplus E^*\\otimes V$ induced by the isotropic splitting to write elements of $W$ in the form $X\\oplus a$ (instead of $j(X)+\\pi^*(a)$) with $X \\in E$ and $a \\in \\Hom(E,V)$.\n\nWe also remark that the condition on the rank of $V$ is necessary. The 1:1 relation between $V$-exact Courant algebroids over $E$ and semi-strict Lie 2-algebroids $(E,V)$ breaks down explicitly in the case of $\\rk V=1$. 
While, due to Proposition \\ref{p:FWder}, one obtains a $V$-exact Courant algebroid over $E$ from any semi-strict Lie 2-algebroid also when $V$ is a line bundle, not every such generalized Courant algebroid is of this form, even if the line bundle is trivial, $V=M \\times \\Real$.\nThis will become clear within the proof below, which will also permit us to provide an explicit example of this fact right after the proof.\n\\begin{proof}\nOur task now is to retrieve the data\n$[.,.]_E$, $\\rho_E$, $t$, ${{}^E{}\\nabla}$, and $H$ from the data of the $V$-twisted exact Courant algebroid over $E$ and to show that \nthey satisfy all the identities required to form a Lie 2-algebroid.\n\nWe start with Equation \\eqref{rhoQCont} and \\emph{define} ${{}^E{}\\nabla}$ and $t$ implicitly by means of\n\\[\n \\Wconn_{X\\oplus a} v =: \\Econn_Xv -a(t(v)) \\, , \\label{naTdefn}\n\\]\nwhich has to hold true for all $v \\in \\Gamma(V)$ and $X \\oplus a \\in \\Gamma(E) \\oplus \\Gamma(\\Hom(E,V)) \\cong \\Gamma(W)$. We also know by assumption that $\\rho_W = \\rho_E \\circ \\pi$ for some $\\rho_E \\colon E \\to TM$. Noting that after the choice of a splitting $\\pi$ just becomes the projection to the first factor, $\\pi(X \\oplus a)=X$, the Leibniz property of ${{}^W\\nabla}$ and Definition \\eqref{naTdefn} (for simplicity one may set $a=0$) ensure that ${{}^E{}\\nabla}$ is an $E$-covariant derivative on $V$ with the anchor map $\\rho_E$, as anticipated by the notation chosen. 
Returning with this information to \\eqref{naTdefn}, we see that the map $t$ is $C^\\infty(M)$-linear, and thus comes from a bundle map $t\\colon V\\to E$.\n\nNext, anticipating what we want to achieve in \\eqref{brCont}, we \\emph{define} $[.,.]_E$ and some ${H}$ from $[X,Y]_W$ by means of\n$$ [X\\oplus0,Y\\oplus0]_W =: [X,Y]_E\\oplus -{H}(X,Y) \\, .$$\nFirst we can conclude from the known properties of the left-hand side that $[X,Y]_E$ satisfies a Leibniz property with respect to the second argument for the anchor map $\\rho_E$. Likewise we conclude that $H(\\cdot, \\cdot)$ is $C^\\infty$-linear in the second argument. Moreover, the isotropy of the chosen splitting, $\\<X\\oplus0,Y\\oplus0\\>=0$, together with the second equation of \\eqref{fwInvar}, yields antisymmetry of the $E$-bracket and of $H$ in $X$ and $Y$. By definition, $H(X,Y) \\in \\Gamma(E^* \\otimes V)$. The first equation of \\eqref{fwInvar} then implies that $H(X,Y,Z) := \\imath_Z H(X,Y)$ \nis also antisymmetric in $Y$ and $Z$. Thus it is antisymmetric in all three arguments, and the $C^\\infty$-linearity extends to all of them. Altogether this implies that $H\\in{{}^E{}\\Omega}^3(M,V)$.\n\nWe now have all of the ingredients $[.,.]_E$, $\\rho_E$, $t$, ${{}^E{}\\nabla}$, and $H$ necessary to define the bracket \\eqref{brCont}. Let us denote the r.h.s.~of this equation by $[X\\oplus a,Y\\oplus b]_W^0$ and parameterize the difference from this form of the bracket by several bilinear maps $\\delta_i$ as follows:\n\\begin{align}\\label{ansatzbr}\n [X\\oplus a,Y\\oplus b] =: &\\;[X\\oplus a,Y\\oplus b]_W^0 +\\\\ &+\\delta_1(a,Y)+\\delta_2(b,X)+\\delta_3(a,b) \n \\oplus \\delta_4(X,b)+\\delta_5 (Y,a) +\\delta_6(a,b) \\, . 
\nonumber\n\\end{align}\nThese maps are certainly $\\Real$-bilinear, but by the properties of $[X\\oplus a,Y\\oplus b]_W^0$ they are even $C^\\infty(M)$-bilinear (thanks to the already established features of the quantities entering this bracket).\n\nWe now want to show that for $\\rk V\\ne1$, Axiom~\\eqref{fwInvar} implies that $\\delta_1 = \\ldots = \\delta_6=0$, while the Leibniz--Loday property \\eqref{fwJacobi} implies the three Equations \\eqref{HJacobi}, \\eqref{HRep}, and \\eqref{DH} needed to endow $(E,V)$ with the structure of a semi-strict Lie 2-algebroid. \n\nThe term in the center of \\eqref{fwInvar} does not depend on the bracket at all, and its terms are precisely balanced by the terms coming from $[X\\oplus a,Y\\oplus b]_W^0$ in the right equation. This implies that for equal arguments, $X=Y$ and $a=b$, all the delta-contributions have to vanish in \\eqref{ansatzbr}. This is the case iff\n\\begin{equation}\n\\delta_1=-\\delta_2 \\; , \\quad \\delta_4 = -\\delta_5 \\; , \\label{delta4} \n\\end{equation}\nand $\\delta_3$ and $\\delta_6$ are antisymmetric in their two arguments. We now turn to the l.h.s.~of \\eqref{fwInvar}. The left equation of \\eqref{fwInvar} then gives the following additional constraints:\n\\begin{eqnarray}\n\\delta_5 &=& 0 \\, ,\\label{delta5}\\\\\nb(\\delta_2(b,X)) &=& 0 \\, ,\\label{delta2}\\\\\nb(\\delta_3(a,b)) &=& 0\\, , \\label{delta3}\\\\\n\\imath_Y\\delta_6(a,b)+ b(\\delta_1(a,Y)) &=& 0 \\, ,\\label{delta6} \n\\end{eqnarray}\nwhich have to hold true for all choices of arguments. To exploit Equations \\eqref{delta2} and \\eqref{delta3}, we prove the following\n\\begin{lemma}\\label{p:Jh=0}\n Let $V_1$ and $V_2$ be two vector spaces and let $\\Delta \\in \\left(\\Hom(V_1, V_2)\\right)^* \\otimes V_1 \\cong (V_1)^{\\otimes 2} \\otimes V_2^*$ satisfy \\begin{equation} \\label{aa}\n a(\\Delta(a))=0\n\\end{equation} for all $a \\in \\Hom(V_1, V_2)$. 
Then, for $\\dim V_2 >1$, necessarily $\\Delta =0$, while for $V_2 \\cong \\Real$ any $\\Delta \\in \\Lambda^2 V_1$ has this property.\n\\end{lemma}\n\\begin{proof}[{ (of Lemma \\ref{p:Jh=0})}] \nLet us first consider the case of a factorizing $a$: $a = \\alpha \\otimes v$ with $\\alpha \\in V_1^*$ and $v \\in V_2$. Then \\eqref{aa} holds true iff $\\Delta \\in \\Lambda^2(V_1) \\otimes V_2^*$. It remains to check \\eqref{aa} for $a = a_1 + a_2$ with both summands such products. This is satisfied iff the contraction of $\\Delta$ in the $V_2$-slot with the first factor of any term of the form $v_1 \\wedge v_2 \\in \\Lambda^2 V_2$ vanishes. If $\\dim V_2 = 1$, this condition is empty and the statement follows from $\\Lambda^2(V_1) \\otimes \\Real^* \\cong \\Lambda^2(V_1)$. In the case that $V_2$ has at least two directions, on the other hand, evidently one is forced to a vanishing $\\Delta$.\n\\end{proof}\nBy $C^\\infty(M)$-linearity, the Equations \\eqref{delta2} and \\eqref{delta3} are pointwise in $M$ and for $\\rk V>1$ we can conclude the vanishing of $\\delta_2$ and $\\delta_3$ by setting $V_1 := E|_x$, $V_2 :=V|_x$ for any fixed $x \\in M$ and $\\Delta(\\cdot)$ equal to \n$\\delta_2(\\cdot , X)|_x$ and $\\delta_3(a,\\cdot)|_x$, respectively (again for any fixed $X$ and $a$, respectively). \n\nThus, if the rank condition on $V$ is satisfied, Equations \\eqref{delta4}--\\eqref{delta6} imply the vanishing of $\\delta_i$, $i=1,\\ldots,6$. It now remains to exploit Equation \\eqref{fwJacobi}. \n\nIt is a straightforward calculation to check that \\eqref{fwJacobi}---together with the already established form of the bracket $[ \\cdot , \\cdot ]_W = [ \\cdot , \\cdot]_W^0$, i.e.~\\eqref{brCont} holding true---implies the remaining identities \\eqref{HJacobi}, \\eqref{HRep}, and \\eqref{DH}.
For example, considering the $\\Gamma(E)$-part of \\eqref{fwJacobi} yields \n\\begin{gather}\n [[X,Y]_E,Z]_E -[t(\\imath_Y a),Z]_E-t\\left( \\imath_Z\\left([\\imath_X,{{}^E\\!{}\\mathrm{D}}]b -\\imath_Y{{}^E\\!{}\\mathrm{D}} a + \\imath_X \\imath_Y H -T(a,b)\\right)\\right)= \\nonumber \\\\\n = [X,[Y,Z]_E]_E -[X,t(\\imath_Zb)]_E -t\\left( a\\left([Y,Z]_E-t(\\imath_Z b)\\right)\\right) - \\left(X \\leftrightarrow Y \\,\\,\\text{and}\\,\\, a \\leftrightarrow b\\right)\\, ,\n\\end{gather}\nwhich, after the dust clears and using the definitions \\eqref{genCar} and \\eqref{Tdefn} for ${{}^E\\!{}\\mathrm{D}}$ and $T$, respectively, yields the first of the three equations \nand the right-hand equation of \\eqref{repr2}. Establishing the three quadratic equations on the structural data for $E$ and $V$ shows that they form a semi-strict Lie 2-algebroid.\n\nAlternatively one may also proceed as follows in the last step: Given the above data, we can define a vector field of the form \\eqref{Q} on $E[1]\\oplus V[2]$. Vector fields of degree $-1$ can then be identified with sections of $W$. Equation~\\eqref{vJacobi} then translates into $0=\\frac12[[[[Q,Q],\\psi],\\phi_1],\\phi_2]$. \n It remains to check that\nthis equation contains the three Equations \\eqref{B3}, \\eqref{B5}, and \\eqref{B7}. In fact, it even contains all seven components \\eqref{B1}--\\eqref{B7} of\n$Q^2=0$, as one may convince oneself.\n\nA change of splitting of the degree 2 Q-manifold used above corresponds precisely to a change of splitting of the sequence \\eqref{Vexact} if one regards the vector fields of degree $-1$ on that graded manifold. This then proves the theorem.\n\\end{proof}
Then, according to the above considerations, the data appearing in Proposition \\ref{p:FWder} can be extended by a modification of the bracket by means of two tensors: $\\delta_1 = - \\delta_2 \\sim \\delta_6 \\in \\Gamma(\\Lambda^2E \\otimes E^*)$ and $\\delta_3 \\in \\Gamma(\\Lambda^3 E)$. While the general analysis of this special case gets somewhat involved---one still needs to analyze the consequences of \\eqref{fwJacobi}---we just make two remarks here: First, \\emph{every} Lie 2-algebra gives rise to a $V$-exact Courant algebroid over $E$ according to Proposition \\ref{p:FWder}, also in the case of $\\rk V =1$. Second, one simple example of a $V$-exact Courant algebroid over $E$ with $\\rk V = 1$ \\emph{not} covered by this parameterization may be provided by the following construction: Choose the Lie 2-algebroid to have all structural data vanish and take, in the above parameterization, $\\delta_1=0$ but $\\delta_3\\equiv h \\in \\Gamma(\\Lambda^3 E)$ arbitrary. \nThen for arbitrary sections in $E \\oplus E^*\\otimes V \\cong E \\oplus E^*$ one has \n\\begin{equation}\n[X \\oplus \\alpha , Y \\oplus \\beta ]_W := h(\\alpha, \\beta) \\oplus 0 \\, . \n\\end{equation} \nIt is evident that double brackets always vanish with this definition, and thus Equation \\eqref{fwJacobi} is satisfied trivially. But each of the three terms in \\eqref{fwInvar} also vanishes, by the antisymmetry of $h$ in all three arguments and the vanishing of ${{}^W\\nabla}$, so that these two equations are satisfied as well. \n\nWe intend to come back to a more systematic investigation of line-bundle-twisted exact Courant algebroids over some $E$ elsewhere.
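The vanishing of the double brackets claimed above can be seen in one line: the bracket depends only on the $E^*$-components of its arguments, while its output has vanishing $E^*$-component. A quick check, using only the definition above, reads

```latex
% Double brackets vanish: the output of [.,.]_W has zero E^*-component,
% and [.,.]_W depends only on the E^*-components of its arguments.
\begin{align*}
[[X \oplus \alpha, Y \oplus \beta]_W, Z \oplus \gamma]_W
  &= [h(\alpha,\beta) \oplus 0,\, Z \oplus \gamma]_W
   = h(0,\gamma) \oplus 0 = 0\, , \\
[X \oplus \alpha, [Y \oplus \beta, Z \oplus \gamma]_W]_W
  &= [X \oplus \alpha,\, h(\beta,\gamma) \oplus 0]_W
   = h(\alpha,0) \oplus 0 = 0\, .
\end{align*}
```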
\n\n\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOne of the main aims of research in nuclear physics is to make reliable predictions for the ground state nuclear properties of all nuclei in the periodic table with one nuclear model. For this purpose, several approaches have been developed, and they can be classified into three categories. The first one is the macroscopic models, such as the Bethe-Weizs\u00e4cker mass formula~\\cite{samanta2002}. The second one is the macroscopic-microscopic models, such as the Finite Range Droplet Model (FRDM)~\\cite{moller1995}. The last one consists of the microscopic models, such as the conventional Hartree-Fock method~\\cite{skyrme1956,decharg1980,bassem2015,bassem2016,antonov2017} with effective density-dependent interactions and its relativistic analog, the relativistic mean field (RMF) theory~\\cite{walecka1974,reinhard1989}; the former is based on nonrelativistic kinematics, in which the density-dependent and the spin-orbit interactions are important ingredients, while the latter is based on relativistic kinematics, \nwhere the ingredients are the nucleons, the mesons and their interactions.\nTherefore, the spin appears automatically and the interplay of the scalar and the vector potentials leads naturally to a proper spin-orbit interaction and appropriate shell structure. The RMF theory has received wide attention due to its successful description of many properties of nuclei, not only in the valley of stability but also far away from it~\\cite{meng1998,zhou2003,meng1999,geng2004,ring1996,meng1996,ginocchio1997}.\n\n\nDensity functional theories (DFTs) are extremely useful in understanding nuclear many-body dynamics.
Among different nuclear DFTs, the covariant density functional theory (CDFT)~\\cite{niki2014,lalazissis2005,rocamaza2011,niki2002}, based on energy density functionals (EDFs), is one of the most attractive; it is very successful in describing the ground and excited states throughout the\nchart of nuclei~\\cite{abusara2012,agbemava2015,agbemava2017} as well as in nuclear structure analysis~\\cite{meng2015,matev2007,afanasjev2008}.\nIn Ref.~\\refcite{Agbemava2014}, the global performance of some covariant\nenergy density functionals on some nuclear observables was analyzed.\n\nIn this work, we are interested in calculating and analyzing some ground-state properties of even-even Pt isotopes, N = 82-160, within the framework of the covariant density functional theory by using two of the state-of-the-art functionals which provide a complete and accurate description of ground and excited states over the whole nuclear chart~\\cite{karim2015,afanasjev2010,elbassem2019}, namely: the density-dependent point-coupling DD-PC1~\\cite{niki2008} and the density-dependent meson-exchange DD-ME2~\\cite{DD-ME2}.\n\nThe paper is organized in the following way: The covariant density functional theory and details of the numerical calculations are presented in section~\\ref{Theoretical Framework}. \nSection~\\ref{Results and Discussion} is devoted to presenting our results and discussion.\nFinally, the conclusions of this study are presented in section~\\ref{Conclusion}.\n\n\\section{Covariant density functional theory}\n\\label{Theoretical Framework}\nTwo classes of covariant density functional models are used throughout this paper: the density-dependent point-coupling (DD-PC) model and the density-dependent meson-exchange (DD-ME) model.
\nThe first uses a zero-range interaction and has been fitted to nuclear matter data and, for finite nuclei, only to the binding energies of a large set of deformed nuclei, while the latter has a finite interaction range and has been fitted \nto binding energies and radii of spherical nuclei.\\\\\n\n\nIn the meson-exchange model, the nucleus is considered as a system of Dirac nucleons which interact via the exchange of mesons with finite masses, leading to finite-range interactions~\\cite{typel1999,lalazissis2009}.\nThe standard Lagrangian density with medium-dependent vertices that defines the meson-exchange model~\\cite{gambhir1990} is given by:\n\\begin{eqnarray}\n\\mathcal{L} & =\\bar{\\psi}\\left[\n\\gamma(i\\partial-g_{\\omega}\\omega-g_{\\rho}\\vec{\\rho}\\,\\vec{\\tau}-eA)-m-g_{\\sigma}\\sigma\\right] \\psi\\nonumber +\\frac{1}{2}(\\partial\\sigma)^{2}-\\frac{1}{2}m_{\\sigma}^{2}\\sigma^{2}\\\\\n&\n-\\frac{1}{4}\\Omega_{\\mu\\nu}\\Omega^{\\mu\\nu}+\\frac{1}{2}m_{\\omega}^{2}\\omega^{2}\\label{lagrangian} -\\frac{1}{4}{\\vec{R}}_{\\mu\\nu}{\\vec{R}}^{\\mu\\nu}+\\frac{1}{2}m_{\\rho}^{2}\\vec{\\rho}^{\\,2}-\\frac{1}{4}F_{\\mu\\nu}F^{\\mu\\nu}\n\\end{eqnarray}\nwhere $m$ is the bare nucleon mass and $\\psi$ denotes the Dirac spinors. $m_\\rho$, $m_\\sigma$ and $m_\\omega$ are the masses of the $\\rho$, $\\sigma$ and $\\omega$ mesons, \nwith $g_\\rho$, $g_\\sigma$ and $g_\\omega$ the corresponding meson-nucleon coupling constants, respectively, and $e$ is the charge of the proton.\\\\\n\nThe point-coupling model represents an alternative formulation of the self-consistent RMF framework~\\cite{manakos1988,rusnak1997,brvenich2002,zhao2010}.
The Lagrangian for the DD-PC model~\\cite{niki2008,nikolaus1992} is given by:\n\\begin{eqnarray}\n&\\mathcal{L} =\\bar{\\psi}\\left(i\\gamma \\cdot \\partial-m\\right) \\psi\n-\\frac{1}{2}\\alpha_S(\\hat\\rho)\\left(\\bar{\\psi}\\psi\\right)\\left(\\bar{\\psi}\\psi\\right)-\\frac{1}{2}\\alpha_V(\\hat\\rho)\\left(\\bar{\\psi}\\gamma^{\\mu}\\psi\\right)\\left(\\bar{\\psi}\\gamma_{\\mu}\\psi\\right)\\label{Lag-pc}\\\\\n&\n-\\frac{1}{2}\\alpha_{TV}(\\hat\\rho)\\left(\\bar{\\psi}\\vec\\tau\\gamma^{\\mu}\\psi\\right)\\left(\\bar{\\psi}\\vec\\tau\\gamma_{\\mu}\\psi\\right)-\\frac{1}{2}\\delta_S\\left(\\partial_{\\nu}\\bar{\\psi}\\psi\\right)\\left(\\partial^{\\nu}\\bar{\\psi}\\psi\\right) - e\\bar\\psi\\gamma \\cdot A\n\\frac{(1 - \\tau_3)}{2}\\psi .\\nonumber\n\\end{eqnarray}\nEq.~(\\ref{Lag-pc}) contains the free-nucleon Lagrangian, the point-coupling interaction terms, and the coupling of the proton to the electromagnetic field; the derivative term accounts for the leading effects of the finite-range interaction,\nwhich are important in nuclei. \n\nIn the present work, we have used the recently developed density-dependent meson-exchange relativistic energy functional DD-ME2~\\cite{DD-ME2} and the very successful density-dependent point-coupling interaction DD-PC1~\\cite{niki2008}, with the parameter sets listed in Tables~\\ref{tab1}~and~\\ref{tab2}, respectively. \n\n\n\\begin{table}[!ht]\n\t\\caption{ The parameters of the DD-ME2 functional.
\n\t\t\\label{tab1}}\n\t\\begin{center}%\n\t\t\\begin{tabular}\n\t\t\t[c]\t\t{@{\\hspace{0pt}}c@{\\hspace{24pt}}@{\\hspace{24pt}}c@{\\hspace{0pt}}}\n\t\t\t\\hline Parameter & DD-ME2 \\\\\\hline\n\t\t\t$m$ & 939 \\\\\n\t\t\t$m_{\\sigma}$ & 550.1238 \\\\\n\t\t\t$m_{\\omega}$ & 783.000 \\\\\n\t\t\t$m_{\\rho}$ & 763.000 \\\\\n\t\t\t$g_{\\sigma}$ & 10.5396\\\\\n\t\t\t$g_{\\omega}$ & 13.0189\\\\\n\t\t\t$g_{\\rho}$ & 3.6836 \\\\\n\t\t\t$a_{\\sigma}$ & 1.3881 \\\\\n\t\t\t$b_{\\sigma}$ & 1.0943 \\\\\n\t\t\t$c_{\\sigma}$ & 1.7057 \\\\\n\t\t\t$d_{\\sigma}$ & 0.4421 \\\\\n\t\t\t$a_{\\omega}$ & 1.3892 \\\\\n\t\t\t$b_{\\omega}$ & 0.9240 \\\\\n\t\t\t$c_{\\omega}$ & 1.4620 \\\\\n\t\t\t$d_{\\omega}$ & 0.4775 \\\\\n\t\t\t$a_{\\rho}$ & 0.5647 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\n\n\\begin{table}[!ht]\n\t\\caption{The parameters of the DD-PC1 functional.}%\n\t\\label{tab2}\n\t\\begin{center}\n\t\t\\begin{tabular}\n\t\t\t[c]\t\t{@{\\hspace{0pt}}c@{\\hspace{24pt}}@{\\hspace{24pt}}c@{\\hspace{0pt}}}\n\t\t\t\\hline Parameter & DD-PC1 \\\\\\hline\n\t\t\t$m$ & 939\\\\\n\t\t\t$a_{\\sigma}$ & -10.04616\\\\\n\t\t\t$b_{\\sigma}$ & -9.15042\\\\\n\t\t\t$c_{\\sigma}$ & -6.42729\\\\\n\t\t\t$d_{\\sigma}$ & 1.37235\\\\\n\t\t\t$a_{\\omega}$ & 5.91946\\\\\n\t\t\t$b_{\\omega}$ & 8.86370\\\\\n\t\t\t$d_{\\omega}$ & 0.65835\\\\\n\t\t\t$b_{\\rho}$ & 1.83595\\\\\n\t\t\t$d_{\\rho}$ & 0.64025\\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\n\n\n\n\\section{Results and Discussion}\n\\label{Results and Discussion}\n\nIn this section, we present the ground-state properties of $^{160-238}$Pt nuclei obtained in the framework of the CDFT by using the interactions DD-ME2~\\cite{DD-ME2} and DD-PC1~\\cite{niki2008}. 
\nOur results are compared with the available experimental data, with the predictions of the RMF model with the NL3~\\cite{lalazissis1999} functional, and with the results of HFB theory with the SLy4 Skyrme force calculated by using the computer code HFBTHO v2.00d~\\cite{Stoitsov2013}, as in Ref.~\\refcite{bassem2017}. \n\n\\subsection{Binding energy}\n\nBinding energy (BE) is directly related to the stability of nuclei and is a very important quantity in nuclear physics. \nThe binding energies per nucleon (BE\/A) of ground states for platinum isotopes, $^{160-238}$Pt, are presented in Fig.~\\ref{BEexp} as a function of the neutron number N. The available\nexperimental data~\\cite{wang2012} as well as the predictions of the RMF(NL3)~\\cite{lalazissis1999} and HFB(SLy4) theories are also shown for comparison.\n\n\\begin{figure}[ht]\n\t\\centering \\includegraphics[scale=0.5]{BE_Pt.eps}\n\t\\caption{(Color online) The binding energies per nucleon for even-even $^{166-238}$Pt isotopes.}\n\t\\label{BEexp}\n\\end{figure}\n\nIt can be clearly seen from Fig.~\\ref{BEexp} that the theoretical predictions reproduce the experimental data accurately and, qualitatively, all curves show a similar behavior.\n\nIn order to provide a further check of the accuracy of our results, the differences between the experimental total BE for even-even platinum isotopes and the calculated results obtained in this work are presented in Fig.~\\ref{Delta_BE} as a function of the neutron number N. \n\n\\begin{figure}[ht]\n\t\\centering \\includegraphics[scale=0.5]{deltaBE_Pt.eps}\n\t\\caption{(Color online) Differences between theoretical total binding energies and experimental values for even-even Pt isotopes.}\n\t\\label{Delta_BE}\n\\end{figure}\n\nThe comparison of the available experimental total binding energies of the ground state for $^{166-238}$Pt with the present calculations and with the other theoretical models is also quantified by the root-mean-square (rms) deviation tabulated in Table~\\ref{rms}.
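For clarity, the rms deviation used for this comparison is the square root of the mean squared per-isotope difference between theory and experiment. A minimal sketch follows; the deviation values are invented placeholders for illustration, not the differences plotted in Fig.~\ref{Delta_BE}:

```python
import math

# Invented per-isotope deviations BE_th - BE_exp in MeV (illustration only).
deviations = [1.2, -0.8, 2.1, -1.5, 0.6]

# rms = sqrt( (1/n) * sum_i d_i^2 )
rms = math.sqrt(sum(d * d for d in deviations) / len(deviations))
print(round(rms, 3))  # 1.349
```

A smaller rms deviation over the isotopic chain indicates a more accurate functional, which is how the entries of Table~\ref{rms} are compared.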
\n\n\n\\begin{table}[!htb]\n\t\\caption{The rms deviations (in MeV) of the total binding energies of platinum isotopes.\\label{rms}}\n\t\\centering\n\t\\begin{tabular}\n\t\t{@{\\hspace{0pt}}c@{\\hspace{24pt}}c@{\\hspace{24pt}}c@{\\hspace{0pt}}@{\\hspace{24pt}}c@{\\hspace{0pt}}}\n\t\t\\hline\\noalign{\\smallskip}\n\t\tDD-ME2 \t& DD-PC1\t& NL3\t\t & SLy4\\\\ \n\t\t\\noalign{\\smallskip}\\hline\\noalign{\\smallskip}\n\t\t1.864 \t&\t1.880\t&\t2.755\t & \t4.019\t\\\\ \n\t\t\\noalign{\\smallskip}\\hline\n\t\\end{tabular}\n\\end{table}\n\nAs we can see from Fig.~\\ref{Delta_BE} and Table~\\ref{rms}, DD-ME2 is more accurate than the other nuclear models DD-PC1, NL3 and SLy4.\n\n\\subsection{Two-neutron separation energy ($S_{2n}$) and shell gap ($\\delta_{2n}$)}\n\nIn the present work, we have calculated two-neutron separation energies, \n$S_{2n}(N,Z)$ = BE$(N,Z)$ - BE$(N - 2,Z)$, for Pt isotopes by using the density-dependent effective interactions DD-PC1 and DD-ME2. \n\nIn Fig.~\\ref{S2n}, we display the calculated S$_{2n}$ of even-even platinum isotopes, as a function of the neutron number N, in comparison with the available experimental data~\\cite{wang2012} and the predictions of RMF(NL3)~\\cite{lalazissis1999} and HFB(SLy4).\n\n\\begin{figure}[ht]\n\t\\centering \\includegraphics[scale=0.52]{S2n_Pt.eps}\n\t\\caption{(Color online) The two-neutron separation energies, S$_{2n}$, for Pt\n\t\tisotopes.}\n\t\\label{S2n}\n\\end{figure}\n\nAs one can see from Fig.~\\ref{S2n}, the results of the two density-dependent models DD-ME2 and DD-PC1, as well as those of NL3 and SLy4, reproduce the experimental data quite well, except for some small discrepancies, which are mainly due to the missing beyond-mean-field corrections~\\cite{Litvinova2006}. S$_{2n}$ gradually decreases with N, and a sharp drop is distinctly seen at N = 126 in both the experimental and theoretical curves (except the SLy4 curve); this drop corresponds to the closed shell at this magic neutron number.
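Both the separation energy $S_{2n}$ and the shell gap $\delta_{2n}$ are simple finite differences of total binding energies, so they are easy to evaluate once a BE table is available. The sketch below uses an invented BE table with a kink at N = 126 (placeholder numbers, not the DD-ME2/DD-PC1 results of this work):

```python
# Hypothetical total binding energies BE(N, Z=78) in MeV (illustration only);
# the slope change at N = 126 mimics a shell closure.
BE = {122: 1560.0, 124: 1575.0, 126: 1590.0, 128: 1598.0, 130: 1606.0}

def s2n(N):
    # Two-neutron separation energy: S2n(N, Z) = BE(N, Z) - BE(N-2, Z)
    return BE[N] - BE[N - 2]

def shell_gap(N):
    # Two-neutron shell gap: delta2n(N, Z) = S2n(N, Z) - S2n(N+2, Z)
    return s2n(N) - s2n(N + 2)

print(s2n(126))        # 15.0
print(shell_gap(126))  # 7.0: a pronounced peak here signals a shell closure
```

The sharp drop of $S_{2n}$ beyond a magic neutron number translates directly into a peak of $\delta_{2n}$ at that N, which is why the shell gap is the more sensitive indicator.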
\nThe discontinuity of S$_{2n}$ at this nucleus indicates that it corresponds to a critical point of a first-order phase transition to a prolate shape.\n\nA more sensitive observable for locating the shell closure is the two-neutron shell gap $\\delta_{2n}=S_{2n}(N,Z)-S_{2n}(N+2,Z)$, also known as the S$_{2n}$~differential. \n$\\delta_{2n}$ is shown in Fig.~\\ref{ShellGap} as a function of the neutron number N. The strong peaking in the two-neutron shell gap ($\\delta_{2n}$) clearly seen at N~=~126 further supports the shell closure at this neutron magic number, as shown in Fig.~\\ref{S2n} by the two-neutron separation energy ($S_{2n}$). Here again, only the SLy4 curve does not show a strong peak at the neutron magic number N~=~126. \n\n\n\\begin{figure}[!ht]\n\t\\centering \\includegraphics[scale=0.55]{ShellGap_Pt.eps}\n\t\\caption{(Color online) The two-neutron shell gap $\\delta_{2n}$ for even-even $^{94-168}$Pt isotopes.}\n\t\\label{ShellGap}\n\\end{figure}\n\n\n\n\\subsection{Quadrupole deformation} \n\nThe quadrupole deformation is also an important property for describing the\nstructure and shape of the nucleus. \nIn Fig.~\\ref{23beta}, we show for every Pt isotope (covering the mass\ninterval $160 \\leqslant A \\leqslant 204$) the energy curves along\nthe axial symmetry axis, as a function of the deformation\nparameter, $\\beta$, obtained within the CDFT framework by using the density-dependent effective interactions DD-ME2 and DD-PC1.\n\n\n\\begin{figure}[!htb]\n\t\\centering \\includegraphics[scale=0.6]{23beta_Pt.eps}\n\t\\caption{(Color online) The total energy curves for $^{160-204}$Pt as a function of the axial quadrupole deformation parameter $\\beta_2$.}\n\t\\label{23beta}\n\\end{figure}\n\nAs we can see from Fig.~\\ref{23beta}, the interaction DD-PC1 provides potential\nenergy curves which are extremely similar to the ones obtained with DD-ME2. The deformations of the oblate and prolate minima are practically independent of the force.
\n\nThe lightest isotopes, $^{160-162}$Pt, exhibit spherical shape. The next isotope, $^{164}$Pt, starts to develop two shallow degenerate minima, oblate\nand prolate, that correspond to a small value of $\\beta$. The next isotope, $^{166}$Pt, starts to develop a more pronounced prolate minimum. The $^{168-186}$Pt isotopes show a similar structure, with a well-deformed prolate minimum, $\\beta \\approx 0.3$, and an oblate local minimum. \n\nA transition from prolate to oblate shapes occurs smoothly between $^{188}$Pt(prolate) and $^{190}$Pt(oblate). In $^{190-200}$Pt two minima appear, with the opposite situation\noccurring in $^{168-186}$Pt. As the mass number increases, the two well-deformed minima gradually disappear and we get a flat potential energy curve at A = 202. At A = 204, we get a sharp single minimum, which confirms the spherical shape at the magic neutron number N = 126. \n\nThese results are in good agreement with recent works \\cite{Rodriguez2010,Garcia2014,Nomura2011} in terms of predicting $^{188}$Pt as critical. However, other calculations have different results that are not in agreement with ours, such as: Ref.~\\refcite{Yao2013} which predicts that the shape transition in Pt isotopes within a beyond-mean-field approach with the Skyrme SLy6 occurs at A = 186 to 188 instead of A = 188 to 190 in our calculations. In the same line, \nconstrained Hartree-Fock+ BCS calculations with the Skyrme forces Sk3, SGII, and SLy4 suggest a prolate to oblate shape transition at $^{182}$Pt~\\cite{Boillos2015}. 
Furthermore, triaxial D1M-Gogny calculations predict a smooth shape transition at A = 184 to 186~\\cite{Nomura2013}.\n\nThese differences between theoretical methods in predicting the exact location of the shape transition are due, firstly, to the difference between the models used and, secondly, to the fact that the shape transition is very sensitive to small details of the calculation, since it occurs exactly in the region where the energies of the competing shapes are practically degenerate.\n\nIn Fig.~\\ref{PESs} we display the triaxial contour plots of the $^{186-190}$Pt\nisotopes in the ($\\beta$, $\\gamma$) plane. To study the dependence on $\\gamma$, systematic constrained triaxial calculations have been performed to map the quadrupole deformation space defined by $\\beta_2$ and $\\gamma$, using the effective interaction DD-ME2. \nThe constrained calculations are performed by imposing constraints on both the axial and triaxial mass quadrupole moments. The potential energy surface\n(PES) study as a function of the quadrupole deformation parameter is performed by the method of quadratic constraint~\\cite{Ring1980} (see Ref.~\\refcite{Abusara2017} for more details).\nEnergies are normalized with respect to the binding energy of the global minimum, such that the ground state has zero energy.\n\n\n\\begin{figure}[!htb]\n\t\\minipage{0.5\\textwidth}\n\t\\centering \\includegraphics[scale=0.4]{Pt186.eps}\n\t\\endminipage\\hfill\n\t\\minipage{0.5\\textwidth}\n\t\\centering \\includegraphics[scale=0.4]{Pt188.eps}\n\t\\endminipage\\hfill\n\t\\centering \\includegraphics[scale=0.4]{Pt190.eps}\n\t\\caption{(Color online) Potential energy surfaces for $^{186-190}$Pt in the ($\\beta$, $\\gamma$) plane, obtained from CDFT calculations with the DD-ME2 parameter set.
The color scale shown at the right is in units of MeV and is scaled such that the ground state has zero energy.}\n\t\\label{PESs}\n\\end{figure}\n\nFrom this figure, we can notice that the location of the ground state minimum moves from a near-prolate shape at $^{186}$Pt to a near-oblate shape at $^{190}$Pt. $^{188}$Pt is slightly triaxial, with its global minimum at (0.25, 10\\si{\\degree}). Thus, the shape transition is smooth, and there are no sudden changes in the nuclear shape. These results confirm those seen previously in Fig.~\\ref{23beta} and are in full agreement with the results shown in Fig. 5 of Ref.~\\cite{Garcia2014}, obtained with the Hartree-Fock-Bogoliubov method based on the Gogny-D1S interaction. \n\n\n\\subsection{Neutron, proton and charge radii}\n\nThe charge radii calculated within the framework of the covariant density functional theory by using the functionals DD-ME2 and DD-PC1 are compared with the available experimental data~\\cite{angeli2004} and with the predictions of RMF(NL3)~\\cite{lalazissis1999} and HFB(SLy4). \nGood agreement between theory and experiment can be clearly seen in Fig.~\\ref{Rc}, except in the region $100 \\le N \\le 108$, where a small difference between the models and experiment is seen. This discrepancy is mainly due to the deformation effect, since the experimental $\\beta_2$ values are extracted from experimental $B(E2)\\uparrow$ values (the reduced probability of the transition from the ground state of the nucleus to the first excited $2^+$ state) by using the Bohr model, which is valid only in the case of well-deformed nuclei, as has been explained in Ref.~\\refcite{elbassem2019}.\\\\\n\n\\begin{figure}[ht]\n\t\\centering \\includegraphics[scale=0.55]{Rc_Pt.eps}\n\t\\caption{(Color online) The charge radii of Pt isotopes.}\n\t\\label{Rc}\n\\end{figure}\n\nWe display in Fig.
\\ref{RnRp} the calculated neutron and proton radii, $R_n$ and $R_p$, of platinum isotopes obtained by using the two functionals DD-ME2 and DD-PC1. The results of HFB theory with the SLy4 Skyrme force as well as those of the RMF model with the NL3 functional are also shown for comparison. \n\n\n\\begin{figure}[!htb]\n\t\\minipage{0.48\\textwidth}\n\t\\centering \\includegraphics[scale=0.4]{R_Pt.eps}\n\t\\endminipage\\hfill\n\t\\minipage{0.48\\textwidth}\n\t\\centering \\includegraphics[scale=0.41]{Rn-Rp_Pt.eps}\n\t\\endminipage\\hfill\n\t\\caption{(Color online) The neutron and proton radii of Pt isotopes (left panel) and the neutron skin thicknesses ($\\Delta R = R_n - R_p$) (right panel).}\n\t\\label{RnRp}\n\\end{figure}\n\nIn the left panel of Fig.~\\ref{RnRp}, we see that the neutron rms radii obtained by using the density-dependent\neffective interactions DD-ME2 and DD-PC1 and by HFB(SLy4) are almost equal\nthroughout the isotopic chain, but the NL3 results are overestimated. The origin of this difference is the missing density dependence in the isovector channel of NL3~\\cite{lalazissis2005}. The proton rms radii obtained by all the formalisms, CDFT(DD-ME2 and DD-PC1), HFB(SLy4) and RMF(NL3), are very similar.\n\nWe also notice from Fig.~\\ref{RnRp} (right panel) that the difference between the rms radii of neutrons and protons ($\\Delta R = R_n - R_p$) increases with increasing neutron number, signaling the development of a neutron skin.
\n$\\Delta R$ attains its maximum for $^{238}$Pt, where it reaches 0.41~fm in the case of the DD-ME2 and DD-PC1 formalisms, 0.38~fm in the HFB(SLy4) and 0.55~fm in the RMF(NL3) calculations.\n\nThere are also some anomalies in the charge and proton radii of Pt isotopes, as we can see in Figs.~\\ref{Rc} and~\\ref{RnRp} (left panel). These small jumps in $R_c$ and $R_p$ in the two regions $98 \\leqslant N \\leqslant 108$ and $140 \\leqslant N \\leqslant 160$ are due to the deformation effect~\\cite{elbassem2019}.\\\\\n\n\\section{Conclusion}\n\\label{Conclusion}\nIn the present work, we have studied the ground state properties of even-even\nplatinum isotopes, $^{160-238}$Pt, from the proton-rich side up to the neutron-rich one, within the framework of the covariant density functional theory, by using two of the most recent functionals: the density-dependent point-coupling DD-PC1 and the density-dependent meson-exchange DD-ME2.\nThe bulk ground state properties are quite well reproduced in our calculations and are in good agreement with the experimental data.\nA strong shell closure is clearly seen at N=126. The neutron skin in this study reaches 0.41 fm for $^{238}$Pt. \nThe total energy curves for $^{160-204}$Pt obtained in this work suggest a smooth\nprolate-to-oblate shape transition at $^{188}$Pt.\\\\\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe sampling Kantorovich operators have been introduced to approximate and reconstruct not necessarily continuous signals. In \\cite{BABUSTVI}, the authors introduced these operators starting from the well-known generalized sampling operators (see e.g. \\cite{BUFIST,BUST2,BAVI2,VI1,BABUSTVI2}) and replacing, in their definition, the sample values $f(k\/w)$ with $w \\int_{k\/w}^{(k+1)\/w}f(u)\\,du$.
Clearly, this is the most natural mathematical modification to obtain operators which are well-defined also for general measurable, locally integrable functions, not necessarily continuous. Moreover, this situation very often occurs in Signal Processing, when one cannot match exactly the sample at the point $k\/w$: this represents the so-called ``time-jitter'' error. The theory of sampling Kantorovich operators allows us to reduce the time-jitter error by calculating the information in a neighborhood of $k\/w$ rather than exactly at the node $k\/w$. These operators, as the generalized sampling operators, represent an approximate version of the classical sampling series, based on the Whittaker-Kotelnikov-Shannon sampling theorem (see e.g. \\cite{ANVI}).\n\n Subsequently, the sampling Kantorovich operators have been studied in various settings. In \\cite{COVI,COVI2} the multivariate version of these operators was introduced. Results concerning the order of approximation are shown in \\cite{COVI3}. Extensions to more general contexts are presented in \\cite{VIZA1,VIZA2,VIZA3,BAMA2}.\n\n The multivariate sampling Kantorovich operators considered in this paper are of the form:\n$$\n(S_w f)(\\underline{x})\\ :=\\ \\sum_{\\underline{k} \\in \\Z^n} \\chi(w\\underline{x}-t_{\\underline{k}})\\left[\\frac{w^n}{A_{\\underline{k}}} \\int_{R_{\\underline{k}}^w}f(\\underline{u})\\ d\\underline{u}\\right], \\hskip0.7cm (\\underline{x} \\in \\R^n), \\hskip0.8cm \\mbox{(I)}\n$$\nwhere $f: \\R^n \\to \\R$ is a locally integrable function such that the above series is convergent for every $\\underline{x} \\in \\R^n$. The symbol $t_{\\underline{k}}=\\left(t_{k_1},...,t_{k_n}\\right)$ denotes vectors where each $(t_{k_i})_{k_i \\in \\Z}$, $i=1,...,n$, is a certain strictly increasing sequence of real numbers with $\\Delta_{k_i}=t_{k_i+1}-t_{k_i}>0$.\nNote that the sequences $(t_{k_i})_{k_i \\in \\Z}$ are not necessarily equally spaced.
We denote by $R_{\\underline{k}}^w$ the sets:\n$$\nR_{\\underline{k}}^w\\ :=\\ \\left[\\frac{t_{k_1}}{w},\\frac{t_{k_1+1}}{w}\\right]\\times\\left[\\frac{t_{k_2}}{w},\\frac{t_{k_2+1}}{w}\\right]\\times...\\times\\left[\\frac{t_{k_n}}{w},\\frac{t_{k_n+1}}{w}\\right], \\hskip2.2cm \\mbox{(II)}\n$$\n$w>0$ and $A_{\\underline{k}} = \\Delta_{k_1} \\cdot \\Delta_{k_2} \\cdot...\\cdot \\Delta_{k_n}$, $\\underline{k} \\in \\Z^n$. Moreover, the function $\\chi:\\R^n \\to \\R$ is a kernel satisfying suitable assumptions. \n\n For the operators in (I) we recall some convergence results. We have that the family $(S_wf)_{w>0}$ converges pointwise to $f$ when $f$ is continuous and bounded, and $(S_w f)_{w>0}$ converges uniformly to $f$ when $f$ is uniformly continuous and bounded. Moreover, to cover the case of not necessarily continuous signals, we study our operators in the general setting of Orlicz spaces $L^{\\varphi}(\\R^n)$. For functions $f$ belonging to the space $L^{\\varphi}(\\R^n)$ generated by a convex $\\varphi$-function $\\varphi$, the family of sampling Kantorovich operators is ``modularly'' convergent to $f$, the latter being the natural concept of convergence in this setting. \n\n The latter result allows us to apply the theory of the sampling Kantorovich operators to approximate and reconstruct images. In fact, static gray-scale images are characterized by jumps of gray levels mainly concentrated in their contours or edges, and this can be translated, from a mathematical point of view, into discontinuities (see e.g. \\cite{GOWO}).\n\n Here, we introduce and analyze in detail some practical applications of the sampling Kantorovich algorithm to thermographic images, which are very useful for the analysis of buildings in seismic engineering. Thermography is a remote sensing technique, performed by image acquisition in the infrared.
Thermographic images are widely used to make non-invasive investigations of structures, to analyze the history of the building walls, to carry out diagnosis and monitoring of buildings, and to make structural measurements. A further important use is the application of the texture algorithm for the separation between the bricks and the mortar in masonry images. Through this procedure, civil engineers become able to determine the mechanical parameters of the structure under investigation. Unfortunately, the direct application of the texture algorithm to thermographic images can produce errors, such as an incorrect separation between the bricks and the mortar. \n\n Then, we use the sampling Kantorovich operators to process the thermographic images before applying the texture algorithm. In this way, the result produced by the texture algorithm becomes more refined, and therefore we can carry out the structural analysis after the calculation of the various parameters involved. In order to show the feasibility of our applications, we present in detail a real-world case-study.\n\n\n\\section{Preliminaries}\n\nIn this section we recall some preliminaries, notations and definitions.\n\n We denote by $C(\\R^n)$ (resp. $C^0(\\R^n)$) the space of all uniformly continuous and bounded (resp. continuous and bounded) functions $f:\\R^n\\to \\R$ endowed with the usual sup-norm $\\|f\\|_{\\infty} := \\sup_{\\uu \\in \\R^n}\\left|f(\\uu)\\right|$, $\\uu = (u_1,\\, ...,\\, u_n)$, and by $C_c(\\R^n) \\subset C(\\R^n)$ the subspace of the elements having compact support. Moreover, $M\\left(\\R^n\\right)$ will denote the linear space of all (Lebesgue) measurable real functions defined on $\\R^n$.\n\n We now recall some basic facts concerning Orlicz spaces, see e.g. \\cite{MUORL,MU1,RAO1,BAVI_1,BAMUVI}. 
\n\n\\noindent The function $\\varphi: \\R^+_0 \\to \\R^+_0$ is said to be a $\\varphi$-function if it satisfies the following assumptions: $(i)$ $\\varphi \\left(0\\right)=0$, and $\\varphi \\left(u\\right)>0$ for every $u>0$; $(ii)$ $\\varphi$ is continuous and non-decreasing on $\\R^+_0$; $(iii)$\n$\\lim_{u\\to \\infty}\\varphi(u)\\ =\\ + \\infty$.\n\n The functional $I^{\\varphi} : M(\\R^n)\\to [0,+\\infty]$ (see e.g. \\cite{MU1,BAMUVI}) defined by\n$$\nI^{\\varphi} \\left[f\\right] := \\int_{\\R^n} \\varphi(\\left| f(\\underline{x}) \\right|)\\ d\\underline{x},\\ \\hskip0.5cm \\left(f \\in M(\\R^n)\\right),\n$$\nis a modular in $M(\\R^n)$. The Orlicz space generated by $\\varphi$ is given by\n$$\nL^{\\varphi}(\\R^n) := \\left\\{f \\in M\\left(\\R^n\\right):\\ I^{\\varphi} [\\lambda f]<+\\infty,\\ \\mbox{for\\ some}\\ \\lambda>0\\right\\}.\n$$\nThe space $L^{\\varphi}(\\R^n)$ is a vector space and an important subspace is given by\n$$\nE^{\\varphi}(\\R^n) := \\left\\{f \\in M\\left(\\R^n\\right):\\ I^{\\varphi} [\\lambda f]<+\\infty,\\ \\mbox{for\\ every}\\ \\lambda>0\\right\\}.\n$$\n$E^{\\varphi}(\\R^n)$ is called the space of all finite elements of $L^{\\varphi}(\\R^n)$. It is easy to see that the following inclusions hold:\n$\nC_c(\\R^n) \\subset E^{\\varphi}(\\R^n) \\subset L^{\\varphi}(\\R^n).\n$\nClearly, functions belonging to $E^{\\varphi}(\\R^n)$ and $L^{\\varphi}(\\R^n)$ are not necessarily continuous. 
A norm on $L^{\\varphi}(\\R^n)$, called the Luxemburg norm, can be defined by\n$$\n\\left\\|f\\right\\|_{\\varphi}\\ :=\\ \\inf \\left\\{\\lambda>0:\\ I^{\\varphi}[f\/\\lambda] \\miu \\lambda\\right\\}, \\hskip0.7cm (f \\in L^{\\varphi}(\\R^n)).\n$$\n\n We will say that a family of functions $(f_w)_{w>0} \\subset L^{\\varphi}(\\R^n)$ is norm convergent to a function $f \\in L^{\\varphi}(\\R^n)$, i.e., $\\left\\|f_w-f\\right\\|_{\\varphi}\\rightarrow 0$ for $w\\to +\\infty$, if and only if \n$\\lim_{w \\to +\\infty} I^{\\varphi}\\left[\\lambda(f_w-f)\\right]\\ =\\ 0$, for every $\\lambda>0$.\nMoreover, an additional concept of convergence can be studied in Orlicz spaces: the ``modular convergence''. The latter induces a topology (modular topology) on the space $L^{\\varphi}(\\R^n)$ (\\cite{MU1,BAMUVI}). \n\n We will say that a family of functions $(f_w)_{w>0} \\subset L^{\\varphi}(\\R^n)$ is modularly convergent to a function $f \\in L^{\\varphi}(\\R^n)$ if\n$\n\\lim_{w \\to +\\infty} I^{\\varphi}\\left[\\lambda(f_w-f)\\right]\\ =\\ 0$,\nfor some $\\lambda>0$. Obviously, norm convergence implies modular convergence, while the converse implication does not hold in general. Modular and norm convergence are equivalent if and only if the $\\varphi$-function $\\varphi$ satisfies the $\\Delta_2$-condition, see e.g., \\cite{MU1,BAMUVI}. Finally, as a last basic property of Orlicz spaces, we recall the following.\n\\begin{lemma}[\\cite{BAMA}] \\label{lemma1}\nThe space $C_c(\\R^n)$ is dense in $L^{\\varphi}(\\R^n)$ with respect to the modular topology, i.e., for every $f \\in L^{\\varphi}(\\R^n)$ and for every $\\ep>0$ there exist a constant $\\lambda>0$ and a function $g \\in C_c(\\R^n)$ such that $I^{\\varphi}[\\lambda(f-g)]<\\ep$.\n\\end{lemma}\n\n\n\\section{The sampling Kantorovich operators}\n\nIn this section we recall the definition of the operators with which we will work. 
We will denote by \n$t_{\\underline{k}}=\\left(t_{k_1},...,t_{k_n}\\right)$ a vector where each element $(t_{k_i})_{k_i \\in \\Z}$, $i=1,...,n$, is a strictly increasing sequence of real numbers with $-\\infty < t_{k_i} < t_{k_i+1} < +\\infty$, for which there exist $\\Delta,\\ \\delta>0$ such that $\\delta \\leq \\Delta_{k_i}:= t_{k_{i+1}}-t_{k_{i}} \\leq \\Delta$, for every $i=1, ... , n$.\nNote that the elements of $(t_{k_i})_{k_i \\in \\Z}$ are not necessarily equally spaced. In what follows, we will identify with the symbol $\\Pi^n$ the sequence $(t_{\\underline{k}})_{\\kk \\in \\Z^n}$.\n\n A function $\\chi: \\R^n \\to \\R$ will be called a kernel if it satisfies the following properties:\n\\begin{itemize}\n\t\\item[$(\\chi1)$] $\\chi \\in L^1(\\R^n)$ and is bounded in a neighborhood of $\\underline{0} \\in \\R^n$; \n\t\\item[$(\\chi 2)$] for every $\\underline{u} \\in \\R^n$, \\hskip0.2cm $\\displaystyle \\sum_{\\underline{k} \\in \\Z^n} \\chi(\\underline{u} - t_{\\underline{k}})\\ =\\ 1$;\n\t\\item[$(\\chi 3)$] for some $\\beta>0$,\n$$\nm_{\\beta, \\Pi^n }(\\chi)\\ =\\ \\sup_{\\underline{u} \\in \\R^n}\\sum_{\\underline{k} \\in \\Z^n}\\left|\\chi(\\underline{u}-t_{\\underline{k}})\\right|\\cdot \\left\\|\\underline{u}-t_{\\underline{k}}\\right\\|^{\\beta}_2\\ <\\ +\\infty,\n$$\nwhere $\\| \\cdot \\|_2$ denotes the usual Euclidean norm. \n\\end{itemize}\nWe now recall the definition of the linear multivariate sampling Kantorovich operators introduced in \\cite{COVI}. 
Define:\n\\begin{equation} \\label{KANTO}\n(S_w f)(\\underline{x})\\ :=\\ \\sum_{\\underline{k} \\in \\Z^n} \\chi(w\\underline{x}-t_{\\underline{k}})\\left[\\frac{w^n}{A_{\\underline{k}}} \\int_{R_{\\underline{k}}^w}f(\\underline{u})\\ d\\underline{u}\\right], \\hskip0.7cm (\\underline{x} \\in \\R^n),\n\\end{equation}\nwhere $f: \\R^n \\to \\R$ is a locally integrable function such that the above series is convergent for every $\\underline{x} \\in \\R^n$, and where\n\\begin{displaymath}\nR_{\\underline{k}}^w\\ :=\\ \\left[\\frac{t_{k_1}}{w},\\frac{t_{k_1+1}}{w}\\right]\\times\\left[\\frac{t_{k_2}}{w},\\frac{t_{k_2+1}}{w}\\right]\\times...\\times\\left[\\frac{t_{k_n}}{w},\\frac{t_{k_n+1}}{w}\\right],\n\\end{displaymath}\n$w>0$ and $A_{\\underline{k}} = \\Delta_{k_1} \\cdot \\Delta_{k_2} \\cdot...\\cdot \\Delta_{k_n}$, $\\underline{k} \\in \\Z^n$. The operators in (\\ref{KANTO}) have been introduced in \\cite{BABUSTVI} in the univariate setting.\n\\begin{remark} \\label{remark1} \\rm\n(a) Under conditions $(\\chi 1)$ and $(\\chi 3)$, the following properties for the kernel $\\chi$ can be proved:\n$$\nm_{0, \\Pi^n}(\\chi)\\ :=\\ \\sup_{\\uu \\in \\R^n} \\sum_{\\kk \\in \\Z^n} \\left|\\chi(\\uu-\\tk)\\right|\\ <\\ +\\infty,\n$$\nand, for every $\\gamma >0$,\n\\begin{equation} \\label{eee}\n\\lim_{w \\to +\\infty} \\sum_{\\left\\|w \\uu-\\tk\\right\\|_2 >\\gamma w}\\left|\\chi(w \\uu - \\tk)\\right|\\ =\\ 0,\n\\end{equation}\nuniformly with respect to $\\uu \\in \\R^n$, see \\cite{COVI}.\n\\vskip0.2cm\n\\noindent (b) By (a), we obtain that $S_w f$, with $f \\in L^{\\infty}(\\R^n)$, is well-defined. Indeed,\n\\begin{displaymath}\n\\left|(S_w f)(\\xx)\\right|\\ \\miu\\ m_{0, \\Pi^n}(\\chi)\\left\\|f\\right\\|_{\\infty}\\ < +\\infty,\n\\end{displaymath}\nfor every $\\xx \\in \\R^n$, i.e. 
$S_w : L^{\\infty}(\\R^n)\\rightarrow L^{\\infty}(\\R^n)$.\n\\end{remark}\n\\begin{remark} \\label{remark3} \\rm\nNote that, in the one-dimensional setting, choosing $t_{k}=k$, for every $k \\in \\Z$, condition $(\\chi 2)$ is equivalent to \n\\begin{displaymath}\n\\widehat{\\chi}(2 \\pi k)\\ =\\ \\left\\{\n\\begin{array}{l}\n0, \\hskip0.5cm k \\in \\Z\\setminus \\left\\{0\\right\\}, \\\\\n1, \\hskip0.5cm k=0,\n\\end{array}\n\\right.\n\\end{displaymath}\nwhere $\\widehat{\\chi}(v):=\\int_{\\R}\\chi(u)e^{-ivu}\\ du$, $v \\in \\R$, denotes the Fourier transform of $\\chi$; see \\cite{BUNE,BABUSTVI,COVI,COVI3}.\n\\end{remark}\n\n\n\n\n\\section{Convergence results}\n\nIn this section, we show the main approximation results for the multivariate sampling Kantorovich operators. In \\cite{COVI}, the following approximation theorem for our operators has been proved. \n\\begin{theorem} \\label{th1}\nLet $f \\in C^0(\\R^n)$. Then, for every $\\underline{x} \\in \\R^n$,\n\\begin{displaymath}\n\\lim_{w \\to +\\infty} (S_w f)(\\underline{x})\\ =\\ f(\\underline{x}).\n\\end{displaymath}\nIn particular, if $f \\in C(\\R^n)$, then\n\\begin{displaymath}\n\\lim_{w \\to +\\infty} \\left\\|S_w f -f\\right\\|_{\\infty}\\ =\\ 0.\n\\end{displaymath}\n\\end{theorem}\n\\begin{proof}\nHere we highlight the main points of the proof.\n\n\\noindent Let $f \\in C^0(\\R^n)$ and $\\xx \\in \\R^n$ be fixed. By the continuity of $f$ we have that for every fixed $\\ep>0$ there exists $\\gamma>0$ such that $|f(\\xx)-f(\\uu)|<\\ep$ for every $\\| \\xx -\\uu\\|_2 \\miu \\gamma$, $\\uu \\in \\R^n$. 
Then, by $(\\chi 2)$ we obtain:\n$$\n\\left|(S_w f)(\\xx)-f(\\xx)\\right|\\ \\miu\\ \\sum_{\\kk \\in \\Z^n} \\left|\\chi(w \\xx - \\tk)\\right| \\frac{w^n}{A_{\\kk}}\\int_{R^w_{\\kk}}\\left|f(\\uu)-f(\\xx)\\right|\\, d\\uu\\ \n$$\n\\vskip-0.5cm\n\\begin{eqnarray*}\n&=& \\Big(\\sum_{\\left\\|w \\xx-\\tk\\right\\|_2 \\miu w \\gamma\/2}+\\sum_{\\left\\|w \\xx-\\tk\\right\\|_2 > w \\gamma\/2}\\Big)\\left|\\chi(w \\xx - \\tk)\\right|\\frac{w^n}{A_{\\kk}}\\int_{R^w_{\\kk}}\\left|f(\\uu)-f(\\xx)\\right|\\ d\\uu \\\\\n&=&\\ I_1+I_2.\n\\end{eqnarray*}\nFor $\\uu \\! \\in \\! R^w_{\\kk}$ and $\\left\\|w \\xx-\\tk\\right\\|_2\\! \\miu \\! w \\gamma\/2$ we have $\\|\\uu-\\xx\\|_2\\! \\miu \\! \\gamma$ for $w\\! > \\!0$ sufficiently large, then by the continuity of $f$ we obtain $I_1 \\miu m_{0,\\Pi^n}(\\chi) \\ep$ (see Remark \\ref{remark1} (a)). Moreover, by the boundedness of $f$ and (\\ref{eee}) we obtain $I_2 \\miu 2\\|f\\|_{\\infty} \\ep$ for $w>0$ sufficiently large, then the first part of the theorem follows since $\\ep>0$ is arbitrary. The second part of the theorem follows similarly, replacing $\\gamma>0$ with the parameter of the uniform continuity of $f$. \n\\end{proof}\n\n In order to obtain a modular convergence result, the following norm-convergence theorem for the sampling Kantorovich operators (see \\cite{COVI}) can be formulated.\n\\begin{theorem} \\label{norm_conv} \nLet $\\varphi$ be a convex $\\varphi$-function. For every $f \\in C_c(\\R^n)$ we have\n$$\n\\lim_{w \\to +\\infty} \\left\\|S_w f - f \\right\\|_{\\varphi}\\ =\\ 0.\n$$\n\\end{theorem}\nNow, we recall the following modular continuity property for $S_w$, useful to prove the modular convergence for the above operators in Orlicz spaces.\n\\begin{theorem} \\label{mod_cont} \nLet $\\varphi$ be a convex $\\varphi$-function. 
For every $f \\in L^{\\varphi}(\\R^n)$ there holds\n$$\nI^{\\varphi}[\\lambda S_w f]\\ \\miu\\ \\frac{\\left\\|\\chi\\right\\|_1}{\\delta^n\\cdot m_{0,\\Pi^n}(\\chi)}I^{\\varphi}[\\lambda m_{0,\\Pi^n}(\\chi)f],\n$$\nfor some $\\lambda>0$. In particular, $S_w$ maps $L^{\\varphi}(\\R^n)$ into $L^{\\varphi}(\\R^n)$.\n\\end{theorem}\n\\noindent Now, the main result of this section follows (see \\cite{COVI}).\n\\begin{theorem} \\label{th2}\nLet $\\varphi$ be a convex $\\varphi$-function. For every $f \\in L^{\\varphi}(\\R^n)$, there exists $\\lambda>0$ such that\n$$\n\\lim_{w \\to +\\infty} I^{\\varphi}[\\lambda (S_w f-f)]\\ =\\ 0.\n$$\n\\end{theorem}\n\\begin{proof}\nLet $f \\in L^{\\varphi}(\\R^n)$ and $\\ep>0$ be fixed. By Lemma \\ref{lemma1}, there exist $\\overline{\\lambda}>0$ and $g \\in C_c(\\R^n)$ such that $I^{\\varphi}[\\overline{\\lambda} (f-g)]\\!<\\! \\ep$. Let now $\\lambda>0$ be such that $3\\lambda(1 + m_{0,\\Pi^n}(\\chi)) \\miu \\overline{\\lambda}$. By the properties of $\\varphi$ and Theorem \\ref{mod_cont}, we have\n\\begin{eqnarray*}\n&& \\hskip-0.7cm I^{\\varphi}[\\lambda(S_w f-f)]\\ \\miu\\ I^{\\varphi}[3\\lambda(S_w f - S_w g)]+I^{\\varphi}[3\\lambda(S_w g - g)] + I^{\\varphi}[3\\lambda(f-g)]\\\\\n&\\miu&\\ \\frac{1}{m_{0,\\Pi^n}(\\chi)\\cdot \\delta^n}\\left\\|\\chi \\right\\|_1 I^{\\varphi}[\\overline{\\lambda} (f-g)] + I^{\\varphi}[3\\lambda(S_w g - g)] + I^{\\varphi}[\\overline{\\lambda} (f-g)] \\\\\n&<&\\ \\left(\\frac{1}{m_{0,\\Pi^n}(\\chi) \\cdot \\delta^n}\\left\\|\\chi \\right\\|_1+1\\right) \\ep + I^{\\varphi}[3\\lambda(S_w g - g)].\n\\end{eqnarray*}\nThe assertion follows from Theorem \\ref{norm_conv}.\n\\end{proof} \nThe setting of Orlicz spaces allows us to give a unified approach for the reconstruction, since we may obtain convergence results for particular cases of Orlicz spaces. 
For instance, choosing $\\varphi(u)=u^p$, $1 \\leq p < +\\infty$, we have that $L^{\\varphi}(\\R^n) = L^p(\\R^n)$ and $I^{\\varphi}[f]=\\|f\\|^p_{p}$, where $\\| \\cdot \\|_p$ is the usual $L^p$-norm. Then, from Theorem \\ref{mod_cont} and Theorem \\ref{th2} we obtain the following corollary.\n\\begin{corollary}\nFor every $f \\in L^p(\\R^n)$, $1 \\leq p < +\\infty$, \nthe following inequality holds:\n\\begin{displaymath}\n\\left\\|S_w f\\right\\|_p\\ \\miu\\ \\delta^{-n\/p}\\, [m_{0,\\Pi^n}(\\chi)]^{(p-1)\/p}\\, \\left\\|\\chi\\right\\|^{1\/p}_1\\, \\left\\|f\\right\\|_p.\n\\end{displaymath}\nMoreover, we have:\n$$\n\\lim_{w \\to +\\infty} \\|S_w f-f\\|_p\\ =\\ 0.\n$$\n\\end{corollary}\nThe corollary above allows us to reconstruct $L^p$-signals (in the $L^p$-sense), and therefore signals\/images which are not necessarily continuous. Other examples of Orlicz spaces to which the above theory can be applied can be found e.g. in \\cite{MUORL,MU1,BAMUVI,BABUSTVI,COVI}. The theory of sampling Kantorovich operators in the general setting of Orlicz spaces allows us to obtain, by means of a unified treatment, several applications in many different contexts.\n\n\n\\section{Examples of special kernels} \\label{sec5}\n\nOne important fact in our theory is the choice of the kernel, which influences the order of approximation that can be achieved by our operators (see e.g. \\cite{COVI3} in the one-dimensional setting).\n\n For instance, one can take into consideration {\\em radial kernels}, i.e., functions whose value depends only on the Euclidean norm of the argument. An example of such a kernel is given by the Bochner-Riesz kernel, defined by $b^{\\alpha}(\\underline{x}):=2^{\\alpha}\\Gamma(\\alpha+1)\\|\\underline{x}\\|_2^{-(n\/2)+\\alpha}{\\cal B}_{(n\/2)+\\alpha}(\\|\\underline{x}\\|_2)$,\nfor $\\underline{x} \\in \\R^n$, where $\\alpha > (n-1)\/2$, ${\\cal B}_{\\lambda}$ is the Bessel function of order $\\lambda$ and $\\Gamma$ is the Euler function. 
For more details about this matter, see e.g. \\cite{BUFIST}. \n\n To construct, in general, kernels satisfying all the assumptions $(\\chi_i)$, $i=1,2,3$, is not very easy.\n\n For this reason, here we show a procedure which is useful to construct examples using products of univariate kernels, see e.g. \\cite{BUFIST,COVI,COVI2}. In this case, we consider a uniform sampling scheme, i.e., $t_{\\underline{k}}=\\underline{k}$.\n\n Denote by $\\chi_1, ..., \\chi_n$ the univariate functions $\\chi_i : \\R \\to \\R$, $\\chi_i \\in L^1(\\R)$, satisfying\nthe following assumptions:\n\\begin{equation} \\label{uno-dim-}\nm_{\\beta,\\Pi^1}(\\chi_i)\\ :=\\ \\sup_{u \\in \\R}\\sum_{k \\in \\Z}\\left|\\chi_i(u-k)\\right| |u-k|^{\\beta}\\ <\\ +\\infty,\n\\end{equation}\nfor some $\\beta >0$, $\\chi_i$ is bounded in a neighborhood of the origin, and \n\\begin{equation} \\label{sing}\n\\sum_{k \\in \\Z}\\chi_i(u-k) =1,\n\\end{equation}\nfor every $u \\in \\R$, for $i=1,...,n$. \nNow, setting $\\chi(\\underline{x}) := \\prod_{i=1}^n\\chi_i(x_i)$, $\\underline{x}=(x_1,...,x_n) \\in \\R^n$,\nit is easy to prove that $\\chi$ is a multivariate kernel for the operators $S_w$ satisfying all the assumptions of our theory, see e.g., \\cite{BUFIST,COVI}. \n\n As a first example, consider the univariate Fej\\'{e}r's kernel defined by $F(x)\\ :=\\ \\frac{1}{2}\\mbox{sinc}^2\\left(\\frac{x}{2}\\right)$, $x \\in \\R$, where the sinc function is given by\n\\begin{displaymath}\n\\mbox{sinc}(x)\\ :=\\ \\left\\{\n\\begin{array}{l}\n\\disp \\frac{\\sin \\pi x}{\\pi x}, \\hskip1cm x \\in \\R\\setminus \\left\\{0\\right\\}, \\\\\n\\hskip0.5cm 1, \\hskip1.5cm x=0.\n\\end{array}\n\\right.\n\\end{displaymath}\nClearly, $F$ is bounded, belongs to $L^1(\\R)$ and satisfies the moment condition $(\\ref{uno-dim-})$ for every $0 < \\beta < 1$, as shown in \\cite{BUNE,BABUSTVI,COVI}. 
\nFurthermore, taking into account that the Fourier transform of $F$ is given by (see \\cite{BUNE})\n\\begin{displaymath}\n\\widehat{F}(v) :=\\ \\left\\{\n\\begin{array}{l}\n1-|v\/\\pi|,\\ \\hskip0.5cm |v|\\miu \\pi, \\\\\n0,\\ \\hskip1.85cm |v|>\\pi,\n\\end{array}\n\\right.\n\\end{displaymath}\nwe obtain by Remark \\ref{remark3} that condition $(\\ref{sing})$ is fulfilled. Then, we can define the multivariate Fej\\'{e}r's kernel $\\disp \\mathcal{F}_n(\\xx):= \\prod^n_{i=1}F(x_i)$, $\\underline{x}=(x_1,...,x_n) \\in \\R^n$, which satisfies all the conditions required of a multivariate kernel. The Fej\\'{e}r's kernel $F$ and the bivariate Fej\\'{e}r's kernel $F(x)\\cdot F(y)$ are plotted in Figure \\ref{fig1}.\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.22]{fejer}\n\\hskip0.8cm\n\\includegraphics[scale=0.22]{multidimFejer}\n\\caption{\\small{The Fej\\'{e}r's kernel $F$ (left) and the bivariate Fej\\'{e}r's kernel ${\\cal F}_2(x,y)$ (right).}} \\label{fig1}\n\\end{figure}\n\n Since the Fej\\'{e}r's kernel $\\mathcal{F}_n$ has unbounded support, to evaluate the sampling Kantorovich series at any given $\\xx \\in \\R^n$ we need an infinite number of mean values $w^n \\int_{R^w_{\\kk}}f(\\uu)\\ d\\uu$. However, if the function $f$ has compact support, this problem does not arise. In the case of functions having unbounded support, the infinite sampling series must be truncated to a finite one, which leads to the so-called truncation error. In order to avoid the truncation error, one can take kernels $\\chi$ with bounded support. Remarkable examples of kernels with compact support can be constructed using the well-known central B-spline of order $k \\in \\N$, defined by\n$$\nM_k(x) :=\\ \\frac{1}{(k-1)!}\\sum^k_{i=0}(-1)^i \\left(\\begin{array}{l} \\!\\! \nk\\\\\n\\hskip-0.1cm i\n\\end{array} \\!\\! 
\\right)\n\\left(\\frac{k}{2}+x-i\\right)^{k-1}_+,\n$$\nwhere the function $(x)_+ := \\max\\left\\{x,0\\right\\}$ denotes the positive part of $x \\in \\R$ (see \\cite{BABUSTVI,VIZA1,COVI}). The central B-spline $M_3$ and the bivariate B-spline kernel $M_3(x)\\cdot M_3(y)$ are plotted in Figure \\ref{fig2}.\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.22]{spline}\n\\hskip0.7cm\n\\includegraphics[scale=0.22]{bidimspline}\n\\caption{\\small{The B-spline $M_3$ (left) and the bivariate B-spline ${\\cal M}^2_3(x,y)$ (right).}} \\label{fig2}\n\\end{figure}\nWe have that the Fourier transform of $M_k$ is given by\n$\\widehat{M_k}(v)\\ =\\ \\mbox{sinc}^k\\left( \\frac{v}{2 \\pi} \\right)$, $v \\in \\R$, and then, if we consider the case of the uniformly spaced sampling scheme, condition (\\ref{sing}) is satisfied by Remark \\ref{remark3}. Clearly, $M_k$ is bounded on $\\R$, with compact support $[-k\/2,k\/2]$, and hence $M_k \\in L^1(\\R)$, for all $k \\in \\N$. Moreover, it is easy to deduce that condition $(\\ref{uno-dim-})$ is fulfilled for every $\\beta>0$, see \\cite{BABUSTVI}.\nHence $\\mathcal{M}^n_k(\\xx)\\ :=\\ \\prod^n_{i=1}M_k(x_i)$, $\\underline{x}=(x_1,...,x_n) \\in \\R^n$,\nis the multivariate B-spline kernel of order $k \\in \\N$.\n\n Finally, other important examples of univariate kernels are given by the Jackson-type kernels, defined by\n$J_k(x)=c_k\\, \\mbox{sinc}^{2k}\\left(\\frac{x}{2k\\pi\\alpha}\\right)$, $x \\in \\R$,\nwith $k \\in \\N$, $\\alpha \\geq 1$, where the normalization coefficients $c_k$ are given by\n$c_k\\ :=\\ \\left[ \\int_{\\R} \\mbox{sinc}^{2k}\\left(\\frac{u}{2 k \\pi \\alpha} \\right) \\, du \\right]^{-1}$.\nSince the kernels $J_k$ are band-limited to $[-1\/\\alpha,\\, 1\/\\alpha]$, i.e., their Fourier transform vanishes outside this interval, condition $(\\chi 2)$ is satisfied by Remark \\ref{remark3}, and $(\\ref{uno-dim-})$ is satisfied since $J_k(x)=\\mathcal{O}(|x|^{-2k})$, as $x \\to \\pm \\infty$, $k \\in \\N$. 
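The product construction described above is easy to check numerically. The following sketch (our illustration, not part of the original paper) implements the central B-spline $M_k$ through the truncated-power formula and verifies the partition-of-unity condition for the bivariate product kernel $M_3(x)\cdot M_3(y)$ on the uniform grid $t_{\underline{k}}=\underline{k}$:

```python
from math import comb, factorial

def bspline(k, x):
    # Central B-spline of order k:
    # M_k(x) = 1/(k-1)! * sum_{i=0}^{k} (-1)^i C(k,i) (k/2 + x - i)_+^{k-1}
    return sum((-1) ** i * comb(k, i) * max(k / 2 + x - i, 0.0) ** (k - 1)
               for i in range(k + 1)) / factorial(k - 1)

def product_kernel(x, y, k=3):
    # Bivariate kernel chi(x, y) = M_k(x) * M_k(y), as in the product construction.
    return bspline(k, x) * bspline(k, y)

# Condition (chi2): the integer translates of the kernel sum to 1 at every point.
for (u, v) in [(0.0, 0.0), (0.3, -1.7), (2.25, 0.5)]:
    s = sum(product_kernel(u - k1, v - k2)
            for k1 in range(-5, 6) for k2 in range(-5, 6))  # M_3 vanishes outside [-3/2, 3/2]
    print(round(s, 12))  # each sum prints as 1.0
```

Since $M_3$ is supported in $[-3/2, 3/2]$, each sum contains only finitely many non-zero terms, so the check is exact up to floating-point error.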
In a similar manner, by the previous procedure, we can construct a multivariate version of the Jackson kernels. For more details about Jackson-type kernels, and for other useful examples of kernels, see e.g. \\cite{BUNE,BAMUVI,BABUSTVI0,BABUSTVI}.\n\n\n\\section{The sampling Kantorovich algorithm for image reconstruction}\n\nIn this section, we show applications of the multivariate sampling Kantorovich operators to image reconstruction. First of all, we recall that every bi-dimensional gray scale image $A$ is represented by a suitable matrix and can be modeled as a step function $I$, with compact support, belonging to $L^p(\\R^2)$, $1 \\leq p <+\\infty$. The definition of $I$ arises naturally as follows:\n$$\nI(x,y)\\ :=\\ \\sum^{m}_{i=1}\\sum^{m}_{j=1}a_{ij} \\cdot \\textbf{1}_{ij}(x,y), \\hskip0.3cm (x,y) \\in \\R^2,\n$$\nwhere ${\\bf 1}_{ij}(x,y)$, $i,j =1,2,...,m$, are the characteristic functions of the sets $(i-1,\\ i]\\times(j-1,\\ j]$ (i.e. ${\\bf 1}_{ij}(x,y)=1$, for $(x,y)\\ \\in\\ (i-1,\\ i]\\times(j-1,\\ j]$ and ${\\bf 1}_{ij}(x,y)=0$ otherwise). \n\n The above function $I(x,y)$ is defined in such a way that to every pixel $(i,j)$ the corresponding gray level $a_{ij}$ is associated. \n\n We can now consider the approximation of the original image by the bivariate sampling Kantorovich operators $(S_w I)_{w>0}$ based upon some kernel $\\chi$. \n\n Then, in order to obtain a new image (matrix) that approximates the original one in the $L^p$-sense, it is sufficient to sample $S_w I$ (for some $w>0$) with a fixed sampling rate. In particular, we can reconstruct the approximating images (matrices) taking into consideration different sampling rates, and this is possible since we know the analytic expression of $S_w I$.\n\n If the sampling rate is chosen higher than the original sampling rate, one can get a new image that has a better resolution than the original one. 
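As a concrete illustration of the reconstruction step, here is a minimal sketch (ours; the authors' implementation is in MATLAB and is not reproduced here) that evaluates the bivariate operator $S_w I$ for a small step-function image, with uniform nodes $t_{\underline{k}}=\underline{k}$ and the product kernel generated by the hat B-spline $M_2$. For integer $w$, every cell $R^w_{\underline{k}}$ lies inside a single pixel, so the mean values of $I$ are obtained exactly by evaluating $I$ at the cell midpoints:

```python
import math

def hat(x):
    # Central B-spline of order 2 (hat function): a compactly supported kernel.
    return max(1.0 - abs(x), 0.0)

A = [[50, 200], [120, 30]]  # 2x2 gray-level matrix a_ij (toy example)

def I(x, y):
    # Step-function model of the image: value a_ij on (i-1, i] x (j-1, j].
    i, j = math.ceil(x), math.ceil(y)
    return A[i - 1][j - 1] if 1 <= i <= len(A) and 1 <= j <= len(A[0]) else 0.0

def S_w(x, y, w):
    # (S_w I)(x, y) = sum_k chi(wx - k1, wy - k2) * w^2 * integral of I over R_k^w;
    # for integer w the mean over R_k^w equals I at the cell midpoint.
    total = 0.0
    for k1 in range(int(w * x) - 1, int(w * x) + 2):   # hat vanishes outside (-1, 1)
        for k2 in range(int(w * y) - 1, int(w * y) + 2):
            weight = hat(w * x - k1) * hat(w * y - k2)
            if weight:
                total += weight * I((k1 + 0.5) / w, (k2 + 0.5) / w)
    return total

# Deep inside a pixel the gray level is reproduced; near pixel edges the
# operator blends the neighboring levels.
print(S_w(0.5, 0.5, 8))   # inside pixel (1,1): reproduces gray level 50
print(S_w(1.5, 1.5, 8))   # inside pixel (2,2): reproduces gray level 30
```

Sampling $S_w I$ on a grid finer than the original pixel grid (e.g. six times finer, as in the $75\times 75 \to 450\times 450$ example below) then yields the enhanced image matrix.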
The above procedure has been implemented by using MATLAB and tools of matrix computation, in order to obtain an optimized algorithm based on the multivariate sampling Kantorovich theory.\n\n In the next sections, examples of thermographic images reconstructed by the sampling Kantorovich operators will be given, to show the main applications of the theory to civil engineering. In particular, a real-world case-study is analyzed in terms of modal analysis, to obtain a model useful to study the response of the building under seismic action.\n\n\n\\section{An application of thermography to civil engineering}\n\nIn the present application, thermographic images will be used to locate the resisting elements and to define their dimensions, and moreover to investigate the actual texture of the masonry wall, i.e., the arrangement of blocks (bricks and\/or stones) and mortar joints. \nIn general, thermographic images have a resolution too low to accurately analyze the texture of the masonries, therefore it has to be increased by means of suitable tools.\n\nIn order to obtain a consistent separation of the phases, that is, a correct identification of the pixels which belong to the blocks and of those which belong to the mortar joints, the image is converted from gray-scale representation to black-and-white (binary) representation by means of an image texture algorithm, which employs techniques belonging to the field of digital image processing. \nThe image texture algorithm, described in detail in \\cite{CACLGU}, leads to areas of white pixels identified as blocks and areas of black pixels identified as mortar joints.\nHowever, the direct application of the image texture algorithm to the thermographic images (see Figure~\\ref{fig:original_ric}) can produce errors, such as an incorrect separation between the bricks and the mortar (see Figure~\\ref{fig:original_BW}). \nTherefore, we can use the sampling Kantorovich operators to process the thermographic images. 
\nIn particular, here we used the operators $S_w$ based upon the bivariate Jackson-type kernel with $k=12$ (see Section \\ref{sec5}) and the parameter $w=40$ (see Figure~\\ref{fig:reconstructed_ric}). \nThe application of the image texture algorithm then produces a consistent separation of the phases (see Figure~\\ref{fig:reconstructed_BW}).\n\n\\begin{figure}[!ht]\n\\centering\n\\subfigure[]{ \\includegraphics[width = 0.2\\textwidth]{sez_im3_color} \\label{fig:original_ric}}\n\\hskip0.15cm\n\\subfigure[]{ \\includegraphics[width = 0.2\\textwidth]{sez_im3_color_BW} \\label{fig:original_BW}}\n\\hskip0.15cm \n\\subfigure[]{ \\includegraphics[width = 0.2\\textwidth]{ric_sez_im3_color_JACK_N=12_450pix_w=40} \\label{fig:reconstructed_ric}}\n\\hskip0.15cm\n\\subfigure[]{ \\includegraphics[width = 0.2\\textwidth]{ric_sez_im3_color_JACK_N=12_450pix_w=40_BW} \\label{fig:reconstructed_BW}} \n\\caption{{\\small (a) Original thermographic image ($75\\times 75$ pixel resolution) and (b) its texture; (c) Reconstructed thermographic image ($450\\times 450$ pixel resolution) and (d) its texture.}}\n\\label{fig:original}\n\\end{figure}\n\nIn order to perform the structural analysis, the mechanical characteristics of a homogeneous material equivalent to the original heterogeneous material are sought. \nThe equivalence is in the sense that, when subjected to the same boundary conditions (b.c.), the overall responses in terms of mean values of stresses and deformations are the same.\nThe equivalent elastic properties are estimated by means of the ``test-window'' method \\cite{CLGU}.\n\n\n\\section{Case-study}\n\nThe image texture algorithm described previously has been used to analyze a real-world case-study: a building consisting of two levels and an attic, with a very simple architectural plan, a rectangle with sides 11 m and 11.4 m (see Figure~\\ref{fig:pianta_prospetto}). \nThe vertical structural elements consist of masonry walls. 
\nAt the first level the masonry walls have a thickness of 40 cm, with blocks made of stones, while at the second level the masonry is made of squared tuff blocks and has a thickness of about 35 cm. \nBoth surfaces of the walls are plastered.\nThe slab can be assumed to be rigid in the horizontal plane, and \nthe building is placed in a medium-risk seismic area. \nAccording to a preliminary visual survey, the structure does not have strong asymmetries between the principal directions and the distribution of the masses is quite uniform.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[width = 0.75\\textwidth]{pianta_prospetto} \n\\caption{{\\small Case-study building: plan (left) and main facade (right).}}\n\\label{fig:pianta_prospetto}\n\\end{figure}\n\n\nThe response of the building is evaluated by means of three different models.\nIn the first model, only information that can be gathered by the visual survey has been used, and the characteristics of the materials have been assumed according to the Italian Building Code. \nIn the second model, information concerning the actual geometry of the vertical structural elements, acquired by means of the thermographic survey, has been used; \nfor example, the survey showed that the openings at the ground level in the main facade were once larger and have been subsequently partially filled with non-structural masonry, and thus the dimensions of all the masonry walls in the facade have to be reduced.\nFinally, in the third model, information about the actual texture of the masonries of the ground level (chaotic masonry, Fig.~\\ref{fig:chaotic}) and of the second level (periodic masonry, Fig.~\\ref{fig:periodic}) has been used. \nIn particular, the actual textures were established using the reconstructed thermographic images. 
\n\n\\begin{figure}[!ht]\n\\centering\n\\subfigure[]{ \\includegraphics[width = 0.2\\textwidth]{ir5463_dx_ric_BW} \\label{fig:chaotic} }\n\\hskip0.8cm\n\\subfigure[]{ \\includegraphics[width = 0.2\\textwidth]{ir5462_dx_ric_BW} \\label{fig:periodic}} \n\\caption{{\\small Texture of (a) ground level and (b) second level masonries.}}\n\\label{fig:masonries}\n\\end{figure}\n\nThe main characteristics of the models are reported in Table~\\ref{tab:char_models}.\nFor the mechanical characteristics, the Young's modulus $E$ and the shear modulus $G$ are shown.\nIn the third model, the following mechanical characteristics (Young's modulus $E$ and Poisson's ratio $\\nu$) of the constituent phases have been used: for the stones of the ground level, $E = 25000\\,\\mathrm{N\\,mm^{-2}}, \\nu = 0.2$; for the bricks of the second level, $E = 1700\\,\\mathrm{N\\,mm^{-2}}, \\nu = 0.2$; for the mortar of both levels, $E = 2500\\,\\mathrm{N\\,mm^{-2}}, \\nu = 0.2$.\n\n\\begin{table}[!ht]\n\\center\n{\\small\n\\begin{tabular}{c|c|cc|cc}\n\\hline\n & & \\multicolumn{2}{|c}{ground level} & \\multicolumn{2}{|c}{second level}\\\\\n & geometry & $E$ & $G$ & $E$ & $G$ \\\\\n & & $\\mathrm{N\\,mm^{-2}}$ & $\\mathrm{N\\,mm^{-2}}$ & $\\mathrm{N\\,mm^{-2}}$ & $\\mathrm{N\\,mm^{-2}}$ \\\\\n\\hline\n Model \\#1 & visual survey & 3346 & 1115 & 1620 & 540 \\\\\n Model \\#2 & thermogr. survey & 3346 & 1115 & 1620 & 540 \\\\\n Model \\#3 & thermogr. 
survey & 7050 & 2957 & 1996 & 833 \\\\\n \\hline\\end{tabular}\n}\n\\caption{{\\small Geometry and masonries' mechanical characteristics in the models.}}\n\\label{tab:char_models}\n\\end{table}\n\n\nThe behavior under seismic actions is estimated by means of modal analysis, using a commercial code based on the Finite Element Method.\nThe periods of the first three modes and the corresponding mass participating ratios for the two principal directions of seismic action are reported in Tab.~\\ref{tab:periodi_masse}.\nFor all the models, the first mode is along the $y$ axis, the second is along the $x$ axis, and the third is mainly torsional. \nHowever, the second mode is not a purely translational one, since the asymmetry in the distribution of the walls also produces a torsional component. \n\n\\begin{table}[h!]\n\\center\n{\\small\n\\begin{tabular}{c|p{0.7cm}p{0.7cm}p{0.7cm}|p{0.7cm}p{0.7cm}p{0.7cm}|p{0.7cm}p{0.7cm}p{0.7cm}}\n\\hline\n& \\multicolumn{3}{|c|}{Periods}\n & \\multicolumn{6}{|c}{Mass participating ratio} \\\\\n & \\multicolumn{3}{|c|}{ }\n & \\multicolumn{3}{|c|}{ $x$ direction} & \\multicolumn{3}{|c}{$y$ direction} \\\\\n & 1st mode & 2nd mode & 3rd mode & 1st mode & 2nd mode & 3rd mode & 1st mode & 2nd mode & 3rd mode\\\\\n\\hline\n Model \\#1 & 0.22 & 0.15 & 0.13 & 0.00 & 0.64 & 0.13 & 0.73 & 0.00 & 0.00 \\\\\n Model \\#2 & 0.22 & 0.14 & 0.13 & 0.00 & 0.51 & 0.27 & 0.73 & 0.01 & 0.00 \\\\\n Model \\#3 & 0.18 & 0.13 & 0.11 & 0.00 & 0.46 & 0.26 & 0.68 & 0.01 & 0.00 \\\\\n \\hline\\end{tabular}\n}\n\\caption{{\\small Periods and mass participating ratios for the two directions of seismic action of the first three modes.}}\n\\label{tab:periodi_masse}\n\\end{table}\n\nAs can be noted, the first two models have the same period for the fundamental mode in the $y$ direction; nevertheless, in the $x$ direction there is a slight difference, due to the reduced width of the masonry walls discovered by means of the thermographic survey. 
In fact, the reduction of dimensions leads to a decrease of the global stiffness while the total mass is almost the same. \nThe periods of the third model show a reduction of about 20\% due to the greater value of the equivalent elastic moduli estimated by means of the homogenized texture.\nAs already noted, for seismic action in the $x$ direction the structural response is dominated by the second mode, with a significant contribution also from the third mode (which is torsional); for seismic action in the $y$ direction the response is dominated by the first mode.\nIt is also worth noting that Model \#3 shows a reduced mass participation ratio for the fundamental mode in each direction, and therefore a greater number of modes should be considered in the evaluation of the seismic response in order to achieve a suitable accuracy. \n\n\section{Concluding remarks}\n\nThe sampling Kantorovich operators and the corresponding MATLAB algorithm for image reconstruction are very useful for enhancing the quality of thermographic images of portions of masonry walls. In particular, after processing by the sampling Kantorovich algorithm, the thermographic image has a higher definition than the original one and, therefore, it was possible to estimate the mechanical characteristics of homogeneous materials equivalent to the actual masonries, taking into account the texture (i.e., the arrangement of blocks and mortar joints). These materials were used to model the behavior of a case study under seismic action. This model has been compared with others constructed by well-known methods, using a ``naked eye'' survey and the mechanical parameters for materials taken from the Italian Building Code. Our method, based on the processing of thermographic images by sampling Kantorovich operators, enhances the quality of the model with respect to that based on the visual survey only.\n\n\noindent In particular, the proposed approach allows one to overcome some difficulties that arise when dealing with the vulnerability analysis of existing structures, which are: i) the knowledge of the actual geometry of the walls (in particular the identification of hidden doors and windows); ii) the identification of the actual texture of the masonry and the distribution of inclusions and mortar joints; and from this iii) the estimation of the elastic characteristics of the masonry.\nIt is noteworthy that for item i) the engineer usually has limited knowledge, due to the lack of documentation, while for items ii) and iii) they usually use tables proposed in technical manuals and standards, which however give wide bounds in order to encompass the generality of real masonries. Instead, the use of reconstruction techniques on thermographic images coupled with homogenization permits reducing this uncertainty in the estimation of the mechanical characteristics of the masonry.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzkfpi b/data_all_eng_slimpj/shuffled/split2/finalzzkfpi new file mode 100644 index 0000000000000000000000000000000000000000..d7587ac563b7015e2e24610db148629126bcb748 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzkfpi @@ -0,0 +1,5 @@ +{"text":"\section{Introduction}\n\label{1}\n\nEdges play an important role in many vision tasks~\cite{YangZYPML20,wu2012strong,ramamonjisoa2020predicting,wang2015designing}. 
While generic edge detection~\cite{xie2015hed,liu2017rcf,he2019bdcn,wang2017ced} has been extensively studied for decades, specific edge detection has recently attracted an increasing amount of effort due to its practical applications concerning different types of edges, such as occlusion contours~\cite{wang2016doc,wang2018doobnet,lu2019ofnet} or semantic boundaries~\cite{yu2017casenet,dff19}.\n\nIn his seminal work \cite{marr1982vision}, David Marr summarized four basic ways edges can arise: (1) \textit{surface-reflectance discontinuity}, (2) \textit{illumination discontinuity}, (3) \textit{surface-normal discontinuity}, and (4) \textit{depth discontinuity}, as shown in Fig.~\ref{fig:fig1}. Recent studies~\cite{YangZYPML20,wu2012strong,ramamonjisoa2020predicting,wang2015designing} have shown that the above types of edges are beneficial for downstream tasks. For example, pavement crack detection (reflectance discontinuity) is a critical task for intelligent transportation \cite{YangZYPML20}; shadow edge (illumination discontinuity) detection is a prerequisite for shadow removal and path detection \cite{wu2012strong}; and \cite{ramamonjisoa2020predicting} and \cite{wang2015designing} show that depth-edge and normal-edge representations promote refined normal estimation and sharp depth estimation, respectively. Besides, \cite{KimTO15Joint} utilizes four types of cues simultaneously to improve the performance of depth refinement.\n\nDespite their importance, fine-grained edges are still under-explored, especially when compared with generic edges. Generic edge detectors usually treat edges indistinguishably, while existing studies of specific edges focus on individual edge types. 
By contrast, the four fundamental types of edges have, to the best of our knowledge, never been explored in an integrated edge detection framework.\n\n\n\nIn this paper, for the first time, we propose to simultaneously detect the four types of edges, namely \textit{reflectance edge} (RE), \textit{illumination edge} (IE), \textit{normal edge} (NE) and \textit{depth edge} (DE). Although edges share similar patterns of intensity variation in images, they have different physical bases. Specifically, REs and IEs are mainly related to photometric causes -- REs are caused by changes in material appearance (\emph{e.g.}, texture and color), while IEs are produced by changes in illumination (\emph{e.g.}, shadows, light sources and highlights). By contrast, NEs and DEs reflect geometry changes in object surfaces or depth discontinuities. Considering the correlations and distinctions among all types of edges, we develop a CNN-based solution, named \textit{RINDNet}, for jointly detecting the above four types of edges.\n\nRINDNet works in three stages. In stage \uppercase\expandafter{\romannumeral1}, it extracts general features and spatial cues from a backbone network for all edges. Then, in stage \uppercase\expandafter{\romannumeral2}, it proceeds with four separate decoders. Specifically, low-level features are first integrated under the guidance of high-level hints by the Weight Layer (WL), and then fed into the RE-Decoder and IE-Decoder to produce features for REs and IEs, respectively. At the same time, the NE-Decoder and DE-Decoder take the high-level features as input and explore effective features. After that, these features and accurate spatial cues are forwarded to four decision heads in stage \uppercase\expandafter{\romannumeral3} to predict the initial results. 
Finally, the attention maps obtained by the Attention Module (AM), which captures the underlying relations between all types, are aggregated with the initial results to generate the final predictions. All these components are differentiable, making RINDNet an end-to-end architecture that jointly optimizes the detection of the four types of edges.\n\n\nTraining and evaluating edge detectors for all four types of edges requires images with all such edges annotated. In this paper, we create the first known such dataset, named \textit{BSDS-RIND}, by carefully labeling images from the BSDS~\cite{arbelaez2010bsds} benchmark (see Fig.~\ref{fig:fig1}). BSDS-RIND allows the first thorough evaluation of edge detection of all four types. The proposed RINDNet shows clear advantages over previous edge detectors, both quantitatively and qualitatively. The source code, dataset, and benchmark are available at \url{https:\/\/github.com\/MengyangPu\/RINDNet}.\n\nWith the above efforts, our study is expected to stimulate further research along this line, and to benefit more downstream applications with rich edge cues. Our contributions are summarized as follows: (1) We develop a novel end-to-end edge detector, RINDNet, to jointly detect the four types of edges. RINDNet is designed to effectively investigate shared information among different edges (\emph{e.g.}, through feature sharing) and meanwhile to flexibly model the distinctions between them (\emph{e.g.}, through edge-aware attention). (2) We present the first public benchmark, BSDS-RIND, dedicated to simultaneously studying the four edge types, namely reflectance edge, illumination edge, normal edge and depth edge. 
(3) In our experiments, the proposed RINDNet shows clear advantages over the state of the art.\n\n\section{Related Works}\n\label{2}\n\n\n\vspace{-2mm}\n\paragraph{Edge Detection Algorithms.}\nEarly edge detectors~\cite{kittler1983accuracy,canny1986computational,winnemoller2011xdog} obtain edges directly from the analysis of image gradients. By contrast, learning-based methods \cite{martin2004learning,dollar2006supervised,lim2013sketch} exploit different low-level features that respond to characteristic changes; a classifier is then trained to generate edges. CNN-based edge detectors~\cite{kokkinos2015pushing,deng2020dscd,xu2017AMHNet,shen2015deepcontour,bertasius2015deepedge,bertasius2015hfl,liu2016rds,maninis2016cob,deng2018lpcb,kelm2019rcn,poma2020dexined} do not rely on hand-crafted features and achieve better performance.\nCombining multi-scale and multi-level features, \cite{xie2015hed,liu2017rcf,poma2020dexined,he2019bdcn} yield outstanding progress on generic edge detection. A novel refinement architecture is also proposed in \cite{wang2017ced}, using a top-down backward refinement pathway to generate crisp edges.\nRecent works~\cite{zhen2020joint,acuna2019devil,yu2018simultaneous,yang2016object} pay more attention to specific types of edges. In~\cite{hariharan2011semantic}, a generic object detector is combined with bottom-up contours to infer object contours. CASENet~\cite{yu2017casenet} adopts a nested architecture to address semantic edge detection. For better prediction, DFF~\cite{dff19} learns adaptive weights to generate specific features for each semantic category. For occlusion boundary detection, DOC~\cite{wang2016doc} decomposes the task into occlusion edge classification and occlusion orientation regression; two sub-networks are then used to separately perform these two tasks. 
DOOBNet~\\cite{wang2018doobnet} uses an encoder-decoder structure to obtain multi-scale and multi-level features, and shares the backbone features with two branches. OFNet~\\cite{lu2019ofnet} considers the relevance and distinction for the occlusion edge and orientation, thus it shares the occlusion cues between two sub-networks.\n\n\n\\vspace{-5mm}\n\\paragraph{Edge Datasets.} \nMany datasets have been proposed for studying edges.\nBSDS \\cite{arbelaez2010bsds} is a popular edge dataset for detecting generic edges containing $500$ RGB natural images. Although each image is annotated by multiple users, they usually pay attention to salient edges related to objects. BIPED~\\cite{mely2016multicue} is created to explore more comprehensive and dense edges, and contains $250$ outdoor images. NYUD~\\cite{silberman2012indoor} contains $1,449$ RGB-D indoor images, and lacks edge types pertaining to outdoor scenes. Significantly, Multicue~\\cite{mely2016multicue} considers the interaction between several visual cues (luminance, color, stereo, motion) during boundary detection.\n\nRecently, SBD \\cite{hariharan2011semantic} is presented for detecting semantic contours, using the images from the PASCAL VOC challenge \\cite{everingham2010pascal}. Cityscapes \\cite{cordts2016cityscapes} provides the object or semantic boundaries focusing on road scenes. For reasoning occlusion relationship between objects, the dataset in \\cite{ren2006figure} consists of $200$ images, where boundaries are assigned with figure\/ground labels. Moreover, PIOD~\\cite{wang2016doc} contains $10,000$ images, each with two annotations: a binary edge map denotes edge pixels and a continuous-valued occlusion orientation map. 
The recent dataset in~\\cite{ramamonjisoa2020predicting} annotates NYUD test set for evaluating the occlusion boundary reconstruction.\n\n\nOur work is inspired by the above pioneer studies, but makes novel contributions in two aspects: the proposed RINDNet, to the best of our knowledge, is the first edge detector to jointly detect all four types of edges, and the proposed BSDS-RIND is the first benchmark with all four types of edges annotated.\n\n\\section{Problem Formulation and Benchmark}\n\\label{3}\n\n\\subsection{Problem Formulation}\n\\label{3.1}\nLet $X \\in \\mathbb{R}^{3 \\times W \\times H}$ be an input image with ground-truth labels $\\mathcal{E}=\\{E^{r},E^{i},E^{n},E^{d}\\}$, where $E^{r}, E^{i}, E^{n}, E^{d} \\in\\{0,1\\}^{W \\times H}$ are binary edge maps indicating the reflectance edges (REs), illumination edges (IEs), surface-normal edges (NEs) and depth edges (DEs), respectively. Our goal is to generate the final predictions $\\mathcal{Y}=\\{Y^{r}, Y^{i}, Y^{n}, Y^{d}\\}$, where $Y^{r}, Y^{i}, Y^{n}, Y^{d}$ are the edge maps corresponding to REs, IEs, NEs and DEs, respectively. In our work, we aim to learn a CNN-based edge detector $\\psi$: \n$\\mathcal{Y} = \\psi (X)$.\n\nThe training of $\\psi$ can be done over training images by minimizing some loss functions between $\\mathcal{E}$ and $\\mathcal{Y}$. Therefore, a set of images with ground-truth labels are required to learn the mapping $\\psi$. We contribute such annotations in this work, and the detailed processes are shown in \\S\\ref{3.2}.\n\n\n\\subsection{Benchmark}\n\\label{3.2}\nOne aim of our work is to contribute a first public benchmark, named BSDS-RIND, over BSDS images~\\cite{arbelaez2010bsds}. The original images contain various complex scenes, which makes it challenging to jointly detect all four types of edges. 
Fig.~\\ref{fig:fig1} (Right) shows some examples of our annotations.\n\n\n\\vspace{-3mm}\n\\paragraph{Edge Definitions.}\nIt is critical to define four types of edges for the annotation task. Above all, we give the definition of each type and illustrate it with examples.\n\\begin{itemize}\n \\vspace{-1.6mm}\\item \\textbf{\\textit{Reflectance Edges} (REs)} usually are caused by the changes in material appearance (\\emph{e.g.}} \\def\\Eg{\\emph{E.g.}, texture and color) of smooth surfaces. Notably, although the edges within paintings in images (see Fig.~\\ref{fig:fig1} (c)) could be classified to DEs by human visual system, these edges are assigned as REs since there is no geometric discontinuity. \n \\vspace{-1.6mm}\\item \\textbf{\\textit{Illumination Edges} (IEs)} are produced by shadows, light sources, highlights, \\emph{etc.}} \\def\\vs{\\emph{vs.}\\ (as shown in Fig.~\\ref{fig:fig1}).\n \\vspace{-1.6mm}\\item \\textbf{\\textit{Normal Edges} (NEs)} mark the locations of discontinuities in surface orientation and normally arise between parts. As shown in Fig.~\\ref{fig:fig1} (b), we take the edges between the building and the ground as an example, the change of depth across these two surfaces is continuous but not smooth, which is caused by the surface-normal discontinuity between them.\n \\vspace{-1.6mm}\\item \\textbf{\\textit{Depth Edges} (DEs)} are resulted by depth discontinuity, and often coincide with object silhouettes. It is difficult to measure the depth difference (\\emph{e.g.}} \\def\\Eg{\\emph{E.g.}, Fig.~\\ref{fig:fig1} (a)), thus the relative depth difference is used to determine whether an edge belongs to DEs. 
Although there exists the depth changes between windows and walls in the building (Fig.~\\ref{fig:fig1} (b)), the small distance ratio is caused by the long distance between them and camera, so such edges are classified to REs rather than DEs.\n\\end{itemize}\n\n\\vspace{-6.5mm}\n\\paragraph{Annotation Process.}\nThe greatest effort for constructing a high-quality edge dataset is devoted to, not surprisingly, manual labeling, checking, and revision. For this task, we manually construct the annotations using ByLabel~\\cite{qin2018bylabel}.\nTwo annotators collaborate to label each image. One annotator first manually labels the edges, and another annotator checks the result and may supplement missing edges. Those edges with labels are added directly to the final dataset if both annotators agree with each other. Ambiguous edges will be revised by both annotators together: the consistent annotations are given after discussion. After iterating several times, we get the final annotations. Moreover, for some edges that are difficult to determine the main factors of their formation, multiple labels are assigned for them. It is only about 53k (2\\%) pixels with multi-labels in BSDS-RIND. In addition, we use the average Intersection-over-Union (IoU) score to measure agreement between two annotators, and get $0.97$, $0.92$, $0.93$ and $0.95$ for REs, IEs, NEs and DEs, respectively. The statistics show good consistency. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.86\\linewidth, height=.28\\linewidth]{edgeNum.pdf}\n\\caption{The distribution of pixels for each type of edges on BSDS-RIND training set and testing set.}\n\\label{fig:edge_num}\n\\vspace{-13pt}\n\\end{figure}\n\nWith all efforts, finally, a total of $500$ images are carefully annotated, leading to a densely annotated dataset, named BSDS-RIND. Then it is split into $300$ training images, and $200$ testing images, respectively. 
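The inter-annotator agreement measure used above can be sketched in a few lines of plain Python (a toy illustration on hypothetical $2\times3$ binary maps; the actual evaluation operates on full-resolution edge maps):

```python
# Toy sketch of the Intersection-over-Union (IoU) agreement score between
# two annotators' binary edge maps, represented as 2-D lists of 0/1 values.

def iou(map_a, map_b):
    """IoU of two same-sized binary maps; 1.0 by convention if both are empty."""
    inter = sum(a and b for ra, rb in zip(map_a, map_b) for a, b in zip(ra, rb))
    union = sum(a or b for ra, rb in zip(map_a, map_b) for a, b in zip(ra, rb))
    return inter / union if union else 1.0

a = [[1, 1, 0],
     [0, 1, 0]]
b = [[1, 0, 0],
     [0, 1, 1]]
print(iou(a, b))  # 2 intersecting pixels / 4 pixels in the union = 0.5
```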
The total number of pixels for each type on the BSDS-RIND training and testing sets is reported in Fig. \ref{fig:edge_num}. Significantly, the number of edge pixels in BSDS-RIND is twice that in BSDS. Moreover, edge detection is a pixel-wise task, thus the number of samples provided by BSDS-RIND adequately supports learning-based algorithms. More examples and details are given in the supplementary material.\n\n\n\n\begin{figure*}[!t]\n\centering\n\includegraphics[width=0.95\linewidth, height=.44\linewidth]{figure2.pdf}\n\caption{The three-stage architecture of RINDNet. \textbf{Stage \uppercase\expandafter{\romannumeral1}}: the input image is fed into a backbone to extract features shared with all edge types. \textbf{Stage \uppercase\expandafter{\romannumeral2}}: features across different levels are fused via Weight Layers (WLs), and are forwarded to four decoders in two clusters: RE-Decoder\/IE-Decoder and NE-Decoder\/DE-Decoder. \textbf{Stage \uppercase\expandafter{\romannumeral3}}: four decision heads predict the four types of initial results. In addition, the attention maps learned by the attention module are integrated into the final prediction ($A^b$ used only in training). (Best viewed in color)}\n\label{fig:fig2}\n\vspace{-13pt}\n\end{figure*}\n\n\n\n\section{RINDNet}\n\label{4}\n\nIn this work, we design an end-to-end network (\S\ref{4.1}), named \textit{RINDNet}, to learn distinctive features for optimizing the detection of the four edge types jointly. Fig.~\ref{fig:fig2} shows an overview of our proposed RINDNet, which includes three stages of initial result inference (\emph{i.e.}, extracting common features, preparing distinctive features, and generating initial results) and final predictions integrated by an Attention Module. 
We also explain the loss functions and training details in \S\ref{4.2} and \S\ref{4.3}.\n\n\subsection{Methodology}\n\label{4.1}\n\n\paragraph{Stage \uppercase\expandafter{\romannumeral1}: Extracting Common Features for All Edges.}\nWe first use a backbone to extract common features for all edges because these edges share similar patterns of intensity variation in images.\nThe backbone follows the structure of ResNet-50 \cite{he2016deepres}, which is composed of five repetitive building blocks. Specifically, the feature maps from the above five blocks of ResNet-50 \cite{he2016deepres} are denoted as $res_1$, $res_2$, $res_3$, $res_4$ and $res_5$, respectively.\n\nThen, we generate spatial cues from the above features. It is well known that different layers of CNN features encode different levels of appearance\/semantic information, and contribute differently to different edge types. Specifically, bottom-layer feature maps $res_{1-3}$ focus more on low-level cues (\emph{e.g.}, color, texture and brightness), while top-layer maps $res_{4-5}$ favor object-aware information. Thus it is beneficial to capture multi-level spatial responses from different layers of feature maps. Given multiple feature maps $res_{1-5}$, we obtain the spatial response maps:\n\begin{equation}\n f_{sp}^k = \psi^{k}_{\rm sp}(res_k), \quad k \in \{1,2,3,4,5\}\n\end{equation}\nwhere the spatial responses $f_{sp}^k \in \mathbb{R}^{2 \times W \times H}$ are learned by the Spatial Layer $\psi^{k}_{\rm sp}$, which is composed of one convolution layer and one deconvolution layer.\n\n\n\vspace{-3mm}\n\paragraph{Stage \uppercase\expandafter{\romannumeral2}: Preparing Distinctive Features for REs\/IEs and NEs\/DEs.} Afterwards, RINDNet learns particular features for each edge type separately by the corresponding decoder in stage \uppercase\expandafter{\romannumeral2}. 
Inspired by \\cite{lu2019ofnet}, we design the Decoder with two streams to recover fine location information, as shown in Fig.~\\ref{fig:fig6} (b). Two-stream decoder can work collaboratively and learn more powerful features from different views in the proposed architecture. Although four decoders have the same structure, some special designs are proposed for different types of edges, and we will give the detailed descriptions below. To distinguish each type of edges reasonably and better depict our work, we next cluster the four edge types into two groups, \\emph{i.e.}} \\def\\Ie{\\emph{I.e.}, REs\/IEs and NEs\/DEs, to prepare features for them respectively.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.9\\linewidth, height=.7\\linewidth]{components.pdf}\n\\caption{The architectures of (a) Weight Layer, (b) Decoder and (c) Attention Module. $\\odot$ is the element-wise multiplication.}\n\\label{fig:fig6}\n\\vspace{-13pt}\n\\end{figure}\n\n\n\\textbf{\\textit{REs and IEs.}} In practice, the low-level features (\\emph{e.g.}} \\def\\Eg{\\emph{E.g.}, $res_{1-3}$) capture detailed intensity changes that are often reflected in REs and IEs. Besides, REs and IEs are related to the global context and surrounding objects provided by the high-level features (\\emph{e.g.}} \\def\\Eg{\\emph{E.g.}, $res_{5}$). Thus, it is desirable that semantic hints may give the felicitous guidance to aware the intensity changes, before forwarding to the Decoder. Moreover, it is notable that simply concatenating the low-level and high-level features may be too computationally expensive due to the increased number of parameters. We therefore propose the Weight Layer (WL) to adaptively fuse the low-level features and high-level hints in a learnable manner, without increasing the dimension of the features. 
\n\nAs shown in Fig.~\\ref{fig:fig6} (a), WL contains two paths: the first path receives high-level feature $res_{5}$ to recover high resolution through a deconvolution layer, and then two $3\\times3$ convolution layers with Batch Normalization (BN) and ReLU excavate adaptive semantic hints; another path is implemented as two convolution layers with BN and ReLU, which encodes low-level features $res_{1-3}$. Afterwards, they are fused by element-wise multiplication. Formally, given the low-level features $res_{1-3}$ and high-level hints $res_{5}$, we generate the fusion features for REs and IEs individually,\n\\begin{equation}\n\\begin{array}{ll}\n g^{r}= \\psi^{r}_{\\rm wl} \\big(res_5, [res_1,res_2,\\rm up (res_3)]\\big), \\\\\n g^{i}= \\psi^{i}_{\\rm wl} \\big(res_5, [res_1,res_2,\\rm up (res_3)]\\big),\n\\end{array}\n\\end{equation}\nwhere the WL of REs and IEs are indicated as $\\psi^{r}_{\\rm wl}$ and $\\psi^{i}_{\\rm wl}$ respectively, $g^{r}$\/$g^{i}$ are the fusion features for REs\/IEs, and $[\\cdot]$ is the concatenation. 
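The gating performed by the WL can be illustrated with a minimal sketch (plain Python on toy 1-D "feature maps"; the real layer operates on convolutional feature tensors and learns its gates):

```python
# Toy illustration (not the actual network code) of the Weight Layer idea:
# semantic hints derived from res5 gate the low-level features by
# element-wise multiplication, so the fused output keeps the same
# dimension as the low-level input, unlike concatenation.

def weight_layer(low_feats, semantic_hints):
    """Element-wise gating; both inputs share the same (toy) spatial size."""
    assert len(low_feats) == len(semantic_hints)
    return [f * h for f, h in zip(low_feats, semantic_hints)]

low  = [0.2, 0.8, 0.5]    # detailed intensity responses (fused res1-3)
hint = [0.0, 1.5, 1.0]    # learned semantic gates from res5
print(weight_layer(low, hint))  # positions with low hints are suppressed
```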
Note that, the resolution of $res_{3}$ is smaller than $res_{1}$ and $res_{2}$, so one up-sampling operation $\\rm up(\\cdot)$ is used on $res_{3}$ to increase resolution before feature concatenation.\nNext, the fusion features are fed into the corresponding Decoder to generate specific features with accurate location information separately for IEs and REs,\n\\begin{equation}\n\\begin{array}{ll}\n f^{r} = \\psi^{r}_{\\rm deco}(g^{r}) , &\n f^{i} = \\psi^{i}_{\\rm deco}(g^{i}) , \\\\\n\\end{array}\n\\end{equation}\nwhere $\\psi^{r}_{\\rm deco}$ and $\\psi^{i}_{\\rm deco}$ indicate Decode of REs and IEs respectively, and $f^{r}$\/$f^{i}$ are decoded feature maps for REs\/IEs.\n\n\n\\textbf{\\textit{NEs and DEs.}} Since the high-level features (\\emph{e.g.}} \\def\\Eg{\\emph{E.g.}, $res_5$) express strong semantic responses that are usually epitomized in NEs and DEs, we utilize $res_5$ to obtain the particular features for NEs and DEs,\n\\begin{equation}\n\\begin{array}{ll}\n f^{n} = \\psi^{n}_{\\rm deco}(res_5) , &\n f^{d} = \\psi^{d}_{\\rm deco}(res_5) ,\\\\\n\\end{array}\n\\end{equation}\nwhere NE-Decode and DE-Decoder are denoted as $\\psi^{n}_{\\rm deco}$ and $\\psi^{d}_{\\rm deco}$ respectively, and $f^{n}$\/$f^{d}$ are the decoded features of NEs\/DEs. Since DEs and NEs commonly share some relevant geometry cues, we share the weights of the second stream of NE-Decoder and DE-Decoder to learn the collaborative geometry cues. At the same time, the first stream of NE-Decoder and DE-Decoder is responsible for learning particular features for REs and DEs, respectively.\n\n\\vspace{-3mm}\n\\paragraph{Stage \\uppercase\\expandafter{\\romannumeral3}: Generating Initial Results.}\nWe predict the initial results for each type of edges by the respective decision head in final stage. The features from previous stages, containing rich location information of edges, can be used to predict edges. 
Specifically, we concatenate the decoded features $f^{r}$\/$f^{i}$ with spatial cues $f_{sp}^{1-3}$ to predict REs\/IEs,\n\\begin{equation}\n\\begin{array}{ll}\n O^{r} = \\psi^{r}_{\\rm h}\\big([f^{r}, f_{sp}^{1-3}]\\big) , & \n O^{i} = \\psi^{i}_{\\rm h}\\big([f^{i}, f_{sp}^{1-3}]\\big) , \n\\end{array}\n\\end{equation}\nwhere $O^{r}$\/$O^{i}$ are the initial predictions of REs\/IEs. The decision heads of REs and IEs, named $\\psi^{r}_{\\rm h}$ and $\\psi^{i}_{\\rm h}$ respectively, are modeled as a $3\\times3$ convolution layer and a $1\\times1$ convolution layer. Note that REs and IEs do not directly rely on the location cues provided by top-layer, thus spatial cues $f_{sp}^{4-5}$ are not used for them. By contrast, all spatial cues $f_{sp}^{1-5}$ are concatenated with the decoded features to generate initial results for NEs and DEs, respectively,\n\\begin{equation}\n\\begin{array}{ll}\n O^{n} = \\psi^{n}_{\\rm h}\\big([f^{n}, f_{sp}^{1-5}]\\big) , &\n O^{d} = \\psi^{d}_{\\rm h}\\big([f^{d}, f_{sp}^{1-5}]\\big) , \n\\end{array}\n\\end{equation}\nwhere $\\psi^{n}_{\\rm h}$ and $\\psi^{d}_{\\rm h}$ respectively indicate the decision heads of NEs and DEs, which are composed of three $1 \\times 1$ convolutional layers to integrate hints at each position. 
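Returning to the NE-/DE-Decoders of stage II, the weight sharing between their second streams can be illustrated schematically (plain Python; the `Stream` objects are hypothetical stand-ins for stacks of convolution layers, not the actual implementation):

```python
# Schematic sketch of the weight-sharing design: NE-Decoder and DE-Decoder
# each keep a private first stream but hold a reference to one shared second
# stream, so geometry cues learned through it benefit both edge types.

class Stream:
    def __init__(self, name):
        self.name = name              # stands in for a stack of conv layers
    def __call__(self, feat):
        return f"{self.name}({feat})"

class TwoStreamDecoder:
    def __init__(self, private_stream, shared_stream):
        self.private = private_stream
        self.shared = shared_stream
    def __call__(self, feat):
        # both streams process the same input; their outputs are fused later
        return (self.private(feat), self.shared(feat))

shared = Stream("geom")               # one object, shared by reference
ne_decoder = TwoStreamDecoder(Stream("ne"), shared)
de_decoder = TwoStreamDecoder(Stream("de"), shared)

assert ne_decoder.shared is de_decoder.shared  # same "parameters" by construction
print(ne_decoder("res5"), de_decoder("res5"))
```

In a deep-learning framework the same effect is obtained by passing one module instance to both decoders, so gradients from both edge types update the shared parameters.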
In summary, $\\mathcal{O}=\\{O^{r},O^{i},O^{n},O^{d}\\}$ denotes the initial result set.\n\n\n\\begin{table*}\n\\caption{Quantitative comparison for REs, IEs, NEs, DEs and Average (best viewed in color: ``\\textcolor{red}{\\bf{red}}'' for best, and ``\\textcolor{blue}{\\bf{blue}}'' for second best).}\n \\centering\n \\small\n \\renewcommand\\tabcolsep{3.5pt}\n \\renewcommand\\arraystretch{0.9}\n \\begin{tabular}{|l|ccc|ccc|ccc|ccc|ccc|}\n \\hline\n \\multirow{2}{*}{Method}\n &\\multicolumn{3}{c|}{Reflectance} & \\multicolumn{3}{c|}{Illumination} & \\multicolumn{3}{c|}{Normal} &\\multicolumn{3}{c|}{Depth} &\\multicolumn{3}{c|}{Average}\\\\ \n \\cline{2-16}\n & ODS & OIS & AP & ODS & OIS & AP & ODS & OIS & AP & ODS & OIS & AP & ODS & OIS & AP\\\\\n \\hline\n HED~\\cite{xie2015hed} & 0.412 & 0.466 & 0.343 & 0.256 & 0.290 & 0.167 & 0.457 & 0.505 & \\bf{\\textcolor{blue}{0.395}} & 0.644 & 0.679 & 0.667 & 0.442 & 0.485 & \\bf{\\textcolor{blue}{0.393}}\\\\\n CED~\\cite{wang2017ced} & 0.429 & 0.473 & 0.361\t & 0.228 & 0.286 & 0.118 & 0.463 & 0.501 & 0.372 & 0.626\t& 0.655\t& 0.620 & 0.437 & 0.479 & 0.368\\\\\n RCF~\\cite{liu2017rcf} & 0.429 & 0.448 & 0.351 & 0.257 & 0.283 & \\bf{\\textcolor{red}{0.173}} & 0.444 & 0.503 & 0.362 & 0.648 & 0.679 & 0.659 & 0.445 & 0.478 & 0.386\\\\\n BDCN~\\cite{he2019bdcn} & 0.358 & 0.458 & 0.252 & 0.151 & 0.219 & 0.078 & 0.427 & 0.484 & 0.334 & 0.628 & 0.661 & 0.581 & 0.391 & 0.456 & 0.311\\\\\n DexiNed~\\cite{poma2020dexined} & 0.402 & 0.454 & 0.315\t & 0.157 & 0.199 & 0.082 & 0.444 & 0.486 & 0.364 & 0.637 & 0.673 & 0.645 & 0.410 & 0.453 & 0.352\\\\\n CASENet~\\cite{yu2017casenet} & 0.384 & 0.439 & 0.275 & 0.230 & 0.273 & 0.119 & 0.434 & 0.477 & 0.327 & 0.621 & 0.651 & 0.574 & 0.417 & 0.460 & 0.324\\\\\n DFF~\\cite{dff19} & \\bf{\\textcolor{blue}{0.447}} & 0.495 & 0.324 & \\bf{\\textcolor{red}{0.290}} & \\bf{\\textcolor{red}{0.337}} & 0.151 & \\bf{\\textcolor{blue}{0.479}} & \\bf{\\textcolor{blue}{0.512}} & 0.352 & 
\\bf{\\textcolor{blue}{0.674}} & \\bf{\\textcolor{blue}{0.699}} & 0.626 & \\bf{\\textcolor{blue}{0.473}} & \\bf{\\textcolor{blue}{0.511}} & 0.363\\\\\n *DeepLabV3+~\\cite{chen2018deeplabv3} & 0.297 & 0.338 & 0.165 & 0.103 & 0.150 & 0.049 & 0.366 & 0.398 & 0.232 & 0.535 & 0.579 & 0.449 & 0.325 & 0.366 & 0.224\\\\\n *DOOBNet~\\cite{wang2018doobnet} & 0.431 & 0.489 & 0.370 & 0.143 & 0.210 & 0.069 & 0.442 & 0.490 & 0.339 & 0.658 & 0.689 & 0.662 & 0.419 & 0.470 & 0.360\\\\\n *OFNet~\\cite{lu2019ofnet} & 0.446 & 0.483 & \\bf{\\textcolor{blue}{0.375}} & 0.147 & 0.207 & 0.071 & 0.439 & 0.478 & 0.325 & 0.656 & 0.683 & \\bf{\\textcolor{blue}{0.668}} & 0.422 & 0.463 & 0.360\\\\\n \\hline\n DeepLabV3+~\\cite{chen2018deeplabv3} & 0.444 & 0.487 & 0.356 & 0.241 & \\bf{\\textcolor{blue}{0.291}} & 0.148 & 0.456 & 0.495 & 0.368 & 0.644 &0.671 & 0.617 & 0.446 & 0.486 & 0.372\\\\\n DOOBNet~\\cite{wang2018doobnet} & 0.446 & \\bf{\\textcolor{blue}{0.503}} & 0.355 & 0.228 & 0.272 & 0.132 & 0.465 & 0.499 & 0.373 & 0.661 & 0.691 & 0.643 & 0.450 & 0.491 & 0.376\\\\\n OFNet~\\cite{lu2019ofnet} & 0.437 & 0.483 & 0.351 & 0.247 & 0.277 & 0.150 & 0.468 & 0.498 & 0.382 & 0.661 & 0.687 & 0.637 & 0.453 & 0.486 & 0.380\\\\\n \n \\hline\n RINDNet (Ours) & \\bf{\\textcolor{red}{0.478}} & \\bf{\\textcolor{red}{0.521}} & \\bf{\\textcolor{red}{0.414}} &\\bf{\\textcolor{blue}{0.280}} &\\bf{\\textcolor{red}{0.337}} &\\bf{\\textcolor{blue}{0.168}} &\\bf{\\textcolor{red}{0.489}} &\\bf{\\textcolor{red}{0.522}} &\\bf{\\textcolor{red}{0.440}} &\\bf{\\textcolor{red}{0.697}} &\\bf{\\textcolor{red}{0.724}} &\\bf{\\textcolor{red}{0.705}} &\\bf{\\textcolor{red}{0.486}} &\\bf{\\textcolor{red}{0.526}} &\\bf{\\textcolor{red}{0.432}}\\\\\n \\hline\n \\end{tabular}\n \\label{tab:tab1}\n\\vspace{-8pt}\n\\end{table*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.99\\linewidth, height=.20\\linewidth\n]{eval.pdf}\n\\caption{Evaluation results on BSDS-RIND for (a) REs, (b) IEs, (c) NEs, (d) DEs and (e) 
generic edges.}\n\label{fig:fig3}\n\vspace{-8pt}\n\end{figure*}\n\n\vspace{-3mm}\n\paragraph{Attention Module.} Finally, RINDNet integrates the initial results with attention maps obtained by the Attention Module (AM) to generate the final results. Since different types of edges are reflected in different locations, when predicting each type of edges, it is necessary to pay more attention to the related locations. Fortunately, the edge annotations provide the label of each location. Accordingly, the proposed AM can infer the spatial relationships between multiple labels with pixel-wise supervision by the attention mechanism. The attention maps can be used to activate the responses of the related locations. Formally, given the input image $X$, the AM learns spatial attention maps,\n\begin{equation} \label{eq:eq4.4}\n \mathcal{A} =\{A^{b},A^{r},A^{i},A^{n},A^{d}\} ={\rm softmax} \big(\psi_{\rm att}(X) \big),\\\n\end{equation}\nwhere $\mathcal{A}$ denotes the attention maps normalized by a softmax function, and $A^{b},A^{r},A^{i},A^{n},A^{d} \in [0, 1]^{W \times H}$ are the attention maps corresponding to background, REs, IEs, NEs and DEs respectively. Obviously, if a pixel is tagged with a label, its location should be assigned a higher attention value. The AM $\psi_{\rm att}$ is implemented as the first building block of ResNet, four $3 \times 3$ convolution layers (each layer is followed by ReLU and BN operations), and one $1 \times 1$ convolution layer, as shown in Fig. \ref{fig:fig6} (c). 
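The per-pixel normalization in the softmax equation above can be sketched as follows (plain Python for a single pixel with five channel scores; the score values are toy numbers, not from the paper):

```python
import math

# Minimal sketch of the per-pixel softmax over the five attention channels
# [background, RE, IE, NE, DE]: the normalized values lie in [0, 1] and
# sum to one at each location.

def pixel_softmax(scores):
    """scores: list of 5 raw values [b, r, i, n, d] for one pixel."""
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

att = pixel_softmax([0.0, 2.0, 0.0, 0.0, 0.0])
print(att)                               # the RE channel dominates
assert abs(sum(att) - 1.0) < 1e-9
```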
\n\nFinally, the initial results are integrated with the attention maps to generate the final results $\\mathcal{Y}$,\n\\begin{equation} \n \\mathcal{Y} = {\\rm sigmoid}\\big(\\mathcal{O} \\odot (1 + A^{\\{r,i,n,d\\}})\\big),\n\\end{equation}\nwhere $\\odot$ is the element-wise multiplication.\n\n\n\n\\subsection{Loss Function}\n\\label{4.2}\n\\paragraph{Edge Loss.}\nWe use the loss function presented in \\cite{wang2018doobnet} to supervise the training of our edge predictions:\n\\begin{equation}\n\\mathcal{L}_{\\rm e}(\\mathcal{Y},\\mathcal{E})= \\sum_{k \\in {\\left\\{r,i,n,d\\right\\}}} \\ell_{\\rm e} \\left (Y^{k},E^{k} \\right),\n\\end{equation}\n\\begin{equation}\\label{eq10}\n\\begin{split}\\ell_{\\rm e} \\left (Y,E \\right) & = - \\sum_{i,j} \\Big( E_{i,j}\\alpha_{1}\\beta^{(1-Y_{i,j})^{\\gamma_{1}}}{\\rm log}(Y_{i,j}) \\\\\n & + (1-E_{i,j})(1-\\alpha_{1})\\beta^{Y^{\\gamma_{1}}}{\\rm log}(1-Y_{i,j}) \\Big), \n\\end{split}\n\\end{equation}\nwhere $\\mathcal{Y}=\\{Y^{r}, Y^{i}, Y^{n}, Y^{d}\\}$ is the final prediction, $\\mathcal{E} = \\{E^{r},E^{i},E^{n},E^{d}\\}$ is the corresponding ground-truth label, and $E_{i,j}$\/$Y_{i,j}$ are the $(i,j)^{th}$ element of matrix $E$\/$Y$ respectively.\nMoreover, $\\alpha_{1}=|E_{-}|\/|E|$ and $1-\\alpha_{1}=|E_{+}|\/|E|$, where $E_{-}$ and $E_{+}$ denote the non-edge and edge ground truth label sets, respectively. In addition, $\\gamma_{1}$ and $\\beta$ are the hyperparameters. We drop the superscript $k$ in Eq. \\ref{eq10} and Eq. \\ref{eq13} for simplicity.\n\n\n\\vspace{-5mm}\n\\paragraph{Attention Module Loss.}\nSince the pixel-wise edge annotations provide spatial labels, it is easy to obtain the ground truth of attention.\nLet $\\mathcal{T} = \\{T^{b},T^{r},T^{i},T^{n},T^{d}\\}$ be the ground-truth label of attention, where $T^{b}$ specifies the non-edge pixels. 
$T^{b}_{i,j} = 1$ if the $(i,j)^{th}$ pixel is located on non-edge\/background, otherwise $T^{b}_{i,j} = 0$.\n$T^{r},T^{i},T^{n},T^{d}$ indicate attention labels of REs, IEs, NEs and DEs respectively, which are obtained from $\\mathcal{E} = \\{E^{r},E^{i},E^{n},E^{d}\\}$,\n\\begin{equation}\n\\label{eq:att_gt}\nT_{i,j}^k = \\left\\{\\begin{array}{ll}\n E_{i,j}^k, & {\\rm if} \\sum_{k} E_{i,j}^k =1, k\\in\\{r,i,n,d\\} \\\\\n 255, & {\\rm if} \\sum_{k} E_{i,j}^k > 1, k\\in\\{r,i,n,d\\} \\\\\n \\end{array} \\right.,\n\\end{equation}\nwhere $k$ denotes the type of edges, $T_{i,j}^k$ and $E_{i,j}^k$ indicate the attention label and edge label of the $(i,j)^{th}$ pixel, respectively. The attention label equals the edge label if a pixel is assigned only one type of edge label; otherwise it is tagged 255 and ignored during training. It should be noted that multi-labeled edges are used when training the four decision heads for each type of edges, and are only excluded when training the AM. The loss function $\\mathcal{L}_{\\rm att}$ of the AM is formulated as:\n\\begin{equation}\n\\mathcal{L}_{\\rm att}(\\mathcal{A},\\mathcal{T}) = \n\\sum_{k \\in {\\left\\{b,r,i,n,d\\right\\}}} \\ell_{\\rm foc} \\left (A^{k},T^{k} \\right),\n\\end{equation}\n\\begin{equation}\\label{eq13}\n\\begin{split}\n\\ell_{\\rm foc} &\\left (A,T \\right) = - \\sum_{i,j} \\big( T_{i,j}\\alpha_{2}(1-A_{i,j})^{\\gamma_{2}} {\\rm log}(A_{i,j}) \\\\\n& + (1-T_{i,j})(1-\\alpha_{2})A_{i,j}^{\\gamma_{2}} {\\rm log}(1-A_{i,j})\\big),\n\\end{split}\n\\end{equation}\nwhere $\\ell_{\\rm foc}$ indicates the Focal Loss \\cite{lin2017focal} and $\\mathcal{A}$ is the output of the Attention Module. 
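As a concrete illustration, the attention ground-truth construction above can be sketched in a few lines of NumPy (a minimal sketch with hypothetical names; we assume binary $\{0,1\}$ edge maps as input):

```python
import numpy as np

def attention_ground_truth(E):
    """Attention labels T^k from binary edge maps E^k (the rule above).

    E maps k in {'r','i','n','d'} to {0,1} arrays of shape (H, W).
    A pixel carrying exactly one edge label keeps that label; a pixel
    carrying several is tagged 255, to be ignored when training the AM.
    """
    keys = ('r', 'i', 'n', 'd')
    n_labels = sum(E[k] for k in keys)          # edge labels per pixel
    T = {k: E[k].astype(np.int32) for k in keys}
    for k in keys:
        T[k][n_labels > 1] = 255                # multi-labeled: ignore
    T_b = (n_labels == 0).astype(np.int32)      # background map T^b
    return T_b, T
```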
Note that $\\alpha_{2}$ and $\\gamma_{2}$ are a balancing weight and a focusing parameter, respectively.\n\n\\textbf{Total Loss.} Finally, we optimize RINDNet by minimizing the total loss defined as:\n\\begin{equation}\n\\label{eq:eq4}\n\\mathcal{L} = \\lambda \\mathcal{L}_{\\rm e} + (1-\\lambda) \\mathcal{L}_{\\rm att} ,\n\\end{equation}\nwhere $\\lambda$ is the weight for balancing the two losses.\n\n\n\n\\begin{table*}\n \\caption{Ablation study to verify the effectiveness of each component in our proposed RINDNet.}\n \\label{tab:wlam}\n \\centering\n \\small\n \\renewcommand\\tabcolsep{3.7pt}\n \\renewcommand\\arraystretch{0.9}\n \\begin{tabular}{|l|ccc|ccc|ccc|ccc|ccc|}\n \\hline\n \\multirow{2}{*}{Method}\n &\\multicolumn{3}{c|}{Reflectance} & \\multicolumn{3}{c|}{Illumination} & \\multicolumn{3}{c|}{Normal} &\\multicolumn{3}{c|}{Depth} &\\multicolumn{3}{c|}{Average}\\\\ \n \\cline{2-16}\n & ODS & OIS & AP & ODS & OIS & AP & ODS & OIS & AP & ODS & OIS & AP & ODS & OIS & AP\\\\\n \\hline\n Ours & 0.478 & 0.521 & 0.414 & 0.280 & 0.337 & 0.168 & 0.489 & 0.522 & 0.440 & 0.697 & 0.724 & 0.705 & 0.486 & 0.526 & 0.432\\\\\n Ours w\/o WL & 0.422 & 0.468 & 0.357 & 0.280 & 0.321 & 0.180 & 0.476 & 0.515 & 0.425 & 0.693 & 0.713 & 0.700 & 0.468 & 0.504 & 0.416\\\\\n Ours w\/o AM & 0.443 & 0.494 & 0.338 & 0.268 & 0.327 & 0.139 & 0.473 & 0.506 & 0.378 & 0.670 & 0.699 & 0.649 & 0.464 & 0.507 & 0.376\\\\ \n Ours w\/o AM\\&WL & 0.409 & 0.460 & 0.316 & 0.277 & 0.331 & 0.178 & 0.471 & 0.507 & 0.389 & 0.677 & 0.707 & 0.662 & 0.459 & 0.501 & 0.386\\\\\n \\hline\n \\end{tabular}\n \\vspace{-13pt}\n\\end{table*}\n\n\n\\subsection{Training Details}\n\\label{4.3}\nOur network is implemented using PyTorch \\cite{paszke2017automatic} and finetuned from a ResNet-50 model pre-trained on ImageNet \\cite{DengDSLL009}. Specifically, we adopt the Stochastic Gradient Descent optimizer with momentum=$0.9$, initial learning rate=$10^{-5}$, and we decay it by the ``poly'' policy on every epoch. 
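The ``poly'' policy just mentioned is commonly implemented as $lr = lr_0 \cdot (1 - e/E)^{p}$; the exponent $p$ is not stated in the paper, so $p=0.9$ below is an assumption (the usual DeepLab-style choice):

```python
def poly_lr(base_lr, epoch, max_epoch, power=0.9):
    """'Poly' learning-rate decay, applied once per epoch.

    `power` is an assumption; the paper only names the policy,
    not the exponent.
    """
    return base_lr * (1.0 - epoch / max_epoch) ** power
```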
We train the model for $70$ epochs on one GPU with a batch size of $4$. Moreover, we set $\\beta=4$ and $\\gamma_{1}=0.5$ for $\\mathcal{L}_{\\rm e}$; $\\alpha_{2}=0.5$ and $\\gamma_{2}=2$ for $\\mathcal{L}_{\\rm att}$; and $\\lambda=0.1$ for total loss. In addition, following \\cite{wang2018doobnet}, we augment our dataset by rotating each image by four different angles of $\\{0,90,180,270\\}$ degrees. Each image is randomly cropped to $320\\times320$ during training while retaining the original size during testing.\n\n\n\\section{Experiment Evaluation}\n\\label{5}\nWe compare our model with $10$ state-of-the-art edge detectors. HED \\cite{xie2015hed}, RCF \\cite{liu2017rcf}, CED \\cite{wang2017ced}, DexiNed \\cite{poma2020dexined}, and BDCN \\cite{he2019bdcn} exhibit excellent performance in general edge detection; DeepLabV3+ \\cite{chen2018deeplabv3}, CASENet \\cite{yu2017casenet} and DFF \\cite{dff19} show outstanding accuracy on semantic edge detection; DOOBNet \\cite{wang2018doobnet} and OFNet \\cite{lu2019ofnet} yield competitive results for occlusion edge detection. \nAll models are trained on $300$ training images and evaluated on $200$ test images. In addition, more qualitative results are provided in the supplementary material. \nWe evaluate these models with three metrics introduced by \\cite{arbelaez2010bsds}: fixed contour threshold (ODS), per-image best threshold (OIS), and average precision (AP). Moreover, a non-maximum suppression \\cite{canny1986computational} is performed on the predicted edge maps before evaluation.\n\n\n\n\\subsection{Experiments on Four Types of Edges}\n\\vspace{-2mm}\n\n\\paragraph{Comparison with State of the Arts.} To adapt existing detectors for four edge types simultaneously, they are modified in two ways: (1) The output $Y \\!\\in\\! \\{0,1\\}^{W \\times H}$ is changed to $\\mathcal{Y} \\!\\in\\! \\{0,1\\}^{4 \\times W \\times H}$. 
In particular, for DeepLabV3+ focusing on segmentation, the output layer of DeepLabV3+ is replaced by an edge path (same as DOOBNet~\\cite{wang2018doobnet} and OFNet~\\cite{lu2019ofnet}, containing a sequence of four $3\\!\\times\\!3$ convolution blocks and one $1\\!\\times\\!1$ convolution layer) to predict edge maps. As shown in Table \\ref{tab:tab1}, the ten compared models are denoted as HED, RCF, CED, DexiNed, BDCN, CASENet, DFF, *DeepLabV3+, *DOOBNet and *OFNet, respectively. (2) DeepLabV3+, DOOBNet and OFNet only provide one edge prediction branch without a structure suitable for multi-class predictions, thus we apply a second modification: the last edge prediction branch is expanded to four, and each branch predicts one type of edges. This modification is similar to the prediction approach of our model and aims to explore the capabilities of these models. The resulting models are denoted as DeepLabV3+, DOOBNet, and OFNet, respectively.\n\n\nTable~\\ref{tab:tab1} and Fig.~\\ref{fig:fig3} present the F-measure of the four types of edges and their averages. We observe that the proposed RINDNet outperforms other detectors over most metrics across the dataset. Essentially, \\cite{xie2015hed,liu2017rcf,wang2017ced,he2019bdcn,poma2020dexined,wang2018doobnet,lu2019ofnet} are designed for generic edge detection, so the specific features of different edges are not fully explored. Even if we extend the prediction branches of OFNet~\\cite{lu2019ofnet}, DOOBNet~\\cite{wang2018doobnet} and DeepLabV3+~\\cite{chen2018deeplabv3} to four for learning specific features respectively, the results are still unsatisfactory. DFF~\\cite{dff19} can learn the specific cues to a certain extent by introducing the dynamic feature fusion strategy, but its performance is still limited. 
On the contrary, our proposed RINDNet achieves promising results by extracting the corresponding distinctive features based on different edge attributes.\n\n\n\\begin{table}\n\\caption{Ablation study on the choices of features or spatial cues from different layers for the proposed RINDNet.}\n\\centering\n\\small\n\\renewcommand\\tabcolsep{4.9pt}\n\\renewcommand\\arraystretch{1.0}\n\\begin{tabular}{|c|c|c|ccc|}\n \\hline\n \\multirow{2}{*}{Reference} & \\multirow{2}{*}{RE\\&IE} & \\multirow{2}{*}{NE\\&DE} &\\multicolumn{3}{c|}{Average} \\\\\n \\cline{4-6}\n & & & ODS & OIS & AP\\\\\n \\hline\n \\hline\n \\multirow{4}{*}{\\makecell[c]{ Different-Layer \\\\ Features \\\\}} \n & $res_{1-3}$ & $res_{5}$ & 0.486 & 0.526 & 0.432\\\\\n & $res_{1-3}$ & $res_{1-3}$ & 0.467 & 0.499 & 0.422\\\\\n & $res_{5}$ & $res_{1-3}$ & 0.452 & 0.482 & 0.381\\\\ \n & $res_{5}$ & $res_{5}$ & 0.464 & 0.489 & 0.396\\\\\n \\hline\n \\hline\n \\multirow{4}{*}{Spatial Cues} \n & $f_{sp}^{1-3}$ & $f_{sp}^{1-5}$ & 0.486 & 0.526 & 0.432\\\\\n & $f_{sp}^{1-3}$ & $f_{sp}^{1-3}$ & 0.472 & 0.504 & 0.416\\\\\n & $f_{sp}^{1-5}$ & $f_{sp}^{1-5}$ & 0.478 & 0.512 & 0.418\\\\\n & w\/o $f_{sp}$ & w\/o $f_{sp}$ & 0.478 & 0.516 & 0.420\\\\\n \\hline\n\\end{tabular}\n\\label{tab:sp}\n\\vspace{-8pt}\n\\end{table}\n\n\\begin{table}\n\\caption{Ablation study to verify the effectiveness of Decoder for the proposed RINDNet. 
SW refers to ``Share Weight'', $1^{st}$ and $2^{nd}$ refer to the first stream and the second stream in Decoder.}\n\\centering\n\\small\n\\renewcommand\\tabcolsep{5.0pt}\n\\renewcommand\\arraystretch{1.1}\n\\begin{tabular}{|cc|cc|ccc|}\n \\hline\n \\multicolumn{2}{|c|}{RE\\&IE-Decoder} &\\multicolumn{2}{c|}{NE\\&DE-Decoder} &\\multicolumn{3}{c|}{Average}\\\\\n \\cline{1-7}\n $1^{st}$ & $2^{nd}$ & $1^{st}$ & $2^{nd}$ & ODS & OIS & AP\\\\\n \\hline\n $\\surd$ & $\\surd$ & $\\surd$ & w SW & \\bf{0.486} & \\bf{0.526} & \\bf{0.432}\\\\\n $\\surd$ & $\\times$ & $\\surd$ & $\\times$ & 0.457 & 0.492 & 0.398\\\\\n $\\surd$ & $\\times$ & $\\surd$ & w SW & 0.476 & 0.514 & 0.415\\\\\n $\\surd$ & $\\surd$ & $\\surd$ & w\/o SW & 0.474 & 0.517 & 0.408\\\\\n \\hline\n\\end{tabular}\n\\label{tab:decoder}\n\\vspace{-13pt}\n\\end{table}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.975\\linewidth]{test.pdf}\n\\caption{Qualitative comparison for (a) Reflectance Edges, (b) Illumination Edges, (c) Normal Edges, (d) Depth Edges and (e) Generic Edges (best viewed in color: ``\\textcolor{green2}{\\textbf{green}}'' for true positive, ``\\textcolor{red}{\\textbf{red}}'' for false negative (missing edges), and ``\\textcolor{yellow}{\\textbf{yellow}}'' for false positive). We provide visualization results of the top 6 scores in this figure, more results can be found in the \\textbf{supplementary material}.}\n\\label{fig:fig4}\n\\vspace{-13pt}\n\\end{figure*}\n\n\\vspace{-3mm}\n\\paragraph{Ablation Study.}\nWe first conduct the ablation study to verify the role of Weight Layer (WL) and Attention Module (AM) in RINDNet. In the experiments, each module is removed separately or together to construct multiple variants for evaluation, as shown in Table~\\ref{tab:wlam} (rows 2 -- 4). Intuitively, WL plays a significant role for REs and IEs. 
Especially for REs, in terms of ODS, OIS and AP, the WL improves the performance significantly from $42.2\\%$, $46.8\\%$ and $35.7\\%$ to $47.8\\%$, $52.1\\%$ and $41.4\\%$, respectively. Besides, with AM, RINDNet achieves noticeable improvements for all types of edges, as shown in row 1 and row 3 of Table~\\ref{tab:wlam}. This illustrates the effectiveness of the proposed AM for capturing the distinctions between different edges. Overall, the cooperation of WL and AM allows RINDNet to successfully capture the specific features of each type of edges and thus delivers a remarkable performance gain.\n\nNext, we perform an experiment to verify the effectiveness of different-layer features for detecting REs\/IEs and NEs\/DEs. As shown in Table~\\ref{tab:sp} (rows 1 -- 4), the combination based on edge attributes (row 1) performs better than the other choices (rows 3 -- 4). Besides, we also explore the impact of choosing different spatial cues for REs\/IEs and NEs\/DEs, and report the quantitative results in Table~\\ref{tab:sp} (rows 5 -- 7). Similarly, there are average performance drops for the other combinations of spatial cues (rows 6 -- 7). Our choice (row 5) is the combination that achieves the best performance.\n\nWe also design careful ablation experiments (Table \\ref{tab:decoder}) to study the effectiveness of each stream in Decoder. The two-stream design combined with share weight (referred to as SW) for NEs\/DEs performs better (row 1) than using either stream separately (rows 3 -- 4). More detailed results are provided in the supplementary material.\n\n\n\\subsection{Experiments on Generic Edges}\n\nTo fully examine the performance of RINDNet, we modify and test it in the generic-edge setting (the ground truths of the four types of edges are merged into one ground-truth edge map). 
The outputs of four decision heads are combined and fed into a final decision head $\\psi_{\\rm e}$ (one $1\\times1$ convolution layer) to predict generic edges: $P=\\psi_{\\rm e}\\big([Y^{r}, Y^{i}, Y^{n}, Y^{d}]\\big)$. Moreover, the original ground-truth labels of Attention Module (AM) are unavailable in this setting. Thus we take ground-truth labels of generic edges as the supervisions of AM, so that AM could capture the location of generic edges.\n\n\\begin{table}\n\\caption{Comparison of generic edge detection on \\textbf{BSDS-RIND}.\n}\n\\centering\n\\small\n\\renewcommand\\tabcolsep{14pt}\n\\renewcommand\\arraystretch{0.9}\n\\begin{tabular}{|l|ccc|}\n \\hline\n Method & ODS & OIS & AP \\\\\n \\hline\n HED~\\cite{xie2015hed} & 0.786 & 0.805 &\\bf{\\textcolor{red}{0.834}} \\\\\n CED~\\cite{wang2017ced} & \\bf{\\textcolor{red}{0.801}} & \\bf{\\textcolor{red}{0.814}} & \\bf{\\textcolor{blue}{0.824}} \\\\\n RCF~\\cite{liu2017rcf} & 0.771 & 0.791 & 0.800 \\\\\n BDCN~\\cite{he2019bdcn} & 0.789 & 0.803 & 0.757 \\\\\n DexiNed~\\cite{poma2020dexined} & 0.789\t& 0.805\t& 0.816 \\\\\n CASENet~\\cite{yu2017casenet} & 0.792 & 0.806 & 0.786 \\\\\n DFF~\\cite{dff19} & 0.794 & 0.806 & 0.767 \\\\\n DeepLabV3+~\\cite{chen2018deeplabv3} & 0.780 & 0.792 & 0.776 \\\\\n DOOBNet~\\cite{wang2018doobnet} & 0.790 & 0.805 & 0.809 \\\\\n OFNet~\\cite{lu2019ofnet} & 0.794 & 0.807 & 0.800 \\\\\n \\hline\n RINDNet (Ours) &\\bf{\\textcolor{blue}{0.800}} &\\bf{\\textcolor{blue}{0.811}} & 0.815 \\\\\n \\hline\n\\end{tabular}\n\\label{tab:generic}\n\\vspace{-12pt}\n\\end{table}\n\nWe report the quantitative results over generic edges in Table~\\ref{tab:generic} and Fig.~\\ref{fig:fig3} (e). Note that CED is pre-trained on HED-BSDS. In contrast, our model is trained from scratch and still achieves competitive results. This confirms the integration capability of our model, especially considering that RINDNet is not specially designed for generic edges. 
Some qualitative results on BSDS-RIND are shown in the last row in Fig. \\ref{fig:fig4}.\n\n\\section{Conclusions}\n\\label{6}\n\nIn this work, we study edge detection on four types of edges including \\textit{Reflectance Edges}, \\textit{Illumination Edges}, \\textit{Normal Edges} and \\textit{Depth Edges}. We propose a novel edge detector RINDNet that, for the first time, simultaneously detects all four types of edges. In addition, we contribute the first public benchmark with four types of edges carefully annotated. Experimental results illustrate that RINDNet yields promising results in comparison with state-of-the-art edge detection algorithms.\n\n\\vspace{-4mm}\n\\paragraph{Acknowledgements.}\nThis work is supported by Fundamental Research Funds for the Central Universities (2019JBZ104) and National Natural Science Foundation of China (61906013, 51827813). The work is partially done while Mengyang Pu was with Stony Brook University.\n\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nOne of the important observations in the LHC era is that the self-normalised production of hadrons increases faster than linearly as a function of the self-normalised charged-particle multiplicity \\cite{aliceHFhadrons}. This enhancement in high-multiplicity events is not fully understood, but calculations including multiple parton interactions (MPI) and colour reconnection (CR) are able to reproduce the trend. The measurement of the multiplicity-dependent production of the \\W boson, which is colourless, with associated hadron in proton--proton (pp) collisions can then provide further understanding of the MPI and CR mechanisms.\n\nIn heavy-ion collisions (HIC), electroweak bosons are valuable probes of the initial phase of the collision, on which a precise knowledge is required to disentangle QGP-induced phenomena from other nuclear effects. 
Being produced in hard processes, the \\W and \\Z bosons are highly sensitive to the initial state, especially to the modifications of the Parton Distribution Functions (PDF) in the nucleus with respect to those in the proton. The bosons decay before the typical time of creation of the QGP and can be measured via their leptonic decays, providing a final state insensitive to the strong force. The whole process is then medium-blind, carrying the information from the initial state to the detector where it can be recorded. \\\\\n\nThe measurements presented here were performed using data collected with the ALICE detector~\\cite{alice}. In pp collisions at \\thirteen, electrons from \\W-boson decays were measured at midrapidity, in the interval $|y| < 0.6$. Electrons coming from \\W-boson decays typically have a large transverse momentum (\\pt) and are well isolated from other particles. They were thus identified by considering electrons in the range $30 < \\pt^{\\rm e} < 60$~\\GeVc, then defining an isolation cone surrounding the electron in which the total energy is required to be below a certain threshold. The associated hadrons, produced together with \\W bosons, were detected away-side in azimuth with respect to the electron coming from the decay of the boson, applying a minimum \\pt selection of 5~\\GeVc.\n\nMeasurements in HIC were performed in proton--lead (p--Pb) collisions at \\eightnn and lead--lead (Pb--Pb) collisions at \\fivenn. The data were collected from muon-triggered events in the muon forward spectrometer, covering the rapidity interval $2.5 < y < 4$. The \\Z-boson candidates were reconstructed by combining high-\\pt muons ($\\pt^\\mu > 20$ \\GeVc) in pairs of opposite charge, considering the pairs with an invariant mass in the range $60 < m_{\\mu^+\\mu^-} < 120$ \\GeVmass. 
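The dimuon selection just described can be sketched as follows (our own illustration of the kinematics; variable names are hypothetical):

```python
import numpy as np

M_MU = 0.10566  # muon mass, GeV/c^2

def inv_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass (GeV/c^2) of a muon pair from (pt, eta, phi)."""
    px = pt1 * np.cos(phi1) + pt2 * np.cos(phi2)
    py = pt1 * np.sin(phi1) + pt2 * np.sin(phi2)
    pz = pt1 * np.sinh(eta1) + pt2 * np.sinh(eta2)
    e = (np.sqrt((pt1 * np.cosh(eta1)) ** 2 + M_MU ** 2)
         + np.sqrt((pt2 * np.cosh(eta2)) ** 2 + M_MU ** 2))
    return np.sqrt(max(e ** 2 - px ** 2 - py ** 2 - pz ** 2, 0.0))

def is_z_candidate(mu1, mu2):
    """Selection above: opposite charge, pt > 20 GeV/c, 60 < m < 120."""
    if mu1['q'] * mu2['q'] >= 0:
        return False
    if min(mu1['pt'], mu2['pt']) <= 20.0:
        return False
    m = inv_mass(mu1['pt'], mu1['eta'], mu1['phi'],
                 mu2['pt'], mu2['eta'], mu2['phi'])
    return 60.0 < m < 120.0
```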
The number of \\Wminus and \\Wplus bosons was extracted via a template fit to the single-muon \\pt distribution, accounting for the various contributions to the inclusive spectrum. As the low-\\pt region features a very low signal-to-background ratio, the measurements were performed on muons with \\pt > 10 \\GeVc.\n\nThe available measurements of electroweak bosons performed by the ALICE Collaboration at forward rapidity can be found in Refs.~\\cite{aliceWZ,aliceZ,aliceW}.\n\n\\section{Results}\n\\label{sec:results}\n\n\\subsection{Measurements in pp collisions}\n\\label{sec:pp}\n\nFigure~\\ref{fig:pp} shows the self-normalised multiplicity-dependent yield of electrons from \\W boson decays (combining \\Wminus and \\Wplus for increased precision), and that of hadrons associated with a \\W boson, measured in pp collisions at \\thirteen. The yield of electrons from \\W-boson decays is consistent with a linear increase as a function of the charged-particle multiplicity, while the yield of associated hadron shows a faster-than-linear increase. This measurement thus suggests a significant correlation of the associated hadron production with the event multiplicity, and the trend is well reproduced by PYTHIA~8~\\cite{pythia8} calculations including MPI and CR mechanisms. This analysis also constitutes the first measurement of electroweak bosons at midrapidity with ALICE, and will serve as reference for similar measurements in HIC.\n\n\\begin{figure}[h]\n \\centering\n \\sidecaption\n \\includegraphics[width=0.5\\linewidth,clip]{figures\/pp13tev_multiplicity.pdf}\n \\caption{Self-normalised yield of electron from \\W decays (red), and hadron associated with \\W bosons (blue), as a function of the charged-particle multiplicity. 
The coloured lines correspond to calculations with PYTHIA~8~\\cite{pythia8} including MPI and CR.}\n \\label{fig:pp}\n\\end{figure}\n\n\\subsection{Measurements in p--Pb collisions}\n\\label{sec:ppb}\n\nThe measurements of the \\Z and \\Wplus bosons as a function of rapidity in p--Pb collisions at \\eightnn are shown in the left and right panels of Fig.~\\ref{fig:ppb}, respectively. The larger number of events available for the \\Wplus boson allowed splitting the acceptance into several rapidity intervals. The \\Z-boson measurement is compared with predictions from the EPPS16~\\cite{epps16} and nCTEQ15~\\cite{ncteq15} nuclear PDFs (nPDFs), as well as predictions from the CT14~\\cite{ct14} free PDF accounting for the isospin effect but without nuclear modifications. The models are in good agreement with the measurement, although no strong conclusion can be derived on the nuclear modifications due to statistical limitations.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.43\\linewidth,clip]{figures\/pPb8tev_Z.pdf}\n \\includegraphics[width=0.55\\linewidth,clip]{figures\/pPb8tev_W_xSecWplusDiff.pdf}\n \\caption{\\textbf{Left}: cross section of dimuons from \\Z decays measured in p--Pb collisions at \\eightnn, compared with theoretical calculations with and without nuclear modifications of the PDF. \\textbf{Right}: cross section of muons from \\Wplus decays in the same collision system, compared with model predictions from various PDF and nPDF sets.}\n \\label{fig:ppb}\n\\end{figure}\n\nThe \\Wplus cross section shown in the right panel of Fig.~\\ref{fig:ppb} is compared with the same models, as well as two recent nPDF sets: nCTEQ15WZ~\\cite{ncteq15wz}, an improvement of nCTEQ15 with the addition of electroweak-boson measurements from the LHC for the nPDF determination; and nNNPDF2.0~\\cite{nnnpdf2}, a new model obtained following a methodology based on machine learning. 
The models including nuclear modifications are in agreement with one another, and provide a good description of the data. Calculations without nuclear modifications, on the contrary, overestimate the production at large positive rapidity. The resulting 3.5$\\sigma$ deviation is the strongest observation of nuclear modification of the PDF measured by the ALICE Collaboration from electroweak-boson measurements. This measurement helps constrain the nPDF models in the very low Bjorken-$x$ region (down to about 10$^{-4}$), where the available constraints are scarce. \\\\\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.46\\linewidth,clip]{figures\/ZPbPb5tev_IntegratedYield.pdf}\n \\includegraphics[width=0.52\\linewidth,clip]{figures\/PbPb5tev_W_yieldW_HGpythia.pdf}\n \\caption{\\textbf{Left}: \\avTaa-scaled yield of dimuons from \\Z-boson decays in Pb--Pb collisions at \\fivenn, compared with theoretical calculations with and without nuclear modifications of the PDF. \\textbf{Right}: \\avTaa-scaled yield of muons from \\W-boson decays as a function of centrality, in Pb--Pb at \\fivenn. The measured distribution is compared with HG-PYTHIA~\\cite{hg-pythia} predictions of the nuclear modification factor of hard scatterings, scaled to the measured value in 0--90\\% centrality.}\n \\label{fig:pbpb}\n\\end{figure}\n\n\\subsection{Measurements in Pb--Pb collisions}\n\\label{sec:pbpb}\n\nThe results obtained in Pb--Pb collisions at \\fivenn are illustrated in Fig.~\\ref{fig:pbpb}. In the left panel, the yield of dimuons from \\Z decays, scaled with the average nuclear overlap function \\avTaa, is compared with a previously published result obtained from the 2015 data only, as well as predictions with and without nuclear modifications. The combination of the 2015 and 2018 data periods for this new measurement improved the available integrated luminosity by a factor of three, allowing for a significant reduction of the uncertainty. 
The various nPDFs provide a good description of the measured yields, while a deviation by 3.4$\\sigma$ from the CT14-only calculation is observed, constituting another strong sign of nuclear modifications.\n\nIn the right panel of Fig.~\\ref{fig:pbpb}, the \\avTaa-scaled yield of muons from \\W decays is shown as a function of the collision centrality. It is compared with HG-PYTHIA~\\cite{hg-pythia} calculations of the nuclear modification factor of hard scatterings, scaled with the value measured in 0--90\\% centrality. The scaled calculations are in good agreement with the data. They predict a sizeable drop for the most peripheral collisions, but statistical limitations prevent the measurement from reaching sufficient granularity in this region, such that no experimental conclusion can be drawn. Nonetheless, this is the first measurement of \\W-boson production in Pb--Pb collisions at forward rapidity, where low values of Bjorken-$x$ are attained.\n\n\\section{Conclusion}\n\nThe recent measurements of electroweak-boson production demonstrate their usefulness for characterizing key features of QCD. The measurement at midrapidity in pp collisions is well described by calculations including MPI and CR. In heavy-ion collisions, the measurements are generally well described by nuclear models, while significant deviations from free-PDF calculations can be observed. These measurements can thus be used to help constrain the nPDF models.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nLet $\\frg$ be a finite-dimensional simple Lie algebra over $\\bbC$. Fix a Cartan subalgebra $\\frh$ of $\\frg$.\nThe associated root system is $\\Delta=\\Delta(\\frg, \\frh)\\subseteq\\frh_{\\bbR}^*$. 
Recall that a decomposition\n\\begin{equation}\\label{grading}\n\\frg=\\bigoplus_{i\\in \\bbZ}\\frg(i)\n\\end{equation}\nis a \\emph{$\\bbZ$-grading} of $\\frg$ if $[\\frg(i), \\frg(j)]\\subseteq \\frg(i+j)$ for any $i, j\\in\\bbZ$.\nIn particular, in such a case, $\\frg(0)$ is a Lie subalgebra of $\\frg$. Since each derivation of $\\frg$ is inner, there exists $h_0\\in\\frg(0)$ such that $\\frg(i)=\\{x\\in\\frg\\mid [h_0, x]=i x\\}$. The element $h_0$ is said to be \\emph{defining} for the grading \\eqref{grading}. Without loss of generality, one may assume that $h_0\\in\\frh$. Then $\\frh\\subseteq\\frg(0)$. Let $\\Delta(i)$ be the set of roots in $\\frg(i)$. Then we can\nchoose a set of positive roots $\\Delta(0)^+$ for $\\Delta(0)$ such that\n$$\n\\Delta^+ :=\\Delta(0)^+\\sqcup \\Delta(1)\\sqcup \\Delta(2)\\sqcup \\cdots\n$$\nis a set of positive roots of $\\Delta(\\frg, \\frh)$. Let $\\Pi$ be the\ncorresponding simple roots, and put $\\Pi(i)=\\Delta(i)\\cap \\Pi$. Note\nthat the grading \\eqref{grading} is fully determined by\n$\\Pi=\\bigsqcup_{i\\geq 0} \\Pi(i)$. We refer the reader to Ch.~3, \\S 3\nof \\cite{GOV} for generalities on gradings of Lie algebras. Each\n$\\Delta(i)$, $i\\geq 1$, inherits a poset structure from the usual\none of $\\Delta^+$. That is, let $\\alpha$ and $\\beta$ be two roots of\n$\\Delta(i)$, then $\\beta\\geq\\alpha$ if and only if $\\beta-\\alpha$ is\na nonnegative integer combination of simple roots.\n\n\n\n\n\n\nRecently, Panyushev initiated the study of the rich structure of\n$\\Delta(1)$ in \\cite{P}. In particular, he raised five\nconjectures concerning the $\\mathcal{M}$-polynomial,\n$\\mathcal{N}$-polynomial and the reverse operator of $\\Delta(1)$.\nNote that Conjectures 5.1, 5.2 and 5.12 there have been solved by\nWeng and the author \\cite{DW}. The current paper aims\nto handle conjecture 5.11 of \\cite{P}. 
Let us prepare more notation.\n\nRecall that a subset $I$ of a finite poset $(P, \\leq)$ is a\n\\emph{lower} (resp., \\emph{upper}) \\emph{ideal} if $x\\leq y$ in $P$\nand $y\\in I$ (resp. $x\\in I$) implies that $x\\in I$ (resp. $y\\in\nI$). We collect the lower ideals of $P$ as $J(P)$, which is\npartially ordered by inclusion. A subset $A$ of $(P, \\leq)$ is an\n\\emph{antichain} if any two elements in $A$ are non-comparable under\n$\\leq$. We collect the antichains of $P$ as $\\mathrm{An}(P)$. For\nany $x\\in P$, let $I_{\\leq x}=\\{y\\in P\\mid y\\leq x\\}$. Given an\nantichain $A$ of $P$, let $I(A)=\\bigcup_{a\\in A} I_{\\leq a}$. The\n\\emph{reverse operator} $\\mathfrak{X}$ is defined by\n$\\mathfrak{X}(A)=\\min (P\\setminus I(A))$. Since antichains of $P$\nare in bijection with lower (resp. upper) ideals of $P$, the reverse\noperator acts on lower (resp. upper) ideals of $P$ as well. Note\nthat the current $\\mathfrak{X}$ is inverse to the reverse operator\n$\\mathfrak{X}^{\\prime}$ in Definition 1 of \\cite{P}, see Lemma\n\\ref{lemma-inverse-reverse-operator}. Thus replacing $\\mathfrak{X}^{\\prime}$ by $\\mathfrak{X}$ does not affect our\nforthcoming discussion on orbits.\n\n\nWe say the $\\bbZ$-grading \\eqref{grading} is \\emph{extra-special} if\n\\begin{equation}\\label{extra-special}\n\\frg=\\frg(-2)\\oplus \\frg(-1) \\oplus \\frg(0) \\oplus \\frg(1)\n\\oplus \\frg(2) \\mbox{ and }\\dim\\frg(2)=1,\n\\end{equation}\nUp to conjugation, any simple Lie algebra $\\frg$ has a unique extra-special $\\bbZ$-grading. Without loss of generality, we assume that $\\Delta(2)=\\{\\theta\\}$ , where $\\theta$ is the highest root of $\\Delta^+$.\nNamely, we may assume that the grading \\eqref{extra-special} is defined by the element $\\theta^{\\vee}$, the dual root of $\\theta$. 
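As a small illustrative example (ours, for concreteness): for $\frg=\mathfrak{sl}_3$ one has $\theta=\alpha_1+\alpha_2$, so $(\alpha_i, \theta^{\vee})=1$ for $i=1,2$, and the grading defined by $\theta^{\vee}$ reads

```latex
% Extra-special grading of sl_3 defined by theta^vee (illustration):
\[
  \frg \;=\; \frg_{-\theta}\,\oplus\,
  \bigl(\frg_{-\alpha_1}\oplus\frg_{-\alpha_2}\bigr)\,\oplus\,
  \frh\,\oplus\,
  \bigl(\frg_{\alpha_1}\oplus\frg_{\alpha_2}\bigr)\,\oplus\,
  \frg_{\theta},
\]
```

so that $\Delta(1)=\{\alpha_1,\alpha_2\}$ is a two-element antichain, consistent with $|\Delta(1)|=2h^*-4=2$ since $h^*=3$ for type $A_2$.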
In such a case, we have\n\\begin{equation}\\label{Delta-one}\n\\Delta(1)=\\{\\alpha\\in\\Delta^+\\mid (\\alpha, \\theta^{\\vee})=1\\}.\n\\end{equation}\nLet $\\mathrm{ht}$ be the height function. Recall that $h:=\\mathrm{ht}(\\theta)+1$ is the \\emph{Coxeter number}\nof $\\Delta$. Let $h^*$ be the \\emph{dual Coxeter number }of\n$\\Delta$. That is, $h^*$ is the height of\n$\\theta^{\\vee}$ in $\\Delta^{\\vee}$. As noted on p.~1203 of \\cite{P},\nwe have $|\\Delta(1)|=2h^*-4$. We call a lower (resp. upper) ideal\n$I$ of $\\Delta(1)$ \\emph{Lagrangian} if $|I|=h^*-2$. Write\n$\\Delta_l$ (resp. $\\Pi_l$) for the set of \\emph{all} (resp.\n\\emph{simple}) \\emph{long} roots. In the simply-laced cases, all\nroots are assumed to be both long and short. Note that $\\theta$ is\nalways long, while $\\theta^{\\vee}$ is always short.\n\n\n\n\n\n\n\n\nNow Conjecture 5.11 of \\cite{P} is stated as follows.\n\n\n\\medskip\n\\noindent \\textbf{Panyushev conjecture.}\\quad In any extra-special\n$\\bbZ$-grading of $\\frg$, the number of\n$\\mathfrak{X}_{\\Delta(1)}$-orbits equals $|\\Pi_l|$, and each orbit\nis of size $h-1$. Furthermore, if $h$ is even (which only excludes the case $A_{2k}$ where $h=2k+1$), then each\n$\\mathfrak{X}_{\\Delta(1)}$-orbit contains a unique Lagrangian lower\nideal.\n\n\\medskip\n\nOriginally, the conjecture is stated in terms of upper ideals and\nthe reverse operator $\\mathfrak{X}^{\\prime}$. One agrees that we can\nequivalently phrase it using lower ideals and $\\mathfrak{X}$. The\nmain result of the current paper is the following.\n\n\\begin{thm}\\label{thm-main}\nPanyushev conjecture is true.\n\\end{thm}\n\nAfter collecting necessary preliminaries in Section 2, the above\ntheorem will be proven in Section 3. 
Moreover, we note that by our\ncalculations in Section 3, one checks easily that for any\nextra-special $1$-standard $\\bbZ$-grading of $\\frg$, all the\nstatements of Conjecture 5.3 in \\cite{P} hold.\n\n\n\n\n\n\n\n\\medskip\n\n\\noindent\\textbf{Notation.} Let $\\bbN =\\{0, 1, 2, \\dots\\}$, and let\n$\\mathbb{P}=\\{1, 2, \\dots\\}$. For each $n\\in\\mathbb{P}$, $[n]$\ndenotes the poset $(\\{1, 2, \\dots, n\\}, \\leq)$.\n\n\\section{Preliminaries}\n\nLet us collect some preliminary results in this section. Firstly,\nlet us compare the two reverse operators. Let $(P, \\leq)$ be any\nfinite poset. For any $x\\in P$, let $I_{\\geq x}=\\{y\\in P\\mid y\\geq\nx\\}$. For any antichain $A$ of $P$, put $I_{+}(A)=\\bigcup_{a\\in A}\nI_{\\geq a}$. Recall that in Definition 1 of \\cite{P}, the reverse\noperator $\\mathfrak{X}^{\\prime}$ is given by\n$\\mathfrak{X}^{\\prime}(A)=\\max (P\\setminus I_{+}(A))$.\n\n\\begin{lemma}\\label{lemma-inverse-reverse-operator}\nThe operators $\\mathfrak{X}$ and $\\mathfrak{X}^{\\prime}$ are\ninverse to each other.\n\\end{lemma}\n\\begin{proof}\nTake any antichain $A$ of $P$, note that\n$$I_{+}(\\min(P\\setminus\nI(A)))=P\\setminus I(A)\\mbox{ and } I(\\max(P\\setminus\nI_{+}(A)))=P\\setminus I_{+}(A).\n$$\nThen the lemma follows.\n\\end{proof}\n\n\nLet $(P_i,\\leq), i=1, 2$ be two finite posets. One can define a\nposet structure on $P_1\\times P_2$ by setting $(u_1, v_1)\\leq (u_2,\nv_2)$ if and only if $u_1\\leq u_2$ in $P_1$ and $v_1\\leq v_2$ in\n$P_2$. We simply denote the resulting poset by $P_1 \\times P_2$. The\nfollowing well-known lemma describes the lower ideals of\n$[m]\\times P$.\n\n\\begin{lemma}\\label{lemma-ideals-CnP}\nLet $P$ be a finite poset. Let $I$ be a subset of $[m]\\times P$. For\n$1\\leq i\\leq m$, denote $I_i=\\{a\\in P\\mid (i, a)\\in I\\}$. 
Then $I$\nis a lower ideal of $[m]\times P$ if and only if each $I_i$ is a\nlower ideal of $P$, and $I_m\subseteq I_{m-1}\subseteq \cdots\n\subseteq I_{1}$.\n\end{lemma}\n\n\nIn this section, by a \emph{finite graded poset} we always mean a\nfinite poset $P$ with a rank function $r$ from $P$ to the positive\nintegers $\mathbb{P}$ such that all the minimal elements have rank\n$1$, and $r(x)=r(y)+1$ whenever $x$ covers $y$. In such a case, let $P_i$\nbe the set of elements in $P$ with rank $i$. The sets $P_i$ are said\nto be the \emph{rank levels} of $P$. Suppose that\n$P=\bigsqcup_{j=1}^{d} P_j$. Let $P_0$ be the empty set $\emptyset$.\nPut $L_i=\bigsqcup_{j=1}^{i} P_j$ for $1\leq i\leq d$, and let $L_0$ be the empty set.\nWe call these $L_i$ \emph{rank level lower ideals}.\n\n\nLet $\mathfrak{X}$ be the reverse operator on $[m]\times P$. In\nview of Lemma \ref{lemma-ideals-CnP}, we denote by $(I_1, \cdots,\nI_m)$ a general lower ideal of $[m]\times P$, where each $I_i\in\nJ(P)$ and $I_m\subseteq \cdots \subseteq I_{1}$. We say that the\nlower ideal $(I_1, \cdots, I_m)$ is \emph{full rank} if each $I_i$\nis a rank level lower ideal of $P$. Let $\mathcal{O}(I_1, \cdots,\nI_m)$ be the $\mathfrak{X}_{[m]\times P}$-orbit of $(I_1, \cdots,\nI_m)$. The following lemma will be helpful in determining\n$\mathfrak{X}_{[m]\times P}$-orbits consisting of rank level lower\nideals.\n\n\begin{lemma}\label{lemma-operator-ideals-CmP}\nKeep the notation as above. 
Then\nfor any $n_0\\in \\bbN$, $n_i\\in\\mathbb{P}$ ($1\\leq i\\leq s$) such that $\\sum_{i=0}^{s} n_i =m$, we have\n\\begin{equation}\\label{rank-level}\n\\mathfrak{X}_{[m]\\times P}(L_d^{n_0}, L_{i_1}^{n_1}, \\cdots, L_{i_s}^{n_s})=\n(L_{i_1+1}^{n_0+1}, L_{i_2+1}^{n_1}, \\cdots, L_{i_s+1}^{n_{s-1}}, L_0^{n_s-1}),\n\\end{equation}\nwhere $0\\leq i_s<\\cdots 0$\nis even, while $(L_{n}, L_{n-1})$ is the unique ideal\nwith size $2n$ when $i=0$.\n\n\nSecondly, assume that $n$ is even and let us analyze the orbit\n$\\mathcal{O}(I_n, I_n)$. Indeed, by Lemma \\ref{lemma-operator-ideals-CmK}, we have\n\\begin{align*}\n\\mathfrak{X}(I_n, I_n)&=(I_{n^{\\prime}}, L_0),\\\\\n\\mathfrak{X}^{n-1}(I_{n^{\\prime}}, L_0)&=(I_{n}, L_{n-1}),\\\\\n\\mathfrak{X}(I_{n}, L_{n-1})&=(L_{n}, I_{n}),\\\\\n\\mathfrak{X}^{n-1}(L_{n}, I_{n})&=(L_{2n-1}, I_{n^{\\prime}}),\\\\\n\\mathfrak{X}(L_{2n-1}, I_{n^{\\prime}})&=(I_{n}, I_{n}).\n\\end{align*}\nThus the type II orbit $\\mathcal{O}(I_n, I_n)$ consists of $2n+1$ elements. Moreover,\nin this orbit, $(I_n, I_n)$ is the unique ideal with size $2n$. The\nanalysis of the orbit $\\mathcal{O}(I_{n^{\\prime}}, I_{n^{\\prime}})$\nis entirely similar.\n\n\nFinally, assume that $n$ is odd and let us analyze the orbit\n$\\mathcal{O}(I_n, I_n)$. Indeed, by Lemma \\ref{lemma-operator-ideals-CmK}, we have\n\\begin{align*}\n\\mathfrak{X}(I_n, I_n)&=(I_{n^{\\prime}}, L_0),\\\\\n\\mathfrak{X}^{n-1}(I_{n^{\\prime}}, L_0)&=(I_{n^{\\prime}}, L_{n-1}),\\\\\n\\mathfrak{X}(I_{n^{\\prime}}, L_{n-1})&=(L_{n}, I_{n^{\\prime}}),\\\\\n\\mathfrak{X}^{n-1}(L_{n}, I_{n^{\\prime}})&=(L_{2n-1}, I_{n^{\\prime}}),\\\\\n\\mathfrak{X}(L_{2n-1}, I_{n^{\\prime}})&=(I_{n}, I_{n}).\n\\end{align*}\nThus the type II orbit $\\mathcal{O}(I_n, I_n)$ consists of $2n+1$ elements. Moreover,\nin this orbit, $(I_n, I_n)$ is the unique ideal with size $2n$. 
The\nanalysis of the orbit $\mathcal{O}(I_{n^{\prime}}, I_{n^{\prime}})$\nis entirely similar.\n\nTo sum up, we have verified the claim, since there are $(n+2)(2n+1)$\nlower ideals in $[2]\times K_{n-1}$ by Lemma \ref{lemma-ideals-CnP}.\nNoting that $|\Pi_{l}|=n+2$ and $h=h^*=2n+2$ for $\frg=D_{n+2}$, one sees that Theorem\n\ref{thm-main} holds for $D_{n+2}$.\n\n\nTheorem \ref{thm-main} has been verified for all exceptional Lie\nalgebras using \texttt{Mathematica}. We only present the details for\n$E_6$, where $\Delta(1)=[\alpha_2]$, and the Dynkin diagram is as follows.\n\n\begin{figure}[H]\n\centering \scalebox{0.5}{\includegraphics{E6-Dynkin.eps}}\n\end{figure}\n\nNote that $|\Pi_l|=6$,\n$h-1=11$ and $h^*-2=10$. On the other hand, $\mathfrak{X}$ has six\norbits on $\Delta(1)$, each of which has $11$ elements. Moreover, the sizes of\nthe lower ideals in each orbit are distributed as follows:\n\begin{itemize}\n\item[$\bullet$] $0, 1, 2, 4, 7, \textbf{10}, 13, 16, 18, 19, 20$;\n\n\item[$\bullet$] $3, 4, 5, 6, 9, \textbf{10}, 11, 14, 15, 16, 17$;\n\n\item[$\bullet$] $3, 4, 5, 6, 9, \textbf{10}, 11, 14, 15, 16, 17$;\n\n\item[$\bullet$] $7, 7, 8, 8, 9, \textbf{10}, 11, 12, 12, 13, 13$;\n\n\item[$\bullet$] $5, 6, 6, 8, 9, \textbf{10}, 11, 12, 14, 14, 15$;\n\n\item[$\bullet$] $7, 7, 8, 8, 9, \textbf{10}, 11, 12, 12, 13, 13$.\n\end{itemize}\nOne sees that each orbit contains a unique Lagrangian lower ideal.\n\nThis finishes the proof of Theorem \ref{thm-main}. \hfill\qed\n\n\medskip\n\n\centerline{\scshape Acknowledgements} The research is supported by\nthe National Natural Science Foundation of China (grant no.\n11571097) and the Fundamental Research Funds for the Central\nUniversities.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\nAccording to the International Diabetes Federation, the number of people affected by diabetes is expected to reach 700 million by 2045 \cite{Saeedi2019}. 
Diabetic retinopathy (DR) affects over one-third of this population and is the leading cause of vision loss worldwide \\cite{ogurtsova2017idf}. This happens when the retinal blood vessels are damaged by high blood sugar levels, causing swelling and leakage. In fundus retina images, lesions appear as leaking blood and fluids. Red and bright lesions are the type of lesions that can be commonly identified during DR screening. The blindness incidence can be reduced if the DR is detected at an early stage. In clinical routine, color fundus photographs (CFP) are employed to identify the morphological changes of the retina by examining the presence of retinal lesions such as microaneurysms, hemorrhages, and soft or hard exudates. The international clinical DR severity scale includes no apparent DR, mild non-proliferative diabetic retinopathy (NPDR), moderate NPDR, severe NPDR, and proliferative diabetic retinopathy (PDR), labeled as grades 0, 1, 2, 3, (illustrated in Fig.\\ref{fig:progresion_DR}) and 4. NPDR (grades 1, 2, 3) corresponds to the early-to-middle stage of DR and deals with a progressive microvascular disease characterized by small vessel damages and occlusions. PDR (grade 4) corresponds to the period of potential visual loss which is often due to a massive hemorrhage. Early identification and adequate treatment, particularly in the mild to moderate stage of NPDR, may slow the progression of DR, consequently preventing the establishment of diabetes-related visual impairments and blindness.\n\nIn the past years, deep learning has achieved great success in medical image analysis. Many supervised learning techniques based on convolutional neural networks have been proposed to tackle the automated DR grading task \\cite{PRATT2016200,QUELLEC2017178,gayathri2020lightweight}. Nevertheless, these approaches rarely take advantage\nof longitudinal information. In this direction, Yan et al. 
\cite{Yan} proposed to exploit a Siamese network with different pre-training and fusion schemes to detect the early stage of DR using longitudinal pairs of CFP acquired from the same patient. Furthermore, self-supervised learning (SSL) has shown great promise, as it can learn robust high-level representations by training on pretext tasks \cite{e24040551} before solving a supervised downstream task. Current self-supervised models are largely based on contrastive learning \cite{liu2020self,https:\/\/doi.org\/10.48550\/arxiv.2002.05709}. However, the choice of a pretext task that yields a good representation is not straightforward, and the application of contrastive learning to medical images is relatively limited. To tackle this, a self-supervised framework using lesion-based contrastive learning was employed for automated diabetic retinopathy (DR) grading \cite{Lesion-based}.\n\n\n\begin{figure}[h!]\n\includegraphics[width=\textwidth]{image_paper\/longitudinal_progression_DR.png}\n\caption{Evolution from no DR to severe NPDR for a patient in the OPHDIAT \cite{ophdiat} dataset.\n} \n\label{fig:progresion_DR}\n\end{figure}\n\n\n\nMore recently, a new pretext task has appeared for classification purposes in a longitudinal context. Rivail et al. \cite{Rivail2019} proposed a longitudinal self-supervised learning Siamese model trained to predict the time interval between two consecutive longitudinal retinal optical coherence tomography (OCT) acquisitions, thus capturing the disease progression. Yang et al. \cite{Zhao2021} proposed an auto-encoder named LSSL that takes two consecutive longitudinal scans as inputs. They added to the classic reconstruction term a time alignment term that forces the topology of the latent space to change in the direction of longitudinal changes. An extension of this principle was provided in \cite{Ouyang}. 
To reach a smooth trajectory field, a dynamic graph in each training batch was computed to define a neighborhood in the latent space for each subject. The graph then connects nearby subjects and enforces their progression directions to be maximally aligned.\n\nIn this regard, we aim to use LSSL approaches to capture the disease progression to predict the change between no DR\/mild NPDR (grade 0 or 1) and more severe DR (grade $\\geq$2) through two consecutive follow-ups. To this end, we explore three methods incorporating current and prior examinations. Finally, a comprehensive evaluation is conducted by comparing these pipelines on the OPHDIAT dataset \\cite{ophdiat}. To the best of our knowledge, this work is the first to automatically assess the early DR severity changes between consecutive images using self-supervised learning applied in a longitudinal context.\n\n\\section{Methods}\n\nIn this work, we study the use of different longitudinal pretext tasks. We use the encoders trained with those pretext tasks as feature extractors embedded with longitudinal information. The aim is to predict the severity grade change from normal\/mild NPDR to more severe DR between a pair of follow-up CFP images. Let $\\mathcal{X}$ be the set of subject-specific image pairs for the collection of all CFP images. $\\mathcal{X}$ contains all $(x^t, x^s)$ that are from the same subject with $x^t$ scanned before $x^s$. These image pairs are then provided as inputs to an auto-encoder (AE) structure (Fig.\\ref{fig:overview}\\textit{c}). The latent representations generated by the encoder are denoted by $z^t=F(x^t)$ and $ z^s=F(x^s)$ where $F$ is the encoder. From this encoder, we can define the $\\Delta z = (z^s - z^t)$ trajectory vector and then formulate $\\Delta z^{(t,s)} = (z^s - z^t) \/ \\Delta t^{(t,s)}$ as the normalized trajectory vector where $\\Delta t^{(t,s)}$ is the time interval between the two acquisitions. 
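Once an encoder is fixed, the (normalized) trajectory vector is a simple function of the two latent codes and the inter-acquisition time. A minimal NumPy sketch, with a fixed random linear map standing in for the trained encoder $F$ (purely illustrative; the actual encoder is convolutional):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoder F: a fixed random linear map to a 16-dimensional latent space.
W = rng.standard_normal((16, 64))
encode = lambda x: W @ x

# A subject-specific pair (x_t, x_s), with x_t scanned before x_s.
x_t = rng.standard_normal(64)
x_s = rng.standard_normal(64)
dt = 1.5  # time interval between the two acquisitions (e.g. in years)

z_t, z_s = encode(x_t), encode(x_s)
delta_z = z_s - z_t          # trajectory vector
delta_z_norm = delta_z / dt  # normalized trajectory vector
print(delta_z_norm.shape)    # (16,)
```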
The decoder $H$ uses the latent representation to reconstruct the input images, such that $\tilde{x}^t=H(z^t)$ and $\tilde{x}^s=H(z^s)$. $\mathbf{E}$ denotes the expected value. In what follows, three longitudinal self-supervised learning schemes are described.\n\n\begin{figure}[h!]\n\includegraphics[width=\textwidth]{image_paper\/image_paper_LSSL_norm_delta_z_final.png}\n\caption{Figure a) illustrates the longitudinal Siamese network, which takes as input a pair of consecutive images and predicts the time between the examinations. Figure b) represents longitudinal self-supervised learning, which is composed of two independent modules, an AE and dense layers: the AE takes the pair of consecutive images as input and reconstructs them, while the dense layers map a dummy vector to the direction vector $\tau$. Figure c) corresponds to the LNE, which takes the consecutive pairs as input and builds a dynamic graph to align, within a neighborhood, the subject-specific trajectory vector ($\Delta z$) with the pooled trajectory vector ($\Delta h$) that represents the local progression direction in latent space (green circle).}\n\label{fig:overview}\n\end{figure}\n\n\n\subsection{Longitudinal Siamese} \n\nThe Siamese network takes the image pair $(x^t, x^s)$. These images are encoded into compact representations ($z^t$, $z^s$) by the encoder network, $F$. A feed-forward neural network (denoted $G$) then predicts $\Delta t^{(t,s)}$, the time interval between the pair of CFP images (Fig.\ref{fig:overview}\textit{a}). The regression model is trained by minimizing the following L2 loss: $\parallel G(z^t,z^s) - \Delta t^{(t,s)} \parallel_2^2$.\n\n\subsection{Longitudinal self-supervised learning} \n\nLongitudinal self-supervised learning (LSSL) exploits a standard AE. The AE is trained with a loss that forces the trajectory vector $\Delta z$ to be aligned with a direction, denoted $\tau$, lying in the latent space of the AE. 
This direction is learned through a sub-network composed of a single dense layer, which maps dummy data to a vector $\tau$ in the latent space (of dimension $\Omega_{\alpha}$). The high-level representation of the network is illustrated in Fig.\ref{fig:overview}\textit{b}. Forcing the AE to respect this constraint is equivalent to encouraging $\cos\left(\Delta z,\boldsymbol{\tau}\right)$ to be close to 1, i.e. a zero-angle between $\boldsymbol{\tau}$ and the direction of progression in the representation space.\n\n\n\n\noindent\textbf{Objective Function.}\n\begin{equation}\n\mathbf{E}_{(x^t, x^s) \sim \mathcal{X}} \left(\lambda_{rec}\cdot(\parallel x^t - \tilde{x}^t \parallel_2^2 + \parallel x^s - \tilde{x}^s \parallel_2^2)-\lambda_{dir} \cdot \cos(\Delta z,\tau)\right)\n\label{eq:loss_1}\n\end{equation}\n\n\n\n\n\subsection{Longitudinal neighbourhood embedding}\n\n\nLongitudinal neighbourhood embedding (LNE) is based on the LSSL framework. The main difference is that a directed graph $\mathcal{G}$ is built in each training batch. A pair of samples ($x_{t},x_{s}$) serves as a node in the graph, with node representation $\Delta z$. For each node $i$, Euclidean distances to the other nodes $j\neq i$ are computed by $D_{i,j} = \parallel z^t_i - z^t_j\parallel_2$. The $N_{nb}$ (neighbour size) closest nodes of node $i$ form its 1-hop neighbourhood $\mathcal{N}_i$, with edges connected to $i$. The adjacency matrix $A$ of the directed graph $\mathcal{G}$ is then defined as:\n\n\begin{align*} \n A_{i,j} := \n \begin{cases}\n \exp(-\frac{D_{i,j}^2}{2\sigma_i^2}) & j \in \mathcal{N}_i\\\n 0, & j \notin \mathcal{N}_i\n \end{cases}~. 
\\\\\n \\mbox{with } \\sigma_i := max(D_{i,j \\in \\mathcal{N}_i}) - min(D_{i,j \\in \\mathcal{N}_i})\n \\label{eqn:adj}\n\\end{align*}\nThis matrix regularizes each node's representation by a longitudinal neighbourhood embedding $\\Delta h$ pooled from the neighbours' representations. The neighborhood embedding for a node $i$ is computed by:\n\\begin{equation*}\n\\Delta h_i := \\sum _{j \\in \\mathcal{N}_i} A_{i,j} O^{-1}_{i,j} \\Delta z_j,\n\\end{equation*} where $O$ is the out-degree matrix of graph $\\mathcal{G}$, a diagonal matrix that describes the sum of the weights for outgoing edges at each node. They define $\\theta_{\\langle \\Delta z,\\Delta h \\rangle}$ the angle between $\\Delta z$ and $\\Delta h$, and only incite $\\cos(\\theta_{\\langle \\Delta z,\\Delta h \\rangle}) = 1$, i.e., a zero-angle between the subject-specific trajectory vector and the pooled trajectory vector that represents the local progression direction in the latent space (Fig. \\ref{fig:overview}\\textit{c}). \n\n\\noindent\\textbf{Objective Function.} \n\\begin{equation}\n \\mathbf{E}_{(x^t, x^s) \\sim \\mathcal{X}} \\left(\\lambda_{rec} \\cdot(\\parallel x^t - \\tilde{x}^t \\parallel_2^2 + \\parallel x^s - \\tilde{x}^s \\parallel_2^2)- \\lambda_{dir} \\cdot \\cos(\\theta_{\\langle \\Delta z,\\Delta h \\rangle})\\right) \\label{eq:loss_2}\n\\end{equation}\n\n\\section{Dataset}\n\nThe proposed models were trained and evaluated on OPHDIAT \\cite{ophdiat}, a large CFP database collected from the Ophthalmology Diabetes Telemedicine network consisting of examinations acquired from 101,383 patients between 2004 and 2017. Within 763,848 interpreted CFP images 673,017 are assigned with a DR severity grade, the others being non-gradable. Image sizes vary from 1440 $\\times$ 960 to 3504 $\\times$ 2336 pixels. Each examination has at least two images for each eye. Each subject had 2 to 16 scans with an average of 2.99 scans spanning an average time interval of 2.23 years. 
The age range of the patients is from 9 to 91. \n\n\\noindent\\textbf{Image pair selection.} The majority of patients from the OPHDIAT database have multiple images with different fields of view for both eyes. To facilitate the pairing, we propose to select a single image per eye for each examination: we select the one that best characterizes the DR severity grade, as detailed hereafter. For this purpose, we train a standard classifier using the complete dataset that predicts the DR severity grade (5 grades). During the first epoch, we randomly select one image per eye and per examination for the full dataset. After the end of the first epoch with the learned weights of the model, for each image present in every examination, we select the image that gives the highest classification probability. We repeat this process until the selected images by the model converge to a fixed set of images per examination. From the set of selected images, we construct consecutive pairs for each patient and finally obtain 100,033 pairs of images from 26,483 patients. Only 6,690 (6.7\\%) pairs have severity grade changes from grade 0 or 1 to grade $\\geq 2$ against 93,343 (93.3\\%) pairs with severity changes that lie between grades 0 and 1. The resulting dataset exhibits the following proportions in gender (Male 52\\%, Female 48\\%) and diabetes type (type 2 69\\%, type 1 31\\%). This dataset was further divided into training (60\\%), validation (20\\%), and test (20\\%) based on subjects, i.e., images of a single subject belonged to the same split and in a way that preserves the same proportions of examples in each class as observed in the original dataset. \n\n\\noindent\\textbf{Image pre-processing.} Image registration is a fundamental pre-processing step for longitudinal analysis \\cite{Saha2019}. Therefore, using an affine transformation, we first conducted a registration step to align $x_{t}$ to $x_{s}$. 
Images are then adaptively cropped to the width of the field of view (i.e., the eye area in the CFP image) and then resized to 256$\times$256. A Gaussian filter estimates the background in each color channel, which is then subtracted from the image to attenuate the strong intensity variations across the dataset. Finally, the field of view is eroded by 5\% to eliminate illumination artifacts around the edges. During training, random resized crops ([0.96, 1.0] as scale range and [0.95, 1.05] as aspect ratio range) are applied for data augmentation purposes. \n\n\n\section{Experiments and Results}\n\n\noindent\textbf{Implementation Details.} As in \cite{Zhao2021,Ouyang}, we constructed a standard AE for all the compared methods to focus only on the presented methodology and make a fair comparison between approaches, with the hope that using advanced AE structures could lead to better encoding and generalization. In our basic architecture, we employed a stack of $n$ pre-activated residual blocks, where $n$ determines the depth of that scale for the encoder. In each res-block, the residual feature map was calculated using a series of three 3$\times$3 convolutions, the first of which always halves the number of feature maps employed at the present scale, such that the residual representations live on a lower-dimensional manifold. Our encoder comprises five levels; the first four levels are composed of two residual blocks each, and the last level of a single residual block. This provides a latent representation of size $64\times4\times4$. The employed decoder is a reverse structure of the encoder. The different networks were trained for 100 epochs with the AdamW optimizer, a learning rate of $5 \times 10^{-4}$, OneCycleLR as scheduler and a weight decay of $10^{-5}$, using an A6000 GPU with the PyTorch framework. The regularization weights were set to $\lambda_{dir}=1.0$ and $\lambda_{rec}=5.0$. 
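With these weights, the per-pair objective of Eq.~(\ref{eq:loss_1}) can be sketched as follows; this is a NumPy toy example on plain vectors, not the actual training code:

```python
import numpy as np

def cosine(u, v):
    # cos(u, v), with a small epsilon guarding against zero vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def lssl_loss(x_t, x_s, x_t_rec, x_s_rec, z_t, z_s, tau,
              lam_rec=5.0, lam_dir=1.0):
    """Per-pair loss of Eq. (1): reconstruction minus direction alignment."""
    rec = np.sum((x_t - x_t_rec) ** 2) + np.sum((x_s - x_s_rec) ** 2)
    return lam_rec * rec - lam_dir * cosine(z_s - z_t, tau)

rng = np.random.default_rng(2)
x = rng.standard_normal(10)
z_t, z_s = rng.standard_normal(4), rng.standard_normal(4)

# Perfect reconstruction with Delta z aligned to tau attains the minimum, -lam_dir.
print(round(lssl_loss(x, x, x, x, z_t, z_s, tau=z_s - z_t), 6))  # -1.0
```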
A batch size of 64 was used for all models, and a neighbour size $N_{nb}=5$ and the normalized trajectory vector $\Delta z^{(t,s)}$ were used for the LNE, as in the original paper \cite{Ouyang}.\n\subsection{Comparison of the approaches on the early change detection}\n\nWe evaluate the LSSL encoders on detecting the severity grade change from normal\/mild NPDR to more severe DR between a pair of follow-up CFP images. The classifier was constructed as the concatenation of the learned backbone (feature extractor) and a multi-layer perceptron (MLP). The MLP consists of two fully connected layers of dimensions 1024 and 64 with LeakyReLU activation, followed by a final single perceptron. Receiving the flattened representation of the trajectory vector $\Delta z$, the MLP predicts a score between 0 and 1.\nWe compared the area under the receiver operating characteristic curve (AUC) and the accuracy (Acc) in Tab.\ref{table:table_comparaison_AUC} for different pre-training strategies (from scratch, trained on LSSL methods, encoder from a standard AE). We also pre-trained on the OPHDIAT dataset (classification of the DR severity grade) to compare the LSSL pre-training strategies with a conventional pre-training method. The statistical significance was estimated using DeLong's t-test \cite{Robin2011} to analyze and compare ROC curves. The results in Tab.\ref{table:table_comparaison_AUC} and Fig.\ref{fig:ROC_curve} show the clear superiority of the LSSL encoder, with a p-value < 2.2e-16. Due to class imbalance, the Longitudinal-siamese (L-siamese) model has a high Acc while exhibiting a lower AUC than the baseline (trained from scratch). 
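The change-detection head described above (the flattened $\Delta z$ fed to two fully connected layers of sizes 1024 and 64 with LeakyReLU, followed by a single output unit) can be sketched as follows; the random weights are purely illustrative, not those of the trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def mlp_head(delta_z, params):
    """Severity-change score in (0, 1) from a flattened trajectory vector."""
    h = leaky_relu(params["W1"] @ delta_z + params["b1"])
    h = leaky_relu(params["W2"] @ h + params["b2"])
    logit = params["w3"] @ h + params["b3"]
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid

d = 64 * 4 * 4  # flattened latent size (64 x 4 x 4)
params = {
    "W1": rng.standard_normal((1024, d)) * 0.01, "b1": np.zeros(1024),
    "W2": rng.standard_normal((64, 1024)) * 0.01, "b2": np.zeros(64),
    "w3": rng.standard_normal(64) * 0.01, "b3": 0.0,
}
score = mlp_head(rng.standard_normal(d), params)
print(0.0 < score < 1.0)  # True
```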
\n\n\n\n\n\newfloatcommand{capbtabbox}{table}[][\FBwidth]\n\n\n\n\begin{figure}\n\begin{floatrow}\n\ffigbox{%\n \includegraphics[width=6cm]{image_paper\/roc_curve.png}%\n}{%\n \caption{ROC curve analysis of the compared methods.}\label{fig:ROC_curve}%\n}\n\capbtabbox{%\n\begin{adjustbox}{width=\columnwidth,center}%\n\begin{tabular}{l c c}\n \n \n \n \cline{2-3} \n & AUC (95\% CI) & Acc \tabularnewline\n \cline{2-3} \n Model & & \tabularnewline\n \hline\n No pretrain& 0.8758 (0.8688-0.883) & 0.8379 \tabularnewline\n Pre-train on OPHDIAT& 0.8994 (0.8921-0.9068) & 0.8289 \tabularnewline\n AE& 0.7724 (0.7583-0.7866) & 0.5599 \tabularnewline\n L-siamese \cite{Rivail2019}& 0.8253 (0.8127-0.838) & 0.9354 \tabularnewline\n LSSL \cite{Zhao2021}& \textbf{0.9624} (0.9593-0.9655) & 0.8871 \tabularnewline\n LNE \cite{Ouyang} & \textbf{0.9448} (0.9412-0.9486) & 0.8646 \tabularnewline\n \hline\n & & \tabularnewline\n & & \tabularnewline\n & & \tabularnewline\n & & \tabularnewline\n & & \tabularnewline\n \n \n\end{tabular}\n\end{adjustbox}\n\n}{%\n\caption{Comparison of the approaches on the early change detection with the frozen encoder.}\label{table:table_comparaison_AUC}%\n}%\n\end{floatrow}\n\end{figure}\n\n\subsection{Analysis of the trajectory vector norm}\n\nWe constructed different histograms in Fig.\ref{fig:norm_plot} representing the mean value of the norm of the trajectory vector with respect to both diabetes type and progression type. According to Fig.\ref{fig:norm_plot}, only the models trained with the direction alignment loss term are able to capture DR changes in their longitudinal representation. Indeed, we observe in the histograms that the trajectory vector $(\Delta z)$ is able to dissociate the two diabetes types (t-test p-value < 0.01) and the two progression types (t-test p-value < 0.01). 
For the diabetes type, a specific study of the OPHDIAT dataset \cite{Chamard2020} indicates that the DR progression is faster for \n\begin{figure}[h!]\n\includegraphics[width=\textwidth]{image_paper\/norm_z_change.png}\n\caption{Mean of the trajectory vector norm for the different self-supervised methods used.} \n\label{fig:norm_plot}\n\end{figure}\npatients with type 1 diabetes. Since $\Delta z$ can be seen as a relative speed, this observation agrees with the histogram of the mean $\Delta z$ norm represented in Fig. \ref{fig:norm_plot}. We also observed that the norm of the $\Delta z$ vector is lower for pairs that remain at the normal stage of DR than for pairs that progress from mild NPDR to more severe grades. This was expected because only the methods with a direction alignment term in their objective explicitly model longitudinal effects, resulting in a more informative $\Delta z$. This also implies that simply computing the trajectory vector itself is not enough to force the representation to capture the longitudinal change. \n\n\section{Discussion}\nWe applied different LSSL techniques to encode diabetic retinopathy (DR) progression. The accuracy boost, relative to a network trained from scratch or transferred from conventional tasks, demonstrates that the longitudinal pre-trained self-supervised representation learns clinically meaningful information. Concerning the limitations of the LSSL methods, we first observe that the models with no time alignment loss perform poorly and provide no evidence of disease progression encoding. We also report that, for the LNE, the normalized trajectory vector of some pairs with a large time interval between examinations is almost all zeros, which results in a non-informative representation. This could explain the difference between the LSSL and LNE prediction performances. Moreover, during the LSSL and the LNE training, we often faced a plateau in the direction alignment loss. 
Therefore, we also claim that intensive work should be done regarding the choice of the hyperparameters: constant weights for the losses, latent space size ($\Omega_{\alpha}$), and neighbour size ($N_{nb}$). The results on quantifying the encoding of disease progression by the models trained with a time direction alignment are encouraging but not totally clear. As mentioned in \cite{Vernhet2021}, one limitation of the LSSL approach pertains to the cosine loss (the direction alignment term in equations (\ref{eq:loss_1}) and (\ref{eq:loss_2})), which is used to encode the longitudinal progression along a specific direction in the latent space that is learned during training. The loss only focuses on the correlation with the disease progression timeline but not on the disentanglement of the disease progression itself. Therefore, a more in-depth analysis of the latent space is required to evaluate whether the trajectory vector could be used to find a specific progression trajectory according to patient characteristics (diabetes type, age, DR severity). The pairing and the registration are critical steps in the longitudinal study. As previously mentioned, by using a better registration method and exploiting different fusion schemes and backbone architectures, we could obtain enriched latent representations and thus, hopefully, better results. Also, the frozen encoders could be transferred to other types of longitudinal problems. In summary, LSSL techniques are quite promising: preliminary results are encouraging, and we expect further improvements.\n\n\noindent\textbf{Acknowledgements}\nThe work takes place in the framework of the ANR RHU project Evired. 
This work benefits\nfrom State aid managed by the French National Research Agency under \"Investissement\nd'Avenir\" program bearing the reference ANR-18-RHUS-0008\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLong period helical spin order resulting from\nDzyaloshinskii-Moriya (DM) spin-orbit\ncoupling\\cite{dzyaloshinskii58} in non-centrosymmetric\nitinerant magnets (e.g. MnSi, Fe$_x$Co$_{1-x}$Si, FeGe with crystal\nstructure B20) has been intensively studied in the\npast.\\cite{dzyaloshinskii64,ishikawa76, moriya76,nakanishi80,bak80,beille83,lebech89,ishimoto95,uchida06}\nOne material of this family, MnSi, has attracted recent interest\nbecause of its puzzling behavior under applied hydrostatic\npressure.\\cite{pfleiderer97,thessieu97} \n A state with non-Fermi liquid transport\nproperties\\cite{pfleiderer01} is obtained over a wide range of\npressures above a critical threshold $p_c$. \nBeginning at the same critical pressure, but over a smaller\npressure range, magnetic 'partial order' is observed in neutron\nscattering experiments.\\cite{yu04,pfleiderer04} While usual helical\norder gives rise to sharp Bragg peaks in neutron scattering\n(corresponding to the periodicity of the helical spin-density wave),\n'partial order' is characterized by a neutron scattering signal which\nis smeared over a wavevector sphere rather than localized at\ndiscrete points in reciprocal space.\n\nRecent theoretical work on electronic properties, critical fluctuations and\ncollective modes of helimagnets \nare in Refs.~\\cite{fischer04,grigoriev05,belitz06}. 
Theoretical\nproposals for the high pressure state of MnSi have invoked proximity to a\nquantum multi-critical point \cite{schmalian04} or magnetic liquid-gas transitions.\cite{tewari05} Closest in spirit to our approach are the skyrmion-like magnetic patterns studied in Refs.~\cite{bogdanov05,fischer06}.\n\nRecently, we have proposed a novel kind of magnetic order, the\nhelical spin crystal, as a promising starting point for a theory of\n'partial order'.\cite{binz06} Helical spin crystals are magnetic\npatterns obtained by the superposition of several helical\nspin-density waves which propagate in different directions. There is a substantial resemblance to multi-$k$ magnetic structures (also known as multiple-$q$ or multiple spin density wave states).\cite{mukamel76,jo83,forgan89,longfield02,steward04} But in contrast to most other magnetic multi-$k$ systems, in helical spin crystals the ordering wavevectors are selected from an infinite number of degenerate modes lying on a sphere in reciprocal space - a process analogous to the crystallization of liquids.\n\nIn this\nwork, we present a detailed theory of such structures. The\nstability, structure and distinctive properties of such states are\ndescribed, and the consequences of\ncoupling to non-magnetic disorder are discussed.\n\nThe paper is organized as follows. First, we review the standard\ntheory of helimagnetism in Section \ref{GLsection}, finishing with a\nshort remark about more general helical magnetic states.\nThen, the theory of helical spin crystals is developed.\nThe requirements\nto energetically stabilize helical spin crystal states are\ninvestigated in Section \ref{energetics}. The analysis works in two directions: first, we establish a phase diagram in terms of natural parameters which tune the interaction between helical modes, and second, we give simple rules to construct model interactions which stabilize a large class of helical spin crystals. 
The remaining parts of the\npaper are dedicated to extracting testable consequences of these\nnovel magnetic states. In Section \ref{structure}, we give a\ndescription of the most prominent spin crystals which emerge from\nour energetic analysis in terms of their symmetry. It is shown that the symmetry of the magnetic state may stabilize topological textures like merons and anti-vortices which are otherwise not expected to be stable in the present context, given the order parameter and dimensionality of the system.\nSymmetry also determines which higher-harmonics Bragg peaks these structures would produce.\nWe subsequently study the response of helical spin crystals with respect to\ndifferent perturbations in Section \ref{response}. For example, sub-leading spin orbit coupling (crystal anisotropy) locks the magnetic crystal to the\n underlying atomic lattice and thus determines the location of magnetic Bragg peaks. \nWe also study the response to an\nexternal magnetic field which, apart from producing a uniform\nmagnetic moment, also leads to distinctive distortions of the helical\nmagnetic structure, which could be observable by neutron scattering. \nFinally, in Section \ref{disorder} we investigate the implications of\nnon-magnetic impurities, which are expected to destroy long-range\nmagnetic order and produce diffuse scattering.\n\n\n\\section{Landau-Ginzburg theory of helimagnetism}\\label{GLsection}\n\nFor a cubic magnet without a center of inversion, the Landau-Ginzburg free energy to quadratic order in the magnetization $\bv M(\bv r)$ is\n\begin{equation}\nF_2=\left\langle r_0 \bv M^2 + J (\partial_\alpha M_\beta)(\partial_\alpha M_\beta)+2D\bv M\cdot(\bv \nabla\times\bv M)\right\rangle,\label{F2}\n\ee\nwhere $\left\langle\ldots\right\rangle$ indicates sample averaging, $r_0,J,D$ are parameters ($J>0$) and Einstein summation is understood. 
The last term of Eq.~\\eref{F2} is the DM interaction, which is odd under spatial inversion and originates in spin-orbit coupling.\\cite{dzyaloshinskii58} Fourier transformation, $\\bv M(\\bv r)=\\sum_{\\bv q} \\bv m_{\\bv q}e^{i\\bv q\\cdot\\bv r}$ with $\\bv m_{-\\bv q}=\\bv m^*_{\\bv q}$, leads to\n\\begin{equation}\nF_2=\\sum _{\\bv q} \\left[\\left(r_0+Jq^2\\right)|\\bv m_{\\bv q}|^2+2D \\bv m_{\\bv q}^*\\cdot\\left(i\\bv q\\times\\bv m_{\\bv q}\\right)\\right].\n\\ee\nClearly, the energy is minimal for circularly polarized spiral modes,\n where $\\nabla\\times\\bv M$ points in the direction of $-D\\,\\bv M$. For such modes,\n\\begin{equation}\nF_2=\\sum _{\\bv q} r(q)|\\bv m_{\\bv q}|^2,\n\\ee\nwhere $r(q)=r_0-JQ^2+J(q-Q)^2$ with $Q=|D|\/J$.\nThe Gaussian theory thus determines both the chirality of low-energy helical modes and their wavelength $\\lambda=2\\pi\/Q$. The latter is typically long (between $180\\mathring{A}$ in MnSi and $2300\\mathring{A}$ in Fe$_{0.3}$Co$_{0.7}$Si), reflecting the smallness of spin-orbit coupling effects compared to exchange.\n However, no preferred spiraling direction is selected by Eq.~\\eref{F2}, since $F_2$ is rotation-invariant. Cubic anisotropy terms which break this invariance are of higher order in the spin-orbit interaction and therefore small. We neglect them for the moment and reintroduce them later.\n\nThe isotropic Gaussian theory leaves us with an infinite number of modes which become soft as $r(Q)\\to0$. They consist of helical spin-density waves with given chirality (determined by the sign of $D$), whose wave-vectors lie on a sphere $|\\bv q|=Q$ in reciprocal space.\n Each of these helical modes is determined by an amplitude and a phase. 
Hence, for each point $\\bv q$ on the sphere, we define a complex order parameter $\\psi_{\\bv q}$ (with $\\psi_{-\\bv q}=\\psi^*_{\\bv q}$) through\n\\begin{equation}\n\\bv m_{\\bv q}=\\frac12\\psi_{\\bv q}(\\uv\\epsilon'_{\\bv q}+i \\uv\\epsilon''_{\\bv q}),\\label{psis}\n\\ee\nwhere $\\uv \\epsilon'_{\\bv q}$, $\\uv \\epsilon''_{\\bv q}$, and $\\uv q$, are mutually orthogonal unit vectors (with a defined handedness, given by the sign of $D$). Obviously, changing the phase of $\\psi_{\\bv q}$ is equivalent to rotating $\\uv \\epsilon'_{\\bv q}$ and $\\uv \\epsilon''_{\\bv q}$ around $\\uv q$. The phase of $\\psi_{\\bv q}$ is thus only defined relative to some initial choice of $\\epsilon'_{\\bv q}$. The neutron scattering intensity is proportional to $|\\uv q\\times \\bv m_{\\bv q}|^2=1\/2|\\psi_{\\bv q}|^2$, independent of the phase. Changing the phase of $\\psi_{\\bv q}$ is also equivalent to translating $\\bv M(\\bv r)$ along $\\uv q$.\n\n\nIn the following, we study minima of the free energy in the ordered phase [$r(Q)<0$].\nThese depend on the interactions between degenerate modes (i.e., free energy contributions, which are quartic or higher order in $\\bv M$). We only consider interactions which, as $F_2$, have full rotation symmetry and we will include the weak crystal anisotropy last. 
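The bookkeeping behind the parameterization of Eq.~\eref{psis} can be verified numerically; the sketch below (arbitrary test amplitude, $\bv q$ along $\uv z$) checks that the scattering weight $|\uv q\times\bv m_{\bv q}|^2=|\psi_{\bv q}|^2\/2$ is phase-independent, and that rotating the frame $(\uv\epsilon',\uv\epsilon'')$ about $\uv q$ only multiplies $\psi_{\bv q}$ by a phase:

```python
import numpy as np

# m_q = (1/2) psi (e' + i e'') with (e', e'', qhat) a right-handed triad.
qhat = np.array([0.0, 0.0, 1.0])
e1 = np.array([1.0, 0.0, 0.0])      # epsilon'
e2 = np.cross(qhat, e1)             # epsilon''

# (i) |qhat x m_q|^2 = |psi|^2 / 2, independent of the phase of psi
for phase in np.linspace(0, 2*np.pi, 7):
    psi = 0.8 * np.exp(1j*phase)
    m = 0.5 * psi * (e1 + 1j*e2)
    w = np.linalg.norm(np.cross(qhat, m))**2
    assert np.isclose(w, 0.5*abs(psi)**2)

# (ii) rotating the polarization frame by angle a about qhat is the same
#      as multiplying psi by exp(-i a)
a = 0.4
R = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
psi = 0.8 + 0.3j
m_rotated_frame = 0.5 * psi * (R @ e1 + 1j*(R @ e2))
m_phase_shift = 0.5 * (psi*np.exp(-1j*a)) * (e1 + 1j*e2)
assert np.allclose(m_rotated_frame, m_phase_shift)
```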
The most general quartic term which has full rotation symmetry (transforming space and spin together) is of the form\n\begin{equation}\n F_4=\!\!\sum_{\bv q_1,\bv q_2,\bv q_3}\!\!U(\bv\nq_1,\bv q_2,\bv q_3)\left(\bv m_{\bv q_1}\cdot\bv m_{\bv\nq_2}\right)\left(\bv m_{\bv q_3}\cdot\bv m_{\bv\nq_4}\right),\label{F4}\n\ee\n with $\bv q_4=-(\bv q_1+\bv q_2+\bv q_3)$.\n\n\subsection{Single-spiral state}\n\nFor example, if $U(\bv q_1,\bv q_2,\bv q_3)$ is a constant, then $F_4\propto \left\langle \bv M^4\right\rangle$.\nIf the interaction depends only on the local magnetization amplitude, i.e., in general if $F=F_2+\left\langle f(\bv M^2)\right\rangle$ for some function $f$, then the absolute minimum of $F$ is given by a single-spiral state (also known as helical spin density wave) $\bv M(\bv r)=\bv m_{\bv k}e^{i\bv k\cdot\bv r}+\bv m^*_{\bv k}e^{-i\bv k\cdot\bv r}$, where a single pair of opposite momenta $\pm \bv k$ is selected. \n To prove this, we write $F$ as\n\begin{equation}\n\sum_{\bv q} [r(q)-r(Q)]\,|\bv m_{\bv q}|^2 + \left\langle r(Q)\bv M^2+f(\bv M^2)\right\rangle.\label{proof}\n\ee\nIn the single-spiral state, $\bv M^2$ is constant in space and the state minimizes the first and the second term of Eq.~\eref{proof} independently. Therefore, no other magnetic state can be lower in energy.\n\nBecause $Q$ is small, the relevant wavevectors entering Eq.~\eref{F4} are also small and $U(\bv q_1,\bv q_2,\bv q_3)$ is effectively\n close to a constant. Therefore, the single-spiral state, as observed in Fe$_x$Co$_{1-x}$Si, FeGe and in MnSi at ambient pressure, is the most natural helical magnetic order from the point of view of Landau theory.\n\n\n\n\n\subsection{Linear superpositions of single-spiral states}\label{continuous}\n\nMotivated by the phenomenology of 'partial order', we will now extend the theory beyond this standard solution. 
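The key step in the argument above is that a single circularly polarized spiral has spatially constant $\bv M^2$, while generic multi-mode superpositions do not. A minimal numerical illustration (unit amplitudes and arbitrarily chosen mode directions, not tied to any specific state of the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def mode(r, k, e1, e2):
    # one circularly polarized helical mode with unit amplitude:
    # M(r) = e1*cos(k.r) - e2*sin(k.r), with (e1, e2, khat) right-handed
    return np.outer(np.cos(r @ k), e1) - np.outer(np.sin(r @ k), e2)

k1 = np.array([0.0, 0.0, 1.0])
e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
k2 = np.array([0.0, 1.0, 0.0])
f1, f2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, -1.0])

r = rng.uniform(-10, 10, size=(500, 3))
M_single = mode(r, k1, e1, e2)
M_double = M_single + mode(r, k2, f1, f2)

M2_single = np.sum(M_single**2, axis=1)
M2_double = np.sum(M_double**2, axis=1)

assert np.allclose(M2_single, 1.0)   # single spiral: constant amplitude
assert np.std(M2_double) > 0.1       # two modes: amplitude varies in space
```

This is exactly why interactions of the form $\left\langle f(\bv M^2)\right\rangle$ cannot stabilize anything but the single spiral: any amplitude modulation costs energy.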
We speculate that $U(\\bv q_1,\\bv q_2,\\bv q_3)$ is not constant, such that $F_4$ favors a linear superpositions of multiple spin-spirals with different wave-vectors on the sphere of degenerate modes $|\\bv q|=Q$.\n\n\nOne may first speculate about magnetic patterns whose Fourier transform\nis non-zero everywhere on the wave-vector sphere and\n peaked infinitely sharply perpendicular to the sphere, i.e.\n\\begin{equation}\n|\\psi_{\\bv q}|^2\\propto \\delta(|\\bv q|-Q)\\label{po-literally}\n\\ee\n[see Eq.~\\eref{psis}]. This idea turns out to be complicated for at least two reasons.\n\nThe first complication is that there is no continuous way of attributing a finite-amplitude spiral mode to each point on the wavevector sphere. This can be seen by noting that $\\uv \\epsilon'_{\\bv q}$ [Eq.~\\eref{psis}] is a tangent vector field on the sphere. Thus, it cannot be continuous (impossibility of combing a hedgehog).\\cite{mermin79} Thus there is no ``uniform'' superposition of helical modes on the sphere. The problem of singularities can be avoided if one assumes a $\\psi_{\\bv q}$ with point nodes on the sphere.\n\nThe second complication is that higher harmonics would result in a broadening of the delta-function in Eq.~\\eref{po-literally}. This is seen as follows. Consider three momenta $\\bv q_1, \\bv q_2, \\bv q_3$ on the wavevector sphere and $\\bv q_4$ which is off the sphere. The non-vanishing modes $\\bv m_{\\bv q_1},\\bv m_{\\bv q_2}, \\bv m_{\\bv q_3}$ couple linearly to $\\bv m_{\\bv q_4}$ via Eq.~\\eref{F4} and thus induce a higher harmonic ``off-shell'' mode $\\bv m_{\\bv q_4}\\neq0$. 
Since this happens for every point away from the sphere,\n the effect is an intrinsic broadening of the peak in $|\psi_{\bv q}|^2$, in contradiction with the initial assumption of Eq.~\eref{po-literally}.\n\n\n\\section{Energetics of helical spin crystals}\\label{energetics}\n\nIn the following, we study magnetic structures which are superpositions of a {\em finite} number of degenerate helical modes $\psi_j$ with wavevectors $\pm\bv k_j$, $j=1,\ldots,N$. We call the resulting states helical spin crystals, because of the analogy with weak crystallization theory of the solid-liquid transition.\cite{brazovskii87}\n\n\subsection{Structure of the quartic interaction}\label{section_int}\n\nWe assume that $F_4$ is small, and that its main effect is to provide an\ninteraction between the modes which are degenerate under $F_2$.\nThus, the relevant terms of $F_4$ are those with $|\bv q_1|= |\bv q_2|=|\bv\nq_3|=|\bv q_4|=Q$. This phase-space constraint and rotational symmetry imply that the coupling function $U$ depends only on two relative angles between the momenta\n\begin{equation}\n\left.U(\bv q_1,\bv q_2,\bv q_3)\right|_{|\bv q_i|=Q}=U(\theta,\phi),\label{effint}\n\ee\n where we have chosen the following parameterization:\n\begin{equation}\n\begin{split}\n2\,\theta&=\arccos(\uv q_1\cdot\uv q_2)\\\n\phi\/2&=\arccos\left[\frac{(\uv q_2-\uv q_1)\cdot \uv q_3}{1-\uv q_1\cdot\uv q_2}\right].\n\end{split}\label{thph}\n\ee\n Geometrically,\n$\phi\/2$ is the angle between the two planes\nspanned by $(\bv q_1, \bv q_2)$ and $(\bv q_3,\bv q_4)$ (Fig.~\ref{angles}). In the special case $\bv q_1+\bv q_2=0$, it\n becomes the angle between $\bv q_2$ and $\bv q_3$. This mapping allows $\theta$ and $\phi$ to be interpreted as the polar and azimuthal angles of a sphere and the coupling $U(\theta,\phi)$ is a function on that sphere. 
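The parameterization can be made concrete by explicitly building a quartet from $(\theta,\phi)$. The construction below is one possible choice of our own (not given in the text) and verifies the stated geometric meaning of the two angles, up to the orientation of the planes:

```python
import numpy as np

# Build (q1,...,q4) on the sphere |q| = Q with q1+...+q4 = 0:
# q1, q2 at mutual angle 2*theta, and q3, q4 obtained by splitting -(q1+q2)
# symmetrically, with the transverse direction rotated by phi/2.
Q, theta, phi = 1.0, 0.4, 2*np.pi/3

q1 = Q*np.array([np.sin(theta), 0.0, np.cos(theta)])
q2 = Q*np.array([-np.sin(theta), 0.0, np.cos(theta)])
s = q1 + q2                       # q3 + q4 = -s
t = Q*np.sin(theta)*np.array([np.cos(phi/2), np.sin(phi/2), 0.0])
q3, q4 = -s/2 + t, -s/2 - t

assert np.allclose(q1 + q2 + q3 + q4, 0)
assert np.allclose([np.linalg.norm(q) for q in (q1, q2, q3, q4)], Q)
assert np.isclose(np.arccos(q1 @ q2 / Q**2), 2*theta)

# dihedral angle between the planes (q1,q2) and (q3,q4), via their normals
# (modulo orientation, hence the absolute value)
n1, n2 = np.cross(q1, q2), np.cross(q3, q4)
ang = np.arccos(abs(n1 @ n2) / (np.linalg.norm(n1)*np.linalg.norm(n2)))
assert np.isclose(ang, phi/2)
```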
Since it describes an effective coupling between modes on the wavevector sphere, the coupling $U(\\theta,\\phi)$ has a status similar to that of Fermi liquid parameters in the theory of metals.\n\n\n\\begin{figure}\n\\includegraphics[scale=0.6]{fig1.eps}\n\\caption{The set of quartets $(\\bv q_1,\\ldots,\\bv q_4)$ satisfying $|\\bv q_i|=Q$ and $\\bv q_1+\\ldots+\\bv q_4=0$, modulo global rotations, may be parameterized by two angles $\\theta$ and $\\phi$ as shown in this figure. \\label{angles}}\n\\end{figure}\n\n\nThe Landau free energy $F=F_2+F_4$ of helical spin crystal states is calculated as follows. Obviously,\n\\begin{equation}\nF_2=r(Q)\\sum_j|\\psi_j|^2.\\label{F2psi}\n\\ee\nThe quartic term, Eq.~\\eref{F4}, may be split into three distinct contributions $F_4=F_{s}+F_{p}+F_{\\rm nt}$. The first term, $F_{s}$, is the self-interaction of each spiral mode with itself:\n\\begin{equation}\nF_{s}=U_{s}\\sum_j|\\psi_j|^4,\\label{Fs}\n\\ee\nwhere $U_{s}=U\\!(\\theta\\!=\\!\\pi\/2,\\phi\\!=\\!0)$. A minimum requirement for stability of the theory is $U_{s}>0$.\n\nThe next term, $F_{p}$, stems from pairwise interactions between modes. It is of the form\n\\begin{equation}\nF_{p}=2\\sum_{i0$ and $W'=0$. In the grey region, $F4<0$ and the quartic theory is unstable. The various phases are explained in the text. In contrast to our earlier use of these symbols,\\cite{binz06} ``$\\bigtriangleup$'' and ``$\\square$'' denote general states with 3 and 4 helical modes, respectively.\n\\label{pdiag0}}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[scale=0.6]{fig3.eps}\n\\caption{Same as Fig.~\\ref{pdiag-2} with $Q^2W'=-0.5W$. 
The point $U_{11}=U_{22}=0$ is now at the phase boundary between spiral order and simple cubic.\n\\label{pdiag-2}}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[scale=0.6]{fig4.eps}\n\\caption{a) and b) For the locations $A,\\ldots,E$ in the phase diagram of Fig.~\\ref{pdiag0},\nthe ratio between the pair interaction $V_{p}(\\theta)$ and the self-interaction $U_{s}$ is plotted as a function of the angle $2\\theta$ (the angle between propagation directions of modes). A: single-spiral state; B: phase boundary between single-spiral and bcc1; C: bcc1; D: sc; E: $\\bigtriangleup$.\n\\label{Vpfig}}\n\\end{figure}\n\nAll magnetic ground states are equal-amplitude superpositions of 1,2,3,4 or 6 spiral modes.\n The body-centered cubic (bcc) states are superpositions of all $\\langle110\\rangle$-modes.\n bcc1 and bcc2 differ by the relative phases of the six interfering helical modes (see Section \\ref{bcc1}). The simple cubic (sc) crystal consists of three mutually orthogonal spirals (e.g. along all $\\langle100\\rangle$ directions). A face-centered cubic (fcc) helical spin crystal is obtained by superposing all four $\\langle111\\rangle$-modes. However, the ground state is not fcc but a small distortion of it: fcc$^*$. In fcc$^*$, the wavevectors are shifted slightly away from $\\langle111\\rangle$ in order to gain energy from the quartet-term $F_{\\rm nt}$.\\footnote{The shift\n to directions $[\\pm\\sqrt{1+\\delta}\\,\\pm\\!\\!\\sqrt{1+\\delta}\\,\\sqrt{1-2\\delta}\\,]$ is very small with $\\delta\\sim 0.02$ and even without the deformation, fcc would still beat the neighboring phases in the fcc$^*$-region.} The symbols ``$\\bigtriangleup$'' and ``$\\square$'' are used differently here than in our previous paper.\\cite{binz06} Here, ``$\\square$'' stands for a superposition of four modes with wavevectors as shown in Fig.~\\ref{angles} with $\\phi\/2=\\pi\/2$. Hence, wavevectors $\\pm k_j$ form a square cuboid and allow for one quartet-term $F_{\\rm nt}$. 
The angle $2\\theta$ changes as a function of interaction parameters within the range $0.24\\pi<2\\theta<0.38\\pi$.\nFinally, the phase ``$\\bigtriangleup$'' consists of three modes. The wavevectors $\\bv k_1,\\bv k_2,\\bv k_3$ point to the vertices of an equilateral triangle on the sphere, whose size is determined by the requirement that the mutual angle between two wavevectors, $2\\theta$, minimizes Eq.~\\eref{V}. The angle $2\\theta$ is parameter-dependent and lies in the range $0.14\\pi<2\\theta<0.24\\pi$.\n\nAs expected, a negative $W'$ [Eq.~\\eref{W'}] favors multi-mode spin crystal states with varying magnetization amplitude relative to the spiral state with constant $\\bv M^2$ (compare Figs.~\\ref{pdiag0} and \\ref{pdiag-2}). Positive $W'$ has the opposite effect, and enhances the region of the spiral phase (not shown). The term $W'$ alone (i.e., with $U_{11}=U_{22}=0$) stabilizes sc in the regime $Q^2W'<-W\/2<0$. However, a small positive $U_{22}$ is sufficient to favors bcc1 over sc.\n\nIn conclusion, we observe that two helical spin crystals, bcc1 and sc, appear adjacent to the single-spiral state and are stable at relatively small values of $Q^2W'$, $U_{11}$ and $U_{22}$.\n In the following, we study the properties of the bcc1, and sc states since they are the most likely candidates of helical spin crystals from the point of view of energetics.\n\n\n\n\\subsection{Model interactions with exact ground states}\\label{exact}\n\nIn the preceding Section, we established a variational phase diagram for ``natural'', i.e., slowly varying coupling functions $U(\\theta,\\phi)$. 
Most phases in this phase diagram, (``spiral'', sc, bcc1, bcc2, fcc and ``$\\bigtriangleup$'') can be shown to be the {\\em exact} global minima for some fine-tuned model interaction, that are constructed below.\n\nLet us consider the toy model $F_{{\\rm toy}}=F_2+F_{s}+F_{p}$ [Eqs.~(\\ref{F2psi}-\\ref{Fp})] where the quartet term $F_{\\rm nt}$ is dropped and we replace $V_{p}(\\theta)$ by a constant $V$. In this model, local minima with $N$ non-vanishing modes ($N\\geq1$) must have equal amplitudes $|\\psi_j|^2=|r|\/(U_{s}-V+N V)\/2$ and the minimum energy with $N$ modes is\n\\begin{equation}\nF_{{\\rm toy},N}=-\\frac14\\,\\frac{r^2}{V+\\frac{U_{s}-V}N}.\n\\ee\nThere are three regimes. If $00$ ($<0$), the phases of the $\\psi$'s are such that $T_x$, $T_y$, $T_z$ are all negative (positive). Three out of the six phases are arbitrary due to global translation symmetry. This means that the magnetic pattern of bcc1 and bcc2 is uniquely determined up to translational and time-reversal degeneracy.\n\n\\begin{table\n\\begin{ruledtabular}\n\\begin{tabular}{c|cccccc|ccc}\n & $\\psi_1'$ & $\\psi_2'$ & $\\psi_3'$ & $\\psi_4'$ & $\\psi_5'$ & $\\psi_6'$ & $T_x'$ & $T_y'$ & $T_z'$ \\\\\n\\hline\n$R_z$ & $\\psi_2$ & $\\psi_1^*$ & $\\psi_6^*$ & $\\psi_5^*$ & $\\psi_3$ & $\\psi_4$ & $T_y$ & $T_x^*$ & $T_z^*$ \\\\\n$R_x$ & $i\\psi_5$ & $-i\\psi_6^*$ & $-\\psi_4^*$ & $\\psi_3$ & $i\\psi_2^*$ & $i\\psi_1$ & $T_x^*$ & $T_z$ & $T_y^*$\n\\end{tabular}\n\\caption{Transformation properties of the $\\psi$-variables and three quartic terms (defined in Section \\ref{bcc1}) of the bcc spin crystals under rotations. $R_z$ and $R_x$, respectively, are $\\pi\/2$ rotations around the $z$- and $x$-axis. 
These two rotations generate the cubic point group $O$ and therefore the behavior under any rotation which maps the 12 wavevectors onto each other may be obtained by combining these two operations.\n\label{trans}}\n\end{ruledtabular}\n\end{table}\n\nThe solution for $\lambda_{\rm nt}>0$, bcc1, turns out to be the bcc structure with the highest point group symmetry.\nBy selecting the coordinate origin conveniently, we obtain \n $-\psi_1=\psi_2=-i\psi_3=i\psi_4=i\psi_5=-i\psi_6=SM_0$ for bcc1, where $S=\pm1$ is the time-reversal symmetry label and $M_0>0$ is the amplitude. From Table \ref{trans}, we deduce that $\bv M(\bv r)$ changes sign under a $\pi\/2$ rotation about the $x$, $y$ or $z$ axis. That is, the magnetic point group is $O(T)$ (international notation $\underline{4}32$) with 4-fold anti-rotation axes at $\langle 100\rangle$, 3-fold rotation axes at $\langle111\rangle$ and 2-fold anti-rotation axes at $\langle110\rangle$.\n\nThe real-space representation of the bcc1 state is\n\begin{equation}\n\bv M(\bv r)=SM_0\left(\begin{array}{c}\sqrt2\,s_x(c_y-c_z)-2\,s_ys_z\\ \sqrt2\,s_y(c_z-c_x)-2\,s_zs_x\\ \sqrt2\,s_z(c_x-c_y)-2\,s_xs_y\end{array}\right),\n\ee\n where $s_x=\sin(Qx\/\sqrt{2})$, $c_x=\cos(Qx\/\sqrt{2})$, etc.\n The resulting pattern was shown in Fig.~2 of our earlier paper.\cite{binz06} In Fig.~\ref{real-space}, we show the symmetry axes. As discussed above, the magnetization must vanish along the 4-fold anti-rotation axes, which are anti-vortices with winding number $-1$. The $x$, $y$, $z$ axes, and their translations according to the bcc periodicity, form two interpenetrating cubic lattices of such line-nodes. 
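These symmetry statements can be checked directly from the real-space formula above; the sketch below uses the convenience choice $S=M_0=1$ and $Q=\sqrt2$ (so that $s_x=\sin x$, $c_x=\cos x$):

```python
import numpy as np

# bcc1 magnetization pattern with S = M0 = 1 and Q = sqrt(2)
def M(r):
    x, y, z = r
    sx, sy, sz = np.sin([x, y, z])
    cx, cy, cz = np.cos([x, y, z])
    return np.array([np.sqrt(2)*sx*(cy - cz) - 2*sy*sz,
                     np.sqrt(2)*sy*(cz - cx) - 2*sz*sx,
                     np.sqrt(2)*sz*(cx - cy) - 2*sx*sy])

rng = np.random.default_rng(1)
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # pi/2 about z
P = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])    # 2pi/3 about [111]

for _ in range(100):
    r = rng.uniform(-4, 4, 3)
    # 4-fold anti-rotation: a pi/2 rotation about z flips the sign of M
    assert np.allclose(Rz @ M(Rz.T @ r), -M(r))
    # 3-fold rotation about the cube diagonal is a true symmetry
    assert np.allclose(P @ M(P.T @ r), M(r))
    # the magnetization vanishes on the 4-fold axes, e.g. the x axis
    assert np.allclose(M([r[0], 0.0, 0.0]), 0.0)
```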
The cubic space diagonals are 3-fold and the red arrowed lines of Fig.~\\ref{real-space} are 2-fold rotation axes.\n In the vicinity of these lines, the magnetization field is skyrmion-like (i.e., $\\bv M_\\perp$ has a winding number of $+1$).\n\nThe fact that the bcc1 state breaks $\\mathcal{T}$ in a non-trivial way and cannot be restored by any translation is manifest in the occurrence of the $\\mathcal{T}$-breaking order parameter $\\langle M_xM_yM_z\\rangle=SM_0^3\/2\\neq0$, which is a magnetic octupole. This curious property may lead to distinctive anomalous effects, e.g. in the magnetotransport.\\cite{binz06b} \nOctupolar magnetic ordering has recently been discussed in different contexts.\\cite{octupole}\n\n\n\\begin{figure}\n\\includegraphics[scale=0.6]{fig6.eps}\n\\caption{(Color online). Symmetry of the bcc1 state. The figure shows a cubic unit cell of bcc1. Black lines are anti-vortex lines with 4-fold anti-rotation symmetry and vanishing magnetization. The red (dark gray) lines are 2-fold rotation axes and the arrows indicate the direction of $\\bv M$. The structure has 3-fold rotation symmetry about all cubic space diagonals.\n\\label{real-space}}\n\\end{figure}\n\n\\subsubsection{Symmetry and real-space picture of sc}\n\n\nThe simple cubic (sc) helical spin crystal consists of three modes with $\\uv k_1=[100]$, $\\uv k_2=[010]$ and $\\uv k_3=[001]$. 
It forms a periodic structure with a cubic unit cell and the lattice constant is $\\lambda=2\\pi\/Q$.\n The convention for $ \\epsilon_j''$ is given by Eq.~\\eref{conv}, where the unit vector $\\uv z$ is replaced by $[111]$.\n\n\\begin{table\n\\begin{ruledtabular}\n\\begin{tabular}{c|ccccc}\n& $R_x$ & $R_y$ & $R_z$ & $R_{[111]}$ & $R_{[1\\bar 1 0]}$ \\\\\n\\hline\n $\\psi_1'$ & $i\\psi_1$ & $-i\\psi_3$ & $\\psi_2^*$ & $\\psi_3$ & $-\\psi_2^*$ \\\\\n $\\psi_2'$ & $\\psi_3^*$ & $i\\psi_2$ & $-i\\psi_1$ & $\\psi_1$ & $-\\psi_1^*$ \\\\\n $\\psi_3'$ & $-i\\psi_2$ & $\\psi_1^*$ & $i\\psi_3$ & $\\psi_2$ & $-\\psi_3^*$\n\\end{tabular}\n\\caption{Transformation for the $\\psi$-variables of sc under rotations. $R_z$ and $R_x$ are $\\pi\/2$-rotations, $R_{[111]}$ is a $2\\pi\/3$-rotation and $R_{[1\\bar 1 0]}$ a $\\pi$-rotation around the indicated axis.\n\\label{transsc}}\n\\end{ruledtabular}\n\\end{table}\n\n\n\n The transformation properties of the $\\psi$-variables under rotations are given in Table \\ref{transsc}. By choosing the center of coordinates corresponding to $\\psi_1=\\psi_2=\\psi_3=iM_0$, we obtain from Table \\ref{transsc} that the point group symmetry is $D_3(D_3)$ (international notation 32). That is, the chosen origin has a 3-fold rotation axis along $[111]$ and three two-fold axes along $[1\\bar10]$, $[10\\bar1]$ and $[01\\bar1]$. Obviously, $\\bv M$ must vanish at a point of such high symmetry. Hence there is a point node at the origin.\n\nSymmetry operations consisting of a rotation followed by an appropriate translation yield similar point nodes at $\\frac 12\\frac {3}4\\frac {1}4$ (with 3-fold axis along $[\\bar111]$), $\\frac 14\\frac {1}2\\frac 34$ (3-fold axis $[1\\bar11]$) and $\\frac {3}4\\frac {1}4\\frac {1}2$ (3-fold axis $[11\\bar1]$). 
Finally, each of these nodes is doubled inside one unit cell\n because a translation by $(\\frac \\lambda2,\\frac \\lambda2,\\frac \\lambda2)$ amounts to $\\bv M\\to-\\bv M$.\nThe 2- and 3-fold rotation axes form a complex array of skyrmion-like lines, all with winding numbers of $+1$.\nThe real-space representation is\n\\begin{equation}\n\\bv M(\\bv r)=SM_0\\left(\\begin{array}{c}\n\\tilde c_y-\\tilde s_z\\\\\n\\tilde c_z-\\tilde s_x\\\\\n\\tilde c_x-\\tilde s_y\n\\end{array}\\right),\n\\ee\nwhere $\\tilde s_x=\\sin[Q(x+\\lambda\/8)]$, $\\tilde c_x=\\cos[Q (x+\\lambda\/8)]$, etc.\n\n\n\\subsection{Higher harmonics Fourier modes}\\label{higher-harmonics}\n\nAs briefly mentioned in Section \\ref{continuous}, magnetic ordering in wavevectors $\\pm\\bv k_j$ generally induces higher harmonics in the magnetic structure. In the presence of magnetic order $\\bv m_{\\bv k_j}\\neq0$, the Landau free energy for the modes $\\bv m_{\\bv q}$, which do not belong to the set $\\bv m_{\\bv k_j}$, is (to quartic order)\n\\begin{equation}\n\\Delta F=\\sum_{\\bv q} \\tilde r_\\psi(\\bv q) |\\bv m_{\\bv q}|^2-\\bv h_{\\psi}(\\bv q)\\cdot \\bv m^*_{\\bv q}-\\bv h^*_{\\psi}(\\bv q)\\cdot \\bv m_{\\bv q}, \\label{induce}\n\\ee\nwith $\\tilde r_\\psi(\\bv q)=r(q)+O\\left(|\\psi_j|^2\\right)$ and\n\\begin{equation}\n\\bv h_{\\psi}(\\bv q)=-4\n \\!\\!{\\sum_{\\bv q_1,\\bv q_2,\\bv q_3}}^{\\!\\!\\!\\!\\!\\!\\prime}\\,\\,U(\\bv\nq_1,\\bv q_2,\\bv q_3)\\left(\\bv m_{\\bv q_1}\\cdot\\bv m_{\\bv\nq_2}\\right)\\,\\bv m_{\\bv q_3},\n\\label{hpsi}\n\\ee\nwhere the sum is restricted to $\\bv q_1,\\bv q_2,\\bv q_3\\in \\{\\pm\\bv k_{j}\\}$ such that $\\bv q_1+\\bv q_2+\\bv q_3=\\bv q$. The origin of the exchange field $\\bv h_{\\psi}$ is the coupling term Eq.~\\eref{F4}. In the following, we assume that $\\tilde r(\\bv q)>0$. 
Obviously, Eq.~\\eref{induce} then leads to induced modes\n\\begin{equation}\n\\bv m_{\\bv q}=\\frac{\\bv h_{\\psi,\\bv q}}{\\tilde r(\\bv q)}\\label{harmonics}\n\\ee\nat momenta $\\bv q=\\pm\\bv k_{j_1}\\pm\\bv k_{j_2}\\pm\\bv k_{j_3}$.\\footnote{In non-magnetic crystals, the coupling term is cubic and therefore higher harmonics are generated at all sums of {\\em two} momenta $\\bv k_{j_1}\\pm\\bv k_{j_2}$. In magnetic structures, \n higher-harmonics wavevectors are restricted to sums of {\\em three} ordering momenta.} These modes modify the detailed magnetic structure, but they do not change its symmetry, since the field $\\bv h_\\psi$ respects all the symmetries of the spin crystal.\n\nWe now briefly discuss the consequences for the three helical magnetic structures under consideration.\n\n\n A single spin-density wave involving wavevectors $\\pm\\bv k$ might create higher harmonics at $\\pm3\\bv k$ via Eq.~\\eref{harmonics}. However, in the case of spin spirals, $\\bv m_{\\bv k}^2=0$ and therefore $\\bv h_{\\psi,3\\bv k}=0$.\n Thus, there are {\\em no} higher harmonics created by a single spin spiral.\n\n\nThe sc spin structure with principal ordering wave-vectors along $\\langle001\\rangle$ with $|\\bv k_j|=Q$\n generates higher harmonics along $\\langle111\\rangle$ (with $|\\bv q|=\\sqrt{3}Q$) and\n along $\\langle012\\rangle$ (with $|\\bv q|=\\sqrt{5}Q$). Note that throughout the current and last sections, all crystal directions refer to the magnetic crystals.\nThe orientation of a magnetic crystal with respect to the atomic crystal depends on the anisotropy term $F_{\\rm a}$, which will be considered in Section \\ref{crystalanisotropy}.\n\nIn contrast to the former cases, bcc structures couple linearly to the $\\bv q=0$ mode (i.e., the uniform magnetization), since some triples of ordering vectors add to zero. This coupling will be further investigated in Section \\ref{H}. Here, we only notice that for the bcc1 and bcc2 states, $\\bv h_{\\psi,\\bv q=0}=0$. 
This result can be understood in terms of the symmetry of these states. In the case of bcc1, the point group symmetry is too high to support a non-zero axial vector $\bv h_{\psi,\bv q=0}$. Therefore, bcc1 and bcc2 do not create a spontaneous net magnetization.\nThe next set of wavevectors which can be reached by adding three ordering vectors lies along $\langle001\rangle$ (with $|\bv q|=\sqrt{2}Q$). However, for bcc1 and bcc2, direct calculation shows $\bv h_{\psi}=0$ for these modes. As before, this can be understood in terms of symmetry. Higher harmonics along $\langle001\rangle$ would have the structure of a sc spin crystal. We have seen in Section \ref{symmetry} that the point group symmetry of sc is lower than that of bcc1. Therefore, bcc1 cannot create such an exchange field $\bv h_{\psi}$. \n We conclude that bcc1 creates {\em no} secondary Bragg peaks at $(0,0,\sqrt{2}Q)$, etc. The same is true for bcc2.\nThe shortest wavevectors which are created by bcc1 or bcc2 as higher harmonics are along $\langle 112\rangle$ (with $|\bv q|=\sqrt{3}Q$). Others are at $\langle110\rangle$ ($|\bv q|=2Q$), $\langle013\rangle$ ($|\bv q|=\sqrt{5}Q$), $\langle111\rangle$ ($|\bv q|=\sqrt{6}Q$) and $\langle123\rangle$ ($|\bv q|=\sqrt{7}Q$).\n\n\section{Response to crystal anisotropy, magnetic field and disorder}\label{response}\n\n\subsection{Effect of crystal anisotropy}\n\nSo far, our free energy has\nbeen completely rotation invariant. In the magnetically ordered states, full rotation symmetry\nis spontaneously broken, but any global rotation of the spin\nstructure leaves the energy invariant.\n This degeneracy is lifted by an additional anisotropy term $F_{\rm a}$, which couples the magnetic crystal to the underlying atomic lattice. 
The crystal anisotropy energy is small and may be treated as a perturbation which merely selects the directional orientation, but does not otherwise affect the magnetic state.\n\n\n\\subsubsection{Single-spiral state}\\label{spiral-anisotropy}\n\nIn the case of a single-spiral state, crystal anisotropy is a function $F_{\\rm a}(\\uv k)$, where $\\uv k$ is the spiral direction.\n The function $F_{\\rm a}(\\uv k)$ may depend on various parameters, it should be symmetric under the\n point group of the (atomic) crystal lattice and satisfy $F_{\\rm a}(\\uv k)=F_{\\rm a}(-\\uv k)$. For concreteness, we assume the cubic point group $T$, relevant for the $B20$ crystal structure. We further assume that $F_{\\rm a}(\\uv k)$ is a slowly varying function, since a singular or rapidly oscillating function in reciprocal space would translate into a (non-local) interaction between magnetic moments and the atomic crystal. Such a function $F_{\\rm a}(\\uv k)$ generally has its minimum at either $\\left\\langle100\\right\\rangle$ or $\\langle111\\rangle$, which can be shown in two different ways.\n\nThe first argument is based on combining symmetry with Morse's theory of critical points.\\cite{morse34} Morse theory implies that\n\\begin{equation}\n\\mbox{maxima}-\\mbox{saddles}+\\mbox{minima}=2 \\label{morse}\n\\ee\nfor a function on the unit sphere. Symmetry requires that $F_{\\rm a}(\\uv k)$ has stationary points (points with vanishing first derivative, i.e., maxima, minima or saddles) at $\\left\\langle100\\right\\rangle$ (6 directions), $\\left\\langle111\\right\\rangle$ (8 directions) and $\\left\\langle110\\right\\rangle$ (12 directions).\nIf $F_{\\rm a}(\\uv k)$ is slowly varying, we suspect that these are the only stationary points, since adding more maxima, minima and saddles means that the function is more rapidly oscillating. 
Under this hypothesis, it follows from Eq.~\eref{morse} that the $\left\langle110\right\rangle$ directions are saddle points and that the extrema are at $\left\langle111\right\rangle$ and $\left\langle100\right\rangle$. For $\left\langle110\right\rangle$ to be minima, $F_{\rm a}(\uv k)$ needs to have additional stationary points (e.g. saddles) at non-symmetric, parameter-dependent locations. We conclude that an anisotropy which favors $\left\langle110\right\rangle$ would need to be more rapidly oscillating than required by symmetry.\n\nThe second argument is based on an expansion of $F_{\rm a}(\uv k)$ in powers of the direction cosines $\hat k_x, \hat k_y,\hat k_z$:\n\begin{equation}\n F_{\rm a}(\uv k)=\alpha\, (\hat k_x^4+\hat k_y^4+\hat k_z^4) + \alpha' \, \hat k_x^2\hat k_y^2\hat k_z^2+\ldots,\label{Fa}\n\ee\nwhere we retained the first two terms allowed by cubic symmetry. Because of the smallness of the wave-vector sphere $Q$, one typically expects $|\alpha'|\ll |\alpha|$ and subsequent terms even smaller. It is easily checked that for most values of the parameters $\alpha,\alpha'$, Eq.~\eref{Fa} has its global minima at either $\left\langle100\right\rangle$ (for $\alpha<\min\{0,\alpha'\/18\}$) or $\left\langle111\right\rangle$ (for $\alpha>\max\{2\alpha'\/9,\alpha'\/18\}$). 
Only in the narrow parameter regime $0<\alpha<2\alpha'\/9$ are the minima indeed at $\left\langle110\right\rangle$.\footnote{In this regime, all 26 high-symmetry points are extrema of $F_{\rm a}$ and 24 saddle points are located at $\left\langle\sqrt{2\alpha}\,\sqrt{2\alpha}\,\sqrt{\alpha'-4\alpha}\right\rangle$, in agreement with Eq.~\eref{morse}.} We conclude that crystal anisotropy which favors $\left\langle110\right\rangle$ may only appear in a narrow regime between two phases which favor $\left\langle111\right\rangle$ and $\left\langle100\right\rangle$, respectively.\n\nAccordingly, $\left\langle111\right\rangle$ or $\left\langle100\right\rangle$ are the selected spiral directions in all cubic helimagnets known so far.\cite{ishikawa76,beille83,lebech89}\nThe preferred direction in MnSi at low pressure is $\left\langle 111\right\rangle$ and in Fe$_x$Co$_{1-x}$Si, it is $\left\langle 100\right\rangle$. In FeGe, there is a phase transition between these two directions,\cite{lebech89} but no intermediate phase with $\left\langle110\right\rangle$ spiral orientation has been reported. However, the neutron scattering data\cite{pfleiderer04} in the partially ordered phase of MnSi clearly show a maximum signal along the $\left\langle110\right\rangle$ crystal directions. While it is initially tempting to interpret the partially ordered state of MnSi as a single-spiral state that has lost its orientational long-range order by some mechanism, one would still expect a maximal scattering intensity in the energetically preferred lattice direction.\n Theories of the partially ordered state in terms of disordered helical spin-density waves\cite{tewari05,grigoriev05} thus depend on a crystal anisotropy that prefers spiral directions along $\left\langle 110\right\rangle$. 
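The parameter regimes quoted above follow from comparing Eq.~\eref{Fa} on the three high-symmetry direction classes (per the Morse argument, the only relevant stationary points for a slowly varying anisotropy); a short numerical check with illustrative parameter values:

```python
import numpy as np

# F_a = alpha*(kx^4+ky^4+kz^4) + alpha'*kx^2*ky^2*kz^2 evaluated on unit
# vectors; on <100> it equals alpha, on <110> alpha/2, on <111>
# alpha/3 + alpha'/27.
def Fa(khat, alpha, alphap):
    kx, ky, kz = khat / np.linalg.norm(khat)
    return alpha*(kx**4 + ky**4 + kz**4) + alphap*(kx*ky*kz)**2

dirs = {'100': np.array([1.0, 0.0, 0.0]),
        '110': np.array([1.0, 1.0, 0.0]),
        '111': np.array([1.0, 1.0, 1.0])}

def winner(alpha, alphap):
    return min(dirs, key=lambda d: Fa(dirs[d], alpha, alphap))

assert winner(-1.0, 0.0) == '100'   # alpha < min(0, alpha'/18)
assert winner(1.0, 0.0) == '111'    # alpha > max(2*alpha'/9, alpha'/18)
assert winner(0.1, 1.0) == '110'    # narrow window 0 < alpha < 2*alpha'/9
```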
As we have shown, this seems very unlikely.\n\n\n\n\n\\subsubsection{Helical spin crystals}\\label{crystalanisotropy}\n\n\n For multi-mode spin crystals, \n $F_{\\rm a}$ is no longer determined by a single direction $\\uv k$, so the arguments of Section \\ref{spiral-anisotropy} do not apply. Rather, the anisotropy energy depends on three Euler angles, which rotate the full three-dimensional magnetic structure relative to the atomic crystal. In other words, $F_{\\rm a}$ is a function on the rotation group $SO(3)$. Relative to some standard orientation $\\bv k_j$ of the mode directions, the leading-order anisotropy term is\n\\begin{equation}\nF_{\\rm a}(R)=a\\sum_{j}g({R\\uv k_j})\\,|\\psi_j|^2\n\\label{crysFa}\n\\ee\nwhere \n $R$ is a rotation operator and $g(\\uv k)=\\hat k_x^4+\\hat k_y^4+\\hat k_z^4$. As before, we have assumed a cubic point group symmetry.\n\nIn the case $a>0$, the modes of the bcc spin crystals get locked to the $\\langle110\\rangle$ directions. The orientation of sc is four times degenerate for $a>0$. The four minima of $F_{\\rm a}$ are obtained from the standard orientation along $\\langle100\\rangle$ through a $\\pi\/3$-rotation around any of the four space diagonals, such that the three spiral modes point along $\\langle122\\rangle$.\n\nIn the opposite case ($a<0$), sc is oriented along $\\langle100\\rangle$. This time, it is the bcc spin crystals that get rotated by $\\pi\/3$ around any $\\langle111\\rangle$ axis to reach one of four stable orientations. Under such $\\pi\/3$-rotations, three of the six modes remain along $\\langle110\\rangle$ and three move to $\\langle114\\rangle$. Each individual $\\langle114\\rangle$ direction appears only in one of the four solutions but each $\\langle110\\rangle$ direction appears in two of four solutions.\n\n\nIf there is more than one degenerate\n orientation, the sample typically breaks up into domains such that full cubic symmetry is restored in the neutron scattering signal. 
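The single-mode regime boundaries of Eq.~\eref{Fa} invoked in Section \ref{spiral-anisotropy} can be verified by brute-force minimization over the unit sphere. A minimal numerical sketch (the parameter values are arbitrary test points inside each regime, not fitted to any material):

```python
import numpy as np

def F_a(k, alpha, alpha_p):
    # Eq. (Fa): first two cubic-invariant terms of the anisotropy energy
    return alpha * np.sum(k**4) + alpha_p * np.prod(k)**2

def argmin_direction(alpha, alpha_p, n=120):
    """Brute-force scan of F_a over the unit sphere; returns the sorted
    absolute components of the minimizing direction."""
    best, best_k = np.inf, None
    for t in np.linspace(0.0, np.pi, n):
        for p in np.linspace(0.0, 2.0 * np.pi, 2 * n):
            k = np.array([np.sin(t) * np.cos(p),
                          np.sin(t) * np.sin(p),
                          np.cos(t)])
            f = F_a(k, alpha, alpha_p)
            if f < best:
                best, best_k = f, k
    return np.sort(np.abs(best_k))[::-1]

# alpha < min{0, alpha'/18}: global minima at <100>
k100 = argmin_direction(alpha=-1.0, alpha_p=1.0)
# alpha > max{2 alpha'/9, alpha'/18}: global minima at <111>
k111 = argmin_direction(alpha=1.0, alpha_p=1.0)
# 0 < alpha < 2 alpha'/9: the narrow window with minima at <110>
k110 = argmin_direction(alpha=1.0, alpha_p=9.0)
```

With a modest angular grid, the minimizing directions come out within about a degree of the symmetric axes in all three regimes, consistent with the narrowness of the $\langle110\rangle$ window.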
Table \\ref{orienttable} lists the directions of magnetic Bragg peaks for the \n different cases.\n Out of the three prominent phases in our phase diagram, the bcc1 spin crystal is the only one that can explain the neutron-scattering peaks along $\\langle110\\rangle$ in the 'partial order' phase of MnSi. It does so most naturally for the case $a>0$, which is the known sign of the anisotropy in MnSi at low pressure.\n\n\nIn order to compare the energy scale of $F_{\\rm a}$ (i.e., the locking energy) for the different magnetic states, we note the following.\n At the phase boundary between two equal-amplitude spin-crystal phases (one phase with amplitudes $|\\psi_1|=\\ldots=|\\psi_N|$ and the second phase with $|\\tilde\\psi_1|=\\ldots=|\\tilde\\psi_{\\tilde N}|$)\nthe amplitudes of the two neighboring phases are related by\n\\begin{equation}\nN|\\psi_j|^2=\\tilde N|\\tilde\\psi_{j'}|^2.\n\\ee\nIn the vicinity of the phase boundary, the anisotropy term is therefore proportional to the mode-average $1\/N\\sum_j g(R\\uv k_j)$. Using this result, we find that the effective anisotropy energy is smaller for the bcc spin crystals than for the single-spiral state by a factor of 4--4.5.\\footnote{Using $\\max\\{F_{\\rm a}\\}-\\min\\{F_{\\rm a}\\}$ to estimate the locking energy scale leads to a reduction factor of 4.5. Expanding $F_{\\rm a}$ around its minimum for $a>0$ leads to a factor of 4.}\nFor the sc state, the locking energy is anisotropic. Certain rotations are as costly in energy as for the single-spiral case while some small rotations about the minimum for $a>0$ are softer than for the single-spiral by a factor of 4.5.\n\n\n\\begin{table}\n\\begin{ruledtabular}\n\\begin{tabular}{c|cc|cc|cc}\n & spiral & No. & bcc & No. & sc & No. 
\\\\\n\\hline\n $a>0$ &$\\langle111\\rangle$ & 4 & $\\langle110\\rangle$ & 1 &$\\langle122\\rangle$ & 4\\\\\n $a<0$ &$\\langle100\\rangle$ & 3 & $\\langle110\\rangle$,$\\langle114\\rangle$\\footnote{When averaged over domains, Bragg peaks along $\\langle110\\rangle$ are twice as intense as peaks along $\\langle114\\rangle$.} & 4 &$\\langle100\\rangle$ & 1\n\\end{tabular}\n\\caption{Crystal directions of magnetic Bragg peaks and number of degenerate orientations for three magnetic structures, sc, bcc and single-spiral, with crystal anisotropy given by Eq.~\\eref{crysFa}.\n\\label{orienttable}}\n\\end{ruledtabular}\n\\end{table}\n\n\\subsection{Effect of magnetic field}\\label{H}\n\n\nA uniform external magnetic field $\\bv H$ couples to the $\\bv q=0$ mode of the\nmagnetization, $\\bv m=\\langle\\bv M\\rangle$, via Zeeman coupling.\nThe uniform magnetization, in turn, couples to the helical modes $\\psi_j$ through\n\\begin{equation}\n\\begin{split}\nF=&\\left.F\\right|_{\\bv m=0}+r_0\\,\\bv m^2+U(0,0,0)\\,\\bv m^4-\\bv h_{\\psi}(0)\\,\\bv m\\\\\n&+2\\sum_j\\left(U(\\bv k_j,-\\bv k_j,0)\\,\\bv m^2+U(\\bv k_j,0,0)\\,\\bv m^2_{\\perp,j}\\right)|\\psi_j|^2,\\label{Fm}\n\\end{split}\n\\ee\nwhere $\\bv m_{\\perp,j}=\\bv m-(\\bv m\\cdot\\uv k_j)\\uv k_j$ and we have used Eqs.~\\eref{F4},\\eref{hpsi}. Plumer and Walker\\cite{plumer81} argued that $U(0,0,0)\\approx U(\\bv k_j,-\\bv k_j,0)\\approx U_{s}$, which we will use in the following for simplicity.\n\n\\subsubsection{Response of the single-spiral state}\n\n\nThe behavior of the single-mode spiral state under a\nmagnetic field has been studied both experimentally\\cite{ishikawa76,lebech89,ishimoto95, thessieu97,uchida06} and theoretically.\\cite{kataoka81,plumer81}\n It is characterized by a strongly anisotropic susceptibility,\ninduced by the last term in Eq.~\\eref{Fm}. 
For a fixed spiral direction $\\bv k$, the susceptibilities parallel and orthogonal to the spiral direction are given by\n\\begin{equation}\n\\begin{split}\n\\chi_\\parallel\\,\\approx&\\,\\Delta^{-1}\\\\\n\\chi_\\perp\\approx&\\left(\\Delta+2\\frac{U'}{U_{s}}|r(Q)|\\right)^{-1},\n\\end{split}\n\\ee\nwhere $\\Delta=2[r_0-r(Q)]=2JQ^2$ and $U'=U(\\bv k,0,0)$. Well below the critical ordering temperature, $\\Delta\\ll 2|r(Q)|$ and therefore $\\chi_\\parallel\\gg \\chi_\\perp$. This strong anisotropy leads to a spin reorientation transition at $H=H_{\\rm sr}$, where the spiral axis gets oriented along the field direction. The value of $H_{\\rm sr}$ depends on the anisotropy [Eq.~\\eref{Fa}] and the field direction. For $\\alpha>0$ and $\\bv H\\parallel \\langle 100\\rangle$, Plumer and Walker obtained\n\\begin{equation}\nH_{\\rm sr}^2=\\frac{4\\alpha}{\\chi_\\parallel-\\chi_\\perp}\\gtrsim 4 \\alpha \\Delta,\\label{Hsr}\n\\ee\nwhere we have used $\\chi_\\parallel\\gg \\chi_\\perp>0$. Once the spiral is oriented, the susceptibility is large (equal to $\\chi_\\parallel$).\n\nThe spiral amplitude decreases as a function of the external field and vanishes at\n\\begin{equation}\n H_{\\rm c}=|\\psi_0|\\Delta,\\label{Hc}\n\\ee\nwhere $|\\psi_0|^2=|r(Q)|\/(2U_{s})$. Above $H_{\\rm c}$, the magnetization is uniform.\n\n\n\\subsubsection{Response of the bcc1 spin crystal}\\label{bcc-and-field}\n\n In the bcc spin crystal\nstates, the linear response is isotropic, because\ntheir symmetry group does not allow for an anisotropic\nsusceptibility tensor. As a consequence, there is no orientation of the bcc state towards the magnetic field at the level of linear response (i.e., from energies up to order $\\bv H^2$). However, there is a sub-leading contribution to the energy $\\propto\\langle M_xM_yM_z\\rangle H_xH_yH_z$. 
This contribution splits the degeneracy between the $S=1$ and $S=-1$ states and it may lead to a reorientation of the bcc crystal towards the field.\n\n\nIn terms of the six $\\psi$-variables of bcc, the exchange field $\\bv h_\\psi(0)$, which enters Eq.~\\eref{Fm}, amounts to\n \\begin{equation}\n\\bv h_\\psi(0)=-\\mu \\Re{\\left[(5-i\\sqrt{2})\\bv{\\tilde h}_\\psi\\right]},\n\\ee\nwhere $\\mu=U(\\bv k_{j_1},\\bv k_{j_2},\\bv k_{j_3})$ for $\\bv k_{j_1}+\\bv k_{j_2}+\\bv k_{j_3}=0$ and\n\\begin{equation}\n\\begin{split}\n\\bv{\\tilde h}_\\psi= & \\,\\psi_1\\psi_3^*\\psi_6^*\\left(\\begin{array}{c}1\\\\1\\\\1\\end{array}\\right)\n+\\psi_1^*\\psi_4\\psi_5\\left(\\begin{array}{c}-1\\\\-1\\\\1\\end{array}\\right) \\\\\n& -\\psi_2\\psi_4^*\\psi_6\\left(\\begin{array}{c}-1\\\\1\\\\-1\\end{array}\\right)\n-\\psi_2^*\\psi_3\\psi_5^*\\left(\\begin{array}{c}1\\\\-1\\\\-1\\end{array}\\right).\n\\end{split}\n\\ee\n\nThe (isotropic) inverse spin susceptibility (see Appendix) in the bcc1 state is composed of three contributions\n\\begin{equation}\n\\chi^{-1}_{\\rm bcc1}=\\chi^{-1}_{\\rm bare}+\\chi_{\\rm phase}^{-1}+\\chi_{\\rm amp}^{-1}.\\label{chibcc1}\n\\ee\nThe first term\n\\begin{equation}\n\\chi^{-1}_{\\rm bare}=2\\left(r_0+\\frac{U_s+\\frac23 U'}{U_{\\rm bcc1}}|r(Q)|\\right),\n\\ee\nwhere $U_{\\rm bcc1}=1\/6[U_{s}+V_{p}(\\pi\/4)+4V_{p}(\\pi\/6)-\\lambda_{\\rm nt}]$, can be derived in analogy to the single-spiral case. In fact, $\\chi_{\\rm bare}$ is a ``mixture'' of $\\chi_\\parallel$ and $\\chi_\\perp$, determined geometrically by the angles between the mode directions $\\bv k_j$ and the magnetic field. 
It follows that $\\chi_{\\rm bare}\\ll \\chi_\\parallel$, provided $U_{\\rm bcc1}\\sim U_{s}$ (the two couplings are equal at the phase boundary between single-spiral and bcc1).\nThe remaining terms in Eq.~\\eref{chibcc1}\n\\begin{equation}\n\\begin{split}\n\\chi_{\\rm phase}^{-1}&=-\\,\\frac{\\mu^2\\,|r(Q)|}{3\\,U_{\\rm bcc1}\\,\\lambda_{\\rm nt}}\\\\\n\\chi_{\\rm amp}^{-1}&=-\\,\\frac{25\\,\\mu^2\\,|r(Q)|}{12\\,U_{\\rm bcc1}\\,[U_{s}-V_{p}(\\pi\/4)+\\lambda_{\\rm nt}]},\n\\end{split}\\label{chi-phase-amp}\n\\ee\nstem from the response of the bcc magnetic structure to the field. That is, they originate from the adjustments of relative phases and amplitudes, respectively, of the helical modes as a result of the term $-\\bv h_\\psi(0)\\cdot\\bv m$ in Eq.~\\eref{Fm}. The effect of $\\chi_{\\rm phase}$ and $\\chi_{\\rm amp}$, which are necessarily negative, is to increase the susceptibility of the bcc1 state.\n\nThe change in\nthe relative amplitudes and phases of the six interfering spirals as\na function of the magnetic field may be calculated (see Appendix). For example, the\nlinear response of the amplitudes of bcc1 is\n\\begin{equation}\n\\left(\\begin{array}{c}\n\\delta|\\psi_1|\\\\ \\delta|\\psi_2|\\\\ \\delta|\\psi_3|\\\\ \\delta|\\psi_4|\\\\ \\delta|\\psi_5|\\\\ \\delta|\\psi_6|\\\\\n\\end{array}\\right)=\\frac{5\\,\\mu\\, S}{4 [U_{s}-V_{p}(\\pi\/4)+\\lambda_{\\rm nt}]}\\left(\\begin{array}{r}\n-m_z\\\\ m_z\\\\ -m_x\\\\ m_x\\\\ m_y\\\\ -m_y\n\\end{array}\\right).\n\\ee\nThis response should be observable by neutron scattering if it is\npossible to prepare the sample in a single-domain state (i.e.\nwithout mixture of the two time-reversal partners). For example, a field in the $\\uv z$ direction affects $|\\psi_1|$ and $|\\psi_2|$, the amplitudes of the modes propagating orthogonally to $\\uv z$ (Fig. \\ref{6modes}), which get enhanced and suppressed by the magnetic field, respectively. 
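The geometric origin of the isotropic linear response can be made concrete: for the six $\langle110\rangle$ mode directions of the bcc spin crystal, $\sum_j \uv k_j \uv k_j^T$ is proportional to the unit matrix, so any mixture of $\chi_\parallel$ and $\chi_\perp$ weighted by the angles between the modes and the field is independent of the field direction. A quick sketch (the particular sign choices for the six representatives are for illustration only):

```python
import numpy as np

# one representative of each +/- pair of the six <110> bcc mode directions
dirs = np.array([[1, 1, 0], [1, -1, 0],
                 [1, 0, 1], [1, 0, -1],
                 [0, 1, 1], [0, 1, -1]], dtype=float) / np.sqrt(2.0)

# the second-rank tensor entering any quadratic mixture of chi_par, chi_perp
S = sum(np.outer(k, k) for k in dirs)

# for an arbitrary field direction h, sum_j (k_j . h)^2 = h.S.h is constant
h = np.array([0.3, -0.5, 0.8])
h /= np.linalg.norm(h)
proj = float(h @ S @ h)
```

The sum of outer products equals $2\openone$, so the angular weighting is the same for every field orientation, as required by the cubic symmetry argument.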
\n\nThe expected effects of an external magnetic field on the resistivity of the bcc spin crystal are presented elsewhere.\\cite{binz06b}\n\n\n\\section{Effect of impurities: a possible route to 'partial order'}\\label{disorder}\n\nWhile the helical spin crystal states are expected to show Bragg\nspots at particular wavevectors, a variety of effects such as\nthermal or quantum fluctuations or disorder can destroy the long\nrange order while preserving the helical spin crystal structure at\nshorter scales. Here, we investigate in more detail the effect of\nnon-magnetic disorder on helical spin crystal structures.\n\n Although\nthe experimentally studied helimagnets are very clean from the\nelectrical resistivity point of view, the helical magnetic\nstructures are sensitive to disorder at a much longer length scale.\nIn addition, the low energy scales required to distort them mean\nthat one needs to consider disorder effects. An observation that can\nimmediately be made is that for the physically relevant case of\nnon-magnetic disorder ($V_{dis}(\\bv r)$), the single-spiral state\nand the spin crystal states respond very differently. By symmetry,\nthe coupling of disorder to the magnetic structure is given by\n$F_{dis}=\\langle V_{dis}(\\bv r) |\\bv M(\\bv r)|^2\\rangle$. Hence,\nsingle-spiral states, which are unique in having a spatially uniform\nmagnitude of magnetization ($|\\bv M(\\bv r)|={\\rm constant}$), are\nunaffected by this coupling; in contrast, the spin-crystal states\nnecessarily have a modulated magnitude\\cite{binz06} and hence are\naffected by non-magnetic disorder. Therefore the neutron scattering\nsignal of the spin-crystal state is expected to have more diffuse\nscattering than the single mode state. 
This is consistent with the\nexperimental observation that the high pressure phase has diffuse\nscattering peaked about $\\langle110\\rangle$ while the low pressure\nphase has sharper spots, consistent with identifying the two as\nspin-crystal and single-spiral states respectively.\n\n The effect of disorder on the\nspin-crystal state is closely related to the problem\nof the ordering of an XY model in the presence of a random external field.\nThe phase rotation symmetry of the XY model captures the\ntranslational invariance of the spin-crystal in the clean state.\nDisorder destroys this invariance and behaves like a random field\napplied to the XY system. Using the insights from the study of that\nproblem in three dimensions,\\cite{giamarchi95}\n one expects that for weak\ndisorder a Bragg glass will result, where although true long range\norder is destroyed, power law divergent peaks at the Bragg\nwavevectors remain, and the elastic constants remain finite. For\nstronger disorder one expects this algebraic phase to also be\ndestroyed, recovering a short range correlated phase without\nelasticity. However, for the case of the bcc1 and bcc2\ncrystals, due to time reversal symmetry ($\\mathcal{T}$) breaking in\nthese states, the disordered states also spontaneously break time\nreversal symmetry, and hence a phase transition is expected on\ncooling despite the absence of long range order. It is difficult to\npredict which of these two scenarios (Bragg glass or only\n$\\mathcal{T}$ breaking) is more appropriate for MnSi. 
In the latter\ncase one may estimate the spreading of the Bragg spots due to\ndisorder by considering the energetic cost to deform the\nspin-crystal state in different ways.\n\nIgnoring elastic contributions,\n there are two\ndistinct types of deformations: ones that involve a change in the\nmagnitude of the ordering wavevectors $\\delta q_\\parallel$\n and others that do not change\nthe wavevector magnitude but rotate the structure from its preferred\norientation: $\\delta q_\\perp$. The second is expected to cost little energy because\nrotations of the structure are resisted only by the crystal anisotropy\nterm, which is weak.\n From Eq.~\\eref{Fa}, we obtain the energy cost to shift the ordering vector by $\\delta q_\\perp$ along the sphere $|\\bv q|=Q$\n\\begin{equation}\n\\delta F_\\perp=\\frac{4\\alpha}{3\\kappa}\\left(\\frac{\\delta q_\\perp}Q\\right)^2,\\label{Fperp}\n\\ee\nwhere $\\kappa=1$ for the single-spiral. For multi-mode spin crystals, the energy cost of rotation is reduced, as explained in Section \\ref{crystalanisotropy}. Thus, $\\kappa\\approx4$ for the bcc spin crystals.\nIn contrast, deformations that change the magnitude of the ordering\nwavevectors must contend with the DM\ninteraction scale, and hence pay a higher energy penalty\n\\begin{equation}\n\\delta F_\\parallel=\\frac12\\Delta\\cdot|\\psi_0|^2\\cdot\\left(\\frac{\\delta q_\\parallel}Q\\right)^2.\\label{Fpar}\n\\ee\nAssuming\nthe disorder couples to these deformations equally, we can estimate\nthe ratio of their amplitudes in the limit of weak deformations by equating Eqs.~\\eref{Fperp} and \\eref{Fpar}. 
It follows\n\\begin{equation}\n\\left(\\frac{\\delta q_\\perp}{\\delta q_\\parallel}\\right)^2=\\frac{3\\kappa\\Delta|\\psi_0|^2}{8\\alpha}.\n\\ee\nUsing Eqs.~\\eref{Hsr} and \\eref{Hc}, we can relate this ratio to the experimentally known ratio between the critical magnetic fields for, respectively, reorienting and polarizing the single-spiral state\n\\begin{equation}\n\\frac{\\delta q_\\perp}{\\delta q_\\parallel}\n\\gtrsim\\sqrt{\\frac{3\\kappa}2}\\frac{H_{\\rm c}}{H_{\\rm sr}}.\n\\ee\n\nWe can now apply these results to the case of MnSi and test the\nhypothesis that the 'partial order' state is in fact a\ndisordered bcc spin crystal. Setting $\\kappa=4$ and using the\nexperimentally measured\\cite{thessieu97} critical fields for MnSi,\n$H_{\\rm c}=0.6$~T and $H_{\\rm sr}=0.1$~T, one obtains\n$\\delta q_{\\perp}\/\\delta q_\\parallel\\gtrsim 15$. Neutron scattering\nexperiments do indeed find that the transverse broadening is larger\nthan the longitudinal broadening, but since the latter is resolution\nlimited, this only gives us a lower bound that is consistent with\nthe estimate above: $[\\delta q_{\\perp}\/ \\delta\nq_{\\parallel}]_{\\rm{expt}}> 2.3$. Nevertheless, the trend that the\nwidth of the spot is greater along the equal magnitude sphere than\ntransverse to it is clearly seen in the experimental data.\n\nThus, weak non-magnetic disorder of the atomic crystal is expected to destroy magnetic long range order in multi-mode helical spin crystal states and lead to a neutron scattering signal compatible with the observations in the 'partial order' phase of MnSi. \n However in the case of bcc spin crystals, time-reversal symmetry breaking is expected to persist even in the presence of disorder. The scenario of interpreting 'partial order' in MnSi as a bcc1 state disordered by impurities thus predicts quasi-static local magnetic moments \n and implies a finite temperature phase transition on cooling into this phase. 
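The numerical estimate above follows in one line from the quoted critical fields; a back-of-the-envelope sketch (with $\kappa=4$ for the bcc spin crystals and the measured fields from the text):

```python
import math

kappa = 4      # anisotropy-energy reduction factor for bcc spin crystals
H_c = 0.6      # field polarizing the spiral state of MnSi, in Tesla
H_sr = 0.1     # spin-reorientation field of MnSi, in Tesla

# lower bound on the transverse-to-longitudinal broadening ratio
ratio = math.sqrt(3.0 * kappa / 2.0) * H_c / H_sr
```

The ratio evaluates to about 15, comfortably above the experimental lower bound of 2.3 set by the resolution-limited longitudinal width.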
\n \n\\section{Conclusion}\n\nWe have analyzed the magnetic properties of non-centrosymmetric weak ferromagnets subject to DM spin-orbit coupling. This problem falls into the general class of systems where the low energy excitations live on a surface in reciprocal space rather than on discrete points.\nThe addition of DM interactions to a ferromagnetic state produces a\nlarge degeneracy of magnetic states characterized by arbitrary\nsuperpositions of spin helices of a fixed helicity and fixed\nwavevector magnitude. This enormous degeneracy is broken by\ninteractions between modes, and the single-spiral state is realized\nfor slowly varying interactions, by virtue of its unique property of\nhaving a spatially uniform magnitude of magnetization. For more\ngeneral interactions, multi-mode helical spin crystal states are\nobtained. We show that for the model interactions considered, the\nphase diagram is largely determined just by considering the\ninteractions between pairs of modes. The phase that is eventually\nrealized may be readily deduced from the range of angles in which\nthis interaction drops below a critical value. In particular, the\nbcc structure is stabilized by virtue of the fact that its\nreciprocal lattice, fcc, is a close packed structure. These results\nmay also be relevant in other physical situations where\ncrystallization occurs, such as the Larkin-Ovchinnikov-Fulde-Ferrell\n instability in spin-imbalanced superconductors, which may\npotentially be realized in solid state systems,\\cite{CeCoIn5} cold\natomic gases\\cite{zwierlein} and dense nuclear matter.\\cite{Rajagopal}\n\nHelical spin crystals typically give rise to complicated real space\nmagnetic structures which we discussed in this paper. In particular,\ntopological textures like merons and anti-vortices can be seen about\nspecial axes in specific realizations, although these are not\nexpected to be stable given the order parameter and spatial\ndimensionality of the system. 
We show here that such topological\nstructures exist as a consequence of symmetry, which also dictates\nthe absence of certain higher Bragg reflections, which a naive\nanalysis would predict.\n\nThe response of helical spin crystals to crystalline anisotropy and\napplied magnetic field is considered with a special emphasis on the\nbcc structures, which are contrasted against the response of the\nsingle helix state. An unusual transfer of spectral intensity in the\npresence of an applied magnetic field, which is strongly dependent\non the direction of the applied field, is noted for the bcc structures.\nThis is a consequence of broken time reversal symmetry in the\nabsence of a net magnetization (which is symmetry forbidden). The\nunusual magnetotransport in such a state, a linear in field\nmagnetoresistance and quadratic Hall effect, has been discussed\nbriefly in Ref.~\\cite{binz06} and was elaborated upon in Ref.~\\cite{binz06b}.\n\n Helical spin crystals exhibit Bragg peaks at specific\nwave-vectors, and hence are not directly consistent with the\nexperimental observation of 'partial order'. The point of view taken\nin our earlier work\\cite{binz06} is that the short distance and\nshort time properties are captured by the appropriate helical spin\ncrystal structure. Studying the properties of helical spin crystals\nwith long range order is a theoretically well defined task with\ndirect consequences for a proximate disordered phase with similar\ncorrelations up to some intermediate scale. The mechanism\nthat leads to the destruction of long range helical spin crystal\norder is unclear; in Ref.~\\cite{binz06}, this was assumed to be the coupling\nto non-magnetic disorder. 
Then, as elaborated in this paper,\nbeginning with a bcc helical spin crystal a neutron scattering\nsignature consistent with that of 'partial-order' may be obtained.\nHowever, within the simplest version of this scenario, one also\nexpects a finite temperature phase transition where time reversal\nsymmetry breaking develops, and static magnetic order which may be\nseen in nuclear magnetic resonance or muon spin rotation\nexperiments. Other mechanism for the destruction of long range order\nof the bcc spin crystal state, such as thermal or quantum\nfluctuations may also be considered, but are left for future work.\n\n\n\\acknowledgments We would like to thank L. Balents, P. Fazekas, I. Fischer, D.\nHuse, J. Moore, N. Nagaosa, M.P. Ong, C. Pfleiderer, D. Podolsky, A. Rosch, T.\nSenthil, H. Tsunetsugu and C. Varma for useful and stimulating discussions and V.\nAji for an earlier collaboration on related topics. This work was\nsupported by the Swiss National Science Foundation, the A.P. Sloan\nFoundation and Grant No. DE-AC02-05CH11231.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzkyof b/data_all_eng_slimpj/shuffled/split2/finalzzkyof new file mode 100644 index 0000000000000000000000000000000000000000..013d62efd91375fc7e07cf050b7a30a6b1b06151 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzkyof @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe paper is written for the special issue dedicated to the outstanding physicist Mark Azbel. It addresses the problems of quantum physics, to which Prof. Azbel made a number of seminal \ntheoretical contributions \\cite{AzbelP,AzbelN,Azbel}. \n\nThe angle variable $\\varphi$ is ubiquitous in classical and quantum physics. Among examples of this variable are the rotation angle of the plane rotator (mechanical pendulum as a particular case) and the phase difference across the single Josephson junction (JJ). 
The canonically conjugate variable to the phase is the angular momentum $M$ (further called moment) in the first example and the charge in the second one. \n\nThe history of using the canonically conjugate pair ``angle (phase)--moment'' in quantum mechanics is full of controversies and disputes. In particular, the commutation relation \n\\begin{equation}\n[\\hat \\varphi,\\hat M]=i\\hbar\n \\ee{ComS}\nintroduced by \\citet{dirac} was challenged \\cite{judge,PhasRev,Pegg}. Here $\\hat \\varphi$ and $\\hat M$ are operators of the angle (phase) and the moment, respectively. The problem with the commutation relation was connected with the non-Hermitian character of the phase operator. It was suggested to repair this mostly mathematical flaw by rewriting the commutation relation in the terms of Hermitian operators $\\sin \\hat \\varphi$ and $\\cos \\hat \\varphi$ \\cite{suss}. The uncertainty relation \n\\begin{equation}\n\\Delta M \\Delta \\varphi > {\\hbar\\over 2} \n \\ee{unc}\nwas also under scrutiny \\cite{judge,PhasRev,Pegg}. Here $\\Delta M$ and $ \\Delta \\varphi$ are uncertainties of the moment and the phase, respectively.\n \nAnother problem (but connected with the first one) is that the phase $\\varphi$ is defined modulo $2\\pi$. One can choose the phase defined in an interval from an arbitrarily chosen phase $\\varphi_0$ to $\\varphi_0 +2\\pi$ (compact phase), or the phase ranging from $-\\infty$ to $\\infty$ (extended phase). If the phases differing by the integer number $s$ of $2\\pi$ describe the same state, it does not matter at all which phase, compact or extended, one uses in the theoretical analysis. The analysis (if correct) must lead to the same results. However, in quantum mechanics there are some subtleties, and there is no consensus on the dilemma ``compact vs. extended phase''. 
\n\nThe proponents of the suggestion that only the compact phase must be used in the quantum theory of the JJ argue that it is natural to expect that that the states with the phases $\\varphi$ and $\\varphi+2\\pi$ are the same state and only states with wave functions periodic in $\\varphi$ with the period $2\\pi$ are possible. This means that the variable canonically conjugated to the phase (moment in a quantum rotator, or charge in a JJ) is quantized. The proponents of the extended phase argue, that the ``natural expectation'' of the identity of the states with the phases $\\varphi$ and $\\varphi+2\\pi$ is not so natural and is invalid in the case of the JJ because of its interaction with the environment. Then different phases in the whole interval $-\\infty<\\varphi <\\infty$ always correspond to different states. This was called ``decompactification of the phase''. \n\nIt is important to stress, however, that the compact phase is sufficient for description of states, but not for description of dynamical processes of transitions between states with different phases. In these processes it is necessary to know not only the variation of the compact phase but also an integer number $s$ of full $2\\pi$ winding during the process. It is more convenient instead of two variables to deal only with one variable, which is an extended phase determined in the interval $(-\\infty,\\infty)$\n\\begin{equation}\n\\varphi(t) =2\\pi s(t) + \\varphi_c(t).\n \\ee{comp}\nHere $\\varphi_c$ is the compact phase determined in any interval of the length $2\\pi$. The voltage drop over the JJ is determined by the time derivative of the extended but not the compact phase. 
The time derivative of the compact phase has unphysical jumps when the phase reaches the borders of the $2\\pi$ interval chosen for the compact phase.\nThus, one should not interpret the requirement of using only the compact phase (phase compactification) literally but interpret it as the requirement of using only wave functions periodic in the extended phase $\\varphi$ with the period $2\\pi$. Under the assumption of decompactification this requirement is abandoned.\nThus, the dilemma ``compact vs. extended phase'' is in fact the dilemma of whether the states with the phases $\\varphi$ and $\\varphi+2\\pi$ are indistinguishable or distinguishable. Nevertheless, further in the paper the dilemma will be called ``compact vs. extended phase'', as widely accepted in the literature.\n\n\nThe actuality of this dilemma reemerged in the recent dispute about the Dissipative Quantum Phase Transition (DQPT) \\cite{Sacl,CommMurani,ReplMurani} between the superconducting and insulating states of a single JJ predicted about 40 years ago \\cite{Schmid,Bulg}. \\citet{Sacl} claimed that their experiment and theory proved the absence of the DQPT, because the single JJ cannot be an insulator. \n\nThe estimation done in Ref. \\onlinecite{CommMurani} demonstrated that the experiment of Murani {\\em et al.} was done under conditions in which the existing theory did not predict the DQPT. Therefore, the experiment could not provide any evidence either pro or contra the DQPT. Their theoretical arguments were also rejected, but they deserve an analysis more detailed than was possible within a short Comment \\cite{CommMurani}. In particular, Murani {\\em et al.} \\cite{Sacl,ReplMurani} argued that the conventional theory failed because it used the extended phase while only the compact phase must be used. 
This brings us to the topic of the present paper.\n\nBecause of the generality of the aforementioned problems with the phase variable, the paper addresses three systems: quantum rotator, particle in a periodic potential, and single JJ. In the quantum rotator a particle moves around a 1D ring. The quantum pendulum \\cite{Cond} and an electron moving around a 1D normal ring \\cite{silver,AzbelN,Buttiker,Gefen} are examples of the quantum rotator. The paper explores analogies between these systems, while looking for possible differences at the same time. \n\nIn order to resolve the dilemma ``compact vs. extended phase'', it is necessary to answer three questions:\n\\begin{enumerate}\n\\item\nAre the states with the phases $\\varphi$ and $\\varphi+2\\pi$ indistinguishable in the JJ?\n\\item\nMust the wave function be periodic in $\\varphi$ if the states with the phases $\\varphi$ and $\\varphi+2\\pi$ are indistinguishable?\n\\item\nLast but not least: Is it important for the theory of the DQPT whether the states with the phases $\\varphi$ and $\\varphi+2\\pi$ are distinguishable, or not?\n\\end{enumerate}\nThe paper looks for answers to these three questions. \n\n\n\nOur analysis has fully confirmed the final conclusions of the roughly 40-year-old conventional theory of the DQPT in the small JJ. But it reassessed the justifications of these conclusions and the analogies of the single JJ with other systems (quantum rotator and particle in a periodic potential). While in the past it was widely (but not unanimously) believed that the single JJ is an analog of a particle in a periodic potential, but not of a mechanical pendulum, we argue that the opposite is true. This means that the states with the phases $\\varphi$ and $\\varphi+2\\pi$ are distinguishable both in the JJ and the quantum rotator. However, whichever analogy is more correct, the DQPT must take place both in the JJ and in the quantum rotator. 
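The bookkeeping behind Eq.~\eref{comp} is easily made explicit: both the winding number $s(t)$ and the compact phase $\varphi_c(t)$ are recovered from the extended phase, and only the extended phase has a jump-free time derivative. A minimal numerical sketch (the uniformly winding trajectory is an arbitrary example):

```python
import math

def decompose(phi):
    # Eq. (comp): phi = 2*pi*s + phi_c, with phi_c chosen in [0, 2*pi)
    s = math.floor(phi / (2.0 * math.pi))
    return s, phi - 2.0 * math.pi * s

# a uniformly winding extended phase phi(t) = omega*t
omega, dt, steps = 3.0, 1e-3, 10000
phis = [omega * n * dt for n in range(steps)]
windings = [decompose(p)[0] for p in phis]
compacts = [decompose(p)[1] for p in phis]

# the extended phase advances smoothly, while the compact phase drops by
# 2*pi each time it reaches the border of its 2*pi interval
jumps = sum(1 for a, b in zip(compacts, compacts[1:]) if b < a)
```

Over this trajectory the total phase advance is $\omega t \approx 30$ rad, i.e. four full $2\pi$ windings, and the compact phase exhibits exactly one discontinuity per winding, the "unphysical jumps" discussed in the Introduction.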
\n\n\n\\section{The phase in the classical theory} \\label{cl}\n\n\\subsection{Plane rotator in the conjugate variable ``angle--moment''} \\label{QRc}\n\nThe Hamiltonian of the classical plane rotator is \n\\begin{equation}\nH={m^2\\over 2 J}+G (1-\\cos \\varphi )-\\varphi N,\n \\ee{Ho}\nwhere $m$ is the moment of the particle moving in the rotator, $J$ is the moment of inertia, $N$ is the external torque, and the periodic in $\\varphi$ potential $\\propto G$\nemerges from a constant force acting on the rotator (the gravity force in the case of the pendulum or the constant electric field in the case of a charged particle in a 1D ring). The $\\varphi$-dependent part of the Hamiltonian is the well known washboard potential.\nThe Hamilton equations are\n\\begin{eqnarray}\n{dm\\over dt}= -{\\delta H\\over \\delta \\varphi }=- G\\sin\\varphi +N,\n \\nonumber \\\\\n {d\\varphi \\over dt}={\\delta H\\over \\delta m}={m\\over J}.\n \\eem{phit}\n\n\nThe Hamiltonian \\eq{Ho} is not periodic in $\\varphi$. This looks as a flaw, since violates the principle that the states with the phases $\\varphi$ and $\\varphi+2\\pi $ are indistinguishable. \nAccording to this Hamiltonian, the energies of these states differ by the energy $2\\pi N$ pumped to the rotator from the environment after full $2\\pi$ winding of the rotator. \n\n\nThe flaw can be eliminated by using another Hamiltonian \n\\begin{equation}\nH={( M+M_0)^2\\over 2 J}+G (1-\\cos \\varphi ), \n \\ee{Hm}\nwhich is periodic in $\\varphi$. Here the moment $M_0$ transferred to the rotator by the external torque,\n\\begin{equation}\n{dM_0\\over dt}=N,\n \\ee{}\nwas introduced. Since $M_0$ emerges from the interaction with the environment, we shall call it {\\em external moment}. 
\nThe Hamilton equations for the Hamiltonian \eq{Hm} are\n\n\begin{eqnarray}\n{d M\over dt}= -{\delta H\over \delta \varphi }=- G\sin\varphi ,\n\nonumber \\\n{d\varphi \over dt}={\delta H\over \delta M}={ M+M_0\over J}.\n \eem{phgP}\nThe moment \n\begin{equation}\n M ={\partial {\cal L}\over \partial \dot \varphi}\n \ee{}\nis the canonical moment determined by the Lagrangian \n\begin{equation}\n{\cal L}={(J \dot \varphi -M_0)^2\over 2 J}-G (1-\cos \varphi ).\n \ee{}\nHowever, the angular velocity $\omega=\dot \varphi$ of the phase rotation is determined not by the canonical but by the kinetic moment $m= M+M_0$. The equations of motion \eq{phit} in terms of the observables $m$ and $\varphi$ do not depend on the choice between the Hamiltonians \eq{Ho} and \eq{Hm}, since the two Hamiltonians differ by the full time derivative of $\varphi M_0$, which does not affect the equations of motion. Later on we shall call the gauge with a periodic Hamiltonian, as in \eq{Hm}, and the gauge with a non-periodic Hamiltonian, as in \eq{Ho}, the {\em periodic} and the {\em non-periodic gauge}, respectively. \n\n\n\nThe terms {\em canonical moment} and {\em kinetic moment} were introduced by analogy with the {\em canonical momentum} and {\em kinetic momentum} of a charged particle in the electromagnetic field. Splitting the kinetic moment $m$ into the canonical moment and the external moment obtained from the environment is purely formal in the classical theory. But this splitting is more important in the quantum theory. \n\n In the absence of the torque $ N$, which pumps the moment and the energy into the system, there are two types of motion: (i) an oscillation around the ground state $\varphi=0$ with $m$ vanishing on average, and (ii) a monotonic rotation with $\langle m \rangle \neq 0$, the angular velocity $\dot \varphi(t)$ being a periodic function of time on the whole interval from $-\infty$ to $\infty$. 
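The gauge equivalence stated above can be checked numerically. The following sketch (all parameter values are illustrative; units with $J=1$) integrates the Hamilton equations \eq{phit} and \eq{phgP} side by side and confirms that the angle and the kinetic moment coincide in the two gauges:

```python
import numpy as np

# Side-by-side integration of the Hamilton equations in the two gauges
# (illustrative parameters, units with J = 1): Eq. (phit) evolves the
# kinetic moment m, Eq. (phgP) evolves the canonical moment M with the
# external moment M0 accumulated separately, dM0/dt = N.
J, G, N = 1.0, 1.0, 0.3
dt, steps = 1e-3, 20_000

m, phi1 = 0.0, 0.0                # non-periodic gauge
M, M0, phi2 = 0.0, 0.0, 0.0       # periodic gauge
for _ in range(steps):
    m += (-G * np.sin(phi1) + N) * dt
    phi1 += (m / J) * dt

    M += -G * np.sin(phi2) * dt
    M0 += N * dt
    phi2 += ((M + M0) / J) * dt

# the observable trajectory and the kinetic moment coincide in both gauges
assert abs(phi1 - phi2) < 1e-6
assert abs(m - (M + M0)) < 1e-6
```

The two integrations differ only in how the bookkeeping between $M$ and $M_0$ is split, which is the content of the statement that the splitting is purely formal in the classical theory.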
\nThe stationary state with constant \n\begin{equation}\n\varphi =\arcsin(N\/G)+2\pi s\n \ee{}\n is possible if $N<G$. At $N>G$ the torque drives the rotator to rotate with acceleration, but the monotonic rotation with the angular velocity $\omega=d\varphi \/dt$ periodically oscillating around the constant average angular velocity $\bar \omega$ is possible in the presence of friction.\n\n\n\subsection{Plane rotator vs. particle in a periodic potential}\n\nLet us compare the rotator dynamics with the dynamics of a particle with charge $q$ moving in a periodic potential and a classical electromagnetic field. In the latter case the Hamiltonian is \n\n\begin{equation}\nH={( P- {q\over c} A)^2\over 2 m_0 }+G\left(1-\cos {2\pi x\over l} \right) +q\Phi. \n \ee{Hg}\nHere $m_0$ is the particle mass, $c$ is the speed of light, $P$ and $A$ are the $x$-components of the canonical momentum $\bm P$ and of the electromagnetic vector potential $\bm A$ (we consider the 1D problem), and $\Phi$ is the electromagnetic scalar potential. The gauge transformation,\n\begin{equation}\nA=A' +\nabla \chi(x,t),~~\Phi=\Phi' -\dot \chi(x,t),\n \ee{GT}\nwith $\chi(x,t)$ being an arbitrary function of $x$ and $t$,\nyields the Hamiltonian \eq{Hg} with $A'$ and $\Phi'$ instead of $A$ and $\Phi$ and with the full time derivative $q\dot \chi(x,t)$ added, which does not affect the Hamilton equations of motion.\n\nThe torque on the plane rotator can be a result of the magnetic field normal to the plane of the rotator. The Hamiltonian \eq{Hg} also describes the dynamics of the plane rotator, with $x$ being the coordinate along the circumference of the 1D ring of the rotator and $l$ being the length of this circumference. 
The relations between the variables in the two representations are \n\begin{equation}\n\varphi={2\pi x\over l} ,~~M={Pl\over 2\pi},~~J={m_0l^2\over 4\pi^2} .\n \ee{cord}\nThe periodic gauge in Sec.~\ref{QRc} corresponds to the gauge without the scalar potential $\Phi$ and with the Hamiltonian \n\begin{equation}\nH={( P- {q\over c} A)^2\over 2 m_0 }+G\left(1-\cos {2\pi x\over l} \right). \n \ee{HmS}\nThe external moment and the external torque of Sec.~\ref{QRc} are\n\begin{equation}\nM_0=-{ql \over 2\pi c }A=-\hbar{\phi\over \phi_0},~~N=\dot M_0={ ql \over 2\pi}E,\n \ee{cor}\nwhere $E=-\dot A\/c$ is the azimuthal component of the electric field and $\phi_0=hc\/q$ is the magnetic flux quantum for the particle of charge $q$.\nThe external moment $M_0$ is determined by the magnetic flux $\phi=Al$ through the area restricted by the 1D ring of the rotator. The magnetic field is supposed to be axisymmetric.\n\nThe transformation with the gradient $\nabla \chi = A(t)$ independent of $x$ yields the non-periodic gauge, in which the vector potential $A(t)$ is absent, but instead the scalar potential $\Phi(x,t)=- E(t) x=\dot A(t) x\/c$, linear in $x$, appears: \n\begin{equation}\nH={ P^2\over 2 m_0 }+G \left(1-\cos {2\pi x\over l} \right)- q E x. \n \ee{WS}\n \n\nThe scalar potential in the non-periodic gauge is multivalued. \nThis does not produce any problem, since only fields, but not potentials, are observable quantities.\n\n\n\n Whatever gauge one uses, there is no difference between the dynamics of the rotator and of the charged particle in a periodic potential. The dynamics does not depend on whether the positions of the particle with the coordinates $x$ and $x+l$ (or angles $\varphi$ and $\varphi+ 2\pi $) are distinguishable, or not. Thus, in the classical theory the dilemma ``compact vs. 
extended phase'' does not exist and there is no difference between the dynamics in a 1D ring and in the infinite 1D space with a periodic potential.\n\n\n\section{The phase in the quantum theory}\n\n\n\subsection{Axisymmetric quantum rotator: commutation and uncertainty relations, and wave packets} \label{WP}\n\n\nThe standard way to go from the classical to the quantum theory is to replace the canonical moment $ M$ in the Hamiltonian by the operator\n\begin{equation}\n\hat { M} =-i\hbar {\partial \over \partial \varphi}.\n \ee{Mphi}\n\nGeneral problems with the canonically conjugate pair ``angle (phase)--moment'' can be discussed for\nthe simple case of the axisymmetric rotator ($G=0$). This case has already been investigated in the works on persistent currents in 1D normal rings \cite{Buttiker,Gefen}. For now we also ignore \n the external moment $M_0$. This means that we ignore any interaction with the environment, either at the present moment or in the past. \n \n The objection to the commutation relation \eq{ComS} was based on the following calculation of the matrix elements of the commutator between two eigenstates of the moment operator with the eigenvalues $M_s$ and $M_{s'}$ \cite{judge,PhasRev,Pegg}:\n\begin{eqnarray}\n\int\limits_0^{2\pi}\psi^*_{s'} [\hat \varphi,\hat M]\psi_s\,d\varphi\n\nonumber \\\n=-i\hbar \int\limits_0^{2\pi}\psi^*_{s'}\left[ \varphi{\partial \psi_s\over \partial \varphi}-{\partial (\varphi \psi_s)\over \partial \varphi} \right]\,d\varphi\n\nonumber \\\n=-i\hbar\int\limits_0^{2\pi}\left(\psi^*_{s'} \varphi{\partial \psi_s \over \partial \varphi}+{\partial \psi^*_{s'} \over \partial \varphi}\varphi \psi_s\right)\,d\varphi\n \nonumber \\\n=\left. 
i\\hbar \\psi^*_{s'} \\varphi\\psi_s \\right|_0^{2\\pi}+(M_s-M_{s'})\\int\\limits_0^{2\\pi}\\psi^*_{s'} \\varphi\\psi_s \\,d\\varphi\n \\eem{ComC}\nThe opponents of the commutation relation \\eq{ComS} neglected the first term emerging from the borders of the integration interval $(0,2\\pi)$. Then the diagonal matrix elements ($M_s=M_{s'}$) of the commutator vanish, while the diagonal matrix elements of the righthand side of the commutation relation \\eq{ComS} are not zero. The justification\nfor ignoring of the border contribution was that it should not appear in a matrix element of a Hermitian operator when the integrand is a periodic function of $\\varphi$. \nThe operator $\\hat \\varphi$ in the commutation relation is not Hermitian and breaks periodicity. \n\nVarious modifications of the commutation relation were suggested (one of them is discussed below). However, there is another resolution of the problem, which rehabilitates the commutation relation \\eq{ComS}. The border term in \\eq{ComC} appears after the integration by parts of only one from two terms in the original commutator. While any of two terms in the commutator separately is non-Hermitian and breaks periodicity, their sum is Hermitian and does not break periodicity. The matrix element of the commutator can be calculated without integration by parts:\n\\begin{eqnarray}\n\\int\\limits_0^{2\\pi}\\psi^*_{s'} [\\hat \\varphi,\\hat M]\\psi_s\\,d\\varphi\n\\nonumber \\\\\n=-i\\hbar \\int\\limits_0^{2\\pi}\\psi^*_{s'}\\left( \\varphi{\\partial \\psi_s\\over \\partial \\varphi}-\\varphi {\\partial \\psi_s\\over \\partial \\varphi} -\\psi_s\\right)\\,d\\varphi\n\\nonumber \\\\\n=i\\hbar \\int\\limits_0^{2\\pi}\\psi^*_{s'}\\psi_s\\,d\\varphi.\n \\eem{ComP}\nThis is equal to the matrix element of the righthand side of the commutation relation. 
Similar arguments rehabilitating the standard commutation relation \eq{ComS} were presented by \citet{LossM}.\n\nWe checked the commutation relation using the wave functions in the continuous space of the phase $\varphi$. Another route is to do it in the discrete space of the quantized moments $M$. Then one encounters the problem that, because of the discreteness of $M$, the expression conjugate to \eq{Mphi},\n\begin{equation}\n\hat \varphi =i\hbar {\partial \over \partial M},\n \ee{phiM}\nfor the operator $\hat \varphi$ in the moment space is invalid. Instead one can use the operator $e^{i\hat \varphi}$, which shifts from one quantized eigenvalue of the operator $\hat M$ to the next one.\nThe commutation relation with this operator is\n\begin{equation}\n[e^{i\hat \varphi},\hat M]=-\hbar e^{i\hat \varphi}.\n \ee{eph}\n\nThe operator $e^{i\hat \varphi}$ is a superposition of two Hermitian operators $\cos \hat \varphi$ and $\sin \hat \varphi$. The commutation relations for these operators, which are equivalent to \eq{eph}, were suggested by \citet{suss}. Although the commutation relation \eq{eph} contains only Hermitian operators, its expansion in $\hat \varphi$ consists of non-Hermitian operators, which must be treated correspondingly. A failure of some operation valid only for Hermitian operators means a failure of the operation, but not of the commutation relation.\n\nThe problem with the canonical commutation relation naturally leads to the problem with the uncertainty relation \eq{unc}. The uncertainty relation is derived from the analysis of semiclassical wave packets, which demonstrates the correspondence principle: the transition from the quantum mechanical to the classical description. The wave packet is formed by a superposition of the states with moments $M$ in an interval of width $\Delta M$. In the continuous moment space the superposition is determined by an integral. 
In the discrete moment space the integral must be replaced by a sum over the quantized values of the moment.\n \n \n \n Let us assume that the phase uncertainty is essentially less than $2\pi$. Then the moment uncertainty\n\begin{equation}\n\Delta M \gg {\hbar\over 4\pi} \n \ee{}\nis much larger than the distance between the quantized values of the moment. Then the quantization can be ignored, and the summation determining the wave packet can be replaced by integration. This provides a sufficiently accurate description of the wave packet within the phase interval $2\pi$. \n\nBut there is a hurdle in this picture. The original wave function of the wave packet with summation over the quantized values of the moment is periodic in $\varphi $ because any term of the sum is periodic. However, replacing summation by integration definitely breaks the periodicity. The flaw is easily healed. The wave packet is replaced by a periodic chain of packets. The procedure can be considered as a compactification of the phase, which the proponents of the compact phase insist on. Namely, one calculates the phase only in one $2\pi$ interval and then continues this wave function periodically to all other phase intervals. At the same time this demonstrates that the compactification is necessary only due to the inaccuracy of the approximation and is not needed for a sufficiently exact analysis. \n\n\nAnother situation emerges if one tries to construct a wave packet with the phase uncertainty $ \Delta \varphi$ exceeding $2\pi$. In this case the moment uncertainty $\Delta M <\hbar\/4\pi$ is less than the moment quantum, the summation reduces to only one term, and the picture of wave packets fails. Then the physical meaning of the uncertainty relation \eq{unc} becomes unclear.\n\nThere were suggestions to modify the standard uncertainty relation \eq{unc} as a reaction to the aforementioned problem \cite{judge,Pegg}. 
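The chain-of-packets construction is easy to illustrate numerically: a packet built as a discrete sum over quantized moments is automatically $2\pi$-periodic, whereas an integral over continuous $M$ would not be. The Gaussian weights and their center and width in this sketch are merely illustrative choices:

```python
import numpy as np

# A packet summed over quantized moments M = s*hbar is automatically
# 2*pi-periodic in phi (the "periodic chain of packets"); replacing the
# sum by an integral is what breaks periodicity. Center s0 and width
# sigma of the Gaussian weights are illustrative.
s = np.arange(-40, 41)
s0, sigma = 10.0, 5.0
c = np.exp(-((s - s0) ** 2) / (4 * sigma ** 2))

def psi(phi):
    return np.sum(c * np.exp(1j * s * phi))

deltas = [abs(psi(p + 2 * np.pi) - psi(p)) for p in np.linspace(-np.pi, np.pi, 7)]
assert max(deltas) < 1e-9 * abs(psi(0.0))
```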
These suggestions were based on the concept of the compact phase, which assumed that the phase uncertainty cannot exceed $2\pi$. This concept ignores that a phase fluctuation cannot be described only by the fluctuation of the compact phase (Sec.~\ref{cl}). The number $s$ of full $2\pi$ rotations [see \eq{comp}] in the course of the fluctuation is also important. We do not dwell more on this issue, since our analysis of the slow dynamics is based on the adiabatic approximation and does not use the wave packet concept. \n\n\n\subsection{Particle moving in a 1D ring vs. particle moving in an infinite 1D space} \label{per}\n\nLet us now compare the quantum mechanical dynamics of a particle moving in the 1D ring of the rotator and of a particle moving in the infinite 1D space. As in the previous subsection,\nwe ignore the periodic potential $G\left(1-\cos {2\pi x\over l} \right)$. This allows us to deal with simple analytical solutions of the Schr\\"odinger equation.\n\nIn quantum mechanics the canonical momentum becomes an operator:\n\begin{equation}\n\hat P=-i\hbar {\partial \over \partial x}.\n \ee{}\nThe quantum mechanical version of the Hamiltonian \eq{Hg} and the Schr\\"odinger equation at $G=0$ in the periodic gauge are \n\begin{equation}\nH={1\over 2 m_0}\left|-i \hbar{\partial \psi \over \partial x} -{q\over c}A(t)\psi\right|^2, \n \ee{Hcx}\n\begin{equation}\ni\hbar {\partial \psi \over \partial t}=-{1\over 2m_0}\left[ \hbar{\partial \over \partial x} -i{q\over c}A(t)\right]^2\psi .\n \ee{Shcx}\nThe Schr\\"odinger equation has a solution for an arbitrary time dependence of $A(t)$:\n\begin{equation}\n\psi(x,t) = e^{iP x \/\hbar-i \int^t{{\cal E}(t')\over \hbar}dt'},\n \ee{wfc}\nwhich is an eigenstate of the canonical momentum with the eigenvalue $P$. 
Here the time dependent energy is\n\begin{equation}\n{\cal E}(t)={[P -{q\over c}A(t)]^2\over 2m_0}.\n \ee{}\nThe particle velocity \n\begin{equation}\nv ={dx\over dt} ={d{\cal E}\over d P}={P -{q\over c}A(t)\over m_0}\n \ee{omega0}\ndepends on the kinetic momentum $p=P -{q\over c}A(t)$ and is well defined, while the coordinate $x$ itself is not defined at all. There is an equal probability for any value of $x$. \nAn electric field $E=-\dot A\/c$ monotonically accelerates the particle, as in the classical theory.\n\nIn a constant electric field\n\begin{equation}\n\psi(x,t) = e^{iP x \/\hbar- {i(P+qEt)^3\over 6m_0 qE\hbar}}.\n \ee{}\n\n\n\nIn quantum mechanics the difference between the particle dynamics in the infinite 1D space and in the 1D ring becomes important. In the former case any value of $P$ is allowed. In the latter case \nthe canonical momentum $P$ is quantized and cannot differ from the values $sh\/l$ with integer $s$. Only at these quantized values is the wave function \eq{wfc} periodic with the period $l$. \nIn the variables ``moment--angle'' the quantized values of the canonical moment $M=Pl\/2\pi$ [see \eq{cord}] are $s\hbar$.\nThe plot of the energy vs. the external moment $M_0$ for different quantized values of the canonical moment is shown in Fig.~\ref{f1}(a).\n\n \n\begin{figure}[!b]\n\includegraphics[width=0.4 \textwidth]{Energ} \n \caption{ Plot of the energy vs. the external moment $M_0$ at various quantized values of the canonical moment $M=s\hbar$. (a) Axisymmetric rotator. (b) Quantum rotator in a constant field. 
\label{f1}}\n \end{figure}\n\n\n\n\nIn the quantum theory the gauge transformation \eq{GT} must be accompanied by the transformation of the wave function \cite{LLqu}:\n\begin{equation} \n\psi =\psi'e^{iq\chi\/\hbar c}.\n \ee{GTp}\nAfter the transformation with $\chi =xA=-xc\int^tE(t')dt'$,\n\begin{equation} \n\psi =\psi'e^{i qAx\/\hbar c},\n \ee{gauge}\n from the periodic to the non-periodic gauge, the Hamiltonian and the Schr\\"odinger equation become\n\begin{equation}\nH={\hbar^2\over 2 m_0}\left|{\partial \psi' \over \partial x}\right|^2- qE(t)x|\psi'|^2,\n \ee{Hwb0}\n\begin{equation}\ni\hbar {\partial \psi' \over \partial t}=-{\hbar^2\over 2m_0}{\partial^2 \psi '\over \partial x^2} - qE(t)x\psi'.\n \ee{Sht0}\nThe gauge transformation \eq{gauge} yields a non-stationary state with the non-periodic wave function\n \begin{eqnarray}\n\psi'=\psi e^{-iqAx\/\hbar c} = e^{i\left(P +qE t\right) x \/\hbar-i \int^t{{\cal E}(t')\over \hbar}dt'}.\n \eem{nonpf}\n In a constant electric field\n\begin{equation}\n\psi'(x,t) = e^{i(P+qEt) x \/\hbar- {i(P+qEt)^3\over 6m_0 qE\hbar}}.\n \ee{}\n\n\n\nAfter the gauge transformation the canonical and the kinetic momentum do not differ and are determined by the operator\n\begin{equation}\n\hat p'=\hat P'=-i\hbar{\partial \over \partial x}\n \ee{} \nin the space of functions $\psi'$. In the quantum rotator the quantization of the canonical momentum must be done in the periodic, but not in the non-periodic, gauge. This means that it is not the momentum $p'$ but the canonical momentum $P=sh\/l$ that is equal to an integer number $s$ of momentum quanta. \n\nIn the non-periodic gauge the wave function is non-periodic since the gauge transformation \eq{GTp} and the Hamiltonian \eq{Hwb0} are non-periodic. 
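As a consistency check, one can verify symbolically that the constant-field wave function above solves the Schr\\"odinger equation \eq{Sht0}; all symbols in this sketch are generic parameters (positivity is assumed only for convenience):

```python
import sympy as sp

# Symbolic check that the constant-field wave function solves
# i hbar d_t psi' = -(hbar^2/2 m0) d_x^2 psi' - q E x psi'.
x, t = sp.symbols('x t', real=True)
hbar, m0, q, E, P = sp.symbols('hbar m_0 q E P', positive=True)

psi = sp.exp(sp.I * (P + q * E * t) * x / hbar
             - sp.I * (P + q * E * t) ** 3 / (6 * m0 * q * E * hbar))

lhs = sp.I * hbar * sp.diff(psi, t)
rhs = -hbar ** 2 / (2 * m0) * sp.diff(psi, x, 2) - q * E * x * psi
assert sp.simplify(sp.expand((lhs - rhs) / psi)) == 0
```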
One can rewrite \\eq{nonpf} as \n\\begin{eqnarray}\n\\psi'= e^{iPx \/\\hbar-i \\int^t{{\\cal E}'(t')\\over \\hbar}dt'},\n \\eem{}\nwhere \n\\begin{equation}\n{\\cal E}'(t)={\\left(P +qEt\\right)^2\\over 2J}-qEx\n \\ee{}\nis the energy after the gauge transformation. It is evident that the wave function is not periodic because of the non-periodic term $-qEx$ in the energy. It is impossible to satisfy the requirement of the wave function periodicity in the non-periodic gauge. If the wave function is periodic at some moment of time it will become non-periodic at the next moment of time because of the non-periodic Schr\\\"odinger equation.\n\n\n\nThe loss of periodicity of the wave function in the non-periodic gauge should not be a matter of concern, as well as not a matter of concern is the non-periodic electric scalar potential. The phase factor, which makes the wave functions $\\psi(x)$ and $\\psi(x+l)$ different, means that the state is described by a multivalued wave function. The property of the gauge transformation to make the wave function multivalued was pointed out by \\citet{LLqu} in Sec. 111 of their book. 
Multivaluedness (non-periodicity) of the wave function of the quantum rotator compensates the multivaluedness (non-periodicity) of the washboard potential in the non-periodic gauge \cite{david}.\n\n\nIn summary, the important and only difference between the dynamics of the particle in the quantum rotator and of the particle moving in the infinite 1D space is that the Hilbert space of wave functions in the former case is discrete and is a subspace of the continuous Hilbert space of the latter case.\n\n\subsection{Quantum rotator in an external constant field} \label{rcf}\n\nWhile in Sec.~\ref{per} the variables ``coordinate--momentum'' were more convenient for the comparison of the rotator with the particle moving in the infinite 1D space, here we return to the variables ``angle (phase)--moment'', which are more convenient for the comparison with the JJ.\n\nIn the presence of the external constant field the quantum mechanical version of the Hamiltonian \eq{Hm} in the periodic gauge is\n\begin{equation}\nH={1\over 2 J}\left|-i \hbar{\partial \psi \over \partial \varphi} +M_0\psi\right|^2+G (1-\cos \varphi ) |\psi|^2.\n \ee{Hms}\nThe Schr\\"odinger equation for this Hamiltonian is\n\begin{equation}\ni\hbar {\partial \psi \over \partial t}=-{1\over 2J}\left(\hbar{\partial \over \partial \varphi}+iM_0\right)^2\psi +G (1-\cos \varphi ) \psi.\n \ee{Sh}\n\n\nAccording to the Bloch theorem, any stationary solution of \eq{Sh} is the Bloch function\n\begin{equation}\n\psi(\varphi,t)=u(\tilde M+M_0,\varphi)e^{i (\tilde M \varphi- E_0 t)\/\hbar},\n \ee{BF}\nwhere $u(\tilde M+M_0,\varphi)=u(\tilde M+M_0,\varphi+2\pi)$ is a periodic in $\varphi$ function and $\tilde M$ is the canonical quasimoment (an analog of the canonical quasimomentum in the theory of solids \cite{LLstPh2}). The energy spectrum of the Bloch states consists of bands with forbidden gaps between them. 
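The Bloch band picture can be made concrete with a small numerical sketch: diagonalizing the Hamiltonian \eq{Hms} in the plane-wave basis $e^{in\varphi}$ gives the lowest band $E_0(\tilde m)$ as a periodic function of the kinetic quasimoment (units with $\hbar=J=1$; the basis cutoff and the values of $G$ are illustrative):

```python
import numpy as np

# Lowest Bloch band E0(m_tilde) of H = (M + M0)^2/2J + G(1 - cos(phi)),
# obtained by diagonalization in the plane-wave basis e^{i n phi}
# (units hbar = J = 1; cutoff nmax and the values of G are illustrative).
hbar = J = 1.0
nmax = 20
n = np.arange(-nmax, nmax + 1)

def E0(m_tilde, G):
    # kinetic term on the diagonal; the cosine potential couples n to n +/- 1
    H = np.diag((hbar * n + m_tilde) ** 2 / (2 * J) + G)
    H += np.diag(np.full(2 * nmax, -G / 2), k=1)
    H += np.diag(np.full(2 * nmax, -G / 2), k=-1)
    return np.linalg.eigvalsh(H)[0]

# weak binding: E0 is close to m_tilde^2/2J plus the mean potential G
assert abs(E0(0.2, 0.01) - (0.01 + 0.2 ** 2 / 2)) < 1e-3
# the band is a periodic function of the kinetic quasimoment, period hbar
assert abs(E0(0.3, 1.0) - E0(1.3, 1.0)) < 1e-6
```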
In the quantum rotator the Bloch wave function must be periodic in $\varphi$. This is possible only for the quantized values of the canonical quasimoment $\tilde M=s\hbar$. \n\n\n\nWe consider only the lowest band with the energy $E_0(\tilde M +M_0)$, which depends on the kinetic quasimoment $\tilde m=\tilde M +M_0$. \nBy analogy with the quasimomentum and the coordinate operators for a particle in a periodic potential, one can consider the energy $E_0(\tilde M +M_0)$ as a Hamiltonian \cite{LLstPh2}, \nwhich yields the Hamilton equations \n\begin{equation}\n{d \tilde M\over dt}=- {\partial E_0\over \partial \varphi}=0,\n \ee{Mt}\n\begin{equation}\n\omega={d \varphi\over dt}={\partial E_0(\tilde M +M_0)\over \partial \tilde M}.\n \ee{omega}\nWhile the canonical quasimoment $\tilde M$ does not vary in time, the kinetic quasimoment $\tilde m=\tilde M +M_0$ depends on time:\n\begin{equation} \n{d\tilde m\over dt}={d\tilde M\over dt} +\dot M_0=\dot M_0.\n \ee{tlM}\nIn general, these equations must be operator equations for the conjugate operators of the canonical quasimoment $\tilde M$ and of the angle $\varphi$ \cite{LLstPh2}. But if the torque is weak, one can use the adiabatic approximation with $M_0$ being a slowly varying adiabatic parameter. This allows one to assume that at any moment the state does not differ essentially from the eigenstate of the canonical quasimoment with the eigenvalue $\tilde M$ at fixed $M_0$. Then Eqs.~(\ref{Mt}--\ref{tlM}) can be treated as classical equations. \n\n \n In the theory of solids the classical treatment of Eqs.~(\ref{Mt}) and (\ref{omega}) is sometimes justified by considering them as written for semiclassical wave packets \cite{ziman}. 
Since for the quantum rotator the concept of wave packets is problematic (see Sec.~\ref{WP}), it is important that for this justification we used the adiabatic principle rather than the concept of wave packets.\n\n\nThe function $E_0(\tilde m)$ is determined by the solution of the Schr\\"odinger equation in Mathieu functions. Close to the bottom of the band\n\begin{equation}\n E_0(\tilde M +M_0)={(\tilde M +M_0)^2\over 2 J^*},~~\omega={\tilde M +M_0\over J^*},\n \ee{}\nwhere \n\begin{equation}\nJ^* = \left[{\partial ^2E_0(\tilde M +M_0)\over \partial \tilde M^2}\right]^{-1}\n \ee{}\nis the effective moment of inertia, an analog of the effective mass in the Bloch theory for solids.\n\n\n\n\n\nIn the weak binding limit $G \ll \hbar^2\/J$ the energy in the Brillouin zone $-\hbar\/2 < \tilde m < \hbar\/2$ is $E_0={\tilde m^2\over 2J}$, except in the close vicinity of the zone borders.\nThe effective moment of inertia $J^*$ does not differ from the bare moment of inertia $J$. The dependence of the energy on the external moment $M_0$ in the weak binding limit is shown for two Bloch bands at the quantized values $\tilde M=s\hbar$ in Fig.~\ref{f1}(b).\n\nIn the strong binding limit $G \gg \hbar^2\/J$ the band is narrow:\n \begin{equation}\n E_0=\Delta \left(1-\cos {2\pi \tilde m\over \hbar}\right), \n \ee{}\nwhere $\Delta $ is the band half-width, which goes to zero at $GJ\/\hbar^2\to \infty$. The effective moment of inertia is\n\begin{equation}\nJ^*={\hbar^2 \over 4\pi^2\Delta}.\n \ee{}\n\nThe band energy has extrema at $M_0=s\hbar$, where the phase angular velocity $\omega$ vanishes. Thus, at zero external moment $M_0$, i.e., in the absence of any connection with the environment, either at the present moment or in the past, monotonic phase rotation is impossible. 
This follows from the analysis of the quantum pendulum \n\cite{Cond} and of the quantum rotator in a constant electric field \cite{silver}, both ignoring the connection with the environment. \n \n The impossibility of monotonic phase rotation without any connection with the environment can be explained by the following arguments. In an axisymmetric rotator, i.e., without an external constant field, there are two degenerate eigenstates of fixed energy, which are either the two states $e^{\pm i M\varphi\/\hbar}$ with rotating phase, or the states $\cos {M\varphi\over \hbar}$ and $\sin {M\varphi\over \hbar}$ with vanishing average angular velocity $\langle\dot \varphi\rangle$. But in a coherent superposition of the degenerate states $\cos {M\varphi\over \hbar}$ and $\sin {M\varphi\over \hbar}$ a nonzero angular velocity $\langle\dot \varphi\rangle$ is possible. However, an arbitrarily weak phase-dependent potential (even a single impurity) breaks the axial symmetry and lifts the degeneracy of the states $\cos {M\varphi\over \hbar}$ and $\sin {M\varphi\over \hbar}$. Then the superposition of the two states is not an eigenstate of the energy operator. In any eigenstate of the energy, phase rotation is impossible.\n \n\nLet us apply a constant weak torque $N$ to the rotator. The external moment $M_0=Nt$ is proportional to time, and the time can be excluded from Eqs.~(\ref{omega}) and (\ref{tlM}). This yields the equation \n\begin{equation}\nN{d \varphi\over d\tilde m}={\partial E_0(\tilde m)\over \partial \tilde m}.\n \ee{Tm}\nThe equation describes Bloch oscillations with the time period\n \begin{equation}\n T={\hbar \over N }. 
\n \\ee{t}\nWhile in the absence of the external field ($G=0$), the torque produces rotation with acceleration as in the classical theory (Sec.~\\ref{per}), in the presence of the periodic external field\nthe group velocity $\\partial E_0(\\tilde m)\/\\partial \\tilde m$ is a periodic function of $\\tilde m$, and the phase angular velocity $\\omega$ performs periodic Bloch oscillations. The angular velocity vanishes after averaging over the time and the amplitude of the phase oscillation is determined by the width $\\Delta E_0=E_{0\\mbox{max}}-E_{0\\mbox{min}}$ of the Bloch band: \n\\begin{equation}\n\\Delta \\varphi ={ \\Delta E_0\\over N }.\n \\ee{}\nThus, any finite torque makes a monotonic rotation impossible. However, in the limit $N \\to 0$ when the period $ T$ becomes much longer than the time of observation, one cannot discern the Bloch oscillation from monotonic rotation.\n\nNext we phenomenologically introduce dissipation. The environment not only pumps the moment into the rotator, but also provides a friction torque proportional to the phase angular velocity:\n\\begin{equation}\n\\dot M_0= N -f \\omega = N -f {\\partial E_0(\\tilde m)\\over \\partial \\tilde m}.\n \\ee{dis}\nHere $f$ is the friction coefficient. This equation has a solution with constant $M_0$ and the phase angular velocity \n\\begin{equation}\n\\omega={N\\over f}.\n \\ee{Tf}\nNote that no moment is pumped to the rotator since the moment $M_0$ does not vary in time. The external torque $ N$ is balanced by the friction torque $f\\omega$, i.e., the pumped moment is returned back to the environment.\n\nRotation of the particle of charge $q$ with the angular velocity $\\omega$ produces the current $j=q\\omega\/2\\pi$. At the same time, the torque is connected with the electric field $E$ [see \\eq{cor}]. 
Thus, \\eq{Tf} is in fact the Ohm law $j=El \/R_r$, where \n\\begin{equation}\nR_r ={4\\pi^2 f\\over q^2}\n \\ee{Rf}\nis the resistance to the circular current $j$ around the 1D ring of the quantum rotator.\n\n\\begin{figure}[!t]\n\\includegraphics[width=0.4 \\textwidth]{VIcurve} \n \\caption{Plot the average angular velocity $\\bar \\omega$ vs. torque $N$ for the quantum rotator and plot the average voltage $\\bar V$ vs. current $I$ for the JJ in dimensionless variables. Curve 1: Weak binding limit, $\\omega_0=2\\hbar\/J$, $N_0=\\hbar f\/2 J$, $V_0=e\/C$, $I_0=e\/RC$. Curve 2: Strong binding limit, $\\omega_0=\\hbar\/2\\pi J^*$, $N_0=\\hbar f\/2\\pi J^*$, $V_0=e\/\\pi C^*$, $I_0=e\/\\pi RC$. \\label{f2}}\n \\end{figure}\n\n\n\nThe derivative $\\partial E_0(\\tilde m)\/ \\partial \\tilde m$ of the periodic function has a maximum which determines the maximum of the phase angular velocity $\\omega_m$. When the torque $N$ becomes larger than $f \\omega_m$ the steady phase rotation is impossible and the Bloch oscillation starts. In the presence of the dissipation Eqs.~(\\ref{Tm}) and (\\ref{t}) become\n\\begin{equation}\n{d \\varphi\\over d\\tilde m}=\\frac{{\\partial E_0(\\tilde m)\\over \\partial \\tilde m}}{N-f\\ {\\partial E_0(\\tilde m)\\over \\partial \\tilde m}},\n \\ee{Tmf}\n \\begin{equation}\n T=\\int\\limits_{-\\hbar\/2}^{\\hbar\/2}{d\\tilde m \\over N-f {\\partial E_0(\\tilde m)\\over \\partial \\tilde m }}. \n \\ee{tf}\nNow the phase not only oscillates but also rotates with the average angular velocity\n\\begin{equation}\n\\bar \\omega ={1\\over T} \\int\\limits_{-\\hbar\/2}^{\\hbar\/2}{{\\partial E_0(\\tilde m)\\over \\partial \\tilde m } \\over N-f {\\partial E_0(\\tilde m)\\over \\partial \\tilde m }}d\\tilde m={N\\over f}-{\\hbar \\over fT}.\n \\ee{omf}\nAt large $N$ the average angular velocity decreases as $1\/N$. 
\n\nIn the weak binding limit $G \ll \hbar^2\/J$ \n\begin{equation}\nT= {J\over f} \ln \frac{1+{\hbar f\over 2N J}}{1-{\hbar f\over 2N J}} ,~~\bar \omega={N \over f}- {\hbar \over J}\ln^{-1} \frac{1+{\hbar f\over 2N J}}{1-{\hbar f\over 2N J}} .\n \ee{WC}\n In the strong binding limit $G \gg \hbar^2\/J$\n\begin{equation}\nT= {J^*\over f}{1 \over \sqrt{{J^{*2}N^2\over \hbar^2 f^2 }- {1\over 4\pi^2}}},~~\bar \omega ={ N\over f }-{\hbar \over J^*}\sqrt{{J^{*2}N^2\over \hbar^2 f^2 }- {1\over 4\pi^2}} .\n \ee{WCs}\nThe dependences of the average angular velocity $\bar\omega$ on the torque $N$ in the weak and the strong binding limits are shown in dimensionless variables in Fig.~\ref{f2}. \n\nThe gauge transformation \eq{gauge} in the variables ``phase--moment'' is \n\begin{equation} \n\psi =\psi'e^{-iM_0\varphi\/\hbar }.\n \ee{gaug}\nIt transforms the Hamiltonian \eq{Hms} and the Schr\\"odinger equation \eq{Sh} to \n\begin{equation}\nH={\hbar^2\over 2 J}\left|{\partial \psi' \over \partial \varphi}\right|^2+[G (1-\cos \varphi )- \dot M_0 \varphi]|\psi'|^2,\n \ee{Hwb}\n\begin{equation}\ni\hbar {\partial \psi' \over \partial t}=-{\hbar^2\over 2J}{\partial^2 \psi' \over \partial \varphi^2}+[G (1-\cos \varphi )- \dot M_0 \varphi] \psi',\n \ee{Sht}\nwhich are not periodic in $\varphi$. 
The Bloch wave function after the transformation is also non-periodic:\n\begin{eqnarray}\n \psi'(\varphi)=u(\tilde M+M_0,\varphi)e^{i \left[(\tilde M+M_0) \varphi- \int^{t}E_0( t')dt'\right]\/\hbar}\n \nonumber \\\n=u(\tilde M+M_0,\varphi)e^{i\left[\tilde M\varphi-\int^{t}E'_0 (t')dt' \right]\/\hbar},\n \eem{M0}\nwhere\n\begin{equation}\nE'_0=E_0(\tilde M+M_0) -\dot M_0 \varphi=E_0(\tilde M+M_0) -N\varphi\n \ee{}\nis the energy in the non-periodic gauge.\nFurther discussion of the non-periodic gauge does not differ essentially from the discussion of this gauge for the axisymmetric rotator (Sec.~\ref{per}). The wave function is not periodic because in the non-periodic gauge the energy with the washboard potential is non-periodic.\n\n \n \begin{figure}[!t]\n\includegraphics[width=0.4 \textwidth]{Energ0} \n \caption{The band energy ${\cal E}_0$ vs. the external moment $M_0$. (a) The canonical quasimoment $\tilde M =s\hbar$ is quantized. (b) The canonical quasimoment $\tilde M $ is not quantized. There is a continuous manifold of curves for various values of $\tilde M$. \label{f1.1}}\n \end{figure}\n\n\n\n Let us now address the case of the non-quantized canonical quasimoment $\tilde M$ (a particle in a periodic potential in the infinite 1D space). Figure \ref{f1.1} compares the dependence of the energy ${\cal E}_0$ in the lowest Bloch band on the external moment $M_0$ for the quantized [Fig.~\ref{f1.1}(a)] and the non-quantized [Fig.~\ref{f1.1}(b)] canonical quasimoment $\tilde M$. Instead of a single curve for quantized $\tilde M$ one has a continuous manifold of curves. Figure~\ref{f1.1}(b), however, shows only a discrete manifold of curves, in order to demonstrate that the curves are obtained from the single curve in Fig.~\ref{f1.1}(a) by a shift without deformation. 
\n\n \n Despite this difference between quantized and non-quantized $\\tilde M$, for the regimes discussed above (monotonic phase rotation and Bloch oscillation) the effect of quantization is practically absent. The dynamics of these regimes is governed by the kinetic quasimoment $\\tilde m =\\tilde M+M_0$. In the absence of quantization the division of $\\tilde m$ into quantized $\\tilde M$ and non-quantized $M_0$ is meaningless since neither of them is quantized. If in Fig.~\\ref{f1.1}(b) one plots the energy as a function of $\\tilde m$ instead of $M_0$, this yields the same single curve as in Fig.~\\ref{f1.1}(a).\n\nSummarizing, the slow dynamics of the particle moving in the 1D ring of the rotator with indistinguishable states with the phases $\\varphi$ and $\\varphi+2\\pi$ does not differ from the dynamics of the particle moving in the infinite 1D space with the periodic potential, when the phases $\\varphi$ and $\\varphi+2\\pi$ correspond to different states.\nThis is because only one Bloch state participates in the adiabatic processes. During its tuning by the external torque there are neither intraband nor interband\ntransitions between Bloch states.\n\n\n\\section{JJ and DQPT}\n\nThere is a one-to-one correspondence (ideal mapping) between the quantum rotator in the constant external field and the single JJ. The correspondence between the variables of the two systems is shown in Table 1. 
\n\n\\begin{table}[ht]\n\\begin{tabular}{|l|l|} \\hline {\\bf Rotator}~~~~~~~~~~~~~~~~ & {\\bf JJ}\n \\\\\n & \\\\ \\hline\n Canonical moment $M$ & Canonical charge $Q \\to {2e \\over \\hbar} M$ \\\\ \\hline\n External moment $M_0$ & External charge $Q_0 \\to {2e \\over \\hbar}M_0$ \\\\ \\hline \n Torque $N$ & Electrical current $I\\to {2e \\over \\hbar}N $ \\\\ \\hline\n Rotation angle $\\varphi$ & Quantum mechanical phase $\\varphi$ \\\\ \\hline\nAngular velocity $\\omega=\\dot \\varphi$ & Voltage $V= {\\hbar\\over 2e}\\dot \\varphi$ \\\\ \\hline\nMoment of inertia $J$ & Capacitance $C\\to {4e ^2\\over \\hbar^2}J$ \\\\ \\hline\nFriction coefficient $f$ & Conductance $1\/R\\to {4e ^2\\over \\hbar^2 }f$ \\\\ \\hline\n \\end{tabular}\n\\caption{Correspondence of variables of the rotator and the JJ} \\end{table}\n\n\nAs in the case of the quantum rotator, the theory of the JJ can use either the periodic gauge, in which the Hamiltonian and the wave function are periodic in $\\varphi$, or the non-periodic gauge, in which both the Hamiltonian and the wave function are not periodic in $\\varphi$. \n\nTranslating the periodic Hamiltonian \\eq{Hms} and the Schr\\\"odinger equation \\eq{Sh} for the rotator to the JJ one obtains\n\\begin{equation}\nH={1\\over 2 C}\\left|-i 2e{\\partial \\psi \\over \\partial \\varphi} +Q_0\\psi\\right|^2+E_J (1-\\cos \\varphi ) |\\psi|^2,\n \\ee{HmQ}\n\\begin{equation}\ni\\hbar {\\partial \\psi \\over \\partial t}=-{1\\over 2C}\\left(2e{\\partial \\over \\partial \\varphi}+iQ_0\\right)^2\\psi +E_J (1-\\cos \\varphi ) \\psi.\n \\ee{ShQ}\nHere $Q_0$ is the external charge, while the canonical charge is an operator\n\\begin{equation}\n\\hat Q =-2ie{\\partial \\over \\partial \\varphi}.\n \\ee{}\nThe gauge transformation analogous to \\eq{gauge},\n\\begin{equation}\n\\psi=\\psi' e^{-iQ_0 \\varphi \/2e},\n \\ee{}\nyields the Hamiltonian and the Schr\\\"odinger equation in the non-periodic gauge. 
The wave function is periodic in $\\varphi$ in the periodic gauge, but not periodic in the non-periodic gauge. \n\nSolutions of the Schr\\\"odinger equation \\eq{ShQ} are Bloch functions \n\\begin{equation} \n\\psi(\\varphi,t) =u(\\tilde Q+Q_0,\\varphi)e^{i(\\tilde Q\\varphi\/2e -E_0t\/\\hbar)},\n \\ee{}\nwhere $\\tilde Q$ is the canonical quasicharge, which is quantized in the JJ, and $u(\\tilde Q+Q_0,\\varphi)$ is a periodic function of the kinetic quasicharge $\\tilde q= \\tilde Q+Q_0$ with the period $2e$.\n\nThe further analysis is similar to that for the quantum rotator (Sec.~\\ref{rcf}). The analog of \\eq{dis} is Kirchhoff's law\n\\begin{equation}\n\\dot {\\tilde q}=\\dot Q_0 =I - {V\\over R},\n \\ee{}\nwhere \n \\begin{equation}\nV={\\hbar \\over 2e}{d\\varphi\\over dt} \n \\ee{}\nis the voltage drop across the JJ. The ohmic resistance \n\\begin{equation}\nR= {R_sR_{qp}\\over R_s+R_{qp}},\n \\ee{}\nis determined by the resistance $R_{qp}$ of the normal channel in the JJ and by the resistance $R_s$ of the external shunt parallel to the JJ.\n\nAt small current $I$, $\\dot Q_0=0$, and the phase rotates with a constant angular velocity, i.e., at a constant voltage $V=IR$. This means that the whole current goes through the ohmic channel, and the JJ is an insulator. The insulating state is possible as long as the voltage $V$ does not exceed the voltage $V_0$ equal to the maximum of the derivative ${\\partial E_0\/ \\partial Q_0}$ in the Bloch band. In the limits of weak and strong binding\n\\begin{equation}\nV_0=\\left\\{\\begin{array}{ccc} {e\\over C} & & E_J\\ll {e^2\\over C} \\\\ {e\\over\\pi C^*}& & E_J\\gg {e^2\\over C} \\end{array}\\right. .\n \\ee{}\nThe voltage $V_0$ is the electric breakdown voltage of the insulator \\cite{CommMurani}. At $V>V_0$ the steady rotation of the phase is impossible. Instead the Bloch oscillation regime takes place accompanied by a slow drift of the phase. The JJ becomes a conductor. 
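As a numerical illustration of the insulating branch and the breakdown just described, the weak-binding formulas \\eq{WC} can be translated to JJ variables through Table 1 and evaluated. The following Python sketch is our own illustration (the function name and the dimensionless variables $v=VC\/e$, $i=IRC\/e$ are our choice, not the paper's notation):

```python
import math

def bloch_nose_voltage(i):
    """Dimensionless VI curve of the JJ in the weak-binding limit.

    Obtained by translating the weak-binding rotator formulas (WC)
    through Table 1, with v = V*C/e and i = I*R*C/e (our units).
    For i <= 1 the whole current flows through the ohmic channel
    (insulating branch, v = i); for i > 1 Bloch oscillations set in
    and the average voltage drops back toward zero."""
    if i <= 1.0:
        return i  # insulating branch V = I*R, up to V_0 = e/C
    # Bloch-oscillation branch: v = i - 2 / ln[(1 + 1/i)/(1 - 1/i)]
    return i - 2.0 / math.log((1.0 + 1.0 / i) / (1.0 - 1.0 / i))
```

The maximum $v=1$ at $i=1$ reproduces the breakdown voltage $V_0=e\/C$, and the decreasing branch beyond it is the back-bending of the $VI$ curve.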
The $VI$ curve of the JJ is described by the same plot as the ``angular velocity vs. torque'' plot for the quantum rotator shown in Fig.~\\ref{f2}. The $VI$ curves in Fig.~\\ref{f2} were calculated for the JJ by \\citet{widom2} and by Averin, Likharev, and Zorin \\cite{AverLikh,Likh}. \\citet{widom2} called the voltage maximum on this curve the current-voltage anomaly. \n\\citet{Schon} called it the Bloch nose. The corresponding maximum on the curve ``resistance--current'' was called the Coulomb blockade bump \\cite{SI,Penttila2001,CommMurani}.\n\nWhile \\citet{widom2} calculated the $VI$ curve using the analogy with the pendulum, i.e., assuming that the states with the phases $\\varphi$ and $\\varphi+2\\pi$ are identical, Averin, Likharev, and Zorin \\cite{AverLikh,Likh} assumed that they are not identical, as in the case of a particle in a periodic potential. Nevertheless, both groups obtained the same $VI$ curves in agreement with our conclusion at the end of Sec.~\\ref{rcf}. \n\nThe possibility of monotonic phase rotation in the Bloch band theory is due to quantum tunneling between neighboring wells of the periodic potential. Dissipation can suppress quantum tunneling \\cite{CL}. Then the particle (virtual particle in the JJ case) becomes localized in one of the wells of the periodic potential. This is the superconducting state of the JJ. The transition from the superconducting to the insulating state is the DQPT. \n\nThe DQPT is a joint effect of Coulomb interaction, dissipation, and quantum mechanics. The Coulomb blockade of Cooper pairs makes the JJ an insulator at small bias. However, it is effective only if the Coulomb energy $E_C= e^2\/C$ exceeds the quantum-mechanical uncertainty $\\hbar \/\\tau$ \\cite{Tin}, where $\\tau=RC$ is the time of the charge relaxation in the circuit. According to the condition $E_C \\sim \\hbar\/\\tau$, the DQPT is expected at the resistance $R$ of the order of the quantum resistance $R_q=h\/4e^2$. 
This qualitative estimate \\cite{Tin,CommMurani} agrees with the more detailed and accurate theory predicting the DQPT exactly at $R = R_q$ \\cite{Schmid,Bulg}. \n \nA similar DQPT must exist in the quantum rotator. The analog of the Coulomb energy $e^2\/C$ is the energy \n$\\hbar^2\/ J$ necessary for a transfer of the moment quantum $\\hbar$ to the rotator. The time $\\tau=J\/f$ is the decay time for the moment in the rotator with friction. Thus, the critical friction coefficient is of the order of $f \\sim \\hbar$. \\Eq{Rf} yields the relation between the friction coefficient $f$ and the ohmic resistance $R_r$ for the current produced by the particle of charge $q$ rotating in the rotator. According to this relation, the condition $f \\sim \\hbar$ for the DQPT in the rotator is identical to the condition $R_r\\sim R_q $. Now $R_q \\sim \\hbar \/q^2$ is the quantum resistance for the charge $q$.\n\n\nHowever, there is a difference in the role of the resistance $R$ in the JJ and of the resistance $R_r$ in the quantum rotator. In the quantum rotator the monotonic phase rotation is possible at small $R_r<R_q$. In the regime of the phase rotation the JJ is an insulator, while the quantum rotator is a conductor. In the regime of the localized phase the JJ is a superconductor, while the quantum rotator is an insulator. In both cases the insulating state takes place at resistances larger than the quantum resistance. \n\n\nOne can estimate the resistance of the quantum rotator using the Drude formula for the conductivity $l\/R_r$, which for the 1D system in the weak-binding limit yields\n\\begin{equation}\n{l\\over R_r}\\sim {q^2\\tau \\over m_0 l}={q^2l_0 \\over \\hbar k l},\n \\ee{}\nwhere $l_0$ is the mean-free path of the particle and $\\tau =l_0\/v=m_0l_0\/\\hbar k$ is the relaxation time at elastic scattering by impurities. 
Since the wave number $k$ of the particle is on the order of the inverse of the space period $l$, the phase transition condition \n\\begin{equation}\n{R_q\\over R_r}\\sim {l_0 \\over l^2 k} \\sim kl_0 \\sim 1\n \\ee{}\nbecomes the Ioffe--Regel condition for the metal--insulator transition \\cite{Mott}.\n\nAlthough the pioneering theoretical investigations \\cite{Schmid,Bulg} predicted the DQPT at the line $R=R_q$ independently of the ratio $E_J\/ E_C$, later it became clear that in a real experiment it is impossible to detect the transition at this line at $E_J\\gg E_C$. One reason is an inevitable non-zero temperature of the experiment \\cite{Zaik}. But even at strictly zero temperature the insulating state is not observable at $E_J\\gg E_C$ because either the observation time is short compared with the average time interval between tunneling events, which are phase slips destroying superconductivity \\cite{Schon}, or the accuracy of the voltage measurement is not sufficient for detection of the phase rotation since the voltage error bar exceeds the electric breakdown voltage $V_0$ \\cite{SI,Penttila2001}. \n\nThe results of the experimental and theoretical investigation of the phase diagram by Penttil\\\"a {\\em et al.} \\cite{SI,Penttila2001} are shown in Fig.~\\ref{PD}. Open and black circles show observations of the superconductorlike and the insulatorlike behavior, respectively. The insulating (I) state differs from the superconducting (S) state by the presence of the Coulomb blockade bump in the dependence of the resistance on the current.\nThe solid line was determined from the condition that the error bar of voltage measurements is equal to the voltage $V_0$ at which the electric breakdown of the insulating state takes place. It is impossible to detect the insulating state above the solid line. 
The observation of the Coulomb blockade bump below the solid line is a smoking gun of the DQPT.\n\nIf one increases measurement accuracy (or lowers temperature), the solid line moves closer to the Schmid--Bulgadaev line. Thus, the vertical Schmid--Bulgadaev line is an idealized asymptotic limit, which remains unattainable in practice for large $E_J\/E_C$. \n\n\\section{Discussion and conclusions}\n\n\n \\begin{figure}[t]\n\\includegraphics[width=.45\\textwidth]{PhD}\n \\caption{The phase diagram of the JJ \\cite{SI}. The dashed vertical line shows the DQPT of \\citet{Schmid} and \\citet{Bulg}. Due to the voltage measurement error bar, the experimental detection of the DQPT is expected at the solid line \\cite{SI}.\n Black and open symbols show observations of the superconductorlike ($S$) and the insulatorlike ($I$) $RI$ curves (see insets), respectively. Squares show results of experimental observations of unshunted junctions when the resistance $R$ is equal to a very large quasiparticle resistance $R_{qp}$ of the junction itself. \\footnote{ In the caption to Fig.~3 in Ref.~\\onlinecite{SI} it was stated that \"unshunted samples (squares) are collected at $R_q\/R=0$\". However, as said in the text of the paper, because of a large shunting quasiparticle resistance $R_{qp}$, $R_q\/R$ was never truly equal to 0 in the experiments.}\n \\label{PD}}\n \\end{figure}\n\nAlthough the analogy of the JJ with the quantum rotator was pointed out in the past \\cite{AzbelP,Rogov,widom2,Loss}, the opinion that the JJ is an analog of a particle in a periodic potential was more prevalent \\cite{AverLikh,Likh,GefenJ,Zwerg,Apen,SI,Penttila2001,CommMurani,morel,Schon,golub}. Averin, Likharev, and Zorin \\cite{AverLikh,Likh} argued that the states of the JJ with $\\varphi$ and $\\varphi+2\\pi$ are not identical because the states of the environment (electric circuit) for two values of the phase are different. 
As a result, they concluded that all states in the Bloch band are possible and used the concept of wave packets in the derivation of Bloch oscillations. \\citet{Zwerg} and \\citet{morel} explained the distinguishability of the states with $\\varphi$ and $\\varphi+2\\pi$ (decompactification of the phase) by the effect of dissipation. \\citet{Apen} argued that the distinguishability assumption is justified by taking into account fluctuations of the external classical charge, which were not taken into account in the previous calculations of the $VI$ curve in Refs.~\\cite{widom2,AverLikh,Likh} and in the present paper. \n\n The assumption that the states of the JJ with $\\varphi$ and $\\varphi+2\\pi$ can be different was disputed by \\citet{Loss}. We also believe that the indistinguishability of the states with $\\varphi$ and $\\varphi+2\\pi$ is a justified requirement for the quantum rotator and the JJ. Although the connection between the JJ and the environment is of utmost importance and must not be ignored, we consider the wave function of the junction alone, not the wave function of the junction+environment. There is an important difference between the two questions: (i) whether the states of the JJ with $\\varphi$ and $\\varphi+2\\pi$ can be different or not, and (ii) whether the states of the JJ and the environment are the same before and after a phase slip, which is a jump of $2\\pi$ of the phase difference across the JJ. \n \nAverin, Likharev, and Zorin \\cite{AverLikh,Likh} looked for an answer to the second question. Let us illustrate this for a simple SQUID circuit, which is a superconducting loop interrupted by a JJ. There is a phase $\\varphi$ across the JJ and a phase difference $\\Phi$ along the rest of the loop. They must satisfy the boundary condition $\\varphi+\\Phi =2\\pi N$, where $N$ is an integer. Averin, Likharev, and Zorin compared the state with $\\varphi$, $\\Phi$, and $N$, with the state with $\\varphi+2\\pi $, $\\Phi-2\\pi$, and $N$. 
These are states before and after a phase slip, which are definitely different. While before the phase slip the state was stationary, after the slip the state is not stationary: Adding $2\\pi$ to the Josephson phase does not change the current across the JJ but does change the current in the rest of the circuit and the magnetic flux because the phase difference $\\Phi$ is not the same before and after the slip.\n\nThe states of the JJ with the phases $\\varphi$ and $\\varphi+2\\pi $ must be compared at a fixed state of the environment. One should compare the state with $\\varphi$, $\\Phi$, and $N$ with the state with $\\varphi+2\\pi $, $\\Phi$, and $N+1$. The environment (the rest of the circuit) remains in the same state because the phase $\\Phi$ remains the same, and the current and the magnetic flux do not differ.\n \n Throughout the whole paper, it was assumed that the external moment in the quantum rotator (charge in the JJ) is a well-defined classical variable slowly varying in time. As mentioned above,\n fluctuations of the external charge were considered as a reason for the phase decompactification in the JJ. Indeed, for any {\\em given} external charge only {\\em one} state in the Bloch band is possible, but this state is different for different external charges. Thus, for a broad ensemble of external charges any state in the Bloch band is possible as in the case of a particle in the infinite 1D space. However, this does not eliminate the difference between the Hilbert space of all possible Bloch states in the case of a particle in the infinite 1D space and its much smaller discrete subspace of the rotator states with quantized canonical moments (charges). 
If one has the ensemble of $n$ external moments, the number of states of the system ``quantum rotator+environment'' is equal to $n$, while the number of states for ``particle in an infinite 1D space+environment'' is $n\\times n_B$, where $n_B$ is the number of all Bloch states in a Bloch band.\n \n This difference is not important for the conventional theory of the insulating state and the DQPT. This is because the theory deals with a slow adiabatic tuning of only {\\em one} state in the lowest Bloch band. The question of whether this state is a single state or whether other Bloch states are also possible is irrelevant. This is another example where, according to \\citet{Legg}, ``it is largely unnecessary to address the vexed question of whether or not states differing in $\\varphi$ value by $2\\pi$ should be identified.''\n \n This does not mean that the difference between the dynamics of a particle in a 1D ring and a particle in the infinite 1D space is not important in general, under any experimental conditions. \n \\citet{Loss} made calculations of the resonant tunneling between quantum levels in the neighboring potential wells of the washboard potential in the JJ. In the Bloch theory this is the Zener interband tunneling. The results depended on the choice of the initial state. The latter was either periodic in the extended phase, as it must be for a particle in a 1D ring, or was confined to the interval $2\\pi$, which is possible only for a particle in the infinite 1D space. This calculation has not yet resolved the dilemma ``compact vs. extended phase''. \\citet{Loss} solved the Schr\\\"odinger equation \\eq{ShQ} at the constant current $I= \\dot Q_0$, while the proponents of the extended phase assumption for the JJ connected the decompactification with fluctuations of the external charge $Q_0$, which were not taken into account in their calculation. 
Thus, the answer to the question of whether and how the difference between the motion of a particle in a 1D ring and the motion of a particle in the infinite 1D space with a periodic potential can affect observable phenomena is still lacking. \n\n\nArguing that the conventional theory failed and the insulating state is impossible in the JJ, \\citet{Sacl} referred to the existence of supercurrent in the Cooper pair box, which is identical to a JJ in the limit $R\\to \\infty$. Indeed, in Sec.~\\ref{rcf} we demonstrated that without dissipation the phase (angle) of the quantum rotator is localized and performs Bloch oscillations around the localization point. Thus, a supercurrent flows across the JJ. However, from the same subsection it is clear that {\\em any} dissipation, however large $R$ may be, delocalizes the phase and the JJ becomes an insulator. One can see in Fig.~\\ref{f1.1} that at $R \\to \\infty $ the voltage vanishes at currents $I\\gg e\/RC^*$. Although the insulating state is possible at any $R$, the electric breakdown of the insulator occurs at the current $I\\sim e\/RC^*$, which vanishes at $R \\to \\infty $. This makes observation of the DQPT impossible in this limit, but it is not an argument against its existence. \n\n\nSummarizing, our answers to the three questions formulated in the introduction are as follows:\n\\begin{enumerate}\n\\item\nThe states with the phases $\\varphi$ and $\\varphi+2\\pi$ are indistinguishable in the JJ for a fixed state of the environment. In this respect the JJ is an analog of the quantum rotator, but not of a particle in a periodic potential.\n\\item\nThe assumption that in the quantum rotator and in the JJ the states with the phases $\\varphi$ and $\\varphi+2\\pi$ are indistinguishable does not mean that the wave functions must also be periodic in all gauges. The wave functions can be periodic only in the periodic gauge, where the Hamiltonian is periodic in $\\varphi$. 
\n\\item\nThe conventional theory of the DQPT is valid independently from whether the states with the phases $\\varphi$ and $\\varphi+2\\pi$ are distinguishable, or not.\n\\end{enumerate}\n\nThe present analysis and its conclusions referred only to the case of a single particle, which excludes interaction between particles. The case of many interacting particles in a quantum rotator requires another approach to the dilemma ``compact vs. extended phase'' \\cite{averin}.\n\n\nThe close analogy between the regime of phase rotation in the quantum rotator and the phase rotation in the insulating state of the JJ allows to expect the DQPT also in the quantum rotator. This can be checked by experimental investigations of persistent currents in 1D normal rings put in a constant electric field. \n\n\n\n\n\n\n\\begin{acknowledgments}\nI thank Dmitri Averin, Pertti Hakonen, and Andrei Zaikin for discussions and useful comments.\n\\end{acknowledgments}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{XP algorithm parameterized by neighborhood diversity}\\label{sec:nd}\nIn this section, we deal with neighborhood diversity ${\\nu}$, which is a more general parameter than vertex cover number. We first give an $O^*((n\/{\\nu})^{{\\nu}})$-time algorithm for Arc Kayles. This is an XP algorithm parameterized by neighborhood diversity. On the other hand, we show that there is a graph having at least $O^*((n\/{\\nu})^{{\\rm \\Omega}({\\nu})})$ non-isomorphic induced subgraphs, which implies the analysis of the proposed algorithm is\nasymptotically tight. \n\n\nBy Proposition \\ref{prop:isomorphic}, if we list up all non-isomorphic induced subgraphs, the winner of Arc Kayles can be determined by using recursive formulas (\\ref{eq:recursion1}) and (\\ref{eq:recursion2}).\nLet $\\mathcal{M}=\\{M_1,M_2,\\ldots,M_{{\\nu}}\\}$ be a partition such that $\\bigcup_i M_i=V$ and vertices of $M_i$ are twins each other. 
We call each $M_i$ a \\emph{module}.\nWe can see that non-isomorphic induced subgraphs of $G$ are identified by how many vertices are selected from which module.\n\n\\begin{lemma}\\label{lem:positions:nd}\nThe number of non-isomorphic induced subgraphs of a graph of neighborhood diversity ${\\nu}$ is at most $(n\/{\\nu}+1)^{{\\nu}}$.\n\\end{lemma}\n\\begin{proof}\nBy the definition of neighborhood diversity, vertices in a module are twins of each other. Therefore, the number of non-isomorphic induced subgraphs of $G$ is at most \n $\\prod_{i=1}^{{\\nu}}(\\card{M_i}+1) \\le (\\sum_{i=1}^{{\\nu}}(\\card{M_i}+1)\/{\\nu})^{{\\nu}}\n \\le (n\/{\\nu}+1)^{{\\nu}}$.\n \n\\end{proof}\n\nWithout loss of generality, we select an edge whose endpoints are the vertices with minimum indices in the corresponding modules. \nBy Proposition \\ref{prop:isomorphic}, the algorithm in Section \\ref{sec:Exalgo} can be modified to run in time $O^*((n\/{\\nu}+1)^{{\\nu}})$.\n\n\n \n\\begin{theorem}\\label{thm:algo:nd}\nThere is an $O^*((n\/{\\nu}+1)^{{\\nu}})$-time algorithm for Arc Kayles.\n\\end{theorem}\n \nThe idea can be extended to Colored Arc Kayles and BW-Arc Kayles. \nIn $G=(V,E_{\\CG}\\cup E_{\\CB}\\cup E_{\\CW})$, two vertices $u,v\\in V$ are called \\emph{colored twins} if $c(\\{u,w\\})=c(\\{v,w\\})$ holds for all $w \\in V\\setminus \\{u,v\\}$. \nWe then define the notion of colored neighborhood diversity. \n\\begin{definition}\\label{def:cnd}\nThe \\emph{colored neighborhood diversity} of $G=(V,E)$ is defined as the minimum ${\\nu'}$ such that $V$ can be partitioned into ${\\nu'}$ vertex sets of colored twins.\n\\end{definition} \nIn Colored Arc Kayles or BW-Arc Kayles, we can utilize a partition of $V$ into modules each of which consists of colored twins. \nIf we are given a partition of the vertices into colored modules, we can decide the winner of Colored Arc Kayles or BW-Arc Kayles as in \nTheorem \\ref{thm:algo:nd}. 
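The module-based dynamic programming behind Theorem \\ref{thm:algo:nd} can be sketched in code. The Python sketch below is our own illustration, not the paper's implementation: the input encoding (module sizes, a flag telling whether a module induces a clique, and a module-adjacency matrix) is an assumption for concreteness. A position is memoized by the tuple of remaining module sizes, in line with Lemma \\ref{lem:positions:nd}, and a move deletes both endpoints of the selected edge (the edge and all neighboring edges disappear):

```python
from functools import lru_cache

def arc_kayles_winner(sizes, clique, adj):
    """True iff the player to move wins Arc Kayles on a graph
    described by twin modules.

    sizes[i]  -- number of vertices of module i
    clique[i] -- True if module i induces a clique
    adj[i][j] -- True if modules i and j are fully joined
    A position is identified by the tuple of remaining module
    sizes, so at most prod(sizes[i] + 1) states are memoized."""
    k = len(sizes)

    @lru_cache(maxsize=None)
    def win(counts):
        for i in range(k):
            # select an edge inside clique module i: both endpoints vanish
            if clique[i] and counts[i] >= 2:
                nxt = list(counts)
                nxt[i] -= 2
                if not win(tuple(nxt)):
                    return True
            # select an edge between two adjacent modules i and j
            for j in range(i + 1, k):
                if adj[i][j] and counts[i] >= 1 and counts[j] >= 1:
                    nxt = list(counts)
                    nxt[i] -= 1
                    nxt[j] -= 1
                    if not win(tuple(nxt)):
                        return True
        return False  # no edge is left: the player to move loses

    return win(tuple(sizes))
```

For example, on the star $K_{1,2}$ (a singleton module joined to an independent module of size two) the first player wins, while on $C_4=K_{2,2}$ the second player wins.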
Unlike ordinary neighborhood diversity, colored neighborhood diversity might be hard to compute in polynomial time. \n\\begin{theorem}\\label{thm:algo:cnd}\nGiven a graph $G=(V,E_{\\CG}\\cup E_{\\CB}\\cup E_{\\CW})$ with a partition of $V$ into ${\\nu'}$ modules of colored twins, we can compute the winner of Colored Arc Kayles on $G$ in time $O^*((n\/{\\nu'}+1)^{{\\nu'}})$. \n\\end{theorem}\n\n\n\n \n\n \n \n \n \nIn the rest of this section, we give a bad instance for the proposed algorithm as shown in Figure \\ref{fig:lower:nd}.\nThe result implies that the analysis of Theorem \\ref{thm:algo:nd} is\nasymptotically tight. \n\n\\begin{theorem}\\label{thm:lower:nd}\nThere is a graph having at least $(n\/{\\nu}+1-o(1))^{{\\nu}(1 - o(1))}$ non-isomorphic positions of Arc Kayles. \n\\end{theorem}\n\n\\begin{proof}\nWe construct such a graph $G$. Assume that $k$ is one less than a power of two, that is, $k=2^{k'}-1$. \nFirst, we prepare $k$ cliques of $s$ vertices, $C_1,\\ldots,C_k$, and a vertex set $X=\\{x_1,x_2,\\ldots,x_{\\log_2 (k+1)}\\}$. The subscript $i$ of $x_i$ represents the $i$-th bit used below. \nFor each $x_i$, we attach $i-1$ pendant vertices, which are used to distinguish it from the other vertices $x_j$. For $j$, $\\mathrm{bin}(j)$ and $\\mathrm{bin}(j,i)$ denote $j$'s binary representation and its $i$-th bit, respectively. For example, $\\mathrm{bin}(6)=110$, \n$\\mathrm{bin}(6,1)=0, \\mathrm{bin}(6,2)=1, \\mathrm{bin}(6,3)=1$, and $\\mathrm{bin}(6,i)=0$ for $i\\ge 4$. \nWe connect the vertices in $C_j$ and vertices in $X$ according to the binary representation of $j$; vertices in $C_j$ are connected with $x_i$ if and only if $\\mathrm{bin}(j,i)=1$. Finally, we connect all the vertices in $\\bigcup_i C_i$, which form a large clique with size $sk$. 
\nFigure \\ref{fig:lower:nd} shows the constructed graph $G$.\n\nThe number of vertices in $G$ is $n=sk + \\log_2 (k+1)(\\log_2 (k+1)+1)\/2$, that is, $s=(n-\\log_2 (k+1)(\\log_2 (k+1)+1)\/2)\/k$, \nand the neighborhood diversity of $G$ is ${\\nu}=k+2\\log_2 (k+1)$, \nbecause vertices in each clique are twins, and also pendant vertices connected to each $x_i$ are twins. \n\n\n\nWe estimate the number of non-isomorphic induced subgraphs of $G$. \nWe restrict the vertices to be deleted to $\\bigcup_i C_i$, \nthat is, the selected edges lie inside $\\bigcup_i C_i$. \nSince the number of pendant vertices for each $x_i$ is different, \nthe $x_i$'s effectively have IDs, and thus vertices from two distinct cliques are distinguishable. Hence, the number of non-isomorphic induced subgraphs obtained by removing edges inside $\\bigcup_i C_i$ is determined by the numbers of remaining vertices in the $C_i$'s, each of which varies from $0$ to $s$. \nTherefore, the number of non-isomorphic induced subgraphs is at least\n\\begin{align*}\n(s+1)^{k}\/2 &= \\frac{1}{2}\\left(\\frac{n-\\log_2 (k+1)(\\log_2 (k+1)+1)\/2}{k}+1\\right)^{k}\\\\\n& = \\frac{1}{2}\\left(\\frac{n-o(k)}{k}+1\\right)^{k} \\ge \n\\frac{1}{2}\\left(\\frac{n}{k}+1-o(1)\\right)^{k},\n\\end{align*}\nwhere the division by $2$ comes from the fact that the number of deleted vertices must be even. \nSince ${\\nu}=k+2\\log_2 (k+1)$, $k={\\nu} - 2\\log_2 (k+1)\\ge {\\nu} - 2\\log_2 {\\nu}$ holds. We thus have the lower bound $\\left(n\/{\\nu}+1-o(1)\\right)^{{\\nu}(1 - o(1))}$. \n\\end{proof}\n\n\\begin{figure}[tbp]\n \\centering\n \\includegraphics[width=0.65\\linewidth]{nd_lowler.pdf}\n \\caption{The constructed graph $G$ with neighborhood diversity ${\\nu}=k+2\\log_2 (k+1)$.}\n \\label{fig:lower:nd}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Basic Algorithm}\\label{sec:Exalgo}\nIn this section, we show that the winner of \\emph{Colored} Arc Kayles on $G$ can be determined in time $O^*(2^n)$. 
We first observe that the following lemma holds by the definition of the game.\n\\begin{lemma}\\label{lem:winner:Colored_Arc_Kayles}\nSuppose that Colored Arc Kayles is played on $G=(V,E_{\\CG}\\cup E_{\\CW}\\cup E_{\\CB})$. \nThen, player $\\mathrm{B}$ (resp., $\\mathrm{W}$) wins on $G$ with player $\\mathrm{B}$'s (resp., $\\mathrm{W}$'s) turn if and only if there is an edge $\\{u,v\\}\\in E_{\\CG}\\cup E_{\\CB}$ (resp., $\\{u,v\\}\\in E_{\\CG}\\cup E_{\\CW}$) such that player $\\mathrm{W}$ (resp., $\\mathrm{B}$) loses on $G-u-v$ with player $\\mathrm{W}$'s (resp., $\\mathrm{B}$'s) turn. \n\\end{lemma}\nThis lemma is interpreted by the following two recursive formulas:\n\\begin{align}\\label{eq:recursion1}\n f_{\\mathrm{B}}(G) & = \\bigvee_{\\{u,v\\}\\in{E_{\\CG}\\cup E_{\\CB}}}\\lnot \\left(f_{\\mathrm{W}}(G-u-v)\\right),\\\\ \\label{eq:recursion2} \n f_{\\mathrm{W}}(G) & = \\bigvee_{\\{u,v\\}\\in{E_{\\CG}\\cup E_{\\CW}}}\\lnot \\left(f_{\\mathrm{B}}(G-u-v)\\right). \n\\end{align}\nBy these formulas, we can determine the winner of $G$ with either first or second player's turn by computing $f_{\\mathrm{B}}(G)$ and $f_{\\mathrm{W}}(G)$ for all induced subgraphs of $G$. Since the number of all induced subgraphs of $G$ is $2^n$, it can be done in time $O^*(2^n)$ by a standard dynamic programming algorithm. \n\n\\begin{comment}\nOur algorithm is based on dynamic programming with respect to induced subgraphs of an input graph $G$. \nAs a key observation, any position of Colored Arc Kayles corresponds to an induced subgraph of $G$.\n\nFurther, we have the following lemma.\n\nBased on Lemma \\ref{lem:winner:Colored_Arc_Kayles}, we present Algorithm \\ref{alg:Colored_Arc_Kayles}. The input of Algorithm \\ref{alg:Colored_Arc_Kayles} is a graph $G=(V,E_{\\CG}\\cup E_{\\CB}\\cup E_{\\CW})$ and three edge sets $E_{\\CG}$,$E_{\\CB}$, and $E_{\\CW}$. Algorithm \\ref{alg:Colored_Arc_Kayles} returns 1 if the first player, which can choose edges in $E_{\\CG}\\cup E_{\\CB}$, wins. 
Otherwise, it returns 0. In the algorithm, if $E_{\\CG}\\cup E_{\\CB}$ is empty, the first player $p_1$ cannot choose an edge, and hence he\/she loses. Otherwise, he\/she picks an edge from $E_{\\CG}\\cup E_{\\CB}$. By Lemma \\ref{lem:winner:Colored_Arc_Kayles}, the winner is determined on $G-u-v$. \nHere, the second player $p_2$ loses on $G-u-v$ if and only if there is no edge $\\{u',v'\\}\\in E_{\\CG}\\cup E_{\\CW}\\setminus\\Gamma(\\{u,v\\})$ such that $p_1$ loses on $G-u-v-u'-v'$. That is, the winner on $G$ can be found by considering a game such that the first player is $p_2$, the second player is $p_1$, and the input graph is $G'-u-v$. Thus, in Line 3 of Algorithm \\ref{alg:Colored_Arc_Kayles}, it returns $\\bigvee_{\\{u,v\\}\\in{E_{\\CG}\\cup E_{\\CB}}}\\lnot \\left({\\bf Colored\\_Arc\\_Kayles}(G-u-v)\\right)$. \n\nSince the number of all induced subgraphs is at most $2^n$, the running time of Algorithm \\ref{alg:Colored_Arc_Kayles} is $O^*(2^n)$.\n\n\\begin{algorithm*}[t]\n \\caption{${\\bf Colored\\_Arc\\_Kayles}(G,E_{\\CG},E_{\\CW},E_{\\CB})$, {\\bf Input:} $G=(V,E_{\\CG}\\cup E_{\\CW}\\cup{E_{\\CB}})$}\n \\begin{algorithmic}[1]\\label{alg:Colored_Arc_Kayles}\n\n \\IF {$E_{\\CG}\\cup E_{\\CB}=\\emptyset$}\n \\RETURN $0$\n\n \\ELSE\n \\RETURN \\\\\n $\\bigvee_{\\{u,v\\}\\in{E_{\\CG}\\cup E_{\\CB}}}\\lnot \\left({\\bf Colored\\_Arc\\_Kayles}(G-u-v)\\right)$\n \\ENDIF\n \\end{algorithmic}\n \\end{algorithm*}\n\\end{comment}\n\n\\begin{theorem}\\label{thm:Colored_Arc_Kayles:Exp}\nThe winner of Colored Arc Kayles can be determined in time $O^*(2^n)$.\n\\end{theorem}\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\\subsection{Background and Motivation} \nCram, Domineering, and Arc Kayles are well-studied two-player mathematical games and \ninterpreted as combinatorial games on graphs. 
\nDomineering (also called Stop-Gate) was \nintroduced by G\\\"oran Andersson around 1973 under the name of Crosscram~\\cite{conway2000numbers,gardner1974mathematical}.\nDomineering is usually played on a checkerboard. The two players are denoted by\nVertical and Horizontal. The Vertical (resp., Horizontal) player is\nonly allowed to place dominoes vertically (resp., horizontally) on the board.\nNote that placed dominoes are not allowed to overlap. \nIf no space is left to place a domino, the player whose turn it is loses the game. Domineering is a partisan game, where players use different pieces. The impartial version of the game is Cram, where both players can place dominoes both vertically and horizontally. \n\nAn analogous game played on an undirected graph $G$ is Arc Kayles.\nIn Arc Kayles, the action of a player in a turn is to select an edge of $G$, and then the selected edge and its neighboring edges are removed from $G$. If no edge remains in the resulting graph, the player whose turn it is loses the game. \nFigure \\ref{ex:arcKayles} shows a play example of Arc Kayles. In this example, the first player selects edge $e_1$, and then the second player selects edge $e_2$. When the first player then selects edge $e_3$, no edge is left, and the second player loses. \nNote that the edges selected throughout a play form a maximal matching on the graph.\n\nSimilarly, we can define BW-Arc Kayles, which is played on an undirected graph with black and white edges. The rule is the same as in ordinary Arc Kayles except that the black (resp., white) player can select only black (resp., white) edges. \nNote that Cram and Domineering are respectively interpreted as Arc Kayles and BW-Arc Kayles on a two-dimensional grid graph, which is the graph Cartesian product of two path graphs. \n\nTo focus on the common nature of such games with matching structures, we newly define {Colored Arc Kayles}. 
{Colored Arc Kayles} is played on a graph whose edges are colored in black, white, or gray, and black (resp., white) edges can be selected only by the black (resp., white) player, though grey edges can be selected by both black and white players. \n{BW-Arc Kayles} and ordinary {Arc Kayles} are special cases of {Colored Arc Kayles}. In this paper, we investigate {Colored Arc Kayles} from the algorithmic point of view.\n\n\\begin{figure}[tbp]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{arc_kayles.pdf}\n \\caption{A play example of Arc Kayles}\n \\label{ex:arcKayles}\n\\end{figure}\n\n\\subsection{Related work}\n\\subsubsection{Cram and Domineering}\nCram and Domineering are well studied in the field of combinatorial game theory. In \\cite{gardner1974mathematical}, Gardner gives winning strategies for some simple cases. For Cram on $a\\times b$ board, the second player can always win if both $a$ and $b$ are even, and the first player can always win if one of $a$ and $b$ is even and the other is odd. This can be easily shown by the so-called Tweedledum and Tweedledee strategy. \nFor specific sizes of boards, computational studies have been conducted~\\cite{uiterwijk2019solving}. \nIn \\cite{uiterwijk2018construction}, Cram's endgame databases for all board sizes with at most 30 squares are constructed. \nAs far as the authors know, the complexity to determine the winner for Cram on general boards still remains open. \n\n\nFinding the winning strategies of Domineering for specific sizes of boards by using computer programs is well studied. \nFor example, the cases of $8\\times 8$ and $10\\times 10$ are solved in 2000~\\cite{BREUKER2000195} and 2002~\\cite{bullock2002domineering}, respectively. The first player wins in both cases. Currently, the status of boards up to $11\\times 11$ is known~\\cite{uiterwijk201611}. 
\nIn \\cite{uiterwijk2015new}, endgame databases for all single-component positions up to 15 squares for Domineering are constructed.\nThe complexity of Domineering on general boards also remains open. Lachmann, Moore, and Rapaport show that the winner and a winning strategy of Domineering on an $m\\times n$ board can be computed in polynomial time for $m\\in \\{1, 2, 3, 4, 5, 7, 9, 11\\}$ and all $n$~\\cite{lachmann2000wins}.\n\n\n\\subsubsection{Kayles, Node Kayles, and Arc Kayles}\nKayles is a simple impartial game, introduced by Henry Dudeney in 1908~\\cite{dudeney2002canterbury}. \nThe name ``Kayles'' derives from the French word ``quilles'', meaning ``bowling''. The rule of Kayles is as follows. \nGiven bowling pins equally spaced in a line, players take turns to knock out either one pin or two adjacent pins, until all the pins are gone.\nAs graph generalizations, Node Kayles and Arc Kayles were introduced by Schaefer~\\cite{SCHAEFER1978185}. \nNode Kayles is the vertex version of Arc Kayles. Namely, \nthe action of a player is to select a vertex instead of an edge, and then the selected vertex and its neighboring vertices are removed. \nNote that both generalizations can describe the original Kayles; Kayles is represented as Node Kayles on sequentially linked triangles or as Arc Kayles on a caterpillar graph.\n\nNode Kayles is known to be PSPACE-complete \\cite{SCHAEFER1978185}, whereas the winner determination is solvable in polynomial time on graphs of bounded asteroidal number, such as cocomparability graphs and cographs, by using Sprague-Grundy theory~\\cite{BODLAENDER2002}. \nFor general graphs, Bodlaender et al. propose an $O(1.6031^n)$-time algorithm~\\cite{BODLAENDER2015165}. Furthermore, they show that the winner of Node Kayles can be determined in time $O(1.4423^{n})$ on trees. 
In \\cite{Kobayashi2018}, Kobayashi refines the analysis of the algorithm in \\cite{BODLAENDER2015165} from the perspective of parameterized complexity and shows that it can be solved in time $O^*(1.6031^{{\\mathtt{\\mu}}})$, where ${\\mathtt{\\mu}}$ is the modular width of an input graph\\footnote[1]{The $O^*(\\cdot)$ notation suppresses polynomial factors in the input size.}. He also gives an $O^*(3^{{\\tau}})$-time algorithm, where ${\\tau}$ is the vertex cover number, \nand a linear kernel when parameterized by neighborhood diversity.\n\nIn contrast to Node Kayles, the complexity of Arc Kayles has remained open for more than 30 years. Even for subclasses of trees, not much is known. For example, Huggans and Stevens study Arc Kayles on subdivided stars with three paths~\\cite{huggan2016polynomial}. \nTo the best of our knowledge,\nno exponential-time algorithm for Arc Kayles has been presented except for an $O^*(4^{{\\tau}^2})$-time algorithm proposed in \\cite{LM2014}.\n\n\\subsection{Our contribution}\n\nIn this paper, we address winner determination algorithms for Colored Arc Kayles. \nWe first propose an $O^*(2^n)$-time algorithm for Colored Arc Kayles.\nNote that this is generally faster than applying the Node Kayles algorithm to the line graph of an instance of Arc Kayles, which takes time $O(1.6031^m)$, where $m$ is the number of edges of the original graph.\nWe then focus on algorithms based on graph parameters.\nWe present an $O^*(1.4143^{{\\tau}^2+3.17{\\tau}})$-time algorithm for Colored Arc Kayles, where ${\\tau}$ is the vertex cover number. \nThe algorithm runs in time $O^*(1.3161^{{\\tau}^2+4{\\tau}})$ and $O^*(1.1893^{{\\tau}^2+6.34{\\tau}})$ for BW-Arc Kayles and Arc Kayles, respectively.\nThis is faster than the previously known time complexity $O^*(4^{{\\tau}^2})$ in \\cite{LM2014}. 
\n\nOn the other hand, we give a bad instance for the proposed algorithm, which implies that the running-time analysis is asymptotically tight.\nFurthermore, we show that the winner of Arc Kayles can be determined in time $O^*((n\/{\\nu}+1)^{{\\nu}})$, where ${\\nu}$ is the neighborhood diversity of an input graph. This analysis is also asymptotically tight, because there is an instance having $(n\/{\\nu}-o(1))^{{\\nu}(1-o(1))}$ positions.\nWe finally show that the winner determination of Arc Kayles on trees can be solved in $O^*(2^{n\/2})=O(1.4143^n)$ time, which improves on the $O^*(3^{n\/3})(=O(1.4423^n))$ bound obtained by a direct adjustment of the analysis of Bodlaender et al.'s $O^*(3^{n\/3})$-time algorithm for Node Kayles. \n\n\\section{Preliminaries}\\label{sec:preliminaries}\n\\subsection{Notations and terminology}\nLet $G=(V,E)$ be an undirected graph. We denote $n=\\card{V}$ and $m=\\card{E}$. \nFor an edge $e=\\{u,v\\}\\in E$, we define $\\Gamma(e)=\\{e'\\mid e\\cap e'\\neq \\emptyset\\}$.\nFor a graph $G=(V,E)$ and a vertex subset $V'\\subseteq V$, we denote by $G[V']$ the subgraph induced by $V'$. For simplicity, we write $G-v$ instead of $G[V\\setminus \\{v\\}]$. For an edge subset $E'$, we also denote by $G-E'$ the subgraph obtained from $G$ by removing all edges in $E'$ from $G$. \nA vertex set $S$ is called a \\emph{vertex cover} if $e\\cap S\\neq \\emptyset$ for every edge $e\\in E$. We denote by ${\\tau}$ the size of a minimum vertex cover of $G$.\nTwo vertices $u,v\\in V$ are called \\emph{twins} if $N(u)\\setminus \\{v\\}=N(v)\\setminus \\{u\\}$.\n\n\\begin{definition}\\label{def:nd}\nThe \\emph{neighborhood diversity} ${\\nu}(G)$ of $G=(V,E)$ is defined as the minimum number $w$ such that $V$ can be partitioned into $w$ vertex sets of twins. \n\\end{definition}\n\nIn the following, we simply write ${\\nu}$ instead of ${\\nu}(G)$ if no confusion arises. 
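Definition \ref{def:nd} suggests a direct way to compute the twin partition (and hence ${\nu}$): since being twins is an equivalence relation, it suffices to compare each vertex against one representative per class. A minimal Python sketch, not from the paper; the adjacency-dictionary encoding and function name are our own:

```python
def neighborhood_diversity(adj):
    """Number of twin classes of the graph given as an adjacency dict
    (vertex -> set of neighbors).  Vertices u, v are twins iff
    adj[u] - {v} == adj[v] - {u}; since being twins is an equivalence
    relation, greedy grouping against one representative suffices."""
    classes = []
    for v in adj:
        for cls in classes:
            u = cls[0]  # representative of the class
            if adj[u] - {v} == adj[v] - {u}:
                cls.append(v)
                break
        else:
            classes.append([v])  # v starts a new twin class
    return len(classes)

# A path on 4 vertices has no twins (nu = 4); in the star K_{1,3} the
# three leaves are mutual (false) twins (nu = 2).
p4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
```

This runs in polynomial time, matching the statement in the text that the partition is polynomial-time computable.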
We can compute the neighborhood diversity of $G$ and the corresponding partition in polynomial time \\cite{Lampis2012}. For any graph $G$, ${\\nu} \\le 2^{{\\tau}}+{\\tau}$ holds.\n\n\\subsection{Colored Arc Kayles}\nColored Arc Kayles is played on a graph $G=(V,E_{\\CG}\\cup E_{\\CB}\\cup E_{\\CW})$, where $E_{\\CG},E_{\\CB}, E_{\\CW}$ are mutually disjoint. The subscripts $\\mathrm{G}$, $\\mathrm{B}$, and $\\mathrm{W}$ of $E_{\\CG}$, $E_{\\CB}$, and $E_{\\CW}$ stand for gray, black, and white, respectively. \nFor every edge $e\\in E_{\\CG} \\cup E_{\\CB} \\cup E_{\\CW}$, let $c(e)$ be the color of $e$, that is, $c(e)=\\mathrm{G}$ if $e\\in E_{\\CG}$, $\\mathrm{B}$ if $e\\in E_{\\CB}$, and $\\mathrm{W}$ if $e\\in E_{\\CW}$.\nIf $\\{u,v\\}\\not\\in E_{\\CG} \\cup E_{\\CB} \\cup E_{\\CW}$, we set $c(\\{u,v\\})=\\emptyset$ \nfor convenience. \nAs explained below, the first (black or $\\mathrm{B}$) player can choose only gray or black edges, and the second (white or $\\mathrm{W}$) player can choose only gray or white edges. \n\nThe two players alternately choose an edge of $G$. Player B can choose an edge in $E_{\\CG}\\cup E_{\\CB}$ and player W can choose an edge in $E_{\\CG}\\cup E_{\\CW}$.\nThat is, there are three types of edges: $E_{\\CB}$ is the set of edges that only the first player can choose, $E_{\\CW}$ is the set of edges that only the second player can choose, and $E_{\\CG}$ is the set of edges that both the first and second players can choose. Once an edge $e$ is selected, the edge and its neighboring edges (i.e., $\\Gamma(e)$) are removed from the graph, and the next player chooses an edge of $G- \\Gamma(e)$. \nThe player who can take no edge loses the game. \nSince (Colored) Arc Kayles is a two-person zero-sum perfect-information game and ties are impossible, one of the players always has a winning strategy. \nWe call the player having a winning strategy the \\emph{definite winner}, or simply \\emph{winner}. 
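The move rule above translates directly into an exhaustive game-tree search: the active player wins iff some edge playable in their color leads to a position that is losing for the opponent. A minimal Python sketch, assuming a position is just the set of remaining edges; function and variable names are ours, not the paper's:

```python
from functools import lru_cache

def colored_arc_kayles_winner(edges, colors, active):
    """Decide the winner of Colored Arc Kayles by exhaustive search.

    edges:  iterable of 2-tuples (u, v).
    colors: dict mapping each sorted edge tuple to 'G', 'B', or 'W'.
    active: 'B' or 'W', the player to move first.
    Returns 'B' or 'W'.  Exponential time; for illustration only.
    """
    start = tuple(sorted(tuple(sorted(e)) for e in edges))

    @lru_cache(maxsize=None)
    def wins(remaining, player):
        opponent = 'W' if player == 'B' else 'B'
        for u, v in remaining:
            if colors[(u, v)] not in ('G', player):
                continue  # edge not playable by this player
            # Selecting {u,v} removes it together with all neighboring edges.
            rest = tuple(e for e in remaining if u not in e and v not in e)
            if not wins(rest, opponent):
                return True
        return False  # no winning move (or no playable edge): player loses

    return active if wins(start, active) else ('W' if active == 'B' else 'B')
```

Memoizing on the set of remaining edges mirrors the dynamic programming over positions used in the exponential-time algorithm, up to the isomorphism-based pruning discussed later.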
\n\nThe problem that we consider in this paper is defined as follows: \n\\begin{description}\n\\item {\\bf Input}: $G=(V,E_{\\CG}\\cup E_{\\CB}\\cup E_{\\CW})$, active player in $\\{\\mathrm{B},\\mathrm{W}\\}$. \n\\item {\\bf Question}: Suppose that players $\\mathrm{B}$ and $\\mathrm{W}$ play Colored Arc Kayles on $G$ from the active player's turn. Which player is the winner?\n\\end{description}\nNote that if $E_{\\CB}=E_{\\CW}=\\emptyset$, Colored Arc Kayles is equivalent to Arc Kayles, and if $E_{\\CG}=\\emptyset$, it is equivalent to BW-Arc Kayles.\n\nTo simply represent the definite winner of Colored Arc Kayles, we introduce two Boolean functions $f_{\\mathrm{B}}$ and $f_{\\mathrm{W}}$. The function $f_{\\mathrm{B}}(G)$ is defined such that $f_{\\mathrm{B}}(G)=1$ if and only if the winner of Colored Arc Kayles on $G$ from player B's turn is player B. Similarly, $f_{\\mathrm{W}}(G)$ is the function such that $f_{\\mathrm{W}}(G)=1$ if and only if the winner of Colored Arc Kayles on $G$ from player W's turn is player W. If two graphs $G$ and $G'$ satisfy \n$f_{\\mathrm{B}}(G)=f_{\\mathrm{B}}(G')$ and $f_{\\mathrm{W}}(G)=f_{\\mathrm{W}}(G')$, we say that \\emph{$G$ and $G'$ have the same game value on Colored Arc Kayles}. \n\n\n\n\\section{Arc Kayles for Trees}\\label{sec:trees}\nIn \\cite{BODLAENDER2015165}, Bodlaender et al. show that the winner of Node Kayles on trees can be determined in time $O^*(3^{n\/3})=O(1.4423^n)$. \n It is easy to show by a similar argument that the winner of Arc Kayles can also be determined in time $O(1.4423^n)$. \n It is also mentioned that the analysis is sharp apart from a polynomial factor because there is a tree for which the algorithm takes $\\Omega(3^{n\/3})$ time. The example also applies to Arc Kayles; namely, as long as we use the same algorithm, the running time cannot be improved. 
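Since Arc Kayles (with all edges selectable by both players) is impartial, positions can also be evaluated through Sprague-Grundy values: the value of a position is the minimum excludant (mex) of the values of its successor positions, and the player to move wins iff the value is nonzero. A small Python sketch of this evaluation on edge sets, our own illustration (splitting into connected components and XOR-ing their values, the usual speed-up on trees, is omitted for brevity):

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: the smallest non-negative integer not in `values`."""
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(edges):
    """Sprague-Grundy value of an Arc Kayles position.

    `edges` is a frozenset of 2-element frozensets.  A move selects an
    edge and deletes it together with every edge sharing an endpoint;
    the player to move wins iff the value is nonzero.
    """
    values = set()
    for e in edges:
        u, v = tuple(e)
        rest = frozenset(f for f in edges if u not in f and v not in f)
        values.add(grundy(rest))
    return mex(values)

# Arc Kayles on the path 1-2-3-4: the move on the middle edge clears the
# board at once, so the first player wins (nonzero Grundy value).
p4 = frozenset(frozenset(e) for e in [(1, 2), (2, 3), (3, 4)])
```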
\n \n \\medskip\n \nIn this section, we show that the winner of Arc Kayles on trees can be determined in time $O^*(2^{n\/2})=O(1.4143^n)$, which is attained by regarding trees as (so-called) unordered trees. Since a similar analysis can be applied to Node Kayles on trees, the winner of Node Kayles on trees can be determined in time $O^*(2^{n\/2})$.\n\n\\medskip \n\n \n\nLet us consider a tree $T=(V,E)$. \nBy Sprague-Grundy theory, if all connected subtrees of $T$ are enumerated, one can determine the winner of Arc Kayles. \nFurthermore, by Proposition \\ref{prop:isomorphic}, once a connected subtree $T'$ is listed, we can ignore subtrees isomorphic to $T'$. Here we adopt \nisomorphism of rooted trees. \n\\begin{definition}\\label{def:isomorphic:tree}\nLet $T^{(1)}=(V^{(1)},E^{(1)},r^{(1)})$ and $T^{(2)}=(V^{(2)},E^{(2)},r^{(2)})$ be trees rooted at $r^{(1)}$ and $r^{(2)}$, respectively. Then, \n$T^{(1)}$ and $T^{(2)}$ are called \\emph{isomorphic with respect to the root} if there is a bijection $f: V^{(1)}\\to V^{(2)}$ such that, for any pair $u,v\\in V^{(1)}$, $\\{u,v\\}\\in E^{(1)}$ if and only if $\\{f(u),f(v)\\}\\in E^{(2)}$, and $f(r^{(1)})=r^{(2)}$. \n\\end{definition}\nFor a tree $T$ rooted at $r$, two subtrees $T'$ and $T''$ are simply said to be \\emph{non-isomorphic} \nif $T'$ with root $r$ and $T''$ with root $r$ are not isomorphic with respect to the root. \nNow, we estimate the number of non-isomorphic connected subgraphs of $T$ based on isomorphism of rooted trees. 
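Isomorphism with respect to the root can be tested with the classical Aho-Hopcroft-Ullman canonical encoding: encode each vertex by the sorted concatenation of its children's encodings; two rooted trees are then isomorphic with respect to the root iff the encodings of their roots coincide. A minimal Python sketch, our own illustration (`adj` maps each vertex to its neighbors):

```python
def ahu_encode(adj, root, parent=None):
    """Aho-Hopcroft-Ullman canonical encoding of the tree `adj` rooted
    at `root`: two rooted trees are isomorphic with respect to the root
    iff their root encodings are equal.

    adj: dict mapping each vertex to an iterable of its neighbors.
    """
    child_codes = sorted(ahu_encode(adj, c, root)
                         for c in adj[root] if c != parent)
    return '(' + ''.join(child_codes) + ')'

# The path on 3 vertices rooted at an endpoint, under two different
# vertex labelings: the encodings coincide.
path_a = {1: [2], 2: [1, 3], 3: [2]}
path_b = {'x': ['y'], 'y': ['x', 'z'], 'z': ['y']}
```

Sorting the children's codes is exactly the "unordered tree" view used in the analysis below: subtrees that differ only in the order of children receive the same encoding.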
\nFor $T=(V,E)$ rooted at $r$, a connected subtree $T'$ rooted at $r$ is called an \\emph{AK-rooted subtree} of $T$ if there exists a matching $M \\subseteq E$ such that $T[V\\setminus \\bigcup M]$ consists of $T'$ and isolated vertices.\nNote that $M$ may be empty, that an AK-rooted subtree $T'$ must contain the root $r$ of $T$, and that the graph consisting of only the vertex $r$ can be an AK-rooted subtree.\n\n\n\\begin{lemma}\\label{lem:K-set:tree}\nAny tree rooted at $r$ has $O^*(2^{n\/2})(=O(1.4143^n))$ non-isomorphic AK-rooted subtrees rooted at $r$. \n\\end{lemma}\n\n\\begin{proof}\n Let $R(n)$ be the maximum number of non-isomorphic AK-rooted subtrees of any tree rooted at some $r$ with $n$ vertices. \nWe claim that $R(n)\\le 2^{n\/2}-1$ for all $n\\geq 4$, which proves the lemma. \n\nWe prove the claim by induction. For $n\\le 4$, the values of $R(n)$ are as follows: $R(1)=1, R(2)=1, R(3)=2$, and $R(4)=3$. These can be shown by enumerating the trees concretely. For example, for $n=2$, a tree $T$ with $2$ vertices is unique, and an AK-rooted subtree of $T$ containing $r$ is also unique, namely $T$ itself. For $n=3$, the candidates of $T$ are shown in Figure \\ref{tree1}. For Type A in Figure \\ref{tree1}, the AK-rooted subtrees are the tree itself and the isolated vertex $r$, and for Type B, the only AK-rooted subtree is the tree itself; thus we have $R(3)=2$. Similarly, we can show $R(4)=3$ as seen in Figure \\ref{tree2}.\nNote that $R(1)>2^{1\/2}-1$, $R(2)=1\\le 2^{2\/2}-1=1$, $R(3)=2 > 2^{3\/2}-1$, and $R(4)=3\\le 2^{4\/2}-1=3$. This $R(4)$ is used as the base case of the induction. 
\n\n\\begin{figure}[htbp]\n\\begin{minipage}{0.48\\hsize}\n \\centering\n \\includegraphics[width=0.55\\linewidth]{tree3.pdf}\n \\caption{Trees with 3 vertices rooted at $r$}\n \\label{tree1}\n\\end{minipage}\n\\begin{minipage}{0.48\\hsize}\n \\centering\n \\includegraphics[width=1.05\\linewidth]{tree4.pdf}\n \\caption{Trees with 4 vertices rooted at $r$}\n \\label{tree2}\n\\end{minipage}\n\\end{figure}\nAs the induction hypothesis, let us assume that the claim is true \nfor all $n' < n$ except $1$ and $3$, and consider a tree $T$ rooted at $r$ with $n$ vertices. Let $u_1, u_2, \\ldots, u_p$ be the children of root $r$, and let $T_i$ be the subtree of $T$ rooted at $u_i$ with $n_i$ vertices for $i=1,2,\\ldots,p$. \nNote that for an AK-rooted subtree $T'$ of $T$, the intersection of $T'$ and $T_i$ for each $i$ is either empty or an AK-rooted subtree of $T_i$ rooted at $u_i$. Based on this observation, we combine the numbers of AK-rooted subtrees of the $T_i$'s, which gives an upper bound on the number of AK-rooted subtrees of $T$. We consider two cases: (1) $n_i \\neq 3$ for every $i$, and (2) otherwise. For case (1), the number of AK-rooted subtrees of $T$ is at most \n\\[\n\\prod_{i:n_i>1}(R(n_i)+1)\\cdot \\prod_{i:n_i=1}1 \\le \\prod_{i:n_i>1} 2^{n_i\/2}=2^{\\sum_{i:n_i>1} n_i\/2} \\le 2^{(n-1)\/2} \\le 2^{n\/2}-1. \n\\]\nThat is, the claim holds in this case. Here, on the left-hand side of the first inequality, $R(n_i)+1$ represents the choice of an AK-rooted subtree of $T_i$ rooted at $u_i$ or the empty subtree, \nand ``$1$'' for $i$ with $n_i=1$ represents that $u_i$ must be left as is, because otherwise edge $\\{r,u_i\\}$ must be removed, which violates the condition ``rooted at $r$''. The first inequality holds since no $n_i$ equals $3$ and thus the induction hypothesis can be applied. \nThe last inequality holds since $n\\ge 5$. \n\nFor case (2), we further divide into two cases: (2.i) for every $i$ such that $n_i=3$, $T_i$ is Type B, and (2.ii) otherwise. 
For case (2.i), since the only AK-rooted subtree of a Type B subtree $T_i$ in Figure \\ref{tree1} is $T_i$ itself, the number is $1\\le 2^{3\/2}-1$. Thus, an analysis similar to that of case (1) can be applied as follows: \n\\begin{align*}\n\\prod_{i:n_i\\neq 1, 3}(R(n_i)+1) \\cdot \\prod_{i: T_i \\mathrm{{\\ is\\ Type\\ B }} }(2^{3\/2}-1+1) \\le \\prod_{i:n_i>1} 2^{n_i\/2}\\le 2^{n\/2}-1, \n\\end{align*}\nthat is, the claim also holds in case (2.i). \n\nFinally, we consider case (2.ii). By the assumption, at least one $T_i$ is of Type A in Figure \\ref{tree1}. Suppose that exactly $q$ of the subtrees $T_i$ are of Type A; as a canonicalization, we rename them $T_1,\\ldots,T_q$. Such renaming is allowed because we count non-isomorphic subtrees. \nFurthermore, we can sort the AK-rooted subtrees of $T_1,\\ldots, T_q$ as a canonicalization. Since each Type A subtree can appear in $T'$ as the empty subtree, as a single vertex, or as the Type A tree itself, the number of possible forms of the subforest of $T_1,\\ldots,T_q$ in $T'$ is \n\\[\n\\multiset{q}{3}=\\binom{q+2}{2}. \n\\]\nSince the number of subforests of the $T_i$'s other than $T_1,\\ldots,T_q$ is evaluated similarly to the above, we can bound the number of AK-rooted subtrees by \\[\n\\binom{q+2}{2} \\prod_{i:i>q} 2^{n_i\/2} \\le \\frac{(q+2)(q+1)}{2} 2^{\\sum_{{i:i>q}} n_i\/2} \\le \\frac{(q+2)(q+1)}{2} 2^{(n-3q-1)\/2}. \n\\]\nThus, to prove the claim, it is sufficient to show that $(q+2)(q+1)2^{(n-3q-3)\/2}\\le 2^{n\/2}-1$ for any pair of integers $n$ and $q$ satisfying $n\\ge 5$ and $1\\le q\\le (n-1)\/3$. \nThis inequality is transformed into the following:\n\\[\n\\frac{(q+1)(q+2)}{2^{\\frac{3(q+1)}{2}}} \\le 1-\\frac{1}{2^{\\frac{n}{2}}}.\n\\]\nSince the left-hand side and the right-hand side of the inequality are monotonically decreasing in $q$ and monotonically increasing in $n$, respectively, the inequality always holds if it is true for $n=5$ and $q=1$. 
In fact, we have \n\\[\n\\frac{(1+1)(1+2)}{2^{\\frac{3(1+1)}{2}}}= \\frac{3}{4}=1 - \\frac{1}{2^2} \\le 1-\\frac{1}{2^{\\frac{5}{2}}}, \n\\]\nwhich completes the proof. \n\\end{proof}\n\n\\begin{theorem}\nThe winner of Arc Kayles on a tree with $n$ vertices can be determined in time $O^*(2^{n\/2})=O(1.4143^n)$.\n\\end{theorem}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{comment}\nEach type A has three position, no edge is selected, upper edge is selected and lower edge is selected.\nSo, this tree has rooted K-set at most\n\\begin{align*}\nT(n)\n&\\leq \\binom{a+2}{2}(T(n_1)+1)(T(n_2)+1)\\cdots(T(n_k)+1)\\\\\n&\\leq\\frac{1}{2}(a+1)(a+2)\\cdot2^\\frac{n_1}{2}\\cdot2^\\frac{n_2}{2}\\cdots2^\\frac{n_k}{2}\\\\\n&=\\frac{1}{2}(a+1)(a+2)\\cdot2^\\frac{n_1+n_2+\\cdots+n_k}{2}\\\\\n&=\\frac{1}{2}(a+1)(a+2)\\cdot2^\\frac{n-3a-1}{2}.\n\\end{align*}\nThen,\n\\begin{align}\n2^\\frac{n}{2}-1-T(n)\n&\\geq2^\\frac{n}{2}-1-\\frac{1}{2}(a+1)(a+2)\\cdot2^\\frac{n-3a-1}{2}\\notag\\\\\n&=2^\\frac{n}{2}\\left\\{1-\\frac{(a+1)(a+2)}{2^\\frac{3a+3}{2}}\\right\\}-1. 
\\label{eq1}\n\\end{align}\n\n\nWe consider function $f(a)=(2^\\frac{5}{2}-1)2^\\frac{3a+3}{2}-2^\\frac{5}{2}(a+1)(a+2)$.\nWhen $a$ is non-negative integer, $f(a)>0$ and\n\\begin{align}\n(2^\\frac{5}{2}-1)2^\\frac{3a+3}{2}-2^\\frac{5}{2}(a+1)(a+2)&>0\\notag\\\\\n(2^\\frac{5}{2}-1)2^\\frac{3a+3}{2}&>2^\\frac{5}{2}(a+1)(a+2)\\notag\\\\\n1-\\frac{(a+1)(a+2)}{2^\\frac{3a+3}{2}}&>\\frac{1}{2^\\frac{5}{2}}\\label{eq2}.\n\\end{align}\nFrom the formula\\ref{eq1} and formula\\ref{eq2},\n\\begin{align*}\n2^\\frac{n}{2}-1-T(n)\n&\\geq2^\\frac{n}{2}\\left(1-\\frac{(a+1)(a+2)}{2^\\frac{3a+3}{2}}\\right)-1\\\\\n&>2^\\frac{n}{2}\\cdot\\frac{1}{2^\\frac{5}{2}}-1\\\\\n&\\geq2^\\frac{5}{2}\\cdot\\frac{1}{2^\\frac{5}{2}}-1\\\\\n&>0.\n\\end{align*}\nTherefore, $n\\geq4$, $T(n)\\leq2^\\frac{n}{2}-1=O(1.4143^n)$\n\\end{comment}\n\n\\bigskip\n\nIn the rest of this section, we mention that we can determine the winner of Node Kayles for a tree in the same running time as Arc Kayles. The outline of the proof is also almost the same as Arc Kayles. Only the difference is to utilize the notion of \\emph{NK-rooted subtree} instead of \nAK-rooted subtree for Arc Kayles. \nFor $T=(V,E)$ rooted at $r$, a connected subtree $T'$ rooted at $r$ is called an NK-rooted subtree of $T$, if there exists an independent set $U \\subseteq V$ such that $T[V\\setminus N[U]]=T'$.\n\n\n\n\n\\begin{lemma}\nAny tree rooted at $r$ has $O^*(2^{n\/2})(=O(1.4143^n))$ non-isomorphic NK-rooted subtrees rooted at $r$. \n\\end{lemma}\n\n\\begin{proof}\nLet $\\hat{R}(n)$ be the maximum number of non-isomorphic NK-rooted subtrees of any tree rooted at some $r$ with $n$ vertices. \nSimilarly to Arc Kayles, we can show that $\\hat{R}(n)\\le 2^{n\/2}-1$ for all $n\\geq 4$ by induction. For $n\\le 4$, it is easy to see that the values of $\\hat{R}(n)$'s coincide with those of $R(n)$'s: $\\hat{R}(1)=R(1)=1, \\hat{R}(2)=R(2)=1, \\hat{R}(3)=R(3)=2$ (see Figure \\ref{tree1}), and $\\hat{R}(4)=R(4)=3$ (see Figure \\ref{tree2}). 
That is, $\\hat{R}(n)\\le 2^{n\/2}-1$ does not hold for $n=1$ and $3$, whereas it holds for $n=2$ and $4$, which shows the base step of the induction. We then consider the induction step. \n\nIn the induction step, we again take the same strategy as for Arc Kayles; \nwe combine the numbers of NK-rooted subtrees of the $T_i$'s, which gives an upper bound on the number of NK-rooted subtrees of $T$. Since all the arguments use the same induction hypothesis and the same values $\\hat{R}(n)=R(n)$ for $n=1,2,3,4$, the derived bound is the same as for Arc Kayles. \n\\end{proof}\n\n\n\\section{FPT algorithm parameterized by vertex cover}\\label{sec:vc}\nIn this section, we propose winner determination algorithms for Colored Arc Kayles parameterized by the vertex cover number. As mentioned in the Introduction, the edges selected in a play of Colored Arc Kayles form a matching. This implies that the number of turns is bounded above by the maximum matching size of $G$ and thus by the vertex cover number. \nFurthermore, the vertex cover number of the input graph is bounded by twice the number of turns of Arc Kayles. \nIntuitively, a game lasting more turns seems harder to analyze than one lasting fewer turns. In that sense, the parameterization by the vertex cover number is quite natural. \n\n\nIn this section, we propose an $O^*(1.4143^{{\\tau}^2+3.17{\\tau}})$-time algorithm for Colored Arc Kayles, where ${\\tau}$ is the vertex cover number of the input graph.\nIt utilizes recursive relations similar to those shown in the previous section, \nbut we avoid enumerating all possible positions by utilizing an equivalence classification.\n\nBefore explaining the equivalence classification, we give a simple observation based on isomorphism. The isomorphism on edge-colored graphs is defined as follows. 
\n\n\\begin{definition}\\label{def:isomorphic:colored}\nLet $G^{(1)}=(V^{(1)},E_{\\CG}^{(1)}\\cup E_{\\CB}^{(1)}\\cup E_{\\CW}^{(1)})$ and $G^{(2)}=(V^{(2)},E_{\\CG}^{(2)}\\cup E_{\\CB}^{(2)}\\cup E_{\\CW}^{(2)})$ be edge-colored graphs. \nThen\n$G^{(1)}$ and $G^{(2)}$ are called \\emph{isomorphic} if there is a bijection $f: V^{(1)}\\to V^{(2)}$ such that, for any pair $u,v\\in V^{(1)}$, (i) $\\{u,v\\}\\in E_{\\CG}^{(1)}$ if and only if $\\{f(u),f(v)\\}\\in E_{\\CG}^{(2)}$, (ii) $\\{u,v\\}\\in E_{\\CB}^{(1)}$ if and only if $\\{f(u),f(v)\\}\\in E_{\\CB}^{(2)}$, and (iii) $\\{u,v\\}\\in E_{\\CW}^{(1)}$ if and only if $\\{f(u),f(v)\\}\\in E_{\\CW}^{(2)}$. \n\\end{definition}\n\nThe following proposition is obvious. \n\n\\begin{proposition}\\label{prop:isomorphic}\nIf edge-colored graphs $G^{(1)}$ and $G^{(2)}$ are isomorphic, \n$G^{(1)}$ and $G^{(2)}$ have the same game value for Colored Arc Kayles.\n\\end{proposition}\n\n\\begin{comment}\n\\begin{definition}\\label{def:isomorphic}\nLet $G=(V,E)$ and $G'=(V',E')$ be graphs. Then $G$ is \\emph{isomorphic} to $G'$ if for any pair of $u,v\\in V$ there is a bijection $f: V\\to V'$ such that $\\{u,v\\}\\in E$ if and only if $\\{f(u),f(v)\\}\\in E'$. \n\\end{definition}\n\nWe further define isomorphism for colored graphs.\n\n\\begin{definition}\\label{def:isomorphic:colored}\nLet $G=(V,E)$ and $G'=(V',E')$ be edge-colored graphs where $E=\\bigcup_{i=1}^r E_i$ and $E'=\\bigcup_{i=1}^r E'_i$. Then $G$ is isomorphic to $G'$ if for any pair of $u,v\\in V$ there is a bijection $f: V\\to V'$ such that $\\{u,v\\}\\in E_i$ if and only if $\\{f(u),f(v)\\}\\in E'_i$. \n\\end{definition}\n\nOur algorithm is also based on dynamic programming. However, we do not compute the winner on $G$ if the winner of $G'$ isomorphic to $G$ has already been computed. 
Instead, we use the result of $G'$.\nTo do this, we utilize memoization and preserve the result of $G$ if the winner of a graph isomorphic to $G$ has not been computed.\n\n\n\\end{comment}\n\nLet $S$ be a vertex cover of $G=(V,E_{\\CG}\\cup E_{\\CW}\\cup E_{\\CB})$, that is, any $e=\\{u,v\\}\\in E_{\\CG} \\cup E_{\\CW} \\cup E_{\\CB}$ satisfies $\\{u,v\\}\\cap S\\neq \\emptyset$. \nNote that for $v\\in V\\setminus S$, $N(v)\\subseteq S$ holds. \nWe say that two vertices $v,v'\\in V\\setminus S$ are \\emph{equivalent with respect to $S$ in $G$} if $N(v)=N(v')$ and $c(\\{u,v\\})=c(\\{u,v'\\})$ holds for all $u\\in N(v)$. If two vertices $v,v'\\in V\\setminus S$ are equivalent with respect to $S$ in $G$, then $G-u-v$ and $G-u-v'$ are isomorphic because the bijection swapping only $v$ and $v'$ satisfies the isomorphism condition. Thus, we have the following lemma.\n\\begin{comment}\n\\begin{lemma}\\label{lem:isomorphic:type}\nFor $v,v'\\in V$ of the same type, the winner of Kayles on $G-u-v$ is the same as the one on $G-u-v'$ where $u\\in N(v)$.\n\\end{lemma}\n\\end{comment}\n\\begin{lemma}\\label{lem:isomorphic:type}\nSuppose that two vertices $v,v'\\in V\\setminus S$ are equivalent with respect to $S$ in $G$. Then, for any $u\\in N(v)$, $G-u-v$ and $G-u-v'$ have the same game value.\n\\end{lemma}\n\nBy the equivalence with respect to $S$, we can split $V\\setminus S$ into equivalence classes. Note here that the number of equivalence classes is at most $4^{\\card{S}}$, because for each $u\\in S$ and $v \\in V\\setminus S$, the edge $\\{u,v\\}$ either does not exist or is colored with one of the three colors; \nwe can identify an equivalence class with ${\\bm{x}}\\in \\{\\emptyset,\\mathrm{G},\\mathrm{B},\\mathrm{W}\\}^{S}$, a 4-ary vector of length $\\card{S}$. For $S'\\subseteq S$, let ${\\bm{x}}[S']$ denote\nthe vector obtained by dropping the components of ${\\bm{x}}$ except the ones corresponding to $S'$. 
Also, for $u\\in S$, ${\\bm{x}}[u]$ denotes the component corresponding to $u$ in ${\\bm{x}}$. \nThen, $V$ is partitioned into $V_S^{({\\bm{x}})}$'s, where $V_S^{({\\bm{x}})}=\\{v\\in V\\setminus S \\mid \\forall u\\in S: c(\\{v,u\\})={\\bm{x}}[u]\\}$. We arbitrarily define the representative of a non-empty $V_S^{({\\bm{x}})}$ (e.g., the vertex with the smallest ID), which is denoted by $\\rho(V_S^{({\\bm{x}})})$. By using $\\rho$, we also define the representative edge set by\n\\begin{align*}\n E^R(S)=\\bigcup_{{\\bm{x}}\\in \\{\\emptyset,\\mathrm{G},\\mathrm{B},\\mathrm{W}\\}^{S}} \\{\\{u,\\rho(V_S^{({\\bm{x}})})\\} \\in E_{\\CG}\\cup E_{\\CB} \\cup E_{\\CW} \\mid u \\in S\\}.\n\\end{align*}\nBy Lemma \\ref{lem:isomorphic:type}, we can assume that both players choose an edge only in $E^R(S)$, which enables us to modify the recursive equations (\\ref{eq:recursion1}) and (\\ref{eq:recursion2}) as follows: \n\\begin{comment}\nBy Lemma \\ref{lem:isomorphic:type}, if there are at least two vertices with the same type, we only choose edges incident to one of them.\nThus, we define the representative edge set $E^R$ by\n\\begin{align*}\n E^R=\\left\\{\\{u,v_i\\}\\in E \\mid \\nexists v_j (j2.5$ (see Table \\ref{tab:variance}). This is to be expected as the CMB data is the only point we have above $z=2.5$. Thus, the constraints from this integrated effect are expected to become dominant in the redshift range $2.5 < z < 1100$. However, it is important to note that the CMB is not the only contributor to the $\\delta H(s)$ constraints over this redshift range, since $f \\! \\sigma_8$ data also constrain $\\delta H(s)$ over its whole domain through their role in solving the Jeans equation. 
The associated constraint is however very weak.\n\nFor the purpose of studying the effect of different data types, we split the data points within the data sets employed in the fiducial analysis into two groups: ``geometry'' -- exclusively containing measurements of the expansion history -- and ``growth'' -- solely containing $f \\! \\sigma_8$ measurements. \n\n\\begin{figure} \n\\centering\n \\includegraphics[width=\\linewidth]{{\"dH_gp_geo_vs_gro\"}.png} \n \\caption{$1\\sigma$-constraints on $\\delta H(z)$ broken down by type of data considered. Solid lines represent the mean of the GPs at each redshift. In red we display the constraints resulting from the analysis of all present data; in blue, the effect of removing the CMB data set; in black, the effect of fixing $\\Omega_m$ and $\\sigma_8$ to their best-fit (BF) values; in magenta, the constraints resulting from only considering growth data; and in green, the constraints resulting from only considering geometry data.}\n \\label{fig:dH_geo_vs_gro}\n\\end{figure}\n\nAs we can see in the second panel of Fig. \\ref{fig:dH_geo_vs_gro}, growth data alone are much weaker at constraining $\\delta H(z)$. From Table \\ref{tab:variance} we see that constraints from growth data alone on $\\overline{\\sigma}(\\delta H(s))$ are approximately twice as wide as those resulting from analysing the entire data set. These constraints are consistent with the prior on the hyperparameter $\\eta$. On the other hand, the constraining power of the geometry data is only slightly weaker than that of the entire data set. Hence, the $\\delta H(z)$ constraints are mostly dominated by the geometry data sets, as one would expect. Nonetheless, the addition of growth data increases the constraining power. This is shown explicitly in the last panel of Fig. \\ref{fig:dH_geo_vs_gro}, which compares the results of using geometry data alone with those using the full data set. 
This recovers the expected behavior: more data increases the constraining power and the contours shrink.\n\n\\begin{figure} \n\\centering\n \\includegraphics[width=\\linewidth]{{\"data_cosmo_functions\"}.pdf} \n \\caption{$1\\sigma$-constraints for the cosmological functions $H(z)$ and $f \\! \\sigma_{8}$ (top and bottom panels, respectively) broken down by combination of data sets. Solid lines represent the mean of the GPs at each redshift. The dashed black lines show the prediction for each cosmological function using our Planck 2018 fiducial cosmology (see Table~\\ref{tab:fiducial}). In red we show the constraints resulting from the analysis of all present data; in green, the impact of removing the CMB data set; and in black, the impact of removing the DSS data set.}\n \\label{fig:comp_data}\n\\end{figure} \n\nWe now shift the focus of our discussion to the constraints we derive from $\\delta H(s)$ for the expansion history itself, $H(z)$, and the linear growth of matter anisotropies, $f\\!\\sigma_8$. Comparing the constraints for both cosmological functions from our analysis of current data with the \\textit{Planck} 2018 predictions, we find an overall good agreement, finding both functions to contain the \\textit{Planck} 2018 predictions within their $2\\sigma$ confidence contours. This can be seen in Fig. \\ref{fig:comp_data}. Nonetheless, we observe a greater than 1$\\sigma$ preference for a lower $f\\!\\sigma_8$ between $0T_{\\rm c})>$-0.1 $\\Phi_0\/$A. The local $T_{\\rm c}$ mapping clearly shows that the local $T_{\\rm c}$ is weakly enhanced at the edge but is homogeneous 30~$\\mu$m away from the edge into the sample [Fig. 2, sample\\#1; Fig. S2, sample\\#2]. We note that the reported local $T_{\\rm c}$ near the edge is a lower bound relative to the actual value because the penetration depth near $T_{\\rm c}$ is longer than the pickup loop's scale and the susceptibility loses some signal at the edge. 
If two phase transitions do exist, they must be separated by less than 25 mK, or the second kink below $T_{\\rm c}$ is much smaller than our experimental noise.\n\n\\begin{figure}[!hb]\n\\begin{center}\n\\includegraphics*[width=7cm]{.\/Fig2.jpg}\n\\caption{ Small enhancement of local $T_{\\rm c}$ near the edge of sample\\#1. (a) The local $T_{\\rm c}$ mapping is obtained from the local susceptometry scans. (b) Cross section of the local $T_{\\rm c}$ from A to A' shows the local $T_{\\rm c}$ enhancement of 30 mK at the edge. The plotted area and the cross section from A to A' are the same as in Fig. 1.}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure*}[htb]\n\\begin{center}\n\\includegraphics*[width=13cm]{.\/Fig3.jpg}\n\\caption{The vortex density is homogeneous over many-micron distances. The existence of vortices and antivortices in low-field scans may indicate a local magnetic source in sample\\#1. (a,b) Local magnetometry scan after field cooling shows the vortices pinned parallel to the sample edge, as denoted by the dashed lines. (c) There are a few vortices and antivortices pinned far from the edge after near zero field cooling. (d) Magnetometry scan near zero field above $T_{\\rm c}$ shows a small magnetic dipole at the sample's edge, but no other indication of magnetism. The \"tail\" of the vortices and dipoles is due to the asymmetric shielding structure of the scanning SQUID~\\cite{Kirtleyrsi2016}.}\n\\end{center}\n\\end{figure*}\n\nNow we turn to the pinned vortex density, which reflects the impurity density on the crystal surface for small applied magnetic fields. The distance between vortices is on the order of microns. Our magnetometry scan imaged the pinned vortices induced by cooling in an applied uniform magnetic field from 2 K to 100 mK [Figs. 3(a),3(b), sample\\#1; Figs. S3(a),S3(b), sample\\#2]. 
The number of vortices corresponds to the applied field, but the vortices are preferentially pinned along lines in one direction, which is parallel to the sample edge. This linear pinning indicates the existence of a line anomaly, such as nanometer-scale step edges along crystal axes. Near zero magnetic field, there are still a few vortices and antivortices pinned far from the edge [Fig. 3(c), sample\\#1; Fig. S3(c), sample\\#2]. Notably, these vortices and antivortices do not disappear after zero field cooling with slower cooling rates, which is expected to cancel the uniform background field normal to the sample surface by the application of an external field. These data are inconsistent with the argument from polar Kerr effect measurements that there are no vortices in UTe$_2$ within the beam size area ($\\sim11~\\mu$m radius)~\\cite{Wei2022}. Further, our results indicate the existence of a local magnetic source that induces vortices and antivortices, in spite of the absence of long-range order or strong magnetic sources on the scan plane above $T_{\\rm c}$ \\cite{Ran2019Nearly,Miyake2019Meta,Hutanu2020Low}. \nSmall dipole fields are observed at the edge of the sample, which may stem from U impurities; however, these impurities cannot induce pinned vortices and antivortices as they are too far away [Fig. 3(d), sample\\#1]. Muon spin resonance and NMR measurements have detected the presence of strong and slow magnetic fluctuations in UTe$_2$ at low temperatures~\\cite{Tokunaga2022Slow,Sundar2022Ubi}. Therefore, a sensible scenario is that these fluctuations are pinned by defects and become locally static.\n\n\n\\begin{figure*}[htb]\n\\begin{center}\n\\includegraphics*[width=16cm]{.\/Fig4.jpg}\n\\caption{The temperature dependence of the superfluid density best matches an anisotropic, rather than isotropic, gap structure. 
(a-c) Temperature dependence of the normalized superfluid density $n_{(011)}$ at the fixed positions in samples\\#1,\\#2 indicated by the blue dots in Figs. 1(b), S1(b), respectively. The thick lines are best-fit simulation curves for four gap symmetries with (a) an isotropic FS, (b) an ellipsoidal FS, and (c) a cylindrical FS~\\cite{supple}. (d-i) Best-fit models of gap symmetries (d,e), (f,g) and (h,i) for (a), (b) and (c), respectively. The FS is plotted in yellow. The distance between the larger surfaces and the FS represents the angular dependence of the SC gap $\\Omega$ in (a,b) spherical coordinates and (c) cylindrical coordinates. All surfaces are cut for clarity.}\n\\end{center}\n\\end{figure*}\n\n\nTo estimate the local superfluid density, we measure the local susceptibility at different temperatures with the pickup loop position fixed. The local superfluid density is obtained using the numerical expression of the susceptibility assuming a homogeneous penetration depth, $\\lambda$, as described below. Kirtley {\\it et al}. developed the expression for the susceptibility as a function of the distance between the susceptometer and the sample surface~\\cite{Kirtley2012}. In this model, wherein the sample surface is at $z=0$, we consider three regions. Above the sample ($z>0$), the pickup loop and field coil are at $z$ in vacuum and $\\mu_1 = \\mu_0$, where $\\mu_0$ is the permeability in vacuum. In the sample ($-t\\leq z \\leq 0$), the London penetration depth is $\\lambda=\\sqrt{m\/4\\pi ne^2}$, and the permeability is $\\mu_2$. Below the sample ($z<-t$), there is a nonsuperconducting substrate with a permeability $\\mu_3$. The radii of the field coil and the pickup loop are $a$ and $b$, respectively. 
By solving Maxwell's equations and the London equation for the three regions, the SQUID height dependence of the susceptibility $\\chi(z)$ is expressed as \n\\begin{widetext}\n\\begin{equation}\\label{phi-z}\n \\chi(z)\/\\phi_s = \\int_0^\\infty{dx e^{-2x\\bar{z}}xJ_1(x)}\\left[\\frac{-(\\bar{q}+\\bar{\\mu}_2x)(\\bar{\\mu}_3\\bar{q}-\\bar{\\mu}_2x)+e^{2\\bar{q}\\bar{t}}(\\bar{q}-\\bar{\\mu}_2x)(\\bar{\\mu}_3\\bar{q}+\\bar{\\mu}_2x)}{-(\\bar{q}-\\bar{\\mu}_2x)(\\bar{\\mu}_3\\bar{q}-\\bar{\\mu}_2x)+e^{2\\bar{q}\\bar{t}}(\\bar{q}+\\bar{\\mu}_2x)(\\bar{\\mu}_3\\bar{q}+\\bar{\\mu}_2x)}\\right],\n\\end{equation}\n\\end{widetext}\nwhere $\\phi_s = A\\mu_0\/(2\\Phi_0 a)$ is the self-inductance between the field coil and the pickup loop, $A$ is the effective area of the pickup loop, $\\bar{z} = z\/a$, $J_1$ is the first-order Bessel function, $\\bar{t} = t\/a$, $\\bar{q} = \\sqrt{x^2 + \\bar{\\lambda}^{-2}}$, and $\\bar{\\lambda} = \\lambda\/a$. For the bulk sample on a copper substrate ($\\bar{t}\\gg 1, \\mu_3 = 1$), the observed susceptibility only depends on $\\lambda$, $\\mu_2$, and the SQUID structure. \n\nThe penetration depth $\\lambda(T)$ was calculated using Eq.~\\eqref{phi-z} and the observed susceptibility [Fig. 4(a)]. The normalized superfluid density $n_{s} = \\lambda^2(0)\/\\lambda^2(T)$ was calculated from the obtained penetration depth's temperature dependence, where $\\lambda(0) = 1620\\pm150$~nm [sample\\#1], $1730\\pm300$~nm [sample\\#2] [Fig.~4(b)]. Here the error for $\\lambda$ and $n_s$ is estimated from the pickup loop height uncertainty. We note that sample\\#2 had a dead layer of 700~nm on the surface, which we estimated by assuming that sample\\#2 has a zero-temperature penetration depth similar to that of sample\\#1, because sample\\#2 was accidentally exposed to air for about one extra hour. 
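In the bulk limit ($\bar{t}\gg 1$, $\mu_2=\mu_3=1$) the bracket in the susceptibility integral reduces to $(\bar{q}-x)/(\bar{q}+x)$, which makes the forward model cheap to evaluate. The following is a minimal numerical sketch of that bulk-limit integral; it is illustrative only, and the function and variable names are ours, not the analysis code used here:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j1

def chi_bulk(zbar, lam_bar):
    """chi(z)/phi_s in the bulk limit (t >> a, mu_2 = mu_3 = 1):
    the bracket reduces to (q - x)/(q + x) with q = sqrt(x^2 + 1/lam_bar^2),
    where zbar = z/a and lam_bar = lambda/a."""
    def integrand(x):
        q = np.sqrt(x * x + 1.0 / lam_bar**2)
        return np.exp(-2.0 * x * zbar) * x * j1(x) * (q - x) / (q + x)
    # the factor e^{-2 x zbar} damps the oscillatory J_1 tail, so a
    # finite upper limit is ample for zbar of order 0.25 or larger
    val, _ = quad(integrand, 0.0, 100.0, limit=500)
    return val
```

Inverting `chi_bulk` in `lam_bar` at each temperature (e.g. with `scipy.optimize.brentq`) then converts a measured susceptibility into $\lambda(T)$, which is how a fit of this kind proceeds: a shorter penetration depth screens more and gives a larger response, and the response falls off with the height $\bar z$.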
The locally obtained superfluid density $n_s$ saturates below $T\/T_{\\rm c}=0.1$.\n\n\nWe examine the SC gap structure through the temperature dependence of the superfluid density. The superfluid density $n_{i}$ is sensitive to low-energy excitations along the $i$ axis, which is perpendicular to the applied field. In our case, $n_{i}$ is sensitive to excitations within the plane normal to [011], and the extracted $n_{(011)}$ is the average of $n_a$ and $n_{\\perp[011],a}$~\\cite{Chandrasekhar1993}. The SC gap function of UTe$_2$, $\\Delta$, is most likely odd-parity within the orthorhombic $D_{2h}$ point group. In the presence of strong spin-orbit coupling, $\\Delta(T,\\Vec{k})=\\Psi(T)\\Omega(\\Vec{k})$, and the angle dependence of the gap function is expressed as $\\Omega(\\Vec{k})\\propto\\sqrt{\\Vec{d}\\cdot\\Vec{d}^*\\pm|\\Vec{d}\\times\\Vec{d}^*|}$. In this case, the possible irreducible representations are $A_{1u}$ [full gap, $\\Vec{d}=(c_1k_x,c_2k_y,c_3k_z)$], $B_{1u}$ [point nodes along $c$, $\\Vec{d}=(c_1k_y,c_2k_x,c_3k_xk_yk_z)$], $B_{2u}$ [point nodes along $b$, $\\Vec{d}=(c_1k_z,c_2k_xk_yk_z,c_3k_x)$], and $B_{3u}$ [point nodes along $a$, $\\Vec{d}=(c_1k_xk_yk_z,c_2k_z,c_3k_y)$]~\\cite{Annett1990}. We note that the coefficients $c_1, c_2$, and $c_3$ may differ by orders of magnitude~\\cite{IshizukaPRB2021}. \n\nFor the sake of completeness, here we assume three cases of Fermi surface structure to calculate the temperature-dependent superfluid density with fit parameters $c_1, c_2$, and $c_3$~\\cite{supple,Kogan2021}: (I) isotropic Fermi surface (FS) based on the isotropic heavy 3D Fermi surface observed by angle-resolved photoemission spectroscopy (ARPES) measurements~\\cite{FujimoriJPSJ2019,MiaoPRL2020}; (II) ellipsoidal FS based on the upper critical field~\\cite{AokiJPSJ2019}; and (III) cylindrical FS. 
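As a baseline for such model comparisons, the superfluid density of a fully isotropic gap on a spherical FS follows from the Yosida function. A minimal sketch, assuming the standard tanh interpolation for the gap; this is a generic BCS-style illustration, not the multi-parameter gap fits used in the analysis:

```python
import numpy as np
from scipy.integrate import quad

def gap(t, delta0=1.764):
    """BCS-like gap Delta(T)/(kB*Tc) via the common interpolation
    Delta(t) = Delta0 * tanh(1.74*sqrt(1/t - 1)), with t = T/Tc."""
    return 0.0 if t >= 1.0 else delta0 * np.tanh(1.74 * np.sqrt(1.0 / t - 1.0))

def ns_isotropic(t):
    """n_s(T)/n_s(0) = 1 - 2*int_0^inf (-df/dE) d(eps),
    E = sqrt(eps^2 + Delta^2): isotropic gap, spherical FS."""
    if t >= 1.0:
        return 0.0
    d = gap(t)
    def minus_df_dE(eps):
        E = np.sqrt(eps * eps + d * d)
        u = np.exp(-E / t)  # overflow-safe form of 1/(2t(1+cosh(E/t)))
        return u / (t * (1.0 + u) ** 2)
    val, _ = quad(minus_df_dE, 0.0, 50.0)
    return 1.0 - 2.0 * val
```

A nodal or strongly anisotropic gap adds low-energy quasiparticles, so the measured $n_s(T)$ keeps rising below $T/T_{\rm c}\approx 0.2$ instead of saturating as this isotropic baseline does; that is the qualitative feature the gap-symmetry fits discriminate on.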
Case (III) is based on both ARPES measurements, which observed light cylindrical electron bands~\\cite{FujimoriJPSJ2019,MiaoPRL2020}, and recent de Haas--van Alphen measurements that reveal heavy cylindrical bands~\\cite{Aoki2022First}. \n\nThe isotropic fully gapped model $A_{1u}$ saturates at $T\/T_{\\rm c}=0.2$ [Fig.~S6]. In contrast, the experimental data saturate at a lower temperature, which implies an anisotropic structure in the SC gap function. The calculated normalized superfluid density $n_{(011)}\\sim(n_a+n_{\\perp[011],a})\/2$ for highly anisotropic $A_{1u}$ and $B_{1u}$ has a similar temperature dependence to our experimental results, whereas $n_{(011)}$ for $B_{2u}$ and $B_{3u}$ does not agree with our data because of their point nodes near the (011) plane with isotropic or ellipsoidal 3D Fermi surfaces [Figs.~4(a),4(b)]. For highly anisotropic $A_{1u}$ and $B_{3u}$, $n_{(011)}$ agrees with our experimental results, whereas $n_{(011)}$ for $B_{2u}$ is inconsistent with the data for a cylindrical Fermi surface [Fig.~4(c)]. We note that our calculations with point nodes do not completely explain our results near zero temperature, which may be caused by our assumptions of the simplest structures of Fermi surface and gap functions, the simple averaging of $n_a$ and $n_{\\perp[011],a}$, or by the assumption of a single band.\nOur results indicate the existence of point nodes along the $a$ axis for a cylindrical Fermi surface. A highly anisotropic fully-gapped component is also allowed.\n\n\nIn summary, we microscopically imaged the superfluid density and the vortex density in high-quality samples of UTe$_2$. The superfluid density is homogeneous, and the temperature dependence below the SC transition $T_{\\rm c}$ does not show evidence for a second phase transition. 
The observed temperature dependence of the superfluid density can be explained by a $B_{1u}$ order parameter for a 3D ellipsoidal (or isotropic) Fermi surface or by a $B_{3u}$ order parameter for a quasi-2D cylindrical Fermi surface. A highly anisotropic $A_{1u}$ symmetry component is also allowed for any Fermi surface structure. Combining our results with previous studies of the gap symmetry of UTe$_2$~\\cite{Bae2021Ano,Metz2019Point,Kittaka2020Ori,Fujibayashi2022Super}, we conclude that the SC order parameter is most likely dominated by the $B_{3u}$ symmetry. In light of our results, evidence for time-reversal symmetry breaking and chiral superconductivity in UTe$_2$ could be understood either by the presence of vortices and antivortices even at zero applied field or by the presence of a finite anisotropic $A_{1u}$ symmetry in the SC order parameter. \n\n\n\n\n\\begin{acknowledgments}\nThe authors thank J. Ishizuka for fruitful discussions. This work was primarily supported by the Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract No. DE-AC02-76SF00515. Sample synthesis at LANL was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Science and Engineering \"Quantum Fluctuations in Narrow-Band Systems\" project, while heat capacity measurements were performed with support from the LANL LDRD program. Y.I. was supported by the Japan Society for the Promotion of Science (JSPS), Overseas Research Fellowship.\n\\end{acknowledgments}\n\n\n\n\t\n\t\n\\section*{Contributions}\nY.I. carried out the scanning SQUID microscopy, analyzed experimental data, simulated the superfluid density, and wrote the manuscript. H.M. carried out the scanning SQUID microscopy. S.M.T., F.R., and P.F.S.R. synthesized the crystals. K.A.M. supervised the project. 
All the authors discussed the results and implications and commented on the manuscript.\n\t\n\t\n\\section*{Additional information}\nCorrespondence and requests for materials should be addressed to Y. I. (yiguchi@stanford.edu)\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nConsider the problem of finding a local minimizer of the expectation\n$F(x)\\eqdef\\bE(f(x,\\xi))$ w.r.t. $x\\in \\bR^d$, where $f(\\,.\\,,\\xi)$ is\na possibly non-convex function depending on some random\nvariable~$\\xi$.\nThe distribution of $\\xi$ is assumed unknown, but revealed online by\nthe observation of iid copies $(\\xi_n:n\\geq 1)$ of the r.v.~$\\xi$.\nStochastic gradient descent (SGD) is the most classical algorithm\nto search for such a minimizer.\nVariants of SGD which include an inertial term have also become very popular.\nIn these methods, the update rule depends on a parameter called\nthe \\emph{learning rate}, which is generally assumed constant or vanishing.\nThese algorithms, although widely used, have at least two limitations.\nFirst, the choice of the learning rate is generally difficult; large learning rates result in\nlarge fluctuations of the estimate, whereas small learning rates\ninduce slow convergence. Second, a common learning rate is used for\nevery coordinate despite the possible discrepancies in the values of\nthe gradient vector's coordinates.\n\nTo alleviate these limitations, the popular\n\\adam\\ algorithm \\cite{kingma2014adam} adjusts the learning rate\ncoordinate-wise, as a function of the past values of the squared\ngradient vectors' coordinates. The algorithm thus combines the assets\nof inertial methods with an adaptive per-coordinate learning rate\nselection. Finally, the algorithm includes a so-called\n\\emph{bias correction} step. 
Acting on the current estimate of the gradient vector,\nthis step is especially useful during the early iterations.\n\nDespite the growing popularity of the algorithm, only a few works\ninvestigate its behavior from a theoretical point of\nview (see the discussion in Section~\\ref{sec:related-works}).\nThe present paper studies the convergence of \\adam\\ from a dynamical systems viewpoint.\\\\\n\n\\noindent{\\bf Contributions}\n\\begin{itemize}[leftmargin=*]\n\\item We introduce a continuous-time version of the \\adam\\ algorithm in the form\n of a non-autonomous ordinary differential equation (ODE).\n Building on the existence of an explicit Lyapunov function for the ODE,\n we show the existence of a unique global solution to the ODE. This first result\n turns out to be non-trivial due to the irregularity of the vector field.\n We then establish the convergence of the continuous-time \\adam\\\n trajectory to the set of critical points of the objective function $F$.\nThe proposed continuous-time version of \\adam\\ provides useful\n insights into the effect of the bias correction step. It is shown\n that, close to the origin, the objective function~$F$ is\n non-increasing along the \\adam\\ trajectory,\n suggesting that early iterations of \\adam\\ can only improve the initial guess.\n\n\\item Under a \\L{}ojasiewicz-type condition,\n we prove that the solution to the ODE converges to a single critical point of the objective\n function $F$. We provide convergence rates in this case.\n\n\\item\n In discrete time, we first analyze the \\adam\\ iterates in the constant stepsize\n regime as originally introduced in \\cite{kingma2014adam}.\n In this work, it is shown that the discrete-time \\adam\\ iterates shadow the\n behavior of the non-autonomous ODE in the asymptotic regime where\n the stepsize parameter $\\gamma$ of \\adam\\ is small. 
More\n precisely, we consider the interpolated process $\\sz^\\gamma(t)$\n which consists of a piecewise linear interpolation of the \\adam\\ iterates.\n The random process $\\sz^\\gamma$ is indexed by the parameter~$\\gamma$, which is\n assumed constant during the whole run of the algorithm.\n In the space of continuous functions on $[0,+\\infty)$ equipped\n with the topology of uniform convergence on compact sets,\n we establish that $\\sz^\\gamma$ converges in probability\n to the solution to the non-autonomous ODE when $\\gamma$ tends to zero.\n\n\\item Under a stability condition, we prove the asymptotic ergodic convergence\n of the probability of the discrete-time \\adam\\ iterates to approach the set of critical\n points of the objective function in the doubly asymptotic regime\n where $n\\to\\infty$ then $\\gamma\\to 0$.\n\\item\nBeyond the original constant stepsize \\adam\\,, we propose\na decreasing stepsize version of the algorithm.\nWe provide sufficient conditions ensuring the stability and the almost sure convergence of the iterates towards\nthe critical points of the objective function.\n\\item We establish a convergence rate of the stochastic iterates of the decreasing stepsize algorithm under the\nform of a conditional central limit theorem.\n\n\\end{itemize}\nWe claim that our analysis can be easily extended to\nother adaptive algorithms such as e.g. 
\\textsc{RmsProp} or\n\\textsc{AdaGrad} \\cite{tieleman2012lecture,duchi2011adaptive}\nand \\textsc{AmsGrad} (see Section~\\ref{sec:related-works}).\\\\%\\medskip\n\nThe paper is organized as follows.\nIn Section~\\ref{sec:adam}, we present\nthe \\adam\\ algorithm and the main assumptions.\nOur main results are stated in Sections~\\ref{sec:continuous_time} to \\ref{sec:discrete_decreasing}.\nWe provide a review of related works in Section~\\ref{sec:related-works}.\nThe rest of the paper addresses the proofs of our results\n(Sections~\\ref{sec:proofs_cont_time} to \\ref{sec:proofs_sec_discrete_decreasing}).\n\\hfill\\\\\n\n\\noindent{\\bf Notations}. If $x$, $y$ are two vectors in $\\bR^d$ for some $d\\geq 1$, we denote by $x \\odot y$, $x^{\\odot 2}$, $x\/y$, $|x|$, $\\sqrt{|x|}$ the vectors\non $\\bR^d$ whose $i$-th coordinates are respectively given by $x_iy_i$, $x_i^2$, $x_i\/y_i$, $|x_i|$, $\\sqrt{|x_i|}$.\nInequalities of the form $x\\leq y$ are read componentwise.\nDenote by $\\|\\cdot\\|$ the standard Euclidean norm.\nFor any vector $v\\in (0,+\\infty)^d$, write $\\|x\\|^2_v = \\sum_i v_i x_i^2$.\nNotation $A^T$ represents the transpose of a matrix $A$.\nIf $z\\in \\bR^d$ and $A$ is a non-empty subset of $\\bR^d$,\nwe use the notation $\\mathsf d(z,A) \\eqdef \\inf\\{ \\|z-z'\\| :z'\\in A\\}$.\nIf $A$ is a set, we denote by $\\1_A$ the function equal to one on that set and to zero elsewhere.\n We denote by $C([0,+\\infty),\\bR^d)$ the space of continuous functions from $[0,+\\infty)$ to $\\bR^d$ endowed with the topology of\nuniform convergence on compact intervals.\n\n\\section{The \\adam\\ Algorithm}\n\\label{sec:adam}\n\n\\subsection{Algorithm and Assumptions}\n\nLet $(\\Omega,\\cF,\\bP)$ be a probability space, and let\n$(\\Xi,\\mathfrak{S})$ denote another measurable space. 
Consider a\nmeasurable map $f:\\bR^d\\times \\Xi\\to \\bR$, where $d$ is an integer.\nFor a fixed value of $\\xi$, the mapping $x\\mapsto f(x,\\xi)$ is\nsupposed to be differentiable, and its gradient w.r.t. $x$ is denoted\nby $\\nabla f(x,\\xi)$. Define $\\cZ\\eqdef \\bR^d\\times\\bR^d\\times\n\\bR^d$, $\\cZ_{+}\\eqdef \\bR^d\\times\\bR^d\\times [0,+\\infty)^d$ and\n$\\cZ_{+}^*\\eqdef \\bR^d\\times\\bR^d\\times (0,+\\infty)^d$. \\adam\\\ngenerates a sequence $z_n\\eqdef (x_n,m_n,v_n)$ on\n$\\cZ_+$ given by Algorithm~\\ref{alg:adam}.\n\\begin{algorithm}[tb]\n \\caption{\\bf \\adam$(\\gamma,\\alpha,\\beta,\\varepsilon)$.}\n \\label{alg:adam}\n\\begin{algorithmic}\n \\STATE {\\bfseries Initialization:} $x_0\\in \\bR^d, m_0=0$, $v_0=0$.\n \\FOR{$n=1$ {\\bfseries to} $n_{\\text{iter}}$}\n \\STATE $m_n = \\alpha m_{n-1} + (1-\\alpha) \\nabla f(x_{n-1},\\xi_n)$\n \\STATE $v_n = \\beta v_{n-1} + (1-\\beta) \\nabla f(x_{n-1},\\xi_n)^{\\odot 2}$\n \\STATE $\\hat m_{n} = m_{n}\/(1-\\alpha^{n})$\n \\COMMENT{bias correction step}\n \\STATE $\\hat v_{n} = v_{n}\/(1-\\beta^{n})$\n \\COMMENT{bias correction step}\n \\STATE $x_{n} = x_{n-1} - \\gamma \\hat m_{n} \/ (\\varepsilon+\\sqrt{\\hat v_{n}}) \\,.$\n \\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\nIt satisfies:\n$\nz_n=T_{\\gamma,\\alpha,\\beta}(n,z_{n-1},\\xi_n)\\,,\n$\nfor every $n\\geq 1$, where for every $z=(x,m,v)$ in $\\cZ_+$, $\\xi\\in \\Xi$,\n\\begin{equation}\n T_{\\gamma,\\alpha,\\beta}(n,z,\\xi) \\eqdef\n \\begin{pmatrix}\n x -\\frac{\\gamma (1-\\alpha^{n})^{-1}(\\alpha m+(1-\\alpha)\\nabla f(x,\\xi))}{ \\varepsilon+{(1-\\beta^{n})^{-1\/2} (\\beta v+(1-\\beta)\\nabla f(x,\\xi)^{\\odot 2})^{1\/2}}} \\\\\n\\alpha m + (1-\\alpha) \\nabla f(x,\\xi) \\\\\n\\beta v + (1-\\beta) \\nabla f(x,\\xi)^{\\odot 2}\n\\end{pmatrix}\\,.\\label{eq:T}\n\\end{equation}\n\n\\begin{remark}\n\\label{rem:debiasing}\n\\quad\\quad The iterates $z_n$ form a non-homogeneous Markov chain, because $T_{\\gamma,\\alpha,\\beta}(n,z,\\xi)$ 
depends on $n$.\nThis is due to the so-called debiasing step, which consists of replacing $m_n,v_n$ in Algorithm~\\ref{alg:adam}\nby their ``debiased'' versions $\\hat m_n,\\hat v_n$. The motivation becomes clear when expanding the expression:\n\\begin{equation*}\n\\hat m_n = \\frac{m_n}{1-\\alpha^n} = \\frac {1-\\alpha}{1-\\alpha^n}\\sum_{k=0}^{n-1}\\alpha^k \\nabla f(x_k,\\xi_{k+1})\\,. \\label{eq:debiasing}\n\\end{equation*}\nFrom this equation, it is observed that, $\\hat m_n$ forms a convex combination of the past gradients.\nThis is unlike $m_n$, which may be small during the first iterations.\n\n\\end{remark}\n\n\\begin{assumption} \\label{hyp:model}\nThe mapping $f:\\bR^d\\times\\Xi\\to \\bR$ satisfies the following.\n \\begin{enumerate}[{\\sl i)}]\n \\item For every $x\\in \\bR^d$, $f(x,\\,.\\,)$ is $\\mathfrak{S}$-measurable.\n \\item For almost every $\\xi$, the map $f(\\,.\\,,\\xi)$ is continuously differentiable. \\label{hyp:dif}\n \\item There exists $x_*\\in \\bR^d$ s.t. $\\bE(|f(x_*,\\xi)|)<\\infty$ and $\\bE(\\|\\nabla f(x_*,\\xi)\\|^2)<\\infty$.\n \\item For every compact subset $K\\subset \\bR^d$, there exists $L_K>~0$ such that\n for every $(x,y)\\in K^2$, $\\bE(\\|\\nabla f(x,\\xi)-\\nabla f(y,\\xi)\\|^2)\\leq L_K^2\\|x-y\\|^2$.\n \\end{enumerate}\n\\end{assumption}\n\nUnder Assumption~\\ref{hyp:model}, it is an easy exercise to show that the mappings\n$F:\\bR^d\\to\\bR$ and $S:\\bR^d\\to\\bR^d$, given by:\n\\begin{equation}\n F(x) \\eqdef \\bE(f(x,\\xi)) \\quad \\text{and} \\quad S(x) \\eqdef \\bE(\\nabla f(x,\\xi)^{\\odot 2})\\label{eq:F_and_S}\n\\end{equation}\nare well defined; $F$ is continuously differentiable and by\nLebesgue's dominated convergence theorem, $\\nabla F(x) = \\bE(\\nabla\nf(x,\\xi))$ for all $x$. 
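The update map $T_{\gamma,\alpha,\beta}$ of Eq.~(\ref{eq:T}) translates directly into code; a minimal NumPy sketch of one iteration of Algorithm~\ref{alg:adam}, including the debiasing step (illustrative only, with the caller supplying a stochastic gradient sample):

```python
import numpy as np

def adam_step(x, m, v, n, grad, gamma=1e-3, alpha=0.9, beta=0.999, eps=1e-8):
    """One Adam iteration (n >= 1): moment updates, bias correction,
    and the per-coordinate rescaled step x -> x - gamma*m_hat/(eps+sqrt(v_hat))."""
    m = alpha * m + (1.0 - alpha) * grad
    v = beta * v + (1.0 - beta) * grad**2
    m_hat = m / (1.0 - alpha**n)   # debiased first moment: convex comb. of past gradients
    v_hat = v / (1.0 - beta**n)    # debiased second moment
    x = x - gamma * m_hat / (eps + np.sqrt(v_hat))
    return x, m, v
```

At $n=1$ the debiased moments equal the current gradient sample regardless of $\alpha,\beta$, so the first step is close to a sign-of-gradient step of size $\gamma$; without debiasing it would instead depend on the initialization of $m_0,v_0$.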
Moreover, $\\nabla F$ and $S$ are locally Lipschitz continuous.\n\\begin{assumption}\n \\label{hyp:coercive}\n$F$ is coercive.\n\\end{assumption}\n\\begin{assumption}\n \\label{hyp:S>0}\nFor every $x\\in \\bR^d$, $S(x)>0$.\n\\end{assumption}\nIt follows from our assumptions that the set of critical points of\n$F$, denoted by $$\\cS \\eqdef \\nabla F^{-1}(\\{0\\}),$$ is non-empty.\nAssumption~\\ref{hyp:S>0} means that there is \\emph{no} point $x\\in\n\\bR^d$ satisfying $\\nabla f(x,\\xi) = 0$ with probability one (w.p.1). This is\na mild hypothesis in practice.\n\n\\subsection{Asymptotic Regime}\n\nWe address the constant stepsize regime, where\n$\\gamma$ is fixed along the iterations (the default value recommended\nin \\cite{kingma2014adam} is $\\gamma = 0.001$). As opposed to the decreasing stepsize context,\nthe sequence $z_n^\\gamma\\eqdef z_n$ \\emph{cannot} in general converge as $n$ tends to infinity, in an almost sure sense.\nInstead, we investigate the asymptotic behavior of the {family} of processes $(n\\mapsto z_n^\\gamma)_{\\gamma>0}$\nindexed by $\\gamma$, in the regime where $\\gamma\\to0$. We use the so-called ODE method (see e.g. \\cite{ben-(cours)99}).\nThe interpolated process $\\sz^{\\gamma}$ is the\npiecewise linear function defined on $[0,+\\infty)\\to \\cZ_+$\nfor all $t \\in [n\\gamma ,(n+1)\\gamma)$ by:\n\\begin{equation}\n\\sz^{ \\gamma}(t) \\eqdef z_{n}^{ \\gamma} + (z_{n+1}^{ \\gamma}-z_{n}^{ \\gamma})\\left(\\frac {t-n\\gamma}{\\gamma}\\right)\\,.\n\\label{eq:interpolated-process}\n\\end{equation}\nWe establish the convergence in probability of the\nfamily of random processes $(\\sz^{ \\gamma})_{\\gamma>0}$ as $\\gamma$ tends to zero, towards a\ndeterministic continuous-time system defined by an ODE.\nThe latter ODE, which we provide below at Eq.~(\\ref{eq:ode}), will be referred to\nas the continuous-time version of \\adam.\n\nBefore describing the ODE, we need to be more specific about our\nasymptotic regime. 
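The interpolation in Eq.~(\ref{eq:interpolated-process}) is elementary to realize; a short sketch with illustrative scalar iterates:

```python
def interpolate(z_seq, gamma, t):
    """Piecewise-linear interpolated process z^gamma(t), where
    z_seq[n] is the iterate at time n*gamma."""
    n = int(t // gamma)
    frac = (t - n * gamma) / gamma
    return z_seq[n] + (z_seq[n + 1] - z_seq[n]) * frac
```

On $[n\gamma,(n+1)\gamma)$ this reproduces the convex combination of $z_n^\gamma$ and $z_{n+1}^\gamma$ used to define $\sz^\gamma$.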
As opposed to SGD, \\adam\\ depends on\ntwo parameters $\\alpha$, $\\beta$, in addition to the stepsize $\\gamma$.\nThe paper \\cite{kingma2014adam} recommends choosing the constants $\\alpha$ and $\\beta$ close to one\n(the default values $\\alpha=0.9$ and $\\beta=0.999$ are suggested).\nIt is thus legitimate to assume that\n$\\alpha$ and $\\beta$ tend to one as $\\gamma$ tends to zero.\nWe set $\\alpha \\eqdef \\bar \\alpha(\\gamma)$ and $\\beta\\eqdef\\bar\\beta(\\gamma)$, where\n$\\bar \\alpha(\\gamma)$ and $\\bar\\beta(\\gamma)$ converge to one as $\\gamma\\to 0$.\n\\begin{assumption}\n\\label{hyp:alpha-beta}\nThe functions $\\bar \\alpha:\\bR_+\\to [0,1)$ and $\\bar \\beta:\\bR_+\\to [0,1)$ are\ns.t. the following limits exist:\n \\begin{equation}\n a\\eqdef \\lim_{\\gamma \\downarrow 0} \\frac{1-\\bar\\alpha(\\gamma)}\\gamma ,\\quad b\\eqdef \\lim_{\\gamma \\downarrow 0}\\frac{1-\\bar\\beta(\\gamma)}\\gamma\\,.\\label{eq:regime}\n \\end{equation}\nMoreover, $a>0$, $b>0$, and the following condition holds: $b\\leq 4a\\,.$\n\\end{assumption}\nNote that the condition $b\\leq 4a$ is compatible with the default settings recommended\nby \\cite{kingma2014adam}.\nIn our model, we shall now replace the map $T_{\\gamma,\\alpha,\\beta}$ by $T_{\\gamma,\\bar \\alpha(\\gamma),\\bar \\beta(\\gamma)}$.\nLet $x_0\\in \\bR^d$ be fixed. 
For any fixed $\\gamma>0$, we define the sequence $(z_n^\\gamma)$ generated by\n\\adam\\ with a fixed stepsize~$\\gamma>0$:\n\\begin{equation}\nz_n^\\gamma \\eqdef T_{\\gamma,\\bar \\alpha(\\gamma),\\bar \\beta(\\gamma)}(n,z^\\gamma_{n-1},\\xi_n)\\,,\\label{eq:znT}\n\\end{equation}\nthe initialization being chosen as $z_0^\\gamma=(x_0,0,0)$.\n\n\n\\section{Continuous-Time System}\n\\label{sec:continuous_time}\n\n\\subsection{Ordinary Differential Equation}\n\nIn order to gain insight into the behavior of the sequence $(z_n^\\gamma)$\ndefined by (\\ref{eq:znT}),\nit is convenient to rewrite the \\adam\\ iterations under the following equivalent form, for every $n\\geq 1$:\n\\begin{equation}\nz_n^{\\gamma} = z^{\\gamma}_{n-1} + \\gamma h_{\\gamma}(n,z^{\\gamma}_{n-1}) + \\gamma \\Delta^\\gamma_{n}\\,,\\label{eq:RM}\n\\end{equation}\nwhere we define for every $\\gamma>0$, $z\\in \\cZ_+$,\n\\begin{equation}\nh_\\gamma(n,z) \\eqdef \\gamma^{-1}\\bE(T_{\\gamma,\\bar\\alpha(\\gamma),\\bar\\beta(\\gamma)}(n,z,\\xi)-z)\\,,\\label{eq:hgamma}\n\\end{equation}\nand where $\\Delta^\\gamma_{n}\\eqdef \\gamma^{-1}(z_n^{\\gamma} - z^{\\gamma}_{n-1}) - h_{\\gamma}(n,z^{\\gamma}_{n-1})$.\nNote that $(\\Delta^\\gamma_{n})$ is a martingale increment noise sequence in the sense that\n$\\bE(\\Delta^\\gamma_{n}|\\cF_{n-1}) = 0$ for all $n\\geq 1$, where $\\cF_n$ stands for the $\\sigma$-algebra generated\nby the r.v. $\\xi_1,\\dots,\\xi_n$.\nDefine the map $h:(0,+\\infty)\\times \\cZ_+\\to\\cZ$ for all $t>0$, all $z=(x,m,v)$ in $\\cZ_+$ by:\n\\begin{equation}\n \\label{eq:h}\n h(t,z)= \\begin{pmatrix}\n -\\frac{(1-e^{-at})^{-1}m}{\\varepsilon+\\sqrt{(1-e^{-bt})^{-1} v}} \\\\\na (\\nabla F(x)-m) \\\\\nb (S(x)-v)\n\\end{pmatrix} \\,,\n\\end{equation}\nwhere $a,b$ are the constants defined in Assumption~\\ref{hyp:alpha-beta}.\nWe prove that, for any fixed $(t,z)$, the quantity $h(t,z)$ coincides with the limit\nof $h_\\gamma(\\lfloor t\/\\gamma\\rfloor,z)$ as $\\gamma\\downarrow 0$. 
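For a toy objective the non-autonomous field $h$ of Eq.~(\ref{eq:h}) can be integrated directly. A sketch assuming $F(x)=x^2/2$ in $d=1$ with noiseless gradients (so $\nabla F(x)=x$ and $S(x)=x^2$), $a=100$, $b=1$ (the values produced by the defaults $\alpha=0.9$, $\beta=0.999$ at $\gamma=10^{-3}$, satisfying $b\leq 4a$), a deliberately loose $\varepsilon=10^{-2}$ to keep the integration mild, and a start just after $t=0$ to sidestep the singular debiasing prefactors:

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, eps = 100.0, 1.0, 1e-2  # a = (1-alpha)/gamma, b = (1-beta)/gamma at defaults

def h(t, z):
    """Right-hand side of (ODE) for the toy objective F(x) = x^2/2
    with noiseless gradients: grad F(x) = x, S(x) = x^2."""
    x, m, v = z
    m_hat = m / (1.0 - np.exp(-a * t))
    v_hat = max(v, 0.0) / (1.0 - np.exp(-b * t))
    return [-m_hat / (eps + np.sqrt(v_hat)),
            a * (x - m),
            b * (x * x - v)]

# initial condition z(t0) = (x0, 0, 0) at a small t0 > 0
sol = solve_ivp(h, (1e-6, 10.0), [1.0, 0.0, 0.0], rtol=1e-6, atol=1e-9)
```

On this toy problem the trajectory behaves as Theorem~\ref{th:cv-adam} predicts: $x(t)$ approaches the critical point $x=0$ of $F$ and $m(t)$ vanishes, while early on, where $v$ is still comparable to $S(x)$, the motion resembles a unit-speed sign-descent.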
This remark along with Eq.~(\\ref{eq:RM})\nsuggests that, as $\\gamma\\downarrow 0$, the interpolated process $\\sz^\\gamma$ shadows the non-autonomous differential equation\n\\begin{equation}\n \\begin{array}[h]{l}\n\\dot z(t) = h(t, z(t))\\,.\n\\end{array}\n\\tag{ODE}\n\\label{eq:ode}\n\\end{equation}\n\n\\subsection{Existence, Uniqueness, Convergence}\n\\label{subsec:odeanalysis}\n\nSince $h(\\,.\\,,z)$ is non-continuous\nat point zero for a fixed $z\\in \\cZ_+$, and since $h(t,\\,.\\,)$\nis not locally Lipschitz continuous for a fixed~$t~>~0$,\nthe existence and uniqueness of the solution to (\\ref{eq:ode}) do not stem directly from off-the-shelf theorems.\n\nLet {\\color{black}$x_0$ be fixed}.\nA continuous map $z:[0,+\\infty)\\to\\cZ_+$ is said to be a global solution to (\\ref{eq:ode}) with initial condition {\\color{black}$(x_0,0,0)$}\nif $z$ is continuously differentiable on $(0,+\\infty)$, if Eq.~(\\ref{eq:ode})\nholds for all $t>0$, and if {\\color{black}$z(0)=(x_0,0,0)$}.\n\n\\begin{theorem}[Existence and uniqueness]\n \\label{th:exist-unique}\n \n Let Assumptions~\\ref{hyp:model} to \\ref{hyp:alpha-beta} hold true.\n There exists a unique global solution $z:[0,+\\infty)\\to\\cZ_+$\n to~(\\ref{eq:ode}) with\n initial condition $(x_0,0,0)$.\n Moreover, $z([0,+\\infty))$ is a bounded subset of $\\cZ_+$.\n\\end{theorem}\nOn the other hand, we note that a solution may not exist for\nan initial point$(x_0,m_0,v_0)$ with arbitrary (non-zero) values of $m_0, v_0$.\n\n\\begin{theorem}[Convergence]\n\\label{th:cv-adam}\nLet Assumptions~\\ref{hyp:model} to \\ref{hyp:alpha-beta} hold true.\nAssume that $F(\\cS)$ has an empty interior.\nLet $z:t\\mapsto (x(t),m(t),v(t))$ be\nthe global solution to~(\\ref{eq:ode}) with the initial condition\n$(x_0,0,0)$.\\,\nThen, the set $\\cS$ is non-empty and $\\lim_{t\\to\\infty} \\sd(x(t),\\cS) =0$,\n $\\lim_{t\\to\\infty}m(t)=0$, $\\lim_{t\\to\\infty}S(x(t))-v(t)=0$.\n\n\n\\end{theorem}\n\n\\noindent{\\bf Lyapunov function.} 
The proof of Th.~\\ref{th:exist-unique}\n relies on the existence of a Lyapunov function for the non-autonomous\n equation~(\\ref{eq:ode}). Define $V:(0,+\\infty)\\times \\cZ_+\\to \\bR$ by\n \\begin{equation}\n \\label{eq:V}\n V(t,z)\\eqdef F(x)+\\frac 12 \\left\\|m\\right\\|^2_{U(t,v)^{-1}}\\,,\n \\end{equation}\n for every $t>0$ and every $z=(x,m,v)$ in $\\cZ_+$, where\n $U:(0,+\\infty)\\times [0,+\\infty)^d\\to \\bR^d$ is the map given by:\n \\begin{equation}\n \\label{eq:U}\n U(t,v) \\eqdef a(1-e^{-at})\\left(\\varepsilon+\\sqrt{\\frac{v}{1-e^{-bt}}}\\right)\\,.\n \\end{equation}\nThen, $t\\mapsto V(t,z(t))$ is decreasing if $z(\\cdot)$ is the global solution to~(\\ref{eq:ode}).\n\n\\noindent{\\bf Cost decrease at the origin.} As $F$ itself is not a Lyapunov function for~(\\ref{eq:ode}),\nthere is no guarantee that $F(x(t))$ is decreasing w.r.t. $t$.\nNevertheless, the statement holds at the origin. Indeed, it can be shown that\n$\\lim_{t\\downarrow 0}V(t,z(t))=F(x_0)$ (see Prop.~\\ref{prop:adam-bounded}). 
As a consequence,\n\\begin{equation}\n \\forall t\\geq 0,\\ F(x(t))\\leq F(x_0)\\,.\n\\label{eq:costdecrease}\n\\end{equation}\n\nIn other words, the (continuous-time) \\adam\\ procedure\n\\emph{can only improve} the initial guess $x_0$.\nThis is a consequence of the so-called bias correction steps in \\adam\\ (see Algorithm~\\ref{alg:adam}).\nIf these debiasing steps were deleted in the \\adam\\ iterations,\nthe early stages of the algorithm could degrade the initial estimate $x_0$.\n\n\\noindent{\\bf Derivatives at the origin.}\nThe proof of Th.~\\ref{th:exist-unique} reveals that the initial derivative is given\nby $\\dot x(0) = -\\nabla F(x_0)\/(\\varepsilon+\\sqrt{S(x_0)})$ (see Lemma~\\ref{lem:m-v-derivables-en-zero}).\nIn the absence of debiasing steps, the initial derivative $\\dot x(0)$ would be a function of the initial\nparameters $m_0$, $v_0$, and the user would be required to tune these hyperparameters.\nNo such tuning is required thanks to the debiasing step.\nWhen $\\varepsilon$ is small and when the variance of $\\nabla f(x_0,\\xi)$ is small (\\emph{i.e.}, $S(x_0)\\simeq \\nabla F(x_0)^{\\odot 2}$),\nthe initial derivative $\\dot x(0)$ is approximately equal to $-\\nabla F(x_0)\/|\\nabla F(x_0)|$.\nThis suggests that in the early stages of the algorithm, the \\adam\\ iterations\nare comparable to the \\emph{sign} variant of the gradient descent, whose properties were\ndiscussed in previous works; see \\cite{balles2018dissecting}.\n\n\\subsection{Convergence rates}\n\nIn this paragraph, we establish the convergence to a single critical\npoint of $F$ and quantify the convergence rate, using the following\nassumption \\cite{lojasiewicz1963}.\n\\begin{assumption}[\\L{}ojasiewicz property]\n \\label{hyp:lojasiewicz_prop}\nFor any $x^* \\in {\\mathcal S}$, there exist\n$c >0\\,, \\sigma >0$ and $\\theta \\in (0,\\frac 12]$ s.t.\n\\begin{equation}\n \\label{eq:lojasiewicz}\n\\forall x \\in \\bR^d \\,\\,\\text{s.t.}\\,\\,\\|x-x^*\\|\\leq 
\\sigma\\,,\\quad \\|\\nabla F(x)\\| \\geq c |F(x) - F(x^*)|^{1-\\theta}\\,.\n\\end{equation}\n\\end{assumption}\nAssumption~\\ref{hyp:lojasiewicz_prop} holds for real-analytic functions\nand semialgebraic functions.\nWe refer to\n\\cite{harauxjendoubi2015,attouch2009convergence,bolte2014proximal}\nfor a discussion and a review of applications.\nAny $\\theta$ satisfying (\\ref{eq:lojasiewicz}) for some $c,\\sigma>0$\nis called a \\L{}ojasiewicz exponent of $F$ at $x^*$. The next result establishes the convergence\nof the function $x(t)$ generated by the ODE to a single critical point of $F$,\nand provides the convergence rate as a function of the \\L{}ojasiewicz exponent of $F$\nat this critical point. The proof is provided in subsection~\\ref{sec:cont_asymptotic_rates}.\n\\begin{theorem}\n \\label{thm:asymptotic_rates}\n Let Assumptions~\\ref{hyp:model} to \\ref{hyp:alpha-beta} and \\ref{hyp:lojasiewicz_prop} hold true.\n Assume that $F(\\cS)$ has an empty interior. Let $x_0 \\in \\bR^d$ and let $z:t\\mapsto (x(t),m(t),v(t))$ be\n the global solution to~(\\ref{eq:ode}) with initial condition $(x_0,0,0)$.\n Then, there exists $x^* \\in \\cS$ such that $x(t)$ converges to $x^*$ as $t \\to +\\infty$.\n\n Moreover, if $\\theta\\in (0,\\frac 12]$ is a \\L{}ojasiewicz exponent of $F$ at $x^*$,\n there exists a constant $C >0$ s.t. for all $t\\geq 0$,\n \\begin{align*}\n \\|x(t)-x^*\\| &\\leq C t^{-\\frac{\\theta}{1-2\\theta}}\\,,\\quad \\text{if}\\,\\,\\, 0<\\theta<\\frac{1}{2}\\,,\\\\\n \\|x(t)-x^*\\| &\\leq C e^{- \\delta t} \\,,\\quad \\text{for some}\\,\\, \\delta >0\\,\\,\\text{if}\\,\\, \\theta = \\frac 12 \\,.\n \\end{align*}\n\\end{theorem}\n\n\\section{Discrete-Time System: Convergence of \\adam}\n\\label{sec:discrete}\n\n\\begin{assumption}\n \\label{hyp:iid}\nThe sequence $(\\xi_n:n\\geq 1)$ is iid, with the same distribution as~$\\xi$.\n\\end{assumption}\n\n\\begin{assumption}\n \\label{hyp:moment-f}\nLet $p>0$. 
Assume either one of the following conditions.\n\\begin{enumerate}[i)]\n\\item \\label{momentegal} For every compact set $K\\subset \\bR^d$, $\\sup_{x\\in K} \\bE(\\|\\nabla f(x,\\xi)\\|^{p})<\\infty\\,.$\n\\item \\label{momentreinforce} For every compact set $K\\subset \\bR^d$, $\\exists\\, p_K>p$, $\\sup_{x\\in K} \\bE(\\|\\nabla f(x,\\xi)\\|^{p_K})<\\infty.$\n\\end{enumerate}\n\\end{assumption}\nThe value of $p$ will be specified in the sequel, in the statement of\nthe results. Clearly, Assumption~\\ref{hyp:moment-f}~\\ref{momentreinforce} is stronger than Assumption~\\ref{hyp:moment-f}~\\ref{momentegal}.\nEach statement below specifies which of the two conditions is required.\n\n\\begin{theorem}\n \\label{th:weak-cv}\nLet Assumptions~\\ref{hyp:model} to \\ref{hyp:alpha-beta} and \\ref{hyp:iid} hold true. Let Assumption~\\ref{hyp:moment-f}~\\ref{momentreinforce}\nhold with $p=2$.\nConsider $x_0\\in \\bR^d$. For every $\\gamma>0$, let $(z_n^\\gamma:n\\in \\bN)$ be the random sequence defined by the \\adam\\ iterations~(\\ref{eq:znT})\nand $z_0^\\gamma = (x_0,0,0)$. Let $\\sz^\\gamma$ be the corresponding interpolated process defined by Eq.~(\\ref{eq:interpolated-process}).\nFinally, let $z$ denote the unique global solution to (\\ref{eq:ode}) issued from $(x_0,0,0)$.\nThen,\n$$\n\\forall\\, T>0,\\ \\forall\\, \\delta>0,\\ \\ \\lim_{\\gamma\\downarrow 0}\\bP\\left(\\sup_{t\\in[0,T]}\\|\\sz^\\gamma(t)-z(t)\\|>\\delta\\right)=0\\,.\n$$\n\\end{theorem}\nRecall that a family of r.v. $(X_\\alpha)_{\\alpha\\in I}$ is called \\emph{bounded in probability}, or \\emph{tight}, if for every\n$\\delta>0$, there exists a compact set $K$ s.t. $\\bP(X_\\alpha\\in K)\\geq 1-\\delta$ for every $\\alpha\\in I$.\n\\begin{assumption}\n\\label{hyp:tight}\n There exists $\\bar \\gamma_0>0$ s.t. the family of r.v. $(z_n^\\gamma:n\\in \\bN,0<\\gamma<\\bar \\gamma_0)$ is bounded in probability.\n\\end{assumption}\n\n\\begin{theorem}\nConsider $x_0\\in \\bR^d$. 
For every $\\gamma>0$, let $(z_n^\\gamma:n\\in \\bN)$ be the random sequence defined by the \\adam\\ iterations~(\\ref{eq:znT})\nand $z_0^\\gamma = (x_0,0,0)$.\nLet Assumptions~\\ref{hyp:model} to \\ref{hyp:alpha-beta}, \\ref{hyp:iid} and \\ref{hyp:tight} hold.\nLet Assumption~\\ref{hyp:moment-f}~\\ref{momentreinforce} hold with $p=2$. Then,\nfor every $\\delta > 0$,\n \\begin{equation}\n \\label{eq:long-run}\n \\lim_{\\gamma\\downarrow 0}\\limsup_{n\\to\\infty} \\frac 1{n} \\sum_{k=1}^n \\bP(\\sd(x_k^\\gamma, \\cS)>\\delta) = 0\\,.\n \\end{equation}\n\\label{th:longrun}\n\\end{theorem}\n\n\\noindent{\\bf Convergence in the long run.}\nWhen the stepsize $\\gamma$ is constant, the sequence\n$(x_n^\\gamma)$ cannot converge in the almost sure sense as\n$n\\to\\infty$.\nConvergence may only hold in the doubly asymptotic regime where\n$n\\to\\infty$ and then $\\gamma\\to 0$.\n\n\\noindent{\\bf Randomization.} For every $n$, consider a r.v. $N_n$ uniformly\ndistributed on $\\{1,\\dots,n\\}$. Define $\\tilde x_n^\\gamma = x_{N_n}^\\gamma$.\nWe obtain from Th.~\\ref{th:longrun} that for every $\\delta>0$,\n$$\n\\limsup_{n\\to\\infty} \\ \\bP(\\sd(\\tilde x_n^\\gamma, \\cS)>\\delta) \\xrightarrow[\\gamma\\downarrow 0]{} 0\\,.\n$$\n\n\\noindent{\\bf Relationship between discrete and continuous time \\adam.}\nTh.~\\ref{th:weak-cv} means that the family of random processes $(\\sz^\\gamma:~\\gamma>0)$ converges\nin probability as $\\gamma\\downarrow 0$ towards the unique solution to~(\\ref{eq:ode}) issued from $(x_0,0,0)$.\nThis justifies regarding the non-autonomous system~(\\ref{eq:ode})\nas a relevant approximation to the behavior of the iterates $(z_n^\\gamma:n\\in \\bN)$ for a small\nvalue of the stepsize~$\\gamma$.\n\n\\noindent{\\bf Stability.} Assumption~\\ref{hyp:tight} ensures\nthat the iterates $z_n^\\gamma$ do not explode in the long run.\nA sufficient condition is for instance that\n$\\sup_{n,\\gamma} \\bE \\|z_n^\\gamma\\|<\\infty\\,.$\n{In theory, this 
assumption can be difficult to verify.}\nNevertheless, in practice, a projection step on a compact set\ncan be introduced to ensure the boundedness of the estimates.\n\n\\section{A Decreasing Stepsize \\adam\\ Algorithm}\n\\label{sec:discrete_decreasing}\n\n\\subsection{Algorithm}\n\n\\adam\\ inherently uses constant stepsizes. Consequently, the\niterates~(\\ref{eq:znT}) do not converge in the almost sure sense.\nIn order to achieve convergence, we introduce in this section a decreasing\nstepsize version of \\adam. The iterations are given in Algorithm~\\ref{alg:adam-decreasing}.\n\\begin{algorithm}[tb]\n \\caption{\\bf \\adam\\ -- decreasing stepsize $(((\\gamma_n,\\alpha_n,\\beta_n):n\\in \\bN^*), \\varepsilon)$.}\n \\label{alg:adam-decreasing}\n\\begin{algorithmic}\n \\STATE {\\bfseries Initialization:} $x_0\\in \\bR^d, m_0=0$, $v_0=0$, $r_0=\\bar r_0=0$.\n \\FOR{$n=1$ {\\bfseries to} $n_{\\text{iter}}$}\n \\STATE $m_n = \\alpha_n m_{n-1} + (1-\\alpha_n) \\nabla f(x_{n-1},\\xi_n)$\n \\STATE $v_n = \\beta_n v_{n-1} + (1-\\beta_n) \\nabla f(x_{n-1},\\xi_n)^{\\odot 2}$\n \\STATE $r_n = \\alpha_n r_{n-1} + (1-\\alpha_n)$\n \\STATE $\\bar r_n = \\beta_n \\bar r_{n-1} + (1-\\beta_n)$\n \\STATE $\\hat m_{n} = m_{n}\/r_n$\n \\COMMENT{bias correction step}\n \\STATE $\\hat v_{n} = v_{n}\/\\bar r_n$\n \\COMMENT{bias correction step}\n \\STATE $x_{n} = x_{n-1} - \\gamma_n \\hat m_{n} \/ (\\varepsilon+\\sqrt{\\hat v_{n}}) \\,.$\n \\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\nThe algorithm generates a sequence $z_n=(x_n,m_n,v_n)$ with initial point $z_0=(x_0,0,0)$,\nwhere $x_0\\in\\bR^d$. Apart from the fact that the hyperparameters $(\\gamma_n,\\alpha_n,\\beta_n)$\nnow depend on $n$, the main difference w.r.t.\\ Algorithm~\\ref{alg:adam} lies in the expression of the\ndebiasing step. As noted in Remark~\\ref{rem:debiasing}, the aim is to rescale $m_n$ (resp. $v_n$)\nin such a way that the rescaled version $\\hat m_n$ (resp. 
$\\hat v_n$) is a convex combination of\npast stochastic gradients (resp. squared gradients). While in the constant step case the rescaling\ncoefficient is $(1-\\alpha^n)^{-1}$ (resp. $(1-\\beta^n)^{-1}$), the decreasing step case requires dividing\n$m_n$ by the coefficient $r_n=1-\\prod_{i=1}^n\\alpha_i$ (resp. $v_n$ by $\\bar r_n=1-\\prod_{i=1}^n\\beta_i$),\nwhich keeps track of the previous weights:\n$$\n\\hat m_n = \\frac{m_n}{r_n} = \\frac{\\sum_{k=1}^{n} \\rho_{n,k} \\nabla f(x_{k-1},\\xi_{k})}{\\sum_{k=1}^{n} \\rho_{n,k}}\\,,\n$$\nwhere for every $n,k$, $\\rho_{n,k} = \\alpha_n\\cdots\\alpha_{k+1}(1-\\alpha_k)$. A similar equation holds for $\\hat v_n$.\n\n\\subsection{Almost sure convergence}\n\\begin{assumption}[Stepsizes]\n \\label{hyp:stepsizes}\n The following holds.\n \\begin{enumerate}[{\\sl i)}]\n \\item For all $n \\in \\bN$, $\\gamma_n > 0$ and $\\gamma_{n+1}\/\\gamma_n\\to 1$,\n \\item $\\sum_n \\gamma_n = +\\infty$ and $\\sum_n \\gamma_n^2 <+\\infty$,\n \\item For all $n \\in \\bN$, $0 \\leq \\alpha_n \\leq 1$ and $0 \\leq \\beta_n \\leq 1$,\n \\item There exist $a,b$ s.t. $0<b\\leq 4a$, $\\lim_n\\frac{1-\\alpha_n}{\\gamma_n}=a$ and $\\lim_n\\frac{1-\\beta_n}{\\gamma_n}=b$.\n \\end{enumerate}\n\\end{assumption}\n\\begin{theorem}\n \\label{thm:as_conv_under_stab}\nLet Assumptions~\\ref{hyp:model} to \\ref{hyp:S>0}, \\ref{hyp:iid} and~\\ref{hyp:stepsizes} hold.\nLet Assumption \\ref{hyp:moment-f}~\\ref{momentegal} hold with $p=4$.\nAssume that $F(\\cS)$ has an empty interior\nand that the random sequence $((x_n,m_n,v_n):n\\in \\bN)$\ngiven by Algorithm~\\ref{alg:adam-decreasing}\nis bounded, with probability one.\nThen, w.p.1, $\\lim_{n\\to\\infty}\n\\sd (x_n,\\cS)=0$, $\\lim_{n\\to\\infty}m_n = 0$\nand $\\lim_{n\\to\\infty} (S(x_n)-v_n)=0$.\nIf moreover $\\cS$ is finite or countable, then w.p.1, there exists $x^*\\in \\cS$\ns.t. $\\lim_{n\\to\\infty} (x_n,m_n,v_n) = (x^*,0,S(x^*))$.\n\\end{theorem}\nTh.~\\ref{thm:as_conv_under_stab} establishes the almost sure convergence of\n$x_n$ to the set of critical points of~$F$, under the assumption that the sequence\n$((x_n,m_n,v_n))$ is a.s. bounded. 
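The iterations of Algorithm~\ref{alg:adam-decreasing} can be sketched in a few lines; the snippet below is a minimal illustration on a toy quadratic objective with Gaussian gradient noise (the objective, noise model, and constants are placeholders chosen so that the schedule $\gamma_n=\gamma_0/n^{0.7}$, $1-\alpha_n=a\gamma_n$, $1-\beta_n=b\gamma_n$ satisfies Assumption~\ref{hyp:stepsizes}), and it checks numerically that the recursion for $r_n$ yields $r_n=1-\prod_{i=1}^n\alpha_i$.

```python
import numpy as np

# Sketch of the decreasing-stepsize Adam iterations (toy setting, not the
# paper's experiments): F(x) = ||x||^2 / 2, grad f(x, xi) = x + 0.1 * xi.
rng = np.random.default_rng(0)
a_const, b_const, gamma0, eps = 1.0, 2.0, 0.5, 1e-8

def grad_f(x, xi):
    return x + 0.1 * xi  # unbiased: E[grad f(x, xi)] = grad F(x) = x

def adam_decreasing(x0, n_iter):
    x = x0.astype(float)
    m, v = np.zeros_like(x), np.zeros_like(x)
    r = rbar = 0.0
    prod_alpha = 1.0
    for n in range(1, n_iter + 1):
        gamma = gamma0 / n**0.7        # gamma_n: non-summable, square-summable
        alpha = 1.0 - a_const * gamma  # so (1 - alpha_n) / gamma_n = a
        beta = 1.0 - b_const * gamma   # so (1 - beta_n) / gamma_n = b
        g = grad_f(x, rng.standard_normal(x.shape))
        m = alpha * m + (1 - alpha) * g
        v = beta * v + (1 - beta) * g**2
        r = alpha * r + (1 - alpha)          # recursion for r_n
        rbar = beta * rbar + (1 - beta)      # recursion for bar r_n
        prod_alpha *= alpha                  # to check r_n = 1 - prod alpha_i
        x = x - gamma * (m / r) / (eps + np.sqrt(v / rbar))  # debiased update
    return x, r, 1.0 - prod_alpha

x_out, r_out, r_check = adam_decreasing(np.array([5.0]), 2000)
```

Since $r_n=\sum_{k=1}^n\rho_{n,k}$, dividing $m_n$ by $r_n$ indeed normalizes the weights $\rho_{n,k}$ to one, so $\hat m_n$ is a convex combination of past stochastic gradients.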
The next result provides a sufficient condition\nunder which almost sure boundedness holds.\n\\begin{assumption}\n\\label{hyp:stab}\n The following holds.\n \\begin{enumerate}[{\\sl i)}]\n\\item \\label{lipschitz} $\\nabla F$ is Lipschitz continuous.\n\\item \\label{momentgrowth} There exists $C>0$ s.t. for all $x \\in \\bR^d$, $\\bE[\\|\\nabla f(x,\\xi)\\|^2] \\leq C (1+F(x))$.\n\\item The following condition holds:\n$$\n\\limsup_{n\\to\\infty}\n\\left(\\frac 1{\\gamma_n}-\\left(\\frac{1-\\alpha_{n+2}}{1-\\alpha_{n+1}}\\right)\\frac 1{\\gamma_{n+1}}\\right)\n< 2\\left(a-\\frac b4\\right)\\,,\n$$\nwhich is satisfied for instance if $b<4a$ and $1-\\alpha_{n+1} = a\\gamma_n$.\n \\end{enumerate}\n\\end{assumption}\n\\begin{theorem}\n \\label{thm:stab}\nLet Assumptions~\\ref{hyp:model}, \\ref{hyp:coercive}, \\ref{hyp:iid}, \\ref{hyp:stepsizes} and \\ref{hyp:stab} hold.\nLet Assumption~\\ref{hyp:moment-f}~\\ref{momentegal} hold with $p=4$.\nThen, the sequence\n$((x_n,m_n,v_n):n\\in \\bN)$ given by Algorithm~\\ref{alg:adam-decreasing} is bounded with probability one.\n\\end{theorem}\n\n\\subsection{Central Limit Theorem}\n\n\\begin{assumption}\n \\label{hyp:mean_field_tcl}\nLet $x^*\\in \\cS$. There exists a neighborhood $\\cV$ of $x^*$ s.t.\n\\begin{enumerate}[{\\sl i)}]\n\\item $F$ is twice continuously differentiable on $\\cV$,\n and the Hessian $\\nabla^2 F(x^*)$ of $F$ at $x^*$ is positive\n definite.\n\\item $S$ is continuously differentiable on $\\cV$.\n\\end{enumerate}\n\\end{assumption}\nDefine $\nD \\eqdef \\textrm{diag}\\left((\\varepsilon + \\sqrt{S_1(x^*)})^{-1},\\cdots,(\\varepsilon + \\sqrt{S_d(x^*)})^{-1}\\right)\\,.\n$\nLet $P$ be an orthogonal matrix s.t. 
the following spectral decomposition holds:\n$$\nD^{1\/2}\\nabla^2F(x^*)D^{1\/2} = P\\textrm{diag}(\\lambda_1,\\cdots,\\lambda_d)P^{-1}\\,,\n$$\nwhere $\\lambda_1, \\cdots,\\lambda_d$ are the (positive) eigenvalues of\n$D^{1\/2} \\nabla^2F(x^*)D^{1\/2}$.\nDefine\n\\begin{equation}\nH \\eqdef\n\\begin{pmatrix}\n 0 & -D & 0\\\\ a\\nabla^2F(x^*) & -a I_d & 0 \\\\ b \\nabla S(x^*) & 0 & -b I_d\n\\end{pmatrix}\\label{eq:H}\n\\end{equation}\nwhere $I_d$ represents the $d\\times d$ identity matrix and $\\nabla S(x^*)$ is the\nJacobian matrix of~$S$ at $x^*$.\nThe largest real part of the eigenvalues of $H$ coincides with $-L$, where\n\\begin{equation}\nL\\eqdef b\\wedge \\frac a2\\left( 1-\\sqrt{\\left(1-\\frac{4\\lambda_1}a\\right)\\vee 0}\\right) >0\\,.\\label{eq:L}\n\\end{equation}\nFinally, define the $3d\\times 3d$ matrix\n\\begin{equation}\n \\resizebox{.9\\hsize}{!}{$\nQ \\eqdef\n\\begin{pmatrix}\n 0 & 0 \\\\\n 0 & \\bE\\left[\n \\begin{pmatrix}\n a\\nabla f(x^*,\\xi) \\\\ b(\\nabla f(x^*,\\xi)^{\\odot 2}-S(x^*))\n \\end{pmatrix}\\begin{pmatrix}\n a\\nabla f(x^*,\\xi) \\\\ b(\\nabla f(x^*,\\xi)^{\\odot 2}-S(x^*))\n \\end{pmatrix}^T\\right]\n\\end{pmatrix}\\,,\\label{eq:Q}\n$}\n\\end{equation}\nwritten in block form, the upper-left block being the $d\\times d$ zero matrix.\n\\begin{assumption}\n\\label{hyp:step-tcl}\nThe following holds.\n\\begin{enumerate}[{\\sl i)}]\n\\item \\label{step-tcl-i} There exist $\\kappa \\in (0,1]$, $\\gamma_0>0$, s.t. 
the sequence $(\\gamma_n)$\nsatisfies\n$\\gamma_n = {\\gamma_0}\/{(n+1)^\\kappa}$ for all $n$.\nIf $\\kappa = 1$, we assume moreover that $\\gamma_0 > \\frac{1}{2L}$.\n\\item The sequences $\\left(\\frac{1}{\\gamma_n}(\\frac{1-\\alpha_n}{\\gamma_n}-a)\\right)$ and $\\left(\\frac{1}{\\gamma_n}(\\frac{1-\\beta_n}{\\gamma_n}-b)\\right)$ are bounded.\n\\end{enumerate}\n\\end{assumption}\n\n\\noindent For an arbitrary sequence $(X_n)$ of random variables on some Euclidean space, a probability measure\n$\\mu$ on that space and an event $\\Gamma$ s.t. $\\bP(\\Gamma)>0$, we say that $X_n$ converges in distribution\nto $\\mu$ \\emph{given $\\Gamma$} if the measures $\\bP (X_n\\in \\cdot\\,|\\Gamma)$ converge weakly to $\\mu$.\n\\begin{theorem}\n \\label{thm:clt}\nLet Assumptions~\\ref{hyp:model}, \\ref{hyp:S>0}, \\ref{hyp:iid}, \\ref{hyp:mean_field_tcl} and \\ref{hyp:step-tcl} hold true.\nLet Assumption~\\ref{hyp:moment-f}~\\ref{momentreinforce} hold with $p=4$.\nConsider the iterates $z_n=(x_n,m_n,v_n)$ given by Algorithm~\\ref{alg:adam-decreasing}. Set $z^*=(x^*,0,S(x^*))$.\nSet $\\zeta \\eqdef 0$ if $0<\\kappa<1$ and $\\zeta \\eqdef \\frac{1}{2 \\gamma_0}$ if $\\kappa =1$.\nAssume $\\bP(z_n \\to z^*)>0$. 
Then, given the event $\\{z_n\\to z^*\\}$,\nthe rescaled vector $\\sqrt{\\gamma_n}^{-1}(z_n-z^*)$\nconverges in distribution to a zero mean Gaussian distribution on $\\bR^{3d}$ with a covariance matrix $\\Sigma$\nwhich is the solution to the Lyapunov equation: $ \\left(H + \\zeta I_{3d} \\right) \\Sigma + \\Sigma \\left( H^T + \\zeta I_{3d} \\right) = - Q$.\nIn particular, given $\\{z_n\\to z^*\\}$, the vector $\\sqrt{\\gamma_n}^{-1}(x_n-x^*)$\nconverges in distribution to a zero mean Gaussian distribution with a covariance matrix $\\Sigma_1$ given by:\n\\begin{equation}\n\\Sigma_1 = D^{1\/2} P\n\\left(\n\\frac{C_{k,\\ell}}{ (1 - \\frac{2\\zeta}{a})(\\lambda_k+\\lambda_\\ell-2\\zeta + \\frac 2a \\zeta^2) +\\frac 1{2(a-2\\zeta)}(\\lambda_k-\\lambda_\\ell)^2}\n\\right)_{k,\\ell=1\\dots d}\nP^{-1}D^{1\/2}\\label{eq:cov}\n\\end{equation}\nwhere $C\\eqdef P^{-1}D^{1\/2}\\bE\\left(\\nabla f(x^*,\\xi)\\nabla f(x^*,\\xi)^T\\right)D^{1\/2}P$.\n\\end{theorem}\n\nThe following remarks are in order.\n\\begin{itemize}[leftmargin=*]\n\\item The variable $v_n$ has an impact on the limiting covariance $\\Sigma_1$ through its limit $S(x^*)$ (used to define $D$),\nbut the fluctuations of $v_n$ and the parameter $b$ have no effect on $\\Sigma_1$.\nAs a matter of fact, $\\Sigma_1$ coincides with the limiting covariance matrix that would have been obtained by considering\niterates of the form\n\\begin{equation*}\n \\begin{cases}\n x_{n+1} &= x_n - \\gamma_{n+1} p_{n+1} \\\\\n p_{n+1} &= p_n + a\\gamma_{n+1}(D\\nabla f(x_n,\\xi_{n+1})-p_n) \\,,\n \\end{cases}\n\\end{equation*}\nwhich can be interpreted as a preconditioned version of the stochastic heavy ball algorithm~\\cite{gadat2018stochastic}.\nOf course, the above iterates are not implementable because the preconditioning matrix $D$ is unknown.\n\\item \nWhen $a$ is large, $\\Sigma_1$ is close to the matrix $\\Sigma_1^{(0)}$ obtained\nwhen letting $a \\to +\\infty$ in Eq.~(\\ref{eq:cov}).\nThe matrix $\\Sigma_1^{(0)}$ is the solution 
to the Lyapunov equation\n$$\n(D \\nabla^2F(x^*) - \\zeta I_d) \\Sigma_1^{(0)} + \\Sigma_1^{(0)} (\\nabla^2F(x^*) D - \\zeta I_d) = D \\bE\\left(\\nabla f(x^*,\\xi)\\nabla f(x^*,\\xi)^T\\right) D\\,.\n$$\nThe matrix $\\Sigma_1^{(0)}$ can be interpreted as the asymptotic covariance matrix of the $x$-variable\nin the absence of the inertial term (that is, when one considers \\textsc{RmsProp} instead of \\adam).\nThe matrix $\\Sigma_1^{(0)}$ approximates $\\Sigma_1$ in the sense that $\\Sigma_1 = \\Sigma_1^{(0)}+\\frac 1a\\Delta + O(\\frac 1{a^2})$\nfor some symmetric matrix $\\Delta$ which can be made explicit. The matrix $\\Delta$ is neither positive nor negative definite\nin general.\nThis suggests that whether the inertial term is beneficial\nis, in general, problem dependent.\n\\item In the statement of Th.~\\ref{thm:clt},\nthe conditioning event $\\{z_n\\to z^*\\}$ can be replaced by the event $\\{x_n\\to x^*\\}$\nunder the additional assumption that $\\sum_n \\gamma_n^2 < +\\infty$.\n\\end{itemize}\n\n\\section{Related Works}\n\\label{sec:related-works}\n\nAlthough the idea of adapting the\n(per-coordinate) learning rates as a function of past gradient values\nis not new (see \\emph{e.g.} variable metric methods such as the BFGS\nalgorithms),\n\\textsc{AdaGrad} \\cite{duchi2011adaptive} led the way to a new class of algorithms\nthat are sometimes referred to as adaptive gradient methods. \\textsc{AdaGrad}\nconsists of dividing the learning rate by the square root of the\ncomponentwise sum of squared past gradients.\nThe idea was to give larger learning rates to highly informative but\ninfrequent features instead of using a fixed predetermined schedule.\nHowever, in practice, the division by the cumulative sum of squared\ngradients may generate small learning rates, thus freezing the\niterates too early. 
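The contrast between the two accumulation rules can be seen on a one-coordinate toy example (the constants below are arbitrary, for illustration only):

```python
import numpy as np

# Toy comparison of the per-coordinate effective stepsizes
# lr / (eps + sqrt(acc)) under the two accumulation rules.
g2 = 1.0                       # a constant squared gradient, one coordinate
eps, lr, rho = 1e-8, 0.1, 0.9

acc_ada, acc_rms = 0.0, 0.0
for _ in range(200):
    acc_ada += g2                              # AdaGrad: cumulative sum
    acc_rms = rho * acc_rms + (1 - rho) * g2   # RmsProp: moving average
rate_ada = lr / (eps + np.sqrt(acc_ada))       # decays like lr / sqrt(n)
rate_rms = lr / (eps + np.sqrt(acc_rms))       # stabilizes near lr / sqrt(g2)
```

With a constant squared gradient, the AdaGrad effective stepsize vanishes as the iterations proceed, whereas the \textsc{RmsProp} one stabilizes — which is precisely the "less aggressive policy" motivating the moving average.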
Several works proposed\nheuristic ways to set the learning rates using a less aggressive\npolicy.\nThe work \\cite{tieleman2012lecture} introduced an unpublished, yet popular, algorithm\nreferred to as \\textsc{RmsProp}, where the cumulative sum\nused in \\textsc{AdaGrad} is replaced by a moving average of squared\ngradients.\n\\adam\\ combines the advantages of \\textsc{AdaGrad},\n\\textsc{RmsProp} and inertial methods.\nAs opposed to \\textsc{AdaGrad}, for which theoretical convergence guarantees exist\n\\cite{duchi2011adaptive,chen2018convergence,zhou2018convergence,ward2018adagrad},\n\\adam\\ is comparatively less studied.\nThe initial paper \\cite{kingma2014adam} suggests a $\\mathcal{O}(\\frac{1}{\\sqrt{T}})$ average regret bound in the convex setting,\nbut \\cite{j.2018on} exhibits a counterexample contradicting this statement.\nThe latter counterexample implies that the average regret bound of \\adam\\ does\nnot converge to zero. A first way to overcome the problem is to modify the \\adam\\ iterations\nthemselves in order to obtain a vanishing average regret. This led \\cite{j.2018on}\nto propose a variant called \\textsc{AmsGrad}, with the aim of recovering, at least in the convex case, the sought guarantees.\n\nThe work \\cite{balles2018dissecting} interprets \\adam\\ as a variance-adapted sign descent combining an update direction given by the sign and\na magnitude controlled by a variance adaptation principle. A ``noiseless'' version\nof \\adam\\ is considered in \\cite{basu2018convergence}. 
Under quite specific values of the \\adam-hyperparameters, it is shown that for every $\\delta>0$,\nthere exists some time instant\nfor which the norm of the gradient of the objective\nat the current iterate is no larger than~$\\delta$.\n\nThe recent paper \\cite{chen2018convergence} provides a similar result\nfor \\textsc{AmsGrad} and \\textsc{AdaGrad}, but the generalization to \\adam\\ is subject\nto conditions which are not easily verifiable.\nThe paper \\cite{zaheer2018adaptive} provides a convergence result for \\textsc{RmsProp}\nusing the objective function $F$ as a Lyapunov function.\nHowever, our work suggests that unlike \\textsc{RmsProp},\n\\adam\\ does not admit $F$ as a Lyapunov function.\nThis makes the approach of \\cite{zaheer2018adaptive} hardly generalizable to \\adam.\nMoreover, \\cite{zaheer2018adaptive} considers biased gradient estimates instead of the debiased\nestimates used in \\adam.\n\nIn the present work, we study the behavior of an ODE, interpreted as the\nlimit in probability of the (interpolated) \\adam\\ iterates as the stepsize tends to zero.\nClosely related continuous-time dynamical systems are also studied in \\cite{attouch2000heavy,cabot2009long}.\nWe leverage the idea of approximating a discrete time stochastic system by a deterministic continuous one,\noften referred to as the ODE method.\nA recent work \\cite{gadat2018stochastic} fruitfully exploits this method to study\na stochastic version of the celebrated heavy ball algorithm.\nWe refer to \\cite{davis2018stochastic} for the reader interested in the non-differentiable setting\nwith an analysis of the stochastic subgradient algorithm for non-smooth non-convex objective functions.\n\nConcomitant to the present paper, Da Silva and Gazeau 
\\cite{da2018general}\n(posted only four weeks after the first version of the present work)\nstudy the asymptotic behavior of a dynamical system similar to the one introduced here.\nThey establish several results in continuous time, such as avoidance of traps\nas well as convergence rates in the convex case; such aspects are out of the scope of this paper.\nHowever, the question of the convergence of the (discrete-time) iterates is left open.\nIn the current paper, we also exhibit a Lyapunov function which, among other things, allows us to draw useful conclusions on the effect of the\ndebiasing step of \\adam. Finally, \\cite{da2018general} studies a slightly modified version of \\adam\\ which allows one to recover an\nODE with a locally Lipschitz continuous vector field, whereas the original \\adam\\ algorithm \\cite{kingma2014adam} leads\nto an ODE with an irregular vector field. This technical issue is tackled in the present paper.\n\n\\section{Proofs of Section~\\ref{sec:continuous_time}}\n\\label{sec:proofs_cont_time}\n\n\\subsection{Preliminaries}\n\\label{subsec:setting}\nThe results in this section are not specific to the case where $F$ and $S$ are defined as in\nEq.~(\\ref{eq:F_and_S}): they are stated for\n\\emph{any} mappings $F$, $S$ satisfying the following hypotheses.\n\\begin{assumption}\n \\label{hyp:F}\nThe function $F:\\bR^d\\to\\bR$ is continuously differentiable and\n$\\nabla F$ is locally Lipschitz continuous.\n\\end{assumption}\n\\begin{assumption}\n\\label{hyp:S}\nThe map $S:\\bR^d\\to [0,+\\infty)^d$ is locally Lipschitz continuous.\n\\end{assumption}\nIn the sequel, we consider the following generalization of Eq. (\\ref{eq:ode}) for any $\\eta >0$:\n\\begin{equation}\n \\begin{array}[h]{l}\n\\dot z(t) = h(t+\\eta, z(t))\\,.\n\\end{array}\n\\tag{ODE\\mbox{$_\\eta$}}\n\\label{eq:odeeta}\\end{equation}\nWhen $\\eta=0$, Eq. 
(\\ref{eq:odeeta}) boils down to the equation of interest (\\ref{eq:ode}).\nThe choice $\\eta\\in (0,+\\infty)$ will prove useful in the proof of Th.~\\ref{th:exist-unique}.\nIndeed, for $\\eta>0$, a solution to Eq. (\\ref{eq:odeeta}) can be shown to exist (on some interval) due to the continuity of\nthe map $h(\\,.+\\eta,\\,.\\,)$. Considering a family of such solutions indexed by $\\eta\\in (0,1]$,\nthe idea is to prove the existence of a solution to (\\ref{eq:ode}) as a cluster point of the latter family when $\\eta\\downarrow 0$.\nIndeed, as the family is shown to be equicontinuous, such a cluster point does exist thanks to the Arzel\\`a-Ascoli theorem.\nWhen $\\eta=+\\infty$,\nEq. (\\ref{eq:odeeta}) rewrites as\n\\begin{equation}\n \\label{eq:ode-a}\n \\begin{array}[h]{l}\n\\dot z(t) = h_\\infty(z(t))\\,,\n\\end{array}\n\\tag{ODE\\mbox{$_\\infty$}}\n\\end{equation}\nwhere $h_\\infty(z)\\eqdef \\lim_{t\\to \\infty} h(t,z)$.\nIt is useful to note that for $(x,m,v)\\in \\cZ_+$,\n\\begin{equation}\n \\label{eq:h_infty}\nh_{\\infty}((x,m,v)) = \\left(-m \/ (\\varepsilon+\\sqrt{v})\\,,\\, a (\\nabla F(x)-m) \\,,\\,b (S(x)-v) \\right)\\,.\n\\end{equation}\nContrary to Eq. (\\ref{eq:ode}), Eq.~(\\ref{eq:ode-a}) defines an autonomous ODE.\nThe latter admits a unique global solution for any initial condition in $\\cZ_+$,\nand defines a dynamical system $\\cD$. We shall exhibit a strict Lyapunov function\nfor this dynamical system $\\cD$, and deduce that any solution to (\\ref{eq:ode-a}) converges\nto the set of equilibria of $\\cD$ as $t\\to\\infty$.\nOn the other hand, we will prove that the solution to (\\ref{eq:ode}) with a proper initial condition is a so-called asymptotic pseudotrajectory (APT) of $\\cD$. 
Due to the\nexistence of a strict Lyapunov function, the APT shall inherit the convergence behavior of the autonomous system as $t\\to\\infty$,\nwhich will prove Th.~\\ref{th:cv-adam}.\n\nIt is convenient to extend the map $h:(0,+\\infty)\\times \\cZ_+\\to\\cZ$ to a map $(0,+\\infty)\\times \\cZ\\to\\cZ$ by\nsetting $h(t,(x,m,v))\\eqdef h(t,(x,m,|v|))$ for every $t>0$, $(x,m,v)\\in \\cZ$.\nSimilarly, we extend $h_\\infty$ as $h_\\infty((x,m,v)) \\eqdef h_\\infty((x,m,|v|))$.\nFor any $T\\in (0,+\\infty]$ and any $\\eta\\in [0,+\\infty]$, we say that a map $z:[0,T)\\to \\cZ$ is a solution\nto (\\ref{eq:odeeta}) on $[0,T)$ with initial condition $z_0\\in \\cZ_+$,\nif $z$ is continuous on $[0,T)$, continuously differentiable\non $(0,T)$, if (\\ref{eq:odeeta}) holds for all $t\\in (0,T)$, and if $z(0)=z_0$.\nWhen $T=+\\infty$, we say that the solution is global.\nWe denote by $Z^\\eta_T(z_0)$ the subset of $C([0,T),\\cZ)$ formed by the solutions to (\\ref{eq:odeeta})\non $[0,T)$ with initial condition $z_0$.\nFor any $K\\subset\\cZ_+$, we define $Z^\\eta_T(K)\\eqdef \\bigcup_{z\\in K}Z^\\eta_T(z)$.\n\n\\begin{lemma}\n \\label{lem:m-v-derivables-en-zero}\n Let Assumptions~\\ref{hyp:F} and \\ref{hyp:S} hold. Consider $x_0\\in \\bR^d$,\n $T\\in (0,+\\infty]$ and let $z\\in Z_T^0((x_0,0,0))$,\n which we write $z(t) = (x(t),m(t),v(t))$. 
Then, $z$\nis continuously differentiable on $[0,T)$,\n$\\dot m(0)=a\\nabla F(x_0)$, $\\dot v(0)=bS(x_0)$ and\n$\n\\dot x(0) = \\frac{-\\nabla F(x_0)}{\\varepsilon + \\sqrt{S(x_0)}}.\n$\n\\end{lemma}\n\\begin{proof}\nBy definition of $z(\\,.\\,)$, $m(t)=\\int_0^ta(\\nabla F(x(s))-m(s))ds$ for all $t\\in [0,T)$\n(and a similar relation holds for $v(t)$).\nThe integrand being continuous, it holds\nthat $m$ and $v$ are differentiable at zero and $\\dot m(0)=a\\nabla F(x_0)$, $\\dot v(0)=bS(x_0)$.\nSimilarly, $x(t) = x_0+\\int_0^t h_x(s,z(s))ds$, where\n$h_x(s,z(s)) \\eqdef -(1-e^{-as})^{-1}m(s)\/(\\varepsilon+\\sqrt{(1-e^{-bs})^{-1}v(s)})\\,.$\nNote that $m(s)\/s \\to \\dot m(0) = a\\nabla F(x_0) $ as $s\\downarrow 0$.\nThus, $(1-e^{-as})^{-1}m(s)\\to \\nabla F(x_0)$ as $s\\to 0$. Similarly,\n$(1-e^{-bs})^{-1}v(s)\\to S(x_0)$. It follows that\n$h_x(s,z(s))\\to -(\\varepsilon+\\sqrt{S(x_0)})^{-1}\\nabla F(x_0)$.\nThus, $s\\mapsto h_x(s,z(s))$\ncan be extended\nto a continuous map on $[0,T)\\to\\bR^d$ and the differentiability of $x$ at zero\nfollows.\n\\end{proof}\n\n\n\\begin{lemma}\n\\label{lem:v-positif}\nLet Assumptions~\\ref{hyp:S>0}, \\ref{hyp:F} and \\ref{hyp:S} hold.\nFor every $\\eta\\in [0,+\\infty]$, $T\\in (0,+\\infty]$, $z_0\\in \\cZ_+$, $z\\in Z_T^\\eta(z_0)$,\nit holds that $z((0,T))\\subset \\cZ_+^*$.\n\\end{lemma}\n\\begin{proof}\nSet $z(t) = (x(t),m(t),v(t))$ for all $t$. Consider $i\\in \\{1,\\dots,d\\}$.\nAssume by contradiction that there exists $t_0\\in (0,T)$ s.t.\n$v_i(t_0)<0$. Set $\\tau\\eqdef\\sup\\{t\\in [0,t_0]:v_i(t)\\geq 0\\}$.\nClearly, $\\tau<t_0$ and, by continuity, $v_i(\\tau)=0$, while $v_i(t)<0$ for all $t\\in (\\tau,t_0]$.\nAs $t\\downarrow\\tau$, $\\dot v_i(t)=b(S_i(x(t))-|v_i(t)|)\\to bS_i(x(\\tau))>0$, so that\n$v_i(\\tau+\\delta)=\\int_\\tau^{\\tau+\\delta}\\dot v_i(s)\\,ds>0$ for every small enough $\\delta>0$, which contradicts $v_i(\\tau+\\delta)<0$.\nThus, $v_i(t)\\geq 0$ for all $t\\in [0,T)$.\nNow assume by contradiction that there exists $t\\in (0,T)$ s.t.\n$v_i(t)=0$. 
Then, $\\dot v_i(t)=bS_i(x(t))>0$.\nThus,\n$\n\\lim_{\\delta\\downarrow 0} \\frac{v_i(t-\\delta)}{-\\delta} = bS_i(x(t))\\,.\n$\nIn particular, there exists $\\delta>0$ s.t.\n$v_i(t-\\delta) \\leq -\\frac{\\delta b}2S_i(x(t))\\,.$ This contradicts the first point.\n\\end{proof}\n\nRecall the definitions of $V$ and $U$ from Eqs.~(\\ref{eq:V}) and (\\ref{eq:U}).\nClearly, $U_\\infty(v)\\eqdef\\lim_{t\\to \\infty} U(t,v)=a(\\varepsilon+\\sqrt{v})$ is well defined for every $v\\in [0,+\\infty)^d$.\nHence, we can also define $V_\\infty(z)\\eqdef \\lim_{t\\to \\infty} V(t,z)$ for every $z\\in \\cZ_+$.\n\n\\begin{lemma}\n\\label{lem:V}\n Let Assumptions~\\ref{hyp:F} and \\ref{hyp:S} hold.\n Assume that $0< b\\leq 4a$.\nConsider $(t,z)\\in (0,+\\infty)\\times \\cZ_+^*$ and set $z=(x,m,v)$.\nThen, $V$ and $V_\\infty$ are differentiable at points $(t,z)$ and $z$ respectively. Moreover,\n$\\ps{\\nabla V_\\infty(z),h_{\\infty}(z)}\\leq -\\varepsilon \\left\\|\\frac {am}{U_\\infty(v)}\\right\\|^2\\,$ and\n\\begin{equation*}\n\\ps{\\nabla V(t,z), (1,h(t,z))} \\leq -\\frac{\\varepsilon }2\\left\\|\\frac{a\\,m}{U(t,v)}\\right\\|^2\\,.\n\\end{equation*}\n\\end{lemma}\n\\begin{proof}\nWe only prove the second point; the proof of the first point follows the same lines.\n Consider $(t,z)\\in (0,+\\infty)\\times \\cZ_+^*$.\nWe decompose\n$\\ps{\\nabla V(t,z), (1,h(t,z))} = \\partial_tV(t,z)\n+\\ps{\\nabla_z V(t,z),h(t,z)}$. 
After tedious but straightforward derivations, we get:\n\\begin{equation}\n \\label{eq:partial-t}\n \\resizebox{0.99\\hsize}{!}{$\n \\partial_t V(t,z) =- \\sum_{i=1}^d \\frac{a^2m_i^2}{U(t,v_i)^2}\\left(\\frac{e^{-at}\\varepsilon}2+\n\\left(\\frac{e^{-at}}2-\\frac{be^{-bt}(1-e^{-at})}{4a(1-e^{-bt})}\\right)\\sqrt{\\frac{v_i}{1-e^{-bt}}}\\right)\\,,\n$}\n\\end{equation}\nwhere $U(t,v_i)=a(1-e^{-at})\\left(\\varepsilon+\\sqrt{\\frac{v_i}{1-e^{-bt}}}\\right)$ and $\\ps{\\nabla_z V(t,z),h(t,z)}$ is equal to:\n\\begin{equation*}\n \\sum_{i=1}^d \\frac{-a^2m_i^2(1-e^{-at})}{U(t,v_i)^2}\n \\left(\\varepsilon\n+ (1-\\frac b{4a})\\sqrt{\\frac{v_i}{1-e^{-bt}}}\n+\\frac{bS_i(x)}{4a\\sqrt{v_i(1-e^{-bt})}}\n\\right)\\,.\n\\end{equation*}\nUsing that $S_i(x)\\geq 0$, we obtain:\n\\begin{equation}\n\\ps{\\nabla V(t,z), (1,h(t,z))} \\leq -\\sum_{i=1}^d \\frac{a^2m_i^2}{U(t,v_i)^2}\\left(\n(1-\\frac{e^{-at}}2)\\varepsilon+c_{a,b}(t)\\sqrt{\\frac{v_i}{1-e^{-bt}}}\n\\right)\\,,\\label{eq:ineg-V}\n\\end{equation}\nwhere $c_{a,b}(t)\\eqdef 1-\\frac{e^{-at}}2-\\frac b{4a}\\frac{1-e^{-at}}{1-e^{-bt}}\\,.$\nUsing inequality $1-{e^{-at}}\/2\\geq 1\/2$ in (\\ref{eq:ineg-V}), the inequality~(\\ref{eq:ineg-V})\nproves the Lemma, provided that one is able to show that $c_{a,b}(t)\\geq 0$, for all $t>0$\nand all $a,b$ satisfying $0< b\\leq 4a$. We prove this last statement.\nIt can be shown that the function $b\\mapsto c_{a,b}(t)$ is decreasing on $[0,+\\infty)$.\nHence, $c_{a,b}(t)\\geq c_{a,4a}(t)$. Now, $c_{a,4a}(t) = q(e^{-at})$ where $q:[0,1)\\to\\bR$ is the\nfunction defined for all $y\\in [0,1)$ by $q(y) = y \\left(y^4-2y^3+1\\right)\/(2(1-y^4))\\,$.\nHence $q \\geq 0$. 
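Indeed, the nonnegativity of $q$ on $[0,1)$ can be verified by factoring the numerator: for every $y\\in [0,1)$,\n\\begin{equation*}\nq(y) = \\frac{y\\,(1-y)\\left(1+y+y^2-y^3\\right)}{2(1-y^4)}\\,,\n\\end{equation*}\nand every factor is nonnegative on $[0,1)$, since $1+y+y^2-y^3 = 1+y+y^2(1-y)>0$ and $1-y^4>0$.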
Thus, $c_{a,b}(t)\\geq q(e^{-at})\\geq 0$.\n\\end{proof}\n\n\\subsection{Proof of Th.~\\ref{th:exist-unique}}\n\n\\subsubsection{Boundedness}\nDefine $\\cZ_0 \\eqdef \\{(x,0,0):x\\in \\bR^d\\}$.\nLet $\\bar e:(0,+\\infty)\\times \\cZ_+\\to\\cZ_+$ be defined by\n$\\bar e(t,z)\\eqdef (x,m\/(1-e^{-at}),v\/(1-e^{-bt}))\\,$\nfor every $t>0$ and every $z=(x,m,v)$ in $\\cZ_+$.\n\n\\begin{proposition}\n\\label{prop:adam-bounded}\nLet Assumptions~\\ref{hyp:coercive}, \\ref{hyp:S>0}, \\ref{hyp:F} and \\ref{hyp:S} hold.\nAssume that $0< b\\leq 4a$.\nFor every $z_0\\in \\cZ_0$, there exists a compact set $K\\subset \\cZ_+$ s.t.\nfor all $\\eta\\in [0,+\\infty)$, all $T\\in (0,+\\infty]$ and all $z\\in Z_T^\\eta(z_0)$,\n$\\left\\{\\bar e(t+\\eta,z(t)) :t\\in (0,T)\\right\\} \\subset K\\,.$\nMoreover, choosing $z_0$ of the form $z_0=(x_0,0,0)$ and $z(t) = (x(t), m(t),v(t))$, it holds that $F(x(t))\\leq F(x_0)$ for all $t\\in [0,T)$.\n\\end{proposition}\n\n\\begin{proof}\nLet $\\eta\\in [0,+\\infty)$.\nConsider a solution $z_\\eta(t) = (x_\\eta(t),m_\\eta(t),v_\\eta(t))$ as in the statement, defined on some interval $[0,T)$.\nDefine\n$\\hat m_\\eta(t) = m_\\eta(t)\/(1-e^{-a(t+\\eta)})$,\n$\\hat v_\\eta(t) = v_\\eta(t)\/(1-e^{-b(t+\\eta)})$.\nBy Lemma~\\ref{lem:v-positif}, $t\\mapsto V(t+\\eta,z(t))$ is continuous on $[0,T)$, and\ncontinuously differentiable on $(0,T)$.\nBy Lemma~\\ref{lem:V}, $\\dot V(t+\\eta,z_\\eta(t)) \\leq 0$ for all $t>0$.\nAs a consequence, $t\\mapsto V(t+\\eta,z_\\eta(t))$ is non-increasing on $[0,T)$.\nThus, for all $t\\geq 0$, $F(x_{\\eta}(t))\\leq \\lim_{t'\\downarrow 0}V(t'+\\eta,z_\\eta(t'))$. Note that\n$\n V(t+\\eta,z_\\eta(t)) \\leq F(x_\\eta(t))+\\frac 12 \\sum_{i=1}^d\n \\frac{m_{\\eta,i}(t)^2}{a(1-e^{-a(t+\\eta)})\\varepsilon}\\,. \\label{eq:majV}\n$\nIf $\\eta>0$, every term in the sum in the righthand side\ntends to zero, upon noting that\n$m_{\\eta}(t)\\to 0$ as $t\\to 0$.\nThe statement still holds if $\\eta=0$. 
Indeed, by Lemma~\\ref{lem:m-v-derivables-en-zero},\nfor a given $i\\in \\{1,\\dots,d\\}$, there exists $\\delta>0$ s.t. for all $0<t<\\delta$, $|m_{\\eta,i}(t)|\\leq (a|\\partial_i F(x_0)|+1)\\,t$.\nAs $1-e^{-a(t+\\eta)}\\geq 1-e^{-at}\\geq at\/2$ for $t$ small enough, every term in the sum still tends to zero as $t\\downarrow 0$.\nIn all cases, we obtain $F(x_\\eta(t))\\leq F(x_0)$ for all $t\\in [0,T)$.\nBy Assumption~\\ref{hyp:coercive}, the sublevel set $\\{x\\in \\bR^d:F(x)\\leq F(x_0)\\}$ is compact.\nThus, $x_\\eta(\\,.\\,)$ takes its values in a compact set which does not depend on $\\eta$, and the constants\n$R_i\\eqdef \\sup\\{S_i(x):F(x)\\leq F(x_0)\\}$ (for $i\\in \\{1,\\dots,d\\}$) are finite.\nAssume by contradiction that there exists $t_0\\in (0,T)$ s.t. $v_{\\eta,i}(t_0)>R_i+1$ and set\n$\\tau\\eqdef \\inf\\{t\\in [0,t_0]:v_{\\eta,i}(t)>R_i+1\\}$.\nSince $v_{\\eta,i}(0)=0$, it holds that $\\tau>0$ and, by continuity, $v_{\\eta,i}(\\tau)=R_i+1$,\nso that $S_i(x_\\eta(\\tau))-v_{\\eta,i}(\\tau)\\leq -1<0$. Hence,\n$\\dot v_{\\eta,i}(\\tau) = b(S_i(x_\\eta(\\tau))-v_{\\eta,i}(\\tau)) \\leq -b$.\nThis means that there exists $\\tau'<\\tau$ s.t. $v_{\\eta,i}(\\tau')>v_{\\eta,i}(\\tau)$, which contradicts the definition of $\\tau$.\nWe have shown that $v_{\\eta,i}(t)\\leq R_i+1$ for all $t\\in (0,T)$.\nIn particular, when $t\\geq 1$, $\\hat v_{\\eta,i}(t) \\leq v_{\\eta,i}(t)\/(1-e^{-bt}) \\leq (R_i+1)\/(1-e^{-b})\\,.$\nConsider $t\\in (0,1\\wedge T)$.\nBy the mean value theorem, there exists $\\tilde t_\\eta\\in [0,t]$ s.t. $v_{\\eta,i}(t) = \\dot v_{\\eta,i}(\\tilde t_\\eta)t$.\nThus, $v_{\\eta,i}(t) \\leq b S_i(x_\\eta(\\tilde t_\\eta)) t\\leq b R_i t$. Using that the map $y\\mapsto y\/(1-e^{-y})$ is increasing on $(0,+\\infty)$,\nit holds that for all $t\\in (0,1\\wedge T)$,\n$\\hat v_{\\eta,i}(t)\n\\leq bR_i \/(1-e^{-b})\\,.$\nWe have shown that, for all $t\\in (0,T)$ and all $i\\in \\{1,\\dots,d\\}$, $0\\leq \\hat v_{\\eta,i}(t)\\leq M$, where\n$M\\eqdef (1-e^{-b})^{-1}(1+ b)(1+\\max\\{R_\\ell:\\ell\\in \\{1,\\dots,d\\}\\})$.\n\nAs $V(t+\\eta,z_\\eta(t))\\leq F(x_0)$, we obtain: $F(x_0) \\geq F(x_\\eta(t))+\\frac 12\n\\left\\|m_\\eta(t)\\right\\|^2_{U(t+\\eta,v_\\eta(t))^{-1}}$.\nThus, $F(x_0) \\geq \\inf F+\\frac 1{2a(\\varepsilon+\\sqrt{M})} \\|m_{\\eta}(t)\\|^2\\,$.\nTherefore, $m_\\eta(\\,.\\,)$ is bounded on $[0,T)$, uniformly in $\\eta$.\nThe same holds for $\\hat m_\\eta$ by using the mean value theorem\nin the same way as for $\\hat v_\\eta$. 
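In more detail, denote by $G$ and $M_m$ (two constants introduced here only for this argument) finite upper bounds of $\\|\\nabla F(x)\\|_\\infty$ over the compact set in which $x_\\eta(\\,.\\,)$ takes its values and of $\\sup_t\\|m_\\eta(t)\\|_\\infty$, respectively. For $t\\in (0,1\\wedge T)$, the mean value theorem provides $\\tilde t\\in [0,t]$ s.t.\n\\begin{equation*}\n|\\hat m_{\\eta,i}(t)| = \\frac{t\\,|\\dot m_{\\eta,i}(\\tilde t)|}{1-e^{-a(t+\\eta)}}\n\\leq \\frac{a(G+M_m)\\,t}{1-e^{-at}}\n\\leq \\frac{a(G+M_m)}{1-e^{-a}}\\,,\n\\end{equation*}\nwhere the last step uses again that $y\\mapsto y\/(1-e^{-y})$ is increasing; for $t\\geq 1$, the bound $|\\hat m_{\\eta,i}(t)|\\leq M_m\/(1-e^{-a})$ holds directly.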
The proof is complete.\n\\end{proof}\n\n\n\\begin{proposition}\n\\label{prop:bounded}\nLet Assumptions~\\ref{hyp:coercive}, \\ref{hyp:S>0}, \\ref{hyp:F} and \\ref{hyp:S} hold.\nAssume that $0< b\\leq 4a$.\nLet $K$ be a compact subset of $\\cZ_+$.\nThen, there exists another compact set $K'\\subset \\cZ_+$ s.t.\nfor every $T\\in (0,+\\infty]$ and every $z\\in Z_{T}^\\infty(K)$,\n$z([0,T))\\subset K'$.\n\\end{proposition}\n\\begin{proof}\nThe proof follows the same line as Prop.~\\ref{prop:adam-bounded} and is omitted.\n\\end{proof}\n\n\n\nFor any $K\\subset \\cZ_+$, define $v_{\\min}(K)\\eqdef\\inf\\{v_i: (x,m,v)\\in K,\\ i\\in \\{1,\\dots,d\\}\\}$.\n\\begin{lemma}\n\\label{lem:v-lowerbound}\nUnder Assumptions~\\ref{hyp:coercive}, \\ref{hyp:S>0}, \\ref{hyp:F} and \\ref{hyp:S},\nthe following holds true:\n\\begin{enumerate}[{\\it i)},leftmargin=*]\n\\item For every compact set $K\\subset \\cZ_+$, there exists $c>0$, s.t. for every $z\\in Z^\\infty_{\\infty}(K)$, of the form\n$z(t)= (x(t),m(t),v(t))$, $v_i(t)\\geq c \\min\\left(1 ,\\frac{v_{\\min}(K)}{2c}+ t\\right)\\qquad(\\forall t\\geq 0, \\forall i\\in\\{1,\\dots,d\\})\\,.$\n\\item For every $z_0\\in \\cZ_0$, there exists $c>0$ s.t. for every $\\eta\\in [0,+\\infty)$ and every $z\\in Z_\\infty^\\eta(z_0)$,\n$v_i(t)\\geq c\\min(1,t)\\qquad(\\forall t\\geq 0, \\forall i\\in\\{1,\\dots,d\\})\\,.$\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nWe prove the first point. 
Consider a compact set $K\\subset \\cZ_+$.\nBy Prop.~\\ref{prop:bounded}, one can find a compact set $K'\\subset \\cZ_+$ s.t.\nfor every $z\\in Z^\\infty_{\\infty}(K)$, it holds that $\\{z(t):t\\geq 0\\}\\subset K'$.\nDenote by $L_S$ the Lipschitz constant of $S$ on the compact set $\\{x:(x,m,v)\\in K'\\}$.\nIntroduce the constants $M_1\\eqdef \\sup\\{\\|m\/(\\varepsilon + \\sqrt v)\\|_\\infty:(x,m,v)\\in K'\\}$,\n$M_2\\eqdef \\sup\\{\\|S(x)\\|_\\infty:(x,m,v)\\in K'\\}$.\nThe constants $L_S, M_1, M_2$ are finite.\nNow consider a global solution $z(t)=(x(t),m(t),v(t))$ in $Z^\\infty_{\\infty}(K)$.\nChoose $i\\in \\{1,\\dots,d\\}$ and consider $t\\geq 0$. By the mean value theorem,\nthere exists $t'\\in [0,t]$ s.t. $v_i(t) = v_i(0)+\\dot v_i(t')t$. Thus,\n$ v_i(t) = v_i(0) + \\dot v_i(0) t + b(S_i(x(t')) - v_i(t') - S_i(x(0))+ v_i(0)) t$,\nwhich in turn implies\n$ v_i(t)\\geq v_i(0) + \\dot v_i(0) t - b L_S\\|x(t')-x(0)\\|t - b |v_i(t') - v_i(0)| t$.\nUsing again the mean value theorem, for every $\\ell\\in \\{1,\\dots,d\\}$, there exists $t''\\in [0,t']$ s.t.\n$\n|x_\\ell(t')-x_\\ell(0)| = t' |\\dot x_\\ell(t'')| \\leq t M_1\\,.\n$\nTherefore, $\\|x(t')-x(0)\\|\\leq \\sqrt d M_1 t$. Similarly, there exists $\\tilde t$ s.t.:\n$|v_i(t') - v_i(0)|= t'|\\dot v_i(\\tilde t)|\\leq t'b S_i(x(\\tilde t)) \\leq t bM_2\\,.$\nPutting together the above inequalities, $v_i(t) \\geq v_i(0) (1-bt) + bS_i(x(0)) t - bC t^2 \\,$,\nwhere $C\\eqdef (M_2+L_S\\sqrt d M_1)$.\nFor every $t\\leq 1\/(2b)$, $v_i(t) \\geq \\frac{v_{\\min}}{2} + tbC\\left(\\frac{S_{\\min}}C - t\\right) \\,,$\nwhere we defined $S_{\\min}\\eqdef\\inf\\{S_i(x):i\\in \\{1,\\dots,d\\}, (x,m,v)\\in K\\}$.\nSetting $\\tau \\eqdef 0.5\\min(1\/b,S_{\\min}\/C)$,\n\\begin{equation}\n\\forall t\\in [0,\\tau],\\ v_i(t) \\geq \\frac{v_{\\min}}{2} + \\frac{bS_{\\min}t}{2}\\,.\\label{eq:vlin}\n\\end{equation}\nSet $\\kappa_1\\eqdef 0.5(v_{\\min} + bS_{\\min}\\tau)$. 
Note that $v_i(\\tau)\\geq \\kappa_1$.\nDefine $S_{\\min}'\\eqdef\\inf\\{S_i(x):i\\in \\{1,\\dots,d\\}, (x,m,v)\\in K'\\}\\,.$\nNote that $S_{\\min}'>0$ by Assumptions~\\ref{hyp:S} and \\ref{hyp:S>0}.\nFinally, define $\\kappa = 0.5\\min(\\kappa_1,S_{\\min}')$.\nBy contradiction, assume that the set $\\{t\\geq \\tau : v_i(t)<\\kappa\\}$ is non-empty, and\ndenote by $\\tau'$ its infimum. It is clear that $\\tau'>\\tau$ and\n$v_i(\\tau')=\\kappa$. Thus, $b^{-1}\\dot v_i(\\tau') =S_i(x(\\tau'))-\\kappa$.\nWe obtain that $b^{-1}\\dot v_i(\\tau') \\geq 0.5S_{\\min}'>0$.\nAs a consequence, there exists $t\\in (\\tau,\\tau')$ s.t. $v_i(t)<\\kappa$, which contradicts the definition of $\\tau'$.\nThus, $v_i(t)\\geq \\kappa$ for all $t\\geq \\tau$.\nThe first point follows by combining this bound with Eq.~(\\ref{eq:vlin}), upon setting $c\\eqdef \\min(\\kappa,bS_{\\min}\/2)$.\nThe proof of the second point follows the same line.\n\\end{proof}\n\n\\begin{corollary}\n\\label{coro:existence}\nLet Assumptions~\\ref{hyp:coercive}, \\ref{hyp:S>0}, \\ref{hyp:F} and \\ref{hyp:S} hold.\nAssume that $0< b\\leq 4a$.\nFor every $z_0\\in \\cZ_+$, $Z_{\\infty}^\\infty(z_0)\\neq\\emptyset$.\nFor every $(z_0,\\eta)\\in \\cZ_0\\times (0,+\\infty)$, $Z_{\\infty}^\\eta(z_0)\\neq\\emptyset$.\n\\end{corollary}\n\\begin{proof}\nWe prove the first point (the proof of the second point follows the same line).\nUnder Assumptions~\\ref{hyp:F} and \\ref{hyp:S}, $h_\\infty$ is continuous.\nTherefore, Cauchy-Peano's theorem guarantees the existence of a solution to~(\\ref{eq:ode-a}) issued from $z_0$,\nwhich we can extend over a maximal interval of existence $[0,T_{\\max})$.\nWe conclude that the solution is global ($T_{\\max} = +\\infty$) using the boundedness of the solution given by Prop.~\\ref{prop:bounded}.\n\\end{proof}\n\\begin{lemma}\n\\label{lem:equicont-eta}\n \n Let Assumptions~\\ref{hyp:coercive}, \\ref{hyp:S>0}, \\ref{hyp:F} and \\ref{hyp:S} hold.\n Assume that $0< b\\leq 4a$.\nConsider $z_0\\in \\cZ_0$. Denote by $(z_\\eta:\\eta\\in (0,+\\infty))$ a family of functions on $[0,+\\infty)\\to \\cZ_+$\ns.t. 
for every $\\eta>0$, $z_\\eta\\in Z_\\infty^\\eta(z_0)$.\nThen, $(z_\\eta)_{\\eta>0}$ is equicontinuous.\n\\end{lemma}\n\\begin{proof}\nFor every such solution $z_\\eta$, we set $z_\\eta(t)=(x_\\eta(t),m_\\eta(t),v_\\eta(t))$ for all $t\\geq 0$,\nand define $\\hat m_\\eta$ and $\\hat v_\\eta$ as in Prop.~\\ref{prop:adam-bounded}.\nBy Prop.~\\ref{prop:adam-bounded}, there exists a constant $M_1$ s.t. for all $\\eta>0$ and all $t\\geq 0$,\n$\\max(\\|x_\\eta(t)\\|,\\|\\hat m_\\eta(t)\\|_\\infty,\\|\\hat v_\\eta(t)\\|)\\leq M_1$.\nUsing the continuity of $\\nabla F$ and $S$, there exists another finite constant $M_2$ s.t.\n$M_2\\geq \\sup\\{\\|\\nabla F(x)\\|_\\infty:x\\in \\bR^d, \\|x\\|\\leq M_1\\}$ and\n$M_2\\geq \\sup\\{\\|S(x)\\|_\\infty:x\\in \\bR^d, \\|x\\|\\leq M_1\\}$.\nFor every $(s,t)\\in [0,+\\infty)^2$, we have for all $i\\in \\{1,\\dots,d\\}$,\n$|x_{\\eta,i}(t)-x_{\\eta,i}(s)|\\leq \\int_s^t\\left|\\frac{\\hat m_{\\eta,i}(u)}{\\varepsilon + \\sqrt{\\hat v_{\\eta,i}(u)}}\\right|du\\,\\leq \\frac{M_1}\\varepsilon |t-s|$,\nand similarly $|m_{\\eta,i}(t)-m_{\\eta,i}(s)| \\leq a(M_1+M_2)|t-s|$,\n$|v_{\\eta,i}(t)-v_{\\eta,i}(s)|\\leq b(M_1+M_2)|t-s|$\\,.\nTherefore, there exists a constant $M_3$, independent from $\\eta$, s.t. for all $\\eta>0$ and all $(s,t)\\in [0,+\\infty)^2$,\n$\\|z_\\eta(t)-z_\\eta(s)\\|\\leq M_3 |t-s|$.\n\\end{proof}\n\n\n\\begin{proposition}\n \\label{prop:existence-adam}\n \n Let Assumptions~\\ref{hyp:coercive}, \\ref{hyp:S>0}, \\ref{hyp:F} and \\ref{hyp:S} hold.\n Assume that $0< b\\leq 4a$.\nFor every $z_0\\in \\cZ_0$, $Z_\\infty^0(z_0)\\neq \\emptyset$ \\emph{i.e.},\n(\\ref{eq:ode}) admits a global solution issued from $z_0$.\n\\end{proposition}\n\\begin{proof}\n By Cor.~\\ref{coro:existence}, there exists a family $(z_\\eta)_{\\eta>0}$ of functions on $[0,+\\infty)\\to \\cZ$\ns.t. for every $\\eta>0$, $z_\\eta\\in Z^\\eta_\\infty(z_0)$.\nWe set as usual $z_\\eta(t)=(x_\\eta(t),m_\\eta(t),v_\\eta(t))$. 
By Lemma~\\ref{lem:equicont-eta},\nand the Arzel\u00e0-Ascoli theorem, there exists a map $z:[0,+\\infty)\\to \\cZ$ and a sequence $\\eta_n\\downarrow 0$ s.t.\n$z_{\\eta_n}$ converges to $z$ uniformly on compact sets, as $n\\to\\infty$. Considering some fixed scalars $t>s> 0$,\n$z(t) = z(s) + \\lim_{n\\to\\infty}\\int_s^t h(u+\\eta_n, z_{\\eta_n}(u))du\\,.$\nBy Prop.~\\ref{prop:adam-bounded}, there exists a compact set $K\\subset \\cZ_+$ s.t.\n$\\{z_{\\eta_n}(t):t\\geq 0\\}\\subset K$ for all $n$.\nMoreover, by Lemma~\\ref{lem:v-lowerbound}, there exists a constant $c>0$ s.t.\nfor all $n$ and all $u\\geq 0$, $v_{\\eta_n,k}(u)\\geq c \\min(1,u)$.\nDenote by $\\bar K\\eqdef K\\cap (\\bR^d\\times\\bR^d\\times [c\\min(1,s),+\\infty)^d)$.\nIt is clear that $\\bar K$ is a compact subset of $\\cZ_+^*$.\nSince $h$ is continuously differentiable on the set\n$[s,t]\\times \\bar K$, it is Lipschitz continuous on that set. Denote by $L_h$ the corresponding\nLipschitz constant. We obtain:\n$$\n\\int_s^t\\|h(u+\\eta_n, z_{\\eta_n}(u)) - h(u, z(u))\\|du \\leq L_h\\left(\\eta_n + \\sup_{u\\in [s,t]}\\|z_{\\eta_n}(u)-z(u)\\|\\right)(t-s)\\,,\n$$\nand the righthand side converges to zero. As a consequence, for all $t>s$,\n$\nz(t) = z(s) + \\int_s^t h(u, z(u))du\\,.\n$ Moreover, $z(0)=z_0$. This proves that $z\\in Z^0_\\infty(z_0)$.\n\\end{proof}\n\n\\subsubsection{Uniqueness}\n\n\\begin{proposition}\n \\label{prop:unique-adam}\nLet Assumptions~\\ref{hyp:coercive}, \\ref{hyp:S>0}, \\ref{hyp:F} and \\ref{hyp:S} hold.\nAssume $b\\leq 4a$.\nFor every $z_0\\in\\cZ_0$, $Z_\\infty^0(z_0)$ is a singleton\n\\emph{i.e.}, there exists a unique global solution to (\\ref{eq:ode})\nwith initial condition $z_0$.\n\\end{proposition}\n\\begin{proof}\n \n Consider solutions $z$ and $z'$ in $Z^0_\\infty(z_0)$. We denote by $(x(t),m(t),v(t))$ the blocks of $z(t)$,\n and we define $(x'(t),m'(t),v'(t))$ similarly. 
For all $t>0$, we\n define $\\hat m(t)\\eqdef m(t)\/(1-e^{-at})$, $\\hat v(t)\\eqdef\n v(t)\/(1-e^{-bt})$, and we define $\\hat m'(t)$ and $\\hat v'(t)$\n similarly. By Prop.~\\ref{prop:adam-bounded}, there exists a compact\n set $K\\subset \\cZ_+$ s.t. $ (x(t),\\hat m(t),\\hat v(t))$ and\n $(x'(t),\\hat m'(t),\\hat v'(t))$ are both in $K$ for all $t> 0$. We\n denote by $L_S$ and $L_{\\nabla F}$ the Lipschitz constants of $S$\n and $\\nabla F$ on the compact set $\\{x:(x,m,v)\\in K\\}$. These\n constants are finite by Assumptions~\\ref{hyp:F}\n and \\ref{hyp:S}.\nWe define $M\\eqdef \\sup\\{\\|m\\|_\\infty:(x,m,v)\\in K\\}$.\nDefine $u_x(t) \\eqdef \\|x(t)-x'(t)\\|^2$,\n$u_m(t)\\eqdef \\|\\hat m(t)-\\hat m'(t)\\|^2$ and $u_v(t)\\eqdef \\|\\hat v(t)-\\hat v'(t)\\|^2$.\nLet $\\delta>0$. Define: $u^{(\\delta)}(t) \\eqdef u_x(t)+\\delta u_m(t)+\\delta u_v(t)\\,.$\nBy the chain rule and the Cauchy-Schwarz inequality,\n$\\dot u_x(t)\\leq 2\\|x(t)-x'(t)\\|\\|\\frac{\\hat m(t)}{\\varepsilon +\\sqrt{\\hat v(t)}}-\\frac{\\hat m'(t)}{\\varepsilon +\\sqrt{\\hat v'(t)}}\\|$. Thus,\nusing Lemma~\\ref{lem:v-lowerbound}, there exists $c>0$ s.t.\n\\begin{equation*}\n \\dot u_x(t)\\leq 2\\|x(t)-x'(t)\\|\\left(\\varepsilon^{-1}\\left\\|\\hat m(t)-\\hat m'(t)\\right\\|\n+\\frac M{2\\varepsilon^2\\sqrt{c\\min(1,t)}}\\left\\|{\\hat v(t)}-{\\hat v'(t)}\\right\\|\\right)\\,.\n\\end{equation*}\nFor any $\\delta>0$,\n$2\\|x(t)-x'(t)\\|\\,\\|\\hat m(t)-\\hat m'(t)\\|\\leq \\delta^{-1\/2}(u_x(t)+\\delta u_m(t))\\leq \\delta^{-1\/2}u^{(\\delta)}(t)$.\nSimilarly, $2\\|x(t)-x'(t)\\|\\,\\|\\hat v(t)-\\hat v'(t)\\|\\leq \\delta^{-1\/2}u^{(\\delta)}(t)$.\nThus, for any $\\delta>0$,\n\\begin{align}\n\\label{eq:ux}\n \\dot u_x(t)&\\leq \\left(\\frac 1{\\varepsilon\\sqrt \\delta}+\\frac M{2\\varepsilon^2\\sqrt{\\delta c\\min(1,t)}}\\right) u^{(\\delta)}(t)\\,.\n\\end{align}\nWe now study $u_m(t)$. 
For all $t>0$, we obtain after some algebra:\n$\\frac {d}{dt}\\hat m(t) = a(\\nabla F(x(t)) - \\hat m(t))\/(1-e^{-at})\\,.$\nTherefore,\n$\n\\dot u_m(t) \\leq \\frac{2aL_{\\nabla F}}{1-e^{-at}}\\|\\hat m(t)-\\hat m'(t)\\|\\,\\|x(t) -x'(t)\\|\\,.\n$\nFor any $\\theta>0$, it holds that $2\\|\\hat m(t)-\\hat m'(t)\\|\\,\\|x(t) -x'(t)\\|\\leq \\theta u_x(t)+ \\theta^{-1}u_m(t)$.\nIn particular, letting $\\theta\\eqdef 2L_{\\nabla F}$, we obtain that for all $\\delta>0$,\n\\begin{equation}\n\\resizebox{\\hsize}{!}{$\n\\delta \\dot u_m(t)\\leq \\frac{a }{2(1-e^{-at})}\\left(4\\delta L_{\\nabla F}^2 u_x(t)+ \\delta u_m(t)\\right)\n \\leq \\left(\\frac a2+\\frac 1{2t}\\right)\\left(4\\delta L_{\\nabla F}^2 u_x(t)+ \\delta u_m(t)\\right)\\,,\n$}\n\\label{eq:um}\n\\end{equation}\nwhere the last inequality is due to the fact that $y\/(1-e^{-y})\\leq 1+y$ for all $y>0$.\nUsing the exact same arguments, we also obtain that\n\\begin{align}\n \\delta \\dot u_v(t)&\\leq \\left(\\frac b2+\\frac 1{2t}\\right)\\left(4\\delta L_{S}^2 u_x(t)+ \\delta u_v(t)\\right)\\,.\n\\label{eq:uv}\n\\end{align}\nWe now choose any $\\delta$ s.t. $4\\delta \\leq 1\/\\max(L_S^2,L_{\\nabla F}^2)$.\nThen, Eq.~(\\ref{eq:um}) and~(\\ref{eq:uv}) respectively imply that\n$\\delta \\dot u_m(t)\\leq 0.5(a+t^{-1})u^{(\\delta)}(t)$ and\n$\\delta \\dot u_v(t)\\leq 0.5(b+t^{-1})u^{(\\delta)}(t)$.\nSumming these inequalities along with Eq.~(\\ref{eq:ux}), we obtain that for every $t>0$,\n$\\dot u^{(\\delta)}(t) \\leq \\psi(t) u^{(\\delta)}(t)\\,$,\nwhere: $\\psi(t) \\eqdef \\frac{a+b}2+\\frac 1{\\varepsilon\\sqrt \\delta}+\\frac M{2\\varepsilon^2\\sqrt{\\delta c\\min(1,t)}}\n+ \\frac 1t\\,.$\nFrom Gr\\\"onwall's inequality, it holds that for every $t>s>0$,\n$u^{(\\delta)}(t)\\leq u^{(\\delta)}(s)\\exp\\left(\\int_s^t \\psi(s')ds'\\right)\\,$.\nWe first consider the case where $t\\leq 1$. We set $c_1\\eqdef (a+b)\/2+(\\varepsilon\\sqrt \\delta)^{-1}$\nand $c_2\\eqdef M\/(\\varepsilon^2\\sqrt{\\delta c})$. 
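Since $\\min(1,s')=s'$ for $s'\\in (0,1]$, the integral of $\\psi$ over $[s,t]\\subset (0,1]$ can be bounded using the elementary computations\n\\begin{equation*}\n\\int_s^t \\frac{ds'}{2\\sqrt{s'}} = \\sqrt t-\\sqrt s\\leq \\sqrt t\\,,\\qquad\n\\int_s^t \\frac{ds'}{s'} = \\ln \\frac ts\\,.\n\\end{equation*}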
With these notations,\n$\\int_s^t \\psi(s')ds' \\leq c_1t+c_2\\sqrt t + \\ln \\frac ts\\,.$\nTherefore, $u^{(\\delta)}(t)\\leq \\frac{u^{(\\delta)}(s)}{s}\n\\exp\\left(c_1t+c_2\\sqrt t + \\ln t\\right)\\,$.\nBy Lemma~\\ref{lem:m-v-derivables-en-zero}, recall that $\\dot x(0)$ and\n$\\dot x'(0)$ are both well defined (and coincide). Thus,\n$$\nu_x(s) = \\|x(s)-x'(s)\\|^2\n\\leq 2\\|x(s)-x(0)-\\dot x(0)s\\|^2+2\\|x'(s)-x'(0)-\\dot x'(0)s\\|^2\\,.\n$$\nIt follows that $u_x(s)\/s^2$ converges to zero as $s\\downarrow 0$.\nWe now show the same kind of result for $u_m(s)$ and $u_v(s)$.\nConsider $i\\in \\{1,\\dots,d\\}$. By the mean value theorem, there exists $\\tilde s$ (resp. $\\tilde s'$) in\n$[0,s]$ s.t. $m_i(s)=\\dot m_i(\\tilde s)s$ (resp. $m_i'(s)=\\dot m_i'(\\tilde s')s$).\nThus, $\\hat m_i(s) = \\frac{as}{1-e^{-as}} \\left(\\partial_i F(x(\\tilde s))-m_i(\\tilde s)\\right)$,\nand a similar equality holds for $\\hat m_i'(s)$. Then,\ngiven that $\\|x(\\tilde s)-x'(\\tilde s')\\| \\vee \\|m(\\tilde s)-m'(\\tilde s')\\| \\leq \\|z(\\tilde s)-z'(\\tilde s')\\|$,\n$\\tilde s\\leq s$ and $\\tilde s'\\leq s$,\n$$\n\\frac{|\\hat m_i(s) -\\hat m_i'(s) |}s\n\\leq \\frac{2a(L_{\\nabla F}\\vee 1)s}{1-e^{-as}} \\left(\\frac{\\|z(\\tilde s)-z(0)\\|}{\\tilde s}+\\frac{\\|z'(\\tilde s')-z'(0)\\|}{\\tilde s'}\\right)\\,.\n$$\nBy Lemma~\\ref{lem:m-v-derivables-en-zero}, $z$ and $z'$ are differentiable at point zero.\nThen, the above inequality gives $\\limsup_{s\\downarrow 0}\\frac{|\\hat m_i(s) -\\hat m_i'(s) |}s \\leq 4(L_{\\nabla F}\\vee 1)\\|\\dot z(0)\\|$\nand\n$$\n\\limsup_{s\\downarrow 0}\\frac{u_m(s)}{s^2} \\leq 16d(L_{\\nabla F}^2\\vee 1)\\|\\dot z(0)\\|^2\\,.\n$$\nTherefore, $u_m(s)\/s$ converges to zero as $s\\downarrow 0$.\nBy similar arguments, it can be shown that\n$\\limsup_{s\\downarrow 0}{u_v(s)}\/{s^2} \\leq 16d(L_{S}^2\\vee 1)\\|\\dot z(0)\\|^2$,\nthus $\\lim u_v(s)\/s=0$.\nFinally, we obtain that\n${u^{(\\delta)}(s)}\/{s}$ converges to zero as $s\\downarrow 
0$.\nLetting $s$ tend to zero, we obtain that for every\n$t\\leq 1$, $u^{(\\delta)}(t)=0$. Setting $s=1$ and $t>1$,\nand noting that $\\psi$ is integrable on $[1,t]$, it follows that $u^{(\\delta)}(t)=0$ for all $t>1$.\nThis proves that $z=z'$.\n\\end{proof}\n\nWe recall that a semiflow $\\Phi$ on the space $(E,\\sd)$ is a continuous map\n$\\Phi$ from $[0,+\\infty)\\times E$ to $E$ defined by $(t,x) \\mapsto \\Phi(t,x) = \\Phi_t(x)$\nsuch that $\\Phi_0$ is the identity and $\\Phi_{t+s} = \\Phi_t\\circ\\Phi_s$ for all $(t,s)\\in [0,+\\infty)^2$.\n\n\\begin{proposition}\n \\label{prop:semiflow}\nLet Assumptions~\\ref{hyp:coercive}, \\ref{hyp:S>0}, \\ref{hyp:F} and \\ref{hyp:S} hold.\nAssume that $0< b\\leq 4a$.\nThe map $Z_{\\infty}^{\\infty}$ is single-valued from $\\cZ_+$ to $C([0,+\\infty),\\cZ_+)$\n\\emph{i.e.}, there exists a unique global solution to~(\\ref{eq:ode-a})\nstarting from any given point in $\\cZ_+$.\nMoreover, the following map is a semiflow:\n\\begin{equation}\n \\label{eq:flot}\n \\begin{array}[h]{rcl}\n \\Phi:[0,+\\infty)\\times \\cZ_+&\\to& \\cZ_+ \\\\\n(t,z) &\\mapsto& Z_{\\infty}^{\\infty}(z)(t)\n \\end{array}\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nThe result is a direct consequence of Prop.~\\ref{prop:unique-adam}.\n\\end{proof}\n\n\\subsection{Proof of Th.~\\ref{th:cv-adam}}\n\\label{sec:convergence}\n\\subsubsection{Convergence of the semiflow}\n\nWe first recall some useful definitions and results.\nLet $\\Psi$ represent any semiflow on an arbitrary metric space $(E,\\sd)$.\nA point $z\\in E$ is called an \\emph{equilibrium point} of the semiflow $\\Psi$ if $\\Psi_t(z)=z$ for all $t\\geq 0$.\nWe denote by $\\Lambda_\\Psi$ the set of equilibrium points of~$\\Psi$.\nA continuous function $\\sV:E\\to\\bR$ is called a \\emph{Lyapunov function} for the semiflow $\\Psi$\nif $\\sV(\\Psi_t(z))\\leq \\sV(z)$ for all $z\\in E$ and all $t\\geq 0$.\nIt is called a \\emph{strict Lyapunov function} if, moreover,\n$\n\\{ z\\in E\\,:\\, 
\\forall t\\geq 0,\\,\\sV(\\Psi_t(z))=\\sV(z) \\}= \\Lambda_\\Psi\n$.\nIf $\\sV$ is a strict Lyapunov function for $\\Psi$ and if $z\\in E$ is a point s.t. $\\{\\Psi_t(z):t\\geq 0\\}$ is relatively compact,\nthen it holds that $\\Lambda_\\Psi\\neq \\emptyset$ and $\\sd(\\Psi_t(z),\\Lambda_\\Psi)\\to 0$, see \\cite[Th.~2.1.7]{haraux1991systemes}.\nA continuous function $z:[0,+\\infty)\\to E$ is said to be an asymptotic pseudotrajectory (APT)\nfor the semiflow $\\Psi$ if for every $T\\in (0,+\\infty)$,\n$\n\\lim_{t\\to+\\infty} \\sup_{s\\in [0,T]} \\sd(z(t+s),\\Psi_s(z(t))) = 0\\,.\n$\nThe following result follows from \\cite[Th.~5.7]{ben-(cours)99} and \\cite[Prop.~6.4]{ben-(cours)99}.\n\\begin{proposition}[\\cite{ben-(cours)99}]\\hfill\\\\\n\\label{prop:benaim}\n Consider a semiflow $\\Psi$ on $(E,d)$ and a map $z:[0,+\\infty)\\to E$. Assume the following:\n \\begin{enumerate}[{\\it i)}]\n \\item $\\Psi$ admits a strict Lyapunov function $\\sV$.\n \\item The set $\\Lambda_\\Psi$ of equilibrium points of $\\Psi$ is compact.\n \\item $\\sV(\\Lambda_\\Psi)$ has an empty interior.\n \\item $z$ is an APT of $\\Psi$.\n \\item $z([0,\\infty))$ is relatively compact.\n \\end{enumerate}\nThen, $\n\\bigcap_{t\\geq 0}\\overline{z([t,\\infty))}$ is a compact connected subset of $\\Lambda_\\Psi$\\,.\n\\end{proposition}\n\nFor every $\\delta>0$ and every $z = (x,m,v)\\in \\cZ_+$, define:\n\\begin{equation}\nW_\\delta(x,m,v) \\eqdef V_{\\infty}(x,m,v) - \\delta \\ps{\\nabla F(x),m} + \\delta \\|S(x)-v\\|^2\\,,\\label{eq:Wdelta}\n\\end{equation}\nwhere we recall that $V_\\infty(z)\\eqdef \\lim_{t\\to \\infty} V(t,z)$ for every $z\\in \\cZ_+$ and $V$ is defined by Eq.(\\ref{eq:V}).\nConsider the set $\\cE\\eqdef h_\\infty^{-1}(\\{0\\})$ of all equilibrium points of (\\ref{eq:ode-a}), namely:\n$\\cE = \\{(x,m,v)\\in \\cZ_+:\\nabla F(x)=0,m=0,v=S(x)\\}\\,$.\nThe set $\\cE$ is non-empty by Assumption~\\ref{hyp:coercive}.\n\n\\begin{proposition}\n \\label{prop:Wstrict}\nLet 
Assumptions~\\ref{hyp:coercive}, \\ref{hyp:S>0}, \\ref{hyp:F} and \\ref{hyp:S} hold.\nAssume that $0< b\\leq 4a$.\nLet $K\\subset \\cZ_+$ be a compact set. Define $K'\\eqdef \\overline{\\{\\flot(t,z):t\\geq 0, z\\in K\\}}$.\nLet $\\bflot:[0,+\\infty)\\times K'\\to K'$ be the restriction of the semiflow $\\flot$ to $K'$ \\emph{i.e.},\n$\\bflot(t,z) = \\flot(t,z)$ for all $t\\geq 0, z\\in K'$. Then,\n\\begin{enumerate}[{\\it i)}]\n\\item $K'$ is compact.\n\\item $\\bflot$ is well defined and is a semiflow on $K'$.\n\\item The set of equilibrium points of $\\bflot$ is equal to $\\cE\\cap K'$.\n\\item There exists $\\delta>0$ s.t. $W_\\delta$ is a strict Lyapunov function for the semiflow $\\bflot$.\n\\end{enumerate}\n\\end{proposition}\n\n\\begin{proof}\nThe first point is a consequence of Prop.~\\ref{prop:bounded}.\nThe second point stems from Prop.~\\ref{prop:semiflow}.\nThe third point is immediate from the definition of $\\cE$ and the fact that $\\bflot$ is valued in $K'$.\nWe now prove the last point.\nConsider $z\\in K'$ and write $\\bflot_t(z)$ under the form $\\bflot_t(z) = (x(t),m(t),v(t))$. For \\emph{any} map ${\\mathsf W}:\\cZ_+\\to\\bR$, define\nfor all $t>0$, $ \\cL_{\\mathsf W}(t) \\eqdef \\limsup_{s\\to 0} s^{-1}({\\mathsf W}(\\bflot_{t+s}(z)) - {\\mathsf W}(\\bflot_{t}(z)))\\,.$\nIntroduce $G(z)\\eqdef -\\ps{\\nabla F(x),m}$ and $H(z)\\eqdef \\|S(x)-v\\|^2$ for every $z=(x,m,v)$.\nConsider $\\delta>0$ (to be specified later on).\nWe study $\\cL_{W_\\delta} = \\cL_V + \\delta \\cL_{G} + \\delta \\cL_{H}$.\nNote that $\\bflot_t(z)\\in K'\\cap \\cZ_+^*$ for all $t>0$ by Lemma~\\ref{lem:v-positif}. 
Thus, $t\\mapsto V_\\infty(\\bflot_t(z))$\nis differentiable at any point $t>0$ and the derivative coincides with $\\cL_V(t) = \\dot V_\\infty(\\bflot_t(z))$.\nDefine $C_1\\eqdef \\sup\\{\\|v\\|_\\infty:(x,m,v)\\in K'\\}$.\nThen, by Lemma~\\ref{lem:V}, $\\cL_V(t)\\leq -\\varepsilon(\\varepsilon+\\sqrt{C_1})^{-2} \\left\\|m(t)\\right\\|^2$.\nLet $L_{\\nabla F}$ be the Lipschitz constant of $\\nabla F$\non $\\{x:(x,m,v)\\in K'\\}$.\nFor every $t>0$,\n\\begin{align*}\n \\cL_G(t)\n&\\leq \\limsup_{s\\to 0} s^{-1}\\|\\nabla F(x(t))- \\nabla F(x(t+s))\\| \\|m(t+s)\\| - \\ps{\\nabla F(x(t)),\\dot m(t)}\\\\\n&\\leq L_{\\nabla F}\\varepsilon^{-1}\\|m(t)\\|^2 - a\\|\\nabla F(x(t))\\|^2 + a\\ps{\\nabla F(x(t)),m(t)}\\\\\n&\\leq - \\frac a2\\|\\nabla F(x(t))\\|^2 + \\left(\\frac a2+\\frac{L_{\\nabla F}}{\\varepsilon}\\right)\\|m(t)\\|^2\\,.\n\\end{align*}\nDenote by $L_S$ the Lipschitz constant of $S$ on $\\{x:(x,m,v)\\in K'\\}$.\nFor every $t>0$,\n\\begin{eqnarray*}\n \\cL_H(t)\n&=& \\limsup_{s\\to 0} s^{-1}(\\|S(x(t+s))-S(x(t)) + S(x(t))-v(t+s)\\|^2-\\|S(x(t))-v(t)\\|^2)\\\\\n&=& - 2\\ps{S(x(t))-v(t),\\dot v(t)}\n+\\limsup_{s\\to 0} 2s^{-1}\\ps{S(x(t+s))-S(x(t)),S(x(t))-v(t+s)} \\\\\n&\\leq& - 2b\\|S(x(t))-v(t)\\|^2+2L_S\\varepsilon^{-1} \\|m(t)\\| \\|S(x(t))-v(t)\\|\\,.\n\\end{eqnarray*}\nUsing that $2 \\|m(t)\\| \\|S(x(t))-v(t)\\|\\leq \\frac {L_S}{b\\varepsilon}\\|m(t)\\|^2 + \\frac {b\\varepsilon}{L_S}\\|S(x(t))-v(t)\\|^2$, we obtain\n$\n\\cL_H(t) \\leq - b\\|S(x(t))-v(t)\\|^2+\\frac {L_S^2}{b\\varepsilon^2}\\|m(t)\\|^2\\,.\n$\nHence,\nfor every $t>0$,\n$$\n \\cL_{W_\\delta}(t) \\leq -M(\\delta)\n \\|m(t)\\|^2 - \\frac {a\\delta}2\\|\\nabla\n F(x(t))\\|^2 - \\delta b \\|S(x(t))-v(t)\\|^2\\,,\n$$\nwhere $M(\\delta)\\eqdef \\varepsilon(\\varepsilon+\\sqrt{C_1})^{-2} -\\frac {\\delta\n L_S^2}{b\\varepsilon^2} - \\delta \\left(\\frac a2+\\frac{L_{\\nabla\n F}}{\\varepsilon}\\right)\\,.$\nChoosing $\\delta$ s.t. 
$M(\\delta)>0$,\n\\begin{equation}\n\\forall t>0,\\ \\ \\cL_{W_\\delta}(t) \\leq -c\\left( \\|m(t)\\|^2 + \\|\\nabla F(x(t))\\|^2 + \\|S(x(t))-v(t)\\|^2\\right)\\,,\\label{eq:ae-derivee}\n\\end{equation}\nwhere $c \\eqdef \\min\\{ M(\\delta), \\frac {a\\delta}2, \\delta b\\}$.\nIt can easily be seen that for every $z\\in K'$, $t\\mapsto W_\\delta(\\bflot_t(z))$ is Lipschitz continuous, hence absolutely continuous.\nIts derivative almost everywhere coincides with $\\cL_{W_\\delta}$, which is non-positive.\nThus, $W_\\delta$ is a Lyapunov function for $\\bflot$.\nWe prove that the Lyapunov function is strict.\nConsider $z\\in K'$ s.t. $W_\\delta(\\bflot_t(z))=W_\\delta(z)$ for all $t>0$.\nThe derivative almost everywhere of $t\\mapsto W_\\delta(\\bflot_t(z))$ is identically zero,\nand by Eq.~(\\ref{eq:ae-derivee}), this implies that\n $-c\\left(\\|m(t)\\|^2 + \\|\\nabla F(x(t))\\|^2 +\\|S(x(t))-v(t)\\|^2\\right)$ is equal to zero\nfor almost every $t$ (hence, for every~$t$, by continuity of $\\bflot$).\nIn particular for $t=0$, $m=\\nabla F(x)=0$ and $S(x)-v=0$.\nHence, $z\\in h_\\infty^{-1}(\\{0\\})$.\n\\end{proof}\n\n\\begin{corollary}\n \\label{coro:cv}\nLet Assumptions~\\ref{hyp:coercive}, \\ref{hyp:S>0}, \\ref{hyp:F} and \\ref{hyp:S} hold.\nAssume that $0< b\\leq 4a$.\nFor every $z\\in \\cZ_+$, $\\lim_{t\\to \\infty} \\sd(\\Phi(t,z),\\cE)=0\\,.$\n\\end{corollary}\n\\begin{proof}\nUse Prop.~\\ref{prop:Wstrict} with $K\\eqdef\\{z\\}$\nand \\cite[Th.~2.1.7]{haraux1991systemes}.\n\\end{proof}\n\n\n\\subsubsection{Asymptotic Behavior of the Solution to (\\ref{eq:ode})}\n\\label{sec:cv-non-autonomous}\n\n\n\\begin{proposition}[APT]\n \n Let Assumptions~\\ref{hyp:coercive}, \\ref{hyp:S>0}, \\ref{hyp:F} and \\ref{hyp:S} hold true.\n Assume that $0< b\\leq 4a$.\nThen, for every $z_0\\in \\cZ_0$, $Z^0_\\infty(z_0)$ is an asymptotic pseudotrajectory\nof the semiflow $\\Phi$ given by~(\\ref{eq:flot}).\n\\label{prop:apt}\n\\end{proposition}\n\\begin{proof}\n Consider $z_0\\in 
\\cZ_0$, $T\\in (0,+\\infty)$ and define $z\\eqdef Z_\\infty^0(z_0)$.\nConsider $t\\geq 1$. For every $s\\geq 0$, define $\\Delta_t(s)\\eqdef \\|z(t+s)-\\flot(z(t))(s)\\|$.\nThe aim is to prove that $\\sup_{s\\in [0,T]}\\Delta_t(s)$ tends to zero as~$t\\to\\infty$.\nPutting together Prop.~\\ref{prop:adam-bounded} and Lemma~\\ref{lem:v-lowerbound},\nthe set $K\\eqdef \\overline{\\{z(t):t\\geq 1\\}}$\nis a compact subset of $\\cZ_+^*$.\nDefine $C(t)\\eqdef \\sup_{s\\geq 0}\\sup_{z'\\in K}\\|h(t+s,z')-h_\\infty(z')\\|$.\nIt can be shown that $\\lim_{t\\to\\infty} C(t)=0$.\nWe obtain that for every $s\\in [0,T]$, $\n\\Delta_t(s)\\leq T C(t) + \\int_0^s\n \\|h_\\infty(z(t+s'))-h_{\\infty}(\\flot(z(t))(s'))\\|ds'\\,.$\nBy Lemma~\\ref{lem:v-lowerbound}, $K'\\eqdef \\overline{\\bigcup_{z'\\in\\flot(K)}z'([0,\\infty))}$ is a compact subset of $\\cZ_+^*$.\nIt is immediately seen from the definition that $h_{\\infty}$ is Lipschitz continuous on every compact subset of $\\cZ_+^*$, hence on $K\\cup K'$. 
Therefore, there exists a constant $L$, independent from $t,s$, s.t.\n$\n\\Delta_t(s)\\leq T C(t) + \\int_0^s L \\Delta_t(s')ds'\\qquad (\\forall t\\geq 1, \\forall s\\in[0,T])\\,.\n$\nUsing Gr\\\"onwall's lemma, it holds that for all $s\\in [0,T]$,\n$\n\\Delta_t(s)\\leq TC(t) e^{Ls}\\,.\n$\nAs a consequence,\n$\\sup_{s\\in [0,T]}\\Delta_t(s)\\leq TC(t) e^{LT}$ and the righthand side converges to zero\nas $t\\to\\infty$.\n\\end{proof}\n\n\\subsubsection*{End of the Proof of Th.~\\ref{th:cv-adam}}\n\nBy Prop.~\\ref{prop:adam-bounded}, the set $K\\eqdef \\overline{Z^0_\\infty(z_0)([0,\\infty))}$ is a compact subset of $\\cZ_+$.\nDefine $K'\\eqdef \\overline{\\{\\flot(t,z):t\\geq 0, z\\in K\\}}\\,,$ and let\n $\\bflot:[0,+\\infty)\\times K'\\to K'$ be the restriction of $\\flot$ to $K'$.\nBy Prop.~\\ref{prop:Wstrict}, there exists $\\delta>0$ s.t.\n$W_\\delta$ is a strict Lyapunov function for the semiflow $\\bflot$.\nMoreover, the set of equilibrium points coincides with $\\cE\\cap K'$.\nIn particular, the equilibrium points of $\\bflot$ form a compact set.\nBy Prop.~\\ref{prop:apt}, $Z^0_\\infty(z_0)$ is an APT of $\\bflot$.\nNote that every $z\\in\\cE$ can be written under the form\n$z=(x,0,S(x))$ for some $x\\in \\cS$.\nFrom the definition of $W_\\delta$ in (\\ref{eq:Wdelta}),\n$W_\\delta(z)=W_\\delta(x,0,S(x)) =V_{\\infty}(x,0,S(x)) = F(x)$.\nSince $F(\\cS)$ is assumed to have an empty interior, the same holds\nfor $W_\\delta(\\cE\\cap K')$. 
By Prop.~\\ref{prop:benaim},\n$\n\\bigcap_{t\\geq 0}\\overline{Z^0_\\infty(z_0)([t,\\infty))} \\subset \\cE\\cap K'\\,.\n$\nThe set in the righthand side coincides with the set of limits of convergent\nsequences of the form $Z^0_\\infty(z_0)(t_n)$ for $t_n\\to\\infty$.\nAs $Z^0_\\infty(z_0)([0,\\infty))$ is a bounded set, $\\sd(Z^0_\\infty(z_0)(t),\\cE)$ tends to zero as $t\\to\\infty$.\n\n\n\\subsection{Proof of Th.~\\ref{thm:asymptotic_rates}}\n\\label{sec:cont_asymptotic_rates}\n\nThe proof follows the path of \\cite[Th.~10.1.6, Th.~10.2.3]{harauxjendoubi2015},\nbut requires specific adaptations to deal with the dynamical system at hand.\nDefine for all $\\delta>0$, $t>0$, and $z=(x,m,v)$,\n\\begin{equation}\n\\tilde W_\\delta(t,(x,m,v)) \\eqdef V(t,(x,m,v)) - \\delta \\ps{\\nabla F(x),m} + \\delta \\|S(x)-v\\|^2\\,.\\label{eq:Wdelta-t}\n\\end{equation}\nThe function $\\tilde W_\\delta$ is the non-autonomous version of the function (\\ref{eq:Wdelta}).\nConsider a fixed $x_0\\in \\bR^d$, and define $w_{\\delta}(t)\\eqdef \\tilde W_\\delta(t,z(t))$\nwhere $z(t)=(x(t),m(t),v(t))$ is the solution to~(\\ref{eq:ode}) with initial condition $(x_0,0,0)$.\nThe proof uses the following steps.\n\n\\begin{enumerate}[{\\sl i)},leftmargin=11pt]\n\\item \\textit{Upper-bound on $w_\\delta(t)$.}\nFrom Eq.~(\\ref{eq:V}),\nwe obtain that for every $t\\geq 1$,\n$\nV(t,z(t))\\leq |F(x(t))|+\\frac {\\|m(t)\\|^2}{2a\\varepsilon (1-e^{-a})}\\,.\n$\nUsing $\\ps{\\nabla F(x),m}\\leq (\\|\\nabla F(x)\\|^2+\\|m\\|^2)\/2$, we obtain that there exists a constant\n$c_1$ (depending on $\\delta$) s.t. for every $t\\geq 1$,\n\\begin{equation}\n \\label{eq:w-up}\n w_\\delta(t) \\leq c_1\\left(|F(x(t))|+\\|m(t)\\|^2+\\|\\nabla F(x(t))\\|^2+ \\|S(x(t))-v(t)\\|^2\\right)\\,.\n\\end{equation}\n\n\\item \\textit{Upper-bound on $\\frac d{dt} w_\\delta(t)$.}\nThe function $w_{\\delta}$ is absolutely continuous on $[1,+\\infty)$.\nMoreover, there exist $\\delta>0$, $c_2>0$ (both depending on\n$x_0$) s.t.
for every $t\\geq 1$ a.e.,\n\\begin{equation}\n \\label{eq:dw-up}\n \\frac d{dt} w_\\delta (t) \\leq -c_2\\left(\\|m(t)\\|^2+\\|\\nabla F(x(t))\\|^2+\\|S(x(t))-v(t)\\|^2\\right)\\,.\n\\end{equation}\nThe proof of Eq.~(\\ref{eq:dw-up}) uses arguments similar to\nthose in the proof of Prop.~\\ref{prop:Wstrict} (just\nuse Lemma~\\ref{lem:V} to bound the derivative of the first term in\nEq.~(\\ref{eq:Wdelta-t})). For this reason, it is omitted.\n\\item \\textit{Positivity of $w_\\delta(t)$.} By Lemma~\\ref{lem:V},\nthe function $t\\mapsto V(t,z(t))$ is decreasing. As it is lower bounded,\n$\\ell\\eqdef \\lim_{t\\to\\infty}V(t,z(t))$ exists. By Th.~\\ref{th:cv-adam},\n$m(t)$ tends to zero, hence this limit coincides with $\\lim_{t\\to\\infty} F(x(t))$.\nReplacing $F$ with $F-\\ell$, one can assume without loss of generality that $\\ell=0$.\nBy Eq.~(\\ref{eq:dw-up}), $w_\\delta$ is non-increasing on $[1,+\\infty)$, hence converging\nto some limit. Using again Th.~\\ref{th:cv-adam}, $\\ps{\\nabla F(x(t)),m(t)}\\to 0$\nand $S(x(t))-v(t)\\to 0$. Thus, $\\lim_{t\\to\\infty} w_\\delta(t) = \\ell = 0$.\nAssume that there exists $t_0\\geq 1$ s.t. $w_\\delta(t_0)=0$. Then, $w_\\delta$\nis constant on $[t_0,+\\infty)$. By Eq.~(\\ref{eq:dw-up}), this implies that $m(t)=0$ on this interval.\nHence, $d x(t)\/dt = 0$. This means that $x(t) = x(t_0)$ for all $t\\geq t_0$. By Th.~\\ref{th:cv-adam},\nit follows that $x(t_0)\\in \\cS$. In that case, the conclusion of the theorem follows. Therefore, one\ncan assume that $w_\\delta(t)>0$ for all $t\\geq 1$.\n\n\\item \\textit{Putting together (\\ref{eq:w-up}) and (\\ref{eq:dw-up}) using the \\L{}ojasiewicz condition.}\nBy Prop.
\\ref{prop:benaim} and \\ref{prop:apt}, the set\n$\nL\\eqdef \\overline{\\bigcup_{s\\geq 0}\\{z(t):t\\geq s\\}}\\,\n$\nis a compact connected subset of $\\cE = \\{(x,0,S(x)):\\nabla F(x)=0\\}$.\nThe set $\\mathcal{U}\\eqdef \\{x:(x,0,S(x))\\in L\\}$ is a compact and connected subset of $\\cS$.\nUsing Assumption~\\ref{hyp:lojasiewicz_prop} and \\cite[Lemma 2.1.6]{harauxjendoubi2015},\nthere exist constants $\\sigma,c>0$ and $\\theta\\in (0,\\frac 12]$, s.t.\n$\\|\\nabla F(x)\\| \\geq c|F(x)|^{1-\\theta}\\,$ for all $x$ s.t.~$\\sd(x,\\mathcal{U})<\\sigma\\,$.\nAs $\\sd(x(t), \\mathcal{U})\\to 0$, there exists $T\\geq 1$ s.t. for all $t\\geq T$,\n$\\|\\nabla F(x(t))\\|\\geq c |F(x(t))|^{1-\\theta}$. Thus, we may replace the term $\\|\\nabla F(x(t))\\|^2$\nin the righthand side of Eq.~(\\ref{eq:dw-up}) using\n$\\|\\nabla F(x(t))\\|^2\\geq \\frac 12\\|\\nabla F(x(t))\\|^2+ \\frac {c^2}2|F(x(t))|^{2(1-\\theta)}$.\nUpon noting that $2(1-\\theta)\\geq 1$,\nwe thus obtain that there exist a constant $c_3>0$ and some $T'\\geq 1$ s.t. for $t\\geq T'$ a.e.,\n$$\n\\frac d{dt} w_\\delta (t) \\leq -c_3\\left(\\|m(t)\\|^2+\\|\\nabla F(x(t))\\|^2+|F(x(t))|+\\|S(x(t))-v(t)\\|^2\\right)^{2(1-\\theta)}\\,.\n$$\nPutting this inequality together with Eq.~(\\ref{eq:w-up}), we obtain that for some constant $c_4>0$ and for all\n$t\\geq T'$ a.e.,\n$\n\\frac d{dt} w_\\delta (t) \\leq -c_4 w_\\delta(t)^{2(1-\\theta)}\\,.\n$\n\\item \\textit{End of the proof.} Following the arguments of \\cite[Th.~10.1.6]{harauxjendoubi2015},\nby integrating the preceding inequality over $[T',t]$, we obtain\n$\nw_\\delta(t) \\leq c_5 t^{-\\frac 1{1-2\\theta}}\n$\nfor $t\\geq T'$\nin the case where $\\theta<\\frac 12$, whereas $w_\\delta(t)$ decays exponentially if $\\theta=\\frac 12$.\nFrom now on, we focus on the case $\\theta<\\frac 12$.\nBy definition of~(\\ref{eq:ode}), $\\|\\dot{x}(t)\\|^2\\leq \\|m(t)\\|^2\/((1-e^{-a T'})^2\\varepsilon^2)$\nfor all $t\\geq T'$.
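As an aside, the polynomial decay rate obtained from this differential inequality can be sanity-checked numerically. The sketch below (not part of the proof; the constants $c_4=1$, $\theta=1/4$ and the initial value $1$ are arbitrary toy choices) integrates the equality case with an explicit Euler scheme and compares it with the closed-form solution, which is $O(t^{-1/(1-2\theta)})$.

```python
# Numerical sanity check (aside, not part of the proof): integrate the
# equality case  dw/dt = -c4 * w**(2*(1-theta))  of the differential
# inequality above, with arbitrary toy constants.
c4, theta, w0 = 1.0, 0.25, 1.0        # theta in (0, 1/2)
rate = 1.0 / (1.0 - 2.0 * theta)      # predicted exponent: w(t) = O(t**-rate)
dt, T = 1e-3, 200.0
w, t = w0, 0.0
while t < T - 1e-12:
    w -= dt * c4 * w ** (2.0 * (1.0 - theta))  # explicit Euler step
    t += dt
# Closed-form solution for theta = 1/4: w(t) = (w0**(-1/2) + c4*t/2)**(-2).
exact = (w0 ** -0.5 + c4 * T / 2.0) ** -2
print(w, exact)
```

For $\theta=1/4$ the predicted exponent is $1/(1-2\theta)=2$, and the computed trajectory indeed tracks the closed-form $O(t^{-2})$ solution.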
Since Eq.~(\\ref{eq:dw-up}) implies $\\|m(t)\\|^2\\leq -\\dot w_\\delta(t)\/c_2$,\nwe deduce that there exist $c,c'>0$ s.t. for all $t\\geq T'$,\n$\n\\int_t^{2t} \\|\\dot{x}(s)\\|^2 ds \\leq c w_\\delta(t) \\leq c' t^{-\\frac 1{1-2\\theta}}\\,.\n$\nApplying \\cite[Lemma 2.1.5]{harauxjendoubi2015}, it follows that\n$\\int_t^{\\infty} \\|\\dot{x}(s)\\| ds \\leq c t^{-\\frac{\\theta}{1-2\\theta}}$\nfor some other constant $c$.\nTherefore $x^* \\eqdef \\lim_{t \\to +\\infty} x(t)$ exists by Cauchy's criterion and\nfor all $t\\geq T'$, $\\|x(t)-x^*\\| \\leq c t^{-\\frac{\\theta}{1-2\\theta}}$\\,.\nFinally, since $x(t)\\to x^*$, we remark that, using the same arguments,\nthe global \\L{}ojasiewicz exponent $\\theta$ can be replaced by any \\L{}ojasiewicz exponent\nof $F$ at $x^*$.\nWhen $\\theta=\\frac 12$, the proof follows the same lines.\n\\end{enumerate}\n\n\n\\section{Proofs of Section~\\ref{sec:discrete}}\n\\label{sec:proofs_sec_discrete}\n\\subsection{Proof of Th.~\\ref{th:weak-cv}}\n\nGiven an initial point $x_0\\in \\bR^d$ and a stepsize $\\gamma>0$,\nwe consider the iterates $z_n^{\\gamma}$ given by~(\\ref{eq:znT})\nand $z_0^\\gamma\\eqdef (x_0,0,0)$.\nFor every $n\\in \\bN^*$ and every $z\\in \\cZ_+$, we define\n\\begin{equation*}\nH_\\gamma(n,z,\\xi)\\eqdef \\gamma^{-1} (T_{\\gamma,\\bar\\alpha(\\gamma),\\bar\\beta(\\gamma)}(n,z,\\xi)-z)\\,.\n\\end{equation*}\nThus, $z_n^\\gamma = z_{n-1}^\\gamma+\\gamma H_\\gamma(n,z_{n-1}^\\gamma,\\xi_n)$ for every $n\\in\\bN^*$.\nFor every $n\\in\\bN^*$ and every $z\\in \\cZ$ of the form\n$z=(x,m,v)$, we define $e_\\gamma(n,z)\\eqdef (x,(1-\\bar\\alpha(\\gamma)^n)^{-1}m,(1-\\bar\\beta(\\gamma)^n)^{-1}v)$,\nand set $e_\\gamma(0,z)\\eqdef z$.\n\\begin{lemma}\n \\label{lem:moment-H}\nLet Assumptions~\\ref{hyp:model}, \\ref{hyp:alpha-beta} and \\ref{hyp:moment-f} hold true.\nThere exists $\\bar \\gamma_0>0$ s.t.
for every $R>0$, there exists $s>0$ s.t.\n\\begin{equation}\n\\sup\\left\\{\\bE\\left(\\left\\|H_\\gamma(n+1,z,\\xi)\\right\\|^{1+s}\\right):\\gamma\\in (0,\\bar \\gamma_0], n\\in \\bN, z\\in \\cZ_+\\,\\text{s.t.}\\,\\|e_\\gamma(n,z)\\|\\leq R\\right\\}<\\infty\\,.\n\\label{eq:UI}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nLet $R>0$.\nWe denote by $(H_{\\gamma,\\sx},H_{\\gamma,\\sm},H_{\\gamma,\\sv})$ the block components of $H_\\gamma$.\nThere exists a constant $C_s$ depending only on $s$ s.t.\n$\\|H_\\gamma\\|^{1+s}\\leq C_s(\\|H_{\\gamma,\\sx}\\|^{1+s}+\\|H_{\\gamma,\\sm}\\|^{1+s}+\\|H_{\\gamma,\\sv}\\|^{1+s})$.\nHence, it is sufficient to prove that Eq.~(\\ref{eq:UI}) holds\nwhen $H_\\gamma$ is replaced with each of $H_{\\gamma,\\sx},H_{\\gamma,\\sm},H_{\\gamma,\\sv}$.\nConsider $z=(x,m,v)$ in $\\cZ_+$. We write:\n$\\|H_{\\gamma,\\sx}(n+1,z,\\xi)\\| \\leq \\varepsilon^{-1}(\\|\\frac{m}{1-\\bar \\alpha(\\gamma)^n}\\|+\\|\\nabla f(x,\\xi)\\|)\\,.$\nThus, for every $z$ s.t. $\\|e_\\gamma(n,z)\\|\\leq R$, there exists a constant $C$ depending only on $\\varepsilon$, $R$ and $s$ s.t.\n$\\|H_{\\gamma,\\sx}(n+1,z,\\xi)\\|^{1+s} \\leq C(1+ \\|\\nabla f(x,\\xi)\\|^{1+s})$. By Assumption~\\ref{hyp:moment-f}, (\\ref{eq:UI}) holds for\n$H_{\\gamma,\\sx}$ instead of $H_\\gamma$.
Similar arguments hold for $H_{\\gamma,\\sm}$ and $H_{\\gamma,\\sv}$\nupon noting that the functions $\\gamma\\mapsto(1-\\bar \\alpha(\\gamma))\/\\gamma$ and $\\gamma\\mapsto (1-\\bar \\beta(\\gamma))\/\\gamma$\nare bounded under Assumption~\\ref{hyp:alpha-beta}.\n\\end{proof}\n\nFor every $R>0$, and every arbitrary sequence $z=(z_n:n\\in\\bN)$ on $\\cZ_+$, we define\n$\\tau_R(z) \\eqdef \\inf\\{n\\in \\bN:\\|e_\\gamma(n,z_n)\\|>R\\}$ with the convention that $\\tau_R(z)=+\\infty$ when the set is empty.\nWe define the map $B_R:\\cZ_+^\\bN\\to\\cZ_+^\\bN$ given for any arbitrary sequence $z=(z_n:n\\in\\bN)$ on $\\cZ_+$ by\n$B_R(z)(n) = z_n\\1_{n<\\tau_R(z)}+z_{\\tau_R(z)}\\1_{n\\geq\\tau_R(z)}$.\nWe define the random sequence $z^{\\gamma,R}\\eqdef B_R(z^\\gamma)$. Recall that a family $(X_i:i\\in I)$ of random variables on some Euclidean space\nis called \\emph{uniformly integrable} if $\\lim_{A\\to+\\infty} \\sup_{i\\in I}\\bE(\\|X_i\\|\\1_{\\|X_i\\|>A})=0$.\n\\begin{lemma}\n \\label{lem:UI}\nLet Assumptions~\\ref{hyp:model}, \\ref{hyp:alpha-beta}, \\ref{hyp:moment-f} and \\ref{hyp:iid} hold true.\nThere exists $\\bar \\gamma_0>0$ s.t. 
for every $R>0$, the family of r.v.\n$(\\gamma^{-1}(z_{n+1}^{\\gamma,R} - z_n^{\\gamma,R}):n\\in \\bN,\\gamma\\in (0,\\bar \\gamma_0])$ is uniformly integrable.\n\\end{lemma}\n\\begin{proof}\nLet $R>0$.\nAs the event $\\{n<\\tau_R(z^\\gamma)\\}$ coincides with\n$\\bigcap_{k=0}^n\\{\\|e_\\gamma(k,z_k^\\gamma)\\|\\leq R\\}$, it holds that\nfor every $n\\in \\bN$,\n\\begin{equation*}\n \\frac{z_{n+1}^{\\gamma,R} - z_n^{\\gamma,R}}\\gamma =\\frac{z_{n+1}^{\\gamma} - z_n^{\\gamma}}\\gamma \\1_{n<\\tau_R(z^\\gamma)}\n = H_\\gamma(n+1,z_n^\\gamma,\\xi_{n+1}) \\prod_{k=0}^n\\1_{\\|e_\\gamma(k, z_k^\\gamma)\\|\\leq R} \\,.\n\\end{equation*}\nChoose $\\bar \\gamma_0>0$ and $s>0$ as in Lemma~\\ref{lem:moment-H}.\nFor every $\\gamma\\leq \\bar \\gamma_0$,\n\\begin{equation*}\n \\resizebox{\\hsize}{!}{$\n \\bE\\left(\\left\\|\\gamma^{-1}(z_{n+1}^{\\gamma,R} - z_n^{\\gamma,R})\\right\\|^{1+s}\\right)\n \\leq \\sup\\left\\{ \\bE\\left(\\left\\|H_{\\gamma'}(\\ell+1,z,\\xi)\\right\\|^{1+s} \\right):\\gamma'\\in (0,\\bar \\gamma_0],\\ell\\in \\bN,z\\in \\cZ_+, \\|e_\\gamma(\\ell,z)\\|\\leq R\\right\\}\\,.\n $}\n\\end{equation*}\nBy Lemma~\\ref{lem:moment-H}, the righthand side is finite and does not depend on $(n,\\gamma)$.\n\\end{proof}\nFor a fixed $\\gamma>0$, we define the interpolation map $\\sX_\\gamma:\\cZ^\\bN\\to C([0,+\\infty),\\cZ)$ as follows for every\nsequence $z=(z_n:n\\in \\bN)$ on $\\cZ$:\n$$\n \\sX_\\gamma(z)\\,:t \\mapsto z_{\\lfloor \\frac t\\gamma\\rfloor} + (t\/\\gamma-\\lfloor t\/\\gamma\\rfloor)(z_{\\lfloor \\frac t\\gamma\\rfloor+1}-z_{\\lfloor \\frac t\\gamma\\rfloor})\\,.\n$$\nFor every $\\gamma,R>0$, we define $\\sz^{\\gamma,R}\\eqdef \\sX_\\gamma(z^{\\gamma,R}) = \\sX_\\gamma\\circ B_R (z^\\gamma)$.\nNamely, $\\sz^{\\gamma,R}$ is the interpolated process associated with the sequence $(z^{\\gamma,R}_n)$.\nIt is a random variable on $C([0,+\\infty),\\cZ)$.\nWe recall that $\\cF_n$ is the $\\sigma$-algebra generated by the r.v. 
$(\\xi_k:1\\leq k\\leq n)$.\nFor every $\\gamma,n,R$, we use the notation: $\\Delta_0^{\\gamma,R}\\eqdef 0$ and\n$$\n\\Delta_{n+1}^{\\gamma,R} \\eqdef \\gamma^{-1}(z_{n+1}^{\\gamma,R} - z_n^{\\gamma,R}) - \\bE(\\gamma^{-1}(z_{n+1}^{\\gamma,R} - z_n^{\\gamma,R})|\\cF_n)\\,.\n$$\n\\begin{lemma}\n \\label{lem:tightness-in-C}\nLet Assumptions~\\ref{hyp:model}, \\ref{hyp:alpha-beta}, \\ref{hyp:moment-f} and \\ref{hyp:iid} hold true.\nThere exists $\\bar \\gamma_0>0$ s.t. for every $R>0$, the family of r.v. $(\\sz^{\\gamma,R}:\\gamma\\in (0,\\bar \\gamma_0])$\nis tight. Moreover, for every $T,\\delta>0$,\n$\n\\bP\\left(\\max_{0\\leq n\\leq \\lfloor\\frac T\\gamma\\rfloor}\\gamma\\left\\|\\sum_{k=0}^n\\Delta_{k+1}^{\\gamma,R}\\right\\|>\\delta\\right)\\xrightarrow[]{\\gamma\\to 0} 0\\,.\n$\n\\end{lemma}\n\\begin{proof}\n It is an immediate consequence of Lemma~\\ref{lem:UI} and \\cite[Lemma 6.2]{bianchi2019constant}.\n\\end{proof}\n\\noindent The proof of the following lemma is omitted.\n\\begin{lemma}\n \\label{lem:cv-h}\nLet Assumptions~\\ref{hyp:model} and \\ref{hyp:alpha-beta} hold true.\nConsider $t>0$ and $z\\in \\cZ_+$. Let $(\\gamma_n)$ be a sequence on $(0,+\\infty)$ s.t. $\\gamma_n\\to 0$, and let $(\\varphi_n,z_n)$ be a sequence on $\\bN^*\\times \\cZ_+$ s.t.\n$\\lim_{n\\to\\infty}\\gamma_n\\varphi_n= t$ and $\\lim_{n\\to\\infty}z_n= z$. Then, $\\lim_{n\\to\\infty} h_{\\gamma_n}(\\varphi_n,z_n)= h(t,z)$\nand $\\lim_{n\\to\\infty} e_{\\gamma_n}(\\varphi_n,z_n)= \\bar e(t,z)$.\n\\end{lemma}\n\n\\noindent\\textbf{End of the Proof of Th.~\\ref{th:weak-cv}.}\nConsider $x_0\\in \\bR^d$ and set $z_0=(x_0,0,0)$. Define\n$R_0\\eqdef \\sup\\left\\{\\|\\bar e(t,Z_\\infty^0(z_0)(t))\\|:t>0\\right\\}\\,.$\nBy Prop.~\\ref{prop:adam-bounded}, $R_0<+\\infty$. We select an arbitrary $R$ s.t.
$R\\geq R_0+1$.\nFor every $n\\geq 0$,\n$$\nz_{n+1}^{\\gamma,R} = z_{n}^{\\gamma,R}+\\gamma H_\\gamma(n+1,z_{n}^{\\gamma,R},\\xi_{n+1}) \\1_{\\|e_\\gamma(n,z_{n}^{\\gamma,R})\\|\\leq R} \\,.\n$$\nDefine for every $n\\geq 1$, $z\\in \\cZ_+$,\n$h_{\\gamma,R}(n,z)\\eqdef h_\\gamma(n,z)\\1_{\\|e_\\gamma(n-1,z)\\|\\leq R}$. Then,\\\\\n$\n\\Delta_{n+1}^{\\gamma,R} = \\gamma^{-1}(z_{n+1}^{\\gamma,R} - z_n^{\\gamma,R}) - h_{\\gamma,R}(n+1,z_{n}^{\\gamma,R})\\,.\n$\nDefine also for every $n\\geq 0$,\\\\\n$\n M_n^{\\gamma,R} \\eqdef \\sum_{k=1}^n\\Delta_k^{\\gamma,R} =\\gamma^{-1}(z_n^{\\gamma,R} - z_0) - \\sum_{k=0}^{n-1} h_{\\gamma,R}(k+1,z_{k}^{\\gamma,R})\\,.\n$\nConsider $t\\geq 0$ and set $n\\eqdef \\lfloor t\/\\gamma\\rfloor$.\nFor any $T>0$, it holds that:\n\\begin{equation*}\n \\sup_{t\\in [0,T]} \\left\\| \\sz^{\\gamma,R}(t)-z_0 - \\int_{0}^{t}h_{\\gamma,R}(\\lfloor s\/\\gamma\\rfloor+1,\\sz^{\\gamma,R}(\\gamma\\lfloor s\/\\gamma\\rfloor))ds\\right\\|\n \\leq \\max_{0\\leq n\\leq \\lfloor T\/\\gamma\\rfloor+1} \\gamma \\|M_n^{\\gamma,R}\\|\\,.\n\\end{equation*}\nBy Lemma~\\ref{lem:tightness-in-C}, for every $\\delta>0$,\n\\begin{equation}\n \\label{eq:Mn-cv-proba-zero}\n \\bP\\left( \\sup_{t\\in [0,T]} \\left\\| \\sz^{\\gamma,R}(t)-z_0 - \\int_{0}^{t}h_{\\gamma,R}\\left(\\lfloor s\/\\gamma\\rfloor+1,\\sz^{\\gamma,R}(\\gamma\\lfloor s\/\\gamma\\rfloor)\\right)ds\\right\\|\n>\\delta\\right) \\xrightarrow[]{\\gamma\\to 0} 0\\,.\n\\end{equation}\nAs a second consequence of Lemma~\\ref{lem:tightness-in-C}, the family of r.v. $(\\sz^{\\gamma,R}:0<\\gamma\\leq \\bar \\gamma_0)$ is tight, where $\\bar \\gamma_0$\nis chosen as in Lemma~\\ref{lem:tightness-in-C} (it does not depend on $R$).\nBy Prokhorov's theorem, there exists a sequence $(\\gamma_k:k\\in \\bN)$ s.t. $\\gamma_k\\to 0$ and\ns.t.
$(\\sz^{\\gamma_k,R}:k\\in \\bN)$ converges in distribution to some probability measure $\\nu$ on $C([0,+\\infty),\\cZ_+)$.\nBy Skorohod's representation theorem, there exists a r.v. $\\sz$ on some probability space $(\\Omega',\\cF',\\bP')$, with distribution $\\nu$,\nand a sequence of r.v. $(\\sz_{(k)}:k\\in \\bN)$ on that same probability space where for each $k \\in \\bN$, the r.v. $\\sz_{(k)}$ has the same distribution\nas the r.v. $\\sz^{\\gamma_k,R}$,\nand s.t. for every $\\omega\\in\\Omega'$, $\\sz_{(k)}(\\omega)$ converges\nto $\\sz(\\omega)$ uniformly on compact sets. Now select a fixed $T>0$. According to Eq.~(\\ref{eq:Mn-cv-proba-zero}), the sequence\n$$\n\\sup_{t\\in [0,T]} \\left\\| \\sz_{(k)}(t)-z_0 - \\int_{0}^{t}h_{\\gamma_{k},R}\\left(\\lfloor s\/\\gamma_{k}\\rfloor+1,\\sz_{(k)}(\\gamma_{k}\\lfloor s\/\\gamma_{k}\\rfloor)\\right)ds\\right\\|\\,,\n$$\nindexed by $k\\in \\bN$, converges in probability to zero as $k\\to\\infty$. One can therefore extract a further subsequence $\\sz_{(\\varphi_k)}$,\ns.t. the above sequence converges to zero almost surely. In particular, since $\\sz_{(\\varphi_k)}(t)\\to \\sz(t)$ for every $t$, we obtain that\n\\begin{equation}\n\\sz(t) = z_0 + \\lim_{k\\to\\infty} \\int_{0}^{t}h_{\\gamma_{\\varphi_k},R}\\left(\\lfloor s\/\\gamma_{\\varphi_k}\\rfloor+1,\\sz_{(\\varphi_k)}(\\gamma_{\\varphi_k}\\lfloor s\/\\gamma_{\\varphi_k}\\rfloor)\\right)ds\\quad (\\forall t\\in [0,T])\\,.\\label{eq:cv_z}\n\\end{equation}\nConsider $\\omega\\in \\Omega'$ s.t. the r.v. $\\sz$ satisfies (\\ref{eq:cv_z}) at point $\\omega$.\nFrom now on, we consider that $\\omega$ is fixed, and we handle $\\sz$ as an element of $C([0,+\\infty),\\cZ_+)$,\nand no longer as a random variable.\nDefine $\\tau\\eqdef \\inf\\{t\\in [0,T]: \\|\\bar e(t,\\sz(t))\\|>R_0+\\frac 12\\}$ if the latter set is non-empty,\nand $\\tau\\eqdef T$ otherwise.\nSince $\\sz(0)=z_0$ and $\\|z_0\\|\\leq R_0$, it holds that $\\tau>0$ using the continuity of $\\sz$.\nChoose any $(s,t)$ s.t.
$0\\leq s\\leq t\\leq \\tau$. Using Lemma~\\ref{lem:cv-h} and the dominated convergence theorem in Eq.~(\\ref{eq:cv_z}), the function $\\sz$ satisfies $\\sz(t)-\\sz(s)=\\int_s^t h(u,\\sz(u))du$. Hence, $\\sz$ coincides on $[0,\\tau]$ with the solution $Z_\\infty^0(z_0)$ to~(\\ref{eq:ode}), which implies that $\\|\\bar e(t,\\sz(t))\\|\\leq R_0$ on $[0,\\tau]$, and in turn that $\\tau=T$. Therefore, $\\sz=Z_\\infty^0(z_0)$ on $[0,T]$. As this limit is deterministic and does not depend on the extracted subsequence, we obtain that for every $T>0$,\n\\begin{equation}\n\\forall \\delta>0,\\ \\lim_{\\gamma\\to 0} \\bP\\left(\\sup_{t\\in [0,T]}\\left\\| \\sz^{\\gamma,R}(t) - Z_\\infty^0(z_0)(t)\\right\\|>\\delta \\right) =0\\,.\n\\label{eq:cv-proba-R}\n\\end{equation}\nIn order to complete the proof, we show that\n$\n\\bP\\left(\\sup_{t\\in [0,T]}\\left\\| \\sz^{\\gamma,R}(t) - \\sz^{\\gamma}(t)\\right\\|>\\delta \\right) \\to 0\n$\nas $\\gamma \\to 0$, for all $\\delta >0$,\nwhere we recall that $\\sz^\\gamma=\\sX_\\gamma(z^\\gamma)$.\nNote that $\\left\\| \\sz^{\\gamma,R}(t)\\right\\|\\leq\n\\left\\| \\sz^{\\gamma,R}(t)-Z_\\infty^0(z_0)(t)\\right\\| + R_0$ by the triangle inequality.\nTherefore, for every $T,\\delta>0$,\n\\begin{align*}\n \\bP\\left(\\sup_{t\\in [0,T]}\\left\\| \\sz^{\\gamma,R}(t) - \\sz^{\\gamma}(t)\\right\\|>\\delta \\right)\n &\\leq \\bP\\left(\\sup_{t\\in [0,T]}\\left\\| \\sz^{\\gamma,R}(t)\\right\\|\\geq R \\right)\\\\\n &\\leq \\bP\\left(\\sup_{t\\in [0,T]}\\left\\| \\sz^{\\gamma,R}(t)-Z_\\infty^0(z_0)(t)\\right\\| \\geq R-R_0\\right)\\,.\n\\end{align*}\nBy Eq.~(\\ref{eq:cv-proba-R}), the RHS of the above inequality tends to zero as $\\gamma\\to 0$.\nThe proof is complete.\n\n\\subsection{Proof of Th.~\\ref{th:longrun}}\n\\label{sec:longrun}\n\n\nWe start by stating a general result.
Consider a Euclidean space $\\sX$ equipped with its Borel $\\sigma$-field $\\cal X$.\nLet $\\bar \\gamma_0>0$, and consider two families\n$(P_{\\gamma,n}:0<\\gamma<\\bar \\gamma_0, n\\in \\bN^*)$ and $(\\bar P_{\\gamma}:0<\\gamma<\\bar \\gamma_0)$\nof Markov transition kernels on $\\sX$.\nDenote by $\\cP(\\sX)$ the set of probability measures on $\\sX$.\nLet $X=(X_n:n\\in\\bN)$ be the canonical process on $\\sX$.\nLet $(\\bP^{\\gamma,\\nu}:0<\\gamma<\\bar \\gamma_0,\\nu\\in \\cP(\\sX))$ and\n$(\\bar \\bP^{\\gamma,\\nu}:0<\\gamma<\\bar \\gamma_0,\\nu\\in \\cP(\\sX))$ be two families of measures on the canonical space\n$(\\sX^\\bN,{\\cal X}^{\\otimes\\bN})$ such that the following holds:\n\\begin{itemize}[leftmargin=*]\n\\item Under $\\bP^{\\gamma,\\nu}$, $X$ is a non-homogeneous Markov chain with transition kernels $(P_{\\gamma,n}:n\\in \\bN^*)$\nand initial distribution $\\nu$, that is, for each $n\\in\\bN^*$,\n$\n\\bP^{\\gamma,\\nu}(X_{n}\\in dx|X_{n-1}) = P_{\\gamma,n}(X_{n-1},dx)\\,.\n$\n\\item Under $\\bar\\bP^{\\gamma,\\nu}$, $X$ is a homogeneous Markov chain with transition kernel $\\bar P_{\\gamma}$\nand initial distribution $\\nu$.\n\\end{itemize}\nIn the sequel, we will use $\\bar \\bP^{\\gamma,x}$ as a shorthand for\n$\\bar \\bP^{\\gamma,\\delta_x}$ where $\\delta_x$ is the Dirac measure at some point $x\\in \\sX$.\nFinally, let $\\Psi$ be a semiflow on $\\sX$.
A Markov kernel $P$ is \\emph{Feller}\nif $Pf$ is continuous for every bounded continuous $f$.\n\\begin{assumption} Let $\\nu\\in \\cP(\\sX)$.\n \\begin{enumerate}[{\\sl i)}]\n \\item For every $\\gamma$, $\\bar P_{\\gamma}$ is Feller.\n \\item $(\\bP^{\\gamma,\\nu}X_n^{-1}:n\\in \\bN,0<\\gamma<\\bar \\gamma_0)$ is a tight family of measures.\n \\item For every $\\gamma\\in (0,\\bar \\gamma_0)$ and every bounded Lipschitz continuous function $f:\\sX\\to\\bR$,\n$P_{\\gamma,n}f$ converges to $\\bar P_\\gamma f$ as $n\\to\\infty$, uniformly on compact sets.\n\\item For every $\\delta>0$, for every compact set $K\\subset \\sX$, for every $t>0$,\n$\n\\lim_{\\gamma\\to 0}\\sup_{x\\in K}\\bar \\bP^{\\gamma,x}\\left(\\|X_{\\lfloor t\/\\gamma\\rfloor} - \\Psi_t(x)\\|>\\delta\\right)=0\\,.\n$\n \\end{enumerate}\n\\label{hyp:general}\n\\end{assumption}\nLet $BC_\\Psi$ be the Birkhoff center of $\\Psi$, \\emph{i.e.}, the closure of the set of its recurrent points.\n\n\\begin{theorem}\n\\label{longrun}\nConsider $\\nu\\in \\cP(\\sX)$ s.t. Assumption~\\ref{hyp:general} holds true.
Then, for every $\\delta>0$,\n$\n\\lim_{\\gamma\\to 0}\\limsup_{n\\to\\infty} \\frac 1{n+1}\\sum_{k=0}^n \\bP^{\\gamma,\\nu}\\left(d(X_k,BC_\\Psi)>\\delta\\right)=0\\,.\n$\n\\end{theorem}\nWe omit the proof of this result, which follows a reasoning similar to \\cite[Th.~5.5 and Proof in section 8.4]{bianchi2019constant} and makes use of results from \\cite{for-pag-99}.\n\n\\noindent\\textbf{End of the Proof of Th.~\\ref{th:longrun}.}\nWe apply Th.~\\ref{longrun} in the case where $P_{\\gamma,n}$ is the kernel\nof the non-homogeneous Markov chain $(z_n^\\gamma)$ defined by~(\\ref{eq:znT}) and\n$\\bar P_\\gamma$ is the kernel of the homogeneous Markov chain $(\\bar z_n^\\gamma)$\ngiven by\n$\\bar z_n^\\gamma = \\bar z_{n-1}^\\gamma+\\gamma H_\\gamma(\\infty,\\bar z_{n-1}^\\gamma,\\xi_n)$\nfor every $n\\in\\bN^*$ and $\\bar z_0 \\in \\cZ_+$, where $H_\\gamma(\\infty,\\bar z_{n-1}^\\gamma,\\xi_n) \\eqdef \\lim_{k \\to \\infty} H_\\gamma(k,\\bar z_{n-1}^\\gamma,\\xi_n)$. The task is merely to verify Assumption~\\ref{hyp:general}{\\sl iii)}, the other assumptions being easily verifiable using Th.~\\ref{th:weak-cv}.\nConsider $\\gamma\\in (0,\\bar \\gamma_0)$.
Let $f: \\cZ \\to \\mathbb{R}$ be a bounded $M$-Lipschitz continuous function and $K$ a compact subset of $\\cZ_+$.\nFor all $z=(x,m,v) \\in K$:\n\\begin{align*}\n&|P_{\\gamma,n}(f)(z) - \\bar P_{\\gamma}(f)(z)| \\leq M \\gamma \\bE \\left \\| \\frac{(1-\\alpha^n)^{-1}\\tilde{m}_\\xi}{ \\varepsilon+(1-\\beta^n)^{-\\frac{1}{2}}{\\tilde{v}_\\xi^{1\/2}}} -\\frac{\\tilde{m}_\\xi}{ \\varepsilon+{\\tilde{v}_\\xi^{1\/2}}}\\right \\| \\\\\n&\\resizebox{.99\\hsize}{!}{$\\leq \\frac{M \\gamma\\alpha^n}{\\varepsilon(1- \\alpha^n)} \\sup_{(x,m,v)\\in K}\\left(\\alpha \\|m\\| + (1-\\alpha)\\bE\\|\\nabla f(x,\\xi)\\| \\right)\n + \\frac{M \\gamma \\bE \\|\\tilde{m}_\\xi \\odot \\tilde{v}_\\xi^{1\/2}\\|}{\\varepsilon^2}\\left(\\frac{1}{(1-\\beta^n)^{1\/2}}-1\\right)$}\\,\n\\end{align*}\nwhere we write $\\alpha=\\bar \\alpha(\\gamma)$, $\\beta=\\bar \\beta(\\gamma)$,\n$\\tilde{m}_\\xi \\eqdef \\alpha m+(1-\\alpha)\\nabla f(x,\\xi)$ and\n$\\tilde{v}_\\xi \\eqdef \\beta v+(1- \\beta)\\nabla f(x,\\xi)^{\\odot 2}$.\nThus, condition~\\ref{hyp:general}{\\sl iii)} follows.\nFinally, Cor.~\\ref{coro:cv} implies $BC_\\Phi=\\cE$.\n\n\\section{Proofs of Section~\\ref{sec:discrete_decreasing}}\n\\label{sec:proofs_sec_discrete_decreasing}\nIn this section, we denote by $\\bE_n = \\bE(\\cdot|\\cF_n)$ the conditional expectation w.r.t.
$\\cF_n$.\nWe also use the notation $\\nabla f_{n+1} \\eqdef \\nabla f(x_n,\\xi_{n+1})$.\n\nThe following lemma will be useful in the proofs.\n\\begin{lemma}\n\\label{lemma:r_n}\nLet the sequence $(r_n)$ be defined as in Algorithm~\\ref{alg:adam-decreasing}.\nAssume that $0 \\leq \\alpha_n \\leq 1$ for all $n$ and that $(1-\\alpha_n)\/\\gamma_n \\to a > 0$ as $n \\to +\\infty$.\nThen,\n\\begin{enumerate}[{\\sl i)}]\n\\item $\\forall n \\in \\bN, r_n = 1 - \\prod_{i=1}^n \\alpha_i$.\n\\item The sequence $(r_n)$ is nondecreasing and converges to $1$.\n\\item Under Assumption~\\ref{hyp:step-tcl}~\\ref{step-tcl-i}, for every $\\epsilon>0$, for sufficiently large $n$, we have\n$1-r_n \\leq e^{-\\frac{a\\gamma_0}{2(1-\\kappa)}n^{1-\\kappa}}$ if $\\kappa \\in (0,1)$ and\n$1-r_n \\leq n^{-a \\gamma_0\/(1+\\epsilon)}$ if $\\kappa = 1$.\n\\end{enumerate}\n\\end{lemma}\nA similar lemma holds for the sequence $(\\bar{r}_n)$.\n\\begin{proof}\ni) stems from observing that $r_{n+1} -1 = \\alpha_{n+1}(r_n -1)$ for every $n \\in \\bN$ and\niterating this relation ($r_0=0$).
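As an aside, items i) and ii) of the lemma are easy to confirm numerically. The sketch below (not part of the proof) iterates the relation $r_{n+1}=\alpha_{n+1}r_n+(1-\alpha_{n+1})$ used in the proof, with the admissible but arbitrary choice $\gamma_n=\gamma_0 n^{-\kappa}$ and $\alpha_n=1-a\gamma_n$, and compares the iterates with the closed form.

```python
# Numerical check (aside, not part of the proof) of Lemma items i)-ii):
# iterate  r_{n+1} = alpha_{n+1} r_n + (1 - alpha_{n+1}),  r_0 = 0,
# and compare with the closed form  r_n = 1 - prod_{i<=n} alpha_i.
# The choice gamma_n = gamma0 * n**(-kappa), alpha_n = 1 - a*gamma_n is one
# admissible (arbitrary) instance of the lemma's assumptions.
a, gamma0, kappa, N = 2.0, 0.1, 0.6, 2000
alphas = [1.0 - a * gamma0 * n ** -kappa for n in range(1, N + 1)]

r, rs = 0.0, []
for al in alphas:
    r = al * r + (1.0 - al)            # recursion used in the proof
    rs.append(r)

prod, closed = 1.0, []
for al in alphas:
    prod *= al
    closed.append(1.0 - prod)          # closed form of item i)

max_gap = max(abs(x - y) for x, y in zip(rs, closed))
nondecreasing = all(rs[i] <= rs[i + 1] + 1e-15 for i in range(N - 1))
print(max_gap, nondecreasing, rs[-1])  # tiny gap, True, r_N close to 1
```

The same check applies verbatim to the sequence $(\bar r_n)$ with $\beta_n$ in place of $\alpha_n$.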
ii) As a consequence of i), the sequence $(r_n)$ is nondecreasing.\nWe can write:\n$\n0 \\leq 1-r_n \\leq \\exp(- \\sum_{i=1}^n (1-\\alpha_i))\\,.\n$\niii) As $\\sum_{n\\geq 1} \\gamma_n = + \\infty$\nand $(1-\\alpha_n) \\sim a\\gamma_n$, we deduce that $\\sum_{i=1}^n (1-\\alpha_i) \\sim \\sum_{i=1}^n a\\gamma_i$.\nThe results follow from the fact that $\\sum_{i=1}^n \\gamma_i \\sim \\frac{\\gamma_0}{1-\\kappa} n^{1-\\kappa}$\nwhen $\\kappa \\in (0,1)$ and $\\sum_{i=1}^n \\gamma_i \\sim \\gamma_0 \\ln n$ for $\\kappa=1$.\n\\end{proof}\n\n\\subsection{Proof of Th.~\\ref{thm:as_conv_under_stab}}\n\nWe define\n$\\bar z_n = (x_{n-1},m_n,v_n)$ (note the shift in the index of the variable $x$).\nWe have\n$$\n\\bar z_{n+1} = \\bar z_n + \\gamma_{n+1} h_\\infty(\\bar z_n) + \\gamma_{n+1}\\chi_{n+1} + \\gamma_{n+1} \\varsigma_{n+1}\\,,\n$$\nwhere $h_\\infty$ is defined in Eq.~(\\ref{eq:h_infty}) and where we set\n$$\n\\chi_{n+1} = \\left(0,\\gamma_{n+1}^{-1}(1-\\alpha_{n+1})(\\nabla f_{n+1}-\\nabla F(x_n)),\\gamma_{n+1}^{-1}(1-\\beta_{n+1})(\\nabla f_{n+1}^{\\odot 2}-S(x_n))\\right)\n$$\nand $\\varsigma_{n+1} = (\\varsigma_{n+1}^x,\\varsigma_{n+1}^m,\\varsigma_{n+1}^v)$ with the components defined by:\n$\\varsigma_{n+1}^x = \\frac{m_n}{\\varepsilon + \\sqrt{v_n}} - \\frac{\\gamma_n}{\\gamma_{n+1}}\\frac{\\hat m_n}{\\varepsilon + \\sqrt{\\hat v_n}}$,\n$\\varsigma_{n+1}^m = \\left(\\frac{1-\\alpha_{n+1}}{\\gamma_{n+1}}-a\\right)(\\nabla F(x_n)-m_n) + a(\\nabla F(x_n) - \\nabla F(x_{n-1}))$ and\n$\\varsigma_{n+1}^v = \\left(\\frac{1-\\beta_{n+1}}{\\gamma_{n+1}}-b\\right)(S(x_n)-v_n) + b(S(x_n) - S(x_{n-1}))$\\,.\nWe prove that $\\varsigma_n\\to 0$ a.s.\nUsing the triangle inequality,\n\\begin{align*}\n \\|\\varsigma_n^x\\|\n&\\leq \\left\\| \\frac{m_n}{\\varepsilon + \\sqrt{v_n}} -\\frac{m_n}{\\bar r_n^{1\/2}\\varepsilon + \\sqrt{v_n}}\\right\\|\n+ \\left|1-\\frac{\\gamma_nr_n^{-1}}{\\gamma_{n+1}\\bar r_n^{-1\/2}}\\right|\\left\\| \\frac{m_n}{\\bar r_n^{1\/2}\\varepsilon + 
\\sqrt{v_n}}\\right\\|\\\\\n&\\leq \\varepsilon^{-1}|1-\\bar r_n^{-1\/2}|\\|m_n\\| + \\varepsilon^{-1}\\left|\\bar r_n^{-1\/2}-\\frac{\\gamma_nr_n^{-1}}{\\gamma_{n+1}}\\right|\\|m_n\\|\\,,\n\\end{align*}\nwhich converges a.s. to zero because of the boundedness of $(z_n)$ combined with Assumption~\\ref{hyp:stepsizes} and Lemma~\\ref{lemma:r_n} for $(\\bar r_n)$.\nThe components $\\varsigma_{n+1}^m$ and $\\varsigma_{n+1}^v$ converge a.s. to zero,\nas products of a bounded term and a term converging to zero.\nIndeed, note that $\\nabla F$ and $S$ are locally Lipschitz continuous under Assumption~\\ref{hyp:model}.\nHence, there exists a constant $C$ s.t. $\\|\\nabla F(x_n) - \\nabla F(x_{n-1})\\| \\leq C \\|x_n-x_{n-1}\\| \\leq \\frac{C}{\\varepsilon}\\gamma_n\\|m_n\\|$.\nThe same inequality holds when replacing $\\nabla F$ by $S$.\nNow consider the martingale increment sequence $(\\chi_n)$, adapted to $\\cF_n$.\nEstimating the second order moments, it is easy to show using Assumption~\\ref{hyp:moment-f}~\\ref{momentegal} that\nthere exists a constant $C'$ s.t.\n$\\bE_n(\\|\\chi_{n+1}\\|^2)\\leq C'$.\nUsing that $\\sum_k\\gamma_k^2<\\infty$, it follows that $\\sum_n\\bE_n(\\|\\gamma_{n+1}\\chi_{n+1}\\|^2)<\\infty$ a.s.\nBy Doob's convergence theorem, $\\lim_{n\\to\\infty} \\sum_{k\\leq n} \\gamma_k\\chi_k$ exists almost surely.\nUsing this result along with the fact that $\\varsigma_n$ converges a.s. 
to zero, it follows from\nusual stochastic approximation arguments \\cite{ben-(cours)99} that the interpolated process\n$\\bar\\sz: [0,+\\infty)\\to \\cZ_+$ given by\n\\[\n\\bar \\sz(t) = \\bar z_n + (t-\\tau_n) \\frac{\\bar z_{n+1}-\\bar z_n}{\\gamma_{n+1}} \\qquad \\left(\\forall n \\in \\bN\\,,\\, \\forall t \\in [\\tau_n,\\tau_{n+1})\\right)\n\\]\n(where $\\tau_n = \\sum_{k=0}^n\\gamma_k$), is almost surely a bounded APT of the semiflow $\\bar\\Phi$ defined by~(\\ref{eq:ode-a}).\nThe proof is concluded by applying Prop.~\\ref{prop:benaim} and Prop.~\\ref{prop:Wstrict}.\n\n\\subsection{Proof of Prop.~\\ref{thm:stab}}\n\\label{sec:stability}\n\nAs $\\inf F>-\\infty$, one can assume without loss of generality that $F\\geq 0$.\nIn the sequel,\n$C$ denotes some positive constant which may change from line to line.\nWe define $a_n \\eqdef (1-\\alpha_{n+1})\/\\gamma_n$ and\n$P_n\\eqdef \\frac 1{2a_nr_n}\\ps{m_n^{\\odot 2},\\frac 1{\\varepsilon+\\sqrt{\\hat v_n}}}$.\nWe have $a_n\\to a$ and $r_n\\to 1$.\nBy Assumption~\\ref{hyp:stab}-\\ref{lipschitz},\n\\begin{align}\nF(x_{n}) \n&\\leq F(x_{n-1}) - \\gamma_{n} \\ps{\\nabla F(x_{n}),\\frac{\\hat m_n}{\\varepsilon + \\sqrt{\\hat v_n}}} + C\\gamma_n^2 P_n\\label{eq:lip}.\n\\end{align}\nWe set $u_n \\eqdef 1-\\frac{a_{n+1}}{a_{n}}$ and\n$D_n \\eqdef \\frac {r_n^{-1}}{\\varepsilon+\\sqrt{\\hat v_{n}}}$, so that $P_n = \\frac 1{2a_n}\\ps{D_n,m_n^{\\odot 2}}$.\nWe can write:\n\\begin{equation}\nP_{n+1}-P_n= u_nP_{n+1} +\\ps{\\frac{D_{n+1}-D_n}{2a_n},m_{n+1}^{\\odot 2}}+\\ps{\\frac{D_n}{2a_n},m_{n+1}^{\\odot 2}-m_n^{\\odot 2}}.\\label{eq:P-P}\n\\end{equation}\nWe estimate the vector $D_{n+1}-D_n$.\nUsing that $(r_n^{-1})$ is non-increasing,\n$$\nD_{n+1}-D_n \\leq r_{n}^{-1} \\frac{\\sqrt{\\hat v_n} - \\sqrt{\\hat v_{n+1}}}{(\\varepsilon + \\sqrt{\\hat v_{n+1}})\\odot(\\varepsilon + \\sqrt{\\hat v_n})}\\,.\n$$\nRemarking that $v_{n+1} \\geq \\beta_{n+1} v_n$, recalling that\n$(\\bar{r}_n)$ is nondecreasing and using the update 
rules of $v_n$ and\n$\\bar{r}_n$, we obtain after some algebra\n\\begin{align}\n \\label{eq:subsubterm2}\n\\sqrt{\\hat v_n} - \\sqrt{\\hat v_{n+1}} &= \\bar{r}_{n+1}^{-\\frac 12}(1-\\beta_{n+1})\\frac{v_n-\\nabla f_{n+1}^{\\odot 2}}{\\sqrt{v_n}+\\sqrt{v_{n+1}}}\n + \\frac{\\bar{r}_{n+1}-\\bar{r}_n}{\\sqrt{\\bar{r}_n}(\\sqrt{\\bar{r}_n}+\\sqrt{\\bar{r}_{n+1}})} \\sqrt{\\frac{v_n}{\\bar{r}_{n+1}}} \\nonumber\\\\\n &\\leq c_{n+1} \\sqrt{\\hat v_{n+1}} \\,\\, \\text{where} \\,\\,\n c_{n+1} \\eqdef \\frac{1-\\beta_{n+1}}{\\sqrt{\\beta_{n+1} }}\\left( \\frac{1}{1+\\sqrt{\\beta_{n+1}}} + \\frac{1- \\bar{r}_n}{2\\bar{r}_n} \\right)\\,.\n\\end{align}\nIt is easy to see that $c_n\/\\gamma_n\\to b\/2$. Thus, for any $\\delta>0$, $c_{n+1}\\leq (b+2\\delta)\\gamma_{n}\/2$ for all $n$ large enough.\nUsing also that $\\sqrt{\\hat v_{n+1}}\/(\\varepsilon +\\sqrt{\\hat v_{n+1}})\\leq 1$, we obtain that\n$\nD_{n+1}-D_n \\leq\n\\frac{b+2\\delta}2 \\gamma_n D_n\\,.\n$\nSubstituting this inequality in Eq.~(\\ref{eq:P-P}), we get\n\\begin{align*}\n P_{n+1}-P_n&\\leq u_nP_{n+1} +\\gamma_n\\ps{\\frac{b+2\\delta}{4a_n} D_n,m_{n+1}^{\\odot 2}}+\\ps{\\frac{D_n}{2a_n},m_{n+1}^{\\odot 2}-m_n^{\\odot 2}}\\,.\n\\end{align*}\nUsing $m_{n+1}^{\\odot 2} - m_n^{\\odot 2} = 2 m_n\n\\odot (m_{n+1}-m_n)+(m_{n+1}-m_n)^{\\odot 2} $, and noting that\\\\ $\\bE_n(m_{n+1}-m_n) = a_n\\gamma_n(\\nabla F(x_n) -m_n)$,\n\\begin{equation*}\n \\bE_n\\ps{\\frac {D_n}{2a_n},m_{n+1}^{\\odot 2}-m_n^{\\odot 2}}\n = \\gamma_n\\ps{\\nabla F(x_n),\\frac{\\hat m_n}{\\varepsilon+\\sqrt{\\hat v_n}}}-2a_n\\gamma_nP_n+\\ps{\\frac {D_n}{2a_n}, \\bE_n[(m_{n+1}-m_n)^{\\odot 2}] }\\,.\n\\end{equation*}\nAs $a_n\\to a$, we have $a_n-\\frac{b+2\\delta}{4}\\geq a-\\frac{b+\\delta}{4}$ for all $n$ large enough.
Hence,\n\\begin{align*}\n \\bE_n(P_{n+1})-P_n&\\leq\nu_nP_{n+1} -2(a-\\frac{b+\\delta}{4})\\gamma_n P_n + \\gamma_n\\ps{\\nabla F(x_n),\\frac{\\hat m_n}{\\varepsilon+\\sqrt{\\hat v_n}}}\n\\\\\n&+\n\\gamma_n^2\\frac{b+2\\delta}2\\ps{\\nabla F(x_n),\\frac{\\hat m_n}{\\varepsilon+\\sqrt{\\hat v_n}}}\n+C\\ps{\\frac {D_n}{2a_n}, \\bE_n[(m_{n+1}-m_n)^{\\odot 2}] }\\,.\n\\end{align*}\nUsing the Cauchy-Schwarz inequality and Assumption~\\ref{hyp:stab}~\\ref{momentgrowth},\nit is easy to show the inequality\n$\\ps{\\nabla F(x_n),\\frac{\\hat m_n}{\\varepsilon+\\sqrt{\\hat v_n}}}\\leq C(1+F(x_n)+P_n)$.\nMoreover, using the componentwise inequality $(\\nabla f_{n+1}-m_n)^{\\odot 2} \\leq 2 \\nabla f_{n+1}^{\\odot 2} + 2 m_n^{\\odot 2}$\nalong with Assumption~\\ref{hyp:stab}~\\ref{momentgrowth}, we obtain\n\\begin{equation*}\n \\resizebox{\\hsize}{!}{$\n \\ps{\\frac {D_n}{2a_n}, \\bE_n[(m_{n+1}-m_n)^{\\odot 2}] } \\leq\n2(1-\\alpha_{n+1})^2\\ps{\\frac {D_n}{2a_n},\n\\bE_n[ \\nabla f_{n+1}^{\\odot 2}]+ m_n^{\\odot 2}\n}\n\\leq C\\gamma_n^2(1+F(x_n)+P_n)\\,.\n$}\n\\end{equation*}\nPutting all pieces together with Eq.~(\\ref{eq:lip}),\n\\begin{equation}\n\\label{eq:F+P}\n\\resizebox{0.95\\hsize}{!}{$\n \\bE_n(F(x_n)+ P_{n+1}) \\leq F(x_{n-1})+P_{n}\n +u_nP_{n+1} -2(a-\\frac{b+\\delta}{4})\\gamma_n P_n\n+C\\gamma_n^2(1+F(x_n)+P_n)\\,.\n$}\n\\end{equation}\nDefine\n$\nV_n\\eqdef (1-C\\gamma_{n-1}^2)F(x_{n-1})+(1-u_{n-1})P_{n}\n$\nwhere the constant $C$ is fixed so that\nEq.~(\\ref{eq:F+P}) holds.\nThen,\n\\begin{equation*}\n \\bE_n(V_{n+1}) \\leq V_n\n -\\left(2a-\\frac{b+\\delta}{2}-\\frac{u_{n-1}}{\\gamma_n}\\right)\\gamma_n P_n\n+C\\gamma_n^2(1+P_n)+ C\\gamma_{n-1}^2F(x_{n-1})\\,.\n\\end{equation*}\nBy Assumption~\\ref{hyp:stab}, $\\limsup_n u_{n-1}\/\\gamma_n< 2a-b\/2$ and for $\\delta$ small enough, we obtain\n\\begin{equation*}\n \\bE_n(V_{n+1}) \\leq V_n\n+C\\gamma_n^2(1+P_n)+ C\\gamma_{n-1}^2F(x_{n-1})\\leq (1+ C'\\gamma_n^2)V_n +C\\gamma_n^2\\,.\n\\end{equation*}\nBy the
Robbins-Siegmund's theorem \\cite{robbins1971convergence},\nthe sequence $(V_n)$ converges almost surely to a finite random variable $V_\\infty \\in \\bR^+$.\nIn turn, the coercivity of $F$ implies that $(x_n)$ is almost surely bounded.\nWe now establish the almost sure boundedness of $(m_n)$.\nConsider the martingale difference sequence $\\Delta_{n+1}\\eqdef \\nabla f_{n+1} -\\nabla F(x_n)$.\nWe decompose $m_{n} = \\bar m_n + \\tilde m_n$ where\n$\\bar m_{n+1} = \\alpha_{n+1} \\bar m_n + (1-\\alpha_{n+1})\\nabla F(x_n)$ and\n$\\tilde m_{n+1} = \\alpha_{n+1} \\tilde m_n + (1-\\alpha_{n+1})\\Delta_{n+1}$, setting $\\bar m_0=\\tilde m_0=0$.\nWe prove that both terms $\\bar m_n$ and $\\tilde m_n$ are bounded. Consider the first term:\n$\n\\|\\bar m_{n+1}\\|\\leq \\alpha_{n+1} \\|\\bar m_n\\| + (1-\\alpha_{n+1}) \\sup_k\\|\\nabla F(x_k)\\|\\,.\n$\nBy continuity of $\\nabla F$, the supremum in the above inequality is almost surely finite.\nThus, for every $n$, the ratio $\\|\\bar m_n\\|\/\\sup_k\\|\\nabla F(x_k)\\|$ is upperbounded by the bounded sequence\n$r_n$. Hence, $(\\bar m_n)$ is bounded w.p.1. Consider now the term $\\tilde m_n$:\n\\begin{equation*}\n \\resizebox{\\hsize}{!}{$\n \\bE_n(\\|\\tilde m_{n+1}\\|^2) = \\alpha_{n+1}^2\\|\\tilde m_n\\|^2 + (1-\\alpha_{n+1})^2\\bE_n(\\|\\Delta_{n+1}\\|^2)\n \\leq (1+(1-\\alpha_{n+1})^2)\\|\\tilde m_n\\|^2 + (1-\\alpha_{n+1})^2C\\,,\n $}\n\\end{equation*}\nwhere $C$ is a constant s.t. $\\bE_n(\\|\\nabla f_{n+1}\\|^2) \\leq C$ by Assumption~\\ref{hyp:moment-f}~\\ref{momentegal}.\nHere, we used $\\alpha_{n+1}^2\\leq (1+(1-\\alpha_{n+1})^2)$ and the inequality\n$\\bE_n(\\|\\Delta_{n+1}\\|^2) \\leq \\bE_n(\\|\\nabla f_{n+1}\\|^2)$.\nBy Assumption~\\ref{hyp:stepsizes}, $\\sum_n(1-\\alpha_{n+1})^2<\\infty$. By the Robbins-Siegmund theorem,\nit follows that $\\sup_n\\|\\tilde m_n\\|^2<\\infty$ w.p.1. 
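For the reader's convenience, here is the variant of the Robbins-Siegmund theorem \cite{robbins1971convergence} invoked twice above, stated in the form used in this proof (our paraphrase):

```latex
% Robbins-Siegmund theorem (almost-supermartingale convergence), as used above.
Let $(V_n)$, $(a_n)$, $(b_n)$, $(c_n)$ be nonnegative $(\cF_n)$-adapted
sequences such that, almost surely, $\sum_n a_n<\infty$, $\sum_n b_n<\infty$, and
\begin{equation*}
  \bE(V_{n+1}\,|\,\cF_n) \leq (1+a_n)\,V_n + b_n - c_n\,.
\end{equation*}
Then $(V_n)$ converges almost surely to a finite nonnegative random variable,
and $\sum_n c_n<\infty$ almost surely.
```

Above, it is applied first to $(V_n)$ with $a_n\propto\gamma_n^2$ and $b_n=C\gamma_n^2$, and then to $\|\tilde m_n\|^2$ with $a_n=b_n\propto(1-\alpha_{n+1})^2$.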
Finally, it can be shown that $(v_n)$ is\nalmost surely bounded using the same arguments.\n\n\\subsection{Proof of Th.~\\ref{thm:clt}}\n\\label{sec:clt}\n\nWe use~\\cite[Th.~1]{pelletier1998weak}.\nAll the assumptions in the latter can be verified in our case, with the exception of\na positive definiteness condition on the\nlimiting covariance matrix, which here corresponds to the matrix $Q$\ngiven by Eq.~(\\ref{eq:Q}). As $Q$ is not positive definite, it is strictly speaking not possible\nto just cite and apply \\cite[Th.~1]{pelletier1998weak}. Nevertheless,\na detailed inspection of the proofs of \\cite{pelletier1998weak} shows that only a minor\nadaptation is needed in order to cover the present case.\nThere is therefore no need to reprove the convergence result of \\cite{pelletier1998weak} from scratch.\nIt is sufficient to verify the assumptions of \\cite[Th.~1]{pelletier1998weak}\n(except the definiteness of $Q$) and then to point out the specific part of the proof of \\cite{pelletier1998weak}\nwhich requires some adaptation.\n\nLet $z_n=(x_n,m_n,v_n)$ be the output of Algorithm~\\ref{alg:adam-decreasing}.\nDefine $z^*=(x^*,0,S(x^*))$.\nDefine $\\eta_{n+1} \\eqdef (0, a(\\nabla f_{n+1} - \\nabla F(x_n)), b(\\nabla f_{n+1}^{\\odot 2}- S(x_n)))$.\nWe have\n\\begin{equation}\n \\label{eq:zn_dec}\n z_{n+1} = z_n + \\gamma_{n+1} h_\\infty(z_n) + \\gamma_{n+1}\\eta_{n+1} + \\gamma_{n+1} \\epsilon_{n+1}\\,,\n\\end{equation}\nwhere $\\epsilon_{n+1} \\eqdef (\\epsilon_{n+1}^1,\\epsilon_{n+1}^2,\\epsilon^3_{n+1})$, whose components are given by\n\\begin{equation*}\n \\resizebox{\\hsize}{!}{$\n \\epsilon_{n+1}^1 = \\frac{m_n}{\\varepsilon+\\sqrt{v_n}} - \\frac{\\hat m_{n+1}}{\\varepsilon+\\sqrt{\\hat v_{n+1}}};\\,\n \\epsilon_{n+1}^2 = \\left(\\frac{1-\\alpha_{n+1}}{\\gamma_{n+1}}-a\\right)\\left( \\nabla f_{n+1} - m_n\\right);\\,\n \\epsilon_{n+1}^3 = \\left(\\frac{1-\\beta_{n+1}}{\\gamma_{n+1}}-b\\right)\\left( \\nabla f_{n+1}^{\\odot 2} - v_n\\right)\\,.\n 
$}\n\\end{equation*}\nHere, $\\eta_{n+1}$ is a martingale increment noise and\n$\\epsilon_{n+1} = (\\epsilon_{n+1}^1,\\epsilon_{n+1}^2,\\epsilon^3_{n+1})$ is a remainder term.\nThe aim is to check the assumptions (A1.1) to (A1.3) of\n\\cite{pelletier1998weak}, where the role of the quantities ($h$,\n$\\varepsilon_n$, $r_n$, $\\sigma_n$, $\\alpha$, $\\rho$, $\\beta$) in\n\\cite{pelletier1998weak} is respectively played by the quantities\n($h_\\infty$, $\\eta_n$, $\\epsilon_n$, $\\gamma_n$, $\\kappa$, $1$, $1$) of\nthe present paper.\n\nLet us first consider Assumption (A1.1) for $h_\\infty$.\nBy construction, $h_\\infty(z^*) = 0$. By Assumptions~\\ref{hyp:mean_field_tcl} and \\ref{hyp:S>0},\n$h_\\infty$ is continuously differentiable in the neighborhood of $z^*$ and its\nJacobian at $z^*$ coincides with the matrix $H$ given by Eq.~(\\ref{eq:H}).\nAs already discussed, after some algebra,\nit can be shown that the largest real part of the eigenvalues of $H$ coincides with $-L$ where $L>0$\nis given by Eq.~(\\ref{eq:L}).\nHence, Assumption (A1.1) of \\cite{pelletier1998weak}\nis satisfied for $h_\\infty$. Assumption (A1.3) is trivially satisfied\nusing Assumption~\\ref{hyp:step-tcl}. The crux is therefore to verify Assumption (A1.2).\nClearly, $\\bE(\\eta_{n+1}|\\cF_n)=0$. 
Using Assumption~\\ref{hyp:moment-f}\\ref{momentreinforce},\nit follows from straightforward manipulations based on Jensen's inequality\nthat for any $M>0$,\nthere exists $\\delta>0$ s.t.\n$\\sup_{n\\geq 0}\\bE_n\\left(\\|\\eta_{n+1}\\|^{2+\\delta}\\right) \\1_{\\{\\|z_n-z^*\\|\\leq M\\}}<\\infty\\,.$\nNext, we verify the condition\n\\begin{equation}\n \\label{eq:cond-reste}\n \\lim_{n\\to\\infty} \\bE\\left(\\gamma_{n+1}^{-1}\\|\\epsilon_{n+1}\\|^2\\1_{\\{\\|z_n-z^*\\|\\leq M\\}}\\right) = 0\\,.\n\\end{equation}\nIt is sufficient to verify the latter for $\\epsilon^i_n$ ($i=1,2,3$) in place of $\\epsilon_n$.\nThe map $(m,v)\\mapsto m\/(\\varepsilon+\\sqrt{v})$ is Lipschitz continuous in a neighborhood of\n$(0,S(x^*))$ by Assumption~\\ref{hyp:S>0}. Thus, for $M$ small enough, there exists a constant $C$ s.t.\nif $ \\|z_n-z^*\\|\\leq M$, then\n$ \\|\\epsilon_{n+1}^1\\| \\leq C\\left\\|r_{n+1}^{-1}m_{n+1}-m_n\\right\\|+C\\left\\|\\bar r_{n+1}^{-1} v_{n+1}-v_n\\right\\|\\,.$\nUsing the triangle inequality and the fact that $r_{n+1},\\bar r_{n+1}$ are sequences bounded away from zero, there exists another constant $C$ s.t.\n\\begin{align*}\n \\|\\epsilon_{n+1}^1\\| &\\leq C\\left\\|m_{n+1}-m_n\\right\\|+C\\left\\|v_{n+1}-v_{n}\\right\\|\n+C|r_{n+1}-1|+C|\\bar r_{n+1}-1|\\,.\n\\end{align*}\nUsing Lemma~\\ref{lemma:r_n} under Assumption~\\ref{hyp:step-tcl} (note that $\\gamma_0 > 1\/2L \\geq 1\/a$ when $\\kappa=1$),\nwe obtain that the sequence $|r_{n}-1|\/\\gamma_n$ is bounded, thus $|r_{n+1}-1|\\leq C\\gamma_{n+1}$.\n The sequence $(1-\\alpha_{n})\/\\gamma_n$ also being bounded, it holds that\n\\begin{align*}\n \\|m_{n+1}-m_n\\|^2 \\1_{\\{\\|z_n-z^*\\|\\leq M\\}} \\leq C\\gamma_{n+1}^2(1+\\|\\nabla f_{n+1}\\|^2 )\\1_{\\{\\|z_n-z^*\\|\\leq M\\}}\\,.\n\\end{align*}\nBy Assumption~\\ref{hyp:moment-f}~\\ref{momentreinforce},\n$\\bE_n(\\|\\nabla f_{n+1}\\|^2)$ is bounded by a deterministic constant on $\\{\\|z_n-z^*\\|\\leq M\\}$.\nThus, 
$\\bE_n(\\|m_{n+1}-m_n\\|^2\\1_{\\{\\|z_n-z^*\\|\\leq M\\}})\\leq C\\gamma_{n+1}^2$.\nA similar result holds for $\\|v_{n+1}-v_n\\|^2$. We have thus shown that\n$ \\bE_n\\left(\\|\\epsilon_{n+1}^1\\|^2\\1_{\\{\\|z_n-z^*\\|\\leq M\\}}\\right)\\leq C\\gamma_{n+1}^2$. Hence,\nEq.~(\\ref{eq:cond-reste}) is proved for $\\epsilon_{n+1}^1$ in place of $\\epsilon_{n+1}$.\nUnder Assumption~\\ref{hyp:step-tcl}, the proof uses the same kind of arguments for $\\epsilon_{n+1}^2$, $\\epsilon_{n+1}^3$ and is omitted.\nFinally, Eq.~(\\ref{eq:cond-reste}) is proved.\nContinuing the verification of Assumption (A1.2), we establish that\n\\begin{equation}\n \\label{eq:lim-cov}\n \\bE_n(\\eta_{n+1}\\eta_{n+1}^T) \\to Q\\textrm{ a.s. on } \\{z_n\\to z^*\\}\\,.\n\\end{equation}\nDenote by $\\bar Q(x)$ the matrix given by the right-hand side of Eq.~(\\ref{eq:Q}) when $x^*$\nis replaced by an arbitrary $x\\in \\cV$. It is easily checked that $\\bE_n(\\eta_{n+1}\\eta_{n+1}^T)=\\bar Q(x_n)$\nand by continuity, $\\bar Q(x_n)\\to Q$ a.s. on $\\{z_n\\to z^*\\}$, which proves (\\ref{eq:lim-cov}).\nTherefore, Assumption (A1.2) is fulfilled, except for the point mentioned at the beginning of this section:\n\\cite{pelletier1998weak} imposes the additional condition that the limit\nmatrix in Eq.~(\\ref{eq:lim-cov}) is positive definite. This condition is not satisfied in our case,\nbut the proof can still be adapted. The specific part of the proof where the positive definiteness\ncomes into play is Th.~7 in \\cite{pelletier1998weak}. The proof of \\cite[Th.~1]{pelletier1998weak} can therefore\nbe adapted to the case of a positive semidefinite matrix.\nIn the proof of \\cite[Th.~7]{pelletier1998weak}, we only replace the inverse of the square root of $Q$ with its Moore-Penrose inverse.\nFinally, the uniqueness of the stationary distribution $\\mu$ and its expression follow from \\cite[Th.~6.7, p. 
357]{karatzas-shreve1991}.\n\n\\noindent\\textbf{Proof of Eq.~(\\ref{eq:cov})}.\nWe introduce the $d\\times d$ blocks of the $3d\\times 3d$ matrix\n$\\Sigma = \\left(\\Sigma_{i,j}\\right)_{i,j=1,2,3}$\nwhere $\\Sigma_{i,j}$ is $d\\times d$. We denote by $\\tilde\\Sigma$ the $2d\\times 2d$ submatrix $\\tilde\\Sigma \\eqdef\\left(\\Sigma_{i,j}\\right)_{i,j=1,2}$.\nBy Th.~\\ref{thm:clt}, we have the subsystem:\n\\begin{equation}\n\\tilde H \\tilde\\Sigma + \\tilde\\Sigma\\tilde H^T =\n\\begin{pmatrix}\n 0 & 0 \\\\\n0 & -a^2 \\tilde Q\n\\end{pmatrix}\\qquad\\text{where }\\tilde H \\eqdef\n\\begin{pmatrix}\n \\zeta I_d & -D \\\\\na\\nabla^2F(x^*) & (\\zeta-a) I_d\n\\end{pmatrix}\\label{eq:lyap-reduced}\n\\end{equation}\nand where $\\tilde Q \\eqdef \\textrm{Cov}\\left(\\nabla\n f(x^*,\\xi)\\right)$. The next step is to triangularize the matrix\n$\\tilde H$ in order to decouple the blocks of $\\tilde\\Sigma$. For\nevery $k=1,\\dots,d$, set\n$\\nu_k^\\pm \\eqdef -\\frac{a}{2}\\pm\\sqrt{a^2\/4-a\\lambda_k}$\nwith the convention that $\\sqrt{-1} =\n\\imath$ (inspecting the characteristic polynomial of $\\tilde H$, these\nare the eigenvalues of $\\tilde H$). Set\n$M^\\pm\\eqdef\\diag{(\\nu_1^\\pm,\n\\cdots,\\nu_d^\\pm)}$ and $R^\\pm\\eqdef\nD^{-1\/2}PM^\\pm P^TD^{-1\/2}$. Using the identities $M^++M^-=-a I_d$ and\n$M^+M^-=a\\Lambda$ where\n$\\Lambda\\eqdef\\diag{(\\lambda_1,\\cdots,\\lambda_d)}$, it can be checked\nthat\n$$\n\\cR\\tilde H =\\begin{pmatrix}\n D R^+ + \\zeta I_d & -D \\\\ 0 & R^-D + \\zeta I_d\n\\end{pmatrix}\\cR,\\text{ where }\\cR\\eqdef\n\\begin{pmatrix}\n I_d & 0 \\\\ R^+ & I_d\n\\end{pmatrix}\\,.\n$$\nSet $\\check \\Sigma \\eqdef \\cR\\tilde\\Sigma\\cR^T$. Denote by\n$(\\check \\Sigma_{i,j})_{i,j=1,2}$ the blocks of $\\check\\Sigma$.\nNote that $\\check\\Sigma_{1,1} = \\Sigma_{1,1}$. 
By left\/right multiplication of Eq.~(\\ref{eq:lyap-reduced})\nwith $\\cR$ and $\\cR^T$, respectively, we obtain\n\\begin{align}\n &(DR^++\\zeta I_d) \\Sigma_{1,1}+\\Sigma_{1,1}(R^+D+\\zeta I_d) = \\check\\Sigma_{1,2} D+ D\\check\\Sigma_{1,2}^T \\label{eq:lapremiere}\\\\\n& (DR^++\\zeta I_d) \\check\\Sigma_{1,2}+\\check\\Sigma_{1,2} (DR^-+\\zeta I_d) = D\\check\\Sigma_{2,2} \\label{eq:ladeuxieme}\\\\\n& (R^-D+\\zeta I_d) \\check\\Sigma_{2,2} + \\check\\Sigma_{2,2} (DR^-+\\zeta I_d) = -a^2\\tilde Q \\label{eq:laderniere}\n\\end{align}\nSet $\\bar \\Sigma_{2,2} = P^{-1}D^{1\/2}\\check\\Sigma_{2,2} D^{1\/2}P$.\nDefine $C\\eqdef P^{-1}D^{1\/2}\\tilde Q D^{1\/2}P$.\nEq.~(\\ref{eq:laderniere}) yields $(M^-+\\zeta I_d) \\bar\\Sigma_{2,2} + \\bar\\Sigma_{2,2} (M^-+\\zeta I_d) = -a^2C$.\nSet $\\bar \\Sigma_{1,2} = P^{-1}D^{-1\/2}\\check\\Sigma_{1,2} D^{1\/2}P$.\nEq.~(\\ref{eq:ladeuxieme}) rewrites as $(M^++\\zeta I_d) \\bar \\Sigma_{1,2}+\\bar \\Sigma_{1,2} (M^-+\\zeta I_d) = \\bar \\Sigma_{2,2}$.\nWe obtain that\n$\\bar\\Sigma_{1,2}^{k,\\ell} = (\\nu_k^++\\nu_\\ell^-+2\\zeta)^{-1}\\bar\\Sigma_{2,2}^{k,\\ell} = \\frac{-a^2C_{k,\\ell}}{(\\nu_k^++\\nu_\\ell^-+2\\zeta)(\\nu_k^-+\\nu_\\ell^-+2\\zeta)}\\,.$\nSet\n$\\bar \\Sigma_{1,1} = P^{-1}D^{-1\/2}\\Sigma_{1,1} D^{-1\/2}P$.\nEq.~(\\ref{eq:lapremiere}) becomes $(M^++\\zeta I_d)\\bar \\Sigma_{1,1} + \\bar \\Sigma_{1,1}(M^++\\zeta I_d) = \\bar \\Sigma_{1,2} + \\bar \\Sigma_{1,2}^T$. 
Thus,\n\\begin{align*}\n\\bar\\Sigma_{1,1}^{k,\\ell} &\n\\resizebox{.9\\hsize}{!}{$\n= \\frac{\\bar\\Sigma_{1,2}^{k,\\ell}+\\bar\\Sigma_{1,2}^{\\ell,k}}{\\nu_k^++\\nu_\\ell^++2\\zeta}\n= \\frac{-a^2C_{k,\\ell}}{(\\nu_k^++\\nu_\\ell^++2\\zeta)(\\nu_k^-+\\nu_\\ell^-+2\\zeta)}\\left(\\frac{1}{\\nu_k^++\\nu_\\ell^-+2\\zeta}\n+\\frac{1}{\\nu_k^-+\\nu_\\ell^++2\\zeta}\\right)\n$}\n\\\\ &\n= \\frac{C_{k,\\ell}}{ (1 - \\frac{2\\zeta}{a})(\\lambda_k+\\lambda_\\ell-2\\zeta + \\frac 2a \\zeta^2) +\\frac 1{2(a-2\\zeta)}(\\lambda_k-\\lambda_\\ell)^2}\\,,\n\\end{align*}\nand the result is proved.\n\n\\bibliographystyle{siamplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzkzht b/data_all_eng_slimpj/shuffled/split2/finalzzkzht new file mode 100644 index 0000000000000000000000000000000000000000..227f68aeb9d5f5c06bf2cc0481cfa432548eed29 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzkzht @@ -0,0 +1,5 @@ +{"text":"\\section{INTRODUCTION}\n\nHigh-mass $\\gamma$-ray\nbinaries (HMGBs) consist of a compact object, either a neutron star (NS) or black hole (BH), orbiting a\nhot, massive O or B type star. \nThe number of known HMGBs \nhas been increasing\nin recent years \\citep{2019arXiv190103624P}. However, in all but two HMGBs (LS\\,2883\/B1259--63 and MT91\\,213\/TeV J2031+4130), \nwhich host young pulsars, the nature of the compact object remains unknown. \n Two scenarios\nfor \nhigh-energy emission of HMGBs\n are \n usually discussed: (1) interaction of the\npulsar wind with the wind of the massive donor star leading to the \nformation of an intrabinary shock, or (2) jets powered by accretion\nonto a compact object (likely a BH). All HMGBs that have been observed with VLBI exhibit extended radio \nemission on milliarcsecond scales, which can be attributed to either a pulsar-wind nebula or to \njets \nproduced by accretion onto a BH (i.e., the classical microquasar scenario). 
\n\n\n\nX-ray observations can provide important information for \n understanding the nature of HMGBs. Therefore, \n these objects have been extensively observed in soft and hard X-rays. \n In particular, {\\sl NuSTAR} observed LS\\,2883 \\citep{2015MNRAS.454.1358C}, MT91\\,213 \\citep{2017ApJ...843...85L}, 1FGL\\,J1018.6--5856 \\citep{2015ApJ...806..166A}, HESS J0632+057 \\citep{2019arXiv190803083P,2020ApJ...888..115A}, LS\\,I\\,+61$^\\circ$303 \\citep{2020MNRAS.tmp.2539M}, and LMC P3 \\citep{2020AAS...23545701C}. All of these HMGBs were also observed in soft X-rays with {\\sl XMM-Newton}, {\\sl Swift}, {\\sl Suzaku}, and {\\sl Chandra} (see e.g., \\citealt{2009ApJ...697..592T,2013A&ARv..21...64D,2014AN....335..301K}). In all cases, the X-ray spectra appear to be consistent with featureless power-laws (PLs) with photon indices $\\Gamma\\simeq1.3-2$. \n Joint fits to the soft X-ray and hard X-ray spectra provide no evidence of a spectral cut-off in the {\\sl NuSTAR} band. No periodicity associated with the compact object spin has been found in any of the X-ray data (including two systems with known radio pulsars -- LS 2883 with PSR B1259--63 and MT91 213 with PSR J2032+4127). Using {\\sl NuSTAR} data, \n \\cite{2015ApJ...806..166A} \n and \\cite{2020ApJ...888..115A}\n found no short-term variability, quasi-periodic oscillations, red noise, or any other temporal or spectral evidence of accretion in 1FGL J1018.6--5856 and HESS J0632+057, respectively.\n\n\n\n\n\n\n\n\n\n\n\n\n LS\\,5039, discovered by \\cite{1998A&A...338L..71M}, is\n a binary at a distance \n $d=2.9\\pm 0.8$ kpc \n composed of a massive ($M_\\ast=23$~M$_\\odot$)\n O6.5V((f)) type star ($V=11.2$ mag) and a compact object \nwith a poorly constrained mass, $\\sim (1$--$4) M_\\odot$. \nThe compact object orbits the star with a period $P_{\\rm orb}\\simeq \n3.9$\ndays, which is the shortest orbital period among all HMGBs. 
The binary\norbit inclination angle is $i\\sim 30^\\circ$ \\citep{2005MNRAS.364..899C,Sarty11}. Radio observations have shown a persistent (over many binary periods) AU-scale\nasymmetric extension around LS~5039 whose morphology varies with orbital phase \\citep{2012A&A...548A.103M}.\n Initially the extension was interpreted as jets from an accreting compact object, \nwhich led to a ``microquasar'' classification \n \\citep{2000Sci...288.2340P}.\nHowever, more recently \n\\cite{2012A&A...548A.103M} attributed the varying extended radio morphology to a pulsar wind nebula whose shape varies due to the interaction with the wind of the massive star. Overall, the debate over\nwhether the compact object is an accreting BH or a pulsar interacting\nwith its surroundings and producing an extended nebula\nresembling ``jets'' is still ongoing (see e.g., \\citealt{2015CRPhy..16..661D}). \n\nObviously, the most direct evidence \n of LS 5039 containing a pulsar would be the \n detection of pulsations. \n However, among all HMGBs radio\npulsations have only been detected in LS\\,2883 and MT91\\,213 \\citep{2009ApJ...705....1C,2014MNRAS.437.3255S}. \nEven if LS\\,5039 hosts a pulsar, the\nnon-detection of radio pulsations \nis not surprising \nbecause of the tight orbit and correspondingly large\noptical depth to free-free absorption \\citep{2006A&A...456..801D}. Therefore, searching for pulsations in X-rays may be a more promising approach for LS\\,5039 than for the other HMGBs. \n\nPulsations have been searched for in previous X-ray observations of LS 5039. For instance, \\cite{2011MNRAS.416.1514R} found no periodicity in the {\\sl CXO} observation of LS\\,5039. \nHowever, \\cite{Yoneda20}, \n hereafter Y+20, have recently reported the detection of \na period $P\\approx 8.96$ s in the {\\sl Suzaku} HXD data \nfrom 2007, and a potential counterpart at $P\\approx9.05$ s in the \\nustar\\ data from 2016 (the same data we analyze below), at photon energies above 10 keV. 
\nThe difference in the periods implies a fast spin-down,\nsuggesting that the compact object in LS\\,5039 could be a magnetar (i.e., a NS with a very high magnetic field; \\citealt{2017ARA&A..55..261K}). The statistical significance of this result is, however, questionable, as discussed below.\n\n\nThe shape of the hard X-ray spectrum can also provide critical\ninformation about the nature of the compact object. For instance, some\nNSs in accreting HMXBs show cyclotron resonant scattering features in\nthe range of about 10--100 keV \nand \nmany of them have exponential cutoffs around a few tens of keV\n(e.g., \\citealt{2002ApJ...580..394C}). Such spectral characteristics are very\nefficiently detected with {\\sl NuSTAR} (see e.g., \\citealt{2014ApJ...784L..40F,2016MNRAS.457..258T}). \nLS\\,5039's\nspectrum was studied in the 0.01--10 MeV range with {\\sl INTEGRAL},\n{\\sl RXTE}, {\\sl Suzaku}, and {\\sl CGRO}. \nMost of these measurements suggest a relatively soft\nhigh-energy spectrum ($\\Gamma\\approx 2$), while the {\\sl Suzaku} HXD spectrum is more\nconsistent with a simple extrapolation of the \n$\\Gamma=1.4$--$1.6$ PL spectrum\nmeasured below 10 keV with \n{\\sl CXO}, {\\sl XMM-Newton}, and {\\sl\n Suzaku} XIS\n \\citep{2009ApJ...697..592T}.\n\n\n\n\nOf all known HMGBs, LS\\,5039 has the shortest orbital period.\nThis prompted us to carry\nout {\\sl NuSTAR} observations of LS\\,5039 over the entire orbital period to obtain a\ncomplete spectral and temporal portrait of this HMGB. Additionally, we used archival {\\sl Suzaku} XIS observations \nthat also cover the full binary period and extend spectral coverage to lower energies.\nIn Section \\ref{obs} we\ndescribe the {\\sl NuSTAR} and {\\sl Suzaku} observations and data reduction procedures. 
In\nSection \\ref{timing_analysis} we describe the binary orbit corrections to the\narrival times caused by the compact object's motion around its massive\ncompanion and present the results of the periodicity\n and variability search.\n In Section \\ref{specana} \nwe present the spectrum of LS 5039 and the results of spectral fitting\nas a function of the orbital phase. We discuss our findings and conclude with a brief summary in Section \\ref{summ}.\n\n\\section{Observations and Data Reduction}\n\\label{obs}\n\n \n\nThe {\\it Nuclear Spectroscopic Telescope Array} (\\nustar, \\citealt{\n harrison13ApJ:NuSTAR}) consists of two \n similar modules, FPMA and\nFPMB, operating in the energy range 3--79~keV. The \\nustar\\ \n observation of LS 5039 (ObsID 30201034002) started on 2016 September 1 \n (MJD 57632.0972)\nand lasted for $T_{\\rm obs} \\approx 345\\,{\\rm ks}\\,\\,\n\\approx 1.024 P_{\\rm orb}$ ($\\approx60$ consecutive {\\sl NuSTAR} orbits).\n We processed the data using the\n\\nustar\\ Data Analysis Software, \\texttt{nustardas} ver.\\ 1.8.0. The photon arrival times were corrected to the solar system barycenter using the \\texttt{barycorr} tool\\footnote{See \\url{https:\/\/heasarc.gsfc.nasa.gov\/ftools\/caldb\/help\/barycorr.html}.} and the latest clock correction file\\footnote{nuCclock20100101v110.fits.gz \\url{http:\/\/nustarsoc.caltech.edu\/NuSTAR\\_Public\/NuSTAROperationSite\/clockfile.php}}. \nThe timing accuracy of {\\sl NuSTAR} is expected to be 65 $\\mu$s, on average \\citep{2020arXiv200910347B}. \nWe\nreduced the data using the \\texttt{nuproducts} task \nand HEASOFT ver.\\ 6.22.1. \n\nFor spectral analysis and binary lightcurves we used the flags \\texttt{-saacalc=2\n --saamode=optimized --tentacle=yes} to correct for enhanced\nbackground activity visible at the edges of the good time intervals (GTIs)\nimmediately before entering the SAA. 
This resulted in a total GTI of about 166~ks.\nWe extracted source events \nfrom the 60\\arcsec radius circle\n around the\nsource position,\n which\nmaximized the S\/N ratio. Background events are extracted from an \nannulus around the source position with the inner and outer radii of\n120\\arcsec\\ and 200\\arcsec.\n\n\n\n\n \n \nTo extend \nthe spectral analysis to \n lower \n energies, \n we used \n archival {\\sl Suzaku} data.\n {\\sl Suzaku} observed LS\\,5039 between 2007 September 9 and 2007 September 15, with a total scientific exposure time of $\\approx 203$ ks (obsID 402015010). The observation, originally reported \n by \\cite{2009ApJ...697..592T}, provided coverage of about 1.5 orbits of the LS\\,5039 binary. In the soft X-ray energy band \n (0.3--12 keV), {\\sl Suzaku} had four X-ray telescopes \\citep{2007PASJ...59S...9S} each with \n its own focal plane CCD camera (X-ray Imaging Spectrometer; XIS; \\citealt{2007PASJ...59S..23K}) having an $18'\\times18'$ field-of-view. The XIS2 camera was turned off in November 2006 due to an anomaly and is not used in our analysis. The XIS0 and XIS3 detectors use front-illuminated CCDs, while the XIS1 has a back-illuminated CCD. \n \n We used HEASOFT ver.\\ 6.25 \n for {\\sl Suzaku} data reduction. The data were reprocessed using the {\\tt aepipeline} script and were reduced following the standard procedures\\footnote{See \\url{https:\/\/heasarc.gsfc.nasa.gov\/docs\/suzaku\/analysis\/abc\/}.}. The source spectra and light curves were extracted from a $3'$ radius circle centered on the source position, while the background spectra and light curves were extracted from a $3'$ circle placed $7\\farcm5$ south of the source in \n each of the three XIS images. The response matrix and ancillary response file \n were made using the {\\tt xisrmfgen} and {\\tt xissimarfgen} tools, respectively. 
\n Since there are known calibration issues near the Si edge in the XIS detectors near 2 keV (see e.g., \\citealt{2011PASJ...63S.991S,2013ApJ...772...83L}),\n we exclude the 1.7--2.3 keV energy range from all of our spectral fits.\n Prior to producing the light curves, the event arrival times were corrected to the solar system barycenter using the {\\tt barycorr} tool. Since the {\\sl Suzaku} XIS time resolution was only 8 s in this observation, we did not use the XIS data for the periodicity search. \n We \n also used the (barycentered) {\\sl Suzaku} HXD data \n to investigate the candidate periodic signal reported by Y+20, but we did not use them for spectral analysis\n (because of the strong background contamination in this non-imaging instrument). All errors quoted throughout the paper are reported \nat the $1\\sigma$ level, unless otherwise noted.\n\n\n\n\n\n\\section{Timing Analysis}\n\\label{timing_analysis}\n\nA fully coherent periodicity search for an observation with a length comparable to the LS~5039 binary period is a ``needle in a haystack'' type problem. The large uncertainties in the orbital ephemeris (see Table \\ref{tab:orb}) require a \nprohibitively large \ngrid in the multidimensional parameter space to guarantee that the \nperiodic signal \nis not missed (see the discussion in \\citealt{2012MNRAS.427.2251C}). An alternative approach \nis to segment the observation into multiple time intervals during which the radial velocity of the compact object is approximately constant, and \nsearch for periodicity within each segment by analyzing the distribution of Fourier power in the time-frequency domain (a dynamic power spectrum search; e.g., \\citealt{2012hpa..book.....L}). \n\nSuch an approach \n was employed by Y+20 (who found a period candidate $P=9.05$ s in the \\nustar\\ data). \nThese authors only searched for \n a signal with \n period $P>1$~s. 
They \n justify this restriction by the relatively small number of photons\n in the 10--30 keV band chosen for the periodicity search in the \\nustar\\ data. Furthermore, they \n justify this energy band selection \n by the fact that the 8.96 s period candidate was seen in {\\sl Suzaku} HXD, which \n is only sensitive above 10 keV.\n However, \n we see no reason why the signal should not also be present below 10 keV \n because there is no change in the source spectrum \n (see Section \\ref{specana}). \n Since we \n cannot exclude the possibility\n that the true period is \n different from that claimed by Y+20, we \n perform a period search in a broader range of frequencies and a \n different \n energy range.\n \nTo improve the sensitivity by decreasing the spread of the potential signal in the frequency domain, we (1) use an optimal division into time segments \nthat depends on the frequency intervals we are searching in, (2) introduce a statistic that provides a higher sensitivity to a signal than the simplistic incoherent summing of Fourier powers from non-overlapping time intervals within the observation, and (3) apply the R\\\"omer delay correction (e.g., \\citealt{1976ApJ...205..580B}) to the photon arrival times using the best known ephemeris. Then we perform the dynamic pulsation search. \nThis approach allows us to search up to much higher frequencies \n(e.g., 1\\,000 Hz) \nthan the 1 Hz limit used in Y+20. \n\n\n \nSince the nature of the compact object is unknown, \nwe adopt the approach \ndescribed in Section \\ref{dynamicfourier}, which allows us to search for both \nperiodic signals and \nquasi-periodic oscillations (QPOs). We also perform a burst-like variability search (on scales from 1 s to 200 s)\nand orbital variability characterization on larger timescales (Section \\ref{bin_lc}). 
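For reference, the baseline approach that the refined statistic of item (2) improves upon -- incoherent stacking of Fourier powers over non-overlapping segments -- can be sketched as follows. This is our own illustrative code with a synthetic light curve; the segment length, Leahy normalization, and injected 0.1 Hz signal are illustrative choices, not the parameters of the actual search.

```python
import numpy as np

def stacked_power_spectrum(counts, dt, seg_len):
    """Average Leahy-normalized power spectra over consecutive segments.

    counts : binned light curve (counts per bin)
    dt     : bin width [s]
    seg_len: number of bins per segment
    """
    n_seg = len(counts) // seg_len
    powers = []
    for k in range(n_seg):
        seg = counts[k * seg_len:(k + 1) * seg_len]
        ft = np.fft.rfft(seg)
        # Leahy normalization: mean power = 2 for pure Poisson noise
        powers.append(2.0 * np.abs(ft) ** 2 / seg.sum())
    freqs = np.fft.rfftfreq(seg_len, dt)
    return freqs, np.mean(powers, axis=0)

# Synthetic example: 0.1 Hz pulsations on top of Poisson noise
rng = np.random.default_rng(1)
dt, nbins = 0.5, 40960                    # 0.5 s bins, ~20.5 ks of data
t = np.arange(nbins) * dt
rate = 5.0 * (1.0 + 0.3 * np.sin(2 * np.pi * 0.1 * t))  # counts per bin
lc = rng.poisson(rate)
freqs, power = stacked_power_spectrum(lc, dt, seg_len=4096)
f_peak = freqs[1:][np.argmax(power[1:])]  # skip the DC bin
```

Stacking trades frequency resolution (set by the segment length) for robustness against the frequency drift induced by the orbital Doppler shift, which is why the segment length must be matched to the frequency interval being searched.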
\n \n\\begin{deluxetable*}{ccccccc}[t!]\n\\tablecaption{Orbital parameters for LS\\,5039 inferred from different observations\n\\label{tab:orb}}\n\\tablewidth{0pt}\n\\tablehead{\n& \\colhead{Aragona09} & \\colhead{Casares11} & \\colhead{Sarty11} &\n\\colhead{Yoneda20} & \\colhead{Yoneda20} &\n\\colhead{this work} \\\\\n & & & &\n\\colhead{\\sl NuSTAR} & \\colhead{Suzaku} &\n}\n\\startdata \n$P_{\\rm orb}$ (d) & $3.90608\\pm0.00010$ & $3.90608\\pm0.00008$ & 3.906 & 3.90608 & 3.90608 & $3.9057\\pm0.28$ \\\\\n$T_0$ (MJD) &\t$52825.48\\pm0.05$\t& $52477.58\\pm0.06$ & $55016.58\\pm0.06$ \n& $57629.250_{-0.030}^{+0.023}$ & $54352.455_{-0.035}^{+0.05}$ & \n$57633.71\\pm0.06$ \\\\ \n$e$ & $0.337\\pm0.036$ & $0.35\\pm0.03$ & $0.24\\pm0.08$ \t\n& $0.306_{-0.013}^{+0.015}$ & $0.278_{-0.023}^{+0.014}$ & $0.289\\pm0.09$ \\\\\n$W_\\ast$ (deg) & $236\\pm5.8$ & $212\\pm5$ & $237.3\\pm21.8$ \n& $236.8_{-3.1}^{+2.3}$ & $234.6_{-3.3}^{+5.1}$ \t&\t\t\t\\\\ \n$a_\\ast\\sin i$ (lt-s) & $3.33\\pm0.15$ & $4.06\\pm0.16$ & $4.11\\pm0.35$\t&&&\\\\\t\n$W_p$ (deg) \t& \t$56\\pm5.8$ & $32\\pm5$\t & $57.3\\pm21.8$\t \n& $56.8_{-3.1}^{+2.3}$ & $54.6_{-3.3}^{+5.1}$ & $44.6\\pm4.1$ \\\\\n$a_p\\sin i$ (lt-s) & \t$48\\pm14$ & $52_{-19}^{+9}$\t & $52_{-19}^{+10}$\t\n& $48.1\\pm0.4$ & $53.05_{-0.55}^{+0.7}$ & $48.1\\pm2.7$ \t\t\n\\enddata\n\\tablecomments{Orbital parameters from \\cite{2009ApJ...698..514A}, \\cite{Casares11}, \\cite{Sarty11}, \n Y+20 and this work. Subscripts $\\ast$ and $p$ correspond to the massive star and the compact object (putative pulsar), respectively. For the \\cite{2009ApJ...698..514A}, \\cite{Casares11}, and \\cite{Sarty11} orbital solutions, the projected semi-major axis \n$a_p\\sin i$ \nwas calculated assuming \n $m_\\ast = (23\\pm 3) M_\\odot$\nwhile\nthe compact object's mass\nwas assumed to be $m_p =(1.6\\pm0.4) M_\\odot$ for \\cite{2009ApJ...698..514A} and $m_p=1.8_{-0.6}^{+0.2}M_\\odot$ \nfor \\cite{Casares11} and \\cite{Sarty11}. 
We use the \\cite{Casares11} orbital parameters \nfor the R\\\"omer delay correction. The Y+20 \nand `this work' parameters were not measured from observations of the massive star but \nobtained from fits maximizing the significance of the putative period near 9.05 s. \nThe large uncertainties of the `this work' parameters take into account the multitude of solutions of about the same significance in the vicinity of the best-fit solution (see Section \\ref{p9s_cand}).\n}\n\n\\end{deluxetable*}\n\n\n\n\n\\subsection{Correction for the R\\\"omer delay caused by the orbital motion}\n\nAs the putative pulsar\nis orbiting a massive star, the distance between the pulsar and the observer changes with orbital phase, which translates into changing times of photon travel \nto the observer. \nThis effect can be equivalently described in the observer's frame as a \nDoppler shift of the pulsation frequency \nvarying with the binary phase because of\nthe changing radial velocity of the\npulsar.\nDifferent authors have inferred slightly different sets of orbital parameters from optical observations of the massive companion (see examples in Table \\ref{tab:orb}).\nTo correct the event arrival times \nfor this effect (R\\\"omer delay), we adopted the\n`eccentric fit + 1d oscillation' orbital solution from\n \\cite{Casares11},\nwhich included \nmodulation of the radial velocity \nwith a 1 day period, possibly caused by non-radial oscillations of the massive companion in its eccentric orbit.\nThe orbital dependencies of the R\\\"omer delay and Doppler shift \nof the other published binary solutions are within $\\approx \\pm 1\\sigma$ uncertainties of the \\cite{Casares11} curves for these quantities (see Figure \\ref{fig:deltat} \nand Figure \\ref{fig:vel_acc} in the Appendix). 
\n\nThe projection of the orbit's semi-major axis \nonto the line of sight, $a\\sin i$, and the longitude of periastron, $W$, in \n\\cite{Casares11} (and the other papers quoted in Table \\ref{tab:orb})\npertain to the massive stellar component of the binary.\n They are connected with the corresponding compact object \n parameters as follows, \n\\begin{equation}\n\ta_p\\sin i=\n\t\\frac{m_\\ast}{m_p}\\, a_\\ast\\sin i\\,,\n\t\\quad\\quad\n\tW_p=W_\\ast-180^\\circ,\n\\label{eq1}\n\\end{equation}\nwhere the subscripts $\\ast$\nand $p$ correspond to the massive\nstar and the compact object (putative pulsar), respectively, $m$ is mass, and $i$ is the orbital inclination (the angle between the orbital plane and the plane of the sky). \nFor the pulsar (NS) mass we use $m_p=1.8_{-0.6}^{+0.2}M_\\odot$, from the assumption that $m_p$ should be in the range of 1.2--2.0 solar masses, which gives $a_p\\sin i=52_{-19}^{+9}$\\,lt-s = $(1.55_{-0.6}^{+0.27})\\times 10^{12}$\\,cm.\n\n\n\n\nThe corrections to the photon arrival times due to orbital motion (R\\\"omer delay)\nare calculated as follows,\n\\begin{eqnarray}\n\tt_\\text{corr}=t-\\frac{a_p\\sin i}{c}\\left[\\sin W_p(\\cos E-e)+\\right.\\nonumber\\\\\\left.\n\t\\sqrt{1-e^2}\\cos W_p\\sin E\\right]\\,.\n\\label{eq:romercorr}\n\\end{eqnarray}\nHere $e$ is the eccentricity of the orbit, and $E$ is the eccentric anomaly,\n\\begin{equation}\n\tE-e\\sin E=\\Omega_{\\rm orb}(t-T_0)\\,,\n\t\\label{exc_anomaly}\n\\end{equation}\nwhere $\\Omega_{\\rm orb}=2\\pi\/P_\\text{orb}$, \n$P_\\text{orb}$ is the orbital period, \nand $T_0$ is the epoch of periastron \\citep{1976ApJ...205..580B}. 
The right-hand side of Equation (\ref{exc_anomaly}) is commonly called the mean anomaly.\n Depending on the orbital phase, the correction ranges from \n$-$40 s to $+$60 s (see Figure \ref{fig:deltat}).\n\n\begin{figure}[t]\n\centering\n \includegraphics[width=0.5\textwidth]{delta_t.png}\n \caption{Arrival time corrections due to R\"omer delay during our {\sl NuSTAR} observation for 6 \n sets of orbital parameters (see Table \ref{tab:orb}). Shown is $\Delta t = t_\text{corr}-t$ vs.\ $t$, given by Equation (\ref{eq:romercorr}). The shaded area shows the uncertainty of $\Delta t$ due to the uncertainties of the orbital parameters for the binary ephemeris from \cite{Casares11} (the model with 1\,d oscillations).\n }\n \label{fig:deltat}\n\end{figure}\n \n\n\n\n\subsection{Periodicity search in the \nustar\ data}\n\label{sec:perser}\n\nWe searched for periodic or quasi-periodic signals up to $f_{\rm max}=1000$ Hz\nby analyzing the arrival times of $N=56\,647$ events, \nregistered during $\approx190$ ks (when the target was not occulted by the Earth),\nin the photon energy range of 3--20 keV.\nWe excluded higher energies to reduce the background contamination, as \nustar's sensitivity decreases at higher energies. \nWe corrected the event\narrival times \nfor the R\"omer delay using the best-fit binary parameters \nfrom \citet{Casares11}\n(Table \ref{tab:orb}) prior to the search. 
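Schematically, the correction of Equations (\ref{eq:romercorr}) and (\ref{exc_anomaly}) amounts to solving Kepler's equation for the eccentric anomaly and evaluating the delay. A minimal sketch follows; the parameter values used in the example below are illustrative placeholders, not the adopted \cite{Casares11} ephemeris.

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric
    anomaly E by Newton iteration (M = mean anomaly, in radians)."""
    E = M if e < 0.8 else math.pi
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def roemer_delay(t, P_orb, T0, e, ap_sini_lts, Wp_deg):
    """Delay (s) to subtract from the arrival time t, so that
    t_corr = t - roemer_delay(...); ap_sini_lts is a_p*sin(i) in
    light-seconds, so the 1/c factor is already absorbed."""
    M = math.fmod(2.0 * math.pi * (t - T0) / P_orb, 2.0 * math.pi)
    E = solve_kepler(M, e)
    W = math.radians(Wp_deg)
    return ap_sini_lts * (math.sin(W) * (math.cos(E) - e)
                          + math.sqrt(1.0 - e * e) * math.cos(W) * math.sin(E))
```

With $a_p\sin i\approx 52$ lt-s, the delay amplitude over one orbit is a few tens of seconds, consistent with the $-40$ to $+60$ s range quoted above.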
\n\n\subsubsection{\nFourier power spectrum for the entire \nustar\ observation \nassuming the binary parameters are known with high precision\n}\n\label{fourier_entire_obs}\nUsing the corrected arrival times from the entire \nustar\ observation, we \ncalculated the Fourier power \nspectrum \n(see the top\npanel of \nFigure \ref{fig:fur} and Appendix \ref{appendixA}).\nThe top\npanel of Figure \ref{fig:fur} shows the values of the Fourier power ${\cal P}_n$ as a function of frequency $f=n\,\Delta f$, where $\Delta f \simeq 3\times 10^{-6}$ Hz is the \nfrequency bin width \n(only 24,448 power values with ${\cal P}_n>20$ are shown).\nThe power is normalized in such a way that the mean $\overline{{\cal P}_n} = 2$ for Poisson-distributed noise.\nThe \nhigh values of ${\cal P}_n$ \nat low frequencies, $f\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 0.003$ Hz, are associated with the periodic motion of the \nustar\ satellite around the Earth (the frequency of \nustar\ revolution, $1.72\times 10^{-4}$ Hz, and its first \n15 harmonics are shown by \nblue vertical lines). \nAt higher frequencies \nnone of \nthe ${\cal P}_n$ values is outstanding, and their significances do not exceed $4\sigma$.\nThus, we conclude that\nno significant periodic signal is detected. \n\nTo look for quasi-periodic signals, we increased the width of frequency bins by factors of 10, 100, and 1000 (compared to the natural width\n$T_{\rm obs}^{-1}\approx 2.89$ $\mu$Hz)\nbut found no outstanding peaks in the binned \nFourier power spectra. 
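The event-based power spectrum used here can be sketched as follows (a simplified illustration assuming plain NumPy; the normalization is chosen so that $\overline{{\cal P}_n}=2$ for Poisson noise, as above):

```python
import numpy as np

def event_power_spectrum(t, freqs):
    """Fourier (Rayleigh) power of photon arrival times t (s) at trial
    frequencies freqs (Hz): P = (2/N) |sum_j exp(2*pi*i*f*t_j)|^2,
    normalized so that the mean power is 2 for pure Poisson noise."""
    t = np.asarray(t, dtype=float)
    phase = 2.0 * np.pi * np.outer(freqs, t)   # shape (n_freq, n_event)
    a = np.cos(phase).sum(axis=1)
    b = np.sin(phase).sum(axis=1)
    return 2.0 * (a * a + b * b) / t.size
```

For a strictly periodic signal all $N$ events contribute coherently and the power at the true frequency reaches $2N$, while noise powers follow a $\chi^2$ distribution with 2 degrees of freedom.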
\n\n\begin{figure*}[]\n\includegraphics[width=1\textwidth]{Pn_vs_f.pdf}\n\includegraphics[width=1\textwidth]{z2_alpha_t_f.pdf}\n \caption{\n The top panel shows\n the Fourier power spectrum ${\cal P}_n$, \n calculated from the entire \nustar\ observation\n (see \n Appendix \ref{appendixA} and Section \ref{fourier_entire_obs}).\n Prior to calculating the Fourier power spectrum, the arrival times of photons (with energies restricted to the 3--20 keV band) were corrected for the R\"omer delay \nusing the best-fit orbital parameters from \cite{Casares11} (see Table \ref{tab:orb}). The vertical blue lines correspond to the \nustar\ orbital frequency and its harmonics. \n The bottom panels show the \n dynamic (time-resolved) spectra of Fourier power\n and significance,\n calculated individually for each of the 60 \nustar\ orbits, after applying the same R\"omer delay correction (see Section \ref{fourier_spec_sep_orbs} and Appendix \ref{appendixA}).\n To improve the visualization quality,\n we do not show \n power (and significance) values in each frequency bin\n but \n instead divide the entire log-frequency range into 200 segments of equal size \n and plot (using color) the maximum Fourier power, ${\cal P}_k^{\rm max}$ (left), within the $k$-th segment and its significance, $\alpha_k$ \n (right).\n The vertical black stripes are the gaps due to the occultation of LS 5039 by the Earth. \n }\n\label{fig:fur}\n\end{figure*}\n\n\n\n\n\subsubsection{\nFourier spectra in separate \nustar\ orbits }\n\label{fourier_spec_sep_orbs}\n\nThe apparent lack of pulsations in the Fourier power spectrum of the entire observation could be due to \na large difference between the actual binary parameters and \nthe parameters used for the R\"omer delay correction.\nBecause of this difference, the binary-phase-dependent frequency shift may not be fully compensated. 
As a result, the signal coherence would be lost, the power peak corresponding to the (unknown) pulsation frequency would be spread over many frequency bins, and the peak's height would be strongly reduced. \nTo mitigate the coherence loss, \nwe \nsearched for pulsations in much shorter time segments, corresponding to the intervals of visibility of LS\,5039 in the 60 consecutive \nustar\ orbits covered by our observation.\nDuring the relatively short intervals, 3.2 ks on average, \nthe difference between the actual and best-fit radial velocities of the compact companion \ndoes not change as much as over the entire orbit, and\nthere is a higher chance that the signal coherence is preserved. \n\nSimilar to the search in the entire \nustar\ observation, we calculated 60 Fourier power spectra \nand \n corresponding signal detection significances.\nThe results of this dynamical timing are shown in the bottom\npanels of Figure \ref{fig:fur} as time-frequency images in which the power and detection significance values are shown by the \nbrightness of the image ``pixels''.\nThe vertical light and dark stripes correspond to intervals of visibility and occultation of LS\,5039, respectively.\n\nIf pulsations were detected, they would be seen as a sequence of brighter (lighter) pixels along the time axis at frequencies around the pulsation frequency. If the actual binary parameters coincided with the \nassumed ones, this sequence would be seen as a horizontal stripe parallel to the time axis. Deflections of the brighter pixels from a horizontal stripe would provide the difference between the \n actual values of the radial velocity and the ones used in the R\"omer delay correction, in separate \nustar\ orbits. \n\nWe see from the bottom-right panel of Figure \ref{fig:fur} that the search in separate \nustar\ orbits also did not yield a detection of periodicity. 
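In outline, this per-orbit (dynamic) analysis can be sketched as below, with one power spectrum per good-time interval and the log-frequency compression used for Figure \ref{fig:fur} (illustrative code; the frequency limits and interval lists are placeholders, not the actual pipeline settings):

```python
import numpy as np

def dynamic_power(times, gtis, n_seg=200, f_lo=0.01, f_hi=50.0):
    """One Leahy-normalized power spectrum per good-time interval
    (e.g., per NuSTAR orbit), compressed for display by keeping the
    maximum power in each of n_seg log-spaced frequency bands."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_seg + 1)
    image = []
    for t0, t1 in gtis:
        t = times[(times >= t0) & (times < t1)]
        T = t1 - t0
        f = np.arange(1, int(f_hi * T)) / T        # natural grid, width 1/T
        ph = 2.0 * np.pi * np.outer(f, t)
        P = 2.0 * (np.cos(ph).sum(1) ** 2 +
                   np.sin(ph).sum(1) ** 2) / max(t.size, 1)
        seg = np.searchsorted(edges, f) - 1        # log-frequency band index
        image.append([P[seg == k].max() if np.any(seg == k) else 0.0
                      for k in range(n_seg)])
    return np.array(image)                          # shape (n_orbits, n_seg)
```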
\nAlthough some of the $\\alpha_k$ values may appear marginally significant, one has to keep in mind that $\\alpha_k$ is defined for a single \\nustar\\ orbit (see the Appendix \\ref{appendixA}), and the true significance is therefore lower when all orbits are included (due to the larger number of statistical trials). Moreover, we do not see any extended \n(along the horizontal direction) connected clusters of adjacent pixels. \n\nThe lack of detection could be due to the lower sensitivity of this search. The first reason for the sensitivity loss is the smaller number of counts in separate orbits \nthan in the entire observation (on average, 994 versus 56,647 counts). Since, for a periodic signal with a given pulsed fraction, the power ${\\cal P}_n$ is proportional to the number of counts, \nweak pulsations would not be detected.\n\nThe second reason for the sensitivity loss is the spread of signal frequency over several frequency bins caused by the Doppler shift.\nIn the $i$-th \\nustar\\ orbit, the \nspread associated with the Doppler shift\nunaccounted for by the R\\\"omer delay correction can be estimated as $(\\delta f)_i \\sim f\\, |\\Delta \\dot{v}_{\\parallel,i}| T_i\/c$, where $T_i$ is the visibility interval, and $\\Delta \\dot{v}_{\\parallel,i}$ is the difference between the \nassumed and actual radial accelerations of the binary motion in the middle of the $i$-th orbit (see Appendix \\ref{appendixB}). 
This spread becomes greater than the natural width $T_i^{-1}$ of the frequency bin for $f > \tilde{f}_i \sim c\, (|\Delta \dot{v}_{\parallel,i}| T_i^2)^{-1} = \n3.9\, [|\Delta \dot{v}_{\parallel,i}|\/(3\,{\rm m\,s}^{-2})]^{-1} [T_i\/(3.15\,\n{\rm ks})]^{-2}$ Hz.\nAt a frequency $f$ substantially higher than \n$\tilde{f}_i$ the peak in the signal power is spread over $f\/\tilde{f}_i$ bins, and the peak height is reduced by about the same factor.\n\n\subsubsection{Search for pulsations in a dynamic Fourier spectrum with frequency-dependent time windows}\n\label{dynamicfourier}\nSince the R\"omer delay correction is \n imperfect\ndue to the binary ephemeris uncertainties,\na periodic signal \n with a certain frequency $f_0$ in the reference frame of the pulsar\n can be spread \n by the Doppler effect\n over \n a number of neighboring frequency bins in the observer's reference frame\n(see Appendix \n\ref{appendixB} for details).\nThis spread can be mitigated by \nsplitting the observation duration $T_{\rm obs}$ into $N_w$ shorter time windows of a length $T_w = T_{\rm obs}\/N_w$ (hence wider frequency bins $T_w^{-1}$), but then the signal \ncan be shifted to different frequencies $f$ in different time windows.\n In order to maximize the sensitivity to such a signal, one should \nselect optimal lengths of the time windows \n and use an efficient algorithm for detecting the signal \n in the time-frequency domain.\n\nThe optimal lengths \nof the time \nwindows are determined by the\nrequirement that the maximum possible drift in frequency does not exceed $T_w^{-1}$. Such a choice of\n$T_w$ is optimal because if one chooses an even smaller $T_w$, then each window contains fewer events (the number of events per window is proportional to $T_w$) and the Fourier power decreases. \n\nAs we show in Appendix \ref{appendixB}, the optimal lengths and numbers of time windows depend on the signal frequency\n($T_w\sim 9.9 f_0^{-1\/2}$ ks, $N_w\sim 35 f_0^{1\/2}$ in our case). 
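The quoted scalings follow from equating the maximum frequency drift over one window, $f_0|\Delta\dot{v}_\parallel|T_w/c$, to the bin width $T_w^{-1}$. A small sketch (the residual acceleration error of $3$ m s$^{-2}$ and the $\approx 346$ ks span are assumed round numbers):

```python
import math

C = 2.998e8   # speed of light (m/s)

def optimal_window(f0, dvdot=3.0, T_obs=3.46e5):
    """Window length T_w (s) for which the frequency drift
    f0*dvdot*T_w/c over one window equals the bin width 1/T_w,
    i.e. T_w = sqrt(c/(f0*dvdot)); also returns the window count."""
    T_w = math.sqrt(C / (f0 * dvdot))
    N_w = max(1, round(T_obs / T_w))
    return T_w, N_w
```

For $f_0=1$ Hz this gives $T_w\approx 10$ ks and $N_w\approx 35$, reproducing the $T_w\sim 9.9\,f_0^{-1/2}$ ks and $N_w\sim 35\,f_0^{1/2}$ scalings above.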
\nFor practical purposes,\nit is convenient to divide the entire 0--1000 Hz frequency range into 7 broad frequency intervals with\ndifferent numbers and lengths of time windows.\nFor each of these time windows we calculate the Fourier power spectrum ${\\cal P}_n$ in the 0--1000 Hz frequency range, with a frequency resolution \nof $T_w^{-1}$.\n\nAs the next step,\nwe split the entire frequency range \ninto \nsegments \n $ f_m(1-\\beta) < f < f_m(1+\\beta)$, \nwhere the coefficient $\\beta$ is proportional to $\\delta v_\\parallel\/c$,\n$\\delta v_\\parallel$ is the maximum residual uncertainty of the pulsar's radial velocity in the appropriate time window, and the\ncentral frequency $f_m$ of the $m$-th segment\n($m=1, 2, \\ldots$) satisfies Equation (B13). \nThe segment width $2\\beta f_m$ must be large \nenough to ensure that the entire signal,\n whose Doppler-shifted frequency $f$ is \n varying with time, \n is contained within this \n segment (see Figure \\ref{fig:mc2} in the Appendix). \n\nEach of these segments (their total number is about 2880, \nfor the chosen $\\beta=0.0022$\nin the 0.01--1000 Hz range)\nis inspected for the presence of signal signatures. \nAny chosen segment\nis within one of the seven broad frequency intervals, described in the Appendix \\ref{appendixB}, which determines \nthe number $N_w$ and length $T_w$ of the time windows\n(hence the choice of the precalculated Fourier power spectra) appropriate for the segment analysis.\n\n\n\n \n\n\nIn the time-frequency plane, an $m$-th frequency segment\nconsists of $N_w$ time windows and $N_f = 2\\beta f_m T_w$ frequency bins, i.e., of $N_w\\times N_f= 2\\beta f_m T_{\\rm obs} = 690 (\\beta\/10^{-3}) f_m$ elements\nfor which Fourier powers \n${\\cal P}_{n,j}$ have been calculated ($n$ and $j$ number the frequency bins and time windows, respectively). 
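A simple geometric tiling reproduces this segmentation: requiring adjacent segments to touch gives $f_{m+1}=f_m(1+\beta)/(1-\beta)$, which for $\beta=0.0022$ yields $\approx 2500$ segments over 0.017--1000 Hz (a sketch; the exact bookkeeping is described in Appendix \ref{appendixB}):

```python
def frequency_segments(f_min=0.017, f_max=1000.0, beta=0.0022):
    """Centers f_m of contiguous segments f_m*(1-beta) < f < f_m*(1+beta):
    successive centers grow geometrically so that adjacent segments touch."""
    ratio = (1.0 + beta) / (1.0 - beta)
    centers = []
    f = f_min / (1.0 - beta)       # first segment starts exactly at f_min
    while f * (1.0 - beta) < f_max:
        centers.append(f)
        f *= ratio
    return centers
```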
\nTo locate \npossible signal signatures in the segment, we pinpoint the largest values ${\cal P}_{j}^{\rm max}$ \nin the sets of $N_f$ powers ${\cal P}_{n,j}$ within each of the time windows $j$. \nIf the ${\cal P}_j^{\rm max}$ values represent a sufficiently strong signal, then the mean of these values over the entire\ntime of observation\n\begin{equation}\n\mu=\frac{1}{N_w}\sum_{j=1}^{N_w} {\cal P}_{j}^{\rm max},\n\label{eq:mu}\n\end{equation}\nshould significantly exceed a similarly defined mean for the noise,\n$\mu_{\rm noise}$.\n\nTo characterize the significance of a possible excess of the measured $\mu$ over $\mu_{\rm noise}$,\n we introduce the following $s$-statistic: \n\begin{equation}\n\ts=(\mu-\mu_\text{noise})\/\sigma_\text{noise}\,,\n\t\label{eq:stat}\n\end{equation}\nwhich \nprovides the signal significance in units of standard deviation. Here, $\sigma_\text{noise}$ is the standard deviation of the noise.\n\nBecause the observed data contain numerous time gaps, and the count rate \nchanges with time,\nwe used Monte-Carlo simulations to infer $\mu_\text{noise}$ and $\sigma_\text{noise}$ for each $N_w$ \n(when simulating noise, we included gaps larger than 60 s).\n\n\nIn the right panel of Figure \ref{fig:stat} \n we plot\nthe $s$-statistic values\n for each\n of the 1261 frequency segments in which $s$ is positive\n(the total number of frequency segments is 2496 \nin\nthe 0.017--1000 Hz range). \n Although there are several peaks slightly exceeding the $3\sigma$ level (i.e., $s>3$),\n no signal candidate passes the $4\sigma$ threshold.\n\n\n\n\n \n\n\n\begin{figure*}[hbt]\n \includegraphics[width=0.48\textwidth]{fig3_1_new.png}\n \includegraphics[width=0.48\textwidth]{fig3_2_new.pdf}\n \caption{\n The left panel shows the $H$-statistic \n calculated for frequencies up to 0.017 Hz. \n Blue vertical lines correspond to the orbital frequency of \nustar\ \n($1.7\times 10^{-4}$ Hz)\nand its harmonics. 
\nThe $H$-statistic peaks are aliases caused by the presence of quasi-periodic gaps (the gap width \nvaries slightly with time) due to the occultation of the target by Earth during the long \nustar\ observation\n and SAA passages. \nThe right panel shows all positive $s$-statistic values (see Equation \ref{eq:stat}) in 1261 frequency segments\nbetween 0.017 Hz and 1000 Hz.\nThe dots on the red vertical lines show the $s$ values for the (artificial) test signals (see the description and Figure \ref{fig:mc2} in Appendix \ref{appendixB}); the numbers near the dots are the signal fractions.\n}\n\label{fig:stat}\n\end{figure*}\n\n\n\n \n \n \n \n The right panel of Figure \ref{fig:stat} also shows the $s$-statistic values computed for six simulated periodic\n sinusoidal test signals with varying levels of \n signal strength (characterized by the \n signal fraction $p$, with $1-p$ being the unpulsed\n fraction)\n that are imperfectly corrected for the R\"omer delay\n (see the time-frequency images in Figure \ref{fig:mc2} in Appendix \ref{appendixB}). \nWe see that the detectability of a signal with a given signal \nfraction strongly depends on the signal's frequency (the higher the frequency, the larger $p$ must be for the signal to be detected). At plausible young pulsar frequencies $\sim 3$--100 Hz, the signal fractions \n should significantly exceed 0.1--0.2 to be detected with this method in the available \nustar\ data. \n \n For the low-frequency part of the Fourier power spectrum \n (Figure \ref{fig:stat}, left panel) we \n use the more conventional $H$-statistic,\n defined as $H=\max\{Z_m^2-4m+4\}$ for $1 \le m \le 20$ \citep{1989A&A...221..180D}, where $Z_m^2$ is the statistic commonly used in periodicity searches in X-ray and $\gamma$-ray astronomy \n (see, e.g., Buccheri et al.\ 1983). 
\nThis is possible because for $f\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 0.01$ Hz the residual drift due to the imperfect R\"omer delay correction \n does not exceed the natural width ($\sim T_w^{-1}$) of the peak in the $H$-statistic spectrum \n during the entire observation (i.e., $N_w=1$; see Appendix \ref{appendixB}). Although the $H$-statistic values at lower frequencies are high, they mostly coincide with integer multiples of the \nustar\ orbital frequency (shown by blue vertical lines), while the others are likely to be aliases due to the visibility gaps that vary somewhat in their duration. No credible signal is detected. \n \n\subsection{$P=9.05$ s candidate} \n\label{p9s_cand}\n\nY+20\nsearched the {\sl Suzaku} HXD data of 2007 September \n for \n periodic \n signals with \n periods $P>1$~s in the 10--30 keV band. \n They found \n a maximum $Z_{4}^2=68.0$ (the other $Z_n^2$ values were not reported)\n at $f_{\rm HXD}=0.1116510(5)$ Hz, or $P_{\rm HXD}=8.95648(4)$ s, with an estimated\n significance of 98.8\%.\n\n \n \nIn a subsequent search in the \nabove-described {\sl NuSTAR} data \n(191 ks net observing time, 12,000 events in the 10--30 keV energy band, and assuming $P>1$~s) \nY+20 obtained a maximum $Z_{4}^2=66.9$ at\n$f_{\rm NuST} = 0.1104507(4)$ Hz or $P_{\rm NuST}=9.05381(3)$ s, with an estimated significance of only 93\%.\n \n To find these period values, Y+20 varied the orbital parameters in the ranges provided in Table 1 of that work, applied the R\"omer delay correction for each parameter set, and chose the set, and the corresponding period, that maximized the $Z_4^2$ value.\nThe uncertainties of the periods given in Y+20 appear to be underestimated because they do not account for the uncertainties in the ephemeris parameters (as we show below).\n Y+20 concluded that two \n incompatible sets of orbital parameters are needed to maximize the strength of the {\sl NuSTAR} and {\sl Suzaku} 
candidate periodic signals.\n Moreover, the ephemeris \n that maximizes the periodic candidate signal in the {\\sl NuSTAR} data\n is incompatible with any previously published ephemeris (within their uncertainties; see Table \\ref{tab:orb} and Figure \\ref{fig:deltat}).\n \n\n\n \\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{dynamic_obs.png}\n \\includegraphics[width=0.5\\textwidth]{dynamic_puls.png}\n \\caption{ \n The top panel shows $Z_1^2$ as a function of \n frequency and \n time near\n $f_0=0.110498$ Hz ($P_0=9.04996$ s), shown by the dashed horizontal line. \n The $Z_1^2$ was calculated using events extracted \n from the $r=38''$ aperture in the 10--18 keV band.\n The red curve shows the \n Doppler shift due to the orbital motion as a function of the observation time \n for the \n orbital parameters that we found (see Table \\ref{tab:orb}).\n The bottom panel shows the same thing as the top panel but the photon arrival times are corrected for the orbital motion with this ephemeris. \n In both cases the color bars show the value of $Z_1^2$ \n per time window.\n }\n \\label{9ssignal_1}\n\\end{figure}\n \n \n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.53\\textwidth]{905sig.pdf}\n \\includegraphics[width=0.5\\textwidth]{lightcurve_LS5039_all.pdf}\n \\includegraphics[width=0.5\\textwidth]{lightcurve_LS5039_270-320ks.pdf}\n \\caption{ The top panel shows $Z_1^2(f)$ \n in the vicinity of $f_0=0.110498$ Hz\n ($P_0=9.04996$ s) calculated \n for the entire observation \n with arrival times corrected for the \n orbital motion with \n the same ephemeris as in Figure \\ref{9ssignal_1}. 
The vertical red lines mark the aliases of the main peak, which are offset from $f_0$\n by $\pm 0.172$ mHz, the frequency of the \nustar\ orbit around the Earth.\n The middle and bottom panels show the pulse profiles \n folded on the \n period $P_0$ for the entire observation and for the 270--320 ks interval \n (counting from the beginning of the observation), where the signal is particularly strong\n (see Figure \ref{9ssignal_1}). \n }\n \label{9ssignal_2}\n\end{figure} \n \n\n \nTo verify the period and the significance \nreported by Y+20, we have performed \n an independent period search \nin the {\sl NuSTAR} data \naround $P=9.05$ s. \nChoosing a set of 5 orbital parameters within the $2\sigma$ uncertainties of the \citet{Casares11} ephemeris, we applied the R\"omer delay correction to the \ntimes of arrival,\ncalculated $Z_{m}^2(f)$ for $m=1,\ldots, 20$ (using \nevents from the entire observation) in the frequency interval of (0.109--0.113) Hz, and used the $H$-test \citep{1989A&A...221..180D} \nto determine the maximum number of significant harmonics.\nVarying the orbital parameters on the 5-dimensional grid, we found the parameter set that maximizes $Z_m^2$ and the corresponding frequency.\nWe repeated these calculations for various energy ranges within 3--40 keV and various source aperture radii up to $r=60''$ to choose an optimal (maximizing $Z_m^2$) energy range and aperture radius. \nAs a result, in the 10--18 keV energy range and for the aperture radius $r=38''$,\nwe found a signal (shown in Figure \ref{9ssignal_2}) with maximum $Z_1^2=60$\nat $f=0.1104977$ Hz ($P=9.049962$ s).\nIn the $Z_m^2(f)$ dependence, the peak at $f=0.1104977$ Hz \nis surrounded by many other peaks with slightly lower heights, including a peak at \n$f=0.1104507$ Hz \nreported by Y+20. These \npeaks, appearing at \ndifferent combinations of orbital parameters, look virtually as significant as the highest one, so that we cannot prefer one peak to another. 
Therefore, the true uncertainty of the putative pulsation frequency (and the fitted orbital parameters) is determined not by the width of a separate peak but by the width of the entire `cluster of peaks', $\sim 3\times 10^{-5}$ Hz in our case, which is about two orders of magnitude larger than the uncertainty claimed by Y+20. \nAccounting for this uncertainty, the frequency and period of the putative pulsations are \n$f=0.11050(3)$ Hz and $P=9.050(2)$ s, for the orbital parameters listed in the column `this work' of Table \ref{tab:orb}. \n\n We note that higher harmonics are not required by the $H$-test.\nHowever, for comparison with Y+20, who report $Z_4^2$, we also calculated it and found $Z_4^2=71$ for our best fit, at $f=0.1104977$ Hz.\nWe also note that\n our optimal \norbital parameters are much closer to those \nfound in the previous papers\nthan the set of parameters \nsuggested by Y+20\n(see Figures \ref{fig:deltat} and \ref{fig:vel_acc}).\n\n\n\n\n \n To explore the distribution of the Fourier power as a function of \n time (or binary phase), we calculated\n a time-resolved Fourier spectrum, \n i.e.\ the $Z_1^2$\n distribution in the time-frequency plane around $f=0.110498$ Hz \n ($P=9.04996$ s).\n To reduce the effect of time gaps, we used \n a sliding time window of size $T_{\rm obs}\/10$, which is moved by 10\% of its size at each step \n (see Figure \ref{9ssignal_1}). \n The plots show that the strongest contribution to the signal comes \n from a time interval close to the end of the \nustar\ observation (between 270 and 330 ks, counted from the start of the observation), near the binary apastron.\n\nWe note that the \n$s$-statistic, introduced in Section \ref{dynamicfourier}, is not sensitive to such a signal because the Fourier power does not remain constant during the observation time span. 
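For reference, the $Z_m^2$ and $H$ statistics used throughout this section have the standard definitions, which can be sketched as follows (cf.\ \citealt{1989A&A...221..180D}):

```python
import numpy as np

def z2(t, f, m=1):
    """Z_m^2 statistic of photon arrival times t (s) at trial frequency f (Hz)."""
    phi = 2.0 * np.pi * f * np.asarray(t, dtype=float)
    n = phi.size
    return sum((2.0 / n) * (np.cos(k * phi).sum() ** 2 +
                            np.sin(k * phi).sum() ** 2)
               for k in range(1, m + 1))

def h_test(t, f, m_max=20):
    """H statistic: H = max_{1 <= m <= m_max} (Z_m^2 - 4m + 4)."""
    return max(z2(t, f, m) - 4 * m + 4 for m in range(1, m_max + 1))
```

For a strictly periodic signal, $Z_1^2$ approaches $2N$ at the true frequency; for noise it is distributed as $\chi^2$ with $2m$ degrees of freedom.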
\nFigure \ref{9ssignal_2} shows the $Z_1^2$ distribution calculated from the entire observation \naround $f=0.110498$ Hz ($P=9.04996$ s) and the corresponding folded pulse profiles from the entire observation and from the part of it with the strongest signal. \n\n\n\n\n In addition, we re-analyzed the {\sl Suzaku} HXD data and confirmed \n the signal candidate reported by Y+20, with maximum $Z_4^2=67.8$ at $P=8.95648$ s.\n We also jointly analyzed the {\sl NuSTAR} and {\sl Suzaku} HXD data, requiring a common binary ephemeris. However, the strongest signal that we were able to find in this case was \n rather insignificant, with $Z_1^2\approx50$ \n ($Z_m^2$ with $m>1$ were even less significant). \n\nIt should also be noted that, among other factors (such as the energy range, aperture, and ephemeris choices), the significance of the $9.05$ s periodic signal candidate depends on the maximum frequency of the frequency range searched. \nThere is no a priori physical reason to limit the search to \n $f<1$ Hz. 
\n Extending the frequency range to much higher frequencies and accounting for \n the huge number of trials associated with varying the ephemeris parameters, \n energy range, and extraction aperture would make the putative $9.05$ s signal candidate insignificant.\n \n\subsection{\nThe binary light curves and search for nonperiodic variability}\n\label{bin_lc}\n\nThe background-subtracted light curves of LS\,5039 \nin three energy bands\nare shown in Figure \ref{ls5039LC} \nas functions of the binary phase $\phi={\rm frac}[(t-T_0)\/P_{\rm orb}]$, where ${\rm frac}[X]$ is the fractional part of $X$, and \n$T_0$ and $P_{\rm orb}$ are the best-fit values of the epoch of periastron and binary period taken from \cite{2009ApJ...698..514A}.\nThe light curves rise from minima at $\phi\approx 0.1$ (near superior conjunction, $\phi_{\rm supc}=0.046$) up to \n$\phi\approx 0.4$, then \nremain nearly flat, with short \nfluctuations, around apastron ($\phi=0.5$), show narrow peaks at $\phi=0.6$ (before inferior conjunction; $\phi_{\rm infc}=0.67$), and secondary peaks at $\phi\approx 0.8$. \nThus, the light curves exhibit a flat-top maximum, encompassing the apastron \nand inferior conjunction phases. If, instead of the ephemeris from\n\citet{2009ApJ...698..514A}, \nwe use the ephemeris from \citet{Casares11} or Sarty et al.\ (2011) \nfor folding and\/or calculating the conjunction phases,\nthe shifts will not exceed 0.1 in phase.\n\nThe overall structure of the light curves does not evolve noticeably with energy, not only within the \nustar\ band but also between the \suzaku\ XIS and \nustar\ bands (see \citealt{2009ApJ...697..592T}). 
It also \ndoes not show appreciable changes between the \suzaku\ \n XIS and \nustar\ observations,\nseparated by 9 years.\n\n\begin{figure}[t]\n\begin{center}\n \includegraphics[angle=0,width=0.5\textwidth]{ls5039_FPMA_lc_3-60keV2500s_phase_backCorr.pdf}\n \includegraphics[angle=0,width=0.5\textwidth]{ls5039_FPMA_lc_3-10keV2500s_phase_backCorr.pdf}\n \includegraphics[angle=0,width=0.5\textwidth]{ls5039_FPMA_lc_10-60keV2500s_phase_backCorr.pdf}\n\caption{\nustar\ background-corrected light curves \nof LS\,5039\ \n as functions of orbital phase. \nThe upper, middle, and lower\n panels are for the energy\nbands 3--60~keV, 3--10~keV, and\n 10--60~keV. The superior and inferior conjunctions are \n shown by\n the solid and dash-dot\n lines, while the apastron and periastron\n are shown by dashed and dotted lines, respectively. }\n\label{ls5039LC}\n\end{center}\n\end{figure}\n\nWe have also searched the data for short aperiodic variability, such as bursts. \nWe \n binned the arrival times\nto produce light curves with bin sizes varying from 1 to 200 s (so that the largest bins are smaller than the smallest gap in the \nustar\ observation). \nWe calculated the Poisson probability, $q$,\nof having a\n number of counts per bin larger than measured.\n Note that the average number of counts per bin (the Poisson distribution parameter) varies with the binary phase.\n This is taken into account by calculating a local mean over a $\approx3000$ s \n time interval (slightly larger than the largest gap in the \nustar\ observation) surrounding the bin for which the probability is calculated (this bin itself is excluded from the mean calculation). 
\n For the chosen bin size, the Poisson probability \n should be corrected for the number of trials, $N_{\rm tr}$, which is \n equal to the number of bins, \n $N_{\rm bin}$, in the entire {\sl NuSTAR} observation:\n $q_{\rm corr} =1-(1-q)^{N_{\rm bin}}$.\n Figure \ref{fig:bursts} shows the two most significant bursts that we found. The burst durations are about 1 s and 70 s. With the above definition of probability, the significances \n are 3.6$\sigma$ for the short burst and 3.2$\sigma$ for the longer burst. However, we note that this probability \n was derived for a fixed number of bins. If we account for all possible binning schemes (not just the one that results in the highest significance), then $N_{\rm tr}>N_{\rm bin}$, i.e., the \n confidence levels become lower.\n Therefore, these two burst candidates are not\n truly\n significant. \n\n\begin{figure*}[hbt]\n \includegraphics[width=\textwidth]{bursts.pdf}\n \caption{Light curves of the two most significant bursts (3.6$\sigma$ and 3.2$\sigma$, respectively). Bin sizes in the left and right light curves are 1 s and 15 s, respectively.\n Time \n (in seconds)\n is counted from the end of the nearest \n preceding occultation of the target by Earth. }\n \label{fig:bursts}\n\end{figure*}\n\n\n\section{Spectral analysis}\n\label{specana}\n\n\n\n\n\n\subsection{Phase-integrated spectroscopy}\n\n\nWe \nanalyzed the {\sl NuSTAR} and {\sl Suzaku}\nspectra using XSPEC ver.\ 12.9.1p\n\citep{arnaud96conf}. To account for interstellar absorption, we used the T\"{u}bingen-Boulder \nmodel with the photo-electric cross-sections of\n\citet{1996ADNDT..64....1V} and the abundances of \citet{2000ApJ...542..914W}.\nWe binned\nthe spectra to have \n$S\/N\simeq 7$ in each spectral bin,\ncorresponding to 50 counts per bin, and used the $\chi^2$ statistic \nfor model parameter estimation and error calculation. 
\nFor all spectral fits, we added \nmultiplicative constants to the FPMA and FPMB normalizations,\nfrozen to\n1 for the former and allowed to vary for the latter, to account for \ncalibration uncertainties between the two \ndetector modules. We found that \nthe difference in the normalization factors does not exceed 2\%.\nWe also \nfound, using the same approach, that the calibration uncertainty \nbetween the \nustar\ \n and {\sl Suzaku}\ninstruments \nis around\n10\%.\n\n\nLS\,5039\ is detected with \nustar\ \nup to 70 keV, with a background-corrected count rate of 0.186(1) cts s$^{-1}$ and 0.171(1) cts s$^{-1}$, in the FPMA and FPMB detectors, respectively, in the 3--70 keV energy range,\n with a background contribution of $<10\%$ in each detector. The number of background-corrected counts in the 60--70~keV energy range is about $45\pm12$ in each\nmodule with a background contribution of $\approx 70$--$80\%$, i.e.,\nthe source becomes \nhardly distinguishable from the background at higher energies.\nAn absorbed PL model gives \na statistically acceptable fit to the phase-integrated \n spectrum in the 3--70 keV band, with $\chi^2=667$ for\n708 degrees of freedom (dof). We find a hydrogen column density\n$N_{\rm H}=(0.7\pm0.4)\times10^{22}$~cm$^{-2}$, \na photon index\n$\Gamma=1.61\pm0.01$,\nand an absorption-corrected \nenergy flux \n$F_{\rm 3-70\,keV} =(2.31\pm0.02)\times10^{-11}$~erg~s$^{-1}$~cm$^{-2}$. There are no\nindications of \nspectral features in the fit residuals,\nincluding around\nthe Fe K$\alpha$ line complex \nat 6--7 keV. Moreover,\nconsidering the \nustar\ spectra alone,\nwe find no evidence of a\nhigh-energy cutoff usually present in the hard X-ray spectra of HMXBs\nharboring \naccreting neutron stars \n(see, e.g., \citealt{2002ApJ...580..394C,2015ApJ...809..140K,2017ApJ...841...35F}). 
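As a simple consistency check on such power-law fits, the division of the band energy flux follows analytically from $F\propto\int E\,N(E)\,{\rm d}E$ with $N(E)\propto E^{-\Gamma}$ (a sketch; absorption is neglected, which is a good approximation above a few keV):

```python
def pl_flux_fraction(gamma, e_lo, e_split, e_hi):
    """For a power-law photon spectrum N(E) ~ E**(-gamma), the fraction
    of the e_lo--e_hi energy flux emitted above e_split (gamma != 2)."""
    g = 2.0 - gamma
    return (e_hi ** g - e_split ** g) / (e_hi ** g - e_lo ** g)
```

For $\Gamma=1.61$, about 75 per cent of the 3--70 keV energy flux is emitted above 10 keV.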
\nThe photon index as inferred from the \nustar\\nspectra is \nlarger \nthan $\Gamma=1.51\pm0.01$\nfound at \nlower X-ray energies with\n\suzaku\ \citep{2009ApJ...697..592T}.\n\n\begin{figure}[th]\n\begin{center}\n\includegraphics[angle=0,width=0.48\textwidth]{phAvgSpecLS5039_PL_suzNuS.pdf}\n\caption{\nFit of the absorbed PL model to the\nphase-integrated \nustar\ and \suzaku\ XIS data (for clarity we only show XIS0), and the corresponding residuals in \nunits of $\sigma$ (lower panel).}\n\label{suzNuSpec}\n\end{center}\n\end{figure}\n\nGiven the \nlong-term stability\nof the \nbinary's periodic X-ray light curve\n\citep{2009ApJ...697L...1K}, we fit the \nustar\ 3--70~keV and the \suzaku\ 0.7--10~keV phase-averaged spectra simultaneously with an absorbed PL model (Figure~\ref{suzNuSpec}). We linked all parameters between the two spectra, except for a constant normalization to take into account calibration uncertainties between the instruments. We find a statistically acceptable fit to the data \n($\chi^2 = 2722$ for 2795 dof), with \n$N_{\rm H}=(1.18\pm0.01)\times10^{22}$~cm$^{-2}$, \n$\Gamma=1.588\pm0.007$ (Table~\ref{specParam}),\nand absorption-corrected \nfluxes\n$F_{\rm 0.5-10~keV}=(9.64\pm0.05)\times10^{-12}$~erg~s$^{-1}$~cm$^{-2}$, and \n$F_{\rm\n 10-70~keV}=(18.0\pm0.2)\times10^{-12}$~erg~s$^{-1}$~cm$^{-2}$.\nThese fluxes correspond to a luminosity $L_{\rm 0.5-70\,keV} \approx 2.8\times 10^{34} (d\/2.9\,{\rm kpc})^2$~erg~s$^{-1}$.\n\n\n\n\n\subsection{Phase-resolved spectroscopy}\n\nWe performed broad-band phase-resolved spectroscopy \nusing the \suzaku\ XIS\nand \nustar\ data\nto look for \nmodulation of spectral parameters with the orbital \nperiod.\n We used the same definition of binary phase \nas for the \nlight curve shown in Figure \ref{ls5039LC}. 
\nWe extract the spectra in orbital phase bins of width $\Delta\phi = 0.1$ \n($\phi=0$ corresponds to binary periastron).\nBecause of lower count statistics in the \nchosen phase bins, we \nbin the spectra to 5 counts per energy bin and use the Cash statistic. \nWe fit all 10 spectra simultaneously with an absorbed PL. Given that phase-resolved spectroscopy with \suzaku\ alone revealed no variability in the hydrogen column density \citep{2009ApJ...697..592T}, we linked $N_{\rm H}$ between all spectra, but left the photon index and the normalization of the PL free to vary. We find a good fit to the spectra with a C-stat of 8950 for 9000 dof.\n\nThe fit results are presented in Table \ref{specParam}.\nFigure~\ref{PhResSpec} shows the flux and photon index \nvariations \nwith orbital phase. We find a strong modulation of the photon index $\Gamma$,\nsimilar to that inferred previously \nfrom {\sl RXTE} and \suzaku\ observations \citep{2005ApJ...628..388B, 2009ApJ...697..592T}. Our \\nustar\\ plus \suzaku\ fits show harder spectra than {\sl RXTE} at all phases.\nThe addition of the \\nustar\\ data \nto the \suzaku\ data has \nprovided tighter constraints \non the photon index, showing that it varies by $\Delta\Gamma\approx0.1$ \nfrom maximum to minimum.\n\n\n\nA similar tendency was noticed previously in the {\sl RXTE} phase-resolved spectra (3--30 keV band), but the values of $\Gamma$ and $\Delta\Gamma$ were substantially larger (see Figure 4 in \citealt{2005ApJ...628..388B}). A similar $\Gamma$-$F$ anti-correlation in the 1--10 keV range ({\sl XMM-Newton}, {\sl Chandra}, {\sl ASCA} and {\sl Suzaku} data) \nis shown \nin Figure 1 of \cite{2009ApJ...697L...1K}. 
\n\n\n\begin{figure*}[t]\n\begin{center}\n \includegraphics[angle=0,width=0.494\textwidth]{ls5039_SuzNuS_specEvol.pdf}\n \includegraphics[angle=0,width=0.445\textwidth]{ls5039_SuzNuS_FvsGam.pdf}\n\caption{{\sl Left:} Phase-resolved spectroscopic results with $\Delta\phi=0.1$.\n The blue points show the 3--70~keV unabsorbed flux variation as a function of\n orbital phase, with periastron at $\phi=0$. The orange squares show the photon index variation with\n phase. {\sl Right}: Anti-correlation of the photon index \n and the flux.} \n\label{PhResSpec}\n\end{center}\n\end{figure*}\n\n\n\begin{deluxetable}{lccc}[t]\n\tablecaption{Spectral parameters of LS\,5039 from joint PL fits to the \\nustar\\ and {\sl Suzaku} data} \label{specParam}\n\tablewidth{0pt}\n\tablehead{\n\colhead{Phases}\n& \colhead{$N_{\rm H}$} & \colhead{$\Gamma$} & \colhead{Flux (3--70~keV)} \\\n\colhead{} & \colhead{$10^{22}$\,cm$^{-2}$} & \colhead{} & \colhead{ $10^{-11}$\,erg\,s$^{-1}$\,cm$^{-2}$}\n}\n\startdata \n$0.0-1.0$ \n& $1.18\pm0.01$ & $1.588\pm0.007$ & $2.31\pm0.02$\\\n\hline\n$0.0-0.1$ & $0.5\pm0.4$ & $1.67\pm0.02$ & $1.24\pm0.03$\\\n$0.1-0.2$ & \ldots & $1.65\pm0.02$ & $0.98\pm0.03$\\\n$0.2-0.3$ & \ldots & $1.62\pm0.02$ & $1.63\pm0.04$\\\n$0.3-0.4$ & \ldots & $1.57\pm0.02$ & $2.56\pm0.06$\\\n$0.4-0.5$ & \ldots & $1.56\pm0.01$ & $3.12\pm0.06$\\\n$0.5-0.6$ & \ldots & $1.57\pm0.02$ & $3.30\pm0.05$\\\n$0.6-0.7$ & \ldots & $1.56\pm0.01$ & $3.48\pm0.06$\\\n$0.7-0.8$ & \ldots & $1.57\pm0.02$ & $3.05\pm0.07$\\\n$0.8-0.9$ & \ldots & $1.59\pm0.02$ & $2.33\pm0.05$\\\n$0.9-1.0$ & \ldots & $1.62\pm0.02$ & $1.45\pm0.04$\\\n\enddata\n\end{deluxetable}\n\n\n\section{Discussion and Summary}\n\label{summ}\n\nThe nature of the compact object in LS~5039 has long been elusive. 
\nThe two main scenarios \n that are often used for the interpretation of its observed properties\n are the {\em microquasar scenario} (radiatively-inefficient accretion onto a BH with the nonthermal emission coming from the jets; e.g., \citealt{2005ApJ...628..388B}), \n and the {\em colliding winds scenario} \n (the relativistic wind from a young rotation-powered pulsar \n collides with the massive star's wind, resulting in an intrabinary shock and particle acceleration; \citealt{2006A&A...456..801D, 2015CRPhy..16..661D, 2020A&A...641A..84M}).\n In addition, one could also consider \n the `{\em propeller \n scenario}' in which\n the interaction of \n the massive star's wind with the rotating magnetosphere of a strongly magnetized NS \n can accelerate wind particles to relativistic energies via shocks or magnetic field reconnection.\n Such a scenario was suggested by\n \citet{2012ApJ...744..106T} \n for another HMGB, LS\,I\,+61$^\circ$\,303, from which a magnetar-like burst was likely observed, and it was also mentioned by Y+20 as a possibility for LS\,5039 if the proposed magnetar \n nature of the compact object is confirmed.\n The possibility of accretion onto the NS surface (magnetic poles) \n seems to be excluded by the long-term stability of the orbital light curve in X-rays, the featureless PL spectrum from 0.3 to $\sim$100 keV, the low X-ray luminosity, $L_{X}\approx2.8\times 10^{34}(d\/2.9~{\rm kpc})^2$ erg s$^{-1}$, and the lack of outbursts.\n\nThe detection of \nfast pulsations, typical for a young rotation-powered pulsar, would \nstrongly support the colliding wind scenario, while \ndetection of slow pulsations \n(e.g., a period of a few seconds) could support the \nscenario with a magnetar propeller, as suggested by Y+20. However, our timing analysis did not yield any evidence for a young pulsar. 
Although we were able to confirm \nthe Fourier power excess \nnear $P=9.05$ s, previously reported by Y+20,\nand find a binary ephemeris \ncompatible with the optical observations\n(contrary to Y+20), the significance of this excess \nis rather low.\nThe excess near $P=8.96$~s found by Y+20 in the {\sl Suzaku} HXD data \nobtained 9 years earlier could strengthen the magnetar hypothesis, if a large enough $\dot{P}$ is assumed. However, we failed to find a \ncommon binary ephemeris which would \nprovide even marginally acceptable signal detections at both 9.05 s (in \\nustar) and 8.96 s (in {\sl Suzaku}). Another \\nustar\\ observation is needed to decisively confirm or rule out the 9 s period candidate and the magnetar scenario or, if this period is not confirmed, to perform a more sensitive search for pulsations at other frequencies. \n\nOur variability analysis does not support the accretion scenario. The binary light curve appears to be rather stable over timescales of at least 9 years (between the {\sl Suzaku} and \\nustar\\ observations), including even the fine structure \n(e.g., the narrow spike near the apastron and inferior conjunction). The stable fine structure cannot be explained by \na clumpy stellar wind or instabilities in the accretion flow. Understanding the nature of this stable small-scale structure in the light curve may hold the key to the interaction scenario.\n\nWe also looked for bursts on even shorter time scales ($<200$ s), which could be expected \nin the accretion and magnetar scenarios, but found no significant bursting activity. The binary light curve \nalso shows little-to-no dependence on energy in the 0.7--70 keV range. 
Having the light curve maximum around the apastron phase (which is close to the inferior conjunction in LS 5039) is hardly compatible with \nwind accretion (unless the wind has very unusual properties).\n\nThe spectral analysis shows that both the phase-integrated and phase-resolved spectra in the 0.7--70 keV range can be fitted by a simple PL model modified by the interstellar extinction. We do not find any evidence of a cutoff at higher energies. The photon index values ($\Gamma\simeq1.6$ in the phase-integrated spectrum, $1.56\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}\Gamma\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 1.67$ in separate phase bins) are typical of synchrotron sources such as pulsars and PWNe \citep{2008AIPC..983..171K}. The value of $\Gamma$ is also typical for other HMGBs (see e.g., Figure~1 from \citealt{2014AN....335..301K}), but the spread of $\Gamma$ with the orbital phase is smaller for LS 5039 than for the other HMGBs,\nperhaps due to LS 5039's tighter orbit. \n\nWe \nhave confirmed a statistically significant anti-correlation between the flux and the photon index observed throughout the binary orbit, \npreviously reported by \cite{2005ApJ...628..388B},\n\cite{2009ApJ...697..592T}, and \cite{2009ApJ...697L...1K}. This phenomenon may hold another important clue to the nature of the compact object and emission processes in LS\,5039 and the other HMGBs. A similar anti-correlation has been seen in at least two other HMGBs: 1FGL J1018.6--5856 \n\citep{2013ApJ...775..135A} \nand LS\,I\,+61$^{\circ}$303 \n\citep{2017MNRAS.468.3689M}.\nIn the colliding wind scenario, the anti-correlation could be explained by a model in which the observer sees a harder spectrum of emitted electrons at the phases when the flux is increased by, e.g., the Doppler boosting. 
\nWe see from Figures \ref{PhResSpec} (and \ref{ls5039LC}) that the flux is maximal and the photon index minimal near inferior conjunction, when the compact object is between the observer and the massive star. This suggests that the Doppler boosting occurs in a shocked pulsar wind confined by the dynamically dominant stellar wind in a (hollow) cone (a paraboloid-like shell) around the star-pulsar direction (see, e.g., the left panel of Figure 10 in \citealt{2013A&ARv..21...64D}).\nThis assumption is supported by\nsimulations by \cite{2012MNRAS.419.3426B} and \cite{2019MNRAS.490.3601B}, \nwho show that the shocked pulsar wind \ncan reach a bulk Lorentz factor of a few as the wind streams away from the cone apex.\nIn order to explain the anti-correlation between the flux and photon index,\none has to assume that particle acceleration in the shocked pulsar wind proceeds more efficiently as the bulk flow accelerates, despite the adiabatic and cooling losses. The mechanism of particle acceleration is unclear, but it might be akin to that in extended AGN jets (e.g., the shear acceleration in an expanding flow; \citealt{2016ApJ...833...34R}). \n\nThe hollow cone configuration of the shocked pulsar wind would naturally give rise to a double-peaked structure in the light curve (e.g., the peaks at phases 0.4 and 0.8 in Figure \ref{ls5039LC}), as the \ncone crosses the observer's line of sight. This would also naturally explain the relatively flat top in the light curve. 
However, in this scenario it is unclear what could cause the third (and strongest) peak, near phase 0.6.\n\n\n\n\nOverall, we conclude that the colliding wind scenario with the compact object being a young pulsar remains the most plausible option, as it can explain the dependence of the flux on the binary phase (by Doppler beaming), the spectrum (by synchrotron emission from particles accelerated by the colliding winds), the long-term stability of the light curve, and the lack of variability on short timescales. \n\n\n\n{\em Facilities:} \facility{{\sl NuSTAR} }, \facility{{\sl Suzaku} (XRT)}\n\n\begin{acknowledgements}\n\nSupport for this work was provided by the National Aeronautics and\nSpace Administration through the \\nustar\\ award NNX17AB77G. JH would like to thank John Tomsick for helpful discussions regarding the reduction of Suzaku data. JH acknowledges support from\nan appointment to the NASA Postdoctoral Program at the\nGoddard Space Flight Center, administered by the Universities Space Research Association under contract with NASA. GY acknowledges support from NASA through Fermi grant 80NSSC20K1571.\n\n\end{acknowledgements}\n\n\section{Introduction}\n\n\subsection{Integrating summary data from GWAS and eQTL studies}\n\n\n\nIntegrative genomics aims to integrate various biological data sets for systematic discovery of the genetic basis that underlies and modifies human disease \citep{giallourakis2005disease}. To realize its full potential in genomic research, methods with both computational efficiency and theoretical guarantees are needed for such integrative analyses in various applications. 
This paper proposes a method that combines datasets from genome-wide association studies (\textsc{gwas}) and expression quantitative trait loci (e\textsc{qtl}) studies in order to identify genetically regulated disease genes and to provide an integrative view of the underlying biological mechanism of complex diseases such as heart failure. Results from \textsc{gwas} have revealed that the majority of disease-associated single nucleotide polymorphisms (\textsc{snp}s) lie in non-coding regions of the genome \citep{hindorff2009potential}. These \textsc{snp}s likely regulate the expression of a set of downstream genes that may have effects on diseases \citep{nicolae2010trait}. On the other hand, e\textsc{qtl} studies measure the association between both cis- and trans- \textsc{snp}s and the expression levels of genes, which characterizes how genetic variants regulate transcription. A key next step in human genetic research is to explore whether these intermediate cellular level e\textsc{qtl} signals are located in the same loci (``colocalize") as \textsc{gwas} signals and potentially mediate the genetic effects on disease, and to find disease genes whose e\textsc{qtl} overlap significantly with the set of loci associated with the disease \citep{he2013sherlock}.\n\n\n\nThis paper focuses on the integrative analysis of the summary statistics of \textsc{gwas} and e\textsc{qtl} studies performed on possibly different sets of subjects. Due to the privacy and confidentiality concerns of \textsc{gwas}\/e\textsc{qtl} participants, the raw genotype data are often not available; instead, most published papers provide summary statistics that include single \textsc{snp} analysis results such as the estimated effect size, its p-value and the minor allele frequency. \nBased on these summary statistics, we propose a method that identifies potential disease genes by measuring their genetic overlap with the disease. 
In particular, we propose a gene-specific measure, the $T$-score, that characterizes the total amount of simultaneous \textsc{snp} signals that share the same loci in both the \textsc{gwas} and the e\textsc{qtl} study of a relevant normal tissue. Such a measure enables us to prioritize genes whose expression levels may underlie and modify human disease \citep{zhao2016sparse}. \n\nTreating \textsc{snp}-specific \textsc{gwas} and e\textsc{qtl} summary $z$-score statistics (as obtained for linear or logistic regression coefficients) as two independent sequences of Gaussian random variables, we define the parameter $T$-score as the sum of the products of the absolute values of two normal means over a given set of $n$ \textsc{snp}s. Specifically, for any individual gene $g$, we denote by $\mathbf{x}_n^g$ the vector of $z$-scores from the e\textsc{qtl} study, and by $\mathbf{y}_n$ the vector of $z$-scores from the \textsc{gwas}. We assume $\mathbf{x}_n^g \sim N(\theta^g, \boldsymbol{\Sigma}_1)$ and $\mathbf{y}_n \sim N(\mu, \boldsymbol{\Sigma}_2)$ for some $\theta^g,\mu\in \mathbb{R}^n$ and covariance matrices $\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2\in \mathbb{R}^{n\times n}$ with unit diagonals. The $T$-score for gene $g$ is then defined as\n\begin{equation}\n\text{$T$-score}(g) = \sum_{i=1}^n |\theta_i^g\mu_i|,\n\end{equation}\nwhere the summation is over a given set of $n$ \textsc{snp}s. The $T$-score quantifies the amount of simultaneous signals contained in two Gaussian mean vectors, regardless of the directions of the signals. Intuitively, a large $T$-score would possibly result from a large number of contributing components $i$ whose means $\theta^g_i$ and $\mu_i$ are simultaneously large in absolute value. 
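To make the definition concrete, the population $T$-score is a one-line computation given the two mean vectors. The sketch below is illustrative only (the dimension, signal locations, and effect sizes are hypothetical, not taken from real data) and shows that signals of opposite sign still contribute positively:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000   # number of SNPs (hypothetical)
s = 10     # number of SNPs with simultaneous signal (hypothetical)

# Hypothetical sparse mean vectors theta (eQTL z-score means) and mu (GWAS)
theta = np.zeros(n)
mu = np.zeros(n)
idx = rng.choice(n, size=s, replace=False)
theta[idx] = 3.0
mu[idx] = -4.0   # opposite sign; the T-score ignores direction

# Population T-score: sum_i |theta_i * mu_i|
T = np.sum(np.abs(theta * mu))
print(T)   # 10 overlapping SNPs, each contributing |3.0 * (-4.0)| = 12 -> 120.0
```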
The supports (nonzero coordinates) of the mean vectors $\theta$ (hereafter we omit its dependence on $g$ for simplicity) and $\mu$ are assumed to have sparse overlaps since it has been observed that, for a relatively large set of \textsc{snp}s, only a small subset of \textsc{snp}s are associated with both disease and gene expression \citep{he2013sherlock}. By estimating the $T$-scores for all the genes using summary statistics, we would be able to, after proper normalizations that account for study sample sizes, the number of \textsc{snp}s and effect sizes (see Section 2.5), identify and prioritize those genetically regulated candidate disease genes. Besides, the $T$-scores can also be used in the Gene Set Enrichment Analysis to identify the disease-associated gene sets and pathways, or to quantify the genetic sharing among different complex traits using the \textsc{gwas} summary statistics \citep{Bulik-Sullivan}.\n\n\subsection{Related works}\n\n\nStatistically, estimation of the $T$-score involves estimating a non-smooth functional -- the absolute value function -- of Gaussian random variables. Unlike the problems of estimating smooth functionals such as the linear or quadratic functionals \citep{ibragimov1985nonparametric,donoho1990minimax, fan1991estimation,efromovich1994adaptive, cai2006optimal}, where some natural unbiased estimators are available, much less is known about estimating non-smooth functionals. Using approximation theory, \cite{cai2011testing} established the minimax risk and constructed a minimax optimal procedure for estimating a non-smooth functional. More recently, this idea has been adapted in statistical information theory to the estimation of non-smooth functionals such as the R\'enyi entropy, support size, and $L_1$ distance \citep{jiao2015minimax, jiao2016minimax,wu2016minimax,wu2019chebyshev,acharya2016unified}. 
Nonetheless, how to estimate the absolute inner product of two Gaussian mean vectors (the $T$-score) remains unknown.\n\nIn the statistical genetics and genomics literature, several approaches have been proposed for integrating \textsc{gwas} and e\textsc{qtl} data sets. Under the colocalization framework, methods such as \cite{nica2010candidate} and \cite{giambartolomei2014bayesian} were developed to detect colocalised \textsc{snp}s. However, these methods do not directly identify the potential causal genes. Under the transcriptome-wide association study (TWAS) framework, \cite{zhu2016integration} proposed a summary data-based Mendelian randomization method for causal gene identification, by imposing certain structural causality assumptions. \cite{Pediscan} developed a gene-based association method called PrediXcan that directly tests the molecular mechanisms through which genetic variation affects phenotype. Nevertheless, there is still a need for a quantitative measure of the genetic sharing between genes and the disease that can be estimated from the \textsc{gwas}\/e\textsc{qtl} summary statistics. \n\nAs a related but different quantity, the genetic covariance $\rho$, proposed by \cite{Bulik-Sullivan} as a measure of the genetic sharing between two traits, can be expressed using our notation as $\rho = \sum_{i=1}^n \theta_i\mu_i$. In addition to the difference due to the absolute value function, in the original definition of the genetic covariance $\rho$, the mean vectors $\theta$ and $\mu$ represent the conditional effect sizes (i.e., conditional on all other \textsc{snp}s in the genome), whereas the mean vectors in our $T$-score correspond to the marginal effect sizes, so as to be directly applicable to the standard \textsc{gwas}\/e\textsc{qtl} summary statistics. 
In addition, unlike the \textsc{ld}-score regression approach considered in \cite{Bulik-Sullivan}, our proposed method takes advantage of the fact that the support overlap between $\theta$ and $\mu$ is expected to be very sparse. \n\n\n\n\n\n\n\n\subsection{Main contributions}\n\nIn this paper, we propose an estimator of the $T$-score, based on the idea of thresholding and truncating the best polynomial approximation estimator. To the best of our knowledge, this is the first result concerning estimation of such an absolute inner product of two Gaussian mean vectors. Under the framework of statistical decision theory, the minimax lower bounds are obtained and we show that our proposed estimators are minimax rate-optimal over various parameter spaces. In addition, our results indicate that the proposed estimators are locally adaptive to the unknown sparsity level and the signal strength (Section 2). Our simulation study shows the strong empirical performance and robustness of the proposed estimators in various settings, and provides guidelines for using our proposed estimators in practice (Section 3). Analysis of \textsc{gwas} and e\textsc{qtl} data sets of heart failure using the proposed method identifies several important genes that are functionally relevant to the etiology of human heart failure (Section 4).\n\n\n\n\section{Minimax Optimal Estimation of T-score}\n\n\subsection{Minimax Lower Bounds}\n\nWe start by establishing the minimax lower bounds for estimating the $T$-score over various parameter spaces. Throughout, we denote $T(\theta,\mu)=\sum_{i=1}^n|\theta_i\mu_i|$. For a vector ${a} = (a_1,...,a_n)^\top \in \mathbb{R}^{n}$, we define the $\ell_\infty$ norm $\| {a}\|_{\infty} = \max_{1\le i\le n} |a_{i}|$. For sequences $\{a_n\}$ and $\{b_n\}$, we write $a_n\lesssim b_n$ or $b_n \gtrsim a_n$ if there exists a constant $C$ such that $a_n \le Cb_n$ for all $n$. 
We write $a_n\asymp b_n$ if $a_n \lesssim b_n$ and $a_n\gtrsim b_n$.\n\nAs a matter of both practical and theoretical interest, we focus on the class of mean vector pairs $(\theta,\mu)$ with only a small fraction of support overlaps. \nSpecifically, for any $s< n$, we define the parameter space for $(\theta,\mu)$ as $D(s) =\{(\theta,\mu)\in \mathbb{R}^n\times \mathbb{R}^n : |\text{supp}(\theta) \cap \text{supp}(\mu)| \le s\}.$\nIntuitively, in addition to the sparsity $s$, the difficulty of estimating the $T$-score $T(\theta,\mu)$ should also rely on the magnitudes of the mean vectors $\theta$ and $\mu$, and the covariance matrices $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$. Towards this end, we define the parameter space for $(\theta,\mu,\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2)$ as \n\[\nD^\infty(s,L_n) = \big\{(\theta,\mu,\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2): (\theta,\mu)\in D(s),\max(\|\theta\|_\infty , \|\mu\|_\infty )\le L_n, \boldsymbol{\Sigma}_1=\boldsymbol{\Sigma}_2={\bf I}_n \big\},\n\]\nwhere both $s$ and $L_n$ can grow with $n$.\nIn particular, we calibrate the sparsity $s \asymp n^{\beta}$ for some $0<\beta<1$. 
Throughout, we consider the normalized loss function as the squared distance scaled by $n^{-2}$ and define the estimation risk for some estimator $\\hat{T}$ as\n\\[\n\\mathcal{R}(\\hat{T})=\\frac{1}{n^2}\\mathbb{E} (\\hat{T}-T(\\theta,\\mu))^2.\n\\]\nTo simplify our statement, we define the rate function\n\\[\n\\psi(s,n) = \\log\\bigg(1+\\frac{n}{s^2} \\bigg)+\\frac{1}{\\log s}.\n\\]\nThe following theorem establishes the minimax lower bound over the parameter space $D^{\\infty}(s,L_n)$.\n\n\\begin{theorem} \\label{sparse.lower3}\nLet $\\mathbf{x_n} \\sim N(\\theta,\\boldsymbol{\\Sigma}_1)$ and $\\mathbf{y_n} \\sim N(\\mu,\\boldsymbol{\\Sigma}_2)$ be multivariate Gaussian random vectors where $(\\theta,\\mu,\\boldsymbol{\\Sigma}_1,\\boldsymbol{\\Sigma}_2) \\in D^\\infty(s,L_n)$ and $L_n\\gtrsim \\sqrt{\\log n}$. Then \n\\begin{equation} \\label{sparse.lower.equation4}\n\\inf_{\\hat{T}} \\sup_{\\substack{(\\theta,\\mu,\\boldsymbol{\\Sigma}_1,\\boldsymbol{\\Sigma}_1) \\in D^\\infty(s,L_n)}}\t\\mathcal{R}(\\hat{T}) \\gtrsim \\frac{L_n^2s^2\\psi(s,n)}{n^2}\n\\end{equation}\nwhere $\\hat{T}$ is any estimator based on $(\\mathbf{x_n} ,\\mathbf{y_n} )$.\n\\end{theorem}\nFrom the above theorem and the definition of the rate function $\\psi(s,n)$, apparently, there is a discrepancy between the cases when $\\beta\\in(0,1\/2)$ and when $\\beta\\in [1\/2,1)$. 
Specifically, when $\\beta\\in (0,1\/2)$, the minimax lower bound (\\ref{sparse.lower.equation4}) becomes\n\\begin{equation} \\label{lb.l}\n\\inf_{\\hat{T}} \\sup_{\\substack{(\\theta,\\mu,\\boldsymbol{\\Sigma}_1,\\boldsymbol{\\Sigma}_2) \\in D^{\\infty}(s,L_n)}}\t\\mathcal{R}(\\hat{T})\\gtrsim\\frac{L_n^2s^2\\log n}{n^2},\n\\end{equation}\nwhereas when $\\beta\\in [1\/2,1)$, we have\n\\begin{equation} \\label{lb.h}\n\\inf_{\\hat{T}} \\sup_{\\substack{(\\theta,\\mu,\\boldsymbol{\\Sigma}_1,\\boldsymbol{\\Sigma}_2) \\in D^{\\infty}(s,L_n)}} \t\\mathcal{R}(\\hat{T}) \\gtrsim \\frac{L_n^2s^2}{n^2\\log n}.\n\\end{equation}\nThese results suggest that the minimax optimal estimators of $T(\\theta,\\mu)$ might not be the same across different regions of $\\beta$.\n\n\\subsection{Optimal estimators of T-score via polynomial approximation}\n\nIn general, the proposed estimators are based on the idea of optimal estimation of the absolute value of normal means studied by \\cite{cai2011testing}. Therein, the best polynomial approximation of the absolute value function was applied to obtain the rate-optimal estimator, as well as the minimax lower bound. 
Specifically, it was shown that, if we define the $2K$-degree polynomial \n\\begin{equation}\n\\label{GK}\nG_K(x) = \\frac{2}{\\pi}T_0(x) +\\frac{4}{\\pi}\\sum_{k=1}^{K} (-1)^{k+1}\n\\frac{T_{2k}(x)}{4k^2-1} \\equiv \\sum_{k=0}^K g_{2k} x^{2k},\n\\end{equation}\nwhere $T_k(x) = \\sum_{j=0}^{\\lfloor k\/2 \\rfloor} (-1)^j \\frac{k}{k-j} {k-j\\choose j} 2^{k-2j-1}x^{k-2j}$ are Chebyshev polynomials, \nthen for any $X \\sim N(\\theta,1)$, if $H_k$ are Hermite polynomials with respect to the standard normal density $\\phi$ such that\n\\begin{equation}\n\\frac{d^k}{dy^k}\\phi(y) = (-1)^k H_k(y) \\phi(y),\n\\end{equation}\nthe estimator\n\\begin{equation} \\label{SK}\n\\tilde{S}_K(X) \\equiv \\sum_{k=0}^K g_{2k}M_n^{-2k+1}H_{2k}(X)\n\\end{equation}\nfor some properly chosen $K$ and $M_n$ has some optimality properties for estimating $|\\theta|$. This important result motivates our construction of the optimal estimators of $T$-score. \n\nWe begin by considering the setting where $\\mathbf{x}_n=(x_1,...,x_n)^\\top \\sim N(\\theta,{\\bf I}_n)$ and $\\mathbf{y}_n=(y_1,...,y_n)^\\top \\sim N(\\mu,{\\bf I}_n)$. To estimate $T(\\theta,\\mu)$, we first split each sample into two copies, one is used for testing, and the other is used for estimation. Specifically, for $x_i \\sim N(\\theta_i,1)$, one can generate $x_{i1}$ and $x_{i2}$ from $x_{i}$ by letting $z_i \\sim N(0,1)$ and setting $x'_{i1} = x_i+z_i$ and $x'_{i2} = x_i-z_i$. Let $x_{il} = x'_{il}\/\\sqrt{2}$ for $l=1,2$, then $x_{il} \\sim N(\\theta'_i,1)$ for $l=1,2$ and $i=1,...,n$ with $\\theta_i' = \\theta_i\/\\sqrt{2}$. Similarly, one can construct $y_{il}\\sim N(\\mu'_i,1)$ for $l=1,2$ and $i=1,...,n$ with $\\mu'_i = \\mu_i\/\\sqrt{2}$. Since $T(\\theta,\\mu)=2T(\\theta',\\mu')$, estimating $T(\\theta,\\mu)$ with $\\{ x_i,y_i\\}_{i=1}^n$ is then equivalent to estimating $T(\\theta', \\mu')$ with $\\{x_{il},y_{il} \\}_{i=1}^n, l=1,2$. 
\n\nIn light of the estimator defined in (\\ref{SK}), we consider a slightly adjusted statistic $S_K(X) = \\sum_{k=1}^K g_{2k}M_n^{-2k+1}H_{2k}(X)$ and define its truncated version\n\\[\n{\\delta}_K(X) = \\min\\{{S}_K(X) ,n^2 \\},\n\\]\nwith $M_n = 8\\sqrt{\\log n}$ and $K\\ge 1$ to be specified later. It will be seen that the above truncation is important in reducing the variance of $\\delta_K(X)$.\nFollowing the sample splitting procedure, we construct an estimator of $|\\theta'_i|$ as\n\\[\n\\hat{V}_{i,K}(x_i)= {\\delta}_K(x_{i1})I(|x_{i2}|\\le 2\\sqrt{2\\log n}) + |x_{i1}| I(|x_{i2}|> 2\\sqrt{2\\log n}),\n\\]\nand a similar estimator of $|\\mu'_i|$ as $\\hat{V}_{i,K}(y_i)$. To further exploit the sparse structure, we also consider their thresholded version \n\\[\n\\hat{V}^S_{i,K}(x_i)= {\\delta}_K(x_{i1})I(\\sqrt{2\\log n}<|x_{i2}|\\le 2\\sqrt{2\\log n}) + |x_{i1}| I(|x_{i2}|> 2\\sqrt{2\\log n})\n\\]\nas an estimator of $|\\theta'_i|$ and similarly $\\hat{V}^S_{i,K}(y_i)$ for $|\\mu'|$. Intuitively, both $\\hat{V}_{i,K}(x_i)$ and $\\hat{V}^S_{i,K}(x_i)$ are hybrid estimators: $\\hat{V}_{i,K}(x_i)$ is a composition of an estimator based on polynomial approximation designed for small to moderate observations (less than $2\\sqrt{2\\log n}$ in absolute value) and the simple absolute value estimator applied to large observations (larger than $2\\sqrt{2\\log n}$ in absolute value), whereas $\\hat{V}^S_{i,K}(x_i)$ has an additional thresholding procedure for small observations (less than $\\sqrt{2\\log n}$ in absolute value). Consequently, we propose two estimators of $T(\\theta,\\mu)$, namely,\n\\begin{equation} \\label{sparse.est}\n\\widehat{T_K(\\theta,\\mu)} = {2}\\sum_{i=1}^n \\hat{V}_{i,K}(x_i)\\hat{V}_{i,K}(y_i),\n\\end{equation}\nas the hybrid non-thresholding estimator and\n\\begin{equation}\n\\widehat{T^S_K(\\theta,\\mu)} = {2}\\sum_{i=1}^n \\hat{V}^S_{i,K}(x_i)\\hat{V}^S_{i,K}(y_i),\n\\end{equation}\nas the hybrid thresholding estimator. 
Both estimators rely on $K$, which is the tuning parameter to be specified later.\n\n\subsection{Theoretical properties and minimax optimality}\n\n\nIn this section, we formally study the theoretical properties of $\widehat{T_K(\theta,\mu)}$ and $\widehat{T^S_K(\theta,\mu)}$. \nThe following theorem provides the risk upper bounds and therefore establishes the minimax optimality of $\widehat{T_K(\theta,\mu)}$ and $\widehat{T^S_K(\theta,\mu)}$ over $D^{\infty}(s,L_n)$ with different sparsity levels.\n\n\begin{theorem} \label{sparse.upper3}\nLet $\mathbf{x_n} \sim N(\theta,\boldsymbol{\Sigma}_1)$ and $\mathbf{y_n} \sim N(\mu,\boldsymbol{\Sigma}_2)$ be multivariate Gaussian random vectors with $(\theta,\mu,\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2)\in D^{\infty}(s,L_n)$, $L_n\gtrsim \sqrt{\log n}$ and $s\asymp n^\beta$. Then \n\begin{enumerate}\n\t\item for any $\beta\in(0,1)$, let $K$ be any finite positive integer, we have\n\t\begin{equation} \label{sparse.upper.equation5} \label{r1}\n\t\sup_{(\theta,\mu,\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2)\in D^{\infty}(s,L_n)}\t\mathcal{R}(\widehat{T^S_K(\theta,\mu)} ) \lesssim \frac{L^2_ns^2\log n}{n^2};\n\t\end{equation}\n\t\item for any $\beta\in (1\/2,1)$, let $K= r\log n$ for some sufficiently small constant $r>0$, we have\n\t\begin{equation} \label{r2}\n\t\sup_{(\theta,\mu,\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2)\in D^{\infty}(s,L_n)}\t\mathcal{R}(\widehat{T_K(\theta,\mu)} ) \lesssim \frac{L^2_ns^2}{n^2\log n}.\n\t\end{equation}\n\end{enumerate}\n\end{theorem}\n\nCombining Theorem \ref{sparse.upper3} with the lower bounds in Theorem \ref{sparse.lower3}, we see that the estimator $\widehat{T_K(\theta,\mu)}$ with $K\asymp \log n$ is rate-optimal for any $L_n$ and $\beta\in(1\/2,1)$, whereas the thresholding estimator $\widehat{T^S_K(\theta,\mu)}$ with $K$ being any finite positive integer is simultaneously rate-optimal for any $L_n$ and $\beta\in(0,1\/2)$. \n\n\nUnfortunately, even with an appropriate choice of $K$, neither $\widehat{T^S_K(\theta,\mu)}$ nor $\widehat{T_K(\theta,\mu)}$ is simultaneously optimal across all $\beta\in(0,1)$. 
However, since the difference in the optimal rates of convergence between (\ref{r1}) and (\ref{r2}) is only a factor of $\log^2 n$, we argue that in practice, even when $\beta\in(1\/2,1)$, the nearly optimal thresholding estimator $\widehat{T^S_K(\theta,\mu)}$ would perform just as well as the non-thresholding minimax optimal estimator $\widehat{T_K(\theta,\mu)}$. Moreover, as for the optimal choice of $K$, since the difference between $K\asymp \log n$ and $K\asymp 1$ is minor for most applications with small to moderately large sample sizes (e.g., with $n\sim 10^5$, we have $\log n\approx 11.51$), in practice it suffices to choose $K$ as some fixed moderate constant.\n\n\n\n\subsection{Sparse estimation via simple thresholding}\n\nAccording to our previous analysis, if the parameter space is very sparse, i.e., $0<\beta<1\/2$, the proposed estimator $\widehat{T^S_K(\theta,\mu)}$ is minimax optimal if we choose $K$ as any constant positive integer. In other words, any constant-degree polynomial approximation suffices for the optimal estimation of $T(\theta,\mu)$, including the constant function. It means that in this case the polynomial approximation is essentially redundant for our purpose. 
\n\nIn light of the above observation, we consider the following simple thresholding estimator\n\[\n\widetilde{T(\theta,\mu)}={2}\sum_{i=1}^n \hat{U}_i(x_i)\hat{U}_i(y_i), \quad \text{where} \quad \hat{U}_i(x_i)= |x_{i1}| I(|x_{i2}|> 2\sqrt{2\log n}).\n\]\nIn the following, we show that $\widetilde{T(\theta,\mu)}$ has the same rates of convergence as $\widehat{T_K^S(\theta,\mu)}$ over the parameter spaces $D^{\infty}(n^{\beta},L_n)$, and, in light of Theorem \ref{sparse.lower3}, is minimax optimal and adaptive over any sparsity level $\beta\in(0,1\/2)$.\n\n\begin{theorem} \label{sparse.upper.3}\nLet $\mathbf{x_n} \sim N(\theta,\boldsymbol{\Sigma}_1)$ and $\mathbf{y_n} \sim N(\mu,\boldsymbol{\Sigma}_2)$ be multivariate Gaussian random vectors with $(\theta,\mu,\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2) \in D^{\infty}(s,L_n)$ and $L_n\gtrsim \sqrt{\log n}$. Then\n\begin{equation}\n\sup_{\substack{(\theta,\mu,\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2) \in D^\infty(s,L_n)}}\mathcal{R}(\widetilde{T(\theta,\mu)} ) \lesssim \frac{L_n^2s^2\log n}{n^2}.\n\end{equation}\n\end{theorem}\n\nSince our simple thresholding estimator $\widetilde{T(\theta,\mu)}$ completely drops the polynomial components in $\widehat{T^S_K(\theta,\mu)}$, its variance is significantly reduced. As a consequence, we find that as long as $\max(\|\theta\|_\infty , \|\mu\|_\infty )\le \sqrt{n}$, the isotropic Gaussian assumption $\boldsymbol{\Sigma}_1=\boldsymbol{\Sigma}_2={\bf I}_n$ can be removed without changing the rate of convergence. 
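For readers who wish to experiment with the estimator, the display above translates directly into a few lines of code. The following Python sketch is our own illustration (not the paper's software); it assumes the two halves $x_{i1},x_{i2}$ and $y_{i1},y_{i2}$ produced by the sample splitting step are supplied as vectors:

```python
import numpy as np

def simple_threshold_T(x1, x2, y1, y2):
    """Simple thresholding estimate 2 * sum_i U_i(x_i) U_i(y_i), with
    U_i(x_i) = |x_{i1}| * I(|x_{i2}| > 2 sqrt(2 log n)).
    x1, x2 (resp. y1, y2) are the two halves from sample splitting."""
    n = len(x1)
    tau = 2.0 * np.sqrt(2.0 * np.log(n))   # threshold 2 sqrt(2 log n)
    u_x = np.abs(x1) * (np.abs(x2) > tau)  # hat U_i(x_i)
    u_y = np.abs(y1) * (np.abs(y2) > tau)  # hat U_i(y_i)
    return 2.0 * np.sum(u_x * u_y)
```

Coordinates whose second half falls below the threshold are zeroed out, so only coordinates carrying strong simultaneous signals contribute to the estimate.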
Towards this end, we define the enlarged parameter space\n\[\nD_0^\infty(s,L_n) = \Bigg\{(\theta,\mu,\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2): \begin{aligned} & (\theta,\mu)\in D(s),\max(\|\theta\|_\infty , \|\mu\|_\infty )\le L_n, \\\n& \boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2\succeq 0, \text{$\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ have unit diagonals}.\n\end{aligned} \Bigg\}.\n\]\nIn particular, as $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ have unit diagonals, the sample splitting procedure (Section 2.1) still applies, which only leads to a $1\/2$-scaling of the off-diagonal entries of the covariance matrices.\n\begin{theorem} \label{sparse.upper.4}\nLet $\mathbf{x_n} \sim N(\theta,\boldsymbol{\Sigma}_1)$ and $\mathbf{y_n} \sim N(\mu,\boldsymbol{\Sigma}_2)$ where $(\theta,\mu,\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2)\in D_0^{\infty}(s, \sqrt{n})$. Then we have\n\begin{equation}\n\sup_{\substack{(\theta,\mu,\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2) \in D_0^\infty(s,\sqrt{n})}}\mathcal{R}(\widetilde{T(\theta,\mu)} )\lesssim \frac{s^2\log^2 n}{n^2}.\n\end{equation}\n\end{theorem}\n\nBy definition, we have $D^{\infty}(s,L_n)\subset D_0^{\infty}(s,L_n)$. It then follows from Theorem \ref{sparse.lower3} that for any $s\lesssim\sqrt{n}$,\n\begin{equation}\n\inf_{\hat{T}} \sup_{\substack{(\theta,\mu,\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2) \in D_0^\infty(s,\sqrt{n})}}\mathcal{R}(\hat{T}) \asymp \frac{s^2\log^2 n}{n^2},\n\end{equation}\nwhere the minimax optimal rate can be attained by $\widetilde{T(\theta,\mu)}$. 
This establishes the minimax optimality and adaptivity of $\widetilde{T(\theta,\mu)}$ over $D_0^{\infty}(n^\beta,\sqrt{n})$ for any $\beta\in(0,1\/2)$.\nThe result confirms an important advantage of $\widetilde{T(\theta,\mu)}$ over $\widehat{T^S_K(\theta,\mu)}$, namely, its guaranteed theoretical performance under arbitrary correlation structures, which is consistent with the fact that in many applications the observations are not necessarily independent.\n\n\n\subsection{Normalization and linkage disequilibrium}\n\nDealing with linkage disequilibrium (\textsc{ld}) among the \textsc{snp}s \citep{reich2001linkage,daly2001high,pritchard2001linkage} is essential in any genetic study. In this paper, we follow the idea of \cite{Bulik-Sullivan} and propose to use the normalized $T$-score\n\[\n\text{Normalized $T$-score}(g) = \frac{\sum_{i=1}^n|\theta_i^g\mu_i|}{\|\theta^g\|_2\|\mu\|_2}\n\]\nas a measure of genetic overlap between gene $g$ and the outcome disease. In particular, the estimation of the $\ell_2$ norms $\|\theta^g\|_2$ and $\|\mu\|_2$, or in our context, the \textsc{snp}-heritability of the traits \citep{yang2010common}, can be easily accomplished using summary statistics. As a result, every normalized $T$-score lies between $0$ and $1$, is scale-invariant (e.g., invariant to study sample sizes and \textsc{snp} effect sizes), and is comparable across different genes or studies. In addition, as similarly argued in \cite{Bulik-Sullivan}, the normalized $T$-score is less sensitive to the choice of the $n$-\textsc{snp} sets used in the analysis. \n\nMoreover, in Theorem 4, we show that the simple thresholding estimator $\widetilde{T(\theta,\mu)}$ does not require the independence of the $z$-scores, which theoretically guarantees its applicability in the presence of arbitrary \textsc{ld} structure among the \textsc{snp}s. 
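As a small illustration of the definition (ours, not part of the paper), the population version of the normalized $T$-score can be computed as follows; in practice the numerator and the two $\ell_2$ norms are replaced by their estimates from summary statistics:

```python
import numpy as np

def normalized_T_score(theta, mu):
    """Population normalized T-score:
    sum_i |theta_i * mu_i| / (||theta||_2 * ||mu||_2).
    Lies in [0, 1] by the Cauchy-Schwarz inequality."""
    return np.sum(np.abs(theta * mu)) / (np.linalg.norm(theta) * np.linalg.norm(mu))
```

The score equals 1 exactly when the two effect vectors are proportional coordinate-wise in absolute value, and 0 when their supports are disjoint.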
However, our theoretical results concerning $\widehat{T_K(\theta,\mu)}$ and $\widehat{T^S_K(\theta,\mu)}$ rely on such an independence assumption. In our simulation studies, we found that the empirical performance (including optimality) of $\widehat{T_K(\theta,\mu)}$ and $\widehat{T^S_K(\theta,\mu)}$ does not appear to be affected by the dependence induced by the \textsc{ld} structure. As a result, our proposed estimation method, although partially analysed under the independence assumption, can be directly applied to the summary statistics without specifying the underlying \textsc{ld} or covariance structure.\n\n\n\n\section{Simulation Studies}\n\nIn this section, we demonstrate and compare the empirical performance of our proposed estimators and some alternative estimators under various settings. Since our goal is to estimate the $T$-score for each gene and to identify important genes that have a strong association with the disease, we generate data that mimic genetic association data, including block-wise and sparse signals. In particular, the block-wise correlated structure of the \textsc{snp}s, and therefore also of the corresponding $z$-scores, is widely observed in large-scale genetic studies due to \textsc{ld} among the \textsc{snp}s. In the following, we consider block-wise signals with both standard independent errors and errors with local dependence to show the robustness of our proposed estimators.\n\nWe generate a pair of $n$-dimensional vectors, denoted as $\mathbf{x}_n$ and $\mathbf{y}_n$, with $ n = 1.5\times 10^5,3\times 10^5$ and $5\times 10^5$, from multivariate normal distributions $N(\theta,\boldsymbol{\Sigma})$ and $N(\mu,\boldsymbol{\Sigma})$, respectively. The mean vectors $(\theta,\mu)$ and the covariance matrix $\boldsymbol{\Sigma}$ are generated as follows:\n\begin{itemize}\n\t\item Block-wise signals: we first partition the $n$ coordinates of $\theta$ and $\mu$ into blocks with each block consisting of 10 neighboring coordinates. 
For any sparsity level $s=50,100,200,300$ or $400$, we randomly pick $s\/10$ blocks as the support. In each of these blocks, we assign the coordinates of $\theta$ and $\mu$ symmetric triangle-shaped values, whose maximal value is generated from a uniform distribution over $[3,5]$. The remaining $(n-s)\/10$ blocks are set to 0.\n\t\item Global covariance: $\boldsymbol{\Sigma}={\bf I}$. \n\t\item Block-wise covariances: $\boldsymbol{\Sigma}$ is block diagonal where each block is either a $10\times 10$ Toeplitz matrix (see Supplementary Material for details) or a $1000\times 1000$ exchangeable covariance matrix whose off-diagonal elements are $0.5$. These two covariance matrices are denoted as $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$, respectively.\n\end{itemize}\nIn particular, for the correlated cases, the correlation exists over either a small block of 10 elements whose correlation decays in distance, or over a large interval of 1000 elements with a constant correlation. \n\nWe compare the following five estimators of $T(\theta,\mu)$: (1) the hybrid thresholding estimator $\widehat{T^S_K(\theta,\mu)}$; (2) the hybrid thresholding estimator without the sample splitting procedure, denoted as $\widehat{T^{S*}_K(\theta,\mu)}$; (3) the proposed simple thresholding estimator $\widetilde{T(\theta,\mu)}$; (4) the hybrid non-thresholding estimator $\widehat{T_K(\theta,\mu)}$; and (5) the estimator that simply calculates the absolute inner product of the observed vectors, denoted as Naive.\n\nFor $\widehat{T^S_K(\theta,\mu)}$, $\widehat{T_K(\theta,\mu)}$ and $\widehat{T^{S*}_K(\theta,\mu)}$, we choose $K=8$ (see the last paragraph of Section 2.3). Each setting was repeated 100 times. 
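The block-wise signal design above can be sketched as follows (our own illustration; the exact triangle profile within a block is not fully specified in the text, so the linear ramp below is an assumption):

```python
import numpy as np

def blockwise_signal(n, s, block=10, rng=None):
    """Sparse mean vector with s/block active blocks of `block` adjacent
    coordinates; each active block carries a symmetric triangle shape
    whose peak height is drawn from Uniform[3, 5]."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.zeros(n)
    n_blocks, k = n // block, s // block
    support = rng.choice(n_blocks, size=k, replace=False)
    # symmetric triangle profile: 0 at the block edges, maximal near the centre
    t = np.arange(block)
    half = (block - 1) / 2.0
    profile = 1.0 - np.abs(t - half) / half
    for b in support:
        theta[b * block:(b + 1) * block] = rng.uniform(3.0, 5.0) * profile
    return theta
```

A pair $(\theta,\mu)$ generated this way is then observed through Gaussian noise with one of the covariance choices listed above.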
We compare the estimators using the empirical version of the following rescaled mean squared error:\n\[\n\textsc{rmse}(\hat{T}) = \frac{1}{s}\sqrt{{\mathbb{E}(\hat{T}-{T})^2}}.\n\]\nIn particular, as the empirical \textsc{rmse}s are rescaled by the factor $1\/s$, small numerical differences in \textsc{rmse} actually indicate large differences in the overall mean squared error.\nTables 1-3 give the empirical \textsc{rmse} of the five estimators under the settings with either independent or dependent observations. In general, $\widehat{T^{S*}_K(\theta,\mu)}$ outperforms all the other estimators in all the settings. The performances of $\widehat{T^S_K(\theta,\mu)}$, $\widetilde{T(\theta,\mu)}$ and $\widehat{T_K(\theta,\mu)}$ are roughly the same, with $\widehat{T^S_K(\theta,\mu)}$ and $\widetilde{T(\theta,\mu)}$ being slightly better in the sparser cases. The Naive estimator performs poorly in all the settings. The superiority of $\widehat{T^{S*}_K(\theta,\mu)}$ over $\widehat{T^S_K(\theta,\mu)}$ might be explained by the fact that, by using the sample splitting procedure, we introduced extra variability into our estimator. Since the sample splitting is used only to facilitate our theoretical analysis, in applications we suggest using $\widehat{T^{S*}_K(\theta,\mu)}$ for its better performance. \n\nAs for the robustness of our estimators to the presence of correlated observations, we see that all of the proposed estimators have roughly the same performance as in the independent cases. In real applications, we expect that the observed data are more likely to be ``block-wise signal with block-wise correlation.'' Our simulation shows that $\widehat{T^{S*}_K(\theta,\mu)}$ provides the best estimate of the $T$-scores. \n\n\begin{table}[t!] \n\t\caption{\footnotesize Empirical \textsc{rmse} under covariance $\boldsymbol{\Sigma}={\bf I}_n$. 
$\\widehat{T^{S*}_K(\\theta,\\mu)}$: the hybrid thresholding estimator without sample splitting procedure;\n\t\t$\\widehat{T^S_K(\\theta,\\mu)}$: the hybrid thresholding estimator;\n\t\t$\\widetilde{T(\\theta,\\mu)}$: the proposed simple thresholding estimator;\n\t\t$\\widehat{T_K(\\theta,\\mu)}$: the hybrid non-thresholding estimator;\n\t\tNaive: the estimator that simply calculates the absolute inner product of observed vectors.}\n\t\\vskip .2cm\n\t\\centerline{\\tabcolsep=3truept\\begin{tabular*}{ 0.78 \\textwidth}{ccccccc}\t\\hline \n\t\t\t$n$& $s$ & $\\widehat{T^{S*}_K(\\theta,\\mu)}$ & $\\widehat{T^S_K(\\theta,\\mu)}$ & \t$\\widetilde{T(\\theta,\\mu)}$ & $\\widehat{T_K(\\theta,\\mu)}$ & Naive \\\\ \n\t\t\t\\hline \n\t\t\t&100& 4.091 & 5.818 & 5.764 & 5.847 & 954.4 \\\\\n\t\t\t&250& 4.058 & 6.348 & 6.157 & 6.382 & 381.3\\\\\n\t\t\t150,000 & 400& 3.949 & 6.274 & 6.042 & 238.4 & 6.307 \\\\\n\t\t\t& 550 & 4.024 & 6.272 & 6.069 & 6.304 & 173.2\\\\\n\t\t\t& 700& 3.973 & 5.84 & 5.702 & 5.870& 135.9 \\\\\t\t\n\t\t\t\\hline \n\t\t\t& 100& 4.271 & 6.579 & 6.385 & 6.613& 1909.1 \\\\\n\t\t\t&250& 4.222 & 6.244 & 6.118 & 6.276 & 763.3\\\\\n\t\t\t300,000 & 400& 4.143 & 6.015 & 5.882 & 6.046& 476.8 \\\\\n\t\t\t& 550 & 4.214 & 5.991 & 5.874 & 6.022& 346.6\\\\\n\t\t\t& 700& 4.242 & 6.144 & 6.026& 6.175 & 272.4 \\\\\n\t\t\t\\hline \n\t\t\t& 100& 4.380 & 6.139 & 6.031 & 6.170 & 3182.7 \\\\\n\t\t\t&250& 4.428 & 6.251 & 6.157 & 6.283& 1272.6 \\\\\n\t\t\t500,000 & 400& 4.374 & 6.109 & 5.997 & 6.140 & 795.3 \\\\\n\t\t\t& 550 &4.395 & 6.149 & 6.056 & 6.180& 578.3 \\\\\n\t\t\t& 700& 4.363 & 6.221 & 6.117 & 6.253& 454.2 \\\\\n\t\t\t\\hline\n\t\\end{tabular*}}\n\t\\label{table:t1}\n\\end{table}\n\n\n\\begin{table}[t!]\n\t\\caption{Empirical \\textsc{rmse} under covariance $\\boldsymbol{\\Sigma}_1$.}\n\t\\vskip .2cm\n\t\\centerline{\\tabcolsep=3truept\\begin{tabular*}{ 0.78 \\textwidth}{ccccccc}\n\t\t\t\\hline \n\t\t\t$n$& $s$ & $\\widehat{T^{S*}_K(\\theta,\\mu)}$ & 
$\\widehat{T^S_K(\\theta,\\mu)}$ & \t$\\widetilde{T(\\theta,\\mu)}$ & $\\widehat{T_K(\\theta,\\mu)}$ & Naive \\\\ \n\t\t\t\\hline \n\t\t\t&100& 4.090 & 6.704 & 6.502 & 7.042& 953.5 \\\\\n\t\t\t&250& 4.134 & 5.816 & 5.702 & 6.104& 381.9 \\\\\n\t\t\t150,000 & 400& 4.068 & 6.087 & 5.896 & 6.396 & 237.9 \\\\\n\t\t\t& 550 & 3.947 & 5.991 & 5.826 & 6.293 & 172.9 \\\\\n\t\t\t& 700& 4.163 & 6.011 & 5.913 & 6.310& 135.8 \\\\\n\t\t\t\\hline \n\t\t\t& 100& 3.868 & 6.088 & 6.009 & 6.403& 1912.2 \\\\\n\t\t\t&250& 4.196 & 5.963 & 5.797 & 6.262& 764.7 \\\\\n\t\t\t300,000 & 400& 4.215 & 6.171 & 6.09 & 6.478& 476.8\\\\\n\t\t\t& 550 & 4.238 & 6.005 & 5.888 & 6.306& 345.7\\\\\n\t\t\t& 700& 4.409 & 6.056 & 5.965 & 6.136& 272.6 \\\\\n\t\t\t\\hline \n\t\t\t& 100& 4.288 & 5.385 & 5.345 & 5.651 & 3184.8 \\\\\n\t\t\t&250& 4.353 & 6.227 & 6.027 & 6.531& 1273.0 \\\\\n\t\t\t500,000 & 400& 4.478 & 5.878 & 5.877 & 6.170 & 795.3 \\\\\n\t\t\t& 550 &4.370 & 5.929 & 5.855 & 6.222& 577.7\\\\\n\t\t\t& 700& 4.464 & 6.135 & 6.083 & 6.438 & 455.1\\\\\n\t\t\t\\hline\n\t\\end{tabular*}}\n\t\\label{table:t2}\n\\end{table}\n\n\n\\begin{table}[h!]\n\t\\centering\n\t\\caption{Empirical \\textsc{rmse} under covariance $\\boldsymbol{\\Sigma}_2$.}\n\t\\vskip .2cm\n\t\\centerline{\\tabcolsep=3truept\\begin{tabular*}{ 0.78 \\textwidth}{ccccccc}\n\t\t\t\\hline \n\t\t\t$n$& $s$ & $\\widehat{T^{S*}_K(\\theta,\\mu)}$ & $\\widehat{T^S_K(\\theta,\\mu)}$ & \t$\\widetilde{T(\\theta,\\mu)}$ & $\\widehat{T_K(\\theta,\\mu)}$ & Naive \\\\ \n\t\t\t\\hline \n\t\t\t&100& 4.336 & 5.88 & 5.807 & 5.906 & 485.3\\\\\n\t\t\t&250& 4.645 & 5.857 & 5.828 & 5.885& 189.7 \\\\\n\t\t\t150,000 & 400& 4.770 & 6.543 & 6.438 & 6.575 & 120.1 \\\\\n\t\t\t& 550 & 4.566 & 6.035 & 5.985& 6.065 & 87.12 \\\\\n\t\t\t& 700& 4.523 & 5.983 & 5.938 & 6.012 & 68.89\\\\\n\t\t\t\\hline \n\t\t\t& 100& 4.484 & 6.005 & 5.980 & 6.029 & 961.0\\\\\n\t\t\t&250& 4.718 & 5.86 & 5.865 & 5.887 & 380.6\\\\\n\t\t\t300,000 & 400& 4.753 & 6.26 & 6.241 & 6.291& 
241.8\\\n\t\t\t& 550 & 4.729 & 5.985 & 5.949 & 6.015& 174.8 \\\n\t\t\t& 700& 4.840 & 6.334 & 6.289 & 6.365& 135.7 \\\n\t\t\t\hline \n\t\t\t& 100& 4.836 & 6.292 & 6.279 & 6.313& 1593.5 \\\n\t\t\t&250& 4.749 & 6.629 & 6.569 & 6.658& 633.4 \\\n\t\t\t500,000 & 400& 4.889 & 6.409 & 6.401 & 6.438 & 398.1 \\\n\t\t\t& 550 &4.867 & 6.343 & 6.309 & 6.373& 288.8 \\\n\t\t\t& 700& 4.887 & 6.131 & 6.112 & 6.159& 227.7 \\\n\t\t\t\hline\n\t\end{tabular*}}\n\t\label{table:t3}\n\end{table}\n\n\n\n\section{Integrative data analysis of human heart failure}\n\nFinally, we apply our proposed estimation procedure to identify genes whose expressions are possibly causally linked to heart failure by integrating \textsc{gwas} and e\textsc{qtl} data. \textsc{gwas} results were obtained from a heart failure genetic association study at the University of Pennsylvania, a prospective study of patients recruited from the University of Pennsylvania, Case Western Reserve University, and the University of Wisconsin, where genotype data were collected from 4,523 controls and 2,206 cases using the Illumina OmniExpress Plus array. \textsc{gwas} summary statistics were calculated controlling for age, gender, and the first two principal components of the genotypes. \n\nHeart failure e\textsc{qtl} data were obtained from the MAGNet e\textsc{qtl} study (https:\/\/www.med.upenn.edu\/magnet\/index.shtml), where the left ventricular free-wall tissue was collected from 136 donor hearts without heart failure. Genotype data were collected using the Affymetrix genome-wide SNP array 6.0 and only markers in Hardy-Weinberg equilibrium with minor allele frequencies above 5\% were considered. Gene expression data were collected using Affymetrix GeneChip ST1.1 arrays, normalized using \textsc{RMA} \citep{irizarry2003exploration} and batch-corrected using ComBat \citep{johnson2007adjusting}. 
To obtain a common set of \textsc{snp}s, \n\textsc{snp}s were imputed using 1000 Genomes Project data. Summary statistics for the MAGNet e\textsc{qtl} data were obtained using the fast marginal regression algorithm of \citet{sikorska2013gwas} controlling for age and gender.\n\n\subsection{Ranking of Potential Heart Failure Causal Genes}\nAfter matching the \textsc{snp}s of both e\textsc{qtl} and \textsc{gwas} data, we had a total of 347,019 \textsc{snp}s and 19,081 genes with expression data available. For each gene, the vector of $z$-scores from the e\textsc{qtl} analysis was obtained and the value of its absolute inner product with the vector of $z$-scores from the \textsc{gwas} data was estimated by our proposed estimator $\widehat{T_K^{S*}(\theta,\mu)}$. We chose $K=8$ as in our simulation study. After obtaining the estimates of the $T$-scores for all the genes, and the corresponding \textsc{snp}-heritability, we ranked these genes in the order of their normalized $T$-scores as defined in Section 2.5. \nAs a result, genes with the highest ranks are considered important in gaining insights into the biological mechanisms of heart failure.\n\nTo assess whether the top-scoring genes indeed represent true biological signals, we calculated the $T$-scores for two ``null datasets'' created using permutations. For the first dataset, we randomly permuted the labels of the \textsc{snp}s of the \textsc{gwas} $z$-scores by sampling without replacement before estimating the normalized $T$-scores with the e\textsc{qtl} $z$-scores. For the second dataset, we permuted the labels of the \textsc{snp}s of the \textsc{gwas} $z$-scores in a circular manner similar to \cite{cabrera2012uncovering}. Specifically, for each chromosome, we randomly chose one \textsc{snp} as the start of the chromosome and moved the \textsc{snp}s on the fragment before this \textsc{snp} to the end. Such a cyclic permutation preserves the local dependence of the $z$-scores. 
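The circular permutation scheme can be sketched as follows (our own illustration, not the paper's code; `chrom` gives the chromosome label of each SNP):

```python
import numpy as np

def cyclic_permute(z, chrom, rng=None):
    """Circularly shift the z-scores within each chromosome: pick a
    random SNP as the new start and move the preceding fragment to the
    end, preserving the local (LD-induced) dependence structure."""
    rng = np.random.default_rng() if rng is None else rng
    z_perm = np.empty_like(z)
    for c in np.unique(chrom):
        idx = np.where(chrom == c)[0]
        cut = rng.integers(len(idx))       # random start position
        z_perm[idx] = np.roll(z[idx], -cut)
    return z_perm
```

Within each chromosome the permuted vector is an exact rotation of the original, so marginal distributions and local correlations are preserved while the matching to the other phenotype is broken.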
\nBy permuting the data from one phenotype, we break the original matching of the $z$-scores between the two phenotypes. The permutation was performed 50 times and the null distribution of the $T$-scores based on the permuted data was obtained. Figure \ref{permute} shows the ranked normalized $T$-scores based on the original data and the box plots of the ranked $T$-scores based on 50 permutations of the $z$-scores. We find that all the top ranked genes have larger $T$-scores than the ones based on permutations. In addition, about 30 top ranked genes in the top plot and about 10 top ranked genes in the bottom plot have true $T$-scores larger than all the $T$-scores from the permuted datasets. This further confirms that the top ranked genes based on their estimated normalized $T$-scores are not due to random noise and indeed represent certain sharing of genetic variants between heart failure and gene expression levels. \n\n\begin{figure}[h!]\n\t\includegraphics[angle=0,width=14cm]{rank_plot_perm1.pdf}\n\t\includegraphics[angle=0,width=14cm]{Perm2.pdf}\n\t\caption{Estimated score for the top 50 genes and the box plots of the top scores based on 50 permutations. Top: random permutation of the \textsc{gwas} scores; bottom: cyclic permutations of the \textsc{gwas} scores. }\n\t\label{permute}\n\end{figure} \n\n\nTable \ref{genes} lists the eight highest-ranked genes along with their biological annotations. All of the genes are either directly or indirectly associated with human heart failure, including \nthose related to fibrotic myocardial degeneration, Wnt signalling activity and heart-valve development. It is interesting that our proposed methods can identify these relevant genes using only the gene expression data measured on normal heart tissues. 
\n\n\\begin{table}\n\t\\caption{Top eight heart failure associated genes based on the estimated normalized $T$-scores and their functional annotations.}\\label{genes}\n\t\\vskip .2cm\n\t\\centerline{\\tabcolsep=3truept\\begin{tabular*}{ 1 \\textwidth}{lll}\n\t\t\t\\hline \n\t\t\tGene Name & Annotations \\\\ \n\t\t\t\\hline \n\t\t\tTMEM37 & voltage-gated ion channel activity \\citep{chen2007calcium} \\\\\n\t\t\tADCY7 & adenylate cyclase activity; fibrotic myocardial \\\\\n\t\t\t& degeneration \\citep{nojiri2006oxidative}\\\\\n\t\t\tC1QC & Wnt signaling activity; associated with heart \\\\\n\t\t\t& failure \\citep{naito2012complement} \\\\\n\t\t\tFAM98A & associated with ventricular septal defect \\citep{liu2018differential}\\\\\n\t\t\tBMP2 & associated with heart-valve development \\\\ &\\citep{rivera2006bmp2}\\\\\n\t\t\tSLCO2B1 & organic anion transporter; associated with cardiac glycoside \\\\\n\t\t\t& \\citep{mikkaichi2004organic} \\\\\n\t\t\tC1QA & Wnt signaling activity; associated with heart \\\\\n\t\t\t& failure \\citep{naito2012complement} \\\\\n\t\t\tFCGR2B & intracellular signaling activity; associated with vascular \\\\\n\t\t\t& disease pathogenesis \\citep{tanigaki2015fcgamma}\\\\\n\t\t\t\\hline\n\t\\end{tabular*}}\n\t\\label{table:t4}\n\\end{table}\n\n\n\n\\subsection{Gene Set Enrichment Analysis}\nTo complete our analysis, we finish this section with the gene set enrichment analysis (\\textsc{gsea}) \\citep{subramanian2005gene} using the normalized $T$-scores in order to identify the heart failure associated biological processes. In the following analysis, we removed genes with low expression and small variability across the samples, which resulted in a total of 6,355 genes. 
\nFor the normalized $T$-scores $T_j$, $1\le j\le J$, given a gene set $S$, the Kolmogorov-Smirnov statistic, defined as\n\[\n\sup_t \bigg| \frac{1}{k}\sum_{j\in S}I(T_j \le t) -\frac{1}{k'}\sum_{j\in S^c}I(T_j \le t) \bigg|,\n\]\nis calculated, where $k$ and $k'$ are the numbers of genes in $S$ and $S^c$, respectively. This tests whether the distribution of $T_j$ differs between genes in $S$ and genes in $S^c$. For a given gene set, significance of this test implies that this gene set is enriched by genes that share genetic variants similar to those associated with heart failure, suggesting their relevance to the etiology of heart failure. \n\nThis analysis was applied to the gene sets from Gene Ontology (\textsc{go}) (\citealt{botstein2000gene}) that contain at least 10 genes. Specifically, 5,023 biological processes were tested.\nFigure 2 in our supplementary material presents the directed acyclic graphs of the \textsc{go} biological processes that are linked to the most significant \textsc{go} terms from the simultaneous signal \textsc{gsea} analysis. Table \ref{table:t5} shows the top six \textsc{go} biological processes identified from the \textsc{gsea} analysis. Among these processes, regulation of skeletal muscle contraction, linoleic acid metabolic process and calcium ion regulation are strongly implicated in human heart failure. \cite{murphy2011cardiovascular} showed that skeletal muscle reflexes are essential to the initiation and regulation of the cardiovascular response to exercise, and alteration of this reflex mechanism can happen in disease states such as hypertension and heart failure. In \cite{farvid2014dietary}, a systematic review of the literature and a meta-analysis was carried out, which supports a significant inverse association between dietary linoleic acid intake, when replacing either carbohydrates or saturated fat, and risk of coronary heart disease. 
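The statistic above is the standard two-sample Kolmogorov-Smirnov distance between the empirical distributions of the scores inside and outside the gene set; a minimal implementation (our own sketch) is:

```python
import numpy as np

def ks_statistic(t_in, t_out):
    """Two-sample Kolmogorov-Smirnov statistic between the normalized
    T-scores of genes in a gene set (t_in) and outside it (t_out)."""
    grid = np.sort(np.concatenate([t_in, t_out]))
    F_in = np.searchsorted(np.sort(t_in), grid, side='right') / len(t_in)
    F_out = np.searchsorted(np.sort(t_out), grid, side='right') / len(t_out)
    return np.max(np.abs(F_in - F_out))
```

Significance can then be assessed, for example, by permuting gene-set membership and recomputing the statistic.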
Moreover, the importance of calcium-dependent signaling in heart failure was reported in \cite{marks2003calcium}, who suggested that impaired calcium release causes decreased muscle contraction (systolic dysfunction) and defective calcium removal hampers relaxation (diastolic dysfunction).\n\n\n\begin{table}\n\t\centering\n\t\caption{Top six \textsc{go} biological processes that are associated with heart failure based on the gene set enrichment analysis.}\n\t\vspace{0.3cm}\n\t\begin{tabular}{lc}\n\t\t\hline \n\t\t\textsc{go} term & p-value \\ \n\t\t\hline \n\t\t\emph{Biological Process} &\\\n\t\tregulation of skeletal muscle contraction by regulation of release&\\\n\t\tof sequestered calcium ion & $7.9\times 10^{-7}$\\\n\t\tlinoleic acid metabolic process & $1.0\times 10^{-6}$\\\n\t\tregulation of skeletal muscle contraction by calcium ion signaling & $3.4\times 10^{-6}$ \\\n\t\tpositive regulation of sequestering of calcium ion & $3.4\times 10^{-6}$\\\n\t\tcellular response to caffeine& $1.0\times 10^{-5}$\\\n\t\tcellular response to purine-containing compound & $1.0\times 10^{-5}$\\\n\t\t\hline\n\t\end{tabular}\n\t\label{table:t5}\n\end{table}\n\n\n\n\section{Discussion}\n\nThis paper considers optimal estimation over sparse parameter spaces. However, in Section 2, the minimax rates of convergence were only established for the parameter spaces $D^{\infty}(n^{\beta},L_n)$ with $\beta\in (0,1\/2)\cup (1\/2,1)$. This leaves a gap at $\beta=1\/2$. Our theoretical analysis suggests a lower bound (\ref{sparse.lower.equation4}) with the rate function $\psi(s,n)\asymp \log^{-1}n$. However, none of our proposed estimators attain such rates, leaving open the question of how to optimally estimate $T(\theta,\mu)$ when $s\asymp \sqrt{n}$.\n\n\nIn some applications, we may need to consider non-sparse parameter spaces. 
In this case, our theoretical analysis shows that the estimator $\\widehat{T_K(\\theta,\\mu)}$ with $K=r\\log n$ for some small constant $r>0$ can still be applied. Specifically, from our proof of Theorem 1 and Theorem 2, it follows that, if we define the non-sparse parameter space as $\\mathcal{D}^{\\infty}_U(L_n)=\\big\\{(\\theta,\\mu,\\boldsymbol{\\Sigma}_1,\\boldsymbol{\\Sigma}_2): (\\theta,\\mu)\\in \\mathbb{R}^n\\times \\mathbb{R}^n,\\max(\\|\\theta\\|_\\infty , \\|\\mu\\|_\\infty )\\le L_n, \\boldsymbol{\\Sigma}_1=\\boldsymbol{\\Sigma}_2={\\bf I}_n \\big\\}$ with $L_n\\gtrsim \\sqrt{\\log n}$, then for $\\mathbf{x_n} \\sim N(\\theta,\\boldsymbol{\\Sigma}_1)$ and $\\mathbf{y_n} \\sim N(\\mu,\\boldsymbol{\\Sigma}_2)$, the minimax optimal rate for estimating $T(\\theta,\\mu)$ over $\\mathcal{D}^{\\infty}_U(L_n)$ is\n\\begin{equation} \n\\inf_{\\hat{T}}\\sup_{(\\theta,\\mu,\\boldsymbol{\\Sigma}_1,\\boldsymbol{\\Sigma}_2)\\in \\mathcal{D}^{\\infty}_U(L_n) } \\mathcal{R}(\\hat{T} ) \\asymp \\frac{L_n}{ \\log n},\n\\end{equation}\nwhich can be attained by the above $\\widehat{T_K(\\theta,\\mu)}$. \n\nIn addition to the estimation problem, it is also of interest to conduct hypothesis testing or construct confidence intervals for $T$-score. These problems can be technically challenging due to the non-smooth functional. Moreover, the polynomial approximation technique adopted in the current work may introduce extra complexities to the higher-order asymptotic analysis. 
We leave these problems for future investigations.\n\n\n\n\n\\vskip 14pt\n\\noindent {\\large\\bf Supplementary Materials}\n\nOur supplementary material includes proofs of the main theorems and the technical lemmas.\n\\par\n\\bibliographystyle{chicago}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Appendix A: Incompatibility robustness SDP and its dual}\n\nThe incompatibility robustness $1+\\mathcal{IR}$, can be cast as the following SDP\n\\begin{eqnarray}\n\\min_{\\tilde{G}_{\\lambda}}\\,&& \\sum_{\\lambda}\\frac{\\tr [\\tilde{G}_{\\lambda} ]}{d} \\\\\n\\label{Eq:constraint1}\\text{s. t.: }&& \\sum_{\\lambda}D(a|x,\\lambda) \\tilde{G}_{\\lambda} \\geq M_{a|x} \\text{ for all }a,x \\\\\n\\label{Eq:constraint2} && \\tilde{G}_{\\lambda}\\geq 0 \\\\\n&& \\sum_{\\lambda} \\tilde{G}_{\\lambda} = \\frac{\\ensuremath{\\mathds{1}}_d}{d} \\sum_{\\lambda} \\tr[\\tilde{G}_{\\lambda}]\\notag,\n\\end{eqnarray}\nwhere $d$ is the dimension of the space of the $G_{\\lambda}$ such that \n$\\sum_{\\lambda} G_{\\lambda} = \\ensuremath{\\mathds{1}}_d$. The number of labels $a$ we denote by $|a|$, and similarly $|x|$ denotes the number of labels $x$. The number of constraints in Eq.~\\eqref{Eq:constraint1} is $|a|\\cdot |x|$ and in Eq.~\\eqref{Eq:constraint2} it is $|a|^{|x|}$.\nThis SDP has both, equalities and inequalities as constraints and it is of the general form\n\\begin{eqnarray}\n\\min_{X}&& \\langle A,X \\rangle \\\\\n\\text{s. t.: }&& \\Phi(X)=B_1 \\notag \\\\\n&& \\Psi(X)\\geq B_2 \\notag \\\\\n&& X\\geq 0. \\notag\n\\end{eqnarray}\nWe choose $A=\\ensuremath{\\mathds{1}}_{d\\cdot |a|^{|x|}} \\qquad\\text{and}\\qquad \nX=\\text{diag}\\qty[\\tilde{G}_{\\lambda}\/d]_{\\lambda}\\in\\mathcal{M}(\\mathbb{C}^{d\\cdot \n|a|^{|x|}})$, which is a block diagnoal matrix with the submatrices \n$\\tilde{G}_{\\lambda}\/d$. The objective function then reads $\\langle A,X \\rangle \n= \\tr[X]=\\sum_\\lambda \\tr [X_\\lambda]$. 
For the equality constraint we choose \n$B_1=0$ and define a mapping $\Phi: \mathcal{M}(\mathbb{C}^{d\cdot |a|^{|x|}})\mapsto \n\mathcal{M}(\mathbb{C}^{d})$ by\n\begin{equation}\n \Phi(X) = \ensuremath{\mathds{1}}_d \tr[X]-d \sum_\lambda X_\lambda.\n\end{equation}\nFor the inequality constraint we choose $B_2=\text{diag}\qty[M_{a|x}]_{a,x}$ and we define $\Psi(X): \mathcal{M}(\mathbb{C}^{d\cdot |a|^{|x|}})\mapsto \mathcal{M}(\mathbb{C}^{d\cdot |a|\cdot |x|})$ by\n\begin{equation}\n \Psi(X) = \text{diag}\qty[d\sum_{\lambda}D(a|x,\lambda) X_{\lambda}]_{a,x}.\n\end{equation}\nThe dual problem then reads~\cite{JWatrous_Notes_2011}\n\begin{eqnarray}\n\max_{Z,Y}&& \langle B_1,Z \rangle + \langle B_2,Y \rangle \\\n\text{s. t.: }&& \Phi^{\dagger}(Z) + \Psi^{\dagger}(Y) \leq A \notag\\\n&& Z\text{ is Hermitian} \notag \\\n&& Y\geq 0. \notag\n\end{eqnarray}\nThe adjoint $\Psi^{\dagger}(Y)$ is straightforward to obtain~\cite{Piani_Watrous_Char_of_EPR_Steering}, that is\n\begin{eqnarray}\n \Psi^{\dagger}(Y) = \text{diag}\qty[d\sum_{a,x}D(a|x,\lambda) Y^{a|x}]_{\lambda}.\n\end{eqnarray}\nTo find $\Phi^{\dagger}(Z)$ we write\n\begin{eqnarray}\n \tr[\Phi(X)Z]&=&\tr[\tr(X) Z]-\tr[d\sum_{\lambda} X_{\lambda} Z] \\\n &=& \tr[X\qty{\tr(Z)\ensuremath{\mathds{1}}_{d\cdot |a|^{|x|}}-d(Z\oplus Z\oplus \cdots \oplus Z)}] \notag \\\n &=& \tr[X \Phi^{\dagger}(Z)]\notag.\n\end{eqnarray}\nFrom this we directly obtain the dual of the robustness SDP as\n\begin{eqnarray}\n\label{Eq:IRDual}\n\max_{Y_{a|x}}&& \sum_{a,x} \tr[M_{a|x}Y^{a|x}] \\\n\text{s. t.: }&& \text{diag}\qty[d\sum_{a,x}D(a|x,\lambda) \nY^{a|x}]_{\lambda}\notag\\\n&&+ \tr(Z)\ensuremath{\mathds{1}}_{d\cdot |a|^{|x|}}-d(Z\oplus Z \oplus \cdots \oplus Z) \leq \ensuremath{\mathds{1}}_{d\cdot |a|^{|x|}} \notag\\\n&& Z \text{ is Hermitian} \notag\\\n&& Y\geq 0. 
\\notag\n\\end{eqnarray}\n\n\n\\section{Appendix B: Upper bound on the success probability for sets on compatible POVMs}\n\nNext, we show that whenever a set of jointly measurable POVMs is used to discriminate the optimal state assemblage $\\qty{Y^{a|x}}$, we find that\n\\begin{eqnarray}\n \\sum_{a,x} \\tr[O_{a|x}Y^{a|x}] &=& \\sum_{a,x,\\lambda} D(a|x,\\lambda) \\tr[J_{\\lambda}Y^{a|x}] \\\\\n &=:& \\sum_{\\lambda} \\tr[J_{\\lambda}\\tilde{Y}^{\\lambda}].\n\\end{eqnarray}\nFrom the first constraint of the dual program in Eq.~\\eqref{Eq:IRDual} we obtain (from each block labeled by $\\lambda$) that $\\tilde{Y}^{\\lambda}\\leq\\frac{\\ensuremath{\\mathds{1}}_d}{d}(1-\\tr Z)+Z$. This leads to\n\\begin{eqnarray}\n \\sum_{\\lambda} \\tr[J_{\\lambda}\\tilde{Y}^{\\lambda}] &\\leq& \\sum_{\\lambda} \\tr[J_{\\lambda}\\frac{\\ensuremath{\\mathds{1}}_d}{d}(1-\\tr Z)+Z] \\notag\\\\\n &=& \\tr[\\frac{\\ensuremath{\\mathds{1}}_d}{d}(1-\\tr Z)+Z] = 1.\n\\end{eqnarray}\nHence, for any set of jointly measurable POVMs it holds that\n\\begin{equation}\n\\label{Eq:PsuccJM}\n \\sum_{a,x} \\tr[O_{a|x}Y^{a|x}] \\leq 1.\n\\end{equation}\n\n\\section{Appendix C: Construction of the state discrimination task with prior information from the optimal dual variable}\n\nFrom the optimal solution of the dual we also construct a state discrimination task with prior information in the following way. 
First observe that\n\\begin{eqnarray}\nY^{a|x} &=& \\tr[Y]\\frac{\\sum_{a'}\\tr[Y^{a'|x}]}{\\tr[Y]}\\frac{\\tr[Y^{a|x}]}{\\sum_{a'}\\tr[Y^{a'|x}]}\\frac{Y^{a|x}}{\\tr[Y^{a|x}]}\\notag\\\\\n&=& \\tr[Y]p(x)p(a|x)\\varrho_{a|x},\n\\end{eqnarray}\nwhich defines the prior $p(x)$, the conditional probability distribution $p(a|x)$, and the normalized states $\\varrho_{a|x}$. Inserting this into the objective function of the dual in Eq.~\\eqref{Eq:IRDual} yields\n\\begin{eqnarray}\n &&\\sum_{a,x} \\tr[M_{a|x}Y^{a|x}] \\notag \\\\\n &=& \\tr[Y] \\sum_{a,x} p(x)p(a|x) \\tr[M_{a|x}\\varrho_{a|x}] \\notag \\\\\n &=& \\tr[Y]\\, p_{\\text{succ}}(M_{a|x}, \\varrho_{a|x}).\n\\end{eqnarray}\nThen, using Eq.~\\eqref{Eq:PsuccJM}, the ratio of success probabilities in Eq.~\\eqref{Eq:PsuccRatio} is lower bounded by\n\\begin{eqnarray}\n &&\\frac{p_{\\text{succ}}(M_{a|x}, \\varrho_{a|x})}{\\max_{O_{a|x}\\in JM} p_{\\text{succ}}(O_{a|x},\\varrho_{a|x})} \\\\\n &=& \\frac{\\sum_{a,x} \\tr[M_{a|x}Y^{a|x}]}{\\max_{O_{a|x}\\in JM} \\sum_{a,x} \\tr[O_{a|x}Y^{a|x}]}\\\\\n &\\geq& \\sum_{a,x} \\tr[M_{a|x}Y^{a|x}] =1+\\mathcal{IR}.\n\\end{eqnarray}\nThe inequality follows from Eq.~\\eqref{Eq:PsuccJM}, while the last equality holds because $\\qty{Y^{a|x}}$ is the optimal solution of the dual in Eq.~\\eqref{Eq:IRDual}. From this, Observation~\\ref{Obs:JMisusefull} follows.\n\n\n\\section{Appendix D: Robustness of sets of measurements and conic programming} \n\nDenote the set of free POVMs by $F$. Let $|x|$ be the number of POVMs. The generalized robustness is defined by\n\\begin{equation}\n \\mathcal{R}_F(M_{a|x}) = \\min\\qty{t\\geq 0 | \\frac{M_{a|x}+tN_{a|x}}{1+t}=O_{a|x}\\in F}.\n\\end{equation}\nThis can be cast as the following conic program\n\\begin{eqnarray}\n1+\\mathcal{R}_F(M_{a|x}) = \\min_t &&\\, 1+t \\\\\n\\text{s. t.: }&& t\\geq 0 \\\\\n&& \\frac{M_{a|x}+tN_{a|x}}{1+t}=O_{a|x}\\in F \\\\\n&& \\qty{N_{a|x}}\\text{ is a POVM}.\n\\end{eqnarray}\nEliminating $N_{a|x}$, one obtains\n\\begin{eqnarray}\n\\min_t &&\\,1+t \\\\\n\\text{s. 
t.: }&& t\\geq 0 \\\\\n&& (1+t) O_{a|x} - M_{a|x} \\geq 0 \\\\\n&& O_{a|x} \\in F.\n\\end{eqnarray}\nBy defining $\\tilde{O}_{a|x}=(1+t) O_{a|x}$ this can be written as\n\\begin{eqnarray}\n\\min_{\\tilde{O}_{a|x}}&& \\frac{1}{\\abs{x}} \\sum_{a,x} \\frac{\\tr[\\tilde{O}_{a|x}]}{d} \\\\\n\\text{s. t.: }&& \\tilde{O}_{a|x} \\geq M_{a|x} \\\\\n&& \\tilde{O}_{a|x} \\in C_F,\n\\end{eqnarray}\nwhere $C_F$ is the cone with base $F$. This can be brought into the form of Eq.~(\\ref{Eq:ConicProgram}) by choosing $A=-\\frac{1}{\\abs{x} d}\\ensuremath{\\mathds{1}}$, $X=\\text{diag}(\\tilde{O}_{a|x})_{a,x}$, $B=-\\text{diag}(M_{a|x})_{a,x}$, $\\Lambda=-\\mathrm{id}$. Then, the dual cone program reads\n\\begin{eqnarray}\n\\max_{Y^{a|x}}&& \\sum_{a,x} \\tr[M_{a|x}Y^{a|x}] \\\\\n\\text{s. t.: }&& -Y+\\frac{1}{|x| d}\\ensuremath{\\mathds{1}} \\in C_F^* \\\\\n&& Y \\geq 0.\n\\end{eqnarray}\nThe first constraint translates to $\\braket{\\frac{1}{|x| d}\\ensuremath{\\mathds{1}}-Y}{T}\\geq 0$ for all $T\\in C_F$. Hence, $\\tr[YT]\\leq \\tr[T]\/(|x| d)$ for all $T\\in C_F$, or equivalently $\\tr[YT]\\leq 1$ for all $T\\in F$, since every $T\\in F$ satisfies $\\tr[T]=|x| d$. The final form of the dual then reads\n\\begin{eqnarray}\n\\max_{Y^{a|x}}\\,&& \\sum_{a,x} \\tr[M_{a|x}Y^{a|x}] \\\\\n\\text{s. t.: }&& Y \\geq 0 \\\\\n&& \\tr[YT]\\leq 1 \\text{ for all } T\\in F.\n\\end{eqnarray}\nNote that the last constraint is a typical property of a witness. 
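As a quick sanity check of the change of variables $\tilde{O}_{a|x}=(1+t)O_{a|x}$ used above, the following minimal Python sketch verifies numerically that the objective $\frac{1}{\abs{x}}\sum_{a,x}\tr[\tilde{O}_{a|x}]/d$ collapses to $1+t$ whenever each $\{O_{a|x}\}_a$ is a POVM. Diagonal (commuting) POVMs suffice for this trace identity; all numerical values are illustrative only.

```python
# Sanity check: with tilde{O}_{a|x} = (1+t) O_{a|x} and each {O_{a|x}}_a a POVM on C^d,
# the objective (1/|x|) sum_{a,x} tr[tilde{O}_{a|x}] / d equals 1 + t.
# Diagonal (commuting) POVMs are enough for this trace identity; numbers are illustrative.
import random

random.seed(0)
d, n_out, n_set = 3, 2, 4      # dimension d, outcomes |a|, settings |x| (toy values)
t = 0.37                       # arbitrary robustness parameter

def random_diag_povm(d, n_out):
    """A random diagonal POVM: each diagonal entry is a probability vector over outcomes."""
    povm = [[0.0] * d for _ in range(n_out)]
    for i in range(d):
        w = [random.random() for _ in range(n_out)]
        s = sum(w)
        for a in range(n_out):
            povm[a][i] = w[a] / s
    return povm

povms = [random_diag_povm(d, n_out) for _ in range(n_set)]
# tr[tilde{O}_{a|x}] = (1+t) * (sum of the diagonal of O_{a|x})
objective = sum((1 + t) * sum(O_ax) / d for povm in povms for O_ax in povm) / n_set
assert abs(objective - (1 + t)) < 1e-12
```

The check relies only on the completeness relation $\sum_a \tr[O_{a|x}] = d$ for each setting $x$, which is exactly why the rescaled objective measures $1+t$ directly.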
Similar results hold for state assemblages $\\qty{\\varrho_{a|x}}_{a,x}$ by simply dropping the factor $1\/d$, since the operators $\\sum_a\\varrho_{a|x}$ have unit trace.\n\n\n\\section{Appendix E: Robustness of state assemblages and conic programming}\n\nFor a given free set of assemblages $F$, an assemblage $\\{\\varrho_{a|x}\\}_{a,x}$, and a subchannel discrimination task $(\\mathbf{\\Lambda},\\mathbf{N})$ we have\n\\begin{align}\n \\dfrac{p_{\\text{succ}}(\\{\\varrho_{a|x}\\},\\mathbf{\\Lambda},\\mathbf{N})}{\\max_{F} p_{\\text{succ}}(\\{\\sigma_{a|x}\\},\\mathbf{\\Lambda},\\mathbf{N})} \\leq 1+\\mathcal{R}_F(\\varrho_{a|x}).\n\\end{align}\nTo formulate the statement of Theorem~\\ref{Mainresult} for state assemblages we note that the primal problem for the robustness of an assemblage $\\{\\varrho_{a|x}\\}_{a,x}$ with respect to a free set $F$ of assemblages reads\n\\begin{eqnarray}\n\\min_{\\tilde{\\sigma}_{a|x}}\\, && \\frac{1}{\\abs{x}} \\sum_{a,x}\\tr[\\tilde{\\sigma}_{a|x}] \\\\\n\\text{s. t.: }&& \\tilde{\\sigma}_{a|x} \\geq \\varrho_{a|x}, \\quad \\tilde{\\sigma}_{a|x} \\in C_F, \\notag\n\\end{eqnarray}\nwhere $\\tilde{\\sigma}_{a|x}=(1+t)\\sigma_{a|x}$. The dual program can be written as\n\\begin{eqnarray}\n\\max_{Y^{a|x}}\\,&& \\sum_{a,x} \\tr[\\varrho_{a|x}Y^{a|x}] \\\\\n\\text{s. t.: }&& Y \\geq 0, \\quad \\tr[TY]\\leq 1\\, \\forall T\\in F. \\notag\n\\end{eqnarray}\nWe have again denoted by $Y$ the direct sum of the operators $\\{Y^{a|x}\\}_{a,x}$. Note that Slater's condition can be verified similarly to the case of measurements for the free sets we are interested in.\n\nUsing the techniques introduced in Ref.~\\cite{Piani_Watrous_Char_of_EPR_Steering} it is clear that any witness $Y$ of the above form can be cast as a subchannel discrimination task with one-way LOCC measurements. 
Namely, define subchannels and a POVM as $\\Lambda_a^\\dagger(|x\\rangle\\langle x|)=\\alpha Y^{a|x}$ and $N_x=|x\\rangle\\langle x|$, where $\\alpha=\\|\\sum_{a,x} Y^{a|x}\\|_\\infty^{-1}$ and $\\qty{\\ket{x}}_x$ is an \northonormal basis. If these subchannels do not form an instrument, i.e., $\\sum_a\\Lambda_a^\\dagger(\\openone)\\neq\\openone$, the set can be \ncompleted into one by defining an extra subchannel as \n$\\Lambda(\\varrho)=\\tr[(\\openone-\\sum_a\\Lambda_a^\\dagger(\\openone))\\varrho]\\sigma$, \nwhere $\\sigma$ is some quantum state. It is worth noting that we have one more \nsubchannel in the discrimination problem than we have outputs. \n\n\n\\section{Appendix F: Convexity and compactness of the set of assemblages that can be prepared from states with a fixed Schmidt number}\nConvex combinations of such assemblages can be prepared by increasing the size of \nAlice's system. To be more precise, given that Alice's dimension is $d$ and \nthat one assemblage is prepared with measurements $\\{A_{a|x}\\}_{a,x}$ on the \nstate $\\varrho_{AB}$ and another assemblage with measurements $\\{\\tilde \nA_{a|x}\\}_{a,x}$ on the state $\\tilde\\varrho_{AB}$, we can consider the convex \ncombination \n$\\varrho:=\\lambda\\ketbra{0}{0}\\otimes\\varrho_{AB}+(1-\\lambda)\\ketbra{1}{1}\\otimes \n\\tilde\\varrho_{AB}$ and the measurements $\\hat A_{a|x}:=\\ketbra{0}{0}\\otimes \nA_{a|x}+\\ketbra{1}{1}\\otimes \\tilde A_{a|x}$, where $\\{\\ket{0},\\ket{1}\\}$ is \nthe basis of an auxiliary qubit of Alice.\n\nTo prove the compactness of the desired set of assemblages, we first notice that the extremal points are obtained from pure states and that every extremal point can be reached with a finite-dimensional system on Alice's side. As Bob's system is assumed to be finite-dimensional, any assemblage that is preparable by a state of Schmidt number $n$ (or smaller) can be expressed as a finite convex combination of extremal assemblages. 
Hence, any sequence of assemblages preparable with a state of Schmidt number $n$ (or smaller) can be written as\n\\begin{equation}\n(\\varrho_{a|x}^m)_m=(\\sum_{i=1}^k p_{i|m}\\xi_{a|x}^{i|m})_m,\n\\end{equation}\nwhere $k$ is some fixed finite number depending on the dimension \n(Carath\u00e9odory's theorem), $p_{i|m}$ is a probability distribution for every \n$m$, and $\\xi_{a|x}^{i|m}$ are assemblages preparable with a pure state of Schmidt rank \nat most $n$. The set of assemblages preparable with pure states is \nclearly compact (as it is the image of a Cartesian product of compact sets under \na continuous mapping) and, therefore, for every $i$ we can pick a converging \nsubsequence of $\\xi_{a|x}^{i|m}$. Picking the subsequences for different \nindices $i$ sequentially (i.e., subsequences of subsequences) results in \na subsequence in which convergence is guaranteed for every $i$. Repeating the \nprocedure once more to pick a converging sequence of probability distributions \ngives a subsequence of $(\\varrho_{a|x}^m)_m$ that converges. 
Hence, the set of \nSchmidt number $n$ (or smaller) preparable assemblages is compact.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:intro}Introduction}\n\nIn the ongoing quest for unraveling the nonperturbative structure of QCD, considerable effort has been \ndedicated to the study of Green's (correlation) functions by means of both continuous methods~\\cite{Roberts:1994dr,Alkofer:2000wg,Maris:2003vk,Pawlowski:2003hq,Pawlowski:2005xe,Fischer:2006ub,Aguilar:2006gr, Roberts:2007ji,Aguilar:2008xm, Boucaud:2008ky,Fischer:2008uz,Binosi:2009qm,Tissier:2010ts,Campagnari:2010wc,Pennington:2011xs,Aguilar:2011xe,Vandersickel:2012tz,Serreau:2012cg,Fister:2013bh,Cloet:2013jya,Binosi:2014aea,Kondo:2014sta,Aguilar:2015bud,Binosi:2016rxz,Binosi:2016nme,Corell:2018yil,Cyrol:2017ewj,Gao:2017uox,Huber:2018ned,Pelaez:2021tpq} and large-volume lattice simulations~\\cite{Sternbeck:2005tk,Ilgenfritz:2006he,Cucchieri:2007md,Cucchieri:2007rg,Bogolubsky:2007ud,Bowman:2007du,Cucchieri:2008fc,Cucchieri:2009zt,Bogolubsky:2009dc,Oliveira:2009eh,Oliveira:2010xc,Maas:2011se,Boucaud:2011ug,Oliveira:2012eh,Ayala:2012pb,Bicudo:2015rma}.\nIn this pursuit, the detailed scrutiny of the ghost sector of the theory \nis particularly important, both because of its direct connection with specific scenarios of color confinement \\cite{Kugo:1979gm,Nakanishi:1990qm},\nbut also due to its impact on the nonperturbative \nbehavior of other key Green's functions, such as the gluon propagator and the three-gluon vertex~\\cite{Cucchieri:2006tf,Cucchieri:2008qm,Alkofer:2008jy,\nHuber:2012zj,Aguilar:2013vaa,Pelaez:2013cpa,Blum:2014gna,Eichmann:2014xya,Williams:2015cvx,Blum:2015lsa,Cyrol:2016tym,Duarte:2016ieu,Athenodorou:2016oyh,Boucaud:2017obn,Aguilar:2019jsj,Aguilar:2019uob,Aguilar:2019kxz}.\nIn particular, the nonperturbative masslessness of the ghost is responsible for the vanishing \nof the gluon spectral density at the 
origin~\\cite{Cyrol:2018xeq,Haas:2013hpa,Fischer:2020xnb,Horak:2021pfr}, and for the infrared suppression of the three-gluon vertex~\\cite{Huber:2012zj,Aguilar:2013vaa,Athenodorou:2016oyh,Boucaud:2017obn,Aguilar:2019jsj}. \nIn that sense, the ghost dynamics leave their imprint on a variety of fundamental phenomena, \nsuch as chiral symmetry breaking and the generation of quark constituent masses\\mbox{~\\cite{Aguilar:2018epe,Gao:2021wun,Mitter:2014wpa,Aguilar:2010cn,Fischer:2003rp,Roberts:1994dr}},\nthe emergence of a mass gap in the gauge sector of the theory~\\cite{Cornwall:1981zr,Aguilar:2008xm,Aguilar:2019kxz,Aguilar:2020uqw}, and the dynamical formation of hadronic bound states~\\cite{Maris:2003vk,Maris:1997tm,Cloet:2013jya,Eichmann:2009qa,Eichmann:2016yit} and glueballs~\\cite{Meyers:2012ka,Fukamachi:2016wxf,Souza:2019ylx,Huber:2020ngt}. \n\n\nIn the framework of the Schwinger-Dyson equations (SDEs),\nthe momentum evolution of the ghost dressing function is governed by a\nrelatively simple integral equation, whose main ingredients are the\ngluon propagator and the fully-dressed ghost-gluon vertex.\nIf one treats the \ngluon propagator as external input obtained from lattice simulations (see {\\it e.g.}, ~\\cite{Aguilar:2013xqa}),\nthen the main technical challenge of this approach is the determination of the ghost-gluon vertex.\nIn the Landau gauge, the ghost-gluon vertex is rather special, because, by virtue of Taylor's theorem, \nits renormalization constant is finite~\\cite{Taylor:1971ff}. Of the two possible tensorial structures allowed by Lorentz invariance, only that corresponding to the classical (tree-level) tensor survives in the\ncalculations. 
The form factor associated with it will be denoted by $B_1(r,p,q)$, where\n$r$, $p$, and $q$ are the momenta of the antighost, ghost, and gluon, respectively.\n\n\nThe most complicated aspect of the SDE that determines $B_1(r,p,q)$\nis that, in addition to $B_1(r,p,q)$ itself, the resulting integral equation,\nderived in the so-called ``one-loop dressed'' approximation, \ndepends also on the fully-dressed three-gluon vertex.\nThis latter vertex has a rich tensorial structure~\\cite{Ball:1980ax}, \nand a complicated description at the level of the SDEs\\mbox{~\\cite{Schleifenbaum:2004id,Huber:2012kd,Aguilar:2013xqa,Huber:2012zj,Blum:2014gna,Eichmann:2014xya,Williams:2015cvx, Binosi:2016wcx,Hawes:1998cw,Chang:2009zb,Qin:2011dd};} \ntherefore, it is often approximated by resorting to gauge-technique constructions~\\cite{Salam:1963sa,Salam:1964zk,Delbourgo:1977jc,Delbourgo:1977hq}, \nbased on the Slavnov-Taylor identities (STIs) that it \\mbox{satisfies}.\n\nThe comprehensive treatment of the relevant SDEs presented in~\\cite{Aguilar:2018csq}\ngives rise to a $B_1(r,p,q)$ with a mild momentum dependence and a modest\ndeviation from its tree-level value (see also~\\cite{Aguilar:2013xqa}), and a ghost dressing function, $F(q^2)$, that is in good\n(but not perfect, see, {\\it e.g.}, the right panel of \\mbox{Fig.~16} of~\\cite{Aguilar:2018csq}) agreement with the lattice data~\\cite{Bogolubsky:2009dc}.\n\n\nIn the present work, we take a fresh look at the system of coupled SDEs that determines \nthe ghost dynamics, taking advantage of two recent advances in the area of lattice QCD~\\mbox{\\cite{Aguilar:2021lke,Boucaud:2017ksi, Boucaud:2018xup}}.\nFirst, the simulation of the three-gluon vertex in the ``soft gluon limit'' \\mbox{($q \\to 0$)}~\\cite{Aguilar:2021lke} \nfurnishes accurate data for a special form factor, denoted by $\\Ls(r^2)$, which\nconstitutes a central ingredient of the SDE for $B_1(r,p,q)$,\nwhen computed in the same kinematic limit, namely $B_1(r,-r,0)$. 
\nSecond, the lattice two-point functions employed in our study \nhave been corrected for volume and discretization artifacts,\nonce the scale-setting and continuum-limit extrapolation \nput forth in~\\cite{Boucaud:2017ksi,Boucaud:2018xup} have been implemented.\n\nThe way the aforementioned elements are incorporated into the present analysis is as follows.\nThe starting point is the computation of $F(q^2)$ and $B_1(r,p,q)$ from the\ncoupled system of SDEs they satisfy. In the SDE for $B_1(r,p,q)$, an approximate form of \nthe three-gluon vertex is employed: only the tree-level tensorial structures are retained, and the \nassociated form factors are taken from the STI-based derivation of~\\cite{Aguilar:2019jsj}. \nIn addition, \nthe gluon propagator of~\\cite{Boucaud:2018xup} combined with that of \\cite{Bogolubsky:2009dc}, subjected to the refinements mentioned above, is used in the SDEs as external input.\nThe solution of the system yields an $F(q^2)$ that is in outstanding agreement with the ghost dressing\nfunction of~\\cite{Boucaud:2018xup}. The corresponding\nsolution for $B_1(r,p,q)$, in general kinematics, displays the salient features known from previous studies~\\cite{Schleifenbaum:2004id,Huber:2012kd,Aguilar:2013xqa,Cyrol:2016tym,Mintz:2017qri,Aguilar:2018csq,Huber:2018ned,Aguilar:2019jsj,Barrios:2020ubx}.\nIn fact, one may extract from it various kinematic limits as special cases, \nand, in particular, the two-dimensional ``slice'' that corresponds to the\nsoft gluon limit, thus obtaining $B_1(r,-r,0)$. 
\n\nThe next step is to implement the soft gluon limit \\mbox{($q \\to 0$)} {\\it directly} at the level of the\nSDE for $B_1(r,p,q)$, which is thus converted to a dynamical equation for $B_1(r,-r,0)$.\nBy virtue of this operation, the three-gluon vertex nested in one of the defining Feynman diagrams\nis projected naturally to its soft gluon limit, thus allowing\nus to replace it precisely by the function $\\Ls(r^2)$\nobtained from the lattice analysis of~\\cite{Aguilar:2021lke},\nwithout having to resort to any Ans\\\"atze or simplifying assumptions.\nThe resulting $B_1(r,-r,0)$ is then compared with the\ncorresponding ``slice'' obtained from the full kinematic analysis of $B_1(r,p,q)$ mentioned above,\nrevealing excellent coincidence. This coincidence, in turn, is indicative of an\nunderlying consonance between elements originating from inherently distinct computational\nframeworks, such as the lattice and the SDEs. \n\n\nThe article is organized as follows. In Sec.~\\ref{sec:back} we present the \nnotation and theoretical ingredients that are relevant for our analysis.\nIn Sec.~\\ref{sec:coupled} we set up and solve the coupled system of SDEs\nfor the ghost dressing function and the ghost-gluon vertex in general kinematics. 
\nNext, in Sec.~\\ref{sec:cgsoft} we derive and analyze the SDE\nfor the ghost-gluon vertex in the soft gluon configuration,\ncomparing our results with those obtained in the previous section.\nIn addition, we compare the \nstrong running coupling obtained from the three-gluon vertex with the one constructed from \nthe ghost-gluon vertex, both in the soft gluon configuration.\nIn Sec.~\\ref{sec:conc} we discuss our results and present our conclusions.\nFinally, in Appendix~\\ref{sec:App_renor} we present useful relations between the Taylor and soft gluon\nrenormalization schemes, while in Appendix~\\ref{sec:App_latt} we discuss the treatment\nof finite cut-off effects and lattice scale-setting.\n\n\n\\section{\\label{sec:back}Theoretical background}\n\nIn this section we \nsummarize the main properties of the two- and three-point functions that enter the nonperturbative determination \nof the ghost-gluon vertex, paying particular attention to the soft gluon limit of\nthe three-gluon vertex. Note that in the present study we restrict ourselves to a\n\\emph{quenched} version of QCD, {\\it i.e.}, a pure Yang-Mills theory with no dynamical quarks. \n\n\nThroughout this article we work in the Landau gauge, where the gluon propagator \\mbox{$\\Delta^{ab}_{\\mu\\nu}(q) = -i\\delta^{ab} \\Delta_{\\mu\\nu}(q^2)$} assumes the fully transverse form \n\\begin{equation}\n \\Delta_{\\mu\\nu}(q) = \\Delta(q^2) P_{\\mu\\nu}(q) \\,,\\quad\\quad P_{\\mu\\nu}(q) = g_{\\mu\\nu} - q_\\mu q_\\nu\/q^2 \\,,\\quad\\quad\\, \\Delta(q^2)= \\Dr(q^2)\/q^2.\n\\label{gluon} \n\\end{equation}\n\nAs has been firmly established by a variety of large-volume simulations and continuous studies, \n$\\Delta(q^2)$ saturates at a finite nonvanishing value, a feature which is widely attributed to the emergence\nof a gluonic mass scale~\\cite{Cornwall:1981zr,Aguilar:2008xm}. 
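The defining properties of the transverse projector $P_{\mu\nu}(q)$, namely transversality, idempotency, and $\tr P = d-1$, can be verified with a few lines of numerics. The sketch below uses Euclidean conventions ($g_{\mu\nu}\to\delta_{\mu\nu}$) and an arbitrary test momentum; it is purely illustrative.

```python
# Transverse projector P_{mu nu}(q) = delta_{mu nu} - q_mu q_nu / q^2 (Euclidean sketch):
# check transversality q^mu P_{mu nu} = 0, idempotency P P = P, and tr P = d - 1.
q = [0.7, -1.2, 0.4, 2.1]                      # arbitrary Euclidean test momentum
q2 = sum(qi * qi for qi in q)
d = len(q)
P = [[(1.0 if m == n else 0.0) - q[m] * q[n] / q2 for n in range(d)] for m in range(d)]

# q^mu P_{mu nu} = 0 for every nu
assert all(abs(sum(q[m] * P[m][n] for m in range(d))) < 1e-12 for n in range(d))
# (P P)_{mu nu} = P_{mu nu}
for m in range(d):
    for n in range(d):
        assert abs(sum(P[m][k] * P[k][n] for k in range(d)) - P[m][n]) < 1e-12
# tr P = d - 1: the projector removes exactly the longitudinal direction
assert abs(sum(P[m][m] for m in range(d)) - (d - 1)) < 1e-12
```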
For later convenience,\nthe gluon dressing function, $\\Dr(q^2)$, has also been defined in \\1eq{gluon}.\n\n \n In addition, we introduce the ghost propagator, \\mbox{$D^{ab}(q^2)=i\\delta^{ab}D(q^2)$}, whose \n dressing function, $F(q^2)$, is given by\n\\begin{equation}\nD(q^2) = F(q^2)\/q^2 \\,,\n\\label{ghost}\n\\end{equation}\nand is known to saturate at a finite value in the deep infrared~\\cite{Boucaud:2008ky,Aguilar:2008xm,Dudal:2008sp,Fischer:2008uz}.\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[scale=0.65]{fig1}\n \\caption{Diagrammatic representation of ($a$)~the ghost-gluon vertex, and ($b$)~the three-gluon vertex, with their respective momenta conventions. All momenta are incoming, $q+p+r=0$. }\n \\label{vertices}\n\\end{figure}\n\nTurning to the three-point sector of the theory,\nwe introduce the ghost-gluon vertex,\n\\mbox{$\\fatg_\\mu^{mna}(r,p,q) = -gf^{mna} \\fatg_\\mu(r,p,q)$}, \nand the three-gluon vertex, \\mbox{$\\fatg^{abc}_{\\alpha\\mu \\nu}(q,r,p) =gf^{abc}\\fatg_{\\alpha\\mu \\nu}(q,r,p)$},\ndepicted diagrammatically in the panels ($a$) and ($b$) of Fig.~\\ref{vertices}, respectively.\n\nIn a series of works~\\cite{Aguilar:2011xe,Binosi:2012sj,Ibanez:2012zk,Binosi:2017rwj,Aguilar:2017dco},\nthe emergence of an infrared finite gluon propagator from the corresponding SDE has been connected \nwith certain outstanding nonperturbative features of the fundamental vertices\n$\\fatg_\\mu(r,p,q)$ and $\\fatg_{\\alpha\\mu \\nu}(q,r,p)$. 
\nSpecifically, both vertices are composed of two distinct types of terms, according to\n\\begin{equation}\n\\fatg_{\\mu}(r,p,q) = \\Gamma_{\\mu}(r,p,q)+ V_{\\mu}(r,p,q),\\qquad\n\\fatg_{\\alpha\\mu \\nu}(q,r,p) = \\Gamma_{\\alpha\\mu \\nu}(q,r,p)+ V_{\\alpha\\mu \\nu}(q,r,p)\\,.\n\\label{fatG}\n\\end{equation}\nThe terms $V_{\\mu}(r,p,q)$ and $V_{\\alpha\\mu \\nu}(q,r,p)$ are purely nonperturbative\nand contain longitudinally coupled massless poles; \nwhen inserted into the SDE of the gluon propagator, \nthey trigger the\n{\\it Schwinger mechanism}~\\cite{Schwinger:1962tn,Schwinger:1962tp,Jackiw:1973tr,Eichten:1974et}, \ninducing the infrared finiteness of the gluon propagator.\nIt is important to emphasize that these terms drop out from transversely projected Green's functions, or lattice ``observables'',\ndue to the property\\footnote{\\ Equivalently, the general tensorial structure of the pole vertices is given by \n\\mbox{$V_{\\mu}(r,p,q) = \\frac{q_\\mu}{q^2} A(r,p,q)$} and \n\\mbox{$V_{\\alpha\\mu\\nu}(q,r,p) = \\frac{q_\\alpha}{q^2} B_{\\mu\\nu}(q,r,p) +\n\\frac{r_\\mu}{r^2} C_{\\alpha\\nu}(q,r,p) + \\frac{p_\\nu}{p^2} D_{\\alpha\\mu}(q,r,p)$.}} \n\\begin{equation}\n\\label{eq:transvp}\n {P}_{\\mu'}^{\\mu}(q) V_{\\mu}(r,p,q) =0\\,, \\qquad\\qquad\n {P}_{\\alpha'}^{\\alpha}(q){P}_{\\mu'}^{\\mu}(r){P}_{\\nu'}^{\\nu}(p) V_{\\alpha\\mu\\nu}(q,r,p) = 0 \\,.\n\\end{equation}\n\nOn the other hand, the terms $\\Gamma_{\\mu}(r,p,q)$ and $\\Gamma_{\\alpha\\mu \\nu}(q,r,p)$ \ndenote the pole-free components of the two vertices. 
For large momenta, they\ncapture the standard perturbative contributions, while in the deep infrared they\nmay be finite or diverge logarithmically, depending on whether or not \nthey are regulated by the nonperturbative gluon mass scale~\\cite{Aguilar:2013vaa}.\n\n\nThe most general tensorial decomposition of $\\Gamma_\\mu(r,p,q)$ can be written as \n\\begin{equation}\n \\label{decomp}\n \\Gamma_\\mu(r,p,q) = B_1(r,p,q)r_\\mu + B_2(r,p,q)q_\\mu\\,,\n\\end{equation}\nwhere $B_i(r,p,q)$ are the corresponding form factors.\nAt tree level, \\mbox{$\\Gamma^{(0)}_\\mu =r_\\mu $}, and so \\mbox{$B_1^{(0)}=1$} and \\mbox{$B_2^{(0)}=0$}. In addition, by virtue of Taylor's theorem~\\cite{Taylor:1971ff}, the renormalization constant associated with $\\Gamma_\\mu(r,p,q)$ is finite.\n\n\nThe vertex $\\Gamma_{\\alpha\\mu \\nu}(q,r,p)$ is composed of fourteen linearly independent tensors.\nA standard basis, which manifestly reflects the Bose symmetry of $\\Gamma_{\\alpha\\mu \\nu}(q,r,p)$,\nis the one introduced in~\\cite{Ball:1980ax}; see also Eqs.~(3.4) and (3.6) of~\\cite{Aguilar:2019jsj}.\nNote, however, that the explicit form of the basis will not be required in what follows. \n\nAt tree level, $\\Gamma_{\\alpha\\mu \\nu}(q,r,p)$ reduces to the standard expression \n\\begin{equation}\n\\label{eq:treelevel}\n\\Gamma_{\\!{0}}^{\\alpha\\mu\\nu}(q,r,p) = (q-r)^\\nu g^{\\alpha\\mu} + (r-p)^\\alpha g^{\\mu\\nu} + (p-q)^\\mu g^{\\nu\\alpha}\\,. 
\n\\end{equation}\n\nWe next turn to the quantity studied in the lattice simulation of~\\cite{Aguilar:2021lke}, \n\\begin{eqnarray}\n\\Ls(r^2) &=& \\frac{\\Gamma_0^{\\alpha\\mu \\nu}(q,r,p)\nP_{\\alpha\\alpha'}(q)P_{\\mu\\mu'}(r)P_{\\nu\\nu'}(p) \\Gamma^{\\alpha'\\mu'\\nu'}(q,r,p)}\n{\\rule[0cm]{0cm}{0.45cm}\\; {\\Gamma_0^{\\alpha\\mu\\nu}(q,r,p) P_{\\alpha\\alpha'}(q)P_{\\mu\\mu'}(r)P_{\\nu\\nu'}(p) \\Gamma_0^{\\alpha'\\mu'\\nu'}(q,r,p)}}\n\\rule[0cm]{0cm}{0.5cm} \\Bigg|_{\\substack{\\!\\!q\\to 0 \\\\ p\\to -r}} \\,, \n\\label{asymlat}\n\\end{eqnarray}\nwhere the external legs have been appropriately amputated\\footnote{In~\\cite{Aguilar:2021lke} and other related lattice works, this quantity has been denominated as the \\emph{asymmetric} kinematic limit. Here\n we find it more appropriate to employ the term ``soft gluon limit''.}. Note that the starting expression involves the full vertex \n$\\fatg^{\\alpha'\\mu'\\nu'}(q,r,p)$, which, by virtue of \\1eq{eq:transvp}, is reduced to $\\Gamma^{\\alpha'\\mu'\\nu'}(q,r,p)$, \n{\\it i.e.}, the term $V^{\\alpha'\\mu'\\nu'}(q,r,p)$ associated with the poles drops out in its entirety. \n\nNow, in the limit of interest, namely $q \\to 0$, the tensorial structure of \nthe three-gluon vertex is considerably simplified, given by\n\\begin{equation}\n\\Gamma^{\\alpha\\mu\\nu}(0,r,-r) = 2 {\\cal A}_1(r^2) \\,r^\\alpha g^{\\mu\\nu} + {\\cal A}_2(r^2)\\, (r^\\mu g^{\\alpha\\nu} + \\,r^\\nu g^{\\alpha\\mu})\n+ {\\cal A}_3(r^2)\\, r^\\alpha r^\\mu r^\\nu \\,.\n\\label{Gsoft}\n\\end{equation}\nAt tree level,\n\\begin{equation}\n\\Gamma_0^{\\alpha\\mu\\nu}(0,r,-r) = 2 \\,r^\\alpha g^{\\mu\\nu} - (r^\\mu g^{\\alpha\\nu} + \\,r^\\nu g^{\\alpha\\mu})\\,,\n\\label{Gsoft0}\n\\end{equation}\nwhich, in the notation of \\1eq{Gsoft}, means that ${\\cal A}_1^{(0)}(r^2)= 1$, ${\\cal A}_2^{(0)}(r^2)=-1$,\nand ${\\cal A}_3^{(0)}(r^2)= 0$.\n\nThen, the numerator and denominator of the fraction on the r.h.s. 
of \\1eq{asymlat}, to be denoted\nby ${\\cal N}$ and ${\\cal D}$, respectively, become \n\\begin{equation}\n{\\cal N} = 4 (d-1) [r^2 - (q\\cdot r)^2\/q^2] {\\cal A}_1(r^2)\\,, \\qquad {\\cal D} = 4 (d-1) [r^2 - (q\\cdot r)^2\/q^2] \\,.\n\\label{NandD}\n\\end{equation}\nThus, the path-dependent contribution contained in the square bracket drops out when forming the ratio ${\\cal N}\/{\\cal D}$,\nand \\1eq{asymlat} yields simply\n\\begin{equation}\n\\Ls(r^2) = {\\cal A}_1(r^2) \\,.\n\\label{LisB}\n\\end{equation}\nCombining \\2eqs{Gsoft0}{LisB}, it is immediate to derive one of the key relations of this work, namely \n\\begin{equation}\n\\label{GammaLasym}\nP_{\\mu\\mu'}(r) P_{\\nu\\nu'}(r) \\Gamma^{\\alpha\\mu\\nu}(0,r,-r) = 2 \\Ls(r^2) r^{\\alpha}P_{\\mu'\\nu'}(r) \\,.\n\\end{equation}\n\n\n\\section{\\label{sec:coupled} The system of coupled SDEs}\n\n\nIn this section, we set up and solve \nthe system of coupled SDEs that governs the ghost dressing function and the ghost-gluon vertex for general space-like momenta. \nThe external ingredients employed are a fit of the lattice data for the gluon propagator, and certain form factors\nof the three-gluon vertex (in general kinematics), obtained from the nonperturbative Ball-Chiu construction of~\\cite{Aguilar:2019jsj}.\n\n\\begin{figure}[b]\n \\centering\n \\vspace{-0.5cm}\n \\includegraphics[scale=0.7]{fig2}\n \\vspace{-0.5cm}\n \\caption{The SDEs for the ghost propagator and the ghost-gluon vertex (upper and lower panels, respectively). \n The white circles represent the full gluon and ghost propagators, while the blue ones\n denote the full ghost-gluon vertex. 
The gray ellipse indicates the ``one-particle reducible'' four-point ghost-gluon kernel.} \n \\label{fig:coupsys}\n\\end{figure}\n\n\n\\subsection{The ghost gap equation and ghost-gluon SDE}\n\n\nOur starting point is the SDE for the ghost propagator, whose diagrammatic representation is shown in the upper panel of Fig.~\\ref{fig:coupsys}.\nWhen expressed in terms of the ghost dressing function, this SDE acquires the standard form known in the literature, namely \n\\begin{equation}\nF ^{-1}(p^2) = Z_{\\rm c} + \\Sigma(p^2)\\,,\n\\label{sde_ghost2}\n\\end{equation}\nwith\n\\begin{equation}\n\\Sigma(p^2) = ig^2C_{\\rm A}Z_1\\!\\! \\int_k f(k,p)B_1(-p,k+p,-k)\\Delta(k)D(k+p)\\,; \\qquad f(k,p):= 1 -\\frac{(k\\cdot p)^2}{k^2p^2}\\,.\n\\label{sde_ghost}\n\\end{equation}\nIn the above equation, $C_\\mathrm{A}$ is the Casimir eigenvalue of the adjoint representation [$N$ for $SU(N)$],\nwhile $Z_{\\rm c}$ and $Z_1$ are the renormalization constants of $D(p^2)$ and $\\Gamma^\\mu (r,p,q)$, respectively [see Eq.~\\eqref{renorm}].\n In addition, we have introduced the integral measure \n\\begin{equation}\n \\int_k := \\frac{1}{(2\\pi)^4} \\int\\!\\! \\dd[4]{k} \\,,\n\\end{equation}\nwhere the presence of a symmetry-preserving regularization scheme is implicitly understood. 
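The angular kernel $f(k,p)$ appearing in the self-energy above equals $\sin^2\theta_{kp}$, with $\theta_{kp}$ the angle between $k$ and $p$, and therefore suppresses collinear configurations. A minimal Euclidean check of this identity (random test momenta, illustrative only):

```python
# The angular kernel f(k,p) = 1 - (k.p)^2/(k^2 p^2) equals sin^2(theta) for the angle
# between the Euclidean four-vectors k and p, and vanishes for collinear momenta.
import math
import random

def f(k, p):
    kp = sum(a * b for a, b in zip(k, p))
    k2 = sum(a * a for a in k)
    p2 = sum(a * a for a in p)
    return 1.0 - kp * kp / (k2 * p2)

random.seed(1)
for _ in range(100):
    k = [random.uniform(-1, 1) for _ in range(4)]
    p = [random.uniform(-1, 1) for _ in range(4)]
    c = sum(a * b for a, b in zip(k, p)) / math.sqrt(
        sum(a * a for a in k) * sum(a * a for a in p))
    theta = math.acos(max(-1.0, min(1.0, c)))
    assert abs(f(k, p) - math.sin(theta) ** 2) < 1e-9

# collinear momenta (p proportional to k) give a vanishing kernel
assert abs(f([1.0, 2.0, 0.5, -1.0], [2.0, 4.0, 1.0, -2.0])) < 1e-12
```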
\n\n\n\nIn this analysis, the renormalization is implemented within the well-known variant of the momentum subtraction\n(MOM) scheme known as ``Taylor scheme''~\\cite{Boucaud:2008gn,Boucaud:2011eh}\\footnote{In the literature this scheme is also known as minimal momentum subtraction (MiniMOM) scheme~\\cite{vonSmekal:2009ae}, \nand has been employed for a recent determination of $\\alpha_{\\ols{\\srm{MS}}}$ from unquenched lattice simulations~\\cite{Zafeiropoulos:2019flq}, consistent with the experimental \\emph{world average.}}, \nwhich fixes the (finite) vertex renormalization constant at the special value $Z_1=1$.\nAs for $Z_{\\rm c}$, its value is fixed by the standard MOM requirement \n$F ^{-1}(\\mu^2) = 1$, where $\\mu$ is the renormalization scale.\n\nImplementing this condition at the level of \n\\1eq{sde_ghost2} yields\n\\begin{equation}\nZ_{\\rm c} = 1 - \\Sigma (\\mu^2)\\,,\n\\end{equation}\nand \\1eq{sde_ghost2} may be cast in the form\n\\begin{equation}\nF ^{-1}(p^2) = 1 + \\Sigma(p^2) - \\Sigma (\\mu^2)\\,.\n\\label{sde_ghostks}\n\\end{equation}\n\nWe next turn to the SDE for the ghost-gluon vertex, shown diagrammatically in the lower panel of Fig.~\\ref{fig:coupsys}.\nIn the present work, we will consider the so-called ``one-loop dressed'' approximation of this SDE, which\ncorresponds to keeping only the first two terms in the skeleton expansion of the SDE kernel, shown in Fig.~\\ref{fig:diagvert}.\nNote that the omitted set of contributions is captured by the one-particle irreducible four-point function, represented by the yellow ellipse,\nwhose dynamics has been studied in detail in~\\cite{Huber:2017txg,Huber:2018ned}. As was shown there, this subset of corrections\nis clearly subleading, affecting the ghost-gluon vertex by a mere $2\\%$. 
\nIt is therefore expected that the above truncation should provide a quantitatively accurate description of\nthe infrared behavior of the ghost-gluon vertex [see also the corresponding discussion in Sec.~\\ref{sec:conc}]. \n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=0.7]{fig3}\n \\caption{The skeleton expansion of the ``one particle reducible'' four-point ghost-gluon kernel.\n Only the first two terms will be considered in our analysis.}\n \\label{fig:diagvert}\n\\end{figure}\n\n\nThus, the expression for the SDE for the ghost-gluon vertex in the Taylor scheme can be schematically written as \n\\begin{equation}\n \\label{rvgg}\n \\Gamma_\\mu(r,p,q) = r_\\mu - \\frac{i}{2}g^2 C_\\mathrm{A} [a_{\\mu}(r,p,q) - b_{\\mu}(r,p,q)] ,\n\\end{equation}\nwith\n\\begin{align}\n \n a_\\mu(r,p,q) &= r_\\rho \\int_k \\Delta^{\\rho\\sigma}(k) \\fatg_{\\mu\\sigma\\alpha}(q,k,-t) \\Delta^{\\alpha\\beta}(t) \\Gamma_\\beta(-\\ell,p,t) D(\\ell) \\,, \\nonumber\\\\\n \n b_\\mu (r,p,q)&= r_\\alpha \\int_k \\Delta^{\\alpha\\beta}(\\ell) \\Gamma_\\beta(t,p,-\\ell) D(t) \\fatg_\\mu(k,-t,q) D(k) \\,, \n\\label{d1d2}\n\\end{align}\nwhere $\\ell := k-r$ and $t :=k+q$. Note that we have employed the first of the two relations in \\1eq{eq:transvp} in order to eliminate\nthe terms $V_{\\mu}(r,p,q)$ from the ghost-gluon vertices that are contracted by a transverse gluon propagator\n(Landau gauge). 
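The elimination of the pole terms mentioned above can also be illustrated numerically: a longitudinally coupled structure $V_\mu = (q_\mu/q^2)A$ is annihilated exactly by the transverse projector $P(q)$, in line with \1eq{eq:transvp}. A minimal Euclidean sketch, with arbitrary test values for $q$ and a scalar $A$ standing in for the form factor $A(r,p,q)$:

```python
# Longitudinally coupled pole term V_mu = (q_mu / q^2) A is annihilated by the
# transverse projector P(q): P_{mu nu}(q) V_nu = 0 identically (Euclidean sketch;
# the constant A is an arbitrary stand-in for the form factor A(r,p,q)).
q = [0.3, -0.9, 1.4, 0.2]      # arbitrary test momentum
A = 2.5                        # arbitrary stand-in value
q2 = sum(x * x for x in q)
V = [qm / q2 * A for qm in q]
P = [[(1.0 if m == n else 0.0) - q[m] * q[n] / q2 for n in range(4)] for m in range(4)]
PV = [sum(P[m][n] * V[n] for n in range(4)) for m in range(4)]
assert all(abs(x) < 1e-12 for x in PV)
```

This is precisely why only the pole-free parts $\Gamma_\mu$ and $\Gamma_{\mu\sigma\alpha}$ survive in the Landau-gauge diagrams.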
\n\n\nIn order to isolate the contribution of the form factor $B_1(r,p,q)$, defined in \\1eq{decomp}, we contract \\1eq{rvgg}\nby the projector~\\cite{Aguilar:2013xqa}\n\\begin{equation} \n \\varepsilon^\\mu(r,q) = \\frac{q^2 r^\\mu - q^\\mu ( q\\cdot r )}{ h(r,q) }\\,, \\qquad h(q,r) = q^2 r^2 - (q\\cdot r)^2 \\,.\n\\label{projector}\n\\end{equation}\nAn immediate consequence of this contraction and the property \\1eq{eq:transvp} is that \n\\begin{eqnarray} \n\\fatg_{\\mu\\sigma\\alpha}(q,k,-t)P^{\\rho\\sigma}(k) P^{\\alpha\\beta}(t)\n&\\to & \\Gamma_{\\mu\\sigma\\alpha}(q,k,-t)P^{\\rho\\sigma}(k) P^{\\alpha\\beta}(t)\\,,\n\\nonumber\\\\\n\\fatg_\\mu(k,-t,q) &\\to & \\Gamma_\\mu(k,-t,q)\\,,\n\\label{subst}\n\\end{eqnarray}\n{\\it i.e.}, the terms associated with the nonperturbative poles are annihilated, and we are only left with the\npole-free components of the two vertices.\n\n\n\n\nThe next step is to carry out in the expressions of \\1eq{d1d2} the substitution\n\\begin{align}\nB_1(-\\ell, p, t) \\to & \\frac{1}{2}\\left[ B_1(-\\ell, p, t) + B_1(r, \\ell, - k) \\right]\\,, \\nonumber \\\\\nB_1(t, p, -\\ell) \\to & \\frac{1}{2}\\left[ B_1(t, p, -\\ell) + B_1(r, -k, \\ell) \\right]\\,,\n\\label{sym_sub}\n\\end{align}\nin order to restore the symmetry of $B_1(r,p,q)$ with respect to the interchange of the ghost and antighost momenta,\nwhich has been compromised by the truncation of the SDE~\\cite{Aguilar:2018csq}.\n\nIn addition, the structure of the three-gluon vertex entering in $a_\\mu(r,p,q)$ is approximated\nby retaining only the tensorial structures with a nonvanishing tree-level limit. 
Specifically,\nin the notation of~\\cite{Aguilar:2019jsj}, we set \n\\begin{equation}\n\\label{eq:vtensortree}\n\\Gamma^{\\alpha\\mu\\nu}(q,r,p) \\approx (q-r)^\\nu g^{\\alpha\\mu}X_1(q,r,p) + (r-p)^\\alpha g^{\\mu\\nu} X_4(q,r,p)\n+ (p-q)^\\mu g^{\\nu\\alpha} X_7(q,r,p)\\,, \n\\end{equation}\nwhere, due to the Bose symmetry of $\\Gamma^{\\alpha\\mu\\nu}(q,r,p)$, we have \\mbox{$X_1(q,r,p) = X_4(p,q,r) = X_7(r,p,q)$}.\n\n\n\nThus, we arrive at (Minkowski space) \n\\begin{equation}\nB_1(r,p,q) = 1 - \\frac{i}{2}g^2 C_\\mathrm{A} \\left[a(r,p,q) - b(r,p,q) \\right]\\,,\n \\label{B1general}\n\\end{equation}\nwith\n\\begin{equation}\na(r,p,q) = \\int_k {\\mathcal K}_1(k,r,q){\\mathcal N}_1 (k,r,q)\\,, \\qquad \\quad b(r,p,q) =\\int_k {\\mathcal K}_2(k,r,q) {\\mathcal N}_2 (k,r,q) \\,, \n\\label{d12kernels}\n \\end{equation}\nwhere\n\\begin{eqnarray}\n{\\mathcal K}_1(k,r,q) &=& \\frac{\\Delta(k^2) \\Delta(t^2) F(\\ell^2)} { k^2 \\ell^2 t^2 h(q,r) } \\left[ B_1(-\\ell, p, t) + B_1(r, \\ell, - k) \\right]\\,, \\nonumber \\\\\n{\\mathcal K}_2 (k,r,q)&=& \\frac{F(k^2) \\Delta( \\ell^2 ) F(t^2) }{ 2\\, k^2 \\ell^2 t^2 h(q,r) }\\left[ B_1(t, p, -\\ell) + B_1(r, -k, \\ell) \\right]B_1(k,-t,q) \\,, \n\\label{kb1}\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n{\\mathcal N}_1 &=& a_1X_1(k, t, q) +a_4X_4(k, t, q)+a_7X_7(k, t, q)\\,, \\\\ \n{\\mathcal N}_2&=& [ q^2 ( k \\cdot r ) - ( k\\cdot q )(q\\cdot r ) ] [ (k\\cdot r)(q\\cdot r) + (k\\cdot q )(k\\cdot r) - r^2 (k\\cdot q ) - k^2(q\\cdot r ) - h(k,r) ]\\,. \\nonumber \n\\end{eqnarray}\nThe coefficients $a_i$ are given by\n\\begin{align}\na_1 =& \\left[ q^2 ( k\\cdot r ) - ( k\\cdot q ) (q\\cdot r) \\right] \\left\\lbrace k^2 \\left[ ( k\\cdot q ) (k\\cdot r) - ( k\\cdot q )(q\\cdot r)-2 r^2( k\\cdot q ) \\right. \\right. \\nonumber\\\\\n& \\left.\\left. 
+ (k\\cdot r)( q\\cdot r) + (k\\cdot r)^2 - h(q,r) \\right] - k^4 \\left[ (q\\cdot r ) + r^2 \\right] + ( k\\cdot r ) \\left[ q^2 (k\\cdot r) + (k\\cdot q) (k\\cdot r) \\right.\\right. \\nonumber\\\\\n& \\left.\\left. - (k\\cdot q)(q\\cdot r) + (k\\cdot q)^2\\right] \\right\\rbrace \\,, \\nonumber \\\\\na_4 =& \\left[ k^2 ( q\\cdot r ) - ( k\\cdot q ) ( k\\cdot r ) \\right] \\left\\lbrace \\left(k^2+q^2\\right) h(q,r) - q^2 ( k\\cdot r ) \\left[ q^2 + ( q\\cdot r ) \\right] + (k\\cdot q)^2 (q\\cdot r) \\right.\\nonumber\\\\\n& \\left. + ( k\\cdot q )\\left[ ( k\\cdot r ) (q\\cdot r) - q^2 ( k\\cdot r ) + q^2 (q\\cdot r) + 2 q^2 r^2 - (q\\cdot r)^2 \\right] - q^2 (k\\cdot r)^2 \\right\\rbrace \\,, \\nonumber\\\\\na_7 =& \\left\\lbrace q^2 ( k\\cdot r ) - k^2 \\left[ q^2 + ( q\\cdot r ) \\right] + ( k\\cdot q ) [ ( k\\cdot r ) - ( q\\cdot r ) ] + ( k\\cdot q)^2 \\right\\rbrace \\nonumber\\\\\n& \\times \\left[ k^2 h(q,r) - q^2 (k\\cdot r)^2 + ( k\\cdot q )( k\\cdot r )( q\\cdot r ) \\right]\\,. \n\\label{ai_Mink}\n\\end{align} \n\n\\subsection{\\label{sec:ngene} Numerical analysis}\n\n\nIn order to proceed with the numerical solution, the system of integral equations formed by\n\\2eqs{sde_ghostks}{B1general} must be passed to Euclidean space, \nfollowing standard conventions (see, {\\it e.g.}, Eq.~(5.1) of~\\cite{Aguilar:2018csq}) and employing spherical coordinates for the\nfinal treatment.\n\nThen, appropriate inputs for the gluon propagator, $\\Delta(q^2)$, \nand the form factors $X_{1,4,7}(q,r,p)$ of the three-gluon vertex must be furnished.\n\nFor the gluon propagator we employ a fit for the results obtained after a reanalysis of the lattice data of~\\cite{Bogolubsky:2009dc},\nfollowing the procedure put forth in~\\cite{Boucaud:2017ksi,Boucaud:2018xup}, in order to cure volume and discretization artifacts, \nsee Appendix~\\ref{sec:App_latt} for details. 
Specifically, the resulting $\\Delta(q^2)$ is shown in the\nleft panel of Fig.~\\ref{fig:X1}, together with the numerical fit given by Eq.~\\eqref{gluonfit}.\n\n\\begin{figure}[t]\n\\begin{minipage}[b]{0.45\\linewidth}\n\\centering\n\\hspace{-1.0cm}\n\\includegraphics[scale=0.24]{fig4a}\n\\end{minipage}\n\\hspace{0.25cm}\n\\begin{minipage}[b]{0.45\\linewidth}\n\\includegraphics[scale=0.9]{fig4b}\n\\end{minipage}\n\\caption{Left panel: Lattice data for the gluon propagator, $\\Delta(q^2)$, after performing the continuum extrapolation of~\\cite{Boucaud:2018xup} to the data set of~\\cite{Bogolubsky:2009dc}, together with the corresponding fit given by Eq.~\\eqref{gluonfit}. The gluon propagator is renormalized at \\mbox{$\\mu = 4.3$ GeV}. Right panel: A representative case of the three-gluon form factor $X_1(q^2,r^2, \\phi)$ for a fixed value of the angle, $\\phi=0$.} \n\\label{fig:X1}\n\\end{figure}\n\n\\begin{figure}[t]\n\\begin{minipage}[b]{0.45\\linewidth}\n\\centering\n\\hspace{-1.0cm}\n\\includegraphics[scale=0.26]{fig5a}\n\\end{minipage}\n\\hspace{0.25cm}\n\\begin{minipage}[b]{0.45\\linewidth}\n\\vspace{0.5cm}\n\\includegraphics[scale=0.9]{fig5b}\n\\end{minipage}\n\\caption{ Left panel: The numerical solution for the ghost dressing function, $F(p^2)$, (red continuous line) compared with the lattice data of~\\cite{Boucaud:2018xup}. Right panel: The form factor $B_1(r^2,p^2,\\theta_1 )$ for a fixed value of the angle $\\theta_1 =\\pi$, obtained as solution of the coupled system of \\2eqs{sde_ghost2}{B1general} when \\mbox{$\\alpha_s(\\mu)= 0.244$}. }\n\\label{fig:rescoupled}\n\\end{figure}\n\n\n\nFor the determination of the form factors $X_{1,4,7}(q,r,p)$, we follow the nonperturbative version of the Ball-Chiu construction\ndeveloped in~\\cite{Aguilar:2019jsj}. 
The general idea of the method is based on reconstructing the longitudinal form factors\nof the three-gluon vertex, such as $X_{1,4,7}(q,r,p)$, from the set of STIs that $\\Gamma_{\\alpha\\mu \\nu}(q,r,p)$ satisfies.\nThis procedure allows us to express $X_{1,4,7}(q,r,p)$ in terms of the ghost dressing function, the ``kinetic'' part of the\ngluon propagator, and three of the form factors of the ghost-gluon kernel. A representative case of $X_1(q^2,r^2,\\phi=0)$ \nis shown in the right panel of Fig.~\\ref{fig:X1}, where $\\phi$ is the angle formed between the momenta $q$ and $r$. \nNote that the form factor deviates markedly from unity, displaying clearly what is known in the literature\nas ``infrared suppression''~\\cite{Aguilar:2013vaa,Athenodorou:2016oyh,Boucaud:2017obn,Blum:2015lsa,Corell:2018yil,Aguilar:2019jsj}.\n\n\n\n\nWith the inputs introduced above, the coupled system is solved numerically by an iterative process. The external momenta $r^2$ and $p^2$ are distributed on a logarithmic grid, with $96$ points in the interval \mbox{$[5\\times10^{-5},10^4]$ GeV$^2$}, whereas the angle between them, $\\theta_1$, is uniformly distributed in \mbox{$[0,\\pi]$} with 19 points. 
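For orientation, the discretization just quoted may be sketched in a few lines; the following is only an illustrative NumPy setup (the B-spline interpolation and the adaptive quadrature employed in the actual computation are not reproduced here):

```python
import numpy as np

# Illustrative grids matching the discretization quoted in the text:
# 96 logarithmically spaced points for the external squared momenta
# r^2, p^2 in [5 x 10^-5, 10^4] GeV^2, and 19 uniformly spaced values
# of the angle theta_1 in [0, pi].
r2_grid = np.logspace(np.log10(5.0e-5), 4.0, 96)   # GeV^2
p2_grid = r2_grid.copy()
theta1_grid = np.linspace(0.0, np.pi, 19)
```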
The interpolations in three variables, needed for evaluating the $X_i$ and the $B_1$, are performed with B-splines~\\cite{de2001practical}, and the triple integrals are computed with a Gauss-Kronrod method~\\cite{Berntsen:1991:ADA:210232.210234}.\n\n\n\n\nIn Fig.~\\ref{fig:rescoupled}, we show the numerical results for $F(p^2)$ and $B_1(r^2,p^2, \\theta_1)$, \nobtained from the solution of the coupled system.\nWe emphasize that the renormalization point has been fixed at \mbox{$\\mu= 4.3$ GeV},\nwhich coincides with the highest value of the momentum\naccessible by the lattice simulation of~\\cite{Bogolubsky:2009dc}.\nIn particular, \none can observe that when the gauge coupling assumes the value \mbox{$\\alpha_s(\\mu):=g^2(\\mu)\/4\\pi = 0.244$},\nthe solution of the system yields an $F(p^2)$ that is in outstanding agreement with the ghost dressing data of~\\cite{Boucaud:2018xup} (left panel), which were properly extrapolated to the physical continuum limit, \nas explained in Appendix~\\ref{sec:App_latt}. \n\n\\begin{figure}[t]\n\\begin{minipage}[b]{0.45\\linewidth}\n\\centering\n\\hspace{-1.0cm}\n\\includegraphics[scale=0.9]{fig6a.pdf}\n\\end{minipage}\n\\hspace{0.25cm}\n\\begin{minipage}[b]{0.45\\linewidth}\n\\includegraphics[scale=0.24]{fig6b}\n\\end{minipage}\n\\caption{Left panel: The form factor $B_1(r^2,q^2,\\theta_2)$ plotted as a function of the momenta of the antighost, $r$, and the gluon, $q$, for a fixed value of the angle, $\\theta_2=2\\pi\/3$.\nOn the 3-D surface, three curves are highlighted, representing\nthe soft gluon (red dot-dashed), soft ghost (orange continuous), and symmetric (green dashed) kinematic limits. Right panel: Direct comparison of the three special configurations (2-D projections)\nidentified in the left panel. 
}\n\\label{fig:conf}\n\\end{figure}\n\n\nMoreover, in the right panel of Fig.~\\ref{fig:rescoupled}, one can see that the solution for $B_1(r^2,p^2, \\theta_1)$ is symmetric with respect to the diagonal plane defined by the condition $r=p$.\nThis is a direct consequence of the ghost-antighost symmetry, and becomes manifest only when $B_1$ is plotted as a\nfunction of the momenta $r$ and $p$. \n \n\nWe next explore certain special kinematic limits of $B_1$. To that end,\nwe choose $r$ and $q$ as our reference momenta (antighost and gluon, respectively), denoting by \n$\\theta_2$ the angle between them. In the left panel of Fig.~\\ref{fig:conf} we show the corresponding\n3-D plot, for the special value $\\theta_2=2\\pi\/3$; this choice for the angle \nis particularly convenient, because one can identify \non a unique 3-D surface the following three kinematic limits: \n\n\n{\\it(i)} The \\emph{soft gluon limit}, obtained by setting \n$q = 0$; then, the momenta $r$ and $p$ have the same magnitude, \mbox{$|p|=|r|=|Q|$}, and are anti-parallel, {\\it i.e.}, $\\theta_1=\\pi$.\nThis kinematic configuration is represented by the red dot-dashed curve on the 3-D plot of Fig.~\\ref{fig:conf}.\n\n\n {\\it(ii)} \n The \\emph{soft (anti)ghost limit}, in which $r= 0$ and \nthe momenta \mbox{$|q|=|p|=|Q|$}; evidently, $|r||q|\\cos\\theta_2= 0$,\nand any dependence on the angle $\\theta_2$ is washed out. 
This kinematic limit is represented by the orange continuous curve \non the 3-D plot of Fig.~\\ref{fig:conf}.\n\n \n{\\it(iii)} The \\emph{totally symmetric limit}, defined by \\mbox{$q^2 = p^2= r^2 = Q^2$}; with the\nscalar products given by \n \\mbox{$(q\\cdot p) = (q\\cdot r) = (p\\cdot r) = -\\frac{1}{2}Q^2$}, and the angles \\mbox{$ \\widehat{rp} = \\widehat{rq} = \\widehat{qp} =2\\pi\/3$},\nrepresented by the green dashed curve on the 3-D plot of Fig.~\\ref{fig:conf}.\n\n\nThe three 2-D projections described above are plotted together in the right panel of Fig.~\\ref{fig:conf},\nwith all their corresponding momenta denoted by $Q$. As we can see, all cases display a peak around the same region of momenta, {\\it i.e.}, \\mbox{$(0.8-1.2)$ GeV}, with moderate differences in their heights. In addition, in the deep infrared, all curves recover\nthe result $B_1(0,0,0) = 1$.\n\n\n\n\n\n\n\\section{\\label{sec:cgsoft}Ghost-gluon vertex in the soft gluon configuration}\n \n\nIn this Section we implement the soft gluon limit, {\\it i.e.}, ($q \\rightarrow 0$), directly at the level of the\nSDE for the ghost-gluon vertex, which permits us to \nuse the lattice data for $\\Ls(q^2)$~\\cite{Aguilar:2021lke}\\footnote{In~\\cite{Aguilar:2021lke}, the lattice result for $\\Ls(q^2)$\n has been reproduced particularly well by means of the STI-based construction of~\\cite{Aguilar:2019jsj}. 
Nonetheless, in the present analysis we\nemploy directly the best fit to the lattice data, to achieve the highest possible accuracy.}\nin the treatment of the resulting\nintegral equation.\n\nThe basic observation is that, in the soft gluon limit, \nthe term \mbox{$P^{\\rho\\sigma}(k) \\Gamma_{\\mu\\sigma\\alpha}(q,k,-t) P^{\\alpha\\beta}(t)$} appearing \ninside the $a_\\mu(r,p,q)$ of \\1eq{d1d2} becomes simply \n\\begin{equation}\nP^{\\rho\\sigma}(k) P^{\\alpha\\beta}(t) \\Gamma_{\\mu\\sigma\\alpha}(q,k,-t) \\xrightarrow[\\text{}]{\\text{$q\\to 0$}}\nP^{\\rho\\sigma}(k)P^{\\alpha\\beta}(k) \\Gamma_{\\sigma\\alpha\\mu}(0,k,-k) = 2 \\Ls(k^2) k_{\\mu}P^{\\rho\\beta}(k)\\,, \n\\end{equation}\nwhere in the last step \\1eq{GammaLasym} was used. \n\nNote, however, that a final subtlety prevents the immediate use of the lattice results \nfor $\\Ls(k^2)$ in \\1eq{d1d2}. Specifically, the renormalization employed in the lattice \nanalysis of~\\cite{Aguilar:2021lke} is the ``soft gluon scheme'', which differs from the Taylor scheme\nused in the derivation of the system of coupled SDEs. As a result, the lattice data must\nundergo a finite renormalization, $\\tilde{z}_{3}$, which will convert\nthem from one scheme to the other, according to \\1eq{asytaylor}. 
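The collapse of the doubly projected three-gluon vertex onto the single structure $k_\\mu P^{\\rho\\beta}(k)$ can be verified numerically at tree level. The sketch below is a hypothetical cross-check, not part of the actual computation: it uses a Euclidean metric, sets $X_1=X_4=X_7=1$ in Eq.~\\eqref{eq:vtensortree}, and makes no statement about the overall constant, which depends on the metric and on the normalization conventions entering \\1eq{GammaLasym}:

```python
import numpy as np

rng = np.random.default_rng(0)
k = rng.normal(size=4)                    # a generic (Euclidean) momentum
delta = np.eye(4)
Pk = delta - np.outer(k, k) / (k @ k)     # transverse projector P(k)

# Tree-level vertex of Eq. (vtensortree) with X_1 = X_4 = X_7 = 1,
# evaluated at (q, r, p) = (0, k, -k):
#   Gamma^{sigma alpha mu} = -k^mu d^{sigma alpha}
#                            + 2 k^sigma d^{alpha mu} - k^alpha d^{mu sigma}
Gamma = (-np.einsum('m,sa->sam', k, delta)
         + 2.0 * np.einsum('s,am->sam', k, delta)
         - np.einsum('a,sm->sam', k, delta))

# Double transverse projection P^{rho sigma}(k) P^{alpha beta}(k) Gamma^{sigma alpha mu}:
T = np.einsum('rs,ab,sam->rbm', Pk, Pk, Gamma)

# Only the structure k_mu P^{rho beta}(k) survives (here with overall
# constant -1, a convention-dependent factor).
ref = np.einsum('m,rb->rbm', k, Pk)
```

The two terms of the vertex proportional to $k^\\sigma$ and $k^\\alpha$ are annihilated by the projectors, which is precisely why only the pole-free, $g^{\\sigma\\alpha}$ structure contributes.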
\n\nThen, it is straightforward to implement the soft gluon limit at the level of \\1eq{d1d2}.\nUsing the short-hand notation \\mbox{$B_1(k^2):=B_1(k,-k,0)$}, we arrive at \n %\n\\begin{align}\n B_1(r^2) & = 1 - \\frac{ig^2C_\\mathrm{A}}{\\tilde{z}_3} \\int_k F(\\ell^2) \\Delta^2(k^2)\n f(k,r)\\frac{(k\\vdot r)}{\\ell^2} B_1(-\\ell,-r,k)\\Ls(k^2) \\nonumber \\\\\n & + \\frac{i}{2} g^2C_\\mathrm{A} \\int_k F^2(k^2) \\Delta(\\ell^2) f(k,r)\\frac{(k\\vdot r)}{k^2\\ell^2} B_1(k,-r,-\\ell) B_1(k^2) \\,, \n\\label{B1exp} \n\\end{align} \n where the function $f(k,r)$ has been defined in \\1eq{sde_ghost}.\n\nAs a final step, \\1eq{B1exp} will be converted to Euclidean space (spherical coordinates),\nusing standard transformation rules. Defining \n \\begin{equation}\n k^2:= y; \\qquad r^2:=x; \\qquad \\ell^2:= z; \\qquad k\\vdot r\\equiv \\sqrt{xy}\\cos{\\theta}; \\qquad \\ell \\vdot r\\equiv \\sqrt{xz}\\cos{\\varphi};\n \\end{equation}\nand setting \n\\begin{eqnarray} \nB_1 (\\ell ,-r, k) \\to B_1(z,x,\\varphi)\\,, \\qquad B_1 (k ,-r, -\\ell) \\to B_1(y,x,\\pi -\\theta)\\,, \n\\label{ffeu}\n\\end{eqnarray}\nwe arrive at \n\\begin{eqnarray}\nB_1(x) & =& 1 +\\frac {C_\\mathrm{A}\\alpha_s}{2\\pi^2 \\,\\tilde{z}_{3}} \\int_0^\\infty\\!\\!\\! \\dd{y} y\\sqrt{xy}\\, \\Ls(y) \\Delta^2(y) \\int_0^\\pi\\!\\!\\! \\dd{\\theta} \\sin^4 \\theta \\cos{\\theta} B_1(z,x,\\varphi) \\, z^{-1} F(z)\\nonumber\\\\ \n & +& \\frac{C_\\mathrm{A}\\alpha_s}{4\\pi^2}\\int^\\infty_0\\!\\!\\! \\dd{y} \\,\\sqrt{xy} F^2(y) B_1(y) \\int_0^\\pi \\!\\!\\! 
\\dd{\\theta}\n \\sin^4\\theta \\cos{\\theta} B_1(y,x,\\pi-\\theta)\\, z^{-1} \\Delta(z) \\,, \n\\label{FinalEq}\n\\end{eqnarray}\nwhere we have that $\\cos{\\varphi}= \\sqrt{y\/z} \\cos\\theta - \\sqrt{x\/z}$.\n\n\n\\1eq{FinalEq} will be solved numerically, through an iterative procedure, using the\nfollowing external inputs.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.28]{fig7}\n\\caption{The lattice data for $\\Ls(q^2)$ (circles) from~\\cite{Aguilar:2021lke}, together with the fit given by \nEq.~\\eqref{Lasymfit} (blue continuous curve). }\n\\label{fig:Lasym}\n\\end{figure}\n\n{\\it(i)} Throughout the analysis we use \\mbox{$\\mu= 4.3$ GeV} and $\\alpha_s(\\mu) = 0.244$, as was determined\nin our numerical study of the SDE system discussed in Sec.~\\ref{sec:ngene}.\n\n\n{\\it(ii)} For both $\\Delta(q^2)$ and $F(q^2)$, renormalized at the aforementioned $\\mu$,\nwe employ the fits given by \\2eqs{gluonfit}{ghostfit}, respectively.\n\n\n{\\it(iii)} For $\\Ls(q^2)$ we employ an excellent fit to the lattice data of~\\cite{Aguilar:2021lke}. 
The curve is shown in Fig.~\\ref{fig:Lasym}, and its functional form is given by\n\\begin{equation}\n\\label{Lasymfit}\n\\Ls(q^2)=F(q^2)T(q^2)+\\nu_1 \\left( \\frac{1}{1+(q^2\/\\nu_2)^2} - \\frac{1}{1+(\\mu^2\/\\nu_2)^2} \\right), \n\\end{equation}\nwith\n\\begin{equation}\n T(q^2) = 1 +\\frac{3\\lambda_{\\srm S}}{4\\pi} \\left( 1+ \\frac{\\tau_1}{q^2+\\tau_2} \\right) \\left[ 2\\ln \\left( \\frac{q^2+ \\eta^2(q^2)}{\\mu^2+ \\eta^2(\\mu^2)}\\right) + \\frac{1}{6}\\ln \\left( \\frac{q^2}{\\mu^2} \\right) \\right],\n\\end{equation}\nand \n\\begin{equation}\n\\label{eta}\n \\eta^2(q^2) = \\frac{\\eta_1^2}{1 + q^2\/\\eta_2^2}\\,,\n\\end{equation}\nwhere the fitting parameters are given by\n\\mbox{$\\lambda_{\\srm S}=0.27$},\n \\mbox{$\\nu_1=0.179$}, \\mbox{$\\nu_2 =0.830$ GeV$^{2}$},\n \\mbox{$\\tau_1= 2.67$ GeV$^{2}$},\n \\mbox{$\\tau_2 = 1.05$ GeV$^{2}$},\n \\mbox{$\\eta_1^2= 3.10$ GeV$^{2}$}, and \n \\mbox{$\\eta_2^2 = 0.729$ GeV$^{2}$}. \n\n\n\nNote that the above fit incorporates, by construction, the renormalization condition \\mbox{$\\Ls(\\mu^2) = 1$}, corresponding to the soft gluon MOM scheme employed in the lattice simulation of~\\cite{Aguilar:2021lke}.\nIn addition, the zero crossing of $\\Ls(q^2)$ is located at about 170 MeV. \n\n{\\it(iv)} \nThe value of $\\tilde{z}_3$ is determined from the basic relation given by \\1eq{z3from_alphas},\nwhich yields the numerical value \\mbox{$\\tilde{z}_3 \\approx 0.95$}, quoted in \\1eq{z3num}. 
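As a consistency check, the fit of Eqs.~\\eqref{Lasymfit}--\\eqref{eta} can be transcribed directly. In the sketch below (momenta in GeV$^2$) the ghost dressing $F(q^2)$, an external input of the fit, is replaced by the placeholder $F=1$ (its value at the renormalization point), so that the built-in condition $\\Ls(\\mu^2)=1$ can be verified at the level of the code; the actual fit uses the full $F(q^2)$ of Eq.~\\eqref{ghostfit}:

```python
import math

# Fit parameters quoted in the text (nu2, tau_i, eta_i^2 in GeV^2).
lam_S = 0.27
nu1, nu2 = 0.179, 0.830
tau1, tau2 = 2.67, 1.05
eta1_sq, eta2_sq = 3.10, 0.729
mu_sq = 4.3 ** 2          # renormalization point mu = 4.3 GeV

def eta_sq(q2):
    # Eq. (eta)
    return eta1_sq / (1.0 + q2 / eta2_sq)

def T(q2):
    pref = (3.0 * lam_S / (4.0 * math.pi)) * (1.0 + tau1 / (q2 + tau2))
    logs = (2.0 * math.log((q2 + eta_sq(q2)) / (mu_sq + eta_sq(mu_sq)))
            + math.log(q2 / mu_sq) / 6.0)
    return 1.0 + pref * logs

def L_soft(q2, F=lambda q2: 1.0):
    # Eq. (Lasymfit); F is the ghost dressing, set to 1 here as a placeholder.
    sub = nu1 * (1.0 / (1.0 + (q2 / nu2) ** 2)
                 - 1.0 / (1.0 + (mu_sq / nu2) ** 2))
    return F(q2) * T(q2) + sub
```

Both logarithms of $T(q^2)$ and the subtracted rational term vanish identically at $q^2=\\mu^2$, which enforces the soft gluon MOM condition by construction.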
\n\n\n{\\it(v)} For the form factors $B_1(\\ell^2,r^2, \\varphi)$ and $B_1(k^2,r^2,\\pi-\\theta)$ we\ninterpolate the results for $B_1(r^2,p^2,\\theta_1)$ obtained in Sec.~\\ref{sec:ngene} [see Fig.~\\ref{fig:rescoupled}].\n\n\n\\begin{figure}[t]\n\\begin{minipage}[b]{0.45\\linewidth}\n\\centering\n\\hspace{-1.0cm}\n\\includegraphics[scale=0.26]{fig8a}\n\\end{minipage}\n\\hspace{0.25cm}\n\\begin{minipage}[b]{0.45\\linewidth}\n\\includegraphics[scale=0.26]{fig8b.eps}\n\\end{minipage}\n\\caption{ Left panel: The $B_1(r^2)$ obtained as solution of Eq.~\\eqref{FinalEq} (blue continuous)\n together with the lattice data (circles) from~\\cite{Ilgenfritz:2006he,Sternbeck:2006rd}. Right panel: The numerical impact of dressing the vertices $\\Ls(q^2)$ and $B_1(\\ell^2,r^2, \\varphi)$[$B_1(k^2,r^2,\\pi-\\theta)$] on $B_1(r^2)$, determined from Eq.~\\eqref{FinalEq}. } \n\\label{fig:B1soft}\n\\end{figure}\n\n\nUsing the inputs described above, \nthe $B_1(r^2)$ that emerges as a solution of \\1eq{FinalEq} is given by \nthe blue continuous curve in the left panel of Fig.~\\ref{fig:B1soft}, where it is compared \n with the $\\rm SU(3)$ lattice data of~\\cite{Ilgenfritz:2006he,Sternbeck:2006rd}. \n Although the error bars are rather sizable, we clearly see that our solution follows the general trend of the data. In particular, notice that both peaks occur in the same intermediate region of momenta. \n\nThe $B_1(r^2)$ may be accurately fitted with the functional form \n\\begin{equation} \n\\label{fitB1}\n B_1(r^2) = 1 + \\frac{r^2(a+br^2)}{1+cr ^2 +dr^4\\ln\\left[(r^2+ r_0^2)\/\\rho^2\\right]}\\,,\n\\end{equation}\nwhere the parameters are given by \\mbox{$a=2.07\\, \\mbox{GeV}^{-2}$}, $b=9.85\\, \\mbox{GeV}^{-4}$, $c=22.3\\, \\mbox{GeV}^{-2}$, \\mbox{$d=56.4\\, \\mbox{GeV}^{-4}$}, \\mbox{$r_0^2=1.48\\, \\mbox{GeV}^2$}, and \\mbox{$\\rho^2=1.0\\, \\mbox{GeV}^2$},\nand the \\mbox{$\\chi^2\/\\text{d.o.f.} = 1.0 \\times 10^{-6}$}. 
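The fit of Eq.~\\eqref{fitB1} is straightforward to evaluate; a direct transcription (with $r^2$ in GeV$^2$) reproduces the exact infrared limit $B_1(0)=1$ and the peak at intermediate momenta:

```python
import math

# Fit parameters of Eq. (fitB1): a, c in GeV^-2; b, d in GeV^-4;
# r0^2, rho^2 in GeV^2.
a, b, c, d = 2.07, 9.85, 22.3, 56.4
r0_sq, rho_sq = 1.48, 1.0

def B1_soft(r2):
    """Soft gluon form factor B_1(r^2), with r2 in GeV^2."""
    num = r2 * (a + b * r2)
    den = 1.0 + c * r2 + d * r2 ** 2 * math.log((r2 + r0_sq) / rho_sq)
    return 1.0 + num / den
```

Note that the logarithm in the denominator makes $B_1(r^2)$ approach its tree-level value slowly from above in the ultraviolet, consistent with the mild enhancement seen in Fig.~\\ref{fig:B1soft}.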
\n \n \n \nWe next study the impact that the\namount of ``dressing'' carried by the various vertices has on $B_1(r^2)$.\nTo that end, we solve \\1eq{FinalEq} considering the three-gluon and ghost-gluon vertices to \nbe either at their tree-level values or fully dressed. The results of the four cases considered are displayed \nin the right panel of Fig.~\\ref{fig:B1soft}. The hierarchy observed in this plot is\ncompletely consistent with the known infrared properties of these two fundamental vertices: \nat low momenta, the ghost-gluon vertex displays a mild enhancement \nwith respect to its tree-level value, whereas the three-gluon vertex is considerably suppressed. \n\nBased on this particular combination of facts, one would expect that the solution with the\nmaximal support will be obtained from \\1eq{FinalEq} \nwhen the ghost-gluon vertices are dressed while the three-gluon vertex is kept bare ($ \\Ls=1$);\nthis is indeed what happens, as may be seen from the dot-dashed green curve, which displays\nthe most pronounced peak. By the same logic, the reverse combination, namely bare ghost-gluon vertices \nand dressed three-gluon vertex, should furnish the most suppressed $B_1(r^2)$; evidently, this\nis what we find, as shown by the purple dashed curve. The remaining cases, where both vertices\nare bare, or fully dressed, must lie between the two prior cases; this expectation\nis clearly realized within the detailed numerical analysis, as can be seen by the corresponding curves,\nindicated by dotted red and continuous blue, respectively. \n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.26]{fig9}\n\\caption{Comparison of the soft gluon result obtained as solution of Eq.~\\eqref{FinalEq} (blue continuous curve) with the one \nextracted from the 3-D plot shown in Fig.~\\ref{fig:conf} (red dashed curve). 
}\n\\label{fig:3Dcomp2D}\n\\end{figure}\n\n\n\n\n\nWe conclude our numerical analysis with an instructive self-consistency check.\nSpecifically, as explained in item {\\it(v)} above, in order to solve \\1eq{FinalEq}\nwe have used as external input the result for the ghost-gluon vertex for general kinematics, \nderived in Sec.~\\ref{sec:ngene}. \nBut, as is clear from Fig.~\\ref{fig:conf}, the input used\nto obtain the soft gluon limit contains already a prediction of what that limit should be,\nnamely the red dot-dashed curve of $B_1(r^2,q^2,2\\pi\/3)$, shown in Fig.~\\ref{fig:conf}.\nTherefore, a reasonable indication of the self-consistency of the entire procedure\nwould be the degree of coincidence between the latter 2-D projection and the result\nfor $B_1(r^2)$ obtained from \\1eq{FinalEq}, namely the blue continuous\ncurve in either panel of Fig.~\\ref{fig:B1soft}.\n\nThe direct comparison between these two curves is shown in Fig.~\\ref{fig:3Dcomp2D}, where\nan excellent coincidence may be observed. Specifically, the maximum discrepancy,\nlocated at about 2 GeV, is of the order of 2\\%. The proximity between these results\nsuggests an underlying consistency between the various ingredients entering in the\ncorresponding calculations. 
Note, in particular, that the insertion of lattice ingredients, such as\nthe gluon propagator and the $\\Ls(q^2)$, into the SDEs appears to be a completely congruous operation.\n\n\nFinally, it is rather instructive to compare the effective strengths associated with the ghost-gluon and the three-gluon interactions in the soft gluon configuration by means of {\\it renormalization-group invariant} quantities.\nTo that end, we consider the corresponding effective couplings,\nto be denoted by ${\\alpha}_{\\rm{cg}}(q^2)$ and ${\\alpha}_{\\rm{3g}}(q^2)$,\ndefined as (see, {\\it e.g.}, ~\\cite{Athenodorou:2016oyh,Mitter:2014wpa,Fu:2019hdw})\n\\begin{equation}\n {\\alpha}_{\\rm{cg}}(q^2) = {\\alpha}_s(\\mu^2) B_1^2(q^2) F^2(q^2)\\Dr(q^2)\\,,\\qquad {\\alpha}_{\\rm{3g}}(q^2)={\\alpha}_s(\\mu^2) [\\TLs(q^2)]^2 \\Dr^3(q^2)\\,, \n\\label{coup_cg}\n\\end{equation}\nwhere $\\Dr(q^2)$ is the dressing of the gluon propagator, defined in \\1eq{gluon},\nwhile $\\TLs(q^2)$ is the lattice result of~\\cite{Aguilar:2021lke} adjusted to the Taylor scheme, according to \n\\2eqs{asytaylor}{z3num}. Note that, by means of this latter adjustment, all ingredients entering in the definitions of both effective couplings\nare computed in the same renormalization scheme, namely the Taylor scheme. In addition, \naccording to our SDE estimate (see Sec. \\ref{sec:ngene}), we have that ${\\alpha}_s(\\mu) = 0.244$ , at $\\mu = 4.3$ GeV.\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[scale=0.26]{fig10}\n\\caption{ The comparison of the effective couplings, ${\\alpha}_{\\rm{cg}}(q^2)$ (red continuous line) and ${\\alpha}_{\\rm{3g}}(q^2)$ (blue dashed).}\n\\label{fig:coupling}\n\\end{figure}\n\nThe result of the evaluation of the two effective couplings is shown in Fig.~\\ref{fig:coupling}. 
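In practical terms, the two RGI combinations of Eq.~\\eqref{coup_cg} amount to the simple expressions below; the dressing functions are external inputs, and the placeholder values used in the accompanying check are purely illustrative (they are not the curves of Fig.~\\ref{fig:coupling}):

```python
ALPHA_S_MU = 0.244   # alpha_s(mu) at mu = 4.3 GeV, from the SDE estimate

def alpha_cg(B1, F, D, alpha_s=ALPHA_S_MU):
    """Ghost-gluon effective coupling of Eq. (coup_cg)."""
    return alpha_s * B1 ** 2 * F ** 2 * D

def alpha_3g(L_tilde, D, alpha_s=ALPHA_S_MU):
    """Three-gluon effective coupling of Eq. (coup_cg)."""
    return alpha_s * L_tilde ** 2 * D ** 3
```

When all dressings are set to unity, both couplings reduce to $\\alpha_s(\\mu^2)$, as they must at the renormalization point; the infrared suppression of $\\TLs$ then drives ${\\alpha}_{\\rm{3g}}$ below ${\\alpha}_{\\rm{cg}}$.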
The main feature, consistent with a variety of previous studies~\\cite{Huber:2012kd,Blum:2014gna,Williams:2014iea,Cyrol:2016tym,Cyrol:2017ewj, Aguilar:2020yni},\nis the considerable suppression displayed by ${\\alpha}_{\\rm{3g}}(q^2)$ in the region below 2 GeV. \n\n\n\n\n\n\n\n\\section{\\label{sec:conc}Conclusions}\n\nIn this work we have carried out a thorough study of the dynamics related to the\nghost sector of pure Yang-Mills theories, incorporating into the\nstandard SDEs pivotal elements stemming from recent lattice studies.\nIn fact, these lattice results serve both as external inputs for some of the quantities that\nare difficult to determine accurately within the SDE framework, such as the gluon propagator and certain\ncomponents of the three-gluon vertex, and as refined benchmarks for testing the reliability of our numerical solutions, such as the ghost dressing function. \nSpecifically, the lattice gluon propagator has been used as a global input in all SDEs considered in the present study, while, in the ``soft gluon'' SDE, the lattice data for the three-gluon vertex \nin the same limit have been employed.\n\n\nThe main results of our analysis are \nsuccinctly captured in Figs.~\\ref{fig:rescoupled} and~\\ref{fig:3Dcomp2D}. \nIn particular, in the left panel of Fig.~\\ref{fig:rescoupled},\nthe ghost dressing function obtained as a solution of the coupled SDE system is compared to the \nresults of the lattice simulation of~\\cite{Boucaud:2018xup}. \nIt is important to appreciate that the success of this comparison \nhinges on the optimization for the cure of discretization artifacts, in connection with the scale-setting and continuum extrapolation, imposed on this set of lattice data.\nIndeed, the difference between the latter lattice data and the (non-extrapolated) \nresults of~\\cite{Bogolubsky:2009dc}, displayed in Fig.~\\ref{fig:ghostdress},\nis rather substantial, affecting a phenomenologically important region of momenta. 
\nThis difference accounts almost entirely for the discrepancies\nfound in earlier studies~\\cite{Aguilar:2009pp,Aguilar:2013xqa,Aguilar:2018csq}, where the SDE results were compared with the data of~\\cite{Bogolubsky:2009dc}. \n\nWe next turn to Fig.~\\ref{fig:3Dcomp2D}, where the curves for $B_1(r^2)$, \nobtained following two distinct procedures, are compared. \nThe excellent agreement between both results suggests an underlying self-consistency\namong the several elements participating non-trivially in the computation of these results.\nParticularly interesting in this context is the pivotal role played by the three-gluon vertex,\nwhich appears in both computations leading to the results of Fig.~\\ref{fig:3Dcomp2D},\nalbeit in rather dissimilar kinematic arrangements. \nSpecifically, to obtain the result marked by the blue continuous\ncurve, the vertex was approximated by its classical tensor structure,\naccompanied by the corresponding form factors in general kinematics, as explained in Sec.~\\ref{sec:coupled}.\nInstead, the red dashed curve is obtained through the direct use of the\nlattice results in the soft gluon limit, according to the discussion in Sec.~\\ref{sec:cgsoft}. The coincidence between\nthe results indicates that the STI-based construction of~\\cite{Aguilar:2019jsj},\nwhich gave rise to the form factors used for the\ncomputation of the blue continuous curve, is quite reliable.\nIn that sense, it is rather gratifying to see how well the\ndynamical equations respond in this particular set of circumstances; in fact,\nthe use of lattice data as SDE inputs appears to be completely consistent. 
\n\nNote that the present study is fully compatible with the assertion of~\\cite{Huber:2017txg,Huber:2018ned}\nthat the four-point function represented by the yellow ellipse in Fig.~\\ref{fig:diagvert} \nis numerically rather negligible.\nEvidently, the excellent agreement with the lattice\nfound in the left panel of Fig.~\\ref{fig:rescoupled} indicates that the omission of the corresponding term \nfrom the skeleton expansion of the SDE kernel does not introduce any appreciable error. \nIn fact, it is interesting to observe that an entirely different conclusion about the importance of this\nfour-point function would have been drawn \nif the non-extrapolated lattice results of~\\cite{Bogolubsky:2009dc} had been used for the\ncomparison in Fig.~\\ref{fig:rescoupled}.\nSpecifically, any attempt to interpret the difference alternatively as a consequence of the kernel truncation would force this four-point function to acquire considerably higher values than those found in the detailed analysis of~\\cite{Huber:2017txg,Huber:2018ned}.\n\nFinally, it would be interesting to extend the considerations of Sec.~\\ref{sec:cgsoft} to the case of the\nquark-gluon vertex, whose SDE and corresponding skeleton expansion\nare given by replacing in Figs.~\\ref{fig:coupsys} and~\\ref{fig:diagvert}, respectively, all ghost lines by quark lines. \nIn particular, the soft gluon limit of the quark-gluon vertex\ninvolves three form factors, whose determination \nhas attracted particular attention over the years. 
In fact, up until very recently~\\cite{Kizilersu:2021jen},\nnotable discrepancies existed between the continuous predictions~\\cite{Bhagwat:2004kj,LlanesEstrada:2004jz,Fischer:2003rp,Fischer:2006ub,Aguilar:2014lha,Aguilar:2016lbe,Oliveira:2018fkj, Oliveira:2018ukh,Oliveira:2020yac} and the results of lattice simulations~\\cite{Skullerud:2002ge,Skullerud:2003qu,Skullerud:2004gp,Lin:2005zd,Kizilersu:2006et,Oliveira:2016muq,Sternbeck:2017ntv}.\nIt is likely that the inclusion of $\\Ls$ in the SDE treatment might shed further light on this intricate problem.\n\n\\section*{\\label{sec:acknowledgments}Acknowledgments}\nThe work of A.~C.~A. is supported by the CNPq grant 307854\/2019-1 and the project 464898\/2014-5 (INCT-FNA).\nA.~C.~A., C.~O.~A., and M.~N.~F. also acknowledge financial support from the FAPESP projects 2017\/05685-2, 2019\/05656-8, and 2020\/12795-1, respectively.\nJ.~P. is supported by the Spanish MICIU grant FPA2017-84543-P,\nand the grant Prometeo\/2019\/087 of the Generalitat Valenciana. \nF.~D.~S. and J.~R.~Q. are supported by the Spanish MICINN grant PID2019-107844-GB-C2, and the regional Andalusian project P18-FR-5057. This study was financed in part by the Coordena\\c{c}\\~{a}o de Aperfei\\c{c}oamento de Pessoal de N{\\'\\i}vel Superior - Brasil (CAPES) Finance Code 001 (B.~M.~O).\n\n\n\\newpage \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{\\bf{Introduction}} \\label{sec:intro}\nMagnetars are non-accreting neutron stars with long spin periods ($P\\sim2$--$12\\,{\\rm\ns}$) and the largest spin-down rates ($\\dot{P}\\sim10^{-13}$--$10^{-10}\\,{\\rm\ns}\\,{\\rm\ns}^{-1}$) among the pulsar population. Most of them have a spin-down inferred magnetic field\nstrength, $B$,\nup to $\\sim10^{15}\\,{\\rm G}$. 
It is generally believed that\nmagnetars are young neutron stars and some are found inside Supernova Remnants\n(SNRs).\nMagnetars usually have persistent X-ray luminosity,\n$L_{\\rm X}\\sim10^{34-36}\\,{\\rm erg}\\,{\\rm s}^{-1}$, much larger than\ntheir rotational energy loss rate $\\dot{E}$,\nand they occasionally exhibit violent bursting activities \\citep[see review\nby][]{kb17}.\nIn order to explain the properties of this pulsar class, magnetar models have\nbeen developed. The most popular one is the twisted\nmagnetosphere model \\citep{td95,td01,bel09,bel11}.\nIt suggests that a toroidal magnetic field could exist in the stellar\ncrust.\nIf the internal magnetic field is strong enough,\nit could tear the crust and twist the\ncrust-anchored external field \\citep{td95,tdw00,tyk02}.\nIn addition, a starquake arising from the plastic deformation of the crust would\ncause magnetar\nbursts due to magnetic reconnection \\citep{td95,pbh12,pbh13}.\nPersistent X-ray emission of magnetars could be explained by magnetic field decay\n\\citep{tyk02,plm07}. Meanwhile, the magneto-thermal evolution theory suggests\nthat the field decay could be\nenhanced due to the changes in the conductivity and the magnetic diffusivity of magnetars\n\\citep{vrp13}. As a consequence, magnetars are observed to have higher surface\ntemperature and X-ray luminosity than canonical pulsars.\nIn general, soft X-ray spectra of magnetars can be\ndescribed by an absorbed blackbody model with temperature\n$kT\\sim0.3$--$0.6\\,{\\rm keV}$ plus an\nadditional power-law with photon index $\\Gamma\\sim2$--$4$ or\nanother blackbody component with $kT\\sim0.7\\,{\\rm keV}$\n\\citep[see][]{ok14,kb17}. 
This indicates that the
soft X-ray emission could comprise thermal emission together with some
non-thermal radiation process, such as synchrotron emission or inverse-Compton
scattering.\\\\
\\indent \\object{SGR~0501+4516} is a magnetar discovered with the Burst Alert
Telescope (BAT) on board \\emph{Swift} on 2008\\,August\\,22 due to
a series of short bursts \\citep{bbb08}. X-ray pulsations were detected with a
period of $P\\sim5.7\\,{\\rm s}$ \\citep{rit09}.
The source was subsequently identified in an archival
\\emph{ROSAT} observation taken in 1992.
The soft X-ray flux was $\\sim80$ times higher in the outburst when compared to
the 1992 observation \\citep{rit09}. The hard X-ray tail above $10\\,{\\rm keV}$
was first discovered with \\emph{INTEGRAL} right after the outburst \\citep{rit09}. It was also
detected in a \\emph{Suzaku} observation \\citep{ern10}.
From the spin period and spin-down rate, $B$
was estimated to be $2\\times10^{14}\\,{\\rm G}$ \\citep{wgk08}.
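For reference, this spin-inferred field strength follows from the standard vacuum-dipole braking estimate,
\\begin{equation}
B \\simeq 3.2\\times10^{19}\\sqrt{P\\dot{P}}\\,{\\rm G} \\approx 1.9\\times10^{14}\\,{\\rm G},
\\end{equation}
where we take $P\\simeq5.76\\,{\\rm s}$ and $\\dot{P}\\simeq5.9\\times10^{-12}\\,{\\rm s}\\,{\\rm s}^{-1}$ (see Section\\,\\ref{sec:tim_ana}), consistent with the quoted value.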
The soft X-ray
spectrum of SGR~0501+4516~
below $10\\,{\\rm keV}$ could be described by an absorbed blackbody model with a
power-law component, using
\\emph{XMM-Newton} observations obtained in the first year after the outburst
\\citep{rit09,cpr14}.
The X-ray spectral properties
from 2008 to 2013 were also measured with four \\emph{Suzaku} observations
\\citep{esk17}, but it is interesting to note that
the results are different from those reported in other literature, including
a smaller hydrogen column density,
lower blackbody temperature, larger radius, and softer power-law photon index
\\citep{rit09,gwk10,cpr14}.\\\\
\\begin{deluxetable*}{ccccc}
\\tablecaption{Observations of SGR 0501+4516 Used in Our Analysis.
\\label{tab:obs}}
\\tablehead{\\colhead{Date} & \\colhead{Observatory
(Instruments)} & \\colhead{ObsID} &
\\colhead{Mode} & \\colhead{Net Exposure (ks)}
}
\\startdata
2008 Aug 31 & \\emph{XMM-Newton} (PN) &
0552971201 &
SW & $10.2$ \\\\
2008 Sep 02 & \\emph{XMM-Newton} (PN) &
0552971301 &
SW & $20.5$ \\\\
2008 Sep 25 & \\emph{CXO} (HRC-I) &
9131 &
-- & $10.1$ \\\\
2008 Sep 30 & \\emph{XMM-Newton} (PN\/MOS1\/MOS2) &
0552971401 &
LW\/SW\/SW &
$30.1\/32.3\/32.3$ \\\\
2009 Aug 30 & \\emph{XMM-Newton} (PN\/MOS1\/MOS2) &
0604220101 &
SW\/FF\/SW &
$53.9\/52.4\/53.1$ \\\\
2012 Dec 09 & \\emph{CXO} (ACIS-S\\tablenotemark{a})&
15564 &
TE & $14.0$ \\\\
2013 Apr 03 & \\emph{CXO} (ACIS-S\\tablenotemark{a})&
14811 &
TE & $13.7$ \\\\
2013 Aug 31 & \\emph{Suzaku} (XIS0\/XIS1\/XIS3) & 408013010
& Normal & $36.0\/41.1\/41.2$
\\enddata
\\tablecomments{
\\tablenotetext{a}{Made in the sub-array mode with only one-eighth of CCD\\,7.}}
\\end{deluxetable*}
\\indent To date, there has been no accurate distance measurement for SGR~0501+4516.
As magnetars are young pulsars, SGR~0501+4516~ is expected to be located
close to a
spiral arm of the Galaxy.
The line of sight intercepts the Perseus and Outer arms of the Galaxy,
at
distances of $\\sim2.5$ and $\\sim5\\,{\\rm kpc}$,
respectively.
In this paper, we assume the distance $d=5\\,{\\rm kpc}$. In addition, there
exists a supernova remnant
(SNR) G160.9+2.6, $\\sim80'$ north of SGR~0501+4516~ \\citep{gc08,gwk10}. The
distance and age of the SNR were estimated as
$800\\pm400\\,{\\rm pc}$ and $4000$--$7000\\,{\\rm years}$ \\citep{lt07}.
\\citet{gwk10} proposed that SGR~0501+4516~ could be associated
with G160.9+2.6. Leaving the distance aside, if
this is the case, the magnetar should have a large proper motion of
$0\\farcs7$--$1\\farcs2\\,{\\rm yr}^{-1}$ to the south. \\\\
\\indent In this paper, we used new X-ray observations to show that
SGR 0501+4516 had returned to quiescence in 2013,
five years after the outburst, and we report on its spectral and timing
properties during the flux relaxation.
We also analyzed
archival observations to investigate the
long-term evolution.
\\section{\\bf{Observations and data reduction}} \\label{sec:obs}
There are eight X-ray observations used in this study (see
Table\\,\\ref{tab:obs}). We obtained two new
observations with the Advanced CCD Imaging Spectrometer (ACIS)
on board the \\emph{Chandra X-ray Observatory (CXO)} on 2012\\,December\\,9 and
2013\\,April\\,3. Both of them were made in the Timed Exposure (TE) mode for
14\\,ks using only one-eighth of the CCD, providing a fast frame time of
0.4\\,s. This allows us to obtain a crude pulse
profile for this $\\sim5.76\\,{\\rm s}$ period pulsar.
Inspection of the light curves showed no bursts from the source and no background flares during
the exposures. We checked that pile-up was negligible
in both observations.
In addition to these two ACIS observations, a\n\\emph{Chandra}\nHigh Resolution Camera (HRC) observation taken on 2008\\,September\\,25 was also used\nto measure the source position only.\nAll \\emph{Chandra} data were reprocessed with \\texttt{chandra\\_repro}\nin CIAO 4.8 with CALDB 4.7.4 before performing any analysis.\\\\\n\\indent There were six \\emph{XMM-Newton} observations after the discovery of\nthe source. We only analyzed the latest\nfour from 2008\\,August\\,31 to 2009\\,August\\,30 because\nSGR 0501+4516 showed strong bursting activities during the two earliest\nobservations.\nThe source was still bright 11 days after the outburst; the pile-up effect\nwas an issue in the\nMOS data obtained on 2008\\,August\\,31 and September\\,2 and hence only the PN\ndata were\nused in these two observations.\nWe first reprocessed all the data by the tasks\n\\texttt{epchain}\/\\texttt{emchain}\nin XMMSAS version 1.2. In the analysis,\nonly PATTERN\\,$\\leq4$ events of the PN data and PATTERN\\,$\\leq12$ events in\nthe MOS data were used.\nWe also used the standard screening for\nthe MOS (FLAGS~=~\\#XMMEA\\_EM) and\nPN (FLAGS~=~\\#XMMEA\\_EP) data.\nAfter removal of periods with\nbackground flares, we obtained net exposures ranging from $10.2$ to\n$53.9\\,{\\rm ks}$ (see Table\\,\\ref{tab:obs}).\\\\\n\\indent We also used the latest \\emph{Suzaku} data in the archive taken on\n2013\\,August\\,31,\nto combine with the \\emph{Chandra} data to better constrain the quiescent\nspectral properties.\nIn order to focus on the soft X-ray spectral properties,\nonly the data obtained with the XIS were used (see Table\\,\\ref{tab:obs}).\nThe XIS data were reprocessed using \\texttt{xisrepro} in HEAsoft 6.20\nwith standard screening criteria. 
We inspected the light curves
to verify that no bursts occurred throughout the $\\sim40\\,{\\rm ks}$
observation.
\\section{\\bf{ANALYSIS AND RESULTS}}
\\subsection{Imaging and Astrometry}
We measured the position of SGR~0501+4516~ in all
\\emph{Chandra} data using the
task \\texttt{celldetect} and obtained a consistent result of
$\\alpha$=5:01:06.8,
$\\delta$=+45:16:34 (J2000) within the uncertainty.
The measurement uncertainties at the $90\\%$ confidence level have radii of $0\\farcs4$ (HRC) and $0\\farcs5$ (ACIS).
As the ACIS images were taken in the sub-array mode with a small field of
view, we did not find any background sources to align the two images.
Therefore, we also need to consider the absolute astrometric accuracy of
\\emph{Chandra}, which is $0\\farcs8$ at the $90\\%$ confidence
level\\footnote{\\url{http:\/\/cxc.harvard.edu\/cal\/ASPECT\/celmon\/}}.
This gives an upper limit on the proper motion of
$0\\farcs32\\,{\\rm yr}^{-1}$ ($90\\%$ confidence level),
rejecting the suggestion that SGR 0501+4516 was born at the center of SNR
G160.9+2.6 \\citep{gwk10}.\\\\
\\indent Finally, we simulated a model point spread function for the ACIS data with
ChaRT\\footnote{\\url{http:\/\/cxc.harvard.edu\/ciao\/PSFs\/chart2\/}} using the
best-fit spectrum (see Section\\,\\ref{sec:spe_ana} below)
and confirmed that the radial profile is fully consistent with that of the
real data, indicating that there is no extended emission near the magnetar.
\\subsection{Timing Analysis}\\label{sec:tim_ana}
\\begin{figure}[ht!]
\\centering
\\includegraphics[width=0.47\\textwidth]{pulse_profile}
\\caption{Pulse profiles of SGR 0501+4516 in the energy range of
$0.5$--$7\\,{\\rm keV}$ for the latest \\emph{Chandra} observations.
The two profiles were aligned manually by matching
the brightest bin.
The uncertainties are at\nthe $1\\sigma$ level.\n}\\label{fig:pulse_profile}\n\\end{figure}\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{xmm_pn}\n\\caption{\\emph{XMM-Newton} PN spectra of SGR 0501+4516. The solid lines\nindicate the\nbest-fit 2BB+PL model on different epochs. The orange, purple, and black\ndashed\nlines indicate the low temperature BB, high temperature BB, and\nPL components of the 2009 August 30 spectrum, respectively.}\\label{fig:xmm_fits}\n\\includegraphics[width=0.45\\textwidth]{suzaku_chandra}\n\\caption{\\emph{Chandra} and \\emph{Suzaku} spectra of SGR 0501+4516. All\nsolid lines indicate the same best-fit 2BB+PL model. The different shape is\ndue to different responses of the\ninstruments. The orange, purple, and black dashed lines indicate the low\ntemperature BB, high\ntemperature BB, and PL components, respectively, with the \\emph{Suzaku} XIS\nresponse.}\\label{fig:suzaku_chandra}\n\\end{figure}\nWe extracted the source photons from the two new \\emph{Chandra} observations\nby using a $2\\farcs5$ radius aperture and obtained 4149 and 4043 counts,\nrespectively,\nin the $0.5$--$7\\,{\\rm keV}$ energy range. The estimated\nbackground photon counts in the source region are $\\sim0.6$ for both\nobservations.\nWe then applied a barycentric\ncorrection to the photon arrival times.\nWe employed the $\\chi^2$-test after epoch folding\n\\citep{lea87} and found periods of\n$P=5.76286(8)\\,{\\rm s}$ and $P=5.76299(9)\\,{\\rm s}$ for 2012\\,December\\,9 and\n2013\\,April\\,3 data, respectively.\nThe $1\\sigma$ uncertainties quoted here were estimated using the simulation results from\n\\citet{lea87}.\nWe used the best-fit periods to generate the pulse profiles for\nboth \\emph{Chandra} observations. 
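For reference, the statistic maximized over trial periods in this search is the standard epoch-folding
\\begin{equation}
\\chi^2=\\sum_{i=1}^{N}\\frac{(n_i-\\bar{n})^2}{\\bar{n}},
\\end{equation}
where $n_i$ is the number of counts falling in the $i$-th of $N$ phase bins and $\\bar{n}$ is the mean counts per bin; in the absence of pulsation, this statistic follows a $\\chi^2$ distribution with $N-1$ degrees of freedom \\citep{lea87}.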
As the frame time of our
observations was $0.4\\,{\\rm s}$, we only divided the pulse period of
$P=5.76\\,{\\rm s}$ into 13 phase bins.
Figure\\,\\ref{fig:pulse_profile} shows the pulse profile, which has a
double-peaked shape. The pulse
profile did not show any obvious variations between the two observations,
suggesting that
the source had already returned to quiescence
in 2013.\\\\
\\indent As the dates of the two new \\emph{Chandra}
observations were too widely separated, we were unable to measure the
spin-down rate $\\dot{P}$ by phase-coherent timing analysis. Moreover, the
uncertainties of the individual period measurements
were too large for $\\dot{P}$ to be obtained from our \\emph{Chandra}
observations alone.
We found that the two periods measured in 2012 and 2013 are formally
consistent with each other
after accounting for the uncertainties; however, they are different from the value obtained in the 2009 observation
\\citep{cpr14}.
Comparing our results with the spin period $P=5.7622571(2)\\,{\\rm s}$ measured in 2009,
we obtained
$\\dot{P}=\\Delta P\/\\Delta t=6(1)\\times10^{-12}\\,{\\rm s}\\,{\\rm s}^{-1}$ at the
$1\\sigma$ confidence level over the 2009--2013 baseline, which is
compatible with
$5.94(2)\\times10^{-12}\\,{\\rm s}\\,{\\rm s}^{-1}$ reported by \\citet{cpr14}.
\\subsection{Spectral Analysis}\\label{sec:spe_ana}
We extracted the source spectra from the \\emph{Chandra} observations using
the same $2\\farcs5$ radius apertures as in
the timing analysis above.
For the \\emph{XMM-Newton} and \\emph{Suzaku} XIS spectra, we used apertures of
$36''$ and $1\\farcm8$ radius, respectively.
We chose a larger region far from the source on the same CCD as the background
region. We restricted
the analysis to the energy range of $0.5$--$10\\,{\\rm keV}$
for the \\emph{XMM-Newton} and \\emph{Suzaku} data, and $0.5$--$7\\,{\\rm keV}$ for the
\\emph{Chandra} data to optimize the signal-to-noise ratio.
We grouped the\nspectra with a minimum of 30 counts\nper energy bin.\\\\\n\\begin{turnpage}\n\\begin{deluxetable*}{lcccccccccc}[ht!]\n\\centering\n\\tablecaption{Best-Fit Spectral Parameters for SGR~0501+4516 with\nUncertainties\nat the $90\\%$\nConfidence Level\\label{tab:pas_results}}\n\\tablecolumns{10}\n\\tablehead{\\colhead{Date} & \\colhead{$N_{\\rm H}$} &\n\\colhead{$kT_{\\rm low}$} & \\colhead{$R_{\\rm low}$\\tablenotemark{a}}\n& \\colhead{$F_{\\rm low}$\\tablenotemark{b} ($10^{-11}$}\n& \\colhead{$kT_{\\rm high}$} & \\colhead{$R_{\\rm high}$\\tablenotemark{a}} &\n\\colhead{$F_{\\rm high}$\\tablenotemark{b} ($10^{-11}$} & \\colhead{$\\Gamma$}\n& \\colhead{$F_{\\rm PL}$\\tablenotemark{b} ($10^{-11}$} & $\\chi^2\/{\\rm dof}$ \\\\\n & \\colhead{$(10^{22}\\,{\\rm cm}^{-2})$} & \\colhead{(keV)} & \\colhead{(km)} &\n\\colhead{erg\\,cm$^{-2}\\,{\\rm s}^{-1}$)} & \\colhead{(keV)} &\n\\colhead{(km)} & \\colhead{${\\rm erg}\\,{\\rm cm}^{-2}\\,{\\rm s}^{-1}$)} &\n& \\colhead{erg\\,cm$^{-2}\\,{\\rm s}^{-1}$)} &\n}\n\\startdata\n\\cutinhead{BB+PL model}\n2008 Aug 31\\tablenotemark{c}& $1.34\\pm0.06$ & \\nodata & \\nodata & \\nodata &\n$0.70\\pm0.02$\n& $1.45_{-0.08}^{+0.09}$ & $1.48\\pm0.08$ & $2.9\\pm0.1$ & $1.63\\pm0.08$ & $764.9\/741$\\\\\n2008 Sep 02\\tablenotemark{c} & $1.29_{-0.05}^{+0.04}$ & \\nodata & \\nodata & \\nodata &\n$0.68\\pm0.01$ & $1.46\\pm0.06$ & $1.34\\pm0.05$ &\n$2.85\\pm0.07$ & $1.46\\pm0.06$ & $947.0\/915$ \\\\\n2008 Sep 30\\tablenotemark{d} & $1.36\\pm0.03$ & \\nodata & \\nodata & \\nodata &\n$0.66\\pm0.01$ & $1.03\\pm0.04$ & $0.57\\pm0.02$ & $3.15_{-0.05}^{+0.06}$\n& $0.87\\pm0.02$ & $2367.5\/2169$ \\\\\n2009 Aug 30\\tablenotemark{d} & $1.43\\pm0.04$ & \\nodata & \\nodata & \\nodata &\n$0.56\\pm0.02$ & $0.56\\pm0.05$ & $0.072_{-0.007}^{+0.008}$ &\n$4.0\\pm0.1$ & $0.21\\pm0.01$ & $1308.3\/1227$ \\\\\n2013 Jun 23\\tablenotemark{e} & $1.43_{-0.08}^{+0.09}$ & \\nodata & \\nodata & \\nodata &\n$0.63_{-0.05}^{+0.04}$ & $0.34_{-0.05}^{+0.06}$ & 
$0.05\\pm0.01$ & $3.9_{-0.2}^{+0.3}$ & $0.15\\pm0.01$\n & $626.9\/603$ \\\\\n\\hline\n\\cutinhead{2BB+PL model}\n2008 Aug 31\\tablenotemark{c} & $0.90\\pm0.02$\\tablenotemark{f} & $0.35_{-0.02}^{+0.03}$ &\n$4.6_{-0.5}^{+0.6}$ & $0.55\\pm0.09$ & $0.75\\pm0.02$ &\n$1.4\\pm0.1$ & $2.14_{-0.09}^{+0.07}$ & $1.33\\tablenotemark{g}$ &\n$0.42\\pm0.07$ & $5911.5\/5653$\\tablenotemark{f} \\\\\n2008 Sep 02\\tablenotemark{c} & $0.90\\pm0.02$\\tablenotemark{f} & $0.31\\pm0.02$ &\n$5.2_{-0.6}^{+0.7}$ & $0.38\\pm0.05$ & $0.71_{-0.02}^{+0.01}$ &\n$1.56_{-0.06}^{+0.10}$ & $1.99_{-0.05}^{+0.04}$ &\n$1.33\\tablenotemark{g}$ & $0.45\\pm0.04$ &\n$5911.5\/5653$\\tablenotemark{f} \\\\\n2008 Sep 30\\tablenotemark{d} & $0.90\\pm0.02$\\tablenotemark{f} & $0.31\\pm0.01$\n& $4.8_{-0.2}^{+0.4}$ & $0.30\\pm0.02$ & $0.69_{-0.01}^{+0.02}$ & $1.13_{-0.06}^{+0.05}$ &\n$0.95\\pm0.02$ &\n$1.33\\tablenotemark{g}$ & $0.20\\pm0.02$ & $5911.5\/5653$\\tablenotemark{f} \\\\\n2009 Aug 30\\tablenotemark{d} & $0.90\\pm0.02$\\tablenotemark{f} & $0.25\\pm0.02$ &\n$4.4_{-0.4}^{+0.8}$ & $0.085\\pm0.008$ & $0.56_{-0.02}^{+0.06}$ & $0.7\\pm0.1$\n& $0.14\\pm0.01$ & $2.6_{-2.5}^{+0.4}$ & $0.06_{-0.02}^{+0.01}$ &\n$5911.5\/5653$\\tablenotemark{f} \\\\\n2013 Jun 23\\tablenotemark{e} & $0.90\\pm0.02$\\tablenotemark{f} & $0.26_{-0.02}^{+0.01}$ &\n$3.7_{-0.7}^{+0.3}$ & $0.07\\pm0.01$ & $0.62_{-0.04}^{+0.03}$ &\n$0.49_{-0.10}^{+0.05}$ &\n$0.10_{-0.02}^{+0.01}$ &\n$2.3_{-2.5}^{+0.7}$ & $0.032_{-0.025}^{+0.012}$ &\n$5911.5\/5653$\\tablenotemark{f}\n\\enddata\n\\tablecomments{\n\\tablenotetext{a}{Assuming a distance of $5\\,{\\rm kpc}$.}\n\\tablenotetext{b}{Absorbed fluxes in the $0.5$--$10\\,{\\rm keV}$ energy range.}\n\\tablenotetext{c}{Only PN data were used.}\n\\tablenotetext{d}{Joint-fit results of both PN and MOS data.}\n\\tablenotetext{e}{Joint-fit results of \\emph{Chandra} and \\emph{Suzaku} data. 
The date is the weighted-averaged epoch.}
\\tablenotetext{f}{$N_{\\rm H}$ is linked in the fit for all observations.}
\\tablenotetext{g}{Fixed at $\\Gamma=1.33$ from the results
of \\citet{ern10}.}}
\\end{deluxetable*}
\\end{turnpage}
All spectral analyses were performed in the \\texttt{Sherpa}
environment\\footnote{\\url{http:\/\/cxc.harvard.edu\/sherpa\/}}. We first
tried an absorbed blackbody plus
power-law (BB+PL) model as in previous studies \\citep{rit09,gwk10,cpr14}. We used
the interstellar
absorption model \\texttt{tbabs} with the abundances
set to
\\texttt{wilm} \\citep{wam00}.
The \\emph{XMM-Newton} spectra from the same epoch were fit with a
single set of parameters.
We found that the \\emph{Chandra} and \\emph{Suzaku} spectra share similar
best-fit parameters,
suggesting that the source was in quiescence in both. In order to boost the
signal-to-noise ratio,
we fit them together with the same parameters.\\\\
\\begin{figure}[ht!]
\\centering
\\includegraphics[width=0.49\\textwidth]{panel}
\\caption{Evolution trends for the best-fit parameters of SGR 0501+4516 with
the 2BB+PL model since the 2008 outburst.}\\label{fig:trends}
\\end{figure}
\\indent
The best-fit spectral
parameters are listed in Table\\,\\ref{tab:pas_results}.
From 2008 to 2013, the best-fit blackbody radius shrank
significantly from $R=1.45\\,{\\rm km}$ to $R=0.34\\,{\\rm km}$ (assuming
$d=5\\,{\\rm kpc}$) and the power-law photon index
softened from $\\Gamma=2.9$ to $\\Gamma=3.9$.
Our \\emph{XMM-Newton} results are consistent with those reported by
\\citet{rit09} and \\citet{cpr14}, except for a slightly higher absorption column
density $N_{\\rm H}$ due to the different absorption model we used.
While \\citet{cpr14}
suggested that the source had already returned
to quiescence one year after the 2008 outburst, our new results show that
the total absorbed flux was still
decreasing, from $2.8\\times10^{-12}\\,{\\rm erg}\\,{\\rm cm}^{-2}\\,{\\rm s}^{-1}$
in 2009 to $2.0\\times10^{-12}\\,{\\rm erg}\\,{\\rm cm}^{-2}\\,{\\rm s}^{-1}$ in
2013.
Compared with the previously reported \\emph{Suzaku} results,
our blackbody component has a higher temperature and a smaller size. This could
be the
result of the much lower column density ($N_{\\rm H}\\sim0.4\\times10^{22}\\,{\\rm
cm}^{-2}$) reported by \\citet{esk17}. \\\\
\\indent We noted that the best-fit PL component is
soft, with $\\Gamma\\gtrsim3$. This could indicate the thermal nature of the
emission. To test this, we restricted the fits to energies below $6\\,{\\rm keV}$ and
compared the best-fit results of the BB+PL and the double-blackbody (2BB) models. We
found that the latter provided better fits to all spectra, confirming
this interpretation. When we fit the entire energy range, the 2BB fit shows obvious
residuals in the highest energy bins for all \\emph{XMM-Newton} spectra, hinting at an additional PL component.
As the final model, we therefore considered the double-blackbody plus power-law (2BB+PL)
model and found that it provides the best fit.
Assuming that $N_{\\rm H}$ remained unchanged between epochs, we fit
all spectra simultaneously with a linked absorption model.
We found that the PL component dominated only above $\\sim6\\,{\\rm keV}$, where our
observations were not very sensitive. As the first three
\\emph{XMM-Newton} observations were taken within $\\sim1$ month after the
2008\\,August\\,26 \\emph{Suzaku} observation, we believe that they should share
similar spectral properties. In order to obtain a better
fit, we adopted the 2BB+PL result reported by
\\citet{ern10} and fixed $\\Gamma=1.33$ in the fitting of the 2008
\\emph{XMM-Newton} spectra.
As the photon index could have changed after 2008, we
did not fix $\\Gamma$ for the spectra taken from 2009 onwards. However, the PL
component was poorly constrained. We list the best-fit spectral results
in Table\\,\\ref{tab:pas_results}. The best-fit 2BB+PL models for the \\emph{XMM-Newton} PN spectra at different
epochs are plotted in Figure\\,\\ref{fig:xmm_fits}, and the fit to the last-epoch \\emph{Chandra} and \\emph{Suzaku}
spectra is plotted in Figure\\,\\ref{fig:suzaku_chandra}. Figure\\,\\ref{fig:trends} shows the evolution trends of the
two blackbody components. The
temperature of the cooler blackbody component dropped from
$kT_{\\rm low}=0.35\\,{\\rm keV}$ in 2008 to $0.26\\,{\\rm keV}$ in 2013,
while there was no significant change in the radius, with $R_{\\rm low}$ staying at
$\\sim4.5\\,{\\rm km}$ in all the observations.
The best-fit parameters for the hotter blackbody component, meanwhile,
are consistent with those of the single blackbody in the BB+PL fit. Both the temperature
$kT_{\\rm high}$ and the radius $R_{\\rm high}$ of this component
have dropped since the outburst.
Similar to the BB+PL results, Figure\\,\\ref{fig:trends} shows that
$R_{\\rm high}$ was not lowest in 2009, indicating that the source was
not yet in quiescence at that time.\\\\
\\begin{figure}[t!]
\\centering
\\includegraphics[width=0.49\\textwidth]{flux}
\\caption{Decay trends of the unabsorbed fluxes of SGR 0501+4516 for all components
in the 2BB+PL model.}\\label{fig:flux}
\\end{figure}
\\indent In Figure\\,\\ref{fig:flux}, we plot the flux evolution of all
components in the 2BB+PL
model; all show decreasing trends since the 2008 outburst.
The plot indicates a significant drop of the ${\\rm BB}_{\\rm high}$ flux after
2009,
again indicating that the source had not yet returned to quiescence at that time.
On the other hand, we found similar
count
rates in the 2012 and 2013 \\emph{Chandra} observations, which suggests\nthat\nSGR 0501+4516 had reached a quiescent state five years after the outburst.\nFinally, we note that there is no obvious plateau in the flux evolution,\ncontrary to what the crustal cooling model suggests\n\\citep{let02}.\\\\\n\\indent In addition to the BB+PL and 2BB+PL models,\nwe also tried the resonant cyclotron scattering (RCS) \\citep{rzt08} and the 3D\nsurface thermal emission\nand magnetospheric scattering (STEM3D) models \\citep{wg15} but\nthe fits converged to the boundary of the parameter\nspace. Therefore, we do not believe that the results are physical.\\\\\n\\begin{turnpage}\n\\begin{deluxetable*}{lccccccccc}\n\\centering\n\\tablecaption{Two-Temperature Fits to the Spectra of Magnetars in Quiescence\nwith Uncertainties or Upper Limits at the $90\\%$ Confidence Level\n\\label{tab:low_nh}}\n\\tablewidth{0pt}\n\\tabletypesize{\\scriptsize}\n\\tablehead{ \\colhead{Object} & Instrument (ObsID) & Model & \\colhead{$N_{\\rm H}$} &\n\\colhead{$kT_{\\rm low}$} & \\colhead{$R_{\\rm low}$} & \\colhead{$kT_{\\rm high}$}\n& \\colhead{$R_{\\rm high}$} & \\colhead{$\\Gamma$} & $\\chi^2_\\nu$\\,(dof)\n\\\\& & & \\colhead{$(10^{22}\\,{\\rm cm}^{-2})$} & \\colhead{(keV)} & \\colhead{(km)}\n& \\colhead{(keV)} & \\colhead{(km)} & &\n}\n\\startdata\nCXOU J010043.1--721134 &\\emph{XMM}\\,(see Reference 1) & 2BB & $0.063^{+0.020}_{-0.016}$ & $0.30\\pm0.02$ &\n$12.1^{+2.1}_{-1.4}$ & $0.68^{+0.09}_{-0.07}$ & $1.7^{+0.6}_{-0.5}$ & \\nodata &\n$1.14\\,(100)$ \\\\\nSGR 0526--66 & \\emph{CXO}\\,(10806) & 2BB\\tablenotemark{a} & $0.07_{-0.05}^{+0.06}$ &\n$0.42_{-0.05}^{+0.04}$ & $8.8_{-1.5}^{+1.7}$ &\n$1.1_{-0.3}^{+0.8}$ & $0.8_{-0.5}^{+0.9}$ & \\nodata &1.31\\,(117) \\\\\nXTE J1810--197\\tablenotemark{b}& \\emph{XMM}\\,(see Reference2) & 2BB & $0.60\\pm0.02$\n& $0.167\\pm0.006$ & $9.3\\pm1.1$ &\n$0.33\\pm0.02$ & $0.9\\pm0.2$ & \\nodata\n& $1.21\\,(824)$\\tablenotemark{c}\\\\\n\\emph{Swift} J1822.3--1606 & 
\\emph{CXO}\\,(15989-15993) & 2BB & $0.62\\pm0.05$ &\n$0.11\\pm0.01$ & $6.3\\pm1.7$ & $0.29\\pm0.03$ & $0.24_{-0.10}^{+0.14}$ & \\nodata &\n1.06\\,(74)\\\\\n4U 0142+61\\tablenotemark{b} &\\emph{XMM}\\,(see Reference 3)& 2BB+PL & $0.70\\pm0.03$ &\n$0.27\\pm0.02$ & $14\\pm3$ &\n$0.50\\pm0.02$ & $2.6\\pm1.1$ & $2.6\\pm0.2$ & $1.11\\,(2350)$\\tablenotemark{c}\\\\\nSGR 0501+4516 & see Table\\,\\ref{tab:pas_results}& 2BB+PL & $0.9\\pm0.2$ &\n$0.26_{-0.02}^{+0.01}$ & $3.7_{-0.7}^{+0.3}$ & $0.62_{-0.04}^{+0.03}$\n& $0.49_{-0.10}^{+0.05}$ & $2.3_{-2.5}^{+0.7}$ &\n$1.05\\,(5653)$\\tablenotemark{c} \\\\\n1E 2259+586 & \\emph{XMM}\\,(0203550701) & 2BB+PL & $1.1\\pm0.2$ & $0.32^{+0.04}_{-0.05}$ &\n$5.6\\pm1.6$ & $0.5^{+0.1}_{-0.2}$ & $1.5^{+1.7}_{-0.7}$ & $3.0^{+0.5}_{-3.2}$\n& 1.03\\,(494)\\\\\n1E 1048.1--5937& \\emph{XMM}\\,(0723330101) &2BB+PL & $1.6^{+0.2}_{-0.6}$ &\n$<0.18$ & $<410$ &\n$0.62\\pm0.01$ & $1.7\\pm0.5$ & $3.2\\pm0.1$ & 0.97\\,(909)\\\\\n1RXS J170849.0--400910 & \\emph{CXO}\\,(4605) & 2BB+PL & $2.72^{+0.04}_{-0.67}$\n& $<0.14$ &\n$<450$ & $0.41\\pm0.05$ & $2.8^{+2.2}_{-1.1}$ & $3.10^{+0.03}_{-0.64}$ &\n1.15\\,(389)\\\\\n1E 1547.0--5408 & \\emph{XMM}\\,(0402910101) &2BB\\tablenotemark{a} & $3.8^{+0.8}_{-0.6}$ &\n$0.39_{-0.08}^{+0.07}$ & $0.9_{-0.4}^{+0.9}$ &\n$0.8_{-0.1}^{+0.3}$ & $0.11_{-0.07}^{+0.12}$ & \\nodata & 1.52\\,(85)\\\\\nSGR 1900+14 & \\emph{XMM}\\,(0506430101) & 2BB+PL & $4.1^{+0.7}_{-0.2}$ & $<0.12$ & $<1080$ &\n$0.39^{+0.01}_{-0.05}$ & $5.5\\pm1.9$ & $2.2^{+0.3}_{-0.1}$ & 1.01\\,(276)\\\\\n1E 1841--045 & \\emph{XMM}\\,(0013340101) & 2BB+PL & $4.2^{+1.8}_{-1.1}$ & $<0.28$ & $<2300$ &\n$0.5^{+0.1}_{-0.2}$ & $6^{+30}_{-4}$ & $1.9^{+0.4}_{-0.7}$ & 1.12\\,(232)\\\\\nCXOU J171405.7--381031 & \\emph{CXO}\\,(11233) & 2BB+PL & $6.6^{+1.1}_{-1.5}$ &\n$<0.28$ & $<90$\n& $0.5\\pm0.1$ & $<4.3$ & $1.3^{+1.8}_{-2.1}$ & 1.08\\,(108)\\\\\nCXOU J164710.2--455216 & \\emph{CXO}\\,(14360) & 2BB+PL & $6.9^{+1.9}_{-1.7}$ &\n$<0.18$ &\n$<3800$ & $0.47^{+0.19}_{-0.12}$ & 
$<9$ & $3.2^{+0.6}_{-1.1}$ & 0.91\\,(107) \\\\\nSGR 1806--20 & \\emph{CXO}\\,(7612) & 2BB\\tablenotemark{a} & $9.8^{+1.0}_{-0.9}$ &\n$0.67_{-0.09}^{+0.11}$ & $1.8_{-0.5}^{+0.8}$ &\n$2.1^{+0.7}_{-0.3}$ & $0.22\\pm0.08$ & \\nodata &\n1.15\\,(268)\n\\enddata\n\\tablecomments{\n\\tablenotetext{a}{We noted that BB+PL could be a better model (see the text).}\n\\tablenotetext{b}{The uncertainties have been scaled to the\n$90\\%$ confidence level.}\n\\tablenotetext{c}{Joint fits with different observations.}}\n\\tablerefs{\\scriptsize (1) \\citet{tem08}; (2) \\citet{bid09};\n(3) \\citet{gdk10}\n}\n\\end{deluxetable*}\n\\end{turnpage}\n\\section{\\bf{DISCUSSION}}\n\\subsection{Two-Temperature Spectral Model}\\label{sec:2bb_fit}\nOur study showed that the spectrum of SGR 0501+4516 is best described by\na two-temperature model. This is similar to the cases of some magnetars,\nincluding\nCXOU J010043.1--721134, XTE J1810--197, and 4U 0142+61\n\\citep{tem08,bid09,gdk10}.\nThe result motivates us to\ntest this model on a larger sample of the magnetar population. \\\\\n\\indent We identified 15 magnetars with X-ray observations\ntaken a few\nyears after their outbursts.\nThe three sources mentioned above have\npreviously been fit with two-temperature spectral models.\nFor the rest, we reduced \\emph{Chandra} and \\emph{XMM-Newton} observations\nand extracted their spectra with the same\nprocedures as for SGR 0501+4516. We tried both 2BB and\n2BB+PL models on all sources and report the one with lower reduced $\\chi^2$\nvalue ($\\chi^2_\\nu$).\nTable\\,\\ref{tab:low_nh} lists our results and those\nreported in previous studies.\nWe found that, for most magnetars with\nsmall $N_{\\rm H}\\lesssim1\\times10^{22}\\,{\\rm cm}^{-2}$, their spectra are\ngenerally well fit by\nthe two-temperature spectral model.\nThe higher temperature blackbody component always has a smaller radius\n$R\\lesssim3\\,{\\rm km}$ and vice versa. 
For the sources with large
$N_{\\rm H}$, the lower temperature blackbody component
is not well constrained due to heavy absorption by the ISM below $2\\,{\\rm
keV}$. There are three exceptional cases: SGR 0526--66, 1E
1547.0--5408, and SGR
1806--20, for which $kT_{\\rm high}$ seems too high to be physical. We compared the
$\\chi^2$-statistics between the 2BB and the BB+PL fits and found that
they are similar. It is therefore possible that the BB+PL model
provides a more physical description of their spectra.\\\\
\\indent Our results hint that the two-temperature components could be a
common feature among magnetars, although not all could be detected due to
interstellar absorption. The physical interpretation
of the two blackbody components will be discussed below.
\\begin{figure}[t!]
\\centering
\\includegraphics[width=0.5\\textwidth]{luminosity_to_hsarea.eps}
\\caption{Trend of the hotspot X-ray luminosity $L_{\\rm X}$ in
$0.5$--$10\\,{\\rm keV}$ against the hotspot
area $4\\pi R_{\\rm high}^2$ for SGR 0501+4516.
The blue dashed\nline shows the best-fit correlation\n$L_{\\rm X}=1.8\\times10^{34}A_{11}^{1.42}\\,{\\rm erg}\\,{\\rm s}^{-1}$\nand\nthe black solid lines show the theoretical predicted correlation,\n$L_{\\rm X}=1.3\\times10^{33}KA_{11}^2\\,{\\rm erg}\\,{\\rm s}^{-1}$, with $K=1$ and\n$K=20$.}\\label{fig:l_to_a}\n\\end{figure}\n\\begin{deluxetable*}{lccc}[b!]\n\\tablecaption{Blackbody Temperature and Spin-Inferred Magnetic Field Strength\nof Magnetars\nand Young High Magnetic Field Rotation-Powered Pulsars as Plotted in\nFigure\\,\\ref{fig:kt_to_b}\\label{tab:kt_to_b}}\n\\tablewidth{0pt}\n\\tabletypesize{\\small}\n\\tablehead{\\colhead{Source} & \\colhead{$B$\\tablenotemark{a} ($10^{14}\\,$G)} &\n\\colhead{$kT$\\tablenotemark{b} (keV)} &\n\\colhead{Reference}\n}\n\\startdata\n\\sidehead{Magnetars (entire surface):}\n\\emph{Swift} J1822--1606 & $0.14$ & $0.11\\pm0.01$ & See\nTable\\,\\ref{tab:low_nh}\\\\\n1E 2259+586 & $0.59$ & $0.32_{-0.05}^{+0.04}$ & See Table\\,\\ref{tab:low_nh}\\\\\n4U 0142+61 & $1.3$ & $0.27\\pm0.02$ & 1\\\\\nSGR 0501+4516 & $1.9$ & $0.26_{-0.02}^{+0.01}$ & See Table\\,\\ref{tab:low_nh}\\\\\nXTE J1810--197 & $2.1$ & $0.167\\pm0.006$ & 2\\\\\nCXOU J010043.1--721134 & $3.9$ & $0.30\\pm0.02$ & 3\\\\\n1RXS J170849.0--400910 & $4.7$ & $0.42\\pm0.02$ & 4\\\\\nSGR 0526--66 & $5.6$ & $0.44\\pm0.02$ & 5\\\\\nSGR 1900+14 & $7.0$ & $0.47\\pm0.02$ & 6\\\\\n1E 1841--045 & $7.0$ & $0.45\\pm0.03$ & 7\\\\\nSGR 1806--20 & $11.3$ & $0.55\\pm0.07$ & 8\\\\\n\\sidehead{Magnetars (hotspot):}\nSGR 0418+5729 & $0.061$ & $0.32\\pm0.05$ & 9\\\\\n\\emph{Swift} J1822--1606 & $0.14$ & $0.29\\pm0.03$ & See\nTable\\,\\ref{tab:low_nh}\\\\\n1E 2259+586 & $0.59$ & $0.5_{-0.2}^{+0.1}$ & See Table\\,\\ref{tab:low_nh}\\\\\nCXOU J164710.2--455216 & $1.0$ & $0.53\\pm0.03$ & 10\\\\\n4U 0142+61 & $1.3$ & $0.50\\pm0.02$ & 1\\\\\nSGR 0501+4516 & $1.9$ & $0.62_{-0.04}^{+0.03}$ & See\nTable\\,\\ref{tab:low_nh}\\\\\nXTE J1810--197 & $2.1$ & $0.33\\pm0.02$ & 2\\\\\nSGR 1935+2154 & $2.2$ & 
$0.47\\pm0.03$ & 11\\\\
1E 1547.0--5408 & $2.2$ & $0.43\\pm0.05$ & 12\\\\
PSR J1622--4950 & $2.7$ & $0.5\\pm0.1$ & 13\\\\
CXOU J010043.1--721134 & $3.9$ & $0.68_{-0.07}^{+0.09}$ & 3\\\\
1E 1048.1--5937 & $4.5$ & $0.56\\pm0.02$ & 14\\\\
\\sidehead{High-$B$ rotation-powered pulsars:}
PSR B1509--58 & $0.15$ & $0.15\\pm0.01$ & 15\\\\
PSR J1119--6127 & $0.41$ & $0.21\\pm0.04$ & 16\\\\
PSR J1846--0258 & $0.49$ & $<0.25$ & 17
\\enddata
\\tablecomments{
\\tablenotetext{a}{\\scriptsize Adopted from the Magnetar Catalog
\\citep{ok14}.}
\\tablenotetext{b}{\\scriptsize Uncertainties are at the $90\\%$ confidence level.}}
\\tablerefs{\\scriptsize
(1) \\citet{gdk10}; (2) \\citet{bid09}; (3) \\citet{tem08};
(4) \\citet{cri07}; (5) \\citet{phs12}; (6) \\citet{met06b};
(7) \\citet{ks10};
(8) \\citet{emt07b}; (9) \\citet{rip13}; (10) \\citet{akac13};
(11) \\citet{ier16}; (12) \\citet{bis11}; (13) \\citet{ags12};
(14) \\citet{tgd08}; (15) \\citet{hnt17};
(16) \\citet{nkh12}; (17) \\citet{ln11}
}
\\end{deluxetable*}
\\subsection{Physical Interpretation of the Hotter Blackbody
Component}\\label{sec:high_bb}
The best-fit radius of the higher temperature component $R_{\\rm high}$ of SGR~0501+4516
had shrunk to $0.49\\,{\\rm km}$ from 2008 to 2013,
indicating that the thermal emission could come from a hotspot on the surface.
Several other magnetars also showed
blackbody radii that continued to shrink for a few years after their outbursts
\\citep{bl16}.
\\citet{bel09} suggested that this
could be observational evidence supporting the $j$-bundle model. When a
twisted magnetic field is implanted into the closed
magnetosphere, the current ($j$-bundle) would flow along the closed magnetic
field lines and return to the stellar surface,
heating up the footprints of the $j$-bundle and resulting in hotspots.
After\nan outburst, the footprints\nare expected to keep shrinking and the hotspot could be observed as a blackbody\ncomponent with a decreasing radius.\nThis predicts a correlation between the X-ray luminosity and the area of a\nhotspot as\n$L_{\\rm X}=1.3\\times10^{33}KA_{11}^{2}\\,{\\rm erg}\\,{\\rm s}^{-1}$, where $A_{11}$ is\nthe blackbody\narea in units of $10^{11}\\,{\\rm cm}^{2}$ and $K$ is a constant depending on\nthe twisting angle of the $j$-bundle,\nthe surface magnetic field strength, and the discharge voltage\n\\citep{bel09,bel11}.\\\\\n\\indent We plot in Figure\\,\\ref{fig:l_to_a} the hotspot\nluminosity of SGR 0501+4516 against\nits area $A=4\\pi R_{\\rm high}^2$. The distance of the source is assumed to be\n$d=5\\,{\\rm kpc}$\nfor calculating the luminosity. Our result broadly agrees with the theoretical\nprediction and suggests $K\\sim20$.\nIf we fit the data points with a straight line in the log--log plot in\nFigure\\,\\ref{fig:l_to_a},\nthe best-fit correlation is flatter, with\n$L_{\\rm X}=1.8\\times10^{34}A_{11}^{1.42}\\,{\\rm erg}\\,{\\rm s}^{-1}$.\nSimilar behavior was also found in several other magnetars during flux\nrelaxation after outbursts \\citep{bl16}.\nThe discrepancy could be due to the time variation of the proportionality\nconstant $K$.\\\\\n\\subsection{Physical Interpretation of the Cooler Blackbody\nComponent}\\label{sec:low_bb}\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=0.43\\textwidth]{kt_to_b}\n\\caption{Blackbody temperature against magnetic field strength of magnetars\nand three young high-$B$ rotation-powered pulsars,\nusing values listed in Table\\,\\ref{tab:kt_to_b}.\nThe red and green dots indicate blackbodies from the entire surface and the\nhotspots, respectively.\nThe blue dots represent the high-$B$ rotation-powered pulsars. The red solid line shows\nthe best-fit correlation\n$kT\\propto B^{0.4}$ using the red data points only. 
The black dashed line\nrepresents the theoretical prediction of $kT\\propto\nB^{0.5}$.}\\label{fig:kt_to_b}\n\\end{figure}\nIn the two-temperature fits, the cooler blackbody always shows a larger radius\n$R_{\\rm low}$ and some values listed in Table\\,\\ref{tab:low_nh} are compatible\nwith the neutron star\nradius. We therefore believe that this blackbody component could originate\nfrom the entire surface. Additional support comes from the fact that, in our case of SGR\n0501+4516, $R_{\\rm low}$ has been relatively stable during flux relaxation.\nTheories suggest that the thermal emission of magnetars\ncould arise from the decay of the crustal magnetic field\n\\citep{td96,plm07}. If this\nis the only energy source, one expects\na correlation between the surface temperature $kT$ and the magnetic\nfield strength $B$ \\citep{plm07}. The conservation of energy could be\nexpressed as\n\\begin{equation}\n-A\\Delta R\\frac{{\\rm d}E_{\\rm m}}{{\\rm d}t}=A\\sigma T^4, \\label{eq:b_to_t}\n\\end{equation}\nwhere $E_{\\rm m}$ is the magnetic energy density, $A$ is the emission area,\n$\\Delta R$ is the thickness of\nthe neutron star crust, and $\\sigma$ is the Stefan-Boltzmann constant. The\nmagnetic energy density\n$E_{\\rm m}$ could be written as $B^2\/8\\pi$. If the decay of $B$ is in the\nexponential form, it implies a relation $T\\propto \\sqrt{B}$.\nNote that this ignores any age effects, which is justified\nas magnetars are in general young objects \\citep[see][]{vrp13}.\\\\\n\\indent To verify the correlation above,\nwe investigated the trend between $kT$ and $B$ for\nall quiescent magnetars, using the latest results reported in the literature and\nfrom our own analysis (see Table\\,\\ref{tab:low_nh}). These values are listed\nin Table\\,\\ref{tab:kt_to_b}\nand plotted in Figure\\,\\ref{fig:kt_to_b}.\nAs we mentioned, some blackbody components correspond to the hotspot and some\ncorrespond to the entire\nsurface. 
We show them separately in the plot as two groups, depending on\nwhether the blackbody radius $R$\nis larger or smaller than $3\\,{\\rm km}$. The plot shows an increasing trend for the entire surface $kT$,\nwith a correlation coefficient $r=0.85$. We fit the log--log plot with a\nstraight line and obtained\n$kT\\propto B^{0.4}$, which is a bit flatter than, but generally comparable\nwith, the theoretical prediction of $B^{0.5}$.\nOn the other hand, the temperature of the hotspots\nshows no such correlation, which suggests that they could probably be powered\nby the $j$-bundle\ninstead of the decay of the crustal field.\\\\\n\\begin{deluxetable*}{lcccc}\n\\tablecaption{Quiescent X-Ray Luminosity in $2$--$10\\,{\\rm keV}$ and\nSpin-Inferred Magnetic Field Strength\nof Magnetars and High Magnetic Field Rotation-Powered Pulsars as Plotted in\nFigure\\,\\ref{fig:l_to_b}\\label{tab:l_to_b}}\n\\tablewidth{0pt}\n\\tabletypesize{\\small}\n\\tablehead{\\colhead{Source} & \\colhead{$L_{\\rm X}$\\tablenotemark{a} ($10^{35}\\,{\\rm\nerg}\\,{\\rm s}^{-1}$)} &\n\\colhead{$B$\\tablenotemark{b} $(10^{14}\\,{\\rm G})$} &\n\\colhead{Distance\\tablenotemark{b} (kpc)} & \\colhead{Reference}\n}\n\\startdata\n\\sidehead{Magnetars:}\nSGR 0418+5729 & $1.0_{-0.9}^{+1.1}\\times10^{-5}$ & $0.061$ &\n$2.0\\pm0.4$\\tablenotemark{c} & 1\\\\\n\\emph{Swift} J1822.3--1606 & $5_{-4}^{+3}\\times10^{-5}$ & $0.14$ &\n$1.6\\pm0.3$ & See Table\\,\\ref{tab:low_nh} \\\\\n1E 2259+586 & $0.20_{-0.06}^{+0.04}$ & $0.59$ & $3.2\\pm0.2$ & See\nTable\\,\\ref{tab:low_nh} \\\\\nCXOU J164710.2--455216 & $(4.5\\pm3.8)\\times10^{-3}$ & $1.0$ & $3.9\\pm0.7$ &\n2\\\\\n4U 0142+61 & $1.05\\pm0.33$ & $1.3$ & $3.6\\pm0.4$ & 3\\\\\nSGR 0501+4516 & $3.5_{-1.3}^{+1.0}\\times10^{-2}$ & $1.9$ &\n$5.0\\pm1.0$\\tablenotemark{c} & See Table\\,\\ref{tab:low_nh}\\\\\nXTE J1810--197 & $1.3_{-0.9}^{+0.5}\\times10^{-3}$ & $2.1$ & $3.1\\pm0.5$ & 4\\\\\n1E 1547.0--5408 & $1.3_{-0.9}^{+0.5}\\times10^{-2}$ & $2.2$ & $4.5\\pm0.5$ & 5\\\\\nSGR 
1627--41 & $2.5_{-1.3}^{+2.3}\\times10^{-2}$ & $2.2$ & $11.0\\pm0.3$ & 6\\\\\nPSR J1622--4950 & $4.4_{-3.6}^{+7.0}\\times10^{-3}$ & $2.7$ &\n$9.0\\pm1.8$\\tablenotemark{c} & 7 \\\\\nCXOU J010043.1--721134 & $0.7_{-0.3}^{+1.7}$ & $3.9$ & $62.4\\pm1.6$ & 8\\\\\n1E 1048.1--5937 & $0.5\\pm0.3$ & $4.5$ & $9.0\\pm1.7$ & 9\\\\\n1RXS J170849.0--400910 & $0.42\\pm0.11$ & $4.7$ & $3.8\\pm0.5$ & 10\\\\\nCXOU J171405.7--381031 & $0.33\\pm0.24$ & $5.0$ & $10.2\\pm3.5$ & 11\\\\\nSGR 0526--66 & $1.9_{-0.4}^{+0.3}$ & $5.6$ & $53.6\\pm1.2$ & 12\\\\\nSGR 1900+14 & $0.7\\pm0.3$ & $7.0$ & $12.5\\pm1.7$ & 13\\\\\n1E 1841--045 & $1.8_{-1.0}^{+0.7}$ & $7.0$ & $8.5_{-1.0}^{+1.3}$ & 14\\\\\nSGR 1806--20 & $1.6_{-0.7}^{+0.8}$ & $11.3$ & $8.7_{-1.5}^{+1.8}$ & 15\\\\\n\\sidehead{High-$B$ rotation-powered pulsars:}\nPSR B1509--58 & $0.96\\pm0.05$ & $0.15$ & $5.2\\pm1.4$ & 16 \\\\\nPSR J1119--6127 & $2.5_{-1.3}^{+3.2}\\times10^{-3}$ & $0.41$ & $8.4\\pm0.4$ & 17\n\\\\\nPSR J1846--0258 & $0.19_{-0.03}^{+0.04}$ & $0.49$ & $6.0_{-0.9}^{+1.5}$ & 18\n\\enddata\n\\tablecomments{\n\\tablenotetext{a}{\\scriptsize $90\\%$ uncertainties in\n$L_{\\rm X}$, derived by\ncombining the errors in flux\nand distance using the standard error propagation formula.}\n\\tablenotetext{b}{\\scriptsize Adopted from the Magnetar Catalog\n\\citep{ok14}. 
For those with multiple estimated distances, we\nsimply used the most recent or better-measured values.}\n\\tablenotetext{c}{\\scriptsize As the uncertainty in distance is not\nreported, we assumed a relative error of $20\\%$, similar to that of other\nsources.}}\n\\tablerefs{\\scriptsize\n(1) \\citet{rip13}; (2) \\citet{akac13};\n(3) \\citet{rni07}; (4) \\citet{bid09}; (5) \\citet{gg07};\n(6) \\citet{eiz08}; (7) \\citet{ags12}; (8) \\citet{tem08};\n(9) \\citet{tgd08};\n(10) \\citet{rio07}; (11) \\citet{sbni10}; (12) \\citet{phs12};\n(13) \\citet{nmy09}; (14) \\citet{ks10}; (15) \\citet{emt07b};\n(16) \\citet{hnt17}; (17) \\citet{nkh12}; (18) \\citet{ln11}\n}\n\\end{deluxetable*}\n\\indent There is recent evidence showing that both\nyoung high magnetic field rotation-powered pulsars and magnetars share\nsimilar properties, making the division between these two classes blurred\n\\citep[see][]{ggg08,nk11,glk16}. This motivates us to include the three young\nsources with ages of $\\sim10^3\\,{\\rm years}$, namely PSRs B1509--58, J1119--6127, and J1846--0258,\nin Table\\,\\ref{tab:kt_to_b} and Figure\\,\\ref{fig:kt_to_b} for comparison.\nThe thermal emission of PSRs B1509--58 and J1119--6127 has blackbody radii\n$R=10^{+39}_{-5}\\,{\\rm km}$\nand $3^{+4}_{-1}\\,{\\rm km}$, respectively, suggesting that it could\noriginate from\nthe entire surface (or a large area; \n\\citealt{nkh12,hnt17}). However, for PSR B1509--58, the blackbody radius was not\nvery well\nconstrained due to strong non-thermal emission.\nOn the other hand, no thermal emission has been found from PSR J1846--0258 in\nquiescence, with a temperature\nupper limit of $0.25\\,{\\rm keV}$ \\citep{ln11}. From the plot,\nit is interesting to note that all high-$B$ rotation-powered pulsars seem to follow the\nsame $kT$--$B$ trend\nas magnetars. Our results suggest that the energy source, i.e. 
$B$-field\ndecay, could power the\nentire surface thermal emission of magnetars and high-$B$ rotation-powered pulsars.\\\\\n\\indent\nWhile $kT$ and $B$ appear to show a correlation that is broadly consistent\nwith the theory,\nthere remain some unsolved problems in this picture.\nThe temperature of the cooler blackbody component is typically higher in\noutburst, then decays to a constant value a few years later.\nHence, the outburst could have partly contributed to\nthe thermal emission (see Figure\\,\\ref{fig:trends} and also\n\\citealt{bid09} and \\citealt{gdk10}). Also, we note that\nsome radii of the cooler blackbody are\nsmaller than the radius of a neutron star.\nIt could indicate that the emission regions are smaller than the entire\nsurface or that the temperature\ndistribution is inhomogeneous. It is unclear if Equation\\,\\ref{eq:b_to_t}\nneeds to be modified in this case.\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.41\\textwidth]{l_to_b}\n\\caption{X-ray luminosity $L_{\\rm X}$ in $2$--$10\\,{\\rm keV}$ against\nmagnetic field strength $B$, using values listed in Table\\,\\ref{tab:l_to_b}. The red dots indicate magnetars in quiescence, and\nthe blue dots indicate high-$B$ rotation-powered pulsars.\nThe red and the green dashed lines show the theoretical predictions of\n$L_{\\rm X}\\propto B^{2}$ and $\\propto B^{4.4}$,\nrespectively.}\\label{fig:l_to_b}\n\\end{figure}\n\\subsection{Correlation Between X-Ray Luminosity and Magnetic\nField}\\label{sec:correlation_L_kT_B}\nWe revisit the correlation between the quiescent X-ray luminosity, $L_{\\rm X}$, and\nthe magnetic field, $B$, of magnetars as reported by \\citet{akt12}, using\nupdated measurements\nlisted in Table\\,\\ref{tab:l_to_b}. 
The results are plotted in\nFigure\\,\\ref{fig:l_to_b}.\nWe compared the trend with two theoretical predictions: $L_{\\rm X}\\propto\nB^{2}$, deduced from Equation\\,\\ref{eq:b_to_t} \\citep{plm07}, and\n$L_{\\rm X}\\propto B^{4.4}$, based on the ambipolar diffusion model with neutrino\ncooling \\citep{td96}.\nThe plot shows a general trend but with large scatter, particularly for\nmagnetars\nwith $B\\sim10^{14}\\,{\\rm G}$.\nOur updated plot prefers $B^2$, providing some support for the\nsimple magnetic field decay model.\nNote that our result contradicts that reported by \\citet{akt12}.\nThe main discrepancy is due to the updated measurements from two low-field\nmagnetars, SGR 0418+5729 and\n\\emph{Swift} J1822.3--1606. If we fit the log--log plot with a straight line,\nwe obtain a slightly\nflatter correlation of $L_{\\rm X}\\propto B^{1.7}$.\nFrom the plot, 1E~2259+586 and 4U~0142+61 are far more\nluminous than other magnetars with similar $B$. Excluding these two\noutliers gives $L_{\\rm X}\\propto B^{2.8}$, which again prefers $B^2$ to $B^{4.4}$.\\\\\n\\indent Similar to the $kT$--$B$ plot, we also include three young high\nmagnetic field rotation-powered pulsars in Figure\\,\\ref{fig:l_to_b}.\nWe found that only PSR J1119--6127 follows the general trend of magnetars,\nwhile the other two, PSRs B1509--58 and J1846--0258, have luminosities a few\norders of magnitude\nhigher. We believe that their X-ray emission is dominated by\nnon-thermal radiation powered by spin-down, which could be a main difference\nbetween magnetars and high-$B$ rotation-powered pulsars. 
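The straight-line fits in log--log space quoted throughout this section (e.g. $kT\propto B^{0.4}$ and $L_{\rm X}\propto B^{1.7}$) are ordinary least-squares fits to the logarithms of the data. A minimal sketch of such a power-law fit follows; the data points below are hypothetical, for illustration only, and are not the measured values of Table\,\ref{tab:l_to_b}:

```python
import math

def fit_power_law(x, y):
    """Least-squares straight-line fit in log-log space.

    Fits log10(y) = a * log10(x) + b and returns (a, 10**b),
    i.e. the exponent and prefactor of y = C * x**a.
    """
    lx = [math.log10(v) for v in x]
    ly = [math.log10(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    a = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) \
        / sum((u - mx) ** 2 for u in lx)
    b = my - a * mx
    return a, 10 ** b

# Hypothetical (B, L_X) pairs following L_X = 2e33 * B^2 exactly
B = [0.1, 0.5, 1.0, 5.0, 10.0]
L = [2e33 * v ** 2 for v in B]
slope, prefactor = fit_power_law(B, L)
```

For real data with asymmetric errors in both flux and distance, a weighted fit or a Monte Carlo propagation of the distance uncertainties would be more appropriate than this unweighted sketch.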
Although the correlation appears\nto support the\ntheoretical prediction, there are too few magnetar examples with\n$B<10^{14}\\,{\\rm G}$.\nEnlarging the sample in this magnetic field range in future studies would\nbetter test the theory.\n\\section{Conclusion}\\label{sec:con}\nWe performed spectral and timing analyses of SGR 0501+4516 using new and\narchival\nX-ray observations taken with \\emph{Chandra}, \\emph{XMM-Newton}, and\n\\emph{Suzaku}. We show that the\nsource returned to quiescence in 2013, five years after the outburst.\nOur timing analysis found a spin period of $\\sim5.762\\,{\\rm s}$ with stable\npulse profiles in 2012 and 2013.\nThe \\emph{Chandra} images show no detectable proper motion, with an upper\nlimit of $0\\farcs32\\,{\\rm yr}^{-1}$,\nrejecting the idea that SGR 0501+4516 was born in SNR G160.9+2.6.\nWe found that the soft X-ray spectrum is best described by\na double blackbody plus power-law (2BB+PL) model.\nThe quiescent spectrum has temperatures of $0.26\\,{\\rm keV}$\n(with $R=3.7\\,{\\rm km}$) and $0.62\\,{\\rm keV}$ (with $R=0.49\\,{\\rm km}$).\nWe found a correlation between the X-ray luminosity and the area of the\nevolving hotter blackbody component, which agrees with the prediction of the\n$j$-bundle model.\\\\\n\\indent We further applied the two-temperature spectral model to other\nmagnetars in\nquiescence and found that it provides a good fit to\nmost sources with low column density,\nsuggesting that this could be a common feature.\nWe investigated the correlation between the blackbody temperature $kT$ and the\nspin-inferred\nmagnetic field $B$ of all magnetars in quiescence.\nFor blackbodies with large areas comparable to the entire stellar surface,\nthe correlation generally agrees with the prediction from the simple magnetic\nfield decay model.\nWe found that this simple scenario can also explain the trend between the\nquiescent X-ray luminosity and magnetic field strength of 
magnetars.\n\n\\acknowledgements\nWe thank the referee for the comments that improved\nthis paper. The scientific results reported in this article are based on observations made\nby the\n\\textit{Chandra X-ray Observatory} and data obtained from the \\textit{Chandra}\nData Archive.\nThis work was based on observations obtained with \\textit{XMM-Newton}, an ESA\nscience mission with instruments and contributions directly funded by ESA\nMember States and NASA.\nThis research has made use of the NASA Astrophysics Data System (ADS) and\nsoftware provided by the Chandra X-ray Center (CXC) in the application packages\nCIAO and Sherpa.\nThis work is supported by a GRF grant of the Hong Kong Government under HKU\n17300215P.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzlcse b/data_all_eng_slimpj/shuffled/split2/finalzzlcse new file mode 100644 index 0000000000000000000000000000000000000000..5c92e5ef70dad7fd1210a44eca3e5b56e089bb85 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzlcse @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\blfootnote{\n \\hspace{-0.65cm} \n This work is licensed under a Creative Commons \n Attribution 4.0 International License.\n License details:\n \\url{http:\/\/creativecommons.org\/licenses\/by\/4.0\/}.\n}\nThe ability to understand users' requests is essential to developing effective task-oriented dialogue systems. For example, in the utterance \"\\textit{I want to listen to Hey Jude by The Beatles}\", a dialogue system should correctly identify that the user's intention is to give a command to play a song, and that \\textit{Hey Jude} and \\textit{The Beatles} are, respectively, the song's title and the artist name that the user would like to listen to. 
In a dialogue system this information is typically represented through a \\textit{semantic-frame} structure \\cite{tur2011spoken}, \nand extracting such a representation involves two tasks: identifying the correct frame (i.e. \\textit{intent classification (IC)}) and filling the correct value for the slots of the frame (i.e. \\textit{slot filling (SF)}). \n\nIn recent years, neural-network based models have achieved the state of the art for a wide range of natural language processing tasks, including SF and IC. Various neural architectures have been applied to SF and IC, including RNN-based \\cite{Mesnil2013InvestigationOR} and attention-based \\cite{Liu2016AttentionBasedRN} approaches, up to the more recent transformer models \\cite{Chen2019BERTFJ}. Input representations have also evolved from static word embeddings \\cite{Mikolov2010RecurrentNN,Collobert2008AUA,Pennington2014GloveGV} to contextualized word embeddings \\cite{peters2018deep,Devlin2019BERTPO}. Such progress makes it possible to better address dialogue phenomena involving SF and IC, including context modeling, handling out-of-vocabulary words, and long-distance dependencies between words, and to better exploit the synergy between SF and IC through joint models. \nIn addition to rapid progress in the research community, the demand for commercial conversational AI is also growing fast, as shown by a variety of available solutions, such as Microsoft LUIS, Google Dialogflow, RASA, and Amazon Alexa. 
These solutions also use various kinds of semantic frame representations as part of their framework.\n\nMotivated by the rapid explosion of scientific progress, and by unprecedented market attention, we think that a guided map of the approaches to SF and IC can be useful for a large spectrum of researchers and practitioners interested in dialogue systems.\nThe primary goal of the survey is to give a broad overview of recent neural models applied to SF and IC, and to compare their performance in the context of task-oriented dialogue systems. We also highlight and discuss open issues that still need to be addressed in the future. The paper is structured as follows: Section \\ref{sec:slot_filling_intent_classification} describes the SF and IC tasks, commonly used datasets and evaluation metrics. Sections \\ref{sec:independent_model}, \\ref{sec:joint}, and \\ref{sec:transfer_learning} elaborate on the progress and state of the art of \\textit{independent}, \\textit{joint}, and \\textit{transfer learning} models for both tasks. Section \\ref{sec:discussion} discusses the performance of existing models and open challenges.\n\n\n\n\\begin{table*}[ht!]\n \\centering\n \\small\n \\begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c|}\n \n \n \\hline\n \\textbf{Utterance} & I & want & to & listen & to & Hey & Jude & by & The & Beatles \\\\\n \\hline\n \\textbf{Slot} & \\textsc{O} & \\textsc{O} & \\textsc{O} & \\textsc{O} & \\textsc{O} & \\textsc{B-SONG} & \\textsc{I-SONG} & \\textsc{O} & \\textsc{B-ARTIST} & \\textsc{I-ARTIST}\\\\\n \\hline\n \\multicolumn{1}{|l||}{\\textbf{Intent}} & \\multicolumn{10}{|l|}{\\textsc{play\\_song}} \\\\\n \\hline\n \\end{tabular}\n \\caption{Example of SF and IC output for an utterance. Slot labels are in \\textsc{BIO} format: \\textsc{B} indicates the start of a slot span, \\textsc{I} the inside of a span, while \\textsc{O} denotes that the word does not belong to any slot. 
}\n \\label{tab:sf_ic_example}\n\\end{table*}\n\n\\section{Slot Filling and Intent Classification}\n\\label{sec:slot_filling_intent_classification}\nThis section provides some background relevant to SF and IC, sets the scope of the survey with respect to the context in dialogue systems, defines SF and IC as tasks, and introduces the datasets and the metrics that will be used in the rest of the paper.\n\n\n\n\n\\subsection{Background}\n\\label{subsec:background}\nTask-oriented dialogue systems aim to assist users in accomplishing a task (e.g. booking a flight, making a restaurant reservation and playing a song) through dialogue in natural language, either in a spoken or written form.\nAs in most of the current approaches, we assume a system involving a pipeline of components \\cite{young2010hidden}, where the user utterance is first processed by an Automatic Speech Recognition (ASR) module and then\nby a Natural Language Understanding (NLU) component, which interprets the user's needs. Then a Dialogue State Tracker (DST) accumulates the dialogue information as the conversation progresses and may query a domain knowledge base to obtain relevant data. A dialogue policy manager then decides the next action to be executed and a Natural Language Generation (NLG) component produces the actual response to the user. \n\nWe focus on the NLU component, and we generalize several recent approaches assuming that the output of the NLU process is a partially filled semantic frame \\cite{wang2005spoken,tur2011spoken}, corresponding to the intent of the user in a certain portion of the dialogue, with a number of slot-value pairs that need to be filled to accomplish the intent.\nThe notion of \\textit{intent} originates from the idea that utterances can be assigned to a small set of \\textit{dialogue acts} \\cite{stolcke-etal-2000-dialogue}, and it is now largely adopted to identify a task or action that the system can execute in a certain domain. 
\\textit{Slot-value pairs}, on the other hand, represent the domain of the dialogue, and have in practice been implemented either as an ontology \\cite{conf\/interspeech\/Bellegarda13}, possibly with reasoning services (e.g. checking the constraints over slot values), or simply through a list of entity types that the system needs to identify during the dialogue.\n\n\nIntents may correspond either to specific needs of the user (e.g. blocking a credit card, transferring money, etc.), or to general needs (e.g. asking for clarification, thanking, etc.). Slots are defined for each intent: for instance, to block a credit card it is relevant to know the name of the owner and the number of the card. Values for the slots are collected through the dialogue, and can be expressed by the user either in a single turn or in several turns. \nAt each user turn in the dialogue the NLU component has to determine the intent of the user utterance (\\textit{intent classification}) and has to detect the slot-value pairs referred to in the particular turn (\\textit{slot filling}).\nTable \\ref{tab:sf_ic_example} shows the expected NLU output for the utterance \"\\textit{I want to listen to Hey Jude by The Beatles}\".\n\n\n\n\\subsection{Scope of the Survey}\n\nIn Section \\ref{subsec:background}, we described a task-oriented system as a pipeline of components, saying that SF and IC are core tasks at the NLU level. Particularly, IC consists of classifying an utterance into a set of pre-defined intents, while SF is defined as a sequence tagging problem \\cite{Raymond2007GenerativeAD,Mesnil2013InvestigationOR}, where each token of the utterance has to be tagged with a slot label. In this scenario training data for SF typically consist of single utterances in a dialogue where tokens are annotated with a pre-defined set of slot names, and slot values correspond to arbitrary sequences of tokens. 
From this perspective, it is worth mentioning a research line on dialogue state tracking \\cite{henderson2014word,mrkvsic2015multi,budzianowski-etal-2018-multiwoz}, where the NLU component is usually embedded into DST. What is relevant for our topic is that in this context SF is defined as a classification problem: given the current utterance and the previous dialogue history, the system has to decide whether a certain slot-value pair defined in the domain ontology is referred to or not in the current utterance. Although promising, from the NLU perspective this research line poses constraints (e.g. all slot-value pairs have to be pre-defined in an ontology) that limit the applicability of SF. For this reason, and because NLU components are the prevalent solution in current task-oriented systems, the focus of our survey will be on SF as a sequence tagging problem, as more precisely defined in the next section.\n\n\n \n\n\n\n\\subsection{Task Definition}\n\n\\label{sec:approach}\nWe formulate SF and IC as follows. \nGiven an input utterance $\\boldsymbol{x} = (x_1,x_2,\\dots, x_T)$, SF consists of token-level sequence tagging, where the system has to assign a corresponding slot label $\\boldsymbol{y}^{slot} = (y^{slot}_1, y^{slot}_2,\\dots, y^{slot}_T)$ to each token $x_i$ of the utterance. \nOn the other hand, IC is defined as a classification task over utterances, where the system has to assign the correct intent label $y^{intent}$ for the whole utterance $\\boldsymbol{x}$. In general, most machine learning approaches learn a probabilistic model to estimate $p(y^{intent},\\boldsymbol{y}^{slot}|\\boldsymbol{x}, \\boldsymbol{\\theta})$ where $\\boldsymbol{\\theta}$ are the parameters of the model. Table \\ref{tab:sf_ic_example} shows an example of the expected output of a model for the SF and IC tasks. 
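As a concrete illustration of this output format, the BIO tag sequence and intent of Table \ref{tab:sf_ic_example} can be produced from token-level span annotations in a few lines. This is a toy sketch for illustration only; the helper function and the inclusive span indices are our own convention, not part of any cited system:

```python
def spans_to_bio(tokens, spans):
    """Convert (start, end, label) token spans into BIO tags.

    Spans use inclusive token indices; tokens outside any span get 'O'.
    """
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = "B-" + label          # first token of the span
        for i in range(start + 1, end + 1):  # remaining tokens of the span
            tags[i] = "I-" + label
    return tags

tokens = "I want to listen to Hey Jude by The Beatles".split()
spans = [(5, 6, "SONG"), (8, 9, "ARTIST")]
intent = "play_song"
bio = spans_to_bio(tokens, spans)
# bio == ['O','O','O','O','O','B-SONG','I-SONG','O','B-ARTIST','I-ARTIST']
```

A model for SF and IC would then be trained to predict `bio` and `intent` jointly or separately from `tokens`.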
In the following sections, we outline the main models that have been proposed for SF and IC, and categorize the models into three groups, namely \\textit{independent models} (\\textsection \\ref{sec:independent_model}), \\textit{joint models} (\\textsection \\ref{sec:joint}), and \\textit{transfer learning based models} (\\textsection \\ref{sec:transfer_learning}). \n\n\n\n\n\\subsection{Datasets for SF and IC}\n\\label{dataset}\n\nIn this section, according to our task definition, we list available dialogue datasets (most of them publicly available) where each utterance is assigned to one intent, and tokens are annotated with slot names. Most such datasets are collections of \\textit{single turn user utterances} (i.e., not multi-turn dialogues). An example of a single-turn utterance annotation is shown in Table \\ref{tab:sf_ic_example}. \n\nThe ATIS (Airline Travel Information System) dataset \\cite{Hemphill1990TheAS} is the most widely used single-turn dataset for NLU benchmarking. It contains around 5K utterances consisting of queries related to the airline travel domain, such as searching for a flight, asking for a flight fare, etc. While it has a relatively large number of slot and intent labels, the distribution is quite skewed; more than 70\\% of the intents are flight search. The slots are dominated by slots expressing location names such as \\textsc{FromLocation} and \\textsc{ToLocation}. The MEDIA dataset \\cite{DBLP:conf\/interspeech\/Bonneau-MaynardRAKM05} is constructed by simulating conversations between a tourist and a hotel representative in the French language. Compared to ATIS, the MEDIA corpus is around three times larger; however, MEDIA is only annotated with slot labels. The slots are related to hotel booking scenarios such as the number of people, date, hotel facility, relative distance, etc. 
The MIT corpus \\cite{DBLP:conf\/icassp\/LiuPCG13} is constructed through a crowdsourcing platform where crowd workers are hired to create natural language queries in English and annotate the slot labels in the queries. The MIT corpus covers two domains, namely movie and restaurant, in which the utterances are related to finding information about a particular movie or actor, or searching for or booking a restaurant with particular distance and cuisine criteria. The SNIPS dataset \\cite{Coucke2018SnipsVP} was collected by crowdsourcing through the SNIPS voice platform. Intents include requests to a digital assistant to complete various tasks, such as asking about the weather, playing a song, booking a restaurant, asking for a movie schedule, etc. SNIPS is now often used as a benchmark for NLU evaluations.\n\nWhile most datasets are available in English, recently there has been growing interest in expanding slot filling and intent classification datasets to non-English languages. The original ATIS dataset has been adapted to several languages, namely Hindi, Turkish \\cite{Upadhyay2018AlmostZC}, and Indonesian \\cite{Susanto2017NeuralAF}. The MultiATIS++ dataset from \\newcite{DBLP:journals\/corr\/abs-2004-14353} expands the ATIS dataset to more languages, namely Spanish, Portuguese, German, French, Chinese, and Japanese. The work from \\cite{DBLP:conf\/clic-it\/BellomariaCFR19} introduces the Italian version of the original SNIPS dataset. The Facebook multi-lingual dataset \\cite{Schuster2018CrosslingualTL} covers the Thai and Spanish languages across three domains, namely weather, alarm, and reminder. \nThe detailed statistics of each dataset are listed in Appendix A. \n \n\n\\subsection{Evaluation Metrics}\n For the IC task, evaluation is performed on the utterance level. The typical evaluation metric for IC is \\textit{accuracy}, calculated as the number of correct predictions made by the model divided by the total number of predictions. 
As for SF, the evaluation is performed on the entity level. The common metric is the one introduced in the CoNLL-2003 shared task \\cite{DBLP:conf\/conll\/SangM03} to evaluate Named Entity Recognition (NER): the F1 score. The F1-score is the harmonic mean of precision and recall. Precision is the percentage of slot predictions from the model which are correct, while recall is the percentage of slots in the corpus that are found by the model. A slot prediction is considered \\textit{correct} when an \\textit{exact} match is found \\cite{DBLP:conf\/conll\/SangM03}. As the slots are annotated in BIO format to mark their boundaries (see Table \\ref{tab:sf_ic_example}), a correct prediction is only counted when the model predicts the correct slot label at the correct token offsets. Consequently, the exact-match metric does not reward cases where the model predicts the correct slot label but the incorrect slot boundary (\\textit{partial match}). \n\n\\section{Independent Models for SF and IC}\n\\label{sec:independent_model}\nIndependent models train each task \\textit{separately} and recent neural models typically use RNN as the building block for SF and IC. At each time step \\textit{t}, the encoder transforms the word representation $x_t$ to the hidden state $h_t$. For SF, the output layer predicts the slot label $y^{slot}_t$ conditioned on $h_t$. For IC, typically the last hidden state $h_T$ is used to predict the intent label $y^{intent}$ of the utterance $\\boldsymbol{x}$. Note that, for independent approaches, the models for SF and IC are trained separately. Most neural models for SF and IC generally consist of several layers, namely an \\textit{input layer}, one or more \\textit{encoder layers}, and an \\textit{output layer}. Consequently, the main differences between models are in the specifics of these layers. The most common dataset used for evaluating independent models is ATIS. 
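The entity-level exact-match scoring described in the evaluation metrics above can be sketched as follows. This is a simplified re-implementation for illustration; published results typically use the official CoNLL evaluation script or an equivalent library such as seqeval:

```python
def extract_slots(tags):
    """Return the set of (start, end, label) spans in a BIO tag sequence."""
    slots, start, label = set(), None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel flushes the last span
        if label is not None and (tag == "O" or tag.startswith("B-") or tag[2:] != label):
            slots.add((start, i - 1, label))
            label = None
        if tag.startswith("B-") or (tag.startswith("I-") and label is None):
            start, label = i, tag[2:]
    return slots

def slot_f1(gold_tags, pred_tags):
    """Exact-match F1: a predicted span counts only if label AND offsets match."""
    gold, pred = extract_slots(gold_tags), extract_slots(pred_tags)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

gold = ["O"] * 5 + ["B-SONG", "I-SONG", "O", "B-ARTIST", "I-ARTIST"]
pred = ["O"] * 5 + ["B-SONG", "O", "O", "B-ARTIST", "I-ARTIST"]
score = slot_f1(gold, pred)  # the partial match on SONG counts as an error
```

Note how the truncated SONG span contributes a false positive and a false negative at once, which is exactly why exact-match F1 penalizes boundary errors so heavily.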
\n\n\nIn the \\textit{input layer} of neural models each word is mapped to an embedding. \\newcite{Mesnil2013InvestigationOR} compared several embeddings, namely pre-trained SENNA \\cite{Collobert2011NaturalLP}, RNN Language Model (RNNLM) \\cite{Mikolov2011RNNLMR}, and random embeddings. SENNA gives the best result compared to other embeddings, and, typically, further fine-tuning word embeddings improves performance. \\newcite{Yao2013RecurrentNN} report that embeddings learned from scratch directly on ATIS data (\\textit{task-specific embeddings}) are better than SENNA. However, task-specific embeddings are composed not only of words but also of named entity (\\textit{NE}) and syntactic features\\footnote{Gold named entity and syntactic information}. NE improves performance significantly while part-of-speech only adds small benefits. \\newcite{Ravuri2015RecurrentNN} emphasize the importance of \\textit{character representation} to handle OOV issues. \n\nFor the \\textit{encoder layer}, various RNN architectures have been applied to SF and IC \\cite{Mesnil2013InvestigationOR,Mesnil2015UsingRN,Liu2015RecurrentNN}. \\newcite{Mesnil2013InvestigationOR} compare the Elman \\cite{elman1990finding} and Jordan \\cite{jordan1997serial} RNNs. They observe that the performance of the Jordan RNN is marginally better than that of the Elman RNN. They also experimented with a \\textit{bi-directional} version of the Jordan RNN and obtained the best score of 93.89 F1 for SF, outperforming a CRF by about +1 absolute F1. \\newcite{xu2013convolutional} use a Convolutional Neural Network (CNN) \\cite{LeCun1998GradientbasedLA} to extract 5-gram features and apply max-pooling to obtain the word representation before passing it to the output layer. Compared with RNN \\cite{Yao2013RecurrentNN,Mesnil2013InvestigationOR}, CNN gives lower performance for SF on ATIS. Other studies \\cite{Yao2014SpokenLU,vu2016bi} adapt the Long Short-Term Memory network (LSTM) \\cite{Hochreiter1997LongSM} to SF. 
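Concretely, the Elman recurrence underlying several of these encoders computes $h_t = \tanh(W x_t + U h_{t-1} + b)$, producing one hidden state per token that the output layer consumes for SF. A toy, pure-Python forward pass for illustration only (real systems use optimized tensor libraries; this is not the implementation of any cited paper):

```python
import math

def elman_forward(xs, W, U, b):
    """Minimal Elman-RNN forward pass: h_t = tanh(W x_t + U h_{t-1} + b).

    xs: list of input vectors (one per token); W, U: weight matrices
    given as lists of rows; b: bias vector. Returns one hidden state
    per token, which an output layer would map to slot labels.
    """
    def matvec(M, v):
        return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

    h = [0.0] * len(b)  # initial hidden state h_0 = 0
    hs = []
    for x in xs:
        pre = [wx + uh + bi for wx, uh, bi in zip(matvec(W, x), matvec(U, h), b)]
        h = [math.tanh(p) for p in pre]
        hs.append(h)
    return hs

# Tiny 1-dimensional example: h_1 = tanh(1), h_2 = tanh(tanh(1))
W, U, b = [[1.0]], [[1.0]], [0.0]
hs = elman_forward([[1.0], [0.0]], W, U, b)
```

A Jordan RNN differs only in feeding back the previous output instead of the previous hidden state, and a bi-directional encoder concatenates the states of a left-to-right and a right-to-left pass.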
The LSTM model gives better SF performance compared to CRF, CNN, and RNN. \\newcite{Ravuri2015RecurrentNN} compare the performance of the vanilla RNN and LSTM for IC. They find that the vanilla RNN works best for shorter utterances, while the LSTM is better for longer utterances. \n\n\n\n\n\\begin{table}[t!]\n\\small\n\\begin{center}\n\\begin{tabular}{llllll}\n\\toprule\n\n\\bf & \\bf Input & \\textbf{Model (Enc\/Dec)} & \\textbf{Output} & \\textbf{Slot (F1)} & \\textbf{Intent(Err)}\\\\\n\\toprule\n\\newcite{xu2013convolutional} & lexical & CNN & softmax & 94.35 & 6.65\\\\\n\\newcite{Yao2013RecurrentNN} & lexical & Elman RNN & softmax & 94.11 &- \\\\\n\\newcite{Yao2013RecurrentNN} & lexical+NE & Elman RNN & softmax & 96.60 & -\\\\\n\n\\newcite{Yao2014SpokenLU}& lexical & LSTM & softmax & 94.85 & -\\\\\n\\newcite{Yao2014RecurrentCR}& lexical+NE & Elman RNN & CRF & 96.65 & -\\\\\n\\newcite{Mesnil2015UsingRN}& lexical & Hybrid Elman + Jordan RNN & softmax & 95.06 & -\\\\\n\\newcite{Liu2015RecurrentNN}& lexical & Elman RNN with label sampling & softmax & 94.89 & -\\\\\n\\newcite{vu2016bi} & lexical & bi-directional RNN & softmax &94.92&- \\\\\n\\newcite{Liu2016AttentionBasedRN} & lexical & bi-directional RNN+attention& softmax & 95.75 & 2.35\\\\\n\\newcite{DBLP:conf\/emnlp\/KurataXZY16} & lexical & Encoder-Decoder LSTM& softmax & 95.40 & -\\\\\n\\bottomrule\n\\end{tabular}\n\\end{center}\n\\caption{Comparison of independent SF and IC models and their performance on ATIS.}\n\\label{tab:independent_models}\n\\end{table}\n\n \nFor the \\textit{output layer}, typically a \\textit{softmax} function is used for prediction at a particular time step. \\newcite{Yao2014RecurrentCR} propose an \\textsc{R-CRF} model combining the feature learning power of an RNN and the \\textit{sequence-level optimization} of a CRF for SF. \nThe RNN + CRF scoring mechanism incorporates the features learned by the RNN and the transition scores of the slot labels. 
R-CRF outperforms CRF and vanilla RNN on ATIS and on the Bing query understanding dataset. Table \\ref{tab:independent_models} summarizes the performance of independent models on SF and IC. \n\n\n\n\\textbf{Takeaways on independent SF and IC models:}\n\\begin{itemize}[noitemsep,nolistsep, leftmargin=*]\n \\item The performance of (\\textit{unidirectional}) RNN encoders is $\\text{Jordan}\\leq\\text{Elman}<\\text{LSTM}$. Bi-directional encoding further improves the performance of each encoder. \n \\item Incorporating more context information is better for SF performance. Using global context information, such as sentence-level representations and attention mechanisms \\cite{DBLP:conf\/emnlp\/KurataXZY16,Liu2016AttentionBasedRN}, boosts the performance of bi-directional encoders even further. \n \\item When adding external features is possible, semantic features such as NE are more beneficial than syntactic features for SF, and can boost SF performance significantly.\n \\item The slot filling task is related to the Named Entity Recognition (NER) task \\cite{DBLP:conf\/coling\/GrishmanS96}, as slot values can be named entities such as airline names, city names, etc. If the slot filling task is modeled as a sequence tagging problem, recent neural models proposed for NER can essentially be used for slot filling and vice versa. For more on recent developments in neural NER models, see the survey by \\newcite{yadav-bethard-2018-survey}. \n \\item The main disadvantage of independent models is that they do not exploit the interaction between intent and slots and may introduce error propagation when they are used in a pipeline. \n \n\\end{itemize}\n\n\n\n\\section{Joint Models for SF and IC}\n\\label{sec:joint}\n\n\\begin{figure*}[htb!]\n \\centering\n \\includegraphics[scale=0.26]{img\/param_sharing_gating_bert.png}\n \\caption{\\textit{Left:} Shared Bi-GRU encoder \\protect\\cite{Zhang2016AJM}. 
\\textit{Middle:} Slot-Gate Mechanism \\protect\\cite{goo2018slot}. \\textit{Right:} BERT Based \\protect \\cite{Chen2019BERTFJ}.}\n \\label{fig:sharing_gating_mechanism}\n\\end{figure*}\n\nIn Section \\ref{sec:independent_model} we reported approaches that treat SF and IC \\textit{independently}. However, as the two tasks always appear together in an utterance and they share information, it is intuitive to think that they can benefit each other. For instance, if the phrase \"\\textit{The Beatles}\" is recognized as the slot \\textsc{Artist}, then it is more likely that the intent of the utterance is \\textsc{PlaySong} rather than \\textsc{BookFlight}. On the other hand, recognizing that the intent is \\textsc{PlaySong} would help to recognize \"\\textit{Hey Jude}\" as the slot \\textsc{Artist} rather than \\textsc{MovieName}. \n \nRecent approaches model the relationship between SF and IC \\textit{simultaneously} in a \\textit{joint model}. These approaches promote \\textit{two-way} information sharing between the two tasks instead of one-way (\\textit{pipeline}) sharing. We describe several alternatives to exploit the relation between SF and IC: through \\textit{parameter and state sharing} and through a \\textit{gating mechanism}.\n\n\n\\subsection{Parameter and State Sharing} \nA pioneering work in joint modeling is \\newcite{xu2013convolutional}, which performs parameter sharing and captures the relation between SF and IC through a Tri-CRF \\cite{jeong2008triangular}. The model uses a CNN as a \\textit{shared} encoder for both tasks and the produced hidden states are utilized for SF and IC. In addition to the features learned from the NN and from the slot label transitions, the Tri-CRF incorporates an additional factor $g$ to learn the correlation between the slot label assigned to each word and the intent assigned to the utterance, which explicitly captures the dependency between the two tasks. 
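The role of the coupling factor $g$ can be illustrated with a brute-force toy decoder (our own sketch; real Tri-CRF implementations use dynamic programming over learned scores):

```python
from itertools import product

def joint_score(emissions, intent_scores, slots, intent, trans, g):
    """Score a (slot sequence, intent) pair: per-token emissions + slot label
    transitions + a factor g[slot][intent] coupling slots with the intent."""
    s = intent_scores[intent]
    prev = None
    for t, y in enumerate(slots):
        s += emissions[t][y] + g[y][intent]
        if prev is not None:
            s += trans[prev][y]
        prev = y
    return s

def decode(emissions, intent_scores, n_slots, n_intents, trans, g):
    # Brute-force argmax over all (slot sequence, intent) pairs -- toy sizes only.
    T = len(emissions)
    best = max(product(product(range(n_slots), repeat=T), range(n_intents)),
               key=lambda si: joint_score(emissions, intent_scores,
                                          list(si[0]), si[1], trans, g))
    return list(best[0]), best[1]
```

A large entry of $g$ for a particular (slot, intent) pair pushes the decoder toward intent assignments consistent with the predicted slots, which is exactly the dependency described above.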
A similar approach \\cite{guo2014joint} shares the node representations produced by a Recursive Neural Network (RecNN) which operates on the syntactic tree of the utterance. \\newcite{Zhang2016AJM} use a \\textit{shared} bi-GRU encoder and a \\textit{joint loss function} between SF and IC (Figure \\ref{fig:sharing_gating_mechanism} \\textit{Left}), in which the loss function has weights associated with each task. \n\n\n\\newcite{Liu2016AttentionBasedRN} use a neural sequence-to-sequence (encoder-decoder) model with the attention mechanism commonly used for neural machine translation. The \\textit{shared} encoder is a bi-directional LSTM, and the last hidden state of the encoder is then used by the decoder to generate a sequence of slot labels, while for IC there is a separate decoder. The attention mechanism is used to learn alignments between slot labels in the decoder and words in the encoder. \\newcite{HakkaniTr2016MultiDomainJS} also adopt parameter sharing similar to \\newcite{Zhang2016AJM}, but instead of a GRU they use a shared LSTM and perform predictions for slots, intent, and also domain. \n\n\n\n \n In a recent approach, \\newcite{Wang2018ABB} propose a bi-model structure to learn the \\textit{cross-impact} between SF and IC. They argue that a single model for two tasks can hurt performance, and, instead of sharing parameters, they use two task-specific networks that learn the cross-impact between the two tasks by sharing only their hidden states. In the model, every hidden state $h^1_t$ in the first network is combined with the hidden state of the second network $h^2_t$, and vice versa. Training is also done asynchronously, as each model has a separate loss function. \\newcite{Qin2019ASF} use a self-attentive shared encoder to produce better context-aware representations, then apply IC at the \\textit{token level} and use this information to guide the SF task. 
They argue that previous work based on \\textit{single utterance-level} intent prediction is more prone to error propagation: if some token-level intents are incorrectly predicted, the remaining correct token-level predictions can still be useful for the corresponding SF. For the final IC prediction, they use a voting mechanism that takes into account the IC predictions of all tokens. \n \n\\newcite{Chen2019BERTFJ} use a Transformer \\cite{Vaswani2017AttentionIA} model for joint SF and IC by fine-tuning a pre-trained BERT \\cite{Devlin2019BERTPO} model (Figure \\ref{fig:sharing_gating_mechanism} \\textit{Right}). The input is passed through several layers of transformer encoders and the hidden state outputs are used to compute slot and intent labels. The hidden state $h^{\\texttt{CLS}}$ is used for IC\\footnote{\\texttt{[CLS]} is a special token in the BERT input format that is often used as the sentence representation.} while the hidden states $h_i$ at each time step serve SF. \n\n\n\n\\subsection{Slot-Intent Gate Mechanism} \n\nIn addition to parameter and state sharing, a separate network with a \\textit{slot gating mechanism} was introduced by \\newcite{goo2018slot} to model the interaction between SF and IC more explicitly (Figure \\ref{fig:sharing_gating_mechanism} \\textit{Middle}). In the encoder, a \\textit{slot context vector} for each time step, $\\boldsymbol{c}^{S}_{i}$, and a global intent context vector $\\boldsymbol{c}^{I}$ are computed using an attention mechanism \\cite{DBLP:journals\/corr\/BahdanauCB14}. The slot-gate $\\boldsymbol{g}^{s}$ is computed as a function of $\\boldsymbol{c}^{S}_{i}$ and $\\boldsymbol{c}^{I}$, $\\boldsymbol{g}^{s} = \\sum v \\cdot \\tanh(\\boldsymbol{c}^{S}_{i} + \\boldsymbol{W}\\cdot \\boldsymbol{c}^{I})$. 
Then, $\\boldsymbol{g}^{s}$ is used as a weight between $\\boldsymbol{h}_i$ and $\\boldsymbol{c}^{S}_i$ to compute $y^{slot}_{i}$ as follows: $y^{slot}_i=\\texttt{softmax}(\\boldsymbol{W}(\\boldsymbol{h}_i + \\boldsymbol{g}^{s}\\cdot\\boldsymbol{c}^{S}_i))$. A larger $\\boldsymbol{g}^{s}$ indicates a stronger correlation between $c^{S}_{i}$ and $c^{I}$. \n\n \\newcite{Haihong2019ANB} propose a bi-directional model, the SF-ID (SF-Intent Detection) network, sharing ideas with \\newcite{goo2018slot}, with two key differences. First, in addition to the slot-gated mechanism, they add an intent-gated mechanism as well. Second, they use an iterative mechanism between the SF and ID networks, meaning that the gate vector from SF is injected into the ID network and vice versa. This mechanism is repeated for an arbitrary number of iterations. Compared to \\cite{goo2018slot}, the SF-ID network performs better on both SF and IC on ATIS and SNIPS. The work of \\newcite{Li2018ASM} is also similar to \\newcite{goo2018slot}, with two differences. First, they use a self-attention mechanism \\cite{Vaswani2017AttentionIA} to compute $c^{S}_{i}$. Second, they use a separate network to compute the gate vector $g^{s}$, whose input is the concatenation of $c^{S}_{i}$ and the intent embedding $v$: $g^{s} = \\texttt{tanh}(\\boldsymbol{W}^{g}[c^{S}_{i}, v] + b^{g})$. After that, $h_i$ is combined with $g^s$ through element-wise multiplication to compute $y^{slot}_{i}$ as follows: $y^{slot}_i=\\texttt{softmax}(\\boldsymbol{W}^s(h_i \\odot g^s) + b^s)$. They report a +0.5\\% improvement on SF over \\newcite{Liu2016AttentionBasedRN}. A recent work by \\newcite{Zhang2019AJL} further improves the performance of the BERT-based model by adding a gate mechanism \\cite{Li2018ASM} to the BERT model. Table \\ref{tab:joint_models} compares the performance of the joint models. 
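A minimal numerical sketch of the slot gate of \\newcite{goo2018slot} (our own illustration in plain Python; the weights and dimensions are arbitrary, not learned):

```python
import math

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def slot_gate(c_slot, c_intent, v, W):
    # g^s = sum_k v_k * tanh(c^S_i + W c^I)_k  -- a scalar gate value.
    t = [math.tanh(a + b) for a, b in zip(c_slot, matvec(W, c_intent))]
    return sum(vk * tk for vk, tk in zip(v, t))

def gated_slot_probs(h_i, c_slot, g, W_out):
    # y^slot_i = softmax(W (h_i + g^s * c^S_i)): the gate weights how much
    # the slot context vector contributes to the slot prediction.
    fused = [hi + g * ci for hi, ci in zip(h_i, c_slot)]
    return softmax(matvec(W_out, fused))
```

When the slot and intent context vectors are well aligned, the gate $g^s$ grows, so $c^S_i$ contributes more to the slot prediction.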
\n\n\\begin{table}[htb!]\n\\centering\n\\small\n\\begin{tabular}{llcccc}\n\\toprule\n\\multirow{3}{*}{\\textbf{Method}} & \\multirow{3}{*}{\\textbf{Model}}& \\multicolumn{2}{c}{\\textbf{ATIS}} & \\multicolumn{2}{c}{\\textbf{SNIPS}} \\\\\n & & \\textbf{Slot} & \\textbf{Intent} & \\textbf{Slot} & \\textbf{Intent} \\\\\n & & \\textbf{F1} & \\textbf{Acc\/Err}& \\textbf{F1} & \\textbf{Acc\/Err} \\\\\n \\midrule\n\\textbf{Parameter \\& State Sharing} & & & & & \\\\\n \\newcite{xu2013convolutional} &CNN + Tri-CRF & 95.42 & -\/5.91 & - & - \\\\\n\\newcite{guo2014joint} & Recursive NN & 93.96 & 95.40 & - & -\\\\\n\\newcite{Zhang2016AJM} & Joint Multi-Task, Bi-GRU & 95.49 & 98.10 & - & - \\\\\n\\newcite{Liu2016AttentionBasedRN} & Seq2Seq + Attention & 94.20 & 91.10 & 87.80 & 96.70 \\\\\n\\newcite{HakkaniTr2016MultiDomainJS} & Bi-LSTM& 94.30 & 92.60 & 87.30 & 96.90 \\\\\n\\newcite{Qin2019ASF}& Token-Level IC + Self-Attention & 95.90 & 96.90 & 94.20 & 98.00 \\\\\n\n\\newcite{Chen2019BERTFJ} & Transformer (BERT)& 96.10 & 97.50 & 97.00 & 98.60 \\\\\n\\midrule\n\\textbf{State Sharing} & & & & & \\\\\n\\newcite{Wang2018ABB} &Bi-model, BiLSTM & 96.89 & 98.99 & - & - \\\\\n\\midrule\n\\textbf{Slot-Intent Gating} & & & & & \\\\\n\\newcite{goo2018slot}& Slot-Gated Full Attention & 94.80 & 93.60 & 88.80 & 97.70 \\\\\n\\newcite{Li2018ASM}& BiLSTM + Self-Attention & 96.52 & -\/1.23 & - & - \\\\\n\\newcite{Haihong2019ANB}& SF-ID Network & 95.75 & 97.76 & 91.43 & 97.43 \\\\\n\\midrule\n\\textbf{Hybrid Param Sharing + Gating}& & & & & \\\\\n\\newcite{Zhang2019AJL} & \\textsc{BERT} + Intent-Gate & 98.75 & 99.76 & 98.78 & 98.96 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Performance comparison of joint models for SF and IC on ATIS and SNIPS-NLU.}\n\\label{tab:joint_models}\n\\end{table}\n\n\n\n\n\\textbf{Takeaways on joint SF and IC models:}\n\\begin{itemize}[noitemsep,nolistsep, leftmargin=*]\n \\item The overall performance of joint models for SF and IC (Table 
\\ref{tab:joint_models}) is competitive with independent models (Table \\ref{tab:independent_models}). The advantage of joint models is that they have relatively fewer parameters than independent models, as both tasks are trained in a single model. \n \\item When computational power is not an issue, fine-tuning a pre-trained model such as BERT is the way to go for maximum SF and IC performance. Hybrid methods combining parameter and state sharing + intent gating yield the best performance \\cite{Zhang2019AJL}. \n \\item Among non-BERT-based models, state sharing \\cite{Wang2018ABB} performs best on ATIS. However, the disadvantage is that it is actually a bi-model and not a single model. \n \\item Similar to independent models, contextual information is crucial to performance. Adding a self-attention mechanism \\cite{Qin2019ASF,Li2018ASM} to either parameter and state sharing or to slot-intent gating can boost performance even further.\n \\item When sufficiently large in-domain training data is available, SF and IC performance on ATIS and SNIPS is already saturated; therefore, further research on this classic leaderboard chase is of limited value. We discuss this further in Section \\textsection \\ref{sec:discussion}. \n \\item Most work on joint models, as well as on independent models (Section \\textsection \\ref{sec:independent_model}), reports F1 scores for slot filling performance. However, these scores do not reveal in which specific cases these models behave differently. We leave a finer-grained analysis of model performance as potential future work.\n\\end{itemize}\n\n\\section{Scaling to New Domains}\n\\label{sec:transfer_learning}\n\n\n\nSo far, the models that we considered in Section \\textsection \\ref{sec:independent_model} and Section \\textsection \\ref{sec:joint} are designed to be trained on a \\textit{single domain} (e.g. banking, restaurant reservation) and require relatively \\textit{large labeled data} to perform well. 
In practice, new intents and slots are regularly added to a system to support new tasks and domains, requiring data- and time-intensive processes. Hence, methods to train models for new domains with limited or no labeled data are needed. We refer to this situation as the \\textit{domain scaling} problem. \n\\begin{figure*}[h!]\n \\centering\n \\includegraphics[scale=0.3]{img\/transfer_learning.png}\n \\caption{\\textit{Left:} Data-driven approach \\protect\\cite{Jaech2016DomainAO,HakkaniTr2016MultiDomainJS}. \\textit{Middle:} Model-Driven Approach with expert models \\protect\\cite{Kim2017DomainAW}. \\textit{Right:} Zero-shot model \\protect\\cite{DBLP:conf\/interspeech\/BapnaTHH17}.}\n \\label{fig:transfer_learning}\n\\end{figure*}\n\\subsection{Transfer Learning Models for SF and IC}\nA common approach to deal with domain scaling is transfer learning (TF).\\footnote{We do not differentiate between \\textit{domain adaptation} and transfer learning in this paper.} In the TF setup we have $K$ source domains $\\mathcal{D}^1_{S}, \\mathcal{D}^{2}_{S}, \\ldots, \\mathcal{D}^{K}_{S}$ \nand a target domain $\\mathcal{D}^{K+1}_{T}$, and we assume abundant data in $\\mathcal{D}_S$ and limited data in $\\mathcal{D}_T$. Instead of \ntraining a target model $\\mathcal{M}_T$ for $\\mathcal{D}_T$ from scratch, TF aims to \\textit{adapt} the learned model $\\mathcal{M}_S$ from $\\mathcal{D}_S$ to produce a model $\\mathcal{M}_T$ trained on $\\mathcal{D}_T$. TF is typically applied with various parameter sharing and training mechanisms. For SF and IC two approaches have been proposed, namely \\textit{data-driven} and \\textit{model-driven}. In data-driven techniques, we typically combine data from $\\mathcal{D}_S$ and $\\mathcal{D}_T$ and partition the model parameters into parts that are \\textit{task-specific} and parts that are shared across tasks. 
\nSome studies \\cite{Jaech2016DomainAO,HakkaniTr2016MultiDomainJS,Louvan2019LeveragingNT} apply this technique using \\textit{multi-task learning} (MTL) \\cite{caruana1997multitask}, and the models are trained simultaneously on $\\mathcal{D}_S$ and $\\mathcal{D}_T$ (Figure \\ref{fig:transfer_learning} \\textit{Left}). Results have shown that MTL is particularly effective relative to single-task learning (STL) when the data in $\\mathcal{D}_T$ is scarce, and the benefits over STL diminish as more data is available. Another technique typically used in data-driven approaches is based on \\textit{pre-train} and \\textit{fine-tune} mechanisms. \\newcite{DBLP:conf\/naacl\/GoyalMM18} train a joint model of SF and IC, $\\mathcal{M}_S$, on a large $\\mathcal{D}_S$, then fine-tune $\\mathcal{M}_S$ by replacing the output layer to match the label space of $\\mathcal{D}_T$ and training the model further on $\\mathcal{D}_T$. \\newcite{DBLP:conf\/aaai\/SiddhantGM19} also use a fine-tuning mechanism, but, differently from \\newcite{DBLP:conf\/naacl\/GoyalMM18}, they leverage large unlabeled data to learn contextual embeddings, ELMo \\cite{peters2018deep}, before fine-tuning on $\\mathcal{D}_T$. \n\nAs the whole model needs to be retrained from scratch when a new domain is added, data-driven approaches, especially MTL-based ones, require increasing training time as the number of domains grows. The alternative strategy, the model-driven approach, alleviates the problem by enabling model \\textit{reusability}. Although different domains have different slot schemas, slots such as \\textit{date}, \\textit{time} and \\textit{location} can be shared. In model-driven adaptation, \"expert\" models (Figure \\ref{fig:transfer_learning} \\textit{Middle}) are first trained on these reusable slots \\cite{Kim2017DomainAW,Jha2018BagOE} and the outputs of the expert models are used to guide the training of $\\mathcal{M}_T$ for a new target domain. 
This way training $\\mathcal{M}_T$ is faster, as the training time is proportional to the size of $\\mathcal{D}_T$ instead of the larger combined size of $\\mathcal{D}_S$ and $\\mathcal{D}_T$. In this model-driven setting, \\newcite{Kim2017DomainAW} do not treat the expert model of each $\\mathcal{D}_S$ equally; instead, they use an attention mechanism to learn a weighted combination of the feedback of the expert models. \\newcite{Jha2018BagOE} use a model similar to \\newcite{Kim2017DomainAW}, but without the attention mechanism. For training the expert models, instead of using all available $\\mathcal{D}_S$, they build a repository consisting of common slots, such as \\textit{date}, \\textit{time}, and \\textit{location}. The assumption is that these slots are potentially reusable in many target domains. Upon training $\\mathcal{M}_S$ on this reusable repository, the output of $\\mathcal{M}_S$ is directly used to guide the training of $\\mathcal{M}_T$. \n\n\n\\subsection{Zero-shot Models for SF and IC}\nWhile data-driven and model-driven approaches can share knowledge learned on different domains, such models are still trained on a pre-defined set of labels, and cannot handle \\textit{unseen} labels, i.e. labels not mapped to the existing schema. For example, a model trained to recognize a \\textsc{Destination} slot cannot be used directly to recognize the slot \\textsc{Arrival\\_Location} for a new domain, although both slots are semantically similar. For this reason, researchers have recently been working on \\textit{zero-shot} models, trained on \\textit{label representations} that leverage natural language \\textit{descriptions} of the slots \\cite{DBLP:conf\/interspeech\/BapnaTHH17,Lee2019ZeroShotAT}. Assuming that accurate slot descriptions are provided, semantically similar slots with \\textit{different} names would have similar descriptions as well. 
Thus, having trained a model for the \\textsc{Destination} slot with its description, it is possible to recognize the slot \\textsc{Arrival\\_Location} without training on it, by only supplying the corresponding slot description. \n\nIn addition to slot descriptions, other zero-shot approaches explore the use of slot value examples \\cite{shah-etal-2019-robust,DBLP:conf\/sigdial\/GueriniMBM18}. \\newcite{shah-etal-2019-robust} show that combining a small number of slot value examples with a slot description performs better than \\cite{DBLP:conf\/interspeech\/BapnaTHH17,Lee2019ZeroShotAT} on the SNIPS dataset. Zero-shot models are typically trained on a per-slot basis (Figure \\ref{fig:transfer_learning} \\textit{Right}), meaning that if we have $N$ slots, then the model will output $N$ predictions; therefore, a merging mechanism is needed in case predictions overlap. In order to alleviate the problem of having multiple predictions, \\newcite{DBLP:conf\/acl\/LiuWXF20} propose a \\textit{coarse-to-fine} approach, in which the model learns the slot entity pattern (coarsely) to identify whether a particular token is an entity or not. After that, the model performs a single prediction of the slot type (fine) based on the similarity between the feature representation and the slot description. \n\n\\paragraph{Takeaways on scaling to new domains:}\n\\begin{itemize}[noitemsep,nolistsep, leftmargin=*]\n \\item Both data-driven methods, MTL and pre-train\/fine-tune, improve performance when data in $\\mathcal{D}_T$ is limited. Both are also flexible, as many tasks from different domains can be plugged into these methods. As the number of domains grows, pre-training and fine-tuning is more desirable than MTL. 
However, fine-tuning is more prone to the \\textit{forgetting} problem \\cite{He2019AnalyzingTF} than MTL.\n \\item When the number of domains, $K$, is massive, the pre-train\/fine-tune approach and model-driven approaches, such as expert-based adaptation, are preferable with respect to training time. \n \\item When there are $K$ existing domains and no annotation is available in $\\mathcal{D}_T$, the choice is zero-shot approaches, at the expense of providing meta-information such as slot and intent descriptions.\n \\item As zero-shot models typically perform prediction on a \\textit{per-slot} basis, potential disadvantages are reduced accuracy when predictions overlap and computational inefficiency when dealing with many slots.\n\\end{itemize}\n\\section{State of the Art and Beyond}\n\\label{sec:discussion}\n\nBased on the results in Tables \\ref{tab:independent_models} and \\ref{tab:joint_models}, it is evident that neural models have achieved outstanding performance on ATIS and SNIPS, showing that it is relatively easy for neural models to capture patterns that recognize slots and intents. ATIS, in particular, is already overused for SF and IC evaluations, and recent analyses \\cite{Bchet2018IsAT,Niu2019RationallyRA} have shown that the dataset is relatively simple and the room for performance improvement is tiny. \nA similar trend in performance can be noted for other datasets, such as SNIPS, and it is likely that performance will quickly saturate. However, this does not mean that these models have solved SF and IC, or NLU problems in general; rather, they have merely solved the datasets. 
\nNevertheless, there are still a number of issues in SF and IC that need further investigation:\n\\paragraph{Portable and Data Efficient Models.} Instead of evaluating models with the typical \\textit{leaderboard} setup with fixed (train\/dev\/test) splits on a specific target domain, it would also be important to test models in different scenarios, so that different aspects of the model can be captured. For example, as neural models are data hungry, more work is still needed on transfer learning scenarios, where evaluation is carried out with \\textit{less} labeled data or \\textit{without} labeled data (\\textit{zero-shot}) for a particular target domain. In addition, most models for SF and IC are evaluated on English, which means that more effort is still needed to make models that work well for other languages. Some recent works have started exploring zero-shot cross-lingual methods \\cite{DBLP:conf\/ijcai\/QinN0C20,DBLP:conf\/aaai\/LiuWLXF20,DBLP:conf\/emnlp\/LiuSXWXMF19} and also few-shot scenarios \\cite{DBLP:conf\/acl\/HouCLZLLL20}, and the room for improvement in these scenarios is still large. In short, designing a \\textit{data efficient} model that is \\textit{portable} across domains and languages remains a challenging problem for the near future.\n \\paragraph{Leveraging unlabeled data from live traffic.} In real situations, personal digital assistants such as Google Home, Apple Siri and Amazon Alexa receive live traffic data from real users. This large amount of unlabeled data from live traffic is a potential data source for model training, in addition to in-house annotated data. Unlabeled live data are likely different from in-house data, as they can contain more diverse, but also noisy and irrelevant, utterances. 
In this situation, existing methods for exploiting unlabeled data, such as semi-supervised learning, still face unique challenges in handling live data.\n It is worth noting that a bottleneck in this direction is that working on live data in academic settings is not trivial. Some recent works explore this line of research by applying semi-supervised learning \\cite{DBLP:conf\/asru\/ChoXLKC19} and data selection \\cite{DBLP:conf\/emnlp\/DoG19} mechanisms.\n \\paragraph{Generative Models.} Most of the proposed models are \\textit{discriminative}. Among the few works on \\textit{generative models} for SF and IC, \\cite{Raymond2007GenerativeAD,Yogatama2017GenerativeAD} have shown that a generative model is relatively better than a discriminative model in situations where data is \\textit{scarce}. One possible direction for generative models is to apply data augmentation to automatically create additional training data \\cite{DBLP:conf\/aaai\/YooSL19,DBLP:conf\/emnlp\/ZhaoZY19,DBLP:conf\/coling\/HouLCL18,DBLP:conf\/emnlp\/KurataXZY16,DBLP:journals\/corr\/abs-2004-13952,DBLP:conf\/naacl\/KimRK19}. The main challenge for data augmentation is to generate synthetic utterances that are diverse and fluent while \\textit{preserving} the semantics of the original utterance. \n \n \\paragraph{Evaluation of SF and IC on more complex datasets.} Existing neural approaches are typically evaluated on \\textit{single-intent} utterances; however, in a real-world scenario users may express \\textit{multiple intents} in one utterance, e.g. \"\\textit{Show me all flights from Atlanta to London and get the cost}\" \\cite{Gangadharaiah2019JointMI}, or even multiple sentences in one single turn. 
While most datasets for slot filling and intent classification consist of \\textit{single-turn} utterances, there are some recent multi-turn datasets that provide slot annotation at the token level, namely the \\textsc{Restaurant-8K}, TaskMaster-1 and 2 \\cite{48484}, and Frames \\cite{DBLP:conf\/sigdial\/AsriSSZHFMS17} datasets. The subset of the Schema Guided Dialogue (SGD) dataset \\cite{DBLP:conf\/aaai\/RastogiZSGK20} used in DSTC-8 is also annotated with slots at the token level and covers 16 domains. In addition, the TOP dataset \\cite{DBLP:conf\/emnlp\/GuptaSMKL18} is annotated with a \\textit{hierarchical} representation, and the MTOP dataset \\cite{DBLP:journals\/corr\/abs-2008-09335} provides both flat and hierarchical representations in 6 languages across 11 domains. \n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\nWe have surveyed recent neural-based models applied to SF and IC in the context of task-oriented dialogue systems. We examined three approaches, i.e. \\textit{independent}, \\textit{joint}, and \\textit{transfer learning-based} models. Joint models, exploiting the relation between SF and IC simultaneously, have shown relatively better performance than independent models. Empirical results have shown that most joint models nearly \"solve\" the widely used ATIS and SNIPS datasets, given \\textit{sufficient in-domain training data}. Nevertheless, there are still several challenges related to SF and IC, especially improving the scalability of models to new domains and languages when limited labeled data are available. \n\n\n\n\\bibliographystyle{coling}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{intro}\n\nDynamical systems with fast-oscillating conditions are ubiquitous in physics: they naturally arise in mechanics,\nastrophysics, fluid and air dynamics, and many other domains. 
They often exhibit surprising properties, a\nbeautiful example of which is the inverted pendulum, which stabilizes via\nfast vibration of its pivot. The standard way of analyzing such equations \nincludes a procedure of constructing an averaged\nsystem, whose solutions remain close to those of the original system for a very long time \n(see e.g. \\cite{kapi, land, ArnMM, arko}).\n\nIn many examples of Hamiltonian one-frequency oscillating systems\none obtains an additional term, a drift, in the averaged equation. A similar drift (or shift) is observed\nin hydrodynamical-type systems, including the $\\beta$-plane\nequation in meteorology (see e.g. \\cite{zeit}), the infinite-conductivity equation for \nelectron flows \\cite{khch}, and the Craik-Leibovich equation for an ideal\nfluid confined to a domain with oscillating boundary \\cite{crle, vla}.\nIn those hydrodynamical systems\nsuch a shift is often related to the consideration of a central extension of an appropriate Lie algebra \\cite{viz}.\n\nBelow we explain this phenomenon by building a general connection between the averaging method and symplectic \nreduction in appropriate, possibly nontrivial, $S^1$-bundles. Namely, one starts with\n symplectic reduction of the cotangent bundle over a circle action, which is one of the most studied objects\nin symplectic geometry.\nOne observes two features for the reduction over a nonzero value of the momentum map: the appearance\nof a twisted symplectic structure (similar to how the curvature \narises in the description of gyroscopes on surfaces \\cite{cole2}), \nwhere a new magnetic term supplements the canonical symplectic form of the reduced space,\nand the appearance of an amended potential function, see Section \\ref{sec:reduction}. It turns out that exactly these two\n phenomena occur in the averaging procedure. 
This can be summarized in the following statement (which is a combined version of Theorems \\ref{thm:ave-red} and \\ref{thm:Ave_natHamiltonian}):\n\n\n\n\n\\begin{figure}[H]\n\\tikzstyle{block} = [rectangle, draw, fill=blue!10,\n text width=6em, text centered, rounded corners, minimum height=6em]\n\\tikzstyle{line} = [draw, -latex']\n\n\\centerline{\n\\begin{tikzpicture}[node distance = 5.1cm,auto]\n \n \\node [block] (SDE) {SDE (fast time=new space variable)};\n \\node [block, right of=SDE] (tilSDE) {$\\widetilde{SDE}$ (fibre-constant objects)};\n \\node [block, below of=SDE] (aveDE) {$\\overline{DE}_1$ average equation (over fast time)};\n \\node [block, left of=aveDE] (DE) {DE with fast oscillation (fast time $\\tau=\\omega t$)};\n \\node [block, below of=tilSDE] (redSDE) {$SDE_{red}$ (symplectic structure with magnetic term; effective potential)};\n \\node [block, right of=redSDE, node distance=5cm] (Euler) {Euler equation on $\\hat{\\mathfrak{g}}^*$(central extension)};\n \n \\path [line] (DE) -- node {suspension $\\dot{\\phi}=\\omega$} (SDE) ;\n \\path [line] (SDE) -- node [align=center]{space \\\\averaging \\\\for $S^1$-action} (tilSDE) ;\n \\path [line,transform canvas={yshift=0.1em}] (DE) -- node [align=center,below]{fast time\\\\averaging\\\\procedure}(aveDE);\n \\path [ line, decorate, decoration=snake] (SDE) -- node [align=right,left] {Poincar\\'e \\\\approximation\\\\theorem} (aveDE);\n \\path [line] (tilSDE) -- node [align=right,left]{symplectic reduction \\\\for a non-zero value \\\\of momentum map }(redSDE);\n \\path [line,dashed] (tilSDE) -- node [align=right,xshift=-1.5em,yshift=-2em]{configuration space for \\\\$\\widetilde{SDE}$ is the central \\\\extension group $\\widehat G$, \\\\ reduction to $\\hat{\\mathfrak{g}}^*$ }(Euler);\n \\path [line,dashed] (redSDE) -- node [align=center,below]{for group $G$\\\\ as the base,\\\\ Hamiltonian \\\\~ reduction to $\\hat{\\mathfrak{g}}^*$ }(Euler);\n \\draw[implies-implies, double distance=0.5em](aveDE) 
-- (redSDE);\n \n \n\\end{tikzpicture}\n}\n\\caption{The two routes from a fast-oscillating system to its averaged description: fast-time averaging (left column) versus space averaging followed by symplectic reduction (right column).}\n\\label{fig:diagram}\n\\end{figure}\n\n\\newpage \n\n\\begin{theorem}\nFor a natural slow-fast Hamiltonian system the resulting slow (averaged) system coincides with the one\nobtained by space averaging over the fibers of an appropriate $S^1$-bundle and performing the symplectic reduction of the corresponding cotangent bundle over the $S^1$-action at the momentum value related to the \nfast frequency. The averaged system turns out to be a natural Hamiltonian system with an amended potential function with respect to a twisted (magnetic) symplectic structure.\n\\end{theorem}\n\nFurthermore, central extensions appear whenever the base of the reduction turns out to be a group itself, as\ndiscussed in Section \\ref{sect:cenExtention}. This can be regarded as a manifestation of the reduction by stages\ndeveloped in \\cite{mars2}.\nThe second main result of the paper is the following abbreviated version of Theorem \\ref{thm:group-ext}:\n\n\\begin{theorem}\nIf the slow manifold is a group $G$ and the perturbed Hamiltonian system is invariant relative to the $G$-action,\nthen the second reduction of such \na fast-oscillating system gives an Euler equation, Hamiltonian with respect to the\nLie-Poisson bracket on a central extension $\\widehat{\\mathfrak g}$ of the corresponding Lie algebra $\\mathfrak g$.\n\\end{theorem}\n\nThe essence of the paper is described in the diagram in Figure \\ref{fig:diagram}: we show how to view the fast-time averaging approximation on the left by going via the averaging on the top and reduction in the right column of\nthe diagram. 
We describe this averaging-reduction procedure in Section \\ref{sect:averaging}, and compare its result with the one obtained by using the classical fast-time averaging method in Section \\ref{sec:ODEaveraging}.\n\n\\smallskip\n\nIn Section \\ref{sect:applications} we describe three examples by using the averaging-reduction procedure developed in this paper: the vibrating pendulum manifests the appearance of the\namended potential, the Craik-Leibovich equation for an oscillating boundary is related to the magnetic term in\nthe symplectic structure and a central extension, while the motion of particles\nin rapidly oscillating potentials has both the magnetic term and the additional potential\npresent upon averaging.\n(Note that, instead of the classical approach of applying ingenious canonical transformations \\cite{cole},\nthe present paper gives an alternative method of averaging natural Hamiltonian systems: one can average the metric, which contains all relevant information, and then obtain the averaged natural system directly\nfrom that metric.)\n\n\\smallskip\n\nWhile the symplectic reduction part of this paper is also valid for a higher-dimensional torus action, i.e. for\nthe many-frequency case, the approximation theorem does not work in this generality, since\nfor several frequencies resonances can appear unavoidably in such systems, \nas, e.g., KAM theory demonstrates.\nNote also that in many examples the two contributions appearing in averaging, the magnetic term \nand the potential amendment, are of different order in the small parameter of perturbation. \nIt would be interesting to see whether this is always the case. \n\n\n\\bigskip\n\n{\\bf Acknowledgments.}\nWe are grateful to Mark Levi and Anatoly Neishtadt for stimulating discussions. A part of this work was done when C.Y. was visiting the Fields Institute in Toronto and B.K. was visiting the Weizmann Institute in Rehovot\nand the IHES in Bures-sur-Yvette. 
The work was partially supported by an NSERC research grant.\n\n\n\n\n\\medskip\n\n\\section{Symplectic reduction of cotangent bundles}\\label{sec:reduction}\n\nWe start by recalling (following \\cite{mars2}) general results on symplectic\n reduction. We consider the action of an abelian group $\\mathbb T:=S^1$\n(or, more generally, a torus $\\mathbb T=T^k$),\nthe case common in averaging; the results, with appropriate amendments, hold for a reduction by any Lie group.\nAssume that the group $\\mathbb T$ acts on a configuration space $Q$ (from the right) properly and freely,\nso that the quotient space $Q\/\\mathbb T$ is a manifold.\nOur first goal is to reduce the phase space $T^*Q$ by the $\\mathbb T$-action and describe structures on the reduced phase space.\nThe quotient\nprojection $\\pi: Q \\rightarrow B:=Q\/\\mathbb T$ defines a principal fiber bundle over the base $B$.\nIt turns out that the curvature of this $\\mathbb T$-bundle enters the symplectic structure of the reduced manifold.\nThe gyroscope example below can be regarded as an illustration of the abstract reduction procedure.\n\\smallskip\n\nNamely, the group $\\mathbb T$ acts on $T^* Q$ by cotangent lifts, and we denote the momentum map\n of this action by $J: T^* Q\\rightarrow \\mathfrak{t}^*$.\n The momentum map is a natural projection \n of $T_q^* Q$ at any $q\\in Q$ to the cotangent space to the fiber, $\\mathfrak{t}^*$.\nFor an arbitrary value $\\mu\\in \\mathfrak{t}^*$ of the momentum map consider the reduced phase space\\footnote{For an arbitrary Lie group the reduced space is defined as\n$J^{-1}(\\mu)\/\\mathbb T_\\mu$ where $\\mathbb T_\\mu$ is the stationary subgroup of $\\mu$.\nIn this section we use the fact that $\\mathbb T$ is abelian, and hence the stationary group $\\mathbb T_\\mu$ coincides with the full group: $\\mathbb T_\\mu = \\mathbb T$.}\n$(T^*Q)_\\mu := J^{-1}(\\mu)\/\\mathbb T$.\n\n\n\\begin{theorem} \\label{thm:cotbundle}{\\rm (see e.g. 
\\cite{mars2})}\n\tLet $\\mathbb T$ be an abelian group acting on a manifold $Q$ so that\n\t$\\pi: Q \\rightarrow Q\/\\mathbb T=:B$ is a principal fiber bundle, and fix $\\mu \\in \\mathfrak{t}^*$.\n\tLet $\\mathcal{A}: TQ \\rightarrow \\mathfrak{t}$ be a principal connection 1-form on this bundle.\n\tThen\n\t\n\t $i)$ for $\\mu=0$ there is a symplectic diffeomorphism between $(T^* Q)_0$ and\n\t$T^*B=T^* (Q\/\\mathbb T)$ equipped with the canonical symplectic form \t$\\omega_{can}$;\n\t\n\t$ii)$ for $\\mu\\not=0$ there is a symplectic diffeomorphism between $(T^* Q)_\\mu$ and\n\t$T^*B$, where the latter is\n\tequipped with the symplectic form $\\omega_\\mu:=\\omega_{can} - \\beta_\\mu$. \tHere the 2-form\n\t$\\beta_\\mu := \\pi_P^* \\sigma_\\mu$ on $T^*B$ is obtained by the pull-back via the cotangent bundle\n\tprojection $\\pi_P: T^*B \\rightarrow B$ from the 2-form $\\sigma_\\mu$ on $B$. The latter 2-form is\n\tthe $\\mu$-component of the curvature of the principal fiber bundle $Q$ over $B$, namely\n\t$\\pi^* \\sigma_\\mu = \\mathbf{d}\\left\\langle \\mu, \\mathcal{A}\\right\\rangle.$\n\\end{theorem}\n\n\\begin{proof}[\\textrm{\\textit{Proof outline}}]\nWe just recall an explicit form of the isomorphism between $(T^* Q)_\\mu$ and $T^* (Q\/\\mathbb T)$, see Theorem~2.3.3 in \\cite{mars2} for more detail.\n\nThe isomorphism $\\varphi_0: (T^* Q)_0 \\rightarrow T^* (Q\/\\mathbb T)$ is defined by noting that\n\\[\nJ^{-1}(0) = \\{ p_q \\in T^* Q: \\left< p_q , \\xi_Q(q) \\right> = 0 \\quad \\text{for all $\\xi \\in \\mathfrak{t}$} \\}\\,,\n\\]\nwhere $\\xi_Q$ is the vector field on $Q$ corresponding to the infinitesimal action $\\xi$, i.e. vectors $\\xi_Q(q)$ span the vertical subspace at $q$.\nThus the map $\\Phi : J^{-1}(0) \\rightarrow T^*(Q\/\\mathbb T)$ given by\n\\begin{equation} \\label{barphi}\n\t\\left< \\Phi(p_q) , \\pi_*(v_q) \\right> = \\left< p_q , v_q \\right> \\quad \\text{for all $v_q \\in T_q Q$}\n\\end{equation}\nis well defined: since $p_q\\in J^{-1}(0)$ annihilates vertical vectors, the right-hand side depends only on the projection $\\pi_*(v_q)$. 
The map $\\Phi$ is $\\mathbb T$-invariant and surjective, and hence induces a quotient map $\\varphi_0: (T^* Q)_0 \\rightarrow T^* (Q\/\\mathbb T)$.\n\nThe isomorphism $\\varphi_\\mu : (T^* Q)_\\mu \\rightarrow T^* (Q\/\\mathbb T)$ is the composition\n$\\varphi_\\mu = \\varphi_0 \\circ {\\mathrm{shift}}_\\mu$ of $\\varphi_0$ with the \n isomorphism ${\\mathrm{shift}}_\\mu : (T^* Q)_\\mu \\rightarrow (T^* Q)_0$ defined as follows.\n Introduce a map ${\\mathrm{Shift}}_\\mu : J^{-1}(\\mu) \\rightarrow J^{-1}(0)$ by\n\\[\n\t{\\mathrm{Shift}}_\\mu(p_q) = p_q - \\left< \\mu, \\mathcal{A}(q) \\right>\n\\]\nfor any $p_q\\in J^{-1}(\\mu)$. It is $\\mathbb T$-invariant, so it drops to a quotient map ${\\mathrm{shift}}_\\mu: (T^* Q)_\\mu \\rightarrow (T^* Q)_0$.\nThe $\\mathfrak{t}$-valued\n2-form $\\mathbf{d} \\mathcal{A}$ is the curvature of the (abelian) connection $\\mathcal{A}$, while to construct the 2-form $\\sigma_\\mu$\none uses its $\\mu$-component, cf. \\cite{mars2}.\n\\end{proof}\n\n\\begin{remark}\nThe isomorphism between $(T^* Q)_\\mu$ and $T^*B=T^* (Q\/\\mathbb T)$ is connection-dependent.\nThe reduced symplectic form on $T^*B$ is modified by the curvature 2-form $\\sigma_\\mu$ on $B$, which is traditionally called a {\\it magnetic term}, since it also appears in the description of motion of a charged particle in a magnetic field on $B$.\n\\end{remark}\n\n\n\n\\begin{definition}\\label{def:mech-conn}\nLet the space $Q$ be equipped with a $\\mathbb T$-invariant metric. This metric\ndefines an invariant distribution of horizontal spaces: at each point $q\\in Q$ there is a subspace of $T_qQ$ orthogonal to the fiber (i.e. the $\\mathbb T$-orbit) at $q$. Hence the metric defines an invariant connection 1-form\n$\\mathcal{A}: TQ \\rightarrow \\mathfrak{t}$ on this fiber bundle. 
This 1-form is called a {\\it mechanical connection.}\n\\end{definition}\n\nConsider a natural system on $T^*Q$ with Hamiltonian\n$H(q,p)=(1\/2) (p,\\,p)_q+U(q)$\ninvariant with respect to the $\\mathbb T$-action. (Here and below $(.\\,,.)_q$ stands for the\nmetric on $Q$, i.e. the inner product \non $TQ$, or the induced one on $T^*_qQ$, depending on the context. The Euclidean inner product in $\\mathbb R^n$ is denoted by a dot.) \nThis system descends to a Hamiltonian system\non the quotient $(T^* Q)_\\mu$ with respect to the symplectic structure $\\omega_\\mu=\\omega_{can} - \\beta_\\mu$.\nThe new Hamiltonian $H_\\mu$ is obtained from $H$ by applying the map\n${\\mathrm{Shift}}_\\mu$, and the corresponding potential $U(q)$ acquires an additional term, as we discuss below.\n\n\n\n\n\\begin{example}\\label{ex:spinDisk}\nThe following example of a spinning disk (a gyroscope) on a curved surface sheds light on the geometry behind the symplectic reduction above. \n Cox and Levi proved in \\cite{cole2} that the motion of the disk center coincides with the motion of a charged particle in a magnetic field which is normal to the surface and equal in magnitude to the Gaussian curvature of the surface. \n \nTo explain their result in the context of reduction theory, let $q=(q_1,q_2)$ be orthogonal local coordinates on a surface \n$B\\subset \\mathbb R^3$, so that the metric on the surface is given by $ds^2=a_{11}(q)dq_1^2+a_{22}(q)dq_2^2$. \nWhen the disk is not spinning, its kinetic energy is a function of its position $q$ and linear velocity $\\dot q$, i.e. a function on $TB$. It is given by\n$$\nE_0=\\frac m2(G\\dot q, \\dot q)+\\frac{\\mathbb I_d}2 h(\\dot q,\\dot q),\n$$\nwhere $G=\\mathrm{diag}(a_{11},a_{22})$, $h$ is the second fundamental form of $B$, and $\\mathbb I_d$ is the moment of inertia of the disk along its diameter. 
Denote by $\\mathbb I_a$ the moment of inertia about the disk axis.\n\n\n\\begin{theorem}\\label{thm:spinDisk}{\\rm \\cite{cole2}}\nFor a spinning disk on a surface $B$ the angular momentum $\\mu=\\mathbb I_a\\omega_a$ of the disk about its axis is constant and the disk's center satisfies the following equation \n\\begin{equation}\\label{eq:spinDisk} \n\\frac {d}{dt}\\frac{\\partial E_0}{\\partial \\dot q}-\\frac{\\partial E_0}{\\partial q}=\\sqrt{a_{11}a_{22}}\\,\\mu\\,K(q)\\,\\left[\\begin{array}{cc}0&-1\\\\1&0\\end{array}\\right]\\,\\dot q,\n\\end{equation}\nwhere $K(q)$ is the Gaussian curvature of the surface $B$. \n\\end{theorem}\n\n\\begin{remark}\nBefore we provide a different proof of this result via symplectic reduction, note that the configuration space $Q$ of this system is the circle bundle over the surface: the disk position is defined by the position of its center on the surface $B$ \nand the angle of rotation.\nGlobally $Q$ may be a nontrivial $\\mathbb T$-bundle over $B=Q\/\\mathbb T$. However, for the local consideration below it suffices to consider the case \n$Q=B\\times \\mathbb T$. The phase space is the corresponding tangent bundle $TQ$. \nThe metric in $\\mathbb R^3$ allows one to identify $TQ$ and $T^*Q$, while the disk motion is a Hamiltonian system on the cotangent bundle $T^*Q$.\nThe trajectory of the disk center can be obtained as the symplectic reduction of the system on $T^*Q$ with respect to the\n$\\mathbb T$-action, as we quotient out the disk rotation. Different angular velocities of the disk lead to different values $\\mu$\nof the momentum map, over which one takes the quotient. \nAccording to Theorem \\ref{thm:cotbundle} the resulting system is a Hamiltonian system on $T^*B$, with two \namendments. The corresponding symplectic structure after the reduction will be twisted by a magnetic term.\nIn this setting it will be proportional to the curvature $K(q)$ of the surface, which one observes in Equation \\eqref{eq:spinDisk}. 
\nMoreover, the corresponding Hamiltonian undergoes a shift by $\\mu$. However, in the gyroscope case the shift \nreduces to adding a constant to the Hamiltonian and does not appear in the equations.\\footnote{Jumping ahead, in order to see this \none can use an explicit formula of Theorem \\ref{thm:ave-red}, which gives an additional term $\\frac 12 \\langle \\mu,\\,\\mathbb{I}(q)^{-1}\\mu\\rangle$. It is indeed constant, since in the gyroscope case the inertia operator $\\mathbb I$ does not depend on $q$, while $\\mu$ is a constant angular momentum.}\n\\end{remark}\n\n\\begin{proof}\nLet us identify $TB$ and $T^*B$ by means of the metric. \nFirst note that Equation (\\ref{eq:spinDisk}) is the Euler-Lagrange equation for a Lagrangian system, which can be rewritten as a Hamiltonian system with the Hamiltonian energy function $E_0$ on the cotangent bundle of the surface $B$ (thanks to the metric identification) with a twisted symplectic structure given in local coordinates by\n$$\n\\omega_{\\mu}=\\omega_{can}-\\mu\\,\\sqrt{a_{11}a_{22}}\\,K(q)\\,dq_1\\wedge dq_2\\,,\n$$\nwhere $\\omega_{can}$ is the canonical symplectic structure on $T^*B$.\n\nNext, we show how to obtain this system via symplectic reduction.\nDenote by $\\theta$ the angle between a fixed radius on the disk and the positive direction of the line \n$\\{q_2=const\\}$. This gives us a principal\n$\\mathbb T$-bundle $Q$ with the curved surface $B$ as the base. 
\n\nThe absolute angular velocity of a spinning disk is $\\Omega_a=\\dot \\theta+A(q)\\dot q$, where $A(q)\\dot q$ is the transferred velocity, and $A(q)=(k_1\\sqrt{a_{11}},k_2\\sqrt{a_{22}})$, where $k_1,\\;k_2$ are the geodesic curvatures of the coordinate lines\n$\\{q_1=const\\}$ and $\\{q_2=const\\}$.\n\nSo, in local coordinates, the metric on the principal $\\mathbb T$-bundle $Q$ is given by\n$$\n\\left((\\dot q,\\dot \\theta),(\\dot q,\\dot \\theta)\\right)_{(q,\\theta)}=\\mathbb I_a(\\dot\\theta+A(q)\\dot q)^2+m(G\\dot q,\\dot q)+\\mathbb I_dh(\\dot q,\\dot q).\n$$\nNote that this metric is invariant under the $\\mathbb T$-rotations.\n\nTherefore, the momentum map $J:TQ\\rightarrow \\mathfrak t^*=\\mathbb R$ is $J(q,\\theta;\\dot q,\\dot \\theta)=\\mathbb I_a(\\dot\\theta+A(q)\\dot q)$ and the mechanical connection\n$\\mathcal A:TQ\\rightarrow\\mathfrak t=\\mathbb R$ is $\\mathcal A=d\\theta+A(q)\\,dq$ (here we again identify $TQ$ and $T^*Q$). By Theorem \\ref{thm:cotbundle}, for a fixed value $\\mu=\\mathbb I_a\\Omega_a$ of\nthe momentum map $J$, the system can be reduced to the (co)tangent bundle of the surface $B$ with the magnetic symplectic structure\n$$\n\\omega_{\\mu}=\\omega_{can}-\\mu\\;d(A(q)dq)=\\omega_{can}-\\mu\\;\\sqrt{a_{11}a_{22}}\\,K(q)\\,dq_1\\wedge dq_2,\n$$\nwhere $K(q)$ is the Gaussian curvature of the surface $B$. \n\nThe energy Lagrangian $E=\\frac 12 \\left((\\dot q,\\dot \\theta),(\\dot q,\\dot \\theta)\\right)_{(q,\\theta)}$ \non $Q$ defines the reduced Hamiltonian on $B$, which turns out to be $E_0=1\/2(m(G\\dot q, \\dot q)+\\mathbb I_d h(\\dot q,\\dot q))$. Here we omit the constant term $\\frac 12\\mathbb I_a(\\dot\\theta+A(q)\\dot q)^2\n=\\frac 12\\langle \\Omega_a,\\,\\mathbb{I}_a\\Omega_a\\rangle= \\frac 12\\langle \\mu,\\,\\mathbb{I}_a^{-1}\\mu\\rangle$ in the energy expression, \nsince the value $\\mu$ of the momentum map (i.e. the angular momentum of the disk) is conserved. 
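\n\n(As a sanity check of the magnetic term, take $B$ to be the round unit sphere with spherical coordinates $q=(q_1,q_2)$, so that $a_{11}=1$ and $a_{22}=\\sin^2 q_1$: then $\\sqrt{a_{11}a_{22}}=\\sin q_1$ and $K\\equiv 1$, and the reduced symplectic form becomes $\\omega_\\mu=\\omega_{can}-\\mu\\,\\sin q_1\\, dq_1\\wedge dq_2$, i.e. $\\omega_{can}$ shifted by $\\mu$ times the area form of the sphere, exactly the symplectic structure of a charged particle on the sphere in a constant-strength normal magnetic field.)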
\n\nThis reduced Hamiltonian system with Hamiltonian function $E_0$ \non the cotangent bundle $(T^*B, \\omega_\\mu)$ of the surface describes the motion of the disk center.\n\\end{proof}\n\n\\end{example}\n\n\n\n\\medskip\n\\section{Averaging-Reduction procedure for a natural system}\\label{sect:averaging}\n\\subsection{Averaging}\nLet $\\pi:Q\\to B$ be a principal $\\mathbb T$-bundle. From now on we assume that $\\mathbb T=S^1$ (and occasionally comment on $\\mathbb T=T^n$). The cotangent lift of the $\\mathbb T$-action on $Q$ induces a $\\mathbb T$-action on $T^*Q$.\nDenote by $\\rho, \\rho^*$, and $\\rho_*$ the $\\mathbb T$-actions on $Q, T^*Q$ and $TQ$, respectively.\nLet $d\\eta$ be the standard Euclidean measure on the group $\\mathbb T$.\n\nConsider a natural Hamiltonian system on the cotangent bundle $T^*Q$:\n\\begin{equation}\\label{eq:nature}\nH(q,p)=\\frac 12 (p,\\,p)_q+U(q).\n\\end{equation}\nHere $Q$ is the configuration space of the motion; we assume that this Hamiltonian system has slow motion on the base $B$ and fast motion on the fibers isomorphic to $\\mathbb T$. \nThe Hamiltonian function $H(q,p)$ is not necessarily invariant\nunder the $\\mathbb T$-action on $T^*Q$. 
As the first step one passes to the space $\\mathbb T$-average\n$\\overline{H(q,p)}^{\\mathbb T}$, the $\\mathbb T$-invariant function on $T^*Q$ defined by the following formula:\n$$\n \\overline{H(q,p)}^{\\mathbb T}:=\\frac{1}{\\eta(\\mathbb T)}\\int_{g\\in \\mathbb T} H(\\rho_g^*(q,p))\\;d\\eta(g).\n$$\nFor the natural system \\eqref{eq:nature}, one averages both the kinetic and potential parts of the energy:\n$$\n \\overline{H(q,p)}^{\\mathbb T}=\\frac 12 \\overline{(p,p)}_q^{\\mathbb T}+\\overline{U(q)}^{\\mathbb T},\n$$\nwhere $\\overline{U(q)}^{\\mathbb T} =\\frac{1}{\\eta({\\mathbb T})}\\int_{\\mathbb T} U(\\rho_g(q))\\,d\\eta(g)$\nand $\\overline{(p,p)}_q^\\mathbb T=\\frac{1}{\\eta({\\mathbb T})}\\int_{\\mathbb T}(\\rho_g^*p,\\rho_g^*p)_{\\rho_{g^{-1}}(q)}\\,d\\eta(g)$ \nis defined via the following averaged metric on $Q$:\n\\begin{definition}\n The \\textit{averaged metric} $\\overline{(\\cdot,\\cdot)}^{\\mathbb T}$ on the principal $\\mathbb T$-bundle $Q$ is given by\n$$\n\\overline{(v,v)}_q^{\\mathbb T}:=\\frac{1}{\\eta(\\mathbb T)}\\int_{\\mathbb T}(\\rho_{g*} v,\\rho_{g*} v)_{\\rho_g(q)}\\,d\\eta(g),\n$$\nfor any $v\\in T_qQ$. This defines a $\\mathbb T$-invariant metric on $Q$.\n\\end{definition}\n\nNow define the connection on $Q$ corresponding to the averaged metric:\n\n\\begin{definition}\n The \\textit{averaged connection} $\\bar{\\mathcal{A}}\\in\\Omega^1(Q,\\mathfrak t)$ on the principal $\\mathbb T$-bundle $Q$ is the connection induced by the averaged metric $ \\overline{(\\,,\\,)}_q^{\\mathbb T}$\nby the invariant distribution of horizontal spaces: at each point $q\\in Q$ there is a subspace of $T_qQ$ orthogonal to the fiber (i.e. the $\\mathbb T$-orbit) at $q$. 
\n\nThe connection induced by an invariant averaged metric on $Q$ is the mechanical connection, according to\nDefinition \\ref{def:mech-conn}.\n\\end{definition}\n\n\n\\begin{remark}\nWe would like to give a more explicit description of averaged metrics and connections.\nFirst note that a $\\mathbb T$-invariant metric $(.\\,,.)_q$ on $Q$ can be defined by means of a metric operator\n$\\mathbb I_Q(q):T_qQ\\to T^*_qQ$ for $q\\in Q$, where $\\mathbb I_Q(q): v\\mapsto v^\\flat$, i.e. \n$(v,v)_q:=\\langle v, \\mathbb I_Q (q)v\\rangle$ for $v\\in T_qQ$.\nThis defines the ``fiber inertia operator\" $\\mathbb I(q):\\mathfrak t\\to \\mathfrak t^*$ by restricting to\n$\\mathfrak t=T_q\\mathbb T\\subset T_qQ$ the metric operator $\\mathbb I_Q(q)$\nfor $q\\in Q$. (Recall that for $\\mathbb T=S^1$ we have $\\mathfrak t=\\mathbb R$.) The $\\mathbb T$-invariance of the metric implies that the fiber inertia operator $\\mathbb I$ is equivariant, $\\mathbb I(g(q))=Ad_{g^{-1}}^*\\mathbb I(q)$, i.e. it depends on the base point $\\pi(q)\\in B=Q\/\\mathbb T$ only.\n\n\\smallskip\nThe invariant metric on $TQ$ also induces the momentum map $J:T^*Q\\to \\mathfrak t^*$ for the action of the group $\\mathbb T$.\nIn these terms the averaged mechanical connection can be defined explicitly by\n$$\n\\bar{\\mathcal{A}}(v_q)=\\mathbb{I}(q)^{-1}J(p_q),\n$$\nwhere $v_q$ is a tangent vector in $T_qQ$, \\, $p_q:=\\mathbb I_Q(q)v_q=v_q^\\flat\\in T_q^*Q$ is the corresponding metric-dual cotangent vector, and $\\mathbb I$ is the inertia operator on $\\mathfrak t$ in the fiber at $q$.\n\\end{remark}\n\n\\begin{remark}\\label{rem:metric}\nMore specifically, in coordinates for a trivial bundle $Q$ the general form for a $\\mathbb T$-invariant metric on $Q=B\\times \\mathbb T$ is as follows:\n\\begin{equation}\\label{eq:aveMet}\n((u,\\gamma),(u,\\gamma))_{(x,\\tau)}=(u,u)_x+2\\gamma\\, h(x)\\,\\langle A(x),u\\rangle+h(x)\\gamma^2,\n\\end{equation}\nwhere $(u,\\gamma)\\in T_{(x,\\tau)}(B\\times \\mathbb 
T)=T_x B\\times\\mathfrak t$, $A(x)\\in \\Omega^1(B,\\mathfrak t)=T_x^*B$, $h(x)\\in \\mathbb R^{+}$, and $\\mathfrak t\\simeq \\mathbb R$.\nFor a non-trivial $Q$ this general form is valid locally on the base.\n\\end{remark}\n\n\\begin{proposition}\\label{prop:momentum}\nFor a trivial bundle $Q=B\\times\\mathbb T$ the averaged connection \n$\n\\bar{\\mathcal{A}}\\in\\Omega^1(B\\times\\mathbb T,\\mathfrak t)= T^*_{(x,\\tau)}(B\\times\\mathbb T)\n$ \ncorresponding to the averaged metric (\\ref{eq:aveMet}) is given by $\\bar{\\mathcal{A}}(x,\\tau)=A(x)+d\\tau$.\nThe summands can be regarded as connections on the base $A(x)\\in T^*_xB$ and in the fiber $d\\tau$.\n\\end{proposition}\n\n\\begin{proof}\nFor a trivial bundle $Q$ the momentum map $J:T_{(x,\\tau)}^*(B\\times \\mathbb T)\\to \\mathfrak t^*$ is given by \n$$\nJ_{(x,\\tau)}(a,\\eta)=h(x)\\langle A(x),u\\rangle+h(x)\\langle d\\tau, \\xi\\rangle,\n$$\nwhere $(a,\\eta)\\in T_{(x,\\tau)}^*(B\\times \\mathbb T)$ and $(u,\\xi)\\in T_{(x,\\tau)}(B\\times \\mathbb T)$ is the image \nof $(a,\\eta)$ under the metric identification. 
Indeed, by definition of the momentum map, \nfor any $\\zeta\\in\\mathfrak t$, \n$$\n\\langle J_{(x,\\tau)}(a,\\eta),\\zeta\\rangle=\\langle (a,\\eta),(0,\\zeta)\\rangle=((u,\\xi),(0,\\zeta))\n$$\n$$\n=(u,0)_x+\\zeta h(x)\\langle A(x),u\\rangle+\\xi h(x)\\langle A(x),0\\rangle+\\xi h(x)\\zeta=\\langle h(x)\\langle A(x),u\\rangle+h(x)\\xi,\\zeta\\rangle.\n$$ \nFurthermore, the inertia operator $\\mathbb I(x):\\mathfrak t\\to \\mathfrak t^*$ at $x\\in B$ is given by\n$\\mathbb I(x)\\gamma=h(x)\\gamma $ for any $\\gamma\\in\\mathfrak t,$ hence \nthe average mechanical connection assumes the form\n$\\bar{\\mathcal{A}}(u,\\xi)=\\mathbb{I}(x)^{-1}J(a, \\eta)=\\langle A(x),u\\rangle+\\langle d\\tau, \\xi\\rangle$, as required.\n\\end{proof}\n\n\\smallskip\n\n\n\\subsection{Reduction}\nBy considering the $\\mathbb T$-invariant metric and Hamiltonian (obtained by $\\mathbb T$-averaging)\nwe are now in the framework of Section \\ref{sec:reduction}.\nThe dynamics defined by the averaged Hamiltonian $\\overline{H}^{\\mathbb T}$\non $T^*Q$ can be derived from the corresponding \\textit{averaged} or \\textit{slow motion}, i.e. \nthe dynamics on $T^*B$ of the base space $B=Q\/\\mathbb T$. 
However, unlike the standard averaging \nmethod discussed below in Section \\ref{subsec:averHam}, now we obtain this slow motion via symplectic reduction.\n\nRecall that, for a fixed value $\\mu$ of the momentum map, the reduced space $J^{-1}(\\mu)\/\\mathbb T$ is symplectomorphic to the cotangent bundle $T^*B$ of the base space $B$ with the twisted symplectic form \n $\\omega_\\mu=\\omega_{can}-\\beta_\\mu,$\nwhere $\\omega_{can}$ and $\\beta_\\mu$ are the canonical and magnetic 2-forms\non $T^*B$ (see Theorem \\ref{thm:cotbundle}).\nThe averaged\/slow system turns out to be a Hamiltonian system on the symplectic manifold $(T^*B,\\omega_\\mu)$ with the following reduced Hamiltonian function $\\bar H_\\mu$.\n\n\\begin{theorem}\\label{thm:ave-red}\nFor a natural system on a $\\mathbb T$-bundle $Q$ over the slow manifold $B$\nwith Hamiltonian function $H(q,p)=(1\/2) (p,\\,p)_q+U(q)$,\nthe result of the symplectic reduction of the averaged system\nwith respect to the $\\mathbb T$-action is a natural system\nwith the Hamiltonian function $\\bar H_\\mu$,\n\\begin{equation}\\label{aveHam}\n\\bar H_\\mu(q,p)=\\frac 12 (p,\\,p)_B+U_\\mu(q)\\,,\n\\end{equation}\n on the symplectic manifold $(T^*B,\\omega_\\mu)$.\nHere $(q,p)\\in T^*B$, $(\\cdot,\\cdot)_B$ stands for the metric on the base $B=Q\/\\mathbb T$ obtained via the Riemannian\nsubmersion $Q\\to B$ from the metric \n$\\overline{(\\cdot,\\cdot)}^{\\mathbb T}$ on $Q$, while \n$U_\\mu(q):=\\overline{ U(q)}^{\\mathbb T}+\\frac 12 \\langle \\mu,\\,\\mathbb{I}(q)^{-1}\\mu\\rangle$ is the effective potential.\n\\end{theorem}\n\n\\begin{proof}\nWe start by computing the result of averaging and subsequent symplectic reduction on $T^*Q$ with respect to the $\\mathbb T$-action.\nUpon averaging along $\\mathbb T$-orbits one can assume that the Hamiltonian $\\bar H$ on $T^*Q$ is $\\mathbb T$-invariant,\n$\\bar{H}(q,p)=\\overline{ H(q,p)}^{\\mathbb T}$.\nThe reduced Hamiltonian system on the quotient $(T^* Q)_\\mu$ is Hamiltonian with 
respect to the symplectic structure \n$\\omega_\\mu=\\omega_{can} - \\beta_\\mu$. The new Hamiltonian is obtained from $\\bar H$ by applying the map\n${\\mathrm{Shift}}_\\mu$.\nNamely, abusing the notation, for $(q,p)\\in T^*B$ and a connection $\\bar{\\mathcal{A}}$ in the $\\mathbb T$-bundle $Q$ one has\n$$\n\\bar H_\\mu(q,p)=\\bar H (q,\\,p+ \\langle \\mu,\\bar{\\mathcal{A}}(q) \\rangle)\n=\\frac 12\\overline{( p+ \\langle \\mu, \\bar{\\mathcal{A}}(q) \\rangle, \\, p+ \\langle \\mu,\\bar{\\mathcal{A}}(q)\\rangle )}_q^{\\mathbb T}+\\overline{U(q)}^{\\mathbb T}\n$$\n$$\n=\\frac 12 (p,\\,p)_B+\\overline{(p,\\, \\langle \\mu, \\bar{\\mathcal{A}}(q) \\rangle)}_q^{\\mathbb T}\n+ \\frac 12\\overline{(\\langle \\mu, \\bar{\\mathcal{A}}(q)\\rangle,\\,\\langle \\mu, \\bar{\\mathcal{A}}(q)\\rangle)}_q^{\\mathbb T}+\\overline{U(q)}^{\\mathbb T}\n=\\frac 12 (p,\\,p)_B+U_\\mu(q)\n$$\nfor $U_\\mu(q):=\\frac 12 \\overline{(\\langle \\mu, \\bar{\\mathcal{A}}(q)\\rangle,\\,\\langle \\mu, \\bar{\\mathcal{A}}(q)\\rangle)}_q^{\\mathbb T}+\\overline{U(q)}^{\\mathbb T}$.\nHere we use that $\\bar{\\mathcal{A}}$ is the mechanical connection corresponding to the averaged metric $\\overline{(\\cdot,\\cdot)_q}^{\\mathbb T}$, and hence we have\n$\\overline{(p,\\, \\langle \\mu, \\bar{\\mathcal{A}}(q) \\rangle)}_q^{\\mathbb T}=\\langle \\mu, \\, \\bar{\\mathcal{A}}(q)(v)\\rangle=\\langle \\mu,\\,\\mathbb I(q)^{-1}J(p)\\rangle=0$, since $J(p)=0$, and where $(q,p)\\in T^*B$ is identified with $(q,v)\\in TB$ by means of\nthe averaged metric. Thus on the reduced symplectic manifold\n$T^*B$ with the twisted symplectic form $\\omega_\\mu=\\omega_{can} - \\beta_\\mu$\nthe new reduced Hamiltonian is\n$$\n\\bar H_\\mu(q,p)=\\frac 12 (p,\\,p)_B+U_\\mu(q)\n$$\nfor $q\\in B$ and $p\\in T^*_qB$. 
It is a natural system with a new effective potential\n$$\nU_\\mu(q)=\\frac 12 \\overline{(\\langle \\mu, \\bar{\\mathcal{A}}(q)\\rangle,\\,\\langle \\mu, \\bar{\\mathcal{A}}(q)\\rangle)}_q^{\\mathbb T}+\\overline{U(q)}^{\\mathbb T}\n=\\frac 12 \\langle \\mu, \\mathbb I(q)^{-1}\\mu\\rangle+\\overline{U(q)}^{\\mathbb T}\\,.\n$$\n\\end{proof}\n\nIn Section \\ref{sect:Ave_natHamiltonian} below we will prove the following corollary of Theorem \\ref{thm:ave-red} for averaging\none-frequency fast-oscillating systems: under certain conditions, solutions of the averaged system and projections to the slow manifold of solutions of the actual system with the same initial conditions remain $\\epsilon$-close to each other for $0\\le t\\le 1\/\\epsilon$.\n\n\n\n\\begin{remark}\nThe two features of the averaged-reduced Hamiltonian system are the additional term in the effective potential\n$U_\\mu$ and the magnetic term $-\\beta_\\mu$ in the symplectic structure $\\omega_\\mu$.\nTherefore this averaging-reduction procedure provides a geometrical explanation of these two phenomena,\noften observed in averaging theory.\n\\end{remark}\n\n\n\\begin{remark}\nIn the classical averaging of fast-oscillating systems (cf. Section \\ref{subsec:averHam} below)\none starts by fixing the action variable $J$. This \ncan be regarded as a manifestation of symplectic reduction in flat coordinates, as this means fixing \na certain value of the corresponding momentum map.\nThe bundle averaging-reduction procedure described here is also applicable in that case, but the metric\nin this bundle turns out to be flat. 
Namely, in the reduction to a submanifold $J=\\mu$ \none chooses a flat connection on the principal bundle which corresponds to the direct product of the base\nand fibres, and hence no twisted symplectic structure appears on the reduced manifold: for\nthe momentum value $J=\\mu$, the averaged Hamiltonian function is $\\epsilon \\,\\bar{H}(Q,P,\\mu)$\non the ``flat\" cotangent bundle $(T^*\\mathbb R^{\\ell}, dP\\wedge dQ)$.\n\\end{remark}\n\n\n\n\\medskip\n\n\\section{Central extensions in symplectic reduction} \\label{sect:cenExtention}\nAbove we described the reduced phase space $(T^* Q)_\\mu$ for the right action by the group $\\mathbb T$.\nIn this case, the reduced phase space $(T^* Q)_\\mu$ coincides with $T^* (Q\/\\mathbb T)$, equipped with the magnetic symplectic structure $\\omega_\\mu$ described before.\nNow assume in addition that the base space $Q\/\\mathbb T$ has the structure of another Lie group $G$, which\nacts on itself from the left and leaves the metric on $G=Q\/\\mathbb T$ invariant.\n As a result, $G$ acts on $T^* G=T^* (Q\/\\mathbb T)$ and, as one can check, this action leaves the symplectic structure $\\omega_\\mu=\\omega_{can} - \\beta_\\mu$ invariant.\n (Recall that the magnetic 2-form $\\beta_\\mu := \\pi_G^* \\sigma_\\mu$\n on $T^* G$ is the pullback of the left-invariant 2-form $\\sigma_\\mu$ on the group $G$.)\n Hence another reduction for this $G$-action (``the reduction by stages\") would take this magnetic\n symplectic structure on $T^*G$\n to an appropriate structure on the dual Lie algebra $\\mathfrak{g}^*$, as described below.\n\n\n\n\\begin{theorem}{\\rm (Theorem~7.2.1 in \\cite{mars2})}\\label{thm:poisson}\n The Poisson reduced space for the left action of\n $G$ on $(T^* G, \\omega_\\mu=\\omega_{can} - \\beta_\\mu)$ is the dual Lie algebra $\\mathfrak{g}^*$ with the\n Poisson bracket given by\n\\begin{equation} \\label{bracket}\n\\{f, g\\}_\\mu(\\nu) = -\\left< \\nu, \\left[ \\frac{\\delta f}{\\delta \\nu},\n \\frac{\\delta 
g}{\\delta \\nu} \\right] \\right> - {\\sigma}_\\mu(e)\\left(\n \\frac{\\delta f}{\\delta \\nu}, \\frac{\\delta g}{\\delta \\nu} \\right)\n\\end{equation}\nfor $f, g \\in C^\\infty(\\mathfrak{g}^*)$ at any $\\nu\\in \\mathfrak{g}^*$, where ${\\sigma}_\\mu(e)$\nis the value of the left-invariant 2-form ${\\sigma}_\\mu$ at $e\\in G$ on the pair of tangent vectors $\\frac{\\delta f}{\\delta \\nu}, \\frac{\\delta g}{\\delta \\nu}\\in T_eG=\\mathfrak{g}$, and $\\beta_\\mu := \\pi_G^* \\sigma_\\mu$ is the pullback\nof ${\\sigma}_\\mu$ to $T^*G$.\n\\end{theorem}\n\n\n\\begin{remark}\nThe above Poisson bracket is the Lie-Poisson bracket of the dual $\\widehat{\\mathfrak{g}}^*$ of the central extension\n$\\widehat{\\mathfrak{g}}$ of the Lie algebra ${\\mathfrak{g}}$ by means of the ${\\mathfrak{t}}$-valued\n2-cocycle $\\sigma$, such that $\\langle \\sigma, \\mu\\rangle :={\\sigma}_\\mu(e)$.\nNamely, the Lie algebra $\\widehat{\\mathfrak{g}}$ is the direct sum ${\\mathfrak{g}}\\oplus {\\mathfrak{t}}$,\nas a vector space, with the commutator\n$$\n[(u,a), (v,b)]_{\\widehat{\\mathfrak{g}}}:= ([u,v]_{\\mathfrak{g}}, {\\sigma}(u,v))\n$$\nfor $u,v\\in \\mathfrak{g}$ and $a,b\\in {\\mathfrak{t}}$.\nIt turns out that under certain integrality conditions, the space $Q$ gives a realization of the corresponding\ncentrally extended group $\\widehat G$.\n\\end{remark}\n\nFor the right action of $G$ the bracket changes sign.\n\n\\begin{theorem}\\label{thm:group-ext}\nLet $G$ be a group equipped with a closed integral left-invariant 2-form $\\sigma_\\mu\/2\\pi$. 
Then the $\\mathbb T$-bundle $Q$\nover the group $G$ with the curvature form $\\sigma_\\mu$ can be canonically identified with the\ncentral extension $\\widehat G$\nof the group $G$ by means of $\\mathbb T$, where the Lie algebra 2-cocycle is ${\\sigma}_\\mu(e)$, i.e.\nits value on a pair of Lie algebra elements $\\xi$ and $\\eta$ is ${\\sigma}_\\mu(e)(\\xi,\\eta)$.\n\\end{theorem}\n\n\\begin{proof} The proof is based on a version of Proposition 4.4.2 of \\cite{segal} adjusted to the setting at hand.\nIn fact, one can explicitly construct $\\widehat G$ and identify it with $Q$, the $\\mathbb T$-bundle over $G$.\nNamely, first, to any oriented loop $\\ell$ in $G$ one associates the element\n$C(\\ell)=\\exp (i\\int_{\\partial^{-1}\\ell}\\sigma_\\mu)$, where $\\partial^{-1}\\ell$ is an oriented 2D surface in $G$ bounded\nby $\\ell$. The value $C(\\ell)$ is well-defined, since for two different surfaces with the same boundary\nthe integrals of $\\sigma_\\mu$ for an integral 2-form $\\sigma_\\mu\/2\\pi$ differ by a multiple of $2\\pi$.\n\nThe map $\\ell\\mapsto C(\\ell)$ is independent of the parametrization of $\\ell$, additive, and $G$-invariant.\nIt defines a central extension $\\widehat G$ of the group $G$ by $\\mathbb T$ as the set of triples $(g,u, p)$, where\n$g\\in G$, $u\\in \\mathbb T$ and $p$ is a path in $G$ from $e$ to $g$, modulo the following equivalence.\nTwo triples $(g,u, p)$ and $(g',u', p')$ are equivalent if $g'=g$ and $u'=C(p'\\cup p^{-1})u$. The composition\nis $(g_1,u_1, p_1)\\circ (g_2,u_2, p_2)= (g_1g_2, u_1u_2, p_1\\cup g_1(p_2))$.\n\nRecall that the space $Q$ with an invariant metric has the structure of\na $\\mathbb T$-bundle with the mechanical connection. Then a triple\n$(g,u, p)$ modulo equivalence can be interpreted as the following point in $Q$: it is the point in the\n$\\mathbb T$-fiber over $g\\in G$, obtained from the point $(e,u)$ of the $\\mathbb T$-fiber over $e\\in G$\nby a horizontally lifted path $p$ from $e$ to $g$. 
The equivalence of triples then means exactly that they correspond to the same point in $Q$, since the form $\\sigma_\\mu$ is the curvature of the mechanical connection, while\nthe formula $u'=C(p'\\cup p^{-1})u$ describes the holonomy of the connection over a closed loop.\n\\end{proof}\n\n\\begin{remark}\nTheorems \\ref{thm:poisson} and \\ref{thm:group-ext} can be extended to \nthe case of a torus $\\mathbb T$-bundle $Q$ over $G$, where $\\mathbb T=T^k$. In Theorem \\ref{thm:group-ext}\none realizes $Q$ as a group central extension of $G$ by $\\mathbb T$ by applying the above consideration to \nthe ``coordinate 2-forms\" $\\sigma_\\mu=\\langle \\sigma, \\mu\\rangle $ of the $\\mathfrak t$-valued 2-form $\\sigma$.\n\\end{remark}\n\n\\begin{remark}\nReturn to the 2-cocycle ${\\sigma}_\\mu(e)$ on the Lie algebra $\\mathfrak g$, which defines\nthe central extension and the magnetic term. In many examples, this 2-cocycle\nis a 2-boundary, i.e., the 2-form $\\sigma_\\mu$ on the Lie algebra can be represented as a linear\nfunctional of the Lie algebra commutator, $\\sigma_\\mu(\\xi, \\eta)=L([\\xi,\\eta])$\nfor some element $L\\in \\mathfrak g^*$.\nIn that case, the corresponding Poisson structure on $\\mathfrak g^*$ is the linear Lie-Poisson structure on the dual space $\\mathfrak g^*$ shifted to the point $L$.\nThe associated Euler equation also exhibits a corresponding shift, observed, e.g.,\nas the Stokes drift velocity related to surface waves in the Craik-Leibovich equation, cf. Section \\ref{sect:CL}.\n\\end{remark}\n\n\\begin{remark}\nWhen considering dynamics on the reduced space $T^*G$, in order to use the second reduction\nwith respect to the $G$-action one has to restrict to natural systems with an effective potential independent of $q$, i.e.\n$U_\\mu(q)=\\mathrm{const}$. The latter are geodesic flows for the invariant metric on $G$ defined by the inertia operator\n$\\mathbb I_G:\\mathfrak g\\to \\mathfrak g^*$. 
The second reduction \n defines the Euler equations for the quadratic Hamiltonian\n$H(p):=\\frac 12 (p,p)_e=\\frac 12 \\langle p, \\mathbb I_G^{-1}p\\rangle$\non the dual $\\widehat{\\mathfrak{g}}^*$ of the centrally extended Lie algebra $\\widehat{\\mathfrak{g}}$.\n\\end{remark}\n\n\n\\medskip\n\n\\section{Reminder on averaging and examples}\\label{sec:ODEaveraging}\n\n\\subsection{Averaging in one-frequency Hamiltonian systems}\\label{subsec:averHam}\nConsider a Hamiltonian system with $\\ell+1$ degrees of freedom and a Hamiltonian of the form\n$H(q,p, I, \\phi)=H_0(I)+\\epsilon H_1(q,p, I, \\phi)$, where $\\phi ({\\rm mod }\\, 2\\pi)\\in \\mathbb T$, \n$H$ is $2\\pi$-periodic in $\\phi$, and $(q,p, I)\\in D\\subset \\mathbb R^{2\\ell+1}$. (Such perturbations of properly degenerate Hamiltonian systems are typical in celestial mechanics.)\nThe corresponding Hamiltonian equations for the standard symplectic structure are as follows:\n\\begin{equation}\\label{eq:Ham_eq}\n\\left\\{\n \\begin{array}{ll}\n\t\t\\dot q = \\epsilon\\,{\\partial H_1}\/{\\partial p}, &\\quad\n\t\t\\dot I = -\\epsilon \\,{\\partial H_1}\/{\\partial \\phi},\\\\\n\t\t\\dot p = -\\epsilon\\,{\\partial H_1}\/{\\partial q}, &\\quad\n\t\t\\dot \\phi=\\,{\\partial H_0}\/{\\partial I}+\\epsilon\\,{\\partial H_1}\/{\\partial I}.\n\\end{array} \\right.\n\\end{equation}\n\n\n\\begin{definition}\nThe {\\it averaged system} for the above Hamiltonian $H=H_0+\\epsilon H_1$ is the system of $2\\ell+1$ equations:\n\\begin{equation}\\label{eq:Ave_Ham_eq}\n\\left\\{\n \\begin{array}{ll}\n\t\t\t\t\\dot Q = \\epsilon \\,{\\partial \\bar{H}_1}\/{\\partial P}, &\\quad\\dot J = 0,\\\\\n \\dot P = -\\epsilon \\,{\\partial \\bar{H}_1}\/{\\partial Q}, &\n\\end{array} \\right.\n\\end{equation}\nwhere $\\bar H_1(Q, P,J):=\\frac{1}{2\\pi}\\int_0^{2\\pi} H_1(Q, P,J, \\phi)\\,d\\phi$.\n\\end{definition}\n\nSince there is no evolution of $J$ in the averaged system, one can fix it 
and regard $J$ as a parameter\nfor the Hamiltonian system with $\\ell$ degrees of freedom, where\n$\\bar{H}(Q,P)=\\bar{H}_J(Q,P)=H_0(J)+\\epsilon \\bar{H}_1(Q,P,J)$.\n\\medskip\n\nLet $(q,p, I)$ belong to a domain $D\\subset \\mathbb R^{2\\ell+1}$, and let $D_\\delta\\subset D$ stand for\na subdomain whose $\\delta$-neighbourhood belongs to $D$. Assume that the Hamiltonian\n$H$ is $C^3$-bounded for\n$(q, p, I, \\phi) \\in D\\times \\mathbb T$, as well as ${\\partial H_0(I)}\/{\\partial I}>C>0$ in $D$ and\n$(Q(t), P(t), J(t))\\in D_\\delta$ for all $0\\le t\\le 1\/\\epsilon$.\n\n\\begin{theorem}\\label{thm:Ave_Ham}{\\rm (cf. \\cite{ArnMM, arko})}\nFor sufficiently small positive $\\epsilon$ (i.e. $0<\\epsilon<\\epsilon_0$) solutions of the actual system \\eqref{eq:Ham_eq} and the averaged system \\eqref{eq:Ave_Ham_eq}\nwith the same initial conditions $(q(0), p(0), I(0))=(Q(0),P(0), J(0))$\nremain $\\epsilon$-close to each other for $0\\le t\\le 1\/\\epsilon$:\n$|I(t)-I(0)|+|q(t)-Q(t)|+|p(t)-P(t)|< c\\,\\epsilon$ for some constant $c>0$ independent of $\\epsilon$.\n\\end{theorem}\n\n\\begin{theorem}\nFor sufficiently small $\\epsilon>0$ solutions for the original Hamiltonian \\eqref{eq:natHamiltonian} and the averaged Hamiltonian \\eqref{eq:Ave_natHamiltonian}\nwith the same initial conditions $(q(0), p(0), \\gamma(0))=(Q(0),P(0),\\mu(0))$\nremain $\\epsilon$-close to each other for $0\\le t\\le 1\/\\epsilon$:\n$|\\gamma(t)-\\mu(t)|+|q(t)-Q(t)|+|p(t)-P(t)|< c\\,\\epsilon$.\n\\end{theorem}\n\nFor any $\\alpha>0$, the subset $D_{\\epsilon}(\\alpha)\\subset D_{\\epsilon}$ contains the initial points such that the difference \\bk{How to measure this difference? Some metric?} between the exact and averaged trajectories does not exceed $\\alpha$ for all $t\\in[0,1\/\\epsilon]$. 
Then according to Anosov's theorem \\cite{loch}, under the above ergodicity assumption and assuming that the Hamiltonian function is smooth enough, we have\n\\begin{equation}\n\\lim_{\\epsilon\\rightarrow 0} \\lambda(D_{\\epsilon}\\setminus D_{\\epsilon}(\\alpha))=0,\n\\end{equation}\nwhere $\\lambda$ is the volume form on the base.\n\n\n\n\\fi\n\n\n\\medskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nRecently, Ishida et al. have proposed the covariant\n $\\widetilde{U}(12)_{SF}$-classification scheme of hadrons\\cite{IIM2000},\n which gives covariant quark representations for composite hadrons with\n definite Lorentz and chiral transformation properties.\n The $\\widetilde{U}(12)_{SF}$-classification scheme has a unitary symmetry\n in the hadron rest frame, called ``static $U(12)_{SF}$\n symmetry''\\cite{IIYMO2005},\n embedded in the covariant $\\widetilde{U}(12)_{SF}$-representation space,\n whose tensors can be decomposed into representations of\n $\\widetilde{U}(4)_{DS} \\times SU(3)_{F}$,\n $\\widetilde{U}(4)_{DS}$ being the\n pseudounitary homogeneous Lorentz group for Dirac spinors.\n The static $U(12)_{SF}$ contains the Dirac spin group $U(4)_{DS}$\n among its subgroups, and\n $U(4)_{DS}$ contains two $SU(2)$ subgroups as\n $U(4)_{DS} \\supset SU(2)_{\\rho} \\times SU(2)_{\\sigma}$,\n where $SU(2)_{\\rho}$ and $SU(2)_{\\sigma}$ are the spin groups\n concerning the boosting and intrinsic-spin rotation, respectively,\n of constituent quarks, being connected with the decomposition of Dirac\n $\\gamma$-matrices, $\\gamma \\equiv \\rho \\otimes \\sigma$.\n Thus the static $U(12)_{SF}$ symmetry includes\n the chiral $SU(3)_{L} \\times SU(3)_{R}$ symmetry as\n $U(12)_{SF} \\supset SU(3)_{L} \\times SU(3)_{R} \\times SU(2)_{\\sigma}$.\n This implies\n that the $\\widetilde{U}(12)_{SF}$-classification scheme is able to\n incorporate effectively the effects of chiral symmetry and its\n spontaneous breaking, essential for 
understanding the properties\n of the low-lying hadrons, into what is called a constituent quark\n model.\n\n\n\\section{Experimental Candidates for the Ground-State\n Quark-Antiquark Mesons}\n\nAn essential feature of the $\\widetilde{U}(12)_{SF}$-classification\n scheme is to have the static $U(4)_{DS}$ symmetry\n for light $u, d, s$ quarks confined inside hadrons.\n The degree of freedom on the $\\rho$-spin, being indispensable\n for a covariant description of spin $1\/2$ particles, offers a basis\n to define the rule of chiral transformation for quark-composite hadrons.\n Since we have the $\\rho$-spin degree of freedom, which is discriminated\n by the eigenvalues of $\\rho _{3}$, $r=\\pm$, in addition to the ordinary\n $\\sigma$-spin, the ground states of light-quark $q \\bar{q}$ mesons are\n composed of eight $SU(3)_{F}$ multiplets with respective $J^{PC}$\n quantum numbers, two pseudoscalars\n $(0^{-+}_{\\mathrm{N}}, 0^{-+}_{\\mathrm{E}})$, two scalars\n $(0^{++}_{\\mathrm{N}}, 0^{+-}_{\\mathrm{E}})$, two vectors\n $(1^{--}_{\\mathrm{N}}, 1^{--}_{\\mathrm{E}})$, and two axial-vectors\n $(1^{++}_{\\mathrm{N}}, 1^{+-}_{\\mathrm{E}})$\n (N and E denoting ``normal'' and ``extra''),\n where each N (E) even-parity multiplet is the chiral partner\n of the corresponding N (E) odd-parity multiplet and\n they form linear representations of the chiral symmetry.\n \nSince the eigenstates only with the $\\rho _{3}$-eigenvalue\n $r=+$ are taken for heavy quarks, we have\n for heavy-light meson systems two heavy-spin multiplets,\n $(0^{-}, 1^{-})$ and $(0^{+}, 1^{+})$, which are chiral partners\n of each other, while for heavy-heavy meson systems\n we have the same $(0^{-}, 1^{-})$-spin multiplets as in the\n conventional nonrelativistic quark model.\n\n\n\\subsection{The $\\widetilde{U}(12)_{SF}$-scheme assignments\n for the observed mesons}\n\nWe try to assign some of the observed mesons to the predicted\n $q \\bar{q}$ multiplets, resorting to their $J^{PC}$ quantum\n 
numbers and masses. The observed meson data are taken from\n the Particle Data Group 2004 edition\\cite{PDG2004},\n except for the following mesons:\n\n\\begin{itemize}\n\n\\item $\\rho(1250)$.\n There are several experimental indications of the existence\n of the $\\rho(1250)$\n reported by the OBELIX\\cite{OBELIX1997} and LASS\\cite{LASS1994}\n Collaborations, and others.\\footnote{See the $\\rho(1450)$\n Particle Listings and the ``Note on the $\\rho(1700)$''\n in \\cite{PDG2004}.}\n \n\\item $\\omega(1200)$.\n The existence of $\\omega(1200)$ is claimed in the analysis\n of the $e^{+}e^{-} \\rightarrow \\pi^{+}\\pi^{-}\\pi^{0}$\n cross section by the SND Collaboration\\cite{SND1999}.\n \n\\end{itemize}\nWe accept the existence of these vector mesons as true\\cite{Komada}.\n The resulting assignments, though some of them are ambiguous,\n are shown in Table 1.\n\\begin{table}\n \\caption{Experimental candidates for ground-state mesons in the\n $\\widetilde{U}(12)_{SF}$-classification scheme.}\n\t\\includegraphics[width=.99\\textwidth]{table-1.epsi}\n\\end{table}\nHere we make some comments on these assignments.\n\\begin{enumerate}\n\\renewcommand{\\labelenumi}{(\\theenumi)}\n\\item The light scalar mesons\n $\\{a_{0}(980), \\sigma, f_{0}(980), \\kappa\\}$ are assigned to\n the $(0^{++}_{\\mathrm{N}})$-nonet as a chiral partner of\n the $\\pi$-meson $(0^{-+}_{\\mathrm{N}})$-nonet.\n\n\\item The low-mass vector mesons\n $\\{\\rho(1250), \\omega(1200), K^{\\ast}(1410)\\}$ are assigned\n to the $(1^{--}_{\\mathrm{E}})$-nonet as a chiral partner\n of the $(1^{+-}_{\\mathrm{E}})$-nonet\n $\\{b_{1}(1235), h_{1}(1170),$ $h_{1}(1380),$ $K_{1}(1400)\\}$.\n\n\\item The axial-vector mesons\n $\\{a_{1}(1260), f_{1}(1285), f_{1}(1420), K_{1}(1270)\\}$\n are assigned to the $(1^{++}_{\\mathrm{N}})$-nonet\n as a chiral partner of the $\\rho(770)$-meson\n $(1^{--}_{\\mathrm{N}})$-nonet.\n\n\\item The recently observed mesons\n $\\{D_{sJ}^{\\ast}(2317), D_{sJ}(2460)\\}$ are assigned 
to\n the $(0^{+}, 1^{+})$ multiplet as a chiral partner of\n the $(0^{-}, 1^{-})$ multiplet $\\{D_{s}, D_{s}^{*}\\}$\\cite{Ishida2003}.\n These newly observed mesons, together with the $\\sigma$-meson\n nonet, are the best candidates for the hadronic states with $r=-$\n whose existence is expected\n in the $\\widetilde{U}(12)_{SF}$ scheme.\n\n\\item It is noted that the normal (N) and extra (E) states\n with the same $J^{PC}$ generally mix together due to\n the spontaneous as well as explicit breaking of chiral symmetry\n and some other mechanism.\n\n\\end{enumerate}\n\n\n\\section{CHIRAL MASS SPLITTING FOR THE CHARMED AND\n CHARMED-STRANGE MESON SYSTEMS}\n\nIn the $\\widetilde{U}(12)_{SF}$-classification scheme\n heavy-light $(c \\bar{q})$ meson fields, aside from the\n internal space-time wave functions, are given by\n\\begin{equation}\n\t\\Phi(v)=\\frac{1}{2\\sqrt{2}}(1-i v \\cdot \\gamma)\n\t\t(i \\gamma _{5} \\mathbf{D}\n\t\t+i \\tilde{\\gamma} _{\\mu} \\mathbf{D}_{\\mu}^{\\ast}\n\t\t+\\mathbf{D}_{0}\n\t\t+i \\gamma _{5}\\tilde{\\gamma}_{\\mu} \\mathbf{D}_{1 \\mu})\n\\end{equation}\nwith\n\t$v_{\\mu} \\equiv P_{\\mu}\/M,\\ \n\t\\tilde{\\gamma} _{\\mu} \\equiv \\gamma _{\\mu}\n\t\t+ v_{\\mu}(v \\cdot \\gamma)$,\nwhere $(\\mathbf{D},\\mathbf{D}_{\\mu}^{\\ast},\\mathbf{D}_{0},\n\\mathbf{D}_{1 \\mu})$ represent the local fields for the\n $c \\bar{q}$ mesons with $J^{P}=(0^{-},1^{-},0^{+},1^{+})$,\n $P_{\\mu}$ ($M$) is the four-momentum (mass) of meson fields,\n and flavor indices are omitted for simplicity.\nTo describe the light-quark pseudoscalar and scalar mesons\n together with the spontaneous breaking of chiral symmetry,\n we adopt the $SU(3)$ linear sigma model, introducing\n the chiral field $\\Sigma _{5}$ defined by\n\\begin{equation}\n\t\\Sigma _{5} = s-i \\gamma _{5}\\phi\n\\end{equation}\nwith\n\t\\[ s=\\frac{1}{\\sqrt{2}} s^{a} \\lambda^{a}, \\ \\\n\t\\phi=\\frac{1}{\\sqrt{2}} \\phi^{a} \\lambda^{a} \\ \\ \\\n\t(a=0, \\cdots ,8), \\]\nwhere 
$\\lambda^{0}=\\sqrt{2\/3} \\ \\mathbf{1}$ and $s^{a}$\n ($\\phi^{a}$)\n are the scalar (pseudoscalar) fields. We now write\n a chiral-symmetric effective Lagrangian which gives\n the chiral mass splitting between the heavy-light\n $(0^{-},1^{-})$ and $(0^{+},1^{+})$ multiplets\n through the spontaneous breaking of\n chiral symmetry\\cite{BEH2003,Ishida2003}:\n\\begin{equation}\n\t\\mathcal{L}_{ND}=-g_{ND} \\mathrm{Tr}\n\t[\\Phi \\Sigma _{5} \\bar{\\Phi}] \\label{eq:LND},\n\\end{equation}\nwhere $g_{ND}$ is the dimensionless coupling constant\n of Yukawa interaction in the nonderivative form and\n the trace is taken over the spinor and flavor indices.\n \nWhen the chiral symmetry is spontaneously broken,\n $s$ has the vacuum expectation value,\n $\\langle s \\rangle _{0}=\\mathrm{diag}(a,a,b)$,\n where $a$ and $b$ are related to the pion and kaon decay\n constants by\n\\begin{equation}\n\ta=\\frac{1}{\\sqrt{2}} f_{\\pi}, \\ \\\n\tb=\\frac{1}{\\sqrt{2}} (2f_{k}-f_{\\pi}).\n\\end{equation}\nThen the mass splitting between the two multiplets\n is induced and the mass differences $\\Delta M_{\\chi}(c \\bar q)$\n are given by\n\t$\\Delta M_{\\chi}(c \\bar n)=2g_{ND} a$ and \n\t$\\Delta M_{\\chi}(c \\bar s)=2g_{ND} b$,\nwhich leads to the relation\n\\begin{equation}\n\t\\Delta M_{\\chi}(c \\bar n)=\\Delta M_{\\chi}(c \\bar s) \\frac{a}{b}\n\t=\\Delta M_{\\chi}(c \\bar s) \\left( \\frac{2f_{K}}{f_{\\pi}}-1 \\right)^{-1}.\n\\end{equation}\nFrom this relation with the experimental values\\cite{PDG2004},\n\t$\\Delta M_{\\chi}(c \\bar s) = 348.0 \\pm 0.8 \\ \\mathrm{MeV}$\n\tand\n\t$f_{K^{+}}\/f_{\\pi^{+}} = 1.223 \\pm 0.015$,\nwe obtain\n\t$\\Delta M_{\\chi}(c \\bar n) = 240.8 \\pm 5.4 \\ \\mathrm{MeV}$\nand consequently predict the masses\n\\begin{equation}\n\tM(D_{0}^{\\ast}) = 2.11 \\pm 0.01 \\ \\mathrm{GeV}, \\ \\\n\tM(D_{1}) = 2.25 \\pm 0.01 \\ \\mathrm{GeV}\n\\end{equation}\nfor the $(0^{+},1^{+})$ $c \\bar n$ mesons, using the measured\n mass values\\cite{PDG2004} of 
the $D(0^{-})$ and $D^{\\ast}(1^{-})$ mesons.\n We hereafter refer to these predicted mesons, respectively, as\n ``$D_{0}^{\\ast}(2110)$'' and ``$D_{1}(2250)$''.\n\n\n\\section{POSSIBLE INDICATIONS OF THE EXISTENCE OF LIGHT SCALAR\n AND AXIAL-VECTOR CHARMED MESONS}\n\nWe could ask experimental data whether there was some evidence\n for the existence of $D_{0}^{\\ast}(2110)$ and $D_{1}(2250)$.\n Here we check on the recent published data on the\n $D \\pi$ and $D^{*} \\pi$ mass distributions in\n $B \\rightarrow (D \\pi)\\pi$, $(D^{*}\\pi)\\pi$ decays from\n the Belle\\cite{Belle2004} and BABAR\\cite{BABAR2003} Collaborations.\n\n\\begin{itemize}\n\\item \\textbf{$D \\pi$ mass spectrum}:\n In the Belle data\\footnote{See the $D \\pi$ mass distribution\n in Figure 3 of \\cite{Belle2004}.}\n we see an excess of events, a single data point of 20 MeV bin,\n at a mass of 2.13 GeV near the predicted mass of the\n $D_{0}^{\\ast}(2110)$, and so might regard it as an indication\n of that resonance, though it is natural to think that\n its data point should be within a statistical error.\n On the other hand, it would seem to us that the BABAR\n data\\footnote{See the $D \\pi$ mass distribution\n in Figure 3 (right) of \\cite{BABAR2003}.} around a mass of\n 2.1 GeV show a typical pattern of interference between\n two or more resonances.\n\n\\item \\textbf{$D^{*}\\pi$ mass spectrum}:\n In the Belle data\\footnote{See the $D^{*} \\pi$ mass distribution\n in Figure 9 of \\cite{Belle2004}.} there is also an excess of events,\n a single data point of 10 MeV bin, at a mass of 2.255 GeV\n near the predicted mass of the $D_{1}(2250)$,\n and so it might be an indication of the resonance.\n Although it is not clear, the BABAR data\\footnote{See the $D^{*} \\pi$\n mass distribution in Figure 3 (left) of \\cite{BABAR2003}.}\n around a mass of 2.26 GeV might show a typical pattern\n of interference.\n\n\\end{itemize}\n\nIf the $D_{0}^{\\ast}(2110)$ and $D_{1}(2250)$ resonances\n really 
exist, their widths have to be narrow,\n $\\leq 20 \\textrm{-} 30$ MeV, judging from the data mentioned above.\n The dominant decay modes of these resonances are\n $D \\pi$ and $D^{*} \\pi$, respectively, and thus\n we examine their single pion transitions.\n To estimate the widths of $D_{0}^{\\ast}\n \\rightarrow D + \\pi$ and $D_{1}\n \\rightarrow D^{*} + \\pi$ decays, together with\n $D^{\\ast} \\rightarrow D + \\pi$, we set up,\n in addition to the nonderivative interaction $\\mathcal{L}_{ND}$\n in Eq. (\\ref{eq:LND}),\n the chiral-invariant effective interaction\n with the derivative form:\n\\begin{equation}\n\t\\mathcal{L}_{D} = g_{D} \\mathrm{Tr}\n\t[\\Phi (\\partial _{\\mu} \\Sigma _{5}) \\gamma _{\\mu}\n\t(F_{U} \\bar{\\Phi})] \\label{eq:LD},\n\\end{equation}\nwhere\n $F_{U} = \\gamma \\cdot \\partial\n \/ \\sqrt{\\partial \\cdot \\partial}$\n and $g_{D}$ is the coupling constant with a dimension\n of $(mass)^{-1}$, which is related to the axial coupling\n constant $g_{A}$ by\n $g_{D} = g_{A}\/2a = g_{A}\/\\sqrt{2} f_{\\pi}$.\n The pionic decay widths of the $D^{*}$, $D_{0}^{\\ast}$,\n and $D_{1}$ states are derived from $\\mathcal{L}_{ND}$\n and $\\mathcal{L}_{D}$, and the decay widths of\n $D_{0}^{\\ast}$ and $D_{1}$ are identical. 
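For orientation, each of these single-pion rates can be written in the standard\n two-body decay form (a generic kinematic formula, independent of the present model),\n\\begin{equation}\n\t\\Gamma = \\frac{|\\mathbf{p}_{\\pi}|}{8 \\pi M^{2}}\\,\n\t\\overline{|\\mathcal{M}|^{2}},\n\\end{equation}\nwhere $M$ is the mass of the decaying meson, $|\\mathbf{p}_{\\pi}|$ is the pion\n momentum in its rest frame, and $\\overline{|\\mathcal{M}|^{2}}$ is the\n spin-averaged squared amplitude following from $\\mathcal{L}_{ND}$ and\n $\\mathcal{L}_{D}$.\n 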
Using the measured\n value of $\\Gamma [D^{*+} \\rightarrow D^{0} + \\pi ^{+}]\n =65 \\ \\mathrm{keV}$\\cite{PDG2004}\n and $g_{ND}=1.84$ from $\\Delta M_{\\chi}(c \\bar n)\n = 241 \\ \\mathrm{MeV}$,\n the coupling $g_{D}$ is fixed to 3.96 GeV$^{-1}$\n (corresponding to $g_{A}=0.521$), and then we obtain\n\\begin{equation}\n\t\\Gamma [D_{0}^{*}(2110) \\rightarrow D + \\pi]\n\t=\\Gamma [D_{1}(2250) \\rightarrow D^{*} + \\pi]\n\t\\approx 30 \\ \\mathrm{MeV}.\n\\end{equation}\nThis value is consistent with the speculated widths of\n the $D_{0}^{\\ast}(2110)$ and $D_{1}(2250)$.\n\n\n\\section{CONCLUDING REMARKS}\n\nWe have presented the possible assignments for some of\n the observed mesons in the covariant\n $\\widetilde{U}(12)_{SF}$-classification scheme.\n It is necessary and important to examine the strong- and\n radiative-decay\\cite{Maeda} properties of the assigned states\n in order to establish their assignments.\n On the basis of these assignments we have also predicted\n the existence of the low-mass\n $(0^{+},1^{+})$ $c \\bar n$ mesons with narrow widths,\n which might have been seen in the recently published data\n on the $D \\pi$ and $D^{*} \\pi$ mass distributions\n from the Belle and BABAR Collaborations.\n\n\n\\bibliographystyle{aipprocl}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nIn \n\\cite{Feynman1} \nR. Feynman writes: \"I have pointed out these things because the more you see how strangely Nature behaves, the harder it is to make a model that explains how even the simplest phenomena actually work. So theoretical physics has given up on that.\"\n\n\"A model that explains how phenomena actually work\" is referred to as a \"functional model\" in this paper. \nA functional model of a given function explicitly describes a sequence of steps for the development of the function's final state. 
A more detailed explanation of what in this paper is understood as a \"functional model\" is provided in \\cite{diel2}.\n\nThe need for a functional model of QT was established when the author tried to develop a comprehensive computer model of QT.\nHe discovered that certain computability problems associated with QT\/QFT (see \\cite{diel1}) can be overcome with a functional model of QT\/QFT. A second motivation for a functional model, in particular of QT interactions, was the author's search for a solution to the QT measurement problem, one that did not require modifications of the Schr\\\"odinger equation (or similar equations of motion of QFT) or one that did not rely on assumptions outside the scope of falsifiable physics. In \\cite{diel4}, a model of a QT measurement is described where measurements are explained in terms of QFT interactions and their functional descriptions.\n\nQuantum field theory (QFT) is the area of QT that addresses interactions among particles. The founders of QFT, including R. Feynman, recognized the need for a process-based theory of QT interactions. Feynman himself used the word \"process\" extensively when describing his quantum electrodynamics (QED).\\footnote{R. Feynman published a QED-related book with the title \"The Theory of Fundamental Processes\" \\cite{Feynman2}.} Feynman diagrams, the most fundamental tool in QFT, give the impression of showing a temporal structure, i.e., a process.\nHowever, upon closer inspection, it turns out that the process orientation of QFT is insufficient and that QFT\/QED must still be considered a declarative (i.e., non-functional) description. The term \"process\" may be applicable when viewing a QFT interaction as a whole, but the theory does not provide any temporal substructures, including intermediate states. \nFeynman diagrams must not be viewed as showing a temporal structure. 
The fact that multiple diagrams may have to be applied for a (single) scattering process disturbs any temporal interpretation. Additionally, possible loops in Feynman diagrams do not represent temporal loops. \n\n\nThe description of (physical) processes requires a certain specification language. Only very simple dynamic processes can be specified purely in terms of mathematical equations such as differential equations. Unfortunately, there does not exist a generally agreed process (i.e., functional) specification language that is as powerful and compact as the language of mathematical equations used in physics for the description of static relationships (including derivatives). The method of description used in this paper is a mixture of plain English, some semi-formal algorithmic specification language, and the mathematical equations of QFT.\n\nThe model is based on a cellular automaton (CA). A CA implies a special structure for the overall dynamic evolution and for space-time. The most important implication is the discreteness with respect to time and space.\n\nThis functional model of interactions in QT is embedded in the overall functional description of QT described in \n\\cite{diel2}.\nThe key features of this overall functional description of QT are listed in Section 2. A functional description of a system describes the dynamical evolution of this system in terms of state changes. Therefore, in Section 3, the structure and components of the system state of the functional model are first described before the steps in interaction processes are described in Sections 5 and 6.\nThere are a number of important QT concepts, such as measurement, entanglement, and decoherence, that are related to interactions. These subjects are also addressed in the present paper in Section 7.\n\nIn this paper, interactions are assumed to always occur between two \"in\" particles and to result in two \"out\" particles (not necessarily identical to the \"in\" particles). 
Interactions between more than two particles are assumed to be translatable into a series of interactions between two particles. Special cases, for example, those in which QFT allows for more than two \"out\" particles, will not be addressed. \n\n\n\n\\section {Key Features of the Overall Functional Description of QT}\n\nThe functional description of interactions in QT, described in this paper, is embedded in the overall QT functional description \n\\cite{diel2}.\nThe following features are of primary importance for the overall functional description of QT. \n\n\\subsection{Discreteness of QT attributes}\n\nIn \n\\cite{tHooft} \nG. 't Hooft writes \"Often, authors forget to mention the first, very important, step in this logical procedure: replace the classical procedure one wishes to quantize by a strictly finite theory. Assuming that physical structures smaller than a certain size will not be important for our considerations, we replace the continuum of three-dimensional space by a discrete but dense lattice of points.\" \n\nThe functional description of QT assumes discrete and coarse-grained attributes \nnot only for three-dimensional space, but for most other entities where standard QFT assumes differentiable attributes.\nThis applies to the spatial extension of particles\/waves and to their momentum. Also, the wave function is structured into a\ndiscrete set of alternative paths.\n\nOf course, the graining has to be kept fine enough to prevent significant deviations from the predictions obtained\nwith standard QFT.\n\n\\subsection{The Transition from Possibilities to Facts - Handling of Non-Determinism}\n\nThe functional description\/model of QT interactions must demonstrate the evolution of the wave function to generate probability amplitudes in accordance with the predictions of QT\/QFT. 
However, it does not end with the determination of probability amplitudes but includes\na model for the realization of the predictions represented by the probability amplitude. This process step is called\n\"the transition from possibilities to facts\". With standard QT, the transition from possibilities to facts is a non-deterministic process step that occurs exclusively with measurements. \n\nOne of the key features of the functional description of QT is that the transition from possibilities to facts is not exclusively tied to measurements. With most interpretations of QT, the measurement process implies a \"collapse of the wave function\".\nThere is an ongoing debate among QT physicists about whether a collapse of the wave function has to be assumed. \nThe functional description assumes that measurements always imply interactions, more specifically, interactions that lead to a collapse of the wave function. The statement that the transition from possibilities to facts is not exclusively tied to measurements, first of all, means that the collapse of the wave function is not exclusively observed with measurement interactions. The collapse of the wave function also occurs with other (\"normal\") interactions.\n\n\\subsection{Particle Fluctuations and Interaction Channels Instead of Virtual Particles}\n\nIn the perturbation (Feynman) approach, virtual particles are an essential concept for describing interactions among\nparticles. \nThe functional description reinterprets the role of virtual particles; instead of the original QFT virtual particles, the functional description assumes \"particle fluctuations\" and \"interaction channels\". Particle fluctuations initiate the interaction if the fluctuation affects multiple (i.e., at least two) particles. 
These particle fluctuations are assumed to actually occur (with a certain probability), whereas virtual particles are constructs that affect only the probability amplitudes.\nInteraction channels (like those mediated by virtual particles) guide the possible flow of particle transitions during an interaction (see Section 6.2).\n\n\\subsection{Splitting of a Wave Function \\emph{Collection} into Multiple Paths}\n\nThe splitting of a wave function into multiple paths is a constituent part of the perturbation approach to QFT (see \\cite{Feynman1}). \nThe overall effect of the wave function progression is then determined by the superposition (via path integrals) of the multiple paths.\nWith the QT functional description, the splitting into multiple paths is applied to \\textbf{collections} of particles \nwhich exit an interaction.\nThis allows for the modeling of entanglement (see Section 7.6).\n\n\\section{The System State} \n\nBecause the QT functional model does not distinguish between a particle and the (associated) wave, the term \"particle\/wave\" will be used in the following.\n\\footnote{In the literature on QT some authors used the name \"wavicle\" for what here is called \"particle\/wave\".}\n\nThe description of the evolution of a quantum system, such as a collection of\nparticles\/waves, must be related to the information that makes up this quantum\nsystem. \nFor the functional model it is useful to arrange the information in a certain structure, primarily derived from the differing variability and lifetime of the entities. \nThe information that represents the quantum system for the functional\nmodel must encompass all of the entities known from QT\/QFT, such as state\nvectors, wave functions, masses, charges, etc. \nIn addition, the functional model must include objects and state components which are suited for the description of intermediate states. 
\n\nFor the functional model, the totality of information constituting a quantum\nsystem consists of a set of q-objects plus a set of fields.\n\\small{\n\\begin{verbatim}\nQT-system := \n q-object-set,\n field-set;\n\\end{verbatim}\n}\n\\normalsize\nFields (i.e., \"field-set\") are not further addressed in the present paper.\n\n\\subsection{The Quantum Object, q-object} \n\nThe most general entity for the description of a quantum system is the q-object.\nA q-object is an aggregate object that can be described by a common wave function and not just the product of the wave functions of the elements of the q-object. A particle\/wave may occur as a separate q-object, or may be part of a q-object.\n\\\\\nFor example, the wave function \n\n(1) $ \\psi = 1\/ \\sqrt{2}\\; ( | \\; pw1.up, pw2.down > + \\; | \\; pw1.down, pw2.up) > $ \n\\\\\nrefers to a q-object $ \\psi $ with elements (e.g., entangled particle\/waves) pw1 and pw2.\n\nA q-object may be viewed as having a two-dimensional structure. One dimension represents the elements of the q-object (with the above example, pw1 and pw2); the other dimension represents alternatives that may be selected during the evolution of the q-object, for example, by an interaction. In this paper, these alternatives are called \"paths\". Each path has associated a probability amplitude.\n\n\\small{\n\\begin{verbatim}\nq-object := \n path[1],\n ...\n path[NPATH];\n\\end{verbatim}\n}\n\\normalsize\n\\begin{table}\n\\caption{\\label{label}Structure of a q-object consisting of two particle\/waves pw1 and pw2.}\n\\begin{tabular} { | c | c | c | c | }\n\\hline\npaths & pw1-state & pw2-state & amplitude \\\\\n\n\\hline\n\npath-1\t & pw1-state$_{1} $ & pw2-state$_{1} $ & ampl-1 \\\\\n\npath-2\t & pw1-state$_{2} $ & pw2-state$_{2} $ & ampl-2 \\\\\n\n...\t & ... & ... & ... 
\\\\\n\npath-N\t & pw1-state$_{N} $ & pw2-state$_{N} $ & ampl-N \\\\\n\\hline\n\\end{tabular}\n\\end{table} \n\\small{\n\\begin{verbatim}\npath := \n state-element[1], ...,state-element[n], amplitude;\n\\end{verbatim}\n}\n\\normalsize\nIn the above example, two paths, path[1] := ( pw1.up, pw2.down ) and \npath[2] := ( pw1.down, pw2.up ), both with amplitude $ 1\/ \\sqrt{2} $, may express the state of a q-object with a specific system evolution. \n\nThe wave function for $ \\psi $, eq. (1), specifies a continuous set of possible measurement results for $ \\psi $. In contrast, the q-object of the functional model specifies a discrete set of paths that may be selected with interactions and measurements.\n \nDifferent types of q-objects, containing different types of elements, can be distinguished.\nThe simplest type of q-object is the (single) particle\/wave. In this paper, in addition to the particle\/wave, two other q-objects, the particle\/wave-collection (pw-collection) and the interaction-object, are of particular importance. A pw-collection represents a collection of particles\/waves that are entangled. Table 1 shows the structure of a pw-collection.\nThe interaction object is an interaction-internal object created at the beginning of an interaction. At the end of the interaction, the interaction object is transformed into a pw-collection.\n\nIn addition to the q-objects addressed in this paper, there are further entities in QT, such as bound systems, which may fall under the concept of q-objects. 
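The two-dimensional path structure of Table 1 can be sketched as a small data structure. The following Python fragment is purely illustrative (the class and field names are choices of this sketch, not part of QT\/QFT); it encodes the two-path example of eq. (1) and checks that the squared path amplitudes add up to one:

```python
from dataclasses import dataclass
from math import sqrt

# Illustrative sketch of the q-object structure of Table 1: each path holds
# one state per element of the q-object plus a probability amplitude.
@dataclass
class Path:
    element_states: tuple  # one state label per particle/wave, e.g. ("up", "down")
    amplitude: complex

@dataclass
class QObject:
    paths: list

# The two-path example psi = 1/sqrt(2) (|pw1.up, pw2.down> + |pw1.down, pw2.up>)
psi = QObject(paths=[
    Path(("up", "down"), 1 / sqrt(2)),
    Path(("down", "up"), 1 / sqrt(2)),
])

# The squared amplitudes of the discrete set of paths sum to 1.
total = sum(abs(p.amplitude) ** 2 for p in psi.paths)
print(round(total, 10))  # → 1.0
```

The discrete path list is what an interaction later selects from; the continuous measurement alternatives of the wave function are not represented here.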
This functional model assumes that q-objects are created from interactions and dissipate (i.e., collapse) when they get involved in new interactions.\n\\footnote{As a consequence, the idea of the whole universe constituting a single big q-object (with a common probability amplitude) is not supported by the functional model.}\n\nThe state of an element of a q-object (denoted state-element[] above) consists of the state-components known from QFT.\nFor QED, these are, first of all, the parameters used in QFT to specify a matrix element of the scattering matrix\n$ \\Psi_{p1,\\sigma1,n1;p2,\\sigma2,n2, ...} $, i.e., the four-momenta $ p^{\\mu}$, the spin z-component (or for massless particles, helicity) $ \\sigma $, and the particle type n. The Lagrangian, Hamiltonian, and the equations of motion for the typical fields of QFT contain the time derivative $ \\partial\\phi \/ \\partial t $ of the state $ \\phi $. For the functional model, this time derivative therefore has to be included as part of the state. \nIn addition, the position vector x is part of the state.\n\n\n\\section{The Cellular Automaton}\n\nTo describe the dynamical evolution of the states of a system, some language or description method is required. \nThe method chosen in this paper for the specification of the dynamical evolution of \na QT system is a cellular automaton (CA).\n\n\\subsection{Standard Cellular Automaton}\nThe standard CA consists of a k-dimensional grid of cells. The state of the CA is given by the totality of the states of the individual cells.\n\n $ S_{CA} = \\{ s_{1}, ... , s_{n} \\} $\n\nWith traditional standard CAs, the cell states uniformly consist of the same state components\n\n $ s_{i} = \\{ s^{1}_{i}, ... 
, s^{j}_{i} \\} $\n\\\\\nTypically, the number of state components, j, is 1, and the possible values are restricted to integer numbers.\n\nThe dynamical evolution of the CA is given by the (single general) \"update-function\" which computes the new \nstate of a cell as a function of its current state and the states of the neighbor cells.\n\\small{\n\\begin{verbatim}\nStandard-CellularAutomaton(initial-state) := \/\/ transition function\nDO FOREVER {\n state = update-function(state, timestep);\n IF ( termination-state) STOP;\n}\n\\end{verbatim}\n}\n\\normalsize\nThe full complexity (if any) of a particular cellular automaton is concentrated in the update-function. As Wolfram (see \\cite{Wolfram}) and others (see, for example, \\cite{Ilachinski}) showed, a large variety of process types (stable, chaotic, pseudo-random, oscillating) can be achieved with relatively simple update-functions.\n\nFor specific applications of the cellular automaton, the update-function may be derived from application-specific specifications.\nIn \\cite{Elze1}, Elze describes a cellular automaton whose update-function is derived from the Hamiltonian (or the equation of motion). \n\n\\subsection{QFTCA, a Cellular Automaton Supporting QFT }\n\nThe CAs described in \\cite{Elze1}, \\cite{Elze2}, and \\cite{Hooft} may be viewed as the starting points for the CA described in the present paper. \nTo support QFT (at least to the extent required to demonstrate a model of interactions), a cellular automaton QFTCA is defined; QFTCA, however, requires certain extensions of the standard cellular automaton and embedding of the cellular automaton in an overall QFT-based structure. 
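As a concrete illustration of such a standard CA (not yet of QFTCA), the following minimal sketch runs Wolfram's elementary rule-110 automaton: each cell carries a single integer state component, and the update-function depends only on the cell and its two neighbors on a periodic one-dimensional grid.

```python
# Minimal standard-CA sketch: Wolfram's elementary Rule 110, a classic example
# of complex behavior arising from a very simple update-function.
RULE = 110

def update_function(state):
    # The new cell value depends on the cell and its two neighbors
    # (periodic grid); the neighborhood pattern indexes a bit of RULE.
    n = len(state)
    return [
        (RULE >> (state[(i - 1) % n] * 4 + state[i] * 2 + state[(i + 1) % n])) & 1
        for i in range(n)
    ]

state = [0] * 15 + [1]      # initial state: a single live cell
for _ in range(8):          # a few global time steps
    state = update_function(state)
print(state)
```

The QFTCA below keeps this overall transition loop but replaces the uniform integer cell states and the single update-function by q-object-dependent state and update functions.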
The embedding into the QFT-based structure affects two aspects: the structure of the state of the system and the functions for the dynamic evolution of the system.\n\n\\subsubsection{State structure}\n\nThe CA-cells may be viewed as representing the space.\nThe cells have associated with them (i.e., contain) parts of the system state.\nFor the mapping of the overall system state, as described in Section 3, to QFTCA, some \"design decisions\" have to be made.\nThe following lists the major characteristics (without supporting arguments):\n\\begin{itemize}\n\\item Time is \\emph{not} a (fourth) dimension for QFTCA. Instead, the time derivatives $ \\partial\\psi \/ \\partial t $ and $ p^{\\mu}$ are explicit parts of the system state.\n\\item Differing from the standard CA (mentioned in Section 4.1), the states of QFTCA cannot be expressed solely by the collection of cell states. Those parts of the q-object state that are position dependent can be assigned to cell states. Other parts have to be kept besides the cell states. For the functional model described in this paper, the assignment of state components to CA cells, as opposed to state components that are kept with the (paths of the) q-objects, is left open. We need to ensure that the cells (i.e., space points) belonging to a q-object can be determined and, vice versa, that the q-objects occupying a CA cell can be determined.\n\\item A q-object path may cover multiple cells.\n\\item A cell may be covered by multiple q-object paths.\n\\item The above-described state structure and timing considerations result in differing content of CA cells and differing CA update functions. 
For the proposed model of the QT measurement process, this could be implemented either (a) by a single relatively complex QFTCA, or (b) by a QFTCA consisting of a collection of CAs that will partly merge whenever interactions are performed.\nIn the present paper, the decision of (a) or (b) and the details of (a) and (b) are left open.\n\\end{itemize}\n\n\\subsubsection{Evolution of the system state} \nThe structuring of the overall system state into the CA-cells and the superimposed q-objects was also motivated by the differing update requirements of the QT-objects.\n\\small{\n\\begin{verbatim}\nQFTCA (initial-state) := \/\/ transition function\nDO FOREVER {\n state = global-update-function(state, timestep);\n IF ( termination-state) STOP;\n}\n\nglobal-update-function(state, timestep) := \nDO PARALLEL {\n field-state = field-update-function(field-state, timestep);\n FOR ( all qobjects qobj[i] ) {\n propertimestep = fx(qobj[i]) * timestep;\n FOR ( all particle\/waves pw[k] of qobj[i]) {\n pw[k] = pw-update-function(pw[k], propertimestep);\n IF ( interaction-occurred( pw[k], pw2 ) )\n perform-interaction( pw[k], pw2 );\t\n }\n }\n}\n\\end{verbatim}\n}\n\\normalsize\nQFT is a relativistic theory. Special relativity distinguishes proper time (or wrist-watch time) of the inertial system and global time of the overall space-time system.\nFor the evolution of the overall system (state), the functional model assumes that the QFTCA proceeds in uniform global time steps. The update-functions for the individual q-objects, however, have to proceed in the proper time associated with the \nq-objects.\n\nAs a simplified description, \"propertimestep = fx(qobj[i]) * timestep;\" indicates the transition from global time to proper time as required by special relativity. 
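A minimal sketch of the proper-time step indicated by "propertimestep = fx(qobj[i]) * timestep;" is given below. It assumes, as one plausible reading not spelled out above, that fx corresponds to the inverse Lorentz factor, so that a q-object moving at speed v accumulates proper time dtau = dt * sqrt(1 - v^2/c^2):

```python
from math import sqrt

C = 1.0  # natural units: speed of light = 1

def fx(speed):
    # Hypothetical realization of the factor fx(qobj[i]) in the pseudocode:
    # a q-object moving at the given speed accumulates proper time more
    # slowly than global time, dtau = dt / gamma = dt * sqrt(1 - v^2/c^2).
    return sqrt(1.0 - (speed / C) ** 2)

timestep = 1.0                       # one uniform global QFTCA time step
propertimestep = fx(0.8) * timestep  # q-object moving at 0.8 c
print(round(propertimestep, 3))      # → 0.6
```

A q-object at rest (fx = 1) is updated once per global step, while a fast-moving q-object receives correspondingly smaller proper-time increments.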
A more detailed and more rigorous description is outside the scope of this paper.\n\nThe update functions for fields (field-update-function()) and for the normal particle propagation (pw-update-function()) are straightforward and not topics discussed in this paper. The topics discussed are the update functions related to interactions (interaction-occurred() and perform-interaction()). These are addressed in Sections 5 and 6.\n\n\\section{Overall Model of Interactions between Particles\/Waves}\n\nTwo types of interaction between particles\/waves are distinguished: (1) interactions\nthat destroy the superposition among possible multiple paths of a wave function, resulting in a collapse of the wave function, and (2) interactions that only affect the attributes (e.g., momentum) of the involved particles\/waves. For this description of the functional model, interactions resulting in a collapse of the wave function are considered to be the general case. Interactions that do not result in a collapse are exceptions. They are called volatile interactions and will be addressed where appropriate and explicitly in Section 7.2.\n\nWith QFT (see \\cite{Weinberg}) an interaction (e.g., scattering) is described by the scattering matrix (S-matrix), which assigns a probability amplitude $ S_{\\beta \\alpha}$ to the transition of a given \"in\" state $ \\Psi_{\\alpha} $ to an \"out\" state $ \\Psi_{\\beta} $.\n\\\\\n $ S_{\\beta \\alpha} = ( \\Psi_{\\beta}, \\Psi_{\\alpha} ) $\n\\\\\nThe \"in\" and \"out\" states are specified by their state components (see Section 3):\n $ \\Psi_{\\alpha} = \\Psi_{p1,\\sigma1,n1; p2,\\sigma2,n2, ...}; \\Psi_{\\beta} = \\Psi_{p1',\\sigma1',n1'; p2',\\sigma2',n2', ...} $\n\\\\\nThe functional model of QFT interactions has to provide a model for the process that transforms an \"in\" state\n$ \\Psi_{\\alpha} = \\Psi_{p1,\\sigma1,n1; p2,\\sigma2,n2, ...}$ into a multitude of possible \"out\" states $ { \\Psi_{\\beta 1}, \n\\Psi_{\\beta 2}, ... 
}$ .\nThis multitude of possible \"out\" states constitutes a pw-collection as presented in Table 1.\n\nThe overall model of interactions between particles\/waves is based on the following assumptions:\n\\begin{itemize}\n\\item Interactions are process steps that actually occur (with a certain probability) rather than wave function alternatives which are in superposition with \"no interaction occurrence\".\n\\item An interaction always occurs at a definite (discrete) point in space-time.\n\\\\\n(The QFT model of interactions in coordinate space, where the possible results of an interaction are computed by assuming superpositions among all possible interaction positions, is not supported by the functional model.) \n\\item Only those q-object paths that cover the interaction position affect the outcome of the interaction. If multiple paths of a q-object cover the interaction position, only one of them is \nselected as the interacting path.\n\\item The non-selected paths are discarded.\n\\item The selection of the interaction position and of the significant path represents (a first step in) the transition of probabilities to facts.\n\\item The discarding of the non-selected paths can be viewed as \"the collapse of the wave function\".\n\\end{itemize}\n\\begin{figure}[ht]\n\\center{\\includegraphics*[scale=0.5] {figure1a.JPG} }\n\\caption{Information flow with the interaction between two particle\/waves}\n\\end{figure}\nFigure 1 shows the overall flow of information with the interaction of two particle\/waves pw1 and pw2. The figure contains four q-objects: pw1, pw2 (the figure does not assume pw1 and pw2 belonging to the same q-object), the interaction-object, and the \"out\" pw-collection containing the \"out\" particle\/waves.\nAt the beginning of the interaction the information contained in pw1 and pw2 is merged into the interaction object. 
The \ninteraction object performs a sophisticated process, including the formation, union, and splitting of \"ia-channels\", which finally results in the \"out\" pw-collection.\n\n\\subsection{Occurrence of an interaction}\n\nAlthough QFT provides precise rules for the computation of probabilities (amplitudes) for the occurrence of specific interaction results as a function of the \"in\" states of the involved particles\/waves, it does not provide further details on the circumstances that must hold for an interaction to occur. In contrast, the functional model of QT interactions has to include a process model for the occurrence of an interaction that supports the basic assumptions listed above. The author claims that measurements imply interactions, and therefore interactions occur with probabilities equal to the probabilities of the corresponding measurement results.\nThe model chosen by the author for the occurrence of interactions is the \"particle\/wave fluctuation\" (pw-fluctuation).\nA pw-fluctuation can be thought of as a temporary concentration and amplification of one or several particles\/waves at a certain point in space. The following assumptions are essential in considering pw-fluctuations and their role in the functional model of QT interactions:\n\\begin{itemize}\n\\item With the functional model, each interaction is preceded by a pw-fluctuation. \n\\item Only one pw-fluctuation can be active at a given point in time for a particle\/wave.\n\\item The position where the pw-fluctuation occurs can be anywhere within the space occupied by the involved\nparticles\/waves. The position is determined\nrandomly as a function of the amplitudes of the involved particles\/waves and of the fields involved.\n\\end{itemize}\nThe immediate effect of a pw-fluctuation is the temporary formation of an entity called an interaction-object (see section 6.1). 
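The random determination of the fluctuation position described in the last assumption can be sketched as sampling from the squared amplitudes over the cells occupied by the particle\/wave. All names below are illustrative choices of this sketch, not part of the model:

```python
import random

random.seed(0)  # fixed seed only to make the illustration reproducible

def select_fluctuation_position(cells, amplitudes):
    # Hypothetical sketch: the fluctuation (and hence interaction) position
    # is drawn randomly from the occupied cells, weighted by the squared
    # amplitude at each cell.
    weights = [abs(a) ** 2 for a in amplitudes]
    return random.choices(cells, weights=weights, k=1)[0]

cells = ["x1", "x2", "x3"]
amplitudes = [0.1, 0.7, 0.2]  # the wave is concentrated around x2
counts = {c: 0 for c in cells}
for _ in range(10_000):
    counts[select_fluctuation_position(cells, amplitudes)] += 1
print(counts)  # x2 is selected most often
```

Field contributions, mentioned in the assumption above, would enter the weights as well; they are omitted here.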
\nWhen the temporary interaction object disappears again, the original particles\/waves may persist (or\nmay be reinstalled), or a different set of particles\/waves may appear. Accordingly, the\nlong-term effect of a pw-fluctuation can be one of the following:\n\\begin{enumerate}\n\\item nothing durable (this may be the case with the majority of pw-fluctuations),\n\\item an interaction \\emph{with} a collapse of the wave function (the major case considered in this paper),\n\\item an interaction \\emph{without} a collapse of the wave function (only roughly addressed in section 7.2),\n\\item a particle decay (not further addressed in this paper).\n\\end{enumerate} \n\n\\subsection{Results of an interaction}\nThis functional model of QT interactions is based on QFT. With QFT, the possible results of an interaction are provided by the S-matrix. It includes probability amplitudes for possible results for a given \"in\" state. The functional model has to deliver the collection of the complete variety of possible \"out\" states and their probability amplitudes in the form of the \"out\" pw-collection.\n\n\\section{Functional Steps in an Interaction Process}\n\nThe functional description must show process steps for the dynamic evolution of an interaction. The process steps have to produce results that are compatible with the results predicted by standard QT\/QFT in the form of the S-matrix.\n\nFirst thoughts on the mapping of QFT to a functional model may start by looking at the operators and operator equations of QFT. 
For example, in \\cite{Mandl} the \"\\emph{processes}\" that contribute in QED to the determination of the S-matrix are expressed as\n\n$ H_{W}(x) = -eN \\{ ( \\bar{\\psi^{+} } + \\bar{\\psi^{-} }) ( \\not A^{+} + \\not A^{-}) ( \\psi^{+} + \\psi^{-}) \\}_{x} $\n\\\\\nwhere $ \\bar{ \\psi^{+}}, \\bar{ \\psi^{-}}, \\not A^{+}, \\not A^{-}, \\psi^{+}, \\psi^{-} $ are creation and annihilation operators.\n\nThe further QFT treatment of interactions leads to Feynman diagrams. Although Feynman diagrams already seem to contain certain process-oriented aspects, the QFT operators and Feynman diagrams are not directly usable as a basis for a functional (i.e., process-based) description. \n\nInstead of the QFT creation and annihilation operators, the functional model contains split() and combine() operators. Similar to the creation and annihilation operators, the split() and combine() operators have to appear in pairs.\nInstead of the Feynman diagrams, the functional model contains \"interaction channels\". \nOf course, it has to be ensured that the functional model results in QFT-compatible predictions.\n\nThe description of the overall interaction process is subdivided into three process steps:\n\\small{\n\\begin{verbatim}\nperform-interaction ::= {\n Step1: Formation of interaction-object;\n Step2: Formation and processing of ia-channels;\n Step3: Generation of \"out\" particle\/wave collection;\n}\n\\end{verbatim}\n}\n\\normalsize\n\n\n\\subsubsection{Example: Bhabha Scattering}\nQFT provides rules and equations for the computation of scattering matrix amplitudes. The equations are derived from the pertinent Feynman diagrams. 
For Bhabha scattering the equations are\n\n(1) $ M_{A} = (-ie)^{2} \\bar{v}(\\vec{p}_{2}, s_{2} ) \\gamma_{\\mu} u(\\vec{p}_{1}, s_{1} ) (-ig^{\\mu\\nu}\/(p_{1} + p_{2})^{2})\n \\bar{u}(\\vec{p}'_{1}, s'_{1} ) \\gamma_{\\nu} v(\\vec{p}'_{2}, s'_{2} ) $.\n\\\\and\n\n (2) $ M_{B} = (-ie)^{2} \\bar{u}(\\vec{p}'_{1}, s'_{1} ) \\gamma_{\\mu} u(\\vec{p}_{1}, s_{1} ) ( -ig^{\\mu\\nu}\/(p_{1}-p'_{1})^{2})\n \\bar{v}(\\vec{p}_{2}, s_{2} ) \\gamma_{\\nu} v(\\vec{p}'_{2}, s'_{2} ) $\n \\\\\nAccording to the usual QFT notation $ u() $ represents the \"in\" electron, $ \\bar{u}()$ the \"out\" electron, $ \\bar{v}$ the \"in\" positron, and $ v() $ the \"out\" positron. $ M_{A} $ and $ M_{B} $ are the probability amplitudes. The total probability amplitude M for Bhabha scattering (first order perturbation) is $ M = M_{A} - M_{B} $.\n\nThe \"in\" electron $ u() $ and the positron $ \\bar{v}() $ provide the initial state of pw1 and pw2. Differing from standard QFT computations, no specific value can be assumed for the states of $ \\bar{u}()$ and $ v() $ when an interaction starts within the functional model. Therefore, the result of an interaction within the functional model will not be a single probability amplitude M, but a set of probability amplitudes embraced in the \"out\" pw-collection. \n\n\\subsection{Formation of interaction object}\n\nAt the beginning of an actual interaction, the information from the interacting particles\/waves is combined into the interaction object. At the end of the interaction, the interaction object is replaced by a new particle\/wave collection\nrepresenting the \"out\" particles\/waves. 
The interaction object is a special type of q-object.\nWhen the interaction object is initialized, it contains only a single path, which holds the information from the selected paths of the two \"in\" particles\/waves.\n\\small{\n\\begin{verbatim}\npw-ia-object := path[1];\n\npath :=\n pw1.selectedPath.attributes, pw2.selectedPath.attributes, amplitude;\n\\end{verbatim}\n}\n\\normalsize\nTo support the transition from the \"in\"\nparticles\/waves to the \"out\" particle\/wave collection, the interaction object has\na very dynamic structure.\nDuring the interaction process, the interaction object will be extended by ia-channels which reflect \nintermediate states.\n\\small{\n\\begin{verbatim}\npw-ia-object := \n path[1] := ia-channel[1],\n ...\n path[k] := ia-channel[k];\n\\end{verbatim}\n}\n\\normalsize\n\nThe functional model assumes that interaction objects, similarly to virtual\nparticles, have a limited life-time before they decay into the particle\/wave collections that\nare the result of the interaction. \n\n\\subsubsection{Example: Bhabha Scattering}\nFor the formation of the interaction object, the mapping of QFT to the functional model is still rather trivial. $u(\\vec{p}_{1}, s_{1} ) $ is mapped to pw1.selectedPath.attributes; $ \\bar{v}(\\vec{p}_{2}, s_{2} ) $ is mapped to pw2.selectedPath.attributes. Differing from standard QFT, however, the functional model assumes that only single paths are selected as input for the further processing of the interaction. \n\n\\subsection{Formation and processing of ia-channels}\n\nThe processing of interactions proceeds with the formation of ia-channels. Like the paths of a pw-collection (see section 3), an ia-channel represents one or more alternatives for the evolution of the \"in\" particles\/waves toward the\ninteraction result. Each ia-channel starts with both \"in\" particles\/waves and\nends with \"out\" particles\/waves. 
Alternative ia-channels may differ in the set\nof \"out\" particles\/waves and\/or in the sub-channels between the \"in\" and \"out\"\nparticles\/waves.\n\nA specific ia-channel is formed by the combination of the two operators\nsplit() and combine().\n\n split(a) $ \\rightarrow $ (b,c) means that split(a) results in b and c;\n\ncombine(a,b) $ \\rightarrow $ (c) means that combine(a,b) results in c.\n\\\\\nFor example, starting with two interacting particles\/waves pw1 and pw2, the ia-channel \n\\\\\n (pw1,pw2): split(pw1) $ \\rightarrow $ (a,b); combine( a, pw2 ) $ \\rightarrow $ (c) \n\\\\ would result in \"out\" particles\/waves b and c.\n\nThe split() and combine() operators are analogous (although not equal) to the creation and annihilation operators of QFT. \n\nDerived from QFT, the following rules are established for the functional model of interactions:\n\\begin{itemize}\n\\item Rule1: An interaction always starts with two \"in\" particles\/waves and ends with two \"out\" particles\/waves;\n\\item Rule2: An ia-channel always contains one combine() and one split() (in arbitrary sequence).\n\\footnote{In QFT deviations from these rules can be found which, however, will not be addressed here.} \n\\end{itemize}\nGiven the above rules, there are five possible ia-channels which can be constructed from the two interacting particles\/waves pw1 and pw2:\n\\begin{enumerate}\n\\item combine( pw1, pw2 ) $ \\rightarrow $ (a); split(a) $ \\rightarrow $ (b,c)\n\\item split(pw1) $ \\rightarrow $ (a,b); combine( a, pw2 ) $ \\rightarrow $ (c)\n\\item split(pw1) $ \\rightarrow $ (a,b); combine( b, pw2 ) $ \\rightarrow $ (c)\n\\item split(pw2) $ \\rightarrow $ (a,b); combine( pw1, a ) $ \\rightarrow $ (c)\n\\item split(pw2) $ \\rightarrow $ (a,b); combine( pw1, b ) $ \\rightarrow $ (c)\n\\end{enumerate}\nAll these ia-channels end with two particles\/waves (a,c) or (b,c). 
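The five ia-channels listed above can be generated and checked mechanically. The sketch below uses an illustrative symbolic encoding (not part of the model), representing each operator as (name, inputs, outputs), and verifies Rule2 as well as the two-"out"-particle property:

```python
# Symbolic encoding of the five ia-channels; pw1 and pw2 are the "in" pw's,
# a, b, c are intermediate or "out" particles/waves.
channels = [
    [("combine", ("pw1", "pw2"), ("a",)), ("split", ("a",), ("b", "c"))],
    [("split", ("pw1",), ("a", "b")), ("combine", ("a", "pw2"), ("c",))],
    [("split", ("pw1",), ("a", "b")), ("combine", ("b", "pw2"), ("c",))],
    [("split", ("pw2",), ("a", "b")), ("combine", ("pw1", "a"), ("c",))],
    [("split", ("pw2",), ("a", "b")), ("combine", ("pw1", "b"), ("c",))],
]

def out_particles(channel):
    # The "out" particles are those produced by some operator of the
    # channel and never consumed by another operator.
    produced = {p for _, _, outs in channel for p in outs}
    consumed = {p for _, ins, _ in channel for p in ins}
    return produced - consumed

for ch in channels:
    ops = sorted(name for name, _, _ in ch)
    assert ops == ["combine", "split"]  # Rule2: exactly one split, one combine
    assert len(out_particles(ch)) == 2  # Rule1: two "out" particles/waves
print([sorted(out_particles(ch)) for ch in channels])
```

Running this confirms that every channel ends with either (a,c) or (b,c), as stated above.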
Depending on\nthe type of particles\/waves involved in the interaction, QFT supports only specific\nkinds of splits and combines. The rules that define what combinations of particle\ntypes may be subject to combine(p1, p2) and what the resulting particle types\nof split(p1) and combine(p1, p2 ) can be are equivalent to the rules regarding the\npossible vertices of Feynman diagrams as described in numerous textbooks on\nQFT\n(see, for example, \\cite{Ryder}, \\cite{Griffiths}, \\cite{Mandl}).\nFor QED, for example, one of the rules is\n $ split( \\gamma ) \\rightarrow ( e^{-}, e^{+} ) $; $ combine( e^{-}, \\gamma ) \\rightarrow ( e^{-} ) $.\n\\\\\nSometimes, QFT rules allow for multiple results for split() and combine(). For example, split(photon) may result in (electron, positron), (muon, antimuon),\nor (tauon, antitauon). Finally, with specific \"in\" particle\/wave combinations, it may turn out that some of the possible\n ia-channels may be considered equivalent, and therefore, only one\nof them has to be included. \n\\footnote{The rules regarding when ia-channels may be considered equivalent are defined with QFT (in terms of equivalent Feynman diagrams) and are not further addressed here.}\nGiven the QFT rules, typically only one or two of the above-mentioned five alternative ia-channels are possible for a specific \"in\" particle combination.\n\nThe operator split(a) $\\rightarrow $ (b, c) is a non-bijective function insofar as there\nare many alternatives with respect to the attributes (e.g., momentum and spin)\nof the resulting (b, c). Rather than selecting a particular specific result, QFT\n(and the functional model of QT interactions) requires that the multitude of\npossible results is generated, with differing probability amplitudes assigned. 
\n\\footnote{The functional model of QT interactions assumes a certain granularity with respect to the multitude of possible results.}\nThus, split(a) $ \\rightarrow $ (b, c) results in a two-fold splitting and may be expressed as\n\nsplit(a) $ \\rightarrow ((b_{1}, c_{1}), (b_{2}, c_{2}), ... (b_{n}, c_{n})) $. \n\\\\\nFor the interaction object, this means that a multitude of paths representing $ ((b_{1}, c_{1}), (b_{2}, c_{2}), ... (b_{n}, c_{n})) $ has to be generated. As a consequence, at the end, each ia-channel contains a multitude of paths, because each ia-channel includes a split() operator.\nThe alternative ia-channels that are generated by the varying application of \nthe split() and combine() operators are processed in parallel. The effects\nof processing the split() and combine() operators are reflected in extensions of and\nchanges in the interaction object. \n\nThe rules governing the computation of the amplitudes of a path of the interaction object must be in accordance with QFT. These rules are well known and described in many textbooks\non QFT\n(see, for example, \\cite{Ryder}, \\cite{Griffiths}, \\cite{Mandl}). \nHowever,\nwith QFT the respective rules are defined in terms of (external and internal) lines and vertices of Feynman diagrams. For the functional model, the QFT rules have been mapped to rules regarding the split() and combine() operators. 
\n\\footnote{This mapping is not described in this article.} \n\n\\subsubsection{Example: Bhabha Scattering}\nBhabha scattering refers to the electron-positron scattering process $ ( e^{-}, e^{+} ) \\rightarrow ( e^{-}, e^{+} ) $.\nDerived from the rules of QFT (more specifically, quantum electrodynamics), the following ia-channels are possible:\n\\begin{itemize}\n\\item CA: $ combine1( e^{-}, e^{+} ) \\rightarrow ( \\gamma );$ $ split2( \\gamma ) \\rightarrow ( e^{-}, e^{+} ) $\n\\item CB1: $ split1( e^{-}) \\rightarrow ( e^{-}, \\gamma ); $ $ combine2( \\gamma, e^{+} ) \\rightarrow ( e^{+} )$\n\\item CB2: $ split1( e^{+}) \\rightarrow ( e^{+}, \\gamma ); $ $ combine2( \\gamma, e^{-} ) \\rightarrow ( e^{-} )$\n\\end{itemize}\nCB1 and CB2 can be shown to be equivalent. Therefore, in the following, only CA and CB (=CB1) will be considered. As can easily be demonstrated, CA corresponds to equation (1) for $ M_{A} $ whereas CB corresponds to equation (2) for $ M_{B} $.\n\\\\\nConsequently, CA $ = combine( e^{-}, e^{+} ) \\rightarrow ( \\gamma ); split( \\gamma ) \\rightarrow ( e^{-}, e^{+} ) $ has to be mapped to \\\\\n $ (-ie)^{2} \\bar{v}(\\vec{p}_{2}, s_{2} ) \\gamma_{\\mu} u(\\vec{p}_{1}, s_{1} ) (-ig^{\\mu\\nu}\/(p_{1} + p_{2})^{2})\n \\bar{u}(\\vec{p}'_{1}, s'_{1} ) \\gamma_{\\nu} v(\\vec{p}'_{2}, s'_{2} ) $ and\n\\\\\nCB $ = split( e^{-}) \\rightarrow ( e^{-}, \\gamma ), combine( \\gamma, e^{+} ) \\rightarrow ( e^{+} ) $ has to be mapped to \\\\\n $ (-ie)^{2} \\bar{u}(\\vec{p}'_{1}, s'_{1} ) \\gamma_{\\mu} u(\\vec{p}_{1}, s_{1} ) ( -ig^{\\mu\\nu}\/(p_{1}-p'_{1})^{2})\n \\bar{v}(\\vec{p}_{2}, s_{2} ) \\gamma_{\\nu} v(\\vec{p}'_{2}, s'_{2} ) $.\n\\\\\nThis allows the derivation of the general function logic for the split() and combine() functions. Without going into further details here, the following points are worth mentioning:\n\\begin{itemize}\n\\item Fermion chains known from QFT have to be observed with the computations for split() and combine(). 
\n\\item The details of the functions split() and combine() depend on which of these two functions appears first.\n\n\\end{itemize}\n\n\\subsection{Generation of \"Out\" Particle\/Wave Collection}\n\nThe processing of the ia-channel ends with a certain \"out\" particle\/wave combination. With some types of interactions, different \"out\" particle\/wave combinations may occur. The functional model of QT interactions assumes that from the possibly multiple alternative \"out\" particle\/wave combinations only one will actually leave an interaction. This is a further case of the \"transition from probability to facts\" which is possibly not in agreement with standard QFT. \n\\footnote{The possible deviation from standard QFT, however, will be difficult to test in experiments.}\nThe determination of the \"out\" particle\/wave combination may be performed somewhere between Step 2 (Formation and processing of ia-channels) and the present Step 3 (Generation of \"Out\" Particle\/Wave Collection). In this paper, the function is addressed in Step 3. The detailed mechanism for the selection of the \"out\" particle\/wave combination is beyond the scope of the present paper. Several alternative mechanisms are imaginable.\n\nAfter processing the individual ia-channels (in parallel) and dropping those\nia-channels that do not deliver the selected \"out\" particle\/wave combination,\neach ia-channel contains the same set of paths, albeit with different probability amplitudes for the paths. Therefore, the multiple ia-channels can be (re-)united by the summation of the corresponding amplitudes. According\nto QFT rules (usually formulated in terms of Feynman diagrams), the \"summation\" in some cases has to be performed with a negative sign (i.e., $ amplitude1 - amplitude2 $ instead of $ amplitude1 + amplitude2 $).\n\n\\subsubsection{Example: Bhabha Scattering}\n\nElectron-positron scattering may result in lepton pairs (electron, positron), (muon, antimuon), or (tauon, antitauon). 
For the selection of the \"out\" particle\/wave combination, QFT does not offer any rules besides the equations for the computation of the probability (amplitudes) for the different channels. The selection of the \"out\" particle\/wave combination by the functional model, of course, has to be in accordance with the respective QFT equations.\n\nThe summation (or subtraction) of the probability amplitudes also follows the rules defined by QFT.\n\n\\section{Related topics}\n\n\\subsection{Collapse of the Wave Function} \n\nWhen an \"out\" particle\/wave collection is generated, possibly existing \"in\" particle\/wave collections become obsolete. The \n\"collapse\" of the \"in\" particle\/wave collection consists of two sub-steps:\n\\begin{itemize}\n\\item The particles\/waves that are involved in the interaction (i.e., pw1 and pw2) will be discarded from their \"in\" particle\/wave collections.\n\\item Those particles\/waves that are not involved in the interaction will (of course) survive; however, only the path which caused the interaction is retained.\n\\end{itemize}\nThe fact that this reduction to a single path also affects other particles\/waves that are not (directly) involved in the interaction (but \"entangled\" with the interacting particles\/waves) supports entanglement. \n\n\\subsection{Volatile Interaction - Interactions that Do Not Destroy the Superpositions}\n\n\nNumerous cases are known in QT of\ninteractions between particles\/waves and other matter (particles, atoms, devices, etc.) in which there is no destruction of the superposition before ultimately\na measurement occurs. 
Typical examples are photons being reflected at mirrors\nor photons passing through transparent materials.\nBased on the process steps described in section 6, which imply the collapse\nof the \"in\" pw-collections, the question becomes under what circumstances the mechanism described in section 6.1 will be suppressed such that the\n\"in\" particle\/wave collections will be preserved. The primary example of this case, which can\nhardly be mapped to the process described in section 6, is the interaction\nbetween a particle\/wave and a bound system, if the bound system has to be\ntreated as a single (large) entity. Bound systems are poorly understood within\nQFT (see below). This functional model of QT, which is mainly a mapping of\nQT\/QFT to a process-oriented model, therefore does not yet contain precise criteria for the determination of when the interaction with a bound system will be\nvolatile. Some widely accepted rules seem to be: (1) the higher the energy\/mass\nof the bound system is, the higher the probability of a volatile interaction becomes;\nand (2) the lower the energy\/mass of the scattered particle\/wave (e.g., photon)\nis, the lower the probability of a non-volatile interaction becomes.\n\nThe author expects that the development of a better understanding of bound systems\nmay also provide answers to the question regarding when bound\nsystems must be treated as an entity (resulting in volatile interactions).\n\n\n\\subsection{Interaction with a Bound (State) System}\n\nThis functional model of QT interactions is largely derived from the physics\nof interactions as provided by QFT. Predictions of the behaviour of bound (state)\nsystems (henceforth called bound systems), such as an atom, a nucleus, or a hadron, can be computed using QFT\nfor special situations only, and only with considerable effort. 
QFT includes some\ntheory and considerations regarding bound systems \n( see \\cite{Weinberg}, \\cite{Veltman}, \\cite{Griffiths}), \n however, this does\nnot include a complete and consistent description of the total system in terms\nof QFT constructs such as Feynman diagrams. In \n \\cite{Weinberg} \nS. Weinberg writes, \"It must be said that the theory of relativistic effects and radiative corrections\nin bound states is not yet in entirely satisfactory shape\".\n\nAs a consequence of this QFT weakness, the QT functional model cannot yet offer\na complete model of the interaction between a particle\/wave and a bound system or\nof the interaction between two bound systems. As the major open question, it is not\nunderstood when or to what extent a bound system can\/must be treated as an entity\nor the whole interaction process has to be broken down to interactions among (elementary)\nparticles\/waves. \n\n\\subsection{Model of Measurement Process}\nOne of the motivations for the development of the functional model of QT was the author's suspicion that the apparent nonlinear evolution of the wave function with measurements can get a more plausible explanation, if one relinquishes the requirement that physical processes be describable purely by differential equations. \n\nThe proposed functional model of QT interactions enables a model for the measurement process where there is no explicit \"measurement\" process (step). Measurement is explained in terms of normal interactions as described in the present paper.\nThe functional model of measurements described in \\cite{diel4} is based on the following assumptions:\n\\begin{itemize}\n\\item Measurements require interactions between the measured QT object and part of the measurement apparatus. 
In general, QT interactions (a) imply transitions from probabilities to facts and (b) have to adhere to the rules and equations of quantum field theory (QFT).\nThus, as described in this paper, the evolution of the wave function during a measurement process is not just a normal linear progression, but a more complicated process which includes the transition from probabilities to facts. \n\\item Interactions, in general, support only a limited exchange of information between the \"in\" QT objects and the \"out\" QT objects. This limited exchange of information is the cause of some of the peculiarities of QT measurements.\n\\end{itemize}\nWith standard QT, the transition of possibilities to facts occurs exclusively\nwith measurements. The functional model of QT interactions assumes non-deterministic actions at various process steps.\n\n\\subsection{Superpositions}\nSuperposition is one of the key concepts of QT. For the functional model it is assumed that superpositions apply to the paths of the same q-object only, and that superpositions become effective only with interactions (including measurement interactions) occurring at space-time points shared by multiple paths.\n\n\\subsection{Entanglement}\n\nAs described in section 6, the most important effect of an interaction is the formation of a particle\/wave collection with \nmultiple paths. \nA path supports correlations and entanglements between the particles\/waves leaving the interaction.\nEach path represents one of the possible outcomes of an interaction (such as a measurement). As described in section 6.1, the interaction discards all paths of the \"in\" particle\/wave collection except the one selected.\n\nWhen particles\/waves are entangled, the entanglement endures for some time\nuntil it becomes \"measured\" as a result of an interaction involving one of the entangled particles\/waves. 
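The path-selection step described above can be sketched in a few lines, assuming a pw-collection is represented as a list of paths, each carrying an amplitude and definite values for two entangled particles\/waves; the representation and the spin-like example are illustrative assumptions, not constructs from the model itself:

```python
import math
import random

# Toy sketch of the path-selection step: a pw-collection is a list of paths
# (amplitude, value-1, value-2); an interaction selects one path with Born
# probability |amplitude|^2 and discards the alternatives, thereby fixing the
# value for the entangled partner as well.  (Representation and example are
# illustrative assumptions only.)
def select_path(pw_collection, rng=random.random):
    weights = [abs(amp) ** 2 for amp, _, _ in pw_collection]
    r = rng() * sum(weights)
    for w, path in zip(weights, pw_collection):
        r -= w
        if r <= 0:
            return [path]        # the surviving single-path collection
    return [pw_collection[-1]]   # guard against floating-point round-off

# Singlet-like pw-collection: two equally weighted, anti-correlated paths.
collection = [
    (1 / math.sqrt(2), "up", "down"),
    (1 / math.sqrt(2), "down", "up"),
]
survivor = select_path(collection)
```

Whichever path survives, the value for particle\/wave-2 is fixed opposite to that of particle\/wave-1; the elimination of the alternative path is the non-local step.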
\"Measured\" indicates that due to an interaction one of the entangled particles\/waves obtains a definite value for the\nmeasured observable. The measurement of a definite value for particle\/wave-1\naffects the possible values that can be measured for particle\/wave-2. With\nthe functional model, the determination of a specific measurement value means\nselection of a path from a pw-collection. The selection of a path means selection of\na specific value for both particle\/wave-1 and particle\/wave-2. The path selected\nis the only one surviving for the further evolution of particle\/wave-2. \n\nThe entanglement concept of the QT functional model maintains the strange\nnon-locality of QT. However, the non-local effect applies to the elimination of\nalternative paths, rather than to direct value changes.\n\n\n\\subsection{Decoherence}\n\nDecoherence theory, like this functional model, discusses process steps such\nas the formation of entanglements, interactions, and the (apparent) collapse of\nthe wave function. In this respect, it may be considered to provide a type of functional\nmodel of QT interactions.\n\nDecoherence is the process of changing the coherent wave function of a local\nquantum system through interactions with the environment to a wave function\nthat is entangled with the environment. H.D. Zeh, one of the founders of \ndecoherence theory, calls decoherence an \"uncontrolled dis-localization of superpositions\" \n(see \\cite{ZehPOR}, page 35).\n\nAccording to Zeh, decoherence occurs constantly as the result of interactions of the observed subsystem with the environment. Even if the observed\nsubsystem is isolated, decoherence is unavoidable for the measurement apparatus, and thus, the overall system consisting of the observed subsystem, the\nmeasurement apparatus, and the environment is subject to decoherence \n(see \\cite{ZehPOR}, page 34).\n\nDecoherence theory does not assume a collapse of the wave function. 
Instead, the many-worlds interpretation \n(see \\cite{Everett})\nis considered a suitable interpretation of measurements and measurement-like\ninteractions. The author considers the denial of the collapse of the wave function as an\nunavoidable consequence of the insistence on differential equations as the only\nphysics-conforming way to describe temporal relationships.\nIn contrast to decoherence theory, the author recognizes a need for an additional\nexplicit process step besides the unitary evolution defined by the Schr\\\"odinger\nequation. Even with the assumption of a many-worlds interpretation, the \"branching\" into new worlds (as a replacement of the collapse of the wave function) is\na non-trivial process step that requires further elaboration and (most importantly)\ncriteria for when it occurs.\n\n\\section{Conclusions}\n\nThe development of the functional model of QT interactions may be considered\nan exercise demonstrating that a functional model of QT that is compatible with\nstandard QT\/QFT is feasible.\n\nAlthough the functional model aims for maximum compatibility with standard QT\/QFT, there are exceptions in the functional model. Examples include some small deviations to standard QFT that have been intentionally included, items where it is not clear what the QT\/QFT conformal behavior exactly is, and items where there is no equivalent QT\/QFT position because the level of detail is below the scope of QT\/QFT. For all these areas, verification by experiments is appropriate.\n\nThere is one area in which the functional model is designed to deviate from standard QT\/QFT, the concept of the \"transition from probabilities to facts\". The functional model assumes the transition from probabilities to facts to occur in multiple steps. 
As indicated within the paper, it will be very difficult to test possible deviations from standard QT in this area.\n\nIn addition to the areas in which the functional model intentionally deviates from standard QT\/QFT, there may be further accidental deviations. Such accidental deviations should be discovered through computer simulations that compare the predictions of standard QT\/QFT with those of the functional model, as described in \n\\cite{diel3}.\nPreliminary results of these computer simulations show agreement to the extent expected.\n\nThe functional model of QT interactions is not considered complete in all\nareas. The major areas where the specification is incomplete are (1) the definition\nof clear criteria for which interactions do not result in a collapse of the wave\nfunction (see section 7.2) and (2) the treatment of bound systems (see section 7.3).\nIn both areas, which seem to be interrelated, the existing QT\/QFT is not yet in a state that provides much\nhelp for the construction of a functional model.\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Causality}\n \nIt is claimed in \\cite{jones} that our analysis of damping of \nneutrino oscillations implies the possibility \nof superluminal signals and thus leads to causality violation. To illustrate \nthis point, the author considers a two-stage thought experiment, in which \ninitially neutrinos \nare produced in \nelectron captures in a low-density gas, so that the interactions of the \ndaughter nuclei with the surrounding particles of the medium can be \nneglected. This leads to relatively long neutrino \nWPs and no \ndecoherence by WP separation observed in the detector. 
Then, at some \n``decision-making time\" $t_0$, one compresses the gas in the source with a \npiston, leading to strong localization of the daughter nucleus $N'$ and much \nshorter neutrino WPs; as a result, an observer at the detector position \nshould see decoherence effects. However, according to the claim in \n\\cite{jones}, this happens outside the future light cone with the origin at \nthe point of $N'$ interaction soon after $t_0$, as the light signal from \nthis point would reach the neutrino detector only after the neutrino detection \nprocess is already over. This would mean causality violation. \n\nThe above argument is based on the incorrect space-time diagrams \nin Figs.~1 and 2 of \\cite{jones}, which do not correspond to our \ncalculations. Our analysis in \\cite{us} is based on a consideration of the \nmean free times of the particles involved in the neutrino production \nprocess. We have demonstrated that the production time is determined by \nthe shortest among the mean free times $t_a$ of all the involved particles. \nIn the cases considered in \\cite{jones}, these are the mean free times of \neither the electron (Fig.~1)\n\\footnote{ \nNote that the light cone in Fig.~1 of \\cite{jones} is plotted \nincorrectly: since the produced electron has in this case the shortest time \nof free propagation, the light cone should originate from the point of the \nelectron's interaction with the surrounding atoms, and not from that of $N'$.} \nor of the parent nucleus $N$ (both panels of Fig.~2), but not of $N'$. \nThus, in all the diagrams of \\cite{jones} the mean free time $t_{N'}$ of the \ndaughter nucleus corresponds to times when the neutrino production \nprocess is already over. In such circumstances the interactions of \n$N'$ are irrelevant; they cannot (and do not) affect the outcome of the \nneutrino detection experiment. \n\nConsider now the situation when $t_{N'}0$ there is at \nleast one unstable direction. 
\\medskip\n\nNote that minima are relatively easy to find by steepest descent but that saddle points are \nmore challenging, particularly if one wants to find multiple ones for fixed parameters. \nLi and Zhou \\cite{LiZhou,LiZhou1}, motivated by the work in \\cite{ChoiMcKenna,DingCostaChen}, \ndeveloped an algorithm to compute multiple saddle solutions. We shall only outline the basic\nproposed strategy here; for details see \\cite{LiZhou,LiZhou1}. Let $\\tilde{\\cH}\\subset \\cH$\nbe a subspace of the Hilbert space $\\cH$ and consider the unit sphere \n$S_{\\tilde{\\cH}}:=\\{v\\in \\tilde{\\cH}:\\|v\\|_{\\cH}=1\\}$. Let $\\cL$ be a closed subspace in $\\cH$\nwith orthogonal complement $\\cL^\\perp$. For each $v\\in S_{\\cL^\\perp}$, define the closed \nhalf-space\n\\be\n[\\cL,v]:=\\{tv+w:w\\in\\cL,t\\geq 0\\}.\n\\ee\nA set-valued map $P:S_{\\cL^\\perp}\\ra 2^\\cH$ is called the peak mapping of $J$ with respect to\n$\\cL$ if for any $v\\in S_{\\cL^\\perp}$, $P(v)$ is the set of all local maximum points of $J$ in \n$[\\cL,v]$. Essentially, the map $P$ collects local maxima in a half-space. A single-valued \nmap $p:S_{\\cL^\\perp}\\ra \\cH$ is a peak selection of $J$ with respect to $\\cL$ if \n\\be\np(v)\\in P(v),\\qquad \\forall v\\in S_{\\cL^\\perp}.\n\\ee \nBasically $p$ selects a single maximum parametrized by $v$; the notion of peak selection can be localized \nby intersecting the relevant sets with neighbourhood of $v$ \\cite{LiZhou,LiZhou1}. The main idea\nof finding critical points is to restrict to suitable solution submanifolds\n\\be\n\\cM:=\\{p(v):v\\in S_{\\cL^\\perp}\\}.\n\\ee\nEssentially, one would like to think of the manifold $\\cM$ as (an approximation to) \nthe stable manifold of a saddle critical point.\nThen one may use a descent method on $\\cM$, in combination with a method to stay on $\\cM$ during \nthe iteration, to find the saddle point. Of course, one has also to avoid convergence to a saddle\npoint, which has been found previously. 
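The peak mapping and peak selection can be made concrete on a toy finite-dimensional analogue of the functional: for $J(u)=\frac12\langle Au,u\rangle-\frac14\sum_i u_i^4$ with $\cL=\{0\}$, the half-space $[\cL,v]$ is just the ray $\{tv:t\geq 0\}$ and the peak along it is available in closed form. A minimal sketch (the $2\times 2$ diagonal $A$ and the grid search over directions are illustrative assumptions, not part of the algorithm in the cited papers):

```python
import math

# Toy analogue of J(u) = 1/2 <Au,u> - 1/4 sum_i u_i^4 with L = {0}: the
# half-space [L, v] is the ray {t v : t >= 0}, and the peak mapping maximizes
# J along it in closed form.  (2x2 diagonal A is an illustrative assumption.)
LAM = (1.0, 2.0)  # eigenvalues of the SPD matrix A

def J(x, y):
    return 0.5 * (LAM[0] * x * x + LAM[1] * y * y) - 0.25 * (x**4 + y**4)

def peak_selection(theta):
    """p(v) for v = (cos theta, sin theta): argmax of (q/2) t^2 - (s4/4) t^4."""
    c, s = math.cos(theta), math.sin(theta)
    q = LAM[0] * c * c + LAM[1] * s * s   # <Av, v>
    s4 = c**4 + s**4                      # quartic term along the ray
    t = math.sqrt(q / s4)
    return t * c, t * s

# Minimize J over the selected peaks by a coarse search over directions v.
best = min((peak_selection(math.pi * k / 1000) for k in range(1000)),
           key=lambda p: J(*p))
gx = LAM[0] * best[0] - best[0]**3        # gradient of J at the candidate;
gy = LAM[1] * best[1] - best[1]**3        # both components vanish at a saddle
```

For this $A$ the procedure returns the Morse-index-1 saddle $(1,0)$ with $J=\frac14$; repeating the search in the orthogonal complement of the found solution would mimic the role of $\cL$ for subsequent saddles.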
The descent process is a minimization problem and the step\nto return to $\\cM$ is carried out via a maximization problem, leading to a \\texttt{minimax}-type algorithm.\nThe span of the previously found solutions is the space $\\cL$, which one tries to avoid at\nfuture iteration steps, {i.e.}, the search essentially works in the complementary half-space $[\\cL,v]$.\n\nNote that the algorithm just outlined can be viewed as a descent method with a constraint to stay inside \na certain search space by excluding previously found directions. The entire procedure is easily implemented\nvia FEM as the FEM discretization just reduces the original problem to a discrete finite set of values\nof $u$ at certain points. Then one has to actually solve a large, but finite-dimensional, constrained \noptimization problem. A detailed algorithmic description can be found in \\cite{LiZhou,LiZhou1}; the \nimplementation of the \\texttt{minimax} algorithm we use here is \\cite{Zhou}, while a number of \napplications of the method are discussed, {e.g.}, in \\cite{ChenZhou,ChenZhou1,WangZhou,XieYuanZhou}.\\medskip\n\nIt is desirable to try to glue the variational \\texttt{minimax} approach to a standard continuation\npackage, which itself is glued to a standard FEM package. This is precisely what is carried out \nin practice here for several problems using \\texttt{pde2path} as the main focal point. For more details on \nthe scientific computing challenges involved, we refer to Section \\ref{sec:software}.\n\n\\section{A Cross-Validation Test Problem}\n\\label{sec:test}\n\nAs a first step, we are going to compare, for a test problem from the class of elliptic PDEs \n\\eqref{eq:ellipticPDE}, two different approaches: \n\n\\begin{itemize}\n \\item[(I)] Use the \\texttt{minimax} approach to calculate at a fixed parameter value $\\mu_0=\\mu(s_0)$ several \n starting solutions $u_0^l=u_0^l(x)$, $l\\in\\{1,2,\\ldots,L\\}$ for some $L\\geq 2$. 
Then use numerical \n continuation for each starting solution, generating multiple solution branches \n $Z^l(s):=(u^l(s),\\mu^l(s))$.\n \\item[(II)] Start with a single, preferably simple and easy-to-guess, solution at a fixed parameter \n value $(u^*_0,\\mu^*_0)$ and then continue the branch $Z^*(s)=(u^*(s),\\mu^*(s))$. Via branch switching\n at bifurcation points, one can then try to recover the starting set of solutions from (I) as points\n on the branch, {i.e.}, $(u^*(s^l),\\mu^*(s^l))=(u^l_0,\\mu_0)$ for some values $s^l$. \n\\end{itemize}\n\nIt is helpful to select a test problem, where we may expect that classical continuation ideas using the \nhomotopy approach (II) would suffice, but where the variational problem is nontrivial. As a domain, we choose \na rectangle $\\Omega:=(-l_{x_1},l_{x_1})\\times (-l_{x_2},l_{x_2})\\subset \\R^2$, $l_{x_1},l_{x_2}>0$. Consider \nthe elliptic PDE for $u=u(x)$ given by\n\\be\n\\label{eq:ac_test}\n\\left\\{\n\\begin{array}{ll}\n0=-\\Delta u-\\mu u-u^3=g(u,\\mu),\\quad & x\\in\\Omega,\\\\\n0=u,\\quad & x\\in\\partial \\Omega.\n\\end{array}\\right.\n\\ee\nwhere $\\mu\\in\\R$ is the main bifurcation parameter. For $\\mu=0$, and multiplying the entire equation by $-1$, \nthe PDE \\eqref{eq:ac_test} is also known as the Lane-Emden-Fowler equation \\cite{Chandrasekhar,SerrinZou,Wong}. \nIn fact, $\\mu=0$ is the standard test case from \\cite{Zhou} so we definitely can try to carry out the strategy \n(I) with $\\mu_0=0$. 
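A one-mode Galerkin caricature of \eqref{eq:ac_test} shows what the two strategies track: restricting $g(u,\mu)=0$ to a single eigenfunction of $-\Delta$ with eigenvalue $\lambda$ gives the scalar equation $(\lambda-\mu)u-u^3=0$, whose nontrivial branch is $u=\sqrt{\lambda-\mu}$. A minimal natural-parameter continuation sketch (the one-mode reduction, $\lambda=2$, and the step size are illustrative assumptions, not the PDE computation itself):

```python
import math

# One-mode Galerkin caricature of continuation for eq:ac_test: on a single
# eigenfunction of -Delta with eigenvalue lam, g(u, mu) = 0 reduces to
# (lam - mu) u - u^3 = 0 with nontrivial branch u = sqrt(lam - mu).
# (lam = 2.0 and the step size are assumptions for illustration.)
LAM_EIG = 2.0

def g(u, mu):
    return (LAM_EIG - mu) * u - u**3

def dg_du(u, mu):
    return (LAM_EIG - mu) - 3.0 * u * u

def newton(u, mu, tol=1e-12):
    for _ in range(50):
        step = g(u, mu) / dg_du(u, mu)
        u -= step
        if abs(step) < tol:
            return u
    raise RuntimeError("Newton iteration failed to converge")

# Natural-parameter continuation: march mu forward, using the previous
# solution as the predictor for the Newton corrector.
branch = []
u = math.sqrt(LAM_EIG)   # exact nontrivial solution at mu = 0
for k in range(10):
    mu = 0.2 * k
    u = newton(u, mu)
    branch.append((mu, u))
```

The scalar branch collapses into the trivial solution as $\mu\to\lambda$, the one-mode shadow of the PDE branches connecting back to $u\equiv 0$ in strategy (II).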
Furthermore, $u\\equiv 0$ is a solution, so setting $(u^*_0,\\mu^*_0)=(0,0)$ is a standard\nstarting point for strategy (II).\\medskip \n\n\\begin{figure}[htbp]\n\\psfrag{lam}{$\\mu$}\n\\psfrag{x}{$x$}\n\\psfrag{y}{$y$}\n\\psfrag{norm}{$\\|u\\|_\\I$}\n\\psfrag{a}{(a)}\n\\psfrag{c1}{(c1)}\n\\psfrag{c2}{(c2)}\n\\psfrag{d}{(d)}\n\\psfrag{e}{(e)}\n\\psfrag{f}{(f)}\n\\psfrag{b1}{(b1)}\n\\psfrag{b2}{(b2)}\n\\psfrag{AC}{$0=-\\Delta u-\\mu u-u^3$}\n\t\\centering\n\t\t\\includegraphics[width=1\\textwidth]{.\/fig1}\n\t\t\\caption{\\label{fig:1}Numerical continuation results for \\eqref{eq:ac_test} \n\t\tstarting from variational solutions found via the \\texttt{minimax} algorithm at $\\mu=0$. \n\t\tThe seven starting solutions are shown in (a)-(e). The four double bump solutions\n\t\t(b)-(c) split into two classes with different $L^\\I$-norm, where the norm\n\t\tfor (b1)-(b2) is smaller than for (c1)-(c2). The norms for (d) and (e) are the two\n\t\tlargest norms. The part (f) of the figure shows the raw output of \\texttt{pde2path} upon \n\t\tcontinuation; see also the description in the text.}\n\\end{figure} \n\nWe start with strategy (I). The variational formulation \\eqref{eq:functional} of \\eqref{eq:ac_test} via \nan energy functional is given, for $u\\in H^1_0(\\Omega)$ and $\\mu_0=0$, by\n\\be\n\\label{eq:var_form}\nJ(u)=\\int_\\Omega\\left[\\frac12 \\|\\nabla u(x)\\|^2-\\frac14 u^4(x)\\right]\\txtd x,\n\\ee \nwhere $\\|\\cdot\\|$ denotes the usual Euclidean norm in $\\R^2$.\nThe results from the \\texttt{minimax} algorithm can be reproduced using \\cite{Zhou} and are shown in Figure~\\ref{fig:1}(a)-(e)\nfor $l_{x_1}=0.5=l_{x_2}$. Figure \\ref{fig:1}(a)-(e) shows four distinct classes of solutions. The \nsolution of Morse index $\\textnormal{MI}=1$ is shown in Figure \\ref{fig:1}(a). 
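The value of the energy functional \eqref{eq:var_form} for a candidate function can be checked by straightforward quadrature. A minimal sketch (composite trapezoidal rule; the test function $u=\cos(\pi x_1)\cos(\pi x_2)$ on $(-\frac12,\frac12)^2$, which satisfies the Dirichlet condition, is an illustrative assumption and not one of the computed minimax solutions):

```python
import math

# Quadrature check of the energy functional eq:var_form on (-1/2, 1/2)^2.
# For the assumed test function u = cos(pi x1) cos(pi x2) the exact value
# is pi^2/4 - 9/256; the gradient is evaluated analytically for this u.
N = 400            # subintervals per direction
h = 1.0 / N

def u(x1, x2):
    return math.cos(math.pi * x1) * math.cos(math.pi * x2)

def grad_sq(x1, x2):
    d1 = -math.pi * math.sin(math.pi * x1) * math.cos(math.pi * x2)
    d2 = -math.pi * math.cos(math.pi * x1) * math.sin(math.pi * x2)
    return d1 * d1 + d2 * d2

def integrand(x1, x2):
    return 0.5 * grad_sq(x1, x2) - 0.25 * u(x1, x2)**4

# Composite trapezoidal rule (boundary weights 1/2, corner weights 1/4).
J_val = 0.0
for i in range(N + 1):
    for j in range(N + 1):
        w = (0.5 if i in (0, N) else 1.0) * (0.5 if j in (0, N) else 1.0)
        J_val += w * integrand(-0.5 + i * h, -0.5 + j * h)
J_val *= h * h

exact = math.pi**2 / 4 - 9.0 / 256.0
```

Such a check is useful for validating a FEM energy evaluation against a closed-form reference before trusting it on the computed saddle solutions.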
Two solutions of Morse index $\\textnormal{MI}=2$ with two peaks\ncentered along the coordinate axes are shown in Figures \\ref{fig:1}(c1)-(c2),\nand the corresponding two-peak solutions with the peaks along the diagonal and anti-diagonal are displayed in Figures \n\\ref{fig:1}(b1)-(b2). There are also two different classes of solutions of Morse index \n$\\textnormal{MI}=4$ shown in Figures \\ref{fig:1}(d)-(e). Once the starting solutions had been found, they\nwere saved and re-meshed onto a slightly coarser grid for continuation in \\texttt{pde2path}. Figure \\ref{fig:1}(f)\nshows the resulting bifurcation diagram in $(\\mu,\\|u\\|_\\I)$-space, where $\\|\\cdot\\|_\\I$ denotes the usual \n$L^\\I$-norm. Figure \\ref{fig:1}(f) shows the raw output of the \\texttt{pde2path} continuation, {i.e.}, the actual \npoints computed on the desired seven branches, starting from the seven starting solutions located at $\\mu=0$.\nThe $L^\\I$-norms of the seven starting solutions correspond to the lexicographical order of the labels, {e.g.}, \nthe smallest $L^\\I$-norm is in Figure \\ref{fig:1}(a), while the largest is in Figure \\ref{fig:1}(e).\\medskip \n\n\\begin{figure}[htbp]\n\\psfrag{lam}{$\\mu$}\n\\psfrag{norm}{$\\|u\\|_\\I$}\n\\psfrag{a}{(a)}\n\\psfrag{c}{(c)}\n\\psfrag{d}{(d)}\n\\psfrag{e}{(e)}\n\\psfrag{b}{(b)}\n\t\\centering\n\t\t\\includegraphics[width=0.95\\textwidth]{.\/fig2}\n\t\t\\caption{\\label{fig:2}Numerical continuation results for \\eqref{eq:ac_test} \n\t\tstarting from the zero solution $u\\equiv0$ at $\\mu=0$; the color coding is\n\t\tas in Figure \\ref{fig:1}. Further solutions are\n\t\tfound by branch switching at four detected bifurcation\/branch points (marked as black dots).\n\t\tThe four final ($\\mu=0$) solutions on the nontrivial branches are shown in (a)-(d). \n\t\tThe two double bump solutions (b)-(c) again have different $L^\\I$-norm at $\\mu=0$. 
\n\t\tThe branch point for the solution in Figure \\ref{fig:1}(e) has not been found in a\n\t\tcontinuation run using the routine \\texttt{cont}. However, the branch point can be \n\t\tfound when using \\texttt{findbif} and is located at $\\mu\\approx 101.218$. The part \n\t\t(e) of this figure shows the raw output of \\texttt{pde2path} upon continuation.}\n\\end{figure} \n\nSome of the important algorithmic parameter values for \\texttt{pde2path} used during the continuation runs were:\n\\be\n\\xi=1\/n_p, \\quad \\texttt{neig}=100, \\quad \\texttt{dsmax}=0.5, \\quad \\texttt{hmax}=0.1,\n\\ee\nwhere \\texttt{neig} is the number of computed eigenvalues for stability\/bifurcation detection, \\texttt{dsmax}\nis the maximum step size for $\\delta s$, and $\\texttt{hmax}$ is the maximum triangle side length $h$ for the\n(uniform) global triangulation. Otherwise, we used standard parameter values, where it should be noted that\nusing finite-differencing for the Jacobian seemed to yield a more stable algorithm\\footnote{In the forthcoming \nversion \\texttt{p2p2} of \\texttt{pde2path} \\cite{DohnalRademacherUeckerWetzel}, this effect, which arises\ndue to interpolation error, is resolved. I would like to thank Hannes Uecker for explaining the\ndetails behind this observation to me.}. The results in Figure\n\\ref{fig:1}(f) show that all seven initial bifurcation branches connect back to the trivial solution branch\nat $(u,\\mu)=(0,\\mu)$. We also observed that, by changing some \\texttt{pde2path} continuation parameters, it\nwas possible to change the direction in which the zero branch was continued once it was reached, either to the\nleft or to the right; Figure \\ref{fig:1} shows all seven branches going to the right, {i.e.}~increasing $\\mu$, once\nthe zero branch was reached. 
There seems to be a numerical artifact for the branch corresponding to \nthe starting solution in Figure \\ref{fig:1}(e), where a bifurcation point (blue circle) is detected quite a \nbit before the actual zero branch is reached.\\medskip \n\nFor the second approach (II), we continue the trivial zero branch. The results are shown in Figure \n\\ref{fig:2}. Four branch points (black dots) were detected when the continuation was run between $\\mu=0$ \nand $\\mu=110$; again, the last bifurcation point corresponding to the diagonal\/anti-diagonal $\\textnormal{MI}=4$\nsolution from Figure \\ref{fig:1}(e) was not detected using a standard bifurcation run using \\texttt{cont}. However, \nthis branch point is correctly detected using the routine \\texttt{findbif}\\footnote{I would like to thank Hannes \nUecker for making me aware of this, {i.e.}, that \\texttt{findbif} evaluates the eigenvalues, while \\texttt{cont} \nchecks for a vanishing determinant.}. Branch\nswitching was performed using \\texttt{pde2path} at the bifurcation points. The new non-trivial solutions \nwere tracked back up to $\\mu=0$, which yields the same - up to symmetry - solutions as recorded for approach (I).\nIn summary, both approaches essentially lead to the same results and we have definitely cross-validated (I) and (II). \n\n\\section{Adding a Localized Microforce Load}\n\\label{sec:microforce}\n\nWe have observed for the test problem in Section \\ref{sec:test} that symmetries lead to difficulties\nof detection of certain branches as multiple solution branches come together at a single bifurcation\npoint. Hence, it is natural to break the symmetry by a certain perturbation. 
In \n\\cite{UeckerWetzelRademacher}, the symmetry is broken for a cubic-quintic Ginzburg-Landau\/Allen-Cahn \ntype equation of the form \n\\be\n\\label{eq:UWR}\n\\left\\{\n\\begin{array}{ll}\n0=-c\\Delta u-\\mu u-u^3+u^5,\\quad & x\\in\\Omega,\\\\\n0=u,\\quad & x\\in\\partial \\Omega,\n\\end{array}\\right.\n\\ee\non a rectangle $\\Omega:=(-l_{x_1},l_{x_1})\\times (-l_{x_2},l_{x_2})\\subset \\R^2$ with $l_{x_1}=1$ and \n$l_{x_2}=0.9$. Instead of breaking the geometry of the domain, one may also ask, how one expects generic \nsymmetry-breaking for Allen-Cahn\/Ginzburg-Landau-type PDEs when the equation itself is perturbed. \nOne possible mechanism can be \nfound in the work of Gurtin \\cite{Gurtin}, who derives the Ginzburg-Landau equation from \nbasic principles of force balance. The basic motivation for Ginzburg-Landau-type models are\ntwo-phase systems, whose evolution is described on a macroscopic level by an order parameter \n$\\rho=\\rho(x,t)$ for $(x,t)\\in\\Omega\\times [0,T]$ for some $T>0$. The force balance equations\nfor a given body lead to the generalized Ginzburg-Landau equation\n\\be\n\\beta\\frac{\\partial \\rho}{\\partial t}=\\nabla \\cdot \\left[\\frac{\\partial \\Psi}{\\partial p}(\\rho,\\nabla \\rho)\\right]\n-\\frac{\\partial \\Psi}{\\partial \\rho}(\\rho,\\nabla \\rho)+\\gamma, \n\\ee\nwhere $p=\\nabla \\rho$, $\\Psi:\\R^2\\ra \\R$ is a given free energy, $\\gamma$ is an external\nmicroforce applied to the body and $\\beta$ is the constitutive modulus as discussed in \\cite{Gurtin}.\nTaking a constant positive $\\beta$ and the free energy as \n$\\Psi(\\rho,\\nabla \\rho)=F(\\rho,\\mu)+\\frac12 \\alpha \\|\\nabla \\rho\\|^2 $ leads to the Ginzburg-Landau\nequation with an applied microforce\n\\be\n\\beta\\frac{\\partial \\rho}{\\partial t}=\\alpha\\Delta \\rho-\\frac{\\partial F}{\\partial \\rho}(\\rho,\\mu)+\\gamma, \n\\ee\nwhere $\\alpha,\\beta$ are positive parameters and $\\gamma=\\gamma(x)$ is an\nexternal microforce on the body. 
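Spelling out the reduction from the force-balance form to the displayed equation (same symbols as above): with $\Psi(\rho,\nabla\rho)=F(\rho,\mu)+\frac12 \alpha \|\nabla \rho\|^2$ and $p=\nabla\rho$,

```latex
\be
\frac{\partial \Psi}{\partial p}(\rho,\nabla \rho)=\alpha\nabla\rho
\quad\Longrightarrow\quad
\nabla \cdot \left[\frac{\partial \Psi}{\partial p}(\rho,\nabla \rho)\right]
=\alpha\Delta\rho,
\qquad
\frac{\partial \Psi}{\partial \rho}(\rho,\nabla \rho)
=\frac{\partial F}{\partial \rho}(\rho,\mu),
\ee
```

so the generalized equation becomes $\beta\,\partial_t\rho=\alpha\Delta\rho-\frac{\partial F}{\partial \rho}(\rho,\mu)+\gamma$, as displayed.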
Frequently, the potential $F$ is chosen as a double-well leading to \n$F'(u)=\\mu u-u^3$, which has different signs in comparison to the Lane-Emden-Fowler equation from \nSection \\ref{sec:test}. However, the idea that there is a (microscopic) external force acting on \nthe problem still seems very reasonable. Hence, we propose to study the elliptic problem \n\\be\n\\label{eq:LEF_force}\n\\left\\{\n\\begin{array}{ll}\n0=-\\Delta u-\\mu u-u^3+\\gamma,\\quad & x\\in\\Omega,\\\\\n0=u,\\quad & x\\in\\partial \\Omega,\n\\end{array}\\right.\n\\ee\nwhere $\\gamma=\\gamma(x)$ depends in a non-trivial way upon the spatial coordinate and $\\Omega:=(-l_{x_1},l_{x_1})\\times (-l_{x_2},l_{x_2})\\subset \\R^2$ with $l_{x_1}=1=l_{x_2}$. \nTo break the symmetry encountered in Section \\ref{sec:test}, we propose to consider a point-type load\nmodeled by the function\n\\be\n\\label{eq:mforce}\n\\gamma=\\gamma(x):=\\left\\{\n\\begin{array}{rl}\n\\gamma_a\\exp\\left(-1\/\\gamma_b[(x_1-\\gamma_1)^2+(x_2-\\gamma_2)^2] \\right),&\\quad\\text{if \n$x=(x_1,x_2)^\\top \\in\\Omega$,}\\\\\n0,&\\quad \\text{if $x\\in \\partial \\Omega$,}\n\\end{array}\n\\right.\n\\ee\nand we fix $\\gamma_a=10$, $\\gamma_b=\\frac{1}{10}$, $\\gamma_1=0.5=\\gamma_2$. Note that \n$\\gamma:\\Omega\\ra [0,+\\I)$ is, up to numerical accuracy, localized in a small \nneighbourhood of the point $(x_1,x_2)=(0.5,0.5)$. Note carefully that using \\eqref{eq:mforce} as the forcing \nin \\eqref{eq:LEF_force} means that there are no homogeneous constant steady states $u(x)\\equiv \\text{constant}$ \navailable to start the continuation as in Section \\ref{sec:test}. 
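The point-load character of \eqref{eq:mforce} is easy to confirm numerically; a minimal sketch transcribing $\gamma$ with the fixed parameter values (the variable names are ad hoc):

```python
import math

# Direct transcription of the microforce eq:mforce with the fixed parameters
# gamma_a = 10, gamma_b = 1/10, gamma_1 = gamma_2 = 0.5, confirming that the
# load is concentrated near (0.5, 0.5).  (Variable names are ad hoc.)
GA, GB, G1, G2 = 10.0, 0.1, 0.5, 0.5

def gamma_force(x1, x2):
    r2 = (x1 - G1)**2 + (x2 - G2)**2
    return GA * math.exp(-r2 / GB)

peak = gamma_force(0.5, 0.5)      # = gamma_a = 10 at the centre of the load
corner = gamma_force(-1.0, -1.0)  # negligible at the far corner of Omega
```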
This makes \\eqref{eq:LEF_force} substantially\nmore difficult.\\medskip\n\n\\begin{figure}[htbp]\n\\psfrag{mu}{$\\mu$}\n\\psfrag{norm}{$\\|u\\|_\\I$}\n\t\\centering\n\t\t\\includegraphics[width=1\\textwidth]{.\/fig3}\n\t\t\\caption{\\label{fig:3}Numerical continuation results for \\eqref{eq:LEF_force} with \n\t\texternal force given by \\eqref{eq:mforce}. The starting solution was obtained by \n\t\tthe variational \\texttt{minimax} approach at $\\mu=0$. Then, forward and backward numerical \n\t\tcontinuation from this starting point leads to two branches (black and grey). The \n\t\tstarting solution with Morse index $\\textnormal{MI}=1$ is also shown (on the right side), as well\n\t\tas the two endpoints on the computed branches (on the left side); for a more detailed description\n\t\tsee the main text.}\n\\end{figure} \n\nThe homotopy continuation strategy (II) with branch switching from Section \\ref{sec:test}\nis now extremely complicated to carry out. One option is to set $\\gamma_a=0$, then start with the zero \nsolution, compute the different non-trivial branches using strategy (II), and then continue each branch \nalso in the parameter $\\gamma_a$ up to $\\gamma_a=10$. Albeit certainly possible, it is actually a lot easier \nin this case to follow the gluing strategy (I) as the code can basically be used almost unaltered, just \nby adding $\\gamma$ and noticing that the\nnew energy functional, for $\\mu=0$ ({cf.} equation \\eqref{eq:var_form}), is just given by \n\\be\n\\label{eq:var_form1}\nJ(u)=\\int_\\Omega\\left[\\frac12 \\|\\nabla u(x)\\|^2-\\frac14 u^4(x)+u(x)\\gamma(x)\\right]\\txtd x.\n\\ee \nThe results for the continuation runs are shown in Figures \\ref{fig:3}-\\ref{fig:5} with \n$\\texttt{hmax}=0.07$ and $\\texttt{neig}=200$. The results\nin Figure \\ref{fig:3} show the continuation for the $\\textnormal{MI}=1$ solution. 
The starting solution\nat $\\mu=0$ has a peak slightly shifted towards the upper right corner of the domain in comparison\nto the solution in Figure \\ref{fig:1}(a). Continuing forward in $\\mu$ leads to the black curve in\nFigure \\ref{fig:3} along which the solution peak shrinks and moves into the top right corner. On this\ncurve, a fold point at $\\mu=\\mu_f\\approx 2.92$ occurs, at which the unstable starting solution stabilizes. \nThe bottom curve below the fold is expected to be the global attractor for non-steady-state starting solutions \nfor all $\\mu\\in(-\\I,\\mu_f]$ for the associated parabolic PDE. \nRunning the continuation backwards, {i.e.} with step size $-\\delta s$, yields the grey curve. Along\nthis curve, the solution peak increases and also moves into the top right corner. We also continued both \nparts of the ``(reflected) C-shaped'' branch up to $\\mu=-100$ and no branching was detected. In fact, the \nbranch shape remains, only the peaks seem to sharpen. Indeed, a mesh refinement was helpful for the upper\npart of the curve as discussed in Section \\ref{ssec:FEM}. The sharpening of the peaks is expected from\na formal scaling argument since we can re-write \\eqref{eq:LEF_force} for $\\mu\\neq 0$ as \n\\be\n\\label{eq:LEF_force1}\n\\left\\{\n\\begin{array}{ll}\n0=-\\frac{1}{\\mu}\\Delta u- u-\\frac1\\mu u^3+\\frac1\\mu\\gamma,\\quad & x\\in\\Omega,\\\\\n0=u,\\quad & x\\in\\partial \\Omega,\n\\end{array}\\right.\n\\ee\nwhich is a singularly perturbed elliptic PDE as $|\\mu| \\ra +\\I$ (here $\\mu\\ra -\\I$). For \nsingularly perturbed elliptic PDEs, concentration phenomena, spike- and\/or boundary-layer\nsolutions are a common phenomenon \\cite{LinNiTakagi,Ni,Ni1}. 
More detailed numerical \nperformance calculations were also carried out for this branch and are discussed in Appendix \n\\ref{ap:performance}.\\medskip \n\n\\begin{figure}[htbp]\n\\psfrag{mu}{$\\mu$}\n\\psfrag{norm}{$\\|u\\|_\\I$}\n\t\\centering\n\t\t\\includegraphics[width=1\\textwidth]{.\/fig4}\n\t\t\\caption{\\label{fig:4}Numerical continuation results for \\eqref{eq:LEF_force} with \n\t\texternal force given by \\eqref{eq:mforce}. The starting solution was obtained by \n\t\tthe variational \\texttt{minimax} approach at $\\mu=0$. Then forward and backward numerical \n\t\tcontinuation from this starting point leads to two branches (black and grey). The \n\t\tstarting solution with Morse index $\\textnormal{MI}=2$ is also shown; for a more \n\t\tdetailed description see the main text.}\n\\end{figure} \n\nIn Figure \\ref{fig:4}, a starting solution from the \\texttt{minimax} approach with $\\textnormal{MI}=2$\nand two peaks along the diagonal is considered. Interestingly, along the lower (black) branch of \nsolutions, there is a deformation from a two-peak solution with one minimum and one maximum to a \nsingle minimum solution. Along the upper (grey) branch, the two-peak structure remains to hold.\nAgain, we have a C-shaped branch, which folds even tighter back towards itself in the $L^\\I$-norm.\nContinuing back towards $\\mu=-100$ did not lead to the detection of bifurcation points so we \nmay actually conjecture that the symmetry-breaking leads to various isolas corresponding to different\npeak-structure solutions.\\medskip\n\n\\begin{figure}[htbp]\n\\psfrag{mu}{$\\mu$}\n\\psfrag{norm}{$\\|u\\|_\\I$}\n\t\\centering\n\t\t\\includegraphics[width=1\\textwidth]{.\/fig5}\n\t\t\\caption{\\label{fig:5}Numerical continuation results for \\eqref{eq:LEF_force} with \n\t\texternal force given by \\eqref{eq:mforce}. The starting solution was obtained by \n\t\tthe variational \\texttt{minimax} approach at $\\mu=0$. 
Then forward and backward numerical \n\t\tcontinuation from this starting point leads to two branches (black and grey). The \n\t\tstarting solution with Morse index $\textnormal{MI}=2$ is also shown. Note that this\n\t\tstarting solution is different from the one in Figure \ref{fig:4}; for a more \n\t\tdetailed description see the main text.}\n\end{figure} \n\nWe also used the \texttt{minimax} method to find another two-peak saddle-solution shown as the\nstarting solution in Figure \ref{fig:5}. This starting solution has $\textnormal{MI}=2$\nand a slightly broader peak near the top left corner in comparison to the peak in the \nlower right corner. As before, we applied the same forward and backward continuation \nsteps and obtained another C-shaped branch, which does not connect to any other solutions\nup to $\mu=-100$ and is likely to be isolated from other branches. However, in comparison\nto the case of Figure \ref{fig:4}, there is no change in the peak structure along the \nbranch and the turning in the $L^\I$-norm is even tighter.\medskip\n\nIn summary, the \texttt{minimax} starting strategy turned out to be computationally simple and \nquite effective for determining starting solutions. Furthermore, we may conclude for the \nLane-Emden-Fowler equation \eqref{eq:LEF_force} with microforce \eqref{eq:mforce} that \nsymmetry-breaking by external forces can lead to quite interesting bifurcation branch \nstructures, splitting the various branches, which are connected to $u\equiv 0$ when \n$\gamma=0$ as shown in Section \ref{sec:test}. There seem to be many open questions \nregarding the global bifurcation structure for various forces, even for simple \nelliptic problems in the plane. 
Although\nthe global structure is easily accessed numerically, one may hope to understand the\nproblem near a bifurcation point of the trivial branch for $0<|\gamma(x)|\ll1$ analytically\nby a perturbation argument and local unfolding; again, this question seems to be open\nfor different microforces. \n\n\section{The Caginalp System}\n\label{sec:Caginalp}\n\nIn addition to the previous two case studies in Sections \ref{sec:test}-\ref{sec:microforce},\nwhere the \texttt{minimax} starting solutions have been used according to approach (I), one may also\nask for an example where the classical continuation approach is preferable but gluing computation\nstill plays an important role.\medskip\n\nConsider again a compact domain $\Omega\subset \R^2$, which represents a material or body, and\nlet $(x,t)\in \Omega\times [0,T]$. The Caginalp system models the interaction between a phase \nfield (or order parameter) $\chi=\chi(x,t)$ and the temperature $\vartheta=\vartheta(x,t)$. It\nwas originally proposed in \cite{Caginalp} and studied using many different techniques \n\cite{GrasselliMiranvilleSchimperna,MiranvilleQuintanilla}. From the modelling perspective, it\ndescribes, {e.g.}, melting-solidification processes between different pure phases. 
\nHere we use a version of the Caginalp system in the scaling considered in \n\\cite{GrasselliPetzeltovaSchimperna} given by\n\\be\n\\label{eq:Caginalp}\n\\left\\{\n\\begin{array}{rcll}\n\\frac{\\partial \\vartheta}{\\partial t}+ \\frac{\\partial}{\\partial t}\\lambda(\\chi) \n& = & \\Delta \\vartheta +f,\\quad & \\text{if $(x,t)\\in\\Omega\\times [0,T]$,}\\\\\n\\frac{\\partial \\chi}{\\partial t} & = & \\Delta \\chi -(\\txtD W)(\\chi)+(\\txtD\\lambda)(\\chi)\\vartheta,\n\\quad & \\text{if $(x,t)\\in\\Omega\\times [0,T]$,}\\\\\n0&=&\\vartheta, \\quad & \\text{if $(x,t)\\in\\partial \\Omega \\times [0,T]$,}\\\\\n0&=&\\vec n \\cdot \\nabla\\chi, \\quad & \\text{if $(x,t)\\in\\partial \\Omega \\times [0,T]$,}\\\\\n\\vartheta &=&\\vartheta_0,\\quad & \\text{if $(x,t)\\in \\Omega \\times \\{0\\}$,}\\\\\n\\chi &=&\\chi_0,\\quad & \\text{if $(x,t)\\in \\Omega \\times \\{0\\}$,}\n\\end{array}\n\\right.\n\\ee \nwhere $W:\\R\\ra \\R$ is a given potential, $f:\\Omega \\ra \\R$ is a heat source\/sink and\n$\\lambda:\\R\\ra \\R$ models the latent heat density. We are just interested in the\nstationary Caginalp system under a given fixed heat source. In this case, the system\nreduces to\n\\be\n\\label{eq:Caginalp1}\n\\left\\{\n\\begin{array}{rcll}\n0 & = & \\Delta \\vartheta +f,\\quad & \\text{if $x\\in\\Omega$,}\\\\\n0 & = & \\Delta \\chi -(\\txtD W)(\\chi)+(\\txtD\\lambda)(\\chi)\\vartheta,\n\\quad & \\text{if $x\\in\\Omega$,}\\\\\n0&=&\\vartheta, \\quad & \\text{if $x\\in\\partial \\Omega $,}\\\\\n0&=&\\vec n \\cdot \\nabla\\chi, \\quad & \\text{if $x\\in\\partial \\Omega$,}\\\\\n\\end{array}\n\\right.\n\\ee \nwhere all unknown functions $\\chi=\\chi(x),\\vartheta=\\vartheta(x)$ only depend upon\nthe spatial variable. 
Returning to one modelling purpose of the Caginalp system,\nnamely melting-solidification processes, we aim to choose a domain $\Omega\subset \R^2$\non which melting and solidification definitely play a crucial role in applications.\nOne engineering application of interest is the airplane wing, where the process of\navoiding ice formation is of critical importance \cite{ThomasCassoniMacArthur}. So\nwe chose $\Omega$ as a very basic wing-shaped domain as shown in Figure \ref{fig:7}.\n\medskip \n \n\begin{figure}[htbp]\n\psfrag{x}{$x$}\n\psfrag{y}{$y$}\n\t\centering\n\t\t\includegraphics[width=1\textwidth]{.\/fig7}\n\t\t\caption{\label{fig:7}Solution of the Poisson equation \eqref{eq:Poisson} computed\n\t\tover the basic wing-shaped domain $\Omega$ with a constant heat source \n\t\t$f\equiv2\cdot 10^3$. This solution is \n\t\tused as an input to the Caginalp system \eqref{eq:Caginalp2}.}\n\end{figure} \n\nA first step for the numerical continuation of \eqref{eq:Caginalp1} is to note\nthat we may use an offline pre-computing step since we have assumed that $f$ is\ntime-independent, {i.e.}, we may solve a Poisson equation \n\be\n\label{eq:Poisson}\n\left\{\n\begin{array}{rcll}\n0 & = & \Delta \vartheta +f,\quad & \text{if $x\in\Omega$,}\\\n0&=&\vartheta, \quad & \text{if $x\in\partial \Omega $.}\\\n\end{array}\n\right.\n\ee \nIn this context, a gluing framework is very helpful. It is very easy to use the\ngraphical user-interface provided by the \texttt{pdetoolbox} to define the domain $\Omega$, \nspecify $f$, the Dirichlet boundary conditions, the PDE itself, and then solve\nthe Poisson equation \eqref{eq:Poisson}. The solution for $f\equiv 2000$\nis shown in Figure \ref{fig:7}. 
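The offline pre-computing step is independent of any particular toolbox. As an illustration of what it involves (a minimal sketch only, not the \texttt{pdetoolbox} computation behind Figure \ref{fig:7}; the square domain and the grid size are placeholder assumptions, since the wing-shaped geometry requires a proper mesh generator), a five-point finite-difference solve of \eqref{eq:Poisson} with $f\equiv 2000$ reads:

```python
import numpy as np

def poisson_dirichlet(f_const=2000.0, n=40, length=1.0):
    """Solve 0 = Laplacian(theta) + f on a square with theta = 0 on the
    boundary, using the standard five-point stencil (illustrative only)."""
    h = length / (n + 1)
    # Unknowns are the n*n interior nodes, ordered row by row.
    N = n * n
    A = np.zeros((N, N))
    b = np.full(N, f_const * h**2)  # -Laplacian(theta) = f
    for i in range(n):
        for j in range(n):
            k = i * n + j
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, ii * n + jj] = -1.0  # zero-Dirichlet neighbours drop out
    theta = np.linalg.solve(A, b).reshape(n, n)
    return theta

theta = poisson_dirichlet()
print(theta.max())  # peak temperature, attained near the domain centre
```

In a gluing framework, the only artefact this step has to hand over is the array of temperature values on the grid, which then enters \eqref{eq:Caginalp2} as fixed data.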
Hence, if we view $\vartheta(x)=\vartheta$ as a\nknown temperature input for the Caginalp system \eqref{eq:Caginalp1}, then it\nremains to study\n\be\n\label{eq:Caginalp2}\n\left\{\n\begin{array}{rcll}\n0 & = & -\Delta \chi +(\txtD W)(\chi)-(\txtD\lambda)(\chi)\vartheta,\n\quad & \text{if $x\in\Omega$,}\\\n0&=&\vec n \cdot \nabla\chi, \quad & \text{if $x\in\partial \Omega$,}\\\n\end{array}\n\right.\n\ee \nas an elliptic PDE for the order parameter $\chi$.\medskip\n\n\begin{figure}[htbp]\n\psfrag{mu}{$\mu$}\n\psfrag{norm}{$\|\chi\|_\I$}\n\t\centering\n\t\t\includegraphics[width=1\textwidth]{.\/fig6}\n\t\t\caption{\label{fig:6}Numerical continuation results for \eqref{eq:Caginalp2} with\n\t\t$\vartheta=\vartheta(x)$ from Figure \ref{fig:7}, latent heat density \eqref{eq:latentheat}\n\t\tand potential \eqref{eq:Ceng}. The starting solution was computed for $\mu=1$ by \n\t\titerating an initial guess. The remaining part of the diagram was computed using \n\t\tcontinuation in \texttt{pde2path}. For a more detailed description see the main text.}\n\end{figure} \n\nTo study the reduced\nCaginalp equation \eqref{eq:Caginalp2} numerically, we still have to specify\nthe form of the latent heat density $\lambda:\R\ra \R$ and the potential\n$W:\R\ra \R$. The latent heat density is frequently modeled and\/or empirically determined as a\nleading-order affine-linear term with higher-order small-prefactor nonlinear polynomial correction \nterms \cite{RogersYau}. To replicate this modelling, we just consider\n\be\n\label{eq:latentheat}\n\lambda(\tau):=\mu \left(c_0+c_1 \tau +c_2 \tau^2\right),\n\ee\nwhere $\mu$ is again the bifurcation parameter and $c_1=1$ and $c_2=0.05$ are\nfixed for the computation; note that $c_0$ is not relevant here as only $\txtD \lambda$\nappears in \eqref{eq:Caginalp2}. 
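\nExplicitly, the derivative entering \eqref{eq:Caginalp2} is \n\be\n\label{eq:Dlatentheat}\n(\txtD\lambda)(\tau)=\mu\left(c_1+2c_2\tau\right)=\mu\left(1+0.1\,\tau\right),\n\ee\nso the constant $c_0$ indeed drops out and $\mu$ directly scales the coupling between the order parameter and the temperature field. 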
For the potential, we select the standard example \n\be\n\label{eq:Ceng}\nW(y):=(y^2-1)^2,\n\ee\nwhich was already studied by Caginalp \cite{Caginalp}. Note carefully that\nthe elliptic PDE \eqref{eq:Caginalp2} has a variational structure with\nenergy functional\n\be\n\label{eq:var_form2}\nJ(\chi)=\int_\Omega\left[\frac12 \|\nabla \chi(x)\|^2+W(\chi)-\n\lambda(\chi(x))\vartheta(x)\right]\txtd x.\n\ee \nHowever, the quartic term $\chi^4$ has a positive sign, {i.e.}, the Allen-Cahn\/Ginzburg-Landau\nsign and not the Lane-Emden-Fowler sign. Still, one may formally try to apply a\nminimax approach, but the \texttt{minimax} code \cite{Zhou} we used for Sections \n\ref{sec:test}-\ref{sec:microforce} does not yield multiple solutions, even upon\nminor modifications. Hence, a deeper modification or a completely different approach\nwould be required to scan for multiple solutions at a fixed parameter value.\medskip\n\nNaturally, we first have to make sure there really are multiple solutions for some \nfixed parameter value. In this case, the continuation approach can be used to find \nthese solutions. In particular, we start at $\mu=1$ by guessing a constant solution.\nUnder Newton iteration, we actually do reach a true solution of the problem for $\mu=1$.\nThen we can use numerical continuation to track the solution branch as shown in Figure\n\ref{fig:6}. We observe that there are at least four different solutions at $\mu=1$,\nso a variational approach to determine them systematically should exist;\nwe leave this as an open problem for future numerical work\footnote{It is\nlikely that there is a variational algorithm available to search for multiple solutions\nin this case but not one that is freely available for the direct gluing computation\napproach proposed here.}.\medskip\n\nThe bifurcation diagram in Figure \ref{fig:6} is quite interesting for applications, so\nwe briefly comment on the results. 
We cut off the bifurcation diagram and only consider\nthe region with $\|\chi\|_\I\leq 1$ as $\chi=\pm1$ represent the pure states, and we are interested \nin the transition (or so-called mushy) regime between the pure states. Suppose we start in the \nbifurcation diagram with an (almost) pure \nstate $\chi(x)\approx 1$\nfor all $x\in\Omega$ and then change $\mu$. This leads to a transition sequence involving \nfour folds as shown in Figure \ref{fig:6}. If we interpret $\chi(x)\approx 1$ as a melted\n(or liquid) state, then the solution curve at the first fold assumes a mixed\/mushy state\nwhere solidification starts non-uniformly over the domain, with a focus near the airplane body.\nThis focus reverses as the solution branch is traversed further: by the third fold,\nthe solidification focuses on the tip. In the last step, the system transitions to the almost fully solid state\nafter the fourth fold has been traversed. Although we shall not pursue these ideas here further,\nit is clear that these observations should have interesting engineering \napplications in the design of airplane wings and\/or during a de-icing process. We leave a detailed \nparametric numerical continuation study of melting\/solidification processes to future work. \n\n\section{Perspectives on Software Development}\n\label{sec:software}\n\nAlthough we have successfully demonstrated that gluing computation between FEM, continuation\nand minimax can be efficient and lead to very interesting practical results for elliptic PDEs, \nwe have not commented on several\nimportant practical scientific computing issues. In particular, the question is why the computations\nhere worked in a relatively straightforward way, while gluing computation can become very\ncomplicated in many cases when one tries to track patterns via numerical continuation; the thesis \n\cite{Avitabile} is an excellent example of how challenging such a computational approach can become. 
\nThe following issues seem to be very helpful to keep in mind for further\nsoftware development\footnote{Of course, the various issues are written from the viewpoint\nof a user in applied nonlinear dynamics, who is interested in gluing computations.}:\n\medskip \n\n\textbf{High-level language:} A key ingredient to efficiently glue different parts\nrequired for the computation is to use a high-level programming language. Here \texttt{MatLab} \n\cite{MatLab2013b_base} is used. However, an excellent non-commercial alternative would be \n\texttt{Python} \cite{Python}, which contains aspects designed for gluing computation and is already\nefficiently used in various scientific computing packages for differential equations \n\cite{Doedel_AUTO2007,Hoffmanetal,GuyerWheelerWarren,Cimrman,Herouxetal}. The problem with working directly \nwith FEM, continuation or variational-PDE packages via a fast-computation, but lower-level, programming \nlanguage, {e.g.}~\texttt{C}, \texttt{C++} or \texttt{Fortran}, is that even apparently simple-looking \ntasks, such as transferring data, adding functionality, problem formulation, interlinking of algorithms, \ncross-validation, testing and visualization, become incredibly complicated and \ntime-consuming. To really make a gluing approach work on the basis of a lower-level language, \none has to fully integrate multiple software packages into a single, necessarily constrained,\nenvironment. Although this may be very desirable in standardized industrial applications, it does\nnot adequately represent software development and flexibility requirements in an academic environment.\nOnly a higher-level language provides the required flexibility. Of course, one has to give up a\nsmall amount of computational efficiency but overall, this trade-off seems worthwhile.\medskip \n\nIn this context, it should be noted that such an integration of input-output via a high-level language\nshould probably be made a design principle from the start. 
If wrapper-functionality is added\nlater on, it is frequently still necessary to fully comprehend the underlying low-level code\nto actually use the wrapper.\medskip \n\n\textbf{Algorithmic blocks:} Based upon the previous point, one should make sure to design \nself-consistent blocks of code, which interface\/communicate directly with a high-level language\nbut run in a lower-level, fast language for computational purposes. For example, in the context \nconsidered in this paper one might want to split up the process into the following components:\n\n\begin{enumerate}\n \item[(A1)] problem formulation and definitions;\n \item[(A2)] mesh generation and error-estimator based mesh refinement;\n \item[(A3)] discretization of the PDE, {i.e.}, conversion to a nonlinear algebraic system;\n \item[(A4)] algorithms for generating sets of starting solutions;\n \item[(A5)] numerical continuation and bifurcation detection; \n \item[(A6)] efficient, fast numerical linear algebra;\n \item[(A7)] data analysis and visualization.\n\end{enumerate}\n\nOf course, not all the different steps are necessary for a given class of PDEs or chosen discretization\nalgorithm. For example, sometimes one may omit (A4) due to the availability of starting solutions as for approach (II) \nin Section \ref{sec:test}, or use spectral or other mesh-free methods to generate the nonlinear system\nof algebraic equations to be solved in (A3).\medskip \n\n\textbf{Problem description:} Another natural question is how to formulate the PDE meshing, discretization\nand continuation problem in the high-level gluing language. In this regard, \texttt{pde2path} builds upon the \nsuccessful ideas implemented in \texttt{AUTO}. 
The main idea is to have only very few files that specify all the problem \ndetails completely: one (or two) files to specify the elliptic PDE, one file to initialize all the data (domain, initial guess,\nalgorithmic parameters, {etc.}) and one structure to track the current state of the numerical problem. Those ideas turned out, \nagain, to be extremely robust and helpful for carrying out the gluing computations in this paper. We also completely\nglued the \texttt{minimax}-algorithm \cite{Zhou} to \texttt{pde2path} by using only data from standard \texttt{pde2path} problem \ndescription files in \texttt{minimax}. In principle, this could be implemented permanently in \texttt{pde2path}, but\na new version will be released soon \cite{DohnalRademacherUeckerWetzel}. The variational starting solution approach\nwill be implemented in this new version in future work\footnote{The new version \texttt{p2p2} is not backward compatible\nwith \texttt{pde2path} so we postpone this step to future work. It is expected that compatibility is guaranteed from the version \texttt{p2p2} onwards \cite{DohnalRademacherUeckerWetzel}.}.\medskip\n \n\textbf{PDE discretization:} By now, there are a large number of different software packages available to automatically\ngenerate meshes, nonlinear equations and adaptive mesh refinements for various classes of PDEs; see Section \ref{sec:intro}.\nMany of these software packages would provide excellent tools for a much better understanding of dynamics and patterns in a wide\nvariety of applications. In fact, the barrier does not lie in the packages themselves but in the difficulty\nof accessing the problem formulation. For example, it is relatively rare that one can simply \nprovide a problem description and obtain, \emph{in a simple format}, the discretized nonlinear equations as an output; of course, it is\nalways \emph{possible} but the goal is to make gluing computation \emph{easy}. 
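To make the last point concrete: the ideal output of step (A3) is little more than a residual $F(u,\mu)$ and its Jacobian, exposed as plain callables that any continuation or root-finding code can consume directly. The following minimal sketch (illustrative Python; a one-dimensional caricature of the Lane-Emden-Fowler test problem, with function names of our own choosing, not the \texttt{pde2path} interface) shows the idea:

```python
import numpy as np

def make_residual(n=50, gamma=0.0):
    """Discretize 0 = u'' + mu*u + u^3 - gamma on (0,1) with u(0)=u(1)=0
    by central differences; returns F(u, mu) and its Jacobian J(u, mu).
    A schematic stand-in for the output of step (A3)."""
    h = 1.0 / (n + 1)
    # second-difference matrix on the n interior nodes
    D2 = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
          + np.diag(np.full(n - 1, 1.0), 1)) / h**2

    def F(u, mu):
        return D2 @ u + mu * u + u**3 - gamma

    def J(u, mu):
        return D2 + mu * np.eye(n) + np.diag(3.0 * u**2)

    return F, J

def newton(F, J, u0, mu, tol=1e-10, maxit=25):
    """Plain Newton iteration, i.e. the core of steps (A5)/(A6)."""
    u = u0.copy()
    for _ in range(maxit):
        r = F(u, mu)
        if np.linalg.norm(r) < tol:
            break
        u -= np.linalg.solve(J(u, mu), r)
    return u

F, J = make_residual()
u = newton(F, J, np.zeros(50), mu=5.0)  # converges to the trivial branch
```

Because the residual and Jacobian are plain functions, swapping in a different continuation core, e.g.\ a pseudo-arclength loop, amounts to passing these two callables along; this is precisely the decoupling of (A3) and (A5) advocated above.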
However, there seems to be significant progress in this \ndirection, {e.g.}~see \cite{Hoffmanetal}. The demonstration we have given here for three problems also strongly points towards the conjecture\nthat practical gluing computations for dynamical systems analysis of PDEs will become a lot easier in the very near future.\medskip\n\n\textbf{Continuation algorithms:} The main barrier to making continuation algorithms more applicable to spatial\nproblems was to move beyond the class of two-point BVPs and provide a more generic package suitable for PDEs.\nThis is one current disadvantage of the \texttt{pde2path} focus we considered in this paper, since \texttt{pde2path} links \ninternally a continuation algorithm directly to the MatLab \texttt{pdetoolbox}. Therefore, it is not easy in practice \nto replace steps (A3) and (A5), as they are tied together in \texttt{pde2path}. However, this could be \nresolved by using a generic continuation toolbox, such as the recently developed continuation core \texttt{COCO} \n\cite{DankowiczSchilder} or certain tools for large-dimensional linear systems such as \texttt{LOCA} \cite{Salingeretal} or other recently developed codes \cite{Bindeletal}. The main point is that, in the future, one must \ntake advantage of advanced discretization schemes from numerical analysis to reduce the computational linear algebra\neffort more efficiently. Overall, there is also significant recent progress in this direction supporting the conjecture stated in the last paragraph.\n\n\section{Summary}\n\label{sec:summary}\n\nIn this paper we have addressed the issue of gluing computation at the interface between numerical\nPDEs, continuation methods, scientific computing and variational theory in the context of elliptic PDEs for \nthree different equations. We \nhave shown that efficient integration of various algorithms within a high-level\nprogramming framework is possible. 
A combination of \\texttt{pde2path}, the MatLab \\texttt{pdetoolbox} \nand a \\texttt{minimax}-algorithm has been glued and cross-validated using the Lane-Emden-Fowler equation \nwith a linear term as a test problem. As a second step, we argued\nthat a natural symmetry-breaking mechanism, which is not based upon the domain geometry, could be localized asymmetric\nmicroscopic forces which also arise in generalized Ginzburg-Landau\/Allen-Cahn equations. We found\ninteresting inverse-C-shaped bifurcation curves. There is numerical evidence to conjecture that these curves are isolas, which could be analytically captured in the regime of a very small microforce. Along the bifurcation curves, complicated deformation of patterns can already take place under a simple near-localized external microforce. Then we \nproposed to study as a third example the Caginalp system for melting-solidification processes\non a wing-shaped geometry. In this case, the variational \\texttt{minimax}-algorithm could not directly be glued despite\nthe existence of multiple solutions at fixed parameters. This illustrated the limitations of the process. For the Caginalp system, we also computed an interesting phase transition diagram for melting-solidification processes, which pointed towards potentially useful aerospace and engineering applications. Finally, we also \ncommented on scientific computing issues encountered during the process.\\medskip \n\n\\textbf{Acknowledgments:} I would like to thank the Austrian Academy of Sciences ({\\\"{O}AW})\nfor support via an APART fellowship. I also acknowledge support of the European Commission \n(EC\/REA) via a Marie-Curie International Re-integration Grant. I would like to thank John\nGuckenheimer, Hannes Uecker and Mathieu Desroches for very helpful comments on earlier versions of this \nmanuscript. 
Furthermore, I would like to thank two anonymous referees, whose comments have helped to improve \nthe paper.\n\n\\section{References}\n\n{\\small \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWith somewhat imprecise boundaries, interstellar clouds are generally\nclassed as diffuse, \\mbox{A$_{\\rm V}$} $\\la$ 1 mag, or dark, \\mbox{A$_{\\rm V}$} $\\ga$ 4-6 mag, with an\nintermediate translucent regime \\citep{SnoMcC06}. In diffuse clouds the\ndominant form of carbon is C\\mbox{$^+$}\\ and hydrogen is mostly atomic, although\nwith a very significant overall admixture of \\mbox{H$_2$}, 25\\% or more as a global\naverage \\citep{SavDra+77,LisPet+10}. In dark or molecular clouds the\ncarbon is overwhelmingly in CO with an admixture of C I and the hydrogen\nresides almost entirely in \\mbox{H$_2$}. The population of diffuse clouds is\nsometimes called H I clouds in radio astronomical terms.\n\nThe shadows of dark clouds are seen outlined against brighter background\nfields and the clouds themselves are often imaged in the mm-wave emission\nof CO and many other species: the {\\emr{^{12}CO}{}(1-0) sky \\citep{DamHar+01} is\n usually (and in part incorrectly, see below) understood as a map of\n fully-molecular clouds. The shadows of H I or diffuse clouds are their\n absorption-line spectra and for the most part, individual diffuse clouds\n are known only as kinematic features in optical and\/or radio absorption\n spectra. No means exist to image individual diffuse clouds at optical\n wavelengths and attempts to map individual H I clouds at radio\n wavelengths are generally frustrated by the blending and overlapping of\n contributions from multiple clouds and gas phases. 
This lack of identity\n has greatly complicated our ability to define diffuse clouds physically\n because absorption lines do not generally permit a direct determination\n of the cloud size or internal density.\n \n When diffuse clouds discovered in absorption-line spectra have a\n sufficiently high complement of molecules, they may be imaged at radio\n wavelengths in species such as OH and CH and, most usefully, CO. Despite\n a low fractional abundance of CO relative to \mbox{H$_2$}, $\mean{{\rm X(CO)}} =\n 3\times 10^{-6}$ \citep{BurFra+07}, mapping is facilitated by an enhanced\n brightness of the J=1-0 line in diffuse gas: the temperature is somewhat\n elevated (\Tsub K $\ga 25$ K), the density is comparatively small at typical\n ambient thermal pressure \citep{JenTri11} and the rotation ladder is\n subthermally excited. In accord with theory \citep{GolKwa74}, it is\n found observationally that there is a simple, linear proportionality\n between the integrated intensity \mbox{W$_{\rm CO}$}\ of the CO J=1-0 lines and the CO\n column density, even when the gas is optically thick: N(CO) $\approx\n 10^{15}~{\rm cm}^{-2}$ \mbox{W$_{\rm CO}$}\/\emr{\,K\,km\,s^{-1}} for \mbox{W$_{\rm CO}$}\ $\approx 0.2 - 6$ \emr{\,K\,km\,s^{-1}}\n \citep{LisLuc98,Lis07CO}. Per molecule, the ratio \mbox{W$_{\rm CO}$}\/N(CO) is 30-50\n times higher in diffuse gas, compared to conditions in dense shielded\n fully-molecular gas where the rotation ladder is thermalized\n \citep{LisPet+10}. Of course this is of substantial assistance in the\n present work. Conversely, the high brightness (5-12 K) of many of the lines\n we detected should \emph{not} be taken as discrediting their origin in\n diffuse gas.\n\n\begin{figure*}\n \includegraphics[height=11.75cm]{FindingChart}\n\caption[]{Outer-galaxy finding chart for the sources studied here except \nB1730-130 at $l = 12$\mbox{$^{\rm o}$}, see Table 1. 
The colored background is\n reddening at 6\\arcmin\\ resolution \\citep{SchFin+98} truncated at a\n maximum of 2.4 mag as shown on the bar scale at left. Positions of\n continuum sources are indicated (see Table 1) along with a few other\n objects: the high-latitude molecular clouds MBM23 and MBM 53\n \\citep{MagBli+85}; TMC-1; SAMS 1 and 2 \\citep{Hei04}; and two Perseus\n stars commonly used for optical absorption line studies.}\n\\end{figure*}\n\n\\begin{table*}\n\\caption[]{Continuum target, line of sight and map field properties$^1$}\n{\n\\small\n\\begin{tabular}{lccccccccccc}\n\\hline\nTarget & ra & dec & l & b & Map & \\mbox{E$_{\\rm B-V}$}$^2$ & N (H I)$^3$ & N(\\mbox{H$_2$})$^4$ & \\mbox {f$_{{\\rm H}_2}$}$^7$ & \\mbox{W$_{\\rm CO}$} & $<$\\mbox{W$_{\\rm CO}$}$>$ \\\\\n & (J2000) & (J2000) & & & size & mag & $10^{20}~{\\rm cm}^{-2}$ & $10^{20}~{\\rm cm}^{-2}$ & & \\emr{\\,K\\,km\\,s^{-1}} & \\emr{\\,K\\,km\\,s^{-1}}\\\\\n\\hline\nB0736+017 & 07:39:18.03 & 01:37:04.6 & 216.99 & 11.38 & 15\\arcmin &0.13 & 7.7 & 3.3 & 0.46 & 0.8 & 0.4\\\\\nB0954+658 & 09:58:47.24 & 65:33:54.7 & 145.75 & 43.13 & 30\\arcmin & 0.12 & 5.3$^5$ & 4.8 & 0.64 & 1.6 & 0.6\\\\\nB1730-130 & 17:33:02.66 & -13:04:49.5 & 12.03 & 10.81 & 30\\arcmin & 0.53 & 28.3 & 4.3 & 0.23 & 0.4 & 0.4\\\\\nB1928+738 & 19:27:48.58 & 73:58:01.6 & 105.63 & 23.54 & 20\\arcmin & 0.13 & 7.2$^5$ & 2.7 & 0.43 & $<$0.2 & 0.0 \\\\\nB1954+513 & 19:55:42.69 & 51.31:48.5 & 85.30 & 11.76 & 30\\arcmin & 0.15 & 12.7$^5$ & 5.0 & 0.44 & 1.6 & 0.3 \\\\\nB2251+158 & 22:53:57.71 & 16:08:53.4 & 86.11 & -38.18 & 30\\arcmin &0.10 & 4.6 & 1.2 & 0.34 & 0.8 & 0.2 \\\\ \n\\hline\nB0528+134 & 05:30:46.41 & 13:31:55.1 & 191.37 & -11.01 & 30\\arcmin & 0.89 & 30.9 & 7.9 & 0.34 & 2.2 & 2.6\\\\\nB2200+420 & 22:02:43.24 & 42:16:39.9 & 92.59 & -10.44 & 30\\arcmin & 0.33 & 9.7 & 8.8 & 0.66 & 5.8 & 3.7\\\\\n\\hline\nB0212+735 & 02:17:30.81 & 73:49:32.6 & 128.93& 11.96 & 30\\arcmin & 0.76 & 32.1 & 18.6 & 0.54 & 5.8 & 1.56\\\\\nB0224+671 & 
02:28:50.03 & 67:21:31.3 &132.12 & 6.23 & 30\\arcmin &1.00 & 38.3 & 9.2 & 0.32 &1.9 & 4.0 \\\\\nB0355+508 & 03:59:29.73 & 50:57:50.1 &150.38 &-1.60 &30\\arcmin x 50\\arcmin & (1.50)$^6$ & 111.3 & 24.2 & 0.30 \n& 14.3 & 4.2\\\\\n\\hline\nMean & & & & & & 0.41 & 17.7 & 6.58 & 0.43 & 2.09 & 1.38 \\\\\n\\hline\n\\end{tabular}}\n\\\\\n$^1$Sources are placed in three groups according to their discussion in Sect. 3, 4 and 5 \\\\\n$^2$from \\cite{SchFin+98} \\\\\n$^3$ N(H I) = $2.6 \\times 10^{20}~{\\rm cm}^{-2} \\int{\\tau(H I)}dv$ (see Sect 2) except where noted \\\\\n$^4$ N(\\mbox{H$_2$}) = N(\\hcop)\/$3\\times10^{-9}$ see Sect. 2. \\\\\n$^5$ from \\cite{HarBur97}, N(H I) = $1.823 \\times 10^{18}~{\\rm cm}^{-2} \\int{\\Tsub B (H I)}dv$ \\\\\n$^6$ at such a low galactic latitude \\mbox{E$_{\\rm B-V}$}\\ is not reliably determined \\\\\n$^7$ \\mbox {f$_{{\\rm H}_2}$} = 2N(\\mbox{H$_2$})\/(N(H I) + 2N(\\mbox{H$_2$}))\n\\\\\n\\end{table*}\n\nEarlier we showed that, in the mean, CO-\\mbox{H$_2$}\\ conversion factors are similar\nin diffuse and dense fully molecular gas \\citep{LisPet+10}, because the\nsmall abundance of CO relative to \\mbox{H$_2$}\\ in diffuse gas is compensated by a\nmuch higher brightness per CO molecule. But the proportionality between\n\\mbox{W$_{\\rm CO}$}\\ and N(CO) in diffuse gas, where CO represents such a small fraction\nof the available gas phase carbon, means that the CO map of a diffuse cloud\nis really an image of the CO chemistry. Moreover the CO abundance exhibits\nextreme sensitivities to local conditions that are manifested as order of\nmagnitude scatter in N(CO)\/N(\\mbox{H$_2$}) in optical absorption line studies\n\\citep{SonWel+07,BurFra+07,SheRog+07,SheRog+08}, even beyond the\noften-rapid variation of N(\\mbox{H$_2$}) with \\mbox{E$_{\\rm B-V}$}\\ \\citep{SavDra+77} (\\mbox{E$_{\\rm B-V}$}\\ \n$\\approx$ \\mbox{A$_{\\rm V}$}\/3.1). 
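For reference, the mean conversion factors used throughout this work can be collected in a few lines; the numerical values are those quoted above and in Table 1, while the function names are ours, purely for illustration:

```python
# Mean conversion factors for diffuse gas quoted in the text and Table 1;
# the function names are illustrative, not from any published pipeline.

N_CO_PER_WCO = 1.0e15   # cm^-2 per K km/s (Liszt & Lucas 1998)
X_CO = 3.0e-6           # mean N(CO)/N(H2) (Burgh et al. 2007)
NH_PER_EBV = 5.8e21     # cm^-2 of H nuclei per mag of E(B-V) (Savage et al. 1977)

def n_co(w_co):
    """CO column density (cm^-2) from integrated J=1-0 intensity (K km/s)."""
    return N_CO_PER_WCO * w_co

def n_h2_from_co(w_co):
    """Implied H2 column density, using the mean CO abundance."""
    return n_co(w_co) / X_CO

def n_h_from_ebv(ebv):
    """Total column of H nuclei, N(H I) + 2 N(H2), from reddening (mag)."""
    return NH_PER_EBV * ebv

# Example: W_CO = 1 K km/s corresponds to N(CO) = 1e15 cm^-2 and,
# in the mean, N(H2) of about 3.3e20 cm^-2.
```

Given the order-of-magnitude scatter in N(CO)/N(\mbox{H$_2$}) discussed above, these relations are ensemble averages; applied to an individual sightline they may be off by a factor of several.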
The net result is that the CO emission map of a\ndiffuse cloud can only indirectly be interpreted as tracing the underlying\nmass distribution, or even that of the \\mbox{H$_2$}. Nonetheless, it should (we\nhope) provide some impression of the nature of the host gas, especially in\nthe absence of any other means of ascertaining this.\n\nIn this paper we present maps of CO J=1-0 emission at arcminute resolution\nover 11 sky fields, typically 30\\arcmin $\\times$ 30\\arcmin\\ around the\npositions of compact extragalactic mm-wave continuum sources that we have\nlong used as targets for absorption line studies of the chemistry of\ndiffuse clouds. As is the case for nearly all background sources seen at\ngalactic latitudes $|b| < 15-18$\\mbox{$^{\\rm o}$}, and for some sources at higher\nlatitudes, the current targets were known to show absorption from \\hcop\\ \nand from one or more other commonly-detected species (OH, CO, \\mbox{C$_2$H}, \\mbox{C$_3$H$_2$});\nmost but not all directions also were known to show CO emission in at least\nsome of the kinematic features present in absorption.\n\nThis work is organized as follows. The observational material discussed\nhere is summarized in Sect. 2. In Sects. 3-5 we discuss the new maps with\nsources grouped in order of kinematic complexity. Sect. 6 is an\nintermediate summary of the lessons drawn from close scrutiny of the maps.\nSect. 7 briefly discusses the influences of galactic and internal cloud\nkinematics and Sect. 8 presents a comparison of CO intensity and reddening\nwithin a few of the simpler individual fields. Online Appendix A shows a\nfew position-velocity diagrams that, while of interest, could be considered\nredundant with those shown in the main text in Figs. 13 and 14. 
Figures\nB.1 and B.2 in online Appendix B show the target lines of sight in the\ncontext of large-scale galactic kinematics sampled in H I emission.\n\n\section{Observational material}\n\n\subsection{CO J=1-0 emission}\n\nOn-the-fly maps of CO J=1-0 emission were made at the ARO 12m telescope in\n2008 December, 2009 January and 2009 December in generally poor weather\nusing filter banks with 100 kHz or 0.260 \mbox{km\,s$^{-1}$}{} channel spacing and\nspectral resolution. System temperatures were typically $450 - 750$ K.\nThe data were subsequently put onto 20\mbox{$^{\prime\prime}$}\ pixel grids using the AIPS\ntasks OTFUV and SDGRD; the final spatial resolution is 1\arcmin. Most maps\nare approximately 30\arcmin\ $\times$ 30\arcmin\ on the sky and were\ncompleted in 4-5 hours total observing time. The new CO emission data are\npresented in terms of the \mbox{${\rm T}_{\rm R}^*$}\ scale in use at the 12m antenna and all\nvelocities are referred to the kinematic Local Standard of Rest. The\ntypical rms channel-channel noise in these maps at 1\arcmin\ and 0.26 \mbox{km\,s$^{-1}$}\ \nresolution is 0.4-0.5 K; their sensitivity is rather moderate and the\ndetectability limit is of order 1 \emr{\,K\,km\,s^{-1}}{} for a single line component.\n\nMore sensitive CO J=1-0 line profiles at higher spectral resolution (25\nkHz) had been previously observed toward the continuum sources as part of\nour survey efforts, for instance see \cite{LisLuc98}. It is these profiles\nthat are displayed in the Figures shown here representing emission in the\nspecific direction of the background target and used to calculate line\nprofile integrals as quoted in Table 1.\n\nMany interstellar clouds lie at distances of about 150 pc from the Sun,\njust outside the Local Bubble. 
At this distance the 1\\arcmin\\ resolution\nof our CO mapping corresponds to 0.041 pc.\n\n\\subsection{H I absorption and emission}\n\nThe $\\lambda$ 21cm H I absorption spectra shown here are largely from the\nwork of \\cite{DicKul+83}, augmented by a few spectra taken at the VLA in\n2005 May. The spectral resolution of these data is 0.4-1.0 \\mbox{km\\,s$^{-1}$}.\n\nFigures B.1 and B.2 of the online Appendix B show latitude-velocity\ndiagrams of H I emission drawn from the Leiden-Dwingeloo Survey of\n\\cite{HarBur97}.\n\n\\subsection{Molecular absorption}\n\nAlso shown here are spectra of $\\lambda 18$cm OH absorption from\n\\cite{LisLuc96} and mm-wave absorption spectra of CO \\citep{LisLuc98},\n\\hcop\\ \\citep{LucLis96}, \\mbox{C$_2$H}\\ \\citep{LucLis00C2H} and \\mbox{H$_2$CO}\\ \n\\citep{LisLuc+06}.\n\n\\subsection{Reddening}\n\nMaps of reddening were constructed from the results of \\cite{SchFin+98}.\nThis dataset has 6\\arcmin\\ spatial resolution on a 2.5\\arcmin\\ pixel grid.\nThe stated single-pixel error is 16\\% of the pixel value.\nOn average, 1 mag of reddening corresponds to a neutral gas column N(H) $=\n5.8 \\times 10^{21}~{\\rm cm}^{-2}$ \\citep{SavDra+77}.\n\n\\subsection{Target fields}\n\nThe positions and other observational properties are summarized in Table 1,\nwhere the sources are grouped according to their order of presentation in\nSects. 3, 4 and 5. The groups appear in order of increasing reddening and\ngas column density and decreasing distance from the galactic plane. The\nline profile integrals \\mbox{W$_{\\rm CO}$}\\ quoted in Table 1 result from the more\nsensitive earlier observations noted in Sect. 2.1. The mean values quoted\nfor \\mbox{W$_{\\rm CO}$}\\ along individual sightlines are averages over the new map data\ntaken for this work.\n\nTable 2 gives some pixel statistics about noise levels and spatial covering\nfactors as discussed in Sect. 
8.\n\n\\subsection{Presentation of observations}\n\nA finding chart including all sources except B1730-130 is shown in Fig. 1\nwhere the locations of the background targets are shown on a large-scale\nmap of reddening, along with locations of a few other landmark objects as\nnoted in the figure caption.\n\nMaps and spectra of the target fields and background sources are shown in\nFigs. 2-12. Within each of three groups, sources appear in order of\nincreasing right ascension. Members of the first group, shown in Figs. 2-7\nand discussed in Sect. 3, are the simplest kinematically. Figs. 8 and 9\nshow the fields around the background sources B0528+134 and B2200+420 (aka\nBL Lac) that are also kinematically simple but are heavily patterned and\nrather bright in CO emission; these are discussed in Sect. 4. Figs. 10-12\n(Sect. 5) show the results over three target fields with rather amorphous\nstructure whose kinematics are too complex to fit into the framework in\nwhich the data for the other sources are presented in earlier figures. Two\nof these sources (B0212+735 and B0224+671) are relatively near each other\non the sky and sample similar galactic structure while the third target\nB0355+508 (aka NRAO150) is the only source within 2\\mbox{$^{\\rm o}$}\\ of the galactic\nequator (see Table 1).\n\nThe format of Figures 2-11 is: at upper left a 90\\arcmin\\ map of \\mbox{E$_{\\rm B-V}$}\\ from\nthe dataset of \\cite{SchFin+98}, with an inset showing the field of view\nmapped in CO, typically 30\\arcmin\\ on a side; at lower left a map of \\mbox{W$_{\\rm CO}$};\nat lower right various atomic (H I) and molecular absorption spectra\nshowing the kinematic structure toward the background source; at upper\nright, CO emission spectra of various sorts as depicted in the figure\ncaptions. The absorption spectra shown at lower right in these figures are\nsomewhat inhomogeneous because not all sources have the same full\ncomplement of profiles. 
In general, H I is at the bottom wherever possible\nand above it are spectra of the most common molecules observed in\nabsorption: \\hcop\\ (observed toward all targets), OH, \\mbox{C$_2$H}\\ and\/or CO. The\nuppermost spectrum wherever possible is a species like \\mbox{H$_2$CO}\\ or HNC\n\\citep{LisLuc+06,LisLuc01} that is detected less commonly and is indicative\nof greater chemical complexity.\n\n\\begin{figure*}\n \\includegraphics[height=13cm]{B0736Slide}\n\\caption[]{The sky field around the position of B0736+017. Upper left:\n reddening at 6\\arcmin\\ resolution \\citep{SchFin+98}. Lower left:\n integrated CO emission \\mbox{W$_{\\rm CO}$}\\ in units of K \\mbox{km\\,s$^{-1}$}; contour levels are at\n 1, 2, ... K \\mbox{km\\,s$^{-1}$}. Lower right: absorption line profiles, scaled as noted.\n Upper right: {\\bf CO emission} toward B0736+017 and at the peak of the\n nearby CO distribution.}\n\\end{figure*}\n\nAlso shown for all sources are CO emission spectra at various locations in\nthe field mapped, as indicated in the spectra. 
More complex aspects of the\npresentation are discussed in the individual figure captions.\n\n\\begin{table}\n\\caption[]{Noise levels and spatial covering factors}\n{\n\\small\n\\begin{tabular}{lccccc}\n\\hline\nTarget & V & $\\sigma_{\\rm profile}$ & $\\sigma_{\\rm map}$ & f$_{>1}^a$ & f$_{>2}^b$\\\\\n & \\mbox{km\\,s$^{-1}$} & K & \\emr{\\,K\\,km\\,s^{-1}} & & \\\\\n\\hline\nB0736& 5.1,7.2 & 0.43 & 0.48 & 0.18 & 0.07 \\\\\nB0954& 2.6,5.2 & 0.25 & 0.33 & 0.20 & 0.12 \\\\\nB1730& 4.0,6.1 & 0.36 & 0.52 & 0.20 & 0.03 \\\\\nB1928& -4.2,-0.8 & 0.46 & 0.52 & 0.03 & 0.00 \\\\\nB1954& -1.0,2.4 & 0.33 & 0.35 & 0.20 & 0.13 \\\\\nB2251& -10.8,-7.9 & 0.28 & 0.32 & 0.06 & 0.02 \\\\\n\\hline\nB0528& 0.6,3.7 & 0.23 & 0.36 & 0.07 & 0.05 \\\\\n & 8.9,11.7 & & 0.33 & 0.69 & 0.46 \\\\\nB2200& -3.8,2.8 & 0.39 & 0.63 & 0.61 & 0.50 \\\\\n\\hline\nB0212& -14.1,-7.9 & 0.30 & 0.80 & 0.26 & 0.11 \\\\\n & -1.4,1.0 & & 0.36 & 0.03 & 0.01 \\\\\n & 1.3,4.9 & & 0.66 & 0.34 & 0.19 \\\\\nB0224& -17.1,-11.9 & 0.43 & 0.76 & 0.14 & 0.03 \\\\\n & -11.6,-9.2 & & 0.60 & 0.33 & 0.20 \\\\\n & -9.2,-5.1 & & 0.56 & 0.39 & 0.20 \\\\\n & -4.9,-2.0 & & 0.47 & 0.47 & 0.33 \\\\\n & -2.0,-0.2 & & 0.39 & 0.28 & 0.05 \\\\\n & -0.2,2.0 & & 0.42 & 0.09 & 0.03 \\\\\nB0355& -19.2,-15.9 & 0.35 & 0.63 & 0.18 & 0.07 \\\\\n & -15.8,-11.7 & & 0.92 & 0.43 & 0.28 \\\\\n & -11.1,-9.8 & & 0.59 & 0.34 & 0.18 \\\\\n & -9.6,-7.3 & & 0.62 & 0.34 & 0.10 \\\\\n & -6.0,-1.4 & & 0.79 & 0.17 & 0.04 \\\\\n\\hline\n\\end{tabular}}\n\\\\\n$^a$ fraction of mapped area with \\mbox{W$_{\\rm CO}$}\\ $\\geq 1$ \\emr{\\,K\\,km\\,s^{-1}} \\\\\n$^b$ fraction of mapped area with \\mbox{W$_{\\rm CO}$}\\ $\\geq 2$ \\emr{\\,K\\,km\\,s^{-1}} \\\\\n\\end{table}\n\n\\subsection{Molecular gas properties in the current sample}\n\nThe sightlines studied here were selected on the basis of their known\n\\hcop\\ absorption spectra, creating the possibility that the sample is\nbiased to large molecular fractions and\/or strong CO emission. 
However, it\nwas earlier noticed in a flux-limited survey \\citep{LucLis96}, not based on\nprior knowledge of CO emission, that very nearly all sightlines at galactic\nlatitudes within about 15\\mbox{$^{\\rm o}$}\\ of the galactic equator show \\hcop\\ \nabsorption. Our present tally, slightly extending the earlier result, is\nthat \\hcop\\ absorption occurs toward 19 of 19 sources at $|b| \\la 12$\\mbox{$^{\\rm o}$},\ntoward 22 of 25 sources at $|b| \\la 18$\\mbox{$^{\\rm o}$}\\ and toward 4 out of 12\nsources at $|b| \\ga 23$\\mbox{$^{\\rm o}$}, including three shown here. Thus it is a\nnear certainty that \\hcop\\ absorption would be detected over the entirety\nof the sky fields mapped here below about 15\\mbox{$^{\\rm o}$}{}-18\\mbox{$^{\\rm o}$}, no matter what\nthe covering factor of detectable CO emission may be. This is discussed in\nSect. 8, immediately following the more descriptive portions of the text.\n\nIf we discuss the mean properties of the ten sightlines in Table 1 having\nreliably determined \\mbox{E$_{\\rm B-V}$}\\ (all except B0355+508, which lies too near the\ngalactic plane) in the same terms that we used earlier to derive the mean\nCO-\\mbox{H$_2$}\\ conversion factor in diffuse gas \\citep{LisPet+10}, we derive an\nensemble average\n\n\\begin{eqnarray*}\n \\frac{N(\\mbox{H$_2$})}{\\mbox{W$_{\\rm CO}$}} \n &=& \\frac{5.8\\times 10^{21}~{\\rm cm}^{-2}<\\mbox{E$_{\\rm B-V}$}> - <{\\rm N(H\\,I)}>}{<2\\mbox{W$_{\\rm CO}$}>} \\\\\n &=& 1.52 \\times 10^{20}~{\\rm cm}^{-2}(\\emr{\\,K\\,km\\,s^{-1}})^{-1},\n\\end{eqnarray*}\ni.e., 25\\% smaller than the previous result found in a larger sample. In the\nsame terms, the mean atomic gas fraction is $<$N(H I)$>$\/$<$N(H)$>$ =\n0.74, as opposed to 0.65 found earlier.\n\nEstimates of N(\\mbox{H$_2$}) based on assuming the ensemble-average value\n\\citep{LisPet+10} N(\\hcop)\/N(\\mbox{H$_2$}) $= 3\\times 10^{-9}$ along each line of\nsight are also given in Table 1. 
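The decomposition used above can be sketched numerically. The sketch below applies the E(B-V)-N(H) calibration and the N(\hcop)\/N(\mbox{H$_2$}) ratio quoted in the text to a small set of sightlines; the per-sightline values are illustrative placeholders, not the entries of Table 1.

```python
# Sketch of the ensemble averages described above. The calibration
# N(H) = 5.8e21 cm^-2 mag^-1 of E(B-V) and the abundance ratio
# N(HCO+)/N(H2) = 3e-9 come from the text; the per-sightline values
# below are illustrative placeholders, NOT the entries of Table 1.

EBV = [0.10, 0.32, 0.86]          # mag, reddening (hypothetical)
NHI = [4.0e20, 9.0e20, 2.5e21]    # cm^-2, H I column (hypothetical)
WCO = [1.0, 6.0, 10.0]            # K km/s, integrated CO emission (hypothetical)
NHCOP = [1.2e12, 2.0e12, 3.0e12]  # cm^-2, HCO+ column (hypothetical)

NH = [5.8e21 * e for e in EBV]                      # N(H) from scaled E(B-V)
NH2 = [(nh - nhi) / 2 for nh, nhi in zip(NH, NHI)]  # N(H2) by subtracting N(H I)

# Ensemble-average CO-H2 conversion factor <N(H2)>/<W_CO>
X_CO = (sum(NH2) / len(NH2)) / (sum(WCO) / len(WCO))

# Mean atomic gas fraction <N(H I)>/<N(H)>
f_atomic = sum(NHI) / sum(NH)

# Chemistry-based molecular fraction <2N(H2)>/<N(H I)+2N(H2)>,
# taking N(H2) = N(HCO+)/3e-9 for each sightline
NH2_chem = [n / 3e-9 for n in NHCOP]
f_H2 = 2 * sum(NH2_chem) / (sum(NHI) + 2 * sum(NH2_chem))

print(f"X_CO = {X_CO:.3e} cm^-2 (K km/s)^-1")
print(f"<N(H I)>/<N(H)> = {f_atomic:.2f}, f_H2 = {f_H2:.2f}")
```

Note that the averages are taken over the ensemble (ratio of means), not as a mean of per-sightline ratios, matching the construction of the equation above.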
These chemistry-based estimates indicate higher molecular fractions\nand somewhat higher total column densities N(H) than are found from the\ndecomposition discussed just above, in which N(H) is taken as the scaled\n\\mbox{E$_{\\rm B-V}$}\\ and N(H I) is subtracted from it. Specifically,\nthe chemistry-based ensemble average is $<$\\mbox{f$_{{\\rm H}_2}$}$>$ = $<$2N(\\mbox{H$_2$})$>$\/\n$<$N(H I)+2N(\\mbox{H$_2$})$>$ = 0.43.\n\n\\section{Six simple fields at moderate-high latitude}\n\n\\subsection{B0736+017 $(b \\sim 11\\mbox{$^{\\rm o}$})$}\n\nThe 15\\arcmin\\ sky field around B0736+017 shown in Fig. 2 is the smallest\nand kinematically simplest field studied; the map was made on the spur of\nthe moment in a relatively brief open period between two other larger maps.\nThe reddening is modest over the area of the CO map shown in Fig. 2, \\mbox{E$_{\\rm B-V}$}\n$\\la$ 0.165 mag, but the molecular fraction implied by the entries in Table\n1 is of order 30-50\\%. Toward the source the integrated CO is fairly weak,\n$< 0.8$ \\emr{\\,K\\,km\\,s^{-1}}{}, but a very slightly blue-shifted 4.5 K line is found within\njust a few arcminutes.\n\n\\subsection{B0954+658 $(b \\sim 43\\mbox{$^{\\rm o}$})$}\n\n\\begin{figure*}\n \\includegraphics[height=13cm]{B0954Slide}\n\\caption[]{The sky field around the position of B0954+658, as\n in Fig. 2.}\n\\end{figure*}\n\n\\begin{figure*}\n \\includegraphics[height=13cm]{B1730Slide}\n\\caption[]{The sky field around the position of B1730-130, as\n in Fig. 2. The emission profile labelled ``nw quadrant'' is an average\n over that portion of the map. The emission profile labelled $<$otf {\\bf\n CO}$>$ is the mean over the entire region mapped in CO.}\n\\end{figure*}\n\n\\begin{figure*}\n \\includegraphics[height=13cm]{B1928Slide}\n\\caption[]{The sky field around B1928+738. The emission profile at \n upper right labeled $<$otf {\\bf CO}$>$ is the mean over the 20\\arcmin\n $\\times$ 20\\arcmin\\ region mapped in CO. 
}\n\\end{figure*}\n\nThis source (Fig. 3) is seen at the upper tip of the Polaris Flare near the\nlocation of the M81 group and the so-called small area molecular structures\nmapped by \\cite{Hei04} (see the finding chart in Fig. 1). As toward\nB0736+017, a fairly strong CO line, 5 K, is seen within a few arcminutes of\nthe background target where the CO brightness is more modest, 2 K.\n\nThe reddening is very moderate but a high molecular fraction, above 50\\%,\nis suggested by comparing the value of N(\\mbox{H$_2$}) in Table 1 with N(H I) or\n\\mbox{E$_{\\rm B-V}$}. Consistent with this high molecular fraction and the relative\nsimplicity of a higher-latitude line of sight, this field is the only one\nstudied in which there is a strong proportionality between \\mbox{E$_{\\rm B-V}$}\\ and \\mbox{W$_{\\rm CO}$},\nas discussed in Sect. 8. The complement of supporting material is\ndisappointingly slender.\n\n\\subsection{B1730-130 $(b \\sim 11\\mbox{$^{\\rm o}$})$}\n\nThe reddening is relatively large over this field (Fig. 4) and the \\hcop\\ \nabsorption is strong, implying a molecular fraction of order one-third, but\nCO emission is very weak toward the background continuum source (\\mbox{W$_{\\rm CO}$}\\ =\n0.4 \\emr{\\,K\\,km\\,s^{-1}}) and absent over most of the field mapped. Much stronger but\nstill rather weak emission (1.5 K) is seen 15\\arcmin\\ to the northwest as\nindicated in Fig. 4. All of the absorption profiles shown in Fig. 4 have\na redward wing that is separately visible in the two spatially-averaged CO\nprofiles shown at upper right.\n\n\\subsection{B1928+738 $(b \\sim 23.5\\mbox{$^{\\rm o}$})$}\n\nCO emission is absent over the entire 20\\arcmin\\ field shown in Fig. 5\ndespite the presence of fairly strong \\hcop\\ absorption and a suggested\nmolecular fraction approaching 40\\%. 
The reddening is modest,\napproximately 0.15 mag around the target, but the implied CO-\\mbox{H$_2$}\\ \nconversion factor is very large; from the entries in Table 1 we have\nN(\\mbox{H$_2$})\/\\mbox{W$_{\\rm CO}$}\\ $> 2.5\\times 10^{21}~{\\rm cm}^{-2}(\\emr{\\,K\\,km\\,s^{-1}})^{-1}$ at the $2\\sigma$ level.\n\n\\subsection{B1954+513 $(b \\sim 12\\mbox{$^{\\rm o}$})$}\n\nThe reddening in the field around this source is modest, 0.18 mag (see Figs.\n6 and 17), and CO emission toward the background continuum source is\nunimpressive (2 K), but the \\hcop\\ absorption is strong and the molecular\nfraction is of order 40\\%. Two kinematic components are found in the field\nwith 4.5 K peak brightness, only one of which is seen toward B1954 in\neither emission or absorption.\n\n\\subsection{B2251+158, aka 3C454.3 $(b \\sim -38\\mbox{$^{\\rm o}$})$}\n\nAs shown in Fig. 1, this very strong continuum source is seen about 3\\mbox{$^{\\rm o}$}\\ \nremoved from an elongated complex of high-latitude clouds that includes the\nobjects MBM 53-55 \\citep{MagBli+85} and new clouds discovered by\n\\cite{YamOni+03}. B2251+158 lies within the region surveyed by\n\\cite{YamOni+03} in CO, but the emission detected here (see Fig. 7) escaped\ntheir notice, presumably because of their 4\\arcmin\\ map sampling of the\n2.7\\arcmin\\ beam. The reddening is moderate, \\mbox{E$_{\\rm B-V}$}\\ $\\la$ 0.11 mag (see\nFig. 7), and CO emission toward the continuum source is quite weak (0.8 K).\nHowever, much stronger emission (5 K) is seen only 5\\arcmin\\ away, as with\nB0736, B0954 and B1954. There is no obvious large-scale correlation of the\nCO emission with reddening, as evidenced by the weakness of the CO at the\nposition of highest reddening in the larger field shown at upper left in\nFig. 7 (i.e. the spectrum labelled ``NW'' at upper right there). The\nrelationship between \\mbox{W$_{\\rm CO}$}\\ and \\mbox{E$_{\\rm B-V}$}\\ is shown in Figs. 
16-17.\n\nThe blue wing of the peak emission and the line seen at the northwest\nreddening peak both fall to the blue of the CO emission or absorption seen\ntoward B2251. Nonetheless they overlap a weaker blue wing of the \\hcop\\ \nabsorption that has no counterpart in CO emission, and they fill in a\nportion of the H I absorption spectrum.\n\n\\section{Two unusual fields at moderate latitude}\n\n\\subsection{B0528+134 $(b \\sim -11\\mbox{$^{\\rm o}$})$}\n\nMm-wave absorption toward B0528+134 (Fig. 8) was first discussed by\n\\cite{HogDeG+95}. This object is viewed against the outer edge of the dark\ncloud B30 in the $\\lambda$ Orionis ring of molecular clouds\n\\citep{MadMor87} that is centered on the H II region S264 and its central\nionizing star $\\lambda$ Ori (Fig. 1). There is very substantial foreground\nreddening, \\mbox{E$_{\\rm B-V}$}\\ = 0.86 mag, and there are much more heavily extincted\nregions in the field to the South.\n\nAlthough CO emission toward B0528+134 is fairly weak, 2.3 K, emission over\nthe surrounding field is characterized by a pronounced quasi-periodic\npattern with some very strong (10-12 K) and narrow CO emission lines:\nemission is undetectable over much of the intervening troughs. A similar\nwavelike pattern may have been observed across the surface of the Orion\nmolecular cloud by \\cite{BerMar+10}.\n\nA weak blue-shifted component of \\hcop\\ absorption that is absent in CO\ntoward B0528 has a very bright CO emission counterpart to the Southeast, as\nshown in Fig. 8. Despite an 8 \\mbox{km\\,s$^{-1}$}\\ velocity difference, the blue-shifted\nemission line gives the strong visual impression of being physically\nassociated with the main kinematic component at 10 \\mbox{km\\,s$^{-1}$}; see the map at\nlower left in Fig. 8. The kinematic span of the CO emission seen at top\nright in Fig. 8 neatly coincides with the extent of the H I absorption\ntoward B0528+134. Line kinematics in this field are illustrated in more\ndetail in Fig. 
14.\n\n\\subsection{B2200+420 aka BL Lac{} $(b \\sim -10.5\\mbox{$^{\\rm o}$})$}\n\nThis target (see Fig. 9) was the first source seen in mm-wave absorption\nfrom diffuse gas \\citep{MarBan+91}, in CO actually, and was also the first\nseen in \\hcop\\ absorption in our work \\citep{LucLis93}. CO emission toward\nthe source is fairly strong, 4 K or 6 \\emr{\\,K\\,km\\,s^{-1}}{}, and the line is quite opaque.\nThe molecular column density indicated by the strong \\hcop\\ absorption is\nabout as large as N(H) inferred from \\mbox{E$_{\\rm B-V}$}\\ = 0.32 mag, given the \\mbox{E$_{\\rm B-V}$}-N(H)\nrelationship N(H) $= 5.8\\times 10^{21}~{\\rm cm}^{-2}\\,\\mbox{E$_{\\rm B-V}$}$ of \\cite{SavDra+77}.\n\nThe CO emission in this field originates from an unusual filamentary\nmorphology (Fig. 9 at lower left) at the edge of an arched pattern in the\nreddening map. The integrated intensity takes on very large values within\nthe field, up to 20 \\emr{\\,K\\,km\\,s^{-1}}, but the profile is compound and relatively\nbroad. Toward the continuum source only the blue side of the core of H I\nabsorption is seen strongly in molecular absorption or CO emission, but a\nred-shifted CO emission component overlying the red side of the H I line\ncore is present to the Northeast as indicated in Fig. 9.
Profiles at the peak of the red and blue-shifted emission\n components are shown at upper right along with the profile toward the\n continuum source (shaded).}\n\\end{figure*}\n \n\\begin{figure*}\n \\includegraphics[height=15cm]{B2251Slide}\n\\caption[]{The sky field around the position of B2251+158 (aka 3C454.3), as in \n Fig. 2. The map of reddening at upper left is offset to show a separate\n peak to the Northwest near $22^{\\rm H}50^{\\rm M}$, and a profile at the\n position of this peak is shown at upper right, shaded green and labeled\n 'NW', along with profiles toward 3C454.3 (shaded) and at the peak of the\n small clump that is seen immediately adjacent to the continuum source.}\n\\end{figure*}\n\n\\begin{figure*}\n \\includegraphics[height=15cm]{B0528Slide}\n\\caption[]{The sky field around the position of B0528+134, \n as in Fig. 2. The map of CO emission at lower left superposes the\n integrated intensity at 0-4 \\mbox{km\\,s$^{-1}$}\\ as blue contours against a background\n grayscale representing emission at v = 8-12 \\mbox{km\\,s$^{-1}$}. Very strong CO lines\n are seen in the foreground gas as shown in the upper right panel:\n positions at which they originate are indicated at lower left.}\n\\end{figure*}\n\n\\subsection{B0212+735 $(b \\sim 12\\mbox{$^{\\rm o}$})$}\n\nB0212+735 (Fig. 10) sits in a mild trough with \\mbox{E$_{\\rm B-V}$}\\ $\\approx$ 0.76 mag in\na region of substantial reddening at moderate galactic latitude b=12\\mbox{$^{\\rm o}$}.\nIt has three molecular absorption components whose balance is entirely\nopposite to that of H I. Whereas most of the atomic absorption toward\nB0212+735 occurs in a deep and broad feature at v $\\la -10$ \\mbox{km\\,s$^{-1}$}, most of\nthe molecules are concentrated in a narrower-lined feature at v $\\approx$ 4\n\\mbox{km\\,s$^{-1}$}. An obvious molecular absorption feature at 0-velocity is, very\nunusually, not apparent in H I. 
It seems possible that the low velocity\nresolution of the H I profile (1 \\mbox{km\\,s$^{-1}$}) is responsible. The only other\npublished example of this phenomenon is toward B0727-115 \\citep{LeqAll+93}.\n\nThe CO emission line kinematics have been color coded at lower left in Fig.\n10 to display the observed behaviour in one panel. The gray-scale\nbackground represents the integrated intensity of the gas at 1.5-5 \\mbox{km\\,s$^{-1}$};\nhigher resolution mapping with the IRAM 30m telescope to be discussed in a\nforthcoming paper indicates that the feature is compound but this is not\napparent in the present dataset. The blue contours represent the CO\nprofile integral at $-16$ \\mbox{km\\,s$^{-1}$}{} $\\leq$ v $\\leq$ $-9.5$ \\mbox{km\\,s$^{-1}$}; consistent\nwith the prominence of this gas in H I, it is almost as widely distributed\nover the field as the stronger emission at 1.5-5 \\mbox{km\\,s$^{-1}$}\\ (Table 2: 26\\% vs.\n34\\%) even if it is barely seen toward the continuum. The profile labelled\n'A' at upper right is an example. The black contours represent the profile\nintegral at $-2$ \\mbox{km\\,s$^{-1}$}\\ $\\leq$ v $\\leq$ 1 \\mbox{km\\,s$^{-1}$}{} and an example is shown at\nupper right as profile 'B'. Emission from this gas occurs only at the\neastern edge of the map area.\n\n\\begin{figure*}\n \\includegraphics[height=15cm]{B2200Slide}\n\\caption[]{The sky field around the position of B2200+420 (aka BL Lac), \n as in Fig. 2. The map of CO emission at lower left superposes the\n integrated intensity in the range v=0-2 \\mbox{km\\,s$^{-1}$}\\ as red contours against a\n background grayscale representing emission at v $\\leq 0$ \\mbox{km\\,s$^{-1}$}. 
Molecular\n absorption and most emission are sequestered in the blue wing of the core\n of the H I absorption profile, but a red-shifted emission component is\n present to the Northeast as illustrated by the spectrum at position ``C''\n indicated at lower left.}\n\\end{figure*}\n\n\\begin{figure*}\n \\includegraphics[height=15cm]{B0212Slide}\n\\caption[]{The sky field around the position of B0212+735,\n as in Fig. 2. Contours in the CO emission map at lower left are color\n coded in blue for emission at $-15 \\leq {\\rm v} \\leq -9.5$ \\mbox{km\\,s$^{-1}$}\\ and in\n green for emission at $-2 \\leq {\\rm v} \\leq 1$ \\mbox{km\\,s$^{-1}$}. The gray scale\n background represents the integrated emission of the strongest emission\n component seen toward the continuum source, at v = 1.5 - 5 \\mbox{km\\,s$^{-1}$}. \\mbox{$^{12}$CO}\\ \n spectra at two locations labelled A and B are shown at upper right along\n with a strongly-scaled mean profile taken over the full map area.}\n\\end{figure*}\n\n\\subsection{B0224+671 $(b \\sim 6\\mbox{$^{\\rm o}$})$}\n\nThe line of sight toward B0224+671 (Fig. 11) samples the two\nlower-velocity features seen toward B0212+735, but at substantially lower\ngalactic latitude b=6.2\\mbox{$^{\\rm o}$}\\ (see Table 1). The extinction is large in this\nfield, as are the H I and inferred \\mbox{H$_2$}\\ column densities. CO emission is\nweak on a per-component basis toward the continuum target, but fairly\nlarge total values of \\mbox{W$_{\\rm CO}$}\\ are attained overall.\n\nThe integrated CO emission is compact but rather formless because it is the\nsum of many kinematic components. Paradoxically, the strongest molecular\nfeatures seen toward and near B0224+671 are not widely distributed over the\nmap area, as shown in the middle panel at right in Fig. 
11, which compares the\nprofile toward B0224+671 with the unweighted average of all profiles\ndenoted '$<$otf$>$': the strongest emission peak, at the red edge of the CO\nemission profile toward B0224+671, is strongly underrepresented in the mean\nprofile. Examples of profiles seen over the map area are shown at upper\nright in Fig. 11; they were chosen at local peaks in more finely-divided\n(in velocity) moment maps, with velocity increasing from a to g.\nEspecially at v $<$ 0 \\mbox{km\\,s$^{-1}$}\\ the lines shown are much stronger than seen\ntoward the continuum. Among them, the various CO emission components cover\nthe range of strong H I absorption toward the continuum source, $-15$ \\mbox{km\\,s$^{-1}$}\\ \n$\\leq$ v $\\leq$ 2 \\mbox{km\\,s$^{-1}$}{}. B0224 and B0212 have similar absorption spectra\nin that both have stronger atomic absorption at v $< -10$ \\mbox{km\\,s$^{-1}$}{} where the\nmolecular absorption is weaker. They thus sample the same large-scale gas\nkinematics although they are separated by about 15 pc, assuming they lie on\na sphere of 150 pc radius.\n\n\\subsection{B0355+508 aka NRAO150 $(b \\sim -1.6\\mbox{$^{\\rm o}$})$}\n\nThis is the only low-latitude source studied here. The more strongly\nblue-shifted gas seen in this direction is likely to be relatively distant.\nThe actual velocity field is probably affected by galactic streaming\nmotions, but a typical velocity gradient due to galactic rotation in this\ndirection is 8 \\mbox{km\\,s$^{-1}$}~kpc$^{-1}$.\n\n\\begin{figure*}\n \\includegraphics[height=15cm]{B0224Slide}\n\\caption[]{The sky field around the position of B0224+671,\n much as in Fig. 2. The map at lower left has been integrated over the\n very wide interval $-15.5 \\leq v \\leq +2$ \\mbox{km\\,s$^{-1}$}. Shown in the middle\n panel at right are the CO emission spectrum toward B0224+671 and the\n spectrum averaged over the region of the entire CO emission map. 
At top right are\n example profiles from the positions labelled at lower left, chosen from\n moment maps over narrow intervals increasing in velocity from a-g.}\n\\end{figure*}\n\n\\begin{figure*}\n \\includegraphics[height=13.3cm]{B0355Slide}\n\\caption[]{The sky field around the position of B0355+508. At top \n are maps of integrated CO intensity made over the velocity intervals\n indicated in each panel, corresponding to the five strong components of\n the \\hcop\\ absorption profile seen at lower right. CO emission profiles\n at various locations indicated in the map panels are shown at lower left.\n CO emission profiles toward B0355+508 and averaged over the map area are\n shown above the absorption line profiles.}\n\\end{figure*}\n\nFig. 12 does not display a map of the reddening for this source because the\nreddening maps are not believed to be accurate at such low galactic\nlatitudes. Taken as a whole the line of sight is very heavily extincted,\nwith large columns of both H I and \\mbox{H$_2$}\\ (Table 1). However, the high\ninferred \\mbox{H$_2$}\\ column density is the sum of components whose individual\nvalues indicate that they have \\mbox{A$_{\\rm V}$}\\ $\\approx 1$ mag.\n\nThe absorption profiles toward B0355+508 at lower right in Fig. 12 show\nfive obvious \\hcop\\ and CO components having roughly equal N(\\hcop)\n$\\approx 1.2\\times10^{12}~{\\rm cm}^{-2}$ and somewhat more variable N(CO)\n\\citep{LisLuc98}. Only two have substantial abundances of less-common\nspecies such as HCN. Less obvious is the fact that the \\hcop\\ absorption\nprofile has a weak broad blue wing extending to $-35$ \\mbox{km\\,s$^{-1}$}{} so that the\nentire core of the H I absorption line is seen in molecular gas\n\\citep[see][]{LisLuc00}.\n\nShown at the top in Fig. 12 are CO moment maps made over velocity intervals\ncorresponding to the obvious \\hcop\\ and CO absorption line features; a map\nintegrated over all velocities is shown in the top right-most panel. 
The CO\nemission distribution is heavily structured and very complex. Profiles at\npositions of local peaks in narrower CO moment maps are shown at lower\nleft, along with profiles at the integrated emission peak (see Fig. 12 at\ntop right) and toward the background source. At the eastern edge of the\nmap at position ``a'' there is a blue-shifted CO emission line that\n(unusually) falls outside the range of the strong molecular absorption\ntoward B0355+508 but is well inside the H I absorption profile. A very\nbright ($>$ 10 K) line is found at v = $-4.5$ \\mbox{km\\,s$^{-1}$}\\ at the northern edge of\nthe map at position ``f'', corresponding to a prominent molecular absorption\nfeature that has only a very weak emission counterpart toward the\ncontinuum.\n \nThe middle right panel displays the profile toward the continuum background\nsource and the profile averaged over the mapped field of view. The\nkinematics of this region are shown in more detail in Fig. 15 and discussed\nin Sect. 7.2. The displacement of the two strong CO emission lines about\nthe centroid of the mean profile results from a coherent kinematic\npattern, perhaps a shell or bubble in the underlying gas distribution.\nRecall, however, that absorption at the mean field velocity is not absent\ntoward the continuum source.\n\nThe complexity of the emission distribution makes the division into ranges\nbased on \\hcop\\ absorption quite arbitrary. Moreover, the emission and\nabsorption profiles show rather different structure even toward the\ncontinuum target. A very detailed discussion of CO emission within a\n90\\mbox{$^{\\prime\\prime}$}\\ field centered on NRAO150 was given by \\cite{PetLuc+08}.\nRemarkably, the peak emission brightness seen just $6''$ from the\nbackground continuum source is almost 13 K. 
As the spatial resolution\nincreases, the CO emission profile toward B0355 more nearly resembles the\nabsorption, and the blended emission at v $\\approx$ -10 \\mbox{km\\,s$^{-1}$}\\ resolves into\ntwo distinct components.\n\n\\begin{figure}\n \\includegraphics[height=7.2cm]{B1954-OneRAV}\n\\caption[]{A right ascension-velocity diagram of CO emission through \n the position of B1954+513. The \\hcop\\ (not CO) absorption spectrum\n toward B1954 is shown with its 0-level at the location of the continuum\n source; the peak absorption is 90\\%. Contours are shown at levels 1, 2,\n 3, ... K.}\n\\end{figure}\n\nConversely, in the present dataset, the emission components seen toward\nB0355+508 lose their identity as the resolution degrades beyond a 4\\arcmin\\ \nhpbw. Both the peak profile and the mean are broad, largely unstructured\nand centrally peaked about velocities lying between the two strong CO\nemission components seen toward the background.\n\n\\section{Statistical lessons}\n\nFor want of anything better, the first surveys for suitable absorption-line\ntargets were conducted in CO emission \\citep{BanMar+91,LisWil93,Lis94}, but\nthe discovery that \\hcop\\ absorption is more common still \\citep{LucLis96}\ncaused a reversal in the search strategy for diffuse molecular gas. Thus,\nall targets studied here were pre-selected to have absorption from \\hcop,\nbut only some were known to have CO emission. However, any division between\ntargets with and without CO emission is misleading because the same\nsightline may both have and lack CO emission on a per-component basis.\n\nStronger CO emission is always found somewhere else in the map when CO is\npresent toward the continuum, but comparably strong CO emission was found,\nsomewhere on the sky, from absorption components lacking CO emission\ncounterparts toward the continuum background. This is true with only one\nexception, B1928. 
The implication is that CO emission is somewhat more\nubiquitous than is presently believed to be the case, because nearly all\n\\hcop\\ absorption components will be found in nearby emission after a small\nsearch. In terms of numbers, in this work we observed 20 absorption line\ncomponents (and a few distinct line wings) with 13 carbon monoxide emission\ncounterparts toward the continuum targets, and we found 23 CO emission\ncomponents within $15'$ of the background continuum source during the\nmapping.\n\nThe following gives some conclusions that are drawn from the preceding\npresentation; they should generally be understood as applying on a\ncomponent-by-component basis.\n\n{\\bf From the standpoint of CO emission:}\n\n1) In every case where CO emission was detected toward the background\nsource, much stronger emission was also detected within approximately\n15\\arcmin, and often much closer.\n\n2) Near B0355+508, B0528+134 and B2200+420 the nearby stronger emission was\nvery strong indeed, with peak line temperatures of 10-12 K and\/or line\nprofile integrals as large as 20 K \\mbox{km\\,s$^{-1}$}.\n\n3) Points 1) and 2) are also generally true for kinematic components that were\npresent in absorption toward the continuum source but {\\it not} detected in\nemission there (7 of 20 components).\n\n4) In one case only (B1928+738), representing 1 of 11 fields and 1 of 20\nkinematic absorption line components, the entire field mapped was devoid of\nemission when emission was not detected toward the continuum source (from a\nkinematic absorption line component). 
The field mapped around B1928 was\nonly 20\arcmin\ $\times$ 20\arcmin\ as against 30\arcmin\ $\times$\n30\arcmin\ or more for all of the other fields.\n\n5) In 2 fields we found an emission feature without a counterpart in\nmolecular absorption toward the background continuum object (B1954 and\nB0355 at -20 \mbox{km\,s$^{-1}$}).\n\n{\bf From the standpoint of absorption:}\n\n1) The same kinematic components are seen in both absorption and emission\nwith angular separations of up to $15'$.\n\n2) The absorption spectra toward a background source are a preview of what\nwill be seen in emission in a larger field about the background source.\n\n3) Molecular absorption components seen toward the continuum source were\nfound in CO emission somewhere in the field except in the smaller region\nmapped around B1928+738.\n\n{\bf From the standpoint of the atomic-molecular transition:}\n\n1) The same kinematic components are seen in both atomic and molecular\ntracers at angular separations between $0'$ and $15'$.\n\n2) The components seen in molecular absorption are present in H I\nabsorption, although somewhat indistinctly in some cases. For instance,\nthe 0-velocity molecular absorption line in B0212+735 appears only as a\nblue wing of the 4 \mbox{km\,s$^{-1}$}\ H I absorption component.\n\n3) Portions of H I absorption profiles adjacent to molecular features but\nlacking a molecular counterpart are seen in CO emission elsewhere in the\nfield in two cases (B2200+420 and B0528+134).\n\n4) We saw no molecular features in absorption or emission outside the span\nof the H I absorption (see Appendix B).\n\n\section{Kinematics}\n\nMolecular gas is generally well-mixed with other components of the ISM\n\citep{DamTha94,GirBli+94} and does not require exceptional kinematics.\nThis is apparent in our work from the coincidence of molecular and atomic\nabsorption features, even if they do not have precisely the same patterns\nof line depth. 
The kinematics are affected by galactic structure and local\nexternal influences such as shocks, but this only becomes apparent on broad\nangular scales. The targets B0212+735 and B0224+671 (Figs. 10 and 11) are\nrelatively close to each other and both are most strongly absorbed in H I\naround -15 \mbox{km\,s$^{-1}$}. The background target B2251+158 (Fig. 7 and Sect. 3.6) is\nseen in the outskirts of the MBM53-55 cloud complex, which is part of a\nlarge shell that has been extensively mapped in molecular and atomic gas\n\citep{GirBli+94,YamOni+03}.\n\nIn individual line profiles and over small scales, the kinematics are often\ndominated by the internal structure of individual clouds. The internal\nmotions of diffuse molecular gas are now understood to reflect turbulent\ngas flows \citep{PetFal03,HilFal09} that are characterized by unsteady\nprojected velocity fields with strong shears and abrupt reversals of the\nvelocity gradient. \cite{SakSun03} show the transition between diffuse and\ndense molecular gas at the edge of TMC1 and \cite{LisPet+09} discuss gas\nflows in the diffuse cloud occulting $\zeta$ Oph.\n\nIn this Section we discuss the kinematics of just two of the fields mapped\nhere. Further examples of CO kinematics in individual sky fields are given\nin Figs. A.1-A.3 of Appendix A (available online) and the galactic context\nfor all fields is given in Figs. B.1 and B.2 of online Appendix B, showing\nlarge-scale latitude-velocity cuts in H I from the Leiden-Dwingeloo H I\nsurvey of \cite{HarBur97} with the locations of the continuum background\nsources marked in each case.\n\nFig. 13 shows the kinematics in the relatively simple sky field around\nB1954+513 (Sect. 3.5 and Fig. 6), with the spatially-displaced blue- and\nred-shifted CO emission components illustrated there. 
The\nred-shifted component seen toward the continuum has a partially-resolved\nvelocity gradient that carries it just to the midpoint of the associated\n\hcop\ absorption profile at the continuum position. It is certain that\nthe blue-shifted CO emission to the East would have an associated \hcop\ \nabsorption at its position, but the structure of the redward gas cannot be\ntraced away from the continuum and, regrettably, we do not have an H I\nabsorption profile that might show both the red- and blue-shifted gas in\natomic absorption as toward BL Lac\ (Fig. 9 and Sect. 4.2).\n\nFig. 14 shows the more complicated field at low latitude around B0355+508\n(Sect. 5.3, Fig. 12) and illustrates how the partition of a line profile\ninto components, no matter how seemingly obvious, can also be arbitrary and\ncapricious. None of the well-defined absorption features has an obvious CO\nemission counterpart except perhaps in the immediate vicinity of the\ncontinuum target. This is not an artifact of taking a cut in declination,\nwhich is actually richer than that in right ascension (see Fig. 12).\n\nNonetheless, mapping the CO emission does help to clarify interpretation of\nthe absorption profiles. For instance, consider gas near $-9$ \mbox{km\,s$^{-1}$}\ around\nthe location of B0355 in Fig. 14. In absorption there are two distinct\nkinematic components at $-11$ and $-8$ \mbox{km\,s$^{-1}$}\ that would usually be\ninterpreted as unrelated because, aside from their separation in velocity,\nthey have different patterns of chemical abundances (Fig. 
12). However, Fig.\n14 shows that the CO emission line has an appreciable velocity gradient\nacross the position of the continuum source, spanning the two absorption\nlines, making it likely that the two absorption components are part of the\nsame body\footnote{\cite{PetLuc+08} show that the overlapping CO emission\n line is resolved into two kinematic components at 6\mbox{$^{\prime\prime}$}\ resolution\n toward the continuum source.}. Moreover, the CO mapping suggests that\nthe components at $-17$ and $-10$ \mbox{km\,s$^{-1}$}\ may also be part of the same\nstructure (and separated by a velocity gradient), which was actually\nsuggested by several coincidences in our earlier high-resolution CO mapping\n\citep{PetLuc+08}. The lines at $-11$ and $-17$ \mbox{km\,s$^{-1}$}\ are very bright (13\nK) at high resolution and have considerable chemical complexity. There are\nalso some seemingly correlated spatial intensity variations. The evidence\nfor an association is entirely indirect but has a clear precedent in the\nkinematics around B0528+134 (Sect. 4.1 and Fig. 8) where a similar\nvelocity separation occurs between two emission components that are seen\nsuperposed in an unusual wave-like spatial configuration.\n\n\section{The brightness of diffuse cloud CO}\n\n\subsection{\mbox{W$_{\rm CO}$}\ relative to \mbox{E$_{\rm B-V}$}\ and \mbox {f$_{{\rm H}_2}$}}\n\nThe large-scale finding chart in Fig. 
1 is a map of the total intervening\ngas column density, except where discrete sources of infrared emission\n(often H II regions) ``leak'' into the map (usefully indicating when the\nbackground target may have been observed through disturbed foreground gas).\nLarge-scale surveys of CO emission at 8\arcmin\ resolution show a good\ncorrelation with reddening \citep{DamHar+01}, contributing to the common\ninterpretation of CO sky maps as displaying the global distribution of\ndense, fully-molecular gas.\n\nIn diffuse gas appreciable scatter in the \mbox{W$_{\rm CO}$}-\mbox{E$_{\rm B-V}$}\ relationship is\nexpected because the reddening is a sum over atomic and molecular\ncomponents that both make important contributions to N(H), combined with\nthe fact that both N(\mbox{H$_2$}) and N(CO)\/N(\mbox{H$_2$}) exhibit order-of-magnitude or\nlarger scatter with respect to \mbox{E$_{\rm B-V}$}\ even when all quantities are measured\nalong the same microscopic sightlines toward nearby bright stars\n\citep{BurFra+07,RacSno+09}. The disparity in angular resolution between\nthe reddening data and our 1\arcmin\ CO maps presents another sort of\ncomplication that is considered in Sect. 8.2 but does not by itself\ndominate the scatter. Recall also the discussion in \cite{LisPet+10} where\na good correlation was shown between \mbox{E$_{\rm B-V}$}\ at 6\arcmin\ resolution and\nthe integrated H I optical depth measured in absorption at 21 cm toward\na larger set of the same kind of point-like radio continuum background\ntargets considered here.\n\nSmall-scale maps of reddening are shown in the various Figs. 2-12 detailing\nthe individual fields. 
They may visually suggest correlations between\n\mbox{E$_{\rm B-V}$}\ and \mbox{W$_{\rm CO}$}, and there is a threshold \mbox{E$_{\rm B-V}$}\ $\ga $ 0.09 mag for\ndetecting CO emission, consistent with the well-known and quite abrupt\nincrease of N(\mbox{H$_2$})\/N(H) at comparable reddening \citep{SavDra+77}.\nHowever, reddening is not a reliable predictor of CO emission in our sky\nfields. For instance, in the field around B2251+158 in Fig. 7, CO emission\nis much weaker at the peak of the reddening map where \mbox{E$_{\rm B-V}$}\ = 0.14 mag (the\nprofile indicated as ``NW'' at upper right in Fig. 7) than nearer the\ncontinuum source at smaller \mbox{E$_{\rm B-V}$}\ = 0.10 mag. Around B2200+420 (Fig. 9)\nthe shape of the CO distribution appears to parallel that of the reddening\nbut in detail CO only traces the edge rather than the peak ridge of the\n\mbox{E$_{\rm B-V}$}\ distribution.\n\nIn Fig. 15 we show the relationship between \mbox{W$_{\rm CO}$}\ and \mbox{E$_{\rm B-V}$}\ in the four\nsimple cases discussed in Sect. 3, where the extinction is small and a\nsingle narrow CO spectral component is present at each pixel\n\footnote{Green diamonds in Figs. 15 and 16 show \mbox{E$_{\rm B-V}$}\ and \mbox{W$_{\rm CO}$}\ toward the\n continuum target as given in Table 1.}. The rms noise levels in these four\ndatasets (Table 2) are 0.48, 0.33, 0.32 and 0.35 \emr{\,K\,km\,s^{-1}}{}, reading clockwise\nfrom upper left, so that datapoints with \mbox{W$_{\rm CO}$}\ $\ga$ 1 \emr{\,K\,km\,s^{-1}}\ (the usual\nlast contour on CO sky maps) are detected at or above the 90\% confidence\nlevel. 
To put these brightness and sensitivity levels in context, note\nthat there is a straightforward relationship between \mbox{W$_{\rm CO}$}, \mbox {f$_{{\rm H}_2}$}, and \mbox{E$_{\rm B-V}$}\ \nonce the CO-\mbox{H$_2$}\ and \mbox{E$_{\rm B-V}$}\/N(H) conversion factors are fixed; for the\n\emph{standard} N(\mbox{H$_2$})\/\mbox{W$_{\rm CO}$}\ = $2\times 10^{20} ~{\rm cm}^{-2}$ \mbox{H$_2$}\ (\emr{\,K\,km\,s^{-1}})$^{-1}$ and\nN(H)\/\mbox{E$_{\rm B-V}$}\ = $5.8\times 10^{21} ~{\rm cm}^{-2} {\rm mag}^{-1}$ one has \mbox{W$_{\rm CO}$}\ = 14.5\n\mbox {f$_{{\rm H}_2}$}\ \mbox{E$_{\rm B-V}$} \emr{\,K\,km\,s^{-1}}{}. At \mbox{E$_{\rm B-V}$}\ = 0.1 mag, emission only slightly exceeding 1\nK \mbox{km\,s$^{-1}$}\ implies a molecular fraction \mbox {f$_{{\rm H}_2}$}\ $>$ 1 and therefore is too\nbright to be accommodated by a CO-\mbox{H$_2$}\ conversion factor as large as the\nstandard $2\times 10^{20} ~{\rm cm}^{-2}$ \mbox{H$_2$}\ (\emr{\,K\,km\,s^{-1}})$^{-1}$.\n\n\begin{figure}\n \includegraphics[height=8.5cm]{B0355-SimpleShell}\n\caption[]{A declination-velocity diagram of CO emission\n at the right ascension of B0355+508. The CO absorption spectrum toward\n B0355+508 is shown with its baseline level at the declination of the\n background source. The strongest CO absorption line is quite opaque; see\n Fig. 12.}\n\end{figure}\n\nShown in each panel of Fig. 15 are lines representing the CO emission\nexpected if various fractions \mbox {f$_{{\rm H}_2}$}\ of the total neutral gas column are in\n\mbox{H$_2$}\ with a typical galactic \mbox{W$_{\rm CO}$}\/N(\mbox{H$_2$}) conversion factor X$_{\rm CO} =\n2\times10^{20}$ \mbox{H$_2$}\ $~{\rm cm}^{-2}$ (\emr{\,K\,km\,s^{-1}})$^{-1}$. Much of the CO in Fig. 
15\noccurs above the line corresponding to \mbox {f$_{{\rm H}_2}$}\ = 1 and is therefore too\nbright to be accommodated by the usual CO-\mbox{H$_2$}\ conversion factor; indeed,\nif \mbox {f$_{{\rm H}_2}$}\ = 0.5, almost every CO line with \mbox{W$_{\rm CO}$}\ $\ga$ 1 \emr{\,K\,km\,s^{-1}}{}, hence the\ngreat majority of the statistically significant emission represented in\nFig. 15 and in the maps shown earlier for these sources, may be described\nas overly bright in this way. For the brightest pixels\nN(\mbox{H$_2$})\/\mbox{W$_{\rm CO}$}\ $< 5\times10^{19}$ \mbox{H$_2$}\ $~{\rm cm}^{-2}$ (\emr{\,K\,km\,s^{-1}})$^{-1}$.\n \nThe same \mbox{W$_{\rm CO}$}-\mbox{E$_{\rm B-V}$}\ diagrams are shown for sources with higher \mbox{E$_{\rm B-V}$}\ in\nFig. 16. Much of the gas around B2200+420 falls above the line for \mbox {f$_{{\rm H}_2}$}\ =\n1, and attains such high brightness that its N(\mbox{H$_2$})\/\mbox{W$_{\rm CO}$}\ ratio is 3-4 times\nbelow the standard conversion factor. However, such a case becomes\nincreasingly harder to make toward the other sources having higher \mbox{E$_{\rm B-V}$}, as\nin the bottom panels of Fig. 16.\n\n\subsection{Sub-structure in reddening \n would not eliminate large \mbox{W$_{\rm CO}$}\/\mbox{E$_{\rm B-V}$}\ ratios}\n\nCO emission is heavily structured on arcminute scales, well below the\n6\arcmin\ angular resolution of the reddening maps, and the high values and\nlarge scatter in \mbox{W$_{\rm CO}$}\/\mbox{E$_{\rm B-V}$}\ in Fig. 15 cannot be accommodated with a fixed\nratio of \mbox{W$_{\rm CO}$}\/N(\mbox{H$_2$}) or \mbox{W$_{\rm CO}$}\/N(H) except by positing strong unresolved\nvariations, essentially clumping, in \mbox{E$_{\rm B-V}$}. It is important to understand\nthe extent to which this might represent unresolved structure in the total\ncolumn density, for instance with regard to cleaning maps of the cosmic\nmicrowave background \citep{bernard11}. 
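The coefficient in the scaling \mbox{W$_{\rm CO}$}\ = 14.5 \mbox {f$_{{\rm H}_2}$}\ \mbox{E$_{\rm B-V}$}\ quoted above follows directly from the two adopted conversion factors; a minimal numerical check (Python; the function and variable names are ours, not from the paper):

```python
# Sketch: check W_CO = 14.5 f_H2 E_BV under the standard conversion
# factors quoted in the text.
X_CO = 2.0e20         # N(H2)/W_CO in H2 cm^-2 (K km/s)^-1
NH_PER_EBV = 5.8e21   # N(H)/E(B-V) in cm^-2 mag^-1

def w_co(f_h2, ebv):
    """Expected integrated CO brightness (K km/s) for a molecular
    fraction f_H2 = 2 N(H2)/N(H) and reddening ebv (mag)."""
    n_h2 = 0.5 * f_h2 * NH_PER_EBV * ebv   # H2 column density, cm^-2
    return n_h2 / X_CO

print(w_co(1.0, 1.0))   # coefficient: 14.5
print(w_co(1.0, 0.1))   # 1.45 K km/s: W_CO > 1.45 at E(B-V) = 0.1 needs f_H2 > 1
```

This makes explicit why lines brighter than about 1.5 \emr{\,K\,km\,s^{-1}}{} at \mbox{E$_{\rm B-V}$}\ = 0.1 mag cannot be reconciled with the standard conversion factor.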
Given the extreme sensitivities of\nthe CO abundance and brightness to N(\mbox{H$_2$}) in diffuse clouds and the fact\nthat even \mbox {f$_{{\rm H}_2}$}\ may vary in diffuse material, it is entirely possible that\nthe large contrasts seen in \mbox{W$_{\rm CO}$}\ do not have strong consequences for the\ndistribution of N(H), \mbox{E$_{\rm B-V}$}, or even N(\mbox{H$_2$}).\n\nShown in Fig. 17 are cumulative distribution functions of the integrated CO\nemission \mbox{W$_{\rm CO}$}\ in the fields around B0954+658 and BL Lac, using the native\nARO data and versions of the data smoothed to lower angular resolutions of\n3\arcmin\ (similar to NANTEN) and 5\arcmin\ (similar to Planck). The\nbrightness distribution of the CO around B0954+658 is compact in Fig. 3 but\nstill sufficiently extended that 4.5 \emr{\,K\,km\,s^{-1}}{} integrated intensities are\npresent at 5\arcmin\ resolution; this is well above the line for \mbox {f$_{{\rm H}_2}$}\ = 1\nin Fig. 15.\n\nThe distribution of strongly emitting CO around BL Lac\ is sufficiently broad\nin angle that 20-30\% of the pixels are occupied by CO with \mbox{W$_{\rm CO}$}\ $\geq$ 5\nK \mbox{km\,s$^{-1}$}\ whether the angular resolution is 1\arcmin\ or 5\arcmin; the very\nstrongest CO lines have \mbox{W$_{\rm CO}$}\ $\ga$ 15 \emr{\,K\,km\,s^{-1}}\ at 1-5\arcmin\ resolution in\nthe BL Lac\ field. This is consistent with our recent observations of CO\nemission in the field around $\zeta$ Oph\ \citep{LisPet+09} where the same peak\nbrightnesses were found in ARO and NANTEN data at 3\arcmin\ resolution.\n \n\begin{figure}\n \includegraphics[height=8cm]{FourEBV}\n\caption[]{Distribution of \mbox{E$_{\rm B-V}$}\ and \mbox{W$_{\rm CO}$}\ for four fields mapped here\n in CO. Each 20\mbox{$^{\prime\prime}$}\ pixel in the CO maps is plotted as a separate\n point. 
The (red) dashed lines in each panel show the CO emission expected\n if 25\%, 50\% and 100\% of the gas is in molecular form with a typical\n value of the \mbox{W$_{\rm CO}$}-N(\mbox{H$_2$}) conversion factor, N(\mbox{H$_2$})\/\mbox{W$_{\rm CO}$}\ =\n $2\times10^{20}$ \mbox{H$_2$}\ $~{\rm cm}^{-2}$ (\emr{\,K\,km\,s^{-1}})$^{-1}$. In each panel a (green) filled\n diamond is shown at the value given in Table 1 toward the background\n source.}\n\end{figure}\n\n\begin{figure}\n \includegraphics[height=8cm]{FourNEBV}\n\caption[]{As in Fig. 15 for four fields with larger reddening.}\n\end{figure}\n\n\begin{figure}\n \includegraphics[height=8cm]{Cumulate}\n\caption[]{Cumulative distribution functions for \mbox{W$_{\rm CO}$}\ at spatial \n resolutions of 1\arcmin\ (gray, thicker), 3\arcmin\ (blue) and 5\arcmin\ \n (red) for B0954 (left) and B2200 (right).}\n\end{figure}\n\nBecause high CO brightness and, therefore, impossibly high\n\mbox{W$_{\rm CO}$}\/\mbox{E$_{\rm B-V}$}\ ratios (requiring \mbox {f$_{{\rm H}_2}$}\ $> 1$ for the mean \mbox{W$_{\rm CO}$}\/N(\mbox{H$_2$}) ratio) persist\nto low angular resolution, \mbox{W$_{\rm CO}$}\ cannot maintain a constant proportionality\nto N(\mbox{H$_2$}) over the body of these clouds. The observed variations in \mbox{W$_{\rm CO}$}\ \nare too large to be accommodated by the total amount of material that is\npresent along the line of sight, and unresolved structure in reddening\ncannot account for the high values of \mbox{W$_{\rm CO}$}\/\mbox{E$_{\rm B-V}$}\ or the range of variation\nin that ratio.\n\n\subsection{Covering factors and bright and dark CO}\n\nIn Sect. 2.7 we noted the statistical certainty with which \hcop\ \nabsorption is found in spectra taken within about 15\mbox{$^{\rm o}$}{} - 18\mbox{$^{\rm o}$}{} of\nthe galactic plane. 
A corollary to this is that molecular gas is certain\nto be omnipresent over the sky fields mapped in CO at such latitudes, no\nmatter how much of the sky we actually found to be occupied by CO emission.\n\nTable 2 shows pixel statistics for the CO emission maps made in the course\nof this work; shown are profile channel-channel rms noise values and map\npixel-pixel rms noise in \mbox{W$_{\rm CO}$}\ for each mappable kinematic component. In\neach case the covering factor was determined by forming a histogram of the\n\mbox{W$_{\rm CO}$}\ values and subtracting a Gaussian fit to the noise component, which\nis apparent in that it extends to unphysical negative values of \mbox{W$_{\rm CO}$}.\nThis is not a large correction; if the noise in \mbox{W$_{\rm CO}$}\ is\nrandom at a level $\sigma_{\rm map}$ the covering factor at\/above any given\n\mbox{W$_{\rm CO}$}\ in the absence of signal is dA(\mbox{W$_{\rm CO}$})\/A =\n0.5(1-erf(\mbox{W$_{\rm CO}$}\/$\sqrt{2}\sigma_{\rm map}$)). The expected covering factor\ndue to noise at even 2$\sigma$ significance is already below 3\%.\n\nTable 2 shows that typical covering factors are 20\%-40\%, with a few null\nvalues and only two sky fields (B0528 and B2200) where the majority of the\nmap area is covered. Very approximately, the covering factors are small to\nabout the same extent that the \mbox{W$_{\rm CO}$}\/N(\mbox{H$_2$}) conversion factors of the\ndetected CO emission are higher than indicated by the standard CO-\mbox{H$_2$}\ \nconversion. In the end, some form of spatial averaging over brighter and\ndimmer CO must be responsible for the global mean CO-\mbox{H$_2$}\ conversion factor\nwhether in fully molecular or diffuse molecular gas. 
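The noise contribution to the covering factor quoted above can be evaluated directly; a short sketch (Python, standard library only; the function name is ours):

```python
import math

def noise_covering_factor(w_co, sigma_map):
    # dA(W_CO)/A = 0.5 (1 - erf(W_CO / (sqrt(2) sigma_map))):
    # fraction of map pixels expected at or above w_co from Gaussian
    # noise alone, in the absence of any signal.
    return 0.5 * (1.0 - math.erf(w_co / (math.sqrt(2.0) * sigma_map)))

# At 2-sigma significance the noise-only covering factor is ~2.3%,
# i.e. already below the 3% stated in the text.
print(noise_covering_factor(2.0, 1.0))
```

At zero threshold the expression correctly gives 0.5, since Gaussian noise is positive half the time.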
The idea that all gas\nparcels would show identically the same \mbox{W$_{\rm CO}$}\/N(\mbox{H$_2$}) is preposterous.\n\n\subsection{Pressure and density in CO-emitting gas}\n\nPartial pressures of molecular hydrogen p(\mbox{H$_2$})\/k = n(\mbox{H$_2$})\Tsub K \ were derived\nby \cite{LisLuc98} for all of the CO-bearing clouds discussed here, in the\ndirections of the background targets. They generally fall in the range\n$1-5 \times 10^{3} \pccc$ K, typical of thermal pressures in the diffuse\nISM sampled by neutral atomic carbon \citep{JenTri11}, which should share\nthe same volume. The CO derivation depends on recognizing that, when the\ngas is warm and excitation is strongly sub-thermal, the excitation\ntemperature of the J=1-0 transition depends only on p(\mbox{H$_2$}) and the optical\ndepth of the transition $\tau(1-0)$. In the limit of zero optical depth\nthe excitation temperature of the J=1-0 transition \Tsub exc (1-0) is only\nproportional to p(\mbox{H$_2$}), not to either the temperature or density\nindependently, making CO a useful barometer. This first became apparent in\nthe work of \cite{SmiSte+78} and is illustrated in Fig. 11 of\n\cite{LisLuc98}: it has persisted over several generations of improved\ncollision cross sections. Moreover, the excitation contribution from\natomic hydrogen should be small in CO-bearing gas where the molecular\nfraction must be appreciable even if \mbox{H$_2$}\ does not dominate the overall\natomic-molecular balance \citep{Lis07CO}.\n\nFor $\tau(1-0) \la 3$ and p(\mbox{H$_2$})\/k $ \la 2\times 10^{4} \pccc$ K the\nbehaviour seen in Fig. 
11 of \\cite{LisLuc98} may be straightforwardly\nparameterized to an accuracy of a few percent as\n\n\\begin{equation}\n\\Tsub exc (1-0)-\\Tsub cmb \\ = 0.303 K \\left[ \\frac{p\\left(\\mbox{H$_2$}\\right)}{10^3 \\pccc~K} \\right]^{1.02} \n e^{\\tau(1-0)^{0.6}\/2.6}\n\\end{equation}\n\nAs examples of the application of this notion, we note:\n\n\\begin{itemize}\n \n\\item For a typical line with $\\tau(1-0) = 1.5$ and a Rayleigh-Jeans\n brightness above the CMB of 1.5 K, \\Tsub exc (1-0) = 5.04 K and p(\\mbox{H$_2$})\/k $=\n 5\\times 10^3 \\pccc$ K or n(\\mbox{H$_2$}) $= 200 \\pccc$ at \\Tsub K \\ = 25 K.\n \n\\item For the strongest absorption line component toward B0355 (-17.8\n \\mbox{km\\,s$^{-1}$}), $\\tau(1-0) = 3.1$ and \\Tsub exc (1-0) $\\approx 6$ K, so that p(\\mbox{H$_2$})\/k\n $\\approx\\ 5\\times 10^3 \\pccc$ K once more.\n \n\\item The $\\approx$ 4.5 K lines observed at the peak positions in several\n of the simple fields discussed in Section 3 require p(\\mbox{H$_2$})\/k $\\ga\n 8.5\\times 10^3 \\pccc$ K or $\\ga 15\\times 10^3 \\pccc$ K for $\\tau(1-0)$ =\n 3 or 1, respectively. Such a heavy over-pressure must be transient.\n \n\\item The very bright 10-12 K lines seen near B0528+134 and B2200+420\n require excitation temperatures of 15 K or more and lie somewhat beyond\n the range where \\mbox{W$_{\\rm CO}$}\\ and N(CO) can be shown to be linearly proportional.\n They will be discussed separately in a forthcoming publication based on\n observations of \\hcop, HCN, and CS.\n\n\\end{itemize}\n\n\\subsection{Failing to detect \\mbox{H$_2$}\\ when CO emission is weak}\n\nThere are cases where the brightness of the 1-0 line is well below 1 K even\nwhen the CO optical depth is appreciable, as summarized in Table 3;\nunfortunately we do not have a CO absorption profile toward B1928+738 in\nwhose field CO emission was not detected, see Sect. 3.4. 
The regions of\nvery low p(\mbox{H$_2$}) toward B0212 and B0224 somehow manage to produce\nappreciable amounts of CO without exciting it to detectable levels, but\nother lines represented in Table 3 do not arise in regions of especially\nlow pressure.\n\nIn Sect. 2.7 we showed that, on the whole, molecular gas is not\nunderrepresented by CO emission in the collection of sightlines comprising\nthis work, and earlier we showed that the same is true of the larger sample\nof absorption-cloud sightlines from which the current sample was drawn\n\citep{LisPet+10}. Moreover, CO emission from all of the components\nrepresented in Table 3 is detected (usually quite strongly) elsewhere in\nthe mapped fields except around B1928. However, the fraction of molecular\ngas that is detectable in CO along individual sightlines varies\nsubstantially. To quantify this we equate the molecular column density\nwith the integrated optical depth measured in \hcop\ \citep{LucLis96} (see\nthe right-most column in Table 3). In this case the fraction of molecular\ngas that is missed by failing to detect CO emission from particular\nindividual components in four directions is 12\% toward B0212, 16\% toward\nB0224, 8\% toward B0528+134 and 100\% toward B1928. 
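The excitation temperatures and pressures listed in Table 3 are tied together by the parameterization given in the preceding subsection; a sketch of its evaluation and inversion (Python; assumes \Tsub cmb\ = 2.725 K, and inherits the few-percent accuracy of that fit):

```python
import math

T_CMB = 2.725  # K, assumed CMB temperature

def t_exc(p_h2, tau):
    """T_exc(1-0) in K from the parameterization in the text; p_h2 is
    p(H2)/k in units of 10^3 cm^-3 K, tau the 1-0 optical depth."""
    return T_CMB + 0.303 * p_h2**1.02 * math.exp(tau**0.6 / 2.6)

def p_h2(t, tau):
    """Invert for p(H2)/k (units of 10^3 cm^-3 K) given T_exc(1-0) = t."""
    return ((t - T_CMB) / (0.303 * math.exp(tau**0.6 / 2.6)))**(1.0 / 1.02)

# A typical line with tau(1-0) = 1.5 excited to T_exc ~ 5 K implies
# p(H2)/k of a few 10^3 cm^-3 K, i.e. n(H2) of order 200 cm^-3 at T_K = 25 K.
p = p_h2(5.04, 1.5)
print(p, 1e3 * p / 25.0)
```

At fixed brightness, raising the optical depth lowers the pressure required, which is why the low upper limits in Table 3 are most constraining for the weak, moderately opaque lines.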
Overall the fraction\nof molecular gas represented by the weakly-emitting CO summarized in Table\n3 is 8\\% toward B0528, 16\\% toward B0224, 22\\% toward B0212, 43\\% toward\nB0355 and 100\\% toward B1730 and B1928.\n\n\\begin{table*}\n\\caption[]{Components with weak CO emission toward the continuum target$^a$}\n{\n\\small\n\\begin{tabular}{lcccccccc}\n\\hline\nTarget & V & $\\tau(1-0)$ & \\mbox{${\\rm T}_{\\rm R}^*$} & dN(CO)\/dV & \\Tsub exc (1-0) & p(\\mbox{H$_2$}) & n(\\mbox{H$_2$})$^b$ & $\\int \\tau(\\hcop)dv$\/total \\\\ \n & \\mbox{km\\,s$^{-1}$} & & K & $10^{15} ~{\\rm cm}^{-2}$ (\\mbox{km\\,s$^{-1}$})$^{-1}$ & K & $10^3 \\pccc$ K & $\\pccc$ & \\\\\n\\hline\nB0212& -10.3 & 0.49 & 0.40 & 0.65 & 3.4-3.6 & 1.9-2.5 & 75-100 & 0.122\\\\\n & -0.05 & 0.95 & $<$ 0.14$^c$ & 1.11 & $<$3.1-3.2 & $<$0.9-1.1 & $<$35-45 & 0.102 \\\\\nB0224 & -10.6 & 0.43 & $<$ 0.10 & 0.48 & $<$3.0-3.1 & $<$0.8-1.0 & $<$30-40 & 0.161 \\\\\nB0355& -13.9 & 0.45 & 0.31 & 0.52 & 3.8-4.0 & 3.0-3.5 & 120-140 & 0.224\\\\\n & -4.0 & 0.86 & 0.37 & 1.10 & 3.6-3.8 & 2.1-2.5 & 85-100 & 0.204\\\\\nB0528& 2.8 & $<$ 0.11 & $<$ 0.16 & & & & & 0.080 \\\\\nB1730& 5.1 & 1.15 & 0.24 & 1.42 & 3.5-3.7 & 1.8-2.2& 70-90 & 1.000\\\\\nB1928& $-3$ & $<$ 0.11 & $<$ 0.11 & & & & & 1.000 \\\\\n\\hline\n\\end{tabular}}\n\\\\\n$^a$ Using CO parameters originally derived by \\cite{LisLuc98} and $\\tau(\\hcop)$ from \\cite{LucLis96} \\\\\n$^b$ At \\Tsub K \\ = 25 K \\\\\n$^c$ Upper limits in this column are $2\\sigma$ \\\\\n\n\\end{table*}\n\n\\section{Discussion}\n\nEven at \\mbox{E$_{\\rm B-V}$}\\ = 0.1 - 0.3 mag, the CO emission traced in this work runs the\nfull gamut from undetectable to having brightness comparable to that seen\nin fully-molecular dark clouds. 
CO emission may be undetectably weak ($\ll$\n1 K) when molecular gas is present in absorption (including that of CO\nitself) but in other directions it may be so bright that the N(\mbox{H$_2$})\/\mbox{W$_{\rm CO}$}\ \nratio is 4-5 times smaller than the typical CO-\mbox{H$_2$}\ conversion factor\n$2\times10^{20}~{\rm cm}^{-2}$ (\emr{\,K\,km\,s^{-1}})$^{-1}$. Under the conditions encountered in\ndiffuse clouds, CO emission is foremost an indicator of the CO chemistry,\nsecondarily an indicator of the rotational excitation (which reflects the\npartial thermal pressure of \mbox{H$_2$}) and only peripherally a measure of the\nunderlying hydrogen column density distribution as discussed in Sect. 8.\nIndeed, the simulations of CO emission from the interstellar medium by\n\citet{shetty11} agree with observations for the dense gas. However, a\ndetailed comparison with our results on the diffuse material shows that the\nradiative transfer factor is correct, but there are up to 4 orders of\nmagnitude difference in N(\mbox{H$_2$})\/N(CO). 
This is linked to the\npoorly-understood polyatomic chemistry in the diffuse gas~\citep[see][their\nsection 4.3]{shetty11}.\n\nThe over-arching issues most relevant to diffuse cloud CO emission are\nthree-fold: 1) How it may be identified for what it is, originating in\nrelatively tenuous gas that is unassociated with star formation; 2) Whether\nit makes a substantial contribution when CO emission is used as a surrogate\nfor N(\mbox{H$_2$}) in circumstances where emission contributions from diffuse and\ndense heavily-shielded gas may be blended; 3) How it is related to the\nso-called ``dark'' gas discovered by FERMI \citep{grenier10} and PLANCK\n\citep{bernard11} that is most prominent at moderate extinction where the\ntransition from atomic to molecular gas occurs and is claimed to host\n50\%-120\% of the previously-known CO emitting gas in the solar\nneighbourhood.\n\nAs for the identification of diffuse gas, the \mbox{W$_{\rm CO}$}\/\mbox{W$_{^{13}{\rm CO}}$}\ ratio is the most\naccessible and direct probe. When diffuse cloud CO is excited to detectable\nlevels it is generally in the regime where \mbox{W$_{\rm CO}$}\ $\propto$ N(CO) and \mbox{W$_{^{13}{\rm CO}}$}\ \n$\propto$ N(\mbox{$^{13}$CO}) so that the brightness ratio \mbox{W$_{\rm CO}$}\/\mbox{W$_{^{13}{\rm CO}}$}\ will be much\nlarger than the values 3-5 that are seen when emission arises from\noptically thick lines from denser gas where the rotation ladder is close to\nbeing thermalized. Fractionation progressively lowers the\nN(\mbox{$^{12}$CO})\/N(\mbox{$^{13}$CO}) column density ratio in diffuse gas at larger N(CO)\n\citep{LisLuc98,SheRog+07} but not below about 15. 
Intensity ratios\n\mbox{W$_{\rm CO}$}\/\mbox{W$_{^{13}{\rm CO}}$}\ of 8-10 or higher are a strong indicator that there is a major\ncontribution from diffuse material.\n\nRegarding the contribution of diffuse gas, we recently assessed it for the\ncase where an outside observer looks down on the Milky Way disk in the\nvicinity of the Sun \citep{LisPet+10}. We compared the mean emission for\nthe ensemble of lines of sight from which the current background targets\nwere drawn with the vertically-integrated emission expected for the\ngalactic disk component at the Solar Circle drawn from galactic plane\nsurveys. The ensemble mean in our dataset, expressed as an equivalent to\nlooking vertically through the galactic layer, was 2$\langle$\mbox{W$_{\rm CO}$}\ sin($|$b$|$)$\rangle$\n= 0.47 \emr{\,K\,km\,s^{-1}}{}. The galactic disk contribution was inferred from\ngalactic plane surveys that find A(CO) = 5 \emr{\,K\,km\,s^{-1}}{} (kpc)$^{-1}$ and an\nequivalent disk thickness of 150 pc, implying an integrated intensity\nthrough the disk of 5 \emr{\,K\,km\,s^{-1}}{} (kpc)$^{-1} \times 0.15$ kpc = 0.75 \emr{\,K\,km\,s^{-1}}.\n\nEven if viewed as entirely distinct (because it originates at galactic\nlatitudes well above those typically sampled in galactic plane surveys) the\ndiffuse gas contribution to the total seen looking down on the Milky Way\nfrom outside would be 0.47\/(0.47+0.75) = 38\%, a surprisingly high fraction\ngiven the supposed absence of molecular gas and CO emission at higher\ngalactic latitudes. The alternative, that the diffuse gas is already\nincorporated in galactic plane surveys, makes the majority of the gas\n(0.47\/0.75 = 63\%) in the galactic plane diffuse. 
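The fractions quoted above follow from simple bookkeeping; a minimal check (Python; input values taken directly from the text):

```python
# Looking down on the local Milky Way disk from outside (Sect. values).
diffuse = 0.47      # 2 <W_CO sin|b|>, K km/s, ensemble mean of this sample
disk = 5.0 * 0.15   # A(CO) x equivalent thickness: 5 K km/s/kpc x 0.15 kpc

print(disk)                        # 0.75 K km/s through the disk
print(diffuse / (diffuse + disk))  # ~0.385: the 38% diffuse share if distinct
print(diffuse / disk)              # ~0.63: diffuse share if already counted
```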
That the galactic plane gas is mostly diffuse is an even more radical\nproposition, but it is consistent with the finding that the preponderance\nof the molecular gas seen along the heavily-extincted line of sight toward\nB0355+508 at $b=-1.6$\mbox{$^{\rm o}$}\ is actually diffuse.\n \nIn fact, the extent of the diffuse and\/or high-latitude contribution to the\nlocal CO emission remains to be determined by wide-field CO surveys whose\ndetection limit is substantially better than 1 \emr{\,K\,km\,s^{-1}}\ and perhaps as fine\nas 0.25 \emr{\,K\,km\,s^{-1}}. Assessing the contribution of diffuse gas at lower\nlatitudes awaits a wider examination of the character of the gas seen in\nthe galactic disk, but the contribution of diffuse molecular gas in the\ninner galactic disk is apparent in recent HERSCHEL\/PRISMAS observations of\nsub-mm absorption spectra toward star-forming regions\n\citep{GerdeL+10,SonNeu+10}.\n\n\section{Summary and conclusions}\n\nWe compared maps of CO emission with reddening maps on a typical field of\nview of about $30'\times30'$ at an angular resolution of $1'$ toward 11\ndiffuse lines of sight for which we already had sub-arcsec molecular and\/or\natomic absorption profiles. 
This allowed us to draw three kinds of\nconclusions.\n\n\subsection{Conclusions about the position-position-velocity\n structure of the emission}\n\begin{itemize}\n \n\item Although most of the CO emission structure was amorphous or merely\n blob-like when mapped, the emission around B0528+134 was found to be\n highly regular and quasi-periodic, while that around B2200+420 (aka BL Lac)\n was seen to be filamentary and tangled.\n \n\item Toward B0355+508 and B0528+134, CO mapping suggests that pairs of\n absorption lines separated by 6-8 \mbox{km\,s$^{-1}$}\ are physically related, not\n merely accidental superpositions.\n \n\item CO mapping shows that partition of an absorption profile into\n kinematic components, no matter how seemingly obvious, may actually be\n arbitrary and capricious: the decomposition may have no apparent validity\n in emission at positions only slightly removed from the continuum\n background.\n\n\end{itemize}\n\n\subsection{Conclusions linking the absorption to the\n emission kinematics:}\n\begin{itemize}\n \n\item The same clouds were seen in absorption and emission, and in atomic\n and molecular phases, although not necessarily in the same location. We\n failed to find CO emission corresponding to just one out of 20 molecular\n absorption features, in one relatively small spatial field, i.e.\n 20\arcmin\ $\times$ 20\arcmin\ vs. 30\arcmin\ $\times$ 30\arcmin\ or\n more. Conversely, while mapping away from the continuum background we\n saw only 2 CO emission features lacking molecular absorption\n counterparts.\n \n\item CO emission was sometimes found in the field at velocities\n corresponding to features seen only in H I absorption toward the\n continuum. 
We saw no molecular features outside the span of the H I\n absorption.\n \n\end{itemize}\n\n\subsection{Conclusions regarding the CO luminosity of diffuse gas}\n\begin{itemize}\n \n\item We found relatively bright CO emission at modest reddening in the\n fields we mapped, with peak brightnesses of 4-5 K at \mbox{E$_{\rm B-V}$}\ $\la 0.15$ mag\n and up to 10-12 K at \mbox{E$_{\rm B-V}$}\ $\simeq$ 0.3 mag (i.e. \mbox{A$_{\rm V}$} $\simeq 1$ mag).\n This was true even for features that were seen only in absorption toward\n the continuum source in the field center.\n \n\item The CO emission lines represent small column densities N(CO) $\leq\n 10^{16} ~{\rm cm}^{-2}$, less than 10\% of the amount of free gas phase carbon\n expected along a line of sight with \mbox{E$_{\rm B-V}$}\ = 0.15 mag or \mbox{A$_{\rm V}$}\ = 0.5 mag.\n The dominant form of gas phase carbon is still C\mbox{$^+$}.\n \n\item When CO emission was detected at levels of 1.5 \emr{\,K\,km\,s^{-1}}{} and higher, it\n was generally over-luminous in the sense of having a small ratio\n N(\mbox{H$_2$})\/\mbox{W$_{\rm CO}$}, i.e. a value of the CO-\mbox{H$_2$}\ conversion factor below\n $2\times 10^{20} ~{\rm cm}^{-2}$ \mbox{H$_2$}\ (\emr{\,K\,km\,s^{-1}})$^{-1}$. Ratios as small as\n N(\mbox{H$_2$})\/\mbox{W$_{\rm CO}$}\ $\la 5\times 10^{19} ~{\rm cm}^{-2}$ \mbox{H$_2$}\ (\emr{\,K\,km\,s^{-1}})$^{-1}$ are mandated\n by the observed reddening in cases where the line of sight was relatively\n free of extraneous material.\n \n\item On the whole, the \mbox{W$_{\rm CO}$}\/N(\mbox{H$_2$}) ratio in diffuse gas is the same as in\n dense fully molecular clouds despite the presence of strong variations\n between individual diffuse gas parcels or sightlines. 
The global\n \mbox{W$_{\rm CO}$}\/N(\mbox{H$_2$}) ratio in diffuse gas is the result of averaging over limited\n regions where CO emission is readily detectable and overly bright (in the\n sense of having \mbox{W$_{\rm CO}$}\/N(\mbox{H$_2$}) much higher than the mean), together with other\n regions having a significant molecular component (as seen in absorption)\n but where CO emission is comparatively weak or simply undetectable.\n \n\item Small \mbox{W$_{\rm CO}$}\/N(\mbox{H$_2$}) ratios and sharp variations in the \mbox{W$_{\rm CO}$}\/\mbox{E$_{\rm B-V}$}\ ratio\n are not artifacts of the disparity in resolution between the 1\arcmin\ CO\n emission beam and the 6\arcmin\ resolution of the reddening maps, because\n high CO brightnesses and small \mbox{W$_{\rm CO}$}\/\mbox{E$_{\rm B-V}$}\ ratios persist when the\n resolution of the CO maps is degraded to that of the reddening maps.\n \n\item Sharp variations in the CO emission brightness on arcminute scales do\n not necessarily represent unresolved structure in the reddening maps or\n in the column density of H or \mbox{H$_2$}. Detectable CO emission generally\n arises in the regime where \mbox{W$_{\rm CO}$}\ $\propto$ N(CO), and variations in the\n line brightness represent primarily the CO chemistry with its extreme\n sensitivity to \mbox{E$_{\rm B-V}$}\ and N(\mbox{H$_2$}). Secondarily, the line brightness is\n influenced by CO rotational excitation since some features are not seen\n in emission toward continuum sources where there is CO absorption with\n appreciable optical depth.\n \n\item Only peripherally does the CO brightness represent the underlying\n mass or \mbox{H$_2$}\ column density distribution of diffuse molecular gas.\n\n\end{itemize}\n\n\begin{acknowledgements}\n The National Radio Astronomy Observatory is operated by Associated\n Universities, Inc. under a cooperative agreement with the US National\n Science Foundation. 
The Kitt Peak 12-m millimetre wave telescope is\n operated by the Arizona Radio Observatory (ARO), Steward Observatory,\n University of Arizona. IRAM is operated by CNRS (France), the MPG\n (Germany) and the IGN (Spain). This work has been partially funded by\n the grant ANR-09-BLAN-0231-01 from the French {\it Agence Nationale de la\n Recherche} as part of the SCHISM project (http:\/\/schism.ens.fr\/). We\n thank Edith Falgarone for comments that inspired Sections 8.4 and 8.5 of\n this work.\n\end{acknowledgements}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nDiscontinuous Galerkin methods are a popular approach to discretize PDE\nboundary value problems \\cite{dipietro_ern_2012a}. Similar to conforming finite element methods\n\\cite{ciarlet_2002a,ern_guermond_2019a}, they can handle general meshes, which makes it possible to account for complicated\ngeometries. In addition, the use of discontinuous polynomial shape functions\nfacilitates the use of hanging nodes and varying polynomial degree.\nAs a result, discontinuous Galerkin methods are especially suited for $hp$-adaptivity\n\\cite{congreve_gedicke_perugia_2019a,houston_schotzau_wihler_2006a}.\nFurthermore, since all their degrees of freedom are attached to mesh cells,\ndiscontinuous Galerkin methods allow linear memory access, which is crucial for\nefficient computer implementations, in particular on GPUs\n\\cite{chan_wang_modave_remacle_warburton_2016a}.\n\nIn the context of wave propagation problems, discontinuous Galerkin methods\nfurther display specific advantages. For time-dependent problems, the resulting\nmass-matrix is block-diagonal \\cite{agut_diaz_2013a,grote_schneebeli_schotzau_2006a},\nwhich enables explicit time-stepping schemes without mass-lumping\n\\cite{cohen_joly_roberts_tordjman_2001a}. 
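The block-diagonal structure of the DG mass matrix can be made concrete with a minimal sketch (an assumed 1D setting with discontinuous P1 elements on a uniform mesh, not an example taken from the references above):

```python
import numpy as np

# Minimal illustration: with discontinuous P1 elements on a 1D mesh, every
# basis function is supported on a single element, so the mass matrix has one
# decoupled 2x2 block per element.  Inverting it element by element is exact,
# which is what makes explicit time stepping cheap without mass lumping.

n_el, h = 4, 0.25                                       # uniform mesh of [0, 1]
M_loc = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # P1 element mass matrix

# Global DG mass matrix: block-diagonal, no coupling between elements
M = np.kron(np.eye(n_el), M_loc)

# Element-by-element inversion agrees with the dense inverse
M_inv_block = np.kron(np.eye(n_el), np.linalg.inv(M_loc))
assert np.allclose(M_inv_block @ M, np.eye(2 * n_el))
```

A conforming discretization would instead couple neighboring elements through shared nodes, so its mass matrix is not block-diagonal.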
For time-harmonic wave propagation too,\ndiscontinuous Galerkin formulations are attractive as they exhibit additional stability\nas compared to conforming alternatives\n\\cite{bernkopf_sauter_torres_veit_2022a,feng_wu_2009a,feng_wu_2011a}.\nThese interesting properties have thus motivated a large number of works\nconsidering discontinuous Galerkin discretizations of wave propagation problems,\nand a non-exhaustive list includes \\cite{agut_diaz_2013a,congreve_gedicke_perugia_2019a,%\ndu_zhu_2016a,feng_wu_2009a,feng_wu_2011a,grote_schneebeli_schotzau_2006a,hoppe_sharma_2013a,%\nsauter_zech_2015a}.\n\nIn this work, we consider the acoustic Helmholtz equation, which is probably the\nsimplest model problem relevant to the difficulties of wave propagation.\nSpecifically, given a domain $\\Omega \\subset \\mathbb R^d$, $d=2$ or $3$, and $f: \\Omega \\to \\mathbb C$,\nthe unknown $u: \\Omega \\to \\mathbb C$ should satisfy\n\\begin{equation}\n\\label{eq_helmholtz_strong}\n\\left \\{\n\\begin{array}{rcll}\n-\\omega^2 \\mu u-\\div \\left (\\boldsymbol A\\boldsymbol \\nabla u\\right ) &=& f & \\text{ in } \\Omega,\n\\\\\nu &=& 0 & \\text{ on } \\Gamma_{\\rm D},\n\\\\\n\\boldsymbol A\\boldsymbol \\nabla u \\cdot \\boldsymbol n &=& 0 & \\text{ on } \\Gamma_{\\rm N},\n\\\\\n\\boldsymbol A\\boldsymbol \\nabla u \\cdot \\boldsymbol n - i\\omega\\gamma u &=& 0 & \\text{ on } \\Gamma_{\\rm R},\n\\end{array}\n\\right .\n\\end{equation}\nwhere $\\overline{\\Gamma_{\\rm D}} \\cup \\overline{\\Gamma_{\\rm N}} \\cup \\overline{\\Gamma_{\\rm R}} = \\partial \\Omega$\nis a partition of the boundary and $\\mu,\\boldsymbol A$ and $\\gamma$ are given coefficients.\nPrecise assumptions are listed in Section \\ref{section_setting}.\n\nOur interest lies in interior penalty discontinuous Galerkin (IPDG) discretizations\nof \\eqref{eq_helmholtz_strong}. In particular, we focus on the ``minimal regularity''\ncase, where we do not assume any specific smoothness for the solution $u$. 
To the best\nof our knowledge, this problem has not been considered in the literature, and available\nworks essentially assume that the solution belongs to $H^{3\/2+\\varepsilon}$, so that the\ntraces of $\\boldsymbol \\nabla u$ are well-defined on mesh faces, see e.g. \\cite[Eq. (4.5)]{hoppe_sharma_2013a}\nand \\cite[Lemma 2.6]{sauter_zech_2015a}. Unfortunately, such assumptions rule out important\nconfigurations of coefficients and boundary conditions, which may bring the regularity of the\nsolution arbitrarily close to $H^1$ (see the appendix of \\cite{costabel_dauge_nicaise_1999a}\nfor instance).\n\nWhen considering conforming finite element discretizations of \\eqref{eq_helmholtz_strong},\nthe so-called ``Schatz argument'' makes it possible to show that the discrete solution $u_h$ is\nquasi-optimal if the mesh is fine enough \\cite{schatz_1974a}. Assuming for simplicity that\n$\\Gamma_{\\rm R} = \\emptyset$ and introducing the energy norm\n\\begin{equation*}\n\\enorm{v}_{\\omega,\\Omega}^2\n:=\n\\omega^2 \\|v\\|_{\\mu,\\Omega}^2 + \\|\\boldsymbol \\nabla v\\|_{\\boldsymbol A,\\Omega}^2\n\\quad\n\\forall v \\in H^1_{\\Gamma_{\\rm D}}(\\Omega),\n\\end{equation*}\nwe have\n\\begin{equation}\n\\label{eq_intro_apriori}\n\\enorm{u-u_h}_{\\omega,\\Omega}\n\\leq\n\\frac{1}{1-\\gamma_{\\rm ba}^2} \\min_{v_h \\in V_h} \\enorm{u-v_h}_{\\omega,\\Omega},\n\\end{equation}\nwhenever the approximation factor\n\\begin{equation}\n\\label{eq_intro_gba}\n\\gamma_{\\rm ba} := 2\\omega \\max_{\\substack{\\psi \\in L^2(\\Omega) \\\\ \\|\\psi\\|_{\\mu,\\Omega} = 1}}\n\\min_{v_h \\in V_h} \\enorm{u_\\psi-v_h}_{\\omega,\\Omega}\n\\end{equation}\nis strictly less than one (all these notations are detailed in Section \\ref{section_preliminary}\nbelow). Above, $u_\\psi$ solves \\eqref{eq_helmholtz_strong}\nwith right-hand side $\\mu\\psi$ instead of $f$. 
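The role of the condition that the approximation factor be strictly less than one can be made concrete with a small numerical illustration (ours, not from the cited works): the quasi-optimality constant stays close to one when the factor is well below one and blows up as it approaches one.

```python
# Quasi-optimality constant 1 / (1 - gamma_ba^2) from the a priori bound:
# harmless when gamma_ba is bounded away from one, useless as gamma_ba -> 1.

def quasi_opt_constant(gamma_ba: float) -> float:
    if not 0.0 <= gamma_ba < 1.0:
        raise ValueError("the bound requires gamma_ba < 1")
    return 1.0 / (1.0 - gamma_ba**2)

for g in (0.1, 0.5, 0.9, 0.99):
    print(f"gamma_ba = {g:4.2f}  ->  constant = {quasi_opt_constant(g):8.2f}")
```

Since the approximation factor involves the dual solutions, driving it below one is precisely what "the mesh is fine enough" means in the Schatz argument.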
Similarly, when considering a posteriori\nerror estimation \\cite{chaumontfrelet_ern_vohralik_2021a}, we have\n\\begin{equation}\n\\label{eq_intro_aposteriori}\n\\enorm{u-u_h}_{\\omega,\\Omega} \\leq \\sqrt{1+\\gamma_{\\rm ba}^2} \\eta,\n\\end{equation}\nwhere $\\eta$ is a suitable a posteriori estimator.\n\nEstimates similar to \\eqref{eq_intro_apriori} and \\eqref{eq_intro_aposteriori}\nare available for IPDG discretizations \\cite{melenk_parsania_sauter_2013a,sauter_zech_2015a},\nbut with energy norms involving the normal trace of the gradient on faces, thus essentially\nrequiring $H^{3\/2+\\varepsilon}$ regularity of $u_\\psi$. Here, in contrast, we extend\n\\eqref{eq_intro_apriori} and \\eqref{eq_intro_aposteriori} to IPDG discretizations without\nadditional regularity assumptions on the solutions $u_\\psi$. As detailed below, our key\nfinding is that this can be achieved by redefining the approximation factor as\n\\begin{equation}\n\\label{eq_intro_gba_IPDG}\n\\gamma_{\\rm ba}^2\n:=\n4\\omega^2\n\\max_{\\substack{\\psi \\in L^2(\\Omega) \\\\ \\|\\psi\\|_{\\mu,\\Omega} = 1}}\n\\left (\n\\min_{v_h \\in V_h} \\enorm{u_\\psi-v_h}_{\\dagger,1,\\mathcal T_h}^2\n+\n\\min_{\\boldsymbol w_h \\in \\boldsymbol W_h} \\enorm{\\boldsymbol A\\boldsymbol \\nabla u_\\psi-\\boldsymbol w_h}_{\\dagger,\\operatorname{div},\\mathcal T_h}^2\n\\right ),\n\\end{equation}\nwhere $\\boldsymbol W_h$ is the BDM finite element space built using the same mesh and polynomial degree\nas $V_h$, and $\\enorm{{\\cdot}}_{\\dagger,1,\\mathcal T_h}$ and $\\enorm{{\\cdot}}_{\\dagger,\\operatorname{div},\\mathcal T_h}$\nare $H^1_{\\Gamma_{\\rm D}}(\\Omega)$ and $\\boldsymbol H_{\\Gamma_{\\rm N}}(\\operatorname{div},\\Omega)$ norms appropriately\nscaled by the mesh size. 
The additional term in \\eqref{eq_intro_gba_IPDG}\nas compared to \\eqref{eq_intro_gba} is necessary to account for the non-conformity\nof the scheme.\n\nIn fact, the interest of the subject extends beyond time-harmonic wave propagation:\nthe a priori and a posteriori error analyses of finite element discretizations\nof \\eqref{eq_helmholtz_strong} rely on duality arguments of Aubin-Nitsche type\n\\cite{schatz_1974a}. Such techniques are crucial in time-harmonic wave propagation\n\\cite{melenk_sauter_2010a,sauter_zech_2015a}, but they are also useful in other contexts\nto establish convergence in weak norms, see e.g.\n\\cite[Section 5.1]{arnold_brezzi_cockburn_marini_2002a}.\nTo the best of our knowledge, duality analysis for IPDG discretizations under\nminimal regularity has not been addressed in the literature, and it is our goal\nto do so here.\n\n\nThe remainder of this work is organized as follows. In Section \\ref{section_preliminary},\nwe make the setting precise and introduce key notations. Section \\ref{section_duality}\npresents the key argument that enables duality techniques for IPDG under minimal regularity.\nSections \\ref{section_apriori} and \\ref{section_aposteriori} then employ the aforementioned\nreasoning to perform the a priori and a posteriori error analysis of IPDG discretizations of\nHelmholtz problems.\n\n\\section{Setting and preliminary results}\n\\label{section_preliminary}\n\n\\subsection{Setting}\n\\label{section_setting}\n\nThroughout this work, $\\Omega \\subset \\mathbb R^d$, with $d=2$ or $3$, is a Lipschitz\npolytopal domain. 
The boundary of $\\Omega$ is partitioned into three open, Lipschitz\nand disjoint polytopal subsets $\\Gamma_{\\rm D}$, $\\Gamma_{\\rm N}$ and $\\Gamma_{\\rm R}$ such that\n$\\partial \\Omega = \\overline{\\Gamma_{\\rm D}} \\cup \\overline{\\Gamma_{\\rm N}} \\cup \\overline{\\Gamma_{\\rm R}}$.\nWe employ the notation $\\boldsymbol n$ for the unit vector normal to $\\partial \\Omega$\npointing outside $\\Omega$.\n\nWe consider coefficients $\\mu: \\Omega \\to \\mathbb R$, $\\boldsymbol A: \\Omega \\to \\mathbb R^{d \\times d}$\nand $\\gamma: \\Gamma_{\\rm R} \\to \\mathbb R$ satisfying the following properties. We assume that there\nexists a partition $\\mathscr P$ of $\\Omega$ into a finite number of disjoint open polytopal\nsubdomains such that $\\mu|_P = \\mu_P \\in \\mathbb R$ and $\\boldsymbol A|_P = \\boldsymbol A_P \\in \\mathbb R^{d \\times d}$ take constant\nvalues for all $P \\in \\mathscr P$. Similarly, there exists a finite partition $\\mathscr Q$ of $\\Gamma_{\\rm R}$\nconsisting of open polytopal subsets such that $\\gamma|_Q = \\gamma_Q \\in \\mathbb R$ is constant\nfor each $Q \\in \\mathscr Q$. For each $P \\in \\mathscr P$, we assume that $\\boldsymbol A_P$ is symmetric, and let\n$\\alpha_P := \\min_{\\boldsymbol \\xi \\in \\mathbb R^d; |\\boldsymbol \\xi| = 1} \\boldsymbol A_P \\boldsymbol \\xi \\cdot \\boldsymbol \\xi$. We then classically\nrequire that $\\min_{P \\in \\mathscr P} \\mu_P > 0$, $\\min_{P \\in \\mathscr P} \\alpha_P > 0$, and\n$\\min_{Q \\in \\mathscr Q} \\gamma_Q > 0$.\n\nWe also fix a real number $\\omega > 0$ representing the (angular) frequency.\n\n\\subsection{Functional spaces}\n\nIf $D \\subset \\mathbb R^d$ is an open set, we denote by $L^2(D)$ the Lebesgue space\nof complex-valued square integrable functions defined over $D$, and we set\n$\\boldsymbol L^2(D) := [L^2(D)]^d$ for vector-valued functions. 
The notations $(\\cdot,\\cdot)_D$\nand $\\|{\\cdot}\\|_D$ then stand for the usual inner-product and norm of $L^2(D)$ or $\\boldsymbol L^2(D)$.\nIn addition, if $w: D \\to \\mathbb R$ is a measurable function satisfying\n$0 < \\operatorname{ess} \\inf_D w$ and $\\operatorname{ess} \\sup_D w < +\\infty$,\nthen $\\|{\\cdot}\\|_{w,D}^2 := (w \\cdot,\\cdot)_D$ defines a norm on $L^2(D)$ equivalent\nto the standard one. We use the same notation in $\\boldsymbol L^2(D)$ with matrix-valued weights.\nBesides, we employ similar notations for $d-1$ manifolds.\n\n$H^1(D)$ is the Sobolev space of functions $v \\in L^2(D)$ such that $\\boldsymbol \\nabla v \\in \\boldsymbol L^2(D)$,\nwhere $\\boldsymbol \\nabla$ denotes the weak gradient defined in the sense of distributions. If\n$\\Gamma \\subset \\partial D$ is a relatively open subset of the boundary of $D$, then\n$H^1_\\Gamma(D)$ stands for the subset of functions $v \\in H^1(D)$ such that\n$v|_{\\Gamma} = 0$ in the sense of traces. We refer the reader to \\cite{adams_fournier_2003a}\nfor a detailed description of the above spaces. On $H^1_{\\Gamma_{\\rm D}}(\\Omega)$, we will often employ\nthe following ``energy'' norm:\n\\begin{equation}\n\\label{eq_energy_norm}\n\\enorm{v}_{\\omega,\\Omega}^2\n:=\n\\omega^2\\|v\\|_{\\mu,\\Omega}^2 + \\omega\\|v\\|_{\\gamma,\\Gamma_{\\rm R}}^2 + \\|\\boldsymbol \\nabla v\\|_{\\boldsymbol A,\\Omega}^2\n\\quad\n\\forall v \\in H^1_{\\Gamma_{\\rm D}}(\\Omega).\n\\end{equation}\n\nIt turns out that Sobolev spaces of vector-valued functions will be key in the duality\nanalysis we are about to perform. Specifically, we will need the space $\\boldsymbol H(\\operatorname{div},D)$\nof functions $\\boldsymbol v \\in \\boldsymbol L^2(D)$ such that $\\div \\boldsymbol v \\in L^2(D)$ where $\\div$ is the weak\ndivergence operator \\cite{girault_raviart_1986a}. 
Following, e.g., \\cite{fernandes_gilardi_1997a},\nif $\\Gamma \\subset \\partial D$ is a relatively open set, the normal trace $(\\boldsymbol w \\cdot \\boldsymbol n)|_\\Gamma$\nof $\\boldsymbol w \\in \\boldsymbol H(\\operatorname{div},D)$ can be defined in a weak sense, and $\\boldsymbol H_\\Gamma(\\operatorname{div},D)$\nwill stand for the space of $\\boldsymbol w \\in \\boldsymbol H(\\operatorname{div},D)$ such that $(\\boldsymbol w \\cdot \\boldsymbol n)|_\\Gamma = 0$.\n\nWe are studying ``Robin-type'' boundary conditions which are not naturally handled in\nthe $\\boldsymbol H(\\operatorname{div})$ setting. We follow the standard remedy \\cite{chaumontfrelet_2019a,monk_2003a}\nand introduce the space\n\\begin{equation*}\n\\boldsymbol X(\\operatorname{div},\\Omega)\n:=\n\\left \\{\n\\boldsymbol w \\in \\boldsymbol H(\\operatorname{div},\\Omega) \\; | \\; (\\boldsymbol w \\cdot \\boldsymbol n)|_{\\Gamma_{\\rm R}} \\in L^2(\\Gamma_{\\rm R})\n\\right \\},\n\\end{equation*}\nwhere additional normal trace regularity is enforced on the Robin boundary.\nThe notation $\\boldsymbol X_{\\Gamma_{\\rm N}}(\\operatorname{div},\\Omega) := \\boldsymbol X(\\operatorname{div},\\Omega) \\cap \\boldsymbol H_{\\Gamma_{\\rm N}}(\\operatorname{div},\\Omega)$\nwill also be useful.\n\n\\subsection{Helmholtz problem}\n\nCentral to our considerations will be the sesquilinear form\n\\begin{equation}\n\\label{eq_b}\nb(\\phi,v)\n:=\n-\\omega^2 (\\mu\\phi,v)_\\Omega\n-i\\omega(\\gamma\\phi,v)_{\\Gamma_{\\rm R}}\n+(\\boldsymbol A\\boldsymbol \\nabla \\phi,\\boldsymbol \\nabla v)_\\Omega\n\\qquad\n\\forall \\phi,v \\in H^1_{\\Gamma_{\\rm D}}(\\Omega),\n\\end{equation}\ncorresponding to the weak formulation of \\eqref{eq_helmholtz_strong}.\nWe will assume throughout this work that the considered Helmholtz\nproblem is well-posed, meaning that there exists $\\gamma_{\\rm st} > 0$ such that\n\\begin{equation}\n\\label{eq_well_posed}\n\\min_{\\substack{\\phi \\in H^1_{\\Gamma_{\\rm D}}(\\Omega) 
\\\\ \\enorm{\\phi}_{\\omega,\\Omega} = 1}}\n\\max_{\\substack{ v \\in H^1_{\\Gamma_{\\rm D}}(\\Omega) \\\\ \\enorm{ v }_{\\omega,\\Omega} = 1}}\n\\Re b(\\phi,v)\n=\n\\frac{1}{\\gamma_{\\rm st}}.\n\\end{equation}\nIn our setting, \\eqref{eq_well_posed} always holds as soon as $\\Gamma_{\\rm R}$ has a positive\nmeasure due to the unique continuation principle. On the other hand, if $\\Gamma_{\\rm R} = \\emptyset$,\nthen \\eqref{eq_well_posed} fails to hold if and only if $\\omega^2$ is an eigenvalue of the\nresulting self-adjoint operator. In general, it is hard to quantitatively estimate $\\gamma_{\\rm st}$, but a\nreasonable assumption is that it grows polynomially with the frequency\n\\cite{lafontaine_spence_wunsch_2021a}. It is also worth mentioning that the\nconstant may be explicitly controlled in some specific configurations\n\\cite{%\nbarucq_chaumontfrelet_gout_2017a,%\nchandlerwilde_spence_gibbs_smyshlyaev_2020a,%\nchaumontfrelet_spence_2021a,%\nmoiola_spence_2019a}.\n\nIn the remainder of this work, we fix a right-hand side $f \\in L^2(\\Omega)$,\nand let $u \\in H^1_{\\Gamma_{\\rm D}}(\\Omega)$ be the unique element satisfying\n\\begin{equation}\n\\label{eq_helmholtz_weak}\nb(u,v) = (f,v)_\\Omega \\quad \\forall v \\in H^1_{\\Gamma_{\\rm D}}(\\Omega).\n\\end{equation}\n\n\n\\subsection{Computational mesh}\n\nWe consider a mesh $\\mathcal T_h$ of $\\Omega$ consisting of non-overlapping (closed) simplicial\nelements $K$. We classically assume that $\\mathcal T_h$ is a matching mesh meaning that\nthe intersection of two distinct elements either is empty, or it is a full face,\nedge or vertex of the two elements (see, e.g. \\cite[Section 2.2]{ciarlet_2002a} or\n\\cite[Definition 6.11]{ern_guermond_2019a}).\nThe set of faces of the mesh is denoted by $\\mathcal F_h$. 
We also employ the standard notations\n$h_K$ and $\\rho_K$ for the diameter of $K \\in \\mathcal T_h$ and the diameter of the largest ball\ncontained in $K$ (see \\cite[Theorem 3.1.3]{ciarlet_2002a} or\n\\cite[Definition 6.4]{ern_guermond_2019a}).\nThen, $\\kappa_K := h_K\/\\rho_K$ is the shape-regularity parameter of $K \\in \\mathcal T_h$, and\n$\\kappa := \\max_{K \\in \\mathcal T_h} \\kappa_K$. We also introduce the global mesh size\n$h := \\max_{K \\in \\mathcal T_h} h_K$. Similarly, $h_F$ stands for the diameter of the face\n$F \\in \\mathcal F_h$.\n\nWe further assume that the mesh $\\mathcal T_h$ conforms with the partition of the boundary\nand coefficients in the sense that, for each $K \\in \\mathcal T_h$, there exists a (unique)\n$P \\in \\mathscr P$ such that $K \\subset \\overline{P}$ and for each $F \\in \\mathcal F_h$ with\n$F \\subset \\partial \\Omega$, we have either $F \\subset \\overline{\\Gamma_{\\rm D}}$,\n$F \\subset \\overline{\\Gamma_{\\rm N}}$ or $F \\subset \\overline{\\Gamma_{\\rm R}}$.\nIf $F \\subset \\overline{\\Gamma_{\\rm R}}$, we additionally require that $F \\subset \\overline{Q}$\nfor some (unique) $Q \\in \\mathscr Q$. We respectively denote by $\\mathcal F_h^{\\rm D}$,\n$\\mathcal F_h^{\\rm N}$ and $\\mathcal F_h^{\\rm R}$ the set of faces $F \\in \\mathcal F_h$ such that\n$F \\subset \\overline{\\Gamma_{\\rm D}}$, $\\overline{\\Gamma_{\\rm N}}$ or $\\overline{\\Gamma_{\\rm R}}$. We also set\n$\\mathcal F_h^{\\rm e} := \\mathcal F_h^{\\rm D} \\cup \\mathcal F_h^{\\rm N} \\cup \\mathcal F_h^{\\rm R}$,\nand $\\mathcal F^{\\rm i} := \\mathcal F_h \\setminus \\mathcal F_h^{\\rm e}$.\nFor $K \\in \\mathcal T_h$, we can then set $\\mu_K := \\mu_P$, $\\alpha_K := \\alpha_P$, and\n$\\vartheta_K := \\sqrt{\\alpha_K\/\\mu_K}$, where $P \\in \\mathscr P$ contains $K$. Similarly, if\n$F \\in \\mathcal F_h^{\\rm e}$, we set $\\alpha_F := \\alpha_K$ where $K \\in \\mathcal T_h$ is the\nonly element having $F$ as a face. 
If $F \\in \\mathcal F_h^{\\rm R}$, we also introduce\n$\\gamma_F := \\gamma_Q$ and $\\vartheta_F := \\alpha_F\/\\gamma_F$ where $Q$ is the unique\nsubset in $\\mathscr Q$ containing $F$.\n\nFinally, we associate with each $F \\in \\mathcal F_h$ a unit normal vector $\\boldsymbol n_F$. If\n$F \\in \\mathcal F_h^{\\rm e}$, we require that $\\boldsymbol n_F = \\boldsymbol n$. Otherwise, the orientation\nof $\\boldsymbol n_F$ is arbitrary, but fixed.\n\n\\subsection{Polynomial spaces}\n\nIn the remainder of this work, we fix a polynomial degree $p \\geq 1$.\nFor $K \\in \\mathcal T_h$, $\\mathcal P_p(K)$ is the set of polynomial functions $K \\to \\mathbb C$\nof total degree less than or equal to $p$. For vector-valued functions, we also set\n$\\boldsymbol{\\CP}_p(K) := [\\mathcal P_p(K)]^d$. If $\\mathcal T \\subset \\mathcal T_h$ is a collection of elements\ncovering the (open) domain $\\Theta$, then\n$\\mathcal P_p(\\mathcal T) := \\{ v \\in L^2(\\Theta); \\; v|_K \\in \\mathcal P_p(K) \\; \\forall K \\in \\mathcal T\\}$\nand $\\boldsymbol{\\CP}_p(\\mathcal T) := [\\mathcal P_p(\\mathcal T)]^d$.\n\n\\subsection{Broken Sobolev spaces}\n\nWe define $H^1(\\mathcal T_h)$ as the\nsubset of functions $v \\in L^2(\\Omega)$ such that $v|_K \\in H^1(K)$ for all $K \\in \\mathcal T_h$.\nFor functions in $H^1(\\mathcal T_h)$, we still employ the notation $\\boldsymbol \\nabla$ for the element-wise\nweak gradient, so that $\\boldsymbol \\nabla (H^1(\\mathcal T_h)) \\subset \\boldsymbol L^2(\\Omega)$. To avoid confusion, we\nwill employ the alternative notations $(\\cdot,\\cdot)_{\\mathcal T_h}$ and $\\|{\\cdot}\\|_{\\mathcal T_h}$ for\nthe $L^2(\\Omega)$ and $\\boldsymbol L^2(\\Omega)$ norms and inner-products when working with broken functions\n(we also employ the same notation for weighted norms). 
Similarly, we write\n\\begin{equation*}\n\\|v\\|_{\\gamma,\\mathcal F_h^{\\rm R}}^2 := \\sum_{F \\in \\mathcal F_h^{\\rm R}} \\|v\\|_{\\gamma,F}^2\n\\quad\n\\forall v \\in L^2(\\Gamma_{\\rm R}).\n\\end{equation*}\nThe notation\n\\begin{equation*}\n(\\phi,v)_{\\mathcal F_h} := \\sum_{F \\in \\mathcal F_h} (\\phi,v)_F\n\\qquad\n\\forall \\phi,v \\in \\bigoplus_{F \\in \\mathcal F_h} L^2(F)\n\\end{equation*}\nwill also be useful.\n\n\\subsection{Jumps, averages, lifting and discrete gradient}\n\nConsidering $\\phi \\in H^1(\\mathcal T_h)$, its jump through a face $F \\in \\mathcal F_h^{\\rm i}$\nis defined by\n\\begin{equation*}\n\\jmp{\\phi}_F := \\phi_+|_F \\boldsymbol n_+ \\cdot \\boldsymbol n_F + \\phi_-|_F \\boldsymbol n_- \\cdot \\boldsymbol n_F\n\\end{equation*}\nwith $K_\\pm \\in \\mathcal T_h$ the two elements such that $F = \\partial K_+ \\cap \\partial K_-$,\n$\\phi_\\pm := \\phi|_{K_\\pm}$ and $\\boldsymbol n_\\pm := \\boldsymbol n_{K_\\pm}$.\nFor an exterior face $F \\in \\mathcal F_h^{\\rm e}$, we set instead\n\\begin{equation*}\n\\jmp{\\phi}_F := \\phi|_F \\text{ if } F \\in \\mathcal F_h^{\\rm D}\n\\quad\n\\text{ and }\n\\quad\n\\jmp{\\phi}_F := 0 \\text{ otherwise.}\n\\end{equation*}\nSimilarly, if $\\boldsymbol w \\in \\boldsymbol{\\CP}_p(\\mathcal T_h)$, its average on $F \\in \\mathcal F_h$ is given by\n\\begin{equation*}\n\\avg{\\boldsymbol w}_F := \\frac{1}{2} (\\boldsymbol w_+|_F + \\boldsymbol w_-|_F) \\quad \\text{ if } F \\in \\mathcal F_h^{\\rm i}\n\\quad \\text{ and } \\quad\n\\avg{\\boldsymbol w}_F := \\boldsymbol w|_F \\quad \\text{ if } F \\in \\mathcal F_h^{\\rm e},\n\\end{equation*}\nwhere $\\boldsymbol w_\\pm := \\boldsymbol w|_{K_\\pm}$ for the two elements $K_\\pm \\in \\mathcal T_h$ such that\n$F = \\partial K_- \\cap \\partial K_+$ in the case of an interior face.\n\nA key part of our analysis will be to give a meaning to the terms\n$(\\jmp{\\phi},\\avg{\\boldsymbol A\\boldsymbol \\nabla v} \\cdot \\boldsymbol n_F)_{\\mathcal F_h}$ appearing in 
the\nIPDG form, for functions $\\phi$ and $v$ only belonging to $H^1(\\mathcal T_h)$.\nThis is subtle, since the normal trace of $\\boldsymbol \\nabla v$ is actually not\ndefined on faces. Following \\cite[Section 4.3]{dipietro_ern_2012a},\nthe solution is to introduce a lifting operator defined as follows.\n\nFor $\\phi \\in H^1(\\mathcal T_h)$ we define the lifting $\\LIFT{\\phi}$ as the\nunique element of $\\boldsymbol{\\CP}_p(\\mathcal T_h)$ such that\n\\begin{equation*}\n(\\LIFT{\\phi},\\boldsymbol w_h)_{\\mathcal T_h}\n=\n(\\jmp{\\phi},\\avg{\\boldsymbol w_h} \\cdot \\boldsymbol n_F)_{\\mathcal F_h}\n\\qquad\n\\forall \\boldsymbol w_h \\in \\boldsymbol{\\CP}_p(\\mathcal T_h).\n\\end{equation*}\nNotice that we can then write $(\\boldsymbol A\\LIFT{\\phi},\\boldsymbol \\nabla v)_{\\mathcal T_h}$\ninstead of $(\\jmp{\\phi},\\avg{\\boldsymbol A\\boldsymbol \\nabla v}\\cdot\\boldsymbol n_F)_{\\mathcal F_h}$\nfor functions $\\phi, v \\in \\mathcal P_p(\\mathcal T_h)$, with the advantage that\nthe former expression remains well-defined for general $H^1(\\mathcal T_h)$ arguments.\n\nThe notion of discrete gradient will also be useful. Specifically, we set\n\\begin{equation*}\n\\mathfrak G(\\phi) := \\boldsymbol \\nabla \\phi - \\LIFT{\\phi} \\in \\boldsymbol L^2(\\mathcal T_h)\n\\end{equation*}\nfor all $\\phi \\in H^1(\\mathcal T_h)$. 
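To fix ideas, the lifting can be computed explicitly in a simplified 1D analogue (our own illustrative setup, not the paper's setting: broken P1 functions on a uniform mesh of the unit interval, a Dirichlet face at the left end and a Neumann face at the right end; in 1D, faces are points and vector fields are scalars):

```python
import numpy as np

# Illustrative 1D analogue (assumed setup): broken P1 functions on a uniform
# mesh of (0,1), Dirichlet face at x=0, Neumann face at x=1.  A function phi
# is stored as an (n_el, 2) array of left/right traces per element.

n_el = 4
h = 1.0 / n_el
M_loc = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])  # P1 element mass matrix

def face_terms(phi, w):
    """sum_F (jump(phi), avg(w) * n_F)_F; faces are points in 1D."""
    total = 0.0
    for i in range(1, n_el):                    # interior faces, n_F = +1
        jump = phi[i - 1, 1] - phi[i, 0]        # left trace minus right trace
        avg = 0.5 * (w[i - 1, 1] + w[i, 0])
        total += jump * avg
    total += phi[0, 0] * w[0, 0] * (-1.0)       # Dirichlet face, normal n = -1
    return total                                # Neumann face: jump is zero

def lifting(phi):
    """Lifting in broken P1: solve (L, w)_K = face terms, element by element."""
    rhs = np.zeros((n_el, 2))
    for k in range(n_el):
        for j in (0, 1):
            w = np.zeros((n_el, 2))
            w[k, j] = 1.0                       # nodal basis function on K_k
            rhs[k, j] = face_terms(phi, w)
    return rhs @ np.linalg.inv(M_loc)           # one symmetric 2x2 solve per element

rng = np.random.default_rng(0)
phi, w = rng.standard_normal((2, n_el, 2))

# The defining identity holds for every discrete test function ...
assert np.isclose(np.einsum('ki,ij,kj->', lifting(phi), M_loc, w),
                  face_terms(phi, w))

# ... and a conforming function vanishing on the Dirichlet face has zero
# jumps, hence zero lifting.
x = np.linspace(0.0, 1.0, n_el + 1)
phi_conf = np.column_stack([x[:-1], x[1:]])     # traces of phi(x) = x
assert np.allclose(lifting(phi_conf), 0.0)
```

In this toy setting, the discrete gradient is then the broken derivative minus this lifting, which is why it coincides with the usual gradient for conforming functions.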
Importantly, we have $\\mathfrak G(\\phi) = \\boldsymbol \\nabla \\phi$\nwhenever $\\phi \\in H^1_{\\Gamma_{\\rm D}}(\\Omega)$, and\n\\begin{equation*}\n(\\mathfrak G(\\phi),\\boldsymbol w_h)_{\\mathcal T_h} + (\\phi,\\div \\boldsymbol w_h)_{\\mathcal T_h} = (\\phi,\\boldsymbol w_h \\cdot \\boldsymbol n)_{\\Gamma_{\\rm R}}\n\\end{equation*}\nfor all $\\phi \\in H^1(\\mathcal T_h)$ and $\\boldsymbol w_h \\in \\boldsymbol{\\CP}_p(\\mathcal T_h) \\cap \\boldsymbol X_{\\Gamma_{\\rm N}}(\\operatorname{div},\\Omega)$.\n\n\\subsection{Broken norms}\n\nFor $v \\in H^1(\\mathcal T_h)$, the broken counterpart of the energy norm introduced in\n\\eqref{eq_energy_norm} is defined by\n\\begin{equation*}\n\\enorm{v}^2_{\\omega,\\mathcal T_h}\n:=\n\\omega^2 \\|v\\|_{\\mu,\\mathcal T_h}^2\n+\n\\omega \\|v\\|_{\\gamma,\\mathcal F_h^{\\rm R}}^2\n+\n\\|\\mathfrak G(v)\\|_{\\boldsymbol A,\\mathcal T_h}^2.\n\\end{equation*}\nNotice that it is equal to the $\\enorm{{\\cdot}}_{\\omega,\\Omega}$ norm if\n$v \\in H^1_{\\Gamma_{\\rm D}}(\\Omega)$. We will also employ the mesh-dependent norms\n\\begin{equation*}\n\\|v\\|_{\\dagger,1,\\mathcal T_h}^2\n:=\n\\sum_{K \\in \\mathcal T_h} \\left \\{\n\\max \\left (\n1,\\frac{\\omega^2 h_K^2}{\\vartheta_K^2}\n\\right )\n\\frac{\\alpha_K}{h_K^2}\\|v\\|_K^2\n+\n\\|\\mathfrak G(v)\\|_{\\boldsymbol A,K}^2\n\\right \\}\n+\n\\sum_{F \\in \\mathcal F_h^{\\rm R}}\n\\max \\left (1,\\frac{\\omega h_F}{\\vartheta_F}\\right )\n\\frac{\\alpha_F}{h_F} \\|v\\|_F^2\n\\end{equation*}\nand\n\\begin{equation*}\n\\|\\boldsymbol w\\|_{\\dagger,\\operatorname{div},\\mathcal T_h}^2\n:=\n\\sum_{K \\in \\mathcal T_h} \\left \\{\n\\|\\boldsymbol w\\|_{\\boldsymbol A^{-1},K}^2\n+\n\\frac{h_K^2}{\\alpha_K} \\|\\div \\boldsymbol w\\|_{K}^2\n\\right \\}\n+\n\\sum_{F \\in \\mathcal F_h^{\\rm R}} \\frac{h_F}{\\alpha_F} \\|\\boldsymbol w \\cdot \\boldsymbol n\\|_F^2\n\\end{equation*}\nfor $\\boldsymbol w \\in \\boldsymbol X_{\\Gamma_{\\rm N}}(\\operatorname{div},\\Omega)$. 
These norms are ``dual'' to each other in the sense that\n\\begin{equation}\n\\label{eq_norm_dual}\n|(\\mathfrak G(\\phi),\\boldsymbol w)_{\\mathcal T_h} + (\\phi,\\div \\boldsymbol w)_{\\mathcal T_h} - (\\phi,\\boldsymbol w \\cdot \\boldsymbol n)_{\\mathcal F_h^{\\rm R}}|\n\\leq\n\\|\\phi\\|_{\\dagger,1,\\mathcal T_h}\\|\\boldsymbol w\\|_{\\dagger,\\operatorname{div},\\mathcal T_h}\n\\end{equation}\nfor all $\\phi \\in H^1(\\mathcal T_h)$ and $\\boldsymbol w \\in \\boldsymbol X_{\\Gamma_{\\rm N}}(\\operatorname{div},\\Omega)$.\nFor future reference, we note that the ``$\\max$'' factors in the definition of the\n$\\|{\\cdot}\\|_{\\dagger,1,\\mathcal T_h}$ norm are chosen so that\n\\begin{equation}\n\\label{eq_norm_control}\n\\enorm{v}_{\\omega,\\mathcal T_h} \\leq \\|v\\|_{\\dagger,1,\\mathcal T_h} \\qquad \\forall v \\in H^1(\\mathcal T_h).\n\\end{equation}\n\n\\subsection{IPDG form}\n\nFollowing \\cite[Section 4.3.3]{dipietro_ern_2012a}, our DG approximation of the\n$(\\boldsymbol A\\boldsymbol \\nabla\\cdot,\\boldsymbol \\nabla\\cdot)_\\Omega$ form is given by\n\\begin{equation}\n\\label{eq_IPDG_discrete_gradient}\na_h(\\phi,v)\n:=\n(\\boldsymbol A\\mathfrak G(\\phi),\\mathfrak G(v))_{\\mathcal T_h} + s_h(\\phi,v)\n\\qquad\n\\forall \\phi,v \\in H^1(\\mathcal T_h)\n\\end{equation}\nwhere $s_h: H^1(\\mathcal T_h) \\times H^1(\\mathcal T_h) \\to \\mathbb C$ is a sesquilinear form\nsatisfying $s_h(\\phi,v) = 0$ whenever $\\phi \\in H^1_{\\Gamma_{\\rm D}}(\\Omega)$ or\n$v \\in H^1_{\\Gamma_{\\rm D}}(\\Omega)$. Importantly, we have\n\\begin{equation}\n\\label{eq_consistency}\na_h(\\phi,v) = (\\boldsymbol A\\boldsymbol \\nabla\\phi,\\boldsymbol \\nabla v)_\\Omega\n\\end{equation}\nwhen $\\phi,v \\in H^1_{\\Gamma_{\\rm D}}(\\Omega)$.\n\nIt is worth noting that the IPDG form is usually neither presented nor implemented\nin the form \\eqref{eq_IPDG_discrete_gradient}. 
Instead\n\\cite{agut_diaz_2013a,feng_wu_2009a,grote_schneebeli_schotzau_2006a,sauter_zech_2015a},\nthe method is usually written as\n\\begin{multline}\n\\label{eq_IPDG_jump}\na_h(\\phi,v)\n=\n(\\boldsymbol A\\boldsymbol \\nabla\\phi,\\boldsymbol \\nabla v)_{\\mathcal T_h}\n\\\\\n-\n(\\avg{\\boldsymbol A\\boldsymbol \\nabla \\phi}\\cdot \\boldsymbol n_F,\\jmp{v})_{\\mathcal F_h}\n-\n(\\jmp{\\phi},\\avg{\\boldsymbol A\\boldsymbol \\nabla v}\\cdot \\boldsymbol n_F)_{\\mathcal F_h}\n+\n\\sum_{F \\in \\mathcal F_h^{\\rm i} \\cup \\mathcal F_h^{\\rm D}} \\frac{\\beta_F}{h_F}(\\jmp{\\phi},\\jmp{v})_F\n\\end{multline}\nwhere $\\beta_F \\sim p^2$ is a penalty parameter chosen to be large enough\n\\cite{agut_diaz_2013a}.\nHowever, as shown in \\cite[Section 4.3.3]{dipietro_ern_2012a},\nthe formulation of \\eqref{eq_IPDG_jump} can be recast in the framework of\n\\eqref{eq_IPDG_discrete_gradient} by setting\n\\begin{equation}\n\\label{eq_penalization_usual}\ns_h(\\phi,v)\n:=\n\\sum_{F \\in \\mathcal F_h^{\\rm i} \\cup \\mathcal F_h^{\\rm D}} \\frac{\\beta_F}{h_F}(\\jmp{\\phi},\\jmp{v})_F\n-(\\boldsymbol A\\LIFT{\\phi},\\LIFT{v})_\\Omega.\n\\end{equation}\n\n\\subsection{The discrete Helmholtz problem}\n\nConsider the sesquilinear form \n\\begin{equation*}\nb_h(\\phi,v)\n:=\n-\\omega^2 (\\mu\\phi,v)_{\\mathcal T_h}\n-i\\omega (\\gamma\\phi,v)_{\\mathcal F_h^{\\rm R}}\n+a_h(\\phi,v)\n\\qquad\n\\forall \\phi,v \\in H^1(\\mathcal T_h).\n\\end{equation*}\nThen, the discrete problem consists in finding $u_h \\in \\mathcal P_p(\\mathcal T_h)$ such that\n\\begin{equation}\n\\label{eq_helmholtz_ipdg}\nb_h(u_h,v_h) = (f,v_h)_{\\mathcal T_h} \\qquad \\forall v_h \\in \\mathcal P_p(\\mathcal T_h).\n\\end{equation}\nFor any $u_h$ satisfying \\eqref{eq_helmholtz_ipdg}, recalling \\eqref{eq_helmholtz_weak}\nand due to \\eqref{eq_consistency}, the Galerkin orthogonality property\n\\begin{equation}\n\\label{eq_galerkin_orthogonality}\nb_h(u-u_h,w_h) = 0\n\\end{equation}\nholds true for all 
discrete conforming test functions $w_h \in \mathcal P_p(\mathcal T_h) \cap H^1_{\Gamma_{\rm D}}(\Omega)$.\nSimilarly, we have\n\begin{equation}\n\label{eq_continuity}\n|b_h(\phi,v)|\n\leq\n\enorm{\phi}_{\omega,\mathcal T_h}\enorm{v}_{\omega,\mathcal T_h}\n\end{equation}\nfor $\phi,v \in H^1(\mathcal T_h)$ whenever $\phi$ or $v$ belongs to $H^1_{\Gamma_{\rm D}}(\Omega)$.\n\n\n\subsection{Conforming subspaces and projections}\n\nThe sets $\mathcal P_p(\mathcal T_h) \cap H^1_{\Gamma_{\rm D}}(\Omega)$ and $\boldsymbol{\CP}_p(\mathcal T_h) \cap \boldsymbol X_{\Gamma_{\rm N}}(\operatorname{div},\Omega)$\nare the usual Lagrange and BDM finite element spaces (see, e.g.,\n\cite[Sections 8.5.1 and 11.5]{ern_guermond_2019a}). Although they are not needed to implement\nthe IPDG discretization, they will be extremely useful for the analysis.\nThe projections\n\begin{equation*}\n\pi_h^{\rm g} v\n:=\n\arg \min_{v_h \in \mathcal P_p(\mathcal T_h) \cap H^1_{\Gamma_{\rm D}}(\Omega)}\n\|v-v_h\|_{\dagger,1,\mathcal T_h}\n\end{equation*}\nand\n\begin{equation*}\n\pi_h^{\rm d} \boldsymbol w\n:=\n\arg \min_{\boldsymbol w_h \in \boldsymbol{\CP}_p(\mathcal T_h) \cap \boldsymbol X_{\Gamma_{\rm N}}(\operatorname{div},\Omega)}\n\|\boldsymbol w-\boldsymbol w_h\|_{\dagger,\operatorname{div},\mathcal T_h}\n\end{equation*}\nare well-defined for all $v \in H^1_{\Gamma_{\rm D}}(\Omega)$ and $\boldsymbol w \in \boldsymbol X_{\Gamma_{\rm N}}(\operatorname{div},\Omega)$,\nsince the mesh-dependent norms appearing in the right-hand sides are naturally associated\nwith inner products. 
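Since the mesh-dependent norms are induced by inner products, these minimizers are simply orthogonal projections. As a minimal illustration (the inner-product notation $(\cdot,\cdot)_{\dagger,1,\mathcal T_h}$ is introduced here only for this remark and is not used elsewhere), $\pi_h^{\rm g} v$ is equivalently characterized by the variational equations

```latex
\begin{equation*}
(\pi_h^{\rm g} v,v_h)_{\dagger,1,\mathcal T_h}
=
(v,v_h)_{\dagger,1,\mathcal T_h}
\qquad
\forall v_h \in \mathcal P_p(\mathcal T_h) \cap H^1_{\Gamma_{\rm D}}(\Omega),
\end{equation*}
```

so that in particular $\|v-\pi_h^{\rm g} v\|_{\dagger,1,\mathcal T_h} \leq \|v-v_h\|_{\dagger,1,\mathcal T_h}$ for every $v_h \in \mathcal P_p(\mathcal T_h) \cap H^1_{\Gamma_{\rm D}}(\Omega)$; the projection $\pi_h^{\rm d}$ enjoys the analogous characterization in the divergence norm.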
We will also need the conforming projector\n\begin{equation*}\n\pi^{\rm g} v := \arg \min_{s \in H^1_{\Gamma_{\rm D}}(\Omega)} \|v-s\|_{\dagger,1,\mathcal T_h}\n\end{equation*}\ndefined for all $v \in H^1(\mathcal T_h)$.\n\n\subsection{Approximation factors}\n\n\begin{subequations}\n\label{eq_definition_gba}\nFollowing\n\cite{chaumontfrelet_ern_vohralik_2021a,\nmelenk_parsania_sauter_2013a,\nmelenk_sauter_2010a,\nsauter_zech_2015a}, our analysis relies on duality arguments, in which\napproximation factors play a central role.\nFor $\psi \in L^2(\Omega)$ and $\Psi \in L^2(\Gamma_{\rm R})$,\nlet $u^\star_\psi,U^\star_\Psi \in H^1_{\Gamma_{\rm D}}(\Omega)$ solve\n\begin{equation*}\nb(w,u^\star_\psi) = \omega(\mu w,\psi)_\Omega,\n\qquad\nb(w,U^\star_\Psi) = \omega^{1\/2}(\gamma w,\Psi)_{\Gamma_{\rm R}}\n\qquad\n\forall w \in H^1_{\Gamma_{\rm D}}(\Omega).\n\end{equation*}\nThe approximation factors\n\begin{equation}\n\label{eq_definition_gbag}\n\widecheck \gamma_{{\rm ba},{\rm g}}\n:=\n\max_{\substack{\psi \in L^2(\Omega) \\ \|\psi\|_{\mu,\Omega} = 1}}\n\|u_\psi^\star-\pi_h^{\rm g} u_\psi^\star\|_{\dagger,1,\mathcal T_h},\n\quad\n\widetilde \gamma_{{\rm ba},{\rm g}}\n:=\n\max_{\substack{\Psi \in L^2(\Gamma_{\rm R}) \\ \|\Psi\|_{\gamma,\Gamma_{\rm R}} = 1}}\n\|U_\Psi^\star-\pi_h^{\rm g} U_\Psi^\star\|_{\dagger,1,\mathcal T_h},\n\end{equation}\nwere previously employed in \cite{chaumontfrelet_ern_vohralik_2021a}\nfor the a posteriori error analysis of conforming discretizations.\nHere, we will additionally need divergence-conforming approximation factors\n\begin{equation}\n\label{eq_definition_gbad}\n\begin{aligned}\n\widecheck \gamma_{{\rm ba},{\rm d}} &:= \max_{\substack{\psi \in L^2(\Omega) \\ \|\psi\|_{\mu,\Omega} = 1}}\n\|\boldsymbol A\boldsymbol \nabla u_\psi^\star-\pi_h^{\rm d} (\boldsymbol A\boldsymbol \nabla u_\psi^\star)\|_{\dagger,\operatorname{div},\mathcal T_h},\n\\\n\widetilde \gamma_{{\rm ba},{\rm d}} &:= \max_{\substack{\Psi \in L^2(\Gamma_{\rm R}) \\ \|\Psi\|_{\gamma,\Gamma_{\rm R}} = 1}}\n\|\boldsymbol A\boldsymbol \nabla U_\Psi^\star-\pi_h^{\rm d} (\boldsymbol A\boldsymbol \nabla U_\Psi^\star)\|_{\dagger,\operatorname{div},\mathcal T_h},\n\end{aligned}\n\end{equation}\nto deal with the non-conformity of the IPDG approximation under minimal regularity.\nWe finally introduce the following shorthand notations\n\begin{align}\n\label{eq_definition_gba_summed}\n\gamma_{{\rm ba},{\rm g}}^2 &:= 4\widecheck \gamma_{{\rm ba},{\rm g}}^2 + 2\widetilde \gamma_{{\rm ba},{\rm g}}^2,\n\quad\n&\gamma_{{\rm ba},{\rm d}}^2 &:= 4\widecheck \gamma_{{\rm ba},{\rm d}}^2 + 2\widetilde \gamma_{{\rm ba},{\rm d}}^2,\n\\\n\widecheck \gamma_{\rm ba}^2 &:= \widecheck \gamma_{{\rm ba},{\rm g}}^2 + \widecheck \gamma_{{\rm ba},{\rm d}}^2,\n\quad\n&\widetilde \gamma_{\rm ba}^2 &:= \widetilde \gamma_{{\rm ba},{\rm g}}^2 + \widetilde \gamma_{{\rm ba},{\rm d}}^2,\n\end{align}\nand\n\begin{equation}\n\gamma_{\rm ba}^2 := \gamma_{{\rm ba},{\rm g}}^2 + \gamma_{{\rm ba},{\rm d}}^2 = 4\widecheck \gamma_{\rm ba}^2 + 2\widetilde \gamma_{\rm ba}^2.\n\end{equation}\n\end{subequations}\n\nBy combining elliptic regularity and approximation properties of finite element\nspaces, it is easy to see that $\gamma_{\rm ba} \to 0$ as $h\/p \to 0$. However, the dependency\non the PDE coefficients and on the frequency may be hard to track. 
In particular,\nthe stability constant $\gamma_{\rm st}$ typically plays a central role in estimating $\gamma_{\rm ba}$.\nQualitative estimates for $\gamma_{\rm ba}$ can be found in\n\cite{chaumontfrelet_nicaise_2020a,melenk_sauter_2010a}, and explicit estimates\nare available in specific situations \cite{chaumontfrelet_ern_vohralik_2021a}.\n\n\section{Duality under minimal regularity}\n\label{section_duality}\n\nIn this separate section, we state the key technical result that enables us\nto perform duality arguments under minimal regularity assumptions. We believe\nit will prove useful in other contexts, and we therefore single it out for easy reference.\nAlthough the arguments are not complicated, we were not able to find a proof\nin the literature.\n\n\begin{lemma}[Lifting error]\n\label{lemma_lifting}\nFor all $\phi \in H^1(\mathcal T_h)$ and $\boldsymbol \sigma \in \boldsymbol X_{\Gamma_{\rm N}}(\operatorname{div},\Omega)$, we have\n\begin{multline}\n\label{eq_nonconf_identity}\n(\mathfrak G(\phi),\boldsymbol \sigma)_{\mathcal T_h}\n+\n(\phi,\div \boldsymbol \sigma)_{\mathcal T_h}\n-\n(\phi,\boldsymbol \sigma \cdot \boldsymbol n)_{\mathcal F_h^{\rm R}}\n=\n\\\n(\mathfrak G(\phi-\widetilde\phi),\boldsymbol \sigma-\boldsymbol \sigma_h)_{\mathcal T_h}\n+\n(\phi-\widetilde \phi,\div(\boldsymbol \sigma-\boldsymbol \sigma_h))_{\mathcal T_h}\n-\n(\phi-\widetilde \phi,(\boldsymbol \sigma-\boldsymbol \sigma_h) \cdot \boldsymbol n)_{\mathcal F_h^{\rm R}}\n\end{multline}\nand in particular\n\begin{equation}\n\label{eq_nonconf_estimate}\n|(\mathfrak G(\phi),\boldsymbol \sigma)_{\mathcal T_h}+(\phi,\div \boldsymbol \sigma)_{\mathcal T_h}-(\phi,\boldsymbol \sigma \cdot \boldsymbol n)_{\mathcal F_h^{\rm R}}|\n\leq\n\|\phi-\widetilde \phi\|_{\dagger,1,\mathcal T_h}\n\|\boldsymbol \sigma-\boldsymbol \sigma_h\|_{\dagger,\operatorname{div},\mathcal T_h}\n\end{equation}\nfor all 
$\\widetilde \\phi \\in H^1_{\\Gamma_{\\rm D}}(\\Omega)$ and\n$\\boldsymbol \\sigma_h \\in \\boldsymbol{\\CP}_p(\\mathcal T_h) \\cap \\boldsymbol X_{\\Gamma_{\\rm N}}(\\operatorname{div},\\Omega)$.\n\\end{lemma}\n\n\\begin{proof}\nWe first observe that if $\\widetilde \\phi \\in H^1_{\\Gamma_{\\rm D}}(\\Omega)$, integration by parts gives\n\\begin{equation*}\n(\\mathfrak G(\\widetilde \\phi),\\boldsymbol \\tau)_{\\mathcal T_h}\n+\n(\\widetilde \\phi,\\div \\boldsymbol \\tau)\n-\n(\\widetilde \\phi,\\boldsymbol \\tau \\cdot \\boldsymbol n)_{\\Gamma_{\\rm R}}\n=\n(\\boldsymbol \\nabla \\widetilde \\phi,\\boldsymbol \\tau)_{\\mathcal T_h}\n+\n(\\widetilde \\phi,\\div \\boldsymbol \\tau)\n-\n(\\widetilde \\phi,\\boldsymbol \\tau \\cdot \\boldsymbol n)_{\\Gamma_{\\rm R}}\n=\n0\n\\end{equation*}\nfor all $\\boldsymbol \\tau \\in \\boldsymbol X_{\\Gamma_{\\rm N}}(\\operatorname{div},\\Omega)$ due to the essential boundary conditions\non $\\Gamma_{\\rm D}$ and $\\Gamma_{\\rm N}$. Thus,\n\\begin{align*}\n(\\mathfrak G(\\phi),\\boldsymbol \\sigma)_{\\mathcal T_h} + (\\phi,\\div \\boldsymbol \\sigma)_{\\mathcal T_h} - (\\phi,\\boldsymbol \\sigma \\cdot \\boldsymbol n)_{\\mathcal F_h^{\\rm R}}\n=\n(\\mathfrak G(\\phi-\\widetilde \\phi),\\boldsymbol \\sigma)_{\\mathcal T_h} + (\\phi-\\widetilde \\phi,\\div \\boldsymbol \\sigma)_{\\mathcal T_h}\n-\n(\\phi-\\widetilde \\phi,\\boldsymbol \\sigma \\cdot \\boldsymbol n)_{\\mathcal F_h^{\\rm R}}.\n\\end{align*}\nIf $\\boldsymbol \\sigma_h \\in \\boldsymbol{\\CP}_p(\\mathcal T_h) \\cap \\boldsymbol H(\\operatorname{div},\\Omega)$, we also have\n\\begin{align*}\n0\n&=\n(\\mathfrak G(\\phi),\\boldsymbol \\sigma_h)_{\\mathcal T_h}\n+\n(\\phi,\\div \\boldsymbol \\sigma_h)_{\\mathcal T_h}\n-\n(\\phi,\\boldsymbol \\sigma_h \\cdot \\boldsymbol n)_{\\mathcal F_h^{\\rm R}}\n\\\\\n&=\n(\\mathfrak G(\\phi)-\\boldsymbol \\nabla \\widetilde \\phi,\\boldsymbol \\sigma_h)_{\\mathcal T_h}\n+\n(\\boldsymbol \\nabla \\widetilde \\phi,\\boldsymbol 
\\sigma_h)_{\\mathcal T_h}\n+\n(\\phi,\\div \\boldsymbol \\sigma_h)_{\\mathcal T_h}\n-\n(\\phi,\\boldsymbol \\sigma_h \\cdot \\boldsymbol n)_{\\mathcal F_h^{\\rm R}}\n\\\\\n&=\n(\\mathfrak G(\\phi-\\widetilde \\phi),\\boldsymbol \\sigma_h)_{\\mathcal T_h}\n+\n(\\phi-\\widetilde \\phi,\\div \\boldsymbol \\sigma_h)_{\\mathcal T_h}\n-\n(\\phi-\\widetilde \\phi,\\boldsymbol \\sigma_h \\cdot \\boldsymbol n)_{\\mathcal F_h^{\\rm R}},\n\\end{align*}\nand \\eqref{eq_nonconf_identity} follows after summation. Then, \\eqref{eq_nonconf_estimate}\nis a direct consequence of \\eqref{eq_norm_dual}.\n\\end{proof}\n\nA simple consequence of Lemma \\ref{lemma_lifting} is the following result,\nwhich is crucial to estimate the non-conformity error in the context of\nduality arguments.\n\n\\begin{corollary}[Control of the non-confomity]\n\\label{corollary_nonconf}\nFor all $\\phi \\in H^1(\\mathcal T_h)$ and $\\xi \\in H^1(\\Omega)$ with $\\boldsymbol A\\boldsymbol \\nabla \\xi \\in \\boldsymbol X_{\\Gamma_{\\rm N}}(\\operatorname{div},\\Omega)$,\nwe have\n\\begin{equation}\n\\label{eq_nonconf}\n\\left |\na_h(\\phi,\\xi)+(\\phi,\\div(\\boldsymbol A\\boldsymbol \\nabla\\xi))_{\\mathcal T_h}-(\\phi,\\boldsymbol A\\boldsymbol \\nabla \\xi \\cdot \\boldsymbol n)_{\\mathcal F_h^{\\rm R}}\n\\right |\n\\leq\n\\|\\phi-\\widetilde \\phi\\|_{\\dagger,1,\\mathcal T_h}\\|\\boldsymbol A\\boldsymbol \\nabla \\xi-\\boldsymbol \\sigma_h\\|_{\\dagger,\\operatorname{div},\\mathcal T_h}\n\\end{equation}\nfor all $\\widetilde \\phi \\in H^1_{\\Gamma_{\\rm D}}(\\Omega)$ and\n$\\boldsymbol \\sigma_h \\in \\boldsymbol{\\CP}_p(\\mathcal T_h) \\cap \\boldsymbol X_{\\Gamma_{\\rm N}}(\\operatorname{div},\\Omega)$.\n\\end{corollary}\n\n\\section{A priori analysis}\n\\label{section_apriori}\n\nThe purpose of this section is to establish the existence and uniqueness of\na discrete solution $u_h \\in \\mathcal P_p(\\mathcal T_h)$ to \\eqref{eq_helmholtz_ipdg} and\nto derive a priori error estimates controlling 
$u-u_h$ in suitable norms.\nThe proof follows the lines of the Schatz argument \cite{schatz_1974a}, with\nsuitable modifications to take into account the non-conforming nature of\nthe discrete scheme.\n\nIn this section, we require that there exists $\rho > 0$ such that\n\begin{equation}\n\label{eq_assumption_stabilization_apriori}\n\Re s_h(v_h,v_h) \geq \rho^2 \|v_h-\pi^{\rm g} v_h\|_{\dagger,1,\mathcal T_h}^2\n\qquad\n\forall v_h \in \mathcal P_p(\mathcal T_h).\n\end{equation}\nThis assumption always holds true when\n$s_h(\cdot,\cdot) = \beta\/h(\jmp{\cdot},\jmp{\cdot})_{\mathcal F_h}$\nwith $\beta > 0$. It is also the case when $s_h$ takes the form\n\eqref{eq_penalization_usual} if the penalization parameter is\nsufficiently large \cite{agut_diaz_2013a}. We note that this assumption\nrules out some choices of stabilization including complex-valued stabilization\nparameters, as proposed, e.g., in \cite{feng_wu_2009a,sauter_zech_2015a}.\nFor the sake of simplicity, we only consider stabilizations satisfying\n\eqref{eq_assumption_stabilization_apriori}, but slight modifications\nof our proofs could handle more general situations.\n\nWe start with the main duality argument, which is a generalization of the Aubin-Nitsche trick.\n\n\begin{lemma}[Aubin-Nitsche]\n\label{lemma_aubin_nitsche}\nAssume that $u_h \in \mathcal P_p(\mathcal T_h)$ satisfies \eqref{eq_helmholtz_ipdg}.\nThen, we have\n\begin{equation}\n\label{eq_aubin_nitsche_omega}\n\omega\|u-u_h\|_{\mu,\mathcal T_h}\n\leq\n\widecheck \gamma_{{\rm ba},{\rm g}} \enorm{u-u_h}_{\omega,\mathcal T_h}\n+\n\widecheck \gamma_{{\rm ba},{\rm d}} \|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}\n\end{equation}\nand\n\begin{equation}\n\label{eq_aubin_nitsche_GR}\n\omega^{1\/2}\|u-u_h\|_{\gamma,\mathcal F_h^{\rm R}}\n\leq\n\widetilde \gamma_{{\rm ba},{\rm g}} \enorm{u-u_h}_{\omega,\mathcal T_h}\n+\n\widetilde \gamma_{{\rm ba},{\rm d}}\|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}.\n\end{equation}\n\end{lemma}\n\n\begin{proof}\nFor the sake of shortness, we set $e_h := u-u_h$.\nWe first establish \eqref{eq_aubin_nitsche_omega}. To do so,\nwe define $\xi$ as the unique element of $H^1_{\Gamma_{\rm D}}(\Omega)$ such that\n$b(\phi,\xi) = \omega (\mu\phi,e_h)_\Omega$ for all $\phi \in H^1_{\Gamma_{\rm D}}(\Omega)$.\nIntegrating by parts, we see that the identities\n\begin{equation*}\n-\omega^2 \mu \xi - \div\left (\boldsymbol A\boldsymbol \nabla \xi\right ) = \omega\mu e_h\n\qquad\n\boldsymbol A\boldsymbol \nabla \xi \cdot \boldsymbol n+i\omega\gamma\xi = 0\n\end{equation*}\nrespectively hold in $L^2(\Omega)$ and $L^2(\Gamma_{\rm R})$. As a result,\n\begin{equation*}\n\omega \|e_h\|_{\mu,\mathcal T_h}^2\n=\n-\omega^2 (\mu e_h,\xi)\n-i\omega(\gamma e_h,\xi)_{\mathcal F_h^{\rm R}}\n-\n\left \{\n(e_h,\div(\boldsymbol A\boldsymbol \nabla \xi))_{\mathcal T_h}\n-(e_h,\boldsymbol A\boldsymbol \nabla \xi \cdot \boldsymbol n)_{\mathcal F_h^{\rm R}}\n\right \}\n\end{equation*}\nand we have from Corollary \ref{corollary_nonconf} that\n\begin{equation*}\n\omega \|e_h\|_{\mu,\mathcal T_h}^2\n\leq\n|b_h(e_h,\xi)|\n+\n\|e_h-\widetilde \phi\|_{\dagger,1,\mathcal T_h}\n\|\boldsymbol A\boldsymbol \nabla \xi-\boldsymbol \sigma_h\|_{\dagger,\operatorname{div},\mathcal T_h}\n\end{equation*}\nfor all $\widetilde \phi \in H^1_{\Gamma_{\rm D}}(\Omega)$ and\n$\boldsymbol \sigma_h \in \boldsymbol{\CP}_p(\mathcal T_h) \cap \boldsymbol X_{\Gamma_{\rm N}}(\operatorname{div},\Omega)$.\nOn the one hand, we select $\widetilde \phi = u-\pi^{\rm g} u_h$ and\n$\boldsymbol \sigma_h = \pi_h^{\rm d}(\boldsymbol A\boldsymbol \nabla\xi)$, so that using \eqref{eq_definition_gbad},\nwe have\n\begin{equation*}\n\omega \|e_h\|_{\mu,\mathcal T_h}^2\n\leq\n|b_h(e_h,\xi)|\n+\n\widecheck \gamma_{{\rm ba},{\rm d}} 
\\|u_h-\\pi^{\\rm g} u_h\\|_{\\dagger,1,\\mathcal T_h} \\|e_h\\|_{\\mu,\\mathcal T_h}.\n\\end{equation*}\nOn the other hand, using \\eqref{eq_galerkin_orthogonality} and \\eqref{eq_continuity}, we have\n\\begin{equation*}\n|b_h(e_h,\\xi)|\n=\n|b_h(e_h,\\xi-\\pi_h^{\\rm g}\\xi)|\n\\leq\n\\enorm{e_h}_{\\omega,\\mathcal T_h}\\enorm{\\xi-\\pi_h^{\\rm g}\\xi}_{\\omega,\\Omega}\n\\leq\n\\widecheck \\gamma_{{\\rm ba},{\\rm g}} \\enorm{e_h}_{\\omega,\\mathcal T_h}\\|e_h\\|_{\\mu,\\mathcal T_h}.\n\\end{equation*}\n\nWe then prove \\eqref{eq_aubin_nitsche_GR}. Instead of $\\xi$,\nwe define $\\chi$ as the unique element of $H^1_{\\Gamma_{\\rm D}}(\\Omega)$\nsuch that $b(\\phi,\\chi) = \\omega^{1\/2}(\\gamma\\chi,e_h)_{\\Gamma_{\\rm R}}$ for all\n$\\phi \\in H^1_{\\Gamma_{\\rm D}}(\\Omega)$. We then have\n\\begin{equation*}\n-\\omega^2 \\mu \\chi - \\div\\left (\\boldsymbol A\\boldsymbol \\nabla \\chi\\right ) = 0\n\\qquad\n\\boldsymbol A\\boldsymbol \\nabla \\chi \\cdot \\boldsymbol n+i\\omega\\chi = e_h\n\\end{equation*}\nin $L^2(\\Omega)$ and $L^2(\\Gamma_{\\rm R})$, respectively. 
We thus have\n\begin{equation*}\n\omega^{1\/2} \|e_h\|_{\gamma,\mathcal F_h^{\rm R}}^2\n=\n-\omega^2 (\mu e_h,\chi)\n-i\omega(\gamma e_h,\chi)_{\mathcal F_h^{\rm R}}\n-\n\left \{\n(e_h,\div(\boldsymbol A\boldsymbol \nabla \chi))_{\mathcal T_h}\n-(e_h,\boldsymbol A\boldsymbol \nabla \chi \cdot \boldsymbol n)_{\mathcal F_h^{\rm R}}\n\right \},\n\end{equation*}\nand applying Corollary \ref{corollary_nonconf} again, we arrive at\n\begin{equation*}\n\omega^{1\/2} \|e_h\|_{\gamma,\mathcal F_h^{\rm R}}^2\n\leq\n|b_h(e_h,\chi-\pi_h^{\rm g} \chi)|\n+\n\|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}\n\|\boldsymbol A\boldsymbol \nabla \chi-\pi_h^{\rm d}(\boldsymbol A\boldsymbol \nabla \chi)\|_{\dagger,\operatorname{div},\mathcal T_h},\n\end{equation*}\nand the result follows from the definitions of $\widetilde \gamma_{{\rm ba},{\rm g}}$ and $\widetilde \gamma_{{\rm ba},{\rm d}}$\ngiven in \eqref{eq_definition_gba}.\n\end{proof}\n\nWe can now complete the Schatz argument, leading to quasi-optimality\nof the discrete solution for sufficiently refined meshes.\n\n\begin{theorem}[A priori estimate]\nIf $\gamma_{{\rm ba},{\rm g}} < 1$ and $\gamma_{{\rm ba},{\rm d}} \leq \rho$, then there exists a unique\nsolution $u_h \in \mathcal P_p(\mathcal T_h)$ to \eqref{eq_helmholtz_ipdg} and the estimate\n\begin{equation}\n\label{eq_aprio_estimate}\n\enorm{u-u_h}_{\omega,\mathcal T_h}\n\leq\n\frac{1}{1-\gamma_{{\rm ba},{\rm g}}^2}\n\min_{v_h \in \mathcal P_p(\mathcal T_h) \cap H^1_{\Gamma_{\rm D}}(\Omega)} \enorm{u-v_h}_{\omega,\Omega}\n\end{equation}\nholds true. 
In addition, if $\gamma_{{\rm ba},{\rm d}} < \rho$, we also have\n\begin{equation}\n\label{eq_aprio_nonconf}\n\|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}\n\leq\n\left (\n\frac{1}{(1-\gamma_{{\rm ba},{\rm g}}^2)(\rho^2-\gamma_{{\rm ba},{\rm d}}^2)}\n\right )^{1\/2}\n\min_{v_h \in \mathcal P_p(\mathcal T_h) \cap H^1_{\Gamma_{\rm D}}(\Omega)} \enorm{u-v_h}_{\omega,\Omega}.\n\end{equation}\n\end{theorem}\n\n\begin{proof}\nWe first observe that\n\begin{align*}\n\Re b_h(e_h,e_h) + \omega\|e_h\|_{\gamma,\mathcal F_h^{\rm R}}^2 + 2\omega^2\|e_h\|_{\mu,\mathcal T_h}^2\n&=\n\enorm{e_h}_{\omega,\mathcal T_h}^2\n+\n\Re s_h(u_h,u_h)\n\\\n&\geq\n\enorm{e_h}_{\omega,\Omega}^2 + \rho^2 \|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}^2.\n\end{align*}\nAs a result, using the Young inequality $(a+b)^2 \leq 2a^2+2b^2$ in the last step, we have\n\begin{align*}\n\enorm{e_h}_{\omega,\mathcal T_h}^2 + \rho^2 \|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}^2\n&\leq\n|b_h(e_h,u-v_h)| + 2\omega^2\|e_h\|_{\mu,\mathcal T_h}^2 + \omega\|e_h\|_{\gamma,\mathcal F_h^{\rm R}}^2\n\\\n&\leq\n\enorm{e_h}_{\omega,\mathcal T_h} \enorm{u-v_h}_{\omega,\Omega}\n\\\n&+\n2\left (\widecheck \gamma_{{\rm ba},{\rm g}} \enorm{e_h}_{\omega,\mathcal T_h} + \widecheck \gamma_{{\rm ba},{\rm d}} \|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}\right )^2\n\\\n&+\n\left (\widetilde \gamma_{{\rm ba},{\rm g}} \enorm{e_h}_{\omega,\mathcal T_h} + \widetilde \gamma_{{\rm ba},{\rm d}} \|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}\right )^2\n\\\n&\leq\n\enorm{e_h}_{\omega,\mathcal T_h} \enorm{u-v_h}_{\omega,\Omega}\n\\\n&+\n(4\widecheck \gamma_{{\rm ba},{\rm g}}^2+2\widetilde \gamma_{{\rm ba},{\rm g}}^2) \enorm{e_h}_{\omega,\mathcal T_h}^2\n+\n(4\widecheck \gamma_{{\rm ba},{\rm d}}^2+2\widetilde \gamma_{{\rm ba},{\rm d}}^2) \|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}^2\n\end{align*}\nso that\n\begin{equation*}\n(1-4\widecheck \gamma_{{\rm ba},{\rm g}}^2-2\widetilde \gamma_{{\rm ba},{\rm g}}^2)\enorm{e_h}_{\omega,\mathcal T_h}^2\n+\n(\rho^2-4\widecheck \gamma_{{\rm ba},{\rm d}}^2-2\widetilde \gamma_{{\rm ba},{\rm d}}^2)\|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}^2\n\leq\n\enorm{e_h}_{\omega,\mathcal T_h}\enorm{u-v_h}_{\omega,\Omega},\n\end{equation*}\nfrom which \eqref{eq_aprio_estimate} and \eqref{eq_aprio_nonconf} follow,\nrecalling the definitions of $\gamma_{{\rm ba},{\rm g}}$ and $\gamma_{{\rm ba},{\rm d}}$ in \eqref{eq_definition_gba}.\n\end{proof}\n\n\n\section{A posteriori analysis}\n\label{section_aposteriori}\n\nWe now provide a posteriori error estimates under minimal regularity assumptions.\nWe first present an abstract framework, and then show how it can be applied to the\nparticular case of residual-based estimators.\n\nThroughout this section, we assume that $u_h$ is a fixed function in\n$\mathcal P_p(\mathcal T_h)$ satisfying \eqref{eq_helmholtz_ipdg}. Notice that unique\nsolvability of \eqref{eq_helmholtz_ipdg} is not required. In particular,\nthe proposed analysis applies without any restriction on the mesh size\nor polynomial degree. Also, in contrast to the a priori analysis, we do\nnot need any specific positivity assumption on the stabilization form $s_h$.\n\n\subsection{Abstract reliability analysis}\n\n\begin{subequations}\nFollowing \cite[Theorem 3.3]{ern_vohralik_2015a}, the discretization error\nmay be bounded using two different terms that respectively control the equation\nresidual, and the non-conformity of the discrete solution. 
For the equation\nresidual, we introduce the residual functional and its norm\n\begin{equation}\n\label{eq_definition_Rr}\n\langle \mathscr R_{\rm r}, v\rangle := b_h(e_h,v) \quad \forall v \in H^1_{\Gamma_{\rm D}}(\Omega),\n\qquad\n\textup R_{\rm r}\n:=\n\sup_{\substack{v \in H^1_{\Gamma_{\rm D}}(\Omega) \\ \|\boldsymbol \nabla v\|_{\boldsymbol A,\Omega} = 1}}\n|\langle \mathscr R_{\rm r},v\rangle|.\n\end{equation}\nNotice that this first term only involves conforming test functions.\nTo account for the non-conformity of the discrete solution, we additionally\nintroduce\n\begin{equation}\n\label{eq_definition_Rc}\n\textup R_{\rm c} := \|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}.\n\end{equation}\nThe total abstract estimator is then the Hilbertian sum of the two above components:\n\begin{equation}\n\label{eq_definition_R}\n\textup R^2 := \textup R_{\rm r}^2 + \textup R_{\rm c}^2.\n\end{equation}\n\end{subequations}\n\nFollowing \cite{chaumontfrelet_ern_vohralik_2021a,sauter_zech_2015a},\nour a posteriori analysis follows the lines of the Schatz argument \cite{schatz_1974a}.\nSpecifically, we start by controlling weak norms of the error by\nthe estimator and the approximation factors using a duality argument.\n\n\begin{lemma}[Aubin-Nitsche]\nThe following estimates hold true:\n\begin{equation}\n\label{eq_aubin_nitsche_apost}\n\omega \|u-u_h\|_{\mu,\mathcal T_h} \leq \widecheck \gamma_{\rm ba} \textup R,\n\qquad\n\omega^{1\/2} \|u-u_h\|_{\gamma,\mathcal F_h^{\rm R}} \leq \widetilde \gamma_{\rm ba} \textup R.\n\end{equation}\n\end{lemma}\n\n\begin{proof}\nLet us set $e_h := u-u_h$.\nWe start with the first estimate in \eqref{eq_aubin_nitsche_apost}.\nWe define $\xi$ as the unique element of $H^1_{\Gamma_{\rm D}}(\Omega)$ such that\n$b(w,\xi) = \omega (\mu w,e_h)_{\mathcal T_h}$ for all $w \in H^1_{\Gamma_{\rm D}}(\Omega)$.\nFollowing the lines of the proof of Lemma 
\ref{lemma_aubin_nitsche},\nwe write\n\begin{align*}\nb_h(e_h,\xi)\n&=\n-\omega^2(\mu e_h,\xi)_{\mathcal T_h}\n-i\omega(\gamma e_h,\xi)_{\mathcal F_h^{\rm R}}\n+(\mathfrak G(e_h),\boldsymbol A\boldsymbol \nabla \xi)_{\mathcal T_h}\n\\\n&=\n(e_h,-\omega^2 \mu\xi-\div(\boldsymbol A\boldsymbol \nabla \xi))_{\mathcal T_h}\n-(e_h,\boldsymbol A\boldsymbol \nabla\xi\cdot\boldsymbol n)_{\mathcal F_h^{\rm R}}+\n(e_h,\div(\boldsymbol A\boldsymbol \nabla \xi))_{\mathcal T_h}+(\mathfrak G(e_h),\boldsymbol A\boldsymbol \nabla \xi)_{\mathcal T_h}\n\\\n&=\n\omega \|e_h\|_{\mu,\mathcal T_h}^2\n+\n(\mathfrak G(\pi^{\rm g} u_h -u_h),\boldsymbol \sigma-\boldsymbol \sigma_h)_{\mathcal T_h}\n+\n(\pi^{\rm g} u_h-u_h,\div(\boldsymbol \sigma-\boldsymbol \sigma_h))_{\mathcal T_h}\n-\n(\pi^{\rm g} u_h-u_h,(\boldsymbol \sigma-\boldsymbol \sigma_h) \cdot \boldsymbol n)_{\mathcal F_h^{\rm R}},\n\end{align*}\nfor all $\boldsymbol \sigma_h \in \boldsymbol{\CP}_p(\mathcal T_h) \cap \boldsymbol X_{\Gamma_{\rm N}}(\operatorname{div},\Omega)$,\nwhere $\boldsymbol \sigma := \boldsymbol A\boldsymbol \nabla \xi$ and we employed \eqref{eq_nonconf_identity} in the last identity. 
It follows that\n\begin{equation*}\n\omega \|e_h\|_{\mu,\mathcal T_h}^2\n\leq\n|b_h(e_h,\xi)| + \|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}\|\boldsymbol A\boldsymbol \nabla \xi-\boldsymbol \sigma_h\|_{\dagger,\operatorname{div},\mathcal T_h}.\n\end{equation*}\nOn the one hand, recalling the Galerkin orthogonality property stated\nin \eqref{eq_galerkin_orthogonality} and the definition of $\textup R_{\rm r}$\nat \eqref{eq_definition_Rr}, we have\n\begin{equation*}\n|b_h(e_h,\xi)|\n=\n|b_h(e_h,\xi-\pi_h^{\rm g}\xi)|\n\leq\n\textup R_{\rm r} \|\boldsymbol \nabla(\xi-\pi_h^{\rm g}\xi)\|_{\boldsymbol A,\Omega}\n\leq\n\widecheck \gamma_{{\rm ba},{\rm g}} \textup R_{\rm r} \|e_h\|_{\mu,\mathcal T_h}.\n\end{equation*}\nOn the other hand, recalling \eqref{eq_definition_gbad} and \eqref{eq_definition_Rc}, we have\n\begin{equation*}\n\|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}\|\boldsymbol A\boldsymbol \nabla \xi-\pi_h^{\rm d}(\boldsymbol A\boldsymbol \nabla \xi)\|_{\dagger,\operatorname{div},\mathcal T_h}\n\leq\n\widecheck \gamma_{{\rm ba},{\rm d}}\textup R_{\rm c}\|e_h\|_{\mu,\mathcal T_h}.\n\end{equation*}\nThen, the first estimate of \eqref{eq_aubin_nitsche_apost} follows from \eqref{eq_definition_R}\nsince\n\begin{equation*}\n\widecheck \gamma_{{\rm ba},{\rm g}} \textup R_{\rm r} + \widecheck \gamma_{{\rm ba},{\rm d}} \textup R_{\rm c}\n\leq\n\left (\n\widecheck \gamma_{{\rm ba},{\rm g}}^2 + \widecheck \gamma_{{\rm ba},{\rm d}}^2\n\right )^{1\/2}\n\left (\n\textup R_{\rm r}^2\n+\n\textup R_{\rm c}^2\n\right )^{1\/2}\n=\n\widecheck \gamma_{\rm ba} \textup R.\n\end{equation*}\n\nFor the sake of shortness, we do not detail the proof of the second estimate\nin \eqref{eq_aubin_nitsche_apost} as it follows the lines of the first estimate\nbut using the function $\chi$ from the proof of Lemma \ref{lemma_aubin_nitsche}\ninstead of 
$\xi$.\n\end{proof}\n\nWe can now conclude the abstract reliability analysis.\nFollowing \cite{ern_vohralik_2015a}, our proof uses a Pythagorean\nidentity to separate the conforming and non-conforming parts of the error.\n\n\begin{theorem}[Abstract reliability]\nWe have\n\begin{equation}\n\label{eq_abstract_reliability}\n\enorm{u-u_h}_{\omega,\mathcal T_h}\n\leq\n\sqrt{1+\gamma_{\rm ba}^2} \textup R.\n\end{equation}\n\end{theorem}\n\n\begin{proof}\nFor the sake of simplicity, we introduce\n\begin{equation*}\nb^+_h(\phi,v)\n:=\n\omega^2 (\mu\phi,v)_{\mathcal T_h}\n+\n\omega (\gamma\phi,v)_{\mathcal F_h^{\rm R}}\n+\n(\boldsymbol A\mathfrak G(\phi),\mathfrak G(v))_{\mathcal T_h}\n\qquad\n\forall \phi,v \in H^1(\mathcal T_h),\n\end{equation*}\nand we define $\widetilde u$ as the unique element of $H^1_{\Gamma_{\rm D}}(\Omega)$ such that\n\begin{equation*}\nb^+_h(\widetilde u-u_h,v) = 0 \qquad \forall v \in H^1_{\Gamma_{\rm D}}(\Omega).\n\end{equation*}\nNotice that this is indeed a well-posed definition, since $b^+_h$ is equivalent\nto the usual inner product on $H^1_{\Gamma_{\rm D}}(\Omega)$. 
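Before proceeding, let us record the orthogonality relation underlying the argument; it only uses that $u \in H^1_{\Gamma_{\rm D}}(\Omega)$, so that $u-\widetilde u \in H^1_{\Gamma_{\rm D}}(\Omega)$ is an admissible test function in the definition of $\widetilde u$:

```latex
\begin{equation*}
b^+_h(\widetilde u-u_h,u-\widetilde u) = 0.
\end{equation*}
```

This is precisely why the cross term drops out of the Pythagorean identity that follows.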
Classically, we have the Pythagorean identity\n\begin{align}\n\label{tmp_rel1}\n\enorm{u-u_h}_{\omega,\mathcal T_h}^2\n&=\n\enorm{\widetilde u-u_h}_{\omega,\mathcal T_h}^2\n+\n2\Re b^+_h(\widetilde u - u_h,u-\widetilde u)\n+\n\enorm{u-\widetilde u}_{\omega,\mathcal T_h}^2\n\\\n\nonumber\n&=\n\enorm{u_h-\widetilde u}_{\omega,\mathcal T_h}^2\n+\n\enorm{u-\widetilde u}_{\omega,\mathcal T_h}^2.\n\end{align}\nThen, on the one hand, it is clear using \eqref{eq_norm_control} that\n\begin{equation}\n\label{tmp_rel2}\n\enorm{u_h-\widetilde u}_{\omega,\mathcal T_h}\n\leq\n\enorm{u_h-\pi^{\rm g} u_h}_{\omega,\mathcal T_h}\n\leq\n\|u_h-\pi^{\rm g} u_h\|_{\dagger,1,\mathcal T_h}\n=\n\textup R_{\rm c},\n\end{equation}\nand on the other hand, we have\n\begin{align*}\n\enorm{u-\widetilde u}_{\omega,\Omega}^2\n&=\nb^+_h(u-\widetilde u,u-\widetilde u)\n=\nb^+_h(e_h,u-\widetilde u)\n\\\n&=\nb_h(e_h,u-\widetilde u)\n+\n2\omega^2(\mu e_h,u-\widetilde u)_{\mathcal T_h}\n+\n(1+i)\omega (\gamma e_h,u-\widetilde u)_{\mathcal F_h^{\rm R}}\n\\\n&\leq\n\textup R_{\rm r}\n\|\boldsymbol \nabla(u-\widetilde u)\|_{\boldsymbol A,\Omega}\n+\n2\widecheck \gamma_{\rm ba} \textup R \omega\|u-\widetilde u\|_{\mu,\mathcal T_h}\n+\n\sqrt{2}\widetilde \gamma_{\rm ba} \textup R \omega^{1\/2}\|u-\widetilde u\|_{\gamma,\mathcal F_h^{\rm R}}\n\\\n&\leq\n\left (\n\textup R_{\rm r}^2\n+\n(4\widecheck \gamma_{\rm ba}^2 + 2\widetilde \gamma_{\rm ba}^2) \textup R^2\n\right )^{1\/2}\n\enorm{u-\widetilde u}_{\omega,\Omega},\n\end{align*}\nand recalling \eqref{eq_definition_gba_summed}, we arrive at\n\begin{equation}\n\label{tmp_rel3}\n\enorm{u-\widetilde u}_{\omega,\Omega}^2\n\leq\n\textup R_{\rm r}^2\n+\n(4\widecheck \gamma_{\rm ba}^2+2\widetilde \gamma_{\rm ba}^2) \textup R^2\n=\n\textup R_{\rm r}^2\n+\n\gamma_{\rm ba}^2 \textup R^2.\n\end{equation}\nThe estimate in 
\\eqref{eq_abstract_reliability} then follows from\n\\eqref{tmp_rel1}, \\eqref{tmp_rel2} and \\eqref{tmp_rel3}.\n\\end{proof}\n\n\\subsection{Residual-based a posteriori estimator}\n\nThe residuals $\\textup R_{\\rm r}$ and $\\textup R_{\\rm c}$ can be straightforwardly\ncontrolled using flux and potential reconstructions. We refer the reader\nto \\cite{congreve_gedicke_perugia_2019a,ern_vohralik_2015a} for details\nabout these constructions. Here, we focus instead on residual-based\nestimators, for which the link may be less clear. In this section, the\nletter $C$ refers to a generic constant that may change from one occurrence\nto the other, and that only depends on the contrasts of the coefficients\n$\\mu$, $\\boldsymbol A$ and $\\gamma$ as well as the polynomial degree $p$ and the\nshape-regularity parameter $\\kappa$.\nThe stability result for the lifting operator\n\\begin{equation}\n\\label{eq_stability_lifting}\n\\|\\LIFT{v_h}\\|_{\\boldsymbol A,\\mathcal T_h}^2\n\\leq\nC\n\\sum_{K \\in \\mathcal T_h} \\frac{\\alpha_K}{h_K} \\left \\|\\jmp{v_h}\\right \\|_{\\partial K}^2\n\\quad\n\\forall v_h \\in \\mathcal P_p(\\mathcal T_h)\n\\end{equation}\ncan be found, e.g., in \\cite[Section 4.3]{dipietro_ern_2012a}\nor \\cite[Lemma 4.1]{houston_schotzau_wihler_2006a}.\n\nWe first establish that $\\textup R_{\\rm c}$ can be controlled by the jumps of $u_h$. 
The notations\n\begin{equation*}\n\frac{\omega h_F^\star}{\vartheta_F^\star}\n:=\n\max_{F \in \mathcal F_h^{\rm R}} \frac{\omega h_F}{\vartheta_F},\n\qquad\n\frac{\omega h_K^\star}{\vartheta_K^\star}\n:=\n\max_{K \in \mathcal T_h} \frac{\omega h_K}{\vartheta_K},\n\end{equation*}\nwill be useful.\n\n\begin{lemma}[Control of the non-conformity]\nWe have\n\begin{equation}\n\label{eq_estimate_Rc}\n\textup R_{\rm c}^2\n\leq\nC\n\max \left (\n1,\frac{\omega h_F^\star}{\vartheta_F^\star},\left (\frac{\omega h_K^\star}{\vartheta_K^\star}\right )^2\n\right )\n\sum_{K \in \mathcal T_h} \frac{\alpha_K}{h_K} \|\jmp{u_h}\|_{\partial K \setminus (\Gamma_{\rm N} \cup \Gamma_{\rm R})}^2.\n\end{equation}\n\end{lemma}\n\n\begin{proof}\nWe start with two estimates that are easily inferred from\n\cite[Lemma 4.3]{ern_guermond_2017a}. For all $v_h \in \mathcal P_p(\mathcal T_h)$,\nthere exists a conforming approximation $\mathcal J v_h \in \mathcal P_p(\mathcal T_h) \cap H^1_{\Gamma_{\rm D}}(\Omega)$\nsuch that, for all $K \in \mathcal T_h$, we have\n\begin{equation*}\n\|v_h-\mathcal J v_h\|_K^2\n\leq\nC h_K \sum_{K' \in \widetilde \mathcal T_K} \|\jmp{v_h}\|_{\partial K'}^2,\n\qquad\n\|\boldsymbol \nabla(v_h-\mathcal J v_h)\|_K^2\n\leq\nC \frac{1}{h_K} \sum_{K' \in \widetilde \mathcal T_K} \|\jmp{v_h}\|_{\partial K'}^2,\n\end{equation*}\nwhere $\widetilde \mathcal T_K$ collects those elements $K' \in \mathcal T_h$ sharing at least\none vertex with $K$. 
Employing the multiplicative trace inequality\n\\begin{equation*}\n\\|\\theta\\|_{\\partial K}^2\n\\leq\nC \\left \\{\n\\frac{1}{h_K}\\|\\theta\\|_K^2 + \\|\\theta\\|_K\\|\\boldsymbol \\nabla \\theta\\|_K\n\\right \\}\n\\leq\nC \\left \\{\n\\frac{1}{h_K}\\|\\theta\\|_K^2 + h_K\\|\\boldsymbol \\nabla \\theta\\|_K^2\n\\right \\}\n\\qquad\n\\forall \\theta \\in H^1(K),\n\\end{equation*}\nwe also have\n\\begin{equation*}\n\\|v_h-\\mathcal J v_h\\|_{\\partial K}^2\n\\leq\nC \\sum_{K' \\in \\widetilde \\mathcal T_K} \\|\\jmp{v_h}\\|_{\\partial K'}^2.\n\\end{equation*}\nIt is therefore clear that\n\\begin{equation*}\n\\|u_h-\\mathcal J u_h\\|_{\\dagger,1,\\mathcal T_h}^2\n\\leq\nC \\max \\left (\n1,\\frac{\\omega h_F^\\star}{\\vartheta_F^\\star},\\left (\\frac{\\omega h_K^\\star}{\\vartheta_K^\\star}\\right )^2\n\\right )\n\\sum_{K \\in \\mathcal T_h} \\frac{\\alpha_K}{h_K} \\|\\jmp{u_h}\\|_{\\partial K}^2,\n\\end{equation*}\nand \\eqref{eq_estimate_Rc} follows from the definition of $\\textup R_{\\rm c}$ in \\eqref{eq_definition_Rc}.\n\\end{proof}\n\nWe then control the conforming residual $\\textup R_{\\rm r}$.\nThis is classically done by combining element-wise integration\nby parts and a quasi-interpolation operator.\n\n\\begin{lemma}[Control of the residual]\nWe have\n\\begin{multline}\n\\label{eq_estimate_Rr}\n\\textup R_{\\rm r}^2\n\\leq\nC\n\\\\\n\\Bigg \\{\n\\sum_{K \\in \\mathcal T_h}\n\\left (\n\\frac{h_K^2}{\\alpha_K}\n\\|f+\\omega^2 \\mu u_h + \\div (\\boldsymbol A\\boldsymbol \\nabla u_h)\\|_K^2\n+\n\\frac{h_K}{\\alpha_K}\n\\|\\jmp{\\boldsymbol A\\boldsymbol \\nabla u_h} \\cdot \\boldsymbol n_K\\|_{\\partial K \\setminus \\partial \\Omega}^2\n+\n\\frac{\\alpha_K}{h_K}\n\\|\\jmp{u_h}\\|_{\\partial K \\setminus \\partial \\Omega}^2\n\\right )\n\\\\\n+\\sum_{F \\in \\mathcal F_h^{\\rm D}} \\frac{\\alpha_F}{h_F} \\|u_h\\|_F^2\n+\\sum_{F \\in \\mathcal F_h^{\\rm N}} \\frac{h_F}{\\alpha_F} \\|\\boldsymbol A \\boldsymbol \\nabla u_h \\cdot 
\\boldsymbol n\\|_F^2\n+\\sum_{F \\in \\mathcal F_h^{\\rm R}} \\frac{h_F}{\\alpha_F} \\|\\boldsymbol A\\boldsymbol \\nabla u_h \\cdot \\boldsymbol n-i\\omega\\gamma u_h\\|_F^2\n\\Bigg \\}.\n\\end{multline}\n\\end{lemma}\n\n\\begin{proof}\nWe consider an arbitrary $v \\in H^1_{\\Gamma_{\\rm D}}(\\Omega)$, and let\n$\\widetilde v := v-\\mathcal Q_h v \\in H^1_{\\Gamma_{\\rm D}}(\\Omega)$, where $\\mathcal Q_h$\nis a quasi-interpolation operator satisfying\n\\begin{equation*}\n\\frac{1}{h_K^2} \\|v-\\mathcal Q_h v\\|_K^2\n+\n\\frac{1}{h_K}\\|v-\\mathcal Q_h v\\|_{\\partial K}^2\n+\n\\|\\boldsymbol \\nabla(v-\\mathcal Q_h v)\\|_K^2\n\\leq\nC \\frac{1}{\\alpha_K} \\|\\boldsymbol \\nabla v\\|_{\\boldsymbol A,\\widetilde K}^2,\n\\end{equation*}\nwith $\\widetilde K$ the domain corresponding to all the elements $K' \\in \\mathcal T_h$\nthat share at least one vertex with $K$. We refer the reader to, e.g.,\n\\cite[Theorem 5.2]{ern_guermond_2017a}, for the construction of $\\mathcal Q_h$.\nThanks to Galerkin orthogonality \\eqref{eq_galerkin_orthogonality}, we\ncan write that\n\\begin{align}\n\\label{tmp_rel_Rc0}\n&\\langle \\mathscr R_{\\rm r}, v \\rangle\n\\\\\n\\nonumber\n&=\nb_h(e_h,v)\n=\nb_h(e_h,\\widetilde v)\n\\\\\n\\nonumber\n&=\n(f,\\widetilde v)_\\Omega\n+\n\\omega^2 (\\mu u_h,\\widetilde v)_{\\mathcal T_h}\n+\ni\\omega (\\gamma u_h,\\widetilde v)_{\\mathcal F_h^{\\rm R}}\n+\n(\\boldsymbol A\\mathfrak G(u_h),\\boldsymbol \\nabla \\widetilde v)_{\\mathcal T_h}\n\\\\\n\\nonumber\n&=\n\\underbrace{(f,\\widetilde v)_\\Omega\n+\n\\omega^2 (\\mu u_h,\\widetilde v)_{\\mathcal T_h}\n+\ni\\omega (\\gamma u_h,\\widetilde v)_{\\mathcal F_h^{\\rm R}}\n-\n(\\boldsymbol A\\boldsymbol \\nabla u_h,\\boldsymbol \\nabla \\widetilde v)_{\\mathcal T_h}\n}_{r_1}\n-\n\\underbrace{(\\boldsymbol A\\LIFT{u_h},\\boldsymbol \\nabla \\widetilde v)_{\\mathcal T_h}}_{r_2}.\n\\end{align}\nThen, on the one hand, we see that\n\\begin{align*}\nr_1\n&=\n(f+\\omega^2\\mu u_h+\\div(\\boldsymbol 
A\\boldsymbol \\nabla u_h),\\widetilde v)_{\\mathcal T_h}\n+\ni\\omega(\\gamma u_h,\\widetilde v)_{\\mathcal F_h^{\\rm R}}\n-\n\\sum_{K \\in \\mathcal T_h} (\\boldsymbol A \\boldsymbol \\nabla u_h \\cdot \\boldsymbol n_K,\\widetilde v)_{\\partial K}\n\\\\\n&=\n(f+\\omega^2\\mu u_h+\\div(\\boldsymbol A\\boldsymbol \\nabla u_h),\\widetilde v)_{\\mathcal T_h}\n+\n\\sum_{F \\in \\mathcal F_h^{\\rm i}} (\\jmp{\\boldsymbol A \\boldsymbol \\nabla u_h} \\cdot \\boldsymbol n_F,\\widetilde v)_{F}\n\\\\\n&\n-(\\boldsymbol A \\boldsymbol \\nabla u_h \\cdot \\boldsymbol n,\\widetilde v)_{\\mathcal F_h^{\\rm N}}\n-(\\boldsymbol A \\boldsymbol \\nabla u_h \\cdot \\boldsymbol n - i\\omega\\gamma u_h,\\widetilde v)_{\\mathcal F_h^{\\rm R}},\n\\end{align*}\nand therefore,\n\\begin{align}\n\\label{tmp_rel_Rc1}\n|r_1|\n&\\leq\n\\sum_{K \\in \\mathcal T_h}\n\\left (\n\\|f+\\omega^2\\mu u_h+\\div\\left(\\boldsymbol A\\boldsymbol \\nabla u_h\\right )\\|_K\n\\|\\widetilde v\\|_K\n+\n\\|\\jmp{\\boldsymbol A\\boldsymbol \\nabla u_h}\\cdot \\boldsymbol n_K\\|_{\\partial K}\\|\\widetilde v\\|_{\\partial K}\n\\right )\n\\\\\n\\nonumber\n&\n+\\sum_{F \\in \\mathcal F_h^{\\rm N}}\\|\\boldsymbol A\\boldsymbol \\nabla u_h\\cdot \\boldsymbol n\\|_F\\|\\widetilde v\\|_F\n+\\sum_{F \\in \\mathcal F_h^{\\rm R}}\\|\\boldsymbol A\\boldsymbol \\nabla u_h\\cdot \\boldsymbol n-i\\omega\\gamma u_h\\|_F\\|\\widetilde v\\|_F\n\\\\\n\\nonumber\n&\\leq\nC \\Bigg \\{\n\\sum_{K \\in \\mathcal T_h}\n\\left (\n\\frac{h_K^2}{\\alpha_K} \\|f+\\omega^2\\mu u_h+\\div\\left(\\boldsymbol A\\boldsymbol \\nabla u_h\\right )\\|_K\n+\n\\frac{h_K}{\\alpha_K}\\|\\jmp{\\boldsymbol A\\boldsymbol \\nabla u_h}\\cdot \\boldsymbol n_K\\|_{\\partial K}\n\\right )\n\\\\\n\\nonumber\n&\n\\qquad\n+\\sum_{F \\in \\mathcal F_h^{\\rm N}}\\frac{h_F}{\\alpha_F}\\|\\boldsymbol A\\boldsymbol \\nabla u_h\\cdot \\boldsymbol n\\|_F\n+\\sum_{F \\in \\mathcal F_h^{\\rm R}}\\frac{h_F}{\\alpha_F}\\|\\boldsymbol A\\boldsymbol \\nabla u_h\\cdot \\boldsymbol n-i\\omega\\gamma 
u_h\\|_F\n\\Bigg \\} \\|\\boldsymbol \\nabla v\\|_{\\boldsymbol A,\\Omega}.\n\\end{align}\nOn the other hand, we have\n\\begin{align}\n\\label{tmp_rel_Rc2}\n|r_2|\n&\\leq\n\\|\\LIFT{u_h}\\|_{\\boldsymbol A,\\Omega}\\|\\boldsymbol \\nabla \\widetilde v\\|_{\\boldsymbol A,\\Omega}\n\\\\\n\\nonumber\n&\\leq\nC \\left \\{\n\\sum_{K \\in \\mathcal T_h} \\frac{\\alpha_K}{h_K} \\|\\jmp{u_h}\\|_{\\partial K \\setminus \\partial \\Omega}^2\n+\n\\sum_{F \\in \\mathcal F_h^{\\rm D}} \\frac{\\alpha_F}{h_F} \\|\\jmp{u_h}\\|_F^2\n\\right \\}^{1/2}\n\\|\\boldsymbol \\nabla v\\|_{\\boldsymbol A,\\Omega},\n\\end{align}\nand \\eqref{eq_estimate_Rr} follows from \\eqref{tmp_rel_Rc0}, \\eqref{tmp_rel_Rc1}\nand \\eqref{tmp_rel_Rc2} recalling the definition of $\\textup R_{\\rm r}$ in \\eqref{eq_definition_Rr}.\n\\end{proof}\n\nWe therefore define a residual-based estimator by\n\\begin{align*}\n\\eta^2\n&:=\n\\sum_{K \\in \\mathcal T_h}\n\\left (\n\\frac{h_K^2}{\\alpha_K}\n\\|f+\\omega^2 \\mu u_h + \\div (\\boldsymbol A\\boldsymbol \\nabla u_h)\\|_K^2\n+\n\\frac{h_K}{\\alpha_K}\n\\|\\jmp{\\boldsymbol A\\boldsymbol \\nabla u_h} \\cdot \\boldsymbol n_K\\|_{\\partial K \\setminus \\partial \\Omega}^2\n+\n\\frac{\\alpha_K}{h_K}\n\\|\\jmp{u_h}\\|_{\\partial K \\setminus \\partial \\Omega}^2\n\\right )\n\\\\\n&\n+\\sum_{F \\in \\mathcal F_h^{\\rm D}} \\frac{\\alpha_F}{h_F} \\|u_h\\|_F^2\n+\\sum_{F \\in \\mathcal F_h^{\\rm N}} \\frac{h_F}{\\alpha_F} \\|\\boldsymbol A \\boldsymbol \\nabla u_h \\cdot \\boldsymbol n\\|_F^2\n+\\sum_{F \\in \\mathcal F_h^{\\rm R}} \\frac{h_F}{\\alpha_F} \\|\\boldsymbol A\\boldsymbol \\nabla u_h \\cdot \\boldsymbol n-i\\omega\\gamma u_h\\|_F^2.\n\\end{align*}\nAs a direct consequence of \\eqref{eq_abstract_reliability},\n\\eqref{eq_estimate_Rc} and \\eqref{eq_estimate_Rr}, we obtain a\nreliability estimate stated in Theorem \\ref{theorem_reliability_residual}.\nIt is to be compared with \\cite[Theorem 2.3]{chaumontfrelet_ern_vohralik_2021a}\nand \\cite[Theorem 
3.6]{sauter_zech_2015a}.\n\n\\begin{theorem}[Reliability of the residual estimator]\n\\label{theorem_reliability_residual}\nWe have\n\\begin{equation*}\n\\enorm{e_h}_{\\omega,\\mathcal T_h}\n\\leq\nC\n\\max \\left (\n1,\\frac{\\omega h_F^\\star}{\\vartheta_F^\\star},\\left (\\frac{\\omega h_K^\\star}{\\vartheta_K^\\star}\\right )^2\n\\right )\n(1+\\gamma_{\\rm ba}) \\eta.\n\\end{equation*}\n\\end{theorem}\n\n\\subsection{Efficiency}\n\nFor the sake of brevity, we do not give any efficiency proofs.\nSuch proofs can be found in \\cite[Section 4]{congreve_gedicke_perugia_2019a}\nfor equilibrated estimators and in \\cite[Section 3.3]{sauter_zech_2015a} for the\nresidual-based estimators considered here.\n\n\\bibliographystyle{amsplain}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nFinancial time series are hard to model, since they are heavily influenced by unpredictable events.\nNatural disasters, uncertainty about public behaviour, statements from governments and central banks, etc., are all events that can drastically affect the market. As a consequence financial data do not behave the same at all times, hence we cannot assume any stationarity property for them. The latter implies that classic techniques used to analyse time series are largely inadequate to model such data, therefore alternative methods have to be developed. The family of \\emph{Markov Switching Models} (MSM) constitutes a possible solution, since these models allow us to effectively address the non-stationarity of financial data.\n\nThe main idea behind the MSM is that, in order to take into account the changes in the behaviour of the data, we allow the distribution of the observations to change over time. 
A general MSM model can be written in the following form\n\\begin{equation}\\label{generalMSM}\n\\begin{cases}\ny_t = f(S_t,\\theta,\\psi_{t-1})\\\\\nS_t = g\\left(\\tilde{S}_{t-1},\\psi_{t-1}\\right) \\\\\nS_t \\in \\Lambda\n\\end{cases}\n\\end{equation}\nwhere $S_{t}$ indicates the state of the model at time $t$, $\\theta$ is the vector of the parameters characterizing the model, $\\psi_t:=\\left\\{y_k: k=1,\\dots,t\\right\\}$ is the set of all observations up to time $t$, $\\tilde{S}_{t} := \\{S_{1},...,S_{t}\\}$ is the set of all observed states up to time $t$, $\\Lambda=\\{1,...,M\\}$ is the set of all possible states, and $g$ is the function that governs the transitions between the states. The function $f$ defines how the observation at time $t$ depends on $S_t,\\theta,\\ \\text{and}\\ \\psi_{t-1}$ and finally, $t \\in \\{0,1,...,T\\}$, where $T \\in \\mathbb{N}$, $T < +\\infty$, is the so called \\emph{terminal time}.\nSystem \\eqref{generalMSM} clearly shows the intrinsic richness of the MSM approach. Particular realizations of \\eqref{generalMSM} allow the treatment of specific problems. Before getting into the details of our study, it is worth mentioning that in most of the dedicated literature, we can distinguish between two classes of models. The first class consists of models that have complicated distributions for the data or a large number of states, but very simple transition laws, e.g., a first order Markov chain, see, e.g., \\cite{DiPMF16,Ham89,Ch98}. The second class is made up of models with simple assumptions and very few states, usually two, but with more complicated transition laws, see, e.g., \\cite{Kim99, DiLeeW94, Pe05}. 
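To make the general scheme concrete, here is a minimal simulation sketch (ours, not the paper's code) of a two-state instance of the MSM above, in which the transition function $g$ is a first-order Markov chain and $f$ draws a Gaussian observation with state-dependent mean and volatility; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state MSM: S_t follows a first-order Markov chain and
# y_t | S_t = j is Gaussian with state-dependent mean and volatility.
P = np.array([[0.95, 0.05],    # transition matrix p_ij = P(S_t = j | S_{t-1} = i)
              [0.10, 0.90]])
mu = np.array([0.001, -0.002])   # state-dependent means
sigma = np.array([0.01, 0.04])   # state 1 calm, state 2 turbulent

T = 500
S = np.zeros(T, dtype=int)       # states (0-based indexing here)
y = np.zeros(T)
y[0] = rng.normal(mu[S[0]], sigma[S[0]])
for t in range(1, T):
    S[t] = rng.choice(2, p=P[S[t - 1]])       # transition law g
    y[t] = rng.normal(mu[S[t]], sigma[S[t]])  # observation law f
```

A model of the first class mentioned above would enrich the observation law while keeping this simple chain; one of the second class would instead make the transition probabilities depend on the past observations.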
\n\nThe present paper is structured as follows: in Sections 1 through 4 we cover the mathematical and computational theory needed to establish the techniques that we then develop in subsequent sections; in Section 5 we introduce the jump-diffusion model, while in Section 6 we present a model that uses $\\alpha-$stable distributions; in Section 7 we explain how the models can be concretely implemented and, in Section 8, we present the related results obtained applying them to a relevant case study which concerns the S\\&P500 index; conclusions and further developments are outlined in Section 9.\n\n\\section{Bayesian Inference}\nBayesian Inference is a branch of statistical inference that assumes the parameters of a probability distribution to be randomly distributed according to a {\\it prior} distribution. In particular the idea is to exploit the observed data, along with the Bayes rule, to generate the {\\it posterior} distribution of the aforementioned parameters. Therefore, the posterior distribution can be interpreted as the distribution of the parameters once we have taken into account both our subjective belief about them, namely the prior, and the data. Such an approach can be rigorously represented as follows\n\\begin{align}\n& \\nonumber \\theta \\sim \\pi(\\theta) \\,;\\\\\n& y|\\theta \\sim f(y|\\theta) \\,;\\\\\n& \\nonumber f(\\theta|y) = \\frac{\\pi(\\theta)f(y|\\theta)}{f(y)} \\, ,\n\\end{align}\nwhere $\\pi(\\theta)$ is the prior distribution, $f(y|\\theta)$ is the distribution of the data depending on the parameter $\\theta$, and $f(\\theta|y)$ is the posterior of $\\theta$. Finally, $f(y)$ is the marginal distribution of $y$, namely\n\\begin{equation*}\nf(y) = \\int f(y,\\theta) d\\theta = \\int \\pi(\\theta)f(y|\\theta) d\\theta\\:.\n\\end{equation*}\nClearly the choice of the prior can have a large impact on the posterior. A particularly convenient form of prior is what is known as a \\emph{conjugate prior}. 
We say that a prior distribution is conjugate if the posterior distribution derived from it belongs to the same family, as it happens, e.g., for the Beta-Binomial pair, namely\n\\begin{equation}\n\\begin{aligned}\n\\pi(\\theta) & \\propto \\theta^{\\alpha-1} (1-\\theta)^{\\beta-1} \\,;\\\\ \nf(y|\\theta) &= \\binom{n}{y} \\theta^{y} (1-\\theta)^{n-y} \\,;\\\\\nf(\\theta|y) & \\propto \\theta^{(\\alpha + y)-1} (1-\\theta)^{(\\beta+n-y)-1} \\,;\\\\\n& \\propto \\text{Beta}(\\alpha + y,\\beta+n-y)\\,.\n\\end{aligned}\n\\end{equation}\n\nIt follows that if we start with a $\\text{Beta}(\\alpha, \\beta)$ prior and assume that the data are binomially distributed, we end up with a $\\text{Beta}(\\alpha + y,\\beta+n-y)$ posterior. Hence, we do not have to update the distribution for each new observation, just its parameters. We would like to note that the latter is a particularly relevant aspect from the algorithmic point of view since it translates into less computationally expensive code. For the sake of completeness, in the following subsections we list other particularly convenient choices for distribution pairs and, in order to give clear examples, we first start by explaining how the posterior of a set of independent identically distributed (i.i.d.) random variables is obtained. Let $\\{y_{1},...,y_{n}\\},\\ n \\in \\mathbb{N}$ be a set of i.i.d. random variables with \ndensity function $f(y|\\theta)$. Moreover, let $\\theta \\sim \\pi(\\theta)$. Then\n\\begin{equation}\nf(\\theta|y_{1},...,y_{n}) = \\frac{\\pi(\\theta) L_{n}(\\theta)}{\\int\\limits_{\\Theta} \\pi(\\theta) L_{n}(\\theta) d\\theta} \\propto \\pi(\\theta) L_{n}(\\theta)\\;,\n\\end{equation}\nwhere $$L_{n}(\\theta) = \\prod\\limits_{i=1}^{n} f(y_{i}|\\theta)\\;,$$ is the likelihood function of the data, and $\\Theta$ is the set of all possible values of $\\theta$. For the rest of this paper we will denote the vector of observations by $\\boldsymbol y$. 
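The conjugate Beta update above is easy to check numerically. The following sketch (with illustrative values for $\alpha$, $\beta$, $n$ and $y$, not taken from the paper) compares the closed-form posterior with Bayes' rule evaluated on a grid.

```python
import numpy as np
from scipy import stats

# Illustrative prior hyperparameters and binomial data (hypothetical numbers).
alpha, beta, n, y = 2.0, 3.0, 20, 7

# Closed-form conjugate posterior: Beta(alpha + y, beta + n - y).
posterior = stats.beta(alpha + y, beta + n - y)

# Bayes' rule on a grid: prior density times binomial likelihood, normalized.
theta = np.linspace(1e-6, 1 - 1e-6, 4001)
unnorm = stats.beta(alpha, beta).pdf(theta) * stats.binom(n, theta).pmf(y)
grid_posterior = unnorm / (unnorm.sum() * (theta[1] - theta[0]))

# Both computations agree pointwise.
assert np.allclose(grid_posterior, posterior.pdf(theta), atol=1e-2)
```

Only the two parameters of the Beta are updated; no numerical integration is needed in practice, which is what makes conjugate pairs computationally attractive.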
\n\n\\subsection{Normal-Normal}\n\\label{nn}\nAssume that we have $n \\in \\mathbb{N}$ independent observations\n\\begin{align*}\ny_{i}| \\mu \\sim \\mathcal{N}(\\mu,\\sigma^{2}),\\ i \\in \\{1,...,n\\} \\;,\n\\end{align*}\nwhere $\\mu \\in \\mathbb{R}$, and the value of $\\sigma>0$ is known. In order to perform Bayesian inference on the given data, we also need to place a distribution on $\\mu$. Therefore, we set\n\\begin{align*}\n\\mu &\\sim \\mathcal{N} \\left(\\mu_{0},\\frac{1}{k}\\right)\\:.\n\\end{align*}\nThe corresponding likelihood function is\n\\begin{align*}\nL_{n}(\\mu) = \\prod\\limits_{i=1}^{n} \\frac{1}{\\sqrt{2 \\pi \\sigma^{2}}} e^{- \\frac{1}{2} \\left( \\frac{y_{i}-\\mu}{\\sigma} \\right)^{2}} &\\propto e^{- \\frac{1}{2\\sigma^{2}} \\sum\\limits_{i=1}^{n} \\left( y_{i}-\\mu \\right)^{2}} \\\\\n\\propto e^{- \\frac{1}{2\\sigma^{2}} \\left( \\sum\\limits_{i=1}^{n} (y_{i}-\\bar{y})^{2} + n(\\bar{y}-\\mu)^{2} \\right)} &\\propto e^{- \\frac{n}{2\\sigma^{2}}(\\bar{y}-\\mu)^{2}}\n\\:,\n\\end{align*}\nwhile for the posterior we have\n\\begin{align*}\n& f(\\mu|\\boldsymbol y) \\propto e^{- \\frac{n}{2\\sigma^{2}}(\\bar{y}-\\mu)^{2}} e^{- \\frac{k}{2} \\left( \\mu-\\mu_{0} \\right)^{2}} \\\\ \n& = e^{- \\frac{1}{2} \\left( \\tfrac{n}{\\sigma^{2}}(\\bar{y}-\\mu)^{2} + k(\\mu-\\mu_{0})^{2} \\right) } \\\\\n&= e^{-\\frac{1}{2}\\left( \\left(\\frac{n}{\\sigma^{2}}+k\\right) \\left( \\mu - \\frac{\\frac{n}{\\sigma^{2}}\\bar{y} + k \\mu_{0}}{\\frac{n}{\\sigma^{2}}+k} \\right)^2 + \\frac{\\frac{nk}{\\sigma^{2}}}{\\frac{n}{\\sigma^{2}}+k} (\\bar{y}-\\mu_{0})^{2} \\right) } \\\\\n& \\propto \\mathcal{N}\\left(\\frac{\\frac{n}{\\sigma^{2}}\\bar{y} + k \\mu_{0}}{\\frac{n}{\\sigma^{2}}+k}, \\frac{\\sigma^{2}}{n+k \\sigma^{2}}\\right)\\;,\n\\end{align*}\nhence \n\\begin{equation}\n\\mu | \\boldsymbol y \\sim \\mathcal{N} \\left(\\frac{n\\bar{y}+\\mu_{0}k\\sigma^{2}}{n+k\\sigma^{2}},\\frac{\\sigma^{2}}{n+k \\sigma^{2}} \\right)\\:.\n\\end{equation} \n\n\\subsection{Inverse 
Gamma-Normal}\n\\label{ign}\nAssume again that we have $n \\in \\mathbb{N}$ independent observations\n\\begin{align*}\ny_{i}| \\sigma^{2} \\sim \\mathcal{N}(\\mu,\\sigma^{2}),\\ i \\in \\{1,...,n\\}\\;,\n\\end{align*}\nbut, this time, $\\mu \\in \\mathbb{R}$ is known, while $\\sigma>0$ is unknown. Taking $\\sigma^{2}$ to be inverse-gamma distributed with parameters $\\alpha_{0}$ and $\\beta_{0}$, and denoting the distribution by $\\text{inv}\\Gamma(\\alpha_{0},\\beta_{0})$, we can write the density function of $\\sigma^{2}$ as follows\n\\begin{equation}\nf(\\sigma^{2}) = \\frac{\\beta_{0}^{\\alpha_{0}}}{\\Gamma(\\alpha_{0})} \\left( \\sigma^{2} \\right)^{-\\alpha_{0}-1} e^{-\\frac{\\beta_{0}}{\\sigma^{2}}}\\;,\n\\end{equation}\nwhere $\\Gamma(t)$ is the extension of the factorial to the set of positive real numbers, known as the {\\it Gamma function}, and defined by\n\\begin{equation*}\n\\Gamma(t) := \\int\\limits_{0}^{\\infty} x^{t-1} e^{-x} dx\\:.\n\\end{equation*}\nTherefore, the associated likelihood function is\n\\begin{equation*}\nL_{n}(\\sigma^{2}) \\propto \\left( \\sigma^{2} \\right)^{-\\frac{n}{2}} e^{- \\frac{1}{2\\sigma^{2}} \\sum\\limits_{i=1}^{n} \\left( y_{i}-\\mu \\right)^{2}}\\;,\n\\end{equation*}\nhence, the posterior is\n\\begin{equation}\n\\begin{aligned}\nf(\\sigma^{2}| \\boldsymbol y) \\propto \\left( \\sigma^{2} \\right)^{-\\frac{n}{2}-\\alpha_{0}-1} e^{- \\frac{1}{2\\sigma^{2}} \\sum\\limits_{i=1}^{n} \\left( y_{i}-\\mu \\right)^{2}} e^{-\\frac{\\beta_{0}}{\\sigma^{2}}} \\\\\n\\propto \\text{inv}\\Gamma \\left(\\frac{n}{2}+\\alpha_{0}, \\frac{1}{2} \\sum\\limits_{i=1}^{n} \\left( y_{i}-\\mu \\right)^{2} + \\beta_{0} \\right)\n\\:.\n\\end{aligned}\n\\end{equation}\n\\begin{remark}\n\tUnfortunately, not all distribution pairs are as convenient as the previously mentioned ones, especially from the point of view of the parameter simulation needed by concrete computational studies. 
When the posterior is a well known distribution, as in the {\\it normal-normal} and {\\it inverse gamma-normal} cases, we can simulate the parameters using, e.g., existing R libraries. Otherwise, {\\it ad hoc} sampling algorithms have to be developed. The next section addresses these problems.\n\n\\end{remark}\n\n\\section{Markov Chain Monte Carlo}\nIn this section, we describe two methods that will be used to sample the parameters, namely, the \\emph{Gibbs Sampling Method} and the \\emph{Metropolis-Hastings Algorithm}. The latter will be used in situations where the posterior distribution is non-standard, while the former will be used when the distribution can be simulated using an existing software.\n\n\\subsection{Gibbs Sampling}\nAssume that we have a model with a finite number $k$ of parameters, $\\boldsymbol \\theta = (\\theta_{1},...,\\theta_{k})$, and that we want to find the full posterior distribution $f(\\theta_{1},...,\\theta_{k}|\\boldsymbol y)$. This goal can be quite difficult to reach, since the multivariate simulation of distributions is much more tangled and computationally heavy than its univariate counterpart. The Gibbs sampling approach allows the sampling of $f(\\theta_{1},...,\\theta_{k}|\\boldsymbol y)$, knowing only the conditional distributions $f(\\theta_{i}|\\theta_{1},...,\\theta_{i-1},\\theta_{i+1},...,\\theta_{k}, \\boldsymbol y), i \\in \\{1,...,k\\}$.\n\nLet $N$ be the number of simulations we want to perform. We assign arbitrary starting values $(\\theta_{1}^{0},...,\\theta_{k}^{0})$ to each of the parameters. 
Then, for every $j \\in \\{1,...,N\\}$, we perform the following steps\n\\begin{align*}\n\\textbf{Step 1:}\\ & \\text{Draw}\\ \\theta_{1}^{j}\\ \\text{from}\\ f(\\theta_{1}^{j}|\\theta_{2}^{j-1},...,\\theta_{k}^{j-1}, \\boldsymbol y)\\,; \\\\\n\\textbf{Step 2:}\\ & \\text{Draw}\\ \\theta_{2}^{j}\\ \\text{from}\\ f(\\theta_{2}^{j}|\\theta_{1}^{j}, \\theta_{3}^{j-1},...,\\theta_{k}^{j-1}, \\boldsymbol y)\\,; \\\\\n\\vdots \\\\\n\\textbf{Step k:}\\ & \\text{Draw}\\ \\theta_{k}^{j}\\ \\text{from}\\ f(\\theta_{k}^{j}|\\theta_{1}^{j},...,\\theta_{k-1}^{j}, \\boldsymbol y)\\,;\n\\end{align*}\nhence we can simulate each of the model parameters. The first $J$ simulations are discarded, being part of what is called the \\emph{burn in period}, in order to get rid of the dependence on the arbitrary choice of the starting point $(\\theta_{1}^{0},...,\\theta_{k}^{0})$, while the remaining $N-J$ values are assumed to be a suitable approximation of the real distribution. It is worth mentioning that the number of iterations, as well as the length of the {\\it burn in period}, should be chosen carefully, since for larger values of $N$ the simulations become too time consuming, while small values might not provide enough iterations for the sampler to converge.\n\n\\subsection{Metropolis-Hastings Algorithm}\n\\label{sbsec:metropolis_hastings}\nThe Gibbs sampler is rather easy to implement, but its major drawback is that it requires each $f(\\theta_{i}| \\boldsymbol \\theta_{-i}, \\boldsymbol y)$ to be readily samplable, where $\\boldsymbol \\theta_{-i}$ is the vector $\\boldsymbol \\theta \\backslash \\{\\theta_{i}\\}$. The Metropolis-Hastings algorithm allows for a solution to such an inconvenience. 
In particular, it only requires a function $f^{\\star}(\\theta_{i}|\\boldsymbol \\theta_{-i}, \\boldsymbol y)$ proportional to the density function $f(\\theta_{i}|\\boldsymbol \\theta_{-i}, \\boldsymbol y)$, and a \\emph{proposal distribution} $q(\\cdot| \\boldsymbol \\theta)$ which denotes a proper probability density function defined on the space $\\Theta$ of all possible values of $\\boldsymbol \\theta$.\nIn what follows we provide the description of the general Metropolis-Hastings algorithm, which uses the full parameter vector $\\boldsymbol \\theta$, as it is reported in \\cite{Car}. We underline that the algorithm remains unchanged when $\\boldsymbol \\theta$ is a scalar. \n\nLet $N$ be the number of simulations we want to perform. We assign an arbitrary starting value $\\boldsymbol \\theta_{0}$ to the parameter vector. Then, for every $j \\in \\{1,...,N\\}$, we perform the following steps\n\\begin{align*}\n& \\textbf{Step 1:}\\ \\text{Draw}\\ \\boldsymbol \\theta_{new}\\ \\text{from}\\ q(\\cdot| \\boldsymbol \\theta^{j-1})\\,; \\\\\n& \\textbf{Step 2:}\\ \\text{Compute the ratio}\\ r =\\frac{f^{\\star}(\\boldsymbol \\theta_{new}) q(\\boldsymbol \\theta^{j-1}|\\boldsymbol \\theta_{new})}{f^{\\star}(\\boldsymbol \\theta^{j-1})q(\\boldsymbol \\theta_{new}|\\boldsymbol \\theta^{j-1})} \\,;\\\\\n& \\textbf{Step 3:}\\ \\text{Define}\\ p = \\min \\left\\lbrace 1,r \\right\\rbrace \\,;\\\\\n& \\textbf{Step 4:}\\ \\text{Set}\\ \\boldsymbol \\theta^{j} := \n\\begin{cases}\n\\boldsymbol \\theta_{new}, &\\text{with probability}\\ p \\\\\n\\boldsymbol \\theta^{j-1}, &\\text{with probability}\\ 1-p\n\\end{cases}\\,.\n\\end{align*}\nHaving defined a method to sample the parameters, we now have the task of simulating the states of the models we will be using. This problem is the subject of the next section.\n\n\\section{State Simulation}\n\\label{chap:states}\nIn this section our goal is to simulate the state vector $\\tilde{S}_{T}$. 
In order to accomplish this, we first need to obtain the values $\\mathbb{P}(S_{1}|\\tilde{y}_{1}),...,\\mathbb{P}(S_{T}|\\tilde{y}_{T})$. We start by setting arbitrary values for the parameters, and then we use the following expression\n\\begin{equation}\n\\begin{aligned}\ng(\\tilde{S}_{T}|\\tilde{y}_{T}) &= \ng(S_{T}|\\tilde{y}_{T}) \\prod_{t=1}^{T-1} g(S_{t}|S_{t+1}, \\tilde{y}_{t}) \\\\\n& = g(S_{T}|\\tilde{y}_{T}) \\prod_{t=1}^{T-1} g(S_{t+1}|S_{t}) g(S_{t}| \\tilde{y}_{t})\\:.\n\\end{aligned}\n\\end{equation}\n\nNotice that, $\\forall t \\in \\{1,...,T-1\\}$, we can sample from $\\tilde{S}_{T}$ if we have $g(S_{t+1}|S_{t})$, which is nothing more than the transition probability from one state to another, and $g(S_{t}| \\tilde{y}_{t})$. The latter can be obtained, $\\forall t \\in \\{1,...,T\\}$, exploiting the Hamilton filter, see below.\n\\subsection*{Hamilton filter}\nThe basic Hamilton filter, see \\cite{Ham89}, can be described in terms of its input, output, and byproduct.\n\n\\begin{flushleft}\n\t\\textbf{input:} $g(S_{t-1}=s_{t-1}|\\tilde{y}_{t-1})\\;;$\n\t\n\t\\textbf{output:} $g(S_{t}=s_{t}|\\tilde{y}_{t})\\;;$\n\t\n\t\\textbf{byproduct:} $f(y_{t}|\\tilde{y}_{t-1})\\:.$\n\\end{flushleft}\nRunning the Hamilton filter for $t \\in \\{1,...,T\\}$, we get the desired values $g(S_{1}| \\tilde{y}_{1}),..., g(S_{T}|\\tilde{y}_{T})$, which can be used to generate $\\tilde{S}_{T}$, as described in what follows\n\\begin{align*}\n\\mathbb{P} (S_{T}=i|\\tilde{y}_{T}) &= \\frac{g(S_{T}=i|\\tilde{y}_{T})}{\\sum\\limits_{j=1}^{M} g(S_{T}=j|\\tilde{y}_{T})}\\;,\n\\end{align*}\nsuch a probability is used to draw a sample of $S_{T}$, i.e.\n\\begin{align*}\n& \\mathbb{P} (S_{T-1}=i|\\tilde{y}_{T-1}) = \\frac{g(S_{T-1}=i|\\tilde{y}_{T-1})}{\\sum\\limits_{j=1}^{M} g(S_{T-1}=j|\\tilde{y}_{T-1})} \\\\ &= \\frac{g(S_{T}|S_{T-1}=i) g(S_{T-1}=i| \\tilde{y}_{T-1})}{\\sum\\limits_{j=1}^{M} g(S_{T}|S_{T-1}=j) g(S_{T-1}=j| \\tilde{y}_{T-1})}\\;,\n\\end{align*}\nthen, the above probability together 
with the previously simulated $S_{T}$, are both used to simulate $S_{T-1}$, and, proceeding iteratively, we have\n\\begin{align*}\n&\\ \\ \\vdots \\\\\n\\mathbb{P} (S_{1}=i|\\tilde{y}_{1}) & = \\frac{g(S_{1}=i|\\tilde{y}_{1})}{\\sum\\limits_{j=1}^{M} g(S_{1}=j|\\tilde{y}_{1})} \\\\ & = \\frac{g(S_{2}|S_{1}=i) g(S_{1}=i| \\tilde{y}_{1})}{\\sum\\limits_{j=1}^{M} g(S_{2}|S_{1}=j) g(S_{1}=j| \\tilde{y}_{1})}\\;,\n\\end{align*}\ntherefore,\nwe can simulate $S_{1}$ obtaining the last component of $\\tilde{S}_{T}$. The latter implies that, for every $t \\in \\{1,...,T\\}$, we know what the distribution of $y_{t}$ is, because we know which state we are in. In the next two sections we present the models that will be used later.\n\n\\section{Jump Diffusion Model}\\label{JumpDiffusionModel}\nIn the paper {\\it The Variation of Certain Speculative Prices}, see \\cite{Mb63}, Benoit Mandelbrot draws attention to the fact that the normal distribution is inadequate when it comes to describing economic and financial data. \n\\begin{figure}[!h]\n\t\\includegraphics[width = \\columnwidth, height = .2\\paperheight]{\"normal_inad\"}\n\t\\caption{ \\small{The histogram of the weekly log returns of the S\\&P500 and the density of the normal distribution obtained by the maximum likelihood method. We see that the normal density has to {\\it sacrifice} values around the mean to cover the values at the tails.} }\n\\end{figure}\nHe argues that although the histograms of price changes seem to behave according to a Gaussian distribution, a more careful analysis reveals that the large number of outliers makes the normal distribution fitted to the data much flatter than the actual data are, and with not enough density at the tails to include all the extreme values. If one tries to manipulate the variance of the Gaussian distribution to accommodate the values around the mean, then the result is a distribution that is even worse than the previous one where the extreme values are concerned. 
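Mandelbrot's observation is easy to reproduce on synthetic data. The sketch below uses Student-$t$ draws as a stand-in for heavy-tailed returns (the S\&P500 series itself is not bundled here) and compares the empirical tail frequency with the one implied by a Gaussian maximum-likelihood fit; all numbers are illustrative.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

# Heavy-tailed stand-in for weekly returns: Student-t, 3 degrees of freedom.
r = 0.01 * rng.standard_t(df=3, size=200_000)

# Gaussian maximum-likelihood fit: sample mean and standard deviation.
mu_hat, sd_hat = r.mean(), r.std()

# Frequency of moves beyond 4 fitted standard deviations...
empirical_tail = np.mean(np.abs(r - mu_hat) > 4 * sd_hat)
# ...versus the Gaussian prediction P(|Z| > 4), about 6.3e-5.
gaussian_tail = 2 * (1 - 0.5 * (1 + erf(4 / sqrt(2))))

# The fitted normal drastically underestimates the tail frequency.
assert empirical_tail > 5 * gaussian_tail
```

Flattening the fitted density to cover the tails, or narrowing it to match the peak, only trades one mismatch for the other, which is the motivation for adding a jump component on top of the Gaussian one.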
\n\\begin{figure}[!h]\n\t\\includegraphics[width = \\columnwidth, height = .2\\paperheight]{\"normal_peak\"}\n\t\\caption{ \\small{The histogram of the weekly log returns of the S\\&P500 and the density of the normal distribution with modified variance. In this case the larger values, in absolute terms, are severely underrepresented by the density.} }\n\\end{figure}\nIn what follows we will show how to solve the aforementioned issue by using a Gaussian distribution, to model the values around the mean, plus jumps of stochastic intensity, to include outlying values. Specifically, our model is the following\n\\begin{equation}\\label{model}\n\\begin{cases}\ny_t = \\epsilon_{t} + \\delta \\sum\\limits_{i=1}^{N_{t}} z_{i} \\\\\n\\epsilon_{t} \\sim \\mathcal{N}(\\mu_{S_{t}}, \\sigma_{S_{t}}^{2}) \\\\\nz_{i} \\stackrel{i.i.d.}{\\sim} \\mathcal{E}(b) \\\\\nN_{t} \\sim \\mathcal{P}(\\theta_{S_{t}}) \\\\\n\\delta \\in \\{-1,1\\},\\ \\mathbb{P} (\\delta=-1) = \\mathbb{P} (\\delta=1) = \\frac{1}{2} \\\\\n\\mu_{S_{t}} = \\mu_{j}\\ \\text{if}\\ S_{t}=j,\\ \\forall j \\in \\{1,...,M\\} \\\\\n\\sigma_{S_{t}} = \\sigma_{j}\\ \\text{if}\\ S_{t}=j,\\ \\forall j \\in \\{1,...,M\\} \\\\\nb >0 \\\\\n\\theta_{S_{t}} = \\theta_{j}\\ \\text{if}\\ S_{t}=j,\\ \\forall j \\in \\{1,...,M\\} \\\\\nS_{t} \\in \\{1,...,M\\} \\\\\np_{ij} = \\mathbb{P} (S_{t}=j|S_{t-1}=i) \\\\\n\\pi_{0} = [\\mathbb{P} (S_{0}=1),...,\\mathbb{P} (S_{0}=M)]\n\\end{cases}\n\\end{equation}\nWe divide the analysis of the model defined in \\eqref{model} into two components, the {\\it Gaussian component} and the {\\it jump component}.\n\n\\subsection{Gaussian Element}\\label{GaussianElement}\nWe will use the Gaussian distribution to model most of the data by means of the random variable $\\epsilon_{t} \\sim \\mathcal{N}(\\mu_{S_{t}}, \\sigma_{S_{t}}^{2})$, where both the mean and the variance of $\\epsilon_{t}$ are state dependent. 
In particular, we define the state dependence of the mean as follows\n\\begin{equation}\n\\mu_{S_{t}} = \\mu_{j}\\ \\text{, if}\\ S_{t}=j, \\, \\forall j \\in \\{1,...,M\\}\\;,\n\\end{equation}\nhence each state has its own, constant mean, without further restrictions. Concerning the variance, we assume that it increases depending on the state, namely\n\\begin{equation}\n\\sigma_{S_{t}}^{2} =\n\\begin{cases}\n\\sigma^{2}_{1} , &S_{t} = 1 \\\\\n\\sigma^{2}_{1} \\prod\\limits_{i=2}^{S_{t}} (1+h_{i}), &S_{t} \\in \\{2,...,M\\}\n\\end{cases}\n\\;,\n\\end{equation}\nwhere, $\\forall i \\in \\{2,3,...,M\\}$, $h_{i}>0$, which gives us\n\\begin{equation}\n\\label{sigmainequality}\n\\sigma_{1}^{2} < \\sigma_{2}^{2} < ... < \\sigma_{M}^{2}\\;,\n\\end{equation}\nhence, by \\eqref{sigmainequality}, as we go up in states we also go up in volatility.\n\n\\subsection{Jump Element}\\label{JumpElement}\nJump diffusion models, first introduced into finance by Robert C. Merton in \\cite{Mert75}, are currently widely accepted as an effective way to model the behaviour of financial data, see, e.g., \\cite{Bates96, It14}. \nIn order to incorporate the jump feature in our model, we have to deal with two major difficulties. First, we have to find a distribution under which the sum of independent random variables behaves well, at least from the point of view of real statistical applications. This task is not as straightforward as it may seem, since even the sum of i.i.d. uniform random variables has a distribution that rapidly grows in complexity with the number of addends. To overcome this particular problem, we have chosen to exploit the exponential distribution to model the i.i.d. jump amplitudes, as the sum of i.i.d. 
exponential random variables follows a {\\it Gamma distribution}, namely\n\\begin{equation}\\label{Gamma}\nz_{i} \\stackrel{i.i.d.}{\\sim} \\mathcal{E}(b),\\ 1 \\leq i \\leq N \\Rightarrow \\sum\\limits_{i=1}^{N}z_{i} \\sim \\Gamma(N,b)\\;,\n\\end{equation}\nwhere $\\Gamma(\\alpha, \\beta)$ denotes the Gamma distribution in the $(\\alpha, \\beta)$ parameterization, $b>0$ is a given positive real number and $N$ is a given natural number.\n\nThe aforementioned choice leads us to the second problem, which concerns the sign of the jumps. Obviously, financial shocks can have both positive and negative values, while the Gamma distribution allows only for the positive ones. The issue can be solved by multiplying the sum in \\eqref{Gamma} by a random variable $\\delta$ taking values in $\\{-1,1\\}$, with equal probability. We refer to the resulting distribution as the \\emph{symmetric Gamma distribution} and we denote it by $\\text{sym}\\Gamma(\\alpha, \\beta)$. Assuming now that $X \\sim \\text{sym}\\Gamma(\\alpha, \\beta)$, the probability density function of $X$ is given by\n\\begin{equation}\nf_{X}(x) = \\frac{\\beta^{\\alpha}}{2 \\Gamma(\\alpha)} |x|^{\\alpha-1} e^{-\\beta |x|},\\ x \\in \\mathbb{R}\\;,\n\\end{equation}\nhence $X$ has mean equal to zero, and variance\n\\begin{equation}\\label{GammaVar}\n\\begin{aligned}\n\\mathbb{V}(X) & = \\mathbb{E}(X^{2}) - \\mathbb{E}(X)^{2} = \\mathbb{E}(X^{2}) \\\\ \n& = \\int\\limits_{-\\infty}^{\\infty} x^{2} \\frac{\\beta^{\\alpha}}{2 \\Gamma(\\alpha)} |x|^{\\alpha-1} e^{-\\beta |x|} dx \\\\\n& = \\int\\limits_{0}^{\\infty} \\frac{\\beta^{\\alpha}}{\\Gamma(\\alpha)} \\underbrace{x^{(\\alpha+2)-1} e^{-\\beta x} }_{\\propto \\Gamma(\\alpha+2, \\beta)} dx \\\\\n& = \\dfrac{\\beta^{\\alpha}}{\\Gamma(\\alpha)} \\frac{\\Gamma(\\alpha+2)}{\\beta^{\\alpha+2}} = \\frac{\\alpha(\\alpha+1)}{\\beta^{2}}\\:.\n\\end{aligned} \n\\end{equation}\n\n\n\\begin{figure}[!h]\n\t\\includegraphics[width = \\columnwidth, height = .25\\paperheight]{\"krc\"}\n\t\\caption{ 
\\small{Comparison of four of the variants of the symmetric Gamma probability density function.} }\n\\end{figure}\n\nLooking at eq. \\eqref{GammaVar} one can see that $\\beta$ can be used to control how much $\\alpha$ influences the variance of the distribution. For example, if we take two random variables $X_{i} \\sim \\text{sym}\\Gamma(i,1),\\ i \\in \\{1,2\\}$, we have $\\mathbb{V}(X_{1})=2$ and $\\mathbb{V}(X_{2})=6$, which is a drastic increase in variance. Taking $X_{i} \\sim \\text{sym}\\Gamma(i,30),\\ i \\in \\{1,2\\}$ on the other hand gives us the variances $\\mathbb{V}(X_{1}) \\approx 0.0022$ and $\\mathbb{V}(X_{2}) \\approx 0.0066$, which is a much smaller increase, for the same change in $\\alpha$. This rather straightforward observation will be useful later, since in our model $\\alpha$ will represent the number of jumps at a certain point in time. Taking a large $\\beta$ means that every extra jump only slightly increases the variance of the model, hence allowing for a finer analysis of the data.\nThe next step consists in determining the length $N$ of the sum in eq. \\eqref{Gamma}. In particular we assume that such a sum has a state-dependent length represented by a state-dependent Poisson random variable $N_{t} \\sim \\mathcal{P}(\\theta_{S_{t}})$, hence we have to determine the values of $\\theta_{S_{t}}$. In keeping with the interpretation of the states, see eq. \\eqref{sigmainequality}, we want the number of jumps to increase as the state the data are in increases. This can be done by ordering the parameters $\\theta_{S_{t}}$. 
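The variance formula \\eqref{GammaVar} and the damping role of $\\beta$ discussed above are easy to check by simulation; a short Python sketch using the same illustrative parameter pairs as in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def sym_gamma(alpha, beta, size, rng):
    """Draw from symGamma(alpha, beta): a Gamma(alpha, rate=beta) draw
    with a fair random sign."""
    signs = rng.choice([-1.0, 1.0], size=size)
    return signs * rng.gamma(alpha, 1.0 / beta, size=size)

# eq. (GammaVar): V(X) = alpha * (alpha + 1) / beta**2
for alpha, beta in [(1, 1.0), (2, 1.0), (1, 30.0), (2, 30.0)]:
    x = sym_gamma(alpha, beta, 400_000, rng)
    print(f"alpha={alpha}, beta={beta}: sample var {x.var():.5f}, "
          f"theory {alpha * (alpha + 1) / beta**2:.5f}")
```

Increasing $\\beta$ from $1$ to $30$ shrinks the variance added by each extra jump by almost three orders of magnitude, which is the behaviour the text exploits.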
Moreover, in order to also allow the parameters to be sufficiently flexible for our purposes, we assume them to be distributed as follows\n\\begin{equation}\n\\label{poissonprior}\n\\theta_{S_{t}} \\sim \n\\begin{cases}\n\\mathcal{U}(0,u_{1}), &{S_{t}}=1 \\\\\n\\mathcal{U}(u_{1},u_{2}), &{S_{t}}=2 \\\\\n& \\vdots \\\\\n\\mathcal{U}(u_{M-2},u_{M-1}), &{S_{t}}=M-1 \\\\\n\\mathcal{U}(u_{M-1},u_{M}), &{S_{t}}=M\n\\end{cases}\n\\end{equation}\nwhich clearly guarantees that $\\theta_{1} \\leq \\theta_{2} \\leq ... \\leq \\theta_{M}$.\n\n\\subsection{Full Model}\nSumming up the definitions stated in subsections \\ref{GaussianElement} and \\ref{JumpElement}, we can write the full model as follows\n\\begin{equation}\n\\begin{cases}\ny_{t} &= \\epsilon_{t} + \\zeta_{t} \\\\\n\\epsilon_{t} &\\sim \\mathcal{N}(\\mu_{S_{t}}, \\sigma_{S_{t}}^{2}) \\\\\n\\zeta_{t} &\\sim \\text{sym}\\Gamma(N_{t},b)\n\\end{cases}\n\\:.\n\\end{equation}\nThere is no analytic expression for the distribution of $y_{t}$, but we can obtain an integral form of it using the following well-known fact.\nLet $X$ and $Y$ be two independent random variables with density functions $f_{X}(x)$ and $f_{Y}(x)$, defined for $x \\in \\mathbb{R}$. Then the sum $Z = X + Y$ is a random variable with density function $f_{Z}(z)$ given by\n\\begin{align}\\label{convolution}\nf_{Z}(z) = \\int\\limits_{-\\infty}^{\\infty} f_{X}(z - y) f_{Y}(y) dy = \\int\\limits_{-\\infty}^{\\infty}f_{X}(y) f_{Y}(z - y) dy\\:.\n\\end{align}\nTherefore, by the convolution formula in \\eqref{convolution}, conditionally on $N_{t}$ and $S_{t}$ we have\n\\begin{equation}\\label{FMdensity}\nf_{y_{t}}(z) = \\int\\limits_{-\\infty}^{\\infty} \\frac{b^{N_{t}} |y|^{N_{t}-1}}{\\sqrt{8 \\pi \\sigma_{S_{t}}^{2}}\\Gamma(N_{t})} e^{-\\frac{1}{2} \\left( \\frac{z-y-\\mu_{S_{t}}}{\\sigma_{S_{t}}} \\right)^{2} -b |y|} dy\\;.\n\\end{equation}\nAlthough not very useful in general, the expression in eq. 
\\eqref{FMdensity} can be computationally handled with little difficulty, a crucial fact for the concrete case study we will consider in Section \\ref{CaseStudy}. In the next section we consider the $\\alpha$-stable distribution model.\n\n\\section{$\\boldsymbol \\alpha$-Stable Distribution Model}\\label{AlfaStableModel}\nIn Section \\ref{JumpDiffusionModel}, we pointed out that the Gaussian distribution is not adequate to model financial data, mainly because of its slim tails, which we offset by adding jumps. In what follows, we will consider a different approach, namely we will model the data using a distribution that has fatter tails than the Gaussian one, but still preserves its most important characteristics.\n\n\\subsection{$\\alpha$-Stable Distribution}\nThere are multiple equivalent ways to define a stable distribution. We will consider the two most common ones; the interested reader can refer to, e.g., \\cite{SamTaq}, for the others.\n\\begin{definition}\n\tA random variable $X$ is said to have a stable distribution if, for every $A$ and $B$ positive, there exists a positive number $C$ and a real number $D$ such that\n\\begin{equation}\nAX_{1}+BX_{2} \\stackrel{\\text{ d}}{=} CX+D\\;,\n\\end{equation}\n\\end{definition}\nwhere $X_{1}$ and $X_{2}$ are independent copies of $X$ and $\\stackrel{\\text{ d}}{=}$ stands for {\\it equal in distribution}. This implies that the sum of two stable independent identically distributed random variables is still a stable random variable, with the same distribution, up to a {\\it scale factor} $C$, and a shift component $D$. As an example, we can consider two Gaussian random variables $X_{1}$ and $X_{2}$, assumed to be independent copies of $ X \\sim \\mathcal{N}(\\mu, \\sigma^{2})$. 
Then, $X_{1}+X_{2} \\sim \\mathcal{N}(2\\mu, 2\\sigma^{2})$, which means that $X_{1}+X_{2} \\stackrel{\\text{ d}}{=} \\sqrt{2}X + (2-\\sqrt{2})\\mu$.\nAlternatively, we can define the stable distribution using characteristic functions, namely\n\n\\begin{definition}\n\tA random variable $X$ is said to have a stable distribution if there exist parameters $0 < \\alpha \\leq 2,\\ \\gamma \\geq 0,\\ |\\beta| \\leq 1\\ \\text{and}\\ \\mu \\in \\mathbb{R}$, such that its characteristic function has the form\n\t\\begin{equation}\n\t\\mathbb{E}(e^{i\\theta X}) =\n\t\\begin{cases}\n\te^{\\left[ - \\gamma^{ \\alpha } | \\theta |^{\\alpha} (1 - i \\beta \\text{sgn}(\\theta) \\tan(\\frac{\\pi \\alpha}{2})) + i \\mu \\theta \\right]}, &\\alpha \\neq 1 \\\\\n\te^{\\left[ - \\gamma | \\theta | (1 + i \\beta \\frac{2}{\\pi} \\text{sgn}(\\theta) \\ln |\\theta| ) + i \\mu \\theta \\right]}, &\\alpha = 1\n\t\\end{cases}\n\t\\:.\n\t\\end{equation}\n\\end{definition}\nWe call $\\alpha$ the \\emph{stability} parameter, $\\beta$ the \\emph{skewness} parameter, $\\gamma$ the \\emph{scale} parameter and $\\mu$ the \\emph{location} parameter. For $\\alpha=2$ we obtain the normal distribution, which is the only member of the stable distribution family that has finite variance. For $\\alpha \\in (1,2)$ the variance is infinite and the mean equals $\\mu$, while, for $\\alpha \\in (0,1]$, the mean is undefined as well. We note that, in general, there is no closed-form expression for the probability density function of a stable distribution. The stable distribution will be denoted by $\\mathcal{S}_{\\alpha,\\beta}(\\gamma,\\mu)$ for the remainder of the paper. \n\n\\begin{figure}[!h]\n\t\\includegraphics[width = \\columnwidth, height = .25\\paperheight]{\"alphaComp\"}\n\t\\caption{ \\small{Comparison of the $\\alpha$-stable distribution for different values of $\\alpha$. 
In the special case $\\alpha=2$ we have a normal distribution.} }\n\\end{figure}\n\n\n\\subsection{The Model}\nIn the model we propose, the data are assumed to follow a symmetric $\\alpha$-stable distribution, more precisely $y_{t} \\sim\\mathcal{S}_{\\alpha,0}(\\gamma_{S_{t}},\\mu_{S_{t}})$. The full model is presented in the following\n\\begin{equation}\\label{alfamodel}\n\\begin{cases}\ny_{t} \\sim\\mathcal{S}_{\\alpha,0}(\\gamma_{S_{t}},\\mu_{S_{t}}) \\\\\n\\gamma_{S_{t}} = \\gamma_{j}\\ \\text{if}\\ S_{t}=j,\\ \\forall j \\in \\{1,...,M\\} \\\\\n\\mu_{S_{t}} = \\mu_{j}\\ \\text{if}\\ S_{t}=j,\\ \\forall j \\in \\{1,...,M\\} \\\\\n\\alpha \\in (1,2) \\\\\nS_{t} \\in \\{1,...,M\\} \\\\\np_{ij} = \\mathbb{P} (S_{t}=j|S_{t-1}=i) \\\\\n\\pi_{0} = [\\mathbb{P} (S_{0}=1),...,\\mathbb{P} (S_{0}=M)]\n\\end{cases}\n\\:.\n\\end{equation}\nThe motivation behind the choice of the model represented by \\eqref{alfamodel} mainly relies on empirical observations of financial data, which exhibit fat tails that cannot be well described using the Gaussian approach. In particular we believe that such a phenomenon can be suitably addressed by exploiting $\\alpha$-stable distributions with $\\alpha \\in (1,2)$. Moreover, financial data often exhibit structural breaks because of abrupt changes in the market, e.g., during the sub-prime mortgage credit crisis of 2008, which is the reason why we consider both the scale and the location parameters to be state-dependent.\nAs we previously mentioned, in general there is no closed form for the density of an $\\alpha$-stable distribution. 
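Although the density has no closed form, simulating a symmetric $\\alpha$-stable variable is straightforward. Below is a sketch of the Chambers--Mallows--Stuck sampler for the symmetric, unit-scale case ($\\beta = 0$, $\\alpha \\neq 1$), used here only to illustrate the fat tails relative to a Gaussian:

```python
import numpy as np

rng = np.random.default_rng(2)

def sym_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck draw of a standard symmetric alpha-stable
    variable (beta = 0, unit scale), valid for alpha != 1."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

x = sym_stable(1.5, 200_000, rng)
z = rng.normal(0.0, 1.0, 200_000)
# the stable sample has markedly fatter tails than the Gaussian one
print((np.abs(x) > 4).mean(), (np.abs(z) > 4).mean())
```

For $\\alpha = 1.5$ a few percent of the draws exceed four in absolute value, while for the Gaussian such excursions are essentially absent, which is the qualitative feature the model relies on.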
Nevertheless, this problem can be circumvented using the fact that $y_{t}$ can be conditionally represented as a Gaussian random variable, see, e.g., \\cite{GoKuRu10,SamTaq}, by introducing a random variable $\\lambda$ and using the property\n\\begin{align}\n\\label{conditionalnormal}\n\\begin{split}\n\\text{If:}\\ \\qquad &\\lambda \\sim \\mathcal{S}_{\\tfrac{\\alpha}{2},1}(2 \\left( \\cos(\\frac{\\pi \\alpha}{4}) \\right)^{\\frac{2}{\\alpha}} ,0) \\\\\n\\text{then:}\\ \\qquad &y_{t}|\\lambda \\sim \\mathcal{N}(\\mu_{S_{t}}, \\lambda\\gamma^{2}_{S_{t}})\\;,\n\\end{split}\n\\end{align}\nwhich provides an analytic likelihood function and significantly speeds up the sampling process. \n\n\\begin{figure}[!h]\n\t\\includegraphics[width = \\columnwidth, height = .3\\paperheight]{\"microbenchmark\"}\n\t\\caption{ \\small{Comparison between the time needed for the jump diffusion model (top) and the stable distribution model (bottom) to draw 1000 samples. The best performance of the first model was $156$ seconds, its worst performance $1103$ seconds, with a mean of $207$ and a median of $160$. The best performance of the second model was $24$ seconds, its worst performance $868$ seconds, with a mean of $68$ and a median of $25$ seconds.} }\n\t\\label{fig:benchmark}\n\\end{figure}\n\nAnalogously to what we considered in Section \\ref{JumpDiffusionModel}, we have one mean for each state, without further restrictions, namely\n\\begin{equation}\n\\mu_{S_{t}} = \\mu_{j}\\ \\text{if}\\ S_{t}=j,\\ \\forall j \\in \\{1,...,M\\}\\;.\n\\end{equation}\nWe also want the scale parameter to be increasing with respect to the state, namely\n\\begin{equation}\n\\gamma_{S_{t}} =\n\\begin{cases}\n\\gamma_{1} , &S_{t} = 1 \\\\\n\\gamma_{1} \\prod\\limits_{i=2}^{S_{t}} (1+h_{i}), &S_{t} \\in \\{2,...,M\\}\n\\end{cases}\n\\;,\n\\end{equation}\nwhere $\\forall i \\in \\{2,3,...,M\\},\\ h_{i}>0$, which leads to the property\n\\begin{equation}\n\\gamma_{1} < \\gamma_{2} < ... 
< \\gamma_{M}\\;,\n\\end{equation}\nso that an increase in the state number indicates an increase in volatility.\n\n\\section{Implementation}\nIn this section we get into the specifics of our two models. In particular we provide the details regarding the likelihood functions, the priors and the posteriors, for both the {\\it Jump Diffusion Model}, described in Section \\ref{JumpDiffusionModel}, and for the {\\it $\\alpha$-Stable Model}, defined in Section \\ref{AlfaStableModel}.\nThe concept of duration analysis is also explained, along with its importance. Both of the models will be characterized by four states, with the states being interpreted as \\emph{low, medium, high} and \\emph{very high} volatility {\\it regimes}. Let us start by defining the following quantities\n\\begin{align}\ny^{j} & = \\{y_{t} \\in \\tilde{y}_{T} :S_{t}=j\\},\\ j \\in \\{1,2,3,4\\} \\;,\\\\\nn_{j} & = \\# y^{j}\\:.\n\\end{align}\nFor the rest of this section we will suppress unneeded parameters. Therefore, e.g., the conditional posterior $f(\\mu_{j}|\\boldsymbol y, \\sigma_{j}, N_{j},...)$ will be denoted by $f(\\mu_{j}|\\boldsymbol y)$, the general rule being that the parameters that are not being inferred upon are considered known.\n\\subsection{Jump Diffusion Model}\n\\label{subsec:jump_diffusion_model}\nThe description of the implementation is divided into three parts: the first part deals with the form of the likelihood function, the second with the priors, while the third provides a detailed analysis of the different types of posteriors obtained. \n\\subsubsection{Likelihood}\n\\label{likelihood1}\nWe have to take into account whether there are jumps in the model or not, as well as the state of each observation. 
Hence, if $N_{j} = 0$, we define\n\\begin{equation}\n\\begin{aligned}\nL^{j}\\left( \\cdot \\right) &= \\prod\\limits_{y_{t} \\in y^{j}} \\frac{e^{- \\frac{1}{2} \\left( \\frac{y_{t}-\\mu_{j}}{\\sigma_{j}} \\right)^{2}} }{\\sqrt{2 \\pi \\sigma_{j}^{2}}} \\\\\n& = \\left( 2 \\pi \\sigma_{j}^{2} \\right)^{-\\frac{n_{j}}{2}} e^{-\\frac{1}{2 \\sigma_{j}^{2}} \\sum\\limits_{y_{t} \\in y^{j}}\\left( y_{t}-\\mu_{j} \\right)^{2} }\\;,\n\\end{aligned}\n\\end{equation}\nwhile, if $N_{j} \\geq 1$, we define $L^{j}\\left( \\cdot \\right)$ as\n\\begin{equation}\n\\prod\\limits_{y_{t} \\in y^{j}} \\frac{b^{N_{j}}}{\\sqrt{8 \\pi \\sigma_{j}^{2}}\\Gamma(N_{j})} \\int\\limits_{-\\infty}^{\\infty} |y|^{N_{j}-1}e^{-\\frac{1}{2}\\left( \\frac{y_{t}-y-\\mu_{j}}{\\sigma_{j}} \\right)^{2} -b|y|}dy\\:,\n\\end{equation}\nin agreement with \\eqref{FMdensity}. Then, the full likelihood function is \n\\begin{equation}\nL_{n} = \\prod\\limits_{j=1}^{4} L^{j}\\:,\n\\end{equation}\nwhich has a standard form only if $N_{j}=0$, for every $j \\in \\{1,2,3,4\\}$. As this very rarely happens, we will use the Metropolis-Hastings algorithm in this model.\n\n\\subsubsection{Priors}\n\\begin{raggedleft}\n\t\\textbf{Mean:}\n\\end{raggedleft} \nWe take the mean to be normally distributed. 
Moreover, we give the same prior to the means of all the states, namely\n\\begin{equation}\n\\label{meanprior}\n\\mu_{j} \\sim \\mathcal{N}\\left(0, \\frac{1}{k}\\right),\\ 1 \\leq j \\leq 4 \\ \\text{and}\\ k>0\\:.\n\\end{equation}\n\n\\begin{raggedleft}\n\t\\textbf{Variance:}\n\\end{raggedleft} \nThe variance $\\sigma_{1}^{2}$ will have an inverse-gamma prior.\n\\begin{equation*}\n\\sigma_{1}^{2} \\sim\\ \\text{inv}\\Gamma(\\alpha_{0}, \\beta_{0}),\\ \\alpha_{0}, \\beta_{0}>0\\:.\n\\end{equation*}\n\n\\begin{raggedleft}\n\t\\textbf{H parameters:}\n\\end{raggedleft}\nWe previously saw that in order for \\eqref{sigmainequality} to hold, we need $h_{j} > 0,\\ \\forall j \\in \\{2,3,4\\}$, hence we define $h_{j}^{\\star} := 1+h_{j}$, for all $j$, and make these parameters Fr\\'{e}chet distributed, namely \n\\begin{equation}\nh_{j}^{\\star} \\sim \\mathcal{F}(h_{j}^{\\star}|1, \\alpha_{F}, s_{F})\\;,\n\\end{equation}\nthen the density function of $h_{j}^{\\star}$ reads as follows\n\\begin{equation}\nf(h_{j}^{\\star}) = \\frac{\\alpha_{F}}{s_{F}} \\left( \\frac{h_{j}^{\\star}-1}{s_{F}} \\right)^{-1-\\alpha_{F}} e^{- \\left( \\frac{h_{j}^{\\star}-1}{s_{F}} \\right)^{- \\alpha_{F}}}\\;,\n\\end{equation}\nwhere $\\alpha_{F},s_{F}>0$, and $f$ is defined for $h_{j}^{\\star}>1$.\n\n\\begin{raggedleft}\n\t\\textbf{Poisson parameters:}\n\\end{raggedleft} \nFor the priors of the Poisson parameters we refer to \\eqref{poissonprior}.\n\n\\begin{raggedleft}\n\t\\textbf{Transition probabilities:}\n\\end{raggedleft}\nFor the transition probabilities we will use a Dirichlet prior, namely \n\\begin{equation}\n(p_{1j},p_{2j},p_{3j},p_{4j}) \\sim \\mathcal{D}(m_{1j},m_{2j},m_{3j},m_{4j})\\;,\n\\end{equation}\nfor every $j \\in \\{1,2,3,4\\}$. 
The density function of this particular Dirichlet distribution is given by\n\\begin{align}\nf(p_{1j},p_{2j},p_{3j},p_{4j}) = \\frac{\\Gamma\\left(\\sum\\limits_{i=1}^{4} m_{ij} \\right)}{\\prod\\limits_{i=1}^{4}\\Gamma(m_{ij})} \\prod\\limits_{i=1}^{4} p_{ij}^{m_{ij}-1}\n\\;,\n\\end{align}\nand it is defined on the simplex\n\\begin{align}\n\\nonumber & p_{1j},p_{2j},p_{3j} > 0 \\;, \\\\\n& p_{1j}+p_{2j}+p_{3j} < 1 \\;,\\\\\n\\nonumber & p_{4j} = 1 - \\left( p_{1j}+p_{2j}+p_{3j} \\right)\n\\;,\n\\end{align}\nwhile, everywhere else, its value is zero. Finally, the parameter $b$ is a constant.\n\n\\subsubsection{Posteriors}\n\\begin{raggedleft}\n\t\\textbf{Mean:}\n\\end{raggedleft}\nBecause the likelihood function depends on whether or not jumps have occurred, we have two different posteriors for the mean. In particular, if $N_{j}=0$, by \\eqref{nn}, we have\n\\begin{equation}\n\\label{meanposterior}\n\\mu_{j}|y^{j} \\sim \\mathcal{N} \\left(\\frac{n_{j}\\bar{y}^{j}}{n_{j}+k\\sigma_{j}^{2}},\\frac{\\sigma_{j}^{2}}{n_{j}+k \\sigma_{j}^{2}} \\right)\\;,\n\\end{equation}\nfor all $j \\in \\{1,2,3,4\\}$, while, if $N_{j} \\geq 1$, we obtain\n\\begin{equation}\nf(\\mu_{j}|y^{j}) \\propto f(\\mu_{j}) L^{j}(\\mu_{j}) \\:.\n\\end{equation}\n\n\\begin{raggedleft}\n\t\\textbf{Variance:}\n\\end{raggedleft}\nSimilarly to the previous point, we have to differentiate between the jump and no-jump cases. Therefore, if $N_{1}=0$, by \\eqref{ign}, we have\n\\begin{equation}\n\\sigma_{1}^{2}|y^{1} \\sim\\ \\text{inv}\\Gamma \\left(\\frac{n_{1}}{2}+\\alpha_{0}, \\frac{1}{2} \\sum\\limits_{y_{t} \\in y^{1}} \\left( y_{t}-\\mu_{1} \\right)^{2} + \\beta_{0} \\right)\\;,\n\\end{equation}\notherwise, we obtain\n\\begin{equation}\nf(\\sigma_{1}|y^{1}) \\propto f(\\sigma_{1}) L^{1}(\\sigma_{1})\\:.\n\\end{equation}\n\n\\begin{raggedleft}\n\t\\textbf{H parameters:}\n\\end{raggedleft}\nIn order to obtain $h_{i},\\ i \\in \\{2,3,4\\}$, we need to transform the data, also taking into account the different states. 
\n\nIn particular we have the following cases\n\n\\begin{raggedleft}\n\t$\\boldsymbol{S_{t}=2:}$\n\\end{raggedleft}\n\\begin{align*}\ny_{t} \\sim \\mathcal{N}(\\mu_{2}, \\sigma_{1}^{2} h_{2}^{\\star}) \\Rightarrow \\psi_{t} := \\frac{y_{t}-\\mu_{2}}{\\sigma_{1}} \\sim \\mathcal{N}(0, h_{2}^{\\star})\\:,\n\\end{align*} \nwhere we have used the transformed data set $\\psi_{t}$ to obtain a posterior for $h_{2}^{\\star}$ when $N_{2}=0$, namely\n\\begin{equation}\nf(h_{2}^{\\star}|y^{2}) \\propto \\left( h_{2}^{\\star} \\right)^{- \\frac{n_{2}}{2}} e^{- \\frac{1}{2 h_{2}^{\\star}} \\sum\\limits_{i=1}^{n_{2}} \\psi^{2}_{i} } \\mathcal{F}(h_{2}^{\\star}|1,\\alpha_{F},s_{F})\\:.\n\\end{equation}\n\n\\begin{raggedleft}\n\t$\\boldsymbol{S_{t}=3:}$\n\\end{raggedleft}\n\\begin{equation*}\ny_{t} \\sim \\mathcal{N}(\\mu_{3}, \\sigma_{1}^{2} h_{2}^{\\star} h_{3}^{\\star}) \\Rightarrow \\zeta_{t} := \\frac{y_{t}-\\mu_{3}}{\\sqrt{\\sigma_{1}^{2} h_{2}^{\\star}}} \\sim \\mathcal{N}(0, h_{3}^{\\star})\\;,\n\\end{equation*} \nwhich gives us the posterior\n\\begin{equation}\nf(h_{3}^{\\star}|y^{3}) \\propto \\left( h_{3}^{\\star} \\right)^{- \\frac{n_{3}}{2}} e^{- \\frac{1}{2 h_{3}^{\\star}} \\sum\\limits_{i=1}^{n_{3}} \\zeta^{2}_{i} } \\mathcal{F}(h_{3}^{\\star}|1,\\alpha_{F},s_{F})\\;,\n\\end{equation}\nhence, when $N_{3}=0$, the posterior is analogous to the one for $h_{2}^{\\star}$, with the only difference being that we use $\\zeta_{t}$ instead of $\\psi_{t}$. 
\n\\begin{raggedleft}\n\t$\\boldsymbol{S_{t}=4:}$\n\\end{raggedleft}\n\\begin{align*}\ny_{t} &\\sim \\mathcal{N}(\\mu_{4}, \\sigma_{1}^{2} h_{2}^{\\star} h_{3}^{\\star} h_{4}^{\\star})\\\\ \\Rightarrow \\theta_{t} &:= \\frac{y_{t}-\\mu_{4}}{\\sqrt{\\sigma_{1}^{2} h_{2}^{\\star} h_{3}^{\\star}}} \\sim \\mathcal{N}(0, h_{4}^{\\star})\\;,\n\\end{align*} \nwhich yields the posterior\n\\begin{equation}\nf(h_{4}^{\\star}|y^{4}) \\propto \\left( h_{4}^{\\star} \\right)^{- \\frac{n_{4}}{2}} e^{- \\frac{1}{2 h_{4}^{\\star}} \\sum\\limits_{i=1}^{n_{4}} \\theta^{2}_{i} } \\mathcal{F}(h_{4}^{\\star}|1,\\alpha_{F},s_{F})\\:.\n\\end{equation}\n\nIn the case where we have jumps, i.e. $N_{j} \\geq 1,\\ j \\in \\{2,3,4\\}$, there is no analytic expression, therefore \n\\begin{equation*}\nf(h_{j}^{\\star}|y^{j}) \\propto \\mathcal{F}(h_{j}^{\\star}|1,\\alpha_{F},s_{F}) L^{j}(h_{j}^{\\star})\\:.\n\\end{equation*}\n\n\\begin{raggedleft}\n\t\\textbf{Poisson parameters:}\n\\end{raggedleft}\nConcerning the posterior of the theta parameters, for $j \\in \\{1,2,3,4\\}$, we have\n\\begin{equation}\n\\begin{aligned}\nf(\\theta_{j}|y^{j}) \\propto f(y^{j}|\\theta_{j}) = \\sum_{N_{j}=0}^{\\infty} f(y^{j},N_{j}|\\theta_{j}) \\\\ = \\sum_{N_{j}=0}^{\\infty} f(N_{j}|\\theta_{j})f(y^{j}|N_{j})\\;,\n\\end{aligned}\n\\end{equation}\nwhere we used the fact that the prior \\eqref{poissonprior} is flat on its support and that, conditionally on $N_{j}$, the observations do not depend on $\\theta_{j}$.\n\n\\begin{raggedleft}\n\t\\textbf{Transition probabilities:}\n\\end{raggedleft}\nThe transition probabilities differ from the other parameters in that they do not depend directly on the observations $y_{1},...,y_{T}$. Instead, they depend on the vector of states $\\tilde{S}_{T}$. 
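Operationally, given a sampled state path, the transition counts are all that the conjugate Dirichlet update needs; a minimal Python sketch, where the state path and the prior weights $m_{ij}$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def transition_counts(states, m):
    """Count transitions n[i, j] = #{t : S_{t-1} = i, S_t = j} among m states."""
    n = np.zeros((m, m), dtype=int)
    for a, b in zip(states[:-1], states[1:]):
        n[a, b] += 1
    return n

states = rng.integers(0, 4, size=500)   # illustrative state path (0-based)
counts = transition_counts(states, 4)
prior = np.ones((4, 4))                 # illustrative Dirichlet weights m_ij
# one posterior draw per row of the transition matrix (rows = current state)
post = np.array([rng.dirichlet(prior[i] + counts[i]) for i in range(4)])
print(post.round(3))
```

Each posterior row is a valid probability vector, so a single pass over the state path suffices to refresh the whole transition matrix within a Gibbs sweep.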
Assuming that the vector $\\tilde{S}_{T}$ is known, the posterior distribution of the transition probability vector $(p_{1j},p_{2j},p_{3j},p_{4j})$, $j \\in \\{1,2,3,4\\}$, has the Dirichlet distribution\n\\begin{equation}\n\\mathcal{D}(m_{1j}+n_{1j},m_{2j}+n_{2j},m_{3j}+n_{3j},m_{4j}+n_{4j})\\;,\n\\end{equation}\nwhere $n_{ij}$ is the number of transitions from state $i$ to state $j$.\n\n\\subsection{$\\alpha$-Stable Distribution Model}\nIn what follows, we proceed analogously to subsection \\ref{subsec:jump_diffusion_model}.\n\\subsubsection{Likelihood}\nUsing the fact that, in the present setting, our data are conditionally normal, see \\eqref{conditionalnormal}, the likelihood function reads as follows\n\\begin{align*}\nL \\left( \\cdot|\\boldsymbol y \\right) &= \\prod\\limits_{j=1}^{4} \\prod\\limits_{y_{t} \\in y^{j}} \\frac{e^{- \\frac{1}{2} \\left( \\frac{y_{t}-\\mu_{j}}{\\sqrt{\\lambda}\\gamma_{j}}\\right)^{2}} }{\\sqrt{2 \\pi \\lambda \\gamma_{j}^{2}}} \\\\\n& = \\prod\\limits_{j=1}^{4} \\left( 2 \\pi \\lambda \\gamma_{j}^{2} \\right)^{-\\frac{n_{j}}{2}} e^{-\\frac{1}{2 \\lambda \\gamma_{j}^{2}} \\sum\\limits_{y_{t} \\in y^{j}}\\left( y_{t}-\\mu_{j} \\right)^{2} }\\;,\n\\end{align*}\nand, unlike in the previous model, we do not have to worry about multiple cases.\n\n\\subsubsection{Priors}\n\\begin{raggedleft}\n\t\\textbf{Mean:}\n\\end{raggedleft}\nThe prior of the means is the same as in eq.~\\eqref{meanprior}.\n\n\\begin{raggedleft}\n\t\\textbf{Scale:}\n\\end{raggedleft}\nThe distribution of the scale is analogous to that of the variance in the previous model, namely\n\\begin{align*}\n\\gamma^{2} \\sim\\ \\text{inv}\\Gamma(\\alpha_{1}, \\beta_{1}), \\alpha_{1}, \\beta_{1}>0\\:.\n\\end{align*}\n\n\\begin{raggedleft}\n\t\\textbf{H parameters:}\n\\end{raggedleft}\nThese parameters are exactly the same as they were in the previous model, in fact their role remains unchanged, since they allow for the volatility to increase as the states increase. 
\n\n\\begin{raggedleft}\n\t\\textbf{Lambda:}\n\\end{raggedleft}\nThe lambda parameter follows a stable distribution, hence\n\\begin{align*}\n\\lambda \\sim S_{\\tfrac{\\alpha}{2},1}(2 \\left( \\cos\\left(\\frac{\\pi \\alpha}{4}\\right) \\right)^{\\frac{2}{\\alpha}} ,0)\\:.\n\\end{align*}\n\n\\subsubsection{Posteriors}\n\\begin{raggedleft}\n\t\\textbf{Mean:}\n\\end{raggedleft}\nThe posterior of the mean is analogous to the one in eq. \\eqref{meanposterior}, with the only difference being the form of the variance. In particular, we have\n\\begin{equation}\n\\mu_{j}|y^{j} \\sim \\mathcal{N} \\left(\\frac{n_{j}\\bar{y}^{j}}{n_{j}+k \\lambda \\gamma_{j}^{2}},\\frac{\\lambda \\gamma_{j}^{2}}{n_{j}+k \\lambda \\gamma_{j}^{2}} \\right)\n\\:.\n\\end{equation}\n\n\\begin{raggedleft}\n\t\\textbf{Scale:}\n\\end{raggedleft}\nThe posterior of the scale is \n\\begin{equation}\n\\gamma^{2}|y^{1} \\sim\\ \\text{inv}\\Gamma \\left(\\frac{n_{1}}{2}+\\alpha_{1}, \\frac{1}{2} \\sum\\limits_{y_{t} \\in y^{1}} \\left( y_{t}-\\mu_{1} \\right)^{2} + \\beta_{1} \\right)\n\\:.\n\\end{equation}\n\n\\begin{raggedleft}\n\t\\textbf{H parameters:}\n\\end{raggedleft}\nIn what follows we limit ourselves to listing the needed transformations, therefore we have\n\n\\begin{raggedleft}\n\t$\\boldsymbol{S_{t}=2:}$\n\\end{raggedleft}\n\\begin{align*}\ny_{t} \\sim \\mathcal{N}(\\mu_{2},\\lambda \\gamma^{2} h_{2}^{\\star}) \\Rightarrow \\psi_{t} := \\frac{y_{t}-\\mu_{2}}{\\sqrt{\\lambda \\gamma^{2}}} \\sim \\mathcal{N}(0, h_{2}^{\\star})\\:.\n\\end{align*} \n\n\\begin{raggedleft}\n\t$\\boldsymbol{S_{t}=3:}$\n\\end{raggedleft}\n\\begin{equation*}\ny_{t} \\sim \\mathcal{N}(\\mu_{3}, \\lambda \\gamma^{2} h_{2}^{\\star} h_{3}^{\\star}) \\Rightarrow \\zeta_{t} := \\frac{y_{t}-\\mu_{3}}{\\sqrt{\\lambda \\gamma^{2} h_{2}^{\\star}}} \\sim \\mathcal{N}(0, h_{3}^{\\star})\\:.\n\\end{equation*} \n\n\\begin{raggedleft}\n\t$\\boldsymbol{S_{t}=4:}$\n\\end{raggedleft}\n\\begin{align*}\ny_{t} &\\sim 
\\mathcal{N}(\\mu_{4}, \\lambda \\gamma^{2} h_{2}^{\\star} h_{3}^{\\star} h_{4}^{\\star}) \\\\\n\\Rightarrow \\theta_{t} &:= \\frac{y_{t}-\\mu_{4}}{\\sqrt{\\lambda \\gamma^{2} h_{2}^{\\star} h_{3}^{\\star}}} \\sim \\mathcal{N}(0, h_{4}^{\\star})\\:.\n\\end{align*} \nThe posteriors are obtained as in the previous case.\n\n\\begin{raggedleft}\n\t\\textbf{Lambda:}\n\\end{raggedleft}\nSince there is no closed form for the posterior distribution of the lambda parameter, we only write\n\\begin{equation}\n\\label{eq:lambda_posterior}\nf(\\lambda|\\boldsymbol y) \\propto S_{\\tfrac{\\alpha}{2},1}(2 \\left( \\cos\\left(\\frac{\\pi \\alpha}{4}\\right) \\right)^{\\frac{2}{\\alpha}} ,0) L(\\lambda|\\boldsymbol y)\n\\:.\n\\end{equation}\n\nThe prior and posterior of the transition probabilities are the same in both proposed models. In fact, the transition probabilities do not depend on any of the parameters, but only on the state vector $\\tilde{S}_{T}$.\n\n\\subsection{Duration Analysis}\n\\label{subsubsec:duration}\nThe expected duration of each state for an MSM is a quantity of significant interest. Having an estimate of how long a certain data set remains in a particular state can give us useful insights into how the model will behave for a certain period of time. In this section we are going to explain how the expected duration can be calculated exploiting the transition probabilities.\n\nThe expected duration, denoted by $\\hat{d}_j$, is defined as follows\n\\begin{equation}\n\\label{eq:expdur_def}\n\\hat{d}_j := \\mathbb{E}\\left[d_j\\right]\\;,\n\\end{equation}\nwhere $d_j$ is the random variable that models the length of the time interval for which the time series is in state $j$. 
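Since dwell times in a state with self-transition probability $p_{jj}$ are geometric, the expected duration can also be approximated by simulation; a short sketch with an illustrative $p_{jj}$:

```python
import numpy as np

rng = np.random.default_rng(4)

p_jj = 0.7                                  # illustrative self-transition probability
n = 200_000

# dwell time: the chain "stays" with probability p_jj until it leaves,
# so d_j ~ Geometric(1 - p_jj), supported on {1, 2, ...}
dwell = rng.geometric(1.0 - p_jj, size=n)
print(dwell.mean(), 1.0 / (1.0 - p_jj))     # both close to 10/3
```

The simulated average matches the closed form $1/(1-p_{jj})$ derived in the text.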
The first thing we have to consider is $\\mathbb{P}\\left(d_j = d\\right)$, the probability of the chain remaining in state $j$ for exactly $d$ periods, namely\n\\begin{equation}\n\\nonumber\n\\mathbb{P}\\left(d_j = d\\right) = p_{jj}^{d-1} \\left(1-p_{jj}\\right)\\;,\n\\end{equation}\nwhere $p_{ij} = \\mathbb{P}\\left(S_t = j| S_{t-1} = i\\right)$. It just so happens that $\\hat{d}_j$ has a very simple closed form, in particular\n\\begin{align}\n\\nonumber\\mathbb{E}\\left(d_j\\right) &= \\sum_{m=1}^\\infty m p_{jj}^{m-1} \\left(1-p_{jj}\\right) \\\\\n\\nonumber&= \\left(1-p_{jj}\\right) \\lim_{n\\to\\infty} \\sum_{m=1}^{n} m\\, p_{jj}^{m-1} \\\\\n\\nonumber&= \\left(1-p_{jj}\\right) \\lim_{n\\to\\infty} \\frac{1}{p_{jj}} \\sum_{m=1}^{n} m\\, p_{jj}^m \\\\\n\\nonumber&= \\left(1-p_{jj}\\right) \\lim_{n\\to\\infty} \\frac{1-(n+1)p_{jj}^n+n p_{jj}^{n+1}}{(1-p_{jj})^2}\\\\\n\\nonumber&= \\left(1-p_{jj}\\right) \\frac{1}{(1-p_{jj})^2}\\\\\n\\label{eq:expdur}&= \\frac{1}{1-p_{jj}}.\n\\end{align}\n\nWe will use the previous expression later on, when we compare the state durations obtained in the present paper with those provided in \\cite{DiPMF16}. We will see that there is a significant difference in the state durations, showing that the models developed in this paper perform better than the one proposed in \\cite{DiPMF16} in modelling the time series of the Chicago Board Options Exchange Volatility Index, better known as VIX.\n\n\\section{Case Study}\\label{CaseStudy}\nOur case study is concerned with the application of the above theory to developing an indicator that plays a role similar to that of the VIX. In particular we use the set of S\\&P500 weekly prices, considering a time interval that runs from the 3rd of January, 2007 to the 29th of December, 2014. We picked this interval to include the sub-prime mortgage crash of 2008 as well as the subsequent period of relative calm. This choice allows us to analyse how our approach performs in both situations. 
We will show that our techniques improve the results stated in \\cite{DiPMF16}, where the model was very effective in periods of high volatility, but too smooth in periods of low volatility. Our results are summarised below with respect to both the {\\it Jump Diffusion Model}, defined in Section \\ref{JumpDiffusionModel}, and the \n{\\it $\\alpha$-Stable Model}, provided in Section \\ref{AlfaStableModel}.\n\n\\subsection{Jump Diffusion Model}\n\\label{jump_diffusion_case}\nFor the jump diffusion model we model the data as a zero-mean process in order to make the framework more parsimonious. We take the exponential distribution parameter $b$ to be equal to $40$, in order to make the contribution of each extra jump to the variance relatively small. This choice of $b$ allows for a finer analysis. We first present the histograms of the sampled variances, see Fig. \\ref{fig:variance_comparison}. \n\nAs we can see, the algorithm is rather accurate in sampling the variances. In particular, we recall that the theoretical posterior of the variances is an inverse-Gamma distribution, which is exactly what we can observe in the histograms. 
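The conjugate update behind these histograms can be sketched directly: in the no-jump case the posterior of $\\sigma_{1}^{2}$ is the inverse-Gamma stated earlier, and sampling it requires only standard tools. In the sketch below the data and the hyperparameters are illustrative, not those used in the paper:

```python
import numpy as np
from scipy.stats import invgamma

rng = np.random.default_rng(5)

# illustrative state-1 data and invGamma prior hyperparameters
y1 = rng.normal(0.0, 0.4, size=300)
mu1, a0, b0 = 0.0, 2.0, 0.1

# posterior: invGamma(n_1/2 + a0, (1/2) * sum((y - mu_1)^2) + b0)
shape = y1.size / 2 + a0
scale = 0.5 * np.sum((y1 - mu1) ** 2) + b0
draws = invgamma.rvs(shape, scale=scale, size=20_000, random_state=rng)
print(draws.mean(), scale / (shape - 1))   # sample vs analytic posterior mean
```

With enough data the posterior mean concentrates near the true variance ($0.16$ here), which is the behaviour visible in the reported histograms.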
\n\\begin{figure}[!h]\n\t\\includegraphics[width = \\columnwidth, height = .35\\paperheight]{\"variance_comparison\"}\n\t\\caption{ \\small{Comparison between the histograms of each of the variances belonging to the four states.} }\n\t\\label{fig:variance_comparison}\n\\end{figure}\n\nMoreover, in Table \\ref{tbl:variance1}, we report the point estimates of each variance value.\n\\begin{table}[h!]\n\t\\caption{Gaussian Variance Point Estimates}\n\t\\centering\n\t\\begin{tabular}{c|c}\n\t\t\\label{tbl:variance1}\n\t\tEstimator & Value \\\\\n\t\t\\hline\n\t\t$\\hat{\\sigma}_{1}^{2}$ & $0.155709$ \\\\\n\t\t$\\hat{\\sigma}_{2}^{2}$ & $0.2336135$ \\\\\n\t\t$\\hat{\\sigma}_{3}^{2}$ & $0.254559$ \\\\\n\t\t$\\hat{\\sigma}_{4}^{2}$ & $0.2816716$\n\t\\end{tabular}\n\t\\label{tab:variance1}\n\\end{table}\n\nConcerning the distribution of the jumps, we refer to Fig. \\ref{fig:jump_comparison}, which highlights that the jumps, like the Gaussian variances, have an amplitude that increases with the state number. The first state almost never has jumps. This means that, when we are in the first state, the description of the variance of the observations is left mainly to the parameter $\\sigma_{1}^{2}$. For the other states we see an increase in non-zero jumps and also an increase in the average number of jumps, as we go up in states.\n\n\\begin{figure}[h!]\n\t\\includegraphics[width = \\columnwidth, height = .35\\paperheight]{\"jump_histograms\"}\n\t\\caption{ \\small{Comparison between the histograms of the jumps in each of the four states.} }\n\t\\label{fig:jump_comparison}\n\\end{figure} \n\nNoticing how close the sample variances $\\hat{\\sigma}_{2}^{2}, \\hat{\\sigma}_{3}^{2},\\ \\text{and}\\ \\hat{\\sigma}_{4}^{2}$ are in value (Table \\ref{tab:variance1}), we see that the distinction between the states is in the jumps. This is exactly the result that we aimed to obtain by defining this model. 
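Once a transition matrix has been estimated, the expected durations of Subsection \\ref{subsubsec:duration} are read off its diagonal via $\\hat{d}_j = 1/(1-\\hat{p}_{jj})$; a sketch with an illustrative matrix (not the estimates reported below):

```python
import numpy as np

# illustrative transition matrix (rows sum to one)
P_hat = np.array([
    [0.60, 0.25, 0.10, 0.05],
    [0.50, 0.30, 0.15, 0.05],
    [0.15, 0.10, 0.70, 0.05],
    [0.15, 0.10, 0.05, 0.70],
])

# expected duration of state j: 1 / (1 - p_jj), cf. eq. (eq:expdur)
durations = 1.0 / (1.0 - np.diag(P_hat))
print(durations.round(4))   # expected durations: 2.5, 1.4286, 3.3333, 3.3333
```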
In \\eqref{eq:probability_matrix_jump}, we report the matrix with the transition probabilities of the model, while eq. \\eqref{eq:transition_matrix_jump} reports the average number of transitions from one state to another. In particular, the element $\\hat P_{ij}$ is the transition probability $\\hat p_{ij}$ and $\\hat M_{ij}$ is the number of transitions from state $i$ to state $j$.\n\n\\begin{equation}\n\\label{eq:probability_matrix_jump}\n\\hat P = \\begin{bmatrix}\n0.6147 & 0.2522 & 0.0894 & 0.0435\\\\\n0.5326 & 0.3073 & 0.1082 & 0.0518\\\\\n0.1491 & 0.0805 & 0.7250 & 0.0453\\\\\n0.1671 & 0.0904 & 0.0487 & 0.6936\n\\end{bmatrix}\n\\end{equation} \n\n\\begin{equation}\n\\label{eq:transition_matrix_jump}\n\\hat M = \\begin{bmatrix}\n113.62 & 46.61 & 16.53 & 8.04 \\\\\n46.35 & 26.74 & 9.41 & 4.51 \\\\\n15.50 & 8.37 & 75.37 & 4.71 \\\\\n9.38 & 5.08 & 2.74 & 38.97\n\\end{bmatrix}\n\\end{equation} \n\nThe last quantities we want to list before comparing our indicator with the VIX index are the expected values of the state durations. Using eq. \\eqref{eq:expdur}, we obtain the following result\n\\begin{align}\n\\label{duration1}\n\\hat{d_1} = 2.6501, \\quad \\hat{d_2}=1.4227\\;,\\\\\n\\nonumber \\hat{d_3}=3.6518, \\quad \\hat{d_4}=3.3432\\;,\n\\end{align}\nwhich is consistent with our choice of exploiting the Dirichlet prior to make the model more persistent in states 3 and 4, while allowing more transitions between states 1 and 2.\n\n\\begin{remark}\n\tIn~\\cite{DiPMF16} the authors underlined that one of the main issues the proposed model could not fix was the over-smoothing effect. In particular, they obtained the following durations: $\\hat{d_1}=45.2489$, $\\hat{d_2}=23.9808$, $\\hat{d_3}=26.4550$, $\\hat{d_4}=3.3356$. One can notice that the duration of the highest state is conserved while the others changed, which was exactly our goal. 
Moreover, such results clearly show significant progress towards the solution of the aforementioned issue, since duration can be used as a quantitative indicator of smoothness.\n\\end{remark}\n\nFinally, we compare our results with the VIX index data. In particular, our volatility indicator, denoted by $I_{t}^{J}$, indicates the expected standard deviation of the data at time $t$, namely\n\\begin{equation}\nI_{t}^{J} := \\sqrt{\\sum\\limits_{j=1}^{4} \\mathbb{P} (S_{t}=j|\\psi_{t}) \\left(\\hat{\\sigma}_{j}^{2} + \\frac{\\hat{N}_{j} \\left( \\hat{N}_{j}+1 \\right)}{40^{2}} \\right)} \\;, \n\\end{equation} \nwhere $J$ stands for {\\it jump}. We also present a visual comparison in the figure below.\n\n\\begin{figure}[!h]\n\t\\includegraphics[width = \\columnwidth, height = .25\\paperheight]{\"jump_b40\"}\n\t\\caption{ \\small{Visual comparison between the VIX and the volatility indicator $I_{t}^{J}$. We have applied a linear scaling function to $I_{t}^{J}$.} }\n\\end{figure}\n\nSince we want to compare this model to the $\\alpha-$stable distribution model that follows, we need some way to quantify what a {\\it good} estimate of the volatility is. Following a standard approach, we take into consideration the sum of the squared differences between our volatility indicator and the VIX, namely we define \n\\begin{equation}\n\\label{eq:squares1}\n\\mathbf{S}^{J} = \\sum\\limits_{t=1}^{T} \\left( I_{t}^{J} - y_{t, \\text{\\tiny VIX}} \\right)^{2}\\;,\n\\end{equation}\nwhere $y_{t, \\text{\\tiny VIX}}$ is the value of the VIX at time $t$. Using eq. \\eqref{eq:squares1}, we obtain $\\mathbf{S}^{J}=26.23$. In the next section we analyse the performance of the second model.\n\n\\subsection{$\\alpha$-Stable Distribution Model}\n\\label{alpha_stable_case}\nIn this model we do not have to sample the jumps $N_{t}$, or their parameters $\\theta_{S_{t}}$. 
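For concreteness, the jump diffusion indicator $I_{t}^{J}$ and the score $\\mathbf{S}^{J}$ of eq. \\eqref{eq:squares1}, used in the comparison above, can be sketched as follows; the filtered probabilities, the average jump counts $\\hat N_j$, and the VIX values in the example are hypothetical stand-ins, while the variances come from Table \\ref{tab:variance1}.

```python
import numpy as np

def jump_indicator(probs, sigma2_hat, N_hat, b=40.0):
    """I_t^J: expected standard deviation of the data at time t.
    probs : (T, 4) filtered probabilities P(S_t = j | psi_t), rows sum to 1."""
    per_state_var = sigma2_hat + N_hat * (N_hat + 1.0) / b**2
    return np.sqrt(probs @ per_state_var)

def score(indicator, vix):
    """S^J of eq. (squares1): sum of squared deviations from the VIX."""
    return float(np.sum((indicator - vix) ** 2))

sigma2_hat = np.array([0.155709, 0.2336135, 0.254559, 0.2816716])
N_hat = np.array([0.0, 1.2, 2.5, 4.0])          # hypothetical jump counts
probs = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])        # two illustrative time steps
I_J = jump_indicator(probs, sigma2_hat, N_hat)
print(I_J, score(I_J, np.array([0.4, 0.6])))
```

With full weight on state 1 the indicator reduces to $\\hat\\sigma_1$, since that state carries essentially no jumps.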
We use this decrease in the number of parameters to infer the mean $\\mu_{S_{t}}$ without making the simulations too cumbersome, or inaccurate. In Fig. \\ref{fig:stable_variance_histograms}, we present the histograms of the values $\\lambda \\gamma_{j}^{2},\\ j \\in \\{1,2,3,4\\}$. We choose these values, instead of just $\\gamma_{j}^{2}$, since they indicate the variance of $y_{t}|\\lambda$. Related estimates can be found in Table \\ref{tbl:variance2}. \n\n\\begin{figure}[!h]\n\t\\includegraphics[width = \\columnwidth, height = .3\\paperwidth]{\"stable_variance_histograms\"}\n\t\\caption{ \\small{The histograms of the variances $\\lambda \\gamma_{S_{t}}^{2}$.} }\n\t\\label{fig:stable_variance_histograms}\n\\end{figure}\n\nLooking at Table \\ref{tbl:variance2}, we notice that there is a much bigger difference between the last three scale estimators than there is between the last three Gaussian variance estimators. The latter result is due to the fact that the present model lacks a jump component, therefore all the volatility has to be {\\it explained} by means of the scale parameters. \n\n\\begin{table}[h!]\n\t\\caption{Parameter Point Estimates}\n\t\\centering\n\t\n\t\\begin{tabular}{c|c}\n\t\t\\label{tbl:variance2}\n\t\tEstimator & Value \\\\\n\t\t\\hline\n\t\t$\\hat{\\lambda}$ & $0.00390925$ \\\\\n\t\t$\\hat{\\gamma}_{1}^{2}$ & $0.1017084$ \\\\\n\t\t$\\hat{\\gamma}_{2}^{2}$ & $0.1841539$ \\\\\n\t\t$\\hat{\\gamma}_{3}^{2}$ & $0.3254673$ \\\\\n\t\t$\\hat{\\gamma}_{4}^{2}$ & $0.9113278$\n\t\\end{tabular}\n\\end{table}\n\nWe would like to underline that the simulations of $\\lambda$ are not robust. In particular, there is a very low acceptance rate in the exploited Metropolis--Hastings algorithm. We explain why this happens through an example; all the notation used in what follows is the same as in subsection \\ref{sbsec:metropolis_hastings}.\n\nWe are in the special case of the Metropolis--Hastings algorithm where $\\lambda = \\boldsymbol \\theta$ is the only parameter. 
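For a symmetric random-walk proposal, the Metropolis--Hastings acceptance probability reduces to $\\min\\left(1, \\pi(\\lambda_{new})/\\pi(\\lambda_{old})\\right)$. The sketch below reproduces the flavour of the argument with a stand-in posterior density concentrated at small values; the numbers it prints therefore differ from those reported in the text.

```python
import numpy as np

def log_target(lam):
    # Stand-in log-posterior for lambda: a Gamma(2, rate=500) density with
    # mode 0.002, chosen only to mimic a posterior concentrated at small
    # values; the paper's actual posterior is different.
    return np.log(lam) - 500.0 * lam

def acceptance(lam_old, lam_new):
    """Metropolis--Hastings acceptance probability for a symmetric proposal."""
    return min(1.0, float(np.exp(log_target(lam_new) - log_target(lam_old))))

# Candidates just below 0.01 move towards the mode and are always accepted;
# candidates above 0.01 are accepted with probability < 1, so the chain
# drifts towards zero, as described in the text.
for lam_new in (0.0097, 0.0099, 0.0101, 0.0103):
    print(lam_new, acceptance(0.01, lam_new))
```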
If we take $\\lambda^{j-1}=0.01$, where $\\lambda^{j-1}$ stands for the $(j-1)$-st sample of $\\lambda$, hence it is not its $(j-1)$-st power, and plot the graph of the \\emph{nonzero acceptance probabilities} around that value, see Fig. \\ref{fig:acceptance_probabilities}, we clearly see where the simulations fail to be robust. \n\\begin{figure}[!h]\n\t\\includegraphics[width = \\columnwidth, height=.25 \\paperheight ]{\"acceptance_probabilities\"}\n\t\\caption{ \\small{Acceptance probabilities of candidates around the value 0.01.} }\n\t\\label{fig:acceptance_probabilities}\n\\end{figure}\nIn particular, if $\\lambda_{new} \\in \\{0.0013, 0.0014, \\ldots, 0.0099\\}$, then its acceptance probability is 1, namely we automatically take $\\lambda^{j}=\\lambda_{new}$. There are only three values that $\\lambda_{new}$ can take that are larger than $0.01$, i.e. $0.0101,\\ 0.0102,\\ \\text{and}\\ 0.0103$, with acceptance probabilities $0.23,\\ 0.053,\\ \\text{and}\\ 0.012$, respectively. The latter implies that the samples of $\\lambda$ will converge towards zero, which poses a numerical problem: since $V(y_{t}|\\lambda) = \\lambda \\gamma_{S_{t}}^{2}$, the values of $\\gamma_{S_{t}}$ will blow up in order to compensate. At some point the values of $\\gamma_{S_{t}}$ become too large to be handled numerically, and the simulation crashes. We can guard against small values of $\\lambda$, e.g.\\ by bounding it from below by some small constant, but then, once $\\lambda$ reaches such a value, the algorithm rarely accepts a larger value as a sample, hence leading to the problem of too few samples of $\\lambda$ being accepted. Despite the aforementioned shortcoming, the proposed model still works quite effectively, as we will see below. \n\nIn Fig. 
\\ref{fig:mean_histograms}, we present the histograms of the different means, while Table \\ref{tbl:means} reports their point estimates.\n\\begin{table}[!h]\n\t\\caption{Mean Point Estimates}\n\t\\centering\n\t\\begin{tabular}{c|c}\n\t\t\\label{tbl:means}\n\t\tEstimator & Value \\\\\n\t\t\\hline\n\t\t$\\hat\\mu_{1}$ & $0.007381561$ \\\\\n\t\t$\\hat\\mu_{2}$ & $0.008574574$ \\\\\n\t\t$\\hat\\mu_{3}$ & $0.002191889$ \\\\\n\t\t$\\hat\\mu_{4}$ & $-0.02942247$\n\t\\end{tabular}\n\\end{table}\n\nOne thing that stands out in the mean point estimates is the negative sign of the mean of the fourth state. The latter should not come as a surprise, since this state refers to the highest volatility regime in the time series, namely the one related to the mortgage crisis of 2008. We recall that, during a severe financial crisis, most price movements are downward, resulting in a negative drift.\n\n\\begin{figure}[!h]\n\t\\includegraphics[width = \\columnwidth, height=.25 \\paperheight]{\"mean_comparisons\"}\n\t\\caption{ \\small{The histograms of the means $\\mu_{S_{t}}$.} }\n\t\\label{fig:mean_histograms}\n\\end{figure}\n\nIn what follows, we present the transition probability matrix, see eq. \\eqref{eq:probability_matrix_stable}, and the transition matrix, see eq. \\eqref{eq:transition_matrix_stable}, namely\n\n\\begin{equation}\n\\label{eq:probability_matrix_stable}\n\\hat P = \\begin{bmatrix}\n0.3492 & 0.2719 & 0.2102 & 0.1686\\\\\n0.3655 & 0.2728 & 0.1993 & 0.1622\\\\\n0.0157 & 0.0144 & 0.9518 & 0.0179\\\\\n0.0264 & 0.0222 & 0.0220 & 0.9292\n\\end{bmatrix}\n\\end{equation} \n\n\\begin{equation}\n\\label{eq:transition_matrix_stable}\n\\hat M = \\begin{bmatrix}\n29.10 & 15.65 & 7.75 & 3.62\\\\\n16.16 & 13.76 & 5.82 & 2.95\\\\\n6.61 & 5.54 & 270.39 & 4.17\\\\\n4.02 & 3.49 & 3.22 & 39.68\n\\end{bmatrix}\n\\end{equation} \n\nAs in the previous case, we note the expected values of the state durations obtained using eq. 
\\eqref{eq:expdur}, namely\n\\begin{align}\n\\label{duration2}\n\\hat{d_1} &= 1.5365, \\quad \\hat{d_2}=1.3751\\;,\\\\\n\\nonumber \\hat{d_3}&=20.746, \\quad \\hat{d_4}=14.124\\;.\n\\end{align}\nNote that the difference between the results in \\eqref{duration2} and those in \\eqref{duration1} is significant. To better explain this difference, let us define the volatility indicator within the present framework, and make a comparison with the VIX index. In particular, we define a second volatility indicator, denoted by $I_{t}^{\\alpha}$, which, analogously to the previous case, stands for the expected standard deviation of the data at time $t$, i.e.\n\\begin{equation}\nI_{t}^{\\alpha} = \\sqrt{ \\hat{\\lambda} \\sum\\limits_{j=1}^{4} \\mathbb{P} (S_{t}=j|\\psi_{t}) \\hat{\\gamma}_{j}^{2} } \\;.\n\\end{equation} \nIn Fig. \\ref{fig:stable_comparison}, we can see a visual comparison between the two quantities.\n\\begin{figure}[!h]\n\t\\includegraphics[width = \\columnwidth, height = .25\\paperheight]{\"stable_vix\"}\n\t\\caption{ \\small{Visual comparison between the VIX and the volatility indicator $I_{t}^{\\alpha}$. We have applied a linear scaling function to $I_{t}^{\\alpha}$.} }\n\t\\label{fig:stable_comparison}\n\\end{figure}\nUsing eq. \\eqref{eq:squares1}, we obtain $\\mathbf{S}^{\\alpha}=39.69$, which is a significant increase over $\\mathbf{S}^{J}$. This leads us to conclude that the estimate obtained from the jump diffusion model is closer to the VIX than the one obtained from the $\\alpha$-stable distribution model.\n\nWe now briefly explain the difference between the results in \\eqref{duration1} and \\eqref{duration2}. The expected duration of state 1 falls, while at the same time there is a drastic increase in the expected durations of states 3 and 4. Looking at the way the estimators behave in Fig. \\ref{fig:jump_comparison} and in Fig. 
\\ref{fig:stable_comparison}, we note that the estimator obtained from the jump diffusion model is much more {\\it jagged}, because of the regular transitions from one state to another, while the one obtained from the stable distribution model is much smoother, seeing as the time series tends to stay in the high volatility states much longer. Furthermore, while the stable distribution model has to place observations that should belong to the low volatility states into the high volatility ones, in order to mitigate the variance underestimation problem mentioned in \\cite{DiPMF16}, the jump diffusion model can simply add a few jumps to make up for the missing variance. This is why, despite its attempts to increase the variance by staying in the higher states, we see the indicator of the stable model {\\it drooping} and underestimating the low volatility, while, in this situation, the jump model stays much closer to the VIX.\n\n\\section{Conclusion and Future Developments}\nIn the present paper we have presented two novel techniques to implement a Markov Switching Model (MSM) type approach to non-stationary data, namely a jump diffusion-MSM and an $\\alpha-$stable-MSM. \nIn Sec. \\ref{jump_diffusion_case}, we have shown that the first one is very effective in mimicking the VIX index; moreover, its implementation can be carried out smoothly without sacrificing its theoretical peculiarities, see Sec. \\ref{JumpDiffusionModel}.\nA slightly different situation concerns the implementation of the second approach, see Sec. \\ref{alpha_stable_case}: even if the $\\alpha$-stable-MSM approach turns out to be quite effective, we have to contend with sampling problems for one of its parameters, implying that the computational results do not behave exactly the way they are meant to. 
\n\nWe would like to underline that the achieved tractability of the jump diffusion model is a crucial point, and it shows how such a technique can be fruitfully used to model any kind of time series presenting pronounced tails, not just financial ones.\n\nAs far as the issues of over-smoothing and excessive state duration are concerned, which have been stated in \\cite{DiPMF16} as the main deficiencies of the MSM approach to financial data, we have shown that, using the models presented here, the state durations can be significantly reduced, see subsection \\ref{jump_diffusion_case}, and the problem of over-smoothing can be solved, see subsections \\ref{jump_diffusion_case} and \\ref{alpha_stable_case}.\nConcerning future developments, we aim at improving our jump diffusion-MSM model by considering, instead of a simple first-order Markov transition law, a $k$-th order Markov transition law. Other possibilities consist in dealing with a transition law that is state-duration dependent, or allowing the law to depend on other observable quantities used as indicators of the behaviour of the economy, e.g., real personal income, industrial production index, rate of private credit growth, etc. 
\n\\linebreak \\\\\n\\textbf{Acknowledgements:} We would like to sincerely thank Matteo Frigo for his insightful comments and his fundamental suggestions, which have helped us a lot in preparing the present work, especially with respect to the duration analysis.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzmfxh b/data_all_eng_slimpj/shuffled/split2/finalzzmfxh new file mode 100644 index 0000000000000000000000000000000000000000..fc1ed484687555a233f94561f0f4b0715621210b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzmfxh @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:intro}\nIn this paper we consider efficient computational approaches to compute approximate\nsolutions of a linear inverse problem,\n\\begin{equation}\nb = Ax_\\text{true} + \\eta, \\quad \\quad A\\in\\mathbb{R}^{m\\times n},\\label{eq:inverse_problem}\n\\end{equation}\nwhere $A$ is a known matrix, \nvector $b$ represents known acquired data, $\\eta$ represents noise, and vector $x_\\text{true}$ \nrepresents the unknown quantity that needs to be approximated.\nWe are particularly interested in imaging applications where $x_\\text{true} \\geq 0$ and $Ax_\\text{true} \\geq 0$.\nAlthough this basic problem has been studied extensively \n(see, for example, \\cite{Engl2000Regularization,Hansen2010Discrete,Mueller2012Linear,Vogel2002Computational}\nand the references therein), the noise is typically assumed to come from a single source (or to be\nrepresented by a single statistical distribution) and the data to contain no outliers.\nIn this paper we focus on a practical situation that arises in many imaging applications, and for which relatively\nlittle work has been done, namely when the noise is comprised of a mixture of Poisson and\nGaussian components {\\em and}\nwhen there are outliers in the measured data. 
While some research has been done\non the two topics separately (i.e., mixed Poisson--Gaussian noise models {\\em or} \noutliers in measured data), to our knowledge no work has been done when the measured\ndata contains both issues. In the following, we review some of the approaches used to handle each of the issues.\n\\subsubsection*{Poisson--Gaussian noise}\nA Poisson--Gaussian statistical model for the measured data takes the form\n\\begin{equation}\nb_i = n_\\text{obj}(i) + g(i), \\ i = 1,\\ldots,m, \\quad \\ n_\\text{obj}(i) \\sim \\text{Pois}([Ax_\\text{true}]_i), \\ g(i) \\sim \\mathcal{N}(0,\\sigma^2), \\label{eq:noise}\n\\end{equation}\nwhere $b_i$ is the $i$th component of the vector $b$ and $[Ax_\\text{true}]_i$ the $i$th component of the true noise-free \ndata $Ax_\\text{true}$. We assume that the two random variables $n_\\text{obj}(i)$ and $g(i)$ are independent. \nThis mixed noise model arises in many important applications, such as when using charge-coupled device (CCD) arrays,\nx-ray detectors, and infrared \nphotometers \\cite{Bardsley2003nonnegatively,Gravel2004method,Luisier2011Image,Makitalo2013Optimal,Snyder1993Image}.\nThe Poisson part (sometimes referred to as shot noise) can arise\nfrom the accumulation of photons over a detector, and the Gaussian part is usually due to {\\em read-out} noise from\na detector, which can be generated by thermal fluctuations in interconnected electronics.\n\nSince the log-likelihood function for the mixed Poisson--Gaussian model (\\ref{eq:noise}) has an infinite series representation \\cite{Snyder1993Image}, we assume a simplified model, where both random variables have the same type of distribution. 
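A minimal simulation of the mixed noise model \\eqref{eq:noise} makes the moment structure concrete; the array `Ax_true` below is an arbitrary stand-in for the noise-free data.

```python
import numpy as np

rng = np.random.default_rng(1)

def mixed_noise(Ax_true, sigma):
    """Simulate b_i = n_obj(i) + g(i) with n_obj ~ Pois([Ax_true]_i) and
    g ~ N(0, sigma^2), the two components drawn independently."""
    return rng.poisson(Ax_true) + rng.normal(0.0, sigma, size=Ax_true.shape)

Ax_true = np.full(100_000, 50.0)   # arbitrary noise-free test data
b = mixed_noise(Ax_true, sigma=5.0)
# Independence gives mean [Ax_true]_i = 50 and
# variance [Ax_true]_i + sigma^2 = 75 for each sample.
print(b.mean(), b.var())
```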
\nThere are two main approaches one can take to generate a simplified model.\nThe first approach is to add $\\sigma^2$ to each component of the vector $b$, and from (\\ref{eq:noise}) it then follows that\n\\begin{equation}\n\\mathbb{E}(b_i + \\sigma^2) = [Ax_\\text{true}]_i + \\sigma^2 \\quad \\text{and} \\quad \\text{var}(b_i + \\sigma^2) = [Ax_\\text{true}]_i + \\sigma^2. \\label{eq:poiss_approx_noise}\n\\end{equation}\nFor large $\\sigma$, the Gaussian random variable $g(i) + \\sigma^2$ is well-approximated by a Poisson random variable with the Poisson parameter $\\sigma^2$, and therefore $b_i + \\sigma^2$ is also well approximated by a Poisson random variable with the Poisson parameter $[Ax_\\text{true}]_i + \\sigma^2$. The data fidelity function corresponding to the negative Poisson log-likelihood then has the form\n\\begin{equation}\n\\sum_{i=1}^m ([Ax]_i + \\sigma^2) - (b_i +\\sigma^2)\\,\\log([Ax]_i + \\sigma^2);\\label{eq:poisson}\n\\end{equation}\nsee also \\cite{Snyder1993Image}. An alternative approach is to approximate the true negative log-likelihood by a weighted least-squares function, where the weights correspond to the measured data, i.e.,\n\\begin{equation}\n \\sum_{i=1}^m \\frac{1}{2}\\left(\\frac{[Ax]_i - b_i}{\\sqrt{b_i + \\sigma^2}}\\right)^2;\\label{eq:wls_basic}\n\\end{equation}\nsee \\cite[Sec. 1.3]{Hansen2013Least}. \nA more accurate approximation can be achieved by replacing the measured data by the computed data \n(which depends on $x$), i.e., replace the fidelity function (\\ref{eq:wls_basic}) by\n\\begin{equation}\n \\sum_{i=1}^m \\frac{1}{2}\\left(\\frac{[Ax]_i - b_i}{\\sqrt{[Ax]_i + \\sigma^2}}\\right)^2;\\label{eq:wls}\n\\end{equation}\nsee \\cite{Stagliano2011Analysis} for more details. \nAdditional additive Poisson noise (e.g., background emission) can be incorporated into the model in a straightforward way. 
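The three fidelity functions \\eqref{eq:poisson}, \\eqref{eq:wls_basic}, and \\eqref{eq:wls} are cheap to evaluate; a minimal sketch (the Poisson variant is written up to terms that do not depend on $x$):

```python
import numpy as np

def poisson_fidelity(Ax, b, sigma2):
    """Shifted-Poisson negative log-likelihood, eq. (poisson)."""
    u = Ax + sigma2
    return float(np.sum(u - (b + sigma2) * np.log(u)))

def wls_fidelity(Ax, b, sigma2, weight_by_data=True):
    """Weighted least squares: eq. (wls_basic) with weights from the
    measured data b, or eq. (wls) with weights from the computed data Ax."""
    w = (b + sigma2) if weight_by_data else (Ax + sigma2)
    return float(0.5 * np.sum((Ax - b) ** 2 / w))

# Both weighted variants vanish when the computed data match the
# measurements exactly.
Ax = b = np.array([10.0, 20.0, 30.0])
print(wls_fidelity(Ax, b, 4.0), wls_fidelity(Ax, b, 4.0, weight_by_data=False))
```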
\n\\subsubsection*{Outliers}\nFor data corrupted solely with Gaussian noise, i.e., \n\\begin{equation}\nb_i = [Ax]_i + g(i), \\ i = 1,\\ldots,m, \\quad g(i) \\sim \\mathcal{N}(0,\\sigma^2), \\label{eq:gausssian_noise}\n\\end{equation} employing the negative log-likelihood leads to the standard least-squares functional\n\\begin{equation}\n \\sum_{i=1}^m \\frac{1}{2}\\left([Ax]_i - b_i\\right)^2.\\label{eq:ls}\n\\end{equation} \nIt is well known, however, that a computed solution based on least squares is not robust if outliers occur, meaning that even a small number of components with gross errors can cause a severe deterioration of our estimate. Robustness of the least squares fidelity function can be achieved by replacing the loss function $\\frac{1}{2}t^2$ used in \\eqref{eq:ls} by a function $\\rho(t)$, i.e.,\n\\begin{equation}\n \\sum_{i=1}^m \\rho\\left([Ax]_i - b_i\\right),\\label{eq:robust}\n\\end{equation}where the function $\\rho$ is less stringent towards the gross errors and satisfies the following conditions:\n\\begin{enumerate}\n\\item $\\rho(t) \\geq 0$;\n\\item $\\rho(t) = 0 \\Leftrightarrow t =0$;\n\\item $\\rho(-t) = \\rho(t)$;\n\\item $\\rho(t^\\prime) \\geq \\rho(t)$, for $t^\\prime \\geq t \\geq 0$;\n\\end{enumerate}\nsee also \\cite[Sec. 1.5]{Hansen2013Least}. A list of the eight most commonly used loss functions $\\rho$ can be found in \\cite{Coleman1980system} or in MATLAB under \\texttt{robustfit}; some of them are discussed in Section~\\ref{sec:conv_anal}. Each of these functions also depends on a parameter $\\beta$ (see Section~\\ref{sec:beta}) defining the trade-off between robustness and efficiency. Note that if we use this robust regression approach, in order to reduce the influence of possible outliers, we always sacrifice some efficiency of the model.\n\n\\bigskip\n\nIn this paper, we focus on combining these two approaches to suppress the influence of outliers for data with mixed noise \\eqref{eq:noise}. 
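As a concrete example of a loss satisfying conditions 1--4 above, the Huber function is quadratic for small residuals and grows only linearly beyond a threshold; a minimal sketch, using the commonly quoted 95\\%-efficiency constant $\\beta = 1.345$ for Huber:

```python
import numpy as np

def huber(t, beta=1.345):
    """Huber loss: t^2/2 for |t| <= beta, linear growth beyond.
    beta = 1.345 is the usual 95%-efficiency tuning constant for Huber."""
    t = np.abs(np.asarray(t, dtype=float))
    return np.where(t <= beta, 0.5 * t**2, beta * (t - 0.5 * beta))

# Compared with t^2/2, a gross error contributes only linearly:
print(huber(0.5), huber(10.0))  # 0.125 and beta*(10 - beta/2), about 12.55
```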
Our work has been motivated by O'Leary \\cite{OLeary1990Robust}, and more recent work by Calef \\cite{Calef2013Iteratively}. The initial ideas of our work were first outlined in the \nconference paper \\cite{Kubinova2015Iteratively}.\n\nThe paper is organized as follows. In Section~\\ref{sec:robust} we introduce a data-fidelity function suitable for data corrupted both with mixed Poisson--Gaussian noise and outliers. In Section~\\ref{sec:reg_param} we propose a regularization parameter choice method for the regularization of the resulting inverse problem, and in Section~\\ref{sec:optim} we focus on the optimization algorithm and the solution of the linear subproblems. Section~\\ref{sec:num_exp} demonstrates the performance of the resulting method on image deblurring problems with various types of outliers.\n\nThroughout the paper, $D$ (or $D$ with an accent) denotes a general real diagonal matrix, and $e_i$ denotes the $i$th column of the identity matrix of a suitable size.\n\n\n\\section{Data-fidelity function}\\label{sec:robust}\nIn Section \\ref{sec:intro}, we reviewed the fidelity functions \\eqref{eq:poisson}, \\eqref{eq:wls_basic}, and \\eqref{eq:wls}, commonly used for problems with mixed Poisson--Gaussian noise, as well as the robust loss functions \\eqref{eq:robust} used to handle problems with Gaussian noise and outliers. Since we need to deal with both issues simultaneously here, we propose combining both approaches. 
More specifically, we combine a robust loss function with the weighted least squares problem \\eqref{eq:wls}, so that the data fidelity function becomes\n\\begin{equation}\n J(x) = \\sum_{i=1}^m \\rho\\left(\\frac{[Ax]_i - b_i}{\\sqrt{[Ax]_i + \\sigma^2}}\\right).\n\\label{eq:robust_wls}\n\\end{equation}\nIn the remainder of this section, we investigate the properties of the proposed data-fidelity function \\eqref{eq:robust_wls} and the choice of the robustness parameter $\\beta$, which is defined in the next subsection.\n\\subsection{Choice of the loss function -- convexity analysis} \\label{sec:conv_anal}\nFor ordinary least squares, the functions known under the names Huber, logistic, Fair, and Talwar, shown in Figure~\\ref{fig:loss_functions}, lead to an interval-wise convex data fidelity function, i.e., one with a positive-semidefinite Hessian, see \\cite{OLeary1990Robust}, which is favorable for Newton-type minimization algorithms. This, however, does not \nalways hold in our\ncase, where the weighted least squares formulation \\eqref{eq:robust_wls} has solution-dependent weights. \n\nTo see this, let us begin by denoting the components of the residual as $r_i \\equiv [Ax]_i - b_i$ and the \nsolution-dependent weights as $w_i \\equiv \\frac{1}{\\sqrt{[Ax]_i + \\sigma^2}}$. Then the gradient and the Hessian of \\eqref{eq:robust_wls} can be rewritten as\n\n\\begin{align}\n\\text{grad}_J(x) &= A^Tz, & z_i &= \\left(w^\\prime_ir_i + w_i\\right)\\rho^{\\prime}(w_ir_i);\\\\\n\\text{Hess}_J(x) &= A^TDA, & \\quad D_{ii} &= (w^{\\prime\\prime}_ir_i + 2w_i^{\\prime})\\rho^{\\prime}(w_ir_i) + (w^{\\prime}_i r_i + w_i)^2\\rho^{\\prime\\prime}(w_ir_i). \\label{eq:hess_and_grad}\n\\end{align}\nWe investigate the entries $D_{ii}$ in order to examine the positive semi-definiteness of the Hessian $\\text{Hess}_J(x)$. 
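In code, the quantities in \\eqref{eq:hess_and_grad} can be assembled directly: with $w_i = ([Ax]_i + \\sigma^2)^{-1/2}$ one has $w_i^\\prime = -\\frac{1}{2}([Ax]_i + \\sigma^2)^{-3/2}$ and $w_i^{\\prime\\prime} = \\frac{3}{4}([Ax]_i + \\sigma^2)^{-5/2}$. A sketch, where `rho_p` and `rho_pp` stand for $\\rho^\\prime$ and $\\rho^{\\prime\\prime}$ of the chosen loss:

```python
import numpy as np

def grad_hess_entries(Ax, b, sigma2, rho_p, rho_pp):
    """Vectors z and diag(D) of eq. (hess_and_grad) for
    J(x) = sum_i rho((Ax_i - b_i) / sqrt(Ax_i + sigma2)).
    The gradient is then A.T @ z and the Hessian A.T @ np.diag(D) @ A."""
    r = Ax - b                      # residual components r_i
    u = Ax + sigma2
    w = u ** -0.5                   # weights w_i
    wp = -0.5 * u ** -1.5           # w_i'
    wpp = 0.75 * u ** -2.5          # w_i''
    t = w * r
    z = (wp * r + w) * rho_p(t)
    D = (wpp * r + 2.0 * wp) * rho_p(t) + (wp * r + w) ** 2 * rho_pp(t)
    return z, D

# With zero residual, z vanishes and D is strictly positive for any loss
# with rho''(0) > 0 (here the standard loss t^2/2 is used for illustration).
Ax = b = np.array([5.0, 10.0])
z, D = grad_hess_entries(Ax, b, sigma2=1.0,
                         rho_p=lambda t: t, rho_pp=lambda t: np.ones_like(t))
print(z, D)
```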
Recall that $A^TDA$ is positive semi-definite if $D_{ii}\\geq 0$.\n\nAssuming $\\rho^{\\prime\\prime}\\geq 0$, the signs of the diagonal entries $D_{ii}$ in \\eqref{eq:hess_and_grad} are\n\\begin{equation}\n\\text{sign}(D_{ii}) = \\left(\\circlearound{+}\\cdot\\text{sign}(r_i) + 2\\circlearound{-}\\right)\\cdot\\text{sign}(\\rho^{\\prime}(w_ir_i)) + \\circlearound{+}\\circlearound{+}, \\label{eq:dii_sign}\n\\end{equation}\nwhere we have replaced some of the quantities in the\nexpression for $D_{ii}$ shown in equation \\eqref{eq:hess_and_grad} with the symbol $\\circlearound{-}$ when the\nvalue it replaces is always a negative number and with $\\circlearound{+}$ when the value it replaces is always nonnegative. We will now investigate all possible cases with respect to $\\text{sign}(\\rho^{\\prime}(w_ir_i))$:\n\\begin{itemize}\n\\item \\textit{Case 1: $\\rho^\\prime(w_ir_i)< 0$} \\\\\n$\\rho^\\prime(w_ir_i)< 0$ yields $r_i< 0$, and therefore $D_{ii} > 0$.\n\\item \\textit{Case 2: $\\rho^\\prime(w_ir_i)= 0$} \\\\\n$\\rho^{\\prime}(w_ir_i) = 0 $ yields $D_{ii} = 0$.\n\\item \\textit{Case 3: $\\rho^\\prime(w_ir_i)> 0$} \\\\\n Substituting for $w_i$ and $r_i$ in \\eqref{eq:hess_and_grad}, we obtain\n\\begin{align*}\\label{eq:dii}\n D_{ii} &= \\left(\\frac{3}{4}([Ax]_i-b_i)([Ax]_i+\\sigma^2)^{-5\/2} - ([Ax]_i+\\sigma^2)^{-3\/2}\\right)\\rho^{\\prime}(w_ir_i)\\\\ \n\\nonumber & \\quad + \\left(-\\frac{1}{2}([Ax]_i-b_i)([Ax]_i+\\sigma^2)^{-3\/2} + ([Ax]_i+\\sigma^2)^{-1\/2}\\right)^2\\rho^{\\prime\\prime}(w_ir_i).\n\\end{align*} \nFor $[Ax]_i\\gg b_i+\\sigma^2$, to achieve $D_{ii}\\geq 0$, \n\\[\n\\sqrt{[Ax]_i}\\cdot\\rho^{\\prime\\prime}(\\sqrt{[Ax]_i})\\gtrsim \\rho^{\\prime}(\\sqrt{[Ax]_i})\n\\]\nmust hold. 
This corresponds to \n\\[\n\\rho^{\\prime}(t) \\gtrsim t \\quad \\text{yielding} \\quad \\rho(t) \\gtrsim t^2\/2,\n\\]\ni.e., for large $[Ax]_i$, the loss function $\\rho$ has to grow at least quadratically.\n\\end{itemize}\n\nIn conclusion, for large $t$, the loss function $\\rho(t)$ has to be either constant or grow at least quadratically, which contradicts the idea of robust regression. Therefore, considering the functions from \\cite{Coleman1980system}, the only loss function $\\rho$ for which the data fidelity function (\\ref{eq:robust_wls}) has a positive semidefinite Hessian is the Talwar function:\n\\begin{equation}\n\\rho(t) = \\left\\{\\begin{array}{ll}\nt^2\/2, & |t|\\leq\\beta,\\\\\n\\beta^2\/2, & |t| > \\beta.\n\\end{array} \\right.\\label{eq:talwar}\n\\end{equation} \n\\subsection{Selection of the robustness parameter}\\label{sec:beta}\nValues of $\\beta$ yielding $95\\%$ asymptotic efficiency with respect to the standard loss function $\\frac{1}{2}t^2$ when the disturbances come from the unit normal distribution can again be found in \\cite{Coleman1980system}. For Talwar, the 95\\% efficiency tuning parameter is \n\\begin{equation}\n\\beta_{95} = 2.795.\\label{eq:beta_opt}\n\\end{equation} \nNote that in our specific case, the random variable inside the function $\\rho$ in (\\ref{eq:robust_wls}) is already rescaled to have unit variance, and therefore an approximately standard normal distribution. We may therefore apply the parameter $\\beta_{95}$ without any further rescaling based on estimated variance, which is usually required in the case of ordinary least squares with unknown noise variance. The Talwar function with $\\beta = \\beta_{95}$ is shown in Figure~\\ref{fig:rho_talwar}. 
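The Talwar loss \\eqref{eq:talwar} with $\\beta = \\beta_{95}$ is simple to implement:

```python
import numpy as np

BETA_95 = 2.795  # 95% efficiency tuning parameter for Talwar, eq. (beta_opt)

def talwar(t, beta=BETA_95):
    """Talwar loss, eq. (talwar): quadratic up to beta, constant beyond."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= beta, 0.5 * t**2, 0.5 * beta**2)

# Inside [-beta, beta] it coincides with the standard loss t^2/2, while a
# gross outlier contributes at most beta^2/2, no matter how large it is.
print(talwar(1.0), talwar(100.0))  # 0.5 and 0.5 * 2.795**2
```

This cap on the contribution of any single residual is precisely what makes the fidelity function insensitive to outliers.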
\n\\begin{figure}\n\\centering\n\\begin{subfigure}[b]{.4\\textwidth}\n\\centering\n\\includegraphics[width = .8\\textwidth]{loss_function_fair} \n\\caption{Fair}\n\\end{subfigure}\n\\begin{subfigure}[b]{.4\\textwidth}\n\\centering\n\\includegraphics[width = .8\\textwidth]{loss_function_huber} \n\\caption{Huber}\n\\end{subfigure}\n\n\\begin{subfigure}[b]{.4\\textwidth}\n\\centering\n\\includegraphics[width = .8\\textwidth]{loss_function_logistic} \n\\caption{logistic}\n\\end{subfigure}\n\\begin{subfigure}[b]{.4\\textwidth}\n\\centering\n\\includegraphics[width = .8\\textwidth]{loss_function_talwar} \n\\caption{Talwar}\\label{fig:rho_talwar}\n\\end{subfigure}\n\n\\caption{Loss functions Fair, Huber, logistic and Talwar for the tuning parameter $\\beta$ corresponding to 95\\% efficiency (solid line) together with the standard loss function $t^2\/2$ (dashed line).\n}\\label{fig:loss_functions}\n\\end{figure}\n\n\\subsection{Non-negativity constraints}\nIn many applications, such as imaging, the reconstruction will benefit from taking into account the prior information about the component-wise non-negativity of the true solution $x_\\text{true}$. Here, however, imposing non-negativity is not just a question of visual appeal; it also guarantees that the two estimates (\\ref{eq:poisson}) and (\\ref{eq:wls}) of the negative log-likelihood will provide similar results; see \\cite{Stagliano2011Analysis}. Therefore, the component-wise non-negativity constraint is an integral part of the resulting optimization problem. However, employment of the non-negativity constraint results in the need for more sophisticated optimization tools. 
One possible algorithm is discussed in Section~\\ref{sec:optim}.\n\n\\section{Regularization and selection of the regularization parameter}\\label{sec:reg_param}\nAs a consequence of noise and ill-posedness of the inverse problem (\\ref{eq:inverse_problem}), some form of regularization needs to be employed in order to achieve a reasonable approximation of the true solution $x_\\text{true}$. For computational convenience, we use Tikhonov regularization with a quadratic penalization term, i.e., we minimize a functional of the form\n\\begin{equation}\nJ_\\lambda(x) \\equiv \\sum_{i=1}^m \\rho\\left(\\frac{[Ax]_i - b_i}{\\sqrt{[Ax]_i + \\sigma^2}}\\right) + \\frac{\\lambda}{2}\\|Lx\\|^2, \\qquad x\\geq 0.\\label{eq:functional}\n\\end{equation}\nWe assume that a good regularization parameter $\\lambda$ with respect to $L$ is used, so that the penalty term is reasonably close to the prior and the residual is therefore close to the noise. In the case of robust regression, it is particularly important not to over-regularize, since this would lead to large residuals, and too many components of the data $b$ would be considered outliers. Methods for\nchoosing $\\lambda$ are discussed in this section.\n\n\\subsection{Morozov's discrepancy principle}\nSince the residual components are scaled, and for data without outliers we have the expected value\n \\begin{equation}\nE\\left\\{\\frac{1}{n}\\sum_{i=1}^{n}\\frac{([Ax]_i - b_i)^2}{[Ax]_i + \\sigma^2}\\right\\} = 1,\\label{eq:discrepancy_estimate}\n\\end{equation}\nan obvious choice would be to use Morozov's discrepancy principle \\cite{Morozov1966solution,Vogel2002Computational}. However, as reported in \\cite{Stagliano2011Analysis}, even without outliers, the discrepancy principle based on \\eqref{eq:discrepancy_estimate} tends to provide unsatisfactory reconstructions for problems with large signal-to-noise ratio. 
Therefore we will not consider this approach further.\n\n\\subsection{Generalized cross validation}\\label{sec:gcv}\nThe generalized cross validation (GCV) method \\cite{Golub1979Generalized}\\cite[chap. 7]{Vogel2002Computational} is derived from standard leave-one-out cross validation. To apply this method to linear Tikhonov regularization, one selects the regularization parameter $\\lambda$ that minimizes the GCV functional\n\\begin{equation}\n\\text{GCV}(\\lambda) = \\frac{n\\|r_\\lambda\\|^2}{(\\text{trace}(I-A_\\lambda))^2},\\label{eq:gcv_fun}\n\\end{equation}\nwhere $r_\\lambda = Ax_\\lambda - b = (A_\\lambda - I)b$ is the residual, $n$ is its length, and the influence matrix $A_\\lambda$ takes the form $A_\\lambda = A(A^TA + \\lambda L^TL)^{-1}A^T$. Here, due to the non-negativity constraints and the weights, the residual and the influence matrix have a more complicated form. An approximation of the influence matrix for problems with mixed noise, but without outliers, has been proposed in \\cite{Bardsley2009Regularization}. There the numerator of the GCV functional\ntakes the form $n\\|Wr_\\lambda\\|^2$ and the approximate influence matrix is\n\\begin{equation}\nA_\\lambda = WA(D_\\lambda(A^TW^2A + \\lambda L^TL)D_\\lambda)^{\\dagger}D_\\lambda A^TW,\\label{eq:gcv_fun_new} \n\\end{equation}\nwhere $W$ and $D_\\lambda$ are diagonal matrices\n\\begin{align}\nW_{ii} &=\n\\frac{1}{\\sqrt{[Ax_\\lambda]_i + \\sigma^2}};\n\\label{eq:W_ii_old}\\\\\n[D_\\lambda]_{ii} &= \\left\\{\n\\begin{array}{cc}\n1 & [x_\\lambda]_i > 0, \\\\ \n0 &\\text{otherwise,}\n\\end{array} \\right. \\nonumber\n\\end{align}\nand \\,${}^\\dagger$ denotes the Moore--Penrose pseudoinverse. The matrix $D_\\lambda$ only handles the non-negativity constraints, \nand can therefore be adopted directly. The matrix $W$ needs a special adjustment due to the change of the loss function to Talwar. 
The aim is to construct a matrix $W$ satisfying\n\\[\\|Wr_\\lambda\\|^2 = \\sum_{i=1}^m \\rho\\left(\\frac{[Ax_\\lambda]_i - b_i}{\\sqrt{[Ax_\\lambda]_i + \\sigma^2}}\\right).\\]\nSubstituting for $\\rho$ from the definition of the Talwar function \\eqref{eq:talwar}, we redefine the scaling matrix as\n\\begin{align}\nW_{ii} &\\equiv \\left\\{\n\\begin{array}{cc}\n\\frac{1}{\\sqrt{[Ax_\\lambda]_i + \\sigma^2}} & \\quad \\left\\vert\\frac{[Ax_\\lambda]_i - b_i}{\\sqrt{[Ax_\\lambda]_i + \\sigma^2}}\\right\\vert\\leq \\beta, \\\\ \n\\frac{\\beta}{[Ax_\\lambda]_i - b_i} &\\text{otherwise.}\n\\end{array} \\right.\\label{eq:W_ii}\n\\end{align}\n\nIn order to make the evaluation of \\eqref{eq:gcv_fun_new} feasible for large-scale problems, we approximate the trace of a matrix using the random trace estimation \\cite{Hutchinson1990stochastic,Vogel2002Computational} as $\\text{trace}(M)\\approx v^TMv$, where the entries of $v$ take values $\\pm 1$ with equal probability. Applying the random trace estimation to \\eqref{eq:gcv_fun_new}, we obtain\n\\[\n(\\text{trace}(I-A_\\lambda))^2 \\approx (v^Tv - v^TA_\\lambda v)^2.\n\\]\nFinally, $A_\\lambda v$ is approximated by $WAy$, with $y$ obtained by applying a truncated conjugate gradient iteration to\n\\begin{equation}\n(D_\\lambda(A^TW^2A + \\lambda L^TL)D_\\lambda)y = D_\\lambda A^TWv.\\label{eq:gcv_lin_syst}\n\\end{equation}\n\n\\section{Minimization problem}\\label{sec:optim}\n\nIn this section we discuss numerical methods to compute a minimum of \\eqref{eq:functional}.\nWe consider the incorporation of the non-negativity constraint and the solution of the linear subproblems, including the proposal of a preconditioner. \n\n\\subsection{Projected Newton's method}\nVarious methods for constrained optimization have been developed over the years; some related to image deblurring can be found in \\cite{Bardsley2003nonnegatively,Bonettini2009scaled,Hanke2000Quasi,More1991solution,Nagy2000Enforcing}. 
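To fix ideas before presenting the method we actually use, the simplest member of this family -- projected gradient descent for a non-negativity-constrained quadratic -- can be sketched in a few lines (a minimal NumPy illustration on a synthetic toy problem; this is not the method used in our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 1e-2

def obj(x):
    # Tikhonov-regularized least-squares objective
    return 0.5 * np.sum((A @ x - b) ** 2) + 0.5 * lam * np.sum(x ** 2)

def grad(x):
    return A.T @ (A @ x - b) + lam * x

# step size 1/L, with L the Lipschitz constant of the gradient
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)

x = np.zeros(10)
for _ in range(500):
    # gradient step followed by projection onto the feasible set x >= 0
    x = np.maximum(x - step * grad(x), 0.0)
```

Each iteration takes a gradient step and projects back onto the non-negative orthant; the projected Newton method described next replaces the fixed gradient step with a Newton direction computed by projected PCG.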
\nFor our computations, we chose a projected Newton's method\\footnote{In \\cite{Haber2015Computational}, the method was derived as the Projected Gauss--Newton method. Here, since the evaluation of the Hessian does not represent a computational difficulty, we use it as a variant of Newton's method. Therefore, in the remainder of the text, the method is referred to as the Projected Newton's Method.}, combined with projected PCG to compute the search direction in each step, see \\cite[sec. 6.4]{Haber2015Computational}. The convenience of this method lies in the fact that the projected PCG does not require any special form of the preconditioner and a generic conjugate gradient preconditioner can be employed. Besides lower bounds, upper bounds on the reconstruction can also be enforced. For completeness, we include the projected Newton method in Algorithm~\\ref{alg:projGNCG}, and projected PCG in\nAlgorithm~\\ref{alg:projPCG}.\n\n\n\\begin{algorithm}\n\\caption{Projected Newton's method \\cite{Haber2015Computational}}\n\\label{alg:projGNCG}\n\\begin{algorithmic}\n\\STATE{$k = 0$}\n\\WHILE{not converged} \n\\STATE{$\\text{Active} = (x^{(k)} \\leq 0$)} \n\\STATE{$g = \\text{grad}_{J_\\lambda}(x^{(k)})$}\n\\STATE{$H = \\text{Hess}_{J_\\lambda}(x^{(k)})$}\n\\STATE{$M = \\texttt{prec}(H)$} \\COMMENT{setup preconditioner for the Hessian}\n\\STATE{$s = \\texttt{projPCG}(H,-g,\\text{Active},M)$} \\COMMENT{compute the search direction for inactive cells}\\STATE{$g_a = g(\\text{Active})$} \n\\IF{$\\max(\\text{abs}(g_a)) > \\max(\\text{abs}(s))$} \n\\STATE {$g_a = g_a\\cdot\\max(\\text{abs}(s))\/\\max(\\text{abs}(g_a))$} \\COMMENT{rescaling needed} \\ENDIF\n\\STATE{$s(\\text{Active}) = g_a$} \\COMMENT{take gradient direction in active cells}\n\\STATE{$x^{(k+1)} = \\texttt{linesearch}(s,x^{(k)},J_\\lambda,\\text{grad}_{J_\\lambda})$}\n\\STATE{$k = k+1$}\n\\ENDWHILE\n\\RETURN $x^{(k)}$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm}[H]\n\\caption{Projected PCG 
\\cite{Haber2015Computational}}\n\\label{alg:projPCG}\n\\begin{algorithmic}\n\\STATE{input: $A$, $b$, Active, $M$}\n\\STATE{$x_0 = 0$}\n\\STATE{$D_\\mathcal{I} = \\text{diag}(1-\\text{Active})$} \\COMMENT{projection onto inactive set}\n\\STATE{$r_0 = D_\\mathcal{I}b$}\n\\STATE{$z_0 = D_\\mathcal{I}(M^{-1}r_0)$}\n\\STATE{$p_0 = z_0$}\n\\STATE{$k = 0$}\n\\WHILE{not converged} \n\\STATE{$\\alpha_k = \\frac{r_k^Tz_k}{p_k^TD_\\mathcal{I}Ap_k}$} \n\\STATE{$x_{k+1} = x_k + \\alpha_kp_k $}\n\\STATE{$r_{k+1} = r_k - \\alpha_kD_\\mathcal{I}Ap_k $}\n\\STATE{$z_{k+1} = D_\\mathcal{I}(M^{-1}r_{k+1})$}\n\\STATE{$\\beta_{k+1} = \\frac{z_{k+1}^Tr_{k+1}}{z_{k}^Tr_{k}}$}\n\\STATE{$p_{k+1} = z_{k+1} + \\beta_{k+1}p_k$}\n\\STATE{$k = k+1$}\n\\ENDWHILE\n\\RETURN $x_{k}$\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\subsection{Solution of the linear subproblems}\\label{sec:lin_systems}\nEach step of the projected Newton method (Algorithm \\ref{alg:projGNCG}) \nrequires solving a linear system with the Hessian:\n\\begin{align}\n\\text{Hess}_{J_\\lambda}(x^{(k)})s &= -\\text{grad}_{J_\\lambda}(x^{(k)})\\nonumber \\\\\n(A^TD^{(k)}A + \\lambda L^TL)s &= -\\left(A^Tz^{(k)} + \\lambda L^TLx^{(k)}\\right).\\label{eq:lin_system}\n\\end{align}\nFor the objective functional \\eqref{eq:functional}, the diagonal matrix $D^{(k)}$ and the vector $z^{(k)}$ have the form:\n\\begin{align}\nz_{i} &= \\left\\{\\begin{array}{ll}\n\\frac{1}{2}\\left(1 - \\frac{\\left(b_i + \\sigma^2\\right)^2}{\\left([Ax]_i + \\sigma^2\\right)^2}\\right), & \\left\\vert\\frac{[Ax]_i - b_i}{\\sqrt{[Ax]_i + \\sigma^2}}\\right\\vert\\leq \\beta,\\\\\n0, & \\text{otherwise}.\n\\end{array}\n\\right.\\\\\nD_{ii} &= \\left\\{\\begin{array}{ll}\n\\frac{\\left(b_i + \\sigma^2\\right)^2}{\\left([Ax]_i + \\sigma^2\\right)^3}, & \\left\\vert\\frac{[Ax]_i - b_i}{\\sqrt{[Ax]_i + \\sigma^2}}\\right\\vert\\leq \\beta,\\\\\n0, & \\text{otherwise}.\\label{eq:row_scaling}\n\\end{array}\n\\right.\n\\end{align}\nNote that in the case of constant weights, 
robust regression represents an extra computational cost in comparison with standard least squares, because it leads to a sequence of weighted least squares problems, while standard least squares problems are solved in one step. In our setting, the weights in \\eqref{eq:wls} themselves have to be updated anyway, and therefore employing a different loss function does not change the type of the problem we need to solve. \n\nWithout preconditioning, the convergence of projected PCG can be rather slow; a suitable preconditioner is therefore important. The idea behind many preconditioners, such as the constraint \\cite{Keller2000Constraint,Dollar2007Using}, constraint-type \\cite{Dollar2007Constraint}, or Hermitian and skew-Hermitian \\cite{Benzi2006Preconditioned} preconditioners, is that in many cases it is possible to efficiently solve the \nlinear system in \\eqref{eq:lin_system} if the diagonal matrix $D^{(k)}$ is the identity matrix; that is, if the linear system\ninvolves the matrix\n\\begin{equation}\nA^TA + \\lambda L^TL. \\label{eq:without_D}\n\\end{equation}\nFor example, in the case of image deblurring, it is well known that linear systems involving the matrix\n\\eqref{eq:without_D} can be solved efficiently using fast trigonometric or \nfast Fourier transforms (FFTs).\n\nAlthough the constraint-type and the Hermitian and skew-Hermitian preconditioners seem to perform well for problems with a random \nmatrix $D^{(k)}$ (i.e., a random row scaling), see \\cite{Benzi2006Preconditioned}, they performed unsatisfactorily for problems of the form \\eqref{eq:lin_system}, \\eqref{eq:row_scaling}.\n\nA preconditioner based on a similar idea of fast computations with matrices of type \\eqref{eq:without_D} for imaging problems was proposed in \\cite{Fessler1999Conjugate}. 
In this case, the row scaling is approximated by a column scaling; that is, \nwe find $\\hat{D}^{(k)}$ such that \n\\begin{equation}\nA^TD^{(k)}A \\approx \\hat{D}^{(k)}(A^TA)\\hat{D}^{(k)},\\label{eq:pullout_prec}\n\\end{equation}\nwhere\n\\begin{equation}\n\\hat{D}_{ii}^{(k)} \\equiv \\sqrt{\\frac{e_i^T(A^TD^{(k)}A)e_i}{e_i^T(A^TA)e_i}}.\\label{eq:out_diag_def}\n\\end{equation}\nNote that for $\\hat{D}^{(k)}$ defined in \\eqref{eq:out_diag_def}, the diagonals of the matrices on the two sides of the approximation \\eqref{eq:pullout_prec} are exactly equal. \n\nSince, for large-scale problems, the matrix $A$ is typically not formed explicitly, exact evaluation of the entries of $\\hat{D}^{(k)}$ might become too expensive. To get around this restriction, note that\n\\begin{equation}\ne_i^T(A^TD^{(k)}A)e_i = ((A^T)\\,.^{2}\\,\\text{diag}(D^{(k)}))_i \\quad \\text{and} \\quad e_i^T(A^TA)e_i =((A^T)\\,.^2\\,\\mathbf{1})_i,\n\\label{eq:AT_squared}\n\\end{equation}\nwhere $\\mathbf{1}$ is a vector of all ones, and \nwe use MATLAB notation $.^2$ to mean component-wise squaring.\nIn some cases it may be relatively easy to compute both the entries of $(A^T).^2$ and the vector\n$(A^T).^2\\mathbf{1}$; this is the case for image deblurring, and is\ndiscussed in more detail in Section~\\ref{sec:num_exp}.\n\n\nUsing \\eqref{eq:pullout_prec}, we define the preconditioner for the linear system \\eqref{eq:lin_system} as\n\\begin{equation}\nM \\equiv \\hat{D}^{(k)} \\left( A^TA + \\hat{\\lambda} L^TL \\right) \\hat{D}^{(k)},\\label{eq:prec}\n\\end{equation}\nwith \n\\[\\hat{\\lambda} \\equiv \\lambda\/\\text{mean}\\left(\\text{diag}(\\hat{D}^{(k)})\\right)^2 .\\]\nMore details on the computational costs involved in constructing and applying the preconditioner\nin the case of image deblurring are provided in Section~\\ref{sec:num_exp}.\n\n\n\\section{Numerical tests}\\label{sec:num_exp}\n\nThe Poisson--Gaussian model arises naturally in imaging applications, so in this section\nwe present numerical 
examples from image deblurring. Specifically, we consider the \ninverse problem \\eqref{eq:inverse_problem} with data model \\eqref{eq:noise}, where\nvector $b$ is an observed image that is corrupted by blur and noise, matrix $A$ models\nthe blurring operation, vector $x_\\text{true}$ is the true image, and $\\eta$ is noise.\nAlthough an image is naturally represented as an array of pixel values, when we \nrefer to `vector' representations, we assume the pixel values have been reordered\nas vectors. For example, if we have a $p \\times p$ image of pixel values, these can\nbe stored in a vector of length $n = p^2$ by, for example, lexicographical ordering\nof the pixel values.\n\nIn many practical image deblurring applications, the blurring is spatially invariant,\nand $A$ is a structured matrix defined by a {\\em point spread function} (PSF).\nIn this situation, image deblurring can also be referred to as image deconvolution,\nbecause the operation $Ax_\\text{true}$ is the convolution of $x_\\text{true}$ and the PSF.\nAlthough the PSF may be given as an actual function, the more common situation is\nto compute estimates of it by imaging point source objects. Thus, the PSF can be \nrepresented as an image; we typically display the PSF as a mesh plot, which makes\nit easier to visualize how a point in an image is spread to its neighbors because of\nthe blurring operation.\nThe precise structure of the matrix $A$ depends on the imposed boundary condition;\nsee \\cite{Hansen2006Deblurring} for details. In this section we impose periodic boundary\nconditions, so that $A$ and $L$ are both diagonalizable by FFTs.\n\nSo far we have only described what we refer to as the {\\em single-frame} situation, where\n$b$ is a single observed image. It is common, especially in astronomical imaging,\nto have multiple observed images of the same object, but with each having\na different blurring matrix associated with it. 
We refer to this as the\nmulti-frame image deblurring problem. Here, $b$ represents all observed images, stacked\none on top of each other, and similarly $A$ is formed by stacking the various blurring matrices.\n\nBefore describing the test problems used in this section, we first summarize the computational\ncosts.\nFrom the discussion around equation \\eqref{eq:AT_squared}, to construct the preconditioner we need to \nbe able to efficiently square all entries of the matrix $A^T$, or equivalently those of $A$; this can\neasily be approximated by squaring the point-spread function component-wise before forming the operator, i.e.,\n\\begin{equation}\n(A_\\text{PSF}).^2 \\approx A_{\\text{PSF}.^2}\\, .\\label{eq:PSF_squaring}\n\\end{equation}\nUsing this approximation, in each Newton step we only need to perform one multiplication by a matrix,\none component-wise multiplication, and one component-wise square-root to obtain the entries of the diagonal matrix \\eqref{eq:out_diag_def}. \nWith the assumption that $A$ and $L$ are both diagonalizable by FFTs, \nefficient multiplication by the Hessian \\eqref{eq:lin_system} involves two two-dimensional forward FFTs and two inverse FFTs, which we refer to as \\texttt{fft2} and \\texttt{ifft2}, respectively. \nSolving systems with the matrix \\eqref{eq:prec} involves only one \\texttt{fft2} and one \\texttt{ifft2}. \nIn addition to the \\texttt{fft2} requirements, multiplication by the Hessian \\eqref{eq:lin_system} involves 4 pixel-wise multiplications and 1 addition. Solving systems with the preconditioner \\eqref{eq:prec} involves 3 pixel-wise multiplications (component-wise reciprocals are assumed to be computed only once at the beginning). 
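The approximation \\eqref{eq:PSF_squaring} is in fact exact for purely periodic (circular) convolution, since the entries of $A$ are then just shifted copies of the PSF values. This can be checked with a small one-dimensional NumPy sketch (an illustration of the principle only, not the two-dimensional implementation used here):

```python
import numpy as np

def circ_conv_matrix(psf, n):
    """Dense matrix of 1-D circular convolution with the given kernel."""
    A = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        # column j is the PSF circularly shifted to position j
        A[:, j] = np.real(np.fft.ifft(np.fft.fft(psf) * np.fft.fft(e)))
    return A

n = 16
psf = np.zeros(n)
psf[:3] = [0.2, 0.5, 0.3]           # simple normalized blur kernel

A1 = circ_conv_matrix(psf, n)        # operator built from the PSF
A2 = circ_conv_matrix(psf ** 2, n)   # operator built from the squared PSF

# entrywise square of A coincides with the operator of the squared PSF
assert np.allclose(A1 ** 2, A2)
```

For non-periodic boundary conditions the relation holds only approximately near the image borders, which is why \\eqref{eq:PSF_squaring} is stated as an approximation.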
The total counts for each operation are shown in Table \\ref{tab:op_counts}.\n\\begin{table}\n\\caption{Operation counts for the single-frame case.}\\label{tab:op_counts}\n\\centering\n{\\small\n\\begin{tabular}{llcccc}\n\\toprule\n& operation& \\texttt{fft2} & \\texttt{ifft2} & mults & adds \\\\ \n\\midrule\nHessian \\eqref{eq:lin_system} &multiply & 2 & 2 & 4 & 1 \\\\ \npreconditioner \\eqref{eq:prec} &solve & 1 & 1 & 3 & 0 \\\\ \n\\bottomrule\n\\end{tabular}} \n\\end{table}\n\nThe robustness and the efficiency of the proposed method are demonstrated on two test problems adopted from \\cite{Nagy2004Iterative}:\n\\paragraph{Satellite}\nAn atmospheric seeing problem with spatially invariant atmospheric blur (moderate seeing conditions with the Fried parameter 30). We also consider a multi-frame case, where the same object is blurred by three different PSFs. \nThese PSFs are generated by transposing and flipping the first PSF. \nThe setting is shown in Figures \\ref{fig:satellite} and \\ref{fig:PSF_mesh_1}.\n\\paragraph{Carbon ash}\nAn image deblurring problem with spatially invariant non-separable Gaussian blur, where the PSF has the\nfunctional definition\n\\begin{equation}\n\\label{eq:GaussianPSF}\n \\mbox{PSF}(s,t) = \\frac{1}{2\\pi \\sqrt{\\det C}} \\exp\\left\\{\n -\\frac{1}{2} \\left[ \\begin{array}{cc} s & t \\end{array} \\right] C^{-1}\n \\left[ \\begin{array}{c} s \\\\ t \\end{array} \\right] \\right\\}\\, ,\n\\end{equation}\nwhere \n$$\n C = \\left[ \\begin{array}{cc} \\gamma_1^2 & \\tau^2 \\\\[3pt] \\tau^2 & \\gamma_2^2 \\end{array} \\right]\\,, \\quad\n \\mbox{and} \\quad \\det C = \\gamma_1^2 \\gamma_2^2 - \\tau^4 > 0\\,.\n$$\nThe shape of the Gaussian PSF depends on the parameters $\\gamma_1$, $\\gamma_2$, and $\\tau$;\nwe use\n$\\gamma_1 = 4$, $\\gamma_2 = 2$, and $\\tau = 2$. We also consider a multi-frame case, where the same object is blurred by three different PSFs. 
The other two PSFs are Gaussian blurs with parameters $\\gamma_1 = 4$, $\\gamma_2 = 2$, $\\tau = 0$, and $\\gamma_1 = 4$, $\\gamma_2 = 2$, $\\tau = 0$. The setting is shown in Figures \\ref{fig:carbon_ash} and \\ref{fig:PSF_mesh_2}.\n\nAs previously mentioned, in the multi-frame case, the vector $b$ in (\\ref{eq:inverse_problem}) is the concatenation of the vectorized blurred noisy images and the matrix $A$ is the concatenation of the blurring operators, i.e., $A\\in\\mathbb{R}^{3n\\times n}$. For the test problems, all true images are $256 \\times 256$ arrays of pixels (with intensities scaled to $[0,255]$), and thus\n$n = 65536$.\n\nComputation was performed in MATLAB R2015b. Noise is generated artificially using the MATLAB functions \\texttt{poissrnd} and \\texttt{randn}. Unless specified otherwise, the standard deviation $\\sigma$ is set to $5$. We use the discretized Laplacian, see \\cite[p. 95]{Hansen2006Deblurring}, as the regularization matrix $L$. The projected Newton method (Algorithm \\ref{alg:projGNCG}) is terminated when the relative size of the projected gradient\n\\begin{equation}\n\\mathcal{P}(\\text{grad}_{J_{\\lambda}}(x^{(k)})), \\quad \\text{where} \\quad \\mathcal{P}(v) \\equiv v.*(1 - \\text{Active}) + \\text{Active}.*\\min(v, 0),\n\\label{eq:proj_grad_def}\n\\end{equation}\nreaches the tolerance $10^{-4}$ or after 40 iterations. We use MATLAB notation $.*$ to mean component-wise multiplication. Projected PCG (Algorithm \\ref{alg:projPCG}) is terminated when the relative size of the projected residual (denoted in Algorithm \\ref{alg:projPCG} by $r_k$) reaches $10^{-1}$, or the number of iterations reaches $100$. \nWe use the preconditioner given in \\eqref{eq:prec} as the default preconditioner. 
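Under periodic boundary conditions, a solve with the unscaled core $A^TA + \\hat{\\lambda} L^TL$ of the preconditioner \\eqref{eq:prec} reduces to a component-wise division in the Fourier domain. The following one-dimensional NumPy sketch illustrates the principle (synthetic kernels and data; the experiments use the analogous two-dimensional \\texttt{fft2}\/\\texttt{ifft2} version):

```python
import numpy as np

n = 32
rng = np.random.default_rng(1)

psf = np.zeros(n)
psf[:3] = [0.25, 0.5, 0.25]                  # blur kernel (first column of circulant A)
lap = np.zeros(n)
lap[0], lap[1], lap[-1] = -2.0, 1.0, 1.0     # 1-D periodic discrete Laplacian
lam = 1e-2
rhs = rng.standard_normal(n)

a_hat = np.fft.fft(psf)                      # eigenvalues of circulant A
l_hat = np.fft.fft(lap)                      # eigenvalues of circulant L

# solve (A^T A + lam L^T L) x = rhs with one forward and one inverse FFT
x = np.real(np.fft.ifft(np.fft.fft(rhs) /
                        (np.abs(a_hat) ** 2 + lam * np.abs(l_hat) ** 2)))

# verify against dense circulant matrices built explicitly
circulant = lambda c: np.column_stack([np.roll(c, j) for j in range(n)])
A, L = circulant(psf), circulant(lap)
assert np.allclose((A.T @ A + lam * L.T @ L) @ x, rhs)
```

The cost is exactly one forward and one inverse transform per application, matching the counts in Table \\ref{tab:op_counts}.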
Given a search direction $s_k$, we apply a projected backtracking linesearch, with the initial step length equal to 1, which we terminate when \n\\[J_\\lambda(x^{(k+1)}) < J_\\lambda(x^{(k)}).\\] \n\\begin{figure}[!th]\n\\centering\n\\includegraphics[width=.2\\textwidth]{SatelliteSingle_true}\n\\hspace*{.5cm}\n\\includegraphics[width=.2\\textwidth]{SatelliteMulti_noisy_1}\n\\includegraphics[width=.2\\textwidth]{SatelliteMulti_noisy_2}\n\\includegraphics[width=.2\\textwidth]{SatelliteMulti_noisy_3}\n\\caption{Test problem Satellite: true image (left) together with three blurred noisy images (right). \n}\\label{fig:satellite}\n\\end{figure}\n\\begin{figure}[!th]\n\\centering\n\\includegraphics[width=.2\\textwidth]{CarbonAshSingle_true}\n\\hspace*{.5cm}\n\\includegraphics[width=.2\\textwidth]{CarbonAshMulti_noisy_1}\n\\includegraphics[width=.2\\textwidth]{CarbonAshMulti_noisy_2}\n\\includegraphics[width=.2\\textwidth]{CarbonAshMulti_noisy_3}\n\\caption{Test problem Carbon ash: true image (left) together with three blurred noisy images (right). \n}\\label{fig:carbon_ash}\n\\end{figure}\n\n\\begin{figure}[!th]\n\\centering\n\\begin{subfigure}[b]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{SatelliteSingle_PSFmesh_1}\n\\caption{Satellite}\\label{fig:PSF_mesh_1}\n\\end{subfigure}\n\\begin{subfigure}[b]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{CarbonAshSingle_PSFmesh_1}\n\\caption{Carbon ash}\\label{fig:PSF_mesh_2}\n\\end{subfigure}\n\\caption{Point-spread functions for the first frame of each test problem.\n}\\label{fig:setting}\n\\end{figure}\n\n\n\n\\subsection{Robustness with respect to various types of outliers}\n\nIn this section, we consider several types of outliers, whose choice was motivated by \\cite{Calef2013Iteratively}, and demonstrate the robustness of the proposed method. 
Note that the proposed approach differs from \\cite{Calef2013Iteratively}, among other things, in the following: in \\cite{Calef2013Iteratively}, an approximation of the solution is computed in order to update the outer (robust) weights associated with the components of the residual, whereas here the weights are represented by the loss function $\\rho$ and are updated implicitly in each Newton step; our approach therefore does not involve any outer iteration.\n\n\\subsubsection*{Random corruptions}\nFirst we consider the simplest type of outliers -- a given percentage of pixels is corrupted at random. These corruptions are generated artificially by adding a value randomly chosen between $0$ and $\\max(Ax_\\text{true})$ to the given percentage of pixels. The location of these pixels is also chosen randomly. Figures \\ref{fig:sliding_curves_1}, \\ref{fig:sliding_curves_2}, \\ref{fig:sliding_curves_3}, and \\ref{fig:sliding_curves_4} show semiconvergence curves\\footnote{For ill-posed problems,\nthe relative error of an iterative method generally does not decrease monotonically. Instead, unless the problem is highly over-regularized, the relative errors decrease in the early iterations, but at later iterations the noise and\nother errors tend to corrupt the approximations. This behavior, where the relative errors decrease to a certain\nlevel and then increase at later iterations, is referred to as {\\em semiconvergence}; for\nmore information, we refer readers to \\cite{Engl2000Regularization,Hansen2010Discrete,Mueller2012Linear,Vogel2002Computational}.}, representing the dependence of the error on the regularization parameter $\\lambda$, as we increase the percentage of corrupted pixels. 
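The corruption procedure just described can be sketched in a few lines (a hypothetical NumPy illustration with a synthetic stand-in for the blurred image $Ax_\\text{true}$):

```python
import numpy as np

rng = np.random.default_rng(2)

Ax = rng.uniform(0.0, 255.0, size=10_000)    # stand-in for the blurred true image A x_true
sigma = 5.0

# Poisson--Gaussian data model: Poisson counts plus additive Gaussian noise
b = rng.poisson(Ax) + sigma * rng.standard_normal(Ax.size)

frac = 0.10                                   # corrupt 10% of the pixels
idx = rng.choice(Ax.size, size=int(frac * Ax.size), replace=False)

# add a value drawn uniformly from [0, max(Ax)] at randomly chosen locations
b[idx] += rng.uniform(0.0, Ax.max(), size=idx.size)
```

The corrupted entries are far outside the range predicted by the noise model, which is precisely the situation the Talwar loss is designed to downweight.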
\n\\begin{figure}[!ht]\n\\centering\n\\begin{subfigure}[b]{\\textwidth}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_SatelliteSingle_0}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_SatelliteSingle_2}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_SatelliteSingle_10}\n\\caption{Satellite single-frame}\\label{fig:sliding_curves_1}\n\\end{subfigure}\n\n\\vspace*{.3cm}\n\n\\begin{subfigure}[b]{\\textwidth}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_SatelliteMulti_0}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_SatelliteMulti_2}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_SatelliteMulti_10}\n\\caption{Satellite multi-frame}\\label{fig:sliding_curves_2}\n\\end{subfigure}\n\n\\vspace*{.3cm}\n\n\\begin{subfigure}[b]{\\textwidth}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_CarbonAshSingle_0}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_CarbonAshSingle_2}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_CarbonAshSingle_10}\n\\caption{Carbon ash single-frame}\\label{fig:sliding_curves_3}\n\\end{subfigure}\n\n\\vspace*{.3cm}\n\n\\begin{subfigure}[b]{\\textwidth}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_CarbonAshMulti_0}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_CarbonAshMulti_2}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_CarbonAshMulti_10}\n\\caption{Carbon ash multi-frame}\\label{fig:sliding_curves_4}\n\\end{subfigure}\n\\caption{Semiconvergence curves -- dependence of the relative error of the reconstruction on the size of the regularization parameter $\\lambda$ for various percentages of outliers: Talwar \\eqref{eq:robust_wls} - \\eqref{eq:beta_opt} (solid line) and the standard data fidelity function \\eqref{eq:wls} (dashed line). 
\n}\n\\end{figure}\nIt is no surprise that when outliers occur, more regularization is needed in order to obtain a reasonable approximation of the true image $x_\\text{true}$. This is, however, not the case if we use the Talwar loss function, for which the semiconvergence curve remains the same even with an increasing percentage of outliers, and therefore no adjustment of the regularization parameter is needed. In Figures \\ref{fig:reconstructions_satellite} and \\ref{fig:reconstructions_carbonash}, we show the reconstructions corresponding to $10 \\%$ outliers. The regularization parameter is chosen close to the optimal value for the same problem with no outliers. Note that Figures \\ref{fig:reconstructions_satellite} and \\ref{fig:reconstructions_carbonash} show only one frame for illustration. In the multi-frame case, the corruptions look similar for all frames, except that the random locations of the outliers are different. For random outliers like this, robust regression is clearly superior to standard weighted least squares. The influence of the outliers in the multi-frame case is less severe, due to the intrinsic regularization of the overdetermined system (\\ref{eq:inverse_problem}). A more comprehensive comparison of the standard and the robust approach is shown in Table \\ref{tab:percent}, giving the percentage of cases in which the robust approach provides the better reconstruction. The robust approach provides a better reconstruction in all cases except for the test problem Satellite with no outliers, where the standard approach sometimes gave slightly better reconstructions. 
However, even in these cases we observed that the difference between the errors of the reconstructions is rather negligible, about $3\\%$.\n\\begin{figure}[!th]\n\\centering\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{data_SatelliteSingleRand10} \n\\caption{data\\newline}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_SatelliteSingleRand10} \n\\caption{single-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_SatelliteSingleRand10} \n\\caption{single-frame\\\\ robust}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_SatelliteMultiRand10} \n\\caption{multi-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_SatelliteMultiRand10} \n\\caption{multi-frame\\\\ robust}\n\\end{subfigure}\n\\caption{Random corruptions: (a) blurred noisy image with $10 \\%$ corrupted pixels (only the first frame is shown); (b)--(e) reconstructions corresponding to $\\lambda = 10^{-4}$. 
\n}\\label{fig:reconstructions_satellite}\n\\end{figure}\n\\begin{figure}[!th]\n\\centering\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{data_CarbonAshSingleRand10} \n\\caption{data\\newline}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_CarbonAshSingleRand10} \n\\caption{single-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_CarbonAshSingleRand10} \n\\caption{single-frame\\\\ robust}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_CarbonAshMultiRand10} \n\\caption{multi-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_CarbonAshMultiRand10} \n\\caption{multi-frame\\\\ robust}\n\\end{subfigure}\n\\caption{Random corruptions: (a) blurred noisy image with $10 \\%$ corrupted pixels (only the first frame is shown); (b)--(e) reconstructions corresponding to $\\lambda = 10^{-3}$. \n}\\label{fig:reconstructions_carbonash}\n\\end{figure}\n\\begin{table}[!th]\n\\centering\n\\caption{Comparison of the quality of reconstruction for the standard vs. the robust approach. For each test problem and each percentage of outliers, the results are averaged over 100 independent positions and sizes of random corruptions. Regularization parameters are chosen identically as in Figures \\ref{fig:reconstructions_satellite} and \\ref{fig:reconstructions_carbonash}. Reconstructions are considered to be of the same quality if the difference between the corresponding relative errors is smaller than 1\\%. 
\n}\\label{tab:percent}\n{\\small \\begin{tabular}{lcccc} \\toprule \n\\multicolumn{5}{c}{better reconstruction: robust\/same\/standard} \\\\ \nproblem\/\\% outliers & \\multicolumn{1}{c}{0\\%} & \\multicolumn{1}{c}{1\\%} & \\multicolumn{1}{c}{2\\%} & \\multicolumn{1}{c}{5\\%} \\\\ \\midrule\nSatellite single-frame& 0\/93\/7& 100\/0\/0& 100\/0\/0& 100\/0\/0\\\\ \nSatellite multi-frame& 0\/94\/6& 100\/0\/0& 100\/0\/0& 100\/0\/0\\\\ \nCarbon ash single-frame& 0\/100\/0& 100\/0\/0& 100\/0\/0& 100\/0\/0\\\\ \nCarbon ash multi-frame& 0\/100\/0& 100\/0\/0& 100\/0\/0& 100\/0\/0\\\\ \n\\bottomrule\\end{tabular}}\n\\end{table}\n\n\n\\subsubsection*{Added object with different blurring}\nWe also consider a situation in which a small object appears in the scene but is blurred by a different PSF than the main object (satellite). The aim is to recover the main object, while suppressing the influence of the added one. In our case, the added object is a small satellite in the upper left corner that is blurred by a small motion blur. In the multi-frame case, the small satellite is added to the first frame only. The difference between the reconstructions using the standard and the robust approach is shown in Figure \\ref{fig:alien}. For the single-frame problem, the reconstruction obtained using the standard loss function is fully dominated by the small added object. For the multi-frame situation, the influence of the outlier is partially compensated by the two frames without outliers. 
In both cases, however, robust regression provides a better reconstruction, comparable to the reconstruction from the data without outliers.\n\\begin{figure}[!th]\n\\centering\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{data_SatelliteSingleAlien} \n\\caption{data\\newline}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_SatelliteSingleAlien} \n\\caption{single-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_SatelliteSingleAlien} \n\\caption{single-frame\\\\ robust}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_SatelliteMultiAlien} \n\\caption{multi-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_SatelliteMultiAlien} \n\\caption{multi-frame\\\\ robust}\n\\end{subfigure}\n\\caption{Added object: (a) blurred noisy image with a small object added to the first frame (only the first frame is shown); (b)--(e) reconstructions corresponding to $\\lambda = 10^{-4}$. \n}\\label{fig:alien}\n\\end{figure}\n\n\n\\subsubsection*{Outliers introduced by boundary conditions}\nThe choice of boundary conditions plays an important role in solving image deblurring problems. As is well known, see e.g. \\cite{Hansen2006Deblurring}, unless some strong a priori information about the scene outside the borders is available, any choice of the boundary conditions may lead to artifacts around edges in the reconstruction. 
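The wrap-around effect responsible for such edge artifacts is easy to reproduce: under periodic convolution, a feature near one boundary leaks to the opposite one. A minimal one-dimensional NumPy sketch (synthetic signal and kernel):

```python
import numpy as np

n = 64
x = np.zeros(n)
x[n - 2] = 1.0                  # bright feature right at the boundary

psf = np.zeros(n)
psf[:5] = 1.0 / 5.0             # simple 5-point averaging blur

# periodic (circular) convolution, as implied by FFT-based blurring
b = np.real(np.fft.ifft(np.fft.fft(psf) * np.fft.fft(x)))

# the blur wraps around: pixels at the opposite end receive energy
# b[0], b[1], b[2] are each 0.2
```

This leaked energy is inconsistent with the true scene, and it is exactly this kind of model mismatch that the robust data fidelity term treats as outliers.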
As in \\cite{Calef2013Iteratively}, we may expect that the robust objective functional (\\ref{eq:functional}) can to some extent compensate for these edge artifacts, i.e., the outliers are represented by the `incorrectly' imposed boundary conditions. In our model we assume periodic boundary conditions, which are computationally very appealing, since they allow evaluating the multiplication by $A$ very efficiently using the fast Fourier transform. However, if any of the objects in the scene is close to the boundary, these boundary conditions will most probably cause artifacts. To demonstrate the ability of the proposed scheme to eliminate the influence of this type of outlier, we shifted the satellite to the right edge of the image. All other settings remain unchanged. Reconstructions using the standard and the robust approach are shown in Figure \\ref{fig:bc}. We see that, although the improvement is not spectacular, robust regression can reduce the artifacts caused by incorrectly imposed boundary conditions and therefore provide a better reconstruction of the true image. 
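As an aside on the periodic boundary conditions: the multiplication by $A$ then amounts to a circular convolution, which the 2-D FFT evaluates without ever forming the matrix. The following NumPy sketch is illustrative only (not the authors' implementation); the PSF array and its placement are assumptions:

```python
import numpy as np

def blur_periodic(x, psf):
    # Under periodic boundary conditions, A*x is a circular convolution:
    # two forward 2-D FFTs and one inverse FFT replace a dense
    # matrix-vector product. The PSF is zero-padded to the image shape.
    return np.real(np.fft.ifft2(np.fft.fft2(psf, s=x.shape) * np.fft.fft2(x)))

# Sanity check: a delta-function PSF leaves the image unchanged.
x = np.arange(16.0).reshape(4, 4)
delta = np.zeros((4, 4)); delta[0, 0] = 1.0
assert np.allclose(blur_periodic(x, delta), x)
```

Because the blur is applied as if the image tiled the plane, content near one edge leaks in from the opposite edge, which is exactly the source of the artifacts discussed above.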
Quantitative results for this and all the previous types of outliers are shown in Tables~\\ref{tab:2a} and \\ref{tab:2b}.\n\\begin{figure}[!th]\n\\centering\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{data_SatelliteSingleBC} \n\\caption{data\\newline}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_SatelliteSingleBC} \n\\caption{single-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_SatelliteSingleBC} \n\\caption{single-frame\\\\ robust}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_SatelliteMultiBC} \n\\caption{multi-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_SatelliteMultiBC} \n\\caption{multi-frame\\\\ robust}\n\\end{subfigure}\n\\caption{Incorrectly imposed periodic boundary conditions: (a) blurred noisy image with the object close to the edge (only first frame is shown); (b)--(e) reconstructions corresponding to $\\lambda = 10^{-4}$. \n}\\label{fig:bc}\n\\end{figure}\n\n\\begin{table}[!th]\n\\caption{Comparison of the standard and robust approaches in terms of the relative error of the reconstruction. Each row contains results for both the standard and the robust approach. The abbreviation `\\# it' stands for the number of Newton steps performed before the relative size of the projected gradient reached the tolerance $10^{-4}$. Corresponding reconstructions are shown in Figures~\\ref{fig:reconstructions_satellite}--\\ref{fig:bc}. 
\n}\n\\centering\n{\\small\n\\begin{subtable}[h]{\\textwidth}\n\\subcaption{single-frame}\n\\centering\n\\begin{tabular}{lcccc}\\toprule \n & \\multicolumn{2}{c}{standard} & \\multicolumn{2}{c}{robust} \\\\ \n\\multicolumn{1}{c}{problem} & \\multicolumn{1}{c}{\\# it} & \\multicolumn{1}{c}{rel. error} & \\multicolumn{1}{c}{\\# it} & \\multicolumn{1}{c}{rel. error} \\\\ \\midrule\nSatellite& 15& 3.40$\\times 10^{-1}$& 16& 3.42$\\times 10^{-1}$\\\\ \nSatellite random corr. 10\\%& 14& 6.78$\\times 10^{-1}$& 14& 3.57$\\times 10^{-1}$\\\\ \nCarbon ash& 10& 3.10$\\times 10^{-1}$& 11& 3.08$\\times 10^{-1}$\\\\ \nCarbon ash random corr. 10\\% & 11& 3.80$\\times 10^{-1}$& 14& 3.10$\\times 10^{-1}$\\\\ \nSatellite added object& 15& 4.72$\\times 10^{-1}$& 15& 3.43$\\times 10^{-1}$\\\\ \nSatellite boundary conditions& 15& 5.48$\\times 10^{-1}$& 25& 4.51$\\times 10^{-1}$\\\\ \n\\bottomrule\\end{tabular}\\label{tab:2a}\n\\end{subtable}\n\n\\medskip\n\n\\begin{subtable}[h]{1\\textwidth}\n\\subcaption{multi-frame}\n\\centering\n\\begin{tabular}{lcccc} \\toprule\n & \\multicolumn{2}{c}{standard} & \\multicolumn{2}{c}{robust} \\\\ \n\\multicolumn{1}{c}{problem} & \\multicolumn{1}{c}{\\# it} & \\multicolumn{1}{c}{rel. error} & \\multicolumn{1}{c}{\\# it} & \\multicolumn{1}{c}{rel. error} \\\\ \\midrule\nSatellite& 12& 2.89$\\times 10^{-1}$& 11& 2.89$\\times 10^{-1}$\\\\ \nSatellite random corr. 10\\%& 11& 6.45$\\times 10^{-1}$& 13& 3.00$\\times 10^{-1}$\\\\ \nCarbon ash& 12& 3.07$\\times 10^{-1}$& 11& 3.05$\\times 10^{-1}$\\\\ \nCarbon ash random corr. 
10\\%& 9& 3.70$\\times 10^{-1}$& 19& 3.06$\\times 10^{-1}$\\\\ \nSatellite added object& 13& 3.33$\\times 10^{-1}$& 11& 2.90$\\times 10^{-1}$\\\\ \nSatellite boundary conditions& 14& 5.26$\\times 10^{-1}$& 14& 4.27$\\times 10^{-1}$\\\\ \n\\bottomrule\\end{tabular}\\label{tab:2b}\n\\end{subtable}}\n\\end{table} \n\n\\subsection{Generalized cross-validation}\\label{sec:gcv_tests}\nFor the remainder of this section we consider only the robust approach, i.e., the functional \\eqref{eq:functional} with the Talwar loss function. In Section~\\ref{sec:gcv} we described a regularization parameter selection rule based on leave-one-out cross validation. Since GCV is a standard method, we focus here mainly on the influence of the outliers on its reliability. To obtain various noise levels, we scale the original true scene (with maximum intensity = 255) by 10 and by 100, which decreases the relative size of the Poisson noise. The standard deviation $\\sigma$ of the additive Gaussian noise is scaled accordingly by $\\sqrt{10}$ and $10$. We compute the resulting signal-to-noise ratio as the reciprocal of the coefficient of variation, i.e.,\n\\[\n\\text{SNR} = \\frac{\\|Ax\\|}{\\sqrt{\\sum_{i=1}^n([Ax]_i + \\sigma^2)}}.\n\\]\n\nFor our computations, we use CG to solve \\eqref{eq:gcv_lin_syst}, which we terminate when the relative size of the residual reaches $10^{-4}$ or the number of iterations reaches 150. \nTo minimize the GCV functional, we use the MATLAB built-in function \\texttt{fminbnd}, for which we set the lower bound to $0$ and the upper bound to $10^{-1}$, $10^{-2}$, or $10^{-4}$, depending on the maximum intensity of the image. The tolerance \\texttt{TolX} was set to $10^{-8}$. \n\nFor the test problem Satellite, we show the semiconvergence curves, including the minimum error and the error obtained using GCV, in Figure~\\ref{fig:GCV_satellite}. 
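The SNR definition above can be computed directly from the noise-free blurred image; a minimal Python sketch (illustrative only, with `Ax` and `sigma` as assumed input names):

```python
import numpy as np

def snr(Ax, sigma):
    # Reciprocal of the coefficient of variation for mixed Poisson-Gaussian
    # noise: the variance of pixel i is [Ax]_i + sigma^2.
    Ax = np.ravel(Ax)
    return np.linalg.norm(Ax) / np.sqrt(np.sum(Ax + sigma**2))
```

Note that scaling the scene by a factor $c$ (and $\sigma$ by $\sqrt{c}$) multiplies the numerator by $c$ and the denominator by $\sqrt{c}$, so the SNR grows by $\sqrt{c}$; this roughly matches the reported progression 5, 17, 52 for maximum intensities 255, 2550, 25500.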
Quantitative results (averaged over 10 realizations of outliers) for both test problems are shown in Table~\\ref{tab:gcv}. We observe that the proposed rule is rather stable with respect to the increasing number of outliers and generally better for the Carbon ash than for the Satellite. As expected, the method provides a better approximation of the optimal regularization parameter for smaller noise levels (larger $Ax_\\text{true}$), where the functional \\eqref{eq:wls} better approximates the maximum likelihood functional for the mixed Poisson--Gaussian model. Occasionally, GCV provides a slightly worse reconstruction for the highest percentage (10\\%) of outliers.\n\\begin{figure}[!ht]\n \\centering\n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle_Laplacian_1_wls_talwar} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle_Laplacian_5_wls_talwar} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle_Laplacian_10_wls_talwar} \n \t\\caption{Satellite single-frame, max. intensity 255 (SNR = 5).}\\end{subfigure}\n \n\\vspace*{.3cm}\n\n\n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle2550_Laplacian_1_wls_talwar} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle2550_Laplacian_5_wls_talwar}\n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle2550_Laplacian_10_wls_talwar} \n \t\\caption{Satellite single-frame, max. intensity 2550 (SNR = 17).}\\end{subfigure}\n \n\\vspace*{.3cm}\n\n\n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle25500_Laplacian_1_wls_talwar} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle25500_Laplacian_5_wls_talwar}\n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle25500_Laplacian_10_wls_talwar} \n \t\\caption{Satellite single-frame, max. 
intensity 25500 (SNR = 52).}\\end{subfigure}\n \n\\vspace*{.3cm}\n\n \t\n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{GCV_SatelliteMulti_Laplacian_1_wls_talwar} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteMulti_Laplacian_5_wls_talwar}\n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteMulti_Laplacian_10_wls_talwar} \n \t\\caption{Satellite multi-frame, max. intensity 255 (SNR = 5).}\\end{subfigure}\n \n \t\\caption{GCV for data with outliers. \n} \\label{fig:GCV_satellite}\n\\end{figure}\n\n\n\\begin{landscape}\n\\begin{table}\n\\centering\n\\caption{Relative errors of the reconstruction: optimal $\\lambda$ vs. $\\lambda$ obtained by minimizing the GCV functional \\eqref{eq:gcv_fun}. \n}\\label{tab:gcv}\n\\begin{subtable}[h]{1.5\\textwidth}\n\\subcaption{Satellite}\n\\centering\n{\\scriptsize\\begin{tabular}{llcllclcclc} \\toprule \n\\multicolumn{2}{r}{}& \\multicolumn{4}{r}{error min} & \\multicolumn{4}{l}{($\\lambda$ optimal)} \\\\ \n\\multicolumn{2}{r}{} & \\multicolumn{4}{r}{error GCV} & \\multicolumn{4}{l}{($\\lambda$ GCV)} \\\\ \n\\multicolumn{1}{r}{} & \\multicolumn{9}{c}{\\% outliers}\\\\ \nproblem & max. int. 
& \\multicolumn{2}{c}{1\\%} & & \\multicolumn{2}{c}{5\\%} & & \\multicolumn{2}{c}{10\\%} \\\\ \\midrule \n\\multirow{2}{*}{single-frame} & \\multirow{2}{*}{255}& $3.39\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 2.1\\times 10^{-4}$) & & $3.43\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 2.5\\times 10^{-4}$) & & $3.49\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 3.2\\times 10^{-4}$)\\\\ \n & & $3.60\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 2.1\\times 10^{-3}$) & & $3.68\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 3.1\\times 10^{-3}$) & & $3.80\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 5.6\\times 10^{-3}$)\\\\ \\addlinespace[.2cm] \n\\multirow{2}{*}{single-frame} & \\multirow{2}{*}{2550}& $2.46\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.5\\times 10^{-6}$) & & $2.48\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.5\\times 10^{-6}$) & & $2.50\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.7\\times 10^{-6}$)\\\\ \n & & $2.48\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.5\\times 10^{-6}$) & & $2.57\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 4.5\\times 10^{-6}$) & & $2.69\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 8.4\\times 10^{-6}$)\\\\ \\addlinespace[.2cm] \n\\multirow{2}{*}{single-frame} & \\multirow{2}{*}{25500}& $1.78\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.0\\times 10^{-8}$) & & $1.79\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.0\\times 10^{-8}$) & & $1.81\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.0\\times 10^{-8}$)\\\\ \n & & $1.79\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 8.8\\times 10^{-9}$) & & $1.81\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.5\\times 10^{-8}$) & & $2.42\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 7.5\\times 10^{-6}$)\\\\ \\addlinespace[.2cm] \n\\multirow{2}{*}{multi-frame} & \\multirow{2}{*}{255}& $2.89\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-4}$) & & $2.92\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-4}$) & & $2.97\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-4}$)\\\\ \n & & $3.16\\times 
10^{-1}$ & ($\\lambda_\\text{GCV} = 1.6\\times 10^{-3}$) & & $3.13\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.2\\times 10^{-3}$) & & $3.49\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 5.8\\times 10^{-3}$) \\\\ \\addlinespace[.2cm] \n\\bottomrule\\end{tabular}\n}\n\\end{subtable}\n\n\\begin{subtable}[h]{1.5\\textwidth}\n\\subcaption{Carbon ash}\n\\centering\n{\\scriptsize\\begin{tabular}{llcllclcclc} \\toprule \n\\multicolumn{2}{r}{}& \\multicolumn{4}{r}{error min} & \\multicolumn{4}{l}{($\\lambda$ optimal)} \\\\ \n\\multicolumn{2}{r}{} & \\multicolumn{4}{r}{error GCV} & \\multicolumn{4}{l}{($\\lambda$ GCV)} \\\\ \n\\multicolumn{1}{r}{} & \\multicolumn{9}{c}{\\% outliers}\\\\ \nproblem & max. int. & \\multicolumn{2}{c}{1\\%} & & \\multicolumn{2}{c}{5\\%} & & \\multicolumn{2}{c}{10\\%} \\\\ \\midrule \n\\multirow{2}{*}{single-frame} & \\multirow{2}{*}{255}& $3.08\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-3}$) & & $3.09\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-3}$) & & $3.10\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.7\\times 10^{-3}$)\\\\ \n& & $3.12\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 8.9\\times 10^{-4}$) & & $3.11\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.5\\times 10^{-3}$) & & $3.11\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 2.2\\times 10^{-3}$) \\\\ \\addlinespace[.2cm] \n\\multirow{2}{*}{single-frame} & \\multirow{2}{*}{2550}& $2.91\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.9\\times 10^{-5}$) & & $2.92\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 2.1\\times 10^{-5}$) & & $2.92\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.9\\times 10^{-5}$)\\\\ \n& & $2.97\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 9.7\\times 10^{-6}$) & & $2.95\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.2\\times 10^{-5}$) & & $2.93\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 2.2\\times 10^{-5}$)\\\\ \\addlinespace[.2cm] \n\\multirow{2}{*}{single-frame} & \\multirow{2}{*}{25500}& $2.77\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 3.0\\times 10^{-7}$) 
& & $2.78\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 2.7\\times 10^{-7}$) & & $2.79\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 2.6\\times 10^{-7}$)\\\\ \n& & $3.00\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 5.2\\times 10^{-8}$) & & $2.84\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 9.4\\times 10^{-8}$) & & $2.82\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.3\\times 10^{-7}$)\\\\ \\addlinespace[.2cm] \n\\multirow{2}{*}{multi-frame} & \\multirow{2}{*}{255}& $3.03\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-3}$) & & $3.04\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-3}$) & & $3.04\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-3}$)\\\\ \n & & $3.14\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 6.8\\times 10^{-4}$) & & $3.07\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 9.8\\times 10^{-4}$) & & $3.05\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.9\\times 10^{-3}$)\\\\ \\addlinespace[.2cm] \n\\bottomrule\\end{tabular}\n}\n\\end{subtable}\n\\end{table} \n\\end{landscape}\n\n\n\n\\subsection{Linear subproblems}\n\nAs mentioned earlier, various types of preconditioners have been developed to speed up convergence of iterative methods applied to systems of type \\eqref{eq:lin_system} or its saddle-point counterpart \n\\[\n\\begin{pmatrix}\n D^{-1} & A \\\\\n A^T & \\lambda L^TL\\\\\n \\end{pmatrix}\n \\begin{pmatrix}\n w \\\\\n s\\\\\n \\end{pmatrix} = \n \\begin{pmatrix}\n -z \\\\\n \\lambda L^TLx\\\\\n \\end{pmatrix},\\label{eq:saddle_point}\n\\]\nwhere $w$ is an auxiliary variable and $s$ is the sought step. The Hermitian and skew-Hermitian splitting (HSS) preconditioner and the constraint preconditioner are among the best-known preconditioners for this type of linear system. Both were incorporated in GMRES and tested on deblurring problems with random diagonal scaling $D$ in \\cite{Benzi2006Preconditioned}. With a random $D$, they indeed accelerate convergence in our case as well, as shown in Figure \\ref{fig:prec1}. However, our preconditioner \\eqref{eq:prec} provides a much better speedup. 
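Systems of this type can be solved matrix-free, since CG and GMRES only require matrix-vector products. The following Python/SciPy sketch solves $(A^TDA+\lambda L^TL)s=-A^Tb$ with unpreconditioned CG, using small random stand-ins for $A$ and $D$ and taking $L=I$ for simplicity; all names and sizes are illustrative assumptions, not the authors' setup:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 64
rng = np.random.default_rng(0)
A = 0.1 * rng.standard_normal((n, n))  # stand-in for the blurring matrix
d = rng.uniform(0.0, 1.0, n)           # diagonal of D, entries in (0, 1)
lam = 1e-4                             # regularization parameter
b = rng.standard_normal(n)

# Matrix-free operator for A^T D A + lambda * L^T L (here L = I):
# only products with A, A^T, and the diagonal are ever formed.
op = LinearOperator((n, n),
                    matvec=lambda s: A.T @ (d * (A @ s)) + lam * s)
s, info = cg(op, -A.T @ b, maxiter=1000)  # info == 0 on convergence
```

In the actual deblurring setting the products with $A$ and $A^T$ would themselves be evaluated by FFTs, and a preconditioner would be supplied through the `M` argument of `cg`.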
Moreover, for real computations, e.g., when the matrix $D$ is actually generated during the Projected Newton computation, the HSS and constraint preconditioners did not perform well, and even slowed down the convergence; see Figure~\\ref{fig:prec2}. This is fortunately not the case for our proposed preconditioner. In this experiment, we did not assume the projection onto the non-negative orthant; since \\eqref{eq:saddle_point} requires evaluating $D^{-1}$, any component $D_{ii}=0$ was replaced by $2\\sqrt{\\epsilon_\\text{mach}}$, see also \\cite{Mastronardi2008Fast}.\nWe also did not incorporate any outliers in these initial experiments with the preconditioners; these results are intended to show that our proposed preconditioning often performs much better on these problems than the well-known standard preconditioners.\nIn fact, we see that the behavior of the constraint and HSS preconditioners depends heavily on the actual setting of the problem. In the remainder of this section we will therefore focus on the preconditioner given in \\eqref{eq:prec}.\n\\begin{figure}[!ht]\n \\centering\n \\begin{subfigure}[b]{0.85\\textwidth}\n \t\\includegraphics[width=.47\\textwidth]{preconditioners_unconstrained_CG_random} \n \t\\hspace*{.1cm}\n \t\\includegraphics[width=.47\\textwidth]{preconditioners_unconstrained_GMRES_random} \n \t\\caption{Satellite single-frame, random diagonal}\n \t\\end{subfigure}\n \t\\caption{Preconditioner defined in \\eqref{eq:prec}, constraint preconditioner (CP), and Hermitian and skew-Hermitian splitting preconditioner (HSSP) performance for $(A^TDA + \\lambda L^TL)s = -A^Tb$, where $A$ and $b$ are adopted from the test problem Satellite, and $D$ is a diagonal with random entries uniformly distributed in $(0,1)$. 
\n}\\label{fig:prec1}\n\\end{figure}\n\\begin{figure}[!ht]\n \\centering\n \\begin{subfigure}[b]{0.85\\textwidth}\n \t\\includegraphics[width=.47\\textwidth]{preconditioners_unconstrained_CG_real} \n \t\\hspace*{.1cm}\n \t\\includegraphics[width=.47\\textwidth]{preconditioners_unconstrained_GMRES_real} \n \t\\caption{Satellite single-frame, Newton it = 3}\n \t\\end{subfigure}\n \t\\caption{Preconditioner defined in \\eqref{eq:prec}, constraint preconditioner (CP), and Hermitian\/skew-Hermitian preconditioner (HSSP) performance for $(A^TD^{(k)}A + \\lambda L^TL)s = -\\left(A^Tz^{(k)} + \\lambda L^TLx^{(k)}\\right)$. \n}\\label{fig:prec2} \n\\end{figure}\n\nIn Figure \\ref{fig:prec_levels}, we investigate the overall speedup of the convergence by plotting the number of projected PCG steps needed in each Newton iteration to reach the desired tolerance on the relative size of the projected gradient. Even for the most generous tolerance $10^{-1}$, preconditioner \\eqref{eq:prec} significantly reduces the number of projPCG iterations. Note that in this experiment, the linear subproblems solved in each Newton iteration are generally not identical, since the subproblems are not solved exactly and therefore the approximations $x^{(k)}$ are not the same. We set the outer tolerance to $0$ in order to always perform at least $15$ Newton iterations. 
\n\\begin{figure}[!ht]\n \\centering\n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{precond_levels_SatelliteSingle_5_1} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_SatelliteSingle_5_2} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_SatelliteSingle_5_3} \n \t\\caption{Satellite single-frame}\\end{subfigure}\n \n \n \\vspace*{.3cm} \t\n \n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{precond_levels_SatelliteMulti_5_1} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_SatelliteMulti_5_2}\n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_SatelliteMulti_5_3} \n \t\\caption{Satellite multi-frame}\\end{subfigure}\n \n \n \\vspace*{.3cm} \t\n \n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{precond_levels_CarbonAshSingle_5_1} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_CarbonAshSingle_5_2} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_CarbonAshSingle_5_3} \n \t\\caption{Carbon Ash single-frame}\\end{subfigure}\n \n \\vspace*{.3cm} \t\n \n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{precond_levels_CarbonAshMulti_5_1} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_CarbonAshMulti_5_2} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_CarbonAshMulti_5_3} \n \t\\caption{Carbon Ash multi-frame}\\end{subfigure}\n \n \n \t\\caption{The effect of preconditioning by preconditioner defined in \\eqref{eq:prec}: number of projPCG iterations performed in each Newton iteration to achieve the desired tolerance. 
5 \\% outliers.\n} \\label{fig:prec_levels}\n\\end{figure}\nThe choice of the projPCG tolerance is a difficult question, but from the average number of Newton iterations\/projPCG iterations\/fast Fourier transforms shown in Table \\ref{tab:numit}, we observe that raising the tolerance does not considerably increase the number of Newton steps we need to perform here. Therefore a larger tolerance, here $10^{-1}$, leads to a smaller total number of projPCG iterations. This is independent of the percentage of outliers. For each setting, the number of projPCG iterations is significantly smaller for the preconditioned version. This is not always the case for the total count of the fast Fourier transforms, since we need to perform 6 \\texttt{fft2}\/\\texttt{ifft2} calls in each iteration vs. 4 for the unpreconditioned iterations; see Table \\ref{tab:op_counts}. For large-scale problems, however, the computational complexity of the fast Fourier transform, which is $\\mathcal{O}(n\\log n)$, is comparable to that of the other operations performed in projPCG, such as the inner products, whose complexity is $\\mathcal{O}(n)$, and therefore the number of projPCG iterations seems to be the more important indicator of the efficiency of the preconditioner. Recall here that $n$ is the number of pixels in the image, so if we have a $256 \\times 256$ array of pixels, then $n = 65536$.\n\\begin{table}[!ht]\n\\centering\n\\caption{Average number of Newton iterations, projPCG iterations, and (inverse) 2D Fourier transforms for projPCG with and without preconditioning, and two tolerances on the relative size of the projPCG residual. Results are averaged over 10 independent realizations of noise and outliers. 
\n}\\label{tab:numit}\n\\begin{subtable}[h]{\\textwidth}\n\\subcaption{projPCG tol = $10^{-1}$}\n\\centering\n{\\small\\begin{tabular}{lcccc} \\toprule \n \\multicolumn{5}{c}{average count: Newton\/CG\/\\texttt{fft2}s } \\\\ \n& & \\multicolumn{3}{c}{\\% outliers} \\\\ \n\\cmidrule(r){3-5}\n\\multicolumn{1}{c}{problem} & precond & \\multicolumn{1}{c}{0\\%} & \\multicolumn{1}{c}{2\\%} & \\multicolumn{1}{c}{10\\%} \\\\ \\midrule\n\\multirow{2}{*}{Satellite single-frame}& no& 14\/290\/1383& 14\/274\/1329& 14\/283\/1374\\\\ \n & yes& 15\/161\/1280& 16\/172\/1362& 14\/158\/1252\\\\ \n\\addlinespace[.2cm]\n\\multirow{2}{*}{Satellite multi-frame}& no& 12\/250\/2398& 12\/216\/2104& 13\/241\/2364\\\\ \n& yes& 12\/107\/1545& 11\/107\/1535& 12\/103\/1507\\\\ \n\\addlinespace[.2cm]\n\\multirow{2}{*}{Carbon ash single-frame}& no& 11\/190\/939& 10\/179\/891& 11\/184\/915\\\\ \n& yes& 10\/71\/641& 11\/72\/654& 13\/82\/753\\\\ \n\\addlinespace[.2cm]\n\\multirow{2}{*}{Carbon ash multi-frame}& no& 14\/221\/2200& 14\/219\/2179& 16\/254\/2542\\\\ \n& yes& 16\/88\/1510& 15\/85\/1419& 17\/99\/1654\\\\ \n\\bottomrule\n\\end{tabular}}\n\\end{subtable}\n\n\\vspace*{.5cm}\n\n\\begin{subtable}[h]{1\\textwidth}\n\\subcaption{projPCG tol = $10^{-2}$}\n\\centering\n{\\small\\begin{tabular}{lcccc} \\toprule \n \\multicolumn{5}{c}{average count: Newton\/CG\/\\texttt{fft2}s } \\\\ \t\n& & \\multicolumn{3}{c}{\\% outliers} \\\\ \n\\cmidrule(r){3-5}\n\\multicolumn{1}{c}{problem} & precond & \\multicolumn{1}{c}{0\\%} & \\multicolumn{1}{c}{2\\%} & \\multicolumn{1}{c}{10\\%} \\\\ \\midrule\n\\multirow{2}{*}{Satellite single-frame}& no& 13\/536\/2359& 13\/539\/2373& 14\/641\/2819\\\\ \n& yes& 14\/284\/2001& 15\/302\/2117& 14\/296\/2091\\\\ \n\\addlinespace[.2cm] \n\\multirow{2}{*}{Satellite multi-frame}& no& 11\/457\/4082& 11\/460\/4130& 12\/499\/4511\\\\ \n& yes& 11\/201\/2536& 11\/197\/2499& 12\/213\/2747\\\\ \n\\addlinespace[.2cm]\n\\multirow{2}{*}{Carbon ash single-frame}& no& 11\/393\/1754& 
11\/432\/1912& 13\/526\/2315\\\\ \n& yes& 10\/121\/934& 11\/129\/1004& 13\/155\/1201\\\\ \n\\addlinespace[.2cm]\n\\multirow{2}{*}{Carbon ash multi-frame}& no& 13\/426\/3813& 14\/461\/4127& 16\/498\/4467\\\\ \n& yes& 13\/144\/1954& 14\/150\/2038& 16\/173\/2359\\\\ \n\\bottomrule\n\\end{tabular}}\n\\end{subtable}\n\n\\end{table} \n\n\\section{Conclusion}\nWe have presented an efficient approach to compute approximate solutions of a linear inverse problem that is contaminated with mixed Poisson--Gaussian noise and contains outliers in the measured data. We investigated the convexity properties of various robust regression functions and found that the Talwar function was the best option. We proposed a preconditioner and illustrated that it was more effective than other standard preconditioning approaches on the types of problems studied in this paper. Moreover, we showed that a variant of the GCV method can perform well in estimating regularization parameters in robust regression. A detailed discussion of computational costs and extensive numerical experiments illustrate that the approach proposed in this paper is effective and efficient on image deblurring problems.\n\n\\section{Acknowledgment}\nThe authors would like to thank Lars Ruthotto for pointing them to the Projected PCG algorithm and providing them with a MATLAB code. The first author would like to thank Emory University for the hospitality offered in the academic year 2014--2015, when part of this work was completed.\n\n\n\\FloatBarrier\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nThe phenomena of influence propagation through social networks have attracted a great body of research. A key function of an online social network (OSN), besides sharing, is that it enables users to express their personal opinions about a product or news trend by means of posts, likes\/dislikes, etc. 
Such opinions are propagated to other users and might exert a significant influence on them, either positive or negative. \n \nThe real world is full of imprecision and uncertainty, and this fact necessarily impacts OSN data. In fact, social interactions cannot always be precise and certain; moreover, OSNs allow only limited access to their data, which generates further imprecision and uncertainty. If we ignore this imperfection, we may be confronted with erroneous analysis results. In such situations, the theory of belief functions \\cite{Dempster67a,Shafer76} has been widely applied. Furthermore, this theory has been used for analyzing social networks \\cite{Jendoubi14a,Jendoubi2015,Zhou2015}. \n\n\nInfluence maximization (IM) is the problem of finding a set of $k$ seed nodes that are able to influence the maximum number of nodes in the social network. In the literature, we find many solutions for the IM problem. \\textit{Kempe et al.} \\cite{Kempe03} propose two propagation simulation models, the \\textit{Linear Threshold Model (LTM)} and the \\textit{Independent Cascade Model (ICM)}. Besides, the credit distribution (CD) model \\cite{Goyal12} is a data based approach that investigates past propagation to detect influencers. However, these solutions do not consider the user's opinion. Zhang et al. \\cite{Zhang2013} propose an opinion based cascading model that considers the user's opinion about the product. However, their work is not based on real-world data to estimate users' opinions and influence. \n\nIn this paper, we propose a new data based model for influence maximization in online social networks that seeks to detect influencer users who adopt a positive opinion about the product. The proposed model is data based because we use past propagation to estimate the influence and users' messages to estimate the opinion. Besides, it uses the theory of belief functions in estimating the influence in order to deal with data imprecision and uncertainty. 
To the best of our knowledge, the proposed model is the first evidential data based model that maximizes the influence on OSN, detects influencer users having a positive opinion about the product, and uses the theory of belief functions to process the data imperfection.\n\nThe remainder of this paper is organized as follows: section 2 introduces the proposed model for maximizing the positive opinion influence, section 3 shows the performance of our model through some relevant experiments, and section 4 concludes the paper.\n\n\\vspace{-0.05cm}\n\n\\section{Maximizing positive opinion influence}\n\nIn this section, we present our positive opinion influence measure and the proposed influence maximization algorithm.\n\n\n\\subsection{Influence measure}\n\nConsider a social network $G=\\left(V,E\\right)$, a frame of discernment expressing opinion $\\Theta=\\left\\{ Pos,\\, Neg,\\, Obj\\right\\} $, $Pos$ for positive, $Neg$ for negative and $Obj$ for objective, a frame of discernment expressing influence and passivity $\\Omega=\\left\\{ I,P\\right\\} $, $I$ for influencer and $P$ for passive user, a probability distribution $\\Pr^{\\Theta}\\left(u\\right)$ defined on $\\Theta$ that expresses the opinion of the user $u\\in V$ about the product, and a basic belief assignment (BBA) function~\\cite{Shafer76}, $m^{\\Omega}\\left(u,v\\right)$, defined on $\\Omega$, that expresses the influence that the user $u$ exerts on the user $v$. The first step of the influence maximization process is to measure the influence of each user in the network. We therefore propose an influence measure to estimate the positive influence of each user. 
We define the positive opinion influence of $u$ on $v$ as the positive proportion of $m^{\\Omega}\\left(u,v\\right)\\left(I\\right)$ and we measure this proportion as: \n\\begin{equation}\nInf_{Pos}\\left(u,v\\right)=\\textrm{Pr}^{\\Theta}\\left(u\\right)\\left(Pos\\right).m^{\\Omega}\\left(u,v\\right)\\left(I\\right)\n\\end{equation}\n\nNext, we define the amount of influence given to a set of nodes $S\\subseteq V$ for influencing a user $v\\in V$. We estimate the influence of $S$ on a user $v$ as follows:\n\n\\begin{equation}\nInf_{Pos}\\left(S,v\\right)=\\begin{cases}\n1 & \\textrm{if} \\, v\\in S \\\\\n{\\displaystyle \\sum_{u\\in S}\\sum_{x\\in D_{IN}\\left(v\\right)\\cup\\left\\{ v\\right\\} }Inf_{Pos}\\left(u,x\\right).Inf_{Pos}\\left(x,v\\right)} & \\textrm{otherwise}\n\\end{cases}\n\\end{equation}\nsuch that $Inf_{Pos}\\left(v,v\\right)=1$ and $D_{IN}\\left(v\\right)$ is the set of in-neighbors of $v$. Finally, we define the influence spread $\\sigma\\left(S\\right)$ under the evidential model as the total influence given to $S\\subseteq V$ from all nodes in the social network, $\\sigma\\left(S\\right)=\\sum_{v\\in V}Inf_{Pos}\\left(S,v\\right)$. In the spirit of the IM problem, as defined by \\textit{Kempe et al.} \\cite{Kempe03}, $\\sigma\\left(S\\right)$ is the objective function to be maximized.\n\n\n\\subsection{Influence maximization}\n\nIn this section, we present the evidential positive opinion influence maximization model. Its purpose is to find a set of nodes $S$ that maximizes the objective function $\\sigma\\left(S\\right)$. Given a directed social network $G=\\left(V,\\, E\\right)$ and an integer $k\\leq |V|$, the goal is to find a set of users $S\\subseteq V$, $|S|=k$, that maximizes $\\sigma\\left(S\\right)$. We proved that $\\sigma\\left(S\\right)$ is monotone and sub-modular, and that the influence maximization under the proposed model is NP-Hard. However, the page limitation
However, the number of pages limitation\nprevents us to present proofs in detail. \n\nThe influence maximization under the evidential positive opinion influence\nmaximization model is NP-Hard, consequently, the greedy algorithm performs\ngood approximation for the optimal solution especially when we use\nit with this formula: \n\\begin{equation}\n\\label{eq:mg}\n\\sigma_{Bel} \\left(S\\cup\\left\\{ x\\right\\} \\right)-\\sigma_{Bel}\\left(S\\right)=1+\\sum_{v\\in V\\setminus S\\,}\\sum_{a\\in D_{IN}\\left(v\\right)\\cup\\left\\{ v\\right\\} }Inf\\left(x,a\\right).Inf\\left(a,v\\right)\n\\end{equation}\nthat computes the marginal gain of a candidate node $x$. We choose\nthe cost effective lazy-forward algorithm (CELF) \\cite{Leskovec07b}\nwhich is a two pass modified greedy algorithm. It exploits the sub-modularity\nproperty of the objective function, also, it is about 700 times faster\nthen the basic greedy algorithm.\nThe CELF based evidential influence maximization algorithm starts by estimating the marginal gain of all users in the network and sorts them according to their marginal gain, then, it selects the user that have the maximum marginal gain and add it to $S$ (seed set). After that, the algorithm iterates on the following steps until getting $|S| = k$: 1) Choose the next user in the list, 2) Update its marginal gain (formula (\\ref{eq:mg})), and 3) If the chosen node keeps its position in the list (it still the maximum)\nthen add it to $S$\n\n\n\\section{Experiments}\n\nIn this section, we conduct some experiments on real world data. We used the library Twitter4j\\footnote{http:\/\/twitter4j.org\/en\/index.html} which is a java implementation of the Twitter API to collect Twitter data. We crawled the Twitter network for the period between 08\/09\/2014 and 03\/11\/2014, and we filtered our data by keeping only tweets that talk about smartphones and users that have at least one tweet in the data base. 
To estimate the opinion polarity of each tweet in our data set, we first used the Java library ``Stanford POS Tagger''\\footnote{http:\/\/nlp.stanford.edu\/software\/tagger.shtml} with the model ``GATE Twitter part-of-speech tagger''\\footnote{https:\/\/gate.ac.uk\/wiki\/twitter-postagger.html}, which was designed for tweets. This step assigns a tag (verb, noun, etc.) to each word in the tweet. Next, we estimated the opinion polarity of each tweet using the SentiWordNet 3.0\\footnote{http:\/\/sentiwordnet.isti.cnr.it\/} dictionary and the tags from the first step. We estimated $m^{\\Omega}\\left(u,v\\right)$ using the network structure and past propagation between $u$ and $v$. First, we calculated the number of common neighbors of $u$ and $v$, the number of tweets where $u$ mentions $v$, and the number of tweets where $v$ retweets from $u$. Then we used the process defined by \\textit{Wei et al.} \\cite{Wei13} to estimate a BBA for each of these variables. Finally, we combined the resulting BBAs to obtain $m^{\\Omega}\\left(u,v\\right)$. In this section, we call \\textit{belief model} our model with $Inf\\left(u,v\\right)=m^{\\Omega}\\left(u,v\\right)\\left(I\\right)$ as influence measure, \\textit{CD model} the credit distribution model, and \\textit{opinion model} the proposed positive opinion based model.\n\nThe goal of the first experiment is to show that the proposed model\ndetects influential spreaders well. To examine the quality of the selected\nseeds, we fixed four comparison criteria: the number of\nfollowers, \\#Follow, the number of tweets, \\#Tweet, and the number of\ntimes the user was mentioned and retweeted, \\#Mention and \\#Retweet.\nIndeed, we assume that an influencer on Twitter is necessarily\nvery active (so he has many tweets), is followed by many users\nin the network, is frequently mentioned, and has his tweets retweeted many times. 
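The BBA-combination step mentioned above can be sketched with Dempster's rule of combination. This is a minimal illustration under assumptions: the exact rule, frame, and discounting used with the process of \cite{Wei13} may differ, and the masses below are made up for the example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two BBAs defined on the same frame via Dempster's rule.

    Each BBA is a dict mapping focal elements (frozensets) to masses;
    mass falling on the empty intersection (conflict) is renormalised
    away.
    """
    combined = {}
    conflict = 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b
    if conflict >= 1.0:
        raise ValueError("total conflict: BBAs cannot be combined")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}
```

For example, combining evidence for "influencer" ($I$) from mentions and retweets, each leaving some mass on the whole frame, concentrates mass on $I$ while keeping the result a valid BBA (masses summing to 1).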
In Figure~\\ref{fig:Comparison}, we compare the maximization results of the proposed opinion model with the CD model and the belief model according to the fixed criteria. We see that\nthe proposed opinion model detects influencers that\nhave many followers (more than 8000 for 50 influencers), many tweets (over 250 for 50 users), many mentions (about 1200) and many retweets (about 800).\nHowever, the users detected by the belief model satisfy only two criteria, \\textit{i.e.} \\#Follow (over 8000 followers for 50 users) and \\#Tweet (over 150 tweets for 50 users), and the CD model does not satisfy any criterion. This shows that the opinion model is the best at detecting influencers.\n\n\\begin{figure}\n\\begin{centering}\n\\includegraphics[scale=0.45]{IphoneDataBaseFollowCompare.png} \\includegraphics[scale=0.45]{IphoneDataBaseMentionCompare.png}\n\\par\\end{centering}\n\n\\begin{centering}\n\\includegraphics[scale=0.45]{IphoneDataBaseRetweetCompare.png} \\includegraphics[scale=0.45]{IphoneDataBaseTweetCompare.png}\n\\par\\end{centering}\n\n\\caption{Comparison between opinion model, belief model and CD model according to \\#Follow,\n\\#Mention, \\#Retweet and \\#Tweet\\label{fig:Comparison}}\n\\end{figure}\n\n\nIn a second experiment, we calculated the mean positive opinion of the first 100 influential users. The proposed model performed well by selecting influencers that have a positive opinion about the product: it gives a mean positive opinion equal to 0.89 ($\\pm0.04$, $95\\%$ confidence interval), whereas the belief model gives 0.34 ($\\pm0.05$) and the CD model only 0.09 ($\\pm0.04$). 
These results show that the proposed model selects influential users that have a positive opinion, which the belief and CD models fail to do.\n\n\n\\section{Conclusion}\n\nIn this paper, we proposed a new influence measure that estimates the positive opinion\ninfluence of OSN users. We used the theory of belief functions to deal with the problem of data imperfection. In future work, we will seek to improve the proposed influence maximization model by considering other parameters such as the user's profile and the propagation time.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nNearly five decades after the publication in 1974 of Allen Schwenk's article \\cite{Schwenk1974}, the determination\nof the spectrum and characteristic polynomial of the generalized composition $H[G_1, \\dots, G_p]$ (recently\ndesignated the $H$-join of $\\mathcal{G}=\\{G_1, \\dots, G_p\\}$ \\cite{Cardoso_et_al2013}), in terms of the spectra\n(and characteristic polynomials) of the graphs in $\\mathcal{G}$ and an associated matrix, where all graphs\nare undirected, simple and finite, was limited to families of regular graphs. Very recently, in\n\\cite{SaravananMuruganArunkumar2020}, as an application of a new generalization of Fiedler's Lemma, the\ncharacteristic polynomial of the universal adjacency matrix of the $H$-join of a family of arbitrary graphs\nwas determined in terms of the characteristic polynomial and a related rational function of each component, and\nthe determinant of an associated matrix. In this work, using a distinct approach, the determination of the spectrum\n(and characteristic polynomial) of the $H$-join, in terms of its components and an associated matrix, is extended to\nfamilies of arbitrary graphs (which should be undirected, simple and finite).\\\\\n\nThe generalized composition $H[G_1, \\dots, G_p]$, introduced in \\cite[p. 
167]{Schwenk1974}, was rediscovered in\n\\cite{Cardoso_et_al2013} under the designation of the $H$-join of a family of graphs $\\mathcal{G}=\\{G_1, \\dots, G_p\\}$,\nwhere $H$ is a graph of order $p$. In \\cite[Th. 7]{Schwenk1974}, assuming that $G_1, \\dots, G_p$ are all regular graphs\nand taking into account that the partition $\\pi$ of $V(G_1) \\cup \\dots \\cup V(G_p)$ into the cells $V(G_1), \\dots, V(G_p)$ is equitable, the characteristic\npolynomial of $H[G_1, \\dots, G_p]$ is determined in terms of the characteristic polynomials of the graphs\n$G_1, \\dots, G_p$ and the matrix associated to $\\pi$. Using a generalization of a result of Fiedler\n\\cite[Lem. 2.2]{Fiedler1974} obtained in \\cite[Th. 3]{Cardoso_et_al2013}, the spectrum of the $H$-join of a family of\nregular graphs (not necessarily connected) is determined in \\cite[Th. 5]{Cardoso_et_al2013}.\n\nWhen the graphs of the family $\\mathcal{G}$ are all isomorphic to a fixed graph $G$, the $H$-join of $\\mathcal{G}$\ncoincides with the lexicographic product (also called the composition) of the graphs $H$ and $G$, which is denoted\n$H[G]$ (or $H \\circ G$). The lexicographic product of two graphs was introduced by Harary in \\cite{harary1} and\nSabidussi in \\cite{sabidussi} (see also \\cite{harary2, hammack_et_al}). From the definition, it is immediate that\nthis graph operation is associative but not commutative.\n\nIn \\cite{abreu_et_al}, as an application of the $H$-join spectral properties, the lexicographic powers of a graph\n$H$ were considered and their spectra determined, when $H$ is regular. The $k$-th lexicographic power of $H$, $H^k$,\nis the lexicographic product of $H$ by itself $k$ times (so $H^2=H[H], H^3=H[H^2]=H^2[H], \\dots$). As an example,\nin \\cite{abreu_et_al}, the spectrum of the $100$-th lexicographic power of the Petersen graph, which has a googol\n(that is, $10^{100}$) of vertices, was determined. 
With these powers, $H^k$, the\nlexicographic polynomials were introduced in \\cite{Cardoso_et_al2017} and their spectra determined, for connected regular graphs $H$, in terms\nof the spectrum of $H$ and the coefficients of the polynomial.\n\nOther particular $H$-join graph operations appear in the literature under different designations, as is the case\nof the mixed extension of a graph $H$ studied in \\cite{haemers_et_al}, where special attention is given to the mixed\nextensions of $P_3$. The mixed extension of a graph $H$, with vertex set $V(H)=\\{1, \\dots, p\\}$, is the $H$-join of\na family of graphs $\\mathcal{G}=\\{G_1, \\dots, G_p\\}$, where each graph $G_i \\in \\mathcal{G}$ is a complete graph or\nits complement. From the $H$-join spectral properties, we may conclude that the mixed extensions of a graph $H$ of\norder $p$ have at most $p$ eigenvalues unequal to $0$ and $-1$.\\\\\n\nThe remainder of the paper is organized as follows. Section~\\ref{sec_2} is devoted to the preliminaries,\nnamely the notation and basic definitions, the main spectral results of the $H$-join graph operation, and the more\nrelevant properties, in the context of this work, of the main characteristic polynomial and walk-matrix of a graph.\nIn Section~\\ref{sec_3}, the main result of this article, the determination of the spectrum of the $H$-join\nof a family of arbitrary graphs, is deduced. Section~\\ref{sec_4} includes some final remarks, namely on\nparticular cases of the $H$-join, such as the lexicographic product, and on the determination of the eigenvectors of the\nadjacency matrix of the $H$-join in terms of the eigenvectors of the adjacency matrices of the components and an\nassociated matrix.\n\n\\section{Preliminaries}\\label{sec_2}\n\n\\subsection{Notation and basic definitions}\n\nThroughout the text we consider undirected, simple and finite graphs, which are just called graphs. 
The vertex set\nand the edge set of a graph $G$ are denoted by $V(G)$ and $E(G)$, respectively. The order of $G$ is the cardinality\nof its vertex set and when it is $n$ we consider that $V(G) = \\{1, \\dots, n\\}$. The eigenvalues of the adjacency\nmatrix of a graph $G$, $A(G)$, are also called the eigenvalues of $G$.\nFor each distinct eigenvalue $\\mu$ of $G$, ${\\mathcal E}_G(\\mu)$ denotes the eigenspace of $\\mu$, whose dimension is\nequal to the algebraic multiplicity of $\\mu$, $m(\\mu)$. The spectrum of a graph $G$ of order $n$ is denoted by\n$\\sigma(G)=\\{\\mu^{[m_1]}_1,\\dots,\\mu^{[m_s]}_s,\\mu^{[m_{s+1}]}_{s+1}, \\dots, \\mu_t^{[m_t]}\\}$, where\n$\\mu_1,\\dots,\\mu_s,\\mu_{s+1}, \\dots, \\mu_t$ are the distinct eigenvalues of $G$, $\\mu_i^{[m_i]}$ means that\n$m(\\mu_i)=m_i$ and then $\\sum_{j=1}^{t}{m_j}=n$. When we say that $\\mu$ is an eigenvalue of $G$ with zero multiplicity\n(that is, $m(\\mu)=0$) it means that $\\mu \\not \\in \\sigma(G)$. The distinct eigenvalues of $G$ are indexed in such a way\nthat the eigenspaces ${\\mathcal E}_G(\\mu_i)$, for $1 \\le i \\le s$, are not orthogonal to ${\\bf j}_n$, the all-$1$\nvector with $n$ entries (sometimes we simply write ${\\bf j}$). The eigenvalues $\\mu_i$, with $1 \\le i \\le s$, are\ncalled main eigenvalues of $G$ and the remaining distinct eigenvalues non-main. The concept of main (non-main)\neigenvalue was introduced in \\cite{cvetkov70} and further investigated in several publications. As is well known,\nthe largest eigenvalue of a connected graph $G$ is main and, when $G$ is regular, all its remaining distinct\neigenvalues are non-main \\cite{cds79}. 
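These definitions are easy to check numerically. The following sketch (an illustration using numpy, not part of the paper) computes the distinct main eigenvalues of a graph and, independently, the rank of the walk matrix $({\bf j}, A{\bf j}, \dots, A^{n-1}{\bf j})$, which is known to equal their number:

```python
import numpy as np

def main_eigenvalues(A, tol=1e-8):
    """Distinct eigenvalues of the symmetric matrix A whose eigenspace
    is not orthogonal to the all-ones vector j (the main eigenvalues)."""
    n = A.shape[0]
    j = np.ones(n)
    vals, vecs = np.linalg.eigh(A)
    main = set()
    for lam, v in zip(vals, vecs.T):
        if abs(v @ j) > tol:  # eigenvector not orthogonal to j
            main.add(round(float(lam), 8))
    return sorted(main)

def walk_matrix_rank(A):
    """Rank of (j, Aj, A^2 j, ..., A^{n-1} j)."""
    n = A.shape[0]
    cols = [np.ones(n)]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.linalg.matrix_rank(np.column_stack(cols))
```

For the path $P_3$ both computations give $s=2$ (main eigenvalues $\pm\sqrt{2}$), while for any connected regular graph they give $s=1$, with the degree as the single main eigenvalue.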
A survey on main eigenvalues was published in \\cite{rowmain}.\n\n\\subsection{The $H$-join operation}\n\nNow we recall the definition of the $H$-join of a family of graphs \\cite{Cardoso_et_al2013}.\n\n\\begin{definition}\\label{def_h-join}\nConsider a graph $H$ with vertex subset $V(H)=\\{1, \\dots, p\\}$ and a family of graphs\n$\\mathcal{G} = \\{G_1, \\dots, G_p\\}$ such that $|V(G_1)|=n_1, \\dots , |V(G_p)|=n_p$.\nThe $H$-join of $\\mathcal{G}$ is the graph\n$$\nG = \\bigvee_{H}{\\mathcal{G}}\n$$\nin which $V(G) = \\bigcup_{j=1}^{p}{V(G_j)}$ and\n$E(G) = \\left(\\bigcup_{j=1}^{p}{E(G_j)}\\right) \\cup \\left(\\bigcup_{rs \\in E(H)}{E(G_r \\vee G_s)}\\right)$,\nwhere $G_r \\vee G_s$ denotes the join.\n\\end{definition}\n\n\\begin{theorem}\\label{H-Join_Spectra} \\cite{Cardoso_et_al2013}\nLet $G$ be the $H$-join as in Definition~\\ref{def_h-join}, where $\\mathcal{G}$ is a family of regular graphs\nsuch that $G_1$ is $d_1$-regular, $G_2$ is $d_2$-regular, $\\dots$ and $G_p$ is $d_p$-regular. Then\n\\begin{equation}\\label{h-join-spectra}\n\\sigma(G) = \\left(\\bigcup_{j=1}^{p}{\\left(\\sigma(G_j) \\setminus \\{d_j\\}\\right)}\\right) \\cup \\sigma(\\widetilde{C}),\n\\end{equation}\nwhere the matrix $\\widetilde{C}$ has order $p$ and is such that\n\\begin{equation}\\label{matrix_c}\n\\left(\\widetilde{C}\\right)_{rs} = \\left\\{\\begin{array}{ll}\n d_r & \\hbox{if } r=s,\\\\\n \\sqrt{n_rn_s} & \\hbox{if } rs \\in E(H),\\\\\n 0 & \\hbox{otherwise,} \\\\\n \\end{array}\\right.\n\\end{equation}\nand the set operations in \\eqref{h-join-spectra} are done considering possible repetitions of elements of the multisets.\n\\end{theorem}\n\nFrom the above theorem, if there is $G_i \\in \\mathcal{G}$ which is disconnected, with $q$ components, then its\nregularity $d_i$ appears $q$ times in the multiset $\\sigma(G_i)$. 
Therefore, according to \\eqref{h-join-spectra},\n$d_i$ remains as an eigenvalue of $G$ with multiplicity $q-1$.\n\nFrom now on, given a graph $H$, we consider the following notation:\n$$\n\\delta_{i,j}(H) = \\left\\{\\begin{array}{ll}\n 1 & \\hbox{if } ij \\in E(H), \\\\\n 0 & \\hbox{otherwise.}\n \\end{array}\n \\right.\n$$\n\nBefore the next result, it is worth observing the following. Considering a graph $G$, it is always possible to extend\na basis of the eigensubspace associated to a main eigenvalue $\\mu_j$, ${\\mathcal E}_G(\\mu_j) \\cap {\\bf j}^{\\perp}$,\nto one of ${\\mathcal E}_G(\\mu_j)$ by adding an eigenvector, $\\hat{\\bf u}_{\\mu_j}$, which is orthogonal to\n${\\mathcal E}_G(\\mu_j) \\cap {\\bf j}^{\\perp}$ and uniquely determined, up to multiplication by a\nnonzero scalar. The eigenvector $\\hat{\\bf u}_{\\mu_j}$ is called the main eigenvector of $\\mu_j$. The subspace with\nbasis $\\{\\hat{\\bf u}_{\\mu_1}, \\dots, \\hat{\\bf u}_{\\mu_s}\\}$ is the main subspace of $G$ and is denoted as $Main(G)$.\nNote that for each main eigenvector $\\hat{\\bf u}_{\\mu_j}$ of the basis of $Main(G)$,\n$\\hat{\\bf u}_{\\mu_j}^T{\\bf j} \\ne 0$.\n\n\\begin{lemma}\\label{main_and_nom-main_eigenvalues-join}\nLet $G$ be the $H$-join as in Definition~\\ref{def_h-join} and $\\mu_{i,j} \\in \\sigma(G_i)$. Then\n$\\mu_{i,j} \\in \\sigma(G)$ with multiplicity\n$$\n\\left\\{\\begin{array}{ll}\n m(\\mu_{i,j}) & \\hbox{if $\\mu_{i,j}$ is a non-main eigenvalue of } G_i, \\\\\n m(\\mu_{i,j})-1 & \\hbox{if $\\mu_{i,j}$ is a main eigenvalue of } G_i.\n \\end{array}\\right.\n$$\n\\end{lemma}\n\n\\begin{proof}\nDenoting $\\delta_{i,j} = \\delta_{i,j}(H)$, the matrix $\\delta_{i,j} {\\bf j}_{n_i}{\\bf j}^T_{n_j}$ is an $n_i \\times n_j$\nmatrix whose entries are 1 if $ij \\in E(H)$ and $0$ otherwise. 
Then the adjacency matrix of $G$ has the form\n$$\nA(G) = \\left(\\begin{array}{ccccc}\n A(G_1) & \\delta_{1,2}{\\bf j}_{n_1} {\\bf j}^T_{n_2} & \\cdots & \\delta_{1,p-1}{\\bf j}_{n_1} {\\bf j}^T_{n_{p-1}}&\\delta_{1,p}{\\bf j}_{n_1} {\\bf j}^T_{n_p}\\\\\n \\delta_{2,1}{\\bf j}_{n_2} {\\bf j}^T_{n_1} & A(G_2) & \\cdots & \\delta_{2,p-1}{\\bf j}_{n_2} {\\bf j}^T_{n_{p-1}}&\\delta_{2,p}{\\bf j}_{n_2} {\\bf j}^T_{n_p}\\\\\n \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\n \\delta_{p-1,1}{\\bf j}_{n_{p-1}}{\\bf j}^T_{n_1} &\\delta_{p-1,2}{\\bf j}_{n_{p-1}} {\\bf j}^T_{n_2} & \\cdots & A(G_{p-1})&\\delta_{p-1,p}{\\bf j}_{n_{p-1}} {\\bf j}^T_{n_p}\\\\\n \\delta_{p,1}{\\bf j}_{n_p}{\\bf j}^T_{n_1} &\\delta_{p,2}{\\bf j}_{n_p} {\\bf j}^T_{n_2}& \\cdots &\\delta_{p,p-1}{\\bf j}_{n_p} {\\bf j}^T_{n_{p-1}} & A(G_p)\\\\\n \\end{array}\\right).\n$$\nLet $\\hat{\\bf u}_{i,j}$ be an eigenvector of $A(G_i)$ associated to an eigenvalue $\\mu_{i,j}$ whose sum of its\ncomponents is zero (then, $\\mu_{i,j}$ is non-main or it is main with multiplicity greater than one). 
Then,\n\\begin{equation}\\label{eigenvalue-equation}\nA(G) \\left(\\begin{array}{c}\n 0 \\\\\n \\vdots \\\\\n 0 \\\\\n \\hat{\\bf u}_{i,j} \\\\\n 0 \\\\\n \\vdots \\\\\n 0\n \\end{array}\\right) = \\left(\\begin{array}{c}\n \\delta_{1,i}\\left({\\bf j}^T_{n_i}\\hat{\\bf u}_{i,j}\\right){\\bf j}_{n_1} \\\\\n \\vdots \\\\\n \\delta_{i-1,i}\\left({\\bf j}^T_{n_i}\\hat{\\bf u}_{i,j}\\right){\\bf j}_{n_{i-1}} \\\\\n A(G_i)\\hat{\\bf u}_{i,j} \\\\\n \\delta_{i+1,i}\\left({\\bf j}^T_{n_i}\\hat{\\bf u}_{i,j}\\right){\\bf j}_{n_{i+1}} \\\\\n \\vdots \\\\\n \\delta_{p,i}\\left({\\bf j}^T_{n_i}\\hat{\\bf u}_{i,j}\\right){\\bf j}_{n_p}\n \\end{array}\\right).\n\\end{equation}\nIt should be noted that when $\\mu_{i,j}$ is main, there are $m(\\mu_{i,j})-1$ linear independent eigenvectors\nbelonging to ${\\mathcal E}_G(\\mu_{i,j}) \\cap {\\bf j}^{\\perp}$.\n\\end{proof}\n\n\\subsection{The main characteristic polynomial and the walk-matrix}\nIf $G$ has $s$ distinct main eigenvalues $\\mu_1, \\dots, \\mu_s$, then the main characteristic polynomial of $G$\nis the polynomial of degree $s$ \\cite{rowmain}\n\\begin{eqnarray}\nm_G(x) &=& \\Pi_{i=1}^{s}{(x-\\mu_i)} \\nonumber\\\\\n &=& x^s - c_{0} - c_{1}x - \\cdots - c_{s-2} x^{s-2} - c_{s-1} x^{s-1}. \\label{mcpG}\n\\end{eqnarray}\nAs referred in \\cite{rowmain} (see also \\cite{CvetkovicPetric84}), if $\\mu$ is a main eigenvalue of $G$, so is its\nalgebraic conjugate $\\mu^*$ and then the coefficients of $m_G(x)$ are integers. Furthermore, it is worth to recall\nthe next result which follows from \\cite[Th. 2.5]{Teranishi2001} (see also \\cite{rowmain}).\n\n\\begin{theorem}\\cite[Prop. 2.1]{rowmain}\\label{minimal_p}\nFor every polynomial $f(x) \\in \\mathbb{Q}[x]$, $f(A(G)){\\bf j}={\\bf 0}$ if and only if $m_G(x)$ divides $f(x)$.\n\\end{theorem}\n\nIn particular, it is immediate that $m_G(A(G)){\\bf j}={\\bf 0}$. 
Therefore,\n\\begin{equation}\\label{main_polynomial}\nA^s(G){\\bf j} = c_{0}{\\bf j} + c_{1}A(G){\\bf j} + \\cdots + c_{s-2} A^{s-2}(G){\\bf j} + c_{s-1} A^{s-1}(G){\\bf j}.\n\\end{equation}\n\nGiven a graph $G$ of order $n$, let us consider the $n \\times k$ matrix \\cite{harscwk2, posu}\n$$\n{\\bf W}_{G;k} = \\left({\\bf j}, A(G) {\\bf j}, A^{2}(G){\\bf j}, \\ldots, A^{k-1}(G){\\bf j} \\right ).\n$$\nThe vector space spanned by the columns of ${\\bf W}_{G;k}$ is denoted as $ColSp{\\bf W}_{G;k}$. The matrix\n${\\bf W}_{G;k}$ with the largest integer $k$ such that the dimension of $ColSp{\\bf W}_{G;k}$ is equal to $k$, that is,\nsuch that its columns are linearly independent, is referred to as the walk-matrix of $G$ and is just denoted as\n${\\bf W}_G$. Then, as a consequence of Theorem~\\ref{minimal_p} and equality \\eqref{main_polynomial}, the following\ntheorem, which appears in \\cite{hagos1}, follows.\n\n\\begin{theorem}\\cite[Th. 2.1]{hagos1} \\label{hagos}\nThe rank of $W_G$ is equal to the number of main eigenvalues of the graph $G$.\n\\end{theorem}\n\nFrom Theorem~\\ref{hagos}, we may conclude that the number of distinct main eigenvalues is\n$s = \\max \\{k: \\{{\\bf j}, A(G){\\bf j}, A^2(G){\\bf j}, \\ldots, A^{k-1}(G){\\bf j}\\} \\text{ is linearly\nindependent}\\}.$ \\\\\n\nThe equality \\eqref{main_polynomial} also implies the next corollary.\n\n\\begin{corollary} \\label{ap}\nThe $s$-th column of $A(G){\\bf W}_G$ is $ A^{s}(G){\\bf j} = {\\bf W}_G\\left(\\begin{array}{c}\n c_{0} \\\\\n \\vdots \\\\\n c_{s-2} \\\\\n c_{s-1} \\\\\n \\end{array} \\right ),$\nwhere $c_j$, for $0 \\le j \\le s-1$, are the coefficients of the main characteristic polynomial $m_G$,\ngiven in \\eqref{mcpG}.\n\\end{corollary}\n\nThis corollary allows the determination of the coefficients of the main characteristic polynomial, $m_G$, by solving\nthe linear system ${\\bf W_G \\hat{x}} = A^{s}(G){\\bf j}$.\\\\\n\nFrom \\cite[Th. 
2.4]{rowmain} we may conclude the following theorem.\n\n\\begin{theorem}\\label{colspw}\nLet $G$ be a graph with adjacency matrix $A(G)$. Then $ColSp{\\bf W_G}$ coincides with $Main(G)$. Moreover $Main(G)$\nand the vector space spanned by the vectors orthogonal to $Main(G)$, $\\left(Main(G)\\right)^{\\perp}$, are both\n$A(G)$--invariant.\n\\end{theorem}\n\nFrom the above definitions, if $G$ is a $r$-regular graph of order $n$, since its largest eigenvalue, $r$,\nis the unique main eigenvalue, then $m_G(x) = x - r$ and $W_G = \\left( {\\bf j}_n \\right)$.\n\n\\section{The spectrum of the $H$-join of a family of arbitrary graphs}\\label{sec_3}\n\nBefore the main result of this article, we need to define a special matrix ${\\bf \\widetilde{W}}$ which will be called\nthe $H$-join associated matrix.\n\n\\begin{definition}\\label{main_def}\nLet $G$ be the $H$-join as in Definition~\\ref{def_h-join} and denote $\\delta_{i,j}=\\delta_{i,j}(H)$. For each\n$G_i \\in \\mathcal{G}$, consider the main characteristic polynomial \\eqref{mcpG},\n$m_{G_i}(x)=x^{s_i} - c_{i,0} - c_{i,1}x - \\cdots - c_{i,s_i-1} x^{s_i-1}$ and its walk-matrix ${\\bf W}_{G_i}$.\nThe $H$-join associated matrix is the $s \\times s$ matrix, with $s= \\sum_{i=1}^{p}{s_i}$,\n$$\n\\widetilde{\\bf W} = \\left(\\begin{array}{ccccc}\n {\\bf C}(m_{G_1}) & \\delta_{1,2}{\\bf M}_{1,2}& \\dots & \\delta_{1,p-1}{\\bf M}_{1,p-1}& \\delta_{1,p}{\\bf M}_{1,p} \\\\\n \\delta_{2,1}{\\bf M}_{2,1}& {\\bf C}(m_{G_2}) & \\dots & \\delta_{2,p-1}{\\bf M}_{2,p-1}& \\delta_{2,p}{\\bf M}_{2,p} \\\\\n \\vdots & \\vdots &\\ddots & \\vdots & \\vdots \\\\\n \\delta_{p,1}{\\bf M}_{p,1}& \\delta_{p,2}{\\bf M}_{p,2}& \\dots & \\delta_{p,p-1}M_{p,p-1}& {\\bf C}(m_{G_p})\n \\end{array}\\right), \\text{ where}\n$$\n${\\bf C}(m_{G_i}) = \\left(\\begin{array}{ccccc}\n 0 & 0 &\\dots & 0 & c_{i,0} \\\\\n 1 & 0 &\\dots & 0 & c_{i,1} \\\\\n 0 & 1 &\\dots & 0 & c_{i,2} \\\\\n \\vdots &\\vdots &\\ddots&\\vdots &\\vdots\\\\\n 0 & 0 &\\dots & 1 
&c_{i,s_i-1} \\\\\n \\end{array}\\right)$ and ${\\bf M}_{i,j}=\\left(\\begin{array}{c}\n {\\bf j}^T_{n_j}{\\bf W}_{G_j} \\\\\n 0 \\; \\dots \\; 0 \\\\\n \\vdots \\; \\ddots\\; \\vdots\\\\\n 0 \\; \\dots \\; 0 \\\\\n \\end{array}\\right)$, for $1 \\le i,j \\le p$.\n\\end{definition}\n\nNote that the ${\\bf C}(m_{G_i})$ is the Frobenius companion matrix of the main characteristic polynomial $m_{G_i}$\nand ${\\bf M}_{i,j}$ is a $s_i \\times s_j$ matrix such that its first row is\n${\\bf j}_{n_i}^T{\\bf W}_{G_i} = (N_0^i, N_1^i, \\dots, N_{s_i-1}^i)$, where $N_k^i$ is the number of walks of length $k$ in\n$G_i$, for $0 \\le k \\le s_i-1$ (considering $N^i_0=n_i$) and the entries of the remaining rows are equal to zero.\n\n\\begin{theorem}\\label{main_theorem}\nLet $G$ be the $H$-join as in Definition~\\ref{def_h-join}, where $\\mathcal{G}$ is a family of arbitrary graphs.\nIf for each graph $G_i$, with $1 \\le i \\le p$,\n\\begin{equation}\\label{spectrum_Gi}\n\\sigma(G_i)=\\{\\mu_{i,1}^{[m_{i,1}]}, \\dots, \\mu_{i,s_i}^{[m_{i,s_i}]}, \\mu_{i,s_i+1}^{[m_{i,s_i+1}]}, \\dots, \\mu_{i,t_i}^{[m_{i,t_i}]}\\},\n\\end{equation}\nwhere $m_{i,j}=m(\\mu_{i,j})$ and $\\mu_{i,1}, \\dots, \\mu_{i,s_i}$ are the main distinct eigenvalues of $G_i$, then\n\\begin{eqnarray}\n\\sigma(G) &=& \\bigcup_{i=1}^{p}{\\{\\mu_{i,1}^{[m_{i,1}-1]}, \\dots, \\mu_{i,s_i}^{[m_{i,s_i}-1]}\\}} \\cup\n \\bigcup_{i=1}^{p}{\\{\\mu_{i,s_i+1}^{[m_{i,s_i+1}]}, \\dots, \\mu_{i,t_i}^{[m_{i,t_i}]}\\}} \\cup\n \\sigma({\\bf \\widetilde{W}}), \\label{G_spectrum}\n\\end{eqnarray}\nwhere the union of multisets is considered with possible repetitions.\n\\end{theorem}\n\n\\begin{proof}\nFrom Lemma~\\ref{main_and_nom-main_eigenvalues-join} it is immediate that\n$$\n\\bigcup_{i=1}^{p}{\\{\\mu_{i,1}^{[m_{i,1}-1]}, \\dots, \\mu_{i,s_i}^{[m_{i,s_i}-1]}\\}} \\cup\n \\bigcup_{i=1}^{p}{\\{\\mu_{i,s_i+1}^{[m_{i,s_i+1}]}, \\dots, \\mu_{i,t_i}^{[m_{i,t_i}]}\\}} \\subseteq \\sigma(G).\n$$\nSo it just remains to prove that 
$\\sigma(\\widetilde{{\\bf W}}) \\subseteq \\sigma(G)$.\\\\\n\nLet us define the vector\n\\begin{eqnarray}\n\\hat{\\bf v} &=& \\left(\\begin{array}{c}\n \\hat{\\bf v}_{1} \\\\\n \\vdots \\\\\n \\hat{\\bf v}_{p} \\\\\n \\end{array}\\right), \\text{ such that } \\label{vector_v}\\\\\n\\hat{\\bf v}_{i} &=& \\sum_{k=0}^{s_i-1}{\\alpha_{i,k}A^k(G_i){\\bf j}_{n_i}}\n = {\\bf W}_{G_i}\\hat{\\mathbf{\\alpha}}_{i},\\label{main_vector_Gi}\n\\end{eqnarray}\nwhere $\\hat{\\mathbf{\\alpha}}_{i} = \\left(\\begin{array}{c}\n \\alpha_{i,0} \\\\\n \\alpha_{i,1} \\\\\n \\vdots \\\\\n \\alpha_{i,s_i-1} \\\\\n \\end{array} \\right ), \\text{ for } 1 \\le i \\le p.$\nFrom \\eqref{main_vector_Gi}, each $\\hat{\\bf v}_{i} \\in Main(G_i)$ and then all vectors $\\hat{\\bf v}$ defined in\n\\eqref{vector_v} are orthogonal to the eigenvectors of $A(G)$ in \\eqref{eigenvalue-equation}. Moreover,\n\n\\begin{equation}\\label{main_subspace}\nA(G_i)\\hat{\\bf v}_{i} = A(G_i){\\bf W}_{G_i}\\hat{\\mathbf{\\alpha}}_{i} = \\sum_{k=0}^{s_i-1}{\\alpha_{i,k}A^{k+1}(G_i){\\bf j}_{n_i}}, \\text{ for } 1 \\le i \\le p.\n\\end{equation}\n\nTherefore,\n\n\\begin{eqnarray}\nA(G)\\hat{\\bf v} &=& \\left(\\begin{array}{cccc}\nA(G_1) & \\delta_{1,2}{\\bf j}_{n_1} {\\bf j}^T_{n_2}& \\cdots & \\delta_{1,p}{\\bf j}_{n_1} {\\bf j}^T_{n_p}\\\\\n\\delta_{2,1}{\\bf j}_{n_2} {\\bf j}^T_{n_1} & A(G_2) & \\cdots & \\delta_{2,p}{\\bf j}_{n_2} {\\bf j}^T_{n_p}\\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\delta_{p,1}{\\bf j}_{n_p}{\\bf j}^T_{n_1} &\\delta_{p,2}{\\bf j}_{n_p} {\\bf j}^T_{n_2} & \\cdots & A(G_p)\\\\\n \\end{array}\\right) \\left(\\begin{array}{c}\n \\hat{\\bf v}_{1}\\\\\n \\hat{\\bf v}_{2}\\\\\n \\vdots \\\\\n \\hat{\\bf v}_{p}\n \\end{array}\\right) \\nonumber \\\\\n&=& \\left(\\begin{array}{c}\n A(G_1)\\hat{\\bf v}_{1} + \\left(\\sum_{k \\in [p] \\setminus \\{1\\}}{\\delta_{1,k}{\\bf j}^T_{n_k}\\hat{\\bf v}_{k}}\\right){\\bf j}_{n_1}\\\\\n A(G_2)\\hat{\\bf v}_{2} + \\left(\\sum_{k \\in [p] \\setminus 
\\{2\\}}{\\delta_{2,k}{\\bf j}^T_{n_k}\\hat{\\bf v}_{k}}\\right){\\bf j}_{n_2}\\\\\n \\vdots \\\\\n A(G_p)\\hat{\\bf v}_{p} + \\left(\\sum_{k \\in [p] \\setminus \\{p\\}}{\\delta_{p,k}{\\bf j}^T_{n_k}\\hat{\\bf v}_{k}}\\right){\\bf j}_{n_p}\n \\end{array}\\right) \\label{walk_1}\\\\\n&=& \\left(\\begin{array}{c}\n A(G_1)\\hat{\\bf v}_{1} + \\left(\\sum_{k \\in [p] \\setminus \\{1\\}}{\\delta_{1,k}{\\bf j}^T_{n_k}{\\bf W}_{G_k}\\hat{\\mathbf{\\alpha}}_{k}}\\right){\\bf j}_{n_1}\\\\\n A(G_2)\\hat{\\bf v}_{2} + \\left(\\sum_{k \\in [p] \\setminus \\{2\\}}{\\delta_{2,k}{\\bf j}^T_{n_k}{\\bf W}_{G_k}\\hat{\\mathbf{\\alpha}}_{k}}\\right){\\bf j}_{n_2}\\\\\n \\vdots \\\\\n A(G_p)\\hat{\\bf v}_{p} + \\left(\\sum_{k \\in [p] \\setminus \\{p\\}}{\\delta_{p,k}{\\bf j}^T_{n_k}{\\bf W}_{G_k}\\hat{\\mathbf{\\alpha}}_{k}}\\right){\\bf j}_{n_p}\n \\end{array}\\right), \\label{walk_2}\n\\end{eqnarray}\nwhere \\eqref{walk_2} is obtained applying \\eqref{main_vector_Gi} in \\eqref{walk_1}. Defining\n\\begin{equation*}\n\\beta_{i,0}=\\sum_{k \\in [p] \\setminus \\{i\\}}{\\delta_{i,k}{\\bf j}^T_{n_k}{\\bf W}_{G_k}\\hat{\\mathbf{\\alpha}}_{k}},\n \\text{ for } 1 \\le i \\le p\n\\end{equation*}\nand taking into account \\eqref{main_subspace}, the $i$-th row of \\eqref{walk_2} can be written as\n{\\small \\begin{eqnarray}\n\\hspace{-0.5cm}\\beta_{i,0}{\\bf j}_{n_i} + A(G_i)\\hat{\\bf v}_{i} &=& \\left(\\underbrace{\\sum_{k \\in [p]\\setminus\\{i\\}}{\\delta_{i,k}{\\bf j}^T_{n_k}{\\bf W}_{G_k}\\hat{\\bf \\alpha}_{k}}}_{\\beta_{i,0}}\\right){\\bf j}_{n_i} + \\sum_{k=0}^{s_i-1}{\\alpha_{i,k}A^{k+1}(G_i){\\bf j}_{n_i}} \\nonumber\\\\\n\\hspace{-0.5cm} &=& \\beta_{i,0}{\\bf j}_{n_i} + \\sum_{k=1}^{s_i-1}{\\alpha_{i,k-1}A^k(G_i){\\bf j}_{n_i}}\n + \\alpha_{i,s_i-1}A^{s_i}(G_i){\\bf j}_{n_i} \\label{ith-row_1}\\\\\n\\hspace{-0.5cm} &=& \\beta_{i,0}{\\bf j}_{n_i} + \\sum_{k=1}^{s_i-1}{\\alpha_{i,k-1}A^k(G_i){\\bf j}_{n_i}}\n + \\alpha_{i,s_i-1}{\\bf W}_{G_i}\\left(\\begin{array}{c}\n c_{i,0} 
\\\\\n c_{i,1} \\\\\n \\vdots \\\\\n c_{i,s_i-1} \\\\\n \\end{array} \\right) \\label{ith-row_2}\n\\end{eqnarray}}\n\\begin{eqnarray}\n\\qquad \\qquad &=& {\\bf W}_{G_i}\\left(\\begin{array}{c}\n \\beta_{i,0} + \\alpha_{i,s_i-1}c_{i,0} \\\\\n \\alpha_{i,0} + \\alpha_{i,s_i-1}c_{i,1}\\\\\n \\vdots \\\\\n \\alpha_{i,s_i-2} + \\alpha_{i,s_i-1}c_{i,s_i-1} \\\\\n \\end{array}\\right). \\label{ith-row_3}\n\\end{eqnarray}\nObserve that \\eqref{ith-row_2} is obtained applying Corollary~\\ref{ap} to \\eqref{ith-row_1}. Taking into account the\ndefinition of $\\beta_{i,0}$, \\eqref{ith-row_3} can be replaced by the expression\n\n{\\tiny\n$$\n\\hspace{-1.5cm}{\\bf W}_{G_i}\\underbrace{\\left(\\begin{array}{ccccccccccc}\n\\overbrace{\\delta_{i,1}{\\bf j}^T_{n_1}{\\bf W}_{G_1}}^{s_1\\text{ columns}}&\\cdots&\\overbrace{\\delta_{i,{i-1}}{\\bf j}^T_{n_{i-1}}{\\bf W}_{G_{i-1}}}^{s_{i-1}\\text{ columns}}& 0 & 0 &\\cdots& 0 & c_{i,0} & \\overbrace{\\delta_{i,{i+1}}{\\bf j}^T_{n_{i+1}}{\\bf W}_{G_{i+1}}}^{s_{i+1}\\text{ columns}} &\\cdots&\\overbrace{\\delta_{i,p}{\\bf j}^T_{n_p}{\\bf W}_{G_p}}^{s_p\\text{ columns}}\\\\\n {\\bf 0} &\\cdots& {\\bf 0}& 1 & 0 &\\cdots& 0 & c_{i,1} &{\\bf 0}&\\cdots& {\\bf 0} \\\\\n {\\bf 0} &\\cdots&{\\bf 0}& 0 & 1 &\\cdots& 0 & c_{i,2} &{\\bf 0}&\\cdots& {\\bf 0} \\\\\n \\vdots &\\ddots&\\vdots& \\vdots &\\vdots&\\ddots&\\vdots& \\vdots &\\vdots&\\ddots& \\vdots \\\\\n {\\bf 0} &\\cdots&{\\bf 0}& 0 & 0 &\\cdots& 1 &c_{i,s_i-1} &{\\bf 0}&\\cdots& {\\bf 0}\n\\end{array}\\right)}_{\\widetilde{\\bf W}_i}\\left(\\begin{array}{c}\n \\hat{\\bf \\alpha}_{1}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{i-1}\\\\\n \\hat{\\bf \\alpha}_{i}\\\\\n \\hat{\\bf \\alpha}_{i+1}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{p}\n \\end{array}\\right)\n$$}\nwhich is equivalent to the expression\n$$\n{\\bf W}_{G_i}\\underbrace{\\left(\\begin{array}{ccccccc}\n \\delta_{i,1}{\\bf M}_{i,1} & \\dots & \\delta_{i,i-1}{\\bf M}_{i,i-1} & {\\bf C}(m_{G_i}) & \\delta_{i,i+1}{\\bf M}_{i,i+1} 
& \\dots & \\delta_{i,p}{\\bf M}_{i,p}\\\\\n \\end{array}\\right)}_{\\widetilde{\\bf W}_i}\\left(\\begin{array}{c}\n \\hat{\\bf \\alpha}_{1}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{i-1}\\\\\n \\hat{\\bf \\alpha}_{i}\\\\\n \\hat{\\bf \\alpha}_{i+1}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{p}\n \\end{array}\\right).\n$$\nFrom the above analysis\n\\begin{eqnarray*}\nA(G)\\hat{\\bf v} & = & \\left(\\begin{array}{cccc}\n {\\bf W}_{G_1} & {\\bf 0} & \\cdots & {\\bf 0} \\\\\n {\\bf 0} & {\\bf W}_{G_2} & \\cdots & {\\bf 0} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n {\\bf 0} & {\\bf 0} & \\cdots & {\\bf W}_{G_p}\n \\end{array}\\right) \\left(\\begin{array}{c}\n {\\bf \\widetilde{W}}_1\\\\\n {\\bf \\widetilde{W}}_2\\\\\n \\vdots\\\\\n {\\bf \\widetilde{W}}_p\n \\end{array}\\right)\\left(\\begin{array}{c}\n \\hat{\\bf \\alpha}_{1}\\\\\n \\hat{\\bf \\alpha}_{2}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{p}\n \\end{array}\\right)\n\\end{eqnarray*}\nand, according to \\eqref{main_vector_Gi},\n\\begin{eqnarray*}\n\\hat{\\bf v} &=& \\left(\\begin{array}{cccc}\n {\\bf W}_{G_1} & {\\bf 0} & \\cdots & {\\bf 0} \\\\\n {\\bf 0} & {\\bf W}_{G_2} & \\cdots & {\\bf 0} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n {\\bf 0} & {\\bf 0} & \\cdots & {\\bf W}_{G_p}\n \\end{array}\\right)\\left(\\begin{array}{c}\n \\hat{\\bf \\alpha}_{1}\\\\\n \\hat{\\bf \\alpha}_{2}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{p}\n \\end{array}\\right). 
\\label{vecto_v}\n\\end{eqnarray*}\nTherefore, $A(G)\\hat{\\bf v} = \\rho \\hat{\\bf v}$ if and only if\n\\begin{eqnarray}\n\\underbrace{\\left(\\begin{array}{cccc}\n {\\bf W}_{G_1} & {\\bf 0} & \\cdots & {\\bf 0} \\\\\n {\\bf 0} & {\\bf W}_{G_2} & \\cdots & {\\bf 0} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n {\\bf 0} & {\\bf 0} & \\cdots & {\\bf W}_{G_p}\n \\end{array}\\right)}_{{\\bf (*)}}\\left(\\left(\\begin{array}{c}\n {\\bf \\widetilde{W}}_1\\\\\n {\\bf \\widetilde{W}}_2\\\\\n \\vdots \\\\\n {\\bf \\widetilde{W}}_p\n \\end{array}\\right) - \\rho I_s\\right)\\left(\\begin{array}{c}\n \\hat{\\bf \\alpha}_{1}\\\\\n \\hat{\\bf \\alpha}_{2}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{p}\n \\end{array}\\right) &=& {\\bf 0}. \\label{main_equality}\n\\end{eqnarray}\nIt is immediate that ${\\bf \\widetilde{W}} = \\left(\\begin{array}{c}\n {\\bf \\widetilde{W}}_1\\\\\n {\\bf \\widetilde{W}}_2\\\\\n \\vdots \\\\\n {\\bf \\widetilde{W}}_p\n \\end{array}\\right)$ and since the columns of each matrix\n${\\bf W}_{G_i}$ are linear independent, the columns of the matrix $(*)$ are also linear independent. 
Consequently,\n\\eqref{main_equality} is equivalent to\n\\begin{equation}\n\\left({\\bf \\widetilde{W}} - \\rho I_s\\right)\\hat{\\bf \\alpha} = {\\bf 0}, \\label{w_eigenvetor}\n\\end{equation}\nwhere $\\hat{\\bf \\alpha} = \\left(\\begin{array}{c}\n \\hat{\\bf \\alpha}_{1}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{p}\n \\end{array}\\right).$\nFinally, we may conclude that $(\\rho,\\hat{\\bf v})$ is an eigenpair of $A(G)$ if and only if $(\\rho, \\hat{\\bf \\alpha})$\nis an eigenpair of the $H$-join associated matrix ${\\bf \\widetilde{W}}$.\n\\end{proof}\n\nBefore the next corollary of Theorem~\\ref{main_theorem}, it is convenient to introduce the notation\n$\\phi(G)$ and $\\phi({\\bf A})$ which, from now on, will be used for the characteristic polynomial of a graph $G$\nand a matrix ${\\bf A}$, respectively.\n\n\\begin{corollary}\\label{cor_charact_poly}\nLet $G$ be the $H$-join as in Definition~\\ref{def_h-join} with associated matrix ${\\bf \\widetilde{W}}$. Assuming\nthat $\\mathcal{G}=\\{G_1, \\dots, G_p\\}$ is a family of arbitrary graphs for which $\\sigma(G_1), \\dots, \\sigma(G_p)$\nare defined as in \\eqref{spectrum_Gi} and $m_{G_1}, \\dots, m_{G_p}$ are their main characteristic polynomials, then\n$$\n\\phi(G) = \\left(\\prod_{i=1}^{p}{\\frac{\\phi(G_i)}{m_{G_i}}}\\right)\\phi({\\bf \\widetilde{W}}).\n$$\n\\end{corollary}\n\n\\begin{proof}\nSince, for $1 \\le i \\le p$,\n$\\sigma(G_i)=\\{\\mu_{i,1}^{[m_{i,1}]}, \\dots, \\mu_{i,s_i}^{[m_{i,s_i}]}, \\mu_{i,s_i+1}^{[m_{i,s_i+1}]}, \\dots, \\mu_{i,t_i}^{[m_{i,t_i}]}\\}$,\nwhere the first $s_i$ eigenvalues are main, and the roots of $m_{G_i}$ are precisely the main eigenvalues of $G_i$, each being a simple root,\nit is immediate that $\\phi(G_i)=m_{G_i}\\phi'(G_i)$, where the roots of the polynomial $\\phi'(G_i)$ are the eigenvalues\nof $G_i$,\n$$\n\\{\\mu_{i,1}^{[m_{i,1}-1]}, \\dots, \\mu_{i,s_i}^{[m_{i,s_i}-1]}, \\mu_{i,s_i+1}^{[m_{i,s_i+1}]}, \\dots, \\mu_{i,t_i}^{[m_{i,t_i}]}\\}.\n$$\nTherefore, according to \\eqref{G_spectrum}, the 
roots of $\\prod_{i=1}^{p}{\\frac{\\phi(G_i)}{m_{G_i}}}$ are the\neigenvalues in $\\sigma(G) \\setminus \\sigma({\\bf \\widetilde{W}})$.\n\\end{proof}\n\n\n\\begin{example}\nConsider the graph $H \\cong P_3$, the path with three vertices, and the graphs\n$K_{1,3}$, $K_2$ and $P_3$ depicted in Figure~\\ref{figura_1}. Then\n$\\sigma(K_{1,3})=\\{\\sqrt{3},-\\sqrt{3},0^{[2]}\\},$ $\\sigma(K_2)=\\{1,-1\\}$,\n$\\sigma(P_3)=\\{\\sqrt{2},-\\sqrt{2},0\\}$\nand their main characteristic polynomials are $m_{K_{1,3}}(x) = x^2 - 3$, $m_{K_2}(x) = x - 1$\nand $m_{P_3}(x) = x^2 - 2$, respectively.\n\n\\begin{figure}[h]\n\\begin{center}\n\\unitlength=0.25 mm\n\\begin{picture}(400,120)(60,60)\n\\put(50,110){\\line(2,1){50}} \n\\put(50,110){\\line(2,-1){50}}\n\\put(50,110){\\line(1,0){50}} \n\\put(150,85){\\line(0,1){50}} \n\\put(225,110){\\line(0,1){25}}\n\\put(225,110){\\line(0,-1){25}}\n\\put(50,110){\\circle*{5.7}} \n\\put(100,135){\\circle*{5.7}}\n\\put(100,110){\\circle*{5.7}}\n\\put(100,85){\\circle*{5.7}} \n\\put(150,135){\\circle*{5.7}}\n\\put(150,85){\\circle*{5.7}} \n\\put(225,135){\\circle*{5.7}}\n\\put(225,110){\\circle*{5.7}}\n\\put(225,85){\\circle*{5.7}}\n\\put(40,110){\\makebox(0,0){\\footnotesize 1}}\n\\put(100,145){\\makebox(0,0){\\footnotesize 2}}\n\\put(110,110){\\makebox(0,0){\\footnotesize 3}}\n\\put(100,75){\\makebox(0,0){\\footnotesize 4}}\n\\put(75,50){\\makebox(0,0){\\footnotesize $K_{1,3}$}}\n\\put(150,145){\\makebox(0,0){\\footnotesize 5}}\n\\put(150,75){\\makebox(0,0){\\footnotesize 6}}\n\\put(150,50){\\makebox(0,0){\\footnotesize $K_2$}}\n\\put(225,145){\\makebox(0,0){\\footnotesize 7}}\n\\put(235,110){\\makebox(0,0){\\footnotesize 8}}\n\\put(225,75){\\makebox(0,0){\\footnotesize 9}}\n\\put(225,50){\\makebox(0,0){\\footnotesize $P_3$}}\n\\put(300,110){\\line(2,1){50}} \n\\put(300,110){\\line(2,-1){50}}\n\\put(300,110){\\line(1,0){50}} 
\n\\put(400,85){\\line(0,1){50}}\n\\put(300,110){\\line(4,1){100}}\n\\put(300,110){\\line(4,-1){100}}\n\\put(350,135){\\line(1,0){125}}\n\\put(350,135){\\line(1,-1){50}}\n\\put(350,110){\\line(2,1){50}}\n\\put(350,110){\\line(2,-1){50}}\n\\put(350,85){\\line(1,0){125}} \n\\put(350,85){\\line(1,1){50}} \n\\put(400,135){\\line(3,-1){75}}\n\\put(400,135){\\line(3,-2){75}}\n\\put(400,85){\\line(3,1){75}} \n\\put(400,85){\\line(3,2){75}} \n\\put(475,135){\\line(0,-1){50}}\n\\put(300,110){\\circle*{5.7}}\n\\put(350,135){\\circle*{5.7}}\n\\put(350,110){\\circle*{5.7}} \n\\put(350,85){\\circle*{5.7}} \n\\put(400,135){\\circle*{5.7}}\n\\put(400,85){\\circle*{5.7}}\n\\put(475,135){\\circle*{5.7}}\n\\put(475,110){\\circle*{5.7}}\n\\put(475,85){\\circle*{5.7}} \n\\put(295,110){\\makebox(0,0){\\footnotesize 1}}\n\\put(350,145){\\makebox(0,0){\\footnotesize 2}}\n\\put(360,110){\\makebox(0,0){\\footnotesize 3}}\n\\put(350,75){\\makebox(0,0){\\footnotesize 4}}\n\\put(400,145){\\makebox(0,0){\\footnotesize 5}}\n\\put(400,75){\\makebox(0,0){\\footnotesize 6}}\n\\put(475,145){\\makebox(0,0){\\footnotesize 7}}\n\\put(485,110){\\makebox(0,0){\\footnotesize 8}}\n\\put(475,75){\\makebox(0,0){\\footnotesize 9}}\n\\put(400,50){\\makebox(0,0){{\\footnotesize $G=\\bigvee_{P_3}{\\{K_{1,3}, K_2, P_3\\}}$}}}\n\\end{picture}\n\\end{center}\n\\caption{The $P_3$-join of the family of graphs $K_{1,3}$, $K_2$ and $P_3$.}\\label{figura_1}\n\\end{figure}\nSince\n$$\n\\begin{array}{lclcl}\n{\\bf \\widetilde{W}}_1 &=& \\left(\\begin{array}{ccccc}\n 0 & c_{1,0} & \\delta_{1,2}2 & \\delta_{1,3}3 & \\delta_{1,3}4 \\\\\n 1 & c_{1,1} & 0 & 0 & 0 \\\\\n \\end{array}\\right) & = & \\left(\\begin{array}{ccccc}\n 0 & 3 & 2 & 0 & 0 \\\\\n 1 & 0 & 0 & 0 & 0 \\\\\n \\end{array}\\right),\\\\\n{\\bf \\widetilde{W}}_2 &=& \\left(\\begin{array}{ccccc}\n \\delta_{2,1}4 & \\delta_{2,1}6 & c_{2,0} & \\delta_{2,3}3 & \\delta_{2,3}4\\\\\n \\end{array}\\right) & = & \\left(\\begin{array}{ccccc}\n 4 & 6 & 1 & 3 & 4\\\\\n 
\\end{array}\\right),\\\\\n{\\bf \\widetilde{W}}_3 &=& \\left(\\begin{array}{ccccc}\n \\delta_{3,1}4 & \\delta_{3,1}6 & \\delta_{3,2}2 & 0 & c_{3,0} \\\\\n 0 & 0 & 0 & 1 & c_{3,1}\n \\end{array}\\right) & = & \\left(\\begin{array}{ccccc}\n 0 & 0 & 2 & 0 & 2 \\\\\n 0 & 0 & 0 & 1 & 0\n \\end{array}\\right),\n\\end{array}\n$$\nit follows that\n\\begin{equation*}\n{\\bf \\widetilde{W}} = \\left(\\begin{array}{c}\n {\\bf \\widetilde{W}}_1\\\\\n {\\bf \\widetilde{W}}_2\\\\\n {\\bf \\widetilde{W}}_3\n \\end{array}\\right) = \\left(\\begin{array}{ccccc}\n 0 & 3 & 2 & 0 & 0 \\\\\n 1 & 0 & 0 & 0 & 0 \\\\\n 4 & 6 & 1 & 3 & 4 \\\\\n 0 & 0 & 2 & 0 & 2 \\\\\n 0 & 0 & 0 & 1 & 0\n \\end{array}\\right).\n\\end{equation*}\nTherefore, the characteristic polynomial of ${\\bf \\widetilde{W}}$ is the polynomial\n$$\n\\phi({\\bf \\widetilde{W}}) = -42 - 40 x + 15 x^2 + 19 x^3 + x^4 - x^5\n$$\nand, applying Corollary~\\ref{cor_charact_poly}, we obtain the characteristic polynomial of $G$,\n$$\n\\phi(G) = x^3(x+1)\\phi({\\bf \\widetilde{W}}) = x^3(x+1)(-42 - 40 x + 15 x^2 + 19 x^3 + x^4 - x^5).\n$$\n\\end{example}\n\n\\section{Final remarks}\\label{sec_4}\n\nWhen all graphs of the family $\\mathcal{G}$ are regular, that is, $G_1$ is $d_1$-regular, $G_2$ is $d_2$-regular,\n$\\dots$, $G_p$ is $d_p$-regular, the walk matrices are ${\\bf W}_{G_1}=\\left({\\bf j}_{n_1}\\right)$,\n${\\bf W}_{G_2}=\\left({\\bf j}_{n_2}\\right)$, $\\dots$, ${\\bf W}_{G_p}=\\left({\\bf j}_{n_p}\\right)$, respectively.\nConsequently, the main polynomials are $m_{G_1}(x) = x - d_1$, $m_{G_2}(x) = x - d_2$, $\\dots$, $m_{G_p}(x) = x - d_p$.\nAs a direct\nconsequence, for this particular case, the $H$-join associated matrix is\n{\\small $$\n{\\bf\\widetilde{W}}=\\left(\\begin{array}{cccc}\n d_1 & \\delta_{1,2}{\\bf j}_{n_2}^T{\\bf W}_{G_2} & \\cdots & \\delta_{1,p}{\\bf j}_{n_p}^T{\\bf W}_{G_p}\\\\\n \\delta_{2,1}{\\bf j}_{n_1}^T{\\bf W}_{G_1} & d_2 & \\cdots & \\delta_{2,p}{\\bf j}_{n_p}^T{\\bf W}_{G_p}\\\\\n \\vdots 
&\\vdots & \\ddots & \\vdots \\\\\n \\delta_{p,1}{\\bf j}_{n_1}^T{\\bf W}_{G_1} & \\delta_{p,2}{\\bf j}_{n_2}^T{\\bf W}_{G_2} & \\cdots &d_p \\\\\n \\end{array}\\right) = \\left(\\begin{array}{cccc}\n d_1 & \\delta_{1,2}n_2 & \\cdots & \\delta_{1,p}n_p \\\\\n \\delta_{2,1}n_1 & d_2 & \\cdots & \\delta_{2,p}n_p \\\\\n \\vdots &\\vdots & \\ddots & \\vdots \\\\\n \\delta_{p,1}n_1 &\\delta_{p,2}n_2 & \\cdots & d_p \\\\\n \\end{array}\\right).\n$$}\nTherefore, it is immediate that when all the graphs of the family $\\mathcal{G}$ are regular, the matrix\n${\\bf\\widetilde{W}}$ and the matrix $\\widetilde{C}$ in \\eqref{matrix_c} are similar matrices. Note that\n$\\widetilde{C} = D {\\bf\\widetilde{W}}D^{-1}$, where\n$D = \\text{diag}\\left(\\sqrt{n_1}, \\sqrt{n_2}, \\dots, \\sqrt{n_p}\\right)$ and thus\n${\\bf\\widetilde{W}}$ and $\\widetilde{C}$ are cospectral matrices, as they should be.\\\\\n\nIn the particular case of the lexicographic product $H[G]$, which is the $H$-join of a family of graphs\n$\\mathcal{G}$, where all the graphs in $\\mathcal{G}$ are isomorphic to a fixed graph $G$, consider that the graph $H$\nhas order $p$ and the graph $G$ has order $n$. Let\n$\\sigma(G)=\\{\\mu_1^{[m_1]}, \\dots, \\mu_{s}^{[m_s]}, \\mu_{s+1}^{[m_{s+1}]}, \\dots, \\mu_{t}^{[m_t]}\\}$, where\n$\\mu_1, \\dots, \\mu_s$ are the distinct main eigenvalues of $G$ and $\\sum_{i=1}^{t}{m_{i}}=n$. 
Then, according to\nDefinition~\\ref{main_def}, the $H$-join associated matrix is\n$$\n{\\bf \\widetilde{W}} = \\left(\\begin{array}{ccccc}\n {\\bf C}(m_{G}) & \\delta_{1,2}{\\bf M} & \\dots & \\delta_{1,p-1}{\\bf M} & \\delta_{1,p}{\\bf M} \\\\\n \\delta_{2,1}{\\bf M} & {\\bf C}(m_{G})& \\dots & \\delta_{2,p-1}{\\bf M} & \\delta_{2,p}{\\bf M} \\\\\n \\vdots & \\vdots &\\ddots & \\vdots & \\vdots \\\\\n \\delta_{p,1}{\\bf M} & \\delta_{p,2}{\\bf M} & \\dots & \\delta_{p,p-1}{\\bf M} & {\\bf C}(m_{G})\n \\end{array}\\right),\n$$\nwhere ${\\bf C}(m_{G})$ is the Frobenius companion matrix of $m_G$ and\n${\\bf M} = \\left(\\begin{array}{c}\n {\\bf j}_n^{\\top}{\\bf W}_G\\\\\n {\\bf 0}\\\\\n \\end{array}\\right)$ (both are $s \\times s$ matrices). Applying Theorem~\\ref{main_theorem}, it follows that\n\\begin{eqnarray*}\n\\sigma(H[G]) &=& p{\\{\\mu_{1}^{[m_{1}-1]}, \\dots, \\mu_{s}^{[m_{s}-1]}\\}} \\cup\n p{\\{\\mu_{s+1}^{[m_{s+1}]}, \\dots, \\mu_{t}^{[m_{t}]}\\}} \\cup\n \\sigma({\\bf \\widetilde{W}}),\n\\end{eqnarray*}\nwhere the multiplication of $p$ by a set $X$ means the union of $X$ with itself $p$ times. Therefore, from\nCorollary~\\ref{cor_charact_poly}, the characteristic polynomial of $H[G]$ is\n\\begin{equation}\n\\phi(H[G]) = \\left(\\frac{\\phi(G)}{m_G}\\right)^p \\phi({\\bf \\widetilde{W}}). \\label{charact_poly_lex}\n\\end{equation}\n\nIn \\cite[Th. 2.4]{WangWong2018} a distinct expression for $\\phi(H[G])$ is determined. 
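The spectral description of the lexicographic product above lends itself to a quick numerical sanity check. The sketch below is illustrative only (the choice H = K_2, G = P_3 and the use of numpy are assumptions of this note, not part of the paper): it builds the adjacency matrix of K_2[P_3] and the associated matrix W~, and verifies that the spectrum of K_2[P_3] consists of the spectrum of W~ together with p = 2 copies of the non-main eigenvalue 0 of P_3.

```python
import numpy as np

# Numerical sanity check (not from the paper) of the spectrum of the
# lexicographic product H[G] for H = K2 and G = P3, so p = 2 and n = 3.
# P3 has main eigenvalues +/-sqrt(2) (each simple), non-main eigenvalue 0,
# and main characteristic polynomial m_G(x) = x^2 - 2.

# Adjacency matrix of K2[P3]: two copies of P3 plus all cross edges.
B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # A(P3)
J = np.ones((3, 3), dtype=int)
A = np.block([[B, J], [J, B]])

# Associated matrix: companion blocks C(m_G) on the diagonal, and
# M = [[j^T W_G], [0]] off the diagonal, with j^T W_G = (3, 4) for P3.
C = np.array([[0, 2], [1, 0]])                   # companion matrix of x^2 - 2
M = np.array([[3, 4], [0, 0]])
W = np.block([[C, M], [M, C]])

# sigma(K2[P3]) should equal sigma(W) together with p = 2 copies of the
# non-main eigenvalue 0 of P3.
eig_A = np.sort(np.linalg.eigvals(A).real)
eig_W = np.sort(np.concatenate([np.linalg.eigvals(W).real, [0.0, 0.0]]))
assert np.allclose(eig_A, eig_W, atol=1e-8)
```

For P_3 one has m_G(x) = x^2 - 2, so the companion block is 2 x 2 and j^T W_G = (3, 4), exactly as in the worked example of the previous section.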
The expression obtained in \\cite{WangWong2018} depends not\nonly on $H$ and $G$ but also on the eigenspaces of the adjacency matrix of $G$.\\\\\n\nFrom the obtained results, we are able to determine all the eigenvectors of the adjacency matrix of the $H$-join of\na family of arbitrary graphs $G_1, \\dots, G_p$ in terms of the eigenvectors of the adjacency matrices $A(G_i)$, for\n$1 \\le i \\le p$, and the eigenvectors of the $H$-join associated matrix ${\\bf \\widetilde{W}}$, as follows.\n\\begin{enumerate}\n\\item Let $G$ be the $H$-join as in Definition~\\ref{def_h-join}, where $\\mathcal{G}=\\{G_1, G_2, \\dots, G_p\\}$ is a\n      family of arbitrary graphs.\n\\item For $1 \\le i \\le p$, consider $\\sigma(G_i)$ as defined in \\eqref{spectrum_Gi}. For each eigenvalue\n      $\\mu_{i,j} \\in \\sigma(G_i)$, every eigenvector\n      $\\hat{\\bf u}_{i,j} \\in \\mathcal{E}_{G_i}(\\mu_{i,j}) \\cap {\\bf j}_{n_i}^{\\perp}$ defines an eigenvector for\n      $A(G)$ as in \\eqref{eigenvalue-equation}.\n\\item The remaining eigenvectors of $A(G)$ are the vectors $\\hat{\\bf v}$ defined in \\eqref{vector_v}-\\eqref{main_vector_Gi}\n      from the eigenvectors of ${\\bf \\widetilde{W}}$, $\\hat{\\bf \\alpha}$, obtained as linearly independent solutions\n      of \\eqref{w_eigenvetor}, for each $\\rho \\in \\sigma({\\bf \\widetilde{W}})$.\n\\end{enumerate}\n\n\\medskip\\textbf{Acknowledgments.}\nThis work is supported by the Center for Research and Development in Mathematics and Applications (CIDMA) through\nthe Portuguese Foundation for Science and Technology (FCT - Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia), reference\nUIDB\/04106\/2020.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nIn this erratum to \\cite{F}, we point out that there is a gap in the proof of
Specifically, the estimate for $\\sup_{x\\in E_{n}%\n}\\left\\Vert \\overline{f}_{n}^{\\prime}\\left( x\\right) -F_{n}^{\\prime}\\left(\nx\\right) \\right\\Vert $ in \\cite{F} does not hold (as the inductive proof\nfails here), and as a consequence the conclusion of Theorem 1 does not follow.\nNevertheless, using a construction from \\cite{J}, techniques from\n\\cite{AFGJL}, and employing a proof similar to the original one, we are able to\nestablish all the results of \\cite{F} under the additional assumption that the\nsubset $Y\\subset X$ is convex (see Theorem 1 below.)\n\nWe note that the main motivation for this work was to find an analogous result\nto that of \\cite{AFM} for not necessarily bounded functions. Let us recall that in\n\\cite{AFM} it was shown, in particular, that for a separable Banach space $X$\nadmitting a Lipschitz, $C^{p}$ smooth bump function, given $\\varepsilon\n>0$ and a bounded, uniformly continuous function $f:X\\rightarrow\\mathbb{R}$,\nthere exists a Lipschitz, $C^{p}$ smooth function $K$ with $\\left\\vert\nf-K\\right\\vert <\\varepsilon$ on $X.$ We remark that to establish our result\nhere, we need to further assume that our Banach space $X$ has an unconditional\nbasis. However, in addition to relaxing the boundedness condition on $f$, when\n$f$ is also Lipschitz, unlike the result of \\cite{AFM}, we are able to find\nLipschitz, $C^{p}$ smooth approximants $K$ where the Lipschitz constants do\nnot depend on the $\\varepsilon$-degree of precision in the approximation. We\nalso note that the results of \\cite{AFM} are restricted to real-valued maps.\n\n$\\smallskip$\n\nThe notation we employ is standard, with $X,Y,$ etc. 
denoting Banach spaces.\nWe write the closed unit ball of $X$ as $B_{X}.$ The G\\^{a}teaux derivative of\na function $f$ at $x$ in the direction $h$ will be denoted $D_{h}f\\left(\nx\\right) ,$ while the Fr{\\'{e}}chet derivative of $f$ at $x$ on $h$ is\nwritten $f^{\\prime}\\left( x\\right) \\left( h\\right) .$ We note that a\n$C^{p}$-smooth function is necessarily Fr{\\'{e}}chet differentiable (see e.g.,\n\\cite{BL}.)\n\n$\\smallskip$\n\nA $C^{p}$\\textbf{-smooth bump function} $b$ on $X$ is a $C^{p}$-smooth,\nreal-valued function on $X$ with bounded, non-empty support, where\n\\[\n\\text{support}\\left( b\\right) =\\overline{\\left\\{ x\\in X:b\\left( x\\right)\n\\neq0\\right\\} }.\n\\]\nIf $f:X\\rightarrow Y$ is Lipschitz with constant $\\eta,$ we will say that $f $\nis $\\eta$-Lipschitz. Most additional notation is explained as it is introduced\nin the sequel. For any unexplained terms we refer the reader to \\cite{DGZ} and\n\\cite{FHHMPZ}. For further historical context see the introduction in \\cite{F}.\n\n\\section{Main Results}\n\nWe first introduce some notation which will be used throughout the paper. 
Let\n$\\left\\{ e_{j},e_{j}^{\\ast}\\right\\} _{j=1}^{\\infty}$ be an unconditional\nSchauder basis on $X,$ and $P_{n}:X\\rightarrow X$ the canonical projections\ngiven by $P_{n}\\left( x\\right) =P_{n}\\left( \\sum_{j=1}^{\\infty}x_{j}%\ne_{j}\\right) =\\sum_{j=1}^{n}x_{j}e_{j},$ and where we set $P_{0}=0.$ By\nrenorming, we may assume that the unconditional basis constant is $1.$ In\nparticular, $\\left\\Vert P_{n}\\right\\Vert \\leq1$ for all $n.$ We put\n$E_{n}=P_{n}\\left( X\\right) ,$ and $E^{\\infty}=\\cup_{n}E_{n},$ noting that\n$\\dim E_{n}=n,$ $E_{n}\\subset E_{n+1},$ and $E^{\\infty}\\ $is a dense subspace\nof $X.$ It will be convenient to denote the closed unit ball of $E_{n}$ by\n$B_{E_{n}}.$\n\n$\\smallskip$\n\nThe proof of our main theorem is a modification of some techniques found in\n\\cite{M} and \\cite{AFGJL}, where $C^{p}$-fine approximation on Banach spaces\nis considered. We also rely on the main construction from \\cite{J}. We follow\nthe original proof of \\cite{F} closely, and have decided to reproduce the\ndetails so that this note is self contained.\n\n\\begin{theorem}\nLet $X$ be a Banach space with unconditional basis which admits a Lipschitz,\n$C^{p}$-smooth bump function. Let $Y\\subset X$ be a convex subset and\n$f:Y\\rightarrow\\mathbb{R}$ a uniformly continuous map. 
Then for each\n$\\varepsilon>0$ there exists a Lipschitz, $C^{p}$-smooth function\n$K:X\\rightarrow\\mathbb{R}$ such that for all $y\\in Y,$%\n\n\\[\n\\left\\vert f(y)-K(y)\\right\\vert <\\varepsilon.\n\\]\n\n\n$\\smallskip$\n\nIf $Z\\ $is any Banach space, $Y\\subset X$ is any subset, and $f:X\\rightarrow\nZ$ (respectively $f:Y\\rightarrow\\mathbb{R}$) is Lipschitz with constant\n$\\eta,$ then we can choose $K:X\\rightarrow Z$ (respectively $K:X\\rightarrow\n\\mathbb{R}$) to have Lipschitz constant no larger than $C_{0}\\eta,$ where\n$C_{0}>1$ is a constant depending only on $X$ (in particular, $C_{0}$ is\nindependent of $\\varepsilon.$)\n\\end{theorem}\n\n\\medskip\n\n\\textbf{Proof\\ \\ }As noted before, the main idea of the proof is a\nmodification of the proof of \\cite[Lemma 5]{AFGJL} using ideas from \\cite{J}.\n\n\\medskip\n\n\\noindent We will need to use the following result, and refer the reader to\n\\cite[Proposition II.5.1]{DGZ} and \\cite{L} for a proof.\n\n\\medskip\n\n\\begin{proposition}\nLet $Z$ be a Banach space. The following assertions are equivalent.\n\n(a).$\\ Z$ admits a $C^{p}$-smooth, Lipschitz bump function.\n\n\\smallskip\n\n(b). 
There exist numbers $a,b>0$ and a Lipschitz function $\\psi:Z\\rightarrow\n\\lbrack0,\\infty)$ which is $C^{p}$-smooth on $Z\\setminus\\{0\\}$, homogeneous\n(that is $\\psi(tx)=|t|\\psi(x)$ for all $t\\in\\mathbb{R},x\\in Z$), and such that\n$a\\Vert\\cdot\\Vert\\leq\\psi\\leq b\\Vert\\cdot\\Vert$.\n\\end{proposition}\n\n\\medskip\n\nFor such a function $\\psi$, the set $A=\\{z\\in Z:\\psi(z)\\leq1\\}$ is what we\ncall a $C^{p}$-smooth, Lipschitz \\textbf{starlike body}, and the Minkowski\nfunctional of this body, $\\mu_{A}(z)=\\inf\\{t>0:(1\/t)z\\in A\\}$, is precisely\nthe function $\\psi$ (see \\cite{AD} and the references therein for further\ninformation on starlike bodies and their Minkowski functionals).\n\nWe will denote the open ball of center $x$ and radius $r$, with respect to the\nnorm $\\Vert\\cdot\\Vert$ of $X$, by $B(x,r).$ If $A$ is a bounded starlike body\nof $X$, we define the \\textbf{open}\\textit{\\ }$A$\\textit{-}\\textbf{pseudoball}\nof center $x$ and radius $r$ as%\n\n\\[\nB_{A}(x,r):=\\{y\\in X:\\mu_{A}(y-x)<r\\}.\n\\]\nSince $a\\Vert\\cdot\\Vert\\leq\\mu_{A}\\leq b\\Vert\\cdot\\Vert$, we have\n$B(x,r\/b)\\subseteq B_{A}(x,r)\\subseteq B(x,r\/a)$ for every $x\\in X$ and every\n$r>0$. This fact will sometimes be used implicitly in what follows.\n\nFor the proof, we shall first define a function $\\overline{f}:E^{\\infty\n}\\rightarrow\\mathbb{R},$ then a map $\\Psi:X\\rightarrow E^{\\infty},$ and\nfinally our desired function $K$ will be given by $K=\\overline{f}\\circ\\Psi. $\n\nTo begin the proof, first note that as $f$ is real-valued and $Y$ is convex,\nby \\cite[Proposition 2.2.1 (i)]{BL} $f$ can be uniformly approximated by a\nLipschitz map, and so we may and do suppose that $f$ is Lipschitz with\nconstant $\\eta$. 
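Both the Lipschitz regularization just invoked and the extension step that follows rest on the inf-convolution F(x) = inf over y in Y of f(y) + eta*||x - y||. A one-dimensional numerical sketch of this device (the grids, the choice f(y) = sqrt(y) on Y = [0, 1], and eta = 10 are illustrative assumptions, not from the paper):

```python
import numpy as np

# One-dimensional illustration (not from the paper) of the inf-convolution
#   F(x) = inf_{y in Y} { f(y) + eta*|x - y| },
# which is eta-Lipschitz, defined on all of X, and close to f on Y for large
# eta.  Here f(y) = sqrt(y) is uniformly continuous but not Lipschitz on
# Y = [0, 1]; the grids and eta = 10 are illustrative choices.
eta = 10.0
Y = np.linspace(0.0, 1.0, 501)      # the convex set Y (grid)
X = np.linspace(-1.0, 2.0, 1501)    # the ambient space X (grid)
f = np.sqrt(Y)

# F(x) = min_y { f(y) + eta*|x - y| }, vectorized over both grids.
F = np.min(f[None, :] + eta * np.abs(X[:, None] - Y[None, :]), axis=1)

# F is eta-Lipschitz: consecutive increments never exceed eta*dx.
dx = X[1] - X[0]
assert np.max(np.abs(np.diff(F))) <= eta * dx + 1e-12

# On Y, F stays within roughly 1/(4*eta) of f (plus a small grid error).
on_Y = (X >= 0.0) & (X <= 1.0)
err = np.max(np.abs(np.sqrt(X[on_Y]) - F[on_Y]))
assert err <= 0.05
```

For this particular f the continuum error on Y is at most sup over t of (sqrt(t) - eta*t) = 1/(4*eta), so increasing eta gives a Lipschitz approximant of any desired accuracy, at the cost of a larger Lipschitz constant.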
Using an infimal convolution, we extend $f$ to a Lipschitz\nmap $F$ on $X$ with the same constant $\\eta$ by defining $F\\left( x\\right)\n=\\inf\\left\\{ f\\left( y\\right) +\\eta\\left\\Vert x-y\\right\\Vert :y\\in\nY\\right\\} .$\n\nLet $\\varepsilon>0$ and $r\\in\\left( 0,\\varepsilon\/3M\\eta\\right) .$ We shall\nrequire the main construction from \\cite{J} (see also \\cite{FWZ}), and for the\nsake of completeness we outline this here. Let $\\left\\{ h_{i}\\right\\}\n_{i=1}^{\\infty}$ be a dense sequence in $B_{X},$ and $\\varphi_{i}\\in\nC^{\\infty}\\left( \\mathbb{R},\\mathbb{R}^{+}\\right) $ with $\\int_{\\mathbb{R}%\n}\\varphi_{i}=1$ and support$\\left( \\varphi_{i}\\right) \\subseteq\\left[\n-\\frac{\\varepsilon}{6\\eta2^{i}},\\frac{\\varepsilon}{6\\eta2^{i}}\\right] .$\n\n\\medskip\n\n\\noindent Now we define functions $g_{n}:X\\rightarrow\\mathbb{R}$ by%\n\n\\[\ng_{n}\\left( x\\right) =\\int_{\\mathbb{R}^{n}}F\\left( x-\\sum_{i=1}^{n}%\nt_{i}h_{i}\\right) \\prod_{i=1}^{n}\\varphi_{i}\\left( t_{i}\\right) dt,\n\\]\n\n\n\\noindent where the integral is $n$-dimensional Lebesgue measure.\n\n\\medskip\n\n\\noindent It is proven in \\cite{J} that the following hold:\n\n\\medskip\n\n\\begin{enumerate}\n\\item There exists $g$ with $g_{n}\\rightarrow g$ uniformly on $X,$\n\n\\item $\\left\\vert g-F\\right\\vert <\\varepsilon\/3$ on $X,$\n\n\\item The map $g$ is $\\eta$-Lipschitz,\n\n\\item The map $g$ is uniformly G\\^{a}teaux differentiable.\n\\end{enumerate}\n\n\\medskip\n\n\\noindent Next, following \\cite[Lemma 5]{AFGJL}, let $\\varphi:\\mathbb{R}%\n\\rightarrow\\left[ 0,1\\right] $ be a $C^{\\infty}$-smooth function such that\n$\\varphi\\left( t\\right) =1$ if $\\left\\vert t\\right\\vert <1\/2$,\n$\\varphi\\left( t\\right) =0$ if $\\left\\vert t\\right\\vert >1,$ $\\varphi\n^{\\prime}([0,\\infty))\\subseteq\\left[ -3,0\\right] ,$ $\\varphi(-t)=\\varphi(t)$.\n\n\\medskip\n\n\\noindent Let us define a function G\\^{a}teaux differentiable on $X,$ 
and\n$C^{p}$-smooth on $E_{n},$ by%\n\n\\[\nF_{n}\\left( x\\right) =\\frac{(a_{n})^{n}}{c_{n}}\\int_{E_{n}}g(x-y)\\varphi\n(a_{n}\\mu_{A}\\left( y\\right) )dy\n\\]\nwhere\n\\[\nc_{n}=\\int_{E_{n}}\\varphi\\left( \\mu_{A}\\left( y\\right) \\right) dy,\n\\]\nand (keeping in mind (2.1) and $\\left( 3\\right) $) we have chosen the\nconstants $a_{n}$ large enough that\n\\begin{equation}\n\\sup_{x\\in E_{n}}\\left\\vert F_{n}\\left( x\\right) -g\\left( x\\right)\n\\right\\vert <\\frac{\\varepsilon}{6}2^{-n}.\n\\end{equation}\n\n\n\\noindent As pointed out to us by P. H\\'{a}jek, since $g$ is Lipschitz and\nuniformly G\\^{a}teaux differentiable, by \\cite[Lemma 4]{HJ} for each $h$ the\nmap $x\\rightarrow D_{h}g\\left( x\\right) $ is uniformly continuous. From\nthis, the Lipschitzness of $g,$ and compactness of $B_{E_{n}},$ we can choose\nthe $a_{n}$ larger if need be so that for all $h\\in B_{E_{n}}$ we have,%\n\n\\begin{equation}\n\\sup_{x\\in E_{n}}\\left\\vert D_{h}F_{n}\\left( x\\right) -D_{h}g\\left(\nx\\right) \\right\\vert <\\frac{\\eta}{2}2^{-n}.\n\\end{equation}\n\n\n\\smallskip\n\n\\noindent Note that for any $x,x^{\\prime}\\in X$,%\n\n\\begin{align*}\n\\left\\vert F_{n}\\left( x\\right) -F_{n}\\left( x^{\\prime}\\right)\n\\right\\vert & \\leq\\frac{(a_{n})^{n}}{c_{n}}\\int_{E_{n}}\\left\\vert\ng(x-y)-g\\left( x^{\\prime}-y\\right) \\right\\vert \\varphi(a_{n}\\mu_{A}\\left(\ny\\right) )dy\\\\\n& \\\\\n& \\leq\\eta\\left\\Vert x-x^{\\prime}\\right\\Vert \\frac{(a_{n})^{n}}{c_{n}}%\n\\int_{E_{n}}\\varphi(a_{n}\\mu_{A}\\left( y\\right) )dy=\\eta\\left\\Vert\nx-x^{\\prime}\\right\\Vert ,\n\\end{align*}\n\n\n\\noindent that is, $F_{n}$ is $\\eta$-Lipschitz.\n\n\\medskip\n\n\\noindent We next define a sequence of G\\^{a}teaux differentiable functions\n$\\overline{f}_{n}:X\\rightarrow\\mathbb{R},$ $C^{p}$-smooth on $E_{n},$ as\nfollows. 
Put $\\bar{f}_{0}=f\\left( 0\\right) ,$ and supposing that\n$\\overline{f}_{0},...,\\overline{f}_{n-1}$ have been defined, we set\n\n\\medskip%\n\n\\[\n\\bar{f}_{n}\\left( x\\right) =F_{n}\\left( x\\right) +\\bar{f}_{n-1}\\left(\nP_{n-1}\\left( x\\right) \\right) -F_{n}\\left( P_{n-1}\\left( x\\right)\n\\right) .\n\\]\n\n\n\\medskip\n\n\\noindent One can verify by induction, using $\\left\\Vert P_{n}\\right\\Vert\n\\leq1,\\ \\left( 2.2\\right) $ and $\\left( 2.3\\right) ,$ that,\n\n\\medskip\n\n(i). The $\\bar{f}_{n}$ are G\\^{a}teaux differentiable, the restriction of\n$\\bar{f}_{n}$ to $E_{n}$ is $C^{p}$-smooth, and $\\bar{f}_{n}$ extends $\\bar\n{f}_{n-1}$,\n\n\\medskip\n\n(ii).\\ $\\sup_{x\\in E_{n}}\\left\\vert \\bar{f}_{n}\\left( x\\right) -g\\left(\nx\\right) \\right\\vert <\\frac{\\varepsilon}{3}\\left( 1-\\frac{1}{2^{n}}\\right)\n$,\n\n\\medskip\n\n(iii). $\\sup_{x\\in E_{n}}\\left\\vert D_{h}\\bar{f}_{n}\\left( x\\right)\n-D_{h}g\\left( x\\right) \\right\\vert \\leq\\eta\\left( 1-\\frac{1}{2^{n}}\\right)\n$, for all $h\\in B_{E_{n}}.$\n\n\\medskip\n\n\\noindent We now define the map $\\bar{f}:E^{\\infty}\\rightarrow\\mathbb{R}$ by\n\n\\medskip%\n\n\\[\n\\bar{f}\\left( x\\right) =\\lim_{n\\rightarrow\\infty}\\bar{f}_{n}\\left(\nx\\right) .\n\\]\n\n\n\\noindent For $x\\in E^{\\infty}=\\cup_{n}E_{n},$ define $n_{x}\\equiv\\min\\left\\{\nn:x\\in E_{n}\\right\\} ,$ and note that we have for any $m\\geq n_{x}$,%\n\n\\begin{equation}\n\\bar{f}\\left( x\\right) =\\lim_{n\\rightarrow\\infty}\\bar{f}_{n}\\left(\nx\\right) =\\bar{f}_{m}\\left( x\\right) .\n\\end{equation}\n\n\n\\medskip\n\n\\noindent In particular, for any $n,$ $\\overline{f}\\mid_{E_{n}}=\\overline\n{f}_{n}.$ One can verify using $\\left( 2.4\\right) ,$ (i), (ii), and (iii)\nabove that $\\bar{f}$ has the following properties:\n\n\\medskip\n\n(i)$^{\\prime}$. The restriction of $\\bar{f}$ to every subspace $E_{n}$ is\n$C^{p}$-smooth,\n\n\\medskip\n\n(ii)$^{\\prime}$. 
$\\sup_{x\\in E^{\\infty}}\\left\\vert \\bar{f}\\left( x\\right)\n-g\\left( x\\right) \\right\\vert \\leq\\frac{\\varepsilon}{3}.$\n\n\\medskip\n\n(iii)$^{\\prime}$. $\\sup_{x\\in E_{n}}\\left\\vert D_{h}\\bar{f}\\left( x\\right)\n-D_{h}g\\left( x\\right) \\right\\vert \\leq\\eta$, for all $h\\in B_{E_{n}}.$\n\n\\medskip\n\n\\noindent The proof now closely follows \\cite[Lemma 5]{AFGJL}, and we provide\nsome of the details for the sake of completeness.\n\n\\medskip\n\n\\noindent Next let $x=\\sum_{n}x_{n}e_{n}\\in X$ and define the maps\n\\[\n\\chi_{n}\\left( x\\right) =1-\\varphi\\left[ \\frac{\\mu_{A}\\left(\nx-P_{n-1}\\left( x\\right) \\right) }{r}\\right] ,\n\\]\nand\n\\[\n\\Psi\\left( x\\right) =\\sum_{n}\\chi_{n}\\left( x\\right) x_{n}e_{n}.\n\\]\n\n\nFor any $x_{0},$ because $P_{n}\\left( x_{0}\\right) \\rightarrow x_{0}$ and\nthe $\\Vert P_{n}\\Vert$ are uniformly bounded, there exist a neighbourhood\n$N_{0}$ of $x_{0}$ and an $n_{0}=n_{x_{0}}$ so that $\\chi_{n}\\left( x\\right)\n=0$ for all $x\\in N_{0}$ and $n\\geq n_{0}$ and so $\\Psi\\left( N_{0}\\right)\n\\subset E_{n_{0}}.$ Thus, $\\Psi:X\\rightarrow E^{\\infty}$ is a $C^{p}$-smooth\nmap whose range is locally contained in the finite dimensional subspaces\n$E_{n}$. 
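The effect of the truncation map Psi can be seen numerically in a finite-dimensional slice. In the sketch below, mu_A is taken to be the Euclidean norm, the unconditional constant is 1, and r = 0.2; these are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

# Finite-dimensional illustration (not from the paper) of the truncation map
#   Psi(x) = sum_n chi_n(x) x_n e_n,
#   chi_n(x) = 1 - phi(||x - P_{n-1}(x)|| / r),
# assuming mu_A is the Euclidean norm and the unconditional constant is 1.

def phi(t):
    """C-infinity cutoff: 1 on [0, 1/2], 0 on [1, infinity)."""
    t = abs(t)
    if t <= 0.5:
        return 1.0
    if t >= 1.0:
        return 0.0
    g = lambda s: np.exp(-1.0 / s) if s > 0 else 0.0
    u = (t - 0.5) / 0.5                 # rescale (1/2, 1) onto (0, 1)
    return g(1.0 - u) / (g(1.0 - u) + g(u))

def psi_map(x, r):
    # tails[n] = ||x - P_{n-1}(x)||, the norm of the coordinates from n on
    tails = np.sqrt(np.cumsum(x[::-1] ** 2))[::-1]
    chi = np.array([1.0 - phi(t / r) for t in tails])
    return chi * x

x = np.array([1.0, 0.5, 0.3, 0.1, 0.05, 0.02, 0.01, 0.005])
r = 0.2
y = psi_map(x, r)

assert y[0] == x[0]                  # head coordinates are untouched,
assert y[-1] == 0.0                  # the tail is zeroed out exactly,
assert np.linalg.norm(x - y) < r     # and Psi moves x by less than r.
```

Coordinates whose tail norm has dropped below r/2 are zeroed exactly, so Psi(x) lies in a finite-dimensional subspace, while the coordinates whose tail norm is still at least r are left untouched; this is what makes the range of Psi locally finite-dimensional.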
Using the fact that $\\left\\{ e_{n}\\right\\} $ is unconditional with\nconstant $C=1,$ one can show that (see \\cite[Fact 7]{AFGJL})\n\\begin{equation}\n\\left\\Vert x-\\Psi\\left( x\\right) \\right\\Vert \\leq Mr.\n\\end{equation}\n\n\\noindent Setting $K=\\overline{f}\\circ\\Psi$ then yields the desired Lipschitz, $C^{p}$-smooth map.\n\n\\medskip\n\n\\begin{corollary}\nLet $X$ be a Banach space with an unconditional basis. The following assertions are equivalent.\n\n\\begin{enumerate}\n\\item $X$ admits a Lipschitz, $C^{p}$-smooth bump function.\n\n\\item For every convex subset $Y\\subset X,$ uniformly continuous function\n$f:Y\\rightarrow\\mathbb{R},$ and $\\varepsilon>0,$ there exists a Lipschitz,\n$C^{p}$-smooth map $K:X\\rightarrow\\mathbb{R}$ with $\\left\\vert f-K\\right\\vert\n<\\varepsilon$ on $Y.$\n\n\\item For every subset $Y\\subset X,$ Lipschitz function $f:Y\\rightarrow\n\\mathbb{R},$ and $\\varepsilon>0,$ there exists a Lipschitz, $C^{p}$-smooth map\n$K:X\\rightarrow\\mathbb{R}$ with $\\left\\vert f-K\\right\\vert <\\varepsilon$ on\n$Y.$\n\n\\item For every Banach space $Z,$ Lipschitz map $f:X\\rightarrow Z,$ and\n$\\varepsilon>0,$ there exists a Lipschitz, $C^{p}$-smooth map $K:X\\rightarrow\nZ$ with $\\left\\Vert f-K\\right\\Vert <\\varepsilon$ on $X.$\n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\nThat $\\left( 1\\right) \\Rightarrow\\left( 2\\right) ,\\left( 3\\right) ,$ and\n$\\left( 4\\right) $ is Theorem 1. For $\\left( 2\\right) \\Rightarrow\\left(\n1\\right) ,$ choose $Y=X,$ and $f=\\left\\Vert \\cdot\\right\\Vert .$ Let\n$K:X\\rightarrow\\mathbb{R}$ be a $C^{p}$-smooth, Lipschitz map with $\\left\\vert\nf-K\\right\\vert <1$ on $X.$ Let $\\xi:\\mathbb{R}\\rightarrow\\mathbb{R}$ be $C^{\\infty}%\n$-smooth and Lipschitz with $\\xi\\left( t\\right) =1$ if $t\\leq1$ and\n$\\xi\\left( t\\right) =0$ if $t\\geq2.$ Then $b=\\xi\\circ K$ is a $C^{p}%\n$-smooth, Lipschitz map with $b\\left( 0\\right) =1$ and $b\\left( x\\right)\n=0$ when $\\left\\Vert x\\right\\Vert \\geq3.$ The remaining implications are similar.\n\\end{proof}\n\n\\medskip\n\n\\textbf{Remark }The Lipschitz constant of $K$ obtained for the second\nstatement of Theorem 1 is not the best possible. 
By using better derivative\nestimates, one can show that for any $\\delta>0,$ we may arrange $\\left\\Vert\nK^{\\prime}\\right\\Vert \\leq\\left( \\eta+\\delta\\right) \\left( 2\\left(\n2+\\delta\\right) M^{2}+1\\right) .$ This should be compared with the recent\nresult in \\cite{AFLR}, where it is shown in particular that for separable\nHilbert spaces $X$, any Lipschitz, real-valued function on $X$ can be\nuniformly approximated by $C^{\\infty}$ smooth functions with Lipschitz\nconstants arbitrarily close to the Lipschitz constant of $f.$ It is open\nwhether such a result holds outside the Hilbert space setting.\n\n\\medskip\n\n{\\small Acknowledgement\\ \\ The author wishes to thank D. Azagra and J.\nJaramillo for bringing to his attention the problems addressed in this note\nduring the rsme-ams 2003 conference in Sevilla. Also we want to thank P.\nH\\'{a}jek for pointing out \\cite[Lemma 4]{HJ} which enabled us to correct and\nsimplify an earlier version of this corrigendum. }\n\n\\medskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nThere is increasing interest in seeking novel electro-optic\nmaterials analogous to the famous Sr$_{x}$Ba$_{1-x}$Nb$_{2}$O$_6$\n(SBN, $0.32\\leq x\\leq 0.75$) that belongs to the\n(A$_1)_2$(A$_2)_4$(B$_1)_2$(B$_2)_8$O$_{30}$ tetragonal tungsten\nbronze (TTB) structure with the non-centrosymmetric space group P4{\\it bm} at\nroom temperature (see Fig.\\protect\\ref{Fig0})\n\\protect\\cite{Podlozhenov,Chernaya1,Chernaya2} and shows excellent\nelectro-optic and pyroelectric properties \\protect\\cite{Glass} and typical\nrelaxor behaviors.\\protect\\cite{Kleemann1,Kleemann2} Recently,\nCa$_{x}$Ba$_{1-x}$Nb$_{2}$O$_6$ (CBN, $0.2\\leq x\\leq 0.4$) single\ncrystals have received considerable attention due to their\nexcellent electro-optic properties and higher working temperature\nin comparison with SBN.\\protect\\cite{Ebser,Burianek,Muehlberg,Qi,Song} The\nferroelectricity of CBN was first 
discovered by Smolenskii et\nal.\\protect\\cite{Smolenskii} CBN alloys also have the ferroelectric TTB\nstructure at room temperature; however, contrary to SBN, where\nSr ions randomly occupy both of the two large A$_1$ and A$_2$ sites in\nthe tungsten bronze framework of NbO$_6$ octahedra, the smaller Ca ions\nin CBN occupy only the A$_1$ sites in the structure. It has been\ndemonstrated experimentally that the smaller A$_1$ site is almost\nexclusively occupied by Ca while the larger A$_2$ site is occupied predominantly\nby Ba in the CBN single crystal with congruent melting composition\n($x$=0.28).\\protect\\cite{Graetsch1,Graetsch2} In ferroelectric CBN alloys,\nniobium atoms are displaced from the centers of their coordination\npolyhedra and shifted along the tetragonal $c$-axis, which is\nconsidered to be the origin of the spontaneous electric\npolarization.\\protect\\cite{Chernaya2, Abrahams}\n\nCurrently, most investigations on CBN alloys are focused on the\ncrystal growth\\protect\\cite{Ebser,Song}, dielectric,\\protect\\cite{Qi,Niemer}\nferroelectric,\\protect\\cite{Qi} optic,\\protect\\cite{Ebser2,Heine,Sheng} and elastic\n\\protect\\cite{Pandey,Pandey2,Pandey3,Suzuki} properties of the CBN single\ncrystal with congruent melting composition $x$=0.28. Despite these\ninvestigations on CBN, there is still a lack of full understanding\nof the basic issues in CBN alloys. 
One important problem is whether CBN\nalloys are relaxors (such as their isostructural compounds, the SBN\nalloys\\protect\\cite{Kleemann2,Blinc,Banys,Miga}), which are characterized by\na broad peak of dielectric susceptibility with strong frequency\ndispersion in the radio frequency over a large temperature range\n\\protect\\cite{Miga,Levstik,Fu1,Fu2} and a smeared ferroelectric phase\ntransition\\protect\\cite{Fu1,Taniguchi} without an obvious heat capacity\npeak,\\protect\\cite{Moriya} or ferroelectrics (such as the normal\nferroelectric BaTiO$_3$), which have a well-defined\npara-ferroelectric thermal phase transition at the Curie point $T_{\\rm\nc}$ \\protect\\cite{Fu3} but show polarization precursor dynamics before\nthe transition into the ferroelectric\nphase.\\protect\\cite{Burns,Tai,Ziebinska,Ko,Dulkin,Pugachev} Since CBN is an\nisostructural compound of SBN, in which typical relaxor phenomena have\nbeen clearly demonstrated,\\protect\\cite{Banys,Miga} it is naturally expected\nthat CBN should show relaxor behaviors such as a broad peak of\ndielectric susceptibility with strong frequency dispersion due to\nthe dynamics of polar-nano-regions (PNRs) occurring in the\nparaelectric mother phase. Following this scenario, several attempts\nhave been made to confirm the relaxor behaviors in\nCBN.\\protect\\cite{Pandey,Pandey2,Pandey3,Suzuki} From the investigations on\nlattice strain and thermal expansion of the CBN single crystal with\ncongruent melting composition $x$=0.28, Pandey et al. found a\ndeviation from the linear temperature dependence of the lattice strain\nand an anomalous thermal expansion in this single crystal.\\protect\\cite{Pandey} They\nattribute these anomalous elastic behaviors to relaxor phenomena\noccurring in the CBN crystal. 
They further suggested the Burns temperature\protect\cite{Burns} $T_{\rm B}$, which characterizes the onset of dynamic PNRs, to be 1100 K, and the intermediate temperature $T^*$, which marks the beginning of PNR freezing,\protect\cite{Roth,Dkhil} to be 800 K for CBN with $x=0.28$.\protect\cite{Pandey} On the other hand, a recent Brillouin scattering study proposed $T_{\rm B}$ and $T^*$ to be approximately 790 K and 640 K, respectively, for CBN with the same composition.\protect\cite{Suzuki} The difference between the $T_{\rm B}$ and $T^*$ values estimated by different techniques for the same compound is surprisingly large. Clearly, the existence and exact values of $T_{\rm B}$ and $T^*$ in CBN need to be clarified by further investigations.\n\nIt should be noted that the existence of $T_{\rm B}$ and $T^*$ is not a characteristic property of relaxors alone. In a normal ferroelectric such as BaTiO$_3$, the existence of these two characteristic temperatures has been clearly demonstrated.\protect\cite{Burns,Dulkin} In early investigations, Burns et al.\protect\cite{Burns} demonstrated that the temperature dependence of the optical index of refraction, $n(T)$, deviates from its high-temperature extrapolation between $T_{\rm c}$ and $T_{\rm c} + 180$ K in the paraelectric phase of a BaTiO$_3$ crystal. This deviation is commonly accepted to be due to local polarization precursor dynamics before the para-ferroelectric phase transition in BaTiO$_3$; the temperature at which the deviation sets in is therefore generally called the Burns temperature $T_{\rm B}$, and it is widely used to characterize the appearance of PNRs in the paraelectric phase of a ferroelectric or relaxor. 
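The Burns construction can be summarized compactly. A standard relation from the Burns-Dacol analysis (quoted here only for orientation, not a result of the present measurements) links the deviation of the refractive index from its linear high-temperature extrapolation $n_{\rm extrap}(T)$ to the mean-square local polarization $\langle P_{\rm d}^{2}\rangle$ through the quadratic electro-optic coefficients $g$,\n\begin{equation}\n |n(T)-n_{\rm extrap}(T)| \propto g\,\langle P_{\rm d}^{2}\rangle ,\n\end{equation}\nso that $T_{\rm B}$ is simply the temperature below which $\langle P_{\rm d}^{2}\rangle$ becomes nonzero. 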
In addition to the optical measurements,\protect\cite{Burns,Ziebinska,Pugachev} various other experiments, such as pulsed X-ray laser measurements,\protect\cite{Tai,Namikawa} acoustic emission,\protect\cite{Dulkin} and Brillouin light scattering,\protect\cite{Ko} all show the existence of local polarization in BaTiO$_3$ before the transition into the ferroelectric phase. Theoretically, it has been proposed that local dynamical precursor domains are common to perovskite ferroelectrics and that these polar domains grow upon approaching $T_{\rm c}$, coalescing into a homogeneously polarized state at $T_{\rm c}$.\protect\cite{Bussmann-Holder1,Bussmann-Holder2} Similarly, PNRs develop from $T_{\rm B}$ in relaxors and cause the very strong frequency dispersion of the dielectric susceptibility. In contrast to relaxors, precursor domains in perovskite ferroelectrics such as BaTiO$_3$ need not give rise to a frequency-dependent dielectric response at radio frequencies.\protect\cite{Bussmann-Holder1}\n\nFurthermore, a sharp heat-capacity peak has been observed at the para-ferroelectric phase transition in a CBN crystal with $x=0.311$.\protect\cite{Muehlberg} Also, in dielectric measurements on the CBN single crystal with congruent melting composition $x$=0.28, the para-ferroelectric phase transition temperature appears to be independent of frequency.\protect\cite{Qi} All these findings for CBN are in sharp contrast to the characteristic properties of relaxors, namely a smeared phase transition\protect\cite{Fu1,Taniguchi} without an evident heat-capacity peak\protect\cite{Moriya} and a very strong frequency dependence of the dielectric response at radio frequencies around the temperature $T_{\rm m}$ at which the dielectric response is maximal. 
These results suggest that CBN alloys may be classified as ferroelectrics with a thermal phase transition associated with polarization precursor dynamics, rather than as relaxors.\n\n\nIn this study, we first determined the solid solution limit of CBN ferroelectric alloys. We then investigated the para-ferroelectric phase transition in these alloys by dielectric measurements and differential scanning calorimetry (DSC). Contrary to the relaxor picture expected in previous investigations,\protect\cite{Pandey,Pandey2,Pandey3} we clearly demonstrate that CBN alloys can be classified as ferroelectrics with a first-order thermal phase transition associated with precursor dynamics over a large temperature range above $T_{\rm c}$, as in BaTiO$_3$.\protect\cite{Burns,Ziebinska,Pugachev} We also found that the local polarizations grow exponentially over the temperature range $T_{\rm c}<T<T_{\rm B}$.\n\nSecondary phases appeared for $x<0.19$ or $x>0.32$. These facts indicate that single-phase CBN ferroelectric alloys are available only in the composition range $0.19\leq x\leq0.32$.\n\nThe lattice parameters of CBN alloys at room temperature were then determined by the method of least squares using twelve reflections with 2$\theta>50$ degrees. The results are shown in Fig.\protect\ref{fig2}. In CBN alloys with the ferroelectric TTB structure, the $a$-axis lattice constant is nearly unchanged with composition within the error range. In contrast, the polar $c$-axis lattice constant shortens with increasing Ca concentration, which results in a corresponding decrease of the unit cell volume. This result is expected because the ionic radii of Ca and Ba are 1.34 {\AA} and 1.61 {\AA},\protect\cite{Shannon} respectively, so Ca is the smaller ion. In spite of the large change in composition $x$ ($\Delta x=0.13$), the corresponding change in the unit cell volume is surprisingly small (approximately 0.65\%). 
This situation differs significantly from the substitution of Ca for Ba in Ba$_{1-x}$Ca$_x$TiO$_3$ perovskite oxides, which shows an approximately 3.5\% reduction in the unit cell volume at the same level of Ca substitution.\protect\cite{Fu3} This fact suggests that the stacking structure of the TTB-type CBN crystal is mainly determined by the framework of NbO$_6$ octahedra, as shown in Fig.\protect\ref{Fig0}.\n\n\n\subsection{Thermal phase transition}\n\nTo investigate the para-ferroelectric phase transition in CBN alloys, we measured the dielectric susceptibility as a function of temperature in a frequency range from 100 Hz to 100 kHz. The results are shown in Fig.\protect\ref{fig3} and Fig.\protect\ref{fig4}, respectively. The dielectric loss at lower frequencies ($\leq$ 1 kHz) was large at temperatures above approximately 500 K and is not shown here; it is very likely due to the thermal activation of vacancies in the samples. This large dielectric loss made it difficult to determine reliable values of the dielectric susceptibility at these low frequencies and high temperatures. However, as shown in Fig.\protect\ref{fig4}, we observed that (1) the Curie point does not depend on frequency in the range from 100 Hz to 100 kHz, and (2) the dielectric susceptibility measured at frequencies between 10 kHz and 100 kHz is nearly independent of frequency in the temperature range of 300 K - 870 K. 
We thus consider that the dielectric susceptibility measured at frequencies between 10 kHz and 100 kHz is free from the effects of vacancies and reflects the intrinsic dielectric response of CBN alloys; these data are used in the analysis below. Clearly, the frequency and temperature dependence of the dielectric response in CBN alloys is in sharp contrast to that observed in relaxors.\protect\cite{Fu1} Instead, the dielectric behavior of CBN is similar to that observed in normal ferroelectrics such as BaTiO$_3$.\protect\cite{Fu3}\n\nThe temperature dependence of the dielectric susceptibility shown in Fig.\protect\ref{fig4} clearly indicates that the para-ferroelectric phase transition has a thermal hysteresis: the Curie points on heating and on cooling have different values. The difference in $T_{\rm c}$ between heating and cooling is 12.4 K for $x=0.19$ and increases to 25.2 K for $x=0.32$. This indicates that the substitution of Ca for Ba in CBN enhances the thermal hysteresis of the para-ferroelectric phase transition. On the other hand, this substitution lowers the Curie point $T_{\rm c}$ of CBN alloys, as demonstrated clearly in Fig.\protect\ref{fig3}.\n\nTo further confirm the nature of the thermal phase transition in CBN ferroelectric alloys, we also measured the change in enthalpy during the phase transition by DSC. The results are shown in Fig.\protect\ref{fig5}. Although our equipment does not allow us to determine the heat capacity at the phase transition, we clearly observed a change in enthalpy during the transition. The DSC measurements also clearly indicate that the phase transition has a thermal hysteresis. This result is in good agreement with the dielectric measurements shown in Fig.\protect\ref{fig3}. 
Our result also accords well with that reported for a CBN crystal with $x=0.31$, which shows a sharp heat-capacity peak at the phase transition.\protect\cite{Muehlberg}\n\nAll the facts presented above indicate that CBN alloys undergo a thermal phase transition. It can be concluded that the para-ferroelectric phase transition in CBN alloys is of first order and has a thermal hysteresis of 12.4 K to 25.2 K depending on the Ca concentration.\n\n\n\subsection{Precursor behaviors}\n\nIn a recent investigation of the CBN single crystal with congruent melting composition $x$=0.28, birefringence has been demonstrated to occur in a temperature region $T_{\rm c}<T<T_{\rm B}$. To examine such precursor behavior for $T>T_{\rm c}$, we performed an analysis of the temperature dependence of the dielectric susceptibilities of CBN alloys within the solid solution limit. As mentioned in the preceding section, we only used the data obtained at 100 kHz, since these data are considered to be free from the effects of vacancies on the dielectric response. As shown in Fig.\protect\ref{fig6}(b), the dielectric susceptibility $\chi'$ obeys the Curie law,\n\begin{equation}\label{eq2}\n \chi'=\pm C\/(T-T_0)\;\; {\rm for}\;\; T<T_{\rm c}\;\;{\rm or}\;\; T>T_{\rm B},\n\end{equation}\nwhere $C$ is the Curie constant and $T_0$ is the Curie-Weiss temperature. Upon cooling, the dielectric susceptibility deviates from the Curie law at a characteristic temperature. Since the existence of the Burns temperature $T_{\rm B}$ and the intermediate temperature $T^*$ is not well established for CBN,\protect\cite{Pandey,Suzuki} we tentatively identify this characteristic temperature as the Burns temperature $T_{\rm B}$. The values of $T_{\rm B}$ were estimated to be approximately $T_{\rm c}$+88 K for $x=0.19$ and $T_{\rm c}$+143 K for $x=0.32$, respectively, and $T_{\rm B}$ varies nearly linearly with composition, as shown in the phase diagram (Fig.\protect\ref{fig9}(a)). 
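In practice, this criterion amounts to monitoring the excess susceptibility over the Curie-law extrapolation (written here in our notation as a restatement of the procedure, not an additional measurement),\n\begin{equation}\n \Delta\chi'(T)=\chi'(T)-\frac{C}{T-T_0},\qquad \Delta\chi'\simeq 0\;\;{\rm for}\;\;T>T_{\rm B},\qquad \Delta\chi'>0\;\;{\rm for}\;\;T_{\rm c}<T<T_{\rm B},\n\end{equation}\nwith $T_{\rm B}$ read off as the temperature at which $\Delta\chi'$ departs from zero on cooling. 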
The fitting parameters obtained from the Curie law are summarized in Fig.\protect\ref{fig7}. It should be noted that the Curie constant in the paraelectric phase ($T>T_{\rm B}$) of CBN alloys has a magnitude similar to that of a BaTiO$_3$ single crystal ($1.5 \times10^5$ K).\protect\cite{Shiozaki}\n\n\n\nFor temperatures higher than $T_{\rm B}$, the dielectric susceptibility of CBN alloys follows the Curie law exactly, which indicates that the dielectric response at these high temperatures can be attributed to the lattice dynamics. However, on cooling, a deviation from the Curie law was observed from $T\approx T_{\rm B}$. As mentioned above, this deviation of the dielectric susceptibility from the Curie law can reasonably be attributed to the polarization precursor dynamics occurring before the para-ferroelectric phase transition in CBN alloys. Therefore, it can be considered that both the lattice and the precursor dynamics contribute to the total dielectric susceptibility for $T_{\rm c}<T<T_{\rm B}$, whereas only the lattice dynamics contribute for $T>T_{\rm B}$. On cooling to $T\approx T_{\rm B}$, polarization precursors emerge in the paraelectric mother phase of CBN alloys. On further cooling, these local polarizations grow exponentially before the transition into a ferroelectric phase with a non-centrosymmetric tetragonal structure at $T=T_{\rm c}$. This ferroelectric thermal phase transition is of first order, and the transition temperature on heating is $12.4\sim 25.2$ K higher than that on cooling. As shown in the phase diagram, an increase in the Ca concentration leads to a larger thermal hysteresis together with a lowering of $T_{\rm c}$ and $T_{\rm B}$. On the other hand, the temperature range over which the precursors exist becomes larger with increasing substitution of the smaller Ca ions for Ba ions. 
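The two-component response described above can be sketched as a schematic decomposition (our notation; $\tau$ is a hypothetical composition-dependent temperature scale parametrizing the exponential growth, not a quantity fitted in this work),\n\begin{equation}\n \chi'_{\rm tot}(T)=\frac{C}{T-T_0}+\chi'_{\rm prec}(T),\qquad \chi'_{\rm prec}(T)\propto\exp\!\left[(T_{\rm B}-T)\/\tau\right]\;\;{\rm for}\;\;T_{\rm c}<T<T_{\rm B},\n\end{equation}\nwhere the first term is the lattice (Curie-law) contribution and the second is the precursor contribution, which becomes negligible above $T_{\rm B}$. 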
It is still unclear why Ca substitution enlarges the precursor growth region, the local polarization developed at $T_{\rm c}^+$, and the thermal hysteresis of the phase transition in CBN alloys. A likely reason is related to the increase of vacancies on the larger A$_2$ site of the TTB structure with increasing substitution of Ca for Ba. Because Ca almost exclusively occupies the smaller A$_1$ site in the CBN TTB structure, an increase of the Ca concentration naturally increases the number of vacancies on the A$_2$ site initially occupied by Ba.\protect\cite{Graetsch1,Graetsch2} The increase of vacancies on the larger A$_2$ site with $x$ may give rise to larger fluctuations of the local lattice distortion in CBN alloys, which are thought to be the source of the local polarizations. This suggestion is supported by recent investigations of the variation of lattice strain with $x$ in CBN alloys, in which the thermal expansion along the polar axis becomes larger with increasing Ca concentration.\protect\cite{Pandey3}\n\nOn the other hand, the substitution of Ca for Ba leads to a nearly linear reduction of $T_{\rm c}$ and $T_{\rm B}$ with $x$. As is well known for many normal ferroelectrics, pressure can significantly reduce the ferroelectric transition temperature. In a solid solution, pressure can arise from the chemical substitution of smaller ions for the original larger ones, and this kind of pressure is normally called chemical pressure. Chemical-pressure effects on the reduction of $T_{\rm c}$ have been well studied in many ferroelectric alloys with the perovskite structure, such as (Ba$_{1-x}$Sr$_x$)TiO$_3$,\protect\cite{Shiozaki} in which the $T_{\rm c}$ reduction is generally considered to be due to the reduction of the unit cell volume by the chemical pressure. 
As mentioned in the preceding section, in CBN alloys the substitution of smaller Ca ions for Ba ions leads to only an extremely small change in the unit cell volume ($\Delta=(V(x)-V(0.19))\/V(0.19)$), with $\Delta$ less than 0.65\% within the solid solution limit. Thus, the chemical-pressure effect on the reduction of $T_{\rm c}$ can be considered vanishingly small in CBN alloys.\n\nThe spontaneous polarization in CBN has been reported to originate from the Nb displacement along the $c$-axis of the TTB structure.\protect\cite{Graetsch1} It can therefore be expected that lattice distortion along the $c$-axis influences the Nb displacement. Figure~\protect\ref{fig9}(b) shows the variation of $T_{\rm c}$ with the $c$-axis lattice constant. Indeed, one can see that $T_{\rm c}$ is reduced as the $c$-axis lattice constant shrinks. This indicates that the shrinkage of the $c$-axis lattice reduces the space available for the Nb displacement, thus weakening the ferroelectricity and lowering $T_{\rm c}$. However, since the reduction of the $c$-axis lattice constant is only approximately 0.03 {\AA} within the solid solution, this lattice shrinkage is not enough to fully explain the large reduction of $T_{\rm c}$ in CBN alloys; other mechanisms may exist. The exact reason for the large reduction of $T_{\rm c}$ with Ca substitution remains to be clarified by further structural investigations.\n\n\n\n\section{Summary}\n\nIn summary, we have studied the phase transition in CBN alloys within the solid solution limit ranging from $x=0.19$ to $x=0.32$. In contrast to their isostructural compounds, the SBN alloys, which normally show typical relaxor behavior, CBN alloys behave like the normal ferroelectric BaTiO$_3$, showing precursor dynamics before the transition into the ferroelectric phase. 
For $T>T_{\\rm B}$, CBN\nalloys obey classical Curie law and can be considered to be\nparaelectric. On further cooling toward $T_{\\rm c}$, local\npolarizations occurs in the paraelectric mother phase, and these\npolarization precursors grow exponentially as temperature lowering.\nFinally, a ferroelectric phase transition was realized at $T_{\\rm c}$. This thermal\nferroelectric phase transition is essentially of first-order. A\nphase diagram was then establish for CBN ferroelectric alloys. These\nfindings provide new insights on understanding the underlying\nphysics in CBN ferroelectric alloys.\n\n\n\n\n\n\n\\newpage\n\n\n\\section{Introduction}\n\nThis is the author's guide to \\revtex~4, the preferred submission\nformat for all APS journals. This guide is intended to be a concise\nintroduction to \\revtex~4. The documentation has been separated out\ninto smaller units to make it easier to locate essential\ninformation.\n\nThe following documentation is also part of the APS \\revtex~4\ndistribution. Updated versions of these will be maintained at\nthe \\revtex~4 homepage located at \\url{http:\/\/publish.aps.org\/revtex4\/}.\n\\begin{itemize}\n\\item \\textit{APS Compuscript Guide for \\revtex~4}\n\\item \\textit{\\revtex~4 Command and Options Summary}\n\\item \\textit{\\revtex~4 Bib\\TeX\\ Guide}\n\\item \\textit{Differences between \\revtex~4 and \\revtex~3}\n\\end{itemize}\nThis guide assumes a working \\revtex~4\ninstallation. Please see the installation guide included with the\ndistribution.\n\nThe \\revtex\\ system for \\LaTeX\\ began its development in 1986 and has\ngone through three major revisions since then. All versions prior to\n\\revtex~4 were based on \\LaTeX2.09 and, until now, \\revtex\\ did not\nkeep pace with the advances of the \\LaTeX\\ community and thus became\ninconvenient to work with. 
\revtex~4 is designed to remedy this by incorporating the following design goals:\n\n\begin{itemize}\n\item\nMake \revtex\ fully compatible with \LaTeXe; it is now a \LaTeXe\ document class, similar in function to the standard \classname{article} class.\n\n\item\nRely on standard \LaTeXe\ packages for common tasks, e.g.,\n\classname{graphicx},\n\classname{color}, and\n\classname{hyperref}.\n\n\item\nAdd or improve macros to support translation to tagged formats such as XML and SGML. This added markup will be key to enhancing the peer-review process and lowering production costs.\n\n\item\nProvide a closer approximation to the typesetting style used in \emph{Physical Review}.\n\n\item\nIncorporate new features, such as hypertext, to make \revtex\ a convenient and desirable e-print format.\n\n\item\nRelax the restrictions in \revtex\ that had only been necessary for typesetting journal camera-ready copy.\n\end{itemize}\n\nTo meet these goals, \revtex~4 is a complete rewrite with an emphasis on maintainability so that it will be easier to provide enhancements.\n\nThe \revtex~4 distribution includes both a template (\file{template.aps}) and a sample document (\file{apssamp.tex}). The template is a good starting point for a manuscript. 
In the following sections are instructions that should be sufficient for creating a paper using \revtex~4.\n\n\subsection{Submitting to APS Journals}\n\nAuthors using \revtex~4 to prepare a manuscript for submission to \textit{Physical Review} or \textit{Reviews of Modern Physics} must also read the companion document \textit{APS Compuscript Guide for \revtex~4} distributed with \revtex\ and follow the guidelines detailed there.\n\nFurther information about the compuscript program of the American Physical Society may be found at \url{http:\/\/publish.aps.org\/ESUB\/}.\n\n\subsection{Contact Information}\label{sec:resources}%\nAny bugs, problems, or inconsistencies should be reported to \revtex\ support at \verb+revtex@aps.org+. Reports should include information on the error and a \textit{small} sample document that manifests the problem if possible (please don't send large files!).\n\n\section{Some \LaTeXe\ Basics}\nA primary design goal of \revtex~4 was to make it as compatible with standard \LaTeXe\ as possible so that authors may take advantage of all that \LaTeXe\ offers. In keeping with this goal, much of the special formatting that was built in to earlier versions of \revtex\ is now accomplished through standard \LaTeXe\ macros or packages. The books in the bibliography provide extensive coverage of all topics pertaining to preparing documents under \LaTeXe. They are highly recommended.\n\nTo accomplish its goals, \revtex~4 must sometimes patch the underlying \LaTeX\ kernel. This means that \revtex~4 requires a fairly recent version of \LaTeXe. Versions prior to 1996\/12\/01 may not work correctly. 
\revtex~4 will be maintained to be compatible with future versions of \LaTeXe.\n\n\subsection{Useful \LaTeXe\ Markup}\n\LaTeXe\ markup is the preferred way to accomplish many basic tasks.\n\n\subsubsection{Fonts}\n\nBecause \revtex~4 is based upon \LaTeXe, it inherits all of the macros used for controlling fonts. Of particular importance are the \LaTeXe\ macros \cmd{\textit}, \cmd{\textbf}, \cmd{\texttt} for changing to an italic, bold, or typewriter font, respectively. One should always use these macros rather than the lower-level \TeX\ macros \cmd{\it}, \cmd{\bf}, and \cmd{\tt}. The \LaTeXe\ macros offer improvements such as better italic correction and scaling in super- and subscripts. Table~\ref{tab:fonts} summarizes the font selection commands in \LaTeXe.\n\n\begin{table}\n\caption{\label{tab:fonts}\LaTeXe\ font commands}\n\begin{ruledtabular}\n\begin{tabular}{ll}\n\multicolumn{2}{c}{\textbf{Text Fonts}}\\\n\textbf{Font command} & \textbf{Explanation} \\\n\cmd\textit\marg{text} & Italics\\\n\cmd\textbf\marg{text} & Boldface\\\n\cmd\texttt\marg{text} & Typewriter\\\n\cmd\textrm\marg{text} & Roman\\\n\cmd\textsl\marg{text} & Slanted\\\n\cmd\textsf\marg{text} & Sans Serif\\\n\cmd\textsc\marg{text} & Small Caps\\\n\cmd\textmd\marg{text} & Medium Series\\\n\cmd\textnormal\marg{text} & Normal Series\\\n\cmd\textup\marg{text} & Upright Series\\\n &\\\n\multicolumn{2}{c}{\textbf{Math Fonts}}\\\n\cmd\mathit\marg{text} & Math Italics\\\n\cmd\mathbf\marg{text} & Math Boldface\\\n\cmd\mathtt\marg{text} & Math Typewriter\\\n\cmd\mathsf\marg{text} & Math Sans Serif\\\n\cmd\mathcal\marg{text} & Calligraphic\\\n\cmd\mathnormal\marg{text} & Math Normal\\\n\cmd\bm\marg{text}& Bold math for Greek letters\\\n & and other symbols\\\n\cmd\mathfrak\marg{text}\footnotemark[1] & 
Fraktur\\\n\cmd\mathbb\marg{text}\footnotemark[1] & Blackboard Bold\\\n\end{tabular}\n\end{ruledtabular}\n\footnotetext[1]{Requires \classname{amsfonts} or \classname{amssymb} class option}\n\end{table}\n\n\subsubsection{User-defined macros}\n\LaTeXe\ provides several macros that enable users to easily create new macros for use in their manuscripts:\n\begin{itemize}\n\footnotesize\n\item \cmd\newcommand\marg{\\command}\oarg{narg}\oarg{opt}\marg{def} \n\item \cmd\newcommand\verb+*+\marg{\\command}\oarg{narg}\oarg{opt}\marg{def}\n\item \cmd\renewcommand\marg{\\command}\oarg{narg}\oarg{opt}\marg{def}\n\item \cmd\renewcommand\verb+*+\marg{\\command}\oarg{narg}\oarg{opt}\marg{def}\n\item \cmd\providecommand\marg{\\command}\oarg{narg}\oarg{opt}\marg{def}\n\item \cmd\providecommand\verb+*+\marg{\\command}\oarg{narg}\oarg{opt}\marg{def}\n\end{itemize}\nHere \meta{\\command} is the name of the macro being defined, \meta{narg} is the number of arguments the macro takes, \meta{opt} are optional default values for the arguments, and \meta{def} is the actual macro definition. \cmd\newcommand\ creates a new macro, \cmd\renewcommand\ redefines a previously defined macro, and \cmd\providecommand\ will define a macro only if it hasn't been defined previously. The *-ed versions are an optimization that indicates that the macro arguments will always be ``short'' arguments. This is almost always the case, so the *-ed versions should be used whenever possible.\n\nThe use of these macros is preferred over using plain \TeX's low-level macros such as \cmd\def{}, \cmd\edef{}, and \cmd\gdef{}. APS authors must follow the \textit{APS Compuscript Guide for \revtex~4} when defining macros.\n\n\subsubsection{Symbols}\n\n\LaTeXe\ has added some convenient commands for some special symbols and effects. These are summarized in Table~\ref{tab:special}. 
See\n\cite{Guide} for details.\n\n\begin{table}\n\caption{\label{tab:special}\LaTeXe\ commands for special symbols and effects}\n\begin{ruledtabular}\n\begin{tabular}{lc}\nCommand & Symbol\/Effect\\\n\cmd\textemdash & \textemdash\\\n\cmd\textendash & \textendash\\\n\cmd\textexclamdown & \textexclamdown\\\n\cmd\textquestiondown & \textquestiondown\\\n\cmd\textquotedblleft & \textquotedblleft\\\n\cmd\textquotedblright & \textquotedblright\\\n\cmd\textquoteleft & \textquoteleft\\\n\cmd\textquoteright & \textquoteright\\\n\cmd\textbullet & \textbullet\\\n\cmd\textperiodcentered & \textperiodcentered\\\n\cmd\textvisiblespace & \textvisiblespace\\\n\cmd\textcompwordmark & Break a ligature\\\n\cmd\textcircled\marg{char} & Circle a character\\\n\end{tabular}\n\end{ruledtabular}\n\end{table}\n\n\LaTeXe\ also removed some symbols that were previously automatically available in \LaTeX 2.09. These symbols are now contained in a separate package \classname{latexsym}. To use these symbols, include the package using:\n\begin{verbatim}\n\usepackage{latexsym}\n\end{verbatim}\n\n\subsection{Using \LaTeXe\ packages with \revtex}\label{sec:usepackage}%\n\nMany \LaTeXe\ packages are available, for instance, on CTAN at \url{ftp:\/\/ctan.tug.org\/tex-archive\/macros\/latex\/required\/} and at \url{ftp:\/\/ctan.tug.org\/tex-archive\/macros\/latex\/contrib\/}, or may be available on other distribution media, such as the \TeX\ Live CD-ROM \url{http:\/\/www.tug.org\/texlive\/}. Some of these packages are automatically loaded by \revtex~4 when certain class options are invoked and are, thus, ``required''. They will either be distributed with \revtex\ or are already included with a standard \LaTeXe\ distribution.\n\nRequired packages are automatically loaded by \revtex\ on an as-needed basis. Other packages should be loaded using the \cmd\usepackage\ command. 
To load the \classname{hyperref} package, the document preamble might look like:\n\begin{verbatim}\n\documentclass{revtex4}\n\usepackage{hyperref}\n\end{verbatim}\n\nSome common (and very useful) \LaTeXe\ packages are \textit{a priori} important enough that \revtex~4 has been designed to be specifically compatible with them. A bug stemming from the use of one of these packages in conjunction with any of the APS journals may be reported by contacting \revtex\ support.\n\begin{description}\n\item[\textbf{AMS packages}] \revtex~4 is compatible with and depends upon the AMS packages\n\classname{amsfonts},\n\classname{amssymb}, and\n\classname{amsmath}. In fact, \revtex~4 requires use of these packages to accomplish some common tasks. See Section~\ref{sec:math} for more.\n\revtex~4 requires version 2.0 or higher of the AMS-\LaTeX\ package.\n\n\item[\textbf{array and dcolumn}]\nThe \classname{array} and \classname{dcolumn} packages are part of \LaTeX's required suite of packages. \classname{dcolumn} is required to align table columns on decimal points (and it in turn depends upon the \classname{array} package).\n\n\item[\textbf{longtable}]\n\file{longtable.sty} may be used for large tables that will span more than one page. \revtex~4 dynamically applies patches to longtable.sty so that it will work in two-column mode.\n\n\item[\textbf{hyperref}] \file{hyperref.sty} is a package by Sebastian Rahtz that is used for putting hypertext links into \LaTeXe\ documents. \revtex~4 has hooks to allow e-mail addresses and URL's to become hyperlinks if \classname{hyperref} is loaded.\n\end{description}\n\nOther packages will conflict with \revtex~4 and should be avoided. Usually such a conflict arises because the package adds enhancements that \revtex~4 already includes. 
Here are some common\npackages that clash with \\revtex~4:\n\\begin{description}\n\\item[\\textbf{multicol}] \\file{multicol.sty} is a package by Frank Mittelbach\nthat adds support for multiple columns. In fact, early versions of\n\\revtex~4 used \\file{multicol.sty} for precisely this. However, to\nimprove the handling of floats, \\revtex~4 now has its own macros for\ntwo-column layout. Thus, it is not necessary to use \\file{multicol.sty}.\n\n\\item[\\textbf{cite}] Donald Arseneau's \\file{cite.sty} is often used to provide\nsupport for sorting a \\cmd\\cite\\ command's arguments into numerical\norder and to collapse consecutive runs of reference numbers. \\revtex~4\nhas this functionality built-in already via the \\classname{natbib} package.\n\n\\item[\\textbf{endfloat}] The same functionality can be accomplished\nusing the \\classoption{endfloats} class option.\n\n\\item[\\textbf{float}] \\revtex~4 already contains a lot of this\nfunctionality.\n\\end{description}\n\n\\section{The Document Preamble}\n\nThe preamble of a \\LaTeX\\ document is the set of commands that precede\nthe \\envb{document} line. It contains a\n\\cmd\\documentclass\\ line to load the \\revtex~4 class (\\textit{i.~e.},\nall of the \\revtex~4 macro definitions), \\cmd\\usepackage\\ macros to\nload other macro packages, and other macro definitions.\n\n\\subsection{The \\emph{documentclass} line}\nThe basic formatting of the manuscript is controlled by setting\n\\emph{class options} using\n\\cmd\\documentclass\\oarg{options}\\aarg{\\classname{revtex4}}.\nThe macro \\cmd\\documentclass\\ \nreplaces the \\cmd\\documentstyle\\ macro of \\LaTeX2.09. The optional\narguments that appear in the square brackets control the layout of the\ndocument. 
At this point, one only needs to choose a journal style (\classoption{pra}, \classoption{prb},\n\classoption{prc}, \classoption{prd},\n\classoption{pre}, \classoption{prl}, \classoption{prstab},\nand \classoption{rmp}) and either \classoption{preprint} or \classoption{twocolumn}. Usually, one would want to use \classoption{preprint} for draft papers. \classoption{twocolumn} gives the \emph{Physical Review} look and feel. Paper size options are also available; in particular, \classoption{a4paper} is supported, as are the rest of the standard \LaTeX\ paper sizes. A full list of class options is given in the \textit{\revtex~4 Command and Options Summary}.\n\n\subsection{Loading other packages}\nOther packages may be loaded into a \revtex~4 document by using the standard \LaTeXe\ \cmd\usepackage\ command. For instance, to load the \classname{graphics} package, one would use \verb+\usepackage{graphics}+.\n\n\section{The Front Matter}\label{sec:front}\n\nAfter choosing the basic look and feel of the document by selecting the appropriate class options and loading in whatever other macros are needed, one is ready to move on to creating a new manuscript. After the preamble, be sure to put in a \envb{document} line (and put in an \enve{document} as well). This section describes the macros \revtex~4 provides for formatting the front matter of the article. The behavior and usage of these macros can be quite different from those provided in either \revtex~3 or \LaTeXe. See the included document \textit{Differences between \revtex~4 and \revtex~3} for an overview of these differences.\n\n\subsection{Setting the title}\n\nThe title of the manuscript is simply specified by using the \cmd\title\aarg{title} macro. A \verb+\\+ may be used to put a line break in a long title.\n\n\subsection{Specifying a date}%\n\nThe \cmd\date\marg{date} command outputs the date on the manuscript. 
Using \\cmd\\today\\ will cause \\LaTeX{} to insert the\ncurrent date whenever the file is run:\n\\begin{verbatim}\n\\date{\\today}\n\\end{verbatim}\n\n\\subsection{Specifying authors and affiliations}\n\nThe macros for specifying authors and their affiliations have\nchanged significantly for \\revtex~4. They have been improved to save\nlabor for authors and in production. Authors and affiliations are\narranged into groupings called, appropriately enough, \\emph{author\ngroups}. Each author group is a set of authors who share the same set\nof affiliations. Author names are specified with the \\cmd\\author\\\nmacro while affiliations (or addresses) are specified with the\n\\cmd\\affiliation\\ macro. Author groups are specified by sequences of\n\\cmd\\author\\ macros followed by \\cmd\\affiliation\\ macros. An\n\\cmd\\affiliation\\ macro applies to all previously specified\n\\cmd\\author\\ macros which don't already have an affiliation supplied.\n\nFor example, if Bugs Bunny and Roger Rabbit are both at Looney Tune\nStudios, while Mickey Mouse is at Disney World, the markup would be:\n\\begin{verbatim}\n\\author{Bugs Bunny}\n\\author{Roger Rabbit}\n\\affiliation{Looney Tune Studios}\n\\author{Mickey Mouse}\n\\affiliation{Disney World}\n\\end{verbatim}\nThe default is to display this as \n\\begin{center}\nBugs Bunny and Roger Rabbit\\\\\n\\emph{Looney Tune Studios}\\\\\nMickey Mouse\\\\\n\\emph{Disney World}\\\\\n\\end{center}\nThis layout style for displaying authors and their affiliations is\nchosen by selecting the class option\n\\classoption{groupedaddress}. This option is the default for all APS\njournal styles, so it does not need to be specified explicitly.\nThe other major way of displaying this\ninformation is to use superscripts on the authors and\naffiliations. This can be accomplished by selecting the class option\n\\classoption{superscriptaddress}. 
To achieve the display\n\begin{center}\nBugs Bunny,$^{1}$ Roger Rabbit,$^{1,2}$ and Mickey Mouse$^{2}$\\\n\emph{$^{1}$Looney Tune Studios}\\\n\emph{$^{2}$Disney World}\\\n\end{center}\none would use the markup\n\begin{verbatim}\n\author{Bugs Bunny}\n\affiliation{Looney Tune Studios}\n\author{Roger Rabbit}\n\affiliation{Looney Tune Studios}\n\affiliation{Disney World}\n\author{Mickey Mouse}\n\affiliation{Disney World}\n\end{verbatim}\n\nNote that \revtex~4 takes care of the commas and \emph{and}'s that join\nthe author names together, as well as font selection and any\nsuperscript numbering. Only the author names and affiliations should\nbe given within their respective macros.\n\nThere is a third class option, \classoption{unsortedaddress}, for\ncontrolling author\/affiliation display. The default\n\classoption{groupedaddress} will actually sort authors into the\nappropriate author groups if one chooses to specify an affiliation for\neach author. The markup:\n\begin{verbatim}\n\author{Bugs Bunny}\n\affiliation{Looney Tune Studios}\n\author{Mickey Mouse}\n\affiliation{Disney World}\n\author{Roger Rabbit}\n\affiliation{Looney Tune Studios}\n\end{verbatim}\nwill result in the same display as for the first case given\nabove even though Roger Rabbit is specified after Mickey Mouse. To\navoid Roger Rabbit being moved into the same author group as Bugs\nBunny, use the\n\classoption{unsortedaddress} option instead. In general, it is safest\nto list authors in the order they should appear and specify\naffiliations for multiple authors rather than one at a time. This will\nafford the most independence for choosing the display option. Finally,\nit should be mentioned that the affiliations for the\n\classoption{superscriptaddress} are presented and numbered \nin the order that they are encountered. This means that the order\nwill usually follow the order of the authors. 
An alternative ordering\ncan be forced by including a list of \cmd\affiliation\ commands before\nthe first \cmd{\author} in the desired order. Then use the exact same\ntext for each affiliation when specifying them for each author.\n\nIf an author doesn't have an affiliation, the \cmd\noaffiliation\\nmacro may be used in the place of an \cmd\affiliation\ macro.\n\n\n\subsubsection{Collaborations}\n\nA collaboration name can be specified with the \cmd\collaboration\\nmacro. This is very similar to the \cmd\author\ macro, but it can only\nbe used with the class option \classoption{superscriptaddress}. The\n\cmd\collaboration\ macro should appear at the end of the list of\nauthors. The collaboration name will appear centered in parentheses\nbetween the list of authors and the list of\naffiliations. Because collaborations\ndon't normally have affiliations, one needs to follow the\n\cmd\collaboration\ with \cmd\noaffiliation.\n\n\subsubsection{Footnotes for authors, collaborations, affiliations or title}\label{sec:footau}\n\nOften one wants to specify additional information associated with an\nauthor, collaboration, or affiliation such as an e-mail address, an\nalternate affiliation, or some other ancillary information. \n\revtex~4 introduces several new macros just for this purpose. They\nare:\n\begin{itemize}\n\item\cmd\email\oarg{optional text}\aarg{e-mail address}\n\item\cmd\homepage\oarg{optional text}\aarg{URL}\n\item\cmd\altaffiliation\oarg{optional text}\aarg{affiliation}\n\item\cmd\thanks\aarg{miscellaneous text}\n\end{itemize}\nIn the first three, the \emph{optional text} will be prepended to the\nactual information specified in the required argument. \cmd\email\ and\n\cmd\homepage\ each have a default text for their optional arguments\n(`Electronic address:' and `URL:' respectively). The \cmd\thanks\\nmacro should only be used if none of the other three applies. 
Any\nauthor name can have multiple occurrences of these four macros. Note\nthat unlike the\n\cmd\affiliation\ macro, these macros only apply to the \cmd\author\\nthat directly precedes them. Any \cmd\affiliation\ \emph{must} follow\nthe other author-specific macros. A typical usage might be as follows:\n\begin{verbatim}\n\author{Bugs Bunny}\n\email[E-mail me at: ]{bugs@looney.com}\n\homepage[Visit: ]{http:\/\/looney.com\/}\n\altaffiliation[Permanent address: ]\n {Warner Brothers}\n\affiliation{Looney Tunes}\n\end{verbatim}\nThis would result in the footnote ``E-mail me at: \texttt{bugs@looney.com},\nVisit: \texttt{http:\/\/looney.com\/}, Permanent address: Warner\nBrothers'' being attached to Bugs Bunny. Note that:\n\begin{itemize}\n\item Only an e-mail address, URL, or affiliation should go in the\nrequired argument in the curly braces.\n\item The font is automatically taken care of.\n\item An explicit space is needed at the end of the optional text if one is\ndesired in the output.\n\item Use the optional arguments to provide customized\ntext only if there is a good reason to.\n\end{itemize}\n\nThe \cmd\collaboration, \cmd\affiliation, or even \cmd\title\ can\nalso have footnotes attached via these commands. If any ancillary data\n(\cmd\thanks, \cmd\email, \cmd\homepage, or\n\cmd\altaffiliation) are given in the wrong context (e.g., before any\n\cmd\title, \cmd\author, \cmd\collaboration, or \cmd\affiliation\\ncommand has been given), then a warning is given in the \TeX\ log, and\nthe command is ignored.\n\nDuplicate sets of ancillary data are merged, giving rise to a single\nshared footnote. However, this only applies if the ancillary data are\nidentical: even the order of the commands specifying the data must be\nidentical. 
Thus, for example, two authors can share a single footnote\nindicating a group e-mail address.\n\nDuplicate \\cmd\\affiliation\\ commands may be given in the course of the\nfront matter, without the danger of producing extraneous affiliations\non the title page. However, ancillary data should be specified for\nonly the first instance of any particular institution's\n\\cmd\\affiliation\\ command; a later instance with different ancillary\ndata will result in a warning in the \\TeX\\ log.\n\nIt is preferable to arrange authors into\nsets. Within each set all the authors share the same group of\naffiliations. For each author, give the \\cmd\\author\\ (and appropriate\nancillary data), then follow this author group with the needed group\nof \\cmd\\affiliation\\ commands.\n\nIf affiliations have been listed before the first\n\\cmd\\author\\ macro to ensure a particular ordering, be sure\nthat any later \\cmd\\affiliation\\ command for the given institution is\nan exact copy of the first, and also ensure that no ancillary data is\ngiven in these later instances.\n\n\nEach APS journal has a default behavior for the placement of these\nancillary information footnotes. The \\classoption{prb} option puts all\nsuch footnotes at the start of the bibliography while the other\njournal styles display them on the first page. One can override a\njournal style's default behavior by specifying explicitly the class\noption\n\\classoption{bibnotes} (puts the footnotes at the start of the\nbibliography) or \\classoption{nobibnotes} (puts them on the first page).\n\n\\subsubsection{Specifying first names and surnames}\n\nMany APS authors have names in which either the surname appears first\nor in which the surname is made up of more than one name. To ensure\nthat such names are accurately captured for indexing and other\npurposes, the \\cmd\\surname\\ macro should be used to indicate which portion\nof a name is the surname. 
Similarly, there is a \cmd\firstname\ macro\nas well, although usage of \cmd\surname\ should be sufficient. If an\nauthor's surname is a single name and written last, it is not\nnecessary to use these macros. These macros do nothing but indicate\nhow a name should be indexed. Here are some examples:\n\begin{verbatim}\n\author{Andrew \surname{Lloyd Weber}}\n\author{\surname{Mao} Tse-Tung}\n\end{verbatim}\n\n\subsection{The abstract}\nAn abstract for a paper is specified by using the \env{abstract}\nenvironment:\n\begin{verbatim}\n\begin{abstract}\nText of abstract\n\end{abstract}\n\end{verbatim}\nNote that in \revtex~4 the abstract must be specified before the\n\cmd\maketitle\ command and there is no need to embed it in an explicit\nminipage environment.\n\n\subsection{PACS codes}\nAPS authors are asked to supply suggested PACS codes with their\nsubmissions. The \cmd\pacs\ macro is provided as a way to do this:\n\begin{verbatim}\n\pacs{23.23.+x, 56.65.Dy}\n\end{verbatim}\nThe actual display of the PACS numbers below the abstract is\ncontrolled by two class options: \classoption{showpacs} and\n\classoption{noshowpacs}. In particular, this is now independent of\nthe \classoption{preprint} option. \classoption{showpacs} must be\nexplicitly included in the class options to display the PACS codes.\n\n\subsection{Keywords}\nA \cmd\keywords\ macro may also be used to indicate keywords for the\narticle. \n\begin{verbatim}\n\keywords{nuclear form; yrast level}\n\end{verbatim}\nThis will be displayed below the abstract and PACS (if supplied). Like\nPACS codes, the actual display of the keywords is controlled by\ntwo class options: \classoption{showkeys} and\n\classoption{noshowkeys}. An explicit \classoption{showkeys} must be\nincluded in the \cmd\documentclass\ line to display the keywords.\n\n\subsection{Institutional report numbers}\nInstitutional report numbers can be specified using the \cmd\preprint\\nmacro. 
These will be displayed in the upper lefthand corner of the\nfirst page. Multiple \cmd\preprint\ macros may be supplied (space is\nlimited though, so only three or fewer may actually fit). \n\n\subsection{maketitle}\nAfter specifying the title, authors, affiliations, abstract, PACS\ncodes, and report numbers, the final step for formatting the front\nmatter of the manuscript is to execute the \cmd\maketitle\ macro by\nsimply including it:\n\begin{verbatim}\n\maketitle\n\end{verbatim}\nThe \cmd\maketitle\ macro must follow all of the macros listed\nabove. The macro will format the front matter in accordance with the various\nclass options that were specified in the\n\cmd\documentclass\ line (either implicitly through defaults or\nexplicitly).\n\n\section{The body of the paper}\n\nFor typesetting the body of a paper, \revtex~4 relies heavily on\nstandard \LaTeXe\ and other packages (particularly those that are part\nof AMS-\LaTeX). Users unfamiliar with these packages should read the\nfollowing sections carefully. 
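Before turning to the individual elements of the body, it may help to see how the front matter macros of Sec.~\ref{sec:front} fit together with the body in a skeleton document (all of the names below are, of course, placeholders):\n\begin{verbatim}\n\documentclass[prb,preprint]{revtex4}\n\begin{document}\n\title{An Illustrative Title}\n\author{An Author}\n\affiliation{An Institution}\n\date{\today}\n\begin{abstract}\nText of abstract.\n\end{abstract}\n\maketitle\n\section{Introduction}\nBody of the paper goes here.\n\end{document}\n\end{verbatim}\nThe essential point is the ordering: the \env{abstract} environment precedes \cmd\maketitle, and \cmd\maketitle\ precedes the body.\n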
\n\n\\subsection{Section headings}\n\nSection headings are input as in \\LaTeX.\nThe output is similar, with a few extra features.\n\nFour levels of headings are available in \\revtex{}:\n\\begin{quote}\n\\cmd\\section\\marg{title text}\\\\\n\\cmd\\subsection\\marg{title text}\\\\\n\\cmd\\subsubsection\\marg{title text}\\\\\n\\cmd\\paragraph\\marg{title text}\n\\end{quote}\n\nUse the starred form of the command to suppress the automatic numbering; e.g.,\n\\begin{verbatim}\n\\section*{Introduction}\n\\end{verbatim}\n\nTo label a section heading for cross referencing, best practice is to\nplace the \\cmd\\label\\marg{key} within the argument specifying the heading:\n\\begin{verbatim}\n\\section{\\label{sec:intro}Introduction}\n\\end{verbatim}\n\nIn the some journal substyles, such as those of the APS,\nall text in the \\cmd\\section\\ command is automatically set uppercase.\nIf a lowercase letter is needed, use \\cmd\\lowercase\\aarg{x}.\nFor example, to use ``He'' for helium in a \\cmd\\section\\marg{title text} command, type\n\\verb+H+\\cmd\\lowercase\\aarg{e} in \\marg{title text}.\n\nUse \\cmd\\protect\\verb+\\\\+ to force a line break in a section heading.\n(Fragile commands must be protected in section headings, captions, and\nfootnotes and \\verb+\\\\+ is a fragile command.)\n\n\\subsection{Paragraphs and General Text}\n\nParagraphs always end with a blank input line. Because \\TeX\\\nautomatically calculates linebreaks and word hyphenation in a\nparagraph, it is not necessary to force linebreaks or hyphenation. Of\ncourse, compound words should still be explicitly hyphenated, e.g.,\n``author-prepared copy.''\n\nUse directional quotes for quotation marks around quoted text\n(\\texttt{``xxx''}), not straight double quotes (\\texttt{\"xxx\"}).\nFor opening quotes, use one or two backquotes; for closing quotes,\nuse one or two forward quotes (apostrophes).\n\n\\subsection{One-column vs. 
two-column}\\label{sec:widetext}\n\nOne of the hallmarks of \\textit{Physical Review} is its two-column\nformatting and so one of the \\revtex~4 design goals is to make it easier to\nacheive the \\textit{Physical Review} look and feel. In particular, the\n\\classoption{twocolumn} option will take care of formatting the front matter\n(including the abstract) as a single column. \\revtex~4 has its own\nbuilt-in two-column formatting macros to provide well-balanced columns\nas well as reasonable control over the placement of floats in either\none- or two-column modes.\n\nOccasionally it is necessary to change the formatting from two-column to\none-column to better accomodate very long equations that are more\neasily read when typeset to the full width of the page. This is\naccomplished using the \\env{widetext} environment:\n\\begin{verbatim}\n\\begin{widetext}\nlong equation goes here\n\\end{widetext}\n\\end{verbatim}\nIn two-column mode, this will temporarily return to one-column mode,\nbalancing the text before the environment into two short columns, and\nreturning to two-column mode after the environment has\nfinished. \\revtex~4 will also add horizontal rules to guide the\nreader's eye through what may otherwise be a confusing break in the\nflow of text. The\n\\env{widetext} environment has no effect on the output under the \n\\classoption{preprint} class option because this already uses\none-column formatting.\n\nUse of the \\env{widetext} environment should be restricted to the bare\nminimum of text that needs to be typeset this way. However short pieces\nof paragraph text and\/or math between nearly contiguous wide equations\nshould be incorporated into the surrounding wide sections.\n\nLow-level control over the column grid can be accomplished with the\n\\cmd\\onecolumngrid\\ and \\cmd\\twocolumngrid\\ commands. Using these, one\ncan avoid the horizontal rules added by \\env{widetext}. These commands\nshould only be used if absolutely necessary. 
Wide figures and tables\nshould be accommodated using the proper \verb+*+ environments.\n\n\subsection{Cross-referencing}\label{sec:xrefs}\n\n\revtex{} inherits the \LaTeXe\ features for labeling and cross-referencing\nsection headings, equations, tables, and figures. This section\ncontains a simplified explanation of these cross-referencing features.\nThe proper usage in the context of section headings, equations,\ntables, and figures is discussed in the appropriate sections.\n\nCross-referencing depends upon the use of ``tags,'' which are defined by\nthe user. The \cmd\label\marg{key} command is used to identify tags for\n\revtex. Tags are strings of characters that serve to label section\nheadings, equations, tables, and figures that replace explicit,\nby-hand numbering.\n\nFiles that use cross-referencing (and almost all manuscripts do)\nneed to be processed through \revtex\ at least twice to\nensure that the tags have been properly linked to appropriate numbers.\nIf any tags are added in subsequent editing sessions, \n\LaTeX{} will display a warning message in the log file that ends with\n\texttt{... Rerun to get cross-references right}.\nRunning the file through \revtex\ again (possibly more than once) will\nresolve the cross-references. If the error message persists, check\nthe labels; the same \marg{key} may have been used to label more than one\nobject.\n\nAnother \LaTeX\ warning is \texttt{There were undefined references},\nwhich indicates the use of a key in a \cmd\ref\ that never appears\nin a \cmd\label\ statement.\n\n\revtex{} performs autonumbering exactly as in standard \LaTeX.\nWhen the file is processed for the first time,\n\LaTeX\ creates an auxiliary file (with the \file{.aux} extension) that \nrecords the value of each \meta{key}. Each subsequent run retrieves\nthe proper number from the auxiliary file and updates the auxiliary\nfile. 
At the end of each run, any change in the value of a \meta{key}\nproduces a \LaTeX\ warning message.\n\nNote that with footnotes appearing in the bibliography, extra passes\nof \LaTeX\ may be needed to resolve all cross-references. For\ninstance, putting a \cmd\cite\ inside a \cmd\footnote\ will require at\nleast three passes.\n\nUsing the \classname{hyperref} package to create hyperlinked PDF files\nwill cause reference ranges to be expanded to list every\nreference in the range. This behavior can be avoided by using the\n\classname{hypernat} package available from \url{www.ctan.org}.\n\n\subsection{Acknowledgments}\nUse the \env{acknowledgments} environment for an acknowledgments\nsection. Depending on the journal substyle, this element may be\nformatted as an unnumbered section title \textit{Acknowledgments} or\nsimply as a paragraph. Please note the spelling of\n``acknowledgments''.\n\begin{verbatim}\n\begin{acknowledgments}\nThe authors would like to thank...\n\end{acknowledgments}\n\end{verbatim}\n\n\subsection{Appendices}\nThe \cmd\appendix\ macro signals that all following sections are\nappendices, so that subsequent \cmd\section\ commands are numbered as\nappendices.\n\section{Introduction}\nThis document gives a brief summary of how \revtex~4 is different from\nwhat authors may already be familiar with. The two primary design\ngoals for \revtex~4 are to 1) move to \LaTeXe\ and 2) improve the\nmarkup so that information can be more reliably extracted for the\neditorial and production processes. Both of these goals require that\nauthors comfortable with earlier versions of \revtex\ change their\nhabits. In addition, authors may already be familiar with the standard\n\classname{article.cls} in \LaTeXe. \revtex~4 differs in some\nimportant ways from this class as well. For more complete\ndocumentation on \revtex~4, see the main \textit{\revtex~4 Author's\nGuide}. The most important changes are in the markup of the front\nmatter (title, authors, affiliations, abstract, etc.). 
Please see\nSec.~\ref{sec:front}.\n\n\section{Version of \LaTeX}\nThe most obvious difference between \revtex~4 and \revtex~3 is that\n\revtex~4 works solely with \LaTeXe; it is not usable as a \LaTeX2.09 package.\nFurthermore, \revtex~4 requires an up-to-date \LaTeX\ installation\n(1996\/06\/01 or later); its use under older versions is not supported.\n\n\section{Class Options and Defaults}\nMany of the class options in \revtex~3 have been retained in\n\revtex~4. However, the default behavior for these options can be\ndifferent than in \revtex~3. Currently, there is only one society\noption, \classoption{aps}, and this is the default. Furthermore, the\nselection of a journal (such as \classoption{prl}) will automatically\nset the society as well (this will be true even after other societies\nare added).\n\nIn \revtex~3, it was necessary to invoke the \classoption{floats}\noption, but this is the default for \classoption{aps} journals in\n\revtex~4. \revtex~4 introduces two new class options,\n\classoption{endfloats} and \classoption{endfloats*}, for moving floats\nto the end of the paper.\n\nThe preamble commands \cmd{\draft} and \cmd{\tighten} have been replaced\nwith new class options \classoption{draft} and\n\classoption{tightenlines}, respectively. The \cmd{\preprint} command\nis now used only for specifying institutional report numbers (typeset\nin the upper-righthand corner of the first page); it no longer\ninfluences whether PACS numbers are displayed below the abstract. PACS\ndisplay is controlled by the \classoption{showpacs} and\n\classoption{noshowpacs} (default) class options.\n\nPaper size options (\classoption{letter}, \classoption{a4paper}, etc.)\nwork in \revtex~4. 
The text ``Typeset by \revtex'' no longer appears\nby default; the option \classoption{byrevtex} will place this text in\nthe lower-lefthand corner of the first page.\n\n\section{One- and Two-column formatting}\n\n\revtex~4 has excellent support for achieving the two-column\nformatting in the \textit{Physical~Review} and \textit{Reviews of\nModern Physics} styles. It will balance the columns\nautomatically. Whereas \revtex~3 had the \cmd{\widetext} and\n\cmd{\narrowtext} commands for switching between one- and two-column\nmodes, \revtex~4 simply has a \env{widetext} environment,\n\envb{widetext} \dots \enve{widetext}. One-column formatting can be\nspecified by choosing either the \classoption{onecolumn} or\n\classoption{preprint} class option (the \revtex~3 option\n\classoption{manuscript} no longer exists). Two-column formatting is\nthe default for most journal styles, but can be specified with the\n\classoption{twocolumn} option. Note that the spacing for\n\classoption{preprint} is now set to 1.5, rather than full\ndouble-spacing. The \classoption{tightenlines} option can be used to\nreduce this to single spacing.\n\n\n\section{Front Matter Markup}\n\label{sec:front}\n\n\revtex~4 has substantially changed how the front matter for an article\nis marked up. These are the most significant differences between\n\revtex~4 and other systems for typesetting manuscripts. It is\nessential that authors new to \revtex~4 be familiar with these changes.\n\n\subsection{Authors, Affiliations, and Author Notes}\n\revtex~4 has substantially changed the markup of author names,\naffiliations, and author notes (footnotes giving additional\ninformation about the author such as a permanent address or an email\naddress).\n\begin{itemize}\n\item Each author name should appear separately in\nindividual \cmd\author\ macros. 
\n\n\\item Email addresses should be marked up using the \\cmd\\email\\ macro.\n\n\\item Alternative affiliation information should be marked up using\nthe \\cmd\\altaffiliation\\ macro.\n\n\\item URLs for author home pages can be specified with a\n\\cmd\\homepage\\ macro.\n\n\\item The \\cmd\\thanks\\ macro should only be used if one of the above\ndon't apply.\n\n\\item \\cmd{\\email}, \\cmd{\\homepage}, \\cmd{\\altaffiliation}, and\n\\cmd{\\thanks} commands are grouped together under a single footnote for\neach author. These footnotes can either appear at the bottom of the\nfirst page of the article or as the first entries in the\nbibliography. The journal style controls this placement, but it may be\noverridden by using the class options \\classoption{bibnotes} and\n\\classoption{nobibnotes}. Note that these footnotes are treated\ndifferently than the other footnotes in the article.\n\n\\item The grouping of authors by affiliations is accomplished\nautomatically. Each affiliation should be in its own\n\\cmd{\\affiliation} command. Multiple \\cmd{\\affiliation},\n\\cmd{\\email}, \\cmd{\\homepage}, \\cmd{\\altaffiliation}, and \\cmd{\\thanks}\ncommands can be applied to each author. The macro \\cmd\\and\\ has been\neliminated.\n\n\\item \\cmd\\affiliation\\ commmands apply to all previous authors that\ndon't have an affiliation already declared. 
Furthermore, for any\nparticular author, the \cmd\affiliation\ must follow any \cmd{\email},\n\cmd{\homepage}, \cmd{\altaffiliation}, or \cmd{\thanks} commands for\nthat author.\n\n\item Footnote-style associations of authors with affiliations should\nnot be done via explicit superscripts; rather, the class option\n\classoption{superscriptaddress} should be used to accomplish this\nautomatically.\n\n\item A collaboration for a group of authors can be given using the\n\cmd\collaboration\ command.\n\n\end{itemize}\n\nTable~\ref{tab:front} summarizes some common pitfalls in moving from\n\revtex~3 to \revtex~4.\n\begin{table*}\n\begin{ruledtabular}\n\begin{tabular}{lll}\n\textbf{\revtex~3 Markup} & \textbf{\revtex~4 Markup} & \textbf{Explanation}\\\n& & \\\n\verb+\author{Author One and Author Two}+ & \verb+\author{Author One}+ & One name per\\\n& \verb+\author{Author Two}+ & \verb+\author+ \\\n& & \\\n\verb+\author{Author One$^{1}$}+ & \verb+\author{Author One}+& Use \classoption{superscriptaddress}\\\n\dots &\dots & class option \\\n\verb+\address{$^{1}$APS}+ &\verb+\affiliation{APS}+ & \\\n& & \\\n\verb+\thanks{Permanent address...}+ & \verb+\altaffiliation{}+& Use most\nspecific macro \\\n\verb+\thanks{Electronic address: user@domain.edu}+ &\n\verb+\email{user@domain.edu}+& available\\\n\verb+\thanks{http:\/\/publish.aps.org\/}+ &\n\verb+\homepage{http:\/\/publish.aps.org\/}+& \\\n\end{tabular}\n\end{ruledtabular}\n\caption{Common mistakes in marking up the front matter}\n\label{tab:front}\n\end{table*}\n\n\n\subsection{Abstracts}\n\revtex~4, like \revtex~3, uses the \env{abstract} environment\n\envb{abstract} \dots \enve{abstract} for the abstract. The\n\env{abstract} environment must appear before the \cmd{\maketitle}\ncommand in \revtex~4. The abstract will be formatted\nappropriately for either one-column (preprint) or two-column\nformatting. 
In particular, in the two-column case, the abstract will\nautomatically be placed in a single column that spans the width of the\npage. It is unnecessary to use a \cmd{\minipage} or any other macro to\nachieve this result.\n\n\n\section{Citations and References}\n\n\revtex~4 uses the same \cmd{\cite}, \cmd{\ref}, and \cmd{\bibitem}\ncommands as standard \LaTeX\ and \revtex~3. Citation handling is\nbased upon Patrick Daly's \classname{natbib} package. The\n\env{references} environment is no longer used. Instead, use the\nstandard \LaTeXe\ environment \env{thebibliography}.\n\nTwo new \BibTeX\ files have been included with \revtex~4,\n\file{apsrev.bst} and \file{apsrmp.bst}. These will format references\nin the style of \textit{Physical Review} and \textit{Reviews of Modern\nPhysics}, respectively. In addition, these \BibTeX\ styles\nautomatically apply a special macro \cmd{\bibinfo} to each element of the\nbibliography to make it easier to extract information for use in the\neditorial and production processes. Authors are strongly urged to use\n\BibTeX\ to manage their bibliographies so that the \cmd{\bibinfo}\ndirectives will be automatically included. Other bibliography styles\ncan be specified by using the \cmd\bibliographystyle\ command, but\nunlike standard \LaTeXe, you must give this command \emph{before} the\n\envb{document} statement.\n\nPlease note that the package \classname{cite.sty} is not needed with\n\revtex~4 and is in fact incompatible with it.\n\n\section{Footnotes and Tablenotes}\n\label{sec:foot}\n\n\revtex~4 uses the standard \cmd{\footnote} macro for\nfootnotes. Footnotes can either appear on the bottom of the page on\nwhich they occur or they can appear as entries at the end of the\nbibliography. 
As with author notes, the journal style option controls\nthe placement; however, this can be overridden with the class options\n\classoption{footinbib} and \classoption{nofootinbib}.\n\nWithin a table, the \cmd{\footnote} command behaves differently. Footnotes\nappear at the bottom of the table. \cmd{\footnotemark} and\n\cmd{\footnotetext} are also available within the table environment so\nthat multiple table entries can share the same footnote text. There\nis no longer a need to use the \cmd{\tablenote}, \cmd{\tablenotemark},\nand \cmd{\tablenotetext} macros.\n\n\section{Section Commands}\n\nThe title in a \cmd\section\marg{title} command will be automatically\nuppercased in \revtex~4. To prevent a particular letter from being\nuppercased, enclose it in curly braces.\n\n\section{Figures}\n\nFigures should be enclosed within either a \env{figure} or \env{figure*}\nenvironment (the latter will cause the figure to span the full width\nof the page in two-column mode). \LaTeXe\ has two convenient packages\nfor including the figure file itself: \classname{graphics} and\n\classname{graphicx}. These two packages both define a macro\n\cmd{\includegraphics} which calls in the figure. They differ in how\narguments for rotation, translation, and scaling are specified. The\npackage \classname{epsfig} has been re-implemented to use the\n\classname{graphicx} package. The package \classname{epsfig} provides\nan interface similar to that under the \revtex~3 \classoption{epsf}\nclass option. Authors should use these standard\n\LaTeXe\ packages rather than some other alternative.\n\n\section{Tables}\n\nShort tables should be enclosed within either a \env{table} or \env{table*}\nenvironment (the latter will cause the table to span the full width\nof the page in two-column mode). The heart of the table is the\n\env{tabular} environment. This will behave for the most part as in\nstandard \LaTeXe. 
Note that \\revtex~4 no longer automatically adds\ndouble (Scotch) rules around tables. Nor does the \\env{tabular}\nenvironment set various table parameters as before. Instead, a new\nenvironment \\env{ruledtabular} provides this functionality. This\nenvironment should surround the \\env{tabular} environment:\n\\begin{verbatim}\n\\begin{table}\n\\caption{...}\n\\label{tab:...}\n\\begin{ruledtabular}\n\\begin{tabular}\n...\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\\end{verbatim}\n\nUnder \\revtex~3, tables automatically break across pages. \\revtex~4\nprovides some of this functionality. However, this requires adding the\ntable a float placement option of [H] (meaning put the table\n``here'') to the \\envb{table} command.\n\nLong tables are more robustly handled by using the\n\\classname{longtable.sty} package included with the standard \\LaTeXe\\\ndistribution (put \\verb+\\usepackage{longtable}+ in the preamble). This\npackage gives precise control over the layout of the table. \\revtex~4\ngoes out of its way to provide patches so that the \\env{longtable}\nenvironment will work within a two-column format. A new\n\\env{longtable*} environment is also provided for long tables that are\ntoo wide for a narrow column. (Note that the \\env{table*} and\n\\env{longtable*} environments should always be used rather than\nattempting to use the \\env{widetext} environment.)\n\nTo create tables with columns of numbers aligned on decimal points,\nload the standard \\LaTeXe\\ \\classname{dcolumn} package and use the\n\\verb+d+ column specifier. The content of each cell in the column is\nimplicitly in math mode: Use of math delimiters (\\verb+$+) is unnecessary\nin a \\verb+d+ column.\n\nFootnotes within a table can be specified with the\n\\cmd{\\footnote} command (see Sec.~\\ref{sec:foot}). 
\n\n\\section{Font selection}\n\nThe largest difference between \\revtex~3 and \\revtex~4 with respect to\nfonts is that \\revtex~4 allows one use the \\LaTeXe\\ font commands such\nas \\cmd{\\textit}, \\cmd{\\texttt}, \\cmd{\\textbf} etc. These commands\nshould be used in place of the basic \\TeX\/\\LaTeX\\ 2.09 font commands\nsuch as \\cmd{\\it}, \\cmd{\\tt}, \\cmd{\\bf}, etc. The new font commands\nbetter handle subtleties such as italic correction and scaling in\nsuper- and subscripts.\n\n\\section{Math and Symbols}\n\n\\revtex~4 depends more heavily on packages from the standard \\LaTeXe\\\ndistribution and AMS-\\LaTeX\\ than \\revtex~3 did. Thus, \\revtex~4 users\nshould make sure their \\LaTeXe\\ distributions are up to date and they\nshould install AMS-\\LaTeX\\ 2.0 as well. In general, if any fine control of\nequation layout, special math symbols, or other specialized math\nconstructs are needed, users should look to the \\classname{amsmath}\npackage (see the AMS-\\LaTeX\\ documentation).\n\n\\revtex~4 provides a small number of additional diacritics, symbols,\nand bold parentheses. 
Table~\\ref{tab:revsymb} summarizes this.\n\n\\begin{table}\n\\caption{Special \\revtex~4 symbols, accents, and boldfaced parentheses \ndefined in \\file{revsymb.sty}}\n\\label{tab:revsymb}\n\\begin{ruledtabular}\n\\begin{tabular}{ll|ll}\n\\cmd\\lambdabar & $\\lambdabar$ &\\cmd\\openone & $\\openone$\\\\\n\\cmd\\altsuccsim & $\\altsuccsim$ & \\cmd\\altprecsim & $\\altprecsim$ \\\\\n\\cmd\\alt & $\\alt$ & \\cmd\\agt & $\\agt$ \\\\\n\\cmd\\tensor\\ x & $\\tensor x$ & \\cmd\\overstar\\ x & $\\overstar x$ \\\\\n\\cmd\\loarrow\\ x & $\\loarrow x$ & \\cmd\\roarrow\\ x & $\\roarrow x$ \\\\\n\\cmd\\biglb\\ ( \\cmd\\bigrb ) & $\\biglb( \\bigrb)$ &\n\\cmd\\Biglb\\ ( \\cmd\\Bigrb )& $\\Biglb( \\Bigrb)$ \\\\\n& & \\\\\n\\cmd\\bigglb\\ ( \\cmd\\biggrb ) & $\\bigglb( \\biggrb)$ &\n\\cmd\\Bigglb\\ ( \\cmd\\Biggrb\\ ) & $\\Bigglb( \\Biggrb)$ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\nHere is a partial list of the more notable changes between \\revtex~3\nand \\revtex~4 math:\n\\begin{itemize}\n\\item Bold math characters should now be handle via the standard\n\\LaTeXe\\ \\classname{bm} package (use \\cmd{\\bm} instead of \\cmd{\\bbox}).\n\\cmd{\\bm} will handle Greek letters and other symbols.\n\n\\item Use the class options \\classoption{amsmath},\n\\classoption{amsfonts} and \\classoption{amssymb} to get even more math\nfonts and symbols. \\cmd{\\mathfrak} and \\cmd{\\mathbb} will, for instance, give\nFraktur and Blackboard Bold symbols.\n\n\\item Use the \\classoption{fleqn} class option for making equation\nflush left or right. 
\\cmd{\\FL} and \\cmd{\\FR} are no longer provided.\n\n\\item In place of \\cmd{\\eqnum}, load the \\classname{amsmath} package\n[\\verb+\\usepackage{amsmath}+] and use \\cmd{\\tag}.\n\n\\item In place of \\cmd{\\case}, use \\cmd{\\textstyle}\\cmd{\\frac}.\n\n\\item In place of the \\env{mathletters} environment, load the\n\\classname{amsmath} package and use \\env{subequations} environment.\n\n\\item In place of \\cmd{\\slantfrac}, use \\cmd{\\frac}.\n\n\\item The macros \\cmd{\\corresponds}, \\cmd{\\overdots}, and\n\\cmd{\\overcirc} have been removed. See Table~\\ref{tab:obsolete}.\n\n\\end{itemize}\n\n\\section{Obsolete \\revtex~3.1 commands}\n\nTable~\\ref{tab:obsolete} summarizes more differences between \\revtex~4\nand \\revtex~3, particularly which \\revtex~3 commands are now obsolete.\n\n\\begin{table*}\n\\caption{Differences between \\revtex~3.1 and \\revtex~4\nmarkup}\\label{tab:diff31}\n\\label{tab:obsolete}\n\\begin{ruledtabular}\n\\begin{tabular}{lp{330pt}}\n\\textbf{\\revtex~3.1 command}&\\textbf{\\revtex~4 replacement}\n\\lrstrut\\\\\n\\cmd\\documentstyle\\oarg{options}\\aarg{\\classname{revtex}}&\\cmd\\documentclass\\oarg{options}\\aarg{\\classname{revtex4}}\n\\\\\noption \\classoption{manuscript}& \\classoption{preprint}\n\\\\\n\\cmd\\tighten\\ preamble command & \\classoption{tightenlines} class option\n\\\\\n\\cmd\\draft\\ preamble command & \\classoption{draft} class option\n\\\\\n\\cmd\\author & \\cmd\\author\\marg{name} may appear\nmultiple times; each signifies a new author name.\\\\\n & \\cmd\\collaboration\\marg{name}:\nCollaboration name (should appear after last \\cmd\\author)\\\\\n & \\cmd\\homepage\\marg{URL}: URL for preceding author\\\\\n & \\cmd\\email\\marg{email}: email\naddress for preceding author\\\\\n & \\cmd{\\altaffiliation}: alternate\naffiliation for preceding \\cmd\\author\\\\\n\\cmd\\thanks & \\cmd\\thanks, but use only for\ninformation not covered by \\cmd{\\email}, \\cmd{\\homepage}, or 
\\cmd{\\altaffilitiation}\\\\\n\\cmd\\and & obsolete, remove this command\\\\\n\\cmd\\address & \\cmd\\affiliation\\marg{institution}\\ gives the affiliation for the group of authors above\\\\\n & \\cmd\\affiliation\\oarg{note} lets you specify a footnote to this institution\\\\\n & \\cmd\\noaffiliation\\ signifies that the above authors have no affiliation\\\\\n\n\\cmd\\preprint & \\cmd\\preprint\\marg{number} can appear multiple times, and must precede \\cmd\\maketitle\\\\\n\\cmd\\pacs & \\cmd\\pacs\\ must precede \\cmd\\maketitle\\\\\n\\env{abstract} environment & \\env{abstract} environment must precede \\cmd\\maketitle\\\\\n\\cmd\\wideabs & obsolete, remove this command\\\\\n\\cmd\\maketitle & \\cmd\\maketitle\\ must follow\n\\emph{all} front matter data commands\\\\\n\\cmd\\narrowtext & obsolete, remove this command\\\\\n\\cmd\\mediumtext & obsolete, remove this command\\\\\n\\cmd\\widetext & obsolete, replace with \\env{widetext} environment\\\\\n\\cmd\\FL & obsolete, remove this command\\\\\n\\cmd\\FR & obsolete, remove this command\\\\\n\\cmd\\eqnum & replace with \\cmd\\tag, load \\classname{amsmath}\\\\\n\\env{mathletters} & replace with \\env{subequations}, load\n\\classname{amsmath}\\\\\n\\env{tabular} environment & No longer puts in doubled-rules. 
Enclose \\env{tabular} in \\env{ruledtabular} to get old behavior.\\\\\n\\env{quasitable} environment & obsolete, \\env{tabular} environment no longer\nputs in rules\\\\\n\\env{references} environment & replace with \\env{thebibliography}\\verb+{}+\\\\\n\\cmd\\case & replace with \\cmd\\textstyle\\cmd\\frac\\\\\n\\cmd\\slantfrac & replace with \\cmd\\frac\\\\\n\\cmd\\tablenote & replace with \\cmd\\footnote\\\\\n\\cmd\\tablenotemark & replace with \\cmd\\footnotemark\\\\\n\\cmd\\tablenotetext & replace with \\cmd\\footnotetext\\lrstrut\\\\\n\\cmd\\overcirc & Use standard \\LaTeXe\\ \\cmd\\mathring\\ \\\\\n\\cmd\\overdots & Use \\cmd\\dddot\\ with \\classoption{amsmath}\\\\\n\\cmd\\corresponds & Use \\cmd\\triangleq\\ with \\classoption{amssymb}\\\\\n\\classoption{epsf} class option & \\verb+\\usepackage{epsfig}+\\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table*}\n\n\n\\section{Converting a \\revtex~3.1 Document to \\revtex~4}\\label{sec:conv31}%\n\n\\revtex~3 documents can be converted to \\revtex~4 rather\nstraightforwardly. The following checklist covers most of the major\nsteps involved.\n\n\\begin{itemize}\n\\item Change \\cmd\\documentstyle\\verb+{revtex}+ to\n\\cmd\\documentclass\\verb+{revtex4}+, and run the document under\n\\LaTeXe\\ instead of \\LaTeX2.09.\n\n\\item\nReplace the \\cmd\\draft\\ command with the \\classoption{draft} class option.\n\n\\item\nReplace the \\cmd\\tighten\\ command with the \\classoption{tightenlines}\nclass option.\n\n\\item\nFor each \\cmd\\author\\ command, split the multiple authors into\nindividual \\cmd\\author\\ commands. 
Remove any instances of \\cmd\\and.\n\n\\item For superscript-style associations between authors and\naffiliations, remove explicit superscripts and use the\n\\classoption{superscriptaddress} class option.\n\n\\item\nUse \\cmd\\affiliation\\ instead of \\cmd\\address.\n\n\\item\nPut \\cmd\\maketitle\\ after the \\env{abstract} environment and any\n\\cmd\\pacs\\ commands.\n\n\\item If double-ruled table borders are desired, enclose \\env{tabular}\nenviroments in \\env{ruledtabular} environments.\n\n\\item\nConvert long tables to \\env{longtable}, and load the\n\\classname{longtable} package. Alternatively, give the \\env{table}\nan [H] float placement parameter so that the table will break automatically.\n\n\\item\nReplace any instances of the \\cmd\\widetext\\ and \\cmd\\narrowtext\\\ncommands with the \\env{widetext} environment.\nUsually, the \\envb{widetext} statement will replace the \\cmd\\widetext\\\ncommand, and the \\enve{widetext} statement replaces the matching\n\\cmd\\narrowtext\\ command.\n\nNote in this connection that due to a curious feature of \\LaTeX\\\nitself, \\revtex~4 having a \\env{widetext} environment means that it\nalso has a definition for the \\cmd\\widetext\\ command, even though the\nlatter cammand is not intended to be used in your document.\nTherefore, it is particularly important to remove\nall \\cmd\\widetext\\ commands when converting to \\revtex~4.\n\n\\item\nRemove all obsolete commands: \\cmd\\FL, \\cmd\\FR, \\cmd\\narrowtext, and\n\\cmd\\mediumtext\\ (see Table~\\ref{tab:diff31}).\n\n\\item\nReplace \\cmd\\case\\ with \\cmd\\frac. If a fraction needs to be set\nin text style despite being in a display equation, use the\nconstruction \\cmd\\textstyle\\cmd\\frac. 
Note that \\cmd\\frac\\ does not\nsupport the syntax \\cmd\\case\\verb+1\/2+.\n\n\\item\nReplace \\cmd\\slantfrac\\ with \\cmd\\frac.\n\n\\item\nChange \\cmd\\frak\\ to \\cmd\\mathfrak\\marg{char}\\index{Fraktur} and\n\\cmd\\Bbb\\ to \\cmd\\mathbb\\marg{char}\\index{Blackboard Bold}, and invoke\none of the class options \\classoption{amsfonts} or\n\\classoption{amssymb}.\n\n\\item\nReplace environment \\env{mathletters} with environment\n\\env{subequations} and load the \\classname{amsmath} package.\n\n\\item\nReplace \\cmd\\eqnum\\ with \\cmd\\tag\\ and load the \\classname{amsmath} package.\n\n\\item\nReplace \\cmd\\bbox\\ with \\cmd\\bm\\ and load the \\classname{bm} package.\n\n\\item\nIf using the \\cmd\\text\\ command, load the \\classname{amsmath} package.\n\n\\item\nIf using the \\verb+d+ column specifier in \\env{tabular} environments,\nload the \\classname{dcolumn} package. Under \\classname{dcolumn}, the\ncontent of each \\verb+d+ column cell is implicitly in math mode:\nremove any \\verb+$+ math delimiters appearing in cells in a \\verb+d+\ncolumn.\n\n\\item\nReplace \\cmd\\tablenote\\ with \\cmd\\footnote, \\cmd\\tablenotemark\\ with\n\\cmd\\footnotemark, and \\cmd\\tablenotetext\\ with \\cmd\\footnotetext.\n\n\\item\nReplace \\envb{references} with \\envb{thebibliography}\\verb+{}+;\n\\enve{references} with \\enve{thebibliography}.\n\\end{itemize}\n\\end{document}\n\n\\section{}+, \\verb+\\subsection{}+,\n\\verb+\\subsubsection{}+ & Start a new section or\nsubsection.\\\\\n\\verb+\\section*{}+ & Start a new section without a number.\\\\\n\\verb+\n\\section{\\label{sec:level1}First-level heading:\\protect\\\\ The line\nbreak was forced \\lowercase{via} \\textbackslash\\textbackslash}\n\nThis sample document demonstrates proper use of REV\\TeX~4 (and\n\\LaTeXe) in mansucripts prepared for submission to APS\njournals. 
Further information can be found in the REV\TeX~4\ndocumentation included in the distribution or available at\n\url{http:\/\/publish.aps.org\/revtex4\/}.\n\nWhen commands are referred to in this example file, they are always\nshown with their required arguments, using normal \TeX{} format. In\nthis format, \verb+#1+, \verb+#2+, etc. stand for required\nauthor-supplied arguments to commands. For example, in\n\verb+\section{#1}+ the \verb+#1+ stands for the title text of the\nauthor's section heading, and in \verb+\title{#1}+ the \verb+#1+\nstands for the title text of the paper.\n\nLine breaks in section headings at all levels can be introduced using\n\textbackslash\textbackslash. A blank input line tells \TeX\ that the\nparagraph has ended. Note that top-level section headings are\nautomatically uppercased. If a specific letter or word should appear in\nlowercase instead, you must escape it using \verb+\lowercase{#1}+ as\nin the word ``via'' above.\n\n\subsection{\label{sec:level2}Second-level heading: Formatting}\n\nThis file may be formatted in both the \texttt{preprint} and\n\texttt{twocolumn} styles. \texttt{twocolumn} format may be used to\nmimic final journal output. Either format may be used for submission\npurposes; however, for peer review and production, APS will format the\narticle using the \texttt{preprint} class option. Hence, it is\nessential that authors check that their manuscripts format acceptably\nunder \texttt{preprint}. Manuscripts submitted to APS that do not\nformat correctly under the \texttt{preprint} option may be delayed in\nboth the editorial and production processes.\n\nThe \texttt{widetext} environment will make the text the width of the\nfull page, as on page~\pageref{eq:wideeq}. (Note the use of\n\verb+\pageref{#1}+ to get the page number right automatically.) The\nwidth-changing commands only take effect in \texttt{twocolumn}\nformatting. 
They have no effect if \texttt{preprint} formatting is chosen\ninstead.\n\n\subsubsection{\label{sec:level3}Third-level heading: References and Footnotes}\nReference citations in text use the commands \verb+\cite{#1}+ or\n\verb+\onlinecite{#1}+. \verb+#1+ may contain letters and numbers.\nThe reference itself is specified by a \verb+\bibitem{#1}+ command\nwith the same argument as the \verb+\cite{#1}+ command.\n\verb+\bibitem{#1}+ commands may be crafted by hand or, preferably,\ngenerated by using Bib\TeX. REV\TeX~4 includes Bib\TeX\ style files\n\verb+apsrev.bst+ and \verb+apsrmp.bst+ appropriate for\n\textit{Physical Review} and \textit{Reviews of Modern Physics},\nrespectively. REV\TeX~4 will automatically choose the style\nappropriate for the journal specified in the document class\noptions. This sample file demonstrates the basic use of Bib\TeX\\\nthrough the use of the \verb+\bibliography+ command which references the\n\verb+apssamp.bib+ file. Running Bib\TeX\ (typically \texttt{bibtex\napssamp}) after the first pass of \LaTeX\ produces the file\n\verb+apssamp.bbl+ which contains the automatically formatted\n\verb+\bibitem+ commands (including extra markup information via\n\verb+\bibinfo+ commands). If not using Bib\TeX, the\n\verb+thebibliography+ environment should be used instead.\n\nTo cite bibliography entries, use the \verb+\cite{#1}+ command. Most\njournal styles will display the corresponding number(s) in square\nbrackets: \cite{feyn54,witten2001}. To avoid the square brackets, use\n\verb+\onlinecite{#1}+: Refs.~\onlinecite{feyn54} and\n\onlinecite{witten2001}. REV\TeX\ ``collapses'' lists of\nconsecutive reference numbers where possible. We now cite everyone\ntogether \cite{feyn54,witten2001,epr}, and once again\n(Refs.~\onlinecite{epr,feyn54,witten2001}). 
Note that the references\nwere also sorted into the correct numerical order.\n\nWhen the \verb+prb+ class option is used, the \verb+\cite{#1}+ command\ndisplays the reference's number as a superscript rather than using\nsquare brackets. Note that the location of the \verb+\cite{#1}+\ncommand should be adjusted for the reference style: the superscript\nreferences in \verb+prb+ style must appear after punctuation;\notherwise the reference must appear before any punctuation. This\nsample was written for the regular (non-\texttt{prb}) citation style.\nThe command \verb+\onlinecite{#1}+ in the \texttt{prb} style also\ndisplays the reference on the baseline.\n\nFootnotes are produced using the \verb+\footnote{#1}+ command. Most\nAPS journal styles put footnotes into the bibliography. REV\TeX~4 does\nthis as well, but instead of interleaving the footnotes with the\nreferences, they are listed at the end of the references\footnote{This\nmay be improved in future versions of REV\TeX.}. Because the correct\nnumbering of the footnotes must occur after the numbering of the\nreferences, an extra pass of \LaTeX\ is required in order to get the\nnumbering correct.\n\n\section{Math and Equations}\nInline math may be typeset using the \verb+$+ delimiters. Bold math\nsymbols may be achieved using the \verb+bm+ package and the\n\verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can\nbe typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and\nBlackboard (or open face or double struck) characters should be\ntypeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands\nrespectively. Both are supplied by the \texttt{amssymb} package. For\nexample, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and\n\verb+$\mathfrak{G}$+ gives $\mathfrak{G}$.\n\nIn \LaTeX\ there are many different ways to display equations, and a\nfew preferred ways are noted below. Displayed math will center by\ndefault. 
Use the class option \\verb+fleqn+ to flush equations left.\n\nBelow we have numbered single-line equations; this is the most common\ntype of equation in \\textit{Physical Review}:\n\\begin{eqnarray}\n\\chi_+(p)\\alt{\\bf [}2|{\\bf p}|(|{\\bf p}|+p_z){\\bf ]}^{-1\/2}\n\\left(\n\\begin{array}{c}\n|{\\bf p}|+p_z\\\\\npx+ip_y\n\\end{array}\\right)\\;,\n\\\\\n\\left\\{%\n \\openone234567890abc123\\alpha\\beta\\gamma\\delta1234556\\alpha\\beta\n \\frac{1\\sum^{a}_{b}}{A^2}%\n\\right\\}%\n\\label{eq:one}.\n\\end{eqnarray}\nNote the open one in Eq.~(\\ref{eq:one}).\n\nNot all numbered equations will fit within a narrow column this\nway. The equation number will move down automatically if it cannot fit\non the same line with a one-line equation:\n\\begin{equation}\n\\left\\{\n ab12345678abc123456abcdef\\alpha\\beta\\gamma\\delta1234556\\alpha\\beta\n \\frac{1\\sum^{a}_{b}}{A^2}%\n\\right\\}.\n\\end{equation}\n\nWhen the \\verb+\\label{#1}+ command is used [cf. input for\nEq.~(\\ref{eq:one})], the equation can be referred to in text without\nknowing the equation number that \\TeX\\ will assign to it. Just\nuse \\verb+\\ref{#1}+, where \\verb+#1+ is the same name that used in\nthe \\verb+\\label{#1}+ command.\n\nUnnumbered single-line equations can be typeset\nusing the \\verb+\\[+, \\verb+\\]+ format:\n\\[g^+g^+ \\rightarrow g^+g^+g^+g^+ \\dots ~,~~q^+q^+\\rightarrow\nq^+g^+g^+ \\dots ~. \\]\n\n\\subsection{Multiline equations}\n\nMultiline equations are obtained by using the \\verb+eqnarray+\nenvironment. 
Use the \\verb+\\nonumber+ command at the end of each line\nto avoid assigning a number:\n\\begin{eqnarray}\n{\\cal M}=&&ig_Z^2(4E_1E_2)^{1\/2}(l_i^2)^{-1}\n\\delta_{\\sigma_1,-\\sigma_2}\n(g_{\\sigma_2}^e)^2\\chi_{-\\sigma_2}(p_2)\\nonumber\\\\\n&&\\times\n[\\epsilon_jl_i\\epsilon_i]_{\\sigma_1}\\chi_{\\sigma_1}(p_1),\n\\end{eqnarray}\n\\begin{eqnarray}\n\\sum \\vert M^{\\text{viol}}_g \\vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}\n (N^2-1)\\nonumber \\\\\n & &\\times \\left( \\sum_{i